kashif / cem.md (last active September 18, 2024)

Cross Entropy Method

How do we solve the policy optimization problem, that is, maximize the total reward obtained under some parametrized policy?
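Written out explicitly (the notation here is the conventional one, not given in the snippet), the goal is to find policy parameters $\theta$ that maximize the expected return over trajectories generated by the policy:

$$\max_{\theta} \; \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \right]$$

where $\tau$ is a trajectory produced by following $\pi_\theta$ and $R(\tau)$ is its total (discounted) reward.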

Discounted future reward

To begin with, the total reward for an episode is the sum of all the rewards. If our environment is stochastic, we can never be sure we will get the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward, where the discount parameter, called the discount factor, lies between 0 and 1.
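In the usual notation (assumed here, not taken from the snippet), with discount factor $\gamma \in [0, 1]$ the discounted return from time step $t$ is

$$R_t = r_t + \gamma\, r_{t+1} + \gamma^2\, r_{t+2} + \dots = \sum_{k=0}^{\infty} \gamma^k\, r_{t+k}.$$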

A good strategy for an agent is to always choose the action that maximizes the (discounted) future reward. In other words, we want to maximize the expected reward per episode.
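The cross-entropy method named in the title attacks this by searching directly in parameter space: sample candidate parameter vectors from a Gaussian, evaluate each one, keep the top-scoring "elite" fraction, and refit the Gaussian to the elites. The sketch below is a minimal illustration of that loop; the function name `cem`, the `evaluate` callback, and all hyperparameter values are assumptions for the example, not taken from the gist.

```python
import numpy as np

def cem(evaluate, dim, n_iters=50, pop_size=64, elite_frac=0.2, init_std=1.0):
    """Cross-entropy method for policy search.

    `evaluate(theta)` must return the total (discounted) episode reward
    obtained with parameter vector `theta`.
    """
    mean = np.zeros(dim)
    std = np.full(dim, init_std)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iters):
        # Sample a population of candidate parameter vectors.
        thetas = mean + std * np.random.randn(pop_size, dim)
        rewards = np.array([evaluate(t) for t in thetas])
        # Keep the elite set: the samples with the highest reward.
        elites = thetas[np.argsort(rewards)[-n_elite:]]
        # Refit the sampling distribution to the elites.
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean
```

As a quick sanity check, calling `cem(lambda th: -np.sum((th - 3.0) ** 2), dim=5)` on this toy reward should drive the returned mean toward 3 in every coordinate.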

vicrucann / CMakeLists.txt (created June 14, 2016)

OpenSceneGraph + QOpenGLWidget - minimal example
cmake_minimum_required(VERSION 2.8.11)
project(qtosg)
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOMOC ON)
find_package(Qt5 REQUIRED COMPONENTS Core Gui OpenGL)
find_package(OpenSceneGraph REQUIRED COMPONENTS osgDB osgGA osgUtil osgViewer)
include_directories(${OPENSCENEGRAPH_INCLUDE_DIRS})
# The target below is added here for completeness; the source file name
# (qtosg.cpp) is a placeholder for the project's actual sources.
add_executable(${PROJECT_NAME} qtosg.cpp)
target_link_libraries(${PROJECT_NAME} ${OPENSCENEGRAPH_LIBRARIES} Qt5::Core Qt5::Gui Qt5::OpenGL)