How do we solve the policy optimization problem, that is, how do we maximize the total reward under some parametrized policy?
To begin with, the total reward for an episode is the sum of all the rewards collected along the way. If our environment is stochastic, we can never be sure that we will receive the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward instead:

R_t = r_t + γ r_{t+1} + γ² r_{t+2} + … = Σ_{k=0}^{∞} γ^k r_{t+k},

where the parameter γ is called the discount factor and is between 0 and 1.
A good strategy for an agent is therefore to always choose the action that maximizes the (discounted) future reward. In other words, we want to maximize the expected reward per episode.
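As a minimal sketch (not from the original text), the discounted future reward above can be computed for every step of an episode by working backwards through the per-step rewards; the function name discounted_returns and the example rewards are illustrative assumptions.

# Illustrative sketch: discounted return G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
def discounted_returns(rewards, gamma=0.99):
    """Return a list whose entry t is the discounted future reward from step t."""
    returns = [0.0] * len(rewards)
    running = 0.0
    # Work backwards using the recursion G_t = r_t + gamma * G_{t+1}.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a three-step episode with rewards 1, 0, 2 and gamma = 0.9
# G_2 = 2, G_1 = 0 + 0.9*2 = 1.8, G_0 = 1 + 0.9*1.8 = 2.62
print(discounted_returns([1.0, 0.0, 2.0], gamma=0.9))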
from nltk.corpus import wordnet as wn
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tag import PerceptronTagger
#from nltk import pos_tag, word_tokenize

# Pywsd's Lemmatizer: set up the stemmer and lemmatizer it relies on.
porter = PorterStemmer()
wnl = WordNetLemmatizer()
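The snippet above only constructs the tagging and lemmatization objects. As a hedged illustration (not part of the original file), here is one way they could be combined to lemmatize words using their POS tags; the penn2wordnet helper and the example sentence are hypothetical names introduced only for this sketch.

# Illustrative sketch only: combine the objects above to lemmatize tagged words.
tagger = PerceptronTagger()

def penn2wordnet(tag):
    """Map a Penn Treebank tag to a WordNet POS constant (default: noun). Hypothetical helper."""
    if tag.startswith('J'):
        return wn.ADJ
    if tag.startswith('V'):
        return wn.VERB
    if tag.startswith('R'):
        return wn.ADV
    return wn.NOUN

tokens = "the dogs are running faster".split()
for word, tag in tagger.tag(tokens):
    # Compare POS-aware lemmatization with plain Porter stemming.
    print(word, wnl.lemmatize(word, pos=penn2wordnet(tag)), porter.stem(word))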