Policy Gradient Loss function for PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch._jit_internal import weak_module, weak_script_method

@weak_module
class PolicyGradientLoss(nn.Module):
    """
    Multiplies an unreduced CrossEntropyLoss by a `q` vector.
    """
    def __init__(self):
        super(PolicyGradientLoss, self).__init__()
        # reduction='none' keeps the per-sample losses so each one
        # can be weighted by its corresponding q value before averaging
        self.cross_entropy_loss = nn.CrossEntropyLoss(reduction='none')

    @weak_script_method
    def forward(self, input_, target, q):
        # per-sample cross-entropy: -log softmax(input_)[target]
        cel = self.cross_entropy_loss(input_, target)
        # weight each sample's loss by q, then reduce to a scalar
        return torch.mean(cel * q)
Note: I use input_ to appease my editor, since input is a Python built-in. See https://stackoverflow.com/a/20670757/109618
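For context, here is a minimal usage sketch; it is my own illustration, not part of the gist, and the network, shapes, and optimizer below are assumed. With q holding returns or advantages, the computed quantity is mean_i(q_i * -log softmax(logits_i)[a_i]), i.e. a REINFORCE-style policy gradient objective.

# Hypothetical setup: a tiny policy network over 4-dim states and
# 2 discrete actions; all sizes and hyperparameters are illustrative.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = PolicyGradientLoss()

states = torch.randn(8, 4)            # batch of observations
actions = torch.randint(0, 2, (8,))   # actions taken (class indices)
returns = torch.randn(8)              # q: return/advantage per sample

logits = policy(states)               # unnormalized action scores
loss = loss_fn(logits, actions, returns)

optimizer.zero_grad()
loss.backward()
optimizer.step()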
I'm not yet clear on the meaning (or necessity) of the annotations @weak_module and @weak_script_method. They are used in PyTorch's own source code, but I don't know whether they matter in end-user code.