{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nReinforcement Learning (DQN) tutorial\n=====================================\n**Author**: `Adam Paszke <https://github.com/apaszke>`_\n\n\nThis tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent\non the CartPole-v0 task from the `OpenAI Gym <https://gym.openai.com/>`__.\n\n**Task**\n\nThe agent has to decide between two actions - moving the cart left or\nright - so that the pole attached to it stays upright. You can find an\nofficial leaderboard with various algorithms and visualizations at the\n`Gym website <https://gym.openai.com/envs/CartPole-v0>`__.\n\n.. figure:: /_static/img/cartpole.gif\n :alt: cartpole\n\n cartpole\n\nAs the agent observes the current state of the environment and chooses\nan action, the environment *transitions* to a new state, and also\nreturns a reward that indicates the consequences of the action. In this\ntask, the environment terminates if the pole falls over too far.\n\nThe CartPole task is designed so that the inputs to the agent are 4 real\nvalues representing the environment state (position, velocity, etc.).\nHowever, neural networks can solve the task purely by looking at the\nscene, so we'll use a patch of the screen centered on the cart as an\ninput. Because of this, our results aren't directly comparable to the\nones from the official leaderboard - our task is much harder.\nUnfortunately this does slow down the training, because we have to\nrender all the frames.\n\nStrictly speaking, we will present the state as the difference between\nthe current screen patch and the previous one. This will allow the agent\nto take the velocity of the pole into account from one image.\n\n**Packages**\n\n\nFirst, let's import needed packages. Firstly, we need\n`gym <https://gym.openai.com/docs>`__ for the environment\n(Install using `pip install gym`).\nWe'll also use the following from PyTorch:\n\n- neural networks (``torch.nn``)\n- optimization (``torch.optim``)\n- automatic differentiation (``torch.autograd``)\n- utilities for vision tasks (``torchvision`` - `a separate\n package <https://github.com/pytorch/vision>`__).\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gym\nimport math\nimport random\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom collections import namedtuple\nfrom itertools import count\nfrom PIL import Image\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nimport torchvision.transforms as T\n\n\nenv = gym.make('CartPole-v0').unwrapped\n\n# set up matplotlib\nis_ipython = 'inline' in matplotlib.get_backend()\nif is_ipython:\n from IPython import display\n\nplt.ion()\n\n# if gpu is to be used\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Replay Memory\n-------------\n\nWe'll be using experience replay memory for training our DQN. It stores\nthe transitions that the agent observes, allowing us to reuse this data\nlater. By sampling from it randomly, the transitions that build up a\nbatch are decorrelated. It has been shown that this greatly stabilizes\nand improves the DQN training procedure.\n\nFor this, we're going to need two classses:\n\n- ``Transition`` - a named tuple representing a single transition in\n our environment\n- ``ReplayMemory`` - a cyclic buffer of bounded size that holds the\n transitions observed recently. It also implements a ``.sample()``\n method for selecting a random batch of transitions for training.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"Transition = namedtuple('Transition',\n ('state', 'action', 'next_state', 'reward'))\n\n\nclass ReplayMemory(object):\n\n def __init__(self, capacity):\n self.capacity = capacity\n self.memory = []\n self.position = 0\n\n def push(self, *args):\n \"\"\"Saves a transition.\"\"\"\n if len(self.memory) < self.capacity:\n self.memory.append(None)\n self.memory[self.position] = Transition(*args)\n self.position = (self.position + 1) % self.capacity\n\n def sample(self, batch_size):\n return random.sample(self.memory, batch_size)\n\n def __len__(self):\n return len(self.memory)"
]
},
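{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick usage sketch (added here for illustration; the pushed values are throwaway placeholders, not real transitions) of the cyclic behaviour: once ``capacity`` is reached, ``push`` overwrites the oldest transition.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Throwaway illustration of ReplayMemory's cyclic buffer (not used in training).\ndemo = ReplayMemory(2)\ndemo.push('s0', 'a0', 's1', 0.0)\ndemo.push('s1', 'a1', 's2', 1.0)\ndemo.push('s2', 'a2', 's3', 1.0)   # overwrites the oldest entry ('s0', ...)\nprint(len(demo))                   # 2\nprint(demo.sample(1))              # one random Transition from the buffer"
]
},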
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's define our model. But first, let quickly recap what a DQN is.\n\nDQN algorithm\n-------------\n\nOur environment is deterministic, so all equations presented here are\nalso formulated deterministically for the sake of simplicity. In the\nreinforcement learning literature, they would also contain expectations\nover stochastic transitions in the environment.\n\nOur aim will be to train a policy that tries to maximize the discounted,\ncumulative reward\n$R_{t_0} = \\sum_{t=t_0}^{\\infty} \\gamma^{t - t_0} r_t$, where\n$R_{t_0}$ is also known as the *return*. The discount,\n$\\gamma$, should be a constant between $0$ and $1$\nthat ensures the sum converges. It makes rewards from the uncertain far\nfuture less important for our agent than the ones in the near future\nthat it can be fairly confident about.\n\nThe main idea behind Q-learning is that if we had a function\n$Q^*: State \\times Action \\rightarrow \\mathbb{R}$, that could tell\nus what our return would be, if we were to take an action in a given\nstate, then we could easily construct a policy that maximizes our\nrewards:\n\n\\begin{align}\\pi^*(s) = \\arg\\!\\max_a \\ Q^*(s, a)\\end{align}\n\nHowever, we don't know everything about the world, so we don't have\naccess to $Q^*$. But, since neural networks are universal function\napproximators, we can simply create one and train it to resemble\n$Q^*$.\n\nFor our training update rule, we'll use a fact that every $Q$\nfunction for some policy obeys the Bellman equation:\n\n\\begin{align}Q^{\\pi}(s, a) = r + \\gamma Q^{\\pi}(s', \\pi(s'))\\end{align}\n\nThe difference between the two sides of the equality is known as the\ntemporal difference error, $\\delta$:\n\n\\begin{align}\\delta = Q(s, a) - (r + \\gamma \\max_a Q(s', a))\\end{align}\n\nTo minimise this error, we will use the `Huber\nloss <https://en.wikipedia.org/wiki/Huber_loss>`__. The Huber loss acts\nlike the mean squared error when the error is small, but like the mean\nabsolute error when the error is large - this makes it more robust to\noutliers when the estimates of $Q$ are very noisy. We calculate\nthis over a batch of transitions, $B$, sampled from the replay\nmemory:\n\n\\begin{align}\\mathcal{L} = \\frac{1}{|B|}\\sum_{(s, a, s', r) \\ \\in \\ B} \\mathcal{L}(\\delta)\\end{align}\n\n\\begin{align}\\text{where} \\quad \\mathcal{L}(\\delta) = \\begin{cases}\n \\frac{1}{2}{\\delta^2} & \\text{for } |\\delta| \\le 1, \\\\\n |\\delta| - \\frac{1}{2} & \\text{otherwise.}\n \\end{cases}\\end{align}\n\nQ-network\n^^^^^^^^^\n\nOur model will be a convolutional neural network that takes in the\ndifference between the current and previous screen patches. It has two\noutputs, representing $Q(s, \\mathrm{left})$ and\n$Q(s, \\mathrm{right})$ (where $s$ is the input to the\nnetwork). In effect, the network is trying to predict the *quality* of\ntaking each action given the current input.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class DQN(nn.Module):\n\n def __init__(self):\n super(DQN, self).__init__()\n self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)\n self.bn1 = nn.BatchNorm2d(16)\n self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)\n self.bn2 = nn.BatchNorm2d(32)\n self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)\n self.bn3 = nn.BatchNorm2d(32)\n self.head = nn.Linear(448, 2)\n\n def forward(self, x):\n x = F.relu(self.bn1(self.conv1(x)))\n x = F.relu(self.bn2(self.conv2(x)))\n x = F.relu(self.bn3(self.conv3(x)))\n return self.head(x.view(x.size(0), -1))"
]
},
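{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check (added here; not part of the original tutorial), the cell below pushes a dummy tensor shaped like the screen patch that ``get_screen`` below produces - 3x40x80, assuming gym's default 600x400 CartPole render - through the three convolutions, confirming that the flattened feature size feeding ``self.head`` is 448 = 32 * 2 * 7. The name ``_check_net`` is just a throwaway instance for this check.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sanity check (illustration only): a 3x40x80 input patch -- the shape\n# get_screen() below produces, assuming gym's default 600x400 render --\n# flattens to 448 features after the three stride-2 convolutions, which is\n# why the head above is nn.Linear(448, 2).\n_check_net = DQN().to(device)\n_check_net.eval()\nwith torch.no_grad():\n    dummy = torch.zeros(1, 3, 40, 80, device=device)\n    x = F.relu(_check_net.bn1(_check_net.conv1(dummy)))\n    x = F.relu(_check_net.bn2(_check_net.conv2(x)))\n    x = F.relu(_check_net.bn3(_check_net.conv3(x)))\n    print(x.shape)                 # expected: torch.Size([1, 32, 2, 7])\n    print(x.view(1, -1).size(1))   # expected: 448"
]
},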
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Input extraction\n^^^^^^^^^^^^^^^^\n\nThe code below are utilities for extracting and processing rendered\nimages from the environment. It uses the ``torchvision`` package, which\nmakes it easy to compose image transforms. Once you run the cell it will\ndisplay an example patch that it extracted.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"resize = T.Compose([T.ToPILImage(),\n T.Resize(40, interpolation=Image.CUBIC),\n T.ToTensor()])\n\n# This is based on the code from gym.\nscreen_width = 600\n\n\ndef get_cart_location():\n world_width = env.x_threshold * 2\n scale = screen_width / world_width\n return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART\n\n\ndef get_screen():\n screen = env.render(mode='rgb_array').transpose(\n (2, 0, 1)) # transpose into torch order (CHW)\n # Strip off the top and bottom of the screen\n screen = screen[:, 160:320]\n view_width = 320\n cart_location = get_cart_location()\n if cart_location < view_width // 2:\n slice_range = slice(view_width)\n elif cart_location > (screen_width - view_width // 2):\n slice_range = slice(-view_width, None)\n else:\n slice_range = slice(cart_location - view_width // 2,\n cart_location + view_width // 2)\n # Strip off the edges, so that we have a square image centered on a cart\n screen = screen[:, :, slice_range]\n # Convert to float, rescare, convert to torch tensor\n # (this doesn't require a copy)\n screen = np.ascontiguousarray(screen, dtype=np.float32) / 255\n screen = torch.from_numpy(screen)\n # Resize, and add a batch dimension (BCHW)\n return resize(screen).unsqueeze(0).to(device)\n\n\nenv.reset()\nplt.figure()\nplt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),\n interpolation='none')\nplt.title('Example extracted screen')\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training\n--------\n\nHyperparameters and utilities\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nThis cell instantiates our model and its optimizer, and defines some\nutilities:\n\n- ``select_action`` - will select an action accordingly to an epsilon\n greedy policy. Simply put, we'll sometimes use our model for choosing\n the action, and sometimes we'll just sample one uniformly. The\n probability of choosing a random action will start at ``EPS_START``\n and will decay exponentially towards ``EPS_END``. ``EPS_DECAY``\n controls the rate of the decay.\n- ``plot_durations`` - a helper for plotting the durations of episodes,\n along with an average over the last 100 episodes (the measure used in\n the official evaluations). The plot will be underneath the cell\n containing the main training loop, and will update after every\n episode.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"BATCH_SIZE = 128\nGAMMA = 0.999\nEPS_START = 0.9\nEPS_END = 0.05\nEPS_DECAY = 200\nTARGET_UPDATE = 10\n\npolicy_net = DQN().to(device)\ntarget_net = DQN().to(device)\ntarget_net.load_state_dict(policy_net.state_dict())\ntarget_net.eval()\n\noptimizer = optim.RMSprop(policy_net.parameters())\nmemory = ReplayMemory(10000)\n\n\nsteps_done = 0\n\n\ndef select_action(state):\n global steps_done\n sample = random.random()\n eps_threshold = EPS_END + (EPS_START - EPS_END) * \\\n math.exp(-1. * steps_done / EPS_DECAY)\n steps_done += 1\n if sample > eps_threshold:\n with torch.no_grad():\n return policy_net(state).max(1)[1].view(1, 1)\n else:\n return torch.tensor([[random.randrange(2)]], device=device, dtype=torch.long)\n\n\nepisode_durations = []\n\n\ndef plot_durations():\n plt.figure(2)\n plt.clf()\n durations_t = torch.tensor(episode_durations, dtype=torch.float)\n plt.title('Training...')\n plt.xlabel('Episode')\n plt.ylabel('Duration')\n plt.plot(durations_t.numpy())\n # Take 100 episode averages and plot them too\n if len(durations_t) >= 100:\n means = durations_t.unfold(0, 100, 1).mean(1).view(-1)\n means = torch.cat((torch.zeros(99), means))\n plt.plot(means.numpy())\n\n plt.pause(0.001) # pause a bit so that plots are updated\n if is_ipython:\n display.clear_output(wait=True)\n display.display(plt.gcf())"
]
},
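{
"cell_type": "markdown",
"metadata": {},
"source": [
"The short cell below is an added illustration (not part of the original tutorial): it prints the epsilon-greedy threshold used by ``select_action`` at a few arbitrary step counts, to show how quickly exploration decays from ``EPS_START`` towards ``EPS_END`` with ``EPS_DECAY = 200``.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustration only: how the exploration threshold used in select_action\n# decays with the number of steps taken so far.\nfor steps in [0, 200, 400, 1000, 2000]:\n    eps = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps / EPS_DECAY)\n    print(steps, round(eps, 3))\n# roughly: 0 -> 0.9, 200 -> 0.363, 1000 -> 0.056 (mostly greedy by then)"
]
},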
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training loop\n^^^^^^^^^^^^^\n\nFinally, the code for training our model.\n\nHere, you can find an ``optimize_model`` function that performs a\nsingle step of the optimization. It first samples a batch, concatenates\nall the tensors into a single one, computes $Q(s_t, a_t)$ and\n$V(s_{t+1}) = \\max_a Q(s_{t+1}, a)$, and combines them into our\nloss. By defition we set $V(s) = 0$ if $s$ is a terminal\nstate. We also use a target network to compute $V(s_{t+1})$ for\nadded stability. The target network has its weights kept frozen most of\nthe time, but is updated with the policy network's weights every so often.\nThis is usually a set number of steps but we shall use episodes for\nsimplicity.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def optimize_model():\n if len(memory) < BATCH_SIZE:\n return\n transitions = memory.sample(BATCH_SIZE)\n # Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for\n # detailed explanation).\n batch = Transition(*zip(*transitions))\n\n # Compute a mask of non-final states and concatenate the batch elements\n non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,\n batch.next_state)), device=device, dtype=torch.uint8)\n non_final_next_states = torch.cat([s for s in batch.next_state\n if s is not None])\n state_batch = torch.cat(batch.state)\n action_batch = torch.cat(batch.action)\n reward_batch = torch.cat(batch.reward)\n\n # Compute Q(s_t, a) - the model computes Q(s_t), then we select the\n # columns of actions taken\n state_action_values = policy_net(state_batch).gather(1, action_batch)\n\n # Compute V(s_{t+1}) for all next states.\n next_state_values = torch.zeros(BATCH_SIZE, device=device)\n next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()\n # Compute the expected Q values\n expected_state_action_values = (next_state_values * GAMMA) + reward_batch\n\n # Compute Huber loss\n loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))\n\n # Optimize the model\n optimizer.zero_grad()\n loss.backward()\n for param in policy_net.parameters():\n param.grad.data.clamp_(-1, 1)\n optimizer.step()"
]
},
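{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the ``gather``/``max`` indexing in ``optimize_model`` is unfamiliar, the toy cell below (added for illustration; all numbers are made up) shows what those calls do on a small batch of Q-values, and also checks that ``F.smooth_l1_loss`` matches the piecewise Huber definition given in the DQN algorithm section above.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Toy illustration (made-up numbers) of the indexing and loss used above.\nq = torch.tensor([[1.0, 2.0],\n                  [4.0, 3.0]])        # Q-values for 2 states x 2 actions\nactions = torch.tensor([[1], [0]])    # action taken in each state\nprint(q.gather(1, actions))           # Q(s_t, a_t): [[2.], [4.]]\nprint(q.max(1)[0])                    # max_a Q(s, a): [2., 4.]\n\n# Check that smooth_l1_loss equals the piecewise Huber loss from above:\n# 0.5 * delta^2 for |delta| <= 1, |delta| - 0.5 otherwise.\ndelta = torch.tensor([0.5, 2.0, -3.0])          # made-up TD errors\nmanual = torch.where(delta.abs() <= 1,\n                     0.5 * delta ** 2,\n                     delta.abs() - 0.5).mean()\nprint(manual.item())                                   # 1.375\nprint(F.smooth_l1_loss(delta, torch.zeros(3)).item())  # 1.375"
]
},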
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below, you can find the main training loop. At the beginning we reset\nthe environment and initialize the ``state`` Tensor. Then, we sample\nan action, execute it, observe the next screen and the reward (always\n1), and optimize our model once. When the episode ends (our model\nfails), we restart the loop.\n\nBelow, `num_episodes` is set small. You should download\nthe notebook and run lot more epsiodes.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"num_episodes = 50\nfor i_episode in range(num_episodes):\n # Initialize the environment and state\n env.reset()\n last_screen = get_screen()\n current_screen = get_screen()\n state = current_screen - last_screen\n for t in count():\n # Select and perform an action\n action = select_action(state)\n _, reward, done, _ = env.step(action.item())\n reward = torch.tensor([reward], device=device)\n\n # Observe new state\n last_screen = current_screen\n current_screen = get_screen()\n if not done:\n next_state = current_screen - last_screen\n else:\n next_state = None\n\n # Store the transition in memory\n memory.push(state, action, next_state, reward)\n\n # Move to the next state\n state = next_state\n\n # Perform one step of the optimization (on the target network)\n optimize_model()\n if done:\n episode_durations.append(t + 1)\n plot_durations()\n break\n # Update the target network\n if i_episode % TARGET_UPDATE == 0:\n target_net.load_state_dict(policy_net.state_dict())\n\nprint('Complete')\nenv.render()\nenv.close()\nplt.ioff()\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 0
}