Run with defaults
python vpg.py
import tensorflow as tf
from tensorflow.contrib.layers import xavier_initializer_conv2d

def conv2d(inputs, filter, strides, name='conv2d'):
    # filter is the kernel shape [height, width, in_channels, out_channels]
    k = tf.get_variable('W', filter, initializer=xavier_initializer_conv2d())
    b = tf.get_variable('b', filter[-1], initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(inputs, k, strides, 'SAME')
    bias_add = tf.nn.bias_add(conv, b)
    return tf.nn.relu(bias_add, name=name)

def vision_model(frames, n_frames):
    with tf.variable_scope('Conv1') as scope:
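For intuition, here is a minimal NumPy sketch (my own illustration, not part of this model) of the 'SAME'-padded, stride-1 cross-correlation that tf.nn.conv2d computes, reduced to a single input/output channel and an odd-sized kernel:

```python
import numpy as np

def same_conv2d_single_channel(x, k):
    """Naive 'SAME'-padded, stride-1 2-D cross-correlation (what
    tf.nn.conv2d computes) for one channel and an odd-sized kernel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2                # zero padding that preserves the output size
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(9.0).reshape(3, 3)
print(same_conv2d_single_channel(x, np.ones((3, 3))))
```

The output has the same spatial shape as the input, which is the point of 'SAME' padding; the real op just does this per output channel and adds the bias before the ReLU.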
# User configuration
export PATH="$HOME/.linuxbrew/bin:$PATH"
export MANPATH="$HOME/.linuxbrew/share/man:$MANPATH"
export INFOPATH="$HOME/.linuxbrew/share/info:$INFOPATH"

# add cuda tools to command path
export PATH=/usr/local/cuda/bin:${PATH}
export MANPATH=/usr/local/cuda/man:${MANPATH}

# add cuda libraries to library path
To replicate, run:

python cem.py --algorithm pcem --outdir CartPole-v0-pcem

The --outdir argument is optional; if omitted, results are written to /tmp/CartPole-v0-pcem.
Implements the cross-entropy method with decreasing noise added to the variance updates, as described in [1]. Running cem.py with the default settings should reproduce the results.

[1] Szita, I. and Lorincz, A. (2006). Learning Tetris Using the Noisy Cross-Entropy Method.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.6579&rep=rep1&type=pdf
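As a rough, self-contained sketch of the idea (an illustrative toy, not the cem.py implementation; the objective, population size, and noise schedule here are made up), the variance update keeps an extra, decaying noise term so the search distribution does not collapse before a good mean is found:

```python
import numpy as np

def noisy_cem(f, dim, iters=60, pop=50, elite_frac=0.2, seed=0):
    """Cross-entropy method that maximizes f, adding decreasing
    extra noise to the variance update (in the spirit of [1])."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    n_elite = int(pop * elite_frac)
    for t in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))   # sample a population
        scores = np.array([f(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]     # keep the top fraction
        mu = elite.mean(axis=0)
        noise = max(5.0 - t * 0.25, 0.0)                   # extra noise, decaying to zero
        sigma = np.sqrt(elite.var(axis=0) + noise)
    return mu

# toy objective: maximized at x = (3, 3)
best = noisy_cem(lambda x: -np.sum((x - 3.0) ** 2), dim=2)
```

Once the noise term reaches zero the update reduces to the plain cross-entropy method, so the schedule only affects how quickly the variance is allowed to shrink.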
{
  "parser": "babel-eslint",
  "env": {
    "node": true,
    "mocha": true
  },
  "rules": {
    "brace-style": 2,
    "camelcase": 2,
    "comma-dangle": [2, "never"],
Some questions to ask during interviews:

* What does success look like for this position? How will I know if I am accomplishing what is expected of me?
* What is the last project you shipped? What was the goal, how long did it take, what were the stumbling blocks, and what tools did you use?
* What will my first 90 days in this role look like? The first 180 days?
* Who will I report to, and how many people report to that person? Do they hold regular 1:1s with their team members?
* Why did the last person who quit this team leave? Why did the last person who quit the company leave?
* If it's a startup, how long is your runway? How are financial decisions made?
* What would be my first project here? Has someone already been working on it, or is it still in the aspirational stage?
* What is the current state of the data infrastructure? How much work is needed to get the infrastructure and pipeline into shape before we can start analyzing the data?
" General
set encoding=utf-8
set fileencoding=utf-8

" Match and Search
set ignorecase
set smartcase
set hlsearch
set incsearch
" set autochdir

set tabstop=4
from __future__ import print_function

import requests
from scrapy.selector import Selector

def scrape_comments(url):
    comments = []
    r = requests.get(url).text
    sel = Selector(text=r)
    # each comment is rendered as a <span class="comment"> wrapping an inner <span>
    rough_comments = sel.xpath("//span[@class='comment']/span")
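The XPath above selects the inner <span> of each <span class="comment">. As a dependency-free illustration of that extraction step (using only Python's standard-library HTML parser and a made-up HTML snippet, not real page markup):

```python
from html.parser import HTMLParser

class CommentExtractor(HTMLParser):
    """Collects the text found inside <span class="comment"> elements."""
    def __init__(self):
        super().__init__()
        self.stack = []      # one entry per open <span>: is it a comment span?
        self.comments = []

    def handle_starttag(self, tag, attrs):
        if tag == 'span':
            self.stack.append(('class', 'comment') in attrs)

    def handle_endtag(self, tag):
        if tag == 'span' and self.stack:
            self.stack.pop()

    def handle_data(self, data):
        # keep text only when some enclosing <span> is a comment span
        if any(self.stack) and data.strip():
            self.comments.append(data.strip())

html = ('<span class="comment"><span>first</span></span>'
        '<span class="comment"><span>second</span></span>')
p = CommentExtractor()
p.feed(html)
print(p.comments)  # → ['first', 'second']
```

Scrapy's Selector does the same thing more robustly; the sketch just shows what the query is pulling out.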
from __future__ import print_function, absolute_import

import cgt
from cgt import nn
from cgt.distributions import categorical
import numpy as np

from load import load_mnist
import time

epochs = 10
batch_size = 128
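The epochs/batch_size settings above drive a standard minibatch loop. A framework-free sketch of that iteration (NumPy stand-ins for the MNIST arrays, not the cgt training code):

```python
import numpy as np

def iterate_minibatches(X, y, batch_size, rng):
    """Yield shuffled (inputs, targets) minibatches, dropping the ragged tail."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X) - batch_size + 1, batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.default_rng(0)
X = np.zeros((1000, 784))        # stand-in for flattened MNIST images
y = np.zeros(1000, dtype=int)    # stand-in labels
n_batches = sum(1 for _ in iterate_minibatches(X, y, 128, rng))
print(n_batches)  # → 7 (1000 // 128 full batches per epoch)
```

Each of the 10 epochs would reshuffle and replay these batches, feeding each one to the training update.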