import urllib2
import re
import sys
from collections import defaultdict
from random import random
""" | |
PLEASE DO NOT RUN THIS QUOTED CODE FOR THE SAKE OF daemonology's SERVER, IT IS | |
NOT MY SERVER AND I FEEL BAD FOR ABUSING IT. JUST GET THE RESULTS OF THE | |
CRAWL HERE: http://pastebin.com/raw.php?i=nqpsnTtW AND SAVE THEM TO "archive.txt" |
I have moved this over to the Tech Interview Cheat Sheet Repo, where it has been expanded and even includes code challenges you can run and practice against!
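For convenience, here is a minimal sketch of fetching the pre-crawled results from the pastebin link above and saving them as archive.txt. It is not part of the original script and assumes Python 2, matching the urllib2 import above.

# Sketch (my addition, not from the original gist): download the pre-crawled
# results instead of hitting daemonology's server, and save them locally.
import urllib2
data = urllib2.urlopen('http://pastebin.com/raw.php?i=nqpsnTtW').read()
with open('archive.txt', 'w') as f:
    f.write(data)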
/**
 * Chunkify
 * Google Chrome Speech Synthesis Chunking Pattern
 * Fixes inconsistencies with speaking long texts in speechUtterance objects
 * Licensed under the MIT License
 *
 * Peter Woolley and Brett Zamir
 */
var speechUtteranceChunker = function (utt, settings, callback) {
Afghanistan
Albania
Algeria
Andorra
Angola
Antigua & Deps
Argentina
Armenia
Australia
Austria
The following recipes are sampled from a trained neural net. You can find the repo to train your own neural net here: https://github.com/karpathy/char-rnn Thanks to Andrej Karpathy for the great code! It's really easy to set up.
The recipes I used for training the char-rnn are from a recipe collection called ffts.com. And here is the actual zipped data (uncompressed ~35 MB) I used for training. The ZIP is also archived @ archive.org in case the original links become invalid in the future.
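Training itself follows the char-rnn README; a command along the lines of the one below should work, though the data directory name here is my placeholder and the hyperparameter values are just the README's defaults, not the ones I used.

th train.lua -data_dir data/recipes -rnn_size 512 -num_layers 2 -dropout 0.5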
from math import sqrt

def put_kernels_on_grid(kernel, pad=1):
    '''Visualize conv. filters as an image (mostly for the 1st layer).
    Arranges filters into a grid, with some paddings between adjacent filters.
    Args:
      kernel: tensor of shape [Y, X, NumChannels, NumKernels]
      pad: number of black pixels around each filter (between them)
    '''
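A short usage sketch, my assumption rather than part of the original gist: with TF1-style summaries, the grid returned by put_kernels_on_grid can be logged as a single image for TensorBoard. The conv1_weights variable below is hypothetical.

# Usage sketch (assumption): log the first-layer filter grid to TensorBoard.
# `conv1_weights` is a hypothetical variable of shape [Y, X, NumChannels, NumKernels].
import tensorflow as tf
grid = put_kernels_on_grid(conv1_weights, pad=1)
tf.summary.image('conv1/filters', grid, max_outputs=1)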
# Example for my blog post at:
# https://danijar.com/introduction-to-recurrent-networks-in-tensorflow/
import functools
import sets
import tensorflow as tf

def lazy_property(function):
    attribute = '_' + function.__name__
    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return wrapper
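To illustrate why this is useful: the decorator evaluates the wrapped method once and caches the result, so graph-building code runs only on first access. The Model class and its prediction property below are my own hypothetical example, not taken from the post.

# Hypothetical usage: the ops behind `prediction` are built exactly once.
class Model(object):
    def __init__(self, data):
        self.data = data
        self.prediction  # touch the property so the ops are built at construction time

    @lazy_property
    def prediction(self):
        return tf.nn.softmax(self.data)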
basic_rl.py provides a simple implementation of the SARSA and Q-learning algorithms (selected with the -a flag) with epsilon-greedy or softmax policies (selected with the -p flag). You can also select an environment other than Roulette-v0 using the -e flag. It also generates a graphical summary of your simulation.
Type the following commands in your console to run the simulation with the default settings.
chmod +x basic_rl.py
./basic_rl.py
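A run with explicit options might look like the line below; only the -a, -p, and -e flags themselves are documented above, so the specific argument values (algorithm, policy, and environment names) are my assumptions.

./basic_rl.py -a q_learning -p softmax -e FrozenLake-v0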
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
import numpy as np | |
import cPickle as pickle | |
import gym | |
# hyperparameters | |
H = 200 # number of hidden layer neurons | |
batch_size = 10 # every how many episodes to do a param update? | |
learning_rate = 1e-4 | |
gamma = 0.99 # discount factor for reward |
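As a hedged illustration of how gamma is typically applied in this kind of policy-gradient setup, here is a sketch of the standard discounted-return computation (a common pattern for Pong-style episodes, not necessarily verbatim from the original script); it assumes Python 2, matching the imports above.

# Sketch: compute discounted returns, resetting the running sum at each
# game boundary (in Pong a nonzero reward marks the end of a point).
def discount_rewards(r):
    discounted_r = np.zeros_like(r)
    running_add = 0
    for t in reversed(xrange(0, r.size)):
        if r[t] != 0: running_add = 0  # reset the sum at game boundary
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r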