Using Python's built-in defaultdict, we can easily define a tree data structure:

def tree(): return defaultdict(tree)

That's it!
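Because missing keys are created on first access, nested paths spring into existence automatically. A small usage sketch (the json dump is just one convenient way to display the result, since defaultdict is a dict subclass):

from collections import defaultdict
import json

def tree():
    return defaultdict(tree)

# keys are created on first access, to any depth
taxonomy = tree()
taxonomy['animals']['mammals']['cats'] = 'meow'
taxonomy['animals']['birds']['owls'] = 'hoot'

print(json.dumps(taxonomy, indent=2))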
| """Short and sweet LSTM implementation in Tensorflow. | |
| Motivation: | |
| When Tensorflow was released, adding RNNs was a bit of a hack - it required | |
| building separate graphs for every number of timesteps and was a bit obscure | |
| to use. Since then TF devs added things like `dynamic_rnn`, `scan` and `map_fn`. | |
| Currently the APIs are decent, but all the tutorials that I am aware of are not | |
| making the best use of the new APIs. | |
| Advantages of this implementation: |
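As a point of reference, here is a minimal sketch of the `dynamic_rnn` API mentioned above, assuming the TF 1.x interface this gist targets; the input size of 8 and the 32 hidden units are arbitrary choices:

import tensorflow as tf  # TF 1.x, matching the era of this gist

# (batch, time, features); both batch and time may vary at runtime,
# which is exactly what the old per-timestep graphs could not handle
inputs = tf.placeholder(tf.float32, shape=(None, None, 8))
cell = tf.nn.rnn_cell.LSTMCell(num_units=32)

# a single graph handles any number of timesteps
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)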
| """ | |
| Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
| BSD License | |
| """ | |
| import numpy as np | |
| # data I/O | |
| data = open('input.txt', 'r').read() # should be simple plain text file | |
| chars = list(set(data)) | |
| data_size, vocab_size = len(data), len(chars) |
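The excerpt stops here; a minimal sketch of the usual continuation, building the character-to-index lookup tables the model needs to encode the text as integers:

char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
print('data has %d characters, %d unique.' % (data_size, vocab_size))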
| """ Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
| import numpy as np | |
| import cPickle as pickle | |
| import gym | |
| # hyperparameters | |
| H = 200 # number of hidden layer neurons | |
| batch_size = 10 # every how many episodes to do a param update? | |
| learning_rate = 1e-4 | |
| gamma = 0.99 # discount factor for reward |
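To show where `gamma` comes in, here is a minimal sketch of turning a reward sequence into discounted returns; the full script additionally resets the running sum at Pong game boundaries, which is omitted here:

def discount_rewards(r):
    """Work backwards through the rewards, accumulating a discounted sum."""
    discounted = np.zeros_like(r, dtype=np.float64)
    running_add = 0.0
    for t in reversed(range(len(r))):
        running_add = running_add * gamma + r[t]
        discounted[t] = running_add
    return discounted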
| """Information Retrieval metrics | |
| Useful Resources: | |
| http://www.cs.utexas.edu/~mooney/ir-course/slides/Evaluation.ppt | |
| http://www.nii.ac.jp/TechReports/05-014E.pdf | |
| http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf | |
| http://hal.archives-ouvertes.fr/docs/00/72/67/60/PDF/07-busa-fekete.pdf | |
| Learning to Rank for Information Retrieval (Tie-Yan Liu) | |
| """ | |
| import numpy as np |
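As an illustration of the kind of metric such a module provides, a minimal sketch of precision@k over a binary relevance vector (1 = relevant, 0 = not relevant), given in rank order:

def precision_at_k(r, k):
    """Fraction of the top-k ranked results that are relevant."""
    r = np.asarray(r)[:k]
    if r.size != k:
        raise ValueError('Relevance score length < k')
    return np.mean(r)

# e.g. precision_at_k([0, 1, 1, 0], 3) returns 2/3:
# two of the top three results are relevant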
# coding: utf-8
import logging
import re
from collections import Counter

import numpy as np
import torch
from sklearn.datasets import fetch_20newsgroups
from torch.autograd import Variable
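The imports alone suggest the usual first steps: fetch the corpus and count tokens. A hypothetical sketch of that setup; the tokenizing regex and the min_count threshold are assumptions, not from the original:

logging.basicConfig(level=logging.INFO)
newsgroups = fetch_20newsgroups(subset='train')

# crude regex tokenization; the real preprocessing is likely richer
counter = Counter(tok for doc in newsgroups.data
                  for tok in re.findall(r"[a-z']+", doc.lower()))

min_count = 5  # assumed cutoff for rare tokens
vocab = [w for w, c in counter.most_common() if c >= min_count]
logging.info('vocab size: %d', len(vocab))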
''' Script for downloading all GLUE data.

Note: for legal reasons, we are unable to host MRPC.
You can either use the version hosted by the SentEval team, which is already tokenized,
or you can download the original data from
https://download.microsoft.com/download/D/4/6/D46FF87A-F6B9-4252-AA8B-3604ED519838/MSRParaphraseCorpus.msi
and extract the data from it manually.
For Windows users, you can run the .msi file. For Mac and Linux users, consider an
external library such as 'cabextract' (see below for an example).
You should then rename and place specific files in a folder (see below for an example).

mkdir MRPC
cabextract MSRParaphraseCorpus.msi -d MRPC
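For completeness, a hypothetical sketch of the download step itself; the helper name is an assumption, and the real script keeps its own mapping from GLUE task names to URLs:

import os
import urllib.request

def download_file(url, dest_dir, filename):
    """Fetch url into dest_dir/filename, creating the directory if needed."""
    os.makedirs(dest_dir, exist_ok=True)
    path = os.path.join(dest_dir, filename)
    urllib.request.urlretrieve(url, path)
    return path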
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['gigantic_string', 'tiny_str', 'medium_str']

# make <pad> idx 0
vocab = ['<pad>'] + sorted(set(''.join(seqs)))

# make model
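The excerpt cuts off at "# make model"; a minimal sketch of one way to finish it, padding the vectorized sequences and running them through an embedding plus LSTM. The embedding and hidden sizes are arbitrary choices, and `enforce_sorted=False` requires PyTorch >= 1.1:

# vectorize each string as a list of vocab indices
vectorized = [[vocab.index(ch) for ch in seq] for seq in seqs]
lengths = torch.tensor([len(v) for v in vectorized])

# pad everything to the longest sequence with <pad> (index 0)
padded = torch.zeros(len(vectorized), int(lengths.max()), dtype=torch.long)
for i, v in enumerate(vectorized):
    padded[i, :len(v)] = torch.tensor(v)

embedding = nn.Embedding(len(vocab), 4, padding_idx=0)
lstm = nn.LSTM(input_size=4, hidden_size=5, batch_first=True)

# pack so the LSTM never sees the padded positions
packed = pack_padded_sequence(embedding(padded), lengths,
                              batch_first=True, enforce_sorted=False)
output, (h_n, c_n) = lstm(packed)

# unpack back to a regular (batch, seq, hidden) tensor plus the true lengths
unpacked, seq_lengths = pad_packed_sequence(output, batch_first=True)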
As configured in my dotfiles.

start new:

    tmux

start new with session name:

    tmux new -s myname