This is from Sam Bowman's course. Hopefully he's okay with me sharing it.
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# HW4: RNNLMs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this assignment, you'll implement an LSTM lanugage model.\n", | |
"\n", | |
"Submit your completed notebook through NYU Classes by 9:30 AM on October 27." | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## Setup" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"First, let's load the data as before." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 9, | |
"metadata": { | |
"collapsed": false | |
}, | |
"outputs": [], | |
"source": [ | |
"sst_home = '../trees'\n", | |
"\n", | |
"import re\n", | |
"import random\n", | |
"\n", | |
"# Let's do 2-way positive/negative classification instead of 5-way\n", | |
"easy_label_map = {0:0, 1:0, 2:None, 3:1, 4:1}\n", | |
"\n", | |
"def load_sst_data(path):\n", | |
" data = []\n", | |
" with open(path) as f:\n", | |
" for i, line in enumerate(f): \n", | |
" example = {}\n", | |
" \n", | |
" # Strip out the parse information and the phrase labels---we don't need those here\n", | |
" text = re.sub(r'\\s*(\\(\\d)|(\\))\\s*', '', line)\n", | |
" example['text'] = text[1:]\n", | |
" data.append(example)\n", | |
"\n", | |
" random.seed(1)\n", | |
" random.shuffle(data)\n", | |
" return data\n", | |
" \n", | |
"training_set = load_sst_data(sst_home + '/train.txt')\n", | |
"dev_set = load_sst_data(sst_home + '/dev.txt')\n", | |
"test_set = load_sst_data(sst_home + '/test.txt')\n", | |
"\n", | |
"# Note: Unlike with k-nearest neighbors, evaluation here should be fast, and we don't need to\n", | |
"# trim down the dev and test sets. " | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Next, we'll convert the data to index vectors.\n", | |
"\n", | |
"To simplify your implementation, we'll use a fixed unrolling length of 20. This means that we'll have to expand each sentence into a sequence of 21 word indices. In the conversion process, we'll mark the start of each sentence with a special word symbol `<S>`, mark the end of each sentence (if it occurs within the first 21 words) with a special word symbol `</S>`, mark extra tokens after `</S>` with a special word symbol `<PAD>`, and mark out-of-vocabulary words with `<UNK>`, for unknown. As in the previous assignment, we'll use a very small vocabulary for this assignment, so you'll see `<UNK>` often." | |
]
},
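{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a concrete illustration of the scheme described above (this toy sketch is not part of the assignment code, and the tiny vocabulary and indices here are made up), a short sentence expands like this:\n",
"\n",
"```python\n",
"# Illustrative only: a toy version of the padding scheme described above.\n",
"SEQ_LEN = 21\n",
"vocab = {\"<S>\": 0, \"</S>\": 1, \"<PAD>\": 2, \"<UNK>\": 3, \"a\": 4, \"fun\": 5, \"movie\": 6}\n",
"\n",
"tokens = [\"<S>\"] + \"A genuinely fun movie .\".lower().split() + [\"</S>\"]\n",
"tokens = [t if t in vocab else \"<UNK>\" for t in tokens]\n",
"padded = (tokens + [\"<PAD>\"] * SEQ_LEN)[:SEQ_LEN]\n",
"indices = [vocab[t] for t in padded]\n",
"# padded  -> ['<S>', 'a', '<UNK>', 'fun', 'movie', '<UNK>', '</S>', '<PAD>', ..., '<PAD>']\n",
"# indices -> [0, 4, 3, 5, 6, 3, 1, 2, ..., 2]\n",
"```"
]
},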
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import collections\n",
"import numpy as np\n",
"\n",
"min_count = 5\n",
"\n",
"def sentence_to_padded_index_sequence(datasets):\n",
"    '''Annotates datasets with feature vectors.'''\n",
" \n",
"    START = \"<S>\"\n",
"    END = \"</S>\"\n",
"    END_PADDING = \"<PAD>\"\n",
"    UNKNOWN = \"<UNK>\"\n",
"    SEQ_LEN = 21\n",
" \n",
"    # Extract vocabulary\n",
"    def tokenize(string):\n",
"        return string.lower().split()\n",
" \n",
"    word_counter = collections.Counter()\n",
"    for example in datasets[0]:\n",
"        word_counter.update(tokenize(example['text']))\n",
" \n",
"    vocabulary = set([word for word in word_counter if word_counter[word] > min_count])\n",
"    vocabulary = list(vocabulary)\n",
"    vocabulary = [START, END, END_PADDING, UNKNOWN] + vocabulary\n",
" \n",
"    word_indices = dict(zip(vocabulary, range(len(vocabulary))))\n",
"    indices_to_words = {v: k for k, v in word_indices.items()}\n",
" \n",
" for i, dataset in enumerate(datasets):\n", | |
" for example in dataset:\n", | |
" example['index_sequence'] = np.zeros((SEQ_LEN), dtype=np.int32)\n", | |
" \n", | |
" token_sequence = [START] + tokenize(example['text']) + [END]\n", | |
" \n", | |
" for i in range(SEQ_LEN):\n", | |
" if i < len(token_sequence):\n", | |
" if token_sequence[i] in word_indices:\n", | |
" index = word_indices[token_sequence[i]]\n", | |
" else:\n", | |
" index = word_indices[UNKNOWN]\n", | |
" else:\n", | |
" index = word_indices[END_PADDING]\n", | |
" example['index_sequence'][i] = index\n", | |
" return indices_to_words, word_indices\n", | |
" \n", | |
"indices_to_words, word_indices = sentence_to_padded_index_sequence([training_set, dev_set, test_set])" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 13, | |
"metadata": { | |
"collapsed": false | |
}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"{'text': \"It could have been something special , but two things drag it down to mediocrity -- director Clare Peploe 's misunderstanding of Marivaux 's rhythms , and Mira Sorvino 's limitations as a classical actress .\", 'index_sequence': array([ 0, 596, 702, 1269, 76, 1397, 2245, 1415, 1738, 1983, 1327,\n", | |
" 1014, 596, 863, 1372, 3, 743, 2502, 3, 3, 2584], dtype=int32)}\n", | |
"2829\n" | |
] | |
} | |
], | |
"source": [ | |
"print(training_set[18])\n", | |
"print(len(word_indices))" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## Part 1: Implementation (60%)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Now, using the starter code and hyperparameter values provided below, implement an LSTM language model with dropout on the non-recurrent connections. Use the standard form of the LSTM reflected in the slides (without peepholes). You should only have to edit the marked sections of code to build the base LSTM, though implementing dropout properly may require small changes to the main training loop and to brittle_sampler().\n", | |
"\n", | |
"Don't use any TensorFlow code that is specifically built for RNNs. If a TF function has 'recurrent', 'sequence', 'LSTM', or 'RNN' in its name, you should built it yourself instead of using it. (Your version will likely be much simpler, by the way, since these built in methods are powerful but fairly complex and potentially confusing.)\n", | |
"\n", | |
"We won't be evaluating our model in the conventional way (perplexity on a held-out test set) for a few reasons: to save time, because we have no baseline to compare against, and because overfitting the training set is a less immediate concern with these models than it was with sentence classifiers. Instead, we'll use the value of the cost function to make sure that the model is converging as expected, and we'll use samples drawn from the model to qualitatively evaluate it.\n", | |
"\n", | |
"Tips: \n", | |
"\n", | |
"- You'll need to use `tf.nn.embedding_lookup()`, `tf.nn.sparse_softmax_cross_entropy_with_logits()`, and `tf.split()` at least once each. All three should be easy to Google, though the last homework and the last exercise should show examples of the first two.\n", | |
"- As before, you'll want to initialize your trained parameters using something like `tf.random_normal(..., stddev=0.1)`" | |
] | |
}, | |
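{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here is one standard way of writing the no-peephole LSTM step (a reminder rather than a specification: your slides may use slightly different notation, and this version uses row vectors and right-multiplication to match the shapes of the starter variables below):\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"f_t &= \\sigma([x_t; h_{t-1}]\\,W_f + b_f) \\\\\n",
"i_t &= \\sigma([x_t; h_{t-1}]\\,W_i + b_i) \\\\\n",
"\\tilde{c}_t &= \\tanh([x_t; h_{t-1}]\\,W_c + b_c) \\\\\n",
"o_t &= \\sigma([x_t; h_{t-1}]\\,W_o + b_o) \\\\\n",
"c_t &= f_t \\odot c_{t-1} + i_t \\odot \\tilde{c}_t \\\\\n",
"h_t &= o_t \\odot \\tanh(c_t)\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"Here $x_t$ is the embedding of the word at step $t$, $[\\cdot\\,;\\cdot]$ is concatenation, $\\sigma$ is the logistic sigmoid, and $\\odot$ is elementwise multiplication. The logits for predicting the next word are $h_t W_{cl} + b_{cl}$, which feed a softmax cross-entropy loss against the actual next word."
]
},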
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import tensorflow as tf"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class LanguageModel:\n",
"    def __init__(self, vocab_size, sequence_length):\n",
"        # Define the hyperparameters\n",
"        self.learning_rate = 0.3 # Should be about right\n",
"        self.training_epochs = 250 # How long to train for - chosen to fit within class time\n",
"        self.display_epoch_freq = 1 # How often to test and print out statistics\n",
"        self.dim = 32 # The dimension of the hidden state of the RNN\n",
"        self.embedding_dim = 16 # The dimension of the learned word embeddings\n",
"        self.batch_size = 256 # Somewhat arbitrary - can be tuned, but often tune for speed, not accuracy\n",
"        self.vocab_size = vocab_size # Defined by the file reader above\n",
"        self.sequence_length = sequence_length # Defined by the file reader above\n",
"        self.keep_rate = 0.75 # Used in dropout (at training time only, not at sampling time)\n",
" \n",
"        #### Start main editable code block ####\n",
" \n",
"        self.E = tf.Variable(tf.random_normal([self.vocab_size, self.embedding_dim], stddev=0.1))\n",
" \n",
"        self.W_f = tf.Variable(tf.random_normal([self.embedding_dim + self.dim, self.dim], stddev=0.1))\n",
"        self.b_f = tf.Variable(tf.random_normal([self.dim], stddev=0.1))\n",
" \n",
"        self.W_i = tf.Variable(tf.random_normal([self.embedding_dim + self.dim, self.dim], stddev=0.1))\n",
"        self.b_i = tf.Variable(tf.random_normal([self.dim], stddev=0.1))\n",
" \n",
"        self.W_c = tf.Variable(tf.random_normal([self.embedding_dim + self.dim, self.dim], stddev=0.1))\n",
"        self.b_c = tf.Variable(tf.random_normal([self.dim], stddev=0.1))\n",
" \n",
"        self.W_o = tf.Variable(tf.random_normal([self.embedding_dim + self.dim, self.dim], stddev=0.1))\n",
"        self.b_o = tf.Variable(tf.random_normal([self.dim], stddev=0.1))\n",
" \n",
"        self.W_cl = tf.Variable(tf.random_normal([self.dim, self.vocab_size], stddev=0.1))\n",
"        self.b_cl = tf.Variable(tf.random_normal([self.vocab_size], stddev=0.1))\n",
" \n",
"        # Define the input placeholder(s).\n",
"        # I'll supply this one, since it's needed in sampling. Add any others you need.\n",
"        self.x = tf.placeholder(tf.int32, [None, self.sequence_length])\n",
"# self.y = tf.placeholder(tf.int32, [None])\n",
"\n",
"        def step(x, h_prev, c_prev):\n",
"            emb = tf.nn.dropout(tf.nn.embedding_lookup(self.E, x), self.keep_rate)\n",
"            emb_h_prev = tf.concat(1, [emb, h_prev])\n",
"            f = tf.nn.sigmoid(tf.matmul(emb_h_prev, self.W_f) + self.b_f)\n",
"            i = tf.nn.sigmoid(tf.matmul(emb_h_prev, self.W_i) + self.b_i)\n",
"            c_s = tf.nn.tanh(tf.matmul(emb_h_prev, self.W_c) + self.b_c)\n",
"            o = tf.nn.sigmoid(tf.matmul(emb_h_prev, self.W_o) + self.b_o)\n",
"            c = f*c_prev + i*c_s\n",
"            h = o*tf.nn.tanh(c)\n",
"            return h, c\n",
" \n",
"        # TODO: Build the rest of the LSTM LM!\n",
" \n",
"        self.x_slices = tf.split(1, self.sequence_length, self.x)\n",
" \n",
"        self.h_zero = tf.zeros([self.batch_size, self.dim])\n",
"        self.c_zero = tf.zeros([self.batch_size, self.dim])\n",
"        h_prev = self.h_zero; c_prev = self.c_zero\n",
" \n",
" \n",
" \n",
"        # Your model should populate the following four python lists.\n",
"        # self.logits should contain one [batch_size, vocab_size]-shaped TF tensor of logits \n",
"        # for each of the 20 steps of the model.\n",
"        # self.costs should contain one [batch_size]-shaped TF tensor of cross-entropy loss \n",
"        # values for each of the 20 steps of the model.\n",
" # self.h and c should each start contain one [batch_size, dim]-shaped TF tensor of LSTM\n", | |
" # activations for each of the 21 *states* of the model -- one tensor of zeros for the \n", | |
" # starting state followed by one tensor each for the remaining 20 steps.\n", | |
" # Don't rename any of these variables or change their purpose -- they'll be needed by the\n", | |
" # pre-built sampler.\n", | |
" self.logits = []\n", | |
" self.costs = []\n", | |
" self.h = [self.h_zero]\n", | |
" self.c = [self.c_zero]\n", | |
" \n", | |
" for t in range(self.sequence_length - 1):\n", | |
" x_t = tf.reshape(self.x_slices[t], [-1])\n", | |
" x_next = tf.reshape(self.x_slices[t+1], [-1])\n", | |
" h_prev, c_prev = step(x_t, h_prev, c_prev)\n", | |
" h_drop = tf.nn.dropout(h_prev, self.keep_rate)\n", | |
" logits = tf.matmul(h_drop, self.W_cl) + self.b_cl\n", | |
" costs = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, x_next)\n", | |
" self.h.append(h_prev)\n", | |
" self.c.append(c_prev)\n", | |
" self.logits.append(logits)\n", | |
" self.costs.append(costs)\n", | |
" \n", | |
" \n", | |
" #### End main editable code block ####\n", | |
" \n", | |
" # Sum costs for each word in each example, but average cost across examples.\n", | |
" self.costs_tensor = tf.concat(1, [tf.expand_dims(cost, 1) for cost in self.costs])\n", | |
" self.cost_per_example = tf.reduce_sum(self.costs_tensor, 1)\n", | |
" self.total_cost = tf.reduce_mean(self.cost_per_example)\n", | |
" \n", | |
" # This library call performs the main SGD update equation\n", | |
" self.optimizer = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.total_cost)\n", | |
" \n", | |
" # Create an operation to fill zero values in for W and b\n", | |
" self.init = tf.initialize_all_variables()\n", | |
" \n", | |
" # Create a placeholder for the session that will be shared between training and evaluation\n", | |
" self.sess = None\n", | |
" \n", | |
" def train(self, training_data):\n", | |
" def get_minibatch(dataset, start_index, end_index):\n", | |
" indices = range(start_index, end_index)\n", | |
" vectors = np.vstack([dataset[i]['index_sequence'] for i in indices])\n", | |
" return vectors\n", | |
" \n", | |
" self.sess = tf.Session()\n", | |
" \n", | |
" self.sess.run(self.init)\n", | |
" print('Training.')\n", | |
"\n", | |
" # Training cycle\n", | |
" for epoch in range(self.training_epochs):\n", | |
" random.shuffle(training_set)\n", | |
" avg_cost = 0.\n", | |
" total_batch = int(len(training_set) / self.batch_size)\n", | |
" \n", | |
" # Loop over all batches in epoch\n", | |
" for i in range(total_batch):\n", | |
" # Assemble a minibatch of the next B examples\n", | |
" minibatch_vectors = get_minibatch(training_set, self.batch_size * i, self.batch_size * (i + 1))\n", | |
"\n", | |
" # Run the optimizer to take a gradient step, and also fetch the value of the \n", | |
" # cost function for logging\n", | |
" _, c = self.sess.run([self.optimizer, self.total_cost], \n", | |
" feed_dict={self.x: minibatch_vectors})\n", | |
" \n", | |
" # Compute average loss\n", | |
" avg_cost += c / (total_batch * self.batch_size)\n", | |
" \n", | |
" # Display some statistics about the step\n", | |
" if (epoch+1) % self.display_epoch_freq == 0:\n", | |
" print(\"Epoch:\", (epoch+1), \"Cost:\", avg_cost, \"Sample:\", self.sample())\n", | |
" \n", | |
" def sample(self):\n", | |
" # This samples a sequence of tokens from the model starting with <S>.\n", | |
" # We only ever run the first timestep of the model, and use an effective batch size of one\n", | |
" # but we leave the model unrolled for multiple steps, and use the full batch size to simplify \n", | |
" # the training code. This slows things down.\n", | |
"\n", | |
" def brittle_sampler():\n", | |
" # The main sampling code. Can fail randomly due to rounding errors that yield probibilities\n", | |
" # that don't sum to one.\n", | |
" \n", | |
" word_indices = [0] # 0 here is the \"<S>\" symbol\n", | |
" for i in range(self.sequence_length - 1):\n", | |
" dummy_x = np.zeros((self.batch_size, self.sequence_length))\n", | |
" dummy_x[0][0] = word_indices[-1]\n", | |
" feed_dict = {self.x: dummy_x}\n", | |
" if i > 0:\n", | |
" feed_dict[self.h_zero] = h\n", | |
" feed_dict[self.c_zero] = c \n", | |
" h, c, logits = self.sess.run([self.h[1], self.c[1], self.logits[0]], \n", | |
" feed_dict=feed_dict) \n", | |
" logits = logits[0, :] # Discard all but first batch entry\n", | |
" exp_logits = np.exp(logits - np.max(logits))\n", | |
" distribution = exp_logits / exp_logits.sum()\n", | |
" sampled_index = np.flatnonzero(np.random.multinomial(1, distribution))[0]\n", | |
" word_indices.append(sampled_index)\n", | |
" words = [indices_to_words[index] for index in word_indices]\n", | |
" return ' '.join(words)\n", | |
" \n", | |
" while True:\n", | |
" try:\n", | |
" sample = brittle_sampler()\n", | |
" return sample\n", | |
" except ValueError as e: # Retry if we experience a random failure.\n", | |
" pass" | |
] | |
}, | |
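{
"cell_type": "markdown",
"metadata": {},
"source": [
"One detail worth flagging in the code above: `tf.nn.dropout` is applied with the fixed Python constant `self.keep_rate`, so dropout stays active when `sample()` runs the graph, even though the hyperparameter comment says it should only apply at training time. A common way to switch dropout off outside of training (this is a sketch under that assumption, not part of the provided starter code) is to make the keep probability a placeholder. The toy below is self-contained and uses the same TF-0.x-era API as the rest of the notebook:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"\n",
"# Sketch: a placeholder keep probability lets one graph run with dropout\n",
"# on (training) or off (sampling) without rebuilding anything.\n",
"keep_prob = tf.placeholder(tf.float32, [])\n",
"inputs = tf.placeholder(tf.float32, [None, 4])\n",
"dropped = tf.nn.dropout(inputs, keep_prob)\n",
"\n",
"sess = tf.Session()\n",
"x = np.ones((2, 4), dtype=np.float32)\n",
"# Training-style: some units zeroed, survivors scaled by 1/keep_prob.\n",
"print(sess.run(dropped, feed_dict={inputs: x, keep_prob: 0.75}))\n",
"# Sampling-style: keep_prob of 1.0 makes dropout the identity.\n",
"print(sess.run(dropped, feed_dict={inputs: x, keep_prob: 1.0}))\n",
"```\n",
"\n",
"Applied to `LanguageModel`, the two `tf.nn.dropout` calls would take such a placeholder, the training loop would feed `self.keep_rate` for it, and `brittle_sampler()` would feed `1.0`."
]
},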
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's train it.\n",
"\n",
"Once you're confident your model is doing what you want, let it run for the full 250 epochs. This will take some time—likely between five and thirty minutes. If it much longer on a reasonably modern laptop—more than an hour—that suggests serious problems with your implementation. A properly implemented model with dropout should reach an average cost of less than 0.22 quickly, and then slowly improve from there. We train the model for a fairly long time because these small improvements in cost correspond to fairly large improvements in sample quality.\n", | |
"\n", | |
"Samples from a trained models should have coherent portions, but they will not resemble interpretable English sentences. Here are three examples from a model with a cost value of 0.202:\n", | |
"\n", | |
"`<S> the good <UNK> and <UNK> and <UNK> <UNK> with predictable and <UNK> , but also does one of -lrb- <UNK>`\n", | |
"\n", | |
"`<S> <UNK> has <UNK> actors seems done <UNK> would these <UNK> <UNK> to <UNK> <UNK> <UNK> 're <UNK> to mind .`\n", | |
"\n", | |
"`<S> an action story that was because the <UNK> <UNK> are when <UNK> as ``` <UNK> '' ' it is any`\n", | |
"\n", | |
"`-lrb-` and `-rrb` are the way that left and right parentheses are represented in the corpus." | |
] | |
}, | |
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training.\n",
"Epoch: 1 Cost: 0.426955506657 Sample: <S> of depiction the everyone served suffers trashy death is <UNK> it listless is to the kids <UNK> revolution unsettling <UNK>\n",
"Epoch: 2 Cost: 0.356918771159 Sample: <S> an <UNK> the looks a a heartbreaking than reminds hip-hop is carry look banal pic bottom <UNK> , with b\n",
"Epoch: 3 Cost: 0.347536933241 Sample: <S> hilarious smart with allen major fans being stylish of desire sequels the offer , laughs 's <UNK> of some creates\n",
"Epoch: 4 Cost: 0.342333127152 Sample: <S> play feature-length <UNK> like ... n't <UNK> <UNK> possibly <UNK> <UNK> with an repetitive another snipes , as cho you\n",
"Epoch: 5 Cost: 0.337392723922 Sample: <S> recommend , compelling no blow realism who the very and the deftly make rock anything , a 'd . <PAD>\n",
"Epoch: 6 Cost: 0.333552892461 Sample: <S> ms. 's <UNK> <UNK> imagine <UNK> to needed is ` <UNK> inside witless <UNK> your a meets . </S> <PAD>\n",
"Epoch: 7 Cost: 0.3298923599 Sample: <S> as i keeping ... , a sides , mystery for were and marvelous 's just few brown but a <UNK>\n",
"Epoch: 8 Cost: 0.327989043611 Sample: <S> this own is the <UNK> spy out . , the a <UNK> static <UNK> told <UNK> vivid the <UNK> --\n",
"Epoch: 9 Cost: 0.325803942753 Sample: <S> in forgettable of <UNK> like jokes nights imitation -rrb- point and but a comedic a theme of flick in the\n",
"Epoch: 10 Cost: 0.324020334265 Sample: <S> <UNK> indeed <UNK> <UNK> <UNK> riveting <UNK> the park <UNK> unsettling can to characters that plotted , <UNK> for otherwise\n",
"Epoch: 11 Cost: 0.322233700391 Sample: <S> under ` viewer feel drama since <UNK> of shock to than you something is that the <UNK> feature-length to certain\n",
"Epoch: 12 Cost: 0.320459598845 Sample: <S> it less 're nothing he , this life but by it the technology is i turning of <UNK> , <UNK>\n",
"Epoch: 13 Cost: 0.318467099558 Sample: <S> it with a spooky simply comedy than made and leaves best , up poor in the first desire , testament\n",
"Epoch: 14 Cost: 0.317521890004 Sample: <S> all any uneasy whose time great he 's best <UNK> -- expectation and saved and sheer inane are there is\n",
"Epoch: 15 Cost: 0.315853634567 Sample: <S> up <UNK> and an had <UNK> , and the whatever another would perfectly own sitcom <UNK> for <UNK> , dead\n",
"Epoch: 16 Cost: 0.314317679766 Sample: <S> <UNK> <UNK> sinks more of <UNK> on film a kind of <UNK> <UNK> <UNK> in this throws a deeper of\n",
"Epoch: 17 Cost: 0.313417954878 Sample: <S> the edge is expected to as it the <UNK> <UNK> into an time <UNK> intended by this long -- it\n",
"Epoch: 18 Cost: 0.311961005131 Sample: <S> it 's much that so an feature film about <UNK> , the stand unlikely , like <UNK> again in boring\n",
"Epoch: 19 Cost: 0.311277264898 Sample: <S> a lot , been middle-aged is a characters of <UNK> <UNK> between satisfying <UNK> , , the christmas call a\n",
"Epoch: 20 Cost: 0.310234280247 Sample: <S> though a a diversion thing never to anything think <UNK> ; that 's keep from the his <UNK> into a\n",
"Epoch: 21 Cost: 0.309151537491 Sample: <S> me , but the movie -lrb- in <UNK> <UNK> <UNK> , returns again of course up <UNK> <UNK> when the\n",
"Epoch: 22 Cost: 0.307753429268 Sample: <S> and <UNK> , while while that 're not go , sophisticated way you as very book wrong ... like <UNK>\n",
"Epoch: 23 Cost: 0.307232930805 Sample: <S> the days like the brutally mark , a <UNK> of its chilling with movie 's <UNK> , charming true live\n",
"Epoch: 24 Cost: 0.306228588025 Sample: <S> an imagery never and clichés <UNK> ... bad to help music , but plays , a <UNK> genre <UNK> to\n",
"Epoch: 25 Cost: 0.30603117112 Sample: <S> the interesting adventure with amy , <UNK> but the striking is be dance original ' carries us <UNK> . </S>\n",
"Epoch: 26 Cost: 0.304994332068 Sample: <S> has the mediocre top-notch to the ghetto humor is <UNK> come of a aspects of begins through one of <UNK>\n",
"Epoch: 27 Cost: 0.304625029817 Sample: <S> <UNK> <UNK> , as some , an give reminiscent of on scary if music fascinating care <UNK> <UNK> chinese <UNK>\n",
"Epoch: 28 Cost: 0.303136646748 Sample: <S> the <UNK> whimsical <UNK> film almost as it does n't to <UNK> , the managed for the <UNK> otherwise to\n",
"Epoch: 29 Cost: 0.302960512313 Sample: <S> a hits <UNK> the more as ` political <UNK> 's dreams , serving matter of <UNK> ... but <UNK> with\n",
"Epoch: 30 Cost: 0.301779523943 Sample: <S> <UNK> : family ambitious <UNK> <UNK> before a string of company is so more of something , which documentary is\n",
"Epoch: 31 Cost: 0.301689463131 Sample: <S> it 's know you <UNK> lives <UNK> directed and <UNK> creates can their <UNK> about <UNK> , the murder .\n"
]
}
],
"source": [ | |
"model = LanguageModel(len(word_indices), 21)\n", | |
"model.train(training_set)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Now we can draw as many samples as we like." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": { | |
"collapsed": false | |
}, | |
"outputs": [], | |
"source": [ | |
"model.sample()" | |
] | |
}, | |
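{
"cell_type": "markdown",
"metadata": {},
"source": [
"A side note on the retry loop in `sample()`: `np.random.multinomial` raises a `ValueError` when floating-point rounding makes the probabilities sum to slightly more than one, which is why `brittle_sampler()` occasionally fails and is retried. If you would rather avoid retries, one option (a sketch, not part of the assignment) is to compute the softmax in float64, renormalize explicitly, and sample with `np.random.choice`:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sample_from_logits(logits):\n",
"    # Softmax with the max subtracted for numerical stability, computed in\n",
"    # float64, then renormalized so the probabilities sum to exactly one.\n",
"    logits = np.asarray(logits, dtype=np.float64)\n",
"    exp_logits = np.exp(logits - np.max(logits))\n",
"    distribution = exp_logits / exp_logits.sum()\n",
"    distribution /= distribution.sum()  # guard against rounding drift\n",
"    return np.random.choice(len(distribution), p=distribution)\n",
"```\n",
"\n",
"Inside `brittle_sampler()`, the multinomial line would then become `sampled_index = sample_from_logits(logits)`."
]
},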
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: Questions (40%)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question 1:** Looking at the samples that your model produced towards the end of training, point out three properties of (written) English that it seems to have learned."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question 2:** If we could make the model as big as we wanted, train as long as we wanted, and adjust or remove dropout at will, could we ever get the model to reach a cost value of 0.0? In a single sentence, say why."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question 3:** Give an example of a situation where the LSTM language model's ability to propagate information across many steps (when trained for long enough, at least) would cause it to reach a better cost value than a model like a simple RNN without that ability. (Answer in one sentence or so.)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question 4:** Would the model be any worse if we were to just delete unknown words instead of using an `<UNK>` token? (Answer in one sentence or so.)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 0
} |