Last active November 10, 2016 04:48
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## TensorFlow Lecture Notes\n",
    "\n",
    "TensorFlow is the way of the future! Performance is bad (worse than Theano's), but it's easy to learn and backed by Google (DeepMind is moving to it).\n",
    "\n",
    "Tensors = mathematical constructs that hold structures and operations (vectors, matrices, linear operations).\n",
    "Variables can be trainable or untrainable.\n",
    "Operations can take variables or the output of other operations.\n",
    "\n",
    "Sessions have a `run` function that evaluates tensors (??)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 0.04180872 -0.03604269 -1.65012217 -0.904724 -0.90283179 -0.18740118\n",
      " 0.48894274 -1.57636929 -1.18562317 3.39833093 3.52613878 1.36175334\n",
      " 1.35648179 1.56944466 -0.96633887 -0.80551445 -3.62629843 0.50556117\n",
      " -1.76435781 -0.4863975 0.75032651 2.67316961 2.72465897 1.47553396\n",
      " 0.02072901 1.97096896 2.52250433 2.68102646 4.129601 1.18468165\n",
      " 0.83145142 -1.31791615 0.66017771 -1.39094591 1.44196987 -0.37699676\n",
      " 0.14163536 1.46200466 -0.38177407 2.21282458 1.99078321 -1.45059443\n",
      " 1.42254686 3.47459459 1.68365407 0.19213837 2.62642503 -0.73140013\n",
      " -0.01890814 -1.04749441 1.10894978 2.88701344 1.0055474 3.71987367\n",
      " 2.92754197 -1.30169392 0.61442721 1.34157705 -0.62482512 0.59389627\n",
      " 0.50309736 -0.73623776 -0.05997586 0.39145499 1.87581491 2.08221865\n",
      " 0.29406863 1.07059884 -0.26477063 2.8146162 0.94365621 3.65058804\n",
      " 2.1416254 2.31146383 -0.69165576 3.95350075 3.68786955 0.77084637\n",
      " 1.80094278 0.28525567 0.24203813 -1.16820812 -2.74510241 1.42564893\n",
      " 2.76745057 -0.57214391 1.47041941 -3.3168993 0.51615584 -0.77670121\n",
      " 3.96465302 2.12841749 4.73956108 -0.26107335 2.83218575 -0.58476233\n",
      " 2.56953168 -1.66726947 -2.27470946 -1.1025703 ]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "8.7699776"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Just importing tensorflow into a Python Notebook took fifteen minutes. Good God.\n",
    "import tensorflow as tf\n",
    "\n",
    "# Yup, got it. Variables.\n",
    "x = tf.Variable(tf.constant(5.0))\n",
    "# Another variable. I know these!\n",
    "y = tf.Variable(tf.constant(3.0))\n",
    "# A 100-dimensional vector with...random numbers?\n",
    "z = tf.Variable(tf.random_normal([100], mean=1.0, stddev=2.0))\n",
    "\n",
    "# This doesn't equal 8.0 yet. Apparently this just defines an add operation.\n",
    "# Is this what functional programming feels like!?\n",
    "a = x + y\n",
    "b = a + z # Here's an operation\n",
    "\n",
    "# Uh oh. What's a moment! Does everyone else here know what a moment is?\n",
    "# Does Suren know what a moment is?\n",
    "# He definitely knows what a moment is.\n",
    "# Should I ask him?\n",
    "# No, no. I'll catch up.\n",
    "m, v = tf.nn.moments(b, [0]) # Get the mean and variance??\n",
    "sess = tf.Session()\n",
    "sess.run(tf.initialize_all_variables())\n",
    "print(sess.run(z)) # We're running this on our random normal distribution.\n",
    "# What's a random normal distribution?\n",
    "# Does everyone else here know what a random normal distribution is?\n",
    "# God, I'm out of my league here\n",
    "\n",
    "sess.run(a)\n",
    "sess.run(m)\n"
   ]
  },
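  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (mine, not from the lecture) of what `tf.nn.moments` returns -- assuming it's the mean and the population variance along the given axes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "vals = [1.0, 2.0, 3.0, 4.0]\n",
    "mean, var = tf.nn.moments(tf.constant(vals), [0])\n",
    "# Compare against numpy: np.mean gives 2.5, np.var gives 1.25\n",
    "print(sess.run([mean, var]))\n",
    "print(np.mean(vals), np.var(vals))"
   ]
  },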
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## This is confusing\n",
    "\n",
    "Optimizers have three main methods:\n",
    "* `compute_gradients`\n",
    "* `apply_gradients`\n",
    "* `minimize` (which does the first two in one call)\n",
    "\n",
    "Users define loss functions. What's a loss function! I've heard that in NLP before. I should definitely know what loss functions are!\n",
    "\n",
    "Something something placeholders. I think they're good (bad?)\n",
    "\n",
    "Time to blindly copy more code from the screen and try to implement a neural network!"
   ]
  },
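  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "My attempt at untangling those three methods -- a minimal sketch on a made-up one-variable loss, assuming `minimize` really is `compute_gradients` followed by `apply_gradients`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "w = tf.Variable(2.0)\n",
    "loss = tf.square(w - 1.0) # made-up loss, minimized at w = 1\n",
    "opt = tf.train.GradientDescentOptimizer(0.5)\n",
    "\n",
    "# The long way: compute the gradients, then apply them.\n",
    "grads_and_vars = opt.compute_gradients(loss)\n",
    "train_step = opt.apply_gradients(grads_and_vars)\n",
    "# The short way -- supposedly equivalent:\n",
    "train_step = opt.minimize(loss)"
   ]
  },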
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\n",
      "Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
      "Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\n",
      "Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
      "Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\n",
      "Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
      "Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\n",
      "Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# We're going to learn neural networks by training one on the MNIST data set.\n",
    "# I guess this is a common data set for ML training but God knows what it's for.\n",
    "# Something about images?\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "data = input_data.read_data_sets('MNIST_data', one_hot=True) # Girl, I'll show you one_hot ;)\n",
    "# Why can't everything be object oriented?"
   ]
  },
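  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Poking at what actually got downloaded -- my guess is 28x28 images flattened to 784 numbers, and `one_hot=True` meaning each label is a 10-vector with a single 1:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "print(data.train.images.shape) # (num_images, 784)? 28*28 pixels flattened, I think\n",
    "print(data.train.labels.shape) # (num_images, 10)?\n",
    "print(data.train.labels[0])    # a single 1.0 in the slot for the true digit"
   ]
  },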
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# PHASE 1: CONSTRUCTION\n",
    "# \"None means that the data can be of an arbitrary batch size. We can give it a single image or multiple.\"\n",
    "# ...said the guy.\n",
    "x = tf.placeholder(tf.float32, shape=[None, 784]) # Placeholders!\n",
    "y = tf.placeholder(tf.float32, shape=[None, 10]) # Shapes?\n",
    "\n",
    "# Here we go, we're making a layer! I know what that is from Bamman's class!\n",
    "# The girl next to me is whispering about how she is confused.\n",
    "# The guy in front of me has much less code copied from the board than I do.\n",
    "# I must be smarter than him.\n",
    "W1 = tf.Variable(tf.truncated_normal([784, 200], stddev=0.1)) # 200 is the number of output neurons\n",
    "b1 = tf.Variable(tf.truncated_normal([200], stddev=0.1))\n",
    "# Ugghh what is going on. I've hea\n",
    "h = tf.sigmoid(tf.matmul(x, W1) + b1)\n",
    "\n",
    "# I am very good at copying code from the screen. I'm destroying this guy in front of me at it.\n",
    "\n",
    "# I guess we're making another layer! Help, I'm drowning!\n",
    "W2 = tf.Variable(tf.truncated_normal([200, 10], stddev=0.1)) # 10 is the number of outputs (one per digit we wanna classify)\n",
    "# Why the hell did I even show up to this thing? Do I really want to be doing machine learning stuff?\n",
    "b2 = tf.Variable(tf.truncated_normal([10], stddev=0.1))\n",
    "# They say it's the future though. And I want to be part of the future! That's why I'm at the I School!\n",
    "# But how can I do that without knowing what matmul is!?\n",
    "# Do I need matmul?\n",
    "# He keeps mentioning it!\n",
    "y_predict = tf.nn.softmax(tf.matmul(h, W2) + b2)"
   ]
  },
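  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the record (worked out afterwards, not from the lecture): `matmul` is plain matrix multiplication, so `tf.matmul(x, W1)` turns each 784-wide row of `x` into a 200-wide row. A tiny check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "a_mat = tf.constant([[1.0, 2.0], [3.0, 4.0]]) # 2x2\n",
    "v_mat = tf.constant([[1.0], [1.0]])           # 2x1\n",
    "# Each output row is the dot product of a row of a_mat with v_mat:\n",
    "# [[1*1 + 2*1], [3*1 + 4*1]] = [[3.], [7.]]\n",
    "print(sess.run(tf.matmul(a_mat, v_mat)))"
   ]
  },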
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# This dude has written a ton of code on the screen and hasn't run it ONCE.\n",
    "# Does he have a fucking compiler in his brain?"
   ]
  },
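  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "My best guess at what `cross_entropy` measures, checked by hand with numpy: `-sum(true * log(predicted))` is small when the prediction puts its probability mass on the true class, and large when it doesn't."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "y_true = np.array([0.0, 1.0, 0.0])      # one-hot: the true class is #1\n",
    "confident = np.array([0.05, 0.9, 0.05]) # a prediction that agrees\n",
    "unsure = np.array([0.4, 0.3, 0.3])      # a prediction that hedges\n",
    "print(-np.sum(y_true * np.log(confident))) # ~0.105 (low loss)\n",
    "print(-np.sum(y_true * np.log(unsure)))    # ~1.204 (higher loss)"
   ]
  },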
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Oh look, the end value is 0.9436.\n",
      "Guess this was all worth it in the end :|\n"
     ]
    }
   ],
   "source": [
    "# There's no way anyone knows what cross_entropy is.\n",
    "# SUREN ASKED A QUESTION\n",
    "# SUREN KNOWS ENOUGH TO ASK A QUESTION\n",
    "# I'LL NEVER KNOW AS MUCH AS SUREN\n",
    "# IF I DON'T KNOW AS MUCH AS SUREN MY SKILLS WILL ATROPHY AND I'LL DDDIIIIIIEEEEE\n",
    "cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_predict), reduction_indices=[1]))\n",
    "\n",
    "# I know what backpropagating is!\n",
    "# But is that enough to keep me off of food stamps when the machines rise?\n",
    "backprop = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # Choose whatever optimizer you want, baby!\n",
    "# If copying the board still exists as a job after the AI revolution I am all set.\n",
    "correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_predict, 1))\n",
    "# Mmmhhmm. Looks right.\n",
    "accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n",
    "# What the hell have I gotten myself into.\n",
    "\n",
    "# EXECUTION PHASE, I GUESS\n",
    "\n",
    "# I should have gone to business school. They don't need to worry about TensorFlow,\n",
    "# or when to call initialize_all_variables.\n",
    "sess = tf.Session()\n",
    "# They just drink and socialize.\n",
    "# I can do both of those things! Particularly well, in fact!\n",
    "sess.run(tf.initialize_all_variables())\n",
    "train_steps = 2000\n",
    "batch_size = 50\n",
    "\n",
    "# Being an iOS developer at a TensorFlow workshop feels like a taxi driver watching an Uber ad.\n",
    "for i in range(train_steps):\n",
    "    # Look, it's the word \"data!\" Is it BIG DATA?\n",
    "    batch_x, batch_y = data.train.next_batch(batch_size)\n",
    "    # Fuck big data.\n",
    "    sess.run(backprop, feed_dict={x: batch_x, y: batch_y}) # Backpropagating! I know what that is!\n",
    "\n",
    "# Goddamnit. I feel like I should know ML stuff but this is just so goddamn intimidating.\n",
    "# Why did I come to this session? To eat Papa John's and feel inadequate about my skills?\n",
    "# And fuck, Suren is asking ANOTHER QUESTION.\n",
    "# How does he do it?\n",
    "# Ugh.\n",
    "# Fuck.\n",
    "end_val = sess.run(accuracy, feed_dict={x: data.test.images, y: data.test.labels})\n",
    "\n",
    "# Man, fuck this session, fuck Papa John's, fuck the guy who's too smart to be copying off the board, and fuck computers.\n",
    "print('Oh look, the end value is ' + str(end_val) + '.\\nGuess this was all worth it in the end :|')\n",
    "# Shit, and I still have to do 202"
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [conda env:tensorflow]",
   "language": "python",
   "name": "conda-env-tensorflow-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}