@exelban
Created February 25, 2017 21:49
A simple logical function (logical AND) learned by a single linear unit in TensorFlow 1.x.
import numpy as np
import tensorflow as tf

# Placeholders for the two binary inputs and the single target output.
x = tf.placeholder(tf.float32, [None, 2], name="X")
y = tf.placeholder(tf.float32, [None, 1], name="Y")

# Trainable weight vector and bias for one linear unit.
# Note: dtype must be passed as a keyword; as a second positional
# argument it would be interpreted as `trainable`.
w = tf.Variable(tf.random_normal([2, 1], stddev=0.01), dtype=tf.float32)
b = tf.Variable(tf.random_normal([1]))

linear_model = tf.matmul(x, w) + b

# Sum-of-squares loss, minimized with plain gradient descent.
loss = tf.reduce_sum(tf.square(linear_model - y))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Truth table for logical AND.
x_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [[0], [0], [0], [1]]

for i in range(100):
    sess.run(optimizer, {x: x_train, y: y_train})

# Round the continuous outputs to the nearest integer to get 0/1 predictions.
pred = sess.run(linear_model, feed_dict={x: [[1, 1], [1, 0]]})
pred = np.rint(pred).astype(int).T
print(pred)
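As a sanity check, the least-squares problem that the gradient-descent loop approximates can be solved in closed form with plain NumPy (a sketch independent of TensorFlow; the coefficient values shown are derived from the normal equations, not from the gist):

```python
import numpy as np

# AND truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# Append a bias column so the fit covers w1*x1 + w2*x2 + b.
A = np.hstack([X, np.ones((4, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = np.rint(A @ coef).astype(int)
print(coef)  # approximately [0.5, 0.5, -0.25]
print(pred)  # [0 0 0 1] — rounding the linear fit recovers the AND table
```

The exact minimizer is w = [0.5, 0.5], b = -0.25, giving outputs of -0.25, 0.25, 0.25, and 0.75 on the four rows; rounding these to the nearest integer yields the AND truth table, which is why the trained model above can round to the correct answers even though a purely linear model never outputs exact 0s and 1s.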