# ML: Linear Regression with Gradient Descent
'''
Framework for Linear Regression with Gradient Descent
This micro-project turns the gradient descent algorithm from its
mathematical form into a programmatic example that can be applied in
common Machine Learning practice to predict an input's corresponding
output once the model has been trained.
25.06.2017 | Lucas Barbosa | TensorFlow | Open Source Software (C)
'''
X = 0  # index of the input feature in each data point
Y = 1  # index of the target value in each data point

# 1) The cost function (mean squared error)
def eval_loss(m, b, data_points):
    total_error = 0
    for i in range(len(data_points)):
        x = data_points[i][X]
        y = data_points[i][Y]
        total_error += (y - (m * x + b)) ** 2
    return total_error / float(len(data_points))
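A quick hand-check of the cost function (a sketch: `mse` below is a compact restatement of `eval_loss`, and the data points are made up for illustration):

```python
# Hypothetical hand-check of the mean-squared-error cost.
def mse(m, b, points):
    return sum((y - (m * x + b)) ** 2 for x, y in points) / len(points)

points = [(1, 2), (2, 4), (3, 6)]     # lie exactly on y = 2x
print(mse(2, 0, points))              # perfect fit -> 0.0
print(mse(1, 0, points))              # residuals 1, 2, 3 -> (1 + 4 + 9) / 3
```

A perfect fit drives the cost to zero; any residual contributes its square, so large misses dominate the average.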
'''
Using the above cost function, our goal is to find the minimal error
value, and that's where our pretty, shiny and lustrous line of best fit
shall be... but first, time for some multivariable calculus. The partial
derivatives of the cost E(m, b) give the descent directions:

    dE/dm = -(2/N) * sum(x_i * (y_i - (m * x_i + b)))
    dE/db = -(2/N) * sum(y_i - (m * x_i + b))
'''
# 2) Calculating & descending the gradient from the partial derivatives
def eval_nabla_gradient(m_curr, b_curr, training_data, learning_rate):
    b_nabla_descent = m_nabla_descent = 0
    N = float(len(training_data))
    for i in range(len(training_data)):
        x = training_data[i][X]
        y = training_data[i][Y]
        m_nabla_descent += -(2 / N) * (x * (y - ((m_curr * x) + b_curr)))
        b_nabla_descent += -(2 / N) * (y - ((m_curr * x) + b_curr))
    new_m = m_curr - (m_nabla_descent * learning_rate)
    new_b = b_curr - (b_nabla_descent * learning_rate)
    return [new_m, new_b]
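A single update can be verified by hand. The `step` helper below is a hypothetical, compact restatement of `eval_nabla_gradient`, run on two made-up points where the arithmetic is easy to follow:

```python
# One gradient step by hand (a sketch of the same update rule).
def step(m, b, points, lr):
    N = len(points)
    gm = sum(-(2 / N) * x * (y - (m * x + b)) for x, y in points)
    gb = sum(-(2 / N) * (y - (m * x + b)) for x, y in points)
    return m - lr * gm, b - lr * gb

# From (m, b) = (0, 0) on points (0, 1) and (1, 3):
# residuals are 1 and 3, so gm = -3 and gb = -4,
# giving m = 0.3 and b = 0.4 after one step at lr = 0.1.
m, b = step(0.0, 0.0, [(0, 1), (1, 3)], 0.1)
print(m, b)
```

Both gradients are negative because the line sits below the data, so the step pushes slope and intercept upward.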
# 3) Applying gradient descent over a fixed number of iterations
def gradient_descent_runner(m_start, b_start, training_data, learning_rate, time_steps):
    m = m_start
    b = b_start
    for i in range(time_steps):
        m, b = eval_nabla_gradient(m, b, training_data, learning_rate)
    return [m, b]
# 4) A facade function wrapping the runner behind a single entry point
def gradient_descent_optimizer(m_start, b_start, training_data, learning_rate, time_steps):
    m, b = gradient_descent_runner(m_start, b_start, training_data, learning_rate, time_steps)
    return [m, b]
# --------------------- Training the model
training_data = [[1, 0], [2, -1], [3, -2], [4, -3], [5, -4], [6, -5],
                 [7, -6], [8, -7], [9, -8], [10, -9], [11, -10], [12, -11],
                 [13, -12], [14, -13], [15, -14], [16, -15]]
init_m = -3
init_b = 3
print("Current model:")
print("[y = %sx + %s]" % (init_m, init_b))
print("Loss on the current model = %s" % eval_loss(init_m, init_b, training_data))
init_m, init_b = gradient_descent_optimizer(init_m, init_b, training_data, 0.01, 2000)
print("Optimized model:")
print("[y = %sx + %s]" % (round(init_m, 2), round(init_b, 2)))
print("Total loss on the new model = %s" % eval_loss(init_m, init_b, training_data))
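As a convergence check (a sketch, re-creating the dataset and update rule locally so the snippet runs standalone): the training data above lie exactly on y = -x + 1, so 2000 steps at the same learning rate should land close to m = -1, b = 1.

```python
# Convergence sketch: repeated descent steps on data that lie exactly
# on y = -x + 1, starting from the same (m, b) = (-3, 3) as the gist.
def grad_step(m, b, points, lr):
    N = len(points)
    gm = sum(-(2 / N) * x * (y - (m * x + b)) for x, y in points)
    gb = sum(-(2 / N) * (y - (m * x + b)) for x, y in points)
    return m - lr * gm, b - lr * gb

data = [(x, 1 - x) for x in range(1, 17)]  # same 16 points as above
m, b = -3.0, 3.0
for _ in range(2000):
    m, b = grad_step(m, b, data, 0.01)
print(round(m, 2), round(b, 2))  # approaches the exact fit m = -1, b = 1
```

Note that 0.01 is close to the largest stable learning rate for this dataset; a noticeably larger value would make the iteration diverge.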