Created October 25, 2011 12:56
Machine Learning MultiVar Gradient Descent
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
%   theta = GRADIENTDESCENTMULTI(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y);      % number of training examples
n = size(X, 2);     % number of features (+ 1 for the intercept column)
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter
    %               vector theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCostMulti) and gradient here.
    %

    % Original loop-based version:
    % thetaNew = theta;
    % for i = 1:n
    %     thetaNew(i) = theta(i) - (alpha / m * sum(((X * theta) - y) .* X(:, i)));
    % end
    % theta = thetaNew;

    % Fully vectorized version: delta is the n x 1 vector whose j-th entry is
    % the sum over all examples of (prediction - y) times feature j.
    delta = ((theta' * X' - y') * X)';
    theta = theta - alpha / m * delta;

    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCostMulti(X, y, theta);

end

end
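
A minimal usage sketch follows. The example data, learning rate, and iteration count are illustrative assumptions, and computeCostMulti.m (the squared-error cost from the same exercise) is assumed to be on the path:

% Illustrative example only: the data and hyperparameters below are assumptions,
% not part of the original exercise.
X_raw = [2104 3; 1600 3; 2400 3; 1416 2; 3000 4];   % two features per example
y     = [400; 330; 369; 232; 540];
m     = size(X_raw, 1);

% Scale the features so gradient descent converges quickly, then prepend the
% intercept column of ones (hence "number of features (+ 1)" in the function).
mu    = mean(X_raw);
sigma = std(X_raw);
X     = [ones(m, 1), (X_raw - repmat(mu, m, 1)) ./ repmat(sigma, m, 1)];

theta0    = zeros(size(X, 2), 1);
alpha     = 0.1;     % assumed learning rate
num_iters = 400;     % assumed iteration count

[theta, J_history] = gradientDescentMulti(X, y, theta0, alpha, num_iters);
plot(1:num_iters, J_history);   % the cost should decrease at every iteration

Plotting J_history is a quick sanity check on alpha: if the cost ever increases, the learning rate is too large and should be reduced.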
This has my original version, which used a loop, and the fully vectorized version.
Man, check out all those transpositions! I imagine it's pretty fast, since transposing doesn't actually need to copy anything; the interpreter can just swap indices when it loops over the data.
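
As a side note, the chain of transposes can be collapsed into a single one, since ((theta' * X' - y') * X)' is algebraically identical to X' * (X * theta - y). A sketch of the equivalent update, using the same variables as the function above:

% Equivalent gradient step with a single transpose:
%   ((theta' * X' - y') * X)'  ==  X' * (X * theta - y)
delta = X' * (X * theta - y);        % n x 1 sum of per-example gradients
theta = theta - alpha / m * delta;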