-- Gist by @bartvm (last active May 14, 2016)
t = require 'torch'
grad = require 'autograd'
function loop(p, y, idxs)
  -- Only works if h is a differentiable value as well
  local x = p.x
  local h = p.h
  for i = 1, x:size(1) do
    h[idxs[i]] = x[i]
  end
  return t.mean(t.pow(y - h, 2))
end
local N = tonumber(arg[1])  -- tensor size, taken from the first command-line argument
x_val = t.linspace(0, 1, N)
y_val = t.linspace(0, 1, N)
idxs_val = t.range(N, 1, -1)  -- indices that reverse the order
h_val = t.zeros(N)
dloop = grad(loop)  -- Crashes with {optimize = true}
d, loss = dloop({x = x_val, h = h_val}, y_val, idxs_val)
-- For some reason the gradient of x is empty,
-- but the gradient of h isn't: it is simply the gradient of x in reverse.
-- However, its last element seems to be 0, which looks wrong.
print(d.h, loss)
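-- A hand-derived cross-check (my own addition; plain torch, no autograd):
-- with h[idxs[i]] = x[i] and loss = mean((y - h)^2), the chain rule gives
-- dloss/dx[i] = 2 * (x[i] - y[idxs[i]]) / N. The sketch below computes this
-- expected gradient for comparison with the autograd output above;
-- "expected_grad" is a name introduced here, not part of the original gist.
local N = x_val:size(1)
local expected_grad = t.zeros(N)
for i = 1, N do
  expected_grad[i] = 2 * (x_val[i] - y_val[idxs_val[i]]) / N
end
-- With x = y = linspace(0, 1, N) and reversed indices, the last element
-- works out to 2 / N, i.e. it should not be 0.
print(expected_grad)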