import torch

def jacobian(y, x, create_graph=False):
    # Build the Jacobian row by row: back-propagate a one-hot grad_output
    # for each element of y to recover one row of dy/dx at a time.
    jac = []
    flat_y = y.reshape(-1)
    grad_y = torch.zeros_like(flat_y)
    for i in range(len(flat_y)):
        grad_y[i] = 1.
        grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
        jac.append(grad_x.reshape(x.shape))
        grad_y[i] = 0.
    return torch.stack(jac).reshape(y.shape + x.shape)

def hessian(y, x):
    # The Hessian is the Jacobian of the gradient; create_graph=True keeps
    # the first backward pass differentiable so it can be differentiated again.
    return jacobian(jacobian(y, x, create_graph=True), x)

def f(x):
    return x * x * torch.arange(4, dtype=torch.float)

x = torch.ones(4, requires_grad=True)
print(jacobian(f(x), x))
print(hessian(f(x), x))
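As a minimal sketch of the question raised in the comments below (the Hessian of a loss with respect to a parameter tensor), the hessian() helper above can be applied directly to a scalar loss. Everything here beyond the helper itself is made up for illustration: the linear model, the weight tensor w, and the random x_data and y_data. Note the helper differentiates with respect to a single tensor, so for a full network one would repeat this per parameter tensor or flatten the parameters into one tensor first.

import torch

# Hypothetical data and a single weight tensor (not from the gist).
torch.manual_seed(0)
x_data = torch.randn(5, 3)                  # 5 samples, 3 features
y_data = torch.randn(5, 2)                  # 5 targets, 2 outputs
w = torch.randn(3, 2, requires_grad=True)   # parameter we differentiate w.r.t.

# Scalar MSE loss of a linear model; hessian() reused from the gist above.
loss = ((x_data @ w - y_data) ** 2).mean()
H = hessian(loss, w)                        # shape (3, 2, 3, 2)
print(H.shape)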
Hi,
I want to find the Hessian matrix of the loss function of a pre-trained neural network with respect to the network's parameters. How can I use this method? Can someone please share an example? Thanks.
Hi,
I am looking for the same thing. Could you figure out how we can do it?
I think this has now been added to recent versions of torch's autograd module. Maybe look at the examples here.
Right. I checked it. When I use this method, I get multiple errors. I am looking for an example or similar code to see how the implementation is done.
Now the function torch.autograd.functional.jacobian can do the same thing, I think.
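For reference, a small sketch of that built-in API. Unlike the gist above, torch.autograd.functional takes the callable itself rather than a precomputed output tensor; the loss_fn below is a made-up stand-in for a real loss.

import torch
from torch.autograd.functional import jacobian, hessian

# Same f as in the gist above, passed as a callable this time.
def f(x):
    return x * x * torch.arange(4, dtype=torch.float)

x = torch.ones(4)
print(jacobian(f, x))        # (4, 4) Jacobian of f at x

# hessian() expects a function returning a scalar; loss_fn is hypothetical.
def loss_fn(w):
    return (w ** 2).sum()

w = torch.randn(3)
print(hessian(loss_fn, w))   # (3, 3) matrix, equal to 2 * I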