
@ikhlestov
Last active September 11, 2017 20:14
pytorch: simple layer with optimizer
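A two-layer sigmoid network trained on random data for five SGD steps: the weights are plain Variables with requires_grad=True, and the optimizer takes care of zeroing and applying their gradients.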
import torch
from torch.autograd import Variable
import torch.nn.functional as F

# random inputs and targets (no gradients needed for data)
x = Variable(torch.randn(10, 20), requires_grad=False)
y = Variable(torch.randn(10, 3), requires_grad=False)

# define some weights that the optimizer will update
w1 = Variable(torch.randn(20, 5), requires_grad=True)
w2 = Variable(torch.randn(5, 3), requires_grad=True)

learning_rate = 0.1
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD([w1, w2], lr=learning_rate)

for step in range(5):
    # forward pass: two sigmoid layers
    pred = F.sigmoid(x @ w1)
    pred = F.sigmoid(pred @ w2)
    loss = loss_fn(pred, y)
    # manually zero all previous gradients
    optimizer.zero_grad()
    # calculate new gradients
    loss.backward()
    # apply new gradients
    optimizer.step()
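For reference, here is a minimal sketch of the same loop on PyTorch 0.4+, where Variable has been merged into Tensor and F.sigmoid is deprecated in favor of torch.sigmoid; this modernization is an assumption, not part of the original gist, but the shapes, loss, and zero_grad/backward/step pattern mirror the code above.

# Same training loop on PyTorch >= 0.4 (assumed modernization, not the gist's code)
import torch

x = torch.randn(10, 20)                      # inputs, no grad needed
y = torch.randn(10, 3)                       # targets

w1 = torch.randn(20, 5, requires_grad=True)  # first-layer weights
w2 = torch.randn(5, 3, requires_grad=True)   # second-layer weights

loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD([w1, w2], lr=0.1)

for step in range(5):
    pred = torch.sigmoid(x @ w1)
    pred = torch.sigmoid(pred @ w2)
    loss = loss_fn(pred, y)

    optimizer.zero_grad()   # clear gradients left over from the previous step
    loss.backward()         # accumulate new gradients into w1.grad and w2.grad
    optimizer.step()        # in-place SGD update of w1 and w2

The update rule is the same either way: SGD subtracts lr * grad from each parameter in place, and because backward() accumulates gradients, skipping zero_grad() would mix gradients from earlier steps into the current update.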