Modules: show_array and make_mosaic do not exist in the shared link.
make_mosaic is under mosaic.py:
from utils.mosaic import make_mosaic
Not sure about show_array, however.
You can see the code for show_array.py here - kylemcdonald/python-utils@9528bfc
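A minimal sketch of pulling both in, assuming kylemcdonald/python-utils is cloned locally into a folder named utils, and that show_array.py defines a function of the same name:
# assumes: git clone https://github.com/kylemcdonald/python-utils utils
from utils.mosaic import make_mosaic        # mosaic.py provides make_mosaic
from utils.show_array import show_array     # assumed to live in show_array.py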
Hello,
Thanks for providing us with this code.
I am, however, encountering one error and I don't know how to fix it.
The loss_disc.backward() call breaks and I get this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 16, 3, 3]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
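For reference, this class of error can be reproduced outside the gist. A hypothetical minimal example (not the gist's code), where an optimizer step modifies a parameter in place before a second backward pass that still needs its saved value:
import torch

w = torch.randn(3, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

y = (w * w).sum()              # mul saves w for the backward pass
y.backward(retain_graph=True)
opt.step()                     # updates w in place, bumping its version counter
y.backward()                   # RuntimeError: ... modified by an inplace operation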
I think it's an error in the code. Interestingly, if you run it in an older version of PyTorch it seems to run fine, but it looks like the autoencoder L2 loss and the discriminator L2 loss are swapped: ae_l2 should take "disc", while disc_l2 should take "disc_mix". If they are flipped around, the error seems to go away.
In such a pipeline:
import torch.optim as optim

G = Model1()   # e.g. the autoencoder
D = Model2()   # e.g. the discriminator
optim1 = optim.Adam(G.parameters())   # the models must exist before their optimizers
optim2 = optim.Adam(D.parameters())

recons, z = G(input)
loss1 = loss_func1(recons)
diff = D(z)
loss2 = loss_func2(diff)
loss3 = loss_func3(diff)
loss_G = loss1 + loss2  # we don't want to update D's parameters here
loss_D = loss3
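If the first optimizer steps before the second backward pass (loss_G.backward(retain_graph=True); optim1.step(); loss_D.backward()), optim1.step() modifies G's parameters in place while loss_D.backward() still needs their saved values, which is exactly the version-counter error quoted above. Two orderings that avoid it: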
Solution #1
optim1.zero_grad()
loss_G.backward(retain_graph=True)   # retain the graph so loss_D can backprop through it too
optim2.zero_grad()                   # clear the D grads that loss_G just accumulated
loss_D.backward()
optim1.step()                        # both steps happen only after both backward passes
optim2.step()
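This ordering works because both backward passes finish before either optimizer step runs, so no parameter has been modified in place by the time its saved value is needed for gradient computation.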
Solution #2
optim1.zero_grad()
loss_G.backward(retain_graph=True, inputs=list(G.parameters()))  # only G's grads are computed
optim1.step()
optim2.zero_grad()
loss_D.backward(inputs=list(D.parameters()))                     # only D's grads are computed
optim2.step()
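Here the inputs= argument of Tensor.backward (added in PyTorch 1.8, if I remember right) restricts gradient computation to the listed leaves. loss_D.backward(inputs=list(D.parameters())) then only touches the tensors saved inside D, so the in-place update of G's parameters by optim1.step() no longer trips the version-counter check, and loss_D no longer accumulates gradients into G.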