@sjmielke
Created March 1, 2018 19:18
Nuclear norm + gradient in PyTorch
import numpy as np
import torch

x = torch.rand(10, 5).double()
x /= torch.norm(x)

def grad(x):
    # The gradient of the nuclear norm at a full-rank x is U V^T from its (thin) SVD.
    # Alternatively: https://math.stackexchange.com/a/934443
    # from scipy.linalg import sqrtm
    # return sqrtm(np.linalg.inv(x @ x.transpose())) @ x
    # Using SVD: https://math.stackexchange.com/a/701104 + https://math.stackexchange.com/a/1663012
    u, sig, v = torch.svd(x)
    return u @ v.t()

for i in range(10000):
    if i % 1000 == 0:
        print("Nuclear norm =", np.linalg.norm(x.numpy(), 'nuc'), "; k =", np.linalg.cond(x.numpy()))
    # Plain gradient step on the nuclear norm, then re-project onto the Frobenius unit sphere.
    x -= 0.0001 * grad(x)
    x /= torch.norm(x)
print(x)
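
As a sanity check, the closed-form gradient above can be compared against what autograd produces for torch.norm(x, p='nuc'), which is itself differentiable; for a full-rank matrix the two should agree up to numerical precision. A minimal sketch, assuming a PyTorch version that supports p='nuc':

import torch

x = torch.rand(10, 5, dtype=torch.double, requires_grad=True)
torch.norm(x, p='nuc').backward()         # autograd's gradient of the nuclear norm
u, sig, v = torch.svd(x.detach())
print(torch.allclose(x.grad, u @ v.t()))  # expected: True in the full-rank case
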
@smokbel

smokbel commented Jun 9, 2021

Is there a way to minimize the nuclear norm as part of a loss function in PyTorch?

@sjmielke

sjmielke commented Jun 11, 2021

I just added this gradient to the gradient that was computed from the other losses. The nicer thing would've been to fill in the forward and backward of a new function, like here: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html ... Hope that helps!
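
A minimal sketch of what such a custom function could look like, reusing the U V^T gradient from the gist; the class name NuclearNorm and the demo matrix are illustrative rather than taken from the thread:

import torch

class NuclearNorm(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        u, sig, v = torch.svd(x)
        ctx.save_for_backward(u, v)
        return sig.sum()                  # nuclear norm = sum of singular values

    @staticmethod
    def backward(ctx, grad_output):
        u, v = ctx.saved_tensors
        return grad_output * (u @ v.t())  # (sub)gradient U V^T, scaled by the upstream gradient

w = torch.rand(10, 5, dtype=torch.double, requires_grad=True)
loss = 0.01 * NuclearNorm.apply(w)        # add this term to whatever loss you already have
loss.backward()
print(w.grad)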
