import numpy as np
import torch
import torch.nn as nn
from tqdm import tqdm

sim_mat_dims = (len(dl.dataset), len(dl.dataset))
print("Dimensions of similarity matrix is", sim_mat_dims)
print("Making empty matrix to store similarities ......")
feat_mat = np.empty(sim_mat_dims, dtype=np.float32)
loss_fn = nn.CrossEntropyLoss(reduction='mean').to(self.device)
for idx, data in tqdm(enumerate(dl)):
    loss_val = loss_fn(net(data[0].to(self.device)), data[1].to(self.device))
    grad_list = torch.autograd.grad(loss_val, inputs=[p for p in net.parameters() if p.requires_grad])
    feats_outer = [t.flatten() for t in grad_list]
    feats_outer = torch.cat(feats_outer)
    for idx, data in tqdm(enumerate(dl)):
        loss_val = loss_fn(net(data[0].to(self.device)), data[1].to(self.device))
        grad_list = torch.autograd.grad(loss_val, inputs=[p for p in net.parameters() if p.requires_grad])
        feats_inner = [t.flatten() for t in grad_list]
        feats_inner = torch.cat(feats_inner)
        feat_mat[idx, idx] = torch.norm(feats_outer - feats_inner).cpu().detach().numpy()
The first thing to note about that code is that it has a bug: the inner loop reuses the same variable names (idx and data) as the outer loop, so the inner bindings shadow the outer ones. In particular, feat_mat[idx, idx] = ... uses the inner index for both the row and the column, so it only ever writes the diagonal of the matrix. An updated version is below, but in general you should always use distinct variable names in nested loops:
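Here is one way to write the fix. The per_batch_grad helper is just for readability and is not from the original snippet; net, dl, and self.device are assumed to be the same objects as above. Note that indexing feat_mat by the loop counter only matches the (len(dl.dataset), len(dl.dataset)) allocation if the DataLoader yields one sample per batch (batch_size=1); otherwise, size the matrix by the number of batches instead.

import numpy as np
import torch
import torch.nn as nn
from tqdm import tqdm

def per_batch_grad(net, loss_fn, data, device):
    # Flattened gradient of the batch loss w.r.t. all trainable parameters.
    loss_val = loss_fn(net(data[0].to(device)), data[1].to(device))
    grad_list = torch.autograd.grad(loss_val, inputs=[p for p in net.parameters() if p.requires_grad])
    return torch.cat([g.flatten() for g in grad_list])

sim_mat_dims = (len(dl.dataset), len(dl.dataset))
print("Dimensions of similarity matrix is", sim_mat_dims)
print("Making empty matrix to store similarities ......")
feat_mat = np.empty(sim_mat_dims, dtype=np.float32)
loss_fn = nn.CrossEntropyLoss(reduction='mean').to(self.device)

for idx_outer, data_outer in tqdm(enumerate(dl)):
    feats_outer = per_batch_grad(net, loss_fn, data_outer, self.device)
    for idx_inner, data_inner in enumerate(dl):  # no tqdm here, to avoid nested progress bars
        feats_inner = per_batch_grad(net, loss_fn, data_inner, self.device)
        # Distinct indices fill the whole matrix, not just the diagonal.
        feat_mat[idx_outer, idx_inner] = torch.norm(feats_outer - feats_inner).item()

Since each entry is the norm of a gradient difference, the matrix is symmetric with a zero diagonal, so you could also compute only the upper triangle and mirror it, or precompute all the gradient vectors once, to roughly halve the cost.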