@sbarratt
sbarratt / torch_jacobian.py
Created May 9, 2019 19:40
Get the jacobian of a vector-valued function that takes batch inputs, in pytorch.
import torch

def get_jacobian(net, x, noutputs):
    # Tile the single input noutputs times, one row per output component.
    x = x.squeeze()
    n = x.size()[0]
    x = x.repeat(noutputs, 1)
    x.requires_grad_(True)
    y = net(x)
    # Backprop an identity matrix: row i of x.grad becomes d y_i / d x.
    y.backward(torch.eye(noutputs))
    return x.grad.data
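As a quick illustration of how the helper above might be called (the tiny network and the sizes here are made up for the example, not part of the gist):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 3), nn.Tanh())   # maps R^4 -> R^3
x = torch.randn(1, 4)                              # a single input
J = get_jacobian(net, x, noutputs=3)               # shape (3, 4): J[i, j] = dy_i / dx_j
print(J.shape)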
@johnhw
johnhw / umap_sparse.py
Last active May 11, 2025 07:18
1 million prime UMAP layout
### JHW 2018
import numpy as np
import umap
# This code is from the excellent module at:
# https://stackoverflow.com/questions/4643647/fast-prime-factorization-module
import random
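The preview above cuts off after the imports. As a rough sketch of the overall idea (not the gist's exact code), each integer can be represented as a sparse indicator vector over its distinct prime factors and the rows handed to UMAP; the value of N, the use of sympy for factoring, and the UMAP parameters below are all assumptions for illustration.

import numpy as np
import scipy.sparse as sp
from sympy import factorint
import umap

N = 10_000                        # small for illustration; the gist lays out 1,000,000 numbers
rows, cols, prime_index = [], [], {}
for n in range(2, N + 1):
    for p in factorint(n):        # distinct prime factors of n
        cols.append(prime_index.setdefault(p, len(prime_index)))
        rows.append(n - 2)
X = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(N - 1, len(prime_index)))
embedding = umap.UMAP(metric="cosine").fit_transform(X)   # (N - 1, 2) layout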
@InnovArul
InnovArul / tied_linear.py
Last active January 6, 2025 23:27
tied linear layer experiment
import torch, torch.nn as nn, torch.nn.functional as F
import numpy as np
import torch.optim as optim
# tied autoencoder using off the shelf nn modules
class TiedAutoEncoderOffTheShelf(nn.Module):
    def __init__(self, inp, out, weight):
        super().__init__()
        self.encoder = nn.Linear(inp, out, bias=False)
        self.decoder = nn.Linear(out, inp, bias=False)
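The preview stops before the shared weight argument is actually used. For reference, one common way to tie an encoder and decoder is to keep a single weight parameter and apply it transposed on the way back out; the class below is an illustrative sketch, not the continuation of the gist.

import torch, torch.nn as nn, torch.nn.functional as F

class TiedAutoEncoder(nn.Module):
    def __init__(self, inp, out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out, inp) * 0.01)   # one shared weight

    def forward(self, x):
        h = F.relu(F.linear(x, self.weight))       # encoder: inp -> out
        return F.linear(h, self.weight.t())        # decoder reuses the same weight, transposed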
@Tushar-N
Tushar-N / hook_activations.py
Created August 3, 2018 00:06
Pytorch code to save activations for specific layers over an entire dataset
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as tmodels
from functools import partial
import collections
# dummy data: 10 batches of images with batch size 16
dataset = [torch.rand(16,3,224,224).cuda() for _ in range(10)]
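The preview ends at the dummy data. Below is a minimal sketch of the forward-hook pattern the description refers to; the helper name, the choice of resnet18, and the decision to hook every ReLU are assumptions for illustration.

activations = collections.defaultdict(list)

def save_activation(name, module, inp, out):
    activations[name].append(out.detach().cpu())

net = tmodels.resnet18().cuda().eval()
for name, module in net.named_modules():
    if isinstance(module, nn.ReLU):                         # pick whichever layers you care about
        module.register_forward_hook(partial(save_activation, name))

with torch.no_grad():
    for batch in dataset:
        net(batch)

# one concatenated tensor per hooked layer, covering the whole dataset
activations = {name: torch.cat(outs) for name, outs in activations.items()}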
@doctorpangloss
doctorpangloss / repetition_algorithm.ipynb
Last active November 23, 2023 19:13
Supermemo 2 Algorithm, Unobscured (Python 3)
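The notebook itself does not render here. For orientation, the update rule the title refers to is the published SM-2 schedule; the sketch below restates that standard algorithm and is not the gist's own code.

def sm2(quality, repetitions, interval, easiness):
    """quality: 0-5 recall grade; returns updated (repetitions, interval, easiness)."""
    if quality < 3:                     # failed recall: restart the series, E-Factor unchanged
        return 0, 1, easiness
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, easiness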
OS: Ubuntu 18.04
Others: Opencv, NCCL, CUDA 9.2, CUDNN
NOTE: For Ubuntu 17.04/18.04, there is an alternative way to install Caffe directly via apt-get install caffe-cpu and caffe-cuda.
I am installing from source so that I can work with other Caffe algorithms that demand the existence of a CAFFE_ROOT directory.
Modify Makefile.config file
--------------------
@gaearon
gaearon / modern_js.md
Last active January 11, 2026 02:45
Modern JavaScript in React Documentation

If you haven’t worked with JavaScript in the last few years, these three points should give you enough knowledge to feel comfortable reading the React documentation:

  • We define variables with let and const statements. For the purposes of the React documentation, you can consider them equivalent to var.
  • We use the class keyword to define JavaScript classes. There are two things worth remembering about them. Firstly, unlike with objects, you don't need to put commas between class method definitions. Secondly, unlike many other languages with classes, in JavaScript the value of this in a method [depends on how it is called](https://developer.mozilla.org/en-US/docs/Web/Jav
@ppope
ppope / preprocess_twitter.py
Last active May 13, 2021 14:32 — forked from tokestermw/preprocess-twitter.py
FORK: Python version of Ruby script to preprocess tweets for use in GloVe featurization http://nlp.stanford.edu/projects/glove/.
"""
preprocess-twitter.py
python preprocess-twitter.py "Some random text with #hashtags, @mentions and http://t.co/kdjfkdjf (links). :)"
Script for preprocessing tweets by Romain Paulus
with small modifications by Jeffrey Pennington
with translation to Python by Motoki Wu
Translation of Ruby script to create features for GloVe vectors for Twitter data.
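The preview shows only the module docstring. The GloVe Twitter convention replaces URLs, user mentions, hashtags, numbers, and emoticons with special tokens; the regexes and token names below are illustrative simplifications of that idea, not the script's actual rules.

import re

def tokenize(text):
    text = re.sub(r"https?://\S+", " <url> ", text)              # links
    text = re.sub(r"@\w+", " <user> ", text)                     # @mentions
    text = re.sub(r"#(\w+)", r" <hashtag> \1 ", text)            # #hashtags
    text = re.sub(r"[-+]?[.\d]*[\d]+[:,.\d]*", " <number> ", text)
    text = re.sub(r"[8:=;]['`\-]?[)dD]+", " <smile> ", text)     # basic smileys
    return text.lower().split()

print(tokenize("Some random text with #hashtags, @mentions and http://t.co/kdjfkdjf (links). :)"))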
@goerz
goerz / Elements of Statistical Learning.md
Last active November 6, 2025 11:56
PDF bookmarks for "Hastie, Tibshirani, Friedman - The Elements of Statistical Learning" (LaTeX)

This gist contains out.tex, a TeX file that adds a PDF outline ("bookmarks") to the freely available PDF file of the book

The Elements of Statistical Learning (2nd ed), by Trevor Hastie, Robert Tibshirani, and Jerome Friedman

https://web.stanford.edu/~hastie/ElemStatLearn/

The bookmarks make it possible to navigate the contents of the book while reading it on a screen.

Usage

@xmfbit
xmfbit / pytorch_mnist.py
Last active March 4, 2023 19:45
an example of pytorch on mnist dataset
import os
import torch
import torch.nn as nn
from torch.autograd import Variable
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torch.nn.functional as F
import torch.optim as optim
## load mnist dataset
use_cuda = torch.cuda.is_available()
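The preview ends at the CUDA check. Here is a minimal sketch of how the MNIST loading typically continues with the modules already imported; the root path, batch size, and normalization constants are assumptions, not necessarily the gist's values.

root = './data'
trans = transforms.Compose([transforms.ToTensor(),
                            transforms.Normalize((0.1307,), (0.3081,))])
train_set = dset.MNIST(root=root, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=root, train=False, transform=trans, download=True)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)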