@hzyjerry

NIPS Notes

Tutorials

Variational Inference

  • Simple intro by Blei, mostly going over the classic review paper by Jordan et al.
  • SVI (stochastic variational inference) is then introduced as a remedy that makes VI tractable on large datasets.
  • Review of black-box variational inference (assumption-free VI): http://www.jmlr.org/proceedings/papers/v33/ranganath14.pdf
  • Key idea: rewrite the gradient of the ELBO as an expectation. Computing the expectation in closed form requires an exponential-family assumption; pushing the gradient inside the expectation and estimating it with Monte Carlo samples removes that assumption. The samples give unbiased gradient estimates, so stochastic optimization converges under the Robbins-Monro conditions. However, the variance of this estimator is very large, and further tricks (Rao-Blackwellization, control variates) are needed in practice (see the sketch below).
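A minimal numpy sketch of the score-function estimator described above, for a mean-field Gaussian q(z; mu, log_sigma). The function name bbvi_grad and the user-supplied log_joint callback are illustrative assumptions, not the paper's code:

import numpy as np

# Score-function (REINFORCE) estimator of the ELBO gradient for a
# mean-field Gaussian q(z; mu, log_sigma). The model only enters through
# log_joint(z) = log p(x, z), which is what makes the method "black box".
def bbvi_grad(log_joint, mu, log_sigma, num_samples=100):
    sigma = np.exp(log_sigma)
    zs = mu + sigma * np.random.randn(num_samples, mu.size)
    # gradient of log q(z; mu, log_sigma) w.r.t. the variational parameters
    d_mu = (zs - mu) / sigma ** 2
    d_log_sigma = ((zs - mu) / sigma) ** 2 - 1.0
    log_q = -0.5 * np.sum(((zs - mu) / sigma) ** 2
                          + 2 * log_sigma + np.log(2 * np.pi), axis=1)
    # unbiased but high-variance: this is why the paper adds
    # Rao-Blackwellization and control variates on top
    w = np.array([log_joint(z) for z in zs]) - log_q
    return (d_mu * w[:, None]).mean(axis=0), (d_log_sigma * w[:, None]).mean(axis=0)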
@8enmann
8enmann / reinstall.sh
Last active November 14, 2024 07:23
Reinstall NVIDIA drivers without OpenGL on Ubuntu 16.04 (GTX 1080 Ti)
# Download installers
mkdir ~/Downloads/nvidia
cd ~/Downloads/nvidia
wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda_8.0.61_375.26_linux-run
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/384.59/NVIDIA-Linux-x86_64-384.59.run
sudo chmod +x NVIDIA-Linux-x86_64-384.59.run
sudo chmod +x cuda_8.0.61_375.26_linux-run
# note: ~ is not expanded after '=', so spell the path out with $HOME
./cuda_8.0.61_375.26_linux-run --extract="$HOME/Downloads/nvidia/"
# Uninstall old stuff
sudo apt-get --purge remove 'nvidia-*'  # quote the glob so the shell doesn't expand it
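# A plausible continuation, assuming the standard NVIDIA .run installer
# flags (the script is truncated here). --no-opengl-files skips the OpenGL
# libraries, which is the whole point of this setup: the driver won't
# clobber the existing GL stack used for display.
sudo service lightdm stop                    # stop X before installing the driver
sudo ./NVIDIA-Linux-x86_64-384.59.run --no-opengl-files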
@HarshTrivedi
HarshTrivedi / pad_packed_demo.py
Last active April 17, 2025 01:26 — forked from Tushar-N/pad_packed_demo.py
Minimal tutorial on packing (pack_padded_sequence) and unpacking (pad_packed_sequence) sequences in PyTorch.
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.autograd import Variable  # deprecated since PyTorch 0.4; plain tensors work directly
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
## We want to run LSTM on a batch of 3 character sequences ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (list of instances, where each instance is list of character indices)
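# The gist is truncated here; below is a minimal sketch of how those steps
# might continue (assumed, not the original code). It indexes the strings,
# pads them, packs the batch, runs the LSTM, and unpacks the output.
seqs = ['long_str', 'tiny', 'medium']
vocab = ['<pad>'] + sorted(set(''.join(seqs)))              # Step 1: index 0 = padding
vectorized = [[vocab.index(ch) for ch in s] for s in seqs]  # Step 2

embed = Embedding(len(vocab), 4)                            # embedding_dim=4, arbitrary
lstm = LSTM(input_size=4, hidden_size=5, batch_first=True)

# Step 3: pad everything to the longest sequence
lengths = LongTensor([len(v) for v in vectorized])
padded = torch.zeros(len(vectorized), lengths.max().item()).long()
for i, v in enumerate(vectorized):
    padded[i, :len(v)] = LongTensor(v)

# Step 4: pack (pack_padded_sequence expects decreasing lengths by default)
lengths, sort_idx = lengths.sort(descending=True)
packed = pack_padded_sequence(embed(padded[sort_idx]), lengths, batch_first=True)

# Step 5: run the LSTM on the packed batch, then unpack
packed_out, (h, c) = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape)  # torch.Size([3, 8, 5]) -> batch x max_len x hidden_size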
@phillipi
phillipi / biggan_slerp
Last active October 8, 2023 01:25
Slerp through the BigGAN latent space
# to be used in conjunction with the functions defined here:
# https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb
# party parrot transformation
noise_seed_A = 3 # right facing
noise_seed_B = 31 # left facing
num_interps = 14
truncation = 0.2
category = 14
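# The gist is truncated here; a minimal slerp sketch follows. It assumes the
# helpers defined in the linked notebook (truncated_z_sample, one_hot, sample,
# imgrid, imshow) and an open TF session `sess`; those names are assumptions.
import numpy as np

def slerp(a, b, t):
    # spherical linear interpolation between latents a and b, t in [0, 1]
    omega = np.arccos(np.clip(np.dot(a / np.linalg.norm(a),
                                     b / np.linalg.norm(b)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

z_A = truncated_z_sample(1, truncation, noise_seed_A)[0]
z_B = truncated_z_sample(1, truncation, noise_seed_B)[0]
zs = np.stack([slerp(z_A, z_B, t) for t in np.linspace(0, 1, num_interps)])
ys = one_hot([category] * num_interps)
imshow(imgrid(sample(sess, zs, ys, truncation=truncation), cols=num_interps))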
@unixpickle
unixpickle / maml.py
Created October 12, 2019 19:08
MAML in PyTorch
import torch
import torch.nn.functional as F
def maml_grad(model, inputs, outputs, lr, batch=1):
"""
Update a model's gradient using MAML.
The gradient will point in the direction that
improves the total loss across all inner-loop
mini-batches.
"""
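# The function body is truncated above. Below is a minimal sketch of one
# second-order MAML step under stated assumptions: it uses
# torch.func.functional_call (modern PyTorch), a single inner SGD step, and
# MSE loss; maml_step and the support/query naming are illustrative, not
# the gist's actual implementation.
from torch.func import functional_call

def maml_step(model, support, query, inner_lr):
    (xs, ys), (xq, yq) = support, query
    params = dict(model.named_parameters())

    # Inner loop: one SGD step on the support set. create_graph=True keeps
    # the graph so the outer gradient can flow through this update.
    inner_loss = F.mse_loss(functional_call(model, params, (xs,)), ys)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

    # Outer loop: evaluate the adapted weights on the query set and
    # accumulate the meta-gradient into the original parameters' .grad.
    outer_loss = F.mse_loss(functional_call(model, adapted, (xq,)), yq)
    outer_loss.backward()
    return outer_loss.item()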