Thomas Wolf (thomwolf)
@thomwolf
thomwolf / rectangle_struct.pyx
Created June 6, 2018 09:19
A C structure for a rectangle
cdef struct Rectangle:
    float w
    float h
@thomwolf
thomwolf / fast_loop.pyx
Last active January 10, 2021 15:59
A Cython loop on an array of C structs
from cymem.cymem cimport Pool
from random import random

cdef struct Rectangle:
    float w
    float h

cdef int check_rectangles(Rectangle* rectangles, int n_rectangles, float threshold):
    cdef int n_out = 0
    # C arrays contain no size information => we need to give it explicitly
    for rectangle in rectangles[:n_rectangles]:
        if rectangle.w * rectangle.h > threshold:
            n_out += 1
    return n_out
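Since check_rectangles is a cdef function, it cannot be called from Python directly; the compiled module has to expose a def-level entry point. A minimal sketch of compiling and driving it, assuming fast_loop.pyx sits in the working directory and defines such an entry point (the name main is an assumption, not from the gist):

import pyximport
pyximport.install()   # compile .pyx files transparently at import time

import fast_loop
fast_loop.main()      # assumed def-level wrapper; cdef functions stay C-only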
@thomwolf
thomwolf / slow_loop.py
Last active June 11, 2018 10:06
A Python loop on a list of Python objects
from random import random

class Rectangle:
    def __init__(self, w, h):
        self.w = w
        self.h = h

    def area(self):
        return self.w * self.h

def check_rectangles(rectangles, threshold):
    n_out = 0
    for rectangle in rectangles:
        if rectangle.area() > threshold:
            n_out += 1
    return n_out
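A minimal driver for the pure-Python version, useful for timing it against the Cython loop above (the list size and threshold are arbitrary choices, not from the gist):

from random import random

rectangles = [Rectangle(random(), random()) for _ in range(10_000_000)]
print(check_rectangles(rectangles, threshold=0.25))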
@thomwolf
thomwolf / rectangle_class.py
Last active June 6, 2018 08:29
A simple Python Rectangle class
class Rectangle:
    def __init__(self, w, h):
        self.w = w
        self.h = h

    def area(self):
        return self.w * self.h
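A quick sanity check of the class (values chosen arbitrarily):

r = Rectangle(3.0, 2.0)
print(r.area())   # 6.0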
@thomwolf
thomwolf / profile.py
Created June 4, 2018 08:31
Profiling a Python module
import cProfile
import pstats
import my_slow_module

# Run the call under the profiler and dump the raw statistics to the 'restats' file
cProfile.run('my_slow_module.run()', 'restats')

# Load the dump, sort by cumulative time and print the 30 most expensive entries
p = pstats.Stats('restats')
p.sort_stats('cumulative').print_stats(30)
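The same report is available without editing any code: running python -m cProfile -s cumulative my_script.py profiles a whole script and prints the sorted table directly. The stats object can also be re-sorted after the fact, for instance by time spent inside each function itself:

p.sort_stats('tottime').print_stats(10)   # top 10 by per-function time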
@thomwolf
thomwolf / meta_train.py
Last active July 5, 2019 15:43
Simple gist on how to train a meta-learner in PyTorch
def train(forward_model, backward_model, optimizer, meta_optimizer, train_data, meta_epochs):
    """ Train a meta-learner
    Inputs:
      forward_model, backward_model: two identical PyTorch modules (can have shared Tensors)
      optimizer: a neural net to be used as the optimizer (an instance of the MetaLearner class)
      meta_optimizer: an optimizer for the optimizer neural net, e.g. Adam
      train_data: an iterator over an epoch of training data
      meta_epochs: number of meta-training steps
    To be added: initialization, early stopping, checkpointing, more control over everything
    """
    for meta_epoch in range(meta_epochs):      # meta-training loop: train the optimizer
        optimizer.zero_grad()
        losses = []
        for inputs, labels in train_data:      # inner loop: train the model with the learned optimizer
            forward_model.zero_grad()
            loss = loss_func(forward_model(inputs), labels)  # loss_func: task loss, defined elsewhere
            losses.append(loss)
            loss.backward(retain_graph=True)   # populate .grad; keep graphs alive for the meta-backward
            optimizer(forward_model, backward_model)  # one learned update step (see MetaLearner below)
        sum(losses).backward()                 # meta-backward pass through the unrolled updates
        meta_optimizer.step()                  # finally, train the optimizer itself
@thomwolf
thomwolf / MetaLearner.py
Last active June 29, 2023 10:06
A simple bare MetaLearner class in PyTorch
class MetaLearner(nn.Module):
    """ Bare Meta-learner class
    Should be added: initialization, hidden states, more control over everything
    """
    def __init__(self, model):
        super(MetaLearner, self).__init__()
        self.weights = Parameter(torch.Tensor(1, 2))

    def forward(self, forward_model, backward_model):
        """ Forward optimizer with a simple linear neural net """
        for (mod_f, name_f, p_f), (mod_b, name_b, p_b) in zip(get_params(forward_model),
                                                              get_params(backward_model)):
            inputs = torch.stack([p_f.grad.data, p_f.data], dim=-1)  # (gradient, value) pairs
            dW = F.linear(inputs, self.weights).squeeze()  # one shared linear update rule
            mod_b._parameters[name_b] = p_b + dW           # keep the graph for the meta-backward pass
            p_f.data = mod_b._parameters[name_b].data      # sync the forward model in place
@thomwolf
thomwolf / get_params.py
Last active April 3, 2018 08:48
A PyTorch iterator over module parameters that allows updating module parameters (and not only the data tensor).
def get_params(module, memo=None, pointers=None):
    """ Returns an iterator over PyTorch module parameters that allows updating parameters
    (and not only the data).
    ! Side effect: updates shared parameters to point to the first yielded instance
    (i.e. you can update shared parameters and keep them shared)
    Yields:
        (Module, string, Parameter): Tuple containing the parameter's module, name and pointer
    """
    if memo is None:
        memo, pointers = set(), {}
    for name, param in module._parameters.items():
        if param is not None and param not in memo:
            memo.add(param)
            pointers[param] = (module, name)
            yield module, name, param
        elif param is not None:                    # already seen: a shared parameter
            prev_module, prev_name = pointers[param]
            module._parameters[name] = prev_module._parameters[prev_name]  # re-point to first yield
    for child in module.children():
        yield from get_params(child, memo, pointers)
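As a usage sketch, this is how every parameter can be replaced at the Parameter level rather than through .data (the rescaling is an arbitrary illustration; net is any nn.Module):

for module, name, param in get_params(net):
    module._parameters[name] = param * 0.9   # swap in a new, graph-connected tensor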
@thomwolf
thomwolf / neuralcoref.py
Last active March 22, 2018 22:04
The NeuralCoref PyTorch model
class Model(nn.Module):
    def __init__(self, vocab_size, embed_dim, H1, H2, H3, pairs_in, single_in, drop=0.5):
        super(Model, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # was `embedding_dim`, an undefined name
        self.drop = nn.Dropout(drop)
        self.pairs = nn.Sequential(nn.Linear(pairs_in, H1), nn.ReLU(), nn.Dropout(drop),
                                   nn.Linear(H1, H2), nn.ReLU(), nn.Dropout(drop),
                                   nn.Linear(H2, H3), nn.ReLU(), nn.Dropout(drop),
                                   nn.Linear(H3, 1),
                                   nn.Linear(1, 1))
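An instantiation sketch (every size below is an arbitrary placeholder, not a value from the NeuralCoref project):

model = Model(vocab_size=50_000, embed_dim=300,
              H1=1000, H2=500, H3=500,
              pairs_in=28, single_in=14)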
@thomwolf
thomwolf / bayes_by_backprop.py
Created November 30, 2017 13:27 — forked from vvanirudh/bayes_by_backprop.py
Bayes by Backprop in PyTorch (introduced in the paper "Weight Uncertainty in Neural Networks", Blundell et al., 2015)
# Drawn from https://gist.github.com/rocknrollnerd/c5af642cf217971d93f499e8f70fcb72 (in Theano)
# This is implemented in PyTorch
# Author : Anirudh Vemula
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
from sklearn.datasets import fetch_mldata  # removed in modern scikit-learn; fetch_openml is its replacement
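For orientation, here is the heart of Bayes by Backprop sketched in current PyTorch (shapes and initial values are arbitrary; this is not the gist's exact code): each weight matrix gets variational parameters mu and rho, and weights are sampled with the reparameterization trick so gradients can flow back to mu and rho.

import torch

mu = torch.zeros(400, 784, requires_grad=True)           # variational mean
rho = torch.full((400, 784), -3.0, requires_grad=True)   # pre-softplus scale
eps = torch.randn(400, 784)                              # noise sample (no gradient)
sigma = torch.log1p(torch.exp(rho))                      # softplus keeps sigma positive
w = mu + sigma * eps                                     # differentiable weight sample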