
@neubig
neubig / dynet-tagger.py
Last active May 21, 2018 06:01
A small sequence labeler in DyNet
"""
DyNet implementation of a sequence labeler (POS tagger).
This is a translation of this tagger in PyTorch: https://gist.github.com/hal3/8c170c4400576eb8d0a8bd94ab231232
Basic architecture:
- take words
- run though bidirectional GRU
- predict labels one word at a time (left to right), using a recurrent neural network "decoder"
The decoder updates hidden state based on:
- most recent word
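Since the gist's code is not reproduced here, the following is a minimal sketch of the architecture described above against DyNet's Python API; the layer sizes, the decoder input (just the current word's encoding), and all names are assumptions for illustration, not the gist's own code.

import dynet as dy

# Assumed sizes -- placeholders, not taken from the gist.
VOCAB_SIZE, NUM_TAGS, EMB_DIM, HID_DIM = 10000, 45, 64, 128

model = dy.ParameterCollection()
embeds = model.add_lookup_parameters((VOCAB_SIZE, EMB_DIM))
fwd_gru = dy.GRUBuilder(1, EMB_DIM, HID_DIM, model)      # encoder, left to right
bwd_gru = dy.GRUBuilder(1, EMB_DIM, HID_DIM, model)      # encoder, right to left
dec_gru = dy.GRUBuilder(1, 2 * HID_DIM, HID_DIM, model)  # label decoder
W_out = model.add_parameters((NUM_TAGS, HID_DIM))
b_out = model.add_parameters((NUM_TAGS,))

def sent_loss(word_ids, tag_ids):
    """Return the summed tagging loss for one (words, tags) pair."""
    dy.renew_cg()
    wembs = [embeds[w] for w in word_ids]
    fwd = fwd_gru.initial_state().transduce(wembs)
    bwd = list(reversed(bwd_gru.initial_state().transduce(list(reversed(wembs)))))
    encs = [dy.concatenate([f, b]) for f, b in zip(fwd, bwd)]
    s = dec_gru.initial_state()
    losses = []
    for enc, tag in zip(encs, tag_ids):
        s = s.add_input(enc)  # decoder state updated from the current word's encoding
        scores = W_out * s.output() + b_out
        losses.append(dy.pickneglogsoftmax(scores, tag))
    return dy.esum(losses)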
@nickkraakman
nickkraakman / ffmpeg-cheatsheet.md
Last active February 9, 2026 10:27
FFmpeg cheat sheet for 360 video

FFmpeg Cheat Sheet for 360º video

Brought to you by Headjack

 
FFmpeg is one of the most powerful tools for video transcoding and manipulation, but it's fairly complex and confusing to use. That's why I decided to create this cheat sheet, which shows some of the most frequently used commands.

 
Let's start with some basics:

  • ffmpeg calls the FFmpeg application from the command line; this can also be the full path to the FFmpeg binary or .exe file (a quick example follows below)
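As a quick illustration of the pattern every command in the cheat sheet follows (input file, options, output file), here is a small Python wrapper around a basic H.264 re-encode; the file names are placeholders and the flag values are just common defaults, not commands quoted from the cheat sheet itself.

import subprocess

# Re-encode a (hypothetical) input.mp4 to H.264, copying the audio stream
# unchanged. Equivalent shell command:
#   ffmpeg -i input.mp4 -c:v libx264 -crf 23 -c:a copy output.mp4
cmd = [
    "ffmpeg",
    "-i", "input.mp4",   # input file
    "-c:v", "libx264",   # video codec
    "-crf", "23",        # constant rate factor: lower = better quality
    "-c:a", "copy",      # pass audio through untouched
    "output.mp4",        # output file
]
subprocess.run(cmd, check=True)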
| PostgreSQL Data Types | AWS DMS Data Types | Redshift Data Types |
|---|---|---|
| INTEGER | INT4 | INT4 |
| SMALLINT | INT2 | INT2 |
| BIGINT | INT8 | INT8 |
| NUMERIC (p,s) | If precision is 39 or greater, then use STRING | If the scale is => 0 and =< 37 then: NUMERIC (p,s); if the scale is => 38 and =< 127 then: VARCHAR (Length) |
| DECIMAL (p,s) | If precision is 39 or greater, then use STRING | If the scale is => 0 and =< 37 then: NUMERIC (p,s); if the scale is => 38 and =< 127 then: VARCHAR (Length) |
| REAL | REAL4 | FLOAT4 |
| DOUBLE | REAL8 | FLOAT8 |
| SMALLSERIAL | INT2 | INT2 |
| SERIAL | INT4 | INT4 |
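For scripted sanity checks of a migration, the one-to-one rows of this table can be captured in a simple lookup. The sketch below only illustrates that idea: it hard-codes the mappings listed above and routes the conditional NUMERIC/DECIMAL cases through a helper; the function and its return values are assumptions, not part of AWS DMS itself.

# Sketch: expected Redshift type for a given PostgreSQL type, per the table above.
SIMPLE_TYPE_MAP = {
    "INTEGER": "INT4",
    "SMALLINT": "INT2",
    "BIGINT": "INT8",
    "REAL": "FLOAT4",
    "DOUBLE": "FLOAT8",
    "SMALLSERIAL": "INT2",
    "SERIAL": "INT4",
}

def redshift_type(pg_type, precision=None, scale=None):
    """Return the expected Redshift target type for a PostgreSQL type name."""
    if pg_type in ("NUMERIC", "DECIMAL"):
        if precision is not None and precision >= 39:
            return "VARCHAR"  # DMS falls back to a string type
        if scale is not None and 0 <= scale <= 37:
            return "NUMERIC({},{})".format(precision, scale)
        return "VARCHAR"      # scale 38-127 maps to VARCHAR(Length)
    return SIMPLE_TYPE_MAP[pg_type]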
import pandas as pd
import numpy as np

# Set up a small example dataframe with four random-valued columns.
df = pd.DataFrame(np.random.randint(20, 100, size=(50, 4)), columns=['A', 'B', 'C', 'D'])

# These two columns become a multi-column (hierarchical) index.
df['year_idx'] = np.random.randint(2000, 2004, 50)
df['id_idx'] = np.random.randint(10000, 19999, 50)

# Drop rows whose (year, id) pair repeats so the index keys are unique,
# then promote the two columns to a MultiIndex (assumed step, implied by the comment above).
df.drop_duplicates(subset=['year_idx', 'id_idx'], inplace=True)
df.set_index(['year_idx', 'id_idx'], inplace=True)
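A short follow-up, assuming the set_index call above, to show what the hierarchical index enables; the specific year value is illustrative.

# Aggregate column 'A' by year using the first index level.
mean_a_by_year = df.groupby(level='year_idx')['A'].mean()
print(mean_a_by_year)

# Cross-section: all rows for a single year (assuming 2001 was drawn above).
rows_2001 = df.xs(2001, level='year_idx')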
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
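The preview cuts off before the training loop; the helper below sketches the reward-discounting step that the gamma hyperparameter above feeds into. It follows the standard recipe for episodic policy gradients rather than quoting the gist, and the reset at a nonzero reward is Pong-specific (a nonzero reward marks the end of a game).

def discount_rewards(r):
  """Take a 1D float array of rewards and compute the discounted return."""
  discounted_r = np.zeros_like(r)
  running_add = 0
  for t in reversed(range(0, r.size)):
    if r[t] != 0: running_add = 0      # reset the sum at a game boundary (Pong-specific)
    running_add = running_add * gamma + r[t]
    discounted_r[t] = running_add
  return discounted_r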
@karpathy
karpathy / min-char-rnn.py
Last active February 19, 2026 10:10
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
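The preview stops after the data loading; the next step in a character-level model like this is to build the character/index lookups that the training loop relies on, roughly:

print('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = {ch: i for i, ch in enumerate(chars)}  # character -> integer id
ix_to_char = {i: ch for i, ch in enumerate(chars)}  # integer id -> character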
"""
Fast non-maximum suppression numpy port
from:
http://quantombone.blogspot.com/2011/08/blazing-fast-nmsm-from-exemplar-svm.html
"""
import numpy as np
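Only the header of the port is shown. For context, below is a vectorized sketch of the fast-NMS idea: keep the highest-scoring box, discard any remaining box whose overlap with it exceeds a threshold, and repeat. The argument names, score ordering, and overlap definition are assumptions, not the gist's actual code.

def nms(boxes, scores, overlap_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with every remaining box.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        overlap = (w * h) / areas[order[1:]]
        # Keep only boxes whose overlap with the chosen box stays below the threshold.
        order = order[1:][overlap <= overlap_thresh]
    return keep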
@bradmontgomery
bradmontgomery / count_words.py
Last active August 15, 2022 19:11
playing with python's `collections.Counter`
"""
Use a Counter to find the most common words in "The Wonderful Wizard of Oz" by
L. Frank Baum.
Available in (mostly) plain text at:
https://archive.org/stream/wonderfulwizardo00baumiala/wonderfulwizardo00baumiala_djvu.txt
Note: This code also counts the words in the header, so it's not a *realistic*
application, but more of a demonstration of Python's Counter.
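The preview ends at the docstring; the whole technique fits in a few lines, sketched here against a local plain-text copy of the book (the filename and the crude regex tokenization are assumptions for illustration).

import re
from collections import Counter

with open('wizard_of_oz.txt') as f:                    # hypothetical local copy of the text
    words = re.findall(r"[a-z']+", f.read().lower())   # naive lowercase tokenization

counts = Counter(words)
print(counts.most_common(10))                          # ten most frequent words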
@agramfort
agramfort / lowess.py
Last active April 7, 2025 08:22
LOWESS : Locally weighted regression
"""
This module implements the Lowess function for nonparametric regression.
Functions:
lowess Fit a smooth nonparametric regression curve to a scatterplot.
For more information, see
William S. Cleveland: "Robust locally weighted regression and smoothing
scatterplots", Journal of the American Statistical Association, December 1979,