Victoria X Lin (todpole3)

πŸ„β€β™€οΈ
Focusing
View GitHub Profile
@yoavg
yoavg / LLMs.md
Last active February 6, 2025 02:39

Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you have heard of chatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on my thoughts about this (and similar) models, and on where we stand with respect to language understanding.

Intro

Around 2014-2017, during the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs?", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We

@spitis
spitis / bnlstm.py
Created February 2, 2017 03:05
Batch normalized LSTM Cell for Tensorflow
"""adapted from https://github.com/OlavHN/bnlstm to store separate population statistics per state"""
import tensorflow as tf, numpy as np
RNNCell = tf.nn.rnn_cell.RNNCell
class BNLSTMCell(RNNCell):
'''Batch normalized LSTM as described in arxiv.org/abs/1603.09025'''
def __init__(self, num_units, is_training_tensor, max_bn_steps, initial_scale=0.1, activation=tf.tanh, decay=0.95):
"""
* max bn steps is the maximum number of steps for which to store separate population stats
"""
@nikitakit
nikitakit / tf_beam_decoder.py
Last active January 6, 2024 08:48
Tensorflow Beam Search
"""
Beam decoder for tensorflow
Sample usage:
```
from tf_beam_decoder import beam_decoder
decoded_sparse, decoded_logprobs = beam_decoder(
    cell=cell,
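The preview cuts off mid-call. As a rough, framework-free illustration of what a beam decoder does (not the gist's TensorFlow API), here is a plain-Python sketch in which step_fn is a hypothetical stand-in for the wrapped RNN cell, returning per-token log-probabilities and a new state.

import heapq

def beam_search(step_fn, start_token, start_state, eos_token, beam_size=4, max_len=20):
    # Keep the beam_size highest-scoring partial sequences at every step.
    beams = [(0.0, [start_token], start_state)]   # (cumulative log-prob, tokens, state)
    for _ in range(max_len):
        candidates = []
        for score, tokens, state in beams:
            if tokens[-1] == eos_token:           # finished hypotheses carry over unchanged
                candidates.append((score, tokens, state))
                continue
            log_probs, new_state = step_fn(tokens[-1], state)
            for tok, lp in enumerate(log_probs):
                candidates.append((score + lp, tokens + [tok], new_state))
        beams = heapq.nlargest(beam_size, candidates, key=lambda b: b[0])
    best_score, best_tokens, _ = max(beams, key=lambda b: b[0])
    return best_tokens, best_score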
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
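To give a feel for what these hyperparameters feed into, here is a short sketch of the reward-discounting step at the heart of the policy-gradient update; it follows the general REINFORCE recipe rather than reproducing the full gist.

import numpy as np

def discount_rewards(rewards, gamma=0.99):
    # Discounted returns; in Pong a nonzero reward marks a point being scored,
    # so the running sum is reset at those boundaries.
    discounted = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:
            running = 0.0
        running = running * gamma + rewards[t]
        discounted[t] = running
    return discounted

# The returns are then normalized and used to scale the gradients of the
# log-probabilities of the actions that were actually taken.
r = np.array([0, 0, 0, 1, 0, 0, -1], dtype=np.float64)
adv = discount_rewards(r)
adv = (adv - adv.mean()) / (adv.std() + 1e-8)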
@tushortz
tushortz / UNICODE to ASCII python replace
Created April 6, 2016 22:14
Function to replace some annoying characters
def unicodetoascii(text):
    TEXT = (text.
            replace('\\xe2\\x80\\x99', "'").
            replace('\\xc3\\xa9', 'e').
            replace('\\xe2\\x80\\x90', '-').
            replace('\\xe2\\x80\\x91', '-').
            replace('\\xe2\\x80\\x92', '-').
            replace('\\xe2\\x80\\x93', '-').
            replace('\\xe2\\x80\\x94', '-').
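The chain of replace calls above is cut off by the preview. Assuming the intent is simply to map escaped UTF-8 byte sequences to ASCII stand-ins, a table-driven sketch of the same idea (unicode_to_ascii and REPLACEMENTS are illustrative names, not from the gist) would look like:

REPLACEMENTS = {
    '\\xe2\\x80\\x99': "'",   # right single quotation mark
    '\\xc3\\xa9': 'e',        # e with acute accent
    '\\xe2\\x80\\x90': '-',   # hyphen
    '\\xe2\\x80\\x93': '-',   # en dash
    '\\xe2\\x80\\x94': '-',   # em dash
}

def unicode_to_ascii(text):
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    return text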
@nbremer
nbremer / .block
Last active February 6, 2025 14:31
D3.js - Radar Chart or Spider Chart - Adjusted from radar-chart-d3
height: 650
license: mit
@mbostock
mbostock / .block
Last active October 9, 2024 15:14
Collapsible Tree
license: gpl-3.0
redirect: https://observablehq.com/@d3/d3-collapsible-tree
@jboner
jboner / latency.txt
Last active April 19, 2025 21:29
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
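As a quick way to internalize the gaps, here is a small, purely illustrative sketch that rescales the figures above so an L1 cache reference takes one second; the numbers are copied straight from the table.

ns = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 1K bytes over 1 Gbps network": 10_000,
    "Read 4K randomly from SSD": 150_000,
}
scale = 1 / 0.5   # seconds per nanosecond, chosen so an L1 reference = 1 s
for name, t in ns.items():
    print(f"{name:35s} {t * scale:>12,.0f} s")   # e.g. main memory reference -> 200 s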