Rick Winfrey (rewinfrey)

🏄‍♂️
fix f = let x = f x in x
@aparrish
aparrish / understanding-word-vectors.ipynb
Last active October 7, 2025 16:12
Understanding word vectors: A tutorial for "Reading and Writing Electronic Text," a class I teach at ITP. (Python 2.7) Code examples released under CC0 https://creativecommons.org/choose/zero/, other text released under CC BY 4.0 https://creativecommons.org/licenses/by/4.0/
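The notebook itself is not previewed here. As a rough illustration of the idea the tutorial covers (representing words as vectors and comparing them by similarity), the short sketch below computes cosine similarity over made-up 3-dimensional vectors. The words, numbers, and names in it are invented for illustration and are not taken from the notebook.

import math

# Toy word vectors (invented; real models use hundreds of dimensions).
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # high: related words
print(cosine_similarity(vectors["cat"], vectors["car"]))  # lower: unrelated words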
@porglezomp
porglezomp / Leftpad.idr
Last active August 13, 2024 22:22
Taking on Hillel's challenge to formally verify leftpad (https://twitter.com/Hillelogram/status/987432181889994759)
-- Note: There is a more complete explanation at https://github.com/hwayne/lets-prove-leftpad/tree/master/idris
import Data.Vect
-- `minus` is saturating subtraction, so this works like we want it to
eq_max : (n, k : Nat) -> maximum k n = plus (n `minus` k) k
eq_max n Z = rewrite minusZeroRight n in rewrite plusZeroRightNeutral n in Refl
eq_max Z (S _) = Refl
eq_max (S n) (S k) = rewrite sym $ plusSuccRightSucc (n `minus` k) k in rewrite eq_max n k in Refl
-- The type here says "the result is (maximum k n) elements long, and is the padding followed by the original"
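-- The gist preview cuts off here. Below is a minimal sketch of the
-- `leftPad` definition the comment above describes, written against the
-- `eq_max` lemma; argument names are assumptions and the body may differ
-- from the original gist.
leftPad : (x : a) -> (n : Nat) -> (xs : Vect k a) -> Vect (maximum k n) a
leftPad {k = k} x n xs = rewrite eq_max n k in replicate (n `minus` k) x ++ xs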
@adrienbrault
adrienbrault / llama2-mac-gpu.sh
Last active April 8, 2025 13:49
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
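# (The preview stops here; the remaining steps below are a plausible
# reconstruction based on the gist's description, not a verbatim copy.)
# Fetch the quantized weights -- assuming the file is hosted in the
# TheBloke/Llama-2-13B-chat-GGML repository on Hugging Face:
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
# Run an interactive prompt, offloading layers to the GPU via Metal
# (flag values here are typical llama.cpp settings, not taken from the gist):
./main -m "${MODEL}" -ngl 1 -c 2048 --temp 0.7 -n -1 \
  -p "[INST] Write a haiku about surfing [/INST]"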