@jboner
jboner / latency.txt
Last active October 26, 2025 02:11
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                         0.5 ns
Branch mispredict                          5   ns
L2 cache reference                         7   ns                      14x L1 cache
Mutex lock/unlock                         25   ns
Main memory reference                    100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy           3,000   ns        3 us
Send 1K bytes over 1 Gbps network     10,000   ns       10 us
Read 4K randomly from SSD*           150,000   ns      150 us          ~1GB/sec SSD
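
One way to internalize these magnitudes is to rescale them so an L1 cache reference becomes one second. A minimal Python sketch (the numbers are copied from the table above; the scaling is just illustrative arithmetic):

# Rescale the latency table so an L1 cache reference (0.5 ns) reads as 1 second.
latencies_ns = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 1K bytes over 1 Gbps network": 10_000,
    "Read 4K randomly from SSD": 150_000,
}
scale = 1.0 / 0.5  # seconds shown per nanosecond of real latency
for name, ns in latencies_ns.items():
    print(f"{name:<35} {ns * scale:>10,.0f} s")

On that scale a main-memory reference takes over three minutes and a random 4K SSD read takes roughly three and a half days, which is the point of the table.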
@1wErt3r
1wErt3r / SMBDIS.ASM
Created November 9, 2012 22:27
A Comprehensive Super Mario Bros. Disassembly
;SMBDIS.ASM - A COMPREHENSIVE SUPER MARIO BROS. DISASSEMBLY
;by doppelganger ([email protected])
;This file is provided for your own use as-is. It will require the character rom data
;and an iNES file header to get it to work.
;There are so many people I have to thank for this, that taking all the credit for
;myself would be an unforgivable act of arrogance. Without their help this would
;probably not be possible. So I thank all the peeps in the nesdev scene whose insight into
;the 6502 and the NES helped me learn how it works (you guys know who you are, there's no
@tylerneylon
tylerneylon / learn.lua
Last active September 27, 2025 10:19
Learn Lua quickly with this short yet comprehensive and friendly script. It's written as both an introduction and a quick reference. It's also a valid Lua script so you can verify that the code does what it says, and learn more by modifying and running this script in your Lua interpreter.
-- Two dashes start a one-line comment.
--[[
Adding two ['s and ]'s makes it a
multi-line comment.
--]]
----------------------------------------------------
-- 1. Variables and flow control.
----------------------------------------------------
@myusuf3
myusuf3 / delete_git_submodule.md
Created November 3, 2014 17:36
How to effectively delete a git submodule.

To remove a submodule you need to (a scripted version of these steps follows the list):

  • Delete the relevant section from the .gitmodules file.
  • Stage the .gitmodules changes: git add .gitmodules
  • Delete the relevant section from .git/config.
  • Run git rm --cached path_to_submodule (no trailing slash).
  • Run rm -rf .git/modules/path_to_submodule (no trailing slash).
  • Commit: git commit -m "Removed submodule <name>"
  • Delete the now-untracked submodule files: rm -rf path_to_submodule
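
These steps are mechanical enough to script. Below is a minimal Python sketch, assuming a hypothetical submodule at libs/foo (the path and commit message are placeholders, not part of the original instructions); the manual edits to .gitmodules and .git/config are left out:

# Sketch: automate the command-line steps above via subprocess.
# Assumes git is on PATH and the current directory is the repo root.
import shutil
import subprocess

path = "libs/foo"  # hypothetical submodule path

subprocess.run(["git", "add", ".gitmodules"], check=True)    # stage the .gitmodules edit
subprocess.run(["git", "rm", "--cached", path], check=True)  # remove the submodule from the index
shutil.rmtree(f".git/modules/{path}")                        # drop the cached module clone
subprocess.run(["git", "commit", "-m", f"Removed submodule {path}"], check=True)
shutil.rmtree(path)                                          # delete the now-untracked files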
@karpathy
karpathy / gist:587454dc0146a6ae21fc
Last active May 14, 2025 00:08
An efficient, batched LSTM.
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:

  @staticmethod
  def init(input_size, hidden_size, fancy_forget_bias_init = 3):
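
The full gist implements the batched forward and backward passes; as a rough, non-authoritative sketch of what a single step computes, following the gist's convention of stacking the bias, input, and hidden weights into one matrix:

import numpy as np

def lstm_step(x, h_prev, c_prev, WLSTM):
  # x: (batch, input_size), h_prev/c_prev: (batch, hidden_size)
  # WLSTM: (1 + input_size + hidden_size, 4 * hidden_size), bias row first
  b, d = h_prev.shape
  hin = np.hstack([np.ones((b, 1)), x, h_prev])  # prepend 1s for the bias row
  gates = hin.dot(WLSTM)                         # all four gates in one matmul
  i = 1.0 / (1.0 + np.exp(-gates[:, 0*d:1*d]))   # input gate
  f = 1.0 / (1.0 + np.exp(-gates[:, 1*d:2*d]))   # forget gate
  o = 1.0 / (1.0 + np.exp(-gates[:, 2*d:3*d]))   # output gate
  g = np.tanh(gates[:, 3*d:4*d])                 # candidate cell state
  c = f * c_prev + i * g                         # new cell state
  h = o * np.tanh(c)                             # new hidden state
  return h, c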
@nylki
nylki / char-rnn recipes.md
Last active November 18, 2024 13:19
char-rnn cooking recipes

do androids dream of cooking?

The following recipes are sampled from a trained neural net. You can find the repo to train your own neural net here: https://github.com/karpathy/char-rnn. Thanks to Andrej Karpathy for the great code! It's really easy to set up.

The recipes I used for training the char-rnn are from a recipe collection called ffts.com. And here is the actual zipped data (uncompressed ~35 MB) I used for training. The ZIP is also archived at archive.org in case the original links become invalid in the future.

@karpathy
karpathy / min-char-rnn.py
Last active October 23, 2025 16:55
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
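
The gist next builds lookup tables between characters and integer indices, and feeds the network one-hot vectors; a short illustrative sketch of that encoding (continuing from the variables above):

# map each character to a unique index and back
char_to_ix = { ch: i for i, ch in enumerate(chars) }
ix_to_char = { i: ch for i, ch in enumerate(chars) }

# the RNN consumes one-hot column vectors, e.g. for the first character:
x = np.zeros((vocab_size, 1))
x[char_to_ix[data[0]]] = 1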
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
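
gamma drives the return computation: each reward is propagated backwards through time with exponential decay. The full gist defines a discounting helper along these lines (the reset at nonzero rewards is Pong-specific, since a point being scored ends a game):

def discount_rewards(r):
  """ take 1D float array of rewards and compute discounted reward """
  discounted_r = np.zeros_like(r)
  running_add = 0
  for t in reversed(range(len(r))):
    if r[t] != 0: running_add = 0  # reset the sum at a game boundary (Pong-specific)
    running_add = running_add * gamma + r[t]  # uses the gamma hyperparameter above
    discounted_r[t] = running_add
  return discounted_r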
@wojteklu
wojteklu / clean_code.md
Last active October 24, 2025 14:05
Summary of 'Clean code' by Robert C. Martin

Code is clean if it can be understood easily – by everyone on the team. Clean code can be read and enhanced by a developer other than its original author. With understandability comes readability, changeability, extensibility and maintainability.


General rules

  1. Follow standard conventions.
  2. Keep it simple, stupid. Simpler is always better. Reduce complexity as much as possible.
  3. Boy scout rule. Leave the campground cleaner than you found it.
  4. Always find root cause. Always look for the root cause of a problem.

Design rules

@leopd
leopd / almost-attention.py
Last active April 21, 2023 22:35
Explanatory (non-vectorized) code for how attention works
# This code doesn't work, and isn't intended to.
# The goal of this code is to explain how attention mechanisms work, in code.
# It is deliberately not vectorized to make it clearer.
def attention(self, X_in:List[Tensor]):
    # For every token, transform the previous layer's output
    for i in range(self.sequence_length):
        query[i] = self.Q * X_in[i]
        key[i] = self.K * X_in[i]
        value[i] = self.V * X_in[i]
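
For contrast with the deliberately non-vectorized loop above, here is a minimal runnable numpy sketch of single-head, unbatched scaled dot-product attention; the random weight matrices are stand-ins, not part of the original gist:

import numpy as np

seq_len, d_model = 4, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, d_model))          # one row per token
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project every token at once
scores = Q @ K.T / np.sqrt(d_model)              # pairwise attention logits
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
out = weights @ V                                # each output is a weighted sum of values
print(out.shape)                                 # (4, 8)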