As configured in my dotfiles.
start new:
tmux
start new with session name:
tmux new -s myname
    from nltk.probability import ELEProbDist, FreqDist
    from nltk import NaiveBayesClassifier
    from collections import defaultdict

    train_samples = {
        'I hate you and you are a bad person': 'neg',
        'I love you and you are a good person': 'pos',
        'I fail at everything and I want to kill people': 'neg',
        'I win at everything and I want to love people': 'pos',
        'sad are things are heppening. fml': 'neg',
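A minimal sketch of how samples like these could feed NLTK's Naive Bayes classifier. The small stand-in dict and the bag-of-words featurizer below are illustrative assumptions, not the original script's code:

    from nltk import NaiveBayesClassifier

    # Stand-in for the (truncated) train_samples dict above.
    train_samples = {
        'I hate you and you are a bad person': 'neg',
        'I love you and you are a good person': 'pos',
    }

    def get_features(text):
        # Illustrative featurizer: mark each lowercase token as present.
        return {word: True for word in text.lower().split()}

    labeled = [(get_features(text), label) for text, label in train_samples.items()]
    classifier = NaiveBayesClassifier.train(labeled)
    print(classifier.classify(get_features('you are a good person')))  # expected: 'pos'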
    /* VT100 terminal reset (<ESC>c) */
    console.log('\033c');

    /* number comparisons */
    > '2' == 2
    true
    > '2' === 2
    false
## VGG19 model for Keras
This is the Keras model of the 19-layer network used by the VGG team in the ILSVRC-2014 competition.
It has been obtained by directly converting the Caffe model provided by the authors.
Details about the network architecture can be found in the following arXiv paper:
Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
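For a quick sanity check, here is a minimal usage sketch. It relies on the stock keras.applications implementation of the same 19-layer architecture with its bundled ImageNet weights (an assumption about your setup, not the converted Caffe weights described above):

    import numpy as np
    from keras.applications.vgg19 import VGG19, preprocess_input, decode_predictions

    # Build the 19-layer network with ImageNet weights (downloaded on first use).
    model = VGG19(weights='imagenet')

    # Stand-in input: one random 224x224 RGB "image"; replace with a real picture.
    x = np.random.uniform(0, 255, size=(1, 224, 224, 3))
    preds = model.predict(preprocess_input(x))
    print(decode_predictions(preds, top=3))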
The main concept is to assign and separate the responsibilities of each layer.
Resources:
    ####### Shortcut Hotkeys #############
    # open terminal
    alt - return : open -n /Applications/Alacritty.app

    # restart yabai, spacebar, and skhd
    alt + shift - r : \
        launchctl kickstart -k "gui/${UID}/homebrew.mxcl.yabai"; \
        skhd --reload
A pattern for building personal knowledge bases using LLMs.
This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.
Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
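To make that baseline concrete, here is a deliberately tiny sketch of the plain-RAG loop. Toy word-overlap scoring stands in for a real embedding index, and the function names are illustrative rather than taken from any particular library:

    def chunk(docs, size=200):
        # Split each document into fixed-size word chunks.
        for doc in docs:
            words = doc.split()
            for i in range(0, len(words), size):
                yield " ".join(words[i:i + size])

    def retrieve(question, chunks, k=5):
        # Toy relevance score: word overlap with the question
        # (a real system would query an embedding index here).
        q = set(question.lower().split())
        ranked = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
        return ranked[:k]

    def answer(question, docs):
        # Retrieval starts from scratch on every call; nothing learned from
        # earlier questions is carried forward.
        context = "\n\n".join(retrieve(question, list(chunk(docs))))
        return f"Context:\n{context}\n\nQuestion: {question}"  # prompt sent to the LLM

The point of the sketch is the shape of answer(): every question re-runs chunking and retrieval over the raw files, and no synthesized understanding persists between calls. That is the gap this pattern is meant to close.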