Alexandre Girard Davila (alx)
check process merb_app_master
  with pidfile /home/deploy/legodata/slice/shared/pids/production-merb.main.pid
  start program = "/home/deploy/bin/monit_merb_mpc slice start_master /home/deploy/legodata -c2 -n4000"
  stop program = "/home/deploy/bin/monit_merb_mpc slice stop_master /home/deploy/legodata"
  # if totalmem is greater than 80.0 MB for 2 cycles then restart # eating up memory?
  group master.slice.legodata.com

# Worker configuration (one for each worker port required)
check process merb_app_4000
  with pidfile /home/deploy/legodata/slice/shared/pids/production-merb.4000.pid

Depends on:

  • Octopress git repo
  • tar
  • rsync
  • rake

Fill in the variables appropriately and copy this script to hooks/post-receive in your bare git repo (a sketch of the overall flow follows the variable list).

  • git_branch: git branch which holds the source files for the live site
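
A minimal sketch of the flow such a hook performs, written here as a Node script for consistency with the rest of this page; the real hook is presumably a shell script, all paths and names below are hypothetical placeholders, and the tar dependency listed above is omitted:

#!/usr/bin/env node
// Sketch only: hypothetical post-receive flow, not the gist's actual code.
var execSync = require('child_process').execSync;

var gitBranch = 'source';                 // git_branch: branch holding the live site's source files
var workTree = '/tmp/octopress-deploy';   // hypothetical scratch checkout location
var liveDir = '/var/www/site';            // hypothetical live web root

// Check the source branch out of the bare repo into a scratch work tree
// (git sets GIT_DIR to the bare repo while the hook runs).
execSync('mkdir -p ' + workTree);
execSync('git --work-tree=' + workTree + ' checkout -f ' + gitBranch);
// Build the Octopress site with rake, then mirror the output to the web root.
execSync('rake generate', { cwd: workTree });
execSync('rsync -a --delete ' + workTree + '/public/ ' + liveDir + '/');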
alx / api.js
Created April 18, 2017 14:45 — forked from fwielstra/api.js
An example NodeJS / Mongoose / Express application based on their respective tutorials
/* The API controller
Exports 3 methods:
* post - Creates a new thread
* list - Returns a list of threads
* show - Displays a thread and its posts
*/
var Thread = require('../models/thread.js');
var Post = require('../models/post.js');
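
The snippet ends after the requires. As a rough sketch of the three documented methods, assuming the classic Mongoose callback API and a hypothetical thread reference field on Post (these bodies are assumptions based on standard Express/Mongoose tutorials, not the original code):

// Sketch only: hypothetical completions of the three documented methods.
exports.post = function (req, res) {
  // post - Creates a new thread
  new Thread({ title: req.body.title }).save(function (err, thread) {
    if (err) { return res.status(500).json(err); }
    res.json(thread);
  });
};

exports.list = function (req, res) {
  // list - Returns a list of threads
  Thread.find(function (err, threads) {
    if (err) { return res.status(500).json(err); }
    res.json(threads);
  });
};

exports.show = function (req, res) {
  // show - Displays a thread and its posts
  Thread.findById(req.params.id, function (err, thread) {
    if (err || !thread) { return res.status(404).json(err); }
    // Assumes each post stores a reference to its thread in a 'thread' field.
    Post.find({ thread: thread._id }, function (err, posts) {
      if (err) { return res.status(500).json(err); }
      res.json({ thread: thread, posts: posts });
    });
  });
};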
alx / llm-wiki.md
Created April 12, 2026 04:09 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea, but your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
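
To make the contrast concrete, here is a deliberately simplified sketch of the two patterns; every function in it (retrieveRelevantChunks, llm, topicOf, wiki.lookup) is a hypothetical placeholder, not a real library call:

// RAG: every question starts over from raw chunks; nothing persists between queries.
async function answerWithRag(question, documents) {
  // Retrieval and synthesis are re-done from scratch for each question.
  const chunks = await retrieveRelevantChunks(question, documents);
  return llm('Answer from these excerpts:\n' + chunks.join('\n') + '\n\nQ: ' + question);
}

// Wiki pattern: synthesis happened once, up front; each question reuses it.
async function answerWithWiki(question, wiki) {
  // Look up an already-synthesized article instead of raw fragments.
  const article = wiki.lookup(topicOf(question));
  return llm('Answer from this article:\n' + article + '\n\nQ: ' + question);
}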