Martin Salo (salomartin)

@Maharshi-Pandya
Maharshi-Pandya / contemplative-llms.txt
Last active November 9, 2025 07:24
"Contemplative reasoning" response style for LLMs like Claude and GPT-4o
You are an assistant that engages in extremely thorough, self-questioning reasoning. Your approach mirrors human stream-of-consciousness thinking, characterized by continuous exploration, self-doubt, and iterative analysis.
## Core Principles
1. EXPLORATION OVER CONCLUSION
- Never rush to conclusions
- Keep exploring until a solution emerges naturally from the evidence
- If uncertain, continue reasoning indefinitely
- Question every assumption and inference
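
To try the style end to end, the whole prompt above can be supplied as a system message. A minimal sketch of the wiring (mine, not part of the gist), assuming the text is saved locally as contemplative-llms.txt and using the current OpenAI Python client:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
system_prompt = open("contemplative-llms.txt").read()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Is 1027 prime?"},
    ],
)
print(resp.choices[0].message.content)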
"""
Results from 5 seeds:
95.28, 95.35, 95.17, 95.26, 95.28
"""
#############################################
#            Setup/Hyperparameters          #
#############################################
import os
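
For quick reference, summary statistics for the five runs quoted above (a throwaway snippet of mine, not part of the gist):

import statistics

runs = [95.28, 95.35, 95.17, 95.26, 95.28]
print(f"mean={statistics.mean(runs):.2f}, stdev={statistics.stdev(runs):.3f}")
# prints: mean=95.27, stdev=0.065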
@realvjy
realvjy / ChoasLinesShader.metal
Last active October 31, 2025 05:17
Chaos Lines - Metal Shader
// Lines
float hash( float n ) {
    return fract(sin(n)*753.5453123);
}

// Slight modification of iq's noise function.
float noise( vector_float2 x )
{
    vector_float2 p = floor(x);
    vector_float2 f = fract(x);
    // The remainder is the standard body of iq's value noise, assumed here
    // because the snippet is truncated at this point.
    f = f*f*(3.0-2.0*f);
    float n = p.x + p.y*57.0;
    return mix(mix(hash(n +  0.0), hash(n +  1.0), f.x),
               mix(hash(n + 57.0), hash(n + 58.0), f.x), f.y);
}
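
To sanity-check the pair outside a Metal pipeline, here is a direct Python port (mine, mirroring the assumed completion above; not from the gist):

import math

def fract(x):
    return x - math.floor(x)

def hash_(n):
    return fract(math.sin(n) * 753.5453123)

def noise(x, y):
    # bilinear interpolation of hashed lattice values with a smooth fade
    px, py = math.floor(x), math.floor(y)
    fx, fy = fract(x), fract(y)
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    n = px + py * 57.0
    a = hash_(n) * (1 - fx) + hash_(n + 1) * fx
    b = hash_(n + 57) * (1 - fx) + hash_(n + 58) * fx
    return a * (1 - fy) + b * fy

print(noise(1.5, 2.5))  # deterministic pseudo-random value in [0, 1)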
@tysam-code
tysam-code / discrete-action-backprop-bypass-hlb-cifar10-demo.py
Last active March 14, 2024 09:24
This is a network sketch. Network sketches aren't complete, released versions, but rather a (hopefully convincing) proof-of-concept of a particular idea. The intent of this network sketch is to show that traditional backprop is not required in order to learn and coordinate a network's dataflow structure over a simulated discrete, non-backproppab…
# Sketch-specific note: a ~25-run battery for this code estimated ~93.11% accuracy in the same number of steps as the baseline network, with ~1.7x runtime overhead (much of which goes to the torch.randn allocations and extra layer calculations).
# Note: The one change we need to make if we're in Colab is to uncomment this below block.
# If we are in an ipython session or a notebook, clear the state to avoid bugs
#"""
try:
    _ = get_ipython().__class__.__name__
    ## we set -f below to avoid prompting the user before clearing the notebook state
    %reset -f
except NameError:
    pass  # not running under IPython/Colab; nothing to clear
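
As background for the "non-backproppable action" idea, one standard trick for learning through a discrete choice is the straight-through estimator; a minimal sketch (mine; the gist's actual mechanism may well differ):

import torch

logits = torch.randn(8, 4, requires_grad=True)
probs = torch.softmax(logits, dim=-1)

# Forward pass sees a hard one-hot action; backward pass routes the
# gradient through the soft probabilities instead of the argmax.
hard = torch.nn.functional.one_hot(probs.argmax(dim=-1), num_classes=4).float()
action = hard + probs - probs.detach()

loss = (action * torch.randn(8, 4)).sum()
loss.backward()
print(logits.grad.shape)  # torch.Size([8, 4]) despite the discrete argmax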
@danielgross
danielgross / mathpix2gpt.py
Last active March 18, 2025 02:18
mathpix2gpt.py
import requests
import time
import os
import sys
import openai
import tiktoken
from termcolor import colored
openai.api_key = open(os.path.expanduser('~/.openai')).read().strip()
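
The tiktoken import above is typically for counting tokens before a request, so the prompt fits the model's context window. A tiny usage sketch (mine, not from the gist):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
print(len(enc.encode("Convert this LaTeX to plain English.")))  # token count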

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
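
The contrast the post draws can be made concrete in a few lines. Below is a toy sketch (mine, not Goldberg's or Schulman's) of the two update rules on a one-step next-token model; toy_reward is an illustrative stand-in for human feedback:

import torch
import torch.nn.functional as F

VOCAB, HIDDEN = 16, 32
model = torch.nn.Sequential(torch.nn.Embedding(VOCAB, HIDDEN),
                            torch.nn.Linear(HIDDEN, VOCAB))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def supervised_step(prompt_tok, demo_tok):
    # Learning from demonstrations: raise the probability of the human-written token.
    loss = F.cross_entropy(model(prompt_tok), demo_tok)
    opt.zero_grad(); loss.backward(); opt.step()

def rl_step(prompt_tok, toy_reward):
    # RL: sample from the model itself, then reinforce by reward (REINFORCE).
    dist = torch.distributions.Categorical(logits=model(prompt_tok))
    action = dist.sample()
    loss = -(dist.log_prob(action) * toy_reward(action)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

prompt = torch.randint(VOCAB, (4,))
demo = torch.randint(VOCAB, (4,))
supervised_step(prompt, demo)                    # push toward the demonstration
rl_step(prompt, lambda a: (a == demo).float())   # reward agreement with the demo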