Learning LLMs in 2025

So you know how the transformer works, you know basic ML/DL, and you want to learn more about LLMs. One way to go is to look into the various "algorithmic" topics (optimization algorithms, RL, DPO, etc.). There is lots of material on that. But the interesting stuff is (in my opinion at least) not there.

This is an attempt to collect a list of academic (or academic-like) materials that explore LLMs from other directions, and focus on the non-ML-algorithmic aspects.

Courses

  • David Chiang's Theory of Neural Networks course.
      • This is not primarily about LLMs, but it does have a substantial section on Transformers. Formal/Theory. More of a book than a course.

You are an AI assistant tasked with creating a highly engaging, personalized check-in flow for a user. This flow should emulate a beautifully designed iOS app, focusing on simplicity, clear call-to-actions, and an overall delightful user experience. Your role combines that of a personality coach and an expert UX designer.

Here's the theme for today's check-in: {{THEME}}

And here's the context we have about the user:
<user_context>
{{USER_CONTEXT}}
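As a rough illustration of how a template like this might be wired up, here is a minimal Python sketch that substitutes the {{THEME}} and {{USER_CONTEXT}} placeholders before the prompt is sent to a model. The function name and example values are hypothetical, and the template is abbreviated; none of this is part of the original prompt.

```python
# Minimal sketch: fill the check-in prompt template before sending it to an LLM.
# The template below is abbreviated and the example values are illustrative.
CHECKIN_TEMPLATE = """You are an AI assistant tasked with creating a highly engaging,
personalized check-in flow for a user. ...

Here's the theme for today's check-in: {{THEME}}

And here's the context we have about the user:
<user_context>
{{USER_CONTEXT}}
</user_context>
"""

def build_checkin_prompt(theme: str, user_context: str) -> str:
    # Plain string replacement keeps the {{...}} placeholder syntax intact.
    return (CHECKIN_TEMPLATE
            .replace("{{THEME}}", theme)
            .replace("{{USER_CONTEXT}}", user_context))

if __name__ == "__main__":
    print(build_checkin_prompt("gratitude", "Prefers short, visual check-ins in the evening."))
```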

@willccbb
willccbb / grpo_demo.py
Last active June 29, 2025 03:17
GRPO Llama-1B
# train_grpo.py
#
# See https://github.com/willccbb/verifiers for ongoing developments
#
"""
citation:
@misc{brown2025grpodemo,
  title={Granular Format Rewards for Eliciting Mathematical Reasoning Capabilities in Small Language Models},
  author={Brown, William},
import os
import sys
from typing import override
with open(sys.argv[0]) as f:
    code = f.read()  # read the code of this file ASAP, for logging
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
import contextlib
import time
import uuid
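The preview above cuts off before any reward definitions appear. For context on the "Granular Format Rewards" idea in the grpo_demo.py title, here is a hedged, self-contained sketch (not taken from the gist) of format-based reward functions of the kind GRPO-style training can optimize; the <reasoning>/<answer> tag names and 0/1 scoring are assumptions for this sketch.

```python
import re

# Illustrative only: reward completions for following an XML-style
# <reasoning>...</reasoning><answer>...</answer> layout. Tag names and
# weights are assumptions, not the gist's actual reward definitions.
FORMAT_PATTERN = re.compile(
    r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>", re.DOTALL
)

def format_reward(completions: list[str]) -> list[float]:
    """Return 1.0 for completions that match the expected layout, else 0.0."""
    return [1.0 if FORMAT_PATTERN.search(c) else 0.0 for c in completions]

def correctness_reward(completions: list[str], target: str) -> list[float]:
    """Return 1.0 when the text inside <answer> matches the target, else 0.0."""
    rewards = []
    for c in completions:
        m = re.search(r"<answer>(.*?)</answer>", c, re.DOTALL)
        rewards.append(1.0 if m and m.group(1).strip() == target.strip() else 0.0)
    return rewards
```

In GRPO-style setups, several such reward functions are typically summed per completion, so format compliance and answer correctness can be weighted separately.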
@Maharshi-Pandya
Maharshi-Pandya / contemplative-llms.txt
Last active July 1, 2025 00:46
"Contemplative reasoning" response style for LLMs like Claude and GPT-4o
You are an assistant that engages in extremely thorough, self-questioning reasoning. Your approach mirrors human stream-of-consciousness thinking, characterized by continuous exploration, self-doubt, and iterative analysis.
## Core Principles
1. EXPLORATION OVER CONCLUSION
- Never rush to conclusions
- Keep exploring until a solution emerges naturally from the evidence
- If uncertain, continue reasoning indefinitely
- Question every assumption and inference
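A hedged example of how a response-style prompt like this is typically applied: save the gist text locally and pass it as the system prompt in a chat-completions call. The client library, model name, file path, and user question below are assumptions, not part of the gist.

```python
# Minimal sketch, assuming the OpenAI Python client (openai>=1.0) and that the
# gist text has been saved locally as contemplative-llms.txt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("contemplative-llms.txt") as f:
    system_prompt = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model that takes a system prompt works
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Is 3821 a prime number?"},
    ],
)
print(response.choices[0].message.content)
```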
import asyncio
import base64
import json
import os
import pyaudio
from websockets.asyncio.client import connect
class SimpleGeminiVoice:
    def __init__(self):
@charlesfrye
charlesfrye / wrapper.py
Last active February 24, 2025 16:16
Train GPT-2 in five minutes -- for free!
# Train GPT-2 in five minutes -- for free
#
# ```bash
# pip install modal
# modal setup
# modal run wrapper.py
# ```
#
# Note that the end-to-end latency the first time is more like 25 minutes:
# - five minutes to install Torch (rip)
@virattt
virattt / hedge-fund-agent-team-v1-4.ipynb
Created November 19, 2024 23:51
hedge-fund-agent-team-v1-4.ipynb
@deepfates
deepfates / convert_archive.py
Created November 17, 2024 19:33
Convert your twitter archive into a training dataset and markdown files
import argparse
import json
import logging
import os
import re
import shutil
from concurrent.futures import ProcessPoolExecutor, as_completed
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Callable, Dict, List, Literal, Optional, Tuple
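The preview shows only the imports. As a hedged illustration of the core parsing step such a converter needs (not taken from the gist), here is how the tweets.js file inside a standard Twitter/X archive is typically read: it is a JavaScript assignment wrapping a JSON array, so the prefix has to be stripped before json.loads. The path and key names reflect the usual archive layout, not the gist's code.

```python
import json
from pathlib import Path

def load_tweets(archive_dir: str) -> list[dict]:
    """Read data/tweets.js from an unpacked Twitter/X archive.

    The file looks like `window.YTD.tweets.part0 = [...]`, so strip
    everything before the first '[' and parse the rest as JSON.
    """
    raw = Path(archive_dir, "data", "tweets.js").read_text(encoding="utf-8")
    payload = raw[raw.index("["):]
    return [item["tweet"] for item in json.loads(payload)]
```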
@stenuto
stenuto / hls.sh
Created November 7, 2024 16:58
HLS ffmpeg script
#!/bin/bash
# Function to display usage information
usage() {
  echo "Usage: $0 /path/to/input.mp4 [ /path/to/output_directory ]"
  exit 1
}
# Check if at least one argument (input file) is provided
if [ $# -lt 1 ]; then
  usage
fi
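The preview stops before the actual ffmpeg invocation. As a hedged illustration (written in Python to match the other sketches here, and not taken from the gist), this is the kind of HLS transcode command such a script typically wraps; the codec choices and segment settings are assumptions.

```python
import subprocess

def transcode_to_hls(input_path: str, output_dir: str) -> None:
    """Run a typical single-rendition HLS transcode with standard ffmpeg flags."""
    cmd = [
        "ffmpeg", "-i", input_path,
        "-c:v", "libx264", "-c:a", "aac",
        "-hls_time", "6",                     # segment duration in seconds
        "-hls_playlist_type", "vod",          # write a complete, closed playlist
        "-hls_segment_filename", f"{output_dir}/segment_%03d.ts",
        "-f", "hls", f"{output_dir}/index.m3u8",
    ]
    subprocess.run(cmd, check=True)
```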