Oliver Mannion (tekumara)

@andyjessop
andyjessop / prompt.txt
Created April 20, 2024 07:43
A prompt to categorise and analyse sentiment for GitHub issues
Please analyze the following GitHub issue data, which is provided as a JSON object:
{
  "title": "🐛 BUG: WebSocket typing doesn't work in apps that also pull in DOM types",
  "body": "Which Cloudflare product(s) does this pertain to?"
}
Provide a response with the following structure:
<json>
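
As a rough illustration of how such a prompt might be wired up (the categorise_issue helper and the OpenAI-style client call below are assumptions for the sketch, not part of the gist):

import json

from openai import OpenAI  # assumed client; any chat-completion API would do

PROMPT_TEMPLATE = """Please analyze the following GitHub issue data, which is provided as a JSON object:
{issue_json}
Provide a response with the following structure:
<json>"""

def categorise_issue(issue: dict) -> str:
    """Fill the template with one issue and ask the model to categorise it."""
    prompt = PROMPT_TEMPLATE.format(issue_json=json.dumps(issue, indent=2))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content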
@dhh
dhh / linux-setup.sh
Last active November 15, 2024 03:53
linux-setup.sh
# THIS LINUX SETUP SCRIPT HAS MORPHED INTO A WHOLE PROJECT: HTTPS://OMAKUB.ORG
# PLEASE CHECK OUT THAT PROJECT INSTEAD OF THIS OUTDATED SETUP SCRIPT.
#
#
# Libraries and infrastructure
sudo apt update -y
sudo apt install -y \
docker.io docker-buildx \
build-essential pkg-config autoconf bison rustc cargo clang \
import asyncio
import copy
import hashlib
import json
import os
import random
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
import numpy as np
@kalomaze
kalomaze / llm_samplers_explained.md
Last active November 15, 2024 17:40
LLM Samplers Explained

LLM Samplers Explained

Every time a large language model makes predictions, all of the thousands of tokens in the vocabulary are assigned some degree of probability, from almost 0% to almost 100%. There are different ways you can choose from those predictions. This process is known as "sampling", and there are various strategies you can use, which I will cover here.

OpenAI Samplers

Temperature

  • Temperature is a way to control the overall confidence of the model's scores (the logits). If you use a value lower than 1.0, the relative distance between the tokens grows (more deterministic), and if you use a value larger than 1.0, the relative distance between the tokens shrinks (less deterministic); see the sketch after this list.
  • A temperature of 1.0 gives the original distribution that the model was trained to optimize for, since the scores remain unchanged.
  • Graph demonstration with voiceover: https://files.catbox.moe/6ht56x.mp4
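
A minimal sketch of what temperature does to the logits (illustrative only, not from the gist):

import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Scale logits by 1/temperature, softmax them, then sample one token id."""
    scaled = logits / temperature          # T < 1 widens the gaps, T > 1 shrinks them
    probs = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.1])
print(sample_with_temperature(logits, temperature=0.7))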
@yoavg
yoavg / GM-level-chess-without-search.md
Last active October 20, 2024 16:32
Grand-master Level Chess without Search

Grand-master Level Chess without Search: Modeling Choices and their Implications

Yoav Goldberg, February 2024.


Researchers at Google DeepMind released a paper about a learned system that is able to play blitz chess at a grandmaster level, without using search. This is interesting and imagination-capturing, because up to now computer-chess systems that play at this level, whether based on machine learning or not, did use a search component.[^1]

Indeed, my first reaction when reading the paper was to tweet "wow, crazy and interesting". I still find it crazy and interesting, but upon a closer read, it may not be as crazy and as interesting as I initially thought. Many reactions on Twitter, Reddit, etc. were super-impressed, going into implications about the projected learning abilities of AI systems, the ability of neural networks to learn semantics from observations, etc., which are really over-the-top. The paper does not claim any of them, but they are still perceived…

@mitchellh
mitchellh / merge_vs_rebase_vs_squash.md
Last active November 11, 2024 17:07
Merge vs. Rebase vs. Squash

I get asked pretty regularly what my opinion is on merge commits vs rebasing vs squashing. I've typed up this response so many times that I've decided to just put it in a gist so I can reference it whenever it comes up again.

I use merge, squash, and rebase all situationally. I believe they all have their merits, but their usage depends on the context. I think anyone who says any particular strategy is the right answer 100% of the time is wrong, but I think there is considerable acceptable leeway in when you use each. What follows is my personal and professional opinion:

@hannes
hannes / dlopen.md
Last active December 15, 2023 10:42

Parallel Python within the same process or hacking around the cursed GIL with a hand-rolled library loader

From its obscure beginnings in Amsterdam, the Python programming language has become a fundamental building block of our digital society. It is used literally everywhere and by everyone for a mind-bogglingly wide variety of tasks.

Python is also the lingua franca of Data Science, tying together tools for data loading, wrangling, analysis and AI. There is a massive ecosystem of contributed Python packages, which, for example, allows reading every obscure data format under the sun. This makes Python and its ecosystem extremely valuable for analytical data management systems: users are likely somewhat familiar with Python due to its immense popularity, and the ecosystem provides solutions for most data problems. As a result, Python is being integrated into SQL systems, typically through so-called User-Defined Functions (UDFs). For example, Apache…
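
As a rough sketch of what a Python UDF in a SQL system looks like (this uses DuckDB's Python API as an assumed example; the preview above does not name a specific system):

import duckdb

# A scalar Python UDF; DuckDB infers the SQL types from the annotations.
def domain_of(url: str) -> str:
    return url.split("/")[2] if "://" in url else url

con = duckdb.connect()
con.create_function("domain_of", domain_of)
print(con.sql("SELECT domain_of('https://duckdb.org/docs')").fetchall())
# [('duckdb.org',)]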

@veekaybee
veekaybee / normcore-llm.md
Last active November 15, 2024 12:06
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts


Pre-Transformer Models

@chrisguidry
chrisguidry / stream_subscriber.py
Last active August 16, 2023 01:59
Stream the events from a Prefect Cloud workspace over Websockets
from uuid import UUID
import orjson
import pendulum
import rich.console
from websockets.client import connect
from websockets.exceptions import ConnectionClosedError
from prefect.cli import root
from prefect.cli._types import PrefectTyper
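
The preview cuts off at the imports; a minimal sketch of the receive loop such a subscriber might run (the placeholder URL and plain printing are assumptions, not from the gist):

import asyncio

import orjson
from websockets.client import connect
from websockets.exceptions import ConnectionClosedError

async def stream_events(url: str) -> None:
    """Connect to an events websocket and print each message as it arrives."""
    try:
        async with connect(url) as ws:
            async for raw in ws:
                print(orjson.loads(raw))
    except ConnectionClosedError:
        print("server closed the connection")

# asyncio.run(stream_events("wss://example.invalid/events"))  # placeholder URL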
@MarkRoddy
MarkRoddy / parse_s3_access_logs.sql
Last active August 22, 2024 20:34
DuckDB: Query S3 Access Logs
/*
Usage: you'll want to search for the strings <bucket> and <prefix>, and insert the S3 bucket where your access
logs are being delivered. Use (or delete) <prefix> to filter to a subset of your logs.
*/
/*
You can either run these commented-out configuration settings yourself in the REPL and then source this
file with `.read parse_s3_access_logs.sql`, or uncomment them here and supply your own values.
*/
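/*
Illustrative sketch only -- the setting names below assume DuckDB's httpfs extension
(not shown in the preview); adjust them for your environment:

  INSTALL httpfs;
  LOAD httpfs;
  SET s3_region = 'us-east-1';
  SET s3_access_key_id = '<your-access-key>';
  SET s3_secret_access_key = '<your-secret-key>';
*/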