Jean de Dieu Nyandwi

@tokenbender
tokenbender / train_modal_standalone.py
Last active October 12, 2025 06:57
standalone serverless simple character level transformer
import os
import sys
import time
import math
import pickle
from contextlib import nullcontext
from pathlib import Path
import subprocess
from dataclasses import dataclass
import inspect
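
The preview stops at the imports. For orientation, a minimal sketch of the serverless pattern the filename (train_modal_standalone.py) implies, assuming the Modal client library; the app name, GPU type, and train() entrypoint below are illustrative, not the gist's actual code:

import modal

app = modal.App("char-transformer")  # hypothetical app name
image = modal.Image.debian_slim().pip_install("torch", "numpy")

@app.function(gpu="A10G", image=image, timeout=60 * 60)
def train():
    # the gist's nanoGPT-style character-level training loop would run here
    ...

@app.local_entrypoint()
def main():
    # kicks off a remote, serverless training run
    train.remote()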
@cavit99
cavit99 / qwen2vl.py
Last active August 11, 2025 13:49
Qwen2-VL-7B-Instruct inference on Apple silicon
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from PIL import Image
from pathlib import Path
import sys
# Toggle to switch between full response and extracted description
OUTPUT_FULL_RESPONSE = False
# Ensure we're using the MPS device if available
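# The preview cuts off after this comment. A plausible continuation under
# standard PyTorch/transformers conventions (an assumption, not the gist's
# exact code):
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype=torch.float16
).to(device)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")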
@chrisalbon
chrisalbon / gist:9affdbe840ac0a30fd5d0faa63169eb0
Created March 2, 2024 19:18
VSCode Terminal <-> File keyboard shortcut
// Place your key bindings in this file to override the defaults
[
    // Toggle between terminal and editor focus
    {
        "key": "ctrl+`",
        "command": "workbench.action.terminal.focus"
    },
    {
        "key": "ctrl+`",
        "command": "workbench.action.focusActiveEditorGroup",
        // ending reconstructed from the standard toggle pattern: fire only
        // while the terminal has focus
        "when": "terminalFocus"
    }
]
@yoavg
yoavg / GM-level-chess-without-search.md
Last active April 1, 2025 04:44
Grand-master Level Chess without Search

Grand-master Level Chess without Search: Modeling Choices and their Implications

Yoav Goldberg, February 2024.


Researchers at Google DeepMind released a paper about a learned system that plays blitz chess at a grandmaster level without using search. This is interesting and imagination-capturing because, until now, computer-chess systems that play at this level, machine-learning-based or not, have all used a search component.[^1]

Indeed, my first reaction when reading the paper was to tweet "wow, crazy and interesting". I still find it crazy and interesting, but on a closer read it may not be as crazy and as interesting as I initially thought. Many reactions on Twitter, Reddit, etc. were super impressed, drawing implications about the projected learning abilities of AI systems, the ability of neural networks to learn semantics from observations, and so on, which are really over the top. The paper does not claim any of these, but they are still perceiv…

@antonisa
antonisa / SAC.md
Last active December 1, 2024 15:33
Conference Decisions

The Impossible Task of Conference SACs/PCs or How I lost 3 Nights of Sleep

I am writing this post to share my thoughts on the processes behind acceptance/rejection decisions at top-tier (NLP) conferences. I'll first describe the process, then share some thoughts on its shortcomings.

Before we start, a bit about me. I am an assistant professor (aka, rather junior: I have been in this position for less than 4 years, following my PhD studies and a short postdoc) working on NLP, with a focus on multilingualism and low-resource settings. While I have submitted, published at, and reviewed for *ACL conferences and workshops for many years, it was at EMNLP'23 that I was a Senior Area Chair (SAC) for the first time.

The Conference Paper Pipeline

Let's first briefly outline the process that a paper undergoes, from submission to decision:

@jph00
jph00 / fast_peft.py
Last active September 16, 2024 04:30
Make get_peft_model() fast
from bitsandbytes.nn.modules import Linear8bitLt, Linear4bit
from contextlib import contextmanager
from torch.nn import init  # missing import: init.kaiming_uniform_ is used below

def noop(x=None, *args, **kwargs):
    "Do nothing"
    return x

@contextmanager
def no_kaiming():
    old_iku = init.kaiming_uniform_  # save the original initializer
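    # The preview is truncated here. A plausible continuation (an assumption,
    # not the gist's exact code): swap in the no-op initializer so that
    # get_peft_model() skips the slow kaiming init, then restore it on exit.
    init.kaiming_uniform_ = noop
    try:
        yield
    finally:
        init.kaiming_uniform_ = old_iku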
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active July 1, 2025 23:14
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
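
The preview shows only the license header. For orientation, a minimal sketch of the QLoRA + TRL fine-tuning pattern the gist's title points at; the model/dataset ids, hyperparameters, and field names below are assumptions, not the gist's exact values:

from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Guanaco subset commonly paired with this script (assumption)
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # hypothetical base model
    load_in_4bit=True,           # QLoRA-style 4-bit loading
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # guanaco rows carry a single "text" field
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./results", per_device_train_batch_size=4),
)
trainer.train()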

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much…
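
For reference, the RL objective usually meant by "RLHF training" (standard notation, not spelled out in this preview) maximizes a learned reward while penalizing drift from the supervised reference policy:

$$\max_{\pi}\;\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot \mid x)}\!\left[r(x, y)\right] \;-\; \beta\,\mathrm{KL}\!\left[\pi(\cdot \mid x)\,\middle\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\right]$$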

@rain-1
rain-1 / LLM.md
Last active October 20, 2025 07:02
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP, with a bias/focus toward GPT.

Avoid being a link dump. Try to provide only valuable, well-tuned information.

Prelude

Neural network links before starting with transformers.
