Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

# train_grpo.py
#
# See https://github.com/willccbb/verifiers for ongoing developments
#
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer
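The fragment above only shows the imports. For a sense of how the pieces fit together, here is a minimal sketch following TRL's documented GRPOTrainer quickstart; the dataset, model name, and length-based reward are illustrative placeholders, not the actual train_grpo.py contents (which also wires in the LoRA config imported above).

# Minimal GRPO sketch (illustrative placeholders, not the real train_grpo.py)
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

def reward_len(completions, **kwargs):
    # Toy verifiable reward: prefer completions close to 20 characters.
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",   # placeholder model
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()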
The CUDA 12.1.1 toolkit will offer to install Nvidia driver 530 for us. That driver comes from the New Feature branch, so it's likely newer than the default driver you'd get via apt-get (apt prefers 525, i.e. the Production branch).
If you're confident you already have a new enough Nvidia driver for CUDA 12.1.1 and you'd like to keep it, feel free to skip this "uninstall driver" step.
But if you're not sure, or you know your driver is too old, let's uninstall it; CUDA will install a new driver for us later.
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much repeats that argument.
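To make the contrast concrete, here is a toy PyTorch sketch (my own illustration, not from the post or the talk) of the two training signals: supervised fine-tuning maximizes the likelihood of a human demonstration, while RL maximizes the expected reward of the model's own samples.

import torch
import torch.nn.functional as F

def demonstration_loss(logits, demo_ids):
    # Supervised / instruction fine-tuning: push probability mass
    # toward the tokens a human actually wrote.
    return F.cross_entropy(logits.view(-1, logits.size(-1)), demo_ids.view(-1))

def reinforce_loss(sample_logprob, reward):
    # RL (REINFORCE-style): the model generates its own answer; a scalar
    # reward (e.g. from a learned preference model) scales the gradient,
    # so the model is credited or penalized for what *it* said.
    return -(reward * sample_logprob)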
import time
from contextlib import suppress
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cuda as cuda
from torch.utils.data import DataLoader, IterableDataset
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Model Init
n_gpu = 8
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = T5ForConditionalGeneration.from_pretrained("google/flan-ul2")

# Spread the encoder blocks evenly across the GPUs. (Despite the name,
# these are transformer blocks/layers, not attention heads.)
heads_per_gpu = len(model.encoder.block) // n_gpu
device_map = {
    gpu: list(
        range(
            # Assumed continuation: the snippet was truncated here; this
            # completes the contiguous per-GPU block ranges.
            gpu * heads_per_gpu,
            (gpu + 1) * heads_per_gpu,
        )
    )
    for gpu in range(n_gpu)
}
model.parallelize(device_map)  # shard the model across GPUs
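A hedged usage sketch for the snippet above, assuming the completed device_map and the model.parallelize call; with T5's parallelize API the inputs go on the first device in the map.

inputs = tokenizer("Translate English to German: Hello, world!", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))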
import torch
import torch.multiprocessing as mp
from absl import app, flags
from torchvision.models import AlexNet

FLAGS = flags.FLAGS
flags.DEFINE_integer("num_processes", 2, "Number of subprocesses to use")

# Assumed continuation (the snippet was truncated here): spawn one worker
# per process, each building its own model instance.
def run(rank):
    model = AlexNet()

def main(argv):
    mp.spawn(run, nprocs=FLAGS.num_processes)

if __name__ == "__main__":
    app.run(main)  # e.g. python script.py --num_processes=4
Using https://github.com/NVIDIA/cuda-samples. You can also view the GPU topology with nvidia-smi topo -m.
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples
git checkout tags/v11.1
sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev libglfw3-dev libgles2-mesa-dev
make
Run make in the repo root to build everything, or build just this test with: cd Samples/p2pBandwidthLatencyTest && make
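If you'd rather get a quick number without building the samples, here is a rough PyTorch analogue of the device-to-device bandwidth measurement; a sketch assuming at least two GPUs (the official p2pBandwidthLatencyTest is far more thorough).

import time
import torch

src = torch.empty(64 * 1024**2, dtype=torch.float32, device="cuda:0")  # 256 MB buffer
dst = torch.empty_like(src, device="cuda:1")

dst.copy_(src)  # warm-up transfer
torch.cuda.synchronize()
t0 = time.time()
for _ in range(10):
    dst.copy_(src)
torch.cuda.synchronize()
nbytes = src.element_size() * src.nelement()
print(f"~{nbytes * 10 / (time.time() - t0) / 1e9:.1f} GB/s device-to-device")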
CMAKE_MINIMUM_REQUIRED(VERSION 3.1.0)
PROJECT(project C)
FIND_PACKAGE(PkgConfig REQUIRED)
PKG_CHECK_MODULES(GLIB REQUIRED glib-2.0)
INCLUDE_DIRECTORIES(
    src/include/
    ${GLIB_INCLUDE_DIRS}  # assumed continuation: the original was truncated here
)
import torch
import torch.nn.functional as F  # needed by the nucleus-filtering branch below

def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
    """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
        Args:
            logits: logits distribution shape (vocabulary size)
            top_k > 0: keep only top k tokens with highest probability (top-k filtering).
            top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
                Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
    """
    assert logits.dim() == 1  # batch size 1 for now - could be updated for more but the code would be less clear
    top_k = min(top_k, logits.size(-1))  # safety check
    # Completion of the truncated snippet, following the widely circulated Hugging Face gist.
    if top_k > 0:
        # Drop every token less likely than the k-th most likely token.
        logits[logits < torch.topk(logits, top_k)[0][..., -1, None]] = filter_value
    if top_p > 0.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        sorted_indices_to_remove = cumulative_probs > top_p
        # Shift the mask right so the first token above the threshold is kept.
        sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
        sorted_indices_to_remove[..., 0] = 0
        logits[sorted_indices[sorted_indices_to_remove]] = filter_value
    return logits
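A quick usage sketch, reusing the imports above; the dummy logits stand in for a model's next-token logits.

logits = torch.randn(50257)  # dummy next-token logits (GPT-2-sized vocabulary)
filtered = top_k_top_p_filtering(logits, top_k=50, top_p=0.9)
probs = F.softmax(filtered, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)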