Goals: Add links that give reasonable, well-written explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

""" To use: install LLM studio (or Ollama), clone OpenVoice, run this script in the OpenVoice directory | |
git clone https://github.com/myshell-ai/OpenVoice | |
cd OpenVoice | |
git clone https://huggingface.co/myshell-ai/OpenVoice | |
cp -r OpenVoice/* . | |
pip install whisper pynput pyaudio | |
""" | |
from openai import OpenAI
import subprocess
import time

def get_GPU_usage():
    # Query per-GPU utilization (%) via nvidia-smi; returns one int per GPU.
    cmd = "nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits"
    result = subprocess.check_output(cmd, shell=True).decode('utf-8')
    usages = list(map(int, result.strip().split('\n')))
    return usages
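For example (purely illustrative, not part of the original script), the helper can be polled in a loop to watch utilization while a model is generating:

# Illustrative usage: print per-GPU utilization once a second for ten seconds.
for _ in range(10):
    print("GPU utilization (%):", get_GPU_usage())
    time.sleep(1)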
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much summarizes and expands on the argument from that talk.
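To make the contrast concrete, here is a deliberately simplified sketch (the model.logprob, model.sample, and reward_model helpers are hypothetical, not any real library's API): instruction fine-tuning pushes up the likelihood of a human-written answer, while RL training samples an answer from the model itself and reinforces it in proportion to a reward.

# Simplified training signals, not a real training loop; helper methods are hypothetical.
def supervised_loss(model, prompt, human_answer):
    # Learning from demonstrations: imitate the human-written answer.
    return -model.logprob(prompt, human_answer)

def rl_loss(model, reward_model, prompt):
    # RL from (a model of) human feedback, REINFORCE-style: sample from the
    # model, score the sample, and reinforce it in proportion to the reward.
    answer = model.sample(prompt)
    reward = reward_model(prompt, answer)
    return -reward * model.logprob(prompt, answer)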
Maybe you've heard about this technique but you haven't completely understood it, especially the PPO part. This explanation might help.
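As a rough reminder of what the PPO part optimizes, here is a minimal sketch of the clipped surrogate loss (illustrative only; ratio is the new-policy/old-policy probability ratio for the sampled tokens, and advantage comes from the reward and value estimates):

import jax.numpy as jnp

def ppo_clip_loss(ratio, advantage, eps=0.2):
    # PPO's clipped surrogate objective: take the more pessimistic of the
    # unclipped and clipped terms, then negate so it can be minimized.
    unclipped = ratio * advantage
    clipped = jnp.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -jnp.mean(jnp.minimum(unclipped, clipped))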
We will focus on text-to-text language models 📝, such as GPT-3, BLOOM, and T5. Models like BERT, which are encoder-only, are not addressed.
Reinforcement Learning from Human Feedback (RLHF) has been successfully applied in ChatGPT, hence its major increase in popularity. 📈
RLHF is especially useful in two scenarios 🌟:
#!/usr/bin/env bash
# Pull every repo listed (one name per line) in ~/git/repos, in parallel.
for f in $(<~/git/repos); do
    cd ~/git/$f
    git pull > /dev/null &
    cd - > /dev/null
done
wait $(jobs -p)  # wait for all background pulls to finish
JAX released a persistent compilation cache for TPU VMs! When enabled, the cache writes compiled JAX computations to disk so they don’t have to be re-compiled the next time you start your JAX program. This can save startup time if any of y’all have long compilation times.
First upgrade to the latest jax release:
pip install -U "jax[tpu]>=0.2.18" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
Then use the following to enable the cache in your jax code:
from jax.experimental.compilation_cache import compilation_cache as cc
cc.initialize_cache("/path/to/cache/directory")  # any persistent directory works
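For instance (a sketch; the cache path is just an example), any function jit-compiled after the cache is initialized gets its compiled executable written to disk on the first run and loaded from disk on later runs:

import jax
import jax.numpy as jnp
from jax.experimental.compilation_cache import compilation_cache as cc

cc.initialize_cache("/tmp/jax_cache")  # example path; use any persistent directory

@jax.jit
def matmul(a, b):
    return a @ b

x = jnp.ones((2048, 2048))
# First run of the program: compiled and written to the cache.
# Later runs: loaded from the cache instead of recompiled.
print(matmul(x, x)[0, 0])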
import jax
import jax.numpy as jnp

def ou_process(key, steps, dt, mu, tau, sigma):
    """ Generate an Ornstein-Uhlenbeck process sample. """
    ou_init = jnp.zeros((steps + 1,))
    noise = jax.random.normal(key, (steps,))
    def ou_step(t, val):
        # Euler-Maruyama update: mean reversion towards mu plus scaled Gaussian noise.
        dx = (-(val[t-1] - mu) / tau * dt
              + sigma * jnp.sqrt(2 / tau) * jnp.sqrt(dt) * noise[t-1])
        return val.at[t].set(val[t-1] + dx)
    # fori_loop fills the path sequentially while staying jit-compatible.
    return jax.lax.fori_loop(1, steps + 1, ou_step, ou_init)
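For instance (illustrative parameter values), sampling a single path with the function above:

key = jax.random.PRNGKey(0)
path = ou_process(key, steps=1000, dt=0.01, mu=0.0, tau=1.0, sigma=0.5)
print(path.shape)  # (1001,)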
Lecture 1: Introduction to Research — [📝Lecture Notebooks]
Lecture 2: Introduction to Python — [📝Lecture Notebooks]
Lecture 3: Introduction to NumPy — [📝Lecture Notebooks]
Lecture 4: Introduction to pandas — [📝Lecture Notebooks]
Lecture 5: Plotting Data — [📝Lecture Notebooks]