@xuhdev
xuhdev / ctags_with_dep.sh
Last active June 13, 2024 15:47
Generate a ctags file for C or C++ source files and their dependencies (included header files), so you don't have to regenerate one huge tags file every time.
#!/bin/sh
# https://www.topbug.net/blog/2012/03/17/generate-ctags-files-for-c-slash-c-plus-plus-source-files-and-all-of-their-included-header-files/
# ./ctags_with_dep.sh file1.c file2.c ... to generate a tags file for these files.
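# gcc -M emits each source file plus every header it includes; the sed calls
# below split that list onto one path per line and drop blank lines and the
# "foo.o:" make targets, then ctags -L - reads the file list from stdin.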
gcc -M "$@" | sed -e 's/[\\ ]/\n/g' | \
sed -e '/^$/d' -e '/\.o:[ \t]*$/d' | \
ctags -L - --c++-kinds=+p --fields=+iaS --extra=+q
@ryerh
ryerh / tmux-cheatsheet.markdown
Last active November 18, 2024 13:47 — forked from MohamedAlaa/tmux-cheatsheet.markdown
Tmux shortcuts & cheat sheet & quick tutorial

Note: this article targets Tmux 2.3 and later. Most features also work on older versions, but mouse support, vi mode, and plugin management may be incompatible with older versions.

Tmux shortcuts & cheat sheet & quick tutorial

Start a new session:

tmux [new -s session-name -n window-name]

Restore a session:
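The excerpt breaks off here; the standard attach command, using the same placeholder style, would be:

tmux attach -t session-name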

from timeit import default_timer as time
import os

# Point Numba/NumbaPro at the CUDA toolkit's libdevice and NVVM libraries
# (system-specific paths) before any CUDA code is compiled.
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/lib/nvidia-cuda-toolkit/libdevice/'
os.environ['NUMBAPRO_NVVM'] = '/usr/lib/x86_64-linux-gnu/libnvvm.so.3.1.0'

import numpy as np
from numba import cuda
import torch
import ctypes
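The rest of this gist is not shown. As an illustration only, here is a minimal sketch of how these imports are typically combined: a Numba CUDA kernel timed with the imported default_timer. The kernel name add_kernel, the array size n, and the launch configuration are assumptions, not part of the original snippet.

# Sketch (not from the original gist): element-wise add on the GPU, timed on the host.
@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)
    if i < out.size:
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.empty_like(x)

start = time()
# Host arrays are copied to the device automatically; writable arrays are copied back.
add_kernel[(n + 255) // 256, 256](x, y, out)
cuda.synchronize()
print('kernel time:', time() - start)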
@jisungk
jisungk / tf_tutorial.py
Last active June 18, 2024 19:02
Dead simple TensorFlow 1.X tutorial: Training a feedforward neural network
"""Dead simple tutorial for defining and training a small feedforward neural
network (also known as a multilayer perceptron) for regression using TensorFlow 1.X.
Introduces basic TensorFlow concepts including the computational graph,
placeholder variables, and the TensorFlow Session.
Author: Ji-Sung Kim
Contact: hello (at) jisungkim.com
"""

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much