Note: this article targets Tmux 2.3 and above. Most of the features also work on older versions, but mouse support, VI mode, and plugin management may not behave as described here on older releases.
Start a new session:
tmux [new -s session-name -n window-name]
Restore (re-attach to) an existing session:
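For example, using the standard attach subcommand (session-name is whatever name was given when the session was created):
tmux attach [-t session-name]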
import os

# Point Numba at the CUDA toolkit's libdevice and NVVM libraries;
# these paths are installation-specific.
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/lib/nvidia-cuda-toolkit/libdevice/'
os.environ['NUMBAPRO_NVVM'] = '/usr/lib/x86_64-linux-gnu/libnvvm.so.3.1.0'

from timeit import default_timer as time
import ctypes

import numpy as np
import torch
from numba import cuda
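To sanity-check this setup, a minimal sketch of a Numba CUDA kernel follows. It reuses the numpy and cuda imports above; the kernel name, array size, and launch configuration are illustrative assumptions rather than code from this snippet, and it assumes a CUDA-capable GPU is available.

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)            # global thread index
    if i < arr.size:            # guard against out-of-range threads
        arr[i] += 1.0

data = np.zeros(1024, dtype=np.float32)
d_data = cuda.to_device(data)   # copy the host array to the GPU
threads_per_block = 256
blocks = (data.size + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](d_data)
print(d_data.copy_to_host()[:4])  # expect [1. 1. 1. 1.]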
"""Dead simple tutorial for defining and training a small feedforward neural | |
network (also known as a multilayer perceptron) for regression using TensorFlow 1.X. | |
Introduces basic TensorFlow concepts including the computational graph, | |
placeholder variables, and the TensorFlow Session. | |
Author: Ji-Sung Kim | |
Contact: hello (at) jisungkim.com | |
""" |
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much