For example, to kill a process whose command line contains python3 -u experiment_main.py:
kill $(ps aux | grep '[p]ython3 -u experiment_main.py' | awk '{print $2}')
hdfs dfs -ls / | sort -k6,7   # sort the listing by modification date and time (fields 6 and 7)
pip install streamlit
pip install spacy
python -m spacy download en_core_web_sm
python -m spacy download en_core_web_md
python -m spacy download de_core_news_sm
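Assuming the installs and model downloads above succeed, the pipelines can then be loaded by name. A minimal sketch:

import spacy

# Load the small English pipeline downloaded above
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")
for token in doc:
    print(token.text, token.pos_, token.dep_)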
# Keep only the strict upper triangle of the adjacency matrices (direct edges, no self-loops).
new_adj = torch.triu(adj, diagonal=1)
# print(new_adj[-1, :10, :10])
# Paths of length two, via batched matrix multiplication.
new_adj1 = torch.bmm(new_adj, new_adj)
# print(new_adj1[-1, :10, :10])
# Combine direct and two-step reachability; clamp keeps the matrix binary.
new_adj_or = torch.clamp((new_adj + new_adj1), max=1)
# print('new_adj_or', new_adj_or[-1, :10, :10])
loop = 1
# Repeat until squaring the reachability matrix adds no new paths (transitive closure).
while not torch.equal(torch.bmm(new_adj_or, new_adj_or), new_adj1):
    new_adj1 = torch.bmm(new_adj_or, new_adj_or)
    new_adj_or = torch.clamp((new_adj_or + new_adj1), max=1)
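To illustrate what the loop above converges to, here is a minimal, self-contained sketch on a toy batch. It assumes `adj` is a batch of binary adjacency matrices, as the snippet implies; the final `new_adj_or` is the transitive closure (reachability matrix) of the upper-triangular graph.

import torch

# Toy batch with one 4-node chain graph: edges 0->1, 1->2, 2->3
adj = torch.zeros(1, 4, 4)
adj[0, 0, 1] = 1
adj[0, 1, 2] = 1
adj[0, 2, 3] = 1

new_adj = torch.triu(adj, diagonal=1)
new_adj1 = torch.bmm(new_adj, new_adj)
new_adj_or = torch.clamp(new_adj + new_adj1, max=1)
while not torch.equal(torch.bmm(new_adj_or, new_adj_or), new_adj1):
    new_adj1 = torch.bmm(new_adj_or, new_adj_or)
    new_adj_or = torch.clamp(new_adj_or + new_adj1, max=1)

print(new_adj_or[0])  # reachability: 0 reaches 1, 2, 3; 1 reaches 2, 3; 2 reaches 3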
class PointerNetwork(nn.Module):
    """
    From "Pointer Networks" by Vinyals et al. (2015).
    Adapted from pointer-networks-pytorch by ast0414:
    https://github.com/ast0414/pointer-networks-pytorch

    Args:
        n_hidden: The number of features to expect in the inputs.
    """
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine tuning", learning to imitate human written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case of RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
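As a rough illustration of the distinction being drawn here (this sketch is not from the post itself): supervised instruction tuning pushes up the token-level likelihood of a human-written answer, while RLHF samples an answer from the model and nudges it according to a scalar reward from a learned reward model. The sketch below is schematic; the HuggingFace-style `model(...).logits` interface, the callable `reward_model`, and the helper `sample_with_logprob` are assumptions for illustration, and PPO details (clipping, KL penalty, value baseline) are omitted.

import torch
import torch.nn.functional as F

# Supervised fine-tuning (learning from demonstrations):
# maximize the probability of each token of a human-written answer.
def sft_step(model, prompt_ids, answer_ids, optimizer):
    logits = model(torch.cat([prompt_ids, answer_ids], dim=-1)).logits
    # logits at positions prompt_len-1 .. end-1 predict the answer tokens
    pred = logits[:, prompt_ids.size(-1) - 1:-1, :]
    loss = F.cross_entropy(pred.reshape(-1, pred.size(-1)), answer_ids.reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# RLHF, reduced to a REINFORCE-style update for illustration:
# sample an answer from the model itself and weight its log-probability by a reward.
def rl_step(model, reward_model, prompt_ids, optimizer):
    answer_ids, logprob = sample_with_logprob(model, prompt_ids)  # hypothetical helper
    reward = reward_model(prompt_ids, answer_ids)                 # scalar score (assumed interface)
    loss = -(reward * logprob)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()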
import streamlit as st

if 'count' not in st.session_state:
    st.session_state.count = 0
if 'quotes' not in st.session_state:
    st.session_state.quotes = [
        "Life is what happens when you're busy making other plans. — John Lennon",
        "Get busy living or get busy dying. — Stephen King",
        "You only live once, but if you do it right, once is enough. — Mae West",
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, PreTrainedTokenizerFast, set_seed, AutoModelForCausalLM, AutoConfig
from tqdm import tqdm
import argparse
import torch
import torch.nn as nn
import logging
from typing import Dict, Tuple
from accelerate import Accelerator, DistributedDataParallelKwargs
from accelerate.logging import get_logger
import openai
import asyncio

async def get_choice_completion(prompt, choices):
    # Initialize an asynchronous OpenAI client
    async with openai.AsyncClient(base_url="http://127.0.0.1:8000/v1", api_key="abc") as client:
        choice_probs = {}
        # Calculate logprobs for each prompt + choice sequence
        for choice in choices: