A collection of music APIs, databases, and related tools.
To allow the Hugging Face version of Whisper to predict word-level timestamps, a new property alignment_heads must be added to the GenerationConfig object. This is a list of [layer, head] pairs that select the cross-attention heads that are highly correlated to word-level timing.
If your Whisper checkpoint does not have the alignment_heads property yet, it can be added in two possible ways.
Method 1. Change the model.generation_config property:
from transformers import WhisperForConditionalGeneration

# load the model
model = WhisperForConditionalGeneration.from_pretrained("your_checkpoint")
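Continuing from the snippet above, a minimal sketch of Method 1. The `[layer, head]` pairs below are placeholder values for illustration only, not the actual alignment heads of any particular checkpoint; each released checkpoint has its own published list.

```python
# Placeholder [layer, head] pairs -- substitute the alignment heads
# published for your specific Whisper checkpoint.
model.generation_config.alignment_heads = [[5, 3], [5, 9], [8, 0], [8, 4]]
```

Once `alignment_heads` is set, word-level timestamps can be requested at generation time, for example via `return_timestamps="word"` in the automatic-speech-recognition pipeline in recent transformers releases.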
Yoav Goldberg, April 2023.

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much restates that argument.
Maybe you've heard about this technique but you haven't completely understood it, especially the PPO part. This explanation might help.
We will focus on text-to-text language models π, such as GPT-3, BLOOM, and T5. Models like BERT, which are encoder-only, are not addressed.
Reinforcement Learning from Human Feedback (RLHF) has been successfully applied in ChatGPT, hence its major increase in popularity.
RLHF is especially useful in two scenarios:
- You can't create a good loss function (see the sketch after this list)
  - Example: how do you calculate a metric to measure whether the model's output was funny?
- You want to train with production data, but you can't easily label your production data
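When a quality like "funny" can't be written down as a loss function, RLHF sidesteps the problem by training a reward model on human preference comparisons and then optimizing the language model against it with PPO. As a rough illustration (not taken from any particular library), here is a minimal sketch of the pairwise preference loss typically used to train such a reward model; the score tensors below are hypothetical placeholders rather than outputs of a real model:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry style) preference loss: push the score of the
    human-preferred answer above the score of the rejected answer."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Hypothetical scores a reward model might assign to a batch of comparison pairs;
# in a real pipeline these come from a scalar head on top of a language model.
chosen_rewards = torch.tensor([1.3, 0.2, 2.1])     # answers the labelers preferred
rejected_rewards = torch.tensor([0.9, 0.5, -0.3])  # answers the labelers rejected

print(reward_model_loss(chosen_rewards, rejected_rewards))
```

The reward model trained this way stands in for the loss you couldn't write by hand: PPO then updates the language model π to produce outputs the reward model scores highly, usually with a KL penalty that keeps it close to the supervised starting point.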
A non-exhaustive list of WebGL and WebGPU frameworks and libraries. The list is mostly for learning purposes, as some of the libraries listed are WIP, outdated, or no longer maintained.
| Name | Stars | Last Commit | Description |
|---|---|---|---|
| three.js | 