Maybe you've heard of this technique but haven't fully understood it, especially the PPO part. This explanation might help.
We will focus on text-to-text language models, such as GPT-3, BLOOM, and T5. Models like BERT, which are encoder-only, are not addressed.
Reinforcement Learning from Human Feedback (RLHF) has been successfully applied in ChatGPT, hence its major increase in popularity.
RLHF is especially useful in two scenarios:
- You can't create a good loss function
  - Example: how do you calculate a metric to measure if the model's output was funny?
- You want to train with production data, but you can't easily label your production data
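To make the first scenario concrete, here is a minimal sketch of the idea RLHF uses instead of a hand-written loss: a reward model trained on human preference labels acts as a learned stand-in for the metric you can't write down. All names here (`RewardModel`, the toy preference scores) are illustrative, not from any real library.

```python
class RewardModel:
    """Toy stand-in for a model trained on human comparisons of outputs.

    In real RLHF this is a neural network that generalizes from human
    preference labels; here we just look scores up in a dict to show
    the role it plays.
    """

    def __init__(self, preferences: dict[str, float]):
        # preferences: output text -> how much humans preferred it (0..1)
        self.preferences = preferences

    def score(self, output: str) -> float:
        # Higher score = humans preferred outputs like this one.
        # A real model would produce a score for *any* input, not just
        # ones it has seen labels for.
        return self.preferences.get(output, 0.0)


# There is no formula for "funny", but humans can compare two jokes.
# Those comparisons train the reward model, which then serves as the
# training signal in place of a conventional loss function.
rm = RewardModel({"a pun about ducks": 0.9, "a dry recitation of facts": 0.1})
print(rm.score("a pun about ducks"))
```

The point is not the lookup table but the substitution: the subjective judgment ("was it funny?") gets baked into a model you can query, and that query becomes the reward signal PPO later optimizes against.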