Junyan Xu
alreadydone
PhD in Math, working on Lean's mathlib4 and formalization of FLT. Interested in autoformalization and AI for mathematics.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback".
I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but actually requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much…
Warning: lots of hallucination due to lack of web browsing.
Name: SiegelGPT
Goal: Find out the status of Yitang Zhang's paper "Discrete mean estimates and the Landau-Siegel zero".
Embarking on a new goal: Find out the status of Yitang Zhang's paper "Discrete mean estimates and the Landau-Siegel zero".
Thinking...
Added task: Search for Yitang Zhang's paper "Discrete mean estimates and the Landau-Siegel zero"
Added task: Find out if the paper has been published in a journal or conference proceedings
Added task: Check for any news or updates on the status of the paper
Thinking...
Last month, a news article titled "Man beats machine at Go in human victory over AI" caught my interest. It was first published by the Financial Times on February 17th, but I was traveling in London at the time and only saw it a few days late, so I decided to wait for follow-up coverage before writing about it. As I waited, no further updates appeared, so I am revisiting the topic now.
The news was about an American amateur 6-dan Go player, Kellin Pelrine, who exploited vulnerabilities in the open-source Go programs KataGo and Leela Zero to defeat them decisively.
Before 2015, this kind of news would not have made headlines, because Go programs were then far inferior to human players. However, since DeepMind's Go system AlphaGo defeated the Chinese-born French professional 2-dan player Fan Hui with a score of 5-0 in October 2015, and then defeated the 14-time world champion, South Korean professional 9-dan player Lee…
Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model", and maybe that it "solved natural language understanding". Here is a short personal perspective on this (and similar) models, and on where we stand with respect to language understanding.
Intro
Around 2014-2017, right at the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Around the same time I was also asked in an academic panel, "what would you do if you were given infinite compute and no need to worry about labor costs?", to which I cockily responded, "I would train a really huge language model, just to show that it doesn't solve everything!" We…
Invertible ideals in commutative semi-local rings are principal.
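To make the statement concrete, here is a minimal Lean 4 sketch of how it might be phrased against mathlib. This is an illustrative assumption, not the gist's actual code: "semi-local" is encoded as having only finitely many maximal ideals via `MaximalSpectrum`, invertibility as being a unit among fractional ideals in the fraction ring, and the proof is omitted.

```lean
import Mathlib

/-- Hypothetical phrasing of the gist's statement (not its actual code):
in a commutative ring with only finitely many maximal ideals ("semi-local"),
an ideal that is invertible as a fractional ideal is principal. -/
example {R : Type*} [CommRing R] [Finite (MaximalSpectrum R)] (I : Ideal R)
    (hI : IsUnit (↑I : FractionalIdeal (nonZeroDivisors R) (FractionRing R))) :
    I.IsPrincipal := by
  sorry  -- proof omitted; see the gist for the actual argument
```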