Sam Altman: "I think if you have a smart person who has learned to do good research and has the right sort of mindset, it only takes about six months to make them, you know, take a smart physics researcher and make them into a productive AI researcher. So we don't have enough talent in the field yet, but it's coming soon. We have a program at OpenAI that does exactly this. And I'm astonished how well it works."
- Software Engineer (not the focus of this Gist): Build customer-facing features, optimize applications for speed and scale, and use AI APIs. Prompt engineering expertise is generally helpful, but AI experience beyond calling the APIs or using ChatGPT proficiently is generally not needed.
- Machine Learning Engineer: Build pipelines for data management, model training, and model deployment to improve models (not the focus of this Gist), and/or implement cutting-edge research papers (a focus of this Gist).
- Research Engineer (a focus of this Gist, though it is missing resources on massive-scale systems): Build massive-scale, distributed machine learning systems.
- Research Scientist (a focus of this Gist): Develop new ML techniques to push the state of the art forward.
https://openai.com/research/spinning-up-in-deep-rl
https://iconix.github.io/notes/2018/10/07/what-i-learned
https://github.com/iconix/openai/blob/master/syllabus.md
See also other OpenAI fellows/scholars blog posts ("We ask all Scholars to document their experiences studying deep learning to hopefully inspire others to join the field too."), e.g. https://openai.com/blog/openai-scholars-2021-final-projects
https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/
https://80000hours.org/podcast/episodes/richard-ngo-large-language-models/
John Schulman: https://www.youtube.com/watch?v=hhiLw5Q_UFg
Alec Radford: https://www.youtube.com/watch?v=BnpB3GrpsfM, https://www.youtube.com/watch?v=3X3EY2Fgp3g, https://www.youtube.com/watch?v=S75EdAcXHKk, https://www.youtube.com/watch?v=VINCQghQRuM, https://www.youtube.com/watch?v=KeJINHjyzOU
https://web.archive.org/web/20200813005847/http://wiki.fast.ai:80/index.php/Calculus_for_Deep_Learning and https://www.quantstart.com/articles/matrix-algebra-linear-algebra-for-deep-learning-part-2/ (via https://openai.com/blog/openai-scholars-2019)
https://www.deepmind.com/learning-resources/introduction-to-reinforcement-learning-with-david-silver
http://karpathy.github.io/2019/04/25/recipe/
https://karpathy.medium.com/yes-you-should-understand-backprop-e2f06eab496b
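Karpathy's post above argues for working through backpropagation by hand rather than treating autograd as a black box. As a minimal illustrative sketch of that exercise (not taken from the post; the tiny network, the tanh/MSE choices, and the variable names are assumptions), here is a hand-derived backward pass checked against a numerical gradient:

```python
# Minimal sketch: manually backprop through a tiny one-hidden-layer network
# and verify one analytic gradient entry against a numerical estimate.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))       # 4 examples, 3 input features
y = rng.standard_normal((4, 1))       # regression targets
W1 = rng.standard_normal((3, 5)) * 0.1
W2 = rng.standard_normal((5, 1)) * 0.1

def forward(W1, W2):
    h = np.tanh(x @ W1)               # hidden layer with tanh nonlinearity
    pred = h @ W2                     # linear output layer
    loss = 0.5 * np.mean((pred - y) ** 2)
    return loss, h, pred

# Forward pass, then apply the chain rule by hand for the backward pass.
loss, h, pred = forward(W1, W2)
dpred = (pred - y) / y.shape[0]       # dL/dpred for the mean squared error
dW2 = h.T @ dpred                     # dL/dW2
dh = dpred @ W2.T                     # backprop into the hidden activations
dW1 = x.T @ (dh * (1 - h ** 2))       # tanh'(z) = 1 - tanh(z)^2

# Numerical gradient check on one entry of W1 to confirm the derivation.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
W1m = W1.copy(); W1m[0, 0] -= eps
num = (forward(W1p, W2)[0] - forward(W1m, W2)[0]) / (2 * eps)
print(dW1[0, 0], num)                 # the two values should agree closely
```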
HN threads such as https://news.ycombinator.com/item?id=35114530
Andrew Ng's courses: https://www.deeplearning.ai/ and https://www.coursera.org/collections/machine-learning
The most up-to-date version of this list is now here: https://llm-utils.org/AI+Learning+Curation