Ref: Exclusive Q&A: John Carmack’s ‘Different Path’ to Artificial General Intelligence
"So I asked Ilya Sutskever, OpenAI’s chief scientist, for a reading list. He gave me a list of like 40 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’ And I did. I plowed through all those things and it all started sorting out in my head."
Ref: https://x.com/ID_AA_Carmack/status/1622673143469858816
I rather expected @ilyasut to have made a public post by now after all the discussion of the AI reading list he gave me. A canonical list of references from a leading figure would be appreciated by many. I would be curious myself about what he would add from the last three years.
- The Annotated Transformer
- The First Law of Complexodynamics
- The Unreasonable Effectiveness of Recurrent Neural Networks
- Understanding LSTM Networks
- Recurrent Neural Network Regularization
- Keeping Neural Networks Simple by Minimizing the Description Length of the Weights
- Pointer Networks
- ImageNet Classification with Deep Convolutional Neural Networks
- Order Matters: Sequence to Sequence for Sets
- GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism
- Deep Residual Learning for Image Recognition
- Multi-Scale Context Aggregation by Dilated Convolutions
- Neural Message Passing for Quantum Chemistry
- Attention Is All You Need (a minimal sketch of the core attention operation follows this list)
- Neural Machine Translation by Jointly Learning to Align and Translate
- Identity Mappings in Deep Residual Networks
- A Simple Neural Network Module for Relational Reasoning
- Variational Lossy Autoencoder
- Relational Recurrent Neural Networks
- Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton
- Neural Turing Machines
- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
- Scaling Laws for Neural Language Models (the headline power-law form appears after this list)
- A Tutorial Introduction to the Minimum Description Length Principle
- Machine Super Intelligence (Shane Legg's PhD dissertation)
- Kolmogorov Complexity (page 434 onwards; the core definitions are sketched after this list)
- CS231n: Convolutional Neural Networks for Visual Recognition
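Several entries above (The Annotated Transformer; Attention Is All You Need; Neural Machine Translation by Jointly Learning to Align and Translate) center on attention. Below is a minimal NumPy sketch of scaled dot-product attention as defined in "Attention Is All You Need" — Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The function name, shapes, and toy data are illustrative assumptions, not taken from any listed paper's code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k) queries, K: (n_k, d_k) keys, V: (n_k, d_v) values.
    Returns an (n_q, d_v) array: one mixture of values per query.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled so the softmax stays well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_q, n_k)
    # Numerically stable row-wise softmax -> attention weights summing to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy usage (illustrative shapes): 3 queries attending over 5 key/value pairs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```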
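For "Scaling Laws for Neural Language Models", the headline result is that test loss falls as a power law in model size, with analogous laws for dataset size and compute. The form below follows the paper's notation, where N is the non-embedding parameter count and N_c and α_N are constants fitted to the data:

```latex
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}
```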
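Hinton's description-length paper, Grünwald's MDL tutorial, and the Kolmogorov Complexity chapter all circle the same compression-as-learning idea. As a quick orientation in standard notation (not specific to any one of these texts): two-part MDL picks the hypothesis minimizing the cost of describing the model plus the cost of describing the data given the model, and the Kolmogorov complexity of a string x is the length of the shortest program that makes a universal machine U output x:

```latex
H_{\mathrm{MDL}} = \operatorname*{arg\,min}_{H \in \mathcal{H}} \bigl( L(H) + L(D \mid H) \bigr),
\qquad
K_U(x) = \min \{\, \ell(p) : U(p) = x \,\}
```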
Thank you 🙏