@dela3499
Last active September 24, 2017 03:35
Two approaches to AGI

Our computer programs can't yet match human-level general intelligence, but there are two directions of research I find especially encouraging.

First, we can extend the capabilities of deep learning systems, which have been wildly successful in many narrow applications. As Francois Chollet argues, deep learning programs are only a subset of all possible programs; in other words, they aren't universal. So it's not surprising that they succeed only in narrow applications. Humans, presumably, can learn any program that fits within the memory and speed limits of our brains. So if the geometric transforms of deep learning aren't enough, where do we go from here?
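The contrast can be sketched in code (a minimal illustration; the functions and weights below are hypothetical, chosen only to make the point): a feedforward net applies the same fixed chain of geometric transforms to every input, while an ordinary program can branch and loop a data-dependent number of times.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def feedforward(x, weights):
    """A fixed chain of geometric transforms: the same sequence of
    affine maps + ReLU is applied to every input, whatever its content."""
    for w in weights:
        x = relu(w @ x)
    return x

def collatz_steps(n):
    """An ordinary program, by contrast, can loop a data-dependent
    number of times: here, the step count depends on the input value
    itself, not on any fixed architecture."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

No fixed-depth chain of transforms expresses the second function for all inputs, which is one way to see why deep learning alone covers only a subset of programs.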

We already have two examples: RNNs and AlphaGo. Recurrent neural networks are like ordinary deep learning architectures, except that they effectively contain a for loop. AlphaGo likewise contains an additional component (tree search) that isn't purely deep-learned. These examples point toward systems that incorporate ideas from general-purpose programming languages. One line of research, for instance, would be a system that could learn to construct an RNN (and presumably other useful architectures) on its own.
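The "RNN as a for loop" point can be made concrete with a minimal sketch (the tanh cell and random weights are illustrative, not any particular published architecture): the same cell is applied at every time step, with a hidden state carried through the loop.

```python
import numpy as np

def rnn_forward(xs, W_h, W_x, b):
    """Run a simple tanh RNN over a sequence of input vectors.
    The for loop *is* the recurrence: one shared transform, applied
    once per time step, threading a hidden state through."""
    h = np.zeros(W_h.shape[0])
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x + b)
    return h  # final hidden state

# Hypothetical small weights, just to run the loop end to end.
rng = np.random.default_rng(0)
W_h = 0.1 * rng.normal(size=(4, 4))
W_x = 0.1 * rng.normal(size=(4, 3))
b = np.zeros(4)
xs = [rng.normal(size=3) for _ in range(5)]
h = rnn_forward(xs, W_h, W_x, b)
```

Strip away the loop and this is just a feedforward layer; the loop is exactly the general-purpose-programming ingredient the surrounding text is pointing at.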

(need to name these approaches, not only give examples!)

A second, perhaps less obvious, approach is to start with general-purpose programs and make them more useful. We already have a range of useful deep learning programs, and narrow-AI programs more generally. These programs work: lots of working software is being built with them right now.
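One concrete (and hypothetical, not drawn from the text above) reading of this second approach is program synthesis: search over programs written in a general-purpose language and keep whichever fits the data. A toy enumerative search over a tiny expression DSL:

```python
import itertools

# Candidate program shapes over one input x and one integer constant c.
# The DSL and the example target are illustrative assumptions.
OPS = [
    ("x + c", lambda x, c: x + c),
    ("x * c", lambda x, c: x * c),
    ("x * x + c", lambda x, c: x * x + c),
]

def synthesize(examples, max_const=10):
    """Return the first (description, constant) pair consistent with
    every input/output example, or None if the search space is exhausted."""
    for (name, f), c in itertools.product(OPS, range(-max_const, max_const + 1)):
        if all(f(x, c) == y for x, y in examples):
            return name, c
    return None

# Examples drawn from the target program f(x) = x * x + 1.
prog = synthesize([(0, 1), (2, 5), (3, 10)])
```

Real systems replace this brute-force enumeration with learned guidance, which is one place the two approaches in this note meet.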
