I'd like to understand intelligence. I think there are several areas of research that are especially interesting: conjecture, criticism, realism, problem-solving, and universality.
We're faced with a constant stream of problems, and we're quite good at finding solutions. How do we quickly and efficiently navigate the space of all possible solutions?
A new problem might be solved with an old solution, or some variant of it, with some ideas added, removed, or tweaked. Analogies or random combinations may help.
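As a toy illustration of variation and selection - just a sketch, with a contrived target and error measure - a candidate solution can be improved by repeatedly tweaking an old one and keeping variants that survive criticism:

```python
import random

# Toy sketch: start from an "old solution" and apply small random
# variations, keeping a variant only when it survives the test
# (here, a simple error measure against a contrived target).
TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def error(candidate: str) -> int:
    """Count positions where the candidate disagrees with the target."""
    return sum(a != b for a, b in zip(candidate, TARGET))

def tweak(candidate: str) -> str:
    """Vary the old solution: change one randomly chosen character."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

solution = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while error(solution) > 0:
    variant = tweak(solution)
    if error(variant) <= error(solution):  # keep variants that survive criticism
        solution = variant

print(solution)  # converges to TARGET via variation and selection
```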
We're especially efficient when it comes to understanding each other in conversation. How, in a fraction of a second, are we able to grasp the ideas others share with us?
How do we learn scientific theories like Newton's?
As we come up with new solutions to problems, how do we quickly select among their infinite number? The process involves disparate ideas, from simple facts and common sense to theories of all kinds.
When faced with the infinity of theories that might account for some phenomenon, how can we choose which to prefer and act upon? There is no fixed, automatic way to do this, though there are helpful criteria - simplicity, size, and being hard to vary.
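As a toy sketch of one such criterion - simplicity, crudely operationalized here as description length, which is my assumption rather than a full account - one might rank the candidate hypotheses that survive the data by how short they are:

```python
# Each hypothesis is a Python expression for the data; among those
# consistent with the observations, prefer the shortest. The hypothesis
# list and the length measure are contrived for illustration.
observations = [(0, 0), (1, 1), (2, 4), (3, 9)]

hypotheses = [
    "x * x",                          # simple, consistent
    "x ** 2",                         # simple, consistent
    "x * x + 0 * (x - 1) * (x - 2)",  # consistent but needlessly complex
    "3 * x",                          # simple but inconsistent
]

def consistent(expr: str) -> bool:
    return all(eval(expr, {"x": x}) == y for x, y in observations)

viable = [h for h in hypotheses if consistent(h)]
best = min(viable, key=len)  # simplicity as the tie-breaker among survivors
print(best)  # -> "x * x"
```

Note that consistency with the data does the first cut; simplicity only chooses among the survivors.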
As an aside, questions contain implicit selection criteria. If you ask, "When did you last eat?", some answers don't make sense, like "2 inches". In this case, the answer must be a time, not a distance.
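A toy rendering of this point - the names and types here are invented for illustration - is that an answer can be rejected on its kind alone, before its content is even considered:

```python
from datetime import datetime

def acceptable_answer_for_when(answer) -> bool:
    """'When did you last eat?' admits times, not distances."""
    return isinstance(answer, datetime)

print(acceptable_answer_for_when(datetime(2024, 5, 1, 12, 30)))  # True
print(acceptable_answer_for_when("2 inches"))                    # False
```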
We also learn criteria as we attempt to solve a problem and fail. We see what prevents us from making progress, and test new solutions against it as well.
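Here's a small sketch of that process - the hidden rules and the random search are invented for illustration - where each failure yields a criterion that future candidates are tested against:

```python
import random

# Each failed attempt reveals a constraint, which is kept and used to
# screen later candidates, so the search narrows as criticism accumulates.
hidden_rules = [
    lambda n: n % 2 == 0,  # must be even
    lambda n: n > 50,      # must exceed 50
    lambda n: n % 7 == 0,  # must be divisible by 7
]

learned = []  # criteria discovered through failure

def attempt(n: int) -> bool:
    for rule in hidden_rules:
        if not rule(n):
            learned.append(rule)  # record what blocked progress
            return False
    return True

candidate = random.randrange(100)
while not attempt(candidate):
    # propose new candidates, but test them against past criticisms first
    candidate = random.randrange(100)
    while not all(rule(candidate) for rule in learned):
        candidate = random.randrange(100)

print(candidate)  # satisfies every rule, found via accumulated criticism
```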
Knowledge of reality has reach beyond the situation that created it. It can be used for new, unexpected purposes, and be applied to situations never yet seen. It is very different from pattern-type knowledge or rules of thumb.
How do we represent physical, abstract, and subjective aspects of reality?
Tenenbaum has had some success by allowing systems to construct Newtonian physical models. Lots of practical progress can be made on pressing problems by embedding modeling tools in a system and letting it build models of its environment from those components. This is especially useful when the model components aren't likely to be generated by a data-first strategy like deep learning.
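A minimal sketch in this spirit - the components and API here are my invention, not Tenenbaum's actual system - treats a model as a composition of force primitives the system is given:

```python
# The system is handed modeling primitives - forces - and a "model"
# is just a composition of them, integrated under Newton's second law.
def gravity(mass, g=9.81):
    return lambda state: (0.0, -mass * g)

def drag(coefficient):
    return lambda state: (-coefficient * state["vx"], -coefficient * state["vy"])

def simulate(forces, state, mass, dt=0.01, steps=100):
    """Integrate F = ma for a model composed of force components."""
    for _ in range(steps):
        fx = sum(f(state)[0] for f in forces)
        fy = sum(f(state)[1] for f in forces)
        state["vx"] += fx / mass * dt
        state["vy"] += fy / mass * dt
        state["x"] += state["vx"] * dt
        state["y"] += state["vy"] * dt
    return state

# A "constructed" model: a projectile with gravity plus air resistance.
model = [gravity(mass=1.0), drag(coefficient=0.1)]
print(simulate(model, {"x": 0.0, "y": 0.0, "vx": 5.0, "vy": 5.0}, mass=1.0))
```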
By the way, the modularity of knowledge, ideas, and problems is required for knowledge-sharing to be possible.
How are problems represented? How do we retain knowledge of our attempts to solve problems? How can we construct a problem-solver? What is the structure of a problem? How do we decompose problems into parts to make them tractable? How do we represent our problem-situation? What keeps a problem-solver moving and surviving?
Though we can build systems that contain knowledge of Newtonian mechanics, say, it's not clear how to create one that has both this knowledge and the ability to change it. In other words, this knowledge should not be treated as foundational or fixed. Just like any candidate explanation of a particular situation, it is open to revision.
Everything about a system, from the way it represents ideas, to the means of changing them, to its goals, and so on, should be open to change, since it is subject to error.
Data-first approaches are examples of what Karl Popper called the common-sense, "bucket", theory of mind and knowledge: knowledge enters our minds through the senses. This approach does not actually explain the origins of universal knowledge, since we only ever observe particular events. It cannot, therefore, explain the origins of theories. It also allows only one mechanism for distinguishing between theories - probabilities - which is invalid, and in many cases not even applicable.
It's often useful to find patterns. They're rules of thumb, but their domain of accuracy or relevance is often unknown. They may fail when used outside the region they were trained on, or on adversarial examples.
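A small demonstration of this failure mode - a contrived one, using a polynomial fit as the stand-in for pattern-type knowledge:

```python
import numpy as np

# A polynomial fit to sin(x) on [0, 2*pi] looks fine in-domain
# and goes badly wrong just outside it.
x_train = np.linspace(0, 2 * np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=7)

x_in, x_out = np.pi / 2, 3 * np.pi
print(np.polyval(coeffs, x_in), np.sin(x_in))    # close: ~1.0 vs 1.0
print(np.polyval(coeffs, x_out), np.sin(x_out))  # wildly wrong vs ~0.0
```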
This approach is still useful and often effective in narrow domains. It is hopelessly narrow, however, when compared to model-based strategies, which are infinitely malleable. A decent model of reality allows one to ask all sorts of questions that were never considered when the model was created. Explanatory knowledge has a qualitatively different character from pattern-type knowledge.
In one case, a function has been approximated. In the other, a model of 'what is really out there' has been produced.
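To sharpen the contrast with another toy example: an explicit model - here, constant acceleration under gravity - can answer questions its creator never anticipated, because it describes what is out there rather than fitting a curve:

```python
G = 9.81  # m/s^2

def height(v0, t):
    """Height of a ball thrown up at v0 m/s after t seconds."""
    return v0 * t - 0.5 * G * t**2

v0 = 20.0
print(height(v0, 1.0))   # the question the model was "built" for
print(v0 / G)            # a new question: time to reach the peak
print(v0**2 / (2 * G))   # another: maximum height
print(2 * v0 / G)        # another: total time of flight
```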
An AGI shouldn't have any fixed ideas. Not about physics, morality, or anything else. In the case of Tenenbaum, the physics engine is 'hard-coded' - no mechanism is provided to change or improve it. The same is true of all optimizers, which have a fixed objective function.
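The contrast might be sketched like this - a minimal, invented framing, not any real optimizer's API - where one design freezes the objective at construction and the other exposes it to criticism and replacement:

```python
class FixedOptimizer:
    """Standard design: the objective is frozen at construction."""
    def __init__(self, objective):
        self.objective = objective  # no interface for revising it

    def best(self, candidates):
        return max(candidates, key=self.objective)

class RevisableAgent:
    """Fallibilist design: even the objective is an ordinary, replaceable part."""
    def __init__(self, objective):
        self.objective = objective

    def best(self, candidates):
        return max(candidates, key=self.objective)

    def criticize_objective(self, new_objective):
        """The objective itself is open to criticism and replacement."""
        self.objective = new_objective

agent = RevisableAgent(objective=lambda x: -abs(x - 3))
print(agent.best(range(10)))            # 3
agent.criticize_objective(lambda x: x)  # the goal itself was revised
print(agent.best(range(10)))            # 9
```

The difference is architectural: nothing in the first design even represents the possibility that its goal is wrong.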
Approaches like AIXI purport to be optimal by some criterion, but are intractable in principle, requiring infinite computation to operate. These are not explanations of real creativity, and seem orthogonal to the question of how knowledge is created in practice. Human minds have somehow made problem-solving tractable, and the real explanation of human creativity will allow us to implement and run it on a machine tractably. Optimality is also orthogonal to the question of human performance: nothing about humans is necessarily optimal. In fact, we can never know that anything we do is optimal - that would be equivalent to claiming no more progress could occur, which we cannot know.
Can you identify characters after seeing only one or a few examples of them? Can you draw them and pass a visual Turing test?
Can you play a game that seems to require solving a range of independent problems, and requires some intuitive physics, along with planning?
Tenenbaum embedded a Newtonian physics engine in a system, but how could a system create such an engine on its own? Would deep learning techniques be helpful here? There are two forms of this challenge. The weak form is to produce intuitive physics - good enough to walk, say. The strong form is to actually learn the real Newtonian theory, and maybe build an accurate simulator. (This is clearly much harder!) This second task is meant to get at our ability to learn and apply abstract knowledge.
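As a toy gesture at the strong form - wholly contrived, with simulated data - one can at least recover a real physical parameter from observations rather than hard-coding it:

```python
import numpy as np

# Fit g from noisy free-fall data, y = 0.5 * g * t^2, by least squares.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 40)
y = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, t.shape)  # simulated drops

# Closed-form least squares for y = (g/2) * t^2.
g_estimate = 2 * np.sum(y * t**2) / np.sum(t**4)
print(g_estimate)  # close to 9.81
```

Of course, this presupposes the form of the law; the hard part is conjecturing the law itself.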
Despite the name, this approach of Tenenbaum's seems to be quite effective at guessing programs for drawing characters, or at creating LaTeX drawings from hand-drawn input.
Humans use a variety of ideas to create and distinguish between theories - quickly too. One-shot learning removes the possibility of finding a pattern among many examples. Instead, a system is forced to find a pattern quickly, and to construct it in a new way. There are two difficulties in one-shot learning: creating models consistent with the observation, and selecting among these models. In normal supervised learning, the model is produced by small variations, and is selected to be consistent with many examples. That method of variation and that selection criterion are both unavailable in one-shot learning.
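Here's a minimal sketch of those two difficulties - the images and allowed transformations are invented for illustration: models consistent with a single example are generated (the example plus its transforms), and the class whose models best explain the query is selected:

```python
import numpy as np

def variants(img):
    """Models consistent with one observation: the image and small transforms of it."""
    return [img, np.rot90(img), np.rot90(img, 2), np.rot90(img, 3),
            np.fliplr(img), np.flipud(img)]

def score(query, example):
    """How well any model built from the example explains the query (0 = perfect)."""
    return min(np.sum(query != v) for v in variants(example))

# One-shot classification of tiny binary images: one example per class.
one_shot_classes = {
    "L": np.array([[1, 0], [1, 1]]),
    "I": np.array([[1, 0], [1, 0]]),
}
query = np.array([[1, 1], [1, 0]])  # a rotated "L"
print(min(one_shot_classes, key=lambda c: score(query, one_shot_classes[c])))
```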
Intuitive physics and psychology are useful for explaining things. Something similar is provided in Tenenbaum's approach to the Characters challenge. So, there may be many opportunities to provide modeling tools, and then construct good models from them.
Building in models may, as in the case of some one-shot learning problems, vastly reduce the amount of data and computation required to obtain a good result. Efficiently achieving state-of-the-art results could be a very good route forward.
Popper thought science was far easier to analyze than common sense. The body of knowledge and parochial facts that may be relevant to any normal day-to-day situation can be large and unpredictable. There are masses of facts about people, culture, food, music, and many other things which are relevant to our conversations and how we navigate our daily lives.
In science, these various fields are mostly irrelevant. Instead, we're left with theories, their consequences, the results of various experiments, and so on. The relationships between ideas are much clearer. This may make science a good candidate for artificial intelligence research as well. Then again, many, many ideas are required to really apply a theory, or to arrive at it. It's not clear one can really divorce oneself from common-sense-type knowledge when doing science either. Perhaps programming is the simpler way forward: it's an area where still less cultural and technical knowledge might be required when trying to explain something.
In Excel, you can provide examples of the inputs and outputs of some function and, in some cases, the system can discover the function and apply it to additional examples.
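This is Excel's Flash Fill feature. A minimal programming-by-example sketch - far cruder than the real thing, over an invented four-operation DSL - enumerates small programs until one is consistent with every example:

```python
from itertools import product

# Tiny string DSL; real systems search a far richer space, far faster.
OPS = {
    "upper":   str.upper,
    "lower":   str.lower,
    "first3":  lambda s: s[:3],
    "reverse": lambda s: s[::-1],
}

def synthesize(examples, max_depth=2):
    """Return a sequence of ops consistent with all input/output pairs, if any."""
    for depth in range(1, max_depth + 1):
        for names in product(OPS, repeat=depth):
            def program(s, names=names):
                for name in names:
                    s = OPS[name](s)
                return s
            if all(program(i) == o for i, o in examples):
                return names
    return None

examples = [("January", "JAN"), ("february", "FEB")]
print(synthesize(examples))  # -> ('upper', 'first3')
```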
(Still to explore: counterfactuals, agents.)