This is an edited excerpt from the transcript of Bartosz Milewski's Category Theory 1.1: Motivation and Philosophy.
Turing machine and assembly approaches to programming are not very practical; they're possible, but they don't really scale. So we came up with languages that offer higher levels of abstraction. The next level of abstraction was procedural programming. You divide a big problem into procedures, and each procedure has a name and a certain number of arguments. Maybe it returns a value; sometimes not, maybe it's just for side effects. You chop your work into smaller pieces, and you can deal with bigger problems.
The next thing people came up with was object-oriented programming, which is even more abstract. Now you have stuff that you are hiding inside objects, and then you can compose these objects. Once you program an object, you don't have to look inside it; you can forget about the implementation and just look at the surface. Then you combine these objects, without looking inside, into bigger objects. Again, the important idea is that if you want to deal with more complex problems, you have to be able to chop the bigger problem into smaller problems, solve them separately, and then combine the solutions. And there is a name for this: composability. Composability really helps us in programming.
What else helps us in programming? Abstraction. "Abstraction" comes from a Latin word that means more or less the same as "subtraction": getting rid of details. You want to say, "these things differ in some small details, but for me they are the same". An object in object-oriented programming is exactly that: something that hides the details, that abstracts over some details.
There is something wrong with this object-oriented approach: concurrency does not mix well with object-oriented programming, because objects hide exactly the wrong thing, which makes them not composable. They hide two things that are very important: mutation and the sharing of pointers. Mixing sharing and mutation has a name: it's called a data race. What objects in object-oriented programming abstract over is precisely the data race. When you start combining these objects, you create data races, which is a problem.
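The root of the problem is visible even before any threads are involved: two objects can each look well-encapsulated from the outside while secretly holding a pointer to the same mutable state. A minimal sketch in Python (the `EventLog` class is hypothetical, invented for illustration):

```python
class EventLog:
    """Looks self-contained, but keeps a reference to storage it was given."""

    def __init__(self, storage):
        self._storage = storage  # hidden pointer sharing

    def record(self, event):
        self._storage.append(event)  # hidden mutation


shared = []
log_a = EventLog(shared)
log_b = EventLog(shared)

# From the outside, log_a and log_b look independent, but composing
# them couples their behaviour through the shared list.
log_a.record("a1")
log_b.record("b1")

print(shared)  # both objects wrote into the same storage
```

Run sequentially this is merely surprising; run the two `record` calls from different threads and the hidden sharing-plus-mutation becomes exactly the data race described above.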
Okay, I know how to avoid data races: I'm going to use locks! And I'm going to hide the locking too, because I want to abstract over it. So in Java, every object has its own lock. Unfortunately, locks don't compose either.
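Why locks don't compose can be sketched with the classic bank-transfer example (a Python sketch of the Java-style "one lock per object" idea; the `Account` class and `transfer` function are hypothetical):

```python
import threading


class Account:
    """Each object hides its own lock, in the style of Java monitors."""

    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()


def transfer(src, dst, amount):
    # Each account is individually thread-safe, but composing two
    # correct lock acquisitions is not automatically correct:
    # transfer(a, b, ...) and transfer(b, a, ...) running concurrently
    # take the locks in opposite orders and can deadlock.
    with src.lock:
        with dst.lock:
            src.balance -= amount
            dst.balance += amount


a, b = Account(100), Account(100)
transfer(a, b, 30)  # sequential calls are fine...
transfer(b, a, 10)  # ...the deadlock hazard only appears under concurrency
print(a.balance, b.balance)
```

Each lock is correct on its own; it's the *combination* of two correct pieces that can fail, which is the precise sense in which locks don't compose.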
The point is, when you raise the levels of abstraction, you have to be careful what you're abstracting over. What are the things that you are subtracting, throwing away and not exposing?
So the next level of abstraction came after that (actually, it came before that, but people realised, "Hey, maybe we have to dig it out and start using this functional programming thing"): you abstract things into functions without mutation, so you don't have this problem of hiding data races, and you also have ways of composing data structures into bigger data structures.
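The payoff is that pure functions glue together with no hidden hazards. A minimal sketch in Python (the `compose` helper is illustrative, not part of any standard library):

```python
from functools import reduce


def compose(*fs):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)


def inc(x):
    return x + 1


def double(x):
    return x * 2


# Pure functions hide nothing that can race: no shared state, no
# mutation, so combining them is always safe.
inc_then_double = compose(double, inc)
print(inc_then_double(10))  # double(inc(10)) == 22
```

This is composability in its cleanest form, and it is exactly the structure that category theory takes as its starting point.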
We want to get to the highest possible level of abstraction to help us express ideas that can later be translated into programs. That, for me, is the main practical motivation for category theory. But then I also started thinking about the theoretical, or more philosophical, motivation. When mathematicians discover things like this, they turn philosophical -- "Oh my god, we are discovering stuff!" -- as if you're not really creating mathematics, you're discovering some deep truth about the universe.
What do you think: is mathematics something that we invent, or is it built into the universe? For physicists, the answer is clearly "discovered"; physicists do experiments, throw atoms at each other, and discover the stuff that's around us. Mathematicians just sit down at a desk with a pencil, or walk around in a park, and think. What are they discovering? And now they are saying that since independent discoveries -- logic, type theory, and so on -- are all unified under the same thing, there must be some really deep truth, some Platonic ideal, that we are discovering. I was thinking about it, and I thought: no, there has to be a simpler explanation. And that goes back to the way we do mathematics, or the way we discover the universe.
We are human. We are evolved monkeys. Our brains evolved to do things like image processing, to answer the important questions: "Where is the predator?" "Where is the food?" Now we are trying to imitate this with computers, and we're finding that image processing is a really hard problem. But evolution has been working on it for a billion years, so our brains know how to do this stuff.
But then there are things that evolved much more recently. Suddenly we had these brains that can actually think abstractly -- we can count, we can communicate, we can organise things, we can do math and we can do science. This is a really fresh ability. We've been doing science for the last few thousand years, which is nothing on the evolutionary scale. And we're doing it with brains that did not evolve to do programming. Compared to the complexity of our visual cortex, this newly acquired ability to think abstractly is very fresh, and it's very primitive. It hasn't had time to evolve.
When we come head-to-head with a very complex problem, like how to provide food for our tribe, how do we solve it? We divide it into smaller problems that we can solve, and then we combine the solutions. This is the only way we know how to deal with complex situations, and it permeates everything we do so thoroughly that we don't even notice it. In every branch of science and mathematics, we can only see the things that can be chopped into pieces and then put together. No wonder they all look the same! We can only see the problems that have this structure. If a problem doesn't have this structure, we just don't see it; we say, "We cannot solve this problem, let's do something else."
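This chop-solve-combine pattern is exactly the shape of a divide-and-conquer algorithm; merge sort is perhaps the smallest honest example of it (a standard textbook sketch, not code from the lecture):

```python
def merge(left, right):
    """Combine two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]


def merge_sort(xs):
    """Divide-and-conquer: split, solve the halves, recombine."""
    if len(xs) <= 1:  # a piece small enough to solve directly
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # solve the sub-problems separately...
    right = merge_sort(xs[mid:])
    return merge(left, right)     # ...then compose the solutions


print(merge_sort([5, 3, 8, 1, 2]))  # [1, 2, 3, 5, 8]
```

The point is not the sorting; it's that the only problems this strategy can touch are the ones where the recombination step exists at all.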
Maybe the whole universe is like this. Maybe everything in this universe can be chopped into little pieces and then put together. Maybe that's a property of this universe, and our brains are just reflecting this.
Personally, I don't think so. Maybe I'm wrong. Hopefully I'm wrong.
I'm a physicist, and I saw what was happening in physics: we wanted to chop things into little pieces, and we were very successful at this. We found out that matter is built from atoms. But we didn't stop there. There are these elementary particles: electrons, protons, and so on. Then we found that at this lowest level, things are not as choppable as we thought. It seems that the elementary thing is not divisible, but it is also not a point.
Quantum theory gives us another non-choppable piece of knowledge. If you have a bigger system, you would like to separate it into elementary particles and say: I have a system of particles, I know the properties of these ten particles, I call the system something bigger -- an object -- and I can work out the structure of this object by looking at these particles. But it turns out, in quantum mechanics, that the states don't add up! A state of two particles is not a sum or a product or a convolution of single-particle states. It's a new state, which follows a different differential equation, and so on. We try to separate particles, but when we cut them apart, it turns out that something weird happens in between, where we are cutting.
So maybe, at the very bottom -- or maybe there is no bottom -- things are not separable, and things are not composable. Maybe this composability that we love so much is not a property of nature. Maybe it is just a property of our brains; our brains are such that we have to see structure everywhere, and if we can't find the structure, we just give up.
So in this way, category theory is not really about mathematics or physics; category theory is about our minds, about how our brains work. So it's more epistemology than ontology. Epistemology is about how we can reason about things, how we can learn about them. Ontology is about what things are. Maybe we cannot learn what things are, but we can learn how we can study them. That's what category theory tells us.