Edition e1.0.2, updated November 2, 2022. You're reading an excerpt of Making Things Think: How AI and Deep Learning Power the Products We Use, by Giuliano Giacaglia.
The truth is that a human is just a brief algorithm: 10,247 lines. They are deceptively simple. Once you know them, their behavior is quite predictable.
Westworld, season two finale (2018)
We humans have long deemed ourselves the pinnacle of cognitive ability among animals. Something unique about our brains lets us question our own existence and, at the same time, believe that we are kings of the animal kingdom. We build roads, the internet, and even spaceships, and we sit at the top of the food chain, so our brains must have something that no other brain has.* Our cognitive abilities keep us at the top even though we are not the fastest, strongest, or largest animals.
The human brain is special, but sheer mass is not the reason humans have more cognition than other animals. If it were, elephants would sit at the top of the pyramid because of their larger brains. But not all brains are the same.* Primates have a clear advantage over other mammals: evolution found an economical way to add neurons to their brains without the massive increase in average cell size seen in other animals.
Primates also have another advantage over other mammals: the ability to use complex tools. Humans aren't the only primates who can do this; chimpanzees, for example, use sprigs for many tasks, from scratching their backs to digging for termites. Tool use isn't restricted to primates, either. Crows also use sticks to extract prey from hiding places, and they can even improve their tools, such as by carving a hook at the end of a twig to better reach their prey.*
Other animals have cognitive abilities similar to humans'. Chimpanzees and gorillas, which cannot vocalize for anatomical reasons, learn to communicate with sign language. A chimpanzee in Japan named Ai (meaning "love" in Japanese) plays games on a computer better than the average human.* Through her extensive chimpanzee research, Jane Goodall showed that they can understand other chimpanzees' and humans' mental states and deceive others based on their behavior.* Even birds seem to know other individuals' mental states. For example, magpies cache food in the presence of onlookers and then move it to a secret location as soon as the onlookers are gone. Birds can also learn language. Alex,* an African gray parrot owned by psychologist Irene Pepperberg,* learned to produce words that symbolize objects.* Chimpanzees, elephants,* dolphins,* and even magpies* appear to recognize themselves in the mirror.*
So, what makes humans smarter than chimpanzees, which are, in turn, smarter than elephants? Professor Suzana Herculano-Houzel's research showed that the number of neurons in the mammalian cerebral cortex and the bird pallium correlates strongly with cognitive capability.*
The cerebral cortex and bird pallium are the outermost parts of the brain and more evolutionarily advanced than other brain regions. The more neurons in these specific regions, regardless of brain or body size, the better a species performs at a given task. Birds, for example, have a large number of neurons packed into their brains compared to mammals, even though their brains are smaller.
Not only that, but the size of the neocortex, the largest and most evolutionarily recent part of the cortex, also constrains group size in animals, that is, the number of social relationships they can maintain.
Robin Dunbar suggests that there is a cognitive limit to the number of people with whom you can maintain a relationship. His work led to what is called Dunbar's number: he posits that the limit is 150, based on the size of the human brain and its number of cortical neurons.*
Figure: Animals' cognitive ability and the respective number of cortical and pallial neurons in their brains.* The figure shows a clear correlation between cognitive performance and the number of cortical or pallial neurons. The performance percentage on the y-axis measures completion of a simple task.
There is a simple answer for how our brains can share the same evolutionary constraints as other animals' and yet be advanced enough to create language and develop tools as complex as ours. Being primates gives humans the advantage of a large number of neurons packed into a small cerebral cortex.*
What do animal brains have to do with AI systems and humans? First, the cognitive capacity of some animals suggests that we are not as unique as some think. While some argue that certain capabilities belong only to humans, they have been proven wrong time and again. Second, the correlation between cognitive ability and the number of neurons might indicate that neural networks will perform better as the number of artificial neurons increases. These artificial neural networks, of course, need the correct data and the right type of software, as discussed in the previous section.
While the number of neurons affects animals' cognitive ability, their brains have many more neurons than most deep learning models. Today's neural networks have around 1 million neurons, about the same number as a honeybee. It might not be a coincidence that neural networks perform better at different tasks as they increase in size. As they approach the number of neurons in a human brain, around 100 billion, they may come to perform all human tasks with the same capability.
A clear correlation exists between the cognitive capacity of animals and the number of pallial or cortical neurons. Therefore, it follows that the number of neurons in an artificial neural network should affect the performance of these models since neural networks were designed based on how neurons interact with each other.
A neural network can represent any kind of program, and neural networks that have a larger number of neurons and layers can represent more complex programs. Because more complex problems require more complicated programs, larger neural networks are the solution. As machine learning evolved to make more efficient algorithms, neural networks needed more layers and neurons. But with that advancement came the problem of figuring out the weights of all these neurons.
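One way to see why larger networks can represent more complex programs is to count their adjustable weights: each extra layer or neuron multiplies in a new block of parameters to tune. A minimal sketch, with made-up layer sizes chosen only for illustration:

```python
# Count the weights connecting consecutive layers of a fully connected
# network. Layer sizes here are illustrative, not from the book.
def num_weights(layer_sizes):
    """Each pair of adjacent layers contributes (size_a * size_b) weights."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

small = num_weights([10, 10])             # a single layer of connections
large = num_weights([10, 100, 100, 10])   # deeper and wider

print(small, large)  # 100 vs. 12000
```

Going from one small layer of connections to a slightly deeper, wider network multiplies the number of weights that must be determined, which is exactly the problem backpropagation was invented to make tractable.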
With 1,000 connections, at least 2^1,000 configurations are possible, assuming that each weight can be either 0 or 1. Since the weights are usually real numbers between 0 and 1, the number of configurations is infinite. So, figuring out the weights became intractable, but backpropagation solved this problem. The technique helped researchers determine the weights by changing them on the last layer first, and then moving down the layers until reaching the first one. This made the problem more tractable and allowed developers and researchers to use multilayer neural networks for different algorithms. Notably, this work was conducted independently from research in neuroscience.
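The last-layer-first weight updates described above can be sketched in a few lines. This is a toy two-layer network learning XOR; the layer sizes, learning rate, and iteration count are arbitrary choices for the illustration, not anything prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a problem a single-layer network cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: compute the error at the output layer first,
    # then propagate it down to the layer below.
    d_out = (out - y) * out * (1 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta

    # Update the last layer first, then the one beneath it.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)
```

After training, the loss has dropped from its starting value: the error at the output is enough to determine how every weight, even in earlier layers, should change.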
Years of research suggest that something like the backpropagation technique used in computer science also happens in the brain. Neuroscientists have models showing that the human brain could employ a similar method for learning, performing much the same learning algorithm that researchers created to update their artificial neural networks. Short pulses of dopamine* are released onto many dendrites, driving synaptic learning in the human brain; they encode a prediction error, the failure to predict what was expected. In deep learning, backpropagation works by updating the neural network weights based on the error between the model's output and the expected output. Both the brain and artificial neural networks use these errors to update their synapses or weights. Research on the brain and in computer science seem to converge. It is as if mechanical engineers developed airplanes only to discover that birds use the same technique. In this case, computer scientists developed artificial neural networks that demonstrate how brains work.
Human brains* and AI algorithms developed separately and over time, but they still perform in similar ways. It might not be a coincidence that billions of years of evolution led to better-performing algorithms as well as improved techniques to learn and interact with the environment. Therefore, it is valuable to understand how the brain operates and compare it to the software that computer scientists develop.
The algorithms that win at games like Go or Dota 2 use reinforcement learning to train multilayer neural networks. The animal brain also uses reinforcement learning, via dopamine. But research shows that the human brain performs two types of reinforcement learning, one on top of the other. This theory describes a technique called learning to learn, also known as meta-reinforcement learning, which may benefit machine learning algorithms.
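The dopamine-driven learning described above is often modeled as temporal-difference learning, where a reward-prediction error, the gap between what happened and what was expected, nudges value estimates. A toy sketch, with an invented chain of states and a single reward at the end (all numbers here are illustrative):

```python
import numpy as np

# A simple chain of states: s0 -> s1 -> s2 -> s3 -> s4 (reward at the end).
states = 5
V = np.zeros(states)       # learned value estimates for each state
alpha, gamma = 0.1, 0.9    # learning rate and discount factor (arbitrary)

for _ in range(200):
    for s in range(states - 1):
        reward = 1.0 if s + 1 == states - 1 else 0.0
        # Prediction error: outcome plus discounted future value,
        # minus what was expected. This is the quantity dopamine
        # pulses are thought to resemble.
        delta = reward + gamma * V[s + 1] - V[s]
        V[s] += alpha * delta  # update the estimate toward the outcome
```

After many passes, states closer to the reward carry higher values, so the prediction error, and with it the learning signal, shrinks as outcomes become expected.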
Dopamine is the neurotransmitter associated with the feeling of desire and motivation.