Why are we so clever? In evolutionary terms this isn't obvious: evolution tends to favour cheap solutions and the human brain is expensive. It consumes about 20% of our body's energy budget yet it only makes up 2% of our body mass. There are many species that do perfectly well with only a minimum of intelligence, so why did it make evolutionary sense for us humans, and some of our animal cousins, to develop powerful brains?
One possible reason is our sociable lifestyle. We usually think of our ability to cooperate, share knowledge, and be nice to each other as being a result of our intelligence: you need to be clever to recognise the advantages of cooperation, to recognise individuals, remember how they behaved in the past and predict their future actions. But things might also work the other way around. Once a degree of cooperation has emerged in a society of humans or animals, intelligent individuals do better and leave more offspring, so bigger and more powerful brains evolve as a result of natural selection.
A new study by scientists from Ireland and the UK has put this social intelligence hypothesis to the test, using mathematical games and very basic artificial brains. Their results suggest that cooperation can indeed breed intelligence.
                    | Player 2 cooperates | Player 2 defects
Player 1 cooperates |         6,6         |       1,7
Player 1 defects    |         7,1         |       2,2
The difficulty with cooperation is encapsulated in a mathematical game called the prisoner's dilemma. In this game two players have the option to cooperate with each other or not to cooperate (to defect). There are four possible outcomes — both cooperate, both defect, player 1 cooperates and player 2 defects, player 2 cooperates and player 1 defects — and each comes with a benefit/cost to the players (see the table above: the first number gives the pay-off for player 1 and the second number the pay-off for player 2).
The table shows that each player would do better defecting than cooperating, regardless of what the opponent does. So as long as both players act rationally and selfishly, defect is what they'll do. This is slightly paradoxical since they would have done better still if they had both cooperated. It's a bit like two nuclear powers locked in stalemate: everyone would be better off if they decommissioned their arms, but without certainty that the other power is going to cooperate, it makes more sense to keep them.
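For readers who like to see the logic spelled out, here is a minimal sketch in Python using the pay-offs from the table above; the dictionary layout and names are ours, purely for illustration.

```python
# A minimal sketch of the one-shot prisoner's dilemma pay-offs from the table
# above. The numbers (6, 1, 7, 2) are those used in this article; the helper
# names are illustrative, not taken from the study.

# PAYOFF[(my_move, their_move)] -> my pay-off ("C" = cooperate, "D" = defect)
PAYOFF = {
    ("C", "C"): 6,  # both cooperate
    ("C", "D"): 1,  # I cooperate, opponent defects
    ("D", "C"): 7,  # I defect, opponent cooperates
    ("D", "D"): 2,  # both defect
}

# Whatever the opponent does, defecting earns strictly more than cooperating:
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

print("Defection dominates, yet mutual cooperation (6,6) beats mutual defection (2,2).")
```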
Things change, however, when the game is played repeatedly: if you know that your opponent has cooperated in the past, it may make sense to trust them and cooperate too. The mathematician Martin Nowak has used the repeated prisoner's dilemma to explore how cooperation might emerge in societies of self-interested individuals. He created virtual societies of computer programs designed to play the prisoner's dilemma with each other using a variety of strategies. These included "always defect" and "always cooperate", but also more subtle ones that took their opponents' past behaviour into account. Each program would play every other program several times over a number of rounds, totting up the pay-off from each interaction. Nowak also equipped his programs with the ability to produce offspring: the bigger the total pay-off a program managed to accrue, the more children it would produce. And to reflect how evolution works in real life, he also allowed for random mutation, equipping some offspring with a strategy that varied slightly from their parent's.
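For readers who want to experiment, here is a rough Python sketch in the spirit of the simulations described above: a handful of strategies play the repeated game against each other, reproduce in proportion to their pay-off, and occasionally mutate. The particular strategies, pay-offs and mutation rate are illustrative choices of ours, not details of Nowak's actual models.

```python
import random

# Pay-offs per round, as in the table above.
PAYOFF = {("C", "C"): 6, ("C", "D"): 1, ("D", "C"): 7, ("D", "D"): 2}

def always_defect(history):
    return "D"

def always_cooperate(history):
    return "C"

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return history[-1] if history else "C"

STRATEGIES = [always_defect, always_cooperate, tit_for_tat]

def play_match(strat_a, strat_b, rounds=10):
    """Total pay-offs for a repeated game between two strategies."""
    hist_a, hist_b = [], []  # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

def next_generation(population, mutation_rate=0.05):
    """Reproduce in proportion to pay-off, with occasional random mutation."""
    totals = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            a, b = play_match(population[i], population[j])
            totals[i] += a
            totals[j] += b
    parents = random.choices(population, weights=totals, k=len(population))
    return [random.choice(STRATEGIES) if random.random() < mutation_rate else p
            for p in parents]

population = [random.choice(STRATEGIES) for _ in range(30)]
for generation in range(50):
    population = next_generation(population)
    counts = {s.__name__: sum(p is s for p in population) for s in STRATEGIES}
    print(generation, counts)
```

Running a toy simulation like this, and watching how the counts of each strategy rise and fall, gives a feel for the cycles described below.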
[Image caption: Why did we develop such powerful brains?]
As he sat back and watched his artificial society evolve over generations, Nowak found that it went through cycles. Initially defection would emerge as the most successful strategy (producing the most offspring) but soon cooperative strategies would begin to take hold and eventually become dominant. With so many trusting opponents, back-stabbing would then become profitable again, the peak of cooperation would be followed by one of defection, and so it went on in endless repetition (you can read about this in detail in the Plus article Does it pay to be nice?).
Nowak's work showed that in evolutionary terms it can indeed pay to be nice, but it did not make room for exploring the connection between cooperation and intelligence. The new study, conducted by Luke McNally and Andrew Jackson, both from Trinity College Dublin, and Sam Brown from the University of Edinburgh, took an approach similar to Nowak's, based on the repeated prisoner's dilemma and a related game called the repeated snowdrift game. But they replaced Nowak's simple computer programs with neural networks. These are mathematical objects that mimic the human brain: they are made up of interconnected nodes, artificial neurons, which act as processing units that transform the information they are given and then return an output.
When the networks used in the study met an opponent, they took the outcome of the last interaction with that opponent as input and then performed a number of calculations whose result indicated how they should behave in the current encounter: defect or cooperate. But the networks came with varying degrees of complexity, or "intelligence". Some would always behave in the same way — always defect or always cooperate — while more intelligent ones would base their decisions on more complex computations, taking varying amounts of past history into account.
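As an illustration, here is a toy version in Python of such a network-based player: it takes the outcome of the last encounter with an opponent as input, passes it through a small hidden layer, and returns a decision. The input encoding and network size are our own guesses for the sake of the example, not the architecture used in the study.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class NetworkPlayer:
    def __init__(self, hidden_nodes):
        self.hidden_nodes = hidden_nodes  # crude proxy for "intelligence"
        # Two inputs: my last move and the opponent's last move (1 = cooperate, 0 = defect).
        self.w_in = [[random.gauss(0, 1) for _ in range(2)] for _ in range(hidden_nodes)]
        self.w_out = [random.gauss(0, 1) for _ in range(hidden_nodes)]

    def decide(self, my_last, their_last):
        """Return 'C' or 'D' given the outcome of the previous encounter."""
        if self.hidden_nodes == 0:
            return "C"  # a zero-node network can only ever play a fixed move
        inputs = [my_last, their_last]
        hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in self.w_in]
        output = sigmoid(sum(w * h for w, h in zip(self.w_out, hidden)))
        return "C" if output > 0.5 else "D"

player = NetworkPlayer(hidden_nodes=3)
print(player.decide(my_last=1, their_last=0))  # reacting to having been defected on
```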
As in Nowak's work, the neural networks in the new study were able to produce offspring, but the number of offspring now depended on the pay-off they accrued minus a penalty for intelligence. This reflects the fact that having a large and powerful brain comes with a cost. Random mutation ensured that offspring networks might be a little more or a little less intelligent than their parent.
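Continuing the sketch, the selection step might look something like this, with a pay-off penalty proportional to brain size and a mutation that nudges the number of hidden nodes up or down; the cost parameter and mutation rate are again assumptions of ours, not values from the study.

```python
import random

BRAIN_COST = 0.5  # assumed pay-off deducted per hidden node

def fitness(total_payoff, hidden_nodes):
    # Bigger brains can earn more, but they come with a running cost.
    return max(total_payoff - BRAIN_COST * hidden_nodes, 0.0)

def mutate_brain(hidden_nodes, rate=0.1):
    # Offspring occasionally gain or lose a node: a little more or less intelligent.
    if random.random() < rate:
        return max(hidden_nodes + random.choice([-1, 1]), 0)
    return hidden_nodes

print(fitness(total_payoff=120, hidden_nodes=4))  # 118.0 under these assumptions
print(mutate_brain(4))
```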
The team let their neural network society evolve over generations, keeping track not only of the strategies that emerged, but also of the level of intelligence their networks evolved. They found that it was during the phases in which cooperation first starts to emerge in a society that brains tend to become bigger. "The strongest selection for larger, more intelligent brains occurred when the social groups were first beginning to start cooperating, which then kicked off an evolutionary Machiavellian arms race of one individual trying to outsmart the other by investing in a larger brain. Our digital organisms typically start to evolve more complex 'brains' when their societies first begin to develop cooperation," explains Jackson.
The more intelligent brains also produced more sophisticated strategies, involving forgiveness and patience as well as deceit and trickery. These results, the scientists argue, provide some evidence for the social intelligence hypothesis. So next time you're tempted to do the dirty on someone, remember: without cooperation and kindness, you may never have evolved the brain that enables you to cheat in the first place.