Does it pay to be nice? – the maths of altruism part II
As we saw in the previous article, we can use evolutionary game theory to show that it does pay to be nice when you repeatedly deal with the same person. Martin Nowak, from the Program for Evolutionary Dynamics at Harvard University, explored many other types of games using this mathematics, and the evolution of cooperation seemed to be inevitable in all of them.
Your reputation precedes you
Nowak had already discovered that playing according to the rules of direct reciprocity leads to the rise of cooperation. In direct reciprocity, my strategy for playing with you depends only on how you played in your previous games with me. But what if we had never met before and may never meet again? Instead, I might base my decision on how you played in all your previous games against other players. That is, my decision of how to play is based on your reputation.
This more complex game is called indirect reciprocity. A number of games are played in each round, where players are randomly paired such that one acts as a donor and the other as a recipient. The donor then either chooses to help the recipient (cooperate), giving the recipient a benefit b at a cost of c to themselves (where b>c), or chooses not to help them (defect). Now our priority isn't the immediate payoff from each game; instead we want to improve our reputation in order to increase our future success within the round (in terms of number of offspring, which depends on the total costs and benefits we accrue during a round). "I might help someone even though I don't expect it to be returned by that person, but it gives me a better reputation," says Nowak.
Strategies for playing indirect reciprocity games consist of a social norm and an action rule. The social norm gives players a way to judge other players' reputations and interpret their actions. These social norms might be very simple, such as "All players are good" or "All players are bad". Or they might be very nuanced, considering a player's behaviour and the reputations of their past opponents. For example: "A player is good if they helped those with good reputations and didn't help those with bad reputations". (Simple approaches use the same social norm for all players. Allowing social norms to evolve for individual players is a more mathematically challenging problem.)
Online shopping is a familiar example of indirect reciprocity. Shoppers cooperate by handing money over to online retailers, but only if the online feedback about the retailer shows that they have a good reputation. (Image by JuSun)
Mathematically, Nowak defined the social norm to be how a player's reputation changes, in our eyes, as we observe them playing with other opponents. A simple example would be that our assessment of a player's reputation is a number, r, which we set to zero until we observe them playing the game. Then their reputation increases by one unit each time we see them help and decreases by one unit each time we see that they don't (r can be any integer: positive, negative or zero). A more complex social norm might have a player's reputation increase (or decrease) only if we see them help (or not help) players with reputations larger than a certain value.
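As a rough illustration, the simple social norm just described (reputation starts at zero, rises by one when we observe help, falls by one when we don't) might be sketched in Python as follows. The player names and the function name are hypothetical, chosen just for this example:

```python
from collections import defaultdict

# Each observer keeps their own view of everyone else's reputation.
# A player we have never observed defaults to reputation 0.
reputations = defaultdict(int)

def observe(player, helped):
    """Update our assessment of a player's reputation after
    watching them act as a donor: +1 if they helped, -1 if not."""
    reputations[player] += 1 if helped else -1

# We watch Alice help twice and defect once:
observe("Alice", helped=True)
observe("Alice", helped=True)
observe("Alice", helped=False)
print(reputations["Alice"])   # 1
print(reputations["Bob"])     # 0: never observed, so reputation stays zero
```

Note that reputation is in the eye of the beholder: in the full model each player maintains their own table like this one, based only on the games they happen to have witnessed.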
The action rule defines how I will act in a game. It is similar to the probabilistic strategies of direct reciprocity, but this time my decision as a donor to help a recipient depends on how I perceive their reputation, rather than on their past history with me. For example, my strategy might be to help only those recipients whose reputation I judge to have at least some value, k (so I would help someone if their reputation r≥k).
Cooperative strategies are defined as those that would help a recipient even if the donor didn't know anything about the recipient's reputation (and so the recipient's reputation r=0). So I would be considered cooperative if my value for k was at most zero (k≤0). This means that not only would I help players with a positive reputation score, I would also help players that were playing for the first time or that I hadn't observed playing before (whose reputation score was, therefore, zero). And the lower the value of k, the more cooperative the strategy.
If, however, my value for k was greater than zero (k>0), then I would not help someone whose reputation was zero (for example, if they were playing for the first time). An all-out defector would have k=∞ and would always defect against all other players.
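Putting the action rule together with the costs and benefits of a single game, here is a minimal sketch. The particular values of b and c are assumptions made for the example, subject only to the article's condition b>c:

```python
# Hypothetical benefit and cost of one act of help, with b > c.
b, c = 3, 1

def play(donor_k, recipient_reputation):
    """One game: the donor helps if the recipient's reputation r
    satisfies r >= k. Returns (donor_payoff, recipient_payoff)."""
    if recipient_reputation >= donor_k:
        return -c, b     # cooperate: donor pays c, recipient gains b
    return 0, 0          # defect: nobody gains or loses

# A cooperative strategy (k <= 0) helps an unknown player (r = 0):
print(play(donor_k=0, recipient_reputation=0))               # (-1, 3)
# A strategy with k > 0 refuses to help an unknown player:
print(play(donor_k=1, recipient_reputation=0))               # (0, 0)
# An all-out defector (k = infinity) never helps anyone:
print(play(donor_k=float("inf"), recipient_reputation=100))  # (0, 0)
```

Notice that helping always costs the donor in the short term; the donor's hope is that the improved reputation attracts help from others later in the round.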
When Nowak ran computer tournaments based on indirect reciprocity he again found, when allowing for mutation and errors, that the population would cycle from unbending defectors, through cooperative strategies, to unconditional cooperators and back again (you can find out more about this in the previous article). But the most successful, in terms of how long they remained dominant in the population, were those strategies that behaved cooperatively and discriminated on the basis of their opponents' reputation, that is, those with k≤0.
Indirect reciprocity can explain many altruistic behaviours, such as acts of charity. "For example, suppose the university goes to a donor to ask for a donation," says Nowak. "The donor might not consider this based on repeated interaction with the university but rather, I'll give this donation and get a reputation as a valuable member of society." So giving to others not only makes us feel better about ourselves, but also makes others in our community think more highly of us, and our enhanced reputation might benefit us in the future.
Cleaner fish benefit from having a good reputation. Their clients, such as this cod, are more likely to let themselves be cleaned by those cleaner fish that have been observed to have done a good job on others. (Image by Richard Ling)
It's good to gossip
Although there are examples of indirect reciprocity in animals (such as cleaner fish and their clients), it is humans who have taken this behaviour to a whole new level with the development of language. "Animals have a more elementary version of indirect reciprocity," says Nowak. It seems that animals can't share their experiences of others' past behaviour. Instead they have to observe the behaviour of others directly. "No other organism has the full-blown indirect reciprocity because of the lack of human language. They can have indirect reciprocity by observation, but not augmented by communication."
Humans, however, are great at talking and we also love nothing more than a bit of social gossip, passing on who has done what to whom and trying to understand why. "We humans are very good at that," says Nowak. "And that is what I find fascinating: what could have been the [evolutionary] selection pressure that made humans? It's not just [group] hunting as that is done by other animals. It really is the complicated politics of indirect reciprocity." Nowak believes that the cooperative force of indirect reciprocity is not only responsible for the evolution of human language, but that it also drove the development of our understanding of social complexity. (You can read a fascinating paper by Nowak and colleague Natalia Komarova giving a mathematical model that describes how language evolves.) "Indirect reciprocity drove the selection of human language and social intelligence."
Family, friends and neighbours
As well as the games involving direct and indirect reciprocity, cooperation also emerges in a number of other evolutionary games. For example, spatial structure can change the outcome of games. In games where defection would win outright in a well-mixed population, says Nowak, cooperation triumphs if the players are restricted to only interacting with their neighbours: "Neighbours help each other." (You can play with some beautiful examples of spatial games at Christoph Hauert's website, VirtualLabs.)
The cooperative mechanisms of group and kin selection have been studied as part of evolution for decades but have also caused much controversy. Darwin himself investigated group selection, the idea that a trait that is beneficial to a group (as opposed to an individual) will be favoured by natural selection. Many examples were given to support this theory, for example vervet monkeys calling to warn their group of a predator, despite putting themselves in greater danger, or people sacrificing themselves to save the lives of strangers. However, there was much argument over when and why this behaviour arose. "[Previously] there wasn't a good mathematical approach to group selection, it was only a verbal discussion. But when you look at the mathematics it is clear when [group selection] is valid and when it isn't. The discussion disappears and mathematics decides the argument."
Evolutionary mathematics can explain the altruism in insect societies, such as leafcutter ant colonies, but it doesn't involve inclusive fitness. (Image by Adrian Pingstone)
The advantages of kin selection, helping a relative (rather than just an unrelated member of your social group, as in group selection), seem obvious. "For me, kin selection makes a lot of sense if you present it in a precise way," says Nowak. "I recognise my brother and I behave differently to my brother than to a stranger." However, things are not quite as straightforward as they seem. Nowak and his colleagues Corina Tarnita and Edward Wilson caused much controversy in 2010 when they published a paper criticising the standard mathematical argument for kin selection, called inclusive fitness theory. They showed that inclusive fitness theory had much greater limitations than people had realised. In particular, inclusive fitness could not be used to explain the evolution of the highly complex cooperative insect societies, such as leafcutter ant colonies, where millions of ants work and die so that only one individual, the queen ant, can reproduce.
The analysis of group and kin selection demonstrates the contribution of maths to evolution and biology. "That's what the role of mathematics is in the sciences. If you have a verbal discussion, you make certain assumptions and you think it leads to the following conclusions, but it's not rigorous." Nowak, like many other mathematicians and scientists, believes that if you can't give a clear mathematical description you haven't really understood the science. "Real understanding in science in terms of mathematics is ultimately elegant, it's ultimately simple, so we understand the situation well if we have a simple description. If we don't, then I'm not quite satisfied."
Nowak's research has ranged from studying human, animal and insect behaviour, to the development of language, and even delved into cancer and cell biology. "People often ask me: how can you work on so many different questions in so many different fields? It is always the same questions, it's always the mathematics of evolution. The maths of populations that are reproducing, competing and interacting. It's the approach that I'm using to study questions in the life sciences, in economics, and in the origin of life. It seems to me that evolution is such a fundamental principle that it should be everywhere, in every scientific description of the world, not just the biological. I'd be very curious [to see if I could find it] in the fundamental laws of physics!"
Can we save the world?
It seems that wherever you apply the mathematics of evolution, there is cooperation. These different mechanisms – direct and indirect reciprocity, spatial games, kin and group selection – that influence how people, animals, insects, or even molecules or cells interact all lead to the emergence of cooperation. Moreover, Nowak is convinced that cooperation isn't just an outcome of evolution, it is necessary for evolution. "I have slowly realised that cooperation plays a role in so many aspects of evolutionary organisation. That's why I made the argument that cooperation can be seen as the third fundamental pillar of evolution, next to mutation and natural selection. Without cooperation you wouldn't get the construction that is evident [in the world]."
Can we cooperate to save the world? (Image from NASA)
And cooperation might not just have been vital in our evolution, but also in our very survival. "The biggest problem we are facing is not one of medical research, though it would be great to cure cancer and many infectious diseases. And the biggest problem isn't economic, how to fix the economy. The biggest problem is how to maintain the stability of intelligent life on this planet." And the only way to solve that problem, Nowak thinks, is that we all become supercooperators, in the sense that we must cooperate with many other individuals everywhere on the globe, now and in the future. It is no longer enough to just play the evolutionary game with the people we know or even the people who are alive today. "Everything we are doing now has an implicit cost on the people to come. Eventually they will have to pay it, and maybe they will not be able to. That is the biggest problem we are facing."
So if cooperation is the answer, can this new mathematical understanding of its evolution show us how to become supercooperators? "I hope so. I hope it will lead to a global rational analysis of the situation where mathematics can play a role to identify the problems and identify the solutions." Let's hope that Nowak and his colleagues discover a clear and elegant mathematical argument for why we should cooperate with future generations, before it's too late.
About this article
Martin Nowak and his colleagues at the Program for Evolutionary Dynamics
Martin Nowak is Professor of Biology and of Mathematics at Harvard University and Director of Harvard’s Program for Evolutionary Dynamics. You can read a review on Plus of Nowak's latest book, SuperCooperators, written with Roger Highfield.
Rachel Thomas, editor of Plus, interviewed Martin Nowak in Boston in January 2012.