Constantinos Daskalakis has been awarded the Rolf Nevanlinna Prize at the International Congress of Mathematicians 2018. The prize is awarded every four years for outstanding contributions in mathematical aspects of information sciences. Daskalakis is being honoured for his "lively curiosity and infectious enthusiasm" and his "fearlessness in tackling difficult, complex, and longstanding problems".
Constantinos Daskalakis (third from the left) being awarded his Nevanlinna Prize.
One of the difficult problems Daskalakis has worked on affects many of us as we go about our everyday lives. When people analyse or design systems to be used by many people — such as road networks, real or online markets, or even dating apps — they often use the mathematics of game theory. In such situations people are constrained by certain rules, they are out to further some specific aim, and they can (mostly) be trusted to act rationally. This makes their actions more predictable and the consequences amenable to mathematical analysis. The idea of game theory is to harness this mathematical perspective to understand real-life human interactions, as they occur in markets, on the roads, in politics, and in many online situations.
An important notion in this context is that of an equilibrium, in particular a Nash equilibrium. When you throw together a collection of agents (people, cars, etc) in a strategic environment, they will probably start by trying out all sorts of different ways of behaving — all sorts of different strategies. Eventually, though, they might each settle on a strategy that suits them best, in the sense that no other strategy can serve them better given what everyone else is doing. This situation, when nobody has an incentive to change, is called a Nash equilibrium.
"For example, we all make decisions in a road network as to which route to choose to go from work to our home," explains Daskalakis. "Every day we optimise our route. Ultimately the system arrives at equilibrium when no driver can improve their travel time by changing their route." A Nash equilibrium is not necessarily a good thing: it could, for example, correspond to complete gridlock on the roads. But at least it's a stable situation: nothing changes because no one has an incentive to change.
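The route-choosing process Daskalakis describes can be simulated directly. Here is a minimal sketch (the numbers and function name are illustrative, not from the article): four drivers choose between two routes whose travel time equals the number of cars using them, and each driver switches whenever the other route would be strictly faster. The updates settle into a state where no driver can improve by switching — a Nash equilibrium.

```python
# A minimal sketch of best-response dynamics in a tiny congestion game.
# Four drivers pick between two routes; a route's travel time is simply
# the number of cars on it. Each driver switches routes whenever doing
# so would strictly cut their travel time. The process stops when no
# driver wants to deviate: that state is a Nash equilibrium.

def best_response_dynamics(num_drivers=4, max_rounds=100):
    routes = [0] * num_drivers  # everyone starts on route 0
    for _ in range(max_rounds):
        changed = False
        for i, r in enumerate(routes):
            other = 1 - r
            load_here = routes.count(r)            # includes driver i
            load_other = routes.count(other) + 1   # load if i moved over
            if load_other < load_here:             # strictly faster? switch
                routes[i] = other
                changed = True
        if not changed:  # nobody has an incentive to change: equilibrium
            return routes
    return routes

print(sorted(best_response_dynamics()))  # [0, 0, 1, 1]: a balanced split
```

In this toy game the equilibrium splits the drivers evenly, which happens to also be the socially best outcome; as the article notes, that is not guaranteed in general.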
It exists, but where is it?
John Nash, after whom the equilibrium is named, proved in 1950 that a Nash equilibrium always exists. "In a strategic environment, no matter how complex it is, no matter how many agents it has, and how many strategies each agent has, it's always possible to arrive at an equilibrium," explains Daskalakis. "It could be a good equilibrium for social welfare, or a bad equilibrium for social welfare — what Nash's theorem says is that the system can stabilise."
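Part of the power of Nash's theorem is that it allows for mixed strategies, where players randomise over their options. A quick numerical check (this example is illustrative, not from the article) shows this for matching pennies, a game with no equilibrium in pure strategies:

```python
# Matching pennies has no pure-strategy equilibrium, but Nash's theorem
# still guarantees one: each player randomising 50/50 over Heads/Tails.
# This sketch checks that against a 50/50 opponent, both of the row
# player's pure strategies earn the same expected payoff, so there is
# no incentive to deviate from the 50/50 mix.

# Payoff matrix for the row player (the column player gets the negative);
# rows and columns correspond to the strategies Heads, Tails.
A = [[1, -1],
     [-1, 1]]

col_mix = [0.5, 0.5]  # column player plays Heads/Tails with equal chance

expected = [sum(A[r][c] * col_mix[c] for c in range(2)) for r in range(2)]
print(expected)  # [0.0, 0.0]: the row player is indifferent between strategies
```

Since both players face the same indifference, neither can gain by changing strategy, which is exactly the equilibrium condition.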
John Nash won the 1994 Nobel Prize in Economics for his work in game theory. Image: Peter Badge / Typos1.
That's good to know, but the fact that a system can stabilise doesn't necessarily mean that it will. To see if a system is likely to attain its Nash equilibrium we better know how to find it ourselves, but this is exactly what Nash's proof doesn't tell us. It's an example of a non-constructive existence proof: it shows that something must exist by logical necessity, but doesn't show you how to construct it. Mathematicians on the whole are quite happy with these kinds of proofs (you might even have come across some at school, see here for an example), but for practical purposes they are useless: what good is it to know that something exists if you can't find it?
Inspired by Nash's result people soon began looking for algorithms that would find the Nash equilibrium of a given system. They found them too, but this didn't solve the practical problem: in general, it wasn't clear that the algorithms would be able to complete their task in a "reasonable" amount of time. In fact, some of these algorithms were shown to be hugely inefficient. This posed the worrying possibility that, in general, Nash equilibria are more mythical than real: somewhere over the rainbow they may exist, but in practice we will never find them.
One of Daskalakis' major contributions to the area was to show, together with his PhD supervisor Christos Papadimitriou and Paul Goldberg, that the worry is most likely justified. "My work is a critique of Nash's theorem coming from a computational perspective," he explains. "What we showed is that [while] an equilibrium may exist, it may not be attainable. The best supercomputers may not be able to find it."
"This theorem applies to games that we play, it applies to road networks, it applies to markets. In many complex systems it may be computationally intractable for the system to find a stable operational mode. The system could be wandering around the equilibrium, or be far away from the equilibrium, without ever being drawn to a stable state."
What makes the Nash equilibrium problem so hard?
Nash's proof isn't the only non-constructive existence proof in mathematics, far from it. In fact, it relies on another result called Brouwer's fixed point theorem, sometimes considered the daddy of all non-constructive results (we have explored it in this Plus article). As with Nash's equilibrium problem, nobody knows of an efficient computational version of Brouwer's theorem, that is, a way of finding the thing that's been proved to exist in a reasonable amount of time.
This opens up an intriguing question: is there perhaps something that all such problems have in common?
The easy parity lemma
Suppose you have a network consisting of a (finite) collection of nodes connected by links that all have a direction, so they can be represented by arrows. If the network has an unbalanced node — one that has more arrows coming into it than going out or vice versa — then there must be another unbalanced node.
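The reason the lemma holds can be demonstrated in a few lines (the example graph here is my own, not from the article): every arrow adds 1 to one node's out-degree and 1 to another's in-degree, so the imbalances (in-degree minus out-degree) across all nodes sum to zero. A single unbalanced node is therefore impossible.

```python
# A quick sketch of why the parity lemma holds: each arrow contributes
# +1 to one node's in-degree and +1 to another's out-degree, so the
# quantities (in-degree minus out-degree) must cancel out over the
# whole network. Unbalanced nodes therefore never occur alone.

from collections import defaultdict

def imbalances(edges):
    delta = defaultdict(int)  # in-degree minus out-degree, per node
    for u, v in edges:
        delta[u] -= 1  # an arrow leaves u
        delta[v] += 1  # an arrow enters v
    return {n: d for n, d in delta.items() if d != 0}

# Directed network: a -> b -> c, plus a shortcut a -> c
unbalanced = imbalances([("a", "b"), ("b", "c"), ("a", "c")])
print(unbalanced)                # {'a': -2, 'c': 2}: two unbalanced nodes
print(sum(unbalanced.values()))  # 0: the imbalances always cancel
```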
One suspect in this context is the easy parity lemma, which involves finding your way in and out of nodes in a network (see the box). In 1994 Papadimitriou decided to give a name to the collection of problems for which a solution is guaranteed to exist by dint of this lemma: it is called PPAD (for "polynomial parity argument on directed graphs"). Papadimitriou showed that the computational version of Brouwer's theorem belongs to this class, and also that it is as hard as any other problem in PPAD: it is PPAD complete.
The Nash equilibrium problem also belongs to PPAD and it's also PPAD complete: that's the result proved by Daskalakis, Papadimitriou and Goldberg mentioned above. In further work, Daskalakis showed that the PPAD-ness of the problem really is the spanner in the works: it's what makes the known algorithms for finding Nash equilibria inefficient, and its signature structure can't be avoided.
Daskalakis' work straddles the areas of game theory and complexity theory, a field of mathematics that aims to classify problems in terms of how hard they are to solve (find out more here). You may have heard about the P class, which consists of problems considered easy to solve, and the NP class, which contains problems for which no efficient algorithms are known. If you have, then you'll be interested to know that the PPAD class is contained in NP (and you might want to note that it's still not proven that P and NP are really different).
But if you're of a less theoretical bent, then you probably want to know what all this means in practice.
Why should we care?
"Our result means a few things: if you are an analyst who is trying to predict what happens in a system, you better be careful if you are going to apply the Nash equilibrium," says Daskalakis. "[Firstly] you may not be able to find [the equilibrium] because you don't have the algorithms. Secondly, and that's even worse from a philosophical standpoint, if the Nash equilibrium is intractable, then all bets are off as to whether the agents are going to get there." This isn't only frustrating for the analyst, but also for the agents interacting in a system: no matter what they do, they may never be able to reach a stable operational mode.
People who design systems, such as road networks or online markets such as dating sites or taxi hailing apps, are also affected by Daskalakis' results. When designing such a system, you want to optimise some objective: you want to make sure that traffic flows consistently, that potential dates are matched up efficiently, or that taxi drivers and riders are happy.
If you are counting on an equilibrium to deliver this happy state of affairs, then you better make sure the equilibrium can actually be reached. "You better be careful that the rules that you set inside your system do not lead to a situation where our theorem applies," says Daskalakis. "Your system should be clean enough and have the right mathematical structure so that equilibria can arise easily from the interaction of agents. [You need to make sure] that agents are able to get to equilibrium and that in equilibrium the objectives are promoted."
Constantinos Daskalakis with his Nevanlinna medal.
"Another option is to forget about the equilibrium and try to guarantee that your objective is promoted even [with] dynamically changing behaviour of people in your system."
Daskalakis’ results have inspired people working in relevant industries to concentrate on these two options, and they have provided complexity theory with some deeper insights into how hard certain problems really are. But they also provide some food for thought for mathematical philosophers. Non-constructive proofs are all over mathematics. Few mathematicians worry about them as they go about their day-to-day work, though there is a school of thought called constructivism, which endeavours to avoid them (find out more here). As Daskalakis has shown, there are situations in which not having a constructive proof matters. "I'm a computer scientist by training and also an engineer," he says. "For me constructiveness is super important. Unless you have no goal in ever actually attaining [the object in question] you should come up with constructive proofs."
You can find out more about Daskalakis' work in this lovely writeup by Allyn Jackson.