
A conversation with Stephen Cook

Stephen Cook is a pioneering mathematician and computer scientist. Back when computers were still made of vacuum tubes, Cook started thinking about the complexity of computer algorithms and came up with the concept of NP complete problems. Last year we had the pleasure of meeting Cook at the Heidelberg Laureate Forum, where (along with other journalists) we asked him questions about his work, computer science and artificial intelligence. To find out what NP complete problems are, read What's your problem?. Otherwise, see the very brief reminder in the box below.

You can also watch a video of the interview.

An extremely superficial look at some complexity classes

Problems in the class P are thought of as easy (in that a computer algorithm can solve them in a feasible amount of time), while problems in the class NP are generally thought to be harder (in that even a computer might take billions of years to solve them). NP complete problems can be thought of as the hardest problems in NP, and they are all equivalent: if you can solve one, you can solve the others too.

This hierarchy of complexity leads to one of the most famous questions in mathematics: are there problems in NP that really are harder than those in P, or is it just that we haven't found the best algorithms to solve them yet? It's known as the P versus NP problem.

A famous problem in this context is the travelling salesman problem: given a number of cities, find the shortest route that visits every city exactly once. The problem is thought to be NP-hard (at least as hard as any problem in NP), but we can't rule out that one day someone will find an efficient algorithm to solve it.
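To get a feel for why the exact problem is so hard, here is a rough brute-force sketch of our own in Python (it is not something from the interview, and the four-city distance table below is made up). It simply tries every possible ordering of the cities, and the number of orderings grows factorially with the number of cities.

    from itertools import permutations

    def shortest_route(distances):
        """Brute-force travelling salesman sketch.

        distances[i][j] is the distance between city i and city j.
        Every ordering of the n cities is tried, so the work grows like n!,
        which is why the exact problem quickly becomes infeasible.
        """
        best_route, best_length = None, float("inf")
        for order in permutations(range(len(distances))):
            length = sum(distances[a][b] for a, b in zip(order, order[1:]))
            if length < best_length:
                best_route, best_length = order, length
        return best_route, best_length

    # A made-up table of distances between 4 cities: 4 cities means only
    # 4! = 24 orderings to check, but 20 cities already means 20!
    # (about 2.4 * 10^18) of them.
    example = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]
    print(shortest_route(example))

Practical solvers avoid this exhaustive search by using clever heuristics and approximations, which is exactly the distinction Cook draws later in the interview.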

Why did you become interested in theoretical computer science, rather than any other area of maths?

Cook: We're talking about a long time ago now. When I went to graduate school in 1961 computers were pretty young: they were vacuum tubes and thousands of other [components]. My advisor [as a graduate student], Hao Wang, who was a logician, also had an interest in computers. He worked for IBM for a while. That influence [made me realise] that there are interesting questions in complexity theory.

Please tell us about your current work on complexity classes, such as P and NP.

Cook: We have a whole range of difficulties of complexity classes. Polynomial time [P] is supposed to mean feasible: you can [solve problems in P] in a reasonable amount of time. But we have smaller classes where we are looking at the amount of memory needed in order to compute. The small class there is logarithmic space, in which the number of memory bits [needed] is only a multiple of the logarithm of the size of the input. Logarithmic space is supposedly a small part of [the P class]. But we complexity theorists are very bad at separating complexity classes. It's not just separating P and NP. Logarithmic space is supposedly a very small subset of P. Everything in logarithmic space can be done in polynomial time, but we cannot prove that [logarithmic space and P] are different. And it's much worse than that: we can't even separate logarithmic space from NP.
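To give a feel for what "logarithmic space" means, here is a small illustrative sketch of our own (not part of the interview). It counts the 1s in a stream of bits: the input is read one symbol at a time and never stored, and the only working memory is a counter that needs about log2(n) bits for an input of length n.

    def count_ones(bit_stream):
        """Count the 1s in a stream of bits using only a counter.

        The input is read one symbol at a time and never stored; the only
        working memory is `count`, which needs roughly log2(n) bits for an
        input of length n. That is the flavour of a logarithmic-space
        algorithm: tiny memory compared with the size of the input.
        """
        count = 0
        for bit in bit_stream:
            if bit == 1:
                count += 1
        return count

    # The input is produced on the fly by a generator, so it is never
    # held in memory all at once.
    def bits(n):
        for i in range(n):
            yield i % 2

    print(count_ones(bits(1_000_000)))  # 500000, using only a ~20-bit counter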

Will we need a completely new theoretical approach to separate the complexity classes and to solve the P versus NP problem?

Cook: We certainly need some new ideas, that's for sure. I'm not sure what you mean by "approaches". Our approach is to think hard about the problem.

But is the barrier that you hit when you try to prove that logarithmic space is smaller than P similar to the barrier that you hit when you try to prove that P is smaller than NP?

Cook: The problems are definitely related. But the sad story is that we just don't know how to proceed to [solve them]. Obviously a huge amount of effort has been put into that. I've been particularly thinking about separating logarithmic space from P because we have simple examples of problems that are in P, and there seems to be no way of doing them in logarithmic space [which suggests that logarithmic space and P really are different]. But there seems to be no way to prove it.

Stephen Cook at a press conference at the Heidelberg Laureate Forum 2017. Picture/Credit: Christian Flemming/HLF.

What does the NP hardness of problems like the travelling salesman problem mean in practice, for people who do really need to solve such problems in real life?

Cook: What's NP hard, of course, is finding the exact best order of cities. That's very hard, but there are algorithms to approximate the best route. You can argue 'why do we need to have the exact [solution]?'. That's true. But there are certainly questions where [an approximation isn't good enough]. If you're in cryptography and you want to show that you can't break the RSA encryption scheme, then [to break the scheme] it doesn't help if you can approximate the [solution]. So there are reasons for showing that problems are impossible.

Will quantum computers help with solving NP complete problems?

Cook: Quantum computing in principle seems to be very powerful. For example, quantum computers in principle can factor large numbers into their prime factorisations. My understanding, though, is that in practice quantum computers are well behind what they are supposed to be able to do. Actually making them work seems to be a huge problem.

They are able to solve problems that are in NP that we don't know how to solve in polynomial time. The factoring problem is the best example: when you have [two] 2000-digit prime numbers it's easy to multiply them, but when you are given the product, finding the prime factors [is something] we don't know how to do [in polynomial time]. The RSA encryption scheme depends on that assumption. In principle quantum computers are supposed to be able to do that; my understanding, though, is that in practice they can't — yet.
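The asymmetry Cook describes is easy to see in a small experiment of our own (the primes below are tiny compared with the 2000-digit numbers he mentions). Multiplying two primes is instant; recovering them from their product by the naive method of trial division already takes noticeable work, and no known classical algorithm manages it in a reasonable time at cryptographic sizes.

    import math

    def smallest_prime_factor(n):
        """Find the smallest prime factor of n by naive trial division.

        Multiplying two primes takes essentially no time, but undoing the
        multiplication this way can need up to sqrt(n) divisions, which is
        exponential in the number of digits of n. Numbers with thousands of
        digits are far beyond any such search (and beyond all known
        classical factoring algorithms running in reasonable time).
        """
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d
        return n  # n itself is prime

    p, q = 104_729, 1_299_709        # two small primes (illustration only)
    n = p * q                        # the easy direction: instant
    print(smallest_prime_factor(n))  # the hard direction: ~100,000 divisions
                                     # here, and hopeless at cryptographic sizes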

How did you come up with the concept of NP complete problems?

Cook: My advisor, Hao Wang at Harvard, was interested in logic and recursion theory. What led me to this idea was the theory of computable functions, which has so-called recursively enumerable sets. You had recursively enumerable complete sets and they were all equivalent to the hardest recursively enumerable set. Of course the difference was that these sets were not decidable: there was no algorithm that could solve them at all. But that was the analogy. Of course NP complete sets are computable, though not feasibly computable. Presumably, we guess, it takes an exponential amount of time to solve the problem. But the analogy was there. That's where I got the idea.

How far away are we from creating an artificial intelligence that can trick us into thinking that it's human (pass the so-called Turing test)?

A monarch butterfly. Image: Kenneth Dwain Harrelson, Creative Commons.

Cook: It's hard to know. That's not my field; I have lots of colleagues thinking about that. I guess the worry is that the AI robots can start competing with us in more and more areas. So what are we to do then? I don't know the answer to that.

Self-driving cars [for example] seem to be developing pretty fast; there seems to be some sense that they are going to exist, that they will replace drivers, and that they are ultimately going to be safe. That's relatively easy. But replacing mathematical thinking, and other kinds of thinking, is a different question; that's much harder to say. Ultimately I suppose there's no reason to think that machines can't do what people can do. There's no barrier there that I can think of; it's just a question of time.

So do our minds work just like computers do?

Cook: I think our brains are computers. There are billions of neurons and they are extremely complicated computers. That's why ultimately I assume that we can build computers that can do what we do. I don't see any barrier there. I know there exist scientists who think there is a barrier, but I can't see it. That's the way it is. What to do about it is a different question. What is our role? What the role of humans will be is a different question. Right now of course our minds seem to be more sophisticated than robots and AI. But that's just the beginning of what's going to happen.

As I mentioned before, and as everyone knows, we have extremely complex and sophisticated brains. [But an interesting point is that] even insects can do amazing things. Look at the monarch butterflies that manage to find a way from Toronto to Texas or Mexico and all know where to go. At this point we can't build machines that are as small as butterflies and can do anything like that. So in a sense there is a long way to go. And of course butterflies have relatively small brains.

Also, I watch the squirrels in the back yard climbing the trees and jumping here and there. That's not thinking, that's motor control. They have huge motor control: they can keep their balance and they estimate where to go, and this and that. We can't do that. I guess there's a long way to go. I just can't see why ultimately it can't be done.
