In the first part of this article we saw that some problems can take a very long time to solve, even on a computer. If you have dabbled in computer programming, you may also have come across the annoying situation in which a program doesn't complete, but keeps looping forever. So how can we know how long a particular computation will take? If a program we are running seems to be taking a long time, it would certainly be nice to know whether it is actually going to finish or has merely entered an infinite loop.
To effectively reason about the pure process of computation, regardless of the actual technology available, we need to reduce it to its most basic elements. One quite direct way to model computation is by considering it as some input, operated on by a program (i.e. a set of rules describing how to interpret and respond to the input), and producing some output.
The mathematician Alan Turing constructed just such a model of computation in 1936, and considered the question of determining whether or not a program will halt when given a certain input. He reasoned as follows.
Suppose we write a program HALT that, given another program and its associated input, can determine whether running that program on the input will halt or not. We can represent this as:

HALT(program, input) = "halts" if the program halts when run on the input, and "loops" otherwise.
This would no doubt be a very complicated program, but there doesn't seem to be any theoretical reason why it could not be written. So supposing that we have our program HALT, we can use it to write a new program OPPOSITE that analyses programs running with themselves as input, and acts as follows:

OPPOSITE(program): if HALT(program, program) reports "halts", then loop forever; otherwise, halt.
That is, if the given program, run on itself, does not halt, then OPPOSITE does, and vice versa. Although HALT may be a very complicated program, once we have it, the program OPPOSITE is trivial to write.
Everything appears fine, until we try to run OPPOSITE on itself: if OPPOSITE(OPPOSITE) halts, then by its own definition it must loop forever, and if it loops forever it must halt. We have found a logical contradiction, which shows that the program HALT cannot exist.
This is a truly remarkable result. We have found a program whose existence is logically impossible, and shown that the problem of knowing, in general, whether a program halts on some given data is undecidable. (This paradox is nicely described by Thore Husfeldt in the context of freezing iPhone apps).
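The diagonal argument above can be sketched in code. The following is a minimal illustration (not Turing's formal construction, and the function names are invented here): opposite takes any claimed halting-decider and does the reverse of whatever that decider predicts about opposite run on itself, so every candidate decider must be wrong somewhere.

```python
def opposite(program, halt):
    """Given a claimed halting-decider halt(program, input), behave
    contrary to whatever it predicts about running program on itself."""
    if halt(program, program):
        return "loops forever"   # stand-in for entering an infinite loop
    else:
        return "halts"           # stand-in for halting immediately

# Two candidate deciders; neither can be right about opposite itself:
says_halts = lambda p, i: True    # claims every program halts
says_loops = lambda p, i: False   # claims every program loops

print(opposite(opposite, says_halts))  # loops forever
print(opposite(opposite, says_loops))  # halts
```

Whatever answer a decider gives about OPPOSITE run on itself, OPPOSITE does the opposite, which is exactly the contradiction described above.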
Beyond exponential complexity
Maybe having a program such as HALT, able to analyse any other program and input, is simply too much to hope for. So let's now consider something a little more mundane: computing numbers given by mathematical functions. In the previous article we saw that some computations take so long they are considered intractable in practice. But we are now moving in an idealised domain, disregarding practical runtime and storage requirements. Are there mathematical functions that grow faster than exponentials and might take us to the limits of idealised computation?
To explore the possibility of super-exponential growth, we can consider the standard arithmetical operations as a hierarchy:
Level 1: Addition is repeated succession: a + b = a + 1 + 1 + ... + 1 (b 1s added to a).
Level 2: Multiplication is repeated addition: a × b = a + a + ... + a (b copies of a).
Level 3: Exponentiation is repeated multiplication: a^b = a × a × ... × a (b copies of a).

When viewed this way, the hierarchy is very naturally extended by defining:

Level 4: Tetration is repeated exponentiation: a↑↑b = a^(a^(...^a)) (a power tower of b copies of a).
Graham's number is bigger than the number of atoms in the observable Universe. Image: ESA and the Planck Collaboration.
These higher level operations (hyper-operations) open up vast new possibilities for generating growth. Tetration is like super-exponentiation, and each level beyond grows inconceivably faster than its predecessor. The best known way to represent this hierarchy is something called Knuth's up-arrow notation and its best known application is expressing Graham's number — a famous upper bound calculated for a problem in graph colouring, which is absolutely gargantuan. You can find out more in this article.
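The hyper-operation hierarchy translates directly into a recursive sketch. In the code below (the name hyper is my own choice), levels 1 to 3 use Python's built-in operators to keep the recursion depth manageable; only the levels from tetration upwards recurse explicitly, following the "each level repeats the level below" definition.

```python
def hyper(level, a, b):
    """Hyper-operation hierarchy: level 1 is addition, level 2
    multiplication, level 3 exponentiation, level 4 tetration, and so
    on. Each level applies the operation one level down b times."""
    if level == 1:
        return a + b
    if level == 2:
        return a * b            # repeated addition
    if level == 3:
        return a ** b           # repeated multiplication
    if b == 1:
        return a                # a single copy of a, at any level
    return hyper(level - 1, a, hyper(level, a, b - 1))

print(hyper(3, 3, 3))   # 3^3 = 27
print(hyper(4, 3, 2))   # 3^3 = 27 (a tower of two 3s)
print(hyper(4, 3, 3))   # 3^27 = 7625597484987 (a tower of three 3s)
```

One more step, hyper(4, 3, 4), would already be 3 raised to the power 7625597484987, a number far too large to print, which gives a taste of how violently each level outgrows the one below.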
But rather than focusing on a single, special number, we can actually generate numbers of this magnitude from seemingly straightforward recursive functions: for these functions a particular value is calculated using the values that came before it. An example is the Ackermann function A(m, n). It has two variables m and n, both natural numbers. The general formula is:

A(m, n) = n + 1                     if m = 0,
A(m, n) = A(m - 1, 1)               if m > 0 and n = 0,
A(m, n) = A(m - 1, A(m, n - 1))     if m > 0 and n > 0.
Hidden within this innocent-looking definition there lurks a monster! The function is very deeply recursive and although it starts off slowly, it grows incredibly quickly. Working through a simple case gives some insight into the degree to which this function repeatedly invokes itself to generate rapid growth.
Here are the first few steps in evaluating A(1, 2):

A(1, 2) = A(0, A(1, 1))
        = A(0, A(0, A(1, 0)))
        = A(0, A(0, A(0, 1)))
        = A(0, A(0, 2))
        = A(0, 3)
        = 4

The result is small, but the process hints at the maze of recursion that unfolds as the arguments increase. For m from 0 to 3, the function behaves as follows:

A(0, n) = n + 1
A(1, n) = n + 2
A(2, n) = 2n + 3
A(3, n) = 2^(n+3) - 3
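The definition translates directly into code. This naive sketch works only for tiny arguments: Python's recursion limit needs raising even for small cases, and anything like A(4, 2) is hopelessly beyond direct computation.

```python
import sys
sys.setrecursionlimit(100_000)   # the recursion gets deep very quickly

def ackermann(m, n):
    """Direct translation of the recursive definition of the
    Ackermann function for natural numbers m and n."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(1, 2))   # 4
print(ackermann(2, 3))   # 9  = 2*3 + 3
print(ackermann(3, 3))   # 61 = 2^6 - 3
```

Already ackermann(4, 1) takes hundreds of thousands of recursive calls, and ackermann(4, 2) is out of reach for any computer, despite being perfectly well defined.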
Using these, we can evaluate A(4, 2):

A(4, 2) = A(3, A(4, 1)) = A(3, 65533) = 2^65536 - 3,

a number that has 19729 digits. Just one step further and we find

A(4, 3) = A(3, A(4, 2)) = 2^(2^65536) - 3,

and we are already at a number of such incredible magnitude that its number of digits is itself approaching A(4, 2).
We have seen how recursion is capable of generating functions that grow outrageously faster than exponentials, and this raises the question of how a computer would cope with such functions. Are these values computable, or have the growth rates become too large? Well, as rapidly growing as it is, the Ackermann function is computable, at least in theory: however many lifetimes of the Universe the calculation might require, the recursion eventually completes for any m and n. To shake the foundations of computation we need to ask a different kind of question, such as the one we will consider in the third part of this article.
About the author
Ken Wessen has PhDs in Theoretical Physics and Human Biology, and a Graduate Diploma in Secondary Mathematics Education. He has taught both at university and in high schools, but most of his working life has been in investment banking, developing algorithms for automated trading of equities and foreign exchange. In his spare time he writes free software and develops online activities for mathematics learning at The Mathenaeum.