
Not just a matter of time: The halting problem

Ken Wessen

In the first part of this article we saw that some problems can take a very long time to solve, even on a computer. If you have dabbled in computer programming, you may also have come across the annoying situation in which a program doesn't complete, but keeps looping forever. So how can we know how long a particular computation will take? If the program we are running seems to be taking a long time, it would certainly be nice to know whether it is actually going to finish or has simply entered an infinite loop.


Alan Turing.

To effectively reason about the pure process of computation, regardless of the actual technology available, we need to reduce it to its most basic elements. One quite direct way to model computation is by considering it as some input, operated on by a program (i.e. a set of rules describing how to interpret and respond to the input), and producing some output.

The mathematician Alan Turing constructed just such a model of computation in 1936, and considered the question of determining whether or not a program will halt when given a certain input. He reasoned as follows.

Suppose we write a program HALT, that, given another program and its associated input, can determine whether running that program on the input will halt or not. We can represent this as:

  \[  \mbox{\textbf{HALT}(program, input)}=\left\{ \begin{tabular}{l} \mbox{Output YES, if the program does halt on the input,} \\ \mbox{Output NO, otherwise.} \end{tabular} \right. \]

This would no doubt be a very complicated program, but there doesn't seem to be any theoretical reason why it could not be written. So supposing that we have our program HALT, we can use it to write a new program OPPOSITE that analyses programs running with themselves as input, and acts as follows:

  \[  \mbox{\textbf{OPPOSITE}(program)}=\left\{ \begin{tabular}{l} \mbox{Halt, if \textbf{HALT}(program, program) returns NO,} \\ \mbox{Loop forever, otherwise.} \end{tabular} \right. \]

That is, if the given program does not halt when run on itself, the program OPPOSITE halts, and vice versa. Although HALT may be a very complicated program, once we have it, the program OPPOSITE is trivial to write, as the sketch below suggests.
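To make the construction concrete, here is a minimal sketch in Python. The names halts and opposite are purely illustrative: halts stands for the hypothetical program HALT whose impossibility we are about to establish.

def halts(program, data):
    """Hypothetical: return True if program(data) eventually halts,
    and False if it loops forever. Turing's argument shows that no
    such function can actually be written."""
    ...

def opposite(program):
    """Do the opposite of whatever halts() predicts for program run on itself."""
    if halts(program, program):
        while True:      # the program halts on itself, so loop forever
            pass
    else:
        return           # the program loops forever on itself, so halt

# The contradiction arises when we ask what opposite(opposite) does:
# if it halts, then halts(opposite, opposite) was True, so it should loop forever;
# if it loops forever, then halts(opposite, opposite) was False, so it should halt.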

Everything appears fine, until we try to run OPPOSITE on itself: if OPPOSITE(OPPOSITE) halts, then HALT(OPPOSITE, OPPOSITE) returns YES, so by its own definition OPPOSITE(OPPOSITE) must loop forever; and if OPPOSITE(OPPOSITE) loops forever, HALT returns NO, so OPPOSITE(OPPOSITE) must halt. We have found a logical contradiction that shows the program HALT cannot exist.

This is a truly remarkable result. We have found a program whose existence is logically impossible, and shown that the problem of knowing, in general, whether a program halts on some given data is undecidable. (This paradox is nicely described by Thore Husfeldt in the context of freezing iPhone apps).

Beyond exponential complexity

Maybe having a program such as HALT, able to analyse any other program and input, is simply too much to hope for. So let's now consider something a little more mundane: computing numbers given by mathematical functions. In the first part of this article we saw that some computations take so long they are considered intractable in practice. But we are now moving in an idealised domain, disregarding the practicality of runtime and storage requirements. Are there mathematical functions that grow faster than any exponential and might take us to the limits of idealised computation?

To explore the possibility of super-exponential growth, we can consider the standard arithmetical operations as a hierarchy:

Level 1: Addition is repeatedly adding 1s (repeated succession): $a+\underbrace{1+1+\ldots+1}_n=a+n$
Level 2: Multiplication is repeated addition: $\underbrace{a+a+a+\ldots+a}_n=n \times a$
Level 3: Exponentiation is repeated multiplication: $\underbrace{a \times a \times a \times \ldots \times a}_n=a^n$

When viewed this way, the hierarchy is very naturally extended by defining:

Level 4: Tetration is repeated exponentiation: ${}^{n}a=a^{a^{a^{\ldots^{a}}}}$ (a power tower of $n$ $a$'s).

Level 5: Pentation is repeated tetration: ${}^{{}^{{}^{\ldots a}a}a}a$ (a tetration tower of $n$ $a$'s).
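To see the pattern behind these levels, here is a minimal recursive sketch in Python (the function name hyper and the level numbering are illustrative, not standard library routines). Levels 1 to 3 are the ordinary operations, and each higher level simply repeats the one below it:

def hyper(a, n, level):
    """Apply the level-th hyper-operation to a and n."""
    if level == 1:
        return a + n       # addition
    if level == 2:
        return a * n       # multiplication
    if level == 3:
        return a ** n      # exponentiation
    # Level 4 and above: repeat the operation one level down, n times,
    # starting from 1 (the empty tower).
    result = 1
    for _ in range(n):
        result = hyper(a, result, level - 1)
    return result

print(hyper(2, 3, 3))   # 8, i.e. 2^3
print(hyper(2, 3, 4))   # 16, the power tower 2^2^2
print(hyper(2, 4, 4))   # 65536, the power tower 2^2^2^2

Even hyper(3, 3, 4) is already $3^{3^{3}}=3^{27}=7625597484987$, and a pentation such as hyper(3, 3, 5) calls for a power tower of $7625597484987$ threes, far beyond anything a real computer could evaluate.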

Graham's number is bigger than the number of atoms in the observable Universe. Image: ESA and the Planck Collaboration.

These higher level operations (hyper-operations) open up vast new possibilities for generating growth. Tetration is like super-exponentiation, and each level beyond grows inconceivably faster than its predecessor. The best known way to represent this hierarchy is something called Knuth's up-arrow notation and its best known application is expressing Graham's number — a famous upper bound calculated for a problem in graph colouring, which is absolutely gargantuan. You can find out more in this article.

But rather than focusing on a single, special number, we can actually generate numbers of this magnitude from seemingly straightforward recursive functions, in which a particular value is calculated using the values that came before it. An example is the Ackermann function. It has two variables $m$ and $n$, both natural numbers, and is defined by:

  \begin{align*} A(0,n) &= n+1 \\ A(m,0) &= A(m-1,1) \\ A(m,n) &= A(m-1,A(m,n-1)) \end{align*}

Hidden within this innocent-looking definition there lurks a monster! The function is very deeply recursive, and although it starts off slowly, it grows incredibly quickly. Working through a simple recursion gives some insight into the degree to which this function repeatedly invokes itself to generate rapid growth.
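The definition translates almost line for line into code. Here is a minimal Python sketch; the memoisation with functools.lru_cache only avoids recomputing small values, and anything much beyond $m=3$ exhausts the machine long before it finishes.

from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """The Ackermann function, transcribed directly from its definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(1, 2))   # 4
print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61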

Here are the first few terms:

  \begin{align*} A(0,0) &= 1 \\ A(0,1) &= 2 \\ A(1,0) &= A(0,1) = 2 \\ A(1,1) &= A(0,A(1,0)) = A(0,2) = 3 \\ A(1,2) &= A(0,A(1,1)) = A(0,3) = 4. \end{align*}

The result is small, but the process hints at the maze of recursion that occurs as the parameters increase. For $m$ from $1$ to $3$, the function behaves as shown below:

  \begin{align*} A(1,n) &= n+2 \\ A(2,n) &= 2n+3 \\ A(3,n) &= 2^{n+3}-3 \end{align*}
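To see where the next row of the hierarchy comes from, note that $A(4,n)=A(3,A(4,n-1))=2^{A(4,n-1)+3}-3$, so

  \begin{align*} A(4,0) &= A(3,1) = 2^{4}-3 = 13 \\ A(4,1) &= A(3,A(4,0)) = A(3,13) = 2^{16}-3 = 65533. \end{align*}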

Using these, we can evaluate $A(4,2)$ as

  \[ A(4,2) = 2^{65536}-3, \]    

a number that has 19729 digits. Just one step further and we find

  \[ A(4,3) = 2^{2^{65536}}-3, \]    

and we are already at a number of such incredible magnitude that its number of digits is itself a number with nearly 20,000 digits.
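Numbers like $A(4,2)$ are still just within reach of exact arithmetic, and the digit count quoted above is easy to check in Python. The snippet below uses the closed form rather than the recursion, which would never finish.

import math

# A(4,2) = 2^65536 - 3. The number of digits of 2^65536 is
# floor(65536 * log10(2)) + 1, and subtracting 3 does not change the count.
digits_A42 = math.floor(65536 * math.log10(2)) + 1
print(digits_A42)   # 19729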

We have seen how recursion is capable of generating functions that grow outrageously faster than exponentials, and this raises the question: how would a computer cope with such functions? Are these values computable, or have the growth rates become too large? Well, rapidly growing as it is, the Ackermann function is computable, at least in theory. After however many lifetimes of the Universe the calculation might require, the recursion eventually completes for any $m$ and $n$. To shake the foundations of computation we need to ask a different kind of question, such as the one we will consider in the third part of this article.


About the author

Ken Wessen

Ken Wessen has PhDs in Theoretical Physics and Human Biology, and a Graduate Diploma in Secondary Mathematics Education. He has taught both at university and in high schools, but most of his working life has been in investment banking, developing algorithms for automated trading of equities and foreign exchange. In his spare time he writes free software and develops online activities for mathematics learning at The Mathenaeum.