Not just a matter of time: The halting problem

      Ken Wessen
      12 October, 2016

In the first part of this article we saw that some problems can take a very long time to solve, even on a computer. If you have dabbled in computer programming, you may also have come across the annoying situation in which a program doesn't complete, but keeps looping forever. So how can we know how long a particular computation will take? If the program we are running seems to be taking a long time, it would certainly be nice to know whether it is actually going to finish or has entered an infinite loop.

Alan Turing.

To reason effectively about the pure process of computation, regardless of the actual technology available, we need to reduce it to its most basic elements. One quite direct way to model computation is as some input, operated on by a program (i.e. a set of rules describing how to interpret and respond to the input), to produce some output.
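
For a flavour of what such a model looks like in practice, here is a minimal sketch in Python rather than in Turing's original formalism. The run_turing_machine function and the binary-increment rule table are illustrative inventions, not anything standardised: a tape of symbols, a read/write head, and a finite table of rules.

    # A minimal sketch of a rule-based machine: tape, head, rule table.
    # The hypothetical rule table below increments a binary number by 1.

    def run_turing_machine(rules, tape, state="start", max_steps=10_000):
        """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
        cells = dict(enumerate(tape))  # sparse tape; missing cells are blank "_"
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                lo, hi = min(cells), max(cells)
                return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return None  # gave up: we cannot tell whether it would ever halt

    # Move right to the end of the number, then add 1, propagating carries left.
    increment_rules = {
        ("start", "0"): ("0", "R", "start"),
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),
        ("carry", "1"): ("0", "L", "carry"),
        ("carry", "0"): ("1", "R", "halt"),
        ("carry", "_"): ("1", "R", "halt"),
    }

    print(run_turing_machine(increment_rules, "1011"))  # prints 1100 (11 + 1 = 12)

Notice the max_steps cut-off: if the machine has not halted by then, the simulator simply gives up, with no way of telling whether the computation would ever have finished. That is exactly the question we turn to next.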

      The mathematician Alan Turing constructed just such a model of computation in 1936, and considered the question of determining whether or not a program will halt when given a certain input. He reasoned as follows.

Suppose we write a program HALT that, given another program and its associated input, can determine whether running that program on that input will halt. We can represent this as:

    HALT(program, input)  =  "halts"          if the program eventually stops when run on the input
    HALT(program, input)  =  "loops forever"  if it never stops

      This would no doubt be a very complicated program, but there doesn't seem to be any theoretical reason why it could not be written. So supposing that we have our program HALT, we can use it to write a new program OPPOSITE that analyses programs running with themselves as input, and acts as follows:

    OPPOSITE(program):  if HALT(program, program) = "halts"          then loop forever
                        if HALT(program, program) = "loops forever"  then halt

      That is, if the given program does not halt, the program OPPOSITE does, and vice-versa. Although HALT may be a very complicated program, once we have it, the program OPPOSITE is trivial to write.
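
As a concrete sketch, here is what OPPOSITE looks like in Python, writing the hypothetical oracle as a function halts(program, data). The stub raises an error because, as we are about to see, no genuine implementation can exist.

    def halts(program, data):
        """Hypothetical oracle: True if program(data) would eventually stop.
        Turing's argument shows this function cannot actually be written."""
        raise NotImplementedError("no such program can exist")

    def opposite(program):
        """Do the reverse of whatever HALT predicts program does on itself."""
        if halts(program, program):
            while True:        # HALT says it halts, so loop forever...
                pass
        else:
            return             # ...HALT says it loops forever, so halt at once

    # The contradiction: consider opposite(opposite). If halts(opposite, opposite)
    # returned True, opposite(opposite) would loop forever, so the answer was wrong;
    # if it returned False, opposite(opposite) would halt, again contradicting it.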

Everything appears fine until we try to run OPPOSITE on itself: if OPPOSITE(OPPOSITE) halts, then it must loop forever, and if it loops forever, it must halt. We have found a logical contradiction, which shows that the program HALT cannot exist.

      This is a truly remarkable result. We have found a program whose existence is logically impossible, and shown that the problem of knowing, in general, whether a program halts on some given data is undecidable. (This paradox is nicely described by Thore Husfeldt in the context of freezing iPhone apps).

      Beyond exponential complexity

Maybe having a program such as HALT, able to analyse any other program and input, is simply too much to hope for. So let's now consider something a little more mundane: computing numbers given by mathematical functions. In the previous article we saw that some computations take so long they are considered intractable in practice. But we are now working in an idealised domain, disregarding practical runtime and storage requirements. Are there mathematical functions that grow faster than any exponential, and might they take us to the limits of idealised computation?

      To explore the possibility of super-exponential growth, we can consider the standard arithmetical operations as a hierarchy:

Level 1: Addition is repeatedly adding 1s (repeated succession): $a + \underbrace{1 + 1 + \cdots + 1}_{n} = a + n$
Level 2: Multiplication is repeated addition: $\underbrace{a + a + \cdots + a}_{n} = n \times a$
Level 3: Exponentiation is repeated multiplication: $\underbrace{a \times a \times \cdots \times a}_{n} = a^n$

      When viewed this way, the hierarchy is very naturally extended by defining:

Level 4: Tetration is repeated exponentiation: ${}^{n}a = \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{n}$ (a power tower of $n$ $a$'s).
Level 5: Pentation is repeated tetration: $\underbrace{{}^{{}^{{}^{\cdots}a}a}a}_{n}$ (a tetration tower of $n$ $a$'s).
Graham's number is bigger than the number of atoms in the observable Universe. Image: ESA and the Planck Collaboration.

These higher-level operations (hyper-operations) open up vast new possibilities for generating growth. Tetration is like super-exponentiation, and each level beyond grows inconceivably faster than its predecessor. The best-known way to represent this hierarchy is Knuth's up-arrow notation, and its best-known application is expressing Graham's number — a famous upper bound calculated for a problem in graph colouring, which is absolutely gargantuan. You can find out more in this article.
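
To see how mechanically the hierarchy can be generated, here is a hedged Python sketch. The function name hyper and its base-case conventions are illustrative choices, not a standard library; for level 3 and above it corresponds to Knuth's arrows, with hyper(3, a, n) being $a \uparrow n$, hyper(4, a, n) being $a \uparrow\uparrow n$, and so on.

    def hyper(level, a, n):
        """Level 1: addition, 2: multiplication, 3: exponentiation,
        4: tetration, 5: pentation, ..."""
        if level == 1:
            return a + n
        if level == 2:
            return a * n
        if level == 3:
            return a ** n        # Python integers have arbitrary precision
        if n == 0:
            return 1             # empty-tower convention for level >= 4
        # Each level is n-fold repetition of the level below it.
        return hyper(level - 1, a, hyper(level, a, n - 1))

    print(hyper(3, 2, 10))           # 1024
    print(hyper(4, 2, 4))            # 65536: the power tower 2^2^2^2
    print(len(str(hyper(4, 2, 5))))  # 19729: the number of digits of 2^65536
    # hyper(4, 2, 6) = 2**(2**65536) could never be stored, and
    # hyper(5, 2, 4) (pentation) is a power tower of 65536 twos.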

But rather than focusing on a single, special number, we can actually generate numbers of this magnitude from seemingly straightforward recursive functions: functions in which a particular value is calculated using the values that came before it. An example is the Ackermann function. It has two variables $m$ and $n$, both natural numbers, and is defined by:

$A(0,n) = n+1$
$A(m,0) = A(m-1,1)$
$A(m,n) = A(m-1, A(m,n-1))$

Hidden within this innocent-looking definition there lurks a monster! The function is very deeply recursive, and although it starts off slowly, it grows incredibly quickly. Working through a simple recursion gives some insight into the degree to which this function repeatedly invokes itself to generate rapid growth. The first few terms can be written in closed form:

$A(1,n) = n + 2$
$A(2,n) = 2n + 3$
$A(3,n) = 2^{n+3} - 3$
$A(4,n) = \underbrace{2^{2^{\cdot^{\cdot^{2}}}}}_{n+3} - 3$ (a power tower of $n+3$ 2s, minus 3)

Using these, we can evaluate $A(4,2) = 2^{65536} - 3$, a number that has 19,729 digits. Just one step further and we find $A(4,3) = 2^{2^{65536}} - 3$, and we are already at a number of such incredible magnitude that its number of digits approaches $10^{20000}$.
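
The definition translates almost line for line into code. The following is a minimal Python sketch; the memoisation and the raised recursion limit are practical crutches (how deep you can actually go depends on your system), and even with them only tiny arguments are feasible.

    import sys
    from functools import lru_cache

    sys.setrecursionlimit(100_000)   # even tiny inputs recurse very deeply

    @lru_cache(maxsize=None)
    def A(m, n):
        """The Ackermann function, coded directly from its definition."""
        if m == 0:
            return n + 1
        if n == 0:
            return A(m - 1, 1)
        return A(m - 1, A(m, n - 1))

    print(A(2, 3))  # 9
    print(A(3, 3))  # 61, which is 2^6 - 3
    print(A(3, 5))  # 253, which is 2^8 - 3
    # A(4, 1) = 65533 is already at the edge of what direct recursion can manage,
    # and A(4, 2) = 2^65536 - 3 is far beyond it, even though it is computable.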

We have seen that recursion is capable of generating functions that grow outrageously faster than exponentials, and this raises the question of how a computer would cope with such functions. Are these values computable, or have the growth rates become too large? Well, rapidly growing as it is, the Ackermann function is computable, at least in theory: however many lifetimes of the Universe the calculation might require, the recursion eventually completes for any $m$ and $n$. To shake the foundations of computation we need to ask a different kind of question, such as the one we will consider in the third part of this article.


      About the author

      Ken Wessen

      Ken Wessen has PhDs in Theoretical Physics and Human Biology, and a Graduate Diploma in Secondary Mathematics Education. He has taught both at university and in high schools, but most of his working life has been in investment banking, developing algorithms for automated trading of equities and foreign exchange. In his spare time he writes free software and develops online activities for mathematics learning at The Mathenaeum.

