
In a previous article we saw that writing the HALT program is a logical impossibility, but what about more straightforward calculating tasks? Our simple computing model has stood up to the challenge of the hyper-exponential growth generated by the ultra-recursive Ackermann function. But how will it go playing the Busy Beaver game, invented by Tibor Radó in 1962?
The Busy Beaver game

To understand the Busy Beaver game we must return to our idealised model of computation, and add a few more details. Firstly, we have to specify the nature of the input and output of our computing machine. We will use an infinite tape of discrete cells, each of which contains either a 0 or a 1, and each able to be read by a program and overwritten. Secondly, we will model a program as a set of cards, operating on a currently active cell. Each card consists of two rules, one to apply if the active cell contains a 0, and another if it contains a 1. The rules tell the machine to write a new symbol into the cell (it may be the same symbol as is currently there), make either the next left or next right cell the new active cell, and then load a new card (or HALT). The size of the program, or equivalently the size of the machine, is simply the number of cards.
This model might seem too simple, but it turns out that any calculation a more complex machine can perform can also be computed under this simple model. In fact, any computer program you can think of can be implemented on such a machine, which, by the way, is called a Turing machine.
Representing the rules as a triple (new symbol, shift L or R, new card), here is an example two-card machine:

Card | If reading 0 | If reading 1 |
1 | (1, R, 2) | (1, L, 2) |
2 | (1, L, 1) | (1, R, HALT) |
Starting with card 1 and a tape initialised with 0 in every position, running this machine proceeds as in the following table.
Card | Symbol | Write | Move | Next card | Tape (New active cell in bold) |
1 | 0 | 1 | R | 2 | ...0001**0**00... |
2 | 0 | 1 | L | 1 | ...000**1**100... |
1 | 1 | 1 | L | 2 | ...00**0**1100... |
2 | 0 | 1 | L | 1 | ...0**0**11100... |
1 | 0 | 1 | R | 2 | ...0**1**11100... |
2 | 1 | 1 | R | HALT | ...01**1**1100... |
We see that this machine runs through six card transitions (steps), writes four 1s onto the tape, and then halts. A different machine with different instructions written on the two cards might of course run on for longer than six steps. This raises a question: out of all the halting machines running a two-card program and set to start on a tape with all 0s, which one runs the longest?
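The run above is easy to check mechanically. Here is a minimal sketch of such a machine in Python (the rule encoding and the `run` helper are my own, not part of the original article): the tape is a sparse dictionary defaulting to 0, and each rule is the triple (new symbol, move, next card) described above.

```python
from collections import defaultdict

# Transition rules for the example two-card machine, reconstructed from the
# run table: rules[(card, symbol read)] = (symbol to write, move, next card).
# Moves are -1 (left) and +1 (right); "H" stands for HALT.
rules = {
    (1, 0): (1, +1, 2), (1, 1): (1, -1, 2),
    (2, 0): (1, -1, 1), (2, 1): (1, +1, "H"),
}

def run(rules):
    """Run a card machine on an all-0 tape; return (steps taken, 1s on tape)."""
    tape = defaultdict(int)      # infinite tape, every cell initially 0
    pos, card, steps = 0, 1, 0   # start on card 1 at position 0
    while card != "H":
        write, move, card = rules[(card, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(rules)
print(steps, ones)  # 6 steps and four 1s, matching the table above
```

Running this reproduces the table: the machine halts after six steps with four 1s written.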
Radó calls this busiest machine a Busy Beaver. You can also define a busy beaver for machines running with three cards, four cards, or any number of cards. For example, here are the transition tables of five-card and six-card busy beaver candidates, with the cards labelled A to F, the rows giving the rule for each symbol read, and H denoting HALT:

Symbol | A | B | C | D | E |
0 | 1RB | 1RC | 1RD | 1LA | 1RH |
1 | 1LC | 1RB | 0LE | 1LD | 0LA |

Symbol | A | B | C | D | E | F |
0 | 1RB | 1RC | 1LD | 1RE | 1LA | 1LH |
1 | 1LE | 1RF | 0RB | 0LC | 0RD | 1RC |
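To get a feel for how long these machines run, here is a sketch that executes the five-card machine above with a step cap (the table encoding and `run` helper are mine, not from the article). This five-card machine is known to run for many millions of steps before halting, so it is still going strong when the cap is reached.

```python
from collections import defaultdict

# The five-card machine above, one "wDN" string per (card, symbol read) pair:
# w = symbol to write, D = direction to move (L/R), N = next card (H = HALT).
table = {
    "A": ("1RB", "1LC"), "B": ("1RC", "1RB"), "C": ("1RD", "0LE"),
    "D": ("1LA", "1LD"), "E": ("1RH", "0LA"),
}

def run(table, max_steps):
    """Run on an all-0 tape; return (halted?, steps taken)."""
    tape = defaultdict(int)
    pos, card, steps = 0, "A", 0
    while card != "H" and steps < max_steps:
        rule = table[card][tape[pos]]   # pick the rule for the symbol read
        tape[pos] = int(rule[0])        # write the new symbol
        pos += 1 if rule[1] == "R" else -1
        card = rule[2]                  # load the next card
        steps += 1
    return card == "H", steps

halted, steps = run(table, 100_000)
print(halted)  # False: still running after 100,000 steps
```

Even a hundred thousand steps barely scratches the surface of this machine's run.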
It is clear that the Busy Beaver function grows very rapidly. Indeed, the Busy Beaver candidate machines are similar to the extremely rapidly growing functions we considered in part II of this article: they implement the calculation of some super-exponential function.
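For two cards the busy beaver search is still small enough to carry out exhaustively: there are only 12 possible rules per (card, symbol) slot, so 12^4 = 20,736 machines in all. The sketch below (my own construction, not from the article) enumerates them with a generous step cap, keeping only those that halt, and recovers the two-card record of six steps and four 1s.

```python
from itertools import product
from collections import defaultdict

def run(rules, max_steps):
    """Run a card machine on an all-0 tape; return (halted?, steps, 1s on tape)."""
    tape = defaultdict(int)
    pos, card, steps = 0, 1, 0
    while card != "H" and steps < max_steps:
        write, move, card = rules[(card, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return card == "H", steps, sum(tape.values())

# Every possible rule: write 0 or 1, move left or right, go to card 1, 2 or HALT.
options = list(product([0, 1], [-1, +1], [1, 2, "H"]))

best_steps = best_ones = 0
for r in product(options, repeat=4):   # one rule per (card, symbol) slot
    rules = {(1, 0): r[0], (1, 1): r[1], (2, 0): r[2], (2, 1): r[3]}
    halted, steps, ones = run(rules, 100)
    if halted:
        best_steps = max(best_steps, steps)
        best_ones = max(best_ones, ones)

print(best_steps, best_ones)  # 6 steps, four 1s
```

The cap of 100 steps is safe here because every halting two-card machine in fact halts within six steps; for larger numbers of cards no such safe cap can be computed, which is precisely what makes the Busy Beaver function uncomputable.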
The Busy Beaver numbers are interesting and important tools for studying the nature and limits of computation, but a further use of more general appeal is wonderfully described by Scott Aaronson in his essay Who can name the biggest number?: if you want to win that game, the Busy Beaver numbers are hard to go past.
About the author

Ken Wessen has PhDs in Theoretical Physics and Human Biology, and a Graduate Diploma in Secondary Mathematics Education. He has taught both at university and in high schools, but most of his working life has been in investment banking, developing algorithms for automated trading of equities and foreign exchange. In his spare time he writes free software and develops online activities for mathematics learning at The Mathenaeum.