
As computers are constantly becoming faster and better, many computational problems that were previously out of reach have now become accessible. But is this trend going to continue forever, or are there problems computers will never, ever be able to solve? Let's start our consideration of this question by looking at how computer scientists measure and classify the complexity of computational algorithms.
Classifying complexity

How complex is complex?
Suppose you are responsible for retrieving files from a large filing system. If the files are all indexed and labelled with tabs, the task of retrieving any specific file is quite easy — given the required index, simply select the file with that index on its tab. Retrieving file 7, say, is no more difficult than retrieving file 77: a quick visual scan reveals the location of the file and one physical move delivers it into your hands. The total number of files doesn't make much difference. The process can be carried out in what is called constant time: the time it takes to complete it does not vary with the number of files there are in total. In computers, arrays and hash-tables are commonly used data structures that support this kind of constant time access.
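As a concrete sketch, here is what such constant time access looks like in Python (the language and labels are illustrative choices, not from the article):

    # O(1) access: the cost of a lookup does not depend on how many
    # items are stored.
    files = [f"file {i}" for i in range(1000)]      # an array (Python list)
    by_index = {i: f for i, f in enumerate(files)}  # a hash table (Python dict)

    print(files[77])      # one step, whether there are 100 files or 1,000,000
    print(by_index[77])   # likewise for the hash table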
Now suppose that over time the tabs have all fallen off the files. They are still indexed and in order, but you can no longer immediately spot the file you want. This introduces the requirement to search, and a particularly efficient way to do so is a binary search. This involves finding the middle file and seeing whether the file you need comes before or after. For example, when looking for file 77, pull out the middle file and see if its index is smaller or larger than 77, and then keep looking to the left or right of the middle file as appropriate.
With this single step you have effectively halved the size of the problem, and all you need to do is repeat the process on each appropriate subset of files until the required file is found. Since the search space is halved at each step, dealing with twice as many files only requires one additional step.
Writing n for the total number of files, the number of steps required grows in proportion to log₂(n). In the big O notation that computer scientists use to classify complexity, constant time processes are written O(1) and this one is O(log n). A logarithmic time process is more computationally demanding than a constant time process, but still very efficient.
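A minimal sketch of this halving process in Python (the function and names are illustrative; the article itself contains no code):

    def binary_search(files, target):
        # Return the position of target in the sorted list files, or -1.
        low, high = 0, len(files) - 1
        while low <= high:
            mid = (low + high) // 2      # pull out the middle file
            if files[mid] == target:
                return mid
            elif files[mid] < target:    # target must lie to the right
                low = mid + 1
            else:                        # target must lie to the left
                high = mid - 1
        return -1

    # Each pass halves the search space, so 1,000,000 files need only
    # about log2(1,000,000) ~ 20 comparisons.
    print(binary_search(list(range(1, 101)), 77))   # -> 76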
But what if over time the loss of the tabs has allowed the files to become disordered? If you now pull out file 50 you have no idea whether file 77 comes before or after it. The binary search algorithm therefore can't be used. The best option in this situation is a linear search, where you just go through the files one by one to find the one you need. The complexity of this search increases in line with the number of files, and so it is a linear time, or O(n), process.
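For completeness, a minimal linear search sketch (again illustrative Python):

    import random

    def linear_search(files, target):
        # Scan the files one by one: O(n) comparisons in the worst case.
        for position, label in enumerate(files):
            if label == target:
                return position
        return -1

    # With disordered files, twice as many files means twice as much work.
    shuffled = list(range(1, 101))
    random.shuffle(shuffled)
    print(linear_search(shuffled, 77))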
Worse still, suppose the files need to be put back into order. One simple approach is to pass through them repeatedly, comparing neighbouring files and swapping any pair that is out of order, until a complete pass requires no swaps at all. This last example is the infamous and inefficient Bubblesort: it needs on the order of n² comparisons, a number that gets very big very fast. Fortunately many more efficient sorting techniques exist, such as Quicksort and Mergesort, each with complexity O(n log n) in typical cases.
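A sketch of Bubblesort in Python (illustrative, as before), including the usual early exit when a full pass makes no swaps:

    def bubble_sort(files):
        # Repeatedly compare neighbours and swap any out-of-order pair.
        # In the worst case this needs n*(n-1)/2, i.e. O(n^2), comparisons.
        n = len(files)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if files[j] > files[j + 1]:
                    files[j], files[j + 1] = files[j + 1], files[j]
                    swapped = True
            if not swapped:       # a full pass with no swaps: already sorted
                break
        return files

    # Doubling the number of files roughly quadruples the work:
    # 1,000 files need ~500,000 comparisons, 2,000 files ~2,000,000.
    print(bubble_sort([5, 1, 4, 2, 8]))   # -> [1, 2, 4, 5, 8]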

The complexity hierarchy: the black line illustrates zero growth (constant time), the yellow line logarithmic growth, the blue line linear growth, the green line polynomial growth, the red line exponential growth and the purple line factorial growth.
The famous travelling salesman problem provides an example of an algorithm with such explosive complexity: the task is to find the shortest road circuit that visits each of n given cities exactly once. Brute force means comparing all (n-1)!/2 possible circuits, and no known algorithm solves the problem in polynomial time.
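A minimal brute-force sketch in Python (the coordinates and function names are illustrative assumptions, not from the article) shows why this approach collapses so quickly:

    import math
    from itertools import permutations

    def tour_length(order, cities):
        # Total length of the circuit that visits cities in the given order.
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def shortest_tour(cities):
        # Brute force: fix a start city and try all (n-1)! orderings of the rest.
        rest = range(1, len(cities))
        best = min(permutations(rest), key=lambda p: tour_length((0,) + p, cities))
        return (0,) + best

    # Five illustrative cities as (x, y) coordinates: 4! = 24 orderings to check.
    # At 10 cities it is already 362,880, at 20 about 1.2 x 10**17.
    cities = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 0)]
    print(shortest_tour(cities))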
In complexity theory, any algorithm with complexity described by a polynomial in the input size n (such as n, n² or n³) is said to run in polynomial time. Problems solvable in polynomial time are generally regarded as tractable; those whose best known algorithms grow exponentially or factorially are quite another matter.
Some huge examples
Some well-known puzzles provide a powerful illustration of the growth rate of the exponential function. One example asks:
How thick would a regular piece of paper be if you could somehow fold it 42 times?
The answer is quite astounding – it would reach the Moon! (Indeed, fewer than 100 folds would be necessary to reach the limits of the visible Universe.) Another is the ancient wheat and chessboard problem that asks

How many grains of wheat would be needed to enable you to place 1 grain on the first square of a chessboard, 2 grains on the second, 4 grains on the third, and so on, doubling each time until all squares have been used?

How many grains are on the last square?
The repeated doubling means this is an exponential process, and the 64th square will contain 2⁶³ grains, that is 9,223,372,036,854,775,808, over nine quintillion grains of wheat.
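Both puzzles come down to repeated doubling, so the claims are easy to check directly; a quick sketch, assuming a typical sheet thickness of 0.1 mm for the folding puzzle:

    # Folding: a 0.1 mm sheet doubles in thickness with every fold.
    thickness_km = 0.1 * 2**42 / 1e6
    print(thickness_km)    # ~439,805 km, beyond the Moon at ~384,400 km

    # Chessboard: square k holds 2**(k - 1) grains.
    print(2**63)           # grains on the 64th square
    print(2**64 - 1)       # total grains over all 64 squares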
Factorial growth is even faster. Consider the number of arrangements of a pack of cards. The first card chosen in any arrangement is one of 52. The next choice is one of the remaining 51, then one of the remaining 50, and so on until every card has been chosen. The total number of arrangements possible is therefore 52 × 51 × 50 × ... × 2 × 1, written 52!, which is roughly 8 × 10⁶⁷, a number 68 digits long.
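The size of this number is easy to verify (a quick check in Python):

    import math

    arrangements = math.factorial(52)   # 52 x 51 x ... x 2 x 1
    print(arrangements)                 # roughly 8.07 x 10**67
    print(len(str(arrangements)))       # -> 68 digits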
Scott Czepiel beautifully describes the magnitude of this number on his website. One of his illustrations involves imagining that you start a timer to count down this many seconds, and begin to walk around the equator, taking just a single step every billion years. Then, after each complete circuit, remove a single drop of water from the Pacific Ocean and start around the Earth again. When you have emptied the ocean, place one piece of paper flat on the ground, refill the ocean and start yet again. Repeat until the stack of paper reaches the Sun, then look at the timer to see how far it has counted down. The first three digits will not even have changed. The entire process must be repeated thousands of times before the counter reaches zero.
These examples illustrate why problems with solutions of such complexity are considered intractable. But such a consideration is rooted in practical concerns. What if we propel ourselves into an ideal future, to a time when we'll have all the computing power we need and no limits on storage? In the second part of this article, we will look at functions that grow much faster than exponentials and factorials, and see what implications such functions have for idealised computability.
About the author

Ken Wessen has PhDs in Theoretical Physics and Human Biology, and a Graduate Diploma in Secondary Mathematics Education. He has taught both at university and in high schools, but most of his working life has been in investment banking, developing algorithms for automated trading of equities and foreign exchange. In his spare time he writes free software and develops online activities for mathematics learning at The Mathenaeum.