Are there more irrational numbers than rational numbers, or more
rational numbers than irrational numbers? Well, there are infinitely
many of both, so at first the question doesn't seem to make sense. It turns out,
however, that the set of rational numbers is infinite in a very
different way from the set of irrational numbers.
As we saw here, the rational numbers (those that can be written as fractions) can be lined up one by one and labelled 1, 2, 3, 4, etc. They form what mathematicians call a countable infinity.
The same isn't true of the irrational numbers (those that cannot be written as fractions): they form an
uncountably infinite set. In 1873 the mathematician Georg Cantor
came up with a beautiful and elegant proof of this fact. First notice
that when we put the rational numbers and the irrational numbers
together we get all the real numbers: each number on the line is
either rational or irrational. If the irrational numbers were
countable, just as the rationals are, then the real numbers would be
countable too — it's not too hard to convince yourself of that.
So let's suppose the real numbers are countable, so that we can make a list of them: a first real number, a second, a third, and so on, with every real number occurring somewhere in the infinite list. Now take the first digit after the decimal point of the first number, the second digit after the decimal point of the second number, the third digit after the decimal point of the third number, and so on, to get a new number.
Now change each digit of this new number, for example by adding 1 to each digit (turning a 9 into a 0). The result is not the same as the first number on the list, because their first decimal digits are different. Neither is it the same as the second number on the list, because their second decimal digits are different. Carrying on like this shows that the new number is different from every single number on the list, and so it cannot appear anywhere in the list.
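The diagonal construction can be sketched in a few lines of code for a finite list (the digit strings below are made up for illustration; Cantor's actual argument applies to an infinite list):

```python
def diagonal_number(decimals):
    """Given decimal expansions as digit strings, build a number whose
    n-th decimal digit differs from the n-th digit of the n-th entry."""
    new_digits = []
    for n, d in enumerate(decimals):
        digit = int(d[n])                      # the diagonal digit
        new_digits.append(str((digit + 1) % 10))  # change it: add 1, 9 becomes 0
    return "0." + "".join(new_digits)

# A hypothetical "list of reals", truncated to seven digits each.
sample = ["1415926", "7182818", "4142135", "7320508",
          "2360679", "6457513", "8284271"]
print(diagonal_number(sample))  # → 0.2251722
```

Because the n-th digit of the output disagrees with the n-th digit of the n-th entry, the output cannot equal any entry, which is exactly the contradiction at the heart of the argument.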
But we started with the assumption that every real number was on the
list! The only way to avoid this contradiction is to admit that the
assumption that the real numbers are countable is false. And this then
also implies that the irrational numbers are uncountable.
It's easy to see that an uncountable infinity is "bigger" than a
countable one. An uncountable infinity can form a continuum, such as
the number line, in a way that a countable infinity can't. Cantor went
on to define all sorts of other infinities too, one bigger than the
other, with the countable infinity at the bottom of the
hierarchy. When he first published these ideas, Cantor faced strong
opposition from some of his colleagues. One of them, Henri Poincaré,
described Cantor's ideas as a "grave disease" and another, Leopold Kronecker,
went so far as to denounce Cantor as a "scientific charlatan" and
"corrupter of youth". Cantor suffered severe mental health problems
which may have resulted in part from the rejection his work had met
with. But we now know that his work had simply come too soon: 150
years on, Cantor's ideas form a central pillar of mathematics and many
of his results can be found in standard textbooks.
With spring (hopefully) on its way, it looks increasingly unlikely that we will be blessed with the cold, white, fluffy stuff this year. But if the Winter Olympics leave you yearning for snow and ice, here are some related maths stories for you.
Maths solves frozen mystery
A mathematical computer model of glaciers has shed new light on the disappearance of four young Alpine walkers, nearly ninety years after they met their deaths in 1926.
A molecule's eye view of water
Water is essential for life on Earth, and it is a resource we all take for granted. Yet it has many surprising properties that have baffled scientists for centuries. Seemingly simple ideas such as how water freezes are not understood because of water's unique properties. Now scientists are utilising increased computer power and novel algorithms to accurately simulate the properties of water on the nanoscale, allowing complex structures of hundreds or thousands of molecules to be seen and understood.
Maths and climate change: the melting Arctic
The Arctic ice cap is melting fast and the consequences are grim. Mathematical modelling is key to predicting how much longer the ice will be around and assessing the impact of an ice-free Arctic on the rest of the planet. Plus spoke to Peter Wadhams from the Polar Ocean Physics Group at the University of Cambridge to get a glimpse of the group's work.
Teacher package: On thin ice - maths and climate change in the Arctic
On the 1st of March 2009 three intrepid polar explorers, Pen Hadow, Ann Daniels and Martin Hartley, set out on foot on a gruelling trip across the Arctic ice cap to conduct the Catlin Arctic Survey. In this teacher package we look at some of the maths and science behind their expedition — climate and sea ice models, GPS and cartography, and how to present statistical evidence.
Climate change threatens the world's glaciers, which is why scientists simulate them on computers. Based on mathematical models, these computer simulations help predict how glaciers are likely to change in the future, depending on environmental factors.
The Aletsch glacier in Switzerland.
But you can also run these simulations backwards in time, to see how a glacier behaved in the past. This is what a mathematician and a glaciologist have just done, not in order to understand glaciers, but in order to solve a mystery. In March 1926 four young men, three of whom were brothers, set off on a tour from the Aletsch glacier in Switzerland, the longest glacier in the Alps. They never returned. It's likely they got caught up in a blizzard that raged in the mountains for several days. Despite an extensive search their bodies could not be found and nobody knew where on their trip they met their end. Eighty-six years later, in the summer of 2012, two English alpinists found the remains of the three brothers and some of their equipment.
The mathematician Guillaume Jouvet of Freie Universität Berlin and the glaciologist Martin Funk of the ETH in Zürich realised that they could use a computer model of glaciers, which Jouvet had developed during his PhD in 2012, to find out where the men had met their death. The model was the first to represent the three-dimensional flow of a glacier, including the velocity of the flow beneath its surface. Starting from the place the men's bones were found, the scientists simulated the evolution of the glacier backwards in time. They found that the bodies probably moved by around 10.5km in the 86 years since they had been swallowed by the ice, at an average speed of 122 metres per year. In 1980 they were buried some 250m below the surface of the ice. And the place of the men's demise could be narrowed down to an area of 1600m by 300m.
Jouvet and Funk hope that they can solve other glacial mysteries using their method. For example, in 1946 a plane carrying US military personnel crashed on the Gauli glacier in Switzerland. The crew and passengers were rescued, but the plane disappeared into the ice without a trace. Using their model, Jouvet and Funk hope to predict when and where it will be released from the ice. And who knows what other icy stories their model will reveal in the future.
It might be grey and raining outside, but inside the Museum of London and Barnard's Inn Hall, Gresham College have some fascinating lectures planned to brighten the darkest day! First up, Caroline Crawford will explore the lives of stars, how they are born, evolve over billions of years and dramatically burn, revealing clues to our own origins, on Wednesday 5 February at 1pm at the Museum of London. It seems Hollywood isn't the only place where the stars crash and burn.
Prince Charming? Or just a slimy toad?
And surely the best way to cheer up a dreary day is to fall in love. Tony Mann will explain the mechanics of computer dating and how maths can help us find our heart's desire (or at least a charming dinner companion) in Finding stable matches: the mathematics of computer dating on Monday 17 February at 6pm, at Barnard's Inn Hall.
And if you are using a coin to decide whether to get out of bed or stay under the duvet, you need to hear what Raymond Flood has to say about Probability and its limits on Tuesday 18 February at 1pm at the Museum of London. He'll explain why we know a coin will come up heads roughly half the time over many tosses, but we can't tell you whether yours will land heads or tails tomorrow morning. If it lands heads and you make it to his lecture, you'll find out much more!
There's no need to register for these free public events, just come a bit early to get a seat. You can find out about all the other fascinating talks on the Gresham College website and you can get in the mood by reading about astronomy, dating and probability on Plus.
Vending machines that don't return change are annoying, especially if the prices they demand aren't nice round figures you can make up with a single coin. If that's the case, then there's nothing for it but to rummage through your wallet, fishing out the right coins to make up the amount exactly. What's the best way of doing that? Without noticing, many of us probably follow this recipe: find the biggest (as in largest denomination) coin that fits into the amount, then the next-biggest that fits into the remainder, and so on, until you (hopefully) hit the required sum. As an example, if you're being asked for 85p, you probably fish out a 50p coin first, then a 20p coin, then a 10p coin, and finally a 5p coin. And what if you haven't got all of the coins just mentioned in your wallet? In that case you follow the same recipe using what you've got.
This greedy recipe (greedy because you always go for the biggest coin that fits) seems to offer the best solution in that it seems to use the fewest coins to make up the amount you need. For example, supposing you do have a 20p coin but decide to go for two 10p coins instead, you increase the number of coins needed to make up 85p from four to five. So the greedy algorithm seems useful, not just for people struggling with vending machines, but also for cashiers returning change to customers.
But is greed really always the best option? It turns out that this depends on the coins that are available. Imagine, for example, you need to make up 8p. Greed would tell you to go for a 5p coin, then a 2p coin and then a 1p coin. And that's indeed the smallest number of coins to make up 8p with if you are using Pound Sterling, Euros, US Dollars, and most other currencies. But now imagine a currency that in addition to these denominations also has a 4p coin. Then you could make up 8p with two of those, beating the greedy strategy by one. Such a currency system might seem silly but it's not unheard-of: the pre-decimal British coinage system was one for which the greedy recipe failed when it came to minimising the number of coins needed to make up a given amount.
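The greedy recipe, and its failure in the hypothetical currency with a 4p coin, can be sketched in a few lines (amounts in pence):

```python
def greedy_change(amount, denominations):
    """Greedy recipe: repeatedly take the largest coin that still fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins  # not guaranteed to use the fewest coins in every coin system

# UK coins: greedy finds the four-coin solution for 85p.
print(greedy_change(85, [1, 2, 5, 10, 20, 50]))  # → [50, 20, 10, 5]

# Hypothetical system with a 4p coin: greedy uses three coins for 8p ...
print(greedy_change(8, [1, 2, 4, 5]))            # → [5, 2, 1]
# ... even though two 4p coins would do.
```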
Santa's just putting on his hat and buttoning up his jolly red coat when an exasperated elf runs up to him: "Santa! We've got a problem! There have been so many good children this year we can't put all the presents into your sleigh!"
The problem is that Santa's sleigh has a weight limit, and can only carry 2 tonnes of presents. Each present has a different weight, and each present has a different star value too: a star for each good deed of the toy's recipient. Santa obviously wants to reward as many good deeds as possible, maximising the total star value of the presents in the sleigh. But he can't exceed the maximum weight or those presents are going nowhere! So which should he put in his sleigh?
The elves start out trying all different combinations of presents to find the best one. This would have been fine if there were only a few items, but they quickly realise it is totally impractical when you have millions of presents to deal with. If they carry on this way, Christmas Eve will have long passed and no presents at all will be delivered!
Such a brute force solution is unacceptable, and not only to the elves and Santa. Mathematicians would also find this an unacceptable approach, and ask whether there is an algorithm
– a recipe for finding a solution – which works for any number of items,
and for which the time it takes to complete the algorithm grows with the
number of items in a reasonable, non-explosive fashion.
Mathematicians have a clear definition of "reasonable and non-
explosive" in this context: the time it takes to complete the
algorithm should grow with the number N of items no faster than the
polynomial N to the power of K, for some integer K, grows with N.
That's still pretty rapid growth, especially if K is large, but at
least it's not exponential.
So does such a polynomial time algorithm exist for our problem? The
answer is that nobody knows - not yet - though most mathematicians
believe that there isn't. In fact, if you can prove or disprove that a
polynomial time algorithm exists, you will have answered the question behind door #22 too, as both problems can be turned into NP-complete problems.
Optimisation problems such as this one, which is known as the
knapsack problem, crop up in real life all the time. Thankfully Santa is well read in the mathematical literature and knows an algorithm the elves can use that won't take too long and, while it may not give the
very best combination of starred pressies, will get sufficiently close.
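One plausible fast heuristic of this kind (a common choice for knapsack-like problems, not necessarily the exact algorithm the article has in mind) is to pack presents by their stars-per-unit-weight ratio:

```python
def greedy_knapsack(presents, capacity):
    """presents: list of (weight, stars) pairs. Pack the best
    stars-per-unit-weight presents first; fast, but not guaranteed optimal."""
    packed, total_weight, total_stars = [], 0, 0
    for weight, stars in sorted(presents, key=lambda p: p[1] / p[0], reverse=True):
        if total_weight + weight <= capacity:
            packed.append((weight, stars))
            total_weight += weight
            total_stars += stars
    return packed, total_stars

# Illustrative data: one pass over the sorted list, no exponential search.
presents = [(12, 4), (2, 2), (1, 2), (1, 1), (4, 10)]
print(greedy_knapsack(presents, capacity=15))
```

On this toy example the heuristic happens to find the best packing, but for other data it can fall short of the optimum, which is the trade-off the article describes: speed in exchange for a merely "sufficiently close" answer.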
So rest assured, if you've been really, really good, your present is probably now packed on the sleigh, ready for Santa's big delivery run. Merry Christmas!
You can find out more about NP-complete problems on Plus.