Plus Blog
January 23, 2014
Vending machines that don't return change are annoying, especially if the prices they demand aren't nice round figures you can make up with a single coin. If that's the case, then there's nothing for it but to rummage through your wallet, fishing out the right coins to make up the amount exactly. What's the best way of doing that? Without noticing, many of us probably follow this recipe: find the biggest (as in largest denomination) coin that fits into the amount, then the next-biggest that fits into the remainder, and so on, until you (hopefully) hit the required sum. As an example, if you're being asked for 85p, you probably fish a 50p coin out first, then a 20p coin, then a 10p coin, and finally a 5p coin. And what if you haven't got all of the coins just mentioned in your wallet? In that case you follow the same recipe using what you've got.

This greedy recipe (greedy because you always go for the biggest coin that fits) seems to offer the best solution in that it seems to involve the fewest coins to make up the amount you need. For example, supposing you do have a 20p coin but decide to go for two 10p coins instead, you increase the number of coins needed to make up 85p from four to five. So the greedy algorithm seems useful, not just for people struggling with vending machines, but also for cashiers returning change to customers.

But is greed really always the best option? It turns out that this depends on the coins that are available. Imagine, for example, you need to make up 8p. Greed would tell you to go for a 5p coin, then a 2p coin and then a 1p coin. And that's indeed the smallest number of coins to make up 8p with if you are using Pound Sterling, Euros and most other currencies. But now imagine a currency that in addition to these denominations also has a 4p coin. Then you could make up 8p with two of those, beating the greedy strategy by one.
Such a currency system might seem silly but it's not unheard of: the pre-decimal British coinage system was one for which the greedy recipe failed when it came to minimising the number of coins needed to make up a given amount.
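The two strategies are easy to compare in code. Here's a minimal Python sketch, using the denominations and amounts from the examples above:

```python
def greedy_change(amount, coins):
    """Greedy recipe: repeatedly take the largest coin that fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

def fewest_coins(amount, coins):
    """Dynamic programming: the genuinely smallest number of coins."""
    best = [0] + [None] * amount  # best[a] = fewest coins making amount a
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]

# Sterling denominations: greedy finds the optimal four coins for 85p.
print(greedy_change(85, [1, 2, 5, 10, 20, 50]))  # [50, 20, 10, 5]
# Add a 4p coin and greedy still picks three coins for 8p...
print(greedy_change(8, [1, 2, 4, 5]))            # [5, 2, 1]
# ...but two 4p coins would have done it.
print(fewest_coins(8, [1, 2, 4, 5]))             # 2
```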

December 24, 2013
Santa's just putting his hat on and buttoning up his jolly red coat when an exasperated elf runs up to him: "Santa! We've got a problem! There have been so many good children this year we can't put all the presents into your sleigh!" The problem is that Santa's sleigh has a weight limit, and can only carry 2 tonnes of presents. Each present has a different weight, and each present has a different star value too: a star for each good deed of the toy's recipient. Santa obviously wants to reward as many good deeds as possible, maximising the total star value of the presents in the sleigh. But he can't exceed the maximum weight or those presents are going nowhere! So which should he put in his sleigh?

The elves start out trying all different combinations of presents to find the best one. This would have been fine if there were only a few items, but they quickly realise it is totally impractical when you have millions of presents to deal with. If they carry on this way Christmas Eve will have long passed and no presents at all will be delivered!

Such a brute force solution is unacceptable, and not only to the elves and Santa. Mathematicians would also find this an unacceptable approach, and ask whether there is an algorithm – a recipe for finding a solution – which works for any number of items, and whose running time grows with the number of items in a reasonable, non-explosive fashion. Mathematicians have a clear definition of "reasonable and non-explosive" in this context: the time it takes to complete the algorithm should grow with the number N of items no faster than the polynomial N to the power of K, for some integer K, grows with N. That's still pretty rapid growth, especially if K is large, but at least it's not exponential. So does such a polynomial time algorithm exist for our problem? The answer is that nobody knows – not yet – though most mathematicians believe that there isn't.
In fact, if you can prove or disprove that a polynomial time algorithm exists, you will have answered the question behind door #22 too, as both can be recast as NP-complete problems. Optimisation problems such as this one, which is known as the knapsack problem, crop up in real life all the time. Thankfully Santa is well read in mathematical literature and knows an algorithm the elves can use that won't take too long and, while it may not give the very best combination of starred pressies, will get sufficiently close. So rest assured, if you've been really, really good, your present is probably now packed on the sleigh, ready for Santa's big delivery run. Merry Christmas!

You can find out more about NP-complete problems on Plus.

Back to the Plus Advent Calendar
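Santa's quick-but-not-quite-perfect approach could be any of several approximation schemes; one simple example (a sketch, not necessarily the algorithm the story has in mind) is to pack presents greedily by stars per kilo:

```python
def greedy_sleigh(presents, capacity):
    """Pack presents in decreasing order of stars per kilo.
    Fast and usually close to the best total, but not guaranteed optimal."""
    packed, stars, weight = [], 0, 0
    for w, s in sorted(presents, key=lambda p: p[1] / p[0], reverse=True):
        if weight + w <= capacity:
            packed.append((w, s))
            weight += w
            stars += s
    return packed, stars

# Made-up toy data: (weight in kg, star value) with a 7kg limit.
presents = [(3, 10), (4, 4), (2, 9), (5, 11)]
packed, stars = greedy_sleigh(presents, 7)
print(packed, stars)  # [(2, 9), (3, 10)] 19
```

Here the greedy answer is 19 stars, while the true optimum is 20 (the 2kg and 5kg presents together): close, but not the very best, just as the post says.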
December 23, 2013
When Kepler was decorating his Christmas tree you can be sure his Christmas baubles weren't boring and round. Oh no, we're sure they would have been beautiful and elliptical!

Ah, the humble ellipse. It might look just like a squashed circle but it is so much more! One of the conic sections discovered by the ancient Greeks, it generalises the circle using eccentricity, a measure of how squashed it is. A circle is an ellipse with eccentricity zero; a very squashed, nearly flat ellipse has eccentricity close to one. An ellipse has two foci (points that lie within the ellipse), with the handy property that the sum of the distances from the two foci to any point on the ellipse is always the same: it equals the ellipse's width along its longest axis.

But the ellipse is not just a pretty shape: it turned out to be Kepler's answer to life, the universe and everything. He discovered that the heavenly bodies didn't move in circles, as had been thought for hundreds of years, but instead had elliptical orbits around the Sun. Kepler tried to convince himself of the perfection of this design of the heavens by describing the interaction of nature with the heavens as introducing a distortion that pulls the motion from a perfect circle into an ellipse.

You can read more about conic sections and Kepler's proof on Plus.

Return to the Plus Advent Calendar
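The focus property is easy to check numerically. A small sketch (the 5-by-3 ellipse is just an arbitrary example):

```python
import math

def focal_distance_sum(a, b, t):
    """Sum of distances from the point (a*cos t, b*sin t) on the ellipse
    x^2/a^2 + y^2/b^2 = 1 to its two foci (+-c, 0), where c = sqrt(a^2 - b^2)."""
    c = math.sqrt(a * a - b * b)
    x, y = a * math.cos(t), b * math.sin(t)
    return math.hypot(x - c, y) + math.hypot(x + c, y)

# Whichever point we pick, the sum is 2a = 10: the full width along the long axis.
for t in [0.0, 0.7, 1.9, 3.1]:
    print(round(focal_distance_sum(5, 3, t), 10))  # 10.0 each time
```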
December 22, 2013
It's that time of the year again when Santa needs to think about how to deliver all those presents to the people on Earth. Unfortunately his reindeer are a bit weak this year, having suffered some severe bouts of flu, and he knows he can't ask them to travel more than a million kilometres in total. Is there a route around the Earth that visits all those cities, towns and villages he needs to visit, but is less than a million kilometres long?

That, it turns out, is a very tricky question. One way of answering it would be to try out all different routes and work out their lengths. But since there are 7 billion people on Earth and he needs to visit every single dwelling, that's totally unfeasible. It would take him until long after the Universe has ended to try all possible routes. Is there a clever algorithm Santa could use instead?

Obviously, the time it will take any algorithm to answer the question will depend on the number of places that need visiting — the more places, the longer it takes. So the question is whether there's an algorithm for which the time taken grows in a manageable, non-explosive way as the number of places grows — it's what complexity theorists call a polynomial time algorithm. If Santa were to ask such a complexity theorist he would find out that the class of all problems that can be solved in polynomial time has a name: it's called P.

Now Santa doesn't know if his problem of finding a route belongs to the class P. What's pretty obvious, though, is that if someone gave him a potential solution — a route they claim visits every place but is less than a million kilometres long — it would be easy to check whether that's true: he'd simply have to add up all the individual distances between places. A computer could do that quite easily. This means that Santa's problem belongs to a class of problems called NP.
It consists of those problems for which potential answers can be checked in polynomial time, even though we might not know how to solve them from scratch in polynomial time.

But now suppose that Santa has a brainwave. Sitting down with a computer he constructs a polynomial time algorithm that finds the kind of route he is after. He would then have shown that his problem is not only a member of the NP class, but also of the P class. Amazingly, it would also hand Santa a polynomial time algorithm for all other problems in the NP class — because it turns out that his problem is a version of the famous travelling salesman problem, and that all the other problems in the NP class can be reduced to it. What is more, his algorithm would win Santa $1 million: the question of whether the P class and the NP class of problems are equal is considered one of the hardest in mathematics, which is why the Clay Mathematics Institute has offered $1 million to anyone who solves it.

Unfortunately Santa doesn't have the time to do any of this, and he's not a computer expert anyway. So he just takes an educated guess at what route to take and gets going, hoping for the best. It's a shame really, because if he'd been able to show that P=NP he would have held the key to many of the world's secret communications, including everyone's bank details that are sent encrypted over the internet. This is because NP problems are used as the basis for such encryption, with the secrecy relying on the fact that the problems are very hard to crack.

You can find out more in the Plus articles The travelling salesman and Safety in numbers.

Return to the Plus Advent Calendar
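The asymmetry between finding a route and checking one is easy to see in code: verifying a claimed route is just a loop over its legs, taking time proportional to the number of places. (The three-house world below is a made-up toy example with straight-line distances.)

```python
import math

def route_length(points, route):
    """Add up the legs of a proposed round trip, returning to the start.
    One pass over the route -- comfortably polynomial time."""
    total = 0.0
    for i in range(len(route)):
        x1, y1 = points[route[i]]
        x2, y2 = points[route[(i + 1) % len(route)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

homes = {"A": (0, 0), "B": (3, 0), "C": (3, 4)}
claimed = ["A", "B", "C"]
# The legs are 3, 4 and 5, so the round trip is 12 units long.
print(route_length(homes, claimed) <= 13)  # True: the claimed budget is met
```

Finding the *shortest* such route, by contrast, is the travelling salesman problem itself, and no polynomial time algorithm for it is known.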
December 21, 2013
Raging rivers and thundering waves are exciting and frightening, and for mathematicians they are a massive problem. Suppose you've got a turbulent mountain stream — if you're a mathematician, you'll want to know if you can describe the flow of the water using a mathematical equation. Given a point in space (that is, somewhere in the stream) and a point in time (say 5 minutes from now), you would like to know the velocity and maybe also the pressure of the water at that point in time and space.

Scientists believe that turbulence is described to a reasonable level of accuracy by a very famous set of equations, known as the Navier-Stokes equations. These are partial differential equations which relate changes in velocity, changes in pressure and the viscosity of the liquid. To find the velocity and pressure of your liquid, you have to solve them. But that's no easy feat. Exact solutions to the equations — solutions that can be written down as mathematical formulae — exist only for simplified problems that are of little or no physical interest. For most practical purposes, approximate solutions are found through computer simulations — essentially through educated guesswork — that require immense computing power.

And there is an additional problem that makes numerical difficulties pale into insignificance: no one knows if exact mathematical solutions even exist in all cases. And if they do exist, we still don't know if they involve oddities, such as discontinuities or infinities, that don't square with our intuition of how a liquid should behave. It's these difficulties that have turned the understanding of the Navier-Stokes equations into one of the seven Millennium Problems posed by the Clay Mathematics Institute. Whoever proves or disproves the existence of finite and smooth solutions is set to earn a million dollars.
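For the curious, the equations for an incompressible liquid can be written down quite compactly (here $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density, $\mu$ the viscosity, and $\mathbf{f}$ stands for any external force such as gravity):

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0.
```

The first equation is essentially Newton's second law for a small parcel of liquid; the second says the liquid is incompressible. It's the nonlinear term $(\mathbf{u}\cdot\nabla)\,\mathbf{u}$ that makes them so hard to solve.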
All this might seem strange, even scary, given that the equations are used all over the place, all the time — meteorology and aircraft design are just two examples. The fact is that, in the cases we can compute, approximate solutions do seem to give an accurate description of the motion of fluids. What we don't know is what, if anything, the model given by the Navier-Stokes equations tells us about the exact nature of fluid flow. You can find out more about turbulence on Plus:
Return to the Plus Advent Calendar 
December 20, 2013
The central idea of applied statistics is that you can say something about a whole population by looking at a smaller sample. Without this idea there wouldn't be opinion polls, the social sciences would be stuffed, and there would be no way of testing new medical drugs, or the safety of bridges, etc, etc. It's the central limit theorem that is to a large extent responsible for the fact that we can do all these things and get a grip on the uncertainties involved.

Suppose that you want to know the average weight of the population in the UK. You go out and measure the weight of, say, 100 people whom you've randomly chosen and work out the average for this group — call this the sample average. Now the sample average is supposed to give you a good idea of the nation's average. But what if you happened to pick only fat people for your sample, or only very skinny ones? To get an idea of how representative your average is likely to be, you need to know something about how the average weight of 100-people samples varies over the population: if you took lots and lots of samples of size 100 and worked out the average weight for each, then how variable would this set of numbers be? And what would its average (the average of averages) be compared to the true average weight in the population?

For example, suppose you know that if you took lots and lots of 100-people samples and wrote down the average weight of each sample, you'd get all values from 10kg to 300kg in equal proportion. Then this would tell you that your method of estimating the overall average by taking one sample of 100 people isn't a very good one, because there's too much variability — you're just as likely to get any of the possible values, and you don't know which one is closest to the true average weight in the population.

Image: Four versions of the normal distribution with different means and variances.
So how can we say anything about the distribution of 100-people averages — called the sampling distribution — when we don't know anything about the distribution of weight across the population? This is where the central limit theorem comes in: it says that for a big enough sample (usually a sample of 30 people is good enough) the sampling distribution is approximated by a normal distribution — the distribution with the famous bell shape. The mean of this normal distribution (the average of averages, corresponding to the tip of the bell) is the same as the mean in the population (the average weight of the population). The variance of this normal distribution, that is, how much it varies about the mean (indicated by the width of the bell), depends on the sample size: the larger the sample, the smaller the variance. There's an equation which gives the exact relationship: the variance of the sampling distribution is the variance of the population divided by the sample size.

So if your sample size is big enough (100 would certainly do since it's bigger than 30), then the relatively small variance of the normal sampling distribution means that the average weight you observe is close to the mean of that normal distribution (since the bell is quite narrow). And since the mean of that normal distribution is equal to the true average weight across the population, your observed average is a good approximation of the true average. You can make all this precise: for example, you can say exactly how confident you are that the true average is within a certain distance of your sample average, and you can also use the result to calculate how large a sample you need to get an estimate of a given accuracy.

It's the central limit theorem that lends precision to the art of statistical inference, and it's also behind the fact that the normal distribution is so ubiquitous. The central limit theorem is actually a bit more general than we've let on here. See here for a precise statement.

Return to the Plus Advent Calendar
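You can watch the theorem at work in a quick simulation. In the sketch below the population is deliberately non-normal and entirely made up: weights spread evenly between 40kg and 120kg.

```python
import random
import statistics

def sample_means(draw, sample_size, num_samples):
    """Record the average of many independent samples: this approximates
    the sampling distribution of the sample average."""
    return [statistics.mean(draw() for _ in range(sample_size))
            for _ in range(num_samples)]

random.seed(1)
# Uniform weights between 40kg and 120kg: true mean 80kg,
# population variance (120 - 40)**2 / 12, roughly 533.
draw = lambda: random.uniform(40, 120)
means = sample_means(draw, 100, 2000)

# The averages of 100-people samples cluster tightly around the true mean...
print(statistics.mean(means))      # close to 80
# ...with variance near the population variance over the sample size, 533/100.
print(statistics.variance(means))  # close to 5.3
```

A histogram of `means` would show the familiar bell shape, even though the population itself is flat rather than bell-shaped.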