
How to add up quickly

Chris Budd
One of my favourite mathematical results is the famous formula
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \dots \qquad (1)$$
As far as I'm concerned, all of maths is here, and if this formula doesn't blow you away then you simply have no soul. What the formula does is to connect two quite different concepts, the geometry linked to the number $\pi$ and the simplicity of the odd numbers. The result is truly magical and surprising, and exactly illustrates the extraordinary way that maths can link patterns together. Whenever I am asked to define mathematics I simply write down equation (1). Those of you who think that maths is just a language, think again.

How quickly do these series converge to a value involving π?


This formula has a splendid history. It was derived in the West in 1671 by James Gregory from the formula for arctan(x) and slightly later and independently by Gottfried Leibniz. However, the same formula (along with many other results involving infinite series) was discovered long before in the 1300s by the great Indian mathematician Madhava. Similar results of equal beauty are the convergent series given by

$$\frac{\pi^2}{6} = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \dots, \qquad (2)$$
$$\frac{\pi^3}{32} = 1 - \frac{1}{3^3} + \frac{1}{5^3} - \frac{1}{7^3} + \dots, \qquad (3)$$
and
$$\frac{\pi^4}{90} = 1 + \frac{1}{2^4} + \frac{1}{3^4} + \frac{1}{4^4} + \dots \qquad (4)$$
However, in one sense, these formulae are disappointing. If you want to actually calculate $\pi$, $\pi^2$, $\pi^3$ or $\pi^4$ then you probably would not reach for one of these formulae. The reason is that they converge very slowly. If you take formula (1), add up 100 terms and multiply by 4, you get 3.146567747182956, which, whilst fairly close to $\pi = 3.141592653589793...$, is not a particularly accurate estimate given the effort involved in adding up 100 terms. If you wanted to calculate $\pi$ or $\pi^2$ to an accuracy of six decimal places, you would have to take on the order of $10^6$ terms of either series (1) or (2), and long before you have added up all of the terms in the series, the rounding errors associated with computer calculations will have accumulated to the point where the accuracy of the answer is severely degraded.
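To get a feel for just how slow this convergence is, here is a short Python sketch (mine, not part of the original article) that sums the first $n$ terms of series (1) and (2) for increasing $n$ and prints the errors; they shrink roughly like $1/n$, which is why around $10^6$ terms are needed for six decimal places:

```python
import math

for n in (100, 10_000, 1_000_000):
    # Series (1): pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))
    # Series (2): pi^2/6 = 1 + 1/2^2 + 1/3^2 + ...
    basel = sum(1 / m ** 2 for m in range(1, n + 1))
    print(f"n={n:>9}  error in (1): {abs(leibniz - math.pi):.1e}"
          f"  error in (2): {abs(basel - math.pi ** 2 / 6):.1e}")
```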

The world needs pi

So what, you (and many pure mathematicians) might say. Surely you don't need to know the value of $\pi$ that accurately; after all, the Bible was content to give it to just one significant figure. However, $\pi$ is not just any number. It lies at the heart of any technology that involves rotation or waves, and that is much of mechanical and electrical engineering. If rotating parts in, say, a typical jet engine are not manufactured to high tolerance, then the parts simply won't rotate. This typically involves measurements correct to one part in $10^4$ and, as these measurements involve $\pi$, we require a value of $\pi$ to at least this order of accuracy to prevent errors. In medical imaging using CAT or MRI scanners, the scanning devices move on a ring which has to be manufactured to a tolerance of one part in $10^6$, requiring an even more precise value of $\pi$.

However, even this level of accuracy pales into insignificance when we look at modern electrical devices. In high frequency electronics, with frequencies of the order of 1 GHz (typical for mobile phones or GPS applications), electrical engineers have to work with functions of the form $u(t) = \cos(2\pi f t)$, where $f \approx 10^9$ and $t$ is a number close to one. To get the accuracy in the function $u(t)$ needed for GPS to work requires a precision in the value used for $\pi$ of the order of one part in $10^{15}$. So, to live in the modern world we really do need to know $\pi$ very accurately.

So, what can we do? One possibility is to take a vast number of terms of the series for $\pi$ etc. above, book lots of time on a very expensive computer, sit back and wait (and wait, and wait). Or we can try to accelerate their convergence, so that with only a small number of terms (say 10) we can get 10 significant figures for $\pi$. The nice thing about this method is that the derivation of the formulae is very transparent (well within the reach of a first year undergraduate or even a good A-level student). In principle this method can also be used to find the sum of other slowly convergent series.
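To see why the phase $2\pi ft$ is so sensitive, here is a small Python sketch (mine, with purely illustrative values of $f$ and $t$). Because any error in $\pi$ gets multiplied by $2ft \approx 2 \times 10^9$, a value of $\pi$ correct to only one part in $10^6$ produces a phase error of hundreds of radians and a completely wrong value of $u(t)$:

```python
import math

f = 1.0e9        # an illustrative 1 GHz frequency
t = 1.0000001    # an illustrative time close to one second

pi_rough = 3.141593                      # pi correct to about one part in 10^6
phase_good = 2.0 * math.pi * f * t       # phase using full double-precision pi
phase_bad = 2.0 * pi_rough * f * t       # phase using the rough value of pi

print("u(t) with accurate pi:", math.cos(phase_good))
print("u(t) with rough pi   :", math.cos(phase_bad))
print("phase error (radians):", phase_bad - phase_good)
```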

Accelerating the convergence of a series

Let's suppose that we have a series $a_1 + a_2 + a_3 + a_4 + \dots + a_n + \dots$ and we define the sum $S_n$ by
$$S_n = a_1 + a_2 + \dots + a_n.$$
We will assume that this series converges. This means there is a limiting sum $S$ so that $S_n \to S$ as $n \to \infty$. If we want to work out the value of $S$ then we can simply take the values $S_n$ and let $n$ get very large. However, just adding up a series loses quite a lot of the information contained within it, and is really a very crude thing to do. Maybe we can squeeze more information out of the series and use this information to accelerate the convergence of $S_n$. This means that we add a correction term to $S_n$ so that it approaches $S$ much more rapidly. What is nice about this approach is that it is quite easy to work out the correction terms. To illustrate this idea we will take series (2) for $\pi^2/6$. Let's define
$$S_n = 1 + \frac{1}{2^2} + \dots + \frac{1}{n^2}. \qquad (5)$$
As $n$ increases, $S_n$ also increases steadily towards the value of $S = \pi^2/6$. In figure 1 we plot the values of $S_n$, which you can see are increasing towards the value of $\pi^2/6 = 1.644934066848226$.

Figure 1: The sum $S_n$, which monotonically increases to $\pi^2/6$.

Suppose that we next look at the difference $E_n = S - S_n$. A plot of $E_n$ is given in figure 2(a) and we can see that this decreases to zero as $n$ increases. But how fast? To estimate this we plot in figure 2(b) the values of $nE_n$. It is clear from these figures that while $E_n \to 0$ as $n \to \infty$, $nE_n$ approaches a constant that looks suspiciously like one.

Figure 2: (a) The error $E_n$ tending to zero. (b) $nE_n$, which tends to one.
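You can check this behaviour numerically. Here is a small Python sketch (mine, not the code used to produce the figures) that uses the known value $S = \pi^2/6$ to tabulate $E_n$ and $nE_n$; the last column creeps up towards one as $n$ grows:

```python
import math

S = math.pi ** 2 / 6      # the limiting sum of series (2)

S_n = 0.0
for n in range(1, 101):
    S_n += 1.0 / n ** 2                   # build up the partial sum S_n
    if n in (10, 20, 50, 100):
        E_n = S - S_n                     # the error E_n = S - S_n
        print(f"n={n:3d}  S_n={S_n:.12f}  E_n={E_n:.3e}  n*E_n={n * E_n:.6f}")
```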

From this, we might guess that $E_n \approx 1/n$, so that to a first approximation
$$S = S_n + \frac{1}{n} + \dots$$
We can improve on this guess by assuming that there is a sequence of numbers $B_0, B_1, B_2, B_3, \dots$ such that as $n \to \infty$
$$S = S_n + \frac{B_0}{n} + \frac{B_1}{n^2} + \frac{B_2}{n^3} + \frac{B_3}{n^4} + \dots \qquad (6)$$
If we can calculate the terms $B_r$ for $r = 0, 1, 2, \dots, k-1$ then we can estimate $S$ from the value of $S_n$ by the expression
$$S \approx S_n + \frac{B_0}{n} + \frac{B_1}{n^2} + \frac{B_2}{n^3} + \frac{B_3}{n^4} + \dots + \frac{B_{k-1}}{n^k}.$$

But what are the $B_r$?

It turns out that the Br are a very well-known set of numbers. There is a nice and systematic way to calculate their values (but if you want you can skip this calculation and go straight to the result).

If we take (6) and replace $n$ by $n-1$ then we get
$$S = S_{n-1} + \frac{B_0}{n-1} + \frac{B_1}{(n-1)^2} + \frac{B_2}{(n-1)^3} + \frac{B_3}{(n-1)^4} + \dots \qquad (7)$$
so that
$$S_{n-1} - S_n = B_0\left(\frac{1}{n} - \frac{1}{n-1}\right) + B_1\left(\frac{1}{n^2} - \frac{1}{(n-1)^2}\right) + B_2\left(\frac{1}{n^3} - \frac{1}{(n-1)^3}\right) + \dots \qquad (8)$$
Also, from the definition of $S_n$ we know that $S_n - S_{n-1} = \frac{1}{n^2}$. Combining these two expressions we get
$$\frac{1}{n^2} = B_0\left(\frac{1}{n-1} - \frac{1}{n}\right) + B_1\left(\frac{1}{(n-1)^2} - \frac{1}{n^2}\right) + B_2\left(\frac{1}{(n-1)^3} - \frac{1}{n^3}\right) + \dots \qquad (9)$$
We can now find the values of each of the terms $B_k$ recursively by expanding each of these expressions in powers of $1/n$ and considering $n$ to be very large. For example
$$\frac{1}{n-1} = \frac{1}{n(1 - 1/n)} = \frac{1}{n} + \frac{1}{n^2} + \frac{1}{n^3} + \dots,$$
and
$$\frac{1}{(n-1)^2} = \frac{1}{n^2(1 - 1/n)^2} = \frac{1}{n^2} + \frac{2}{n^3} + \frac{3}{n^4} + \frac{4}{n^5} + \dots.$$
More generally
$$\frac{1}{(n-1)^k} = \frac{1}{n^k(1 - 1/n)^k} = \frac{1}{n^k} + \frac{k}{n^{k+1}} + \dots + \binom{k+r-1}{r}\frac{1}{n^{k+r}} + \dots$$
where
$$\binom{k+r-1}{r} = \frac{k(k+1)(k+2)\cdots(k+r-1)}{r!}.$$

These are the so-called Taylor series expansions for the above expressions.

Now, we can combine these expressions with equation (9). This gives
$$\frac{1}{n^2} = B_0\left(\frac{1}{n^2} + \frac{1}{n^3} + \dots\right) + B_1\left(\frac{2}{n^3} + \frac{3}{n^4} + \dots\right) + \dots + B_{k-1}\left(\frac{k}{n^{k+1}} + \frac{k(k+1)}{2}\,\frac{1}{n^{k+2}} + \dots + \binom{k+r-1}{r}\frac{1}{n^{k+r}} + \dots\right) + \dots \qquad (10)$$
To find the value of each of the terms $B_k$ we compare the expressions involving terms of the form $1/n^m$ for $m = 2, 3, 4, \dots$. For example, the coefficient of $1/n^2$ on the left of the above equation is 1 and on the right it is $B_0$. Hence $B_0 = 1$. If we next look at the terms in $1/n^3$ and $1/n^4$ we get (respectively)
$$1/n^3: \quad 0 = B_0 + 2B_1, \qquad \text{and} \qquad 1/n^4: \quad 0 = B_0 + 3B_1 + 3B_2.$$
Substituting the value of $B_0$ into the first equation gives $B_1$, and knowing this we can then find $B_2$ from the second equation. This gives
$$B_1 = -\frac{1}{2} \qquad \text{and} \qquad B_2 = \frac{1}{6}.$$
We can then continue inductively, and work out the terms $B_k$ from the recurrence relations
$$B_0 = 1 \qquad \text{and} \qquad B_{k-1} = -\frac{1}{k}\left(\binom{k}{2}B_{k-2} + \binom{k}{3}B_{k-3} + \dots + B_0\right). \qquad (11)$$
Turning the handle we get the following numbers
$$B_0 = 1,\quad B_1 = -\frac{1}{2},\quad B_2 = \frac{1}{6},\quad B_3 = 0,\quad B_4 = -\frac{1}{30},\quad B_5 = 0,\quad B_6 = \frac{1}{42},\quad B_7 = 0,\quad B_8 = -\frac{1}{30},\quad B_9 = 0, \dots \qquad (12)$$
We notice that if $k$ is odd and bigger than one then $B_k = 0$, and that the values of $B_k$ with $k$ even alternate in sign. The numbers that we have calculated are all quite small, but they get much bigger (rapidly!) as $k$ increases. For example
$$B_{50} = \frac{495057205241079648212477525}{66}.$$
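If you want to turn the handle by computer rather than by hand, here is a minimal Python sketch (mine, not from the article) that implements recurrence (11) in exact rational arithmetic and reproduces the numbers in (12), as well as $B_{50}$:

```python
from fractions import Fraction
from math import comb

def bernoulli(kmax):
    """Return [B_0, B_1, ..., B_kmax] using recurrence (11).
    Matching coefficients of 1/n^(k+1) in (10) gives
    C(k,0)B_0 + C(k,1)B_1 + ... + C(k,k-1)B_{k-1} = 0, which is (11)
    once the B_{k-1} term is moved to the left and divided by k."""
    B = [Fraction(1)]                                   # B_0 = 1
    for k in range(2, kmax + 2):                        # each k determines B_{k-1}
        B.append(Fraction(-1, k) * sum(comb(k, j) * B[j] for j in range(k - 1)))
    return B

B = bernoulli(50)
print([str(b) for b in B[:11]])   # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30', '0', '5/66']
print(B[50])                      # 495057205241079648212477525/66
```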

Jakob Bernoulli, 1654 – 1705.

These numbers are famous and are called the Bernoulli numbers. They come up everywhere, from number theory to mechanics and beyond. Jakob Bernoulli described them in the book Ars Conjectandi (published posthumously in 1713) in connection with sums of powers of integers and they were discovered almost simultaneously by a number of other mathematicians. Since then, they have played a starring role in mathematical history. For example they are very important in the understanding of both Fermat's last theorem and the Riemann zeta function, and the first computer program written by Ada Lovelace in 1842 was designed to compute them.

Doing a quick sum

Putting this all together, for any fixed values of $n$ and $k$ we can add up $n$ terms of the original series to find $S_n$ and then add up $k$ terms of the series
$$E_{n,k} = \frac{B_0}{n} + \frac{B_1}{n^2} + \dots + \frac{B_{k-1}}{n^k} \qquad (13)$$
to get the correction. We can then estimate $S$ by $S \approx S_n + E_{n,k}$. The error between this approximation and $S$ is approximately given by the next term in the series $E_{n,k}$, which is $B_k/n^{k+1}$. For example, if $n = k = 10$ then
$$S = S_n + \frac{1}{n} - \frac{1}{2n^2} + \frac{1}{6n^3} - \frac{1}{30n^5} + \frac{1}{42n^7} - \frac{1}{30n^9} + O\!\left(\frac{1}{n^{11}}\right), \qquad (14)$$
where the last term expresses the fact that the error we expect to see is proportional to $1/n^{11}$. For $n = 10$ terms this would mean an error of $10^{-11}$, which is a huge improvement over the error of about 0.1 that we would get from naively adding up the series.
So, what's the catch with doing this? Well, the problem with this approach is that, because the Bernoulli numbers $B_k$ increase rapidly as $k$ increases, ultimately the correction terms get rather large. In fact, if $n$ is fixed then the individual terms in $E_{n,k}$ tend to infinity as $k \to \infty$. For an optimal error for a given value of $n$ it generally works best to truncate the series for the correction and to take $k = n$, estimating $S$ by $S_n + E_{n,n}$. This gives an error proportional to $1/n^n$.
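Here is a short, self-contained Python sketch (mine, not the author's code) that puts formula (13) with the optimal truncation $k = n$ into practice for series (2); the tiny `bernoulli` helper from the earlier sketch is repeated so that this snippet runs on its own:

```python
import math
from fractions import Fraction
from math import comb

def bernoulli(kmax):
    # Recurrence (11): C(k,0)B_0 + C(k,1)B_1 + ... + C(k,k-1)B_{k-1} = 0 for k >= 2
    B = [Fraction(1)]                                     # B_0 = 1
    for k in range(2, kmax + 2):                          # each k determines B_{k-1}
        B.append(Fraction(-1, k) * sum(comb(k, j) * B[j] for j in range(k - 1)))
    return B

def accelerated_sum(n, k):
    """Estimate S = pi^2/6 by S_n plus the correction E_{n,k} of formula (13)."""
    S_n = sum(1.0 / m ** 2 for m in range(1, n + 1))      # n terms of series (2)
    B = bernoulli(k)
    return S_n + sum(float(B[j]) / n ** (j + 1) for j in range(k))

S = math.pi ** 2 / 6
for n in (5, 10, 20):
    est = accelerated_sum(n, n)                           # optimal truncation k = n
    print(f"n={n:2d}  estimate={est:.15f}  error={abs(est - S):.1e}")
```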

We can easily play the same game with the other series mentioned in this article. See this appendix for details.

How well does this work?

Very! To show this we will look at a table of values of $S_n$ for $n = 1, 5$ and $10$, and then the three corrections given by
$$P_n = S_n + \frac{1}{n}, \qquad Q_n = S_n + \frac{1}{n} - \frac{1}{2n^2} + \frac{1}{6n^3}, \qquad R_n = S_n + \frac{1}{n} - \frac{1}{2n^2} + \frac{1}{6n^3} - \frac{1}{30n^5} + \frac{1}{42n^7} - \frac{1}{30n^9}.$$

[Table: values of $S_n$, $P_n$, $Q_n$ and $R_n$ for $n = 1$, 5 and 10.]

We can see that the approximation $R_{10}$ is a very good approximation indeed for $\pi^2/6 = 1.644934066848226$, with an error of only $7 \times 10^{-13}$. This is especially impressive when we see that $S_{10}$ is a very poor approximation to $\pi^2/6$. Note that we have only had to add in an extra 6 terms to do this. To gain the same level of accuracy with $S_n$ we would have to have included another $10^{11}$ terms! Readers are invited to work out $S_{20}$ and $R_{20}$ to further test the impressive accuracy of this method.
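If you would like to reproduce the table yourself, here is a short Python sketch (mine) that prints $S_n$, $P_n$, $Q_n$ and $R_n$ for $n = 1, 5, 10$; the error in $R_{10}$ should come out at around $7 \times 10^{-13}$, as quoted above:

```python
import math

def S(n):
    """Partial sum S_n of series (2)."""
    return sum(1.0 / m ** 2 for m in range(1, n + 1))

def P(n): return S(n) + 1 / n
def Q(n): return S(n) + 1 / n - 1 / (2 * n ** 2) + 1 / (6 * n ** 3)
def R(n): return Q(n) - 1 / (30 * n ** 5) + 1 / (42 * n ** 7) - 1 / (30 * n ** 9)

exact = math.pi ** 2 / 6
for n in (1, 5, 10):
    print(f"n={n:2d}  S={S(n):.13f}  P={P(n):.13f}  Q={Q(n):.13f}  R={R(n):.13f}")
print("pi^2/6      =", f"{exact:.13f}", "  error in R_10:", f"{abs(R(10) - exact):.1e}")
```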

Is this new?

Not at all. The basic idea of accelerating the convergence of a series certainly goes back to the extraordinary mathematician Leonhard Euler, if not earlier. Faced with an alternating series, such as the first one (1) that we looked at to calculate $\pi/4$, in which the terms alternate in sign, Euler devised a transformation which, instead of summing the series, summed the various divided differences of the series. This method does not work for series with all positive terms, such as the series (2) to calculate $\pi^2/6$. However, Euler needed to find the sum of this series, as at the time (1735) its sum was unknown. The question of finding it (and finding a proof) was posed in 1644, and many distinguished mathematicians had tried, and failed, to find the sum. Euler's idea was to compare the sum of terms of the form $1/n^2$ with the integral of the function $1/x^2$. As was well known at the time,
$$\int \frac{dx}{x^2} = -\frac{1}{x} + C,$$
and the sum of the terms $1/n^2$ is an approximation to this integral. If you can find the error in this approximation then you can sum the series by comparing it to the known integral. In a (typical) tour-de-force, Euler was able to calculate this error, deriving the (now famous) Euler-Maclaurin (divergent) series for the error. If you take $n$ terms of the original series for the sum and $k$ terms of the Euler-Maclaurin series then you get exactly our formula (13). It is pleasing (for a numerical analyst like me) to recount that Euler used his formula to calculate the sum of the series (2) correct to 20 decimal places (without, of course, the benefit of any form of calculator). From this calculation he was then able to guess that the sum of the series was $\pi^2/6$. Knowing the answer, he was then able to find a proof.

The problem became known as the Basel problem in honour of the home town of both Euler and the Bernoullis, and Euler's reputation (at the age of only 28) was made for life.

By his method of attack, Euler anticipated a lot of modern mathematics, in which the computer is used as an experimental tool to gain insight into the solution of a mathematical problem as a first stage in proving the result. Nowadays we use this method all the time, and it continues to rely on making highly accurate calculations. But what is nice about the techniques I have described in this article is that, by doing only a little extra mathematics, we can do much better than even the most powerful computers.


About the author

Chris Budd is Professor of Applied Mathematics at the University of Bath, Vice President of the Institute of Mathematics and its Applications, Chair of Mathematics for the Royal Institution and an honorary fellow of the British Science Association. He is particularly interested in applying mathematics to the real world and promoting the public understanding of mathematics.

He has co-written the popular mathematics book Mathematics Galore!, published by Oxford University Press, with C. Sangwin.
