
If you're familiar with some calculus, then you might know that certain types of functions can be approximated using convergent series. (If that's news to you, then you might first want to read this short article and familiarise yourself with the idea.)
It turns out that some functions can also be approximated using series that don't converge at all. This seems weird at first: a divergent series (one that doesn't converge) can't be considered to be "equal to" a single finite value (as a convergent series can be) so how can you possibly use it to approximate anything? Here's an answer to this question, which was first formalised by the French mathematician Henri Poincaré in 1886.
To illustrate the idea behind so-called asymptotic expansions, let's first imagine that we have some function $f(x)$ of a variable $x$, and that we are interested in its behaviour when $x$ is large.
Now imagine that you also have an infinite series (i.e. infinite sum) of the form
$$a_0 + \frac{a_1}{x} + \frac{a_2}{x^2} + \frac{a_3}{x^3} + \dots = \sum_{n=0}^{\infty} \frac{a_n}{x^n},$$
where the coefficients $a_n$ are fixed numbers.
The reason we are considering a series of this form is to do with the way the functions $1/x^n$ behave as $x$ tends to infinity: each of them tends to zero, and the larger the power $n$, the faster it does so.
Now given some positive integer $N$, write
$$S_N(x) = a_0 + \frac{a_1}{x} + \dots + \frac{a_N}{x^N}$$
for the partial sum made up of the terms of the series up to the power $1/x^N$, and write
$$R_N(x) = f(x) - S_N(x)$$
for the remainder, that is, the error you make when you approximate $f(x)$ by $S_N(x)$. The series is called an asymptotic expansion of $f$ if, for every fixed $N$, this error tends to zero faster than the last term you kept, in other words if
$$x^N R_N(x) \to 0 \quad \text{as } x \to \infty.$$
Notice how different this is from convergence: for convergence you fix $x$ and let the number of terms tend to infinity, while here you fix the number of terms $N$ and let $x$ tend to infinity. That's why a series can be a perfectly good asymptotic expansion of a function even though it diverges for every single value of $x$: for fixed $x$ the terms may eventually grow without bound, but cutting the series off at a well-chosen point can still give an excellent approximation when $x$ is large.
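To make this concrete, here is a classic example which goes back to Euler (it isn't the gamma function example used in the figure below, but it is perhaps the simplest one). Consider the function defined by the integral
$$ f(x) = \int_0^{\infty} \frac{e^{-t}}{1 + t/x}\, dt. $$
Formally expanding $1/(1+t/x)$ as a geometric series and integrating term by term, using the fact that $\int_0^{\infty} t^n e^{-t}\, dt = n!$, produces the series
$$ 1 - \frac{1!}{x} + \frac{2!}{x^2} - \frac{3!}{x^3} + \dots = \sum_{n=0}^{\infty} \frac{(-1)^n\, n!}{x^n}. $$
Because of the factorials this series diverges for every value of $x$, and yet it is a genuine asymptotic expansion of $f$ in the sense just described: a short calculation (splitting off the first $N+1$ terms of the geometric series exactly) shows that the remainder $R_N(x)$ is no bigger in size than the first term you left out, so $x^N R_N(x) \to 0$ as $x \to \infty$.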

Figure: The remainder $R_N$ for the asymptotic expansion of the gamma function, plotted against the number of terms $N$. Blue dots show the value of the remainder for $x=2$ and red dots for $x=3$. In both cases the remainder decreases at first as the number of terms $N$ grows, until it reaches a minimum value: truncating the expansion at this value of $N$ gives the best approximation to the function. After that the remainder increases, so the approximation becomes less and less accurate.
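If you'd like to see this behaviour for yourself, here is a minimal Python sketch of our own, using the Euler example from above rather than the gamma function shown in the plot. It computes $f(x)$ by numerical integration, forms the partial sums $S_N(x)$ of the divergent series, and prints the size of the remainder $|R_N(x)|$ as the number of terms grows; the values $x=5$ and $x=10$ are just illustrative choices.

```python
import math

import numpy as np
from scipy.integrate import quad


def f(x):
    """Euler's example: the integral of e^(-t) / (1 + t/x) for t from 0 to infinity."""
    value, _ = quad(lambda t: math.exp(-t) / (1.0 + t / x), 0, np.inf)
    return value


def partial_sum(x, N):
    """Partial sum S_N(x) = sum over n = 0..N of (-1)^n n! / x^n of the divergent series."""
    return sum((-1) ** n * math.factorial(n) / x ** n for n in range(N + 1))


for x in (5.0, 10.0):  # illustrative values of x
    exact = f(x)
    print(f"x = {x}")
    for N in range(21):
        remainder = abs(exact - partial_sum(x, N))
        print(f"  N = {N:2d}   |R_N| = {remainder:.3e}")
    # The error shrinks until N is roughly equal to x, then grows without bound:
    # truncating near the smallest term gives the best possible approximation.
```

For $x=10$, for example, the error bottoms out around $N=10$, at roughly the size of the smallest term of the series, before growing again.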
Asymptotic expansions can be really useful when approximating functions. Even when your function can also be approximated by a convergent series (see here), the convergence may be very slow, so you'd have to add up a very long partial sum to get a good approximation. In that case an asymptotic expansion may do a better job, as it may deliver a good approximation with fewer terms. See Stokes phenomenon: An asymptotic adventure for a problem where this is the case.
Even though asymptotic expansions involve divergent series (which are famous for being tricky) you can work with them quite nicely. For example, if you have two functions with two corresponding asymptotic expansions, then the asymptotic expansion of the sum (or difference, or product, or quotient) of the functions is the sum (or difference, or product, or quotient) of the asymptotic expansions of the original functions.
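For example, if $f(x) \sim \sum_{n=0}^{\infty} a_n/x^n$ and $g(x) \sim \sum_{n=0}^{\infty} b_n/x^n$, then you find the expansion of the product $f(x)g(x)$ simply by multiplying the two series out and collecting powers of $1/x$, just as you would for polynomials:
$$ f(x)\,g(x) \sim \sum_{n=0}^{\infty} \frac{c_n}{x^n}, \qquad \text{where } c_n = a_0 b_n + a_1 b_{n-1} + \dots + a_n b_0. $$
(The one thing to watch out for is the quotient: there you need the leading coefficient $b_0$ of the denominator's expansion to be non-zero.)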
Also, if a function has an asymptotic expansion, then this expansion is unique: there's no ambiguity. This doesn't work the other way around though: two different functions can have the same asymptotic expansion. A striking example comes from adding an exponentially small term to a function: the two functions $f(x)$ and $f(x) + e^{-x}$ have exactly the same asymptotic expansion as $x$ tends to infinity.
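To see why, note that $e^{-x}$ tends to zero faster than any power of $1/x$: for every $N$,
$$ x^N e^{-x} \to 0 \quad \text{as } x \to \infty, $$
so the asymptotic expansion of $e^{-x}$ on its own is just $0 + 0/x + 0/x^2 + \dots$. Adding it to $f(x)$ therefore changes the function but doesn't change a single coefficient of the expansion: asymptotic expansions simply cannot "see" exponentially small terms.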
The definition we gave here is somewhat specific in that we were looking at the behaviour of a function as $x$ tends to infinity, using a series in powers of $1/x$. Asymptotic expansions can be defined in the same spirit for the behaviour of a function near any other point, and for functions of a complex variable, but the basic idea is always the same: fix the number of terms and ask how good the approximation becomes as you approach the point you're interested in.
This article relates to the Applicable resurgent asymptotics research programme hosted by the Isaac Newton Institute for Mathematical Sciences (INI). You can see more articles relating to the programme here.
This article is part of our collaboration with the Isaac Newton Institute for Mathematical Sciences (INI), an international research centre and our neighbour here on the University of Cambridge's maths campus. INI attracts leading mathematical scientists from all over the world, and is open to all. Visit www.newton.ac.uk to find out more.
