The Euler method

[Image: Leonhard Euler]


How to approximate solutions to ordinary differential equations

The Euler method is a way of approximating solutions to ordinary differential equations. If you're not sure what a differential equation is, see this brief introduction. You will need some understanding of derivatives to understand this article.

An ordinary differential equation is a differential equation that depends on just a single independent variable, which we will call x. Suppose you have a function f(x) which is unknown. However, suppose you do know that its derivative f'(x) is a given function of both x and the original f(x), so it looks like

(1)   f'(x) = g(x,f(x)),

where the function g is known to you. Then the equation (1) is an ordinary differential equation. Solving the differential equation means finding f(x).

For many ordinary differential equations there's no neat formula that gives you the solution. Euler's method helps you find an approximate solution if you do know one initial value. We'll first give the general definition and then look at an example.

First of all, let's suppose there's a number x0 for which we do know the value of f,

f(x0)=y0.

Euler's method relies on the fact that for  a number h whose absolute value |h| is sufficiently small

(2)   f(x0+h) ≈ f(x0)+hf'(x0).

This follows from the definition of a derivative. We're deliberately being vague about how good the approximation is and how small |h| needs to be, because we just want to get the general idea across.
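To get a feel for approximation (2) before moving on to the method itself, here is a quick numerical check in Python. It is a minimal sketch of our own: the function f(x) = x^2, with derivative f'(x) = 2x, is chosen purely for illustration and is not part of this article's example.

```python
# A quick check of approximation (2): f(x0+h) ≈ f(x0) + h*f'(x0).
# The function f(x) = x**2 (so f'(x) = 2x) is our own choice for illustration.

def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x

x0 = 1.0
for h in [0.5, 0.1, 0.01]:
    exact = f(x0 + h)                   # the true value f(x0 + h)
    approx = f(x0) + h * f_prime(x0)    # the right-hand side of (2)
    print(f"h = {h}: exact = {exact:.4f}, approximation = {approx:.4f}")
```

The two values get closer together as |h| shrinks, which is exactly the behaviour the approximation relies on.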

Euler's method works step by step.

Step 1

Let's set x1=x0+h. Then using the fact that f'(x0)=g(x0,f(x0)) we have from (2) above that

f(x1) ≈ f(x0)+hg(x0,f(x0))=y0+hg(x0,y0).

Since we know the function g and the values of x0 and y0, we now have approximated the value of f at x1. Let's call this approximation y1, so

y1=y0+hg(x0,y0).

Step 2

Now set x2=x1+h. Again we have

f(x2)=f(x1+h) ≈ f(x1)+hf'(x1)=f(x1)+hg(x1,f(x1)). Since f(x1) ≈ y1 we have

f(x2) ≈ y1+hg(x1,y1). Because we now have two approximations involved, the one saying that y1 ≈ f(x1) and the one coming from (2) above, the overall approximation is probably worse than the one in the first step, but never mind.

Let's set y2=y1+hg(x1,y1).

And so on

You might be able to see where this is going. We could continue for as long as we like, defining x_n=x_{n-1}+h and

y_n=y_{n-1}+hg(x_{n-1},y_{n-1}), for successive natural numbers n. This would give us approximations of f(x_n). The smaller the step size h we choose, the better the approximations. However, they do get successively worse as we increase the number n of steps. To find out more, see the Wikipedia entry.
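The iteration above translates directly into a short loop. Here is a minimal sketch in Python; the function name euler_method and its signature are our own choices and not taken from any particular library.

```python
# A minimal sketch of the iteration described above. Given g, an initial point
# (x0, y0), a step size h and a number of steps n, it returns the lists of
# approximate points x_0,...,x_n and y_0,...,y_n.

def euler_method(g, x0, y0, h, n):
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * g(xs[-1], ys[-1]))  # y_k = y_{k-1} + h*g(x_{k-1}, y_{k-1})
        xs.append(xs[-1] + h)                      # x_k = x_{k-1} + h
    return xs, ys
```

Each pass through the loop performs one step of the method: it evaluates g at the latest approximate point and moves a distance h along the x-axis.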

We can think of all this geometrically. The graph of the function f(x) will be a curve sitting in a Cartesian coordinate system. The points x1, x2, x3, etc, we have defined all lie along the x-axis with a distance h between each pair of successive points. The point (x0,y0) lies on the graph of f(x) (because that's our known initial point). The points (x1,y1), (x2,y2), (x3,y3), etc, will sit near the graph of f(x) (they are the approximations). If we connect successive points by straight line segments we have an approximation of the graph of f(x). We have therefore found an approximate solution to our ordinary differential equation.

Euler's method is best implemented on a computer as computations by hand soon become tedious. See here for some code in Python and R that implements Euler's method for the example below, as well as another interesting example: the SIR model that describes the spread of infectious diseases.
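The code linked above is not reproduced here, but to give a flavour of how the same idea extends to a system of equations, here is a hedged sketch of Euler's method applied to the SIR model. The equations are S' = -beta S I, I' = beta S I - gamma I, R' = gamma I; the parameter values, step size and initial conditions below are made-up numbers chosen only for illustration.

```python
# A sketch of Euler's method applied to the SIR model, where S, I and R are the
# susceptible, infected and recovered fractions of the population.
# All numbers below are assumed values chosen only for illustration.

beta, gamma = 0.3, 0.1      # assumed infection and recovery rates
h, steps = 0.1, 1000        # assumed step size and number of Euler steps
S, I, R = 0.99, 0.01, 0.0   # assumed initial fractions of the population

for _ in range(steps):
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + h * dS, I + h * dI, R + h * dR   # one Euler step for the whole system

print(f"After {h * steps:.0f} time units: S = {S:.3f}, I = {I:.3f}, R = {R:.3f}")
```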

Euler's method is the simplest approach to approximating solutions to ordinary differential equations and is the most basic of a whole family of methods called the Runge-Kutta methods. In reality mathematicians often use more sophisticated members of this family to reduce the error.

A simple example

Suppose our ordinary differential equation is

f'(x)=f(x) and that we know that f(0)=1. It's a well-known equation, and you might know that its solution is f(x)=e^x. But let's pretend for a moment that we don't know that and use Euler's method to approximate a solution.

We set x0=0 so y0=f(x0)=f(0)=1. For the step size h let's choose h=0.5, so we have

x0=0

x1=0.5

x2=1

x3=1.5

x4=2.

In general, after n steps we have 

x_n=0.5n.

By Euler's method we have

y_n=y_{n-1}+hg(x_{n-1},y_{n-1})=y_{n-1}+0.5y_{n-1}=1.5y_{n-1}. This gives

y1=1.5×1=1.5,

y2=1.5y1=1.5^2=2.25,

y3=1.5y2=1.5^3=3.375,

y4=1.5y3=1.5^4=5.0625.

In general, after n steps we have  

y_n=1.5^n.

The points (x_n,y_n)=(0.5n,1.5^n) now approximate points on the graph of f(x).
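As a check on the arithmetic above, here is the example run through a short Python version of the iteration (a sketch of our own, with math.exp used only to compare against the exact solution e^x).

```python
import math

# Euler's method for the example f'(x) = f(x), f(0) = 1, with step size h = 0.5.

def euler_method(g, x0, y0, h, n):
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * g(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

xs, ys = euler_method(lambda x, y: y, 0.0, 1.0, 0.5, 4)  # g(x, y) = y for this example
for x, y in zip(xs, ys):
    print(f"x = {x:.1f}: Euler gives {y:.4f}, exact e^x = {math.exp(x):.4f}")
```

The printed y values 1, 1.5, 2.25, 3.375 and 5.0625 are exactly the powers of 1.5 computed above, while the exact values e^x pull further ahead with each step.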

We have plotted these points in the Geogebra applet below and connected them by straight line segments (in blue), together with the graph of f(x) (in black). We have set h=0.5 as in our example, but you can use the slider to change the value of h. You see that as the step size h becomes smaller, the approximation gets better. However, for a smaller step size you do need to approximate more points to get an approximation of the graph for the entire segment shown here. This soon becomes very tedious to do by hand, which is why we wrote a bit of code in Geogebra to produce the approximations in an automated way. It's a good illustration of the fact that to solve differential equations, even if it's only approximately, you usually need a computer. 

The Euler method is named after the mathematician Leonhard Euler (shown in the image at the top of this article) who first proposed it in his book Institutionum calculi integralis (published 1768–1770).