If I tell someone I am a financial mathematician, they often think I am an accountant with pretensions. Since accountants do not like using negative numbers, one of the oldest mathematical technologies, I find this irritating.
A roll of the dice
I was drawn into financial maths not because I was interested in finance, but because I was interested in making good decisions in the face of uncertainty. Mathematicians have been interested in the topic of decision-making since Girolamo Cardano explored the ethics of gambling in his Liber de Ludo Aleae of 1564, which contains the first discussion of the idea of mathematical probability. Cardano, famously, commented that knowing that the chance of a fair dice coming up with a six is one in six is of no use to the gambler since probability does not predict the future. But it is of interest if you are trying to establish whether a gamble is fair or not; it helps in making good decisions.
The average value of rolling a dice converges to the expected value of 3.5 when the dice is rolled a large number of times.
With the exception of Pascal’s wager (essentially that you've got nothing to lose by betting that God exists), the early development of probability, from Cardano, through Galileo and Fermat and Pascal up to Daniel Bernoulli, was driven by considering gambling problems. These ideas about probability were collected by Jacob Bernoulli (Daniel's uncle), in his work Ars Conjectandi. He introduced the law of large numbers, proving that if you repeat the same experiment (say rolling a dice) a large number of times, then the observed mean (the average of the scores you have rolled) will converge to the expected mean. (For a fair dice each of the six scores is equally likely, so the expected mean is (1+2+3+4+5+6)/6 = 3.5.)
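To see the law of large numbers at work, here is a minimal simulation sketch (in Python, with a fixed seed of my choosing; none of this code appears in the article): roll a fair dice more and more times and watch the observed mean settle towards 3.5.

```python
# A small sketch illustrating the law of large numbers: the observed mean
# of fair dice rolls drifts towards the expected mean of 3.5.
import random

random.seed(1)  # fixed seed so the sketch is reproducible
for n in (10, 100, 1_000, 10_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"{n:>7} rolls: observed mean = {sum(rolls) / n:.3f}  (expected 3.5)")
```

The convergence is slow and irregular for small numbers of rolls, which is exactly why knowing the probability is no help in predicting any individual throw.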
Measure theory
Building on Jacob Bernoulli’s work, probability theory was developed by the likes of Laplace in the eighteenth century and Fisher, Neyman and Pearson in the twentieth. In conjunction with statistics, probability theory became an essential tool of the scientist. For the first third of the twentieth century, probability was associated with inferring results, such as the life expectancy of a person, from observed data. But as an inductive science (i.e. the results were inspired by experimental observations, rather than the deductive nature of mathematics built on axioms), probability was not fully integrated into maths until 1933 when Andrey Kolmogorov identified probability with measure theory. Kolmogorov defined probability to be any measure on a collection of events — not necessarily based on the frequency of events.
What is it worth?
This idea is counter-intuitive if you have been taught to calculate probabilities by counting events, but it can be explained with a simple example. If I want to measure the value of a painting, I can base this on the area the painting occupies, on the price an auctioneer gives the painting, or on my own subjective assessment. For Kolmogorov, these are all acceptable measures which could be transformed into probability measures. The measure you choose to help you make decisions will depend on the problem you are addressing: if you want to work out how to cover a wall with pictures, the area measure would be best; if you are speculating, the auctioneer’s would be better.
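As a rough sketch of this idea (the paintings and the numbers below are made up for illustration), any non-negative measure can be turned into a probability measure simply by dividing through by its total, so that the values sum to one:

```python
# Hypothetical values for three paintings under two different measures.
area_m2   = {"Painting A": 2.0, "Painting B": 0.5, "Painting C": 1.5}   # wall space
auction_k = {"Painting A": 120, "Painting B": 800, "Painting C": 80}    # auctioneer's price

def to_probability_measure(measure):
    """Normalise a non-negative measure so its values sum to one."""
    total = sum(measure.values())
    return {event: value / total for event, value in measure.items()}

print(to_probability_measure(area_m2))    # weights by the wall space occupied
print(to_probability_measure(auction_k))  # weights by the auctioneer's valuation
```

Both normalised measures are perfectly good probability measures; they simply answer different questions.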
Kolmogorov formulated the axioms of probability that we now take for granted. Firstly, that the probability of an event happening is a non-negative real number (P(E) ≥ 0). Secondly, that you know all the possible outcomes, and the probability of one of these outcomes occurring is 1 (e.g. for a six-sided dice, the probability of rolling a 1, 2, 3, 4, 5, or 6 is P(1,2,3,4,5,6) = 1). And finally, that you can sum the probability of mutually exclusive events (e.g. the probability of rolling an even number is P(2,4,6) = P(2) + P(4) + P(6) = 1/2). (You can read more about probability and its development on the Understanding Uncertainty site, and the Plus article Measure for measure is an excellent introduction to measure theory.)
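A small check of the three axioms for the fair dice measure (an illustrative sketch, not from the article):

```python
from fractions import Fraction

# Probability measure for a fair six-sided dice: each outcome has measure 1/6.
P = {k: Fraction(1, 6) for k in range(1, 7)}

assert all(p >= 0 for p in P.values())             # axiom 1: non-negativity
assert sum(P.values()) == 1                        # axiom 2: some outcome occurs
assert sum(P[k] for k in (2, 4, 6)) == Fraction(1, 2)  # axiom 3: additivity over
                                                       # mutually exclusive outcomes
```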
Deciding a fair price
Why is the measure theoretic approach so important in finance? Financial mathematicians investigate markets on the basis of a simple premise: when you price an asset it should be impossible to make money without the risk of losing money and, by symmetry, it should be impossible to lose money without the chance of making money. If you stop and think about this premise you will quickly realise it has little to do with the practicalities of business, where the objective is precisely to make money without the risk of losing it. Such an opportunity is called an arbitrage, and financial institutions invest millions in technology that helps them identify arbitrage opportunities.
An asset should be priced so as to prevent such arbitrages. Financial mathematicians realised that an asset’s price can be represented as an expectation under a special probability measure, called a risk-neutral measure, which bears no direct relation to the 'natural' probability of the asset price rising or falling based on past observations. (The explanation of risk-neutral measures is pretty straightforward and is described here. You can also read a general introduction to arbitrage and pricing in the Plus article Rogue Trading.)
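Here is a minimal sketch of how such a risk-neutral measure arises, using a one-period binomial model with made-up numbers (the model and the figures are illustrative assumptions, not taken from the article): a share worth 100 today will be worth either 120 or 90 next year, and the interest rate is zero.

```python
# One-period binomial sketch with illustrative numbers.
S0, S_up, S_down = 100.0, 120.0, 90.0   # share price today and in the two future states
r = 0.0                                 # interest rate, taken as zero for simplicity

# The risk-neutral probability q is the one that makes today's price an
# expected discounted price:  S0 = (q * S_up + (1 - q) * S_down) / (1 + r)
q = ((1 + r) * S0 - S_down) / (S_up - S_down)
print(f"risk-neutral probability of the up move: q = {q:.3f}")

# Price a call option with strike 100 (paying max(S - 100, 0)) as a
# risk-neutral expectation. Note that q need not bear any relation to a
# 'natural' probability estimated from past observations.
payoff_up, payoff_down = max(S_up - 100, 0), max(S_down - 100, 0)
call_price = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)
print(f"arbitrage-free call price: {call_price:.2f}")
```

Any other price for the option would let someone make money for certain by trading the option against the share, which is exactly the arbitrage the premise rules out.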
However, as with much of probability, what seems simple can be very subtle. A no-arbitrage price is not simply an expectation under a special probability measure; it is only arbitrage-free if you also undertake an investment strategy, known as hedging, that removes the possibility of making or losing money. In the real world, which involves awkward things like taxes and transaction costs, it is impossible to find a unique risk-neutral measure that ensures all these risks can be hedged away. One of the key objectives of financial maths is to understand how to construct the best investment strategies that minimise risk in the real world.
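Continuing the same made-up binomial sketch, the hedge is the holding of shares and cash that reproduces the option's payoff in both future states; its cost today necessarily equals the risk-neutral price, so selling the option at that price and holding the hedge leaves no possibility of making or losing money.

```python
# Hedging sketch, continuing the one-period binomial example above.
S0, S_up, S_down, r = 100.0, 120.0, 90.0, 0.0
payoff_up, payoff_down = 20.0, 0.0      # call with strike 100 in the two states

# Choose delta shares and cash b so the portfolio matches the payoff in both states:
#   delta * S_up   + b * (1 + r) = payoff_up
#   delta * S_down + b * (1 + r) = payoff_down
delta = (payoff_up - payoff_down) / (S_up - S_down)
b = (payoff_up - delta * S_up) / (1 + r)

hedge_cost = delta * S0 + b             # cost of setting up the hedge today
print(f"hold {delta:.3f} shares and {b:.2f} in cash; cost today = {hedge_cost:.2f}")
# The cost matches the risk-neutral price (about 6.67) computed earlier.
```

With taxes, transaction costs and other real-world frictions this perfect replication breaks down, which is why the hedging problem remains a live research question.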
In good company
Financial mathematics is interesting because it synthesises a highly technical and abstract branch of maths, measure theoretic probability, with practical applications that affect people’s everyday lives. Financial mathematics is exciting because, by employing advanced mathematics, we are developing the theoretical foundations of finance and economics. To appreciate the impact of this work, we need to realise that much of modern financial theory, including Nobel prize winning work, is based on assumptions that are imposed, not because they reflect observed phenomena but because they enable mathematical tractability. Just as physics has motivated new maths, financial mathematicians are now developing new maths to model observed economic, rather than physical, phenomena.
Financial innovation currently has a poor reputation and some might feel that mathematicians should think twice before becoming involved with "filthy lucre". However, Aristotle tells us that Thales, the father of western science, became rich by applying his scientific knowledge to speculation; Galileo left the University of Padua to work for Cosimo II de Medici and wrote On the discoveries of dice, becoming the first quant; and around a hundred years after Galileo left Padua, Sir Isaac Newton left Cambridge to become warden of the Royal Mint and lost the modern equivalent of £3,000,000 in the South Sea Bubble. Personally, what was good enough for Newton is good enough for me. Moreover, interesting things happen when maths meets finance: the concept of probability emerged out of the interface. And several of the 23 DARPA Challenges for mathematics, such as the mathematics of the brain, the dynamics of networks, capturing and harnessing stochasticity in nature, and going beyond convex optimization, are highly relevant to finance.
The Credit Crisis did not affect all banks in the same way. Some banks, like J.P. Morgan, engaged with mathematics and made good decisions, while others did not and caused mayhem (see Gillian Tett’s book Fool’s Gold for more information). Since Cardano, financial maths has been about understanding how humans make decisions in the face of uncertainty and then establishing how to make good decisions. Making, or at least not losing, money is simply a by-product of this knowledge. As Xunyu Zhou, who is developing the rigorous mathematical basis for behavioural economics at Oxford, recently commented:
Financial mathematics needs to tell not only what people ought to do, but also what people actually do. This gives rise to a whole new horizon for mathematical finance research: can we model and analyse ... the consistency and predictability in human flaws so that such flaws can be explained, avoided or even exploited for profit?
This is the theory. In practice, in the words of one investment banker: "Banks need high level maths skills because that is how the bank makes money."
About the author
Tim Johnson is an RCUK Academic Fellow in Financial Mathematics, based at Heriot-Watt University and the Maxwell Institute for Mathematical Sciences in Edinburgh. He is active in promoting the sensible use of mathematics in finance and highlighting the need for more research into mathematics in order to better understand random and complex environments. He is Course Director for the only undergraduate course in Financial Mathematics, on which he teaches, and undertakes research in stochastic optimal control. Prior to becoming an academic, he worked for sixteen years in the oil exploration industry.
You can read more about probability and financial mathematics on Tim's blog Magic, maths and money.