Struggling with chance

Marianne Freiberger

It makes more sense to bet on a coin flip than to buy a lottery ticket. With a coin flip you have a 50:50 chance of winning but with the lottery that chance is only around 1 in 14 million. If you stand to win the same amount for the same stake, the choice is clear.

1 in 14 million.

The numbers here come from straightforward reasoning. For a coin there are only two possible outcomes, heads or tails, which are equally likely, so you get an even 50:50 chance. In the National Lottery there are around 14 million possible outcomes (13,983,816 to be precise) because that's how many different six-number combinations you can draw from 49. They are all equally likely, hence the estimate of your winning chance.
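If you want to check the 13,983,816 figure, it is just the number of ways of choosing 6 numbers from 49, which the binomial coefficient gives directly. Here is a minimal sketch in Python (the choice of language is mine, not the article's):

```python
from math import comb

# Number of ways of choosing 6 numbers from 49, order irrelevant:
# 49! / (6! * 43!) = 13,983,816
outcomes = comb(49, 6)
print(outcomes)        # 13983816
print(1 / outcomes)    # about 7.2e-08, roughly 1 in 14 million
```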

But what if the coin is crooked, the lottery machine rigged, or aliens direct the outcome from another planet? This reasoning hinges on the assumption that all outcomes are equally likely, but how can you be sure of that? Even if they are, what do the numbers really tell you about the one-off event of the next coin flip or lottery draw?

Frequent failure?

A more empirical approach would be to use experience as a guide. If in a hundred previous flips the coin came up heads 75% of the time, then that's evidence that the coin is biased. You could use this proportion — 75% (or 3/4) — as defining the chance of heads. But ironically, the result you observed might itself be due to chance: the coin might in reality be biased towards tails and have produced the 75% frequency of heads by a freak accident. You could toss your coin another 100 times to confirm your result, but by the same reasoning you should then toss it another 100 times, and another 100, and so on. You'd really have to toss it an infinite number of times, which you can only do in theory, so you've lost the empiricism that was meant to guide you. More generally, this approach doesn't give you a way of assigning probabilities to one-off events, such as football World Cups or presidential elections. Taking this frequentist approach to the extreme, you could say that the probability of Obama winning the 2008 presidential election was 0, since no black man had ever won one before.
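A small simulation makes the problem vivid (this is my own illustration, not part of the article; the function name and the random seed are arbitrary). Even a perfectly fair coin gives a different answer from every batch of 100 flips, and the observed frequency only settles down in the limit you can never actually reach:

```python
import random

random.seed(1)  # arbitrary seed, just so the sketch is reproducible

def heads_frequency(n_flips, p_heads=0.5):
    """Proportion of heads observed in n_flips tosses of a coin with true bias p_heads."""
    return sum(random.random() < p_heads for _ in range(n_flips)) / n_flips

# Five batches of 100 flips of a fair coin give five different estimates:
print([heads_frequency(100) for _ in range(5)])

# The estimate creeps towards 0.5 only as the number of flips grows without bound:
for n in (100, 10_000, 1_000_000):
    print(n, heads_frequency(n))
```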

It's not surprising that chance is hard to pin down; after all, it's about unpredictability. That's the whole point of it. What might be surprising, then, is that mathematics, the most exact of all sciences, has a whole branch devoted to it. It does this by dodging the central issue: it doesn't tell you what probability actually is, or how to measure it exactly. It simply sets out the formal structure of probability theory and works out the mathematical and logical consequences of that structure. It states that probabilities can be expressed as numbers between 0 and 1 (0 means something can't happen and 1 that it's sure to happen) and then tells you how to calculate with them so that you can work out the probabilities of combinations of events. For example, it tells you that if two events are independent (eg the coin comes up heads and you win the lottery) and have probabilities p and q respectively, then the probability of both of them happening (eg heads and a lottery win) is the product p × q (eg 1/2 × 1/14,000,000 = 1/28,000,000).
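In code the product rule for independent events is a one-liner; the sketch below simply restates the example above using the exact lottery figure (again, the Python is mine, not the article's):

```python
# Product rule for independent events: P(A and B) = P(A) * P(B).
p_heads = 1 / 2                 # probability of heads on a fair coin
p_lottery = 1 / 13_983_816      # probability of matching all six lottery numbers

p_both = p_heads * p_lottery    # probability of heads AND a lottery win
print(p_both)                   # about 3.6e-08, roughly 1 in 28 million
```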


Boy or girl?

Scientists, from theoretical physicists to sociologists, make heavy use of the mathematics of probability, and most of them are frequentists. Ask a medical doctor what she means when she says that your chance of having a baby girl is 0.49, and she will answer that 0.49 is the proportion of girls among children born last year, or in the last decade, or in some other time period, to a group of people that are similar to you in some way. That's not a particularly tight definition, but the idea is that if you yourself were to have a large number of babies under similar conditions, then that's the proportion of girls you'd expect to have. But since you are not about to have a large number of babies under similar conditions it's not entirely clear what that proportion should mean for you and your offspring.

This leaves probability theory on shaky philosophical ground. It's an exact science based on a woolly concept that can neither be measured nor linked to our actions by any compelling chain of reasoning.

Place your bets

There is, however, another sense of probability that gives some justification to the theory. The central idea is that probabilities are subjective degrees of belief. If I tell you that the probability of heads on a coin is 1/2, then that's not because the number 1/2 is attached to the coin like its diameter or weight. It's because for some reason or other I have decided that I'm 50% confident that heads will come up at the next toss. It may be because I tossed the coin 100 times to assess it, or because I observed its bilateral symmetry, or because an elf whispered to me in a dream.

Degrees of belief are obviously highly personal, but proponents of this approach have argued that they can nevertheless be measured by observing a person's betting behaviour. Loosely speaking, the more confident I am that an event will occur, the more money I am prepared to wager on it. In this spirit the mathematician Bruno de Finetti defined a person's degree of belief p in an event A to be the price (in a currency valued by the person, material or otherwise) at which they would be equally willing to buy or sell a bet that pays out one unit if A happens. Other people have proposed different ways of measuring degrees of belief, but all of them are based on the gambling analogy. We have now ventured into what is called game theory.
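To see why that price pins down the degree of belief, here is a toy illustration (my own framing, not from the article): a bet on A is a ticket that pays £1 if A happens and nothing otherwise. If your degree of belief in A is q, the ticket is worth q × £1 to you on average, so q is exactly the price at which buying and selling it look equally attractive.

```python
def expected_gain(price, belief):
    """Expected gain from buying a £1-payoff ticket on A at `price`, given degree of belief `belief`."""
    return belief * 1 + (1 - belief) * 0 - price

belief = 0.5
for price in (0.25, 0.5, 0.75):
    print(price, round(expected_gain(price, belief), 2))
# Positive below 0.5, zero at 0.5, negative above:
# the price at which you would happily trade either way reveals your belief.
```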

All of this presupposes that the person whose degrees of belief are being measured possesses a minimum amount of rationality. Not because you need to rule out their degrees of belief being misguided (eg the elf), for we have left behind the idea that probabilities are quantities that are "out there in the world". Rather, it's because an entirely irrational person cannot be expected to act on their degrees of belief in any meaningful way, so not only can we not measure them by the mechanism proposed above, we can't draw any further conclusions from their behaviour either.

Chance: it's personal.

Once you assume that a person adheres to a set of pretty basic rationality axioms, a very interesting result emerges. It can be shown that people should handle their degrees of belief just like mathematicians handle their probabilities. They should be able to express their degrees of belief as numbers between 0 and 1 and then calculate with them just as probability theory stipulates you should calculate with probabilities. To pick up the earlier example, if a person's degree of belief in A happening is p and their degree of belief in B happening is q, and they believe that A happening is independent of B happening, then their degree of belief in both happening should be p × q.

If a person doesn't handle their degrees of belief in accordance with the rules of probability theory, then it can be shown that they will systematically lose money in certain betting situations, something no rational person would want to do. Here is an example. Suppose a person, call her Alice, has decided that her personal probability (her degree of belief) of heads coming up on a coin flip is 0.5 and that the probability of tails is 0.25. This violates the rules of probability because the probabilities of complementary events should add up to 1, while here they only add up to 0.75. Now a cunning gambler, Bob, can get Alice to offer even odds on heads and 3 to 1 odds on tails — these are the odds that reflect her probabilities, so they are acceptable to her. If Bob now bets £20 on heads and £10 on tails, he will take £40 if heads comes up (the £20 he bet and £20 he won) and £40 if tails comes up (the £10 he bet and £30 he won). Since he has bet £30 in total he will make a sure profit of £10; in other words, there's a guaranteed loss for Alice. This is an example of what's known as a Dutch book.
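The arithmetic is easy to check; the sketch below just replays Bob's two bets under both outcomes (names and stakes as in the text, the Python is mine):

```python
# Alice's incoherent degrees of belief: P(heads) = 0.5, P(tails) = 0.25 (they sum to 0.75, not 1).
# The odds she therefore accepts: evens on heads, 3 to 1 on tails.
stake_heads, stake_tails = 20, 10          # Bob's stakes in pounds
total_staked = stake_heads + stake_tails   # 30

payout_if_heads = stake_heads * (1 + 1)    # stake returned plus winnings at even odds = 40
payout_if_tails = stake_tails * (1 + 3)    # stake returned plus winnings at 3 to 1   = 40

print(payout_if_heads - total_staked)      # 10: Bob's profit if heads comes up
print(payout_if_tails - total_staked)      # 10: Bob's profit if tails comes up
# Whatever the coin does, Bob gains £10 and Alice loses £10: a Dutch book.
```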

This, so the argument goes, releases probability theory from its abstract vacuum. There is a compelling link between the theory and rational decision making. The general advice would presumably be this: weigh up your degrees of belief as carefully as possible and then use the rules of probability theory to calculate with them.

A logic of belief

The obvious objection to this kind of approach is that it is highly contrived. The principles of rationality underlying these arguments are basic — for example, one of them says that if a person prefers outcome A to outcome B and outcome B to outcome C, then they should also prefer outcome A to outcome C — but it can be argued that real people don't always adhere to them. And real people certainly don't treat life like a series of betting games. They may refuse to enter any bets at all, thereby undermining the whole theory. But the idea here is not to simulate real people. It is to find out whether the mathematical theory of probability can in any way be linked to rational decision making: to give a way of measuring probability, albeit only in theory, and to decide how these probabilities should ideally be handled — as the mathematician Frank Ramsey put it, to construct a "logic of partial beliefs".

Thomas Bayes (1701-1761).

The game-theoretic approach to probabilities was developed during the twentieth century, mostly by Ramsey, de Finetti and Leonard Savage. It is based on ideas whose seeds were sown in the eighteenth century by the Presbyterian minister Thomas Bayes and the famous mathematician Pierre-Simon Laplace, who rescued Bayes' ideas from oblivion. Bayes had found a way of calculating with conditional probabilities: rather than thinking about the probability that heads comes up on a coin, you might think about the probability that heads comes up, given that it came up 75 times in 100 previous trials. It's the kind of notion you need if you are trying to make judgments about the world based on the evidence that's in front of you, acknowledging that the judgment is subjective. Today the term Bayesian is attached to the interpretation of probability as a measure of the plausibility of some event, which can vary in response to evidence.
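As an illustration of updating plausibilities in the light of evidence, here is a minimal Bayesian sketch (my own example, not from the article; the flat prior and the grid of candidate biases are arbitrary illustrative choices). It asks how plausible each possible bias of a coin is after seeing 75 heads in 100 flips:

```python
from math import comb

def posterior(prior, heads, flips, grid):
    """Update beliefs about a coin's bias after observing `heads` in `flips` tosses."""
    likelihood = [comb(flips, heads) * p**heads * (1 - p)**(flips - heads) for p in grid]
    unnormalised = [pr * li for pr, li in zip(prior, likelihood)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

grid = [i / 100 for i in range(101)]     # candidate values for the bias, 0.00 to 1.00
prior = [1 / len(grid)] * len(grid)      # flat prior: every bias equally plausible to start with

post = posterior(prior, heads=75, flips=100, grid=grid)

# The plausibility-weighted estimate of the bias after seeing the evidence:
print(sum(p * w for p, w in zip(grid, post)))   # roughly 0.75
```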

Most philosophers are happy with subjective probabilities and the justification the game-theoretic arguments give to probability theory. But is it really true that there is no objective chance in the world; no probabilities that are independent of our beliefs? Find out in the next article.


About this article

This article was inspired by an interview with Simon Saunders, a philosopher of physics at the University of Oxford. Marianne Freiberger, Editor of Plus, met Saunders at a conference in Oxford in June 2013. The conference was part of an ongoing project called Establishing the Philosophy of Cosmology. You can read more about the issues discussed by philosophers and physicists as part of the project here.

Comments


Most people believe that 1, 2, 3, 4, 5, 6 has no chance of coming up, as opposed to a random-looking set of numbers. I guess nobody plays 1, 2, 3, 4, 5, 6. But the probability is the same.