Chance is a slippery concept. We all know that some random events are more likely to occur than others, but how do you quantify such differences? How do you work out the probability of, say, rolling a 6 on a die? And what does it mean to say the probability is 1/6?

The probability of getting 6 right in the national lottery is around 1 in 14 million.

Mathematicians avoid these tricky
questions by defining the probability of an event mathematically
without going into its deeper meaning. At the heart of this definition
are three conditions,
called the *axioms* of probability theory.

**Axiom 1:** The probability of an event is a real number greater than or equal to 0.

**Axiom 2:** The probability that at least one of all the possible outcomes of a process (such as rolling a die) will occur is 1.

**Axiom 3:** If two events *A* and *B* are mutually exclusive, then the probability of either *A* or *B* occurring is the probability of *A* occurring plus the probability of *B* occurring.
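The three axioms can be checked directly for a concrete probability assignment. Here is a minimal sketch in Python for a fair six-sided die; the `prob` helper and the particular events chosen are illustrative, not part of the axioms themselves.

```python
from fractions import Fraction

# A probability assignment for a fair six-sided die: each elementary
# event (rolling a given face) gets probability 1/6.
p = {face: Fraction(1, 6) for face in range(1, 7)}

def prob(event):
    """Probability of an event, modelled as a set of faces: by Axiom 3,
    the probability of a union of elementary events is the sum."""
    return sum(p[face] for face in event)

# Axiom 1: every probability is a real number >= 0.
assert all(p[face] >= 0 for face in p)

# Axiom 2: the probability that at least one outcome occurs is 1.
assert prob(set(p)) == 1

# Axiom 3: for mutually exclusive events A and B,
# P(A or B) = P(A) + P(B).
A, B = {1, 2}, {5, 6}   # disjoint events
assert prob(A | B) == prob(A) + prob(B)
```

Using exact fractions rather than floating-point numbers means the equalities hold exactly, with no rounding error.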

As we have presented them here, these axioms are a simplified
version of those laid down by the mathematician Andrey
Kolmogorov in 1933. Up until then, the basic
concepts of probability theory had been "considered to be
quite peculiar" (Kolmogorov's words) so his aim was to put them in their "natural place,
among the general notions of modern mathematics". To this end
Kolmogorov also gave a precise mathematical definition (in terms of
sets) of what is
meant by a "random event". We have left this bit out when stating the
axioms above, but you can read about it in Kolmogorov's original text
*Foundations
of the theory of probability*.

With his axioms Kolmogorov put
probability into the wider context of *measure theory*. When
you are measuring something (such as length, area or volume) you are
assigning a number to some sort of mathematical object (a line
segment, a 2D shape, or a 3D shape). In a similar way, probability is
also a way of assigning a number to a mathematical object (collections
of events). Kolmogorov's formulation meant that the mathematical
theory of measures could encompass the theory of probability as a
special case.

If you are familiar with probability you might feel that two
central ideas of the theory are missing from the above axioms. One is
the idea that the probabilities of all the possible mutually exclusive
outcomes of a process sum to 1, and the other is the notion of
*independent events*.

The first is simply a consequence of the axioms. Suppose that some
process (rolling a die) can result in a number of mutually exclusive *elementary
events* (rolling a
1, 2, 3, 4, 5, or 6). Then by axiom 2, the probability that at least
one of these events occurs is 1. Axiom 3, applied repeatedly, implies that the probability that at least
one of them occurs is the sum of the individual probabilities of the
elementary events. In
other words, the sum of the individual probabilities of the
elementary events is 1.
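This consequence holds for any valid assignment of probabilities to the elementary events, not just the uniform one. As a quick check, here is a hypothetical biased die (the numbers are illustrative, not from the article) whose elementary probabilities still must sum to 1:

```python
from fractions import Fraction

# A hypothetical biased die: face 1 comes up half the time, the rest
# share the remaining probability equally. Any assignment satisfying
# the axioms must have its elementary probabilities sum to 1.
biased = {1: Fraction(1, 2), 2: Fraction(1, 10), 3: Fraction(1, 10),
          4: Fraction(1, 10), 5: Fraction(1, 10), 6: Fraction(1, 10)}

assert all(q >= 0 for q in biased.values())   # Axiom 1
assert sum(biased.values()) == 1              # consequence of Axioms 2 and 3
```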

Moving on to the second missing feature, notice that the notion of independence doesn't apply at the level of the events that can result from a single process (such as rolling a die), but at the level of processes: we need to say what we mean by two processes (such as rolling a die twice) being independent. Having carefully defined what we mean by "two processes" mathematically, Kolmogorov gives the familiar definition of independence. It amounts to saying that two events are independent if the probability of both of them occurring is equal to the product of their individual probabilities.
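The product rule for independence can be verified by enumerating the 36 equally likely outcomes of two die rolls. A minimal sketch, assuming the two rolls form one combined process with a uniform distribution (the events `A` and `B` below are chosen for illustration):

```python
from fractions import Fraction
from itertools import product

# The 36 equally likely outcomes (first roll, second roll) of rolling
# a fair die twice.
outcomes = list(product(range(1, 7), repeat=2))
p_each = Fraction(1, len(outcomes))

def prob(event):
    """Probability of an event, given as a predicate on outcomes."""
    return sum(p_each for o in outcomes if event(o))

# Event A: the first roll is a 6; event B: the second roll is a 6.
A = lambda o: o[0] == 6
B = lambda o: o[1] == 6

# Independence: the probability of both occurring equals the product
# of the individual probabilities, here 1/6 * 1/6 = 1/36.
assert prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)
```

By contrast, for events on the *same* roll (say "the roll is even" and "the roll is a 6") the product rule fails, which is exactly why independence is a property of the processes, not of outcomes within one process.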

## Comments

## Expansion on subject matter

Yeah those were the axioms, but what about total probability??

## Second axiom covers that.

Second axiom covers that. Probability that any outcome will result is 1.

I.e. something is bound to happen in an experiment.