Part 3: Optimal codes
Introduction
In part 1 of this three-part article, we considered what a code is, and why the prefix property of codes is important.
In part 2, we introduced the field of Information Theory and the important quantity known as entropy.
In this part, we'll be examining the notion of an optimal code, and showing how entropy can help us to specify and develop such a code.
Optimal Codes
The concept of entropy is closely related to the idea of an optimal code.
What is a code? Put simply, a code is a regularised way of representing data. The information on a computer hard disk is stored in a code based on sequences of 1s and 0s, which are then decoded into program instructions, characters in text files, and so on.

Different kinds of codes suit different needs. Sometimes we want a very efficient code that stores data as concisely as possible (for example, when storing large image files). Sometimes, however, we want a less efficient code with special properties that allow us to detect and correct errors (for example, when a distant space probe is sending back a faint and noisy signal; see Coding theory: the first 50 years in Issue 3 for more about error-correcting codes).

To keep things simple, let's look at a binary code: a code constructed from an alphabet of only two symbols, 0 and 1, like that used to store data on a disk. Initially, we'll consider an intuitive explanation of optimality.
A binary code is said to be optimal if the number of bits of information conveyed by any message in that code is equal to the length of the message in binary digits (which, confusingly, are often also described as bits).
For example, consider constructing a code as a collection of code words, or strings of binary digits, each of which is used to represent the occurrence of a particular event.
In an optimal code, more likely events will have shorter code words (e.g. 01), while less likely events will have proportionately longer code words (e.g. 1001010). The relative lengths of the code words will be in the same proportion as the amounts of information conveyed by the corresponding events. Why is this? Because the number of binary digits and the number of bits of information contained in a particular code word are the same if the code is optimal, and messages describing likely events contain less information (and should therefore have shorter code words) than messages describing unlikely events.
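This relationship can be checked with a short Python sketch. The probabilities below are chosen purely for illustration; the calculation uses the measure of information from part 2, under which an event of probability p conveys -log2(p) bits, and in an optimal code that is also its code word length.

```python
from math import log2

def ideal_length(p):
    """Ideal code word length, in binary digits, for an event of
    probability p: its information content, -log2(p) bits."""
    return -log2(p)

for p in (1/2, 1/4, 1/8, 1/16):
    print(f"p = {p:<7} ideal length = {ideal_length(p):.0f} binary digits")
```

Note how halving the probability each time adds exactly one binary digit to the ideal code word length.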
Table 2a: A naive fixed-length code

  Event  Probability  Code word
  e1     1/2          000
  e2     1/4          001
  e3     1/8          010
  e4     1/16         011
  e5     1/16         100

Table 2b: A code with word lengths matched to event probabilities

  Event  Probability  Code word
  e1     1/2          0
  e2     1/4          10
  e3     1/8          110
  e4     1/16         1110
  e5     1/16         1111
Consider the two coding schemes in tables 2a and 2b above. In the first scheme, we have taken a "naive" approach: we've simply assigned sequential 3-digit code words to the five events. While these events have different probabilities, they are represented by code words that all have the same length: the code cannot therefore be optimal, since the number of bits of information conveyed by each code word varies (because of the differing probabilities), yet the number of binary digits in each code word is the same.

In the second scheme, we've assigned code words of different lengths to the different events. The relative lengths of the code words are in proportion to the information conveyed by each code word: for example, the most likely event is eight times as likely as the least likely one, so its code word conveys only one-quarter as much information, and has only one-quarter as many binary digits (one, as opposed to four).
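The benefit of the second scheme can be made concrete with a short calculation. The probabilities and code words below are illustrative assumptions: five events with probabilities 1/2, 1/4, 1/8, 1/16 and 1/16 (so that the most likely event is eight times as likely as the least likely), coded first with fixed 3-digit words and then with word lengths matched to -log2(p).

```python
from math import log2

probs = [1/2, 1/4, 1/8, 1/16, 1/16]          # assumed event probabilities
fixed = ["000", "001", "010", "011", "100"]  # naive fixed-length scheme
tuned = ["0", "10", "110", "1110", "1111"]   # lengths matched to -log2(p)

def expected_length(probs, words):
    """Average number of binary digits per one-word message."""
    return sum(p * len(w) for p, w in zip(probs, words))

print(expected_length(probs, fixed))  # 3.0
print(expected_length(probs, tuned))  # 1.875
```

On average, the length-tuned scheme needs well under two binary digits per event, against three for the naive scheme.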
Noting the correspondence between the number of binary digits in a code word and the number of bits of information it conveys gives an intuitive way of defining an optimal code. However, a more precise definition of an optimal code is in terms of the expected code length.
Consider a set of mutually exclusive events $e_1, e_2, \ldots, e_n$, with each event $e_i$ having probability $p_i$, and some code $C$ which supplies a code word $c_i$ for each event $e_i$.
Each code word $c_i$ has a length $l_i$ (measured in binary digits), and the expected or average length of a one-word message in code $C$ can be expressed as:
\[
  E[L] = \sum_{i=1}^{n} p_i l_i .
\]
An optimal code is one which minimises $E[L]$. In the case of a binary code, it can be shown that $E[L]$ is minimised if $l_i = -\log_2 p_i$.
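A sketch of the standard argument runs as follows (assuming the code word lengths satisfy $\sum_i 2^{-l_i} = 1$, and ignoring the constraint that real code word lengths must be whole numbers):

```latex
% Write $q_i = 2^{-l_i}$; by assumption $\sum_i q_i = 1$, so the $q_i$
% form a probability distribution and $l_i = -\log_2 q_i$.
%
% Gibbs' inequality states that for any two distributions $p$ and $q$,
\[
  -\sum_i p_i \log_2 q_i \;\ge\; -\sum_i p_i \log_2 p_i ,
\]
% with equality if and only if $q_i = p_i$ for all $i$.  Hence
\[
  E[L] \;=\; \sum_i p_i l_i \;=\; -\sum_i p_i \log_2 q_i
       \;\ge\; -\sum_i p_i \log_2 p_i ,
\]
% and the minimum is attained exactly when $2^{-l_i} = p_i$,
% that is, when $l_i = -\log_2 p_i$.
```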
This confirms the previous assertion that optimal codes are codes for which the amount of information conveyed by a message in that code is equal to the length of the message.
Substituting the minimising expression $l_i = -\log_2 p_i$ into the expression for $E[L]$ gives:
\[
  E[L] = -\sum_{i=1}^{n} p_i \log_2 p_i ,
\]
which corresponds to the entropy, $H$.
What this implies is that the lower bound on expected message length corresponds with the expected information content of the message: messages cannot, on average, be any shorter than the amount of information they convey. In an optimal code, the average message length and the average information content per message will be the same; in a non-optimal code, the average message length will be greater.
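To make the lower bound concrete, suppose (purely for illustration) five events with probabilities 1/2, 1/4, 1/8, 1/16 and 1/16. A few lines of Python show that no valid assignment of code word lengths beats the entropy:

```python
from math import log2

probs = [1/2, 1/4, 1/8, 1/16, 1/16]  # assumed event probabilities

# Entropy H: the expected information content per event, in bits.
H = -sum(p * log2(p) for p in probs)
print(H)  # 1.875

# Expected message length for several candidate sets of code word lengths
# (each set satisfies the Kraft inequality: sum of 2**-l <= 1):
for lengths in ([3, 3, 3, 3, 3],    # naive fixed-length code
                [2, 2, 2, 3, 3],    # shorter words, but mismatched to probs
                [1, 2, 3, 4, 4]):   # lengths equal to -log2(p)
    avg = sum(p * l for p, l in zip(probs, lengths))
    print(lengths, avg)  # every average is >= H; only the last equals it
```

The mismatched code averages 2.125 binary digits per event and the fixed-length code 3, both above the entropy of 1.875; only the code whose lengths equal $-\log_2 p_i$ attains it.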
Information and Messages
Let's now look at the message length that's required to describe a particular event in an optimal code.
If a receiver has a good theory or model that leads them to expect that event, then an optimallycoded message describing the event will be shorter (since it contains less information) than for a receiver who has a bad model that leads them not to expect that event.
For example, consider two receivers living in Melbourne, both of whom have the prior knowledge that it is currently summer. The first has experienced many Melbourne summers, and so has a good model (or set of prior expectations) of what kind of weather is likely in summer. The second has lived all her life in a windowless building air-conditioned to a uniform 21°C and constant humidity (apart from occasional air-conditioning breakdowns), and has no clear idea of what "summer" implies.
We wish to convey to both receivers, neither of whom has been outside today or has any explicit knowledge of today's weather conditions, the following data:
 (a) The temperature is higher than 21°C,
 (b) it is sunny,
 (c) the humidity is low, and
 (d) the level of UV radiation is high.
Now, let's say an optimally-encoded message describing these data can be constructed for each receiver. Note that the optimal code in this context is subjective: since the perceived likelihood of particular events or circumstances varies with the receiver's model, the optimal code for those events will also vary according to that model.
The message to the first receiver (who, with her good model of Melbourne summers, will think all of the above data items fairly likely) will be comparatively short. The message to the second receiver (who may well, from her previous experience of summer indoors, think all of the above fairly unlikely) will be longer. The first message contains less information, since the receiver already strongly suspected its contents to be true, and is therefore shorter when encoded in an optimal code.
For a more detailed explanation, consider the table of weather priors, which gives the prior probability that each receiver assigns to each data item. For example, the first receiver believes that, on average, 8 times out of 10 the humidity will be low (and, of course, the converse: that 2 times out of 10 the humidity will not be low). As already mentioned, the first receiver's prior probabilities represent a more accurate set of beliefs about typical Melbourne summer conditions than do the second receiver's.
Now, as already discussed, the length of the code word required to transmit a given data item to a receiver, in an optimal binary code, is $-\log_2 p$, where $p$ is the prior probability assigned by the receiver to that data item.
Therefore, for each of the four data items (a)–(d), the length of the optimal binary code word for that event can be calculated for each receiver; the results are given in the table of code word lengths. As expected, the code words are shorter for the first receiver, who has more realistic prior beliefs. The length of the total message, which conveys all four data items, is the sum of the code lengths for the individual items. For the first receiver, this length is 1.2 binary digits, and for the second receiver it is 7.7 binary digits. In other words, since the code is optimal, the first receiver gains 1.2 bits of information and the second receiver gains 7.7 bits of information.
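The calculation behind these totals can be sketched in a few lines of Python. The prior probabilities below are assumptions chosen to be consistent with the totals quoted above; only the 0.8 figure for low humidity is stated in the text.

```python
from math import log2

# Assumed prior probabilities for data items (a)-(d); only receiver 1's
# 0.8 for "the humidity is low" (item c) comes from the text -- the rest
# are illustrative values consistent with the quoted message lengths.
priors = {
    "receiver 1": [0.80, 0.85, 0.80, 0.80],  # good model of Melbourne summers
    "receiver 2": [0.25, 0.30, 0.25, 0.25],  # no experience of summer weather
}

for name, ps in priors.items():
    # In an optimal code each item needs -log2(p) binary digits, so the
    # total message length is the sum over the four items.
    total = sum(-log2(p) for p in ps)
    print(f"{name}: {total:.1f} binary digits")
# -> receiver 1: 1.2 binary digits
# -> receiver 2: 7.7 binary digits
```

The better the receiver's model, the shorter the optimally-coded message she needs.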