It's a tricky question: whether or not you find something informative depends on your personal point of view, and there are many different ways of expressing the same thing. Objectivity seems impossible. Yet, as communication technology became ever more important over the last century or so, objective measures became necessary, and people's attempts to find them have led to some very interesting ideas. The articles below present some highlights of information theory. You can read them in sequence, but they also work stand-alone.
Information: Baby steps — If I tell you that it's Monday today, then you know it's not any of the other six days of the week. Perhaps the information content of my statement should be measured in terms of the number of possibilities it excludes? Back in the 1920s, this consideration led to a very simple formula to measure information.
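To make this concrete: the simple formula in question is presumably Hartley's 1928 measure, which says that singling out one of N equally likely possibilities conveys log2(N) units of information. A minimal sketch in Python, with the function name my own:

```python
import math

def hartley_information(num_possibilities: int) -> float:
    """Hartley's measure: the information, in bits, conveyed by
    singling out one of `num_possibilities` equally likely options."""
    return math.log2(num_possibilities)

# Learning "it's Monday" excludes the other six days of the week:
print(hartley_information(7))  # about 2.81 bits
```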
Information is surprise — If I tell you something you already know, then that's not very informative. So perhaps information should be measured in terms of unexpectedness, or surprise? In the 1940s Claude Shannon put this idea to use in one of the greatest scientific works of the century.
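Shannon's measure, now called entropy, averages the surprise of each outcome: an outcome with probability p contributes log2(1/p) bits, weighted by p. A small sketch of the standard textbook formula (not code from the article):

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)).
    Rare outcomes are more surprising, so they carry more information."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit
print(entropy([0.99, 0.01]))  # near-certain coin: about 0.08 bits
```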
Information is bits — Computers represent information using bits, written as 0s and 1s. It turns out that Claude Shannon's entropy, a measure of information invented long before computers became mainstream, also measures the minimal number of bits you need to encode a piece of information.
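One way to see entropy as a bit count is to compare it with the average length of an actual binary code. The sketch below uses the standard Huffman construction (my choice of illustration, not something from the article): for the distribution shown, the average code length lands exactly on the entropy, and no code can do better.

```python
import heapq, math

def huffman_lengths(probs):
    """Code lengths of an optimal prefix (Huffman) code for `probs`."""
    # Heap entries: (probability, unique tie-breaker, {symbol: code length}).
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tie = len(probs)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)  # merge the two least likely groups;
        p2, _, b = heapq.heappop(heap)  # every symbol in them gets one bit longer
        heapq.heappush(heap, (p1 + p2, tie, {s: l + 1 for s, l in {**a, **b}.items()}))
        tie += 1
    return heap[0][2]

probs = [0.5, 0.25, 0.125, 0.125]
lengths = huffman_lengths(probs)
average = sum(probs[s] * l for s, l in lengths.items())
H = -sum(p * math.log2(p) for p in probs)
print(average, H)  # 1.75 1.75 -- the code meets Shannon's lower bound exactly
```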
Information is noisy — When you transmit information long-distance there is always a chance that some of it gets mangled and arrives at the other end corrupted. Luckily, there are clever ways of encoding information which ensure a tiny error rate, even when your communication channel is prone to errors.
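The simplest of these encodings is a repetition code: send every bit several times and let the receiver take a majority vote. It is much cruder than the codes the article has in mind, but it shows the principle; a toy simulation:

```python
import random

def transmit(bits, flip_prob=0.1, repeats=5):
    """Send each bit `repeats` times over a channel that flips each
    transmitted bit with probability `flip_prob`; decode by majority vote."""
    decoded = []
    for bit in bits:
        copies = [bit ^ (random.random() < flip_prob) for _ in range(repeats)]
        decoded.append(int(sum(copies) > repeats // 2))
    return decoded

message = [random.randint(0, 1) for _ in range(10_000)]
received = transmit(message)
error_rate = sum(m != r for m, r in zip(message, received)) / len(message)
print(error_rate)  # typically under 1%, despite the raw 10% flip rate
```

The price here is a fivefold increase in transmission; the clever codes alluded to above achieve low error rates far more economically.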
Information is complexity — There are many ways of saying the same thing; you can use many words, or few. Perhaps information should be measured in terms of the shortest way of expressing it? In the 1960s this idea led to a measure of information called Kolmogorov complexity.
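Kolmogorov complexity itself is uncomputable, but the size of a compressed file gives a rough upper bound, which is enough to illustrate the idea (zlib here is just a convenient stand-in for "shortest description"):

```python
import random, string, zlib

def compressed_size(text: str) -> int:
    """Length in bytes of the zlib-compressed text: a crude upper
    bound on its Kolmogorov complexity."""
    return len(zlib.compress(text.encode(), 9))

structured = "ab" * 5000  # 10,000 characters, but a very short description
scrambled = "".join(random.choice(string.printable) for _ in range(10_000))

print(compressed_size(structured))  # tens of bytes: highly compressible
print(compressed_size(scrambled))   # thousands of bytes: barely compressible
```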
Information is sophistication — Kolmogorov complexity gives a high value to strings of symbols that are essentially random. But isn't randomness meaningless? Shouldn't a measure of information assign a low value to it? The concept of sophistication addresses this question.
The following article first appeared on the FQXi website.
Quantifying Occam — An idea called Occam's razor states that the simplest answer is always the best. But is this really true? Computer scientist Noson Yanofsky is trying to find out, applying Kolmogorov complexity to a branch of mathematics known as category theory.
I carry a plank of wood along to the carpenter and ask them to cut me another to the same length. Next time I'm a bit more savvy: I mark off a length of string against the plank, roll it up and put it in my pocket, and take that along to the carpenter instead. Third time I'm even cuter: I measure the length in numbers on a tape measure, and just go along with nothing but tiny puffs of air on my lips or marks on a bit of paper.
Each step progressively liberates information from its cumbersome physical representation or embodiment for the purposes of transmission, storage and eventual reconstitution. Living things do the same with DNA.
So surely one kind of measurement we're concerned with must be a ratio demonstrating the degree to which this liberation is accomplished, such as gigabytes per chip or per second in electronics, or quantity expressed per symbol in arithmetic. For example, that ratio is 37 for the number 111, since we use just three symbols to express the quantity one hundred and eleven, which in primitive times would have had to be transmitted as a hundred and eleven symbols, one tally mark at a time.
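In code, that ratio for ordinary decimal notation is just the quantity divided by the number of digits used to write it down (a toy illustration; the function name is mine):

```python
def liberation_ratio(n: int) -> float:
    """The quantity n divided by the number of decimal digits needed
    to write it, versus the n tally marks a primitive count would need."""
    return n / len(str(n))

print(liberation_ratio(111))        # 37.0, as in the example above
print(liberation_ratio(1_000_000))  # about 142857: the ratio grows with n
```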
This approach also means that information can be instructions as well as descriptions; those two words simply represent different directions of travel: to and from physical embodiment.