## complexity

How fast can you tell whether two networks are the same?
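The question above is the graph isomorphism problem: are two networks the same graph, just with the vertices relabelled? A minimal brute-force sketch (the function name and graph encoding are illustrative, not from the original article) tries every relabelling, which is only feasible for tiny graphs — and that explosion is exactly why the problem is interesting:

```python
from itertools import permutations

def are_isomorphic(edges_a, edges_b, n):
    """Brute-force check whether two undirected n-vertex graphs,
    given as edge lists, are isomorphic. Tries all n! relabellings,
    so this is only practical for very small n."""
    set_a = {frozenset(e) for e in edges_a}
    set_b = {frozenset(e) for e in edges_b}
    if len(set_a) != len(set_b):
        return False
    for perm in permutations(range(n)):
        mapped = {frozenset((perm[u], perm[v])) for u, v in set_a}
        if mapped == set_b:
            return True
    return False

# A 4-cycle with scrambled labels is still a 4-cycle...
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
relabelled = [(0, 2), (2, 1), (1, 3), (3, 0)]
print(are_isomorphic(square, relabelled, 4))  # True

# ...but a path on 4 vertices is not.
path = [(0, 1), (1, 2), (2, 3)]
print(are_isomorphic(square, path, 4))        # False
```

No algorithm is known that decides this in polynomial time for all graphs, yet the problem is also not known to be NP-complete — which is what makes news of progress on it notable.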

Are there problems computers will never be able to solve, no matter how powerful they become?

A famous question involving networks appears to have come closer to an answer.

Kolmogorov complexity gives a high value to strings of symbols that are essentially random. But isn't randomness essentially meaningless? Shouldn't a measure of information assign a low value to it? The concept of sophistication addresses this question.

There are many ways of saying the same thing — you can use many words, or few. Perhaps information should be measured in terms of the shortest way of expressing it? In the 1960s this idea led to a measure of information called *Kolmogorov complexity*.
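Kolmogorov complexity itself is uncomputable, but the compressed length of a string gives a rough, computable upper bound on it. A minimal Python sketch (using zlib as a stand-in compressor; the variable names are illustrative):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude,
    computable proxy for the length of the shortest description."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

# A highly patterned string has a very short description ("'ab' 500 times")...
regular = "ab" * 500

# ...while a random string of the same length over the same alphabet does not.
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))

# The patterned string compresses to far fewer bytes than the random one.
print(compressed_size(regular), compressed_size(noisy))
```

The gap between the two sizes is the intuition behind Kolmogorov complexity: the patterned string can be regenerated from a tiny program, the random one essentially cannot.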

Can you measure information? It's a tricky question — but people have tried and come up with very interesting ideas.

On the face of it, the Universe is a fairly complex place. But could mathematics ultimately lead to a simple description of it? In fact, should simplicity be a defining feature of a "theory of everything"? We ponder the answers.

In this second part of the series, we look at a mathematical notion of complexity and wonder whether the Universe is just too complex for our tiny little minds to understand.

The *Travelling Salesman* movie is coming to the UK! Get your tickets here and find out about the P vs NP problem.

A traditional view of science holds that every system — including ourselves — is no more than the sum of its parts. To understand it, all you have to do is take it apart and see what's happening to the smallest constituents. But the mathematician and cosmologist George Ellis disagrees. He believes that complexity can arise from simple components and physical effects can have non-physical causes, opening a door for our free will to make a difference in a physical world.