Your brain is an extremely expensive piece of equipment. "The human brain is about 2% of body mass, but it consumes about 20% of the body's energy budget," says Ed Bullmore, Professor of Psychiatry at the Brain Mapping Unit in the University of Cambridge. "It's expensive to build and expensive to keep running. Every time you send a signal down a neuron, it costs quite a lot of metabolic money to reset the neuron after the signal has passed."

Viewed as a network of interconnected regions, the brain faces a difficult trade-off. On the one hand, it needs to be complex to ensure high performance. On the other hand, it needs to minimise what you might call wiring cost — the sum of the lengths of all the connections — because communication over distance is metabolically expensive.

In computer chips and in brains, wiring is expensive.

It's a problem well-known to computer scientists. "In computer circuits wires are really, really expensive," says Daniel Greenfield, a computer scientist who has just finished his PhD at the University of Cambridge. "You have this complex circuit description and you need to put it down on a two-dimensional computer chip. You can put it down in many different ways, but if you choose a bad placement, you're going to have lots of long wires. It's going to be very expensive to make and also be really slow and consume a lot of energy. If you released this into the marketplace, people wouldn't buy it."

So are there similarities in how the brain and computer chip designers have coped with the problem of high complexity versus minimal wiring cost? In a recent study, Bullmore and Greenfield, together with Danielle Bassett of the University of California, Santa Barbara and colleagues from the UK, Germany and the US, have shown that indeed there are. Comparing the human brain, the nervous system of the nematode worm C. elegans and high performance computer circuits, the researchers found that all three networks are organised in ways that provide good solutions to the trade-off problem. It seems that market-driven human invention and natural selection, faced with similar challenges, have come up with similar solutions.

Small worlds

Structural similarities between brains and computer circuits, both viewed as networks, have been found before. One property that is shared by many networks — from brains and ecosystems to computer circuits and social networks — is known as small worldness.

"A small world network has a high degree of clustering, that is high connectivity between nearest neighbours, but it also has a small average path length between pairs of nodes," says Bullmore. "In a social network, this means that if you and I are friends, then it's very likely that another friend of mine is also a friend of yours — that's clustering." Short path length means that a relatively small number of nodes separate any two given nodes. So even though you might not personally know the Queen, you're probably connected to her via a surprisingly short chain of acquaintances.

Small world networks have advantages in terms of information transfer. "In a small world network it's easy for information to get from one node in the network to any other node — short path length means higher global efficiency," says Bullmore. Clustering gives rise to higher local efficiency, as there are many ways for information to get around clusters — just think of how quickly gossip spreads.
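
To see these two measures in action, here is a minimal sketch in Python. It assumes the networkx library is available and compares the clustering and average path length of a small world network (a Watts-Strogatz graph) with those of a random network of the same size; the sizes and parameters are illustrative only.

```python
# Minimal sketch: compare clustering and path length of a small world network
# with a random network of the same size. Assumes networkx is installed;
# all parameters are illustrative.
import networkx as nx

n, k, p = 1000, 10, 0.1                        # nodes, neighbours per node, rewiring probability
small_world = nx.watts_strogatz_graph(n, k, p)
random_graph = nx.gnm_random_graph(n, small_world.number_of_edges())

for name, g in [("small world", small_world), ("random", random_graph)]:
    if not nx.is_connected(g):                 # path lengths only make sense on a connected piece
        g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
    clustering = nx.average_clustering(g)              # "my friends are also friends with each other"
    path_length = nx.average_shortest_path_length(g)   # typical number of steps between two nodes
    print(f"{name:12s} clustering = {clustering:.3f}, mean path length = {path_length:.2f}")
```

Typically the small world graph keeps a clustering coefficient many times higher than the random graph's, while its average path length stays almost as short.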

Russian dolls

Small world networks have captured people's imagination because everyone likes the idea of being connected to the Queen via just a few acquaintances. But there's another, subtly different feature that has been found in many real-life networks. It's called modularity. "Modularity means that you can break down the overall architecture of the network into a number of modules containing nodes that are densely connected to each other, but sparsely connected to nodes in other modules," says Bullmore. "Modular systems are often small world, but small world networks are not necessarily modular."

Hierarchical modularity: the network shown (large square) contains sub-modules (smaller square) and sub-sub-modules (smallest square). Image from [1]

Like small worldness, modularity has a particular advantage. "It's been suggested that modularity is advantageous to any information processing system that has to adapt, evolve, or change," says Bullmore. "This is because modularity allows a complex system to change one module at a time without threatening function in other modules."

Bullmore and his colleagues found that the human brain, the nervous system of the nematode worm and some high performance computer circuits all exhibit a very special kind of hierarchical modularity: if you look within a module, you'll find submodules, and possibly even submodules within submodules, and so on. It's a kind of Russian doll phenomenon.
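
As a rough illustration of this Russian doll idea (and only an illustration — it is not the decomposition method used in the study), one can run a community detection step and then run it again inside each module it finds. The sketch below assumes the networkx library is available; the test graph and all thresholds are made up for the example.

```python
# Rough sketch of hierarchical ("Russian doll") modularity: detect modules,
# then look for sub-modules inside each one. Illustrative only; assumes networkx.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def hierarchical_modules(graph, depth=0, max_depth=3, min_size=8):
    """Recursively split a graph into modules, sub-modules, sub-sub-modules..."""
    if depth >= max_depth or graph.number_of_nodes() < min_size:
        return sorted(graph.nodes())
    modules = greedy_modularity_communities(graph)
    if len(modules) <= 1:                      # no further modular structure found
        return sorted(graph.nodes())
    return [hierarchical_modules(graph.subgraph(m), depth + 1, max_depth, min_size)
            for m in modules]

# Toy test case: four dense blocks of 16 nodes, grouped into two loosely coupled halves.
sizes = [16] * 4
probs = [[0.90, 0.20, 0.02, 0.02],
         [0.20, 0.90, 0.02, 0.02],
         [0.02, 0.02, 0.90, 0.20],
         [0.02, 0.02, 0.20, 0.90]]
g = nx.stochastic_block_model(sizes, probs, seed=1)
print(hierarchical_modules(g))                 # nested lists mirror the nested modules
```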

The researchers looked at the connections between neurons in the worm's nervous system and at connections between structural regions of the human brain (it's not yet possible to resolve the brain to neuronal level). For the computer circuits and the nematode worm they found modularity over four hierarchical levels and for the human brain they found up to three hierarchical levels, depending on the imaging technology used to look into the brain.

Rent's rule

In the 1960s the IBM employee E.F. Rent discovered a surprising mathematical relationship in computer circuits, which is connected to their hierarchical nature. Suppose that by cutting through a few connections you’ve partitioned your network into chunks, each consisting of around $N$ nodes. Then the number $C$ of connections that link each chunk to the rest of the network (that is the number of connections you’ve had to snip through to cut that chunk loose) is roughly equal to

  \[ C=kN^p. \]

The numbers $k$ and $p$ don’t depend on your individual chunk, or on the number $N$, but are characteristic of the network as a whole. The number $k$ is the average number of connections per node in your system as a whole. The number $p$, called the Rent exponent, is between 0 and 1. For computer circuits $p$ is typically somewhere between 0.45 and 0.75.

Take a chunk of the network (the nodes A to G) and compare the number of nodes in it (7) to the number of connections from the chunk to the rest of the network (3). Image from [1]

"In computer circuits you see this Rentian [relationship] at all different scales," says Greenfield. "You can go down to smaller and smaller [chunks of network] and you still see the same exponent." The relationship, which has become known as Rent's rule, indicates self-similarity in the network: no matter how closely you zoom in, you see the same proportions.

What is surprising about Rent's rule is that no-one had explicitly set out to design computer circuits to follow it. "The industry was making those chips, trying to make them better and faster and cheaper," says Bullmore. "Without knowing what it was doing, the industry converged on this very simple relationship."

Why Rent's rule?

Rent’s rule seems to have emerged because engineers need to keep circuits simple enough to be buildable and economical. In a totally random network, one that doesn’t exhibit any structure in the way the nodes are connected up, you might expect the number $C$ (the number of connections from a chunk of the network to the rest of the network) to grow in direct proportion to $N$ (the number of nodes in the chunk): the more nodes there are in a chunk, the more connections you need. So you might expect a relationship of the form

  \[ C=kN. \]    

In other words, a random network has a Rent exponent $p=1$.

However, no-one in their right mind would build a network in such a random fashion. "If you have a system that's randomly connected, you soon find that the amount of aggregate wiring you need grows so rapidly that you can't build it," says Greenfield. Computer chip designers are much cleverer than someone operating at random. They build up circuits module by module, placing and connecting things cleverly to minimise wire length as much as possible. The reduced Rent exponent, which appears at all scales, seems to be a result of a hierarchical design process, which favours short-range over long-range communication.
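
A quick back-of-the-envelope experiment makes the point about random wiring. Place nodes on a two-dimensional grid and connect them twice over: once with purely local connections and once with the same number of connections drawn completely at random, then compare the total wire length. Everything below is illustrative; the node count and layout are arbitrary.

```python
# Sketch: why random wiring is expensive. Same nodes, same number of edges,
# once wired locally and once wired at random; compare total wire length.
import math
import random

side = 32                                      # a 32 x 32 grid of nodes
nodes = [(x, y) for x in range(side) for y in range(side)]

# Local wiring: each node connects to its right-hand and lower neighbour.
local_edges = [((x, y), (x + 1, y)) for x in range(side - 1) for y in range(side)]
local_edges += [((x, y), (x, y + 1)) for x in range(side) for y in range(side - 1)]

# Random wiring: the same number of edges, endpoints chosen uniformly at random.
random.seed(0)
random_edges = [tuple(random.sample(nodes, 2)) for _ in range(len(local_edges))]

def total_length(edges):
    return sum(math.dist(a, b) for a, b in edges)

print(f"local wiring : {total_length(local_edges):8.0f}")
print(f"random wiring: {total_length(random_edges):8.0f}")   # over ten times as much wire
```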

The Rent exponent is also a measure of the intrinsic complexity of a network. Loosely speaking, the more connections there are between different parts of the network, the higher $p.$ "Interestingly, different types of circuit - memory, or random logic, or arithmetic logic units - have different characteristic exponents," says Greenfield. "The number $p$ characterises the self-similar complexity of the connectivity of each type of circuit."

Rentian brains

If Rent's rule has evolved to keep complex networks buildable in physical space, then the obvious question is whether natural information processing networks also exhibit it. The study of Bullmore, Greenfield and their colleagues is one of the first to show that Rent's rule does indeed occur in biological systems. They found it in the human brain and the nervous system of the nematode worm, as well as in the high performance computer circuits they looked at.

The Rent exponents they found both for the worm and the human brain lie in the region of 0.75 — just as for high performance computer circuits.

The logarithm of the number of connections that link a group of nodes to the rest of the network (log(C)) plotted against the logarithm of the number of nodes in each group (log(N)) for the nematode worm (left) and the human brain imaged using MRI scans (right). The fact that in both cases the data points lie along a straight line indicates that both networks follow Rent's rule. (If $C=kN^p$, then $\log(C)=\log(k)+p\log(N)$, which is the equation of a straight line.) Image from [1]

Two embeddings of the same network.

Wiring cost

So brains exhibit a similar level of connection complexity as high performance computer circuits, but how do you know whether they actually minimise the amount of "wiring" needed? A given network can be put down in space in many different ways. Given a computer circuit description, you can put it down on a chip in a clever way that minimises the amount of wire you need, or you can be stupid about it and end up with an expensive tangle of wires.

Embedding a given network in space in a way that minimises wiring cost isn't an easy task. No-one knows of a method that works for any given network and can be implemented in a reasonable amount of time. In fact, if you can find such a generic algorithm, you're set to win a million dollars from the Clay Mathematics Institute (the problem is what's called NP-complete, see How maths can make you rich and famous). So given a particular embedding, how do you know how good it is in terms of wiring cost?

One method for finding out comes from two subtly different ways of looking at the Rent exponent. Firstly, you can think of a network as an abstract entity. To compute the Rent exponent, you partition the network into chunks without reference to where they might sit in space (in the image below such a chunk might consist of all the green nodes). The resulting topological Rent exponent is independent of how the network is embedded in space.

The physical Rent exponent is calculated by looking at a collection of boxes drawn in the physical space the network is embedded in. For the topological Rent exponent you define the chunks of network you're looking at purely in terms of the network (for example red nodes, green nodes, or blue nodes). Image from [1]

However, you can also think of Rent's rule in a physical sense. Given a physical embedding of a network, for example a brain in 3D space, you can divide the physical space up into boxes. You then compare the number of connections emanating from each box to the number of nodes sitting inside each box.
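
As a rough sketch of that box-counting idea (illustrative only, and not the procedure used in the study), the snippet below takes a toy network whose nodes sit at physical coordinates, covers the space with boxes of several sizes, counts for each box the nodes inside and the connections crossing its boundary, and fits the slope of $\log C$ against $\log N$. It assumes numpy and networkx are available.

```python
# Sketch of the physical Rent exponent: cover the embedding space with boxes
# of several sizes; for each box count the nodes inside (N) and the
# connections crossing its boundary (C); fit the slope of log C vs log N.
# Illustrative only; assumes numpy and networkx.
import numpy as np
import networkx as nx

side = 64
g = nx.grid_2d_graph(side, side)               # toy embedded network: node name = (x, y) position
pos = {node: node for node in g.nodes()}       # physical coordinates of each node

samples = []
for box in (4, 8, 16):                         # box edge lengths to try
    for x0 in range(0, side, box):
        for y0 in range(0, side, box):
            inside = {n for n, (x, y) in pos.items()
                      if x0 <= x < x0 + box and y0 <= y < y0 + box}
            crossing = sum(1 for u, v in g.edges() if (u in inside) != (v in inside))
            if crossing > 0:
                samples.append((len(inside), crossing))

data = np.array(samples)
p_phys, _ = np.polyfit(np.log(data[:, 0]), np.log(data[:, 1]), 1)
print(f"physical Rent exponent = {p_phys:.2f}")   # well below 1 for this tidy grid layout
```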

Just as the topological Rent exponent measures a network's inherent complexity, the physical Rent exponent measures the complexity of the physical embedding in space — the higher the physical exponent, the higher the wiring cost. The physical Rent exponent can be higher than the topological one, but not smaller. This reflects the fact that you can't make the network simpler by putting it down into space, but if you put it down in a silly way, you can sure make it expensive to wire.

Comparing the physical Rent exponent to the topological one provides one way of testing if your network has been efficiently embedded in terms of wiring cost. There's a theoretical result which says that (if your network is sufficiently large and complex) your embedding is close to optimal if your physical Rent exponent is equal to the topological one.

Computing the Rent exponents of a given network isn't entirely straightforward. For the topological exponent it involves chopping the network up into chunks, and there are many ways of doing this. The problem of partitioning a network in a way that gives you the true topological Rent exponent is also NP-complete. There are, however, heuristic methods that give you good approximations of the Rent exponents. Bullmore and Greenfield applied them in their study and found that the physical exponents for the worm, the human brain and the computer circuits were close, though not quite equal, to the topological ones. This indicates that in all cases the networks are cost-effectively, though not absolutely optimally, embedded in space — 2D space for the computer circuits and 3D space for the brain and worm.

Conflicting pressures

The result suggests that the nervous systems have evolved to keep wiring cost low. "I think that nature is parsimonious," says Bullmore. "Natural systems are brilliant at delivering high performance for no more than it absolutely costs. If you think of the time that Homo sapiens has been around, there hasn't been an abundant supply of energy. This would have been an important driver in the evolution of all types of nervous systems."

But wiring cost minimisation isn't the whole story. In their study the researchers also checked if the human brain could be rewired, possibly at the cost of a little complexity, in a way that minimises wiring cost further. They fed the brain wiring diagram into a computer and asked it to keep the nodes (the different regions of the brain) in place, but change the connections between them. This can obviously change the inherent complexity of the network and destroy any structures like modularity and Rent's rule.
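
The snippet below is a toy version of that kind of computational experiment, not the researchers' actual procedure: node positions stay fixed, connections are repeatedly re-drawn at random, and a change is kept only if it shortens the total wire length while keeping the number of connections the same. It assumes the networkx library is available; the network and all parameters are made up for illustration.

```python
# Toy rewiring experiment: keep node positions fixed, greedily rewire
# connections to reduce total wire length. Illustrative only; assumes networkx.
import math
import random
import networkx as nx

random.seed(0)
n = 100
pos = {i: (random.random(), random.random()) for i in range(n)}   # fixed node positions
g = nx.gnm_random_graph(n, 300, seed=0)                           # initial wiring diagram

def wire_length(graph):
    return sum(math.dist(pos[u], pos[v]) for u, v in graph.edges())

print(f"before rewiring: {wire_length(g):.1f}")

for _ in range(20000):
    u, v = random.choice(list(g.edges()))      # pick an existing connection
    w = random.randrange(n)                    # candidate new endpoint
    if w in (u, v) or g.has_edge(u, w):
        continue
    if math.dist(pos[u], pos[w]) < math.dist(pos[u], pos[v]):
        g.remove_edge(u, v)                    # accept the swap: it saves wire
        g.add_edge(u, w)

print(f"after rewiring : {wire_length(g):.1f}")
```

As the next paragraph describes, rewiring of this kind can shave off wire length, but it also changes the network's inherent complexity.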

"It turns out that you can make a brain wiring diagram that is slightly more minimally wired than it is in nature," says Bullmore. However, it also turned out that these "cheaper" brain models were also less complex. "This suggests that energy minimisation and wiring cost minimsation are not the whole story. You need the complexity and the complexity costs a bit more than the absolute minimal amount you'd need to wire up the circuit."

So it seems that nature and computer scientists have come up against similar problems and solved them in similar ways. In mechanics there's something called the principle of least action: all physical systems, from the orbiting planets to the apple that supposedly fell on Newton's head, behave in a way that minimises the effort required. You can use the principle to derive the fundamental laws of motion. So could Rent's rule represent a similar fundamental law of information processing, a result of a principle of least cost? It's something for information philosophers to ponder.


Further reading

[1] The article is based on the paper Efficient Physical Embedding of Topologically Complex Information Processing Networks in Brains and Computer Circuits by Danielle S. Bassett, Daniel L. Greenfield, Andreas Meyer-Lindenberg, Daniel R. Weinberger, Simon W. Moore and Edward T. Bullmore.


About this article

Ed Bullmore

Ed Bullmore is Professor of Psychiatry and a founding Director of CAMEO, an award-winning service for first episode psychosis in Cambridge. Since 2005 he has been Clinical Director of the Wellcome Trust and MRC-funded Behavioural & Clinical Neurosciences Institute and has worked half-time for GlaxoSmithKline as Vice-President, Experimental Medicine and Head, Clinical Unit Cambridge. Ed Bullmore has published more than 250 peer-reviewed papers and his research on brain networks has largely been supported by a Human Brain Project grant from the National Institutes of Health. In 2008, he was elected Fellow of the United Kingdom Academy of Medical Sciences, in 2009 Fellow of the Royal College of Psychiatrists and in 2010 Fellow of the Royal College of Physicians.

Dan Greenfield

Dan Greenfield has recently completed his PhD at Cambridge University as a Gates Cambridge Scholar, in which he developed a new framework for understanding the spatio-temporal locality of communication in computation. Before his PhD he worked for many years designing innovative computer chips in Silicon Valley, and writing cutting-edge software for Australian startups. He obtained his previous degrees from the University of New South Wales, where for his Masters he developed new algorithms for systems biology. He has won multiple competitions in programming and has represented Australia internationally. He is currently a director of Fonleap Ltd, a new startup located in Cambridge.

Marianne Freiberger is co-editor of Plus. She interviewed Ed Bullmore and Dan Greenfield in Cambridge in July 2010.
