Do you remember telephone books? These great big lumbering things, often yellow, were once an indispensable part of every household. Today we don't need them anymore, as we can store several phone books' worth of information on small devices we carry around in our pockets. Those devices will also soon be outdated. And one day in the not-too-distant future our control of information will be complete. We will be able to encode an infinite amount of it on tiny little chips we can implant in our brains.
An image of a coiled galaxy taken by NASA's Spitzer Space Telescope. The eye-like object at the centre of the galaxy is a monstrous black hole surrounded by a ring of stars. Image: NASA/JPL-Caltech
Except that we won't. Not because of a lack of technological know-how, but because the laws of nature don't allow it. There is only so much information you can cram into a region of space that contains a finite amount of matter. "We are talking about information in the sense of something that you can store and reproduce," explains Jacob Bekenstein, the physicist who first derived this limit on information in the early 1980s. "[To be able to do that] you need a physical manifestation of it; it could be on paper, or it could be electronically [stored]."
Bekenstein isn't a computer scientist or engineer, but a theoretical physicist. When he came up with the Bekenstein bound, as the information limit is now known, he was thinking about a riddle posed by black holes. These arise when a lot of mass is squeezed into a small region of space. According to Einstein's theory of gravity, the gravitational pull of that mass becomes so strong that nothing, not even light, can escape from its vicinity. That feature is what gave black holes their name.
Room for randomness
The riddle concerned the question of what happens when something falls into a black hole. Most physical systems come with room for variation. For example, at this particular instant in time all the atoms and molecules that make up my body are in a particular configuration. But that configuration is only one of many that are possible. You could swap the position of the tea molecules currently sloshing around in my stomach, or reverse the direction in which they are moving, without altering my macrostate: the physical variables I am able to observe in myself.
This room for variation — the allowed amount of randomness underlying my macrostate — is measured by a number physicists would call my entropy. The more configurations of smallest components (the more microstates) there are corresponding to my macrostate, the higher my entropy. You can also think of entropy in terms of information. If a large number of microstates are possible, then that's because there are many different components (eg atoms) that can be arranged in many different ways. To describe a single microstate exactly would mean to specify the exact position, speed and direction of motion of each component, which requires a lot of information. The higher the entropy, the more information you need. This is why you can think of entropy as measuring the minimum number of bits of information you would need to exactly describe my microstate given my macrostate.
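As a concrete toy illustration (assuming, for simplicity, that all microstates are equally likely), here is a short Python sketch of how counting microstates translates into bits:

```python
import math

# Toy illustration: if a macrostate is compatible with W equally likely
# microstates, pinning down the exact microstate takes log2(W) bits.
def entropy_in_bits(num_microstates: int) -> float:
    return math.log2(num_microstates)

print(entropy_in_bits(2))       # 1.0 bit: one two-state component (like a coin)
print(entropy_in_bits(2**10))   # 10.0 bits: ten independent two-state components
print(entropy_in_bits(2**80))   # 80.0 bits: eighty such components, and so on
```

Each extra two-state component doubles the number of microstates but adds only one bit, which is why entropy is a logarithmic measure.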
The behaviour of the entropy of a system over time is described by a law of physics called the second law of thermodynamics. It says that the entropy of an isolated physical system can only ever go up or stay the same; it can never decrease. To shed some light on this, think of my cup of tea before I imbibed it. At the very start, the instant I put the milk in, the tea and milk molecules were neatly separated. After a while, however, the milk had diffused, milk and tea were thoroughly mixed up, and the liquid had reached an equilibrium temperature. The latter situation has a higher entropy than the initial one. That's because there are many more microstates corresponding to an equilibrium cup of tea than there are microstates in which the milk and tea molecules are only allowed to be in certain regions within the cup. So the entropy of my cup of tea has increased over time. (You can find out more about entropy here.)
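You can see the same counting argument at work in a drastically scaled-down cup of tea. In this toy model the numbers are purely illustrative, far smaller than the number of molecules in a real cup:

```python
from math import comb, log2

# Drastically scaled-down cup: 100 sites, 20 of them occupied by milk molecules
N, n = 100, 20

# "Separated": milk confined to the 50 sites in the top half of the cup
W_separated = comb(N // 2, n)
# "Mixed": milk free to occupy any of the 100 sites
W_mixed = comb(N, n)

print(f"Separated microstates: {W_separated:.2e}")  # about 4.7e13
print(f"Mixed microstates:     {W_mixed:.2e}")      # about 5.4e20
print(f"Entropy increase: {log2(W_mixed / W_separated):.1f} bits")  # about 23 bits
```

Even with just 100 sites, the mixed state has roughly ten million times as many microstates as the separated one; with realistic molecule numbers the disparity becomes astronomically large, which is why tea and milk are never seen to un-mix.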
Losing entropy
But what about those black holes? Initially black holes were thought of as very simple objects with no room for variation at all. Physicists thought their entropy was zero. But if I fell into a black hole, I would never get out again and my entropy would be lost to the world. The overall entropy of the Universe would have decreased. "The moment you have a black hole, you have some sort of trash can where to hide entropies," says Bekenstein. "So the question is, what does the second law say in that case?"
A photo taken by NASA's Chandra X-ray Observatory revealing the remains of an explosion in the galaxy Centaurus A. There is a supermassive black hole in the nucleus. Image: NASA.
It seems that the second law would be violated, and this would indeed be true if the black hole had no entropy at all. However, in the early 1970s Stephen Hawking found that black holes come with a property that behaves very much like entropy. Every black hole has an event horizon. That's its boundary of no return: if you cross it, you won't come back. Like the shell of an egg, the event horizon has an area. Using theoretical calculations, Hawking showed that, whatever happens to the black hole, this area never decreases, just like the entropy of an ordinary physical system.
Bekenstein took the bold step of suggesting that the area of the event horizon does indeed measure a form of entropy. "A black hole is very simple, but it's hiding a complicated history," explains Bekenstein. In an ordinary system like my cup of tea, entropy is a measure of our uncertainty about what's going on at a molecular level. If its entropy is high then that's because there are many possible microstates corresponding to a macrostate. I can observe a macrostate, for example the tea's temperature and mass, but that doesn't give me a clue about what the exact microstate is because there are so many possibilities. "For the simplest black hole all I can figure out is its mass, but it has been formed in one of many possible ways," says Bekenstein. "There are many alternative histories and they all count towards the entropy."
Bekenstein's idea was controversial at first, but further investigations into the theory of black holes confirmed that it made sense to define a black hole entropy (call it $S_{BH}$). It turns out to be equal to a quarter of the horizon's surface area $A$, measured in units of the Planck length squared; to be precise, $$S_{BH} = \frac{1}{4} \times \frac{A}{L_p^2},$$ where $L_p = 1.62 \times 10^{-35}$ metres is called the Planck length.
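To get a feel for the numbers, here is a rough Python sketch of what this formula gives for a black hole with the mass of the Sun (the values of the physical constants are standard ones and don't appear in the article):

```python
import math

# Standard physical constants in SI units (illustrative, not from the article)
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
L_p = 1.616e-35    # Planck length, m
M_sun = 1.989e30   # mass of the Sun, kg

# Radius and area of the event horizon of a non-rotating solar-mass black hole
r_s = 2 * G * M_sun / c**2       # Schwarzschild radius
A = 4 * math.pi * r_s**2         # horizon surface area in m^2

# Bekenstein-Hawking entropy: a quarter of the area in Planck-length-squared units
S_BH = A / (4 * L_p**2)
print(f"Horizon radius: {r_s/1000:.1f} km")   # about 3.0 km
print(f"Black hole entropy: {S_BH:.1e}")      # about 1e77
```

That's an enormous number, around $10^{77}$, compared with something like $10^{25}$ for an everyday object such as a cup of tea.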
Recovering entropy

The notion of black hole entropy gave people a way of generalising the second law of thermodynamics to systems that include black holes: for such a system it's the sum of the ordinary entropy that lives outside the black hole and the black hole entropy that can never decrease. "If some entropy falls into the black hole the surface area will grow enough for the sum of these two entropies to grow," explains Bekenstein. The increase in the black hole entropy will compensate, and most frequently over-compensate, for the loss in the ordinary entropy outside it.
The generalised second law inspired Bekenstein to a little thought experiment which gave rise to the Bekenstein bound on information. Suppose you take a little package of matter with entropy $S$ and you lower it into a black hole. This will increase the black hole's entropy and, equivalently, its surface area. You lower the package into the hole very carefully so as to disturb the hole as little as possible and increase the surface area by the smallest possible amount. Physicists know how to calculate that smallest possible amount. Writing $G$ for Newton's gravitational constant and $c$ for the speed of light, it turns out to be $$A_{increase} \geq \frac{8\pi \times G \times m \times r}{c^2},$$ where $m$ is the total mass of the package and $r$ is its radius. Thus, lowering the package into the black hole will have increased $S_{BH}$ (dividing the area increase by $4 \times L_p^2$) by at least $$\frac{2\pi \times G \times m \times r}{c^2 \times L_p^2}.$$

When you have dropped the package into the black hole, the outside will have lost an amount $S$ of entropy. Since the overall entropy cannot decrease, the increase in $S_{BH}$ must exactly balance or exceed $S$. In other words, $$S \leq \frac{2\pi \times G \times m \times r}{c^2 \times L_p^2}.$$ The entropy of your package cannot be bigger than the number on the right of this inequality, which depends on the package's mass and its size. And since any package carrying entropy could in theory be dropped into a black hole in this way, any package must comply with the bound.
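One step is worth making explicit here. The Planck length satisfies the standard relation $L_p^2 = G\hbar/c^3$, where $\hbar$ is the reduced Planck constant. Substituting this, Newton's constant cancels and the bound takes its usual, strikingly simple form: $$S \leq \frac{2\pi \times G \times m \times r}{c^2 \times L_p^2} = \frac{2\pi \times G \times m \times r \times c^3}{c^2 \times G \times \hbar} = \frac{2\pi \times m \times c \times r}{\hbar}.$$ Gravity has dropped out entirely: the limit depends only on the package's mass, its size, and fundamental constants.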
The limits of information storage

How is all of that linked to the storage capacity of a computer chip or some other information storage device? The entropy measures the number of bits needed to describe the chip's microstate. Some of those bits go towards describing the parts of the chip designed to store information. More storage capacity requires more entropy. And since the entropy is limited (in terms of the chip's mass and size) by the expression above, so is its storage capacity. To increase the amount of information a device can carry beyond any bound, we would have to increase its size and/or mass beyond any bound too.
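Plugging numbers into the simplified form of the bound shows just how roomy it is. The following sketch evaluates it for an imaginary pocket-sized device; the one-kilogram mass and five-centimetre radius are illustrative choices, not figures from the article:

```python
import math

# Standard constants (SI units)
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

# Hypothetical pocket-sized device: values chosen purely for illustration
m = 1.0            # mass in kg
r = 0.05           # radius in metres

# Bekenstein bound: S <= 2*pi*m*c*r / hbar (in natural units),
# divided by ln(2) to convert to bits
S_max_bits = (2 * math.pi * m * c * r / hbar) / math.log(2)
print(f"Maximum storage: about {S_max_bits:.1e} bits")  # around 1e42 bits
```

That's about $10^{42}$ bits, roughly $10^{29}$ times the capacity of a terabyte hard drive.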
Current devices don't come anywhere near the Bekenstein bound, so there's no need to worry that we will hit the limit any time soon. In fact, the only things physicists know of that exactly reach the bound are black holes themselves. But it is interesting that such a bound even exists. "In the future when information storage technologies will get much better you still will not be able to exceed this," says Bekenstein. "It's a very big bound, but it's a finite bound."
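A short calculation with the formulas above shows why a black hole sits exactly on the bound. For the simplest (non-rotating, uncharged) black hole of mass $m$, the horizon radius is $r_s = 2 \times G \times m/c^2$ and the horizon area is $A = 4\pi \times r_s^2$, so $$S_{BH} = \frac{4\pi \times r_s^2}{4 \times L_p^2} = \frac{\pi \times r_s^2 \times c^3}{G \times \hbar} = \frac{2\pi \times m \times c \times r_s}{\hbar},$$ which is precisely the maximum the bound allows for a mass $m$ contained within a radius $r_s$.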
Could a brain be uploaded on a computer?

There is another interesting angle on Bekenstein's bound. It puts a limit on how much information you need to completely describe a physical system, such as a human brain, down to the tiniest detail. Since, according to the bound, that information is finite, this means that, in theory at least, a human brain could be entirely recreated on a computer. Many people believe that everything about a person, including consciousness and a sense of self, arises from physical processes in the brain, so in effect it would be possible to upload a person onto a machine. We are nowhere near being able to do this, both in terms of the computing power and the other technologies that would be needed. But still, it's a fascinating thought. Matrix, here we come.
About this article
Jacob Bekenstein.
Jacob Bekenstein is Polak Professor of Theoretical Physics at the Hebrew University of Jerusalem. Marianne Freiberger, Editor of Plus, interviewed him in London in July 2014.
This article is part of our Information about information project, run in collaboration with FQXi. Click here to read other articles on information and black holes.
Comments
entropy / information
Thanks for this information / entropy and the effort made for clarity. A well balanced combination of understandable (i.e. simple) equations, clear text and examples.

Maybe another subtitle, e.g. 'entropy and energy' or 'the difference between entropy and energy', might help.

I'm finally understanding these concepts and I will make sure this is passed on. I have always had an interest in ''information'' but in my youth didn't have the time or resources to find any underpinning insight.

Much appreciated