You may have heard of quantum theory before and you probably know what a field is. But what is quantum field theory? This four-part article traces the development of an example of a quantum field theory, quantum electrodynamics, in the first half of the 20th century. You can read the next article in the series here.
Iron filings scattered around a bar magnet arrange themselves along field lines.
Do you remember those pretty field lines that emerge when you scatter iron filings around a magnet? In the case of a simple magnet the field is static; it doesn't change with time. But magnetism is just one aspect of something bigger: electromagnetism. You are at this very moment immersed in electromagnetic fields, generated by the Earth, the Sun, and even your toaster. Fluctuations of an electromagnetic field are called electromagnetic waves — it's those waves that make up visible light, as well as radio waves, x-rays, and microwaves. You are constantly bombarded by them as they carry energy across space through the electromagnetic field.
James Clerk Maxwell realised, in 1864, that electricity and magnetism were just two sides of the same coin and that light was made up of electromagnetic waves. He developed an elegant theory describing the unified force of electromagnetism and the equations that describe the dynamics of an electromagnetic field now carry his name.
More generally, the idea of a field became an important one in physics because it cleared up a conundrum that had been bugging physicists for a long time. If you think of a force, such as electromagnetism or gravity, as acting between two objects, then you have to admit that it acts instantaneously across space, an idea that seems altogether too magical. If, on the other hand, you think of an object as generating a field around it, then you can explain the force in terms of the field — the mysterious action at a distance is replaced by a perfectly reasonable local one. Once a field has been generated it has a life of its own, carrying along energy, which is described by its very own equations of motion. Einstein picked up this idea in his 1916 theory of general relativity, which describes gravity in terms of gravitational fields generated by massive bodies like the Sun or the planets.
A couple of decades before Einstein had his revolutionary insight into physics on the cosmological scale, another revolution happened in the physics of the very small, with serious consequences for Maxwell's theory of electromagnetism. At the turn of the twentieth century it became clear that light doesn't always behave like waves: under certain circumstances it seems to come in streams of particles called photons. This is what Einstein realised when he explained the photoelectric effect. Prompted by this discovery Louis de Broglie suggested in the early 1920s that little particles of matter, such as electrons, could also display wave-like behaviour. This wave-particle duality emerged as a fundamental feature of physics and it is the central idea of quantum mechanics.
The curious new physics of quantum mechanics required new mathematics and this was independently discovered by Erwin Schrödinger and Werner Heisenberg in the mid 1920s. Their equivalent theories described the behaviour of collections of particles moving freely, or under the influence of a force. The next step was to modify Maxwell's equations for the electromagnetic field to take account of the new insights from quantum mechanics.
Illustration of the electric vector field surrounding a positive point charge. Image Wikimedia Commons.
This was a difficult task: a finite collection of particles is described by a finite amount of information, but a field, extending through a region of space made up of infinitely many points, is described by an infinite amount of information. In Maxwell's original formulation each point in the field came with a couple of arrows, describing the direction in which the two forces (electric and magnetic) would act on a test particle placed at that point. The length of the arrows was proportional to the strength of the forces. Maxwell's equations described how these arrows change over time. In a quantised version of electromagnetism these arrows, called vectors, would have to be replaced by more complex mathematical objects and their change over time would have to be described by a more complicated equation.
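This "arrow at every point" picture can be made concrete with a small numerical sketch. The following is our own toy illustration, not part of Maxwell's formulation: it computes the electric field vector of a single point charge at a few sample points using Coulomb's law (the charge value and sample points are arbitrary choices).

```python
import numpy as np

# Electric field of a point charge q at the origin (Coulomb's law):
# E(r) = k q r_hat / |r|^2 — an arrow attached to every point in space.
K = 8.99e9   # Coulomb constant, in N·m²/C²

def e_field(q, point):
    """The field vector (the 'arrow') of charge q at the given point."""
    r = np.asarray(point, dtype=float)
    dist = np.linalg.norm(r)
    return K * q * r / dist**3   # k q r_hat / |r|^2 = k q r / |r|^3

q = 1e-9  # a 1 nanocoulomb charge (illustrative)

# The arrow points radially away from a positive charge, and doubling
# the distance makes it four times shorter (inverse-square law).
print(e_field(q, [1.0, 0.0]))   # points along +x
print(e_field(q, [0.0, 2.0]))   # points along +y, four times weaker
```

Each call returns one arrow; the full field is this arrow computed at every point of space at once, which is why it carries infinitely much information.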
Exactly how Maxwell's equations should be modified was anyone's guess until the physicist Paul Dirac had an important insight in 1927. He considered an electromagnetic field without matter. Maxwell's equations showed that this field is in motion, with gently undulating electromagnetic waves propagating through it as the electric and magnetic components interact. Just as sound waves can be decomposed into harmonics, these electromagnetic waves could be decomposed into pure sine waves using a well-known mathematical technique called Fourier analysis.
The periodic fluctuations of these regular waves are akin to the motion of a pendulum or a mass suspended from a spring: in both cases an object displaced from equilibrium feels a restoring force that is proportional to the displacement. Systems such as these are called harmonic oscillators. Luckily, physicists already knew how to deal with these oscillators quantum mechanically. Dirac thus managed to quantise the electromagnetic field by first decomposing it, mathematically, into infinitely many harmonic oscillators and then applying existing techniques, namely Schrödinger's equation, to quantise those.
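Dirac's first step, decomposing a wave into pure sine waves, can be illustrated numerically. This is our own toy sketch, not Dirac's calculation: we build a "fluctuation" from two sine waves and recover their frequencies with a discrete Fourier transform.

```python
import numpy as np

# A toy field fluctuation: the sum of two pure sine waves.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)   # one second, 1000 samples
signal = 1.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# Fourier analysis decomposes it back into its harmonic components.
spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)   # amplitude per frequency
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])          # frequency axis in Hz

# The two constituent oscillators reappear as peaks at 3 Hz and 7 Hz.
peaks = freqs[spectrum > 0.1]
print(peaks)   # [3. 7.]
```

Each peak corresponds to one harmonic oscillator; Dirac's move was to quantise every such oscillator separately and reassemble the field from the results.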
Schrödinger’s quantum mechanical treatment of harmonic oscillators had led to some curious results. The total energy stored in a classical harmonic oscillator, such as a pendulum, remains constant over time: when we see a pendulum slow down, this is only because other processes, such as friction, intervene. The energy of an ideal and eternally swinging pendulum comes from the push you start the pendulum off with. You would think that by getting your push just right, you can make the energy take on any value at all. But for a quantum harmonic oscillator this isn’t true: its energy can only take discrete values which depend on the frequency of oscillation:

$$E_n = \hbar \omega \left(n + \frac{1}{2}\right).$$

Here $\omega$ is the angular frequency of the oscillator, $\hbar$ is a fundamental constant of nature called Planck’s constant and $n$ is a natural number. The important point is that the value of the energy of the quantum harmonic oscillator can only be exactly $\frac{1}{2}\hbar\omega$, $\frac{3}{2}\hbar\omega$, and so on, and no value in between — the oscillator has a discrete energy spectrum.

Curiously, the lowest energy state,

$$E_0 = \frac{1}{2}\hbar\omega,$$

called the ground state, does not correspond to zero energy: a quantum harmonic oscillator is never completely at rest.
(This reflects Heisenberg's famous uncertainty principle which we shall meet in the next article in this series.)
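The discrete spectrum of the quantum harmonic oscillator is easy to tabulate. Here is a minimal sketch of our own (the angular frequency chosen is arbitrary):

```python
# Energy levels of a quantum harmonic oscillator: E_n = ħω(n + 1/2).
HBAR = 1.054571817e-34   # reduced Planck constant, in J·s

def energy(n, omega):
    """Allowed energy of the n-th level for angular frequency omega."""
    return HBAR * omega * (n + 0.5)

omega = 1.0e15   # an arbitrary angular frequency, in rad/s
levels = [energy(n, omega) for n in range(4)]

# The ground state (n = 0) is not zero, and successive levels are spaced
# by exactly one quantum ħω — no energy in between is allowed.
print(levels[0])                            # ħω/2, not 0
gaps = [b - a for a, b in zip(levels, levels[1:])]
print(gaps)                                 # every gap equals ħω
```

The constant gap between levels is the "packet" of energy that, in the electromagnetic case, we identify with a single photon.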
In electromagnetism the discrete energy levels reflect wave-particle duality. The energy carried along by a classical wave can vary continuously, but the constituent waves of the quantised electromagnetic field are only allowed discrete packets of energy. These packets can be viewed as individual photons: a wave with energy level $E_n = \hbar \omega \left(n + \frac{1}{2}\right)$ corresponds to $n$ photons, each with frequency $\omega$ and energy $\hbar\omega$. Phrased differently, a photon can be viewed as a "unit of excitation" of the underlying field. It’s like a quiver in a photon jelly with the quiver’s energy coming in precisely prescribed units.
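To get a feel for these units of excitation, here is a back-of-envelope sketch of our own (the 633 nm wavelength and 1 mW power are illustrative numbers for a typical red laser pointer): a photon's energy is $hc/\lambda$, so a classical beam of given power corresponds to an enormous number of photons per second.

```python
# Photon energy and photon count for a classical beam of light.
H = 6.62607015e-34   # Planck's constant, in J·s
C = 2.99792458e8     # speed of light, in m/s

wavelength = 633e-9                  # red laser light, 633 nm
photon_energy = H * C / wavelength   # energy of one quantum of the field

power = 1e-3                         # a 1 mW beam
photons_per_second = power / photon_energy

print(photon_energy)        # roughly 3.1e-19 J per photon
print(photons_per_second)   # roughly 3.2e15 photons every second
```

With quadrillions of photons arriving every second, it is no surprise that a laser beam looks like a perfectly smooth classical wave.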
Dirac's feat was impressive, but so far it only applied to an empty electromagnetic field. What about matter particles like electrons, which after all interact with electromagnetic fields and even generate fields? Schrödinger and Heisenberg's mathematics described the behaviour of these particles, but it did not take account of Einstein's special theory of relativity. This comes into play whenever things move close to the speed of light, that is, at the speed of photons. And since electromagnetism is all about photons you cannot ignore relativistic effects when dealing with electromagnetism.
This picture, taken at the 5th Solvay conference in 1927, contains some of the greats of quantum mechanics. Back row from left to right: Wolfgang Pauli is 5th and Werner Heisenberg is 6th. Middle row from left to right: Louis de Broglie is 7th, Max Born 8th, Niels Bohr 9th. Front row from left to right: Max Planck is 2nd and Albert Einstein 5th.
A new equation was needed and it was again Dirac who came up with the goods. His equation gave rise to a pleasing synergy with the photon picture, in keeping with the notion of wave-particle duality. The solutions to Dirac's equation were again waves, which could be decomposed into harmonic oscillators and then quantised. Electrons, just as photons, emerged as units of excitation of an underlying field: not quite waves and not quite particles.
And there was more. To make his equation take account of real physical properties of electrons, such as spin, a sort of angular momentum, Dirac had to use a mathematical representation that contained twice as many bits of information as, on the face of it, were necessary. What could those extra bits mean? Dirac predicted that they describe a curious twin of the electron, called an anti-electron or positron, which has the same mass and opposite charge. When an electron meets its anti-twin the two annihilate each other, producing chargeless photons. Shortly after Dirac's stunning mathematical prediction, positrons were detected in lab experiments by Carl D. Anderson. In fact, most fundamental particles were later shown to come with their own antiparticle. The laws of nature as we understand them treat particles and antiparticles equally so, on the face of it, there should be the same amount of matter and antimatter in the Universe. Why this isn't the case — there seems to be a lot more matter than antimatter — is still a mystery today.
Ready, steady ... damn!
Dirac's efforts seemed to provide all that is necessary to construct a full theory of quantum electrodynamics. It described photons and electrons as excitations of underlying quantum fields, so it was a matter of putting the equations to work to see how photons and electrons interact: how light interacts with itself and scatters off matter. But there was one major problem. The answer to pretty much any calculation physicists cared to attempt was infinity. Something was seriously wrong.
This is what we will explore in the next article.
About this article
Marianne Freiberger and Rachel Thomas are Editors of Plus. They are hugely grateful to Jeremy Butterfield, a philosopher of physics at the University of Cambridge, and Nazim Bouatta, a Postdoctoral Fellow in Foundations of Physics at the University of Cambridge, for their many patient explanations and help in writing this article.