The Internet is huge. With mobile phones and other devices now being hooked up as well as computers, it will soon comprise many billions of endpoints. In sheer size and complexity the Internet is not far off the human brain with its hundred billion neurons linked up by around ten thousand trillion individual connections. If you're finding it hard even to read these huge numbers off the page,
how can anyone be expected to cope with complexity on this vast scale? While mathematical tools that deal with complexity have developed rapidly in recent years, there still isn't an overarching science of complexity, a mathematical toolbox serving everyone who's dealing with it in whatever shape or form.
This month a group of experts from a range of different areas got together in Cambridge to try and remedy the situation. It was the first ever meeting of the Cambridge Complex Systems Consortium. Delegates included a climate scientist, a former head of MI6, a neuroscientist, a sociologist, and a mathematician. Plus went to see them, to learn about their struggle with complexity.
If you're a fan of our regular Outer Space column by John D Barrow, then you can now get your hands on some of it, and much more besides, in book form. In A hundred essential things you didn't know you didn't know Barrow applies his usual style and insight to anything from winning the
lottery, placing bets at the races, and escaping from bears, to sports, Shakespeare, Google, game theory, drunks, divorce settlements and dodgy accounting — all obviously things that only make sense with maths. The bite-sized chapters are easy to digest and form formidable weapons in your war on ignorance.
For those of you who enjoy Ben Goldacre's Bad Science column in the Guardian, now is your chance to meet him in person. Ben is talking about "How the media promote the public misunderstanding of science" on Tuesday 21 October (next week) at 5.30pm in the Babbage Lecture Theatre, New Museums
Site, Cambridge. It's a free public lecture and everyone is welcome, just turn up on the night.
The lecture is brought to you by Understanding Uncertainty, the Winton programme for the public understanding of risk at Cambridge. You can read more from the Understanding Uncertainty team on their site, or in their column on Plus.
Good science is all about good method. Every scientific study worth its salt should begin with a clearly delineated question to be investigated, followed by the systematic gathering of evidence (often involving the use of statistics), and concluding with a sober-headed interpretation of evidence found.
The 2008 Nobel Prize in Physics has been awarded to three men whose work has contributed significantly to our understanding of why we're here. Makoto Kobayashi of the High Energy Accelerator Research Organization, Japan, and Toshihide Maskawa of Kyoto Sangyo University share one half of the prize, with the remaining half going to Yoichiro Nambu of the University of Chicago. Their combined body
of work paves the way towards solving two of the biggest mysteries of physics: why the universe contains almost no antimatter and why things have mass. The answers to both are connected to flaws in nature's symmetry.
It's a bleak time for the financial markets. We've seen financial institutions fall and governments around the world struggling to stabilise the markets. But who is to blame? According to media reports there are two suspects in the dock: the "rocket scientists" (a.k.a. the financial mathematicians) who provided the information behind the market's decisions, and the greedy bankers who thought only about quick profits and their end-of-year bonuses.
But just what role does maths have in the financial market? Most of us will have come into direct contact with financial maths when applying for a loan from a high street bank. Rather than the bank manager relying on how well they know you personally, as might have happened in the past, now loan decisions are based on statistical models. But these robust mathematical models predicting whether
or not someone will be able to repay their loan did not avert the US subprime mortgage collapse, the first domino to fall in the current crisis. Loans were still given to people the models predicted would default on their payments.
The failure of these high-risk loans infected the financial markets as a whole, thanks to the use of credit derivatives to share the credit risk around. These complex financial instruments were profitable in a booming market, but are now paralysing the financial system. Did the mathematicians involved in developing these products get their sums wrong?
On the other hand, if the real culprits are the bankers, then what led them to make such bad decisions? Could it have been down to their biochemical make-up, and would the problem be solved by more diversity on the trading floor?
I would like to read further articles about finance and markets, for there are many questions, sometimes of the kind only children can ask.
For instance, the author of the "Good Math, Bad Math" blog argued that a key failure was modelling the default rates of different loans as independent when they are in fact correlated (if one fails, the others become more likely to fail as well). That sounds very reasonable, but is there a way to quantify the effect? Are there other examples where this phenomenon might occur?
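One way to quantify the effect is a one-factor Gaussian copula, the standard textbook model for correlated defaults (and, ironically, a model widely blamed for CDO mispricing). A minimal Monte Carlo sketch, with illustrative numbers rather than real portfolio data:

```python
import math
import random
from statistics import NormalDist

def portfolio_defaults(n_loans=100, p_default=0.05, rho=0.0, trials=4000, seed=2):
    """Monte Carlo default counts under a one-factor Gaussian copula.

    Each loan defaults when sqrt(rho)*M + sqrt(1-rho)*e_i falls below the
    threshold matching the marginal default probability; M is a shared
    "state of the economy" factor. rho = 0 recovers independent defaults.
    """
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p_default)
    results = []
    for _ in range(trials):
        m = rng.gauss(0.0, 1.0)  # common factor shared by all loans
        defaults = 0
        for _ in range(n_loans):
            x = math.sqrt(rho) * m + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            if x < threshold:
                defaults += 1
        results.append(defaults)
    return results

# Chance of a "bad tail" (more than 15 of 100 loans failing together):
for rho in (0.0, 0.3):
    results = portfolio_defaults(rho=rho)
    tail = sum(1 for d in results if d > 15) / len(results)
    print(f"rho = {rho}: P(more than 15 defaults) ~ {tail:.3f}")
```

With independent loans, mass simultaneous default is vanishingly rare; even modest correlation makes the bad tail orders of magnitude more likely, which is exactly the scenario described above.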
Or take the broader question: what is money? Well, the double coincidence of wants problem is a satisfactory answer to why we need/have money. Also, (some) money is created by banks via the fractional reserve system, but how does this work? I mean, in the case of counterfeit money, it's clear who profits from it, but who gets to spend the money created by banks? I mean, the loans are paid back at
some point in the future?
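The textbook answer to "how does this work?" is the money-multiplier story: the borrower gets to spend the newly created deposit, and repaying the loan destroys it again. A sketch of the arithmetic (a deliberately simplified model; real banking is messier):

```python
def total_deposits(initial_deposit, reserve_ratio):
    """Sum the deposits created as an initial deposit is lent and re-deposited.

    Each bank keeps reserve_ratio of every deposit and lends out the rest;
    the loan comes back as a new deposit elsewhere. The geometric series
    initial*(1 + (1-r) + (1-r)^2 + ...) converges to initial/reserve_ratio.
    """
    total, deposit = 0.0, float(initial_deposit)
    while deposit > 0.01:  # stop once the next loan is negligible
        total += deposit
        deposit *= 1.0 - reserve_ratio  # the fraction lent out and re-deposited
    return total

print(total_deposits(100, 0.10))  # close to 100 / 0.10 = 1000
```

So with a 10% reserve ratio, $100 of base money can support nearly $1000 of deposits, all of which disappears again as the chain of loans is repaid.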
What about interest? In a "closed economy" (which I define as one where the amount of money is constant), where does the money to pay the interest come from? Surely you can't lend out all the money in the economy and expect interest on top: you can't pay back $120 if only $100 exist in total. What is going to happen? Any calculations?
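The $100/$120 puzzle can be made concrete with a toy loop. The usual resolution is that repayment is possible only if the lender spends its interest receipts back into circulation; here is a deliberately crude sketch, ignoring velocity, multiple agents and new lending:

```python
def can_repay(money_supply=100.0, rate=0.20, lender_respends=False, periods=10):
    """Toy closed economy: a single loan of the entire money supply,
    owing money_supply*(1+rate). Each period the borrower pays what it can;
    if lender_respends, the lender spends its receipts back into the
    economy (buying the borrower's goods), so the money recirculates.
    """
    owed = money_supply * (1.0 + rate)
    cash = money_supply  # all money in circulation sits with the borrower
    for _ in range(periods):
        payment = min(cash, owed)
        cash -= payment
        owed -= payment
        if owed <= 1e-9:
            return True
        if lender_respends:
            cash += payment  # receipts flow back and recirculate
        elif payment == 0.0:
            return False  # no money left anywhere to service the debt
    return False

print(can_repay(lender_respends=False))  # False: $120 owed, only $100 exists
print(can_repay(lender_respends=True))   # True: recirculated money covers it
```

The same $100 can discharge $120 of debt if it changes hands more than once, which is why the answer hinges on what the lender does with the interest, not on the size of the money stock alone.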
Lawrence Summers said that we have been having a crisis about every three years since the '80s. I recall reading stories in the early '80s about how unemployed physicists and mathematicians could only get jobs on Wall Street. The "what went wrong" pot is full of reasons, but I'd bet the rest of my retirement that it's fraud. The environment was created by tearing down regulations without consideration for the consequences, and by lax oversight, perhaps with some pay-to-play going on. The financial models and politicized financial statements have more to do with imaginary numbers and the fourth dimension than with reality.
If history repeats itself, here are some interesting stories on recent crises:
Nova did a story on Long Term Capital Management, whose near-collapse in 1998 the Fed feared would bring down the financial markets: Trillion Dollar Bet
Frontline did a program called Dotcon after the stock market bubble burst of 2000: Dotcon
The story of what failed goes beyond this (but is no less fascinating in a mathematical, yet morbid sense).
The key cause is derivatives and their pricing combined with how financial markets misused them.
The first fatal flaw was treating derivatives as an externality-creating form of insurance, without thinking deeply enough about what that really means.
At one level there is no difference between internally pooled insurance (like traditional insurance policies) and derivative "insurance". The thinking that differentiated the latter, however, was part of the central flaw in understanding the risks. Both types of "insurance" are the same in that you are relying on aggregating a market to ensure that the expected cost of payouts is lower than the incoming premium cash flows.
The difference is that with derivative forms, by externalizing the risk you no longer have the visibility to know whether the pool's statistical distribution still keeps the expected cost safely below the incoming revenue. Of course, you've externalized the revenue too, but now it's the entire system that is at risk when that distribution turns upside-down. Ergo what we are seeing now. When you manage the pool internally, it may be difficult to assure the distribution, but you can control things like who enters the pool and what it costs to be part of it, so you have some levers. Externalize the risk and you throw all direct control out the window. Like outsourcing: when you externalize the risk, you lose control and visibility of the process and the work.
To put it slightly differently, it's like choosing to outsource your product sales to a distributor rather than use your own direct sales force or stores. In doing so you necessarily disconnect yourself from knowledge about what is happening. When insurance is involved that's a bit riskier, as the very product (or value) itself depends on factors that vary over time and which you need as "process variables" in your product's production. You've basically chosen to run your business open-loop, without any control, in the name of externalizing things.
By "trading risk to someone who can better afford it", as the standard justification goes, there is an implicit assumption that 1) someone has a better portfolio mix that makes the risk you couldn't stomach seem trivial, and 2) you are not in a closed system where you are buying back your own "excremental" risk.
Neither of these held true in part because the market space was too small, too interconnected and too consolidated. Such uses of "derivatives as insurance" are only valid in an "open system" model of financial markets where there is tremendous diversity to assure "any possible" better portfolio mix to neutralize a sold risk. In a globalized and/or highly consolidated industry there is absolutely
no way that can be true.
All factors that businesses choose to make "externalities" are intrinsically playing off an assumption of the external world being an open system with infinite sources and sinks for what you are externalizing. This is certainly not a universally true model. If you are very big, or make too much presumption of sampled independence of the externalized, you are taking a risk. Possibly a very big one.
Strictly, there is almost certainly an upper bound on how much, or for how long, you can keep passing off risk; at some point it becomes a game of musical chairs. This is just like there being an upper bound on how much you can pollute the environment before it stops being an infinite sink. Which leads to the second issue.
The second fatal flaw involves ignoring the underlying statistical requirements of the pricing models used for derivatives. Every model ever conceived is "wrong" at some level: the trick to proper use of any model is knowing when and where it fails, because the only certainty is that it will fail if you push it far enough.
You could make the argument for this along entropy and Gödel's-theorem lines, but not here. You will always get bitten when you don't pay attention to the assumptions/axioms on which the model is based.
And on what are derivative pricing models based? Thermodynamics and statistical mechanics, of course. Let's note the presence of the word "statistical" here. Black-Scholes, for example, is a direct lift from statistical mechanics. You can even map the correspondence of variables: Value->Energy, Price->Temperature, Transactions->Atoms...
If you go to the Wikipedia page for Black-Scholes, you'll see a list of "assumptions" or "axioms" - you know, that boring boilerplate stuff they drone on about in math class before they get on with the proof at hand.
If you read them, and are familiar with just a little bit of the origins of this model, you'll recognize that these are pretty much the same statistical assumptions you have to presume in order to use Boltzmann's equation (or Fermi-Dirac or Bose-Einstein) in physics. Where these models stop working is at the low end of things, where you cannot presume "equilibrium" and where quantum mechanics tends to take over: small numbers of particles and mixes of non-identical, interacting particles.
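For concreteness, here is the Black-Scholes call-price formula the comment refers to, as a small Python sketch with its key assumptions noted in the docstring (the parameter names are the conventional ones, not from the original comment):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    Baked-in assumptions (the "boilerplate"): log-normal prices driven by
    independent Gaussian shocks, constant volatility sigma, constant
    risk-free rate r, continuous frictionless trading, no dividends.
    """
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + sigma ** 2 / 2.0) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# A 1-year at-the-money call, 5% risk-free rate, 20% volatility:
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
```

Every input here (constant sigma, a meaningful "safe" rate r, Gaussian independence) is exactly one of the assumptions the questions below put under pressure.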
So if we have a small (in market participants; titanic (!) in currency value), contrived "value-creation" market like a CDO or CSO market, are we keeping these assumptions?
If 80% of the market volume in such a market is comprised of a group of firms numbering fewer than 8-10 firms which are reselling risk incestuously to each other, are we keeping these assumptions?
If the "safe" reference interest rate used for derivative pricing comes from the market's own "outer market" (e.g. bonds rated AAA are deemed "safe", and our own market's instruments are AAA, so the rate we use is our own market's rate), are we keeping these assumptions?
If we are "extra careful" and choose a safe interest rate based on some market "completely outside our own", given that all markets are intimately tied together by globalization such that nothing is independent any longer, are we keeping these assumptions?
The answer to all of this is: NO. The failures are multilevel: these models presume Gaussian distributions of independent events, and the Central Limit Theorem only kicks in when enough samples are present; we have neither. So we are certainly not keeping our model assumptions valid, and thus whatever price pops out is certainly wrong.
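The claim that correlation defeats averaging can be made precise with a one-line formula: for n equally correlated variables, the variance of their mean has a floor that no sample size removes. A small sketch:

```python
def variance_of_mean(n, sigma2=1.0, rho=0.0):
    """Variance of the average of n equally correlated random variables.

    Var(mean) = sigma2 * (1 + (n - 1) * rho) / n.
    With independence (rho = 0) this decays like 1/n, the averaging-out
    behind the Central Limit Theorem. Any rho > 0 leaves a floor of
    rho * sigma2 that no amount of pooling removes.
    """
    return sigma2 * (1.0 + (n - 1) * rho) / n

print(variance_of_mean(10_000, rho=0.0))  # ~0.0001: risk averages away
print(variance_of_mean(10_000, rho=0.1))  # ~0.1: a stubborn floor remains
```

Pooling ten thousand independent risks shrinks uncertainty a hundredfold; pooling ten thousand risks with even 10% mutual correlation barely helps at all.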
And where is the price-value intuition to know this? For a small, eclectic market there is no intuition or external reference. Few people, if any, even in the business of derivatives have an intuition about their value. These situations are why iterated price-setting systems like auctions and negotiation exist; they are emergent price-value-setting tools for sparse-knowledge transactions. But when both sides of the transaction are running from the same play-book and neither has an intuition about price-value, even iterated pricing is useless.
So it should not be surprising when small (or large) errors in price-value mapping get multiplied to the point of potential system collapse. Because the CDO markets are small and highly interconnected, positive feedback cycles that multiply up small pricing errors and biases are trivially possible, perhaps even inevitable.
I should point out that my background is in engineering: semiconductor device physics, analog circuit design and systems reliability. So at a mere glance, all of these issues jump out at me as frighteningly self-evident and inevitably problematic. I also have an MBA, which is where I became intimately introduced to these matters.
It's not that maths is to blame; that makes as much sense as blaming the tool, or ascribing moral failure to a technology. It's a cop-out to blame the tools when it's people who misused them. Morality and its responsibility must always sit squarely on the shoulders of the sentient; the sentient are too slippery not to hold this as a strong principle. The only next-best excuse would be group or organizational factors, and only when the fate of the members tracks the fate of the organization, to avoid moral hazard.
People make decisions, not inanimate objects or knowledge. Even objects that appear to "decide" (or act "immorally") are ultimately only playing out the design decisions of their creators. Every object or form of knowledge can be used for good or evil and it must always be the people making those decisions who are blamed for wielding them destructively as society defines the concept.
Clearly, the people responsible can be dealt with either by holding them guilty of sins of commission or omission (they are adults and either knew or should have known better), or you can say "they were children", incapable of adult responsibility, in which case both they and we must be protected from them making the same mistake again, by putting "parental" controls in place to prevent a recurrence.
Much of the explanation for these recurrent "crises" lies in the insistence on continual and perpetual compound growth (a.k.a. exponential growth) beyond a sustainable compounding rate.
A "natural" sustainable rate without much innovation is about 3% a year. With innovation you might be able to get as high as 8% consistently. Double-digit rates over the long term? Not possible without fraud or self-delusion. Value-creation markets (which create "value" out of thin air just by trading money around, like a perpetual motion machine) are an example of both.
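The gap between these rates is just the arithmetic of compounding; a quick sketch (rates chosen to match the 3% and 8% figures above, plus an illustrative double-digit 15%):

```python
from math import log

def years_to_double(rate):
    """Doubling time under compound annual growth at the given rate."""
    return log(2) / log(1 + rate)

for rate in (0.03, 0.08, 0.15):
    factor_30yr = (1 + rate) ** 30
    print(f"{rate:.0%} growth: doubles in {years_to_double(rate):4.1f} years, "
          f"x{factor_30yr:5.1f} over 30 years")
```

At 3% an economy doubles roughly every 23 years; at 15% it would have to grow more than sixty-fold in a working lifetime, which is the kind of promise that tends to end in fraud or self-delusion.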
When you push beyond capacity you're going to get an oscillatory effect if the phase and feedback are right. Since economic systems are nonlinear dynamical systems, such cycling could be a small, damped cusp catastrophe or a smoother trajectory cycle. What we have now may well be a large cusp catastrophe with a very precipitous drop-off.
One of the underlying assumptions seems to be that we can (and should) try to achieve year-on-year economic growth. It seems to me that such exponential growth is impossible over time. We must eventually run out of resources. What then for a growing money supply backing up credit?
There is also a moral issue concerning the rights of future generations to their share of these ever scarcer resources but that is something our generation prefers to ignore.
There is a very interesting podcast on this topic produced by the BBC programme Discovery. It's certainly worth a listen. One of the interesting things they talk about is that it is very hard to mathematically model the human elements of the system: we aren't atomic particles that can be described by simple statistical laws.