Tuesday 2 April 2013

Enlightenment Exchange: forecasting the future


I was due to participate in the Edinburgh International Science Festival as part of the Enlightenment Exchange on forecasting the future; however, I am recovering from surgery and so will not make it.  This is an outline of what I planned to talk about.

Needham’s Question asks why the development of technology in western Europe accelerated so much faster than in China after 1600.

The issue that Needham wanted to tackle was that China had the physical and intellectual resources, mathematics, alchemy, astrology and magic, just as Renaissance Europe did, and yet it did not develop science as Europe did.

One key distinguishing feature between the science that emerged in western Europe in the seventeenth century and other scientific cultures was the use of mathematics.  Aristotle, and his heirs, did not think that physics was reducible to mathematics; similarly, the Chinese and Indians did not apply mathematics to solving scientific problems in the way Newton and Clerk Maxwell did.

One answer to why European science adopted mathematics lies in medieval European finance.  Medieval western Europe was unique in having to deal simultaneously with a heterogeneity of currency and prohibitions on usury.  Muslims had the usury prohibitions but a homogeneous currency; the Chinese were lax on usury and generally had homogeneous currencies.

In western Europe, any local lord would mint their own coin as soon as they had the power to do so: Italy had 28 currencies in the High Middle Ages, while even the single authority that ruled the Kingdom of France used three currencies.  Meanwhile, merchants were prevented from charging for the use of money, usury, but they could ask for compensation for bearing risk, interest.  The skill came in treading a fine line between usury and interest under the scrutiny of the Catholic Church.

As a consequence, medieval finance was not simple; on the contrary, having to deal with uncertainty and a complex regulatory system, it was highly sophisticated.  The “modern” financial techniques, such as asset-backed securities, collateralisation - “slicing and dicing” - and credit default swaps, were all developed by Europe’s merchants between Charlemagne’s reign and the discovery of North America.

The solution to the problems that medieval European merchants faced was mathematics, specifically the mathematics Fibonacci described in his 1202 text, the Liber Abaci.  Fibonacci’s mathematics revolutionised European commercial practice.  Prior to the Liber Abaci, merchants would perform a calculation, using an abacus, and then record the result.  The introduction of Hindu-Arabic numerals in the Liber enabled merchants to “show their working” as an algorithm, and these algorithms could be discussed and improved upon.  Essentially, after Fibonacci, mathematics ceased to be simply a technique of calculation and became a rhetorical device, a language of debate.

The Liber, and the abbaco schools that emerged across Europe, separate from the universities, to train merchants, disseminated and developed practical mathematics, and the influence of this training was profound.  The “Merton Calculator” Thomas Bradwardine, who would have been familiar with medieval mercantile practice and would leave Oxford to work for the Treasurer of England before becoming Archbishop of Canterbury, wrote in 1323:
 “[Mathematics] is the revealer of genuine truth, for it knows every hidden secret and bears the key to every subtlety of letters. Whoever, then, has the effrontery to pursue physics while neglecting mathematics should know from the start that he will never make his entry through the portals of wisdom.”
Copernicus, who wrote on money before he wrote on planets; Simon Stevin, the founder of the Dutch Mathematical School that inspired Descartes; and Thomas Gresham, Francis Bacon’s influential uncle, who established the first Chair in Mathematics in England and laid the foundations for the Royal Society, were all trained in the abbaco tradition.

Newton, who spent as much time as Master of the Mint as he did as an academic, stood on the shoulders of merchants.  His development of the calculus began by writing a function as an infinite polynomial, mimicking Stevin’s representation of a number as a decimal fraction.
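To make the analogy concrete, here is a minimal sketch in modern notation (the notation is mine, not Newton’s or Stevin’s): just as Stevin wrote a number as digits set against powers of ten, Newton wrote a function as coefficients set against powers of x, and then worked on it term by term.

```latex
% Stevin: a number as a decimal expansion in powers of 1/10
x = a_0 + \frac{a_1}{10} + \frac{a_2}{10^2} + \frac{a_3}{10^3} + \cdots
% Newton: a function as a power series in x, manipulated term by term,
% here differentiated just as digits are handled place by place
f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots ,
\qquad
f'(x) = c_1 + 2 c_2 x + 3 c_3 x^2 + \cdots
```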

Between 1650 and 1713 extraordinary advances were made in the mathematical theory of chance, in the context of the “fair” pricing of commercial contracts within Christian morality: Faith was associated with statistics, Hope with probability.  The modern conception of probability theory as being based on performing repeatable experiments was presented simultaneously in 1713 by de Moivre and Montmort; it was taken up by Laplace a hundred years later, when he argued that nothing was random, it was just that humans lacked the knowledge to appreciate the links between cause and effect:
“We look upon something as the effect of chance when it exhibits nothing regular, or when we ignore its causes”
To Laplace, and the scientists that came after him, the problem of uncertainty could be resolved by gathering more data; the mathematics of chance, which had developed in the context of ethical finance before calculus was developed in the context of physics, was reduced to the Law of Large Numbers.  Economics developed around deterministic models based on analogies from physics or biology.

The significance of randomness in physics started to emerge in the second half of the nineteenth century in the context of thermodynamics, leading to Einstein’s analysis of Brownian motion, which explained the random behaviour of pollen particles in terms of “invisible” atoms, and then in the 1920s with the Copenhagen Interpretation of Quantum Mechanics, prompting Einstein to state that he did not believe God played dice.  Meanwhile, in biology, R. A. Fisher synthesised Darwinian evolution with Mendelian genetics and in the process revolutionised statistics, the analysis of data.  At the same time, economists such as the American Frank Knight and John Maynard Keynes started placing uncertainty at the heart of economics.

At the outbreak of the war in 1939 the vast majority of soldiers and politicians would not have thought mathematicians had much to offer the military effort; an orthodox attitude amongst the military is still that “war is a human activity that cannot be reduced to mathematical formulae”.  However, operational researchers laid the foundations for Britain's survival in the dark days of 1940-1941.  Code-breakers transformed seemingly random streams of letters into messages that enabled the Allies to keep one step ahead of the Nazis, while Allied scientists ensured that the scarce resources of men and arms were effectively allocated to achieving their military objectives.  By the end of the war General Eisenhower was calling for more scientists to support the military.

During the war many economists, such as Paul Samuelson, had worked alongside mathematicians solving military problems.  In the post-war decades mainstream economics transformed itself from a discursive discipline into a mathematical science.  However, the biggest change in the field came in 1971 with the Nixon Shock and the subsequent collapse of the Bretton Woods system of exchange rates.  Overnight, finance transformed from a broadly deterministic system, of fixed exchange rates, static interest rates and constant commodity prices, into a stochastic system.  Exchange rates fluctuated, interest rates started changing monthly and commodity prices became random processes.

The markets evolved quickly in response to the changed environment.  Financial instruments that had not been a feature of financial economics since 1914 re-appeared, and over the next decades techniques that would have been familiar to Renaissance bankers like the Fuggers, such as securitisation, emerged to cope with the fundamental uncertainty of the markets.

Mathematics enables science without experiments (the Large Hadron Collider was built on the basis of a mathematical theory), and so it is essential in markets that are too dynamic for experimentation.  The question is whether post-Laplacian science, with its implicit rejection of randomness, was up to the task of dealing with fundamentally uncertain markets.

The UK’s Financial Services Authority, in their 2009 review of the Financial Crisis of 2007-2009, identified a “misplaced reliance on sophisticated mathematics” as a root cause of the Crisis.  Last June, the Director of Financial Stability at the Bank of England identified basic flaws in mathematical understanding as a “top 5 but not top 3” cause of the crisis; more significant was the fact that banks were not maintaining adequate records of their loan portfolios.  The financial crises since 2007 have had less to do with fancy finance backed up by even fancier maths, and more to do with banks, and their regulators, not keeping an eye on their core business of lending prudently.  It is worth noting that Fred Goodwin at RBS and Andy Hornby at HBOS both came from outside banking.

But the facts are dull; it is more interesting, and convenient for many, to talk about the slicing and dicing of loan portfolios as the cause of the Crisis.  In the aftermath of the Crisis, the technology magazine Wired identified the “Formula that Killed Wall Street”, the Gaussian Copula.  The formula is a function that captures the distribution of loan defaults in an abstract portfolio of infinitely many infinitesimally small loans based on a single parameter, rho, representing the dependence between the loans in the portfolio failing.  If rho=0 the loans are completely independent, one default would not affect any other loan; if rho=1.0 they are completely dependent, one default would lead all the loans to default.  This formula was applied to Mortgage Backed Securities across investment banks with rho set at 0.3.  This choice of rho was based on historical data on the defaults of corporate bonds, and it resulted in low chances of significant portfolio defaults.  (For a full analysis see MacKenzie, and MacKenzie and Spears.)
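For the interested reader, here is the limiting form of that distribution, in notation I have added for illustration: L is the fraction of the portfolio that defaults, p the default probability of an individual loan, and Φ the standard normal distribution function.

```latex
% Large homogeneous portfolio loss distribution implied by the
% one-factor Gaussian copula (Vasicek's limiting formula)
\Pr(L \le x) \;=\; \Phi\!\left( \frac{\sqrt{1-\rho}\,\Phi^{-1}(x) - \Phi^{-1}(p)}{\sqrt{\rho}} \right),
\qquad 0 < \rho < 1 .
```

As rho approaches 0 the losses concentrate around p, the Law of Large Numbers at work; as rho approaches 1 the portfolio either suffers no defaults at all or defaults in its entirety.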

What is most significant about the Gaussian Copula is that it was popularised in investment banking by J.P. Morgan, a very old-fashioned bank, which relied on the moral character of its employees and employed a lot of good mathematicians.  J.P. Morgan did not create, distribute or invest in many Mortgage Backed Securities.  The reason is described in Gillian Tett’s book on the crisis, Fool’s Gold.  The mathematicians at Morgan’s reverse-engineered the prices of traded MBS and deduced that they were generally based on rho=0.3.  The bankers then asked themselves whether this was reasonable, and decided that, since corporations were different to mortgage borrowers, it was not: rho should be higher, indicating a greater probability that if one homeowner defaulted, another would.  On this basis, with rho=0.5, there was no chance of making a profit on MBS.  J.P. Morgan were right; the other banks, who relied on rho=0.3, were wrong.
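To give a feel for what was at stake in that judgement, the sketch below (my own illustration using the limiting formula above, with made-up figures for the individual default rate and the loss threshold, not any bank’s actual model) shows how the probability of a large portfolio loss grows as rho is moved from 0.3 to 0.5.

```python
# Sketch: how the correlation assumption rho drives the tail of the
# Vasicek / one-factor Gaussian copula loss distribution for a large
# homogeneous loan portfolio.  Figures are illustrative only.
from scipy.stats import norm

def prob_loss_exceeds(x, p, rho):
    """P(portfolio loss fraction > x), each loan defaulting with probability p,
    dependence between loans governed by rho (large-portfolio limit)."""
    z = ((1 - rho) ** 0.5 * norm.ppf(x) - norm.ppf(p)) / rho ** 0.5
    return 1 - norm.cdf(z)

p, x = 0.05, 0.20  # assumed 5% individual default rate, 20% portfolio loss threshold
for rho in (0.3, 0.5):
    print(f"rho={rho}: P(loss > {x:.0%}) = {prob_loss_exceeds(x, p, rho):.3f}")
```

With these illustrative figures the chance of losing more than a fifth of the portfolio rises by more than half when rho is raised from 0.3 to 0.5.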

Were the failed banks bad scientists as well as bad bankers?  While the banks that chose rho=0.3 were wrong, their behaviour was not too different from that encountered in more traditional branches of science and engineering.  Official assessments of the probability of a serious failure of a nuclear power plant do not match the empirical evidence that there have been five major failures of commercial, not just experimental, nuclear power plants, most recently at Fukushima.  Similarly, in the 1980s, the official NASA assessment of the probability of a Shuttle failure was 1 in 10,000, while engineers on the shopfloor reckoned it to be 1 in 200.  In fact, reading Richard Feynman’s appendix to the Rogers Commission report on the Challenger failure is interesting in the context of the banking failures.

A common feature of the under-reporting of the chances of failure in nuclear engineering, NASA and banking has been a belief in “perfect” systems, such that data can be extrapolated from the known into the unknown.  Today, as “big data” becomes the current buzz-word, with Wired declaring in 2008 the “end of theory”, the power of data enables the scientific method, which searches for coherence and causality, to be replaced by the identification of correlations.

To understand why this is relevant, observe that the Financial Crisis of 2007-2009 was not a global event, whatever Gordon Brown says.  While British and American banks failed, German, and particularly French, banks did not.  At the time there was anger in the Anglo-Saxon banking community, who knew their Continental colleagues were involved in exactly the same practices but did not report the staggering losses.  When a prominent mathematician at a major French bank was asked about this, his response was candid and logical: “We thought the pricing models were weak before the crisis, we knew they were wrong during the crisis.  Why ruin the bank by reporting numbers we knew were wrong?”.  In 2009, The Economist, a tub-thumper for Anglo-Saxon culture, reported with glee on the problems at Société Générale caused by the fraud of Jérôme Kerviel.  They observed that the problem with French risk managers is that their attitude is “That is all very well in practice, but what about the theory?”.  It was precisely this belief in placing data in the context of a theory that prevented the failure of French banks, while the more empirical, pragmatic Anglo-Saxon approach led to disaster.

The malevolent impact of “social physics”, aggregating a data set into a representative point on which decisions are based, ignoring the distribution, is particularly resonant for me at the moment.  Since January I had been suffering from gastric problems.  Based on my symptoms, the initial assessment was that I had gallstones, a diagnosis that was confirmed by ultrasound.  However, this evidence, when presented to a consultant, was contradicted by the brute fact that it is unusual for men in my condition to be troubled by gallstones.  Within 10 days of this assessment I was admitted to hospital as an emergency, with a gallstone blocking my bile duct, and have had two operations in the past two weeks.  I proved not to be the “average man”.

I suspect that “on average” the consultant made the right, money-saving, decision.  However, in making this decision there was a risk of a greater loss: I had two complicated procedures instead of one and spent five nights in hospital instead of none.

Nassim Taleb might observe that my consultant was Fooled by Randomness, or had experienced a “rare event”, what Taleb calls a “Black Swan Event”.  But Taleb’s definition of a Black Swan event is novel; the concept itself is much older.

The concept of the Black Swan emerged in the late thirteenth and early fourteenth centuries, at a time when scholars were introducing mathematics into physics.  It was part of a vigorous debate between Realists, in the Platonic sense, such as Thomas Aquinas, who believed that by observing nature its rules and mechanisms could be gleaned, and Nominalists, who rejected the idea that there was a hidden Reality of nature: the rules and mechanisms were constructed by the men observing them.  The Nominalists included Franciscans such as John Duns Scotus and William of Ockham, who rejected a mechanistic universe and believed that if God wished to create a Black Swan, He could.  The fact that only white swans had been observed did not preclude the possibility that a Black Swan could exist.  Of course, the Nominalists were proved right when Black Swans were discovered in Australia.

A good financier is a Nominalist: they recognise that their models are man-made constructions that are only a simple approximation of the nature of markets.  On this basis a financier does not attempt to predict the future, and is circumspect about their ability to “beat the market”.  A good financier is often distinguished by an obsession with handling the risks they might encounter rather than exploiting the opportunities that might arise.  We cannot predict earthquakes, but we can plan for them.

There is nothing new in suggesting that applied science should broaden its outlook to consider distributions rather than focus on expectations.  However, I think finance tells us something more: we need to be sceptical about the very models that produce our distributions.

For example, the debate about climate change is not a dispute between good and evil, or between idealists and empiricists, but rather a disagreement between two approaches to modelling.  It is not the data that is disputed but the modelling framework used to interpret the data.  I feel the reason the Climate Debate is so intractable is that popular science, as distinct from mainstream science as practised by the majority of scholars, cannot accommodate the idea that there can be competing theories, and this is a consequence of the pervasive belief that science has a Real basis, rather than being a social construction.  A Realist will always believe that, given enough effort, the secrets of the universe can be uncovered and the future predicted and controlled.  A Nominalist is far more humble.

So none of this is new, but in the context of the Enlightenment Exchange, it is worth noting that some people argue that Quetelet, with his social physics based on the average man, put an end to the Enlightenment's focus on the rational man.