Monday, 29 October 2012

The Problem with the Foresight Report on Computer Trading

It turns out that Paul Wilmott shares my views on the BIS Foresight report The Future of Computer Trading in the Financial Markets.  I have made the point that the Report runs the risk of appearing as what Roger Pielke Jr describes as "stealth advocacy": scientific research that presents itself as neutral technical analysis while in fact advocating a specific course of action.

My main issue with the Report is captured in its observation that (p 140, s 8.2)
In financial markets, the ideal is to discover the ‘fundamental’ price of assets
This statement is economically controversial; I do not think Keynes would have agreed with it.  Intuitively, if you have ever bought a house, did that house have a 'fundamental' price? Rather than develop these more philosophical points, I shall focus on the mathematical objections to the statement after highlighting how this assumption drives the Report's overall conclusions.

If there exists a fundamental price, then the markets are attempting to solve an epistemological problem, seeking to identify the true price in a mass of noisy data.  This means that financial stability is defined as (p 19, n1)
the lack of extreme movements in asset prices over short time periods
and is closely related to volatility (p 19, n2)
variability of an asset’s price over time
Volatility is taken to be a bad thing, so the Report looks for evidence of HFT increasing volatility (it finds none, though the FT cites research suggesting there is a link between increased volatility and HFT).
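To make the Report's working definition of volatility concrete, here is a minimal sketch of annualised realised volatility computed from daily closing prices; the price series is invented purely for illustration:

```python
import math

def realised_volatility(prices, periods_per_year=252):
    """Annualised standard deviation of log returns, a standard proxy
    for 'variability of an asset's price over time'."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    # Sample variance of the returns, scaled up to an annual figure
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(variance * periods_per_year)

# Invented daily closes, for illustration only
vol = realised_volatility([100.0, 101.2, 99.8, 100.5, 102.1, 101.7])
```

Note that this captures the Report's sense of volatility as dispersion over time; it says nothing about the "extreme movements over short time periods" that define instability, which is precisely the gap discussed below.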

The definition does mean, however, that the pathological behaviour described in Figs 4.2-4.6 of the Report is not market instability: these are localised, transient effects that are quickly corrected.  The analogy is that it is acceptable for an aeroplane to go out of control, providing it does not crash; I do not think the aeronautical industry would allow itself to be run on this basis.

The problem is with the Report's treatment of High Frequency Trading.  The Report recognises that this aspect is controversial, particularly with pension managers and conventional fund managers.  Because it takes an approach on which HFT will inevitably appear beneficial (it increases the information flows that help resolve the epistemological problem), the concerns of those not involved in HFT, such as the pension funds, are effectively nullified.

My concern is that, on the epistemological approach to markets, it is inevitable that HFT appears beneficial.  Similarly, if you model credit default as some form of contagious agent, dynamical systems analysis will point to a solution involving a few large, well-defended financial institutions (this is how you deal with mad cow disease); whereas if you model the banking system as a communications network, with loans as 'packets' moving around the network, then a distributed system like the internet, involving many small institutions with many connections, is best.  The result, by and large, depends on how you approach the problem.  The Bank of England seems to be moving to the internet model of banking rather than the contagion model.

The mathematical objection to the approach the Report takes is rooted in the Fundamental Theorem of Asset Pricing, which indicates that a 'true' price of an asset only exists in an idealised situation.  The second statement of the Theorem is explicit: in actual markets a unique, 'fundamental' price is impossible to identify; there is a financial analogue of the Heisenberg uncertainty principle at work.  The problem of price is ontological, not epistemological.  I suspect this is one way of summarising the points Wilmott made to the enquiry, because it is such a dominant theme in contemporary mathematical finance.
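To be explicit about what the Theorem says (my notation; the standard textbook formulation):

```latex
\textbf{FTAP, first statement.} A market admits no arbitrage if and only if
there exists a probability measure $\mathbb{Q}$, equivalent to the real-world
measure $\mathbb{P}$, under which discounted asset prices are martingales:
\[
  \frac{S_t}{B_t} \;=\;
  \mathbb{E}^{\mathbb{Q}}\!\left[\left.\frac{S_T}{B_T}\,\right|\,\mathcal{F}_t\right],
  \qquad t \le T.
\]
\textbf{FTAP, second statement.} This measure $\mathbb{Q}$ is unique if and
only if the market is complete. Real markets are incomplete, so many
equivalent measures, and hence many internally consistent prices, coexist:
there is no unique `fundamental' price to discover.
```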

The line the Report takes is explainable in terms of the background of the bulk of the contributors to the evidence base: they are in the main from dynamical systems (rooted in ergodic systems) and computer science (data and rules to manipulate data).  Only a few of the contributors to the evidence base would, as far as I recognise, be familiar with the FTAP (Cont, Schied, Avellaneda, Mitra, Jaimungal, Cvitanic, out of over 200, two of whom declare a commercial interest; there were 11 at the Complex Systems workshop alone).

However, in taking a very particular approach the Report narrows the scope of discussion and leads its reader to a conclusion that is heavily dependent on the assumption that markets solve an epistemological problem.  The bias towards market insiders on the High Level Stakeholder Group (there is no representation from pension funds, but ISDA, a lobby group for derivatives traders that gave us the infamous Potts opinion, is there) lays the Report open to the accusation that it is stealth advocacy.  Given that the public has been angered by the apparent privatisation of profits and socialisation of losses by banks, appearing to side with proprietary traders over pension fund managers seems a very short-sighted approach to take.
I think these observations are compounded by the fact that the Report could have been clearer in distinguishing the various impacts computers are having on markets, rather than focusing on HFT within the context described above.  The Report's Executive Summary opens with
A key message: despite commonly held negative perceptions, the available evidence indicates that high  frequency trading (HFT) and algorithmic trading (AT) may have several beneficial effects on markets.  However, HFT/AT may cause instabilities in financial markets in specific circumstances. This Project has shown that carefully chosen regulatory measures can help to address concerns in the shorter term. 
This distinction is lost when the benefits of Computer Based Trading (the term covering both) are discussed: improved liquidity, reduced transaction costs and improved market efficiency.

The risks are observed collapses in liquidity, and instability.  In discussing instability, the Report observes that HFT does not appear to increase volatility, but that there are issues about stability, which it describes in terms of non-linearities, incomplete information, and what it calls 'normalisation of deviance' but sociologists describe as 'counter-performativity' (markets follow a model and then discover the model is wrong).

HFT is highlighted in regard to market abuse, for which the Report argues there is no evidence.  However, the distinction between AT, usually conducted for agency trading, and HFT, usually conducted by proprietary traders, and their respective contributions to the risks, is not really developed.  Can we explore this issue?

The report defines 'liquidity' (p 19, n3) as 

the ability to buy or sell an asset without greatly affecting its price. The more liquid the market, the smaller the price impact of sales or purchases
AT has made a significant contribution to this, particularly through the work developing out of Almgren and Chriss's pioneering research on optimally executing large trades.  The Report's definition of liquidity is somewhat biased: a different definition associates liquidity with the 'depth' of the market, the ability actually to buy or sell an asset in the market.  HFT will not increase depth, but it may present a mirage of improved liquidity as assets are churned by proprietary traders; this is associated with the Flash Crash (p 56).
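To give a flavour of that research: in the continuous-time Almgren-Chriss model, the variance-penalised optimal liquidation trajectory is x(t) = X sinh(κ(T−t)) / sinh(κT), where κ balances risk aversion against market impact.  A minimal sketch; all parameter values below are invented for illustration, not calibrated to any market:

```python
import math

def almgren_chriss_schedule(X, T, n, sigma, eta, lam):
    """Holdings x(t) = X * sinh(kappa*(T-t)) / sinh(kappa*T) at n+1
    equally spaced times, per the continuous-time Almgren-Chriss model.
    sigma: volatility; eta: temporary impact; lam: risk aversion."""
    kappa = math.sqrt(lam * sigma ** 2 / eta)  # 'urgency' of the liquidation
    times = [T * j / n for j in range(n + 1)]
    return [X * math.sinh(kappa * (T - t)) / math.sinh(kappa * T)
            for t in times]

# Liquidate 1,000,000 shares over one day in 10 slices (inputs invented)
holdings = almgren_chriss_schedule(X=1_000_000, T=1.0, n=10,
                                   sigma=0.3, eta=1e-6, lam=1e-6)
```

The higher the risk aversion relative to impact costs, the larger κ and the more front-loaded the schedule; note that this is exactly the kind of assumption-laden, static-environment algorithm discussed later in these posts.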

Liquidity is related to what the Report describes as (p 19, n4)

price discovery ... the market process whereby new information is impounded into asset prices.
Clearly, if trades are executed quickly, price discovery improves.  However, there are issues with mechanistic approaches to news processing (witness Derwent Capital Markets' closure).

The Report notes that there has been a reduction in transaction costs since the introduction of CBT, but this cannot be ascribed to HFT, and the FT cites 2012 research suggesting that the reduction in costs occurred before HFT emerged.

Overall I think there is a good argument that the Report is far from being "pure science" and lacks the balance of "honest brokerage".

Thursday, 25 October 2012

The Future of Computer Based Trading

The UK Government has published its Foresight Report on the Future of Computer Trading in the Financial Markets.  "Foresight" is a bit of a misnomer here, given that Computer Based Trading (CBT) has already had a profound effect on markets across the globe.

What first struck me about this Foresight report is that it presents particular solutions that have undergone a cost-benefit analysis. BIS Foresight reports typically highlight problems, and their recommendations can usually be summed up as "the issue is complex, policy makers need to initiate inter-disciplinary research".  Also, the Project Team was large and the Acknowledgements list long.

I interpret this as reflecting the fact that people do not really know what is going on.  Recent reports from BIS (covering Migration, Climate Change, Food and Farming) all relate to large, well-established academic fields.  At the time the Foresight report was initiated I do not think there was a well-established research centre in CBT in the UK, though UCL and Essex are now building them.

The report does recognise this, with John Beddington's summary noting that

there is a relative paucity of evidence and analysis to inform new  regulations, not least because of the time lag between rapid technological  developments and research into their effects. This latter point is of particular concern, since good regulation clearly needs to be founded on good evidence and sound analysis.
This is developed on p 11:
Markets are already ‘socio-technical’ systems, combining human and robot participants. Understanding and managing these systems to prevent undesirable behaviour in both humans and robots will be key to ensuring effective regulation: While this Report demonstrates that there has been some progress in developing a better understanding of markets as socio-technical systems, greater effort is needed in the longer term. This would involve an integrated approach combining social sciences, economics, finance and computer science. As such, it has significant implications for future research priorities.
My principal concern is that this important point, directly relevant to BIS, will be lost amid the specific recommendations in the Report.  BIS might be excused for offering solutions given that the problems are current; however, I doubt that BIS has the ability to address the issues, either in terms of skills or authority.

The Report is useful in that it does categorise different types of CBT.  I should declare an interest: my mathematical research is in optimal control, and this is relevant to the optimal execution of large trades, which is described as 'agency', as opposed to proprietary, trading.  This is a very active area of mathematical research, though I am not convinced there are robust practical solutions.  The key issue is that current (optimal stochastic control) algorithms make assumptions about the trading environment, the implicit one being that the environment is static (ergodic, if you prefer).

However, I feel the Report could be clearer about distinguishing agency and proprietary trading.  This distinction was traditionally a part of the London markets that disappeared with the Big Bang in the 1980s.  In the seventeenth century John Houghton described stock-brokering thus:
 the Monied Man goes amongst the Brokers (which are chiefly upon the Exchange [Alley], and at Jonathan's Coffee House [the origins of the London Stock Exchange], sometimes at Garraway's and at some other Coffee Houses) and asks how Stocks go? and upon Information, bids the Broker to buy  or sell so many Shares of such and such Stocks if he can, at such and such Prizes
Brokering has always been seen as reputable, unlike stock-jobbing or dealing, which was described by  Daniel Defoe in 1719 (the year he published Robinson Crusoe) in an article  The Anatomy of Exchange Alley  as
a trade founded in fraud, born of deceit, and nourished by trick, cheat, wheedle, forgeries, falsehoods, and all sorts of delusions; coining false news, this way good, this way bad; whispering imaginary terrors, frights hopes, expectations, and then preying upon the weakness of those whose imaginations they have wrought upon

Thomas Mortimer described in 1761 the people involved in stock-jobbing; there are three types of stock-jobber: firstly foreigners; secondly gentry, merchants and tradesmen; and finally, and "by far the greatest number", people
 with very little, and often, no property at all in the funds, who job in them on credit, and transact more business in several government securities in one hour, without having a shilling of property in any of them, than the real proprietors of thousand transact in several years
Modern criticism of high frequency trading is a continuation of a long tradition of distinguishing the activities of Monied Men investing in the market from those of Traders speculating within the market.  As such it can get entangled in a web of social judgements, such as that it is OK for the rich to gamble but not the poor.  To make decisions about HFT therefore requires some understanding of this mess.

I regularly make the point that finance is not just an important social utility, but a melting pot for scientific innovation.  One lesson that science, as a whole, can learn from contemporary finance concerns the problem of "calculability".  Nineteenth-century science relied on this principle; twentieth-century science, to a large degree, accepted problems of chaos and complexity but still relies on determinism, which is often missing in the social sciences. The assumption of "calculability", that induction is possible, is at the root of many debates about science: the climate debate, for example, is not about the raw data but about the models that process that data.  This is a broad issue and relates to the UK science minister's recent comments on how "red tape" gets in the way of progress. The Minister argues that in Europe there is
an approach to regulation so far removed from any rational appraisal of risk that it threatens to exclude Europe from many of the key technologies of the 21st century
The Science Minister cites examples from the physical sciences; I wonder if he is so confident in the value of invention in finance?  I often worry about terms like "rational appraisal of risk" because I feel that all too often it is based on assumptions of stable chances.

The Report echoes the views of Willetts when it opens with the ambiguous statement

A key message: despite commonly held negative perceptions, the available evidence indicates that high frequency trading (HFT) and algorithmic trading (AT) may have several beneficial effects on markets. However, HFT/AT may cause instabilities in financial markets in specific circumstances. (my bold)

The sense appears to be "we don't really know, but hey-ho, let's see what happens".  This makes me very nervous.

The main issue is, predictably, related to HFT.  HFT is "good" in that it enables price discovery.  It is "bad" in that it might destroy liquidity in times of stress.  So: generally good, occasionally really bad.  However, this is based on an assumption tied up with the whole "efficient markets" paradigm:

pricing is efficient when an asset’s price reflects the true underlying value of an asset; (Note 4 p 19)
This is a Platonic argument implying a problem of epistemology: mere humans have difficulty in perceiving the Real price, but by having HFT we get closer to Reality.  If you reject this, and take the ontological perspective that a "true" price may not exist, the justification of "generally good" falls away.  My work on Ethics and Finance: The Role of Mathematics is concerned with this issue.

Finance is critical to society, as we have all found out recently.  We should be debating finance with the vigour and passion that we debate climate change, GMOs, nano-technology, energy and so on.  Without this debate we cannot establish the values upon which the evidence base, which informs the regulations, is built.  I feel that the Report misses making this point clearly, and so cannot really address the issue in hand.

Andy Haldane, the man at the Bank of England responsible for financial stability, was initially involved in the Foresight project but withdrew.  Haldane has recently discussed the issue of complexity in financial regulation.  He appears to reject the premise that finance is susceptible to calculation and rules, and advocates judgement, a virtue ethics argument.  I wonder if we should trust the Bank of England more than the Department of Business, Innovation and Skills on the issue of computer trading.

Wednesday, 24 October 2012

Ethics and Finance: The Role of Mathematics

Between 2006 and 2011 I was the "RCUK Academic Fellow" in Financial Mathematics.  In this role I was obliged to discuss my field with the public and policy makers.  Usually this "public engagement" aspect of the Fellowship was ignored, but the role of mathematics in finance became one aspect of the Financial Crisis, the dominant theme of public discussion since the summer of 2007, and I was catapulted out of my comfort zone in mathematics.

The issue I really needed to address in discussion with outsiders was whether the involvement of mathematicians in finance was moral.  Mathematicians do not like addressing issues of morality, and the standard response in the UK is to resort to G.H. Hardy's "claim" that a mathematician does nothing useful: mathematics is so irrelevant to society that it is morally neutral.  I don't think this is (morally) acceptable for a whole load of reasons and so I had to address the substantive issue.

Since 2008 a substantial portion of my time has been spent on this problem, to the detriment of my mathematical research.  However, it has had some impact.  In 2011 the RCUK (the UK equivalent of the US National Science Foundation) recognised the result of this work as one of "One Hundred Big Ideas for the Future".

The "big" idea is that ethics have been central to finance, and out of the ethical examination of finance, mathematical probability emerged.  This is a heterodox, but not unique, interpretation of the history of science.  The significance of this is that modern mathematical approaches to derivative pricing implicitly involve the principle of reciprocity, which is anterior to the concept of justice and fairness and at odds with the orthodox objective of utility maximisation.   This led me to address the issue as to why, today, we do not associate finance with ethics, and I came to the conclusion that during the first half of the nineteenth century, the Romantic period during which contemporary science was laid down,  there were paradigm shifts in society comparable to those in the late seventeenth century.  Specifically, in response to perceived scarcity, ideals of reciprocity were replaced by concepts of individuality and competition.  By the 1950s the ideal of "my word is my bond" had been lost (see Buttle v Saunders).

I have drafted my argument in a paper available on SSRN (or ArXiv).  The paper discusses the Fundamental Theorem of Asset Pricing within the context of measure theoretic probability, as opposed to Knightian or Keynesian conceptions of probability.  It then discusses the Scholastic concept of the Just Price in the context of the genesis of probability.  The importance of finance in the mathematisation of Western science is discussed as a prelude to the early development of mathematical probability in the context of the ethical evaluation of commercial contracts.  This part finishes with an explanation of why the Fundamental Theorem appears equivalent to seventeenth century approaches to pricing assets.

The second part of the paper presents a narrative of why this equivalence is no longer obvious, arguing that neoclassical economics was developed to address scarcity, whereas the Fundamental Theorem addresses uncertainty.  The paper finishes with some implications and considers the legitimacy of gambling within the context of Virtue Ethics.

I am interested to hear any comments on this argument.

The implications of the argument, I believe, are very positive for economics.  First, economics and the social sciences are anterior, or senior, to the physical sciences: historically, the solution of economic problems precedes the solution of physical problems. An aspect of the contemporary situation is that economics relies on technology developed in the physical sciences, and is therefore inadequate; recognising the seniority of the social sciences can correct this.  Second, there is the implication that developing a better understanding of finance can have a positive impact on science as a whole.  A significant issue in scientific debate is that data is rarely disputed; at the root of debates on, say, climate change or GMOs is disagreement about models.  This is at the heart of modern finance (as discussed in the paper).  Understanding how finance uses mathematics as a "rhetorical tool" can help everyone.

I am wary of the sectarian divisions in economics, but I feel my thesis leans heavily towards certain aspects of Post-Keynesian thought, specifically relating to the non-neutrality of money and having to deal with ontological uncertainty.  More broadly, I think the paper is a clear argument against the fact/value dichotomy in economics and is aligned with McCloskey's project promoting virtue ethics within economics.

I hope the thesis sits within Pielke's "honest broker" classification.  The thesis is an apologia for the use of mathematics in finance, which includes an argument legitimising speculation, suggesting advocacy.  However, it also provides a framework, in the context of virtue ethics, for when speculation is legitimate.  I believe it contributes to being an "honest broker" in that it opens the debate and recognises that science is implicitly based on values: if we are to get finance right we need to examine the normative basis on which we work.

Monday, 8 October 2012

Individuality and reciprocity

Philip Pilkington has written a piece about the problem with individualism and myths, a topic I have also written on.  While I agree with the bulk of what he has to say, and I offer some of my own comments in what follows (these are taken from a paper in review; a copy is available on request), I feel he misses the true culprits in criticising Adam Smith.

Central to Pilkington's argument is this text from Smith

Society may subsist among different men, as among different merchants, from a sense of its utility, without any mutual love or affection; and though no man in it should owe any obligation, or be bound in gratitude to any other, it may still be upheld by a mercenary exchange of good offices according to an agreed valuation.

I am not convinced the problem originates in Smith, who was, I believe, embedded within Aristotelian ideas of reciprocity and justice.  Immediately before the passage Pilkington quotes, Smith writes:
All the members of human society stand in need of each others assistance, and are likewise exposed to mutual injuries. Where the necessary assistance is reciprocally afforded from love, from gratitude, from friendship, and esteem, the society flourishes and is happy. All the different members of it are bound together by the agreeable bands of love and affection, and are, as it were, drawn to one common centre of mutual good offices.
Society MAY subsist without the basis of reciprocity, but it's not going to be the best type of society.  I believe Pilkington makes this point, but still seems to hold Smith responsible for us ending up as we have.

I believe the issue arises out of Romanticism, which placed greater emphasis on individuality and created the environment for both neo-classical economics and Marxism.  The issue that pre-1920s economics, Marxism and Graeber share is that they ignore uncertainty.  If the world is uncertain, I argue that we need reciprocity: love, gratitude, friendship; while if there is a 'certain' scarcity, individualism triumphs. I suggest that when faced with scarcity, society responds by fragmenting into elements that compete for scarce resources; alternatively, when society is challenged by uncertainty it turns to communality, seeking to diversify risks. On this basis, the rise of expected utility maximisation, as a way of dealing with scarcity in a Romantic context of the individual genius struggling against nature within a framework of stable chances, can be explained. I contend that since the ‘Nixon shock’ society has been more focused on uncertainty than scarcity, and is struggling to shift the economic paradigm in response to this change in the economic environment.

Arjun Appadurai captures this in arguing that the leading agents in modern finance

believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control …[they] are not those who wish to “tame chance” but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”. [Appadurai2011, p 533-534]
These observations conform to the definition of a ‘speculator’ offered by Reuven and Gabrielle Brenner: a speculator makes a bet on a mis-pricing. They point out that this definition explains why speculators are regarded as socially questionable: they have opinions that are explicitly at odds with the consensus ([Brenner and Brenner1990, p 91], see also [Beunza and Stark2012, p 394]). In comparison, gamblers will bet on an outcome: an ace will be drawn; such-and-such a horse will win against this field in these conditions; or Company Z will outperform Company Y over the next year. Investors do not speculate or gamble, they defer income, ‘saving for a rainy day’, wishing to minimise uncertainty at the cost of potential profits.

Speculation in modern finance, as Daniel Beunza and David Stark observe, is about understanding the relationship between different assets: “to be opportunistic you must be principled, i.e. you must commit to an evaluative metric” [Beunza and Stark2004, p 372]. This explains why modern finance relies on mathematics: “Mathematicians do not study objects, but the relations between objects” [Poincaré1902 (2001), p 22].

The speculator has been a feature of the modern markets ever since they were established in the seventeenth century. In 1719 Daniel Defoe described stock-jobbing in The Anatomy of Exchange Alley as

a trade founded in fraud, born of deceit, and nourished by trick, cheat, wheedle, forgeries, falsehoods, and all sorts of delusions; coining false news, this way good, this way bad; whispering imaginary terrors, frights hopes, expectations, and then preying upon the weakness of those whose imaginations they have wrought upon [Poitras2000, p 290 quoting Defoe]
The process described is speculation because the stock-jobbers are not betting on outcomes; rather, they are attempting to change the expectations of others, creating a mis-pricing that they can then exploit.

The philosophical antecedents of the modern hedge fund manager, betting against determinism, could lie even further back, in medieval Franciscans such as Pierre Jean Olivi and John Duns Scotus. While the Dominican empirical rationalist Aquinas argued that knowledge rested on reason and revelation, Scotus argued that reason could not always be relied upon: there was no true knowledge of anything apart from theology founded on faith. While Aquinas argued that God could be understood by rational examination of nature, Scotus believed this placed unjustifiable restrictions on God, who could interfere with nature at will: God, and nature, could be capricious [Luscombe1997, p 127].

Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (‘The Gift’), exploring the moral force behind reciprocity in primitive and archaic societies. Appadurai notes that the speculator, as well as the gambler and investor, is “betting on the obligation of return” [Appadurai2011, p 535]. The role of this balanced reciprocity in finance can be seen as an axiom, in that it lays the foundation for subsequent analysis; it can also be seen as a simplifying assumption: if the future is uncertain, what mechanism ensures that agreements will be honoured? David Graeber also recognises the fundamental position reciprocity has in finance [Graeber2011], but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores the problem of in-determinism in an analysis that ends with the conclusion that “we don’t ‘all’ have to pay our debts” [Graeber2011, p 391].

Aristotle’s Nicomachean Ethics is concerned with how an individual can live as part of a community, and he saw reciprocity in exchange as important in binding society together ([Kaye1998, p 51], [Aristotle1925, V.5.1132b31-34]). According to Joel Kaye, this means that Aristotle took a very different view of the purpose of economic exchange: it is performed to correct for inequalities in endowment and to establish a social equilibrium, not to generate a profit. This view is at odds with that taken by mainstream modern economists, or even anthropologists such as Graeber, who seem committed to a nineteenth-century ideal of determinism and so can ignore the centrality of reciprocity in markets. I argue that in the presence of uncertainty, society needs reciprocity in order to function.

Reciprocity and fairness are linked, and the importance of fairness in human societies is demonstrated in the so-called ‘Ultimatum Game’, an important anomaly for neo-classical economics [Thaler1988]. The game involves two participants and a sum of money. The first player proposes how to share the money with the second participant. The division is made only if the second participant accepts the split; if the first player’s proposal is rejected, neither participant receives anything. The key result is that if the money is not split ‘fairly’ (approximately equally) then the second player rejects the offer. This contradicts the assumption that people are rational utility-maximising agents, since if they were, the second player would accept any positive payment. Research has shown that chimpanzees are rational maximisers, while the willingness of the second player to accept an offer depends on age and culture: older people from societies where exchange plays a significant role are more likely to demand a fairer split of the pot than young children or adults from isolated communities ([Murnighan and Saxon1998], [Henrich et al.2006], [Jensen et al.2007]). Fair exchange appears to be learnt behaviour, developed in a social context, and is fundamental to human society.
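The anomaly is mechanical enough to state in a few lines of code. A sketch of a single round; the fairness threshold of 30 is illustrative, not a figure from the cited studies:

```python
def ultimatum_round(offer, threshold):
    """One round of the Ultimatum Game over a pot of 100 (pence, say).
    The proposer offers `offer` to the responder and keeps the rest; the
    responder accepts only if the offer meets their fairness threshold,
    otherwise the deal collapses and both get nothing."""
    if offer >= threshold:
        return (100 - offer, offer)   # split accepted
    return (0, 0)                     # split rejected

# A 'rational' utility-maximising responder (threshold 1) accepts any
# positive offer, as neo-classical theory predicts...
assert ultimatum_round(1, threshold=1) == (99, 1)
# ...whereas the experimentally observed responder rejects unfair splits,
# forgoing a positive payoff to punish the proposer.
assert ultimatum_round(1, threshold=30) == (0, 0)
assert ultimatum_round(50, threshold=30) == (50, 50)
```

The whole anomaly sits in that middle assertion: the responder with a fairness threshold walks away from free money, which no utility-maximising agent should do.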

The relevance of the Ultimatum Game to the argument presented here is that it separates the Classical themes of fairness and reciprocity from the Romantic theme of Darwinian ‘survival of the fittest’. In An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776, Adam Smith, working in the Classical framework, argues that humans are distinctive from other animals in the degree to which they are co-operative:

Nobody ever saw a dog make a fair and deliberate exchange of one bone for another with another dog. [Smith1776 (2012), Book 1, Chapter 2]
Humans, on the other hand, exhibit

the propensity to truck, barter, and exchange one thing for another. [Smith1776 (2012), Book 1, Chapter 2]
Markets are not simply a technical tool to facilitate life; they capture a key distinction between humans and other animals. This observation is very different from Darwin’s; he wrote in The Descent of Man in 1871,

My object in this chapter is to shew that there is no fundamental difference between man and the higher mammals in their mental faculties. [Darwin1871, p 36]
Darwin would go on to note that
the weak members of civilised societies propagate their kind. No one …will doubt that this must be highly injurious to the race of man [Darwin1871, p 168]
and to argue that society should control this process within the ethic of Consequentialism. Darwin did acknowledge that ‘survival of the fittest’ did not explain the ‘nobler’ aspects of civilisation [Darwin1871, p 161–167], but his arguments are integral to the ‘social Darwinism’ of Spencer and Galton, which explained how the ‘best’ became the ruling class. Darwinian metaphors are still a powerful feature of the paradigm centred on neo-classical economics and Consequentialist Ethics, and the Ultimatum Game, with its concept of fairness, is an anomaly for the whole of this paradigm.

I conjecture that balanced reciprocity should be an axiom of exchange, possibly in preference to the axiom that the aggregate demand price is equal to the aggregate supply price [Keynes1936, Ch 2, VII], or that a price is the cost of labour plus a risk premium (e.g. Duns Scotus [Kaye1998, p 140], Smith, Marx). Reciprocity is anterior to the concept of justice; in particular, the argument that justice and fairness are essential components of commerce follows Aristotle’s arguments for how people, in a state based on egalitarianism, can live together in an urban society.

Before the nineteenth century, scholarship would be conducted in the context of medieval Virtue Ethics: the four ‘Pagan’ or ‘Cardinal’ virtues, Courage (Fortitudo), Justice (Iustitia), Temperance (Temperantia) and Prudence (Prudentia), which originated in Nicomachean Ethics, and the three ‘Christian’ virtues, Hope (Spes), Faith (Fides) and Charity (Caritas). Medieval scholars approached morality using the same framework that they used to study physics or medicine: by blending the elements, or humours, in the right manner. For example, Charity and Faith yield loyalty, Temperance and Courage give humility, and Justice, Courage and Faith result in honesty [McCloskey2007, p 361]. An ethical life was one that exhibited all, not just some, of the virtues, and a merchant, by demonstrating them, could be seen as being as virtuous as a prince or a priest. This was not just a Latin Christian view: the first-century Mahayana Buddhist Vimalakirti Sutra tells the story of how a virtuous merchant teaches kings and monks.

Following the analysis of Nicomachean Ethics, medieval scholars, like Aquinas and Olivi, placed the virtue of Justice at the centre of commerce. Prudence is the common sense to decide between different courses of action; it is at the root of reason and rationality and can be seen as the motivation for all science. This virtue is the one most closely associated, in the modern mind at least, with effective merchants. Temperance, a word that comes into English with Grosseteste’s translation of Ethics, is the virtue least associated with modern bankers. However, the modern understanding of temperance as denial or abstinence is not how a medieval friar would have understood the virtue. The word is related to ‘temper’ and is concerned with getting the right balance between the virtues. A good merchant would exhibit the virtue by mixing Courage, to take a risk, with Prudence, allowing for the unforeseen and diversifying.

Faith is the ability to believe without seeing, and was central to Olivi’s whole philosophy. The Latin root, fides, captures the concept of trust, the very essence of finance exhibited by the “promise to pay”. While Faith is backward looking, built on trust, Hope is its forward-looking complement. Charity, along with Temperance, is the virtue least likely to be associated with merchants. While we now think of charity in terms of giving to others, in the past it was associated with a love, or care, for others. Shakespeare’s play The Merchant of Venice is about ‘Antonio, a merchant of Venice’, who characterises Charity, or agape, through his sacrifices for his young friend Bassanio. The view that Antonio and Bassanio were physical lovers is a modern interpretation that does not distinguish storge (familial love, deficient in Jessica/Shylock), philia (friendship, Portia/Nerissa, Lorenzo/Bassanio), eros (physical love, Portia/Bassanio, Lorenzo/Jessica) and agape (spiritual love, Antonio/Bassanio, deficient in Shylock), themes running clearly through the play. We would suggest that the problem Graeber should be tackling in his discussion of debt is not the presence of reciprocity but rather the absence of Charity.


   A. Appadurai. The ghost in the financial machine. Public Culture, 23(3):517–539, 2011.
   Aristotle. Nicomachean Ethics, translated by W. D. Ross. Oxford University Press, 1925.
   D. Beunza and D. Stark. Tools of the trade: the socio-technology of arbitrage in a Wall Street trading room. Industrial & Corporate Change, 13(2):369–400, 2004.
   D. Beunza and D. Stark. From dissonance to resonance: cognitive interdependence in quantitative finance. Economy and Society, 41(3):383–417, 2012.
   R. Brenner and G. A. Brenner. Gambling and Speculation: A theory, a history and a future of some human decisions. Cambridge University Press, 1990.
   C. Darwin. The descent of man, and selection in relation to sex. John Murray, 1871. darwin-online.org.uk/contents.html.
   D. Graeber. Debt: The first 5,000 years. Melville House, 2011.
   J. Henrich et al. Costly punishment across human societies. Science, 312:1767–1770, 2006.
   K. Jensen, J. Call, and M. Tomasello. Chimpanzees are rational maximizers in an ultimatum game. Science, 318:107–108, 2007.
   J. Kaye. Economy and Nature in the Fourteenth Century. Cambridge University Press, 1998.
   J. M. Keynes. The general theory of employment, interest and money. Macmillan, 1936.
   D. Luscombe. Medieval Thought. Oxford University Press, 1997.
   D. N. McCloskey. The Bourgeois Virtues: Ethics for an Age of Commerce. University of Chicago Press, 2007.
   J. K. Murnighan and M. S. Saxon. Ultimatum bargaining by children and adults. Journal of Economic Psychology, 19:415–445, 1998.
   H. Poincaré. Science and hypothesis. In S. J. Gould, editor, The Value of Science: Essential Writings of Henri Poincaré. Modern Library, 1902 (2001).
   G. Poitras. The Early History of Financial Economics, 1478–1776. Edward Elgar, 2000.
   A. Smith. An inquiry into the nature and causes of the wealth of nations. Project Gutenberg, 1776 (2012).
   R. H. Thaler. Anomalies: The ultimatum game. The Journal of Economic Perspectives, 2(4):195–206, 1988.

Wednesday, 26 September 2012

The Fundamental Theory of Asset Pricing

This is now published, open access.

Within the field of Financial Mathematics, the Fundamental Theorem of Asset Pricing consists of two statements, (e.g. [Shreve2004, Section 5.4])

Theorem: The Fundamental Theorem of Asset Pricing
1. A market admits no arbitrage, if and only if, the market has a martingale measure.
2. The martingale measure is unique, if and only if, every contingent claim can be hedged.

The theorem emerged between 1979 and 1983 ([Harrison and Kreps1979], [Harrison and Pliska1981], [Harrison and Pliska1983]) as Michael Harrison sought to establish a mathematical theory underpinning the well-established Black-Scholes equation for pricing options. One remarkable feature of the Fundamental Theorem is its lack of mathematical notation, highlighted by comparison with the symbol-laden Black-Scholes equation, which came out of economics. Despite its non-mathematical appearance, the work of Harrison and his collaborators opened finance to investigation by functional analysts (such as [Schachermayer1984]), and by 1990 any mathematician working on asset pricing would have to do so within the context of the Fundamental Theorem.

The use of the term ‘probability measure’ places the Fundamental Theorem within the mathematical theory of probability formulated by Andrei Kolmogorov in 1933 ([Kolmogorov1933 (1956)]). Kolmogorov’s work took place in a context captured by Bertrand Russell, who in 1927 observed that
It is important to realise the fundamental position of probability in science. …As to what is meant by probability, opinions differ. Russell [1927 (2009), p 301]
The significance of probability in providing the basis of statistical inference in empirical science had been generally understood since Laplace. In the 1920s the idea of randomness, as distinct from a lack of information (the absence of Laplace’s Demon), was becoming significant. In 1926 the physicist Max Born was “inclined to give up determinism”, to which Einstein responded with “I for one am convinced that [God] does not play dice” [von Plato1994, pp 147–157]. Outside the physical sciences, Frank Knight, in Risk, Uncertainty and Profit, argued that uncertainty, a consequence of randomness, was the only true source of profit, since if a profit was predictable the market would respond and make it disappear (Knight [1921 (2006), III.VII.1–4]). Simultaneously, in his Treatise on Probability, John Maynard Keynes observed that in some cases cardinal probabilities could be deduced, in others ordinal probabilities (one event being more or less likely than another) could be inferred, but the largest class of problems were not reducible to the conventional concept of probability ([Keynes1972, Ch XXIV, 1]). Keynes would place this inability to precisely define a numerical probability at the heart of his economics ([Skidelsky2009, pp 84–90]).

Two mathematical theories had become ascendant by the late 1920s. Richard von Mises, an Austrian engineer linked to the Vienna Circle of logical-positivists, and brother of the economist Ludwig, attempted to lay down the axioms of probability based on observable facts within a framework of Platonic-Realism. The result was published in German in 1931 and popularised in English as Probability, Statistics and Truth and is now regarded as a key justification of the frequentist approach to probability.

To balance von Mises’ Realism, the Italian actuary Bruno de Finetti presented a more Nominalist approach. De Finetti argued that “Probability does not exist” because it was only an expression of the observer’s view of the world. De Finetti’s subjectivist approach was closely related to the less well-known position taken by Frank Ramsey, who, in 1926, wrote Truth and Probability, in which he argued that probability was a measure of belief. Ramsey’s argument was well received by his friend and mentor John Maynard Keynes, but Ramsey’s early death hindered its development.

While von Mises and de Finetti took an empirical path, Kolmogorov used mathematical reasoning to define probability. Kolmogorov wanted to address the key issue for physics at the time, which was that, following the work of Montmort and de Moivre in the first decade of the eighteenth century, probability had been associated with counting events and comparing relative frequencies. This had been coherent until mathematics became focused on infinite sets at the same time as physics became concerned with statistical mechanics, in the second half of the nineteenth century. Von Mises had tried to address these issues, but his analysis was weak in dealing with the infinite sets that came with continuous time. As Jan von Plato observes
von Mises’s theory of random sequences has been remembered as something to be criticized: a crank semi-mathematical theory serving as a warning of the state of probability [at the time] von Plato [1994, p 180]

In 1902 Lebesgue had redefined the mathematical concept of the integral in terms of abstract ‘measures’ in order to accommodate new classes of mathematical functions that had emerged in the wake of Cantor’s transfinite sets. Kolmogorov made the simple association of these abstract measures with probabilities, solving von Mises’ problem of having to deal with infinite sets in an ad hoc manner. As a result, Kolmogorov identified a random variable with a function and an expectation with an integral: probability became a branch of Analysis, not Statistics.

Kolmogorov’s work was initially well received, but slow to be adopted. One contemporary American reviewer noted it was an important proof of Bayes’ Theorem ([Reitz1934]), then still controversial (Keynes [1972, Ch XVI, 13]) but now a cornerstone of statistical decision making. Amongst English-speaking mathematicians, the American Joseph Doob was instrumental in promoting probability as measure ([Doob1941]) while the full adoption of the approach followed its advocacy by Doob and William Feller at the First Berkeley Symposium on Mathematical Statistics and Probability in 1945–1946.

While measure theoretic probability is a rigorous theory, outside pure mathematics it is seen as redundant. Von Mises criticised it as unnecessarily complex ([von Mises1957 (1982), p 99]), while the statistician Maurice Kendall argued that measure theory was fine for mathematicians but of limited practical use to statisticians, failing “to found a theory of probability as a branch of scientific method” ([Kendall1949, p 102]). More recently, the physicist Edwin Jaynes championed Leonard Savage’s subjectivism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science” in comparison with measure theory ([Jaynes2003, p 655]). Furthermore, in 2001 two mathematicians, Glenn Shafer and Vladimir Vovk, a former student of Kolmogorov, proposed an alternative to measure-theoretic probability, ‘game-theoretic probability’, because the novel approach “captures the basic intuitions of probability simply and effectively” ([Shafer and Vovk2001]). Seventy-five years on, Russell’s enigma appears to be no closer to resolution.

The issue around the ‘basic intuition’ of measure theoretic probability for empirical scientists can be accounted for as a lack of physicality. Frequentist probability is based on the act of counting, subjectivist probability is based on a flow of information, whereas measure theoretic probability is based on an abstract mathematical object unrelated to phenomena. Specifically, in the Fundamental Theorem the ‘martingale measure’ is a probability measure, usually labelled \(\mathbb{Q}\), such that the price of an asset today, \(X_0\), is the expectation, under the martingale measure, of the discounted asset prices in the future, \(X_T\):
\[ X_0 = \mathbb{E}^{\mathbb{Q}}[X_T]. \]
Given a current asset price \(X_0\) and a set of future prices \(X_T\), the probability distribution is defined such that this equality holds, and so is forward looking, in that it is based on current and future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, inferred from historical price changes and usually assigned the label \(\mathbb{P}\), is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling; it was introduced into mathematics by Jean Ville in 1939, in a critique of von Mises’ work, to label a random process where the value of the random variable at a specific time is the expected value of the random variable in the future. The concept that asset prices have the martingale property was first proposed by Benoit Mandelbrot ([Mandelbrot1966]) in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH) ([Fama1965]), the two concepts being combined by Fama in 1970 ([Fama1970]). For Mandelbrot and Fama the key consequence of prices being martingales was that the price today was, statistically, independent of the future price distribution: technical analysis of markets was charlatanism. In developing the EMH there is no discussion of the nature of the probability measure under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure.
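As a toy illustration of the martingale property (not taken from any of the papers cited above), a symmetric random walk has the property that its current value is the expectation of any future value, so its history offers no forecasting edge; a minimal sketch:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def walk(x0, steps):
    """A symmetric random walk: the canonical martingale."""
    x = x0
    for _ in range(steps):
        x += random.choice((-1, 1))
    return x

# The martingale property E[X_T] = X_0: the average of many simulated
# future values is (statistically) the current value.
x0 = 100
mean_future = sum(walk(x0, 50) for _ in range(20_000)) / 20_000
print(round(mean_future, 2))  # close to 100
```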

Arbitrage (the word derives from ‘arbitration’) has long been a subject of financial mathematics. In Chapter 9 of his 1202 text advising merchants, the Liber Abaci, Fibonacci discusses ‘Barter of Merchandise and Similar Things’:
20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. ([Sigler2002, p 180])

In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’ between the other two commodities.
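Fibonacci's arbitration is a simple chain of proportions: convert cloth into Pisan pounds, then pounds into cotton. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

# 20 arms of cloth are worth 3 Pisan pounds; 42 rolls of cotton are worth 5.
pounds_per_arm = Fraction(3, 20)
rolls_per_pound = Fraction(42, 5)

# Pisan pounds 'arbitrate' between the two commodities:
pounds = 50 * pounds_per_arm      # 50 arms of cloth -> 7 1/2 Pisan pounds
rolls = pounds * rolls_per_pound  # 7 1/2 pounds -> rolls of cotton
print(rolls)                      # 63
```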

Over the centuries this technique of pricing through arbitration evolved into the law of one price, that if two assets offer identical cash flows then they must have the same price. This was employed by Jan de Witt in 1671 when he solved the problem of pricing life annuities in terms of redeemable annuities, based on the presumption that
the real value of certain expectations or chances of objects, of different value, should be estimated by that which we can obtain from as many expectations or chances dependent on one or several equitable contracts. [Sylla2003, p 313, quoting De Witt, The Worth of Life Annuities in Proportion to Redeemable Bonds]

In 1908 the Croatian mathematician Vincenz Bronzin published a text which discusses pricing derivatives by ‘covering’, or hedging, them with portfolios of options and forward contracts, employing the principle of ‘equivalence’, the law of one price ([Zimmermann and Hafner2007]). In 1965 the functional analyst and probabilist Edward Thorp collaborated with a post-doctoral mathematician, Sheen Kassouf, and combined the law of one price with basic techniques of calculus to identify market mis-pricing of warrants, at the time a widely traded form of stock option. In 1967 they published their methodology in a best-selling book, Beat the Market ([MacKenzie2003]).

Within economics, the law of one price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gerard Debreu and Lionel McKenzie in the context of general equilibrium. In his 1964 paper, Arrow addressed the issue of portfolio choice in the presence of risk and introduced the concept of an Arrow Security, an asset that would pay out ‘1’ in a specific future state of the economy but zero in all other states; by the law of one price, all commodities could be priced in terms of these securities ([Arrow1964]). The work of Fischer Black, Myron Scholes and Robert Merton ([Black and Scholes1973]) employed the principle and presented a mechanism for pricing warrants on the basis that “it should not be possible to make sure profits”, with the famous Black-Scholes equation being the result.

In the context of the Fundamental Theorem, ‘an arbitrage’ is the ability to formulate a trading strategy such that the probability of a loss, whether under the natural measure \(\mathbb{P}\) or the martingale measure \(\mathbb{Q}\), is zero, but the probability of a profit is positive. This definition is important following Hardie’s criticism of the way the term is applied loosely in economic sociology ([Hardie2004]). The obvious point of this definition is that, unlike Hardie’s definition [Hardie2004, p 243], there is no guaranteed (strictly positive) profit; however, there is also a subtle technical point: there is no guarantee that there is no loss if there is an infinite set of outcomes. This is equivalent to the observation that there is no guarantee that an infinite number of monkeys with typewriters will, given enough time, come up with a work of Shakespeare: it is only that we expect them to do so. This observation explains the caution in the use of infinite sets taken by mathematicians such as Poincaré, Lebesgue and Brouwer.

To understand this meaning of arbitrage, consider the most basic case of a single period economy, consisting of a single asset whose price, \(X_0\), is known at the start of the period and can take on one of two (present) values, \(X_T^U > X_T^D\), representing two possible states of the economy at the end of the period. In this case an arbitrage would exist if \(X_T^U > X_T^D \geq X_0\): buying the asset now would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if \(X_0 \geq X_T^U > X_T^D\), short selling the asset now, and buying it back at the end of the period, would also lead to an arbitrage.

In summary, for there to be no arbitrage opportunities we require that
\[ X_T^U > X_0 > X_T^D. \]
This implies that there is a real number, \(q\), with \(0 < q < 1\), such that
\[ X_0 = X_T^D + q\,(X_T^U - X_T^D) = q\,X_T^U + (1 - q)\,X_T^D = \mathbb{E}^{\mathbb{Q}}[X_T], \]
where \(\mathbb{Q}\) is the measure defined by \(q\), and it can be seen that \(q\) represents a measure theoretic probability that the economy ends in the \(U\) state.
With this in mind, the first statement of the Fundamental Theorem can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (discounted) future price”. If \(X_0 < X_T^D\) we have that \(q < 0\), whereas if \(X_T^U < X_0\) then \(q > 1\); in both cases \(q\) does not represent a probability measure, which, by definition, must lie between 0 and 1. In this simple case there is a trivial intuition behind measure theoretic probability: the martingale measure and an absence of arbitrage are a simple tautology.
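The two-state calculation reduces to a one-line formula for the martingale probability; the prices in this sketch are hypothetical:

```python
def martingale_measure(x0, xu, xd):
    """Solve X0 = q*XU + (1 - q)*XD for q; no arbitrage iff 0 < q < 1."""
    return (x0 - xd) / (xu - xd)

q = martingale_measure(x0=100, xu=120, xd=90)  # illustrative prices
print(round(q, 4))  # 0.3333
# q prices the asset back to X0, i.e. X0 is the expectation under q:
assert abs(q * 120 + (1 - q) * 90 - 100) < 1e-12

# If the current price lies outside (XD, XU) the 'probability' leaves [0, 1]
# and an arbitrage exists (here X0 < XD: buying now guarantees no loss):
print(martingale_measure(100, 120, 105))  # negative, so not a probability
```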

To appreciate the meaning of the second statement of the theorem, consider the situation when the economy can take on three states at the end of the time period, not two. If we label the possible future asset prices as \(X_T^U > X_T^M > X_T^D\), we cannot deduce a unique set of probabilities \(0 \leq q^U, q^M, q^D \leq 1\), with \(q^U + q^M + q^D = 1\), such that
\[ X_0 = q^U X_T^U + q^M X_T^M + q^D X_T^D. \]
The market still precludes arbitrage, but we no longer have a unique probability measure under which asset prices are martingales, and so we cannot derive unique prices for other assets in the market. In the context of the law of one price, we cannot hedge, replicate or cover a position in the market, making it riskless, and in terms of Arrow’s work the market is incomplete. This explains the sense of the second statement of the Fundamental Theorem, and is important in that the statement tells the mathematician that in the real world of imperfect knowledge and transaction costs, a model within the Theorem’s framework cannot give a precise price.
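The non-uniqueness is easy to exhibit numerically. In the sketch below (all prices hypothetical) every admissible choice of the middle-state probability yields a measure that reprices the asset exactly, yet different choices give different prices for a contingent claim, here a call struck at 100, so the claim cannot be priced, or hedged, uniquely:

```python
def three_state_measure(x0, xu, xm, xd, qm):
    """Given a choice of qM, solve x0 = qU*xu + qM*xm + qD*xd with
    qU + qM + qD = 1; the solutions form a one-parameter family."""
    qu = (x0 - xd - qm * (xm - xd)) / (xu - xd)
    return qu, qm, 1.0 - qu - qm

X0, XU, XM, XD = 100.0, 120.0, 100.0, 90.0  # hypothetical prices

for qm in (0.2, 0.6):  # two admissible choices of qM
    qu, qm_, qd = three_state_measure(X0, XU, XM, XD, qm)
    asset = qu * XU + qm_ * XM + qd * XD  # always reprices to 100
    call = (qu * max(XU - 100, 0) + qm_ * max(XM - 100, 0)
            + qd * max(XD - 100, 0))      # a call struck at 100
    print(round(asset, 4), round(call, 4))
```

Both measures are consistent with the observed price \(X_0 = 100\), but they price the call at 16/3 and 8/3 respectively: the market is arbitrage-free yet incomplete.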

Most models employed in practice ignore the impact of transaction costs, on the utopian basis that precision will improve as market structures evolve and transaction costs disappear. Situations where there are many, possibly infinitely many, prices at the end of the period are handled by providing a model for asset price dynamics, between times 0 and T. The choice of asset price dynamics defines the distribution of XT , either under the martingale or natural probability measure, and in making the choice of asset price dynamics, the derivative price is chosen. This effect is similar to the choice of utility function determining the results of models in some areas of welfare economics.
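As an illustration of how the choice of dynamics chooses the derivative price (a standard textbook calculation, with illustrative parameters): under lognormal dynamics the Black-Scholes formula prices a European call, and two different assumed volatilities give two different 'precise' prices for the identical contract:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call under lognormal dynamics."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# The same contract under two assumed dynamics (volatilities):
for sigma in (0.2, 0.4):
    print(sigma, round(bs_call(s=100, k=100, t=1.0, r=0.05, sigma=sigma), 2))
```

Choosing the volatility parameter is choosing the distribution of \(X_T\), and with it the option price; this is the sense in which the model's 'precision' rests on the modeller's choice.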

The Fundamental Theorem is not well known outside the limited field of financial mathematics; practitioners focus on the models that are a consequence of the Theorem, whereas social scientists focus on the original Black-Scholes-Merton model as an exemplar. Practitioners are daily exposed to the imprecision of the models they use and are skeptical, if not dismissive, of their validity ([Miyazaki2007, pp 409–410], [MacKenzie2008, p 248], [Haug and Taleb2009]). Following the market crash of 1987, few practitioners used the Black-Scholes equation to actually ‘price’ options; rather, they used the equation to measure market volatility, a proxy for uncertainty.

However, the status of the Black-Scholes model as an exemplar in financial economics has been enhanced following the adoption of measure theoretic probability, and this can be understood because the Fundamental Theorem, born out of Black-Scholes-Merton, unifies a number of distinct theories in financial economics. MacKenzie ([MacKenzie2003, p 834]) describes a dissonance between Merton’s derivation of the model (Merton [1973]) using techniques from stochastic calculus, and Black’s, based on the Capital Asset Pricing Model (CAPM) (Black and Scholes [1973]). When measure theoretic probability was introduced it was observed that the Radon-Nikodym derivative, a mathematical object that describes the relationship between the stochastic processes Merton used in the natural measure and the martingale measure, involved the market-price of risk (Sharpe ratio), a key object in the CAPM. This point was well understood in the academic literature in the 1990s and was introduced into the fourth edition of the standard text book, Hull’s Options, Futures and other Derivatives, in 2000.

The realisation that the Fundamental Theorem unified Merton’s approach (based on the stochastic calculus advocated by Samuelson at M.I.T.), CAPM (developed at the Harvard Business School and in California), martingales (a feature of efficient markets proposed at Chicago) and incomplete markets (from Arrow and Debreu in California) enhanced the status of Black-Scholes-Merton as representing a Kuhnian paradigm. This unification of a plurality of techniques within a ‘theory of everything’ came just as the Black-Scholes equation came under attack for not reflecting empirical observations of market prices and obituaries were being written for the broader neoclassical programme ([Colander2000]), and can explain why, in 1997, the Nobel Prize in Economics was awarded to Scholes and Merton “for a new method to determine the value of derivatives”.

The observation that measure theoretic probability unified a ‘constellation of beliefs, values, techniques’ in financial economics can be explained in terms of the transcendence of mathematics. To paraphrase Tait ([Tait1986, p 341])
A mathematical proposition is about a certain structure, financial markets. It refers to prices and relations among them. If it is true, it is so in virtue of a certain fact about this structure. And this fact may obtain even if we do not or cannot know that it does.
In this sense, the Fundamental Theorem confirms the truth of the EMH, or any of the other ‘facts’ that go into the proposition. It becomes doctrine that more (derivative) assets need to be created in order to complete markets, or, as Miyazaki observes [Miyazaki2007, p 404], that speculative activity, as arbitration, is essential for market efficiency.

However, this relies on a belief in the transcendence of mathematics. If mathematics is a human construction, the conclusion does not hold.


   K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. The Review of Economic Studies, 31(2):91–96, 1964.
   F. Black and M. Scholes. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3):637–654, 1973.
   D. Colander. The death of neoclassical economics. Journal of the History of Economic Thought, 22(2):127, 2000.
   J.L. Doob. Probability as measure. The Annals of Mathematical Statistics, 12(2):206–214, 1941.
   E. F. Fama. The behavior of stock–market prices. The Journal of Business, 38(1):34–105, 1965.
   E. F. Fama. Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2):383–417, 1970.
   I. Hardie. ‘The sociology of arbitrage’: a comment on MacKenzie. Economy and Society, 33(2):239–254, 2004.
   J. M. Harrison and D. M. Kreps. Martingales and arbitrage in multiperiod securities markets. Journal of Economic Theory, 20:381–401, 1979.
   J. M. Harrison and S. R. Pliska. Martingales and stochastic integrals in the theory of continuous trading. Stochastic Processes and their Applications, 11:215–260, 1981.
   J. M. Harrison and S. R. Pliska. A stochastic calculus model of continuous trading: complete markets. Stochastic Processes and their Applications, 15:313–316, 1983.
   E. G. Haug and N. N. Taleb. Why we have never used the Black–Scholes–Merton option pricing formula. 2009.
   E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
   M. G. Kendall. On the reconciliation of theories of probability. Biometrika, 36(1/2): 101–116, 1949.
   J. M. Keynes. The collected writings of John Maynard Keynes. Vol. 8: Treatise on Probability. Macmillan, 1972.
   F. H. Knight. Risk, Uncertainty, and Profit. Hart, Schaffner & Marx (Cosimo), 1921 (2006).
   A. N. Kolmogorov. Foundations of the Theory of Probability. Julius Springer (Chelsea), 1933 (1956).
   D. MacKenzie. An equation and its worlds: Bricolage, exemplars, disunity and performativity in financial economics. Social Studies of Science, 33(6):831–868, 2003.
   D. MacKenzie. An Engine, Not a Camera: How Financial Models Shape Markets. The MIT Press, 2008.
   M. S. Mahoney. The Mathematical Career of Pierre de Fermat, 1601–1665. Princeton University Press, 1994.
   B. Mandelbrot. Forecasts of future prices, unbiased markets and ”martingale” models. The Journal of Business, 39(1, Supplement on Security Prices):242–255, 1966.
   R. C. Merton. Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1):141–183, 1973.
   H. Miyazaki. Between arbitrage and speculation: an economy of belief and doubt. History of Political Economy, 36(3):369–415, 2007.
   H.L. Reitz. Review of Grundbegriffe der Wahrscheinlichkeitsrechnung. Bulletin of the American Mathematical Society, 40(7):522–523, 1934.
   B. Russell. An Outline of Philosophy. George Allen & Unwin (Routledge), 1927 (2009).
   W. Schachermayer. Die Überprüfung der Finanzierbarkeit der Gewinnbeteiligung. Mitteilungen der Aktuarvereinigung Österreichs, 2:13–30, 1984.
   G. Shafer and V. Vovk. Probability and Finance: It’s Only a Game! Wiley, 2001.
   S. E. Shreve. Stochastic Calculus for Finance II: Continuous-Time Models. Springer, 2004.
   L. E. Sigler. Fibonacci’s Liber Abaci. Springer-Verlag, 2002.
   R. Skidelsky. Keynes, The Return of the Master. Allen Lane, 2009.
   E. D. Sylla. Business ethics, commercial mathematics, and the origins of mathematical probability. History of Political Economy, 35:309–337, 2003.
   W. W. Tait. Truth and proof: The Platonism of mathematics. Synthese, 69(3):341–370, 1986.
   R. von Mises. Probability, statistics and truth. Allen & Unwin (Dover), 1957 (1982).
   J. von Plato. Creating Modern Probability. Cambridge University Press, 1994.
   H. Zimmermann and W. Hafner. Amazing discovery: Vincenz Bronzin’s option pricing models. Journal of Banking and Finance, 31:531–546, 2007.