Saturday, 19 October 2013

Two Women and a Duck - a Pragmatic case against HFT

This is the last of four articles on the role of reciprocity in financial economics.  It develops the case for taking a Pragmatic perspective when trying to understand contemporary finance.

I have presented the case that the essence of the FTAP is reciprocity, alternatively Justice and equality in exchange, colloquially fairness. The pre-history of mathematical probability lies in Olivi’s examination of commercial exchange in the context of Aristotle’s Ethics. The subsequent emergence of the topic is in the seventeenth century analysis of contracts in the context of ‘fair’ pricing. In the twentieth century Ramsey provides the ‘Dutch book’ argument, which can be viewed as the ‘Golden Rule’ of reciprocity. However, under the influence of a strong fact/value dichotomy that was established in the nineteenth century, the moral injunction not to engage in turpe lucrum, through the practice of arbitrage, becomes highly technical and ethically neutral, and in the process the essence of reciprocity in the FTAP becomes obscured.
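The ‘Dutch book’ argument can be illustrated with a little arithmetic. The following sketch is an illustration of mine, not drawn from the sources cited: if an agent quotes prices for unit bets on an event and its complement that do not sum to one, a counterparty can lock in a riskless profit, so coherent betting quotients must form a probability measure.

```python
# Dutch book sketch: an agent quotes prices for unit bets (pay 1 if the
# event occurs, 0 otherwise) on an event A and on its complement.  If the
# quoted 'probabilities' do not sum to one, a counterparty can construct
# a book of bets yielding a sure profit, whatever happens.

def sure_profit(price_A, price_not_A):
    """Profit from buying both unit bets: exactly one of them pays 1."""
    cost = price_A + price_not_A
    return 1.0 - cost  # positive whenever the quoted prices sum to less than 1

# Incoherent quotes, P(A) = 0.4 and P(not A) = 0.4, lose ~0.2 for certain
print(sure_profit(0.4, 0.4))
# Coherent quotes, summing to one, leave nothing to exploit
print(sure_profit(0.6, 0.4))
```

The same construction generalises to any finite set of mutually exclusive outcomes, which is why the agent’s quotients must satisfy the axioms of probability.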

This argument associates the FTAP with the experimental results of the ‘Ultimatum Game’, an important anomaly for neo-classical economics [38]. The game involves two participants and a sum of money. The first player proposes how to share the money with the second participant. The division is made only if the second participant accepts the split; if the first player’s proposal is rejected, neither participant receives anything. The key result is that if the money is not split ‘fairly’ (approximately equally) then the second player rejects the offer. This contradicts the assumption that people are rational utility maximising agents, since if they were the second player would accept any positive payment. Research has shown that chimpanzees are rational maximisers, while the willingness of the second player to accept an offer depends on age and culture. Older people from societies where exchange plays a significant role are more likely to demand a fairer split of the pot than young children or adults from isolated communities ([30], [20], [21], [24]). Fair exchange appears to be learnt behaviour developed in a social context; it is fundamental to human society and distinguishes the sapient member of a civitas from the sentient animals. The Ultimatum Game provides observational evidence that reciprocity is, or at least should be, a fundamental concept for financial economics.
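The logic of the game, and of the anomaly, can be sketched in a few lines of code. The two responder types and the 30% acceptance threshold below are illustrative assumptions of mine, not parameters taken from the studies cited.

```python
# Ultimatum game sketch: a proposer offers a split of the pot; the
# responder accepts or rejects.  A purely 'rational' responder accepts
# any positive offer; a fairness-norm responder (the observed human
# behaviour) rejects offers below a threshold, so both get nothing.

def play(pot, offer, min_fair_share=0.3):
    """(proposer_payoff, responder_payoff) under a fairness norm.
    min_fair_share is an illustrative threshold, not an empirical value."""
    if offer >= min_fair_share * pot:
        return pot - offer, offer
    return 0, 0  # rejection punishes an 'unfair' proposer, at a cost to both

def play_rational(pot, offer):
    """A utility-maximising responder accepts any positive offer."""
    if offer > 0:
        return pot - offer, offer
    return 0, 0
```

Under the fairness norm a 10%-of-the-pot offer yields nothing for either player, whereas the ‘rational’ responder would pocket it; this gap is exactly what the experiments measure.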

We have shown the key role that the FTAP plays in the dominant paradigm of financial economics, involving CAPM (Markowitz portfolio selection), the Efficient Markets Hypothesis (martingales), the use of stochastic calculus and incomplete markets. At first sight one might assume that this paradigm is associated with utility maximisation, but on closer reflection the key components are not.

Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty ([26], see also [35, p 432]). Markowitz’s theory was developed into the CAPM by Sharpe while similar models were developed independently by Treynor, Lintner and Mossin. These models conclude that all investors should hold the same portfolio, their individual risk-reward objectives are satisfied by the weighting of this ‘index portfolio’ in comparison to riskless cash in the bank, a point on the capital market line. The slope of the CML is the market price of risk, which is an important parameter in arbitrage arguments. Significantly, as MacKenzie [25, pp 86—87] observes, Markowitz portfolio selection and CAPM are prescriptive, not descriptive theories; just as medieval merchants were told what was licit by the Scholastics, so, in the 1980s, asset managers were being told what is ‘rational’ by academics.
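For reference, the capital market line can be written down explicitly; this is the standard textbook formulation rather than a quotation from the sources above. A portfolio with volatility $\sigma_p$ lying on the CML has expected return

```latex
\mathrm{E}[R_p] = r_f + \underbrace{\frac{\mathrm{E}[R_m] - r_f}{\sigma_m}}_{\lambda}\,\sigma_p ,
```

where $r_f$ is the riskless rate, $\mathrm{E}[R_m]$ and $\sigma_m$ are the expected return and volatility of the index portfolio, and the slope $\lambda$, the Sharpe ratio of the market, is the market price of risk referred to in the arbitrage arguments below.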
Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The opening paragraph of Black and Scholes includes the prescriptive statement that “it should not be possible to make sure profits”, a statement explicit in the Efficient Markets Hypothesis and in employing an Arrow security in the context of the Law of One Price.

Based on these observations, we conjecture that the whole paradigm for financial economics, not just the FTAP, is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s position that there is
a pragmatist conception of norms — a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles. [5, p 21]
The argument that we have presented is that reciprocity is implicit in the practice of commerce (e.g. [22]) and this norm becomes explicit in Virtue Ethics and then in the early conceptions of mathematical probability.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce and then by Olivi when he looked favourably on merchants. It is exhibited in the doux-commerce thesis; compare Fourcade and Healy’s contemporary description of the thesis
Commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers. [14, p 287]
with Putnam’s description of Habermas’ communicative action based on
the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted ...[and] is contrasted with manipulation. [34, pp 113-114]
There are practices (that should be) implicit in commerce that make it an exemplar of communicative action.

A further expression of markets as centres of communication is manifested in the Asian description of a market as “Two women and a duck”, which immediately brings to mind Donald Davidson’s argument that knowledge is not the product of a bipartite conversation but of a tripartite relationship between two speakers and their shared environment (e.g. [12]). The essence of the proverb is that if two women, who are characterised as talkative, and a duck come together, eventually the value of the duck will be determined—knowledge is created. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that concerns Donald MacKenzie [25] is one of monism. In employing pricing algorithms, markets cannot arrive at anything close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants (e.g. [37], [4], [13, especially Ch 12]), but difficult to appreciate for spectators who seek to attain ‘objective’ knowledge of markets from a distance.

To appreciate the relevance to financial crises of the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model, consider the comment made by the Parliamentary Commission on Banking Standards

Excessive complexity in the major banks is not restricted to organisational structure. The fuelling of the financial crisis by misguided risk models was not simply the consequence of some mathematicians getting their equations wrong. It was the result of ignorance, coupled with excessive faith in the application of mathematical precision, by senior management and by regulators. Many of the elements of this problem remain. [32, para. 93, v. II]

Mathematicians understood the limitations of their models, which they communicated. The problem was that these concerns were not appreciated by policy makers, within an institution, nationally or globally, who appear to have succumbed to the indubitable authority of mathematics [32, para. 60—61, v. II]. Stephen Krasner observes [27] that academics can help policy makers in two respects: “Provide empirical evidence about what has happened, and offer a conceptual framework through which to understand it.” A significant issue with the highly technical mathematical models employed in finance is that they lack a “conceptual framework” that non-specialists can understand. This means that policy makers, whether within or without banks, cannot ascertain the limitations of mathematical models that inform their decision making. Pragmatism provides the philosophical basis for a conceptual framework that acknowledges both the usefulness and the fallibility of mathematics in finance.

The significance of these issues to the FTAP is captured in a text by Rama Cont and Peter Tankov addressing pricing in markets with discontinuous prices
Unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy therefore the meaning of such ‘prices’ is not clear. [10, 10.5.2]
If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov, there is no conceptual framework supporting the prices obtained from the FTAP. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism [39]. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

In making this observation and by considering the implications of believing that the FTAP is an expression of reciprocity, we are employing the ‘Pragmatic maxim’ and are making a commitment to real-life experiences. Another, more direct, consequence of associating the FTAP with reciprocity is related to the EMH. Miyazaki observes [29, p 404] that speculation by arbitrageurs has been legitimised as ensuring that markets are efficient. The EMH is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities, as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis.

These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino; this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’ ([6, p 91], [4, p 394]). This is captured in Arjun Appadurai’s argument that the leading agents in modern finance
believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control ...[they] are not those who wish to “tame chance” but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”. [1, p 533-534]
In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking (e.g. [33], [2], [3, Ch 2]) and an antidote to the problem of radical uncertainty.

Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (‘The Gift’), which explores the moral force behind reciprocity in primitive and archaic societies, and he goes on to say that the contemporary financial speculator is “betting on the obligation of return” [1, p 535]; this is the fundamental axiom of contemporary finance. David Graeber also recognises the fundamental position reciprocity has in finance [17], but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that “we don’t ‘all’ have to pay our debts” [17, p 391]. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity [16, p 235].

The origins of Graeber’s argument are in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being
concerned with [man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end. [28]
In Principles of Political Economy of 1848 Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829—1851 and the famines of 1845—1851, and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity (e.g. [23]), and this preoccupation with scarcity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”.

Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy [3, Ch 1]. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice. This is exemplified by the 1950 English court case Buttle v Saunders ([1950] 2 All ER 193), where it was judged that ‘my word is my bond’ was subordinate to the profit maximisation principle.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007—2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets.

This tension is exemplified by the Chartered Financial Analyst (CFA) Institute Standards of Practice Handbook [9], where the primary obligation is to obey the law, where Buttle v Saunders is tempered by the Basel treaties. There is no discussion of how professionals should interact amongst themselves, only how they interact with clients and employers, agents with whom they have a contractual relationship. This suggests that a distinction is being made between the market, populated by analysts, and society as a whole.

An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather it argues that being a market participant entails binding norms on the agent, such as sincerity and truth-telling, that support the creation of knowledge, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems anathema to the principles of communicative action.

The purpose of these four posts has been to explore the ethical character of contemporary financial economics in light of the Financial Crisis of 2007—2008.

By examining the contemporary scholarship on the early development of probability we show that the field emerged in the seventeenth century out of the ethical assessment of commercial contracts. In the following century, the doux-commerce thesis dominated discussion of the morality of markets, emphasising the role markets play in binding society. The ethical aspect of probability theory disappears from mathematics at the start of the nineteenth century as science replaces uncertainty with Laplacian determinism [15] and the self-destructive thesis eclipses doux-commerce. Economics developed on Mill’s premise that the discipline is “concerned with [man] solely as a being who desires to possess wealth” and ‘value—neutrality’ emerges, built on the foundation of scientific determinism. It was within this conceptual framework that the Black-Scholes equation was developed.

When a mathematical ‘theory’ to underpin the Black-Scholes-Merton approach, the Fundamental Theorem of Asset Pricing, is developed, it relies on Kolmogorov’s abstract probabilities. The essence of this paper is in identifying these ‘martingale measures’ with probabilities that ensure equality in exchange, implicitly imitating the explicitly ethical approach of the early probabilists. This observation is significant in that it provides evidence of ‘oversocialisation’ in a domain traditionally considered ‘undersocialised’.

The argument presented in this post is based on employing the Pragmatic approach that acknowledges the contingency of knowledge. By taking this path we argue that markets should be regarded as centres of ‘communicative action’ governed by Pragmatic norms, and that recent financial crises have been a consequence of a dissonance between market participants working to Consequentialist norms but constrained by Deontological norms. In taking this approach we see a correspondence with Brandom’s semantic pragmatism: firstly because we see the implicit norm of reciprocity being made explicit in probability, and secondly because there is a correspondence between the results of the Ultimatum Game, which show that humans prefer reciprocity to utility maximisation while animals do not, and Brandom’s distinction between animal sentience and human sapience. This, in turn, offers a solution to the problems of financial regulation.


[1]    A. Appadurai. The ghost in the financial machine. Public Culture, 23(3):517—539, 2011.
[2]    R.J. Bernstein. Pragmatism, pluralism and the healing of wounds. In The New Constellation: The Ethical-political Horizons of Modernity/postmodernity, pages 323—340. MIT Press, 1992.
[3]    R.J. Bernstein. The Pragmatic Turn. Wiley, 2013.
[4]    D. Beunza and D. Stark. From dissonance to resonance: cognitive interdependence in quantitative finance. Economy and Society, 41(3):383—417, 2012.
[5]    R. Brandom. Making it explicit: reasoning, representing, and discursive commitment. Harvard University Press, 1994.
[6]    R. Brenner and G. A. Brenner. Gambling and Speculation: A theory, a history and a future of some human decisions. Cambridge University Press, 1990.
[7]    A. A. Brown and L. C. G. Rogers. Diverse beliefs. Stochastics An International Journal of Probability and Stochastic Processes, 84(5-6):683—703, 2012.
[8]    F. Caccioli, M. Marsili, and P. Vivo. Eroding market stability by proliferation of financial instruments. Eur. Phys. J. B, 71:467—479, 2009.
[9]    CFA Institute Standards of Practice Council. Standards of Practice Handbook. Technical report, Chartered Financial Analyst Institute, 2010.
[10]    R. Cont and P. Tankov. Financial Modelling with Jump Processes. Chapman & Hall/CRC, 2004.
[11]    R. Cont and L. Wagalath. Running for the exit: short selling and endogenous correlation in financial markets. Mathematical Finance, In press, 2013.
[12]    D. Davidson. A coherence theory of truth and knowledge. In Subjective, Intersubjective, Objective: Philosophical Essays Volume 3, Philosophical Essays of Donald Davidson, pages 137—153. Oxford University Press, 2001.
[13]    T. Duhon. How the Trading Floor Really Works. Wiley, 2012.
[14]    M. Fourcade and K. Healy. Moral views of market society. Annual Review of Sociology, 33:285—311, 2007.
[15]    G. Gigerenzer. The Empire of Chance: how probability changed science and everyday life. Cambridge University Press, 1989.
[16]    J. J. Graafland. Calvins restrictions on interest: Guidelines for the credit crisis. Journal of Business Ethics, 96(2):233—248, 2010.
[17]    D. Graeber. Debt: The first 5,000 years. Melville House, 2011.
[18]    A.G. Haldane and R. M. May. Systemic risk in banking ecosystems. Nature, 469:351—355, 2011.
[19]    V. Henderson and D. Hobson. Horizon-unbiased utility functions. Stochastic Processes and their Applications, 117(11):1621 — 1641, 2007.
[20]    J. Henrich, R. Boyd, S. Bowles, C. Camerer, E. Fehr, and H. Gintis. Foundations of Human Sociality. Oxford University Press, 2004.
[21]    J. Henrich, R. McElreath, A. Barr, J. Ensminger, C. Barrett, A. Bolyanatz, J. C. Cardenas, M. Gurven, E. Gwako, N. Henrich, C. Lesorogol, F. Marlowe, D. Tracer, and J. Ziker. Costly punishment across human societies. Science, 312:1767—1770, 2006.
[22]    C. Humphrey. Barter and economic disintegration. Man, 20(1):48—72, 1985.
[23]    W. James. The dilemma of determinism. In W. James, editor, The Will to Believe and Other Essays in Popular Philosophy, pages 145—183. Longmans Green & Co. (Project Gutenburg), 1896 (2009).
[24]    K. Jensen, J. Call, and M. Tomasello. Chimpanzees are rational maximizers in an ultimatum game. Science, 318:107—108, 2007.
[25]    D. MacKenzie. An Engine, Not a Camera: How Financial Models Shape Markets. The MIT Press, 2008.
[26]    H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77—91, 1952.
[27]    B. McMurtrie. Social scientists seek new ways to influence public policy. Chronicle of Higher Education, 60(1):24, 2013.
[28]    J. S. Mill. On the definition of political economy; and on the method of investigation proper to it. In J. M. Robson, editor, The Collected Works of John Stuart Mill, Volume IV - Essays on Economics and Society Part I,. Routledge, 1967.
[29]    H. Miyazaki. Between arbitrage and speculation: an economy of belief and doubt. History of Political Economy, 36(3):369—415, 2007.
[30]    J. K. Murnighan and M. S. Saxon. Ultimatum bargaining by children and adults. Journal of Economic Psychology, 19:415—445, 1998.
[31]    M. Musiela and T. Zariphopoulou. The single period binomial model. In R. Carmona, editor, Indifference Pricing: Theory and Applications, pages 3—43. Princeton University Press, 2009.
[32]    PCBS. Changing Banking for Good. Technical report, The Parliamentary Commission on Banking Standards, 2013.
[33]    H. Price. Metaphysical pluralism. The Journal of Philosophy, 89(8):387—409, 1992.
[34]    H. Putnam. The Collapse of the Fact/Value Dichotomy and Other Essays. Harvard University Press, 2002.
[35]    A. D. Roy. Safety first and the holding of assets. Econometrica, 20(3):431—449, 1952.
[36]    A. Simsek. Speculation and risk sharing with new financial assets. The Quarterly Journal of Economics, 128(3):1365—1396, 2013.
[37]    G. Tett. Fools’ Gold. Little Brown, 2009.
[38]    R. H. Thaler. Anomalies: The ultimatum game. The Journal of Economic Perspectives, 2(4):195—206, 1988.
[39]    E. R. Weintraub. How Economics Became a Mathematical Science. Duke University Press, 2002.

Saturday, 12 October 2013

Reciprocity as the Foundation of Financial Economics

This is the third of four posts developing my argument that at the heart of financial economics is the principle of balanced reciprocity.  In the previous post, Probability Theory: the synthesis of commerce and ethics I discussed the origins of probability. Here I describe the Fundamental Theorem of Asset Pricing in the context of these ethical origins.  This is wonkish but does give an account of the role of the Black-Scholes-Merton approach to pricing derivatives in the development of Financial Economics since the 1970s.

The Fundamental Theorem of Asset Pricing (FTAP) consists of two statements (e.g. [36, Section 5.4])
1. A market admits no arbitrage if, and only if, the market has a martingale measure.
2. Every contingent claim can be hedged if, and only if, the martingale measure is unique.
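Both statements can be seen in the simplest possible setting, a one-period binomial market. The sketch below is a standard textbook illustration with numbers of my own choosing, not an example from the references: a martingale measure exists precisely when the riskless return lies between the down and up moves of the stock, and with only two states it is unique, so every claim can be hedged.

```python
# One-period binomial sketch of the FTAP (illustrative numbers): a stock
# moves from S0 to u*S0 or d*S0 while cash grows by the riskless factor R.
# A martingale measure exists iff d < R < u (statement 1) and, with only
# two states, it is unique, so every claim can be hedged (statement 2).

def martingale_measure(u, d, R):
    """Risk-neutral up-probability making the discounted stock a martingale."""
    if not d < R < u:
        raise ValueError("riskless return outside (d, u): the market admits arbitrage")
    return (R - d) / (u - d)

def price(payoff_up, payoff_down, u, d, R):
    """Arbitrage-free price: discounted expectation under the martingale measure."""
    q = martingale_measure(u, d, R)
    return (q * payoff_up + (1.0 - q) * payoff_down) / R

S0, u, d, R = 100.0, 1.2, 0.8, 1.05
q = martingale_measure(u, d, R)
# Martingale property: the discounted expected stock price is today's price
assert abs((q * u * S0 + (1.0 - q) * d * S0) / R - S0) < 1e-9
# Price a call struck at 100: payoffs are 20 in the up state, 0 in the down
call = price(max(u * S0 - 100.0, 0.0), max(d * S0 - 100.0, 0.0), u, d, R)
```

Note that the real-world probability of an up move never appears: the ‘price’ is fixed entirely by the requirement that neither side of the exchange can lock in a riskless profit.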

The context of the FTAP

The FTAP emerged between 1979 and 1983 ([13], [14], [15]) as Michael Harrison sought to establish a mathematical theory underpinning the Black-Scholes-Merton (BSM) equation for pricing options, which was introduced in 1973.
In the late 1960s, Fischer Black and Myron Scholes worked as investment consultants and one of the problems the pair addressed was the valuation of ‘warrants’, options bundled with bonds. Black was an applied mathematician who had worked in consultancy for Jack Treynor around the time that Treynor developed his version of the Capital Asset Pricing Model (CAPM). Scholes had studied for a doctorate under Eugene Fama looking at risk-reward in the context of efficient markets [35]. Black tackled the problem of pricing warrants as an applied mathematician: the value of the warrant would be a function of the underlying asset’s price and amenable to the type of calculus that had been employed since Newton and Leibniz. Scholes approached the problem from a financial perspective: the risk of holding a warrant could be removed by holding a complementary (short) position in the underlying asset, by hedging. What Scholes did not know was how to establish the size of the hedging portfolio, but when he discussed this with Black they realised the solution was in the slope of the function relating the warrant price and asset price, a result that had been anticipated by Thorp and Kassouf [22, pp 130—131].
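The ‘slope’ insight can be made concrete. The sketch below, somewhat anachronistically, uses the later closed-form Black-Scholes price simply to show that the hedge ratio is the derivative of the option price with respect to the asset price: a finite-difference slope of the pricing function reproduces the analytic delta, N(d1). The parameter values are illustrative, not drawn from the text.

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative parameters: at-the-money call, 5% rate, 20% volatility, 1 year
S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# The hedge ratio is the slope of the pricing function in the asset price:
# a central finite difference recovers the analytic delta, N(d1)
h = 1e-4
slope = (bs_call(S + h, K, r, sigma, T) - bs_call(S - h, K, r, sigma, T)) / (2 * h)
d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
```

Holding `slope` units of the asset short against each warrant is exactly the hedge Scholes was looking for.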
Simultaneously, Robert C. Merton, who had studied advanced engineering mathematics before becoming a student of Paul Samuelson, was considering the problem of pricing warrants from a different perspective. Samuelson had never accepted Markowitz’s criterion of trading the expected returns of a portfolio against the variance of returns [33], which was a foundation of CAPM and Scholes’ work, so Merton tackled the problem of valuing warrants by maximising expected utility employing the stochastic calculus that had become important in aeronautical and electronic engineering. This work was published in 1969 ([32], [24]).
Despite the fact that Black never liked Merton’s highly mathematical technique, Scholes discussed their work with Merton in 1970. Merton saw how the Black-Scholes approach of hedging could be incorporated into his own continuous time models, removing the need to incorporate an arbitrary utility function in solving the pricing problem. Merton showed that a portfolio made up of: a single warrant, or an option; a hedging position in the risky underlying asset; and a funding position in the riskless bank account, would offer the same, certain, return as the initial cost of the portfolio deposited in the riskless bank account. It seemed that both subjectivity and risk had been removed from the pricing problem.
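Merton’s argument can be summarised in standard textbook notation (a reconstruction, not the notation of the original papers). Write $V(S,t)$ for the option value, with the asset following $dS = \mu S\,dt + \sigma S\,dW$, and hold a short position of $\partial V/\partial S$ in the asset:

```latex
\Pi = V - \frac{\partial V}{\partial S} S,
\qquad
d\Pi = \left(\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right) dt ,
```

so the hedged portfolio carries no randomness. The absence of arbitrage then requires it to earn the riskless rate, $d\Pi = r\Pi\,dt$, which yields the Black-Scholes equation

```latex
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - rV = 0 .
```

The drift $\mu$, the only subjective input, has dropped out, and no stochastic term remains: this is the precise sense in which both subjectivity and risk were removed from the pricing problem.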
In October 1970 Black and Scholes submitted their work to the Journal of Political Economy and then the Review of Economics and Statistics, but it was rejected without review, on the basis that there was not enough economics in it. The paper was only published by the Journal of Political Economy [3] in 1973 after the intervention of influential academics and shortly after the opening of the Chicago Board Options Exchange ([2, p 314—315], [22, pp 133—136]). Merton published his approach almost simultaneously [25].
When BSM was being developed option pricing was a relatively unimportant activity. Gambling legislation in the United States meant that options were only traded on ‘deliverable’ assets, principally agricultural commodities, and these markets were stagnant [22, pp 142-145]. However, following the ‘Nixon Shock’ of August 1971, the Bretton-Woods system of fixed exchange rates collapsed and in the aftermath, interest rates, exchange rates and commodity prices became much more volatile. Options, which have been a feature of financial practice since the seventeenth century, and were widely traded before the suspension of the European financial markets during the First World War [28], re-emerged as a tool to insure against volatile asset prices.
Despite the financial rationale for options, their legitimacy with regard to gambling legislation was still ambiguous. The introduction of BSM delivered a mathematical equation that defined the price of an option in terms of known parameters, making their valuation deterministic. Trading in options could not be gambling, given that there was no speculation in their valuation. Donald MacKenzie reports the view of the legal counsel to the Chicago Board of Trade at the time, Burton Rissman
Black-Scholes was what really enabled the exchange to thrive ...we were faced in the late 60s — early 70s with the issue of gambling. That fell away, and I think Black-Scholes made it fall away. It wasn’t speculation or gambling it was efficient pricing. [22, p 158]
Essentially a mathematical formula transformed index options from being illegitimate gambles to deterministic investments.
Both the Black-Scholes and Merton approaches to pricing options involved heuristic arguments; they were ‘engineering solutions’. Harrison sought to establish a rigorous option pricing ‘theory’ to support the range of mathematical models developed on the back of the explosion in derivatives markets [22, pp 140—141]. Harrison and his colleagues were successful in their mission and opened finance to investigation by pure mathematicians, such as [34], [5], [6], and by 2000 any mathematician working on asset pricing would do so within the context of the FTAP.
The FTAP is not well known outside the academic field of financial mathematics. Practitioners focus on the models that are a consequence of the Theorem, while social scientists focus on the original Black-Scholes-Merton approach as an exemplar. Even before the market crash of 1987 practitioners were sceptical as to the validity of the prices produced by their models ([26, pp 409—410], [22, p 248], [16]), and today the original Black-Scholes equation is used to measure market volatility, a proxy for uncertainty, rather than to ‘price’ options.
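This modern use of the equation ‘in reverse’ can be sketched as follows: rather than feeding a volatility in to obtain a price, one feeds a market price in and solves for the volatility that reproduces it, the ‘implied volatility’. The code is a minimal illustration of mine (bisection works because the call price is increasing in volatility); production implementations differ.

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def implied_vol(market_price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert the formula by bisection: the call price is monotonically
    increasing in sigma, so a unique volatility matches any attainable price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The implied volatility extracted from traded option prices is the market’s ‘proxy for uncertainty’ mentioned above, and is how the formula is actually used on trading desks.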
However, the status of the Black-Scholes model as an exemplar in financial economics has been enhanced following the development of the FTAP. Significantly, the theorem unifies different approaches in financial economics. The most immediate example of this synthesis was the observation, made in the course of the development of the FTAP, that a mathematical object related to the stochastic calculus Merton employed, the Radon-Nikodym derivative, involves the market price of risk (Sharpe ratio), a key object in the CAPM that Black used. Without the FTAP the two approaches are incongruous [21, p 834]. Overall, as will be discussed in full in the next section, the FTAP brings together: Merton’s approach employing the stochastic calculus advocated by Samuelson; CAPM, developed by Treynor and Sharpe; martingales, a mathematical concept employed by Fama in the development of the Efficient Markets Hypothesis; and the idea of incomplete markets, introduced by Arrow and Debreu.
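The connection can be stated explicitly in the standard textbook form of Girsanov’s theorem (a reconstruction, not a quotation from the sources above). For an asset with drift $\mu$ and volatility $\sigma$ in a market with riskless rate $r$, the Radon-Nikodym derivative of the martingale measure $\mathbb{Q}$ with respect to the real-world measure $\mathbb{P}$ over $[0,T]$ is

```latex
\frac{d\mathbb{Q}}{d\mathbb{P}}
  = \exp\!\left(-\theta W_T - \tfrac{1}{2}\theta^2 T\right),
\qquad
\theta = \frac{\mu - r}{\sigma},
```

where $\theta$, the market price of risk, is precisely the Sharpe ratio of CAPM. The object that changes Merton’s stochastic calculus into a martingale calculation is built out of the key parameter of Black’s CAPM.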
The synthesis by the FTAP of a ‘constellation of beliefs, values, techniques’ represented a Kuhnian paradigm for financial economics, focused on the Black-Scholes-Merton approach to pricing options. The paradigm was further strengthened by the fact that the unification was presented as emerging out of pure mathematics, appealing to Realists who believe in the transcendence of mathematics and the existence of an Idealised economic universe.

An Ethical Analysis of the FTAP

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov in 1933 [20]. Kolmogorov’s work took place in a context captured by Bertrand Russell, who in 1927 observed that
It is important to realise the fundamental position of probability in science. ... As to what is meant by probability, opinions differ. [31, p 301]
In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences [41, pp 147—157] because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit [19, III.VII.1—4] and the concept was pervading John Maynard Keynes’ economics ([27], [38, pp 84—88]).
Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) [40] attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it is only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability ([29], [30], [4], [7]).
Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation was an integral: probability became a branch of Analysis, not Statistics.
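Kolmogorov’s reduction of probability to Analysis is easy to see on a finite sample space. The following sketch, with invented states and payoffs, treats the measure as an assignment of mass to outcomes, a random variable as a function on those outcomes, and the expectation as an integral, here just a weighted sum:

```python
# Kolmogorov's scheme on a finite sample space: a probability measure
# assigns mass to outcomes, a random variable is a function on outcomes,
# and an expectation is an integral (here, a weighted sum).
# Illustrative sketch only; the states and payoffs are invented.

omega = ["up", "flat", "down"]              # the sample space
P = {"up": 0.3, "flat": 0.5, "down": 0.2}   # a measure satisfying the axioms

def X(state):
    # A random variable is simply a function defined on omega.
    return {"up": 110.0, "flat": 100.0, "down": 90.0}[state]

# The expectation E[X] is the integral of X against the measure P.
expectation = sum(X(s) * P[s] for s in omega)
print(expectation)  # approximately 101.0 (= 33 + 50 + 18)
```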
Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex [40, p 99] while the statistician Maurice Kendall argued that abstract measure theory failed “to found a theory of probability as a branch of scientific method” [18, p 102]. More recently the physicist Edwin Jaynes championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science” [17, p 655].
The objections of empirical scientists to measure theoretic probability can be accounted for by its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was practically more useful.
In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X_0, is the expectation, taken under the martingale measure, of its (real) price in the future, X_T. Formally,

X_0 = E^Q[X_T].

The abstract probability distribution Q is defined so that this equality holds; it is not based on any empirical information about historical prices or subjective judgement of future prices. The only condition placed on the relationship between the martingale measure and the ‘natural’, or ‘physical’, probability measure, usually labelled P, is that they agree on what is possible.
The term ‘martingale’ in this context derives from doubling strategies in gambling; it was introduced into mathematics by Jean Ville in 1939, in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot [23] in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH) [8], the two concepts being combined by Fama in 1970 [9]. For Mandelbrot and Fama the key consequence of prices being martingales was that future price changes were independent of the price history, and so technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, pay-off would have been understood by Olivi and obvious to Huygens, both working in an explicitly ethical framework.
The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. In Chapter 9 of the Liber Abaci Fibonacci discusses ‘Barter of Merchandise and Similar Things’,
20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. [37, p 180]
In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities. Over the centuries this technique of pricing through arbitration evolved into the Law of One Price: if two assets offer identical cash flows then they must have the same price. This was employed by Jan de Witt in 1671 when he solved the problem of pricing life annuities in terms of redeemable annuities, based on the presumption that
the real value of certain expectations or chances of objects, of different value, should be estimated by that which we can obtain from as many expectations or chances dependent on one or several equitable contracts. [39, p 313, quoting De Witt]
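Returning to Fibonacci’s barter problem, the arbitration through Pisan pounds amounts to two conversions, which can be checked directly:

```python
# Fibonacci's barter problem, with Pisan pounds 'arbitrating' between
# the two commodities: convert cloth into pounds, then pounds into cotton.
cloth_arms, cloth_price = 20, 3.0     # 20 arms of cloth = 3 Pisan pounds
cotton_rolls, cotton_price = 42, 5.0  # 42 rolls of cotton = 5 Pisan pounds

pounds = 50 * cloth_price / cloth_arms        # 50 arms -> 7.5 pounds
rolls = pounds * cotton_rolls / cotton_price  # 7.5 pounds -> rolls of cotton
print(rolls)  # 63.0
```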
In 1908 Vincenz Bronzin published a text which discusses pricing derivatives by ‘covering’, or hedging, them with portfolios of options and forward contracts, employing the principle of ‘equivalence’ [43]. In 1965 the mathematicians Edward Thorp and Sheen Kassouf combined the Law of One Price with basic techniques of calculus to identify market mis-pricing of warrants, and in 1967 they published their methodology in a best-selling book, Beat the Market.
Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset [1]. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits” [3] they were invoking the arbitrage argument, which had an eight-hundred-year history.
In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not. This definition is important following Hardie’s criticism of the way the term is applied loosely in economic sociology, and elsewhere [12]. The important point of this definition is that, unlike Hardie’s definition [12, p 243], there is no guaranteed (strictly positive) profit.
To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price, X_0, can take on one of two future values (in present-value terms), X_T^D < X_T^U, at time T > 0. In this case an arbitrage would exist if X_0 ≤ X_T^D < X_T^U: buying the asset now, at a price no greater than either future pay-off, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if X_T^D < X_T^U ≤ X_0, short selling the asset now and buying it back at time T would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

X_T^D < X_0 < X_T^U.

This implies that there is a number, 0 < q < 1, such that

X_0 = X_T^D + q(X_T^U - X_T^D)
    = q X_T^U + (1 - q) X_T^D.
The price now, X_0, lies between the future prices, X_T^D and X_T^U, dividing the interval in the ratio q : (1 - q), and so represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.
If X_0 < X_T^D < X_T^U we have that q < 0, whereas if X_T^D < X_T^U < X_0 then q > 1; in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair.
It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the price X_T^U will materialise in the future. Formally,

X_0 = q X_T^U + (1 - q) X_T^D = E^Q[X_T],

where Q is the martingale measure defined by q.
The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value.
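The one-period binomial argument above can be condensed into a few lines of code. A minimal sketch, with illustrative prices that are not from the text:

```python
# One-period binomial sketch of the first statement of the FTAP:
# absence of arbitrage (x_down < x0 < x_up) is equivalent to the
# existence of a 'probability' q in (0, 1) under which the current
# price is the expectation of the future price (a martingale).
# All numbers are invented for illustration.

def martingale_measure(x0, x_up, x_down):
    """Return q such that x0 = q*x_up + (1-q)*x_down, or None if an
    arbitrage exists (x0 lies outside the interval (x_down, x_up))."""
    q = (x0 - x_down) / (x_up - x_down)
    return q if 0.0 < q < 1.0 else None

q = martingale_measure(x0=100.0, x_up=120.0, x_down=90.0)
print(q)                           # (100-90)/(120-90) = 1/3
print(q * 120.0 + (1 - q) * 90.0)  # recovers x0, approximately 100.0

print(martingale_measure(80.0, 120.0, 90.0))  # None: buying at 80 is an arbitrage
```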
This first statement of the FTAP was anticipated by Ramsey in 1926 when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ can be measured through betting odds [29, p 171]. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1 [29, p 181]. He then goes on to say that
These are the laws of probability, ... If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event. [29, p 182]
This is a Pragmatic argument that identifies the absence of a martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and, as a consequence of the fact/value dichotomy, it is often presented as a ‘matter of fact’. However, setting the fact/value dichotomy aside, the Dutch Book argument is an alternative statement of the ‘Golden Rule’, “Do to others as you would have them do to you”: it is infused with the moral concepts of fairness and reciprocity ([42], [11]).
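Ramsey’s “book made against him” can be made concrete with a toy calculation. The prices below are invented; the sketch only illustrates the coherence requirement:

```python
# Ramsey's 'Dutch book' sketch: an agent whose quoted betting prices
# (degrees of belief) for an event A and its complement do not sum to 1
# can be booked against, whatever happens. Prices are invented.

# The agent will pay these prices for bets paying 1 if the event occurs:
price_A, price_not_A = 0.7, 0.5   # sum = 1.2 > 1: incoherent beliefs

# A 'cunning bettor' sells the agent both bets, collecting 1.2 up front,
# and pays out exactly 1 whichever of A / not-A occurs.
for outcome_A in (True, False):
    payout = (1.0 if outcome_A else 0.0) + (0.0 if outcome_A else 1.0)
    bookmaker_profit = (price_A + price_not_A) - payout
    print(bookmaker_profit)  # approximately 0.2 in both cases: a sure win
```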
The essential result of this paper is that embedded at the heart of the first statement of the FTAP is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics [10]. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability being in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.
Ramsey goes on to make an important point
Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you. [29, p 182—183]
Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives written on the primal asset; a well-known result is that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in both the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.
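This consistency requirement can be illustrated with a toy calculation. The rates and volatilities below are invented, and the construction is only a sketch of the standard market-price-of-risk argument:

```python
# Sketch of Ramsey's consistency requirement: two assets exposed to the
# same source of risk must offer the same market price of risk (Sharpe
# ratio); otherwise a riskless portfolio with a positive excess drift
# (an arbitrage) can be built. Parameters are illustrative.

r = 0.02                       # risk-free rate
mu1, sigma1 = 0.08, 0.20       # asset 1: Sharpe ratio (0.08-0.02)/0.20 = 0.30
mu2, sigma2 = 0.05, 0.15       # asset 2: Sharpe ratio (0.05-0.02)/0.15 = 0.20

lambda1 = (mu1 - r) / sigma1   # market price of risk of asset 1
lambda2 = (mu2 - r) / sigma2   # market price of risk of asset 2

# Holding 1/sigma1 units of risk in asset 1 against 1/sigma2 units short
# in asset 2 cancels the random exposures, leaving a riskless drift:
riskless_drift = lambda1 - lambda2
print(riskless_drift)  # approximately 0.10: riskless profit above r
```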
The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market because of Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP.
In the more realistic situation where there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. An agent can still be consistent in the particular martingale measure they choose to use, but another agent might choose a different measure, so that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was realised in The Port Royal Logic when it recognised the role of transaction costs in lotteries.
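The non-uniqueness described here is easy to exhibit. In the sketch below (all numbers invented) a single asset with three future states admits two distinct martingale measures that agree on the asset but disagree on a derivative:

```python
# Incomplete-market sketch for the second statement of the FTAP: with one
# asset and three future states, the martingale measure is not unique, and
# different, equally consistent, measures price a derivative differently.
# All numbers are invented for illustration.

x0 = 100.0
payoffs = [120.0, 100.0, 80.0]                 # three possible future states
call = [max(x - 100.0, 0.0) for x in payoffs]  # a call option struck at 100

def price(measure, payoff):
    # Price = expectation of the payoff under the chosen measure.
    return sum(q * p for q, p in zip(measure, payoff))

q_a = [0.25, 0.50, 0.25]   # both measures are strictly positive,
q_b = [0.40, 0.20, 0.40]   # sum to one, and reprice the asset at 100

assert abs(price(q_a, payoffs) - x0) < 1e-9
assert abs(price(q_b, payoffs) - x0) < 1e-9

print(price(q_a, call))  # 5.0 under one measure...
print(price(q_b, call))  # 8.0 under the other: no unique price
```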

References
[1] K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. The Review of Economic Studies, 31(2):91—96, 1964.
[2] P. L. Bernstein. Against the Gods, The Remarkable Story of Risk. Wiley, 1998.
[3] F. Black and M. Scholes. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3):637—654, 1973.
[4] J. B. Davis. The relationship between Keynes’ early and later philosophical thinking. In S. Mizuhara and J. Runde, editors, The Philosophy of Keynes’ Economics: Probability, Uncertainty and Convention, pages 100—110. Taylor & Francis, 2004.
[5] F. Delbaen and W. Schachermayer. A general version of the fundamental theorem of asset pricing. Mathematische Annalen, 300:463—520, 1994.
[6] F. Delbaen and W. Schachermayer. The fundamental theorem of asset pricing for unbounded stochastic processes. Mathematische Annalen, 312:215—250, 1998.
[7] D. Edgington. Ramsey and pragmatism: Probability, conditionals and truth, May 2012.
[8] E. F. Fama. The behavior of stock—market prices. The Journal of Business, 38(1):34—105, 1965.
[9] E. F. Fama. Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2):383—417, 1970.
[10] M. Granovetter. Economic action and social structure: the problem of embeddedness. American Journal of Sociology, 91(3):481—493, 1985.
[11] A. Hájek. Arguments for — or against — Probabilism? The British Journal for the Philosophy of Science, 59(4):793—819, 2008.
[12] I. Hardie. ‘The sociology of arbitrage’: a comment on MacKenzie. Economy and Society, 33(2):239—254, 2004.
[13] J. M. Harrison and D. M. Kreps. Martingales and arbitrage in multiperiod securities markets. Journal of Economic Theory, 20:381—401, 1979.
[14] J. M. Harrison and S. R. Pliska. Martingales and stochastic integrals in the theory of continuous trading. Stochastic Processes and their Applications, 11:215—260, 1981.
[15] J. M. Harrison and S. R. Pliska. A stochastic calculus model of continuous trading: Complete markets. Stochastic Processes and their Applications, 15:313—316, 1983.
[16] E. G. Haug and N. N. Taleb. Option traders use (very) sophisticated heuristics, never the Black-Scholes-Merton formula. Journal of Economic Behavior & Organization, 77(2):97—106, 2011.
[17] E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
[18] M. G. Kendall. On the reconciliation of theories of probability. Biometrika, 36(1/2):101—116, 1949.
[19] F. H. Knight. Risk, Uncertainty, and Profit. Hart, Schaffner & Marx (Cosimo), 1921 (2006).
[20] A. N. Kolmogorov. Foundations of the Theory of Probability. Julius Springer (Chelsea), 1933 (1956).
[21] D. MacKenzie. An equation and its worlds: Bricolage, exemplars, disunity and performativity in financial economics. Social Studies of Science, 33(6):831—868, 2003.
[22] D. MacKenzie. An Engine, Not a Camera: How Financial Models Shape Markets. The MIT Press, 2008.
[23] B. Mandelbrot. Forecasts of future prices, unbiased markets and “martingale” models. The Journal of Business, 39(1, Supplement on Security Prices):242—255, 1966.
[24] R. C. Merton. Lifetime portfolio selection under uncertainty: The continuous—time case. Review of Economics and Statistics, 51(3):247—257, 1969.
[25] R. C. Merton. Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1):141—183, 1973.
[26] H. Miyazaki. Between arbitrage and speculation: an economy of belief and doubt. History of Political Economy, 36(3):369—415, 2007.
[27] S. Mizuhara and J. Runde. The Philosophy of Keynes’ Economics: Probability, Uncertainty and Convention. Taylor & Francis, 2004.
[28] S. A. Nelson. The ABC of Options and Arbitrage. 1904.
[29] F.P. Ramsey. Truth and probability. In R.B. Braithwaite, editor, Ramsey, 1931, The Foundations of Mathematics and other Logical Essays, pages 156—198. Kegan, Paul, Trench, Trubner & Co., 1931.
[30] F.P. Ramsey and D.H. Mellor. Prospects for Pragmatism: Essays in Memory of F P Ramsey. Cambridge University Press, 1980.
[31] B. Russell. An Outline of Philosophy. George Allen & Unwin (Routledge), 1927 (2009).
[32] P. A. Samuelson. Lifetime portfolio selection by dynamic stochastic programming. Review of Economics and Statistics, 51(3):239—246, 1969.
[33] P. A. Samuelson. The fundamental approximation theorem of portfolio analysis in terms of means, variances and higher moments. The Review of Economic Studies, 37(4):537—542, 1970.
[34] W. Schachermayer. Die Überprüfung der Finanzierbarkeit der Gewinnbeteiligung. Mitteilungen der Aktuarvereinigung Österreichs, 2:13—30, 1984.
[35] M. S. Scholes. The market for securities: Substitution versus price pressure and the effects of information on share prices. The Journal of Business, 45(2), 1972.
[36] S. E. Shreve. Stochastic Calculus for Finance II: Continuous-Time Models. Springer, 2004.
[37] L. E. Sigler. Fibonacci’s Liber Abaci. Springer-Verlag, 2002.
[38] R. Skidelsky. Keynes, The Return of the Master. Allen Lane, 2009.
[39] E. D. Sylla. Business ethics, commercial mathematics, and the origins of mathematical probability. History of Political Economy, 35:309—337, 2003.
[40] R. von Mises. Probability, statistics and truth. Allen & Unwin (Dover), 1957 (1982).
[41] J. von Plato. Creating Modern Probability. Cambridge University Press, 1994.
[42] J. Wattles. The Golden Rule. Oxford University Press, 1996.
[43] H. Zimmermann and W. Hafner. Amazing discovery: Vincenz Bronzin’s option pricing models. Journal of Banking and Finance, 31:531—546, 2007.