The Complex Markets Hypothesis
By Allen Farrington
Posted February 15, 2020
In which I hypothesise that markets are subjective, uncertain, complex, stochastic, adaptive, fractal, reflexive… really any clever-sounding adjective you like – just not efficient.
available as pdf here, if desired
Photo by skeeze, via Pixabay
Around a month ago, Nic Carter asked me to have a look at a final draft of his article on the basics of the Efficient Markets Hypothesis. Dancing around the edges of Bitcoin Twitter as I am prone to do, I immediately grasped both the need for and the point of such an article; the question of whether the upcoming "halving" is "priced in" or not had "become a source of great rancor and debate," as Nic wrote. For the uninitiated, "the halving" is the reduction of the bitcoin block reward from 12.5 bitcoin to 6.25, expected around May 2020. Nic set himself the task of explaining the EMH more or less from scratch, in such a way that the explanation would naturally lend itself towards insight on questions of Bitcoin's market behaviour.
[An Introduction to the Efficient Market Hypothesis for Bitcoiners: What the EMH does and does not say](https://medium.com/@nic__carter/an-introduction-to-the-efficient-market-hypothesis-for-bitcoiners-ed7e90be7c0d)
I think he did a great job and the article is well worth reading. But I couldn't help thinking as I went through it that, basically, I didn't believe this stuff the first time around, and it all seemed strangely incongruous in a setting explicitly involving Bitcoin, what with the tendency of serious thinkers in this space to treat highly mathematised mainstream / neoclassical financial economics with something between suspicion and disdain.
To be completely clear, this is in no way a "rebuttal" to Nic. He articulated the EMH very well, but didn't defend it. That wasn't the point of his article at all. He watered down the presentation at several points by saying (quite helpful) things like:
"I do not believe in the 'strong form' of the EMH. No finance professional I know does. It is generally a straw man,"
and,
"Interestingly, by caveating the EMH, we have stumbled on an alternative conception entirely. The model I have described here somewhat resembles Andrew Lo's adaptive market hypothesis. Indeed, while I am very happy to maintain that most (liquid) markets are efficient, most of the time, the adaptive market model far more closely captures my views on the markets than any of the generic EMH formulations."
One passage in particular stuck out to me:
"Referring to it as a model makes it very clear that it's just an abstraction of the world, a description of the way markets should (and generally do) work, but by no means an iron law. It's just a useful way to think about markets."
This is where I'm not so sure. Yes, it's an abstraction, and no, it's not an iron law. But I don't think it's a terribly good abstraction, and I think the reason is that it subtly contradicts and elides what are, in fact, iron laws, or as close to iron laws as can be found in economics. It's a useful way to think about markets, to a point, but I want to explore what I think is a more useful way.
My argument will go through the following propositions, which serve as headings for their own sub-sections of discussion: value is subjective; uncertainty is not risk; economic complexity resists equilibria; markets aggregate prices, not information; and, markets tend to leverage efficiency.
I will conclude with some additional commentary on Andrew Lo's Adaptive Markets Hypothesis and Benoit Mandelbrot's interpretation of fractal geometry in financial markets, simply because, of all the reading around this topic that was thrown up by Nic's article, these two were by far the most intriguing. I didn't want to do either an injustice by bending their arguments too far to make them fit my own, but I think that they can be very fruitfully analysed with the conceptual tools we will have developed by the conclusion of the essay. I will also occasionally invoke the concepts of "reflexivity", as articulated by George Soros in The Alchemy of Finance, and several concepts popularised and articulated by Nassim Taleb, such as "skin in the game" and "robustness".
This might seem like an excessive coverage list just to offer a counter to the claim that markets are "efficient" – which seems pretty reasonable in and of itself. If it is at all reassuring to the reader before diving in, I don't think my thesis has five intimidating-sounding propositions, so much as one quite simple idea, from which many related propositions can be shown to follow. I think that, fundamentally, the efficient markets hypothesis is contradicted by the implications of value being subjective, and that some basic elements of complex systems are helpful, in places, to nudge the reasoning along. This essay is an attempt to tease these implications out.
Value is Subjective
You shouldn't compare apples and oranges, except that sometimes you have to, like when you are hungry. If apples and oranges are the same price, you need to make a decision that simply cannot be mathematised. You either like apples more than oranges, or vice versa. And actually, even this may not be true. Maybe you know full well you like oranges, but you just feel like an apple today, or you need apples for a pie recipe for which oranges would be très gauche. This reasoning is readily extended in all directions; which is _objectively_ better, a novel by Dickens or Austen? A hardback or an ebook by either, or anybody? And what about the higher order capital goods that go into producing apples, oranges, novels, Kindles, and the like? Clearly they are "worth" only whatever their buyer subjectively assesses as likely to be a worthwhile investment given the (again) subjective valuations of others as to the worth of apples, oranges, novels, and whatnot …
This is all fine and dandy; readily understood since the marginal revolution of Menger, Jevons, and Walras in the 1870s rigorously refuted cost and labour theories of value. As Menger put it in his magisterial Principles of Economics,
"Value is thus nothing inherent in goods, no property of them, nor an independent thing existing by itself. It is a judgment economizing men make about the importance of the goods at their disposal for the maintenance of their lives and well-being. Hence value does not exist outside the consciousness of men."
Fair enough. But the first seductive trappings of the EMH come from the rarely articulated assumption that such essential subjectivity is erased in financial markets because the goods in the market are defined only in terms of cash flows. There may not be a scientific answer as to whether apples are better than oranges, but surely $10 is better than $5? And surely $10 now is better than $10 in the future? But what about $5 now or $10 in the future?
There are (at least) two reasons this reductionism is misleading. The first comes from the mainstream neoclassical treatment of temporal discounting, which is to assume that only exponential discounting can possibly be "optimal". The widespread prevalence of alternative approaches – hyperbolic discounting, for example – is then usually treated via behavioural economics, as a deviation from optimality that is evidence of irrational cognitive biases.
This has been challenged by a recently published preprint by Alex Adamou, Yonatan Berman, Diomides Mavroyiannis, and Ole Peters, entitled The Microfoundations of Discounting (arXiv link here), which argues that the single assumption of an individual aiming to optimise the growth rate of her wealth can generate different discounting regimes, each optimal relative to the conditions by which her wealth grows in the first place. This in turn rests on the relationship between her current wealth and the payments that may be received. Sometimes the discounting that pops out is exponential, sometimes hyperbolic, sometimes something else entirely. It depends on her circumstances.
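For readers who like to see the shape of this on a screen, here is a minimal sketch in Python of the two discounting regimes just mentioned. It is purely illustrative – the rates are made-up assumptions, and it does not reproduce the paper's derivation – but it shows how the two functional forms agree over short horizons and diverge over long ones, which is exactly where circumstances start to matter.

```python
# Toy comparison of exponential vs hyperbolic discount factors.
# The rates r and k are arbitrary illustrative assumptions, not values
# taken from the Adamou-Berman-Mavroyiannis-Peters paper.
import math

def exponential_discount(t, r=0.05):
    """Discount factor e^(-r*t) under exponential discounting."""
    return math.exp(-r * t)

def hyperbolic_discount(t, k=0.05):
    """Discount factor 1/(1 + k*t) under hyperbolic discounting."""
    return 1.0 / (1.0 + k * t)

for years in (1, 5, 10, 30, 60):
    print(f"{years:>2}y  exponential: {exponential_discount(years):.3f}  "
          f"hyperbolic: {hyperbolic_discount(years):.3f}")
```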
I would editorialise here that an underlying cause of confusion is that people value time itself, and, naturally, do so subjectively. It may be fair enough to say that they typically want to use their time as efficiently as possible – or grow their wealth the fastest – but this is rather vacuous in isolation. Padding it out with circumstantial information immediately runs into the fact that everybody's circumstances are different. As Adamou said on Twitter shortly after the paper's first release, not many 90-year-olds play the stock market. It's funny because it's true.
And it is easy to see how this result can be used as a wedge to pry open a conceptual can of worms. In financial markets, there are far more variables to compare than just the discount rate – and if we can't even assess an objective discount rate, we really are in trouble! In choosing between financial assets we are choosing between non-deterministic streams of future cash flows, as well as (maybe – who knows?) desiring to preserve some initial capital value.
Assume these cash flows are "risky", in the sense that we can assign probabilities to their space of outcomes. In the following section, we will see that really the cash flows are not "risky", but "uncertain", which makes this problem even worse – but we can stick with "risky" for now as it works well enough to make the point. There can be no objective answer because different market participants could easily have different risk preferences, exposure preferences, liquidity needs, timeframes, and so on.
Timeframes are worth dwelling on for a second longer (there's "time" again) because this points to an ill-definition in my hasty setup of the problem: to what space of outcomes are we assigning probabilities, exactly? Financial markets do not have an end-point, so this makes no sense on the face of it. If we amend it by suggesting (obviously ludicrously) that the probabilities are well-defined for every interval's end-point, forever, then we invite the obvious criticism that different participants may care about different sequences of intervals. Particularly if their different discount rates (which we admitted they must have) have a different effect on how far in the future cash flows have to come to be discounted back to a value that is negligible in the present. Once again, people value time itself subjectively.
In the readily understood language employed just above, market participants almost certainly have different circumstances to one another, from which different subjective valuations will naturally emerge. What seems to you like a stupidly low price at which to sell an asset might be ideal for the seller because they are facing a margin call elsewhere in their portfolio (see Nic's cited example of what blew up LTCM despite it being a "rational bet"), or because they hold too much of this asset for their liking and want to rebalance their exposure. Or perhaps some price might seem stupidly high to buy, but the buyer has a funding gap so large that they need to invest in something that has a non-zero probability of appreciating by that much. If you _need_ to double your money, then the "risk-free asset" is infinitely risky. There is no right answer, because value is subjective.
Uncertainty is not Risk
"Risk" characterises a nondeterministic system for which the space of possible outcomes can be assigned probabilities. Expected values are meaningful and hence prices, if they exist in such a system, lend themselves to effective hedging. "Uncertainty" characterises a nondeterministic system for which probabilities _cannot_ be assigned to the space of outcomes. Uncertain outcomes cannot be hedged. This distinction in economics is usually credited to Frank Knight and his wonderful 1921 book, Risk, Uncertainty, and Profit. In the introduction, Knight writes,
"It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term 'uncertainty' to cases of the non-quantitive type. It is this 'true' uncertainty, and not risk, as has been argued, which forms the basis of a valid theory of profit and accounts for the divergence between actual and theoretical competition."
Keynes is also often credited with an excellent exposition,
"By 'uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty… Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed."
The conclusion of the Keynes passage is particularly insightful as it gets at why it is so important to be clear on the difference, which otherwise might seem like little more than semantics: people need to act. They will strive for a basis to treat uncertainty as if it were risk so as to tackle it more easily, but however successful they are or are not, they must act nonetheless.
The Knight extract hints at the direction of the bookâs argument, which I will summarise here: that profit is the essence of competitive uncertainty. Were there no uncertainty, but merely quantifiable risk in patterns of production and consumption, competition would drive all prices to a stable and commoditised equilibrium. In financial vocabulary, we would say there would be no such thing as a sustainable competitive advantage. The cost of capital would be the risk-free rate, as would all returns on capital, meaning profit is minimised. In aggregate, profit would function merely as a kind of force pulling all economic activity to this precise point of strong attraction.
But of course, uncertainty is very real, as Keynes' quote makes delightfully clear. I would argue, in fact, that in the economic realm it is a direct consequence of subjective value; in engaging in the pursuit of profit, you are guessing what others will value. As Knight later writes,
"With uncertainty present, doing things, the actual execution of activity, becomes in a real sense a secondary part of life; the primary problem or function is deciding what to do and how to do it."
So far I have danced around the key word and concept here, so as to try to let the reader arrive at it herself, but this "deciding what to do and how to do it", and "pursuing profit", we call entrepreneurship. In a world with uncertainty, the role of the entrepreneur is to shoulder the uncertainty of untried combinations of capital, the success of which will ultimately be dependent on the subjective valuations of others. This is not something that can be calculated or mathematised, as any entrepreneur (or VC) will tell you. As Ross Emmett noted in his centennial review of Risk, Uncertainty, and Profit, it is no coincidence that the word "judgment" appears on average every two pages in the book.
There are two points about the process of entrepreneurship that I believe ought to be explored further, and which lead us to Soros and Taleb: you can't just _imagine_ starting a business; you have to actually do it in order to learn anything. And, in order to do it, you have to expose yourself to your own successes and failures. Your experiment changes the system in which you are experimenting, and you will inevitably have a stake in the experiment's result.
This is fertile ground in which to plant Soros' theory of reflexivity. As briefly as possible, and certainly not doing it justice, Soros believes that financial markets are fundamentally resistant to truly scientific analysis because they can only be fully understood in such a way that acknowledges the fact that thinking about the system influences the system. He writes that the scientific method:
"is clearly not applicable to reflexive situations because even if all the observable facts are identical, the prevailing views of the participants are liable to be different when an experiment is repeated. The very fact that an experiment has been conducted is liable to change the perceptions of the participants. Yet, without testing, generalisations cannot be falsified."
All potential entrepreneurial activity is uncertain (by definition) but the fact of engaging in it crystallises the knowledge of its success or failure. The subjective valuations on which its success depends are revealed by the experiment, and you can't repeat the experiment pretending you don't now know this information. Alternatively, this can be conceived of in terms of the difference between thinking and acting, or talking and doing. In a reflexive environment, you can't say what would have happened had you done something, because, had you done it, you would have changed the circumstances that led to you now claiming you would have done it. As Yogi Berra (allegedly) said, "in theory there is no difference between theory and practice, but in practice, there is," and as Amy Adams vigorously proclaims in Talladega Nights, quite the treatise on risk and uncertainty by the way, and with a criminally underrated soundtrack, "Ricky Bobby is not a thinker. Ricky Bobby is a driver."
We can also now invoke "skin in the game", a phrase of dubious origin, but nowadays associated primarily with Nassim Taleb, and expounded in his 2017 book of the same name. Again, not doing it justice (he did write a whole book about this) Taleb believes that people ought to have equal exposure to the potential upsides and downsides of their decisions; "ought to" in both a moral sense of deserving the outcome, but also in the sense of optimal system design, in that such an arrangement encourages people to behave the most prudently out of all possible incentive schemes. It readily applies here in that braving the wild uncertainties of entrepreneurship requires capital – it requires a stake on which the entrepreneur _might_ get the upside of profit, but _might_ get the downside of loss. I say "might" because you cannot possibly know the odds of such a wager. It relies not on risk, but on uncertainty. As Taleb writes, "_entrepreneurs are heroes in our society. They fail for the rest of us._"
The combined appreciation of "judgment" and "skin in the game" is key to understanding what entrepreneurs are actually doing. They do not merely throw capital into a combinatorial vacuum; they are intuiting the wants and needs of potential customers. And here I simply cannot resist the opportunity to employ possibly my favourite quote from any economist, ever: as Alex Tabarrok says, a bet is a tax on bullshit. Or, don't talk; do.
My desk at work. It's good to keep these things in mind.
That same Emmett review of Knight's book noted that the very concept of _Knightian uncertainty_ re-emerged in the public consciousness around a decade ago due to two events: the role ironically played by financial risk instruments in the financial crisis, which neoclassical economists had up until that point insisted would reduce uncertainty in markets (search for "Raghuram Rajan Jackson Hole" if unfamiliar); and Taleb publishing the bestseller The Black Swan.
(Although, naturally, Taleb deplores the concept of "Knightian Uncertainty". What I believe Taleb truly objects to, however, is how the concept has come to be used, rather than anything Knight himself believed. Economists often invoke "Knightian Uncertainty" as a sleight of hand to demarcate some corner of reality, and imply that everywhere else is merely "risky" and can be modelled. This is nonsense. In real life, everything is uncertain, or, as Joseph Walker succinctly put it, "Taleb's problem with Knightian uncertainty is that there's no such thing as non-Knightian uncertainty." I think Knight would almost certainly agree, as would anybody who has actually _read_ Knight, instead of employing his name in the course of _macro-bullshitting_, as Taleb would put it.)
Writes Emmett,
"Taleb did not suggest that uncertainty could be handled by risk markets. Instead, he made a very Knightian argument: since you cannot protect yourself entirely against uncertainty, you should build robustness into your personal life, your company, your economic theory, and even the institutions of your society, to withstand uncertainty and avoid tragic results. These actions imply costs that may limit other aspects of your business, and even your openness to new opportunity."
But enough about entrepreneurship, what about financial markets? Well, financial markets are readily understood as one degree removed from entrepreneurship. With adequate mental flexibility, you can think of them as markets for fractions of entrepreneurial activity. Entrepreneurship-by-proxy, we might say. If you want, you can use them to mimic the uncertainty profile of an entrepreneur: your "portfolio" could be 100% the equity of the company you wish you founded. Or 200%, with leverage, if you are really gung-ho! But most people think precisely the opposite way: markets present the opportunity to tame the rabid uncertainty of entrepreneurship in isolation, and skim some portion of its aggregate benefit.
There is an additional complication. The fact of such markets usually being liquid enough to enable widespread ownership creates the incentive to think not about the underlying entrepreneurship at all, but only about the expectations of other market participants – to ignore the fundamentals and consider only the valuation. There are shades of Soros' reflexivity here. The market depends to some extent on the thinking of those participating in the market about _the market_. This is sometimes called a Keynesian beauty contest, after Keynes' analogy of judging a beauty contest not on the basis of who you think is most beautiful, but on the basis of who you think others will think is the most beautiful. But if everybody is doing that, then you really need to judge on the basis of who you think others will think others will think is the most beautiful, and so on. Unlike the entrepreneur, who must only worry about the subjective valuation of his potential customers, participants in financial markets must worry, in addition, about the subjective valuation _of this subjective valuation_ by other market participants. There is often grumbling at this point that this represents "speculation" as opposed to "investment", and I certainly buy the idea that over time periods long enough to reflect real economic activity allowed for by the investments, such concerns will make less and less of a difference. As Benjamin Graham famously said, "_in the short run, the market is a voting machine, but in the long run, it is a weighing machine_". But the voting still happens. It is clearly real and needs to be accounted for. Risk is once again useless. Uncertainty abounds.
This range of possibilities is intriguing and points to a deeper understanding of what financial markets really are: the aim of a great deal of finance is to grapple with the totality of uncertainty inherent in entrepreneurial activity – equally well understood as "investment of capital", given the need for a "stake" – by partitioning it into different exposures that can sensibly be described as _relatively_ more or less "risky". The aim of doing so is generally to minimise the cost of capital going towards real investment by tailoring the packaging of uncertainty to the "risk profiles" of those willing to invest, as balanced by escalating transaction costs if this process becomes too fine-grained.
This is the essence of a capital structure: the more senior the capital claim, the better defined the probability space of outcomes for that instrument. Uncertainty in aggregate cannot be altered, nor can its influence be completely removed from individual instruments, but exposure to uncertainty can be unevenly parcelled out amongst instruments.
This suggests a far more sophisticated understanding of "the risk/reward trade-off" and "the equity premium" than is generally accepted in the realm of modern portfolio theory, and, by extension, the EMH: bonds are likely to get a lower return than stocks not because they are less "risky" (which in that context is even more questionably interpreted as "less volatile") but because they are engineered to be less uncertain. The burden of uncertainty is deliberately shifted from debt to equity. You don't get a "higher reward" for taking on the "risk/volatility" of equities; you deliberately expose yourself to the uncertain _possibility_ of a greater reward in exchange for accepting an uncertain possibility of a greater loss.
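To make the parcelling-out point concrete, here is a minimal Monte Carlo sketch. All of the numbers – the lognormal draw for enterprise value, the face value of the debt – are illustrative assumptions, not a model of any real firm; the point is only that the same underlying uncertainty produces a tightly bounded outcome space for the senior claim and a wide-open one for the residual claim.

```python
# Minimal sketch: one uncertain enterprise value, split into a senior debt
# claim (capped at face value) and a residual equity claim. All parameters
# are illustrative assumptions.
import math
import random
import statistics

random.seed(42)
FACE_VALUE = 80.0        # what the debt holders are owed
N_SCENARIOS = 100_000

debt, equity, firm = [], [], []
for _ in range(N_SCENARIOS):
    v = random.lognormvariate(math.log(100), 0.5)   # uncertain enterprise value
    firm.append(v)
    debt.append(min(v, FACE_VALUE))                 # senior: paid first, capped
    equity.append(max(v - FACE_VALUE, 0.0))         # residual: whatever is left

for name, payoffs in (("firm", firm), ("debt", debt), ("equity", equity)):
    print(f"{name:>6}  mean {statistics.mean(payoffs):7.2f}   "
          f"stdev {statistics.pstdev(payoffs):7.2f}")
```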
It is worth pondering for a second that this is arguably why the "equity risk premium" even exists (and why neoclassical economists are so confused about it, while financial professionals are not in the slightest) – if there really were no uncertainty in investment and every enterprise – and hence every financial instrument linked to it – had a calculable risk profile, then price discrepancies derivable from expectation values could be arbitraged away. There would be no equity risk premium – nor a risk premium of any kind on any asset. Everything would be priced correctly and volatility would be zero. That volatility is _never_ zero clearly invalidates this idea. I suggest that the distinction between risk and uncertainty provides at least part of the explanation: unless, by remarkable coincidence, every market participant's opportunity costs (of exposure, liquidity, time, etc.) and perceptions of uncertainty (of fundamentals, others' perceptions of fundamentals, others' perceptions of others' perceptions, etc.) are all identical, and remain so over a period of time, price-altering trading will occur.
(I will note in passing that this commentary is merely intended to provide the intuition that something is amiss with the "equity premium puzzle". It is incomplete as an explanation. In the later section on leverage efficiency, I will cover Peters' and Adamou's more formal proof of the puzzle's non-puzzliness.)
An important concept to appreciate in the context of uncertainty is that of "heuristics". This is a loose end to tie up before moving on from uncertainty, along with one more, "randomness and unpredictability", which I cover shortly. This is quite a simple idea that originates with Herbert Simon and has been taken up with force more recently by Gerd Gigerenzer of the Max Planck Institute for Human Development, and more obliquely by Taleb. Simon's framing began by assuming that individuals do not in fact have perfect information, nor the resources to compute perfectly optimal decisions. Given these constraints, Simon proposed that individuals demonstrate bounded rationality; they will be as rational as they can given the information and resources they actually have. This probably sounds straightforward enough – perhaps tautological – but notice it flies in the face of behavioural economics, which tends to cover for neoclassical economics by saying, effectively, that since information and competition are perfect, risk is always defined and the optimal decision can always be calculated, but the reason people don't do so is that they are hopelessly irrational. I have always thought this is quite silly on the face of it, but it is clearly also seductive. Anybody reading the likes of Thinking, Fast and Slow immediately gets the intellectual rush of thinking everybody is stupid except him.
Bounded rationality encourages the development of "heuristics", which the reader may recall behavioural economists railing against. A heuristic is effectively a rule of thumb for dealing with an uncertain environment that you are pretty sure will work even if you can't explain why, precisely. The classic example is that of a dog and a frisbee, or an outfielder in baseball catching a fly ball: the outfielder _could_ solve enough differential equations to calculate the spot the ball will land, but the dog certainly can't. And it turns out that neither do: in real life, they adjust their running speed and direction such that the angle at which they see the frisbee or ball stays constant. And it works. No equations required. Yippee.
(Farrington's Heuristic is another good example, which I made up while editing a later version of this essay – if a writer is discussing risk, uncertainty, knowledge, and the like, if he refers to Gödel's Incompleteness Theorems, and if he is not obviously joking, then everything else he says can immediately be dismissed because he is a bullshitting charlatan enamoured by cargo cult math. This heuristic has only one binary parameter – "is he joking?" – and so is highly robust. In case anybody cares, the theorems are NOT about "knowledge": they are about provability within first order formal logical theories strong enough to model the arithmetic of the natural numbers. This is quite a specific mathematical thing that bears no relation whatsoever to epistemology or metaphysics. Also, there are two of them, which turns out to be important if you understand what the first one says.)
The implied simplicity of heuristics has subtle mathematical importance, also. A more technical way of specifying this is to say that they have very few parameters – discrete, independent information inputs to the decision procedure – ideally they could even have zero. In a purely risky environment (if such a thing exists, which, in real life, it almost certainly does not) a decision procedure ought to have as many _parameters_ as are needed to capture the underlying probability distribution. But the more uncertainty you add to such an environment, the more dangerous this becomes, essentially because what you are doing is fine-tuning your model to an environment that simply no longer exists. Eventually you will get an unforeseen fluctuation so large that your overfitted model gives you a truly awful suggestion. Heuristics are _robust_ to such circumstances in light of having very few parameters to begin with. Think back to the outfielder: imagine he solves all the necessary fluid dynamical equations, taking account of the fly ball's mass, velocity, and rotation, the air's viscosity, the turbulence generated, and so on. If there is then a gust of wind, he's screwed. His calculation will be completely wrong. But if he embraces the heuristic of _just looking at the damn ball_ this won't matter!
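For the programmatically inclined, here is a minimal sketch of the "gust of wind" problem. Everything in it is an illustrative assumption – a gentle trend plus noise, a many-parameter polynomial against a crude two-parameter rule – but it captures the mechanism: the finely tuned model is tuned to an environment that no longer exists once conditions shift.

```python
# Minimal sketch: a many-parameter model vs a crude rule of thumb, evaluated
# after the environment shifts. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# The "environment" the models are tuned to: a gentle trend plus noise.
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(0.0, 0.5, size=x_train.size)

overfit = np.polyfit(x_train, y_train, deg=7)   # many parameters, finely tuned
simple = np.polyfit(x_train, y_train, deg=1)    # a crude rule of thumb

# The "gust of wind": conditions move outside the fitted range and shift a little.
x_test = np.linspace(1.0, 1.3, 10)
y_test = 2.0 * x_test + 1.0

def mse(coeffs):
    return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

print(f"many-parameter model, out-of-sample MSE: {mse(overfit):.1f}")
print(f"two-parameter rule,   out-of-sample MSE: {mse(simple):.1f}")
```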
Interested readers are encouraged to peruse Gigerenzer's recent(ish) work on the use of heuristics in finance, and their abuse in behavioural economics, which recently got a shout out in Bloomberg, or this video which is a great introduction to Gigerenzer's ideas, as well as their connection to Taleb's more informal thinking on the same topic. (Also, note the number of times Gigerenzer uses the word "complex". It is no coincidence that this is a lot, as we shall shortly see):
The video linked to is around an hour and a half, so the reader need not take such a detour now, but I would encourage it at some point, as both Gigerenzer and Taleb are excellent. My favourite excerpt comes around the 19-minute mark, when Gigerenzer recalls that Harry Markowitz – considered the founder of modern portfolio theory – didn't actually use any Nobel-prize winning modern portfolio theory for his own retirement portfolio; he used the zero-parameter 1/n approach. If one were being especially mean-spirited, one might say that he didn't want his own bullshit to be taxed. And as it turns out, in order for the Markowitz many-many-many parameter approach to investing to consistently outperform 1/n, you would need around 500 years of data to finetune the parameters. Of course, you also need the market to _not change at all_ in that time. Good luck with that.
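A quick back-of-the-envelope calculation, with entirely made-up but not unreasonable numbers (and so not the same calculation Gigerenzer cites), gives a feel for why the data requirement is so absurd: fine-tuning weights means estimating differences in mean returns, and the sampling noise in those estimates shrinks agonisingly slowly.

```python
# Back-of-the-envelope: how long before the estimated difference in mean
# returns between two assets is even as large as its own standard error?
# sigma and edge are illustrative assumptions, not estimates from any data.
sigma = 0.04   # monthly standard deviation of each asset's returns
edge = 0.001   # true difference in mean monthly returns

# Standard error of the estimated difference over T months is sigma * sqrt(2/T);
# setting it equal to the edge itself gives T = 2 * (sigma / edge)^2.
months = 2 * (sigma / edge) ** 2
print(f"{months:.0f} months, i.e. roughly {months / 12:.0f} years of data")
```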
Since markets feature multitudes of interrelated uncertainties, it is reasonable to expect participants to interact with them not with the perfect rationality of provably optimal behaviour, but with the bounded rationality of heuristics, which are selected on the basis of judgment, intuition, creativity, etc. Basically, people mostly are not stupid. And if they are, they have skin in the game, so they get punished, and possibly wiped out.
A kind of nice, conceptual corollary to "risk is not uncertainty" is, "unpredictability is not randomness". There can be unpredictable events that are not random, and randomness that is not unpredictable. The difference essentially comes down to "causation". Think of Keynes' example of the obsolescence of a new invention. This is "unpredictable" not because it is subject to an extremely complicated probability density function, but because the path of causation that would lead to such a situation involves too much uncertainty to coherently grasp. Or think of the bitcoin mining process. The time series of the first non-zero character in the hash of every block is certifiably random, but it is not unpredictably random. It is the result of a highly coordinated and purposeful effort. It doesn't spring forth from beta decay. Because we understand the causal process by which this time series emerges, we can predict this randomness very effectively.
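To see the mining example in miniature, here is a toy sketch. The "difficulty" of one leading zero is an assumption chosen purely so the loop runs in a moment – real Bitcoin targets are astronomically harder – but the shape of the result is the same: the first non-zero character of each "valid" hash is as random as you like, despite emerging from an entirely purposeful, well-understood procedure.

```python
# Toy proof-of-work: hash candidate "headers" until they meet a (very easy)
# difficulty target, then tabulate the first non-zero character of each
# winning hash. The difficulty and header format are illustrative assumptions.
import hashlib
from collections import Counter

DIFFICULTY_PREFIX = "0"     # toy target: hash must start with a single zero
counts = Counter()

for nonce in range(200_000):
    digest = hashlib.sha256(f"toy-block-header-{nonce}".encode()).hexdigest()
    if digest.startswith(DIFFICULTY_PREFIX):            # a "valid" toy block
        first_nonzero = next(c for c in digest if c != "0")
        counts[first_nonzero] += 1

# Roughly uniform over 1..f: random, but produced by a deliberate process.
print(dict(sorted(counts.items())))
```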
A key building block of the EMH is the "random walk hypothesis": the idea that you can "prove" using statistical methods that stock prices follow "random walks" – a kind of well-defined and genuinely random mathematical behaviour. But you can do no such thing. You can prove that they are _indistinguishable_ from random walks, but that is really just saying you can use a statistical test to prove that some data can pass a statistical test. If you _understand_ what _causes_ price movements, you will arrive at no such nonsense as claiming that the moves are, themselves, random. They very probably _look_ random because they are fundamentally unpredictable from the data. And they are fundamentally unpredictable from the data because they derive from the incalculable interplay of millions of market participants' subjective assessments of the at-root uncertain process of entrepreneurship.
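Here is a small sketch of that "indistinguishable from a random walk" point, under an assumption I should flag loudly: I use the logistic map, a completely deterministic rule, as a stand-in for "price changes with a definite cause", which is obviously not a model of any actual market. The punchline is that a naive autocorrelation check cannot tell its increments apart from genuine noise.

```python
# Increments from a fully deterministic rule (the logistic map at r = 4) vs
# genuinely random increments, both run through the same naive check: are the
# sample autocorrelations inside the usual ~95% white-noise band of 1.96/sqrt(N)?
import numpy as np

N = 5000
rng = np.random.default_rng(7)

x = np.empty(N)
x[0] = 0.3
for i in range(1, N):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])   # deterministic chaos
deterministic = x - x.mean()

random_noise = rng.normal(0.0, deterministic.std(), N)

def max_abs_autocorr(series, max_lag=10):
    s = series - series.mean()
    denom = float(np.dot(s, s))
    return max(abs(float(np.dot(s[:-k], s[k:])) / denom) for k in range(1, max_lag + 1))

band = 1.96 / np.sqrt(N)
for name, series in (("deterministic", deterministic), ("random", random_noise)):
    print(f"{name:>13}: max |autocorrelation| {max_abs_autocorr(series):.3f} "
          f"(white-noise band ~{band:.3f})")
```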
None of this is based on randomness, nor "risk", nor "luck". It is based on the unknown and unknowable profit that results from intuiting the results of untried and unrepeatable experiments and backing one's intuition with skin in the game.
Before moving on, I think it is worth tying all of this to where it is more tangibly sensible, lest the reader not quite know what to do with it all. A big deal was made recently about Netflix being by far the best performing US mid-to-large-cap stock of the 2010s. Netflix is useful as an example because of the scale of its success, but note the following argument does not depend on scale at all. While you could craft an explanation as complicated as you like, I think saying, streaming is better than cable, pretty much does it, once added to all the circumstantial factors to do with the competitive and technological environment. Now imagine an investor in 2010 whose thesis was that streaming is better than cable and would likely win in the long run, who surveyed the competitive environment, and decided Netflix would be a good investment. Is their outperformance over the next 10 years "luck"? Was all the "information" "in the price" in 2010? Would the CAPM tell you what the price _should_ have been? Did the stock go for a nice little random walk to the moon?
This is clearly an insane interpretation. Consider the alternative: the investor better intuited the subjective values of future consumers than did the average market participant. Very likely she justified this on the basis of a heuristic or two. She staked capital on this bet – which was not risky and random, but uncertain and unpredictable – and exposed herself to a payoff that turned out to be huge, because she was right! To the peddlers of the EMH, rational expectations, perfect information, and the like, this obviously sensible interpretation is utterly heretical.
Economic Complexity Resists Equilibria
The link between profit and entrepreneurship can be tugged at ever-so-slightly further, and invites a brief detour into the basics of complex systems. The argument goes more or less as follows: the discussion on uncertainty needn't be interpreted as a call to abandon mathematical analysis altogether – just the sloppy mathematics of risk and randomness that has effectively no connection to the real world. There is an alternative mathematical approach, however, which directly addresses and contradicts the standard neoclassical formalism.
The starting point is Israel Kirzner, widely considered one of the foremost scholars of entrepreneurship, and his book, _Competition and Entrepreneurship_. One of Kirzner's theses is a positive argument that has roughly two parts, as follows: first, entrepreneurship is by its nature non-exclusionary. It is a price discrepancy between the costs of available factors of production and the revenues to be gained by employing them in a particular way – or, profit. In other words, it is perfectly competitive. It does not rely on any privileged position with respect to access to assets; the assets are presumed to be available on the market. They are just not yet employed in that way, but could be, with capital that is presumably homogeneous. Anybody could do so. The only barrier is that of the willingness to judge and stake on uncertainty. He writes,
"The entrepreneur's activity is essentially competitive. And thus competition is inherent in the nature of the entrepreneurial market process. Or, to put it the other way around, entrepreneurship is inherent in the competitive market process."
This notion of what "competition" really means is highly antithetical to the neoclassical usage. In fact, it is more or less the exact opposite. Rather than meaning something like, _tending towards abnormal profit and hence away from equilibrium_, the neoclassicals mean, _tending towards equilibrium and hence away from abnormal profit_. Kirzner bemoans this,
"Clearly, if a state of affairs is to be labelled competitive, and if this label is to bear any relation to the layman's use of the term, the term must mean either a state of affairs from which competitive activity (in the layman's sense) is to be expected or a state of affairs that is the consequence of competitive activity … [Yet] competition, to the equilibrium price theorist, turned out to refer to a state of affairs into which so many competing participants have already entered that no room remains for additional entry (or other modification of existing market conditions). The most unfortunate aspect of this use of the term 'competition' is of course that, by referring to the situation in which no room remains for further steps in the competitive market process, the word has come to be understood as the very opposite of the kind of activity of which that process consists. Thus, as we shall discover, any real-world departure from equilibrium conditions came to be stamped as the opposite of 'competitive' and hence, by simple extension, as actually 'monopolistic'."
I'd note in passing the delightful similarity in the concluding thought of this extract to the argument of Peter Thiel's Zero to One, considered by many a kind of spiritual bible for – you guessed it – entrepreneurship. Anyway …
Kirzner's second positive argument is that correcting this conceptual blunder leads one to realise that a realistic description of competitive markets would be not as constantly at equilibrium, but rather as constantly out of equilibrium. And that's really all we need to move on to complex systems.
Complex systems are commonly associated with the Santa Fe Institute, and popularised by W. Mitchell Waldrop's fantastic popular science book, _Complexity_. Waldrop focuses, for the most part, on one of the SFI's first ever workshops, held between a group of physicists and economists in 1987. The proceedings of the workshop are fantastic, have aged very well, and seem to your author cheap relative to his subjective valuation of them at ~$70 in paperback or ~$140 in hardback. My thinking here comes from the very first paper of the workshop, W. Brian Arthur's now somewhat infamous work on increasing returns. To get a sense of what I mean by "infamous", consider the following from Waldrop:
"Arthur had convinced himself that increasing returns pointed the way to the future for economics, a future in which he and his colleagues would work alongside the physicists and the biologists to understand the messiness, the upheaval, and the spontaneous self-organisation of the world. He'd convinced himself that increasing returns could be the foundation for a new and very different kind of economic science.
Unfortunately, however, he hadn't much luck convincing anybody else. Outside of his immediate circle at Stanford, most economists thought his ideas were – strange. Journal editors were telling him that this increasing-returns stuff 'wasn't economics.' In seminars, a good fraction of the audience reacted with outrage: how dare he suggest that the economy was not in equilibrium!"
Readers can probably sense where this is going.
Arthur's paper at the workshop, Self-Reinforcing Mechanisms in Economics, is a breath of fresh air if you have ever slogged through the incessant cargo cult math of neoclassical financial economics (as I had to in researching this essay – thanks a lot, Nic!). It is frankly just all so sensible! Okay, so there are a few differential equations, but only after ten pages of things that are obviously true, and only to frame the obviously true observations in the absurd formalism of the mainstream.
To begin with, "conventional economic theory is built largely on the assumption of diminishing returns on the margin (local negative feedbacks); and so it may seem that positive feedback, increasing-returns-on-the-margin mechanisms ought to be rare." Standard neoclassical theory assumes competition pushes all into equilibrium, from which a deviation is punished by the negative feedback of reduced profits. So far, so good.
"Self-reinforcement goes under different labels in these different parts of economics: increasing returns; cumulative causation; deviation-amplifying mutual causal processes; virtuous and vicious circles; threshold effects; and non-convexity. The sources vary. But usually self-reinforcing mechanisms are variants of or derive from four generic sources: large set-up or fixed costs (which give the advantage of falling unit costs to increased output); learning effects (which act to improve products or lower their cost as their prevalence increases); coordination effects (which confer advantages to 'going along' with other economic agents taking similar action); and adaptive expectations (where increased prevalence on the market enhances beliefs of further prevalence)."
Now we are getting into the meat of it. An example or two wouldn't hurt before applying this to entrepreneurship and markets.
Arthur likes Betamax versus VHS – which is a particularly good example in hindsight because we know that VHS won despite being mildly technologically inferior. Point number 1: If a manufacturer of VHS tapes spends an enormous amount on the biggest VHS (or Betamax) factory in the world, then the marginal costs of producing VHS will be lower from that point on. Even if the factory as a whole is loss making, the costs are sunk, and so the incentive is to pump out VHS by the gallon. The fact that this can be done so cheaply makes consumers more likely to choose VHS over Betamax, which will in turn justify the initial expense and contribute positive feedback (via profit).
Point number 2: doing so may give the owner of the factory the experience to learn how to do so even more efficiently in the future. By the same eventual mechanism as above, this contributes positive feedback via lower prices. (Interested readers are encouraged to look into "Wright's Law", in particular a recent paper by Béla Nagy, Doyne Farmer, Quan Bui, and Jessika Trancik, which basically says that Moore's Law happens for everything, just slower; or, we learn by doing.)
Points number 3 and 4: if more people seem to be buying VHS tapes than Betamax, then producers of Betamax _players_ are incentivised to shift production towards VHS players instead. Cheaper VHS players incentivise consumers to buy more VHS _tapes_. The _appearance_ of VHS winning this battle causes economic agents to adapt their behaviour in such a way that makes VHS more likely to _actually_ win. In glancing over an early draft of this essay, Nic kindly pointed out to me that this represents the dominant philosophy behind growth VC from 2015 until WeWorkGate, as if a bunch of zealous, born-again Arthurians were playing a game of non-iterated prisoner's dilemma with other people's money. Anyway …
Arthur writes, "if Betamax and its rival VHS compete, a small lead in market share gained by one of the technologies may enhance its competitive position and help it further increase its lead. There is positive feedback. If both systems start out at the same time, market shares may fluctuate at the outset, as external circumstances and 'luck' change, and as backers manoeuvre for advantage. And if the self-reinforcing mechanism is strong enough, eventually one of the two technologies may accumulate enough advantage to take 100% of the market. Notice however we cannot say in advance _which_ one this will be."
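Arthur's verbal description translates almost directly into a toy simulation. The one below is a crude nonlinear urn model, with an entirely made-up "feedback" exponent standing in for all four self-reinforcing mechanisms at once: each new adopter favours the side with more existing adopters, and the market locks in to near-monopoly, with the winner decided by the early, essentially accidental draws.

```python
# Crude increasing-returns adoption model: the probability of choosing a side
# grows superlinearly with its installed base, so one side locks in. The
# feedback exponent and all other parameters are illustrative assumptions.
import random

def simulate(n_adopters=10_000, feedback=2.0, seed=None):
    rng = random.Random(seed)
    counts = {"VHS": 1.0, "Betamax": 1.0}
    for _ in range(n_adopters):
        weights = {k: v ** feedback for k, v in counts.items()}   # increasing returns
        p_vhs = weights["VHS"] / (weights["VHS"] + weights["Betamax"])
        choice = "VHS" if rng.random() < p_vhs else "Betamax"
        counts[choice] += 1
    total = sum(counts.values())
    return {k: round(v / total, 3) for k, v in counts.items()}

for run in range(5):
    # Near-monopoly every time; which side wins depends on the early draws.
    print(simulate(seed=run))
```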
While Arthur mostly considers realistic examples in economics which have discrete end-states that are then "locked into", such as settling on VHS over Betamax, or Silicon Valley over Massachusetts Route 128, my contention would be that every one of these features describes a part of the process of entrepreneurial competition. The fact of staking capital at all towards an uncertain end represents a fixed cost which must be matched by competitors, and after which unit costs fall. As we have mentioned several times, entrepreneurs learn from the result of their experiments and improve their own processes. There is a clear coordination effect for customers in the default assumption of doing whatever other customers are doing. And adaptive expectations are likewise fairly straightforwardly applied: we tend to assume that businesses will continue to exist and that we can continue to act as their customers. Businesses tend to assume the same of their customers within reasonable bounds of caution. The specific positive feedback as a result of each individual effect is that of "profit" – it is positive in the sense that it can be reinvested in the enterprise and allow it to grow.
Of course, it is possible that these effects would diminish and the marginal feedback become negative. But what that would mean, more tangibly, is that any once-existing competitive advantage has been completely eroded away. This only happens when the product itself becomes either obsolete in light of a superior competitor, or completely commoditised. The former is simply more of the same at the macro level, but the latter we can in turn explain by uncertainty becoming so minimal that we can more or less safely assume it is merely risk. Such circumstances are few and far between. Uncertainty is prevalent in all aspects of economic life, as we have discussed. My argument here is that increasing returns and positive feedback loops are, therefore, just as prevalent.
To bring in Arthur one last time:
"if self-reinforcement is not offset by countervailing forces, local positive feedbacks are present. In turn, these imply that deviations from certain states are amplified. These states are therefore unstable. If the vector-field associated with the system is smooth and if its critical points – its 'equilibria' – lie in the interior of some manifold, standard Poincaré-index topological arguments imply the existence of other critical points or cycles that are stable, or attractors. In this case multiple equilibria must occur. Of course, there is no reason that the number of these should be small. Schelling gives the practical example of people seating themselves in an auditorium, each with the desire to sit beside others. Here the number of steady-states or 'equilibria' would be combinatorial."
Recall there is no way to know from the starting point which steady-state will be settled into. And of course, Arthur is only talking about specific economic circumstances, not the aggregate of all economic behaviour. The aggregate will likely have shades of evolution in a competitive environment (another concept we will soon encounter in more detail): many, many such interdependent sub-systems, always moving towards their own steady state, but almost all never getting there. And so, in summary, there is a solid mathematical basis to saying that economic behaviour in aggregate is wildly uncertain.
Before moving on, I just want to mention that Arthur should almost certainly be better known and respected in Bitcoin circles. Readers uninterested in the connection I am proposing between Bitcoin and complex systems (or unimpressed by my amateur passion for both) can skip ahead without missing anything. Arthur's 2013 paper, Complexity Economics, is an excellent place to start. Likewise, a good argument can be made that complex systems researchers should be a lot more interested in Bitcoin. Readers may well have picked up on the essence of Arthur's analysis consisting of "network effects". I avoided using the term because Arthur himself doesn't use it. But he is considered the pioneer of their analysis in economics, and when you think about it, the concept of "increasing returns" makes perfect sense in the context of a network. What greater competitive advantage can you have than everybody needing to use your product simply because enough people already use it? And what product do people need to use solely because others are using it more than "money"?
Although I have eschewed the idea of "lock-in" as helpful for the analysis above, Bitcoin surely has amongst the strongest interdependent network effects of any economic phenomenon in history? Is it not a naturally interdisciplinary complex adaptive system par excellence? Is it not a form of artificial life, coevolved with economising humans in the ecology of the Internet? I mean, for goodness' sake, Andreas Antonopoulos claims to have put ants on the cover of Mastering Bitcoin because,
"the highly intelligent and sophisticated behaviour exhibited by a multimillion-member ant colony is an emergent property from the interaction of the individuals in a social network. Nature demonstrates that decentralised systems can be resilient and can produce emergent complexity and incredible sophistication without the need for a central authority, hierarchy, or complex parts."
Back in the SFI workshop, Arthur writes,
"When a nonlinear physical system finds itself occupying a local minimum of a potential function, 'exit' to a neighbouring minimum requires sufficient influx of energy to overcome the 'potential barrier' that separates the minima. There are parallels to such phase-locking, and to the difficulties of exit, in self-reinforcing economic systems. Self-reinforcement, almost by definition, means that a particular equilibrium is locked in to a degree measurable by the minimum cost to effect changeover to an alternative equilibrium."
I'm not sure anybody can sensibly describe what such a "minimum cost" would be. Particularly because Bitcoin is set up in such a way that any move away from lock-in by one metric causes a disproportionate pull back to lock-in by another. It's Schelling points all the way down.
Markets Aggregate Prices, Not Information
The most frustrating thing about the EMH for me is that even the framing is nonsensical. You don't really need to get into subjective value, uncertainty, complex systems, and so on, to realise that in reading the proposition, prices reflect all available information, you have already been hoodwinked (hoodwunk?). What does "reflect" mean?
Nic dramatically improved upon this by saying that markets aggregate information. I noticed this is typical of many more enlightened critiques of EMH, and it serves as a far better starting point, in at least _suggesting_ a mechanism by which the mysterious link between information and price might be instantiated. Unfortunately, I think the mechanism suggested is simply invalid. It is not realistic at all and it implicitly encourages a dramatic misunderstanding of what prices really are and where they come from.
In making sense of this we have to assume some kind of "function" from the space of information to price. I think it's acceptable to mean this metaphorically, without implying the quasi-metaphysical existence of some such force. We might really mean something like, _the market behaves as if operating according to such and such a function_. Adam Smith's "invisible hand" is an instructive comparison. For the time being, I will talk as if some such function "exists".
We can maybe imagine information as existing as a vector in an incredibly high-dimensional space, at least as compared to price, which is clearly one-dimensional. We could even account for the multitudes of uncertainty we have already learned to accept by suggesting that each individual's subjective understanding of all the relevant factors and/or ignorance of many of them constitutes a unique mapping of this space to itself, such that the "true information vector" is transformed into something more personal for each market participant. Perhaps individuals then bring this personal information vector to the market, and what the market does is _aggregate_ all the vectors by finding the average. Finally, the market _projects_ this n-dimensional average vector onto the single dimension of price. If you accept the metaphorical nature of all these functions, I can admit this model has some intuitive appeal, in the vein of James Surowiecki's _The Wisdom of Crowds_.
The problem is that this is clearly not how anybody actually interacts with markets. You don't submit your n-dimensional information/intention-vector; you submit your one-dimensional price. That's it. The market aggregates these one-dimensional price submissions in real time by matching the flow of marginal bids and asks.
This understanding gets two birds stoned at once. First, it captures the mechanics of how we know price discovery in markets actually works. There is no mysterious, market-wide canonical projection function – no inexplicable "prices reflect information" – there are just prices, volumes, and the continuous move towards clearing.
Second, it implies a perfectly satisfactory and not at all mysterious source of the projection of information into price: individuals who make judgments and act. Any supposedly relevant âinformationâ is subject both to opportunity cost and uncertainty. Individuals alone know the importance of their opportunity costs, and individuals alone engage with uncertainty with heuristics, judgment, and staking. If individuals are wrong, they learn. If they are very wrong, they are wiped out. Effective heuristics live to fight another day.
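To make the contrast concrete, here is a minimal sketch of a batch auction clearing on nothing but submitted prices and sizes. The orders are made-up numbers; the point is that whatever information, circumstances, or hunches lay behind them has already been collapsed into a single dimension before the market ever sees it.

```python
# Minimal batch auction: find the price at which the most volume clears from
# one-dimensional limit orders. The orders themselves are illustrative assumptions.
bids = [(101, 5), (100, 10), (99, 20), (97, 15)]   # (limit price, size) from buyers
asks = [(96, 5), (98, 10), (100, 20), (102, 15)]   # (limit price, size) from sellers

def clearing_price(bids, asks):
    candidates = sorted({price for price, _ in bids} | {price for price, _ in asks})
    best_price, best_volume = None, -1
    for p in candidates:
        demand = sum(size for price, size in bids if price >= p)   # willing to pay p
        supply = sum(size for price, size in asks if price <= p)   # willing to sell at p
        if min(demand, supply) > best_volume:
            best_price, best_volume = p, min(demand, supply)
    return best_price, best_volume

price, volume = clearing_price(bids, asks)
print(f"clearing price: {price}, matched volume: {volume}")
```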
I am genuinely surprised that this confusion continues to exist in the realm of the EMH, given that, as far as I am concerned, Hayek cleared it up in its entirety in The Use of Knowledge in Society. A superficial reading of Hayekâs ingenious essay might lead one to believe something like prices reflect information. But, to anachronistically borrow our function metaphor once more, Hayek points out that the projection from the n-dimensions of information to the one dimension of price _destroys an enormous amount of information. _Which is the whole point! Individuals are incapable of understanding _the entirety of information in the world. _Even the entirety of individuals is incapable of this. Thanks to the existence of markets, nobody has to. They need only know about prices. âPerfect informationâ is once again shown to be an absurdity. Of the âman on the spotâ, whom we might hope would make a sensible decision about resource allocation,
"There is hardly anything that happens anywhere in the world that might not have an effect on the decision he ought to make. But he need not know of these events as such, nor of all their effects. It does not matter for him why at the particular moment more screws of one size than of another are wanted, why paper bags are more readily available than canvas bags, or why skilled labor, or particular machine tools, have for the moment become more difficult to obtain. All that is significant for him is how much more or less difficult to procure they have become compared with other things with which he is also concerned, or how much more or less urgently wanted are the alternative things he produces or uses. It is always a question of the relative importance of the particular things with which he is concerned, and the causes which alter their relative importance are of no interest to him beyond the effect on those concrete things of his own environment."
Hayek proposes this be resolved by the price mechanism:
"Fundamentally, in a system in which the knowledge of the relevant facts is dispersed among many people, prices can act to coordinate the separate actions of different people in the same way as subjective values help the individual to coordinate the parts of his plan."
Perhaps ironically, this points to the only sensible way in which markets _can_ be called "efficient". They are efficient with respect to the information they manipulate and convey: as a one-dimensional price, it is the absolute minimum required for participants to interpret and sensibly respond. Markets have excellent social scalability; they are the original _distributed systems_, around long before anybody thought to coin that expression.
Interestingly, this meshes very nicely with the complex systems approach to economics associated with Arthur at SFI, and perhaps more specifically with John Holland. His paper at the aforementioned inaugural economics workshop, _The Global Economy as an Adaptive Process_, at seven pages and zero equations, is well worth a read. Holland recounts many, now familiar, difficulties in mathematical analyses of economics that assume linearity, exclusively negative feedback loops, equilibria, and so on, before proposing that "the economy" is best thought of as what he calls an "adaptive nonlinear network". Its features are worth exploring, even if they require some translation:
"Each rule in a classifier system is assigned a strength that reflects its usefulness in the context of other active rules. When a rule's conditions are satisfied, it competes with other satisfied rules for activation. The stronger the rule, the more likely it is to be activated. This procedure assures that a rule's influence is affected by both its relevance (the satisfied condition) and its confirmation (the strength). Usually many, but not all, of the rules satisfied will be activated. It is in this sense that a rule serves as a hypothesis competing with alternative hypotheses. Because of the competition there are no internal consistency requirements on the system; the system can tolerate a multitude of models with contradictory implications."
We could easily translate "rule" as "entrepreneurial plan" or something similar. Entrepreneurial plans can contradict one another, clearly — if they are bidding on the same resources for a novel combination — and can and do compete with one another. Clearly, such plans are hypotheses about the result of an experiment that hasn't been run yet. Holland then says,
"A rule's strength is supposed to reflect the extent to which it has been confirmed as a hypothesis. This, of course, is a matter of experience, and subject to revision. In classifier systems, this revision of strength is carried out by the bucket-brigade credit assignment algorithm. Under the bucket-brigade algorithm, a rule actually bids a part of its strength in competing for activation. If the rule wins the competition, it must pay this bid to the rules sending the messages that satisfied its condition (its suppliers). It thus pays for the right to post its message. The rule will regain what it has paid only if there are other rules that in turn bid and pay for its message (its consumers). In effect, each rule is a middleman in a complex economy, and it will only increase its strength if it turns a profit."
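A stripped-down sketch of that credit assignment idea might look like the following (in Python; the rule names, numbers, and the single fixed chain are my own simplifications, and I ignore the competition for activation entirely): each rule pays a fraction of its strength to its supplier, and is only repaid if a consumer later pays for its message, with the external reward entering at the end of the chain.

```python
import random

# Hypothetical, stripped-down bucket-brigade sketch: a single fixed chain of
# rules fires every round; each rule pays a fraction of its strength (its "bid")
# to the rule that supplied its triggering message, and the environment pays an
# external reward to the last rule in the chain. Strength only flows backwards
# to earlier rules if later rules keep paying for their messages.

BID_FRACTION = 0.1

class Rule:
    def __init__(self, name, strength=10.0):
        self.name = name
        self.strength = strength

def run_chain(chain, reward):
    for supplier, consumer in zip(chain, chain[1:]):
        bid = BID_FRACTION * consumer.strength
        consumer.strength -= bid   # consumer pays for the right to post its message
        supplier.strength += bid   # supplier is paid by its consumer
    chain[-1].strength += reward   # the environment rewards the final action

random.seed(0)
rules = [Rule("sense"), Rule("plan"), Rule("act")]
for _ in range(50):
    run_chain(rules, reward=random.uniform(0.0, 2.0))

print({r.name: round(r.strength, 2) for r in rules})  # profitable rules end up stronger
```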
Much of this does not need translating at all: we see Menger's higher orders of capital goods, and the value of intermediate goods resting ultimately with the subjective value of consumers, who pass information up the chain of production. We see agents that learn from their experience. We see skin-in-the-game of staked capital in "bidding part of its strength", and we see uncertain gain or reward ultimately realised as profit or loss. But most importantly — most Hayekily — we see agents who have no such fiction as "perfect information", but who respond solely to prices in their immediate environment, and whose reactions affect prices that are passed to other environments. In _Complexity_, Waldrop quotes Holland's frustration with the neoclassical obsession with well-defined mathematical problems:
""Evolution doesn't care whether problems are well-defined or not." Adaptive agents are just responding to a reward, he pointed out. They don't have to make assumptions about where the reward is coming from. In fact, that was the whole point of his classifier systems. Algorithmically speaking, these systems were defined with all the rigor you could ask for. And yet they could operate in an environment that was not well defined at all. Since the classifier rules were only hypotheses about the world, not "facts", they could be mutually contradictory. Moreover, because the system was always testing those hypotheses to find out which ones were useful and led to rewards, it could continue to learn even in the face of crummy, incomplete information — and even while the environment was changing in unexpected ways.
"But its behaviour isn't optimal!" the economists complain, having convinced themselves that a rational agent is one who optimises his "utility function".
"Optimal relative to what?" Holland replied. Talk about your ill-defined criterion: in any real environment, the space of possibilities is so huge that there is no way an agent can find the optimum — or even recognise it. And that's before you take into account the fact that the environment might be changing in unforeseen ways."
Hayek gives us the intuition of prices conveying only what market participants deem to be the most important information and actually destroying the rest, and Holland shows how this can be represented with the formalism of complex systems. But note that the EMH forces us to imagine that the information is somehow in the market itself. It is honestly unclear to me whether the EMH even allows for honest or "rational" disagreement given it implies that the price is "correct", and all other trading is allegedly "noise". By my account (and Hayek's) people can clearly disagree. That's why they trade in the first place; they value the same thing differently. This is not at all mysterious if we realise that engaging with markets requires individuals to "project" the n-dimensions of their information, heuristics, judgments, and stakes onto the single dimension of price, and that markets do not project the aggregates; they aggregate the projections.
Markets Tend to Leverage Efficiency
So we know that entrepreneurial efforts will tend towards positive feedback loops if successful, which is a fancy way of saying, they will "grow". And we know that the diversity of compounding uncertainty in markets for securities linked to these efforts will likely generate substantial volatility. But can we say anything more? Can we expect anything more precise?
It turns out that we can, and here we finally get to Alex Adamou, Ole Peters, and the ergodicity economics research program. It's about time! The goal of the program is to trace the repercussions of a conceptual and algebraic error regarding the proper treatment of "time" in calculations of "expectation value" that pervaded mainstream economics over the course of the twentieth century. Interested readers are encouraged to visit the program's website, check out this recent primer in Nature Physics, or just follow Ole and Alex on Twitter, which is where most of the action seems to happen anyway!
First, a down-to-earth example. Imagine you want a pair of shoes. You can either go to the same shoe store every day for a month, or you can go to every shoe store in town all in one trip. If it turns out there is no difference between these approaches, this system is "ergodic". If, as seems more likely, there is a difference, the system is "non-ergodic".
Now with more technical detail, the _conceptual and algebraic error_ is as follows: imagine some variable that changes over time, subject to some well-defined randomness. Now imagine a system of many such variables, whose "value" is just the sum of all the values of the variables. Now imagine you want to find the "average" value of a variable in this system in some pure, undefined sense.
How do you make sense of an "average" of a system that will be different every time you run it? Well, you could fix the period of time the system runs for, and take the limit of where the individual variables get to as you run the whole system over and over and over, to infinity. Or, you could fix the number of systems (preferably at "one", for minimal confusion) and take the limit of where the individual variables get to as you run that single system further and further into the future, to infinity.
These are called, respectively, the "ensemble average" and the "time average", and are easily remembered as the average achieved by taking the relevant quantity — the number of systems, or time — to infinity. "Ensemble average" is commonly known as "the expectation", but Peters and Adamou resist this terminology because it has nothing whatsoever to do with the English word "expectation". You shouldn't necessarily _expect_ the expectation.
Now these values might be the same. This means you can measure one of these even if what you really want is the other. If so, your system is called "ergodic". The concept first developed within nineteenth century physics when Ludwig Boltzmann wanted to justify using ensemble averages to model macroscopic quantities such as pressure and temperature in fluids, which are strictly speaking better understood as time averages over bajillions of classically mechanical collisions. If any regular readers of mine exist, they will remember me going through much of this in Cargo Cult Math:
My point that time around was to go on to say that a great deal of financial modelling uses techniques — most notably expectation values — which would only be appropriate if the corresponding observables were ergodic. But they are not. Almost none of them are, to a degree that is both obvious and scary once you grasp it in its totality: clearly the numbers in finance are causally dependent on one another and take place in a world in which time has a direction.
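For anyone who would rather see the distinction than take it on trust, here is a minimal simulation of a standard ergodicity-economics toy (the specific numbers are mine): each round, wealth is multiplied by 1.5 or 0.6 on a coin flip, so the ensemble average grows while any single trajectory shrinks.

```python
import math
import random

# A standard ergodicity-economics toy (specific numbers mine): each round, wealth
# is multiplied by 1.5 or 0.6 with equal probability. The ensemble average grows
# roughly 5% per round, yet the time-average growth of any single trajectory is
# roughly -5% per round: the process is non-ergodic.

random.seed(42)
UP, DOWN = 1.5, 0.6

# Ensemble average: many parallel players over a short, fixed horizon.
players, rounds = 200_000, 10
mean_wealth = sum(
    math.prod(random.choice((UP, DOWN)) for _ in range(rounds))
    for _ in range(players)
) / players
print("ensemble growth per round:", round(mean_wealth ** (1 / rounds), 3))  # ~1.05

# Time average: one player over a long horizon, measured in log space to avoid underflow.
steps = 100_000
log_wealth = sum(math.log(random.choice((UP, DOWN))) for _ in range(steps))
print("time-average growth per round:", round(math.exp(log_wealth / steps), 3))  # ~0.95
```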
My point this time around is more cheerful. I want to direct the reader's attention to another of Peters and Adamou's papers on the topic: Leverage Efficiency (arXiv link here). This subsection is a whistle-stop tour of what that paper says. The usual disclaimer about not doing it justice absolutely applies. The reader is heartily encouraged to read the paper too.
Imagine a toy model of the price of a stock that obeys geometric Brownian motion, with constant drift and constant volatility, the noise coming from random draws from a normal distribution. It turns out that the growth rate of the ensemble average price — i.e. the price averaged over all possible parallel systems — is not the same as the time average growth rate of the price — i.e. the growth rate in a single system taken in the long time limit. Clearly what we care about is the time average, as we don't tend to hold stocks across multiple alternate universes, but rather across time in the actual universe. In particular, it turns out that the ensemble average growth rate is equal to the drift, while the time average growth rate is equal to the drift minus a correction term: the variance over 2.
This becomes very important when we introduce leverage via a riskless asset an investor can hold short. Let's call the model drift of the stock minus the stipulated drift of the non-volatile riskless asset "the excess growth rate". Then we can say that the ensemble average growth rate in situations with variable leverage is the growth rate of the riskless asset, plus the leverage multiplied by the excess growth rate. However, the time average has a linked correction, as above. As it is difficult at this point to continue the exposition in English, compare the formulae below:
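The formulae appear as images in the original; a reconstruction consistent with the description above (notation mine: $\mu_{\mathrm{riskless}}$ for the riskless growth rate, $\mu_{\mathrm{excess}}$ for the excess growth rate, $\sigma$ for the volatility, $l$ for the leverage) would be:

$$g_{\mathrm{ensemble}}(l) = \mu_{\mathrm{riskless}} + l\,\mu_{\mathrm{excess}}$$

$$g_{\mathrm{time}}(l) = \mu_{\mathrm{riskless}} + l\,\mu_{\mathrm{excess}} - \frac{l^{2}\sigma^{2}}{2}$$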
The relevance of the difference is that the latter formula is not monotonic in l. In other words, you don't increase your growth rate unboundedly by leveraging up more and more. This might seem intuitively obvious, and, in fact, the intuition likely strikes in exactly the right spot: in reality there is volatility. The more and more levered you are, the more susceptible you are to total wipeout for smaller and smaller swings. In fact, we can go further and observe that we can therefore maximise the growth rate as a function of leverage, implying an objectively optimal leverage for this toy stock:
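Again the original shows this as an image, but in the same reconstructed notation, setting the derivative of $g_{\mathrm{time}}$ with respect to $l$ to zero gives:

$$l_{\mathrm{opt}} = \frac{\mu_{\mathrm{excess}}}{\sigma^{2}}$$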
What might this optimal leverage be in practice? Well, Peters and Adamou propose the tantalising alternative to the EMH: the stochastic markets hypothesis. As opposed to the EMH's price efficiency, they propose leverage efficiency: it is impossible for a market participant without privileged information to beat the market by applying leverage. In other words, real markets self-organise such that the optimal leverage of 1 is an attractive point for their stochastic properties.
The paper continues in two directions: firstly, Peters and Adamou propose a theoretical argument for feedback mechanisms that ought to be triggered, over long enough periods, whenever the theoretical value of optimal leverage in fact deviates from 1, all of which ought to pull it back to 1. I will skip this as it is tangential to the point I am building towards, although it is obviously very interesting in its own right. Secondly, Peters and Adamou gather data from real markets to establish what the optimal leverage would, in fact, have been. I include some screenshots that strongly suggest this approach is quite fruitful:
This chart probably deserves some explanation, but is very satisfying once grasped: as opposed to just the S&P500, above, both the German equity market (DAX) and Bitcoin show pretty much identical behaviours to the S&P500, which Peters and Adamou term "satisfying leverage efficiency". That the Madoff curve is so different, and seems to have no clear maximum, indicates it is likely too good to be true. This is a nice result given that we know Madoff's returns to be fraudulent!
However, what I really want to get to in all of this is a specific interpretation of the SMH: that markets self-organise such that optimal leverage tends to 1 in the long run. If we assume that the excess return of the stock price is generated by real economic activity (ultimately, the consistency of the stock's return on equity) over a long enough run, this would seem to suggest that a certain amount of volatility is actually natural. Were a stock to consistently generate an excess return above that of the riskless asset, investors would lever up to purchase it. This mass act would (reflexively!) cause its volatility to shoot up as the price shoots up, and in the inevitable case of a margin call on these levered investors, volatility would increase further as the stock price comes back down.
This is a somewhat naïve explanation, but the gist is that a lack of volatility in the short run will tend to generate excess volatility in the medium run, such that a natural level is tended to in the long run. Or, markets are stochastically efficient. Readers familiar with the unsuspected role that "portfolio insurance" turned out to have played in 1987's Black Monday — the single biggest daily stock market drop in modern history, which seemed to follow no negative news whatsoever — will find all this eerily familiar. Taleb calls Black Monday a prototypical Black Swan that shaped his formative years as a Wall Street trader. Mandelbrot cites it as sure-fire evidence of power laws and wild randomness in financial markets. It ties together many themes of this essay because, evidently, the information was not in any of the prices. Not in the slightest. I leave it to the reader to mull over what all this implies if interventions in financial markets are targeted solely at reducing volatility as a worthwhile end in itself, with a rationale that makes no mention of growth or leverage. Once again, search for "Raghuram Rajan Jackson Hole" or read about the so-called Great Moderation if unsure where to start. Volatility signals stability, in financial markets and likely well beyond …
All this has a final interesting implication that I teased earlier: the resolution of the so-called "equity premium puzzle"; that, according to such-and-such behavioural models from the psychological literature, the excess return of equities "should be" much lower than it really is. Cue the behavioural economists' claims of irrational risk aversion, blah blah blah. Peters and Adamou provide an alternative with no reference to human behaviour at all. The difference between the growth rates of the risky (l=1) and riskless (l=0) assets is the excess return minus the volatility correction. If markets are attracted to the point at which optimal leverage equals 1, then it follows, by substituting the definition of the equity risk premium in terms of risky and riskless assets into the equation defining optimal leverage, that the equity premium ought to be attracted to the excess return over 2. Peters and Adamou delightfully write, "our analysis reveals this to be a very accurate prediction … we regard the consistency of the observed equity premium with the leverage efficiency hypothesis to be a resolution of the equity premium puzzle." QED.
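Spelling out the substitution in the reconstructed notation from above (my sketch, not a quotation of the paper): if $l_{\mathrm{opt}} = \mu_{\mathrm{excess}}/\sigma^{2} = 1$, then $\sigma^{2} = \mu_{\mathrm{excess}}$, so the gap between the time-average growth rates of the risky ($l=1$) and riskless ($l=0$) positions is

$$g_{\mathrm{time}}(1) - g_{\mathrm{time}}(0) = \mu_{\mathrm{excess}} - \frac{\sigma^{2}}{2} = \frac{\mu_{\mathrm{excess}}}{2},$$

i.e. the equity premium is attracted to half the excess return.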
I'll note before wrapping up this sub-section that any readers triggered by such terms as _geometric Brownian motion_ and _normal distributions_ needn't be. Peters and Adamou acknowledge that GBM is not realistically either necessary or sufficient as a mechanism for stock price movements. But their argument really only depends on the characteristics of an upward drift and random volatility, both of which _are_ reasonable to expect. They choose GBM because it is simple to handle, well understood, and prevalent in the literature they criticise, but they also write that,
"for any time-window that includes both positive and negative daily excess returns, regardless of their distribution, a well-defined optimal constant leverage exists in our computations …
Stability arguments, which do not depend on the specific distribution of returns and go beyond the model of geometric Brownian motion, led us to the quantitative prediction that on sufficiently long time scales real optimal leverage is attracted to 0 ≤ l_opt ≤ 1 (or, in the strong form of our hypothesis, to l_opt = 1)."
We knew from previous sections that volatility is likely. It will exist to some extent due to the teased-out implications of subjective values and omnipresent uncertainty. But now we know that it is _necessary_. It is not noise, irrationality, panic, etc., around a correct price. It is, at least in part, inevitable reflexive rebalancing of leverage around whatever the price happens to be.
Incompleteness
You can't write ten thousand words on mathematical formalisms outlining the limits of human knowledge without mentioning Gödel's Incompleteness Theorems.
Adaptation and Fractals
As mentioned in the introduction, of all the dissenting work on the EMH, I most recommend, by far, Andrew Lo's Adaptive Markets Hypothesis — the original paper and the follow-up book — and the various thoughts of Benoit Mandelbrot on fractals in financial markets — strewn across numerous academic papers, but lucidly conveyed in the popular book, _The (Mis)behaviour of Markets_. I assume familiarity with these works to avoid explaining everything from scratch, so if the reader is unfamiliar, I encourage jumping ahead to the next section.
My main critique of Lo is that he doesn't take uncertainty seriously enough. In covering the academic history surrounding the EMH, he only gives Simon a page or so, and Gigerenzer a paragraph. The key point of failure, in my view, is his treatment of the Ellsberg paradox. Or rather, the fact that he stops his rigorous discussion of uncertainty at this point.
The problem here is that the uncertainty in the Ellsberg paradox is confined to the odds, whereas we know from the previous discussion that the uncertainty in economics exists in the outcomes. This means that the odds aren't just uncertain, they are non-existent. By stopping here, Lo passes off the results of running the experiment that gives rise to the so-called paradox as simply indicating ambiguity aversion, which he presents as a kind of irrational bias — then a segue to behavioural economics. This prevents Lo from exploring the implications of Knightian uncertainty on entrepreneurship and competition, and ultimately gives him little ammunition to take on the EMH directly. In fact, he acknowledges that he never really does — he just proposes something he thinks is better.
That said, I agree that his model is better. Far better! Adaptation is a fascinating concept to employ here. As noted several times, it comes through very naturally in the complex systems approach. I won't comment on it too much as its roots in evolutionary biology are outside my academic pedigree. But the basic intuition of changing circumstances and responding agents I find rather compelling. As do, it would seem, several thinkers I have already cited. Consider this passage from Kirzner,
"it is necessary to introduce the insight that men learn from their experiences in the market. It is necessary to postulate that out of the mistakes which led market participants to choose less-than-optimal courses of action yesterday, there can be expected to develop systematic changes in expectations concerning ends and means that can generate corresponding alterations in plans."
Also, there is a tradition of referring to heuristics as ecologically rational, and the biological analogy is no coincidence. This passage from Waldrop's _Complexity_ on John Holland's conversion to complex systems thinking in his study of genetics is striking in the almost simple obviousness of the comparison drawn to economics (again, not at all a coincidence):
"it bothered Holland that [R.A.] Fisher kept talking about evolution achieving a stable equilibrium — that state in which a given species has attained its optimum size, its optimum sharpness of tooth, its optimum fitness to survive and reproduce. Fisher's argument was essentially the same one that economists use to define economic equilibrium: once a species' fitness is at a maximum, he said, any mutation will lower the fitness … but that did not sound like evolution to me.
… to Holland, evolution and learning seemed much more like — well, a game. In both cases, an agent is trying to win enough of what it needs to keep going. In evolution that payoff is literally survival, and a chance for the agent to pass its genes on to the next generation. In learning, the payoff is a reward of some kind, such as food, a pleasant sensation, or emotional fulfilment. But either way, the payoff (or lack of it) gives agents the feedback they need to improve their performance: if they're going to be "adaptive" at all, they somehow have to keep the strategies that pay off well, and let the others die out."
One thing I especially like about Lo's approach is his idea of "evolution at the speed of thought", which is often as much rhetorical as anything else. I think this provides a useful conceptual tool to deal with what I deemed to be the only consistent deficiency in the material I covered on complex systems: Arthur, Holland, et al., seem to me so focused on the comparison to biological evolution, and on shifting the comparative conceptual framework from physics to biology as a whole, that they forget the role of purposeful human beings in all of this. Economic "mutation" is not random; it is creative, intuitive, judgmental. It happens at the speed of thought because humans think on purpose. They do not cycle through the space of every thought that can possibly be had until they hit on one that happens to be a business plan.
To put this in a wider context and loop back to previously cited thinkers, I think Arthur is best read alongside Kirzner, and indeed Kirzner is best read alongside Arthur. Particularly in The Nature of Technology, which is otherwise an excellent book, Arthur perfectly grasps _how_ change happens, but not _why_. In Competition and Entrepreneurship, Kirzner perfectly grasps _why_ change happens, but not _how_. Both the why and the how rely, in part, on understanding economic evolution as an essentially human phenomenon, because genes mutate, but humans think.
Mandelbrot's ideas on fractals in finance are iconoclastic, to say the least. Unlike with Lo, I see nothing to disagree with, and much that probably went over my head. But given that Mandelbrot sets himself the task of demolishing the EMH out of left field, and seemingly succeeds, it is definitely worth grappling with.
The mildly boring part of _The (Mis)behaviour of Markets_ is Mandelbrot showing that, empirically, financial data does not seem to fit the Brownian motion of the random walk hypothesis, and hence the EMH. The really juicy part is his explanation of _why_. To avoid getting into any really tricky mathematics and essentially rewriting his book, I will summarise his argument, again not doing it justice, as: _this isn't random enough_. More suggestively, it is too predictably random.
Mandelbrot thinks that prices in financial markets are, up to a certain granularity, fractals. If true, this has many fascinating implications, but the most relevant here is that the self-similarity this implies means that any randomness in their fluctuation must be irregular. It should not be possible to ascertain any regularity just by changing the timespan because they look the same on _every_ timespan (look! there's "time" again!). The randomness must itself be pretty random. And that randomness must be random, and so on. There are no genuinely normal distributions in finance, Mandelbrot believes, but rather they all tend towards Cauchy. We could be less hand-wavy about all this and point out instead that while a statistical test on some financial data might suggest the tail of a lognormal distribution, we are really looking at a power law. If parameterized to induce fat enough tails, such a distribution may not have a variance, and if fat enough, not even a mean. (And no, it doesn't have an "infinite variance" or "infinite mean", because that is meaningless, but nice try). As Taleb and Mandelbrot both wrote in Fortune,
"In bell-curve finance, the chance of big drops is vanishingly small and is thus ignored. The 1987 stock market crash was, according to such models, something that could happen only once in several billion billion years."
Black swans, amiright?
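As a toy illustration of what "no mean" looks like in practice (my example, not Mandelbrot's), compare the running sample mean of normal draws with that of Cauchy draws: the former settles down, the latter keeps lurching however much data you collect.

```python
import math
import random

# A toy illustration: the running sample mean of Cauchy draws never settles down,
# because the distribution has no mean; the running sample mean of normal draws
# converges quickly. Numbers and checkpoints are arbitrary.

random.seed(7)

def running_means(draw, checkpoints=(1_000, 10_000, 100_000, 1_000_000)):
    total, results = 0.0, {}
    for i in range(1, max(checkpoints) + 1):
        total += draw()
        if i in checkpoints:
            results[i] = round(total / i, 3)
    return results

def standard_cauchy():
    # inverse-CDF sampling of a standard Cauchy distribution
    return math.tan(math.pi * (random.random() - 0.5))

print("normal:", running_means(lambda: random.gauss(0.0, 1.0)))  # hovers near 0, tightens
print("cauchy:", running_means(standard_cauchy))                 # keeps lurching around
```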
What does this have to do with "complex" markets? Mandelbrot doesn't explore this idea, and I may be going out on a limb here, but I think this is almost exactly what you would expect if you thought markets were maximally uncertain, so to speak. If risk were predictable, then it could be hedged against. If it were unpredictable in and of itself, but were distributed predictably, then _that_ could be hedged against. And so on and so forth. This all lends itself to a hand-wavy inductive proof by contradiction. We know that nothing can be perfectly hedged because it derives from uncertainty, and uncertainty on uncertainty, and uncertainty on _that_ uncertainty, and so on. Financial markets can shift uncertainty around, and selectively parcel it into more and less risky instruments, but uncertainty itself cannot be removed.
Bitcoin
Oh goodness, I guess I have to say something about Bitcoin now, lest I be accused of rickrolling an angry twitter mob into a sermon on armchair economics. Is the halving priced in?
I have no idea. Which is sort of the moral of all of this. You can't predict the uncertain future, but you can bet on it. I'm not sure how you would bet on this exact hypothesis: perhaps a combination of options that pay off if and only if the price goes up (or doesn't go up) by whatever the stock-to-flow model predicts, within some bounds, when it predicts it, within some bounds? Obviously, you could just be long Bitcoin, but then you aren't isolating the essence of this claim, and you can benefit for all sorts of other reasons. If you do either, you'll move the price towards the outcome you are hoping for. But only by having put skin in the game. Also, you could believe something very specific about the halving, but have no way of testing it as you don't believe in stock-to-flow models, or any other valuation model, for that matter. That's the essence of my noncommittal answer above: I shouldn't tell you what I think, I should show you my portfolio, right? Well, I have no "halving bet" in my portfolio, so I guess I think nothing. Which is what I said :)
Even so, we can still make a few interesting observations that draw on the above discussion. Clearly, the question relies on reflexivity, which is interesting in and of itself. It's only derivatively a question about the fundamentals, and more a question of the extent to which the market is a well-oiled beauty contest. I don't think I know enough about the actual workings of the Bitcoin market to comment on this. It strikes me that, relative to global equities markets, at least, the heuristics that market participants in Bitcoin are using vary wildly from one another. If it makes any sense to say so, they likely have pretty dramatic variance. At the same time, the market itself is probably highly illiquid, relative to what we might be used to. This might suggest the halving isn't priced in, in the sense that the change we know is coming in the marginal bids and asks at which the market clears dwarfs the capital that is already deployed, including towards solely this essentially reflexive bet. But then again, maybe it doesn't.
Honestly, I just don't know. And if somebody does claim to know, tell them to show you their portfolio.
Conclusion
Value is subjective, which means uncertainty governs all economic phenomena. This creates a complexity that resists equilibria and is constantly changing besides. Within such a system, prices convey the minimal possible information necessary for economic agents to purposefully react. They do so with judgment and heuristics, not "perfect information", which is nonsensical, as are "perfect competition" and "rational expectations". Prices may pass statistical tests for randomness, but they are not themselves random (although it is plausible that their randomness is random, and that randomness is random, and so on) but rather are unpredictable on the basis of market data alone. They are, however, predictable to the extent that the predictor accurately assesses the future subjective valuations both of economic agents and fellow market participants, and backs up this prediction with staked capital. This act of staking changes the uncertainties at play, rendering any attempt at genuinely scientific analysis futile. You can beat the market, it's just really hard, and it depends on understanding _people_, not data. And it's meaningless if you do it in theory but not in practice.
Markets have many characteristics. I suggest they are subjective, uncertain, complex, stochastic, adaptive, fractal, reflexive … — really any clever sounding adjective you like — just not efficient.
Thanks to Nic Carter, Robert Natzler, Alex Adamou, and Sacha Meyers, for edits and contributions.
follow me on Twitter @allenf32 and feel free to contribute to the Allen ideas fund: bc1q8utvneuvn3hf2lm5nvt3dreqenad6hh5sda2sa