Einstein, CERN and the Limitations of Models

In 1905, a twenty-six-year-old patent clerk named Albert Einstein upended more than two centuries of conventional thought when his theory of relativity displaced Newtonian physics. Newton’s model was useful and an enormous achievement, but it was imperfect at explaining our world, especially for objects or particles moving very fast. Einstein’s theories have now underpinned a century of thought in physics, yet Einstein acknowledged that his theory could be wrong. He even hypothesized a way to disprove it when he said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” He meant that one needed simply to find something that travels faster than the speed of light. Recently, physicists at CERN on the Swiss/French border have experimented with sending neutrinos roughly 450 miles through the earth to detectors at Gran Sasso, Italy. The trouble is that the particles showed up on average about 60 nanoseconds earlier than anticipated, which implies that they exceeded the speed of light. It may well be that some error in experimental construction or measurement occurred and the Special Theory of Relativity still holds, but the episode reminds us to be cautious about relying uncritically on the models we have developed to explain our world.
This recent episode recalls George Box’s maxim that “all models are wrong, but some are useful.” When we try to predict the future of a business, we must allow for the fact that things can happen that we cannot even conceive of. We believe past manias, bubbles, panics, and depressions clearly indicate that the “unexpected” happens far more often than Wall Street acknowledges. As increased computing power has lowered the cost of complicated computations, risk management departments have proliferated at Wall Street banks and hedge funds. These risk managers rely on complex statistical tools that attempt to understand and manage their firms’ exposure to the millions of trades making up their balance sheets. At the core of many of these tools, such as Value at Risk (VaR) [1], is the assumption that security prices trade in a normal distribution (“bell curve”). Just because assumptions of normality work well in the natural sciences and other disciplines where statistics are used regularly does not mean a normal distribution fits financial market data. While we do believe that security prices cluster around intrinsic values, we actually have little certainty about the frequency of outlying events. We propose that the statistical tools used on Wall Street would have a better chance of explaining “outliers” if they were instead based on Student’s t-distribution, which has fatter tails than the normal distribution and can be shown to be robust against departures from normality [2].
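To make the difference concrete, here is a minimal sketch, not drawn from the letter, comparing how the two distributions treat extreme daily moves. The daily volatility, degrees of freedom, and portfolio size are illustrative assumptions chosen purely to show how much more weight a Student’s t-distribution puts on tail events than a normal distribution does.

```python
# Illustrative sketch (not the Fund's risk model): how much more probability a
# Student's t-distribution assigns to extreme daily moves than a normal
# distribution, and what that implies for a parametric VaR estimate.
# The daily volatility (1.5%), degrees of freedom (4), and portfolio size are
# assumed values chosen purely for illustration.
from scipy import stats

daily_vol = 0.015         # assumed daily return volatility (1.5%)
dof = 4                   # assumed degrees of freedom for the t-distribution
portfolio = 100_000_000   # assumed portfolio value in dollars

# Probability of a daily move beyond four standard deviations under each model.
p_normal = stats.norm.sf(4.0)        # upper-tail probability, standard normal
p_student = stats.t.sf(4.0, df=dof)  # upper-tail probability, Student's t

print(f"P(move > 4 sigma), normal:    {p_normal:.2e}"
      f"  (~1 in {1 / p_normal:,.0f} trading days)")
print(f"P(move > 4 sigma), Student t: {p_student:.2e}"
      f"  (~1 in {1 / p_student:,.0f} trading days)")

# Parametric one-day VaR (the loss not expected to be exceeded at a given
# confidence level). The t quantile is rescaled to unit variance so both
# models assume the same daily volatility; only the tail shape differs.
for conf in (0.99, 0.999):
    z_n = stats.norm.ppf(conf)
    z_t = stats.t.ppf(conf, df=dof) * ((dof - 2) / dof) ** 0.5
    print(f"{conf:.1%} one-day VaR  normal: ${portfolio * daily_vol * z_n:,.0f}"
          f"   Student t: ${portfolio * daily_vol * z_t:,.0f}")
```

Under these assumed figures, the normal model treats a four-sigma day as a once-in-a-career rarity, while the fat-tailed t-distribution expects one every few months and produces a noticeably larger VaR at high confidence levels, which accords with our point that the “unexpected” happens more often than the bell curve allows.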

In other words, tools built on the t-distribution have a better chance of protecting investors when assumptions about the way things will behave turn out to be wrong. If decision makers on Wall Street applied Student’s t-distribution and acknowledged the limits of their ability to predict unexpected events, the impact on their approach would be profound: they would significantly reduce leverage and risk-taking, and bankruptcies would become less likely. Of course, because adopting this framework would also substantially decrease expected short-run profits, we do not expect it to happen.
This same phenomenon of underestimating the frequency of outcomes that conventional wisdom considers outliers has meaningful applications for operating companies as well. For example, standard MBA theory holds that businesses should manage their working capital as tightly as possible through initiatives like “just in time” inventory management. While these programs have merit, short-term inventory savings must be weighed against the opportunity costs of being unprepared for rare events, such as this year’s earthquake off the Pacific coast of Tohoku. While we will never know the total sales the Japanese carmakers lost to supply-chain interruptions, it would be interesting to know whether the money they saved on inventory over the past fifteen or so years has more than offset the profits forgone from their current lost sales. Managers should recognize that squeezing suppliers, employees and customers carries an opportunity cost that must be judged over a long time horizon rather than a short one. In fact, they should structure their balance sheets and the amount of cash on hand in anticipation of adverse events so that their companies are prepared to profit when opportunities present themselves. The Mexican Coca-Cola bottler we own did just this in the 2008 financial crisis, putting reserve capital to work while weaker competitors were suffering. Its gains in market share over the last two years demonstrate the wisdom of allowing for extreme events, understanding the impact of opportunity costs, and having a long time horizon.
Ultimately, our models are necessarily inadequate descriptions of reality, so we spend time thinking through their limitations to avoid mistakes. We continuously contemplate and discuss how the individual businesses that constitute the Fund’s portfolio are positioned for the unexpected. Following Einstein’s example, we hypothesize what would alter our expectations for a business, and we recognize that our valuations are theories, not certainties. We try to purchase businesses whose managers’ understanding of the world leads them to take a long-term perspective and structure a conservative balance sheet. On top of that, we purchase a stake in a business only when the market offers a margin of safety to allow for mistakes we may make or events we cannot foresee. We believe this strategy helps reduce the risk of permanent capital loss.
[1] Value at Risk (VaR) is a technique that estimates the size of portfolio losses not expected to be exceeded at a given confidence level, based on statistical analysis of historical price trends and volatilities.
[2] King, M.L. “Robust Tests for Spherical Symmetry and Their Application to Least Squares Regression.” The Annals of Statistics 8, no. 6 (1980): 1265–1271.