C&B Notes

The Limitations of Expert Judgment: A Political Case Study

We have explored this topic before via a Malcolm Gladwell interview, but this article by Nate Silver over at FiveThirtyEight revisits it in an interesting way. Using the example of Herman Cain’s recent rise in Republican primary polls, Silver lays out “a critique of expert judgment” that he summarizes as follows (emphasis is his):

“Experts have a poor understanding of uncertainty. Usually, this manifests itself in the form of overconfidence: experts underestimate the likelihood that their predictions might be wrong.

Examples of this can be found in numerous fields. Economics is an obvious one.  In November 2007 — just a month before the economy officially went into recession — economists polled in the Survey of Professional Forecasters thought there was only about a 1 in 500 chance that economic growth would decline by 2 percent or more in 2008. (In fact, it declined by 3.3 percent)…”

In a recent letter to investors, we discussed Wall Street’s tendency to underestimate “unexpected” black-swan-type events.  The list of case studies demonstrating the danger of this serial underestimation is long (e.g., Long-Term Capital Management, Lehman Brothers), yet risk management departments at major banks and hedge funds continue to use models that do not adjust for this reality.  Specifically, these models do a terrible job of accounting for the frequency of outlying events, while “fatter” tail models are much better equipped to do so.  Silver points out that this phenomenon of significantly underestimating unlikely events is just as pervasive in politics:

“…political forecasts may be especially vulnerable to this.  A long-term study of expert political forecasts by Philip E. Tetlock, a professor of psychology at the University of Pennsylvania, found that events that experts deemed to be absolutely impossible in fact occurred with some frequency.  In some fields, these zero-percent-likelihood events came into being as much as 20 or 30 percent of the time…

These are not semantic distinctions or errors around the margin — saying something has a 2 percent chance of occurring when really there is a 3 percent chance, or a 0.01 percent chance when really there is a 0.02 percent chance. Expert estimates of probability are often off by factors of hundreds or thousands.”
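The scale of these misestimates is easy to reproduce with a toy calculation.  As a minimal sketch (our illustration, not Silver’s analysis), the snippet below compares the probability that a thin-tailed standard normal model versus a fat-tailed Student’s t model with 3 degrees of freedom assigns to an observation at least 3 units below the mean; the t(3) CDF has a simple closed form, so only the Python standard library is needed.

```python
import math
from statistics import NormalDist

def t3_cdf(x: float) -> float:
    """CDF of a Student's t distribution with 3 degrees of freedom
    (closed form, so no third-party dependency is needed)."""
    return 0.5 + (1.0 / math.pi) * (
        math.sqrt(3) * x / (x * x + 3) + math.atan(x / math.sqrt(3))
    )

cutoff = -3.0  # a "3-sigma"-style event under the normal model

p_normal = NormalDist().cdf(cutoff)  # thin-tailed estimate
p_fat = t3_cdf(cutoff)               # fat-tailed estimate

# Note: t(3) has variance 3, so this compares tail mass at the same raw
# cutoff rather than variance-matched distributions -- it illustrates
# tail shape, not a calibrated risk model.
print(f"P(X <= -3), normal model: {p_normal:.5f}")  # ~0.00135
print(f"P(X <= -3), Student t(3): {p_fat:.5f}")     # ~0.02884
print(f"fat-tail / normal ratio:  {p_fat / p_normal:.0f}x")
```

The same cutoff that the normal model treats as a roughly 1-in-740 event carries about twenty times more probability under the fat-tailed model — a gap of the same character, if not the same size, as the forecasters’ 1-in-500 estimate above.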

Later in the article, Silver expounds on another topic that we find interesting when it comes to experts and their predictions.  Specifically, he discusses experts’ tendency to fit evidence and data to their predictions, rather than allowing an objective look at the evidence to appropriately refine — or even overhaul — those predictions (our note on Bayes’ Theorem also addresses this idea from a complementary angle):

“There have been only about 15 competitive nomination contests since we began picking presidents this way in 1972.  Some of them — like the nominations of George McGovern in 1972 and Jimmy Carter in 1976 — are dismissed by experts if their outcomes did not happen to agree with their paradigm of how presidents are chosen. (Another fundamental error: when you have such little data, you should almost never throw any of it out, and you should be especially wary of doing so when it happens to contradict your hypothesis.)  One or two past precedents are mistaken for iron laws…

In short, while I think the conventional wisdom is probably right about Mr. Cain, it is irresponsible not to account for the distinct and practical possibility (not the mere one-in-a-thousand or one-in-a-million chance) that it might be wrong. The data we have on presidential primaries is not very rich, but there is abundant evidence from other fields on the limitations of expert judgment.”
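The Bayesian discipline referenced above is mechanical: rather than discarding inconvenient data points, an observer lets each one shift the odds by its likelihood ratio.  As a minimal sketch with hypothetical numbers (none drawn from Silver’s article or from real polling), the snippet below updates a small prior on an unconventional candidate winning after one piece of favorable evidence.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical numbers for illustration only: a 5% prior that an
# unconventional candidate wins the nomination, then a sustained polling
# lead judged 4x more likely if the candidate is a genuine contender.
prior = 0.05
posterior = bayes_update(prior, likelihood_ratio=4.0)
print(f"posterior: {posterior:.3f}")  # ~0.174 -- the evidence moves us well off zero
```

The point of the exercise is the one Silver makes: even a modest prior, honestly updated, never collapses to the “absolutely impossible” zero that Tetlock’s experts so often assigned.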

>> Click here for the full posting on FiveThirtyEight