The Myth and Reality of Predictions and Forecasting

If we are truly able to predict, how come we repeatedly fail?

Why do Wall Street ‘experts’ repeatedly miss market crashes? In his MarketWatch article (August 24, 2015), Brett Arends noted that Wall Street experts failed to predict the housing bust, that a majority of economists polled in early 2008 failed to predict the biggest recession in 70 years, that all the experts at the International Monetary Fund failed to predict the financial crisis, and that since 2011 most Wall Street experts have missed the crashes in emerging-market stocks and commodities.

Most likely you are not a statistician and have little interest in formulas, but understanding their conceptual meaning is critical to the way you engage when you are presented with predictions, forecasts, and projections. In statistics there is a concept called the margin of error. It tells us something about confidence levels and the degree of uncertainty one is accounting for when forecasting.
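As a minimal sketch of the idea (the sample size, standard deviation, confidence level, and dollar figures below are invented for illustration), the margin of error around a simple point estimate can be computed as z × σ / √n: it shrinks with more data and grows with more variability.

```python
import math

def margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample mean at roughly a 95% confidence
    level (z = 1.96). More observations or less spread -> tighter margin."""
    return z * std_dev / math.sqrt(n)

# Hypothetical forecast: average monthly revenue estimated from 50 months
# of data with a standard deviation of $12,000 around a $100,000 mean.
moe = margin_of_error(std_dev=12_000, n=50)
print(f"Forecast: $100,000 +/- ${moe:,.0f}")   # roughly +/- $3,326

# Cut the sample to 10 months and the uncertainty balloons.
print(f"With n=10: +/- ${margin_of_error(12_000, 10):,.0f}")  # roughly +/- $7,438
```

The point is not the formula itself but the habit: any single number you are shown sits inside a band whose width depends on assumptions someone chose.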

The reality is that every single time you see a forecast of some sort, someone built a model that includes mathematical assumptions and an estimated error. At the end of the day these are mostly determined by human judgment. Some judgments turn out to be more accurate than others, but apparently a lot of experts are really lousy at it.

The concept of the assumed margin of error and its implications has been discussed most eloquently by Nassim Nicholas Taleb in his book The Black Swan: The Impact of the Highly Improbable. In his chapter “The Scandal of Prediction” he argues that forecasting without incorporating an error rate uncovers three fallacies, all arising from the same misconception about the nature of uncertainty. He details them as follows:

Fallacy one: variability matters. The first error lies in taking a projection too seriously without heeding its accuracy. Yet, for planning purposes, the accuracy of your forecast matters far more than the forecast itself. Therefore, the policies we base decisions on should depend far more on the range of possible outcomes than on the expected final number. He shares the dire consequences of financial and government institutions projecting cash flows without wrapping them in even the thinnest layer of uncertainty.
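A toy Monte Carlo sketch of that point (all of the figures, distributions, and project labels below are assumptions made up for illustration): two projects with the same expected cash flow look identical as point forecasts, yet the range of simulated outcomes tells a very different story.

```python
import random

random.seed(42)

def simulate_cash_flows(mean: float, spread: float, runs: int = 10_000) -> list[float]:
    """Draw possible outcomes around a point forecast, assuming a normal
    distribution with standard deviation `spread`."""
    return [random.gauss(mean, spread) for _ in range(runs)]

# Two hypothetical projects, both "expected" to return $1,000,000.
stable   = simulate_cash_flows(mean=1_000_000, spread=50_000)
volatile = simulate_cash_flows(mean=1_000_000, spread=600_000)

prob_loss = sum(x < 0 for x in volatile) / len(volatile)
print(f"Worst simulated outcome (stable):   ${min(stable):,.0f}")
print(f"Worst simulated outcome (volatile): ${min(volatile):,.0f}")
print(f"Chance the volatile project loses money: {prob_loss:.0%}")  # ~5%
```

Both projects report the same expected final number; only the simulated range reveals that one of them can plausibly destroy value.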

Fallacy two: failing to take into account forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between the near and far futures. Historically, forecasting errors have been enormous, and there is no reason to believe that we are suddenly in a more privileged position to see into the future than our predecessors. In their early days, very few would have predicted the eventual dominance of Facebook, Apple, or Alibaba in their respective markets.
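A back-of-the-envelope sketch of how this degradation can play out (the 10%-per-year error rate is an arbitrary assumption, not a measured figure): even a modest annual error, compounded, swamps the forecast a decade out.

```python
# Hypothetical: a forecast whose plausible error is +/-10% per year, compounding.
annual_error = 0.10
for years in (1, 3, 5, 10):
    band = (1 + annual_error) ** years - 1
    print(f"{years:>2} years out: plausible range roughly +/- {band:.0%}")
# Prints roughly +/- 10%, 33%, 61%, 159%
```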

Fallacy three: misunderstanding the random character of the variables being forecasted. Owing to the Black Swan, these variables can accommodate far more optimistic or far more pessimistic scenarios than currently expected. While there are numerous examples in the tech world, Fab stands out as the quintessential example of a bad prediction: a company that was at one point valued at ~$1bn was acquired in a fire sale for about $20 million.

What does that mean for you? Whether you are a venture capitalist, a corporation, or a non-profit, the next time you are presented with forecasts, projections, and predictions, make sure you put a significant amount of thought into the underlying assumptions and margin of error.