Faculty News
Famous economic oracles rarely pass the accuracy test
By Gary Connolly
The Sunday Times
© 2012 Times Newspapers Ltd. All rights reserved
Our fascination with those who claim to foresee the future has endured through centuries. Be it sport, weather, politics, or the future of the euro, people want to know what will happen before it does.
According to Daniel Kahneman, a Nobel laureate in economics, this need stems from a desire to feel we are in control.
So it's a little uncomfortable to think that those making predictions are no better, and possibly worse, than the rest of us. Surely the experts have some ability to forecast? The evidence suggests otherwise. A paper produced at New York University's Stern School of Business tracked The Wall Street Journal's survey of economic forecasts to test their accuracy when large sums were at stake.
Christina Fang and Jerker Denrell, who conducted the study, took predictions from July 2002 to July 2005 and calculated which economists had the best record of calling "extreme" outcomes, defined as either 20% higher or lower than the average prediction.
The findings were striking: economists who had a better record at calling extreme events had a worse record in general. Denrell and Fang found that the "analyst with the largest number as well as the highest proportion of accurate and extreme forecasts had, by far, the worst forecasting record".
So rather than indicating good judgment, accurately forecasting a rare event may indicate the opposite.
Philip Tetlock, professor of psychology at the University of Pennsylvania, discovered similar findings in his 15-year study, detailed in his book Expert Political Judgment.
He scored the accuracy of 28,000 forecasts from 280 experts in politics and economics from 1988 to 2003, and found that the better known and more frequently quoted a forecaster is, the less reliable their guesses about the future are likely to be.
The accuracy of an expert's predictions actually has an inverse relationship to their self-confidence, renown, and, beyond a certain point, depth of knowledge.
Ironically, the more skilful forecasters are those who rely less on intuition and more on statistical evidence. They see explanation and prediction not as deductive exercises but as attempts to stitch together ideas from diverse sources.
And critically, they are diffident about their prowess, making them less likely to offer firm predictions. That is largely why we rarely hear them on radio or television.
One of the main reasons forecasters have difficulties is that predictions depend on variables such as geopolitics, regulations, nature and changing relationships that affect asset prices. In short, luck plays a part. A big part.
Throughout history, great and terrible events have often hinged on chance. Consider how different the 20th century would have been without Hitler, Stalin or Mao. Kahneman notes there was a 50-50 chance that Hitler could have been born female, so by extension a one-in-eight (half of a half of a half) probability that all three despots could have been born female.
These parallel universes offer some insight into how differently things might have turned out but for the role of chance. It plays a greater role than we give it credit for.
The extent of these studies' implications is not well understood. Consider the credit crunch. Economists were widely criticised for their inability to predict the disaster. Yet one had a "good" crisis: Nouriel Roubini.
It would be a great disservice to dismiss his predictions as mere serendipity. But, according to Denrell and Fang, we inevitably end up giving "'forecasters of the year' awards to a procession of cranks" who make "a series of extreme predictions relying on intuition".
To truly claim that successful prognosticators have an ability to foresee the future, we need to take all their incorrect predictions into account. However, we rarely hold forecasters to account. We tend to go by a recent forecast or new information. And financial forecasters who are not held accountable for incorrect predictions get more renown by making extreme forecasts. Wild predictions pay because the downside of being wrong is limited, but the upside is potential lifelong fame for being right.
The question is not why so many financial and economic experts predict so poorly; it is why we seek these people out and reward them for what is, more often than not, serendipity. The world is too complex for financial experts to predict accurately with any consistency.
Gary Connolly is chairman of the Value Investment Institute, a think tank set up by investment professionals practising a value style. This is an abridged version of an article on its website, valueinstitute.org. Gary can be contacted at gary@icubed.ie