Remember when Bill Kristol said the Iraq War would last two months? It (officially) lasted nine years and cost Americans over $2 trillion—and we’re still there.
How about when Kristol boldly stated that Barack Obama wouldn’t win a single primary against Hillary Clinton? Or when he claimed Iraq was “not in a civil war”? What about when he predicted that 1993 would be the “high-water mark” for the gay-rights movement?
For a man who makes his living prognosticating, he’s shockingly bad—much worse than a coin-flip. In fact, if you bet against Bill Kristol every time he predicted something, not only would you be rich, but there’s a good chance you’d be offered a syndicated column and a nightly timeslot on Fox—step aside Tucker!
But as bad as he is, Kristol’s in good (bad) company. In fact, failure is par-for-the-course when it comes to political prediction. Why?
how to fail at everything
Philip Tetlock, a psychologist at the University of Pennsylvania and co-founder of the Good Judgment Project, has dedicated his career to understanding why “expert” predictions are so often wrong. He published the fruit of his labors in his book Expert Political Judgment (2005), which sums up over twenty years of research.
Over two decades, Tetlock interviewed 284 “experts,” people who made their living “commenting or offering advice on political or economic trends,” and asked them to assess the probabilities that specific events would (or would not) happen in the near future. The questions were tailored to each expert’s area of specialization.
For example, Tetlock asked experts in Russian politics questions like whether or not Mikhail Gorbachev would be ousted in a coup. Meanwhile, he asked experts in finance questions like which country would be the next emerging market. In every case, he also asked the experts to rate the probabilities of three possible outcomes: would the status quo persist, or would there be more or less of X?
In total, Tetlock gathered and analyzed over 80,000 predictions from people like Bill Kristol. The results were humbling.
Tetlock’s experts did significantly worse than chance—they would have been far more successful had they simply assigned equal probabilities to all three outcomes. Now to some degree, this can be explained by the fact that many political pundits may benefit from making bombastic claims. Hubris attracts eyes, and as they say: there’s no such thing as bad publicity. But even so, there’s more to the story.
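That “equal probabilities” baseline can be made concrete with a Brier score, the standard way of grading probabilistic forecasts (lower is better). The sketch below uses invented numbers, not Tetlock’s actual data, but it shows why the uniform ⅓-⅓-⅓ guess is a hard floor to fall below—and how badly an overconfident forecast is punished when it misses:

```python
# Illustrative sketch (hypothetical data, not Tetlock's): comparing an
# expert's probabilistic forecast against the uniform 1/3-1/3-1/3
# baseline using the multiclass Brier score (lower is better).

def brier_score(forecast, outcome):
    """Sum of squared errors between the forecast probabilities and
    the one-hot vector for the outcome that actually occurred."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(forecast))

# Three possible outcomes per question: less of X, status quo, more of X.
uniform = [1/3, 1/3, 1/3]

# A hypothetical overconfident "hedgehog" forecast, scored against an
# actual outcome of "status quo" (index 1).
hedgehog = [0.90, 0.05, 0.05]

print(brier_score(uniform, 1))   # ~0.667, the same no matter what happens
print(brier_score(hedgehog, 1))  # 1.715, heavily punished for being wrong
```

The uniform guess always scores about 0.667; a forecaster who consistently scores worse than that is, in this precise sense, doing worse than chance.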
Researchers from Duke University mirrored Tetlock’s findings among a much more sober sample. Every year Duke collects survey data from the chief financial officers of America’s largest corporations. Among other things, they ask the CFOs to estimate the Standard & Poor’s 500 index’s returns over the next year.
The researchers looked at 11,600 forecasts, and found that the overall correlation between the CFOs’ estimates and the market’s actual performance was slightly less than zero. That is, the CFOs also did worse than chance. This is troubling, because unlike political pundits, CFOs often have a great deal of skin in the game—their careers, yearly bonuses, and sometimes even their corporation’s future are on the line. And yet, their performance is bad.
And to make matters worse, the CFOs were grossly overconfident. Specifically, the market’s actual performance fell outside their confidence intervals more than three times as often as it should have, had they been properly calibrated.
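Calibration here has a precise meaning: a well-calibrated 80% confidence interval should miss the realized value about 20% of the time. The sketch below checks a forecaster’s miss rate against that target—the interval and return figures are invented for illustration, not the Duke data:

```python
# Hypothetical sketch of an interval-calibration check. For 80%
# confidence intervals, a well-calibrated forecaster should be
# surprised (value outside the interval) about 20% of the time.
# All numbers below are invented for illustration.

def miss_rate(intervals, actuals):
    """Fraction of realized values falling outside the forecaster's
    stated (low, high) intervals."""
    misses = sum(1 for (lo, hi), x in zip(intervals, actuals)
                 if not (lo <= x <= hi))
    return misses / len(actuals)

# Each tuple is an 80% interval for next year's market return (%).
intervals = [(2, 8), (0, 6), (4, 10), (-1, 5), (3, 9)]
actuals   = [12.0, 3.0, -5.0, 2.0, 15.0]  # realized returns (%)

rate = miss_rate(intervals, actuals)
print(rate)                    # 0.6: surprised 60% of the time
print(round(rate / 0.20, 2))   # 3.0: triple the calibrated miss rate
```

A miss rate of 60% against a 20% target is the shape of the Duke finding: the CFOs’ intervals were far too narrow for how little they actually knew.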
To sum up: “expert” predictions are literally worthless; and Bill Kristol’s blistering incompetence is matched only by everyone else’s.
fantastic Mr. Fox
Thankfully, it’s not all doom-and-gloom. A more nuanced understanding of Tetlock’s data reveals that there are (broadly speaking) two types of experts. Tetlock calls them hedgehogs and foxes.
Hedgehogs are poor predictors—they’re the experts who perform worse than chance. Most experts fall into this category. Foxes, on the other hand, are somewhat competent: they perform slightly better than chance. For Tetlock, the million-dollar question is this: what makes some experts hedgehogs and others foxes?
Most hedgehogs suffer from a psychological bias called theory-induced blindness. The Nobel Prize-winning behavioral psychologist Daniel Kahneman describes the bias in his book Thinking, Fast and Slow as such: “once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws.” Basically, hedgehogs interpret reality in light of their theory, and not vice versa—data that do not conform to their theory are either rationalized or ignored.
This is not to say that hedgehogs are unintelligent—many, like Milton Friedman or Ayn Rand, possess sharp analytical minds and vice-like memories—but they nevertheless see the world as abstract, individual pieces rather than an organic whole. In doing so, they miss the forest for the trees.
For example, Milton Friedman unequivocally asserted that free trade would enrich America by increasing labor specialization, neglecting the (rather obvious) fact that people are not fungible—unemployed factory workers cannot simply become software engineers. Different people have different talents, temperaments, and interests, in addition to different levels of intelligence, creativity, and knowledge. Theories are necessarily abstractions of reality, yet many experts confuse the two. This is why they fail.
Many hedgehogs also benefit from hubris: wild predictions attract attention. Tetlock’s research confirms this: he found that the more popular the expert, the worse their predictions. People don’t want accuracy, they want a show. Thus, although Bill Kristol may appear (and probably is) incompetent, this doesn’t detract from the fact that he makes money from making wild predictions—right or wrong. Ever wonder why so many idiots work at CNN? This is why.
Foxes, meanwhile, are moderately successful predictors. There are many reasons for this: some, like Nate Silver, have supercomputers and endless data; others are exceedingly well-connected and simply have the “inside scoop”; but most foxes are successful because they’re conservative—not in the political sense, but as a modus operandi. Foxes tend to eschew theories and like to examine data from multiple angles. Furthermore, they calibrate their predictions for unknown variables and the vicissitudes of chance.
While conservatism makes foxes able predictors, it also makes them appear unconfident—even overly cautious. Basically, they make for bad TV. For this reason, foxes rarely appear on the nightly news, and when they do they tend to annoy viewers by seeming wishy-washy. So much for foxes.
you will ignore this article
Of course, I cannot in good conscience write an article on prediction without tying my own noose. So here’s my prediction: you will read this article, think something along the lines of “hm, interesting,” and move on with your day. After a few weeks, you will forget you ever read this article—this sentence in particular. Perhaps you’ll retain some vague recollection of a statistic involving CFOs, the name “Tetlock” may ring a bell, or you’ll sarcastically roll your eyes when someone on TV says “X will happen” for a reason you cannot quite remember.
But hopefully, some of you prove me wrong, and instead send this article to every blowhard hedgehog you know—especially Bill Kristol.