Superforecasting: The Art and Science of Prediction
Cover & Diagrams
Did you know it's possible to make accurate predictions about the future without psychic powers? With the right practice and strategies, you can become what's known as a super forecaster.
In Superforecasting: The Art and Science of Prediction by Wharton professor Philip E. Tetlock and co-author Dan Gardner, readers learn about the qualities and skills that make a super forecaster and how to apply that knowledge to any situation. You will also learn about real-life super forecasters from all walks of life and how to break down even the most difficult questions to achieve the best results.
Top 20 insights
- Super forecasting is not about the ability to crunch numbers; what you do with the numbers matters most. A brilliant puzzle solver will be at a disadvantage relative to a less intelligent person who possesses a great capacity for self-critical thinking.
- For super forecasters, beliefs are hypotheses to be tested, not treasures to be guarded. Don't just be open-minded; be super open-minded. When you make a prediction, however, be as precise as possible. If the prediction is too vague, you run into the "Forer effect," where people read their own meanings into it and apply it to themselves.
- Unpack the question into components, then distinguish which parts you know and which you don't. Put the problem into a comparative perspective that downplays the situation's uniqueness. Then look at factors that play up the situation's uniqueness, and synthesize your findings to make as precise a judgement as you can.
- Super forecasters adjust their views in light of new information as often as necessary to draw the most accurate conclusion. Carefully balance the old with the new and incorporate them into your latest prediction. Update often, but in small increments. This concept is perfectly illustrated by using the Bayesian belief-updating equation.
- There are two dangers a forecaster faces after making an initial determination. One is underreaction to new information (bias or "belief perseverance"), and the other is overreaction. Both can diminish accuracy and, in extreme cases, destroy a perfectly good forecast. Disregard irrelevant information to avoid the dilution effect, then commit.
- Bring out the best in others and let others bring out the best in you. The balance you learn in forecasting will translate to team management, especially when you hear different perspectives. Former LA Dodgers manager Tommy Lasorda said that management is "like holding a dove." Hold too tight, kill it. Hold too loose, lose it.
- Tweak the wording of a question to get another perspective. For example: "Will the South African government grant the Dalai Lama a visa within six months?" In addition to reasons they would grant him a visa, look at reasons they wouldn't. Change the word "grant" to "deny" and you have a new criterion for research.
- Forecasters run into several barriers that impact accuracy. Vague language such as "significant market share" can be interpreted based on the reader's biases and not facts. Time lag is another issue. When forecasts span months or years, beware of "hindsight bias" that changes your current perspective to match the results.
- To be a super forecaster, a growth mindset is essential. Not all practice improves skill, however. You need to know which mistakes to look out for, and pair your practice with clear and timely feedback. Be careful not to let your confidence grow faster than your accuracy.
- Intractable problem? Break it into tractable sub-problems that you can identify as knowable and unknowable. The big question of, "Will there be another Korean war?" is much harder to quantify than "What is the frequency of North Korean nuclear tests?" and "Will North Korea launch a cyber-attack on South Korea?"
- Strike the right balance between inside and outside views. Inside views are specific to the situation, such as recent events. Outside views are more generic, i.e. how often the situation at hand occurs, on average. History tends to repeat itself. Even seemingly unique events can relate to trends, which are then weighted against inside views.
- Don't overreact to evidence, but don't underreact, either. Forecasting is all about observation and balance. Super forecasters are agile, but don't jump needlessly. When you update your prediction, it can be boring or even uncomfortable, but worth it in the long run. The best forecasters tend to update probabilities incrementally, such as from 0.4 to 0.35.
- "Dragonfly eye forecasting" is the pursuit of point-counterpoint discussions, i.e. "on the other hand…" This method is common among the forecasting world because the best forecasters are precise, but willing to weigh all sides. Super forecasters often score high on an active open-mindedness tests, such as one by Psychologist Jonathan Baron at the University of Pennsylvania.
- Make yourself aware of the causal forces at work in your problem. Information that clashes with your hypothesis is just as important, if not more so, than evidence that supports it. Just as a dragonfly sees multiple images and synthesizes them into a single picture, so must forecasters synthesize opposing views.
- As you dissect a question, you will be able to determine various probabilities that range from "remote" to "almost certainly." The more degrees of uncertainty you can distinguish, the better a forecaster you will become. It feels unnatural at first, but with patience and practice you will be able to translate vague hunches into numeric probabilities.
- Strike a healthy balance between overconfidence and underconfidence. Super forecasters do not rush to judgement, nor do they linger too long near "maybe." Long-term accuracy requires calibration and resolution, prudence and decisiveness. Do post-mortems on your forecasts to learn what worked and find creative solutions to the errors you find.
- Hindsight always feels better than 20/20, especially once you have made a prediction. A common pitfall to avoid is "rearview-mirror hindsight bias." Own your failures, and don't overlook flaws in your basic assumptions. You might have been on the right track but were thrown off course by a minor technical error.
- Complex algorithms fed into supercomputers may soon complement forecasting endeavors. Human judgement can benefit from a second perspective devoid of emotion, but as of right now, only humans can understand human meaning. "There's a difference between mimicking and reflecting meaning and originating meaning," said Watson's chief engineer, David Ferrucci.
- There are obstacles to consider if you plan to put a team of forecasters together with a single objective. Forecasters can adopt "group think" and become too agreeable. Likewise, they can slip into "cognitive loafing," which is the attitude that others should do the heavy lifting. Maintain independent judgement in the group.
- Learning requires doing, with good feedback that leaves no ambiguity on whether you are on the right track. Practice is not helpful if you simply go through the forecasting motions. Super forecasting is the product of deep, deliberative practice. Super forecasting requires constant mindfulness even when you try to follow the rules.
What does it take to be a good superforecaster?
Celebrity forecasters like Tom Friedman are called upon in times of crisis to help make long-term decisions based on current events. You don't have to be a celebrity to make accurate predictions, however, and many "super forecasters" with high accuracy rates are unsung. Forecasting is a skill to be learned and continually mastered.
To be a reliable and confident forecaster, you'll need to be open to new experiences. It's not enough to be open-minded; you must be super open-minded to sacrifice your own preconceived ideas and opinions for the sake of the most accurate prediction.
Unfortunately, no magic formula exists that forecasters can turn to – just broad principles with a lot of caveats. However, there are a number of tried-and-true methods of forecasting that can help you on your journey.
Goldilocks was right
When posed with a big question, triage the situation. That is, focus on questions where your hard work is likely to pay off, as opposed to the hardest or the easiest ones. Take the "Goldilocks" approach: start with questions of middling difficulty and work your way outward.
If you were to sum up forecasting in one word, it might be "balance." This doesn't mean your predictions should always land somewhere in the middle; it means you take everything into consideration, even evidence that contrasts with your current view. A closer inspection might reveal a factor you hadn't thought of that alters your probabilities.
Italian American physicist Enrico Fermi, a central figure in the invention of the atomic bomb, posed a brainteaser for forecasting that asks how many piano tuners are in Chicago.
Without looking at the internet or Yellow Pages, a forecaster can come up with an educated answer if they know four things:
- The number of pianos in Chicago
- How often pianos are tuned each year
- How long it takes to tune a piano
- How many hours a year the average piano tuner works
Fermi taught that breaking the question down separates the knowable from the unknowable on this list. Despite the seemingly rough nature of the guesses, the result tends to be more accurate than a random guess. Many have attempted this puzzle; one presentation by psychologist Daniel Levitin shows how to come up with a solution.
- For the first answer, set a confidence interval – a range you are 90% sure contains the right answer. Levitin estimated that Chicago has around 2.5 million people: smaller than Los Angeles, but clearly home to well over 1.5 million residents.
- Next, Levitin supposed that a piano might need tuning once per year.
- Since pianos are too expensive for most families, Levitin guessed that 1 in 100 homes in Chicago has a piano. Double that to account for schools, concert halls, etc., which often have more than one: 2.5 million residents × 2/100 (2%) = 50,000 pianos in Chicago.
- Then, Levitin guessed that it takes around two hours to tune a piano.
- Assuming that a piano tuner works 40 hours a week plus two weeks' vacation and spends about 20% of their time driving from job to job, the average piano tuner might work 1,600 hours per year.
Therefore, if 50,000 pianos need tuning once per year, and it takes two hours to tune one piano, that comes out to 100,000 total piano-tuning hours. If you divide that by the annual hours worked by one piano tuner, it comes out to 62.5 piano tuners in Chicago. Levitin found 83 listings for piano tuners in Chicago, but many of them were duplicates, such as businesses with more than one phone number. So, an accurate number is not known, but Levitin's calculation shows how close you can get.
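The whole chain of guesses fits in a few lines of arithmetic. Here is a minimal Python sketch of the estimate; every number is a rough guess carried over from the steps above, not real data:

```python
# Fermi estimate of piano tuners in Chicago (all inputs are rough guesses)
population = 2_500_000              # estimated residents of Chicago
pianos_per_person = 2 / 100         # ~1 piano per 100 homes, doubled for schools, halls, etc.
tunings_per_year = 1                # each piano tuned about once a year
hours_per_tuning = 2                # time to tune one piano
hours_worked_per_year = 40 * 50 * 0.8   # 40 h/week, 50 weeks, minus 20% travel time = 1,600 h

pianos = population * pianos_per_person                        # 50,000 pianos
tuning_hours = pianos * tunings_per_year * hours_per_tuning    # 100,000 hours of work
tuners = tuning_hours / hours_worked_per_year

print(round(tuners, 1))  # 62.5
```

Changing any single input within a plausible range shifts the answer, but rarely by an order of magnitude; that robustness is what makes Fermi-style decomposition useful.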
Forecasting step-by-step: let's solve a murder
Pose a question. For example, let's say you're a homicide detective and you need to find out who did it. Unlike on TV, the clues will not fall in your lap before the next commercial break.
- First, check the outside view: Refer to statistics as a base rate. The FBI says that 28.3% of homicide victims are killed by someone they know, so there is a 28.3% chance the victim knew their killer. Likewise, there is a 9% chance it was a stranger.
- Next, check the inside view: Examine facts specific to this case. Who had the ability, means, and motive for killing this person? Adjust your probability up or down for each suspect. Start with the most obvious and move your way outward. (That's why they always look at the spouse or significant other first.) If the victim had a recent fight with their significant other, the likelihood that this person killed them goes up. If that significant other has a verifiable alibi, the likelihood goes down. Note: Don't get stuck on your initial gut feelings, but don't ignore them, either. It's easy to latch on to a prediction and find information to support it, rather than weigh all options.
- Now, merge the two views to create a synthesized prediction. Let's say the victim was seen getting into a car the night they were killed. You've identified a person that worked with the victim who drives the same kind of car. Co-workers say that person was obsessed with the victim. Their alibi is weak. They look like the strongest suspect. Let's say you come up with a 75% chance that this person is your culprit.
- Have your colleagues assume your judgement is wrong and make their own estimates. Researchers have found that combining your first judgement with a second one made by others is often more accurate. Another way to approach this is to step back from your first estimate for several weeks (if you have the luxury of time outside of a murder case) before asking peers to make one of their own. Likewise, you can make your own second judgement after a break, as billionaire investor George Soros does. Soros has often cited this method as a key part of his success.
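As a toy illustration of that last step, here is what combining independent judgments can look like in Python. The colleagues' numbers are invented for the example, and a plain average is just one simple way to aggregate; the point is that pooled independent estimates tend to beat any single one:

```python
# Combining independent probability estimates (numbers are made up for illustration)
estimates = [0.75, 0.60, 0.70]   # your estimate plus two colleagues' independent ones
combined = sum(estimates) / len(estimates)

print(round(combined, 3))  # 0.683
```

The averaging only helps if the judgments are genuinely independent; estimates made after hearing yours inherit your biases and add little.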
Psychologists who test police officers find a large gap between their confidence and their skill. As officers become more experienced, that gap grows. Beware of growing confident faster than you grow accurate.
Update often, but bit by bit
Statisticians will be familiar with a thought experiment proposed in the 1700s by Presbyterian minister Thomas Bayes. He wrote "An Essay Towards Solving a Problem in the Doctrine of Chances," which was refined and published posthumously in 1763 by his friend, Richard Price.
Essentially, the theorem says that your new belief should depend on your prior belief, multiplied by the diagnostic value of the new information.
While super forecasters should be numerate, they don't have to turn to algebra every time they want to make a prediction. What matters more is Bayes' core insight of getting closer to the truth gradually by updating in proportion to the weight of the evidence.
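As a hedged illustration with invented likelihoods, here is what one Bayesian update might look like in Python, using the murder investigation's 28.3% base rate as the prior and a suspect's lie about their whereabouts as the new evidence:

```python
# One Bayesian update: posterior = P(evidence | H) * P(H) / P(evidence)
prior = 0.283                    # outside-view base rate: victim knew their killer
p_lie_given_guilty = 0.80        # assumed: chance a guilty acquaintance lies about whereabouts
p_lie_given_innocent = 0.10      # assumed: chance an innocent person lies anyway

p_lie = p_lie_given_guilty * prior + p_lie_given_innocent * (1 - prior)
posterior = p_lie_given_guilty * prior / p_lie

print(round(posterior, 3))  # 0.759 — the lie raises 28.3% to about 76%, not to 99%
```

Note how the assumed 10% chance that an innocent person lies keeps the posterior well short of certainty, which is exactly the point of updating in proportion to the evidence.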
Going back to the homicide example, you might increase the likelihood of one suspect being your killer once you find out they lied about their whereabouts. If you overreact and think, "Aha! I'm 99% sure now," you can overlook unknowns, such as the reasons why they lied (to save their job, to spare their spouse's feelings, etc.).
Predicting the unpredictable
Don't forget to factor in situations that could change everything overnight. It's better to give yourself a bit of wiggle room "just in case" than to assume everything will go as planned.
In 2010, a poor Tunisian fruit vendor was robbed by corrupt police officers, sadly a common occurrence at the time. Later that day, he set himself on fire outside the town office. Protests erupted. Tunisia's dictator, President Zine el-Abidine Ben Ali, fled the country. The civil unrest spread throughout the Arab world and resulted in a number of rebellions and civil wars. Who could have predicted that one man's self-immolation would spark the "Arab Spring"?
A situation might be identified as a "powder keg ready to explode," but it's nearly impossible to tell what will light the fuse.
American meteorologist Edward Lorenz discovered that tiny data entry variations in computer simulated weather patterns could produce dramatically different long-term forecasts. His insight, published in an article called, "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" became the inspiration for chaos theory.
Predictions are everywhere
How predictable something is will depend on what we want to predict, how far into the future, and under what circumstances. Tomorrow's weather forecast is going to be much more accurate than one five days from now because as Lorenz discovered, a lot can change between now and then.
The internet is full of forecasts. A quick visit to Amazon illustrates the algorithm's prediction of other items you might like to buy. When you provide feedback on recommendations, the algorithm updates its predictions ever so slightly.
Life is full of mundane predictions, too. You see clouds on the horizon and grab an umbrella. Scientific laws let us predict things like the phases of the moon and the tides accurately enough to plan around them. But it's much harder to forecast when you should fill up your gas tank this week, because the pipeline might be attacked by hackers, driving prices up.
To err (and assume) is human
A now famous "Cognitive Reflection Test" was introduced by Shane Frederick, a management science professor at the Massachusetts Institute of Technology. It poses this seemingly easy question:
"A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?"
Most people immediately think, $0.10. If you think about it more carefully, you find that this answer is incorrect. Our brains automatically latch on to the "dollar" and not the "more." If the ball costs $0.10 and the bat costs a dollar more ($1.10), then the total cost will be $1.20. Therefore, the correct answer is $0.05.
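You can verify the arithmetic in a couple of lines of Python, working in cents to keep the numbers exact:

```python
# Check the bat-and-ball answer in cents (integers avoid floating-point noise)
ball = 5                        # the correct answer: $0.05
bat = ball + 100                # the bat costs exactly $1.00 more
print(bat + ball)               # 110 cents, i.e. $1.10 — checks out

# The intuitive answer fails the same check:
wrong_ball = 10
wrong_bat = wrong_ball + 100
print(wrong_bat + wrong_ball)   # 120 cents, i.e. $1.20, not $1.10
```

Writing the constraint down and checking it is precisely the slow, deliberate step that the intuitive answer skips.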
Modern psychologists attribute this phenomenon to a division of human brain function into two systems. System One is the subconscious. It makes automatic cognitive and perceptual decisions, and very quickly at that. System Two is our conscious mind, or whatever we choose to focus on at the moment. System One makes split second decisions based on historical experience, existing knowledge, predispositions, and other factors that "feel" right but are not necessarily correct.
To be a super forecaster, you will need to be aware of System One and how its vital operations can sometimes hinder the judgement of intelligent people.
The importance of human predictions
As imperfect and biased as humans can be, they will still be a necessary component of forecasting in the future. The advent of supercomputers and artificial intelligence makes it tempting to assume we can leave all the predictions up to machines. Polymath Herbert Simon predicted in 1965 that we were only 20 years away from a world in which machines could do "any work a man can do."
While this is certainly the case in many automated industries, there is a reason that computers and robots are still overseen by humans. The authors spoke to Watson's chief engineer, David Ferrucci, who has worked in artificial intelligence for over 30 years. Computers are better able to spot patterns these days, he noted, but machine learning requires the presence of humans to feed the learning process. As of right now, a computer can look up a fact, but a forecast requires an informed guess based on a myriad of information.
The human brain is wondrous because the task of compiling data and making a prediction is extremely difficult, and yet we do it all the time. The biggest hurdle for computers, if they are ever to replace a super forecaster, is understanding. Computers may get better at mimicking human meaning and therefore better at predicting human behavior, noted Ferrucci, but "there is a difference between mimicking and reflecting meaning and originating meaning."