

Amos had a rule he liked to repeat: no matter what anyone asked you to do, whether it was going to a party, giving a speech, or helping someone out, never say yes on the spot, even if you wanted to. Sleep on it for a day, he said, and you'd be surprised how many invitations you had been perfectly ready to accept you ended up turning down. He was ruthless with his time, always prepared to walk away from anything that wasn't worth it.

He thought it was perfectly fine to leave a boring meeting or a cocktail party the moment he stopped enjoying it. Just walk out, he said; you'd be amazed how quickly you could invent an excuse on the spot. He approached everything the same way. His rule of thumb: if you don't regret cutting something out of your life at least once a month, you're not cutting enough. Anything Amos didn't consider important simply got tossed aside, so whatever did make it through his ruthless filtering had to be special.

One thing that probably shouldn't have survived was a crumpled piece of paper with a few scribbled lines on it: thoughts he and Danny had jotted down when they were about to leave Eugene. For some reason, Amos kept it. It read:

"People predict by telling stories."

"People predict less, explain more."

"Willingly or not, people always live in uncertainty."

"People believe they can predict the future if they try hard enough."

"People accept any explanation that fits the facts."

"What’s written on the wall is just invisible ink."

"People always try to get what they already know, but shy away from learning new stuff they don't have."

"Man is a creature of certainty, thrown into a universe of uncertainty."

"In the battle between man and the universe, the end will be unexpected."

"What happened was always inevitable."

It reads like a poem, but it was really just a set of thoughts Danny and Amos were kicking around for another paper. This time they wanted to present their ideas in a way that would reach beyond psychology. They planned to write about human prediction before they went back to Israel.

They were clear about the difference between judging and predicting. A judgment like "He looks like a brave Israeli officer" implies a prediction: "He will become a brave Israeli officer," and vice versa; there is no prediction without some kind of judgment. The difference, they figured, was that a judgment becomes a prediction when uncertainty is involved. "Adolf Hitler is a powerful speaker" is a statement, a judgment. "Adolf Hitler will become Chancellor of Germany" is a prediction, because at the time anything could still have happened. They called the new paper "On the Psychology of Prediction," and in it they wrote something to the effect of: "When people make predictions or judgments under uncertainty, they don't seem to use statistical theory. Instead, they rely on a few heuristics, which sometimes lead to reasonable judgments but sometimes lead to serious, systematic errors."

Looking back, the whole question went back to Danny's time in the military. The people responsible for evaluating young Israeli recruits couldn't predict who would make a good officer, and the people running the officer training school couldn't predict which officers would excel in battle. Later, Danny and Amos caught themselves casually predicting the future careers of their friends' children and realized how overconfident they were about it. So they wanted to test, or rather demonstrate, how people use the representativeness heuristic to make predictions.

But to do that, they needed to *give* people prediction tasks.

So they decided to give people personality sketches and have them predict whether the students described would go to graduate school and which subjects they would choose. First, they asked people to estimate the percentage of graduate students in each field, and got answers like 15 percent in business, 7 percent in computer science, and so on.

Those percentages could serve as a baseline for any prediction. If you knew nothing about a particular student and were asked the odds that he would go into business, the right answer was 15 percent. As Amos and Danny put it: if you know absolutely nothing, the base rate is your answer.
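To make that concrete, here is a minimal sketch in Python. Only the 15 percent (business) and 7 percent (computer science) figures come from the survey answers quoted above; the other entries are invented so the distribution sums to 1.

```python
# A minimal sketch of "the base rate is your answer."
base_rates = {
    "business": 0.15,          # from the survey described above
    "computer science": 0.07,  # from the survey described above
    "law": 0.09,               # invented filler
    "education": 0.20,         # invented filler
    "other": 0.49,             # invented filler
}

def best_guess(field):
    """Knowing nothing about a particular student, predict the base rate."""
    return base_rates[field]

print(best_guess("business"))          # 0.15
print(best_guess("computer science"))  # 0.07
```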

So what happens when people *do* have some information? That was what Danny and Amos wanted to show. But what kind of information? Danny spent a day at the Oregon Research Institute thinking about it and then, after working through the night, created a stereotype of a computer science graduate student. He called him "Tom W."

Tom W was intelligent but not especially creative. He liked order and clarity and wanted everything in its place. His writing was dull, though occasionally enlivened by a clever pun or a flash of sci-fi imagination. He wanted to be competent but had little feeling for other people's problems and little interest in interacting with them. He was self-centered, yet held strong principles about big issues.

They had one group of people—the "similarity group"—assess how similar Tom W was to students in each field. They wanted to see which field Tom W "represented."

Then, they gave another group—the "prediction group"—this additional info:

"The above description of Tom W's personality was done by a psychologist when Tom was a senior in high school. Now, Tom's a grad student. Based on your judgment, please rank the probability of Tom majoring in these fields."

They also told people that the description of Tom W might not be reliable: it had been written by a psychologist, and it was years old. Amos and Danny suspected that people would leap straight from similarity to prediction ("That guy sounds like a computer whiz!") and ignore both the base rate (only 7 percent of graduate students were in computer science) and the questionable reliability of the description.
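For contrast, here is roughly what a statistically disciplined answer would look like: combine the 7 percent base rate with however diagnostic you think the sketch is. The likelihood ratio of 4 below (the sketch being four times as likely to describe a computer science student as anyone else) is an invented, fairly generous assumption, not a number from the study.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# 7% base rate for computer science, from the survey described earlier.
print(round(posterior(0.07, 4.0), 2))  # ~0.23: still well under 50%, even with strong-seeming evidence
print(round(posterior(0.07, 1.0), 2))  # 0.07: an unreliable sketch should leave you near the base rate
```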

The morning Danny finished creating Tom W, the first person to arrive at the institute was Robin Dawes. Dawes was a statistics expert, known for his rigor. Danny showed him the description of Tom W. "He read it and gave me this knowing, wicked smile," Danny said. "Then he said, 'Computer scientist!' That was a relief. I knew the character would hook those Oregon students."

The Oregon students, relying only on their intuition, immediately pegged Tom W as a computer science student. They paid no attention to the objective data, which showed that people will let a stereotype override their judgment. That led Amos and Danny to the next question: if people make irrational predictions from *some* information, what kind of predictions do they make from completely *irrelevant* information? They amused themselves brainstorming the experiment (would worthless information actually increase people's confidence in their predictions?), and eventually Danny came up with another character, "Dick."

Dick was a 30-year-old guy, married, no kids. Super talented, very driven, seemed like he'd be super successful. His coworkers loved him.

Then they ran another experiment, a cousin of the bag-of-chips demonstration Danny had done with his class. They told people a group contained 100 professionals, 70 percent engineers and 30 percent lawyers. If one person was picked at random, what was the chance he was a lawyer? People correctly said 30 percent. If the group was instead 70 percent lawyers and 30 percent engineers, what was the chance he was a lawyer? Correct again: 70 percent. But when they handed people the description of "Dick," information that said nothing about his profession, people put the chance of either profession at 50 percent. They simply threw out the information about the group and based their answers on nothing. "Clearly, people respond differently when given no information and when given worthless information," Danny and Amos wrote. "With no information, people rely on the prior probability (the base rate); with worthless information, they ignore it."
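The Dick result is easy to state in the same odds form used above: a description that fits engineers and lawyers equally well has a likelihood ratio of 1, so it should leave the 70 percent prior exactly where it was. A minimal sketch:

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form, as in the earlier sketch."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Dick's description says nothing about his profession, so it is equally
# likely for an engineer or a lawyer: likelihood ratio = 1.
print(round(posterior(0.70, 1.0), 2))  # 0.7: the correct answer is still the base rate
# Subjects answered 50%, as if the 70/30 split had never been mentioned.
```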

In "On the Psychology of Prediction," they also talked about stuff like how things that make people *more* confident can actually make them *less* accurate. At the end of the paper, they went back to the question Danny thought about in the Israeli military: how to select and train soldiers.

Flight school instructors had started doing what psychologists recommended: giving trainees positive reinforcement. Every time a pilot flew a mission well, they praised him. But after a while, the instructors reported that it wasn't working. Praising the pilots for doing well actually seemed to make them perform worse the next time. What was the explanation?

People came up with all kinds of theories. Maybe the praise made the pilots overconfident, or maybe the instructors weren't sincere. Only Danny saw the real problem: the pilots were going to perform well or badly *no matter what* the instructors said. A pilot who had just flown badly was likely to do better next time purely by chance, and a pilot who had just flown brilliantly was likely to do a little worse, also purely by chance. Because people don't recognize this, they misread how the world works. If you don't realize that performance tends to drift back toward the average, you're doomed to end up feeling punished for rewarding people and rewarded for punishing them.
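What Danny spotted is what statisticians call regression to the mean, and it is easy to reproduce in a simulation where praise and criticism have no effect at all. A minimal sketch; the skill-plus-luck model, the cutoffs, and the sample size are arbitrary assumptions, not data from any flight school.

```python
import random

random.seed(0)

def simulate(n_pilots=10_000):
    """Each flight's score is a fixed skill plus independent luck."""
    flights = []
    for _ in range(n_pilots):
        skill = random.gauss(0, 1)
        first = skill + random.gauss(0, 1)   # first flight: skill + luck
        second = skill + random.gauss(0, 1)  # second flight: same skill, fresh luck
        flights.append((first, second))
    return flights

def mean(xs):
    return sum(xs) / len(xs)

flights = simulate()
praised    = [(f, s) for f, s in flights if f > 1.0]    # flew well, got praised
criticized = [(f, s) for f, s in flights if f < -1.0]   # flew badly, got chewed out

# Second flights are roughly half as extreme as the firsts in both groups:
# the praised pilots "get worse," the criticized pilots "improve,"
# even though nothing the instructors did enters the model.
print(mean([f for f, _ in praised]),    mean([s for _, s in praised]))
print(mean([f for f, _ in criticized]), mean([s for _, s in criticized]))
```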

When Danny and Amos wrote those early papers, they didn't think much about who would read them; probably just a few academics who happened to subscribe to the right journals. By the summer of 1972 they had spent nearly three years studying the mysteries of human judgment and prediction. Their examples came either straight out of psychology or from the slightly odd tests they had designed for high school and college students. Yet they were sure their findings applied to anything involving probability judgments and decision-making, so they decided they needed a bigger audience. "The main goal of the next stage," they wrote in their research proposal, "is to expand this research and apply it to high-level professional activities such as economic planning, technological forecasting, political decision-making, medical diagnosis, and legal evaluation." They hoped the experts in those fields would "be able to recognize biases, avoid biases, reduce biases, and ultimately make better decisions." They wanted to turn the whole world into their laboratory, with doctors, judges, and politicians as their subjects instead of students. But how?

Interest in their research was growing. "That year, we truly realized we were doing something great," Danny said, "and people began to look at us with respect." Irv Biederman, then a visiting professor of psychology at Stanford, heard Danny give a talk on heuristics and biases there in early 1972. "After hearing the talk, I went home and told my wife that this research could win the Nobel Prize in economics," Biederman recalled. "I was convinced. He was using psychological theory to study economics, and I thought there was nothing better. He was explaining why people make irrational or incorrect judgments. It all came from the inner workings of the brain."

Biederman had known Amos back in Michigan and was now at SUNY Buffalo. The Amos he knew had been obsessed with obscure problems of statistical measurement, possibly important but probably unanswerable. "I would never have invited Amos to Buffalo to talk about his statistical measurements," Biederman said; no one would have cared or understood. But this new work Amos was doing with Daniel Kahneman blew him away. It confirmed Biederman's belief that "most scientific advances don't come from some sudden flash of insight. They come from interesting ideas and fun thoughts." So he persuaded Amos to stop in Buffalo in the summer of 1972 on his way back from Oregon to Israel. Amos gave five talks about the work he and Danny were doing, each aimed at a different academic field, and every one was packed. Fifteen years later, when Biederman left Buffalo for Minnesota, people there were still talking about Amos's talks.

Amos talked about the heuristics he and Danny had identified and about the problem of prediction. The thing that stuck with Biederman the most was the last talk, "The Historical Perspective: Judgment Under Uncertainty." Amos stood in front of a room full of history professors, waved his hands, and described how they could use his and Danny's ideas to see human behavior in a totally new way.

He said that we're always seeing stuff happen that doesn't make sense. Like, we can't understand why someone did something or why an experiment turned out a certain way. But usually, we can come up with an explanation pretty fast, some kind of story that makes it all seem clear and normal. It's the same way we see the world. We're super good at finding patterns, even in random data. We can easily create scenarios, give explanations, and tell stories. But, while we're great at that, we're terrible at evaluating the likelihood of events or looking at things critically. Once we accept an idea or an explanation, we tend to blow it up and can't see things any other way.

Amos didn't put it too bluntly. He didn't say, as he often did, that "history books are shockingly boring because a lot of what's in them is made up." What he did say may have been even more jarring to his audience: that historians, like everyone else, are subject to the cognitive biases he and Danny had been describing. "Historical judgment," he said, "is in large part intuitive judgment based on data," so history is subject to bias too. To make the point, Amos described a research project by a Hebrew University graduate student named Baruch Fischhoff. Richard Nixon had just announced, to general astonishment, that he would visit China and the Soviet Union. Fischhoff used the occasion to design a test, asking people to estimate the probability of various outcomes: that Nixon would meet with Mao more than once, that the United States and the Soviet Union would jointly develop a space program, that a group of Soviet Jews would be arrested for trying to speak with Nixon. After Nixon returned from the trips, Fischhoff asked the same people to recall their predictions, and their memories were badly distorted. Everyone believed they had assigned much higher probabilities to the events that actually happened than they really had. They simply assumed that things had gone the way they'd expected all along. Years after Amos's talk, Fischhoff would name this phenomenon "hindsight bias."

Amos told the historians they were at risk of only accepting the facts they saw (and ignoring the facts they didn’t or couldn’t see) and then turning those facts into persuasive stories.

He said, in essence, that we often can't predict what's going to happen, but once it happens we act as if it had been predictable all along and come up with an explanation for it. Even without all the relevant information, people can still explain events they could not have predicted. That ability reveals a serious, if hidden, flaw in our reasoning: it leads us to believe the world is less uncertain than it really is, and that we are less capable than we really are. Because if we can explain, after the fact and on partial information, things we failed to predict, then the outcome must have been inevitable and we should have seen it coming. The failure looks like ours, not the world's. So we blame ourselves for missing what now seems obvious and ask: weren't the signs there all along?

To make their commentary fit the final result, sports commentators and political pundits will completely rewrite the story they were telling. Historians do the same thing: they force patterns onto random events, often without realizing it. Amos called this "creeping determinism," and jotted down one of its dangers: "He who looks at yesterday with the eyes of today will face a tomorrow of surprises."

Distorting the way we see the past makes it harder to predict the future. The historians in the room prided themselves on their powers of "construction": their ability to take pieces of the past and assemble them into explanations that made events seem, in retrospect, predictable. Once the story was explained, the only remaining mystery was why the people involved hadn't seen it coming. "The whole history department went to Amos's talk," Biederman said, "and they all came out looking like they'd been beaten up."

Amos was arguing that the way we understand history makes the past seem more certain and predictable than it really was. After hearing that, Biederman really understood what Amos and Danny were doing. He was sure it would affect all areas of life where experts had to judge the likelihood of uncertain events. But so far, it was all just academic. Only professors and scholars—mostly psychologists—had heard about it. How would they take their big discoveries to other fields? They didn't know yet.

Early in 1973, after they went back to Israel from Eugene, Amos and Danny started working on a long paper that would put all their stuff together. They wanted to put the main ideas from the four papers they'd already written in one place so readers could figure it out themselves. Danny said, "We decided to present it as it was: just a psychological study. What insights it contained, that would be up to the reader to decide." They thought *Science* magazine was the best hope for reaching beyond psychology.

The paper was more built than written (Danny: "One sentence was a good day."). While they were building it, they happened upon a clear way to connect their ideas to everyday life: an article by Ron Howard, a Stanford professor, called "Decision Analysis in Hurricane Modification." Howard was one of the founders of decision analysis, whose basic premise was that decision-makers should assign explicit probabilities to the possible outcomes, forcing them to clarify their thinking before they decide. How to deal with a dangerous hurricane was a natural example, and policymakers might call in decision analysts to help. Hurricane Camille had recently flattened much of the Mississippi Gulf Coast, and it could have done far worse damage had it hit New Orleans or Miami. Meteorologists now believed they had a new technique, seeding storms with silver iodide, that might weaken a hurricane and perhaps even change its course. But controlling a hurricane was not simple. The moment the government intervened, it owned the storm. If everything turned out fine, no one (not the public, not the courts) would give the government credit, because no one would know what would have happened had it stayed out; but if the damage was bad, the government would be blamed for every bit of destruction. Howard's article analyzed the options open to the government, which meant estimating the probabilities of the different outcomes.
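The mechanics of that kind of analysis are simple to sketch: put probabilities on the possible damage outcomes under each policy and compare expected damage. The numbers below are invented for illustration; Howard's article used its own estimates, and a full analysis would also have to price in the political cost of intervening.

```python
# Outcomes are property damage in billions of dollars (invented values).
outcomes = [10, 30, 50]

p_do_nothing = [0.5, 0.3, 0.2]  # assumed probabilities if the government stays out
p_seed       = [0.6, 0.3, 0.1]  # assumed probabilities if it seeds the storm with silver iodide

def expected_damage(probs):
    """Probability-weighted average of the damage outcomes."""
    return sum(p * d for p, d in zip(probs, outcomes))

print(expected_damage(p_do_nothing))  # 24.0
print(expected_damage(p_seed))        # 20.0
# Seeding looks better on expected damage alone, but once the government
# intervenes it owns whatever the storm does, and that cost isn't in this sum.
```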

But Danny and Amos thought the way the decision analysts elicited probabilities from the hurricane experts was strange. The analysts would show a government hurricane-modification expert a roulette-style wheel with, say, a third of it painted red and ask: would you rather bet on the wheel landing on red, or on the hurricane causing more than $30 billion in damage? If the expert chose the wheel, that implied he thought the chance of more than $30 billion in damage was less than a third. So the analyst would bring out another wheel, this one with only 20 percent painted red, and ask again, adjusting until the share of red on the wheel matched the expert's estimate of the probability of more than $30 billion in damage. The procedure simply assumed that hurricane experts could accurately assess the probability of highly uncertain events.
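The wheel procedure is essentially a matching game: keep adjusting the share of red until the expert can't choose between betting on the wheel and betting on the event. Here is a bisection-style sketch of that idea, with a stand-in "expert" who privately believes the chance is 33 percent; the real analysts adjusted the wheel by hand rather than by algorithm.

```python
def elicit_probability(prefers_wheel, lo=0.0, hi=1.0, tol=0.01):
    """prefers_wheel(p): True if the expert would rather bet on a wheel that wins
    with probability p than on the event.  Narrow the interval until the expert
    is roughly indifferent; that point is the elicited probability."""
    while hi - lo > tol:
        p = (lo + hi) / 2
        if prefers_wheel(p):
            hi = p  # the wheel at p beats the event, so the event's probability is below p
        else:
            lo = p  # the event beats the wheel, so its probability is above p
    return (lo + hi) / 2

# Stand-in "expert" who believes the chance of >$30 billion in damage is 33%.
believed = 0.33
print(round(elicit_probability(lambda p: p > believed), 2))  # ~0.33
```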

In their previous research, Danny and Amos had shown that people's brains react to uncertain situations in all kinds of ways that mess with their judgment of probabilities. They believed that, with their new research on systematic biases, they could help people make more accurate decisions. For example, if you're trying to judge the chances of a big storm hitting in 1973, you're gonna be influenced by how well you remember Hurricane Camille. But *how much* are you being influenced? "We thought decision analysis would one day become mainstream, and that we could help," Danny said.

The leading decision analysts, Ron Howard among them, were all at the Stanford Research Institute in Menlo Park, California. In the fall of 1973, Danny and Amos flew out to meet with them. But before they could apply their theory of uncertainty to the real world, something unexpected happened. On October 6, a combined force from Egypt and Syria, backed by troops and planes from as many as nine Arab nations, attacked Israel. Israeli intelligence had completely failed to see it coming, and the troops were caught by surprise. On the Golan Heights, about a hundred Israeli tanks faced 1,400 Syrian tanks. Along the Suez Canal, a garrison of 500 Israeli soldiers and three tanks was overwhelmed almost instantly by 2,000 Egyptian tanks and 100,000 Egyptian soldiers. In the beautiful Menlo Park weather, Amos and Danny heard the shocking news of the Israeli army's collapse. They rushed to the airport and boarded the first flight back to Israel, preparing to join yet another war.
