Chapter Content
This chapter is called "Collision: Mouse and Python," a title that sounds dramatic.
It's about two men, Daniel and Amos. They overlapped at the University of Michigan for six months, yet they never spoke: their paths never crossed, and their ideas never collided. One studied people's pupils in one building; the other, in a different building, used mathematical models to analyze similarity and decision-making. As Daniel later put it, "We had nothing to talk about." So it came as a surprise when Amos showed up in Daniel's seminar one spring.
Daniel never invited guest speakers; the seminar was his show. And Amos didn't obviously fit the applied-psychology bent of the class. One student recalled that Daniel and Amos seemed to be competing with each other, which was odd: both were stars of the psychology department, yet they weren't on the same page.
Another student sensed the tension as well, feeling that Daniel was a little uncomfortable, even suspicious, of Amos. Daniel, for his part, said he was simply curious: "I need to get to know him better."
So Daniel invited Amos to his seminar and told him to talk about whatever he wanted. Surprisingly, Amos didn't present his own research. It was still highly theoretical, and perhaps he felt it wasn't right for the room; it was disconnected from the real world. Amos thrived on abstraction, while Daniel gravitated toward real-world problems, even as he kept his distance from people.
People called Amos a "mathematical psychologist," which was a strange label. To psychologists who didn't do math, like Daniel, mathematical psychologists were people hiding their ignorance of psychology behind mathematics, doing work that was ultimately meaningless. From the other side, the mathematical psychologists thought their non-mathematical colleagues were too clueless to grasp the importance of their research. At the time, Amos was working on a huge, dense three-volume textbook, Foundations of Measurement, roughly a thousand pages of arguments and examples about how to measure things: theoretically brilliant, but obscure.
Instead of talking about himself, Amos presented research then under way at the University of Michigan on how people react to new information when making decisions. Subjects were shown two bags of chips, one mostly white with some red, the other mostly red with some white. A subject picked a bag, then drew chips from it one at a time without looking inside. After each chip, the subject had to guess which bag they were holding: the mostly-red one or the mostly-white one.
What makes this experiment interesting is that it has a mathematically correct answer. Bayes' theorem tells you exactly how probable it is that you've picked the red bag or the white one. Before any chips are drawn, it's 50/50; with each draw, the probability shifts. How much it shifts depends on the bags: if one bag is 99 percent red and the other 99 percent white, each chip gives you far more certainty than if the split were only 51 percent. All of this can be computed with Bayes' theorem.
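The update the chapter describes can be sketched in a few lines. This is a minimal illustration, not the Michigan researchers' actual procedure; the default 75/25 bag composition is an assumption (chosen because it makes each red chip a 3-to-1 clue), and draws are assumed to be with replacement.

```python
from math import prod

def posterior_red_bag(draws, p_red=0.75, prior=0.5):
    """Probability that the chosen bag is the mostly-red one, given
    the chips drawn so far, via Bayes' theorem.

    draws is a sequence of 'R' and 'W'; p_red is the fraction of red
    chips in the mostly-red bag (the mostly-white bag mirrors it).
    """
    # Likelihood of the observed sequence under each hypothesis
    like_red = prod(p_red if c == 'R' else 1 - p_red for c in draws)
    like_white = prod(1 - p_red if c == 'R' else p_red for c in draws)
    # Bayes' theorem: posterior is proportional to likelihood x prior
    joint_red = like_red * prior
    return joint_red / (joint_red + like_white * (1 - prior))

print(posterior_red_bag([]))                 # 0.5 before any draws
print(posterior_red_bag(['R']))              # 0.75 after one red chip
print(posterior_red_bag(['R'], p_red=0.99))  # 0.99: lopsided bags are far more informative
```

With the 99-percent bags, a single chip settles the question almost completely; with a 51/49 split, it barely moves the needle, which is exactly the point about informativeness above.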
The subjects, of course, didn't know Bayes' theorem. They had to guess, and the psychologists could compare their guesses with the mathematically correct answers. The question was whether the mind processes new information the way a statistical calculation does. Are we all secretly intuitive statisticians? Can we make accurate guesses without knowing the formulas?
At the time, this was a novel idea. The psychologists believed their findings could explain all sorts of behavior: how investors respond to earnings reports, how patients judge their health after a doctor's diagnosis, how policymakers react to opinion polls. Can people estimate such probabilities correctly? How do they use cues to form judgments, and do their predictions change as new information arrives?
Amos told Daniel's class that the Michigan researchers had found that people do revise their beliefs when they get new information. Draw a red chip and you become more confident you're holding the red bag. But, crucially, people don't revise enough. After three red chips in a row, a subject might feel three times more likely to be holding the red bag, when Bayes' theorem says the odds have risen twenty-seven-fold. People move in the right direction, but not far enough. The researchers called them "conservative Bayesians": they behave roughly as if they were applying the formula, just too timidly.
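The "not far enough" gap is easiest to see in odds form. A sketch, again assuming 75/25 bags, so each red chip multiplies the odds of the red bag by 3 and each white chip divides them by 3:

```python
def bayes_odds(n_red, n_white, likelihood_ratio=3):
    """Posterior odds (red bag : white bag) after drawing n_red red
    and n_white white chips, starting from even odds. Each red chip
    multiplies the odds by the likelihood ratio; each white divides."""
    return likelihood_ratio ** (n_red - n_white)

print(bayes_odds(1, 0))  # 3: one red chip triples the odds
print(bayes_odds(3, 0))  # 27: three red chips, not the roughly 3 that subjects felt
```

The multiplicative growth is what intuition misses: three clues don't feel nine times stronger than one, but under Bayes they are.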
The researchers believed people's behavior was somehow tied to Bayes' theorem, an idea in line with the mainstream thinking of the day. A pool player may not know the physics of the game, yet still strikes the ball at roughly the right angle with roughly the right force. The brain arrives at answers close to the correct one without knowing how it does so.
When Amos finished, Daniel was baffled. He thought the whole thing was ridiculous. Of course people are more likely to think they're holding the red bag after drawing a red chip; what else would they think? Daniel didn't know this kind of research well. He had never really studied how people think; he thought of thinking the way he thought of seeing. And this research about thinking didn't square with what people actually do in real life, because what you see and hear is often an illusion.
Daniel's favorite psychologists had made major discoveries about visual illusions, and knowing something is an illusion doesn't stop it from fooling you. So Daniel considered thinking unreliable. He didn't believe people were natural statisticians. Sit in on any statistics class: students don't intuitively grasp the importance of base rates. Daniel himself had made that mistake, drawing conclusions about children from too small a sample.
In Daniel's view, people aren't "conservative Bayesians" or statisticians of any kind; they make judgments from a handful of details. The brain-as-statistician is just a metaphor, and Daniel thought it was a bad one.
To Daniel, psychologists who believed people were good at statistics were no better than psychoanalysts: unable to face their own stupidity. Experiments like this, he thought, could only appeal to people who already believed human intuition was basically accurate, that we are fundamentally sound statisticians.
Most situations in life, Daniel thought, are not as clean as deciding which bag holds more red chips. If anything, these experiments showed that people's intuitive judgments are poor, that they miss even the obvious answers. Someone who can track the color of chips in a bag may still flounder in messier situations, like predicting whether a dictator possesses weapons of mass destruction. And when people lean on a theory, Daniel believed, they bend the evidence to fit it rather than the other way around; that is when their judgment becomes biased.
Psychologists had spent decades studying how rats learn to run mazes, Daniel noted, and even laymen had come to regard it as nonsense. He thought the decision-making researchers were similarly blinded by theory. What does "conservative Bayesian" even mean? Are people somehow reading answers off Bayes' theorem? What are they actually doing in their heads? Amos was a psychologist, yet here he was praising, or at least not criticizing, an experiment that had nothing to do with psychology.
So Daniel did what any self-respecting person would do: he argued with Amos. Daniel later said he "made him look bad," even though they were just talking as friends. In Jerusalem you could express yourself and debate; it wasn't like America, where everyone has a right to their opinion.
Daniel went home and told his wife he had won an argument with his brash young colleague. He called it an important moment in the debate and allowed that both of them were good speakers.
Amos hardly ever lost debates and was rarely persuaded by the other side. One of his students said you simply couldn't tell Amos he was wrong, even when he was. Not because he was rigid; he was always happy to talk and open to new ideas, even ones that clashed with his own. It may simply be that he was usually right, and everyone, including Amos himself, assumed as much.
Daniel wondered whether Amos had ever seriously examined the supposed link between human thinking and Bayesian statistics; it wasn't his area of expertise. Perhaps, Daniel said, Amos had never had a serious discussion on the subject, or if he had, no one had objected so bluntly.
But Amos, unlike Daniel, wasn't resistant to theory; he was open to the reigning theories of social science. To him, a theory was like a pocket: a place to carry what you needed. An old theory shouldn't be discarded until a better one replaced it; theory organized knowledge and made predictions more accurate. And the most influential theory in social science at the time held that humans were rational, or at least competent intuitive statisticians.
But something did change in Amos that day. He left Daniel's seminar in a state of unusual skepticism.
His close friends noticed the remarkable change, though in their eyes Amos had always carried a streak of skepticism. He once described a problem Israeli officers faced leading troops through the desert: the desert makes shapes and distances hard to judge, so it's hard to know which way to go. A soldier needs to know the country in order to defend it, yet the country itself is hard to read. Leading troops, Amos constantly had to decide which direction to take, and even he found it difficult. The desert, he felt, distorted perception, and that distortion could kill you. In the 1950s and '60s, if an Israeli officer lost his way in the desert, his soldiers would grow disobedient, knowing how close they were to death.
Other episodes suggested that Amos didn't fully share the views of the decision-making theorists. Before the seminar, he had been called back to the army to serve in the Golan Heights, commanding a unit that monitored Syrian troop movements. One of his men was about to become a mathematics professor, and the two talked about how people judge the probability of uncertain events. Amos was struck that in 1956 the government had believed a war would last no more than five years while others thought ten; he wanted to show how uncertain such probability judgments were. People simply didn't know how to judge likelihood correctly.
If Amos had felt a rift before returning to Israel, this collision with Daniel set off an earthquake. Not long afterwards he ran into a friend, grabbed him, and said he wouldn't believe what had just happened. He recounted his talk, and Daniel's verdict: "Good talk, but I don't believe a word of it." Amos seemed genuinely shaken, and his friend tried to console him, arguing that thinking isn't an isolated activity; perception and judgment must be linked. The latest research examined how the mind behaves when making objective judgments, but said nothing about how it works the rest of the time. After that afternoon, Amos adopted a different set of views, and from the new vantage point Edwards' research looked crazy.
After the seminar, Amos and Daniel had lunch together a few times, but they still worked separately. That summer, Amos went to America while Daniel went to Britain to continue his research on attention. Daniel would play one stream of digits into a subject's left ear and a different stream into the right, then test how quickly the subject could switch between streams and how well they could block out the stream they were supposed to ignore. In tank warfare, the speed with which a combatant identifies the target and acts decides who lives or dies; with this test, Daniel hoped to pick out which tank commanders would judge best, locking onto the relevant signals and focusing on them rather than being blown to pieces.
In the autumn of 1969, Amos and Daniel were both back at Hebrew University. Except when they slept, they were basically inseparable. You could find Daniel before lunch, since he rose early; Amos was a night owl, reachable late at night. The rest of the time, they disappeared into a seminar room. People sometimes heard loud arguing, but more often laughter. No one was invited to join them, so people assumed whatever they were discussing was interesting and private. Listening at the door, you could hear them switching between English and Hebrew.
The strangest part was that the two sharpest minds at Hebrew University, who had always kept their distance, now had so much in common and had become close friends. One student said it was hard to imagine what had happened between them. Daniel had survived the Holocaust as a child; Amos was, in the words of the old saying, a native-born Israeli. Daniel always assumed he was wrong; Amos always assumed he was right. Daniel avoided parties; Amos was the center of them. Daniel was formal; Amos was relaxed. It was easy to feel you knew Amos, while with Daniel you felt you had to get acquainted all over again, even if you'd chatted with him the day before. Amos was tone-deaf but hummed Hebrew folk songs with gusto; Daniel had a fine voice and never sang. Amos dismissed illogical claims outright; Daniel would ask, "Could this be true in some circumstances?" Daniel was a pessimist; Amos was an optimist who regarded pessimism as stupid. They were opposites. Daniel tried to make other people happy; Amos couldn't understand why anyone would bother.
On top of that, Amos had the most intimidating mind people had ever encountered. One friend said people were afraid to discuss problems with him for fear he'd expose flaws they hadn't noticed. The psychologists even decided that when he visited their research building, his grad students should drive him, so sharp was his mind. One former student, Luma, once drove him and was a wreck with nerves, since he always criticized her so much. And now this Amos was spending his free time with Daniel, who was sensitive.
They had things in common, too: both were descendants of Eastern European rabbis; both were fascinated by how people behave in "normal," unemotional states; both wanted to find simple, forceful truths.
The starkest contrast between them was their offices. Daniel's, his assistant said, was a total mess, scraps of paper everywhere, books left open so he could find his place when he came back to them. A few rooms away, Amos's office was empty except for a pen on the desk. You couldn't find anything in Daniel's office because it was too messy; you couldn't find anything in Amos's office because there was nothing there. One colleague remarked that Amos was making a concession for Daniel: he would never have tolerated that sort of disorder from anyone else.
Daniel and Amos said little about what they did when they were alone, which only deepened the intrigue. What they discussed was Daniel's proposition: that humans are not Bayesians, not conservative statisticians, not statisticians of any kind. They decided to design a test, grounded in real data and administered to scientists themselves, to check the idea. It would take the form of a questionnaire, with questions designed by Daniel that were often upgraded versions of the "red or white chips" game.
In the summer of 1969, Amos took Daniel's questions to the annual American psychology conference and then to a mathematical psychology conference, administered the test, collected the results, and flew back to Jerusalem with them.
In Jerusalem, he and Daniel wrote up their first paper together. The office was too small, so they worked in a small seminar room. They scrutinized every sentence and managed only a paragraph or two a day. Daniel recalled the feeling that this was no ordinary experience: it was so much fun.
Looking back, Daniel remembers the laughter most clearly, the laughter people heard from outside the door. When it followed one of Amos's jokes, it was louder, because Amos would go on laughing at his own joke. With Amos, Daniel became funny, something he had never been before. And under Daniel's influence, Amos was a changed man: less critical, making fewer jokes at others' expense. He gave Daniel confidence, and for the first time Daniel felt like the attacker. They worked together so seamlessly that neither could claim first authorship, so they tossed a coin to decide whose name would go first. Amos won.
Their article presented a systematic account of a common thinking bias, one displayed even by well-trained statisticians: people wrongly assume the part is representative of the whole. Even statisticians leapt to conclusions from small amounts of data. Daniel and Amos traced this to a mistaken belief that any sample will reflect the character of the population it's drawn from.
The article caused a storm. The psychologists they tested were given a question: suppose you're advising a student on how to test a psychological theory, say, that people with long noses are more likely to lie. The student tests the theory on sample A and confirms it, then tests it on sample B and fails to confirm it. What should the student do? Daniel and Amos offered multiple-choice answers. Three of the choices recommended either increasing the sample size or further refining the theory. Yet the psychologists almost unanimously chose the fourth: "He should try to analyze the differences between the two groups of samples."
Trusting small samples as they did, the respondents believed any difference between them, however large, must have a cause. They rarely attributed an unexpected result to sampling variation, so they failed to see that sampling variation was exactly what was at play.
When the article was finally finished, neither could remember who had written which parts. But Daniel felt its confident style came from Amos. Writing alone, he said, he would have been self-deprecating, casting himself as a fool who had only just seen the light. He could have completed the article on his own, but it probably wouldn't have attracted so much attention. It had a kind of star quality, thanks to Amos.
On his own, Daniel would never have struck a humorous, rebellious, provocative note; that, he believed, was precisely Amos's contribution. They sent the article to someone they regarded as a skeptic, a psychology professor at the University of Michigan. As one economist put it, if there's evidence that something is right, people will find a way to make it right.
They found that people were so used to judging by instinct that they fell into the trap even when they knew it was there. Those instincts had been shaped by a long history of misperceiving the world, and the misperception was entrenched in people's thinking. If human thinking goes wrong when the world is uncertain, what else goes wrong?