This chapter is about the "isolation effect" and how it shapes our decisions when risk is involved. It began when Amos and Daniel were brainstorming; it was Daniel, apparently, who first suggested that people are acutely sensitive to changes when choosing among gambles, but it was Amos who made the idea click.
They were working on an experiment when Amos asked, "What if we flipped the whole thing around?" Until then, all their gambles had been about winning: "Do you want a sure $500, or a 50% chance to win a thousand?" Amos wondered what people would choose if the question were about losing money instead.
Here is the question they posed: "Pick one. A: a lottery ticket with a 50% chance of losing a thousand dollars, or B: a guaranteed loss of five hundred dollars."
And everything changed. Suddenly people were making completely different choices than they had when thinking about gains. Daniel called it a huge "aha!" moment: "Why didn't we think of this before?"
It turns out that when facing a potential gain, people usually play it safe, but when facing a potential loss, they are more likely to gamble, as if trying to escape the loss. And it is roughly as hard to get someone to accept a sure loss as it is to get them to give up a sure gain for a chance at more. If you want someone to give up a sure thing for a 50/50 shot at winning a thousand dollars, you have to drop the sure thing to around $370. The same logic seemed to apply to losses: to get someone to accept a sure loss rather than a gamble with a 50% chance of losing a thousand dollars and a 50% chance of losing nothing, you would expect to have to shrink the sure loss to around $370 as well.
In fact, they found you have to make the sure loss even smaller than that before people will accept it. People hate losing more than they like winning; the fear of loss is more powerful than the desire for gain.
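To put the chapter's figures into a single comparison: the 50/50 shot at winning a thousand dollars has an expected value of 0.5 × $1,000 = $500, yet people will trade it for a sure $370, giving up $130 of expected value for certainty. On the loss side, the mirror-image gamble has an expected loss of $500, yet people will take a sure loss only if it is smaller even than $370. That extra discount on the loss side is what it means to say the fear of losing outweighs the pleasure of winning.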
This is even clearer when a gamble mixes gains and losses, which is most of life. To get someone to flip a coin where losing costs them a hundred dollars, the prize for winning has to be much bigger than a hundred, maybe two hundred; and the gap grows as the stakes rise, say to ten thousand dollars. As Amos and Daniel wrote, people are more sensitive to losses than to gains, and not only with money. It is a basic human trait: people seek pleasure and avoid pain, and for most people, avoiding the pain of losing something matters more than the pleasure of getting something they really want.
And it makes sense: being keenly alert to pain helps you survive. A species that felt pleasure but not pain probably wouldn't last very long.
But as they dug into this, they realized something important: their "regret theory" had to go. It explained why people do seemingly irrational things like taking a sure win instead of gambling for a bigger one (they fear the regret of losing the gamble), but it could not explain why people facing losses become risk-takers. What makes them gamble rather than simply accept a smaller loss?
Remarkably, Daniel and Amos didn't mourn the loss of their pet theory. They simply dropped it, even though parts of it were still useful. One day they were talking about regret as if it explained everything; the next, they were off exploring something new.
What they did next was work out exactly when and how people respond to different gambles over gains and losses. Amos called important discoveries "raisins," and this new theory had three of them.
First, people respond to changes from a reference point, not to absolute levels. Second, people feel differently about gains than about losses. And third, and this one is big, people do not respond to probabilities in a straightforward way. Amos and Daniel already knew that people would pay a premium for certainty. Now they found that people's reactions depend on how uncertain something is: offered a 90% chance of winning versus a 10% chance, they don't treat the first as nine times better; they adjust the numbers in their heads and act as if the 90% chance isn't quite that good and the 10% chance isn't quite that bad. They respond to probabilities with feelings, not just logic.
Those feelings grow stronger when the probabilities are very small. Tell people there is a one-in-a-billion chance of winning or losing something, and they don't react as if it were one in a billion; they react as if it were much larger, say one in ten thousand. They worry too much about the tiny risk of losing and get far too excited about the tiny chance of winning. That is why people buy lottery tickets and insurance. "If you take all of these possibilities into account, it's easy to worry too much," Daniel said. "If your daughter is late coming home, you start worrying, even though you know there's probably nothing to worry about; your mind conjures dangerous scenarios." The only way to get rid of the worry is to stop thinking about it.
People treat small probabilities as if they were larger than they really are. So a theory that predicts how people respond to uncertainty has to capture how much weight those probabilities carry in their emotional world. Once you have that, you can understand why people buy lottery tickets and insurance, and you can even explain something called the "Allais paradox."
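As a rough worked illustration of what that "weight" means (the prize here is an invented figure, not from their studies): a one-in-a-billion chance at a $1 million prize is worth $1,000,000 ÷ 1,000,000,000, or a tenth of a cent, in expected value. If the mind treats that chance as if it were one in ten thousand, the same ticket feels closer to $1,000,000 ÷ 10,000 = $100. Run the same arithmetic on a tiny chance of a large loss and you have the intuition behind insurance: the premium feels cheap next to the inflated weight given to the catastrophe.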
The Allais paradox is a puzzle that Daniel and Amos used to show how their theory explained people's odd reactions to probabilities. They had already "solved" it once with regret theory, but the new theory did it better. Here is a simplified version of the paradox:
"Pick one: A: A 100% chance to win thirty thousand dollars, or B: a 50% chance to win seventy thousand dollars, and a 50% chance to win nothing."
Most people picked A. They like the sure thing; they are avoiding risk. They would rather have the certain win than the gamble with the higher expected value (0.5 × $70,000 = $35,000). That by itself doesn't break utility theory; it just means option B is less attractive to most people. But then...
"Pick one: A: A 4% chance to win thirty thousand dollars, and a 96% chance to win nothing, or B: a 2% chance to win seventy thousand dollars, and a 98% chance to win nothing."
Most people picked B, going for the bigger prize despite the longer odds: a 2% chance at seventy thousand dollars appealed to them more than a 4% chance at thirty thousand. This, again, shows how heavily people weight the certainty of the sure thing in the first question, and how little they register the difference between 2% and 4%.
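To see why this pair of answers is a paradox, here is a quick worked check (added for illustration; the arithmetic is not from the original paper): the second pair of gambles is just the first pair with every probability multiplied by 0.04, so expected utility theory says that whichever option you prefer in the first question, you should prefer the matching option in the second. The expected values keep the same ratio, too: $30,000 versus $35,000 in the first question, and 0.04 × $30,000 = $1,200 versus 0.02 × $70,000 = $1,400 in the second. Yet real people flip from A to B.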
So the new theory explained everything that expected utility theory couldn't, and it revealed something new: it is just as easy to make people chase risk as it is to make them avoid it. Just show them options framed as losses. Ever since Bernoulli, centuries earlier, risk-seeking behavior had been treated as an aberration. If it is part of human nature, why had no one noticed it?
Daniel and Amos figured it was because the people studying decision-making had been looking in the wrong place. Most of them were economists, focused on how people respond to financial gains. "Most decisions in economics, except for insurance, mainly deal with good prospects; it's a matter of ecology," Amos and Daniel wrote. Economists studied choices with potential upsides, like saving and investing, and with gains people do avoid risk; they want the sure thing. Daniel and Amos thought that if economists looked beyond money and studied politics, war, or relationships, fields full of hard choices among bad options, they might reach very different conclusions. As they wrote, "If decisions in areas like personal, political, or strategic issues could be measured as clearly as economic gains or losses, the study of human decision-making might have changed dramatically."
For the first part of 1975, Daniel and Amos spent their time perfecting the theory and drafting a paper to present it. At first they called it "Value Theory"; later they changed the name to "Prospect Theory." For psychologists challenging a theory created and defended by economists, they were strikingly confident, almost arrogant. They wrote that the existing theory didn't really explain how people make risky decisions; it only described "how people judge risk when facing economic gains." You could feel their boldness. Daniel even wrote to a friend that they were building what they believed was a complete and new system for explaining choice under uncertainty.
The theory made its debut at an economics conference held, of all places, at a farm outside Jerusalem. Amos presented it, since decision-making was his territory. Among the attendees were at least three future Nobel laureates in economics.
One of them, Kenneth Arrow, asked a really important question after Amos finished: What exactly is a loss?
The theory, obviously, turned on the difference between how people respond to potential losses and potential gains. A loss occurs when a decision leaves you worse off than your "reference point." But what is that reference point? The simple answer is your status quo, your current situation: you experience a loss when you end up worse off than you are now. But how do you define someone's current situation? As Arrow later said, "Losses are obvious in experiments, but in real life, they are subtle."
Think about Wall Street at the end of the year. A trader who expects a million-dollar bonus and gets half a million will feel, and act, as if he has suffered a loss. His reference point is his expectation, and expectations shift. If he expects a million and then hears that someone else got two million, his reference point moves; even if he gets his million, it will feel like a loss. Daniel used the same idea to explain the behavior of chimpanzees in the lab: "If the chimp next door does well and gets the same cucumber it's getting, everything is fine. If the chimp next door gets a banana and the chimp that did well gets a cucumber, it will throw the cucumber in the experimenter's face." From that moment on, the banana is the new reference point.
The reference point is a state of mind. Even in the simplest gambles, you can shift someone's reference point so that a gain looks like a loss, or a loss like a gain. Just by changing the description, you change the choice. They gave the economists this example:
Situation 1: "You get a thousand dollars on top of your existing situation. Choose one: A: a 50% chance to gain another thousand, or B: a 100% chance of gaining five hundred dollars." Most people pick B, the sure thing.
Situation 2: "You get two thousand dollars on top of your existing situation. Choose one: A: a 50% chance of losing a thousand, or B: a 100% chance of losing five hundred dollars." Most people pick A, the gamble.
The two situations are identical. If you gamble, you end up with either two thousand dollars or one thousand, with equal odds; if you take the sure thing, you end up with fifteen hundred. Yet when the sure thing is described as a loss, people gamble, and when it is described as a gain, they take the sure thing. The reference point that separates gains from losses isn't fixed; it's psychological. "Whether it's a gain or a loss depends on how the problem is presented, on the context in which it occurs," Daniel and Amos wrote in the first draft of "Prospect Theory."
What they were really saying was that people don't evaluate risky choices in context; they evaluate them in isolation. And in exploring this "isolation effect," as they called it, Amos and Daniel stumbled onto something else with huge real-world implications. They called it "framing." Simply by changing how the facts are described, making gains look like losses, you can change how people feel about risk, turning risk-averse people into risk-seekers. "We didn't know we were inventing framing," Daniel said. "You pick two things that are the same, that differ in irrelevant respects, and by proving they are irrelevant you prove that expected utility theory is wrong." To Daniel, framing was of a piece with the judgment problems they had studied before: just another trick our minds play on us.
Framing itself isn't a theory. But Amos and Daniel ended up spending a lot of time finding real-world examples of it, proving how it affects our decisions. The most famous example is the "Asian Disease Problem."
The "Asian Disease Problem" is actually two different questions. They gave one question to one group of people, and another question to another group. Neither group knew anything about framing.
The first group was given this: "Imagine the US is preparing for a large outbreak of an Asian disease, estimated to kill six hundred people. Two programs have been proposed to combat the disease, with different consequences:
If Program A is adopted, two hundred people will be saved.
If Program B is adopted, there is a one-third probability that six hundred people will be saved, and a two-thirds probability that no people will be saved.
Which program would you favor?"
The majority of people chose Program A, the sure thing of saving two hundred lives.
The second group was given the same initial information, but the options were different:
"If Program C is adopted, four hundred people will die.
If Program D is adopted, there is a one-third probability that nobody will die, and a two-thirds probability that six hundred people will die."
When the options were framed that way, the majority chose Program D. Yet the two problems are identical: saving two hundred of the six hundred (Program A) is the same as letting four hundred die (Program C), and Program B is the same gamble as Program D. When the options are presented as gains, people take the sure gain of saving two hundred lives; when they are presented as losses, people make the opposite choice and accept the risk, with Program D, that everyone might die.
People aren't choosing between things; they are choosing between descriptions of things. Economists, and anyone else who believes humans are rational, now had to find a way to explain this. But how could they? Economists assumed you could tell what people wanted by watching what they chose. But what if what people want changes with how the options are presented? "It's interesting to say this, because in psychology this is really a basic idea," psychologist Niss Bittner later said. "Of course we're going to be influenced by how things are presented!"
After the meeting in Jerusalem, the economists went back to America, and Amos sent a letter to his friend Paul Slovic. "We got a favorable response on all the issues we considered," he wrote. "Somehow the economists feel that we are right, but at the same time they wish we were wrong because the trouble would be enormous if our conceptions were to supplant utility theory."
At least one economist didn't feel that way, though when he first heard about Daniel and Amos's theories he fit no one's idea of a future Nobel laureate. His name was Richard Thaler. In 1975 he was an unpromising thirty-year-old assistant professor at the University of Rochester's school of management. It was something of a miracle that he had the job at all, because two very obvious qualities made him an outsider in economics, and in academia generally. The first was that he was easily bored and endlessly inventive about escaping boredom. As a kid he loved changing the rules of games. He found ordinary Monopoly, in which people buy properties and land on them at random, dull; after a few games he declared it a stupid game and refused to play unless the properties were scrambled around the board each time. He did the same with Scrabble. Stuck with five E's and no good consonants, he changed the rules, sorting letters into three categories: vowels, common consonants, and rare hard consonants, with each player getting the same number of each; after seven rounds, everyone had drawn a hard consonant. His rule changes cut waiting time, reduced the role of luck, made the games more challenging, and usually made them more competitive.
Thaler's other defining quality, which sat oddly beside the first, was a kind of obtuseness. From about the age of ten he was a mediocre student, a B student. His father, an insurance company manager obsessed with detail, worried about his son's sloppy schoolwork and made him copy out pages of Tom Sawyer, as if Mark Twain had just written them. Thaler did as he was told, and every time his father found errors: a missing word, a dropped punctuation mark. The quotation marks in a dialogue between Tom and Aunt Polly defeated him. Looking back, he realized the problem wasn't just a lack of effort; he may have had mild dyslexia. But at the time, everyone assumed he was careless, or lazy, or both.
And so he began to think of himself that way, too. Economics might not seem the obvious field for someone easily bored and clumsy with details. Thaler had concluded from his father's life that a career in business would numb his brain and bore him, and he knew he wasn't good at serving other people. Not knowing what else to do, he went straight to graduate school and chose economics because "it seemed practical." Only then did he realize that the field demanded extraordinary precision and mathematical ability, and that the small number of people who could publish in economics journals were mathematical whizzes. When he arrived at the University of Rochester's graduate school of management, Thaler was a long way from his classmates and from his field. "I was more interesting than them, but I wasn't as good at math," he said. "If you had asked me what my strength was, it was that I found things interesting."
His dissertation tried to explain why the death rate of black newborns was twice that of white newborns. After controlling for all the obvious variables (the parents' education and income, whether the baby was born in a hospital), he could explain only half the difference; the other half remained a mystery. "I tried, but I couldn't explain it," he said. "If I had had more confidence, maybe I would have made the dissertation more interesting." After graduating he applied to many universities, got no job offers, and ended up at a consulting firm.
But just as he was starting this new chapter, the firm went out of business and he was sent home. Nearly thirty, with little to show for it and a wife and two children to support, Thaler went to the dean of Rochester's management school for help. The dean gave him a temporary one-year job teaching cost-benefit analysis to business students. Back on campus, he started writing another paper, on a question he found interesting: What is the value of a human life? He found a shortcut to an answer: he compared the incomes of people in dangerous jobs (coal mining, logging, washing the windows of high-rises) with the risk of dying in those jobs. From that data he derived a figure for how much Americans had to be paid to take on risk. If you know how much someone must be paid to accept a one percent increase in the chance of dying on the job, you can, in theory, extrapolate to how much someone would have to be paid to accept certain death (the answer he came up with, in 2016 dollars, was $1.4 million). He later decided the method was silly, but experienced and successful economists approved of the finding; they argued that miners, for example, could estimate their own risks and set their wage demands accordingly.
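As a rough sketch of that extrapolation (the annual figure here is back-calculated purely for illustration, not Thaler's actual data): if workers demand roughly an extra $14,000 a year to accept a job that adds one percentage point to their chance of dying, then scaling that one percent up to certainty implies 100 × $14,000 = $1.4 million, which is the kind of arithmetic behind the number he reported in 2016 dollars.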
That paper got Thaler a full-time, though untenured, job at Rochester's management school. But in the middle of calculating the value of a human life, he began to have doubts about economic theory. He gave people a questionnaire with a hypothetical question: if you were exposed to a virus at work and knew there was a one-in-a-thousand chance you would contract a fatal disease, how much would you be willing to pay for the cure? Because he was an economist, he knew there was more than one way to ask the question, so he also asked: how much would you need to be paid to take a job that exposed you to a one-in-a-thousand chance of contracting a fatal disease? According to economic theory the two answers should be the same: what you would pay to remove a one-in-a-thousand chance of dying and what you would demand to take on that same chance should both equal the value you place on that 1/1,000 risk of death. The people in his survey didn't see it that way. "They reacted completely differently to the two questions," Thaler said. "They were willing to pay ten thousand dollars for the cure, but they thought they should be compensated a million dollars for taking the job."
Thaler thought it was interesting. He mentioned his discovery to his dissertation advisor. "Stop wasting your time with questionnaires and focus on real economic research," his advisor said.
Thaler didn't listen. Instead, he made a list of all the things economists said people would never do (because people are rational animals, they always said) but that people actually did. At the top of the list was the example he had just come up with.
Thaler may not have had much confidence in himself, but he quickly realized that other people shouldn't have so much confidence in themselves, either. He noticed that his fellow economists ate too many cashews before dinner and then had little appetite for the meal; more to the point, when he took the cashews away, they seemed grateful that they could finally enjoy the meal. "That means that less choice can make you better off, which is heresy in economics," he said. Someone gave him two tickets to a basketball game in Buffalo. He and a friend decided that driving all that way in the snow wasn't worth it, but his friend said, "If we had paid for the tickets, we would definitely go." To an economist the tickets were a "sunk cost": forcing yourself to sit through a game you don't want to see just because you paid for the ticket is throwing good time after money that is already gone. "I said, 'Don't you know about sunk costs?'" Thaler recalled. The friend, a computer scientist, didn't. After Thaler explained the concept, his friend stared at him and said, "Oh, what a crock."
Thaler's list grew quickly. Many of its items he would later group under what he called "the endowment effect," a psychological idea with huge implications for economics: merely owning something leads people to assign it extra value, so they are reluctant to give it up even when doing so would leave them better off. Early on, though, he wasn't thinking about categories. "I was just collecting a bunch of silly things people do," he said. Why were people reluctant to sell a vacation home they didn't want, that they had acquired by chance and would never have chosen to buy in the first place? Why were NFL teams reluctant to trade their draft picks, even when they knew a trade would bring them more value? Why were investors reluctant to sell stocks that had fallen, even while admitting they would never buy the same stock at its current market price? The examples of things economic theory couldn't explain were endless. "The endowment effect is everywhere," Thaler said. By now his attitude toward economics was a little like his attitude toward Monopoly: it was boring, and it didn't have to be. Economics was supposed to explore a side of human nature, but it had stopped paying attention to human nature. "Thinking about these questions was much more interesting than studying economics," he said.
When he shared his findings with his fellow economists, they didn't react well. "They always said, 'People make mistakes, we know that, but these mistakes are random, they will be washed out by the market,'" Thaler recalled. He didn't believe it. His list, and his enthusiasm for it, won him no friends at the University of Rochester's business school or its economics department. "He made enemies, and he wasn't good at making friends out of enemies," said Tom Russell, a professor of economics at Rochester. "If you talked about your academic views in front of him, he would say, 'That idea is stupid.' It's okay for senior academics, who can answer, 'How is it stupid?', but junior people get upset."
The University of Rochester didn't give him tenure, and his future was still unsettled when he attended a 1976 conference on how to price a life. One of the attendees heard his new ideas and told him to read an article Daniel and Amos had published in Science, which also tried to expose irrational human behavior. Back home, Thaler found the article, "Judgment Under Uncertainty," in an old copy of Science and was thrilled by it. He hunted down everything else Daniel and Amos had published. "I remember reading them one after another; it was like striking gold," Thaler said. "I wondered why I was so excited, and I realized they were articulating something fundamental: systematic bias." If people's mistakes are systematic, you can't just ignore them; you can't offset irrational behavior in some situations with rational behavior the rest of the time. People can be systematically wrong, and so can the market.
Thaler had someone send him a copy of the "Value Theory" paper, and he saw the point immediately: it was like a truck loaded with psychology crashing into the economics building. The logic was irresistible. Using what would become known as "prospect theory," the authors explained irrational human behavior in a language economists could understand. Many of the items on Thaler's list were covered, with a few exceptions (the problem of self-control, for example), but that was fine. The paper was like a wind that opened a crack in the wall of economic theory and let psychology in. "That really was the beauty of the paper: it put the psychology into mathematics," Thaler said. "Economists could point to it as a reason to exist. And the explanation of human nature was profound."
Until then, Thaler hadn't had much confidence in what he was doing in economics; it felt like copying out Tom Sawyer all over again. "I might not have stayed in the field if it hadn't been for them," he said. After reading everything the two Israeli psychologists had written, he felt different. "I realized that maybe my purpose in life was to think about things. Now I was doing it." He decided to turn his list into an article, and just as he was about to start writing, he came across the mailing address of the psychology department at the Hebrew University and wrote a letter to Amos Tversky.
Economists usually wrote to Amos, whom they found more familiar: his rigorous logic was like theirs, or better, and they could see he was a genius. To most economists, Daniel's thinking was a maze. As Richard Zeckhauser, a professor of economics at Harvard and later a friend of Amos, put it: "The impression I got of their collaboration was that a lot of things came through Daniel. 'You know, Amos, I bought a car and offered thirty-eight thousand and the salesman said thirty-eight-nine and I agreed! Isn't that great?' And Amos would say, 'We can write about that.'" To other economists, Amos was like an anthropologist studying a tribe less rational than himself, and Daniel was a member of that tribe. Once, an American economist complained that "Value Theory" shouldn't portray human nature that way. Amos wrote back, "Like you, I believe such behavior is at times unwise or even wrong, but that does not mean it does not exist. We should not deny the value of visual theory because visual illusions exist. Similarly, we should not deny the correctness of value theory because it reflects the irrationality of choice."
For his part, Daniel said he didn't realize until 1976 that their theory might matter to a field he knew nothing about. He grasped it when Amos showed him a paper by an economist that began: "The agent of economic theory is rational and selfish, and his tastes do not change." At the Hebrew University, the economics department was in the building next to theirs, but Daniel had never paid attention to the economists' pronouncements about human nature. "I just didn't believe they really thought that way, that this was their worldview," he said. To economists, it seemed illogical that people would tip a waiter at a restaurant they were never going to visit again. A worldview like that naturally assumed that the only way to change people's behavior was with financial incentives. Daniel found that ridiculous. To him, proving that humans were irrational was about as pointless as proving they didn't have fur; obviously, humans weren't rational in any meaningful sense.
Daniel and Amos didn't want to get into an argument about whether humans are rational; that would only distract from what they were revealing. They wanted to reveal human nature and let everyone else look in the mirror. Their next job was to keep editing and improving "Value Theory" and then get it published. They both worried that someone would immediately find an obvious flaw in the theory, the way the Allais paradox had exposed a flaw in expected utility. For three years they set almost everything else aside and hunted for internal contradictions. "For those three years, we didn't talk about anything we were interested in," Daniel said. Daniel's interest was the psychology; Amos was passionate about building a structure on top of it. Amos probably saw it more clearly than Daniel: the only way to get the world to accept their view of human nature was to embed it in a theory, one that explained and predicted behavior better than the existing theories, and that was expressed in mathematical logic. "There's a difference between something mattering and something being feasible," Daniel said years later. "Science is a conversation, and you have to fight for the right to be heard. And the rules for being heard are that you validate yourself with formal theory." When they finally sent the edited paper to Econometrica, an economics journal, the editor's response puzzled Daniel. "I sort of hoped he would say 'loss aversion is a great idea,' but he said 'No, I just like the math.' And I was devastated a little bit."
By 1976, purely for promotional reasons, they had changed the paper's name to "Prospect Theory." "Mostly we wanted the theory to sound unique, so you wouldn't associate it with anything else," Daniel said. "If you said 'prospect theory,' nobody knew what it was. We thought: Who would know? That would make it stick out. And if it became popular, we didn't want people to confuse it with other theories."
In the middle of all this, the work was slowed considerably by Daniel's personal life. In 1974 he had moved out of his house, separating from his wife and children; a year later the marriage ended. Daniel then flew to London and formally declared his love to the psychologist Anne Treisman, who happily accepted it. By the fall of 1975, Amos was sick of dealing with the fallout. "He is giving these matters more time, emotion, and effort than they deserve," he wrote to his friend Paul Slovic.
In October 1975, Daniel flew to London again, and after meeting Anne in Cambridge they went to Paris for a vacation. He was happy in love during those days, but worried that the new relationship would damage his friendship with Amos. When he got to Paris, a letter from Amos was already waiting. But when he opened the envelope, he found only an edited draft of "Prospect Theory," with no note, which he took as a subtle signal. So, in the most romantic city in the world, Daniel sat beside his lover and started writing Amos something very like a love letter. "Dear Amos," he began, "I got to Paris and got your letter, but it contained only your ms. I told myself that Amos must be very dissatisfied with me, and not without reason. After dinner I looked for an old envelope to put this reply in, and found your letter and then the note inside. I skimmed the end and got a great thrill when I saw 'Ever yours, Amos.'" He went on to say that he had explained to Anne that he could never have accomplished all this on his own, and that the paper they were writing marked a new stage in their partnership. "To me this is the high point of our friendship and also a high point of my life," he wrote, adding: "I gave a talk in Cambridge yesterday on our value theory. I was a little embarrassed by the enthusiasm. At the end, I summarized some early work on the isolation effect, and that was what the audience responded to especially warmly. All in all, they made me feel as if I were an important person. They were all trying so hard to impress me that I came to the conclusion that I no longer need to make the effort to impress them; these days are gone."
On their way to the top, they maintained, in their odd way, a private collaboration: just the two of them, an adventure for them and no one else. "We benefited from being isolated in Israel; we didn't care what people thought of us," Daniel said. That isolation required them to be together, in the same room, with the door closed and no one to bother them.
And now the door was being pulled open. Anne was British, not Jewish, and she had four children, one of whom had Down syndrome. She had a dozen very good reasons why she couldn't or wouldn't move to Israel; and if she wasn't moving to Israel, Daniel might be leaving it. After a brief discussion, Daniel and Amos came up with a temporary solution: in 1977 they both took sabbaticals from the Hebrew University and went to Stanford to be near Anne. But after a few months in America, Daniel announced that he was going to marry Anne and stay. That forced Amos to choose between reality and their friendship.
Now it was Amos's turn to sit down and write a heartfelt letter. Daniel's kind of upheaval was something Amos could not have matched even if he had wanted to. Amos had wanted to be a poet when he was young but had become a scientist; Daniel had been a poet and had stumbled into science. Daniel knew he wanted to live like Amos; and Amos, even if he was less sure of it, envied something in Daniel. He was a genius, but he knew he needed Daniel. He wrote to the president of the Hebrew University, Gideon Zapfsky, who was also a close friend. "Dear Gideon," he began, "The decision to remain in America is the most difficult decision that I have ever had to make. I must admit that I wanted to complete the work with Daniel, at least to some extent. I can't accept the end of our collaboration, which lasted for many years, and the stagnation of our research." Amos then told him that he intended to accept a visiting professorship at Stanford. He knew his decision would shock and anger everyone in Israel. A Hebrew University official had recently told him, "If Daniel leaves Israel, it's a personal tragedy. If you leave, it's a national tragedy."
Until Amos actually left, his friends didn't believe he could live anywhere but Israel; Amos and Israel were inseparable. Even his American wife was sad. Barbara had come to love the country, its intensity, its solidarity, its attitude toward life's trivia. She now felt more Israeli than American. "I had worked as hard as I could to become an Israeli; I didn't want to go back to the United States," she said. "I said to Amos, 'How am I going to do this again?' He told me, 'You can do it.'"