This chapter is called "Heraclitus Rules," after the ancient Greek philosopher who taught that everything is in constant flux.

It's about how much we can actually predict in a world that is constantly changing. Humans are, at bottom, prediction machines. That's how we survive: deciding whether to seek food, to fight, or to flee. We are always making guesses about what will happen next, even when we don't use numbers or formal logic. Every experience becomes a data point in our brains, and when something unexpected happens, our brains adjust. That's how we navigate the world. But how do we cope with a world in which something as tiny as a single grain of sand can trigger a huge disaster?

For a long time, people simply accepted that some things were beyond their control. They put their faith in gods who knew everything: if you were good, the gods would help you; if you were bad, they would punish you. It wasn't our job to figure out the future; that was up to the divine. Uncertainty wasn't a feature of the *world*; it was a flaw in humans, a product of our ignorance.

The best humans could do was try to catch a glimpse of what the gods already knew. In ancient China, the I Ching, with its cast yarrow stalks, offered a way to tap into some deeper truth. But for most of history, trying to measure or quantify uncertainty was seen as arrogant, as if you were trying to fit God into a math equation. Perhaps surprisingly, then, there were few attempts to actually measure uncertainty and risk.

Maybe that's why the ancient Greeks, so brilliant in nearly every other domain, never developed probability theory. They loved games of chance, played with anklebones called astragali, and they clearly thought about odds, but they never devised a systematic way to reason about them. Other cultures had similar games, yet the mathematics never caught up.

Then someone used a Latin word that would become our modern word "risk." It first appeared in a contract in Italy in 1156, used to divide the profits from risky shipping voyages across the Mediterranean, which could be hugely profitable or end in total disaster. But quantifying risk still required mathematicians, and even they got it wrong at first. They followed an ancient philosopher's dictum that future probabilities could be derived by working out "what happened for the most part" in everyday life. Assuming that the past can be relied upon turns out to be a big mistake when navigating a changing world.

Probability theory itself took a while, held back in part by the number system. Roman numerals were clunky to calculate with, and Arabic numerals, the ones we use now, didn't catch on right away because Europeans worried they were too easy to fake on documents. Imagine how easily a 1 could be altered into a 4. The articulation of probability theory was plausibly delayed for centuries as a result.

The breakthroughs finally came, driven by games of chance. Blaise Pascal and Pierre de Fermat solved the "problem of points": how to split the pot fairly when a game is interrupted partway through. That set off a rapid advance in probability, with other mathematical titans pitching in.

As the math got better, people began trying to understand and measure *everything*, hoping to solve the mysteries of human society with numbers and equations. John Graunt's groundbreaking study of mortality in London launched the field of demography. Auguste Comte founded sociology and was obsessed with counting and quantifying everything. It was an age of new thought, bent on transforming uncertainty into certainty.

But then David Hume came along and warned that probability is not the same as certainty. We understand cause and effect, he argued, from experience, from what has happened in the past. Yet there is no guarantee that the future will resemble the past. Probabilities are useful, but the future can always turn out differently.

Today, probability theory is a vast, sophisticated enterprise. Millions of people use it to make forecasts and judgments, and we deploy algorithms and machine learning to quantify nearly everything.

We've come a long way from the knucklebones. But our faith in our ability to master uncertainty has gone too far. We think we can answer questions that we can't, and that overconfidence leads us to ignore chance and chaos because they don't fit the world we like to imagine.

Why? Because we've had so many successes that we assume science has solved most of the world's mysteries. Yet much remains uncertain or unknown, and the biggest mysteries are the most basic ones. We don't know what consciousness is; we haven't a clue how our brains give rise to it. Nor do we understand the fundamental laws of the universe: at the smallest scales, matter behaves in ways that seem impossible, such as being in two places at once.

Even top scientists believe things that sound like science fiction. Many have come to accept the many-worlds interpretation of quantum physics, which implies that infinite copies of you exist, along with infinite universes in which you never existed. It may sound like the pipe dream of a 1960s sci-fi writer who picked up his pen after taking too much LSD, but it's also one of the most straightforward mathematical interpretations of the firmly validated equations that govern quantum mechanics. Nobody really understands our world. We live in a world that will always seem uncertain to us.

Can we at least understand ourselves?

The Economist looked at fifteen years of economic forecasts and found that they essentially never predicted recessions correctly. They were only slightly better than simply guessing that every country would grow at the same rate every year. In physics, by contrast, a theory that is off by even a little gets thrown out.

We launched a spacecraft that landed on a comet hurtling through space, a feat in which every calculation had to be perfect. And it was. Yet we can't figure out whether Thailand's economy will grow or shrink. Even the best experts don't really understand how our social world works. It is too complex for us to master, riddled with feedback loops and tipping points.

The economist Frank Knight drew a famous distinction between uncertainty and risk. Risk is when you don't know what will happen but you do know the odds, as with the toss of a die. Uncertainty is when you don't know what will happen and you have no way of knowing the odds; you are completely in the dark. We treat uncertainty as if it were resolvable risk, he warned, which is why our forecasts fail. It is crucial to separate what can be known from what can't. To cope, many people turn to probabilities. But if you venture into an unknowable world with only your trusty probabilities to guide you, you are headed for trouble.

It is sometimes said that intelligent people will try to interpret every kind of future uncertainty in terms of some probability. That is a mistake. Probabilities are useful for risk, but in the face of unresolvable uncertainty, you are better off saying, "I don't know."

Sometimes, we have to choose even when we're uncertain. The world of questions can be split into two categories: those that must be answered and those that need not be. We might call these the “take your best shot” questions versus the “don’t bother trying” questions. For example, if you have a rare disease, doctors must decide how to treat it. Take your best shot. But there is no law that forces us to predict that economic growth in Burundi will be exactly 3.3 percent in five years.

But if probabilities aren't helpful in situations of genuine uncertainty, why do we misuse probabilistic reasoning so often? The problems begin because we use that single word—probability—to mean countless different things. That confusion is compounded because once someone provides a specific number such as a “63.8 percent chance” to describe the likelihood of a future event, it’s as though the quantification has transformed the person into a modern oracle, commanding knowledge that has magically become more legitimate or true because it has been produced by math (even if that math is based on severely flawed assumptions). But is that the right way to look at it?

There's the frequency type of probability, based on how often something happens over the long run. Then there's the belief type, which reflects how confident you are in a claim. People don't always tell you which type they're using, and that breeds confusion.
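To make the distinction concrete, here is a minimal Python sketch (my illustration, not the chapter's): the frequency type is estimated by counting outcomes over many repetitions, while the belief type is a degree of confidence revised as evidence arrives, modeled here with a simple Beta-Binomial update (an illustrative choice).

```python
import random

# Frequency-type probability: the long-run share of outcomes.
# Estimate P(heads) for a fair coin by counting over many flips.
flips = [random.random() < 0.5 for _ in range(100_000)]
freq_estimate = sum(flips) / len(flips)
print(f"Frequency estimate of P(heads): {freq_estimate:.3f}")  # ~0.500

# Belief-type probability: confidence in a claim, revised as evidence
# arrives. Here the claim concerns an unknown coin's heads rate,
# represented by a Beta(a, b) distribution.
a, b = 1.0, 1.0  # uniform prior: no initial opinion either way
for came_up_heads in [True, True, False, True, True, True]:  # observed flips
    if came_up_heads:
        a += 1  # each heads nudges belief toward a heads-heavy coin
    else:
        b += 1
posterior_mean = a / (a + b)
print(f"Belief that the next flip is heads: {posterior_mean:.3f}")  # 0.750
```

Both numbers are "probabilities," but they answer different questions: one describes a long-run frequency, the other a state of confidence.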

Probability works great in simple systems, like rolling dice. But in the real world, things get messy. It's useful in "situations in which ‘the possible outcomes are well defined, the underlying processes that give rise to them change little over time, and there is a wealth of [relevant] historic information.’" The problem is that many of the problems we encounter don't meet these assumptions.

To see why, let's return to a problem concerning risk rather than uncertainty: coin flips. The underlying dynamics of cause and effect are stable across time and space. They are, to use the technical term, stationary. Furthermore, when we talk about the probability of a coin flip, we’re talking about the average distribution of outcomes, rather than trying to forecast whether a specific toss will be heads or tails. We’re also able to conduct coin tosses as many times as we’d like, so the phenomenon is repeatable. The coins themselves are also comparable or exchangeable—it doesn’t matter whether I use my coin or yours, so long as they’re both quarters or are part of a category of fair coins more generally. As a result of all these factors, the coin toss probability is convergent. The longer you do it, the closer you’ll get to 50 percent for each outcome.
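A quick simulation makes that convergence visible; this is a sketch of my own, not code from the chapter. As the flips accumulate, the running share of heads closes in on 50 percent:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

heads = 0
for n in range(1, 1_000_001):
    heads += random.random() < 0.5  # one fair coin flip
    if n in (10, 100, 10_000, 1_000_000):
        # Because the process is stationary, repeatable, and exchangeable,
        # the running average converges toward 0.5 as n grows.
        print(f"after {n:>9,} flips: share of heads = {heads / n:.4f}")
```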

Now, let's consider another example, in which we're trying to figure out whether ibuprofen helps alleviate headache symptoms. Unless the headaches are being caused by a new, unknown disease, it's safe to say that the mechanism by which ibuprofen may help alleviate headache symptoms isn't changing from day to day, so this is a stationary problem.

It is important, though, to make sure we're using the right categories. What if I use the word headache to refer to a migraine or a feeling of head pain produced by a brain tumor? Probability-based estimates rely on accurate categories, the notion that when I refer to a headache in different contexts, I'm comparing apples to apples rather than apples and oranges.
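As a toy illustration (the relief rates below are invented, and this sketch is mine, not the chapter's), consider what happens if the category "headache" silently lumps together tension headaches and tumor-caused head pain:

```python
import random

random.seed(7)

# Invented relief rates for two conditions that sloppy categories
# might both file under "headache."
RELIEF_RATE = {"tension_headache": 0.80, "tumor_head_pain": 0.05}

def trial(condition: str) -> bool:
    """Simulate one patient: did ibuprofen relieve the pain?"""
    return random.random() < RELIEF_RATE[condition]

tension = [trial("tension_headache") for _ in range(900)]
tumor = [trial("tumor_head_pain") for _ in range(100)]
pooled = tension + tumor

print(f"pooled 'headache' relief rate: {sum(pooled) / len(pooled):.2f}")
print(f"tension headaches only:        {sum(tension) / len(tension):.2f}")
print(f"tumor-caused head pain only:   {sum(tumor) / len(tumor):.2f}")
# The pooled figure (~0.72) describes neither patient in front of you.
# Apples-to-apples categories are a precondition for useful probabilities.
```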

Here, past patterns are a reliable guide to the future, so probabilities are a safe bet. This is the Land of Stationary Probabilities.

Now, let's move to thornier problems of uncertainty that arise from our complex, dynamic, contingent, intertwined world, prone to tipping points, feedback loops, and cascades caused by the tiniest changes.

When Obama's advisers were weighing whether Osama bin Laden was hiding in a compound in Abbottabad, they tried to give the president probabilistic estimates so he could make the right call. “There’s a seventy percent chance he’s there, Mr. President.” These were subjective, belief-based expressions of confidence in the available evidence, not what most people think of when they hear the word probability. The decision had to be made under unavoidable uncertainty.

Rather than being a case of stationary causality, in this instance, the underlying dynamics that would determine the outcome of a potential special forces raid in Pakistan were nonstationary. The outcome of the exact same raid might unfold radically differently if it had been tried on May 1 and not on May 2. The dynamics were variable and therefore unknowable. Barack Obama wasn’t interested in average outcomes across all past special forces raids. The raid wasn’t repeatable. It was also unique rather than comparable or exchangeable. The raid was contingent, not convergent.

Together, those factors made for irreducible, or radical, uncertainty. The past offered no reliable guide to the future.

This is what I call the Land of Heraclitean Uncertainty. Heraclitus was clearly right that change is constant. When uncertainty is produced because the world itself is changing, that’s Heraclitean uncertainty, and probabilities quickly become useless, as past patterns can become meaningless in an instant.

Imagine it's 1995 and you've been asked to predict how many hours per day the average British person will spend using his or her telephone by the year 2020. You could study past patterns until the cows come home and apply whatever form of Bayesian logic you liked, and it wouldn't have helped. Why? Because the relationship between humans and phones fundamentally changed with the arrival of the smartphone.
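A toy sketch shows why extrapolation fails here (all numbers invented for illustration; this is my example, not the chapter's). A trend fitted to landline-era usage says almost nothing about usage after the smartphone regime shift:

```python
# Hypothetical hours per day spent on the phone, landline era.
history = {1995: 0.30, 1998: 0.35, 2001: 0.40, 2004: 0.45}

# Fit a simple least-squares line to the pre-smartphone data.
xs, ys = list(history), list(history.values())
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)
intercept = y_bar - slope * x_bar

print(f"trend-based 2020 forecast: {slope * 2020 + intercept:.1f} hours/day")
# Prints ~0.7 hours/day. The underlying process then changed:
# smartphones made "using the phone" a different activity altogether,
# so actual usage (several hours of daily screen time) lies far outside
# anything the 1995-2004 pattern could have implied.
```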

Let's briefly return to weather forecasting. Here's the problem: weather patterns are contingent. As we know, initial conditions matter enormously, so weather patterns diverge more and more over time on the basis of the smallest imaginable changes. Because we need specific predictions for weather forecasting to be useful, and because infinitesimal shifts in initial conditions create wildly different results, all bets are off after about ten days. Chaos theory takes over.
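The standard classroom demonstration of that sensitivity is the logistic map, a one-line chaotic system; the sketch below is mine, not the chapter's. Two trajectories that start a billionth apart soon bear no resemblance to each other:

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1 - x)

a, b = 0.400000000, 0.400000001  # initial conditions differ by 1e-9
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step in (10, 25, 50):
        print(f"step {step:>2}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
# The tiny initial gap roughly doubles each step until it swamps the
# forecast entirely, the same dynamic that caps useful weather
# forecasts at around ten days.
```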

Layered on top of these forms of uncertainty are others that catch us by surprise: the famous “unknown unknowns.” We often don’t know what we don’t know. We can’t search for the right information because it doesn’t even occur to us that it might exist, and it’s impossible to calculate what you can’t anticipate. Unknown unknowns are therefore directly related to what we call Black Swans: rare, unexpected, and consequential events that can be neither anticipated nor quantified by equations. Black Swans are the inevitable outcome of complex adaptive systems.

Hubris is particularly dangerous today because our world is changing in ways that would have seemed alien to past generations. Worse, the world is now changing so quickly that past regularities are becoming less predictive of the future than ever before. The shelf life of probability is getting shorter. The future is becoming more uncertain and often impossible to predict. At the same time, we are making ever more precise predictions that often turn out to be wildly wrong. We put blind faith in probability at our peril.

If we take a step back, perhaps we don’t always need to worry so much about some forms of uncertainty. Uncertainty is too often treated as a dragon to be slain. But contemplate a world that is fully certain. Few would choose it: the unexpected joys and disappointments of life would become expected, etched into cold, fixed equations.

There can be an upside to uncertainty.

We cling to false certainty. Much of our world now runs on complex models that few of us understand. The problem is that these models have become so influential that we can forget they are models: deliberate simplifications that are, by design, inaccurate representations of the thing itself, just as a map simplifies a territory in order to help us navigate it. But the map is not the territory. Everything simple is false. Everything which is complex is unusable.

We run into trouble when we conflate map and territory, mistaking representation for reality. The key is to remind ourselves that our methods of making sense of everything around us don’t change what lies beneath them: a far more chaotic, contingent world.

But we must still make choices. So, how should we decide?

The most common answer to that question lies with decision theory. But decision theory’s assumptions apply best to a simple social world that doesn’t exist. Crucially, the standard decision-making model assumes that you can make a decision in isolation, without affecting the system you live within. A bank run shows why that assumption fails: it seems individually rational to withdraw your money from a risky financial institution, but in doing so you make it more likely that the whole system will collapse, which is even worse for you. Decision theory also pretends that your actions are isolated rather than intertwined with everything else, which just isn’t true, and it operates on short time scales. It is therefore a flawed, though sometimes useful, way of navigating the garden of forking paths before us.
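A toy model makes that interdependence concrete (payoffs invented for illustration; this sketch is mine, not a model from the chapter). Each depositor's choice looks rational in isolation, yet the payoffs depend on what everyone else does:

```python
# Toy bank-run payoffs, in arbitrary units. The bank fails if more
# than 40% of depositors withdraw (an assumed threshold).

def depositor_payoff(i_withdraw: bool, share_withdrawing: float) -> float:
    bank_fails = share_withdrawing > 0.4
    if i_withdraw:
        return 90.0  # you salvage most of your money either way
    return 20.0 if bank_fails else 100.0  # staying pays only if the bank survives

# Viewed in isolation, withdrawing dominates once you fear a run...
print(depositor_payoff(True, 0.5), depositor_payoff(False, 0.5))  # 90.0 20.0

# ...but when everyone reasons that way, withdrawals cross the threshold
# and the collective outcome is worse than if all had stayed put:
print("all stay:", depositor_payoff(False, 0.0))  # 100.0 each
print("all run: ", depositor_payoff(True, 1.0))   # 90.0 each, bank destroyed
```

The choice that is "rational" for each individual degrades the system for all, which is exactly the entanglement the standard model assumes away.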

The world works differently from how we imagine it. But so far, we’ve ignored a key question: Where do the flukes of life come from?

Next, we turn to a red cow. Yes, you read that right.
