
To understand emergence, we have to connect the physical and informational worlds. Nature is full of physical stuff, and that stuff solves problems by processing information. Unless we connect the two, the physical and the informational, we can't grasp what all those little pieces are doing when they come together to form bigger structures.

Current science often misses the mark on complexity, but it does give us useful frameworks: information theory, the theory of computation, and evolution. I've already touched on these to explain physical abstraction, how nature builds things, and what it means to solve naturally hard problems. Now we're going to dive deeper and propose a mechanism for how emergence actually happens.

This won't be a widely accepted explanation of emergence, because there isn't one. Science is still too invested in breaking things down to figure out how emergence works. What I'm putting forward here is what I consider the most rational explanation based on information, computation, and evolution. It will have things in common with other theories, but I'm presenting it on its own terms.

Let's start with the basics of information theory. It's a branch of mathematics and computer science concerned with quantifying information, especially in the context of communication. The main ideas come from Claude Shannon, the mathematician and engineer who laid the groundwork for the field. The key concept we get from him is entropy, an idea with roots in thermodynamics, thanks to physicists like Rudolf Clausius and Ludwig Boltzmann.

Thermodynamic entropy concerns the microscopic behavior of molecules and how systems move toward equilibrium, whereas Shannon's entropy concerns the amount of information needed to describe or predict the outcome of a random process. They're usually treated as separate, but they're deeply connected. I'd even say they're the same thing viewed from different angles. Information always comes from something physical, and both the physical and informational versions of entropy are central to how nature creates. It makes sense that Shannon and Boltzmann landed on the same idea.

Information theory ignores the physical makeup of a system so it can focus on the larger patterns that come from statistical behavior. That lets us talk about complex systems without getting bogged down in the details. Information theory applies to all kinds of things: communication systems, biological networks, even social interactions. That matters, because only truly fundamental aspects of nature connect to so many different things.

Entropy helps us measure uncertainty because it is a measure of disorder in a system. Some scientists dislike the word "disorder" for entropy, since systems can appear more ordered even with higher entropy, but that's largely semantics. The important point is that more entropy means more uncertainty. A disordered room is less predictable than an ordered one.

In the physical world, entropy tells us how many microscopic configurations, or ways of arranging particles, match a given macroscopic state. It's how we count the possible microscopic configurations that are consistent with the macroscopic properties we measure, like temperature, pressure, and volume. Atoms and molecules are the microscopic pieces.

If a system has more possible configurations, it's less predictable, because more possibilities mean less certainty about what something is or could become. A gas filling a whole container evenly has higher entropy than one squeezed into a single side.
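
A minimal sketch in Python makes the counting concrete. The toy model, N idealized particles that each independently occupy the left or right half of a container, is an illustrative assumption of mine; it just shows how many more microstates correspond to the evenly spread macrostate.

```python
from math import comb

# Toy model (illustrative assumption): N particles, each independently in the
# left or right half of a container. A macrostate is "how many particles are
# on the left"; a microstate is "which particular particles are on the left".
N = 100

even_split = comb(N, N // 2)   # microstates with 50 particles on each side
one_side = comb(N, N)          # microstates with all 100 squeezed to one side

print(f"even spread : {even_split:.3e} microstates")   # ~1.01e29
print(f"one side    : {one_side} microstate")          # exactly 1
```

The spread-out macrostate can be realized in roughly 10^29 ways, the squeezed one in exactly one way, which is why we overwhelmingly observe the former.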

You can also think of entropy as the average amount of information needed to describe or predict something. Flipping a fair coin has maximum entropy because there's no way to know what will happen. Bend the coin so it's more likely to land on one side, and there's less uncertainty, so lower entropy.
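
As a rough illustration, here is the standard Shannon entropy formula, H = -Σ p·log2(p), applied to the fair and bent coins; the 90/10 bias for the bent coin is just an assumed example.

```python
from math import log2

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), summed over outcomes with non-zero probability.
    return -sum(p * log2(p) for p in probs if p > 0)

fair_coin = [0.5, 0.5]   # maximum uncertainty for two outcomes
bent_coin = [0.9, 0.1]   # assumed bias, purely for illustration

print(f"fair coin: {shannon_entropy(fair_coin):.3f} bits")   # 1.000
print(f"bent coin: {shannon_entropy(bent_coin):.3f} bits")   # ~0.469
```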

Shannon entropy is computed from the distribution of probabilities over different outcomes. It's like flipping thousands of coins, counting how often they come up heads or tails, and drawing a bar chart of the counts.

Of course, two bars on a chart isn't much of a distribution. Real life isn't like coin flips. Real systems have enormous numbers of possible configurations, a trillion-sided coin, and if you tried to chart the frequencies of those configurations, they would smooth out into a curve, like the bell curve you see in textbooks. You can't get a bell curve with coin flips, since there are only two outcomes, but you can with things like people's heights. The most common height sits at the peak of the distribution, because most people fall in that range.

But the bell curve can't capture complexity, because it's devoid of interactions. Individual heights are independent; they don't talk to each other. Your best friend's height doesn't affect yours, so there's a well-defined average height. Complexity is defined by many pieces with many interactions, and it's those interactions that give complex systems their structures and behaviors, which makes textbook distributions far too simple. Whatever "distribution" exists in nature probably doesn't look like anything in a textbook.

The core concept entropy gives us, which I'll use to talk about emergence, is the idea of many different configurations leading to one observable macroscopic thing. That's the key mechanism that describes how emergence happens. Not a bunch of separate steps adding up to the outcome, but something arising from the most statistically likely configuration.

Complexity is using physical stuff to do things informationally. That's why processes exist in nature: to arrange matter so it turns inputs into outputs and solves problems. Entropy is a measure of information content, so it's crucial to how information gets processed. All physical processes change information, so nature's problem-solving will involve changes in entropy.

Entropy goes up when heat moves from hot to cold, because thermal energy becomes more evenly distributed. It increases when reactants become products in chemical reactions. In phase transitions like ice melting into water, entropy goes up because the molecules are more disordered in the liquid. Any time a gas expands, entropy increases.

But we also see entropy decrease, locally at least, as nature makes her objects. Living things are lower in entropy than what created them. Life organizes and maintains low-entropy structures by drawing energy from its surroundings. Atoms and molecules often become more ordered when substances solidify out of solution. Order can spontaneously emerge in flocks of birds or schools of fish.

We know something has higher entropy if more microscopic configurations match the macroscopic property we see, like pressure or volume. If you measure a system's pressure, that pressure will be the one with the most microscopic configurations, the most arrangements of atoms, that would produce it. The other pressures we could have observed have fewer microscopic configurations behind them. What we observe is the most probable state of the system, the state with the most underlying microstates producing what we're looking at.

Systems naturally move towards states with the most accessible microstates. This isn't just for pressures and temperatures. The physical structures and behaviors we see probably happen because they have the most microstates consistent with how they look. This applies to all scales.

Think about emergent structures in nature. They're patterns that come from the interactions of lots of pieces. Those pieces are the lower-level microstates that map to some macroscopic emergent structure. Microscopic doesn't have to mean actually microscopic in size, just a level below what we're looking at.

The role of entropy in life has been debated for a long time. The second law of thermodynamics says entropy always increases in isolated systems, so things should become more disordered. Yet living things are highly ordered, low-entropy structures. The resolution is that living systems decrease entropy only locally, by increasing entropy in their surroundings. Living things aren't isolated systems; they exchange energy and matter with their environment.

But saying living things are highly ordered and open isn't the whole picture. Within a single entity, more disorder at the lower levels, higher entropy, should lead to more order at the higher levels. That's why emergent structures arise from the greatest entropy a stable system can sustain. Entropy isn't being lowered or raised in any single definite direction; it's always changing, with low-level entropy increases fueling entropy decreases at higher scales.

The key point is that explaining emergence doesn't require a chain of cause and effect between levels. It doesn't require an approximation based on reductionism or deterministic reasoning. Viewed in terms of entropy, emergence is simply the physical result of having the most lower-level configurations of matter match the structures and behaviors we see, measure, and experience.

The stripes on a zebrafish are what we see. Asking how they form is the wrong question; no amount of cell imaging or genetic tooling will ever answer it correctly. But we know the stripes exist at the highest level (n), and they must have come from the cells below (n - 1). So the stripes must correspond to the largest number of underlying cell configurations. Of all possible cell configurations, the ones that create stripes are the most common.

Ant colonies show sophisticated self-organization, division of labor, robustness, adaptability, and efficiency. These traits are emergent patterns. There's no causal chain to explain where they come from, but we know those patterns (n) must come from the greatest number of microscopic configurations consistent with the macroscopic patterns we see. That's what emergence is, I think.

Entropy can't be adequately discussed at only one scale. We should expect entropy to increase and decrease at the same time within a single entity. A natural solution will have more entropy at the lower levels of its system than at its higher levels. Zebrafish stripes must have lower entropy than the structures one level down, like cells. The inner details are messier than the higher-level order we see.

This is true for philosophy too, not just physical systems. That would make most scientists cringe, but not because there's no objective truth; it's because they rely on a dying reductionist idea. Reality is ultimately informational and computational, and there are different levels of tractability. The truths that last sit at the highest levels of abstraction, while the specific details are always changing.

This isn't a statement about philosophy, or trying to include it in science, it's just how systems behave based on information and computation. It's how nature works, whether you like it or not.

Just like the peak of a bell curve tells us the most likely height in a group of people, the peak of any distribution is where the probability is most concentrated. That highest point of probability density tells us what we can expect to see. That's where the most numerous microstates of a system match what we observe. Nature doesn't use anything as basic as a bell curve, but peaks in distributions are still what we observe, because that's statistically where the most structure- and behavior-producing configurations exist.

So, we can think of emergent structures as the physical versions of the peaks of probability distributions. By combining the thermodynamic and information-theoretic interpretations of entropy, we start to see what emergence actually is.

Entropy shows us the connection between the underlying configurations and the macroscopic attributes we see. At any scale in nature, the observed structure should be the peak of a distribution of possibilities. That peak represents the most possible configurations that produce the same result. So, whatever feature we're observing in nature, it's the one that can be achieved in the most possible ways.

Therefore, there will always be many ways to compute the right answer to a hard problem. Nature's emergent structures are its computing constructs, and they're made by mapping many possible microstates onto their existence. It's physically impossible for a single path to produce an outcome in a complex object.

That alone disproves reductionism, and the idea that nature's systems could be internally deterministic. The individual pieces of anything in nature aren't deterministically connected to outcomes. There can be no root causes in complex systems, because something achievable in the most possible ways can't have a root cause. Entropy tells us that instead of a path from input to output, we have a system that arranges itself in countless ways to produce the same output. That's how nature structures itself, and those structures process information to produce what we see.

Nature's solutions are multiply realizable. What we measure, observe, and experience in nature are things that can be achieved in the most possible ways.

Now for a bold claim: emergence isn't some niche thing we occasionally see in nature, it is nature. When we look at any system at any scale, we're not seeing something deterministically produced. The pieces don't have specific individual roles, yet they contribute to a new synergistic entity, something with its own distinct and independent existence. Emergence is usually described as individual elements producing a combined effect greater than the sum of their separate effects. That's true, but the word "greater" is problematic: it suggests the whole is still causally connected to the individual pieces. I think emergence is an abrupt transition from a collection of individual pieces to a completely new object with distinct functionality. This transition erases any idea of causality between lower and higher levels.

The distinct functionalities at each scale are a result of the informational and computational realization of configurations that compute unique outputs. Nature makes what's needed to solve a given problem, and it does that through the multiply realizable mechanism of entropy. There must be a mapping from many possible inputs to a few specific outputs. That can't happen using deterministic causal paths, only by entropically driven information compression.

Atomic structure isn't the result of neutrons, protons, and electrons working together in some simplistic way. We already know that, because we can't exactly solve any atom above hydrogen on the periodic table. Once we hit the three-body problem, interactions between components prevent exact deterministic predictions. Such calculations are treated "approximately," but nature isn't an approximation. The calculations used today assume underlying determinism and try to create a softer version of that determinism. But if nature abruptly transitions its various scales into entirely new entities, those calculations aren't approximations, they're more like near misses. Approximations are answers that aren't as good or exact as they would be if we approached the problem directly. But when we use numerical methods, heuristics, mathematical optimization, or machine learning, we're not finding "approximate solutions," because there's nothing to approximate. These techniques aren't sitting above some deeper causal reality.
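
To make the three-body point concrete, here is a minimal classical sketch (a gravitational toy system, not the quantum atomic problem referred to above): with three mutually interacting bodies there is no general closed-form solution, so we step the system forward numerically. The masses, positions, velocities, and step size are all illustrative assumptions.

```python
import numpy as np

G = 1.0                                        # illustrative units
mass = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

def accelerations(pos):
    # Pairwise Newtonian gravity; no closed form exists for the trajectories.
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

dt, acc = 0.001, accelerations(pos)
for _ in range(5000):                          # velocity Verlet time-stepping
    pos = pos + vel * dt + 0.5 * acc * dt ** 2
    new_acc = accelerations(pos)
    vel = vel + 0.5 * (acc + new_acc) * dt
    acc = new_acc

print(pos)   # positions after 5 time units; tiny input changes diverge quickly
```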

There's no such thing, even conceptually, as being more direct in our calculations. Observing things at different scales, like molecules versus atoms, or cells versus organelles, is to observe entirely new things altogether. The calculations we use to model complexity are closer to the real thing, not something near an underlying reality. Good models that seem to logically define or even predict a complex phenomenon are becoming something like the phenomenon itself.

That's closer to what some call strong emergence. Weak emergence suggests the properties and behaviors that arise from simpler components are fully explainable in terms of the underlying rules and interactions of those components. Strong emergence claims the structures and behaviors of complex phenomena can't be reduced or explained by the underlying rules and interactions of the system's components. I agree, but I don't relegate complexity to some niche corner of science. Emergence is nature. All of it. And that's not just a belief, it's based on proposed mechanisms aligned to known properties of information, computation, and evolution.

The emergence I'm talking about means nature is completely disconnected from the reductionist assumptions of our current science and engineering. The reductionist view works in some situations, but it's ultimately wrong, just as classical mechanics is quite useful but ultimately wrong. The difference is that whereas classical mechanics remains useful, the things we must build in the age of complexity render reductionism and design utterly void. Reaching into complex systems and arranging their pieces deliberately will only produce useless or dangerous outcomes.

Instead of thinking of complexity and emergence as just for certain phenomena, it's more honest to admit emergence is nature. Viewing nature in terms of its universal properties makes this explanation rigorous. It doesn't call on unprovable tales about inner causality. It's based on what we know happens.

Flexible determinism makes nature's solutions completely different from the rules-based engineering humans have always used. Nature's solutions do have something in common with rules: they convert inputs into outputs. What makes this conversion so different from rules-based engineering is that nature's computation isn't a causal chain of one-to-one events. Nature doesn't produce the same output for the same input. Nature squashes many different inputs into a small set of familiar outputs that solve a given problem. This squashing is extreme; vast numbers of inputs get converted into a handful of outputs. Of all the different environmental challenges a beaver faces, it always produces the same few beaver outputs. That's the flexible determinism we see in nature.
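
A toy sketch (my construction, not a model of any real animal) of what that squashing looks like informationally: countless combinations of inputs collapse onto a small, fixed repertoire of outputs.

```python
import random

RESPONSES = ("build dam", "reinforce lodge", "forage")   # the few outputs

def respond(water_level, temperature, predators):
    # Many-to-one mapping: endless input combinations, only three outputs.
    if water_level < 0.4:
        return RESPONSES[0]
    if predators > 2 or temperature < 0:
        return RESPONSES[1]
    return RESPONSES[2]

random.seed(0)
situations = [(random.random(), random.uniform(-10, 30), random.randint(0, 5))
              for _ in range(10_000)]
print({respond(*s) for s in situations})   # never more than the three outputs
```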

The solutions in nature can't be a set of rules that do the conversion. They must be something else. We've already seen that something else is the physical abstractions nature creates. But what are those physical abstractions doing that enables nature's solutions to harbor their flexible determinism? The answer must be information compression.

Information compression is another key idea from information theory: reducing the size of data without losing significant information. In communication, it means finding ways to represent data more efficiently so it can be stored or transmitted using fewer bits. It works by exploiting redundancies in data, so that repetition or predictability can be reconstructed after a message is received.

Imagine we had a string of letters like "AAAABBBCCDAA". We could send it more efficiently by reducing its size, representing the repeated patterns compactly. We could replace each run of repeated characters with the number of times the character repeats followed by the character itself, encoding our original string as "4A3B2C1D2A". The message has been "squashed" from 12 characters to 10. That doesn't seem like much, but with large amounts of data it makes a huge difference. At the receiving end, we decompress by expanding each character by the count that precedes it, returning us to the original message. Modern compression algorithms use this kind of approach to store files, photos, and other data efficiently.
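
The run-length scheme described above is easy to sketch in code; this is a generic illustration of the idea, not any particular production algorithm.

```python
from itertools import groupby

def rle_encode(text):
    # Replace each run of a repeated character with "<count><character>".
    return "".join(f"{len(list(run))}{char}" for char, run in groupby(text))

def rle_decode(encoded):
    # Expand each "<count><character>" pair back into the repeated run.
    out, digits = [], ""
    for ch in encoded:
        if ch.isdigit():
            digits += ch                 # counts can be more than one digit
        else:
            out.append(ch * int(digits))
            digits = ""
    return "".join(out)

message = "AAAABBBCCDAA"
packed = rle_encode(message)             # "4A3B2C1D2A": 12 characters -> 10
print(packed, rle_decode(packed) == message)   # decompression recovers it
```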

Information compression is what nature accomplishes via its emergent structures. It must be, because the emergent structures in nature's solutions convert countless possibilities into the few outputs that solve their challenges.

That's why nature is so flexible and adaptive. When we think about how a cheetah can turn on a dime and execute its speed in so many situations, we're amazed by its sophistication compared to something like a car. That's because we're thinking in terms of simple systems. People picture separate components bumping into each other and wonder how the animal accounts for so many factors.

But if we view things properly, through the lens of information compression, we can understand how the cheetah can achieve that level of sophistication. The countless inputs, terrain, obstacles, weather, competition, are being pared down to the few outputs that allow the cheetah to run effectively. Human-made objects, other than things like AI, can't do that. Those objects need individual pieces to explicitly process information. There are too many inputs to convert to outputs for deterministic machines to operate outside contrived and narrow environments. Only information compression realized by physical matter can enable nature to do what it does.

The physical abstractions nature creates do what all abstractions do. They compress information down to fewer outputs. But nature's physical abstractions have more in common with mental abstraction than with the designed physical abstractions of traditional engineering. Traditional engineering reduces the number of levers you must pull by making explicit causal connections between levels; nature's physical abstractions reduce the resources necessary to compute.

The emergent structures and behaviors in nature are best understood in terms of how they process information. They process information by compressing countless inputs down to fewer outputs. Just as our minds form concepts that act as common nouns for all subordinate concepts, nature produces structures that subordinate physical details adhere to. Just as "dog" is a category for all dog breeds, zebrafish stripes are a physical "category" for all subordinate tissues or cells that make those stripes possible.

Mentally, creating abstractions lowers the cognitive load when you're moving through complex environments and solving hard problems. By grouping superficially different things into single categories, based on deeper shared structures, we limit the amount of processing power needed to compute answers. But this isn't just informational. We can't fully separate the physical from the informational. The physical structures in nature exist to solve problems, and problem solving is an informational thing. There's always the transformation of information from inputs to outputs in nature.

Viewing nature's solutions at different scales, like comparing a whole tree to its individual branches and leaves, is viewing a system's different interfaces. These interfaces, its physical abstractions, have been created by nature as per progress by abstraction. The tree doesn't stop at solving the problem its branches solve; it solves harder problems by fashioning the entire tree.

The branches of a tree and the entire tree address different problems, fulfilling different roles within the broader context of the tree's survival and overall ecological function. Branches maximize photosynthesis and provide structural support. The entire tree solves the harder problem of anchoring itself to the ground through its root system, and extracting water and nutrients from the soil.

The tree is a system of interfaces that exist at different levels of abstraction, not just mentally, but physically. The demarcations we define in nature, "tree" and "branches," are the physical abstractions that have been created by nature to compute answers to its survival challenges. These levels go all the way down. From the tree we move down to branches and leaves, then further down to the microscopic cellular level, down even further to the atoms and molecules that make the cells. No matter how we choose to demarcate the different levels, each level is solving a problem, meaning it's processing information of a specific kind.

Now, you might argue that I'm just reifying mental abstractions, that the different scales of the tree are merely convenient demarcations humans make mentally. By defining roles and levels, it might seem like I'm relying on the very reductionism I argue against. But that's not true, because the roles I'm identifying lose their meaning once the group ceases to exist. Under reductionism, the organelles in a cell have self-contained definitions. The mitochondrion produces energy through cellular respiration. That's true, but that role has no meaning outside the cell. If it weren't for all the other organelles and the matrix they sit within, there'd be no point in producing energy through cellular respiration. So identifying an individual's role is less meaningful than reductionism suggests. If we tried to design a better solution to cellular respiration by targeting mitochondria, those designs would probably lead to poor outcomes.

Mental abstractions aren't figments of our imagination; they're units of processing. To define different levels in a natural object is to observe the nested structure of challenges that object solves. We'll learn more about the nested structure of problems later. The point here is we're not reifying nature into artificial yet convenient constructs. This is about undeniable computation, not reification. This holds even if we're demarcating incorrectly. Regardless of where you place boundaries, any observed level of nature (n) is solving a problem that's different from the group of pieces below (n - 1) and the group of pieces above (n + 1).

I talked about how the physical abstractions created throughout human history have all been designed, using inner knowledge of causality to connect the (n - 1) level to the (n) level. But in nature, no such causal connection exists. Each level is its own entity with distinct functionality solving a distinct problem. But physical abstraction is how progress of any kind must occur. So, nature is producing its own physical abstractions within the solutions it creates. Physical abstractions are what emergent structures are. Emergent structures are the interfaces created by nature that reduce the number of "levers" the next stage of progress must pull to coordinate underlying details.

Humans devise such interfaces through design, but nature isn't using cognition to make choices about what to subsume into what. Again, nature's recipe is a beautifully mindless process. That raises an obvious question. If no decisions are being made, how can nature form physical abstractions? How does nature choose which pieces to combine into a bundle at one level (n) to serve the next level (n + 1)?

Humans make abstractions by operating from outside the system. You can only notice that all dog breeds belong to the same abstract category, dog, by stepping outside the system of dogs and noticing what they all have in common. All dogs have four legs, teeth, a tail. A precise description of what makes a dog a dog will never suffice, because precision can't capture what happens under complexity. But the human brain can recognize when an animal is indeed a dog and place it into a mental category. That's also the case for the physical abstractions that have been designed throughout history. Humans use their awareness and cognition to spot how different pieces of a system can be bundled together into a single unit. But how does nature "observe" how different pieces have something in common? How can nature "observe" itself to create its physical abstractions?

Humans create mental and physical abstractions by making analogies. An analogy is a comparison of two things to show their similarities. It involves spotting some deeper structure between superficially disparate things, and abstraction is how we do that. Abstraction lets us isolate the core attributes of the source and target domains, so we can compare them more effectively. By stripping away the details, you can focus on the more essential principles or structures common to both things being compared. If we compare the solar system to an atom, it's by abstraction that we relate the atomic parts to the celestial parts.

So if nature is creating physical abstractions, it must be doing something like making analogies. That sounds strange, because making analogies is obviously cognitive, seemingly requiring consciousness. What would it mean for a mindless process like natural selection to bring about analogy making? But looked at mechanistically, complex systems can indeed take on the analogy-making apparatus. I mentioned before that nature must "observe" itself to note how different pieces have something in common. Mechanistically, that means the system must be self-referencing.

Just as an analogy binds things together, self-referencing systems can bind inner details by their shared structures. This binding doesn't require consciousness, only the mindless process of producing invariance due to survival. Think about how natural selection keeps certain things around while tossing out everything else. That can occur because the outputs of natural selection, a given generation, become its inputs, the next generation.

Self-referencing reinforces and stabilizes certain structures and behaviors that persist amid the flux of everything else. That's how all pattern formation happens in nature: the stabilization, selection, and reinforcement of particular states or structures. We see it manifest in many areas: attractors inside weather patterns, fixed points in ecological systems, the periodic behavior of heartbeats, energy minimization within chemical reactions, the formation of stripes on our zebrafish. This is evolution writ large, not just the parts relevant to biology. The same pattern-formation mechanism occurs in our cities, financial systems, electrical grids, the internet, and artificial intelligence systems. Complexity brings about the self-referencing mechanism through feedback loops, and this mechanism is how non-random things appear and remain.
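
A minimal sketch of the self-referencing loop itself: feed a system's output back in as its next input and, regardless of where it starts, it settles onto the same stable value, a fixed-point attractor. The particular map (repeatedly taking a cosine) is an arbitrary illustrative choice of mine, not a model of any of the systems above.

```python
import math

def step(x):
    return math.cos(x)          # the output is fed straight back in as input

for start in (0.1, 1.5, 3.0):   # very different starting details...
    x = start
    for _ in range(100):        # the self-referencing loop
        x = step(x)
    print(f"start {start}: settles near {x:.6f}")
# ...all runs converge to ~0.739085, a structure that persists amid the flux
# of initial conditions.
```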

Self-referencing is how nature observes itself. It's how survival becomes the mode by which better things are created, by which some things stick around, and others don't. The structures and behaviors we see in nature are just the parts that persisted, automatically and inevitably, through self-referencing.

It's self-referencing that enables nature to bundle the guts of her systems into physical abstractions. It's those physical abstractions that compute the answers to nature's posed challenges, as already described. Self-referencing builds configurations of emergent matter that compute the needed answer in natural systems. That's why the properties of complexity are so automatic and inevitable. Emergent structures and behaviors need no guide or path, just the ability to bind matter by continuous self-referencing.

Which brings us to the method by which nature is able to reference itself. The final piece in our quest to demystify emergence.
