Chapter Eight: Properties over Reasons
Properties over reasons. That's what this chapter is about, and it begins with the idea of invariance as truth. So, what is truth, really? We think of knowledge as information and understanding that lets us figure out the world. But knowledge has to be tied to truth; otherwise it's just an agenda. It's not really knowledge, it's something else.
Truth is what sticks around. It's what persists even while other things change. And that, interestingly, connects truth to abstraction. Think about an abstraction like "dog." It's a higher-level category sitting above all sorts of sub-categories, the individual breeds. If half the dog breeds vanished tomorrow, it wouldn't change the fact that dogs exist. Thousands of new breeds could appear, and "dog" would still be there. Abstractions are robust; they last longer than their specific details.
And this applies to physical things, not just ideas. Higher-level physical structures outlast the details of the inner workings at lower levels. Granted, this is about complex systems, not simple ones. Take a gear out of a car transmission, a simple system, and the stick shift stops working. But in a complex system, multiple configurations can all map to the same output, the "stick shift" of the system. That's the multiple realizability we talked about earlier, and it's an entropic thing.
The stuff we see, the emergent structures and behaviors, exists because there are tons of ways to achieve it. That's why it's invariant. If those structures were reachable only through a few very specific pathways, they'd be far too fragile to survive in the real world. So the higher the level of abstraction, the more invariant it is, and that goes for both informational and physical abstractions. This gives us an anchor, a way to tell whether something is more likely to be true. Just as nature keeps only what can survive, truth is what hangs on while everything else changes. Invariance as truth.
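To make the many-to-one idea concrete, here's a minimal sketch (a toy system of my own, with arbitrary numbers, not anything drawn from nature) that counts how many internal configurations all produce the same higher-level output:

```python
from itertools import product

# Toy system (illustrative assumption): 8 internal components, each in one
# of 4 states. The higher level only "sees" the summed output, not which
# components produced it -- a many-to-one map from micro to macro.
N_COMPONENTS = 8
N_STATES = 4

def macro_output(config):
    return sum(config)  # the only thing visible at the higher level

# Tally how many micro-configurations sit behind each macro output.
counts = {}
for config in product(range(N_STATES), repeat=N_COMPONENTS):
    out = macro_output(config)
    counts[out] = counts.get(out, 0) + 1

print(counts[12])  # 8092 distinct realizations of the same mid-range output
print(counts[0])   # exactly 1 way to realize the extreme output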
The knowledge we're after, the knowledge tied to truth, can't be about the tiny details, because those details are always changing; they can't tell the real story. The big, abstract patterns are more permanent, and they speak to the real truths about nature and life.
And this leads to something big about how we see knowledge. The idea that we just keep *accumulating* knowledge doesn't hold up under complexity. Knowledge has to *converge*. It comes together, it doesn't just keep expanding. That's very different from what we're usually taught in science and engineering: that there's always more to know, that we have to keep peeling back layers and finding new things, that knowledge is an ever-expanding circle.
But if truth is about what's invariant, then further exploration mostly reveals what we already know. Yes, finding a new species in the ocean, something never seen before, is amazing. But that new life form is still just one instance of the same processes that govern all life. There will always be new things to uncover; that doesn't mean we're learning anything *fundamentally* new.
So we have to distinguish knowledge growth from knowledge convergence. Science and engineering have mostly focused on piling up knowledge and organizing it. But knowledge only grows when it's the reductionist kind, the knowledge made of little pieces and details, and there will always be more of those. If we're honest about how nature works, though, those details don't actually map to what we measure, see, and experience. The whole premise of reductionism is that the little pieces and their connections explain the big picture, and that's just not the case.
The inner pieces aren't *responsible* for the structures and behaviors we see in nature, not the way reductionists think. The "pieces lead to the experience" story was just a convenient way to label what we see and assign credit.
Emergence is the inevitable result of structures that process information to solve hard problems. It's not a causal chain from little pieces to big pieces. Emergence finds the configurations that map many inputs to fewer outputs, all in service of the nested levels inside a natural problem.
So knowledge and truth outside reductionism come down to invariance, and by definition, only what persists is invariant. That takes us away from the idea that knowledge just grows. Instead of accumulating more information, it's about seeing the same patterns play out again and again. It's a shift in how complexity affects epistemology, the study of knowledge. Any theory of knowledge based on a causal link between tiny details and what we see and feel won't work. The methods, the validity, the scope of today's science and engineering run against how nature actually works.
Framing knowledge as a universe full of yet-to-be-discovered truths is a problem. I'm not saying there aren't discoveries to be made, of course. But to discover isn't to reveal some hidden detail that makes a thing tick; it's to build something that works. It's not about discovering new knowledge, it's about discovering solutions, solutions that tackle the challenges at hand. And that's not mere praxis, not just the practical application of theory, but actual building, actual creating. Our creative solutions shouldn't come from past knowledge. They have to emerge on their own.
Knowledge convergence is why philosophies from thousands of years ago still feel true today. The truths discovered back then were invariant abstractions drawn from lives full of turmoil. The details of those lives are nothing like ours, but the truths still matter. I'm not straining to connect philosophy and technology here; this is an undeniable part of how information works in nature. We can't separate the informational from the physical, and trying to is just a convenient human way of looking at things.
Yes, we can still be surprised by a new species deep in the ocean or in the rainforest. But when we see it, the truest parts of its body and behavior aren't surprising. They're nature's solution to problems in its environment, and those discoveries always fall back to the same core truths.
Discovery in science and engineering is less about uncovering something totally new and more about exposing the same patterns we see again and again. True knowledge, the kind truly aligned with nature, converges; it doesn't grow.
Now we come to the tyranny of explanation. Science has made explanation its whole reason for being. It's supposed to explain how the world works, and its power is supposed to come from its ability to uncover the causal reasons behind everything we see. Science is here to reveal nature's secrets and use them to grow our knowledge and technology.
But resting the purpose of science on explanation is flawed. In today's science, explanation depends entirely on the idea of inner knowledge, which makes it reductionist at its core. To explain something scientifically, we're told, is to describe how it functions on the inside.
An explanation gives a causal story about how outputs are produced. Take something simple, like an atom: we explain the colors we see by pointing to electron transitions. Electrons jump between energy levels, emit photons of light, and our eyes register those photons as color.
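To see how tidy that reductionist story looks at the level of a simple system, here's the standard textbook computation (the Rydberg formula for hydrogen; nothing here is specific to this book):

$$\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right), \qquad R_H \approx 1.097 \times 10^{7}\ \mathrm{m}^{-1}$$

For the jump from level $n_2 = 3$ down to $n_1 = 2$, $1/\lambda = R_H(1/4 - 1/9) = 5R_H/36$, giving $\lambda \approx 656\ \mathrm{nm}$: red light. At the level of a single atom, the causal story really is that clean. The trouble starts when we pretend the same move scales up to perception.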
But is that what color *really* is? Photon emissions play a role, sure, but color happens in the realm of complexity, not simplicity. Perception, period, goes far beyond counting particles hitting the eye. Our brains are processing and interpreting what we see.
Some people will say we can just add more reductionism to the explanation and account for what's missing. We can pick apart the biology of the eye and add some psychology to explain how we interpret color. We can say color perception involves a chain of processes in the eye: photoreceptor cells, signal processing, transmission to the visual cortex in the brain.
But what about lighting conditions, surrounding colors, and the fact that different people might see colors differently? We don't actually know that people experience colors the same way. There may even be cultural influences on how we see color.
We can keep adding explanations to account for color perception, but at some point, does this even make sense? Photon emissions are *part* of color, but they barely explain anything, and piling on biological and psychological explanations only makes things murkier.
The problem with reductionist explanations is that we can *always* make them. We can always find a piece of the system and talk about it in isolation. But that piece hardly explains what we see, measure, or experience; in reality, isolating something tells us almost nothing about how things come to be. The only reason people believe in isolated causes is that we *assume* the isolated piece is causally connected to perception. But it isn't, and it can't be. We know this because, in complex systems, pieces don't lead to what we perceive in any deterministic way. And yet that's how the scientific enterprise markets its explanations. This assumed determinism is so baked into society's perception of science that whenever an explanation is provided, we presume it's causally connected to what we see.
The core problem with explanations is that you can't test them when dealing with complex phenomena. Unlike a prediction, which you can check against repeated observations, an explanation in the complex world is almost impossible to disprove. Imagine explaining anger by measuring activity in some region of the brain. The activity is real, the measurement is real, and we can keep refining that measurement. But none of that makes the connection between that brain region and the actual experience any more real. The connection between reductionist discoveries and human experience is largely fiction, resting on an assumed causal link that doesn't exist. We know it doesn't exist because that's not how nature works. Nature doesn't make big, complex things through causal connections from little things.
Think about Occam's razor, the idea that you should look for the simplest explanation. The point of simple explanations isn't just that they're easy to understand, or that simple theories are more likely to be true. It's that simple things can be destroyed. Occam's razor works because it admits we can't know whether something is true, but we *can* know whether it survives, and things don't survive for random reasons. Survival is the best proof of a thing's validity. But if explanations of complex phenomena are almost impossible to test, then Occam's razor goes out the window. Explanations get propped up artificially, kept alive by bogus assumptions about the discoveries being made, above all the assumption that they're causally connected to what we experience.
That doesn't mean we throw out the idea of justified belief. Just because we can't see causality in complex systems doesn't mean we can't tell the difference between real knowledge and mere opinion. And we absolutely can know that reductionist explanations are bogus, not by testing them with repeated observations, but by basing our arguments on properties and logic.
We know that complex phenomena follow certain properties. Those properties are emergent; they don't come from some simple set of steps. That's why it's irrational to accept electronic transitions as an explanation of color. Color has no meaning without perception, and perception emerges from a complex phenomenon. No amount of extra physical, chemical, biological, or psychological explanation can add to the picture of color, because there's nothing to add.
To be clear, that doesn't mean those electronic transitions, or any biological, chemical, or psychological mechanism, play no role. Of course they do. But knowing the role is basically knowing nothing. As we discussed earlier, saying mitochondria produce energy is interesting but doesn't really mean much; that "role" disappears completely when you take the mitochondria out of the cell. Roles are convenient labels, not causal reality. If something depends entirely on being embedded in a matrix of countless other roles, then the word "role" loses all meaning.
The tyranny of explanation forces us to see the world through inner knowledge. It makes society accept disconnected mechanisms found through isolation as explanations for how the world works.
If the worst thing about reductionist explanations was that they were fairy tales, they'd merely be misleading. But reductionist explanations worm their way into our designs. Think about healthcare. A study finds some "statistically significant" result, and it gets folded into society. Researchers isolate some piece of a health-related system and declare that it plays a role, and that becomes a path to human health. But just as inner knowledge of electronic transitions doesn't tell us what color really is, neither does the role of any vitamin, mineral, or health-related intervention.
That's why explanations of complex phenomena have to rest on properties and logic, not reductionist stories. But logic alone isn't the answer when it's used inside a broken paradigm. Logic only works if the assumptions behind a statement are valid. Someone can make a logical argument about the role vitamin C plays in health, but only by using assumptions that society *incorrectly* takes to be valid. That's how our reductionist paradigm gets away with so much nonsense. It's not the logic that's flawed, and it's not that real discoveries aren't being made. It's the assumption that what's been discovered is automatically connected to what we see.
But when logic is used with the properties known to be true of complex systems, it becomes a powerful tool for arguing about what we see. Logic, paired with a better kind of knowledge, knowledge based on invariant truths, is a formidable way to reason about what's real.
So let's talk about logic with properties. Logic is reasoning conducted or assessed according to strict rules of validity. It gives us a framework for backing up statements rationally, which makes them more likely to be accepted as generally true. But in the real world, there's no such thing as pure deduction. There's always some fuzziness in how true our premises are, and that makes it impossible to have statements that are both purely true *and* realistic. Only very simple situations can be proven true, and reality is not simple. That's why there's no such thing as scientific proof. Logical proof, yes. Mathematical proof, yes. Scientific proof, no.
So, the strength of a real-world logical argument depends on its premises. Logic can stitch together our premises and conclusions, but only the truth of our premises can connect an argument to nature. It's how close our premises are to what we know about nature that makes something more true.
Today's science uses logic, loosely and indirectly, to defend itself. Scientists run experiments and develop theories to support the truthiness of the premises in their arguments. If someone measures activity in the brain, they'll reason about how that activity can be used to infer a conclusion about the source of some behavior. Maybe fMRI scans show increased activity in the prefrontal cortex when people make decisions; the researchers then reason that the prefrontal cortex plays a crucial role in decision-making. So far, that's valid. But the argument rests on a deeply flawed assumption about how nature works. It *assumes* there's a neural source of human decision-making. In a full argument, that would be another premise. But that premise cannot be true, because complex systems don't have sources. Complexity rests on emergence, which doesn't function with source locations; emergence happens in a holistic, interdependent way. Complex systems like the human brain don't produce their outputs from a mere region or location.
The assumption that there *must* be a neural source of human decision-making is a hidden premise (also called an implicit premise): a premise that isn't stated but is assumed true for the conclusion to hold. Hidden premises are a problem because people don't notice them and don't examine them, and that leads to faulty reasoning. In the case of regions that supposedly explain the outputs of a complex system, these hidden premises are plainly false, and that makes any conclusions drawn from those experiments bogus.
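To make that concrete, here's the shape of the fMRI argument with the hidden premise written out (a sketch of the reasoning, not a quote from any study):

$$\begin{aligned} &P_1:\ \text{fMRI shows increased prefrontal activity during decision-making.} \\ &P_2\ (\text{hidden}):\ \text{decision-making has a localized neural source.} \\ &\therefore\ \text{the prefrontal cortex is a source of decision-making.} \end{aligned}$$

The inference is valid; it's $P_2$ that can't be true under complexity, and $P_2$ is the premise nobody writes down.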
That's the rot at the heart of today's scientific and engineering paradigm. Reductionist premises worked fine when we were discovering and building simple things, but they're false in the face of complexity. Today's science and engineering get away with so much reductionism because they aren't using logic correctly to defend their conclusions. Most of today's experiments and theories rest on arguments with hidden premises rooted in reductionism.
To the average person, those studies look perfectly fine, maybe even ethical. If we're told that brain-region studies show how certain people struggle with decision-making when that region is impaired, it suggests there might be a treatment down the line. But that's bad science making its way into our designs, and when it comes to intervention, that's a recipe for disaster, not ethics.
Again, that's not logic's fault. It's because we misunderstand complexity. Logic is a powerful tool for human reasoning, but it's only as strong as its premises. And our premises have to be close to what we know about nature. What we know are properties, not causes.
That's what I call properties over reasons. There isn't an unlimited number of properties to discover in complex systems; there's just a handful, and those are all we need to make the most important decisions in science, engineering, and society as a whole.
Properties are best understood in contrast to causal explanations. A causal explanation identifies the underlying causes or mechanisms that produce the outputs we see; it tries to explain *how* something happens. We could explain how a metal expands when heated by pointing to the increased kinetic energy of its atoms, which leads to greater atomic separation.
In contrast, a property is a descriptive aspect of an object or phenomenon. It answers *what* an object is like, not *how* it produces its outputs. In our metal example, the fact that metal expands when heated is a property of metal (thermal expansion). There's no appeal to causal mechanisms, just the fact that metal expands when heated.
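Notice that a property can be stated, and used, quantitatively without any causal story attached. The standard linear-expansion relation (the aluminum numbers below are just illustrative):

$$\Delta L = \alpha\, L_0\, \Delta T$$

For an aluminum rod with $\alpha \approx 23 \times 10^{-6}\ \mathrm{K}^{-1}$, $L_0 = 1\ \mathrm{m}$, and $\Delta T = 100\ \mathrm{K}$, we get $\Delta L \approx 2.3\ \mathrm{mm}$. The coefficient $\alpha$ describes what the metal does, not why its atoms do it, and that constraint alone is enough to engineer with.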
Properties can be thought of as constraints that nature follows. They set the boundaries for physical, chemical, and biological processes. Nature is full of them: mass and energy can't be created or destroyed, only transformed; the total momentum of a closed system stays constant unless external forces act on it; entropy doesn't decrease in isolated systems; gravity draws objects with mass toward each other; electromagnetic forces bind matter at small scales. Other properties govern how organisms pass on traits, and how those better suited to their environment tend to survive and reproduce. Ecosystems cycle nutrients and channel energy. There's a limit on the speed of light. The properties of materials bound what's physically possible. Systems tend to seek equilibrium and stability through feedback mechanisms, and so on.
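A few of those constraints, written compactly (standard statements of the conservation laws and the second law, not derivations):

$$E_{\text{total}} = \text{const}, \qquad \sum_i \vec{p}_i = \text{const}, \qquad \Delta S \ge 0$$

for an isolated system, a closed system free of external forces, and an isolated system, respectively. None of these says how a particular process unfolds; each says what every process must respect.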
Those properties aren't *how* things happen; they're *why* things happen. In simple systems, the how and the why are basically the same. If I ask why planets stay close to the sun, the property of gravitational attraction answers both. But under complexity, that's no longer the case. If we ask how all the planets stay where they are, we can still answer the why (gravity), but it's hard to answer the how, the specific process that keeps the planets where they are, in any exact way. And in more complex systems still, the how disappears completely.
We've seen the various properties of complexity throughout this book. They fall under thermodynamics, information theory, computation, and evolution: the evolutionary process, nature's recipe of variation, selection, and iteration; the way entropy ties together the physical and the informational; nature's use of information compression; the nested structure of problems; flexible determinism; multiple realizability; the way meta-level processes create abstractions; group selection; and the fact that things don't survive for random reasons.
Those all come from more basic properties like nonlinearity, self-organization, adaptiveness, resilience, feedback loops, hierarchy, criticality, chaotic and periodic dynamics, synchronization, phase transitions, bifurcation, and spontaneous pattern formation.
That list might seem long, but it's tiny compared to the number of causal explanations scientists and engineers come up with. There's no limit to the number of explanations you can concoct under reductionism. You can always peel back layers, choose to focus on some bit of matter (genes, regions, etc.), and then invent a story about how it connects to what we see.
I argue that real scientific descriptions of things, and any decision-making that follows from them, can't rest on causal explanations. It's more scientific to describe nature, and decide about it, based on its universal, timeless properties. Real-world situations and natural phenomena don't have paths and root causes; they have properties they adhere to. Properties are the invariant truths that exist in the abstract, which is where genuine truth resides. Only a framework that pairs logic with properties, not reasons, can bring us into an intellectually honest phase of science and engineering in the age of complexity.
What the properties of nature show us, and what we'd miss if we focused on causes, is that complexity has a one-way direction. Complex things exhibit a sudden and irreversible emergence of physical structures and behaviors. The properties we see in nature aren't the result of some source or path; they materialize out of a fantastically intricate system of statistical likelihood. All the pieces are needed to make nature what it is; nature's solutions couldn't function without the entire group working together to produce the holistic output. In simple systems, each piece adds incrementally to the whole, but that's not how complex systems work. Complex systems emerge in an instant, when the necessary pieces are in place to compute the answers to their external problems.
The complete absence of a deterministic path in complexity means that complexity operates in only one direction. We can't piece together, component by component, what makes a complex system work; complexity has to arise after the fact. And that completely eliminates the idea that design can lead to good outcomes.
This one-way direction of complexity guarantees that design will interfere with building complex things, because design prevents emergence from happening correctly. That's why writing built on upfront literary structures is boring. That's why introducing a genetic change on purpose, to design some outcome, never works without side effects. That's why "precision medicine" is an oxymoron. That's why drastic social engineering eventually leads to atrocities. Good design under complexity isn't a matter of difficulty; it's impossible.
Introducing changes to the inputs produces a wide range of changes to the outputs, and a few of those changes might be desirable. The headache might go away, the corn fields might flourish, the baby might have blue eyes. That doesn't mean the *cause* has been identified. It just means that turning a crank on one end of the system led to some reproducible change on the other end. DDT was effective at controlling mosquito populations, but it was also effective at disrupting food chains, thinning eggshells, and devastating wildlife populations. Everything is connected. Nature doesn't run on the fictitious causes that humans define, and complexity doesn't run on a deterministic path from inputs to outputs. Interventions into complex systems are exercises in design, and design has to interfere detrimentally, because it rests on an idea diametrically opposed to how complex systems produce their outputs.
Knowing the properties associated with complexity, and thus with life, allows us to make better decisions based on universal patterns. Every situation can be decided better because, instead of resting decisions on fictitious inner knowledge (reasons), we rest them on properties that are universally true and guaranteed to hold. That doesn't guarantee specific outcomes, but it does guarantee that systems will adhere to constraints and patterns we already know.
Validation has always played a central role in design. Countless processes have been invented to help confirm that a design meets the needs of end users and other stakeholders. The point of validation is to make sure a design solves the problem it was intended to address. But all of that depends on the causal determinism of simple machines, which doesn't exist under complexity. So what does validation mean when we build complex things? How can we know our work conforms to what's needed to solve the problem?
The key difference is that complex solutions can't solve problems as intended (in the sense of a known process) because intention smacks of design. We can't know any specifics about how the internal guts of our complex inventions solve problems because there are no "guts" in the reductionist sense.
And yet validation is important. We have to validate the automatic realization of physical abstractions that compute answers to naturally hard problems. We can't write a whole book without some form of validation along the way. We can't engineer the next deep learning system without validating that our ongoing efforts conform to knowledge about systems. All true, but it's critical to redefine what constitutes knowledge about systems.
As discussed previously, it's the properties of complexity that matter now, and they are what separate good solutions from bad ones. Those properties are what we have to conform to. But we have to be careful with the word "conform." Remember the direction of complexity: validation under complexity can only work after the fact, not before.
Something conforms when it complies with rules, standards, or laws. But in the current science and engineering paradigm, that happens in the direction opposite to complexity. The rules, standards, and laws are set at the beginning, and our work is expected to conform to them along the way. Under complexity, conformance has to operate in the other direction: rules, standards, and laws have to be used as signals that our trial and error and heuristics are going well.
It's about making sure the structures inside the guts of our creations actually emerge. The rules, standards, and laws are only there to signal that what we create is showing the properties we expect. What makes properties, rather than reasons, work under complexity is that they don't interfere. Properties don't intervene in the organic flow of our creation's inner details.
That makes properties categorically meta. Take any property of complexity, like self-organization. It's self-referencing because it involves entities organizing themselves without external intervention; self-organization leverages self-sustained feedback loops that let the system refer to itself for guidance. Or take multiple realizability, where different structures produce the same outcome. That kind of redundancy lets the system adapt by finding alternative paths to the desired outcome. Again, no external intervention tells the system where to go or how to change. It's the built-in capacity to map many inputs to few outputs that lets the system self-adjust.
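Returning to the toy system from the earlier counting sketch, here's an equally minimal illustration (again my own toy, not a method from this book) of that self-adjustment: blind variation plus feedback settles on *some* configuration with the target output, without ever being told which one:

```python
import random

# Same toy system as before: 8 components, 4 states each, and an
# environment that only evaluates the summed output.
N, STATES, TARGET = 8, 4, 12

def output(config):
    return sum(config)  # many configurations map to this one output

config = [random.randrange(STATES) for _ in range(N)]
while output(config) != TARGET:
    trial = config.copy()
    trial[random.randrange(N)] = random.randrange(STATES)  # blind variation
    # Feedback as selection: keep the variant only if it does no worse.
    if abs(output(trial) - TARGET) <= abs(output(config) - TARGET):
        config = trial

# Different runs end in different internal configurations, yet every run
# realizes the same external output: multiple realizability in action.
print(config, "->", output(config))
```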
In contrast to design, building complex things isn't about conformity; it's about taking more actions until the structures that emerge show signs of being truly complex. It's not about expecting our creations to look a certain way; it's about seeing whether they show the telltale signs of complexity, regardless of which emergent structure appears.
Ultimately, the validation of anything is survival. In writing, our work should be only what survives our intuitions and emotions. We're on the right track when things feel right, and we consider the work done when there's intuitive closure. Following intuition hardly sounds like a rigorous approach to building things, and yet those high-level, imprecise motivations are exactly why intuition is so powerful. Intuitions aren't just excuses for justifying decisions; they're powerful emotional cues that have evolved over millions of years. Seen that way, intuition becomes a potent guide to crafting complex objects. More to the point, it binds intuition to something mechanistic and rigorous: the properties of complexity. Intuition is implemented via heuristics and pattern recognition. It's not some untethered sentiment lacking rigor; it's an evolutionary trigger that operates far more effectively than anything offered by reductionism and precision.
Back to writing: the properties of complexity are all here, intuitively sought whenever one writes well. Good writing leverages a great deal of nonlinearity, as the author bounces around a vast possibility space in an initially haphazard fashion. The words on the page eventually self-organize, each iteration acting as self-referencing feedback that improves the content. The work begins to show resilience, as the author looks at it the next day with fresh perspective, retaining the words that survive. Hierarchy forms naturally, as words become paragraphs, paragraphs combine into sections, and sections aggregate into chapters. There are phase transitions in writing, where the earliest ideas appear choppy and disjointed, with inconsistent pacing and awkward phrasing, only to be smoothed into a fluid phase over time. These aren't forced analogies. Reductionists despise that emotive phrasing because it refuses to latch onto their isolated symbols and words of precision. But there's no ultimate distinction between the physical and the informational. Intuition works because it taps into complexity. Period.
Regardless of the project, if what's being created is a truly complex object, it will show signs of complexity. And those signs speak to the fact that the top stressor of a hard problem, survival, is being resolved. That's true validation: not design, but survival. We can't know what a surviving, valid structure is supposed to look like. Our solutions must appear as nature's solutions do, forever surprising in their appearance, yet unsurprising in the properties they adhere to.
Design tries to mimic the properties of complexity in a low-dimensional fashion and applies them in the wrong direction. Design uses structure, organization, and even feedback, but it does so in an intervening and destructive way.
In the previous sections, we saw how the properties of complexity let us make better, more logical decisions, because our premises become more legitimate than anything based on fictitious inner causal knowledge. Science and engineering both have to align their work with making better arguments about why something is valid. And there's no better argument than one based on what survives. What survives is complexity.
Now let's think about how versus why. We saw the connection between the thermodynamic and information-theoretic versions of entropy, and that connection lets us see nature for what it really is: entities that compute answers to problems. We usually think of nature in terms of its physical processes, but it's ultimately about processing information to solve problems.
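The formal parallel behind that connection is standard: Boltzmann's thermodynamic entropy and Shannon's information entropy share the same logarithmic form,

$$S = k_B \ln W \qquad\longleftrightarrow\qquad H = -\sum_i p_i \log_2 p_i,$$

where $W$ counts the microstates consistent with a macrostate and the $p_i$ are the probabilities of a source's messages. With $W$ equally likely states, $H$ collapses to $\log_2 W$: the same count of possibilities, in different units.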
Atoms coming together to form molecules solves problems: fueling chemical reactions, adding structural support, bringing about function, enacting energy conversion. Yes, atoms stick together because of the interactions of electrons and nuclei, but that's *how* they come together, not *why*. Atoms stick together in order to form an inevitable configuration of matter that processes information to solve a problem.
And it's important to point out that when we isolate the physical realm into discrete pieces, we lose the essence of why something is happening. In simple systems, such delimitation brings a clean objectivity to how we understand the world; under complexity, it undermines understanding, because the discrete interactions have almost nothing to do with why something came to be.
Changing our focus from how to why is one of the major shifts (both scientifically and philosophically) demanded by any intellectually honest account of reality. The structures we see in nature are the only ones that could have formed, not because of some unique set of interactions among their components, but because there can be no other configuration that solves the problem. Nature works in an automatic and inevitable fashion, not by reductionist reasoning, isolation, and inner causality.
Atoms and molecules are one thing. What about the beaver, or some other highly complex thing, anything far beyond the scale of microscopic entities? It turns out we can explain beavers with as much rigor as atoms, as long as we're reasoning about the why, not the conventional how.
"How," in today's science and engineering paradigm, asks about the specific manner in which something occurs. How does photosynthesis happen in plants? How does an engine create motion? How does one solve this mathematical problem? But the "why" relates to the rationale or underlying principles that lead to a given structure or behavior. The reductionist will spend their career trying to answer how the beaver looks and acts the way it does. They'll never bring a rigorous answer, because nature's solutions aren't about causal chains, they're about manifesting structures that solve problems with flexible determinism. The beaver's set of configurations are the inevitable outgrowth of problem-solving. Its features, and the way they're used, overlap, informationally, with the demands of its environment. That's it.
And that's why thinking about things in informational and computational terms is so universal. Under complexity, reaching into the guts of a system won't tell you how that system works. The mechanism, and any rationale used to defend it, must rest entirely on the logic of how informational and computational properties are satisfied.