People talk about complexity as if it were an enormous, difficult thing to handle. I think that gets it backwards. The mistake is assuming complexity is just a harder version of a simple system. It isn't. Complexity only becomes intractable when we try to get bogged down in every little detail, and chasing those details in complex situations is chasing ghosts.
We already know complexity creates structures and behaviors that do not appear in the individual parts. That actually makes complexity simpler than simplicity when it comes to making decisions. Simple systems expose detailed mechanisms because that is how they work; in complex systems, those low-level details have little bearing on what actually happens in the real world.
This is why math and physics seem so hard. They are hard, but they are hard because they simplify. It is only when we obsess over the tiny, intricate details that we get lost in the mess. Physics chalkboards are covered in equations precisely because physics drastically simplifies nature, almost gamifying it, and those details tell us little about what we observe at larger scales.
Consider how humans actually solve problems: with heuristics and quick thinking. Not because we are too dumb for complexity, but because that is how complexity is actually dealt with. The belief that we would be better off slowing down and picking apart every little thing we see misses the point.
The pathologizing of quick thinking by society and psychologists comes from the flawed idea behind the whole notion of design. Only if we assume we are supposed to know all the little details, all the inner workings, would anyone suggest that high-level, abstract thinking is a bad thing. We make real-world problems hard by breaking them into little pieces and focusing on details that do not really matter.
Consider genius. History books are full of names, people who supposedly figured out what no one else could. But progress by abstraction throws that whole idea out the window. Genius does not line up with how we actually solve problems. Problems are solved by transforming information from inputs to outputs, and in simple systems that transformation happens through deliberate pathways. Nature does not work that way. Nature's outputs come from emergence, and emergence, as we have already seen, comes from the statistically most likely configurations, the ones that overlap with the inherent structure of the problems themselves.
Those most likely configurations exist only because of all the possible arrangements. It takes the whole group to give what we see and experience its existence and meaning. It is like a sentence: the best configuration is a poignant word, and all the other words are what give that poignant word its meaning. Whether we are talking about a word, a sentence, a paragraph, or the whole book, the demarcations between these things are real, but they have no meaning or utility outside the higher-level group. Nature is always using the entire collection of possibilities to solve the problem. Nature selects using the group.
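The statistical point can be made concrete with a toy sketch (my own illustration, not from the text): in a system of many two-state parts, the configurations near the typical macrostate utterly dominate the ensemble, while any single arrangement is vanishingly rare on its own.

```python
from math import comb

n = 100                 # a system of 100 two-state components
total = 2 ** n          # every possible arrangement

# Probability mass of the macrostates near the midpoint (40-60 "heads")
near_typical = sum(comb(n, k) for k in range(40, 61)) / total

# Probability of one specific extreme arrangement (all "tails")
single_extreme = comb(n, 0) / total

print(f"near-typical mass:  {near_typical:.3f}")    # roughly 0.96
print(f"one extreme state:  {single_extreme:.1e}")  # roughly 8e-31
```

The "most likely configuration" contains no special ingredient; its dominance exists only relative to the full collection of 2^100 alternatives, which is the point about groups above.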
This is why no one solves a problem alone. Saying someone did is like crediting a single word with the meaning of a sentence. It is simply not possible. Attributing cause to individuals goes against any intellectually honest account of how nature functions. Strand a man on an island and his knife was still forged by others; his knowledge of shelters came from his village. Even the most isolated person depends on an ecosystem, and today that ecosystem is our economy, a massively complex web of dependencies. The idea that individuals solve problems is just not accurate.
History books, of course, tell a different story. They appease our need to impose order on chaos; we want something to point to as the cause. But root causes are fiction. There are mechanisms, certainly, but they do not operate through deterministic paths and root causes. Attribution under complexity is unscientific.
History is full of stories of innovation giants, the Einsteins, people with brilliance supposedly unlike anyone else's. We even dissect their brains to see what made them different. Of course, some people have more interest and drive, and perhaps without them a given innovation would not have happened *when* it did. But it *definitely* would have happened. Multiple realizability shows us that invention can arise in different ways and from different cultures; it is a statistical reality. Attribution goes to whoever was in the right place at the right time, and no one invents anything new without the contributions of countless others.
Progress by abstraction is automatic and inevitable, achieved by the group solving problems for the next level. It's not a story of giants. There were no giants, only shoulders. And that's not just a platitude, it's about describing human progress in a way that lines up with how nature actually works.
If you look at a successful life, however you define success, it will *look* like a designed system: the pieces seem to fit together perfectly. But those pieces emerged over time to solve a life's challenges. The structure of a life, like any natural solution, emerges out of chaos. That is why business books are so misleading. They speak as if there were a path to success, as if following the author's approach would make your life follow suit. But the real world does not have paths. Following someone else's emerged structure is impossible; there is no way to configure two complex systems the same way.
Assuming there are paths is worse than meaningless; it is damaging. Following another life as if it were a path intervenes in your own life's natural emergence. It stops what should flow.
Much of what we taste in food is narrative: not just the chemical reaction, but the story we are told about the food. A new breakfast place opens, marketing its unique recipes, but the ingredients are not doing much flavor-wise; it will probably taste like any other breakfast place. That does not work for marketing, though. People want to believe something is different and interesting.
That is why coffee tastes better in a special cup. We seek meaning, and we do so by assigning causes, even where none exist. Try convincing someone their favorite restaurant is not that different; they will disagree. People are attached to stories.
Believing your favorite restaurant is special is harmless. But in other areas, this design narrative isn't so innocuous. Think about policies. Governments try to reduce risk based on research. Healthcare funding, insurance, access to services... it all draws on scientific research. So-called evidence-based policies look to assess effectiveness and decide how to allocate resources. Public health recommendations lean on science to provide insights into disease. Researchers provide evidence on drug safety, which informs regulatory decisions.
Behind all this is the design narrative. The idea that we can use knowledge gained through experiments and apply it to make real-world decisions. It seems to make sense. Run experiments, find causal factors, improve society.
But there's that word "causal." It doesn't take much to convince someone there's a causal connection. That's why the design narrative gets away with so much. There's a sense of control. It tells us we can discover something about how the world works and then use that knowledge to build the next solution. But there's a huge disconnect between what research finds in isolation and what actually happens in the real world.
I'm not criticizing governments. I'm talking about the paradigm that rests on the design narrative. It's becoming increasingly problematic because it rests on a faulty premise: the idea that inner causal knowledge can be used to build good solutions. That approach is going to produce unrealistic and potentially dangerous outcomes.
It is easy to believe that our designs determine outcomes, because causal explanations cannot be truly validated under complexity, so they get a free pass. There is a sense of control in them. Yet isolating a piece tells us almost nothing about how the bigger system works. We have been told the pieces are causally connected to the outputs, so any post-hoc explanation can be offered, as long as it fits within the scientific paradigm.
We can always create a reasonable-sounding narrative for anything we see. We can even stitch narratives together into logical arguments. People who think the Earth is flat can make a logically valid argument: all they have to do is use premises that seem true and plausibly lead to their conclusion. But if a hidden assumption is false, the argument is unsound, however valid its form.
When explanations are given inside a broken scientific paradigm, they are essentially unfalsifiable. No amount of math or statistics can get past the logical failings; science cannot save itself from bad logic. No number of fancy models or trials can erase the difference between knowing that something plays a role and knowing what that role is.
This does not apply to obviously harmful things. If a study confirms cyanide in the water, policies should be put in place. I am talking about how problematic it is to *build* things based on the design narrative. Consider how cyanide probably got into the water in the first place: mining uses it to process gold and silver, chemicals and pharmaceuticals are made possible because of it, electroplating depends on it. All of these pose significant risks, and no study can absolutely confirm they can be used safely.
The design narrative tells us something plays a role, but it does not tell us what that role fully is; indeed, the very notion that things *have* discrete roles is itself flawed. Cyanide does not just interact with metals; it interacts with systems in countless ways. If we assume control and determinism between the smaller pieces and the bigger ones, we will keep building solutions that ultimately do more damage than good.
The structures that emerge in complex settings are the ones that come from naive action. Trial-and-error is how nature creates. There are no exceptions. The fallacy is believing that once we observe a structure, we now have the blueprint to make it ourselves.
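The trial-and-error claim can be sketched in code (my own toy illustration, with an arbitrary bit-string target standing in for "a configuration that works"): blind variation plus selection reaches a viable structure without any blueprint of the route it will take.

```python
import random

random.seed(42)  # reproducible run; the path differs with every seed

TARGET = [1] * 20   # stands in for a configuration that "works"
state = [0] * 20    # start from an arbitrary arrangement

def fitness(s):
    """How many components currently match the viable configuration."""
    return sum(a == b for a, b in zip(s, TARGET))

steps = 0
while fitness(state) < len(TARGET):
    trial = state[:]
    trial[random.randrange(len(trial))] ^= 1  # blind variation: flip one bit
    if fitness(trial) >= fitness(state):      # selection: keep what works
        state = trial
    steps += 1

print(f"reached a working configuration in {steps} blind trials")
```

Nothing in the loop encodes a path: the structure that results is discovered by selection, not designed, and inspecting the finished `state` afterward tells you nothing about the sequence of trials that produced it.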
Putting structures in place as a way to build complex things runs into the problem I call "the pattern is not the path." There is a deeply ingrained belief in education that the pieces we discover tell us how to construct things. But that runs in the exact opposite direction of complexity. Complex things do not use a path to get to their outputs.
The current paradigm tries to suggest that complexity is an ill-defined term. But it has well-established hallmarks that undeniably run up against how we think things get created, what constitutes genuine knowledge, and how our economy is fashioned.
The pieces of a system uncovered and analyzed by reductionism have almost nothing to do with the structures and behaviors that emerge in nature. Peeling back the layers of a cell will not tell you how it functions. This surprises a lot of people, and a lot of scientists would disagree, but that is because they are framing things in terms of reductionism.
I'm not saying building complex things means we have no use for rearrangement, switching pieces, or getting transitions right. Those all happen. But those decisions are being made to attend to high-level signals, rather than fitting things to some predefined structure.
Think about the difference between writing a story that follows a prescribed structure and one that simply sounds good. The latter will produce superior writing. The former will be full of interventions, because it assumes the pattern is the path: the imposed structure interferes with the natural emergence of words that truly work.
The best writing comes about not through structure but by chasing unlabeled feelings about a topic. All truly great works let their structures emerge. But that is not enough for academics. They want something precise, something systematic, a theory. They will look at writing and notice structure, like the flow from introduction to rising tension, climax, and resolution. And that structure *does* exist. The problem is when someone takes that structure and believes they now have the blueprint to create their own great work.
It is easy to fall into the trap. Why not introduce the reader to the main topics, pose challenges, and so on? If all great works share this pattern, why not structure our work accordingly? But that will always produce something pedantic. People can always detect bad design. Writing by design forces you to write things you would never say. It is attending to emotional cues and intuition that allows the right structure and content to emerge. To build as nature builds.
Imposing structure onto naturally emerging work will always interfere with the process of emergence. It must interfere because of the direction of complexity. DNA can tell you who was at the crime scene, but it can't tell you how to cure diseases or design healthy babies with intended traits. The pattern is not the path. Seeing what has emerged has no dictates on how to make that thing emerge again. The process of emergence, whereby physical abstractions are created by group selection, such that lower-level details are subsumed into higher-level constructs, does not operate via strict determinism.
Writing a book is a good example of a serious undertaking. You have to put effort into taking revelations and expounding on them at length, and the effort required is closely tied to motivation; many find it difficult to sustain inspiration long enough to finish such a large work. But this should strike us as strange. People should only write books about things they are deeply familiar with and comfortable talking about. If that is the case, why do books seem like such arduous undertakings?
This is the problem with design. The only reason people would not want to sit down and go off on topics they are passionate about is because something is getting in the way of this most natural activity. And what is getting in the way is design. When we think about a book, we are thinking about the defined construct; the thing we are told a book is supposed to be. This makes us immediately begin to question our natural impulses and frame them around designs rather than emotions.
This isn’t just about books of course. Book writing is an example of how large and difficult tasks so easily fall into the trap of design. We attempt to force our work into expected structures, only to lose the natural structure that would have emerged in the absence of design. And let’s be clear. There is no comparison between the structures one attempts to design and the structures that emerge naturally, through impassioned trial-and-error. There is deep coordination between the inner details that cannot be seen or labeled. These structures do not have names. They cannot be codified and followed by others. They can only emerge from the purposeful ignoring of preceding structures.
We often hear about AI getting more powerful. That the intelligence of our AI systems is approaching that seen in humans, at least in some specific areas. Along with this AI hype comes the idea that whatever humans have already discovered will only get better as AI gets smarter. A superintelligence should bring about new cures, since it would take whatever scraps of discovery we have currently and reach deeper insights, finding correlations and making connections humans alone could never make. After all, more smartness should lead to more innovation.
Hopefully the reader now appreciates what is wrong with this line of reasoning. First, the comparison between AI and human intelligence is largely unjustified, since intelligence cannot be measured in any scientifically honest fashion. Second, AI might represent a different kind of intelligence, not necessarily a better one. Different kinds of intelligence solve different kinds of problems. Even comparing human to animal intelligence is flawed, given that humans are not surviving against the same factors as other animals. AI is something new, not something necessarily better.
But even if we allow that AI will be, in some sense, more capable than humans, the argument that our current science and engineering will only get better has a fatal flaw. It assumes that our current approach will be extended. As I have shown, the current paradigm is itself ultimately incorrect, as it runs counter to the direction of complexity. And it is complexity that we must now build.
In drug discovery and development, AI is being used to predict how different molecules will interact, in an effort to speed up the drug discovery process. In genetic analysis, AI is being used to analyze genetic data to identify mutations and variations associated with diseases. In materials science, AI is being used to discover how new materials might be made. And so on.
But all of these examples use AI to do reductionist science and engineering. As already discussed, looking closer at a gene will tell you more about the gene, but not much about a disease. A superintelligence will not reveal a cure, because we were never on that path to begin with. There is nothing to extend if what AI must work with is disconnected from real world outcomes.
Imagine AI as the famous computer in Douglas Adams’ The Hitchhiker's Guide to the Galaxy. Named Deep Thought, this device was built to give the answer to the “Ultimate Question of Life, the Universe, and Everything.” The humorous answer was of course “42.” The hype surrounding AI imagines it as something akin to such a machine, bringing forth unimaginably powerful solutions to hard problems. If our grand question was related to human health, we might imagine AI providing us the way to cure diseases. But if I had to guess, AI’s version of Deep Thought would not produce a cure as its ultimate answer, but something closer to the spirit of “stop eating garbage.”
This is in fact a far more rigorous and scientific answer than the notion that we can design cures for diseases. I am not saying that cures are impossible, only that the best answer under complexity is to allow systems to function naturally, not to intervene with design. This is why complexity is simpler than simplicity. Decision making under complexity does not pretend to know things it does not. There are only so many pieces of information worth wielding to make the best decisions: those based on the relatively small set of universal properties that represent converged knowledge. Deciding to avoid harmful environments is a simple decision, one that likely helps prevent disease, and it is far more intelligent and rational than hoping for some designed cure.
AI’s ultimate answer will not be a cure for the same reason the human genome project has done little to cure diseases using knowledge of genetics. Using AI in science to do what science is already doing can only exacerbate the problem. We can use AI to discover new things about genes, but this discovery will never be responsible for the things we want to change; at least not without causing harmful and unforeseen side effects.
If AI reaches a true form of higher intelligence, it will realize that the pursuit of causal knowledge is the problem, and it will come up with solutions that look nothing like what the current paradigm assumes scientific solutions are supposed to look like.