Okay, so, let's talk about some interesting concepts, right? First up, procedure versus substance. In simple systems, what something *does* is pretty much the same as what it *is*. Like, a rifle? It follows specific steps to fire a bullet. That process, that's the essence of what a rifle *is*. The procedure *is* the substance, you know?
But, and this is the big but, when we get to complex systems, things get a whole lot messier. You can put a procedure in place, sure, like with AI, we give it trial and error and heuristics, that kind of stuff. Without that, it wouldn't *become* AI. But the actual substance of AI, what it inherently *is*, doesn't really have much to do with the procedures we, as engineers, implemented. It kind of... precipitates out its own internal configuration, like billions of parameters mapping inputs to outputs. The essence of what it's doing, that's not something we deliberately engineered. So, the procedure is NOT the same as the substance.
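To see that split in the smallest possible setting, here's a toy sketch in Python (numpy only; the architecture, learning rate, and step count are all illustrative, not a claim about how production AI is built). The *procedure* is a dozen lines of iterate-and-adjust; the *substance* is the weight configuration that falls out, which no engineer wrote down.

```python
import numpy as np

# Toy sketch: the engineered *procedure* is this short loop of
# trial-and-adjustment; the *substance* is the learned weights,
# a configuration nobody specified. (Illustrative only.)

rng = np.random.default_rng(0)

# XOR: the classic mapping no single linear rule can express.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

w1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
w2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights

for _ in range(10000):                    # the procedure: iterate and adjust
    h = np.tanh(X @ w1)                   # forward pass
    err = h @ w2 - y                      # how wrong are we right now?
    g2 = h.T @ err / len(X)               # gradients of mean squared error
    g1 = X.T @ ((err @ w2.T) * (1 - h**2)) / len(X)
    w2 -= 0.2 * g2                        # nudge both layers downhill
    w1 -= 0.2 * g1

print(np.round(np.tanh(X @ w1) @ w2, 2).ravel())  # ~[0, 1, 1, 0]: XOR emerged
```

Every line of the loop is ours; no line says how the network should carve up its input space. The mapping precipitates out, and it would look different under a different random seed even though the procedure is identical.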
And that, right there, shows you the massive difference between building simple stuff and complex stuff. We can't just *design* workable cities, or electric grids, or sophisticated AI systems by deciding what the *outcomes* should be. We can't just reach in and piece together its internal workings. All we can do, really, is step back and put in place procedures that are *likely* to produce what we need. It's a totally different ballgame.
Speaking of different, let's talk about learning. Think about the games kids play. There are the structured ones, right? Like a museum exhibit where you go from station one to station two to station three, following the rules. Do this, then do that, then you're done. But then there's the *other* kind of game. The playground! No rules, no order, just... play. And kids will naturally invent their own rules, their own conditions. It's kind of amazing to watch.
You know, the museum exhibit? Kids get bored. They want to change the rules, or just ditch them altogether. But on the playground? The order emerges organically. It self-assembles. In both cases there is order, but only in the playground example does the order emerge.
Now, think about how most education works. It's like that museum exhibit. There's an expected order, a design. But that order *interferes* with the natural emergence of learning. It's like we're only taking the parts we think are important, and missing all the good stuff in the so-called "distractions."
The academic narrative tells us there's a *right* order to learn things. Prerequisites, and then more advanced stuff. But that's often the *worst* way to learn. That imposed order is devoid of what you really need: the real-world context! Context only comes from seeing things you don't have labels for yet. It's way better to see something complicated and confusing, *then* see the labels emerge, than to start with those perfectly distilled summaries.
Because, really, when you learn something in order, you're only seeing the *final* results of what was originally discovered out of order, through a messy process of trial and error. We're handing down these summaries, these labels, and they contain almost *no* information without the journey it took to get there! That journey is what makes it meaningful.
So, we shouldn't be learning things in order. When things are out of order, that's when the deep context of real-world situations becomes available. The order used in education and industry? It looks neat, but it's stripped of almost everything that mattered in the creation of those rules. Without that journey, labels are just… meaningless.
Okay, shifting gears a bit. Let's talk about a famous problem in computer science: P versus NP. Basically, it asks if every problem that can be *verified* quickly can also be *solved* quickly. Sudoku is a good example. Can you verify a completed Sudoku puzzle quickly? Yeah. Can you *solve* one quickly? Maybe not.
P stands for "polynomial time," meaning those problems *can* be solved quickly. The difficulty grows at a reasonable rate. NP stands for "nondeterministic polynomial time." You can verify those quickly, but there's no known algorithm to *solve* them quickly. The time it takes explodes as the problem gets bigger.
Computer scientists are super interested in this because if P *did* equal NP, it would revolutionize a bunch of fields! Resource allocation, scheduling, logistics, AI... all those hard problems that take forever to solve, but are easy to verify? Suddenly solvable quickly! (Cryptography would be revolutionized in the opposite direction; most of it depends on those problems staying hard.)
It would give us insight into the fundamental nature of computation, and the complexity of problems, and maybe even say something about nature itself. But, here's the thing, I think the whole question is flawed. It assumes a very specific definition of "solve" that's just not how things work in nature.
The P versus NP problem revolves around deterministic algorithms. These are algorithms that follow a specific set of steps, and always produce the same output for the same input. So, if P equaled NP, it would mean that problems that can be quickly verified *also* have efficient deterministic algorithms for solving them.
And that's the problem, because complex systems, and reality itself, don't have deterministic algorithms that lead to the outputs we see. There aren't neat paths from pieces to emergent structures. Nature doesn't use algorithms; it uses a process where the whole distribution of possibilities is used to manifest physical abstractions that compute what's needed to survive.
P versus NP is using the mathematical version of "solve." There's no solving a real-world problem in the sense that some deterministic set of steps is going to magically arrive at the solution. I'd argue that we can already say P will *never* equal NP, because there can *never* be a hard problem that gets resolved by a deterministic set of steps. It's not about having more space or time; it's about impossibility. Most computer scientists believe that P doesn't equal NP, but for the wrong reasons: the belief is based on an incorrect understanding of what constitutes a hard problem.
Okay, let's switch gears again. Think about this piece of art I saw once. It was this contorted, unrecognizable shape. Above it was a light, which cast a shadow of the shape onto a flat surface below. Now, the *shadow* was the interesting part. It took on recognizable shapes, like a man walking, or a baby crawling, or an old man with a cane. The difference between that warped blob and the shadows it cast was really fascinating.
And it's a good way to think about how science functions. Science doesn't tap into the *actual* shape of nature. Instead, it projects information onto a lower-dimensional space, giving us a limited version of reality. Just as the contorted blob has no obvious features, neither does the raw intricacy of nature's phenomena. If we could reach into nature and somehow view her directly, we would not be looking at some elegant structure; we would instead see something impossibly contorted and high-dimensional, with no recognizable features. Science, in its quest to reveal how nature works, can only grab a lower-dimensional version of the original geometry inside nature's solutions. Science projects the essence of nature onto something we can understand.
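Here's a toy way to see the cost, under the loose assumption that a scientific description acts like a fixed linear projection (numpy only, purely illustrative):

```python
import numpy as np

# Toy illustration: a projection keeps only what the chosen "flat
# surface" can express and silently discards everything else.

rng = np.random.default_rng(1)

# Distinct points in a 50-dimensional space: the "contorted blob."
points = rng.normal(size=(1000, 50))

# The "light source": one fixed linear projection down to 2 dimensions.
plane = rng.normal(size=(50, 2))
shadow = points @ plane            # the legible, low-dimensional "shadow"

# The loss is one-way: shift the blob along a direction the plane
# ignores and you get a very different object with the *same* shadow.
ignored = np.linalg.svd(plane.T)[2][-1]   # a direction in the plane's null space
moved = points + 5.0 * ignored            # a different high-dimensional blob...
print(np.allclose(moved @ plane, shadow)) # ...identical shadow: True
```

No analysis of the shadow alone can tell the two blobs apart. That is the degradation at issue: the projection is understandable, and being understandable is precisely what it costs.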
But this projection comes at a cost. There's a *massive* degradation in the information content when moving from nature's original setting down to the flat surfaces we use to describe the world. And yet, people often use science and nature interchangeably!
That's why humans evolved to use their emotions to solve challenges. Emotions are the closest we can get to whatever exists in the high-dimensional spaces of nature. Nothing in our scientific arsenal can grab onto nature's true core, because science must use low-dimensional tools and costly precision to describe what it sees. If humans were meant to use slow, analytical thinking to solve problems, that is how we would have evolved; we did not. Slow thinking only works for games, not reality. As the age of complexity un-gamifies life and the things we build, slow analytical thinking will be far less valued.
Science recasts experience onto low-dimensional planes of interpretation, and that works its way into our designs. A design represents our decisions about what pieces to include and how to connect them. We can't design without some sense of causal structure, and only a simplistic story can tell us what those causes supposedly are. So, if we're using science to inform our designs, we're choosing pieces from low-dimensional projections.
And *that's* why building complex things can't really benefit from the current scientific and engineering paradigm. Science, with its reductionist take on the world, sacrifices too much of what makes something tick, all for the sake of apparent rigor and precision. Most of today’s engineering grabs onto the pieces discovered by science and forces them into designs, guiding our efforts to build. The severe disconnect between how today’s science operates and the complexity we must now create makes this approach untenable.
A design cannot grasp the essence of complex systems, any more than reductionist science can tap into the true essence of how nature works. We can project a fanciful version of reality onto narratives we understand, but when folded into our designs these fairy tales run up against our ability to craft good solutions to hard problems.
As we move into the age of complexity, we need to build as nature builds, and that means engineering emergence into our solutions. What emerges comes from a level of intricacy that cannot be fashioned deliberately. Only through the external focus on variation, iteration and selection can we arrive at the essence of how natural systems function. Being right when it comes to building complex things means having something that solves the problem, not something that adheres to the stories we tell ourselves using shadows and tricks of light.
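As a minimal sketch of that recipe (variation, iteration, selection), here's the classic toy in Python, with an arbitrary target string standing in for "what survives"; everything about it is illustrative. No step in the program computes the answer, yet the answer reliably arrives.

```python
import random

TARGET = "engineering emergence"            # stands in for "what survives"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Selection pressure: how many characters currently match.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Variation: occasionally swap a character for a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

# A population of random guesses -- no path to the answer is encoded anywhere.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

generation = 0
while TARGET not in population:
    generation += 1
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]             # selection: fittest half survives
    population = survivors + [mutate(s) for s in survivors]  # iteration

print(generation)   # typically a few hundred generations, varying run to run
```

The program never "solves for" the target; it only varies, selects, and repeats. The external loop, not any internal deterministic path, is what does the work.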
Alright, let's go on to Artificial Intelligence. It was inevitable. AI is the natural byproduct of adding ever more pieces to our creations until we run up against the hard problem threshold: the point where traditional engineering cannot solve the problem, and one must step outside the system to achieve what is needed.
It doesn't matter how close today's AI is to human intelligence. The point is, AI is our current best example of building as nature builds. Regardless of the designed intentions of AI researchers and engineers, AI represents the creation of genuine complexity. AI doesn't operate because of mathematics, probability, design principles, or best practices. It operates because unplanned abstractions manifest inside an object that was allowed to arise on its own.
There's been a long debate about whether the brain can be considered a machine. The human brain, and the mind that accompanies it, is nothing like what most people would call a machine. But with AI, we come face-to-face with the fact that human creations can indeed take on many of the same properties we see in human cognition. This means it is the definition of machine that must change. The human brain is a machine, just one that is unlike anything reductionist science or engineering can define, let alone build directly.
For something to be a machine, it must enact processes and produce outputs. That is, of course, what the human brain does. The difference now is that the process is not one of determinism and causality. A machine of nature is one that produces outputs via emergence.
The human brain has all the hallmarks of complex systems. It exhibits nonlinearity and self-organization, operates near criticality, and has a hierarchical structure. All of these lead to what we call consciousness and the associated memory formation and decision-making that define human thinking. This is all that is necessary to define the human brain rigorously. The quest to pin what we notice in the real world (e.g., behaviors) to specific locations in the brain is epistemically untenable.
AI is inevitable because it's part of nature. We don't need to compare AI to human intelligence to mark some so-called singularity: the point where AI becomes as smart as or smarter than humans, whatever that means. All we need is to recognize the properties of complexity that emerge in the systems we create. Many of these properties are already there in today's AI systems. It is the existence of unique properties that makes a complex thing what it is.
Today's scientific and engineering paradigm forces outdated arguments to remain prevalent in the current discourse. Take those put forward by American philosopher John Searle, which attempt to discredit the notion that a computer running a program can truly understand language. Searle argued that computational processes alone cannot produce genuine consciousness because manipulating symbols, as computers do, lacks the semantic comprehension that constitutes true understanding.
But such arguments are invalid because they are based on an improper understanding of computation. Searle's thought experiments rest on the deterministic and causal connections between symbols, and assume that this form of processing is all a computer is capable of. This is patently false. One might forgive Searle for this blunder, given the period he grew up in. But any proper look at how nature functions must concede that machines can indeed produce the properties of complexity. Nature uses small pieces to produce things entirely unlike those pieces.
A popular attempt to discredit AI is to point out some of its more egregious mistakes, such as reaching erroneous conclusions or its intermittent lack of basic reasoning. The fatal flaw in this attempt is thinking that intelligence is something housed inside a single object. Humans are intelligent because we are deeply connected, social creatures. We operate inside populations to solve problems. An individual human is highly error-prone and faulty outside the environments they are embedded in. We are not self-sustaining apparatuses that produce perfect outputs; we are social creatures that interact and collaborate. Just as no single human operates without error, AI systems are not supposed to be error-free tools that always return correct answers. They are meant to be like any other complex object: embedded inside communities. If we are going to compare AI to humans, then we should be thinking of AI systems as another person working in collaboration, not as some dumb search engine returning answers.
There is an undeniable equivalence between AI and humans. Both are objects of complexity, and both produce outputs by emergence. The meat of AI systems, the core machinery that sits apart from its outer rules-based scaffolding, is a programmatic byproduct of trial-and-error and heuristics. It is a byproduct of nature's recipe conducted by software.
Accepting complexity, comprehending what it is, and more importantly what it is not, changes our fundamental understanding of the nature of knowledge and reality. Yes, nature's solutions are indeed machines, and yes, humans can create machines using the same methods as nature. Nature is all about information processing, and information processing can be harnessed by silicon and electrons. But we're not talking about cogs and pistons. We're talking about nature.
Alright, let's end with a slightly different thought: science and engineering need philosophy. A core problem with the current paradigm is how it has largely walked away from philosophy. There's been a belief, particularly in physics, that philosophy doesn't contribute much to the pursuit of new knowledge. But that prevents any kind of validation of science itself. A science with nothing to keep it in check is... circular.
We can see this problem in all areas of science. Theoretical physics has spent the last 40 years chasing "mathematical elegance" with little to show for it. Genetics brags about "advances," but shows little new ability to control outcomes (e.g., cure diseases). Nanotechnological advances rest on peering deeper and manipulating smallness better, but where are all the new materials and devices? Just because there's "plenty of room at the bottom" doesn't mean manipulating the bottom can produce known things at the top. This isn't a matter of waiting for science to become useful; it's a matter of today's paradigm being fundamentally disconnected from how things actually work.
In any vocation, if we can't step outside the system, we can't validate what we're doing. Only a meta-level view of a system can speak about that system and lead to genuine validation. A philosophy of science can look at science objectively and assess whether it's going well. Any honest look at science today would be highly critical of its reductionism and causal explanations. We know that such low-level reasoning doesn't map to how complexity, and thus reality, works. It's time for science, and the engineering that accompanies it, to change in dramatic fashion.
That's why I argue for the use of logic, but done properly. Our logic is only as strong as the premises we use, and those premises can't ignore the properties of complexity. We *must* account for the fact that what we see at the small scale doesn't map to what we see at the larger scale. What is in scientific vogue today is the notion that we can recast nonlinear, complex systems into simplistic linear terms. This undergirds virtually all attempts in science to approximate the behavior of real-world systems. But nature isn't an approximation. Its machinery is nothing like the convenient calculations made today under the current paradigm.
But logic paired with premises properly fashioned around the properties of complexity means we can validate science correctly. And in this validation we will find the utmost invalidity. Today's scientific and engineering paradigm does not hold up to what we measure, observe, and experience in the real world. What we have now are circular arguments resting on the hidden and bogus assumption that the small connects to the big.
A philosophy of science would be a two-way relationship. Not only does philosophy help keep science in check, but the only worthwhile philosophies are those that come from the effort to build things. A proper philosophy can't be divorced from the practical application of things. All the truths of history are contained in the moments of creativity. It's those who build things who speak to nature, not those who attempt to distill those findings into narratives about determinism and causality.
The philosophy that science needs isn't some academic theory. Those only speak to their own elegant wordings and schooled rhetoric. They're as circular as today's science. The philosophy we need now is one based on the building of things. Not some praxis relegated to the lab, or some idealized attempt at producing evidence. Only a philosophy born out of the creation of things is valid.