Let's talk about coming full circle, because it comes down to the skills we have. For ages we have rewarded detailed, hard skills: knowing exactly *how* something works. Now that AI is automating many of those skills, we are forced to rethink how humans create value.

It is almost as though, for the first time, our tools are bringing us back to our natural abilities. Instead of going ever deeper into detail, we are moving toward a different kind of abstraction.

Today's technologies embrace complexity, and that brings us full circle. The skills that matter now are the ones we evolved for, the ones we have always relied on. Humans are remarkably good at dealing with uncertain situations; we use intuition and pattern recognition to solve hard problems. The approaches we prized during the Enlightenment don't work in complex situations. Their focus on isolating things simply doesn't match how nature works. We have to build things the way nature builds them.

Think about spell check and autocomplete. Some people say they are making us dumber because we no longer have to worry about spelling. But we were never *supposed* to worry about spelling. For the most part, a misspelling doesn't change the meaning of what we are trying to say. The effect often called typoglycemia, where readers understand text even when the letters inside words are scrambled, shows that the fine details of writing aren't what carries understanding. Spelling is an academic concern, not a natural one. What we *should* be doing is getting our thoughts out and expressing what we mean, and that is exactly what autocomplete helps us do. If we don't have to spend effort on spelling, we can simply communicate.

Spelling and grammar arise naturally, but the pattern is not the path. The way things end up looking is not the way to get there. That is why languages change over time; syntax and grammar drift. Language is a living thing that adapts to what its speakers need. We are not supposed to be focused on spelling and grammar; those are side effects of real communication. We are supposed to be solving real-world problems. The best grammar comes from people speaking naturally, driven by what they are trying to say, not from an academic rulebook.

The technologies we create will take away many skills that people have valued for a long time. But they will also let us lean on what we are naturally good at, and those are the skills that can solve our biggest challenges.

Are we building the *right* thing, though? We cannot really know within our own lifetimes; only time will tell. Simulations are not realistic enough to settle the question, and one life is not long enough to prove success.

Here is the point. I have argued for focusing on properties rather than causes, and for making logical arguments based on those properties. What makes properties so important is that they are timeless. Unlike causal explanations, they do not fall apart. Properties are the regularities nature follows, and they are strong because they exist outside any specific instance of nature's solutions.

Even though we cannot know for certain whether something will survive in the long run, we can use properties to check whether we are on the right track: nonlinearity, self-organization, adaptiveness, resilience, feedback, hierarchy, criticality, periodicity, synchronicity, and phase transitions.

These are the signatures we observe when systems become complex; they tell us that genuine complexity is present. We cannot engineer them directly. They can only come from nature's trial and error. What we *can* do is create the initial setup that allows variation, iteration, and selection to happen, and then steer with high-level heuristics.
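As a rough sketch of what "setting up variation, iteration, and selection" might look like, consider the toy loop below. The bit-string candidates, the mutation rate, and the scoring heuristic are hypothetical stand-ins of mine, not anything from the text; the only point is that we author the loop and the selection pressure, never the solution itself.

```python
import random

# A minimal variation / iteration / selection loop (illustrative only).
# The "problem" is a stand-in: evolve a bit string toward a high-level
# heuristic score. Nothing about the final solution is specified up front.

TARGET_LENGTH = 32
POPULATION_SIZE = 50
GENERATIONS = 200

def random_candidate():
    return [random.randint(0, 1) for _ in range(TARGET_LENGTH)]

def mutate(candidate, rate=0.05):
    # Variation: small random changes, with no knowledge of the "right" answer.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def heuristic_score(candidate):
    # Selection pressure: a high-level heuristic, here simply "more ones".
    # We judge outcomes; we never dictate which bits flip or when.
    return sum(candidate)

population = [random_candidate() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    # Iteration: score everyone, keep the better half, refill by mutating survivors.
    population.sort(key=heuristic_score, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=heuristic_score)
print("best score:", heuristic_score(best))
```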

Think about writing a book. The academic way is to deploy literary devices and formal technique from the start, but doing that can only hurt the work. We should follow our instincts instead. Don't try to get the right words; try to get the right *feeling*.

Only by letting our creativity emerge can we find original ideas and insights. Our work should surprise us. We should see things revealed along the way, things that came about on their own because of what we did.

The details of our work should surprise us, but the properties we see in good work are expected. Something new may look different from anything we have seen before, yet it will still follow properties we know and expect. We cannot use those properties to *force* our work, though; they have to come naturally. Only when the signs of complexity show up do the details work together to create something right.

Looking for surprise in our work is a way of paying attention to the signs of complexity, and that is how we can know whether what we are creating is good. At the beginning we have our deep feelings and experiences, but they carry no labels; there are no words or categories to put them in. How those feelings get expressed is something we can only see later on.

We see nonlinearity in how ideas come to us, and self-organization as the content improves with each pass. We see self-reference and feedback loops as new ideas change our original wording. We see resilience in the parts that stay, and hierarchy as words become paragraphs, paragraphs become sections, and sections become chapters. There are phase transitions as disconnected thoughts become clear over time.

This is true for anything we build in the age of complexity. If we want to make the next big thing in AI, we cannot just follow best practices or copy what the best model is doing. That can only hurt our chances of getting the emergent structures and behaviors we need.

Complex systems can achieve the same results in different ways, and they have to be arrived at differently than before. That keeps our attention on the high-level properties rather than on some particular set of rules or designs. We cannot reach in and design the content of our work; that will not lead to the next level of abstraction.

So, is design dead? Do we just give up on controlling what we build? Under the current idea of design, yes. Design as we know it does not work under complexity. If we want to build complex things, which we must in order to solve hard problems, then we have to let go of design as we know it.

But there is a new kind of knowledge, based on the meta-level properties that complex things always follow. That means there is still a place for design, just a different kind: a way of thinking about how to set things up to get good results, and how to know whether those results are valid.

This new design has to be external to the systems we create. The steps used to create complex things have little to do with what the things actually are. So designs can still be made up front, provided they only put in place the processes that lead to emergence. The idea of design, using structure to guide outcomes, can still work if design stays outside the system.

To meta-design something means choosing the parts and connections only at the meta level. It is like making better arguments by basing them on properties instead of reasons. We can look at the problems we want to solve and create meta-level constructs that lead not to specific answers but to systems that will either survive or not. That shifts the focus from causal reasoning to making things that survive. We need to engineer systems that find what is needed on their own, but in ways that we expect.
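Here is one way to picture the contrast in code, again as a hypothetical sketch rather than anyone's actual method. The only things the designer authors are a perturbation step and an external pass/fail survival test applied to whole candidate systems; the internal configuration that ends up surviving is never specified or inspected. The representation and the test are my own stand-ins.

```python
import random

# Illustrative meta-design sketch: we write (a) how whole candidate systems get
# perturbed and (b) an external survival test. We judge outcomes, not parts.

random.seed(1)

def perturb(system):
    # Meta-level variation: nudge the whole configuration, blind to "correct" values.
    return {k: v + random.uniform(-0.5, 0.5) for k, v in system.items()}

def survives(system, environment):
    # External survival test: the system's response must stay within what the
    # environment tolerates. Nothing inside the system is designed directly.
    response = sum(weight * environment.get(k, 0.0) for k, weight in system.items())
    return abs(response) <= environment["tolerance"]

environment = {"load": 1.2, "stress": -0.7, "tolerance": 0.25}
candidates = [{"load": random.uniform(-1, 1), "stress": random.uniform(-1, 1)}
              for _ in range(200)]

survivors = [s for s in (perturb(c) for c in candidates) if survives(s, environment)]
print(f"{len(survivors)} of {len(candidates)} candidate systems survive the external test")
```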

Science and engineering have always been related but separate. The story is that science makes the discoveries and engineering turns them into tools. Science is the foundation for engineering because it provides the theory and principles that engineers use.

We are told computers would not exist without quantum mechanics. We are told aerospace parts, electronics, construction materials, and biomedical implants exist because of materials science. We are told machines were made possible by theories of statics, dynamics, and fluid mechanics. We are told civil engineers could not build without knowledge of physics and geology.

It all seems to make sense. Engineers are trying to create things that work, and things only work when underlying forces work together. Science is where that knowledge comes from, so science and engineering go hand in hand, right?

But this runs against the direction of complexity. The lack of a path from pieces to properties in complex things means the science-leads-to-engineering story does not hold. Whatever science finds has little chance of being used inside complex solutions. We see this in fields like genetics and nanotechnology. Now that we want to create complex things with emergent outputs, science cannot provide the building blocks.

What works now are structures that emerge, and those structures can only be arrived at through external efforts, not reductionist discoveries. The way we run scientific experiments, isolating pieces to make a discovery, is different from what we need in order to build. Any knowledge gained that way ends up being self-serving rather than true.

Engineers have to stumble across insights through their external efforts, and only then find the discovered truths behind them. That is how scientific discovery has always happened. Despite the claim that fundamentals lead to applications, it is those who try things out naively who find what later gets written into textbooks. The real story is not science leading to engineering but engineering leading to science.

The reason the academic story has lasted this long is that the things we have built have almost all been deterministic. When human inventions can be explained in terms of internal causality, credit can be given to scientists. But when the things we build are disconnected from the causal explanations of science, that story no longer works.

Someone might say science is at least a good starting point, but those starting points are likely to slow us down. Starting a project from reductionist knowledge locks us into bad patterns, because any structure that does not emerge does not fit how complex systems work. Designs we force into our projects actually make systems weaker. Design cannot be excused as mere motivation when it gets in the way of how complex systems operate.

This does not mean science is unimportant. The properties that complex systems follow, and which I argue should be the basis of our thinking, come from scientific discovery. What makes properties different from causal explanations is precisely that they are not causal. Properties are high-level truths about nature that do not care how any particular instance came to be, and they apply to all examples of a complex system. That is real science, because it does not pretend to know things it does not know.

Science never really informed engineering, but in the age of complexity, discovered properties can tell us when we are on the right track. That is the kind of scientific knowledge that can validate our efforts to engineer complex things. But it has to be applied after the fact, once structures and behaviors have already emerged.
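A toy illustration of what an after-the-fact check might look like: given measurements taken from a system that has already been built and run, compute crude proxies for two of the signatures listed earlier. The metrics below (a heavy-tail ratio as a rough stand-in for criticality, lag-1 autocorrelation as a rough stand-in for feedback) and the sample numbers are my own assumptions, not established tests from the text.

```python
import statistics

# After-the-fact signature check (illustrative proxies only).

def heavy_tail_ratio(sizes):
    # Crude proxy for criticality-like behavior: how far the largest event
    # sits above the typical event. Gaussian-ish data keeps this ratio small.
    return max(sizes) / statistics.mean(sizes)

def lag1_autocorrelation(series):
    # Crude proxy for feedback: do successive outputs influence one another?
    mean = statistics.mean(series)
    num = sum((a - mean) * (b - mean) for a, b in zip(series, series[1:]))
    den = sum((x - mean) ** 2 for x in series)
    return num / den if den else 0.0

# Hypothetical measurements observed from an already-running system.
observed_event_sizes = [1, 2, 1, 3, 2, 1, 40, 2, 1, 5, 2, 90, 1, 3]
print("heavy-tail ratio:", round(heavy_tail_ratio(observed_event_sizes), 1))
print("lag-1 autocorrelation:", round(lag1_autocorrelation(observed_event_sizes), 2))
```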

If we want to build the next big language model in AI, one of the most complex things ever created, then telling people to follow inner principles will only slow progress. Today's AI systems are becoming complex; we know this because simple systems cannot produce the signs of complexity. But the outer principles of complex systems can signal that we are on the right track. The difference is working inside versus outside: only external, meta-level efforts can let humans engineer emergence.

In the coming age of complexity, where we need to build complex things to solve our problems, science and engineering need to become one. The recognized way to gather knowledge becomes building things naively, and then treating those discoveries as worthwhile knowledge that can signal effective building. Bringing science and engineering together ensures that the direction of complexity is respected.

We are supposed to have biases. The current thinking is that biases are bad: if you believe real-world situations have root causes, then biases corrupt the picture. Racial biases affect medical treatment, hiring decisions, judicial decisions, and financial opportunities, and they damage the integrity of scientific research. If biases are not controlled, outcomes are unfair and conclusions untrue. That is why going meta is so important: it brings together different opinions, pieces, and approaches to see something more hidden, more true, less biased.

But human bias is not just some leftover from evolution to be discarded. We have biases for evolutionary reasons. When evolution keeps something around, it is because it solves hard problems in complex environments. Getting rid of human bias must therefore be the wrong goal, especially now that we need to create complex solutions.

The problem is not the bias; it is the lack of group selection. When biases affect medical settings, it is because individuals are giving the treatment. Individual treatment is not all bad, since people's unique experiences, training, and perspectives are often gleaned through one-on-one interaction. But consider pain management. That is a hard problem because it works with a complex system: the human body. The problem of pain management cannot be solved by one person. Nature solves problems by selecting groups, so that the (n - 1) level of pieces creates a configuration that solves the aggregate challenge at the (n) level. Here, the (n - 1) level is the different healthcare practitioners with their unique biases, and the (n) level is the solution to pain management that emerges from them.

We should not expect one person to make a good decision about how much medication to give. Those decisions have to arise naturally, from a collection of biased pieces creating something no one person could create. Just as the wisdom of crowds leads to more accurate information, groups lead to solutions to hard problems. The truth is that individual biases are needed to create unbiased results.
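A small numerical sketch of that wisdom-of-crowds point, with my own assumptions made explicit: each practitioner carries a persistent personal bias plus noise, and the biases are diverse rather than all leaning the same way. Under those (assumed) conditions the group aggregate lands closer to the truth than a typical individual does. This is a toy model, not a clinical one.

```python
import random
import statistics

# Toy wisdom-of-crowds illustration (not a clinical model).
random.seed(0)
TRUE_VALUE = 10.0          # the quantity everyone is trying to estimate
N_PRACTITIONERS = 100

# Each practitioner carries a fixed personal bias (diverse directions)...
biases = [random.uniform(-3.0, 3.0) for _ in range(N_PRACTITIONERS)]
# ...and produces a noisy estimate shaped by that bias.
estimates = [TRUE_VALUE + b + random.gauss(0, 1.0) for b in biases]

individual_errors = [abs(e - TRUE_VALUE) for e in estimates]
group_error = abs(statistics.mean(estimates) - TRUE_VALUE)

print("typical individual error:", round(statistics.mean(individual_errors), 2))
print("group (mean) error:      ", round(group_error, 2))
```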

Trying to rid individuals of bias is a mistake. Biases are needed to help the group solve problems. We are supposed to have biases so that we can tease out the different sides of truth in nature's complex reality. Just as a meta model tries to find something deeper and more universally true than any one model can reveal, people are meant to work in a meta fashion.

The important thing is to realize that the best way people can work in a meta fashion is by building something that works. Building something that works under complexity only happens when we set the target to an external, meta-level goal. Then the internal dynamics of the system, biases included, arrange themselves to solve hard problems.
