
Let's talk about a new beginning: a different way of educating. We've seen how the education system is shaped by the design narrative. We assume students must learn concepts and facts so they can later use them to solve problems, that what you learn in school *actually* equips you to design solutions once you graduate.

But the problem is that everything is disconnected. It's like trying to build a house from a pile of random bricks with no blueprint. It doesn't work. The only knowledge that *really* matters when building something complicated is what you figure out *after* you start. And it's often better to approach the problem as a newcomer, relying on trial and error, than to force-fit what a textbook taught you. That kind of intervention only slows things down.

Everything we learn in school rests on this idea of "design." We learn about atoms, genes, and history, and the implicit promise is that we can use this knowledge to make the world better. We're told there's a direct link between knowing these things and building solutions: learn spelling and write something amazing; learn calculus and design better machines; learn civics and fix the government.

But that knowledge doesn't translate when you're dealing with something *complex*. Think of a book. It has characters, settings, plot twists, and themes, all interacting. The story that comes out isn't something the writer planned perfectly from the start. It emerges from all those pieces colliding, often in ways no one can predict. Good writing doesn't happen when you force it into a rigid structure. It happens when you start writing and let your intuition guide you.

And that's how we should be building technology now, too. The gap between school and the real world used to be merely annoying. But when we're trying to build something *really* complex, we can't rely on textbook knowledge.

The technological solutions we need today are more like books or works of art than assembly-line machines. Knowing about forces and energy won't, by itself, produce the next big invention. Understanding civic engagement won't automatically fix the government. The solutions we need have to be like nature's solutions: complex, messy, and built through trial and error.

Now, I'm not saying we should throw out everything we've learned. It would be absurd not to teach humanity's greatest achievements to the next generation. Seeing what's been done inspires people to make their *own* discoveries, and it keeps us from reinventing the wheel. So how do we teach what we know without falling into the design-narrative trap?

We have to rethink what "learning" even *means*, and that brings us back to redefining "knowledge." Only high-level principles and sound reasoning remain effective when things get complicated. These principles aren't about causes; they're about properties. They're not instructions; they're signals that tell us we're on the right track. We can build complex things by focusing on these properties precisely because the properties are decoupled from whatever inner workings produce them.

Think about atoms. We should definitely teach about atoms. But knowing about atoms doesn't automatically tell you how to make effective drugs or new materials. That sounds counterintuitive, but remember the design narrative: it's easy to convince yourself that your actions have a direct, causal connection to the final result.

Design gives us an illusion of control. It tells us we can understand how the world works and then use that knowledge to solve problems. But there's a huge disconnect between what we find in a lab and what happens in the real world. The fact that drug companies use systematic approaches doesn't mean they're *engineering* health outcomes. There are always side effects.

The only real proof is whether something *works*. Does the drug relieve the headache? Is the material strong? We don't need to understand *why* it works. It works because it survives testing, safety checks included. Any explanations we give are a story we tell *afterward* to justify what we did.

So the only things *really* worth knowing are, first, that we're building something complex, and second, that it actually *solves* the problem. For drug discovery, it's not about knowing atoms causally, as though knowing atoms *leads* to drug design. It's about knowing the properties that show complexity is being achieved, and confirming that the result works.

We need to be doing trial and error, not following a blueprint. Drug discovery would be better off if it worked more like writing a book. You mix chemicals, and patterns start to emerge. Those patterns tell you whether you're on the right track: you look for properties that suggest a compound might be effective. These properties aren't specific to *one* drug; they're common to *many* drugs in that category.

For example, you'd look for consistency in the compound's appearance, such as color and texture, to confirm purity and stability. You'd check whether it dissolves in both water and oil, which matters for how it gets absorbed into the body. You'd measure how fast it dissolves, how it handles light and temperature, and whether it shows a clear dose-response relationship, which indicates its toxicity and effectiveness. You might also check whether the compound appears to target specific symptoms.
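The checks above can be sketched as a property filter. This is a minimal illustration, not real pharmacology: the compound records, property names, and thresholds are all hypothetical, chosen only to show properties acting as *signals* of a workable candidate rather than causes designed in.

```python
# Hypothetical screening sketch: keep candidates whose measured
# properties signal that trial and error is on the right track.

candidates = [
    {"name": "cmpd-A", "water_sol": 0.8, "lipid_sol": 0.6,
     "stability": 0.9, "response": [0.1, 0.3, 0.6, 0.8]},
    {"name": "cmpd-B", "water_sol": 0.2, "lipid_sol": 0.9,
     "stability": 0.4, "response": [0.5, 0.4, 0.6, 0.2]},
]

def monotonic_response(r):
    """A clear dose-response: effect rises with dose."""
    return all(b >= a for a, b in zip(r, r[1:]))

def promising(c):
    """Properties that signal a workable compound (thresholds invented)."""
    return (c["water_sol"] > 0.5          # absorbs in aqueous environments
            and c["lipid_sol"] > 0.5      # crosses lipid membranes
            and c["stability"] > 0.7      # survives light and temperature
            and monotonic_response(c["response"]))

hits = [c["name"] for c in candidates if promising(c)]
print(hits)  # → ['cmpd-A']
```

Note that the filter never asks *why* a compound passes; it only records that the properties common to working drugs are present.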

Now, the temptation is to design these properties into the drug from the *start*, using chemical knowledge. You could design a molecular structure that minimizes variation in melting point, solubility, and stability. You could choose functional groups known to dissolve well in water and oil. You could control particle size to optimize how fast it dissolves, select chemical bonds that resist degradation, use structure-activity relationships to predict the dose-response, and apply what's known about molecules and targets to make your compound interact with specific ones.

It all *sounds* reasonable. We *do* know how to control for all these things. So why *not* use that knowledge?

Well, it's like writing a book according to a rigid formula. You'll get *something*, but it will probably be flat and predictable, and the parts of the solution won't work together. Drug discovery's successes have been real, but the design narrative around them is mostly made up. The reliance on design in drug discovery probably causes *more* problems than it solves.

Properties aren't something you design *into* a system. They're something you *notice* after trial and error produces something that works. Yes, you can design those properties into a solution, and you'll get them: the drug will be consistent, soluble, and all the rest. But it will also carry side effects that make it barely worthwhile.

Pharma is just one example. The point is, to create complex solutions, we need to use properties to *signal* that trial and error is working, not to *design* the system as a causal chain of properties. We can make better treatments and materials by focusing on successful trial and error, not on designing specific outcomes.

But if we don't need atomic knowledge to make drugs, why bother learning about atoms at all? Because the value isn't in *using* atoms to make better things. It's in studying atoms as examples of how certain systems organize themselves and behave.

The knowledge we've gathered shouldn't be seen as building blocks for the future. It is a collection of nature's solutions, each exhibiting universally true properties. Knowing those properties helps us validate that our own efforts are working.

Knowing that electrons occupy specific energy levels in an atom might suggest a useful approach for robust digital communication, where well-defined states create signal integrity. Weak bonds between atoms, which allow flexibility in materials, might suggest a better way to achieve collaboration while staying flexible. The way outer electrons dictate atomic behavior might be the key to effective interaction protocols in highly connected systems. The arrangements seen in atomic lattices might be applicable to urban planning.
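The first analogy, discrete energy levels suggesting robust digital communication, can be made concrete. The sketch below is an illustration of the general principle, not a model of atoms or of any real protocol: the allowed levels and noise values are invented. Restricting a signal to well-defined states means moderate noise can be absorbed entirely by snapping back to the nearest state.

```python
# Sketch: well-defined states create signal integrity. Noise smaller
# than half the spacing between allowed levels is fully corrected.

LEVELS = [0.0, 1.0, 2.0, 3.0]  # hypothetical allowed signal states

def snap(value):
    """Recover the intended state from a noisy measurement."""
    return min(LEVELS, key=lambda lv: abs(lv - value))

sent = [2.0, 0.0, 3.0, 1.0]
noisy = [2.3, -0.4, 2.8, 1.2]        # corrupted in transit
received = [snap(v) for v in noisy]
print(received == sent)  # → True
```

The property doing the work here is discreteness itself, which is why the same trick shows up at very different scales, from electron shells to digital voltage levels.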

So, studying atoms *is* worth it, but not to connect them causally into larger systems through design. Studying atoms is important because they show universal properties that nature follows, and similar systems at different scales will probably operate under similar constraints.

We need to ditch the design narrative when it comes to teaching future generations. The isolated pieces of knowledge in textbooks aren't paths to real-world solutions. They're examples that exhibit universal properties, properties that many other systems will share. It's knowing those properties, combined with our natural capacity for trial and error and for reasoning well, that will lead to the best solutions.

We're Supposed to Be Doing Alchemy

People often say that AI research today is more like alchemy than actual science. And they might have a point. Progress in AI doesn't come from careful planning or from breaking things down into smaller pieces. It comes from adding more data, using more computing power, and mixing and matching approaches until something works, which drives traditional scientists crazy.

But, by now, hopefully you can see that this kind of messy work is *exactly* what AI needs. AI is getting closer to real complexity, which it *has* to do to solve big problems, and you can’t create real complexity by design. Traditional science, with its focus on cause-and-effect, isn’t going to make AI better.

AI has been successful because deep learning takes a different approach from science, statistics, and rules-based software. Like cities, power grids, and the economy, AI doesn't get its best results from careful engineering. It creates what it needs through a process similar to how nature solves problems.

But the design narrative sneaks back in. This ad-hoc way of engineering AI feels unsophisticated, so researchers want to *design* neural networks.

But we should be doing alchemy! We should be mixing things together and seeing what happens. That's actually a *more* scientific approach than pretending to have information you don't. Instead of pretending to know everything, you stay outside the system and observe what happens. You let nature find the solutions.
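This "mix and see what happens" stance has a simple computational form: random search over configurations. The sketch below is a toy, not a description of any real AI system. The `evaluate` function stands in for "does the model work," and its closed form would be unknown to the experimenter; the configuration space is invented for illustration.

```python
# Alchemy as search: sample configurations at random, keep only what
# survives evaluation, and never derive the answer from first principles.
import random

random.seed(0)

def evaluate(cfg):
    """Hypothetical stand-in for testing whether a model works."""
    return -(cfg["lr"] - 0.01) ** 2 - (cfg["width"] - 128) ** 2 / 1e4

best, best_score = None, float("-inf")
for _ in range(200):                      # mix, test, repeat
    cfg = {"lr": random.uniform(1e-4, 1e-1),
           "width": random.choice([32, 64, 128, 256])}
    score = evaluate(cfg)
    if score > best_score:                # keep what survives testing
        best, best_score = cfg, score

print(best)
```

The experimenter stays outside the system: nothing in the loop explains *why* the winning configuration works, only that it survived more trials than the alternatives.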

Of course, alchemy never made gold, but you get the idea. Stepping back and letting nature work isn't unscientific; calling it so is a straw man used by scientists stuck in an old way of thinking. Part of this shift is embracing the spirit of experimentation that drove our ancestors to try to turn something ordinary into something precious.

Redefining Rigor

Being rigorous is important in science and engineering, no doubt. It helps us make sure that what we discover and build is reliable and valid. It gives us a foundation for future work and earns the public's trust.

But what we think of as "rigorous" is becoming a problem. Scientists and laypeople alike see formal expressions, like mathematical equations, as more rigorous than words or diagrams. Math feels more solid. But that precision has a downside we ignore: when we express things with equations or causal calculations, we lose much of the context that gives them meaning.

We still need concreteness. We need to know that what we measure and experience fits into realistic models that help us make better decisions.

Proper rigor under complexity means building logical arguments on timeless properties, not causes. But that alone doesn't satisfy our need for causality and determinism. Yes, properties are constraints that complex things must obey, and they help us reason about whether we're on the right track. But they're also connected to one another; they're not isolated pieces.

Let's go back to the book example. Books have a narrative structure: a setup, rising challenges, and resolutions. As I've said, you shouldn't use this structure to *guide* your writing, but it *is* a sign that your writing is going well. The parts of that structure are properties of good writing, and they're also connected: the setup leads to tension, tension to a climax, and the climax to a resolution.

That narrative structure is like a framework, with properties that are causally connected. If we shift our focus to what we know *outside* the system, we can still use causality and determinism.

This is properly placed concreteness, unlike the *misplaced* concreteness we get from trying to break things down too much. Used at that high level, causality becomes a powerful tool for understanding what you're building. There is structure and reason at that level. It's a more realistic way to be rigorous about complex things.

We need to redefine rigor as what exists *outside* the systems we create. That's where humans first learned about the causality and determinism that our self-awareness depends on. Not in the inner workings of deterministic systems, or fancy symbols, but in the complex world we learned to survive in. Maybe our greatest discovery will be realizing that progress, at its best, brings us closer to where we started.
