Chapter Content
Okay, so, Chapter 9. Let's dive in. It's called "Life is Not a Game," and it's about how we use math, maybe *overuse* it, and how it really fits into the natural world.
Right off the bat, it talks about how important math has been throughout history. Early civilizations used arithmetic for farming, trade, and taxes, and geometry for building. Calendars tracked the stars. Statistical methods and economic theories are all based on mathematical principles. Even Einstein's relativity and quantum mechanics are built on math, and so were early computers.
Today, math is seen as the purest science, the way we make science really rigorous. The more you can use math to describe something, the more careful, the more precise, you are. It's seen as the language of nature, since natural systems often show the symmetries and structures that we find in math itself.
But here's the thing: when we *apply* math, it's actually most successful in *unnatural* situations. Take poker. Math can help you win, for sure: probabilities, strategies. But poker is a game; it has rules. It's not the real world. Step outside those rules and you're not playing poker anymore.
And if poker seems too contrived an example, think about route optimization: finding the best path among a bunch of possibilities. Sounds realistic, right? Humans have always needed to find their way around. But there are no *routes* in nature. The forest has trees, leaves, branches, and organic matter, but no marked routes. Routes are things *we* make: trails, paths, roads. They're a byproduct of our modern, organized world.
Even when these kinds of games have real-world versions, like negotiating a car purchase or finding the fastest way to work, they still happen under pretty artificial circumstances: pre-defined rules and social conventions. Car buying? There's not a ton of flexibility; you're just looking for tiny advantages to use. Now compare that to meeting a bear in the forest. No rules there, just instinct and quick decisions. But there *is* negotiation. You have to consider the bear's position, its potential threats, its behavior. You're using information, even if it's not mathematical.
The fastest way to work? That uses math because we drive in, like, a grid-like environment with rules. Math works well when things are set up like games, with clearly defined pieces. That simplicity lets math hook into the system.
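To make the "clearly defined pieces" point concrete, here is a minimal sketch of route optimization, using Dijkstra's algorithm on an invented toy road network (the intersections and travel times are assumptions for illustration). The algorithm finds the fastest path, but only because every node, edge, and weight is pre-defined in advance, exactly the kind of marked routes nature does not provide.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: works only because the 'routes'
    (nodes, edges, weights) are fully specified up front."""
    # Each entry: (cost so far, current node, path taken)
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route exists in this network

# A toy road network: travel times between intersections (minutes).
roads = {
    "home": [("A", 4), ("B", 2)],
    "A":    [("work", 5)],
    "B":    [("A", 1), ("work", 10)],
}

print(shortest_path(roads, "home", "work"))  # (8, ['home', 'B', 'A', 'work'])
```

Delete the dictionary of roads and the math has nothing left to hook into; the optimality guarantee belongs to the defined network, not to the world the network abstracts.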
But move into nature, and that attachability goes away. Things become smooth and turbulent, and math loses its grip. People often dismiss this as mere approximation, but complexity isn't just a powered-up version of a deterministic system; it's different in kind. No math is going to help you with a bear encounter, or with finding your way through a really dense forest.
Now, someone might say that our lives are pretty game-like anyway; we're not running into bears on the way to work, right? But the *age* of complexity undermines that idea. Yes, our society has become pretty game-like, lending itself to mathematics. But when complexity is how things really work, things change.
We're not just building simple machines anymore; we have to account for complexity. Complexity isn't just a niche area in science, it's *everything*: our theories of nature, our definitions of knowledge and skill, even our softer, heuristic thinking. Using math to describe nature becomes problematic because we can't ignore complexity.
Now, probability might seem different. It tries to account for uncertainty and randomness, right? It's about unpredictable situations.
But probability still has that attachment issue. It's about comparisons, dividing one thing by another. Calculate the probability of drawing an ace from a deck of cards? You know the total number of outcomes, you know the favorable outcomes, and you divide them. Boom, probability. But that only works because we can attach our probability to the game-like situation.
Yeah, there are more advanced methods, but they still make comparisons. You're contrasting the likelihood of something happening versus the likelihood of it *not* happening.
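As a quick illustration of how cleanly the numbers attach in a game-like setting, here is the ace calculation sketched in Python, including the comparison of something happening versus not happening:

```python
from fractions import Fraction

# The game hands us both numbers for free:
total_outcomes = 52      # cards in a standard deck
favorable_outcomes = 4   # aces

p_ace = Fraction(favorable_outcomes, total_outcomes)
print(p_ace)             # 1/13

# Contrasting the likelihood of it happening vs. not happening (odds):
odds = p_ace / (1 - p_ace)
print(odds)              # 1/12
```

Nothing here survives outside the deck: the denominator only exists because the game defines the full space of outcomes in advance.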
A deck of cards is one thing, but how do you compare in nature? What's the numerator? The number of times *something* happens? What's the denominator? The number of *overall* events? There's no way to know that! It's virtually infinite.
So, what about distributions? When we see a probability distribution plotted on a graph, we are visualizing the values of a mathematical function, one that represents the likelihood of each possible outcome of a random variable. But a function like that still needs a well-defined space of outcomes to range over, so distributions inherit the same attachment problem.
Okay, so the question becomes: if probability is so limited, why is it used so much? The answer is simply that everything listed previously operates under the structured rules and social conventions of the world we have created.
Probability came from gambling, and that makes sense, right? The origins don't invalidate it, but it shows how it was originally envisioned: as a tool for games of chance. It's unlikely it would've been discovered in nature. It doesn't mean chance plays no role, just that nature's chance doesn't follow our simple rules.
Only games are contained enough to calculate the likelihood of something happening. So, math, even probability, is disconnected from nature and how it works. In controlled environments, math is a great tool. But in real-world complexity, it loses its grip. That doesn't threaten pure math, but it suggests limits to its *application*, even in, like, fundamental physics theories.
When things are simple, applied math is useful. But as we build truly complex things, the assumption that math is where we find rigor becomes suspect.
A lot of people sense this disconnect. Students complain that math isn't useful, right? That complaint is usually dismissed by those who say that mathematics gives us a better way to think. But that doesn't hold up when we are building complex things. Mathematics, in reality, can easily encourage us to think incorrectly.
It's not just students. Look at the stock market. People with the resources to exploit tiny differences in asset prices have an advantage, but those advantages are subtle. If math were *that* powerful, everyone would be making bank. Same with sports betting: a slight advantage, maybe, but not enough for most people. And even *that* ignores survivorship bias.
The assumption that math and probability map to the real world comes from a time when they did. But that's not the future. Making a tiny profit in the market or sports is one thing, but what about truly complex things?
The argument that modern society makes game-like calculations useful is fading. STEM knowledge needs a major overhaul to stay relevant in the face of complexity. The more physical abstraction we use, the further we get from inner knowledge and closer to trial-and-error, heuristics, and pattern recognition.
This isn't an anti-math argument, but rather an issue with how it's *applied*. Mathematics is more about abstraction than calculation, and abstraction is what complexity is all about, not just informationally but physically. Mathematics is wielded in the causal sense, as though it speaks to the inner workings of the systems we create, an approach that will prove utterly untenable in the age of complexity.
They tell us that AI is made possible by math, right? Deep learning uses all kinds of linear algebra, calculus, probability, graph theory, etc. It seems like a success story for applied mathematics.
And, in a sense, that's true. But it's misleading. Math is used to build the *scaffolding* of AI, but it's not what makes it *work*. AI works because of emergent properties that *weren't* engineered into the system. Math is like individual ants in a colony: important, but not solving hard problems on their own. It's the *collection* of ants that does. It is the letting loose of countless "ants" that allows the meat of AI to materialize and compute the outputs needed. Mathematics is how we construct the individual pieces of the high-level process we need to enact: trial-and-error paired with heuristics. After that, the systems converge in ways we cannot understand in the deterministic, causal sense.
To ensure trial-and-error happens, AI engineers need ways to compute distances, rates, and mixing. Math gives us a way to define these things computationally. But these concepts aren't *owned* by math. They're necessary aspects of trial-and-error and heuristics. There may be better ways than math to enact them, but for now, math is all we have. The vectors and matrices are useful, but there's no reason to believe their existence is how nature actually operates.
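Here is a minimal sketch of this division of labor, under invented assumptions (a made-up three-number target and a plain random-search loop). The only mathematics involved is a distance function, the scaffolding; the convergence itself is pure trial-and-error paired with a greedy heuristic.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def distance(a, b):
    # Math's contribution: a computable notion of "how far off" we are.
    return sum((x - y) ** 2 for x, y in zip(a, b))

target = [0.3, -1.2, 0.8]   # invented: what the system should converge toward
guess = [0.0, 0.0, 0.0]
best = distance(guess, target)

# Trial-and-error: jiggle the guess at random; the heuristic is simply
# to keep any change that reduces the distance.
for _ in range(5000):
    trial = [g + random.gauss(0, 0.1) for g in guess]
    d = distance(trial, target)
    if d < best:
        guess, best = trial, d

print(best)  # far smaller than the starting distance of ~2.17
```

The squared-error metric and the list-of-floats representation are one choice among many; nothing about the converged result depends on those particular symbols, which is the sense in which math sets the system up without governing its operation.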
It's not math that makes AI tick, it's the concepts behind trial-and-error and heuristics. Math must be understood as nothing more than a residual of what occurs in nature, not some definitive account of her inner workings.
But our science and engineering paradigms assume math is how complex things function. That's why today's scientists and engineers don't like the "alchemy" of AI. It pains many in scientific and engineering circles that AI appears more like an art than a science. AI systems are improved not through careful design or deep causal reasoning, but by high-level mixing and matching, adding more data and processing power to achieve results. It all sounds so unrigorous.
It is the fight against the alchemy of AI research that is the problem. Today’s AI researchers want to find a more rigorous description of how AI works internally. But there is nothing to find. We know how AI works, as long as we move away from the causal version of how. Only surface-level knowledge related to information, computation and evolution can describe what AI is doing. The academic exercise to reach into systems and find some deterministic story told by an elegant mathematical theory is bogus under complexity. Math is not how AI works, it is how AI is set up.
Mathematics being nothing but a residual of reality speaks to the nature of how we must build complex things. We have to reframe our understanding of what mathematics represents. It is not something that can describe how something complex works internally, nor can it guide us on how to build complex solutions. There will never be a proper mathematical theory about how the guts of complex things function. Mathematics, at best, is a useful tool for programming the kinds of computational scaffolds needed to ensure trial-and-error happens in a machine.
This dramatically changes the way we think about applied mathematics, and more broadly (and more importantly) what it means to bring rigor to science and engineering. Mathematics is not some universal language with which we can understand the universe, rather it is a framework for creating and thinking about computational constructs that set up systems, but do not govern their operation.
So, yeah, math is, like, useful, but it's not the whole story.
Nature Uses the Full Distribution
Despite mathematics and probability being inherently disconnected from how complexity works internally, probability does offer a useful analogical tool. We can think of nature’s phenomena as producing her outputs according to a range of possible values. This is what probability distributions attempt to capture.
Most probability distributions have peaks, where their values are most concentrated. The peaks mark the values we are most likely to observe, because those values occur with the highest frequency. If we roll a fair six-sided die again and again, we expect each number to appear roughly the same number of times, producing a uniform (flat) distribution. But if we bias the die, making it land predominantly on the number 6, a peak appears in the distribution of values, showing us that 6 is a more likely outcome.
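The die example can be simulated in a few lines. This is a sketch with an invented bias (face 6 weighted ten times heavier than the rest) and a fixed seed so the run is repeatable:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is repeatable
faces = [1, 2, 3, 4, 5, 6]

# Fair die: every face equally likely -> a roughly flat distribution.
fair_counts = Counter(random.choices(faces, k=6000))

# Biased die: face 6 weighted ten times heavier -> a peak at 6.
biased_counts = Counter(random.choices(faces, weights=[1, 1, 1, 1, 1, 10], k=6000))

print(sorted(fair_counts.items()))    # counts hover around 1000 per face
print(biased_counts.most_common(1))   # face 6 dominates
```

Note that even the biased die's peak only means something against the backdrop of the other five faces; the full set of counts is what gives the peak its statistical identity.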
A critical realization in all this is that the peak is nothing without the rest of the distribution. The peak shows us what to expect, since it represents the most statistically likely microscopic configurations of the system. But this doesn't mean the other configurations aren't relevant. Quite the opposite; these less-probable configurations play a critical role in shaping the overall distribution. More to the point, the most probable configurations would not exist unless all the other configurations were present. Without these less likely configurations the statistical properties of the system would cease to exist.
So, nature uses the whole distribution to make complexity work. This matters because it means we cannot understand things through isolation. Reductionism gets this wrong, because it is inherently disconnected from what we observe, measure and experience.
There is nothing “rigorous” or “scientific” about picking things apart in the attempt to reverse engineer nature. This is as true scientifically as it is for the things we build. For now, understand that the isolation done by reductionism does not even hold up to the mathematical and scientific principles the current paradigm supposedly adores.
Don't Run the Calculation
There are two ways we can use mathematics to make a decision: we can run the calculation and live by the results, or we can understand the universal properties that mathematics speaks to. The usual way to use mathematics is to run calculations and get some result.
But the other side of mathematics is related to its properties, not its calculations. A property answers what an object is like, not how it produces its outputs. Mathematics is full of important properties that show us how a formal system behaves, and the constraints it adheres to. Counterintuitively, this is more true for pure mathematics than it is for applied mathematics. This arguably makes pure mathematics more relevant to building complex things than today’s applied mathematics.
Despite being a mere residual of reality, mathematics need not be tossed aside. I have already shown how the conceptual understanding of probability distributions is a potent ally in framing how nature functions. But this is very different from running a calculation and seeing what it spits out. Running a calculation to know the outcome of a complex situation suggests mathematics can tell us something it cannot; appreciating the properties of mathematics can shed light on systems, because math is itself a system. The latter treats math as a pattern that speaks to the dynamics of information, while the former acts as though math itself reflects the internal workings of the system of interest.
Decision making under complexity should leverage the properties of math and probability, not their calculations. Calculations enforce determinism onto systems, changing the decision-making framework from one that uses general behaviors to one that pretends to know what systems will do specifically. The former is a powerful tool for making decisions, the latter a dangerous and naive one.
Math is a residual of nature, but it is a system worthy of study, because it can tell us properties that are universally true. We can base decisions on math, by pairing its inherent properties, not its calculations, with the premises of our rational arguments. Beyond this, as I will discuss shortly, it turns out that mathematics can be more than just a description of patterns. It can in fact create things.
Gamification and Design
There is a direct relationship between the gamification of life and the notion of design. When we assume that the world can be modeled as games we port such models into the designs of our real-world systems. We assume that the visible causality inside games must also be there in complex situations, and that specific control over internals can still work effectively.
But the real world is not a messy extension of what we see in games. This is why taking what is found in the simplistic models of academia and applying them to real life is so deeply and profoundly problematic. As I discussed in my section titled Nature Does Not Approximate, nature is not doing what simple systems do, neither literally nor as an approximation.
To today's science and engineering paradigm, the idea of not relying on inner causal knowledge sounds impractical. Where are the best practices? The design patterns? The industry standards? Are we supposed to just muck about and hope for the best? But in reality this "mucking about" is far more rigorous than anything the current reductionist paradigm can offer. When we look at how genuinely hard problems are made tractable, it becomes clear that naive action and pattern recognition are indeed the most efficient and effective way to build complex things. By keeping problem statements as general as possible and creating highly flexible internals, mucking about becomes a far more effective means of building solutions.
Most engineers today would struggle to understand how AI could be done without design. Imagine an engineer working on improving a new memory structure, something like the matrix or tensor used today. They would call upon design principles and industry standards to structure their creation. In this case, the engineer is working on a single component of the overall system, and this piece itself is essentially deterministic.
But the role the memory structure plays can no longer be assumed. The correct memory structure will be the one that emerges, when considered inside the context of the entire AI system. The only way for the correct memory structure to emerge is to build it from the outside, rather than being concerned with the specific design of a better isolated component. As argued in chapter 7, the internals of what we build must be flexible, not specific. It is not for us to know what the memory structure should look like, only that its emergence is part of a bigger picture.
Design is a byproduct of the gamification of life. It worked well for almost everything we have built throughout our history, because our world has been fashioned around the structured rules and conventions of simple machines. But complexity brings the behaviors found in real life closer to the way we build things. Design, like games, cannot make life tractable.
The Not So Surprising Reproducibility Crisis
There is a growing awareness of just how irreproducible much of science is. This obviously calls into question the reliability of the original findings published in scientific journals. After all, if we cannot reproduce what a scientist reports to have found, how can we trust that it was ever found in the first place?
The reproducibility crisis is known to be worse for the softer sciences. Psychology has been tainted with reproducibility issues since its inception. The fact that the softer sciences are more susceptible to irreproducibility is not surprising. Whereas a field like physics measures things against simple systems, fields like psychology attempt to measure and explain the mind, which stems from the most complex thing of all, the human brain.
Of course, this does not stop psychologists from trying to make their field something akin to physics, with its precise definitions and causal explanations. It is not enough to note the attributes of anxiety in individuals. To be considered a “real scientist” psychologists must make a causal connection between anxiety and some definition of distorted thinking. A set of root causes must be identified, and a story pieced together to show the path from source to outcome.
It is this physics envy in softer sciences that leads to their version of the reproducibility crisis, since one is attempting to measure things that are ill-defined. A behavioral scientist will attempt to show that human behavior has a root cause. Add to this the challenge of knowing that someone is indeed exhibiting a specific behavior.
Regardless of the field, irreproducibility usually gets chalked up to poor scientific technique. Flawed experimental design, inconsistent or poorly defined methodologies, the misapplication of statistical methods, poor data management and reporting, differences in equipment calibration, and a host of cognitive biases are all deemed culprits of the reproducibility crisis.
But the true source of irreproducibility is the lack of determinism in nature. While reproducibility is deemed important for quality research, it is wholly unrealistic for anything but the simplest of systems. Reductionist science has chosen to define knowledge in terms of isolation and extraction. We are expected to control for variables, separate and confine regions of interest, and measure very specific things. But this is not how nature works. Nature does not have root causes and deterministic pathways. There is no reason to expect reproducibility in nature because, as per multiple realizability, there is no reason to think nature takes the same path twice.
Imagine two metal poles sticking out of the ground, about four feet apart. Now imagine I told you to stand a few feet back and throw a frisbee between the poles, without the frisbee touching them. This is easy. Now imagine I move the poles closer together, say two feet apart. The challenge is more difficult, but still doable. Then I move the poles so that the distance between them is smaller than the diameter of the frisbee. We could still do it, by throwing the frisbee vertically. But the more we constrain the system, the more difficult it is to get the frisbee through the gap.
Our poles and frisbee example is analogous to the mismatch between nature and the tools we use to measure nature. The poles are nature’s phenomena, and our attempt to get the frisbee between the poles is our measurement. The more we artificially constrain phenomena via reductionism (move poles closer together) the less reproducible we can expect our measurement to be (getting frisbee through poles without touching them).
Reductionist science tells us to gamify the world, by isolating it, controlling it, constraining it. We place nature into labs and inspect its pieces, then expect others to measure the same thing. This is akin to moving poles closer and closer together and expecting many people to always throw the frisbee between the poles. Sure, an individual can develop “skill” in reproducing a good shot, but getting many people to do this is unlikely.
Humans are not meant to be working inside confined settings. Humans, as one of nature’s solutions, are highly flexible. We operate best in complex environments solving categorically hard problems. We have many ways to throw a frisbee, including backhand, forehand, flick, hammer, thumber, upper hand, roller, overhand, push, etc. This is because humans are high-dimensional, and we are meant to operate in settings that are as high or higher in dimension than we are; to fill the space of possibilities with our abilities. But as soon as a game is created, we are confined to operate within constrained rules. The more gamified the environment the more we must artificially restrict our natural abilities.
Reductionism gamifies nature. It squeezes it into confined spaces and pretends it is far simpler than it is. By doing so we can get away with precise measurements, because we are now acting as though nature is itself precise. But nature is not precise. Sure, physics can measure things like the gravitational constant, the speed of light and the fine-structure constant to several decimal places, but these are disconnected from how nature functions. They are but pieces that get statistically smeared into the ill-defined aggregations of nature.
Physics is more reproducible only because it more artificially defines nature. Any other science that follows suit will run into reproducibility issues as the complexity of its phenomena increases.
Attempting to do better science, by today’s definitions of better, will not help an honest study of nature. No amount of improvement in experimental design, consistency in methodology, appropriate use of statistics, better data management, superior equipment calibration or reduction in human bias can fix the problem. It is the reductionism that is at fault, because it creates a mismatch between what we are studying and our ability to measure it.
Science is a story of creating measuring tools that are constrained, in order to make precise measurements. But this constraining of our tools is no different than forcing our frisbee throw into a lower-dimensional, unnatural form. The hidden and faulty premise that isolated pieces speak to aggregate structure and behavior (i.e. the real world) is what has allowed science to get away with its precise measurements. Physics gets away with it because it is mostly interested in non-complex things to begin with. But the same cannot be said as we move up the ladder of complexity.
As long as researchers all act like physicists they will be constraining nature far too much to be measuring much of consequence, let alone things reproducible. A behavioral scientist using an fMRI machine to isolate some region of the brain is changing their frisbee throw to a very specific type to get it through the poles. The measurement can be made, but it operates inside a gamified version of nature.
This point is missed by most scientists. We know this, because the problem is almost always attributed to error in technique. But the problem is not the techniques used, it is the gamified foundation that today’s science rests on. The absurdity gets demonstrated with the use of AI in science. Applying AI as a tool in science makes sense. AI can detect patterns that humans miss and find correlations among massive datasets. But those who apply AI to their scientific research are running into the same old reproducibility problems, with researchers failing to reproduce a good number of studies. This gets attributed to things like AI hype leading to overly optimistic expectations, lack of documentation on how models are created, and various sources of “leakage” such as when data used for training the model overlaps with data used for testing it.
But AI as a tool cannot be constrained the way other tools in science can. This is because AI is itself a complex object, something much closer to what nature is. Trying to force AI down to the low dimensionality of reductionist science is to force the frisbee player to only throw in a very specific manner. AI researchers having reproducibility issues is wholly unsurprising. Not because of overly enthusiastic feelings, lack of documentation or data leakage, but because AI cannot operate effectively in constrained environments. It was never meant to. The reproducibility crisis is just another byproduct of gamifying our world.
Does science need reproducibility? Isolated structures and causal explanations are not what we need. We need things that work. And the things we build are not to be replicated; they are to be realized in many different ways. The knowledge we need is not confined pieces of reproducibility; it is meta-level properties that are shown to hold true (survive) across many different instances. In line with redefining knowledge is the necessity of reframing what it means to discover.
Starting Points of Computation
Mathematics is a residual of nature in the sense that it expresses some remnant of a deeper truth. Math’s connection to nature is supported by the correspondence we see between mathematical and natural patterns, and the fact that mathematical models can find solutions inside complex spaces. The residual aspect is demonstrated by the degradation of mathematical explanation and prediction outside simple, gamified systems.
This is why I believe mathematics is best thought of as one of many phenomena worth studying, rather than some universal language that underpins nature itself. Mathematics lands on repeatable and recognizable patterns as one rearranges its symbols into new forms. This means mathematics can be expected to have its own meta structures and can bring forth discoveries via its self-consistency and built-in invariant patterns.
This seems to suggest that math is discovered rather than invented. But the discovered-versus-invented question is in fact the wrong one to ask. It is a philosophical paradox, which seems to suggest both answers cannot be true. But paradoxes only come about from inside the systems that express them, not when we step outside those systems to comment on their validity.
Consider that mathematics may be nothing more than a reflection of how the mind arranges thoughts and finds logical consistency. To use mathematics is to hold a mirror up to our own thinking, where the rules and structures that keep mathematics self-contained are the very constraints that make human thought possible. If true, it is likely that mathematics emerged as a byproduct of the self-referencing complex objects regularly undergo.
Complex systems fold back in on themselves to establish their emergent structures and behaviors. The human brain "observes" itself in the meta sense, as do all complex objects, but with a sentient mind this self-referencing produces human-level awareness. Beyond the raw mechanistic version of self-referencing seen in all of nature's solutions, humans externalize theirs as symbols drawn on paper. To use mathematics is to take the same constraints the mind uses to harbor emergent reasoning, and wield them ourselves.
This does not make mathematics less interesting or less important. After all, human thinking is all we have to reason about how and why our world works. But it does keep mathematics relegated to the phenomenon of human thought, rather than some universal language that underpins all reality. This is not as limiting as one might think, for mathematics unleashed within computing brings about a great deal of flexibility, as demonstrated by today's AI systems. As already discussed, what makes AI tick is not mathematics itself. Computing constructs are created using mathematics, but the system is let loose and allowed to converge on its own. Mathematics merely serves as the rules that enable data to be mashed together, and this mashing leads to the emergence of things not found in the mathematical language.
This makes mathematics more of a creative thing than a descriptive thing. Mathematics is how we set up machines to undergo mixing and matching. The properties of mathematics help us determine how such mixing and matching might happen. What this is not, is the internal rules of mathematics lending its symbolic reasoning to the systems we build. This is why symbolic AI never worked to solve hard problems, but today’s connectionist AI does.
The fact that mathematics is a mental reflection of the constraints used by the human mind, and that those constraints are what bring about the flexibility seen in complex systems, brings us to why mathematics is a residual of nature: mathematics represents the starting points that nature uses to achieve emergence, but it does not represent emergence itself.
Since mathematics can only represent the starting points of complexity, and not provide the actual mechanisms by which nature works, this means that the natural home of mathematics is in computation. Nature computes using its emergent physical abstractions, not some deterministic machine that bumps pieces into pieces. Mathematics belongs to computing because its residual form represents the starting points of emergent computation.
We see paradoxes arise in mathematics because it is being incorrectly applied within the language itself, instead of being unleashed as creative fuel for emergence. The discovered versus invented debate is only a paradox when viewed in a non-meta fashion. This is in fact true for all paradoxes. Paradoxes are resolved when viewed outside the language, because the supposedly mutually incompatible sides are mere facets of the larger truth; one that can only be seen from the outside.
If we say "this statement is false" we will call it a contradiction or paradox. All paradoxes are of this essential nature, whereby pieces of the system appear mutually incompatible. But the fact that we can look upon such a statement and still reason about it means operating externally, outside the system we are commenting on, alleviates the paradox. For example, the statement might speak to something higher-level, something that is meta to the statement itself. Imagine our real purpose was to highlight the topic of contradiction, as we are doing now. Now the statement is perfectly logical, because it serves as a demonstration of a higher-level realization.
Mathematics is not adversely affected by its inherent incompleteness because its true utility does not come from its internal rules, rather it stems from something else; that something else being computation. The contradictions inside math cannot interfere with computational reality because such a reality exists outside the mathematical starting points that breed its existence. This is true of any system under complexity. In real life, creative flexibility is made possible by a system’s constraints, not through unbounded freedom and autonomy.
If the proper home of mathematics is computation, then why is most of mathematics and its use in theories not about computation? Einstein’s Relativity does not speak of computing, it uses mathematics to describe space, time and gravity. How and why would Relativity be a residual of computing?
If mathematics is nothing more than a reflection of how the mind arranges thoughts, then it stands to reason that the symbols we use in mathematics are part of the brain’s computation. If those parts of computation are mere starting points, as I argue, then Einstein’s mathematics can be considered as starting points to computed Relativity. What is critical to realize here is that we are not running some calculation to determine anything specific, we are manifesting relativistic realities inside computers, via computation. One wonders if there is in fact much difference between what is created through computation and what exists in our physical worlds.
And so all of this leads up to seeing how this new foundation, the need for a critical shift in science and engineering and in how we go about building things for the age of complexity, sheds light on some of our most critical social, economic, political and technological challenges.