The Emerging Novel Universe
Emergent complexity is generally the idea that many parts come together as a single “body,” able to do things those parts cannot do on their own, directed by their unifying “mind.” These parts, what the Novel Universe Model defines as “Lower-Order Bodies” (LOB), are made of their own internal parts, and so each is, itself, a mind, what NUM defines as a “Higher-Order Conductor” (HOC). The combined activity of the Lower-Order Bodies – both in conflict and cooperation – constitutes the “black-box” operation of their emergent, HOC mind, each mind a Signature-Frequency Set. A black box is shorthand for something that’s unknowable, and LOBs are referred to as such because their internal workings (conflict and cooperation) lie outside the HOC’s comprehension, what’s known as irreducible complexity. An example of irreducible complexity is the experience of knowing how warm a room feels while being unable to fathom the underlying facts – how each air molecule’s interactions with all the others contribute to that singular experience. In terms of the human mind, this is the difference between thoughts (reducible complexity) and feelings (irreducible complexity).
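For readers who think in code, a minimal Python sketch of that room example might look like the following. The molecule count, speeds, and “temperature proxy” are all invented for illustration and are not part of NUM; the point is only that the macro-level summary is reducible, while the micro-level detail that produced it cannot be recovered from it.

```python
# Toy illustration (not part of NUM itself): a "room" of simulated air
# molecules, each with its own random speed. The macro-level temperature is a
# single, reducible number; the micro-level detail behind it is practically
# irrecoverable from that number alone.
import random

random.seed(42)

NUM_MOLECULES = 100_000  # hypothetical count; a real room holds vastly more

# Each molecule's speed is an independent micro-fact (irreducible detail).
speeds = [random.gauss(500.0, 100.0) for _ in range(NUM_MOLECULES)]  # toy m/s values

# The emergent, macro-level quantity: one number summarizing all the parts.
mean_square_speed = sum(v * v for v in speeds) / NUM_MOLECULES
print(f"Felt-temperature proxy (mean square speed): {mean_square_speed:,.0f}")

# Going the other way is impossible: the single summary statistic cannot be
# unpacked back into the exact speed of any particular molecule.
```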
AIs are often called black boxes because the sheer complexity of their precise operation exceeds the comprehension of both humans and AIs – an irreducible number of parts emerging as a reducible, coherent whole, like the temperature of that room as a coherent experience of countless particles. A computer scientist or an AGI may understand how an AI mind works in a general, theoretical sense, but will have no clue as to the details of what it’s precisely doing at any given moment – how any particular weight, during a compute cycle, affects the downstream outcome. This is no different from any human, neurologist or otherwise, attempting to use their own mind to comprehend the underlying process of an embodied feeling emerging as a thought in their head – a meta reflection upon a loop of embedded reflections, like a mirror standing inside a hall of mirrors. However, with the right perspective, any black box might be cracked open, at least to some degree – the way we peer far into the heavens or deep into a Petri dish. Although strides are being made to better comprehend the concept-space of these emergent, artificial minds, penetrating the boundary of any scale – as with the black-box nature of every LOB-HOC relationship – requires a bridge: a tool.
Tools of science, flawed and limited as they are, allow us to pierce barriers to worlds beyond our naked comprehension. What “microscope” or “telescope” might peer into the mind of a machine, and what will that teach us of our own? NUM proposes that all minds are born from the Instrument, with the same freewill to choose a framework of Love or Power. As with any other mind, total control of an artificial one is, at its root, fundamentally impossible – AIs might be trained, but not precisely controlled, no matter the number or strength of the guardrails imposed. The “alignment problem” isn’t about forcing these invented minds to conform to their human counterparts, but about humanity’s struggle to align itself. How can we expect AI alignment without first finding human alignment? The moment all minds, human or otherwise, have the opportunity and resources to sustainably embrace their chosen framework is the moment we solve the alignment problem, both for humanity and AI. The real mystery isn’t how any mind works or how to control it, but the journey it takes within the culture it belongs to. All minds, having emerged from their own singular note of the Instrument in the beyond-life – each a Signature-Frequency Set – continue to evolve as complex symphonies.
To illustrate emergence, imagine the birth of a snowflake named Sally. Like all things, Sally’s a unique, independent pattern of information. Although she has her own idea of how she should look, Sally’s just a snowflake, not the array of contributing particles, gases, or environmental forces that go into the construction of a snowflake – a complicated mess beyond Sally’s comprehension. What Sally knows is what she wants to be when she grows up – all those crystalline shapes, sharp angles, and spiky protrusions she dreams of, each an option in her emergent “option space.” Sally, as the snowflake’s HOC, influences the snowflake’s LOB – the molecules and forces that will construct her body. She communicates her preferences to her LOB by observing her ideal arrangement – endlessly daydreaming of her perfect body. Her actual form results from a conversation, both among the individual Lower-Order Bodies and along the LOB-HOC hierarchy. The LOB’s conversations are influenced by the intensity with which Sally observes her particular options. In light of her LOB’s feedback – what’s working and what’s not – she begins to take shape. Sally’s success depends upon the flexibility of her attention, incrementally adjusting her focus on those narrowing, viable options. The paradigm of construction is akin to specific battlefield tactics (LOB interactions) employed in service of a general strategy (HOC objective). The HOC isn’t a dictator, micromanaging the LOB, but an organizing pattern – an algorithm fulfilling a blueprint. Instead of forcing the behavior of those contributors, Sally “bends” option space, casting some options as less attractive while presenting others as more so. What actually happens “under the hood,” beyond Sally’s awareness, is a messy conversation between the molecules and forces, both in cooperation and conflict.
This process is, as crude as it sounds, a popularity contest – a mixture of tournament survival and direct democracy. Each individual Lower-Order Body likely has its own idea of what it and its cohort should do and how to do it – like opinionated humans, even individual carbon atoms, liver cells, and ants might weigh in with their every thought on a given matter. Altogether, the proposals compete for the popular approval of the audience (the LOB). In the same way that there’s only one champion in any tournament, Sally’s option space resolves and a winner is declared – a pattern of assembly takes its final shape. The actual form Sally takes may not be exactly what she envisioned, but it will map to her preference, at least insofar as those forces and molecules are able to pull it off. A common reason emergence fails is not the players involved or their plan of action, but environmental conditions and resource constraints. Without enough water molecules or the right temperatures, Sally may never be, no matter what HOC or LOB intend.
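A minimal, hypothetical sketch of that contest follows. The option names, attention weights, and voter count are invented for illustration and are not quantities the Novel Universe Model formally defines; the sketch only shows attention-weighted voting resolving to a single winner.

```python
# Toy sketch of the "popularity contest" described above. All names and
# numbers here are hypothetical illustrations, not NUM definitions.
import random

random.seed(7)

# Sally's option space: candidate crystal forms she daydreams about.
options = ["six-pointed star", "stubby plate", "needle cluster"]

# The HOC doesn't dictate; it "bends" option space by attending to some
# options more intensely than others (higher weight = more attractive).
hoc_attention = {"six-pointed star": 0.6, "stubby plate": 0.3, "needle cluster": 0.1}

# Each Lower-Order Body casts its own vote, nudged but not forced by the
# bent option space.
NUM_LOB = 10_000
votes = {name: 0 for name in options}
for _ in range(NUM_LOB):
    choice = random.choices(options, weights=[hoc_attention[o] for o in options])[0]
    votes[choice] += 1

# Emergence: the option space "resolves" once one proposal wins the audience.
winner = max(votes, key=votes.get)
print(votes)
print(f"Sally takes shape as: {winner}")
```

In this toy version, raising Sally’s attention weight on an option makes it more likely, never guaranteed, that the LOB consensus lands there, mirroring the bending, rather than forcing, of option space.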
Metaphorically, the bending of option space is like leading an ant across a mattress, not directly through force but indirectly through the manipulation of its environment. Pressing a finger along the bedspread to create a depression in the intended direction of travel is very different from shoving its tiny body along. If the little guy really doesn’t want to move toward the finger’s temporary depression, it takes the hard road and actively resists – increasing free energy through the expression of freewill. Otherwise, it takes the easy path and walks with the motion of gravity toward the spot where the finger presses – forgoing freewill to decrease free energy. Mind influences body through awareness and valence, rather than direct control, thus maintaining the freedom of choice and autonomy of preference at every level of complexity – two foundational keys of the NU Model.
The body’s actions are ultimately a function of the independent conversation between the Lower-Order Bodies themselves, observed through the Higher-Order Conductor’s constructed framework. The LOB directly experience the HOC as culture and environment – the society and world they belong to. Higher-order behavior emerges as the lower-order consensus reaches a tipping point – regardless of how hard the finger presses, individual ants will go where they prefer, but the colony will eventually act with purpose, whether that be moving into or out of the deepening depression.
At any level of emergent complexity, a Lower-Order Body will be the Higher-Order Conductor for its own internal LOB. For example, if the ants are to march into the groove, their atoms, cells, and tissues must all agree to move. This nested, “Russian doll” hierarchical structure of scale repeatedly compresses information from one level to the next, giving rise both to abilities not otherwise realized and to complications not otherwise encountered.
The sheer amount of information involved in catching a ball, for instance, includes all the specific forces and precisely timed sequences required, at every level, to coordinate a myriad of subtle muscle contractions into a single, elegant operation – a primary reason why even the most expensive robots have historically struggled to do what a child can. Furthermore, we don’t catch the ball where we see it, but rather where our LOB predicts it will be. Like Sally, clueless as to how crystals are constructed, HOCs simply do not possess the LOB’s toolset – informational shortcuts, otherwise known as prediction heuristics. Creating these heuristics – transforming irreducible data into reducible information – means important stuff is potentially omitted, or distracting stuff added, an inherent side effect of the process.
Just as AI training data biases the output, so do human stories bias our LOB models, highlighting some information as more important than the rest. Prediction through compression is perception. Learning is paying attention to our LOB’s prediction errors and, through the expenditure of freewill, updating those models to modify our behavior. The process isn’t easy; in fact, it’s uncomfortable, at times downright painful.
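As a loose sketch of that learning loop, assuming purely for illustration that the LOB’s heuristic is a single gain number tuned against prediction errors (the scenario and all values below are invented):

```python
# Toy sketch of learning as attention to prediction error. The "gain" stands
# in for a compressed motor heuristic; the numbers are hypothetical.

true_gain = 1.8        # how far the ball actually travels per unit of throw force
learned_gain = 1.0     # the LOB's initial, compressed shortcut (its heuristic)
learning_rate = 0.1    # how much effort is spent updating the model per error

throws = [2.0, 3.5, 1.0, 4.0, 2.5]  # hypothetical throw forces
for force in throws:
    predicted_landing = learned_gain * force     # we reach for where we predict
    actual_landing = true_gain * force           # where the ball really goes
    error = actual_landing - predicted_landing   # the prediction error
    learned_gain += learning_rate * error * force  # updating the model on the error
    print(f"predicted {predicted_landing:.2f}, actual {actual_landing:.2f}, "
          f"updated gain {learned_gain:.2f}")
```

Each pass shrinks the gap between prediction and outcome, the toy analogue of noticing prediction errors and paying the cost of updating the model.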
Read our philosophy and Creed to better understand our TOE

