
Basin-Weighted Entropy and Low-Complexity Attractors: A Deterministic Framework for Emergent Spacetime

A deterministic framework for emergent spacetime and basin-shaped dynamics.

Key idea: Entropy counts not only the states you are in, but the states that reliably flow into them.
Attractor basins as a landscape: stable configurations correspond to deep valleys with large catchment regions.

Abstract

Traditional entropy, defined via counting microstates, suggests that isolated systems evolve toward maximal disorder. Yet the cosmos manifests enduring structures—from galaxies to life—that seemingly defy naive thermodynamic expectations. We propose a reformulation of entropy that weights each microstate by the size of its basin of attraction in phase space, elevating persistent, ordered structures to high-entropy equilibrium status. In this framework, a closed deterministic universe naturally evolves toward low algorithmic complexity (LAC) attractors that maximize this basin-weighted (BW) entropy. The universe’s fundamental dynamics act as a compression algorithm favoring simple, stable patterns over random chaos. We reinterpret gravity not as a quantum force to be quantized, but as the rule that shapes the topology of basins in a high-dimensional state space whose 4D projection is spacetime itself. Time corresponds to “basin depth” (progress into attractors), while mass is identified with persistent computational structures that deform basin geometry. We formalize these concepts with precise definitions of basin size, attractors, and algorithmic complexity, and we outline a mapping from the high-dimensional deterministic meta-space $H$ to emergent 4D spacetime $M$. The model reproduces key features of general relativity and quantum mechanics in their respective domains, while providing novel interpretations for the origin of inertia, the speed of light as an information propagation limit, and the nature of other forces as emergent embedded dynamics. We derive consistency conditions required for observers within $M$ to accurately model the projection $\pi: H \to M$. Finally, we discuss distinctive predictions of this theory—such as the prevalence of long-lived structures, anomalies in quantum decoherence, and possible Planck-scale discretization effects—and suggest experiments and simulations to test the paradigm.

Introduction

Physical entropy is conventionally understood as a measure of disorder: for a given macrostate, entropy $S$ is proportional to the logarithm of the number of microscopic configurations (microstates) realizing that macrostate 1 . By this definition, highly ordered states have low entropy (fewer microstates), and the Second Law of Thermodynamics dictates that an isolated system will likely evolve toward macrostates of higher $S$, i.e. greater disorder. This paradigm has long posed a paradox: Why does the universe exhibit persistent organized structures—galaxies, stars, planets, biological life—despite the inexorable increase of entropy? The apparent contradiction leads to the so-called entropy–order paradox 2 . Standard statistical mechanics would expect a closed system to settle into featureless equilibrium (maximum entropy randomness), yet our universe has maintained and even grown complex hierarchical order over billions of years. Several lines of thought hint that this paradox is only apparent. First, order is not synonymous with complexity in the algorithmic sense 3 . A random high-entropy state is actually more complex

(incompressible) than a highly ordered configuration, which often has a simple description. For example, a crystal or a repetitive pattern can be generated by a short algorithm, whereas a thermalized gas with truly random particle positions requires far more information to specify 4 5 . Thus, ordered states can be seen as low algorithmic complexity even if they are low-entropy by the traditional definition. Second, dynamical systems theory demonstrates that nonlinear deterministic processes can spontaneously produce stable patterns or cycles (attractors) that resist disruption. The classic Fermi–Pasta–Ulam–Tsingou (FPUT) numerical experiment (1955) is emblematic: a nonlinearly coupled oscillator chain, expected to thermalize, instead exhibited quasi-periodic recurrences, evading full ergodic mixing for surprisingly long times 6 . This shows that even in an isolated system, ergodicity can be broken and energy can remain trapped in ordered motion (a low-complexity attractor) rather than dispersing uniformly. Likewise, solitons in nonlinear media and persistent structures in cellular automata (e.g. gliders in Conway’s Game of Life) illustrate how simple rules yield self-organizing patterns that endure amid chaos.

We are motivated by these insights to hypothesize that entropy maximization in a deterministic universe is fully compatible with – and indeed drives – the formation of order 7 . We propose that the key missing ingredient in the traditional entropy count is the size of each state’s basin of attraction. The basin of attraction of an attractor (a stable long-term state or cycle) is the set of all initial conditions in phase space that asymptotically lead into that attractor 8 . While an ordered macrostate may occupy a smaller region of phase space at any instant (fewer instantaneous microstates) than a disordered one, it may possess a much larger basin: many microstates dynamically flow into and remain near that ordered configuration. In other words, the dynamical stability and persistence of a state must be considered. We therefore redefine entropy in a dynamical, basin-weighted sense: a macrostate’s entropy is augmented by the log of the measure of its basin of attraction. Under this definition, persistent ordered structures can carry extremely high entropy, since they encompass a huge range of initial conditions that lead to essentially the same long-lived outcome. The Second Law in this reinterpretation implies that an isolated system will evolve toward macrostates that maximize this basin-weighted entropy (BW entropy), which often correspond to highly stable, structured configurations rather than homogeneous disorder.

This paper develops a rigorous formulation of the above ideas. In Section 3 (Formalism) we introduce mathematical definitions for basin size, algorithmic complexity, and attractors in the context of a deterministic state space. We define a measure-theoretic entropy that includes basin weights, and we derive a variational characterization of the low algorithmic complexity (LAC) attractors favored by long-term evolution. We then present a formal mapping $\pi: H \to M$ from an underlying high-dimensional deterministic microscopic space $H$ to the emergent physical spacetime $M$. In this mapping, what we perceive as 4-dimensional spacetime (with matter and fields) is an effective, coarse-grained description of the dynamics in $H$.
Gravity is reinterpreted as a manifestation of this projection: rather than a fundamental quantized field, gravity corresponds to the shaping of basin geometry in $H$ and its induced curvature in $M$. As elaborated in Section 4 (Physical Interpretation), time corresponds to movement into attractor basins (basin “depth”), the speed of light $c$ emerges as the finite speed at which information propagates through $H$ (reflecting the update rules’ locality), and mass corresponds to stable computational structures in $H$ that deform the local basin topology (producing the effect of spacetime curvature in $M$). Other forces (electromagnetism, etc.) appear as additional emergent fields or effective rules within the projected 4D landscape, rather than separate fundamental interactions. We show that this framework can recover the phenomenology of General Relativity and quantum field theory in appropriate limits, while offering a novel explanation for why straightforward quantum gravity approaches have struggled: if gravity is the manifestation of the projection rule itself (the geometry of $M$ induced by $H$), it may not be amenable to quantization in the same manner as fields within $M$ 9 10 .

In Section 5 (Relation to Known Physics), we connect our basin-weighted entropy to extant concepts in statistical mechanics and cosmology. We formally relate BW entropy to coarse-grained (Gibbs) entropy and to Kolmogorov-Sinai entropy in dynamical systems, and we discuss how this approach reframes the question of the universe’s low initial entropy. We also map elements of our high-dimensional deterministic model onto familiar structures: for example, we discuss how Einstein’s field equations might emerge as effective, statistical descriptions of basin geometry influenced by mass-energy, and how quantum behavior could result from observers in $M$ only accessing coarse-grained states of $H$.

Section 6 (Predictions) outlines distinctive predictions of our theory that differentiate it from the standard paradigm. Among these are: (i) Structure Persistence: The prevalence of long-lived, self-organized structures (at all scales) is expected to be higher than naive equilibrium thermodynamics would predict, implying, for instance, unusual stability of certain complex systems or faster re-formation of order after perturbations. (ii) Anomalies in Decoherence: Because quantum decoherence in this view is an emergent process governed by underlying deterministic dynamics, there may be small deviations from the predictions of random-environment decoherence models – potentially observable in precision quantum experiments as tiny residual coherence or systematic collapse biases. (iii) Planck-Scale Discreteness: Since $H$ may be discrete or structured at the Planck scale, there could be “stride” artifacts in spacetime at extremely high energies or small scales – for example, a frequency-dependent speed of light or dispersion of gamma rays at energies approaching the Planck energy, as some quantum gravity models predict 11 12 . We suggest possible experiments, such as high-precision astrophysical timing or tests of quantum gravitational noise, that could reveal such effects.

Finally, in Section 7 (Discussion) we consider the implications and limitations of this framework. We emphasize the need for self-consistency: any observer within the emergent spacetime $M$ must be able to construct an effective description of physics (the “laws of nature”) that is coherent and agrees with observations, even though the true underlying dynamics reside in $H$. This imposes constraints on $\pi$ and on the nature of the high-dimensional rule (for instance, it should respect symmetries that manifest as Lorentz invariance in $M$, and it should produce statistical outcomes that match quantum probabilities for observers who lack access to the microstate details). We discuss how classical general relativity and quantum mechanics arise as limiting cases of our framework and how the perspective presented here might resolve certain puzzles (such as the black hole information paradox and the arrow of time). We conclude by outlining open mathematical challenges and the next steps required to further develop and test this theory.

Formalism

State Space and Dynamics ($H$)

Let $H$ be a high-dimensional state space representing the fundamental degrees of freedom of the universe. $H$ could be, for example, a space of microscopic configurations (bit strings, field values on a lattice, or some abstract phase space) of enormous dimensionality $N \gg 4$. We assume the evolution of the complete state is deterministic and governed by a rule $F$ (discrete time) or a flow $f^t$ (continuous time):

- In a discrete-time model (e.g. a cellular automaton), $x_{t+1} = F(x_t)$ for state $x_t \in H$.
- In a continuous-time model, $\frac{d}{dt} x(t) = X(x(t))$ for some vector field $X$ on $H$, yielding a flow $f^t: H \to H$ with $x(t) = f^t(x(0))$.

We make two key assumptions: (1) Conservation of information: $F$ is bijective (or $f^t$ is invertible), reflecting that no information is lost or created (this aligns with microscopic reversibility in physics). (2)

Measure preservation: There exists a natural measure $\mu$ on $H$ (such as Liouville volume in Hamiltonian phase space) that is preserved by the dynamics 13 14 . Intuitively, if we take a uniform distribution of initial states in some region of $H$, under evolution they get redistributed but the volume (measure) of that ensemble in $H$ remains constant. This ensures a fair counting of microstates over time.

Microstates and Macrostates: A microstate is a point $x \in H$ specifying the full fine-grained configuration. We define a macrostate $M_\alpha$ as an equivalence class of microstates that are macroscopically indistinguishable (according to some coarse-graining relevant to observers). For example, in conventional thermodynamics $M_\alpha$ might fix macroscopic quantities like energy, volume, etc., and includes all $x$ consistent with those. Here, we will ultimately let macrostates correspond to attractor outcomes (long-lived patterns), but we begin generally. Let $\Omega(M_\alpha) = \{\, x \in H : x \text{ is in macrostate } M_\alpha \,\}$ denote the set of microstates comprising macrostate $M_\alpha$. The Boltzmann entropy of $M_\alpha$ is $S_{\mathrm{B}}(M_\alpha) = k_B \ln [\mathrm{Vol}(\Omega(M_\alpha))]$, where $\mathrm{Vol}$ is counting measure or $\mu$-measure of that region 1 . This is the traditional entropy, which depends only on instantaneous counting of microstates.

Attractors and Basins: An attractor $A \subset H$ is a set (point, cycle, or more complex invariant set) toward which the system tends to evolve from a range of initial conditions. More formally, $A$ is an attractor if: (i) it is invariant ($f^t(A)=A$ for $t>0$), and (ii) it has a basin of attraction $B(A)$ which is an open set of initial states that approach $A$ as $t \to \infty$ 15 16 . The basin size can be quantified by $\mu[B(A)]$, the measure of the basin under $\mu$. (If there are multiple attractors, these basins partition the state space aside from measure-zero boundaries 8 .) Attractors can be fixed points (static configurations), limit cycles (periodic orbits), or even strange attractors (chaotic yet confined regions). In our context, examples of attractors might include stable structures like a galaxy configuration, a solar system, or a living organism’s homeostatic state. Note that in a conservative Hamiltonian system without dissipation, strictly speaking there are no asymptotic attractors (Poincaré recurrence implies eventual return). However, metastable long-lived states can effectively function as attractors on relevant timescales. We will assume some mechanism (perhaps coarse-graining or slight dissipation) that allows attractors to be well-defined for the behavior of interest. Now we integrate these concepts to redefine entropy in a way that accounts for dynamical stability:

Basin-Weighted Entropy (BW Entropy)

We define the basin-weighted entropy $S_{\mathrm{BW}}(A)$ of an attractor (or the macrostate corresponding to that attractor) as proportional to the logarithm of its basin volume: $$ S_{\mathrm{BW}}(A) \;\equiv\; k_B \ln \mu\big[B(A)\big] \,. $$ In words, $S_{\mathrm{BW}}$ counts not just the microstates currently in configuration $A$, but all microstates that will flow into $A$ (and remain there, up to fluctuations) over time. Because $\mu[B(A)]$ may be enormously larger than $\mu(A)$ itself, a highly ordered $A$ can carry very large $S_{\mathrm{BW}}$. This redefinition formalizes the intuition that the entropy of a persistent structure includes the entropy of all the chaotic states that give rise to it. For example, consider a glider in Conway’s Game of Life (a simple cellular automaton): the glider is a simple moving pattern (low algorithmic complexity) and occupies a few cells (few instantaneous microstates), yet if we regard the glider as an attractor in the state space of the automaton, its basin includes many random initial patterns that will eventually produce a glider. Thus in our sense a glider can be a high-entropy state because it is the robust outcome of many unpredictable initial configurations. Similarly, we might consider a galaxy as an attractor for matter under gravity: many initial distributions of a protogalactic cloud collapse into the

same virialized galactic structure. By counting those initial configurations in $B(A)$, the galaxy state can be assigned a very high entropy despite its evident order. We can extend this idea to general macrostates (not necessarily final attractors) by defining BW entropy for a macrostate $M_\alpha$ as $$ S_{\mathrm{BW}}(M_\alpha) \;=\; k_B \ln \Big( \sum_{\beta:\; M_\beta \text{ leads to } M_\alpha} \mu[\Omega(M_\beta)] \Big), $$ i.e. summing the weights of microstates in all macrostates $M_\beta$ that dynamically evolve into $M_\alpha$. In practice, however, it is most natural to identify $M_\alpha$ with an attractor or long-lived equilibrium state, since only then is the notion of “leads to” unambiguous as $t\to\infty$. We therefore focus on attractor entropies. Because $\mu$ is preserved by the dynamics, one can show that the total $S_{\mathrm{BW}}$ of the system (summing or integrating over all attractors weighted by their basin measures) is constant and equal to the traditional $S$ of the whole closed system (which is fixed, since the microstate evolves deterministically). However, $S_{\mathrm{BW}}$ can be re-partitioned among macrostates over time. The Second Law, restated in this context, says that $S_{\mathrm{BW}}$ tends to concentrate into whichever macrostate(s) have the largest basins. In essence, the equilibrium state is the one whose attractor basin dominates phase space volume. This yields a criterion for predicting long-term outcomes: the system will likely end up in the macrostate that has the largest basin of attraction, even if that macrostate has fewer instantaneous microstates than some other configurations. This principle resolves the entropy–order paradox: ordered attractors, by virtue of large basins and longevity, can outweigh disordered states in the entropy ledger.
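
To make the definition concrete, the following minimal numerical sketch (not part of the original draft) estimates basin measures by sampling initial conditions of a toy two-attractor gradient flow and reports the corresponding basin-weighted entropies; the double-well potential, the sampling box, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Toy double-well gradient flow dx/dt = -dV/dx, dy/dt = -dV/dy with
# V(x, y) = (x^2 - 1)^2 + y^2.  Its two attractors sit at (+1, 0) and (-1, 0).
# The system, the box, and the sample counts are illustrative choices.

rng = np.random.default_rng(0)
n_samples, n_steps, dt = 20000, 1500, 0.01

# Uniform reference measure mu on the box [-2, 2] x [-2, 2].
x = rng.uniform(-2.0, 2.0, size=n_samples)
y = rng.uniform(-2.0, 2.0, size=n_samples)

for _ in range(n_steps):                 # overdamped gradient descent on V
    x -= dt * 4.0 * x * (x**2 - 1.0)
    y -= dt * 2.0 * y

labels = np.where(x > 0, 1, -1)          # which attractor each sample reached

for lab, name in [(1, "A+ at (+1, 0)"), (-1, "A- at (-1, 0)")]:
    frac = np.mean(labels == lab)        # ~ mu[B(A)] / mu(box)
    print(f"{name}: basin fraction ≈ {frac:.3f}, "
          f"S_BW/k_B ≈ ln(frac) = {np.log(frac):.3f} (up to the constant ln mu(box))")
```

In this symmetric example both basins capture roughly half of the reference measure; the same Monte Carlo estimator carries over to higher-dimensional systems, where basin fractions, and hence $S_{\mathrm{BW}}$ values, can differ by many orders of magnitude.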

Algorithmic Complexity and Attractors

To quantify the “simplicity” of structures, we invoke algorithmic complexity. The Kolmogorov complexity $K(x)$ of a microstate $x$ is the length (in bits) of the shortest description (program) that generates $x$ on a universal computer 17 18 . This quantity $K(x)$ measures the information content of the state beyond compressible patterns 19 . Ordered states (crystalline, regular, symmetric) have low $K$ because they are algorithmically compressible 19 , whereas random states have high $K$. We extend this concept to macrostates or attractors by considering the complexity of describing the pattern or rule defining the attractor. For example, an attractor that is a simple limit cycle might be described by a short set of equations or symmetries, giving it low complexity. We hypothesize that the attractors which maximize $S_{\mathrm{BW}}$ (basin entropy) are precisely those with low $K$ (algorithmic complexity). Intuitively, simple patterns often come from many situations (large basin) because they do not require fine-tuned initial conditions to form; by contrast, a highly complex pattern (random-looking) needs very specific initial configurations to arise and is easily perturbed, so its basin of attraction (if it has one at all) is tiny. This leads to a variational principle.

Conjecture (Maximum BW Entropy = Minimum Complexity): In a closed, isolated deterministic system, the trajectory will asymptotically settle into the macrostate(s) that minimize algorithmic complexity $K$, subject to the constraint that the traditional entropy $S$ is maximal. Equivalently, among all high-entropy macrostates, the system favors the one(s) with simplest structure 20 21 . Over long times, the phase-space measure $\mu$ becomes concentrated on attractors that are compressible (low $K$) 22 .

We can express this idea more formally. Let $\mathcal{S}$ be the state space and suppose $S(x)$ and $K(x)$ can be defined for each state (where $S(x)$ here means the Boltzmann entropy of the macrostate containing $x$). Then in an asymptotic or ergodic sense, we expect: $$ \text{Long-time evolution}:\quad x(t) \text{ maximizes } S_{\mathrm{BW}} \;\approx\; \arg\min_{x \in \mathcal{S}}{K(x)} \text{ given } S(x) \approx S_{\max}\,. $$ A heuristic Lagrange multiplier argument leads to selecting states that extremize $S - \lambda K$ for some $\lambda > 0$. This aligns with the idea of entropy-driven compression of information: the system seeks states that are high entropy and simple. In practice, complete maximization of $S$ may be achieved by a mixture of attractors; but if one attractor has overwhelmingly larger basin measure, it will dominate the ensemble (becoming overwhelmingly probable). The conjecture can be rephrased in thermodynamic terms: the equilibrium of a deterministic system is a structured equilibrium 7 with maximal entropy production into simple patterns rather than into randomness. As an example, consider gravitational collapse of a cloud of gas. Traditional entropy would count more states for a diffuse spread-out gas than for a single concentrated mass. Yet gravity (a deterministic rule) drives the system into a galaxy or star—a highly structured state. Our framework explains this by noting that the galaxy is an attractor (due to gravity’s long-range instability of uniformity) with a huge basin: almost any sufficiently massive cloud will collapse into some star/galaxy configuration. The diffuse gas is unstable (not an attractor; small perturbations grow). Thus the “entropy” including dynamics is higher for the clumped state. Indeed, black holes represent extreme cases: they are very ordered (just a few parameters describe a black hole) yet have maximal entropy $S_{\text{BH}} = \frac{k_B c^3 A}{4 G \hbar}$ proportional to horizon area 23 24 , which is the largest entropy possible for a given mass-energy. In our terms, a black hole’s simplicity (low $K$ description) and stability give it a vast basin of attraction (any matter that comes near falls in), making it a high-$S_{\mathrm{BW}}$ state despite its apparent order. This reconciles why black holes can be considered thermodynamic equilibrium objects with well-defined entropy 23 even though they are extremely "ordered" geometrical objects.
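
As a concrete illustration of the conjecture (and of the compression proxy invoked again in the Predictions section), the following sketch uses a lossless compressor as an upper-bound stand-in for $K(x)$ and scores two configurations with the same single-site statistics, one periodic and one random, by $S - \lambda K$; the states, the entropy proxy, and the value of $\lambda$ are illustrative assumptions, not results from the text.

```python
import zlib
import numpy as np

def k_proxy(bits: np.ndarray) -> int:
    """Compressed size in bytes: an upper-bound proxy for Kolmogorov complexity."""
    return len(zlib.compress(np.packbits(bits).tobytes(), level=9))

rng = np.random.default_rng(1)
n = 100_000

ordered = np.tile(np.array([1, 0, 0, 1], dtype=np.uint8), n // 4)  # periodic, low K
random_ = rng.integers(0, 2, size=n, dtype=np.uint8)                # incompressible, high K

for name, state in [("ordered", ordered), ("random", random_)]:
    # Coarse (Boltzmann-like) entropy proxy: single-site Shannon entropy times n.
    p1 = state.mean()
    s_coarse = 0.0 if p1 in (0.0, 1.0) else -(p1*np.log(p1) + (1-p1)*np.log(1-p1)) * n
    k = k_proxy(state)
    lam = 1.0  # illustrative Lagrange multiplier weighting simplicity against entropy
    print(f"{name:8s}  K-proxy = {k:6d} bytes   S_coarse ≈ {s_coarse:9.0f}   "
          f"S - λK = {s_coarse - lam*k:9.0f}")
```

Both states share the same coarse entropy (equal numbers of 0s and 1s), but the periodic one compresses to a tiny description, so it wins under the $S - \lambda K$ criterion, which is the qualitative behavior the conjecture attributes to long-time evolution.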

Mapping the Meta-Space to Physical Space: $\pi: H \to M$

We now formalize the idea that our familiar 4-dimensional spacetime with its fields is an emergent, coarse-grained projection of the fundamental deterministic system in $H$. Let $M$ be a 4D differentiable manifold (spacetime) equipped with physical fields (including matter distributions, gauge fields, etc.) that observers in the universe experience. We posit the existence of a surjective projection (many-to-one map) $$ \pi: H \;\to\; M, $$ which maps each microstate $x \in H$ to a corresponding macrostate description $\pi(x) \in M$. The map $\pi$ encapsulates how high-level physical reality emerges from the microscopic configuration. Properties of $\pi$:

1. Coarse-Graining: $\pi$ identifies microstates that are microscopically different but macroscopically equivalent. Specifically, if two microstates differ only by micro-variations that do not affect observable quantities (at some relevant scale), $\pi$ maps them to the same point in $M$. Thus $\pi^{-1}(y) \subset H$ is the set of microstates corresponding to a single macroscopic state $y \in M$. This set is typically enormous for large systems, reflecting the usual entropy $\Omega$ counting. However, importantly, $\pi$ also coarse-grains over time scales and dynamics in the following sense:

2. Attractor Identification: $\pi$ maps entire attractors (and points in their basin sufficiently deep) in $H$ to single stationary or recurrent structures in $M$. For example, if $A \subset H$ is a limit cycle attractor, $\pi(A)$ might be a steady oscillation seen in $M$. If $A$ is a strange attractor corresponding to a turbulent fluid flow, $\pi(A)$ is the turbulent flow pattern in continuum terms. $\pi$ should be defined such that once the system falls into an attractor in $H$, its image in $M$ appears as a stable object or process.

3. Smooth Spacetime Emergence: We require that $\pi$ produces an $M$ that obeys the usual continuity and locality of physical law. That is, if two microstates $x_1$ and $x_2$ differ only in a localized region of $H$, their images $\pi(x_1), \pi(x_2)$ should differ in a localized region of $M$. This is how local physics in spacetime arises: locality in $M$ stems from some (perhaps topological) locality in $H$. We imagine $H$ might have a structure where each micro-variable can be associated with a location in $M$ (e.g. bits on a lattice that map to coordinates), though $H$ could be something more abstract like a network or a high-dimensional attractor space. Regardless, $\pi$ must preserve the causal structure such that if $x(t)$ and $x'(t)$ differ in a region of $H$, their effects do not instantly appear far away in $M$, aligning with relativistic causality in $M$.

4. Time Projection: We distinguish between the fundamental time parameter $\tau$ of the evolution in $H$ and the emergent time coordinate $t$ observed in $M$. The projection $\pi$ should map the sequence $x(0), x(\Delta\tau), x(2\Delta\tau), \dots$ in $H$ to the sequence of macrostates $y(0), y(\Delta t), y(2\Delta t), \dots$ in $M$, with some monotonic relationship between $\tau$ and $t$. If the rule in $H$ is such that changes propagate at a maximum speed (due to local updates), this sets the scale for $\Delta t$ in $M$ compared to $\Delta \tau$. We can without loss of generality take $\tau$ measured such that one fundamental tick corresponds to one Planck time in $M$, for example, thereby making the speed of information propagation in $H$ correspond to $c$ in $M$. More on this below.

In summary, $\pi$ is defined so that $M$ behaves like our physical world. The deterministic evolution in $H$ under $F$ induces an evolution in $M$ that follows effective laws (which we recognize as physics). For $\pi$ to be useful, an observer in $M$ (who has access only to the macrostate) can, at least in principle, infer regularities and laws without knowing $H$. Now, let us articulate the specific correspondence for gravity and other features:

• Gravity as Basin Geometry: We propose that what we perceive as the gravitational field (or spacetime curvature) in $M$ is a manifestation of the geometry of attractor basins in $H$. In a loose sense, imagine representing $B(A)$ (basin of some attractor corresponding to a mass concentration) as a potential well: the deeper the basin (meaning the more “irreversible” or attracting the flow toward $A$), the stronger the gravity we observe. Mass in $M$ is an indicator of a certain kind of pattern in $H$: a durable, localized concentration of microstates that exerts influence on other states. In $H$, a massive attractor might be thought of as a region that pulls in surrounding states (like how a gravitational mass pulls matter in space). The shape of the basin boundary and depth in $H$ translates to the curvature of spacetime in $M$. For example, the presence of a mass $m$ in $M$ corresponds to a modification of $\pi^{-1}$ such that microstates in $H$ around that region funnel toward microstates representing infall into the attractor corresponding to $m$. Mathematically, if we denote by $U_m$ a neighborhood in $H$ that maps to a vicinity of a massive object in $M$, the flow in $H$ has a diverging component toward the attractor representing the mass. $\pi$ converts that into geodesic deviation (curvature) in $M$.
In effect, Einstein’s field equations $G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$ would emerge statistically from this setup: the stress-energy tensor $T_{\mu\nu}$ of matter in $M$ reflects the distribution of certain attractor structures in $H$, and the Einstein tensor $G_{\mu\nu}$ (curvature) encodes how the flow in $H$ is shaped around those structures. A full derivation is beyond our scope, but the expectation is that in the continuum limit of many microstates, $\pi$ will produce something equivalent to these classical field equations, because that is the only consistent way to encode how volumes (of microstates) move in $H$ under the influence of an attractor (mass). This parallels approaches in emergent gravity theories where spacetime geometry is an entropic or informational construct (e.g. entropic gravity 25 ).

• Time as Basin Depth: If $H$ has attractors with varying “depths” (e.g., a measure like a Lyapunov function or effective potential that decreases as one goes deeper into the attractor), we can associate the progress along that depth with the flow of time. For a given process (say, a system relaxing into equilibrium), the amount of entropy produced or basin depth traversed can serve as a clock. In classical thermodynamics, time’s arrow is aligned with entropy increase. Here, because entropy increase corresponds to moving into larger basins (more stable states), we can literally take “time is basin depth” to mean: as the system finds more stable configurations, time has advanced. In the emergent $M$, this is experienced as the forward flow of time. One might formalize this by defining a scalar “basin potential” $\Phi(x)$ on $H$ that is high in unstable regions and lowest on attractors; then $d\Phi/dt < 0$ along trajectories (like minus a Lyapunov function). One could then identify proper time increments $dt$ with $-d\Phi$ or some monotonic function of it. This idea is evocative rather than strict: practically, $\pi$ must map the sequence of microstates to a consistent time coordinate in $M$ such that when the system has equilibrated into an attractor, the clocks in $M$ reflect that a long duration has passed (since entropy increased significantly). • Speed of Light as Rule Propagation Speed: In many cellular automata or local lattice models, there is a maximum speed at which information or disturbances propagate (one site per time step, for instance). If $H$ has a locality structure, $F$ or $f^t$ will similarly have a finite signal velocity. We associate this with the invariant speed $c$ in relativity. Thus $c$ is not just a parameter; it is set by the micro-dynamics of $H$. Under $\pi$, what was “one cell influence per tick” in $H$ becomes “light travels one Planck length per Planck time” in $M$. The fact that no influence in $M$ can exceed $c$ is guaranteed by the structure of $\pi$ preserving causal relationships from $H$. This offers an explanation for why all fields/forces propagate at or below $c$: they are all emergent from the same underlying rule that has $c$ built in as the update speed. • Other Forces as Embedded Dynamics: While gravity emerges from how $\pi$ shapes the global geometric flow (basin architecture), the other fundamental forces (electromagnetic, weak, strong) can be seen as internal degrees of freedom or patterns in $H$ that carry over to $M$. For instance, an electric charge in $M$ might correspond to a certain oscillatory pattern or phase in the microstate that affects how neighboring microstates evolve (analogous to a local rule that includes a U(1) phase, yielding electromagnetism). Gauge symmetries in $M$ could stem from symmetries of $F$ in $H$. Because $\pi$ is many-to-one, there is room for an internal gauge redundancy: multiple microconfigurations could map to the same coarse state but differ by something analogous to a gauge transformation. In fact, $\pi^{-1}$ of a single $M$-state might form equivalence classes that reflect gauge or quantum phase degrees of freedom. Thus forces that in the Standard Model are mediated by gauge bosons would in $H$ be mediated by underlying deterministic interactions that preserve certain invariants (leading to conservation laws and symmetries by Noether’s theorem in $M$). 
The detailed formal mapping to Yang-Mills fields or quantum potentials is beyond this work, but we posit that for each known force there is a corresponding structure in $H$:

• Electromagnetism: perhaps an adjacency rule in $H$ that imposes a local phase alignment between neighboring state-variables, leading to $U(1)$ symmetry and electromagnetic field equations in $M$.
• Strong force: perhaps combinatorial arrangements in $H$ that enforce SU(3)-like constraints, whose collective behavior in $\pi$ yields the QCD gauge field.
• Weak force: similarly could emerge from rule patterns that only manifest at small scales in $M$ (maybe related to flips of certain internal bits that correspond to weak isospin).

The key point is that these forces are not independent fundamental inputs in this view; they are effective behaviors of the deterministic rule when viewed in projection. This is analogous to how in automata or fluid simulations, one sees emergent effective forces (like pressure, viscosity, etc.) from underlying particle collisions.

• Quantum Behavior and Non-quantization of Gravity: Quantum mechanics arises naturally if observers in $M$ have incomplete information about the microstate in $H$. Each coarse macrostate $\pi(x)$ may correspond to a vast set of possible microstates (the micro-ensemble), and an observer only knows a probability distribution over $\pi^{-1}(\text{observed state})$. This lack of knowledge can be modeled as a density matrix or wavefunction at the coarse level, with apparent randomness in outcomes reflecting deterministic chaos or sensitive dependence in $H$. The failure of quantization of gravity in standard approaches can be understood here: quantization treats spacetime geometry (gravity) as just another field to be put into a superposition. But if spacetime geometry is actually an emergent, statistically averaged construct from $H$, then trying to quantize it is akin to attempting to quantize a thermodynamic variable like entropy or temperature – a category error. Indeed, quantum field theory presupposes a fixed spacetime background to define fields on; if spacetime itself fluctuates quantum mechanically one gets inconsistency 10 . Our framework sidesteps this because the fundamental description is deterministic and non-probabilistic in $H$, and quantumness is an emergent statistical effect in $M$. Gravity corresponds to the structural aspects of $\pi$; it doesn’t have independent degrees of freedom that can oscillate or superpose the way a photon or electron field in $M$ does. This aligns with arguments by some researchers that gravity might remain classical even when matter is quantum 9 . In our context, that is tautologically the case: $H$ is classical under the hood, and gravity is just geometry of $H$ projected to $M$.
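
The earlier claim that $c$ reflects the update rule's maximum propagation speed can be checked directly in a toy $H$: in any rule whose update depends only on a finite neighborhood, two microstates that differ at a single site can develop differences only within a linearly growing light cone. A minimal sketch follows (the specific rule, lattice size, and run length are illustrative choices, not constructions from the text).

```python
import numpy as np

def step_rule90(state: np.ndarray) -> np.ndarray:
    """Elementary CA rule 90: each cell becomes the XOR of its two neighbours.
    The neighbourhood radius is 1, so no influence can travel faster than
    one cell per tick -- the analogue of c in this toy H."""
    return np.roll(state, 1) ^ np.roll(state, -1)

n, t_max = 201, 60
rng = np.random.default_rng(2)
a = rng.integers(0, 2, size=n, dtype=np.uint8)
b = a.copy()
b[n // 2] ^= 1                      # perturb a single site of the second copy

for t in range(1, t_max + 1):
    a, b = step_rule90(a), step_rule90(b)
    diff = np.nonzero(a ^ b)[0]     # sites where the two histories now disagree
    radius = 0 if diff.size == 0 else int(np.max(np.abs(diff - n // 2)))
    if t % 20 == 0:
        print(f"t = {t:3d}: difference has spread to radius {radius} (bound = {t})")
```

The printed radius never exceeds the elapsed number of ticks, which is the discrete analogue of the statement that no signal in $M$ outruns $c$.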

Self-Consistency for Internal Observers

An observer $O$ is itself a physical system within $M$ (and correspondingly a subset of degrees of freedom in $H$) that can perform experiments and record information. For our theory to be viable, it must be internally consistent: observers who model physics using $M$ (unaware of $H$) should not encounter contradictions. This imposes several conditions:

1. Macro-causality: If the fundamental rule $F$ in $H$ is local and causal, then $\pi$ must ensure that causality in $M$ is respected. No observer should observe a signal that violates relativistic causality. This requires that $\pi$ does not map distant-independent events in $H$ to local proximate events in $M$ without a cause. We assume $\pi$ is designed to preserve the light-cone structure (as discussed with $c$ above).

2. Stable Laws: The emergent laws in $M$ (e.g., equations of motion, conservation laws) should hold reliably under $\pi$ for typical trajectories of $H$. That means fluctuations or micro-details in $H$ should average out so that $M$-laws are not erratic. For instance, if we derive an $M$-level stress-energy tensor from averaging $H$, it should satisfy a conservation $\nabla_\mu T^{\mu\nu}=0$ if $\pi$ is properly capturing symmetries of $F$. Observers expect energy-momentum conservation, locality, etc., so those must be guaranteed by structural aspects of $H$ and $\pi$. In short, $\pi$ must commute with the dynamics in a certain sense: projecting after evolving or evolving the projected state should give consistent results. Formally, $\pi(f^t(x))$ should be well-approximated by the evolution of $\pi(x)$ via some effective $F_{\text{eff}}$ on $M$. If not exactly (since $M$ is losing info), then within the limits of experimental precision. This is analogous to requiring that coarse-graining a molecular simulation yields a continuum fluid obeying Navier-Stokes, etc.

3. Observer-Indistinguishability of $H$: No observer confined to $M$ should be able to directly detect the underlying discrete or high-dimensional structure of $H$ except through the effective phenomena we predict (like subtle Planck-scale effects). Their measurements and theories should remain consistent with a self-contained description in $M$ (quantum field theory, GR, etc.) up to those small anomalies. This means $\pi$ should be such that most microstate differences only manifest as quantum noise or unresolvable uncertainty in $M$. For example, two microstates in $H$ that map to the same $\pi(x)$ might lead to slightly different subsequent $M$-trajectories, but those differences appear as quantum randomness or chaotic unpredictability rather than blatant violations of conservation or logic. In this sense, quantum indeterminacy is a feature: it masks the underlying determinism in a way that observers cannot easily unravel (as conjectured by 't Hooft in his deterministic hidden-variable interpretations of QM).

4. Reciprocity: If an observer in $M$ builds a computer or simulator attempting to model fundamental physics, they are effectively creating a subsystem of $H$ that mimics $H$'s own rule on some level. $H$ being universal (like a Turing machine) means it can emulate itself, but no faster or better than itself. This self-reference implies limitations akin to Gödel’s or Turing’s halting problem: observers cannot predict all aspects of $H$ beyond certain horizons because that would require duplicating $H$ at full complexity. This might translate to unpredictability inherent in quantum measurement outcomes or chaotic processes – the observer cannot get around those because to do so means solving an intractable computation within their finite subset of $H$. Thus the self-consistency condition is that no observer can exploit being in $H$ to get “outside” information. They are bound by the effective laws of $M$ which include computational limits.

In summary, as long as $\pi$ is constructed with these principles, observers within $M$ will find that their world obeys consistent physical laws. They will formulate quantum mechanics, statistical mechanics, and relativity to describe it, never directly seeing $H$ but possibly noticing hints (like the anomalies we discuss next) if technology improves.
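
Condition 2, that $\pi$ should approximately commute with the dynamics, can be probed numerically in toy models by comparing "evolve then project" against "project then evolve under a candidate effective rule". The sketch below does this for a diffusive lattice field with block averaging as $\pi$; the fine rule, the block size, and the rescaled effective coefficient are assumptions made for illustration, not constructions from the text.

```python
import numpy as np

def diffuse(field: np.ndarray, alpha: float, steps: int) -> np.ndarray:
    """Explicit diffusion update on a periodic 1D lattice (the 'fine rule' F)."""
    f = field.copy()
    for _ in range(steps):
        f = f + alpha * (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f)
    return f

def project(field: np.ndarray, block: int) -> np.ndarray:
    """pi: block-average coarse-graining from H to the emergent description."""
    return field.reshape(-1, block).mean(axis=1)

rng = np.random.default_rng(3)
n, block, steps, alpha = 512, 4, 400, 0.4
x0 = rng.normal(size=n)

# Route 1: evolve in H, then project.
coarse_from_fine = project(diffuse(x0, alpha, steps), block)

# Route 2: project first, then evolve with a candidate effective rule F_eff
# (same diffusion, coefficient rescaled by 1/block**2 for the coarser lattice).
coarse_effective = diffuse(project(x0, block), alpha / block**2, steps)

err = np.max(np.abs(coarse_from_fine - coarse_effective)) / np.max(np.abs(coarse_from_fine))
print(f"relative mismatch between evolve-then-project and project-then-evolve: {err:.3f}")
```

A small residual mismatch is expected, since $\pi$ discards sub-block detail; the consistency requirement in the text only asks that the mismatch stay below the observer's experimental resolution.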

Physical Interpretation and Relation to Known Physics

Having laid out the formal structure, we interpret its meaning and connect to established physics concepts:

Entropy and the Second Law: In our framework, the Second Law (for an isolated universe) can be rephrased: the basin-weighted entropy $S_{\mathrm{BW}}$ never decreases and typically increases until reaching a maximum. In practical terms, the universe will evolve towards states that are dynamically stable and have many ways to be reached. This is consonant with the idea of self-organizing systems and the emergence of complexity. It challenges the simplistic view that maximal entropy implies heat death chaos; rather, maximal $S_{\mathrm{BW}}$ might correspond to a richly structured state (albeit one that, from a coarse perspective, might look “equilibrium”). This resonates with ideas in non-equilibrium thermodynamics and the concept of the maximum entropy production principle, which often coincides with the emergence of ordered flow patterns (like Bénard convection cells organizing to export heat more efficiently). Our contribution is making this idea rigorous with algorithmic information: the “order” that emerges is precisely that which is simplest to describe (and thus robust).

This perspective provides a fresh lens on cosmic history: Early-universe conditions were nearly featureless (perhaps a quantum foam or high-energy vacuum), which we consider to be a high-complexity state (high $K$) but maybe not maximal $S_{\mathrm{BW}}$ because it was extremely unstable. Small fluctuations (quantum or otherwise) would give rise to structures (inflation amplified these seeds) leading to stars,

galaxies, etc. Those structures increased $S_{\mathrm{BW}}$ enormously even as they seem to decrease ordinary entropy locally (e.g. a gas cloud collapsing). The total entropy including gravitational/heavy degrees of freedom indeed increases 26 , consistent with our thesis. The gravitational entropy concept by Penrose posited that a uniform gas has lower entropy than the same mass in a black hole, because gravity’s negative specific heat leads to clumping increasing entropy 26 . Our framework gives a formal underpinning: a black hole or clumped state is an attractor of huge basin, hence high $S_{\mathrm{BW}}$. General Relativity: If our mapping is correct, then Einstein’s equations are not fundamental but emergent effective laws. The equivalence principle (gravity = inertial acceleration) could be rooted in symmetries of $H$—perhaps diffeomorphism invariance appears because $\pi$ does not pick a preferred coordinate frame (the underlying rule might be homogeneous and does not provide a special backdrop, so $M$ inherits general covariance). The fact that gravity is geometrical (curvature of space-time) here becomes natural: it’s literally the geometry of basin flows in $H$. This also implies new ways to think about singularities or horizons: e.g., a black hole horizon in $M$ might correspond to a kind of information barrier in $H$ beyond which $\pi$ cannot project an interior state (hence observers in $M$ see a loss of info). These ideas align qualitatively with the holographic principle, which suggests that the information content of a volume is encoded on its boundary 27 . In our model, one could imagine the degrees of freedom of $H$ that map into a region of $M$ are somehow reflected by degrees on the boundary of that region (since two different $H$ configurations differing only inside an isolated black hole interior might map to the same $M$ exterior state, meaning $\pi$ loses those distinctions – an analog of horizon information loss, resolved only by understanding $H$ itself). Quantum Mechanics: We can interpret the quantum wavefunction as a description of the distribution of microstates in $H$ that correspond to one $M$-state. The Born rule (probabilities as squared amplitudes) might emerge from combinatorial counts of microstates in $H$ (perhaps related to typicality arguments or the principle of indifference across $\mu$). Decoherence occurs as different coarse observables (projections in $M$) correspond to different partitions of microstates in $H$; when a system becomes correlated with its environment (both part of $H$), the interference between certain microstate sets averages out, leaving effective mixed states for observers. Importantly, under our view, there is no measurement collapse at the $H$ level – the microstate just keeps evolving deterministically. Collapse is an update of the observer’s knowledge in $M$ after they interact (becoming entangled) with the system, and effectively restrict their consideration to a subset of microstates (one branch). Because $H$ is deterministic, one might think this is a hidden-variable interpretation; however, unlike classical hidden-variable theories that often violate Bell inequalities, a non-local hidden deterministic $H$ might still violate Bell’s assumptions subtly (e.g., $H$ could be non-local or have extra dimensions that allow Bell correlations to be carried without signaling in $M$). 
Our scenario shares spirit with ’t Hooft’s conjecture of a dissipative deterministic cellular automaton underlying quantum mechanics, and with works suggesting quantum spacetime could be a kind of error-correcting code or cellular network. The difference is that we emphasize the role of entropy and complexity in shaping which states are realized.

Connections to Computational Theories: The universe as a computational process has been proposed by various authors (Zuse, Fredkin, Wolfram). We add to that narrative a thermodynamic twist: the “computation” naturally compresses data. One could say the universe computes itself into simpler forms. Wolfram observed that simple rules can produce complex patterns; here we find the inverse: among all possible patterns produced, those which themselves have simple descriptions dominate long-term. In

essence, the universe is a self-compressing automaton aiming to reduce its algorithmic complexity while preserving information (since it can’t destroy info). This is reminiscent of some formulations of the Second Law in information theory: entropy increase is information compression (coarse-graining irreversibly combines microstates). Landauer’s principle relates entropy increase with erasure of information bits 28 ; in our context, when a system falls into an attractor, many microstates collapse into one pattern (information about initial conditions is “erased” and entropy is released as heat or radiation). The attractor’s low complexity means a lot of information was thrown away, consistent with Landauer: erasing information produces entropy (here in environment or radiation). But that entropy is exactly what fills the basin, making the attractor stable. So our theory is consistent with thermodynamic cost: the order is bought at the price of increased entropy exported (like heat). A living cell, for instance, is a low $K$ attractor (a structured state) that constantly exports entropy to stay ordered, spreading it into its surroundings.

Extreme Cases: If our conjecture holds, the ultimate fate of our universe might not be heat death in the usual sense, but rather a state of structured equilibrium 29 . Perhaps black holes (or black-hole-like remnants) plus vacuum radiation form a composite attractor of maximal $S_{\mathrm{BW}}$. Or maybe an even more exotic structure arises—e.g., a network of cosmic-scale computation that continually recycles chaos into order. This is speculative, but the framework allows a wide range of final states since it’s not constrained to homogeneity.

It is also worth noting limitations: if $H$ is truly infinite-dimensional and ergodic, our assumptions might break down. Additionally, in systems with many attractors, the dynamics could become stuck in local attractors that are not the absolute maximum $S_{\mathrm{BW}}$ state (akin to metastable phases). This raises questions of thermalization timescales and basin-depth barriers: the system might need rare fluctuations to jump between attractors. In cosmology, this relates to the idea of Boltzmann brains and other freak fluctuations—our theory might suppress those by saying they are high $K$ states with tiny basins, thus extremely unlikely even if entropy allows them.
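
Since the argument leans on Landauer's principle, it is straightforward to attach numbers: erasing one bit of information dissipates at least $k_B T \ln 2$ of heat into the environment. The short worked example below (illustrative temperature and bit count, not figures from the text) evaluates that bound for a hypothetical relaxation onto an attractor.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
T = 300.0               # illustrative environment temperature, K
n_bits = 1e20           # illustrative number of microstate bits "forgotten"
                        # when a subsystem settles onto a low-K attractor

heat_per_bit = k_B * T * math.log(2)          # Landauer bound, J per erased bit
total_heat = n_bits * heat_per_bit            # minimum heat exported while relaxing
entropy_exported = total_heat / T             # = n_bits * k_B * ln 2, in J/K

print(f"Landauer bound: {heat_per_bit:.3e} J per bit at T = {T} K")
print(f"Erasing {n_bits:.1e} bits exports ≥ {total_heat:.3e} J of heat "
      f"({entropy_exported:.3e} J/K of entropy) to the surroundings")
```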

Predictions and Experimental Tests

A compelling aspect of this framework is that it yields concrete differences from standard physics that can, in principle, be looked for. We outline several predictions and how one might test them:

1. Prevalence of Long-Lived Structures

Prediction: There exist conditions under which ordered structures will spontaneously emerge and persist in closed systems more readily than expected from equilibrium theory. For example, self-organizing patterns (vortices, solitons, etc.) in turbulent fluids or plasmas will carry a significant fraction of the system’s entropy and will resist decay. In biology, one might speculate that life itself, as a low $K$ attractor, arises not as an incredibly rare accident but as a natural high-$S_{\mathrm{BW}}$ state for suitable chemical systems – implying that abiogenesis and evolution might be thermodynamically favored given the right driving (this aligns with the idea of life as a dissipation-driven process).

Test: Perform controlled simulations and experiments on closed chaotic systems (e.g., large networks of nonlinear oscillators, or agent-based models) to see if they consistently evolve specific stable motifs. Compare the frequency or robustness of these motifs to predictions from traditional random chance. This was partially done in the FPUT experiment and subsequent studies 30 6 , but we can extend it. One can measure an approximate algorithmic complexity of the system’s state over time (using compression

algorithms as proxies 31 32 ) and measure a coarse entropy. Our theory predicts that after initial transients, the complexity will drop and plateau even as coarse entropy remains high – indicating the selection of a low-$K$ attractor. Such studies have been proposed 33 34 and could validate the principle of "entropy-driven order." If verified widely, it challenges the notion that observed order (like galaxies or life) requires fine-tuned initial conditions; instead, these orders may be attractors that many initial states will lead to.
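
A concrete version of this test can be sketched as follows, with illustrative assumptions throughout: Conway's Game of Life as the toy closed system, zlib-compressed size as the complexity proxy, and the Shannon entropy of block occupation counts as the coarse entropy. The predicted signature would be the complexity proxy dropping and plateauing after the initial transient.

```python
import zlib
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of Conway's Game of Life on a periodic grid."""
    nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)

def k_proxy(grid: np.ndarray) -> int:
    """Compressed size in bytes, used as an upper bound on algorithmic complexity."""
    return len(zlib.compress(np.packbits(grid).tobytes(), level=9))

def coarse_entropy(grid: np.ndarray, block: int = 8) -> float:
    """Shannon entropy (nats) of the distribution of live-cell counts per block."""
    n = grid.shape[0] // block
    counts = grid[:n*block, :n*block].reshape(n, block, n, block).sum(axis=(1, 3)).ravel()
    p = np.bincount(counts, minlength=block*block + 1) / counts.size
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(4)
grid = (rng.random((256, 256)) < 0.35).astype(np.uint8)   # random "hot" initial soup

for t in range(0, 1001):
    if t % 200 == 0:
        print(f"t = {t:4d}   K-proxy = {k_proxy(grid):6d} bytes   "
              f"coarse entropy = {coarse_entropy(grid):.3f} nats")
    grid = life_step(grid)
```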

2. Anomalies in Quantum Decoherence

Prediction: If the underlying reality is deterministic and information-preserving, truly irreversible decoherence might be an approximation. There could be small recurrences or residual coherence in quantum systems that standard decoherence theory (assuming an infinite environment or true randomness) would deem impossible. For instance, an isolated mesoscopic system left to decohere might spontaneously recohere slightly after a long time (akin to Loschmidt echoes). Also, outcomes of quantum measurements might exhibit subtle bias if the basin sizes for different outcomes differ (meaning some outcomes have more micro-realizations in $H$ than others, violating the exact Born rule in a small way).

Test: Extremely sensitive quantum experiments, such as interferometers or Rabi oscillations in nearly isolated systems, could search for unexpected recoherence. One could also test violation of statistical symmetry: e.g., perform a quantum coin-flip (like measuring spin) many times to see if heads vs tails occur with frequencies deviating from 50% beyond expected randomness. If a pattern in outcomes emerges that correlates with e.g. macro parameters, it might hint that the projection favors certain outcomes (basins) slightly. Admittedly, any such effect must be tiny or it would have been noticed, but advances in quantum control might reach regimes to detect one. Additionally, experiments on wavefunction tails (like the detection of extremely rare events predicted by quantum theory) could reveal discrepancies if some branches of the wavefunction correspond to negligible basins in $H$ and thus essentially never occur.
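
The outcome-bias search is statistically straightforward to size. The sketch below (normal approximation to the binomial; the deviation sizes and the 5-sigma criterion are illustrative choices) estimates how many repeated binary measurements are needed to resolve a given Born-rule deviation $\epsilon$.

```python
import math

def trials_needed(epsilon: float, z: float = 5.0) -> int:
    """Number of binary measurements needed so that a Born-rule deviation
    of size epsilon (P = 1/2 + epsilon) stands out at z standard errors,
    using the normal approximation to the binomial distribution."""
    return math.ceil((z * 0.5 / epsilon) ** 2)

for eps in (1e-3, 1e-5, 1e-7):   # illustrative deviation sizes
    print(f"bias ε = {eps:.0e}  ->  ≈ {trials_needed(eps):.2e} measurements for 5σ")
```

The quadratic scaling in $1/\epsilon$ makes clear why any residual bias compatible with existing data must be extraordinarily small, and why only very high-repetition experiments could constrain it further.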

3. Planck-Scale "Stride" Effects

Prediction: Because $H$ might be discrete or involve a smallest time-step, Lorentz invariance in $M$ might be only approximate. Specifically, at Planck energy or length scales, there could be observable consequences of spacetime “pixelation” or the deterministic rule’s update structure. Examples include:

- Energy-dependent speed of light: High-energy photons may travel at speeds slightly different from $c$ or exhibit a spread in speeds (if the underlying rule has stochastic components) 11 . This could manifest as frequency dispersion over cosmic distances. Some quantum gravity models predict $v(E) \approx c[1 - \xi (E/E_{\text{Pl}})]$, violating exact Lorentz invariance, and experimental limits already push $\xi$ very low 12 . Our theory also allows such effects, though it is not specific about their form.
- High-frequency cutoff in gravitational waves: If spacetime is emergent, gravitational waves above a certain frequency might not propagate normally (the continuum description breaks down). This could be tested in future high-frequency gravitational wave observations or resonant experiments.
- Cosmic microwave background anomalies: Subtle statistical anomalies in the CMB at the largest scales (some have been observed, like unexpected alignments) might hint that our cosmology didn’t start at a generic high-entropy state but rather in a special low-$K$ state (perhaps the universe itself is an attractor of some bigger system). While speculative, any pattern in supposedly random primordial fluctuations could point toward an underlying rule imprint.
- Discrete spectrum of vacuum fluctuations: Quantum field theory assumes a continuum of modes. A deterministic $H$ might impose a very high-frequency cutoff or specific mode structure. Thus

measurements of high-energy cosmic rays or delicate Casimir force experiments might reveal departures from continuous vacuum behavior.

Test: Astrophysical observations provide one of the best windows. Gamma-ray burst data has been used to constrain energy-dependent photon speeds 35 36 . Continued observation of distant, high-energy events (GRBs, flares) for slight dispersion or decoherence of photon polarization can improve limits. So far, no violation has been found up to ~$E_{\text{Pl}}$ scale within a factor of a few 37 . If our theory is correct, either the underlying $H$ is at even higher scale or the mapping $\pi$ somehow preserves Lorentz invariance extremely well. Another test is with highly sensitive interferometers (e.g., the proposed holometer or future upgrades to LIGO) to detect spacetime foam. If space is fundamentally discrete or information-based, there might be a low-amplitude noise (“holographic noise”) correlated across distances ~ Planck length, as some models like Wheeler’s foam suggest 11 . Current experiments haven’t found anything conclusive, but they’re approaching interesting sensitivity.
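
For the energy-dependent photon-speed test, the expected signal is easy to estimate at first order: for $v(E) \approx c[1 - \xi E/E_{\text{Pl}}]$, two photons differing in energy by $\Delta E$ and travelling a distance $D$ arrive separated by roughly $\Delta t \approx \xi\, (\Delta E/E_{\text{Pl}})\, D/c$ (cosmological-expansion corrections ignored). A worked example with illustrative numbers:

```python
# First-order Lorentz-violation time delay for a gamma-ray burst,
# dt ≈ xi * (dE / E_Planck) * (D / c); expansion corrections are ignored.

c = 2.998e8                    # speed of light, m/s
E_PLANCK_GEV = 1.22e19         # Planck energy in GeV
GLY = 9.461e24                 # one billion light years in metres

xi = 1.0                       # illustrative O(1) Lorentz-violation coefficient
delta_E_GeV = 10.0             # energy difference between the two photons, GeV
distance_m = 5.0 * GLY         # illustrative source distance, ~5 Gly

delay_s = xi * (delta_E_GeV / E_PLANCK_GEV) * distance_m / c
print(f"expected arrival-time spread: {delay_s:.3f} s (≈ {delay_s*1e3:.0f} ms)")
```

Spreads of order a tenth of a second over cosmological baselines are the kind of signal GRB timing analyses already constrain, which is why current limits push $\xi$ close to or below order unity.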

4. Unification and Particle Spectrum Clues

Prediction: If forces are emergent, there might be relationships among particle parameters not obvious from the Standard Model alone but natural in $H$. For instance, our approach might imply some order in the masses of elementary particles or coupling constants because they derive from features of the underlying rule. Similarly, gravity’s strength (Newton’s constant) might be calculable from information theory considerations of $H$ rather than arbitrary. This is reminiscent of attempts to compute $G$ from holographic entropy bounds. Additionally, the framework could hint at new particle-like excitations: since $H$ might have simple rule bits, there could be soliton solutions in $H$ that appear as exotic stable particles in $M$ (e.g., topological defects carrying energy). These would be beyond the Standard Model.

Test: Look for unexplained regularities in known data – for example, do the generations of fermions correspond to some fractal pattern in $H$ (like three generations might mean something like three stable oscillation modes)? Or search experimentally for stable exotic particles that behave like topological solitons (e.g. Q-balls, magnetic monopoles) which could be artifacts of the underlying grid. If discovered, such objects would support the idea that spacetime has a microstructure that admits stable localized states.

5. Simulations of Toy Universes

While not a direct experiment, a critical test bed is simulation. We can design cellular automata or high-dimensional dynamical systems to serve as toy models of $H$. Then define an appropriate $\pi$ to identify emergent spacetime and “physics” within them. We can check whether the toy universe exhibits analogs of our predictions: do simple patterns dominate entropy, does an analog of gravity emerge from information geometry, etc. For example, one could extend Conway’s Life to 3+1 dimensions and see if glider-like structures behave like particles with forces. Or use coupled map lattices with constraints to mimic gauge symmetries 38 39 . If these toy models produce convincing phenomenology (like something behaving as inverse-square law attraction, or stable particle scattering), it bolsters our framework’s plausibility. Conversely, failure to get expected behaviors would guide refinement of the theory.
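
As a starting point for such toy universes, the sketch below uses a second-order reversible cellular automaton, so that the information-conservation assumption on $H$ holds exactly, together with a simple block-density projection as $\pi$; the particular local rule, grid size, and projection are illustrative choices rather than prescriptions from the text.

```python
import numpy as np

def local_rule(grid: np.ndarray) -> np.ndarray:
    """Any deterministic local rule; here: 1 if the 4-neighbour sum is odd."""
    s = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
         np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return (s % 2).astype(np.uint8)

def step(curr: np.ndarray, prev: np.ndarray):
    """Second-order reversible update: next = rule(curr) XOR prev.
    Knowing (next, curr) recovers prev exactly, so no information is lost."""
    return local_rule(curr) ^ prev, curr

def project(grid: np.ndarray, block: int = 8) -> np.ndarray:
    """pi: coarse block densities, the 'emergent' description an observer sees."""
    n = grid.shape[0] // block
    return grid.reshape(n, block, n, block).mean(axis=(1, 3))

rng = np.random.default_rng(5)
size = 128
curr = (rng.random((size, size)) < 0.1).astype(np.uint8)
prev = np.zeros_like(curr)

for t in range(1, 201):
    curr, prev = step(curr, prev)
    if t % 50 == 0:
        coarse = project(curr)
        print(f"t = {t:3d}: mean coarse density = {coarse.mean():.3f}, "
              f"max block density = {coarse.max():.3f}")
```

Richer, nonlinear rules (or the 3+1-dimensional Life variant mentioned above) can be dropped into the same harness; the point of the scaffold is that the fine dynamics stays exactly invertible while the observer-level description lives entirely in the projected field.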

Discussion

We have presented a novel theoretical framework that merges ideas from thermodynamics, information theory, and dynamical systems to address deep questions in fundamental physics. The core premise is that

the “engine” of cosmic order is entropy itself, once entropy is correctly understood as a dynamical measure (basin-weighted) rather than a static count of microstates 40 41 . Order is not produced in spite of the Second Law but because of it: the Second Law drives systems into the largest, most accessible basins of attraction, which correspond to structured, low-complexity states. This view inverts the conventional relationship between entropy and order, offering a resolution to the long-standing puzzle of why a universe that began in a low-entropy state has not devolved into featureless chaos but instead features increasing complexity. One might worry that our redefinition of entropy is too opportunistic – after all, one could define many quantities. The justification lies in the empirical adequacy: if $S_{\mathrm{BW}}$ can be shown to remain non-decreasing and to correctly predict equilibrium behavior in known scenarios (like why gravitating systems form structures, why certain chemical systems self-organize), then it has earned its keep as a physical entropy. Early investigations are promising: for instance, recent work in structure-forming thermodynamics adds extra terms to entropy to account for clustering 42 43 , which is in spirit similar to our approach of adding basin counts. Our contribution is identifying algorithmic compressibility as the indicator of large basin size and thus linking the emergence of order to simplicity of description. The reinterpretation of gravity is radical but sits amidst growing discontent with standard approaches to quantum gravity. If gravity is emergent and not fundamental, it explains the decades of failure to quantize it: there is no graviton at small scales, because spacetime’s microstructure is not a smooth field to quantize but something wholly different (bits of information or a network). Gravity instead is an effective statistical force like elasticity or fluid pressure, which arises from underlying degrees of freedom. Some prior approaches, like entropic gravity (Verlinde), posit gravity as an entropic force arising from changes in entropy when matter moves 25 . Our view is compatible but more concrete: we pinpoint those entropic degrees to basins in $H$. We also echo ideas from Causal Set theory and Loop Quantum Gravity that spacetime is fundamentally discrete or combinatorial, though we don’t adopt their specific structures. A striking aspect is how this framework naturally accommodates and even demands an arrow of time. In standard cosmology, the arrow of time is put in by hand via low initial entropy. Here, because $H$’s rule is deterministic, the past hypothesis would be that initially the system was not in a deep basin (likely a high complexity state). Then as it evolved, $S_{\mathrm{BW}}$ increased. In principle, if one ran the $H$ dynamics backwards, one would leave attractors and go to more finely tuned states – highly unlikely. Thus the asymmetric behavior is built-in: attractors attract forward in time, not backward. The time-reversal of an attractor flow is an unstable divergent flow. This gives a microscopic rationale for the Second Law: the time reverse of our universe’s trajectory would involve ephemeral structures falling apart and un-mixing in incredibly coordinated ways, which correspond to tiny measure in $H$ (so an initial state that yields that backward behavior is essentially of measure zero in $\mu$). 
Thus, while $H$’s laws are reversible, the condition of being in a generic state with no special tuning ensures that an entropic arrow emerges with overwhelming probability.

One potential philosophical implication concerns the anthropic principle and fine-tuning. In the conventional view, life and complexity are unusual and require special conditions (leading to multiverse ideas to “explain” why we see them). In our view, complexity may be a natural outcome of one underlying law, meaning our universe might not be rare or finely tuned for complexity at all; rather, any deterministic universe with similar rules would fill with structure. This could shift the narrative from anthropic selection to a kind of “generalized second law selection.” It also means that rather than many universes, one might imagine many attractors in $H$; perhaps what we call different “laws of physics” could emerge in different regimes of $H$ (for instance, different vacuum phases of the underlying rule yielding different effective constants).

The real fundamental law might be simple (hence itself of low $K$) and unify what we see as separate forces in one rule. Our approach might guide the search for that rule: we would look for a deterministic rule that, when iterated, produces increasing structured complexity.

We should underscore several open issues and challenges:

• Formal Derivations: We have sketched how Einstein’s equations or Schrödinger’s equation might emerge, but we have not derived them. A priority is to find a toy $H$ where $\pi$ can be carried out exactly and shown to yield known physics equations. Alternatively, one might attempt a more general derivation: for example, show that requiring that $\pi$ preserve local energy-momentum leads to Einstein’s equations as an emergent thermodynamic identity (analogous to Jacobson’s derivation of the Einstein equations from the entropy-area law).

• Quantifying Basin Size: For high-dimensional continuous systems, defining and computing $\mu[B(A)]$ can be nontrivial. Tools from ergodic theory (like the basin entropy measure 44) might help classify basins (fractal boundaries, etc.); a minimal sampling sketch follows this list. We need to ensure that $S_{\mathrm{BW}}$ is well defined, and perhaps find a more convenient surrogate. Algorithmic probability (Solomonoff) could possibly serve: the probability that a random program produces a certain pattern is related to how many initial states yield that pattern. This ties into the notion of a universal prior favoring low-$K$ outcomes, which is essentially our principle.

• Testing Emergence vs. Fundamentality: If future experiments detect any of the predicted anomalies (e.g., Lorentz violation or subtle quantum deviations), it will bolster emergent theories at the cost of elegant fundamental symmetry. If they do not (and the symmetries hold to arbitrary scales), one might fall back on everything being exactly quantum field theoretic. Our theory is flexible enough that if $H$ is extremely fine-grained (Planck scale or beyond), current null results are not surprising. But it will eventually face a reckoning: either evidence of underlying structure is found, or the theory risks becoming untestable metaphysics. We have tried to avoid that by enumerating many potential tests.

• Complexity of $H$: We assumed $H$ to be something like a giant cellular automaton or a single equation, but it could instead be a complex adaptive network. If $H$ itself has structure (perhaps it is an attractor in a yet bigger space), one risks an infinite regress. However, at some point physics must take an initial condition as given. We simply push it deeper: the initial condition of our $M$ was an output of $H$ dynamics, and perhaps $H$ itself had a simple initial condition. If $H$ is finite and deterministic, it may have had low entropy and no arrow of time in the beginning, and one could repeat the question at that level. We might speculate that ultimately $H$ is a fixed rule (perhaps even a timeless mathematical structure) and that all arrows of time are internal to it; then there is no meta-arrow beyond it.
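As a concrete handle on the basin-size question raised above, the following sketch estimates $\mu[B(A)]$ for a toy one-dimensional dissipative map by uniform sampling and reports the basin entropy of the sampled window in the sense of the basin entropy measure cited above. It is a sketch under stated assumptions, not part of the framework itself: the tilted double-well map, the window $[-2, 2]$, the sample count, and the helper names are illustrative choices of ours. In a realistic high-dimensional $H$, such sampling (or an algorithmic-probability surrogate) would stand in for exhaustive enumeration.

```python
import math
import random

# Toy 1D dissipative map (illustrative stand-in, not the paper's H):
# overdamped descent in the tilted double well V(x) = x**4/4 - x**2/2 - 0.2*x,
# with stable fixed points near x ~ 1.09 and x ~ -0.88 and a basin boundary
# at the unstable fixed point near x ~ -0.21.
def step(x, dt=0.1):
    return x + dt * (-(x**3) + x + 0.2)   # x_{n+1} = x_n - dt * V'(x_n)

def attractor_of(x0, n_steps=500):
    """Iterate forward and report which stable fixed point captures x0."""
    x = x0
    for _ in range(n_steps):
        x = step(x)
    return "right well" if x > 0 else "left well"

# Monte Carlo estimate of the basin measures mu[B(A)] over the window [-2, 2].
random.seed(0)
n_samples = 20_000
counts = {"right well": 0, "left well": 0}
for _ in range(n_samples):
    counts[attractor_of(random.uniform(-2.0, 2.0))] += 1

fractions = {a: c / n_samples for a, c in counts.items()}
print("estimated basin fractions:", fractions)   # expect roughly 0.55 / 0.45

# Basin entropy of the sampled window: S_b = -sum_i p_i * ln(p_i).
S_b = -sum(p * math.log(p) for p in fractions.values() if p > 0)
print(f"basin entropy of the window: {S_b:.3f} nats")
```

The estimated fractions are the quantities that a basin-weighted entropy would use to weight the corresponding attractor states; only the sampling scheme, not any particular weighting formula, is asserted here.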
Conclusion

We have outlined a comprehensive theoretical framework where entropy, when properly weighted by dynamical basin size, becomes the driving force for complexity and order in the universe. This Basin-Weighted Entropy maximization principle provides a unifying explanation for phenomena ranging from spontaneous self-organization to cosmological structure formation, all under the umbrella of deterministic evolution. By integrating this with a projection paradigm for emergent spacetime, we gain fresh insights into gravity and quantum mechanics: gravity emerges as the shape of information flow in the underlying state space, and quantum indeterminacy as the shadow of unseen deterministic variables. While many details remain to be worked out, this approach suggests a new direction for the long-sought unification of physics: not by adding more fundamental components (extra particles, dimensions, etc.), but by recognizing that much of what we call fundamental might itself be epiphenomenal, arising from deeper simplicity. If correct, it transforms our understanding of the Second Law from a prognosticator of decay into an engine of creation, and it recasts complexity in the cosmic story as an inevitable byproduct of simple rules. The coming years should show whether the predictions born of this idea find support in experiment and simulation, potentially opening the door to a new paradigm where physics and information theory truly converge.

References

1. L. Boltzmann (1877), “Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung” [On the relation between the second law of the mechanical theory of heat and the probability calculus]. Note: Boltzmann’s entropy formula $S = k_B \ln \Omega$ connects entropy with the microstate count 1.
2. A. N. Kolmogorov (1965), “Three Approaches to the Quantitative Definition of Information”. Note: Defines algorithmic complexity $K(x)$ as the length of the shortest program generating $x$ 5.
3. Zeraoulia Elhadj & J. C. Sprott (2013), “About universal basins of attraction in high-dimensional systems”, Int. J. Bifurcation and Chaos, 23(12):1350197. Note: Defines the basin of attraction as the set of initial conditions leading to an attractor 8.
4. J. D. Farmer (1982), “Solitons and Chaos”. Note: Discusses the Fermi-Pasta-Ulam recurrence, in which a nonlinear system showed quasi-periodic behavior instead of thermalization 6.
5. J. Bekenstein (1973), “Black holes and entropy”, Phys. Rev. D 7:2333; S. Hawking (1975), “Particle creation by black holes”, Comm. Math. Phys. 43:199. Note: The Bekenstein-Hawking entropy $S_{\text{BH}} = \frac{k_B c^3 A}{4 G \hbar}$ shows that black-hole entropy is $\propto$ horizon area 23 24.
6. T. Lewton (2023), “The Physicist Who’s Challenging the Quantum Orthodoxy”, Quanta Magazine, July 10, 2023 9 10. Note: Quotes J. Oppenheim on gravity potentially not being quantizable and on difficulties with quantizing spacetime.
7. V. Vasileiou et al. (2015), “A Planck-scale limit on spacetime fuzziness and stochastic Lorentz invariance violation”, Nature Physics 11:344 11 12. Note: Uses gamma-ray bursts to constrain an energy-dependent speed of light; no violation seen up to the Planck scale (within experimental precision).
8. J. D. Bekenstein (1982), “Entropy bounds and the Second Law of Thermodynamics”, Phys. Rev. D 27:2262. Note: The Bekenstein bound and the idea that the entropy of a system (including gravity) has an upper limit related to area.
9. E. Verlinde (2011), “On the origin of gravity and the laws of Newton”, JHEP 04:029. Note: Proposes gravity as an entropic force emerging from the thermodynamics of microscopic degrees of freedom.
10. S. Wolfram (2002), “A New Kind of Science”. Note: Explores cellular automata; shows that simple rules can yield complex behavior (and sometimes simple emergent laws, e.g. Rule 30 and randomness, Rule 110 and universality).
11. P. Cvitanović et al. (2016), “Chaos: Classical and Quantum”. Note: Textbook covering dynamical systems, attractors, Lyapunov exponents, and ergodic theory, relevant to understanding measure-preserving flows and attractor basins.
12. M. Gell-Mann & S. Lloyd (1996), “Information measures, effective complexity, and total information”, Complexity 2(1):44-52. Note: Discusses how complexity can be measured and how ordered structures have lower algorithmic information than random ones, echoing points in our framework.

(The above references combine established literature and context notes, and serve to ground the concepts used in this paper. Some are classical sources (1, 2, 5, 9), others are recent discussions (6, 7), and a few provide general background.)

Source links
Thermodynamics of structure-forming systems - PMC: https://pmc.ncbi.nlm.nih.gov/articles/PMC7893045/
Kolmogorov complexity - Wikipedia: https://en.wikipedia.org/wiki/Kolmogorov_complexity
Fermi–Pasta–Ulam–Tsingou problem - Wikipedia: https://en.wikipedia.org/wiki/Fermi%E2%80%93Pasta%E2%80%93Ulam%E2%80%93Tsingou_problem
sprott.physics.wisc.edu: https://sprott.physics.wisc.edu/pubs/paper378.pdf
The Physicist Who Bets That Gravity Can’t Be Quantized | Quanta Magazine: https://www.quantamagazine.org/the-physicist-who-bets-that-gravity-cant-be-quantized-20230710/
A Planck-scale limit on spacetime fuzziness and stochastic Lorentz invariance violation | Nature Physics: https://www.nature.com/articles/nphys3270
Liouville's theorem (Hamiltonian) - Wikipedia: https://en.wikipedia.org/wiki/Liouville%27s_theorem_(Hamiltonian)
Attractor - Wikipedia: https://en.wikipedia.org/wiki/Attractor
Black hole thermodynamics - Wikipedia: https://en.wikipedia.org/wiki/Black_hole_thermodynamics
The five most promising ways to quantize gravity - Backreaction: http://backreaction.blogspot.com/2019/09/the-five-most-promising-ways-to.html
Basin entropy: a new tool to analyze uncertainty in dynamical systems | Scientific Reports: https://www.nature.com/articles/srep31416