
Basin-Weighted Entropy and Low-Complexity Attractors: A Deterministic Framework for

Emergent Spacetime

A deterministic framework for emergent spacetime and basin-shaped dynamics.

Key idea: Entropy counts not only the states you are in, but the states that reliably flow into them.
Attractor basins as a landscape: stable configurations correspond to deep valleys with large catchment regions.

Abstract

Traditional entropy, defined via counting microstates, suggests that isolated systems evolve toward maximal disorder. Yet the cosmos manifests enduring structures—from galaxies to life—that seemingly defy naive thermodynamic expectations. We propose a reformulation of entropy that weights each microstate by the size of its basin of attraction in phase space, elevating persistent, ordered structures to high-entropy equilibrium status. In this framework, a closed deterministic universe naturally evolves toward low algorithmic complexity (LAC) attractors that maximize this basin-weighted (BW) entropy. The universe’s fundamental dynamics act as a compression algorithm favoring simple, stable patterns over random chaos. We reinterpret gravity not as a quantum force to be quantized, but as the rule that shapes the topology of basins in a high-dimensional state space whose 4D projection is spacetime itself. Time corresponds to “basin depth” (progress into attractors), while mass is identified with persistent computational structures that deform basin geometry. We formalize these concepts with precise definitions of basin size, attractors, and algorithmic complexity, and we outline a mapping from the high-dimensional deterministic meta-space H to emergent 4D spacetime M. The model reproduces key features of general relativity and quantum mechanics in their respective domains, while providing novel interpretations for the origin of inertia, the speed of light as an information propagation limit, and the nature of other forces as emergent embedded dynamics. We derive consistency conditions required for observers within M to accurately model the projection \pi: H \to M. Finally, we discuss distinctive predictions of this theory—such as the prevalence of long-lived structures, anomalies in quantum decoherence, and possible Planck-scale discretization effects—and suggest experiments and simulations to test the paradigm.

Introduction

Physical entropy is conventionally understood as a measure of disorder: for a given macrostate, entropy S is proportional to the logarithm of the number of microscopic configurations (microstates) realizing that macrostate 1 . By this definition, highly ordered states have low entropy (fewer microstates), and the Second Law of Thermodynamics dictates that an isolated system will likely evolve toward macrostates of higher S, i.e. greater disorder. This paradigm has long posed a paradox: Why does the universe exhibit persistent organized structures—galaxies, stars, planets, biological life—despite the inexorable increase of entropy? The apparent contradiction leads to the so-called entropy–order paradox 2 . Standard statistical mechanics would expect a closed system to settle into featureless equilibrium (maximum entropy randomness), yet our universe has maintained and even grown complex hierarchical order over billions of years.

Several lines of thought hint that this paradox is only apparent. First, order is not synonymous with complexity in the algorithmic sense 3 . A random high-entropy state is actually more complex (incompressible) than a highly ordered configuration, which often has a simple description. For example, a crystal or a repetitive pattern can be generated by a short algorithm, whereas a thermalized gas with truly random particle positions requires far more information to specify 4 5 . Thus, ordered states can be seen as having low algorithmic complexity even if they are low-entropy by the traditional definition. Second, dynamical systems theory demonstrates that nonlinear deterministic processes can spontaneously produce stable patterns or cycles (attractors) that resist disruption. The classic Fermi–Pasta–Ulam–Tsingou (FPUT) numerical experiment (1955) is emblematic: a nonlinearly coupled oscillator chain, expected to thermalize, instead exhibited quasi-periodic recurrences, evading full ergodic mixing for surprisingly long times 6 . This shows that even in an isolated system, ergodicity can be broken and energy can remain trapped in ordered motion (a low-complexity attractor) rather than dispersing uniformly. Likewise, solitons in nonlinear media and persistent structures in cellular automata (e.g. gliders in Conway’s Game of Life) illustrate how simple rules yield self-organizing patterns that endure amid chaos.

We are motivated by these insights to hypothesize that entropy maximization in a deterministic universe is fully compatible with – and indeed drives – the formation of order 7 . We propose that the key missing ingredient in the traditional entropy count is the size of each state’s basin of attraction. The basin of attraction of an attractor (a stable long-term state or cycle) is the set of all initial conditions in phase space that asymptotically lead into that attractor 8 . While an ordered macrostate may occupy a smaller region of phase space at any instant (fewer instantaneous microstates) than a disordered one, it may possess a much larger basin: many microstates dynamically flow into and remain near that ordered configuration. In other words, the dynamical stability and persistence of a state must be considered. We therefore redefine entropy in a dynamical, basin-weighted sense: a macrostate’s entropy is augmented by the log of the measure of its basin of attraction. Under this definition, persistent ordered structures can carry extremely high entropy, since they encompass a huge range of initial conditions that lead to essentially the same long-lived outcome. The Second Law in this reinterpretation implies that an isolated system will evolve toward macrostates that maximize this basin-weighted entropy (BW entropy), which often correspond to highly stable, structured configurations rather than homogeneous disorder.

This paper develops a rigorous formulation of the above ideas. In Section 3 (Formalism) we introduce mathematical definitions for basin size, algorithmic complexity, and attractors in the context of a deterministic state space. We define a measure-theoretic entropy that includes basin weights, and we derive a variational characterization of the low algorithmic complexity (LAC) attractors favored by long-term evolution. We then present a formal mapping \pi: H \to M from an underlying high-dimensional deterministic microscopic space H to the emergent physical spacetime M. In this mapping, what we perceive as 4-dimensional spacetime (with matter and fields) is an effective, coarse-grained description of the dynamics in H.
Gravity is reinterpreted as a manifestation of this projection: rather than a fundamental quantized field, gravity corresponds to the shaping of basin geometry in H and its induced curvature in M. As elaborated in Section 4 (Physical Interpretation), time corresponds to movement into attractor basins (basin “depth”), the speed of light c emerges as the finite speed at which information propagates through H (reflecting the update rules’ locality), and mass corresponds to stable computational structures in H that deform the local basin topology (producing the effect of spacetime curvature in M). Other forces (electromagnetism, etc.) appear as additional emergent fields or effective rules within the projected 4D landscape, rather than separate fundamental interactions. We show that this framework can recover the phenomenology of General Relativity and quantum field theory in appropriate limits, while offering a novel explanation for why straightforward quantum gravity approaches have struggled: if gravity is the manifestation of the projection rule itself (the geometry of M induced by H), it may not be amenable to quantization in the same manner as fields within M 9 10 .

In Section 5 (Relation to Known Physics), we connect our basin-weighted entropy to extant concepts in statistical mechanics and cosmology. We formally relate BW entropy to coarse-grained (Gibbs) entropy and to Kolmogorov-Sinai entropy in dynamical systems, and we discuss how this approach reframes the question of the universe’s low initial entropy. We also map elements of our high-dimensional deterministic model onto familiar structures: for example, we discuss how Einstein’s field equations might emerge as effective, statistical descriptions of basin geometry influenced by mass-energy, and how quantum behavior could result from observers in M only accessing coarse-grained states of H.

Section 6 (Predictions) outlines distinctive predictions of our theory that differentiate it from the standard paradigm. Among these are: (i) Structure Persistence: The prevalence of long-lived, self-organized structures (at all scales) is expected to be higher than naive equilibrium thermodynamics would predict, implying, for instance, unusual stability of certain complex systems or faster re-formation of order after perturbations. (ii) Anomalies in Decoherence: Because quantum decoherence in this view is an emergent process governed by underlying deterministic dynamics, there may be small deviations from the predictions of random-environment decoherence models – potentially observable in precision quantum experiments as tiny residual coherence or systematic collapse biases. (iii) Planck-Scale Discreteness: Since H may be discrete or structured at the Planck scale, there could be “stride” artifacts in spacetime at extremely high energies or small scales – for example, a frequency-dependent speed of light or dispersion of gamma rays at energies approaching the Planck energy, as some quantum gravity models predict 11 12 . We suggest possible experiments, such as high-precision astrophysical timing or tests of quantum gravitational noise, that could reveal such effects.

Finally, in Section 7 (Discussion) we consider the implications and limitations of this framework. We emphasize the need for self-consistency: any observer within the emergent spacetime M must be able to construct an effective description of physics (the “laws of nature”) that is coherent and agrees with observations, even though the true underlying dynamics reside in H. This imposes constraints on \pi and on the nature of the high-dimensional rule (for instance, it should respect symmetries that manifest as Lorentz invariance in M, and it should produce statistical outcomes that match quantum probabilities for observers who lack access to the microstate details). We discuss how classical general relativity and quantum mechanics arise as limiting cases of our framework and how the perspective presented here might resolve certain puzzles (such as the black hole information paradox and the arrow of time). We conclude by outlining open mathematical challenges and the next steps required to further develop and test this theory.

Formalism

State Space and Dynamics (H)

Let H be a high-dimensional state space representing the fundamental degrees of freedom of the universe. H could be, for example, a space of microscopic configurations (bit strings, field values on a lattice, or some abstract phase space) of enormous dimensionality N \gg 4. We assume the evolution of the complete state is deterministic and governed by a rule F (discrete time) or a flow f^t (continuous time):

x_{t+1} \;=\; F(x_t) \quad \text{(discrete time)}, \qquad x(t) \;=\; f^t(x_0) \quad \text{(continuous time)}.

We make two key assumptions: (1) Conservation of information: F is bijective (or f^t is invertible), reflecting that no information is lost or created (this aligns with microscopic reversibility in physics). (2) Measure preservation, described next.

Measure preservation

There exists a natural measure \mu on H (such as Liouville volume in Hamiltonian phase space) that is preserved by the dynamics 13 14 . Intuitively, if we take a uniform distribution of initial states in some region of H, under evolution they get redistributed but the volume (measure) of that ensemble in H remains constant. This ensures a fair counting of microstates over time.

Microstates and Macrostates

A microstate is a point x \in H specifying the full fine-grained configuration. We define a macrostate M_\alpha as an equivalence class of microstates that are macroscopically indistinguishable (according to some coarse-graining relevant to observers). For example, in conventional thermodynamics M_\alpha might fix macroscopic quantities like energy, volume, etc., and includes all x consistent with those. Here, we will ultimately let macrostates correspond to attractor outcomes (long-lived patterns), but we begin generally. Let \Omega(M_\alpha) = \{\,x \in H: x \text{ is in macrostate } M_\alpha\,\} denote the set of microstates comprising macrostate M_\alpha. The Boltzmann entropy of M_\alpha is S_{\mathrm{B}}(M_\alpha) = k_B \ln [\mathrm{Vol}(\Omega(M_\alpha))], where \mathrm{Vol} is the counting measure or \mu-measure of that region 1 . This is the traditional entropy, which depends only on instantaneous counting of microstates.

Attractors and Basins

An attractor A \subset H is a set (point, cycle, or more complex invariant set) toward which the system tends to evolve from a range of initial conditions. More formally, A is an attractor if: (i) it is invariant (f^t(A)=A for t>0), and (ii) it has a basin of attraction B(A) which is an open set of initial states that approach A as t \to \infty 15 16 . The basin size can be quantified by \mu[B(A)] , the measure of the basin under \mu. (If there are multiple attractors, these basins partition the state space aside from measure-zero boundaries 8 .) Attractors can be fixed points (static configurations), limit cycles (periodic orbits), or even strange attractors (chaotic yet confined regions). In our context, examples of attractors might include stable structures like a galaxy configuration, a solar system, or a living organism’s homeostatic state. Note that in a conservative Hamiltonian system without dissipation, strictly speaking there are no asymptotic attractors (Poincaré recurrence implies eventual return). However, metastable long-lived states can effectively function as attractors on relevant timescales. We will assume some mechanism (perhaps coarse-graining or slight dissipation) that allows attractors to be well-defined for the behavior of interest. Now we integrate these concepts to redefine entropy in a way that accounts for dynamical stability:
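As a concrete, if highly simplified, illustration of how \mu[B(A)] can be estimated in practice, the sketch below uses a damped double-well oscillator (a toy dissipative system chosen purely for illustration, not a model from this paper): initial conditions are sampled uniformly from a bounded region of phase space and classified by the fixed point they settle into, so the fraction of samples per attractor is a Monte Carlo estimate of its normalized basin measure.

```python
import numpy as np

# Toy dissipative system (an illustrative stand-in, not the paper's model):
# a damped particle in the double-well potential V(x) = x^4/4 - x^2/2,
#   x'' = -gamma x' - V'(x),
# which has two fixed-point attractors, at x = -1 and x = +1.

def basin_fractions(n_samples=5000, gamma=0.5, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    # Uniform sampling over a bounded region of phase space plays the role of
    # the measure mu restricted to that region.
    x = rng.uniform(-2.0, 2.0, n_samples)
    v = rng.uniform(-2.0, 2.0, n_samples)
    for _ in range(steps):                      # crude explicit Euler integration
        a = -gamma * v - (x**3 - x)             # acceleration = -gamma*v - V'(x)
        v = v + a * dt
        x = x + v * dt
    labels = np.where(x > 0, +1, -1)            # which well each trajectory settled into
    return {s: float(np.mean(labels == s)) for s in (-1, +1)}

fracs = basin_fractions()
for attractor, frac in fracs.items():
    # frac is a Monte Carlo estimate of mu[B(A)] / mu(sampled region)
    print(f"attractor x = {attractor:+d}: basin fraction ≈ {frac:.3f}")
```

The resulting fractions are exactly the quantities that enter the basin-weighted entropy defined next.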

Basin-Weighted Entropy (BW Entropy)

We define the basin-weighted entropy S_{\mathrm{BW}}(A) of an attractor (or the macrostate corresponding to that attractor) as proportional to the logarithm of its basin volume:

S_{\mathrm{BW}}(A) \;\equiv\; k_B \ln \mu\big[B(A)\big] \,.

In words, S_{\mathrm{BW}} counts not just the microstates currently in configuration A, but all microstates that will flow into A (and remain there, up to fluctuations) over time. Because \mu[B(A)] may be enormously larger than \mu(A) itself, a highly ordered A can carry very large S_{\mathrm{BW}}. This redefinition formalizes the intuition that the entropy of a persistent structure includes the entropy of all the chaotic states that give rise to it. For example, consider a glider in Conway’s Game of Life (a simple cellular automaton): the glider is a simple moving pattern (low algorithmic complexity) and occupies a few cells (few instantaneous microstates), yet if we regard the glider as an attractor in the state space of the automaton, its basin includes many random initial patterns that will eventually produce a glider. Thus in our sense a glider can be a high-entropy state because it is the robust outcome of many unpredictable initial configurations. Similarly, we might consider a galaxy as an attractor for matter under gravity: many initial distributions of a protogalactic cloud collapse into the same virialized galactic structure. By counting those initial configurations in B(A), the galaxy state can be assigned a very high entropy despite its evident order. We can extend this idea to general macrostates (not necessarily final attractors) by defining BW entropy for a macrostate M_\alpha as

S_{\mathrm{BW}}(M_\alpha) \;=\; k_B \ln \Big( \sum_{\beta:\; M_\beta \text{ leads to } M_\alpha} \mu[\Omega(M_\beta)] \Big),

i.e. summing the weights of microstates in all macrostates M_\beta that dynamically evolve into M_\alpha. In practice, however, it is most natural to identify M_\alpha with an attractor or long-lived equilibrium state, since only then is the notion of “leads to” unambiguous as t\to\infty. We therefore focus on attractor entropies. Because \mu is preserved by the dynamics, one can show that the total S_{\mathrm{BW}} of the system (summing or integrating over all attractors weighted by their basin measures) is constant and equal to the traditional S of the whole closed system (which is fixed, since the microstate evolves deterministically). However, S_{\mathrm{BW}} can be re-partitioned among macrostates over time. The Second Law, restated in this context, says that S_{\mathrm{BW}} tends to concentrate into whichever macrostate(s) have the largest basins. In essence, the equilibrium state is the one whose attractor basin dominates phase space volume. This yields a criterion for predicting long-term outcomes: the system will likely end up in the macrostate that has the largest basin of attraction, even if that macrostate has fewer instantaneous microstates than some other configurations. This principle resolves the entropy–order paradox: ordered attractors, by virtue of large basins and longevity, can outweigh disordered states in the entropy ledger.
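As a minimal worked illustration of this bookkeeping (the numbers are invented purely for the example): suppose the accessible region of phase space splits into two basins, one draining into an ordered attractor A_1 and one into a disordered macrostate A_2, with

\mu[B(A_1)] = 0.9\,\mu_{\text{tot}}, \qquad \mu[B(A_2)] = 0.1\,\mu_{\text{tot}} \quad\Rightarrow\quad S_{\mathrm{BW}}(A_1) - S_{\mathrm{BW}}(A_2) = k_B \ln(0.9/0.1) = k_B \ln 9 \approx 2.2\,k_B .

Even if A_1 occupies far fewer instantaneous microstates than A_2, the basin-weighted comparison is controlled entirely by the ratio of basin measures, so A_1 is the predicted long-term outcome.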

Algorithmic Complexity and Attractors

To quantify the “simplicity” of structures, we invoke algorithmic complexity. The Kolmogorov complexity K(x) of a microstate x is the length (in bits) of the shortest description (program) that generates x on a universal computer 17 18 . This quantity K(x) measures the information content of the state beyond compressible patterns 19 . Ordered states (crystalline, regular, symmetric) have low K because they are algorithmically compressible 19 , whereas random states have high K. We extend this concept to macrostates or attractors by considering the complexity of describing the pattern or rule defining the attractor. For example, an attractor that is a simple limit cycle might be described by a short set of equations or symmetries, giving it low complexity. We hypothesize that the attractors which maximize S_{\mathrm{BW}} (basin entropy) are precisely those with low K (algorithmic complexity). Intuitively, simple patterns often come from many situations (large basin) because they do not require fine-tuned initial conditions to form; by contrast, a highly complex pattern (random-looking) needs very specific initial configurations to arise and is easily perturbed, so its basin of attraction (if it has one at all) is tiny. This leads to a variational principle: Conjecture (Maximum BW Entropy = Minimum Complexity): In a closed, isolated deterministic system, the trajectory will asymptotically settle into the macrostate(s) that minimize algorithmic complexity K, subject to the constraint that the traditional entropy S is maximal. Equivalently, among all high-entropy macrostates, the system favors the one(s) with simplest structure 20 21 . Over long times, the phase-space measure \mu becomes concentrated on attractors that are compressible (low K) 22 .

We can express this idea more formally. Let \mathcal{S} be the state space and suppose S(x) and K(x) can be defined for each state (where S(x) here means the Boltzmann entropy of the macrostate containing x). Then in an asymptotic or ergodic sense, we expect:

\text{Long-time evolution}:\quad x(t) \;\longrightarrow\; \arg\max_{x \in \mathcal{S}} S_{\mathrm{BW}}(x) \;\approx\; \arg\min_{x \in \mathcal{S}} K(x) \quad \text{subject to } S(x) \approx S_{\max}\,.

A heuristic Lagrange multiplier argument leads to selecting states that extremize S - \lambda K for some \lambda > 0. This aligns with the idea of entropy-driven compression of information: the system seeks states that are high entropy and simple. In practice, complete maximization of S may be achieved by a mixture of attractors; but if one attractor has overwhelmingly larger basin measure, it will dominate the ensemble (becoming overwhelmingly probable). The conjecture can be rephrased in thermodynamic terms: the equilibrium of a deterministic system is a structured equilibrium 7 with maximal entropy production into simple patterns rather than into randomness. As an example, consider gravitational collapse of a cloud of gas. Traditional entropy would count more states for a diffuse spread-out gas than for a single concentrated mass. Yet gravity (a deterministic rule) drives the system into a galaxy or star—a highly structured state. Our framework explains this by noting that the galaxy is an attractor (due to gravity’s long-range instability of uniformity) with a huge basin: almost any sufficiently massive cloud will collapse into some star/galaxy configuration. The diffuse gas is unstable (not an attractor; small perturbations grow). Thus the “entropy” including dynamics is higher for the clumped state. Indeed, black holes represent extreme cases: they are very ordered (just a few parameters describe a black hole) yet have maximal entropy S_{\text{BH}} = \frac{k_B c^3 A}{4 G \hbar} proportional to horizon area 23 24 , which is the largest entropy possible for a given mass-energy. In our terms, a black hole’s simplicity (low K description) and stability give it a vast basin of attraction (any matter that comes near falls in), making it a high-S_{\mathrm{BW}} state despite its apparent order. This reconciles why black holes can be considered thermodynamic equilibrium objects with well-defined entropy 23 even though they are extremely "ordered" geometrical objects.
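Since K(x) itself is uncomputable, any numerical test of this conjecture has to fall back on a compressor as an upper-bound proxy, as Section 6 also suggests. A minimal sketch of that proxy (the two example states are invented for illustration):

```python
import zlib
import numpy as np

def complexity_proxy(bits: np.ndarray) -> int:
    """Upper-bound proxy for Kolmogorov complexity K: compressed size (bytes)
    of the bit-packed state. Real K is uncomputable; a compressor only bounds it."""
    return len(zlib.compress(np.packbits(bits.astype(np.uint8)).tobytes(), 9))

rng = np.random.default_rng(1)
n = 10_000
ordered = np.tile([0, 1], n // 2)          # periodic, highly compressible pattern
disordered = rng.integers(0, 2, size=n)    # random bits, essentially incompressible

print("K-proxy (ordered):   ", complexity_proxy(ordered))     # tens of bytes
print("K-proxy (disordered):", complexity_proxy(disordered))  # roughly n/8 bytes
```

Applied to snapshots of a simulated closed system, the conjecture predicts that this proxy drops and plateaus over time even while the coarse-grained entropy remains high.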

Mapping the Meta-Space to Physical Space

\pi: H \to M

We now formalize the idea that our familiar 4-dimensional spacetime with its fields is an emergent, coarse-grained projection of the fundamental deterministic system in H. Let M be a 4D differentiable manifold (spacetime) equipped with physical fields (including matter distributions, gauge fields, etc.) that observers in the universe experience. We posit the existence of a surjective projection (many-to-one map) \pi: H \to M, which maps each microstate x \in H to a corresponding macrostate description \pi(x) \in M. The map \pi encapsulates how high-level physical reality emerges from the microscopic configuration.

Properties of \pi:

In summary, \pi is defined so that M behaves like our physical world. The deterministic evolution in H under F induces an evolution in M that follows effective laws (which we recognize as physics). For \pi to be useful, an observer in M (who has access only to the macrostate) must be able, at least in principle, to infer regularities and laws without knowing H. Now, let us articulate the specific correspondence for gravity and other features:

The key point is that these forces are not independent fundamental inputs in this view; they are effective behaviors of the deterministic rule when viewed in projection. This is analogous to how, in automata or fluid simulations, emergent effective forces (like pressure or viscosity) arise from underlying particle collisions.

Self-Consistency for Internal Observers

An observer O is itself a physical system within M (and correspondingly a subset of degrees of freedom in H) that can perform experiments and record information. For our theory to be viable, it must be internally consistent: observers who model physics using M (unaware of H) should not encounter contradictions. This imposes several conditions:

  1. Macro-causality: If the fundamental rule F in H is local and causal, then \pi must ensure that causality in M is respected. No observer should observe a signal that violates relativistic causality. This requires that \pi does not map distant, causally independent events in H to proximate local events in M without a connecting cause. We assume \pi is designed to preserve the light-cone structure (as discussed with c above).
  2. Stable Laws: The emergent laws in M (e.g., equations of motion, conservation laws) should hold reliably under \pi for typical trajectories of H. That means fluctuations or micro-details in H should average out so that M-laws are not erratic. For instance, if we derive an M-level stress-energy tensor from averaging H, it should satisfy a conservation law \nabla_\mu T^{\mu\nu}=0 if \pi is properly capturing symmetries of F. Observers expect energy-momentum conservation, locality, etc., so those must be guaranteed by structural aspects of H and \pi. In short, \pi must commute with the dynamics in a certain sense: projecting after evolving or evolving the projected state should give consistent results. Formally, \pi(f^t(x)) should be well-approximated by the evolution of \pi(x) via some effective F_{\text{eff}} on M. If not exactly (since M is losing info), then within the limits of experimental precision. This is analogous to requiring that coarse-graining a molecular simulation yields a continuum fluid obeying Navier-Stokes, etc. (A toy numerical check of this commutation requirement is sketched just after this list.)
  3. Observer-Indistinguishability of H: No observer confined to M should be able to directly detect the underlying discrete or high-dimensional structure of H except through the effective phenomena we predict (like subtle Planck-scale effects). Their measurements and theories should remain consistent with a self-contained description in M (quantum field theory, GR, etc.) up to those small anomalies. This means \pi should be such that most microstate differences only manifest as quantum noise or unresolvable uncertainty in M. For example, two microstates in H that map to the same \pi(x) might lead to slightly different subsequent M-trajectories, but those differences appear as quantum randomness or chaotic unpredictability rather than blatant violations of conservation or logic. In this sense, quantum indeterminacy is a feature: it masks the underlying determinism in a way that observers cannot easily unravel (as conjectured by 't Hooft in his deterministic hidden-variable interpretations of QM).
  4. Reciprocity: If an observer in M builds a computer or simulator attempting to model fundamental physics, they are effectively creating a subsystem of H that mimics H's own rule on some level. H being universal (like a Turing machine) means it can emulate itself, but no faster or better than itself. This self-reference implies limitations akin to Gödel’s or Turing’s halting problem: observers cannot predict all aspects of H beyond certain horizons because that would require duplicating H at full complexity. This might translate to unpredictability inherent in quantum measurement outcomes or chaotic processes – the observer cannot get around those because to do so means solving an intractable computation within their finite subset of H. Thus the self-consistency condition is that no observer can exploit being in H to get “outside” information. They are bound by the effective laws of M which include computational limits. In summary, as long as \pi is constructed with these principles, observers within M will find that their world obeys consistent physical laws. They will formulate quantum mechanics, statistical mechanics, and relativity to describe it, never directly seeing H but possibly noticing hints (like the anomalies we discuss next) if technology improves.
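To make the commutation requirement in item 2 concrete, here is a minimal toy check (the diffusive lattice rule, the block-averaging projection, and the rescaled effective rule are all illustrative assumptions of mine, not constructions given in the text): evolve a fine-grained field with F, project it with a block-averaging \pi, and compare against evolving the projected field with a candidate F_{\text{eff}}.

```python
import numpy as np

def fine_step(u, D=0.2):
    """One update of the fine-grained rule F: discrete diffusion on the fine lattice
    (periodic boundary conditions)."""
    return u + D * (np.roll(u, 1) + np.roll(u, -1) - 2 * u)

def project(u, b=8):
    """The projection pi: block-average groups of b fine cells into one coarse cell."""
    return u.reshape(-1, b).mean(axis=1)

def coarse_step(U, D=0.2, b=8):
    """Candidate effective rule F_eff on the coarse lattice.
    The diffusion constant is rescaled by 1/b^2 because coarse cells are b times wider."""
    return U + (D / b**2) * (np.roll(U, 1) + np.roll(U, -1) - 2 * U)

rng = np.random.default_rng(2)
u = rng.random(1024)          # a fine-grained microstate
b, T = 8, 2000

u_fine = u.copy()
U_coarse = project(u, b)
for _ in range(T):
    u_fine = fine_step(u_fine)
    U_coarse = coarse_step(U_coarse, b=b)

# How far "evolve then project" has drifted from "project then evolve"
err = np.max(np.abs(project(u_fine, b) - U_coarse))
print(f"max |pi(F^T x) - F_eff^T(pi(x))| = {err:.3e}")
```

The printed residual quantifies how well F_{\text{eff}} tracks the projected dynamics; the self-consistency condition only asks that such residuals stay within what internal observers would register as noise or measurement imprecision.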

Physical Interpretation and Relation to Known Physics

Having laid out the formal structure, we interpret its meaning and connect to established physics concepts:

Entropy and the Second Law

In our framework, the Second Law (for an isolated universe) can be rephrased: the basin-weighted entropy S_{\mathrm{BW}} never decreases and typically increases until reaching a maximum. In practical terms, the universe will evolve towards states that are dynamically stable and have many ways to be reached. This is consonant with the idea of self-organizing systems and the emergence of complexity. It challenges the simplistic view that maximal entropy implies heat-death chaos; rather, maximal S_{\mathrm{BW}} might correspond to a richly structured state (albeit one that, from a coarse perspective, might look “equilibrium”). This resonates with ideas in non-equilibrium thermodynamics and the maximum entropy production principle, which often coincides with the emergence of ordered flow patterns (like Bénard convection cells organizing to export heat more efficiently). Our contribution is making this idea rigorous with algorithmic information: the “order” that emerges is precisely that which is simplest to describe (and thus robust).

This perspective provides a fresh lens on cosmic history: Early-universe conditions were nearly featureless (perhaps a quantum foam or high-energy vacuum), which we consider to be a high-complexity state (high K) but maybe not maximal S_{\mathrm{BW}} because it was extremely unstable. Small fluctuations (quantum or otherwise) would give rise to structures (inflation amplified these seeds) leading to stars, galaxies, etc. Those structures increased S_{\mathrm{BW}} enormously even as they seem to decrease ordinary entropy locally (e.g. a gas cloud collapsing). The total entropy including gravitational/heavy degrees of freedom indeed increases 26 , consistent with our thesis. Penrose’s gravitational entropy argument posited that a uniform gas has lower entropy than the same mass in a black hole, because gravity’s negative specific heat leads to clumping increasing entropy 26 . Our framework gives a formal underpinning: a black hole or clumped state is an attractor with a huge basin, hence high S_{\mathrm{BW}}.

General Relativity

If our mapping is correct, then Einstein’s equations are not fundamental but emergent effective laws. The equivalence principle (gravity = inertial acceleration) could be rooted in symmetries of H—perhaps diffeomorphism invariance appears because \pi does not pick a preferred coordinate frame (the underlying rule might be homogeneous and does not provide a special backdrop, so M inherits general covariance). The fact that gravity is geometrical (curvature of space-time) here becomes natural: it’s literally the geometry of basin flows in H. This also implies new ways to think about singularities or horizons: e.g., a black hole horizon in M might correspond to a kind of information barrier in H beyond which \pi cannot project an interior state (hence observers in M see a loss of info). These ideas align qualitatively with the holographic principle, which suggests that the information content of a volume is encoded on its boundary 27 . In our model, one could imagine the degrees of freedom of H that map into a region of M are somehow reflected by degrees on the boundary of that region (since two different H configurations differing only inside an isolated black hole interior might map to the same M exterior state, meaning \pi loses those distinctions – an analog of horizon information loss, resolved only by understanding H itself).

Quantum Mechanics

We can interpret the quantum wavefunction as a description of the distribution of microstates in H that correspond to one M-state. The Born rule (probabilities as squared amplitudes) might emerge from combinatorial counts of microstates in H (perhaps related to typicality arguments or the principle of indifference across \mu). Decoherence occurs as different coarse observables (projections in M) correspond to different partitions of microstates in H; when a system becomes correlated with its environment (both part of H), the interference between certain microstate sets averages out, leaving effective mixed states for observers. Importantly, under our view, there is no measurement collapse at the H level – the microstate just keeps evolving deterministically. Collapse is an update of the observer’s knowledge in M after they interact (becoming entangled) with the system, which effectively restricts their consideration to a subset of microstates (one branch). Because H is deterministic, one might think this is a hidden-variable interpretation; however, unlike classical hidden-variable theories that often fail to violate Bell inequalities, a non-local hidden deterministic H might subtly evade the assumptions of Bell’s theorem (e.g., H could be non-local or have extra dimensions that allow Bell correlations to be carried without signaling in M). Our scenario shares spirit with ’t Hooft’s conjecture of a dissipative deterministic cellular automaton underlying quantum mechanics, and with works suggesting quantum spacetime could be a kind of error-correcting code or cellular network. The difference is that we emphasize the role of entropy and complexity in shaping which states are realized.

Connections to Computational Theories

The universe as a computational process has been proposed by various authors (Zuse, Fredkin, Wolfram). We add to that narrative a thermodynamic twist: the “computation” naturally compresses data. One could say the universe computes itself into simpler forms. Wolfram observed that simple rules can produce complex patterns; here we find the inverse: among all possible patterns produced, those which themselves have simple descriptions dominate long-term. In essence, the universe is a self-compressing automaton aiming to reduce its algorithmic complexity while preserving information (since it can’t destroy info). This is reminiscent of some formulations of the Second Law in information theory: entropy increase is information compression (coarse-graining irreversibly combines microstates). Landauer’s principle relates entropy increase with the erasure of information bits 28 ; in our context, when a system falls into an attractor, many microstates collapse into one pattern (information about initial conditions is “erased” and entropy is released as heat or radiation). The attractor’s low complexity means a lot of information was thrown away, consistent with Landauer: erasing information produces entropy (here in the environment or as radiation). But that entropy is exactly what fills the basin, making the attractor stable. So our theory is consistent with the thermodynamic cost: the order is bought at the price of increased entropy exported (like heat). A living cell, for instance, is a low-K attractor (a structured state) that constantly exports entropy to stay ordered, spreading it into its surroundings.
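One way to make this Landauer bookkeeping explicit (my own formalization of the statement above, with \mu(A) understood as the measure of a small neighborhood of the attractor):

\Delta S_{\text{env}} \;\ge\; k_B \ln 2 \times I_{\text{erased}}, \qquad I_{\text{erased}} \;\approx\; \log_2 \frac{\mu[B(A)]}{\mu(A)} \ \text{bits},

so the entropy exported when a basin collapses onto its attractor is bounded below by the initial-condition information that was erased in the process.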

Extreme Cases

If our conjecture holds, the ultimate fate of our universe might not be heat death in the usual sense, but rather a state of structured equilibrium 29 . Perhaps black holes (or black-hole-like remnants) plus vacuum radiation form a composite attractor of maximal S_{\mathrm{BW}}. Or maybe an even more exotic structure arises—e.g., a network of cosmic-scale computation that continually recycles chaos into order. This is speculative, but the framework allows a wide range of final states since it’s not constrained to homogeneity. It is also worth noting limitations: if H is truly infinite-dimensional and ergodic, our assumptions might break down. Additionally, in systems with many attractors, the dynamics could become stuck in local attractors that are not the absolute maximum S_{\mathrm{BW}} state (akin to metastable phases). This raises questions of thermalization timescales and basin-depth barriers: the system might need rare fluctuations to jump between attractors. In cosmology, this relates to the idea of Boltzmann brains and other freak fluctuations—our theory might suppress those by saying they are high-K states with tiny basins, thus extremely unlikely even if entropy allows them.

Predictions and Experimental Tests

A compelling aspect of this framework is that it yields concrete differences from standard physics that can, in principle, be looked for. We outline several predictions and how one might test them:

  1. Prevalence of Long-Lived Structures. There exist conditions under which ordered structures will spontaneously emerge and persist in closed systems more readily than expected from equilibrium theory. For example, self-organizing patterns (vortices, solitons, etc.) in turbulent fluids or plasmas will carry a significant fraction of the system’s entropy and will resist decay. In biology, one might speculate that life itself, as a low K attractor, arises not as an incredibly rare accident but as a natural high-S_{\mathrm{BW}} state for suitable chemical systems – implying that abiogenesis and evolution might be thermodynamically favored given the right driving (this aligns with the idea of life as a dissipation-driven process).

    Test: Perform controlled simulations and experiments on closed chaotic systems (e.g., large networks of nonlinear oscillators, or agent-based models) to see if they consistently evolve specific stable motifs. Compare the frequency or robustness of these motifs to predictions from traditional random chance. This was partially done in the FPUT experiment and subsequent studies 30 6 , but we can extend it. One can measure an approximate algorithmic complexity of the system’s state over time (using compression algorithms as proxies 31 32 ) and measure a coarse entropy. Our theory predicts that after initial transients, the complexity will drop and plateau even as coarse entropy remains high – indicating the selection of a low-K attractor. Such studies have been proposed 33 34 and could validate the principle of "entropy-driven order." If verified widely, it challenges the notion that observed order (like galaxies or life) requires fine-tuned initial conditions; instead, these orders may be attractors that many initial states will lead to.

  2. Anomalies in Quantum Decoherence. If the underlying reality is deterministic and information-preserving, truly irreversible decoherence might be an approximation. There could be small recurrences or residual coherence in quantum systems that standard decoherence theory (assuming an infinite environment or true randomness) would deem impossible. For instance, an isolated mesoscopic system left to decohere might spontaneously recohere slightly after a long time (akin to Loschmidt echoes). Also, outcomes of quantum measurements might exhibit subtle bias if the basin sizes for different outcomes differ (meaning some outcomes have more microrealizations in H than others, violating the exact Born rule in a small way; a rough estimate of the statistics needed to detect such a bias follows this list).

    Test: Extremely sensitive quantum experiments, such as interferometers or Rabi oscillations in nearly isolated systems, could search for unexpected recoherence. One could also test violation of statistical symmetry: e.g., perform a quantum coin-flip (like measuring spin) many times to see if heads vs tails occur with frequencies deviating from 50% beyond expected randomness. If a pattern in outcomes emerges that correlates with e.g. macro parameters, it might hint that the projection favors certain outcomes (basins) slightly. Admittedly, any such effect must be tiny or it would have been noticed, but advances in quantum control might reach regimes to detect one. Additionally, experiments on wavefunction tails (like the detection of extremely rare events predicted by quantum theory) could reveal discrepancies if some branches of the wavefunction correspond to negligible basins in H and thus essentially never occur.

  3. Planck-Scale “Stride” Effects. Because H might be discrete or involve a smallest time-step, Lorentz invariance in M might be only approximate. Specifically, at Planck energy or length scales, there could be observable consequences of spacetime “pixelation” or the deterministic rule’s update structure.

    Examples include:

    • Energy-dependent speed of light: High-energy photons may travel at speeds slightly different from c or exhibit a spread in speeds (if the underlying rule has stochastic components) 11 . This could manifest as frequency dispersion over cosmic distances. Some quantum gravity models predict v(E) \approx c[1 - \xi (E/E_{\text{Pl}})], violating exact Lorentz invariance, and experimental limits already push \xi very low 12 . Our theory also allows such effects, though it does not specify their functional form (a rough estimate of the induced arrival-time delay follows this list).
    • High-frequency cutoff in gravitational waves: If spacetime is emergent, gravitational waves above a certain frequency might not propagate normally (the continuum description breaks down). This could be tested in future high-frequency gravitational wave observations or resonant experiments.
    • Cosmic microwave background anomalies: Subtle statistical anomalies in the CMB at the largest scales (some have been observed, like unexpected alignments) might hint that our cosmology didn’t start at a generic high-entropy state but rather in a special low-K state (perhaps the universe itself is an attractor of some bigger system). While speculative, any pattern in supposedly random primordial fluctuations could point toward an underlying rule imprint.
    • Discrete spectrum of vacuum fluctuations: Quantum field theory assumes a continuum of modes. A deterministic H might impose a very high-frequency cutoff or specific mode structure.

    Test: Measurements of high-energy cosmic rays or delicate Casimir force experiments might reveal departures from continuous vacuum behavior. Astrophysical observations provide one of the best windows. Gamma-ray burst data has been used to constrain energy-dependent photon speeds 35 36 . Continued observation of distant, high-energy events (GRBs, flares) for slight dispersion or decoherence of photon polarization can improve limits. So far, no violation has been found up to ~E_{\text{Pl}} scale within a factor of a few 37 . If our theory is correct, either the underlying H is at even higher scale or the mapping \pi somehow preserves Lorentz invariance extremely well. Another test is with highly sensitive interferometers (e.g., the proposed holometer or future upgrades to LIGO) to detect spacetime foam. If space is fundamentally discrete or information-based, there might be a low-amplitude noise (“holographic noise”) correlated across distances ~ Planck length, as some models like Wheeler’s foam suggest 11 . Current experiments haven’t found anything conclusive, but they’re approaching interesting sensitivity.

  4. Unification and Particle Spectrum Clues. If forces are emergent, there might be relationships among particle parameters not obvious from the Standard Model alone but natural in H. For instance, our approach might imply some order in the masses of elementary particles or coupling constants because they derive from features of the underlying rule. Similarly, gravity’s strength (Newton’s constant) might be calculable from information-theoretic considerations of H rather than being arbitrary. This is reminiscent of attempts to compute G from holographic entropy bounds. Additionally, the framework could hint at new particle-like excitations: since H might have simple rule bits, there could be soliton solutions in H that appear as exotic stable particles in M (e.g., topological defects carrying energy). These would be beyond the Standard Model.

    Test: Look for unexplained regularities in known data – for example, do the generations of fermions correspond to some fractal pattern in H (like three generations might mean something like three stable oscillation modes)? Or search experimentally for stable exotic particles that behave like topological solitons (e.g., Q-balls, magnetic monopoles) which could be artifacts of the underlying grid. If discovered, such objects would support the idea that spacetime has a microstructure that admits stable localized states.

  5. Simulations of Toy Universes. While not a direct experiment, a critical test bed is simulation. We can design cellular automata or high-dimensional dynamical systems to serve as toy models of H. Then define an appropriate \pi to identify emergent spacetime and “physics” within them. We can check whether the toy universe exhibits analogs of our predictions: do simple patterns dominate entropy, does an analog of gravity emerge from information geometry, etc. For example, one could extend Conway’s Life to 3+1 dimensions and see if glider-like structures behave like particles with forces. Or use coupled map lattices with constraints to mimic gauge symmetries 38 39 . If these toy models produce convincing phenomenology (like something behaving as inverse-square law attraction, or stable particle scattering), it bolsters our framework’s plausibility. Conversely, failure to get expected behaviors would guide refinement of the theory.
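As a first concrete instance of such a toy-model diagnostic (and of the Test proposed under Prediction 1), the following sketch evolves an FPUT-β oscillator chain from random initial data and tracks two observables over time: a compression-based complexity proxy and a coarse Shannon entropy of the displacement field. All specific choices here (chain size, parameters, observables) are illustrative assumptions rather than prescriptions from the text; the framework's expectation is that the complexity proxy falls and plateaus while the coarse entropy stays high.

```python
import zlib
import numpy as np

def k_proxy(snapshot, bins=64):
    """Compression-based complexity proxy: compressed size of a coarsely quantized snapshot."""
    edges = np.linspace(snapshot.min(), snapshot.max() + 1e-12, bins)
    return len(zlib.compress(np.digitize(snapshot, edges).astype(np.uint8).tobytes(), 9))

def coarse_entropy(snapshot, bins=64):
    """Shannon entropy (nats) of the histogram of the snapshot values."""
    counts, _ = np.histogram(snapshot, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def fput_beta_accel(q, beta=1.0):
    """FPUT-beta chain accelerations with fixed ends (illustrative parameters)."""
    qp = np.pad(q, 1)                        # boundary displacements held at zero
    d_right = qp[2:] - qp[1:-1]
    d_left = qp[1:-1] - qp[:-2]
    return (d_right - d_left) + beta * (d_right**3 - d_left**3)

rng = np.random.default_rng(3)
N, dt, steps = 256, 0.05, 20000
q = rng.normal(0, 0.5, N)                    # random initial displacements
p = np.zeros(N)

for t in range(steps + 1):
    if t % 5000 == 0:
        print(f"t={t:6d}  K-proxy={k_proxy(q):5d}  coarse S={coarse_entropy(q):.2f}")
    a = fput_beta_accel(q)                   # velocity-Verlet step
    p += 0.5 * dt * a
    q += dt * p
    p += 0.5 * dt * fput_beta_accel(q)
```

For Prediction 2, a back-of-the-envelope estimate (my own, standard binomial statistics) of the data needed to see a Born-rule bias: if an outcome's true probability is 1/2 + \delta, the standard error of the observed frequency after N trials is about 1/(2\sqrt{N}), so detecting the bias at the k-sigma level requires roughly

N \;\gtrsim\; \frac{k^2}{4\,\delta^2}, \qquad \text{e.g. } \delta = 10^{-6},\ k = 5 \;\Rightarrow\; N \sim 6\times 10^{12} \ \text{trials}.

For Prediction 3, the arrival-time delay implied by the energy-dependent speed quoted above is, to leading order and neglecting the cosmological redshift integration (again a back-of-the-envelope estimate),

\Delta t \;\approx\; \xi\,\frac{E}{E_{\text{Pl}}}\,\frac{D}{c} \;\sim\; \xi \left(\frac{E}{10\ \text{GeV}}\right)\left(\frac{D}{1\ \text{Gpc}}\right)\times 0.1\ \text{s},

which is why sub-second timing of GeV photons from gamma-ray bursts at cosmological distances already constrains \xi to be of order unity or below, as noted in the Test above.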

Discussion

We have presented a novel theoretical framework that merges ideas from thermodynamics, information theory, and dynamical systems to address deep questions in fundamental physics. The core premise is that the “engine” of cosmic order is entropy itself, once entropy is correctly understood as a dynamical measure (basin-weighted) rather than a static count of microstates 40 41 . Order is not produced in spite of the Second Law but because of it: the Second Law drives systems into the largest, most accessible basins of attraction, which correspond to structured, low-complexity states. This view inverts the conventional relationship between entropy and order, offering a resolution to the long-standing puzzle of why a universe that began in a low-entropy state has not devolved into featureless chaos but instead features increasing complexity.

One might worry that our redefinition of entropy is too opportunistic – after all, one could define many quantities. The justification lies in the empirical adequacy: if S_{\mathrm{BW}} can be shown to remain non-decreasing and to correctly predict equilibrium behavior in known scenarios (like why gravitating systems form structures, why certain chemical systems self-organize), then it has earned its keep as a physical entropy. Early investigations are promising: for instance, recent work in structure-forming thermodynamics adds extra terms to entropy to account for clustering 42 43 , which is in spirit similar to our approach of adding basin counts. Our contribution is identifying algorithmic compressibility as the indicator of large basin size and thus linking the emergence of order to simplicity of description.

The reinterpretation of gravity is radical but sits amidst growing discontent with standard approaches to quantum gravity. If gravity is emergent and not fundamental, it explains the decades of failure to quantize it: there is no graviton at small scales, because spacetime’s microstructure is not a smooth field to quantize but something wholly different (bits of information or a network). Gravity instead is an effective statistical force like elasticity or fluid pressure, which arises from underlying degrees of freedom. Some prior approaches, like entropic gravity (Verlinde), posit gravity as an entropic force arising from changes in entropy when matter moves 25 . Our view is compatible but more concrete: we pinpoint those entropic degrees to basins in H. We also echo ideas from Causal Set theory and Loop Quantum Gravity that spacetime is fundamentally discrete or combinatorial, though we don’t adopt their specific structures.

A striking aspect is how this framework naturally accommodates and even demands an arrow of time. In standard cosmology, the arrow of time is put in by hand via low initial entropy. Here, because H’s rule is deterministic, the past hypothesis would be that initially the system was not in a deep basin (likely a high complexity state). Then as it evolved, S_{\mathrm{BW}} increased. In principle, if one ran the H dynamics backwards, one would leave attractors and go to more finely tuned states – highly unlikely. Thus the asymmetric behavior is built-in: attractors attract forward in time, not backward. The time-reversal of an attractor flow is an unstable divergent flow. This gives a microscopic rationale for the Second Law: the time reverse of our universe’s trajectory would involve ephemeral structures falling apart and un-mixing in incredibly coordinated ways, which correspond to tiny measure in H (so an initial state that yields that backward behavior is essentially of measure zero in \mu).
Thus, while H’s laws are reversible, the condition of being in a generic state with no special tuning ensures an entropic arrow emerges with overwhelming probability. One potential philosophical implication concerns the anthropic principle and fine-tuning. In the conventional view, life and complexity are unusual and require special conditions (leading to multiverse ideas to “explain” why we see them). In our view, complexity may be a natural outcome of one underlying law – meaning our universe might not be rare or finely tuned at all for complexity; rather, any deterministic universe with similar rules would fill with structure. This could shift the narrative from anthropic selection to a kind of “generalized second law selection.” It also means that rather than many universes, one might imagine many attractors in H; perhaps what we call different “laws of physics” could emerge in different regimes of H (for instance, different vacuum phases of the underlying rule yielding different effective constants). The real fundamental law might be simple (hence low K itself) and unify what we see as separate forces in one rule. Our approach might guide the search for that rule: we would look for a deterministic rule that, when iterated, produces increasing structured complexity. We should underscore that several open issues and challenges remain.

Conclusion

We have outlined a comprehensive theoretical framework where entropy, when properly weighted by dynamical basin size, becomes the driving force for complexity and order in the universe. This Basin-Weighted Entropy maximization principle provides a unifying explanation for phenomena ranging from spontaneous self-organization to cosmological structure formation, all under the umbrella of deterministic evolution. By integrating this with a projection paradigm for emergent spacetime, we gain fresh insights into gravity and quantum mechanics: gravity emerges as the shape of the information flow in the underlying state space, and quantum indeterminacy as the shadow of unseen deterministic variables. While many details remain to be worked out, this approach suggests a new direction for the long-sought unification of physics: not by adding more fundamental components (extra particles, dimensions, etc.), but by recognizing that much of what we call fundamental might itself be epiphenomenal, arising from deeper simplicity. If correct, it transforms our understanding of the Second Law from a prognosticator of decay to an engine of creation, and it recasts the role of complexity in the cosmic story as an inevitable byproduct of simple rules. The coming years should see whether the predictions borne of this idea find support in experiment and simulation, potentially opening the door to a new paradigm where physics and information theory truly converge.

References

  1. L. Boltzmann (1877), “Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung”. Note: Boltzmann’s entropy formula $S = k_B \ln \Omega$ connects entropy with microstate count 1 .
  2. A. N. Kolmogorov (1965), “Three Approaches to the Quantitative Definition of Information”. Note: Defines algorithmic complexity $K(x)$ as the length of the shortest program generating $x$ 5 .
  3. Zeraoulia Elhadj & J. C. Sprott (2013), “About universal basins of attraction in high-dimensional systems”, Int. J. Bifurcation and Chaos, 23(12):1350197. Note: Definition of basin of attraction: set of initial conditions leading to an attractor 8 .
  4. J. D. Farmer (1982), “Solitons and Chaos”. Note: Discusses the Fermi-Pasta-Ulam recurrence, where a nonlinear system showed quasi-periodic behavior instead of thermalization 6 .
  5. J. Bekenstein (1973), “Black holes and entropy”, Phys. Rev. D 7:2333; S. Hawking (1975), “Particle creation by black holes”, Comm. Math. Phys. 43:199. Note: The Bekenstein-Hawking entropy $S_{\text{BH}} = \frac{k_B c^3 A}{4 G \hbar}$ shows black hole entropy $\propto$ horizon area 23 24 .
  6. T. Lewton (2023), “The Physicist Who’s Challenging the Quantum Orthodoxy”, Quanta Magazine, July 10, 2023 9 10 . Note: Quotes J. Oppenheim on gravity potentially not being quantizable and difficulties with quantizing spacetime.
  7. V. Vasileiou et al. (2015), “A Planck-scale limit on spacetime fuzziness and stochastic Lorentz invariance violation”, Nature Physics 11:344 11 12 . Note: Uses gamma-ray bursts to constrain an energy-dependent speed of light; no violation seen up to the Planck scale (within experimental precision).
  8. J. D. Bekenstein (1982), “Entropy bounds and the Second Law of Thermodynamics”, Phys. Rev. D 27:2262. Note: Bekenstein bound and the idea that entropy of a system (including gravity) has an upper limit related to area.
  9. E. Verlinde (2011), “On the origin of gravity and the laws of Newton”, JHEP 04:029. Note: Proposes gravity as an entropic force emerging from thermodynamics of microscopic degrees of freedom.
  10. S. Wolfram (2002), “A New Kind of Science”. Note: Explores cellular automata; shows simple rules can yield complex behavior (and sometimes simple emergent laws, e.g. Rule 30 and randomness, Rule 110 and universality).
  11. P. Cvitanović et al. (2016), “Chaos: Classical and Quantum”. Note: Textbook covering dynamical systems, attractors, Lyapunov exponents, and ergodic theory, relevant to understanding measure-preserving flows and attractor basins.
  12. M. Gell-Mann & S. Lloyd (1996), “Information measures, effective complexity, and total information”, Complexity 2(1):44-52. Note: Discusses how complexity can be measured and how ordered structures have lower algorithmic information than random ones, echoing points in our framework.

(The above references combine established literature and context notes, and serve to ground the concepts used in the main text.)
