A systematic evaluation of twenty-first-century frameworks claiming to explain reality, from Platonic morphospace to mathematical universes to the computational Ruliad, using constraint-based falsification methodology. What survives scrutiny, what gets demoted to metaphor, and why the distinction matters.
Part I: The Problem of Theories That Cannot Fail
In 1950, philosopher Antony Flew posed a question that still haunts metaphysics: “What would have to occur or to have occurred to constitute for you a disproof?”
The question sounds simple. It is not. Most frameworks claiming to explain fundamental reality (from mathematical Platonism to panpsychism to computational universes) are structured in ways that make this question unanswerable. They accommodate any possible evidence through auxiliary modifications, semantic escape hatches, or appeals to domains beyond measurement.
This is not necessarily dishonest. It may reflect genuine uncertainty about what counts as evidence for claims about reality’s deepest structure. But it creates a governance problem: frameworks that cannot lose cannot learn. They cannot be improved through contact with evidence. They cannot be distinguished from sophisticated storytelling.
This article applies a systematic methodology, Recursive Constraint Falsification (RCF), to evaluate major metaphysical frameworks currently influencing physics, biology, cognitive science, and philosophy of mind. The goal is not to crown a winner but to distinguish frameworks that can be improved through evidence from those that cannot, and to identify what (if anything) survives the audit.
The frameworks evaluated include:
In physics and cosmology: Max Tegmark’s Mathematical Universe Hypothesis, Stephen Wolfram’s Ruliad and computational universe claims, Sam Senchal’s Observer Theory, and Roger Penrose and Stuart Hameroff’s Orchestrated Objective Reduction (Orch-OR)
In biology and cognitive science: Michael Levin’s Platonic morphospace and “cognitive glue” framework, bioelectric causation claims
In philosophy of mind: Panpsychism and cosmopsychism (Philip Goff, Galen Strawson), Analytical Idealism (Bernardo Kastrup), Integrated Information Theory (Giulio Tononi)
In theology and metaphysics: Classical theism, process theology (Whitehead, Hartshorne), strong mathematical Platonism
The methodology is deliberately conservative. Nothing here claims logical impossibility. Everything is conditional, scoped, and constraint-based. A framework can fail RCF criteria and still remain meaningful as a heuristic, a narrative compression, a value system, or a research inspiration. RCF refutes frameworks only insofar as they present themselves as causal, explanatory accounts of physical reality.
Part II: The Five Evaluation Criteria
Criterion 1: Falsifiability Under Realistic Measurement Constraints
Karl Popper established that a theory earns scientific status only if it specifies what observations would refute it. But RCF extends this requirement: falsifiability must operate under realistic measurement constraints, not merely in principle.
Popper wrote in The Logic of Scientific Discovery (1959): “A theory is to be called ‘empirical’ or ‘falsifiable’ if it divides the class of all possible basic statements unambiguously into… the class of its potential falsifiers.”
The operational test for any claim X:
- What specific observation O would demonstrate X is false (not merely “less useful”)?
- Can O be performed with existing or near-term technology?
- Would proponents of X actually accept O as falsifying, or would they invoke auxiliary hypotheses?
The third question is critical. Many frameworks specify “falsifiers” that their proponents would never accept. When challenged, they retreat to weaker claims or redefine terms. This is immunization, not falsifiability.
Criterion 2: Explanatory Compression Without Surplus Ontology
Following Ockham’s razor as formalized through algorithmic information theory (Solomonoff; Li and Vitányi), a framework should achieve explanatory compression (accounting for more phenomena with fewer free parameters and ontological commitments) without introducing entities that do no measurable work.
Li and Vitányi’s formalization: “The Kolmogorov complexity of a string is the length of the shortest computer program that produces that string as output.”
The operational test for any ontological posit P:
- What phenomena does P explain that cannot be explained without P?
- What is the complexity cost of adding P to the framework?
- Does P generate novel predictions that justify its complexity cost?
Abstractions must “pay rent” in measurable performance gains. If invoking an entity does not change energy budgets, error rates, compression limits, or control costs, it is not doing explanatory work.
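The compression test can be made operational. Kolmogorov complexity is uncomputable, but any real compressor yields a computable upper bound, which is enough for comparative audits. A minimal sketch in Python (the toy data and the use of zlib as a complexity proxy are this illustration’s choices, not part of RCF’s definition):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Computable proxy for Kolmogorov complexity: the length of a
    compressed encoding upper-bounds the shortest program for the data."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 5000     # 10,000 bytes of pure regularity
noise = os.urandom(10000)    # 10,000 bytes with no structure to exploit

print(description_length(patterned))  # small: the regularity compresses away
print(description_length(noise))      # ~10,000: no compression achieved
```

A posited entity earns its keep only if adding it shortens the total description (model plus data). If the compressed length does not drop, the entity is decoration.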
Criterion 3: Thermodynamic and Information-Theoretic Grounding
Every pattern, every computation, every causal influence requires thermodynamic work. Rolf Landauer established in 1961 that information is physical; erasing one bit requires minimum energy dissipation of kT ln(2). Any framework positing causation, pattern persistence, or information processing must respect these constraints.
Landauer wrote: “We may then examine logical operations that do not have unique inverses… such operations require a minimal dissipation of kT ln 2 per step.”
The operational test for any proposed causal mechanism M:
- What is the energy budget for M to operate?
- How does information flow through M, and what are the entropy costs?
- If M is claimed to be “non-physical,” how does it interact with physical systems without violating Landauer’s principle?
This criterion has empirical teeth. Landauer’s principle has been experimentally confirmed to extraordinary precision (Yan et al. 2018, Physical Review Letters). Claiming future physics will overturn it is not falsification; it is deferral.
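For scale, the bound itself is a one-line computation. A minimal sketch, assuming room temperature; the constants are standard SI values:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # assumed room temperature, K

bound = k_B * T * math.log(2)  # minimum dissipation to erase one bit
print(f"{bound:.3e} J/bit")    # ~2.871e-21 J

# Erasing a gigabyte (8e9 bits) must dissipate at least ~2.3e-11 J.
# Real hardware runs many orders of magnitude above this floor, which is
# why the principle constrains fundamental claims, not engineering.
```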
Criterion 4: Novel Prediction Generation
Following Imre Lakatos’s methodology of scientific research programs, a framework demonstrates progressive character by generating novel predictions: predictions of facts not used in constructing the framework, especially “risky” predictions that could easily fail.
Lakatos wrote: “A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is, as long as it keeps predicting novel facts with some success.”
The operational test for any framework F:
- What predictions does F make that competing frameworks do not?
- Have any of these predictions been confirmed?
- Does F generate new research questions, or merely reinterpret existing data?
A framework that only accommodates existing data while generating no novel testable predictions is degenerating rather than progressing, regardless of its internal elegance.
Criterion 5: Resistance to Immunization
Nicholas Shackel identified the “motte-and-bailey” doctrine as a pattern of argumentation where a bold, contentious claim (the bailey) is defended by retreating to a more modest, defensible claim (the motte) when challenged, while continuing to assert the bailey when unchallenged.
Shackel wrote: “The motte is the defensible but undesired position to which one retreats when hard pressed. The bailey is the dungeons, dungheap and hovel-strewn stretch of open ground which is desired but is difficult to defend.”
The operational test for any framework F:
- Does F retreat from strong ontological claims to weak methodological claims when challenged?
- Are auxiliary hypotheses added to protect core claims from refutation?
- Can the framework be stated in a form that would convince proponents it is false if it is false?
Warning sign: If proponents say “that’s not what I meant” to every falsification attempt while continuing to make the original claims in other contexts, the framework is immunized.
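Taken together, the five criteria form a checklist any reader can apply. A minimal sketch in Python (the field names, boolean scoring, and three-tier verdict are conventions of this illustration, not a formal specification of RCF):

```python
from dataclasses import dataclass

@dataclass
class RCFAudit:
    """One framework scored against the five criteria."""
    framework: str
    falsifiable: bool           # 1: realistic, accepted falsifiers exist
    compresses: bool            # 2: explains more with fewer posits
    grounded: bool              # 3: respects thermodynamic costs
    predicts: bool              # 4: confirmed novel predictions
    resists_immunization: bool  # 5: no motte-and-bailey retreat

    def verdict(self) -> str:
        score = sum((self.falsifiable, self.compresses, self.grounded,
                     self.predicts, self.resists_immunization))
        if score >= 4:
            return "survivor"
        return "marginal/mixed" if score >= 2 else "poor survivor"

# One reading of the Orch-OR evaluation in Part V (illustrative, not canonical):
print(RCFAudit("Orch-OR", True, False, False, False, True).verdict())
# -> "marginal/mixed"
```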
Part III: Six Diagnostic Failure Modes
Before evaluating specific frameworks, it is useful to identify recurrent failure patterns that appear across metaphysical positions. These are not logical contradictions; they are structural features that make frameworks unable to lose and therefore unable to learn.
Failure Mode 1: The Actuality/Potentiality Conflation
Many metaphysical frameworks conflate mathematical possibility (δύναμις/dynamis) with physical actuality (ἐνέργεια/energeia). If all mathematically consistent structures “exist,” or if all computable processes are “real,” the framework explains nothing about why this particular universe instantiates these particular laws.
Tomas Natal’s 2024 analysis (arXiv:2411.12562v3) identifies this precisely: “Both Wolfram and Tegmark conflate the inherent potential (δύναμις) of mathematical truths with their instantiation or actuality (ἐνέργεια) in reality, making a similar error to that of the ‘so-called’ Pythagoreans rebuked by Aristotle.”
Diagnostic questions:
- Does the framework distinguish between “mathematically possible” and “physically actual”?
- If all possibilities are claimed to exist, what selection principle explains this universe?
- Does the framework generate different predictions than “everything exists”?
Frameworks affected: Mathematical Universe Hypothesis (Tegmark), Ruliad (Wolfram), Modal Realism (Lewis), some versions of Mathematical Platonism, Levin’s morphospace (when forms are claimed to exist independently).
Failure Mode 2: The Interaction Problem
Any framework positing non-physical causes of physical effects must specify how the non-physical causally interacts with the physical without violating conservation laws or thermodynamic constraints. This is the classical mind-body problem, first articulated by Princess Elisabeth of Bohemia to Descartes in 1643.
Elisabeth wrote: “How can the soul of a man determine the bodily spirits to perform voluntary actions, being but a thinking substance? For it seems that all determination of movement is made by the pushing of the thing moved.”
Diagnostic questions:
- If X is non-physical, by what mechanism does X influence physical systems?
- What is the energy source for this influence?
- Would this influence be detectable as apparent conservation law violation?
Frameworks affected: Substance Dualism, Platonic Causal Realism (Levin’s morphospace when forms “guide” development), Classical Theism (as causal theory), Analytical Idealism (on the matter-emergence problem).
Failure Mode 3: The Realization Problem (Infinite Regress)
If abstract structures are claimed to “generate” or “realize” physical reality, the question arises: what realizes the abstract structures? If the realizer must itself be mathematically describable, the framework generates infinite regress.
Rickles, Elshatlawy, and Arsiwalla (arXiv:2308.16068v2) articulate this: “How does an abstract rule get turned into physical reality? If this reality is the result of the computation of rules, then what is doing the computation?… This is a bit like Baron von Munchausen rescuing himself and his horse from a quagmire by lifting himself up by his own hair.”
Diagnostic questions:
- If abstract structures generate reality, what generates the abstract structures?
- Does the framework require a “ground floor” of brute physical fact?
- If so, why not stop at physics without the abstract superstructure?
Frameworks affected: Mathematical Universe Hypothesis, Ruliad, Structural Realism (in eliminative versions), Information Ontology (in some versions), Levin’s morphospace (when forms are posited as causally prior to physics).
Failure Mode 4: The Combination/Decombination Problem
Panpsychism and cosmopsychism face inverse versions of the same problem. Panpsychism must explain how micro-experiential properties combine into unified macro-experience. Cosmopsychism must explain how a cosmic mind decomposes into apparently separate individual perspectives.
David Chalmers states: “The combination problem is widely regarded as the most serious problem facing panpsychism… How do micro-level experiences combine to yield macro-level experiences?”
William James anticipated this in 1890: “Take a hundred of them [feelings], shuffle them and pack them as close together as you can… still each remains the same feeling it always was, shut in its own skin, windowless, ignorant of what the other feelings are and mean.”
Frameworks affected: Panpsychism (Goff, Strawson), Cosmopsychism, Integrated Information Theory (Tononi), Analytical Idealism (Kastrup).
Failure Mode 5: Prediction Vacuity (The Measure Problem)
Frameworks claiming “all X exist” (all mathematical structures, all computational histories, all possible worlds) face the measure problem: without a principled probability measure over the space of possibilities, no predictions follow. The framework predicts everything and therefore predicts nothing.
Tegmark himself acknowledges: “There is a severe ‘measure problem’ that must be solved to make testable predictions at levels II-IV.”
Jürgen Schmidhuber’s critique: “Although Tegmark suggests that ‘all mathematical structures are a priori given equal statistical weight,’ there is no way of assigning equal non-vanishing probability to all (infinitely many) mathematical structures.”
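The arithmetic behind this objection is elementary. A minimal sketch, assuming only countably many structures (which already favors the proposal, since the full space is larger):

$$
P(s_i) = \varepsilon \ \text{for all } i \in \mathbb{N}
\;\Longrightarrow\;
\sum_{i=1}^{\infty} P(s_i) =
\begin{cases}
\infty & \text{if } \varepsilon > 0 \\
0 & \text{if } \varepsilon = 0
\end{cases}
\;\neq\; 1.
$$

No uniform assignment normalizes, so any workable measure must privilege some structures over others, which is precisely the selection principle the framework set out to avoid.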
Frameworks affected: Mathematical Universe Hypothesis, Multiverse theories (without measure), Ruliad, Modal Realism.
Failure Mode 6: Degenerating Research Program
A research program degenerates when it responds to anomalies not by generating novel predictions but by adding auxiliary hypotheses that protect the core while explaining away failures. The program becomes increasingly baroque without increasing predictive power.
The Stanford Encyclopedia of Philosophy, summarizing Lakatos, states: “The first is progressive if the theory is empirically progressive, that is, if it predicts novel and hitherto unexpected facts… The second is degenerating if it is not empirically progressive.”
Frameworks affected: Many frameworks degenerate under sustained critique. The diagnostic applies dynamically rather than to frameworks per se.
Part IV: The Shared Nominalization Problem
Before examining individual frameworks, one structural failure deserves special attention because it appears across nearly all of them: the nominalization error.
Across most metaphysical frameworks claiming to explain reality, processes that describe how systems behave under constraints are repeatedly frozen into nouns and then mistaken for things that do causal work. This creates explanatory surplus without new predictions.
Pattern gets treated as an entity rather than a constraint outcome. “Pattern” shifts from meaning regularity that persists under constraints to something that exists independently and shapes matter. But patterns do not act. Constraints act. Patterns are the residue of constraint satisfaction. This error appears in misreadings of Bateson, in Levin’s Platonic morphospace, in mathematical Platonism, in the MUH, in the Ruliad, and in panpsychism.
Information gets treated as a substance rather than a thermodynamic relation. Information becomes a thing that “flows,” “exists,” or “influences” rather than a bookkeeping relation tied to entropy and energy. Without Landauer-style cost accounting, information becomes causal magic. This error appears in Levin’s framework, in the Ruliad, in Observer Theory, in the MUH, and in analytical idealism.
Goal / Preference / Intention get treated as intrinsic properties. Goal-directedness becomes a property the system “has” instead of a trajectory that persists under constraints. Goals are observer-relative summaries of stable attractors, not internal drivers unless explicitly implemented. This error appears in Levin’s cognitive agency language, in process theology, in panpsychism, and in some active inference misreadings.
Observer gets treated as a primitive instead of a constrained physical system. The observer becomes an ontological primitive rather than a pattern sustained by thermodynamic and informational constraints. Observation has physical cost. Coarse-graining is not free. This error appears in Observer Theory, in QBism-adjacent views, and in idealism.
Computation gets treated as abstract existence rather than physical process. Computation becomes something that “exists” independently of physical realization. But computation without a substrate collapses into mathematical description, not causation. This error appears in the Ruliad, in the MUH, in digital physics, and in some computational universe claims.
Form / Morphospace get treated as a causal realm. Morphospace shifts from “the set of possible forms under constraints” to “a real space that biological systems access or are guided by.” But forms do not guide systems. Constraints eliminate alternatives. This error appears explicitly in Levin’s Platonic morphospace framework.
Memory gets treated as stored entity rather than persistent constraint. Memory becomes a thing stored somewhere rather than a stabilized configuration that resists perturbation. But memory requires energy to maintain and degrades without it. This error appears in bioelectric cognition language, in panpsychism, and in some information-theoretic metaphors.
Glue / Binding / Integration get treated as a thing. Integration becomes a mysterious substance rather than coordinated constraint propagation. But binding is a dynamical outcome, not an added ingredient. This error appears in Levin’s “cognitive glue” language and in consciousness binding problem discussions.
The shared structural failure: verbs are frozen into nouns, then asked to do causal work. Patterning becomes Pattern. Constraining becomes Constraint-as-thing. Persisting becomes Memory. Filtering becomes Observer. Computing becomes Computation. Eliminating alternatives becomes Goal. Stabilizing becomes Glue.
RCF dissolves this by verb-restoring every term: What process? Under what constraints? At what energetic cost? With what falsifier? Anything that cannot survive that translation is not false, but it is non-explanatory.
Part V: Systematic Evaluation of Major Frameworks
Framework 1: Mathematical Universe Hypothesis (Tegmark)
Canonical statement: Physical existence is mathematical existence. Our universe is a mathematical structure, and all mathematical structures exist with equal ontological status (“Level IV multiverse”). The appearance of physical reality is how mathematical existence feels “from inside.”
Key proponent: Max Tegmark, Our Mathematical Universe (2014)
Key critics: Sabine Hossenfelder, Peter Woit, Jürgen Schmidhuber, Tomas Natal
RCF Evaluation:
Criterion 1 (Falsifiability): Tegmark acknowledges the measure problem makes Level IV predictions difficult. What would falsify MUH? If we observed physical properties that could not be captured by any mathematical structure, MUH would be falsified, but this seems impossible by construction. The framework is structured to be unfalsifiable.
Criterion 2 (Explanatory Compression): MUH radically inflates ontology. Instead of one universe requiring explanation, all mathematical structures exist. The measure problem means this inflation yields no predictive compression; the cost is paid without benefit.
Criterion 3 (Thermodynamic Grounding): MUH faces the actuality/potentiality conflation directly. Mathematical consistency does not entail physical instantiation. Natal’s analysis shows that MUH collapses the Aristotelian distinction between potential and actual being, a distinction physics requires.
Sabine Hossenfelder’s verdict: “Claiming the universe ‘is’ mathematics rather than ‘is described by’ mathematics adds nothing explanatory, since scientists never need this assumption to do physics.”
Criterion 4 (Novel Predictions): MUH predicts we should find ourselves in a “typical” mathematical structure compatible with observers. But without a measure over structures, “typical” is undefined. No confirmed novel predictions have emerged.
Criterion 5 (Immunization): MUH is highly immunized. Any universe we observe is “a mathematical structure we’re inside,” so no observation can falsify MUH. This is the hallmark of unfalsifiable metaphysics.
Failure modes triggered: Actuality/potentiality conflation, prediction vacuity (measure problem), realization problem.
What RCF refutes: MUH as a scientific explanatory framework. The conflation of mathematical possibility with physical actuality. The claim that “all mathematical structures exist” constitutes explanation rather than abdication of explanation.
What survives: Mathematics as descriptive language. Anthropic reasoning within constrained ensembles. The observation that physics is highly mathematizable.
RCF verdict: Poor survivor. Peter Woit’s assessment applies: “not even wrong.”
Framework 2: Computational Universe / Ruliad (Wolfram)
Canonical statement: The universe is computational at its deepest level. The “Ruliad” is the entangled limit of all possible computational processes, the unique object that contains all rules and all their consequences. Physics emerges from the structure of the Ruliad as perceived by computationally bounded observers.
Key proponent: Stephen Wolfram, A New Kind of Science (2002) and Wolfram Physics Project (2020-)
Key critics: Scott Aaronson, Steven Weinberg, Dean Rickles
RCF Evaluation:
Criterion 1 (Falsifiability): Scott Aaronson’s peer-reviewed critique identifies specific falsifiable claims that fail:
“Wolfram’s proposal for a deterministic model underlying quantum mechanics, with ‘long-range threads’ to connect entangled particles… cannot be made compatible with both special relativity and Bell inequality violation.”
This is genuine engagement with falsification, and the framework fails the test.
Criterion 2 (Explanatory Compression): The Ruliad contains all possible computations, which means it contains no information about why this physics rather than another. Infinite ontological inflation with zero predictive compression.
Criterion 3 (Thermodynamic Grounding): Rickles et al. identify the realization problem directly:
“How does an abstract rule get turned into physical reality? If this reality is the result of the computation of rules, then what is doing the computation?… The Ruliad is a mathematical object; it is not a physical object per se, even though things described by physics would be emergent features of it.”
What computes the computation? The framework has no answer that does not either regress infinitely or collapse back into physics.
Criterion 4 (Novel Predictions): The Wolfram Physics Project claims to derive known physics (general relativity, quantum mechanics) from hypergraph rewriting. But critics note these derivations are post-hoc accommodations rather than novel predictions.
Steven Weinberg’s assessment: “No real world system has been explained using Wolfram’s methods in a satisfactory fashion.”
Criterion 5 (Immunization): The Ruliad contains all possible computations, so any observation is compatible with “being inside” some computational structure. Observer-relative emergence adds flexibility that approaches unfalsifiability.
Failure modes triggered: Actuality/potentiality conflation, realization problem (infinite regress), prediction vacuity, post-hoc accommodation.
What RCF refutes: The Ruliad as ontological generator. The claim that “reality is computation” rather than “computation describes reality.”
What survives: Computational exploration as modeling tool. Hypergraphs and rewriting systems as descriptive formalisms. The empirical work on cellular automata has value; the metaphysical superstructure does not.
RCF verdict: Poor survivor. The metaphysical claims fail; the mathematical tools remain useful.
Framework 3: Observer Theory (Sam Senchal, Ruliad Extension)
Canonical statement: The Ruliad represents the entangled limit of all possible computations. Observers are finite, physical computational processes that sample constrained subsets of this structure. What we call physical laws, causation, continuity, and meaning emerge from observer-relative coarse-graining imposed by boundedness, persistence, and relevance constraints.
Key proponents: Sam Senchal, building on Stephen Wolfram’s Ruliad. Conceptual relatives include Carlo Rovelli’s relational quantum mechanics and QBism (Fuchs, Schack), though Senchal’s framework is more explicitly computational and categorical.
RCF Evaluation:
Criterion 1 (Falsifiability): Observer Theory proposes empirical hypotheses about how observer constraints shape perceived reality. But these predictions are not discriminating: any finite-observer model predicts observer-relative structure, and no observation currently distinguishes “Ruliad sampling” from standard coarse-graining, predictive processing, or information-theoretic limits in physical systems. The Ruliad itself remains unfalsifiable, and observer-relative explanations inherit this insulation.
Criterion 2 (Explanatory Compression): The framework formalizes observers rigorously as computational processes and functors, but it does not reduce explanatory load; it relocates it. The question “why these physical laws?” becomes “why these observer constraints?”, and the latter are assumed rather than derived from deeper physical conditions.
Criterion 3 (Thermodynamic Grounding): This is the decisive failure. Observation, information integration, and entropy reduction are physical processes with energetic costs, yet Observer Theory treats them primarily as formal or computational operations. Without explicit derivation from thermodynamic constraints, observation risks functioning as abstract transformation rather than dissipative physical work. RCF requires observers to be explained as patterns that persist under energy, material, and control constraints; Observer Theory assumes this persistence rather than explaining it.
Criterion 4 (Novel Predictions): The theory proposes testable scenarios involving altered observer capacity and integration. But these predictions do not require the Ruliad; they are equally explained by finite-resource agents embedded in physical environments. The Ruliad superstructure does no unique explanatory work.
Criterion 5 (Immunization): Observer Theory is highly flexible. Any regularity can be explained as a consequence of observer sampling; any anomaly can be attributed to constraint mismatch. This makes refutation difficult without independent constraints on what observers must physically be.
Failure modes triggered: Realization problem (observers specified computationally but not thermodynamically), nominalization of “observer” as an explanatory unit, lack of discriminating predictions, ontological inflation via the Ruliad.
What RCF refutes: Observer Theory as an ontological explanation of emergence; specifically, the claim that observer-relative sampling explains physical law without explaining observers themselves as thermodynamically constrained systems.
What survives: Observer-relative descriptions as genuine epistemic facts. Coarse-graining as unavoidable for finite systems. Relational perspectives as descriptively powerful.
RCF verdict: Absorbed rather than independently surviving. RCF grounds observers within the same constraint landscape they are invoked to explain: observers are not filters of reality but patterns that persist under thermodynamic bounds while maintaining internal models of external regularities. The observer is not primitive. The observer is derived.
Framework 4: Levin’s Platonic Morphospace and Cognitive Glue
Canonical statement: Biological systems “access” pre-existing patterns in a Platonic morphospace. Goal-directedness in morphogenesis reflects navigation through this space toward attractors that exist independently of physical substrate. The mind “is not coming from the physical substrate”; rather, physical systems “facilitate” pre-existing forms into material reality. “Cognitive glue” binds competencies across scales.
Key proponents: Michael Levin (Tufts University), with collaborators including David Resnik and Chris Fields
Key critics: Thermodynamic constraint theorists, interaction problem tradition, Gordana Dodig-Crnkovic (partial)
RCF Evaluation:
Criterion 1 (Falsifiability): This is where the framework encounters its most serious difficulty. When asked directly what would falsify Platonic ingression, Levin’s documented response was:
“I move on when I feel that it’s not being fruitful for new discoveries.”
This is a pragmatic criterion about research utility, not a falsification condition. A framework can be endlessly “fruitful” while being empirically empty; creationism has been “fruitful” to its proponents for centuries. Fruitfulness alone is not falsifiability, and appeals to fruitfulness often substitute for it.
Criterion 2 (Explanatory Compression): Levin’s framework adds a non-physical realm and an “access” mechanism to what thermodynamic constraint satisfaction already explains. Bioelectric patterns, active inference, and morphogenetic fields (empirically characterized) do the explanatory work. The Platonic superstructure adds ontological cost without predictive gain.
Consider the alternative: constraint satisfaction under thermodynamic bounds explains why certain forms persist. They are what survives when constraints eliminate alternatives. This requires no transcendent morphospace, no access mechanism, no “cognitive glue” as a thing rather than a process.
Criterion 3 (Thermodynamic Grounding): The interaction problem applies directly. How do non-physical patterns causally influence physical bioelectric gradients?
Levin’s documented response: “A better science of Platonic forms plus their interfaces will force a re-do of Landauer’s Principle.”
This is a promissory note that defers the constraint indefinitely. Landauer’s principle is empirically confirmed to extraordinary precision. Claiming future physics will overturn it is not falsification; it is evasion.
The “cognitive glue” concept exemplifies the nominalization problem. What is glue? A thing that binds. What is binding? A process of coordinated constraint propagation. The noun “glue” adds nothing that “constraint coordination” does not already explain, while suggesting a mysterious substance that does the binding.
Criterion 4 (Novel Predictions): Levin’s empirical work generates testable predictions about bioelectric manipulation of morphology. These predictions are valuable and have been confirmed. But these predictions follow from bioelectric causation without requiring Platonic access.
Critically, Durant et al. (2017, Biophysical Journal), from Levin’s own lab, show path-dependent morphological outcomes that contradict convergence toward pre-existing ideal forms. Two-headed planarians created by transient bioelectric intervention remain two-headed through subsequent regeneration cycles. They do not “correct” toward canonical form.
If forms exist in a Platonic morphospace that organisms “navigate toward,” why do artificially created forms persist? The constraint-satisfaction explanation is immediate: the two-headed configuration satisfies local bioelectric constraints, so it persists. There is no “correct” form to converge toward, only forms that can pay their thermodynamic rent.
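The constraint-satisfaction reading can be made concrete with a toy bistable system. A minimal sketch; the double-well potential and the labeling of its two attractors as “one-headed” and “two-headed” are this illustration’s assumptions, not Levin’s model or the planarian biology itself:

```python
def dV(x: float) -> float:
    """Gradient of the double-well potential V(x) = (x**2 - 1)**2,
    which has stable attractors at x = -1 and x = +1."""
    return 4.0 * x * (x * x - 1.0)

def relax(x: float, steps: int = 5000, dt: float = 0.01) -> float:
    """Let the system settle into whichever basin it currently occupies."""
    for _ in range(steps):
        x -= dt * dV(x)
    return x

x = relax(-0.9)      # development settles at x = -1 ("one-headed")
print(round(x, 3))   # -1.0
x = relax(x + 2.1)   # transient intervention pushes across the barrier
print(round(x, 3))   # 1.0: the altered form persists; nothing "corrects" it
```

Both attractors satisfy the local constraints equally well. Persistence tracks basin membership, not proximity to a privileged ideal form.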
Criterion 5 (Immunization): The framework exhibits classic motte-and-bailey structure:
Bailey (strong claim, deployed when unchallenged): “Minds are forms in that space, and they access each other… the mind is not coming from the physical substrate.”
Motte (weak claim, retreat position when challenged): “I’m just using mathematical language pragmatically; patterns in dynamical systems.”
Gordana Dodig-Crnkovic’s talk at Levin’s own Platonic Symposium (December 2025) illustrates the substitution: “Transcendence not a separate domain. It’s an emergent consequence of cognitive architecture.”
David Resnik (Levin’s collaborator) privately acknowledged: “My own view is much closer to my father’s structuralism, and I hope I can convince Michael that this is a workable position.”
When a symposium participant’s framework is identified by another collaborator as “refangled Kantianism” rather than Platonism, the core claim has been replaced. The motte has consumed the bailey, but the bailey language continues in public communications.
Michael Resnik states in Mathematics as a Science of Patterns (1997), Chapter 7: “Arguing from the perspective of a Quinean epistemic holism, I claim that this feature of the practice should not make us conclude that mathematics is an a priori science, disconnected evidentially from both observation and natural science, for observation is relevant to mathematics, and technological and scientific success forms a vital part of our justification for believing in the truth of mathematics.” His work explicitly avoids the abstract objects that Tegmark’s Mathematical Universe Hypothesis and other strong Platonic frameworks require.
If mathematical facts are independent of physical facts, they are by definition transcendent, not immanent. The Stanford Encyclopedia of Philosophy, on naturalism and mathematics, states explicitly that this rules out naturalist realism: “Given that mathematical and modal facts are abstract in the sense of lying outside space and time, it follows that there is no possibility of identifying them with the kind of natural facts that have physical effects. If naturalist realism about mathematics is thus ruled out, the remaining options are irrealism and non-naturalist realism.”
Bertrand Russell’s critique of modal Platonism (1903): “It is impossible that the ordinals should be, as Dedekind suggests, nothing but the terms of such relations as constitute a progression. If they are to be anything at all, they must be intrinsically something.” If mathematical structures exist for non-actual universes, they exist independently of any physical instantiation; that is transcendence, not immanence.
Additional concern regarding Discovery Institute appropriation: Levin’s Platonist language provides formal-sounding vocabulary for transcendent causation in biology. The Discovery Institute has cited Levin’s work. This is not Levin’s intention, but unfalsifiable frameworks cannot control downstream appropriation. When a framework cannot specify what would make it false, it becomes available for appropriation by anyone who finds its conclusions convenient.
Failure modes triggered: Interaction problem (unresolved), motte-and-bailey immunization, nominalization of “pattern,” “form,” “glue,” empirical results (Durant 2017) contradict convergence predictions.
What RCF refutes: Levin’s framework at the point it invokes Platonic causation. The claim that non-physical forms guide physical development. The “cognitive glue” as a thing rather than a process.
What survives: Bioelectric control as real, measurable, powerful. Morphogenesis as constraint navigation. Goal-directed behavior as emergent control architecture. Levin’s empirical work is valuable; the metaphysical surplus layered atop it is not.
RCF verdict: The biology survives; the Platonism does not. The empirical contributions do not require (and are sometimes contradicted by) the metaphysical framework.
Framework 5: Orchestrated Objective Reduction (Penrose-Hameroff)
Canonical statement: Consciousness arises from quantum gravitational processes in microtubules within neurons. Quantum superpositions in tubulin proteins undergo “objective reduction” (collapse) orchestrated by entanglement, and this process gives rise to conscious experience.
Key proponents: Roger Penrose, Stuart Hameroff
Key critics: Max Tegmark, Laura McKemmish, Scott Aaronson
RCF Evaluation:
Criterion 1 (Falsifiability): Orch-OR makes specific physical predictions about quantum coherence timescales in microtubules. This is its strength: it is empirically falsifiable. It specifies what observations would refute it.
Criterion 2 (Explanatory Compression): Orch-OR adds quantum gravitational mechanisms to explain consciousness. If these mechanisms are required and confirmed, the complexity is justified. If not, it is surplus.
Criterion 3 (Thermodynamic Grounding): Max Tegmark’s calculation is the decisive critique:
“Decoherence time scales (~10⁻¹³-10⁻²⁰ s) are typically much shorter than the relevant dynamical time scales (~10⁻³-10⁻¹ s), both for regular neurons and for microtubules… This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer.”
The mismatch is 10-17 orders of magnitude. The warm, wet brain environment decoheres quantum superpositions far too quickly for the proposed mechanism to operate.
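The size of that gap follows directly from the quoted figures. A minimal sketch (pairing the decoherence range against the fastest relevant dynamics, ~1 ms, is this sketch’s reading of the quoted ranges):

```python
import math

fastest_dynamics = 1e-3       # s: fastest relevant neural timescale
decoherence = (1e-13, 1e-20)  # s: slow and fast decoherence estimates

gap = [math.log10(fastest_dynamics / t) for t in decoherence]
print(gap)  # ~[10.0, 17.0]: coherence is lost 10-17 orders of magnitude
            # faster than the dynamics it would need to influence
```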
Criterion 4 (Novel Predictions): Orch-OR predicts specific quantum effects in microtubules that should be detectable. To date, these have not been confirmed.
McKemmish et al. (2009, Physical Review E): “The tubulins do not possess essential properties required for the Orch OR proposal… no reformation of the proposal based on known physical paradigms could lead to quantum computing within microtubules.”
Criterion 5 (Immunization): Orch-OR has been modified in response to criticism, but the core quantum-gravitational mechanism has not been empirically supported. The modifications have not produced confirmed novel predictions.
Failure modes triggered: Decoherence timescale mismatch (empirical falsification), degenerating research program (auxiliary modifications without novel success).
What RCF refutes (conditionally): Orch-OR’s specific mechanism, given current evidence. But importantly, RCF does not permanently close the door; the framework is structured to be improvable through evidence.
What survives: Orch-OR earns credit for testability. It is scientifically respectable precisely because it can lose. It remains revisable if physics changes or if new biological evidence emerges. The framework deserves more respect than unfalsifiable alternatives, even in failure.
RCF verdict: Marginal/Mixed. Currently functions as a degenerating program (auxiliary modifications without confirmed novel predictions). But the testability is valuable. Orch-OR is a failed-but-legitimate scientific hypothesis, which is categorically different from unfalsifiable metaphysics.
Framework 6: Analytical Idealism (Kastrup)
Canonical statement: Reality is fundamentally mental. Physical matter is what universal consciousness looks like from the outside, the “extrinsic appearance” of mental processes. Individual minds are dissociated alters of cosmic consciousness, analogous to dissociative identity disorder (DID).
Key proponent: Bernardo Kastrup, The Idea of the World (2019)
Key critics: Philip Goff, Keith Frankish, physicalist tradition
RCF Evaluation:
Criterion 1 (Falsifiability): What observation would demonstrate that reality is not fundamentally mental? Kastrup’s framework interprets all physical observations as appearances of underlying mental processes, making physical evidence incapable of falsifying the mental substrate claim.
Criterion 2 (Explanatory Compression): Analytical idealism trades the hard problem of consciousness (explaining how matter produces experience) for an analogous hard problem: explaining how mind produces the appearance of matter with its specific lawful regularities.
Philip Goff’s critique: “Kastrup confuses identity with elimination… The hard problem isn’t solved by saying ‘it’s all mind’; it’s relocated.”
The problem is not dissolved; it is mirrored.
Criterion 3 (Thermodynamic Grounding): If matter is “appearance,” what constrains the regularities of appearance? Why does appearance follow thermodynamic laws? The framework must either accept physical constraints (undermining pure idealism) or explain why mental processes generate thermodynamic-law-following appearances without physical substrate. Neither move is made.
Criterion 4 (Novel Predictions): Analytical idealism predicts that consciousness cannot emerge from non-conscious matter (since matter is already mental). But this prediction is shared by panpsychism and cannot distinguish analytical idealism from alternatives.
Criterion 5 (Immunization): The DID metaphor is problematic. Kastrup notes it is “metaphorical, not literal,” but metaphors cannot substitute for mechanisms. How exactly does cosmic consciousness “dissociate” into individual perspectives while maintaining coherent physics across all perspectives?
Failure modes triggered: Problem relocation (not dissolution), no discriminating predictions, mechanistic gap, immunization via “it’s all appearance.”
What RCF refutes: Analytical idealism as reduction or explanation. The claim that “it’s all mind” solves rather than relocates the hard problem.
What survives: Critique of naïve materialism. Emphasis on first-person data as epistemically relevant. The phenomenological observations idealism highlights are real; the metaphysical conclusion does not follow.
RCF verdict: Poor survivor. Relocates rather than solves the hard problem. Functions as interpretive overlay, not explanation.
Framework 7: Panpsychism and Cosmopsychism (Goff, Strawson, Chalmers)
Canonical statement: Consciousness is fundamental and ubiquitous. Either micro-level entities have proto-experiential properties that combine into macro-experience (micropsychism/panpsychism), or the cosmos as a whole is conscious and individual minds are aspects of cosmic consciousness (cosmopsychism).
Key proponents: Philip Goff, Galen Strawson, David Chalmers (sympathetic), Hedda Hassel Mørch
Key critics: John Searle, Keith Frankish, Daniel Dennett
RCF Evaluation:
Criterion 1 (Falsifiability): What observation would demonstrate that electrons lack proto-experience? Panpsychism is structured to be compatible with any physical observation, since it concerns intrinsic natures that physics by construction does not address.
John Searle’s verdict: “Panpsychism does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim.”
Criterion 2 (Explanatory Compression): Panpsychism avoids emergence but multiplies experiential properties throughout nature without clear payoff. Every particle now has experiential properties in addition to physical properties, but this addition does not change any prediction.
Criterion 3 (Thermodynamic Grounding): Proto-experiential properties are claimed to be non-physical intrinsic natures. This invites the interaction problem: how do these properties causally influence physical dynamics? If they do not, they are epiphenomenal. If they do, where is the coupling mechanism?
Criterion 4 (Novel Predictions): Panpsychism predicts no specific physical observations that physicalism does not predict.
Goff’s attempt to derive fine-tuning arguments from cosmopsychism has been critiqued. Chan and Chan (2024, International Journal for Philosophy of Religion): “Two important premises are dubious and not rationally acceptable. Therefore, Goff’s argument for agentive cosmopsychism fails.”
Criterion 5 (Immunization): The combination problem is widely acknowledged but not solved. Responses (emergence, fusion, etc.) add auxiliary hypotheses without resolving the core issue.
Keith Frankish: “Panpsychism consigns consciousness to a metaphysical limbo where it is beyond the reach of science and lacks ethical and personal significance.”
Failure modes triggered: Combination/decombination problem, no discriminating predictions, unfalsifiable intrinsic natures.
What RCF refutes: Panpsychism and cosmopsychism as scientific theories. The claim that “consciousness is fundamental” constitutes explanation.
What survives: Philosophical pressure on reductive emergence. Ethical and phenomenological motivations. The problems that motivate panpsychism are real; the solution offered does not solve them.
RCF verdict: Poor survivor. Functions as philosophical position, not testable framework.
Framework 8: Strong Mathematical Platonism
Canonical statement: Mathematical objects exist independently of human minds and physical reality. Mathematical truths are discovered, not invented. The existence of mathematical entities is analogous to the existence of physical entities.
Key proponents: Kurt Gödel, G. H. Hardy, Roger Penrose (partially), Mark Balaguer
Key critics: Nominalists, structuralists, naturalized epistemologists
RCF Evaluation:
Criterion 1 (Falsifiability): What observation would demonstrate that mathematical objects do not exist independently? Do mathematicians agree on proofs because they access the same Platonic realm, or because they share cognitive architecture and proof conventions? How would we distinguish these hypotheses?
Criterion 2 (Explanatory Compression): Mathematical Platonism adds an ontological realm beyond physics. What phenomena require this realm that cannot be explained by mathematical structuralism (patterns without objects) or nominalism (no abstract objects)?
Criterion 3 (Thermodynamic Grounding): The epistemological problem: how do physical brains causally interact with non-physical mathematical objects to acquire mathematical knowledge?
The Internet Encyclopedia of Philosophy states: “An impenetrable metaphysical gap between the mathematical and spatio-temporal realms of the type that proponents of the epistemological challenge insist exists if platonism is true would exclude the possibility of causal interaction between human beings, who are inhabitants of the spatio-temporal realm, and mathematical entities.”
This is Benacerraf’s dilemma: Platonism makes mathematical truth easy to explain but mathematical knowledge impossible to explain.
Criterion 4 (Novel Predictions): Mathematical Platonism does not generate different predictions from anti-Platonist positions about which mathematical claims are provable or about mathematical practice.
Criterion 5 (Immunization): Platonism is easily immunized. Any mathematical discovery can be interpreted as accessing pre-existing structure; any disagreement can be interpreted as incomplete access.
Failure modes triggered: Interaction problem (Benacerraf), no discriminating predictions, unfalsifiable access claims.
What RCF refutes: Strong Platonism as ontology. The claim that mathematical objects exist independently as entities.
What survives: Mathematical practice. Mathematical structuralism about relations. Mathematics as constraint language. Michael Resnik’s realist structuralism captures what practice requires without the metaphysical baggage.
RCF verdict: Poor survivor. Adds ontological commitment without predictive payoff.
Framework 9: Classical Theism
Canonical statement: Reality is created and sustained by a necessary, omnipotent, omniscient, perfectly good being who exists outside spacetime and acts within it through providence.
Key proponents: Thomas Aquinas, Anselm, Alvin Plantinga, Richard Swinburne, William Lane Craig
Key critics: Antony Flew, logical positivists, naturalists
RCF Evaluation:
Criterion 1 (Falsifiability): Classical theism as a causal theory faces severe falsification problems. What observation would demonstrate that God does not exist, not merely that we lack evidence for God?
Antony Flew’s challenge: religious claims undergo “death by a thousand qualifications” when faced with counter-evidence. Every apparent evil is reinterpreted; every failure of prayer is accommodated.
Criterion 2 (Explanatory Compression): Classical theism adds an unexplained explainer. Why does God exist rather than nothing? If God’s existence is “necessary,” this is either question-begging or requires specifying what makes existence necessary.
Criterion 3 (Thermodynamic Grounding): The interaction problem applies with full force. How does a non-spatiotemporal being causally influence spatiotemporal systems? What is the energy budget for divine action?
Criterion 4 (Novel Predictions): Classical theism makes some predictions (answered prayer, moral order, teleological structure) but these are either unfalsifiable or empirically contested.
Criterion 5 (Immunization): Classical theism is highly immunized. “God’s ways are mysterious” and “we cannot comprehend infinite wisdom” function as semantic escape hatches that render any evidence compatible with the framework.
Failure modes triggered: Interaction problem, immunization, surplus ontology, no discriminating predictions.
What RCF refutes: Classical theism as a causal explanatory theory of physical phenomena.
What survives: Classical theism as moral, existential, or cultural practice. RCF evaluates truth-claims, not values or practices.
RCF verdict: As a causal theory: Degenerating. As existential orientation: Outside RCF’s jurisdiction.
Framework 10: Process Theology (Whitehead, Hartshorne)
Canonical statement: God is not a supernatural exception to metaphysical principles but the chief exemplification of them. God experiences and responds to the world, is affected by events, and acts through persuasion rather than coercion. Reality is fundamentally processual rather than substantial.
Key proponents: Alfred North Whitehead, Charles Hartshorne, John Cobb, David Ray Griffin
RCF Evaluation:
Criterion 1 (Falsifiability): Process theology is more falsifiable than classical theism because it makes specific claims: God does not coerce, God is affected by worldly events, divine action is through “lure” rather than intervention.
Criterion 2 (Explanatory Compression): Process theology integrates better with naturalistic frameworks. By denying supernatural intervention, it reduces ontological commitments relative to classical theism.
Criterion 3 (Thermodynamic Grounding): The “divine lure” mechanism remains underspecified thermodynamically. How does God’s persuasion influence physical systems without energy transfer?
Criterion 4 (Novel Predictions): Process theology predicts: (1) no miraculous violations of physical law, (2) genuine novelty in cosmic evolution, (3) real freedom incompatible with determinism. These are testable against competing frameworks.
Criterion 5 (Immunization): Less immunized than classical theism. Process theists engage substantively with science and accept constraints that classical theists reject.
What RCF refutes: Process theology only where it posits a distinct divine causal influence beyond physical constraint propagation.
What survives: Process metaphysics aligns strongly with constraint-first ontology. Rejection of supernatural intervention is compatible with RCF. Process theology functions as a progressive theological research program even if it adds no explanatory leverage over naturalism.
RCF verdict: Marginal/Mixed. Better structured than classical theism, more compatible with naturalism. Everything it explains is already explained by constraint satisfaction, leaving God explanatorily optional.
Part VI: The Global Pattern
Across these ten frameworks (and the many variations they represent), RCF identifies a consistent pattern of what fails and what survives.
What RCF Consistently Refutes:
Non-falsifiable causation. Frameworks that posit causal influence without specifying mechanisms that could be tested and could fail.
Actuality without selection. Frameworks claiming “all X exist” without principled selection that explains why we observe this rather than that.
Pattern persistence without energy cost. Frameworks treating patterns, forms, information, or computation as causally efficacious without thermodynamic grounding.
Frameworks that cannot lose. Frameworks structured so that any observation is compatible, any criticism is deflected, any failure is reinterpreted.
What RCF Does Not Refute:
Meaning systems. Religious practice, existential orientation, ethical frameworks are outside RCF’s jurisdiction when they do not claim to be causal explanations of physical reality.
Symbolic narratives. Metaphors, heuristics, and conceptual tools that aid thinking without claiming literal truth about reality’s structure.
Ethical orientations. Value systems do not require metaphysical validation to be actionable.
Heuristic metaphysics. Frameworks acknowledged as useful fictions or interpretive lenses rather than theories of what exists.
What Survives Across All Successful Explanations:
The pattern is narrow and invariant: constraint satisfaction under thermodynamic bounds.
Everything else either:
- Reduces to it
- Decorates it
- Escapes into interpretation
This is not metaphysical triumphalism. It is what remains after auditing what can pay rent. The question “What is real?” becomes operational: real patterns are those that persist because they satisfy constraints. Real explanations are those that specify mechanisms, generate predictions, and can lose.
Part VII: Implications
For Physics and Cosmology
Frameworks like MUH and the Ruliad represent a category error: confusing the remarkable effectiveness of mathematics in describing physics with a metaphysical claim that physics is mathematics. The description is not the described. The map is not the territory.
This does not diminish mathematics. It clarifies mathematics as constraint language, the formalism that captures what must hold for patterns to persist. The power of mathematics in physics reflects the fact that physics is constraint satisfaction, and mathematics is the native notation for constraints.
For Biology and Cognitive Science
Levin’s empirical contributions to understanding bioelectric control of morphogenesis are valuable and will persist regardless of metaphysical framing. But the Platonic superstructure adds nothing that constraint satisfaction does not already explain, and the path-dependent results from Levin’s own lab contradict convergence toward pre-existing ideal forms.
The “cognitive glue” concept should be dissolved back into the process it nominalizes: coordinated constraint propagation across scales. Integration is not a mysterious substance added to systems; it is what happens when local constraints couple across boundaries.
For Philosophy of Mind
The hard problem of consciousness remains hard. But the solutions offered (panpsychism, idealism, Orch-OR) either relocate the problem, fail empirically, or generate no discriminating predictions.
What survives is more modest: consciousness is a pattern of self-modeling that persists under thermodynamic constraints. What it is “like” to be such a pattern may be irreducibly perspectival, a feature of being inside the pattern rather than observing it from outside. This does not solve the hard problem, but it dissolves the expectation that the problem must have a solution in the form of reduction or identification.
For Epistemology
The audit reveals a meta-pattern: frameworks that cannot lose cannot learn. The capacity to fail is not a weakness; it is the mechanism by which contact with evidence improves understanding. Frameworks designed to be unfalsifiable forfeit this capacity.
This does not mean all valuable frameworks must be falsifiable in the strict Popperian sense. Heuristics, metaphors, and interpretive lenses serve genuine purposes. But they should not be marketed as causal explanations of physical reality. The confusion between “useful for thinking” and “true about the world” generates most of the problems documented here.
For Intellectual Honesty
The nominalization audit is perhaps the most transferable tool. When a framework converts processes into things (patterning into Pattern, constraining into Constraint, observing into Observer, computing into Computation) and then treats those things as causally efficacious, it has committed a category error that generates pseudoexplanation.
The correction is verb-restoration: What process? Under what constraints? At what energetic cost? With what falsifier?
Any framework that cannot survive this translation is not necessarily false. But it is not explanatory. It is interpretive overlay, which may still be valuable but should be acknowledged as such.
Part VIII: Residual Uncertainty
The ε₁⁄₆₄ that always escapes:
The evaluation criteria could be wrong. RCF assumes that falsifiability, thermodynamic grounding, and predictive success are the correct criteria for evaluating metaphysical frameworks. But these are themselves philosophical commitments that could be contested. If a framework systematically outperformed RCF-approved frameworks while violating RCF criteria, RCF would need revision.
Some evaluated frameworks may be premature rather than failed. Orch-OR in particular may be ahead of its time rather than wrong. Physics changes. What fails decoherence constraints under current models might succeed under future models.
Constraint satisfaction may itself be a nominalization. The phrase “constraint satisfaction under thermodynamic bounds” sounds like a thing. It is not. It is a description of a process: the elimination of what cannot coexist, leaving what can. But descriptions are compressions, and compressions are lossy.
The critics could be wrong. Tegmark, Wolfram, Levin, Kastrup, Goff are serious thinkers whose work deserves engagement. RCF’s verdicts are provisional, not final. If these frameworks evolve to specify falsifiers, generate confirmed novel predictions, and satisfy thermodynamic constraints, RCF upgrades them.
There may be modes of understanding that RCF cannot evaluate. Meaning systems, ethical frameworks, aesthetic judgments, spiritual practices may provide genuine value that RCF, by design, cannot assess. The limitation is acknowledged: RCF evaluates truth-claims about physical reality, not all forms of human understanding.
Conclusion: What Survives and Why
Gregory Bateson asked what pattern connects all patterns. The answer derived through constraint-based analysis is: constraint satisfaction under thermodynamic bounds, the process by which possibility space collapses into actuality through the elimination of what cannot coexist.
This answer is not grand. It does not invoke transcendent realms, cosmic minds, computational substrates, or mathematical universes. It is, in a sense, deflationary: reality is what survives. What survives is what can.
But the deflationary answer has one property that the grander alternatives lack: it can be tested, and it can fail. It specifies what observations would refute it. It generates discriminating predictions. It respects thermodynamic constraints. It does not immunize itself against criticism.
The frameworks evaluated in this audit (from Tegmark’s Mathematical Universe to Wolfram’s Ruliad to Levin’s Platonic Morphospace to Kastrup’s Analytical Idealism to Goff’s Cosmopsychism) share a common structure: they are designed not to lose. And because they cannot lose, they cannot learn.
This is not a condemnation of the thinkers who propose them. The problems they address are real. The desire to explain consciousness, mathematical effectiveness, morphological regulation, and cosmic fine-tuning is legitimate. But the solutions offered share a failure mode: they nominalize processes into things, add those things to ontology, and then claim explanatory credit without predictive accountability.
What remains after the audit is spare but stable:
- Constraints eliminate alternatives.
- What persists is what can persist.
- Thermodynamics is the landlord.
- Patterns pay rent or dissipate.
- Falsifiability is the mechanism by which frameworks learn.
- Everything else is interpretation, valuable perhaps, but not explanation.
The audit is repeatable. The criteria are specified. The falsifiers are stated. Anyone can apply the methodology to any framework, including this one.
What survives is what can.
References
Aaronson, S. (2002). Book Review: A New Kind of Science. Quantum Information & Computation. arXiv:quant-ph/0206089.
Benacerraf, P. (1973). Mathematical Truth. Journal of Philosophy, 70(19), 661-679.
Bennett, C. H. (1973). Logical reversibility of computation. IBM Journal of Research and Development, 17(6), 525-532. https://doi.org/10.1147/rd.176.0525
Chalmers, D. (2016). The Combination Problem for Panpsychism. In Panpsychism: Contemporary Perspectives. Oxford University Press.
Chan, K. H., & Chan, J. (2024). On Philip Goff’s Case for Agentive Cosmopsychism. International Journal for Philosophy of Religion, 96, 199-221.
Durant, F., et al. (2017). Long-term, stochastic editing of regenerative anatomy via targeting endogenous bioelectric gradients. Biophysical Journal, 112(10), 2231-2243. https://doi.org/10.1016/j.bpj.2017.04.011
Flew, A. (1950). Theology and Falsification. University, 1, 1-8.
Frankish, K. (2021). Panpsychism and the Depsychologization of Consciousness. Aristotelian Society Supplementary Volume, 95, 51-70.
Goff, P. (2019). Galileo’s Error: Foundations for a New Science of Consciousness. Pantheon.
Hossenfelder, S. (2014). Review of Our Mathematical Universe. BackReaction blog.
James, W. (1890). The Principles of Psychology. Henry Holt.
Kastrup, B. (2019). The Idea of the World: A Multi-Disciplinary Argument for the Mental Nature of Reality. iff Books.
Lakatos, I. (1970). Falsification and the Methodology of Scientific Research Programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the Growth of Knowledge. Cambridge University Press.
Landauer, R. (1961). Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development, 5(3), 183-191. https://doi.org/10.1147/rd.53.0183
Li, M., & Vitányi, P. (2019). An Introduction to Kolmogorov Complexity and Its Applications (4th ed.). Springer.
McKemmish, L. K., et al. (2009). Penrose-Hameroff Orchestrated Objective-Reduction Proposal for Human Consciousness Is Not Biologically Feasible. Physical Review E, 80, 021912.
Natal, T. (2024). Refuting the Metaphysics of Wolfram and Tegmark. arXiv:2411.12562v3.
Popper, K. (1959/1934). The Logic of Scientific Discovery. Routledge.
Rickles, D., Elshatlawy, H., & Arsiwalla, X. (2023). Ruliology: Linking Computation, Observers and Physical Law. arXiv:2308.16068v2.
Shackel, N. (2005). The Vacuity of Postmodernist Methodology. Metaphilosophy, 36, 295-320.
Tegmark, M. (2008). The Mathematical Universe. Foundations of Physics, 38, 101-150.
Tegmark, M. (2000). Importance of Quantum Decoherence in Brain Processes. Physical Review E, 61, 4194-4206.
Tegmark, M. (2014). Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Knopf.
Weinberg, S. (2002). Review of A New Kind of Science. New York Review of Books.
Wolfram, S. (2002). A New Kind of Science. Wolfram Media.
Yan, L. L., et al. (2018). Single-atom demonstration of the quantum Landauer principle. Physical Review Letters, 120, 210601.