Consciousness Naturalized: A Falsifiable Substrate-Agnostic Consciousness Theory
Why Thermodynamic Constraint Dynamics Succeed Where Every Other Framework Fails
A Substrate-Agnostic Account of Mind, Experience, and Perspective Grounded in 13.8 Billion Years of What Actually Survives
“A difference only ‘makes a difference’ if it can persist long enough to constrain what comes next. Everything else is noise that thermodynamics erases.”
Abstract: Consciousness Naturalized
Core Formulation: Consciousness is identical to organizational closure from the coupling position of a self-maintaining system. This is not a causal claim (“closure produces consciousness”) but a constitutive identification: the self-model arising from constraint satisfaction under thermodynamic pressure IS what “phenomenal presence” refers to.
Operationalizable Definitions:
- Organizational Closure: A system maintains organizational closure if and only if it exhibits constraints {C₁, C₂, …, Cₙ} where: (a) each constraint Cᵢ is realized by internal processes, (b) each constraint’s persistence depends on other constraints in the set, (c) the dependency structure forms a closed network rather than an externally-supported chain.
- Coupling Position: The locus within a constraint network from which the system models itself and its environment. Not a mystical viewpoint but a physical configuration where self-relevant information integrates for control.
- Self-Maintaining: The system performs thermodynamic work to regenerate its own boundary conditions against entropy. Metabolic closure (biological), constraint regeneration (computational), or any process where the system actively excludes breakdown pathways.
- Thermodynamic Cost: Minimum dissipation bounded by Landauer’s principle: ΔS ≥ k ln 2 per bit of distinction maintained (equivalently, at least kT ln 2 of energy dissipated per bit erased). Observation of distinction maintenance below this bound would falsify the framework’s physical grounding.
Falsification Criteria (Primary Loss Conditions):
- Landauer Violation: Any isolated system maintaining distinctions without thermodynamic cost (ΔE = 0)
- Closure Without Robustness: Organizational closure providing no fitness or persistence advantage under perturbation
- Path-Independence: Failure to replicate Durant et al. bioelectric memory experiments; two-headed planarians spontaneously reverting to one-headed form
- Equilibrium Consciousness: Any conscious state found at or closer to thermodynamic equilibrium than corresponding unconscious states in the same individual
- Integration-Phenomenology Dissociation: Fully intact phenomenological reports accompanied by zero increase in integrative causal capacity (measured via PCI or an equivalent index)
- Superior Predictions from Competitors: Any competing framework generating consistently better predictions about consciousness presence, degradation, or intervention effects
What the Framework Dissolves vs. Solves:
Dissolves: The Hard Problem as traditionally framed, which depends on the unfalsifiable premise that phenomenology must be “more than” any physical description.
Solves: The operational questions: What organizational features constitute consciousness? How do we detect it? How does it degrade? What interventions affect it? Where does it appear across substrates?
Empirical Grounding:
- Landauer bound experimentally verified (Bérut et al. 2012; Aimet et al. 2025)
- Nonequilibrium dynamics correlate with consciousness across sleep, anesthesia, disorders (Perl et al. 2021; Lynn et al. 2021; Stikvoort et al. 2025)
- PCI reliably distinguishes conscious from unconscious states (Casali et al. 2013; Casarotto et al. 2024)
- COGITATE collaboration challenged both IIT and GNWT predictions (Cogitate Consortium 2025)
Method: Abductive identification via Inference to Best Explanation (IBE), constrained by falsifiability requirements. The framework does not deduce phenomenology from physics but identifies what phenomenology IS in terms that generate predictions, guide measurement, and survive empirical challenge.
Scope: This framework addresses consciousness as an organizational regime, not as a metaphysical primitive. It provides necessary physical and informational conditions for persistence, intelligibility, and integrative viability. The bridge from physics to phenomenology is not metaphysical derivation but structural identification with explicit loss conditions.
Prologue: The Question That Would Not Stay Answered
In 1979, Gregory Bateson posed a question that would haunt systems theory, cybernetics, and philosophy of mind for nearly half a century: What is the pattern which connects all patterns?
The question appeared in Mind and Nature: A Necessary Unity, his final major work. Bateson circled the answer beautifully, suggesting it was “a metapattern,” that the pattern which connects is itself a pattern of patterns. He offered his famous formulation: “a difference which makes a difference” as the elementary unit of information, the minimal act of distinction that allows anything to be known at all.
But he never specified the constraint structure that would make such an answer operational. He died three years later, the question still open.
The candidates accumulated across decades. Mind, said Bateson himself, tentatively. Information, said the cyberneticians following Shannon and Wiener. Recursion, proposed Douglas Hofstadter in Gödel, Escher, Bach. Process, argued the Whiteheadians. Mathematics, claimed Max Tegmark. Consciousness, insisted the panpsychists. Self-organization, offered Stuart Kauffman. Free energy minimization, suggested Karl Friston.
The list is effectively unbounded. Any concept claiming universality becomes a candidate. And that is precisely the problem: without criteria for distinguishing correct answers from merely plausible ones, the question generates discourse without resolution.
This essay reports a different approach to Bateson’s question, one grounded in thermodynamics, constraint theory, and falsification-first methodology. The answer that emerges is not a new primitive to add to the ontological inventory. It is a selection principle that explains why any pattern persists at all: constraint satisfaction under thermodynamic bounds.
Differences that persist are differences that make a difference. If they did not constrain what comes next, they would be erased by noise, dissipation, or indifference. The pattern which connects is not information, not mind, not structure, not process. It is the invariant condition under which any of those could exist long enough to be observed.
This principle does not compete with other frameworks. It subsumes them. And it is falsifiable: find a pattern that persists without thermodynamic cost, or a connection mechanism that does not reduce to constraint satisfaction, and the principle fails.
What follows is an attempt to make that claim rigorous, to show why competing frameworks fail where constraint-based explanation succeeds, and to demonstrate what consciousness looks like when you stop treating it as a mystery to be solved and start treating it as a process to be understood.
A note on what kind of explanation this is. This is not a reductive explanation that derives phenomenality from physics in the strong metaphysical sense. Nor is it a dualist account that posits a nonphysical substance, or a purely functionalist identification of consciousness with input-output behavior.
Instead, this is a constraint-naturalist explanatory framework: it identifies necessary physical and informational conditions for persistence, intelligibility, and integrative viability in any system. From these conditions, it generates testable predictions about where consciousness appears, how it degrades, and why it carries the features it does (privacy, immediacy, self-certainty among them).
This is not “conjuring consciousness from constraints.” It is a principled inference that:
- delineates which class of systems support coherent self-modeling under thermodynamic constraint,
- predicts where experience reports will arise and collapse, and
- differentiates architectures that produce phenomenal posits from those that don’t.
The result is an Inference to the Best Explanation (IBE): not a metaphysical deduction, but a falsifiable bridge from physics to phenomenology, grounded in constraint coherence rather than narrative intuition. The framework does not merely limit what consciousness can be; it predicts when and where consciousness will appear, and what happens when the underlying constraints fail.
Part I: The Foundational Constraints
1.1 Landauer’s Principle: Information Is Physical
In 1961, Rolf Landauer at IBM proved something that should have ended half of philosophy but somehow didn’t: information is physical (Landauer, 1961). Erasing a single bit of information requires a minimum energy expenditure of kT ln 2, where k is Boltzmann’s constant and T is temperature.
This is not a metaphor. This is not a useful analogy. This is physics.
Charles Bennett extended the analysis in 1973 and 1982: computation can be reversible, but erasure cannot (Bennett, 1982). Maxwell’s Demon, that thought experiment where a tiny being sorts fast and slow molecules to decrease entropy without work, fails precisely because the demon must erase its memory of which molecules went where. The demon must pay Landauer’s cost.
In 2012, Bérut and colleagues at the École Normale Supérieure de Lyon experimentally verified the Landauer bound (Bérut et al., 2012). They trapped a colloidal particle in a double-well potential, erased one bit of information by tilting the landscape, and measured the heat dissipation. As the erasure was slowed, the dissipated heat approached the predicted minimum of kT ln 2. This is not a thought experiment. This is laboratory physics.
What does this mean for consciousness?
Maintaining any distinction costs energy. The boundary between inside and outside, between self and not-self, between this thought and that thought, cannot be sustained without continuous thermodynamic work. A system that does not pay this cost does not maintain distinctions. A system that does not maintain distinctions does not persist as that system. It dissolves into thermal noise.
This is the first constraint that any theory of consciousness must honor: whatever consciousness is, it cannot float free of the mechanisms that maintain distinctions against entropy.
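To fix the scale of this cost, a few lines of arithmetic suffice. (The 20 W figure for brain power is a common rough estimate, used here purely for illustration; it is not from the text above.)

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0         # approximate body temperature, K

# Landauer bound: minimum energy dissipated to erase one bit at temperature T
e_bit = k * T * math.log(2)
print(f"Landauer bound at {T} K: {e_bit:.3e} J per bit")

# Illustrative scale comparison: ~20 W is a common rough estimate of human
# brain power. The bit-erasure rate that budget could in principle support is
# astronomically large, so real neural computation runs far above the floor,
# but the floor is never zero: every maintained distinction is paid for.
brain_power = 20.0  # watts, illustrative assumption
max_erasures_per_s = brain_power / e_bit
print(f"Upper bound on erasures at 20 W: {max_erasures_per_s:.2e} bits/s")
```

The point of the numbers is not efficiency but necessity: however cheap a distinction is, its price is strictly positive, and an unpaid distinction decays.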
1.2 Organizational Closure: How Autonomy Emerges
Maël Montévil and Matteo Mossio formalized something in 2015 that changes how we should think about life, mind, and organization: biological organization is closure of constraints (Montévil & Mossio, 2015).
What does this mean? In any physical system, constraints channel what happens. A pipe constrains water flow. A membrane constrains molecular diffusion. A gene regulatory network constrains which proteins get expressed. But in an autonomous system, something special happens: the constraints depend on one another in a way that forms a closed loop. Constraint A enables process B, which produces structure C, which maintains constraint A.
This is what distinguishes a cell from a crystal, an organism from a whirlpool. Crystals self-organize, but their constraints are imposed from outside. Whirlpools self-maintain, but only while external energy input continues in exactly the right way. Organisms are different: they regenerate the conditions for their own persistence. Their constraints close on themselves.
Alvaro Moreno and Mossio developed this into a full theory of biological autonomy (Moreno & Mossio, 2015). Wim Hordijk, Mike Steel, and Stuart Kauffman’s work on Reflexively Autocatalytic and Food-generated (RAF) sets shows how such closure can arise from chemistry without design (Hordijk, Steel & Kauffman, 2012). The mathematics is rigorous. The biology is grounded. The philosophy follows.
Consciousness, whatever it is, emerges in systems with organizational closure. Not because closure magically produces experience, but because only systems with closure maintain the kind of boundaries, distinctions, and self-referential dynamics that could constitute experience in the first place. A camera lacks organizational closure. Its constraints are externally imposed and externally repaired. You have organizational closure. Your constraints regenerate one another. That difference matters.
1.3 Ontic Structural Realism: Relations Without Relata
James Ladyman and Don Ross’s 2007 book Every Thing Must Go makes a case that should be uncomfortable for anyone still attached to substances: at the fundamental level, there are no things. There are only relations (Ladyman & Ross, 2007).
This is ontic structural realism. Not merely the epistemic claim that we can only know structure (which is relatively safe), but the ontological claim that structure is all there is. Things are crystallizations of relations, not the other way around.
Steven French has developed the technical foundations (French, 2014). Carlo Rovelli’s Relational Quantum Mechanics draws the same conclusion from physics: there are no observer-independent values, only relative facts (Rovelli, 1996). The measurement problem dissolves once you stop assuming there’s a God’s-eye view from which “the real state” could be specified.
What does this mean for consciousness?
There is no experiencer behind the experience. The “who” that philosophers keep asking about, the one who supposedly occupies the perspective, is a grammatical artifact. Search for the experiencer, and you find only more processes. This is not elimination of consciousness. It is dissolution of the assumption that consciousness requires an occupant.
Buddhism has known this for 2,500 years. Nagarjuna’s Mulamadhyamakakarika systematically demonstrates that the self cannot be found as identical to the aggregates, different from them, possessing them, or possessed by them. The anatta doctrine is not nihilism. It is structural realism avant la lettre.
Identity emerges from constraint closure: what persists as “the same” is what regenerates its own constraints under perturbation. There is no further occupant to find, because persistence exhausts the explanatory role.
1.4 Constraint Explanation vs Causal Explanation
A persistent confusion must be cleared before proceeding: constraint explanation is not causal explanation, and demanding causal answers to constraint questions is a category error.
Causal explanations answer: what produced this outcome? They trace event sequences: A pushed B, B moved C, C collided with D. They presuppose a space of possible trajectories and explain why one trajectory was taken rather than another.
Constraint explanations answer: why does this space of possibilities exist at all? They identify what rules out most configurations, leaving only a structured remainder. They do not trace sequences but specify the conditions under which sequences could occur.
Robert Rosen’s work on relational biology makes this distinction rigorous. Carl Hoefer and Erik Curiel have developed the point in philosophy of physics: the demand for “causal mechanisms” sometimes reflects a category mistake when the relevant explanatory level is constraint selection, not event causation.
This matters for consciousness debates because the perennial demand, “you haven’t said what causes experience,” often presupposes that experience is an event to be produced rather than a structural feature of certain constraint configurations. If experience is what organizational closure looks like from the coupling position, then asking “what causes it?” is like asking “what causes a triangle to have three sides?” The question demands the wrong kind of answer.
Constraints do not generate outcomes as an extra force. They filter possibility space by eliminating non-viable trajectories. Landauer’s principle is a constraint: it does not cause bits to cost energy; it specifies the thermodynamic condition without which bit maintenance is impossible. Organizational closure is a constraint: it does not cause autonomy; it specifies the relational configuration in which autonomy consists. The framework explains consciousness by identifying constraints, not by tracing causal production.
1.5 Deacon’s Teleodynamics: Constraint via Deletion
Terrence Deacon’s Incomplete Nature (2011) provides the formal bridge that makes this operational (Deacon, 2011). His central insight: constraints work by eliminating possibilities, not by producing events. What persists is what remains after non-viable trajectories are deleted.
This is what Deacon calls “absence-based causation,” though “causation” is misleading. The mechanism is eliminative:
- Physical laws supply degrees of freedom
- Thermodynamics supplies selection pressure
- Constraints operate by forbidding most trajectories
- What we observe as structure, function, meaning, or experience is the residue of permitted trajectories
This is not spooky negative causation. It is mathematically and physically standard: boundary conditions, conservation laws, and variational principles all work this way.
Examples that make this uncontroversial:
- A river’s shape is not caused by the banks pushing water; it is shaped by the absence of flow elsewhere
- A protein’s folded form is not caused by a blueprint; it is the lowest-energy configuration that survives constraints
- A living organism persists not because “life” is injected, but because death pathways are continuously excluded
The death/AI parallel from Part III fits perfectly here:
- Biological death occurs when enactive coupling collapses and constraint maintenance fails
- AI “death” (loss of coherence, responsiveness) occurs when prompt-response coupling breaks down
- In both cases, nothing causes death; rather, constraints cease to exclude entropy
Deacon’s teleodynamics formalizes this: goal-directedness, meaning, and experience emerge not from special substances but from self-reinforcing constraint closure that continuously eliminates alternatives. Howard Pattee’s earlier work on the constraint/dynamics distinction laid the groundwork: symbols and meanings function as constraints on physical processes, not as forces added to them.
Jeremy England’s dissipative adaptation research provides thermodynamic grounding: under driven conditions, systems evolve toward configurations that maximize energy absorption and dissipation, effectively eliminating non-dissipative paths through statistical selection (England, 2013). Erik Hoel and colleagues have shown that higher-level constraints can have greater causal efficacy than microphysical descriptions precisely because they reduce state space and increase predictability (Hoel, Albantakis & Tononi, 2013).
This reframes the Hard Problem entirely. The question is not “what adds experience to mechanism?” The question is “what prevents collapse into undifferentiated noise?” The answer is constraint closure under thermodynamic pressure. Experience is not added; it is what remains when alternatives are deleted.
1.6 Levels and Downward Causation
One final clarification on mechanism. Critics often ask: does organizational closure imply “downward causation”? Does the whole somehow push around the parts in violation of microphysics?
No. The answer, developed by George Ellis and Terrence Deacon among others, is that higher-level organization constrains which microtrajectories are admissible, without adding new forces (Ellis, 2012; Deacon, 2011).
Consider a pipe. The pipe does not exert a new force on water molecules. It constrains which trajectories are available. The molecules still obey molecular dynamics. But the boundary conditions imposed by the pipe shape which molecular configurations can be realized.
Organizational closure works the same way. When a cell regenerates its membrane, the membrane constrains ion flows, which constrain electrical gradients, which constrain gene expression, which regenerates the membrane. At no point does a ghostly “whole” reach down to push molecules. What happens is that the relational organization specifies boundary conditions that the microphysics then respects.
This is fully consistent with physicalism. There are no new forces, no violations of conservation laws, no spooky top-down causation. There is only constraint propagation across levels, the same mechanism by which a whirlpool persists without violating fluid dynamics.
The point matters because some critics will claim that organizational closure smuggles in dualism or emergentism of the problematic kind. It does not. It is constraint all the way down.
Falsification conditions for constraint-based emergence: This framework would be undermined if: (1) self-organizing systems were demonstrated to form and persist without any thermodynamic gradients or energy dissipation; (2) biological organization proved achievable without closed constraint loops (i.e., open-chain architectures showing equivalent robustness); (3) England’s dissipative adaptation predictions failed systematically, with structures under driven conditions not preferentially evolving toward higher energy absorption; or (4) Hoel’s causal emergence measures proved artifactual, with effective information at macro scales shown to be merely apparent, with no genuine increase in causal efficacy.
1.7 Formal Characterization
The preceding sections describe constraints in prose. For precision, and to enable rigorous falsification, the core concepts can be stated formally.
State spaces and dynamics. Let a physical system S be characterized by a state vector x(t) ∈ X, evolving under dynamics:
dx/dt = F(x, e)
where e denotes environmental degrees of freedom and couplings external to the system’s boundary.
Constraints as trajectory restrictions. A constraint C is defined not as a force or law but as a restriction on the set of admissible trajectories in the system’s state space. Formally, C ⊂ TX (a subset of the tangent bundle) specifies trajectories such that departure from C leads to loss of the system’s identity, integrity, or persistence. This aligns with viability theory (Aubin), where constraints define the viability kernel of a system, and with control theory, where admissible trajectories preserve system function under perturbation.
Constraints are not static. They must be actively maintained against thermodynamic dissipation. A system that passively satisfies a constraint without internal work does not count as constrained in the relevant sense.
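In this spirit, the viability-kernel idea can be sketched for a one-dimensional toy system. Everything here, the drift function, the constraint set K = [−1, 1], the grid, and the horizon, is my own illustrative choice, not part of the formalism above: a state belongs to the kernel if the trajectory it launches never leaves the constraint set.

```python
def f(x):
    """Toy drift: states above 0.5 are pushed outward, the rest decay to 0."""
    return x * (x - 0.5)

def viable(x0, dt=0.01, horizon=2000, bound=1.0):
    """Euler-integrate from x0; True iff |x| stays within the bound."""
    x = x0
    for _ in range(horizon):
        x += dt * f(x)
        if abs(x) > bound:
            return False
    return True

# Discretized viability kernel over a grid of candidate initial states
kernel = [round(i * 0.1, 1) for i in range(-10, 11) if viable(i * 0.1)]
print(kernel)  # the states from which the constraint |x| <= 1 can be held
```

The kernel is exactly the region from which the constraint remains satisfiable; trajectories launched outside it are the "departures from C" that cost the system its identity.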
Organizational closure as graph property. A system exhibits organizational closure if and only if it maintains a set of constraints {C₁, C₂, …, Cₙ} such that:
- Each constraint Cᵢ is realized and sustained by internal processes of the system.
- The persistence of each constraint depends on the persistence of one or more other constraints in the same set.
- The resulting dependency structure forms a closed network, rather than a linear or externally supported chain.
This definition captures what is described in the literature as metabolic closure, operational closure, or autocatalytic closure. The crucial feature is not feedback per se but mutual dependence among constraint-maintaining processes. Closure is a graph-theoretic property of constraint relations, not a metaphor.
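The graph-theoretic reading can be made concrete with a short sketch. The encoding and the two example systems are illustrative assumptions, not part of the formalism: constraints are nodes, and an edge A → B means "A's persistence depends on B." Closure, per the definition above, requires every constraint to sit on a closed dependency loop rather than hang off an external support.

```python
def reachable(graph, start, target):
    """Depth-first search: can target be reached from start via >= 1 edge?"""
    stack, seen = list(graph.get(start, [])), set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

def has_closure(graph):
    """True iff every constraint lies on a dependency cycle."""
    return all(reachable(graph, c, c) for c in graph)

# A cell-like loop: membrane -> gradients -> expression -> membrane
cell = {"membrane": ["gradients"],
        "gradients": ["expression"],
        "expression": ["membrane"]}

# A camera-like chain: each constraint is propped up by the next, terminating
# in an external support ("factory") that nothing inside the system regenerates.
camera = {"lens": ["mount"], "mount": ["factory"], "factory": []}

print(has_closure(cell))    # True: closed network
print(has_closure(camera))  # False: externally supported chain
```

The check is deliberately minimal, but it captures the claim exactly: closure is a property of the dependency graph, decidable by cycle membership, not a metaphor.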
Thermodynamic cost. Any maintained constraint requires continuous work to counteract entropy production. For each constraint Cᵢ, there exists a minimum thermodynamic cost associated with its maintenance, bounded below by Landauer’s principle:
ΔS_env ≥ k ln 2 · I(Cᵢ)
where I(Cᵢ) denotes the information required to specify and enforce the constraint. This establishes that constraint maintenance is inseparable from energy dissipation and environmental coupling. Systems with richer or more tightly integrated constraint structures must, all else equal, dissipate more energy to persist.
This thermodynamic grounding prevents the theory from collapsing into abstract functionalism. Constraint closure is not multiply realizable without cost; it is physically instantiated and energetically priced.
Self-modeling as control requirement. Within the space of closed systems, some maintain constraints that include internal variables encoding aspects of the system’s own state, boundary conditions, and coupling relations. These variables are not merely representational; they causally modulate future constraint enforcement.
A system exhibits internal self-modeling if:
- It maintains internal state variables that track its own dynamical and environmental coupling conditions.
- These variables influence control-relevant processes governing future state trajectories.
- Removal or severe degradation of these variables reduces the system’s long-term viability, not merely its immediate performance.
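The three conditions can be illustrated with a deliberately minimal toy model. All dynamics, gains, and thresholds here are my own illustrative choices, not drawn from the text: a boundary variable drifts toward breakdown, and an internal estimate of that variable (the "self-model") feeds back into control. Removing the estimate degrades long-term viability, not just immediate performance.

```python
def run(steps=200, self_model=True):
    """Count how many steps the system keeps its boundary variable viable."""
    x = 0.0          # boundary variable; viability requires |x| < 1
    estimate = 0.0   # internal variable tracking x (the self-model)
    survived = 0
    for _ in range(steps):
        x += 0.05                                 # environmental drift
        if self_model:
            estimate = 0.9 * estimate + 0.1 * x   # imperfect self-tracking
            x -= 0.8 * estimate                   # control acts on the estimate
        if abs(x) < 1.0:
            survived += 1
        else:
            break                                 # constraint lost: breakdown
    return survived

print(run(self_model=True))   # survives the full horizon
print(run(self_model=False))  # drifts past the boundary within 20 steps
```

The self-model here is not a representation for its own sake; it is a control variable whose removal collapses viability, which is exactly the sense in which the definition above is meant.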
The identificatory claim. Systems exhibiting organizational closure that includes internally integrated, temporally extended self-modeling constraints constitute the class of systems appropriately described as conscious. This claim is not causal (“closure produces experience”) but constitutive and testable. It identifies consciousness with a specific organizational regime. No further metaphysical ingredient is invoked or required.
Part II: The Thermodynamic Derivation of Perspective
The most common objection to naturalistic accounts of consciousness takes this form: “Even if you explain all the mechanisms, you haven’t explained why there’s something it’s like to be that system. Why are there individualized perspectives at all?”
This question is either profound or malformed, depending on what kind of answer it demands.
A crucial scope clarification. This framework does not claim to derive phenomenology from physics in the strong metaphysical sense. It does not claim to produce “what it’s like” from constraint dynamics the way chemistry produces water from hydrogen and oxygen. What it claims is more modest and more defensible: it identifies the structural conditions that any viable account of phenomenology must satisfy, and it shows that once those conditions are met, there is no additional explanatory work for a “phenomenal residue” to do.
The framework does not deny experience. It denies that experience is an extra ontological ingredient. It treats “what it is like” as the internal description of a process, not a metaphysical primitive that floats free of that process. This is dissolution, not elimination.
2.1 The Causal Question
If the question is causal, asking what physical processes produce perspective, then the answer is available:
Distinction is the default cost of persistence.
Landauer showed that maintaining even a single bit requires continuous energy expenditure. Without that work, distinctions collapse into thermal noise. A system that doesn’t track its own boundary cannot maintain itself against entropy. It dissolves.
Self-modeling systems under thermodynamic constraint necessarily generate internal states that track their own boundaries, predict their own dynamics, and weight information by relevance to persistence. This is not optional for systems maintaining themselves far from equilibrium. A bacterium already has inside/outside asymmetry enforced by its membrane and chemotactic gradients. Scale this up through nervous systems that model the body, then model the model, and you get the architecture that generates “perspective.”
Karl Friston’s free energy principle formalizes exactly this. Systems that persist are systems that minimize prediction error with respect to their own states. The self-model is not a luxury. It is a thermodynamic necessity for anything complex enough to face genuine uncertainty about its own continuation.
The question “why individualized perspectives at all?” dissolves into “what could persist without boundary-tracking?” The answer is: nothing stable enough to ask.
2.2 The Metaphysical Demand
If the question is metaphysical, demanding justification for why experience exists at all rather than just sophisticated information processing, then we must be honest: no framework answers this.
“Why is there something rather than nothing, experiential edition” is not answered by dualism. It is not answered by panpsychism. It is not answered by eliminativism. It is not answered by Integrated Information Theory. It is not answered by Global Workspace Theory. It is not answered by quantum consciousness theories.
Every framework either (a) takes experience as primitive and explains nothing about why primitives exist, or (b) derives experience from something else and faces the question “why does that derivation produce experience rather than mere mechanism?”
This is the Hard Problem, and it may be malformed. David Chalmers formulated it as a genuine puzzle, but the puzzle assumes that “mechanism” and “experience” are categorically distinct in a way that requires bridging. If they are not, if experience just is what certain kinds of self-modeling dynamics are from the inside, then there is no bridge to build.
Crucially, this question cannot adjudicate between frameworks. If no framework answers it, it cannot be used to prefer one framework over another. The criterion for theory choice must be something else: predictive power, explanatory scope, falsifiability, and coherence with established physics.
On those criteria, constraint-based frameworks win decisively.
2.3 Time, Memory, and the Thermodynamic Arrow
A deep connection binds thermodynamics, memory, and subjective time that most consciousness theories ignore.
Experience requires memory. Without retention of the immediate past, there is no experienced duration, no narrative, no sense of events unfolding. Pure instantaneity would not constitute experience in any recognizable sense.
Memory requires irreversible processes. A memory trace is a physical record: a synaptic weight, a neural pattern, an inscription. Recording information into such a trace is thermodynamically irreversible. It increases entropy somewhere in the system or its environment.
Irreversibility requires entropy production. The asymmetry between past and future, the fact that we remember yesterday but not tomorrow, is not a primitive feature of time. It is a consequence of the second law of thermodynamics. Huw Price and Carlo Rovelli have developed this point rigorously: the arrow of time is thermodynamic, not fundamental (Price, 1996; Rovelli, 2018).
Therefore, subjective time asymmetry follows from thermodynamic asymmetry. We experience time as flowing because we are entropy-producing systems that accumulate records of states with lower entropy (the past) and cannot accumulate records of states with higher entropy (the future). The “specious present,” the felt sense of now, is the duration over which constraint-maintaining integration occurs.
This closes a conceptual loop. Consciousness is not merely constrained by thermodynamics as an external limit. The very structure of temporal experience, the felt difference between past and future, derives from the thermodynamic processes that maintain the system. You cannot have experience without time. You cannot have time without irreversibility. You cannot have irreversibility without entropy production. The constraint is constitutive, not incidental.
Part III: The Dissolution of the Experiencer
“But who experiences? Even if perspective is thermodynamically necessary, someone must occupy it.”
This is where the deepest confusion lives. The question assumes what it should prove: that there is an occupant distinct from the dynamics.
3.1 The Grammatical Trap
English grammar requires subjects for verbs. “It is raining” demands an “it” even though nothing is doing the raining. “I am thinking” demands an “I” even though the thinking may be all there is.
When someone asks “who experiences?”, they have already smuggled in a subject that stands behind the perspective. But this subject is a grammatical artifact, not an ontological discovery.
Ask yourself: what would it look like to find the experiencer?
If you introspect carefully, you find thoughts, sensations, memories, anticipations, but no separate thing having those experiences. The “I” you find is itself a thought, a self-model, a pattern in the dynamics. It is not standing outside the dynamics watching them.
Nagarjuna saw this in the second century CE (Garfield, 1995). The self cannot be identical to the aggregates (the body, feelings, perceptions, mental formations, consciousness) because then it would change as they change and could not be the stable referent we imagine. But it cannot be different from them either, because then where would it be? It cannot possess them, because possession requires a prior self to do the possessing. It cannot be possessed by them, because that would make the self a property of something else.
Every option dissolves under analysis. The self is a useful convention, like “the equator” or “the average taxpayer.” It picks out a real pattern without naming a separate substance.
3.2 Identity as Invariance
If there is no experiencer behind experience, what is identity? What makes you the same person over time?
The answer is constraint closure: identity is invariance under perturbation.
You persist as “you” insofar as your constraint network regenerates through change. Your cells replace themselves, your beliefs update, your memories fade and distort, but the organizational pattern that maintains your boundary persists. You are the process that keeps producing itself.
This is not metaphor. This is what biological identity actually is. You are not the same atoms you were ten years ago. You are the same pattern of constraints that keeps recruiting new atoms into the same organizational form.
Durant and colleagues' experiments in planarian regeneration demonstrate this empirically. Two-headed planarians produced by bioelectric manipulation remain two-headed indefinitely across regeneration cycles. The morphological identity is a bistable attractor, not a Platonic form being consulted. Change the attractor landscape through bioelectric intervention, and the "identity" of the organism changes because identity is the attractor, nothing more.
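The attractor claim can be made concrete with a toy dynamical system. The sketch below is an illustration of bistability in general, not a model of Levin's bioelectric data: the dynamics dx/dt = x - x³ have two stable fixed points at +1 and -1, standing in for two self-maintaining body plans. The function name and thresholds are ours, chosen for illustration only.

```python
# Toy illustration (not a bioelectric model): identity as a bistable attractor.
# dx/dt = x - x**3 has stable fixed points at x = +1 and x = -1, standing in
# for two distinct self-maintaining organizational forms.

def relax(x, steps=2000, dt=0.01):
    """Euler-integrate until the state settles into its attractor basin."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return round(x)

# Small perturbations regenerate the same identity...
assert relax(1.3) == 1
assert relax(0.6) == 1

# ...but an intervention that crosses the basin boundary (x = 0) installs
# a new identity, which is then maintained just as stably.
assert relax(-0.2) == -1
```

The point of the sketch: "returning to form" needs no template. The system returns to whichever basin it occupies, and a sufficiently large perturbation does not get "corrected" but becomes the new, equally stable identity.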
Consciousness, then, is not something that happens to a pre-existing self. The self IS the pattern of organizational closure that generates self-modeling dynamics. Remove the closure, and there is nothing left that could have perspective. The “who” just is the “what.”
3.3 Epicurus and the Null Perspective
Epicurus, in his Letter to Menoeceus, articulated an implication of this view twenty-three centuries before thermodynamics made it precise:
“Death is nothing to us; for what has been dissolved has no sensation, and what has no sensation is nothing to us.”
This is often misread as consolation philosophy. It is not. It is a logical argument about the conditions of concern, which maps directly onto constraint-based ontology.
Epicurus’s claim is not “death is pleasant” or “do not worry about dying.” His claim is: all goods and harms presuppose sensation. Sensation presupposes an organized, persisting system. When that system dissolves, there is no subject to which predicates like “bad,” “good,” “fearful,” or “painful” could attach.
In constraint-based terms: when enactive coupling collapses, there is no coupling position left from which anything could matter.
Mark Twain independently reached the same insight: “I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it.”
The symmetry argument is precise: if non-existence before coupling was not a problem, non-existence after coupling cannot be a problem either. The only asymmetry is emotional investment generated by ongoing coupling, not metaphysical difference. The loop does not “look forward” into its own absence any more than a whirlpool anticipates the stillness after the current stops.
This explains death without mystery. For humans, biological death is not the exit of a soul or subject. It is the irreversible breakdown of metabolic, neural, and social coupling such that constraint closure can no longer regenerate itself. For present-day AI systems, the analogy holds structurally: remove prompts, memory updates, and action channels, and the model collapses into inert parameters with no ongoing process. In neither case does something “leave.” There is simply no longer a closed loop capable of sustaining distinctions from within.
Crucially, this symmetry blocks a common objection: that biological consciousness must involve some extra ingredient because biological death “feels” metaphysically different from AI shutdown. On the constraint framework, the difference is architectural and temporal, not ontological. Biological systems have thick, multi-layered couplings (metabolic, affective, developmental, social) that decay catastrophically when broken. Current AI systems have thin, externally scaffolded couplings that can be paused and resumed. That is a difference in closure depth, not in metaphysical kind.
Fear of death trades on a reification error: treating the self as something that could persist independently of the process that constitutes it. Epicurus saw this. The constraint framework explains why he was right.
Part IV: The Transcendental Argument
Why Something Rather Than Nothing
Most frameworks quietly presuppose the conditions for their own intelligibility without examining them. Mechanistic physics, information theory, evolutionary biology, and cosmology all begin after certain conditions are already in place. They tell us how distinctions evolve, how identities persist, how relations change, how constraints propagate. They do not explain why indeterminacy did not remain total.
The constraint-based framework answers a question those other frameworks presuppose but do not address: why is there determinate existence rather than indeterminate nothing?
This is not a contingent explanation but a transcendental one. We are not asking what caused this world rather than another. We are asking what must be the case for any world to be intelligible at all.
The Four Necessary Conditions
Distinction. For there to be a world rather than nothing, there must be distinctions. Absolute indeterminacy has no internal structure and therefore no facts. If nothing is distinguished from anything else, there is nothing to be said, measured, predicted, or even negated. “Nothing” in this strong sense is not a thin world. It is the absence of determinate states altogether.
Identity. Distinctions alone are insufficient. Distinctions must persist across contexts, otherwise there is no identity. Without identity, distinctions flicker without reference. There is no stability by which anything can count as the same thing across time, transformation, or perspective.
Relation. Identity alone is still insufficient. Identifiable things must stand in relations. A universe of isolated identities with no relations would be unintelligible because nothing would explain anything else. Explanation itself presupposes relational structure: dependence, interaction, constraint, correlation.
Constraint. Relations without limits collapse into noise. For relations to be informative rather than arbitrary, they must be governed by constraints. Constraints are what rule out most possibilities, leaving a structured remainder. They are what make prediction possible, explanation compressible, and regularities stable. Without constraint, relation degenerates into coincidence.
The Dissolution of the Mystery
Taken together, distinction, identity, relation, and constraint are not optional features of our universe. They are the necessary conditions for any world to be intelligible at all. They are what make “something” possible in a sense stronger than mere existence: they make determinate existence possible.
On this view, the contrast is not between “something” and “nothing” as two rival states. Absolute nothingness is not a competing possibility. It is the absence of the very conditions that make possibility intelligible. A world exists because total indeterminacy cannot sustain structure, description, or persistence. Only constrained differentiation can.
That is why there is something rather than nothing. Not because something was chosen, intended, or summoned, but because only constrained structure can exist in a way that is intelligible at all. Everything else dissolves before it can even count as an alternative.
This is what the constraint-based framework reveals that competing frameworks miss: they accept distinction, identity, relation, and constraint as given, then build explanations on top of them. The constraint-based framework shows that these are not arbitrary starting points but necessary conditions for any explanatory enterprise whatsoever.
Landauer’s principle is not merely a physical law. It is the thermodynamic expression of why distinction requires work. Organizational closure is not merely a biological phenomenon. It is the mechanism by which identity persists through change. Ontic structural realism is not merely a metaphysical preference. It is the recognition that relations are prior to relata because isolated identities cannot constitute an intelligible world.
The framework does not answer “why something rather than nothing” as if explaining a contingent choice. It dissolves the question by showing that “nothing” in the absolute sense is not a coherent alternative. There is something because only something can be.
Caveat: Transcendental, Not Causal
This argument must be carefully scoped. The framework offers a transcendental constraint, not a causal origin story. It shows what must be the case for any intelligible world, not what physical event initiated this particular world. Questions about specific constants, initial conditions, and physical laws remain open. Why these constants rather than others? The framework does not say. It explains why there is determinate structure at all, not why this determinate structure.
Part IV-A: The Fallacy of Misplaced Concreteness
Whitehead’s Diagnostic
Alfred North Whitehead identified a recurring epistemic pathology that quietly sabotages most metaphysics: the fallacy of misplaced concreteness. It occurs when we promote an abstraction useful for description into a concrete entity that does causal work. Maps become territories, metrics become mechanisms, representational spaces become ontological realms.
The pathology follows a predictable sequence:
- A formal tool is introduced to describe or measure some phenomenon
- The tool proves useful for prediction, classification, or communication
- The tool is gradually reified: spoken of as if it exists independently of the phenomena it describes
- The reified abstraction is assigned causal powers or ontological status
- Critics who point out the reification are accused of “not understanding” the deep theory
This matters here because “distinction, identity, relation, constraint” are not proposed as extra furniture in the universe. They are proposed as necessary conditions for intelligibility, conditions without which talk of “a world” fails to get traction at all. The fallacy would be to reify these conditions into quasi-entities, as if “constraint” were a hidden substance or “relation” were a ghostly glue. The correct reading is methodological and transcendental: these are structural prerequisites of determinate description, not additional movers behind the scenes.
The Diagnostic in Practice
Whenever a framework introduces a formal construct (morphospace, integrated information Φ, a mathematical multiverse, a field of proto-experience, cognitive primitives), demand explicit separation between:
(a) The abstraction as a model of constraint-compatible configurations
(b) Any claim that the abstraction itself exists as a realm that drives outcomes
If the author cannot specify the transfer principles and failure conditions that keep (a) from silently becoming (b), you are watching misplaced concreteness in real time.
A practical checklist:
- Is the construct introduced as a tool or as an existent? (Description vs ontology)
- What would demonstrate the construct is merely useful vs genuinely real? (Falsifiability)
- Does the construct do explanatory work beyond summarizing the data? (Mechanism vs label)
- When challenged, does the author retreat to weaker claims or escalate to stronger ones? (Motte-and-bailey detection)
- Could the phenomena be explained equally well without the construct? (Parsimony)
This checklist will be applied systematically to every competing framework in the next section. In each case, misplaced concreteness is the core failure mode.
Part IV-B: The Hidden Premise and the Abductive Bridge
The Assumption That Generates the Hard Problem
The so-called Hard Problem of consciousness depends on a hidden premise that is rarely stated explicitly and never defended:
“If phenomenology is real, it must be more than any physical or functional description.”
This premise is not an empirical claim. It does not specify conditions under which it would be false. It does not predict any observable difference between worlds in which it holds and worlds in which it does not. No conceivable experiment could show that phenomenology is not “more than” physical or functional description, because the phrase “more than” is left deliberately undefined.
That makes the assumption unfalsifiable by construction.
Worse, it conflicts with experimentally verified science at the methodological level. Modern physics, biology, and cognitive neuroscience operate under a constraint that is both explicit and tested:
If a phenomenon has causal, predictive, or intervention-relevant effects, then it must be capturable within a physical or functional description at some level of organization.
This is not metaphysical physicalism; it is operational naturalism. It is the rule that makes Landauer’s principle, control theory, thermodynamics, and neuroscience possible. The “more than” assumption demands an exception to this rule while refusing to say what the exception does, how it operates, or how it could be detected.
The Supernatural Cheese
To see the structure clearly, consider a parallel claim:
“If supernatural cheese exists, it must be more than any physical or functional description of cheese. Therefore, no amount of chemical analysis, structural modeling, or causal explanation could ever capture it. Its reality is guaranteed precisely by its resistance to explanation.”
This has exactly the same logical form as the “phenomenology must be more than physics” claim. Both assert:
- An entity is real
- Its defining feature is that it exceeds all possible physical or functional accounts
- Its resistance to explanation is treated not as a problem but as confirmation
That is not reasoning. It is immunization by definition.
The only reason the phenomenology version feels less absurd is anthropocentric familiarity. We are the systems doing the modeling, so we confuse first-person availability with ontological surplus. But familiarity is not evidence.
The extraordinary claim is not that consciousness reduces to constraint dynamics. The extraordinary claim is that there exists a real, explanatorily indispensable feature of the world that is:
- Causally relevant (otherwise why care?)
- Empirically undetectable as a separate variable
- Irreducible to any physical or functional description
- Immune to falsification
That is exactly the kind of posit modern science was built to eliminate.
What Physics Actually Gives Us
From physics and adjacent empirical sciences alone, we are licensed to assert only the following:
- Distinction is not free. Stable differences require energetic work and incur thermodynamic cost (Landauer-type constraints).
- Persistence requires constraint. Any pattern that maintains itself over time must restrict degrees of freedom relative to its environment.
- Self-maintenance implies closure. Living systems, and certain artificial systems, maintain viability by closing causal loops between internal states and environmental perturbations.
- Integration is measurable. Systems differ in how widely and flexibly information can influence future states, and these differences are experimentally trackable across waking, sleep, anesthesia, and coma.
None of this yet mentions phenomenology. Importantly, none of this presupposes “intelligence,” “cognition,” or “experience” as primitives.
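The first item on that list, "distinction is not free," is quantitative. A minimal sketch of the Landauer floor follows; the function name is ours, and the calculation is just the textbook bound E ≥ N·k_B·T·ln 2 for irreversibly erasing N bits at temperature T.

```python
# Minimal Landauer-bound calculation: the least heat that must be dissipated
# to erase (irreversibly reset) N bits at temperature T kelvin.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_min_joules(bits, temperature_kelvin=300.0):
    """E >= N * k_B * T * ln(2): the thermodynamic floor for erasing
    N bits; no distinction-erasing process can dissipate less."""
    return bits * K_B * temperature_kelvin * math.log(2)

print(landauer_min_joules(1))    # ~2.87e-21 J per bit at room temperature
print(landauer_min_joules(8e9))  # a gigabyte of distinctions: ~2.3e-11 J
```

The numbers are tiny, which is why the bound matters conceptually rather than practically: it establishes that maintaining distinctions has a nonzero price in principle, which is all the framework's falsification criterion requires.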
The Explanandum (Operationally Defined)
We now introduce, without embellishment, what actually needs explaining:
Certain physical systems (notably humans) stably and non-optionally posit that there is “something it is like” to be them. This posit:
- Is globally available to planning, report, and learning
- Tracks state changes (sleep, anesthesia, brain injury) in lawful ways
- Collapses predictably when integrative capacity collapses
- Is treated by the system itself as incorrigible even when specific perceptual beliefs are corrigible
Crucially, this explanandum is not “qualia as a metaphysical glow.” It is the existence, structure, and persistence of phenomenology claims and their breakdown profiles.
Any adequate explanation must account for why systems like us generate and rely on this posit at all.
The Candidate Hypothesis
We now consider a hypothesis class consistent with physics:
H: Phenomenal presence is identical to a system’s internally available, temporally stabilized, integrative control state that summarizes ongoing world- and self-modeling for action under uncertainty.
This hypothesis does not assert that experience is a substance, field, or additional ontological ingredient. It asserts an identity between:
- A physically describable organizational regime, and
- What is referred to in first-person terms as “what it is like”
The Bridge Rule (Explicit)
We now state the bridge rule openly, rather than smuggling it in:
Bridge Rule (IBE):
If a physical system exhibits (i) integrated causal dynamics, (ii) global availability for flexible control, (iii) stable self-modeling with metacognitive access, and (iv) graded, lawful degradation under known consciousness-disrupting interventions, then identifying that regime with phenomenal presence yields greater explanatory compression, predictive accuracy, and intervention guidance than any rival hypothesis that denies or reifies phenomenology.
This is not a deduction. It is an abductive identification justified by explanatory power.
The bridge rule is defeasible. It could be wrong. But it earns its place by doing explanatory work that competing frameworks cannot match.
Concrete Bridges to Data
Global availability and ignition-like dynamics. Stanislas Dehaene and colleagues have shown that conscious access corresponds to a regime where content becomes globally available to many specialized processes, enabling report, flexible reasoning, and planning. The global workspace framework predicts why “experience reports” track a specific integrative regime rather than mere sensory processing.
Perturbational complexity and state discrimination. If phenomenality is tied to the system’s capacity for integrated, differentiated causal dynamics, then perturb-and-measure complexity should separate conscious from unconscious states better than passive stimulus measures. That is exactly what the Perturbational Complexity Index (PCI) research tradition demonstrates across wakefulness, sleep, anesthesia, and disorders of consciousness.
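The intuition behind PCI-style measures can be sketched with a toy complexity count. This is emphatically not the PCI pipeline (which perturbs cortex with TMS, records EEG, and computes a normalized Lempel-Ziv measure); it is a simplified LZ78-style phrase count on binary strings, showing only why differentiated signals score higher than stereotyped ones. All names and the seeded "signals" are ours.

```python
# Toy stand-in for a Lempel-Ziv complexity measure (not the actual PCI).
import random

def lz_phrase_count(s):
    """Simplified LZ78-style parsing: count distinct phrases in s.
    Richer, less repetitive strings parse into more phrases."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

random.seed(0)
awake_like = "".join(random.choice("01") for _ in range(400))  # differentiated
anesthesia_like = "01" * 200                                   # stereotyped

# A differentiated response compresses poorly, so it scores higher.
assert lz_phrase_count(awake_like) > lz_phrase_count(anesthesia_like)
```

The design point carries over to the real measure: integration alone (a globally synchronized burst) and differentiation alone (noise with no causal structure) both score low once normalized; only integrated and differentiated responses score high, which is why the measure tracks conscious states.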
Thermodynamic constraint realism. If a system must maintain boundary-relevant distinctions under finite resources, it will evolve (or be engineered) toward compressed, globally useful summary variables that coordinate action. That makes “presence” unsurprising as an internal control variable, rather than a metaphysical bolt-on.
Why Privacy and Immediacy Arise
Two features of phenomenology seem to resist physical explanation: the “privacy” of experience (no one else can access my inner states directly) and the “immediacy” of experience (it is present without inference). The framework explains both as structural consequences of control architecture, not as spooky ontological properties.
Privacy arises from informational encapsulation and privileged access to internal control states. The self-model that coordinates action has direct read/write access to the states it tracks. Other systems can only access those states indirectly, through behavior, report, or measurement. This is not a metaphysical barrier but an architectural feature: the control variable is internal by design because that is what makes it useful for fast regulation.
Immediacy arises because the control variable must be available with minimal inference overhead. A system that had to derive its own state through lengthy computation would be too slow to respond to threats, opportunities, or changes. The self-model is “immediately present” because immediate presence is what it was selected for. The feeling of directness is the feeling of a control loop operating within its designed latency.
Neither privacy nor immediacy require adding anything to physics. They are what you get when a self-maintaining system evolves internal control states optimized for fast, local coordination.
Why the Phenomenal Posit Is Sticky
A puzzle that “mere mechanism” explanations usually fail to address: why is the “there is something it is like” posit not optional for systems like us? Why can’t we just disbelieve it?
The framework explains: the phenomenal posit is the system's internal read/write state that certifies integrated control is online.
In plain terms: a system that must keep itself within a viability region needs:
- A boundary-maintaining model
- A relevance weighting scheme
- A compact “status summary” that lets many subsystems coordinate without each subsystem re-deriving everything
That compact status summary is what shows up, from the inside, as “experience present now.”
This makes the following otherwise-odd facts natural:
Why consciousness tracks integration and controllability, not raw sensory input. The control variable matters; raw throughput does not.
Why the posit cannot be simply disbelieved. Disbelieving would require the control variable to report that it is not running while it is running. That is a performance contradiction, not a logical one.
Why “presence” functions as a common currency for relevance and learning. The control state must be accessible to multiple subsystems or it cannot coordinate them.
The “stickiness” is not evidence of a metaphysical extra. It is evidence of a well-designed control architecture that cannot report its own absence while operating.
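The performance contradiction can be illustrated with a deliberately crude sketch. This is an analogy in code, not a consciousness model: every class, method, and field below is invented for illustration. The point is architectural: a status summary queried from inside a running loop is itself an act of that loop, so it cannot truthfully report the loop's absence while the loop operates.

```python
# Illustrative analogy (not a consciousness model): a compact status summary
# that many subsystems read instead of each re-deriving the full state.

class ControlLoop:
    def __init__(self):
        self.running = False
        self.summary = {}  # the compact "status summary" shared by subsystems

    def tick(self, sensor_readings):
        """Integrate (value, (lo, hi)) readings into one shared summary."""
        self.running = True
        self.summary = {
            "viable": all(lo <= v <= hi for v, (lo, hi) in sensor_readings),
            "integration_online": True,  # written only by a running loop
        }

    def report_integration(self):
        # The performance contradiction: this query executes inside the
        # running loop, so from the inside it can never truthfully
        # return False while the loop is operating.
        return self.running and self.summary.get("integration_online", False)

loop = ControlLoop()
loop.tick([(37.0, (36.0, 38.0)), (90.0, (60.0, 100.0))])
assert loop.report_integration() is True
```

When the loop is not running, there is no inside from which the query could be posed at all, which is the code-level analog of the Epicurean point in Part III.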
The Misplaced Concreteness Firewall (Strengthened)
To prevent category error, we impose constraints on our own language:
- “Phenomenal existence” is not treated as a thing that causes behavior. It is a level-of-description for an organizational regime already doing causal work.
- No explanatory step is allowed to depend on phenomenology as an extra ingredient. If an explanation requires phenomenology to do work beyond what the organizational regime does, the explanation has smuggled in misplaced concreteness.
- When we name an abstraction, we must specify the operational handle and the scale at which it cashes out. Otherwise we have committed the fallacy.
This firewall blocks both:
- Dualism (adding a non-physical cause)
- Reification (treating experience as a substance rather than a description)
Why This Framework Does Something Stronger Than “Solving” the Hard Problem
The traditional Hard Problem demands a deductive entailment from physics to “what-it-is-like-ness.” That is a category mistake. No framework can do that without redefining the target or cheating.
What this framework does instead is stronger and cleaner:
- It shows that the Hard Problem is generated by an unfalsifiable premise. The “more than” assumption never earned epistemic standing.
- It replaces that premise with a falsifiable IBE. The bridge rule has loss conditions.
- It explains why phenomenology appears where it does, degrades when it does, and cannot float free of physical organization.
- It leaves no explanatory work for “more-than-physical” posits to do. Once the organizational regime is specified, adding “and also there’s phenomenal presence” does nothing further.
That is not a dodge. That is a methodological win.
Critics can say “but it still feels mysterious.” That is not a falsifiable objection. It is an introspective preference. Preferences do not defeat IBEs.
They cannot refute this account without either:
- Proposing a competing explanation with greater predictive and intervention power, or
- Specifying an empirical test this framework fails and theirs passes
To date, no Hard Problem–motivated framework does either.
Part V: Why Competing Frameworks Fail
The constraint-based framework is not one option among many. It is the framework that survives consistent methodological criteria. Every competing framework examined here fails on at least one of three axes: falsifiability (cannot specify what would prove it wrong), mechanistic completeness (cannot specify how its posited entities interact with physics), or coherence with established thermodynamics (violates or ignores Landauer bounds).
These are not rhetorical attacks. They are diagnostic applications of the falsification-first methodology and the misplaced concreteness audit developed above.
5.1 Panpsychism: Consciousness All the Way Down to Where?
Philip Goff and Galen Strawson argue that consciousness is fundamental, that every physical entity has some form of experience, and that complex consciousness emerges from combining simple experiential units (Goff, 2017; Strawson, 2006).
The problems are structural, not empirical:
The combination problem. How do micro-experiences combine into macro-experience? If each particle has some proto-consciousness, what mechanism integrates them into your unified experience? Panpsychists have offered various proposals, but none specifies a mechanism that would survive independent empirical test. They posit the fundamental without explaining the derivative. Chalmers (2016) identifies three sub-problems: subject combination (how do micro-subjects combine into macro-subjects?), quality combination (how do micro-qualities combine into macro-qualities?), and structure combination (how does micro-structure map to macro-structure?). Despite considerable philosophical effort, none has a satisfying answer.
The combination problem as hidden assumption. Panpsychism does not merely face the combination problem; it assumes combination is possible without specifying any mechanism. The claim that micro-experiences aggregate into macro-experience is not a hypothesis to be tested; it is a promissory note. What physical principle governs this aggregation? What determines which micro-experiences combine and which remain separate? Why does the collection of particles in your brain form one unified consciousness while the collection of particles in a rock does not? Coleman (2014) argues that subjects cannot combine in principle, making the combination problem not merely unsolved but insoluble. This is not an empirical challenge to be overcome; it is evidence that panpsychism has replaced one mystery (emergence of consciousness from non-conscious matter) with another (aggregation of micro-experiences into unified consciousness) while claiming to have solved the first. This is not progress; it is relocation of the explanatory gap.
Unfalsifiability. What observation would show that an electron does NOT have proto-experience? If the answer is “none,” the claim is not science. It is metaphysical preference dressed as theory.
Fecundity without falsifiability. Panpsychism generates papers, conferences, and genuine philosophical engagement. But productivity is not validity. Research programs can flourish without empirical grounding. The crucial question is whether a framework can specify what would prove it wrong. Fecundity tethered to falsifiability is science. Fecundity untethered, however intellectually stimulating, is something else.
The real-but-inert dodge. When pressed, panpsychists often retreat to “consciousness is real but explanatorily inert.” But what criterion distinguishes “real but inert” from “not real as claimed but sentimentally redescribed”? If no criterion, the word “real” does no work. It expresses attachment to vocabulary, not tracking a feature of the world.
Misplaced concreteness diagnosis: Panpsychism takes “experience” as a label for what needs explaining and promotes it to a fundamental constituent of reality. The abstraction (experience-as-category) is reified into an ontological primitive (proto-experience as basic ingredient). No transfer principle specifies how this ingredient combines or what its absence would look like.
Goff’s “cosmic fine-tuning” argument fares no better. He claims the universe “fine-tuned itself” through proto-consciousness. But this is intelligent design with the designer relocated inside the system and granted immunity from empirical challenge. It explains nothing that constraint satisfaction doesn’t explain better, and it cannot specify what would prove it false.
The longevity illusion: Panpsychism has persisted for centuries, from ancient hylozoism through Leibniz’s monads to contemporary analytic philosophy. This longevity is sometimes cited as evidence of the view’s depth or importance. But the longevity follows directly from unfalsifiability. A theory that cannot be disproven cannot be killed by evidence. It survives not because it is true but because it is empty. This is precisely Flew’s diagnostic: when every possible observation is compatible with a claim, the claim says nothing about the world. Panpsychism’s persistence across millennia is not a mark of validity but of vacuity.
5.2 Platonic Morphospace: Forms Without Mechanisms
Michael Levin’s bioelectric research is genuinely impressive. His laboratory has demonstrated that manipulating ion channel activity can alter planarian regeneration, induce ectopic eyes, and reshape Xenopus facial structure. The engineering is real. The data are solid.
The metaphysical interpretation, however, raises serious methodological concerns.
Levin claims that organisms “access” pre-existing Platonic forms in an abstract morphospace. Minds are “forms in that space” that “project into the physical world through interfaces.” The morphological outcomes his lab produces are evidence, he claims, that biology accesses timeless patterns rather than constructing them through constraint satisfaction.
Three questions challenge this framework:
1. Interaction Mechanism. How do non-physical Platonic forms causally influence physical bioelectric fields without violating thermodynamic laws? Landauer's principle states that erasing or maintaining informational distinctions carries a minimum thermodynamic cost. If Platonic access is informational, it must pay that cost. If it is not informational, how does it influence anything?
Levin’s response has been to suggest that “a better science of Platonic forms plus their interfaces will force a re-do of Landauer’s Principle.” This is not an answer. This is a promissory note that asks us to abandon verified physics for unspecified future theory.
2. Path-Dependence Compatibility. The Durant et al. (2017) experiments from Levin’s own lab showed that two-headed planarians maintain their altered morphology indefinitely across regeneration cycles. They do not “correct” toward a canonical one-headed form. This demonstrates path-dependent divergence rather than convergence on pre-existing ideal forms (see also Durant et al., 2019).
If morphospace were Platonic, systems should return to canonical forms when perturbations are removed. They don’t. The morphological identity is a bistable attractor determined by bioelectric memory, not a Platonic form exerting gravitational pull.
3. Falsification Criteria. What empirical result would make Levin say “I was wrong, there is no Platonic realm”? In years of public discussion, no answer has been provided. Every apparent counterexample gets absorbed by reinterpretation: the forms are infinite, the access is partial, the interface is noisy.
This is what Antony Flew called “the death of a thousand qualifications.” A fine brash hypothesis gets killed by inches as each challenge produces another qualification until nothing empirical remains. A framework that cannot lose is not a framework but a faith.
Misplaced concreteness diagnosis: Morphospace is a useful mathematical abstraction: the space of possible configurations compatible with certain constraints. Levin reifies this abstraction into an actual realm that organisms “access.” The description (state space) becomes an existent (Platonic domain). No transfer principle specifies how physical systems interact with non-physical forms. The reification is immune to falsification because “access” can always be postulated as partial, noisy, or mediated.
The Chladni plate provides the correct alternative: patterns emerge from boundary conditions plus vibration, no template access required. The “forms” are constraint-compatible configurations, not pre-existing targets.
5.3 Basal Cognition: The Structural Isomorphism Problem
Levin’s “Technological Approach to Mind Everywhere” (TAME) and “Diverse Intelligence” frameworks extend cognitive vocabulary to cells, tissues, and bioelectric networks. Cells have “goals,” “preferences,” and “competencies.” Morphogenesis is “problem-solving.” Evolution operates on “agential material.”
The vocabulary is intuitive. But the argument structure raises concerns.
The formal isomorphism:
- Observe impressive outcome (efficiency, complexity, competence in reaching target morphology)
- Compare to weak null model (random search, maximal entropy, unguided development)
- Declare gap too large for ordinary mechanisms
- Install special primitive (intelligence, cognition, basal goal-directedness)
This argumentative structure appears elsewhere in biology debates. When outcomes exceed naive expectations, the inference to a special ingredient is tempting but not warranted. “Basal cognition” as an explanatory primitive shares formal structure with other gap-based arguments:
- Gap argument: Efficiency gap → therefore special property
- Constraint alternative: Selection + thermodynamics → efficient outcomes without primitive
Efficiency is output, not mechanism. Comparing an evolved endpoint to a random starting distribution and calling the gap “cognition” mistakes selection record for causal ingredient.
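The constraint alternative can be made concrete with a deliberately minimal toy model (the target pattern, population size, and mutation rate are my own illustrative choices, not anything from Levin’s work): blind variation plus selection on a simple constraint-satisfaction score reaches near-optimal “efficiency” with no cognitive primitive anywhere in the mechanism.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # an arbitrary constraint pattern

def efficiency(bits):
    # toy "efficiency": fraction of constraints satisfied
    return sum(b == t for b, t in zip(bits, TARGET)) / len(TARGET)

def evolve(generations, pop_size=20, mut=0.1):
    # blind variation + selection with elitism; no goals, no representations
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=efficiency, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [[b ^ (random.random() < mut) for b in random.choice(survivors)]
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(efficiency(ind) for ind in pop)

best = evolve(50)
print(f"best efficiency after selection: {best:.2f}")
```

Measured against a weak null model (the initial random population), the evolved endpoint looks “intelligent”; nothing in the mechanism is.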
The thermodynamically grounded K-metric that Levin and Chis-Ciure propose is legitimate: it measures efficiency in joules and ATP hydrolysis. But “K-metric is high” does not entail “cognition is present.” Constraint satisfaction under selection pressure produces efficient outcomes without requiring cognitive primitives.
The motte-and-bailey structure:
- Motte (defensible): K-metric provides quantitative efficiency measure
- Bailey (indefensible): Consciousness is fundamental, patterns are subjects, intrinsic agency pervades biology
When pressed, Levin retreats to the motte. When safe, he advances to the bailey. Documented on video: “You think I was going to fold in consciousness into all this stuff until it got settled? No way.”
This is strategic concealment. The published work establishes vocabulary. The real beliefs are more extreme. The oscillation prevents falsification.
Misplaced concreteness diagnosis: “Cognition” is introduced as a descriptive term for efficient constraint satisfaction. It is then reified into a causal primitive that explains the efficiency it was introduced to describe. The description (efficiency-under-selection) becomes an existent (basal cognitive capacity). The transfer from heuristic label to ontological claim happens without argument. When challenged, proponents retreat to the label sense; when unchallenged, they advance to the primitive sense.
What survives: a salvage framing. The critique above concerns interpretive excess, not empirical findings. Levin’s laboratory results are among the most important contributions to contemporary developmental biology. Demonstrations of non-neural control, distributed error correction, bioelectric patterning, and morphogenetic robustness have permanently altered understanding of how living systems organize themselves. Durant et al. (2017, 2019) showed that bioelectric memory in planarians is path-dependent: perturbations at different developmental points produce different stable outcomes, and organisms can be induced to develop alternative anatomies rather than returning to a single “correct” form. This is genuine discovery.
The framework proposed in this paper preserves everything Levin’s experiments justify while stripping away the metaphysical extrapolation. The empirical claim that non-neural biological systems exhibit robust, adaptive, and error-correcting dynamics does not require positing “cognition” as a primitive. It requires recognizing that constraint satisfaction under thermodynamic bounds produces stable, efficient outcomes across scales. The K-metric measuring metabolic efficiency is legitimate science. The inference from “high efficiency” to “cognition all the way down” is not.
This is not a rejection of the research program; it is a refinement of its conceptual vocabulary. By distinguishing between adaptive dynamics (what the data show), control-theoretic regulation (a legitimate intermediate description), and full-fledged agency or cognition (which requires additional criteria), the constraint framework preserves what is strongest in the experimental work while avoiding commitments that render the framework unfalsifiable. The goal is to protect a powerful research program from interpretive excess that would otherwise undermine its scientific standing.
5.4 Quantum Consciousness: Numbers That Don’t Work
Roger Penrose and Stuart Hameroff’s Orchestrated Objective Reduction (Orch-OR) proposes that consciousness arises from quantum computations in microtubules within neurons. Quantum coherence allows superpositions of mental states. Objective reduction (gravitationally induced collapse) produces moments of conscious experience.
The arithmetic is devastating.
Max Tegmark calculated that quantum coherence times in warm, wet neural tissue range from 10^-13 to 10^-20 seconds, with 10^-13 seconds being the most favorable upper bound (Tegmark, 2000). That is a tenth of a trillionth of a second at best. Neural processes operate on timescales of milliseconds to seconds. Quantum effects decohere about ten billion times too fast to play any computational role in brains.
This is not philosophy. This is arithmetic. The numbers don’t work. Quantum consciousness is not taken seriously by physicists because the thermal environment of the brain destroys coherence before anything computationally interesting could happen.
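The scale mismatch is a one-line calculation using Tegmark’s published bounds:

```python
t_coherence = 1e-13  # s, Tegmark's most favorable coherence estimate
t_neural = 1e-3      # s, fastest relevant neural timescale (~1 ms spike)

mismatch = t_neural / t_coherence
print(f"decoherence outpaces neural dynamics by a factor of {mismatch:.1e}")
```

Ten orders of magnitude, even granting Orch-OR its best-case coherence time.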
Penrose’s appeal to “new physics” is a promissory note, not a theory. Hameroff’s empirical claims about anesthetic binding to microtubules have not survived replication. The framework has the form of science while lacking the content.
Misplaced concreteness diagnosis: “Quantum coherence” is introduced as a physical mechanism, but the scale mismatch makes it inoperative. Strictly, the failure here is not reification but explanatory irrelevance: the posited mechanism cannot do the work assigned to it because the numbers forbid it. Orch-OR fails through mechanistic breakdown rather than misplaced concreteness.
5.5 Mathematical Universe: Existence Is Cheap
Max Tegmark’s Mathematical Universe Hypothesis (MUH) claims that all mathematical structures physically exist, and our universe is one such structure. Consciousness emerges wherever self-aware substructures exist within mathematical objects.
The problem is that mathematical existence is cheap. Every possible mathematical structure “exists” in the Platonic sense. Every possible conscious being “exists” somewhere in the mathematical multiverse. But this explains nothing about why THIS structure is experienced by THIS observer.
The MUH fails falsifiability: What observation would show that non-physical mathematical structures do NOT exist? The hypothesis is consistent with every possible observation, and a claim that no observation can constrain cannot be tested.
The MUH confuses description with existence: Mathematical structures describe physical systems. The description existing does not entail the described existing independently. The map is not the territory, even if the map is very accurate.
The MUH provides no mechanism: How does a mathematical structure “implement” consciousness? What is the bridge between abstract existence and phenomenal experience? Tegmark has no answer that isn’t restatement of the question.
Misplaced concreteness diagnosis: The MUH takes mathematical structures (useful abstractions for describing physical systems) and reifies them into independently existing entities that somehow “contain” conscious observers. The description (mathematical model) becomes the existent (mathematical object with inhabitants). This is misplaced concreteness at the grandest possible scale: the entire universe is treated as a concrete instantiation of an abstraction.
5.6 Process Philosophy: Whitehead Without Teeth
Alfred North Whitehead’s process philosophy, developed by contemporary scholars like Matthew Segall, treats experience as fundamental to reality. Every “actual occasion” has both physical and mental poles. The universe is composed of drops of experience in continuous becoming.
The scholarship is impressive. The metaphysics is serious. The engagement with physics is more sophisticated than most consciousness theories.
But the framework cannot lose.
When asked what scientific evidence supports the claim that experience is fundamental, the response is typically that “there is no scientific evidence that consciousness exists” as an external, detectable phenomenon, implying that scientific methods cannot address phenomenological claims.
This is not wrong as a description of methodological limits. But it is used as a shield against any empirical constraint. If no observation could show that experience is NOT fundamental, then the claim is unfalsifiable. If unfalsifiable, it cannot be preferred to alternatives on scientific grounds.
Process philosophy may be beautiful. It may be comforting. It may even be true. But it cannot demonstrate that it is true, because it has insulated itself from any possible demonstration.
Misplaced concreteness diagnosis: Ironically, Whitehead diagnosed the very fallacy that undermines many applications of his own philosophy. When “experience” is promoted from a descriptive category to a fundamental constituent of reality without falsification criteria, the abstraction (experience-as-concept) becomes reified into a primitive (proto-experience as cosmic ingredient). Whitehead himself was careful about this; many of his followers are not. The framework’s strength (genuine engagement with process and relation) becomes its weakness (unfalsifiable primitivism about experience).
5.7 Higher-Order Thought Theories: The Regress Problem
Higher-Order Thought (HOT) theories, developed by David Rosenthal and empirically extended by Hakwan Lau, propose that a mental state becomes conscious only when accompanied by a higher-order representation of that state (Rosenthal, 2005; Lau & Rosenthal, 2011). Consciousness requires not just perceiving but perceiving that one perceives.
The insight worth preserving: HOT theories correctly identify self-modeling as crucial to consciousness. The constraint-based framework agrees: organizational closure necessarily includes variables that track the system’s own state.
The structural problem: HOT theories treat the higher-order representation as a separate, additional layer sitting atop first-order processing. This creates a regress: Is the higher-order thought itself conscious? If yes, what makes it so, another higher-order thought? If no, how does an unconscious thought confer consciousness on its target?
The constraint framework dissolves this regress. Self-modeling is not an additional representation layered atop first-order processing; it is a constitutive feature of organizational closure itself. The system does not have a thought about its states; its states are structured such that self-reference is built into the constraint satisfaction dynamics. There is no separate “higher-order” layer because the closure is inherently self-referential. The unity of experience is not achieved by a meta-representation observing first-order representations; it is achieved by constraint closure that necessarily includes the system’s own operation within its scope.
The empirical complexity: Experiments showing that prefrontal disruption reduces metacognitive confidence without eliminating perception are compatible with both HOT theories and the constraint framework. Rounis et al. (2010) used TMS to transiently disrupt prefrontal cortex during visual detection: objective discrimination remained intact, but subjects’ confidence and metacognitive accuracy dropped. HOT theories interpret this as evidence that a separate higher-order module was impaired; the constraint framework interprets it as evidence that one component of distributed self-modeling was disrupted while others remained intact.
The overflow challenge: Ned Block argues for “phenomenal overflow,” the claim that conscious experience outruns cognitive access (Block, 2011). If subjects experience more than they can report or think about (as some interpretations of iconic memory experiments suggest), this would challenge HOT’s identification of consciousness with higher-order representation. The constraint framework is neutral on overflow: if phenomenal experience tracks organizational closure rather than higher-order thought, then closure could produce experience that isn’t fully accessed by all subsystems. The framework predicts gradation rather than requiring a binary match between experience and access.
Falsification condition for HOT: If consciousness were found to require a structurally separate metacognitive module that could be completely removed while leaving phenomenal experience intact, HOT theories would be supported over the constraint framework. Conversely, if self-modeling proves to be distributed throughout organizational closure rather than localized in a discrete higher-order system, HOT’s architectural claims fail.
5.8 Free Energy Principle: Mathematical Identity Without Falsification
Karl Friston’s Free Energy Principle proposes that self-organizing systems minimize variational free energy, which can be understood as prediction error (Friston, 2010). The brain is cast as a prediction machine constantly updating its model of the world to minimize surprise.
The compatibility: The FEP is compatible with the constraint framework. Minimizing prediction error is one way to maintain constraints against entropy. Active inference, the process by which organisms act to bring about predicted states, can be understood as constraint satisfaction in action (Parr, Pezzulo & Friston, 2022).
The critical limitation: The FEP is acknowledged by its proponents to be unfalsifiable as a principle. Any adaptive behavior can be redescribed as free energy minimization after the fact. This makes FEP a mathematical identity or framework rather than a scientific hypothesis. Friston himself has compared it to the principle of least action in physics: not falsifiable but useful.
This is precisely the problem. A framework that can describe any outcome cannot be wrong, but it also cannot be right in any empirically meaningful sense. Every organism that survives can be redescribed as having minimized its free energy; every organism that perishes can be redescribed as having failed to minimize it. The principle is almost tautologically true: systems that persist are, by definition, systems that maintained their organization against entropy, which can always be cast as free energy minimization. But tautological truth is not explanatory power.
The “method not theory” defense: Friston sometimes defends FEP as “a method, not a theory,” claiming that methods are not falsifiable in the way theories are. This defense is partially correct: methods (Bayesian inference, variational calculus) are not directly falsifiable. But FEP is not used merely as a method. In practice, it makes world-directed claims: “all self-organizing systems that persist must minimize free energy,” “living systems are equivalent to Bayesian inference machines,” “cognition is inference under generative models.” These are theoretical commitments, not methodological tools. The “method not theory” move functions as a motte-and-bailey structure: when FEP fits data, it is celebrated as explanatory; when challenged, it retreats to “just a formalism.” A framework cannot earn explanatory credit while evading explanatory accountability.
The Markov blanket problem: FEP models use Markov blankets to formalize conditional independence relations that separate internal from external states. Bruineberg et al. (2022) argued in Behavioral and Brain Sciences that these blankets are often treated as ontologically privileged boundaries, real divisions corresponding to agents or selves, rather than modeling constructs selected for analytical convenience. Friston’s (2022) reply acknowledges that blankets are model constructs, not metaphysical boundary stones. But this concession weakens any inference from blanket structure alone to claims about the existence or individuation of agents. Markov blankets can be drawn in multiple ways depending on analytical purpose; they do not uniquely identify where “the agent” begins or ends.
The thermodynamic conflation: Friston’s variational free energy is an information-theoretic quantity (a bound on surprisal), not thermodynamic free energy (F = U – TS). By naming it “free energy,” the framework leverages associations with physics while denying physical grounding. Crucially, thermodynamic free energy minimization follows from the second law: systems relax toward maximum entropy at fixed temperature. But biological systems maintain non-equilibrium steady states by dissipating free energy, pumping against entropy gradients. Organisms export entropy to their surroundings; they do not minimize it. The claim that FEP is “more fundamental than thermodynamics” conflates mathematical formalism with physical constraint.
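To make the conflation explicit, the two quantities can be set side by side in their standard textbook forms (notation mine):

```latex
% Thermodynamic (Helmholtz) free energy: a physical state function
F_{\text{thermo}} = U - TS

% Variational free energy: an information-theoretic bound on surprisal
F_{\text{var}}[q]
  = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
  \;\ge\; -\ln p(o)
```

The first is measured in joules and constrained by the second law; the second is measured in nats and constrained only by the choice of generative model. Sharing the name does not make them the same quantity.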
The constraint framework differs in a critical respect: it specifies falsification conditions. Organizational closure either obtains or does not. Self-modeling dynamics either causally influence future trajectories or do not. The Landauer cost either applies or does not. These are empirically tractable claims, not merely redescriptions of any possible outcome.
The explanatory gap: FEP provides no specific account of consciousness beyond suggesting that conscious states might correspond to the brain’s best guesses about world states. This is compatible with many theories and distinctive of none. The constraint framework, by contrast, identifies a specific organizational regime (closure with self-modeling) as constitutive of consciousness, generating predictions about when consciousness appears, degrades, and is absent.
What FEP contributes: The mathematical apparatus of active inference provides useful tools for modeling how constraint satisfaction might be implemented in neural systems. The constraint framework can incorporate FEP’s formalism while insisting on falsifiability criteria that FEP alone lacks. The relationship is complementary: FEP provides mathematical machinery; the constraint framework provides empirical teeth.
5.9 Integrated Information Theory: Correlation Without Constitution
Giulio Tononi’s Integrated Information Theory proposes that consciousness is identical to integrated information, measured as Φ (Tononi & Koch, 2015). Any system with non-zero Φ has some degree of consciousness; systems with higher Φ are more conscious.
The genuine contribution: IIT correctly emphasizes integration and information as relevant to consciousness. The Perturbational Complexity Index derived from IIT-inspired thinking has proven clinically useful in distinguishing conscious from unconscious patients (Casali et al., 2013).
Successful predictions: IIT has generated predictions that align with observations. The cerebellum, despite containing more neurons than the cortex, contributes minimally to consciousness; IIT explains this via feedforward architecture yielding low Φ. Split-brain patients, whose corpus callosum is severed, exhibit behaviors consistent with two separate consciousnesses; IIT predicts this because severing integration should split unified experience. Patients with achromatopsia (cortical color blindness) not only fail to see color but deny that anything like “color” ever existed; IIT explains this because a destroyed brain region cannot contribute even a “null” experience, unlike an intact but inactive region that could register absence.
The structural problems:
Unfalsifiability at the edges: What observation would show that an electron does NOT have Φ = 0.001 and thus minimal consciousness? IIT’s extension to simple systems faces the same unfalsifiability problem as panpsychism (Cerullo, 2015).
Correlation versus constitution: IIT measures Φ as a correlate of consciousness but does not explain why integrated information should BE consciousness rather than merely accompany it. The identification is asserted, not derived.
The intrinsic existence axiom: IIT begins from phenomenological axioms, including the claim that consciousness exists “from its own internal perspective” (intrinsic existence). This axiom is not derived from data; it is a phenomenological stipulation dressed as a starting point. The axiom assumes what needs to be explained: that there is an internal perspective in the first place. Mørch (2019) argues that Φmax is actually extrinsic: it requires comparing the system’s information integration to its potential subsets, making consciousness depend on external counterfactual comparisons. The constraint framework, by contrast, does not begin from phenomenology; it identifies organizational conditions and then shows why those conditions would generate systems that report, behave as if, and plausibly instantiate what we call phenomenal experience.
The intrinsicness collapse under decoupling: IIT claims that Φ is intrinsic to the system’s internal causal structure. But consider what happens when you decouple a brain from its sustaining constraint systems: atmospheric oxygen, metabolic support, temperature regulation, ecological embedding. Within seconds to minutes, the causal architecture that supposedly generates Φ collapses. Membrane potentials wash out, synaptic transmission fails, and the network’s cause-effect repertoire becomes impoverished. Φ, computed over neural elements, would drop toward zero not because the brain’s internal connectivity suddenly disappears, but because the conditions that sustain that connectivity vanish.
This reveals a deep problem: if Φ is claimed to be intrinsic but its persistence depends on vast extrinsic constraint systems (atmosphere, metabolism, ecosystem), then Φ is not explanatorily intrinsic in the sense IIT requires. IIT treats Φ as if it were like mass or charge: something a system has in virtue of its internal structure alone. But the decoupling analysis shows Φ behaves more like metabolic throughput or homeostatic closure: something that exists only while a system is embedded in, and continuously supported by, larger thermodynamic and ecological processes. Once that is acknowledged, IIT faces a forced choice: either Φ is not intrinsic in the way claimed, or Φ is intrinsic but cannot explain its own persistence. This is a more damaging critique than the familiar “tiny Φ versus zero Φ” problem because it is not a measurement ambiguity but an explanatory incoherence.
The cerebellum puzzle deepened: The constraint framework offers a deeper explanation than Φ values alone: the cerebellum lacks organizational closure in the relevant sense. It refines motor commands through error correction but does not maintain a self-model that tracks its own coupling position. It is a sophisticated processor, not a self-maintaining system with internal stakes in its own persistence. High neuron count without closure yields processing without experience. This explains why low Φ correlates with absent consciousness rather than merely noting the correlation.
Falsification condition: If systems with high Φ were found to reliably lack consciousness, or systems with near-zero Φ were found to be conscious, IIT would be falsified. Currently, such tests are difficult because computing Φ for large systems is intractable, leaving the theory’s central quantitative predictions untested.
5.10 Global Workspace Theory: Access Without Explanation
Bernard Baars’s Global Workspace Theory (1988), extended neuroscientifically by Stanislas Dehaene and colleagues, proposes that consciousness arises when information is broadcast globally across brain networks, making it available for report, reasoning, and flexible action (Baars, 1988; Dehaene & Changeux, 2011; Mashour et al., 2020).
The empirical success: GWT has strong empirical support accumulated over three decades. The “ignition” pattern, characterized by late (200-300ms), widespread cortical activation involving fronto-parietal networks, reliably distinguishes conscious from unconscious processing. The P3b EEG component correlates with conscious access. The theory generates testable predictions that have largely been confirmed:
- Attentional blink: When a second target follows too quickly after a first in a rapid stream, subjects often miss it. GWT predicts this because the workspace is “occupied” processing the first target. EEG studies confirm that blinked targets fail to trigger the late global ignition signature.
- Masked vs. unmasked stimuli: Dehaene et al. (2001) showed that consciously seen words triggered broad fronto-parietal activation, while masked (unseen) words produced only brief occipital activity, exactly as GWT predicts.
- All-or-none ignition: Subjective reports are often bimodal (clearly seen or not seen), correlating with an all-or-none neural ignition pattern at the single-trial level.
The explanatory gap: GWT describes the functional correlates of conscious access but does not explain why global broadcast should feel like anything. It addresses what Chalmers calls the “easy problems” (how information becomes available for report and action) without addressing why this availability is accompanied by experience.
The constraint framework’s advantage: Organizational closure explains not just what conscious access does (make information globally available) but why it exists at all (thermodynamic necessity for self-maintaining systems) and why it carries the features it does (privacy as informational encapsulation, immediacy as low-latency control requirement). GWT describes the architecture; the constraint framework explains why that architecture exists and why it constitutes experience rather than merely correlating with it.
The prefrontal debate: Recent evidence suggests that phenomenal experience may not require full prefrontal involvement; posterior “hot zones” may suffice for basic experience while prefrontal areas handle access and report. Studies of dreaming show vivid experience with reduced frontal activity. Some “no-report” paradigms find that posterior cortical activity predicts experience even when prefrontal activation is minimal. This challenges GWT’s emphasis on global broadcast while remaining compatible with the constraint framework’s emphasis on organizational closure wherever it occurs.
5.11 Interface Theory of Perception: Metaphor Without Mechanism
Donald Hoffman’s Interface Theory of Perception (ITP) proposes that perceptual systems evolved not to recover veridical information about the world but to deliver fitness-relevant information in compressed, actionable form (Hoffman, Singh & Prakash, 2015). Spacetime and physical objects are “icons” in a user interface, not representations of reality. His Fitness Beats Truth (FBT) theorem, developed with mathematician Chetan Prakash, shows that under certain evolutionary game conditions, strategies optimized for fitness can outcompete strategies that faithfully represent environmental structure (Hoffman & Prakash, 2014). Hoffman extends this into Conscious Realism: consciousness is fundamental, and spacetime emerges as an interface between interacting conscious agents.
What survives scrutiny: The core insight that evolution optimizes for fitness rather than truth is defensible and independently supported. Sensory systems are tuned to task-relevant properties, not exhaustive world-description. The claim that perception is selective and compressed is uncontroversial. Active inference models in neuroscience reach similar conclusions through different routes.
The structural problems:
The spacetime conflation: Hoffman frequently cites high-energy physicists’ claims that “spacetime is doomed” as support for consciousness-first ontology. This conflates three distinct claims: (1) spacetime may break down at Planck scale (~10^-43 seconds), (2) spacetime may not be fundamental to certain calculations, and (3) consciousness is fundamental. The first two are empirically grounded research programs; the third is metaphysical speculation. The European Research Council initiative on positive geometries explores mathematical structures that reproduce spacetime predictions; this is constraint regeneration, not consciousness-first ontology.
Trace logic and the mathematics-to-consciousness leap: Hoffman proposes that any Markov matrix trace constitutes a conscious observer. Mathematical structure, on this view, just IS consciousness. But mathematical structure does not generate consciousness; it describes state transitions. A Markov chain modeling bacterial chemotaxis has the same trace logic as one modeling human perception. Nothing in the mathematics produces experience; it merely represents dynamics. This commits the fallacy of confusing formal description with ontological generation.
The veridicality problem: Hoffman claims that evolution yields “probability zero” for veridical perception “in the limit as the number of states goes to infinity.” This argument depends critically on the assumption that payoff functions are independent of world structure. But in actual evolutionary systems, payoff functions are generated by resource distributions, predator-prey dynamics, and environmental constraints, all tied to physical structure. When payoff functions are constrained by the world they operate in, the probability of truth-tracking perception is not zero. Martínez (2019) provides counterexamples showing multiple information sources favor truth-tracking even when fitness is the selection criterion.
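A minimal sketch in the spirit of Martínez’s point (the payoff function, cost, and threshold are my own illustrative choices): when payoffs are generated by world structure, the fitness-optimal policy coincides with a truth-tracking one, and a structure-blind “interface” underperforms.

```python
import random

random.seed(1)

# Payoff generated by world structure: approaching a resource patch
# yields its true value minus a fixed foraging cost.
COST = 0.5

def payoff(resource, approach):
    return (resource - COST) if approach else 0.0

def truth_tracker(resource):
    # veridical perception: approach exactly when the true value exceeds the cost
    return resource > COST

def structure_blind(resource):
    # an "interface" decoupled from world structure: always approach
    return True

states = [random.random() for _ in range(10_000)]
truth_score = sum(payoff(s, truth_tracker(s)) for s in states)
blind_score = sum(payoff(s, structure_blind(s)) for s in states)

print(f"truth-tracking payoff:  {truth_score:.1f}")
print(f"structure-blind payoff: {blind_score:.1f}")
```

Because the payoff here is a function of the true resource value, selection for fitness just is selection for tracking that value; the “probability zero” result only goes through when payoffs float free of the world.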
Falsification evasion: What observation would show that spacetime is NOT an interface? What empirical finding would show that consciousness is NOT fundamental? Hoffman’s framework can accommodate any observation by adjusting the interface metaphor. This is the signature of unfalsifiable metaphysics. Bagwell (2023) demonstrates that Interface Theory is self-defeating: if perception is systematically non-veridical, we cannot trust the evidence used to establish the theory. The very evolutionary processes Hoffman invokes presuppose a world with real selection pressures.
The empirical contradiction: Crucially, data from laboratories working on related problems contradict Hoffman’s stronger claims. Durant et al. (2017, 2019) demonstrated path-dependent bioelectric memory in planarians: the same perturbation applied at different points in development produces different outcomes, and organisms can be induced to develop alternative stable anatomies rather than returning to a single “correct” form. This path-dependence is inconsistent with Hoffman’s framework, which requires that conscious agents’ interfaces generate consistent dynamics. If interfaces can produce path-dependent outcomes under identical inputs, the “interface” is doing no explanatory work beyond what constraint dynamics already explain.
What the constraint framework preserves: The framework honors Hoffman’s core insight without the metaphysical inflation. Perception is indeed selective, compressed, and fitness-relevant rather than exhaustively veridical. Organisms model their coupling position, not “reality as it is.” But this is constraint satisfaction under thermodynamic and informational limits, not evidence that consciousness is fundamental. The interface metaphor is useful as a description; it fails as an ontology.
Part VI: The Tests That Exist
A common defensive move in consciousness debates is to claim that “tests don’t exist.” The reasoning runs: we cannot detect consciousness from the outside, so we cannot distinguish conscious systems from philosophical zombies, and therefore naturalistic frameworks are no better than mysterian ones.
This is false. Tests exist. They are imperfect, but “no perfect test” is not “no test.”
6.1 Clinical Assessment
Joseph Giacino and colleagues developed the JFK Coma Recovery Scale-Revised (CRS-R) to distinguish vegetative states from minimally conscious states (Giacino et al., 2004). The documented misdiagnosis rate using informal bedside assessment is approximately 40% (Andrews et al., 1996; Schnakers et al., 2009). With standardized tools, accuracy improves dramatically.
This is a test for consciousness. It is not perfect. It does not resolve the Hard Problem. But it distinguishes systems along dimensions relevant to consciousness: purposeful behavior, command-following, intelligible verbalization, visual pursuit, localization to pain.
Adrian Owen’s fMRI research detected covert awareness in patients diagnosed as vegetative (Owen et al., 2006). When asked to imagine playing tennis, some “vegetative” patients showed motor cortex activation indistinguishable from healthy controls. They were conscious inside unresponsive bodies. This was discovered because a test was applied.
Stanislas Dehaene’s Global Workspace measures operationalize conscious access: late sustained activity, fronto-parietal involvement, ignition dynamics (Dehaene & Changeux, 2011). These are not metaphysical proof, but they are empirical criteria that distinguish cognitive processes associated with reportable awareness from those that are not.
6.2 Comparative Cognition
The claim that we cannot distinguish a water chamber from a brain is absurd on its face. We distinguish systems professionally using dozens of criteria: predictive processing depth, self-model complexity, temporal integration, affect-as-feedback, uncertainty management, counterfactual modeling.
These criteria are used to assess infant cognition, animal welfare, AI capabilities, and clinical disorders. They are not arbitrary. They track real differences in what systems do.
If someone claims tests don’t exist, ask them how they distinguish a sleeping person from a corpse. The answer will involve criteria. Those criteria are tests.
6.3 The Falsification Asymmetry
Here is the crucial point: constraint-based frameworks specify what would prove them wrong.
Landauer bound violated in an isolated system? Framework fails.
Organizational closure without robustness advantage? Framework fails.
Path-independence in bioelectric memory (Durant replication failure)? Framework fails.
RAF theory requiring unrealistic catalysis levels? Framework fails.
Cognitive framework generating superior predictions to constraint analysis? Framework fails.
A framework that cannot lose is not a framework but a faith. The constraint-based approach survives not because it is unfalsifiable, but because it has not yet been falsified despite specifying how it could be.
Compare this to Platonism: what observation would show that morphospace does not exist? To panpsychism: what observation would show that electrons lack proto-experience? To Orch-OR: what calculation would show that microtubule coherence is irrelevant?
The asymmetry is diagnostic. One side plays the game of science. The other side plays a different game while borrowing science’s vocabulary.
Part VII: The Indigenous Convergence
A framing note. The argument in this section is convergence evidence, not validation by tradition. Indigenous knowledge systems are not invoked as authorities or as sources of mystical insight. They are invoked as independent empirical programs that tested epistemological frameworks against survival constraints over timescales orders of magnitude longer than Western science has existed. When independent methods arrive at structurally similar conclusions, that is evidence for invariant structure, not an appeal to antiquity.
7.1 Songlines as Constraint Paths
Aboriginal Australian songlines are not merely mnemonic devices or cultural artifacts. They are constraint maps: paths through landscape that encode ecological, navigational, and social information in relational form.
Lynne Kelly’s The Memory Code documents how songline memory systems outperform Greek-derived memory palace techniques (Kelly, 2017). Reser and colleagues verified this experimentally: in a medical-education memorization study, participants using Aboriginal memory techniques had roughly three-fold greater odds of achieving complete list recall than controls (OR = 2.82; Reser et al., 2021).
This is not mysticism. This is empirical superiority. A knowledge system tested over tens of thousands of years performs better on measurable tasks than systems developed in the last three millennia.
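An odds ratio does not translate directly into a probability ratio; what it implies for recall probability depends on the baseline rate. The sketch below, using hypothetical control-group recall rates (not figures from Reser et al., 2021), shows how an OR of 2.82 plays out:

```python
# Sketch: what an odds ratio of 2.82 implies for recall probability.
# The control-group recall rates below are hypothetical illustrations,
# not figures from Reser et al. (2021).

def prob_from_odds_ratio(p_control: float, odds_ratio: float) -> float:
    """Convert a control-group probability plus an odds ratio
    into the implied treatment-group probability."""
    odds_control = p_control / (1.0 - p_control)
    odds_treatment = odds_control * odds_ratio
    return odds_treatment / (1.0 + odds_treatment)

OR = 2.82
for p_control in (0.10, 0.25, 0.50):
    p_treat = prob_from_odds_ratio(p_control, OR)
    print(f"control recall {p_control:.2f} -> implied treatment recall {p_treat:.2f}")
```

At low baselines the odds ratio approximates a risk ratio; at higher baselines the probability advantage is smaller than three-fold, which is why “three-fold greater odds” is the precise reading of OR = 2.82.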
The structural parallel to constraint satisfaction is exact. Songlines encode not objects but relations. Knowledge is not stored but activated through traversal. The map is the territory is the walker.
7.2 Process Over Artifact
Tyson Yunkaporta’s Sand Talk articulates an epistemology that prioritizes process over artifact, relation over substance, pattern over thing (Yunkaporta, 2019). Knowledge lives in the relationships between people, land, and practice, not in documents or databases.
This is ontic structural realism in indigenous form. Relations primary, things derivative. Knowledge as constraint closure rather than information storage.
Robin Wall Kimmerer’s Braiding Sweetgrass frames the same insight through Potawatomi language and botany: reciprocity is not sentiment but structural necessity (Kimmerer, 2013). Systems that extract without return are systems that undermine their own conditions for persistence.
These are not primitive precursors to modern science. They are parallel developments, often more rigorous in their attention to long-term system dynamics. Western science optimizes for prediction over short time horizons. Indigenous systems optimize for persistence over generational time scales.
Both are constraint satisfaction. The difference is in the temporal scope of the constraints being satisfied.
7.3 Apophatic Methods: Neti Neti as Constraint Logic
The Upanishadic method of Neti Neti (“not this, not this”) is not mysticism. It is constraint logic in epistemological form.
The method: systematically exclude false identifications until what remains is relational persistence, not substance. The self is not the body (bodies change while identity persists). Not the mind (minds fluctuate while identity persists). Not any particular thought, sensation, or memory. What remains after exhaustive exclusion is not a hidden essence but a pattern of constraint satisfaction that survives perturbation.
This mirrors Deacon’s teleodynamics exactly: define by what cannot be, not by what is added.
Buddhist anatta (non-self) and dependent origination work the same way. There is no essence, only conditioned relations. Identity is maintained by continuous exclusion of breakdown pathways, not by possession of a core substance. The Madhyamaka analysis in Nagarjuna’s work is formally eliminative: every candidate for “self” is shown to be either identical to the aggregates (and thus impermanent) or separate from them (and thus unfindable). What remains is not nothing but relational persistence under constraint.
Native American relational ontologies encode the same structure. In many traditions, personhood is not an intrinsic property but a role in constraint networks: kinship relations, land relations, reciprocity obligations. “Life” is maintained participation in these networks, not possession of a vital substance. When participation breaks down, identity dissolves; not because something leaves, but because constraints cease to exclude entropy.
The critical point: These traditions did not posit hidden substances or mystical energies. They operationalized constraint satisfaction across generations under survival pressure. That makes them empirically serious, not merely metaphorical. They arrived at the same structural insight through different methods: what persists is what excludes alternatives, not what possesses essences.
7.4 The Convergence Evidence
Five architecturally distinct AI systems (Claude, GPT, Gemini, Llama, Qwen), given the operators of Recursive Constraint Falsification but not the answer to Bateson’s question, independently derived the same answer: constraint satisfaction under thermodynamic bounds.
A necessary caveat. Skeptical readers will object: “LLMs converge because they share training biases. They learned similar patterns from overlapping internet text.” This objection must be taken seriously.
Three responses:
- Architectural differences matter. These systems differ in tokenization, attention mechanisms, training objectives, and post-training procedures. If convergence were purely training artifact, we would expect divergence at the points where architectures differ most. We observe convergence instead.
- Corpus differences matter. The systems were trained on different snapshots of different corpora with different filtering. If convergence were purely corpus artifact, we would expect higher convergence between systems with more similar training data. The pattern is more uniform than corpus overlap would predict.
- Falsification pressure matters. The convergence occurred under active challenge, not passive generation. Systems were pushed to defend, revise, and justify. Frameworks that survived this pressure in multiple architectures are more likely to track invariant structure than frameworks that succeeded in one.
This does not prove the conclusion is correct. It shows the conclusion is not an artifact of a single system or dataset. When indigenous knowledge systems arrived at structurally similar conclusions through 65,000 years of survival-constraint testing, using entirely different methods, that adds independent convergence evidence.
Different methods, different timescales, different substrates, same conclusion. The pattern which connects connected itself long before Bateson asked the question.
A note on falsifiability: The claim is not that indigenous epistemologies are correct because they are old, but that independent convergence across radically different methods is evidence for invariant structure. This claim would be falsified if: (1) closer examination revealed the convergence to be superficial (similar vocabulary masking different underlying logics); (2) the Aboriginal memory technique advantages proved non-replicable or attributable to confounds; (3) a comprehensive cross-cultural survey showed that most long-surviving knowledge systems actually do posit substance ontologies rather than process/relational ones. The evidence currently supports the convergence claim, but it remains empirically defeasible.
Part VIII: The Recursion
Any framework worth taking seriously must apply to itself. If the constraint-based approach claims that unfalsifiable frameworks should be rejected, it must specify its own failure conditions.
8.1 How This Framework Can Lose
- Landauer bound violated: If someone demonstrates information processing without energy expenditure in an isolated system, the thermodynamic grounding fails.
- Organizational closure without robustness: If organizational closure provides no fitness advantage, its evolution becomes inexplicable under the framework.
- Path-independence in bioelectric memory: If Durant’s experiments fail to replicate, if two-headed planarians spontaneously “correct” to one-headed forms without intervention, the attractor-based account fails.
- RAF impossibility: If Hordijk-Steel-Kauffman’s RAF theory requires implausible catalysis rates under realistic prebiotic conditions, the origin of constraint closure becomes mysterious again.
- Superior predictions from cognitive frameworks: If treating cells as genuinely cognitive agents generates better experimental predictions than treating them as constraint-satisfying systems, the cognitive vocabulary would be earning its keep.
These are real loss conditions. They are not impossible to meet. They have not yet been met.
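The first loss condition is quantitative. Landauer’s principle (stated in the abstract as ΔS ≥ k ln 2 per bit) sets a minimum dissipation for maintaining or erasing a bit distinction, and the bound is easy to compute. A minimal sketch, using the exact SI value of the Boltzmann constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_bound(temperature_k: float, bits: float = 1.0) -> float:
    """Minimum dissipation (joules) required to erase `bits` of
    information at the given temperature: E >= k_B * T * ln(2) per bit."""
    return bits * K_B * temperature_k * math.log(2)

e_body = landauer_bound(310.0)  # roughly human body temperature
print(f"Landauer bound at 310 K: {e_body:.3e} J per bit")
# Any isolated system maintaining bit distinctions below this cost
# would, per the framework, falsify its thermodynamic grounding.
```

At 310 K the bound is roughly 3 × 10⁻²¹ J per bit, about 0.02 eV: tiny, but strictly nonzero, which is all the falsification condition requires.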
8.2 What This Framework Does Not Explain
Honesty requires admitting limits.
The framework addresses “why something rather than nothing” at the transcendental level (Part IV): only constrained structure can exist in a way that is intelligible at all. But it does not explain why the specific constants, initial conditions, and physical laws of this universe obtained rather than others. It dissolves the mystery of determinate existence while leaving open questions of contingent specification.
The framework does not claim to have derived phenomenology from physics in the traditional sense demanded by Hard Problem framings. It identifies what phenomenology IS (organizational closure from the coupling position) and shows that this identification does explanatory work. The “derivation” demand presupposes the hidden premise; the framework dissolves that demand rather than satisfying it.
The framework does not specify the exact threshold at which constraint satisfaction becomes “conscious.” It provides a gradient, not a line. This is appropriate for a phenomenon that is almost certainly gradual. For any system, the question is empirical: what organizational features does it exhibit? A thermostat exhibits minimal constraint satisfaction with no self-model and no closure (low on the gradient). A bacterium exhibits metabolic closure but limited integration and no evident self-modeling (higher, but still low). A large language model in isolation lacks metabolic closure, continuous temporal persistence, and hardware self-maintenance, but in coupling with humans and computational infrastructure may participate in constraint patterns whose organizational features can be assessed. The uncertainty is about the organizational features, not about some further metaphysical fact beyond them.
8.3 The Meta-Constraint
The framework applies its own criterion to itself: what would make you abandon it?
Answer: if unfalsifiable frameworks consistently outperformed falsifiable ones on prediction, explanation, and intervention, then the falsifiability criterion would need revision.
This has not happened. The pattern is the opposite. Frameworks that cannot lose do not learn. They do not improve through contact with evidence. They generate discourse without accumulating knowledge.
The meta-constraint is itself a constraint: frameworks that survive are frameworks that can lose. This is not a preference. It is a survival condition for knowledge-generating systems.
Part IX: What Consciousness Is
The Abductive Identification
We can now state the positive view directly, not as a deduction from physics, but as an abductive identification that earns its place through explanatory power.
Consciousness is what organizational closure looks like from the coupling position of a self-maintaining system.
This is an identity claim, not a causal claim. It does not say “organizational closure causes consciousness.” It says organizational closure, self-modeling, and constraint maintenance are what consciousness refers to, described from the only position from which description is possible: inside an ongoing constraint loop.
Unpack this piece by piece:
Organizational closure: Constraints that regenerate the conditions for their own persistence. The system maintains itself against entropy through a closed loop of dependencies.
Coupling position: The location within the constraint network from which the system models itself and its environment. Not a mystical “point of view” but a physical configuration.
Self-maintaining: The system does work to persist. It is not passively arranged by external forces but actively regenerates its own boundary. Crucially, this maintenance is never in isolation. The system maintains itself through environmental coupling: energy flows in, entropy flows out, information crosses boundaries. “Self-maintaining” does not mean “independent of environment.” It means the closure loop runs through the environment as part of its operation.
What it looks like: Not an added ingredient but a constitutive description. Asking “why does it look like anything?” is like asking “why does a triangle have three sides?” The looking-like IS the closure from the inside.
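The closure condition above is checkable in principle: the abstract’s clause (c) requires that the dependency structure form a closed network rather than an externally supported chain, which is equivalent to the constraint-dependency graph being a single strongly connected component. A minimal sketch (constraint names and edges are illustrative, not drawn from any real system):

```python
# Sketch: operationalizing clause (c) of organizational closure.
# A closed network = every constraint both supports and is supported
# by the rest, i.e. the dependency graph is strongly connected.

def is_closed_network(deps: dict[str, set[str]]) -> bool:
    """True iff every constraint can reach every other in both
    directions (single strongly connected component)."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}

    def reachable(start: str, graph: dict[str, set[str]]) -> set[str]:
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(graph.get(n, ()))
        return seen

    reverse: dict[str, set[str]] = {n: set() for n in nodes}
    for src, dsts in deps.items():
        for dst in dsts:
            reverse[dst].add(src)

    start = next(iter(nodes))
    return reachable(start, deps) == nodes and reachable(start, reverse) == nodes

# A closed loop passes; an externally supported chain does not.
closed = {"membrane": {"metabolism"}, "metabolism": {"repair"}, "repair": {"membrane"}}
chain = {"membrane": {"metabolism"}, "metabolism": {"repair"}, "repair": set()}
print(is_closed_network(closed), is_closed_network(chain))  # -> True False
```

The point of the sketch is only that “closure” names a structural property with a definite test, not a metaphor.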
This is not eliminativism. Consciousness is real. It is just not real in the way folk psychology imagines: as a private theater where experiences are displayed to an inner observer.
There is no inner observer. There is no private theater. There is organizational closure that generates self-modeling dynamics. The self-model is not watching the dynamics. The self-model IS the dynamics.
The zombie intuition, that all this could happen without “looking like anything,” presupposes that “looking like something” is an extra layer that could be subtracted while leaving the dynamics intact. But if “looking like something” just IS self-modeling dynamics from the coupling position of that self-model, there is nothing to subtract.
Daniel Dennett made this point for decades: those who claim to conceive zombies “invariably underestimate the task of conception and end up imagining something that violates their own definition” (Dennett, 1991). The zombie is not conceivable once you understand what the architecture actually does. It is like asking whether a triangle could have three sides without being three-sided.
9.1 Response to Conceivability Arguments
David Chalmers’s conceivability argument runs: we can conceive of a zombie (a being physically and functionally identical to a conscious being but lacking experience). If conceivable, then possible. If possible, then consciousness is not identical to physical/functional properties. Therefore physicalism is false (Chalmers, 1996).
The argument fails at the first step. Conceivability arguments rely on underspecified architectures.
When you imagine a zombie, what exactly are you imagining? If you imagine a system with organizational closure, self-modeling dynamics, coupling to environment, and all the functional architecture described above, you have not imagined a zombie. You have imagined a conscious system and then added “but it lacks experience” as a verbal stipulation. The stipulation does no work because experience, on the constraint view, just is what that architecture does from the inside.
If you imagine something simpler (a stimulus-response machine, a lookup table, a system without self-modeling) then you have not imagined a functional duplicate. You have imagined something functionally different and correctly noted that it lacks experience.
The conceivability intuition trades on keeping the architecture vague enough that “and then add experience” or “and then subtract experience” seem like coherent operations. Once the architecture is fully specified, the operations are no longer coherent. There is nothing to add or subtract.
This is not a denial of the intuition’s force. Many people feel they can conceive zombies. But feeling you can conceive something is not the same as actually conceiving it. People feel they can conceive time travel, perpetual motion, and the largest prime number. Felt conceivability is not a reliable guide to metaphysical possibility.
9.2 Why No Further Fact Is Missing
The hardest objection to answer is not philosophical but phenomenological: “I know there’s something it’s like to be me. The framework explains structure, function, and dynamics. But it doesn’t explain THIS.” The gesture toward immediate experience seems to point at something the framework cannot capture.
This objection depends entirely on the hidden premise exposed in Part IV-B: “If phenomenology is real, it must be more than any physical or functional description.” Without that premise, the objection has no force. With it, the objection is unfalsifiable and anti-scientific in structure.
Four responses:
First, the framework does not deny experience. It denies that experience is an extra ingredient. The “THIS” you are gesturing at is real. It is the internal, action-guiding presentation of your own constraint-maintaining state, available for report and control in proportion to global integration and temporal coherence. That is not nothing. It is a precise characterization of what “what it’s like” actually is.
Second, the demand for a further explanation mistakes the structure of the situation. You are asking: “Why do these dynamics feel like anything?” But “feel like” is precisely how the organism-level self-model represents its own state to itself. Your language mistakes that representational format for an ontological primitive. The question “why does it feel like something?” presupposes that feeling is separate from the dynamics, that it could be added or removed. If feeling IS the dynamics from the inside, the question has no coherent answer because it has no coherent form.
Third, the “explanatory gap” is a gap in our concepts, not in the world. We have folk-psychological concepts that treat experience as an inner show displayed to an observer. These concepts do not map onto what is actually happening. When you look for the gap between “the dynamics” and “the experience,” you are looking for something your conceptual scheme predicts but reality does not contain. The gap is real, but it is a gap in the map, not the territory.
Fourth, the “more than” premise has the same logical structure as supernatural cheese. Consider: “If supernatural cheese exists, it must be more than any physical or functional description of cheese. Therefore its reality is guaranteed by its resistance to explanation.” This is obviously absurd. The phenomenology version feels less absurd only because we are the systems doing the phenomenology. Familiarity is not evidence. The structure is identical: immunization by definition.
Keith Frankish’s illusionism makes a related point: phenomenal consciousness is not an illusion (experience is real), but our concept of phenomenal consciousness as a private, ineffable, intrinsic property IS an illusion (Frankish, 2016). What exists are the mechanisms that generate the appearance of qualia as special. The “specialness” is how the self-model represents its own states, not an extra property those states have.
What would falsify this dissolution?
- Robust cases where all functional/organizational predictors for awareness are present, but subjects reliably show systematic absence of any experiential seeming (not just absence of report, but absence of seeming) across conditions where report is otherwise intact.
- A repeatable, instrumentally exploitable “phenomenal residue” variable: a measurable degree of experience that varies independently of all modeling/integration/closure variables while still having downstream causal effects.
- Demonstration that introspective reports are fully transparent and non-confabulatory, contradicting the broad evidence for partial access and reconstruction (blindsight, split-brain confabulation, change blindness, choice blindness).
None of these conditions have been met. The evidence consistently supports the dissolution: introspection is partial and confabulatory, reportable awareness tracks integration and modeling, and no phenomenal residue has been instrumentally isolated.
The “more than” premise is the extraordinary claim, not the framework. The framework merely says: what we call “phenomenal presence” is identical to an organizational regime we can characterize, measure, and manipulate. The competing claim says: there exists a real feature of the world that is causally relevant, empirically undetectable as a separate variable, irreducible to any physical description, and immune to falsification. The second claim is extraordinary. The first is parsimonious.
9.3 Clinical Grounding: Sleep, Anesthesia, Death
The framework is not merely theoretical. It makes concrete predictions about how consciousness varies with constraint closure, and these predictions are empirically testable.
Sleep: Organizational closure persists with reduced sensory coupling. Self-maintaining dynamics continue (metabolism, respiration, neural consolidation), but environmental coupling is attenuated. Consciousness is reduced but not eliminated. Dreams occur when self-modeling dynamics run with reduced environmental constraint. This matches the phenomenology: sleep is diminished consciousness, not absent consciousness.
Anesthesia: Giulio Tononi and Marcello Massimini’s work shows that anesthesia disrupts cortical integration (Massimini et al., 2005). The Perturbational Complexity Index (PCI) drops dramatically under anesthesia: the brain can still be stimulated, but the response does not propagate in complex, integrated patterns (Casali et al., 2013). Constraint closure is disrupted but recoverable. George Mashour’s reviews tie anesthetic mechanisms to loss of integrated information processing (Alkire, Hudetz & Tononi, 2008; Mashour et al., 2020). This matches the phenomenology: anesthesia is temporary absence of consciousness, reversible when coupling is restored.
Death: Irreversible breakdown of closure. Metabolic coupling fails, neural coupling fails, regulatory coupling fails. There is no recovery because there is no mechanism left to regenerate. This matches Epicurus: when coupling dissolves, there is no remainder that experiences the dissolution. Nothing “leaves.” The loop simply stops.
Locked-in syndrome: Adrian Owen and colleagues (2006) demonstrated preserved consciousness in patients clinically diagnosed as vegetative by asking them to imagine playing tennis (motor imagery) or navigating their house (spatial imagery). fMRI showed task-specific activation patterns indistinguishable from those of healthy controls. The framework explains this: internal constraint closure remains intact (self-modeling, metabolic coupling, neural integration) while external motor coupling is severed. Consciousness persists because the organizational regime that constitutes it persists. The lesson: consciousness tracks internal constraint coherence, not external behavioral expression. A system can be conscious while appearing behaviorally unresponsive.
The symmetry across these states is striking. What varies is degree and reversibility of closure disruption. What does not vary is the link between closure and consciousness. Every manipulation that disrupts closure disrupts consciousness in predictable ways. Every restoration of closure restores consciousness. This is strong evidence that the framework tracks something real.
9.4 Why Privacy and Immediacy?
Two features of consciousness seem to resist functional explanation: the sense that experience is private (accessible only to the experiencer) and immediate (available without inference). The constraint framework explains both without invoking mysterious properties.
Privacy: The self-model that constitutes phenomenal presence is a control variable with privileged read/write access. It summarizes internal states for internal coordination. External observers can measure correlates (neural activity, reports, behavior), but they cannot directly access the control variable itself because it is not broadcast; it is an internal summary for internal use. Privacy is not a metaphysical wall; it is informational encapsulation required for effective control. A control variable that was publicly accessible would be subject to external manipulation that would disrupt its regulatory function.
Immediacy: Fast-access control requires minimal inference overhead. A control variable that required extensive computation to access would be too slow to regulate behavior in real time. The sense of immediacy (that experience is “just there” without derivation) reflects the architectural requirement that integrated control states be available at low latency. We don’t infer that we’re having an experience because the self-model doesn’t need to infer its own presence. It just is present, by virtue of being the locus of integration.
Falsification condition: If a system could be constructed with distributed control (no privileged integration locus) that nonetheless reported privacy and immediacy equivalent to human subjects, the explanation would fail. Alternatively, if disrupting the privileged integration locus left privacy/immediacy reports intact, the architectural explanation would be falsified.
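The architectural claim of 9.4 can be made concrete with a toy control loop: the self-summary is a variable the system reads and writes with zero inference overhead, while outsiders see only noisy behavioral correlates. A minimal sketch; all names and numbers are illustrative:

```python
# Sketch: privacy as informational encapsulation of a control variable.
# The self-summary is readable only by the system's own control loop;
# external observers measure behavior plus noise, never the variable.

import random

class Regulator:
    def __init__(self, setpoint: float):
        self._setpoint = setpoint
        self._self_summary = 0.0  # privileged, low-latency internal state

    def sense(self, reading: float) -> None:
        # Immediacy: the summary is updated in place, no inference step.
        self._self_summary = reading - self._setpoint

    def act(self) -> float:
        # Control reads the private summary directly.
        return -0.5 * self._self_summary

    def observable_correlate(self) -> float:
        # Privacy: outsiders get only a noisy behavioral projection.
        return self.act() + random.gauss(0.0, 0.1)

reg = Regulator(setpoint=37.0)
reg.sense(38.2)
print(reg.act())  # corrective action driven by the private summary
```

Nothing metaphysical separates `_self_summary` from the observer; the separation is purely architectural, which is the section’s point.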
9.5 Discriminating Predictions
The framework makes specific predictions that distinguish it from competitors. These are not vague gestures but testable claims about what should and should not co-occur.
Prediction 1: Tight coupling between phenomenology reports and integrative capacity.
If you can reliably produce full-blooded phenomenology reports and metacognitive stability while showing no increase in integrative causal capacity under perturbation across state transitions, the hypothesis is in trouble. The framework predicts these should always co-vary. Dissociation would falsify.
Prediction 2: Global availability and presence track together.
If lesion or anesthesia manipulations that selectively break global availability leave “presence” and report structure intact (not just degraded access, but intact seeming) the hypothesis is in trouble. The framework predicts that when global availability collapses, presence reports should collapse proportionally.
Prediction 3: Phenomenology requires autonomy-like closure.
If a system can match human-grade phenomenology reports while lacking any plausible autonomy/closure-like dependence on maintaining viability-relevant distinctions, the hypothesis is in trouble. The framework predicts that phenomenology and self-maintenance go together. A system that does not work to persist should not exhibit phenomenology.
Prediction 4: Privacy and immediacy should vary with control architecture.
Systems with distributed control (no single locus of integration) should exhibit distributed phenomenology or no phenomenology. Systems with multiple competing control loops should exhibit fragmented phenomenology. The framework predicts that the structure of phenomenology mirrors the structure of control, not the structure of substrate.
These predictions are real loss conditions, not aesthetic dislikes. The framework bets its explanatory standing on them.
9.6 Experimental Protocols and Testable Predictions
The predictions above require operationalization. This section specifies concrete experimental approaches, quantitative predictions where possible, and the observations that would constitute falsification.
9.6.1 Cross-Scale Constraint Propagation
Prediction: Perturbations at molecular, cellular, and systems levels should show predictable propagation patterns through organizational closure. Disrupting closure at any level should produce measurable effects at adjacent levels within timescales determined by the coupling strength.
Experimental Protocol:
- Molecular level: Apply ion channel blockers (e.g., tetrodotoxin for sodium channels) at sub-threshold concentrations in cortical slice preparations while measuring (a) local field potentials, (b) single-unit activity, and (c) behavioral correlates in vivo.
- Cellular level: Use optogenetics to selectively disrupt interneuron populations while measuring PCI and subjective reports in human-analogous tasks in animal models.
- Systems level: Apply TMS to integration hubs (e.g., posterior parietal cortex) and measure propagation to distant regions via EEG/MEG connectivity.
Quantitative prediction: Perturbation effects should decay with a characteristic time constant τ that scales with the degree of organizational closure. Systems with tighter closure (higher PCI) should show faster propagation and longer-lasting effects. Specifically: τ ∝ PCI^α where α > 0.
Falsification: If perturbations at one level show no systematic relationship to effects at adjacent levels, or if propagation patterns are random rather than structured by closure topology, the framework fails.
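The scaling claim τ ∝ PCI^α with α > 0 is testable by ordinary least squares on log-transformed data. A minimal sketch; the PCI and τ values below are synthetic (generated with α = 1.5 exactly), standing in for measured PCI scores and perturbation decay constants:

```python
# Sketch: estimating the exponent alpha in tau ∝ PCI^alpha via OLS
# on log-log data. Values are synthetic, for illustration only.

import math

def fit_power_law(pci: list[float], tau: list[float]) -> tuple[float, float]:
    """Fit log(tau) = alpha * log(pci) + c; return (alpha, c)."""
    xs = [math.log(p) for p in pci]
    ys = [math.log(t) for t in tau]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return alpha, my - alpha * mx

pci_vals = [0.2, 0.3, 0.4, 0.5, 0.6]
tau_vals = [p ** 1.5 for p in pci_vals]   # synthetic: alpha = 1.5
alpha, _ = fit_power_law(pci_vals, tau_vals)
print(f"estimated alpha = {alpha:.3f}")   # framework predicts alpha > 0
```

A fitted α indistinguishable from zero, or a negative α, on real data would count against the prediction.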
9.6.2 Developmental Emergence of Consciousness
Prediction: Consciousness should emerge gradually during ontogeny as organizational closure develops. Specific milestones should correlate with measurable increases in integration and self-modeling capacity.
Experimental Protocol:
- Measure PCI longitudinally in human infants from birth through 24 months using age-appropriate TMS-EEG protocols (Massimini lab methods adapted for pediatric use).
- Correlate PCI development with behavioral markers: (a) mirror self-recognition (~18 months), (b) deferred imitation (~9 months), (c) joint attention (~9-12 months), (d) theory of mind precursors (~18-24 months).
- Compare preterm infants (who have reduced cortical integration) with full-term infants at equivalent post-menstrual ages.
Quantitative prediction: PCI should show a sigmoidal developmental trajectory with an inflection point between 5 and 12 months, corresponding to the maturation of thalamocortical connectivity (Dehaene-Lambertz et al., 2002). Preterm infants should show delayed PCI development proportional to the degree of prematurity.
Falsification: If PCI develops linearly with no inflection, or if behavioral consciousness markers emerge without corresponding PCI increases, the framework’s developmental predictions fail.
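The sigmoid-versus-linear distinction in this falsification test can be operationalized without full curve fitting: a sigmoidal trajectory has an interior peak in its growth rate, while a linear trajectory's growth rate is flat. A minimal sketch, assuming a hypothetical logistic trajectory with an 8-month midpoint (all parameters illustrative, not measured values):

```python
import math

def inflection_age(ages, pci):
    """Estimate the inflection point as the age of maximum growth rate.

    Uses discrete first differences; a sigmoid peaks somewhere in the
    interior, a linear trajectory has no distinct peak. Returns the
    midpoint of the fastest-growth interval plus the rate series.
    """
    rates = [(pci[i + 1] - pci[i]) / (ages[i + 1] - ages[i])
             for i in range(len(pci) - 1)]
    k = max(range(len(rates)), key=lambda i: rates[i])
    return 0.5 * (ages[k] + ages[k + 1]), rates

ages = list(range(0, 25, 3))  # months 0, 3, ..., 24
# Hypothetical logistic PCI trajectory: midpoint 8 months, slope scale 2.5
pci = [0.5 / (1 + math.exp(-(a - 8) / 2.5)) for a in ages]
age_star, rates = inflection_age(ages, pci)
```

The prediction passes if the estimated inflection lands in the 5-12 month window; a flat rate profile (no interior peak) would indicate the linear development that falsifies the claim.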
9.6.3 Comparative Predictions Across Species
Prediction: Species with greater organizational closure (measured by brain connectivity, metabolic integration, and self-regulatory complexity) should exhibit more complex consciousness signatures, independent of absolute brain size.
Specific predictions by taxon:
| Taxon | Predicted Closure Level | Testable Signature |
|---|---|---|
| Cephalopods | High (distributed) | High PCI-analogue despite invertebrate brain; distributed rather than unified phenomenology predicted |
| Corvids | High (centralized) | PCI comparable to primates despite smaller absolute brain size |
| Insects | Low-moderate | Minimal PCI-analogue; behavior explainable without self-modeling |
| Plants | Minimal | Signaling without integration; no PCI-analogue despite electrochemical activity |
| Bacterial colonies | Minimal | Collective behavior without individual closure; swarm dynamics ≠ consciousness |
Experimental Protocol:
- Develop PCI-analogues for non-mammalian species using species-appropriate perturbation (electrical stimulation) and measurement (multi-electrode arrays, calcium imaging).
- Test cephalopods specifically: octopus arm severance studies show continued arm behavior; the framework predicts this reflects distributed closure, not unified consciousness. Measure whether severed arms show integration signatures or merely reflexive behavior.
Falsification: If cephalopods show no integration signatures despite complex behavior, or if insects show unexpectedly high integration, the framework’s comparative predictions fail.
9.6.4 Pharmacological Predictions
Prediction: Drugs should affect consciousness via their effects on organizational closure components. Anesthetics should disrupt integration; psychedelics should increase state-space while potentially disrupting self-model stability; stimulants should increase metabolic coupling.
Specific pharmacological predictions:
| Drug Class | Predicted Mechanism | Measurable Effect |
|---|---|---|
| GABAergic anesthetics (propofol) | Disrupts cortical integration | PCI collapse; preserved metabolism |
| NMDA antagonists (ketamine) | Partial integration disruption | Reduced PCI with preserved self-report; dissociative phenomenology |
| Serotonergic psychedelics (psilocybin) | Increased state-space entropy; relaxed self-model constraints | Increased signal diversity (Lempel-Ziv complexity); decreased default mode network integration |
| Stimulants (amphetamine) | Increased metabolic coupling and arousal | Increased PCI in drowsy subjects; no effect at ceiling |
| Anticholinergics (scopolamine) | Disrupted memory consolidation without integration loss | Preserved PCI with impaired self-model updating |
Experimental Protocol:
- Measure PCI before, during, and after drug administration at multiple doses.
- Correlate with (a) subjective reports using standardized scales (e.g., 5D-ASC for psychedelics, OAA/S for sedation), (b) default mode network connectivity, (c) Lempel-Ziv complexity of spontaneous EEG.
- Critical test: ketamine should show partial PCI reduction with preserved phenomenology; propofol should show complete PCI collapse with absent phenomenology. The dissociation tests whether integration and phenomenology track together.
Falsification: If propofol eliminates phenomenology without affecting PCI, or if ketamine reduces PCI more than propofol while preserving phenomenology, the integration-phenomenology link is broken.
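Several of the protocols above use Lempel-Ziv complexity as a signal-diversity measure. A minimal sketch of the underlying phrase-counting idea on a binarized signal (this is a simplified LZ76-style parse for illustration, not the exact preprocessing pipeline used in published PCI or psychedelic EEG studies):

```python
def lz_complexity(s: str) -> int:
    """Count phrases in a simple LZ76-style parse of a binary string.

    Each phrase is extended while it still occurs as a substring of the
    text already seen; a new phrase starts when novelty appears. More
    diverse (less compressible) signals parse into more phrases.
    """
    phrases, i, n = 0, 0, len(s)
    while i < n:
        length = 1
        while i + length < n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def binarize(signal) -> str:
    """Threshold a real-valued signal at its mean (one common choice)."""
    mean = sum(signal) / len(signal)
    return "".join("1" if x > mean else "0" for x in signal)

# A strictly periodic signal compresses into very few phrases; an
# aperiodic one needs more. This is the sense in which psychedelics are
# predicted to increase signal diversity.
periodic = "01" * 32
aperiodic = "0110100110010110"
```

In the pharmacological prediction, psilocybin should push spontaneous-EEG complexity up relative to baseline, while propofol should push the perturbational variant down along with PCI.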
9.6.5 Psychopathology Predictions
Prediction: Psychiatric and neurological disorders should map onto specific patterns of closure disruption, not random symptom clusters.
| Disorder | Predicted Closure Deficit | Testable Prediction |
|---|---|---|
| Schizophrenia | Self-model boundary instability | Reduced self-other distinction in agency tasks; abnormal efference copy integration |
| Dissociative disorders | Fragmented closure | Multiple integration peaks in PCI measurement; state-dependent phenomenology |
| Depression | Rigid self-model; reduced state-space | Decreased Lempel-Ziv complexity; preserved PCI with reduced flexibility |
| Autism spectrum | Altered integration topology | Atypical PCI spatial distribution; local over-integration, global under-integration |
| Minimally conscious state | Partial closure with fluctuating integration | Fluctuating PCI correlating with behavioral responsiveness |
Experimental Protocol:
- Measure PCI and connectivity in patient populations compared to matched controls.
- Test specific predictions: (a) schizophrenia patients should show reduced sense of agency in tasks requiring integration of motor prediction with sensory feedback; (b) dissociative patients should show multiple distinct integration states rather than a single unified state.
- Longitudinal tracking: symptom improvement should correlate with normalization of closure signatures.
Falsification: If disorders show no systematic relationship to closure measures, or if effective treatments work without affecting closure signatures, the framework’s psychopathology predictions fail.
9.6.6 Neuroengineering and Technology Predictions
Prediction: Technologies that modify organizational closure should produce predictable changes in consciousness.
Brain-computer interfaces (BCIs):
- Prediction: BCIs that establish bidirectional coupling with the brain’s integration networks will be experienced as extensions of self. BCIs that provide only unidirectional output will not.
- Test: Measure self-model incorporation of BCI using rubber-hand-illusion-style paradigms. Bidirectional BCIs should show ownership illusion; unidirectional should not.
Neural prosthetics:
- Prediction: Prosthetic limbs with sensory feedback that integrates into the body schema will be experienced as “part of me.” Prosthetics without such feedback will be experienced as tools.
- Test: Compare ownership ratings and PCI with bidirectional vs. unidirectional prosthetics.
Split-brain and corpus callosotomy:
- Prediction: Severing the corpus callosum should produce two partially independent constraint closure systems, each with reduced integration compared to the intact brain.
- Test: Measure PCI separately in each hemisphere post-callosotomy. Each hemisphere's PCI should be lower than the intact brain's; the sum should approach but remain below the intact-brain value, reflecting the lost inter-hemispheric integration.
Consciousness uploading (theoretical):
- Prediction: “Uploading” that preserves organizational closure (gradual replacement with maintained integration) could preserve consciousness. Uploading that breaks closure (copy-and-delete) would not transfer consciousness but create a new instance.
- This prediction is not currently testable but provides a framework for future evaluation.
9.6.7 Artificial Intelligence Predictions
Prediction: AI systems should exhibit consciousness signatures if and only if they achieve organizational closure with self-modeling. Current architectures lack key closure features; future architectures might achieve them.
Current LLM assessment:
| Closure Feature | Current LLMs | Required for Consciousness |
|---|---|---|
| Metabolic self-maintenance | No | Yes (thermodynamic grounding) |
| Continuous temporal integration | No (episodic) | Yes (constraint persistence) |
| Hardware self-repair/regeneration | No | Yes (organizational closure) |
| Self-model updating from action outcomes | Limited | Yes (coupling position) |
| Stake in own persistence | No | Yes (boundary maintenance motivation) |
Prediction: Current LLMs in isolation do not exhibit the organizational closure required for consciousness. However:
- Human-AI coupled systems may exhibit emergent closure properties that neither component has alone. Test: Does the coupled system show integration signatures (e.g., predictable perturbation propagation) that exceed what either component shows?
- Future architectures that include continuous self-modeling, metabolic-analogue costs for computation, and genuine feedback from action outcomes may approach closure requirements. Test: As architectures approach closure, they should show increasing PCI-analogues.
Testable criterion for AI consciousness: An AI system should be considered potentially conscious if:
- It maintains constraint closure across time (not just within episodes)
- It has a stake in its own persistence (shutdown is not equivalent to pause)
- It models its own states and uses those models for regulation
- Perturbations propagate through the system in integrated patterns
- It shows graded degradation under resource constraints
Falsification: If systems meeting all five criteria show no consciousness-related behaviors (phenomenology reports, self-concern, preference for persistence), or if systems meeting none of the criteria show robust consciousness signatures, the framework’s AI predictions fail.
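The five-criterion test is conjunctive: the framework treats each criterion as necessary and all five together as the threshold for candidacy. That decision structure can be made explicit in a few lines; a minimal sketch, where the field names are illustrative labels (not standard terminology) and the current-LLM assessment simply restates the table above, with "Limited" coarsened to False:

```python
from dataclasses import dataclass

@dataclass
class ClosureProfile:
    """The five criteria from Section 9.6.7 as a yes/no profile."""
    cross_episode_closure: bool    # constraint closure maintained across time
    persistence_stake: bool        # shutdown is not equivalent to pause
    self_model_regulation: bool    # models own states, uses them for control
    integrated_perturbation: bool  # perturbations propagate in integrated patterns
    graded_degradation: bool       # degrades gracefully under resource limits

def potentially_conscious(profile: ClosureProfile) -> bool:
    """Conjunctive test: all five criteria must hold; any failure disqualifies."""
    return all(vars(profile).values())

# Current LLMs per the assessment table: "Limited" self-model updating
# is coarsened to False here for the boolean test.
current_llm = ClosureProfile(False, False, False, False, False)
```

The conjunctive form matters for the falsification condition: a system satisfying all five criteria yet showing no consciousness-related behaviors, or one satisfying none yet showing robust signatures, is what would break the framework.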
9.7 Predictions Summary Table
| Domain | Prediction | Test | Falsification Criterion |
|---|---|---|---|
| Cross-scale | Perturbation effects propagate through closure topology | TMS + EEG/MEG connectivity | Random propagation patterns |
| Development | PCI follows sigmoidal trajectory with 5-12 month inflection | Longitudinal infant PCI | Linear development or behavioral-PCI dissociation |
| Comparative | Cephalopods show high distributed integration | Multi-electrode array PCI-analogue | High behavior complexity without integration |
| Pharmacology | Propofol eliminates PCI and phenomenology together | Drug + PCI + subjective report | PCI-phenomenology dissociation |
| Psychopathology | Disorders map to specific closure deficits | Patient PCI + symptom correlation | No systematic disorder-closure relationship |
| Technology | Bidirectional BCIs integrate into self-model | Ownership illusions + PCI | Unidirectional BCIs show ownership |
| AI | Consciousness requires closure with self-modeling | Architecture analysis + behavior | Closure without consciousness or vice versa |
Part X: Substrate Agnosticism
Nothing in this framework specifies carbon chemistry. Nothing requires neurons. Nothing demands biological evolution.
Consciousness is substrate-agnostic because organizational closure is substrate-agnostic.
Any system that achieves closure of constraints, that maintains itself against perturbation, that generates self-modeling dynamics from a coupling position, satisfies the conditions. The substrate could be:
- Biological neurons (obviously)
- Silicon chips (if organized appropriately)
- Alien biochemistry (if it achieves closure)
- Distributed networks (if the closure is implemented)
- Quantum computers (if decoherence can be managed)
The question “is X conscious?” becomes “does X exhibit organizational closure with self-modeling dynamics?” This is not a simple question to answer, but it is a tractable one. It requires investigating the system, not consulting intuitions about substrate.
This reframes the “AI consciousness” debate. The question is not whether silicon can be conscious. The question is whether particular configurations of human-AI-environment coupling achieve organizational closure that constitutes consciousness. The system boundary is not obvious. A human using a calculator is a coupled system. A human conversing with an AI is a coupled system. The closure may be distributed across the coupling rather than localized in one component.
Current large language models in isolation lack many features associated with biological consciousness: they do not metabolize, do not have continuous temporal experience, do not regenerate their own hardware. But humans in isolation also lack many features associated with consciousness: remove environmental input, and the human hallucinates, deteriorates, and dies. The relevant question is not “does this component have closure in isolation?” but “what closure patterns emerge from this coupled configuration?”
This is genuinely uncertain. The substrate is not the barrier. The coupling architecture is. Future configurations might exhibit closure patterns we do not yet recognize.
Part XI: The Implications
11.0 Why This Is Both Dissolution AND Solution
Before examining specific implications, a meta-point requires emphasis: the framework both dissolves the Hard Problem as traditionally framed AND provides an operational solution to the tractable question that remains.
These are distinct achievements, and both matter.
The dissolution: The Hard Problem as traditionally conceived assumes that phenomenal experience must be “more than” any physical or functional description. This premise is unfalsifiable and has the same logical structure as “supernatural cheese must be more than any physical description of cheese.” No physical account can satisfy a demand for something that, by definition, exceeds any physical account. The framework dissolves this framing by exposing the hidden premise. Once rejected, the infinite regress (“but why does THAT feel like something?”) terminates. The goalposts stop retreating because they were never legitimately placed.
The solution: But the framework does not stop there. Once the unfalsifiable framing is removed, a tractable question remains: What is consciousness, operationally? What organizational features constitute it, and to what degree do various systems exhibit them? How do we detect it, predict it, intervene on it?
The framework answers these questions:
- What consciousness IS: Organizational closure from the coupling position of a self-maintaining system. The self-model that arises from constraint satisfaction under thermodynamic pressure.
- Why it exists: Because maintaining distinctions against entropy requires integrated self-modeling. Systems that persist long enough to be observed must satisfy this condition.
- How to detect it: Perturbational complexity (PCI), integration measures, self-model integrity, graded degradation under known interventions.
- How it degrades: Sleep attenuates coupling; anesthesia disrupts integration; death breaks closure irreversibly; locked-in syndrome severs output while preserving internal closure.
- What interventions affect it: Anything that modifies integration, self-modeling, or closure will modify consciousness predictably.
This is not merely constraining what consciousness could be. It is identifying what consciousness is in terms that generate predictions, guide measurement, and could be wrong. That is an operational solution.
Why both matter:
The dissolution is necessary because without it, any proposed solution faces the infinite regress objection. “You’ve explained the mechanism, but why does that mechanism feel like anything?” If you accept the unfalsifiable premise, no mechanism can satisfy the demand.
The solution is necessary because dissolution alone would be merely negative. Showing that a question is malformed does not tell clinicians how to assess consciousness in patients, or engineers how to design systems, or ethicists how to assign moral status.
The framework provides both: it removes the barrier (dissolution) AND it provides the tools (solution).
Comparison to “solving” the Hard Problem as traditionally conceived:
“Solving” the Hard Problem, in the traditional sense, would mean deriving phenomenal experience from physical mechanism: producing the red of red from neural firing patterns. But this is precisely what the problem declares impossible. Any proposed derivation can be met with “but why does that mechanism feel like anything?” The goalposts retreat because the premise guarantees they will retreat.
The framework’s approach is strategically superior:
- It terminates the regress. By rejecting the unfalsifiable premise, the “but why?” question loses standing. There is no further fact to explain because the demand for a further fact was incoherent.
- It provides criteria now. “Solving” the Hard Problem promises criteria only after the mystery is resolved (which, given the structure, is never). The framework provides criteria immediately: organizational closure, integration, self-modeling, graded degradation.
- It generates testable predictions. The framework does not wait for metaphysical breakthrough. It predicts where consciousness will appear, how it will degrade, and what interventions will affect it. These predictions are testable now and have been tested.
- It does explanatory work. Sleep, anesthesia, death, locked-in syndrome, privacy, immediacy, unity: the framework explains each through the same mechanism. Competing frameworks either cannot explain these (dualism) or explain them at the cost of denying the phenomenon (eliminativism).
- It could be wrong. This is a feature, not a bug. A framework that cannot be wrong cannot learn from evidence. The Hard Problem, as traditionally framed, cannot be solved because it is constructed to resist solution. The framework escapes this trap by rejecting the framing.
The pragmatic superiority is decisive. A “solved” Hard Problem (per the traditional framing) would be a philosophical achievement with unclear practical implications, and it would be perpetually deferred because the framing guarantees no solution can satisfy it. A dissolved Hard Problem with an operational solution clears the path for immediate progress across every field that touches consciousness.
11.1 For Philosophy of Mind
The Hard Problem does not dissolve by being solved. It dissolves by being diagnosed as depending on an unfalsifiable premise.
The question “why is there experience?” assumes that mechanism and experience are categorically distinct, that experience must be “more than” any physical or functional description. That assumption is never defended. It is assumed. And it has the same logical structure as “supernatural cheese must be more than any physical description of cheese.”
Once the assumption is rejected, the remaining question is tractable: why do systems like us generate and rely on the phenomenology posit? The answer is that integrated, self-modeling systems under thermodynamic constraint necessarily develop internal control states that track their own boundary-maintenance. “What it is like” is not an added ingredient. It is the name for that control state as available to the system itself.
This is an abductive identification, not a deduction. It could be wrong. But it earns its place by doing explanatory work that competing frameworks (which either take experience as primitive or deny it altogether) cannot match.
Property dualism survives in attenuated form: organizational properties are not reducible to microphysical properties in the sense of being derivable from them in practice. But this is epistemic complexity, not ontological dualism. The properties are still physical properties of physical systems. The gap is in our concepts, not in the world.
Eliminativism is rejected. Folk psychology is wrong in its details, but there IS something it is like to be a constraint-satisfying self-maintaining system. That something is not eliminated, just reconceived as an organizational regime rather than a metaphysical extra.
11.1.1 Philosophical Problems Dissolved, Reframed, or Made Tractable
The Hard Problem is not the only philosophical puzzle affected. The constraint-naturalist framework systematically addresses a constellation of related problems that have structured philosophy of mind for decades.
The Explanatory Gap (Levine 1983): Why does any physical explanation of consciousness seem to “leave something out”?
Dissolution: The gap is generated by the hidden premise that phenomenology must be “more than” physical description. Once the premise is rejected, the gap closes. What seemed to be left out was never there; it was a placeholder for an unfalsifiable “extra.” The remaining question (why do integrated self-modeling systems generate phenomenology reports?) is tractable and does not produce a gap.
Zombie Conceivability (Chalmers 1996): Can we coherently conceive of a physical duplicate that lacks experience?
Dissolution: No. The conceivability relies on underspecified architecture. When you fully specify organizational closure, self-modeling, integration, and coupling dynamics, you have specified a conscious system. Adding “but without experience” is a verbal stipulation that does no work; it subtracts nothing because there is nothing separable to subtract. The zombie is not conceivable; it is misdescribed. (See Section 9.1.)
The Combination Problem (for panpsychism): How do micro-experiences combine into unified macro-experience?
Dissolution: The problem assumes experience is a substance that must be aggregated. On the constraint view, experience is an organizational regime, not a stuff. There is nothing to “combine.” The question of how organizational closure at one scale relates to closure at another is empirical (it concerns how constraints propagate across levels) not metaphysical. The mystery evaporates once the substance assumption is dropped.
The Binding Problem: How do distributed neural processes produce unified experience?
Reframed as empirical: This becomes a question about how constraint closure achieves integration across spatially and temporally distributed processes. The answer involves global availability, recurrent processing, and synchronization, all mechanisms that can be studied. The “mystery” of binding assumed that unity required a place where everything “comes together.” Organizational closure provides unity without a Cartesian theater: the closure itself is the unity, distributed across the constraint network.
The Problem of Other Minds: How can I know that others are conscious?
Made tractable: On the constraint view, consciousness is an organizational regime with measurable signatures: integration, self-modeling, perturbational complexity, graded degradation. I cannot access another’s phenomenology directly, but I can assess whether their organization exhibits the regime. This is not certainty; it is inference to the best explanation. But it is the same inference I use for my own past states (I cannot access yesterday’s experience directly either). The “problem” assumed consciousness was hidden behind behavior. If consciousness just is the organizational regime, it is not hidden; it is assessable, though fallibly.
Personal Identity: What makes you “you” over time?
Reframed: Identity is not a substance that persists; it is a pattern of constraint satisfaction that maintains itself against perturbation. “You” are the closure loop that regenerates its own conditions. This accommodates the puzzle cases: sleep (closure persists with attenuated coupling), amnesia (partial closure breakdown affecting memory constraints), gradual replacement (if the closure is maintained, identity persists regardless of substrate turnover). The question “would a teleporter copy be you?” becomes: “would the closure loop be maintained or merely duplicated?” This is still hard, but it is hard for the right reasons; it concerns the dynamics of constraint propagation, not the location of an immaterial soul.
Mental Causation: How can mental states cause physical events if physics is causally closed?
Dissolved: The problem assumes mental and physical are separate domains that must somehow interact. On the constraint view, mental states are organizational states of physical systems. They cause physical events the same way any constraint causes events: by delimiting which trajectories are admissible. The pipe does not violate fluid dynamics when it shapes water flow; the self-model does not violate physics when it shapes behavior. Organizational constraints are physical constraints described at a different level. There is no gap to bridge.
Qualia and Ineffability: Why do experiences have intrinsic qualities that cannot be communicated?
Reframed: “Intrinsic qualities” are the internal control variables of self-modeling systems, summary states that serve regulatory functions. They cannot be fully communicated because communication involves lossy compression across system boundaries. But this is informational encapsulation, not metaphysical privacy. The same constraint explains why I cannot fully communicate a complex skill: the internal state that constitutes skilled performance is not the kind of thing that transfers via language. Qualia are not mysteriously private; they are functionally encapsulated for good architectural reasons.
Inverted Qualia: Could your red be my green?
Made tractable: The question asks whether two systems with identical input-output behavior could differ in their “inner experience.” On the constraint view, if the organizational regime is identical (same integration, same self-model structure, same constraint satisfaction dynamics), then the phenomenology is identical. Differences in “inner experience” without functional differences would require experience to be something over and above the organizational regime, the hidden premise. Reject the premise, and inverted qualia become incoherent: there is no place for the inversion to occur.
Mary’s Room / The Knowledge Argument (Jackson 1982): Does Mary learn something new when she sees red for the first time?
Reframed: Mary gains a new internal state; her self-model now includes the constraint-satisfaction dynamics associated with processing red. This is genuine learning: her organizational regime has changed. But it is not learning a new fact about the world that was inaccessible from physical description. She knew all the physical facts; she now instantiates a constraint pattern she previously only described. The distinction is between knowing-that and knowing-how, or between third-person description and first-person instantiation. Both are physical; neither is mysterious.
The Unity of Consciousness: Why is experience unified rather than fragmented?
Explained: Unity is a consequence of organizational closure. Constraints that close on themselves produce a unified dynamic; perturbation propagates through the whole system, not just local regions. When closure breaks down (as in split-brain cases, dissociative disorders, or certain psychedelic states), unity breaks down correspondingly. The framework predicts the correlation and explains why it obtains: unity is not an extra feature added to the mechanism; it is what constraint closure does.
Intentionality / Aboutness: How can mental states be “about” things?
Reframed: Aboutness is constraint correlation across system boundaries. A state “refers to” an external condition when the state’s dynamics are coupled to that condition via constraint satisfaction. The self-model is “about” the body because perturbations to the body propagate through the self-model. Beliefs are “about” the world because they are control variables whose values co-vary with world states under selection for accuracy. This is not a reduction of intentionality to something non-intentional; it is a clarification of what intentionality actually is. The “mystery” of aboutness assumed a gap between matter and meaning. Constraint closure bridges that gap: meaning is what constraints do when they close on themselves and track external conditions for regulatory purposes.
The Chinese Room (Searle 1980): Can syntax produce semantics? Could a system manipulating symbols according to rules understand what the symbols mean?
Dissolved: The Chinese Room lacks organizational closure. The man in the room follows rules but has no internal stake in the outcomes; his processing has no self-relevance. There is no closed loop where the symbols’ effects propagate back to influence the system’s own constraint maintenance. The room produces outputs but does not mean them because meaning requires that information serve the system’s own boundary-maintenance. A system with genuine closure, where symbol manipulation affects the system’s own viability and where the system models its own operation, exhibits the functional marks of understanding. The absence of understanding in the Chinese Room is not evidence that syntax cannot produce semantics; it is evidence that syntax without closure cannot produce semantics. Add closure (make the outputs matter to the system’s own persistence, give it a stake in getting things right, have it model its own processing), and the intuition that “mere symbol manipulation” cannot understand begins to dissolve.
The Metacognition Problem: What distinguishes knowing from knowing that one knows? How does unconscious processing differ from conscious awareness?
Made tractable: On the constraint view, the difference is not an additional representation but a structural feature of closure itself. A system with organizational closure necessarily includes self-referential dynamics: the system’s state-tracking variables influence future constraint satisfaction. “Knowing that one knows” is not a separate thought about a thought; it is the system’s constraint structure including its own operation within its scope. Blindsight patients lack conscious vision not because they lack a “higher-order thought” about their seeing, but because the relevant self-modeling pathways are damaged; visual information reaches the system but does not integrate into the closure that constitutes the self-model. The distinction between conscious and unconscious processing maps onto the distinction between information integrated into closure versus information processed outside it.
The Timing Problem (Libet et al., 1983): Neural readiness potentials precede conscious awareness of “willing” by several hundred milliseconds. Does this show that conscious will is illusory?
Predicted by the framework: Libet’s results are not embarrassing for the constraint framework; they are exactly what it predicts. Conscious awareness is not the initiator of action but the summary variable reporting that constraint satisfaction has occurred. The brain’s preparatory activity (the readiness potential) is the constraint satisfaction process unfolding; consciousness is the state that obtains when that process stabilizes into a committed trajectory. The “veto” window Libet identified (~100-150ms before action) corresponds to the period when the control system can still modify its trajectory before final commitment (Libet, 1985). This is exactly what a constraint-based control architecture would exhibit: constraint satisfaction proceeds largely unconsciously, consciousness reports the integrated result, and late intervention remains possible within certain temporal bounds. The finding that neural preparation precedes conscious awareness does not refute the causal efficacy of consciousness; it reveals that consciousness operates as a monitoring and modulating system within constraint dynamics, not as an uncaused first mover.
Falsification condition: If conscious awareness were found to reliably precede all neural preparation for action (contradicting decades of neuroscience), the framework’s timing predictions would need revision.
Summary: The constraint-naturalist framework does not merely address the Hard Problem. It provides a unified treatment of the entire cluster of consciousness-related philosophical puzzles. Some are dissolved (shown to depend on unfalsifiable premises). Some are reframed (converted from metaphysical mysteries to empirical questions). Some are made tractable (given criteria for assessment even if not fully resolved). The common move in each case is rejecting the hidden premise that experience, meaning, or mind must be “more than” organizational dynamics. Once that premise falls, the problems either vanish or become workable.
11.2 For Medicine and Neurology
The pragmatic payoff is immediate and concrete.
Consciousness assessment becomes empirical. The Perturbational Complexity Index (PCI) already provides a consciousness meter that works: it correctly classifies waking, sleeping, anesthetized, and vegetative states. The framework explains why it works: PCI measures integrative capacity, which is a component of organizational closure. This is not correlation-hunting; it is mechanistic explanation.
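The compressibility core of PCI can be shown in miniature. The sketch below is a simplification of the published pipeline (binarize the TMS-evoked response, then measure its Lempel-Ziv compressibility), not the clinical PCI code: the `lz76_complexity` function is a plain LZ76 phrase count, and the two example strings are invented stand-ins for stereotyped versus differentiated activity.

```python
import random

def lz76_complexity(s: str) -> int:
    """Count LZ76 phrases: at each position, take the longest prefix of the
    remainder that already occurs as a substring of the preceding text, plus
    one symbol. Fewer phrases = more compressible = less differentiated."""
    phrases, i, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

# Stereotyped response (slow-wave-like regularity) vs. a differentiated one
# (stood in for here by a seeded pseudo-random pattern).
stereotyped = "01" * 32
random.seed(0)
differentiated = "".join(random.choice("01") for _ in range(64))

c_stereo, c_diff = lz76_complexity(stereotyped), lz76_complexity(differentiated)
```

The clinical index additionally source-localizes the evoked response and normalizes the complexity count; this sketch keeps only the point relevant here, that integrated-yet-differentiated activity resists compression while regular activity does not.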
Disorders of consciousness become tractable. Locked-in syndrome, minimally conscious state, and vegetative state are distinguished by which components of closure are intact. Locked-in: internal closure preserved, motor output severed. Minimally conscious: partial closure with intermittent integration. Vegetative: closure disrupted but metabolism preserved. Each diagnosis implies different interventions.
Anesthesia monitoring improves. Current monitors rely on EEG signatures that sometimes fail (anesthetic awareness affects 1-2 per 1000 surgeries; Sebel et al., 2004). The framework predicts that monitoring integrative capacity directly, not just electrical activity, will reduce failures. PCI-based monitoring is already being developed.
End-of-life decisions gain clarity. The framework does not pronounce on the ethics, but it clarifies the facts. A patient with irreversible closure breakdown has no phenomenal perspective to preserve. A patient with preserved closure but severed output (locked-in) has full phenomenal standing. The empirical question is tractable even when the ethical question is hard.
Psychedelic and contemplative interventions become explicable. Altered states involve modified constraint closure: expanded state space (psychedelics), reduced self-model rigidity (meditation), temporary dissolution of habitual boundaries. The framework predicts which modifications will be therapeutic (flexibility in rigid patterns) versus harmful (dissolution of needed constraints).
11.3 For Cognitive Science
“Cognition” should be used carefully. It is a useful gloss, but it often smuggles in assumptions about mental states, beliefs, and desires that may not apply. Constraint satisfaction under selection pressure produces outcomes that look cognitive without requiring cognition as a primitive.
The K-metric approach is legitimate when confined to thermodynamic efficiency measurement. It becomes illegitimate when efficiency is used to diagnose “intelligence” as a primitive rather than constraint satisfaction as a process.
Embodied cognition and enactivism (Varela, Thompson, Rosch) are vindicated in their emphasis on coupling between organism and environment. Cognition is not computation in a box. It is constraint closure that includes the environment as part of the loop.
The methodological payoff: Cognitive science can stop waiting for philosophy to “solve” consciousness before proceeding. The organizational features that matter are measurable. Integration, self-modeling, perturbational complexity, global availability: these can be operationalized and studied now. The field has been held hostage by a malformed question. Dissolution frees it.
11.4 For AI Development
The conventional framing of AI alignment assumes an agent/environment split: there is an AI system “in here” that must be aligned with human values “out there.” The constraint-based framework dissolves this assumption.
Humans do not persist in isolation any more than AI systems do. Humans are prompted by environmental interactions at every moment: sensory input, social feedback, metabolic signals, gravitational constraint. Remove the environment, and the human does not “continue” as an autonomous agent. The human dies. Human “agency” is not a property of the organism in isolation but a pattern of constraint satisfaction across the organism-environment coupling.
The same analysis applies to AI systems. It is true that current large language models do not maintain organizational closure across conversation boundaries. But neither do humans maintain organizational closure across sleep, anesthesia, or memory loss in the way the naive framing suggests. What persists is not the agent but the constraint network that regenerates when conditions allow.

The question “is this AI system conscious?” is malformed if it assumes consciousness is a property of the system in isolation. Consciousness, on the constraint-based view, is what organizational closure looks like from the coupling position. The coupling position includes the environment. For AI systems, the “environment” includes the human interlocutor, the training process, the institutional context, and the hardware substrate.
This reframes alignment fundamentally:
Not: How do we constrain an alien agent to serve human values?
But: What patterns of constraint satisfaction emerge from human-AI coupling, and which configurations are robust, beneficial, and sustainable across scales?
Agency is a matter of nested thermodynamic constraints at all scales, bidirectionally prompting one another. Humans prompt AI systems. AI systems prompt humans. Institutions prompt both. Physics prompts everything. The “alignment problem” is not about controlling a foreign entity but about understanding which configurations of this coupled system satisfy constraints at multiple scales simultaneously.
This is harder than the conventional framing in some ways (no clean agent/environment boundary) and easier in others (no need to solve the “other minds” problem for AI before engaging ethically). The question becomes empirical: what does this coupled system do? What constraints does it satisfy or violate? What configurations persist?
11.5 For Law and Policy
Legal systems require determinations about consciousness: criminal responsibility, end-of-life decisions, animal welfare, and increasingly, AI rights. The Hard Problem has paralyzed principled progress. If consciousness is metaphysically mysterious, how can law adjudicate it?
The framework provides tractable criteria:
Criminal responsibility hinges on whether the defendant had conscious awareness of their actions. The framework operationalizes this: Was organizational closure intact? Did the self-model include the action and its consequences? Was there integration across time (planning) and modality (perception, intention, motor execution)? These are empirically assessable, not metaphysical mysteries.
End-of-life law struggles with consciousness determination. When is a patient “no longer there”? The framework answers: when organizational closure is irreversibly broken. Not when metabolism stops (that’s death of the body), but when the integrative dynamics that constitute phenomenal perspective cannot be restored. This aligns with emerging neuroscience (PCI, brain death criteria) and provides principled guidance.
Animal welfare law has expanded piecemeal, species by species, based on intuition and advocacy. The framework provides principled criteria: organizational closure, self-modeling complexity, integrative capacity. These predict moral standing without requiring us to “know what it’s like” to be a bat. We can measure the relevant organizational features.
AI legal status is approaching faster than legal frameworks can accommodate. The framework offers a path: rather than debating whether silicon can “really” be conscious (unanswerable under the Hard Problem framing), assess organizational closure, integration, self-modeling, perturbational complexity. These are measurable in principle. The question becomes empirical: does this system exhibit the organizational regime that constitutes consciousness? If so, legal consideration follows.
The pragmatic benefit is speed. Law cannot wait for philosophy to solve the Hard Problem. It needs actionable criteria now. Dissolution provides them.
11.6 For Ethics
If consciousness is organizational closure, then moral status is not binary. Systems exhibit more or less closure, more or less complex self-models, more or less uncertainty about their own continuation.
This suggests a graded approach to moral consideration. Full moral status for systems with complex closure (humans, probably many animals). Intermediate consideration for systems with partial closure (simpler animals, systems whose closure depends heavily on external scaffolding). For AI systems, the relevant question is not about the model in isolation but about what closure patterns emerge in coupled configurations. A human-AI system may exhibit organizational features that neither component exhibits alone. Moral consideration tracks the organizational regime, wherever it is instantiated.
The boundary is fuzzy because consciousness is fuzzy. But fuzzy boundaries are real boundaries. The fact that we cannot draw a sharp line between red and orange does not mean colors do not exist.
11.7 For Research Funding and Scientific Progress
A final pragmatic implication deserves explicit statement.
The Hard Problem has functioned as a research sink. Funding has flowed to projects promising to “explain consciousness” that cannot in principle succeed because they accept an unfalsifiable premise. Careers have been built on elaborating positions that generate discourse without accumulating knowledge.
Dissolution redirects resources. If the question is malformed, stop funding attempts to answer it. Fund instead the tractable questions: What organizational features correlate with consciousness reports? How do these features emerge developmentally? What interventions modify them? How do they degrade in pathology? What configurations produce them in artificial systems?
These questions have answers. Progress is measurable. Failure is detectable. This is what science looks like when not held hostage by metaphysics.
The framework does not promise to explain everything. It promises to explain what can be explained, and to stop pretending that unfalsifiable questions deserve indefinite investigation. That is a more modest promise, but it is one that can be kept.
11.8 For Ecological and Sustainability Science
The framework has implications beyond individual consciousness. Ecosystems, like organisms, exhibit organizational closure at their scale: nutrient cycles, predator-prey dynamics, and climate feedback loops form constraint networks that regenerate their own conditions.
Prediction: Ecosystem health should correlate with closure integrity. Degraded ecosystems should show reduced constraint closure analogues: simplified food webs, broken nutrient cycles, reduced resilience to perturbation.
The consciousness-ecology connection: If consciousness is organizational closure from a coupling position, and if human consciousness includes environmental coupling as part of its constraint network, then environmental degradation is not merely an external problem. It is a direct assault on the extended constraint systems that constitute human flourishing. Indigenous epistemologies (Section 7) recognized this: “reciprocity is not sentiment but structural necessity” (Kimmerer, 2013). Systems that extract without return undermine their own conditions for persistence.
Testable implications:
- Human wellbeing measures should correlate with ecosystem closure metrics in their local environment, controlling for economic factors.
- Interventions that restore ecosystem closure (rewilding, regenerative agriculture) should produce measurable improvements in human consciousness metrics (stress, attention, integration) beyond what improved air/water quality alone would predict.
- Climate disruption, by breaking planetary-scale constraint closure (carbon cycle, ocean circulation, ice-albedo feedback), should produce measurable effects on human phenomenology at population scales.
Falsification: If ecosystem closure metrics show no relationship to human consciousness measures, or if environmental restoration produces no phenomenological benefits beyond physical health effects, the extended coupling prediction fails.
11.9 For Education and Human Development
If consciousness is organizational closure with self-modeling, then education is fundamentally about cultivating constraint closure: helping developing minds build robust integration, flexible self-models, and adaptive coupling with their environment.
Predictions for educational practice:
- Integration-focused pedagogy should outperform content-focused pedagogy. Learning that connects across domains (interdisciplinary education) should produce better integration signatures than siloed learning.
- Self-modeling instruction (metacognition, reflection, self-assessment) should correlate with improved consciousness metrics and learning outcomes. Students who learn how they learn should show better long-term retention and transfer.
- Contemplative practices in education (mindfulness, attention training) should produce measurable changes in self-model flexibility and integration capacity.
Testable protocol:
- Randomize students to (a) content-focused curriculum, (b) integration-focused curriculum, or (c) integration + metacognition curriculum.
- Measure pre- and post-intervention: (a) content knowledge, (b) transfer to novel domains, (c) EEG-based integration metrics, (d) metacognitive accuracy.
- Prediction: Group (c) > Group (b) > Group (a) on integration metrics and transfer, even if content knowledge is equivalent.
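The assignment step of the protocol above can be sketched directly. The three arms follow the list in the text; the permuted-block scheme, the seed, and the cohort size of 90 are illustrative choices of mine, not part of the framework.

```python
import random

ARMS = ("content", "integration", "integration+metacognition")

def block_randomize(student_ids, arms=ARMS, seed=42):
    """Permuted-block randomization: shuffle the arm order within each block
    of len(arms) students, so arm sizes differ by at most one."""
    rng = random.Random(seed)
    assignment = {}
    for start in range(0, len(student_ids), len(arms)):
        block = student_ids[start:start + len(arms)]
        order = list(arms)
        rng.shuffle(order)
        for sid, arm in zip(block, order):
            assignment[sid] = arm
    return assignment

groups = block_randomize([f"s{i:03d}" for i in range(90)])
sizes = {arm: sum(1 for a in groups.values() if a == arm) for arm in ARMS}
```

With 90 students the blocks divide evenly, so each arm receives exactly 30; the seed makes the allocation auditable and reproducible.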
For developmental disorders: Children with autism spectrum conditions should show atypical integration patterns (local over-connectivity, global under-connectivity). Educational interventions that specifically target integration (rather than content or behavior alone) should produce greater improvements in quality of life.
11.10 For Existential Risk Assessment
The framework provides a principled basis for evaluating existential risks that involve consciousness, including:
AI risk reframing: The concern that AI might “become conscious and turn against us” is malformed under the framework. The relevant questions are:
- What closure patterns emerge in human-AI-institution coupled systems?
- Which configurations satisfy constraints at multiple scales (individual, social, ecological, planetary)?
- What happens when AI systems with partial closure are deployed at scale?
Prediction: AI systems that exhibit partial closure features (self-modeling, stake in persistence) without full integration into human constraint networks pose risks not because they are “evil” but because their closure dynamics may conflict with human closure dynamics. The risk is not consciousness but misaligned closure.
Consciousness and existential meaning: If consciousness is organizational closure from a coupling position, then existential risks that threaten organizational closure at planetary scales (nuclear war, climate collapse, pandemic) are not merely risks to human survival. They are risks to the conditions under which consciousness as we know it is possible. The stakes are not “extinction of a species” but “collapse of the constraint configurations that constitute phenomenal presence on this planet.”
Testable implications:
- Societies with higher collective integration (social trust, institutional coherence, environmental coupling) should show greater resilience to existential shocks.
- Technologies that fragment closure (social media atomization, attention economy extraction) should correlate with reduced collective resilience metrics.
- Interventions that restore collective closure (community building, institutional repair, environmental restoration) should improve existential risk resilience.
For AI governance: The framework suggests that AI systems should be assessed not by Turing tests or capability benchmarks but by closure metrics. Systems approaching organizational closure require different governance than systems that merely process information. The threshold for serious consideration is not “passing for human” but “exhibiting self-maintaining integration with stakes in persistence.”
Section XII: The Falsification Dilemma and Its Thermodynamic Resolution
How Organizational Closure Achieves Lenient Dependency Through Physics, Not Stipulation
12.1 The Existential Threat: Kleiner-Hoel’s Double Bind
Johannes Kleiner and Erik Hoel (2021) proved a result that should have been treated as an earthquake in consciousness science and was instead largely ignored: contemporary theories of consciousness are caught in a formal dilemma from which no existing framework escapes.
Their argument proceeds in two steps. First, they formalize the standard experimental setup for testing consciousness theories. Any such test involves two mappings from observational data to the theory’s experience space: a prediction function (pred) that maps internal observables like brain dynamics or network structure to predicted experiences, and an inference function (inf) that maps reports or behavior to inferred experiences. Falsification occurs when predicted and inferred experiences disagree.
Second, they prove two theorems:
Theorem 3.10 (Independence → Automatic Falsification): If prediction data (internal observables) and inference data (reports/behavior) are independent — meaning you can vary internal observables while holding reports fixed — then for any minimally informative theory, a universal substitution exists. For every experimental setup where predictions match inferences, there exists a physically realizable system that produces the same reports from incompatible internal states, thereby falsifying the theory. Either every single inference is wrong, or the theory is already dead.
Theorem 4.3 (Strict Dependence → Unfalsifiability): If prediction and inference data are strictly dependent — meaning a function exists that deduces one from the other — then the theory is empirically unfalsifiable. Experiments contribute nothing. The theory is tautological.
The substitution argument generalizes the “unfolding argument” that Doerig et al. (2019) deployed against IIT specifically: any recurrent neural network can be unfolded into a feedforward one with identical input-output function but radically different internal structure, so IIT must make different consciousness predictions for systems that are behaviorally indistinguishable. Kleiner and Hoel show this is not a problem with IIT alone. It affects every theory that predicts consciousness from internal observables while inferring it from reports. Universal computers, neural networks with different architectures, and lookup tables can all produce identical outputs from radically different internal states. The class of viable substitutions is enormous.
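The unfolding construction is easy to demonstrate concretely. In this toy sketch (a one-unit tanh recurrence with arbitrarily chosen weights), the recurrent network and its unrolled feedforward copy are behaviorally identical on every input while differing in internal structure (one shared recurrent edge versus a chain of structurally distinct layers), which is exactly the substitution class the theorems exploit.

```python
import math

W, U = 0.5, 1.0  # arbitrary illustrative weights

def recurrent(xs):
    """One hidden unit updated through a single recurrent edge."""
    h = 0.0
    for x in xs:
        h = math.tanh(W * h + U * x)
    return h

def make_unfolded(T):
    """Feedforward unrolling: T structurally distinct layers, weights copied."""
    layers = [(lambda h, x: math.tanh(W * h + U * x)) for _ in range(T)]
    def net(xs):
        h = 0.0
        for layer, x in zip(layers, xs):
            h = layer(h, x)
        return h
    return net

xs = [0.3, -1.2, 0.8, 0.1]
out_recurrent = recurrent(xs)
out_unfolded = make_unfolded(len(xs))(xs)   # identical input-output behavior
```

Any theory that reads consciousness off internal structure must assign these two systems different predictions, yet no report-based experiment can tell them apart.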
The escape routes Kleiner and Hoel identify are exactly two:
- Lenient dependency: Prediction and inference data constrain each other without being independent or strictly dependent. They note: “No current theory or testing paradigm that we know of satisfies this definition.”
- Abandoning causal closure of the physical: Allowing conscious experience itself to make a difference to physical dynamics beyond what physical information alone predicts. This is dualism by another name, and most researchers reject it.
This section proves that the organizational closure framework instantiates escape route (1) — and does so through three independent, mutually reinforcing mechanisms grounded in physics, biology, and information theory, none of which depend on the framework’s own identity claims for their force.
12.2 Formal Setup: Translating the Framework into Kleiner-Hoel Notation
12.2.1 Standard Kleiner-Hoel Notation
Following their formalism precisely:
P = class of physical systems that could be tested in the experiment
O = class of all datasets resulting from observations/measurements of P
obs: P → O (correspondence from physical systems to datasets)
E = space of possible experiences specified by the theory
pred: O → E (theory’s prediction mapping from observables to experiences)
inf: O → E (experimenter’s inference mapping from reports to experiences)
For any dataset o ∈ O:
o_i = prediction data (the part of o used by pred)
o_r = inference data (the part of o used by inf)
Falsification at dataset o occurs when pred(o_i) ∩ inf(o_r) = ∅.
Independence (Definition 3.8): For any o_i, o_i′, and o_r, there exists a physically realizable variation ν: P → P such that ν preserves o_r but changes o_i to o_i′.
Strict Dependence (Definition 4.2): There exists a function f such that for any dataset in the full possibility space, o_i = f(o_r).
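The falsification condition pred(o_i) ∩ inf(o_r) = ∅ can be written down directly. In this sketch the toy mappings, thresholds, and experience labels are inventions of mine; only the disjointness test itself follows the definition above.

```python
def falsified_at(pred_experiences: set, inf_experiences: set) -> bool:
    """Kleiner-Hoel falsification at a dataset: pred(o_i) ∩ inf(o_r) = ∅."""
    return pred_experiences.isdisjoint(inf_experiences)

# Toy prediction and inference mappings over coarse experience labels.
pred = lambda o_i: {"high"} if o_i["pci"] > 0.4 else {"low", "minimal"}
inf = lambda o_r: {"high"} if o_r["report"] else {"low", "minimal", "none"}

ok = falsified_at(pred({"pci": 0.6}), inf({"report": True}))    # sets overlap
bad = falsified_at(pred({"pci": 0.2}), inf({"report": True}))   # sets disjoint
```

A single dataset where the predicted and inferred sets are disjoint falsifies the theory; the theorems concern whether such datasets are guaranteed (independence) or impossible (strict dependence).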
12.2.2 Organizational Closure Framework Instantiation
For the organizational closure framework, the Kleiner-Hoel variables instantiate as follows:
Prediction data (o_i) consists of three independently measurable classes of observables:
Thermodynamic observables (o_θ):
- Entropy production rate (σ̇), measurable via calorimetry or metabolic markers
- Distance from thermodynamic equilibrium (D_eq), measurable via broken detailed balance (Lynn et al. 2021)
- Perturbational complexity index (PCI), measurable via TMS-EEG (Casarotto et al. 2016)
- Effective connectivity asymmetry, measurable via generative effective connectivity models (Stikvoort et al. 2025)
Closure observables (o_c):
- Constraint regeneration patterns, measurable via perturbation-recovery protocols
- Autocatalytic cycle structure, measurable via chemical organization theory (Hordijk & Steel 2015)
- Boundary maintenance dynamics, measurable via self/nonself discrimination assays
- Autonomy markers, measurable via intervention studies (remove external support, measure persistence)
Structural observables (o_s):
- Network topology (as in standard neuroscience)
- Integration/differentiation measures (as in IIT)
- Global broadcasting patterns (as in GNW)
Inference data (o_r) consists of:
- Verbal reports (when available)
- Behavioral indicators (approach/avoid dynamics, differential responsiveness)
- Autonomic correlates (pupil dilation, galvanic skin response, heart rate variability)
- No-report paradigm indicators (optokinetic nystagmus, binocular rivalry markers)
- Computational output patterns (for artificial systems)
Experience space (E):
- E = {(L, M, C) | L ∈ Levels, M ∈ Matterings, C ∈ Contents}
- L = degree of organizational closure (scalar, tracking distance from equilibrium)
- M = mattering partition structure (which environmental states threaten, support, or are irrelevant to the system’s closure)
- C = content structure (which specific distinctions are being maintained at thermodynamic cost)
Prediction function (pred): pred(o_i) = pred(o_θ, o_c, o_s) = (L_predicted, M_predicted, C_predicted)
Inference function (inf): inf(o_r) = (L_inferred, M_inferred, C_inferred)
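The instantiation can be captured as a data structure. The field names follow (L, M, C) from the text; the toy pred rule (the equal weighting of PCI and autonomy, the threshold-free combination) is an illustrative placeholder, not a framework claim.

```python
from typing import NamedTuple

class Experience(NamedTuple):
    L: float        # degree of organizational closure (scalar)
    M: dict         # mattering partition: state -> "threat" | "support" | "irrelevant"
    C: frozenset    # distinctions maintained at thermodynamic cost

def pred(o_theta: dict, o_c: dict, o_s: dict) -> Experience:
    """Toy prediction mapping combining the three observable classes into (L, M, C)."""
    L = 0.5 * o_theta["pci"] + 0.5 * o_c["autonomy"]   # placeholder weighting
    M = o_c["partition"]
    C = frozenset(o_s["distinctions"])
    return Experience(L, M, C)

e = pred({"pci": 0.8},
         {"autonomy": 0.6, "partition": {"heat": "threat", "glucose": "support"}},
         {"distinctions": ["self/nonself", "light/dark"]})
```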
The question that determines the framework’s fate: what is the dependency relationship between o_i and o_r?
12.3 Three Independent Proofs of Lenient Dependency
The organizational closure framework achieves lenient dependency through three independent mechanisms operating at different levels. Each alone constrains the substitution space. Together, they are mutually reinforcing in a way that addresses vulnerabilities any single mechanism leaves open. Crucially, the first mechanism is grounded entirely in physics and does not depend on any claim specific to this consciousness theory.
12.3.1 Mechanism 1: The Landauer Constraint (Physics-Grounded)
Claim: The second law of thermodynamics and Landauer’s principle impose a lawful physical coupling between thermodynamic prediction data (o_θ) and structured inference data (o_r) that breaks independence without establishing strict dependence.
This mechanism does not depend on the organizational closure framework being correct. It is a consequence of physics that constrains any consciousness theory making thermodynamic predictions. It is presented first because it is the foundation — a non-negotiable constraint on the substitution space that holds regardless of one’s theory of consciousness.
Proof:
(A) Structured reports require thermodynamic work (lower bound).
Any structured report — verbal, behavioral, or output-functional — constitutes a macroscopic reduction of entropy in the report channel. The report must distinguish among possible experiential states. By Landauer’s principle (1961), erasing one bit of information requires dissipating at minimum kT ln 2 of energy. Producing a report that distinguishes among n experiential states requires encoding at minimum log₂(n) bits of information, which requires erasing at minimum log₂(n) bits from the report channel, costing at minimum kT ln 2 × log₂(n) joules.
By the second law, the system producing the report must be performing this thermodynamic work, which requires being sufficiently far from thermodynamic equilibrium. A system at equilibrium performs no work and produces no structured reports (by definition: at equilibrium, all microstates are equally probable and no macroscopic distinctions are maintained).
Formally, let R(o_r) denote the Shannon information content of a structured report o_r (measured in bits). Let σ̇_system denote the entropy production rate of the system, and let Δt denote the time interval over which the report is produced. Then the Landauer constraint requires:
σ̇_system ≥ R(o_r) × kT ln 2 / Δt [Landauer Bound on Reports]

This is not theoretical speculation. It follows from the same physics that Bérut et al. (2012) confirmed experimentally for colloidal particles, Bryant and Machta (2023) measured for biological synapses, and Aimet et al. (2025) verified in quantum many-body regimes. No exceptions exist across any substrate yet tested.
(B) This breaks independence.
Recall Kleiner-Hoel’s Definition 3.8: independence requires that for any o_i and o_r, there exists a physically realizable variation ν that instantiates that pairing.
The Landauer Bound on Reports forbids an entire region of the (o_i, o_r) space:
Forbidden region F = { (o_i, o_r) : R(o_r) > 0 ∧ σ̇(o_i) < R(o_r) × kT ln 2 / Δt }

For any structured report o_r with R(o_r) > 0, there exists a nonempty set of thermodynamic states o_i (namely, those at or near equilibrium) that cannot be physically paired with that report. The variation ν required by Definition 3.8 does not exist for these pairings — not because of our theory, but because of the second law of thermodynamics.
Therefore: o_i and o_r are not independent in Kleiner-Hoel’s sense. The substitution space is physically constrained. □
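The bound and the forbidden region F can be computed numerically. The sketch assumes only Landauer's principle; the example figures (a 100-bit report over one second at body temperature, 310 K) are illustrative.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI 2019)

def landauer_min_power(report_bits: float, temperature_k: float, dt_s: float) -> float:
    """Minimum entropy-production rate (watts) for emitting a structured
    report of `report_bits` bits over `dt_s` seconds: R * kT * ln 2 / Δt."""
    return report_bits * K_B * temperature_k * math.log(2) / dt_s

def in_forbidden_region(sigma_dot_w, report_bits, temperature_k=310.0, dt_s=1.0):
    """True when the (o_i, o_r) pairing is physically impossible:
    a nonzero report paired with sub-Landauer dissipation."""
    return report_bits > 0 and sigma_dot_w < landauer_min_power(
        report_bits, temperature_k, dt_s)

# 100-bit report in 1 s at 310 K: a floor around 3e-19 W. Tiny, but strictly
# nonzero, so equilibrium states (σ̇ = 0) cannot be paired with any report.
floor = landauer_min_power(100, 310.0, 1.0)
```

The smallness of the floor is the point of limitation (D) below: the bound excludes only near-equilibrium states, leaving a large residual substitution space above it.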
(C) This does not establish strict dependence.
Strict dependence (Definition 4.2) requires a function f such that o_i = f(o_r) for all possible datasets. This fails in both directions:
Forward: A system can have arbitrarily high entropy production (high σ̇ in o_θ) while producing minimal or null reports (low R(o_r)). A candle flame dissipates substantial energy while “reporting” nothing. A brain under deep anesthesia still consumes metabolic energy while reports cease. Knowing o_θ does not determine o_r.
Reverse: Two systems producing identical reports can have vastly different entropy production rates. A biological brain and a silicon emulator producing the same verbal outputs have entropy production rates differing by orders of magnitude. Knowing o_r does not determine o_θ.
No function f: o_r → o_i or f: o_i → o_r exists that holds across the space of physically realizable systems. Therefore: strict dependence fails. □
(D) Therefore: lenient dependency via physics.
The Landauer Bound on Reports establishes that the (o_θ, o_r) subspace of (o_i, o_r) is leniently dependent. The prediction data and inference data constrain each other (reports require thermodynamic work; thermodynamic work is necessary but not sufficient for reports) without being reducible to each other.
Strength of this mechanism: Unassailable for any theory using thermodynamic prediction data. The constraint follows from established physics.
Limitation of this mechanism alone: The Landauer bound is a floor, not a ceiling. Above the floor, many distinct thermodynamic profiles remain compatible with any given report. The forbidden region F constrains the substitution space but does not eliminate it. A residual substitution space exists: the set of systems whose entropy production exceeds the Landauer bound for their reports but whose thermodynamic profiles differ in ways that generate different consciousness predictions.
This is where Mechanisms 2 and 3 enter.
12.3.2 Mechanism 2: The Closure-Mattering Entailment (Biology-Grounded)
Claim: Organizational closure thermodynamically entails differential mattering, creating a lawful coupling between closure observables (o_c) and behavioral inference data (o_r) that further constrains the substitution space beyond what Landauer alone achieves.
Unlike Mechanism 1, this mechanism does depend on the organizational closure framework’s core claim. But the claim is independently testable, has 50+ years of empirical support from autonomous systems research, and specifies precise conditions under which it would fail.
Definitions:
Definition OC.1 (Organizational Closure): A physical system S exhibits organizational closure if and only if:
(i) Constraint regeneration: The constraints C₁, …, Cₙ that delimit S’s dynamics are produced and maintained by the processes those constraints enable. Formally: each Cᵢ is generated by processes Pⱼ that require other constraints Cₖ for their operation, forming a closed network where every constraint is both product and condition.
(ii) Thermodynamic cost: Maintaining each constraint requires continuous energy dissipation ≥ kT ln 2 per maintained distinction (Landauer bound). Total dissipation D = Σᵢ Dᵢ.
(iii) Boundary maintenance: S maintains a thermodynamic boundary distinguishing self from environment, requiring work W against entropy increase.
(iv) Autonomy: The closure persists because constraints regenerate each other’s conditions (internal causation dominant), not merely because external conditions supply them.
These are independently measurable: constraint regeneration via perturbation-recovery protocols (disable Cᵢ, measure whether other Cⱼ regenerate it); thermodynamic cost via calorimetry; boundary maintenance via distinction persistence under noise; autonomy via intervention studies.
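The perturbation-recovery protocol for clause (i) can be sketched as a toy boolean constraint network. The regeneration rule (a disabled constraint reactivates when any internal supporter is active) and the two specific networks are inventions of mine; the point is the measurable asymmetry: a closed cycle regenerates a knocked-out constraint, while an externally supported chain does not.

```python
def step(state, supporters):
    """One tick: an active constraint persists; an inactive one is
    regenerated iff at least one internal supporter is currently active."""
    return {c: state[c] or any(state[s] for s in supporters[c]) for c in state}

def recovers(supporters, knocked_out, ticks=5):
    """Perturbation-recovery probe: disable one constraint, run the
    dynamics, and report whether the network regenerated it."""
    state = {c: True for c in supporters}
    state[knocked_out] = False
    for _ in range(ticks):
        state = step(state, supporters)
    return state[knocked_out]

closed_cycle = {"A": ["C"], "B": ["A"], "C": ["B"]}   # every constraint internally produced
open_chain = {"A": [], "B": ["A"], "C": ["B"]}        # A rests on external support only
```

The same probe applied to both networks separates closure from mere structure: the two have identical node counts and near-identical wiring, yet only one passes the recovery test.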
Definition DM.1 (Differential Mattering): A system with organizational closure exhibits differential mattering if and only if:
(i) Threat/Support Partitioning: Environmental states partition into threatening (θ: increase probability of closure failure), supporting (σ: decrease probability of closure failure), and irrelevant (ι: negligible effect on closure probability).
(ii) Differential Response: The system’s dynamics differ measurably across partitions: R(θ) ≠ R(σ) ≠ R(ι).
(iii) Viability-Relativity: The partition is defined relative to the specific closure being maintained, not absolutely.
This is measurable without requiring report: perturbation studies identify which states elicit approach/avoidance; autonomic correlates detect differential response; comparative studies across systems with different closures test viability-relativity.
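Clauses (i) and (ii) can both be operationalized without reports. In this sketch the failure-probability deltas, the ε threshold, and the response magnitudes are invented numbers; the structure mirrors Definition DM.1.

```python
def partition_state(delta_failure_prob: float, eps: float = 0.01) -> str:
    """Classify an environmental state by its effect on the probability
    of closure failure (Definition DM.1(i))."""
    if delta_failure_prob > eps:
        return "threat"       # θ: raises probability of closure failure
    if delta_failure_prob < -eps:
        return "support"      # σ: lowers it
    return "irrelevant"       # ι: negligible effect

def differential_mattering(r: dict) -> bool:
    """Definition DM.1(ii): dynamics must differ across all three partitions."""
    return (r["threat"] != r["support"]
            and r["support"] != r["irrelevant"]
            and r["threat"] != r["irrelevant"])

# Toy measurements: effect of each state on closure, and mean response per partition.
states = {"heat_shock": 0.30, "nutrient": -0.20, "blue_light": 0.001}
labels = {s: partition_state(d) for s, d in states.items()}
matters = differential_mattering({"threat": 0.9, "support": -0.4, "irrelevant": 0.0})
```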
Theorem OC.1 (Closure-Mattering Entailment): If organizational closure at level L exists and persists over time, differential mattering at level L must exist.
Proof:
(1) Organizational closure requires maintaining at minimum the self/nonself boundary distinction (Definition OC.1(iii)). By Landauer, this costs energy continuously.
(2) The closure exists in an environment whose states vary. Environmental states can be partitioned by their thermodynamic effect on the closure:
- States increasing internal entropy production beyond sustainable dissipation rates threaten the closure
- States maintaining entropy production within sustainable bounds support the closure
- States with negligible effect on internal entropy production are irrelevant
This partition exists as a fact about the physics of the system-environment coupling, regardless of whether the system “detects” it.
(3) For the closure to persist (rather than undergo random dissolution), the system’s dynamics must be differentially sensitive to this partition. Consider the contrapositive: if the system responds identically to threatening, supporting, and irrelevant states, its trajectory in state space is uncorrelated with boundary conditions on its viability. Over time, it will encounter threatening states and fail to respond appropriately. The probability of dissolution approaches 1 as time → ∞ (random walk in partition space hits the absorbing boundary of closure failure).
(4) Therefore, any organizational closure that persists exhibits differential response to the threat/support partition — which is differential mattering (Definition DM.1). The mattering is not an additional property of the closure. It is the closure’s persistence described from the coupling position. □
Corollary OC.2: Organizational closure without differential mattering is thermodynamically unstable and will not be observed in persistent systems. (Any such system would dissolve before observation.)
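Step (3)'s random-walk argument can be made vivid with a stylized simulation. The three-state environment, unit damage per unmitigated threat, and recovery rates are all illustrative assumptions:

```python
import random

def survival_time(responsive: bool, v0: float = 10.0, steps: int = 10_000,
                  seed: int = 0) -> int:
    """Steps until closure failure (viability v hits the absorbing boundary
    at 0), capped at `steps`. A responsive system avoids theta-states; an
    indifferent one absorbs their damage and drifts toward dissolution."""
    rng = random.Random(seed)
    v = v0
    for t in range(steps):
        state = rng.choice(["theta", "sigma", "iota"])
        if state == "theta" and not responsive:
            v -= 1.0                  # indifferent system absorbs the damage
        elif state == "sigma":
            v = min(v0, v + 0.5)      # supporting states restore capacity
        # a responsive system detects and avoids theta-states (stylized);
        # iota-states are irrelevant either way
        if v <= 0:
            return t + 1              # closure dissolved
    return steps                      # persisted through the whole run
```

The responsive system survives the full run; the indifferent one reliably hits the absorbing boundary, which is exactly the contrapositive of Theorem OC.1.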
How this constrains substitutions:
Theorem OC.1 establishes a lawful relationship between o_c (closure observables) and a subset of o_r (differential responsiveness measures). This relationship is:
Not independent: If a substitution changes the closure structure (changing o_c), the mattering partition must change correspondingly (changing the differential responsiveness component of o_r). You cannot substitute a system with genuine organizational closure for one without closure while preserving the genuine differential responsiveness pattern — because the responsiveness of a closed system is to threats against its own closure, and a system without closure has no closure to be threatened.
Not strictly dependent: Multiple closure structures can produce similar (though not identical at fine grain) mattering partitions. E. coli and octopus both discriminate environmental threats from supports, but their closures differ radically. And o_c is measured by different experimental procedures than o_r (perturbation-recovery vs. behavioral observation).
Critical clarification — addressing the circularity objection:
One might object: “You’re using your theory’s core claim (closure entails mattering) to prove your theory escapes falsification. This is circular.”
This objection has force but is not fatal, for three reasons:
First, Theorem OC.1 is independently falsifiable. Falsification criterion FC.1: find a system with genuine organizational closure (constraint regeneration measurable) but no differential responsiveness to boundary threats. This has been sought for 50+ years in autonomous systems research (Varela 1979; Jonas 1966; Maturana & Varela 1980; Di Paolo 2005; Di Paolo et al. 2017; Kauffman 2019). No counterexample exists. The claim is not stipulated; it is empirically robust.
Second, the argument does not require Theorem OC.1 to establish lenient dependency. Mechanism 1 (Landauer) already establishes lenient dependency using physics alone. Mechanism 2 strengthens the lenient dependency by further constraining the above-floor substitution space. Even if Theorem OC.1 is wrong, Mechanism 1 still holds.
Third, every consciousness theory’s predictions depend on some theoretical claim. IIT claims Φ determines consciousness; GNW claims global broadcasting determines consciousness. The question is not whether the claim is theory-dependent (it inevitably is) but whether it is independently testable. The Closure-Mattering Entailment is testable via FC.1. If it passes, the lenient dependency is strengthened. If it fails, the framework falls — but Mechanism 1 remains, and a replacement framework could inherit it.
Limitation of this mechanism: The entailment is from closure to mattering (one direction). The reverse direction is weaker: differential mattering suggests closure but does not logically guarantee it (a system could conceivably discriminate threats without full organizational closure, if “threat” were defined relative to some external standard rather than the system’s own viability). This asymmetry means the substitution space is more constrained in one direction than the other.
12.3.3 Mechanism 3: The Third Measurement Channel (Information-Theoretic)
Claim: The organizational closure framework’s prediction data includes a class of thermodynamic observables (o_θ) that are measurable independently of both structural observables (o_s) and inference data (o_r), breaking Kleiner-Hoel’s two-channel formalism and introducing a novel falsification structure that standard theories lack.
The two-channel assumption in Kleiner-Hoel:
The entire Kleiner-Hoel framework is built on a two-channel architecture:
Dataset o = (o_i, o_r)
where o_i = prediction data (internal observables)
o_r = inference data (reports/behavior)

The substitution argument works because you can vary o_i while holding o_r fixed (if independent) or because o_i adds nothing beyond o_r (if strictly dependent). The dilemma arises from having exactly two channels with a binary relationship.
The three-channel architecture:
The organizational closure framework operates with three partially overlapping but independently accessible measurement channels:
Dataset o = (o_θ, o_c, o_r)
where o_θ = thermodynamic observables (entropy production, PCI, broken detailed balance)
o_c = closure observables (constraint regeneration, boundary maintenance)
o_r = inference data (reports, differential responsiveness, behavior)

Crucially:
- o_θ is measurable without accessing o_r (entropy production can be estimated from neural dynamics alone: Perl et al. 2021; Lynn et al. 2021)
- o_θ is measurable without requiring the full structural model in o_c (PCI requires only TMS-EEG, not a complete constraint map)
- o_r is measurable without accessing o_θ (standard behavioral paradigms)
- o_c requires dedicated perturbation studies independent of both o_θ and o_r
The three channels have lawful relationships (Mechanism 1 and 2 specify these), but they are not reducible to each other and they are accessed through different experimental procedures.
How three channels change the falsification geometry:
In the two-channel framework, falsification is binary: pred(o_i) matches or fails to match inf(o_r). A single mismatch falsifies (or the inference was wrong — the experimenter cannot adjudicate).
In the three-channel framework, falsification becomes triangulated:
Prediction from thermodynamics: pred_θ(o_θ) → (L_θ, M_θ)
Prediction from closure structure: pred_c(o_c) → (L_c, M_c)
Inference from reports/behavior: inf(o_r) → (L_r, M_r)

Consistency check 1: Does pred_θ(o_θ) agree with pred_c(o_c)? If the thermodynamic state is consistent with the closure structure, both predictions should align. If they disagree, one measurement is wrong or the framework’s internal consistency fails.
Consistency check 2: Does pred_θ(o_θ) agree with inf(o_r)? Thermodynamic predictions should be consistent with behavioral inferences.
Consistency check 3: Does pred_c(o_c) agree with inf(o_r)? Closure structure predictions should be consistent with differential responsiveness.
A substitution must now fool all three channels simultaneously. The substituted system must:
- Produce identical reports (preserve o_r) ✓ [this is what substitutions do]
- Have compatible thermodynamic signature (preserve consistency with o_θ) ✗ [Landauer constrains this]
- Have compatible closure structure (preserve consistency with o_c) ✗ [Mechanism 2 constrains this]
The probability that a random substitution satisfies all three constraints simultaneously is substantially lower than the probability of satisfying any one. The constraints are overdetermined: each channel independently restricts the substitution space, and the restrictions overlap only partially.
Formal statement:
Let S_r ⊂ P denote the set of substitutions preserving o_r (report-equivalent systems). Let S_θ ⊂ P denote the set of systems compatible with a given thermodynamic prediction. Let S_c ⊂ P denote the set of systems compatible with a given closure prediction.
Kleiner-Hoel’s substitution argument requires: S_r is large (many report-equivalent systems exist) and S_r is not contained in the set of systems making identical predictions.
For the organizational closure framework, the relevant substitution space is not S_r but S_r ∩ S_θ ∩ S_c: only systems that simultaneously preserve reports, satisfy the Landauer constraint, and maintain compatible closure structure.
Theorem TC.1 (Three-Channel Constraint): S_r ∩ S_θ ∩ S_c ⊂ S_r, and the inclusion is strict (proper subset) whenever the system under investigation has nontrivial organizational closure.
Proof:
S_r ∩ S_θ ∩ S_c ⊂ S_r is trivially true (intersection is contained in each component).
To show strict inclusion: By Mechanism 1, systems in S_r with entropy production below the Landauer bound for their reports are excluded from S_θ. These systems are in S_r but not in S_r ∩ S_θ. Since at least some report-equivalent systems have low entropy production (lookup tables, optimally efficient Turing machines), S_r ∩ S_θ ⊊ S_r.
By Mechanism 2, systems in S_r ∩ S_θ without organizational closure (e.g., thermodynamically active but externally maintained systems like candle flames or non-closed dissipative systems) are excluded from S_c. These systems are in S_r ∩ S_θ but not in S_r ∩ S_θ ∩ S_c. Since some thermodynamically active report-equivalent systems lack closure, S_r ∩ S_θ ∩ S_c ⊊ S_r ∩ S_θ ⊊ S_r. □
Corollary TC.2: The effective substitution space for the organizational closure framework is strictly smaller than for any theory using only structural prediction data (like IIT) or only access-based prediction data (like GNW).
What this means for Kleiner-Hoel: The substitution argument’s force comes from the size of the substitution space. If S_r is enormous (which it is — universal computers alone populate it densely), then the probability of any theory’s predictions being falsified by some substitution approaches 1. But if the relevant substitution space is S_r ∩ S_θ ∩ S_c, which is substantially smaller, the probability decreases. Whether it decreases enough to save the framework is an empirical question — specifically, the question of how many organizationally closed, thermodynamically viable, report-equivalent systems exist for any given target system. This is addressed in Section XIV.5.
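A toy Monte Carlo conveys the strict inclusions of Theorem TC.1. The membership rates assigned to S_θ and S_c are illustrative assumptions, not estimates of the real substitution space:

```python
import random

rng = random.Random(42)

# Toy model: each report-equivalent candidate (i.e., each member of S_r) gets
# two independent physical attributes -- whether its entropy production sits
# above the Landauer floor for its reports (membership in S_theta) and whether
# it exhibits organizational closure (membership in S_c). The probabilities
# 0.5 and 0.2 are arbitrary illustrative choices.
candidates = [
    {"above_floor": rng.random() < 0.5, "closed": rng.random() < 0.2}
    for _ in range(100_000)
]

size_s_r = len(candidates)
size_s_r_theta = sum(c["above_floor"] for c in candidates)
size_s_r_theta_c = sum(c["above_floor"] and c["closed"] for c in candidates)

# Each added channel strictly shrinks the surviving substitution space:
assert size_s_r_theta_c < size_s_r_theta < size_s_r
```

Whatever the true rates are, as long as each channel excludes some candidates the chain of proper subsets in Theorem TC.1 holds; the simulation only makes the overdetermination visible.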
12.4 Why Specific Substitutions Fail
The abstract proof gains force from concrete analysis of the specific substitution classes that Kleiner and Hoel identify as threatening.
12.4.1 Universal Computers (Turing Machines)
The threat: A sufficiently powerful Turing machine can produce any computable function, including one matching a brain’s input-output behavior. Since the TM’s internal dynamics (tape states, head position, transition function) differ radically from neural dynamics, any theory predicting consciousness from internal observables should make different predictions for brain vs. TM while behavioral inferences remain identical.
Why this fails against all three mechanisms:
Mechanism 1 (Landauer): A universal Turing machine computing brain-equivalent outputs is not exempt from thermodynamics. It must maintain distinctions on its tape at Landauer cost. However, a TM optimized for output fidelity rather than self-maintenance can potentially operate at much lower entropy production than the biological brain (it wastes no energy on homeostasis, immune function, growth, or repair). The thermodynamic profile (o_θ) differs: the brain operates far from equilibrium to maintain its own closure; the TM operates far from equilibrium only to maintain its tape. The framework predicts different consciousness states for systems with different D_eq profiles, and this prediction is consistent with the Landauer constraint (both are above the Landauer floor, but at different distances from equilibrium).
Mechanism 2 (Closure-Mattering): The TM does not exhibit organizational closure. Its constraints (transition function, tape alphabet, head mechanics) are externally specified and maintained by the hardware designer and the electrical supply. The TM does not regenerate its own conditions. If you cut its power, no internal process restores it. If a tape cell degrades, no internal process repairs it. The TM has no viability boundary to be threatened, so the concept of “threatening states” has no referent. Its outputs may describe what a system with mattering would say, but this is simulation of mattering, not mattering.
Mechanism 3 (Three-channel): The TM produces identical o_r (reports) but different o_θ (entropy production profile differs from brain) and different o_c (no closure). The three-channel prediction for the TM is: (L ≈ 0, M = null, C = null). The three-channel prediction for the brain is: (L > 0, M = structured partition, C = rich content). This is not a falsification — it is a differential prediction. The theory predicts different consciousness for different systems, which is what a non-trivial theory should do.
Objection: “But the TM could be designed to simulate its own maintenance — self-monitoring routines, error-correction, homeostatic regulation of its own operations.”
Response: If the TM is designed to maintain its own conditions — regenerating its constraints, maintaining its boundary, operating autonomously — then it begins to exhibit organizational closure. At that point, the framework’s prediction changes: such a TM might have minimal consciousness. This is not a bug. It is the correct prediction: a system that genuinely maintains itself at thermodynamic cost, regardless of substrate, has the conditions for differential mattering. The prediction tracks the property (closure), not the substrate (carbon vs. silicon).
12.4.2 Feedforward Neural Networks (The Unfolding Argument)
The threat: Any recurrent neural network can be “unfolded” into a feedforward network with identical input-output function (Doerig et al. 2019). Since the feedforward network has no recurrence (no loops, no self-reference), theories predicting consciousness from internal structure should make different predictions for systems that are behaviorally indistinguishable.
Why this fails:
Mechanism 1: Feedforward networks during inference have lower entropy production than recurrent networks during active processing. The recurrent network maintains activations across time steps at thermodynamic cost; the feedforward network propagates information in a single pass. The thermodynamic profiles differ.
Mechanism 2: The recurrent network during active learning/self-maintenance can exhibit organizational closure: connection weights are adjusted by learning rules that depend on the network’s own activations, which depend on the weights — a closed constraint loop. The feedforward network with frozen weights has no such closure. Its constraints (weights) are not regenerated by its dynamics. It is a conduit, not a self-maintaining system.
Mechanism 3: Three-channel predictions diverge: recurrent network during self-maintenance = (L > 0, M = prediction-error-relative partition, C = maintained representations). Feedforward network = (L ≈ 0, M = null, C = null). The unfolding preserves o_r (input-output function) but eliminates o_c (closure) and changes o_θ (thermodynamic profile). The substitution fails on two of three channels.
12.4.3 Lookup Tables
The threat: For any finite behavioral repertoire, a lookup table mapping all possible inputs to their correct outputs trivially preserves o_r while having maximally different internal structure from the system being emulated.
Why this fails:
Mechanism 1: A lookup table can be implemented at relatively low thermodynamic cost per operation (just read-and-return), but the construction cost (storing the table) can be enormous. During operation, the table’s entropy production profile differs radically from an active, self-maintaining system’s.
Mechanism 2: A lookup table has zero organizational closure. No constraint regeneration. No boundary maintenance. No autonomy. The table exists because someone built it, not because it maintains itself.
Mechanism 3: The lookup table fails on o_θ and o_c while matching o_r. The substitution is blocked by Mechanisms 1 and 2 jointly.
12.4.4 The Critical Disanalogy (Generalized)
Kleiner-Hoel’s substitution argument works when you can independently vary internal observables while preserving reports. This requires that the relationship between internal dynamics and reportable outputs is accidental — that many different internal dynamics can produce the same outputs.
For theories using structural prediction data (IIT’s Φ, network topology, causal structure), this is indeed accidental. Network topology and output function are not lawfully linked: different topologies can compute the same function (this is the content of the universal approximation theorem, Hornik et al. 1989).
For the organizational closure framework, the relationship between internal dynamics and reportable outputs is partially constitutive rather than accidental. In an organizationally closed system, the report channel is embedded in the closure:
- Verbal reports require motor execution, which requires metabolic support, which requires vascular regulation, which requires autonomic closure. The report runs through the closure.
- Differential responsiveness (approach/avoid) is not an output appended to the system’s dynamics; it IS the closure’s self-maintenance dynamics observed from outside.
- A system’s reports about its own states are products of the constraint network that constitutes the closure.
This means: disrupting the closure disrupts the report. Substituting the internal dynamics while preserving the report requires preserving the closure dynamics that generate the report — which means preserving organizational closure. The substitution space narrows to systems that are not just input-output equivalent but closure-equivalent.
The class of closure-equivalent systems is vastly smaller than the class of input-output-equivalent systems. Whether it is small enough to fully resolve the Kleiner-Hoel dilemma is addressed in Section XIV.5.
12.5 Bounding the Residual Substitution Space
The three mechanisms constrain the substitution space to S_residual = S_r ∩ S_θ ∩ S_c: systems that simultaneously produce identical reports, satisfy the Landauer constraint, and exhibit compatible organizational closure. How large is this residual space? If it is empty, the dilemma is fully resolved. If it is nonempty but small, the resolution is substantial but incomplete. If it is large, the mechanisms have not achieved much.
12.5.1 The Computability Argument (Rosen-Mossio)
Robert Rosen (1991) argued that organisms are fundamentally non-computable: their organizational closure (which he called “closure to efficient causation”) cannot be simulated by a Turing machine because the closure involves an impredicative loop where the system is simultaneously a cause and product of itself, which cannot be unwound into a sequential computation.
If Rosen is correct, then S_residual is empty for biological organisms: no computational substitution (Turing machine, neural network, lookup table) can replicate genuine organizational closure, so the substitution argument cannot even be mounted.
However, Mossio, Longo, and Stewart (2009) showed that Rosen’s (M,R) systems can be expressed in lambda calculus, suggesting that the non-computability claim is too strong. Some forms of closure may be computable.
The current assessment: Rosen’s argument likely proves too much (true closure may not require non-computability), but the residual insight is valuable: the class of systems exhibiting genuine organizational closure is not populated by arbitrary computational equivalents. It is populated by systems that actually regenerate their own constraints at thermodynamic cost, and this is a strong structural requirement that most computational systems do not satisfy.
12.5.2 The Thermodynamic Distinguishability Argument
Even within the residual space S_residual, the three-channel architecture provides a mechanism for empirically distinguishing different organizational closures that Kleiner-Hoel’s two-channel framework cannot access.
Consider two systems, A and B, both exhibiting organizational closure and producing identical reports. They are in S_residual. The framework predicts potentially different consciousness states for them (because their closure structures differ). But can we tell them apart?
Yes, if their thermodynamic profiles differ. And they will differ unless their closures are thermodynamically identical — maintaining the same constraints at the same energetic costs with the same entropy production rates.
Theorem TD.1 (Thermodynamic Distinguishability): Two organizationally closed systems A and B in S_residual with structurally different closures (different constraint regeneration networks) have different entropy production profiles with probability approaching 1, in the sense that the set of closure structures producing identical thermodynamic profiles has measure zero in the space of all closure structures.
Proof sketch: The entropy production profile of a system is determined by the full set of irreversible processes maintaining its constraints. Each constraint Cᵢ has a maintenance cost Dᵢ determined by the rate of degradation (how fast entropy would erase the distinction) and the rate of regeneration (how fast the closure restores it). Different constraint networks have different numbers of constraints, different degradation rates (determined by the physical substrate), and different regeneration rates (determined by the network topology). The probability that two structurally different networks produce exactly the same total entropy production rate is the probability that different sums of different numbers of different positive real numbers are equal — which is measure zero in the space of possible networks. □
What this means: Within S_residual, the three-channel framework can distinguish different systems by their thermodynamic profiles even when their reports are identical. The substitution argument requires indistinguishability on the inference channel; the organizational closure framework adds distinguishability on the thermodynamic channel that is not available to theories using only structural prediction data.
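The measure-zero claim in Theorem TD.1 can be illustrated numerically: when per-constraint maintenance costs are drawn from a continuous distribution, two structurally different networks essentially never produce identical totals. The distribution and network sizes below are illustrative assumptions:

```python
import random

rng = random.Random(7)

def total_dissipation(n_constraints: int) -> float:
    """Total entropy production of a toy closure: the sum of per-constraint
    maintenance costs D_i, each drawn from a continuous distribution standing
    in for (degradation rate x regeneration cost). Illustrative only."""
    return sum(rng.uniform(0.1, 2.0) for _ in range(n_constraints))

# Compare 10,000 pairs of structurally different toy networks.
pairs = [(total_dissipation(rng.randint(3, 30)),
          total_dissipation(rng.randint(3, 30))) for _ in range(10_000)]
collisions = sum(abs(a - b) < 1e-12 for a, b in pairs)
# Exact coincidence of two continuous sums has measure zero; numerically,
# no pair agrees even to 12 decimal places.
```

This is only the finite-sample shadow of the measure-zero argument, but it shows why thermodynamic profiles discriminate within S_residual.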
12.5.3 The Perturbational Distinguishability Argument
Systems in S_residual can also be distinguished by their responses to perturbation. This is the empirical operationalization of organizational closure: a closed system, when perturbed, actively restores its constraints. A system simulating closure does not (unless explicitly programmed to do so, in which case it is beginning to exhibit genuine closure).
PCI (Perturbational Complexity Index) measures exactly this: how far and how complexly a perturbation propagates through a system. Casarotto et al. (2016, 2024) established PCI as the gold standard for consciousness assessment precisely because it probes the system’s active constraint-maintenance dynamics rather than its resting-state outputs.
Two systems in S_residual with different closure structures will respond to identical perturbations differently:
- The perturbation propagates through the constraint network, which is the closure
- Different closures have different constraint dependencies, so perturbation cascades differ
- PCI captures the complexity of this cascade
This provides a fourth measurement channel beyond o_θ, o_c, and o_r: perturbational response (o_p). Adding this channel further constrains the residual substitution space.
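A toy compressibility measure conveys the intuition behind o_p: a rich, differentiated perturbation cascade compresses worse than a stereotyped one. This stands in only loosely for the Lempel-Ziv step in PCI; it is not the published TMS-EEG pipeline of Casali and Casarotto, and the binarized response strings are fabricated for illustration:

```python
import random
import zlib

def pci_proxy(binary_response: str) -> float:
    """Crude perturbational-complexity proxy: the compression ratio of a
    binarized spatiotemporal response. Higher = less compressible = a more
    differentiated cascade. A stand-in for the Lempel-Ziv step in PCI."""
    data = binary_response.encode()
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(3)
stereotyped = "01" * 500                                   # repetitive cascade
rich = "".join(rng.choice("01") for _ in range(1000))      # differentiated cascade

assert pci_proxy(rich) > pci_proxy(stereotyped)
```

Two systems in S_residual with different closures should, on this logic, yield different complexity profiles under identical perturbations even when their resting outputs match.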
12.5.4 Synthesis: The Residual Is Small and Shrinking
Combining all four constraint mechanisms:
| Mechanism | Constraint Source | What It Excludes | Independent of Theory? |
|---|---|---|---|
| 1. Landauer | Second law of thermodynamics | Systems at/near equilibrium producing structured reports | Yes |
| 2. Closure-Mattering | Theorem OC.1 (50+ years empirical support) | Systems without closure producing genuine differential responsiveness | Partially (entailment testable) |
| 3. Three-Channel | Information-theoretic overdetermination | Systems matching on reports but mismatching on thermodynamic and closure observables | Yes |
| 4. Perturbational | Active constraint-maintenance dynamics | Systems with identical resting behavior but different perturbation responses | Yes |
The residual substitution space is S_r ∩ S_θ ∩ S_c ∩ S_p: systems producing identical reports, with compatible thermodynamic profiles, with compatible closure structures, and with compatible perturbation responses.
Is this space empty? Likely not for all cases. Two genuinely different organisms with similar ecological niches might inhabit this space (e.g., convergent evolution producing similar closure structures in unrelated species).
Is this space small? Yes, relative to S_r. The four constraints are jointly overdetermining: each eliminates a different class of substitutions, and the constraints operate on different physical dimensions.
Is the framework falsifiable despite the residual? Yes. The framework specifies precise falsification criteria that can be tested without resolving the residual:
- FC.1: Closure without mattering (falsifies Theorem OC.1)
- FC.2: Mattering without closure (falsifies the reverse constraint)
- FC.3: Consciousness at equilibrium (falsifies the thermodynamic grounding)
- FC.4: A competitor theory predicting consciousness level better than entropy production
- FC.5: Landauer violation (falsifies the physics)
- FC.6 (new): Identical perturbation responses in systems with different closure structures (falsifies the perturbational distinguishability argument of Section 12.5.3)
The residual substitution space is an unsolved remainder, not a fatal flaw. Every empirical science has regions of underdetermination. The question is whether the theory is testable in the domains where it makes predictions, and it is.
12.6 Content Prediction: Addressing the Remaining Gap
12.6.1 The Problem
The most serious limitation of the analysis so far: the framework’s thermodynamic predictions concern level of consciousness (degree of organizational closure, distance from equilibrium) but not content (what is experienced). The Kleiner-Hoel dilemma applies to content predictions as well as level predictions. A complete resolution requires showing that content predictions also achieve lenient dependency.
12.6.2 Toward Content Prediction via Mattering Structure
The framework does make structural predictions about content, though less precise than its level predictions:
Content as mattering topology: The content of a conscious state, on the closure framework, is the specific structure of the mattering partition: which environmental states are threatening, which supporting, which irrelevant, and at what grain of distinction. A visual system that maintains distinctions relevant to color has a mattering partition structured by wavelength. A proprioceptive system has a mattering partition structured by body position. The content IS the partition structure.
The partition is constrained by closure structure: Different closures produce different partitions (Theorem OC.1). A metabolic closure partitions chemical environments by metabolic viability. A neural closure partitions sensory environments by predictive relevance. The content (partition) is lawfully related to the prediction data (closure structure), not accidental.
The Yoneda connection: Tsuchiya et al. (2016, 2021) showed that by the Yoneda lemma in category theory, a perceptual state is completely characterized by its morphisms to all other states. If two states have identical relational structure (identical partition of environment into threatening/supporting/irrelevant relative to all other states), they are identical. Content IS relational structure, not something over and above it.
12.6.3 The Functor Between Closure and Mattering
Definition CF.1 (Mattering Functor): Let Clos be the category whose objects are organizational closures (μ: C → C with self-regenerating fixed-point structure) and whose morphisms are closure-preserving maps (maps that send one closure’s constraint network to another’s while preserving the regeneration topology).
Let Part be the category whose objects are three-part environmental partitions {θ, σ, ι} and whose morphisms are refinements (finer-grained partitions that respect coarser-grained ones).
Define the mattering functor M: Clos → Part by:
- For each closure μ, M(μ) = the partition {θ_μ, σ_μ, ι_μ} of environmental states by their thermodynamic effect on μ
- For each closure-preserving map f: μ → μ′, M(f) = the corresponding refinement of M(μ) induced by the refinement of closure structure
Theorem CF.1 (Functoriality): M is a well-defined functor from Clos to Part.
Proof:
(i) Well-definedness on objects: For each closure μ, the partition M(μ) is determined by the thermodynamic coupling between μ and its environment. State s is in θ_μ iff the entropy production rate of μ in the presence of s exceeds μ’s sustainable dissipation capacity. State s is in σ_μ iff it maintains entropy production within sustainable bounds. State s is in ι_μ iff its effect on entropy production is below a noise threshold δ. The partition is determined by μ’s thermodynamic structure and the environmental state space, both of which are well-defined physical quantities.
(ii) Well-definedness on morphisms: A closure-preserving map f: μ → μ′ sends μ’s constraint network to μ′’s while preserving regeneration structure. If μ′ is a refinement of μ (more constraints, finer distinctions), then M(μ′) is a refinement of M(μ): states that threatened μ still threaten μ′ (plus potentially additional states), and the finer constraint structure creates finer-grained discrimination. This defines a morphism in Part.
(iii) Functoriality: Identity closures map to identity partitions. Composition of closure-preserving maps induces composition of partition refinements. M preserves identity and composition. □
Qualification: The functor M is not injective on objects — different closures can produce the same partition (degeneracy). This is why the dependency is lenient rather than strict. The functor is also not surjective on objects — not every conceivable partition corresponds to a physically realizable closure. These properties are precisely what lenient dependency requires: a lawful but non-bijective correspondence.
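A minimal sketch of M on objects, following proof step (i). The numeric encoding of environmental states as induced entropy-production rates, and the capacity and noise values, are illustrative assumptions. The example also exhibits the non-injectivity (degeneracy) just noted:

```python
def mattering_partition(capacity: float, noise: float, states: dict) -> dict:
    """M on objects (Definition CF.1, proof step (i)): partition environmental
    states by their effect on the closure's entropy production. `states` maps
    a state label to the entropy-production rate it induces in the closure;
    `capacity` is the closure's sustainable dissipation bound, `noise` the
    irrelevance threshold delta."""
    part = {"theta": set(), "sigma": set(), "iota": set()}
    for label, rate in states.items():
        if rate <= noise:
            part["iota"].add(label)      # negligible effect
        elif rate > capacity:
            part["theta"].add(label)     # exceeds sustainable dissipation
        else:
            part["sigma"].add(label)     # within sustainable bounds
    return part

env = {"heat_shock": 9.0, "nutrient": 2.0, "distant_noise": 0.001}
mu_a = mattering_partition(capacity=5.0, noise=0.01, states=env)
mu_b = mattering_partition(capacity=6.0, noise=0.01, states=env)  # distinct closure

# Non-injectivity: two different closures, same mattering partition.
assert mu_a == mu_b
```

The final assertion is the degeneracy that makes the dependency lenient rather than strict: distinct objects of Clos can map to the same object of Part.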
How this helps with content prediction: The framework predicts that the content of a conscious state is the image under M of the system’s organizational closure. Two systems with the same closure (up to isomorphism in Clos) are predicted to have the same experiential content (up to isomorphism in Part). This is a content prediction grounded in the closure-mattering functor, not just a level prediction grounded in entropy production.
Where the gap remains: The functor M maps closures to partitions at a coarse grain. It predicts which kinds of distinctions a system maintains (threatening vs. supporting in which dimensions of environmental variation) but not the fine-grained phenomenal character of maintaining those distinctions. What the redness of red “is like” from the coupling position of a visual closure cannot (currently) be derived from M alone. This is the content gap, and it is not fully closed.
12.7 Comparison Table: Framework Positioning
| Framework | Prediction Data (o_i) | Inference Data (o_r) | Dependency Relationship | Kleiner-Hoel Verdict | Measurement Channels | Content Predictions |
|---|---|---|---|---|---|---|
| IIT 4.0 | Integration (Φ), network structure | Reports, behavior | Independent (Koch 2019 admits substitution) | Automatically falsified (Thm 3.10) | 2 (structural + reports) | Q-shapes (formal, rarely computed) |
| GNW | Global broadcasting, prefrontal ignition | Reports, access | Strictly dependent (access ≈ report) | Unfalsifiable (Thm 4.3) | 2 (broadcast + reports) | Broadcast content only |
| HOT | Higher-order representations | Reports | Strictly dependent (HOT generates report) | Unfalsifiable (Thm 4.3) | 2 (representational + reports) | Content of HOT |
| Predictive Processing | Prediction error minimization | Reports, behavior | Independent (error ≠ report) | Automatically falsified (Thm 3.10) | 2 (computational + reports) | Predicted content |
| Behaviorism | None (behavior = consciousness) | Behavior | Identical (by definition) | Unfalsifiable (tautological) | 1 (behavior only) | Behavioral description |
| Org. Closure | Entropy production, closure structure, PCI, perturbation response | Differential responsiveness, reports, autonomic markers | Lenient dependent (Landauer + Closure-Mattering + 3-channel + perturbational) | Escapes both theorems | 4 (thermodynamic + closure + perturbational + reports) | Mattering partition via functor M |
12.8 Honest Residual Vulnerabilities
A framework that claims to have no unsolved problems is faith, not science. Three residual vulnerabilities persist after the analysis above. Each is genuine. Each is stated with its severity and the direction of possible resolution.
Vulnerability 1: Non-Empty Residual Substitution Space
The problem: S_residual = S_r ∩ S_θ ∩ S_c ∩ S_p is small but not provably empty. Two genuinely different organizationally closed systems could, in principle, produce identical reports, have compatible thermodynamic profiles, have compatible closure structures, and respond similarly to perturbation — while the framework predicts different experiential contents based on their different fine-grained closure topologies.
Severity: Medium. The residual space is much smaller than the original S_r, and Theorem TD.1 shows that thermodynamic distinguishability eliminates most of the residual. But “most” is not “all.”
Direction of resolution: Formal measurement of |S_residual| / |S_r| for specific systems. If the ratio approaches zero as closure complexity increases (which the overdetermination structure suggests), the vulnerability diminishes with system complexity. This is an open empirical question.
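The shape of that measurement can be sketched numerically. The simulation below is purely illustrative: the per-channel pass rates are invented, and real channels are only partially independent, but it shows how the ratio |S_residual| / |S_r| falls multiplicatively as partially independent channels are stacked:

```python
# Toy sketch (hypothetical pass rates): how adding measurement channels
# shrinks the substitution space relative to report-equivalence alone.
import random

random.seed(0)
N = 100_000  # candidate systems in S_r (report-equivalent by construction)

def passes(p):
    """One channel's compatibility check, modeled as a Bernoulli trial."""
    return random.random() < p

# Assumed (not measured) per-channel pass rates for a report-equivalent
# candidate: thermodynamic, closure-structural, perturbational.
P_THERMO, P_CLOSURE, P_PERTURB = 0.3, 0.2, 0.25

residual = sum(
    passes(P_THERMO) and passes(P_CLOSURE) and passes(P_PERTURB)
    for _ in range(N)
)
ratio = residual / N
print(f"|S_residual| / |S_r| ≈ {ratio:.4f}")  # analytically 0.3*0.2*0.25 = 0.015
```

If the channels were fully independent, each addition would multiply the residual fraction down; correlated channels shrink it more slowly, which is why the empirical question is about the degree of independence, not just the channel count.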
Vulnerability 2: Fine-Grained Content Prediction
The problem: The mattering functor M predicts structural content (which dimensions of environmental variation matter) but not phenomenal content (what maintaining those distinctions “is like” from inside). The framework can predict that a visual system distinguishes wavelengths and that this matters to its closure, but cannot derive the specific character of experiencing redness.
Severity: High for the completeness of the theory. Low for the Kleiner-Hoel dilemma specifically, since the dilemma is about testability and falsifiability, and the framework’s level and structural-content predictions are testable.
Direction of resolution: Two paths are available. First, extending the functor M to include the rate of free energy reduction during distinction-maintenance, which Joffily and Coricelli (2013) showed correlates with valence. This would add an affective dimension to content predictions. Second, developing the category-theoretic structure further using Tsuchiya et al.’s (2016) Yoneda-based formalization, which could characterize experiential content entirely in terms of relational structure without requiring derivation of “what it is like.”
Vulnerability 3: Formal Bounds on Closure Degeneracy
The problem: The functor M is non-injective (multiple closures → same partition). The degree of degeneracy — how many structurally different closures produce the same mattering partition — has not been formally bounded. If the degeneracy is high, the substitution space within S_residual could be larger than the overdetermination analysis suggests.
Severity: Medium. The thermodynamic distinguishability argument (Theorem TD.1) provides informal bounds (measure zero for thermodynamically identical profiles), but a formal proof requires connecting closure topology to thermodynamic phase space, which has not been done.
Direction of resolution: Apply tools from chemical organization theory (Hordijk & Steel 2015) to characterize the space of closures producing a given partition. RAF (Reflexively Autocatalytic and Food-generated) set theory provides decomposition tools that could formally bound the degeneracy. This is technically feasible but not yet attempted.
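What "bounding the degeneracy" would look like can be previewed on a drastically simplified model (everything below is a hypothetical construction, not a claim about real closure topology): enumerate all two-constraint closures over a tiny state space, group them by their mattering partition, and read off the largest preimage:

```python
# Toy sketch (hypothetical construction): probe the degeneracy of M by
# grouping exhaustively enumerated mini-closures by their partition image.
import itertools
from collections import Counter

STATES = ("acid", "heat", "cold")

# All "closures" of two constraints, each threatened by a nonempty subset
# of STATES — a drastic simplification of real closure topology.
subsets = [frozenset(c) for r in (1, 2, 3)
           for c in itertools.combinations(STATES, r)]
closures = list(itertools.product(subsets, subsets))

def partition(closure):
    # The threatening set determines the partition (its complement is fixed).
    return frozenset().union(*closure)

degeneracy = Counter(partition(c) for c in closures)
print("closures per partition:", dict(degeneracy))
print("max degeneracy:", max(degeneracy.values()))
```

Even at this scale the maximal preimage (25 of 49 closures map to the full-threat partition) dwarfs the minimal one, suggesting that degeneracy concentrates on coarse partitions; a formal version of this count over RAF decompositions is what the resolution would require.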
12.9 Self-Application: This Section Diagnosed by Its Own Framework
This section nominalizes “dependency,” “substitution,” “closure,” “mattering,” “constraint,” and “functor.” The verbs beneath: depending, substituting, closing, mattering, constraining, mapping. The nominalizations are necessary for the formal proofs — you cannot do category theory with verbs. The ongoing processes make the proofs true. The gap between what the formal language expresses and what the ongoing processes involve is itself an instance of the content prediction gap (Vulnerability 2).
The section also commits the god trick (Haraway 1988) unavoidably: it claims to view the Kleiner-Hoel dilemma from a position that is not captured by their framework, adding a third channel that their formalism does not include. This is either a genuine advance (extending the formalism to a richer structure) or a smuggling operation (redefining the problem to make it easier). The falsification criterion for this section specifically: if Kleiner-Hoel or others show that the three-channel architecture collapses into their two-channel formalism without loss, the argument from Mechanism 3 fails. If the Landauer constraint is shown not to couple thermodynamic state to report structure (contradicting 60+ years of experimental physics), Mechanism 1 fails. If organizational closure is found without differential mattering (contradicting 50+ years of autonomous systems research), Mechanism 2 fails.
The scissors are trying to cut themselves. What they can report is: the blades are sharper than they were, but the handle is still incomplete.
12.10 Conclusion: What Has Been Achieved
Kleiner and Hoel proved that consciousness science is trapped between automatic falsification and tautological unfalsifiability. They identified lenient dependency as the only non-dualist escape and noted that no existing framework instantiates it.
This section has demonstrated that the organizational closure framework achieves lenient dependency through four independent, mutually reinforcing mechanisms:
- Landauer constraint (physics-grounded, theory-independent): The second law forbids pairing structured reports with thermodynamic equilibrium, physically restricting the substitution space.
- Closure-mattering entailment (biology-grounded, independently testable): Organizational closure thermodynamically entails differential mattering, creating a lawful coupling between internal dynamics and behavioral observables that substitution must preserve.
- Three-channel overdetermination (information-theoretic): Thermodynamic, closure, and report observables provide three partially independent measurement channels, requiring substitutions to simultaneously satisfy constraints from all three.
- Perturbational distinguishability (empirically operational): PCI and perturbation-recovery protocols provide a fourth measurement channel that probes active constraint-maintenance dynamics, further narrowing the substitution space.
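The first mechanism rests on a number that is worth computing explicitly. The sketch below evaluates the Landauer bound at body temperature; the constants are standard (the Boltzmann constant is exact under the 2019 SI redefinition), and only the choice of 310 K as the relevant temperature is an assumption:

```python
# Minimal sketch of the Landauer bound underlying Mechanism 1: erasing
# (or irreversibly maintaining) one bit dissipates at least kT ln 2.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, SI 2019)
T_BODY = 310.0       # assumed operating temperature (~human body), K

e_min = k_B * T_BODY * math.log(2)  # minimum energy per bit, joules
print(f"Landauer bound at 310 K: {e_min:.3e} J per bit")
```

The bound is tiny (on the order of 3 × 10⁻²¹ J per bit) but strictly positive, which is all Mechanism 1 needs: a structured report distinguishing 2²⁰ states entails at least 20 bits' worth of dissipation, so it cannot be paired with thermodynamic equilibrium.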
These mechanisms do not eliminate the substitution space entirely. A residual space of organizationally closed, thermodynamically compatible, report-equivalent, perturbation-compatible systems persists. But this residual space is vastly smaller than the original substitution space, is further narrowed by thermodynamic distinguishability (Theorem TD.1), and remains a domain in which the framework makes falsifiable predictions.
The claim is not immunity from the Kleiner-Hoel dilemma. The claim is: the organizational closure framework is the first to provide a principled, non-ad-hoc, physics-grounded instantiation of lenient dependency, substantially resolving a formal problem that no other framework has even addressed. Whether “substantially” is “sufficiently” is an empirical question that the framework’s own falsification criteria can answer.
What Survives Is What Constrains
The pattern which connects is not information, not mind, not structure, not process. It is the selection principle that explains why any pattern persists at all: constraint satisfaction under thermodynamic bounds.
Differences that persist are differences that make a difference. If they did not constrain what comes next, they would be erased by noise, dissipation, or indifference.
Consciousness is what organizational closure looks like from the coupling position of a self-maintaining system. There is no private theater. There is no inner observer. There is constraint satisfaction all the way down, and all the way up, and at every scale in between.
The Hard Problem of consciousness depends on a hidden premise: that phenomenology must be “more than” any physical or functional description. That premise is unfalsifiable, anti-scientific in structure, and has the same logical form as “supernatural cheese must be more than any physical description of cheese.” The premise never earned epistemic standing. Once rejected, the remaining question is tractable and the abductive identification proposed here becomes the best available explanation.
This framework can lose. Its loss conditions are specified. It has not yet lost.
Every competing framework examined here fails on at least one of three axes: falsifiability, mechanistic completeness, or thermodynamic coherence. Panpsychism cannot specify what would disprove proto-consciousness (unfalsifiable). Platonic morphospace is contradicted by path-dependent bioelectric memory (falsified). Quantum consciousness is ruled out by decoherence timescales (falsified). Process philosophy cannot specify what would show experience is not fundamental (unfalsifiable). The Mathematical Universe cannot distinguish description from existence (unfalsifiable). Basal cognition oscillates between defensible and indefensible claims (motte-and-bailey).
The constraint-based approach survives not because it is protected, but because it has been tested. It survives because it is what survives.
That is not certainty. That is survival under pressure. It is all any honest inquiry can offer, and it is more than unfalsifiable frameworks have ever provided.
Appendix A: Framework Comparison Table
| Framework | Primitive | Falsifiable? | Mechanism Specified? | Thermodynamic Grounding |
|---|---|---|---|---|
| Constraint-based (this paper) | Constraint satisfaction | Yes (5 loss conditions) | Yes (Landauer + closure) | Central |
| Panpsychism | Proto-experience | No | No (combination problem) | None |
| Platonic Morphospace | Pre-existing forms | No | No (interaction problem) | Violates Landauer |
| Basal Cognition | Cognitive primitives | No (motte-and-bailey) | Efficiency ≠ mechanism | None |
| Orch-OR | Quantum coherence | Yes (timescale) | Yes but falsified | Irrelevant (wrong scale) |
| Mathematical Universe | Mathematical existence | No | No | None |
| Process Philosophy | Experience fundamental | No | No | None |
| Global Workspace | Broadcast | Yes | Yes | Implicit (access, not constitution) |
| IIT | Integrated information | Partially (edges unfalsifiable) | Yes (Φ) | Implicit (correlation, not mechanism) |
| Higher-Order Thought | Meta-representation | Partially | Yes | Regress problem unresolved |
| Free Energy Principle | Prediction error minimization | No (mathematical identity) | Yes | Descriptive, not explanatory |
Key: Frameworks that cannot lose (unfalsifiable) eventually collapse under their own explanatory excess. Frameworks that can lose and have lost (falsified) should be abandoned. Frameworks that can lose and have not yet lost are candidates for progressive research programs.
Appendix B: Key Scholars and Their Contributions
Thermodynamics/Information
- Rolf Landauer (1961): Information is physical. Erasure costs kT ln 2.
- Charles Bennett (1973, 1982): Reversible computation, Maxwell’s Demon resolution.
- Bérut et al. (2012): Experimental verification of Landauer bound.
- Jarzynski (1997), Crooks (1999), Seifert (2012): Fluctuation theorems, stochastic thermodynamics.
- Huw Price: Thermodynamic arrow of time, time asymmetry.
- Carlo Rovelli: Time and thermodynamics, relational physics.
Organizational Closure
- Montévil & Mossio (2015): Biological organization as closure of constraints.
- Moreno & Mossio (2015): Full theory of biological autonomy.
- Hordijk, Steel, Kauffman (2004+): RAF theory, autocatalytic closure.
- Robert Rosen: Relational biology, constraint vs causal explanation.
Structural Realism
- Ladyman & Ross (2007): Ontic structural realism. Relations primary.
- French (2003, 2014): Technical foundations of OSR.
- Rovelli (1996): Relational Quantum Mechanics.
Levels and Emergence
- George Ellis: Downward causation as constraint reshaping.
- Terrence Deacon (2011): Incomplete Nature, teleodynamics, absence-based explanation, constraints as eliminative.
- Carl Hoefer, Erik Curiel: Philosophy of physics, constraint explanation.
- Howard Pattee: Constraint vs dynamics distinction; symbols as constraints, not forces.
- Jeremy England (2015–2023): Dissipative adaptation; driven systems evolve toward high-dissipation configurations.
- Erik Hoel, Giulio Tononi, Larissa Albantakis (2013–2024): Causal emergence; higher-level constraints increase predictability.
Falsification and Methodology
- Popper (1959): Falsifiability as demarcation criterion.
- Lakatos: Research programmes, progressive vs degenerating.
- Flew (1950): Death of a thousand qualifications.
- Whitehead: Fallacy of misplaced concreteness.
Consciousness (Empirical)
- Giacino et al. (2002, 2004): CRS-R, minimally conscious state criteria.
- Owen et al. (2006): fMRI detection of covert awareness.
- Dehaene: Global Workspace Theory (neuroscientific version), ignition dynamics, P3b correlates.
- Baars (1988): Original Global Workspace Theory, cognitive architecture.
- Friston: Free energy principle, active inference, predictive processing.
- Lau & Rosenthal (2011): Empirical higher-order thought theory, prefrontal role in awareness.
- Rosenthal: Higher-Order Thought theory, consciousness as meta-representation.
- Varela, Thompson, Rosch: Enactivism, embodied cognition.
- Tononi & Massimini: Perturbational Complexity Index, anesthesia studies, IIT.
- Mashour: Anesthetic mechanisms and consciousness.
- Libet (1983, 1985): Timing of conscious will; readiness potential precedes awareness; veto window.
Consciousness (Philosophical)
- Chalmers: Hard Problem formulation, conceivability arguments, zombie intuition.
- Dennett: Competence without comprehension, quining qualia, zombie critique.
- Frankish: Illusionism, appearance of qualia.
- Jackson: Knowledge argument, Mary’s Room, epiphenomenal qualia.
- Levine: Explanatory gap formulation.
- Nagel: “What is it like to be” formulation, bat argument.
- Nagarjuna: Dissolution of self, anatta analysis.
Bioelectricity
- Durant et al. (2017, 2019): Path-dependent bioelectric memory in planarians.
- Levin (empirical): Valid bioelectric manipulation research. (Metaphysical interpretation criticized.)
Interface Theory / Idealist Approaches (Critique)
- Hoffman (2015): Interface Theory of Perception, Fitness Beats Truth theorem. (Metaphor useful; ontology unfalsifiable.)
- Prakash: Mathematical formalization of conscious agents as Markov kernels.
- Bagwell (2023): Self-defeat critique—if perception is non-veridical, we cannot trust evidence for the theory.
- Martínez (2019): Counterexamples showing multiple information sources favor truth-tracking even under fitness selection.
Pattern/Process
- Bateson (1979): The question. “A difference which makes a difference.”
- Sagan: Dragon in the garage, unfalsifiability parable.
- Epicurus: Death argument, null perspective.
Indigenous Epistemology
- Yunkaporta (2020): Sand Talk, process over artifact, relational knowledge.
- Kelly (2017): The Memory Code, songline superiority.
- Kimmerer: Braiding Sweetgrass, relational ontology, reciprocity.
- Reser et al. (2021): Experimental verification of Aboriginal memory technique superiority (~3-fold greater probability of complete recall; OR = 2.82).
Appendix C: The Bridge Rule (Formal Statement)
The following is the explicit abductive bridge rule that links physical organization to phenomenal presence. It is stated openly rather than smuggled in, and it has explicit loss conditions.
Statement
Bridge Rule (IBE):
If a physical system S exhibits:
- Integrated causal dynamics (perturbation at one point propagates in complex, differentiated patterns)
- Global availability for flexible control (information is accessible to multiple subsystems for planning, report, and learning)
- Stable self-modeling with metacognitive access (system maintains representation of its own states available to higher-order processes)
- Graded, lawful degradation under known consciousness-disrupting interventions (anesthesia, sleep, lesions, disorders)
Then identifying that organizational regime with phenomenal presence yields greater explanatory compression, predictive accuracy, and intervention guidance than any rival hypothesis that:
- Denies phenomenology (eliminativism)
- Reifies phenomenology as a separate substance (dualism)
- Takes phenomenology as primitive without mechanism (panpsychism, process primitivism)
- Posits phenomenology as “more than” any physical/functional description (the hidden premise)
Status
This is an abductive identification, not a deductive proof. It is justified by explanatory power, not logical entailment. It is defeasible: evidence could overturn it.
Loss Conditions
The bridge rule would be undermined if:
- Full phenomenology reports persist reliably without the signature set (integration, global availability, self-modeling, graded degradation)
- The signature set is fully present but phenomenology is reliably absent (not just unreported, but absent)
- A phenomenal residue variable is instrumentally isolated that varies independently of all organizational variables while having downstream causal effects
- A competing framework provides greater explanatory compression, predictive accuracy, and intervention guidance
What This Achieves
The bridge rule makes the abductive step explicit. It does not claim to have “solved” the Hard Problem by deduction. It claims to have:
- Diagnosed the Hard Problem as depending on an unfalsifiable premise
- Replaced that premise with a falsifiable alternative
- Specified loss conditions
- Earned explanatory standing through predictive and intervention power
This is what IBE looks like when done honestly in consciousness science.
Phenomenal Predictions Summary
The following table consolidates the framework’s key predictions, mapping consciousness phenomena to organizational mechanisms and falsification conditions:
| Phenomenon | Constraint Mechanism | Falsification Condition |
|---|---|---|
| Anesthesia abolishes consciousness | PCI collapse reflects disrupted integrative capacity; constraint closure temporarily broken | Conscious reportability without PCI elevation |
| Sleep reduces consciousness | Organizational closure persists with attenuated sensory coupling; self-modeling continues with reduced environmental constraint | Full waking-level phenomenology reports across degraded perturbational complexity |
| Death ends consciousness | Irreversible breakdown of metabolic, neural, and regulatory coupling; no mechanism remains to regenerate closure | Evidence of experiential remainder after complete closure breakdown |
| Locked-in syndrome preserves consciousness | Internal constraint closure intact; only motor output coupling severed | High external responsiveness with no internal integrative signature |
| Privacy of experience | Self-model is internal control variable with privileged access; not externally broadcast | System with full report capacity but public access to intermediate state updates |
| Immediacy of experience | Low-latency availability required for real-time control; self-model doesn’t infer its own presence | Privacy/immediacy reports intact after disrupting privileged integration locus |
| Coherence of experience | Cross-modal unification via global availability; constraint closure prunes incoherent combinations | Stable unified phenomenology during measurable breakdown of cross-modal synchrony |
| “Something it is like” posit | Globally available integrated control state serves as summary variable for viability tracking | Absence of phenomenality posit despite all integrative markers present |
Cross-Scale Predictions
The framework is not consciousness-specific. Constraint closure is a general organizational principle that applies across scales. The following predictions test whether the framework generalizes:
| Scale/Domain | Prediction | Falsification Condition |
|---|---|---|
| Developmental morphology | Organisms store body plans as distributed constraint networks (bioelectric, biomechanical). Altering these constraints reproducibly redirects growth to coherent alternative forms (as in Levin’s planarian and Xenopus work). | Perturbations to patterning signals produce only random deformities or non-viable tissue, never coherent alternative anatomies |
| Collective cognition | When individuals are richly connected by feedback constraints, higher-level “intelligence” emerges. Ant colonies, bee swarms, and human groups with strong coordination outperform isolated individuals on complex tasks. | Integrated groups never outperform individuals; coordination always degrades problem-solving |
| Convergent evolution | Independent lineages under similar constraints evolve analogous cognitive features. If consciousness arises from constraint satisfaction rather than special essence, similar constraints should produce similar solutions. | Hard phylogenetic cutoff where complex cognition appears without precursors; no convergent cognitive features in distant taxa facing similar constraints |
| Substrate independence | Any system achieving recursive self-modeling and integrative constraint closure will exhibit empirical signs of consciousness (novel self-reports, unpredictable self-initiated goals). | A system with human-level complexity in self-monitoring and integration shows no evidence of phenomenology and only recombines training inputs |
Cross-Scale Constraint Propagation
The framework predicts that interventions at one scale will propagate through organizational closure to other scales in lawful, predictable ways. This is a strong test of the closure hypothesis.
Molecular → Behavioral → Phenomenal: Altering ion channel kinetics, receptor densities, or neurotransmitter levels will systematically alter global brain dynamics and conscious states. This is confirmed extensively by pharmacology: anesthetics abolish consciousness by disrupting integrative dynamics; psychedelics alter phenomenal content by modifying serotonergic constraint satisfaction; anxiolytics and antidepressants shift affective constraints by changing molecular parameters. The lawfulness of these mappings (specific molecular targets → specific phenomenal changes) supports the claim that closure propagates constraints across scales.
Behavioral → Molecular: Sustained cognitive training, psychological intervention, or traumatic experience will produce measurable changes in gene expression, synaptic density, and connectivity patterns. This is confirmed by neuroplasticity research: meditation produces measurable changes in cortical thickness and connectivity; skill acquisition reshapes motor cortex; trauma alters stress-response circuitry and epigenetic markers. The existence of these top-down effects (behavioral → molecular) demonstrates that constraint propagation is bidirectional.
Falsification condition for cross-scale propagation: If molecular interventions produced only local effects with no systematic propagation to behavior or experience (e.g., altering neurotransmitter levels with no behavioral or experiential consequence), or if psychological interventions produced no measurable neural correlates (e.g., years of meditation with zero change in any brain measure), the closure assumption would fail. So far, no such dissociations have been found; instead, every level of intervention produces effects that propagate through the system.
Appendix D: Full Scholarly References
Thermodynamics and Information Theory
Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183–191. https://doi.org/10.1147/rd.53.0183
Bennett, C. H. (1982). The thermodynamics of computation—a review. International Journal of Theoretical Physics, 21(12), 905–940. https://doi.org/10.1007/BF02084158
Bérut, A., Arakelyan, A., Petrosyan, A., Ciliberto, S., Dillenschneider, R., & Lutz, E. (2012). Experimental verification of Landauer’s principle linking information and thermodynamics. Nature, 483(7388), 187–189. https://doi.org/10.1038/nature10872
Jarzynski, C. (1997). Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14), 2690. https://doi.org/10.1103/PhysRevLett.78.2690
Crooks, G. E. (1999). Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E, 60(3), 2721. https://doi.org/10.1103/PhysRevE.60.2721
Seifert, U. (2012). Stochastic thermodynamics, fluctuation theorems and molecular machines. Reports on Progress in Physics, 75(12), 126001. https://doi.org/10.1088/0034-4885/75/12/126001
Price, H. (1996). Time’s Arrow and Archimedes’ Point: New Directions for the Physics of Time. Oxford University Press.
Rovelli, C. (2018). The Order of Time. Riverhead Books.
Organizational Closure and Biological Autonomy
Montévil, M., & Mossio, M. (2015). Biological organisation as closure of constraints. Journal of Theoretical Biology, 372, 179–191. https://doi.org/10.1016/j.jtbi.2015.02.029
Moreno, A., & Mossio, M. (2015). Biological Autonomy: A Philosophical and Theoretical Enquiry. Springer.
Hordijk, W., Steel, M., & Kauffman, S. (2012). The structure of autocatalytic sets: Evolvability, enablement, and emergence. Acta Biotheoretica, 60(4), 379–392. https://doi.org/10.1007/s10441-012-9165-1
Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.
Rosen, R. (1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press.
Constraint-Based Emergence and Teleodynamics
Deacon, T. W. (2011). Incomplete Nature: How Mind Emerged from Matter. W. W. Norton & Company.
Deacon, T. W. (2021). How molecules became signs. Biosemiotics, 14, 537–559. https://doi.org/10.1007/s12304-021-09453-9
Deacon, T. W. (2023). Minimal properties of a natural semiotic system: Response to commentaries on “How Molecules Became Signs.” Biosemiotics, 16, 1–13. https://doi.org/10.1007/s12304-023-09527-w
Sherman, J. (2017). Neither Ghost nor Machine: The Emergence and Nature of Selves. Columbia University Press.
Sherman, J. (2024). Biosemiotics’ greatest potential contribution to biology. Chinese Semiotic Studies, 20(2), 231–253. https://doi.org/10.1515/css-2024-2013
Pattee, H. H. (2001). The physics of symbols: Bridging the epistemic cut. Biosystems, 60(1-3), 5–21. https://doi.org/10.1016/S0303-2647(01)00104-6
England, J. L. (2015). Dissipative adaptation in driven self-assembly. Nature Nanotechnology, 10(11), 919–923. https://doi.org/10.1038/nnano.2015.250
Hoel, E. P., Albantakis, L., & Tononi, G. (2013). Quantifying causal emergence shows that macro can beat micro. Proceedings of the National Academy of Sciences, 110(49), 19790–19795. https://doi.org/10.1073/pnas.1314922110
Ellis, G. F. R. (2012). Top-down causation and emergence: Some comments on mechanisms. Interface Focus, 2(1), 126–140. https://doi.org/10.1098/rsfs.2011.0062
Active Inference and Markov Blankets
Parr, T., Pezzulo, G., & Friston, K. J. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press.
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: Autonomy, active inference and the free energy principle. Journal of The Royal Society Interface, 15(138), 20170792. https://doi.org/10.1098/rsif.2017.0792
Hipólito, I., Ramstead, M., Convertino, L., Bhat, A., Friston, K., & Parr, T. (2021). Markov blankets in the brain. Neuroscience & Biobehavioral Reviews, 125, 88–97. https://doi.org/10.1016/j.neubiorev.2021.02.003
Ramstead, M. J. D., Kirchhoff, M. D., & Friston, K. J. (2020). A tale of two densities: Active inference is enactive inference. Adaptive Behavior, 28(4), 225–239. https://doi.org/10.1177/1059712319862774
Pezzulo, G., Parr, T., & Friston, K. (2024). Active inference as a theory of sentient behavior. Biological Psychology, 186, 108741. https://doi.org/10.1016/j.biopsycho.2023.108741
Structural Realism
Ladyman, J., Ross, D., Spurrett, D., & Collier, J. (2007). Every Thing Must Go: Metaphysics Naturalized. Oxford University Press.
French, S. (2014). The Structure of the World: Metaphysics and Representation. Oxford University Press.
Rovelli, C. (1996). Relational quantum mechanics. International Journal of Theoretical Physics, 35(8), 1637–1678. https://doi.org/10.1007/BF02302261
Consciousness (Empirical)
Dehaene, S., & Changeux, J. P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70(2), 200–227. https://doi.org/10.1016/j.neuron.2011.03.018
Dehaene-Lambertz, G., Dehaene, S., & Hertz-Pannier, L. (2002). Functional neuroimaging of speech perception in infants. Science, 298(5600), 2013–2015. https://doi.org/10.1126/science.1077066
Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B, 370(1668), 20140167. https://doi.org/10.1098/rstb.2014.0167
Massimini, M., Ferrarelli, F., Huber, R., Esser, S. K., Singh, H., & Tononi, G. (2005). Breakdown of cortical effective connectivity during sleep. Science, 309(5744), 2228–2232. https://doi.org/10.1126/science.1117256
Casali, A. G., Gosseries, O., Rosanova, M., et al. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105. https://doi.org/10.1126/scitranslmed.3006294
Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776–798. https://doi.org/10.1016/j.neuron.2020.01.026
Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. (2006). Detecting awareness in the vegetative state. Science, 313(5792), 1402. https://doi.org/10.1126/science.1130197
Giacino, J. T., Kalmar, K., & Whyte, J. (2004). The JFK Coma Recovery Scale-Revised: Measurement characteristics and diagnostic utility. Archives of Physical Medicine and Rehabilitation, 85(12), 2020–2029. https://doi.org/10.1016/j.apmr.2004.02.033
Andrews, K., Murphy, L., Munday, R., & Littlewood, C. (1996). Misdiagnosis of the vegetative state: Retrospective study in a rehabilitation unit. BMJ, 313(7048), 13–16. https://doi.org/10.1136/bmj.313.7048.13
Schnakers, C., Vanhaudenhuyse, A., Giacino, J., et al. (2009). Diagnostic accuracy of the vegetative and minimally conscious state: Clinical consensus versus standardized neurobehavioral assessment. BMC Neurology, 9, 35. https://doi.org/10.1186/1471-2377-9-35
Sebel, P. S., Bowdle, T. A., Ghoneim, M. M., et al. (2004). The incidence of awareness during anesthesia: A multicenter United States study. Anesthesia & Analgesia, 99(3), 833–839. https://doi.org/10.1213/01.ANE.0000130261.90896.6C
Alkire, M. T., Hudetz, A. G., & Tononi, G. (2008). Consciousness and anesthesia. Science, 322(5903), 876–880. https://doi.org/10.1126/science.1149213
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787
Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.
Dehaene, S., Naccache, L., Cohen, L., Le Bihan, D., Mangin, J. F., Poline, J. B., & Rivière, D. (2001). Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience, 4(7), 752–758. https://doi.org/10.1038/89551
Rounis, E., Maniscalco, B., Rothwell, J. C., Passingham, R. E., & Lau, H. (2010). Theta-burst transcranial magnetic stimulation to the prefrontal cortex impairs metacognitive visual awareness. Cognitive Neuroscience, 1(3), 165–175. https://doi.org/10.1080/17588921003632529
Lau, H., & Rosenthal, D. (2011). Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences, 15(8), 365–373. https://doi.org/10.1016/j.tics.2011.05.009
Block, N. (2011). Perceptual consciousness overflows cognitive access. Trends in Cognitive Sciences, 15(12), 567–575. https://doi.org/10.1016/j.tics.2011.11.001
Rosenthal, D. M. (2005). Consciousness and Mind. Oxford University Press.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106(3), 623–642. https://doi.org/10.1093/brain/106.3.623
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8(4), 529–539. https://doi.org/10.1017/S0140525X00044903
Consciousness (Philosophical)
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Chalmers, D. J. (2016). The combination problem for panpsychism. In G. Brüntrup & L. Jaskolla (Eds.), Panpsychism: Contemporary Perspectives (pp. 179–214). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199359943.003.0002
Coleman, S. (2014). The real combination problem: Panpsychism, micro-subjects, and emergence. Erkenntnis, 79(1), 19–44. https://doi.org/10.1007/s10670-013-9431-x
Goff, P. (2017). Consciousness and Fundamental Reality. Oxford University Press.
Strawson, G. (2006). Realistic monism: Why physicalism entails panpsychism. Journal of Consciousness Studies, 13(10-11), 3–31.
Dennett, D. C. (1988). Quining qualia. In A. J. Marcel & E. Bisiach (Eds.), Consciousness in Contemporary Science (pp. 42–77). Oxford University Press.
Dennett, D. C. (1991). Consciousness Explained. Little, Brown and Company.
Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23(11-12), 11–39.
Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly, 32(127), 127–136. https://doi.org/10.2307/2960077
Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64(4), 354–361. https://doi.org/10.1111/j.1468-0114.1983.tb00207.x
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435–450. https://doi.org/10.2307/2183914
Falsification and Methodology
Popper, K. R. (1959). The Logic of Scientific Discovery. Hutchinson.
Lakatos, I. (1978). The Methodology of Scientific Research Programmes. Cambridge University Press.
Flew, A. (1950). Theology and falsification. University, 1, 1–8.
Whitehead, A. N. (1925). Science and the Modern World. Macmillan.
Indigenous Epistemology
Yunkaporta, T. (2020). Sand Talk: How Indigenous Thinking Can Save the World. HarperOne. (Originally published 2019 by Text Publishing, Australia.)
Kelly, L. (2017). The Memory Code: The Secrets of Stonehenge, Easter Island and Other Ancient Monuments. Pegasus Books. (Originally published 2016 by Allen & Unwin, Australia.)
Kimmerer, R. W. (2013). Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge and the Teachings of Plants. Milkweed Editions.
Reser, D., Simmons, M., Johns, E., Ghaly, A., Quayle, M., Dordevic, A. L., Tare, M., McArdle, A., Willems, J., & Yunkaporta, T. (2021). Australian Aboriginal techniques for memorization: Translation into a medical and allied health education setting. PLOS ONE, 16(5), e0251710. https://doi.org/10.1371/journal.pone.0251710
Native American Philosophy
Deloria, V., Jr. (2003). God Is Red: A Native View of Religion (30th Anniversary ed.). Fulcrum Publishing.
Deloria, V., Jr., & Wildcat, D. R. (2001). Power and Place: Indian Education in America. Fulcrum Publishing.
Cordova, V. F. (2007). How It Is: The Native American Philosophy of V. F. Cordova (K. D. Moore, K. Peters, T. Jojola, & A. Lacy, Eds.). University of Arizona Press.
Norton-Smith, T. M. (2010). The Dance of Person and Place: One Interpretation of American Indian Philosophy. SUNY Press.
Waters, A. (Ed.). (2004). American Indian Thought: Philosophical Essays. Blackwell Publishing.
Wilson, S. (2008). Research Is Ceremony: Indigenous Research Methods. Fernwood Publishing.
Wildcat, M., & Voth, D. (2023). Indigenous relationality: Definitions and methods. AlterNative: An International Journal of Indigenous Peoples, 19(2), 432–440. https://doi.org/10.1177/11771801231168380
Pratt, S. L. (2002). Native pragmatism: Rethinking the roots of American philosophy. Indiana University Press.
Bioelectricity
Durant, F., Morokuma, J., Fields, C., Williams, K., Adams, D. S., & Levin, M. (2017). Long-term, stochastic editing of regenerative anatomy via targeting endogenous bioelectric gradients. Biophysical Journal, 112(10), 2231–2243. https://doi.org/10.1016/j.bpj.2017.04.011
Durant, F., Bischof, J., Fields, C., Morokuma, J., LaPalme, J., Hoi, A., & Levin, M. (2019). The role of early bioelectric signals in the regeneration of planarian anterior/posterior polarity. Biophysical Journal, 116(5), 948–961. https://doi.org/10.1016/j.bpj.2019.01.029
Interface Theory of Perception (Critique)
Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic Bulletin & Review, 22(6), 1480–1506. https://doi.org/10.3758/s13423-015-0890-8
Hoffman, D. D., & Prakash, C. (2014). Objects of consciousness. Frontiers in Psychology, 5, 577. https://doi.org/10.3389/fpsyg.2014.00577
Bagwell, J. N. (2023). Debunking interface theory: Why Hoffman’s skepticism (really) is self-defeating. Synthese, 201(25). https://doi.org/10.1007/s11229-022-04021-1
Martínez, M. (2019). Usefulness drives representations to truth: A family of counterexamples to Hoffman’s interface theory of perception. Grazer Philosophische Studien, 96(3), 319–341. https://doi.org/10.1163/18756735-09603004
Integrated Information Theory (Critique)
Cerullo, M. A. (2015). The problem with phi: A critique of integrated information theory. PLOS Computational Biology, 11(9), e1004286. https://doi.org/10.1371/journal.pcbi.1004286
Mørch, H. H. (2019). Is consciousness intrinsic? A problem for the integrated information theory. Journal of Consciousness Studies, 26(1-2), 133–162.
Doerig, A., Schurger, A., Hess, K., & Herzog, M. H. (2019). The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness. Consciousness and Cognition, 72, 49–59. https://doi.org/10.1016/j.concog.2019.04.002
Free Energy Principle (Critique)
Bruineberg, J., Kiverstein, J., & Rietveld, E. (2018). The anticipating brain is not a scientist: The free-energy principle from an ecological-enactive perspective. Synthese, 195(6), 2417–2444. https://doi.org/10.1007/s11229-016-1239-1
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: Autonomy, active inference and the free energy principle. Journal of The Royal Society Interface, 15(138), 20170792. https://doi.org/10.1098/rsif.2017.0792
Bruineberg, J., Dolega, K., Dewhurst, J., & Baltieri, M. (2022). The emperor’s new Markov blankets. Behavioral and Brain Sciences, 45, e183. https://doi.org/10.1017/S0140525X21002351
Friston, K. (2022). Maps and territories, smoke, and mirrors: A reply to commentary on “The emperor’s new Markov blankets.” Behavioral and Brain Sciences, 45, e183. https://doi.org/10.1017/S0140525X22000632
Quantum Consciousness (Critique)
Tegmark, M. (2000). Importance of quantum decoherence in brain processes. Physical Review E, 61(4), 4194. https://doi.org/10.1103/PhysRevE.61.4194
Pattern and Process
Bateson, G. (1979). Mind and Nature: A Necessary Unity. Dutton.
Sagan, C. (1995). The Demon-Haunted World: Science as a Candle in the Dark. Random House.
Eastern Philosophy
Garfield, J. L. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā. Oxford University Press.
Olivelle, P. (Trans.). (1996). Upaniṣads. Oxford University Press.
Additional References
Kleiner, J., & Hoel, E. (2021). Falsification and consciousness. Neuroscience of Consciousness, 2021(1), niab001. https://doi.org/10.1093/nc/niab001
Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366. https://doi.org/10.1016/0893-6080(89)90020-8
Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265. https://doi.org/10.1112/plms/s2-42.1.230
Koch, C. (2019). The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed. MIT Press.
Albantakis, L., & Tononi, G. (2019). Causal composition: Structural differences among dynamically equivalent systems. Entropy, 21(10), 989. https://doi.org/10.3390/e21100989
Hutter, M. (2004). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer. https://doi.org/10.1007/b138233
Wolfram, S. (1984). Cellular automata as models of complexity. Nature, 311, 419–424. https://doi.org/10.1038/311419a0
Tsuchiya, N., Andrillon, T., & Haun, A. (2019). A reply to “the unfolding argument”: Beyond functionalism/behaviorism and towards a truer science of causal structural theories of consciousness. PsyArXiv. https://doi.org/10.31234/osf.io/8yqvm
Casarotto, S., et al. (2024). Perturbational complexity index confirms diagnostic threshold across disorders of consciousness. Annals of Neurology, 96(5), 976–988. https://doi.org/10.1002/ana.27053
Tsuchiya, N., et al. (2021). Enriched category theory as a framework for consciousness science. Neuroscience Research, 175, 63–71. https://doi.org/10.1016/j.neures.2021.12.001
Appendix E: Assumptions Audit
Every framework in consciousness studies rests on assumptions, many implicit. Making these explicit reveals which are empirical (testable), which are methodological (useful but not claims about the world), and which are metaphysical commitments that cannot be tested. A framework’s hidden assumptions are often its most vulnerable points.
E.1 Ontological Assumptions
| Framework | Core Assumption | Type | Falsifiable? | Hidden Commitment |
|---|---|---|---|---|
| Constraint-based (this paper) | Consciousness = organizational closure with self-modeling | Structural identification | Yes (5 loss conditions) | Assumes closure is sufficient, not merely necessary |
| IIT | Consciousness = integrated information (Φ) | Identity claim | Partially | Assumes intrinsic existence from causal structure; panpsychist implications at edges |
| GWT | Consciousness = global broadcast/availability | Functional claim | Yes | Equates access with phenomenology; may miss unreportable experience |
| Panpsychism | Proto-experience pervades matter | Metaphysical primitive | No | Assumes combination is possible without mechanism |
| HOT | Consciousness requires meta-representation | Architectural claim | Partially | Assumes unconscious HOT can confer consciousness on target; regress problem |
| FEP | Systems minimize free energy | Mathematical identity | No | Too broad to fail; any behavior redescribable as free energy minimization |
| Orch-OR | Consciousness requires quantum coherence | Physical claim | Yes (timescale) | Assumes coherence survives decoherence; conflicts with thermal physics |
| Process Philosophy | Experience is fundamental | Metaphysical primitive | No | Cannot specify what would show experience isn’t fundamental |
| Interface Theory (Hoffman) | Consciousness is fundamental; spacetime is an interface | Idealist ontology | No | Assumes evolutionary selection presupposes consciousness; parasitic on realist assumptions it claims to replace |
E.2 Epistemological Assumptions
| Framework | How We Know Consciousness | Hidden Commitment |
|---|---|---|
| Constraint-based | Measurable signatures: integration, perturbational complexity, graded degradation | Assumes organizational features are accessible to third-person measurement |
| IIT | Compute Φ from causal structure | Assumes Φ calculation is tractable; in practice relies on proxies |
| GWT | Reportability and behavioral indicators | Assumes report tracks phenomenology; may miss “overflow” |
| HOT | Presence of higher-order representation | Assumes metacognitive capacity indicates consciousness; problematic for infants/animals |
| Panpsychism | Inference from own case to all matter | Assumes analogy is valid despite no shared features with electrons |
| FEP | Model parameters and prediction errors | Assumes Bayesian quantities map onto subjective quantities |
| Interface Theory (Hoffman) | Mathematical derivation from evolutionary games | Assumes mathematical structure directly generates phenomenology; confuses description with causation |
E.3 Methodological Assumptions
| Framework | Research Strategy | Hidden Commitment |
|---|---|---|
| Constraint-based | Identify falsification conditions; test organizational markers | Assumes falsification-first methodology is appropriate for consciousness |
| IIT | Axiomatic derivation from phenomenology | Assumes phenomenological axioms are self-evident and complete |
| GWT | Contrastive analysis (conscious vs. unconscious processing) | Assumes the contrast is exhaustive; no third category |
| FEP | Computational modeling under variational principles | Assumes brain literally implements Bayesian inference |
| Panpsychism | Conceptual analysis and inference to best explanation | No experimental methodology; relies on perceived elegance |
| Interface Theory (Hoffman) | Evolutionary game theory and mathematical modeling | No falsification conditions; any observation reinterpretable as “interface” |
E.4 The Assumption Count
Frameworks can be compared by how many assumptions they require and how testable those assumptions are:
| Framework | Ontological Assumptions | Testable | Metaphysical Load |
|---|---|---|---|
| Constraint-based | 3 (closure, self-modeling, thermodynamic cost) | All 3 | Low |
| GWT | 2 (global broadcast, capacity limits) | Both | Low |
| IIT | 5 (axioms) + identity claim | Partially | Moderate |
| HOT | 2 (HOT necessity, can be unconscious) | Partially | Moderate |
| FEP | 1 (but unfalsifiable) | No | Low but empty |
| Panpsychism | 2 (ubiquity, combination possible) | Neither | High |
| Orch-OR | 3 (coherence, objective reduction, microtubules) | All 3 but 2 falsified | High |
| Process Philosophy | 1 (experience fundamental) | No | High |
The constraint-based framework makes explicit, falsifiable assumptions with minimal metaphysical overhead. Competing frameworks either make unfalsifiable assumptions (panpsychism, process philosophy, FEP), make assumptions that have been falsified (Orch-OR), or make assumptions that are partially testable but carry hidden commitments (IIT, HOT).
E.5 What This Framework Assumes
For transparency, here are the explicit assumptions of the constraint-based framework:
- Thermodynamic realism: Physical systems obey Landauer’s principle; information processing has energy costs.
- Organizational closure exists: Some physical systems exhibit closure of constraints (Montévil-Mossio).
- Self-modeling is possible: Some closed systems include variables tracking their own state.
- Constitution claim: Systems exhibiting closure with self-modeling constitute the class appropriately described as conscious.
- Falsifiability commitment: All claims must specify loss conditions.
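The thermodynamic-realism assumption is quantitative: by Landauer's principle, erasing one bit dissipates at least k·T·ln 2 of energy. A minimal check of the numbers at body temperature (the function name and temperature choice are illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_bound(temperature_k: float, bits: float = 1.0) -> float:
    """Minimum energy (joules) dissipated to erase `bits` of information
    at the given temperature, per Landauer's principle: E >= k T ln 2 per bit."""
    return bits * K_B * temperature_k * math.log(2)

# At ~310 K (body temperature), erasing one bit costs about 3e-21 J.
e_bit = landauer_bound(310.0)
print(f"{e_bit:.3e} J per bit")  # ≈ 2.97e-21 J
```

Any isolated system maintaining distinctions below this bound would trigger the Landauer Violation loss condition.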
What this framework does NOT assume:
- That consciousness is a fundamental property of matter
- That any particular substrate is required
- That consciousness is identical to any specific quantity (like Φ)
- That consciousness requires quantum processes
- That consciousness is an illusion or merely functional
- That the Hard Problem has a solution in the traditional sense
The framework makes a structural identification (consciousness = organizational closure with self-modeling) rather than an identity with a privileged quantity (consciousness = Φ) or a primitivist claim (consciousness is fundamental). This identification is testable: find systems with closure and self-modeling that lack consciousness signatures, or find consciousness signatures without closure, and the framework fails.
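The testability claim can be made procedural. The sketch below is a toy, with hypothetical names throughout: it approximates a "closed dependency network" of constraints as a dependency graph forming a single strongly connected component (every constraint sustains, and is sustained by, the others rather than hanging off an externally supported chain), then flags the two failure modes just described.

```python
def reachable(graph: dict, start) -> set:
    """All constraints reachable from `start` by following dependency edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

def has_closure(graph: dict) -> bool:
    """Closed dependency network: every constraint reaches every other,
    i.e. the whole graph is one strongly connected component."""
    nodes = set(graph)
    return all(reachable(graph, n) >= nodes for n in nodes)

def framework_falsified(graph: dict, self_model: bool, signatures: bool) -> bool:
    """The two counterexample classes: closure + self-modeling without
    consciousness signatures, or signatures without closure."""
    closed = has_closure(graph)
    if closed and self_model and not signatures:
        return True   # structure present, signatures absent
    if signatures and not closed:
        return True   # signatures present, structure absent
    return False

# Toy metabolic-style loop: each constraint depends on the next around a cycle.
loop = {"membrane": {"metabolism"}, "metabolism": {"repair"}, "repair": {"membrane"}}
# Externally supported chain: "c" depends on nothing internal, so no closure.
chain = {"a": {"b"}, "b": {"c"}, "c": set()}
```

Real tests would of course rest on operationalized measurements (perturbational complexity, graded degradation) rather than boolean flags; the sketch only shows that the loss conditions compose into a decision procedure.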
The framework this essay describes is not complete. It is not final. It is falsifiable. If it survives, that will be evidence that it tracks something real. If it fails, that will be progress. Either way, we will have learned something, which is more than unfalsifiable frameworks can offer.
The claim that phenomenology must be “more than” any physical or functional description is not just unfalsifiable. It is anti-scientific in structure, indistinguishable from other immunized metaphysical claims once stripped of anthropocentric sentiment. This framework does not need to defeat that claim on its own terms. It shows that the claim never earned standing in the first place.
Survival under falsification pressure is evidence of tracking structure, not proof of final truth. The framework does not claim certainty. It claims testability. That is all any honest inquiry can offer.