Important Article Disclaimer: This article isn’t a refutation of Matt Segall. I have genuine respect for his scholarship and his willingness to do the difficult bridging work between process philosophy and empirical biology. What follows is offered in the spirit of collaborative falsification. I’m trying to stress-test both his framework and my own, with the goal of evolving toward something more robust, not claiming I’ve got it right. I welcome correction. If I’ve misread Segall, Levin, or any of the other scholars mentioned below, missed crucial Whiteheadian distinctions, or committed the very errors I’m flagging in his stated positions, I want to know. The principle of charity applies in both directions: I’ve tried to engage his strongest arguments, and I expect the same courtesy extended to whatever survives scrutiny here. The goal is convergence on better constraints, not winning.
This article attempts to do three things. First, it agrees with what Segall gets right about evidence and theory-ladenness, because that part matters. Second, it isolates where the argument upgrades into an evidential exemption and why that upgrade fails on standard scientific and philosophical criteria. Third, it demonstrates a concrete methodology, Recursive Constraint Falsification, that keeps the insight about interpretation while still forcing world-facing claims to state what would count against them.
What Segall Gets Right About Evidence, and Where the Upgrade Happens
Segall starts from a point that deserves agreement, not reflexive dismissal. Evidence in science rarely equals “a camera caught it.” Scientific practice lives inside inference, modeling choices, calibration decisions, and theory-laden measurement. If someone tells you science runs on raw, uninterpreted “sense data,” ask them to show you a single instrument readout that does not presuppose a model of the instrument, the signal, and the noise. That point stands.
The problem begins when a correct observation gets quietly upgraded into a stronger conclusion. The upgrade usually looks like this: because first-person awareness does not appear as a photographed object, it supposedly sits outside empirical constraint, experiment, and mechanistic explanation. A reasonable critique of naive empiricism thereby becomes something far stronger: a stance that treats certain consciousness claims as insulated from the ordinary discipline of constraint and revision. That upgrade does not need to be framed as dogma to function as a shield; it’s implicit in the design.
That move smuggles in an unexamined premise: empirical means third-person detection of a separable object. But many successful sciences do not work that way. They infer unobservables from disciplined constraint patterns and intervention-sensitive changes. The relevant question becomes: does consciousness exhibit stable, cross-checked dependencies on manipulable variables, and do those dependencies discriminate between models?
Segall’s headline-style claim, “There’s no scientific evidence that consciousness exists,” works rhetorically because it sounds like a sober conclusion while hiding a moving standard. What does “scientific evidence” mean here? If it means “direct object on a detector,” then the statement becomes a definitional maneuver, not a discovery. If it means “public, intersubjectively checkable constraints that reliably favor some models over others,” then the claim starts to look false on its face, because consciousness enters the evidence stream through report, behavior, physiology, and intervention.
It helps to separate two things that often get conflated. There is (1) evidence that conscious reports and discriminations occur, which is trivial and abundant, and (2) evidence for any particular metaphysical theory of consciousness, which is nontrivial and model-dependent. Segall’s headline reads as if it denies (1), but the actual argument aims at (2) while quietly swapping the evidential standard midstream. That swap is exactly where the “no evidence” rhetoric gets its force.
This is where the first Socratic fork matters. When you say consciousness, do you mean a first-person datum (what it is like, reportable experience), or a hypothesized feature of the world (a property, field, or ubiquitous interiority)? If it names a datum, then the scientific problem becomes modeling how reports, discriminations, attention, and behavior covary with physiology, and how interventions shift those covariations. If it names a world feature, then it needs world-facing constraints, or it drifts into private language plus rhetorical confidence. Either way, “science cannot touch it” does not follow. It only follows that the measurement bracket must include the observer and the reporting channel.
A devil’s advocate test exposes the hidden premise in “no scientific evidence.” Suppose systematic perturbations to brain dynamics predictably shift level and content of experience, across labs, methods, and conditions. What, exactly, would you count as evidence then? If the answer becomes “nothing, because consciousness is transcendental,” then the claim stops behaving like a fallible empirical statement and starts behaving like a methodological firewall. That may function as philosophy. It should not present itself as a scientific verdict.
A Quick Primer on Whitehead, So We Stop Arguing With Straw Men
Segall’s framework sits inside a lineage with real intellectual weight: Alfred North Whitehead’s process philosophy. Readers who only know Whitehead as “the guy panpsychists cite” miss the core move. Whitehead objects to “substance” metaphysics, the picture where reality consists of inert things that sometimes interact. He proposes that actuality consists of events or occasions, with relations and inheritances as fundamental. He also attacks what he calls the “bifurcation of nature,” the idea that the world splits cleanly into objective, measurable stuff on one side and subjective, qualitative stuff on the other. You can read this directly in Whitehead’s own presentations, not as secondhand gloss, in The Concept of Nature (1920), Science and the Modern World (1925), and Process and Reality (1929).
That matters because Segall often reads as more careful than the caricature suggests. He typically does not claim that rocks have human-like consciousness. He draws a distinction between consciousness as reflective, self-aware, linguistically scaffolded awareness, and experience as something more basic and pervasive. This resembles a common Whiteheadian move: deny “vacuous actualities,” meaning deny that the basic units of reality are entirely devoid of internal character, while also refusing to anthropomorphize everything. That is not automatically incoherent. It is a legitimate metaphysical proposal.
Segall also inherits Whitehead’s insistence that philosophy and science are not enemies. Whitehead did not build a metaphysics to ignore science. He built it to interpret science and to repair what he saw as conceptual fractures introduced by earlier metaphysical assumptions. If you want to evaluate a Whiteheadian system fairly, you do not ask whether it feels spiritually satisfying. You ask whether it improves coherence across physics, biology, experience, and method without buying itself immunity from correction.
So it helps to be explicit about what the critique here is not. This is not a complaint that Segall departs from Whitehead, or that Whitehead lacks historical importance, or that first-person experience should be ignored. The critique targets a specific inference: the move from “consciousness functions as a condition for inquiry” to “consciousness cannot be empirically constrained,” and then to “there is no scientific evidence consciousness exists.” Even if Segall stays faithful to Whitehead, fidelity does not automatically generate discriminating predictions. Exegesis does not buy exemption.
One guardrail makes the methodological point clean. Even if Whitehead denies vacuous actuality, that does not settle what kind of empirical constraints, if any, follow from that metaphysical commitment. A process metaphysics may be valuable as a coherence framework, but the question remains: does it change what we should expect to observe, or change what interventions should do? If it does not, then it can still matter as worldview or interpretation. It should not overrule mechanistic models while also claiming special evidential status.
What I’d Like to Ask Segall: How Would Your Metaphysics Fail?
Segall’s most defensible posture looks like this: reject cartoon empiricism, keep the observer in the measurement bracket, and treat metaphysics as an attempt to improve global coherence rather than a lab protocol. Fine. But once a framework starts making world-facing claims, it inherits a basic obligation: say what would count against it. So the question I’d like to ask Segall is not “can metaphysics run experiments,” but “how does metaphysics lose?” What would count as a clear failure condition for the Whiteheadian moves being made here, especially when those moves get used to underwrite the “no evidence” rhetoric?
More concretely, when Segall distinguishes consciousness from experience and denies crude anthropomorphism, what exactly does the broader “experience” claim add that changes expectations? Does it generate discriminators, or does it function mainly as an interpretive overlay on top of whatever mechanistic story we already have? If it changes nothing we would predict, then it can still operate as a coherence vocabulary, but it cannot legitimately borrow the authority of evidential constraint while also declaring itself outside that constraint.
The same question sharpens the Segall–Levin boundary in a useful way. Segall often resists Levin’s “thin client” and “agency in patterns” vibes by relocating agency in organisms or occasions. But what would differ operationally between “agency lives in patterns” and “agency lives in organisms oriented by a structured possibility space”? Under anesthesia, lesions, stimulation, developmental rewrites, or bioelectric perturbations, where do the two views actually diverge? If no divergence can be named, then the dispute risks becoming grammatical rather than empirical, and that is exactly the kind of underdetermination that lets rhetorical certainty masquerade as philosophical progress.
So the Socratic demand stays simple: if a claim about forms, aims, ingression, or ubiquitous interiority bites reality, what observation would make you revise it, narrow it, or drop it? If the honest answer becomes “nothing,” then the move functions as protected metaphysics. Protected metaphysics can still be meaningful as interpretation. It just cannot carry the evidential weight implied by “no scientific evidence,” because it has disabled the conditions under which evidence could ever count.
Where the “No Evidence” Argument Breaks: Fallacies, Category Errors, and Goalpost Drift
Segall is right that naive “photograph-or-it-isn’t-real” empiricism is a cartoon. The failure comes from a different place: the way definitions and evidence standards can slide without being marked. That slide creates arguments that feel rigorous while staying insulated from constraint.
Equivocation, if the key term shifts without notice
If consciousness sometimes means reportable awareness with access, attention, memory, and verbal or behavioral report, and sometimes means a broader experience that attaches to energetic transmission at every scale, then the term shifts between two targets. If the shift is not explicitly marked, equivocation occurs. The result is that measurement defeats one target, while the argument retreats to the other.
If “consciousness” means reportable awareness with access and report, then it clearly covaries with physical organization and can be perturbed. If “experience” means any interior aspect of any energetic transmission, then the term expands until it stops discriminating. That expansion may look metaphysically elegant, but it weakens the evidential claim because it makes the target unfalsifiable by definition.
Motte-and-bailey, if a modest claim defends a stronger one
If the argument defends a modest claim, for example first-person reports matter, and then uses that modest claim to support a stronger metaphysical thesis, for example experience pervades all processes, and then retreats back to the modest claim when challenged, that is the motte-and-bailey pattern. This does not require bad faith. It is a predictable failure mode when a framework wants both rhetorical reach and evidential safety.
Category mistake, if “evidence” gets reduced to “object on a detector”
Scientific evidence does not mean “third-person photograph of a thing.” It means publicly checkable constraints that can, in principle, distinguish models. We routinely accept evidence for entities and structures we do not photograph directly by observing how interventions and measurements cohere across contexts. If consciousness gets treated as “nowhere to be found” because it does not weigh anything or show up as an object on a detector, the argument collapses into narrow operationalism that ignores how science handles latent variables and causal structure.
Self-sealing move, if transcendental framing blocks revision
Now consider the transcendental move, often traced to Kant: consciousness as a condition for the possibility of observation. Even if you grant this framing, what follows? It shows that inquiry occurs from within a perspective, through an interface that makes the world available to a knower. But every measurement in physics also presupposes measurement conditions. That does not make electrons exempt from study. Even granting categorical uniqueness, the question remains whether that uniqueness generates discriminating predictions or merely blocks inquiry. Kant’s transcendental conditions don’t exempt causality from empirical study; they frame how we study it. The same should apply to consciousness, should it not?
So the relevant question becomes: does the transcendental framing generate new risky, discriminating predictions, or does it function primarily as a protective rule that blocks revision? If it blocks revision, it behaves like a self-sealing maneuver.
A reductio clarifies the stakes. If you define “consciousness” as the condition for any evidence whatsoever, and then declare it therefore unmeasurable and scientifically unconstrainable, you have insulated the claim by construction. At that point “no scientific evidence” stops being an empirical conclusion and becomes a stipulation about what counts as evidence. The position can still be philosophically interesting. It should not present itself as a scientific verdict.
Finally, notice what happens when the argument confronts intervention. Segall often grants correlations but denies explanation by insisting “condition is not cause” and “correlation cannot decide.” Correlation alone does not decide, agreed. But interventions can discriminate causal structure. If targeted perturbations to brain dynamics predictably alter level and content of experience, then “brain as merely a passive limiter” becomes harder to maintain without paying an explanatory price. A widely cited example: breakdowns in effective connectivity during sleep, studied with perturbational approaches, show systematic differences between conscious and unconscious states in ways that are not captured by simple input-output correlation alone (Massimini et al., Science, 2005, DOI: 10.1126/science.1117256). You can debate metaphysical interpretation. You cannot honestly call the resulting constraint map “no scientific evidence.”
Recursive Constraint Falsification, Demonstrated as Method (With Falsifiable Predictions)
This is the missing “how,” and it matters because Segall’s critique often wins by making the method feel either naive or impossible. Recursive Constraint Falsification is a disciplined way to keep what Segall gets right about theory-ladenness while refusing the upgrade to evidential exemption. It treats every claim about reality as a bundle: a target claim plus auxiliary assumptions plus measurement choices plus language that compresses a messy interface into a sentence. The method does not pretend we can escape interpretation. It forces interpretation to show its commitments, pay its bills, and accept revision.
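To make the bundle idea concrete, here is a minimal sketch in Python. It is my own illustrative construction, not a protocol anyone named above has endorsed, and every field value in the example is a placeholder. The point is structural: a world-facing claim gets recorded together with its operationalizations, its auxiliary assumptions, and an explicit failure condition for each working handle, so that “what would count against this?” becomes a field that must be filled in rather than a question that can be deferred.

```python
from dataclasses import dataclass, field

@dataclass
class Operationalization:
    """One concrete working handle for a contested term."""
    name: str
    measurement: str        # how the construct gets probed in practice
    failure_condition: str  # an observation that would retire this handle

@dataclass
class ClaimBundle:
    """A world-facing claim plus everything it quietly depends on."""
    target_claim: str
    operationalizations: list[Operationalization] = field(default_factory=list)
    auxiliary_assumptions: list[str] = field(default_factory=list)

    def is_protected(self) -> bool:
        # A bundle with no stated failure condition cannot lose; that is the
        # diagnostic for "protected metaphysics" used throughout this article.
        return not any(op.failure_condition.strip() for op in self.operationalizations)

# Placeholder example for the dispute discussed above.
bundle = ClaimBundle(
    target_claim="Reportable awareness depends on perturbation-sensitive integration",
    operationalizations=[
        Operationalization(
            name="consciousness-as-reportable-access",
            measurement="report, discrimination, perturbational complexity",
            failure_condition="report and discrimination persist after integration collapses",
        ),
    ],
    auxiliary_assumptions=["report channel intact", "dosing and measurement calibrated"],
)
print(bundle.is_protected())  # False: this bundle states how it could lose
```

Running the same check on an “experience-as-ubiquitous-interiority” bundle with an empty failure condition returns True, which is exactly the diagnostic that Step 1 below tries to force into the open.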
Step 1: Definitional tightening without definitional cheating
You do not get to use “consciousness” as both a reportable phenomenon and an all-pervading interiority unless you clearly mark the switch. You write down at least two competing operationalizations, and you say what would count as failure for each. Example: consciousness-as-reportable-access versus experience-as-ubiquitous-interiority. Then you ask a Popper-style question that sounds simple but behaves like a scalpel: what observation would make me stop using this definition as my working handle? If the honest answer becomes “nothing,” you just learned something important. You are doing protected metaphysics, not a risky claim about the world.
Prediction 1 (diagnostic): If a “no evidence” headline depends on elastic definitions, tightening the terms will force one of two outcomes. Either the claim shrinks into a narrower, defensible thesis (for example: no direct detector readout of consciousness as a separable object), or it reveals itself as a stipulation about what counts as evidence (for example: nothing counts because the target is transcendental). This is testable in practice by asking competent readers to operationalize the same text. If they produce incompatible operationalizations while still thinking they preserved the author’s meaning, the original claim was underconstrained.
Step 2: Constraint translation, instead of “photographability”
Instead of asking, “Can we photograph consciousness?” you ask, “What constraint patterns should appear if this model is true?” This is where consciousness becomes empirically tractable without being treated as a separable object. You can predict intervention-sensitive shifts in reportability, discriminability, temporal integration, and effective connectivity measures. You can also predict where those links should break if the model is wrong. The whole point is to move from vibe-level plausibility to discriminating expectations. If two stories make the same predictions across the full constraint map, then the stories are empirically tied. The right move becomes methodological humility plus targeted experiments, not metaphysical victory laps.
Prediction 2 (model discrimination): If consciousness-as-reportable-access (a broadly mechanistic family, including global workspace style approaches) is close to right, then perturbations that disrupt long-range effective connectivity and integration should reliably reduce reportable awareness and flexible access, and restorations should reliably recover it, across anesthesia, sleep, and disorders of consciousness. If a competing brain-as-tuner view is doing more than interpretive overlay, it should yield at least one systematic divergence: some intervention that changes the proposed tuning variables without producing the mechanistic signature changes, or produces experience-level changes without the predicted connectivity and integration changes. If it cannot name such a divergence, the view is not empirically rivalrous.
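A minimal sketch of that rivalry test follows, assuming invented intervention names and invented predicted outcomes. The only logic it encodes is the one stated above: two views compete empirically when at least one named intervention yields different predictions; otherwise the second view is functioning as an overlay.

```python
# Toy sketch: two views are empirical rivals only if some intervention
# yields different predictions. All entries below are illustrative placeholders.

# Predicted direction of change in reportable awareness under each intervention.
access_model = {
    "disrupt_long_range_connectivity": "reduced",
    "restore_integration": "recovered",
    "propofol_anesthesia": "reduced",
}

tuner_model = {
    "disrupt_long_range_connectivity": "reduced",
    "restore_integration": "recovered",
    "propofol_anesthesia": "reduced",
}

def divergences(model_a: dict[str, str], model_b: dict[str, str]) -> list[str]:
    """Interventions where the two views predict different outcomes."""
    shared = model_a.keys() & model_b.keys()
    return [name for name in shared if model_a[name] != model_b[name]]

diff = divergences(access_model, tuner_model)
if diff:
    print("Empirically rivalrous; discriminating interventions:", diff)
else:
    print("No named divergence: the second view is an interpretive overlay, not a rival.")
```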
Step 3: Counterfactual stress testing in Pearl’s sense
Segall is correct that correlation alone does not decide “cause versus condition.” So do not stop at correlation. You explicitly propose interventions that should behave differently on rival views. If the brain-as-tuner story is doing real explanatory work, it should yield at least one prediction that differs from mechanistic access models under perturbation, lesion, anesthesia, or stimulation. If it yields none, then the view may still function as an interpretive lens. It cannot honestly support “no scientific evidence.”
This step also yields a constraint test for the metaphysical “experience all the way down” move. If “experience” is asserted as ubiquitous interiority attached to energetic transmission, the claim needs a measurable footprint that differs from a world where energy transfer has no interior correlate.
Prediction 3 (footprint requirement): A serious footprint would show up as a systematic anomaly in our best physical and biological models: an intervention or measurement regime where treating systems as purely dynamical and relational fails in a way that is repaired by positing experiential interiority. If no such footprint exists even in principle, then the claim has no empirical content. That does not prove it false. It proves it does not belong in the same evidential register as claims that do generate footprints.
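One way to cash out the footprint requirement is ordinary model comparison. The sketch below is a toy under assumptions of my own invention: data generated by a purely dynamical rule, a second model that adds an “interiority” parameter which changes no prediction, and an information criterion that penalizes the extra free parameter. If the added parameter never reduces residuals anywhere, it never earns evidential credit.

```python
import math
import random

random.seed(0)

# Toy data from a purely dynamical rule: y = 2x plus noise.
xs = [i / 10 for i in range(1, 51)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]

def rss(predict) -> float:
    """Residual sum of squares for a prediction function."""
    return sum((y - predict(x)) ** 2 for x, y in zip(xs, ys))

def aic(k: int, rss_value: float, n: int) -> float:
    """Akaike information criterion for Gaussian residuals, up to a constant."""
    return 2 * k + n * math.log(rss_value / n)

# Model A: purely dynamical, slope fit by least squares through the origin.
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
rss_dynamical = rss(lambda x: slope * x)

# Model B: same predictions plus an "interiority" parameter that, by construction,
# changes nothing observable. It adds a free parameter without reducing residuals.
rss_with_interiority = rss_dynamical

n = len(xs)
print("AIC, dynamical only:  ", round(aic(1, rss_dynamical, n), 2))
print("AIC, plus interiority:", round(aic(2, rss_with_interiority, n), 2))
# The second model is strictly penalized: no footprint, no evidential credit.
```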
Step 4: Duhem-Quine and Lakatos made operational
You keep track of where you place blame when predictions fail. When a test goes badly, you do not get to silently shift the goalposts by moving the failure into an unnamed auxiliary. You state, in advance, which auxiliary assumptions you are willing to revise, and which revisions count as progressive rather than degenerating. Progressive means the revisions reduce ad hoc patches and increase novel predictive power. Degenerating means the revisions mainly protect a preferred conclusion while generating no new risky expectations. This is how you keep “metaphysics as interpretation” from quietly turning into “metaphysics as immunity.”
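A minimal bookkeeping sketch, under toy definitions of my own: each failed prediction gets logged with the auxiliary that absorbed the blame and a count of the new risky predictions the repair generated. A crude progressive-versus-degenerating call then falls out of the log rather than out of rhetoric.

```python
from dataclasses import dataclass

@dataclass
class Revision:
    """One response to a failed prediction."""
    failed_prediction: str
    auxiliary_blamed: str
    new_risky_predictions: int  # novel, testable expectations generated by the fix

def programme_status(revisions: list[Revision]) -> str:
    """Crude Lakatos-style score: do revisions generate new risk or just absorb failure?"""
    if not revisions:
        return "untested"
    novel = sum(r.new_risky_predictions for r in revisions)
    ad_hoc = sum(1 for r in revisions if r.new_risky_predictions == 0)
    return "progressive" if novel > ad_hoc else "degenerating"

# Hypothetical log: two patches that protect the conclusion, none that risk anything new.
log = [
    Revision("anesthesia should expand awareness", "filter was already maximally open", 0),
    Revision("lesion should reveal hidden content", "content is present but unreportable", 0),
]
print(programme_status(log))  # degenerating
```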
Step 5: Cost accounting and calibration
Real error correction is not free, cognitively or thermodynamically. A thinker or community that constantly claims certainty while also claiming deep rigor should trigger suspicion, because serious falsification incurs visible costs: revisions, narrowed claims, dropped intuitions, and sometimes social discomfort. Over time, you can test falsificationism itself by checking whether this posture produces a robustness dividend: better long-horizon calibration, fewer catastrophic reversals under novelty, and more reliable interventions.
Prediction 4 (method signature): If falsification, implemented as a genuine control regime rather than a slogan, tracks reality’s constraint structure in a way that improves long-horizon performance, it should exhibit a distinctive tradeoff pattern. In the short run, stronger falsification pressure should increase visible uncertainty, explicit error admission, and model revisions and retractions, while reducing rhetorical confidence. In the long run, the same agents or communities should outperform on calibration under surprise, out-of-distribution robustness, and intervention success. If you can crank falsification pressure up and see none of the short-run costs, you are probably watching theater. If you observe those costs but never see the long-run dividend, then falsificationism, as a supposed truth-tracking strategy, loses empirical support.
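The long-run dividend is measurable with standard forecasting tools. Below is a minimal sketch using the Brier score, the mean squared error between stated probabilities and binary outcomes; the forecasts and outcomes are invented numbers, but the metric itself is standard.

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: a hedged forecaster vs. a maximally confident one, same events.
hedged    = [0.7, 0.6, 0.8, 0.4, 0.7]
confident = [1.0, 1.0, 1.0, 0.0, 1.0]
outcomes  = [1,   0,   1,   0,   1]

print(round(brier_score(hedged, outcomes), 3))     # 0.148
print(round(brier_score(confident, outcomes), 3))  # 0.2

# The falsification-pressure claim above predicts that, over long horizons and
# under surprise, the hedged profile should beat the confident one on this metric.
```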
A Constraint-Based Alternative: Make the Metaphysics Pay Rent, or Admit It Is Comfortable Interpretation
There is a straightforward methodological repair that does not require anyone to pretend consciousness is “just another object like mass.” Treat any claim that posits structure in reality as owing you a clear set of failure conditions. That is Popper’s demarcation instinct in operational form: not “metaphysics is evil,” but “if your claim bites reality, it must risk being wrong.”
“Thus my proposal was, and is, that it is this second boldness, together with the readiness to look for tests and refutations, which distinguished ‘empirical’ science from non-science, and especially from pre-scientific myths and metaphysics.”
Karl Popper
Ayer’s verificationism pushes a related pressure, though many philosophers reject its strongest form. The useful residue is not “only verification matters,” but the demand for meaning conditions. What would make a statement true, and what would make it false? Flew’s parable-style critique of unfalsifiable “death by a thousand qualifications” points at the same risk: you can save any claim by endlessly adjusting it, but at the cost of asserting less and less about the world. Quine complicates the story by emphasizing confirmation holism, the web of belief, and the fact that auxiliary assumptions can absorb shocks. That does not kill falsification. It tells you to operationalize it at the level of research programs and model comparison, not as a single-shot guillotine.
So here is the constraint-based thermodynamic monism proposal in plain terms. If you claim “experience pervades all energetic transmission,” what observations would you treat as disconfirming? What experimental signatures would differ between a world where that identity holds and a world where it does not? If you claim “the brain conditions but does not produce consciousness,” what interventions would behave differently on that view than on a view where consciousness depends on specific patterns of effective connectivity, integration, or global availability? If your answer is “no possible intervention could discriminate,” then you are doing metaphysics as interpretation, not metaphysics as a world-facing rival to mechanistic models. That is allowed. It just cannot support “no scientific evidence” as a bludgeon.
This is where mechanistic theories prove their worth. Some prominent proposals attempt to specify architectures, signatures, and failure modes. Global workspace style models propose testable relationships between broadcasting, access, reportability, and neural dynamics (Dehaene and Changeux, Neuron, 2011, DOI: 10.1016/j.neuron.2011.03.018). Whether you accept that model fully (methodological fallibilism demands one shouldn’t accept any model of anything “fully”), it exemplifies the right epistemically humble behavior: it states mechanisms, predicts patterns, and exposes itself to correction. A process metaphysics that wants comparable authority should do the same kind of work, or else present itself explicitly as an interpretive coherence framework rather than an evidentially privileged account.
The Sharpest Socratic Questions: Taking Segall’s Logic to Its Terminus
I want to take Segall seriously where he is strongest: his insistence that “evidence” is never raw, never theory-free, never separable from the conditions of knowing. Fine. I accept the premise. The questions below do not argue with that. They ask what happens when you apply it to Segall’s own positive metaphysical claims, not as a gotcha, but as the same discipline he asks science to endure. I am not asking for more confidence. I am asking for a failure mode.
What I’d like to ask Segall is this: if consciousness is the condition for the possibility of any evidence, then doesn’t that condition apply to your own claim about what consciousness is? By what method does your framework avoid the epistemic circle it diagnoses? Kant’s transcendental move was never just a shield against science; it also threatens every metaphysical claim that tries to speak about “the condition” as if it had stepped outside it. Sellars called this the “Myth of the Given,” the fantasy that we can start from something that functions as an epistemic foundation without already importing a theory. Rorty’s pragmatist jab is blunter: transcendental arguments often behave like conversation-stoppers in philosophical clothing. So the test is simple: do we get a disciplined method for distinguishing “necessary condition of inquiry” from “metaphysical picture I prefer,” or do we just get a new kind of exemption.
A second question follows immediately, and it is the one that determines whether this stays philosophy or turns into taste. Segall often suggests that metaphysics does not use falsification, but instead uses coherence, consistency, and adequacy to experience. Fine. But idealism, materialism, neutral monism, panpsychism, and cosmopsychism can all be made coherent with enough careful prose and enough flexible definitions. So what is the procedure for adjudication when multiple incompatible systems pass the coherence test? Quine’s underdetermination point matters here: the same “data” can support rival theories. Lakatos helps operationalize the problem by asking which research programme is progressive rather than degenerating, meaning which one generates novel constraints instead of just protecting itself. Kuhn complicates the story without licensing “anything goes.” Van Fraassen offers an exit ramp: if metaphysics cannot discriminate, stay empirically adequate and metaphysically modest. The key Socratic fork is this: if your criteria do not discriminate, in what sense is panexperientialism true rather than merely preferred.
Then comes the question that forces the metaphysics to touch the ground. If experience pervades all energetic transmission, what observable difference does that make compared to a universe where energy transfer has no interior correlate? What measurement, experiment, or intervention would come out differently if the claim is true? Popper is not being moralistic when he demands risk. He is being economical. A claim that cannot lose cannot win either. Flew’s “death by a thousand qualifications” is the same warning in a different accent: you can protect a statement by endlessly adjusting it, but you pay by asserting less and less about the world. Dennett’s heterophenomenology offers a pragmatic stance: study consciousness as a phenomenon of report, discrimination, and behavior without taking a metaphysical position on interiority. Chalmers is the mirror test here: what would count as solving the hard problem under panexperientialism if panexperientialism makes no empirical difference? If the answer becomes “it changes how we relate to nature,” then the claim has shifted categories. It becomes ethical or existential, which is allowed, but it stops being a descriptive world-claim.
The brain-as-tuner model is where the rubber meets the anesthesia. If the brain “conditions but does not produce” consciousness, in the strong filter sense, then reducing brain function should loosen the filter and expand experience, not erase it. So why does anesthesia so reliably produce unconsciousness rather than expanded awareness? Why do focal lesions remove specific contents and capacities rather than revealing hidden ones? What would count as evidence against the tuner model? Bergson and Huxley made the filter idea rhetorically seductive. The issue is not that it is aesthetically wrong. The issue is that taken literally, it predicts backward. Owen and Laureys type data on disorders of consciousness are useful precisely because they give a real constraint map rather than a vibe. Tononi’s IIT is relevant here because, whatever you think of it, it at least tries to say when consciousness should rise and fall as integration changes. The Socratic wedge is not “your metaphor is bad.” It is “what does your metaphor predict that competes with mechanistic accounts.”
Panexperientialism often sells itself as a way to avoid the “magic trick” of getting experience from non-experiential matter. But it can boomerang. If experience is basic, how do micro-experiences combine into unified macro-experience like ours? James posed the combination problem a century ago. Chalmers has argued it may be as hard as the hard problem. Goff tries to patch it, and critics accuse the patches of smuggling the thing being explained. Tononi offers an integration-based pathway, but whether that is a solution or a relabeling is contested. Roelofs maps the option space in detail. Segall sometimes distinguishes experience from consciousness, which can help, but it also sets a trap: if consciousness requires a special kind of integration and organization, then the explanation starts to look like “organization produces the relevant kind of experience,” which is the emergentist claim in different grammar. So the human version of the question is: does panexperientialism dissolve the mystery, or does it relocate it.
The “information requires an interpreter” line is another place where the rhetoric can do more work than the mechanism. If information is a difference that makes a difference to somebody, who is the “somebody” at the bottom? If every difference requires an interpreter, and the interpreter is itself an informational process, you risk regress or a foundational interpreter that interprets without being interpreted. Bateson’s line is powerful, but it is easy to misuse. Peirce’s semiotics shows how interpretants can generate infinite semiosis if you are not careful. Deacon is useful here because he tries to ground interpretation in constraint dynamics without a homunculus. Dennett’s approach is to dissolve the interpreter into layers of simpler competences until you bottom out in non-interpreting mechanisms. Thompson’s enactivism grounds interpretation in organism-environment coupling. The Socratic knife-edge is: either interpretation is everywhere and becomes nondiscriminating, or interpretation emerges and we are back to emergence.
Finally, there is a subtle tension between participatory truth, which John Archibald Wheeler would likely find rightfully interesting, and metaphysical assertion. Segall criticizes Hoffman for importing Cartesian epistemology into a new ontology, and he gestures toward participatory truth, something like resonance rather than mirroring. Fine. But if truth is participatory and reality is co-constituted through relation, then is “experience pervades all processes” a discovery about a pre-existing reality or a stance that helps shape what becomes real for us? James’s pragmatism makes “true means works” seductive, but it struggles when you want metaphysical realism at the same time. Rorty would push participatory truth toward social negotiation, which Segall would reject, but the tension remains. Whitehead wants conformity of appearance to reality, but “conformity” gets tricky if reality is partially co-constituted. Latour’s point is that constructed does not mean unreal, but it does change what “real” means. The Socratic fork is: can panexperientialism be false in a robust sense, or only less useful.
The Cost of Rejecting Falsifiability: Levin, “Platonic Fecundity,” and Institutional Capture
Segall’s “no evidence” posture does not just create a philosophical headache. It creates an epistemic permission slip: once a claim gets framed as “beyond empirical constraint,” it becomes portable. Anyone can pick it up, repackage it, and sell it as “science finally admitting what we knew all along.”
That portability matters because the moment you relax falsifiability, you do not get “humility.” You get jurisdiction without liability: the ability to make world-facing claims while disavowing the ordinary burdens of world-facing testing. Popper warned about this in practice even when people use different vocabulary. Immunize a claim against being wrong, and you have not made it deeper. You have made it safe from correction.
This is not abstract. Michael Levin’s own public framing (in his Platonism-forward writing) includes explicit causal-sounding language about non-physical patterns influencing biological morphogenesis, including claims that “Platonic forms inject information and influence into physical events,” and that “patterns themselves are the agent,” with bodies functioning as a “scratchpad.” You can argue about intended interpretation, but the affordance stays the same: it reads like a reality claim that outruns ordinary mechanistic accounting.
Once the door opens to “non-physical influence” talk, a very predictable thing happens: groups with preloaded metaphysical commitments walk through the door carrying pamphlets.
How Discovery Institute Uses This: Teleology as “Agency,” Then “Science and Faith Reconciled”
The Discovery Institute and its authors have published material explicitly using this Platonism/teleology framing as leverage for Intelligent Design-style conclusions. The pattern shows up in their own words and publishing choices, including claims that teleology represents a missing “agency” and that recognizing it can “reconcile science with faith.” The same cluster includes book-length packaging (for example Plato’s Revenge) and repeated articles presenting “immaterial” explanatory layers as active, present-tense designers of biological outcomes.
This matters for a simple reason: if your framework cannot specify what would count against it, then an institution can always treat the framework as confirmation. Every outcome becomes “consistent,” and consistency becomes “evidence,” and evidence becomes “vindication.” That pipeline converts conceptual ambiguity into public certainty.
Why This Counts as Harm, Not Just “Bad Philosophy”
People sometimes treat this as mere academic drama. It is not. The harm vector runs through multiple channels:
First, science communication and education. If “teleology” and “immaterial patterns” get framed as empirically licensed agencies, the public gets trained to treat mechanistic explanation as optional, and to treat interpretive overlays as equivalent to testable models. That degrades scientific literacy and makes the next round of crank metaphysics cheaper to sell.
Second, institutional credibility. Bioelectricity and morphogenesis research already fights the “woo magnet” problem. If high-profile language blurs the boundary between mechanism and metaphysical assertion, it becomes easier for ideological actors to present the entire domain as spiritually motivated rather than experimentally disciplined.
Third, policy and culture-war capture. Intelligent Design movements operate by turning small ambiguities into wedge narratives. A little “beyond materialism” becomes a lot of “materialism is refuted,” then becomes lobbying, curriculum pressure, and public distrust of scientific institutions. The content does not need to be false to be weaponizable. It only needs to be non-discriminating.
How This Connects Back to Segall: “Evidence Standards” Become a Reusable Firewall
Segall’s move and Levin’s move share a structural weakness even when their motivations differ. When you say, “your evidence does not count because of the kind of thing this is,” you have invented an exemption class. Once invented, that exemption class becomes a tool other actors can use.
So here is the upgrade that keeps the philosophical insight (theory-ladenness, measurement context, observer inclusion) while blocking institutional abuse:
If a claim about consciousness, teleology, or forms makes contact with the world, it owes the world a failure condition.
If it cannot pay that bill, it can still function as poetry, ethics, or interpretation. But it should not be marketed as a scientific verdict, and it should not be used to undercut mechanistic research while also borrowing science’s authority.
A Practical Safeguard: “Anti-Weaponization” as a Requirement of Serious Theorizing
There is also a responsibility layer here that has nothing to do with blame and everything to do with leverage. If you speak in ways that predictably get reinterpreted as “science endorses immaterial agency,” you can reduce harm by doing at least one of the following:
- State a clear falsifiability hook: one empirical discriminator that would count against the metaphysical reading.
- Explicitly separate interpretive metaphysics from mechanistic claims in public, in the same venues where the metaphysical rhetoric appears.
- Publicly disavow institutional misappropriation (and the language that invites such misuse, e.g., “ingression,” “transcendent,” “timeless,” “separate non-physical realms”) when it happens, especially when it gets used to sell anti-evolution or anti-naturalist conclusions as “scientific progress.”
Silence, strategic ambiguity, or “it depends what you mean” tends to function as tacit permission for capture.
Consciousness, Again: What Would Actually Move the Needle?
Segall’s most forceful rhetorical move is to treat first-person awareness as “primary evidence” that sits in a different jurisdiction from third-person methods. The first half can be granted. You do not discover consciousness as an external object. You live it as a datum. But the jurisdiction claim still needs justification. Why should first-person data imply that third-person constraints cannot apply?
In practice, consciousness science already treats report as data while refusing to treat introspection as infallible. It models the mapping from internal state to report channel, and then asks what physical and functional variables predict changes in report, discrimination, attention, and memory. That approach is not naive realism. It is the ordinary discipline of turning private access into public constraint.
A clean counterfactual sharpens the dispute. Imagine two worlds. In world A, consciousness depends strongly on specific, perturbation-sensitive patterns of effective connectivity and integration. In world B, consciousness is a ubiquitous interior aspect of energetic processes, and brains mainly tune or filter an already present field of experience. What would differ operationally? Would anesthesia, sleep, focal lesions, and stimulation produce the same predictable shifts in level and content in both worlds? Would complexity indices and perturbational measures track transitions similarly? If you cannot say what differs, you are not offering an alternative explanation. You are offering a redescriptive overlay.
This is also where “information” talk becomes dangerous. Segall is right to warn against reifying Shannon information into ontology. Shannon’s information is an operational measure of uncertainty relative to a coding scheme and channel, not a claim that reality is made of “information” in some mystical sense (Shannon, 1948, DOI: 10.1145/584091.584093). Bateson’s “difference that makes a difference” adds helpful emphasis on relevance and effect. But if we add “to somebody” without operationalizing “somebody” as a system with boundaries, update rules, and energy budgets, we risk smuggling in an interpreter as a homunculus. The moment “information implies an interpreter” becomes a metaphysical trump card, it stops doing scientific work unless it generates discriminating predictions.
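For readers who want the operational content rather than the slogan, Shannon’s quantity is just a number computed from a probability model of a source. A minimal sketch follows, with toy distributions chosen for illustration; nothing in it requires or forbids an interpreter.

```python
import math

def shannon_entropy(probs: list[float]) -> float:
    """H = -sum(p * log2 p): expected surprisal of a source, in bits,
    relative to the probability model chosen for that source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy source models: a fair coin and a heavily biased one.
print(shannon_entropy([0.5, 0.5]))  # 1.0 bit
print(shannon_entropy([0.9, 0.1]))  # ~0.469 bits

# The number depends on the probability model and the coding scheme it implies.
# It is an operational measure of uncertainty, not a claim about interiority.
```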
A final devil’s advocate question targets the slogan directly. If a view defines “consciousness” so that no empirical result could ever count as evidence for it or against it, what has the view bought? It has bought immunity. Immunity can feel like depth, because it cannot be easily harmed. But immunity also signals low empirical content. It tells you the claim will not help you decide between models or interventions. At that point, the honest move is to stop presenting “no scientific evidence” as a result and start presenting “I am changing the rules of evidence” as a philosophical proposal.
So the most defensible closing is not “ignore first-person experience,” and it is not “declare consciousness beyond science.” It is this: treat consciousness claims like any other reality claim. Specify what would count against them. Prefer accounts that generate risky, discriminating predictions and survive contact with perturbation, replication, and cross-modal measurement. Keep interpretive metaphysics if it helps people think and live. Just do not confuse interpretation for exemption from constraint.
References
- Segall, M. D. (2026, January 14). There’s no scientific evidence that consciousness exists. Footnotes2Plato (Substack). Annotation: Primary target text. The article critiques Segall’s framing of “evidence,” the transcendental move that tries to exempt consciousness from empirical constraint, and the claim that metaphysics sits outside falsification.
- Popper, K. R. (1959). The logic of scientific discovery. Routledge. Annotation: Establishes falsifiability as a demarcation criterion and frames empirical inquiry as structured exposure to potential refutation. Used to justify treating reality-implicating claims as answerable to empirical constraints.
- Popper, K. R. (1963). Conjectures and refutations: The growth of scientific knowledge. Routledge & Kegan Paul. Annotation: Used for the “bold conjecture plus refutation seeking” posture and to motivate the “falsification tax vs robustness dividend” idea as a measurable signature of honest error-exposure.
- Ayer, A. J. (1936). Language, truth and logic. Penguin Books. Annotation: Provides the verificationist pressure: claims presented as factual should specify what would count as confirming or disconfirming experience. Used to critique “evidence” standards that get tightened until nothing can count.
- Flew, A. (1950). Theology and falsification. In A. Flew & A. MacIntyre (Eds.), New essays in philosophical theology. Annotation: Used to name and diagnose “self-sealing” claims. Supports the argument that if nothing could count against a claim, then the claim functions as a protected posture rather than a risky description of reality.
- Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge. Cambridge University Press. Annotation: Provides the research-programme lens: anomalies do not instantly kill theories, but the pattern of adjustments matters. Used to distinguish progressive problemshifts from degenerating, immunizing moves.
- Duhem, P. (1954). The aim and structure of physical theory (P. P. Wiener, Trans.). Princeton University Press. Annotation: Supports the point that tests hit webs of assumptions, not isolated hypotheses. Used to show why falsification needs disciplined bookkeeping about auxiliaries and why “underdetermination” does not imply “anything goes.”
- Quine, W. V. O. (1951). Two dogmas of empiricism. The Philosophical Review, 60(1), 20–43. Annotation: Backs the “web of belief” update model. Used to explain how empirical pressure can rationally revise even deep commitments, while still allowing pragmatic stability where revision costs exceed gains.
- Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press. Annotation: Supplies the paradigm and normal science context. Used to frame why “method-only” stories about falsification miss sociological and training dynamics, without conceding that truth reduces to consensus.
- Feyerabend, P. (1975). Against method. Verso. Annotation: Provides the strongest critique of rigid methodological rules. Used as a steelman: even if methods vary historically, reality-facing claims still pay a constraint bill when they guide interventions.
- van Fraassen, B. C. (1980). The scientific image. Oxford University Press. Annotation: Grounds the “empirical adequacy” standard and clarifies observable vs unobservable commitments. Used to argue that a view can stay modest about ontology while still demanding testable performance.
- Whitehead, A. N. (1920). The concept of nature. Cambridge University Press. Annotation: Primary Whitehead primer anchor for “bifurcation of nature” and the critique of treating nature as two disconnected realms.
- Whitehead, A. N. (1925). Science and the modern world. Cambridge University Press. Annotation: Primary support for Whitehead’s diagnosis of modernity’s conceptual splits.
- Whitehead, A. N. (1929). Process and reality. Cambridge University Press. Annotation: Primary source for “actual occasions,” “prehension,” and “vector feeling.”