Dr. William B. Miller Jr. Says Every Cell Is Conscious (So Is Your Thermostat): A Complete Scholarly Philosophical and Scientific Dissection
A Recursive Constraint Falsification Dissection: Why Dr. Miller’s redefinition of consciousness as “processing ambiguity” accidentally grants phenomenal experience to dishwashers, thermostats, and literally every feedback system in existence, not just AI.
“Meet the Scientist Showing Every Cell In Your Body is Conscious!”
Method
Every substantive claim is quoted, then examined for: definition shifts, unwarranted assumptions, unfalsifiable claims, empirical contradictions, named fallacies (including all consequent/antecedent-adjacent fallacies), god tricks, nominalization errors and cascades, conflations, equivocations, reification errors, observer-attribution errors, map/territory confusions, contradictions with physics, biology, and thermodynamics, internal contradictions, and potential for public harm. Socratic questions draw on the work of relevant scholars. Every question posed is answered with the most authoritative experimental evidence or named scholarship available.
1. The Opening Claim (0:00–0:14)
“Unequivocally. Yes, I do believe that every cell is conscious. You are an amalgam of their consciousnesses that are so seamlessly connected to become your unique exclusive consciousness.”
Diagnosis: Equivocation (triple)
The word “conscious” appears three times in two sentences, doing different work each time. “Cell is conscious” will later be defined as “resolves ambiguity.” “Your unique exclusive consciousness” refers to phenomenal experience, the rich palette of feeling, sensing, and appreciating that Miller himself describes minutes later. These are not the same thing. Calling both “consciousness” and deriving the second from the first is the fallacy of equivocation: using a word in two senses within a single argument while treating them as identical.
Socratic Questions and Answers
Q: When the word “conscious” is applied to a cell and then to a human, does it pick out the same property in both cases? If yes, what is that property, precisely? If no, why use the same word?
This is the core challenge that Daniel Dennett pressed throughout his career against inflated consciousness claims. In Consciousness Explained (1991), Dennett argued that conflating “discriminative capacity” with “phenomenal experience” is the foundational error of consciousness studies. The answer to this question determines whether Miller’s project is a contribution to science or a prolonged equivocation. Miller never answers it. He defines consciousness as “problem-solving under ambiguity” when he needs it to be defensible, and as “experience of satisfaction” when he needs it to be interesting. These cannot both be the definition simultaneously, because thermostats solve problems under ambiguity and nobody claims they experience satisfaction.
Q: “You’ve just claimed to dissolve my Hard Problem. But your dissolution depends on defining consciousness as ‘responding to ambiguity.’ That’s a functional characterization. Have you dissolved the Hard Problem, or defined it away by refusing to address it?”
David Chalmers (1995, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies) drew a precise distinction between the “easy problems” of consciousness (explaining discrimination, integration, reportability, and other functional capacities) and the “hard problem” (explaining why functional processing is accompanied by subjective experience). Miller’s definition of consciousness as “resolving ambiguity” places it squarely among the easy problems. His claim to have dissolved the hard problem is therefore the claim to have dissolved a problem he has not actually addressed. As Chalmers noted: “The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect.” Miller accounts for the whir. He does not account for the subjective aspect. He simply asserts they are identical.
Additional Issues
Composition fallacy: “You are an amalgam of their consciousnesses” assumes that if each cell is conscious, the organism’s consciousness is a summation of cellular consciousnesses. This is the composition fallacy: assuming that a property of parts must be a property of the whole (or that the whole’s property is composed of the parts’ properties). Temperature provides the counterexample. Individual molecules do not have temperature. Temperature is a statistical property of ensembles. Similarly, even if individual cells had some form of “micro-consciousness,” it would not follow that organismal consciousness is their sum. William James identified this as the “combination problem” in 1890 (Principles of Psychology, Chapter 6): “Take a hundred [feelings], shuffle them and pack them as close together as you can… each remains the same old feeling it always was… How the ‘integration’ of these feelings into a ‘higher’ feeling which is theirs takes the place of the original separate feelings is a mystery.”
2. Defining Consciousness as Doubt (4:56–5:40)
“Consciousness is doubt. Being is doubt and that’s what cells teach us. Every cell dwells in ambiguity, in doubt.”
“Consciousness is the ability to perceive doubt and to act purposively, purposefully on it. This doesn’t meet the same definition as every other scientist but you can’t get agreement among scientists.”
Diagnosis: Stipulative definition disguised as discovery (Definist fallacy)
Miller presents a private definition of consciousness, “the ability to perceive doubt and act on it,” then treats everything he can fit under this definition as actually conscious. This is the definist fallacy: defining X in a way that guarantees your conclusion, then presenting the conclusion as empirical.
Diagnosis: Persuasive definition (Stevenson, 1938)
Charles Stevenson identified “persuasive definitions” as definitions that preserve a word’s emotional force while changing its descriptive content. “Consciousness” carries enormous philosophical weight (the mystery of subjective experience, moral patienthood, the basis of rights). By redefining it as “signal processing under noise” and retaining the word, Miller imports all that weight into a claim about molecular biology. This is not innocent. If cells are “conscious,” they become potential moral patients. The terminological choice has real-world consequences for how we think about experimentation on living tissue.
Socratic Questions and Answers
Q: If “consciousness” is defined as “the ability to perceive doubt and act on it,” and this definition is stipulated rather than discovered, then what observation would falsify the claim that cells are conscious? Given that the definer controls the definition, can the claim ever be tested?
Karl Popper’s demarcation criterion (1959, The Logic of Scientific Discovery) requires that scientific claims specify what observations would count against them. Under Miller’s definition, any system that processes signals under noise qualifies as “conscious.” Since all physical systems process signals under noise (thermal fluctuations guarantee noise in any physical channel operating above absolute zero), the claim becomes unfalsifiable. It is compatible with every possible observation. Popper would classify it as non-scientific, not because it is necessarily wrong, but because it cannot in principle be shown to be wrong.
Q: Cells do not “perceive doubt.” They have molecular pathways that respond to gradients. A thermostat responds to temperature gradients. What distinguishes the cell’s signal transduction from a thermostat’s temperature regulation, such that the word “doubt” applies to one but not the other?
The distinction Miller needs but never provides is between processing information under uncertainty (which every feedback system does, biological or not) and experiencing uncertainty as felt doubt (which requires phenomenal consciousness). John Searle’s “Chinese Room” argument (1980, Minds, Brains, and Programs, Behavioral and Brain Sciences) drew exactly this line: a system can process symbols according to rules without understanding or experiencing anything. The burden is on Miller to show that cellular signal transduction involves something more than rule-following. He meets this burden only by defining “doubt” to include molecular signal processing, which is circular.
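Searle’s point, that rule-following is compatible with zero understanding, can be made concrete in a few lines. The sketch below is a deliberately crude stand-in for the Chinese Room (the two-entry rule book and the transliterated phrases are made up for illustration): it maps input symbols to output symbols by lookup alone, producing appropriate responses while nothing in it understands anything.

```python
# Toy stand-in for Searle's Chinese Room: a pure lookup table that
# maps input symbols to output symbols by rule alone. It produces
# appropriate-looking responses without understanding anything, just
# as a signal-transduction cascade maps inputs to outputs without
# "perceiving doubt". (Rule book contents are invented for the example.)

RULE_BOOK = {
    "ni hao": "ni hao",           # greeting -> greet back
    "ni hao ma": "wo hen hao",    # "how are you?" -> "I am fine"
}

def chinese_room(symbols: str) -> str:
    """Apply the rule book; no state in this function 'understands' Chinese."""
    return RULE_BOOK.get(symbols, "wo bu dong")  # default: "I don't understand"

print(chinese_room("ni hao ma"))  # wo hen hao
```

The point is not that cells are lookup tables; it is that producing fitting output is no evidence of understanding, so Miller cannot infer felt doubt from adaptive response alone.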
Additional Issues
“You can’t get agreement among scientists” as justification: This is the fallacy of false equivalence. The lack of consensus on the definition of consciousness does not mean all definitions are equally valid. Some definitions (like Chalmers’ careful distinction between easy and hard problems, or Dehaene’s operationalization through global workspace access) are constrained by decades of empirical and philosophical work. Miller’s definition has no such constraints. It is chosen precisely because it guarantees his conclusion.
3. Intelligence Defined as Sensing + Acting (5:46–6:19)
“Every cell is intelligent meaning it can sense the world around it. Understand that there’s ambiguity in that understanding and act to resolve that doubt and make definitive directed decisions to deploy assets, scant cellular resources, and/or communicate or both to other cells in order to sustain themselves.”
Diagnosis: Discovery Institute isomorphism
The logical structure here is identical to Intelligent Design arguments:
- Observe that cells do impressive things (chemotaxis, gene regulation)
- Compare to a weak null model (random diffusion, pure noise)
- Declare the gap requires a special primitive (“intelligence,” “decision-making”)
- Install the primitive as explanation
This is the same formal skeleton that Michael Behe used for “irreducible complexity” and William Dembski used for “specified complexity.” The only difference is where the intelligence is located: outside the system (classical ID) or inside the cells (Miller). In both cases, “intelligence” does no explanatory work beyond redescribing the phenomenon. Dennett called this the “skyhook” move (Darwin’s Dangerous Idea, 1995): positing a higher-level explanation when a lower-level “crane” (in this case, molecular mechanism shaped by natural selection) already accounts for the data.
Socratic Questions and Answers
Q: When E. coli tumbles and swims up a nutrient gradient via the Che signaling pathway, which step is the “decision”? The phosphorylation of CheY? The binding to the flagellar motor? If you can point to the molecular step, why call it a “decision” rather than a “biochemical reaction”? If you can’t point to it, what does the word add?
E. coli chemotaxis is one of the most thoroughly characterized molecular systems in biology (Howard Berg, E. coli in Motion, 2004; Sourjik & Wingreen, “Responding to Chemical Gradients,” Current Opinion in Cell Biology, 2012). Every step, from receptor binding through methylation adaptation, phosphotransfer to CheY, and flagellar motor switching, is accounted for without invoking “decisions.” The behavior looks intelligent to a human observer because natural selection has optimized the pathway over billions of years. But looking intelligent (from an observer’s perspective) and being intelligent (possessing a cognitive capacity) are different claims. Conflating them is the observer-attribution fallacy: attributing to the observed system a property that exists only in the observer’s interpretive framework.
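The observer-attribution point can be exhibited with a toy run-and-tumble model (illustrative parameters, not Berg’s measured kinetics): the only rule is “tumble more often when the last reading got worse,” a crude analogue of the temporal comparison implemented by receptor methylation and CheY phosphorylation. Gradient-climbing behavior emerges, and no step in the loop is a “decision” in any mentalistic sense.

```python
import random

def chemotaxis_walk(steps=20000, seed=0):
    """Toy 1-D run-and-tumble walker (parameters invented for illustration).
    Single rule: tumble (re-pick a random direction) more often when the
    nutrient reading got worse since the last step. With concentration
    rising linearly in x, up-gradient drift emerges from biased tumbling."""
    rng = random.Random(seed)
    x, direction, last_c = 0.0, 1, 0.0
    for _ in range(steps):
        x += 0.1 * direction              # "run" one step
        c = x                             # linear gradient: concentration ~ x
        # crude analogue of the CheY temporal-comparison circuit:
        p_tumble = 0.1 if c > last_c else 0.5
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])   # "tumble": random reorientation
        last_c = c
    return x

print(chemotaxis_walk() > 0)  # True: the walker ends far up-gradient
```

An observer watching the trajectory would call it goal-directed; the code shows the entire behavior reduces to one conditional probability, which is the force of the observer-attribution diagnosis.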
Q: Cells self-organize under constraint. But self-organization is not intelligence. Bénard convection cells self-organize into hexagonal patterns. Snowflakes self-organize into hexagonal symmetry. Are Bénard cells intelligent?
Stuart Kauffman (The Origins of Order, 1993) demonstrated that self-organization is a pervasive physical phenomenon that does not require intelligence, design, or cognition. Self-organization occurs wherever systems are driven far from equilibrium and subject to constraints. The formation of convection cells, crystal structures, and chemical oscillations (Belousov-Zhabotinsky reactions) are all self-organizing processes that exhibit complex, adaptive-looking behavior. If “intelligence” means “self-organization under constraint,” then every dissipative structure in the universe is intelligent, and the word loses all discriminative power. If it means something more than that, Miller needs to specify what, and he never does.
Additional Issues
Cognitive gloss without explanatory content: When Miller says cells “make decisions,” he redescribes biochemistry in mentalistic vocabulary without adding any mechanism, prediction, or explanatory power. Terrence Deacon (Incomplete Nature, 2012) calls this the “homuncular fallacy”: explaining a cognitive capacity by positing a smaller cognitive agent inside the system. If the cell’s “intelligence” is just molecular signal transduction, then calling it “intelligence” adds nothing. If it is something more than molecular signal transduction, Miller must specify the additional ingredient, and he does not.
4. Cells Have “Preferences” and “Satisfaction” (6:24–8:36)
“Cells have states of preference… cells act to defend certain states of satisfaction. Preference. These are words that we don’t normally assign to cells, but they’re very important because they directly link to consciousness.”
“To sustain a state of preference, a state of satisfaction is an experience.”
Diagnosis: Nominalization cascade (reification through progressive word substitution)
Watch the sequence of conceptual slides:
- Cells maintain homeostatic setpoints (mechanistic description, uncontroversial)
- Cells “prefer” certain states (anthropomorphic redescription, smuggles in mental vocabulary)
- Cells have “states of preference” (verb frozen into noun: “preferring” becomes “a preference”)
- Preference requires “satisfaction” (second nominalization, introduces hedonic valence)
- Satisfaction is “an experience” (third nominalization, now claimed as phenomenal)
Each step is a tiny, almost imperceptible conceptual slide. But the distance between “maintains a setpoint via feedback regulation” and “has experiences” is enormous, and no argument bridges it. The work is done entirely by the nominalizations: turning process verbs (maintaining, regulating) into static nouns (preference, satisfaction, experience) and then treating the nouns as real things that need explanation.
Socratic Questions and Answers
Q: The argument moves from “the cell maintains a state” to “the cell experiences satisfaction.” Where in this chain was evidence for phenomenal experience introduced? Or was a word, “satisfaction,” introduced, and then the word treated as the evidence?
David Hume’s fork (1748, An Enquiry Concerning Human Understanding) divides claims into “relations of ideas” (true by definition) and “matters of fact” (true by observation). Miller’s chain is a series of definitional substitutions, not observations. No experiment is cited showing that cells experience satisfaction. The word “satisfaction” is introduced as a redescription of homeostatic regulation, then treated as evidence for phenomenal experience. This conflates a relation of ideas (the definitional chain) with a matter of fact (the empirical claim). The empirical claim, that cells have phenomenal experience, is never independently supported.
Q: Is there something it is like to be a cell defending a homeostatic setpoint?
Thomas Nagel’s formulation (1974, “What Is It Like to Be a Bat?,” Philosophical Review) defines consciousness as the presence of subjective character: there is something it is like to be in a given state. Miller asserts this for cells but provides no evidence beyond the definitional chain analyzed above. The closest empirical approach would be to look for neural correlates of consciousness (NCCs) or their functional equivalents, as Christof Koch and Giulio Tononi have pursued (2015, “Neural Correlates of Consciousness,” Current Biology). In vertebrates, the NCCs are associated with specific thalamocortical architectures, not with generic cellular signal processing. Single-celled organisms lack these architectures entirely, which provides positive evidence against Miller’s claim, not merely an absence of evidence for it.
Q: A thermostat “defends” a temperature setpoint. It even “acts to resolve the discrepancy” between its set temperature and the ambient temperature. Is it experiencing satisfaction when it reaches setpoint?
This is the thermostat problem, and it has a long pedigree. Hilary Putnam (1967, “The Nature of Mental States”) and later Ned Block (1978, “Troubles with Functionalism”) showed that purely functional definitions of mental states (defined by input-output relations) face the “liberalism problem”: they attribute mentality too broadly. If consciousness is “defending a setpoint,” then every thermostat, every governor on a steam engine, and every PID controller in an industrial plant is conscious. Miller needs to specify what additional criterion excludes thermostats but includes cells. He never does, except by implicit appeal to biological substrate, which contradicts his claim that the relevant property is functional (problem-solving, doubt-resolution) rather than material.
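Block’s liberalism problem is easy to exhibit: the hypothetical controller below “defends a setpoint,” detects discrepancy, and “acts to resolve it” in exactly the functional sense Miller invokes, in about ten lines (setpoint and deadband values are arbitrary).

```python
def thermostat(readings, setpoint=20.0, deadband=0.5):
    """Defend a temperature setpoint by pure input-output rules.
    Under a purely functional definition of consciousness (detect a
    discrepancy from a preferred state, act to resolve it), this loop
    qualifies -- which is Block's liberalism problem in miniature."""
    actions = []
    for temp in readings:
        if temp < setpoint - deadband:
            actions.append("heat on")
        elif temp > setpoint + deadband:
            actions.append("heat off")
        else:
            actions.append("hold")   # "satisfaction"? Nothing here feels anything.
    return actions

print(thermostat([18.0, 19.8, 21.2]))  # ['heat on', 'hold', 'heat off']
```

If cells are conscious in virtue of this functional profile, so is this function; if something excludes the function, Miller owes us the criterion.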
Additional Issue: Begging the question (petitio principii)
Miller’s argument has the form: “Cells defend states → defending states = having preferences → having preferences = experiencing satisfaction → experiencing satisfaction = consciousness.” But “defending states = having preferences” is precisely the conclusion that needs to be proven, not an acceptable premise. The argument is circular: it assumes that homeostatic regulation is a form of experience in order to conclude that cells have experiences.
5. The Hard Problem “Dissolved” (8:42–9:01)
“By saying what I’m saying now, we effectively work our way beyond Chalmers’ hard problem which is a brilliant idea but has been very a source of limitless obstruction in understanding consciousness.”
“There’s no separation into two aliquots of consciousness of there’s the easy things like the flow of chemicals and the hard things like abstraction. For a cell they’re all one and the same thing.”
Diagnosis: Begging the question
The Hard Problem, as formulated by Chalmers (1995), asks: “Why is there subjective experience at all? Why doesn’t all this information processing proceed in the dark, without any phenomenal feel?” Miller’s “dissolution” is: “For a cell, chemistry and experience are the same thing.” But this is simply asserting that chemistry = experience, which is the very thing the Hard Problem asks you to explain. Claiming to dissolve the Hard Problem by assuming its answer is textbook petitio principii (begging the question).
Socratic Questions and Answers
Q: The Hard Problem asks why information processing feels like anything. Saying “because cells’ information processing IS experience” is a restatement of the problem in assertive form, not an answer. What mechanism links signal transduction to phenomenal feeling?
Chalmers specifically anticipated this move. In “Facing Up to the Problem of Consciousness” (1995), he wrote: “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.” The question is not whether processing and experience co-occur but why. Miller’s assertion that they are “one and the same thing” is an identity claim (chemistry = experience). Identity claims in philosophy of mind require argument, not assertion. J.J.C. Smart (1959, “Sensations and Brain Processes”) and U.T. Place (1956, “Is Consciousness a Brain Process?”) provided arguments for the identity of mental states with brain states. Those arguments specifically concerned brain states, not generic cellular chemistry, because the empirical evidence shows that consciousness correlates with particular neural architectures, not with cellular processes in general.
Q: A cell has all the biochemical information about its environment. Does it also have the experience of redness, or pain, or satisfaction? If yes, on what basis? If no, then something is missing from the account.
Frank Jackson’s “Knowledge Argument” (1982, “Epiphenomenal Qualia,” Philosophical Quarterly) makes the point vivid: Mary, the color scientist who knows all the physical facts about color vision but has never seen red, learns something new when she sees red for the first time. The “something new” is the phenomenal quality. Even if we grant Miller that cells process all relevant information about their chemical environment, the question remains whether they experience any phenomenal quality in doing so. Jackson’s argument shows that functional/informational completeness does not automatically entail phenomenal experience. Miller claims it does, but his only support is the definitional chain from Section 4, which is circular.
Additional Issue: Misidentification of the Hard Problem
Miller characterizes the Hard Problem as “separating easy from hard aspects of consciousness.” This is not what the Hard Problem is. The Hard Problem is about the existence of subjective experience, not about sorting cognitive functions into “easy” and “hard” categories. Miller has misunderstood the problem he claims to dissolve.
6. No Objective Reality for Cells / Info Autopoiesis (9:11–15:57)
“There’s no such thing as absolute objective reality for any cell. Every bit of information that a cell has is ambiguous.”
“All the information that cell has is self-produced. There’s no objective that a cell has… It’s only information when it gets inside the cell.”
“We live in a self-generated, self-produced interpretation of reality.”
Diagnosis: Conflation of signal noise with epistemological subjectivity
Yes, signals degrade during transit through media (noise). Yes, cells process noisy signals. But “signal arrives with noise” does not entail “the cell lives in a self-generated interpretation of reality” in any phenomenologically meaningful sense. A radio receiver processes degraded signals too. We do not say the radio “lives in a self-generated interpretation of reality.”
Diagnosis: Equivocation on “self-produced”
Miller conflates two things:
1. Transduction: converting physical signals to biochemical signals (trivially true, philosophically uninteresting; all sensors do this)
2. Construction of subjective reality: creating an inner phenomenal world (a philosophically explosive claim requiring independent argument)
He slides from (1) to (2) without any bridging argument.
Diagnosis: Misuse of “information”
Miller says “matter and energy are not information. It’s only information when it gets inside the cell.” This contradicts Shannon information theory.
Socratic Questions and Answers
Q: Information is defined as reduction of uncertainty at a receiver. The receiver doesn’t need to be conscious for this definition to apply. A silicon photodetector transduces light into electrical signal. Is the information “self-produced” by the detector? If yes, so what? If no, what’s different about the cell?
Claude Shannon (1948, “A Mathematical Theory of Communication,” Bell System Technical Journal) defined information as a measure of uncertainty reduction at a receiver, with no reference to consciousness, experience, or interiority. Shannon information exists at a photodetector, a cell membrane, and a radio antenna equally. Miller’s claim that “it’s only information when it gets inside the cell” contradicts the foundational definition of information theory. If he means something different by “information,” he needs to specify what. If he means Shannon information, his claim is false. If he means “phenomenal content” or “meaningful information,” he has assumed the conclusion (that cells have phenomenal processing) rather than argued for it.
Q: Autopoiesis was carefully distinguished from cognition, and both from consciousness, by its originators. On what basis are all three collapsed?
Humberto Maturana and Francisco Varela coined “autopoiesis” (Autopoiesis and Cognition, first published in Spanish in 1973) to describe self-producing organizational closure. They were careful to distinguish autopoiesis (self-production of organizational structure) from cognition (effective behavior in a domain of interaction) and both from consciousness (subjective experience). Varela later, with Thompson and Rosch (The Embodied Mind, 1991), developed the enactivist approach, which links cognition to embodied sensorimotor engagement but still does not collapse it into generic cellular chemistry. Miller cites “info-autopoiesis” from Jaime F. Cárdenas-García and treats it as proof that cells are conscious experiencers. But “information is observer-dependent” (a legitimate philosophical point) does not entail “cells are conscious observers” (an extraordinary empirical claim). The gap between the two claims is where all the philosophical work needs to happen, and Miller simply skips it.
Additional Issue: Map/territory confusion
Miller says “we live in a self-generated, self-produced interpretation of reality.” This is the map/territory distinction (Alfred Korzybski, 1933, Science and Sanity). Yes, organisms operate on internal representations, not on raw reality. But acknowledging that organisms use maps does not make the maps “conscious.” A GPS navigation system also operates on an internally generated model of the road network. The existence of internal representation is necessary but not sufficient for consciousness. Miller treats it as sufficient.
7. The Continuum of Consciousness (18:27–19:13)
“Do you see any fundamental differences between the consciousness of a cell and the consciousness of a human?”
“Yes and no… We share an absolute self-similarity to every other living organism from the cell through plants through insects and so on.”
Diagnosis: Motte-and-bailey oscillation
The motte (defensible position): All living organisms share basic cellular processes, including signal transduction, feedback regulation, and gene expression. These are homologous across the tree of life.
The bailey (indefensible position): This shared biochemistry constitutes shared “consciousness” varying only in “complexity.”
When pressed on the difference between a cell and a human, Miller retreats to “we share self-similarities” (motte). But his titular claim, “every cell in your body is conscious,” requires the bailey. The word “consciousness” oscillates between “does biochemistry” and “has inner experience” depending on which is more defensible at the moment. This is the motte-and-bailey fallacy identified by Nicholas Shackel (2005, “The Vacuity of Postmodernist Methodology,” Metaphilosophy): advancing a bold claim when unchallenged, retreating to a trivial claim when challenged, then re-advancing the bold claim once pressure eases.
Socratic Question and Answer
Q: If I could expand a cell to the size of a mill and walk inside it, I would find nothing but molecular parts pushing against one another. Where in this machinery is the consciousness?
This thought experiment, adapted from Leibniz’s “Mill Argument” (Monadology, 1714, §17), highlights the explanatory gap between mechanism and experience. Leibniz argued that no arrangement of mechanical parts, however complex, can explain perception. The force of the argument persists: when we describe E. coli chemotaxis in complete molecular detail, we can trace every phosphorylation event, every conformational change, every motor reversal. At no point in this description does “experience” appear as a necessary explanatory posit. The behavior is fully explained by the molecular mechanism. Adding “consciousness” to the explanation is like adding phlogiston to a complete account of combustion: it does no work.
8. Collaboration, Cooperation, and Cellular “Rules” (19:13–24:26)
“Cells avidly engage in collaboration, cooperation, establish codependent relationships and they compete.”
“Cells respect each other’s self-integrity. Cells get along.”
“Cancer doesn’t play by the rules. It doesn’t establish codependencies. It doesn’t respect self-integrity. That’s why it’s cancer.”
“They act under the principle that you serve yourself best by serving others.”
Diagnosis: Anthropomorphism disguised as explanation
“Collaboration,” “cooperation,” “respect for self-integrity,” “serving others” are social-psychological terms applied to molecular processes. When cells “cooperate,” what actually happens is that their gene regulatory networks, constrained by billions of years of selection, produce compatible signals within a shared extracellular environment. This is not cooperation in any sense that implies intent, agreement, or ethical behavior.
Diagnosis: Projection-retrieval fallacy (circular explanation)
Miller insists “I’m not humanizing cells… I’m explaining humans through cells.” But he simultaneously uses human social vocabulary (“rules,” “respect,” “service”) to describe cellular behavior. The circularity: Project human concepts onto cells → claim cells exhibit human-like behavior → conclude humans are explained by cells that exhibit human-like behavior. This is reading your conclusion into the data and then “discovering” it there.
Socratic Questions and Answers
Q: Inclusive fitness theory and kin selection explain cooperative behavior in multicellular organisms without invoking altruistic intent. Each cell carries the same genome. The “cooperation” is explained by shared genetic interest, not by cellular consciousness deciding to be helpful. What does “consciousness” add to this explanation?
W.D. Hamilton (1964, “The Genetical Evolution of Social Behaviour,” Journal of Theoretical Biology) showed that cooperative behavior evolves when the fitness benefit to the recipient, weighted by relatedness, exceeds the fitness cost to the actor (Hamilton’s rule: rB > C). In a multicellular organism, all somatic cells share the same genome (r = 1), so cooperation is expected under even the simplest kin selection models. Richard Dawkins (The Selfish Gene, 1976) extended this: cells “cooperate” because the genes coding for cooperation are replicated through the germline. No cellular consciousness is required. The cooperation is fully explained by evolutionary dynamics. Miller’s invocation of “consciousness” is explanatorily superfluous, violating Ockham’s razor: do not multiply entities beyond necessity.
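Hamilton’s rule is a one-line inequality, which makes the explanatory point easy to see with illustrative numbers (the benefit and cost values below are made up for the example).

```python
def cooperation_favored(r, benefit, cost):
    """Hamilton's rule: a gene for a cooperative act spreads when
    r * B > C (relatedness times benefit to recipient exceeds cost
    to actor). Illustrative values only."""
    return r * benefit > cost

# Somatic cells in one body are clones (r = 1), so even a very costly
# act (e.g. apoptosis, C near the cell's whole fitness) is favored
# whenever the body gains more than the cell loses:
print(cooperation_favored(r=1.0, benefit=1.5, cost=1.0))   # True
# Between unrelated cells (r ~ 0), the same act is never favored:
print(cooperation_favored(r=0.0, benefit=1.5, cost=1.0))   # False
```

The inequality does the entire explanatory job; adding “cellular consciousness” changes no prediction, which is the Ockham’s-razor point.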
Q: Evolutionary game theory explains cellular interactions through selection dynamics and payoff matrices, without attributing cognition to the players. Bacteria “cooperate” in biofilms because cooperation is an evolutionarily stable strategy, not because they have decided to be nice. Is the output of selection being redescribed as the input of intelligence?
John Maynard Smith and Eörös Szathmáry (The Major Transitions in Evolution, 1995) showed that major transitions in evolution (from single cells to multicellular organisms, from solitary organisms to eusocial colonies) can be explained by selection for cooperation when it increases inclusive fitness. At each transition, mechanisms evolve to suppress cheating (cancer is literally failure of these mechanisms). The “rules” Miller describes are not social agreements; they are molecular signaling constraints (contact inhibition mediated by cadherins and integrins, apoptosis pathways like p53, immune surveillance via MHC presentation). Describing these constraints as “rules that cells play by” substitutes metaphor for mechanism, which is the reification error: treating a metaphor as a literal description of reality.
Additional Issue: The cancer argument as just-so story
Miller says cancer is proof that “rules” exist, because cancer breaks them. But “cancer breaks the rules” is not an explanation of cancer. Hanahan and Weinberg (2000, 2011, “The Hallmarks of Cancer,” Cell) identified six hallmarks of cancer, later expanded to ten traits, each explained by specific molecular mechanisms: sustained proliferative signaling, evasion of growth suppressors, resistance to cell death, replicative immortality, angiogenesis, invasion and metastasis, reprogrammed energy metabolism, immune evasion, genome instability, and tumor-promoting inflammation. Every hallmark has a molecular explanation. None requires invoking “rules” that cancer “decided” to break. The anthropomorphic framing obscures the actual biology.
9. Cancer as “Alternative Self” and Niche Construction (27:39–35:02)
“The best way to understand cancer is to correctly understand that it is a different self. It’s not a normative self.”
“What are metastases? They are niche constructions of different sites in the body.”
“Blocking their communication, blocking their capacity to steal resources from normal cells would provide better pathways than trying to clobber cells with toxins.”
Credit where due
Miller’s suggestion of disrupting cancer cell communication rather than killing the cells directly is genuinely interesting and aligns with real research directions: targeting the tumor microenvironment, disrupting cancer-associated fibroblasts, interfering with exosome signaling, and manipulating the cancer-associated immune environment (Quail & Joyce, 2013, “Microenvironmental Regulation of Tumor Progression and Metastasis,” Nature Medicine). This therapeutic intuition has value independent of his consciousness framework.
Diagnosis: Anthropomorphism without predictive gain
However, describing cancer as a “different self” that “doesn’t play by the rules” is anthropomorphism, not mechanism. Cancer cells do not “decide” to become a “different self.” They accumulate mutations that disable growth controls. The “selfhood” framing adds no predictive power beyond what molecular oncology already provides.
Socratic Question and Answer
Q: The ten hallmarks of cancer are each explained by specific molecular mechanisms. What does calling cancer a “different self” add to any of these ten mechanisms? Does it predict an eleventh?
Robert Weinberg (co-author with Hanahan of the Hallmarks papers, and author of The Biology of Cancer, 2013) would note that every therapeutic advance in cancer treatment has come from understanding specific molecular targets: imatinib targets BCR-ABL (chronic myeloid leukemia), trastuzumab targets HER2 (breast cancer), pembrolizumab targets PD-1 (immune checkpoint). None of these advances required thinking of cancer as a “different self.” The therapeutic directions Miller suggests (communication disruption, microenvironment manipulation) are already being pursued through molecular frameworks. The consciousness framing is a terminological overlay on work that proceeds perfectly well without it.
10. The Brain: “Something in that Aggregation” (41:26–42:48)
“Something in that aggregation enables us to be the special beings that we are. And I’m not saying when I say special, we’re privileged.”
“Yes, we are special, but that doesn’t connote in my terms a hierarchical privilege on the planet.”
Diagnosis: Gap in the theory plugged with mystery (argument from ignorance, cellular edition)
Miller says the brain is “special” but “I don’t think we know why.” His framework claims to explain consciousness bottom-up from cells. But when asked what makes the brain different, he has no answer from within his framework. If consciousness is cellular, and every cell is conscious, the brain should be no more conscious than the liver (which also contains hundreds of billions of cooperating cells). The fact that it IS different is a problem for his theory, and he handles it by gesturing at mystery: “Something in that aggregation…”
This is the argument from ignorance applied to his own framework: “We don’t know what makes brains special → therefore my cellular consciousness theory isn’t refuted.” But actually, if cellular consciousness theory cannot explain why brains produce rich phenomenal experience and livers do not, that IS evidence against the theory’s sufficiency.
Socratic Question and Answer
Q: If consciousness arises from cellular signal processing, why isn’t the cerebellum conscious? It has more neurons than the cerebral cortex. It processes enormous amounts of information in parallel. Yet cerebellar damage doesn’t affect consciousness. How does the theory explain this?
This is one of the sharpest empirical challenges to any theory that grounds consciousness in generic cellular or neural processing. The cerebellum contains approximately 69 billion neurons, roughly 80% of the brain’s total (Herculano-Houzel, 2009, “The Human Brain in Numbers,” Frontiers in Human Neuroscience). It performs sophisticated information processing, including motor coordination, timing, and some forms of learning. Yet cerebellar lesions do not produce loss of consciousness (Tononi & Koch, 2015, “Consciousness: Here, There and Everywhere?,” Philosophical Transactions of the Royal Society B). Patients with complete cerebellar agenesis can be fully conscious. This empirically falsifies “more cells cooperating = more consciousness” and demonstrates that the architecture of neural connectivity, specifically the reentrant connectivity of the thalamocortical system (Edelman, 2003, Wider Than the Sky), matters more than cell count or generic signal processing. Miller’s framework, focused on individual cell properties, cannot account for this architectural dependence.
11. Space Contamination Digression (43:08–51:55)
“We’re sending conscious cells out… There are, if revived, conscious living beings on those probes. This is not conjecture. This is near certainty.”
“Doesn’t anyone see that we become agents of panspermia?”
Diagnosis: Smuggling contested claims as established facts
The contamination claim is supported: NASA’s own research confirms that spacecraft sterilization is imperfect (Rummel et al., 2014, “A New Analysis of Mars ‘Special Regions’,” Astrobiology). The planetary protection concern is legitimate and worth more public discussion. But calling these microbes “conscious living beings” smuggles in the contested claim (that cells are conscious) as though it were established fact. The contamination issue stands perfectly well without the consciousness claim and is arguably weakened by it, because attaching a controversial philosophical claim to a legitimate practical concern makes the practical concern easier to dismiss.
Potential for public harm
Framing microbes as “conscious living beings” in the context of space exploration could either (a) trivialize consciousness (if taken seriously, it implies we commit moral wrongs every time we autoclave laboratory equipment) or (b) undermine legitimate planetary protection efforts (if dismissed as kooky, the real contamination concern gets dismissed along with it).
12. Problem-Solving as “Essence of Consciousness” (52:28–53:29)
“Problem solving is the essence of consciousness. The ability to work hard to resolve ambiguities, uncertainties. This is what consciousness is.”
Diagnosis: Definitional collapse (five distinct concepts treated as synonyms)
By this point Miller has fully collapsed:
- Signal transduction (molecular pathways responding to gradients)
- Homeostatic regulation (feedback loops maintaining setpoints)
- Adaptive behavior (evolved responses to environmental challenges)
- Problem-solving (cognitive process requiring representation of goal states)
- Consciousness (phenomenal experience)
All five are different things. Each requires different explanatory frameworks. Miller treats them as synonyms. This makes the claim unfalsifiable, because anything that responds to its environment now counts as “conscious.”
Socratic Questions and Answers
Q: A thermostat solves the problem of maintaining room temperature. A pocket calculator solves arithmetic problems. Are they conscious? If not, what’s the difference? If the difference is “biological substrate,” then consciousness isn’t defined by problem-solving; it’s defined by biology. Which is it?
John Searle (1992, The Rediscovery of the Mind) called this the “biology objection” to functionalism. If consciousness depends on functional properties (problem-solving, information integration), then any system with those functional properties should be conscious. If it depends on biology, then the functional definition is insufficient and you need to specify what biological property matters. Miller cannot have it both ways: he cannot define consciousness functionally (to include cells) and then deny it to AI systems (which have the same functional properties) by appealing to biology. Searle himself held that consciousness depends on specific biological processes (“biological naturalism”), but he provided arguments for this position. Miller provides none.
Q: Alan Turing carefully distinguished between “Can machines think?” (which depends on your definition of “think”) and “Can machines do X?” (which is empirically testable). Under Miller’s definition, a Turing machine is conscious. Has something been discovered about machines, or about definitions?
Turing’s insight (1950, “Computing Machinery and Intelligence,” Mind) was that definitional questions are often disguised as empirical ones. If we define “thinking” as “behaving indistinguishably from a thinker,” then a machine that passes the Turing test thinks, by definition. If we define “consciousness” as “problem-solving under ambiguity,” then every feedback system is conscious, by definition. In neither case have we learned anything about the system. We have only learned something about our definition. Miller’s claim reduces to: “Under my definition, cells are conscious.” This is trivially true and empirically empty.
13. Membranes and “Maxwell’s Demon” (53:42–55:27)
“You have to have an external boundary… it has to be a specific type of membrane, a discriminating membrane. It actually has to be a Maxwell’s demon.”
Diagnosis: Thermodynamic error (invoking Maxwell’s Demon against its own resolution)
Maxwell’s Demon is a thought experiment designed to show that information processing has thermodynamic costs (James Clerk Maxwell, 1871, Theory of Heat; Leo Szilard, 1929; Rolf Landauer, 1961; Charles Bennett, 1982). The resolution of the paradox is that the demon must expend energy to erase information (the Landauer-Bennett resolution), thus increasing total entropy. Miller invokes the Demon as though it supports his argument for biological exceptionalism. It does the opposite. The Demon shows that any discriminating barrier costs energy to operate. This applies equally to cell membranes and to silicon transistors. It does not privilege biological membranes as uniquely conscious.
Socratic Question and Answer
Q: Landauer showed that erasing one bit of information costs at least kT ln 2 of energy (verified experimentally by Bérut et al., 2012, in Nature). This applies to any physical information processor. A cell membrane pays this cost. So does a MOSFET gate. If “being a discriminating boundary that pays thermodynamic costs” is sufficient for consciousness, then every transistor is conscious. Is this an acceptable conclusion?
Rolf Landauer’s principle (1961, “Irreversibility and Heat Generation in the Computing Process,” IBM Journal of Research and Development) establishes that the minimum energy cost of erasing one bit is kT ln 2, approximately 2.8 × 10⁻²¹ joules at room temperature. This was experimentally verified by Bérut et al. (2012, “Experimental Verification of Landauer’s Principle,” Nature). The principle applies to ALL physical systems that process information, biological or not. Charles Bennett (1982, “The Thermodynamics of Computation,” International Journal of Theoretical Physics) showed that this resolves the Maxwell’s Demon paradox: the demon’s memory must eventually be erased, and this erasure pays the thermodynamic cost. Miller cites Maxwell’s Demon as supporting cellular consciousness, but the demon’s resolution shows that information processing and thermodynamic cost are universal features of physical systems, not unique to biology. If thermodynamic cost of discrimination were sufficient for consciousness, every semiconductor device would be conscious.
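The Landauer bound quoted above is simple enough to compute directly. A minimal Python check, taking 300 K as a conventional room temperature (the exact figure shifts slightly with the temperature chosen):

```python
import math

# Landauer bound: minimum energy to erase one bit is kT ln 2.
# The bound holds for any physical information processor,
# cell membrane and MOSFET gate alike.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T_room = 300.0       # a conventional "room temperature," in kelvin

e_min = k_B * T_room * math.log(2)
print(f"kT ln 2 at {T_room} K = {e_min:.2e} J")  # 2.87e-21 J
```

Nothing in the calculation distinguishes biological from artificial hardware, which is precisely the point against Miller's use of the Demon.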
14. Viruses: “Maybe” Conscious (56:01–57:17)
“Do viruses qualify? Absolutely. Maybe. And the reason I say that is… they have a capsule… they have memory and they have a boundary. An efficient discriminating boundary. So it is possible that they are conscious.”
Diagnosis: Internal contradiction (failure to apply own stated criteria)
Miller has just stated that consciousness requires a “discriminating membrane” that acts as a “Maxwell’s Demon.” Viral capsids do not discriminate in this sense. They are passive protein shells that do not selectively transport ions, do not process signals, and do not maintain any homeostatic setpoint. By Miller’s own criteria, viruses should not qualify. But he says “maybe” anyway, which suggests the criteria are flexible enough to accommodate any answer.
Diagnosis: Unfalsifiability by flexibility
When stated criteria would produce an answer Miller doesn’t want, he softens them. When they produce an answer he does want, he applies them strictly. This is what Antony Flew (1950, “Theology and Falsification,” University) called the “death of a thousand qualifications”: a claim that is progressively hedged until it is compatible with every possible observation and therefore empirically empty. If “maybe” is always an acceptable answer, the framework can never be falsified.
Socratic Question and Answer
Q: Two necessary conditions were stated for consciousness: a discriminating membrane and retrievable memory. Viruses have neither in the functional sense described. The capsid is not a signal-discriminating membrane, and viruses have no autonomous metabolism or gene expression machinery. By the stated criteria, the answer should be “no.” The fact that the answer is “maybe” suggests the criteria don’t actually determine the conclusions. What does?
The actual biology is clear. Viruses are obligate intracellular parasites. They have no metabolism, no signal transduction, no gene expression outside a host cell (Koonin & Dolja, 2014, “Virus World as an Evolutionary Network of Viruses and Capsidless Selfish Elements,” Microbiology and Molecular Biology Reviews). By any functional definition of consciousness that requires active discrimination, self-maintenance, or information processing, viruses do not qualify. Miller’s willingness to include them anyway reveals that his framework is not constrained by its own stated criteria. This is a hallmark of what Imre Lakatos (1978, The Methodology of Scientific Research Programmes) called a “degenerating research program”: one that accommodates anomalies through ad hoc adjustments rather than making novel predictions.
15. Cognition-Based Evolution (1:05:13–1:12:29)
“Cognition based evolution is the only comprehensive alternative to standard neo-Darwinism that has been published.”
“What you have is intelligent cells measuring ambiguous environmental stresses and collaborating, cooperating, problem solving together. And that’s what creates forms.”
Diagnosis: Factual error
The extended evolutionary synthesis (Laland et al., 2015, “The Extended Evolutionary Synthesis,” Proceedings of the Royal Society B), neutral theory (Motoo Kimura, 1983, The Neutral Theory of Molecular Evolution), niche construction theory (Odling-Smee, Laland & Feldman, 2003, Niche Construction), and developmental systems theory (Oyama, Griffiths & Gray, 2001, Cycles of Contingency) are all published comprehensive alternatives to standard neo-Darwinism. Miller’s claim to uniqueness is false.
Diagnosis: Discovery Institute isomorphism (identical logical skeleton)
The structure of Cognition-Based Evolution is formally identical to Intelligent Design:
| Step | Intelligent Design | Cognition-Based Evolution |
|---|---|---|
| 1. Observe complex outcome | Complex biological structures | Complex biological structures |
| 2. Declare inadequacy of naturalistic explanation | “Random mutation can’t explain this” | “Random mutation can’t explain this” |
| 3. Install special primitive | “Intelligent Designer” | “Intelligent cells” |
| 4. Declare mystery solved | “Design explains complexity” | “Cognition explains complexity” |
The formal structure is identical. The only difference is the location of the “intelligence”: outside the system (classical ID) or inside the cells (Miller). In both cases, “intelligence” does no explanatory work beyond redescribing the phenomenon.
Socratic Questions and Answers
Q: Most evolutionary change is neutral: neither adaptive nor maladaptive. How does the theory account for the neutral majority of evolutionary change? Do cells “decide” to make neutral mutations?
Motoo Kimura (1968, “Evolutionary Rate at the Molecular Level,” Nature; 1983, The Neutral Theory of Molecular Evolution) demonstrated that the vast majority of evolutionary changes at the molecular level are selectively neutral, fixed by random genetic drift rather than selection. Michael Lynch (2007, The Origins of Genome Architecture) extended this, showing that many complex features of eukaryotic genomes (introns, gene duplications, regulatory complexity) are better explained by population genetic drift in small populations than by selection. These neutral changes constitute the overwhelming majority of evolutionary change. Miller’s framework, in which evolution is driven by “intelligent cells measuring and solving problems,” has no account of neutral evolution. Neutral mutations are not “solutions to problems.” They are accidents that persist because selection cannot distinguish them from alternatives. This is not a minor gap; neutral evolution accounts for the majority of molecular change.
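The logic of neutral fixation can be made concrete with a toy Wright-Fisher simulation (a standard population-genetics model; the population size and run counts here are illustrative only). With no selection term anywhere in the code, alleles still routinely reach fixation by sampling accident alone:

```python
import random

def wright_fisher_fixation(pop_size, p0, seed):
    """Simulate neutral Wright-Fisher drift until an allele fixes or is lost.
    No selection anywhere: each generation simply resamples the allele
    binomially at its current frequency. Returns True on fixation."""
    rng = random.Random(seed)
    count = round(p0 * pop_size)
    while 0 < count < pop_size:
        p = count / pop_size
        count = sum(rng.random() < p for _ in range(pop_size))
    return count == pop_size

# A neutral allele starting at frequency 0.5 fixes in roughly half of runs,
# purely by chance -- no cell "solved" anything.
runs = [wright_fisher_fixation(pop_size=50, p0=0.5, seed=s) for s in range(200)]
print(sum(runs) / len(runs))  # close to 0.5
```

The fixation probability of a neutral allele equals its starting frequency, exactly as Kimura's theory predicts, with no "problem-solving" term available to do any work.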
Q: The Lenski Long-Term Evolution Experiment directly demonstrates evolution through random mutation plus selection. How is this reconciled with the claim that “random variation can’t lead to productive outputs”?
Richard Lenski’s experiment (begun 1988; key papers: Blount, Borland & Lenski, 2008, “Historical Contingency and the Evolution of a Key Innovation in an Experimental Population of Escherichia coli,” PNAS; Lenski et al., 2015, “Sustained Fitness Gains and Variability in Fitness Trajectories,” Proceedings of the Royal Society B) has followed 12 populations of E. coli for over 75,000 generations. It has directly observed: (1) novel metabolic capabilities arising through random mutation (citrate utilization in an aerobic environment, which E. coli normally cannot do), (2) fitness increases accumulated through cumulative selection, (3) historical contingency in which earlier random mutations potentiate later ones. This is direct experimental falsification of Miller’s claim that “a series of accidental things concatenated, linked together, never leads to solutions of problems.” In Lenski’s experiment, that is exactly what happened, on camera, for over three decades.
16. Cancer “Going Backwards” / Genes as “Tools” (1:06:05–1:07:44)
“Genes are tools. They’re memories… genes are the record of solutions to previous problems.”
“Cancer can go backwards in time… this ability to regress backwards and find older solutions to new problems… is one of the very strong reasons why we can state confidently that random genetic variations has almost nothing to do with evolution because you couldn’t go backwards in a toolbox if they had gotten there randomly.”
Diagnosis: Non sequitur, affirming the consequent
The argument: “Cancer cells reactivate ancestral gene expression patterns → therefore genes couldn’t have gotten there randomly.” This does not follow. Atavistic gene reactivation (Davies & Lineweaver, 2011, “Cancer Tumors as Metazoa 1.0,” Physical Biology) is explained by the fact that ancestral regulatory programs are still present in the genome but normally silenced by epigenetic mechanisms. When silencing mechanisms break down (as in cancer), older programs re-emerge. This requires no “intelligence”; only the well-established fact that evolution is conservative and builds on existing scaffolds.
Named fallacy: Affirming the consequent
The logical form:
- Premise: “If genes were intelligently placed, you could go backwards in the toolbox”
- Observation: “Cancer goes backwards in the toolbox”
- Invalid conclusion: “Therefore genes were intelligently placed”
This has the form: If P then Q; Q; therefore P. This is the textbook fallacy of affirming the consequent. The alternative explanation (evolutionary conservation of ancestral programs, plus epigenetic silencing that fails in cancer) accounts for the observation without invoking intelligence.
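The invalidity of the form can be verified mechanically. A small Python truth-table search for a counterexample row (both premises true, conclusion false) finds exactly one:

```python
from itertools import product

# Check that {P -> Q, Q} does not entail P: a single row where the
# premises hold and the conclusion fails shows the form is invalid.
counterexamples = [
    (p, q)
    for p, q in product([False, True], repeat=2)
    if ((not p) or q) and q and not p  # premises true, conclusion false
]
print(counterexamples)  # [(False, True)]
```

The row P = False, Q = True corresponds to the alternative explanation in the text: the toolbox was not intelligently placed, yet cancer still "goes backwards," because conserved ancestral programs suffice to make Q true without P.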
Socratic Question and Answer
Q: François Jacob argued that evolution works like a tinkerer, not an engineer: it repurposes existing parts. Cancer reactivating ancestral programs is tinkering with what’s already there. The programs are “there” because selection preserved them (or failed to eliminate them). Why does their reactivation require that they were placed there “intelligently”?
François Jacob (1977, “Evolution and Tinkering,” Science) made the definitive case that evolution is opportunistic bricolage, not forward-planning engineering. Ancestral programs persist in genomes because genomes are palimpsests: new regulatory layers are written on top of old ones without erasing them. Cancer’s reactivation of ancestral programs (documented by Trigos et al., 2017, “Altered Interactions Between Unicellular and Multicellular Genes Drive Hallmarks of Transformation,” PNAS) occurs when the newer regulatory layers fail. This is like a building’s modern wiring failing and the old knob-and-tube wiring becoming active again. Nobody concludes that the old wiring was “intelligently placed for future use.” It was simply never removed.
17. The Neo-Darwinian Strawman (1:18:26–1:23:22)
“A series of accidental things concatenated, linked together, never leads to solutions of problems unless you have a harnessing of that stochasticity.”
“Random errors, random replicative errors, which is the basis of standard neo-Darwinism, wouldn’t lead to productive outputs any more than errors in my Windows 11 program are going to lead to proper results on my computer.”
Diagnosis: Strawman fallacy
No evolutionary biologist claims that random mutation alone produces adaptation. The neo-Darwinian synthesis says: random mutation generates variation → natural selection preserves variants that increase fitness → cumulative selection over generations produces adaptation. The selection step is nonrandom. This is elementary. Miller attacks a theory that no one holds.
Diagnosis: False analogy
Software errors don’t improve programs because software doesn’t reproduce, doesn’t undergo selection, and doesn’t have a fitness landscape. Biological evolution has all three. The analogy fails on every relevant dimension. In fact, genetic algorithms (which apply mutation + selection to digital code) DO produce novel solutions to engineering problems (Holland, 1975, Adaptation in Natural and Artificial Systems). When you add selection to random variation in digital systems, productive outputs emerge routinely. The Windows analogy fails precisely because Windows doesn’t have selection, not because random variation is inherently unproductive.
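The point about genetic algorithms can be demonstrated in a few lines. The toy Python sketch below (a minimal Holland-style setup with illustrative parameters, not a reconstruction of any published experiment) applies identical random mutation with and without a selection step; only the run with selection climbs:

```python
import random

rng = random.Random(0)
GENOME_LEN = 40

def fitness(bits):
    # Toy fitness: number of 1s (the "adapted" state is the all-ones string).
    return sum(bits)

def mutate(bits, rate=0.02):
    # Random, undirected bit flips: the purely "accidental" variation step.
    return [b ^ (rng.random() < rate) for b in bits]

def evolve(select, generations=300, pop=50):
    population = [[0] * GENOME_LEN for _ in range(pop)]
    for _ in range(generations):
        offspring = [mutate(rng.choice(population)) for _ in range(pop)]
        if select:
            # Keep the fitter half: the nonrandom step the Windows analogy omits.
            population = sorted(population + offspring, key=fitness)[-pop:]
        else:
            population = offspring  # pure random variation, no selection
    return max(fitness(ind) for ind in population)

with_selection = evolve(select=True)
without_selection = evolve(select=False)
print("mutation + selection:", with_selection)   # climbs toward 40
print("mutation alone:", without_selection)      # drifts near the random baseline
```

The mutation operator is identical in both runs. Adding the selection filter, and nothing else, converts "accidents" into cumulative adaptation, which is the actual neo-Darwinian claim.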
Socratic Questions and Answers
Q: Natural selection, acting on variation, is the creative force in evolution. Random variation alone was never the claim. What specific aspect of the actual modern synthesis (not the strawman) does Cognition-Based Evolution improve upon?
Darwin himself (1859, On the Origin of Species) was explicit that variation alone does not produce adaptation: “I have called this principle, by which each slight variation, if useful, is preserved, by the term Natural Selection.” Ronald Fisher (1930, The Genetical Theory of Natural Selection) proved mathematically that even tiny fitness differences, accumulated over many generations via selection, produce dramatic adaptations. The “Fundamental Theorem of Natural Selection” does not require intelligent mutation, only heritable variation and differential reproduction. Miller’s attack on “random mutation” is an attack on a position no evolutionary biologist since the 1930s has held. The actual modern synthesis, which includes neutral theory, population genetics, evo-devo, and the extended evolutionary synthesis, is a far more sophisticated and well-supported framework than Miller acknowledges.
Q: The Lenski experiment directly falsifies the claim that “random variation can’t lead to productive outputs.” 75,000+ generations of E. coli evolving novel metabolic capabilities through random mutation plus selection, in real time, in a laboratory. How is this reconciled?
It cannot be reconciled. This is direct, repeated, experimentally controlled falsification. (See Section 15 for full citation.)
18. AI is Not Conscious (1:19:11–1:30:37)
“Of course AI is not conscious.”
“There is never doubt in the computer system. There’s no doubt.”
“The computer only has binary inputs.”
“The explicit reason why AI is not conscious is the principle of info autopoiesis… We self-generate our information internally. That’s consciousness.”
Diagnosis: Factual error (“binary inputs”)
Modern AI systems operate on continuous-valued floating-point representations. Neural networks process gradient information across millions of parameters. The inputs to a large language model are not “binary” in any meaningful sense; they are high-dimensional vector representations with continuous activations. Miller appears to be describing a 1960s understanding of computation.
Diagnosis: Special pleading (criteria applied inconsistently)
Applying Miller’s own stated criteria to AI:
| Miller’s criterion | Does AI meet it? |
|---|---|
| Deals with ambiguous information | Yes: language is massively ambiguous; LLMs resolve ambiguity constantly |
| Self-generates information internally | Yes: LLMs generate outputs not present in any single training input |
| Has a boundary | Yes: defined computational boundary |
| Has retrievable memory | Yes: trained parameters, context windows |
| Resolves uncertainty | Yes: probabilistic inference over token distributions |
| Makes “decisions” | Yes: selects among possible continuations |
By Miller’s own criteria, AI should qualify as conscious. But he says it doesn’t. This means either his criteria are insufficient (which undermines his argument for cellular consciousness) or he is applying them inconsistently. The inconsistent application is the fallacy of special pleading: applying rules selectively to get the desired answer.
Socratic Questions and Answers
Q: You’ve stated criteria for consciousness. AI meets those criteria. You deny AI consciousness anyway. Either the criteria are wrong, or there is an unstated criterion. Which is it?
Turing (1950) anticipated this exact problem. When definitions of “thinking” were gerrymandered to exclude machines, he proposed replacing the definitional question with a behavioral test. Miller’s situation is the reverse: his definition of consciousness is so broad that it includes machines, and he must gerrymander to exclude them. The intellectual move is the same in both directions. If “problem-solving under ambiguity with a boundary and memory” is genuinely sufficient for consciousness, then AI qualifies. If biology is additionally required, then consciousness is not defined by problem-solving under ambiguity; it is defined by specific biological properties that Miller has not identified.
Q: Miller says “there is no doubt in the computer system.” But a language model’s probability distributions over tokens ARE representations of uncertainty. A system that assigns 60% probability to one interpretation and 40% to another is in a state that is functionally isomorphic to “doubt.” If this is denied, what is the relevant difference between biochemical uncertainty-processing and computational uncertainty-processing?
Karl Friston’s free energy principle (2010, “The Free-Energy Principle: A Unified Brain Theory?,” Nature Reviews Neuroscience) formalizes biological systems as minimizing variational free energy, which is a measure of the divergence between internal predictions and sensory input, effectively a formalization of “surprise.” Modern AI systems also minimize prediction error (cross-entropy loss during training is formally related to variational free energy). If “minimizing surprise” is the criterion for consciousness (as Miller suggests when he invokes Friston), then AI systems that minimize prediction error meet the criterion. Miller invokes Friston to support cellular consciousness but ignores that the same mathematical framework applies to AI. This is selective citation, choosing which implications of a framework to accept based on the desired conclusion rather than the framework’s logic.
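That a probability distribution represents uncertainty is not a metaphor: Shannon entropy quantifies it. A minimal Python illustration of the 60/40 case from the question above:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution: the standard
    quantitative measure of uncertainty ('doubt', functionally speaking)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A model torn 60/40 between two readings carries nearly a full bit of
# uncertainty; a model that is certain carries none.
ambiguous = shannon_entropy([0.6, 0.4])   # about 0.971 bits
certain = shannon_entropy([1.0, 0.0])     # 0.0 bits
print(ambiguous, certain)
```

A system whose internal state has nonzero entropy over interpretations is, in every functional sense Miller's criteria invoke, "in doubt."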
Additional Issue: “No beginning of a chance” as argument from ignorance
“No AI can do that. And there’s not even the beginning of a chance that we’ll do it soon, if ever, because we have no idea how to do such a thing.”
“We don’t know how to make AI conscious → therefore it can’t be done” is a textbook argument from ignorance (argumentum ad ignorantiam). In 1900 there was “no idea” how to transmit moving images wirelessly. This is not evidence of impossibility.
This is also contradicted by Miller’s own framework: if consciousness is “just” signal processing under ambiguity with a boundary and memory, then engineering those features into artificial systems should be straightforward in principle. His certainty that it is impossible sits oddly with his claim that the mechanism is simple enough to be present in every cell.
19. “You Serve Yourself Best by Serving Others”: Purpose (58:42–59:13)
“Life has purpose, meaning, and goal directedness. Because your cells are exemplars of that. They’re problem solving decision making to satisfy states of preference. That’s our goal directedness at the very [base]… What a cellular purpose is: you serve yourself best by serving others.”
Diagnosis: Naturalistic fallacy (Hume’s guillotine)
Miller derives an “ought” (you should serve others) from an “is” (cells cooperate). Even if cells cooperated in the way he describes, this would not establish that cooperation is purposeful or meaningful in the moral sense. David Hume (1739, A Treatise of Human Nature, Book III) argued that no amount of “is” statements can logically entail an “ought” statement. The logical gap between “cells exhibit cooperative behavior” and “life has purpose and meaning” is unbridgeable by observation alone.
Diagnosis: Selective evidence for “purpose”
Parasites “serve themselves” at the expense of hosts. Cancer cells (by Miller’s own account) “serve themselves” at the expense of the organism. Parasitoid wasps lay eggs inside living caterpillars, whose bodies are consumed from the inside. Which cellular behavior reveals the “real” purpose?
Diagnosis: Teleological fallacy
Describing evolved functional arrangements as “purposes” confuses teleonomy (apparent purposiveness produced by selection) with teleology (genuine purpose or goal). Larry Wright (1973, “Functions,” Philosophical Review) and Ruth Millikan (1984, Language, Thought, and Other Biological Categories) showed how to talk about biological “function” without invoking conscious purpose. A heart functions to pump blood (Wright-function: it was selected for pumping blood). This does not mean the heart “has a purpose” in the sense of pursuing a goal. Miller ignores this entire literature on teleosemantics.
Socratic Question and Answer
Q: Cells “teach” us that “you serve yourself best by serving others.” Cancer cells “teach” us that “you serve yourself best by exploiting others.” Which lesson should be drawn from cellular biology? By what principle is one cellular behavior selected as “purposeful” and the other as “aberrant”?
This is the is/ought gap in action. Miller selects cooperative cellular behavior as exemplifying “purpose” and labels exploitative behavior (cancer) as “aberrant.” But from the perspective of the cancer cell (using Miller’s own framework, in which cells are conscious agents with preferences), cancer is serving its own “states of preference” very effectively. The selection of cooperation over exploitation as “the real purpose” is Miller’s value judgment projected onto the biology, not a discovery from the biology. As Hume would note: the biology provides facts about what cells do, not prescriptions about what they (or we) should do.
20. The Xenobots Passage (1:08:51–1:10:42)
(Interviewer): “It will go around in its dish and create clumps of itself and they will become conscious or they’re conscious the whole time, but you’ll… it’s more observably conscious.”
Diagnosis: Unsupported attribution of consciousness to cell aggregates
Michael Levin’s xenobots (Kriegman et al., 2020, “A Scalable Pipeline for Designing Reconfigurable Organisms,” PNAS) are remarkable: frog skin and heart cells in novel configurations exhibiting collective behavior not specified by the genome. But describing them as “conscious” or “more observably conscious” is pure projection. The xenobots exhibit self-organization and kinematic self-replication. These are impressive collective behaviors explained by biomechanics and bioelectricity. No evidence suggests phenomenal experience. The attribution of “observable consciousness” to xenobots is the observer-attribution fallacy: attributing to the system a property that exists only in the observer’s interpretive framework.
21. The AI-Cell Marriage (1:33:07–1:34:01)
“You’ve got the cheapest consciousness in the world. You’re going to spend billions to create conscious AI and you’ve got it for free. All you have to do is marry it to the conscious entities that exist at the cellular level.”
Diagnosis: Assuming the conclusion in the premise
“You’ve got consciousness for free” assumes cells are conscious, the very thing under debate. The proposal to “marry” AI with cells is interesting as biocomputing (Adamatzky, 2017, Advances in Unconventional Computing) and does not require believing cells are conscious.
22. The Commenter Who Got It Right
The YouTube commenter Dustin Brooksby wrote what may be the most incisive critique of Miller’s framework in the entire comment section:
“If the claim is that a choice is being made, but 99.99% of the time all 100 billion cells choose the same thing, I think that’s evidence it’s not a choice but a reaction.”
“If I understand [Huntington’s] right, he has one healthy gene and one that isn’t allowing him to make the right chemical… in a scenario where individual cells are conscious and can make decisions, why are they all making the wrong decision predictably? Wouldn’t you see some cells choosing to use the healthy gene?”
This is devastating because it provides a concrete, testable prediction that Miller’s framework fails:
Prediction from Miller’s framework: If cells are conscious decision-makers, cells with both a healthy and a defective allele should sometimes “choose” the healthy one. You would expect variable expressivity, with some cells choosing correctly.
Observed reality: In Huntington’s disease, expression of the mutant huntingtin allele (HTT with expanded CAG repeats) is near-universal across neurons. The protein is expressed from the mutant allele in virtually every cell that expresses HTT. There is no evidence of cells “choosing” the healthy allele (Walker, 2007, “Huntington’s Disease,” The Lancet).
Implication: If cells were conscious decision-makers with the ability to “choose” which genes to express, Huntington’s disease should be far less uniform in its cellular expression. The uniformity of expression is evidence of deterministic molecular processes (gene expression governed by cis-regulatory elements and chromatin state), not conscious choice. This is a concrete empirical failure of the framework.
Summary of Systematic Errors
Equivocations (Same Word, Different Meanings)
| Word | Meaning 1 (defensible) | Meaning 2 (indefensible) | Where the slippage occurs |
|---|---|---|---|
| Consciousness | Signal processing under noise | Phenomenal subjective experience | Continuous throughout |
| Intelligence | Evolved adaptive response | Cognitive capacity with representations | Sections 3, 15 |
| Decision | Molecular pathway activation | Deliberate choice among alternatives | Sections 3, 12 |
| Preference | Homeostatic setpoint | Experienced desire or satisfaction | Section 4 |
| Information | Shannon signal | Phenomenal content | Section 6 |
| Doubt | Noisy signal | Felt uncertainty | Section 2 |
| Purpose | Teleonomy (evolved function) | Genuine teleology (meaning/goal) | Section 19 |
| Problem-solving | Feedback regulation | Cognitive deliberation | Section 12 |
| Rules | Molecular signaling constraints | Social agreements | Section 8 |
| Self | Organizational boundary | Experiencing subject | Sections 1, 9 |
Named Fallacies Committed
- Equivocation: “consciousness” used in two senses within single arguments
- Begging the question (petitio principii): “Hard Problem dissolved” by assuming its answer
- Affirming the consequent: cancer atavism → therefore genes are intelligent
- Strawman: neo-Darwinism depicted as “random mutation alone”
- Special pleading: criteria for consciousness applied to cells but not to AI systems meeting the same criteria
- Argument from ignorance (ad ignorantiam): “we don’t know how to make AI conscious, so it’s impossible”
- Naturalistic fallacy: deriving moral purpose from cellular behavior
- Persuasive definition (Stevenson): loading “consciousness” with emotional weight while applying it to a technical process
- Definist fallacy: defining consciousness to guarantee the desired conclusion
- Composition fallacy: assuming organismal consciousness = sum of cellular consciousnesses
- False analogy: Windows errors compared to genetic mutations (lacking selection)
- False equivalence: “scientists disagree about consciousness” treated as justification for any definition
- Motte-and-bailey (Shackel): oscillation between “cells process signals” and “cells are conscious experiencers”
- Projection-retrieval: projecting human concepts onto cells, then “discovering” them there
- Observer-attribution: cells look intelligent to us → therefore cells are intelligent
- Selective citation: invoking Friston for cells but ignoring that the same math applies to AI
Nominalization Cascades Identified
- Maintaining → preference → satisfaction → experience → consciousness (Section 4)
- Transduction → interpretation → subjective reality → self-generated world (Section 6)
- Signal processing → problem-solving → decision-making → intelligence → cognition (Sections 3, 12, 15)
- Evolved cooperation → collaboration → service → purpose → meaning (Sections 8, 19)
In each cascade, a process verb is frozen into a noun, then the noun is treated as a real entity requiring explanation, generating a further nominalization to explain it. The cascade creates the illusion of explanatory depth when the actual content at each step is the same: molecular processes doing what molecular processes do.
Unfalsifiable Claims
| Claim | Why unfalsifiable |
|---|---|
| “Every cell is conscious” | Consciousness defined as signal processing; all cells process signals; claim is tautological under the definition |
| “Cells have experiences of satisfaction” | No method for testing phenomenal experience in cells; no falsification criteria provided |
| “Cognition-based evolution” | Any evolved adaptation can be redescribed as “intelligent problem-solving”; no outcome counts against the framework |
| “You serve yourself best by serving others” | Cancer cells serve themselves by exploiting others; both behaviors are cellular; no principle selects which reveals the “real purpose” |
| “Life has purpose” | Under the stated definition, any evolved function counts as “purpose”; the claim is irrefutable because it is defined into truth |
Empirically Contradicted Claims
| Claim | Contradicting evidence |
|---|---|
| “Random variation can’t lead to productive outputs” | Lenski LTEE: 75,000+ generations of evolution via random mutation + selection, producing novel metabolic capabilities (Blount, Borland & Lenski, 2008) |
| “The computer only has binary inputs” | Modern AI operates on continuous-valued floating-point representations across millions of dimensions |
| “Cognition-based evolution is the only published comprehensive alternative to neo-Darwinism” | Extended evolutionary synthesis (Laland et al., 2015), neutral theory (Kimura, 1983), niche construction theory, developmental systems theory |
| Brain is special but the theory can’t explain why | Thalamocortical architecture, not cell count, determines consciousness (Tononi & Koch, 2015); cerebellar lesions do not affect consciousness despite 80% of brain’s neurons (Herculano-Houzel, 2009) |
| “Cells choose which genes to express” (implied) | Huntington’s disease: near-universal expression of mutant huntingtin allele across neurons, no evidence of cells “choosing” the healthy allele (Walker, 2007) |
Dr. William B. Miller Jr.’s Claims Are Falsified from Every Meaningful Angle
Dr. Miller’s framework collapses under Popperian scrutiny through two simultaneous failures: his core definitional claims are unfalsifiable (and therefore unscientific), while his specific empirical claims are decisively falsified by standard experimental observations.
The Popperian Framework
Karl Popper’s demarcation criterion holds that a claim is scientific only if it specifies what observations would count against it and could in principle be shown false by a conceivable basic statement. A theory that cannot be falsified by any observation is not bad science—it is non-science, because it has removed itself from empirical testing.
Miller’s framework exhibits both pathologies: where his claims are vague enough to be protected from refutation, they fail the demarcation test; where they are sharp enough to be testable, they are immediately falsified.
The Unfalsifiable Core: Consciousness by Definition
Miller’s central claim—“every cell is conscious”—evades falsification through systematic equivocation and definitional immunization.
The Equivocation Structure
Miller defines “consciousness” variously as:
- “Resolving doubt and ambiguity”
- “Problem-solving under uncertainty”
- “Perceiving doubt and acting purposively”
- “Defending states of preference and satisfaction”
Each definition is chosen to guarantee the conclusion. Since all physical systems process signals under noise (a consequence of the second law of thermodynamics), and since all cells maintain homeostatic setpoints via feedback regulation, every cell trivially satisfies these definitions. No experimental result could ever show that cells don’t process noisy signals or maintain homeostasis, so the claim becomes unfalsifiable by construction.
This is the definist fallacy: defining your target concept so that your desired conclusion follows by definition, then presenting the conclusion as an empirical discovery.
The Motte-and-Bailey Pattern
Miller oscillates between two claims:
- The Bailey (indefensible): Cells have phenomenal subjective experience—the “rich palette of feeling, sensing, and appreciating.”
- The Motte (trivial): Cells process biochemical signals and maintain homeostasis.
When challenged on phenomenal consciousness, he retreats to signal processing; when claiming to explain human consciousness, he advances the phenomenal interpretation. This is the motte-and-bailey fallacy identified by Nicholas Shackel: defending a trivial claim while exploiting the rhetorical force of the interesting one.
The Thermostat Problem
Miller’s functional definitions collapse into absurdity. A thermostat:
- Processes signals under noise (temperature fluctuations)
- Resolves ambiguity (discrepancy between setpoint and ambient temperature)
- Defends a state of preference (target temperature)
- Makes decisions (whether to activate heating/cooling)
- Acts purposively to solve a problem (temperature regulation)
By Miller’s stated criteria, thermostats are conscious. When confronted with this, he denies it through special pleading—invoking biological substrate while simultaneously claiming consciousness is functionally defined. This internal contradiction reveals that his framework has no consistent criteria and operates through post-hoc adjustment rather than principled prediction.
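The point can be made executable. Below is a minimal sketch (all names and parameters are illustrative, not drawn from any real device) of a bang-bang thermostat controller, with Miller's stated criteria annotated at the lines that satisfy them:

```python
import random

class Thermostat:
    """A bang-bang controller that meets every one of Miller's functional criteria."""

    def __init__(self, setpoint=21.0, hysteresis=0.5):
        self.setpoint = setpoint        # "defends a state of preference"
        self.hysteresis = hysteresis
        self.heating = False

    def read_sensor(self, true_temp, rng):
        # "processes signals under noise": the sensor reading is corrupted
        # by Gaussian measurement noise
        return true_temp + rng.gauss(0.0, 0.2)

    def decide(self, reading):
        # "resolves ambiguity" between the noisy reading and the setpoint,
        # then "makes a decision" and "acts purposively" to regulate temperature
        if reading < self.setpoint - self.hysteresis:
            self.heating = True
        elif reading > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

rng = random.Random(42)
t = Thermostat()
assert t.decide(t.read_sensor(18.0, rng)) is True   # cold room: heat on
assert t.decide(t.read_sensor(25.0, rng)) is False  # warm room: heat off
```

Roughly twenty lines of code satisfy every stated criterion; nothing in Miller's definitions distinguishes this object from a cell except substrate, which is exactly the special pleading at issue.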
Why This Is Non-Scientific
Antony Flew, building on Popper’s demarcation criterion, called this pattern “death by a thousand qualifications”—progressively hedging a claim until it is compatible with every possible observation and therefore empirically empty. Miller has not entered “the scientific game” because he refuses to specify what observation would make him say “I was wrong; cells are not conscious.” The claim is protected from falsification through definition rather than exposed to empirical test.
The Falsifiable Predictions: All Refuted
When Miller steps beyond definitional safety and makes concrete empirical claims, each is immediately falsified by standard observations.
Falsification 1: Huntington’s Disease and Cellular “Choice”
The Claim: Cells are conscious agents that actively choose which genes to express, solving problems and satisfying preferences through “definitive directed decisions.”
The Prediction: In heterozygous genetic disease, where cells have access to both a healthy allele and a mutant allele, we should observe variable expression patterns reflecting individual cellular choices. Many cells should “choose” the healthy allele, producing heterogeneous pathology.
The Observation: In Huntington’s disease, neurons carrying one healthy huntingtin allele and one mutant allele (with expanded CAG repeats) express the mutant allele with near-universal regularity. There is no evidence of cells opting to use the healthy allele preferentially. The expression pattern is deterministic, governed by chromatin state and cis-regulatory elements, not by a population of conscious agents making divergent choices.
Why This Is Decisive: Miller’s framework predicts stochasticity where we observe determinism. As commenter Dustin Brooksby noted: “If individual cells are conscious and can make decisions, why are they all making the wrong decision predictably? Wouldn’t you see some cells choosing to use the healthy gene?” The answer is: molecular mechanisms, not cellular consciousness, govern gene expression. The uniform mutant expression falsifies the “cellular choice” hypothesis under standard Popperian rules: the theory predicted variability (¬O), we observe uniformity (O), therefore the theory is refuted.
Auxiliary Objection Blocked: Miller cannot retreat to “cells choose within constraints” without abandoning the decision-making claim that distinguished his theory from standard molecular biology. If “choice” means “whatever the chromatin state allows,” the word does no explanatory work and the claim reduces to a redescription of mechanism.
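Brooksby's prediction is quantitative, and a toy model makes the gap concrete. The sketch below assumes only that "choosing" cells act independently with some nonzero probability of picking the healthy allele; even a 1% choice rate predicts easily detectable mosaicism, whereas the observed rate of healthy-allele "choices" is effectively zero:

```python
import random

def simulate_choice_model(n_cells, p_choose_healthy, rng):
    """Toy 'cellular choice' model: each cell independently selects the
    healthy allele with probability p. Returns the expressing fraction."""
    healthy = sum(1 for _ in range(n_cells) if rng.random() < p_choose_healthy)
    return healthy / n_cells

rng = random.Random(0)
# Even a 1% "choice" rate across a million neurons predicts ~10,000
# healthy-expressing cells: trivially detectable mosaicism.
frac = simulate_choice_model(1_000_000, 0.01, rng)
assert 0.008 < frac < 0.012
# Observed reality in Huntington's disease is ~0: near-universal
# mutant-allele expression, as the deterministic model predicts.
```

Any nonzero choice probability yields a binomial signature that sequencing and expression assays would detect; its absence is the falsification.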
Falsification 2: The Lenski Long-Term Evolution Experiment
The Claim: “Random errors, random replicative errors…wouldn’t lead to productive outputs any more than errors in my Windows 11 program are going to lead to proper results on my computer.” Miller asserts that random genetic variation cannot generate novel adaptive function; cellular intelligence is required.
The Prediction: In a system where variation is random with respect to fitness and where there is no mechanism for directed mutation or cellular cognition, we should not observe the origin of genuinely new adaptive capabilities.
The Observation: In Richard Lenski’s long-term evolution experiment (LTEE), twelve asexual E. coli populations have been propagated for over 75,000 generations in glucose-limited medium with no recombination, no plasmids, and no directed mutation. After approximately 31,000 generations, one population evolved the novel ability to grow aerobically on citrate (Cit+), which ancestral E. coli cannot do under oxic conditions. Detailed genomic analysis shows this innovation arose through ordinary random mutations—including promoter capture and gene amplification of the citT transporter—shaped by natural selection. Subsequent experiments have repeatedly evolved citrate utilization in E. coli under different regimes, with similar molecular mechanisms.
Why This Is Decisive: Miller’s claim is universal: random mutation plus selection cannot produce productive novelty. The LTEE provides a controlled counterexample where random mutation plus selection did produce productive novelty—a metabolic innovation solving an environmental problem. The claim is categorical; the refutation is direct. Under Popperian logic: C (“random mutation cannot produce novelty”) plus background B (“LTEE uses only random mutation and selection”) entails ¬O (“no novel adaptive function will arise”). Observation O (Cit+ evolution) obtains. Therefore C ∧ B is falsified.
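The refutation schema in the paragraph above is ordinary modus tollens; written out as a short derivation, with the same symbols:

```latex
\begin{align*}
&\text{1. } (C \land B) \rightarrow \neg O
  && \text{theory plus background entail no novel adaptive function}\\
&\text{2. } O
  && \text{observed: Cit$^{+}$ evolved in the LTEE}\\
&\text{3. } \neg(C \land B)
  && \text{from 1, 2 by modus tollens}\\
&\text{4. } \neg C
  && \text{from 3, since $B$ is independently verified by the LTEE protocol}
\end{align*}
```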
Escape Blocked: Miller sometimes invokes “stress-induced mutagenesis” to preserve cellular agency, but this mechanism merely increases mutation rate (like turning off spell-check), not mutation direction. Faster random errors are still random; panic is not cognition.
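The LTEE itself cannot be rerun in a code block, but the causal structure Miller denies can be: undirected errors filtered by selection. A deliberately crude sketch (all parameters arbitrary; a toy, not a model of E. coli) shows that blind mutation plus truncation selection reliably assembles a functional target, which is precisely the step the "Windows errors" analogy omits:

```python
import random

def evolve(target, pop_size=50, mut_rate=0.02, rng=None, max_gens=5000):
    """Random mutation plus truncation selection on bitstrings.
    Mutations are blind: no foresight, no direction, only filtering."""
    rng = rng or random.Random(0)
    n = len(target)

    def fitness(genome):
        return sum(a == b for a, b in zip(genome, target))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            return gen                       # full "novel capability" reached
        survivors = pop[: pop_size // 2]     # selection keeps the fitter half
        pop = survivors + [
            [bit ^ (rng.random() < mut_rate) for bit in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return None

# "Errors in my Windows 11 program" lack the selection step; with it,
# blind errors reliably hit a 40-bit target starting from random noise.
assert evolve([1, 0] * 20, rng=random.Random(42)) is not None
```

Windows has no mechanism that preferentially copies the rare beneficial error; biology does, and that asymmetry, not cellular cognition, is what the analogy conceals.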
Falsification 3: The Luria-Delbrück Fluctuation Test
The Claim: Cells “engineer” their genomes or “collaborate” to solve environmental problems as cognitive agents responding to stressors.
The Prediction: Adaptive mutations (the “solutions”) should appear in response to the specific environmental stressor (the “problem”). Solutions should not precede problems or arise at rates indistinguishable from blind chance.
The Observation: The Luria-Delbrück fluctuation test (1943)—replicated thousands of times—demonstrates that genetic “solutions” (e.g., phage resistance) arise randomly and spontaneously in a population before the stressor (phage) is ever encountered. The statistical variance in resistant colonies across replicate cultures proves that resistance mutations occurred during earlier divisions, not in response to phage exposure.
Why This Is Fatal: Miller’s model requires cells to perceive a problem and design a solution—a cognitive act. The Luria-Delbrück result shows the “solution” exists in the lineage purely by chance prior to any encounter with the “problem.” The cell did not solve anything; it won a lottery it didn’t know it had entered. This falsifies the claim that mutations are cognitively directed responses to environmental challenges.
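The fluctuation test's statistical signature is easy to reproduce in a toy simulation (parameters arbitrary, chosen for runtime). If mutation were a directed response to exposure, resistant counts across replicate cultures would be Poisson-distributed, with variance roughly equal to the mean; if mutations arise spontaneously during prior growth, rare early "jackpot" clones inflate the variance far beyond the mean, which is what Luria and Delbrück observed:

```python
import random

def induced_culture(final_n, p, rng):
    """'Directed response' model: cells mutate only upon phage exposure."""
    return sum(1 for _ in range(final_n) if rng.random() < p)

def spontaneous_culture(generations, p, rng):
    """Spontaneous model: mutations occur during growth, before exposure;
    an early mutant founds an exponentially growing resistant clone."""
    resistant, sensitive = 0, 1
    for _ in range(generations):
        new_mutants = sum(1 for _ in range(2 * sensitive) if rng.random() < p)
        sensitive = 2 * sensitive - new_mutants
        resistant = 2 * resistant + new_mutants
    return resistant

def variance_to_mean(counts):
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m if m else 0.0

rng = random.Random(0)
gens, p, cultures = 12, 3e-4, 200          # final population = 2**12 cells
induced = [induced_culture(2 ** gens, p, rng) for _ in range(cultures)]
spont = [spontaneous_culture(gens, p, rng) for _ in range(cultures)]
# Directed response: variance ~ mean.  Jackpot clones: variance >> mean.
assert variance_to_mean(spont) > 3 * variance_to_mean(induced)
```

The experimental data match the spontaneous model's heavy-tailed distribution, not the Poisson one, which is why the 1943 result is still taught as the refutation of directed mutation.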
Falsification 4: Genomic Junk and Neutral Decay
The Claim: If the genome is the product of “intelligent cellular engineering,” genomic elements should be functional solutions to cellular problems.
The Prediction: The genome should not accumulate massive, clearly deleterious, or useless structural elements serving no adaptive purpose.
The Observation: Human and animal genomes contain pseudogenes (e.g., GULO in humans, which prevents vitamin C synthesis), millions of fossilized transposons (like inert Alu elements), and other sequences degrading at the exact rate of neutral background drift. These elements cost energy to replicate but provide no function; they are not “waiting to be used” but are broken machinery decaying under mutation pressure.
Why This Is Fatal: An “intelligent” system maximizing information efficiency under thermodynamic constraints (Landauer’s limit) would delete non-functional elements. Their persistence and passive rot prove that blind drift, not cognitive oversight, governs large fractions of genomic architecture. This falsifies Miller’s claim that cognition-based evolution is the primary driver of genomic structure.
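The contrast between selected and neutral sequence can also be sketched. In the deliberately crude model below (an assumption for illustration, not a population-genetic model), purifying selection is approximated by discarding any mutated copy of the functional gene, while the pseudogene accumulates changes unopposed:

```python
import random

def mutate(seq, rate, rng):
    """Flip each binary site independently with probability `rate`."""
    return [(1 - b) if rng.random() < rate else b for b in seq]

def drift(generations, length=200, rate=0.001, selected=False, rng=None):
    """Track divergence from an ancestral all-zeros sequence.
    If `selected`, any mutated copy is discarded (purifying selection);
    otherwise changes accumulate at the raw neutral rate."""
    rng = rng or random.Random(0)
    seq = [0] * length
    for _ in range(generations):
        candidate = mutate(seq, rate, rng)
        if selected and candidate != seq:
            continue  # selection removes the broken copy
        seq = candidate
    return sum(seq) / length  # fraction of sites diverged

rng = random.Random(3)
pseudo = drift(2000, selected=False, rng=rng)   # GULO-like broken gene
gene = drift(2000, selected=True, rng=rng)      # functional gene
assert gene == 0.0     # selection holds the functional sequence intact
assert pseudo > 0.2    # neutral sites rot; nothing "cleans them up"
```

An overseeing cellular intelligence would have to either repair or delete the rotting copy; the observed decay at exactly the neutral rate is the signature of no oversight at all.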
Falsification 5: Cerebellar Agenesis Preserves Consciousness
The Claim: Consciousness arises from cellular signal processing and should scale with neuron number and information processing capacity.
The Prediction: Losing 80% of the brain’s neurons should drastically impair or eliminate consciousness.
The Observation: The cerebellum contains approximately 69 billion neurons—roughly 80% of the brain’s total—and performs sophisticated parallel information processing. Yet cerebellar lesions do not affect consciousness, and patients with complete cerebellar agenesis can be fully conscious.
Why This Is Fatal: If consciousness were a product of generic cellular computation, the cerebellum’s 69 billion signal-processing cells should contribute massively. They don’t. Consciousness depends on specific thalamocortical architectures, not on neuron count or generic cellular properties. This falsifies the claim that cellular signal processing is sufficient for consciousness.
The Structural Impossibility of Falsification
Miller’s framework is built to resist falsification through a family of immunization strategies:
- Definitional flexibility: Key terms (“consciousness,” “intelligence,” “choice,” “purpose”) shift meaning as needed to absorb counterexamples.
- Nominalization cascades: Process verbs are frozen into nouns, then reified as entities needing explanation. “Cells maintain setpoints” becomes “cells have preferences,” then “preferences require satisfaction,” then “satisfaction is an experience,” importing phenomenal consciousness through linguistic sleight-of-hand.
- Retreat to potentiality: When concrete mechanisms fail, Miller gestures toward unknown future theories or undetectable potentials that cannot be tested.
- Ad hoc qualification: Each falsification is quarantined as a “special case” rather than allowed to strike the core.
This structure guarantees that no observation can force abandonment, which is precisely Popper’s criterion for non-science. The theory does not survive harsh tests; it evades testing altogether.
The Dual Verdict
Miller’s framework fails Popperian scrutiny in both possible ways:
- As stated, it is unfalsifiable: The core consciousness claims are protected by definition and cannot be contradicted by any conceivable observation. Under Popper’s demarcation, this disqualifies them as scientific hypotheses.
- When sharpened into testable predictions, it is falsified: The specific empirical claims about cellular choice (Huntington’s), evolutionary innovation (LTEE), mutation direction (Luria-Delbrück), genomic optimization (junk DNA), and computational consciousness (cerebellar agenesis) are each contradicted by standard, well-established observations.
From a strict Popperian standpoint, this combination is definitive. The framework either removes itself from empirical science or loses decisively when it finally risks contact with reality. Miller’s claims are falsified from every meaningful angle: where they can be tested, they fail; where they cannot be tested, they fail to be science.
Potential for Public Harm
- Undermining legitimate cancer research framing: Reframing well-understood molecular oncology as “conscious cells breaking rules” may misdirect public understanding and research priorities.
- Weakening planetary protection advocacy: Attaching controversial consciousness claims to legitimate spacecraft contamination concerns makes the legitimate concern easier to dismiss.
- Trivializing consciousness: If every cell is “conscious,” the word loses the moral and philosophical weight it carries in discussions of animal welfare, patient rights, end-of-life care, and AI ethics.
- Providing ammunition for anti-evolution movements: The formal identity between Cognition-Based Evolution and Intelligent Design (“random mutation can’t explain this → therefore intelligence is needed”) can be, and likely will be, appropriated by ID proponents as academic support.
- Undermining trust in scientific communication: Presenting stipulative definitions as empirical discoveries erodes public understanding of what scientific claims actually mean.
The Central Structural Problem
The entire interview rests on a single move: defining consciousness as signal processing under ambiguity, then treating everything that processes signals under ambiguity as conscious in the full phenomenal sense. This is a category error elevated to a research program. It is equivalent to defining “love” as “chemical bonding” and then claiming that sodium chloride is in love.
The move is seductive because it does capture something real: cells are more complex, more responsive, and more adaptive than simple machines. The molecular biology is genuinely interesting. But the leap from “cells are more sophisticated than thermostats” to “cells are conscious experiencers” is not supported by any argument Miller provides. It is supported only by a definition that makes the conclusion trivially true, at the cost of making it trivially uninteresting.
What can be asserted without evidence can be dismissed without evidence. Miller asserts cellular consciousness without evidence for phenomenal experience in cells. He provides evidence for signal processing, which no one disputes. The entire edifice is built on the gap between what he demonstrates and what he claims, and the gap is bridged by nothing but vocabulary.
What a Stronger Version Would Look Like
If Miller wanted to make a defensible case, he could:
- Stop using “consciousness” for signal processing. Call it “basal responsiveness” or “cellular information integration” with explicit acknowledgment that phenomenal consciousness is a separate (and open) question.
- Specify falsification criteria. What observation would show that cells are NOT conscious? If no observation counts, the claim is not scientific.
- Engage with the architectural evidence. Why do thalamocortical circuits support consciousness while cerebellar circuits don’t? A theory of consciousness must explain this.
- Drop the neo-Darwinian strawman. Engage with the actual modern synthesis, including neutral theory, population genetics, and evo-devo.
- Apply criteria consistently. If AI meets every stated criterion for consciousness and he still denies AI consciousness, he needs a principled reason, not special pleading.
- Separate the interesting empirical claims from the metaphysical claims. Cells as sophisticated information processors: interesting, defensible, worth investigating. Cells as conscious experiencers with purpose and meaning: extraordinary claim requiring extraordinary evidence that he does not provide.
- Acknowledge the Huntington’s problem. Dustin Brooksby’s objection provides a clear, testable prediction. If cells “choose” which genes to express, genetic diseases with dominant expression patterns should show more variability than they do. They don’t.
REFERENCES
Giant’s Shoulder (YouTube channel). (n.d.). Meet the Scientist Proving Every Cell in Your Body is Conscious! (Interview with William B. Miller Jr.). https://www.youtube.com/watch?v=eNi-_2cOVtc
Philosophy of mind and consciousness
Block, N. (1978). Troubles with functionalism. In C. W. Savage (Ed.), Perception and Cognition: Issues in the Foundations of Psychology (Minnesota Studies in the Philosophy of Science, Vol. 9). University of Minnesota Press. https://philpapers.org/rec/BLOtwf
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219. https://consc.net/papers/facing.html
Dennett, D. C. (1991). Consciousness Explained. Little, Brown and Company. WorldCat: https://search.worldcat.org/title/consciousness-explained/oclc/22947321
Dennett, D. C. (1995). Darwin’s Dangerous Idea: Evolution and the Meanings of Life. Simon & Schuster. WorldCat: https://search.worldcat.org/title/darwins-dangerous-idea-evolution-and-the-meanings-of-life/oclc/32791923
Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly, 32(127), 127–136. JSTOR stable item: https://www.jstor.org/stable/2960077
Leibniz, G. W. (1714). Monadology. Project Gutenberg: https://www.gutenberg.org/ebooks/1721
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450. JSTOR stable item: https://www.jstor.org/stable/2183914
Place, U. T. (1956). Is consciousness a brain process? British Journal of Psychology, 47(1), 44–50. https://doi.org/10.1111/j.2044-8295.1956.tb00560.x
Putnam, H. (1967). The nature of mental states. In W. H. Capitan & D. D. Merrill (Eds.), Art, Mind, and Religion. University of Pittsburgh Press. https://philpapers.org/rec/PUTTNO
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457. https://doi.org/10.1017/S0140525X00005756
Searle, J. R. (1992). The Rediscovery of the Mind. MIT Press. https://mitpress.mit.edu/9780262193473/the-rediscovery-of-the-mind/
Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295–320. https://doi.org/10.1111/j.1467-9973.2005.00371.x
Smart, J. J. C. (1959). Sensations and brain processes. The Philosophical Review, 68(2), 141–156. JSTOR stable item: https://www.jstor.org/stable/2182164
Stevenson, C. L. (1938). Persuasive definitions. Mind, 47(187), 331–350. https://doi.org/10.1093/mind/XLVII.187.331
Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B, 370(1668), 20140167. https://doi.org/10.1098/rstb.2014.0167
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://academic.oup.com/mind/article/LIX/236/433/986238
Epistemology and philosophy of science
Flew, A. (1955). Theology and falsification. In A. Flew & A. MacIntyre (Eds.), New Essays in Philosophical Theology. https://philpapers.org/rec/FLETAF
Hume, D. (1748). An Enquiry Concerning Human Understanding. Project Gutenberg: https://www.gutenberg.org/ebooks/9662
Lakatos, I. (1978). The Methodology of Scientific Research Programmes. Cambridge University Press. https://www.cambridge.org/core/books/methodology-of-scientific-research-programmes/
Popper, K. (1959). The Logic of Scientific Discovery. Routledge.
Information theory, computation, thermodynamics
Bennett, C. H. (1982). The thermodynamics of computation. International Journal of Theoretical Physics, 21, 905–940. https://doi.org/10.1007/BF02084158
Bérut, A., Arakelyan, A., Petrosyan, A., Ciliberto, S., Dillenschneider, R., & Lutz, E. (2012). Experimental verification of Landauer’s principle. Nature, 483, 187–189. https://doi.org/10.1038/nature10872
Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183–191. https://doi.org/10.1147/rd.53.0183
Reprint: https://www.physics.pitt.edu/sites/default/files/landauer_1961.pdf
Maxwell, J. C. (1871). Theory of Heat. Internet Archive: https://archive.org/details/theoryofheat00maxw
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656. Internet Archive scan: https://archive.org/details/bstj27-3-379
Szilard, L. (1929). On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings. Zeitschrift für Physik, 53, 840–856. https://doi.org/10.1007/BF01341281
Neuroscience, cognition, and consciousness correlates
Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking.
Edelman, G. M. (2004). Wider Than the Sky: The Phenomenal Gift of Consciousness. Yale University Press. Internet Archive record: https://archive.org/details/widerthanskyphen00edel
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138. https://doi.org/10.1038/nrn2787
Herculano-Houzel, S. (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3, 31. https://doi.org/10.3389/neuro.09.031.2009
Cell biology, evolution, self-organization
Berg, H. C. (2004). E. coli in Motion. Springer. https://doi.org/10.1007/978-0-387-21651-6
Darwin, C. (1859). On the Origin of Species. Project Gutenberg: https://www.gutenberg.org/ebooks/1228
Fisher, R. A. (1930). The Genetical Theory of Natural Selection. Internet Archive: https://archive.org/details/geneticaltheoryo00fish
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press.
Jacob, F. (1977). Evolution and tinkering. Science, 196(4295), 1161–1166. https://doi.org/10.1126/science.860134
James, W. (1890). The Principles of Psychology (Vol. 1). Internet Archive: https://archive.org/details/theprinciplesofp01jameuoft
Kauffman, S. A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press.
Kimura, M. (1968). Evolutionary rate at the molecular level. Nature, 217, 624–626. https://doi.org/10.1038/217624a0
Kimura, M. (1983). The Neutral Theory of Molecular Evolution. Cambridge University Press. https://www.cambridge.org/core/books/neutral-theory-of-molecular-evolution/
Koonin, E. V., & Dolja, V. V. (2014). Virus world as an evolutionary network of viruses and capsidless selfish elements. Microbiology and Molecular Biology Reviews, 78(2), 278–303. https://doi.org/10.1128/MMBR.00049-13
Laland, K. N., Uller, T., Feldman, M. W., Sterelny, K., Müller, G. B., Moczek, A., Jablonka, E., & Odling-Smee, J. (2015). The extended evolutionary synthesis: Its structure, assumptions and predictions. Proceedings of the Royal Society B, 282, 20151019. https://doi.org/10.1098/rspb.2015.1019
Lenski, R. E., Barrick, J. E., & Ofria, C. (2015). Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment. Proceedings of the Royal Society B, 282, 20152292. https://doi.org/10.1098/rspb.2015.2292
Lynch, M. (2007). The Origins of Genome Architecture. Sinauer Associates.
Odling-Smee, F. J., Laland, K. N., & Feldman, M. W. (2003). Niche Construction: The Neglected Process in Evolution. Princeton University Press.
Oyama, S., Griffiths, P. E., & Gray, R. D. (2001). Cycles of Contingency: Developmental Systems and Evolution. MIT Press.
Sourjik, V., & Wingreen, N. S. (2012). Responding to chemical gradients: Bacterial chemotaxis. Current Opinion in Cell Biology, 24(2), 262–268. https://doi.org/10.1016/j.ceb.2011.11.008
Trigos, A. S., Pearson, R. B., Papenfuss, A. T., & Goode, D. L. (2017). Altered interactions between unicellular and multicellular genes drive hallmarks of transformation in a diverse range of solid tumors. Proceedings of the National Academy of Sciences, 114(24), 6406–6411. https://doi.org/10.1073/pnas.1617743114
Cancer biology (mechanism-level references)
Hanahan, D., & Weinberg, R. A. (2000). The hallmarks of cancer. Cell, 100(1), 57–70. https://doi.org/10.1016/S0092-8674(00)81683-9
Hanahan, D., & Weinberg, R. A. (2011). Hallmarks of cancer: The next generation. Cell, 144(5), 646–674. https://doi.org/10.1016/j.cell.2011.02.013
Quail, D. F., & Joyce, J. A. (2013). Microenvironmental regulation of tumor progression and metastasis. Nature Medicine, 19, 1423–1437. https://doi.org/10.1038/nm.3394
Walker, F. O. (2007). Huntington’s disease. The Lancet, 369(9557), 218–228. https://doi.org/10.1016/S0140-6736(07)60111-1
Astrobiology and planetary protection
Rummel, J. D., et al. (2014). A new analysis of Mars “special regions”: Findings of the second MEPAG special regions science analysis group (SR-SAG2). Astrobiology. Repository record: https://ttu-ir.tdl.org/items/a30d5323-370b-4682-8753-fe39faccd9de
Biocomputing and unconventional computation
Adamatzky, A. (2017). Advances in Unconventional Computing: Volume 1: Theory. Springer.
Autopoiesis, embodiment, maps vs. territory
Korzybski, A. (1933). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Internet Archive: https://archive.org/details/sciencesanityint00korz
Maturana, H. R., & Varela, F. J. (1973). Autopoiesis and Cognition: The Realization of the Living. Reidel.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.