Michael Levin’s Platonic Space Symposium Discussion #1: TAME, Distributed Agency vs Individual Intent
“A deepity is a proposition that seems both important and true—and profound—but that achieves this effect by being ambiguous. On one reading it is manifestly false, but it would be earth-shaking if it were true; on the other reading it is true but trivial.”
Daniel Dennett
Michael Levin’s academic channel recently posted a discussion among contributors to the Platonic Space Hypothesis – a roughly 100-minute conversation exploring whether mathematics constitutes a separate realm from which biological patterns are “retrieved.” The discussion is worth watching for anyone interested in the foundations of developmental biology, cognitive science, and the philosophy of mathematics. It’s also worth watching for anyone interested in how intelligent people can spend hours exploring a framework without ever subjecting it to the one question that would determine whether it’s science or storytelling.
Their discourse reveals the defensive individualism problem at the heart of distributed agency frameworks in bioelectricity and morphogenesis research. Throughout the conversation about whether Platonic morphospace explains convergent biological patterns, the participants (Carl Friston on free energy minimization, Chris Fields on observer-imposed boundaries, and the group in its extended discussion of the TAME framework, the Technological Approach to Mind Everywhere) enthusiastically deploy distributed agency to analyze xenobots, anthrobots, cell collectives, and even sorting algorithms without attributing conscious intent.
Friston explains “self” as self-organization. Fields notes boundaries are observer-relative. TAME extends cognitive vocabulary to systems without neurons, licensing structural analysis of goals, preferences, and stress through observable dynamics rather than inner states.
Yet when I turn the same methodology toward the research program itself, examining how unfalsifiable Platonic morphogenesis claims are systematically cited by the Discovery Institute for Intelligent Design advocacy, how path-dependent morphological outcomes (Durant et al. 2017) contradict Platonic pattern-determination, and how these frameworks inform regenerative medicine policy without explicit falsification criteria, the response retreats to defensive individualism. Levin states, “I don’t intend for the Discovery Institute to use my work” (individual intent as shield), followed by the insistence that “‘ethical breach’ is definitely an accusation” and “I don’t have time for parsing out accusations about aboriginal elders that aren’t accusations” (individual intent-attribution as weapon). The maneuver converts a critique of downstream consequences, responsibility, and falsifiability into a dispute over alleged motives, and then exits the discussion on that basis.
This is performative contradiction: the frameworks dissolve individual intent as explanatory primitive while the defense requires it as both absolution and accusation. Either individual intent is ontologically privileged for humans but not xenobots (requiring an account of human exceptionalism), or structural critique of research programs, analyzing constraint-propagation through citation networks, institutional uptake, and downstream policy effects, is exactly what distributed agency predicts as appropriate methodology.
The symposium provides the theoretical apparatus for its own critique. The question is whether proponents will apply their frameworks consistently or retreat to the bounded individual agency their work explicitly transcends.
Is Platonic morphospace scientific, or is it unfalsifiable metaphysics enabling harm through unaccountable policy influence? The symposium never asks what would falsify Platonic Space. That absence, combined with asymmetric deployment of distributed versus individual agency depending on whether researchers are analyzing subjects or defending themselves, suggests the answer.
In nearly two hours of discussion, here is the question nobody asks:
What empirical observation would falsify Platonic morphospace, or the closely related and highly contested claim that “not all facts are physical facts,” on which Levin’s thesis depends?
Karl Popper was clear about this:
‘Irrefutability is not a virtue of a theory (as people often think) but a vice.’
Karl Popper
If nothing could count against it, nothing can count for it either. That’s the ballgame.
I want to work through this discussion carefully, not to dismiss the participants (several of whom make genuinely interesting points) but to surface the methodological tensions that run through it. These tensions matter beyond academic philosophy. They matter because unfalsifiable frameworks that guide biological research can’t learn from their mistakes, and mistakes in biology have consequences for living systems.
Before proceeding chronologically through the symposium, let me enumerate the frameworks that make “individual intent” a problematic explanatory category – not because intent doesn’t exist, but because it’s emergent rather than primitive:
Dennett’s Intentional Stance: Intent is a predictive heuristic we apply to systems, not an ontological feature we discover in them. “X intends Y” is a useful predictive shorthand, not a claim about inner states. Criticizing consequences doesn’t require claims about inner states – only about observable patterns and their effects.
Whitehead’s Process Philosophy: There are no substances with inherent properties; there are only processes and their relations. “Intent” reifies a process into a substance. The actual question is: what constraint-propagation patterns produce what downstream effects?
Rovelli’s Relational QM: Properties exist only relative to interactions, not in isolation. “Intent” as an intrinsic property of an isolated agent is precisely what RQM rules out. What exists is relational: framework-deployment-in-context, not “intent-in-agent.”
Resnik’s Mathematical Structuralism: Relations are primary; relata are nodes in relational structures. Any individual is a node in epistemic, institutional, and communicative networks. Attributing intent to the node rather than analyzing the network structure is methodologically backwards.
Ladyman & Ross’s Ontic Structural Realism: Only structures are real; objects are derivative. “Individual intent” presupposes object-primacy. The actual causal story involves structural relations: framework → deployment → citation patterns → policy influence → effects.
Quine’s Holism: Beliefs face evidence as a corporate body, not individually. Isolating a single belief-state from the web is artificial. The web includes: training, incentives, institutional pressures, prior commitments, audience expectations.
Friston’s Free Energy Principle (implicitly present in symposium): Agents minimize surprise under generative models. “Intent” is just the expected-state component of the model. The model is shaped by constraints, not freely chosen. Criticizing the model’s downstream effects doesn’t require claims about the agent’s “true intent.”
Each of these frameworks has withstood decades of critical scrutiny across multiple fields. They converge on a common insight: “individual intent” is not the right level of analysis for understanding complex systems and their effects.
The Gödelian Self-Reference Problem (~0:00-2:01)
Chris Fields opens with a genuinely interesting question:
“since we are as soon as we want to to achieve some level of of kind of precision and definition essentially forced to use mathematics to talk about our own states and our own interactions with the world… I have to use mathematics to describe my interaction with all of you for example. Um so how does that buy us if it does buy us our thinking about what mathematics is and um what it means to claim that um we are ourselves entities that are amenable not only amenable to mathematical description But for which mathematical description is required for a certain kind of discourse.”
Fields asks what it “buys us” that we must use mathematics to describe our own states. But why frame this as mathematics describing us rather than us constructing formalisms that happen to be self-applicable? The self-reference he identifies is real, but does it tell us something about mathematics-as-realm, or about the structure of any sufficiently expressive formal system?
Here’s my confusion: Gödel’s incompleteness results show that self-referential systems face specific computational limits. But those limits are derivable from the formal structure – they don’t require invoking anything outside the system. So what explanatory work does “mathematics” as a separate domain do here that “constraint on self-modeling systems” doesn’t?
If I claimed that the incompleteness theorems reveal something about thermodynamic limits on self-modeling (since any physical implementation of a formal system has finite resources and must compress), what observation would distinguish that claim from the claim that they reveal something about a mathematical realm? I can specify what would falsify my framing: if a physical system demonstrated self-consistent completeness without infinite resources, my thermodynamic account fails. What would falsify the realm interpretation?
Self-reference creates strange loops, as Hofstadter showed. Any agent describing their own intent is engaged in another layer of modeling – it’s not transparent access to a fact but a self-referential construction. When someone claims that a critic is “attributing intent,” they’re making a claim about the critic’s intent (the intent to attribute malice). This is the same structure they’re objecting to. If intent-attribution is problematic in one direction, it’s problematic in both.
Is Mathematics Fixed or Evolving? (~4:05-6:34)
Mike Levin poses the question:
“do do we in fact have a fixed thing that we know this is the this is mathematics and here are the borders of it and if you go beyond that you’re in you’re somewhere else. It’s not math. It’s something else, right? Or is it that uh our attempt to formalize what you know interactions between between agents and so on is actually stretching math. Is it changing the definition, you know, what the borders of what we thought.”
Chris Fields responds:
“certainly if one looks from the outside uh at least how math has been described by humans has changed quite a bit with the introduction for example of non-uclitian geometry… many things that in earlier formulations looked like distinct entities or distinct systems or organizations uh turn out to be notational variants. And you say, ‘Oh, okay. Well, this thing and that thing are in fact exactly the same thing. All we’ve done is redescribe them in a different language.'”
This exchange reveals the central confusion. Levin asks whether mathematics has “fixed borders” – but this presupposes mathematics is a bounded entity rather than a process. Fields correctly notes that mathematical description has evolved, but then immediately reifies it again with “some fixed entity called mathematics somewhere outside of our conceptualization.”
The nominalization trap is operating here: “mathematics” (noun) smuggles in substance-ontology. The verb form would be “mathematizing” – the activity of constructing constraint-satisfaction formalisms. What’s “fixed” isn’t a realm but the thermodynamic constraints that any such formalism must respect. Landauer’s principle doesn’t care whether you call your notation “mathematics” or “alien logic” – erasing information costs at least kT ln 2 per bit regardless.
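For concreteness, here is the Landauer bound as a back-of-the-envelope number (standard physics, nothing specific to this debate):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                   # room temperature, K
landauer_bound = k_B * T * math.log(2)
print(f"minimum cost of erasing one bit at 300 K: {landauer_bound:.2e} J")
# ~2.87e-21 J, whatever notation the "bit" was written in
```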
As Wittgenstein would put it: the borders of mathematics are the borders of what we can consistently formalize. Those borders shift as we develop new formalisms, not because we’re “discovering” more of a pre-existing realm, but because we’re inventing new constraint-satisfaction tools.
Identity as Axiomatic Assumption (~9:33-11:00)
Chris Fields continues:
“one of my major obsessions is the notion of identity. And in physics, that’s the notion of identity over time… without this notion, physics stops, right? There’s nothing to say anymore… Psychology stops because we can no longer talk about memory if we can’t talk about identity. And identity is a a key assumption I I’ll say or an an axiomatic assumption of category theory that uh there’s an operator that we call identity and without that notion of identity mathematics stops.”
Fields identifies something crucial here – identity is treated as axiomatic rather than derived. But his framing still contains the reification error.
In what sense does identity come first? In dynamical systems, what we call “identity over time” is a derived property – it’s what we say when a system’s trajectory remains within a particular basin of attraction under perturbation. The attractor is primary; “identity” is what we call the stability.
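A minimal sketch of that reading, using a toy double-well system rather than anything biological: “same identity over time” just means that perturbed trajectories settle back into the same basin.

```python
import numpy as np

# Toy illustration (not a biological model): "identity over time" read as
# basin stability in an overdamped double-well system, dx/dt = x - x**3.
def step(x, dt=0.01):
    return x + dt * (x - x**3)

def settles_to(x, n_steps=5000):
    for _ in range(n_steps):
        x = step(x)
    return np.sign(x)          # which attractor (-1 or +1) the state ended up at

rng = np.random.default_rng(0)
x0 = 1.0                       # a system "being itself" at the +1 attractor
same_identity = all(
    settles_to(x0 + rng.normal(0, 0.2)) == settles_to(x0)
    for _ in range(100)        # small perturbations: trajectory returns to the basin
)
print(same_identity)           # True here: "identity" names the stability, not a substance
```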
So here’s my question: is identity an axiom we assume, or a stability condition we observe? If it’s assumed, what licenses the assumption? If it’s observed, why call it axiomatic rather than empirical?
The Aboriginal songlines encode something relevant here: identity isn’t a substance you have – it’s a performance you maintain through ongoing constraint-satisfaction across contexts. You don’t “possess” an identity; you regenerate it. What rules out that framing as more fundamental than the axiomatic one?
Category theory requires an identity operator, yes. But category theory also allows us to work with groupoids and higher structures where identity becomes structure rather than primitive. Does that suggest identity-as-axiom is a feature of particular formalisms rather than a deep fact about cognition or physics?
These aren’t rhetorical questions. I genuinely want to understand what commits someone to identity-as-primitive rather than identity-as-emergent. The answer matters because it determines whether “self” and “agent” and “intent” are fundamental categories or useful approximations – and that determination has downstream consequences for how we think about responsibility, causation, and critique.
The “Non-Physical Facts” Motivation (~11:10-12:52)
Mike Levin explicitly states his strategy:
“The only reason I brought up math at all because again I I don’t I don’t have a real background in math. The only reason I bring it up at all is that it seemed to me to be a I won’t say uncontroversial because obviously there’s a lot of controversy but at least a simpler um domain in which to try to make the claim which some people at least already believe that not all facts are physical facts. In other words, if you try to do that in biology, it’s really hard because it’s very complex and people will say, ‘Well, there’s some mechanism. You just haven’t found it yet.’… But at least in math, um other other people for a really long period of time have already made the claim that there are facts that are not derived from nor changeable within physics.”
This is methodologically backwards. He’s saying: “I can’t defend non-physical facts in biology directly, so I’ll import them from mathematical Platonism, which is already accepted by some people.”
But mathematical Platonism is not “already accepted” as established fact – it’s a contested philosophical position that faces devastating objections (epistemological access problem, causal isolation, Benacerraf’s dilemma). Importing an unresolved philosophical controversy to “ground” biological claims doesn’t strengthen the argument – it compounds the burden of proof.
Here’s my question: what if the bridge is being built in the wrong direction? Instead of extending Platonism from math to biology, what if we should be extending naturalism from biology to math?
Consider: mathematical “facts” are stable because they describe constraint relationships that hold whenever any physical system satisfies those constraints. “2+2=4” isn’t true in a separate realm – it’s a compressed description of what happens when you combine two pairs of anything. The stability comes from the generality of the constraint, not from non-physical existence.
What observation would distinguish these framings? I can specify what would falsify mine: if mathematical relationships held in ways that couldn’t be instantiated in any physical system – if there were mathematical truths with no possible physical correlate – my constraint-satisfaction account would fail.
What would falsify Platonism?
And there’s a harder empirical question lurking here. Durant et al. 2017 (from Levin’s own lab) shows path-dependent morphological outcomes. If patterns in Platonic space are “pulling” systems toward them, why does the path matter? Shouldn’t the pattern determine the outcome regardless of trajectory? How does Platonic morphospace accommodate path-dependence without smuggling physical causation back in?
Time-Independence and Pattern Memory (~13:33-15:17)
Mariana articulates the Platonic picture clearly:
“I’ve dwelled a lot in this notion because I feel it’s very important… in development you see this a lot you know you would have an embryo it in principle it will um stage 22 it will have 36,000 cells as an open embryo. So this happens all the time and and and when there’s a variation then we note it down but in principle this happens all the time and so time we can express this also from a time independent perspective… suppose this structure space of patterns um is is a space where memory is retrieved from. And so all states that already happened and will happen live there.”
“Patterns live there” reifies patterns into entities that persist independently. But patterns don’t “live” – they are constraint-satisfaction solutions that get re-instantiated when systems satisfy the relevant constraints.
The empirical claim embedded here (that developmental stages are highly reproducible) is true and important. But the explanation doesn’t require Platonism. Developmental reproducibility emerges from:
- Shared thermodynamic constraints (available energy gradients)
- Shared molecular machinery (genetically inherited)
- Shared environmental boundary conditions
- Constraint-satisfaction under selection pressure over billions of years
Reproducibility is evidence for strong constraint-satisfaction, not for a separate realm. The patterns don’t “live” anywhere – they get reconstructed because the constraints that generate them persist.
This is precisely the memory-as-constraint-persistence insight: Memory is not storage of patterns but persistence of constraint sets that regenerate similar admissible states.
And here’s a harder question: if patterns exist atemporally, why does development take time? If the final state “already exists” in pattern space, why isn’t the embryo simply at stage 22 from the start? What explains the temporal unfolding if the endpoint is eternally fixed?
If agents “loop around” fetching patterns from a structure space, then agency is distributed between agent and pattern-space. The “intent” isn’t localized in the agent – it’s in the agent-pattern-space coupling. This is the Platonic picture’s own framing. And if that framing is correct, then analyzing the coupling between frameworks and their downstream effects (including their coupling with advocacy networks that use them for purposes their originators might not endorse) is precisely what the picture predicts as the appropriate locus of explanation.
What Are We Actually Trying to Achieve? (~23:18-28:19)
A participant asks the crucial question:
“Let’s assume that everything that we want from the representation exist. What then? So what will be the end goal? Because the what we are representing is some subset of the real world. Let’s assume that we have everything in the technically create two words. Then then what? Or if we have some reduction of concept in that platonic space then what can we do with this?”
Mike Levin responds:
“what I’m really interested in is mapping the space, but also figuring out what is it what what what does it actually give you? Because there’s a wide range of options. So, it might just give you static patterns like here’s the value of E and that that’s all you get… Or it might give you uh dynamic behavior or or or algorithms or compute… what’s the what’s the range of of uh complexity that you get out of it that you didn’t put in and where right?”
This is the crucial question the symposium never adequately answers: What predictive work does Platonic space do that constraint-satisfaction doesn’t?
Levin’s “free lunch” framing reveals the core issue. He wants patterns that provide “complexity you didn’t put in.” But this is exactly what constraint-satisfaction already delivers – you specify constraints, and the solution space includes structures you didn’t anticipate. That’s not a “free lunch from another realm” – it’s how constraint-satisfaction works.
Consider: Chladni plate patterns. You specify vibration frequency and boundary conditions. You get complex geometric patterns “for free.” But nobody invokes Platonic space to explain this – the patterns emerge from constraint-satisfaction in physical dynamics.
See: 1. The Chladni Plate Alternative to Platonism: How Douglas Brash’s Constraint-Based Framework Explains Bioelectric Morphogenesis Without Platonic Forms & 2. Palanquins & Princes: How Platonism Ignores 13.8 Billion Years of “Just Weights”
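For readers who want the Chladni point concrete, here is a minimal sketch using the textbook superposition approximation for a square plate (mode numbers chosen arbitrarily); it is not a full plate-equation solver, just an illustration that constraints alone generate intricate pattern:

```python
import numpy as np

# Crude standing-wave sketch of Chladni figures on a unit square plate:
# superpose two degenerate modes and look at where the displacement vanishes.
n, m = 3, 5                              # the only "inputs": mode numbers
x = np.linspace(0.0, 1.0, 400)
X, Y = np.meshgrid(x, x)

W = (np.cos(n * np.pi * X) * np.cos(m * np.pi * Y)
     - np.cos(m * np.pi * X) * np.cos(n * np.pi * Y))

nodal = np.abs(W) < 0.02                 # sand collects near the nodal lines W ≈ 0
print(f"fraction of the plate near a nodal line: {nodal.mean():.3f}")
```

Specify the constraints (geometry, boundary conditions, mode numbers) and the geometry of the pattern comes out; nothing in the calculation references a realm the pattern was fetched from.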
The question Levin should be asking: “What does Platonic space predict that constraint-satisfaction doesn’t?” If they make identical predictions, Platonic space is explanatorily superfluous (Occam’s razor applies). If they make different predictions, Platonic space must be falsifiable.
Here’s a way to sharpen this: can anyone give me an example of a pattern that couldn’t be explained by constraint-satisfaction but could be explained by Platonic access? If not, isn’t Platonic space explanatorily superfluous by Occam’s razor?
And a related worry: if Platonic space explains any pattern that emerges, it explains nothing in particular. “The pattern came from Platonic space” can be said about literally any outcome. What predictive constraint does the framework impose? What can’t happen if Platonism is true?
This isn’t a gotcha question. It’s the question that distinguishes science from metaphysics. Science makes predictions that can fail. Metaphysics accommodates any outcome. Both can be interesting, but they shouldn’t be confused – especially when the framework is guiding interventions in living systems.
What is unfolding in the symposium is not a settled, principled defense of individual-level explanation, but a recognizable retreat maneuver. Distributed agency is embraced enthusiastically so long as it produces pattern, robustness, and emergence. But the moment those same distributed explanations begin to erode something still treated as non-negotiable (authorship, control, moral responsibility, a privileged explanatory center), the language tightens and recoils. Agency is suddenly re-bounded. Causes migrate back inside discrete entities. Explanations begin to lean again on internal drives, goals, or intentions. This is not a contradiction so much as a pressure response: a framework expands until it threatens a protected boundary, then contracts to preserve it. The result is an oscillation between distributed process and individual agency that is not theoretically resolved, only rhetorically managed.
The Tail-to-Limb Problem (~29:53-34:00)
Mariana poses an excellent question:
“I’ve been thinking a lot about it and it seems that when you speak of perturbations or abrupt perturbations things that were unforeseen so far that then I’ll put these developmental novelties, you know, like the anthropots or um I I’m very puzzled about the the tail onto the flank. Like why not the tail? Why not keep the tail? It’s so much cheaper. Why not reject the tail cheaper? Why build the limb? It seems like it’s the most c I know your assumption is that is more more helpful or that is more useful but I feel that the qu sometimes I I wonder like why”
Levin’s response (~33:17-34:00) essentially punts: “we we we don’t know how how a lot of these decisions are made.”
From a Platonic perspective, this should be answerable: if the system is “accessing” patterns from morphospace, why does it access “limb” rather than “tail” when either would restore function? The Platonic framework predicts the system should converge on the nearest pattern in morphospace – but “nearest” by what metric? If it’s physical distance/energy, then it’s constraint-satisfaction. If it’s some non-physical “pattern similarity,” the framework becomes unfalsifiable (you can always claim whatever happened was “nearer” in Platonic space).
Here’s my worry: if “closeness in morphospace” is defined by what the system actually does, then it’s circular – we’re inferring morphospace topology from outcomes, then using that topology to “explain” the outcomes. That’s not explanation; it’s redescription.
A constraint-satisfaction account would say: the perturbation activates specific regulatory networks with specific activation thresholds. Whether you get limb or tail depends on which circuits are available and which constraints are satisfied under those conditions. This is testable: predict which bioelectric states produce which outcomes, then intervene.
Can Platonic morphospace make such predictions? If I tell you the bioelectric state of the system, can you tell me whether it will produce tail or limb before the experiment? If not, what predictive work is morphospace doing?
Levin’s confession of ignorance here is actually honest – but it’s problematic for the Platonic framework. If the framework can’t predict tail vs. limb, what predictive work is it doing?
When Does the System “Give Up”? (~31:54-33:17)
Levin raises a fascinating question about when a system stops trying to reach its “standard implementation” and shifts to something new:
“at some point, you can you can push it so far that it that it basically says forget it. I’m now an anthrobot. I’m not going to try to make a human embryo. I’m just going to this is my new, you know, this is my new life.”
The system “says forget it.” The system has a “new life.” This is intentional language applied to cell collectives without neurons. And the framework explicitly endorses this – TAME claims we should extend cognitive vocabulary to all sorts of systems.
But here’s the question I keep returning to: if it’s legitimate to analyze what a cell collective “says” and “wants” without claiming access to its inner mental states, why isn’t it equally legitimate to analyze what a research program “does” and “produces” without claiming access to its proponent’s inner mental states?
Why frame it as a decision rather than a phase transition? In dynamical systems, when you push a system past a critical threshold, it falls into a different basin of attraction. That’s not “giving up” – it’s crossing a bifurcation point.
Here’s my question: what does the intentional language add? “The system decided to become an anthrobot” versus “the perturbation pushed the system past a bifurcation into a different attractor basin” – do these make different predictions? If not, why prefer the intentional framing?
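To make the non-intentional description concrete, here is a toy one-variable system (not a model of any real cell collective) in which slowly increasing a control parameter makes the original attractor disappear:

```python
import numpy as np

# Toy system with two basins: dx/dt = r + x - x**3.
# Sweep the control parameter r and watch the state leave the lower branch.
def settle(r, x, dt=0.01, steps=4000):
    for _ in range(steps):
        x = x + dt * (r + x - x**3)
    return x

x = -1.0                                  # start on the lower attractor
for r in np.linspace(0.0, 0.8, 9):
    x = settle(r, x)
    print(f"r = {r:.2f} -> x settles near {x:+.2f}")
# Below r ~ 0.38 the state tracks the lower branch; past that value the lower
# fixed point vanishes and the state jumps to the upper branch. A bifurcation,
# described without saying the system "decided" or "gave up" anything.
```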
The stress-marker research makes this even clearer:
“we have a we have a project looking at uh stress systemic systemic stress as a measure of being of of distance to your goal state… are they stressed out about being those things or at some point at some point do they that’s the new that’s the new set point and so now and and you know being being a zenobot is my set point I’m now a great zenobot so now my stress can fall”
If xenobots can have “goal states” and “set points” that we analyze through their stress markers (without attributing conscious intent), then any system can be analyzed through its observable dynamics and their effects. The framework licenses exactly this kind of structural analysis. It’s strange to deploy that framework for cell collectives and then resist it for human-embedded systems like research programs and their citation networks.
And a follow-up on the stress-marker research: if anthrobots show low stress, does that mean “being an anthrobot” is now their goal, or does it mean they’ve equilibrated to new constraints? How would you distinguish these interpretations empirically?
This matters because “goal” and “attractor” aren’t equivalent. Goals can be frustrated without the system changing; attractors are whatever the system converges to. Confusing them risks building teleology into what might be pure dynamics.
Carl Friston on Attractors and Stress (~35:02-37:55)
Carl Friston provides the most RCF-compatible framing in the entire discussion:
“I just wanted to pick up on this notion of stress and attractors but try to frame it in uh response to some of the questions that have been rehearsed… I noticed that he used the word dialogue. Um there was also uh Mariana’s um notion of discourse and then we had Olaf’s um negotiated um corpus. I think all that speaks to you know maths just as being a particular kind of co-constructed language that has you know an enormous amount of explanatory power… the notion of identity in my world that would be self and it would be the self you find in self-organization it would be exactly the same thing you find in information theory in terms of self information um right through to self-evidencing under the free energy principle.”
This is a social-constructivist or conventionalist framing, not a Platonist one. Co-constructed languages don’t exist independently of their constructors; they’re products of negotiation and selection.
Friston’s intervention that “the notion of an attracting set has everything that you need” is precisely right. You don’t need Platonic space if you have well-defined attractors in state space. The question becomes: what determines attractor structure? Answer: constraints (thermodynamic, developmental, selective).
But Friston doesn’t push back against Levin’s Platonic framing – he just provides a parallel account that’s compatible with naturalism. This is diplomatically generous but epistemically problematic: it allows the Platonic framework to persist without confronting its burdens.
Here’s the question I’d pose: does the attractor framework need Platonic space, or does it work fine without it?
Friston’s operationalization of stress as “distance from attracting set” is elegant. If stress equals distance from attractor, then what organisms “want” is attractor convergence – minimizing free energy, returning to their characteristic set. This is entirely immanent: the attractor is a feature of the system’s dynamics, not something it “accesses” from outside.
But Platonic morphospace suggests patterns exist independently and organisms “pull toward” them. These seem like different claims. In the attractor picture, the target state is constituted by the system’s dynamics. In the Platonic picture, the target state exists independently and influences dynamics from outside.
Here’s a test: if we could fully specify a system’s dynamics, could we derive its attractor structure? If yes, then attractors are immanent and Platonic space is unnecessary. If no, then something besides dynamics determines attractors – but what? And how does it causally influence the system?
I suspect the answer is yes: attractors are derivable from dynamics. But then I’m confused about how the attractor framework is compatible with Platonism. Can someone clarify?
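Here is what “derivable from dynamics” looks like in the simplest possible case, a toy two-dimensional gradient flow; the attractors fall out of the equations, and Friston’s stress-as-distance measure comes for free:

```python
import numpy as np

# Toy 2-D gradient flow, dstate/dt = -grad V, with V(x, y) = (x**2 - 1)**2 + y**2.
def flow(p, dt=0.01, steps=3000):
    x, y = p
    for _ in range(steps):
        x += dt * (-4.0 * x * (x**2 - 1.0))
        y += dt * (-2.0 * y)
    return np.array([x, y])

# 1. Derive the attractor set purely from the dynamics: integrate from many
#    initial conditions and keep the distinct end states.
rng = np.random.default_rng(1)
ends = np.array([flow(rng.uniform(-2, 2, size=2)) for _ in range(200)])
attractors = np.unique(np.round(ends, 2) + 0.0, axis=0)   # +0.0 folds -0.0 into 0.0
print("attractors found:", attractors)                     # ~ (-1, 0) and (+1, 0)

# 2. Friston-style "stress" as distance from the attracting set.
def stress(p):
    return min(np.linalg.norm(np.asarray(p, dtype=float) - a) for a in attractors)

print(stress([1.0, 0.0]), stress([0.0, 1.5]))  # ~0 on an attractor, ~1.8 far from it
```

Nothing outside the specified dynamics was consulted to find the attractors; that is the sense in which they are immanent.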
“Self” is not a substance with intentions – it’s a pattern of self-organization, an attractor in state space. This is Friston dissolving the very category that intent-based deflection requires. If “self” is what you find in self-organization, then criticizing the organizational patterns of a research program (including its citation network, its uptake by advocacy groups, its structural features that enable appropriation) is exactly the right level of analysis. You don’t need to attribute inner states to the self-organizing system; you analyze its dynamics and effects.
Diffusion Models and Non-Linear Time (~38:19-41:05)
Brian raises an interesting point:
“one of the issues with platonic spaces that I always reconcile with is whether this is all observer dependent in some sense and I think the notion of perturbations is a nice way to think about making the observer aspect as weird as possible… we actually already have these in the AI space, like they’re called diffusion models. And if you’ve ever read like the Ted Chang stories or like seen the movie Arrival, there’s this kind of gap between how we personally oftentimes see time in a linear fashion. But diffusion models in the sequence space, they see time in a completely different way where everything appears at once.”
Diffusion models do solve problems non-sequentially, which provides a concrete example of non-human cognitive process.
However, this doesn’t support Platonism – it supports substrate-dependence of process. Diffusion models have different inductive biases than autoregressive models, so they discover different algorithms. This is evidence that “how you search” determines “what you find,” which is constraint-satisfaction, not Platonic access.
The Ted Chiang/Arrival reference is apt: the aliens in Arrival perceive time non-linearly because their cognitive architecture processes temporal information differently. But this is a story about different constraints producing different experiences, not about accessing an atemporal realm.
Brian’s research direction (training diffusion models on Sudoku to see what algorithms they discover) is genuinely interesting precisely because it’s falsifiable: if diffusion models consistently discover the same algorithms as sequential systems, that would suggest the algorithms are substrate-independent. If they discover different algorithms, that suggests substrate-dependence. Either way, you get empirical traction – unlike Platonic hypotheses.
If algorithms were Platonic (existing independently of implementation), shouldn’t different architectures converge on the same algorithms? The fact that they don’t seems like evidence that algorithms are constituted by computational constraints, not discovered from a shared space.
What am I missing? How does architecture-dependent algorithm discovery support rather than challenge the Platonic picture?
Causal attribution depends on how you parse the process. Sequential parsing produces “agent → intent → action → effect.” Holistic parsing produces “constraint-field → pattern-emergence.” The diffusion model doesn’t have “intent” in the sequential sense; it has dynamics that produce outcomes. Structural analysis of those dynamics is perfectly legitimate.
If that’s true for diffusion models, why not for research programs?
The Physicist’s Perspective on Realms (~41:35-44:37)
Ivette introduces an important point:
“one of the things that I’m interested in is an um uh the interface between the classical world and the quantum world and I tend to see these as two different realms. Maybe this is just like a perspective that can be helpful because at the end of the day you could say well it’s all part of one hole so why divided but sometimes these divisions can be useful… even within our community, some people argue that quantum mechanics is physical and some other colleagues say no, it’s just uh information and it’s just kind of some sort of mathematical tool to make some predictions which I don’t even understand very well what that means.”
Physicists themselves disagree about whether quantum mechanics describes “physical reality” or is “just a mathematical tool.”
This connects to QBism (Fuchs), which treats quantum states as epistemic (degrees of belief) rather than ontic (things existing). Under QBism, the quantum/classical “interface” isn’t a boundary between realms – it’s a transition in modeling strategies that correlates with decoherence timescales.
The symposium treats “different realms” as a neutral framing, but it’s not. “Realm” language reifies scale-dependent modeling differences into ontological divisions. The alternative: what looks like “different realms” is actually different constraint regimes operating at different scales. Quantum constraints dominate at small scales/short times; classical constraints (which emerge from quantum under decoherence) dominate at larger scales/longer times.
Here’s my question: are there two sets of rules, or one set of rules that appears different at different scales? If decoherence explains why quantum superpositions don’t persist at macro scales, then the “classical rules” are just the quantum rules under coarse-graining. No separate realm needed.
What observation would tell us whether we have genuinely distinct realms versus scale-dependent manifestations of unified dynamics?
Simulation vs. Explanation (~51:43-59:54)
Mike Levin asks:
“what do you guys think about the distinction between uh simulation and explanation because because this so so this comes up in biology in the following way. I’ll say look uh we need to understand why you know why this thing is doing that and people say well it’s emergent and I say well what what does that mean and they say well what it means is that if we were to simulate the the micros the sort of micro rules that are driving it this is what we would see… So I’m I’m interested in what the relationship is between being able to simulate it and thus show that yes, in fact that is what happens versus understanding what’s going on.”
Brian offers a pragmatic distinction:
“I think I have a very pragmatic distinction which is it’s a continuum explanation and virtual simulation but if you can accelerate the simulation to a point where it doesn’t have to run in real time or doesn’t have to run in reality in some sense. So the idea is that explanation allows you to skip a lot of steps allows you to go way faster way more forward than maybe a simulator would be if it had to simulate the entire universe to to get to the same point that you want.”
Chris Fields provides a genuinely good answer:
“I I I’ll throw in a different way of answering that question… a simulation may reproduce some effect that you’re interested in, but it doesn’t force you to change your conceptualization of the effect. It doesn’t force you to change your language. Whereas, uh, a really good explanation often forces you to change your concepts.”
This is one of the most substantive exchanges in the discussion. Fields’ answer is genuinely good: explanation involves conceptual change, not just predictive success.
The Rutherford example Fields gives is apt: “Rutherford’s point was that electrons don’t move. That electrons can have particular energies, but they’re not moving. So they don’t radiate uh unless they change their energy in in very precise ways.” This is explanation as dissolving a pseudo-problem by changing conceptual vocabulary.
This is exactly what constraint-satisfaction does to the “Platonic access” problem. The question “how do biological systems access Platonic patterns?” dissolves when you reconceptualize: there is no access because there is no separate realm. Patterns are constraint-satisfaction solutions that get re-instantiated when constraints are satisfied. The question becomes: “what constraints must hold for this pattern to persist?” – which is empirically tractable.
Brian’s compression criterion is also useful but incomplete. Compression isn’t just shorter – it’s shorter under the right abstractions. Bad abstractions can be brief but non-explanatory because they don’t track what’s actually doing causal work.
So here’s my question: what makes an abstraction right? I’d suggest: the abstraction boundaries correspond to constraint boundaries in the system. A good generative model has conditional independencies (Markov blanket structure) that match the system’s actual organization.
Does Platonic morphospace provide the right abstractions? What’s the Markov blanket structure of a “pattern in morphospace”? How does it relate to the system’s physical organization?
If we can’t answer these questions, maybe “pattern in morphospace” isn’t an explanatory abstraction – it’s a placeholder for “outcome we haven’t derived from constraints yet.”
As Friston would note: a good generative model isn’t just compressed – it has the right conditional independencies (Markov blanket structure) that correspond to the system’s actual organization.
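For readers unfamiliar with the term, a Markov blanket is a concrete, checkable property of a model, not a metaphor: the parents, children, and co-parents of a node in a directed graphical model. A minimal sketch, with purely illustrative node names:

```python
# Toy directed graph: node -> list of its parents. Names are placeholders,
# not a claim about any real system's causal structure.
parents = {
    "constraints": [],
    "dynamics":    ["constraints"],
    "pattern":     ["dynamics"],
    "measurement": ["pattern", "observer"],
    "observer":    [],
}

def markov_blanket(node):
    pa = set(parents[node])                                  # parents
    ch = {n for n, ps in parents.items() if node in ps}      # children
    co = {p for c in ch for p in parents[c]} - {node}        # children's other parents
    return pa | ch | co

print(markov_blanket("pattern"))   # {'dynamics', 'measurement', 'observer'}
```

The question for Platonic morphospace is whether “pattern in morphospace” has a blanket structure in this sense that corresponds to anything in the system’s physical organization.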
The “Mind-Bending” Nature of TAME (~1:04:43-1:08:11)
Mariana explains:
“I find it very useful to debate it because why and phenotype and all of this like why um I feel that our models are very good at predicting a lot of things and we can do a lot of things with it. It is true. Um but why is it not a preference of some pattern that we don’t see um with whom we can obviously interact hypothesis because if we see it like this then we have the possibility to interact for example on this question of content if we have the content for me what we could do is do a code book and we can communicate”
Chris Fields responds:
“one thing that the framework uh requires of us is to uh abandon this notion that we are special in a very particular way as as cognitive beings and to um allow ourselves to consider all sorts of other systems as cognitive beings. And that’s um in a sense deeply deflationary.”
“Deeply deflationary” is exactly right. If cognition is everywhere and agency is distributed across scales, then “individual intent” is not a privileged locus of explanation. TAME itself says agency is multi-scale, distributed across components. So when analyzing the effects of any research program or framework, the right level isn’t “what did person X intend?” but “what constraint-propagation patterns produced what downstream effects?”
Fields correctly identifies that extending cognition to all systems is “deflationary” – it reduces human specialness. But he doesn’t follow through on the implication: if cognition is everywhere, it’s a category error to treat it as a special property that requires special explanation.
The reframe: “cognition” is not a substance some systems have and others lack. It’s a description of constraint-satisfaction dynamics under specific conditions (internal model, predictive processing, boundary maintenance).
These conditions can be satisfied by many substrates at many scales, which is why TAME finds “cognition” everywhere it looks.
But this isn’t evidence for Platonism – it’s evidence that “cognition” is too broad a category. What we actually care about is: which constraint-satisfaction systems have which capacities? A sorting algorithm “prefers” sorted outputs in the same way a thermostat “prefers” target temperature – by having dynamics that converge toward specific attractor states. Calling this “preference” is either metaphorical or requires unpacking into constraint-satisfaction terms.
Mariana’s “code book” idea is intriguing but underspecified. What would it mean to “communicate” with patterns? If patterns are constraint-satisfaction solutions, “communicating” with them means manipulating constraints to select different solutions. This is just engineering, not metaphysical contact.
Preferences at Different Scales (~1:13:53-1:16:08)
Carl Friston poses a crucial scale question:
“you’re talking about things that that that uh conform to a variation principle of least action. They have a you know the most likely path and that path is the path they prefer… does let’s just take two extreme scales. Let’s take the electron and the moon um both of which have lawful behavior. Do either of those want or have a preferred course of action or a preferred behavior?”
Chris Fields responds:
“Well, we certainly model them as if they do in terms of least action principles and so forth… Oh, I’m not sure it doesn’t apply to you and me.”
Friston poses a crucial scale question: do electrons and moons have “preferences”?
Fields’ answer (that we model them as if they do, and this might apply to us too) is either radical deflationism (nothing has preferences, it’s all just dynamics) or panpsychism (everything has preferences). Either way, it’s unhelpful without unpacking what “preference” means across scales.
The clarification: “Preference” conflates two distinct phenomena:
- Dynamical tendency: Systems follow paths that satisfy constraints (least action, free energy minimization, etc.). This applies to electrons, moons, and organisms alike.
- Adaptive modification of constraints: Some systems modify their own constraint structure based on feedback, enabling exploration and learning. This is scale-dependent and requires specific organizational features (memory, internal model, boundary maintenance).
The moon has (1) but not (2). Organisms have both. Calling both “preference” obscures the difference. What makes biology interesting isn’t that it follows least action – everything does. What’s interesting is that biological systems modify which action is “least” by changing their internal constraints through learning, development, and evolution.
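A toy contrast, with made-up numbers, between the two senses: a controller that merely tends toward a fixed setpoint versus one that also revises the setpoint from feedback (the structure of the anthrobot stress question above):

```python
import numpy as np

rng = np.random.default_rng(2)
ambient = 15 + 5 * rng.standard_normal(500)    # made-up fluctuating environment

def run(adaptive, setpoint=20.0, lr=0.05):
    t, stresses = setpoint, []
    for a in ambient:
        # (1) dynamical tendency: relax toward ambient, push toward the setpoint
        t = t + 0.1 * (a - t) + 0.3 * np.sign(setpoint - t)
        stresses.append(abs(setpoint - t))      # "stress" = distance from goal state
        if adaptive:
            setpoint += lr * (t - setpoint)     # (2) the system revises its own goal
    return round(float(np.mean(stresses[-100:])), 2), round(float(setpoint), 1)

print(run(adaptive=False))   # tendency only: residual stress, setpoint stays at 20
print(run(adaptive=True))    # constraint modification: stress falls, setpoint drifts
```

Both systems follow their dynamics; only the second changes which state counts as “least action” for it, which is the feature the biological sense of “preference” seems to require.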
Here’s my question: what does the biological sense of “preference” add beyond dynamical tendency? I’d suggest it’s adaptive modification of constraints – organisms don’t just follow paths of least action, they change which path is least action through learning, development, and evolution.
Is that what distinguishes biological goal-directedness from electron “preferences”? If so, then “preference” is equivocal across scales, and the moon-electron-organism continuity is misleading. If not, what does distinguish them?
And a follow-up: does the free energy principle distinguish these cases? Or does it apply equally to electrons minimizing action and organisms minimizing surprise? If the latter, how do we capture what’s special about biological cognition?
Levin on Sorting Algorithms’ “Extra Behavior” (~1:16:37-1:18:08)
Mike Levin makes one of his most interesting empirical claims:
“we’ve been looking at it in uh very minimal computational systems like the like the sorting algorithms that we’ve all talked about and there and and I don’t know actually if so so what we’ve observed you know for anybody that hasn’t seen it is basically that these are simple deterministic algorithms yes they sort like they’re supposed to but it turns out they also do some like as as as we were just saying if as Chris was just saying if you look at it from a different perspective which hadn’t been done in all the years that um people have been studying these things. You actually see something quite differently and uh uh you you see them doing some other stuff that that is is is very surprising.”
Sorting algorithms exhibit “surprising” behavior when viewed from new perspectives.
However, this doesn’t require Platonism to explain. Deterministic algorithms have well-defined behavior at all levels – we just don’t usually attend to most of it. Noticing previously unattended behavior patterns is discovery about the algorithm’s structure, not evidence of hidden agency or Platonic content.
What Levin calls “extra behavior” is likely one of:
- Emergent regularities: Higher-order patterns that follow necessarily from lower-order rules but weren’t previously characterized.
- Measurement artifacts: The new perspective imposes structure that appears as “behavior.”
- Genuinely novel dynamics: The algorithm is doing something the designers didn’t intend (this would be the most interesting case).
For this to support Platonism, you’d need to show that the “extra behavior” couldn’t have been predicted from the algorithm’s specification alone – that it requires appeal to external pattern sources. But if the algorithm is deterministic, its behavior is fully determined by its specification. The surprise is epistemological (we didn’t predict it), not ontological (it came from elsewhere).
Levin’s framing (“in the spaces between the thing you actually wanted it to do”) is suggestive but imprecise. Algorithms don’t have “spaces between” their operations in any mysterious sense. They have complete execution traces. “Spaces between” is observer-imposed parsing.
To make this concrete: if I gave you the complete specification of a sorting algorithm, could you in principle derive all of its “surprising” behaviors? If yes, then the behaviors were always implicit in the specification – no external source needed. If no, then the algorithm isn’t deterministic after all.
Which is it? And if it’s the former, what work is “Platonic space” doing?
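As a concrete illustration of the former option, here is a minimal sketch (not Levin’s actual instrumentation) of a deterministic bubble sort instrumented to report an “extra behavior” it was never asked about; everything it reports is fixed by the specification plus the input:

```python
def bubble_sort_trace(xs):
    """Run bubble sort and also record an 'extra behavior': how many swaps
    each pass performs, something the algorithm was never 'asked' to report."""
    xs = list(xs)
    swaps_per_pass = []
    for _ in range(len(xs)):
        moved = 0
        for i in range(len(xs) - 1):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                moved += 1
        swaps_per_pass.append(moved)
        if moved == 0:
            break
    return xs, swaps_per_pass

sorted_xs, extra = bubble_sort_trace([5, 1, 4, 2, 8, 3])
print(sorted_xs)  # the behavior we "wanted": [1, 2, 3, 4, 5, 8]
print(extra)      # the behavior we didn't ask about: [4, 2, 1, 0], fully
                  # determined by the specification plus the input
```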
Content vs. Path (~1:21:02-1:24:06)
Mariana’s question about “content” is interesting. She distinguishes the geometric path of ingression from the “content” that ingresses. But what is content, separately from the constraints that generate it? If I specify all the constraints on a system (thermodynamic, developmental, selective), have I specified the content, or is there something left over?
Here’s a test: imagine two systems with identical constraints producing different “content.” Would that be evidence for Platonic content (something beyond constraints), or evidence that you missed a constraint?
I suspect it would be interpreted as the latter – we’d look for the hidden constraint that explains the difference. But if that’s right, then “content” is just unidentified constraints. It’s an epistemological placeholder, not an ontological category.
The codebook idea is intriguing, but what would you write in it? If the “content” of patterns is reducible to constraint relationships, the codebook would just describe those relationships. If it’s irreducible, how do we access it? Through what interface? And what would that interface cost thermodynamically?
Observer-Imposed Boundaries (~1:30:42-1:34:28)
Chris Fields articulates a crucial insight:
“from a uh physics perspective we can we always are moving or are able to move back and forth in physics from um uh a theoretical stance in which the world has been divided up in some particular way into systems that have certain internal processes… We can move between a perspective that cuts the world up in one way to a perspective that cuts the world up in some completely different way… but it doesn’t change the assumed behavior of the whole system. So um whatever model is constructed um has to be consistent with the underlying principle that if you just erase the boundaries, take your pencil, erase all the lines you’ve drawn that nothing has changed.”
This is a profound point about the observer-dependence of system boundaries. But follow the implication: if boundaries are observer-imposed, then the boundary around any individual (the boundary that constitutes them as a discrete agent with discrete intent) is also observer-imposed. Erase it, and you have constraint-propagation through networks: institutional, epistemic, financial, social. The “intent” disappears as a primitive; the effects remain as observable patterns.
Fields explicitly says erasing boundaries doesn’t change the system – only how we describe it. If the boundary around “individual with intent” is observer-imposed (as his framework claims), then the effects of framework-deployment exist at a different level than the intent of the deployer.
But Fields doesn’t follow through on the implication for Platonism. If boundaries are observer-imposed, then “patterns” (which are boundary-relative) are also observer-relative. The “patterns in Platonic space” aren’t independent of observation – they’re constituted by observational choices.
Fields (~1:32:52-1:34:28) continues: “to what extent does our perspective in which uh we each view ourselves as bounded entities bias our thinking about how patterns interact.” This is the right question. Our intuition that patterns are “things that interact” comes from our self-experience as bounded entities. But if boundaries are imposed rather than discovered, pattern-interaction is a projection of our observational framework, not a feature of reality independent of observation.
This is why relational quantum mechanics (Rovelli) and QBism (Fuchs) are important: they take the observer-dependence of facts seriously. The experimental evidence (Wigner’s friend experiments, contextuality violations) supports this. Levin’s Platonic framework, by contrast, assumes patterns exist independently of observers and then struggles to explain how observers access them.
Here’s a test case: the “limb pattern” that a regenerating system supposedly “accesses.” What are the boundaries of that pattern? Where does “limb” end and “not-limb” begin in morphospace? If those boundaries are observer-imposed, as Fields’ analysis suggests, then “the limb pattern” isn’t a feature of Platonic space – it’s a feature of how we carve up possibility space.
What would it mean for a pattern to exist in Platonic space independently of any observer imposing the boundaries that constitute it as a pattern? I’m not asking rhetorically – I genuinely don’t understand how pattern-realism survives Fields’ own insight about boundary-dependence.
Quantum vs. Classical Boundaries Over Time (~1:36:22-1:37:44)
Chris Fields makes a striking claim:
“I suppose I would say that from a um a kind of a global quantum theoretic viewpoint. As you increase the temporal scale, the approximation of things being statistically independent always becomes worse. And the only question is how much how much how quickly it becomes worse, which is which is dependent on kind of density of well dependent on energy density, but it it always becomes worse. So I I I do think that the reason we see the world as as classical at large scales is because we don’t know how to look at it. We don’t know the right way to look at it.”
If true, this undermines the “multiple realms” framing of the symposium. There aren’t separate classical and quantum realms – there’s one quantum reality that appears classical under coarse-grained observation. The “interface” between realms is actually the interface between observation resolutions.
Extending this: if the classical/quantum divide is observational rather than ontological, the same analysis should apply to the physical/Platonic divide. What looks like “access to Platonic patterns” might be coarse-grained observation of constraint-satisfaction dynamics. The patterns don’t exist “in another realm” – they’re high-level descriptions of what’s happening in this one.
Mariana (~1:39:59-1:40:33) adds: “you can have all these visions uh from a time independent manner and also a background independent manner… You can literally let the cohomology tell you what the metric is.” This is correct – mathematical frameworks can be formulated without assuming specific background structures. But this is a feature of mathematical formalism, not evidence of a separate mathematical realm. The formalism describes constraint relationships; those relationships are instantiated in physical dynamics.
Synthesis: Recurring Patterns in the Symposium
1. Systematic Nominalization
Throughout the discussion, processes get nominalized into substances:
- “mathematics” instead of “mathematizing”
- “patterns” instead of “constraint-satisfaction”
- “identity” instead of “persistence under perturbation”
- “cognition” instead of “constraint-satisfaction dynamics”
This is the reification move that process philosophy (Whitehead) and general semantics (Korzybski) have been flagging for a century. When you nominalize a process into a substance, you create the illusion that there’s a thing requiring explanation rather than a dynamic requiring characterization.
“Intent” is another nominalization. The verb form would be: “intending” – an ongoing process shaped by constraints, contexts, histories, and feedback loops. When you nominalize it into “intent,” you create something that seems like it could be possessed by a bounded individual, isolated from its relational context.
But the symposium’s own frameworks dissolve this. Fields says boundaries are observer-imposed. Friston says self is self-organization. Mariana says agents loop through pattern-space. The theoretical apparatus is all there for recognizing that “individual intent” is a modeling convenience rather than an ontological primitive.
2. Oscillation as Boundary Defense
What the symposium repeatedly exhibits is not a consistent theory of agency, but an oscillation driven by boundary-protection. Distributed agency is welcomed while it explains robustness and emergence, then abruptly curtailed when it threatens cherished loci of authorship, control, or moral responsibility. The explanatory frame expands over networks and processes until it nears a protected edge, then collapses back into individual entities and their supposed inner drives or intentions. This isn’t resolved at the level of theory; it’s managed at the level of rhetoric.
3. Unfalsifiable Framework Claims
At no point does anyone specify what would falsify Platonic space. The framework accommodates any observation through auxiliary hypotheses (“the pattern was more distant in morphospace,” “the interface wasn’t configured right,” etc.). This is Lakatos’s degenerating research program: adding epicycles rather than making novel predictions.
4. Failure to Address Path-Dependence
The Durant et al. (2017) data (showing path-dependent morphological outcomes in Levin’s own lab) directly contradict Platonic determination. If patterns “pull” systems toward them from outside physics, the path taken shouldn’t matter. The fact that it does matter is evidence for constraint-satisfaction, not Platonic access. Nobody in the symposium addresses this.
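A minimal sketch makes the logic concrete. The code below is not a model of the Durant et al. experiments; it is a toy double-well “constraint landscape” in which the constraints are identical across runs and only the starting point differs, yet the system settles into different stable configurations. If an independently existing pattern uniquely determined the outcome, history should wash out; under local constraint-following, it doesn’t.

```python
# Toy sketch (not a model of Durant et al. 2017): gradient descent on a
# double-well landscape E(x) = (x^2 - 1)^2. The constraints are identical in
# both runs; only the starting point (the "path") differs, yet the trajectories
# settle into different stable forms.
def energy_gradient(x):
    # dE/dx for E(x) = (x**2 - 1)**2
    return 4 * x * (x**2 - 1)

def relax(x0, steps=2000, lr=0.01):
    """Follow the local constraint gradient until the system settles."""
    x = x0
    for _ in range(steps):
        x -= lr * energy_gradient(x)
    return x

print(relax(-0.3))  # settles near -1.0
print(relax(+0.3))  # settles near +1.0 -- same constraints, different outcome
```

The point generalizes: path-dependence is exactly what you expect when outcomes are settled by local constraint-satisfaction, and exactly what you don’t expect if a single target form does the determining from outside.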
5. Conflation of Levels
The discussion repeatedly conflates:
- Descriptive adequacy (math can describe X)
- Explanatory necessity (math is required to explain X)
- Ontological independence (mathematical facts exist independently of physics)
These are distinct claims with different truth conditions. Mathematical Platonism requires all three; constraint-satisfaction only requires the first two.
6. Missing Thermodynamic Grounding
Nobody asks: “What are the energy costs of Platonic access?” If patterns exist independently and biological systems access them, there must be some coupling mechanism. That coupling must have thermodynamic signatures. Where are they? What does “accessing a pattern” cost in terms of free energy? The silence on this question is diagnostic: Platonism treats information as free-floating, but Landauer’s principle makes information physical.
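For scale, Landauer’s bound itself is a one-line calculation. The sketch below simply evaluates k_B · T · ln(2) at roughly physiological temperature; it is not a model of any proposed access mechanism, only an illustration of the thermodynamic bookkeeping the question demands: if “accessing a pattern” changes a system’s information state, that change has a minimum energetic price.

```python
import math

# Landauer's bound: erasing one bit of information costs at least k_B * T * ln(2)
# of free energy. Any claimed coupling to an external pattern that changes a
# system's information state has to answer to this kind of accounting.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly physiological temperature, K

cost_per_bit = k_B * T * math.log(2)
print(f"Minimum cost per bit at {T} K: {cost_per_bit:.3e} J")  # ~2.97e-21 J
```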
7. Carl Friston as Unintentional RCF Ally
Friston consistently provides naturalistic alternatives that do the same explanatory work without Platonic commitments: attractors instead of realms, self-organization instead of access, inference instead of retrieval. He doesn’t challenge Levin directly, but his framework is incompatible with Levin’s metaphysics.
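A toy example of what “inference instead of retrieval” buys, with the obvious caveat that this is not Friston’s full free-energy formalism: an agent whose only “goal” is reducing prediction error will look goal-directed, yet nothing is retrieved from anywhere. The behavior is just gradient descent on error.

```python
# Toy illustration (not the full free-energy framework): an agent whose "goal"
# is nothing more than minimizing prediction error about a sensed value.
# Goal-directed behaviour falls out of the dynamics; no pattern is retrieved.
def settle(belief, observation, rate=0.1, steps=50):
    """Iteratively update an internal estimate to reduce squared prediction error."""
    for _ in range(steps):
        error = belief - observation
        belief -= rate * error   # gradient of 0.5 * error**2 w.r.t. belief
    return belief

print(settle(belief=0.0, observation=3.2))  # converges toward 3.2
```

Everything Platonic-sounding about the trajectory (it “aims at” the observation) is a description of the dynamics, not a cause standing behind them.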
The Core Question Nobody Asked
Across the symposium’s full 1:41:04 of discussion, nobody asks the decisive question:
What does Platonic morphospace predict that constraint-satisfaction doesn’t?
If the answer is “nothing,” Platonic space is explanatorily superfluous.
If the answer is “something,” that something must be testable.
The symposium’s failure to address this is not accidental – it’s structural. The Platonic framework is designed to be unfalsifiable: it can accommodate any outcome by adjusting the “structure of morphospace” post hoc. This is precisely why it’s scientifically problematic.
Constraint-satisfaction offers the alternative: reality is constraint-satisfaction all the way down. Patterns emerge from constraints; they don’t exist independently. Mathematical description captures constraint relationships; it doesn’t access separate realms. This framework makes the same predictions as Platonism where Platonism is vague, and different predictions where it’s specific – and those different predictions favor constraint-satisfaction (path-dependence, thermodynamic costs, measurement-context-dependence).
The symposium is a case study in how intelligent people can spend hours discussing a framework without ever subjecting it to falsification pressure. That’s not science – it’s collaborative speculation. Interesting, but not truth-tracking.
The Distributed Agency Problem
One of the most interesting tensions in the symposium emerges not from disagreement but from agreement – agreement on frameworks that dissolve the very categories their proponents might later need for defense.
This matters because of where frameworks travel once they leave the lab.
I’ve had exchanges with researchers in this space who emphasize that their work is “not about” challenging evolutionary theory, that they’re simply exploring interesting questions about biological organization. And I take that at face value – I have no reason to doubt anyone’s sincerity about their own motivations.
But here’s my question: does intent determine uptake?
The Discovery Institute (the institutional engine of Intelligent Design advocacy) actively cites work on morphogenetic fields, goal-directedness in development, and “non-physical” pattern causation as scientific support for their position. They don’t cite it incorrectly, exactly. They cite it for what it does: provide scientific-sounding vocabulary for the claim that biological form requires explanation beyond physics and chemistry, that patterns exist independently and “guide” material processes.
If someone told me their framework was being systematically cited by anti-evolution advocacy groups to add scientific legitimacy to movements that threaten science education, medical research, and evidence-based policy, I would want to know: what features of my framework make it so easily appropriated? And: is there a way to get the scientific benefits without those features?
These questions don’t require attributing intent. They require analyzing consequences.
In correspondence, I’ve been told that my critiques seem to attribute malicious intent, that I’m making claims about what researchers “really mean” or “secretly want.” But I’m not making claims about inner states at all. I’m making claims about observable patterns: framework has structural feature X (unfalsifiability, substance-dualist vocabulary), structural feature X enables uptake Y (ID movement citation), uptake Y produces harm Z (legitimization of anti-evolution advocacy).
None of that requires intent. All of it requires consequence-tracing.
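Consequence-tracing of this kind can even be written down mechanically. The sketch below uses hypothetical node labels (placeholders for illustration, not documented citations) to show the structure of the claim: features, uptakes, and harms are nodes; “enables” relations are edges; and no node anywhere carries an “intent” attribute.

```python
# Minimal sketch of consequence-tracing as graph traversal. Node names are
# hypothetical placeholders. The chain "feature X -> uptake Y -> harm Z" is
# expressed entirely as edges between observable events; intent never appears.
enables = {
    "unfalsifiable framing":        ["ID-movement citation"],
    "substance-dualist vocabulary": ["ID-movement citation"],
    "ID-movement citation":         ["anti-evolution curricula", "eroded public trust"],
}

def trace(node, path=()):
    """Enumerate every downstream consequence chain from a starting node."""
    path = path + (node,)
    downstream = enables.get(node, [])
    if not downstream:
        yield path
    for nxt in downstream:
        yield from trace(nxt, path)

for chain in trace("unfalsifiable framing"):
    print(" -> ".join(chain))
```

Running it prints the feature-to-harm chains; disputing the analysis means disputing an edge, not a motive.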
And here’s what puzzles me: the researchers I’ve corresponded with have emphasized that they’re not doing anything that contradicts naturalism, that their frameworks are continuous with evolutionary biology, that they’re simply exploring new explanatory territory. If that’s true (and I have no reason to doubt their self-understanding), then why do those frameworks translate so seamlessly into Discovery Institute talking points?
This isn’t a gotcha. It’s a genuine question. If you tell me your framework is naturalistic, and the Discovery Institute tells its audience your framework supports non-naturalistic causation, one of you is misunderstanding the framework. Which one?
The Motte-and-Bailey Structure
In my exchanges with researchers in this space, I’ve noticed a pattern that I want to name carefully, because I think it’s operating below the level of conscious strategy.
The Motte (defensible position): “We’re just exploring interesting questions about biological organization. This is continuous with mainstream science. We’re not making metaphysical claims.”
The Bailey (actual position being defended, and the position that gets cited by ID advocates): “Patterns exist independently of physical instantiation. Form causation operates from outside physics. There are non-physical facts that biology must accommodate.”
When challenged on the Bailey, retreat to the Motte. When the challenge passes, return to the Bailey.
I don’t think this is intentional. I think it’s how unfalsifiable frameworks naturally behave under pressure – they have defensible cores and indefensible extensions, and the boundaries between them stay conveniently fuzzy.
But the Discovery Institute doesn’t cite the Motte. They cite the Bailey. They cite the language of “patterns existing in morphospace,” of “goal-directedness that can’t be reduced to physics,” of “non-physical facts that biology must accommodate.” That’s the language that serves their purposes.
So here’s my question for researchers in this space: if you mean the Motte, why use Bailey language? And if you mean the Bailey, why retreat to the Motte when challenged?
The question isn’t about intent. The question is about precision. If “Platonic morphospace” is just a useful heuristic for thinking about developmental attractors, say that explicitly – and note that it’s not ontologically committed, that it’s a modeling convenience, that it doesn’t imply patterns existing independently of physical instantiation. If it is ontologically committed, say that explicitly – and address the epistemological access problem, the causal interaction problem, and the question of what would falsify it.
The fuzzy middle is where the harm lives. It’s where frameworks can be “just exploring” when challenged and “revolutionary new ontology” when cited.
The Self-Refutation Structure
Here’s where the tension becomes explicit.
If TAME is correct:
- Agency is distributed across scales
- “Individual intent” is not a privileged explanatory primitive
- Cognition extends beyond neural boundaries
Then analyzing the distributed effects of a framework (including its uptake by advocacy groups, its structural features that enable appropriation, its consequences for science education and policy) is exactly what TAME predicts you should do. The intent of any individual proponent is distributed across: their training, their lab, their funding sources, their collaborators, their institutional context, their prior publications, their audience expectations, and the citation networks that carry their work into contexts they may not endorse. Criticizing the downstream effects of this distributed system doesn’t require (and shouldn’t require) isolating “individual intent” as a causal primitive.
If TAME is incorrect:
- Then its claims about distributed agency are wrong
- Individual intent is privileged for humans
- And this should be stated explicitly, along with an explanation of what makes human intent ontologically special when cell-collective intent isn’t
You can’t have it both ways. You can’t claim that agency is distributed when analyzing xenobots and anthrobots, then claim that agency is individual when receiving criticism. Well, you can claim both things – but then you’re applying distributed agency when it’s theoretically convenient and individual agency when it’s defensively convenient.
That’s not coherent. It’s strategic equivocation.
I’ve tried to make this point gently in correspondence: if you tell me that xenobots have “goals” that we can analyze without attributing conscious intent, then you’ve already conceded that goal-analysis doesn’t require intent-attribution. If you then tell me that analyzing your framework’s effects requires addressing your intent first, you’re applying different rules to yourself than to your research subjects.
The question isn’t “what do you mean by your framework?” The question is “what does your framework do once it leaves your control?” Those are different questions with different answers, and only the second one is about consequences.
The Transcript as Self-Undermining Evidence
What makes the symposium transcript so valuable for this analysis is that it provides the theoretical apparatus for dissolving intent-as-primitive – and then never notices the implications for how critique of the framework itself should proceed.
When Levin says xenobots might have a “new set point” where being a xenobot is now their goal, he’s licensing exactly the kind of analysis that doesn’t require inner-state access. Observable dynamics. Measurable stress markers. Behavioral patterns. Effects.
When Fields says boundaries are observer-imposed, he’s undermining the boundary around “individual agent with intent” that deflection-to-intent requires.
When Friston says self is self-organization, he’s dissolving the substance that would have to possess intent as an intrinsic property.
When Mariana says agents loop through pattern-space, she’s distributing agency beyond individual boundaries in exactly the way that makes individual-intent-analysis the wrong level.
The frameworks are all there. The implications are all there. The only thing missing is the recognition that these frameworks apply reflexively – that if they’re true of xenobots and anthrobots and diffusion models and biological systems, they’re true of research programs and their citation networks too.
Structural critique is what these frameworks license. Intent-deflection is what these frameworks dissolve.
The symposium provides its own refutation of the deflection strategy – it just doesn’t notice that it’s doing so.
The Fallacies of Intent-Deflection
When structural critique is deflected by “you’re attributing malicious intent,” several fallacies compound:
Category Error: Criticizing the consequences of framework deployment is not the same as attributing intent to the deployer. These operate at different levels. The framework has property P (unfalsifiability); its deployment produces effect E (policy influence without accountability). Neither claim requires anything about the deployer’s intent.
Self-Refutation: If distributed agency frameworks are correct, then individual intent is not a privileged explanatory primitive. The right level of analysis is: distributed agency across lab, institution, field, policy network. Criticizing this distributed system is exactly what the frameworks predict you should do.
Tu Quoque: Claiming “you’re attributing intent” is itself attributing intent – the intent to attribute malice. If intent-attribution is problematic, the deflection is equally problematic. If intent-attribution is legitimate, structural critique can proceed.
Motte-and-Bailey: The defensible position is “you shouldn’t claim to know my inner mental states.” The actual position being defended is “therefore your structural critique is invalid.” But structural critique doesn’t require claims about inner states.
Genetic Fallacy: Even if someone were attributing malicious intent, this wouldn’t affect whether their arguments are sound. The origin of an argument is irrelevant to its validity.
Appeal to Motive: “You’re attributing malice” functions as “therefore I don’t have to address your actual claims.” But the claims stand or fall independent of attributed motives. Is the framework falsifiable? Does the data show path-dependence? Do unfalsifiable frameworks produce unaccountable policy influence? These are addressable without any claims about intent.
The Harm Vector: Why This Matters Beyond Philosophy
Let me be concrete about why I keep pressing on this.
The Discovery Institute’s citation of morphogenetic field research, goal-directed development, and “Platonic” pattern causation isn’t hypothetical. It’s documented. They use this work to argue that “mainstream science” is moving toward recognizing non-material causation in biology, that evolutionary theory can’t explain form, that Intelligent Design is vindicated by cutting-edge developmental biology.
This has consequences:
- Science education: ID-influenced curricula miseducate students about how biology works.
- Medical research: Resources flow toward “non-physical” explanations while physical mechanisms remain undiscovered.
- Public trust in science: When “even biologists” seem to support non-material causation, science’s epistemic authority erodes.
- Policy: Evidence-based policy requires trust in scientific consensus; that trust is a target.
None of this is about intent. It’s about structural features of frameworks that make them appropriable for harmful purposes.
The question I’d pose to researchers whose work gets cited this way: what features of your framework make it useful to the Discovery Institute? And: could you get the same scientific benefits from a framework that doesn’t have those features?
If the answer to the second question is “yes” (if Platonic morphospace is eliminable in favor of attractor dynamics, if “non-physical facts” is replaceable with “constraint relationships,” if goal-directedness can be cashed out as free-energy minimization), then the metaphysical language is doing no scientific work. It’s only doing rhetorical work. And that rhetorical work has downstream effects that the metaphysics-free version wouldn’t have.
If the answer is “no” (if the Platonic framing is essential, if the work requires commitment to independently-existing patterns, if goal-directedness can’t be reduced to dynamics), then the framework is making substantial metaphysical claims. Those claims need to address the epistemological access problem, the causal interaction problem, and the question of falsifiability. And researchers making those claims should own them explicitly, rather than retreating to “just exploring” when challenged.
Either way, the question of intent is irrelevant. What matters is: what does the framework do? What effects does it produce? What harms flow from its structural features?
Here’s the core harm that intent-deflection obscures:
Unfalsifiable frameworks can’t learn from error.
This has consequences regardless of anyone’s intent:
Policy influence without correction: If Platonic morphospace guides regenerative medicine policy, and it’s unfalsifiable, the policy can’t be corrected by evidence. Harms compound across applications.
Resource misallocation: Funding flows to unfalsifiable research programs that can’t be distinguished from correct ones by evidence. Opportunity costs accumulate.
Epistemic pollution: Unfalsifiable frameworks enter citation networks, training data, public discourse. Future researchers inherit the pollution regardless of original intent.
Authority laundering: Unfalsifiable claims gain credibility through institutional association, not evidential support. The institution’s credibility is borrowed without being earned.
None of these require intent. They’re structural consequences of unfalsifiability.
Falsifiability is the remedy because:
- It creates correction mechanisms – error can be detected.
- It distributes accountability – claims must face evidence, not just authority.
- It prevents indefinite harm-compounding – false claims eventually get rejected.
- It enables genuine learning – we know more after than before.
A Direct Question
If someone showed you that your framework was being systematically cited by advocacy groups whose goals you don’t share, to support conclusions you don’t endorse, in ways that produce harms you would want to prevent – what would you want to know?
I’d want to know: what structural features of my framework enable this appropriation? And: can I modify those features while preserving the scientific value?
Those questions don’t require anyone to attribute intent to me. They require analyzing consequences. They require tracing constraint-propagation. They require asking what frameworks do, not what their proponents mean.
That’s the analysis I’m trying to offer. Not “you intend X.” But “your framework does Y, Y enables Z, Z produces harms H.” Every step in that chain is observable, testable, and addressable without any claims about inner states.
If distributed agency is real, this is the right level of analysis.
If distributed agency isn’t real, TAME is wrong about xenobots.
Either way, the question of intent is a distraction from the question of consequence. And consequences are what matter when frameworks leave the lab and enter the world.
The Final Irony
There’s a deep irony in how certain frameworks get deployed in discourse.
Frameworks that claim patterns exist independently of physical instantiation. Frameworks that claim agency is distributed beyond individual boundaries. Frameworks that emphasize observer-dependence of boundaries.
These frameworks, taken seriously, dissolve the very categories that deflection-to-intent requires. If agency is distributed, “individual intent” isn’t the right level of analysis. If boundaries are observer-imposed, “the individual agent” is a modeling convenience rather than an ontological primitive. If patterns are relationally constituted, “what X meant” is less important than “what effects emerged from the constraint-propagation.”
When critique arrives, there’s a choice: engage the structural claims on their merits, or retreat to exactly the framework that the work supposedly transcends – the framework of bounded individuals with interior intentions that must be addressed before structural analysis can proceed.
One of these responses is coherent with distributed agency. The other requires abandoning distributed agency when it’s inconvenient.
The symposium is worth watching. The questions raised are genuine. But the question that wasn’t raised (what would falsify this?) is the question that distinguishes science from speculation.
Until that question gets a specific answer, we’re doing philosophy. That’s fine. Philosophy is interesting. But philosophy that guides biological interventions without specifying its failure conditions isn’t wisdom – it’s risk that can’t be priced.
And risk that can’t be priced is risk that gets ignored until the bill comes due.
My positions are falsifiable. I’ve specified how. The question for frameworks that resist specifying their falsifiers is not “what do you intend?” but “what would prove you wrong?” If the answer is “nothing,” the conversation isn’t scientific – whatever its other virtues may be.