Is Michael Levin’s Platonic Morphogenesis Scientific?
“There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.”
Daniel Dennett, Darwin’s Dangerous Idea
An analysis of how a strong naturalistic talk on convergence in biology and AI inadvertently undermines the Platonic metaphysics it was invited to support
Brian Cheung’s recent talk at Michael Levin’s Symposium on Platonic Space, titled “The Emergence of Convergence in Different Levels of Biology and AI,” is genuinely excellent. It presents a coherent, empirically grounded framework for understanding why neural networks (both biological and artificial) converge on similar representational structures. The data is compelling. The methodology is sound. The implications are fascinating.
There is just one problem: it does not actually support Levin’s Platonic metaphysics. It replaces it with a naturalistic, kernel-based convergence story that makes an independent Platonic realm entirely unnecessary.
Watch what happens at the exact point where Cheung introduces the “Platonic representation hypothesis.”
What Cheung Actually Claims
First, Cheung explicitly rules out “same data” and “same architecture” as sufficient explanations for convergence, and grounds everything in the world’s causal structure:
“So, what do we think it’s all about? Well, we think it’s about the world and is that we’re kind of invoking Plato’s, you know, allegory of the cave, meaning that there are these prisoners chained to a cave wall and their only interaction with the outside world is these projections of these shadows. But the point is is that there is a world out there and that world has causal features and the idea being that those projections are caused by the same causal properties of the actual world.” (~11:23)
He explicitly says it cannot “just be about the data” or “just be about transformers”:
“You might be asking, well why is this happening? Now you could imagine it’s all about the data, right? There’s only one internet in the world… But the thing is I showed you a line between vision and language. Those are not the same data sets… so it can’t be about the data. You might be asking, well, it’s all about transformers… but what we show in our paper is that there’s other architectures… that also show this convergence… So, it can’t just be about the architecture.” (~10:39)
Then comes the key move. He defines the “Platonic representation hypothesis” in purely representational and structural terms:
“If you have this causal representation, let’s say in this representation Z, it gets projected in many different domains… for example, it can be projected into a 2D image and be also projected into some semantic description of that image… as they embed those two different domains into representations, they should become more aligned because they come from the same causal process. And this is what we call the platonic representation hypothesis. Different neural networks are converging towards the same way of representing the world or in our case the same kernel.” (~11:47)
Notice what is not here: no talk of a realm that “exists independently” of the physical world, no “significant minds” inhabiting Platonic space, no “free compute” we “did not pay for,” no mysterious “access” relation between physical systems and non-physical patterns.
The only “Platonic” content is: stable, convergent kernels induced by the same causal world.
“Skyhooks are miraculous lifters, unsupported and insupportable. Cranes are no less miraculous, but they are honest. They work.”
Daniel Dennett
The Empirical Architecture: One World, Convergent Representations
The whole talk is an extended argument that representational convergence emerges from shared causal structure in a single world. Let me trace the key claims.
Brains and high-performing models share kernels
Cheung starts with representational similarity between brains and vision models:
“You have a brain, it generates representations via neural recordings and you have… a vision model also generates representations given an image… you could show the brain and also a vision model the same image, record the representations… and compare… just from visual inspection, they look pretty similar.” (~1:23)
He frames this as a “representational Turing test”:
“If you are a neuroscientist… could you tell whether you’re getting representation from a brain or maybe from another artificial model?” (~2:20)
Different architectures converge as they get better
He shows that convolutional networks and transformers become harder to distinguish at the kernel level as architectures improve:
“ResNet 18 or ResNet 34… you can see that the overlap is actually a lot more substantial… the situation gets even worse meaning that the overlap between convolution architectures’ representations and vision transformer representations is increasing… So this led us to the following question. Do different models represent the world in different ways or are they all somehow becoming alike?” (~5:36)
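Cheung does not spell out the alignment metric in these excerpts, but the kernel-level comparison he describes can be illustrated with a standard measure such as linear centered kernel alignment (CKA). A minimal sketch with synthetic representations (the matrices Z, A, B, C and their dimensions are invented for illustration, not taken from the talk):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_samples, n_features). Returns a value in [0, 1];
    higher means more similar similarity geometry (kernels)."""
    X = X - X.mean(axis=0)                      # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2   # HSIC-style cross term
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))        # shared latent "world" factors
A = Z @ rng.normal(size=(16, 64))     # "model A": one projection of Z
B = Z @ rng.normal(size=(16, 32))     # "model B": a different projection
C = rng.normal(size=(200, 32))        # unrelated representation

print(linear_cka(A, B))   # high: both kernels inherit Z's geometry
print(linear_cka(A, C))   # low: no shared causal source
```

The point of the toy: A and B have different feature dimensions and different weights, yet their kernels align because both are projections of the same Z, which is the structure of Cheung's "same causal process" claim in miniature.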
Cross-modal convergence between vision and language tracks performance
With a paired Wikipedia image-and-caption dataset, he plots language model performance (bits per byte) versus alignment to DINOv2:
“Better language models are simply better at language… Now the other hypothesis could be that better language models are actually in some sense better vision models and this is what we actually see in our results.” (~8:17)
“As language models go from 560 million parameters to 65 billion parameters the alignment increases… as a language model gets better at language it becomes more aligned to the DINOv2 model which is a vision model.” (~8:54)
And the relationship is bidirectional:
“As you go from the smaller vision model to the larger ones, the alignment simply shifts upwards… As models get better at vision, they also become more aligned to language models.” (~9:36)
Alignment is controllable by description and prompting (not mystical access)
Here is where the naturalistic framework becomes especially clear. Longer captions increase alignment:
“If you… increase the caption size, you see the alignment also increases. Meaning that as you describe the image more via having a longer caption… that increases the alignment to the vision model.” (~12:58)
“Sensory prompting” an LLM (“imagine seeing” versus “imagine hearing”) shifts kernels toward vision or audio models:
“What happens when you ask a language model to imagine senses it never experienced?… imagine what it would look like to see the caption… imagine what it would sound like to hear the caption.” (~17:24)
“When the model generates text you see this dissociation… suddenly the ‘see’ cue is more aligned to the vision model… and you see the reverse… for the audio model.” (~19:56)
Rewriting “seeing” text to “hearing” text and vice versa flips alignment:
“When you convert seeing generated text to hearing generated text the alignment goes down to the vision model… when you convert hearing to seeing the alignment goes up… showing that the text… influences the representation alignment.” (~21:18)
This is control, not access. The alignment shifts are produced by manipulating inputs within the system, not by tapping into an independent realm.
Scaling laws and biological evolution show the same pattern
In silico retinas evolve foveas under task constraints; adding zoom capacity stops foveas from emerging. Evolving eyes and brains together shows that a bigger “brain” only helps if visual acuity also increases; otherwise performance saturates:
“A larger brain is only better if you’re also allowed to increase the acuity that goes into that brain… This is very similar to… scaling laws of these large language models.” (~39:54)
The pattern is universal, and universally explicable through constraint satisfaction, not Platonic access.
Cheung’s Philosophy of Science: Constraint, Not Transcendence
Then Cheung turns explicitly to philosophy of science:
“Some people might argue [alignment] is a consistent representation across all modalities. But I argue that maybe it’s just a consistent representation across all projections of the real world.” (~42:25)
“What is science? I argue that science is the search for consistent representation of the real world. What if we optimized for alignment to find consistent representations?” (~42:34)
“We have a hypothesis space and things that solve vision create some notion of intersection of this hypothesis space with things that solve language… the truth is the intersection of those two parts… peer review should constrain the hypothesis space more and more.” (~44:33)
This is convergence through constraint, not through access to an independent realm. Science, on this view, is the progressive tightening of consistency requirements across multiple projections of the same causal world. It is structurally identical to the naturalistic “constraint causation” and “cognitive Platonism” moves in Gordana Dodig-Crnkovic’s talk at the same symposium.
Douglas Brash’s Q&A Remarks: Grounded Symbols, Not Platonic Forms
The Q&A reinforces the naturalistic reading even further. Douglas Brash (Yale) makes a crucial point about grounded symbols and resemblance-based structure:
“Back in the 1800s before Chomsky, the notion in linguistics was that language does model the outside world and the way you process it does do that. Then all that all kind of died when that got replaced by the idea that language involves arbitrary symbols involving some kind of syntax. And then Harnad in 1990 argued that cognition can’t work that way because what cognition has to do, it has to compare things, contrast things, generalize. And so for that, your symbols, your mental symbols need to contain some resemblance to whatever the reference is. If you have an arbitrary symbol, the information just isn’t there. You can’t do it. So he called those grounded symbols.”
Brash then notes:
“I can tell you that if you take a grounded symbol approach to language, you can write a program that will process English rather well in 8 megabytes of software, half of which is the dictionary.”
This is exactly the point. Once you take grounded, resemblance-based structure seriously, a very compact architecture can do a lot of what we call “intelligence” without invoking a separate realm. Cheung’s kernels and alignment curves are precisely this kind of grounded structural story, expressed in machine learning form.
Brash even addresses the Wittgensteinian point that Cheung invokes:
“Those suffixes actually create kind of all these simultaneous equations you could argue in the system that are consistent with each other. And the idea is that even though it’s not necessarily grounded in any sense of you know grounding, it has this consistency of the relationships of other objects, it has of other words and those relationships actually are the same relationship that you see in the real world.”
The entire explanatory apparatus is internal kernels, cross-modal alignment, and optimization in one world. No independent realm required. No mysterious access relation needed.
A Necessary Disambiguation: Three Very Different Uses of “Platonic”
At this point, it is worth pausing to make explicit something that has remained implicit throughout the symposium, and which is doing far more conceptual damage than most participants likely intend.
The word “Platonic” is doing three very different jobs in this discussion. Only one of them is scientifically legitimate. Failing to distinguish them invites exactly the kind of conceptual slippage that allows rigorous empirical work to be retroactively conscripted in defense of metaphysical claims it does not support.
The three uses are these.
First, Platonic-as-descriptive abstraction.
This is the weakest and most innocuous sense. It refers to the fact that we can describe patterns, relations, kernels, and invariances abstractly, in a way that is substrate-independent at the level of description. In this sense, the same structure can be instantiated in silicon, neurons, or equations without being ontologically separate from those instantiations. Nothing here implies independence from the physical world. It is simply abstraction.
Second, Platonic-as-structural convergence.
This is the sense actually doing the explanatory work in Cheung’s talk. Different systems converge on similar internal geometries because they are optimizing under the same causal constraints imposed by the same world. Constraint satisfaction has limited solutions. As performance increases, degrees of freedom collapse, and representational geometry converges. The resulting kernels look “universal” not because they are accessed from elsewhere, but because the space of viable solutions is narrow. This is a fact about optimization under constraint, not about ontology.
Third, Platonic-as-transcendent ontology.
This is a very different claim. Here, Platonic space is said to exist independently of physical systems, to contain structured contents prior to instantiation, and to be something organisms or models somehow access. This version introduces an additional realm, an access relation, and causal influence without a corresponding physical mechanism. It is this third sense that carries heavy metaphysical commitments and faces the classical interaction and ingression problems.
Cheung’s framework clearly occupies the first two categories and never requires the third. His explanatory apparatus consists entirely of kernels, alignment, optimization, scaling laws, and shared causal structure. Nothing in his data requires an independent realm. Nothing in his argument invokes access rather than construction. The convergence he documents is explained by constraint satisfaction in a single world.
By contrast, Levin’s broader Platonic claims explicitly rely on the third sense. Platonic space is described as existing independently, containing “significant minds,” preserving state, and exerting causal relevance on biological systems. That is not merely a stronger version of Cheung’s claim. It is a categorically different claim.
Why does this distinction matter? Because without it, a sympathetic reader can always say, “Perhaps they are just using different flavors of Platonism.” That move is precisely the harm. It allows a defensible, testable account of convergence to function as a rhetorical shield for an indefensible ontological commitment. Making the taxonomy explicit prevents this.
Once the distinctions are drawn, several things become unavoidable. Abstraction does not imply transcendence. Constraint-induced universality does not imply access to a realm. And Cheung’s data cannot be cited in support of Levin’s independent Platonic space without introducing additional premises that Cheung neither states nor needs.
This is also the point at which a brief historical note sharpens the analysis. Aristotle rejected the existence of Forms as independent entities and rejected mathematical objects as causally efficacious substances. But he accepted universals as immanent, form as constraint on matter, and explanation in terms of what must be the case given the structure of the world. By Aristotle’s lights, Cheung’s framework would be classified as immanent formal causation, not Platonism at all. Levin’s framework would not.
A similar, but much milder, risk appears in Pavel Chvykov’s framing. There, the danger is terminological rather than structural: the mechanisms remain fully naturalized, and the only worry is that calling them “Platonic” invites metaphysical over-reading, retroactive justification of non-naturalistic claims, and confusion about causal direction. That is a category mistake induced by vocabulary, not by theory.
The point bears emphasizing: when tested against falsifiability, Cheung’s and Chvykov’s frameworks (see Pavel Chvykov’s “Why physical systems find Platonic patterns”) collapse cleanly into constraint-based naturalism. Levin’s does not. Making that explicit is not hostile to structure, universals, or abstraction. It is hostile only to the silent slide from abstraction to transcendence.
Naming the distinction closes the last rhetorical escape hatch.
The Motte-and-Bailey Structure
Here is where the terminology becomes dangerous.
The only thing “Platonic” in Cheung’s hypothesis is:
- Stable structure induced by the world’s causal regularities
- Convergence of internal similarity geometry across diverse systems
- A heuristic about science as tightening consistency constraints via alignment
That is not what Levin means when he tells lay audiences that Platonic space “exists independently of” the world, contains “significant minds” with “plasticity” that can “save state,” and that organisms “access” its “specific contents” via bioelectric interfaces.
Cheung never endorses those claims. He does not need them. His entire explanatory apparatus works without them.
So we have a deep tension that shared vocabulary risks hiding:
If Cheung’s Platonic representation hypothesis is the right way to understand convergence, then Levin’s independent Platonic realm is redundant at best and misleading at worst.
If Levin’s realm is taken literally (independent space, minds, non-physical compute), then Cheung’s account is radically incomplete. But nothing in Cheung’s experiments points toward that extra layer.
Calling both “Platonic” invites the classic motte-and-bailey game: a defensible, testable motte (kernels, convergence, alignment as constraint) and an indefensible metaphysical bailey (independent realm with minds) trading on the same term. The audience hears “Platonic” and assumes both speakers mean the same thing. They do not.
What Would Falsify What?
Here is the question that clarifies the distinction.
Cheung’s framework is falsifiable. If better models stopped converging, if cross-modal alignment decreased with capability, if sensory prompting had no effect on representation geometry, if biological evolution produced eyes that violated scaling laws rather than following them, the hypothesis would fail. These are testable, measurable outcomes.
What would falsify Levin’s independent Platonic realm? If patterns are “accessed” rather than “constructed,” what observation would indicate failed access versus successful access? If morphospace exists independently of physical instantiation, what physical measurement would reveal its absence? The framework seems structured to accommodate any outcome, which is precisely what makes it scientifically problematic.
Cheung’s talk is science. The question is whether the symposium framing allows audiences to mistake it for support of something that is not science.
The Grounding Problem
There is a deeper issue here, one that Brash’s Q&A remarks illuminate.
The question “why do diverse systems converge on similar representations?” has two very different types of answer:
Type A (Naturalistic): They converge because they are solving similar problems under similar constraints in the same causal world. Constraint satisfaction has limited solutions. Performance optimization forces systems toward those solutions. The “Platonic” structure is the structure of successful constraint satisfaction, not an independent realm.
Type B (Transcendent): They converge because they are all accessing the same independent realm of patterns. The realm exists prior to and independent of the physical systems. Convergence indicates successful access rather than successful optimization.
Cheung’s entire talk is a Type A answer. He never invokes Type B. He does not need to. The data he presents is entirely explicable through Type A mechanisms.
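The Type A story can be made concrete in a toy setting: two deliberately different function classes, fit to the same noisy observations of one underlying signal, end up making nearly identical predictions, because the data constrains the solution, not the architecture. A sketch under invented assumptions (the target function, bases, and noise level are all illustrative choices, not anything from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
world = np.cos(3 * x)                        # the shared causal "world"
y = world + 0.05 * rng.normal(size=x.size)   # noisy observations of it

def fit(features):
    """Least-squares fit of y in a given feature basis; returns predictions."""
    Phi = features(x)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ w

# Two deliberately different "architectures" over the same data:
poly = lambda t: np.vander(t, 10)                              # polynomial basis
cosb = lambda t: np.cos(np.outer(t, np.arange(10)) * np.pi)    # cosine basis

p_poly, p_cos = fit(poly), fit(cosb)
# Same world, different bases: the fitted functions nearly coincide.
print(np.corrcoef(p_poly, p_cos)[0, 1])
```

Nothing here "accesses" a realm where cos(3x) lives; both models converge because the observations leave few viable solutions. That is the Type A mechanism in its simplest form.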
But the symposium framing, and Levin’s broader claims about Platonic space, presuppose Type B. The tension is not resolved by using the same word (“Platonic”) for both. It is obscured.
Implications for the Broader Project
If Cheung is right, and I think his data strongly suggests he is, then the interesting questions become:
- What are the constraint structures that force convergence?
- How do different initial conditions and architectures navigate toward the same representational solutions?
- What does this tell us about the relationship between optimization, evolution, and representation?
- How can we use alignment as a tool for scientific discovery (Cheung’s vision)?
These are exciting, tractable questions. They do not require Platonic metaphysics. They require careful empirical work of exactly the kind Cheung is doing.
The risk is that the symposium framing allows audiences to think Cheung’s rigorous empirical work supports metaphysical claims it does not touch. And the further risk is that when those metaphysical claims are questioned, the response is retreat to the defensible motte (Cheung’s version), while continuing to occupy the indefensible bailey (Levin’s version) in other contexts.
Anyone interested in conceptual clarity should keep these sharply distinct.
The “Bedrock” That Isn’t: How the Symposium’s Own Speakers Dissolve Levin’s Mathematical Foundation
In the Discussion #1 at the Platonic Space Symposium, Levin explicitly reveals his rhetorical strategy for establishing non-physical facts:
“The only reason I bring it up at all is that it seemed to me to be a I won’t say uncontroversial because obviously there’s a lot of controversy but at least a simpler domain in which to try to make the claim which some people at least already believe that not all facts are physical facts… at least in math, other people for a really long period of time have already made the claim that there are facts that are not derived from nor changeable within physics. There’s sort of this other domain of important information that exists. And so that was my strategy to say look we already know this is the case or at least many people believe this is the case and so now we can ask the question of whether some of these things are also relevant for biology for behavior science and so on. We can sort of move on from that foundation.” (~11:29)
The strategy is explicit: establish mathematical Platonism as “bedrock,” then extend it to biology. If non-physical mathematical facts exist, perhaps non-physical biological facts (patterns in morphospace) exist too.
There are three problems with this strategy, and the symposium’s own speakers illuminate all of them.
Problem One: The “Bedrock” Is Contested Ground
Levin frames mathematical Platonism as a relatively safe starting point (“at least a simpler domain,” “many people already believe this”). But mathematical Platonism is not uncontroversial bedrock. It is one of the most contested positions in philosophy of mathematics, facing well-known objections that have never been resolved:
Benacerraf’s Dilemma (1973): If mathematical objects exist outside space and time, how do we have epistemic access to them? Our cognitive faculties evolved to track physical regularities. What mechanism allows us to “perceive” abstract objects?
The Epistemological Access Problem: If mathematical facts are not “derived from nor changeable within physics,” how do physical brains come to know them? What is the interface between physical neurons and non-physical mathematical entities?
Quine’s Holism: Mathematical beliefs face empirical evidence as part of a corporate body of beliefs, not in isolation. Mathematics is revisable in principle (as non-Euclidean geometry demonstrated). The sharp distinction between “mathematical facts” and “physical facts” dissolves under holistic epistemology.
Levin acknowledges there is “controversy” but treats it as manageable. The symposium’s own speakers suggest it is not merely manageable; it is fatal to the strategy.
Problem Two: Cheung’s Convergence Evidence Supports Constraint Satisfaction, Not Platonic Access
Cheung’s “Platonic representation hypothesis” is precisely the kind of evidence Levin might cite for non-physical facts: different systems converge on similar representational structures, suggesting they are all “accessing” the same underlying reality.
But watch what Cheung actually claims:
“Different neural networks are converging towards the same way of representing the world or in our case the same kernel.” (~12:17)
The convergence is toward the same kernel, not toward access to a Platonic realm. And Cheung explains this convergence through constraint satisfaction in a single causal world:
“There is a world out there and that world has causal features and the idea being that those projections are caused by the same causal properties of the actual world.” (~11:33)
The “Platonic” structure, on Cheung’s account, is the structure of successful constraint satisfaction under shared causal regularities. It is not an independent realm that exists prior to physical systems. The convergence is explained by the fact that systems solving similar problems under similar constraints are forced toward similar solutions. No non-physical facts required.
If Cheung’s framework is correct, then the “mathematical facts” Levin invokes are not facts about an independent non-physical realm. They are facts about constraint relationships that any system (biological or artificial) must satisfy to successfully model the causal world. The “independence” is illusory; what is actually happening is that constraint structures are substrate-independent in their description, not in their existence.
Problem Three: Brash’s Grounded Symbols Dissolve the Physical/Non-Physical Distinction
Douglas Brash’s Q&A remarks cut even deeper. Recall his point about grounded symbols and resemblance-based structure:
“Harnad in 1990 argued that cognition can’t work that way because what cognition has to do, it has to compare things, contrast things, generalize. And so for that, your symbols, your mental symbols need to contain some resemblance to whatever the reference is.”
And then the crucial observation about why language models can capture relational structure:
“Those suffixes actually create kind of all these simultaneous equations you could argue in the system that are consistent with each other. And the idea is that even though it’s not necessarily grounded in any sense of you know grounding, it has this consistency of the relationships of other objects, it has of other words and those relationships actually are the same relationship that you see in the real world.”
This is the key move: the relationships in mathematical/linguistic structure ARE the same relationships you see in the real world. Mathematics does not describe a separate realm; it describes the relational structure of the physical world at a level of abstraction that makes substrate differences invisible.
“2 + 2 = 4” is not a fact about a non-physical realm. It is a fact about how constraint relationships compose. Those constraint relationships are instantiated every time two pairs of anything combine into a group of four. The “fact” is substrate-independent in description but not in existence. It exists wherever and whenever the relevant constraints are instantiated, which is to say, it exists physically.
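The “constraint composition” point can be put in proof-assistant terms. In Lean 4, the statement below closes by `rfl`: both sides reduce to the same normal form by running the computation that the numerals and `+` define, inside a physically instantiated formal system. (This is an illustration of the philosophical point, not a claim that Lean settles the ontology.)

```lean
-- In Lean 4, 2 + 2 = 4 is not postulated as a fact about a realm;
-- it is checked by computation: both sides reduce to the numeral 4.
example : 2 + 2 = 4 := rfl
```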
Brash’s point about the 8-megabyte language processor is directly relevant here. If you can capture the relational structure of language (and by extension, of mathematics) in 8 megabytes of grounded symbol manipulation, you do not need a separate realm. You need a system that tracks the constraint relationships accurately. The “mathematical facts” are facts about those relationships, not facts about a transcendent domain.
The Dissolution: Cognitive Platonism Versus Transcendent Platonism
Gordana Dodig-Crnkovic’s framework (presented elsewhere in the symposium series) provides the explicit theoretical move that dissolves Levin’s strategy. Her “cognitive Platonism” holds that mathematical structures are cognitive constructs that survive because they successfully track causal regularities. They are “Platonic” in the sense of being abstract and universal, but they are not “Platonic” in the sense of existing independently of physical systems.
On this view:
- Mathematical facts are facts about constraint relationships
- Constraint relationships are physical (they govern what can and cannot happen in the causal world)
- Mathematical structures are stable because they accurately describe those relationships
- The “independence” of mathematics from any particular physical system is abstraction, not transcendence
This is structurally identical to what Cheung demonstrates empirically. Different architectures converge on similar kernels because they are all tracking the same causal structure. The kernels are “abstract” in the sense that they can be instantiated in different substrates, but they are not “independent of physics” in the sense of existing in a separate realm.
Why This Matters for the Biological Extension
Levin’s explicit strategy is to establish non-physical mathematical facts as bedrock, then extend to biology: if non-physical mathematical facts exist, perhaps non-physical biological patterns exist too.
But if the symposium’s own speakers are right, the bedrock crumbles:
- Cheung shows that representational convergence is explained by constraint satisfaction in a single causal world, not by access to an independent realm.
- Brash shows that the relational structure captured by language and mathematics IS the relational structure of the physical world, not a separate domain of “important information.”
- Dodig-Crnkovic provides the theoretical framework: “cognitive Platonism” where abstract structures are cognitive constructs tracking physical regularities, not inhabitants of a transcendent realm.
If mathematical “facts” are facts about physical constraint relationships (described abstractly), then there is no bedrock for extending to “non-physical biological facts.” The entire strategy collapses at the first step.
What remains is the interesting, tractable, naturalistic question: what constraint structures govern morphogenesis, and why do different biological systems converge on similar solutions? That question does not require Platonic metaphysics. It requires careful empirical work on constraint satisfaction in developing systems.
Cheung is doing exactly that work for artificial systems. The question is whether the symposium framing allows that work to be mistaken for support of something it actually refutes.
A Question for the Symposium
Here is what I would ask if I were in the room:
Dr. Levin, you describe your strategy as using mathematics to establish that “not all facts are physical facts,” then extending to biology. But Dr. Cheung’s convergence evidence is explained by constraint satisfaction in a single causal world, not by access to an independent realm. Dr. Brash notes that the relational structure in language and mathematics IS the relational structure of the physical world. Dr. Dodig-Crnkovic’s “cognitive Platonism” treats mathematical structures as cognitive constructs tracking physical regularities.
Given that your own symposium speakers provide naturalistic accounts that dissolve the physical/non-physical distinction you rely on, what is your response? Does the “bedrock” still hold? Or does the extension to biology require different support?
The answer would clarify whether the strategy can survive its own symposium’s best work.
Dr. Cheung, your “Platonic representation hypothesis” is entirely explicable through constraint satisfaction in a single causal world. You never invoke an independent realm that exists prior to physical systems. You never claim organisms “access” non-physical patterns. Your entire explanatory apparatus is kernels, alignment, and optimization.
Dr. Levin claims Platonic space “exists independently of” the physical world, contains minds with plasticity, and that organisms access its specific contents via bioelectric interfaces.
These seem to be very different claims using the same terminology. Could you clarify: does your hypothesis require the independent existence of a non-physical Platonic realm, or is “Platonic” simply a useful heuristic for talking about convergent structure induced by shared causal constraints?
And if it is the latter, does that mean Dr. Levin’s stronger claims are not supported by your work?
The answer matters. Because if the naturalistic interpretation is correct, then the symposium’s strongest empirical talk is actually evidence against the symposium’s central metaphysical thesis.
That would be worth knowing.
My frameworks are falsifiable. Cheung’s are falsifiable. The question I keep asking, and to which I keep receiving no answer, is: what would falsify the claim that Platonic space exists independently of the physical world? Because if nothing would, then the Discovery Institute will keep citing this work, and the conceptual confusion will keep compounding, regardless of what anyone actually means by “Platonic.”
References
Primary Sources – Chvykov’s Work
Chvykov, P., & England, J. L. (2018). Least-rattling feedback from strong time-scale separation. Physical Review E, 97(3), 032115. https://doi.org/10.1103/PhysRevE.97.032115
https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.032115
Chvykov, P., Berrueta, T. A., Vardhan, A., Savoie, W., Samland, A., Murphey, T. D., Wiesenfeld, K., Goldman, D. I., & England, J. L. (2021). Low rattling: A predictive principle for self-organization in active collectives. Science, 371(6524), 90-95. https://doi.org/10.1126/science.abc6182
https://www.science.org/doi/full/10.1126/science.abc6182
Chvykov, P., & England, J. L. (2025). Emergent order from mixed chaos at low temperature. Scientific Reports, 15, 38490. https://doi.org/10.1038/s41598-025-22877-4
https://www.nature.com/articles/s41598-025-22877-4
Michael Levin’s Laboratory – Planarian Regeneration
Durant, F., Morokuma, J., Fields, C., Williams, K., Adams, D. S., & Levin, M. (2017). Long-term, stochastic editing of regenerative anatomy via targeting endogenous bioelectric gradients. Biophysical Journal, 112(10), 2231-2243. https://doi.org/10.1016/j.bpj.2017.04.011
https://www.cell.com/biophysj/fulltext/S0006-3495(17)30411-9
Durant, F., Bischof, J., Fields, C., Morokuma, J., LaPalme, J., Hoi, A., & Levin, M. (2019). The role of early bioelectric signals in the regeneration of planarian anterior/posterior polarity. Biophysical Journal, 116(5), 948-961. https://doi.org/10.1016/j.bpj.2019.01.029
https://www.cell.com/biophysj/fulltext/S0006-3495(19)30065-7
Philosophy of Mathematics – Benacerraf’s Dilemma
Benacerraf, P. (1973). Mathematical truth. The Journal of Philosophy, 70(19), 661-679. https://doi.org/10.2307/2025075
https://www.jstor.org/stable/2025075
Philosophy of Mind and Explanation – Dennett
Dennett, D. C. (1995). Darwin’s Dangerous Idea: Evolution and the Meanings of Life. New York: Simon & Schuster.
https://books.google.com/books/about/Darwin_s_Dangerous_Idea.html?id=FvRqtnpVotwC
Cognitive Science – Symbol Grounding
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3), 335-346. https://doi.org/10.1016/0167-2789(90)90087-6
https://www.sciencedirect.com/science/article/abs/pii/0167278990900876
Aristotle Scholarship
Leunissen, M. (2010). Explanation and Teleology in Aristotle’s Science of Nature. Cambridge: Cambridge University Press. ISBN: 9780521197748
https://www.cambridge.org/core/books/explanation-and-teleology-in-aristotles-science-of-nature/311F862489619C0EEB985B0D24275F20
Sedley, D. (2007). Creationism and Its Critics in Antiquity. Berkeley: University of California Press. ISBN: 9780520931282
https://www.ucpress.edu/book/9780520931282/creationism-and-its-critics-in-antiquity
Witt, C. (1989). Substance and Essence in Aristotle: An Interpretation of Metaphysics VII-IX. Ithaca: Cornell University Press. ISBN: 9780801495786
https://www.cornellpress.cornell.edu/book/9780801495786/substance-and-essence-in-aristotle/
Neural Network Convergence – Brian Cheung
Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). Position: The Platonic representation hypothesis. Proceedings of the 41st International Conference on Machine Learning, 235, 20617-20642. https://doi.org/10.48550/arXiv.2405.07987
https://proceedings.mlr.press/v235/huh24a.html
https://arxiv.org/abs/2405.07987
Computational Philosophy and Cognitive Platonism – Dodig-Crnkovic
Dodig-Crnkovic, G. (2016). Information, computation, cognition: Agency-based hierarchies of levels. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence, Synthese Library 377 (pp. 139-159). Cham: Springer. https://doi.org/10.1007/978-3-319-26485-1_10
https://link.springer.com/chapter/10.1007/978-3-319-26485-1_10
Dodig-Crnkovic, G. (2017). Nature as a network of morphological infocomputational processes for cognitive agents. The European Physical Journal Special Topics, 226, 181-195. https://doi.org/10.1140/epjst/e2016-60362-9
https://link.springer.com/article/10.1140/epjst/e2016-60362-9
Dodig-Crnkovic, G. (2022). Cognition as morphological/morphogenetic embodied computation in vivo. Entropy, 24(11), 1576. https://doi.org/10.3390/e24111576
https://www.mdpi.com/1099-4300/24/11/1576
Dodig-Crnkovic, G., & Milkowski, M. (2023). Discussion on the relationship between computation, information, cognition, and their embodiment. Entropy, 25(2), 310. https://doi.org/10.3390/e25020310
https://www.mdpi.com/1099-4300/25/2/310
Additional Supporting Literature
Lange, M. (2016). Because Without Cause: Non-Causal Explanations in Science and Mathematics. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190269487.001.0001
https://global.oup.com/academic/product/because-without-cause-9780190269487
Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press. ISBN: 9780195189537
https://global.oup.com/academic/product/making-things-happen-9780195189537
Additional Bioelectricity and Morphogenesis Literature
Levin, M., Pezzulo, G., & Finkelstein, J. M. (2017). Endogenous bioelectric signaling networks: Exploiting voltage gradients for control of growth and form. Annual Review of Biomedical Engineering, 19, 353-387. https://doi.org/10.1146/annurev-bioeng-071114-040647
https://www.annualreviews.org/doi/10.1146/annurev-bioeng-071114-040647
Pezzulo, G., & Levin, M. (2016). Top-down models in biology: Explanation and control of complex living systems above the molecular level. Journal of The Royal Society Interface, 13(124), 20160555. https://doi.org/10.1098/rsif.2016.0555
https://royalsocietypublishing.org/doi/10.1098/rsif.2016.0555