Relational Biology I: Is it possible to simulate life?
The first part of this trilogy is devoted to discussing the difference between model and simulation, one of the cornerstones for understanding relational biology. How true is it that Robert Rosen denied the possibility of simulating life? As we shall see, his insistence on the non-algorithmic nature of life should be understood as a call to rethink biological modeling, not as a categorical denial of simulation efforts.
Introduction
The quest to formalize biology within a mathematical framework has a rich history, tracing back to the pioneering work of Nicolas Rashevsky. In his seminal 1954 paper, Topology and Life: In Search of General Mathematical Principles in Biology and Sociology, Rashevsky challenged the prevailing metric-based approaches to biology, advocating instead for a relational perspective. His vision was to uncover underlying principles of life by examining the structural and functional relations that transcend individual biochemical details.
One of Rashevsky’s most influential students, Robert Rosen, expanded these ideas into what is now known as relational biology. In particular, Rosen’s 1958 papers, The Representation of Biological Systems from the Standpoint of the Theory of Categories and A Relational Theory of Biological Systems, laid the groundwork for a category-theoretic formalization of living systems. Through these works, Rosen sought to abstract biological organization using categorical constructs, emphasizing system-level interactions over molecular details.
His core insight was that the complexity of life could not be captured purely through reductionist, algorithmic models. This stance positioned relational biology in opposition to the dominant paradigm of mechanistic and computational modeling, raising fundamental questions about the nature of biological organization and whether life could be adequately simulated. However, to what extent is this true, and what is the current state of relational biology? In this first part of the trilogy, we will try to dispel several common misunderstandings about Rosen and his ideas.
The Modeling Relation
Central to Rosen’s framework is the distinction between modeling and simulation. The modeling relation establishes a formal correspondence between natural systems and their mathematical representations, ensuring that the causal structures defining living processes are faithfully reflected. In contrast, simulation refers to computational techniques that attempt to replicate biological behavior algorithmically, often reducing life’s complexity to rule-based dynamics.
As Pattee pointed out in the first proceedings of the Artificial Life conference, "accuracy in a simulation need have no relation to quality of function in a realization": the criteria for good simulations and realizations of a system depend on our theory of the system, and simulations do not become realizations. This distinction is crucial because it highlights the limits of computational models in capturing the self-referential, anticipatory nature of biological systems.
Particularly, Rosen’s argument rests on the notion that life exhibits closure to efficient causation, meaning that its components are responsible for generating and sustaining their own causal structure. Computational systems, by contrast, rely on externally imposed rules, making them inadequate for fully encapsulating the essence of living systems. This critique challenges the prevailing assumption that all biological processes can be simulated using Turing-computable models, questioning the universality of the Church-Turing thesis in the context of life sciences.
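To give a rough flavor of what closure to efficient causation demands, here is a deliberately naive Python sketch. Every name in it is illustrative and none of it is Rosen's actual formalism; it merely mimics the shape of his three mappings, metabolism f: A→B, repair Φ: B→H(A,B), and replication β: H(A,B)→H(B,H(A,B)). Notice that even when each map is produced by another map in the set, the script that wires them together remains an external rule, which is precisely Rosen's objection to computational realizations.

```python
# Toy sketch of closure to efficient causation (illustrative only,
# not Rosen's formalism). Each map below is produced by another map
# in the same collection, approximating "components generating their
# own causal structure".

def make_metabolism(parameter):
    """Return a metabolism map f: A -> B."""
    def f(a):
        return a + parameter  # stand-in for a metabolic transformation
    return f

def make_repair(b):
    """Repair map Phi: B -> H(A, B): rebuilds metabolism from metabolic output."""
    return make_metabolism(b)

def make_replication(f):
    """Replication beta: H(A,B) -> H(B, H(A,B)): rebuilds the repair map.
    (f is taken only to mirror the type signature of beta.)"""
    def phi(b):
        return make_repair(b)
    return phi

f = make_metabolism(1)
b = f(10)              # metabolism: 10 -> 11
phi = make_replication(f)
f_new = phi(b)         # repair rebuilds a metabolism map from b
print(f_new(10))       # prints 21

# The catch: this very script plays the role of an externally imposed
# rule, so the closure is not achieved "from within" -- Rosen's point.
```

The sketch is useful mainly for seeing where the loop refuses to close: the wiring itself is not generated by the system.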
However, this rejection of simulation does not necessarily preclude all artificial life efforts. Instead, it suggests a shift from algorithmic to relational modeling approaches. This potential vindication of the Artificial Life program calls for novel mathematical frameworks that go beyond computationalism, embracing relational and category-theoretic perspectives. In the next part of this trilogy I will say more about a classical relational model introduced by Rosen in the late 1950s, which will also clarify what I mean by closure to efficient causation.
Misunderstanding Rosen
Over the years, numerous efforts have been made to challenge Rosen’s conclusions, often due to misunderstandings of his evolving ideas. Initially, through a trilogy of papers, Rosen himself attempted to formalize biological systems as sequential machines, a line of inquiry that ultimately failed to capture the full complexity of self-construction.
More recent critiques have taken computational approaches, attempting to encode Rosen's (M,R)-systems in formal languages. One popular attempt is Mossio's A Computable Expression of Closure to Efficient Causation, which employs lambda calculus to argue that Rosen's system could, in principle, be modeled within a computable framework. However, as Cárdenas et al. explained, that functional model of computation rests on mistaken assumptions.
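The technical tool behind this line of work is the lambda-calculus fixed point, which lets a function be defined without naming itself. As a rough flavor only (this is my own minimal sketch, not Mossio's actual encoding of (M,R)-systems), here is the call-by-value fixed-point combinator (the Z combinator) written directly in Python:

```python
# The Z combinator: a call-by-value fixed-point combinator.
# It turns a "template" expecting its own recursive self into a
# genuinely recursive function, with no explicit self-reference.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Example: factorial defined without ever naming itself.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # prints 120
```

Fixed points of this kind are what make self-reference expressible inside a computable formalism; the dispute is over whether expressing it this way preserves what Rosen meant by closure.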
Currently, one of the most interesting approaches to this issue is Palmer's Rosen's (M,R)-System as an X-Machine. In that paper, the authors aim to counter Mossio's critics by proposing a computational approach based on communicating X-machines, a formalism that, they argue, circumvents the self-referential barriers that have traditionally hindered mechanistic representations of (M,R)-systems. This is nowadays the canonical model in relational biology, and it will be discussed and explained in detail in the next installment of this trilogy.
Organisms as a Swarm of Machines
Palmer’s paper begins with a thorough introduction to the debate surrounding (M,R)-systems, situating Rosen’s ideas within the broader discourse on reductionism and mechanistic explanation in biology. In their methodology, the authors systematically present three formal machine architectures—finite state machines, stream X-machines, and communicating X-machines—each representing a successive refinement in computational complexity.
A finite state machine, the simplest of these, is shown to be inadequate because it cannot accommodate the self-referential entailment structures inherent in (M,R)-systems. Stream X-machines offer an improvement by incorporating memory, allowing catalytic elements to be reused, yet still fall short due to their inability to resolve self-reference. The communicating X-machine emerges as the most viable solution by treating each component of an (M,R)-system as an independent computational entity, allowing them to interact through structured communication while avoiding direct self-reference.
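To make the middle step of this progression concrete, here is a toy stream X-machine in Python. The example is my own illustration, not Palmer's construction: unlike a finite state machine, each transition applies a function over a shared memory, so a "catalyst" stored in memory can act on the input stream and be reused across steps rather than being consumed.

```python
# Minimal stream X-machine sketch (illustrative, not Palmer's model).
# Transitions are labeled by functions (memory, input) -> (output, memory),
# so a catalyst kept in memory survives each step and can be reused.

def consume(memory, symbol):
    """Transition function: apply the stored catalyst, then keep it."""
    catalyst = memory["catalyst"]
    return catalyst(symbol), memory  # memory (the catalyst) is preserved

class StreamXMachine:
    def __init__(self, transitions, state, memory):
        self.transitions = transitions  # (state, label) -> (func, next_state)
        self.state = state
        self.memory = memory

    def step(self, label, symbol):
        func, next_state = self.transitions[(self.state, label)]
        output, self.memory = func(self.memory, symbol)
        self.state = next_state
        return output

# One-state machine whose single transition reuses the catalyst.
m = StreamXMachine(
    transitions={("S0", "metabolise"): (consume, "S0")},
    state="S0",
    memory={"catalyst": lambda s: s.upper()},
)
print(m.step("metabolise", "a"))  # prints A
print(m.step("metabolise", "b"))  # prints B -- same catalyst, reused
```

A plain finite state machine has no such memory, which is why it cannot even model catalysis; what the stream X-machine still cannot do is let the catalyst rebuild its own transition functions, and that is the gap the communicating X-machine is meant to close.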
This methodological progression is a strong aspect of the paper. However, the discussion of how memory states interact within stream X-machines is somewhat dense, and a more concrete example of state transitions would clarify how these models differ in their computational expressiveness. Additionally, the discussion on the object-oriented approach to realize the authors’ ideas remains relatively brief, and a more detailed analysis of its advantages would add depth to the argument.
Despite the above, Palmer's message is clear: perceived non-computability arises from an overly restrictive definition of machines, one that does not accommodate massively parallel architectures such as communicating X-machines. By expanding the definition of a machine to encompass distributed computation, they contend that (M,R)-systems can indeed be mechanistically instantiated without violating their essential properties. This argument aligns well with theoretical results on algorithmic networks, where swarms of computable systems could allow us to go beyond Turing computability.
Nonetheless, while the authors make a strong case for the viability of communicating X-machines, they could engage more critically with potential limitations. For instance, it remains unclear whether their approach fully captures emergent properties of living systems or if it merely provides a formal approximation that lacks biological fidelity. Additionally, while they effectively sidestep the issue of self-reference, questions remain about whether the decentralized computational structure they propose can replicate the anticipatory nature of living systems that Rosen emphasized.
Overall, the paper makes a significant contribution to the ongoing debate on the computability of biological organization, and currently I consider it the state of the art on this topic. Their approach, which bridges formal computational theory with object-oriented modeling, offers a promising framework for future research in relational biology. Future work should explore how this computational paradigm aligns with real biological objects and whether it can accommodate the evolutionary dynamics inherent in living systems. At the end of the day, Rosen's (M,R)-systems are just one of many possible routes to closure to efficient causation…
Conclusion
Despite the contentious debates surrounding his work, Rosen never outright rejected the possibility of simulating life. Rather, the common misconception stems from the difficulty of interpreting his evolving ideas, exacerbated by his tendency to work in isolation. His insistence on the non-algorithmic nature of life should be understood as a call to rethink biological modeling, not as a categorical denial of simulation efforts.
As we move forward, the challenge lies in constructing more biologically sound relational models. One of the main shortcomings of Rosen’s work was its high level of abstraction, which often neglected empirical biological constraints. The next step in relational biology must bridge this gap, integrating category-theoretic insights with concrete biological examples, or at least realistic biological assumptions. This will be the subject of the next essay in this trilogy, where we explore how modern approaches to relational biology can reconcile formal abstraction with real-world biological architectures.