Review | Open Questions about Time and Self-reference in Living Systems

Today I review a synthesis written by a group of renowned artificial-life researchers, who have omitted a couple of important contributions that close the gap with what they call "open questions" in the relationship between self-reference and life.

Introduction

A century of theorizing about “what life is” has cycled through many metaphors and models: machine, code, system, network; yet one idea has proven stubbornly central: metabolic (organizational) closure. From von Neumann’s kinematic automaton and Ashby’s homeostat to Rosen’s (M,R)-systems, Maturana & Varela’s autopoiesis, Kauffman’s autocatalytic sets, Gánti’s chemoton, and Eigen & Schuster’s hypercycle, the most durable approaches share some notion of circular causation by which a living system continuously re-makes the very components that make it up. Still, this common ground has too often been obscured by disciplinary silos. As Cornish-Bowden, Cárdenas, and Letelier emphasize here, many differences across these frameworks are superficial byproducts of independent development: (M,R)-systems, autocatalytic sets, and the hypercycle all incorporate catalytic closure; autopoiesis and chemoton foreground material openness and boundary-making; but the overlaps are not obvious at first reading and rarely made explicit across schools of thought.

Such convergence around closure is not accidental. A comparative synthesis argues that metabolic closure is the core phenomenon enabling a living system to remain a coherent individual, with Rosen’s segmentation into metabolism, replacement, and metabolic invariance offering a uniquely fertile lens (once liberated from over-mathematization). For the avid reader, we have written in this space a trilogy (I, II, III) dedicated to Rosen’s work. The same synthesis insists that a satisfactory theory of life must unite thermodynamic openness, catalytic specificity, structural closure, and regulation—elements only partially captured by any single extant model.

Against this backdrop, self-reference becomes more than a logical curiosity: it is the structural heartbeat of living organization. Yet self-reference strains classical formalisms; as Rosen argued, “life is not an algorithm” in the narrow Turing sense, not because living processes defy causation or possess hypercomputation, but because timeless algorithmic semantics cannot accommodate intrinsically creative, self-modifying dynamics. The preprint I am going to review today, Open Questions about Time and Self-reference in Living Systems, takes this tension head-on. It contends that many paradoxes of self-reference dissolve once we explicitly integrate time, distinguishing “natural time” (the ever-unfolding present of physical processes) from “representational time” (the past–present–future constructed by living systems for memory, learning, and prediction). In doing so, the authors seek a formal space where self-reference, temporality, and self-modification coexist without contradiction.

Life in Time or Time in Life?

The paper’s central move is to ask whether a new notion of time co-originates with life. Classical proof and computation proceed in natural time—the process of proving or computing takes time—but their results are timeless within a fixed axiomatic context. Turing formalized the proving process only to show that halting cannot be guaranteed in general; when a Turing computation fails to halt, the formal question shifts to whether stronger mathematics is needed, in a way reminiscent of Gödel’s incompleteness. By contrast, living systems cannot defer to timeless conclusions: their viability depends on acting under uncertainty in ongoing time.

Here the authors introduce representational time: a constructed temporal framework with a past and a future, enabling memory, anticipation, and learning. Natural time, they stress, is the continuing present of the physical universe; its archetype in physics is the Hamiltonian, which encodes instantaneous evolution through partial derivatives and is, in this sense, “essentially timeless.” Only integration over time yields trajectories, and in that move we pass from natural to representational time. This contrast underwrites their biological claim: physical systems evolve without direct access to representations of past or future, but living systems deploy such representations to exercise agency.
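The Hamiltonian point can be made concrete in standard notation (my gloss, not an equation from the paper). Hamilton's equations specify only instantaneous rates of change via partial derivatives of H; a trajectory, and with it anything resembling a represented past or future, appears only once we integrate:

```latex
\dot{q} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q}
\qquad \text{(instantaneous, ``essentially timeless'')}
```

```latex
q(t) = q(t_0) + \int_{t_0}^{t} \frac{\partial H}{\partial p}\, dt'
\qquad \text{(integration yields a trajectory: representational time)}
```

In the authors' terms, the first pair lives in natural time; only the integrated form constructs a history that could be stored, compared, or anticipated.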

Significantly, the authors do not reserve representational time for nervous systems. Even unicellular organisms embody implicit temporal representations; e.g., molecular structures tuned by evolution to recognize conditions and modulate reaction sequences, with enzymes controlling relative reaction times. In this telling, representation is anchored in functionality rather than introspective models: the organism’s structure stores histories and biases futures. This broadens the concept of temporal representation while inviting a challenge the authors leave open: What is the minimal organization capable of handling representational time in a way that distinguishes the living from the merely chemical?

The discussion returns to Rosen’s anticipatory systems: living dynamics are “forced” by anticipated futures generated by internal models. Strictly speaking, no future can literally cause the present; instead, a self-referential model generates anticipations that sometimes err, and those errors inject novelty. This makes creativity endogenous to living organization, not an external add-on. Such a stance dovetails with decades of closure-centered work that links cyclic organization, specificity, and stability: closed catalytic loops enable selective timing, memory-like hysteresis, and robustness—the very substrate from which temporal representation can arise.

Finally, the authors acknowledge a limit in standard modeling: most mathematical and computational languages are syntax-closed with respect to their own evolution. Calculus cannot add new equations while solving old ones; most programming languages prohibit self-modification at run-time; metamodels rarely allow dynamic change to themselves. If open-endedness demands novelty at the level of rules (not just states), then our formalisms must themselves be capable of self-modification. This motivates their survey of candidate tools (domain theory, coalgebra, and calculus of indications) as seeds for a rigorous mathematics of reflexive, self-changing systems.
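To make the contrast concrete, here is a toy sketch of my own (not from the paper, and not a serious formalism): a discrete system whose update rule is stored as ordinary data, so that stepping the system can also rewrite the rule itself. This crudely mimics the rule-level openness the authors ask of future formalisms, while ordinary calculus or a fixed program cannot express it.

```python
# Toy illustration (my own, not the paper's): a system whose update rule is
# part of its own mutable state, so stepping it can rewrite the rule itself.

class ReflexiveSystem:
    def __init__(self, state, rule):
        self.state = state
        self.rule = rule  # the rule is data: the system can replace it

    def step(self):
        self.state = self.rule(self.state)  # ordinary state-level change
        if self.state > 16:                 # arbitrary self-modification trigger
            previous = self.rule
            # Rule-level change: the system rewrites its own generative scheme
            # at run-time, something a fixed axiomatic context cannot do.
            self.rule = lambda x: previous(x) + 1

system = ReflexiveSystem(state=1, rule=lambda x: 2 * x)
states = []
for _ in range(7):
    system.step()
    states.append(system.state)
print(states)  # prints [2, 4, 8, 16, 32, 65, 132]
```

The first five steps are pure state-level variation under a fixed rule; from the sixth step on, the rule itself has changed, so the trajectory departs from anything the original scheme could generate.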

Self-Referential Selves

The second arc of the paper reframes “self” not as a given essence but as a historical invariant emerging through structural coupling—the reciprocal shaping of organism and environment across time. On this view, an entity is structurally determined: its identity and behavior reflect a history of couplings; autonomy consists in subordinating change to the maintenance of identity. This is the language of organizational closure rendered temporal; it resonates with classic autopoietic insights and newer closure-based analyses of reaction networks that identify cycles (feedback) as the minimal decomposable units conferring stability.

The authors then tie self-reference to morphogenesis and agency. Development is portrayed as a continuous prediction–perception–action loop in which a proto-self “thinks about” and changes itself to reduce error, language that overlaps with active inference while carefully insisting on the centrality of time and self-modification. Still, the authors missed an important contribution by Tomek Korbak, who two years ago proposed a temporal parametrization for (M, R)-systems, connecting relational biology and active inference. Whether one embraces or resists strong active-inference claims, the paper’s core point stands: without a representation of its own temporal dynamics (what has been, what is likely to be) no organism could sustain goal-directed morphology or adaptive behavior.

From here the authors pivot to open-endedness. Following Banzhaf and colleagues, they distinguish state-level variation from novelty that alters the rules (type-1 and type-2), observing that most models and metamodels cannot express such rule-level change. Hence the call for formalisms that internalize self-modification: not just systems that run in time, but systems that rewrite their own generative schemes in time. This is where their mathematical survey becomes programmatic: coalgebra makes recurrence explicit as unwound in time; similarly, the calculus of indications casts paradoxical re-entry (e.g., J = ¬J) not as a pathology but as a dynamical engine—oscillation born of self-contradiction. The moral is constructive: taming self-reference requires embracing, not banishing, its temporal productivity.

At the same time, the paper brushes past two live issues. First, the starting problem for closure (how coupled processes "boot up" without already being in place) remains largely implicit. Here, work in chemical organization theory and closure-based analyses of catalytic loops suggests a way forward: characterize minimal cycles and their kinetics to show how stability and decomposition properties emerge from network structure, exactly the kind of "bridge" the closure literature has begun to build. I myself recently developed the first model explaining how self-referentiality could arise in evolutionary terms. Second, the boundary problem—where self ends and world begins—calls for explicit individuality criteria. The authors gesture toward these themes, but fuller engagement with the closure canon would sharpen the proposals and prevent slips into dualistic language. Furthermore, the authors seem to ignore the contributions of Howard Pattee, who proposed semantic closure (the self-referential mechanism through which symbols actively construct and interpret their own functional contexts) as key to enabling open-ended evolution.

To their credit, the authors recognize that existing tools are not enough. They argue that the mathematics needed to describe systems that add or remove selves in real time does not yet exist—and they may be right in spirit. But the path may be nearer than they suggest: closure-centric extended reconstructions of (M,R)-like architectures, such as Hofmeyr's (F, A)-systems, already separate levels of causation and treat some formal causes as open-to-mutation "free-standing" informational constraints, a move that category-theoretic work can render precise. If representational time is a product of such constraints acting on dynamics, then the sought-after reflexive mathematics will likely marry categorical treatments of semantics with dynamical systems, exactly where the paper invites the field to go next.

Conclusions

The paper’s headline contribution is conceptual clarity: natural time (the continuing present of physical processes) differs categorically from representational time (the constructed past–present–future that living systems generate to remember, learn, and anticipate). This distinction reframes self-reference from a paradox-engine to a process-engine: paradox becomes oscillation; contradiction, creativity; self-reference, the means by which systems maintain identity while remaining open to change. Taken together, the arguments show why “life is not an algorithm” in the narrow, timeless sense—without denying that parts of life are algorithmic when stabilized by constraint.

Equally important is the methodological diagnosis. If open-endedness demands novelty at the level of rules, our models and metamodels must themselves be syntax-open, something that Peter Cariani told us in his 1989 PhD dissertation. The review’s tour through domain theory, coalgebra, and the calculus of indications is not a shopping list so much as a research program: recover time within formal systems, make recurrence explicit, and treat re-entry not as something to forbid but to formalize. For theoretical biology, this suggests a rapprochement with closure-centric traditions: use network-level constraints (specificity, cycles, boundaries) to ground temporal representation, then let reflexive formalisms track how those constraints themselves change.

Finally, as a position paper it situates itself where earlier syntheses left off. The closure literature has long argued that a viable account of life must integrate thermodynamic openness, catalytic closure, structural boundary-making, regulation, and information handling, but it has also warned against conflating simulation with realization: computation is not the same as construction. This preprint adds a missing axis (representational time) and ties it to self-reference and self-modification. The result is not yet a unifying mathematics, but it is a coherent agenda: build formalisms that let living systems write their own temporal semantics while staying physically grounded. Doing so promises not only to reconcile computational and constructionist views (as Korbak also proposed four years ago), but also to turn the recurring paradoxes of self-reference into the very principle of organization they have always been hinting at.

This post is licensed under CC BY 4.0 by the author.