Review | Naturalizing relevance realization: why agency and cognition are fundamentally not computational
Recently, dynamic trialectics has been proposed as a way to show that features of life such as cognition and agency are not algorithmic in nature. This perspective builds on classical notions of computation, leaving aside unconventional architectures for information processing. Here I explore a synthesis that could allow us to solve the problem of relevance (as posed in the article) while keeping a computational narrative.
Introduction
During the early part of the 20th century, the mathematician David Hilbert promoted a new approach to mathematics: ground all existing theories in a finite, complete set of axioms, and provide a proof that these axioms are consistent. This became known as Hilbert’s program. In the decade that followed, a series of mathematical results dismantled Hilbert’s hope of placing mathematics on a complete and consistent logical foundation. In 1931, Kurt Gödel published his incompleteness theorems, which show that any consistent formal system expressive enough to encode elementary arithmetic contains propositions that are true but cannot be proved within that formalism.
Subsequently, in 1936, Alan Turing demonstrated the existence of undecidable problems, for which it is impossible to construct an algorithm that always leads to a correct yes-or-no answer. This result was later generalized by Rice’s theorem, which states that all non-trivial semantic properties of programs are undecidable. Gregory Chaitin later showed that for any theory that can represent enough arithmetic, there is an upper bound c such that the theory cannot prove of any specific string that its Kolmogorov complexity exceeds c. In this sense, while Gödel’s theorem is related to the Liar paradox, Chaitin’s result is related to the Berry paradox.
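For readers who want the precise statement, Chaitin’s result can be written compactly as follows (standard notation, not tied to any particular source: K denotes Kolmogorov complexity and T is a consistent, computably axiomatized theory containing enough arithmetic):

$$\exists\, c_T \;\; \forall s : \quad T \nvdash \; K(s) > c_T$$

Even though all but finitely many strings s do in fact satisfy K(s) > c_T, the theory can never certify a single witness above its own bound, which is the formal echo of the Berry paradox mentioned above.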
These results, showing that mathematics remains incomplete no matter how deep its levels of abstraction, brought with them a new narrative that put on the table the impossibility of capturing the phenomena of life by purely symbolic means. Robert Rosen in particular argued that while it is possible to approximate aspects of biological organization through algorithmic simulation, such simulation can never capture the full range of dynamic behaviors or the evolutionary potential of a living system. This would not only imply that the strong Church-Turing conjecture, according to which all physical processes in nature must be computable, is false; it would also refute the Artificial Life program, which explores the possibility of capturing phenomena commonly associated with life, such as replication, morphogenesis, metabolism, learning, adaptation and evolution, in computational substrates.
There have been multiple attempts to refute Rosen’s argument. Today we can find an extensive literature on representing (M, R)-systems in multiple models of computation, from process algebra and lambda calculus to sequential and communicating X-machines. However, if systems of mathematical propositions cannot be completely formalized, is it really surprising that life cannot be either? By resorting to the distinction between modeling and simulation proposed by Rosen himself, we can conclude that relational biology does not deny that life can be simulated; it denies that life can be computed as a function.
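To make these encoding attempts concrete, here is a minimal sketch in Python of the three Rosenian mappings as higher-order functions. This is my own toy construction, not any of the published encodings; in Rosen’s own terms it is, of course, a simulation rather than a model, since the closure is only mimicked by ordinary function composition.

```python
# Toy (M, R)-system: metabolism f: A -> B, repair Phi: B -> H(A, B),
# and replication beta: H(A, B) -> H(B, H(A, B)) as Python closures.
# All names and the placeholder "chemistry" are hypothetical.

def metabolism(a):
    """f in H(A, B): turns environmental input into metabolites."""
    return a * 2  # placeholder chemistry

def repair(b):
    """Phi in H(B, H(A, B)): uses metabolites to rebuild the metabolism map."""
    def new_metabolism(a):
        return a * 2  # the repaired map reproduces the original behavior
    return new_metabolism

def replication(f):
    """beta in H(H(A, B), H(B, H(A, B))): rebuilds the repair map."""
    def new_repair(b):
        return f  # simplistic: hand back the current metabolism unchanged
    return new_repair

# One pass around the organizational loop:
b = metabolism(1)             # A -> B
f_new = repair(b)             # B -> H(A, B)
phi_new = replication(f_new)  # H(A, B) -> H(B, H(A, B))
assert phi_new(b)(1) == metabolism(1)
```

The impredicative step that Rosen insists on, obtaining beta from within the system itself rather than defining it from outside as done here, is exactly what the process-algebraic and X-machine encodings try to recover.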
In a recent paper, Naturalizing relevance realization: why agency and cognition are fundamentally not computational, the authors argue that “the process of relevance realization is beyond formalization. It cannot be captured completely by algorithmic approaches. This implies that organismic agency (and hence cognition as well as consciousness) are at heart not computational in nature”. In this review I will examine the central claims of this paper, which I found particularly interesting. Navigating it section by section, I will try to reconcile its perspective with computational enactivism, which has already been discussed on this blog.
Agential emergentism and relevance realization
The authors begin by contrasting the ways organisms and algorithms tackle problems. Organisms live in a “large world”, characterized by overwhelming complexity and uncertainty, where most problems are ill-defined and open-ended. In this environment, organisms must identify and realize what is relevant, a process termed relevance realization. Algorithms, on the other hand, function in a “small world” of well-defined problems and operate within a formalized ontology. This distinction underscores the inability of algorithms to autonomously address relevance realization, as their operation depends entirely on pre-coded instructions.
The authors then introduce two primary perspectives on natural agency and cognition: computationalism and agential emergentism. On the one hand, computationalism posits that cognition and agency can be understood as varieties of algorithmic computation. This perspective often extends to pancomputationalism, which views all physical processes as forms of computation. The authors critique this view, arguing that it commits a category mistake by conflating the symbolic, algorithmic representations of physical processes with the processes themselves. Physical reality, they argue, is not inherently computational.
On the other hand, we have agential emergentism, the alternative proposed by the authors, which emphasizes that agency is intrinsic to living systems and cannot be reduced to algorithmic computation. Natural agency is defined as the capacity of living systems to act according to their own internal norms and goals. As mentioned previously, organisms set and pursue intrinsic goals arising from their precarious existence and drive for survival. This involves dynamically delineating their “arena” (the contextually relevant portion of their experienced environment) and identifying what is relevant to their goals and survival.
In these first three sections of their article, the authors highlight several key challenges that prevent algorithmic approaches from capturing relevance realization. First, relevance is mutable and situation-dependent, making it impossible to define universally or algorithmically. Second, any attempt to formalize relevance realization leads to an infinite regress of defining the parameters of relevance, which algorithms cannot resolve. Third, relevance realization entails turning intangible semantics into formalized syntax, a process that algorithms are incapable of performing autonomously.
Up to this point, it can be observed that the conception of computation and algorithm used by the authors is centered on the von Neumann architecture and Turing machines, ignoring any form of unconventional computation or hypercomputation. If we define computation as any procedure by which input information is transformed into output data according to mutable rules, leaving aside the idea that algorithmic instructions must be executed sequentially, it becomes possible to address the key challenges listed above.
For example, the authors themselves mention predictive processing as a potential computational approach to the mutability and situation-dependence of relevance. In turn, Palmer et al. showed that communicating X-machines can evade the self-referential problem, avoiding impredicativity and infinite regress. In fact, by connecting multiple automata, the paradigm of algorithmic networks suggests that it is theoretically possible to go beyond Turing computation, which could allow us to capture the semantic, pragmatic and semiotic aspects of information that we observe in living beings.
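As a toy illustration of why mutable rules change the picture, consider two stream machines that rewrite each other’s processing functions at run time. This is far simpler than Palmer et al.’s communicating X-machine formalism (which includes states, memory, and typed communication channels); every name below is hypothetical.

```python
# Two communicating machines: each applies its current rule to a symbol,
# then rewrites its partner's rule. No machine defines its own semantics,
# so the self-reference is distributed across the network.

class Machine:
    def __init__(self, rule):
        self.rule = rule  # current processing function: symbol -> symbol

    def step(self, symbol):
        return self.rule(symbol)

    def receive(self, new_rule):
        self.rule = new_rule  # rule update arrives from outside the machine

m1 = Machine(lambda x: x + 1)
m2 = Machine(lambda x: x * 2)

for s in [1, 2, 3]:
    y1 = m1.step(s)
    y2 = m2.step(y1)
    # Each machine mutates the *other's* rule based on what it just computed,
    # so the rules are not fixed in advance as in a single Turing machine.
    m1.receive(lambda x, k=y2: x + k)
    m2.receive(lambda x, k=y1: x * (k + 1))
    print(s, y1, y2)
```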
Autopoiesis, anticipation and adaptation
The authors then explore intrinsic and extrinsic aspects of the non-algorithmic nature of agency in organisms. First, they emphasize that organisms possess a unique form of self-manufacturing organization called autopoiesis, which distinguishes living from non-living systems. Autopoiesis entails the ability to produce and maintain one’s own components and boundaries through dynamic, self-sustaining processes. A central concept in autopoiesis is organizational closure, where each component within the system depends on the others, creating a self-referential and hierarchical causal regime. Unlike non-living systems, organisms exhibit immanent causation, in which efficient causes coincide with final causes.
This endows organisms with intrinsic goals (above all, maintaining their own existence) without requiring cognitive or intentional processes. As a model of organizational closure, the authors first discuss Montévil and Mossio’s account of biological organization based on closure of constraints. Given the similarity between the notion of constraint and the Rosenian idea of efficient cause, the authors then migrate to Robert Rosen’s (M, R)-system framework, which allows them to capture organizational closure more formally. However, in order to capture life’s evolvability, the authors end with Hofmeyr’s (F, A)-systems, pointing out that hierarchical cycles and teleological organization cannot be fully captured or simulated by algorithmic or computational models.
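For reference, the closure in question is the same structure sketched in code earlier, now in Rosen’s standard relational-biology notation:

$$f \in H(A, B), \qquad \Phi \in H(B, H(A, B)), \qquad \beta \in H(H(A, B), H(B, H(A, B)))$$

Metabolism f is produced by repair Φ, Φ is produced by replication β, and β must itself be obtained within the system (Rosen derives it from an element b ∈ B by inverting an evaluation map). That last, impredicative step is where the alleged non-computability enters, and it is the step that Hofmeyr’s (F, A)-systems recast in explicitly Aristotelian causal terms.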
The authors then discuss how even the simplest living systems demonstrate a form of biological anticipation. Anticipation refers to an organism’s ability to predict and act in response to potential future states, which is crucial for maintaining organizational closure. This predictive capability is grounded in the system’s internal organization, rather than being externally imposed. In fact, Hofmeyr himself mentions that including the synthesis of membrane receptors and of the components of their signal transduction networks in the formal cause of his (F, A)-system addresses the problem of environmental sensing and adaptive restructuring of cellular functionality within genomic constraints.
This anticipatory behavior does not require cognition; it emerges naturally from the inherent organizational structure of life. Although the authors argue that predictive processing is not sufficient to capture anticipation in non-human and non-cognitive living beings, Korbak has shown that a temporal parameterization of (M, R)-systems connects that class of models with an important family of systems grounded in the free energy principle and active inference. This temporal parameterization can be extended to (F, A)-systems, suggesting that even simple life forms can exercise some degree of predictive processing as agency.
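To make the connection tangible, here is a minimal predictive loop in the spirit of the free energy principle: perception updates an internal estimate to fit the world, and action nudges the world to fit the estimate. This is my own one-variable caricature, not Korbak’s construction; every constant is invented for illustration.

```python
import random

# Minimal perception-action loop: the agent holds a one-number internal
# model mu of a hidden environmental state and reduces prediction error
# both by revising mu (perception) and by acting on the world (action).

env_state = 5.0      # hidden state of the environment
mu = 0.0             # agent's internal prediction of env_state
lr_perception = 0.3  # how fast the internal model is revised
lr_action = 0.1      # how strongly the agent acts on the environment

for t in range(20):
    observation = env_state + random.gauss(0, 0.1)  # noisy sensory input
    error = observation - mu
    mu += lr_perception * error     # perception: fit the model to the world
    env_state -= lr_action * error  # action: fit the world to the model
    print(f"t={t:2d}  obs={observation:5.2f}  mu={mu:5.2f}  err={error:+.3f}")
```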
Finally, the authors introduce the triadic dialectic (or trialectic) of affordance, goal, and action, describing how organisms dynamically balance these three aspects to realize relevance. Affordances represent the actionable opportunities provided by an organism’s environment, goals stem from its intrinsic drive to sustain self-organization, and actions are the means to achieve these goals. The interaction between these elements occurs through a co-constructive dynamic in which the organism continually adapts its behavior based on environmental feedback. According to the authors, this trialectic highlights that relevance realization is not reducible to algorithmic processes, as it involves emergent, context-dependent, and co-constructive relations between the organism and its environment.
However, as Hernández-Orozco et al. have demonstrated, systems that exhibit (strong) open-ended evolution (OEE) must be undecidable, which means that predicting their future states (or points of convergence) is not merely infeasible but impossible in general. Such undecidability reflects the inherent unpredictability and irreducibility of emergent organism-environment dynamics. The same paper discusses a computational evolutionary model proposed by Chaitin, which requires that complexity continuously increase with minimal significant drops, emphasizing the non-trivial evolution of such systems. This framework aligns very well with the idea of context-dependent co-construction between an organism and its environment, because OEE inherently assumes adaptive responses to novel and complex contexts.
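Chaitin’s model is easy to caricature in code. In the real construction the organisms are programs and fitness is Busy-Beaver-scale, hence uncomputable; in the toy below (entirely my own, with a step-limited interpreter standing in for the uncomputable fitness) the only rule is that mutations are kept when fitness strictly increases, which yields the monotone growth of complexity the model demands.

```python
import random

# Toy metabiology: an "organism" is a tiny program (a list of ops), a
# mutation is a random edit, and a mutant replaces the organism only if
# its fitness strictly increases. The step limit is a computable stand-in
# for the Busy-Beaver fitness of Chaitin's actual model.

def fitness(program, max_steps=100):
    value, steps = 1, 0
    for op in program:
        if steps >= max_steps:
            break  # bounded interpretation instead of true halting behavior
        value = value + 1 if op == 0 else value * 2
        steps += 1
    return value

organism = [0]  # start from the simplest program
for generation in range(50):
    mutant = organism[:]
    if mutant and random.random() < 0.5:
        i = random.randrange(len(mutant))
        mutant[i] = 1 - mutant[i]            # flip one instruction
    else:
        mutant.append(random.randint(0, 1))  # or extend the program
    if fitness(mutant) > fitness(organism):  # keep strict improvements only
        organism = mutant

print("final fitness:", fitness(organism))
```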
Conclusion
In the last two sections of their paper, the authors outline a multifaceted framework of relevance realization in living organisms, structured into three primary dialectical processes. First, we have autopoiesis, which establishes autonomy through the collective dynamics of macromolecular biosynthesis, internal milieu maintenance, and regulated transport. This enables organisms to set intrinsic goals via self-determination or self-constraint. Second, we have anticipation, which projects expectations about the environment through internal predictive models. These models integrate the organism’s current state, sensory inputs, and actions, allowing it to select suitable behavioral strategies aligned with intrinsic goals.
Finally, we have adaptation, which involves the co-evolution of goals, actions, and environmental affordances. This represents relevance realization on an evolutionary scale, continually tightening the relationship between the organism and its environment in a mutual, transjective manner. The authors propose joining these levels into a hierarchical model to explain open-ended organismic evolution, but they do not formalize their ideas. This makes sense: according to them, relevance realization is fundamentally non-algorithmic and beyond formalization.
As I mentioned at the beginning, the authors’ conception of computation is reduced to the von Neumann-Turing view of hardware and software. When we leave aside this widespread perspective, we can delve into an ocean of possibilities for studying the unconventional and hypercomputational capabilities that we observe in living beings. Hofmeyr’s (F, A)-systems can be extended, with a temporal parameterization, to capture primitive anticipation in life. Moreover, we can dissolve the impredicativity of relational models with communicating X-machines, which remove self-reference through parallel computation. On top of that, Chaitin’s evolutionary model captures the undecidability and irreducibility of organism-environment dynamics.
An apparent contradiction arises from the above. If relevance realization is fundamentally non-algorithmic, why does it seem that we can capture the trialectic dynamics between autopoiesis, anticipation and adaptation using algorithmic networks, active inference and metabiology, all of which are computational in nature?
For me, the answer is that life is a hypercomputational process. In this way, the trialectic dynamics described in the paper reviewed here maintains its non-algorithmic quality (under conventional computation) while still being representable in hypercomputational terms. This is fully aligned with Korbak’s computational enactivism, a philosophical interpretation that, in my opinion, is worth exploring. All the ideas I have just mentioned have coalesced into a recent preprint I have been developing with Hiroki Sayama and Carlos Gershenson.