What Does “Computing” Mean?

Introduction

The concept of computation, as we now understand it, has deep historical roots. Originally tied to the human act of calculation, its meaning expanded alongside developments in mathematics, logic, and engineering. Early mechanical devices, such as Pascal’s calculator or Babbage’s Analytical Engine, embodied a physical form of computation, but it was in the early 20th century that the notion became rigorously formalized: a well-defined transformation of symbols into symbols, detached from matter, time, and energy. This abstract framing is reflected in standard references that define computation in terms of algorithms, models of computation, and classes of solvable problems, not in terms of the physical stuff that carries those symbols. At the same time, contemporary debates ask whether such a disembodied view is enough, especially once we notice how often the concept of computation migrates across domains, from silicon circuits to living cells to chemical reaction networks, giving rise to pancomputationalism.

No historical figure looms larger in this story than Alan Turing (1912–1954). Working within a scientific culture enthralled with mechanization and reduction, Turing gave a canonical analysis of effective procedures via his eponymous abstract machines, helping to set the horizon for computer science, cryptanalysis, and nascent Artificial Intelligence (AI). As Sydney Brenner pointed out in Life’s code script, Turing’s work exerted a huge influence, from logic to morphogenesis, underlining how a mechanistic picture of symbol manipulation became a lens for the life sciences as well. Less well known outside specialist circles is how closely Ludwig Wittgenstein (1889–1951) tracked these developments. One of the twentieth century’s most significant philosophers of logic, mathematics, mind, and language, Wittgenstein was not only contemporaneous with Turing at Cambridge; he also responded, pointedly, to the philosophical reading Turing attached to his mathematical results.

At the intersection of these influential thinkers I found Wittgenstein versus Turing on the Nature of Church’s Thesis by S. G. Shanker, who carefully shows that Wittgenstein knew of Turing’s work and engaged its implications for the very meaning of “effective calculation,” a conversation that continues to unsettle how we answer the question this essay’s title poses. As I read Shanker’s paper, I cannot help but recall Robert Rosen’s insights on computability and realizability, and, inspired by this flow of ideas, in what follows I will outline some of the major epistemological gaps surrounding the concept of computation, some of which, in my opinion, have been ignored or taken for granted by the scientific community.

Behind Turing Machines

The modern conversation about “what computation is” typically begins with Turing machines: idealized devices that read and write symbols on an unbounded tape according to finitely many rules. As an abstract analysis of calculation, this framework has been enormously successful. Yet, as Shanker emphasizes, Wittgenstein’s scattered remarks force us to read Turing’s 1936 achievement with more care. Far from being ignorant of Turing’s On Computable Numbers, Wittgenstein not only knew of it; he left a puzzling gloss in his Remarks on the Philosophy of Psychology: “Turing’s ‘Machines’. These machines are humans who calculate.” He went on to downplay the specialness of the halting problem, treating it as no more philosophically momentous than familiar paradoxes in the foundations of mathematics. Shanker takes this puzzling note as a point of entry: if we read it sympathetically, it directs us to separate Turing’s strict mathematics from the metaphysical freight that later came to be carried under its banner.
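
To make that picture concrete, here is a minimal sketch of such a device in Python. The dictionary-based transition table and the toy rule set (which simply rewrites every symbol to 1 and halts at the first blank) are my own illustrative choices, not anything drawn from Turing’s paper or from Shanker’s discussion.

```python
# A minimal Turing machine sketch: a finite transition table acting on an
# unbounded tape. The example rules are illustrative only: starting in state
# "scan", the machine rewrites every symbol to 1 and halts when it reads a blank.

def run_turing_machine(rules, tape, state="scan", blank="_", max_steps=1000):
    """Execute `rules` on `tape` until no rule applies or the step limit is hit.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:   # no applicable rule: the machine halts
            break
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)), state

# Illustrative rule table: rewrite 0s and 1s to 1, moving right, halt on blank.
flip_rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
}

print(run_turing_machine(flip_rules, "0101"))  # -> ('1111', 'scan')
```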

Two historical clarifications matter here. First, Wittgenstein’s worries about mechanicism predate Turing’s engagement with the mechanist thesis by almost a decade. In lectures and notes from the early 1930s Wittgenstein framed “Can a machine think?” as a category mistake unless we first clarify our concepts, a stance that makes sense of his famous comparison with asking if the number 3 has a colour. Second, the mechanist thesis itself was “in the air” well before Turing’s technical work. Wittgenstein’s contemporaneous remarks show him probing the grammar of “thinking” and “pain,” not denying engineering possibilities but resisting a slide from behavioural regularities to conceptual identity. These points explain why, for Wittgenstein, Turing’s 1936 paper looked like a hybrid—mathematical logic on one side, philosophy of mind on the other—and why he focused his criticisms on Turing’s prose interpretation rather than the formal results.

If we put the philosophy to one side and read On Computable Numbers as mathematics, the guiding idea is clear and modest: once a class of functions is effectively representable (say, in binary), one can give an abstract recipe for a device whose symbolic operations will compute those representations. The main point is not that a machine thinks, understands, or intends as it executes the rules, but that there exists a finite, determinate specification whose iterations produce the right results. Precisely because Turing abstracts away from cognitive terminology, his analysis illuminates the link between recursion-theoretic definitions and mechanical executability. Shanker’s complaint is that Turing later re-imported quasi-cognitive talk by defining human calculation in mechanical terms, thereby making it look as if the machine inherits normativity from us, when, on Wittgenstein’s view, it is the other way around: the human calculator is the bearer of norms, and the machine is a surrogate for certain narrow aspects of our practice. Hence Wittgenstein’s provocation: Turing’s machines are “humans who calculate.”

This is where Rosen’s caution usefully dovetails with Wittgenstein’s. If we treat Church’s Thesis (CT) as a bridge from the mathematical notion of recursiveness to the physical notion of realizability, we must add physics back in. As Rosen argued in Church’s thesis and its relation to the concept of realizability in biology and physics, the slogan “effectively calculable = recursive” has no physical content on its face; to use it as a criterion of realizability requires restating it as a claim about which classes of physical processes exist, and as Rosen showed, that is true only given strong assumptions about the laws that govern physical state change. In other words, without such assumptions CT is a convention in recursion theory; with them, it becomes a substantive empirical proposition, vulnerable to how nature actually behaves.

Wittgenstein’s distinctive pressure point, however, is not physics but normativity. To describe a process with a rule is not yet to show that the process follows the rule; regularity is one thing, rule-following another. The grammar of “calculation” ties it to the ability to instruct, justify, correct, or explain by reference to the rule. So merely producing the right answers does not suffice to be calculating in the sense that belongs to mathematical practice. Turing’s rejoinder, embodied later in the behavioural stance of the Turing Test, was to shift criteria outward: if a device satisfies the complex behavioural regularities that govern our use of “calculating” with respect to human computers, then there is no a priori bar to calling it a calculator; the question is empirical. But on Wittgenstein’s reading, this reframes rather than resolves the issue: it trades in the internal normativity of rules for an external, causal mapping between input and output. Thus Shanker’s closing paradox for the first section of his manuscript begins to make sense: if calculating looks like the action of a machine, it is because the human being doing the work is the machine—i.e., the site where normativity is instituted—while the artefact remains a device for reproducing certain aspects of that instituted practice.

Thesis or Axiom?

Church’s contributions recast “effective calculability” as recursiveness, influenced (like Turing) by Hilbert’s foundational program with its finitary strictures: procedures must be fixed in advance; computations must finish in finitely many steps. Read this way, Church’s Thesis (CT) is less a discovery than a stipulation: it identifies “the effectively calculable” with a particular class and then points out a happy convergence with our mathematical practice. That is a powerful convention for recursion theory, but as Shanker stresses, it offers no independent explanation of why that convention captures the intuitive notion of effective procedure. It gives us logico-grammatical certainty, not an inductive argument. Turing’s 1936 analysis filled that explanatory gap by exhibiting an abstract device whose operations mirror the recursors; yet that very success tempts a further slide: from “algorithms are mechanically calculable” to “thought is mechanical.”
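
The recursion-theoretic vocabulary behind that convention can be made tangible with a toy sketch of my own (not Shanker’s): addition built from the successor function by primitive recursion, and a μ-operator implemented as blind, unbounded search. Nothing beyond rote substitution and counting is involved, which is exactly the kind of mechanical executability Turing’s analysis exhibits.

```python
# Toy illustration of the recursion-theoretic notions behind Church's Thesis:
# functions built from successor, primitive recursion, and unbounded
# minimisation (the mu-operator), each computed by rote, rule-driven steps.

def successor(n):
    return n + 1

def add(m, n):
    # Primitive recursion on n: add(m, 0) = m; add(m, n + 1) = successor(add(m, n)).
    result = m
    for _ in range(n):
        result = successor(result)
    return result

def mu(predicate):
    # Unbounded minimisation: return the least k for which predicate(k) holds.
    # Like a Turing machine's run, the search may in principle never terminate.
    k = 0
    while not predicate(k):
        k += 1
    return k

print(add(3, 4))                      # -> 7
print(mu(lambda k: add(k, k) >= 10))  # -> 5, found purely by blind search
```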

The years immediately following On Computable Numbers show Turing himself testing that slide. Working on chess, he moved from “brute-force” calculation to the idea of learning programs that modify themselves, a shift from fixed to self-modifying algorithms. This provided both a formalist infrastructure (mechanical symbol manipulation) and a mechanist superstructure (programs that learn), culminating in his 1947 London lecture and the 1950 paper Computing Machinery and Intelligence, where the Mechanist Thesis receives its most famous public articulation. In Shanker’s telling, the 1936 paper thus becomes a turning point: not only for the passage from recursion theory to computer science, but also for the philosophical transition to Artificial Intelligence.

Kurt Gödel’s reactions help isolate what is mathematics and what is metaphysics in this story. He criticized Church’s formulation as “thoroughly unsatisfactory” unless one could show that generally accepted properties of effective calculability force the proposed class, which is precisely what Turing’s analysis seemed to deliver and why Gödel preferred Turing’s version of CT to Church’s. Yet Gödel also resisted the mechanist reading Turing later drew from it, warning (in his later remarks) that the question of whether there exist finite, non-mechanical procedures, not equivalent to any algorithm, has nothing to do with the adequacy of the definitions of “formal system” and “mechanical procedure.” In short, Gödel could embrace Turing’s mathematical explanation while rejecting the epistemological investment many read into it.

Shanker’s discussion of Judson Webb sharpens the fork. The halting problem does not discredit mechanicism; in Webb’s neat phrase, it is a “guardian angel” of computability: to say a Turing machine is rule-governed is not to say we can predict its termination. Indeed, the point of learning programs is precisely that their evolution can outrun our foresight. But Webb’s conciliatory reading sets up Shanker’s Wittgensteinian worry: even if thought and mechanical calculation are partly co-extensive (say, for partial recursive functions), the identification is unstable. For mechanical execution can be mapped to normative practice without thereby being that practice; the two diverge at the point where justification, instruction, and correction pick out meaning and correctness, not mere causal production of outputs.
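
The diagonal argument behind that “guardian angel” can be sketched in a few lines. What follows is a textbook reconstruction of my own, not anything specific to Webb’s or Shanker’s text; the `halts` oracle is hypothetical and deliberately left unimplemented, since no such total decision procedure can exist.

```python
# Sketch of the classic diagonal argument behind the halting problem.
# The oracle below is hypothetical: the point of the argument is that no
# correct, always-terminating implementation of it can exist.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually terminates."""
    raise NotImplementedError("no such decision procedure exists")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:          # loop forever if the oracle says "halts"
            pass
    return "halted"          # halt if the oracle says "loops"

# Feeding `contrary` to itself yields a contradiction either way:
# if halts(contrary, contrary) were True, contrary(contrary) would loop;
# if it were False, contrary(contrary) would halt.
```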

Hence Shanker’s verdict on “Church’s convention.” Treated as an axiom within recursion theory, CT is indispensable. Treated as an empirical thesis about minds and machines, it outruns its warrant unless we either (i) add physical assumptions (Rosen’s route) or (ii) reconceive what we are calling “calculation” (Wittgenstein’s route). The first path risks being false to physics; the second risks being false to our mathematical grammar. As I have discussed recently, in my opinion we should inspect both. That is exactly the knife-edge on which debates about mechanicism, AI, and human cognition have balanced since 1936.

Revindicating Computation

In the last movement of his paper, Shanker brings learning back into focus, but now as a diagnostic for the very language we use to talk about AI. We routinely say that machines “learn,” “understand,” or “reason,” but as Melanie Mitchell argues in The metaphors of artificial intelligence, much of this is metaphor: productive for research agendas, yet treacherous when we forget that metaphors are not definitions. Shanker’s Wittgensteinian reading gives this a sharper edge: if learning is any change that stably improves performance, then nothing categorically distinguishes biological learners from “ideal learning machines.” But then, what becomes of rule-following? Are we committed to a picture in which an algorithm is just a set of trivial subrules whose blind execution yields the right outputs?

Turing’s own proposal about “learning programs” suggests a way machines can behave intelligently without grasping meanings: instructions must be complete and explicit; the device need not understand any intermediate results; and self-modification can increase competence by reorganizing its store of rules. This neatly skirts the symbol grounding problem: replace understanding with execution plus revision, and you can have competence without semantics. On Wittgenstein’s telling, however, that assumes it is coherent to talk about “meaningless subrules” being followed. If a rule is, by nature, normative (something one can cite to justify, correct, or explain), then a “rule” stripped of all such connections is not a rule but a causal recipe; it can encode normative practice, but it does not embody it. The danger, Shanker warns, is to confuse the map with the territory: to think that because normative actions admit causal models, the causal structure therefore is the normativity.
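
As an illustrative caricature of my own (not Turing’s actual chess experiments), here is how competence without semantics might look in code: a store of input-to-output rules revised purely on external correction, so that performance improves while nothing in the device answers to “understanding.”

```python
import random

# Caricature of a "learning program": a rule store mapping inputs to outputs,
# revised purely on external feedback. Competence grows by mechanically
# rewriting the store of rules; no step involves grasping a meaning.

class RuleLearner:
    def __init__(self, outputs):
        self.outputs = outputs
        self.rules = {}                    # stimulus -> currently preferred output

    def respond(self, stimulus):
        # Blind execution: emit the stored subrule, or guess if none exists yet.
        return self.rules.setdefault(stimulus, random.choice(self.outputs))

    def correct(self, stimulus, right_answer):
        # Self-modification: overwrite the offending subrule, nothing more.
        self.rules[stimulus] = right_answer

# A teacher drills the device on parity; it converges on the right answers
# without anything one could call grasping "even" or "odd".
learner = RuleLearner(["even", "odd"])
for n in range(20):
    answer = learner.respond(n % 4)
    truth = "even" if n % 2 == 0 else "odd"
    if answer != truth:
        learner.correct(n % 4, truth)

print(learner.rules)   # e.g. {0: 'even', 1: 'odd', 2: 'even', 3: 'odd'}
```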

This distinction matters for how we read Turing machines in relation to formal systems. Gödel saw that the very concept of a formal system entails mechanical operations on symbols; Turing’s model captures this essence elegantly. But Wittgenstein would invert the moral: the clarity of Turing’s model shows the non-mathematical character of formal systems as we use them. They are tools embedded in practices, not self-interpreting engines of meaning. When we describe a machine’s “state of mind” as the link between the symbols it observes and its next move, we risk sliding from encoding (which presupposes users who can read the code) to embodiment (which suggests the causes are just the rules). Once that slide occurs, normativity disappears behind a causal veil, and “following a rule” collapses into “behaving in accordance with a regularity.” We are reducing semantics and pragmatics to mere syntax. And as von Neumann argued in 1955, this reduction obliterates the functional role of measurement and control.

None of this diminishes Turing’s central mathematical insight: that, given appropriate encodings, recursive functions are ideally suited to mechanical implementation. But “mechanizing rule-governed action” is a substitution, not a subsumption. Calling the product “calculation” is a useful convenience so long as we remember what makes calculation in our practice: the possibility of getting it wrong or right, of giving reasons, of teaching and correcting. On this view, On Computable Numbers is best seen as a hybrid: a luminous explication of a class of functions that then strays into quasi-epistemology when it addresses what human computers do. The continuing heat around Church’s Thesis is evidence that we have never entirely separated those strands. If we wish to “revindicate” computation, to give it back its clarity and use it to describe life or cognition, we must either anchor our empirical theses in physics (Rosen’s line) or keep our logical theses tied to the norms of mathematical practice (Wittgenstein’s line), and resist the temptation to let metaphors do silent conceptual work in between.

Conclusion

So what does “computing” mean? Turing’s analysis gave us a crystalline mathematical notion: effective procedures realized by a simple symbol-manipulating machine. That notion anchors computer science and continues to inspire applications far beyond its original habitat. But Wittgenstein (as read by Shanker) presses us to separate the mathematical achievement from its philosophical overreach. Computation, as humans use the concept, is not merely regular symbol transformation; it is a normative practice of doing it correctly, justifying, explaining, and correcting by appeal to rules. When we call a device “a computer,” we project that practice onto a mechanism, and that projection is warranted only for certain purposes.

Rosen adds a complementary caution: if we treat Church’s Thesis as a physical claim about realizability, we must reformulate it in explicitly physical terms (about which processes do not exist) and then defend the strong conditions it assumes about natural law. Otherwise we mistake a logico-grammatical clarification for a statement about the world. The upshot is a two-level picture. At the formal level, CT unifies our models of effective procedures. At the conceptual level, the meaning of “computing” depends on the roles rules play in our practices; and that dependence cannot be read off from causal structure alone.

Where does this leave us? First, it suggests humility about computationalism in mind and life: Turing’s machines show what can be simulated by mechanical procedures, not what counts as rule-following in the full-blooded, normative sense Wittgenstein emphasized. Second, it supports a substrate-attentive program: if we want computation to illuminate life or cognition, we should investigate how particular substrates (biological, chemical, or otherwise) sustain the pragmatics of rule-governed activity rather than assume that symbol crunching is enough. Finally, it reframes CT as a hinge: mathematically indispensable, but philosophically non-final. To keep meaning in view, we must, as Wittgenstein urged, “go right down to the foundations”: not to replace Turing, but to understand what his great invention does and, crucially, what it does not, by itself, decide.

This post is licensed under CC BY 4.0 by the author.