## Induction and recursion

March 6, 2008. Posted by David Pierce in Uncategorized.

I have been invited to comment here on the distinction between proof by induction and definition by recursion. These have been confused since Giuseppe Peano published (in 1889) what came to be called the Peano Axioms. It might appear that both induction and recursion involve infinite sets; but only recursion necessarily does.

Writing before Peano, Richard Dedekind (1887) was clear about the distinction. He defined a simply infinite system as a system (i.e. set) with a base-element and a transformation (i.e. function) into itself such that:

1. the base-element is not the transform (i.e. image under the transformation) of any element;
2. distinct elements have distinct transforms (in Dedekind’s terminology, the transformation is similar);
3. proof by induction is possible: a subset containing the base-element and closed under the transformation is the whole set.

Peano also wrote down these conditions, but in a new logical symbolism, and they came to be known as the Peano Axioms. But unlike Dedekind, Peano apparently thought that these axioms immediately allowed the definition of addition. Indeed, let the base-element be 1, and the transformation be φ. We define n + 1 as φ(n) and, assuming n + m has been defined, we define n + φ(m) as φ(n + m).

This is a recursive definition. Is it justified? If so, which of the three axioms are required, and how?
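The recursive definition above can be sketched in a few lines of Python, with ordinary integers standing in for Peano's numerals; the names `phi` and `add` are mine, not Peano's:

```python
# Peano's recursive definition of addition: the base element is 1,
# the transformation is the successor phi, and
#   n + 1      = phi(n)
#   n + phi(m) = phi(n + m).

def phi(n):
    """The successor transformation."""
    return n + 1

def add(n, m):
    """n + m, defined by recursion on the second argument."""
    if m == 1:
        return phi(n)              # n + 1 is phi(n)
    return phi(add(n, m - 1))      # n + phi(m') is phi(n + m')
```

For instance, `add(2, 3)` unfolds to φ(φ(φ(2))) = 5.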

The definition of addition can be justified by induction alone. Likewise, multiplication. Edmund Landau published the proof (due to Kalmár) in 1929. But Landau did not dwell on the sufficiency of induction, or on what Dedekind had observed: a system that admits proof by induction may be finite, so that not all recursive definitions work.

Finite rings provide examples. In the ring of integers modulo p, or Z/(p), let 0 be taken as the base-element, and let the transformation be addition of the generator 1. No proper subset of the group contains 0 and is closed under the transformation: proof by induction is possible here. From this, addition and multiplication can be recovered.
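The induction property of Z/(p) can be checked numerically: iterating the transformation n ↦ n + 1 (mod p) from the base element 0 reaches every element, so the only subset containing 0 and closed under the transformation is the whole ring. The choice p = 7 below is arbitrary.

```python
# Z/(p) admits proof by induction: the closure of {0} under the
# transformation n -> n + 1 (mod p) is the whole ring.

p = 7
reached = {0}
n = 0
for _ in range(p):        # iterate the transformation p times
    n = (n + 1) % p
    reached.add(n)

assert reached == set(range(p))   # closure of {0} is all of Z/(p)
```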

However, exponentiation as a binary operation cannot be defined on this ring. Superficially, the attempted definition parallels those of addition and multiplication: $n^0 = 1$, and if $n^m$ has been defined, then $n^{m+1} = n^m \cdot n$. But this definition fails. Indeed, assuming p is prime and n is not 0, by Fermat’s Theorem we must have $n^{p-1} = 1$; but then, since p and 0 are the same element of Z/(p) when used as exponents, $n = n^p = n^0 = 1$, which is absurd unless p is 2.
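The failure can be watched numerically (p = 5 and n = 2 below are arbitrary choices): the exponents p and 0 name the same element of Z/(p), yet the two recursion rules assign them different values.

```python
# In Z/(p), the exponents p and 0 are equal, but the rules n^0 = 1 and
# n^(m+1) = n^m * n give them different values, since by Fermat's
# Theorem n^p = n. Shown for p = 5, n = 2.

p, n = 5, 2
assert p % p == 0 % p             # p and 0 are equal in Z/(p)
value_at_0 = pow(n, 0, p)         # the rule says n^0 = 1
value_at_p = pow(n, p, p)         # but n^p = n = 2
assert value_at_0 != value_at_p   # so n^m is not well-defined on Z/(p)
```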

What Fermat’s Theorem tells us is that we do have exponentiation as a function from Z/(p) × Z/(p-1) into Z/(p); but to prove this, we need more than that the ring Z/(p) and the group Z/(p-1) admit proof by induction.

Though distinguishing between definition and proof, Dedekind spoke of definition by induction rather than definition by recursion. If Ω is a set with distinguished element ω and a transformation θ of Ω into itself, then from a simply infinite system as above into Ω there is a unique transformation ψ such that ψ(1) = ω and ψφ = θψ. For Dedekind, this is a theorem about simply infinite systems.
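Concretely, the ψ of Dedekind's theorem is computed by iterating θ; a minimal sketch, with the simply infinite system taken to be the positive integers (the function and variable names are mine):

```python
# The unique psi with psi(1) = omega and psi(phi(n)) = theta(psi(n)),
# where phi is the successor n -> n + 1 on the positive integers.

def psi(n, omega, theta):
    value = omega              # psi(1) = omega
    for _ in range(n - 1):     # each successor step applies theta once
        value = theta(value)
    return value
```

Taking Ω to be the set of strings, ω = "a", and θ the map appending "b", we get ψ(3) = "abb".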

According to Saunders Mac Lane in Mathematics, form and function (1986), Lawvere first “made explicit” that this Recursion Theorem implies the three Peano Axioms listed above. The equivalence of the Theorem and the Axioms is also discussed in the first edition of Mac Lane and Birkhoff’s Algebra (1967). Nonetheless, although that text introduces the natural numbers (starting with 0) as satisfying the Peano Axioms, the writers then immediately use the natural numbers to index iterations of an operation f on a set: $f^0$ is the identity, and $f^{n+1}$ is $f \circ f^n$. This is a recursive definition, so one might expect it to be justified to the reader by means of the Recursion Theorem. But this theorem is stated about 40 pages later, as the Peano–Lawvere Axiom, from which the Peano Axioms are shown to follow; the converse is only to be found in “many [other] texts on foundations”.
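Those indexed iterates ($f^0$ the identity, $f^{n+1} = f \circ f^n$) can themselves be sketched as a recursive definition; the name `iterate` is mine:

```python
def iterate(f, n):
    """Return the n-th iterate of f: f^0 is the identity,
    and f^(n+1) is f composed with f^n."""
    if n == 0:
        return lambda x: x                      # f^0 = identity
    return lambda x: f(iterate(f, n - 1)(x))    # f^(n+1) = f o f^n
```

For example, iterating x ↦ x + 3 four times starting from 0 gives 12.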

Indeed, Stoll’s Set theory and logic (1963) is one such text. However, other texts continue to give the impression that “definition by induction” is permitted by the possibility of proof by induction. Other confusions are found in elementary texts, such as the suggestion that “strong induction” or the “well-ordering principle” are equivalent to (ordinary) induction. To me, it is important to get these things straight, if only because the attentive student might be confused (as I once was). Landau confessed to having been mistaken about what was possible with induction. Who has noted since how it is possible to go astray?

References:

1. Richard Dedekind, Essays on the theory of numbers (Dover, 1963)
2. Edmund Landau, Foundations of analysis (Chelsea, 1966)
3. Saunders Mac Lane, Mathematics, form and function (Springer, 1986)
4. Saunders Mac Lane and Garrett Birkhoff, Algebra (Macmillan, 1967)
5. Robert R. Stoll, Set theory and logic (Dover, 1979)
6. Jean van Heijenoort, editor, From Frege to Gödel, a source book in mathematical logic 1879–1931 (Harvard, 1967)

1. abo - March 15, 2008

Another relevant paper: Henkin, Leon. “On Mathematical Induction,” The American Mathematical Monthly, Vol. 67, No. 4 (Apr. 1960), pp. 323-338.

2. dcorfield - April 22, 2008

Anyone care to expand on Borisov and Manin here (p. 4):

It is interesting to notice that the classical theory of recursive functions must refer to a very special and in a sense universal algebra over a non-linear “computational operad”, but nobody so far was able to formalize the latter. Main obstacle is this: a standard description of any partially recursive function produces a circuit that may contain cycles of an a priori unknown multiplicity and eventually infinite subprocesses producing no output at all.

3. The Border of Kingdom « A Dialogue on Infinity - May 4, 2008

[…] my quest. This fable by Roerich came to my mind when I started to think about David Pierce’s induction/recursion opposition in terms of practicalities of “black box” computing in a very big residue […]

4. Henry - August 14, 2008

This is one of the most important posts I’ve ever read in my life.
Sincere thanks from a Maths student.

5. David Corfield - October 22, 2009

What’s special about $\mathbb{N}$ is that it’s an initial object in the category of algebras of the functor $X \mapsto 1 + X$, which adjoins a disjoint singleton to a set. That is, there is a map $\langle 0, s \rangle: 1 + \mathbb{N} \to \mathbb{N}$, and for any algebra $f: 1 + Y \to Y$ there is a unique algebra morphism from $\mathbb{N}$.

Induction corresponds to the non-existence of a proper subalgebra of $\mathbb{N}$. This is a general feature of initial objects: any monomorphism to them is an isomorphism.

Recursion corresponds to the existence of a morphism out of $\mathbb{N}$. E.g., there’s an algebra structure on $\mathbb{N}$, i.e. a map $1 + \mathbb{N} \to \mathbb{N}$, which sends the point of $1$ to $1$ and $m$ to $n \cdot m$. The unique morphism from the initial algebra now defines the function $f(m) = n^m$.

To take $\mathbb{Z}/(p)$ as an initial algebra we’d have to place it in a different category of algebras where the unary function applied $p$ times is equal to the identity function. Then induction works. However, now the attempted recursive definition will not work since $\langle 1, \times n \rangle: 1 + \mathbb{Z}/(p) \to \mathbb{Z}/(p)$ is not an algebra in this category.
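In code, the initial-algebra recursion described above amounts to a fold (the names below are illustrative):

```python
# An algebra of the functor X -> 1 + X on a set Y is a pair
# (omega, theta) with omega in Y and theta a map Y -> Y; the unique
# algebra morphism from N sends m to theta iterated m times on omega.
# With omega = 1 and theta multiplication by n, the morphism is
# m -> n^m.

def fold(omega, theta, m):
    value = omega
    for _ in range(m):
        value = theta(value)
    return value

n = 3
assert fold(1, lambda y: n * y, 4) == n ** 4   # f(m) = n^m
```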

6. Devendra - April 22, 2015

Thank you for this informative post. Very helpful!!