#### DRAFT

Inductive definition has been around forever; so has co-inductive "definition", for that matter, but it is much less well-known. Furthermore, there is an important sense in which co-inductive definitions are not definitions at all, at least not in the informal sense in which we normally think of definitions. "Definition" is derived from the same Latin root as "finite", and the point of a definition is precisely to give a finite, delimited, determinate characterization of the definiendum. The notion of an inductive definition already runs afoul of this informal notion, since it involves at least potential infinity in the form of the inductive step. But at least inductive definitions start out with their feet on the ground, with a base case. Co-inductive definitions are not like this; they do not have base cases, and their definienda may themselves involve indeterminate notions like infinity and non-determinism. So it is not obvious why we should even think of them as definitions. They're more like *co-definitions*: something dual to definitions, symmetrically related to the concept of definition but nonetheless fundamentally different.

That may explain why co-induction is not better known. At first it's just weird to have something that looks a lot like an inductive definition but lacks a base case. How can something like that serve as a genuine definition if it does not start at some definite *beginning* point? And if we cannot "reach" every case by starting from that beginning and inductively stepping our way up, then aren't we compelled to say that whatever it is we're talking about must be undefined, maybe even undefinable?
Here I think the beginning of wisdom is to reject essentialism and reductionism. The impulse to reduce everything to foundations has been overwhelmingly dominant, at least in the West, for many centuries, and since Descartes and the 17th-century Enlightenment it has been something of an article of faith. Of course there were always dissidents - Vico comes to mind - but it was only with the emergence of the Pragmatist Enlightenment beginning in the late 19th century that the folly of radical essentialism gradually became clear. OK, enough philosophy. The point is, to get comfortable with all this co-*stuff* you really do need to expand your imaginative horizon, and in particular you have to get comfortable with the idea that definitions from the ground up (reducible to foundational base cases plus an inductive step) cannot claim any privilege. Co-definitions that work from the top down (without a net, so to speak: from infinitude to finitude, from non-determinism to determinism, etc.) are just as fundamental. They may not *define* things in a reductionistic sense, but they do *characterize* them - or their behavior - in ways that make them just as *useful* as definitions.

### Inductive definition

Inductive definitions work by specifying how to *construct* the definiendum, always starting from a base case. For example, the type of finite lists List A is defined by giving the primitive constructors that may be used to "put together" finite lists: Nil and Cons. One of these constructors, Nil, functions as the inductive "base case".
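As a concrete sketch, here is how such a definition might look in Haskell (the names Nil and Cons come from the text; len is my own illustrative helper):

```haskell
-- An inductively defined list type: every value is built up from the
-- base case Nil by finitely many applications of Cons.
data List a = Nil | Cons a (List a)

-- Because every value is finite, we can consume it by structural
-- recursion, and the recursion is guaranteed to terminate.
len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs
```

For example, len (Cons 1 (Cons 2 (Cons 3 Nil))) evaluates to 3: the recursion bottoms out at the base case Nil.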

The values of an inductively defined type are always finite, though the type itself may be infinite (it may have infinitely many values). That means any value of such a type can, at least in principle, be completely constructed, since every such value is finite; that is just what inductive definition means.

Pi types are inductively defined; the primitive constructor is lambda. Base case?

### Co-inductive co-definition

> Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object. (The Pragmatic Maxim, C.S. Peirce)

Co-inductive co-definitions do not say what a thing *is*; rather, they say what it *does*. Rather than characterizing *essence*, they characterize *accidents*. Rather than *nature*, they specify *behavior*.

Co-inductive definitions work by specifying how to de-struct the definiendum, never starting from a base case. For example, the type of infinite lists List* A is defined by giving the primitive eliminators that may be used to "take apart" infinite lists: Hd* and Tl*. There is no base case; you don't construct an infinite list by starting at Nil and building up; if you tried that, you would never finish, and constructors must always terminate.
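In Haskell, where data is lazy by default, a stream type with exactly these two eliminators can be sketched as follows (Stream, hd, tl, from, and takeS are my own names, standing in for List* A, Hd*, and Tl*):

```haskell
-- A co-inductively co-defined stream type: no Nil, no base case.
-- A Stream is characterized entirely by its two observers hd and tl.
data Stream a = Stream { hd :: a, tl :: Stream a }

-- We never finish constructing a stream; instead we specify how it
-- behaves under observation: hd (from n) = n, tl (from n) = from (n+1).
from :: Int -> Stream Int
from n = Stream n (from (n + 1))

-- Observation always terminates, even though the stream is infinite:
takeS :: Int -> Stream a -> [a]
takeS 0 _ = []
takeS n s = hd s : takeS (n - 1) (tl s)
```

Here takeS 3 (from 0) yields [0,1,2]: each individual observation is finite, even though the stream itself never is.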

Instead, you start with the very thing you would be trying to construct, if you were in the business of inductive definition: infinite data, non-deterministic computations, etc. You treat those as just *given*, and you set about (co-)defining the way they behave under the operations of the type. Once you've done that, there is nothing more to know about them.

Co-inductive co-definitions are much like the police, according to that great mathematician Richard J. Daley, Mayor of Chicago: "The police are not here to create disorder; they're here to preserve disorder."

The values of the type are *given*, not defined! The business of the co-definition is not to define (the essence/nature of) List* A, but to co-define its behavior under the co-constructors (eliminators) of the type being co-defined. In the case of List* A, those eliminators are Hd* (Head*) and Tl* (Tail*). They look just like their counterparts (Hd, Tl) for List A; the critical difference is that they both operate on streams. Hd* co-constructs an a:A, and Tl* co-constructs another List* A.
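One standard way to make "co-define its behavior under the eliminators" precise is the unfold (anamorphism): supply a seed state and a step function saying what Hd* and Tl* will observe, and the stream is completely characterized by that behavior. A self-contained sketch, with names of my own choosing:

```haskell
-- Same stream type as before: defined by its observers, not by a base case.
data Stream a = Stream { hd :: a, tl :: Stream a }

-- unfold: from a seed s and a step function describing what the two
-- eliminators observe (the head value and the next seed), produce the
-- stream with exactly that behavior.
unfold :: (s -> (a, s)) -> s -> Stream a
unfold step s = let (a, s') = step s in Stream a (unfold step s')

-- Example: the even numbers, co-defined purely by their behavior
-- under observation, with no construction "from the ground up".
evens :: Stream Int
evens = unfold (\n -> (n, n + 2)) 0

-- A finite observer, for sampling the infinite object:
takeS :: Int -> Stream a -> [a]
takeS 0 _ = []
takeS n s = hd s : takeS (n - 1) (tl s)
```

The stream evens is never built in its entirety; the coalgebra (the step function) is the whole of what there is to know about it, in good Peircean fashion.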

The values of a co-inductively defined type *may* be infinite, but they need not be. In all cases, though, it is not possible to deterministically and completely construct such values. For example, List* A values cannot be completely constructed; we can derive a new List* A value from an old one (together with an a:A) using Cons*, but we cannot construct an *entire* infinite list.

Sigma types are co-inductively defined; the primitive eliminators (co-constructors) are fst and snd.
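On this view a pair type is characterized not by how pairs are built but by what its two projections observe. A minimal Haskell sketch (Pair, fst', and snd' are my own names, playing the roles of fst and snd):

```haskell
-- A pair type co-defined by its observers: a value of Pair a b is
-- whatever answers the two projections fst' and snd'.
data Pair a b = Pair { fst' :: a, snd' :: b }

-- The co-definition is completed by the laws governing observation:
--   fst' (Pair x y) = x
--   snd' (Pair x y) = y
-- Nothing more can be observed of a pair, so nothing more need be said.
```

For example, fst' (Pair 1 'x') is 1 and snd' (Pair 1 'x') is 'x'; the two projections jointly exhaust the behavior of the type.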