Saturday, May 30, 2015

Equality diagram

Here's an attempt at a diagram illustrating some of the conceptual structure of equality types.  LaTeX source is at

``In the homotopy interpretation, ... the type of triples (x, y, p), where x and y are the endpoints of the path p (in other words, the Σ-type ∑(x,y:A)(x = y)), is inductively generated by the constant loops at each point x. As we will see in Chapter 2, in homotopy theory the space corresponding to ∑(x,y:A)(x = y) is the free path space — the space of paths in A whose endpoints may vary — and it is in fact the case that any point of this space is homotopic to the constant loop at some point, since we can simply retract one of its endpoints along the given path.'' (HoTT p. 51)

``[T]he space of paths starting at some chosen point (the based path space at that point, which type-theoretically is ∑(y:A)(a = y)) is contractible to the constant loop on the chosen point.'' (HoTT p. 51)

``In set theory, the proposition a = a is entirely uninteresting, but in homotopy theory, paths from a point to itself are called loops and carry lots of interesting higher structure. Thus, given a type A with a point a : A, we define its loop space Ω(A, a) to be the type a =A a.'' (HoTT p. 68)

This diagram is intended to convey the idea that point a freely generates its path space, hence b_1 and b_2, that there are multiple "ways of being identical", multiplicity of loops at a, etc.  Suggestions for improvement welcome.

Wednesday, May 27, 2015

From Path Induction to Path Involution in HoTT

There's something distinctly funky about the HoTT book's use of the term "induction".  As far as I can tell, it uses the term indifferently both for what anybody can recognize as genuine mathematical induction (e.g. over the naturals) and for reasoning that does not involve an inductive step.  I cannot see why something that does not involve induction ought to be called, you know, induction.  I brought this up in an Issue ticket for the HoTT book and was rewarded with a sneer from one of the World's Greatest Mathematicians, so you've been warned: what follows may be grossly silly.  (On the other hand, I don't care how smart you are, if you want to convince me that Monday is really Tuesday you'd better come up with something better than a sneer.)

This morning I had a little epiphany; I finally realized exactly why the HoTT book's idea of induction feels so wrong to me.  It has to do with the difference between classical and intuitionistic reasoning about infinities.  In this post I'll try to explain what I think is going on.

The general form of "induction principles" in the HoTT book seems to be something like the following: "to prove something for all x, it suffices to prove it for y".  Turned around: "if you've proven something for y, then you have proved it for all x".  If x ranges over Nat, y is the base case plus the inductive step; for enumerated types, y comprises the base cases; for equality and coproduct types too, y is the base case.
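For a finite enumerated type this pattern is unproblematic.  Here is a minimal sketch in Lean 4, using Lean's Bool as the enumerated type (the name boolInd is mine):

```lean
-- "Induction" for an enumerated type: to prove P b for all b : Bool,
-- it suffices to prove the two base cases P true and P false.
def boolInd {P : Bool → Prop} (ht : P true) (hf : P false) :
    ∀ b, P b
  | true => ht
  | false => hf
```

Here there really is nothing beyond the base cases, so "prove it for y" exhausts "prove it for all x".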

The problem is that, as the HoTT book emphasizes, in the case of equality types there may be many proofs of equality distinct from the base case (refl) and from each other.  In fact infinitely many.  Now we can prove an equality for the base case by construction - refl(a) is by definition a proof of a = a.  But HoTT says that proving this base case suffices to prove all the other cases, no inductive hypothesis needed.  How can this be?  As I understand it, HoTT says that this works because the base case "freely generates" all the other cases.  But that's a non sequitur.  Mathematical induction is a proof technique that depends on a particular conception of a non-terminating generative process; free generation is a mathematical concept that has nothing to do with proof, as far as I can tell.  In free generation the base case freely generates all the other cases; it does not follow that a proof of the base case counts as a proof of all the other cases - at least, not for intuitionistic reasoning.  "Free generation" is just a description of how some things are related to some other things, not a proof technique.  Plus it's not constructive.  What gets generated, unlike inductively constructed elements, is not compositional; it does not build on the base case.  Free "generation" is actually a misnomer; it refers to various properties, not to generation.  The operative term in "freely generated" is "freely", not "generated".  But "free" is defined in terms of the result of the generation process - what is generated - rather than in terms of the process of generation itself.  What's going on here?
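For reference, the principle under discussion can be sketched in Lean 4, whose built-in Eq plays the role of the identity type (the name pathInd is mine; whether this deserves the name "induction" is exactly the question at issue):

```lean
-- Path induction (the J rule): to prove C x y p for every path
-- p : x = y, it suffices to prove the base case C x x rfl.
def pathInd {A : Type} (C : (x y : A) → x = y → Prop)
    (c : ∀ x, C x x rfl) :
    ∀ x y (p : x = y), C x y p
  | x, _, rfl => c x
```

Note that the definition mentions no inductive hypothesis and no step: the only case handled is the base case rfl.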

I think the problem is that HoTT uses classical logic for the semantic domain, but type theory is intuitionistic.  And under the intuitionistic perspective, a phrase like "proof of P for all x", where there are infinitely many xs, is absurd.  The phrase "all x" is incoherent unless there are only finitely many xs, since there is no "all" to infinity.  You can't prove stuff for an infinite totality because there is no such thing; the best you can do is provide a method for finding a proof for any element in the totality.  (See the Dummett quotes below.)

What about mathematical induction over Nat?  We can treat the base case and the inductive step intuitionistically, the latter being a method of "going on in the same way".  But the conclusion - "therefore P(n) for all n" - should be replaced with "therefore for any n we can show (find) P(n)".  Proving the base case and the inductive step does not mean we have given a proof for "all n"; it means we've given a proof method that works for any n: starting at the base case, we can run through the initial segment that terminates at that n.
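Read as a proof method rather than as a proof "for all n", Nat induction can be sketched in Lean 4 (the name natInd is mine):

```lean
-- Given the base case and the step, this yields a proof of P n for
-- any particular n by running through the initial segment 0, 1, ..., n.
def natInd {P : Nat → Prop}
    (base : P 0) (step : ∀ n, P n → P (n + 1)) :
    ∀ n, P n
  | 0 => base
  | n + 1 => step n (natInd base step n)
```

The recursive call makes the "initial segment" explicit: the proof for n + 1 is built compositionally on the proof for n.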

When it comes to HoTT's "principles of induction" for coproducts, equalities, and the like, HoTT seems to rely on the classical logic of homotopy theory.  The formula goes something like this: "if we've proven P for the constant path at a (base case), then we have proved it for every path (element of) a=b".  The problem is that there are elements other than the base case, but there is no "initial segment" that links the others to the base case; there is no sequence.  So we are given no way to prove P for any element other than the base case.  There is no way to "keep going in the same way" in order to reach an arbitrary element.  There is no way that I can see to construe such "induction principles" intuitionistically.  They are not genuinely inductive.

The homotopic strategy seems to be to present the non-canonical tokens of, say, an equality type, as (non-canonical?) paths "freely generated" by the base case (refl).  Once we have them, their endpoints count as "equal", since there is a path between the two.  And under the classical perspective of the semantic domain (homotopy theory), we always already have them.  But if we want an intuitionistic version of this idea suitable for constructive type theory, we need a means of actually constructing them in a way that allows us to "reach" them.  We need a way of thinking of "free generation" as a non-terminating process that constructs these paths, one at a time, giving an initial segment for each element, which allows us to reach each in a constructive proof.  I don't really see a way of doing this at the moment, but then again people like Beth, Bishop and Dummett have developed lots of techniques for reconstructing bits of classical mathematics intuitionistically, so maybe there is already a technique out there for handling this.

But I think there is another possibility that will allow us to prove things for the sorts of infinities involved in equalities, coproducts, and the like without relying on a concept of induction.  The trick is to think of two kinds of infinitary process, the inductive and the involutional.  An infinite inductive process always "moves forward", so to speak.  Each step in the process spits out a unique token.  For Nat, we get 0, 1, ..., n.  An infinite involutive process goes nowhere; it just spins in place, turning in on itself (hence "involution").  Each cycle in the process spits out a unique token-repeatable.  So an involution that produces 'a' generates a sequence: a, a, ..., a.  Each a is a distinct particular, so they are not identical; but they are all 'a' tokens (hence "token-repeatables"), so they are interchangeable.  And because of that, proving P(a) for a particular such 'a' is as good as proving it for any such 'a'; there is no need to run through an initial segment.

In terms of paths, the involutional process generates distinct 'a' tokens, so we should treat witnesses of e.g. a=b as paths from the canonical 'a' token to other 'a' tokens - which we call 'b', since there is really no way that a writing system can present this idea unambiguously using only 'a' tokens.  We could also always write a=a so long as we agree that these two 'a' tokens need not be "the same", so to speak - in other words, x:a=a is not necessarily a reflexivity proof - it could also represent a path to a different 'a' token.

The key concept here is "token-repeatable", rather than "free generation".  The notion of an involutive process generating a sequence of (equal) token-repeatables gives us an intuitionistically respectable interpretation of the concept of free generation.  The process is non-terminating (hence infinitary) and sequential, creating initial segments; but the elements of those initial segments are token-repeatables, which makes them interchangeable, so there is no need to "run through" an initial segment to reach a particular such token.  You can stop after the first one.

To cut a long story short: what the HoTT book calls "path induction" is not induction at all; it would be better to call it "path involution".

More details to follow in a separate post.

Dummett, Elements of Intuitionism

p. 4:

"The fact that quantification over an infinite totality shares so much in common with quantification over a finite one tempts us to overlook the crucial difference from the case in which we are quantifying over a finite totality which we can survey, namely that we do not have, even in principle, a method of determining the truth-value of a quantified statement by carrying out a complete inspection of the elements of the domain, and checking, for each one, whether the predicate applies."

p. 40-1:

"[T]he thesis that there is no completed infinity means, simply, that to grasp an infinite structure is to grasp the process that generates it, that to refer to such a structure is to refer to that process, and that to recognize a structure as being infinite is to recognize that the process will not terminate.  In the case of a process that does terminate, we may legitimately distinguish between the process itself and its completed output: we may be presented with the structure that is generated, without knowing anything about the process of generation.  But, in the case of an infinite structure, no such distinction is permissible: all that we can, at any given time, know of the output of the process of generation is some finite initial segment of the structure being generated.  There is no sense in which we can have any conception of this structure as a whole save by knowing the process of generation."

p. 41:

"It is, however, integral to classical mathematics to treat infinite structures as if they could be completed and then surveyed in their totality, in other words, as if we could be presented with the entire output of an infinite process.  Given his assumption that the application of a well-defined predicate to each element of the totality has a determinate value, true or false, the classical mathematician concludes that its universal closure has an equally determinate value, formed by taking the product of the values of its instances, and that the existential closure likewise has a determinate value, formed by taking the sum of the values of its instances.  On such a conception, the truth-value of a quantified statement is the final outcome of a process which involves running through the values of all its instances; the assumption that its truth-value is well-defined and determinate is precisely the assumption that we may regard an infinite process of this kind as capable of completion... On an intuitionistic view, neither the truth-value of a statement nor any other mathematical entity can be given as the final result of an infinite process, since an infinite process is precisely one that does not have a final result: that is why, when the domain of quantification is infinite, an existentially quantified statement cannot be regarded in advance as determinately either true or false, and a universally quantified one cannot be thought of as being true accidentally, that is independently of there being a proof of it, a proof which must depend intrinsically upon our grasp of the process whereby the domain is generated."

Tuesday, May 19, 2015

Constructors and Co-constructors in HoTT

The HoTT book's treatment of constructors, eliminators, etc. deviates somewhat from the treatment commonly found in texts on constructive logic.  Of eliminators, it says that they tell us how to use elements of a type.  A computation rule "expresses how an eliminator acts on a constructor."  In each case, the book presents these things as functions.

The contrast is with the way intro and elim rules are usually presented in logic, as inference rules.  For example, &-intro licenses an inference from A, B to A&B.  This corresponds cleanly to the HoTT pair constructor.  But &-intro is usually accompanied by two &-elim rules, call them &-elim1 and &-elim2, which are also inference rules.

        &-elim1:  A&B |- A
        &-elim2:  A&B |- B

This could also be expressed more HoTTly:

        elim1:  (a,b):AxB |- a:A
        elim2:  (a,b):AxB |- b:B

The HoTT book does not propose eliminators like this.  Instead it defines projection functions pr1 and pr2, which are like elim rules but are not basic to the type definition.  It offers an "elimination rule for products" that answers the question "How can we use pairs, i.e. how can we define functions out of a product type?"  The technique is to define a function on pairs f:AxB->C in terms of an iterated function g:A->B->C.
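The book's setup - an eliminator that turns a curried function into a function on pairs, with the projections derived from it - can be sketched in Lean 4 (the names prodRec, pr1, and pr2 are mine; Lean's own Prod.fst and Prod.snd are primitive structure projections, so this is a reconstruction of the book's order of definition, not Lean's):

```lean
-- Eliminator for products: to define f : A × B → C, it suffices to
-- give an iterated (curried) function g : A → B → C.
def prodRec {A B C : Type} (g : A → B → C) : A × B → C
  | (a, b) => g a b

-- The projections are then derived from the eliminator:
def pr1 {A B : Type} : A × B → A := prodRec fun a _ => a
def pr2 {A B : Type} : A × B → B := prodRec fun _ b => b
```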

So the contrast in this case is fairly clear: the logical tradition treats an eliminator as an inference that "undoes" a constructor; HoTT treats an eliminator as a function that "uses" a construction.  In the case of &/x (i.e. product types), that means HoTT must introduce derived functions (projectors) in order to "undo" the constructor.

Note that this makes HoTT's terminology a bit abusive; after all a function on products does not necessarily eliminate anything: just look at the Identity function.

Also note that in both cases the relation between constructor and eliminator is asymmetric; constructors are "more primitive" than eliminators.  This follows Gentzen's original treatment of intro and elim rules: intro rules are viewed as defining (in a sense) the logical operators, and elim rules are derived (in a sense) from intro rules.

The HoTT book's strategy seems to follow from the decision to treat these things as functions.  The projection functions cannot be primitive (to the product type) since we do not yet (in the middle of defining product types, so to speak) know how to apply a function to a product type.  So you have to start by defining how any such function f:AxB->C works, and then use that result to define the specific projection functions.

The proposal here is to reconceptualize constructors and eliminators.  This involves several basic moves: first, foreground the notion of inference rather than functions; second, do not treat constructors and eliminators as functions; and third, treat eliminators as co-constructors rather than undoers or users of constructors.

Foregrounding inference:  read "a:A" as "from token a infer type A" (there's a lot to be said about this approach but I'll leave that for another post) and treat the rules defining types like AxB as inference rules.  Not traditional inference rules that go from true premises to true conclusion, but inferency inference rules, that go from good inferences to good inferences.  For example, read pair constructor (intro rule) as something like "if the inference from a to A is good and the inference from b to B is good, then the inference from (a,b) to AxB is good".

Constructors and co-constructors: the idea is pretty simple and easily illustrated for the case of product types:

        ()-ctor:   \( a:A,\  b:B  \vdash  (a,b):A\times B\)
        pr1-co-ctor:   \(x:A\times B \vdash  pr1(x):A\)
        pr2-co-ctor:   \(x:A\times B \vdash  pr2(x):B\)

In the case of ctors, the rule is not to be construed in functional terms: it is not a function that produces a "result" of type AxB, given inputs of types A and B.  It does not "produce" anything at all; it just licenses the inference from (a,b) to AxB, given inferences from a to A and from b to B.  Strictly speaking we should not even think of this as constructing anything either, except perhaps the syntactic form (a,b).

In the case of co-constructors, the idea is that they, like constructors, license inferences.  As in the case of so-called constructors, we're not really constructing anything; \(pr1(x)\) has type \(A\) if \(x\) has type \(A\times B\).  That is all.  But the construction metaphor is convenient nonetheless: co-constructors may be viewed as "devices" that construct something, just like constructors.

Notice that we could write these using a function type notation, but here (as in HoTT) there is no function definition for constructors and co-constructors.

A critical contrast with the HoTT (and the traditional logical) approach is that co-constructors need not operate on constructors.  Constructors and co-constructors are perfectly symmetrical; nothing in the concept of types and tokens warrants (a priori) the primacy of constructors over co-constructors.  In other words it is not the case that one must first construct an element in order to have something that an eliminator can operate on.

Also note that with this approach a HoTT-style definition of how functions on product types may be defined still works, but it is not to be construed as an elimination rule on products.

It follows from the definitions that \(x:A\times B \vdash  (pr1(x), pr2(x)) : A\times B\).  This does not mean that every \(x:A\times B\) is a pair, but it does mean that a pair can be inferred from every such \(x\).  Which dovetails with the HoTT book's strategy: to define a function on a product type it suffices to define it for all pairs.

What about the uniqueness principle uppt?  In the HoTT book, it depends on the induction principle for pairs.  But on the present approach, as just shown, it follows directly from the constructor and co-constructors.
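For comparison, the book's proof of uppt by pair induction looks like this sketch in Lean 4 (the name uppt is mine; in Lean the equation even holds definitionally, by eta for structures, but the pattern-matching form mirrors the inductive proof):

```lean
-- Uniqueness principle for products: every element of A × B is
-- (propositionally) equal to the pair of its projections.
def uppt {A B : Type} : ∀ x : A × B, x = (x.1, x.2)
  | (_, _) => rfl
```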