Tuesday, June 9, 2015

HoTT Without Vars


There are a few notational conveniences of the lambda calculus that seem to be missing in the HoTT calculus. Here are some ideas about how to add them.  First, a kind of "reverse lambda" operator that allows us to refer to arbitrary elements of a type without using a name, just like lambda does with functions.  Second, an explicit type former for enumerated types, that allows us to express such types without using a name.  Lastly, a pipeline application operator that allows us to string expressions together.

A PDF version of this article is at HoTT Without Vars.  This is revision 2.

Type Specialization

The type formers of HoTT (×, →, Π, Σ and the rest) are very similar to the lambda operator λ. Just as λ turns an open formula into the name of the associated function (λx.x + 1 names the function associated with x + 1), the type formers turn expressions into the names of types. For example, × turns A, B into the type name A × B; → turns A, B into the type name A → B, and so forth.

Lambda expressions like λx.x + 1 are called lambda abstractions; by extension, we can call type formation expressions type abstractions. To “undo” a lambda abstraction, we use application: (λx.x + 1)(2) = 3. Since the opposite of abstraction is specialization, it is tempting to think of application as specialization; but application involves more than mere specialization. If we think of a function (that is, a lambda abstraction) as a set of ordered pairs, then its specialization should be a particular ordered pair; but application does not deliver a pair, it projects the second element of a pair. So application can be thought of as a combination of specialization and projection.

Lambda abstractions are treated not only as the names of functions, but as function definitions. That’s what allows us to write, e.g. f = λx.x + 1.

There is no application operator for type abstractions. We cannot undo A → B by writing (A → B)(x), for example. But we can specialize type abstractions; that is just what expressions like a : A do. One (classical) way to interpret such an expression is to say that a denotes an “element” of A. Such an interpretation presupposes a denotational semantics and thus implicates a notion of choice; a more detailed gloss might say that “:” chooses an element of A and assigns it to a. Under a pragmatic interpretation that eschews denotational semantics, we might simply stipulate that a : A means that a is a token of type A and leave it at that. Either way, we have introduced a variable, a, in the service of specializing a type abstraction.

Now one of the great advantages of the lambda notation is that it allows us to dispense with function names. In fact this notational convenience was Church’s original motivation in introducing λ; he did not realize until later that it formed the basis of a powerful computational and logical calculus. Without the lambda operator we would have to define and name every function we need to use, so that we could later refer to it by name. With the lambda calculus we can just write out the function definition wherever we need it.

In principle we should be able to do the same thing – that is, dispense with intermediate names – for type abstractions and specializations. But the HoTT notation does not currently support this; if you want to work with an arbitrary element of a type, you must name it using the “:” operator.  And enumerated types (e.g. the boolean type) cannot be specified without naming.

To address this notational deficiency we propose a specialization operator for type abstractions, ƛ (U+019B, lambda with stroke; in LaTeX I use a reversed lambda). Its meaning is exactly analogous to that of the lambda operator, except that instead of abstracting it specializes. λ turns a particular formula into an abstraction; ƛ turns a type abstraction into a particular. Under a classical interpretation, the operator “selects” an element of the type to which it is applied; under a pragmatic interpretation, the combination of the operator symbol and a type symbol is a symbol of that type.

1) f : A→B   f names an arbitrary function of type A→B
2) ƛA→B   ƛA→B : A→B
3) ƛN   an arbitrary natural number
The usefulness of this becomes more evident with Π and Σ types. From the definition of path induction, p. 49:
4) \(C : \Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\)   same as: \(C :\equiv ƛ \bigg(\Pi_{(x,y:A)} ƛ\big( (x=_A y)\to\mathcal{U}\big)\bigg)\)
5) \(c : \Pi_{(x:A)}C(x,x,refl_x)\)   same as: \(c :\equiv ƛ \bigg(\Pi_{(x:A)} ƛ\ C(x,x,refl_x)\bigg)\)
These may be rewritten as follows using the pipeline application operator “|” described below; briefly, x|f = f(x):
6) \(c :\equiv ƛ\Pi_{(x:A)}\bigg(\ x\to\Sigma_{(x,x:A)}(x=_A x)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\bigg)\)   incorrect
6a) \(c :\equiv ƛ\Pi_{(x:A)}\ x\to ƛ\bigg(ƛ\Sigma_{(x,x:A)}(x=_A x)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\bigg)\)   correct?
7) \(c :\equiv ƛ\Pi_{(x:A)}\bigg(\ x\to\Sigma_{(x:A)}\Big(\Sigma_{(x:A)}(x=_A x)\Big)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\bigg)\)   incorrect
7a) \(c :\equiv ƛ\Pi_{(x:A)}\ x\to ƛ\bigg(ƛ\Sigma_{(x:A)}\Big(\Sigma_{(x:A)}(x=_A x)\Big)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\bigg)\)   correct?

These are equivalent (I think); (7) is just more explicit than (6). They are slightly different from (4) and (5) in that they do not use \(refl_x\). In both cases the entire meaning can be read off the one expression, just as with a lambda definition; it can be glossed in order: c is an element of a function type that takes any x : A, produces a dependent triple of the form (x, x, p), where p is any proof that x = x, and feeds that into a function that maps such triples to a new type.

This notation seems to make it possible to render structure more explicitly; whether or not this is a better way to write things is beside the point: the ƛ operator at least makes it possible.
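For contrast, here is how (4) and (5) must be written in a modern proof assistant such as Lean, where the named-variable style is the only option. This is a minimal sketch; the identifiers are illustrative, and Lean's built-in = plays the role of \(=_A\):

    -- (4) and (5) with explicit names; there is no analogue of ƛ, so every
    -- arbitrary element must be introduced as a named variable
    variable {A : Type}
    variable (C : (x y : A) → x = y → Type)   -- (4)
    variable (c : (x : A) → C x x rfl)        -- (5)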

(Of course, the symbol is arbitrary but motivated - it’s a reverse lambda. We may yet find a better symbol; e.g. an upside-down lambda or the like.)

Caveat: You cannot just replace named elements with ƛ expressions willy-nilly, since the latter pick out arbitrary elements. Where a name occurs multiple times in a context it must mean the same thing each time; e.g. in the formula for \(ind_{=_A}\) on p. 49 (reproduced below).



Definitions: Implicit vs. Explicit

A specialization like a : A says that a is an arbitrary token of type A. But arbitrary does not mean indefinite; if we adopt the classical perspective and treat a : A as meaning that a denotes an arbitrary element of A, we have not said that a is undefined. On the contrary, if it is to denote at all, it must denote something definite. So we might say that a denotes a definite but unknown element of A. Since it is definite, we can say that a is implicitly defined. If we then offer a “defining equation” for a, it becomes explicitly defined.

In other words, typing judgments of the form a : A implicitly define their token component.

The difference is clear in the case of function types:

8) f : N→N
9) f :≡ λx:N.x+1

Expression (8) implicitly defines f by giving its type explicitly and “judging” that f has that type; that is, by assigning f to a specialization of its type. Expression (9) then makes that definition explicit.
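A proof assistant like Lean draws exactly this distinction between implicit and explicit definition; a minimal sketch (names illustrative):

    -- (8) implicit: only the type is given; f is a definite but unknown
    -- element of Nat → Nat
    axiom f : Nat → Nat

    -- (9) explicit: a defining equation is supplied
    def g : Nat → Nat := fun x => x + 1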

Since the term “definition” thus has two meanings, we will generally use the term “specification” and leave it to the reader to determine from context which kind of definition is involved. For example, we might say “Given the specification a : A”, meaning that a : A specifies that a names an implicitly defined element of A.

An Explicit Operator for Enumerated Types

Enumerated types like 2 do not have an explicit abstraction operator. We propose Λ. This allows us to specify “anonymous” enumerated types, e.g.

Λ{Mon, Tues, Wed, Thu, Fri, Sat, Sun}

denotes a “Day” type without naming it. To name it we can write:

Day :≡ Λ{Mon, Tues, Wed, Thu, Fri, Sat, Sun}
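For comparison, Lean exhibits the deficiency noted above: an enumerated type cannot be written anonymously but must be named at its point of definition, so only the named form of the Λ expression has a counterpart. A sketch:

    -- Lean forces the naming step; this is the analogue of
    -- Day :≡ Λ{Mon, Tues, Wed, Thu, Fri, Sat, Sun}
    inductive Day where
      | Mon | Tues | Wed | Thu | Fri | Sat | Sun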

Pipeline Application

10) \(ƛ\Sigma_{(x,y:A)}(x=_A y)\ \Big\rvert\ ƛ\Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\)
meaning, apply an element of the Π type to an element of the Σ type, or feed the latter into the former. In this case we can safely omit the specialization operators since they can be inferred from the context:
11) \(\Sigma_{(x,y:A)}(x=_A y)\ \Big\rvert\ \Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\)
Using names, these two expressions are equivalent to the pair of expressions:
12) \(C:\Pi_{(x,y:A)}(x=_A y)\to\mathcal{U}\)
13) \(C(a,b,p)\) where \(a,b:A\) and \(p:(a=_A b)\)

Universe Operator

For lack of a better term.  The standard way to express a family is B : A → U. This is fine if B and A are relatively simple; B(a) is a type. But if B and/or A are complex this can be awkward. Sometimes we may want a name to refer to members of a type family, if only for expository purposes; e.g. B(a) :≡ P. The proposal here is that we introduce a new operator that allows us to name an element of U: PU. Read this as “P is an element drawn from U”.

This allows us to say e.g.  B : A → PU

This is to be interpreted as “B takes each element of A to an element of U, which we name P”; it does not mean “B takes each element of A to an element of P”.
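In Lean terms a family B : A → U is literally a function whose codomain is a universe, so each value of the function is itself a type; a minimal sketch with illustrative names:

    -- B sends each element of Nat to an element of the universe Type
    def B : Nat → Type
      | 0     => Unit   -- B 0 is the one-element type
      | _ + 1 => Bool   -- B (n+1) is the booleans

    #check B 3   -- B 3 : Type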

Monday, June 8, 2015

Curry-Howard: It's Not Just for Propositions!

1) \((a=_A b)\)   the proposition “\(a=b\)”
2) \((a=_A b)\to\mathcal{P}\)   the proposition “if \(a=b\) then \(\mathcal{P}\)”
3) \(\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\)   the proposition “\(\forall x,y\), if \(x=y\) then \(\mathcal{P}\)”
4) \(C:\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\)   C proves proposition 3
5) \(C(a,b)\)   the proposition “if \(a=b\) then \(\mathcal{P}\)”
6) \(b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\)   same as proposition 5
7) \(p:(a=_A b)\)   p proves \(a=_A b\)
8) \(p|(a=_A b)\to\mathcal{P}\)   the proposition \(\mathcal{P}\)
9) \(C(a,b)(p)\)   the proposition \(\mathcal{P}\)
10) \(p,b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\)   the proposition \(\mathcal{P}\)

Note: 8 and 10 use an unorthodox notation that I'm calling the "tokenization operator".  The idea is that for any type A, "|" picks out an element of the type, and we then use prefix notation to indicate application. (We might also split out a tokenization operator, e.g. \(\Lambda\), and define a "pipeline" application operator.)  This allows us to dispense with (some) intermediate variables, just as the lambda operator allows us to dispense with function names.  In 8, \(|(a=b)\to\mathcal{P}\) picks out an element of the function type, and prefixing \(p\) to it indicates application of that element to p.  So 8 is a (nearly) variable-free version of:

    \(C: (a=b)\to\mathcal{P}\); \(C(p)\)

Note that 10 could be written more explicitly:

\[a,b:A, \ p:(a=b),\  p|b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\]

which should be read "outward": feed a then b then a proof of a=b to the \(\prod\) function.

The glosses for 8-10 here ignore information in the formulae; the gloss only applies to the "result" of the type formation operation.  They can also be glossed intensionally as instances of modus ponens
(\(A\to B\); but \(A\), therefore \(B\)):

    If a=b is proven, then P; but p proves a=b, therefore P.
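Under Curry-Howard that gloss is literal: modus ponens is just function application. A minimal Lean sketch:

    -- from a proof f of A → B and a proof a of A, infer B
    theorem mp {A B : Prop} (f : A → B) (a : A) : B := f a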

Here I think the advantage of having a tokenization operator like "|" is evident; it makes it easier to read the intensional sense of 8 and 10 directly off the formula.  Well, not entirely; we still use vars a, b, and p.   They could be eliminated using the tokenization operator with a \(\sum\) type, but that's a topic for another post.

What this is intended to show is that Curry-Howard is not limited to propositions.  Types may encode terms, propositions, or inferences.   Those that express inferences may also be construed as expressing propositions (namely, the concluding proposition of the inference); those that express propositions may be construed as expressing terms.  The latter corresponds to the fact that embedded propositions function as terms: in "if P then Q", both P and Q are propositional variables but they act as terms within the conditional.


Tuesday, June 2, 2015

HoTT Sphere


If you want to know where spheres come from in HoTT, check out Andrej Bauer's terrific animation.

Saturday, May 30, 2015

Equality diagram

Here's an attempt at a diagram illustrating some of the conceptual structure of equality types.  Latex source is at https://github.com/mobileink/graphics/blob/master/pgf/h-equality.tex.

``In the homotopy interpretation, ... the type of triples (x, y, p), where x and y are the endpoints of the path p (in other words, the Σ-type \(\sum_{(x,y:A)}(x = y)\)), is inductively generated by the constant loops at each point x. As we will see in Chapter 2, in homotopy theory the space corresponding to \(\sum_{(x,y:A)}(x = y)\) is the free path space — the space of paths in A whose endpoints may vary — and it is in fact the case that any point of this space is homotopic to the constant loop at some point, since we can simply retract one of its endpoints along the given path.'' (HoTT p. 51)

``[T]he space of paths starting at some chosen point (the based path space at that point, which type-theoretically is \(\sum_{(y:A)}(a = y)\)) is contractible to the constant loop on the chosen point.'' (HoTT p. 51)

``In set theory, the proposition a = a is entirely uninteresting, but in homotopy theory, paths from a point to itself are called loops and carry lots of interesting higher structure. Thus, given a type A with a point a : A, we define its loop space Ω(A, a) to be the type \(a =_A a\).'' (HoTT p. 68)

This diagram is intended to convey the idea that point a freely generates its path space, hence \(b_1\) and \(b_2\); that there are multiple "ways of being identical"; the multiplicity of loops at a; etc.  Suggestions for improvement are welcome.

Wednesday, May 27, 2015

From Path Induction to Path Involution in HoTT

There's something distinctly funky about the HoTT book's use of the term "induction".  As far as I can tell, it uses the term indifferently both for what anybody can recognize as genuine mathematical induction (e.g. over the naturals) and for reasoning that does not involve an inductive step.  I cannot see why something that does not involve induction ought to be called, you know, induction.  I brought this up in an Issue ticket for the HoTT book and was rewarded with a sneer from one of the World's Greatest Mathematicians, so you've been warned: what follows may be grossly silly.  (On the other hand, I don't care how smart you are: if you want to convince me that Monday is really Tuesday, you'd better come up with something better than a sneer.)

This morning I had a little epiphany; I finally realized exactly why the HoTT book's idea of induction feels so wrong to me.  It has to do with the difference between classical and intuitionistic reasoning about infinities.  In this post I'll try to explain what I think is going on.

The general form of "induction principles" in the HoTT book seems to be something like the following: "to prove something for all x, it suffices to prove it for y".  Turned around: "if you've proven something for y, then you have proved it for all x".  If x is Nat, y is the base case and inductive step; for enumerated types, y is the base cases; for equality and coproduct types too, y is the base case.
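For the equality case this is the rule the book calls path induction (the J rule). Here it is in Lean, where the "it suffices to prove it for refl" shape is explicit; a sketch, stated for Prop-valued C for simplicity, with Lean's = standing in for the identity type:

    -- path induction / J: to prove C x y p for every path p : x = y,
    -- it suffices to prove the base case C x x refl
    def pathInd {A : Type} (C : (x y : A) → x = y → Prop)
        (c : ∀ x, C x x rfl) : ∀ (x y : A) (p : x = y), C x y p := by
      intro x y p
      cases p
      exact c x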

The problem is that, as the HoTT book emphasizes, in the case of equality types there may be many proofs of equality distinct from the base case (refl) and from each other.  In fact infinitely many.  Now we can prove an equality for the base case by construction - refl(a) is by definition a proof of a = a.  But HoTT says that proving this base case suffices to prove all the other cases, no inductive hypothesis needed.  How can this be?  As I understand it, HoTT says that this works because the base case "freely generates" all the other cases.  But that's a non-sequitur.  Mathematical induction is a proof technique that depends on a particular conception of a non-terminating generative process; free generation is a mathematical concept that has nothing to do with proof, as far as I can tell.  In free generation the base case freely generates all the other cases; it does not follow that a proof of the base case counts as a proof of all the other cases - at least, not for intuitionistic reasoning.  "Free generation" is just a description of how some things are related to some other things, not a proof technique.  Plus it's not constructive.  What gets generated, unlike inductively constructed elements, is not compositional; it does not build on the base case.  Free "generation" is actually a misnomer; it refers to various properties, not to generation.  The operative term in "freely generated" is "freely", not "generated".  But "free" is defined in terms of the result of the generation process (what is generated) rather than in terms of the process of generation itself.  What's going on here?

I think the problem is that HoTT uses classical logic for the semantic domain, but type theory is intuitionistic.  And under the intuitionistic perspective, a phrase like "proof of P for all x", where there are infinitely many xs, is absurd.  The phrase "all x" is incoherent unless there are only finitely many xs, since there is no "all" to infinity.  You can't prove stuff for an infinite totality because there is no such thing; the best you can do is provide a method for finding a proof for any element of the totality.  (See the Dummett quotes below.)

What about mathematical induction over Nat?  We can treat the base case and the inductive step intuitionistically, the latter being a method of "going on in the same way".  But the conclusion - "therefore P(n) for all n" - should be replaced with "therefore for any n we can show (find) P(n)".  Proving the base case and the inductive step does not mean we have given a proof for "all n"; it means we've given a proof method that works for any n: starting at the base case, we can run through the initial segment that terminates at that n.
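For contrast, here is ordinary induction over Nat in the same Lean style (a sketch): the step is precisely the method of "going on in the same way", and unwinding the recursion for a given n runs through the initial segment 0, 1, ..., n:

    -- induction on Nat: a base case plus an inductive step yield P n for any n
    def natInd (P : Nat → Prop) (base : P 0)
        (step : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
      intro n
      induction n with
      | zero => exact base
      | succ n ih => exact step n ih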

When it comes to HoTT's "principles of induction" for coproducts, equalities, and the like, HoTT seems to rely on the classical logic of homotopy theory.  The formula goes something like this: "if we've proven P for the constant path at a (base case), then we have proved it for every path (element of) a=b".  The problem is that there are elements other than the base case, but there is no "initial segment" that links the others to the base case; there is no sequence.  So we are given no way to prove P for any element other than the base case.  There is no way to "keep going in the same way" in order to reach an arbitrary element.  There is no way that I can see to construe such "induction principles" intuitionistically.  They are not genuinely inductive.

The homotopic strategy seems to be to present the non-canonical tokens of, say, an equality type, as (non-canonical?) paths "freely generated" by the base case (refl).  Once we have them, their endpoints count as "equal", since there is a path between the two.  And under the classical perspective of the semantic domain (homotopy theory), we always already have them.  But if we want an intuitionistic version of this idea suitable for constructive type theory, we need a means of actually constructing them in a way that allows us to "reach" them.  We need a way of thinking of "free generation" as a non-terminating process that constructs these paths, one at a time, giving an initial segment for each element, which allows us to reach each in a constructive proof.  I don't really see a way of doing this at the moment, but then again people like Beth, Bishop and Dummett have developed lots of techniques for reconstructing bits of classical mathematics intuitionistically, so maybe there is already a technique out there for handling this.

But I think there is another possibility that will allow us to prove things for the sorts of infinities involved in equalities, coproducts, and the like without relying on a concept of induction.  The trick is to think of two kinds of infinitary process, the inductive and the involutional.  An infinite inductive process always "moves forward", so to speak.  Each step in the process spits out a unique token.  For Nat, we get 0, 1, ..., n.  An infinite involutive process goes nowhere; it just spins in place, turning in on itself (hence "involution").  Each cycle in the process spits out a unique token-repeatable.  So an involution that produces 'a' generates a sequence: a, a, ..., a.  Each a is a distinct particular, so they are not identical; but they are all 'a' tokens (hence "token-repeatables"), so they are interchangeable.  And because of that, proving P(a) for a particular such 'a' is as good as proving it for any such 'a'; there is no need to run through an initial segment.

In terms of paths, the involutional process generates distinct 'a' tokens, so we should treat witnesses of e.g. a=b as paths from the canonical 'a' token to other 'a' tokens - which we call 'b', since there is really no way that a writing system can present this idea unambiguously using only 'a' tokens.  We could also always write a=a so long as we agree that these two 'a' tokens need not be "the same", so to speak - in other words, x:a=a is not necessarily a reflexivity proof - it could also represent a path to a different 'a' token.

The key concept here is "token-repeatable", rather than "free generation".  The notion of an involutive process generating a sequence of (equal) token-repeatables gives us an intuitionistically respectable interpretation of the concept of free generation.  The process is non-terminating (hence infinitary) and sequential, creating initial segments; but the elements of those initial segments are token-repeatables, which makes them interchangeable, so there is no need to "run through" an initial segment to reach a particular such token.  You can stop after the first one.

To cut a long story short: what the HoTT book calls "path induction" is not induction at all; it would be better to call it "path involution".

More details to follow in a separate post.

Dummett, Elements of Intuitionism


p. 4:

"The fact that quantification over an infinite totality shares so much in common with quantification over a finite one tempts us to overlook the crucial difference from the case in which we are quantifying over a finite totality which we can survey, namely that we do not have, even in principle, a method of determining the truth-value of a quantified statement by carrying out a complete inspection of the elements of the domain, and checking, for each one, whether the predicate applies.

p. 40-1:

"[T]he thesis that there is no completed infinity means, simply, that to grasp an infinite structure is to grasp the process that generates it, that to refer to such a structure is to refer to that process, and that to recognize a structure as being infinite is to recognize that the process will not terminate.  In the case of a process that does terminate, we may legitimately distinguish between the process itself and its completed output: we may be presented with the structure that is generated, without knowing anything about the process of generation.  But, in the case of an infinite structure, no such distinction is permissible: all that we can, at any given time, know of the output of the process of generation is some finite initial segment of the structure being generated.  There is no sense in which we can have any conception of this structure as a whole save by knowing the process of generation."

p. 41:

"It is, however, integral to classical mathematics to treat infinite structures as if they could be completed and then surveyed in their totality, in other words, as if we could be presented with the entire output of an infinite process.  Given his assumption that the application of a well-defined predicate to each element of the totality has a determinate value, true or false, the classical mathematician concludes that its universal closure has an equally determinate value, formed by taking the product of the values of its instances, and that the existential closure likewise has a determinate value, formed by taking the sum of the values of its instances.  On such a conception, the truth-value of a quantified statement is the final outcome of a process which involves running through the values of all its instances; the assumption that its truth-value is well-defined and determinate is precisely the assumption that we may regard an infinite process of this kind as capable of completion... On an intuitionisitic view, neither the truth-value of a statement nor any other mathematical entity can be given as the final result of an infinite process, since an infinite process is precisely one that does not have a final result: that is why, when the domain of quantification is infinite, an existentially quantified statement cannot be regarded in advance as determinately either true or false, and a universally quantified one cannot be thought of as being true accidentally, that is independently of there being a proof of it, a proof which must depend intrinsically upon our grasp  of the process whereby the domain is generated."

Tuesday, May 19, 2015

Constructors and Co-constructors in HoTT

The HoTT book's treatment of constructors, eliminators, etc. deviates somewhat from the treatment commonly found in texts on constructive logic.  Of eliminators, it says that they tell us how to use elements of a type.  A computation rule "expresses how an eliminator acts on a constructor."  In each case the book presents these things as functions.

The contrast is with the way intro and elim rules are usually presented in logic, as inference rules.  For example, &-intro licenses an inference from A, B to A&B.  This corresponds cleanly to the HoTT pair constructor.  But &-intro is usually accompanied by two &-elim rules, call them &-elim1 and &-elim2, which are also inference rules.

        &-elim1:  A&B |- A
        &-elim2:  A&B |- B

This could also be expressed more HoTTly:

        elim1:  (a,b):AxB |- a:A
        elim2:  (a,b):AxB |- b:B

The HoTT book does not propose eliminators like this.  Instead it defines projection functions pr1 and pr2, which are like elim rules but are not basic to the type definition.  It offers an "elimination rule for products" that answers the question "How can we use pairs, i.e. how can we define functions out of a product type?"  The technique is to define a function on pairs f:AxB->C in terms of a curried function g:A->B->C.
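That technique is easy to render in Lean; a minimal sketch (identifiers illustrative):

    -- the book's elimination rule for products: to define f : A × B → C,
    -- supply a curried g : A → B → C and apply it to the components of the pair
    def recProd {A B C : Type} (g : A → B → C) : A × B → C :=
      fun p => g p.1 p.2

    -- the projections then arise as instances of the recursor
    def pr1 {A B : Type} : A × B → A := recProd fun a _ => a
    def pr2 {A B : Type} : A × B → B := recProd fun _ b => b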

So the contrast in this case is fairly clear: the logical tradition treats an eliminator as an inference that "undoes" a constructor; HoTT treats an eliminator as a function that "uses" a construction.  In the case of &/x (i.e. product types), that means HoTT must introduce derived functions (projectors) in order to "undo" the constructor.

Note that this makes HoTT's terminology a bit abusive; after all, a function on products does not necessarily eliminate anything: just look at the identity function.

Also note that in both cases the relation between constructor and eliminator is asymmetric; constructors are "more primitive" than eliminators.  This follows Gentzen's original treatment of intro and elim rules: intro rules are viewed as defining (in a sense) the logical operators, and elim rules are derived (in a sense) from intro rules.

The HoTT book's strategy seems to follow from the decision to treat these things as functions.  The projection functions cannot be primitive (to the product type) since we do not yet (in the middle of defining product types, so to speak) know how to apply a function to a product type.  So you have to start by defining how any such function f:AxB->C works, and then use that result to define the specific projection functions.

The proposal here is to reconceptualize constructors and eliminators.  This involves three basic moves: first, foreground the notion of inference rather than functions; second, do not treat constructors and eliminators as functions; and third, treat eliminators as co-constructors rather than as undoers or users of constructors.

Foregrounding inference:  read "a:A" as "from token a infer type A" (there's a lot to be said about this approach, but I'll leave that for another post) and treat the rules defining types like AxB as inference rules.  Not traditional inference rules that go from true premises to true conclusions, but inferency inference rules that go from good inferences to good inferences.  For example, read the pair constructor (intro rule) as something like "if the inference from a to A is good and the inference from b to B is good, then the inference from (a,b) to AxB is good".

Constructors and co-constructors: the idea is pretty simple and easily illustrated for the case of product types:

        ()-ctor:   \( a:A,\  b:B  \vdash  (a,b):A\times B\)
        pr1-co-ctor:   \(x:A\times B \vdash  pr1(x):A\)
        pr2-co-ctor:   \(x:A\times B \vdash  pr2(x):B\)

In the case of ctors, the rule is not to be construed in functional terms: it is not a function that produces a "result" of type AxB, given inputs of types A and B.  It does not "produce" anything at all; it just licenses the inference from (a,b) to AxB, given inferences from a to A and from b to B.  Strictly speaking we should not even think of this as constructing anything either, except perhaps the syntactic form (a,b).

In the case of co-constructors, the idea is that they, like constructors, license inferences.  As in the case of so-called constructors, we're not really constructing anything; \(pr1(x)\) has type \(A\) if \(x\) has type \(A\times B\).  That is all.  But the construction metaphor is convenient nonetheless: co-constructors may be viewed as "devices" that construct something, just like constructors.

Notice that we could write these using a function type notation, but here (as in HoTT) there is no function definition for constructors and co-constructors.

A critical contrast with the HoTT (and the traditional logical) approach is that co-constructors need not operate on constructors.  Constructors and co-constructors are perfectly symmetrical; nothing in the concept of types and tokens warrants (a priori) the primacy of constructors over co-constructors.  In other words it is not the case that one must first construct an element in order to have something that an eliminator can operate on.

Also note that with this approach the HoTT-style account of how functions on product types may be defined still works, but it is not to be construed as an elimination rule on products.

It follows from the definitions that \(x:A\times B \vdash  (pr1(x), pr2(x)) : A\times B\).  This does not mean that every \(x:A\times B\) is a pair, but it does mean that a pair can be inferred from every such \(x\).  Which dovetails with the HoTT book's strategy: to define a function on a product type it suffices to define it for all pairs.

What about the uniqueness principle uppt?  In the HoTT book it depends on the induction principle for pairs.  But, as just noted, the inference it needs - from \(x\) to the pair \((pr1(x), pr2(x))\) - follows directly from the constructor and co-constructors.
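In Lean the uniqueness principle is a one-step consequence of just that inference; a minimal sketch:

    -- uppt: every element of a product equals the pair of its projections
    theorem uppt {A B : Type} (x : A × B) : x = (x.1, x.2) := by
      cases x
      rfl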