| № | Formula | Gloss |
|---|---------|-------|
| 1 | \((a =_A b)\) | \(a=b\) |
| 2 | \((a =_A b)\to\mathcal{P}\) | if \(a=b\) then \(\mathcal{P}\) is a proposition |
| 3 | \(\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) | \(\forall x,y\), if \(x=y\) then \(\mathcal{P}\) is a proposition |
| 4 | \(C:\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) | \(C\) proves proposition 3 |
| 5 | \(C(a,b)\) | if \(a=b\) then \(\mathcal{P}\) is a proposition |
| 6 | \(b,a \,\vert\, \prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) | same as proposition 5 |
| 7 | \(p:(a=_A b)\) | \(p\) proves \(a=_A b\) |
| 8 | \(p \,\vert\, (a=_A b)\to\mathcal{P}\) | \(\mathcal{P}\) is a proposition |
| 9 | \(C(a,b)(p)\) | \(\mathcal{P}\) is a proposition |
| 10 | \(p,b,a \,\vert\, \prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) | \(\mathcal{P}\) is a proposition |
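The typing judgments in these rows can be sketched in a proof assistant. Here is a rough Lean 4 rendering: the names `A`, `P`, `a`, `b`, `C`, `p` follow the table, but the encoding itself is my own assumption, not something the table prescribes.

```lean
-- Context shared by the rows: a type A, a proposition P, and terms a b : A.
variable (A : Type) (P : Prop) (a b : A)

-- Rows 3–4: C proves "for all x, y, if x = y then P".
variable (C : ∀ x y : A, x = y → P)

-- Row 5: C applied to a and b is a proof of (a = b) → P.
example : a = b → P := C a b

-- Rows 7 and 9: given p : a = b, C(a,b)(p) proves P.
example (p : a = b) : P := C a b p
```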

Note: 8 and 10 use an unorthodox notation that I'm calling the "tokenization operator". The idea is that for any type \(A\), "|" picks out an element of the type, and we then use prefix notation to indicate application. (We might also split out the tokenization operator proper, e.g. \(\Lambda\), and define a separate "pipeline" application operator.) This allows us to dispense with (some) intermediate variables, just as the lambda operator allows us to dispense with function names. In 8, \(\vert(a=b)\to\mathcal{P}\) picks out an element of the function type, and prefixing \(p\) indicates application of that element to \(p\). So 8 is a (nearly) variable-free version of:

\(C: (a=b)\to\mathcal{P}\); \(C(p)\)
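Lean 4's built-in pipeline operator `|>` (where `x |> f` means `f x`) gives a loose analogue of this prefix application, though Lean still requires the function to be named or written as a term rather than picked out anonymously from its type — so this is only a hedged approximation of 8:

```lean
variable (A : Type) (P : Prop) (a b : A)

-- Row 8 with C named: applying the element of (a = b) → P to p,
-- written pipeline-style. The analogy to "p | (a=b) → P" is loose.
example (C : a = b → P) (p : a = b) : P := p |> C
```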

Note that 10 could be written more explicitly:

\[a,b:A, \ p:(a=b),\ p|b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\]

which should be read "outward": feed \(a\), then \(b\), then the proof \(p\) of \(a=b\), to the \(\prod\) function.
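Under the same loose Lean 4 analogy (names assumed as before), nested pipelines make this "outward" reading explicit, with \(a\), \(b\), and \(p\) fed in from the inside out in exactly the order the formula lists them:

```lean
variable (A : Type) (P : Prop) (a b : A)
variable (C : ∀ x y : A, x = y → P)

-- p | b, a | ∏ ...  read outward: feed a, then b, then p.
example (p : a = b) : P := p |> (b |> (a |> C))
```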

The glosses for 8–10 here ignore some information in the formulae; each gloss applies only to the "result" of the type-formation operation. The formulae can also be glossed intensionally, as instances of modus ponens (\(A\to B\); but \(A\), therefore \(B\)):

If \(a=b\) is proven, then \(\mathcal{P}\); but \(p\) proves \(a=b\), therefore \(\mathcal{P}\).
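As a sanity check on this reading, modus ponens is literally function application under Curry-Howard; a minimal Lean 4 sketch:

```lean
-- A → B; but A, therefore B: modus ponens is application.
example {A B : Prop} (f : A → B) (ha : A) : B := f ha

-- The intensional reading of 8: C : (a = b) → P applied to p : a = b yields P.
example {X : Type} {P : Prop} {a b : X} (C : a = b → P) (p : a = b) : P := C p
```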

Here I think the advantage of having a tokenization operator like "|" is evident: it makes it easier to read the intensional sense of 8 and 10 directly off the formula. Well, not entirely; we still use the variables \(a\), \(b\), and \(p\). They could be eliminated by using the tokenization operator with a \(\sum\) type, but that's a topic for another post.

What this is intended to show is that Curry-Howard is not limited to propositions. Types may encode terms, propositions, or inferences. Those that express inferences may also be construed as expressing propositions (namely, the concluding proposition of the inference); those that express propositions may be construed as expressing terms. The latter corresponds to the fact that embedded propositions function as terms: in "if P then Q", both P and Q are propositional variables but they act as terms within the conditional.
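The point that embedded propositions function as terms can itself be typed: in Lean 4, propositions inhabit the type `Prop`, so \(P\) and \(Q\) are terms to which the conditional type former is applied (a small sketch under that assumption):

```lean
-- P and Q are terms of type Prop; "if P then Q" is formed by applying → to them.
example (P Q : Prop) : Prop := P → Q
```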

