+\epigraph{\emph{Half of my time spent doing research involves thinking up clever
+ schemes to avoid needing functional extensionality.}}{@larrytheliquid}
+
+However, propositional equality as described is quite restrictive: it
+only lets us reason about equality up to the term structure, which is
+what definitional equality gives us (the extensions notwithstanding).
+
+The problem is best exemplified by \emph{function extensionality}. In
+mathematics, we would expect to be able to treat functions that give equal
+output for equal input as the same. When reasoning in a mechanised framework
+we ought to be able to do the same: in the end, without considering the
+operational behaviour, all functions equal extensionally are going to be
+replaceable with one another.
+
+However, this is not the case: with the tools presented so far, we have
+no term of type
+\[
+\myfun{ext} : \myfora{\myb{A}\ \myb{B}}{\mytyp}{\myfora{\myb{f}\ \myb{g}}{
+ \myb{A} \myarr \myb{B}}{
+ (\myfora{\myb{x}}{\myb{A}}{\myapp{\myb{f}}{\myb{x}} \mypeq{\myb{B}} \myapp{\myb{g}}{\myb{x}}}) \myarr
+ \myb{f} \mypeq{\myb{A} \myarr \myb{B}} \myb{g}
+ }
+}
+\]
+To see why this is the case, consider the functions
+\[\myabs{\myb{x}}{0 \mathrel{\myfun{$+$}} \myb{x}} \quad \text{and} \quad \myabs{\myb{x}}{\myb{x} \mathrel{\myfun{$+$}} 0}\]
+where $\myfun{$+$}$ is defined by recursion on the first argument,
+gradually destructing it to build up successors of the second argument.
+The two functions are clearly extensionally equal, and we can in fact
+prove that
+\[
+\myfora{\myb{x}}{\mynat}{(0 \mathrel{\myfun{$+$}} \myb{x}) \mypeq{\mynat} (\myb{x} \mathrel{\myfun{$+$}} 0)}
+\]
+by induction on $\myb{x}$.  However, the two functions are not
+definitionally equal, and thus we won't be able to get rid of the
+quantification.
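+
+To make the asymmetry tangible, here is the same definition of
+$\myfun{$+$}$ transcribed into Haskell (a sketch; the names are ours):
+\begin{Verbatim}
+data Nat = Zero | Suc Nat
+
+-- Recursion on the first argument: `plus Zero x' unfolds to `x'
+-- by computation alone, while `plus x Zero' is stuck on the
+-- variable `x', and is equal to `x' only by induction.
+plus :: Nat -> Nat -> Nat
+plus Zero    m = m
+plus (Suc n) m = Suc (plus n m)
+\end{Verbatim}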
+
+For the reasons above, theories that offer a propositional equality
+similar to what we presented are called \emph{intensional}, as opposed
+to \emph{extensional}. Most systems in wide use today (such as Agda,
+Coq, and Epigram) are of this kind.
+
+This is quite an annoyance that often makes reasoning awkward.  The
+problem also extends to other settings, for example proving
+bisimulation between processes specified by coinduction, or in general
+proving equivalences based on the behaviour of a term.
+
+\subsection{Equality reflection}
+
+One way to `solve' this problem is by identifying propositional equality with
+definitional equality:
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \AxiomC{$\myjud{\myse{q}}{\mytmm \mypeq{\mytya} \mytmn}$}
+ \UnaryInfC{$\myjud{\mytmm \mydefeq \mytmn}{\mytya}$}
+ \DisplayProof
+}
+
+This rule takes the name of \emph{equality reflection}, and is a very
+different rule from the ones we saw up to now: it links a typing judgement
+internal to the type theory to a meta-theoretic judgement that the type
+checker uses to work with terms.  It is easy to see the dangerous
+consequences that this rule entails:
+\begin{itemize}
+\item The rule is not syntax directed, and the type checker is
+  presumably expected to come up with equality proofs when needed.
+\item More worryingly, type checking becomes undecidable also because
+  computing under false assumptions becomes unsafe: under a false
+  assumption we can derive any equality proof, and then use equality
+  reflection and the conversion rule to give terms any type.
+\end{itemize}
+
+Given these facts, theories employing equality reflection, like NuPRL
+\citep{NuPRL}, carry around the derivations that gave rise to each
+typing judgement to keep the system manageable.
+
+For all its faults, equality reflection does allow us to prove extensionality,
+using the extensions we gave above. Assuming that $\myctx$ contains
+\[\myb{A}, \myb{B} : \mytyp; \myb{f}, \myb{g} : \myb{A} \myarr \myb{B}; \myb{q} : \myfora{\myb{x}}{\myb{A}}{\myapp{\myb{f}}{\myb{x}} \mypeq{} \myapp{\myb{g}}{\myb{x}}}\]
+we can then derive
+\begin{prooftree}
+ \mysmall
+ \AxiomC{$\hspace{1.1cm}\myjudd{\myctx; \myb{x} : \myb{A}}{\myapp{\myb{q}}{\myb{x}}}{\myapp{\myb{f}}{\myb{x}} \mypeq{} \myapp{\myb{g}}{\myb{x}}}\hspace{1.1cm}$}
+ \RightLabel{equality reflection}
+ \UnaryInfC{$\myjudd{\myctx; \myb{x} : \myb{A}}{\myapp{\myb{f}}{\myb{x}} \mydefeq \myapp{\myb{g}}{\myb{x}}}{\myb{B}}$}
+ \RightLabel{congruence for $\lambda$s}
+ \UnaryInfC{$\myjud{(\myabs{\myb{x}}{\myapp{\myb{f}}{\myb{x}}}) \mydefeq (\myabs{\myb{x}}{\myapp{\myb{g}}{\myb{x}}})}{\myb{A} \myarr \myb{B}}$}
+ \RightLabel{$\eta$-law for $\lambda$}
+ \UnaryInfC{$\hspace{1.45cm}\myjud{\myb{f} \mydefeq \myb{g}}{\myb{A} \myarr \myb{B}}\hspace{1.45cm}$}
+ \RightLabel{$\myrefl$}
+ \UnaryInfC{$\myjud{\myapp{\myrefl}{\myb{f}}}{\myb{f} \mypeq{} \myb{g}}$}
+\end{prooftree}
+
+Now, the question is: do we need to give up well-behavedness of our theory to
+gain extensionality?
+
+\subsection{Some alternatives}
+
+% TODO finish
+% TODO add `extentional axioms' (Hoffman), setoid models (Thorsten)
+
+\section{Observational equality}
+\label{sec:ott}
+
+A recent development by \citet{Altenkirch2007}, \emph{Observational Type
+  Theory} (OTT), promises to keep the well-behavedness of ITT while
+being able to gain many useful equality proofs,\footnote{It is suspected
+  that OTT gains \emph{all} the equality proofs of ETT, but no proof
+  exists yet.} including function extensionality.  The main idea is to
+give the user the possibility to \emph{coerce} (or transport) values
+from a type $\mytya$ to a type $\mytyb$, if the type checker can prove
+structurally that $\mytya$ and $\mytyb$ are equal; and to provide a
+value-level equality based on similar principles.  Here we give an
+exposition which follows closely the original paper.
+
+\subsection{A simpler theory, a propositional fragment}
+
+\mydesc{syntax}{ }{
+ $\mytyp_l$ is replaced by $\mytyp$. \\\ \\
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myprdec{\myprsyn} \mysynsep
+ \myITE{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ \myprsyn & ::= & \mybot \mysynsep \mytop \mysynsep \myprsyn \myand \myprsyn
+ \mysynsep \myprfora{\myb{x}}{\mytmsyn}{\myprsyn}
+ \end{array}
+ $
+}
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \begin{tabular}{cc}
+ \AxiomC{$\myjud{\myse{P}}{\myprop}$}
+ \UnaryInfC{$\myjud{\myprdec{\myse{P}}}{\mytyp}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytmt}{\mybool}$}
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\myjud{\mytyb}{\mytyp}$}
+ \TrinaryInfC{$\myjud{\myITE{\mytmt}{\mytya}{\mytyb}}{\mytyp}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+\mydesc{propositions:}{\myjud{\myprsyn}{\myprop}}{
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\myjud{\myse{P}}{\myprop}$}}
+ \UnaryInfC{$\myjud{\mytop}{\myprop}$}
+ \noLine
+ \UnaryInfC{$\myjud{\mybot}{\myprop}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\myse{P}}{\myprop}$}
+ \AxiomC{$\myjud{\myse{Q}}{\myprop}$}
+ \BinaryInfC{$\myjud{\myse{P} \myand \myse{Q}}{\myprop}$}
+ \noLine
+ \UnaryInfC{\phantom{$\myjud{\mybot}{\myprop}$}}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\myse{A}}{\mytyp}$}
+ \AxiomC{$\myjudd{\myctx; \myb{x} : \mytya}{\myse{P}}{\myprop}$}
+ \BinaryInfC{$\myjud{\myprfora{\myb{x}}{\mytya}{\myse{P}}}{\myprop}$}
+ \noLine
+ \UnaryInfC{\phantom{$\myjud{\mybot}{\myprop}$}}
+ \DisplayProof
+ \end{tabular}
+}
+
+Our foundation will be a type theory like the one of Section
+\ref{sec:itt}, with only one level: $\mytyp_0$.  In this context we will
+drop the subscript and refer to $\mytyp_0$ simply as $\mytyp$.
+Moreover, since the old
+$\myfun{if}\myarg\myfun{then}\myarg\myfun{else}$ could return types
+thanks to the hierarchy (which is now gone), we need to reintroduce an
+ad-hoc conditional for types, where the reduction rule is the obvious
+one.
+
+However, we have an addition: a universe of \emph{propositions},
+$\myprop$. $\myprop$ isolates a fragment of types at large, and
+indeed we can `inject' any $\myprop$ back in $\mytyp$ with $\myprdec{\myarg}$: \\
+\mydesc{proposition decoding:}{\myprdec{\mytmsyn} \myred \mytmsyn}{
+ \begin{tabular}{cc}
+ $
+ \begin{array}{l@{\ }c@{\ }l}
+ \myprdec{\mybot} & \myred & \myempty \\
+ \myprdec{\mytop} & \myred & \myunit
+ \end{array}
+ $
+ &
+ $
+ \begin{array}{r@{ }c@{ }l@{\ }c@{\ }l}
+ \myprdec{&\myse{P} \myand \myse{Q} &} & \myred & \myprdec{\myse{P}} \myprod \myprdec{\myse{Q}} \\
+ \myprdec{&\myprfora{\myb{x}}{\mytya}{\myse{P}} &} & \myred &
+ \myfora{\myb{x}}{\mytya}{\myprdec{\myse{P}}}
+ \end{array}
+ $
+ \end{tabular}
+ } \\
+ Propositions are what we call the types of \emph{proofs}, or types
+ whose inhabitants contain no `data', much like $\myunit$.  The goal of
+ doing this is twofold: to erase all top-level propositions when
+ compiling; and to identify all equivalent propositions as the same, as
+ we will see later.
+
+ Why did we choose what we have in $\myprop$?  Given the above
+ criteria, $\mytop$ obviously fits the bill.  A pair of propositions
+ $\myse{P} \myand \myse{Q}$ still won't get us data.  Finally, if
+ $\myse{P}$ is a proposition and we have
+ $\myprfora{\myb{x}}{\mytya}{\myse{P}}$, the decoding will be a
+ function which returns propositional content.  The only threat is
+ $\mybot$, from which we can derive anything we want: however, if we
+ are consistent there will be nothing of type $\mybot$ at the top
+ level, which is what we care about regarding proof erasure.
+
+\subsection{Equality proofs}
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep
+ \mycoee{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \mysynsep
+ \mycohh{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ \myprsyn & ::= & \cdots \mysynsep \mytmsyn \myeq \mytmsyn \mysynsep
+ \myjm{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn}
+ \end{array}
+ $
+}
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \begin{tabular}{cc}
+ \AxiomC{$\myjud{\myse{P}}{\myprdec{\mytya \myeq \mytyb}}$}
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \BinaryInfC{$\myjud{\mycoee{\mytya}{\mytyb}{\myse{P}}{\mytmt}}{\mytyb}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\myse{P}}{\myprdec{\mytya \myeq \mytyb}}$}
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \BinaryInfC{$\myjud{\mycohh{\mytya}{\mytyb}{\myse{P}}{\mytmt}}{\myprdec{\myjm{\mytmt}{\mytya}{\mycoee{\mytya}{\mytyb}{\myse{P}}{\mytmt}}{\mytyb}}}$}
+ \DisplayProof
+
+ \end{tabular}
+}
+
+\mydesc{propositions:}{\myjud{\myprsyn}{\myprop}}{
+ \begin{tabular}{cc}
+ \AxiomC{$
+ \begin{array}{l}
+ \ \\
+ \myjud{\myse{A}}{\mytyp} \hspace{1cm} \myjud{\myse{B}}{\mytyp}
+ \end{array}
+ $}
+ \UnaryInfC{$\myjud{\mytya \myeq \mytyb}{\myprop}$}
+ \DisplayProof
+ &
+ \AxiomC{$
+ \begin{array}{c}
+ \myjud{\myse{A}}{\mytyp} \hspace{1cm} \myjud{\mytmm}{\myse{A}} \\
+ \myjud{\myse{B}}{\mytyp} \hspace{1cm} \myjud{\mytmn}{\myse{B}}
+ \end{array}
+ $}
+ \UnaryInfC{$\myjud{\myjm{\mytmm}{\myse{A}}{\mytmn}{\myse{B}}}{\myprop}$}
+ \DisplayProof
+
+ \end{tabular}
+}
+
+
+While isolating a propositional universe as presented can be a useful
+exercise on its own, what we are really after is a useful notion of
+equality.  In OTT we want to maintain the notion that things judged to
+be equal are still always replaceable for one another with no
+additional changes.  Note that this is not the same as saying that they
+are definitionally equal, since as we saw extensionally equal functions,
+while satisfying the above requirement, are not definitionally equal.
+
+Towards this goal we introduce two equality constructs in
+$\myprop$---the fact that they are in $\myprop$ indicates that they
+indeed have no computational content. The first construct, $\myarg
+\myeq \myarg$, relates types, the second,
+$\myjm{\myarg}{\myarg}{\myarg}{\myarg}$, relates values. The
+value-level equality is different from our old propositional equality:
+instead of ranging over only one type, we might form equalities between
+values of different types---the usefulness of this construct will be
+clear soon. In the literature this equality is known as `heterogeneous'
+or `John Major', since
+
+\begin{quote}
+ John Major's `classless society' widened people's aspirations to
+ equality, but also the gap between rich and poor. After all, aspiring
+ to be equal to others than oneself is the politics of envy. In much
+ the same way, $\myjm{\myarg}{\myarg}{\myarg}{\myarg}$ forms equations
+ between members of any type, but they
+ cannot be treated as equals (ie substituted) unless they are of the
+ same type. Just as before, each thing is only equal to
+ itself. \citep{McBride1999}.
+\end{quote}
+
+Correspondingly, at the term level, $\myfun{coe}$ (`coerce') lets us
+transport values between equal types; and $\myfun{coh}$ (`coherence')
+guarantees that $\myfun{coe}$ respects the value-level equality, or in
+other words that it really has no computational component: if we
+transport $\mytmm : \mytya$ to $\mytmn : \mytyb$, $\mytmm$ and $\mytmn$
+will still be the same.
+
+Before introducing the core ideas that make OTT work, let us distinguish
+between \emph{canonical} and \emph{neutral} types. Canonical types are
+those arising from the ground types ($\myempty$, $\myunit$, $\mybool$)
+and the three type formers ($\myarr$, $\myprod$, $\mytyc{W}$). Neutral
+types are those formed by
+$\myfun{If}\myarg\myfun{Then}\myarg\myfun{Else}\myarg$.
+Correspondingly, canonical terms are those inhabiting canonical types
+($\mytt$, $\mytrue$, $\myfalse$, $\myabss{\myb{x}}{\mytya}{\mytmt}$,
+...), and neutral terms those formed by eliminators.\footnote{Using the
+ terminology from Section \ref{sec:types}, we'd say that canonical
+ terms are in \emph{weak head normal form}.} In the current system
+(and hopefully in well-behaved systems), all closed terms reduce to a
+canonical term, and all canonical types are inhabited by canonical
+terms.
+
+\subsubsection{Type equality, and coercions}
+
+The plan is to decompose type-level equalities between canonical types
+into decodable propositions containing equalities regarding the
+subterms, and to use $\myfun{coe}$ recursively on the subterms with the
+generated equalities.  This interplay between type equalities and
+$\myfun{coe}$ ensures that invocations of $\myfun{coe}$ will vanish when
+we have evidence of the structural equality of the types we are
+transporting terms across.  If the type is neutral, the equality won't
+reduce and thus $\myfun{coe}$ won't reduce either.  If we come across
+an equality between different canonical types, then we reduce the
+equality to bottom, making sure that no such proof can exist, and
+providing an `escape hatch' in $\myfun{coe}$.
+
+\begin{figure}[t]
+
+\mydesc{equality reduction:}{\myprsyn \myred \myprsyn}{
+ $
+ \begin{array}{c@{\ }c@{\ }c@{\ }l}
+ \myempty & \myeq & \myempty & \myred \mytop \\
+ \myunit & \myeq & \myunit & \myred \mytop \\
+ \mybool & \myeq & \mybool & \myred \mytop \\
+    \myexi{\myb{x_1}}{\mytya_1}{\mytyb_1} & \myeq & \myexi{\myb{x_2}}{\mytya_2}{\mytyb_2} & \myred \\
+ \multicolumn{4}{l}{
+ \myind{2} \mytya_1 \myeq \mytyb_1 \myand
+ \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{\myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2}} \myimpl \mytyb_1[\myb{x_1}] \myeq \mytyb_2[\myb{x_2}]}
+ } \\
+ \myfora{\myb{x_1}}{\mytya_1}{\mytyb_1} & \myeq & \myfora{\myb{x_2}}{\mytya_2}{\mytyb_2} & \myred \cdots \\
+ \myw{\myb{x_1}}{\mytya_1}{\mytyb_1} & \myeq & \myw{\myb{x_2}}{\mytya_2}{\mytyb_2} & \myred \cdots \\
+    \mytya & \myeq & \mytyb & \myred \mybot\ \text{if $\mytya$ and $\mytyb$ are different canonical types.}
+ \end{array}
+ $
+}
+\myderivsp
+\mydesc{reduction}{\mytmsyn \myred \mytmsyn}{
+ $
+ \begin{array}[t]{@{}l@{\ }l@{\ }l@{\ }l@{\ }l@{\ }c@{\ }l@{\ }}
+ \mycoe & \myempty & \myempty & \myse{Q} & \myse{t} & \myred & \myse{t} \\
+ \mycoe & \myunit & \myunit & \myse{Q} & \myse{t} & \myred & \mytt \\
+ \mycoe & \mybool & \mybool & \myse{Q} & \mytrue & \myred & \mytrue \\
+ \mycoe & \mybool & \mybool & \myse{Q} & \myfalse & \myred & \myfalse \\
+ \mycoe & (\myexi{\myb{x_1}}{\mytya_1}{\mytyb_1}) &
+ (\myexi{\myb{x_2}}{\mytya_2}{\mytyb_2}) & \myse{Q} &
+ \mytmt_1 & \myred & \\
+ \multicolumn{7}{l}{
+ \myind{2}\begin{array}[t]{l@{\ }l@{\ }c@{\ }l}
+ \mysyn{let} & \myb{\mytmm_1} & \mapsto & \myapp{\myfst}{\mytmt_1} : \mytya_1 \\
+ & \myb{\mytmn_1} & \mapsto & \myapp{\mysnd}{\mytmt_1} : \mysub{\mytyb_1}{\myb{x_1}}{\myb{\mytmm_1}} \\
+ & \myb{Q_A} & \mapsto & \myapp{\myfst}{\myse{Q}} : \mytya_1 \myeq \mytya_2 \\
+ & \myb{\mytmm_2} & \mapsto & \mycoee{\mytya_1}{\mytya_2}{\myb{Q_A}}{\myb{\mytmm_1}} : \mytya_2 \\
+ & \myb{Q_B} & \mapsto & (\myapp{\mysnd}{\myse{Q}}) \myappsp \myb{\mytmm_1} \myappsp \myb{\mytmm_2} \myappsp (\mycohh{\mytya_1}{\mytya_2}{\myb{Q_A}}{\myb{\mytmm_1}}) : \myprdec{\mysub{\mytyb_1}{\myb{x_1}}{\myb{\mytmm_1}} \myeq \mysub{\mytyb_2}{\myb{x_2}}{\myb{\mytmm_2}}} \\
+ & \myb{\mytmn_2} & \mapsto & \mycoee{\mysub{\mytyb_1}{\myb{x_1}}{\myb{\mytmm_1}}}{\mysub{\mytyb_2}{\myb{x_2}}{\myb{\mytmm_2}}}{\myb{Q_B}}{\myb{\mytmn_1}} : \mysub{\mytyb_2}{\myb{x_2}}{\myb{\mytmm_2}} \\
+ \mysyn{in} & \multicolumn{3}{@{}l}{\mypair{\myb{\mytmm_2}}{\myb{\mytmn_2}}}
+ \end{array}} \\
+
+ \mycoe & (\myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}) &
+ (\myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}) & \myse{Q} &
+ \mytmt & \myred &
+ \cdots \\
+
+ \mycoe & (\myw{\myb{x_1}}{\mytya_1}{\mytyb_1}) &
+ (\myw{\myb{x_2}}{\mytya_2}{\mytyb_2}) & \myse{Q} &
+ \mytmt & \myred &
+ \cdots \\
+
+    \mycoe & \mytya & \mytyb & \myse{Q} & \mytmt & \myred & \myapp{\myabsurd{\mytyb}}{\myse{Q}}\ \text{if $\mytya$ and $\mytyb$ are different canonical types.}
+ \end{array}
+ $
+}
+\caption{Reducing type equalities, and using them when
+ $\myfun{coe}$rcing.}
+\label{fig:eqred}
+\end{figure}
+
+Figure \ref{fig:eqred} illustrates this idea in practice. For ground
+types, the proof is the trivial element, and \myfun{coe} is the
+identity. For $\myunit$, we can do better: we return its only member
+without matching on the term. For the three type binders, things are
+similar but subtly different---the choices we make in the type equality
+are dictated by the desire to write $\myfun{coe}$ in a natural
+way.
+
+$\myprod$ is the easiest case: we decompose the proof into a proof that
+the types of the first elements are equal ($\mytya_1 \myeq \mytya_2$),
+and a proof that given equal values for the first elements, the types
+of the second elements are equal too
+($\myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{\myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2}}
+  \myimpl \mytyb_1[\myb{x_1}] \myeq \mytyb_2[\myb{x_2}]}$).\footnote{We are using $\myimpl$ to
+  indicate a $\forall$ where we discard the first value.  We write
+  $\mytyb_1[\myb{x_1}]$ to indicate that the $\myb{x_1}$ in $\mytyb_1$
+  is re-bound to the $\myb{x_1}$ quantified by the $\forall$, and
+  similarly for $\myb{x_2}$ and $\mytyb_2$.}  This also explains the
+need for heterogeneous equality: in the second proof we relate
+$\myb{x_1} : \mytya_1$ and $\myb{x_2} : \mytya_2$, whose types are not
+syntactically the same.
+In the respective $\myfun{coe}$ case, since the types are canonical, we
+know at this point that the proof of equality is a pair of the shape
+described above. Thus, we can immediately coerce the first element of
+the pair using the first element of the proof, and then instantiate the
+second element with the two first elements and a proof by coherence of
+their equality, since we know that the types are equal.
+
+The cases for the other binders are omitted for brevity, but they follow
+the same principle with some twists to make $\myfun{coe}$ work with the
+generated proofs; the reader can refer to the paper for details.
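+
+As a rough illustration of the decomposition, here is a sketch of the
+type-equality reduction on a simply typed fragment in Haskell, where
+the absence of dependency collapses the binder cases to pairwise
+equality of the components (the names are ours, not part of the formal
+presentation):
+\begin{Verbatim}
+data Ty   = Empty | Unit | Bool | Arr Ty Ty
+data Prop = Top | Bot | And Prop Prop
+
+-- Ground types yield the trivial proposition, matching type
+-- formers decompose into equalities between their components,
+-- and mismatched canonical types yield bottom.
+tyEq :: Ty -> Ty -> Prop
+tyEq Empty Empty = Top
+tyEq Unit  Unit  = Top
+tyEq Bool  Bool  = Top
+tyEq (Arr dom1 cod1) (Arr dom2 cod2) =
+  And (tyEq dom1 dom2) (tyEq cod1 cod2)
+tyEq _ _ = Bot
+\end{Verbatim}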
+
+\subsubsection{$\myfun{coe}$, laziness, and $\myfun{coh}$erence}
+
+It is important to notice that the reduction rules for $\myfun{coe}$
+are never obstructed by the proofs: with the exception of comparisons
+between different canonical types, we never `pattern match' on the
+proof pairs, but always look at the projections.  This means that, as
+long as we are consistent, and thus as long as we don't have
+$\mybot$-inducing proofs, we can add propositional axioms for equality
+and $\myfun{coe}$ will still compute.  Thus, we can take $\myfun{coh}$
+as axiomatic, and we can add back familiar useful equality rules:
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \UnaryInfC{$\myjud{\myapp{\myrefl}{\mytmt}}{\myprdec{\myjm{\mytmt}{\mytya}{\mytmt}{\mytya}}}$}
+ \DisplayProof
+
+ \myderivspp
+
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\myjudd{\myctx; \myb{x} : \mytya}{\mytyb}{\mytyp}$}
+ \BinaryInfC{$\myjud{\mytyc{R} \myappsp (\myb{x} {:} \mytya) \myappsp \mytyb}{\myfora{\myb{y}\, \myb{z}}{\mytya}{\myprdec{\myjm{\myb{y}}{\mytya}{\myb{z}}{\mytya} \myimpl \mysub{\mytyb}{\myb{x}}{\myb{y}} \myeq \mysub{\mytyb}{\myb{x}}{\myb{z}}}}}$}
+ \DisplayProof
+}
+
+$\myrefl$ is the equivalent of the reflexivity rule in propositional
+equality, and $\mytyc{R}$ asserts that if we have a $\mytyp$
+abstracting over a value we can substitute equal for equal---this lets
+us recover $\myfun{subst}$.  Note that while we need to provide ad-hoc
+rules in the restricted, non-hierarchical theory that we have, if our
+theory supports abstraction over $\mytyp$s we can easily add these
+axioms as abstracted variables.
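+
+For example, in a theory with abstraction over $\mytyp$s,
+$\myfun{subst}$ could be recovered along these lines (a sketch, using
+only the constructs introduced so far):
+\[
+\begin{array}{@{}l}
+\myfun{subst} : \myfora{\myb{A}}{\mytyp}{\myfora{\myb{P}}{\myb{A} \myarr \mytyp}{\myfora{\myb{x}\, \myb{y}}{\myb{A}}{\myprdec{\myjm{\myb{x}}{\myb{A}}{\myb{y}}{\myb{A}}} \myarr \myapp{\myb{P}}{\myb{x}} \myarr \myapp{\myb{P}}{\myb{y}}}}} \\
+\myfun{subst} \myappsp \myb{A} \myappsp \myb{P} \myappsp \myb{x} \myappsp \myb{y} \myappsp \myb{q} \myappsp \myb{p} \mapsto \mycoee{(\myapp{\myb{P}}{\myb{x}})}{(\myapp{\myb{P}}{\myb{y}})}{(\mytyc{R} \myappsp (\myb{z} {:} \myb{A}) \myappsp (\myapp{\myb{P}}{\myb{z}}) \myappsp \myb{x} \myappsp \myb{y} \myappsp \myb{q})}{\myb{p}}
+\end{array}
+\]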
+
+\subsubsection{Value-level equality}
+
+\mydesc{equality reduction:}{\myprsyn \myred \myprsyn}{
+ $
+ \begin{array}{r@{ }c@{\ }c@{\ }c@{}l@{\ }c@{\ }r@{}c@{\ }c@{\ }c@{}l@{\ }l}
+ (&\mytmt_1 & : & \myempty&) & \myeq & (&\mytmt_2 & : & \myempty &) & \myred \mytop \\
+ (&\mytmt_1 & : & \myunit&) & \myeq & (&\mytmt_2 & : & \myunit&) & \myred \mytop \\
+ (&\mytrue & : & \mybool&) & \myeq & (&\mytrue & : & \mybool&) & \myred \mytop \\
+ (&\myfalse & : & \mybool&) & \myeq & (&\myfalse & : & \mybool&) & \myred \mytop \\
+ (&\mytrue & : & \mybool&) & \myeq & (&\myfalse & : & \mybool&) & \myred \mybot \\
+ (&\myfalse & : & \mybool&) & \myeq & (&\mytrue & : & \mybool&) & \myred \mybot \\
+    (&\mytmt_1 & : & \myexi{\myb{x_1}}{\mytya_1}{\mytyb_1}&) & \myeq & (&\mytmt_2 & : & \myexi{\myb{x_2}}{\mytya_2}{\mytyb_2}&) & \myred \\
+ & \multicolumn{11}{@{}l}{
+ \myind{2} \myjm{\myapp{\myfst}{\mytmt_1}}{\mytya_1}{\myapp{\myfst}{\mytmt_2}}{\mytya_2} \myand
+ \myjm{\myapp{\mysnd}{\mytmt_1}}{\mysub{\mytyb_1}{\myb{x_1}}{\myapp{\myfst}{\mytmt_1}}}{\myapp{\mysnd}{\mytmt_2}}{\mysub{\mytyb_2}{\myb{x_2}}{\myapp{\myfst}{\mytmt_2}}}
+ } \\
+    (&\myse{f}_1 & : & \myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}&) & \myeq & (&\myse{f}_2 & : & \myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}&) & \myred \\
+ & \multicolumn{11}{@{}l}{
+ \myind{2} \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{
+ \myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2} \myimpl
+ \myjm{\myapp{\myse{f}_1}{\myb{x_1}}}{\mytyb_1[\myb{x_1}]}{\myapp{\myse{f}_2}{\myb{x_2}}}{\mytyb_2[\myb{x_2}]}
+ }}
+ } \\
+    (&\mytmt_1 \mynodee \myse{f}_1 & : & \myw{\myb{x_1}}{\mytya_1}{\mytyb_1}&) & \myeq & (&\mytmt_2 \mynodee \myse{f}_2 & : & \myw{\myb{x_2}}{\mytya_2}{\mytyb_2}&) & \myred \cdots \\
+    (&\mytmt_1 & : & \mytya_1&) & \myeq & (&\mytmt_2 & : & \mytya_2 &) & \myred \mybot\ \text{if $\mytya_1$ and $\mytya_2$ are different canonical types.}
+ \end{array}
+ $
+}
+
+As with type-level equality, we want value-level equality to reduce
+based on the structure of the compared terms.  When matching
+propositional data, such as $\myempty$ and $\myunit$, we automatically
+return the trivial proposition $\mytop$, since if a type has zero or
+one members, all members will be equal.  When matching on data-bearing
+types, such as $\mybool$, we check that the data matches, and return
+bottom otherwise.
+
+\subsection{Proof irrelevance and stuck coercions}
+\label{sec:ott-quot}
+
+The last effort is required to make sure that proofs (members of
+$\myprop$) are \emph{irrelevant}. Since they are devoid of
+computational content, we would like to identify all equivalent
+propositions as the same, in a similar way as we identified all
+$\myempty$ and all $\myunit$ as the same in section
+\ref{sec:eta-expand}.
+
+Thus we will have a quotation that will not only perform
+$\eta$-expansion, but will also identify and mark proofs that could not
+be decoded (that is, equalities on neutral types). Then, when
+comparing terms, marked proofs will be considered equal without
+analysing their contents, thus gaining irrelevance.
+
+Moreover we can safely advance `stuck' $\myfun{coe}$rcions between
+non-canonical but definitionally equal types.  Consider for example
+\[
+\mycoee{(\myITE{\myb{b}}{\mynat}{\mybool})}{(\myITE{\myb{b}}{\mynat}{\mybool})}{\myb{q}}{\myb{x}}
+\]
+where $\myb{b}$, $\myb{q}$, and $\myb{x}$ are abstracted variables.
+This $\myfun{coe}$ will not advance, since the types are not canonical.
+However they are definitionally equal, and thus we can safely remove
+the coercion and return $\myb{x}$ as it is.
+
+This process of identifying every proof as equivalent and removing
+$\myfun{coe}$rcions is known as \emph{quotation}.
+
+\section{\mykant : the theory}
+\label{sec:kant-theory}
+
+\mykant\ is an interactive theorem prover developed as part of this thesis.
+The plan is to present a core language capable of serving as the basis
+for a more featureful system, while still presenting interesting
+features---most importantly observational equality.
+
+We will first present the features of the system, and then describe the
+implementation we have developed in Section \ref{sec:kant-practice}.
+
+The defining features of \mykant\ are:
+
+\begin{description}
+\item[Full dependent types] As we would expect, we have a dependently
+  typed system which is as expressive as the `best' corner of the lambda
+  cube described in Section \ref{sec:itt}.
+
+\item[Implicit, cumulative universe hierarchy] The user does not need to
+  specify universe levels explicitly, and universes are \emph{cumulative}.
+
+\item[User defined data types and records] Instead of forcing the user to
+ choose from a restricted toolbox, we let her define inductive data types,
+ with associated primitive recursion operators; or records, with associated
+ projections for each field.
+
+\item[Bidirectional type checking] While no `fancy' inference via
+ unification is present, we take advantage of a type synthesis system
+  in the style of \cite{Pierce2000}, extending the concept to user
+  defined data types.
+
+\item[Observational equality] As described in Section \ref{sec:ott} but
+ extended to work with the type hierarchy and to admit equality between
+ arbitrary data types.
+\end{description}
+
+We will analyse the features one by one, along with motivations and
+tradeoffs for the design decisions made.
+
+\subsection{Bidirectional type checking}
+
+We start by describing bidirectional type checking, since it calls for
+fairly different typing rules than those we have seen up to now.  The
+idea is to have two kinds of terms: terms for which a type can always be
+inferred, and terms that need to be checked against a type.  A nice
+observation is that this duality runs through the semantics of the
+terms: neutral terms (abstracted or defined variables, function
+application, record projections, primitive recursors, etc.) \emph{infer}
+types, while canonical terms (abstractions, record and data
+constructors, etc.) need to be \emph{checked}.
+
+To introduce the concept and notation, we will revisit the STLC in a
+bidirectional style. The presentation follows \cite{Loh2010}. The
+syntax for our bidirectional STLC is the same as the untyped
+$\lambda$-calculus, but with an extra construct to annotate terms
+explicitly---this will be necessary when having top-level canonical
+terms. The types are the same as those found in the normal STLC.
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \myb{x} \mysynsep \myabs{\myb{x}}{\mytmsyn} \mysynsep (\myapp{\mytmsyn}{\mytmsyn}) \mysynsep (\mytmsyn : \mytysyn)
+ \end{array}
+ $
+}
+We will have two kinds of typing judgements: \emph{inference} and
+\emph{checking}.  $\myinf{\mytmt}{\mytya}$ indicates that $\mytmt$
+infers the type $\mytya$, while $\mychk{\mytmt}{\mytya}$ indicates that
+$\mytmt$ can be checked against type $\mytya$.  The type of variables
+in context is inferred, and so is that of annotated terms.  The type of
+applications is inferred too, propagating the inferred function type
+down to the argument.  Abstractions are checked.  Finally, we have a
+rule to check the type of an inferrable term.
+
+\mydesc{typing:}{\myctx \vdash \mytmsyn \Leftrightarrow \mytmsyn}{
+ \begin{tabular}{ccc}
+    \AxiomC{$\myctx(\myb{x}) = \mytya$}
+    \UnaryInfC{$\myinf{\myb{x}}{\mytya}$}
+ \DisplayProof
+ &
+    \AxiomC{$\mychkk{\myctx;\myb{x} : \mytya}{\mytmt}{\mytyb}$}
+    \UnaryInfC{$\mychk{\myabs{\myb{x}}{\mytmt}}{\mytya \myarr \mytyb}$}
+ \DisplayProof
+ &
+    \AxiomC{$\myinf{\mytmm}{\mytya \myarr \mytyb}$}
+    \AxiomC{$\mychk{\mytmn}{\mytya}$}
+    \BinaryInfC{$\myinf{\myapp{\mytmm}{\mytmn}}{\mytyb}$}
+ \DisplayProof
+ \end{tabular}
+
+ \myderivspp
+
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytmt}{\mytya}$}
+ \UnaryInfC{$\myinf{\myann{\mytmt}{\mytya}}{\mytya}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myinf{\mytmt}{\mytya}$}
+ \UnaryInfC{$\mychk{\mytmt}{\mytya}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+For example, if we wanted to type function composition (in this case for
+naturals), we would have to annotate the term:
+\[
+ \myfun{comp} \mapsto (\myabs{\myb{f}\, \myb{g}\, \myb{x}}{\myb{f}\myappsp(\myb{g}\myappsp\myb{x})}) : (\mynat \myarr \mynat) \myarr (\mynat \myarr \mynat) \myarr \mynat \myarr \mynat
+\]
+But we would not have to annotate functions passed to it, since the type would be propagated to the arguments:
+\[
+ \myfun{comp}\myappsp (\myabs{\myb{x}}{\myb{x} \mathrel{\myfun{$+$}} 3}) \myappsp (\myabs{\myb{x}}{\myb{x} \mathrel{\myfun{$*$}} 4}) \myappsp 42
+\]
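+
+To make the bidirectional discipline concrete, here is a minimal
+Haskell sketch of the rules above for the STLC (the names are
+hypothetical; we assume a single base type of naturals):
+\begin{Verbatim}
+import Control.Monad (unless)
+
+data Type = TNat | Arr Type Type
+  deriving (Eq, Show)
+
+data Term
+  = Var String | Lam String Term | App Term Term | Ann Term Type
+  deriving Show
+
+type Ctx = [(String, Type)]
+
+-- Inference: variables, applications, and annotated terms.
+infer :: Ctx -> Term -> Either String Type
+infer ctx (Var x)    = maybe (Left ("unbound: " ++ x)) Right (lookup x ctx)
+infer ctx (App f a)  = do
+  t <- infer ctx f
+  case t of
+    Arr dom cod -> check ctx a dom >> return cod
+    _           -> Left "applying a non-function"
+infer ctx (Ann t ty) = check ctx t ty >> return ty
+infer _   (Lam _ _)  = Left "cannot infer a bare abstraction"
+
+-- Checking: abstractions, plus the switch rule for inferrable terms.
+check :: Ctx -> Term -> Type -> Either String ()
+check ctx (Lam x body) (Arr dom cod) = check ((x, dom) : ctx) body cod
+check _   (Lam _ _)    _             = Left "abstraction needs an arrow type"
+check ctx t ty = do
+  ty' <- infer ctx t
+  unless (ty == ty') (Left "type mismatch")
+\end{Verbatim}
+Note how checking an application's argument is driven by the inferred
+arrow type, which is why the functions passed to $\myfun{comp}$ above
+need no annotations.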
+
+\subsection{Base terms and types}
+
+Let us begin by describing the primitives available without the user
+defining any data types, and without equality. The way we handle
+variables and substitution is left unspecified, and explained in section
+\ref{sec:term-repr}, along with other implementation issues. We are
+also going to give an account of the implicit type hierarchy separately
+in Section \ref{sec:term-hierarchy}, so as not to clutter derivation
+rules too much, and just treat types as impredicative for the time
+being.
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \mynamesyn \mysynsep \mytyp \\
+ & | & \myfora{\myb{x}}{\mytmsyn}{\mytmsyn} \mysynsep
+ \myabs{\myb{x}}{\mytmsyn} \mysynsep
+ (\myapp{\mytmsyn}{\mytmsyn}) \mysynsep
+ (\myann{\mytmsyn}{\mytmsyn}) \\
+ \mynamesyn & ::= & \myb{x} \mysynsep \myfun{f}
+ \end{array}
+ $
+}
+
+The syntax for our calculus includes just two basic constructs:
+abstractions and $\mytyp$s. Everything else will be provided by
+user-definable constructs. Since we let the user define values, we will
+need a context capable of carrying the body of variables along with
+their type.
+
+Bound names and defined names are treated separately in the syntax, and
+while both can be associated to a type in the context, only defined
+names can be associated with a body:
+
+\mydesc{context validity:}{\myvalid{\myctx}}{
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\myjud{\mytya}{\mytyp_l}$}}
+ \UnaryInfC{$\myvalid{\myemptyctx}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\mynamesyn \not\in \myctx$}
+ \BinaryInfC{$\myvalid{\myctx ; \mynamesyn : \mytya}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \AxiomC{$\myfun{f} \not\in \myctx$}
+ \BinaryInfC{$\myvalid{\myctx ; \myfun{f} \mapsto \mytmt : \mytya}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+Now we can present the reduction rules, which are unsurprising. We have
+the usual function application ($\beta$-reduction), but also a rule to
+replace names with their bodies ($\delta$-reduction), and one to discard
+type annotations. For this reason reduction is done in-context, as
+opposed to what we have seen in the past:
+
+\mydesc{reduction:}{\myctx \vdash \mytmsyn \myred \mytmsyn}{
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\myb{x} \mapsto \mytmt : \mytya \in \myctx$}}
+ \UnaryInfC{$\myctx \vdash \myapp{(\myabs{\myb{x}}{\mytmm})}{\mytmn}
+ \myred \mysub{\mytmm}{\myb{x}}{\mytmn}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myfun{f} \mapsto \mytmt : \mytya \in \myctx$}
+ \UnaryInfC{$\myctx \vdash \myfun{f} \myred \mytmt$}
+ \DisplayProof
+ &
+ \AxiomC{\phantom{$\myb{x} \mapsto \mytmt : \mytya \in \myctx$}}
+ \UnaryInfC{$\myctx \vdash \myann{\mytmm}{\mytya} \myred \mytmm$}
+ \DisplayProof
+ \end{tabular}
+}
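+
+These rules admit a direct one-step reduction function; here is a
+Haskell sketch (with a deliberately naive, capture-ignoring
+substitution; the names are ours):
+\begin{Verbatim}
+data Term
+  = V String      -- abstracted or defined name
+  | Lam String Term
+  | App Term Term
+  | Ann Term Term -- annotated term (types are terms too)
+  | Type
+
+-- The context maps defined names to their bodies.
+type Ctx = [(String, Term)]
+
+-- Naive substitution; a real implementation must avoid capture.
+subst :: String -> Term -> Term -> Term
+subst x s (V y)      = if x == y then s else V y
+subst x s (Lam y b)  = if x == y then Lam y b else Lam y (subst x s b)
+subst x s (App f a)  = App (subst x s f) (subst x s a)
+subst x s (Ann t ty) = Ann (subst x s t) (subst x s ty)
+subst _ _ Type       = Type
+
+-- One step of in-context reduction.
+step :: Ctx -> Term -> Maybe Term
+step _   (App (Lam x b) a) = Just (subst x a b) -- beta
+step ctx (V f)             = lookup f ctx       -- delta
+step _   (Ann t _)         = Just t             -- drop annotation
+step _   _                 = Nothing
+\end{Verbatim}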
+
+We can now give types to our terms. Although we include the usual
+conversion rule, we defer a detailed account of definitional equality to
+Section \ref{sec:kant-irr}.
+
+\mydesc{typing:}{\myctx \vdash \mytmsyn \Leftrightarrow \mytmsyn}{
+ \begin{tabular}{cccc}
+ \AxiomC{$\myse{name} : A \in \myctx$}
+ \UnaryInfC{$\myinf{\myse{name}}{A}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myfun{f} \mapsto \mytmt : A \in \myctx$}
+ \UnaryInfC{$\myinf{\myfun{f}}{A}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmt}{\mytya}$}
+ \UnaryInfC{$\myinf{\myann{\mytmt}{\mytya}}{\mytya}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myinf{\mytmt}{\mytya}$}
+ \AxiomC{$\myctx \vdash \mytya \mydefeq \mytyb$}
+ \BinaryInfC{$\mychk{\mytmt}{\mytyb}$}
+ \DisplayProof
+ \end{tabular}
+ \myderivspp
+
+ \begin{tabular}{ccc}
+ \AxiomC{$\myinf{\mytmm}{\myfora{\myb{x}}{\mytya}{\mytyb}}$}
+ \AxiomC{$\mychk{\mytmn}{\mytya}$}
+ \BinaryInfC{$\myinf{\myapp{\mytmm}{\mytmn}}{\mysub{\mytyb}{\myb{x}}{\mytmn}}$}
+ \DisplayProof
+
+ &
+
+ \AxiomC{$\mychkk{\myctx; \myb{x}: \mytya}{\mytmt}{\mytyb}$}
+  \UnaryInfC{$\mychk{\myabs{\myb{x}}{\mytmt}}{\myfora{\myb{x}}{\mytya}{\mytyb}}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+\subsection{Elaboration}
+
+As we mentioned, $\mykant$\ allows the user to define not only values
+but also custom data types and records.  \emph{Elaboration} consists of
+turning these declarations into workable syntax, types, and reduction
+rules.  The treatment of custom types in $\mykant$\ is heavily inspired
+by McBride and McKinna's early work on Epigram \citep{McBride2004},
+although with some differences.
+
+\subsubsection{Term vectors, telescopes, and assorted notation}
+
+We use a vector notation to refer to a series of terms applied to
+another, for example $\mytyc{D} \myappsp \vec{A}$ is a shorthand for
+$\mytyc{D} \myappsp \mytya_1 \cdots \mytya_n$, for some $n$.  $n$ is
+consistently used to refer to the length of such vectors, and $i$ to
+refer to an index into them.  We also often need to `build up' term
+vectors, in which case we use $\myemptyctx$ for an empty vector
+and add elements to an existing vector with $\myarg ; \myarg$, similarly
+to what we do for contexts.
+
+To present the elaboration and operations on user defined data types, we
+frequently make use of what de Bruijn called \emph{telescopes}
+\citep{Bruijn91}, a construct that will prove useful when dealing with
+the types of type and data constructors.  A telescope is a series of
+nested typed bindings, such as $(\myb{x} {:} \mynat); (\myb{p} {:}
+\myapp{\myfun{even}}{\myb{x}})$. Consistently with the notation for
+contexts and term vectors, we use $\myemptyctx$ to denote an empty
+telescope and $\myarg ; \myarg$ to add a new binding to an existing
+telescope.
+
+We refer to telescopes with $\mytele$, $\mytele'$, $\mytele_i$, etc. If
+$\mytele$ refers to a telescope, $\mytelee$ refers to the term vector
+made up of all the variables bound by $\mytele$. $\mytele \myarr
+\mytya$ refers to the type made by turning the telescope into a series
+of $\myarr$. Returning to the examples above, we have that
+\[
+ (\myb{x} {:} \mynat); (\myb{p} : \myapp{\myfun{even}}{\myb{x}}) \myarr \mynat =
+ (\myb{x} {:} \mynat) \myarr (\myb{p} : \myapp{\myfun{even}}{\myb{x}}) \myarr \mynat
+\]
+
+We make use of various operations to manipulate telescopes:
+\begin{itemize}
+\item $\myhead(\mytele)$ refers to the first type appearing in
+ $\mytele$: $\myhead((\myb{x} {:} \mynat); (\myb{p} :
+ \myapp{\myfun{even}}{\myb{x}})) = \mynat$. Similarly,
+ $\myix_i(\mytele)$ refers to the $i^{th}$ type in a telescope
+ (1-indexed).
+\item $\mytake_i(\mytele)$ refers to the telescope created by taking the
+ first $i$ elements of $\mytele$: $\mytake_1((\myb{x} {:} \mynat); (\myb{p} :
+ \myapp{\myfun{even}}{\myb{x}})) = (\myb{x} {:} \mynat)$.
+\item $\mytele \vec{A}$ refers to the telescope made by `applying' the
+  terms in $\vec{A}$ on $\mytele$: $((\myb{x} {:} \mynat); (\myb{p} :
+  \myapp{\myfun{even}}{\myb{x}})) \myappsp 42 = (\myb{p} :
+  \myapp{\myfun{even}}{42})$.
+\end{itemize}
+
+Additionally, when presenting syntax elaboration, I'll use $\mytmsyn^n$
+to indicate a term vector composed of $n$ elements, or
+$\mytmsyn^{\mytele}$ for one composed of as many elements as the
+telescope has bindings.
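+
+To fix intuitions, the operations above admit a direct transcription;
+here is a Haskell sketch, reusing \texttt{Term} and \texttt{subst} from
+the reduction sketch earlier:
+\begin{Verbatim}
+-- A telescope as an ordered list of typed bindings.
+type Tele = [(String, Term)]
+
+-- head and ix_i (1-indexed), partial like their mathematical
+-- counterparts.
+teleHead :: Tele -> Term
+teleHead tele = snd (head tele)
+
+teleIx :: Int -> Tele -> Term
+teleIx i tele = snd (tele !! (i - 1))
+
+teleTake :: Int -> Tele -> Tele
+teleTake = take
+
+-- `Applying' terms to a telescope: each argument is substituted
+-- for the first binder in all the remaining bindings.
+teleApp :: Tele -> [Term] -> Tele
+teleApp tele            []       = tele
+teleApp ((x, _) : rest) (t : ts) =
+  teleApp [ (y, subst x t ty) | (y, ty) <- rest ] ts
+teleApp []              _        = error "teleApp: too many arguments"
+\end{Verbatim}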
+
+\subsubsection{Declarations syntax}
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mydeclsyn & ::= & \myval{\myb{x}}{\mytmsyn}{\mytmsyn} \\
+ & | & \mypost{\myb{x}}{\mytmsyn} \\
+ & | & \myadt{\mytyc{D}}{\myappsp \mytelesyn}{}{\mydc{c} : \mytelesyn\ |\ \cdots } \\
+ & | & \myreco{\mytyc{D}}{\myappsp \mytelesyn}{}{\myfun{f} : \mytmsyn,\ \cdots } \\
+
+ \mytelesyn & ::= & \myemptytele \mysynsep \mytelesyn \mycc (\myb{x} {:} \mytmsyn) \\
+ \mynamesyn & ::= & \cdots \mysynsep \mytyc{D} \mysynsep \mytyc{D}.\mydc{c} \mysynsep \mytyc{D}.\myfun{f}
+ \end{array}
+ $
+}
+
+In \mykant\ we have four kinds of declarations:
+
+\begin{description}
+\item[Defined value] A variable, together with a type and a body.
+\item[Abstract variable] An abstract variable, with a type but no body.
+\item[Inductive data] A datatype, with a type constructor and various data
+ constructors---somewhat similar to what we find in Haskell. A primitive
+ recursor (or `destructor') will be generated automatically.
+\item[Record] A record, which consists of one data constructor and various
+ fields, with no recursive occurrences.
+\end{description}
+
+Elaborating defined values consists of type checking the body against
+the given type, and updating the context to contain the new binding.
+Elaborating abstract variables consists of type checking the type, and
+updating the context with a new typed variable:
+
+\mydesc{context elaboration:}{\myelab{\mydeclsyn}{\myctx}}{
+ \begin{tabular}{cc}
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \AxiomC{$\myfun{f} \not\in \myctx$}
+ \BinaryInfC{
+ $\myctx \myelabt \myval{\myfun{f}}{\mytya}{\mytmt} \ \ \myelabf\ \ \myctx; \myfun{f} \mapsto \mytmt : \mytya$
+ }
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\myfun{f} \not\in \myctx$}
+ \BinaryInfC{
+ $
+ \myctx \myelabt \mypost{\myfun{f}}{\mytya}
+ \ \ \myelabf\ \ \myctx; \myfun{f} : \mytya
+ $
+ }
+ \DisplayProof
+ \end{tabular}
+}
+
+\subsubsection{User defined types}
+\label{sec:user-type}
+
+Elaborating user defined types is the real effort.  First, let's
+explain what we can define, with some examples.
+
+\begin{description}
+\item[Natural numbers] To define natural numbers, we create a data type
+ with two constructors: one with zero arguments ($\mydc{zero}$) and one
+ with one recursive argument ($\mydc{suc}$):
+ \[
+ \begin{array}{@{}l}
+ \myadt{\mynat}{ }{ }{
+ \mydc{zero} \mydcsep \mydc{suc} \myappsp \mynat
+ }
+ \end{array}
+ \]
+ This is very similar to what we would write in Haskell:
+ \begin{Verbatim}
+data Nat = Zero | Suc Nat
+ \end{Verbatim}
+ Once the data type is defined, $\mykant$\ will generate syntactic
+ constructs for the type and data constructors, so that we will have
+ \begin{center}
+ \mysmall
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\mychk{\mytmt}{\mynat}$}}
+ \UnaryInfC{$\myinf{\mynat}{\mytyp}$}
+ \DisplayProof
+ &
+ \AxiomC{\phantom{$\mychk{\mytmt}{\mynat}$}}
+ \UnaryInfC{$\myinf{\mytyc{\mynat}.\mydc{zero}}{\mynat}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmt}{\mynat}$}
+ \UnaryInfC{$\myinf{\mytyc{\mynat}.\mydc{suc} \myappsp \mytmt}{\mynat}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ While in Haskell (or indeed in Agda or Coq) data constructors are
+ treated the same way as functions, in $\mykant$\ they are syntax, so
+ for example using $\mytyc{\mynat}.\mydc{suc}$ on its own will be a
+ syntax error. This is necessary so that we can easily infer the type
+ of polymorphic data constructors, as we will see later.
+
+ Moreover, each data constructor is prefixed by the type constructor
+ name, since we need to retrieve the type constructor of a data
+ constructor when type checking. This measure aids in the presentation
+ of various features but it is not needed in the implementation, where
+  we can have a dictionary to look up the type constructor corresponding
+ to each data constructor. When using data constructors in examples I
+ will omit the type constructor prefix for brevity.
+
+ Along with user defined constructors, $\mykant$\ automatically
+ generates an \emph{eliminator}, or \emph{destructor}, to compute with
+  natural numbers: if we have $\mytmt : \mynat$, we can destruct
+ $\mytmt$ using the generated eliminator `$\mynat.\myfun{elim}$':
+ \begin{prooftree}
+ \mysmall
+ \AxiomC{$\mychk{\mytmt}{\mynat}$}
+ \UnaryInfC{$
+ \myinf{\mytyc{\mynat}.\myfun{elim} \myappsp \mytmt}{
+ \begin{array}{@{}l}
+ \myfora{\myb{P}}{\mynat \myarr \mytyp}{ \\ \myapp{\myb{P}}{\mydc{zero}} \myarr (\myfora{\myb{x}}{\mynat}{\myapp{\myb{P}}{\myb{x}} \myarr \myapp{\myb{P}}{(\myapp{\mydc{suc}}{\myb{x}})}}) \myarr \\ \myapp{\myb{P}}{\mytmt}}
+ \end{array}
+ }$}
+ \end{prooftree}
+ $\mynat.\myfun{elim}$ corresponds to the induction principle for
+ natural numbers: if we have a predicate on numbers ($\myb{P}$), and we
+ know that predicate holds for the base case
+ ($\myapp{\myb{P}}{\mydc{zero}}$) and for each inductive step
+ ($\myfora{\myb{x}}{\mynat}{\myapp{\myb{P}}{\myb{x}} \myarr
+ \myapp{\myb{P}}{(\myapp{\mydc{suc}}{\myb{x}})}}$), then $\myb{P}$
+ holds for any number. As with the data constructors, we require the
+ eliminator to be applied to the `destructed' element.
+
+  While the induction principle is usually seen as a means to prove
+  properties about numbers, in the intuitionistic setting it is also a
+  means to compute.  In this specific case, $\mynat.\myfun{elim}$ will
+  return the base case if the provided number is $\mydc{zero}$, and
+  recursively apply the inductive step if the number is a
+  $\mydc{suc}$cessor:
+ \[
+ \begin{array}{@{}l@{}l}
+ \mytyc{\mynat}.\myfun{elim} \myappsp \mydc{zero} & \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps} \myred \myse{pz} \\
+ \mytyc{\mynat}.\myfun{elim} \myappsp (\mydc{suc} \myappsp \mytmt) & \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps} \myred \myse{ps} \myappsp \mytmt \myappsp (\mynat.\myfun{elim} \myappsp \mytmt \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps})
+ \end{array}
+ \]
+ The Haskell equivalent would be
+ \begin{Verbatim}
+elim :: Nat -> a -> (Nat -> a -> a) -> a
+elim Zero pz ps = pz
+elim (Suc n) pz ps = ps n (elim n pz ps)
+\end{Verbatim}
+ Which buys us the computational behaviour, but not the reasoning power.
+
+\item[Binary trees] Now for a polymorphic data type: binary trees, since
+ lists are too similar to natural numbers to be interesting.
+ \[
+ \begin{array}{@{}l}
+ \myadt{\mytree}{\myappsp (\myb{A} {:} \mytyp)}{ }{
+ \mydc{leaf} \mydcsep \mydc{node} \myappsp (\myapp{\mytree}{\myb{A}}) \myappsp \myb{A} \myappsp (\myapp{\mytree}{\myb{A}})
+ }
+ \end{array}
+ \]
+ Now the purpose of constructors as syntax can be explained: what would
+ the type of $\mydc{leaf}$ be? If we were to treat it as a `normal'
+ term, we would have to specify the type parameter of the tree each
+ time the constructor is applied:
+ \[
+ \begin{array}{@{}l@{\ }l}
+ \mydc{leaf} & : \myfora{\myb{A}}{\mytyp}{\myapp{\mytree}{\myb{A}}} \\
+ \mydc{node} & : \myfora{\myb{A}}{\mytyp}{\myapp{\mytree}{\myb{A}} \myarr \myb{A} \myarr \myapp{\mytree}{\myb{A}} \myarr \myapp{\mytree}{\myb{A}}}
+ \end{array}
+ \]
+ The problem with this approach is that creating terms is incredibly
+ verbose and dull, since we would need to specify the type parameters
+ each time. For example if we wished to create a $\mytree \myappsp
+ \mynat$ with two nodes and three leaves, we would have to write
+ \[
+ \mydc{node} \myappsp \mynat \myappsp (\mydc{node} \myappsp \mynat \myappsp (\mydc{leaf} \myappsp \mynat) \myappsp (\myapp{\mydc{suc}}{\mydc{zero}}) \myappsp (\mydc{leaf} \myappsp \mynat)) \myappsp \mydc{zero} \myappsp (\mydc{leaf} \myappsp \mynat)
+ \]
+ The redundancy of $\mynat$s is quite irritating. Instead, if we treat
+ constructors as syntactic elements, we can `extract' the type of the
+ parameter from the type that the term gets checked against, much like
+ we get the type of abstraction arguments:
+ \begin{center}
+ \mysmall
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytya}{\mytyp}$}
+ \UnaryInfC{$\mychk{\mydc{leaf}}{\myapp{\mytree}{\mytya}}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmm}{\mytree \myappsp \mytya}$}
+ \AxiomC{$\mychk{\mytmt}{\mytya}$}
+    \AxiomC{$\mychk{\mytmn}{\mytree \myappsp \mytya}$}
+ \TrinaryInfC{$\mychk{\mydc{node} \myappsp \mytmm \myappsp \mytmt \myappsp \mytmn}{\mytree \myappsp \mytya}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+  Which enables us to write, much more concisely,
+ \[
+ \mydc{node} \myappsp (\mydc{node} \myappsp \mydc{leaf} \myappsp (\myapp{\mydc{suc}}{\mydc{zero}}) \myappsp \mydc{leaf}) \myappsp \mydc{zero} \myappsp \mydc{leaf} : \myapp{\mytree}{\mynat}
+ \]
+  We gain an annotation, but we lose the myriad of types applied to the
+  constructors.  Conversely, with the eliminator for $\mytree$, we can
+  infer the type of the arguments given the type of the destructed term:
+ \begin{prooftree}
+ \small
+ \AxiomC{$\myinf{\mytmt}{\myapp{\mytree}{\mytya}}$}
+ \UnaryInfC{$
+ \myinf{\mytree.\myfun{elim} \myappsp \mytmt}{
+ \begin{array}{@{}l}
+ (\myb{P} {:} \myapp{\mytree}{\mytya} \myarr \mytyp) \myarr \\
+ \myapp{\myb{P}}{\mydc{leaf}} \myarr \\
+ ((\myb{l} {:} \myapp{\mytree}{\mytya}) (\myb{x} {:} \mytya) (\myb{r} {:} \myapp{\mytree}{\mytya}) \myarr \myapp{\myb{P}}{\myb{l}} \myarr
+ \myapp{\myb{P}}{\myb{r}} \myarr \myb{P} \myappsp (\mydc{node} \myappsp \myb{l} \myappsp \myb{x} \myappsp \myb{r})) \myarr \\
+ \myapp{\myb{P}}{\mytmt}
+ \end{array}
+ }
+ $}
+ \end{prooftree}
+ As expected, the eliminator embodies structural induction on trees.
+
+\item[Empty type] We have presented types that have at least one
+  constructor, but nothing prevents us from defining types with
+  \emph{no} constructors:
+ \[\myadt{\mytyc{Empty}}{ }{ }{ }\]
+ What shall the `induction principle' on $\mytyc{Empty}$ be? Does it
+ even make sense to talk about induction on $\mytyc{Empty}$?
+ $\mykant$\ does not care, and generates an eliminator with no `cases',
+ and thus corresponding to the $\myfun{absurd}$ that we know and love:
+ \begin{prooftree}
+ \mysmall
+ \AxiomC{$\myinf{\mytmt}{\mytyc{Empty}}$}
+    \UnaryInfC{$\myinf{\mytyc{Empty}.\myfun{elim} \myappsp \mytmt}{(\myb{P} {:} \mytyc{Empty} \myarr \mytyp) \myarr \myapp{\myb{P}}{\mytmt}}$}
+ \end{prooftree}
+
+\item[Ordered lists] Up to this point, the examples shown are nothing
+ new to the \{Haskell, SML, OCaml, functional\} programmer. However
+ dependent types let us express much more than that. A useful example
+  is the type of ordered lists.  There are many ways to define such a
+  thing; here we will define our type to store the bounds of the list,
+  making sure that $\mydc{cons}$ing respects them.
+
+ First, using $\myunit$ and $\myempty$, we define a type expressing the
+ ordering on natural numbers, $\myfun{le}$---`less or equal'.
+ $\myfun{le}\myappsp \mytmm \myappsp \mytmn$ will be inhabited only if
+ $\mytmm \le \mytmn$:
+ \[
+ \begin{array}{@{}l}
+ \myfun{le} : \mynat \myarr \mynat \myarr \mytyp \\
+ \myfun{le} \myappsp \myb{n} \mapsto \\
+ \myind{2} \mynat.\myfun{elim} \\
+ \myind{2}\myind{2} \myb{n} \\
+ \myind{2}\myind{2} (\myabs{\myarg}{\mynat \myarr \mytyp}) \\
+ \myind{2}\myind{2} (\myabs{\myarg}{\myunit}) \\
+ \myind{2}\myind{2} (\myabs{\myb{n}\, \myb{f}\, \myb{m}}{
+ \mynat.\myfun{elim} \myappsp \myb{m} \myappsp (\myabs{\myarg}{\mytyp}) \myappsp \myempty \myappsp (\myabs{\myb{m'}\, \myarg}{\myapp{\myb{f}}{\myb{m'}}})
+ })
+ \end{array}
+ \]
+  We return $\myunit$ if the scrutinised number is $\mydc{zero}$
+  ($\mydc{zero}$ is less than or equal to every number), $\myempty$ if the first number is
+ a $\mydc{suc}$cessor and the second a $\mydc{zero}$, and we recurse if
+ they are both successors. Since we want the list to have possibly
+ `open' bounds, for example for empty lists, we create a type for
+ `lifted' naturals with a bottom (less than everything) and top
+ (greater than everything) elements, along with an associated comparison
+ function:
+ \[
+ \begin{array}{@{}l}
+ \myadt{\mytyc{Lift}}{ }{ }{\mydc{bot} \mydcsep \mydc{lift} \myappsp \mynat \mydcsep \mydc{top}}\\
+ \myfun{le'} : \mytyc{Lift} \myarr \mytyc{Lift} \myarr \mytyp\\
+ \myfun{le'} \myappsp \myb{l_1} \mapsto \\
+ \myind{2} \mytyc{Lift}.\myfun{elim} \\
+ \myind{2}\myind{2} \myb{l_1} \\
+ \myind{2}\myind{2} (\myabs{\myarg}{\mytyc{Lift} \myarr \mytyp}) \\
+ \myind{2}\myind{2} (\myabs{\myarg}{\myunit}) \\
+    \myind{2}\myind{2} (\myabs{\myb{n_1}\, \myb{l_2}}{
+ \mytyc{Lift}.\myfun{elim} \myappsp \myb{l_2} \myappsp (\myabs{\myarg}{\mytyp}) \myappsp \myempty \myappsp (\myabs{\myb{n_2}}{\myfun{le} \myappsp \myb{n_1} \myappsp \myb{n_2}}) \myappsp \myunit
+ }) \\
+    \myind{2}\myind{2} (\myabs{\myb{l_2}}{
+ \mytyc{Lift}.\myfun{elim} \myappsp \myb{l_2} \myappsp (\myabs{\myarg}{\mytyp}) \myappsp \myempty \myappsp (\myabs{\myarg}{\myempty}) \myappsp \myunit
+ })
+ \end{array}
+ \]
+  Finally, we can define a type of ordered lists.  The type is
+ parametrised over two values representing the lower and upper bounds
+ of the elements, as opposed to the type parameters that we are used
+ to. Then, an empty list will have to have evidence that the bounds
+ are ordered, and each time we add an element we require the list to
+ have a matching lower bound:
+ \[
+ \begin{array}{@{}l}
+ \myadt{\mytyc{OList}}{\myappsp (\myb{low}\ \myb{upp} {:} \mytyc{Lift})}{\\ \myind{2}}{
+      \mydc{nil} \myappsp (\myfun{le'} \myappsp \myb{low} \myappsp \myb{upp}) \mydcsep \mydc{cons} \myappsp (\myb{n} {:} \mynat) \myappsp (\mytyc{OList} \myappsp (\mydc{lift} \myappsp \myb{n}) \myappsp \myb{upp}) \myappsp (\myfun{le'} \myappsp \myb{low} \myappsp (\mydc{lift} \myappsp \myb{n}))
+ }
+ \end{array}
+ \]
+  If we wish, we can then employ this structure to write and prove
+  correct various sorting algorithms; a Haskell approximation of
+  $\mytyc{OList}$ is sketched right after these examples.\footnote{See this presentation by
+ Conor McBride:
+ \url{https://personal.cis.strath.ac.uk/conor.mcbride/Pivotal.pdf},
+ and this blog post by the author:
+ \url{http://mazzo.li/posts/AgdaSort.html}.}
+
+\item[Dependent products] Apart from $\mysyn{data}$, $\mykant$\ offers
+ us another way to define types: $\mysyn{record}$. A record is a
+ datatype with one constructor and `projections' to extract specific
+ fields of the said constructor.
+
+ For example, we can recover dependent products:
+ \[
+ \begin{array}{@{}l}
+ \myreco{\mytyc{Prod}}{\myappsp (\myb{A} {:} \mytyp) \myappsp (\myb{B} {:} \myb{A} \myarr \mytyp)}{\\ \myind{2}}{\myfst : \myb{A}, \mysnd : \myapp{\myb{B}}{\myb{fst}}}
+ \end{array}
+ \]
+ Here $\myfst$ and $\mysnd$ are the projections, with their respective
+ types. Note that each field can refer to the preceding fields. A
+ constructor will be automatically generated, under the name of
+ $\mytyc{Prod}.\mydc{constr}$. Dually to data types, we will omit the
+ type constructor prefix for record projections.
+
+ Following the bidirectionality of the system, we have that projections
+ (the destructors of the record) infer the type, while the constructor
+ gets checked:
+ \begin{center}
+ \mysmall
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytmm}{\mytya}$}
+ \AxiomC{$\mychk{\mytmn}{\myapp{\mytyb}{\mytmm}}$}
+ \BinaryInfC{$\mychk{\mytyc{Prod}.\mydc{constr} \myappsp \mytmm \myappsp \mytmn}{\mytyc{Prod} \myappsp \mytya \myappsp \mytyb}$}
+ \noLine
+ \UnaryInfC{\phantom{$\myinf{\myfun{snd} \myappsp \mytmt}{\mytyb \myappsp (\myfst \myappsp \mytmt)}$}}
+ \DisplayProof
+ &
+ \AxiomC{$\myinf{\mytmt}{\mytyc{Prod} \myappsp \mytya \myappsp \mytyb}$}
+ \UnaryInfC{$\myinf{\myfun{fst} \myappsp \mytmt}{\mytya}$}
+ \noLine
+ \UnaryInfC{$\myinf{\myfun{snd} \myappsp \mytmt}{\mytyb \myappsp (\myfst \myappsp \mytmt)}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ What we have is equivalent to ITT's dependent products.
+\end{description}
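+
+The ordered lists from the examples above can be approximated in
+Haskell using GADTs and promoted data types---a sketch, where evidence
+of ordering is an explicit datatype rather than a $\myprop$:
+\begin{Verbatim}
+{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
+import Data.Kind (Type)
+
+data Nat  = Zero | Suc Nat
+data Lift = Bot | LiftN Nat | Top
+
+-- Runtime witnesses of type-level naturals.
+data SNat :: Nat -> Type where
+  SZero :: SNat 'Zero
+  SSuc  :: SNat n -> SNat ('Suc n)
+
+-- Evidence that one lifted natural is below another (le').
+data Le :: Lift -> Lift -> Type where
+  BotLe  :: Le 'Bot b
+  LeTop  :: Le a 'Top
+  ZeroLe :: Le ('LiftN 'Zero) ('LiftN n)
+  SucLe  :: Le ('LiftN m) ('LiftN n)
+         -> Le ('LiftN ('Suc m)) ('LiftN ('Suc n))
+
+-- Ordered lists indexed by their lower and upper bounds.
+data OList :: Lift -> Lift -> Type where
+  Nil  :: Le low upp -> OList low upp
+  Cons :: SNat n -> Le low ('LiftN n)
+       -> OList ('LiftN n) upp -> OList low upp
+
+-- The list [0, 1], with open bounds.
+example :: OList 'Bot 'Top
+example = Cons SZero BotLe (Cons (SSuc SZero) ZeroLe (Nil LeTop))
+\end{Verbatim}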
+
+\begin{figure}[p]
+ \mydesc{syntax}{ }{
+ \footnotesize
+ $
+ \begin{array}{l}
+ \mynamesyn ::= \cdots \mysynsep \mytyc{D} \mysynsep \mytyc{D}.\mydc{c} \mysynsep \mytyc{D}.\myfun{f}
+ \end{array}
+ $
+ }
+
+ \mynegder
+
+ \mydesc{syntax elaboration:}{\mydeclsyn \myelabf \mytmsyn ::= \cdots}{
+ \footnotesize
+ $
+ \begin{array}{r@{\ }l}
+ & \myadt{\mytyc{D}}{\mytele}{}{\cdots\ |\ \mydc{c}_n : \mytele_n } \\
+ \myelabf &
+
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myapp{\mytyc{D}}{\mytmsyn^{\mytele}} \mysynsep \cdots \mysynsep
+ \mytyc{D}.\mydc{c}_n \myappsp \mytmsyn^{\mytele_n} \mysynsep \mytyc{D}.\myfun{elim} \myappsp \mytmsyn \\
+ \end{array}
+ \end{array}
+ $
+ }
+
+ \mynegder
+
+ \mydesc{context elaboration:}{\myelab{\mydeclsyn}{\myctx}}{
+ \footnotesize
+
+ \AxiomC{$
+ \begin{array}{c}
+ \myinf{\mytele \myarr \mytyp}{\mytyp}\hspace{0.8cm}
+ \mytyc{D} \not\in \myctx \\
+ \myinff{\myctx;\ \mytyc{D} : \mytele \myarr \mytyp}{\mytele \mycc \mytele_i \myarr \myapp{\mytyc{D}}{\mytelee}}{\mytyp}\ \ \ (1 \leq i \leq n) \\
+ \text{For each $(\myb{x} {:} \mytya)$ in each $\mytele_i$, if $\mytyc{D} \in \mytya$, then $\mytya = \myapp{\mytyc{D}}{\vec{\mytmt}}$.}
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }c@{\ }l}
+ \myctx & \myelabt & \myadt{\mytyc{D}}{\mytele}{}{ \cdots \ |\ \mydc{c}_n : \mytele_n } \\
+ & & \vspace{-0.2cm} \\
+ & \myelabf & \myctx;\ \mytyc{D} : \mytele \myarr \mytyp;\ \cdots;\ \mytyc{D}.\mydc{c}_n : \mytele \mycc \mytele_n \myarr \myapp{\mytyc{D}}{\mytelee}; \\
+ & &
+ \begin{array}{@{}r@{\ }l l}
+ \mytyc{D}.\myfun{elim} : & \mytele \myarr (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr & \textbf{target} \\
+ & (\myb{P} {:} \myapp{\mytyc{D}}{\mytelee} \myarr \mytyp) \myarr & \textbf{motive} \\
+ & \left.
+ \begin{array}{@{}l}
+ \myind{3} \vdots \\
+ (\mytele_n \mycc \myhyps(\myb{P}, \mytele_n) \myarr \myapp{\myb{P}}{(\myapp{\mytyc{D}.\mydc{c}_n}{\mytelee_n})}) \myarr
+ \end{array} \right \}
+ & \textbf{methods} \\
+ & \myapp{\myb{P}}{\myb{x}} &
+ \end{array}
+ \end{array}
+ $}
+ \DisplayProof \\ \vspace{0.2cm}\ \\
+ $
+ \begin{array}{@{}l l@{\ } l@{} r c l}
+ \textbf{where} & \myhyps(\myb{P}, & \myemptytele &) & \mymetagoes & \myemptytele \\
+ & \myhyps(\myb{P}, & (\myb{r} {:} \myapp{\mytyc{D}}{\vec{\mytmt}}) \mycc \mytele &) & \mymetagoes & (\myb{r'} {:} \myapp{\myb{P}}{\myb{r}}) \mycc \myhyps(\myb{P}, \mytele) \\
+ & \myhyps(\myb{P}, & (\myb{x} {:} \mytya) \mycc \mytele & ) & \mymetagoes & \myhyps(\myb{P}, \mytele)
+ \end{array}
+ $
+
+ }
+
+ \mynegder
+
+ \mydesc{reduction elaboration:}{\mydeclsyn \myelabf \myctx \vdash \mytmsyn \myred \mytmsyn}{
+ \footnotesize
+ $\myadt{\mytyc{D}}{\mytele}{}{ \cdots \ |\ \mydc{c}_n : \mytele_n } \ \ \myelabf$
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \AxiomC{$\mytyc{D}.\mydc{c}_i : \mytele;\mytele_i \myarr \myapp{\mytyc{D}}{\mytelee} \in \myctx$}
+ \BinaryInfC{$
+ \myctx \vdash \myapp{\myapp{\myapp{\mytyc{D}.\myfun{elim}}{(\myapp{\mytyc{D}.\mydc{c}_i}{\vec{\myse{t}}})}}{\myse{P}}}{\vec{\myse{m}}} \myred \myapp{\myapp{\myse{m}_i}{\vec{\mytmt}}}{\myrecs(\myse{P}, \vec{m}, \mytele_i)}
+ $}
+ \DisplayProof \\ \vspace{0.2cm}\ \\
+ $
+ \begin{array}{@{}l l@{\ } l@{} r c l}
+ \textbf{where} & \myrecs(\myse{P}, \vec{m}, & \myemptytele &) & \mymetagoes & \myemptytele \\
+ & \myrecs(\myse{P}, \vec{m}, & (\myb{r} {:} \myapp{\mytyc{D}}{\vec{A}}); \mytele & ) & \mymetagoes & (\mytyc{D}.\myfun{elim} \myappsp \myb{r} \myappsp \myse{P} \myappsp \vec{m}); \myrecs(\myse{P}, \vec{m}, \mytele) \\
+ & \myrecs(\myse{P}, \vec{m}, & (\myb{x} {:} \mytya); \mytele &) & \mymetagoes & \myrecs(\myse{P}, \vec{m}, \mytele)
+ \end{array}
+ $
+ }
+
+ \mynegder
+
+ \mydesc{syntax elaboration:}{\myelab{\mydeclsyn}{\mytmsyn ::= \cdots}}{
+ \footnotesize
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \myctx & \myelabt & \myreco{\mytyc{D}}{\mytele}{}{ \cdots, \myfun{f}_n : \myse{F}_n } \\
+ & \myelabf &
+
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myapp{\mytyc{D}}{\mytmsyn^{\mytele}} \mysynsep \mytyc{D}.\mydc{constr} \myappsp \mytmsyn^{n} \mysynsep \cdots \mysynsep \mytyc{D}.\myfun{f}_n \myappsp \mytmsyn \\
+ \end{array}
+ \end{array}
+ $
+}
+
+ \mynegder
+
+\mydesc{context elaboration:}{\myelab{\mydeclsyn}{\myctx}}{
+ \footnotesize
+ \AxiomC{$
+ \begin{array}{c}
+ \myinf{\mytele \myarr \mytyp}{\mytyp}\hspace{0.8cm}
+ \mytyc{D} \not\in \myctx \\
+ \myinff{\myctx; \mytele; (\myb{f}_j : \myse{F}_j)_{j=1}^{i - 1}}{F_i}{\mytyp} \myind{3} (1 \le i \le n)
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }c@{\ }l}
+ \myctx & \myelabt & \myreco{\mytyc{D}}{\mytele}{}{ \cdots, \myfun{f}_n : \myse{F}_n } \\
+ & & \vspace{-0.2cm} \\
+ & \myelabf & \myctx;\ \mytyc{D} : \mytele \myarr \mytyp;\ \cdots;\ \mytyc{D}.\myfun{f}_n : \mytele \myarr (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr \mysub{\myse{F}_n}{\myb{f}_i}{\myapp{\myfun{f}_i}{\myb{x}}}_{i = 1}^{n-1}; \\
+ & & \mytyc{D}.\mydc{constr} : \mytele \myarr \myse{F}_1 \myarr \cdots \myarr \myse{F}_n \myarr \myapp{\mytyc{D}}{\mytelee};
+ \end{array}
+ $}
+ \DisplayProof
+}
+
+ \mynegder
+
+ \mydesc{reduction elaboration:}{\mydeclsyn \myelabf \myctx \vdash \mytmsyn \myred \mytmsyn}{
+ \footnotesize
+ $\myreco{\mytyc{D}}{\mytele}{}{ \cdots, \myfun{f}_n : \myse{F}_n } \ \ \myelabf$
+ \AxiomC{$\mytyc{D} \in \myctx$}
+ \UnaryInfC{$\myctx \vdash \myapp{\mytyc{D}.\myfun{f}_i}{(\mytyc{D}.\mydc{constr} \myappsp \vec{t})} \myred t_i$}
+ \DisplayProof
+ }
+
+ \caption{Elaboration for data types and records.}
+ \label{fig:elab}
+\end{figure}
+
+Following the intuition given by the examples, the mechanised
+elaboration is presented in figure \ref{fig:elab}, which is essentially
+a modification of figure 9 of \cite{McBride2004}\footnote{However, our
+ data types do not have indices, we perform bidirectional type checking
+ by treating constructors/destructors as syntactic constructs, and we
+ have records.}.
+
+In data type declarations we allow recursive occurrences as long as
+they are \emph{strictly positive}, employing a syntactic check to make
+sure that this is the case. See \cite{Dybjer1991} for a more formal
+treatment of inductive definitions in ITT.
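+For example, a declaration like the following, where the defined type
+occurs to the left of an arrow in one of its own constructors, would be
+rejected by the check:
+\[
+\myadt{\mytyc{Bad}}{}{ }{\mydc{bad} \myappsp (\mytyc{Bad} \myarr \mytyc{Bad})}
+\]
+Admitting such non-strictly positive declarations would let us define
+non-terminating terms, undermining the consistency of the system.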
+
+For what concerns records, recursive occurrences are disallowed. The
+reason for this restriction lies in the reason for having records at
+all: we need records to give the user types with $\eta$-laws for
+equality, as we saw in Section \ref{sec:eta-expand} and in the
+treatment of OTT in Section \ref{sec:ott}. If we tried to
+$\eta$-expand recursive records, we would expand forever.
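+Consider for instance a hypothetical record of streams,
+\[
+\myreco{\mytyc{Stream}}{\myappsp (\myb{A} {:} \mytyp)}{ }{\myfun{head} : \myb{A}, \myfun{tail} : \myapp{\mytyc{Stream}}{\myb{A}}}
+\]
+where quotation would have to expand the $\myfun{tail}$ field into
+another pair of $\myfun{head}$ and $\myfun{tail}$, and so on without
+ever reaching a base case.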
+
+To implement bidirectional type checking for constructors and
+destructors, we store their types in full in the context, and then
+instantiate when due:
+
+\mydesc{typing:}{\myctx \vdash \mytmsyn \Leftrightarrow \mytmsyn}{
+ \AxiomC{$
+ \begin{array}{c}
+ \mytyc{D} : \mytele \myarr \mytyp \in \myctx \hspace{1cm}
+ \mytyc{D}.\mydc{c} : \mytele \mycc \mytele' \myarr
+ \myapp{\mytyc{D}}{\mytelee} \in \myctx \\
+ \mytele'' = (\mytele;\mytele')\vec{A} \hspace{1cm}
+ \mychkk{\myctx; \mytake_{i-1}(\mytele'')}{t_i}{\myix_i( \mytele'')}\ \
+ (1 \le i \le \mytele'')
+ \end{array}
+ $}
+ \UnaryInfC{$\mychk{\myapp{\mytyc{D}.\mydc{c}}{\vec{t}}}{\myapp{\mytyc{D}}{\vec{A}}}$}
+ \DisplayProof
+
+ \myderivspp
+
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \AxiomC{$\mytyc{D}.\myfun{f} : \mytele \mycc (\myb{x} {:}
+ \myapp{\mytyc{D}}{\mytelee}) \myarr \myse{F}$}
+ \AxiomC{$\myjud{\mytmt}{\myapp{\mytyc{D}}{\vec{A}}}$}
+ \TrinaryInfC{$\myinf{\myapp{\mytyc{D}.\myfun{f}}{\mytmt}}{(\mytele
+ \mycc (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr
+ \myse{F})(\vec{A};\mytmt)}$}
+ \DisplayProof
+ }
+
+\subsubsection{Why user defined types? Why eliminators?}
+
+The `hardest' design choice when designing $\mykant$\ was to decide
+whether user defined types should be included. In the end, we can
+devise
+
+% TODO reference levitated theories, indexed containers
+
+foobar
+
+\subsection{Cumulative hierarchy and typical ambiguity}
+\label{sec:term-hierarchy}
+
+Having a well founded type hierarchy is crucial if we want to retain
+consistency, otherwise we can break our type systems by proving bottom,
+as shown in Appendix \ref{sec:hurkens}.
+
+However, the hierarchy as presented in Section \ref{sec:itt} is a
+considerable burden on the user, on various levels. Consider for
+example how we recovered disjunctions in Section \ref{sec:disju}: we
+have a function that takes two $\mytyp_0$ and forms a new $\mytyp_0$.
+What if we wanted to form a disjunction containing a $\mytyp_1$, or a
+$\mytyp_{42}$? Our definition would fail us, since
+$\mytyp_1 : \mytyp_2$.
+
+\begin{figure}[b!]
+
+ % TODO finish
+\mydesc{cumulativity:}{\myctx \vdash \mytmsyn \mycumul \mytmsyn}{
+ \begin{tabular}{ccc}
+ \AxiomC{$\myctx \vdash \mytya \mydefeq \mytyb$}
+ \UnaryInfC{$\myctx \vdash \mytya \mycumul \mytyb$}
+ \DisplayProof
+ &
+ \AxiomC{\phantom{$\myctx \vdash \mytya \mydefeq \mytyb$}}
+ \UnaryInfC{$\myctx \vdash \mytyp_l \mycumul \mytyp_{l+1}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myctx \vdash \mytya \mycumul \mytyb$}
+ \AxiomC{$\myctx \vdash \mytyb \mycumul \myse{C}$}
+ \BinaryInfC{$\myctx \vdash \mytya \mycumul \myse{C}$}
+ \DisplayProof
+ \end{tabular}
+
+ \myderivspp
+
+ \begin{tabular}{ccc}
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \AxiomC{$\myctx \vdash \mytya \mycumul \mytyb$}
+ \BinaryInfC{$\myjud{\mytmt}{\mytyb}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myctx \vdash \mytya_1 \mydefeq \mytya_2$}
+ \AxiomC{$\myctx; \myb{x} : \mytya_1 \vdash \mytyb_1 \mycumul \mytyb_2$}
+ \BinaryInfC{$\myctx (\myb{x} {:} \mytya_1) \myarr \mytyb_1 \mycumul (\myb{x} {:} \mytya_2) \myarr \mytyb_2$}
+ \DisplayProof
+ \end{tabular}
+}
+\caption{Cumulativity rules for base types in \mykant, plus a
+ `conversion' rule for cumulative types.}
+ \label{fig:cumulativity}
+\end{figure}
+
+One way to solve this issue is a \emph{cumulative} hierarchy, where
+$\mytyp_{l_1} : \mytyp_{l_2}$ iff $l_1 < l_2$. This way we retain
+consistency, while allowing for `large' definitions that work on small
+types too. Figure \ref{fig:cumulativity} gives a formal definition of
+cumulativity for types, abstractions, and data constructors.
+
+For example we might define our disjunction to be
+\[
+ \myarg\myfun{$\vee$}\myarg : \mytyp_{100} \myarr \mytyp_{100} \myarr \mytyp_{100}
+\]
+and hope that $\mytyp_{100}$ will be large enough to fit all the types
+that we want to use with our disjunction. However, there are two
+problems with this. First, there is the obvious clumsiness of having to
+manually specify the size of types. More importantly, if we want to use
+$\myfun{$\vee$}$ itself as an argument to other type formers, we need to
+make sure that those allow for types at least as large as
+$\mytyp_{100}$.
+
+A better option is to employ a mechanised version of what Russell called
+\emph{typical ambiguity}: we let the user live under the illusion that
+$\mytyp : \mytyp$, but check that the statements about types are
+consistent behind the scenes. $\mykant$\ implements this following the
+lines of \cite{Huet1988}. See also \citep{Harper1991} for a published
+reference, although it describes a more complex system allowing for both
+explicit and implicit hierarchies at the same time.
+
+We define a partial ordering on the levels, with both weak ($\le$) and
+strong ($<$) constraints---the laws governing them being the same as the
+ones governing $<$ and $\le$ for the natural numbers. Each occurrence
+of $\mytyp$ is decorated with a unique reference, and we keep a set of
+constraints and add new constraints as we type check, generating new
+references when needed.
+
+For example, when type checking the type $\mytyp\, r_1$, where $r_1$
+denotes the unique reference assigned to that term, we will generate a
+fresh reference $r_2$, type the expression as $\mytyp\, r_2$, and add
+the constraint $r_1 < r_2$ to the set. When type checking $\myctx
+\vdash \myfora{\myb{x}}{\mytya}{\mytyb}$, if $\myctx \vdash \mytya :
+\mytyp\, r_1$ and $\myctx; \myb{x} : \mytya \vdash \mytyb :
+\mytyp\,r_2$, we will generate a fresh reference $r$ and add $r_1 \le
+r$ and $r_2 \le r$ to the set.
+
+If at any point the constraint set becomes inconsistent, type checking
+fails. Moreover, when comparing two $\mytyp$ terms we equate their
+respective references with two $\le$ constraints---the details are
+explained in Section \ref{sec:hier-impl}.
+
+Another, more flexible but also more verbose, alternative is the one
+chosen by Agda, where levels can be quantified so that the relationship
+between the arguments and the result of a type former can be expressed
+explicitly:
+\[
+\myarg\myfun{$\vee$}\myarg : (l_1\, l_2 : \mytyc{Level}) \myarr \mytyp_{l_1} \myarr \mytyp_{l_2} \myarr \mytyp_{l_1 \mylub l_2}
+\]
+Inference algorithms to automatically derive this kind of relationship
+are currently a subject of research. We chose the less flexible but
+more concise way, since it is easier to implement and better understood.
+
+% \begin{figure}[t]
+% % TODO do this
+% \caption{Constraints generated by the typical ambiguity engine. We
+% assume some global set of constraints with the ability of generating
+% fresh references.}
+% \label{fig:hierarchy}
+% \end{figure}
+
+\subsection{Observational equality, \mykant\ style}
+
+There are two related differences between $\mykant$\ and the theory
+used to present OTT. The first is that in $\mykant$ we have a type
+hierarchy, which lets us, for example, abstract over types. The second
+is that we let the user define inductive types.
+
+Reconciling propositions for OTT and a hierarchy had already been
+investigated by Conor McBride,\footnote{See
+ \url{http://www.e-pig.org/epilogue/index.html?p=1098.html}.} and we
+follow his broad design plan, although with some modifications. Most of
+the work, as an extension of elaboration, is to handle reduction rules
+and coercions for data types---both type constructors and data
+constructors.
+
+\subsubsection{The \mykant\ prelude, and $\myprop$ositions}
+
+Before defining $\myprop$, we define some basic types inside $\mykant$,
+as the target for the $\myprop$ decoder:
+\[
+\begin{array}{l}
+ \myadt{\mytyc{Empty}}{}{ }{ } \\
+ \myfun{absurd} : (\myb{A} {:} \mytyp) \myarr \mytyc{Empty} \myarr \myb{A} \mapsto \\
+ \myind{2} \myabs{\myb{A}\ \myb{bot}}{\mytyc{Empty}.\myfun{elim} \myappsp \myb{bot} \myappsp (\myabs{\_}{\myb{A}})} \\
+ \ \\
+
+ \myreco{\mytyc{Unit}}{}{}{ } \\ \ \\
+
+ \myreco{\mytyc{Prod}}{\myappsp (\myb{A}\ \myb{B} {:} \mytyp)}{ }{\myfun{fst} : \myb{A}, \myfun{snd} : \myb{B} }
+\end{array}
+\]
+When using $\mytyc{Prod}$, we shall use $\myprod$ to define `nested'
+products, and $\myproj{n}$ to project elements from them, so that
+\[
+\begin{array}{@{}l}
+\mytya \myprod \mytyb = \mytyc{Prod} \myappsp \mytya \myappsp (\mytyc{Prod} \myappsp \mytyb \myappsp \myunit) \\
+\mytya \myprod \mytyb \myprod \myse{C} = \mytyc{Prod} \myappsp \mytya \myappsp (\mytyc{Prod} \myappsp \mytyb \myappsp (\mytyc{Prod} \myappsp \myse{C} \myappsp \myunit)) \\
+\myind{2} \vdots \\
+\myproj{1} : \mytyc{Prod} \myappsp \mytya \myappsp \mytyb \myarr \mytya \\
+\myproj{2} : \mytyc{Prod} \myappsp \mytya \myappsp (\mytyc{Prod} \myappsp \mytyb \myappsp \myse{C}) \myarr \mytyb \\
+\myind{2} \vdots
+\end{array}
+\]
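+Concretely, the first two projections might be implemented in terms of
+the $\mytyc{Prod}$ destructors---a sketch:
+\[
+\begin{array}{@{}l}
+ \myproj{1} \myappsp \myb{p} \mapsto \mytyc{Prod}.\myfun{fst} \myappsp \myb{p} \\
+ \myproj{2} \myappsp \myb{p} \mapsto \mytyc{Prod}.\myfun{fst} \myappsp (\mytyc{Prod}.\myfun{snd} \myappsp \myb{p})
+\end{array}
+\]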
+In general, $\myproj{n}$ will work with all products with at least $n$
+elements. Then we can define propositions, and their decoding:
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myprdec{\myprsyn} \\
+ \myprsyn & ::= & \mybot \mysynsep \mytop \mysynsep \myprsyn \myand \myprsyn \mysynsep \myprfora{\myb{x}}{\mytmsyn}{\myprsyn}
+ \end{array}
+ $
+}
+
+\mydesc{proposition decoding:}{\myprdec{\mytmsyn} \myred \mytmsyn}{
+ \begin{tabular}{cc}
+ $
+ \begin{array}{l@{\ }c@{\ }l}
+ \myprdec{\mybot} & \myred & \myempty \\
+ \myprdec{\mytop} & \myred & \myunit
+ \end{array}
+ $
+ &
+ $
+ \begin{array}{r@{ }c@{ }l@{\ }c@{\ }l}
+ \myprdec{&\myse{P} \myand \myse{Q} &} & \myred & \myprdec{\myse{P}} \myprod \myprdec{\myse{Q}} \\
+ \myprdec{&\myprfora{\myb{x}}{\mytya}{\myse{P}} &} & \myred &
+ \myfora{\myb{x}}{\mytya}{\myprdec{\myse{P}}}
+ \end{array}
+ $
+ \end{tabular}
+}
+
+Adopting the same convention as with $\mytyp$-level products, we will
+nest $\myand$ in the same way.
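+For instance, a triple conjunction unfolds as
+\[
+\myse{P} \myand \myse{Q} \myand \myse{R} = \myse{P} \myand (\myse{Q} \myand (\myse{R} \myand \mytop))
+\]
+so that its decoding is precisely the corresponding nested product.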
+
+\subsubsection{Some OTT examples}
+
+Before presenting the direction that $\mykant$\ takes, let's consider
+some examples of user-defined data types, and the results we would
+expect given what we already know about OTT, assuming the same
+propositional equalities.
+
+\begin{description}
+
+\item[Product types] Let's consider first the already mentioned
+ dependent product, using the alternate name $\mysigma$\footnote{For
+ extra confusion, `dependent products' are often called `dependent
+ sums' in the literature, referring to the interpretation that
+ identifies the first element as a `tag' deciding the type of the
+ second element, which lets us recover sum types (disjunctions), as we
+ saw in Section \ref{sec:user-type}. Thus, $\mysigma$.} to
+ avoid confusion with the $\mytyc{Prod}$ in the prelude:
+ \[
+ \begin{array}{@{}l}
+ \myreco{\mysigma}{\myappsp (\myb{A} {:} \mytyp) \myappsp (\myb{B} {:} \myb{A} \myarr \mytyp)}{\\ \myind{2}}{\myfst : \myb{A}, \mysnd : \myapp{\myb{B}}{\myb{fst}}}
+ \end{array}
+ \]
+ Let's start with type-level equality. The result we want is
+ \[
+ \begin{array}{@{}l}
+ \mysigma \myappsp \mytya_1 \myappsp \mytyb_1 \myeq \mysigma \myappsp \mytya_2 \myappsp \mytyb_2 \myred \\
+ \myind{2} \mytya_1 \myeq \mytya_2 \myand \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{\myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2}} \myimpl \myapp{\mytyb_1}{\myb{x_1}} \myeq \myapp{\mytyb_2}{\myb{x_2}}}
+ \end{array}
+ \]
+ The difference here is that in the original presentation of OTT
+ the type binders are explicit, while here $\mytyb_1$ and $\mytyb_2$
+ are functions returning types. We can do this thanks to the type
+ hierarchy, and this hints at the fact that heterogeneous equality will
+ have to allow $\mytyp$ `to the right of the colon', and in fact this
+ provides the solution to simplify the equality above.
+
+ If we take, just as we saw previously in OTT,
+ \[
+ \begin{array}{@{}l}
+ \myjm{\myse{f}_1}{\myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}}{\myse{f}_2}{\myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}} \myred \\
+ \myind{2} \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{
+ \myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2} \myimpl
+ \myjm{\myapp{\myse{f}_1}{\myb{x_1}}}{\mytyb_1[\myb{x_1}]}{\myapp{\myse{f}_2}{\myb{x_2}}}{\mytyb_2[\myb{x_2}]}
+ }}
+ \end{array}
+ \]
+ then we can simply take
+ \[
+ \begin{array}{@{}l}
+ \mysigma \myappsp \mytya_1 \myappsp \mytyb_1 \myeq \mysigma \myappsp \mytya_2 \myappsp \mytyb_2 \myred \\ \myind{2} \mytya_1 \myeq \mytya_2 \myand \myjm{\mytyb_1}{\mytya_1 \myarr \mytyp}{\mytyb_2}{\mytya_2 \myarr \mytyp}
+ \end{array}
+ \]
+ which will reduce to precisely what we desire. For what
+ concerns coercions and quotation, things stay the same (apart from the
+ fact that we apply the type function instead of substituting into a
+ binder). We can recognise records such as $\mysigma$ as such and
+ employ projections in value equality and coercions, so as not to
+ impede progress when not necessary.
+
+\item[Lists] Now for finite lists, which will give us a taste of data
+ constructors:
+ \[
+ \begin{array}{@{}l}
+ \myadt{\mylist}{\myappsp (\myb{A} {:} \mytyp)}{ }{\mydc{nil} \mydcsep \mydc{cons} \myappsp \myb{A} \myappsp (\myapp{\mylist}{\myb{A}})}
+ \end{array}
+ \]
+ Type equality is simple---we only need to compare the parameter:
+ \[
+ \mylist \myappsp \mytya_1 \myeq \mylist \myappsp \mytya_2 \myred \mytya_1 \myeq \mytya_2
+ \]
+ For coercions, we transport based on the constructor, recycling the
+ proof for the inductive occurrence:
+ \[
+ \begin{array}{@{}l@{\ }c@{\ }l}
+ \mycoe \myappsp (\mylist \myappsp \mytya_1) \myappsp (\mylist \myappsp \mytya_2) \myappsp \myse{Q} \myappsp \mydc{nil} & \myred & \mydc{nil} \\
+ \mycoe \myappsp (\mylist \myappsp \mytya_1) \myappsp (\mylist \myappsp \mytya_2) \myappsp \myse{Q} \myappsp (\mydc{cons} \myappsp \mytmm \myappsp \mytmn) & \myred & \\
+ \multicolumn{3}{l}{\myind{2} \mydc{cons} \myappsp (\mycoe \myappsp \mytya_1 \myappsp \mytya_2 \myappsp \myse{Q} \myappsp \mytmm) \myappsp (\mycoe \myappsp (\mylist \myappsp \mytya_1) \myappsp (\mylist \myappsp \mytya_2) \myappsp \myse{Q} \myappsp \mytmn)}
+ \end{array}
+ \]
+ Value equality is unsurprising---we match the constructors, and
+ return bottom for mismatches. However, we also need to equate the
+ parameter in $\mydc{nil}$:
+ \[
+ \begin{array}{r@{ }c@{\ }c@{\ }c@{}l@{\ }c@{\ }r@{}c@{\ }c@{\ }c@{}l@{\ }l}
+ (& \mydc{nil} & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{nil} & : & \myapp{\mylist}{\mytya_2} &) \myred \mytya_1 \myeq \mytya_2 \\
+ (& \mydc{cons} \myappsp \mytmm_1 \myappsp \mytmn_1 & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{cons} \myappsp \mytmm_2 \myappsp \mytmn_2 & : & \myapp{\mylist}{\mytya_2} &) \myred \\
+ & \multicolumn{11}{@{}l}{ \myind{2}
+ \myjm{\mytmm_1}{\mytya_1}{\mytmm_2}{\mytya_2} \myand \myjm{\mytmn_1}{\myapp{\mylist}{\mytya_1}}{\mytmn_2}{\myapp{\mylist}{\mytya_2}}
+ } \\
+ (& \mydc{nil} & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{cons} \myappsp \mytmm_2 \myappsp \mytmn_2 & : & \myapp{\mylist}{\mytya_2} &) \myred \mybot \\
+ (& \mydc{cons} \myappsp \mytmm_1 \myappsp \mytmn_1 & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{nil} & : & \myapp{\mylist}{\mytya_2} &) \myred \mybot
+ \end{array}
+ \]
+ % TODO quotation
+
+\item[Evil type]
+ Now for something useless but complicated.
+
+\end{description}
+
+\subsubsection{Only one equality}
+
+Given the examples above, a more `flexible' heterogeneous emerged, since
+of the fact that in $\mykant$ we re-gain the possibility of abstracting
+and in general handling sets in a way that was not possible in the
+original OTT presentation. Moreover, we found that the rules for value
+equality work very well if used with user defined type
+abstractions---for example in the case of dependent products we recover
+the original definition with explicit binders, in a very simple manner.
+
+In fact, we can drop a separate notion of type-equality, which will
+simply be served by $\myjm{\mytya}{\mytyp}{\mytyb}{\mytyp}$, from now on
+abbreviated as $\mytya \myeq \mytyb$. We shall still distinguish
+equalities relating types for hierarchical purposes. The full rules for
+equality reductions, along with the syntax for propositions, are given
+in figure \ref{fig:kant-eq-red}. We exploit records to perform
+$\eta$-expansion. Moreover, given the nested $\myand$s, values of data
+types with zero constructors (such as $\myempty$) and of records with
+zero destructors (such as $\myunit$) will always be identified as equal
+automatically.
+
+\begin{figure}[p]
+\mydesc{syntax}{ }{
+ \small
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \myprsyn & ::= & \cdots \mysynsep \myjm{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ \end{array}
+ $
+}
+
+ % \mytmsyn & ::= & \cdots \mysynsep \mycoee{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \mysynsep
+ % \mycohh{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ % \myprsyn & ::= & \cdots \mysynsep \myjm{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+
+% \mynegder
+
+% \mydesc{typing:}{\myctx \vdash \mytmsyn \Leftrightarrow \mytmsyn}{
+% \small
+% \begin{tabular}{cc}
+% \AxiomC{$\myjud{\myse{P}}{\myprdec{\mytya \myeq \mytyb}}$}
+% \AxiomC{$\myjud{\mytmt}{\mytya}$}
+% \BinaryInfC{$\myinf{\mycoee{\mytya}{\mytyb}{\myse{P}}{\mytmt}}{\mytyb}$}
+% \DisplayProof
+% &
+% \AxiomC{$\myjud{\myse{P}}{\myprdec{\mytya \myeq \mytyb}}$}
+% \AxiomC{$\myjud{\mytmt}{\mytya}$}
+% \BinaryInfC{$\myinf{\mycohh{\mytya}{\mytyb}{\myse{P}}{\mytmt}}{\myprdec{\myjm{\mytmt}{\mytya}{\mycoee{\mytya}{\mytyb}{\myse{P}}{\mytmt}}{\mytyb}}}$}
+% \DisplayProof
+% \end{tabular}
+% }
+
+\mynegder
+
+\mydesc{propositions:}{\myjud{\myprsyn}{\myprop}}{
+ \small
+ \begin{tabular}{cc}
+ \AxiomC{\phantom{$\myjud{\myse{P}}{\myprop}$}}
+ \UnaryInfC{$\myjud{\mytop}{\myprop}$}
+ \noLine
+ \UnaryInfC{$\myjud{\mybot}{\myprop}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\myse{P}}{\myprop}$}
+ \AxiomC{$\myjud{\myse{Q}}{\myprop}$}
+ \BinaryInfC{$\myjud{\myse{P} \myand \myse{Q}}{\myprop}$}
+ \noLine
+ \UnaryInfC{\phantom{$\myjud{\mybot}{\myprop}$}}
+ \DisplayProof
+ \end{tabular}
+
+ \myderivspp
+
+ \begin{tabular}{cc}
+ \AxiomC{$
+ \begin{array}{@{}c}
+ \phantom{\myjud{\myse{A}}{\mytyp} \hspace{0.8cm} \myjud{\mytmm}{\myse{A}}} \\
+ \myjud{\myse{A}}{\mytyp}\hspace{0.8cm}
+ \myjudd{\myctx; \myb{x} : \mytya}{\myse{P}}{\myprop}
+ \end{array}
+ $}
+ \UnaryInfC{$\myjud{\myprfora{\myb{x}}{\mytya}{\myse{P}}}{\myprop}$}
+ \DisplayProof
+ &
+ \AxiomC{$
+ \begin{array}{c}
+ \myjud{\myse{A}}{\mytyp} \hspace{0.8cm} \myjud{\mytmm}{\myse{A}} \\
+ \myjud{\myse{B}}{\mytyp} \hspace{0.8cm} \myjud{\mytmn}{\myse{B}}
+ \end{array}
+ $}
+ \UnaryInfC{$\myjud{\myjm{\mytmm}{\myse{A}}{\mytmn}{\myse{B}}}{\myprop}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+\mynegder
+ % TODO equality for decodings
+\mydesc{equality reduction:}{\myctx \vdash \myprsyn \myred \myprsyn}{
+ \small
+ \begin{tabular}{cc}
+ \AxiomC{}
+ \UnaryInfC{$\myctx \vdash \myjm{\mytyp}{\mytyp}{\mytyp}{\mytyp} \myred \mytop$}
+ \DisplayProof
+ &
+ \AxiomC{}
+ \UnaryInfC{$\myctx \vdash \myjm{\myprdec{\myse{P}}}{\mytyp}{\myprdec{\myse{Q}}}{\mytyp} \myred \mytop$}
+ \DisplayProof
+ \end{tabular}
+
+ \myderivspp
+
+ \AxiomC{}
+ \UnaryInfC{$
+ \begin{array}{@{}r@{\ }l}
+ \myctx \vdash &
+ \myjm{\myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}}{\mytyp}{\myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}}{\mytyp} \myred \\
+ & \myind{2} \mytya_2 \myeq \mytya_1 \myand \myprfora{\myb{x_2}}{\mytya_2}{\myprfora{\myb{x_1}}{\mytya_1}{
+ \myjm{\myb{x_2}}{\mytya_2}{\myb{x_1}}{\mytya_1} \myimpl \mytyb_1[\myb{x_1}] \myeq \mytyb_2[\myb{x_2}]
+ }}
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivspp
+
+ \AxiomC{}
+ \UnaryInfC{$
+ \begin{array}{@{}r@{\ }l}
+ \myctx \vdash &
+ \myjm{\myse{f}_1}{\myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}}{\myse{f}_2}{\myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}} \myred \\
+ & \myind{2} \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{
+ \myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2} \myimpl
+ \myjm{\myapp{\myse{f}_1}{\myb{x_1}}}{\mytyb_1[\myb{x_1}]}{\myapp{\myse{f}_2}{\myb{x_2}}}{\mytyb_2[\myb{x_2}]}
+ }}
+ \end{array}
+ $}
+ \DisplayProof
+
+
+ \myderivspp
+
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }l}
+ \myctx \vdash &
+ \myjm{\mytyc{D} \myappsp \vec{A}}{\mytyp}{\mytyc{D} \myappsp \vec{B}}{\mytyp} \myred \\
+ & \myind{2} \mybigand_{i = 1}^n (\myjm{\mytya_i}{\myhead(\mytele(A_1 \cdots A_{i-1}))}{\mytyb_i}{\myhead(\mytele(B_1 \cdots B_{i-1}))})
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivspp
+
+ \AxiomC{$
+ \begin{array}{@{}c}
+ \mydataty(\mytyc{D}, \myctx)\hspace{0.8cm}
+ \mytyc{D}.\mydc{c} : \mytele;\mytele' \myarr \mytyc{D} \myappsp \mytelee \in \myctx \hspace{0.8cm}
+ \mytele_A = (\mytele;\mytele')\vec{A}\hspace{0.8cm}
+ \mytele_B = (\mytele;\mytele')\vec{B}
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{@{}l@{\ }l}
+ \myctx \vdash & \myjm{\mytyc{D}.\mydc{c} \myappsp \vec{\myse{l}}}{\mytyc{D} \myappsp \vec{A}}{\mytyc{D}.\mydc{c} \myappsp \vec{\myse{r}}}{\mytyc{D} \myappsp \vec{B}} \myred \\
+ & \myind{2} \mybigand_{i=1}^n(\myjm{\myse{l}_i}{\myhead(\mytele_A (\myse{l}_1 \cdots \myse{l}_{i-1}))}{\myse{r}_i}{\myhead(\mytele_B (\myse{r}_1 \cdots \myse{r}_{i-1}))})
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivspp
+
+ \AxiomC{$\mydataty(\mytyc{D}, \myctx)$}
+ \UnaryInfC{$
+ \myctx \vdash \myjm{\mytyc{D}.\mydc{c} \myappsp \vec{\myse{l}}}{\mytyc{D} \myappsp \vec{A}}{\mytyc{D}.\mydc{c'} \myappsp \vec{\myse{r}}}{\mytyc{D} \myappsp \vec{B}} \myred \mybot
+ $}
+ \DisplayProof
+
+ \myderivspp
+
+ \AxiomC{$
+ \begin{array}{@{}c}
+ \myisreco(\mytyc{D}, \myctx)\hspace{0.8cm}
+ \mytyc{D}.\myfun{f}_i : \mytele; (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr \myse{F}_i \in \myctx\\
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{@{}l@{\ }l}
+ \myctx \vdash & \myjm{\myse{l}}{\mytyc{D} \myappsp \vec{A}}{\myse{r}}{\mytyc{D} \myappsp \vec{B}} \myred \\ & \myind{2} \mybigand_{i=1}^n(\myjm{\mytyc{D}.\myfun{f}_i \myappsp \myse{l}}{(\mytele; (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr \myse{F}_i)(\vec{\mytya};\myse{l})}{\mytyc{D}.\myfun{f}_i \myappsp \myse{r}}{(\mytele; (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr \myse{F}_i)(\vec{\mytyb};\myse{r})})
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivspp
+ \AxiomC{}
+ \UnaryInfC{$\myjm{\mytmm}{\mytya}{\mytmn}{\mytyb} \myred \mybot\ \text{if $\mytya$ and $\mytyb$ are different canonical types.}$}
+ \DisplayProof
+}
+\caption{Propositions and equality reduction in $\mykant$. We assume
+ the presence of $\mydataty$ and $\myisreco$ as operations on the
+ context to recognise whether a user defined type is a data type or a
+ record.}
+ \label{fig:kant-eq-red}
+\end{figure}
+
+\subsubsection{Coercions}
+
+% \begin{figure}[t]
+% \mydesc{reduction}{\mytmsyn \myred \mytmsyn}{
+
+% }
+% \caption{Coercions in \mykant.}
+% \label{fig:kant-coe}
+% \end{figure}
+
+% TODO finish
+
+\subsubsection{$\myprop$ and the hierarchy}
+
+Where is $\myprop$ placed in the type hierarchy? The main indicator
+is the decoding operator, since it converts propositions into things
+that already live in the hierarchy. For example, if we
+have
+\[
+ \myprdec{\mynat \myarr \mybool \myeq \mynat \myarr \mybool} \myred
+ \mytop \myand ((\myb{x}\, \myb{y} : \mynat) \myarr \mytop \myarr \mytop)
+\]
+we had better make sure that the proposition `to be decoded' is at the
+same level as its reduction, so as to preserve subject reduction. In
+the example above, that proposition will have to be at least as large
+as the type of $\mynat$, since the reduced proof will abstract over it.
+Pretending that we had explicit, non-cumulative levels, it would be
+tempting to have
+\begin{center}
+\begin{tabular}{cc}
+ \AxiomC{$\myjud{\myse{Q}}{\myprop_l}$}
+ \UnaryInfC{$\myjud{\myprdec{\myse{Q}}}{\mytyp_l}$}
+ \DisplayProof
+&
+ \AxiomC{$\myjud{\mytya}{\mytyp_l}$}
+ \AxiomC{$\myjud{\mytyb}{\mytyp_l}$}
+ \BinaryInfC{$\myjud{\myjm{\mytya}{\mytyp_{l}}{\mytyb}{\mytyp_{l}}}{\myprop_l}$}
+ \DisplayProof
+\end{tabular}
+\end{center}
+with $\mybot$ and $\mytop$ living at any level, $\myand$ and $\forall$
+following rules similar to the ones for $\myprod$ and $\myarr$ in
+Section \ref{sec:itt}. However, we need to be careful with value
+equality since for example we have that
+\[
+ \myprdec{\myjm{\myse{f}_1}{\myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}}{\myse{f}_2}{\myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}}}
+ \myred
+ \myfora{\myb{x_1}}{\mytya_1}{\myfora{\myb{x_2}}{\mytya_2}{\cdots}}
+\]
+where the proposition decodes into something of type $\mytyp_l$, with
+the $\mytya_i$ and $\mytyb_i$ at $\mytyp_l$. We can resolve this
+tension by making all equalities larger:
+\begin{prooftree}
+ \AxiomC{$\myjud{\mytmm}{\mytya}$}
+ \AxiomC{$\myjud{\mytya}{\mytyp_l}$}
+ \AxiomC{$\myjud{\mytmn}{\mytyb}$}
+ \AxiomC{$\myjud{\mytyb}{\mytyp_l}$}
+ \QuaternaryInfC{$\myjud{\myjm{\mytmm}{\mytya}{\mytmn}{\mytyb}}{\myprop_l}$}
+\end{prooftree}
+This is disappointing, since type equalities will be needlessly large:
+$\myprdec{\myjm{\mytya}{\mytyp_l}{\mytyb}{\mytyp_l}} : \mytyp_{l + 1}$.
+
+However, considering that our theory is cumulative, we can do better.
+Assuming rules for $\myprop$ cumulativity similar to the ones for
+$\mytyp$, we will have (with the conversion rule reproduced as a
+reminder):
+\begin{center}
+ \begin{tabular}{cc}
+ \AxiomC{$\myctx \vdash \mytya \mycumul \mytyb$}
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \BinaryInfC{$\myjud{\mytmt}{\mytyb}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytya}{\mytyp_l}$}
+ \AxiomC{$\myjud{\mytyb}{\mytyp_l}$}
+ \BinaryInfC{$\myjud{\myjm{\mytya}{\mytyp_{l}}{\mytyb}{\mytyp_{l}}}{\myprop_l}$}
+ \DisplayProof
+ \end{tabular}
+
+ \myderivspp
+
+ \AxiomC{$\myjud{\mytmm}{\mytya}$}
+ \AxiomC{$\myjud{\mytya}{\mytyp_l}$}
+ \AxiomC{$\myjud{\mytmn}{\mytyb}$}
+ \AxiomC{$\myjud{\mytyb}{\mytyp_l}$}
+ \AxiomC{$\mytya$ and $\mytyb$ are not $\mytyp_{l'}$}
+ \QuinaryInfC{$\myjud{\myjm{\mytmm}{\mytya}{\mytmn}{\mytyb}}{\myprop_l}$}
+ \DisplayProof
+\end{center}
+
+That is, we are small when we can (type equalities) and large otherwise.
+This would not work in a non-cumulative theory because subject reduction
+would not hold. Consider for instance
+\[
+ \myjm{\mynat}{\myITE{\mytrue}{\mytyp_0}{\mytyp_0}}{\mybool}{\myITE{\mytrue}{\mytyp_0}{\mytyp_0}}
+ : \myprop_1
+\]
+which reduces to
+\[\myjm{\mynat}{\mytyp_0}{\mybool}{\mytyp_0} : \myprop_0 \]
+We need $\myprop_0$ to be $\myprop_1$ too, which will be the case with
+cumulativity. This is not the most elegant of systems, but it buys us a
+cheap type level equality without having to replicate functionality with
+a dedicated construct.
+
+\subsubsection{Quotation and definitional equality}
+\label{sec:kant-irr}
+
+Now we can give an account of definitional equality, by explaining how
+to perform quotation (as defined in Section \ref{sec:eta-expand})
+towards the goal described in Section \ref{sec:ott-quot}.
+
+We want to:
+\begin{itemize}
+\item Perform $\eta$-expansion on functions and records.
+
+\item As a consequence of the previous point, identify all records with
+no projections as equal, since they will have only one element.
+
+\item Identify all members of types with no elements as equal.
+
+\item Identify all equivalent proofs as equal---with `equivalent proof'
+we mean those proving the same propositions.
+
+\item Advance coercions working across definitionally equal types.
+\end{itemize}
+Towards these goals, and following the intuition behind bidirectional
+type checking, we define two mutually recursive functions: one quoting
+canonical terms against their types (since we need the type to type
+check canonical terms), and one quoting neutral terms while recovering
+their types.
+
+Our quotation will work on normalised terms, so that all defined values
+will have been replaced. Moreover, we match on datatype eliminators and
+all their arguments, so that $\mynat.\myfun{elim} \myappsp \vec{\mytmt}$
+will stand for $\mynat.\myfun{elim}$ applied to the scrutinised
+$\mynat$, plus its three arguments. This measure can be easily
+implemented by checking the head of applications and `consuming' the
+needed terms.
+
+% TODO finish this
+\begin{figure}[t]
+ \mydesc{canonical quotation}{\myctx \vdash \mytmsyn \myquot \mytmsyn \mapsto \mytmsyn}{
+
+ }
+
+ \mynegder
+
+ \mydesc{neutral quotation}{\myctx \vdash \mynquot \mytmsyn \mapsto \mytmsyn : \mytmsyn}{
+
+ }
+ \caption{Quotation in \mykant.}
+ \label{fig:kant-quot}
+\end{figure}
+
+% TODO finish
+
+\subsubsection{Why $\myprop$?}
+
+It is worth asking whether $\myprop$ is needed at all. It is perfectly
+possible to have the type checker identify propositional types
+automatically, and in fact in some sense we already do so during
+equality reduction and quotation. However, this has the considerable
+disadvantage that we can never identify abstracted
+variables\footnote{And in general neutral terms, although we currently
+ don't have neutral propositions.} of type $\mytyp$ as $\myprop$, thus
+forbidding the user from talking about $\myprop$ explicitly.
+
+This is a considerable impediment, for example when implementing
+\emph{quotient types}. With quotients, we let the user specify an
+equivalence class over a certain type, and then exploit this in various
+ways---crucially, we need to be sure that the equivalence given is
+propositional, a fact which prevented the use of quotients in dependent
+type theories \citep{Jacobs1994}.
+
+% TODO finish
+
+\section{\mykant : The practice}
+\label{sec:kant-practice}
+
+The codebase consists of around 2500 lines of Haskell, as reported by
+the \texttt{cloc} utility. The high level design is inspired by the
+work on various incarnations of Epigram, and specifically by the first
+version as described in \citep{McBride2004} and by the codebase for the
+new version.\footnote{Available intermittently as a \texttt{darcs}
+repository at \url{http://sneezy.cs.nott.ac.uk/darcs/Pig09}.} In many
+ways \mykant\ is something in between the first and second versions of
+Epigram.
+
+The author learnt the hard way the implementation challenges of such a
+project, and while there is a solid and working base to build on, the
+implementation of observational equality is not currently complete.
+However, given the detailed plan in the previous section, completing it
+should not prove to be too much work.
+
+The interaction happens in a read-eval-print loop (REPL). The REPL is
+available both as a command line application and as a web interface,
+which can be found at \url{kant.mazzo.li} and presents itself as in
+figure \ref{fig:kant-web}.
+
+\begin{figure}[t]
+{\small\begin{verbatim}B E R T U S
+Version 0.0, made in London, year 2013.
+>>> :h
+<decl> Declare value/data type/record
+:t <term> Typecheck
+:e <term> Normalise
+:p <term> Pretty print
+:l <file> Load file
+:r <file> Reload file (erases previous environment)
+:i <name> Info about an identifier
+:q Quit
+>>> :l data/samples/good/common.ka
+OK
+>>> :e plus three two
+suc (suc (suc (suc (suc zero))))
+>>> :t plus three two
+Type: Nat\end{verbatim}
+}
+
+ \caption{A sample run of the \mykant\ prompt.}
+ \label{fig:kant-web}
+\end{figure}
+
+The interaction with the user takes place in a loop living in, and
+updating, a context of \mykant\ declarations. A cycle starts with the
+user inputting a \mykant\ declaration or another REPL command, which
+then goes through various stages that can end up in a context update,
+or in a failure of some kind. The process is described
+diagrammatically in figure \ref{fig:kant-process}:
+
+\begin{description}
+\item[Parse] In this phase the text input gets converted to a sugared
+ version of the core language.
+
+\item[Desugar] The sugared declaration is converted to a core term.
+
+\item[Reference] Occurrences of $\mytyp$ get decorated by a unique reference,
+ which is necessary to implement the type hierarchy check.
+
+\item[Elaborate] Converts the declaration to some context items, which
+ might be a value declaration (type and body) or a data type
+ declaration (constructors and destructors). This phase works in
+ tandem with \textbf{Type checking}, which in turn needs to
+ \textbf{Evaluate} terms.
+
+\item[Distill] and report the result. `Distilling' refers to the process of
+ converting a core term back to a sugared version that the user can
+ visualise. This can be necessary both to display errors containing
+ terms and to display the results of evaluation or type checking that
+ the user has requested.
+
+\item[Pretty print] Format the terms in a nice way, and display the result to
+ the user.
+
+\end{description}
+
+\begin{figure}
+ \centering{\mysmall
+ \tikzstyle{block} = [rectangle, draw, text width=5em, text centered, rounded
+ corners, minimum height=2.5em, node distance=0.7cm]
+
+ \tikzstyle{decision} = [diamond, draw, text width=4.5em, text badly
+ centered, inner sep=0pt, node distance=0.7cm]
+
+ \tikzstyle{line} = [draw, -latex']
+
+ \tikzstyle{cloud} = [draw, ellipse, minimum height=2em, text width=5em, text
+ centered, node distance=1.5cm]
+
+
+ \begin{tikzpicture}[auto]
+ \node [cloud] (user) {User};
+ \node [block, below left=1cm and 0.1cm of user] (parse) {Parse};
+ \node [block, below=of parse] (desugar) {Desugar};
+ \node [block, below=of desugar] (reference) {Reference};
+ \node [block, below=of reference] (elaborate) {Elaborate};
+ \node [block, left=of elaborate] (tycheck) {Typecheck};
+ \node [block, left=of tycheck] (evaluate) {Evaluate};
+ \node [decision, right=of elaborate] (error) {Error?};
+ \node [block, right=of parse] (distill) {Distill};
+ \node [block, right=of desugar] (update) {Update context};
+
+ \path [line] (user) -- (parse);
+ \path [line] (parse) -- (desugar);
+ \path [line] (desugar) -- (reference);
+ \path [line] (reference) -- (elaborate);
+ \path [line] (elaborate) edge[bend right] (tycheck);
+ \path [line] (tycheck) edge[bend right] (elaborate);
+ \path [line] (elaborate) -- (error);
+ \path [line] (error) edge[out=0,in=0] node [near start] {yes} (distill);
+ \path [line] (error) -- node [near start] {no} (update);
+ \path [line] (update) -- (distill);
+ \path [line] (distill) -- (user);
+ \path [line] (tycheck) edge[bend right] (evaluate);
+ \path [line] (evaluate) edge[bend right] (tycheck);
+ \end{tikzpicture}
+ }
+ \caption{High level overview of the life of a \mykant\ prompt cycle.}
+ \label{fig:kant-process}
+\end{figure}
+
+Here we will review only a sampling of the more interesting challenges
+that arise when implementing an interactive theorem prover.
+
+\subsection{Term and context representation}
+\label{sec:term-repr}
+
+\subsubsection{Naming and substituting}
+
+Perhaps surprisingly, one of the most fundamental challenges in
+implementing a theory of the kind presented is choosing a good data type
+for terms, and specifically handling substitutions in a sane way.
+
+There are two broad schools of thought when it comes to naming
+variables, and thus substituting:
+\begin{description}
+\item[Nameful] Bound variables are represented by some enumerable data
+ type, just as we have described up to now, starting from Section
+ \ref{sec:untyped}. The problem is that avoiding name capture is a
+ nightmare, both in the sense that it is not performant---given that we
+ need to rename (substitute) each time we `enter' a binder---but most
+ importantly because in even slightly more complicated systems it is
+ very hard to get right, even for experts.
+
+ One of the sore spots of explicit names is comparing terms up to
+ $\alpha$-renaming, which again generates a huge amount of
+ substitutions and requires special care. We can represent the
+ relationship between variables and their binders, by...
+
+\item[Nameless] ...getting rid of names altogether, and representing
+ bound variables with an index referring to the `binding' structure, a
+ notion introduced by \cite{de1972lambda}. Classically $0$ represents
+ the variable bound by the innermost binding structure, $1$ the
+ second-innermost, and so on. For instance with simple abstractions we
+ might have
+ \[\mymacol{red}{\lambda}\, (\mymacol{blue}{\lambda}\, \mymacol{blue}{0}\, (\mymacol{AgdaInductiveConstructor}{\lambda\, 0}))\, (\mymacol{AgdaFunction}{\lambda}\, \mymacol{red}{1}\, \mymacol{AgdaFunction}{0}) : ((\mytya \myarr \mytya) \myarr \mytyb) \myarr \mytyb \]
+
+ While this technique is obviously terrible in terms of human
+ usability,\footnote{With some people going as far as defining it akin
+ to an inverse Turing test.} it is much more convenient as an
+ internal representation to deal with terms mechanically---at least in
+ simple cases. Moreover, $\alpha$ renaming ceases to be an issue, and
+ term comparison is purely syntactical.
+
+ Nonetheless, more complex constructs such as pattern matching put
+ some strain on the indices, and many systems end up using explicit
+ names anyway (Agda, Coq, \dots).
+
+\end{description}
+
+In the past decade or so, advancements in Haskell's type system and in
+general the spread of new programming practices have made the second
+option much more amenable. \mykant\ thus takes the second path through
+the use of Edward Kmett's excellent \texttt{bound}
+library.\footnote{Available at
+\url{http://hackage.haskell.org/package/bound}.} We describe its
+advantages but also its pitfalls in the previously relatively unknown
+territory of dependent types---\texttt{bound} having been created mostly
+to handle simpler, simply typed systems.
+
+\texttt{bound} builds on two core ideas. The first, a suggestion by
+\cite{Bird1999}, consists of parametrising the term type over the type
+of the variables, `nesting' the type each time we enter a scope. If we
+wanted to define a term type for the untyped $\lambda$-calculus, we
+might have
+\begin{Verbatim}
+data Void
+
+data Var v = Bound | Free v
+
+data Tm v
+ = V v -- Variable
+ | App (Tm v) (Tm v) -- Term application
+ | Lam (Tm (Var v)) -- Abstraction
+\end{Verbatim}
+Closed terms would be of type \texttt{Tm Void}, so that there can be no
+free variables at the top level. However, inside an abstraction, we can
+have \texttt{V Bound}, representing the bound variable, and inside a
+second abstraction we can have \texttt{V Bound} or \texttt{V (Free
+Bound)}. Thus the term $\myabs{\myb{x}}{\myabs{\myb{y}}{\myb{x}}}$ could be represented as
+\begin{Verbatim}
+Lam (Lam (V (Free Bound)))
+\end{Verbatim}
+This allows us to reflect the `nestedness' of a term at the type level,
+and since we usually work with functions polymorphic in the parameter
+\texttt{v}, it is very hard to make mistakes by putting terms of the
+wrong nestedness where they don't belong.
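+For example, the closed term $\myabs{\myb{x}}{\myapp{\myb{x}}{\myb{x}}}$
+could be represented as follows (\texttt{selfApp} being a name of our
+choosing):
+\begin{Verbatim}
+-- The closed term \x -> x x: both occurrences of the bound variable
+-- are `V Bound', and `Tm Void' guarantees that no free variable can
+-- appear at the top level.
+selfApp :: Tm Void
+selfApp = Lam (App (V Bound) (V Bound))
+\end{Verbatim}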
+
+Even more interestingly, the substitution operation is perfectly
+captured by the \verb|>>=| (bind) operator of the \texttt{Monad}
+typeclass:
+\begin{Verbatim}
+class Monad m where
+ return :: a -> m a
+ (>>=) :: m a -> (a -> m b) -> m b
+
+instance Monad Tm where
+ -- `return'ing turns a variable into a `Tm'
+ return = V
+
+ -- `t >>= f' takes a term `t' and a mapping from variables to terms
+ -- `f' and applies `f' to all the variables in `t', replacing them
+ -- with the mapped terms.
+ V v >>= f = f v
+ App m n >>= f = App (m >>= f) (n >>= f)
+
+ -- `Lam' is the tricky case: we modify the function to work with bound
+ -- variables, so that if it encounters `Bound' it leaves it untouched
+ -- (since the mapping referred to the outer scope); if it encounters a
+ -- free variable it asks `f' for the term and then updates all the
+ -- variables to make them refer to the outer scope they were meant to
+ -- be in.
+ Lam s >>= f = Lam (s >>= bump)
+ where bump Bound = return Bound
+ bump (Free v) = f v >>= V . Free
+\end{Verbatim}
+With this in mind, we can define functions which will not only work on
+\verb|Tm|, but on any \verb|Monad|!
+\begin{Verbatim}
+-- Replaces free variable `v' with `m' in `n'.
+subst :: (Eq v, Monad m) => v -> m v -> m v -> m v
+subst v m n = n >>= \v' -> if v == v' then m else return v'
+
+-- Replace the variable bound by `s' with term `t'.
+inst :: Monad m => m v -> m (Var v) -> m v
+inst t s = do v <- s
+ case v of
+ Bound -> t
+ Free v' -> return v'
+\end{Verbatim}
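+As a small usage example---not part of the library---reducing a
+top-level $\beta$-redex is a one-liner with \texttt{inst}:
+\begin{Verbatim}
+-- Reduce a top-level redex: `(\. s) t' becomes `s' with the bound
+-- variable instantiated to `t'; other terms are left untouched.
+beta :: Tm v -> Tm v
+beta (App (Lam s) t) = inst t s
+beta m               = m
+\end{Verbatim}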
+The beauty of this technique is that in a few simple functions we have
+defined all the core operations in a general and `obviously correct'
+way, with the extra confidence of having the type checker watching our
+back.
+
+Moreover, if we take the top level term type to be \verb|Tm String|, we
+get for free a representation of terms with top-level definitions,
+where closed terms contain only \verb|String| references to said
+definitions---see also \cite{McBride2004b}.
+
+What are then the pitfalls of this seemingly invincible technique? The
+most obvious impediment is the need for polymorphic recursion.
+Functions traversing terms parametrised by the variable type will have
+types such as
+\begin{Verbatim}
+-- Infer the type of a term, or return an error.
+infer :: Tm v -> Either Error (Tm v)
+\end{Verbatim}
+When traversing under a binder the parameter changes from \verb|v|
+to \verb|Var v|, and thus if we do not specify the type of our function
+explicitly, inference will fail---type inference in the presence of
+polymorphic recursion being undecidable \citep{henglein1993type}. This
+causes some annoyance, especially in the presence of many local
+definitions that we would like to leave untyped.
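+For instance, even a function as simple as the following sketch, which
+counts the variable occurrences in a term, needs its signature spelled
+out:
+\begin{Verbatim}
+-- Count variable occurrences; the recursive call on the body of a
+-- `Lam' uses the function at type `Tm (Var v) -> Int', so the
+-- signature cannot be inferred.
+countVars :: Tm v -> Int
+countVars (V _)     = 1
+countVars (App m n) = countVars m + countVars n
+countVars (Lam s)   = countVars s
+\end{Verbatim}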
+
+But the real issue is the fact that giving a type parametrised over a
+variable---say \verb|m v|---a \verb|Monad| instance means being able to
+only substitute variables for values of type \verb|m v|. This is a
+considerable inconvenience. Consider for instance the case of
+telescopes, which are a central tool to deal with contexts and other
+constructs:
+\begin{Verbatim}
+data Tele m v = End (m v) | Bind (m v) (Tele m (Var v))
+type TeleTm = Tele Tm
+\end{Verbatim}
+The problem here is that what we want to substitute for variables in
+\verb|Tele m v| is \verb|m v| (probably \verb|Tm v|), not \verb|Tele m v| itself! What we need is
+\begin{Verbatim}
+bindTele :: Monad m => Tele m a -> (a -> m b) -> Tele m b
+substTele :: (Eq v, Monad m) => v -> m v -> Tele m v -> Tele m v
+instTele :: Monad m => m v -> Tele m (Var v) -> Tele m v
+\end{Verbatim}
+This is not what \verb|Monad| gives us. Solving this issue in an
+elegant way has been a major sink of time and source of headaches for
+the author, who analysed some of the alternatives---most notably the
+work by \cite{weirich2011binders}---but found it impossible to give up
+the simplicity of the model above.
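+For reference, here is a sketch of how the first of those operations
+can be written by hand, mirroring the \texttt{Monad} instance for
+\texttt{Tm}:
+\begin{Verbatim}
+-- Substitution for telescopes; `bump' adjusts the mapping when we
+-- cross a binder, just like in the `Monad' instance for `Tm'.
+bindTele :: Monad m => Tele m a -> (a -> m b) -> Tele m b
+bindTele (End t)    f = End (t >>= f)
+bindTele (Bind t s) f = Bind (t >>= f) (bindTele s bump)
+  where bump Bound    = return Bound
+        bump (Free v) = f v >>= return . Free
+\end{Verbatim}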
+
+That said, our term type is still reasonably brief, as shown in full in
+Figure \ref{fig:term}. The fact that propositions cannot be factored
+out into another data type is a consequence of the problems described
+above. However, the real pain is during elaboration, where we are
+forced to treat everything as a type while we would much rather have
+telescopes. Future work would include writing a library that marries
+an interface as nice as that of \verb|bound| with the extra flexibility
+needed here.
+
+\begin{figure}[t]
+{\small\begin{verbatim}-- A top-level name.
+type Id = String
+-- A data/type constructor name.
+type ConId = String
+
+-- A term, parametrised over the variable (`v') and over the reference
+-- type used in the type hierarchy (`r').
+data Tm r v
+ = V v -- Variable.
+ | Ty r -- Type, with a hierarchy reference.
+ | Lam (TmScope r v) -- Abstraction.
+ | Arr (Tm r v) (TmScope r v) -- Dependent function.
+ | App (Tm r v) (Tm r v) -- Application.
+ | Ann (Tm r v) (Tm r v) -- Annotated term.
+ -- Data constructor, the first ConId is the type constructor and
+ -- the second is the data constructor.
+ | Con ADTRec ConId ConId [Tm r v]
+ -- Data destructor, again the first ConId being the type constructor
+ -- and the second the name of the eliminator.
+ | Destr ADTRec ConId Id (Tm r v)
+ -- A type hole.
+ | Hole HoleId [Tm r v]
+ -- Decoding of propositions.
+ | Dec (Tm r v)
+
+ -- Propositions.
+ | Prop r -- The type of proofs, with hierarchy reference.
+ | Top
+ | Bot
+ | And (Tm r v) (Tm r v)
+ | Forall (Tm r v) (TmScope r v)
+ -- Heterogeneous equality.
+ | Eq (Tm r v) (Tm r v) (Tm r v) (Tm r v)
+
+-- Either a data type, or a record.
+data ADTRec = ADT | Rec
+
+-- Either a coercion, or coherence.
+data Coeh = Coe | Coh\end{verbatim}
+}
+ \caption{Data type for terms in \mykant.}
+ \label{fig:term}
+\end{figure}
+
+We also make use of a `forgetful' data type (as provided by
+\verb|bound|) to store user-provided variable names along with the
+`nameless' representation, so that the names will not be considered
+when comparing terms, but will be available when distilling, letting us
+recover variable names that are as close as possible to what the user
+originally used.
+
+\subsubsection{Context}
+
+% TODO finish
+
+\subsection{Type hierarchy}
+\label{sec:hier-impl}
+
+As a breath of air amongst the type gas, we will explain how to
+implement the typical ambiguity we have spoken about in Section
+\ref{sec:term-hierarchy}. As mentioned, we have to verify the
+consistency of a set of constraints each time we add a new one. The
+constraints range over some set of variables, whose members we will
+denote with $x, y, z, \dots$, and are of two kinds:
+\begin{center}
+ \begin{tabular}{cc}
+ $x \le y$ & $x < y$
+ \end{tabular}
+\end{center}
+
+Predictably, $\le$ expresses a reflexive order, and $<$ expresses an
+irreflexive order, both working with the same notion of equality, where
+$x < y$ implies $x \le y$---they behave like $\le$ and $<$ do on natural
+numbers (or in our case, levels in a type hierarchy). We also need an
+equality constraint ($x = y$), which can be reduced to two constraints
+$x \le y$ and $y \le x$.
+
+Given this specification, we have implemented a standalone Haskell
+module---that we plan to release as a library---to efficiently store and
+check the consistency of constraints. Its interface is shown in Figure
+\ref{fig:constraint}. The problem predictably reduces to a graph
+algorithm: if we regard variables as nodes and constraints as directed
+edges, the set of constraints is consistent exactly when no cycle goes
+through a $<$ edge, a condition we can check by contracting the graph to
+its strongly connected components.
+\begin{figure}[t]
+{\small\begin{verbatim}module Data.Constraint where
+
+-- | Data type holding the set of constraints, parametrised over the
+-- type of the variables.
+data Constrs a
+
+-- | A representation of the constraints that we can add.
+data Constr a = a :<=: a | a :<: a | a :==: a
+
+-- | An empty set of constraints.
+empty :: Ord a => Constrs a
+
+-- | Adds one constraint to the set, returns the new set of constraints
+-- if consistent.
+addConstr :: Ord a => Constr a -> Constrs a -> Maybe (Constrs a)\end{verbatim}
+}
+
+ \caption{Interface for the module handling the constraints.}
+ \label{fig:constraint}
+\end{figure}
+
+
+\subsection{Type holes}
+
+When building up programs interactively, it is useful to leave parts
+unfinished while exploring the current context. This is what type holes
+are for.
+
+\section{Evaluation}
+
+\section{Future work}
+
+\subsection{Coinduction}
+
+\subsection{Quotient types}
+
+\subsection{Partiality}
+
+\subsection{Pattern matching}
+
+\subsection{Pattern unification}
+
+% TODO coinduction (obscoin, gimenez, jacobs), pattern unification (miller,
+% gundry), partiality monad (NAD)
+
+\appendix
+
+\section{Notation and syntax}
+
+Syntax, derivation rules, and reduction rules are enclosed in frames
+describing the type of relation being established and the syntactic
+elements appearing, for example
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytysyn}}{
+ Typing derivations here.
+}
+
+In the languages presented and in the Agda code samples I also highlight
+the syntax, following a uniform color and font convention:
+
+\begin{center}
+ \begin{tabular}{c | l}
+ $\mytyc{Sans}$ & Type constructors. \\
+ $\mydc{sans}$ & Data constructors. \\
+ % $\myfld{sans}$ & Field accessors (e.g. \myfld{fst} and \myfld{snd} for products). \\
+ $\mysyn{roman}$ & Keywords of the language. \\
+ $\myfun{roman}$ & Defined values and destructors. \\
+ $\myb{math}$ & Bound variables.
+ \end{tabular}
+\end{center}
+
+Moreover, I will from time to time give examples in the Haskell programming
+language as defined in \citep{Haskell2010}, which I will typeset in
+\texttt{teletype} font. I assume that the reader is already familiar with
+Haskell; plenty of good introductions are available \citep{LYAH,ProgInHask}.
+
+When presenting grammars, I will use a word in $\mysynel{math}$ font
+(e.g. $\mytmsyn$ or $\mytysyn$) to indicate nonterminals. Additionally,
+I will use quite flexibly a $\mysynel{math}$ font to indicate a syntactic
+element. More specifically, terms are usually indicated by lowercase letters
+(often $\mytmt$, $\mytmm$, or $\mytmn$); and types by an uppercase letter (often
+$\mytya$, $\mytyb$, or $\mytycc$).
+
+When presenting type derivations, I will often abbreviate and present multiple
+conclusions, each on a separate line:
+\begin{prooftree}
+ \AxiomC{$\myjud{\mytmt}{\mytya \myprod \mytyb}$}
+ \UnaryInfC{$\myjud{\myapp{\myfst}{\mytmt}}{\mytya}$}
+ \noLine
+ \UnaryInfC{$\myjud{\myapp{\mysnd}{\mytmt}}{\mytyb}$}
+\end{prooftree}
+
+I will often present `definitions' in the described calculi and in
+$\mykant$\ itself, like so:
+\[
+\begin{array}{@{}l}
+ \myfun{name} : \mytysyn \\
+ \myfun{name} \myappsp \myb{arg_1} \myappsp \myb{arg_2} \myappsp \cdots \mapsto \mytmsyn
+\end{array}
+\]
+To define operators, I use a mixfix notation similar
+to Agda, where $\myarg$s denote arguments:
+\[
+\begin{array}{@{}l}
+ \myarg \mathrel{\myfun{$\wedge$}} \myarg : \mybool \myarr \mybool \myarr \mybool \\
+ \myb{b_1} \mathrel{\myfun{$\wedge$}} \myb{b_2} \mapsto \cdots
+\end{array}
+\]
+
+In explicitly typed systems, I will also omit type annotations when they
+are obvious, e.g. by not annotating the type of parameters of
+abstractions or of dependent pairs.
+
+I will also introduce multiple arguments in one go in arrow types:
+\[
+ (\myb{x}\, \myb{y} {:} \mytya) \myarr \cdots = (\myb{x} {:} \mytya) \myarr (\myb{y} {:} \mytya) \myarr \cdots
+\]
+and in abstractions:
+\[
+\myabs{\myb{x}\myappsp\myb{y}}{\cdots} = \myabs{\myb{x}}{\myabs{\myb{y}}{\cdots}}
+\]
+
+\section{Code}
+
+\subsection{ITT renditions}
+\label{app:itt-code}
+
+\subsubsection{Agda}
+\label{app:agda-itt}
+
+Note that in what follows rules for `base' types are
+universe-polymorphic, to reflect the exposition. Derived definitions,
+on the other hand, mostly work with \mytyc{Set}, reflecting the fact
+that in the theory presented we don't have universe polymorphism.
+
+\begin{code}
+module ITT where
+ open import Level
+
+ data Empty : Set where
+
+ absurd : ∀ {a} {A : Set a} → Empty → A
+ absurd ()
+
+ ¬_ : ∀ {a} → (A : Set a) → Set a
+ ¬ A = A → Empty
+
+ record Unit : Set where
+ constructor tt
+
+ record _×_ {a b} (A : Set a) (B : A → Set b) : Set (a ⊔ b) where
+ constructor _,_
+ field
+ fst : A
+ snd : B fst
+ open _×_ public
+
+ data Bool : Set where
+ true false : Bool
+
+ if_/_then_else_ : ∀ {a} (x : Bool) (P : Bool → Set a) → P true → P false → P x
+ if true / _ then x else _ = x
+ if false / _ then _ else x = x
+
+ if_then_else_ : ∀ {a} (x : Bool) {P : Bool → Set a} → P true → P false → P x
+ if_then_else_ x {P} = if_/_then_else_ x P
+
+ data W {s p} (S : Set s) (P : S → Set p) : Set (s ⊔ p) where
+ _◁_ : (s : S) → (P s → W S P) → W S P
+
+ rec : ∀ {a b} {S : Set a} {P : S → Set b}
+ (C : W S P → Set) → -- some conclusion we hope holds
+ ((s : S) → -- given a shape...
+ (f : P s → W S P) → -- ...and a bunch of kids...
+ ((p : P s) → C (f p)) → -- ...and C for each kid in the bunch...
+ C (s ◁ f)) → -- ...does C hold for the node?
+ (x : W S P) → -- If so, ...
+ C x -- ...C always holds.
+ rec C c (s ◁ f) = c s f (λ p → rec C c (f p))
+
+module Examples-→ where
+ open ITT
+
+ data ℕ : Set where
+ zero : ℕ
+ suc : ℕ → ℕ
+
+ -- These pragmas are needed so we can use number literals.
+ {-# BUILTIN NATURAL ℕ #-}
+ {-# BUILTIN ZERO zero #-}
+ {-# BUILTIN SUC suc #-}
+
+ data List (A : Set) : Set where
+ [] : List A
+ _∷_ : A → List A → List A
+
+ length : ∀ {A} → List A → ℕ
+ length [] = zero
+ length (_ ∷ l) = suc (length l)
+
+ _>_ : ℕ → ℕ → Set
+ zero > _ = Empty
+ suc _ > zero = Unit
+ suc x > suc y = x > y
+
+ head : ∀ {A} → (l : List A) → length l > 0 → A
+ head [] p = absurd p
+ head (x ∷ _) _ = x
+
+module Examples-× where
+ open ITT
+ open Examples-→
+
+ even : ℕ → Set
+ even zero = Unit
+ even (suc zero) = Empty
+ even (suc (suc n)) = even n
+
+ 6-even : even 6
+ 6-even = tt
+
+ 5-not-even : ¬ (even 5)
+ 5-not-even = absurd
+
+ there-is-an-even-number : ℕ × even
+ there-is-an-even-number = 6 , 6-even
+
+ _∨_ : (A B : Set) → Set
+ A ∨ B = Bool × (λ b → if b then A else B)
+
+ left : ∀ {A B} → A → A ∨ B
+ left x = true , x
+
+ right : ∀ {A B} → B → A ∨ B
+ right x = false , x
+
+ [_,_] : {A B C : Set} → (A → C) → (B → C) → A ∨ B → C
+ [ f , g ] x =
+ (if (fst x) / (λ b → if b then _ else _ → _) then f else g) (snd x)
+
+module Examples-W where
+ open ITT
+ open Examples-×
+
+ Tr : Bool → Set
+ Tr b = if b then Unit else Empty
+
+ ℕ : Set
+ ℕ = W Bool Tr
+
+ zero : ℕ
+ zero = false ◁ absurd
+
+ suc : ℕ → ℕ
+ suc n = true ◁ (λ _ → n)
+
+ plus : ℕ → ℕ → ℕ
+ plus x y = rec
+ (λ _ → ℕ)
+ (λ b →
+ if b / (λ b → (Tr b → ℕ) → (Tr b → ℕ) → ℕ)
+ then (λ _ f → (suc (f tt))) else (λ _ _ → y))
+ x
+
+module Equality where
+ open ITT
+
+ data _≡_ {a} {A : Set a} : A → A → Set a where
+ refl : ∀ x → x ≡ x
+
+ ≡-elim : ∀ {a b} {A : Set a}
+ (P : (x y : A) → x ≡ y → Set b) →
+ ∀ {x y} → P x x (refl x) → (x≡y : x ≡ y) → P x y x≡y
+ ≡-elim P p (refl x) = p
+
+ subst : ∀ {A : Set} (P : A → Set) → ∀ {x y} → (x≡y : x ≡ y) → P x → P y
+ subst P x≡y p = ≡-elim (λ _ y _ → P y) p x≡y
+
+ sym : ∀ {A : Set} (x y : A) → x ≡ y → y ≡ x
+ sym x y p = subst (λ y′ → y′ ≡ x) p (refl x)
+
+ trans : ∀ {A : Set} (x y z : A) → x ≡ y → y ≡ z → x ≡ z
+ trans x y z p q = subst (λ z′ → x ≡ z′) q p
+
+ cong : ∀ {A B : Set} (x y : A) → x ≡ y → (f : A → B) → f x ≡ f y
+ cong x y p f = subst (λ z → f x ≡ f z) p (refl (f x))
+\end{code}
+
+\subsubsection{\mykant}
+
+The following things are missing: $\mytyc{W}$-types, since our
+positivity check is overly strict, and equality, since we haven't
+implemented that yet.
+
+{\small
+\verbatiminput{itt.ka}
+}
+
+\subsection{\mykant\ examples}
+
+{\small
+\verbatiminput{examples.ka}
+}
+
+\subsection{\mykant's hierarchy}
+\label{sec:hurkens}
+
+This rendition of Hurkens' paradox does not type check with the
+hierarchy enabled; without it, it type checks and loops. Adapted from an
+Agda version, available at
+\url{http://code.haskell.org/Agda/test/succeed/Hurkens.agda}.
+
+{\small
+\verbatiminput{hurkens.ka}
+}