+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+  \UnaryInfC{$\myjud{\myapp{\myrefl}{\mytmt}}{\myprdec{\myjm{\mytmt}{\mytya}{\mytmt}{\mytya}}}$}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\myjudd{\myctx; \myb{x} : \mytya}{\mytyb}{\mytyp}$}
+ \BinaryInfC{$\myjud{\mytyc{R} \myappsp (\myb{x} {:} \mytya) \myappsp \mytyb}{\myfora{\myb{y}\, \myb{z}}{\mytya}{\myprdec{\myjm{\myb{y}}{\mytya}{\myb{z}}{\mytya} \myimpl \mysub{\mytyb}{\myb{x}}{\myb{y}} \myeq \mysub{\mytyb}{\myb{x}}{\myb{z}}}}}$}
+ \DisplayProof
+}
+
+$\myrefl$ is the equivalent of the reflexivity rule in propositional
+equality, and $\mytyc{R}$ asserts that if we have a $\mytyp$
+abstracting over a value we can substitute equal for equal---this lets
+us recover $\myfun{subst}$. Note that while we need to provide ad-hoc
+rules in the restricted, non-hierarchical theory that we have, if our
+theory supports abstraction over $\mytyp$s we can easily add these
+axioms as abstracted variables.
+
+\subsubsection{Value-level equality}
+
+\mydesc{equality reduction:}{\myprsyn \myred \myprsyn}{
+ $
+ \begin{array}{r@{ }c@{\ }c@{\ }c@{}l@{\ }c@{\ }r@{}c@{\ }c@{\ }c@{}l@{\ }l}
+ (&\mytmt_1 & : & \myempty&) & \myeq & (&\mytmt_2 & : & \myempty &) & \myred \mytop \\
+ (&\mytrue & : & \mybool&) & \myeq & (&\mytrue & : & \mybool&) & \myred \mytop \\
+ (&\myfalse & : & \mybool&) & \myeq & (&\myfalse & : & \mybool&) & \myred \mytop \\
+    (&\mytmt_1 & : & \mybool&) & \myeq & (&\mytmt_2 & : & \mybool&) & \myred \mybot \\
+ (&\mytmt_1 & : & \myexi{\mytya_1}{\myb{x_1}}{\mytyb_1}&) & \myeq & (&\mytmt_2 & : & \myexi{\mytya_2}{\myb{x_2}}{\mytyb_2}&) & \myred \\
+ & \multicolumn{11}{@{}l}{
+ \myind{2} \myjm{\myapp{\myfst}{\mytmt_1}}{\mytya_1}{\myapp{\myfst}{\mytmt_2}}{\mytya_2} \myand
+ \myjm{\myapp{\mysnd}{\mytmt_1}}{\mysub{\mytyb_1}{\myb{x_1}}{\myapp{\myfst}{\mytmt_1}}}{\myapp{\mysnd}{\mytmt_2}}{\mysub{\mytyb_2}{\myb{x_2}}{\myapp{\myfst}{\mytmt_2}}}
+ } \\
+ (&\myse{f}_1 & : & \myfora{\mytya_1}{\myb{x_1}}{\mytyb_1}&) & \myeq & (&\myse{f}_2 & : & \myfora{\mytya_2}{\myb{x_2}}{\mytyb_2}&) & \myred \\
+ & \multicolumn{11}{@{}l}{
+ \myind{2} \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{
+ \myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2} \myimpl
+ \myjm{\myapp{\myse{f}_1}{\myb{x_1}}}{\mytyb_1[\myb{x_1}]}{\myapp{\myse{f}_2}{\myb{x_2}}}{\mytyb_2[\myb{x_2}]}
+ }}
+ } \\
+    (&\mytmt_1 \mynodee \myse{f}_1 & : & \myw{\mytya_1}{\myb{x_1}}{\mytyb_1}&) & \myeq & (&\mytmt_2 \mynodee \myse{f}_2 & : & \myw{\mytya_2}{\myb{x_2}}{\mytyb_2}&) & \myred \cdots \\
+ (&\mytmt_1 & : & \mytya_1&) & \myeq & (&\mytmt_2 & : & \mytya_2 &) & \myred \\
+ & \multicolumn{11}{@{}l}{
+      \myind{2} \mybot\ \text{if $\mytya_1$ and $\mytya_2$ are different canonical types.}
+ }
+ \end{array}
+ $
+}
+
+As with type-level equality, we want value-level equality to reduce
+based on the structure of the compared terms.
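As a concrete, if much simplified, illustration of these rules, the following Haskell sketch reduces value-level equalities over a closed universe of empty, boolean, and pair types. The \texttt{Ty}, \texttt{Tm}, and \texttt{Prop} datatypes are inventions for this sketch, and the pair case drops the dependency present in the $\myexi{\mytya}{\myb{x}}{\mytyb}$ rule:

```haskell
-- Hypothetical miniature universe: Empty, Bool, and non-dependent pairs.
data Ty = TyEmpty | TyBool | TyProd Ty Ty deriving (Eq, Show)
data Tm = TmTrue | TmFalse | TmPair Tm Tm deriving (Eq, Show)

-- Propositions produced by equality reduction.
data Prop = Top | Bot | And Prop Prop deriving (Eq, Show)

-- Reduce a heterogeneous value equality (t1 : A1) = (t2 : A2) to a
-- proposition, following the structure of the compared terms and types.
eqReduce :: (Tm, Ty) -> (Tm, Ty) -> Prop
eqReduce (_, TyEmpty) (_, TyEmpty) = Top        -- all Empty terms are equal
eqReduce (TmTrue,  TyBool) (TmTrue,  TyBool) = Top
eqReduce (TmFalse, TyBool) (TmFalse, TyBool) = Top
eqReduce (_, TyBool) (_, TyBool) = Bot          -- true /= false
eqReduce (TmPair a b, TyProd s t) (TmPair c d, TyProd u v) =
  And (eqReduce (a, s) (c, u)) (eqReduce (b, t) (d, v))  -- componentwise
eqReduce _ _ = Bot                              -- distinct canonical types
```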
+
+\subsection{Proof irrelevance}
+
+% \section{Augmenting ITT}
+% \label{sec:practical}
+
+% \subsection{A more liberal hierarchy}
+
+% \subsection{Type inference}
+
+% \subsubsection{Bidirectional type checking}
+
+% \subsubsection{Pattern unification}
+
+% \subsection{Pattern matching and explicit fixpoints}
+
+% \subsection{Induction-recursion}
+
+% \subsection{Coinduction}
+
+% \subsection{Dealing with partiality}
+
+% \subsection{Type holes}
+
+\section{\mykant : the theory}
+\label{sec:kant-theory}
+
+\mykant\ is an interactive theorem prover developed as part of this thesis.
+The plan is to present a core language capable of serving as the basis
+for a more featureful system, while still presenting interesting
+features, most importantly observational equality.
+
+The author learnt the hard way the implementation challenges of such a
+project, and while there is a solid and working base to build on,
+observational equality is not currently implemented. However, a
+detailed plan on how to add this functionality is provided, and it
+should not prove to be too much work.
+
+The features currently implemented in \mykant\ are:
+
+\begin{description}
+\item[Full dependent types] As we would expect, we have a dependently
+  typed system which is as expressive as the `best' corner in the
+  lambda cube described in section \ref{sec:itt}.
+
+\item[Implicit, cumulative universe hierarchy] The user does not need to
+  specify universe levels explicitly, and universes are \emph{cumulative}.
+
+\item[User defined data types and records] Instead of forcing the user to
+ choose from a restricted toolbox, we let her define inductive data types,
+ with associated primitive recursion operators; or records, with associated
+ projections for each field.
+
+\item[Bidirectional type checking] While no `fancy' inference via unification
+  is present, we take advantage of a type synthesis system in the style of
+  \cite{Pierce2000}, extending the concept to user defined data types.
+
+\item[Type holes] When building up programs interactively, it is useful to
+ leave parts unfinished while exploring the current context. This is what
+ type holes are for.
+\end{description}
+
+The planned features are:
+
+\begin{description}
+\item[Observational equality] As described in section \ref{sec:ott} but
+ extended to work with the type hierarchy and to admit equality between
+ arbitrary data types.
+
+\item[Coinductive data] ...
+\end{description}
+
+We will analyse the features one by one, along with motivations and tradeoffs
+for the design decisions made.
+
+\subsection{Bidirectional type checking}
+
+We start by describing bidirectional type checking since it calls for fairly
+different typing rules than what we have seen up to now. The idea is to have
+two kinds of terms: terms for which a type can always be inferred, and terms
+that need to be checked against a type. A nice observation is that this
+duality runs through the semantics of the terms: data destructors (function
+application, record projections, primitive recursors) \emph{infer} types,
+while data constructors (abstractions, record/data type constructors) need
+to be checked. In the literature these terms are respectively known as
+\emph{inferrable} and \emph{checkable} terms.
+
+To introduce the concept and notation, we will revisit the STLC in a
+bidirectional style. The presentation follows \cite{Loh2010}.
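A minimal Haskell sketch of a bidirectional checker for the STLC may help fix the idea; the split between inferrable and checkable terms and the two mutually recursive judgements are the essence of the presentation in \cite{Loh2010}, while the datatype names and the de Bruijn-indexed context are choices made for this sketch only:

```haskell
-- Simply typed lambda calculus, bidirectionally.
data Ty = Base | Arr Ty Ty deriving (Eq, Show)

-- Inferrable terms: variables, applications, annotated terms.
data Inf = Var Int | App Inf Chk | Ann Chk Ty deriving Show
-- Checkable terms: abstractions, plus any inferrable term.
data Chk = Lam Chk | Embed Inf deriving Show

type Ctx = [Ty]  -- de Bruijn: index 0 is the innermost binder

infer :: Ctx -> Inf -> Maybe Ty
infer ctx (Var i)
  | i < length ctx = Just (ctx !! i)
  | otherwise      = Nothing
infer ctx (Ann t ty)
  | check ctx t ty = Just ty
  | otherwise      = Nothing
infer ctx (App f a) = case infer ctx f of
  Just (Arr s t) | check ctx a s -> Just t  -- propagate the domain down
  _                              -> Nothing

check :: Ctx -> Chk -> Ty -> Bool
check ctx (Lam b) (Arr s t) = check (s : ctx) b t  -- domain from the type
check _   (Lam _) _         = False
check ctx (Embed e) ty      = infer ctx e == Just ty
```

Note how \texttt{Lam} carries no annotation: the abstraction's domain is read off the type it is checked against.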
+
+% TODO do this --- is it even necessary
+
+% The syntax of
+
+\subsection{Base terms and types}
+
+Let us begin by describing the primitives available without the user
+defining any data types, and without equality. The way we handle
+variables and substitution is left unspecified, and explained in section
+\ref{sec:term-repr}, along with other implementation issues. We are
+also going to give an account of the implicit type hierarchy separately
+in section \ref{sec:term-hierarchy}, so as not to clutter derivation
+rules too much, and just treat types as impredicative for the time
+being.
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \mynamesyn \mysynsep \mytyp \\
+ & | & \myfora{\myb{x}}{\mytmsyn}{\mytmsyn} \mysynsep
+ \myabs{\myb{x}}{\mytmsyn} \mysynsep
+ (\myapp{\mytmsyn}{\mytmsyn}) \mysynsep
+ (\myann{\mytmsyn}{\mytmsyn}) \\
+ \mynamesyn & ::= & \myb{x} \mysynsep \myfun{f}
+ \end{array}
+ $
+}
+
+The syntax for our calculus includes just two basic constructs:
+abstractions and $\mytyp$s. Everything else will be provided by
+user-definable constructs. Since we let the user define values, we will
+need a context capable of carrying the body of variables along with
+their type. Bound names and defined names are treated separately in the
+syntax, and while both can be associated to a type in the context, only
+defined names can be associated with a body:
+
+\mydesc{context validity:}{\myvalid{\myctx}}{
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\myjud{\mytya}{\mytyp_l}$}}
+ \UnaryInfC{$\myvalid{\myemptyctx}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\mynamesyn \not\in \myctx$}
+ \BinaryInfC{$\myvalid{\myctx ; \mynamesyn : \mytya}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \AxiomC{$\myfun{f} \not\in \myctx$}
+ \BinaryInfC{$\myvalid{\myctx ; \myfun{f} \mapsto \mytmt : \mytya}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+Now we can present the reduction rules, which are unsurprising. We have
+the usual function application ($\beta$-reduction), but also a rule to
+replace names with their bodies ($\delta$-reduction), and one to discard
+type annotations. For this reason reduction is done in-context, as
+opposed to what we have seen in the past:
+
+\mydesc{reduction:}{\myctx \vdash \mytmsyn \myred \mytmsyn}{
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\myb{x} \mapsto \mytmt : \mytya \in \myctx$}}
+ \UnaryInfC{$\myctx \vdash \myapp{(\myabs{\myb{x}}{\mytmm})}{\mytmn}
+ \myred \mysub{\mytmm}{\myb{x}}{\mytmn}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myfun{f} \mapsto \mytmt : \mytya \in \myctx$}
+ \UnaryInfC{$\myctx \vdash \myfun{f} \myred \mytmt$}
+ \DisplayProof
+ &
+ \AxiomC{\phantom{$\myb{x} \mapsto \mytmt : \mytya \in \myctx$}}
+ \UnaryInfC{$\myctx \vdash \myann{\mytmm}{\mytya} \myred \mytmm$}
+ \DisplayProof
+ \end{tabular}
+}
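As an illustration, the three rules transcribe into a one-step Haskell reducer. The term representation, the naive (capture-unaware) substitution, and the context as an association list are simplifying assumptions for this sketch:

```haskell
-- Terms: bound variables, defined names, abstraction, application, annotation.
data Tm = V String | F String | Lam String Tm | App Tm Tm | Ann Tm Tm
  deriving (Eq, Show)

-- Context mapping defined names to their bodies.
type Ctx = [(String, Tm)]

-- Naive substitution; a real implementation must avoid variable capture.
subst :: String -> Tm -> Tm -> Tm
subst x s (V y)     = if x == y then s else V y
subst _ _ (F f)     = F f
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App m n) = App (subst x s m) (subst x s n)
subst x s (Ann m t) = Ann (subst x s m) (subst x s t)

-- One reduction step, in-context; Nothing if no rule applies at the root.
step :: Ctx -> Tm -> Maybe Tm
step _   (App (Lam x b) n) = Just (subst x n b)  -- beta-reduction
step ctx (F f)             = lookup f ctx        -- delta-reduction
step _   (Ann m _)         = Just m              -- discard annotation
step _   _                 = Nothing
```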
+
+We can now give types to our terms. The type of names, both defined and
+abstract, is inferred. The type of applications is inferred too,
+propagating types down the applied term. Abstractions are checked.
+Finally, we have a rule to check the type of an inferrable term. We
+defer the question of term equality (which is needed for type checking)
+to section \ref{sec:kant-irr}.
+
+\mydesc{typing:}{\myctx \vdash \mytmsyn \Leftrightarrow \mytmsyn}{
+ \begin{tabular}{ccc}
+ \AxiomC{$\myse{name} : A \in \myctx$}
+ \UnaryInfC{$\myinf{\myse{name}}{A}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myfun{f} \mapsto \mytmt : A \in \myctx$}
+ \UnaryInfC{$\myinf{\myfun{f}}{A}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myinf{\mytmt}{\mytya}$}
+ \UnaryInfC{$\mychk{\myann{\mytmt}{\mytya}}{\mytya}$}
+ \DisplayProof
+ \end{tabular}
+ \myderivsp
+
+ \begin{tabular}{ccc}
+ \AxiomC{$\myinf{\mytmm}{\myfora{\myb{x}}{\mytya}{\mytyb}}$}
+ \AxiomC{$\mychk{\mytmn}{\mytya}$}
+ \BinaryInfC{$\myinf{\myapp{\mytmm}{\mytmn}}{\mysub{\mytyb}{\myb{x}}{\mytmn}}$}
+ \DisplayProof
+
+ &
+
+ \AxiomC{$\mychkk{\myctx; \myb{x}: \mytya}{\mytmt}{\mytyb}$}
+    \UnaryInfC{$\mychk{\myabs{\myb{x}}{\mytmt}}{\myfora{\myb{x}}{\mytya}{\mytyb}}$}
+ \DisplayProof
+ \end{tabular}
+}
+
+\subsection{Elaboration}
+
+As we mentioned, $\mykant$\ allows the user to define not only values
+but also custom data types and records. \emph{Elaboration} consists of
+turning these declarations into workable syntax, types, and reduction
+rules. The treatment of custom types in $\mykant$\ is heavily inspired
+by McBride and McKinna's early work on Epigram \citep{McBride2004},
+although with some differences.
+
+\subsubsection{Term vectors, telescopes, and assorted notation}
+
+We use a vector notation to refer to a series of terms applied to
+another; for example $\mytyc{D} \myappsp \vec{A}$ is a shorthand for
+$\mytyc{D} \myappsp \mytya_1 \cdots \mytya_n$, for some $n$. $n$ is
+consistently used to refer to the length of such vectors, and $i$ to
+refer to an index in them. We also often need to `build up' term
+vectors, in which case we use $\myemptyctx$ for an empty vector and add
+elements to an existing vector with $\myarg ; \myarg$, similarly to
+what we do for contexts.
+
+To present the elaboration and operations on user defined data types, we
+frequently make use of what de Bruijn called \emph{telescopes}
+\citep{Bruijn91}, a construct that will prove useful when dealing with
+the types of type and data constructors. A telescope is a series of
+nested typed bindings, such as $(\myb{x} {:} \mynat); (\myb{p} :
+\myapp{\myfun{even}}{\myb{x}})$. Consistently with the notation for
+contexts and term vectors, we use $\myemptyctx$ to denote an empty
+telescope and $\myarg ; \myarg$ to add a new binding to an existing
+telescope.
+
+We refer to telescopes with $\mytele$, $\mytele'$, $\mytele_i$, etc. If
+$\mytele$ refers to a telescope, $\mytelee$ refers to the term vector
+made up of all the variables bound by $\mytele$. $\mytele \myarr
+\mytya$ refers to the type made by turning the telescope into a series
+of $\myarr$. Returning to the examples above, we have that
+{\small\[
+ (\myb{x} {:} \mynat); (\myb{p} : \myapp{\myfun{even}}{\myb{x}}) \myarr \mynat =
+ (\myb{x} {:} \mynat) \myarr (\myb{p} : \myapp{\myfun{even}}{\myb{x}}) \myarr \mynat
+\]}
+
+We make use of various operations to manipulate telescopes:
+\begin{itemize}
+\item $\myhead(\mytele)$ refers to the first type appearing in
+ $\mytele$: $\myhead((\myb{x} {:} \mynat); (\myb{p} :
+ \myapp{\myfun{even}}{\myb{x}})) = \mynat$. Similarly,
+ $\myix_i(\mytele)$ refers to the $i^{th}$ type in a telescope
+ (1-indexed).
+\item $\mytake_i(\mytele)$ refers to the telescope created by taking the
+ first $i$ elements of $\mytele$: $\mytake_1((\myb{x} {:} \mynat); (\myb{p} :
+ \myapp{\myfun{even}}{\myb{x}})) = (\myb{x} {:} \mynat)$
+\item $\mytele \vec{A}$ refers to the telescope made by `applying' the
+ terms in $\vec{A}$ on $\mytele$: $((\myb{x} {:} \mynat); (\myb{p} :
+ \myapp{\myfun{even}}{\myb{x}}))42 = (\myb{p} :
+ \myapp{\myfun{even}}{42})$.
+\end{itemize}
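These operations can be sketched in Haskell over a toy term representation. The \texttt{Var}/\texttt{Con}/\texttt{Pi} datatype and the naive substitution are assumptions made for this sketch, not \mykant's actual representation:

```haskell
-- Toy terms: variables, applied constants, and dependent function types.
data Tm = Var String | Con String [Tm] | Pi String Tm Tm deriving (Eq, Show)

-- A telescope is a list of typed bindings, innermost last.
type Tele = [(String, Tm)]

-- Naive substitution (no capture avoidance), enough for this sketch.
subst :: String -> Tm -> Tm -> Tm
subst x s (Var y)    = if x == y then s else Var y
subst x s (Con c as) = Con c (map (subst x s) as)
subst x s (Pi y a b) = Pi y (subst x s a) (if x == y then b else subst x s b)

-- First type of a telescope, and the first i bindings.
headTele :: Tele -> Tm
headTele ((_, ty) : _) = ty
headTele []            = error "empty telescope"

takeTele :: Int -> Tele -> Tele
takeTele = take

-- Turn a telescope into a series of Pi types over a codomain.
teleArr :: Tele -> Tm -> Tm
teleArr tele cod = foldr (\(x, ty) acc -> Pi x ty acc) cod tele

-- `Apply' terms to a telescope, substituting into the remaining bindings.
appTele :: Tele -> [Tm] -> Tele
appTele tele []                  = tele
appTele ((x, _) : rest) (t : ts) =
  appTele [ (y, subst x t ty) | (y, ty) <- rest ] ts
appTele [] (_ : _)               = error "telescope too short"
```

Applying \texttt{42} to the telescope for $(\myb{x} {:} \mynat); (\myb{p} : \myapp{\myfun{even}}{\myb{x}})$ leaves the single binding $(\myb{p} : \myapp{\myfun{even}}{42})$, as in the example above.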
+
+\subsubsection{Declarations syntax}
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mydeclsyn & ::= & \myval{\myb{x}}{\mytmsyn}{\mytmsyn} \\
+ & | & \mypost{\myb{x}}{\mytmsyn} \\
+ & | & \myadt{\mytyc{D}}{\mytelesyn}{}{\mydc{c} : \mytelesyn\ |\ \cdots } \\
+ & | & \myreco{\mytyc{D}}{\mytelesyn}{}{\myfun{f} : \mytmsyn,\ \cdots } \\
+
+ \mytelesyn & ::= & \myemptytele \mysynsep \mytelesyn \mycc (\myb{x} {:} \mytmsyn) \\
+ \mynamesyn & ::= & \cdots \mysynsep \mytyc{D} \mysynsep \mytyc{D}.\mydc{c} \mysynsep \mytyc{D}.\myfun{f}
+ \end{array}
+ $
+}
+
+In \mykant\ we have four kinds of declarations:
+
+\begin{description}
+\item[Defined value] A variable, together with a type and a body.
+\item[Abstract variable] An abstract variable, with a type but no body.
+\item[Inductive data] A datatype, with a type constructor and various data
+ constructors---somewhat similar to what we find in Haskell. A primitive
+ recursor (or `destructor') will be generated automatically.
+\item[Record] A record, which consists of one data constructor and various
+ fields, with no recursive occurrences.
+\end{description}
+
+Elaborating defined variables consists of type checking the body
+against the given type, and updating the context to contain the new
+binding. Elaborating abstract variables consists of type checking the
+type, and updating the context with a new typed variable:
+
+\mydesc{context elaboration:}{\myelab{\mydeclsyn}{\myctx}}{
+ \begin{tabular}{cc}
+ \AxiomC{$\myjud{\mytmt}{\mytya}$}
+ \AxiomC{$\myfun{f} \not\in \myctx$}
+ \BinaryInfC{
+ $\myctx \myelabt \myval{\myfun{f}}{\mytya}{\mytmt} \ \ \myelabf\ \ \myctx; \myfun{f} \mapsto \mytmt : \mytya$
+ }
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\mytya}{\mytyp}$}
+ \AxiomC{$\myfun{f} \not\in \myctx$}
+ \BinaryInfC{
+ $
+ \myctx \myelabt \mypost{\myfun{f}}{\mytya}
+ \ \ \myelabf\ \ \myctx; \myfun{f} : \mytya
+ $
+ }
+ \DisplayProof
+ \end{tabular}
+}
+
+\subsubsection{User defined types}
+\label{sec:user-type}
+
+\begin{figure}[p]
+ \begin{subfigure}[b]{\textwidth}
+ \vspace{-1cm}
+ \mydesc{syntax}{ }{
+ \footnotesize
+ $
+ \begin{array}{l}
+ \mynamesyn ::= \cdots \mysynsep \mytyc{D} \mysynsep \mytyc{D}.\mydc{c} \mysynsep \mytyc{D}.\myfun{f}
+ \end{array}
+ $
+ }
+
+ \mydesc{syntax elaboration:}{\mydeclsyn \myelabf \mytmsyn ::= \cdots}{
+ \footnotesize
+ $
+ \begin{array}{r@{\ }l}
+ & \myadt{\mytyc{D}}{\mytele}{}{\cdots\ |\ \mydc{c}_n : \mytele_n } \\
+ \myelabf &
+
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myapp{\mytyc{D}}{\mytmsyn^{\mytele}} \mysynsep \cdots \mysynsep
+ \mytyc{D}.\mydc{c}_n \myappsp \mytmsyn^{\mytele_n} \mysynsep \mytyc{D}.\myfun{elim} \myappsp \mytmsyn \\
+ \end{array}
+ \end{array}
+ $
+ }
+
+ \mydesc{context elaboration:}{\myelab{\mydeclsyn}{\myctx}}{
+ \footnotesize
+
+ \AxiomC{$
+ \begin{array}{c}
+ \myinf{\mytele \myarr \mytyp}{\mytyp}\hspace{0.8cm}
+ \mytyc{D} \not\in \myctx \\
+ \myinff{\myctx;\ \mytyc{D} : \mytele \myarr \mytyp}{\mytele \mycc \mytele_i \myarr \myapp{\mytyc{D}}{\mytelee}}{\mytyp}\ \ \ (1 \leq i \leq n) \\
+ \text{For each $(\myb{x} {:} \mytya)$ in each $\mytele_i$, if $\mytyc{D} \in \mytya$, then $\mytya = \myapp{\mytyc{D}}{\vec{\mytmt}}$.}
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }c@{\ }l}
+ \myctx & \myelabt & \myadt{\mytyc{D}}{\mytele}{}{ \cdots \ |\ \mydc{c}_n : \mytele_n } \\
+ & & \vspace{-0.2cm} \\
+ & \myelabf & \myctx;\ \mytyc{D} : \mytele \myarr \mytyp;\ \cdots;\ \mytyc{D}.\mydc{c}_n : \mytele \mycc \mytele_n \myarr \myapp{\mytyc{D}}{\mytelee}; \\
+ & &
+ \begin{array}{@{}r@{\ }l l}
+ \mytyc{D}.\myfun{elim} : & \mytele \myarr (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr & \textbf{target} \\
+ & (\myb{P} {:} \myapp{\mytyc{D}}{\mytelee} \myarr \mytyp) \myarr & \textbf{motive} \\
+ & \left.
+ \begin{array}{@{}l}
+ \myind{3} \vdots \\
+ (\mytele_n \mycc \myhyps(\myb{P}, \mytele_n) \myarr \myapp{\myb{P}}{(\myapp{\mytyc{D}.\mydc{c}_n}{\mytelee_n})}) \myarr
+ \end{array} \right \}
+ & \textbf{methods} \\
+ & \myapp{\myb{P}}{\myb{x}} &
+ \end{array}
+ \end{array}
+ $}
+ \DisplayProof \\ \vspace{0.2cm}\ \\
+ $
+ \begin{array}{@{}l l@{\ } l@{} r c l}
+ \textbf{where} & \myhyps(\myb{P}, & \myemptytele &) & \mymetagoes & \myemptytele \\
+ & \myhyps(\myb{P}, & (\myb{r} {:} \myapp{\mytyc{D}}{\vec{\mytmt}}) \mycc \mytele &) & \mymetagoes & (\myb{r'} {:} \myapp{\myb{P}}{\myb{r}}) \mycc \myhyps(\myb{P}, \mytele) \\
+ & \myhyps(\myb{P}, & (\myb{x} {:} \mytya) \mycc \mytele & ) & \mymetagoes & \myhyps(\myb{P}, \mytele)
+ \end{array}
+ $
+
+ }
+
+ \mydesc{reduction elaboration:}{\mydeclsyn \myelabf \myctx \vdash \mytmsyn \myred \mytmsyn}{
+ \footnotesize
+ $\myadt{\mytyc{D}}{\mytele}{}{ \cdots \ |\ \mydc{c}_n : \mytele_n } \ \ \myelabf$
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \AxiomC{$\mytyc{D}.\mydc{c}_i : \mytele;\mytele_i \myarr \myapp{\mytyc{D}}{\mytelee} \in \myctx$}
+ \BinaryInfC{$
+ \myctx \vdash \myapp{\myapp{\myapp{\mytyc{D}.\myfun{elim}}{(\myapp{\mytyc{D}.\mydc{c}_i}{\vec{\myse{t}}})}}{\myse{P}}}{\vec{\myse{m}}} \myred \myapp{\myapp{\myse{m}_i}{\vec{\mytmt}}}{\myrecs(\myse{P}, \vec{m}, \mytele_i)}
+ $}
+ \DisplayProof \\ \vspace{0.2cm}\ \\
+ $
+ \begin{array}{@{}l l@{\ } l@{} r c l}
+ \textbf{where} & \myrecs(\myse{P}, \vec{m}, & \myemptytele &) & \mymetagoes & \myemptytele \\
+ & \myrecs(\myse{P}, \vec{m}, & (\myb{r} {:} \myapp{\mytyc{D}}{\vec{A}}); \mytele & ) & \mymetagoes & (\mytyc{D}.\myfun{elim} \myappsp \myb{r} \myappsp \myse{P} \myappsp \vec{m}); \myrecs(\myse{P}, \vec{m}, \mytele) \\
+ & \myrecs(\myse{P}, \vec{m}, & (\myb{x} {:} \mytya); \mytele &) & \mymetagoes & \myrecs(\myse{P}, \vec{m}, \mytele)
+ \end{array}
+ $
+ }
+ \end{subfigure}
+
+ \begin{subfigure}[b]{\textwidth}
+ \mydesc{syntax elaboration:}{\myelab{\mydeclsyn}{\mytmsyn ::= \cdots}}{
+ \footnotesize
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \myctx & \myelabt & \myreco{\mytyc{D}}{\mytele}{}{ \cdots, \myfun{f}_n : \myse{F}_n } \\
+ & \myelabf &
+
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myapp{\mytyc{D}}{\mytmsyn^{\mytele}} \mysynsep \mytyc{D}.\mydc{constr} \myappsp \mytmsyn^{n} \mysynsep \cdots \mysynsep \mytyc{D}.\myfun{f}_n \myappsp \mytmsyn \\
+ \end{array}
+ \end{array}
+ $
+}
+
+
+\mydesc{context elaboration:}{\myelab{\mydeclsyn}{\myctx}}{
+ \footnotesize
+ \AxiomC{$
+ \begin{array}{c}
+ \myinf{\mytele \myarr \mytyp}{\mytyp}\hspace{0.8cm}
+ \mytyc{D} \not\in \myctx \\
+      \myinff{\myctx; \mytele; (\myb{f}_j : \myse{F}_j)_{j=1}^{i - 1}}{\myse{F}_i}{\mytyp} \myind{3} (1 \le i \le n)
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }c@{\ }l}
+ \myctx & \myelabt & \myreco{\mytyc{D}}{\mytele}{}{ \cdots, \myfun{f}_n : \myse{F}_n } \\
+ & & \vspace{-0.2cm} \\
+ & \myelabf & \myctx;\ \mytyc{D} : \mytele \myarr \mytyp;\ \cdots;\ \mytyc{D}.\myfun{f}_n : \mytele \myarr (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr \mysub{\myse{F}_n}{\myb{f}_i}{\myapp{\myfun{f}_i}{\myb{x}}}_{i = 1}^{n-1}; \\
+ & & \mytyc{D}.\mydc{constr} : \mytele \myarr \myse{F}_1 \myarr \cdots \myarr \myse{F}_n \myarr \myapp{\mytyc{D}}{\mytelee};
+ \end{array}
+ $}
+ \DisplayProof
+}
+
+ \mydesc{reduction elaboration:}{\mydeclsyn \myelabf \myctx \vdash \mytmsyn \myred \mytmsyn}{
+ \footnotesize
+ $\myreco{\mytyc{D}}{\mytele}{}{ \cdots, \myfun{f}_n : \myse{F}_n } \ \ \myelabf$
+ \AxiomC{$\mytyc{D} \in \myctx$}
+ \UnaryInfC{$\myctx \vdash \myapp{\mytyc{D}.\myfun{f}_i}{(\mytyc{D}.\mydc{constr} \myappsp \vec{t})} \myred t_i$}
+ \DisplayProof
+ }
+
+ \end{subfigure}
+ \caption{Elaboration for data types and records.}
+ \label{fig:elab}
+\end{figure}
+
+Elaborating user defined types is the real effort. First, let's explain
+what we can define, with some examples.
+
+\begin{description}
+\item[Natural numbers] To define natural numbers, we create a data type
+ with two constructors: one with zero arguments ($\mydc{zero}$) and one
+ with one recursive argument ($\mydc{suc}$):
+ {\small\[
+ \begin{array}{@{}l}
+ \myadt{\mynat}{ }{ }{
+ \mydc{zero} \mydcsep \mydc{suc} \myappsp \mynat
+ }
+ \end{array}
+ \]}
+ This is very similar to what we would write in Haskell:
+ {\small\[\text{\texttt{data Nat = Zero | Suc Nat}}\]}
+ Once the data type is defined, $\mykant$\ will generate syntactic
+ constructs for the type and data constructors, so that we will have
+ \begin{center}
+ \small
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\mychk{\mytmt}{\mynat}$}}
+ \UnaryInfC{$\myinf{\mynat}{\mytyp}$}
+ \DisplayProof
+ &
+ \AxiomC{\phantom{$\mychk{\mytmt}{\mynat}$}}
+ \UnaryInfC{$\myinf{\mytyc{\mynat}.\mydc{zero}}{\mynat}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmt}{\mynat}$}
+ \UnaryInfC{$\myinf{\mytyc{\mynat}.\mydc{suc} \myappsp \mytmt}{\mynat}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ While in Haskell (or indeed in Agda or Coq) data constructors are
+ treated the same way as functions, in $\mykant$\ they are syntax, so
+ for example using $\mytyc{\mynat}.\mydc{suc}$ on its own will be a
+ syntax error. This is necessary so that we can easily infer the type
+ of polymorphic data constructors, as we will see later.
+
+ Moreover, each data constructor is prefixed by the type constructor
+ name, since we need to retrieve the type constructor of a data
+ constructor when type checking. This measure aids in the presentation
+ of various features but it is not needed in the implementation, where
+  we can have a dictionary to look up the type constructor corresponding
+ to each data constructor. When using data constructors in examples I
+ will omit the type constructor prefix for brevity.
+
+ Along with user defined constructors, $\mykant$\ automatically
+ generates an \emph{eliminator}, or \emph{destructor}, to compute with
+  natural numbers: if we have $\mytmt : \mynat$, we can destruct
+ $\mytmt$ using the generated eliminator `$\mynat.\myfun{elim}$':
+ \begin{prooftree}
+ \small
+ \AxiomC{$\mychk{\mytmt}{\mynat}$}
+ \UnaryInfC{$
+ \myinf{\mytyc{\mynat}.\myfun{elim} \myappsp \mytmt}{
+ \begin{array}{@{}l}
+ \myfora{\myb{P}}{\mynat \myarr \mytyp}{ \\ \myapp{\myb{P}}{\mydc{zero}} \myarr (\myfora{\myb{x}}{\mynat}{\myapp{\myb{P}}{\myb{x}} \myarr \myapp{\myb{P}}{(\myapp{\mydc{suc}}{\myb{x}})}}) \myarr \\ \myapp{\myb{P}}{\mytmt}}
+ \end{array}
+ }$}
+ \end{prooftree}
+ $\mynat.\myfun{elim}$ corresponds to the induction principle for
+ natural numbers: if we have a predicate on numbers ($\myb{P}$), and we
+ know that predicate holds for the base case
+ ($\myapp{\myb{P}}{\mydc{zero}}$) and for each inductive step
+ ($\myfora{\myb{x}}{\mynat}{\myapp{\myb{P}}{\myb{x}} \myarr
+ \myapp{\myb{P}}{(\myapp{\mydc{suc}}{\myb{x}})}}$), then $\myb{P}$
+ holds for any number. As with the data constructors, we require the
+ eliminator to be applied to the `destructed' element.
+
+  While the induction principle is usually seen as a means to prove
+  properties about numbers, in the intuitionistic setting it is also a
+  means to compute. In this specific case $\mynat.\myfun{elim}$ will
+  return the base case if the provided number is $\mydc{zero}$, and
+  recursively apply the inductive step if the number is a
+  $\mydc{suc}$cessor:
+ {\small\[
+ \begin{array}{@{}l@{}l}
+ \mytyc{\mynat}.\myfun{elim} \myappsp \mydc{zero} & \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps} \myred \myse{pz} \\
+ \mytyc{\mynat}.\myfun{elim} \myappsp (\mydc{suc} \myappsp \mytmt) & \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps} \myred \myse{ps} \myappsp \mytmt \myappsp (\mynat.\myfun{elim} \myappsp \mytmt \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps})
+ \end{array}
+ \]}
+ The Haskell equivalent would be
+ {\small\[
+ \begin{array}{@{}l}
+ \text{\texttt{elim :: Nat -> a -> (Nat -> a -> a) -> a}}\\
+ \text{\texttt{elim Zero pz ps = pz}}\\
+ \text{\texttt{elim (Suc n) pz ps = ps n (elim n pz ps)}}
+ \end{array}
+ \]}
+  This buys us the computational behaviour, but not the reasoning power.
+ % TODO maybe more examples, e.g. Haskell eliminator and fibonacci
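To see the eliminator doing real work in the Haskell model, we can define addition with all recursion delegated to \texttt{elim}; \texttt{plus} and \texttt{toInt} are names invented for this example:

```haskell
data Nat = Zero | Suc Nat

-- The eliminator from the text: target, base case, inductive step.
elim :: Nat -> a -> (Nat -> a -> a) -> a
elim Zero    pz _  = pz
elim (Suc n) pz ps = ps n (elim n pz ps)

-- Addition, with all recursion delegated to elim.
plus :: Nat -> Nat -> Nat
plus m n = elim m n (\_ r -> Suc r)

-- Convert to Int, again via elim, for inspection.
toInt :: Nat -> Int
toInt n = elim n 0 (\_ r -> r + 1)
```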
+
+\item[Binary trees] Now for a polymorphic data type: binary trees, since
+ lists are too similar to natural numbers to be interesting.
+ {\small\[
+ \begin{array}{@{}l}
+ \myadt{\mytree}{\myappsp (\myb{A} {:} \mytyp)}{ }{
+ \mydc{leaf} \mydcsep \mydc{node} \myappsp (\myapp{\mytree}{\myb{A}}) \myappsp \myb{A} \myappsp (\myapp{\mytree}{\myb{A}})
+ }
+ \end{array}
+ \]}
+ Now the purpose of constructors as syntax can be explained: what would
+ the type of $\mydc{leaf}$ be? If we were to treat it as a `normal'
+ term, we would have to specify the type parameter of the tree each
+ time the constructor is applied:
+ {\small\[
+ \begin{array}{@{}l@{\ }l}
+ \mydc{leaf} & : \myfora{\myb{A}}{\mytyp}{\myapp{\mytree}{\myb{A}}} \\
+ \mydc{node} & : \myfora{\myb{A}}{\mytyp}{\myapp{\mytree}{\myb{A}} \myarr \myb{A} \myarr \myapp{\mytree}{\myb{A}} \myarr \myapp{\mytree}{\myb{A}}}
+ \end{array}
+ \]}
+ The problem with this approach is that creating terms is incredibly
+ verbose and dull, since we would need to specify the type parameters
+ each time. For example if we wished to create a $\mytree \myappsp
+ \mynat$ with two nodes and three leaves, we would have to write
+ {\small\[
+ \mydc{node} \myappsp \mynat \myappsp (\mydc{node} \myappsp \mynat \myappsp (\mydc{leaf} \myappsp \mynat) \myappsp (\myapp{\mydc{suc}}{\mydc{zero}}) \myappsp (\mydc{leaf} \myappsp \mynat)) \myappsp \mydc{zero} \myappsp (\mydc{leaf} \myappsp \mynat)
+ \]}
+ The redundancy of $\mynat$s is quite irritating. Instead, if we treat
+ constructors as syntactic elements, we can `extract' the type of the
+ parameter from the type that the term gets checked against, much like
+ we get the type of abstraction arguments:
+ \begin{center}
+ \small
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytya}{\mytyp}$}
+ \UnaryInfC{$\mychk{\mydc{leaf}}{\myapp{\mytree}{\mytya}}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmm}{\mytree \myappsp \mytya}$}
+ \AxiomC{$\mychk{\mytmt}{\mytya}$}
+      \AxiomC{$\mychk{\mytmn}{\mytree \myappsp \mytya}$}
+ \TrinaryInfC{$\mychk{\mydc{node} \myappsp \mytmm \myappsp \mytmt \myappsp \mytmn}{\mytree \myappsp \mytya}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+  This enables us to write, much more concisely,
+ {\small\[
+ \mydc{node} \myappsp (\mydc{node} \myappsp \mydc{leaf} \myappsp (\myapp{\mydc{suc}}{\mydc{zero}}) \myappsp \mydc{leaf}) \myappsp \mydc{zero} \myappsp \mydc{leaf} : \myapp{\mytree}{\mynat}
+ \]}
+ We gain an annotation, but we lose the myriad of types applied to the
+ constructors. Conversely, with the eliminator for $\mytree$, we can
+  infer the type of the arguments given the type of the destructed term:
+ \begin{prooftree}
+ \footnotesize
+ \AxiomC{$\myinf{\mytmt}{\myapp{\mytree}{\mytya}}$}
+ \UnaryInfC{$
+ \myinf{\mytree.\myfun{elim} \myappsp \mytmt}{
+ \begin{array}{@{}l}
+ (\myb{P} {:} \myapp{\mytree}{\mytya} \myarr \mytyp) \myarr \\
+ \myapp{\myb{P}}{\mydc{leaf}} \myarr \\
+ ((\myb{l} {:} \myapp{\mytree}{\mytya}) (\myb{x} {:} \mytya) (\myb{r} {:} \myapp{\mytree}{\mytya}) \myarr \myapp{\myb{P}}{\myb{l}} \myarr
+ \myapp{\myb{P}}{\myb{r}} \myarr \myb{P} \myappsp (\mydc{node} \myappsp \myb{l} \myappsp \myb{x} \myappsp \myb{r})) \myarr \\
+ \myapp{\myb{P}}{\mytmt}
+ \end{array}
+ }
+ $}
+ \end{prooftree}
+ As expected, the eliminator embodies structural induction on trees.
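In the same Haskell model as before, the tree eliminator and a small client of it might look as follows; \texttt{elimTree} and \texttt{size} are invented names for this sketch:

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Eliminator: target, leaf case, node case (with recursive results).
elimTree :: Tree a -> p -> (Tree a -> a -> Tree a -> p -> p -> p) -> p
elimTree Leaf         pl _  = pl
elimTree (Node l x r) pl pn = pn l x r (elimTree l pl pn) (elimTree r pl pn)

-- Count the nodes of a tree, by structural induction.
size :: Tree a -> Int
size t = elimTree t 0 (\_ _ _ sl sr -> sl + sr + 1)
```

On the two-node, three-leaf tree built earlier, \texttt{size} returns 2.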
+
+\item[Empty type] We have presented types that have at least one
+  constructor, but nothing prevents us from defining types with
+ \emph{no} constructors:
+ {\small\[
+ \myadt{\mytyc{Empty}}{ }{ }{ }
+ \]}
+ What shall the `induction principle' on $\mytyc{Empty}$ be? Does it
+ even make sense to talk about induction on $\mytyc{Empty}$?
+ $\mykant$\ does not care, and generates an eliminator with no `cases',
+ and thus corresponding to the $\myfun{absurd}$ that we know and love:
+ \begin{prooftree}
+ \small
+ \AxiomC{$\myinf{\mytmt}{\mytyc{Empty}}$}
+    \UnaryInfC{$\myinf{\mytyc{Empty}.\myfun{elim} \myappsp \mytmt}{(\myb{P} {:} \mytyc{Empty} \myarr \mytyp) \myarr \myapp{\myb{P}}{\mytmt}}$}
+ \end{prooftree}
+
+\item[Ordered lists] Up to this point, the examples shown are nothing
+ new to the \{Haskell, SML, OCaml, functional\} programmer. However
+ dependent types let us express much more than
+ % TODO
+
+\item[Dependent products] Apart from $\mysyn{data}$, $\mykant$\ offers
+ us another way to define types: $\mysyn{record}$. A record is a
+ datatype with one constructor and `projections' to extract specific
+ fields of the said constructor.
+
+ For example, we can recover dependent products:
+ {\small\[
+ \begin{array}{@{}l}
+      \myreco{\mytyc{Prod}}{\myappsp (\myb{A} {:} \mytyp) \myappsp (\myb{B} {:} \myb{A} \myarr \mytyp)}{\\ \myind{2}}{\myfst : \myb{A}, \mysnd : \myapp{\myb{B}}{\myfst}}
+ \end{array}
+ \]}
+ Here $\myfst$ and $\mysnd$ are the projections, with their respective
+ types. Note that each field can refer to the preceding fields. A
+ constructor will be automatically generated, under the name of
+ $\mytyc{Prod}.\mydc{constr}$. Dually to data types, we will omit the
+ type constructor prefix for record projections.
+
+ Following the bidirectionality of the system, we have that projections
+ (the destructors of the record) infer the type, while the constructor
+ gets checked:
+ \begin{center}
+ \small
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytmm}{\mytya}$}
+ \AxiomC{$\mychk{\mytmn}{\myapp{\mytyb}{\mytmm}}$}
+ \BinaryInfC{$\mychk{\mytyc{Prod}.\mydc{constr} \myappsp \mytmm \myappsp \mytmn}{\mytyc{Prod} \myappsp \mytya \myappsp \mytyb}$}
+ \noLine
+ \UnaryInfC{\phantom{$\myinf{\myfun{snd} \myappsp \mytmt}{\mytyb \myappsp (\myfst \myappsp \mytmt)}$}}
+ \DisplayProof
+ &
+ \AxiomC{$\myinf{\mytmt}{\mytyc{Prod} \myappsp \mytya \myappsp \mytyb}$}
+ \UnaryInfC{$\myinf{\myfun{fst} \myappsp \mytmt}{\mytya}$}
+ \noLine
+ \UnaryInfC{$\myinf{\myfun{snd} \myappsp \mytmt}{\mytyb \myappsp (\myfst \myappsp \mytmt)}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ What we have is equivalent to ITT's dependent products.
+\end{description}
+
+Following the intuition given by the examples, the mechanised
+elaboration is presented in figure \ref{fig:elab}, which is essentially
+a modification of figure 9 of \cite{McBride2004}\footnote{However, our
+ datatypes do not have indices, we do bidirectional typechecking by
+ treating constructors/destructors as syntactic constructs, and we have
+ records.}.
+
+In data type declarations we allow recursive occurrences as long as
+they are \emph{strictly positive}, employing a syntactic check to make
+sure that this is the case. See \cite{Dybjer1991} for a more formal
+treatment of inductive definitions in ITT.
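The syntactic check can be sketched in Haskell. The simplified type representation and the exact condition, namely that a constructor argument type either does not mention $\mytyc{D}$ at all or is $\mytyc{D}$ applied to $\mytyc{D}$-free arguments, mirror the side condition of figure \ref{fig:elab} rather than the implementation:

```haskell
-- Simplified argument types: applied type constructors and function types.
data Ty = TCon String [Ty] | Arr Ty Ty deriving Show

-- Does the data type name occur anywhere in a type?
occurs :: String -> Ty -> Bool
occurs d (TCon c as) = c == d || any (occurs d) as
occurs d (Arr a b)   = occurs d a || occurs d b

-- A constructor argument is acceptable if it does not mention D at all,
-- or is exactly D applied to D-free arguments.
okArg :: String -> Ty -> Bool
okArg d t@(TCon c as) = not (occurs d t) || (c == d && not (any (occurs d) as))
okArg d t@(Arr _ _)   = not (occurs d t)

-- Check every argument of every constructor of the declared type.
strictlyPositive :: String -> [[Ty]] -> Bool
strictlyPositive d ctors = all (all (okArg d)) ctors
```

Note that this is stricter than general strict positivity: recursive occurrences under a function arrow are rejected outright, matching the restriction stated above.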
+
+For what concerns records, recursive occurrences are disallowed. The
+reason for this restriction is the same as the reason for having
+records at all: we need records to give the user types with
+$\eta$-laws for equality, as we saw in section % TODO add section
+and in the treatment of OTT in section \ref{sec:ott}. If we tried to
+$\eta$-expand recursive data types, we would expand forever.
+
+To implement bidirectional type checking for constructors and
+destructors, we store their types in full in the context, and then
+instantiate when due:
+
+\mydesc{typing:}{ }{
+ \AxiomC{$
+ \begin{array}{c}
+ \mytyc{D} : \mytele \myarr \mytyp \in \myctx \hspace{1cm}
+ \mytyc{D}.\mydc{c} : \mytele \mycc \mytele' \myarr
+ \myapp{\mytyc{D}}{\mytelee} \in \myctx \\
+ \mytele'' = (\mytele;\mytele')\vec{A} \hspace{1cm}
+    \mychkk{\myctx; \mytake_{i-1}(\mytele'')}{t_i}{\myix_i( \mytele'')}\ \
+    (1 \le i \le |\mytele''|)
+ \end{array}
+ $}
+ \UnaryInfC{$\mychk{\myapp{\mytyc{D}.\mydc{c}}{\vec{t}}}{\myapp{\mytyc{D}}{\vec{A}}}$}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \AxiomC{$\mytyc{D}.\myfun{f} : \mytele \mycc (\myb{x} {:}
+ \myapp{\mytyc{D}}{\mytelee}) \myarr \myse{F}$}
+ \AxiomC{$\myjud{\mytmt}{\myapp{\mytyc{D}}{\vec{A}}}$}
+ \TrinaryInfC{$\myinf{\myapp{\mytyc{D}.\myfun{f}}{\mytmt}}{(\mytele
+ \mycc (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr
+ \myse{F})(\vec{A};\mytmt)}$}
+ \DisplayProof
+ }
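+The instantiation of stored telescope types, written
+$(\mytele;\mytele')\vec{A}$ in the rules above, can be sketched in
+Haskell as substituting a vector of arguments through an ordered list
+of bindings; all names here are hypothetical simplifications of the
+real term representation:
+
+```haskell
+-- Illustrative sketch of telescope instantiation: peel off the first
+-- bindings, substituting each argument for the corresponding variable
+-- in the types of the later bindings.
+type Name = String
+data Ty = TyVar Name | TyApp Ty Ty
+  deriving (Eq, Show)
+
+type Telescope = [(Name, Ty)]   -- ordered bindings x1 : T1, x2 : T2, ...
+
+subst :: Name -> Ty -> Ty -> Ty
+subst x t (TyVar y) | x == y    = t
+                    | otherwise = TyVar y
+subst x t (TyApp f a) = TyApp (subst x t f) (subst x t a)
+
+instantiate :: Telescope -> [Ty] -> Telescope
+instantiate tele            []      = tele
+instantiate []              (_ : _) = error "too many arguments"
+instantiate ((x, _) : tele) (a : as) =
+  instantiate [ (y, subst x a ty) | (y, ty) <- tele ] as
+```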
+
+\subsubsection{Why user defined types?}
+
+% TODO reference levitated theories, indexed containers
+
+foobar
+
+\subsection{Cumulative hierarchy and typical ambiguity}
+\label{sec:term-hierarchy}
+
+A type hierarchy as presented in section \ref{sec:itt} is a
+considerable burden on the user, on various levels.  Consider for
+example how we recovered disjunctions in section \ref{sec:disju}: we
+have a function that takes two $\mytyp_0$ and forms a new $\mytyp_0$.
+What if we wanted to form a disjunction of two $\mytyp_0$s, or of two
+$\mytyp_{42}$s?  Our definition would fail us, since $\mytyp_0 :
+\mytyp_1$.
+
+One way to solve this issue is a \emph{cumulative} hierarchy, where
+$\mytyp_{l_1} : \mytyp_{l_2}$ iff $l_1 < l_2$. This way we retain
+consistency, while allowing for `large' definitions that work on small
+types too. For example we might define our disjunction to be
+{\small\[
+ \myarg\myfun{$\vee$}\myarg : \mytyp_{100} \myarr \mytyp_{100} \myarr \mytyp_{100}
+\]}
+and hope that $\mytyp_{100}$ will be large enough to fit all the types
+that we want to use with our disjunction.  However, there are two
+problems with this.  First, there is the obvious clumsiness of having to
+manually specify the size of types.  More importantly, if we want to use
+$\myfun{$\vee$}$ itself as an argument to other type-formers, we need to
+make sure that those allow for types at least as large as
+$\mytyp_{100}$.
+
+A better option is to employ a mechanised version of what Russell called
+\emph{typical ambiguity}: we let the user live under the illusion that
+$\mytyp : \mytyp$, but check that the statements about types are
+consistent under the hood.  $\mykant$\ implements this along the lines
+of \cite{Huet1988}.  See also \citep{Harper1991} for a published
+reference, although it describes a more complex system allowing for both
+explicit and implicit hierarchies at the same time.
+
+We define a partial ordering on the levels, with both weak ($\le$) and
+strong ($<$) constraints---the laws governing them being the same as the
+ones governing $<$ and $\le$ for the natural numbers. Each occurrence
+of $\mytyp$ is decorated with a unique reference, and we keep a set of
+constraints and add new constraints as we type check, generating new
+references when needed.
+
+For example, when type checking the type $\mytyp\, r_1$, where $r_1$
+denotes the unique reference assigned to that term, we will generate a
+fresh reference $r_2$, assign $\mytyp\, r_1$ the type $\mytyp\, r_2$,
+and add the constraint $r_1 < r_2$ to the set.  When type checking
+$\myctx \vdash \myfora{\myb{x}}{\mytya}{\mytyb}$, if $\myctx \vdash
+\mytya : \mytyp\, r_1$ and $\myctx; \myb{x} : \mytya \vdash \mytyb :
+\mytyp\,r_2$, we will generate a new reference $r$ and add $r_1 \le r$
+and $r_2 \le r$ to the set.
+
+If at any point the constraint set becomes inconsistent, type checking
+fails. Moreover, when comparing two $\mytyp$ terms we equate their
+respective references with two $\le$ constraints---the details are
+explained in section \ref{sec:hier-impl}.
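+A minimal Haskell sketch of such a constraint store follows, where
+inconsistency is detected as a reference reaching itself through a path
+containing at least one strict constraint; the representation (plain
+lists, \texttt{Int} references) is illustrative only:
+
+```haskell
+-- Illustrative sketch of the constraint store: references are Ints,
+-- and the set is inconsistent when some reference reaches itself
+-- through a path containing at least one strict (<) edge.
+data Rel = Weak | Strict deriving (Eq, Show)
+type Constr = (Int, Rel, Int)   -- r1 <= r2 or r1 < r2
+
+-- All (reference, strictness-of-path) pairs reachable from r0.
+reachable :: [Constr] -> Int -> [(Int, Rel)]
+reachable cs r0 = go [(r0, Weak)] []
+  where
+    go [] seen = seen
+    go ((r, s) : queue) seen
+      | (r, s) `elem` seen = go queue seen
+      | otherwise =
+          go ([ (r2, s `combine` rel) | (r1, rel, r2) <- cs, r1 == r ]
+               ++ queue)
+             ((r, s) : seen)
+    combine Strict _ = Strict
+    combine _ Strict = Strict
+    combine _ _      = Weak
+
+consistent :: [Constr] -> Bool
+consistent cs =
+  and [ (r, Strict) `notElem` reachable cs r | (r, _, _) <- cs ]
+```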
+
+Another more flexible but also more verbose alternative is the one
+chosen by Agda, where levels can be quantified so that the relationship
+between arguments and result in type formers can be explicitly
+expressed:
+{\small\[
+\myarg\myfun{$\vee$}\myarg : (l_1\, l_2 : \mytyc{Level}) \myarr \mytyp_{l_1} \myarr \mytyp_{l_2} \myarr \mytyp_{l_1 \mylub l_2}
+\]}
+Inference algorithms to automatically derive this kind of relationship
+are currently a subject of research.  We chose the less flexible but
+more concise way, since it is easier to implement and better understood.
+
+\subsection{Observational equality, \mykant\ style}
+
+There are two related differences between $\mykant$\ and the theory
+used to present OTT. The first is that in $\mykant$ we have a type
+hierarchy, which lets us, for example, abstract over types. The second
+is that we let the user define inductive types.
+
+Reconciling propositions for OTT and a hierarchy had already been
+investigated by Conor McBride\footnote{See
+  \url{http://www.e-pig.org/epilogue/index.html?p=1098.html}.}, and we
+follow in his footsteps.  Most of the work, as an extension of elaboration,
+is to generate reduction rules and coercions.
+
+\subsubsection{The \mykant\ prelude, and $\myprop$ositions}
+
+Before defining $\myprop$, we define some basic types inside $\mykant$,
+as the target for the $\myprop$ decoder:
+\begin{framed}
+\small
+$
+\begin{array}{l}
+ \myadt{\mytyc{Empty}}{}{ }{ } \\
+ \myfun{absurd} : (\myb{A} {:} \mytyp) \myarr \mytyc{Empty} \myarr \myb{A} \mapsto \\
+ \myind{2} \myabs{\myb{A\ \myb{bot}}}{\mytyc{Empty}.\myfun{elim} \myappsp \myb{bot} \myappsp (\myabs{\_}{\myb{A}})} \\
+ \ \\
+
+ \myreco{\mytyc{Unit}}{}{}{ } \\ \ \\
+
+ \myreco{\mytyc{Prod}}{\myappsp (\myb{A}\ \myb{B} {:} \mytyp)}{ }{\myfun{fst} : \myb{A}, \myfun{snd} : \myb{B} }
+\end{array}
+$
+\end{framed}
+When using $\mytyc{Prod}$, we shall use $\myprod$ to define `nested'
+products, and $\myproj{n}$ to project elements from them, so that
+{\small
+\[
+\begin{array}{@{}l}
+\mytya \myprod \mytyb = \mytyc{Prod} \myappsp \mytya \myappsp (\mytyc{Prod} \myappsp \mytyb \myappsp \myunit) \\
+\mytya \myprod \mytyb \myprod \myse{C} = \mytyc{Prod} \myappsp \mytya \myappsp (\mytyc{Prod} \myappsp \mytyb \myappsp (\mytyc{Prod} \myappsp \myse{C} \myappsp \myunit)) \\
+\myind{2} \vdots \\
+\myproj{1} : \mytyc{Prod} \myappsp \mytya \myappsp \mytyb \myarr \mytya \\
+\myproj{2} : \mytyc{Prod} \myappsp \mytya \myappsp (\mytyc{Prod} \myappsp \mytyb \myappsp \myse{C}) \myarr \mytyb \\
+\myind{2} \vdots
+\end{array}
+\]
+}
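+The same right-nested encoding can be mimicked directly in Haskell,
+with $\myproj{n}$ as $n-1$ second projections followed by a first
+projection; this is only an illustration of the scheme, not part of
+\mykant:
+
+```haskell
+-- Illustrative sketch of the nested-product encoding: right-nested
+-- pairs terminated by Unit, with projections as iterated seconds.
+-- Since the tail type is polymorphic, projN applies to any product
+-- with at least N components.
+data Prod a b = Prod { pfst :: a, psnd :: b }
+data Unit = Unit
+
+proj1 :: Prod a b -> a
+proj1 = pfst
+
+proj2 :: Prod a (Prod b c) -> b
+proj2 = pfst . psnd
+
+proj3 :: Prod a (Prod b (Prod c d)) -> c
+proj3 = pfst . psnd . psnd
+```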
+And so on, so that $\myproj{n}$ will work with all products with at
+least $n$ elements.  Then we can define propositions, and decoding:
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \myprdec{\myprsyn} \\
+ \myprsyn & ::= & \mybot \mysynsep \mytop \mysynsep \myprsyn \myand \myprsyn \mysynsep \myprfora{\myb{x}}{\mytmsyn}{\myprsyn}
+ \end{array}
+ $
+}
+
+\mydesc{proposition decoding:}{\myprdec{\mytmsyn} \myred \mytmsyn}{
+ \begin{tabular}{cc}
+ $
+ \begin{array}{l@{\ }c@{\ }l}
+ \myprdec{\mybot} & \myred & \myempty \\
+ \myprdec{\mytop} & \myred & \myunit
+ \end{array}
+ $
+ &
+ $
+ \begin{array}{r@{ }c@{ }l@{\ }c@{\ }l}
+ \myprdec{&\myse{P} \myand \myse{Q} &} & \myred & \myprdec{\myse{P}} \myprod \myprdec{\myse{Q}} \\
+ \myprdec{&\myprfora{\myb{x}}{\mytya}{\myse{P}} &} & \myred &
+ \myfora{\myb{x}}{\mytya}{\myprdec{\myse{P}}}
+ \end{array}
+ $
+ \end{tabular}
+}
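+Decoding is a straightforward fold over the proposition syntax.  A
+hedged Haskell sketch over a toy first-order representation (all
+constructor names invented for illustration):
+
+```haskell
+-- Illustrative sketch of proposition decoding: Tm stands for types,
+-- with Empty/Unit/Prod as in the prelude and Pi for the dependent
+-- function space; Pr mirrors the proposition syntax.
+data Tm
+  = Empty | UnitTy | ProdTy Tm Tm
+  | Pi String Tm Tm
+  deriving (Eq, Show)
+
+data Pr
+  = Bot | Top | And Pr Pr
+  | Forall String Tm Pr
+  deriving (Eq, Show)
+
+decode :: Pr -> Tm
+decode Bot            = Empty
+decode Top            = UnitTy
+decode (And p q)      = ProdTy (decode p) (decode q)
+decode (Forall x a p) = Pi x a (decode p)
+```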
+
+\subsubsection{Why $\myprop$?}
+
+It is worth asking whether $\myprop$ is needed at all.  It is perfectly
+possible to have the type checker identify propositional types
+automatically, and in fact the author had initially planned to identify
+the propositional fragment internally \cite{Jacobs1994}.
+
+\subsubsection{OTT constructs}
+
+Before presenting the direction that $\mykant$\ takes, let's consider
+some examples of user-defined data types, and the result we would expect,
+given what we already know about OTT, assuming the same propositional
+equalities.
+
+\begin{description}
+
+\item[Product types] Let's consider first the already mentioned
+ dependent product, using the alternate name $\mysigma$\footnote{For
+ extra confusion, `dependent products' are often called `dependent
+ sums' in the literature, referring to the interpretation that
+ identifies the first element as a `tag' deciding the type of the
+  second element, which lets us recover sum types (disjunctions), as we
+ saw in section \ref{sec:user-type}. Thus, $\mysigma$.} to
+ avoid confusion with the $\mytyc{Prod}$ in the prelude: {\small\[
+ \begin{array}{@{}l}
+ \myreco{\mysigma}{\myappsp (\myb{A} {:} \mytyp) \myappsp (\myb{B} {:} \myb{A} \myarr \mytyp)}{\\ \myind{2}}{\myfst : \myb{A}, \mysnd : \myapp{\myb{B}}{\myb{fst}}}
+ \end{array}
+ \]} Let's start with type-level equality. The result we want is
+ {\small\[
+ \begin{array}{@{}l}
+ \mysigma \myappsp \mytya_1 \myappsp \mytyb_1 \myeq \mysigma \myappsp \mytya_2 \myappsp \mytyb_2 \myred \\
+ \myind{2} \mytya_1 \myeq \mytya_2 \myand \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{\myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2}} \myimpl \myapp{\mytyb_1}{\myb{x_1}} \myeq \myapp{\mytyb_2}{\myb{x_2}}}
+ \end{array}
+  \]} The difference here is that in the original presentation of OTT
+  the type binders are explicit, while here $\mytyb_1$ and $\mytyb_2$
+  are functions returning types.  We can do this thanks to the type
+ hierarchy, and this hints at the fact that heterogeneous equality will
+ have to allow $\mytyp$ `to the right of the colon', and in fact this
+ provides the solution to simplify the equality above.
+
+  If we take, just as we saw previously in OTT,
+ {\small\[
+ \begin{array}{@{}l}
+ \myjm{\myse{f}_1}{\myfora{\mytya_1}{\myb{x_1}}{\mytyb_1}}{\myse{f}_2}{\myfora{\mytya_2}{\myb{x_2}}{\mytyb_2}} \myred \\
+ \myind{2} \myprfora{\myb{x_1}}{\mytya_1}{\myprfora{\myb{x_2}}{\mytya_2}{
+ \myjm{\myb{x_1}}{\mytya_1}{\myb{x_2}}{\mytya_2} \myimpl
+ \myjm{\myapp{\myse{f}_1}{\myb{x_1}}}{\mytyb_1[\myb{x_1}]}{\myapp{\myse{f}_2}{\myb{x_2}}}{\mytyb_2[\myb{x_2}]}
+ }}
+ \end{array}
+  \]} then we can simply take
+ {\small\[
+ \begin{array}{@{}l}
+ \mysigma \myappsp \mytya_1 \myappsp \mytyb_1 \myeq \mysigma \myappsp \mytya_2 \myappsp \mytyb_2 \myred \\ \myind{2} \mytya_1 \myeq \mytya_2 \myand \myjm{\mytyb_1}{\mytya_1 \myarr \mytyp}{\mytyb_2}{\mytya_2 \myarr \mytyp}
+ \end{array}
+  \]} which will reduce to precisely what we desire.  For what
+  concerns coercions and quotation, things stay the same (apart from the
+  fact that we apply to the second argument instead of substituting).
+  We can recognise records such as $\mysigma$ as such, and employ
+  projections in value equality, coercions, and quotation, so as not to
+  impede progress when not necessary.
+
+\item[Lists] Now for finite lists, which will give us a taste for data
+ constructors:
+ {\small\[
+ \begin{array}{@{}l}
+ \myadt{\mylist}{\myappsp (\myb{A} {:} \mytyp)}{ }{\mydc{nil} \mydcsep \mydc{cons} \myappsp \myb{A} \myappsp (\myapp{\mylist}{\myb{A}})}
+ \end{array}
+ \]}
+ Type equality is simple---we only need to compare the parameter:
+ {\small\[
+ \mylist \myappsp \mytya_1 \myeq \mylist \myappsp \mytya_2 \myred \mytya_1 \myeq \mytya_2
+ \]} For coercions, we transport based on the constructor, recycling
+ the proof for the inductive occurrence: {\small\[
+ \begin{array}{@{}l@{\ }c@{\ }l}
+ \mycoe \myappsp (\mylist \myappsp \mytya_1) \myappsp (\mylist \myappsp \mytya_2) \myappsp \myse{Q} \myappsp \mydc{nil} & \myred & \mydc{nil} \\
+ \mycoe \myappsp (\mylist \myappsp \mytya_1) \myappsp (\mylist \myappsp \mytya_2) \myappsp \myse{Q} \myappsp (\mydc{cons} \myappsp \mytmm \myappsp \mytmn) & \myred & \\
+ \multicolumn{3}{l}{\myind{2} \mydc{cons} \myappsp (\mycoe \myappsp \mytya_1 \myappsp \mytya_2 \myappsp \myse{Q} \myappsp \mytmm) \myappsp (\mycoe \myappsp (\mylist \myappsp \mytya_1) \myappsp (\mylist \myappsp \mytya_2) \myappsp \myse{Q} \myappsp \mytmn)}
+ \end{array}
+ \]} Value equality is unsurprising---we match the constructors, and
+ return bottom for mismatches. However, we also need to equate the
+ parameter in $\mydc{nil}$: {\small\[
+ \begin{array}{r@{ }c@{\ }c@{\ }c@{}l@{\ }c@{\ }r@{}c@{\ }c@{\ }c@{}l@{\ }l}
+ (& \mydc{nil} & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{nil} & : & \myapp{\mylist}{\mytya_2} &) \myred \mytya_1 \myeq \mytya_2 \\
+ (& \mydc{cons} \myappsp \mytmm_1 \myappsp \mytmn_1 & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{cons} \myappsp \mytmm_2 \myappsp \mytmn_2 & : & \myapp{\mylist}{\mytya_2} &) \myred \\
+ & \multicolumn{11}{@{}l}{ \myind{2}
+ \myjm{\mytmm_1}{\mytya_1}{\mytmm_2}{\mytya_2} \myand \myjm{\mytmn_1}{\myapp{\mylist}{\mytya_1}}{\mytmn_2}{\myapp{\mylist}{\mytya_2}}
+ } \\
+ (& \mydc{nil} & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{cons} \myappsp \mytmm_2 \myappsp \mytmn_2 & : & \myapp{\mylist}{\mytya_2} &) \myred \mybot \\
+ (& \mydc{cons} \myappsp \mytmm_1 \myappsp \mytmn_1 & : & \myapp{\mylist}{\mytya_1} &) & \myeq & (& \mydc{nil} & : & \myapp{\mylist}{\mytya_2} &) \myred \mybot
+ \end{array}
+ \]}
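+  The coercion and value-equality rules above can be sketched in
+  Haskell over ordinary lists, with an invented proposition type
+  standing in for the propositional fragment; \texttt{coeElem}
+  abstracts coercion at the element type:
+
+```haskell
+-- Illustrative sketch of the list rules: coercion transports
+-- elementwise, reusing the proof q for the inductive occurrence, and
+-- value equality matches constructors, producing Bot on mismatch.
+data Pr = Bot | And Pr Pr | EqParams | EqEl Int Int
+  deriving (Eq, Show)
+
+coeList :: (q -> a -> b) -> q -> [a] -> [b]
+coeList _       _ []       = []
+coeList coeElem q (m : ms) = coeElem q m : coeList coeElem q ms
+
+eqList :: [Int] -> [Int] -> Pr
+eqList []       []       = EqParams          -- nil vs nil: equate A1, A2
+eqList (m : ms) (n : ns) = And (EqEl m n) (eqList ms ns)
+eqList _        _        = Bot               -- constructor mismatch
+```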
+ Finally, quotation
+ % TODO quotation
+
+
+\end{description}
+
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \cdots \mysynsep \mycoee{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \mysynsep
+ \mycohh{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ \myprsyn & ::= & \cdots \mysynsep \myjm{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ \end{array}
+ $
+}
+
+\mydesc{equality reduction:}{\myctx \vdash \myprsyn \myred \myprsyn}{
+ \footnotesize
+ \AxiomC{}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }l}
+ \myctx \vdash &
+ \myjm{\myfora{\myb{x_1}}{\mytya_1}{\mytyb_1}}{\mytyp}{\myfora{\myb{x_2}}{\mytya_2}{\mytyb_2}}{\mytyp} \myred \\
+ & \myind{2} \mytya_2 \myeq \mytya_1 \myand \\
+ & \myind{2} \myprfora{\myb{x_2}}{\mytya_2}{\myprfora{\myb{x_1}}{\mytya_1}{
+ \myjm{\myb{x_2}}{\mytya_2}{\myb{x_1}}{\mytya_1} \myimpl \mytyb_1 \myeq \mytyb_2
+ }}
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \UnaryInfC{$
+ \begin{array}{r@{\ }l}
+ \myctx \vdash &
+ \myjm{\mytyc{D} \myappsp \vec{A}}{\mytyp}{\mytyc{D} \myappsp \vec{B}}{\mytyp} \myred \\
+ & \myind{2} \myjm{\mytya_1}{\myhead(\mytele)}{\mytyb_1}{\myhead(\mytele)} \myand \cdots \myand \\
+ & \myind{2} \myjm{\mytya_n}{\myhead(\mytele(A_1 \cdots A_{n-1}))}{\mytyb_n}{\myhead(\mytele(B_1 \cdots B_{n-1}))}
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{}
+ \UnaryInfC{$\myctx \vdash \myjm{\mytyp}{\mytyp}{\mytyp}{\mytyp} \myred \mytop$}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{$
+ \begin{array}{c}
+ \mydataty(\mytyc{D}, \myctx)\hspace{0.8cm}
+ \mytyc{D}.\mydc{c}_i : \mytele;\mytele' \myarr \mytyc{D} \myappsp \mytelee \in \myctx \\
+ \mytele_A = (\mytele;\mytele')\vec{A}\hspace{0.8cm}
+ \mytele_B = (\mytele;\mytele')\vec{B}
+ \end{array}
+ $}
+ \UnaryInfC{$
+ \begin{array}{l}
+ \myctx \vdash \myjm{\mytyc{D}.\mydc{c}_i \myappsp \vec{\mytmm}}{\mytyc{D} \myappsp \vec{A}}{\mytyc{D}.\mydc{c}_i \myappsp \vec{\mytmn}}{\mytyc{D} \myappsp \vec{B}} \myred \\
+ \myind{2} \myjm{\mytmm_1}{\myhead(\mytele_A)}{\mytmn_1}{\myhead(\mytele_B)} \myand \cdots \myand \\
+ \myind{2} \myjm{\mytmm_n}{\mytya_n}{\mytmn_n}{\mytyb_n}
+ \end{array}
+ $}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{$\myisreco(\mytyc{D}, \myctx)$}
+ \UnaryInfC{$\myctx \vdash \myjm{\mytmm}{\mytyc{D} \myappsp \vec{A}}{\mytmn}{\mytyc{D} \myappsp \vec{B}} \myred foo$}
+ \DisplayProof
+
+ \myderivsp
+ \AxiomC{}
+ \UnaryInfC{$\mytya \myeq \mytyb \myred \mybot\ \text{if $\mytya$ and $\mytyb$ are canonical types.}$}
+ \DisplayProof
+}
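+The type-level fragment of these rules can be sketched as a Haskell
+function on a toy type grammar; the quantifier structure of the
+$\myfora{\myb{x}}{\mytya}{\mytyb}$ case is elided into a bare
+heterogeneous equality for brevity, and all names are illustrative:
+
+```haskell
+-- Illustrative sketch of type-level equality reduction on a toy type
+-- grammar: matching canonical types componentwise, reducing to Bot on
+-- mismatched canonical forms.  Note the flipped domains in the Pi
+-- case, as in the rule above.
+data Ty = TyType | TyPi Ty Ty | TyD String [Ty]
+  deriving (Eq, Show)
+
+data Pr = Top | Bot | And Pr Pr | EqTy Ty Ty | HetEq Ty Ty
+  deriving (Eq, Show)
+
+eqReduce :: Ty -> Ty -> Pr
+eqReduce TyType TyType = Top
+eqReduce (TyPi a1 b1) (TyPi a2 b2) =
+  And (EqTy a2 a1) (HetEq b1 b2)
+eqReduce (TyD d1 as) (TyD d2 bs)
+  | d1 == d2 = foldr And Top (zipWith HetEq as bs)
+eqReduce _ _ = Bot    -- distinct canonical types
+```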
+
+\subsubsection{$\myprop$ and the hierarchy}
+
+Where is $\myprop$ placed in the $\mytyp$ hierarchy? At each universe
+level, we will have that
+
+\subsubsection{Quotation and irrelevance}
+\label{sec:kant-irr}
+
+foo
+
+\section{\mykant : The practice}
+\label{sec:kant-practice}
+
+The codebase consists of around 2500 lines of Haskell, as reported by
+the \texttt{cloc} utility. The high level design is inspired by Conor
+McBride's work on various incarnations of Epigram, and specifically by
+the first version as described in \citep{McBride2004} and the codebase for
+the new version\footnote{Available intermittently as a \texttt{darcs}
+ repository at \url{http://sneezy.cs.nott.ac.uk/darcs/Pig09}.}. In
+many ways \mykant\ is something in between the first and second version
+of Epigram.
+
+The interaction happens in a read-eval-print loop (REPL).  The REPL is
+available both as a command line application and as a web interface,
+which is reachable at \url{kant.mazzo.li} and presents itself as in
+figure \ref{fig:kant-web}.
+
+\begin{figure}
+ \centering{
+ \includegraphics[scale=1.0]{kant-web.png}
+ }
+ \caption{The \mykant\ web prompt.}
+ \label{fig:kant-web}
+\end{figure}
+
+The interaction with the user takes place in a loop living in, and
+updating, a context of \mykant\ declarations.  Each cycle starts with
+the user inputting a \mykant\ declaration or another REPL command, which
+then goes through various stages that can end up in a context update, or
+in failures of various kinds.  The process is described diagrammatically
+in figure \ref{fig:kant-process}:
+
+\begin{description}
+\item[Parse] In this phase the text input gets converted to a sugared
+ version of the core language.
+
+\item[Desugar] The sugared declaration is converted to a core term.
+
+\item[Reference] Occurrences of $\mytyp$ get decorated by a unique reference,
+ which is necessary to implement the type hierarchy check.
+
+\item[Elaborate] Convert the declaration to some context item, which might be
+ a value declaration (type and body) or a data type declaration (constructors
+ and destructors). This phase works in tandem with \textbf{Typechecking},
+  which in turn needs to \textbf{Evaluate} terms.
+
+\item[Distill] and report the result.  `Distilling' refers to the process of
+  converting a core term back to a sugared version that the user can
+  visualise.  This can be necessary both to display errors that include
+  terms and to display the results of evaluations or type checking that
+  the user has requested.
+
+\item[Pretty print] Format the terms in a nice way, and display the result to
+ the user.
+
+\end{description}
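+The stages above can be sketched as a pipeline of
+\texttt{Either}-valued functions, so that the first failing stage
+short-circuits the rest of the cycle; all stage payloads here are
+placeholders for the real ones:
+
+```haskell
+-- Illustrative sketch of the prompt cycle as Either-chained stages.
+type Err = String
+newtype Sugared = Sugared String
+newtype Core    = Core String
+newtype CtxItem = CtxItem String
+
+parse :: String -> Either Err Sugared
+parse = Right . Sugared
+
+desugar :: Sugared -> Either Err Core
+desugar (Sugared s) = Right (Core s)
+
+reference :: Core -> Either Err Core
+reference = Right                -- would decorate Type occurrences
+
+elaborate :: Core -> Either Err CtxItem
+elaborate (Core s) = Right (CtxItem s)
+
+-- A Left at any stage aborts the cycle with an error message.
+pipeline :: String -> Either Err CtxItem
+pipeline input = parse input >>= desugar >>= reference >>= elaborate
+```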
+
+\begin{figure}
+ \centering{\small
+ \tikzstyle{block} = [rectangle, draw, text width=5em, text centered, rounded
+ corners, minimum height=2.5em, node distance=0.7cm]
+
+ \tikzstyle{decision} = [diamond, draw, text width=4.5em, text badly
+ centered, inner sep=0pt, node distance=0.7cm]
+
+ \tikzstyle{line} = [draw, -latex']
+
+ \tikzstyle{cloud} = [draw, ellipse, minimum height=2em, text width=5em, text
+ centered, node distance=1.5cm]
+
+
+ \begin{tikzpicture}[auto]
+ \node [cloud] (user) {User};
+ \node [block, below left=1cm and 0.1cm of user] (parse) {Parse};
+ \node [block, below=of parse] (desugar) {Desugar};
+ \node [block, below=of desugar] (reference) {Reference};
+ \node [block, below=of reference] (elaborate) {Elaborate};
+ \node [block, left=of elaborate] (tycheck) {Typecheck};
+ \node [block, left=of tycheck] (evaluate) {Evaluate};
+ \node [decision, right=of elaborate] (error) {Error?};
+ \node [block, right=of parse] (distill) {Distill};
+ \node [block, right=of desugar] (update) {Update context};
+
+ \path [line] (user) -- (parse);
+ \path [line] (parse) -- (desugar);
+ \path [line] (desugar) -- (reference);
+ \path [line] (reference) -- (elaborate);
+ \path [line] (elaborate) edge[bend right] (tycheck);
+ \path [line] (tycheck) edge[bend right] (elaborate);
+ \path [line] (elaborate) -- (error);
+ \path [line] (error) edge[out=0,in=0] node [near start] {yes} (distill);
+ \path [line] (error) -- node [near start] {no} (update);
+ \path [line] (update) -- (distill);
+ \path [line] (distill) -- (user);
+ \path [line] (tycheck) edge[bend right] (evaluate);
+ \path [line] (evaluate) edge[bend right] (tycheck);
+ \end{tikzpicture}
+ }
+ \caption{High level overview of the life of a \mykant\ prompt cycle.}
+ \label{fig:kant-process}
+\end{figure}
+
+\subsection{Parsing and Sugar}
+
+\subsection{Term representation and context}
+\label{sec:term-repr}
+
+\subsection{Type checking}
+
+\subsection{Type hierarchy}
+\label{sec:hier-impl}
+
+\subsection{Elaboration}
+
+\section{Evaluation}
+
+\section{Future work}
+
+% TODO coinduction (obscoin, gimenez), pattern unification (miller,
+% gundry), partiality monad (NAD)