+\begin{description}
+\item[Natural numbers] To define natural numbers, we create a data type
+ with two constructors: one with zero arguments ($\mydc{zero}$) and one
+ with one recursive argument ($\mydc{suc}$):
+ {\small\[
+ \begin{array}{@{}l}
+ \myadt{\mynat}{ }{ }{
+ \mydc{zero} \mydcsep \mydc{suc} \myappsp \mynat
+ }
+ \end{array}
+ \]}
+ This is very similar to what we would write in Haskell:
+ {\small
+ \begin{verbatim}data Nat = Zero | Suc Nat
+ \end{verbatim}}
+ Once the data type is defined, $\mykant$\ will generate syntactic
+ constructs for the type and data constructors, so that we will have
+ \begin{center}
+ \small
+ \begin{tabular}{ccc}
+ \AxiomC{\phantom{$\mychk{\mytmt}{\mynat}$}}
+ \UnaryInfC{$\myinf{\mynat}{\mytyp}$}
+ \DisplayProof
+ &
+ \AxiomC{\phantom{$\mychk{\mytmt}{\mynat}$}}
+ \UnaryInfC{$\myinf{\mytyc{\mynat}.\mydc{zero}}{\mynat}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmt}{\mynat}$}
+ \UnaryInfC{$\myinf{\mytyc{\mynat}.\mydc{suc} \myappsp \mytmt}{\mynat}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ While in Haskell (or indeed in Agda or Coq) data constructors are
+ treated the same way as functions, in $\mykant$\ they are syntax, so
+ for example using $\mytyc{\mynat}.\mydc{suc}$ on its own will be a
+ syntax error. This is necessary so that we can easily infer the type
+ of polymorphic data constructors, as we will see later.
+
+ Moreover, each data constructor is prefixed by the type constructor
+ name, since we need to retrieve the type constructor of a data
+ constructor when type checking. This measure aids in the presentation
+ of various features but it is not needed in the implementation, where
+ we can have a dictionary to look up the type constructor corresponding
+ to each data constructor. When using data constructors in examples I
+ will omit the type constructor prefix for brevity.
+
+ Along with the user-defined constructors, $\mykant$\ automatically
+ generates an \emph{eliminator}, or \emph{destructor}, to compute with
+ natural numbers: if we have $\mytmt : \mynat$, we can destruct
+ $\mytmt$ using the generated eliminator `$\mynat.\myfun{elim}$':
+ \begin{prooftree}
+ \small
+ \AxiomC{$\mychk{\mytmt}{\mynat}$}
+ \UnaryInfC{$
+ \myinf{\mytyc{\mynat}.\myfun{elim} \myappsp \mytmt}{
+ \begin{array}{@{}l}
+ \myfora{\myb{P}}{\mynat \myarr \mytyp}{ \\ \myapp{\myb{P}}{\mydc{zero}} \myarr (\myfora{\myb{x}}{\mynat}{\myapp{\myb{P}}{\myb{x}} \myarr \myapp{\myb{P}}{(\myapp{\mydc{suc}}{\myb{x}})}}) \myarr \\ \myapp{\myb{P}}{\mytmt}}
+ \end{array}
+ }$}
+ \end{prooftree}
+ $\mynat.\myfun{elim}$ corresponds to the induction principle for
+ natural numbers: if we have a predicate on numbers ($\myb{P}$), and we
+ know that predicate holds for the base case
+ ($\myapp{\myb{P}}{\mydc{zero}}$) and for each inductive step
+ ($\myfora{\myb{x}}{\mynat}{\myapp{\myb{P}}{\myb{x}} \myarr
+ \myapp{\myb{P}}{(\myapp{\mydc{suc}}{\myb{x}})}}$), then $\myb{P}$
+ holds for any number. As with the data constructors, we require the
+ eliminator to be applied to the `destructed' element.
+
+ While the induction principle is usually seen as a means to prove
+ properties about numbers, in the intuitionistic setting it is also a
+ means to compute. In this specific case $\mynat.\myfun{elim}$ will
+ return the base case if the provided number is $\mydc{zero}$, and
+ recursively apply the inductive step if the number is a
+ $\mydc{suc}$cessor:
+ {\small\[
+ \begin{array}{@{}l@{}l}
+ \mytyc{\mynat}.\myfun{elim} \myappsp \mydc{zero} & \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps} \myred \myse{pz} \\
+ \mytyc{\mynat}.\myfun{elim} \myappsp (\mydc{suc} \myappsp \mytmt) & \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps} \myred \myse{ps} \myappsp \mytmt \myappsp (\mynat.\myfun{elim} \myappsp \mytmt \myappsp \myse{P} \myappsp \myse{pz} \myappsp \myse{ps})
+ \end{array}
+ \]}
+ % TODO maybe more examples, e.g. Haskell eliminator and fibonacci
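The reduction rules above can be sketched in Haskell. This is only an analogue: Haskell cannot express the dependent motive $\myb{P}$, so the result type is fixed to an ordinary type, and the names natElim, plus and toInt are ours, not part of $\mykant$:

```haskell
data Nat = Zero | Suc Nat

-- Non-dependent analogue of Nat.elim: the two reduction rules above
-- become the two equations of this function.
natElim :: r -> (Nat -> r -> r) -> Nat -> r
natElim pz _  Zero    = pz
natElim pz ps (Suc n) = ps n (natElim pz ps n)

-- Addition, computed by induction on the first argument.
plus :: Nat -> Nat -> Nat
plus m n = natElim n (\_ rec -> Suc rec) m

-- Interpret a Nat as an Int, again via the eliminator.
toInt :: Nat -> Int
toInt = natElim 0 (\_ r -> r + 1)
```

For example, `plus (Suc (Suc Zero)) (Suc Zero)` reduces to `Suc (Suc (Suc Zero))`.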
+
+\item[Binary trees] Now for a polymorphic data type: binary trees, since
+ lists are too similar to natural numbers to be interesting.
+ {\small\[
+ \begin{array}{@{}l}
+ \myadt{\mytree}{\myappsp (\myb{A} {:} \mytyp)}{ }{
+ \mydc{leaf} \mydcsep \mydc{node} \myappsp (\myapp{\mytree}{\myb{A}}) \myappsp \myb{A} \myappsp (\myapp{\mytree}{\myb{A}})
+ }
+ \end{array}
+ \]}
+ Now the purpose of constructors as syntax can be explained: what would
+ the type of $\mydc{leaf}$ be? If we were to treat it as a `normal'
+ term, we would have to specify the type parameter of the tree each
+ time the constructor is applied:
+ {\small\[
+ \begin{array}{@{}l@{\ }l}
+ \mydc{leaf} & : \myfora{\myb{A}}{\mytyp}{\myapp{\mytree}{\myb{A}}} \\
+ \mydc{node} & : \myfora{\myb{A}}{\mytyp}{\myapp{\mytree}{\myb{A}} \myarr \myb{A} \myarr \myapp{\mytree}{\myb{A}} \myarr \myapp{\mytree}{\myb{A}}}
+ \end{array}
+ \]}
+ The problem with this approach is that creating terms is incredibly
+ verbose and dull, since we would need to specify the type parameters
+ each time. For example, if we wished to create a $\mytree \myappsp
+ \mynat$ with two nodes and three leaves, we would have to write
+ {\small\[
+ \mydc{node} \myappsp \mynat \myappsp (\mydc{node} \myappsp \mynat \myappsp (\mydc{leaf} \myappsp \mynat) \myappsp (\myapp{\mydc{suc}}{\mydc{zero}}) \myappsp (\mydc{leaf} \myappsp \mynat)) \myappsp \mydc{zero} \myappsp (\mydc{leaf} \myappsp \mynat)
+ \]}
+ The redundancy of $\mynat$s is quite irritating. Instead, if we treat
+ constructors as syntactic elements, we can `extract' the type of the
+ parameter from the type that the term gets checked against, much like
+ we get the type of abstraction arguments:
+ \begin{center}
+ \small
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytya}{\mytyp}$}
+ \UnaryInfC{$\mychk{\mydc{leaf}}{\myapp{\mytree}{\mytya}}$}
+ \DisplayProof
+ &
+ \AxiomC{$\mychk{\mytmm}{\mytree \myappsp \mytya}$}
+ \AxiomC{$\mychk{\mytmt}{\mytya}$}
+ \AxiomC{$\mychk{\mytmn}{\mytree \myappsp \mytya}$}
+ \TrinaryInfC{$\mychk{\mydc{node} \myappsp \mytmm \myappsp \mytmt \myappsp \mytmn}{\mytree \myappsp \mytya}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ This enables us to write, much more concisely,
+ {\small\[
+ \mydc{node} \myappsp (\mydc{node} \myappsp \mydc{leaf} \myappsp (\myapp{\mydc{suc}}{\mydc{zero}}) \myappsp \mydc{leaf}) \myappsp \mydc{zero} \myappsp \mydc{leaf} : \myapp{\mytree}{\mynat}
+ \]}
+ We gain an annotation, but we lose the myriad of types applied to the
+ constructors. Conversely, with the eliminator for $\mytree$, we can
+ infer the types of the arguments given the type of the destructed element:
+ \begin{prooftree}
+ \footnotesize
+ \AxiomC{$\myinf{\mytmt}{\myapp{\mytree}{\mytya}}$}
+ \UnaryInfC{$
+ \myinf{\mytree.\myfun{elim} \myappsp \mytmt}{
+ \begin{array}{@{}l}
+ (\myb{P} {:} \myapp{\mytree}{\mytya} \myarr \mytyp) \myarr \\
+ \myapp{\myb{P}}{\mydc{leaf}} \myarr \\
+ ((\myb{l} {:} \myapp{\mytree}{\mytya}) (\myb{x} {:} \mytya) (\myb{r} {:} \myapp{\mytree}{\mytya}) \myarr \myapp{\myb{P}}{\myb{l}} \myarr
+ \myapp{\myb{P}}{\myb{r}} \myarr \myb{P} \myappsp (\mydc{node} \myappsp \myb{l} \myappsp \myb{x} \myappsp \myb{r})) \myarr \\
+ \myapp{\myb{P}}{\mytmt}
+ \end{array}
+ }
+ $}
+ \end{prooftree}
+ As expected, the eliminator embodies structural induction on trees.
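Under the same caveat as for natural numbers (the motive fixed to a non-dependent result type, names ours), the tree eliminator can be sketched in Haskell; note how the inductive hypotheses for both subtrees are supplied to the node case:

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Non-dependent analogue of Tree.elim: one argument per constructor,
-- with recursive results passed in for the two subtrees.
treeElim :: r -> (Tree a -> a -> Tree a -> r -> r -> r) -> Tree a -> r
treeElim pl _  Leaf         = pl
treeElim pl pn (Node l x r) = pn l x r (treeElim pl pn l) (treeElim pl pn r)

-- Number of Nodes in a tree, by structural induction.
size :: Tree a -> Int
size = treeElim 0 (\_ _ _ sl sr -> 1 + sl + sr)
```

Applied to the two-node tree shown earlier, `size` returns 2.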
+
+\item[Empty type] We have presented types that have at least one
+ constructor, but nothing prevents us from defining types with
+ \emph{no} constructors:
+ {\small\[
+ \myadt{\mytyc{Empty}}{ }{ }{ }
+ \]}
+ What shall the `induction principle' on $\mytyc{Empty}$ be? Does it
+ even make sense to talk about induction on $\mytyc{Empty}$?
+ $\mykant$\ does not care, and generates an eliminator with no `cases',
+ and thus corresponding to the $\myfun{absurd}$ that we know and love:
+ \begin{prooftree}
+ \small
+ \AxiomC{$\myinf{\mytmt}{\mytyc{Empty}}$}
+ \UnaryInfC{$\myinf{\myempty.\myfun{elim} \myappsp \mytmt}{(\myb{P} {:} \mytyc{Empty} \myarr \mytyp) \myarr \myapp{\myb{P}}{\mytmt}}$}
+ \end{prooftree}
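A plausible Haskell analogue, where a case expression with no branches plays the role of the eliminator (the helper fromRight is ours, for illustration only):

```haskell
{-# LANGUAGE EmptyDataDecls, EmptyCase #-}

-- A type with no constructors, mirroring Empty.
data Empty

-- The eliminator: a case with no branches.  Since there are no values
-- of type Empty, we can promise any result type whatsoever.
absurd :: Empty -> a
absurd e = case e of {}

-- Example use: a sum whose left side is Empty carries no information
-- there, so absurd lets us discharge that branch.
fromRight :: Either Empty Int -> Int
fromRight (Left e)  = absurd e
fromRight (Right n) = n
```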
+
+\item[Ordered lists] Up to this point, the examples shown are nothing
+ new to the \{Haskell, SML, OCaml, functional\} programmer. However,
+ dependent types let us express much more than
+ % TODO
+
+\item[Dependent products] Apart from $\mysyn{data}$, $\mykant$\ offers
+ us another way to define types: $\mysyn{record}$. A record is a
+ datatype with one constructor and `projections' to extract specific
+ fields of the said constructor.
+
+ For example, we can recover dependent products:
+ {\small\[
+ \begin{array}{@{}l}
+ \myreco{\mytyc{Prod}}{\myappsp (\myb{A} {:} \mytyp) \myappsp (\myb{B} {:} \myb{A} \myarr \mytyp)}{\\ \myind{2}}{\myfst : \myb{A}, \mysnd : \myapp{\myb{B}}{\myfst}}
+ \end{array}
+ \]}
+ Here $\myfst$ and $\mysnd$ are the projections, with their respective
+ types. Note that each field can refer to the preceding fields. A
+ constructor will be automatically generated, under the name of
+ $\mytyc{Prod}.\mydc{constr}$. Dually to data types, we will omit the
+ type constructor prefix for record projections.
+
+ Following the bidirectionality of the system, we have that projections
+ (the destructors of the record) infer the type, while the constructor
+ gets checked:
+ \begin{center}
+ \small
+ \begin{tabular}{cc}
+ \AxiomC{$\mychk{\mytmm}{\mytya}$}
+ \AxiomC{$\mychk{\mytmn}{\myapp{\mytyb}{\mytmm}}$}
+ \BinaryInfC{$\mychk{\mytyc{Prod}.\mydc{constr} \myappsp \mytmm \myappsp \mytmn}{\mytyc{Prod} \myappsp \mytya \myappsp \mytyb}$}
+ \noLine
+ \UnaryInfC{\phantom{$\myinf{\myfun{snd} \myappsp \mytmt}{\mytyb \myappsp (\myfst \myappsp \mytmt)}$}}
+ \DisplayProof
+ &
+ \AxiomC{$\myinf{\mytmt}{\mytyc{Prod} \myappsp \mytya \myappsp \mytyb}$}
+ \UnaryInfC{$\myinf{\myfun{fst} \myappsp \mytmt}{\mytya}$}
+ \noLine
+ \UnaryInfC{$\myinf{\myfun{snd} \myappsp \mytmt}{\mytyb \myappsp (\myfst \myappsp \mytmt)}$}
+ \DisplayProof
+ \end{tabular}
+ \end{center}
+ What we have is equivalent to ITT's dependent products.
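As a rough, non-dependent Haskell counterpart (Haskell records cannot let the type of the second field depend on the value of the first, so $\myb{B}$ becomes a plain type parameter, and the names are ours):

```haskell
-- A record is a one-constructor datatype whose projections (here
-- first and second) are generated from the field names.
data Prod a b = Constr { first :: a, second :: b }

-- The projections let us manipulate records without pattern matching:
swapProd :: Prod a b -> Prod b a
swapProd p = Constr (second p) (first p)
```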
+\end{description}
+
+Following the intuition given by the examples, the mechanised
+elaboration is presented in figure \ref{fig:elab}, which is essentially
+a modification of figure 9 of \citet{McBride2004}\footnote{However, our
+  datatypes do not have indices, we do bidirectional typechecking by
+  treating constructors/destructors as syntactic constructors, and we
+  have records.}.
+
+In data type declarations we allow recursive occurrences as long as
+they are \emph{strictly positive}, employing a syntactic check to make
+sure that this is the case. See \cite{Dybjer1991} for a more formal
+treatment of inductive definitions in ITT.
+
+As for records, recursive occurrences are disallowed. The reason for
+this restriction is the same as the reason for having records at all:
+we need records to give the user types with $\eta$-laws for equality,
+as we saw in section % TODO add section
+and in the treatment of OTT in section \ref{sec:ott}. If we tried to
+$\eta$-expand recursive data types, we would expand forever.
+
+To implement bidirectional type checking for constructors and
+destructors, we store their types in full in the context, and then
+instantiate when due:
+
+\mydesc{typing:}{ }{
+ \AxiomC{$
+ \begin{array}{c}
+ \mytyc{D} : \mytele \myarr \mytyp \in \myctx \hspace{1cm}
+ \mytyc{D}.\mydc{c} : \mytele \mycc \mytele' \myarr
+ \myapp{\mytyc{D}}{\mytelee} \in \myctx \\
+ \mytele'' = (\mytele;\mytele')\vec{A} \hspace{1cm}
+ \mychkk{\myctx; \mytake_{i-1}(\mytele'')}{t_i}{\myix_i( \mytele'')}\ \
+ (1 \le i \le |\mytele''|)
+ \end{array}
+ $}
+ \UnaryInfC{$\mychk{\myapp{\mytyc{D}.\mydc{c}}{\vec{t}}}{\myapp{\mytyc{D}}{\vec{A}}}$}
+ \DisplayProof
+
+ \myderivsp
+
+ \AxiomC{$\mytyc{D} : \mytele \myarr \mytyp \in \myctx$}
+ \AxiomC{$\mytyc{D}.\myfun{f} : \mytele \mycc (\myb{x} {:}
+ \myapp{\mytyc{D}}{\mytelee}) \myarr \myse{F} \in \myctx$}
+ \AxiomC{$\myjud{\mytmt}{\myapp{\mytyc{D}}{\vec{A}}}$}
+ \TrinaryInfC{$\myinf{\myapp{\mytyc{D}.\myfun{f}}{\mytmt}}{(\mytele
+ \mycc (\myb{x} {:} \myapp{\mytyc{D}}{\mytelee}) \myarr
+ \myse{F})(\vec{A};\mytmt)}$}
+ \DisplayProof
+ }
+
+\subsubsection{Why user defined types?}
+
+% TODO reference levitated theories, indexed containers
+
+foobar
+
+\subsection{Cumulative hierarchy and typical ambiguity}