+\label{sec:future-work}
+
+As mentioned, the first step that the author plans to take is to work
+towards a simple but powerful term representation, and then adapt
+\mykant\ to it.  A good plan seems to be to associate each syntactic
+category (terms, telescopes, etc.) with the sort of thing its
+variables can be substituted with, so that the term type will be
+associated with itself, while telescopes and propositions will be
+associated with terms.  This can probably be accomplished elegantly
+with Haskell's \emph{type families} \citep{chakravarty2005associated}.
+With more solid machinery for terms in place, implementing
+observational equality fully should prove relatively easy.
+
+Beyond these steps, \mykant\ would still need many additions to
+compete as a reasonable alternative to the existing systems:
+
+\begin{description}
+\item[Pattern matching] Eliminators are very clumsy to use, and a
+  pattern matching interface, elaborated down to the existing
+  eliminators, would greatly improve the usability of the system.
+
+\item[More powerful data types] Beyond ordinary inductive data types,
+  \mykant\ does not support more sophisticated notions.  A popular
+  improvement on basic data types are inductive families, where the
+  parameters for the type constructors change based on the data
+  constructors, which lets us naturally express types such as
+  $\mytyc{Vec} : \mynat \myarr \mytyp$, which given a number returns the
+  type of lists of that length, or $\mytyc{Fin} : \mynat \myarr \mytyp$,
+  which given a number $n$ gives the type of numbers less than $n$.
+  This apparent omission is motivated by the fact that inductive
+  families can be represented by adding equalities concerning the
+  parameters of the type constructors as arguments to the data
+  constructors, in much the same way that Generalised Algebraic Data
+  Types \citep{GHC} are handled in Haskell, where interestingly the
+  modified version of System F at the core of recent versions of GHC
+  features coercions similar to those found in OTT \citep{Sulzmann2007}.
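  To make the correspondence concrete, here is an illustrative sketch
  of $\mytyc{Vec}$ as a Haskell GADT (hypothetical names): the refined
  index in each constructor's return type is what GHC internally
  records as an equality constraint on the type parameter.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers to index vectors with.
data Nat = Z | S Nat

-- Vec n a: lists of a's of length exactly n.  Each constructor
-- fixes the index in its return type; internally GHC represents
-- this as an equality constraint on n.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The index makes partial functions total: vhead simply cannot be
-- applied to an empty vector.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```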
+
+  Another popular extension, introduced by \cite{dybjer2000general}, is
+  induction-recursion, where we define a data type in tandem with a
+  function on that type.  This technique has proven extremely useful
+  for defining embeddings of other calculi in a host language: we
+  define the representation of the embedded language as a data type
+  and, at the same time, a function decoding from that representation
+  to a type in the host language.
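  Haskell cannot express induction-recursion directly, but the flavour
  of the technique---a type of codes defined in tandem with a decoding
  function---can be approximated at the type level, as in this
  illustrative sketch (all names are invented):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}

-- Codes for a small closed universe of types...
data U = NatC | ListC U | PairC U U

-- ...decoded into actual Haskell types.  In genuine
-- induction-recursion, U and El would be defined simultaneously,
-- and El could appear in the types of U's constructors.
type family El (u :: U) :: * where
  El 'NatC        = Integer
  El ('ListC u)   = [El u]
  El ('PairC u v) = (El u, El v)

-- A value of a decoded type, checked by the compiler.
example :: El ('PairC 'NatC ('ListC 'NatC))
example = (1, [2, 3])
```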
+
+  It is also worth mentioning recent work by
+  \cite{dagand2012elaborating, chapman2010gentle} showing how to
+  define a small set of primitives that data type declarations can be
+  elaborated into.  This carries the additional advantage of enabling
+  a very powerful notion of generic programming: functions written
+  against the `primitive' types automatically work for all
+  `compatible' user-defined data types.  The lack of such a facility
+  has been a considerable problem in the dependently typed world,
+  where we often define types that are more `strongly typed' versions
+  of similar structures,\footnote{For example the $\mytyc{OList}$
+  presented in Section \ref{sec:user-type}, a `more typed' version of
+  an ordinary list.} and then find ourselves forced to redefine
+  identical operations on both types.
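  As a small, hypothetical illustration of the payoff: once data types
  are presented as codes in a universe, a single function over the
  codes works uniformly for every type the universe decodes to.

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

-- A universe of codes and its decoding (illustrative names).
data U = NatC | ListC U | PairC U U

type family El (u :: U) :: * where
  El 'NatC        = Integer
  El ('ListC u)   = [El u]
  El ('PairC u v) = (El u, El v)

-- A singleton, so that codes can be pattern matched on at the
-- value level.
data SU (u :: U) where
  SNatC  :: SU 'NatC
  SListC :: SU u -> SU ('ListC u)
  SPairC :: SU u -> SU v -> SU ('PairC u v)

-- One generic function covering every type in the universe:
-- a default element of each decoded type.
defaultEl :: SU u -> El u
defaultEl SNatC        = 0
defaultEl (SListC _)   = []
defaultEl (SPairC u v) = (defaultEl u, defaultEl v)
```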
+
+\item[Type inference] While bidirectional type checking helps, a
+  system usable in practice needs to infer more, typically by means of
+  unification.  This is a hard problem in a dependently typed setting:
+  higher-order unification is undecidable in general
+  \citep{huet1973undecidability}, although well-behaved decidable
+  fragments such as pattern unification exist
+  \citep{miller1992unification}; see \cite{gundrytutorial} for a
+  tutorial treatment.
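  To give a flavour of what is involved, here is a minimal first-order
  unifier in Haskell (a sketch with invented names; pattern
  unification extends the same idea to the higher-order case):

```haskell
import Control.Monad (foldM)
import qualified Data.Map as M

-- Types with metavariables (TVar) to be solved by unification.
data Ty = TVar String | TCon String [Ty]
  deriving (Eq, Show)

type Solution = M.Map String Ty

-- Apply a solution to a type.
applyS :: Solution -> Ty -> Ty
applyS s (TVar v)    = M.findWithDefault (TVar v) v s
applyS s (TCon c ts) = TCon c (map (applyS s) ts)

-- Compute a solution making the two types equal, if one exists.
unify :: Ty -> Ty -> Maybe Solution
unify (TVar v) t = bind v t
unify t (TVar v) = bind v t
unify (TCon c ts) (TCon d us)
  | c == d && length ts == length us = foldM step M.empty (zip ts us)
  | otherwise                        = Nothing
  where
    step s (a, b) = do
      s' <- unify (applyS s a) (applyS s b)
      return (M.map (applyS s') s `M.union` s')

-- Bind a metavariable, with an occurs check ruling out cyclic
-- solutions such as a := List a.
bind :: String -> Ty -> Maybe Solution
bind v t
  | t == TVar v = Just M.empty
  | occurs v t  = Nothing
  | otherwise   = Just (M.singleton v t)

occurs :: String -> Ty -> Bool
occurs v (TVar w)    = v == w
occurs v (TCon _ ts) = any (occurs v) ts
```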
+
+\item[Coinduction] \mykant\ also lacks coinductive data types, needed
+  to represent infinite structures such as streams; see
+  \cite{cockett1992charity} and \cite{mcbride2009let} for approaches
+  to integrating coinduction soundly in a total language.
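  In a lazy language such as Haskell infinite structures are already
  expressible as ordinary data, as the sketch below shows; a total,
  dependently typed language instead needs a dedicated coinductive
  facility together with a productivity check:

```haskell
-- Streams: necessarily infinite lists.  In a total language this
-- definition would be marked coinductive, and 'nats' would have to
-- pass a productivity check.
data Stream a = SCons a (Stream a)

nats :: Stream Integer
nats = go 0
  where go n = SCons n (go (n + 1))

-- Observe a finite prefix of an infinite stream.
takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0 = []
takeS n (SCons x xs) = x : takeS (n - 1) xs
```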
+\end{description}
+
+