authorFrancesco Mazzoli <f@mazzo.li>2013-06-03 15:37:24 +0100
committerFrancesco Mazzoli <f@mazzo.li>2013-06-03 15:37:24 +0100
commitd629d19ea2f4447e62425cb7dabdaa8592ba90df (patch)
treeb255fbbfb12e78c126cff9e235149ef12df95159
parentc9b61fe511ff3c9dc42ce657a03d5a81b1a6bee5 (diff)
...
-rw-r--r--  kant-web.png  bin 0 -> 9763 bytes
-rw-r--r--  thesis.lagda  519
2 files changed, 492 insertions, 27 deletions
diff --git a/kant-web.png b/kant-web.png
new file mode 100644
index 0000000..8a2d329
--- /dev/null
+++ b/kant-web.png
Binary files differ
diff --git a/thesis.lagda b/thesis.lagda
index eb7dc9e..8a5065c 100644
--- a/thesis.lagda
+++ b/thesis.lagda
@@ -14,6 +14,7 @@
%% Symbols
\usepackage[fleqn]{amsmath}
+\usepackage{stmaryrd} %llbracket
%% Proof trees
\usepackage{bussproofs}
@@ -21,6 +22,19 @@
%% Diagrams
\usepackage[all]{xy}
+%% Quotations
+\usepackage{epigraph}
+
+%% Images
+\usepackage{graphicx}
+
+%% diagrams
+\usepackage{tikz}
+\usetikzlibrary{shapes,arrows,positioning}
+% \usepackage{tikz-cd}
+% \usepackage{pgfplots}
+
+
%% -----------------------------------------------------------------------------
%% Commands for Agda
\usepackage[english]{babel}
@@ -64,14 +78,23 @@
\FrameSep0.2cm
\newcommand{\mydesc}[3]{
- {\small
- \vspace{0.3cm}
- \hfill \textbf{#1} $#2$
- \vspace{-0.3cm}
- \begin{framed}
- #3
- \end{framed}
-}
+ \noindent
+ \mbox{
+ \parbox{\textwidth}{
+ {\small
+ \vspace{0.3cm}
+ \hfill \textbf{#1} $#2$
+
+ \framebox[\textwidth]{
+ \parbox{\textwidth}{
+ \vspace{0.1cm}
+ #3
+ \vspace{0.1cm}
+ }
+ }
+ }
+ }
+ }
}
% TODO is \mathbin the correct thing for arrow and times?
@@ -97,7 +120,7 @@
% TODO \mathbin or \mathrel here?
\newcommand{\myabss}[3]{\mydc{$\lambda$} #1 {:} #2 \mathrel{\mydc{$\mapsto$}} #3}
\newcommand{\mytt}{\mydc{$\langle\rangle$}}
-\newcommand{\myunit}{\mytyc{$\top$}}
+\newcommand{\myunit}{\mytyc{Unit}}
\newcommand{\mypair}[2]{\mathopen{\mydc{$\langle$}}#1\mathpunct{\mydc{,}} #2\mathclose{\mydc{$\rangle$}}}
\newcommand{\myfst}{\myfld{fst}}
\newcommand{\mysnd}{\myfld{snd}}
@@ -108,7 +131,7 @@
\newcommand{\mysum}{\mathbin{\textcolor{AgdaDatatype}{+}}}
\newcommand{\myleft}[1]{\mydc{left}_{#1}}
\newcommand{\myright}[1]{\mydc{right}_{#1}}
-\newcommand{\myempty}{\mytyc{$\bot$}}
+\newcommand{\myempty}{\mytyc{Empty}}
\newcommand{\mycase}[2]{\mathopen{\myfun{[}}#1\mathpunct{\myfun{,}} #2 \mathclose{\myfun{]}}}
\newcommand{\myabsurd}[1]{\myfun{absurd}_{#1}}
\newcommand{\myarg}{\_}
@@ -135,6 +158,23 @@
\newcommand{\myrec}[4]{\myfun{rec}\,#1 / {#2.#3}\,\myfun{with}\,#4}
\newcommand{\mylub}{\sqcup}
\newcommand{\mydefeq}{\cong}
+\newcommand{\myrefl}{\mydc{refl}}
+\newcommand{\mypeq}[1]{\mathrel{\mytyc{=}_{#1}}}
+\newcommand{\myjeqq}{\myfun{=-elim}}
+\newcommand{\myjeq}[3]{\myapp{\myapp{\myapp{\myjeqq}{#1}}{#2}}{#3}}
+\newcommand{\mysubst}{\myfun{subst}}
+\newcommand{\myprsyn}{\myse{prop}}
+\newcommand{\myprdec}[1]{\llbracket #1 \rrbracket}
+\newcommand{\myand}{\wedge}
+\newcommand{\myprfora}[3]{\forall #1 {:} #2. #3}
+\newcommand{\mybot}{\bot}
+\newcommand{\mytop}{\top}
+\newcommand{\mycoe}{\myfun{coe}}
+\newcommand{\mycoee}[4]{\myapp{\myapp{\myapp{\myapp{\mycoe}{#1}}{#2}}{#3}}{#4}}
+\newcommand{\mycoh}{\myfun{coh}}
+\newcommand{\mycohh}[4]{\myapp{\myapp{\myapp{\myapp{\mycoh}{#1}}{#2}}{#3}}{#4}}
+\newcommand{\myjm}[4]{(#1 {:} #2) = (#3 {:} #4)}
+\newcommand{\myprop}{\mytyc{Prop}}
%% -----------------------------------------------------------------------------
@@ -655,6 +695,20 @@ fact is that while System F is impredicative it is still consistent and strongly
normalising. \cite{Coquand1986} further extended this line of work with the
Calculus of Constructions (CoC).
+Most widely used interactive theorem provers are based on ITT. Popular ones
+include Agda \citep{Norell2007, Bove2009}, Coq \citep{Coq}, and Epigram
+\citep{McBride2004, EpigramTut}.
+
+\subsection{A note on inference}
+
+% TODO do this, adding links to the sections about bidi type checking and
+% implicit universes.
+In the following text I will often omit explicit typing for abstractions or
+other binders when the types are clear from the context.
+
+Moreover, I will use $\mytyp$ without bothering to specify a universe, with
+the silent assumption that the definition is consistent with regard to the
+hierarchy.
+
\subsection{A simple type theory}
\label{sec:core-tt}
@@ -745,13 +799,20 @@ The first thing to notice is that a barrier between values and types that we had
in the STLC is gone: values can appear in types, and the two are treated
uniformly in the syntax.
-While the usefulness of doing this will become clear soon, a consequence is that
-since types can be the result of computation, deciding type equality is not
-immediate as in the STLC. For this reason we define \emph{definitional
- equality}, $\mydefeq$, as the congruence relation extending $\myred$. Types
-that are definitionally equal can be used interchangeably. Here the
-`conversion' rule is not syntax directed, however we will see how it is possible
-to employ $\myred$ to decide term equality in a systematic way. % TODO add section
+While the usefulness of doing this will become clear soon, a consequence is
+that since types can be the result of computation, deciding type equality is
+not immediate as in the STLC. For this reason we define \emph{definitional
+ equality}, $\mydefeq$, as the congruence relation extending
+$\myred$---moreover, when comparing types syntactically we do it up to
+renaming of bound names ($\alpha$-renaming). For example under this
+discipline we will find that
+\[
+\myabss{\myb{x}}{\mytya}{\myb{x}} \mydefeq \myabss{\myb{y}}{\mytya}{\myb{y}}
+\]
+Types that are definitionally equal can be used interchangeably. Here the
+`conversion' rule is not syntax directed, however we will see how it is
+possible to employ $\myred$ to decide term equality in a systematic
+way. % TODO add section
Another thing to notice is that, given the need to reduce terms to decide
equality, it is essential for a dependently typed system to be terminating and
confluent for type checking to be decidable.
@@ -760,11 +821,11 @@ Moreover, we specify a \emph{type hierarchy} to talk about `large' types:
$\mytyp_0$ will be the type of types inhabited by data: $\mybool$, $\mynat$,
$\mylist$, etc. $\mytyp_1$ will be the type of $\mytyp_0$, and so on---for
example we have $\mytrue : \mybool : \mytyp_0 : \mytyp_1 : \dots$. Each type
-`level' is often called a universe in the literature. While it is possible, to
-simplify things by having only one universe $\mytyp$ with $\mytyp : \mytyp$,
-this plan is inconsistent for much the same reason that impredicative na\"{\i}ve
-set theory is \citep{Hurkens1995}. Moreover, various techniques can be employed
-to lift the burden of explicitly handling universes.
+`level' is often called a universe in the literature. While it is possible,
+to simplify things by having only one universe $\mytyp$ with $\mytyp :
+\mytyp$, this plan is inconsistent for much the same reason that impredicative
+na\"{\i}ve set theory is \citep{Hurkens1995}. Moreover, various techniques
+can be employed to lift the burden of explicitly handling universes.
% TODO add sectioon about universes
\subsubsection{Contexts}
@@ -981,13 +1042,118 @@ type-level binders, $\myprod$ and $\mytyc{W}$.
}
}
-
\section{The struggle for equality}
\label{sec:equality}
-\subsection{Propositional equality...}
+In the previous section we saw how a type checker (or a human) needs a notion of
+\emph{definitional equality}. Beyond this meta-theoretic notion, in this
+section we will explore the ways of expressing equality \emph{inside} the
+theory, as a reasoning tool available to the user. This area is the main
+concern of this thesis, and in general a very active research topic, since we do
+not have a fully satisfactory solution yet.
-\subsection{...and its limitations}
+\subsection{Propositional equality}
+
+\noindent
+\begin{minipage}{0.5\textwidth}
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \dots \\
+ & | & \mytmsyn \mypeq{\mytmsyn} \mytmsyn \mysynsep
+ \myapp{\myrefl}{\mytmsyn} \\
+ & | & \myjeq{\mytmsyn}{\mytmsyn}{\mytmsyn}
+ \end{array}
+ $
+}
+\end{minipage}
+\begin{minipage}{0.5\textwidth}
+\mydesc{reduction:}{\mytmsyn \myred \mytmsyn}{
+ \centering{
+ $
+ \myjeq{\myse{P}}{(\myapp{\myrefl}{\mytmm})}{\mytmn} \myred \mytmn
+ $
+ }
+ \vspace{0.9cm}
+}
+\end{minipage}
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \centering{
+ \AxiomC{$\myjud{\mytya}{\mytyp_l}$}
+ \AxiomC{$\myjud{\mytmm}{\mytya}$}
+ \AxiomC{$\myjud{\mytmn}{\mytya}$}
+ \TrinaryInfC{$\myjud{\mytmm \mypeq{\mytya} \mytmn}{\mytyp_l}$}
+ \DisplayProof
+
+ \myderivsp
+
+ \begin{tabular}{cc}
+ \AxiomC{\phantom{$\myctx P \mytyp_l$}}
+ \noLine
+ \UnaryInfC{$\myjud{\mytmt}{\mytya}$}
+ \UnaryInfC{$\myjud{\myapp{\myrefl}{\mytmt}}{\mytmt \mypeq{\mytya} \mytmt}$}
+ \DisplayProof
+ &
+ \AxiomC{$\myjud{\myse{P}}{\myfora{\myb{x}\ \myb{y}}{\mytya}{\myfora{q}{\myb{x} \mypeq{\mytya} \myb{y}}{\mytyp_l}}}$}
+ \noLine
+ \UnaryInfC{$\myjud{\myse{q}}{\mytmm \mypeq{\mytya} \mytmn}\hspace{1.2cm}\myjud{\myse{p}}{\myapp{\myapp{\myapp{\myse{P}}{\mytmm}}{\mytmm}}{(\myapp{\myrefl}{\mytmm})}}$}
+ \UnaryInfC{$\myjud{\myjeq{\myse{P}}{\myse{q}}{\myse{p}}}{\myapp{\myapp{\myapp{\myse{P}}{\mytmm}}{\mytmn}}{q}}$}
+ \DisplayProof
+ \end{tabular}
+ }
+}
+
+The obvious way to express equality between two terms inside ITT is to make
+equality itself a type former. Here we present what has survived as the
+dominant form of equality in systems based on ITT up to the present day.
+
+Our type former is $\mypeq{\mytya}$, which given a type (in this case
+$\mytya$) relates equal terms of that type. $\mypeq{}$ has one introduction
+rule, $\myrefl$, which introduces an equality relation between definitionally
+equal terms---in the typing rule we display the term as `the same', meaning
+`the same up to $\mydefeq$'. % TODO maybe mention this earlier
+
+Finally, we have one eliminator for $\mypeq{}$, $\myjeqq$. $\myjeq{\myse{P}}{\myse{q}}{\myse{p}}$ takes
+\begin{itemize}
+\item $\myse{P}$, a predicate working with two terms of a certain type (say
+ $\mytya$) and a proof of their equality
+\item $\myse{q}$, a proof that two terms in $\mytya$ (say $\mytmm$ and
+  $\mytmn$) are equal
+\item and $\myse{p}$, an inhabitant of $\myse{P}$ applied to $\mytmm$, plus
+  the trivial proof by reflexivity showing that $\mytmm$ is equal to itself
+\end{itemize}
+Given these ingredients, $\myjeqq$ returns a member of $\myse{P}$ applied to
+$\mytmm$, $\mytmn$, and $\myse{q}$. In other words $\myjeqq$ takes a
+witness that $\myse{P}$ works with \emph{definitionally equal} terms, and
+returns a witness of $\myse{P}$ working with \emph{propositionally equal}
+terms. Invocations of $\myjeqq$ vanish when the equality proofs reduce to
+invocations of reflexivity, at which point the arguments must be
+definitionally equal, and thus the provided
+$\myapp{\myapp{\myapp{\myse{P}}{\mytmm}}{\mytmm}}{(\myapp{\myrefl}{\mytmm})}$
+can be returned.
+
+While the $\myjeqq$ rule is slightly convoluted, we can derive many more
+`friendly' rules from it, for example a more obvious `substitution' rule, that
+replaces equal for equal in predicates:
+\[
+\begin{array}{l}
+(\myabs{\myb{A}\ \myb{P}\ \myb{x}\ \myb{y}\ \myb{q}\ \myb{p}}{
+ \myjeq{(\myabs{\myb{x}\ \myb{y}\ \myb{q}}{\myapp{\myb{P}}{\myb{y}}})}{\myb{q}}{\myb{p}}}) : \\
+\myind{1} \myfora{\myb{A}}{\mytyp}{\myfora{\myb{P}}{\myb{A} \myarr \mytyp}{\myfora{\myb{x}\ \myb{y}}{\myb{A}}{\myb{x} \mypeq{\myb{A}} \myb{y} \myarr \myapp{\myb{P}}{\myb{x}} \myarr \myapp{\myb{P}}{\myb{y}}}}}
+\end{array}
+\]
+This rule is often called $\myfun{subst}$---here we will invoke it without
+specifying the type ($\myb{A}$) and the sides of the equality
+($\myb{x},\myb{y}$).
+
+Once we have $\myfun{subst}$, we can easily prove more familiar laws regarding
+equality, such as symmetry, transitivity, and a congruence law:
+
+% TODO finish this
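The derivable laws just mentioned can be sketched in Agda, mirroring the identity type given in the appendix. This is a hypothetical, self-contained fragment: universe polymorphism is dropped, and subst is defined here directly by pattern matching rather than via the $\myjeqq$ eliminator.

```agda
module Laws where

data _≡_ {A : Set} : A → A → Set where
  refl : ∀ x → x ≡ x

-- subst: replaces equals for equals in a predicate.
subst : {A : Set} (P : A → Set) → ∀ {x y} → x ≡ y → P x → P y
subst P (refl x) p = p

-- Symmetry: rewrite the left-hand side of x ≡ x using x ≡ y.
sym : {A : Set} {x y : A} → x ≡ y → y ≡ x
sym {x = x} q = subst (λ z → z ≡ x) q (refl x)

-- Transitivity: rewrite the right-hand side of x ≡ y using y ≡ z.
trans : {A : Set} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
trans {x = x} q r = subst (λ w → x ≡ w) r q

-- Congruence: any function preserves propositional equality.
cong : {A B : Set} (f : A → B) → ∀ {x y} → x ≡ y → f x ≡ f y
cong f {x} q = subst (λ z → f x ≡ f z) q (refl (f x))
```

Each law follows the same recipe: pick a predicate that turns one side of the desired equality into the other, and let subst transport the trivial refl proof across the given equality.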
+
+\subsection{Common extensions}
eta law
@@ -995,11 +1161,158 @@ congruence
UIP
+\subsection{Limitations}
+
+\epigraph{\emph{Half of my time spent doing research involves thinking up clever
+ schemes to avoid needing functional extensionality.}}{@larrytheliquid}
+
+However, propositional equality as described is quite restrictive when
+reasoning about equality beyond the term structure, which is what definitional
+equality gives us (extensions notwithstanding).
+
+The problem is best exemplified by \emph{function extensionality}. In
+mathematics, we would expect to be able to treat functions that give equal
+output for equal input as the same. When reasoning in a mechanised framework
+we ought to be able to do the same: in the end, without considering their
+operational behaviour, all extensionally equal functions are going to be
+replaceable with one another.
+
+However, this is not the case: with the tools presented so far, we have no
+term of type
+\[
+\myfun{ext} : \myfora{\myb{A}\ \myb{B}}{\mytyp}{\myfora{\myb{f}\ \myb{g}}{
+ \myb{A} \myarr \myb{B}}{
+ (\myfora{\myb{x}}{\myb{A}}{\myapp{\myb{f}}{\myb{x}} \mypeq{\myb{B}} \myapp{\myb{g}}{\myb{x}}}) \myarr
+ \myb{f} \mypeq{\myb{A} \myarr \myb{B}} \myb{g}
+ }
+}
+\]
+To see why this is the case, consider the functions
+\[\myabs{\myb{x}}{0 \mathrel{\myfun{+}} \myb{x}} \quad \text{and} \quad \myabs{\myb{x}}{\myb{x} \mathrel{\myfun{+}} 0}\]
+where $\myfun{+}$ is defined by recursion on the first argument, gradually
+destructing it to build up successors of the second argument. The two
+functions are clearly denotationally the same, and we can in fact prove that
+\[
+\myfora{\myb{x}}{\mynat}{(0 \mathrel{\myfun{+}} \myb{x}) \mypeq{\mynat} (\myb{x} \mathrel{\myfun{+}} 0)}
+\]
+by induction on $\myb{x}$. However, the two functions are not
+definitionally equal, and thus we won't be able to get rid of the
+quantification.
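As a concrete illustration, the quantified statement can be proved in Agda. The following is a hypothetical, self-contained sketch (the names ℕ, _+_, and plus-zero are illustrative): plus-zero goes through by induction, while the two functions themselves remain definitionally distinct.

```agda
module PlusZero where

data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

-- Addition by recursion on the first argument, as in the text.
_+_ : ℕ → ℕ → ℕ
zero  + m = m
suc n + m = suc (n + m)

data _≡_ {A : Set} : A → A → Set where
  refl : ∀ x → x ≡ x

cong : {A B : Set} (f : A → B) → ∀ {x y} → x ≡ y → f x ≡ f y
cong f (refl x) = refl (f x)

-- (zero + x) reduces to x, so it remains to show x ≡ x + zero,
-- which goes through by induction on x.
plus-zero : ∀ x → (zero + x) ≡ (x + zero)
plus-zero zero    = refl zero
plus-zero (suc n) = cong suc (plus-zero n)
```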
+
+For the reasons above, theories that offer a propositional equality similar to
+what we presented are called \emph{intensional}, as opposed to
+\emph{extensional}. Most systems in wide use today (such as Agda, Coq and
+Epigram) are of this kind.
+
+This is quite an annoyance that often makes reasoning awkward. The problem
+also extends to other fields, for example proving bisimulation between
+processes specified by coinduction, or in general proving equivalences based
+on the behaviour of a term.
+
\subsection{Equality reflection}
+One way to `solve' this problem is by identifying propositional equality with
+definitional equality:
+
+\mydesc{typing:}{\myjud{\mytmsyn}{\mytmsyn}}{
+ \centering{
+ \AxiomC{$\myjud{\myse{q}}{\mytmm \mypeq{\mytya} \mytmn}$}
+ \UnaryInfC{$\myjud{\mytmm \mydefeq \mytmn}{\mytya}$}
+ \DisplayProof
+ }
+}
+
+This rule takes the name of \emph{equality reflection}, and is a very
+different rule from the ones we saw up to now: it links a typing judgement
+internal to the type theory to a meta-theoretic judgement that the type
+checker uses to work with terms. It is easy to see the dangerous consequences
+that this causes:
+\begin{itemize}
+\item The rule is not syntax directed: the type checker is presumably expected
+  to come up with equality proofs when needed.
+\item More worryingly, type checking becomes undecidable also because
+ computing under false assumptions becomes unsafe.
+ Consider for example
+ \[
+ \myabss{\myb{q}}{\mytya \mypeq{\mytyp} (\mytya \myarr \mytya)}{\myhole{?}}
+ \]
+ Using the assumed proof in tandem with equality reflection we could easily
+ write a classic Y combinator, sending the compiler into a loop.
+\end{itemize}
+
+Given these facts, theories employing equality reflection, like NuPRL
+\citep{NuPRL}, carry the derivations that gave rise to each typing judgement
+to keep the systems manageable. % TODO more info, problems with that.
+
+For all its faults, equality reflection does allow us to prove extensionality,
+using the extensions we gave above. Assuming that $\myctx$ contains
+\[\myb{A}, \myb{B} : \mytyp; \myb{f}, \myb{g} : \myb{A} \myarr \myb{B}; \myb{q} : \myfora{\myb{x}}{\myb{A}}{\myapp{\myb{f}}{\myb{x}} \mypeq{} \myapp{\myb{g}}{\myb{x}}}\]
+we can then derive
+\begin{prooftree}
+ \AxiomC{$\myjudd{\myctx; \myb{x} : \myb{A}}{\myapp{\myb{q}}{\myb{x}}}{\myapp{\myb{f}}{\myb{x}} \mypeq{} \myapp{\myb{g}}{\myb{x}}}$}
+ \RightLabel{equality reflection}
+ \UnaryInfC{$\myjudd{\myctx; \myb{x} : \myb{A}}{\myapp{\myb{f}}{\myb{x}} \mydefeq \myapp{\myb{g}}{\myb{x}}}{\myb{B}}$}
+ \RightLabel{congruence for $\lambda$s}
+ \UnaryInfC{$\myjud{(\myabs{\myb{x}}{\myapp{\myb{f}}{\myb{x}}}) \mydefeq (\myabs{\myb{x}}{\myapp{\myb{g}}{\myb{x}}})}{\myb{A} \myarr \myb{B}}$}
+ \RightLabel{$\eta$-law for $\lambda$}
+ \UnaryInfC{$\myjud{\myb{f} \mydefeq \myb{g}}{\myb{A} \myarr \myb{B}}$}
+ \RightLabel{$\myrefl$}
+ \UnaryInfC{$\myjud{\myapp{\myrefl}{\myb{f}}}{\myb{f} \mypeq{} \myb{g}}$}
+\end{prooftree}
+
+Now, the question is: do we need to give up well-behavedness of our theory to
+gain extensionality?
+
\subsection{Observational equality}
+\label{sec:ott}
+
+% TODO should we explain this in detail?
+A recent development by \citet{Altenkirch2007}, \emph{Observational Type
+  Theory} (OTT), promises to keep the well-behavedness of ITT while gaining
+many useful equality proofs\footnote{It is suspected that OTT gains
+  \emph{all} the equality proofs of ETT, but no proof exists yet.}, including
+function extensionality. The main idea is to give the user the possibility to
+\emph{coerce} (or transport) values from a type $\mytya$ to a type $\mytyb$,
+if the type checker can prove structurally that $\mytya$ and $\mytyb$ are
+equal, and to provide a value-level equality based on similar principles. A
+brief overview is given below.
+
+\mydesc{syntax}{ }{
+ $
+ \begin{array}{r@{\ }c@{\ }l}
+ \mytmsyn & ::= & \dots \\
+ & | & \myprdec{\myprsyn} \mysynsep
+ \mycoee{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \mysynsep
+ \mycohh{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn} \\
+ \myprsyn & ::= & \mybot \mysynsep \mytop \mysynsep \myprsyn \myand \myprsyn
+ \mysynsep \myprfora{\myb{x}}{\mytmsyn}{\myprsyn} \\\
+ & | & \mytmsyn = \mytmsyn \mysynsep
+ \myjm{\mytmsyn}{\mytmsyn}{\mytmsyn}{\mytmsyn}
+ \end{array}
+ $
+}
+\mydesc{propositions:}{\myjud{\myprsyn}{\myprop}}{
+ \centering{
+ }
+}
+
+The original presentation of OTT employs the theory presented above. It is
+close to the one presented in section \ref{sec:itt}, with the additions
+presented above, and the change that only the `first' universe, the type of
+small types ($\mytyp_0$), is present. Moreover, a \emph{propositional}
+universe is isolated, intended to be the universe where equality proofs live:
+proofs of equality between types ($\mytya = \mytyb$) and between values
+($\myjm{\mytmm}{\mytya}{\mytmn}{\mytyb}$). Propositions (as long as our
+system is consistent) are inhabited only by one element, and thus can all be
+treated as definitionally equal.
+
-\subsection{Univalence foundations}
\section{Augmenting ITT}
\label{sec:practical}
@@ -1025,6 +1338,145 @@ UIP
\section{\mykant}
\label{sec:kant}
+\mykant is an interactive theorem prover developed as part of this thesis.
+The plan is to present a core language capable of serving as the basis for a
+more featureful system, while still presenting interesting features, most
+importantly observational equality.
+
+The author learnt the hard way the implementation challenges of such a
+project, and while there is a solid and working base to build on,
+observational equality is not currently implemented. However, a detailed
+plan for adding this functionality is provided, and it should not prove to
+be too much work.
+
+\subsection{High level overview}
+
+\subsubsection{Features}
+
+The features currently implemented in \mykant are:
+
+\begin{description}
+\item[Full dependent types] As we would expect, we have a dependently typed
+  system as expressive as the `best' corner of the lambda cube described in
+  section \ref{sec:itt}.
+
+\item[Implicit, cumulative universe hierarchy] The user does not need to
+  specify universe levels explicitly, and universes are \emph{cumulative}.
+
+\item[User defined data types and records] Instead of forcing the user to
+ choose from a restricted toolbox, we let her define inductive data types,
+ with associated primitive recursion operators; or records, with associated
+ projections for each field.
+
+\item[Bidirectional type checking] While no `fancy' inference via unification
+  is present, we take advantage of a type synthesis system in the style of
+  \cite{Pierce2000}, extending the concept to user defined data types.
+
+\item[Type holes] When building up programs interactively, it is useful to
+ leave parts unfinished while exploring the current context. This is what
+ type holes are for.
+\end{description}
+
+The planned features are:
+
+\begin{description}
+\item[Observational equality] As described in section \ref{sec:ott}, but
+  extended to work with the type hierarchy and to admit equality between
+  arbitrary data types.
+
+\item[Coinductive data] ...
+\end{description}
+
+We will analyse the features one by one, along with motivations and tradeoffs
+for the design decisions made.
+
+\subsubsection{Implementation}
+
+The codebase consists of around 2500 lines of Haskell, as reported by the
+\texttt{cloc} utility. The high level design is heavily inspired by Conor
+McBride's work on various incarnations of Epigram, and specifically by the
+first version as described \citep{McBride2004} and the codebase for the new
+version \footnote{Available intermittently as a \texttt{darcs} repository at
+ \url{http://sneezy.cs.nott.ac.uk/darcs/Pig09}.}. In many ways \mykant is
+something in between the first and second version of Epigram.
+
+The interaction happens in a read-eval-print loop (REPL). The REPL is
+available both as a command-line application and through a web interface,
+which is accessible at \url{kant.mazzo.li} and presents itself as in figure
+\ref{fig:kant-web}.
+
+\begin{figure}
+ \centering{
+ \includegraphics[scale=1.0]{kant-web.png}
+ }
+ \caption{The \mykant web prompt.}
+ \label{fig:kant-web}
+\end{figure}
+
+The interaction with the user takes place in a loop living in and updating a
+context of \mykant declarations. The user inputs a new declaration, which
+goes through various stages that can end in a context update, or in failures
+of various kinds. The process is described in figure \ref{fig:kant-process}.
+The workings of each phase will be illustrated in the next sections.
+
+\begin{figure}
+ {\small
+ \tikzstyle{block} = [rectangle, draw, text width=5em, text centered, rounded
+ corners, minimum height=2.5em, node distance=0.7cm]
+
+ \tikzstyle{decision} = [diamond, draw, text width=4.5em, text badly
+ centered, inner sep=0pt, node distance=0.7cm]
+
+ \tikzstyle{line} = [draw, -latex']
+
+ \tikzstyle{cloud} = [draw, ellipse, minimum height=2em, text width=5em, text
+ centered, node distance=1.5cm]
+
+
+ \begin{tikzpicture}[auto]
+ \node [cloud] (user) {User};
+ \node [block, below left=1cm and 0.1cm of user] (parse) {Parse};
+ \node [block, below=of parse] (desugar) {Desugar};
+ \node [block, below=of desugar] (reference) {Reference};
+ \node [block, below=of reference] (elaborate) {Elaborate};
+ \node [block, below=of elaborate] (tycheck) {Typecheck};
+ \node [decision, right=of elaborate] (error) {Error?};
+ \node [block, right=of parse] (distill) {Distill};
+ \node [block, right=of desugar] (update) {Update context};
+
+ \path [line] (user) -- (parse);
+ \path [line] (parse) -- (desugar);
+ \path [line] (desugar) -- (reference);
+ \path [line] (reference) -- (elaborate);
+ \path [line] (elaborate) edge[bend right] (tycheck);
+ \path [line] (tycheck) edge[bend right] (elaborate);
+ \path [line] (elaborate) -- (error);
+ \path [line] (error) edge[out=0,in=0] node [near start] {yes} (distill);
+ \path [line] (error) -- node [near start] {no} (update);
+ \path [line] (update) -- (distill);
+ \path [line] (distill) -- (user);
+
+
+ \end{tikzpicture}
+ }
+ \caption{High level overview of the life of a \mykant prompt cycle.}
+ \label{fig:kant-process}
+\end{figure}
+
+\subsection{Bidirectional type checking}
+
+We start by describing bidirectional type checking, since it calls for fairly
+different typing rules than what we have seen up to now. The idea is to have
+two kinds of terms: terms whose type can always be inferred, and terms that
+need to be checked against a type. A nice observation is that this duality
+runs through the semantics of the terms: data destructors (function
+application, record projections, primitive recursors) \emph{infer} types,
+while data constructors (abstractions, record/data type constructors) need to
+be checked.
+
+To introduce the concept and notation, we will revisit the STLC in a
+bidirectional style.
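As a taste of how the split works in practice, here is a hypothetical Agda sketch of a bidirectional checker for the STLC. All names (Inf, Chk, infer, check, ann) are illustrative and not part of \mykant: inferable terms synthesise a type, checkable terms are tested against one, and a type ascription mediates between the two categories.

```agda
module Bidi where

data Maybe (A : Set) : Set where
  just    : A → Maybe A
  nothing : Maybe A

data Bool : Set where
  true false : Bool

if_then_else_ : {A : Set} → Bool → A → A → A
if true  then x else y = x
if false then x else y = y

_∧_ : Bool → Bool → Bool
true  ∧ b = b
false ∧ _ = false

data Ty : Set where
  base : Ty
  _⇒_  : Ty → Ty → Ty

-- Decidable equality on types, needed when checking applications.
_==_ : Ty → Ty → Bool
base    == base    = true
(a ⇒ b) == (c ⇒ d) = (a == c) ∧ (b == d)
base    == (_ ⇒ _) = false
(_ ⇒ _) == base    = false

data Ctx : Set where
  ε   : Ctx
  _,_ : Ctx → Ty → Ctx

data Var : Set where
  vz : Var
  vs : Var → Var

lookup : Ctx → Var → Maybe Ty
lookup ε       _      = nothing
lookup (Γ , A) vz     = just A
lookup (Γ , A) (vs v) = lookup Γ v

mutual
  -- Destructor-headed terms synthesise their type...
  data Inf : Set where
    var : Var → Inf
    app : Inf → Chk → Inf
    ann : Chk → Ty → Inf   -- type ascription

  -- ...while constructor-headed terms are checked against one.
  data Chk : Set where
    lam : Chk → Chk
    inf : Inf → Chk

mutual
  infer : Ctx → Inf → Maybe Ty
  infer Γ (var v)   = lookup Γ v
  infer Γ (ann t A) = if check Γ t A then just A else nothing
  infer Γ (app f t) with infer Γ f
  ... | just (A ⇒ B) = if check Γ t A then just B else nothing
  ... | _            = nothing

  check : Ctx → Chk → Ty → Bool
  check Γ (lam t) (A ⇒ B) = check (Γ , A) t B
  check Γ (lam t) base    = false
  check Γ (inf t) A with infer Γ t
  ... | just B  = A == B
  ... | nothing = false
```

Note how abstractions carry no type annotation: the domain is pushed in from the checked type, which is exactly the discipline that lets us omit explicit typing for abstractions in the source language.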
\appendix
@@ -1035,7 +1487,7 @@ the type of relation being established and the syntactic elements appearing,
for example
\mydesc{typing:}{\myjud{\mytmsyn}{\mytysyn}}{
- Typing derivations here.
+ \centering{Typing derivations here.}
}
In the languages presented and Agda code samples I also highlight the syntax,
@@ -1114,6 +1566,19 @@ module ITT where
(x : W S P) → -- If so, ...
C x -- ...C always holds.
rec C c (s ◁ f) = c s f (λ p → rec C c (f p))
+
+ data _≡_ {a} {A : Set a} : A → A → Set a where
+ refl : ∀ x → x ≡ x
+
+ J : ∀ {a b} {A : Set a}
+ (P : (x y : A) → x ≡ y → Set b) →
+ ∀ {x y} → P x x (refl x) → (x≡y : x ≡ y) → P x y x≡y
+ J P p (refl x) = p
+
+ subst : ∀ {a b} {A : Set a}
+ (P : A → Set b) →
+ ∀ {x y} → (x≡y : x ≡ y) → P x → P y
+ subst P q p = J (λ _ y _ → P y) p q
\end{code}
\nocite{*}