by Jan Malakhovski, v. 16.3
Agda doesn’t lack tutorials and introductions, see below for a rather long list of recommendations. There’s a lot to read already, so why another introduction? Because there is a gap. The Theory is huge and full of subtle details that are mostly ignored in tutorial implementations and hidden in language tutorials (so that the unprepared are not scared away). Which is hardly surprising, since the current state of the art takes years to implement correctly, and even then some (considerable) problems remain.
Nevertheless, I think it is the hard parts that matter, and I always wanted a tutorial that at least mentioned their existence (well, obviously there is a set of dependently typed problems most people appreciate, e.g. undecidable type inference, but there is still a lot of issues that are not so well-understood). Moreover, after I stumbled upon some of these lesser known parts of dependently typed programming I started to suspect that hiding them behind the language goodnesses actually makes things harder to understand. “Dotted patterns” and “unification stuck” error in Agda are perfect examples. I claim that:
Having said that, this article serves somewhat controversial purposes:
Finally, before we start, a disclaimer: I verified my core thoughts about how all this stuff works by reading (parts of) Agda’s source code, but still, as Plato’s Socrates stated, “I know that I know nothing”.
There is a whole page of Agda tutorials [3] on Agda’s documentation page [4]. Personally, I recommend:
(This list is not ordered; the best practice is to read them (and this page) simultaneously.)
The same proposition holds for Coq [9], Idris [10] and, to a lesser extent, for Epigram [11].
For a general introduction to the type theory field, look no further than:
There’s also a number of theoretical books strongly related to the languages listed above:
And a number of tutorials which show how to implement a dependently typed language yourself:
There is agda2-mode for Emacs. It allows you to:
Installation:

- Install emacs,
- install packages with the agda substring from your package manager, or Agda and Agda-executable with cabal,
- run agda-mode setup (or, alternatively, if your OS runs nix [21], a purely-functional package manager, then you can use the emacs.pkgs.agda2-mode package instead; it’s a tiny wrapper that sets everything up for you, you can put it into emacsWithPackages and it will be kept in sync with your agda automatically),
- run emacs.

Don’t be scared away by this if you have never used Emacs before; by default it looks and works like any conventional text editor (i.e. it’s not Vim; though you can make Emacs emulate Vim by installing and enabling evil-mode, and I do, but that’s unrelated).
I expect you to load the Literate Agda version of this document into Emacs and continue reading it there. You can, of course, keep reading it some other way (without agda-mode) and call agda from the shell, but then you will miss most of the fun stuff that makes me prefer Agda to Coq.

How to open the Literate Agda file:

- Run emacs,
- press C-x C-f BrutalDepTypes.lagda RET (press Control+x keyboard keys together, release all keys, press Control+f, release, type “BrutalDepTypes.lagda”, press the Return/Enter key) to load this file.

If Emacs appears to hang, or it asks if you want to do something you don’t, or you got yourself into some other kind of Emacs mess somehow, you can always start fixing your problem by pressing C-g (which calls the keyboard-quit Emacs LISP function, which is Emacs’ version of shell’s Control+C) repeatedly until it stops doing the thing you don’t want it doing.
In Agda a module definition always goes first:
{-# OPTIONS -WnoOpenPublicPrivate #-}
-- The above is needed because Agda 2.6.4.1 gives factually incorrect warnings.
-- Trying to fix the problem it points to the way it wants will break things.
module BrutalDepTypes where
Nested modules and modules with parameters are supported. One of the most common usages of nested modules is to hide some definitions from the top level namespace:
Datatypes are written in GADTs-style:
data Bool : Set where
true false : Bool -- Note, we can list constructors of a same type
-- by interspersing them with spaces.
-- input for ℕ is \bN,
-- input for → is \to, but -> is fine too
-- Naturals.
data ℕ : Set where
zero : ℕ
succ : ℕ → ℕ
-- Identity container
data Id (A : Set) : Set where
pack : A → Id A
-- input for ⊥ is \bot
-- Empty type. Absurd. False proposition.
data ⊥ : Set where
Set
here means the same thing as kind *
in Haskell, i.e. a type of types (more on that below).
Agda is a total language. There is no Haskell-like undefined
, all functions are guaranteed to terminate on all possible inputs (if not explicitly stated otherwise by a compiler flag or a function definition itself), which means that ⊥
type is really empty.
Function declarations look very similar to those in Haskell:
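For instance, the idℕ₀ referenced below is presumably just the identity on ℕ:

idℕ₀ : ℕ → ℕ
idℕ₀ x = x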
except function arguments have their names even in type expressions:
-- Note, argument's name in a type might differ from a name used in pattern-matching
idℕ₁ : (n : ℕ) → ℕ
idℕ₁ x = x -- this `x` refers to the same argument as `n` in the type
with idℕ₀
’s definition being a syntax sugar for:
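presumably the same definition with an unnamed argument in the type:

idℕ₀ : (_ : ℕ) → ℕ  -- the desugared spelling of the type above
idℕ₀ x = x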
where the underscore means “I don’t care about the name”, just like in Haskell.
Dependent types allow type expressions after an arrow to depend on expressions before the arrow, this is used to type polymorphic functions:
Note that this time A
in the type cannot be changed into an underscore, but it’s fine to ignore this name in pattern matching.
Pattern matching looks as usual:
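e.g. negation on Bool (an illustrative example, not used elsewhere):

not : Bool → Bool
not true  = false
not false = true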
except if you make an error in a constructor name:
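say, something like this, where flase in the last clause is a typo:

not′ : Bool → Bool
not′ true  = false
not′ flase = true  -- `flase` is not a constructor, so it is treated as a variable pattern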
Agda will say nothing. This might be critical sometimes:
data Three : Set where
COne CTwo CThree : Three
three2ℕ : Three → ℕ
three2ℕ COne = zero
three2ℕ Ctwo = succ zero
three2ℕ _ = succ (succ zero) -- intersects with the previous clause
Finally, Agda supports implicit arguments:
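the id and const used below were presumably defined along these lines:

-- `A` is an implicit argument, Agda infers its value at each use
id : {A : Set} → A → A
id a = a

const : {A : Set} {B : Set} → A → B → A
const a _ = a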
Values of implicit arguments are derived from other arguments’ values and types by solving type equations (more on them below). You don’t have to apply them or pattern match on them explicitly, but you still can if you wish:
-- positional:
id₁ : {A : Set} → A → A
id₁ {A} a = a
idTest₁ : ℕ → ℕ
idTest₁ = id {ℕ}
-- named:
const₀ : {A : Set} {B : Set} → A → B → A
const₀ {B = _} a _ = a
constTest₀ : ℕ → ℕ → ℕ
constTest₀ = const₀ {A = ℕ} {B = ℕ}
[It’s important to note that no proof search is ever done. Implicit arguments are completely orthogonal to computational aspect of a program, being implicit doesn’t imply anything else. Implicit variables are not treated any way special, they are not erased any way differently than others. They are just a kind of syntax sugar assisted by equation solving.]
It’s allowed to skip arrows between arguments in parentheses or braces:
and to intersperse names of values of the same type by spaces inside parentheses and braces:
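for instance (illustrative names):

-- no arrow between the two explicit arguments
constℕ : (a : ℕ) (b : ℕ) → ℕ
constℕ a _ = a

-- `a b : ℕ` groups two arguments of the same type
constℕ′ : (a b : ℕ) → ℕ
constℕ′ a _ = a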
What makes Agda’s syntax so confusing is the usage of underscore.
The first one is the “I don’t care about the name” underscore demonstrated by the const term above (which is the same syntax as Haskell uses).
The second one is “guess the value yourself”:
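for instance, reusing const₀ from above (constTest₁ is an illustrative name), we can leave one of the implicit values for Agda to guess:

constTest₁ : ℕ → ℕ → ℕ
constTest₁ = const₀ {_} {ℕ}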
which works exactly the same way as implicit arguments (this is different from Haskell, where an underscore in a term body declares it to be a hole to be filled later; Agda has a similar mechanism called “goals”, which will be discussed below).
Or, to be more precise, it is the implicit arguments that work like arguments implicitly applied with underscores, except Agda does this once for each function definition, not for each call.
The second-and-a-half meaning is “guess the type yourself” (it shares the mechanism with the second meaning, it’s a dependently typed language, after all):
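presumably something like:

unpack₀ : {A : _} → Id A → A
unpack₀ (pack a) = a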
which has a special ∀
syntax sugar:
-- input for ∀ is \all or \forall
unpack : ∀ {A} → Id A → A
unpack (pack a) = a
-- explicit argument version:
unpack₁ : ∀ A → Id A → A
unpack₁ _ (pack a) = a
∀
extends to the right up to the first arrow:
unpack₂ : ∀ {A B} → Id A → Id B → A
unpack₂ (pack a) _ = a
unpack₃ : ∀ {A} (_ : Id A) {B} → Id B → A
unpack₃ (pack a) _ = a
Datatype syntax assumes implicit ∀
when there is no type specified:
It is important to note that Agda’s ∀
is quite different from Haskell’s ∀
(forall
). When we say ∀ n
in Agda it’s perfectly normal for n : ℕ
to be inferred, but in Haskell ∀ n
always means {n : Set}
, [i.e. Haskell’s ∀
is an implicit (Hindley-Milner) version of second order universal quantifier while in Agda it’s just a syntax sugar].
Misinterpreting the syntax becomes a huge problem when working with more than one universe level (more on that below). It is important to train yourself to desugar type expressions subconsciously (by doing it consciously at first). It will save hours of your time later. For instance, ∀ {A} → Id A → A
means {A : _} → (_ : Id A) → A
(where the last → A
should be interpreted as → (_ : A)
), i.e. the first A
is a variable name, while the other expressions are types.
Finally, the third and the last meaning of an underscore is to mark arguments’ places in function names for the MixFix
parser, i.e. an underscore in a function name marks the place where the argument goes:
if_then_else_ : {A : Set} → Bool → A → A → A
if true then a else _ = a
if false then _ else b = b
-- Are two ℕs equal?
_=ℕ?_ : ℕ → ℕ → Bool
zero =ℕ? zero = true
zero =ℕ? succ m = false
succ m =ℕ? zero = false
succ n =ℕ? succ m = n =ℕ? m
-- Sum for ℕ.
infix 6 _+_
_+_ : ℕ → ℕ → ℕ
zero + n = n
succ n + m = succ (n + m)
ifthenelseTest₀ : ℕ
ifthenelseTest₀ = if (zero + succ zero) =ℕ? zero
then zero
else succ (succ zero)
-- Lists
-- input for ∷ is \::
data List (A : Set) : Set where
[] : List A
_∷_ : A → List A → List A
[_] : {A : Set} → A → List A
[ a ] = a ∷ []
listTest₁ : List ℕ
listTest₁ = []
listTest₂ : List ℕ
listTest₂ = zero ∷ (zero ∷ (succ zero ∷ []))
Note the fixity declaration infix
which has the same meaning as in Haskell. We didn’t write infixl
for a reason. With declared associativity Agda would not print redundant parentheses, which is good in general, but would somewhat complicate the explanation of several things below.
There is a where
construct, just like in Haskell:
ifthenelseTest₁ : ℕ
ifthenelseTest₁ = if (zero + succ zero) =ℕ? zero
then zero
else x
where
x = succ (succ zero)
While pattern matching, there is a special case when a type we are trying to pattern match on is obviously ([the type inhabitance problem is undecidable in the general case]) empty. This special case is called an absurd pattern:
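the ⊥-elim used a bit below was presumably defined with one:

-- an element of ⊥ can not exist, so we don't need a right-hand side
⊥-elim : {A : Set} → ⊥ → A
⊥-elim ()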
which allows you to skip a right-hand side of a definition.
You can bind variables like that still:
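a sketch (⊥-elim′ is an illustrative name):

⊥-elim′ : {A : Set} → ⊥ → A
⊥-elim′ x = ⊥-elim x  -- `x` is bound even though its type is empty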
Agda has records, which work very much like newtype
declarations in Haskell, i.e. they are datatypes with a single constructor [the tag for which is not stored in memory].
record Pair (A B : Set) : Set where
field
first : A
second : B
getFirst : ∀ {A B} → Pair A B → A
getFirst = Pair.first
Note, however, that to prevent name clashes record definition generates a module with field extractors inside.
There is a convention to define a type with one element as a record with no fields:
-- input for ⊤ is \top
-- One element type. Record without fields. True proposition.
record ⊤ : Set where
tt : ⊤
tt = record {}
A special thing about this convention is that an argument of an empty record type automatically gets the value record {}
when applied implicitly or with underscore.
Lastly, Agda uses oversimplified lexer that splits tokens by spaces, parentheses, and braces. For instance (note the name of the variable binding):
-- input for ‵ is \`
-- input for ′ is \'
⊥-elim‵′ : {A : Set} → ⊥ → A
⊥-elim‵′ ∀x:⊥→-- = ⊥-elim ∀x:⊥→--
is totally fine. Also note that --
doesn’t generate a comment here.
Let’s define the division by two:
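a sketch of such a definition, with a goal left for the odd case:

div2 : ℕ → ℕ
div2 zero            = zero
div2 (succ zero)     = {!check me!}
div2 (succ (succ n)) = succ (div2 n)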
the problem with this definition is that Agda is total and we have to extend this function for odd numbers
by changing {!check me!}
into some term, the most common choice being zero
.
Suppose now, we know that inputs to div2
are always even and we don’t want to extend div2
for the succ zero
case. How do we constrain div2
to even naturals only? With a predicate! That is, even
predicate:
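presumably:

even : ℕ → Set
even zero            = ⊤
even (succ zero)     = ⊥
even (succ (succ n)) = even n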
which returns ⊤
with a trivial proof tt
when the argument is even and empty ⊥
when the argument is odd.
Now the definition of div2e
constrained to even naturals only becomes:
div2e : (n : ℕ) → even n → ℕ -- Note, we have to give a name `n` to the first argument here
div2e zero p = zero
div2e (succ zero) ()
div2e (succ (succ y)) p = succ (div2e y p) -- Note, a proof of `even (succ (succ n))` translates
-- to a proof of `even n` by the definition of `even`.
When programming with dependent types, a predicate on A
becomes a function from A
to types, i.e. A → Set
. If a : A
satisfies the predicate P : A → Set
then the function P
returns a type with each element being a proof of P a
, and in case a
doesn’t satisfy P
it returns an empty type.
The magic of dependent types makes the type of the second argument of div2e
change every time we pattern match on the first argument n
. From the callee side, if the first argument is odd then the second argument would get ⊥
type sometime (after a number of recursive calls) enabling the use of an absurd pattern. From the caller side, we are not able to call the function with an odd n
, since we have no means to construct a value for the second argument in this case.
There is another way to define “even” predicate. This time with a datatype indexed by ℕ:
data Even : ℕ → Set where
ezero : Even zero
e2succ : {n : ℕ} → Even n → Even (succ (succ n))
twoIsEven : Even (succ (succ zero))
twoIsEven = e2succ ezero
Even : ℕ → Set
is a family of types indexed by ℕ and obeying the following rules:

- the type Even zero has one element, ezero,
- for any n, the type Even (succ (succ n)) has one element if Even n is nonempty.

Compare this to the even : ℕ → Set definition translation:
- zero has property even,
- succ zero doesn’t have property even,
- if n has property even then so has succ (succ n).

In other words, the difference is that Even : ℕ → Set
constructs a type whereas even : ℕ → Set
returns a type when applied to an element of ℕ
.
The proof that two is even even (succ (succ zero))
literally says “two is even because it has a trivial proof”, whereas the proof that two is even twoIsEven
says “two is even because zero is even and two is the successor of the successor of zero”.
Even
datatype allows us to define another non-extended division by two for ℕ
:
div2E : (n : ℕ) → Even n → ℕ
div2E zero ezero = zero
div2E (succ zero) ()
div2E (succ (succ n)) (e2succ stilleven) = succ (div2E n stilleven) -- Compare this case to div2e.
Note, there is no case for div2E zero (e2succ x)
since e2succ x
has the wrong type, there is no such constructor in Even zero
. For the succ zero
case the type of the second argument is not ⊥
, but is empty. How do we know that? Unification!
Unification is the most important (at least with pattern matching on inductive datatypes involved) and easily forgotten aspect of dependently typed programming. Given two terms M
and N
unification tries to find a substitution s
such that using s
on M
gives the same result as using s
on N
. The precise algorithm definition is pretty long, but the idea is simple: to decide if two expressions could be unified we normalize both of them and compare the results structurally, collecting the necessary variable assignments into a substitution s.

For instance:
- To unify [(succ a) + b with succ (c + d)] we need to reduce both of them; applying the definition of _+_ from above, we now need to unify [succ (a + b) with succ (c + d)], which means that we need to unify [a + b with c + d], which means that we need to unify [a with c] and [b with d], which means that [a = c, b = d].
- succ a can not be unified with zero for any a, and succ b can not be unified with b for any b ([actually, “for any b of inductive type”, since there is a solution for b of coinductive ℕ]).
- It is not known whether foo n unifies with zero for some unknown function foo (it might or might not reduce to zero for some n).

In the code above succ zero does not unify with any of the Even constructors’ indexes [zero, succ (succ n)], which means this type is obviously empty by the definition.
[Refer to “The view from the left” paper by McBride and McKinna [22] for more details on pattern matching with type families.]
In a datatype declaration, things before the colon are called “parameters”, things between the colon and the Set
are called “indexes”.
There is a famous datatype involving both of them:
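presumably the following (it reappears later as Data-Vec in the library code):

data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : {n : ℕ} → A → Vec A n → Vec A (succ n)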
Vec A n
is a vector of values of type A
and length n
, Vec
has a parameter of type Set
and is indexed by values of type ℕ
. Compare this definition to the definition of List
and Even
. Note also, that Agda tolerates different datatypes with constructors of the same name (see below for how this is resolved).
We can not omit the clause for an []
case in a function which takes a head of a List
:
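a sketch (headL is an illustrative name):

headL : {A : Set} → List A → A
headL []       = {!check me!}
headL (a ∷ as) = a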
but we have nothing to write in place of {!check me!}
there (if we want to be total).
On the other hand, there is no []
constructor in a Vec A (succ n)
type:
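so we can simply write (headV is an illustrative name):

headV : {A : Set} {n : ℕ} → Vec A (succ n) → A
headV (a ∷ as) = a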
Note that there are no absurd patterns here, Vec A (succ n)
is inhabited, it just happens that there is no []
in there.
The Vec
type is famous for its concatenation function, which has a very nice type when compared to that for simple List
s:
-- Concatenation for `List`s
_++_ : ∀ {A} → List A → List A → List A
[] ++ bs = bs
(a ∷ as) ++ bs = a ∷ (as ++ bs)
-- Concatenation for `Vec`tors
-- The length of a concatenation is the sum of lengths of arguments and is available in types.
_++v_ : ∀ {A n m} → Vec A n → Vec A m → Vec A (n + m)
[] ++v bs = bs
(a ∷ as) ++v bs = a ∷ (as ++v bs)
Compare _+_
, _++_
, and _++v_
definitions.
Why does the definition of _++v_ work? Because we defined _+_ this way!

- In the first clause of _++v_ the type of [] gives n = zero by unification, and zero + m = m by the definition of _+_, and so bs : Vec A m, which matches the result type.
- In the second clause, n = succ n0, as : Vec A n0, (succ n0) + m = succ (n0 + m), so a ∷ (as ++v bs) : Vec A (succ (n0 + m)), which, again, matches the result type.

Let’s define a subtraction function:
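a sketch consistent with the discussion below:

_-_ : ℕ → ℕ → ℕ
zero     - m        = zero
(succ n) - zero     = succ n
(succ n) - (succ m) = n - m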
Note that n - m = zero
for m > n
.
Let us get rid of this (succ n) - zero
case with _≤_
relation:
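presumably the usual definition (it reappears later in the ℕ-Rel module):

-- input for ≤ is \le
data _≤_ : ℕ → ℕ → Set where
  z≤n : ∀ {n} → zero ≤ n
  s≤s : ∀ {n m} → n ≤ m → succ n ≤ succ m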
We are now able to write a subtraction that is not extended for m > n
.
sub₀ : (n m : ℕ) → m ≤ n → ℕ
sub₀ n zero (z≤n .{n}) = n
sub₀ .(succ n) .(succ m) (s≤s {m} {n} y) = sub₀ n m y
Note the dots. These are called “dotted patterns”. Now ignore them for a second.
Consider the first clause sub₀ n zero (z≤n {k}).

- We get m = zero from the pattern matching, so the type of the third argument is zero ≤ n.
- The type of z≤n {k} is zero ≤ k (where k is a fresh type variable).
- Unifying [zero ≤ n, zero ≤ k] gives [k = n].
- So the clause is actually sub₀ n zero (z≤n {n}).

The second clause is sub₀ n m (s≤s {n'} {m'} y) (for fresh type variables n' and m').

- The type of the third argument is m ≤ n.
- The type of s≤s {n'} {m'} y is succ n' ≤ succ m'.
- Unification gives [n = succ n', m = succ m'].
- So the clause is actually sub₀ (succ n') (succ m') (s≤s {n'} {m'} y).
we now have two pattern matches on n
and in the second clause we now have two matches on n'
and m'
each. Which of those do we want to match on and bind to a variable? Dotted patterns allow us to make that choice by placing dots before the expressions we want to ignore.
In the above instance, to ask the compiler to match on the first n
in the first clause of sub₀
we put a dot before the second occurrence of n
. The second clause of sub₀
binds the second occurience of each variable instead. (Yes, from a computational point of view, this is stupid. Deliberately so, see below.)
In other words, dotted pattern say “do not match on this, it is the only possible value” to the compiler.
Rewritten with a case
construct from Haskell (Agda doesn’t have case
, see below) the code above becomes (in pseudo-Haskell):
sub₀ n m less = case less of
z≤n {k} -> case m of -- [`m = zero`, `k = n`]
zero -> n
succ m' -> __IMPOSSIBLE__ -- since `m = zero` doesn't merge with `m = succ m'`
  s≤s m' n' y -> sub₀ n' m' y -- [`n = succ n'`, `m = succ m'`]
where __IMPOSSIBLE__
is “an undefined
that is never executed”. [It is sometimes also called abort
in the TT literature.] Absurd patterns translate to __IMPOSSIBLE__
too.
Note, that since we have [m = zero
, k = n
] in the first case, we can actually dot the first usage of zero
too to optimize the match on m
away completely:
sub₁ : (n m : ℕ) → m ≤ n → ℕ
sub₁ n .zero (z≤n .{n}) = n
sub₁ .(succ n) .(succ m) (s≤s {m} {n} y) = sub₁ n m y
which translates to
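presumably something like this, in the same pseudo-Haskell:

sub₁ n m less = case less of
  z≤n {k}       -> n           -- [`m = zero`, `k = n`]; no match on `m` is needed
  s≤s {k} {l} y -> sub₁ l k y  -- [`n = succ l`, `m = succ k`]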
Finally, note that sub₀
and sub₁
extract n and m from the proof of m ≤ n
instead of just using the actual argument, which, of course, is a bit ridiculous. [I presented them first to show that dot patterns do not influence “terms are expression rewrite rules” semantics, but do influence computational behaviour of terms.]
A conventional definition for sub
looks like this:
sub : (n m : ℕ) → m ≤ n → ℕ
sub n zero (z≤n .{n}) = n
sub (succ n) (succ m) (s≤s .{m} .{n} y) = sub n m y
and translates into the following:
sub n m less = case m of
zero -> case less of
z≤n {k} -> n
s≤s {k} {l} y -> __IMPOSSIBLE__ -- since `zero` (`m`) can't be unified
-- with `succ k`
succ m' -> case n of
zero -> case less of
z≤n {k} -> __IMPOSSIBLE__ -- since `succ m'` (`m`) can't be unified
-- with `zero`
s≤s {k} {l} y -> __IMPOSSIBLE__ -- since `zero` (`n`) can't be unified
-- with `succ l`
succ n' -> case less of
z≤n {k} -> __IMPOSSIBLE__ -- since `succ n'` (`n`) can't be unified
-- with `zero`
s≤s {k} {l} y -> sub n' m' y
Exercise. Write out the unification constraints for the pseudo-Haskell translation above.
Also note, that for sub n zero
the third argument is always z≤n {n}
, so, in theory, we could have written
sub₂ : (n m : ℕ) → m ≤ n → ℕ
sub₂ n zero .(z≤n {n}) = n
sub₂ (succ n) (succ m) (s≤s .{m} .{n} y) = sub₂ n m y
but Agda doesn’t allow this. Why? Because dotted patterns are inlined unification constraints and the unification algorithm does not generate any constraints that allow us to dot z≤n {n}
in the first clause of sub₂
(a different unification algorithm could, though). [Actually, also note that in the second clause of sub₂
we, too, at least in theory, could have dotted the whole s≤s
destructor subexpression while keeping the y
bound (we don’t care about its value since nothing in sub₂
will actually use it for computation), but Agda doesn’t allow that either.]
This is also the reason why the first case of sub₀
has two possible implementations noted above (with a dot pattern on zero
and without).
In the sub
case, however, we can write
sub₃ : (n m : ℕ) → m ≤ n → ℕ
sub₃ n zero _ = n
sub₃ (succ n) (succ m) (s≤s .{m} .{n} y) = sub₃ n m y
to sidestep the problem at least in the z≤n
case.
Exercise. Translate the following definition into pseudo-Haskell with unification constraints:
sub₄ : (n m : ℕ) → m ≤ n → ℕ
sub₄ n zero (z≤n .{n}) = n
sub₄ (succ .n) (succ .m) (s≤s {m} {n} y) = sub₄ n m y
We shall now define the most useful type family, that is, Martin-Löf’s equivalence (values only version, though):
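presumably the non-universe-polymorphic version of the definition that reappears in the MLTT module below:

-- input for ≡ is \==
infix 4 _≡_
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x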
For x y : A
the type x ≡ y
has exactly one constructor refl
if x
and y
are convertible, i.e. there exist such z
that z →β✴ x
and z →β✴ y
, where →β✴
is “β-reduces in zero or more steps”. As a consequence of the Church-Rosser theorem and strong normalization, convertibility can be decided by normalization, which means that unification will both check convertibility and fill in any missing parts. In other words, for x y : A
the type x ≡ y
has exactly one constructor refl
if x
and y
unify with each other.
Let’s prove some of _≡_
’s properties:
-- _≡_ is symmetric
sym : {A : Set} {a b : A} → a ≡ b → b ≡ a
sym refl = refl
-- transitive
trans : {A : Set}{a b c : A} → a ≡ b → b ≡ c → a ≡ c
trans refl refl = refl
-- and congruent
cong : {A B : Set} {a b : A} → (f : A → B) → a ≡ b → f a ≡ f b
cong f refl = refl
Consider the case sym {A} {a} {b} (refl {x = a})
. Matching on refl
gives [b = a
] equation, i.e. the clause actually is sym {A} {a} .{a} (refl {x = a})
which allows us to write refl
on the right-hand side. Other proofs are similar.
Note, we can prove sym
the other way:
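a sketch (Agda lets us dot either of the two implicit arguments here):

sym′ : {A : Set} {a b : A} → a ≡ b → b ≡ a
sym′ {A} {.b} {b} refl = refl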
sym
packs a
into refl
. sym′
packs b
. “Are these two definitions equal?” is an interesting philosophical question. (From the Agda’s point of view they are.)
Since dotted patterns are just unification constraints, you don’t have to dot implicit arguments when you don’t bind or match on them.
_≡_
type family is called “propositional equality”. In Agda’s standard library it has a bit more general definition, see below.
With _≡_
we can finally prove something from basic number theory. Let’s do this interactively.
Our first victim is the associativity of _+_
.
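We start from a skeleton with a single goal, something like (the name +-assoc₀ is illustrative):

+-assoc₀ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₀ a b c = {!!}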
Note a mark {!!}
.
Press C-c C-l
(agda2-load
in Emacs LISP and under M-x
) to load and typecheck this buffer in Emacs agda2-mode
(this file is quite large, give it a couple seconds). Your Emacs buffer should now be syntax-colored and the {!!}
above should be transformed into a greenish “{ }2”.
Anything of the form {!expr!}
with “expr” being any string (including empty) becomes a goal after a buffer gets loaded by agda2-mode
. Typing {!!}
is quite tedious, so there is a shortcut ?
. All ?
symbols are automatically transformed into {!!}
when a buffer gets reloaded.
Goals are interactive “holes” in a buffer, pressing special key sequences (or calling agda2-*
function via M-x
) while inside a goal allows you to ask Agda questions about and perform actions on the code inside and around that goal. In this document “check me” in a goal means that that goal is not expected to be filled, it’s just an example.
Normally, you are going to spend most of your time editing with goals, but from time to time you might want to turn everything back to plain text (in case you screwed something up or something bugged out). Press C-c C-x C-q
(agda2-quit
) and most things should become black-and-white again. Press C-c C-l
to continue.
Placing the cursor in the goal above and pressing C-c C-c a RET
(agda2-make-case
, make case by a
) gives (ignore changes to the name of a function and “check me”s everywhere):
+-assoc₁ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₁ zero b c = {!check me!}
+-assoc₁ (succ a) b c = {!check me!}
Press C-c C-,
(agda2-goal-and-context
) to show goal type and the context while in the goal. Write refl
in there and press C-c C-r
(agda2-refine
, refine), this would typecheck it and produce:
+-assoc₂ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₂ zero b c = refl
+-assoc₂ (succ a) b c = {!check me!}
C-c C-f
(agda2-next-goal
), write cong succ
, refine, and you will get
+-assoc₃ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₃ zero b c = refl
+-assoc₃ (succ a) b c = cong succ {!check me!}
Next goal, goal type and context, press C-c C-a
(agda2-auto
proof search), and you will get this:
+-assoc : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc zero b c = refl
+-assoc (succ a) b c = cong succ (+-assoc a b c)
Done.
Similarly, we prove
lemma-+zero : ∀ a → a + zero ≡ a
lemma-+zero zero = refl
lemma-+zero (succ a) = cong succ (lemma-+zero a)
lemma-+succ : ∀ a b → succ a + b ≡ a + succ b
lemma-+succ zero b = refl
lemma-+succ (succ a) b = cong succ (lemma-+succ a b)
The commutativity for _+_
is not hard to follow too:
-- A fun way to write transitivity
infixr 5 _~_
_~_ = trans
+-comm : ∀ a b → a + b ≡ b + a
+-comm zero b = sym (lemma-+zero b)
+-comm (succ a) b = cong succ (+-comm a b) ~ lemma-+succ b a
A nice way to “step” through a proof is to wrap some subexpression with {! !}
, e.g.:
+-comm₁ : ∀ a b → a + b ≡ b + a
+-comm₁ zero b = sym (lemma-+zero b)
+-comm₁ (succ a) b = cong succ {!(+-comm a b)!} ~ lemma-+succ b a
reload, ask for a type, context and inferred type with C-c C-l
followed by C-c C-.
, refine, wrap another subexpression, rinse and repeat. Sometimes I dream of a better interface for this.
The second clause of +-comm
is a pretty fun example for inferring implicit arguments by hand. Let’s do that. The algorithm is as follows:

- Translate each implicit argument and each _ in a term into a “metavariable”, that is, a special meta-level variable not bound anywhere in the program.
- Generate a system of equations: for every application term1 term2 : D with term1 : A → B and term2 : C, add the equations A == C and B == D to the system.
- Solve the system.

Applying the first step of the algorithm to a term
gives:
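presumably something like this, with the implicit arguments of trans and cong spelled out:

trans {ma} {mb} {mc} {md} (cong {me} {mf} {mg} {mh} succ (+-comm a b)) (lemma-+succ b a)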
with m*
being metavariables.
Note that a b : ℕ since _+_ : ℕ → ℕ → ℕ in the type of +-comm. This gives the following system of equations (with duplicates and metavariable applications skipped):
trans (cong succ (+-comm a b)) (lemma-+succ b a) : _≡_ {ℕ} (succ a + b) (b + succ a)
trans (cong succ (+-comm a b)) (lemma-+succ b a) : _≡_ {ℕ} (succ (a + b)) (b + succ a) -- after normalization
ma = ℕ
mb = succ (a + b)
md = b + succ a
+-comm a b : _≡_ {ℕ} (a + b) (b + a)
mg = (a + b)
me = ℕ
mh = (b + a)
mf = ℕ
cong succ (+-comm a b) : _≡_ {ℕ} (succ (a + b)) (succ (b + a))
mc = succ (b + a)
lemma-+succ b a : _≡_ {ℕ} (succ b + a) (b + succ a)
lemma-+succ b a : _≡_ {ℕ} (succ (b + a)) (b + succ a) -- after normalization
trans (cong succ (+-comm a b)) (lemma-+succ b a) : _≡_ {ℕ} (succ a + b) (b + succ a)
The most awesome thing about this is that from Agda’s point of view, a goal is just a metavariable of a special kind. When you ask for a type of a goal with C-c C-t
or C-c C-,
Agda prints everything it has for the corresponding metavariable. Funny things like ?0
, ?1
, etc. in agda2-mode
outputs are references to these goal metavariables. For instance, in the following:
the type of the goal mentions the name of the metavariable corresponding to the very first goal in this article (of the unimplementable case in div2
).
By the way, to resolve datatype constructor overloading Agda infers the type expected for a constructor call at the call site, and unifies the inferred type with the types of all possible constructors of the same name. If no match is found, an error is reported. In case more than one alternative is available, an “unsolved meta” error for the corresponding return type metavariable is produced.
Work in progress.
Exercise. Define multiplication by induction on the first argument:
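a sketch of the exercise skeleton (the fixity matches the later library code):

infixr 7 _*_
_*_ : ℕ → ℕ → ℕ
a * b = {!!}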
so that the following proof works:
-- Distributivity.
*+-dist : ∀ a b c → (a + b) * c ≡ a * c + b * c
*+-dist zero b c = refl
-- λ is \lambda, \Gl
*+-dist (succ a) b c = cong (λ x → c + x) (*+-dist a b c) ~ sym (+-assoc c (a * c) (b * c))
Now, fill in the following goals:
*-assoc : ∀ a b c → (a * b) * c ≡ a * (b * c)
*-assoc zero b c = refl
*-assoc (succ a) b c = *+-dist b (a * b) c ~ cong {!!} (*-assoc a b c)
lemma-*zero : ∀ a → a * zero ≡ zero
lemma-*zero a = {!!}
lemma-+swap : ∀ a b c → a + (b + c) ≡ b + (a + c)
lemma-+swap a b c = sym (+-assoc a b c) ~ {!!} ~ +-assoc b a c
lemma-*succ : ∀ a b → a + a * b ≡ a * succ b
lemma-*succ a b = {!!}
*-comm : ∀ a b → a * b ≡ b * a
*-comm a b = {!!}
Pressing C-c C-.
while there is a term in a hole shows a goal type, context and the term’s inferred type. Incredibly useful key sequence for interactive proof editing.
with
Consider the following implementation of a filter
function in Haskell:
filter :: (a → Bool) → [a] → [a]
filter p [] = []
filter p (a : as) = case p a of
True -> a : (filter p as)
False -> filter p as
It could be rewritten into Agda like this:
filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (a ∷ as) with p a
... | true = a ∷ (filter p as)
... | false = filter p as
which doesn’t look very different. But desugaring ...
by the rules of Agda syntax makes it a bit less similar:
filter₀ : {A : Set} → (A → Bool) → List A → List A
filter₀ p [] = []
filter₀ p (a ∷ as) with p a
filter₀ p (a ∷ as) | true = a ∷ (filter₀ p as)
filter₀ p (a ∷ as) | false = filter₀ p as
There’s no direct analogue to case
in Agda; the with construct allows pattern matching on intermediate expressions (just like Haskell’s case
), but (unlike case
) on a top level only. Thus with
effectively just adds a “derived” argument to a function. Just like with normal arguments, pattern matching on a derived argument might change some types in a context.
The top level restriction simplifies all the dependently typed stuff (mainly related to dotted patterns), but makes some things a little bit more awkward (in most cases you can emulate case
with a subterm placed into a where
block). Syntactically, vertical bars separate normal arguments from the derived ones and the derived ones from each other.
The with
construct can be nested and multiple matches are allowed to be done in parallel, e.g. with can obfuscate the above definition as:
filterN : {A : Set} → (A → Bool) → List A → List A
filterN p [] = []
filterN p (a ∷ as) with p a
filterN p (a ∷ as) | true with as
filterN p (a ∷ as) | true | [] = a ∷ []
filterN p (a ∷ as) | true | b ∷ bs with p b
filterN p (a ∷ as) | true | b ∷ bs | true = a ∷ (b ∷ filterN p bs)
filterN p (a ∷ as) | true | b ∷ bs | false = a ∷ filterN p bs
filterN p (a ∷ as) | false = filterN p as
-- or alternatively
filterP : {A : Set} → (A → Bool) → List A → List A
filterP p [] = []
filterP p (a ∷ []) with p a
filterP p (a ∷ []) | true = a ∷ []
filterP p (a ∷ []) | false = []
filterP p (a ∷ (b ∷ bs)) with p a | p b
filterP p (a ∷ (b ∷ bs)) | true | true = a ∷ (b ∷ filterP p bs)
filterP p (a ∷ (b ∷ bs)) | true | false = a ∷ filterP p bs
filterP p (a ∷ (b ∷ bs)) | false | true = b ∷ filterP p bs
filterP p (a ∷ (b ∷ bs)) | false | false = filterP p bs
Let us prove that all these functions produce equal results when applied to equal arguments:
filter≡filterN₀ : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN₀ p [] = refl
filter≡filterN₀ p (a ∷ as) = {!check me!}
note the goal type (filter p (a ∷ as) | p a) ≡ (filterN p (a ∷ as) | p a)
which shows p a
as a derived argument to the filter function.
Remember that to reduce a + b
we had to match on a
in the proofs above, matching on b
gave nothing interesting because _+_
was defined by induction on the first argument. Similarly, to finish the filter≡filterN
proof we have to match on p a
, as
, and p b
, essentially duplicating the form of filterN
term:
filter≡filterN : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN p [] = refl
filter≡filterN p (a ∷ as) with p a
filter≡filterN p (a ∷ as) | true with as
filter≡filterN p (a ∷ as) | true | [] = refl
filter≡filterN p (a ∷ as) | true | b ∷ bs with p b
filter≡filterN p (a ∷ as) | true | b ∷ bs | true = cong (λ x → a ∷ (b ∷ x)) (filter≡filterN p bs)
filter≡filterN p (a ∷ as) | true | b ∷ bs | false = cong (_∷_ a) (filter≡filterN p bs)
filter≡filterN p (a ∷ as) | false = filter≡filterN p as
Exercise. Guess the types for filter≡filterP
and filterN≡filterP
. Argue which of these is easier to prove? Do it (and get the other one almost for free by transitivity).
with and Unification

When playing with the proofs about filters you might have noticed that with
does something interesting with a goal.
In the following hole
filter≡filterN₁ : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN₁ p [] = refl
filter≡filterN₁ p (a ∷ as) = {!check me!}
the type of the goal is (filter p (a ∷ as) | p a) ≡ (filterN p (a ∷ as) | p a)
. But after the following with
filter≡filterN₂ : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN₂ p [] = refl
filter≡filterN₂ p (a ∷ as) with p a | as
... | r | rs = {!check me!}
it becomes (filter p (a ∷ rs) | r) ≡ (filterN p (a ∷ rs) | r)
.
Same things might happen not only to a goal but to a context as a whole:
strange-id : {A : Set} {B : A → Set} → (a : A) → (b : B a) → B a
strange-id {A} {B} a ba with B a
... | r = {!check me!}
in the hole, both the type of ba
and the goal’s type are r
.
From these observations we conclude that with expr
creates a new variable, say w
, and “backwards-substitutes” expr
to w
in a context, changing all the occurrences of expr
in the types of the context to w
. Which means that in a resulting context every type that had expr
as a subterm starts depending on w
.
This property allows using with
for rewriting:
lemma-+zero′ : ∀ a → a + zero ≡ a
lemma-+zero′ zero = refl
lemma-+zero′ (succ a) with a + zero | lemma-+zero′ a
lemma-+zero′ (succ a) | ._ | refl = refl
-- same expression with expanded underscore:
lemma-+zero′₀ : ∀ a → a + zero ≡ a
lemma-+zero′₀ zero = refl
lemma-+zero′₀ (succ a) with a + zero | lemma-+zero′₀ a
lemma-+zero′₀ (succ a) | .a | refl = refl
In the second clauses the term a + zero
is replaced by a new variable, say w
, which gives lemma-+zero′ a : w ≡ a
. Pattern matching on refl
gives [w = a
] and so the dotted pattern appears. After that the goal type becomes succ a ≡ succ a
.
This pattern
is so common that it has its own shorthand:
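the rewrite keyword; a sketch matching the library code below (note that rewrite needs the {-# BUILTIN EQUALITY _≡_ #-} pragma, which is declared for the library version of _≡_ later):

lemma-+zero″ : ∀ a → a + zero ≡ a
lemma-+zero″ zero     = refl
lemma-+zero″ (succ a) rewrite lemma-+zero″ a = refl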
Exercise. Prove (on paper) that rewriting a goal type with with
and propositional equality is a syntax sugar for expressions built from refl
, sym
, trans
and cong
.
When moving from Haskell to Agda expression “every type is of kind *
, i.e. for any type X
, X : *
” transforms into “every ground type is of type Set
, i.e. for any ground type X
, X : Set
”. If we are willing to be consistent, we can’t afford Set : Set
because it leads to a number of paradoxes (more on them below). Still, we might want to construct things like “a list of types” and our current implementation of List
can not express this.
To solve this problem Agda introduces an infinite tower of Set
s, i.e. Set0 : Set1
, Set1 : Set2
, and so on with Set
being an alias for Set0
. Agda is also a predicative system which means that Set0 → Set0 : Set1
, Set0 → Set1 : Set2
, and so on, but not Set0 → Set1 : Set1
. Note, however, that this tower is not cumulative, e.g. Set0 : Set2
and Set0 → Set1 : Set3
are false typing judgments.
[As far as I know, in theory nothing prevents us from making the tower cumulative, it’s just so happened that Agda selected this route and not another. Predicativity is a much more subtle matter (more on that below).]
A list of types now becomes:
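for instance, something like:

data SetList : Set1 where
  []  : SetList
  _∷_ : Set → SetList → SetList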
which looks very much like the usual List
definition.
To prevent code duplication like that Agda allows universe polymorphic definitions by, essentially, defining a type for universe levels, a type very reminiscent of ℕ
:
data Level : ? where
lzero : Level
lsucc : Level → Level
-- input for ⊔ is \sqcup
-- maximum of two levels
_⊔_ : Level → Level → Level
lzero ⊔ m = m
lsucc n ⊔ lzero = lsucc n
lsucc n ⊔ lsucc m = lsucc (n ⊔ m)
though, since there’s nothing we could possibly write in the place of ?
there, older versions of Agda allowed us to define Level
with postulate
syntax instead:
postulate Level : Set
postulate lzero : Level
postulate lsucc : Level → Level
postulate _⊔_ : Level → Level → Level
followed by some BUILTIN
pragmas:
{-# BUILTIN LEVEL Level #-}
{-# BUILTIN LEVELZERO lzero #-}
{-# BUILTIN LEVELSUC lsucc #-}
{-# BUILTIN LEVELMAX _⊔_ #-}
Modern versions require the use of Agda.Primitive
module instead:
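presumably along the lines of:

open import Agda.Primitive public
  using (Level ; lzero ; lsuc ; _⊔_)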
In practice, the difference between ℕ
and Level
is that we are not allowed to pattern match on elements of the latter.
Though, postulate
s still exist and allow you to define propositions without proofs, i.e. they say “trust me, I know this to be true”. Obviously, this can be exploited to infer contradictions; such postulates will produce errors when type-checking in safe mode and will abort the program on attempts at executing them.
Given the definition above, expression Set α
for α : Level
means “the Set
of level α
”.
We are now able to define universe polymorphic list in the following way:
data PList₀ {α : Level} (A : Set α) : Set α where
[] : PList₀ A
_∷_ : A → PList₀ A → PList₀ A
-- or a bit nicer:
data PList₁ {α} (A : Set α) : Set α where
[] : PList₁ A
_∷_ : A → PList₁ A → PList₁ A
Note that we have been writing everything above inside a module called ThrowAwayIntroduction
. From here on we are going to (mostly) forget about it and write a small standard library for Agda from scratch. The idea is to remove any module with a name prefixed by “ThrowAway” from this file to produce the library code. To make the implementation of this idea as simple as possible we place markers like:
at the ends of throw away code. This allows us to generate the library with a simple shell command:
cat BrutalDepTypes.lagda | sed '/^\\begin{code}/,/^\\end{code}/ ! d; /^\\begin{code}/ d; /^\\end{code}/ c \
' | sed '/^ *module ThrowAway/,/^ *.- end of ThrowAway/ d;'
We are now going to redefine everything useful from above in a universe polymorphic way (when applicable).
Each module in Agda has an export list. Everything defined in a module gets appended to it. To place things defined for export in another module into the current context there is an open
construct:
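i.e. something like:

open ModuleName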
This doesn’t append ModuleName
’s export list to the current module’s export list. To do that we need to add the public
keyword at the end:
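i.e.:

open ModuleName public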
Exercise. Understand what is going on in types of the following functions:
module Function where
-- Dependent application
infixl 0 _$_
_$_ : ∀ {α β}
→ {A : Set α} {B : A → Set β}
→ (f : (x : A) → B x)
→ ((x : A) → B x)
f $ x = f x
-- Simple application
infixl 0 _$′_
_$′_ : ∀ {α β}
→ {A : Set α} {B : Set β}
→ (A → B) → (A → B)
f $′ x = f $ x
-- input for ∘ is \o
-- Dependent composition
_∘_ : ∀ {α β γ}
→ {A : Set α} {B : A → Set β} {C : {x : A} → B x → Set γ}
→ (f : {x : A} → (y : B x) → C y)
→ (g : (x : A) → B x)
→ ((x : A) → C (g x))
f ∘ g = λ x → f (g x)
-- Simple composition
_∘′_ : ∀ {α β γ}
→ {A : Set α} {B : Set β} {C : Set γ}
→ (B → C) → (A → B) → (A → C)
f ∘′ g = f ∘ g
-- Flip
flip : ∀ {α β γ}
→ {A : Set α} {B : Set β} {C : A → B → Set γ}
→ ((x : A) → (y : B) → C x y)
→ ((y : B) → (x : A) → C x y)
flip f x y = f y x
-- Identity
id : ∀ {α} {A : Set α} → A → A
id x = x
-- Constant function
const : ∀ {α β}
→ {A : Set α} {B : Set β}
→ (A → B → A)
const x y = x
open Function public
Especially note the scopes of variable bindings in types.
Intuitionistic Logic
The Logic module:
module Logic where
-- input for ⊥ is \bot
-- False proposition
data ⊥ : Set where
-- input for ⊤ is \top
-- True proposition
record ⊤ : Set where
-- ⊥ implies anything at any universe level
⊥-elim : ∀ {α} {A : Set α} → ⊥ → A
⊥-elim ()
Propositional negation is defined as follows:
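presumably:

-- input for ¬ is \neg
¬ : ∀ {α} → Set α → Set α
¬ A = A → ⊥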
The technical part of the idea of this definition is that the principle of explosion (“from a contradiction, anything follows”) gets a pretty straightforward proof.
Exercise. Prove the following propositions:
module ThrowAwayExercise where
contradiction : ∀ {α β} {A : Set α} {B : Set β} → A → ¬ A → B
contradiction = {!!}
contraposition : ∀ {α β} {A : Set α} {B : Set β} → (A → B) → (¬ B → ¬ A)
contraposition = {!!}
contraposition¬ : ∀ {α β} {A : Set α} {B : Set β} → (A → ¬ B) → (B → ¬ A)
contraposition¬ = {!!}
→¬² : ∀ {α} {A : Set α} → A → ¬ (¬ A)
→¬² a = {!!}
¬³→¬ : ∀ {α} {A : Set α} → ¬ (¬ (¬ A)) → ¬ A
¬³→¬ = {!!}
Hint. Use C-c C-,
here to see the goal type in its normal form.
From a more logical standpoint the idea of ¬
is that false proposition P
should be isomorphic to ⊥
(i.e. they should imply each other: ⊥ → P ∧ P → ⊥
). Since ⊥ → P
is true for all P
there is only P → ⊥
left for us to prove.
From a computational point of view having a variable of type ⊥
in a context means that there is no way execution of a program could reach this point, which means we can match on the variable and use an absurd pattern; ⊥-elim
does exactly that.
Note that, being an intuitionistic system, Agda has no means to prove “double negation” rule. See for yourself:
Fun fact: proofs in the exercise above amounted to a scientific paper at the start of the 20th century.
Solution for the exercise:
private
module DummyAB {α β} {A : Set α} {B : Set β} where
contradiction : A → ¬ A → B
contradiction a ¬a = ⊥-elim (¬a a)
contraposition : (A → B) → (¬ B → ¬ A)
contraposition = flip _∘′_
contraposition¬ : (A → ¬ B) → (B → ¬ A)
contraposition¬ = flip
open DummyAB public
private
module DummyA {α} {A : Set α} where
→¬² : A → ¬ (¬ A)
→¬² = contradiction
¬³→¬ : ¬ (¬ (¬ A)) → ¬ A
¬³→¬ ¬³a = ¬³a ∘′ →¬²
open DummyA public
Exercise. Understand this solution.
Note clever module usage. Opening a module with parameters prefixes types of all the things defined there with these parameters. We will use this trick a lot.
Let us define conjunction, disjunction, and logical equivalence:
-- input for ∧ is \and
infixr 6 _∧_ _,_
record _∧_ {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
constructor _,_
field
fst : A
snd : B
open _∧_ public
-- input for ∨ is \or
data _∨_ {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
inl : A → A ∨ B
inr : B → A ∨ B
un∨ : ∀ {α} {A : Set α} → A ∨ A → A
un∨ (inl a) = a
un∨ (inr a) = a
-- input for ↔ is \<->
_↔_ : ∀ {α β} (A : Set α) (B : Set β) → Set (α ⊔ β)
A ↔ B = (A → B) ∧ (B → A)
Make all this goodness available:
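presumably:

open Logic public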
Some definitions from Per Martin-Löf’s type theory [14]:
module MLTT where
-- input for ≡ is \==
-- Propositional equality
infix 4 _≡_
data _≡_ {α} {A : Set α} (x : A) : A → Set α where
refl : x ≡ x
_≠_ : ∀ {α} {A : Set α} (x : A) → A → Set α
x ≠ y = ¬ (x ≡ y)
-- input for Σ is \Sigma
-- Dependent pair
infixr 6 _,_
record Σ {α β} (A : Set α) (B : A → Set β) : Set (α ⊔ β) where
constructor _,_
field
projl : A
projr : B projl
open Σ public
-- Make rewrite syntax work
{-# BUILTIN EQUALITY _≡_ #-}
The Σ
type is a dependent version of _∧_
(the second field depends on the first), i.e. _∧_
is a specific case of Σ
:
-- input for × is \x
_×_ : ∀ {α β} (A : Set α) (B : Set β) → Set (α ⊔ β)
A × B = Σ A (λ _ → B)
×↔∧ : ∀ {α β} {A : Set α} {B : Set β} → (A × B) ↔ (A ∧ B)
×↔∧ = (λ z → projl z , projr z) , (λ z → fst z , snd z)
Personally, I use both _∧_
and _×_
occasionally since _×_
looks ugly in the normal form and makes goal types hard to read.
Some properties:
module ≡-Prop where
private
module DummyA {α} {A : Set α} where
-- _≡_ is symmetric
sym : {x y : A} → x ≡ y → y ≡ x
sym refl = refl
-- _≡_ is transitive
trans : {x y z : A} → x ≡ y → y ≡ z → x ≡ z
trans refl refl = refl
-- _≡_ is substitutive
subst : ∀ {γ} (P : A → Set γ) {x y} → x ≡ y → P x → P y
subst P refl p = p
private
module DummyAB {α β} {A : Set α} {B : Set β} where
-- _≡_ is congruent
cong : ∀ (f : A → B) {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
subst₂ : ∀ {ℓ} {P : A → B → Set ℓ} {x y u v} → x ≡ y → u ≡ v → P x u → P y v
subst₂ refl refl p = p
private
module DummyABC {α β γ} {A : Set α} {B : Set β} {C : Set γ} where
cong₂ : ∀ (f : A → B → C) {x y u v} → x ≡ y → u ≡ v → f x u ≡ f y v
cong₂ f refl refl = refl
open DummyA public
open DummyAB public
open DummyABC public
Make all this goodness available:
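presumably:

open MLTT public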
A decidable proposition is a proposition that has an explicit proof or disproof:
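a sketch consistent with how yes and no are used below:

data Dec {α} (A : Set α) : Set α where
  yes : ( a :   A) → Dec A
  no  : (¬a : ¬ A) → Dec A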
This datatype is very much like Bool
, except it also explains why the proposition holds or why it must not.
Decidable propositions are the glue that makes your programs work with real-world data.
Suppose we want to write a program that reads a natural number, say n
, from stdin
and divides it by two with div2E
. To do that we need a proof that n
is Even
. The easiest way to do it is to define a function that decides if a given natural is Even
:
module ThrowAwayExample₁ where
open ThrowAwayIntroduction
¬Even+2 : ∀ {n} → ¬ (Even n) → ¬ (Even (succ (succ n)))
¬Even+2 ¬en (e2succ en) = contradiction en ¬en
Even? : ∀ n → Dec (Even n)
Even? zero = yes ezero
Even? (succ zero) = no (λ ()) -- note an absurd pattern in
-- an anonymous lambda expression
Even? (succ (succ n)) with Even? n
... | yes a = yes (e2succ a)
... | no a¬ = no (¬Even+2 a¬)
{- end of ThrowAwayExample₁ -}
then read n
from stdin
, feed it to Even?
, match on the result and call div2E
if n
is Even
.
The same idea applies to almost everything: parse your input and decide the properties you care about, getting a yes, type check, match on yes, optimize the typed representation, generate output.

Using the same idea we can define decidable dichotomous and trichotomous propositions:
data Di {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
diyes : ( a : A) (¬b : ¬ B) → Di A B
dino : (¬a : ¬ A) ( b : B) → Di A B
data Tri {α β γ} (A : Set α) (B : Set β) (C : Set γ) : Set (α ⊔ (β ⊔ γ)) where
tri< : ( a : A) (¬b : ¬ B) (¬c : ¬ C) → Tri A B C
tri≈ : (¬a : ¬ A) ( b : B) (¬c : ¬ C) → Tri A B C
tri> : (¬a : ¬ A) (¬b : ¬ B) ( c : C) → Tri A B C
Make all this goodness available:
Consider this to be the answer (encrypted with rewrite
s) for the exercise way above:
module Data-ℕ where
-- Natural numbers (positive integers)
data ℕ : Set where
zero : ℕ
succ : ℕ → ℕ
module ℕ-Rel where
infix 4 _≤_ _<_ _>_
data _≤_ : ℕ → ℕ → Set where
z≤n : ∀ {n} → zero ≤ n
s≤s : ∀ {n m} → n ≤ m → succ n ≤ succ m
_<_ : ℕ → ℕ → Set
n < m = succ n ≤ m
_>_ : ℕ → ℕ → Set
n > m = m < n
≤-unsucc : ∀ {n m} → succ n ≤ succ m → n ≤ m
≤-unsucc (s≤s a) = a
<-¬refl : ∀ n → ¬ (n < n)
<-¬refl zero ()
<-¬refl (succ n) (s≤s p) = <-¬refl n p
≡→≤ : ∀ {n m} → n ≡ m → n ≤ m
≡→≤ {zero} refl = z≤n
≡→≤ {succ n} refl = s≤s (≡→≤ {n} refl) -- Note this
≡→¬< : ∀ {n m} → n ≡ m → ¬ (n < m)
≡→¬< refl = <-¬refl _
≡→¬> : ∀ {n m} → n ≡ m → ¬ (n > m)
≡→¬> refl = <-¬refl _
<→¬≡ : ∀ {n m} → n < m → ¬ (n ≡ m)
<→¬≡ = contraposition¬ ≡→¬<
>→¬≡ : ∀ {n m} → n > m → ¬ (n ≡ m)
>→¬≡ = contraposition¬ ≡→¬>
<→¬> : ∀ {n m} → n < m → ¬ (n > m)
<→¬> {zero} (s≤s z≤n) ()
<→¬> {succ n} (s≤s p<) p> = <→¬> p< (≤-unsucc p>)
>→¬< : ∀ {n m} → n > m → ¬ (n < m)
>→¬< = contraposition¬ <→¬>
module ℕ-Op where
open ≡-Prop
pred : ℕ → ℕ
pred zero = zero
pred (succ n) = n
infixl 6 _+_
_+_ : ℕ → ℕ → ℕ
zero + n = n
succ n + m = succ (n + m)
infixl 6 _-_
_-_ : ℕ → ℕ → ℕ
zero - n = zero
n - zero = n
succ n - succ m = n - m
infixr 7 _*_
_*_ : ℕ → ℕ → ℕ
zero * m = zero
succ n * m = m + (n * m)
private
module Dummy₀ where
lemma-+zero : ∀ a → a + zero ≡ a
lemma-+zero zero = refl
lemma-+zero (succ a) rewrite lemma-+zero a = refl
lemma-+succ : ∀ a b → succ a + b ≡ a + succ b
lemma-+succ zero b = refl
lemma-+succ (succ a) b rewrite lemma-+succ a b = refl
open Dummy₀
-- + is associative
+-assoc : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc zero b c = refl
+-assoc (succ a) b c rewrite (+-assoc a b c) = refl
-- + is commutative
+-comm : ∀ a b → a + b ≡ b + a
+-comm zero b = sym $ lemma-+zero b
+-comm (succ a) b rewrite +-comm a b | lemma-+succ b a = refl
-- * is distributive by +
*+-dist : ∀ a b c → (a + b) * c ≡ a * c + b * c
*+-dist zero b c = refl
*+-dist (succ a) b c rewrite *+-dist a b c | +-assoc c (a * c) (b * c) = refl
-- * is associative
*-assoc : ∀ a b c → (a * b) * c ≡ a * (b * c)
*-assoc zero b c = refl
*-assoc (succ a) b c rewrite *+-dist b (a * b) c | *-assoc a b c = refl
private
module Dummy₁ where
lemma-*zero : ∀ a → a * zero ≡ zero
lemma-*zero zero = refl
lemma-*zero (succ a) = lemma-*zero a
lemma-+swap : ∀ a b c → a + (b + c) ≡ b + (a + c)
lemma-+swap a b c rewrite sym (+-assoc a b c) | +-comm a b | +-assoc b a c = refl
lemma-*succ : ∀ a b → a + a * b ≡ a * succ b
lemma-*succ zero b = refl
lemma-*succ (succ a) b rewrite lemma-+swap a b (a * b) | lemma-*succ a b = refl
open Dummy₁
-- * is commutative
*-comm : ∀ a b → a * b ≡ b * a
*-comm zero b = sym $ lemma-*zero b
*-comm (succ a) b rewrite *-comm a b | lemma-*succ b a = refl
module ℕ-RelOp where
open ℕ-Rel
open ℕ-Op
open ≡-Prop
infix 4 _≡?_ _≤?_ _<?_
_≡?_ : (n m : ℕ) → Dec (n ≡ m)
zero ≡? zero = yes refl
zero ≡? succ m = no (λ ())
succ n ≡? zero = no (λ ())
succ n ≡? succ m with n ≡? m
succ .m ≡? succ m | yes refl = yes refl
succ n ≡? succ m | no ¬a = no (¬a ∘ cong pred) -- Note this
_≤?_ : (n m : ℕ) → Dec (n ≤ m)
zero ≤? m = yes z≤n
succ n ≤? zero = no (λ ())
succ n ≤? succ m with n ≤? m
... | yes a = yes (s≤s a)
... | no ¬a = no (¬a ∘ ≤-unsucc)
_<?_ : (n m : ℕ) → Dec (n < m)
n <? m = succ n ≤? m
cmp : (n m : ℕ) → Tri (n < m) (n ≡ m) (n > m)
cmp zero zero = tri≈ (λ ()) refl (λ ())
cmp zero (succ m) = tri< (s≤s z≤n) (λ ()) (λ ())
cmp (succ n) zero = tri> (λ ()) (λ ()) (s≤s z≤n)
cmp (succ n) (succ m) with cmp n m
cmp (succ n) (succ m) | tri< a ¬b ¬c = tri< (s≤s a) (¬b ∘ cong pred) (¬c ∘ ≤-unsucc)
cmp (succ n) (succ m) | tri≈ ¬a b ¬c = tri≈ (¬a ∘ ≤-unsucc) (cong succ b) (¬c ∘ ≤-unsucc)
cmp (succ n) (succ m) | tri> ¬a ¬b c = tri> (¬a ∘ ≤-unsucc) (¬b ∘ cong pred) (s≤s c)
open Data-ℕ public
Exercise. Understand this. Now, remove all term bodies from ℕ-Rel and ℕ-RelOp and reimplement everything yourself.
module Data-List where
-- List
infixr 10 _∷_
data List {α} (A : Set α) : Set α where
[] : List A
_∷_ : A → List A → List A
module List-Op where
private
module DummyA {α} {A : Set α} where
-- Singleton `List`
[_] : A → List A
[ a ] = a ∷ []
-- Concatenation for `List`s
infixr 10 _++_
_++_ : List A → List A → List A
[] ++ bs = bs
(a ∷ as) ++ bs = a ∷ (as ++ bs)
-- Filtering with decidable propositions
filter : ∀ {β} {P : A → Set β} → (∀ a → Dec (P a)) → List A → List A
filter p [] = []
filter p (a ∷ as) with p a
... | yes _ = a ∷ (filter p as)
... | no _ = filter p as
open DummyA public
module Data-Vec where
-- Vector
infixr 5 _∷_
data Vec {α} (A : Set α) : ℕ → Set α where
[] : Vec A zero
_∷_ : ∀ {n} → A → Vec A n → Vec A (succ n)
module Vec-Op where
open ℕ-Op
private
module DummyA {α} {A : Set α} where
-- Singleton `Vec`
[_] : A → Vec A (succ zero)
[ a ] = a ∷ []
-- Concatenation for `Vec`s
infixr 5 _++_
_++_ : ∀ {n m} → Vec A n → Vec A m → Vec A (n + m)
[] ++ bs = bs
(a ∷ as) ++ bs = a ∷ (as ++ bs)
head : ∀ {n} → Vec A (succ n) → A
head (a ∷ as) = a
tail : ∀ {n} → Vec A (succ n) → Vec A n
tail (a ∷ as) = as
open DummyA public
{-
Work in progress. TODO.
I find the following definition for List to be quite amusing in practice:
module VecLists where
open Data-Vec
private
module DummyA {α} {A : Set α} where
VecList = Σ ℕ (Vec A)
-}
List
Indexing allows us to define pretty fun things:
module ThrowAwayMore₁ where
open Data-List
open List-Op
-- input for ∈ is \in
-- `a` is in `List`
data _∈_ {α} {A : Set α} (a : A) : List A → Set α where
here : ∀ {as} → a ∈ (a ∷ as)
there : ∀ {b as} → a ∈ as → a ∈ (b ∷ as)
-- input for ⊆ is \sub=
-- `xs` is a subset of `ys`
_⊆_ : ∀ {α} {A : Set α} → List A → List A → Set α
as ⊆ bs = ∀ {x} → x ∈ as → x ∈ bs
The _∈_
relation says that “being in a List
” for an element a : A
means that a
in the head of a List
or in the tail of a List
. For some a
and as
a value of type a ∈ as
, that is “a
is in a list as
” is a position of an element a
in as
(there might be any number of elements in this type). Relation ⊆
, that is “being a sublist”, carries a function that for each a
in xs
gives its position in as
.
Examples:
listTest₁ = zero ∷ zero ∷ succ zero ∷ []
listTest₂ = zero ∷ succ zero ∷ []
∈Test₀ : zero ∈ listTest₁
∈Test₀ = here
∈Test₁ : zero ∈ listTest₁
∈Test₁ = there here
⊆Test : listTest₂ ⊆ listTest₁
⊆Test here = here
⊆Test (there here) = there (there here)
⊆Test (there (there ()))
Let us prove some properties for ⊆
relation:
⊆-++-left : ∀ {A : Set} (as bs : List A) → as ⊆ (bs ++ as)
⊆-++-left as [] n = n
⊆-++-left as (b ∷ bs) n = there (⊆-++-left as bs n)
⊆-++-right : ∀ {A : Set} (as bs : List A) → as ⊆ (as ++ bs)
⊆-++-right [] bs ()
⊆-++-right (a ∷ as) bs here = here
⊆-++-right (a ∷ as) bs (there n) = there (⊆-++-right as bs n)
{- end of ThrowAwayMore₁ -}
Note how these proofs renumber elements of a given list.
List generalized: Any

By generalizing the _∈_ relation from propositional equality (in x ∈ (x ∷ xs) both occurrences of x are propositionally equal) to arbitrary predicates we arrive at:
module Data-Any where
open Data-List
open List-Op
-- Some element of a `List` satisfies `P`
data Any {α γ} {A : Set α} (P : A → Set γ) : List A → Set (α ⊔ γ) where
here : ∀ {a as} → (pa : P a) → Any P (a ∷ as)
there : ∀ {a as} → (pas : Any P as) → Any P (a ∷ as)
module Membership {α β γ} {A : Set α} {B : Set β} (P : B → A → Set γ) where
-- input for ∈ is \in
-- `P b a` holds for some element `a` from the `List`
-- when P is `_≡_` this becomes the usual "is in" relation
_∈_ : B → List A → Set (α ⊔ γ)
b ∈ as = Any (P b) as
-- input for ∉ is \notin
_∉_ : B → List A → Set (α ⊔ γ)
b ∉ as = ¬ (b ∈ as)
-- input for ⊆ is \sub=
_⊆_ : List A → List A → Set (α ⊔ β ⊔ γ)
as ⊆ bs = ∀ {x} → x ∈ as → x ∈ bs
-- input for ⊈ is \sub=n
_⊈_ : List A → List A → Set (α ⊔ β ⊔ γ)
as ⊈ bs = ¬ (as ⊆ bs)
-- input for ⊇ is \sup=
_⊆⊇_ : List A → List A → Set (α ⊔ β ⊔ γ)
as ⊆⊇ bs = (as ⊆ bs) ∧ (bs ⊆ as)
⊆-refl : ∀ {as} → as ⊆ as
⊆-refl = id
⊆-trans : ∀ {as bs cs} → as ⊆ bs → bs ⊆ cs → as ⊆ cs
⊆-trans f g = g ∘ f
⊆⊇-refl : ∀ {as} → as ⊆⊇ as
⊆⊇-refl = id , id
⊆⊇-sym : ∀ {as bs} → as ⊆⊇ bs → bs ⊆⊇ as
⊆⊇-sym (f , g) = g , f
⊆⊇-trans : ∀ {as bs cs} → as ⊆⊇ bs → bs ⊆⊇ cs → as ⊆⊇ cs
⊆⊇-trans f g = (fst g ∘ fst f) , (snd f ∘ snd g)
∉[] : ∀ {b} → b ∉ []
∉[]()
-- When P is `_≡_` this becomes `b ∈ [ a ] → b ≡ a`
∈singleton→P : ∀ {a b} → b ∈ [ a ] → P b a
∈singleton→P (here pba) = pba
∈singleton→P (there ())
P→∈singleton : ∀ {a b} → P b a → b ∈ [ a ]
P→∈singleton pba = here pba
⊆-++-left : ∀ as bs → as ⊆ (bs ++ as)
⊆-++-left as [] n = n
⊆-++-left as (b ∷ bs) n = there (⊆-++-left as bs n)
⊆-++-right : ∀ as bs → as ⊆ (as ++ bs)
⊆-++-right [] bs ()
⊆-++-right (x ∷ as) bs (here pa) = here pa
⊆-++-right (x ∷ as) bs (there n) = there (⊆-++-right as bs n)
⊆-++-both-left : ∀ {as bs} cs → as ⊆ bs → (cs ++ as) ⊆ (cs ++ bs)
⊆-++-both-left [] as⊆bs n = as⊆bs n
⊆-++-both-left (x ∷ cs) as⊆bs (here pa) = here pa
⊆-++-both-left (x ∷ cs) as⊆bs (there n) = there (⊆-++-both-left cs as⊆bs n)
⊆-filter : ∀ {σ} {Q : A → Set σ} → (q : ∀ x → Dec (Q x)) → (as : List A) → filter q as ⊆ as
⊆-filter q [] ()
⊆-filter q (a ∷ as) n with q a
⊆-filter q (a ∷ as) (here pa) | yes qa = here pa
⊆-filter q (a ∷ as) (there n) | yes qa = there (⊆-filter q as n)
⊆-filter q (a ∷ as) n | no ¬qa = there (⊆-filter q as n)
Exercise. Note how general this code is: ⊆-filter covers a broad set of propositions, with “the filtered list is a sublist (in the usual sense) of the original list” being just a special case. Do C-c C-. in the following goal and explain the type:
Explain the types of all the terms in the Membership module.
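As a hint (a hypothetical sketch, not part of the original text; it assumes ℕ, the list constructors, _≡_, and refl are in scope here, as in the examples above), instantiating Membership with propositional equality recovers the plain “is in” relation from the previous section:
module Membership-ℕ-sketch where
  open Data-Any
  open Membership {A = ℕ} {B = ℕ} _≡_

  -- with P = _≡_ the `here` constructor now carries an explicit equality proof
  ∈-test : succ zero ∈ (zero ∷ succ zero ∷ [])
  ∈-test = there (here refl)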
{-
Work in progress. TODO.
I didn't have a chance to use `All` yet (and I'm too lazy to implement this module right now),
but here is the definition:
module Data-All where
open Data-List
-- All elements of a `List` satisfy `P`
data All {α β} {A : Set α} (P : A → Set β) : List A → Set (α ⊔ β) where
[]∀ : All P []
_∷∀_ : ∀ {a as} → P a → All P as → All P (a ∷ as)
-}
Work in progress. TODO.
module Data-Chain where
open Data-List
open List-Op
data Chain {α γ} {A : Set α} (P : A → A → Set γ) : List A → Set (α ⊔ γ) where
[]c : Chain P []
[1]c : ∀ {a} → Chain P (a ∷ [])
_∷c_ : ∀ {a b bs} → P a b → Chain P (b ∷ bs) → Chain P (a ∷ b ∷ bs)
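A tiny usage sketch (not in the original; it assumes ℕ, the list constructors, _≡_, and refl are in scope): with P = _≡_ a Chain says that every two adjacent elements are equal, i.e. that the list is constant:
module Chain-sketch where
  open Data-Chain

  -- each _∷c_ supplies a proof relating two adjacent elements
  constChain : Chain _≡_ (zero ∷ zero ∷ zero ∷ [])
  constChain = refl ∷c (refl ∷c [1]c)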
Booleans are not really needed once we have Dec, actually, but let’s define them for completeness:
module Data-Bool where
-- Booleans
data Bool : Set where
true false : Bool
module Bool-Op where
if_then_else_ : ∀ {α} {A : Set α} → Bool → A → A → A
if true then a else _ = a
if false then _ else b = b
not : Bool → Bool
not true = false
not false = true
and : Bool → Bool → Bool
and true x = x
and false _ = false
or : Bool → Bool → Bool
or false x = x
or true x = true
open Data-Bool public
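As a quick sanity check (a sketch, not in the original; it assumes _≡_ and refl from the earlier sections are in scope), here is the usual two-case proof that not is an involution:
module Bool-sketch where
  open Bool-Op

  -- both cases reduce by the defining equations of `not`
  not-involutive : ∀ b → not (not b) ≡ b
  not-involutive true  = refl
  not-involutive false = refl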
Work in progress. TODO. We need to prove something from A to Z. Quicksort maybe.
This section discusses interesting things about Agda which are somewhere in between practice and pure theory.
Rewriting with equality hides a couple of catches.
Remember the proof term of lemma-+zero′ from above:
lemma-+zero′ : ∀ a → a + zero ≡ a
lemma-+zero′ zero = refl
lemma-+zero′ (succ a) with a + zero | lemma-+zero′ a
lemma-+zero′ (succ a) | ._ | refl = refl
It typechecks, but the following proof doesn’t:
lemma-+zero′′ : ∀ a → a + zero ≡ a
lemma-+zero′′ zero = refl
lemma-+zero′′ (succ a) with a | lemma-+zero′′ a
lemma-+zero′′ (succ a) | ._ | refl = refl
The problem here is that, for arbitrary terms A and B, to pattern match on refl : A ≡ B these A and B must unify. In the lemma-+zero′ case we have a + zero backward-substituted into a new variable w, so when we match on refl we get the constraint w ≡ a, which is solved by instantiating w to a. In the lemma-+zero′′ case, on the other hand, a itself is changed into w, and refl gets the type w + zero ≡ w, which is a malformed (recursive) unification constraint.
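To see why the first version goes through, it helps to spell out, very roughly, the auxiliary function that the with-abstraction corresponds to (a sketch with made-up names; Agda’s actual elaboration of with is more involved):
-- `a + zero` is replaced by a fresh variable `w` both in the goal and in
-- the type of `lemma-+zero′ a`, so matching on `refl` only has to unify
-- the two variables `w` and `a`:
lemma-+zero-aux : ∀ w a → w ≡ a → succ w ≡ succ a
lemma-+zero-aux w .w refl = refl

lemma-+zero‴ : ∀ a → a + zero ≡ a
lemma-+zero‴ zero     = refl
lemma-+zero‴ (succ a) = lemma-+zero-aux (a + zero) a (lemma-+zero‴ a)

-- in lemma-+zero′′ only `a` itself is abstracted, so the corresponding
-- auxiliary function would have to match `refl` against `w + zero ≡ w`,
-- which is exactly the ill-formed constraint described above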
There is another catch. Our current definition of _≡_ allows us to express equality between types, e.g. Bool ≡ ℕ.
This enables us to write the following term:
lemma-unsafe-eq : (P : Bool ≡ ℕ) → Bool → ℕ
lemma-unsafe-eq P b with Bool | P
lemma-unsafe-eq P b | .ℕ | refl = b + succ zero
which type checks without errors.
Note, however, that lemma-unsafe-eq cannot be proven by simply pattern matching on P:
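Here is a hypothetical sketch of such a direct attempt (commented out, since it does not typecheck): matching on refl asks Agda to unify Bool with ℕ, and that unification gets stuck, so the pattern is rejected.
-- lemma-unsafe-eq′ : (P : Bool ≡ ℕ) → Bool → ℕ
-- lemma-unsafe-eq′ refl b = b + succ zero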
Exercise. lemma-unsafe-eq is food for thought about computation safety under false assumptions.
In this section we shall talk about some theoretical stuff like datatype encodings and paradoxes. You might want to read some of the theoretical references like [12,14] first.
In the literature Agda’s arrow (x : X) → Y (where Y might have x free) is called the dependent product type, or Π-type (“Pi-type”) for short. The dependent pair Σ is called the dependent sum type, or Σ-type (“Sigma-type”) for short.
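A tiny illustration (hypothetical definitions, not in the original; they assume Bool, Σ, _≡_, and Σ’s pair constructor _,_ from the earlier sections are in scope):
-- a Π-type: the codomain mentions the bound variable b
Π-example : (b : Bool) → b ≡ b
Π-example b = refl

-- a Σ-type: the type of the second component depends on the first
Σ-example : Σ Bool (λ b → b ≡ true)
Σ-example = true , refl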
Given ⊥, ⊤, and Bool it is possible to define any finite type, that is, a type with a finite number of elements.
module FiniteTypes where
open Bool-Op
_∨′_ : (A B : Set) → Set
A ∨′ B = Σ Bool (λ x → if x then A else B)
zero′ = ⊥
one′ = ⊤
two′ = Bool
three′ = one′ ∨′ two′
four′ = two′ ∨′ two′
--- and so on
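For example (a sketch inside the FiniteTypes module, assuming Σ’s pair constructor is _,_ as earlier in the document), four′ indeed has exactly four elements:
four′₁ four′₂ four′₃ four′₄ : four′
four′₁ = true  , true
four′₂ = true  , false
four′₃ = false , true
four′₄ = false , false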
TODO. Say something about extensional setting and ⊤ = ⊥ → ⊥.
Given finite types, Π-types, and Σ-types it is possible to define non-inductive datatypes using the same scheme the definition of _∨′_ uses.
Non-inductive datatype without indexes has the following scheme:
data DataTypeName (Param1 : Param1Type) (Param2 : Param2Type) ... : Set whatever
Cons1 : (Cons1Arg1 : Cons1Arg1Type) (Cons1Arg2 : Cons1Arg2Type) ... → DataTypeName Param1 Param2 ...
Cons2 : (Cons2Arg1 : Cons2Arg1Type) ... → DataTypeName Param1 Param2 ...
...
ConsN : (ConsNArg1 : ConsNArg1Type) ... → DataTypeName Param1 Param2 ...
Re-encoded into Π-types, Σ-types, and finite types it becomes:
DataTypeName : (Param1 : Param1Type) (Param2 : Param2Type) ... → Set whatever
DataTypeName Param1 Param2 ... = Σ FiniteTypeWithNElements choice where
choice : FiniteTypeWithNElements → Set whatever
choice element1 = Σ Cons1Arg1Type (λ Cons1Arg1 → Σ Cons1Arg2Type (λ Cons1Arg2 → ...))
choice element2 = Σ Cons2Arg1Type (λ Cons2Arg1 → ...)
...
choice elementN = Σ ConsNArg1Type (λ ConsNArg1 → ...)
For instance, the Di type from above translates into:
Di′ : ∀ {α β} (A : Set α) (B : Set β) → Set (α ⊔ β)
Di′ {α} {β} A B = Σ Bool choice where
choice : Bool → Set (α ⊔ β)
choice true = A × ¬ B
choice false = ¬ A × B
Work in progress. TODO. The general idea: add them as parameters and place an equality proof inside.
Work in progress. TODO. General ideas: W-types and μ.
Negative occurrences make the system inconsistent.
Copy this to a separate file and typecheck:
{-# OPTIONS --no-positivity-check #-}
module CurrysParadox where
-- repeat the definition of ⊥ here, assuming it is not imported from elsewhere,
-- since this is meant to be a stand-alone file
data ⊥ : Set where
data CS (C : Set) : Set where
cs : (CS C → C) → CS C
paradox : ∀ {C} → CS C → C
paradox (cs b) = b (cs b)
loop : ∀ {C} → C
loop = paradox (cs paradox)
contr : ⊥
contr = loop
Work in progress. TODO.
1. Malakhovski J. Agda-mode commands to show and use unification constraints between "Goal" and "Have". https://web.archive.org/web/20140326144257/http://code.google.com/p/agda/issues/detail?id=771.
2. Malakhovski J. Functional Programming and Proof Checking Course. https://oxij.org/teaching/itmo/fp/.
3. Agda Project Authors. Agda: Tutorials list. 2024. https://agda.readthedocs.io/en/latest/getting-started/tutorial-list.html.
4. Agda Project Authors. Agda: Documentation. 2024. https://agda.readthedocs.io/en/latest/index.html.
5. Setzer A. Interactive theorem proving for Agda users. https://web.archive.org/web/20210620074721/https://www.cs.swan.ac.uk/~csetzer/lectures/intertheo/07/interactiveTheoremProvingForAgdaUsers.html.
6. Bove A., Dybjer P. Dependent types at work. https://www.cse.chalmers.se/~peterd/papers/DependentTypesAtWork.pdf.
7. Norell U. Dependently typed programming in Agda. 2008. https://www.cse.chalmers.se/~ulfn/papers/afp08/tutorial.pdf.
8. Altenkirch T. Computer aided formal reasoning. https://www.cs.nott.ac.uk/~txa/g53cfr/.
9. Coq Project Authors. Coq: Documentation. 2015. https://coq.inria.fr/documentation.
10. Idris Project Authors. Idris: Documentation. 2015. https://www.idris-lang.org/pages/documentation.html.
11. Epigram. http://www.e-pig.org/.
12. Sørensen M.H.B., Urzyczyn P. Lectures on the Curry-Howard isomorphism. 1998. https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.7385.
13. Thompson S. Type theory and functional programming. https://www.cs.kent.ac.uk/people/staff/sjt/TTFP/.
14. Martin-Löf P. Intuitionistic type theory. Notes by Giovanni Sambin. 1980. https://web.archive.org/web/20230930164024/https://www.csie.ntu.edu.tw/~b94087/ITT.pdf.
15. Martin-Löf P. Intuitionistic type theory. https://intuitionistic.files.wordpress.com/2010/07/martin-lof-tt.pdf.
16. Nordström B., Petersson K., Smith J.M. Programming in Martin-Löf’s Type Theory. An Introduction. https://www.cse.chalmers.se/research/group/logic/book/.
17. augustss. Simpler, Easier! https://augustss.blogspot.com/2007/10/simpler-easier-in-recent-paper-simply.html.
18. Bauer A. How to implement dependent type theory I. https://math.andrej.com/2012/11/08/how-to-implement-dependent-type-theory-i/.
19. Bauer A. How to implement dependent type theory II. https://math.andrej.com/2012/11/11/how-to-implement-dependent-type-theory-ii/.
20. Bauer A. How to implement dependent type theory III. https://math.andrej.com/2012/11/29/how-to-implement-dependent-type-theory-iii/.
21. Nix: Purely functional package manager. https://nixos.org/nix/.
22. McBride C., McKinna J. The view from the left. http://strictlypositive.org/view.ps.gz.
(As of the writing of Version 15 of this document) Most of that proposal was implemented between 2013 and 2024: agda2-mode now has agda2-solve-maybe-all and agda2-elaborate-give commands. But multi-goal tracking still does not exist, unfortunately.↩︎
By the way, this document is far from finished, but should be pretty useful in its current state.↩︎
(As of Version 15) 11 years later and still unfinished…↩︎