
Logicians and theoretical computer scientists may find it helpful to think of an indexed datatype as a set of judgements given by inference rules.

Brutal [Meta]Introduction to Dependent Types in Agda

There are loads of idiosyncratic features in both, so I mention only a couple of general things which are most relevant to me. Agda: much stronger and more robust inference & unification, HoTT compatibility, modules, universe polymorphism, rewrite pragmas. Idris: far more usable program compilation & code backends.

Agda uses an oversimplified lexer that splits tokens by spaces, parentheses, and braces.

When programming with dependent types, a predicate on A becomes a function from A to types, i.e. A → Set. If a : A satisfies the predicate P : A → Set, then the function P returns a type whose elements are proofs of P a; if a doesn't satisfy P, it returns an empty type.
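A minimal sketch of the idea in Agda (the names IsZero, zero-is-zero are illustrative; only builtin modules are used):

```agda
open import Agda.Builtin.Nat
open import Agda.Builtin.Unit

data ⊥ : Set where

-- IsZero maps each n to a type: the unit type ⊤ (with the trivial
-- proof tt) when n is zero, and the empty type ⊥ otherwise.
IsZero : Nat → Set
IsZero zero    = ⊤
IsZero (suc _) = ⊥

zero-is-zero : IsZero zero
zero-is-zero = tt
```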

The magic of dependent types makes the type of the second argument of div2e change every time we pattern match on the first argument n.

Unification is the most important (at least with pattern matching on inductive datatypes involved) and easily forgotten aspect of dependently typed programming.

In datatype declarations, things before the colon are called "parameters"; things after the colon but before Set are called "indices".

Decidable propositions are the glue that makes your program work with the real world.

Difference between type parameters and indices?

Parameters merely indicate that the type is somewhat generic and behaves parametrically with regard to the argument supplied.

Indices on the other hand may affect which inhabitants you may find in the type! That's why we say they index a family of types, that is, each index tells you which one of the types (within the family of types) you are looking at (in that sense, a parameter is a degenerate case where all the indices point to the same set of "shapes").

This is why, when you define inductive families, usually you can define the parameters for the entire type, but you have to specify the indices for each constructor (since you are allowed to specify, for each constructor, what indices it lives at).
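A minimal sketch in Agda (the usual Vec example; names illustrative): the element type is a parameter fixed for the whole declaration, while the length is an index that each constructor targets individually:

```agda
open import Agda.Builtin.Nat

-- A is a parameter (before the colon); the Nat is an index (after it).
data Vec (A : Set) : Nat → Set where
  []  : Vec A zero                            -- this constructor lives at index zero
  _∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)   -- and this one at suc n
```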

zero != n of type Nat

There’s no direct analogue of case in Agda; the with construct allows pattern matching on intermediate expressions (just like Haskell’s case), but (unlike case) only at the top level of a clause. Thus with effectively just adds a “derived” argument to a function. Just like with normal arguments, pattern matching on a derived argument might change some types in the context.
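For example (a self-contained sketch, with a locally defined List):

```agda
open import Agda.Builtin.Bool

data List (A : Set) : Set where
  []  : List A
  _∷_ : A → List A → List A

-- `with p x` abstracts over the intermediate expression p x, adding it
-- as a derived argument that the following clauses pattern match on.
filter : {A : Set} → (A → Bool) → List A → List A
filter p []       = []
filter p (x ∷ xs) with p x
... | true  = x ∷ filter p xs
... | false = filter p xs
```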

Parameters vs Indices

Agda Vim editing

Programming Language Theory in Agda

Declarative GUIs: Simple, Consistent, and Verified

Quick Guide to Editing, Type Checking and Compiling Agda Code

tip: one can "Give" (C-c C-SPC) while having ? inside the goal

To enter a subscript in the Emacs mode, type "\_1".

Commands working with types [like C-d] can be prefixed with C-u to compute type without further normalisation and with C-u C-u to compute normalised types.

Library Management. agda-stdlib.

writing Unicode chars in Emacs

compilers

Note that pattern matching on an Integer is slower than on a unary natural number. Code that does a lot of unary manipulations and doesn’t use builtin arithmetic likely becomes slower due to this optimization. If you find that this is the case, it is recommended to use a different, but isomorphic, type instead of the builtin natural numbers.

Typical examples of erasable types are the equality type and the accessibility predicate used for well-founded recursion

The erasure means that equality proofs will (mostly) be erased, and never looked at, and functions defined by well-founded recursion will ignore the accessibility proof.

built-ins

Instrumenting Total Parsers Written in agdarsec. paper. github. earlier paper

Working only with classical logic?

Is Agda without K less powerful?

What is Axiom K?

Problem solving PLFA exercises

Assisting Agda's termination checker. Total definition of Gcd in Idris. MiniAgda: Integrating Sized and Dependent Types.

I would like to offer a slightly different answer than the ones given above. In particular, I want to suggest that instead of trying to somehow convince the termination checker that actually, no, this recursion is perfectly fine, we should instead try to reify the well-founded-ness so that the recursion is manifestly fine in virtue of being structural.

Libraries and other developments

Auto in Agda

The Coq proof assistant has two distinct language fragments. Besides the programming language Gallina, there is a separate tactic language for writing and programming proof scripts. Together with several highly customizable tactics, the tactic language Ltac can provide powerful proof automation [Chlipala, 2013]. Having to introduce a separate tactic language, however, seems at odds with the spirit of type theory, where a single language is used for both proof and computation. Having a separate language for programming proofs has its drawbacks: programmers need to learn another language to automate proofs, debugging Ltac programs can be difficult, and the resulting proof automation may be inefficient

open type-level proofs

proving stuff:

Proof by induction isn't just flailing about, you know? The trick is to pick the case analysis that provokes the "stuck" programs to do a step of computation. Then the same reasoning that justifies the termination of the program will justify the induction in a proof about it.

cubical types for dummies. cubical adventures.

Agda's New Sorts

An agda proposition used in the type — what does it mean?

How to compare two sets in Agda?

Agda exercises

Built-ins

module Agda.Builtin.Equality

While it is possible to define your own versions of the built-in types and bind them using BUILTIN pragmas, it is recommended to use the definitions in the Agda.Builtin modules. These modules are installed when you install Agda and so are always available. For instance, built-in natural numbers are defined in Agda.Builtin.Nat. The standard library and the agda-prelude reexport the definitions from these modules.
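For example (a small sketch), opening a builtin module brings its definitions directly into scope:

```agda
open import Agda.Builtin.Nat   -- Nat, zero, suc, _+_, ...

double : Nat → Nat
double n = n + n
```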

Browse the standard library as clickable HTML

universes. more on universes. worlds.

function notation

Why haven't newer dependently typed languages adopted SSReflect's approach?

There are two conventions I've found in Coq's SSReflect extension that seem particularly useful but which I haven't seen widely adopted in newer dependently-typed languages (Lean, Agda, Idris).

Firstly, where possible predicates are expressed as boolean-returning functions rather than inductively defined datatypes. This brings decidability by default, opens up more opportunities for proof by computation, and improves checking performance by avoiding the need for the proof engine to carry around large proof terms. The main disadvantage I see is the need to use reflection lemmas to manipulate these boolean predicates when proving.
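To illustrate the two styles side by side (a sketch in Agda rather than Coq; names illustrative):

```agda
open import Agda.Builtin.Nat
open import Agda.Builtin.Bool

-- SSReflect style: a boolean-returning predicate, decidable by construction.
even? : Nat → Bool
even? zero          = true
even? (suc zero)    = false
even? (suc (suc n)) = even? n

-- The inductive counterpart: a datatype whose inhabitants are proof terms.
data Even : Nat → Set where
  ezero : Even zero
  esuc  : ∀ {n} → Even n → Even (suc (suc n))
```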

Secondly, datatypes with invariants are packaged as a plain datatype together with a proof of the invariant, rather than as a dependently typed datastructure in which the invariant is part of its type. One advantage of SSReflect's approach is that it allows reuse, so that for instance many of the functions defined for seq and proofs about them can still be used with tuple (by operating on the underlying seq), whereas with Idris' approach functions like reverse, append and the like need to be rewritten for Vect. Lean actually has a SSReflect-style equivalent in its standard library, vector, but it also has an Idris-style array which seems to have an optimised implementation in the runtime.

So SSReflect's approach is not perfect. Neither is the other one. Is there anything that improves upon both? Yes, Ornaments (see Ornamental Algebras, Algebraic Ornaments and The Essence of Ornaments). You can easily get Vec from List by applying the corresponding ornamental algebra, but I can't say how much code reuse you'll get from it and whether types will drive you nuts or not. I heard people actually use ornaments somewhere.

Ornamental Algebras, Algebraic Ornaments. The Essence of Ornaments. Datatypes of Datatypes.

I see types as an external rationalisation imposed upon the raw stuff of computation

the equational theory of open terms (conceived as functions from valuations of their variables) will always be to some extent beyond the ken of the computer.

The remedy for our inevitable disappointment with judgmental equality is to define a notion of evidence for equality

we have declared that the equality predicate for closed terms is whatever judgmental equality we happen to have chosen. We have sealed our disappointment in, but we have gained the ability to prove useful equations on open terms.

repurpose the existing data declaration as mere sugar for the definition of the corresponding code, and to ensure that those codes carry enough information (e.g., about constructor names, implicit argument conventions, and so forth) to sustain the readability

Proving properties of programs by structural induction.

Don't use Booleans!

That said, you should still be able to recover the information you threw away and extract a HasElem x xs value from a context of Elem x xs ~ True. It's tedious, though - you have to search the list for the item (which you already did in order to evaluate Elem x xs!) and eliminate the impossible cases.

So (disjoint s) does give you the same information as Disjoint s; you just need to extract it. Basically, if there is no inconsistency between disjoint and Disjoint, then you should be able to write a function So (disjoint s) -> Disjoint s using pattern matching, recursion and impossible-case elimination.
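For reference, a sketch of the So type being discussed, plus the kind of extraction by pattern matching it allows (names illustrative; the stdlib version differs in details):

```agda
open import Agda.Builtin.Bool
open import Agda.Builtin.Equality

data So : Bool → Set where
  oh : So true

-- Matching on oh forces b to be true; the false case is impossible.
so-true : ∀ {b} → So b → b ≡ true
so-true oh = refl
```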

Agda Tips. more on reddit

If you’ve got a hole in your program, you can put the cursor in it and press SPC-m-a (in spacemacs), and Agda will try and find the automatic solution to the problem. For a while, I didn’t think much of this feature, as rare was the program which Agda could figure out. Turns out I was just using it wrong! Into the hole you should type the options for the proof search: enabling case-splitting (-c), enabling the use of available definitions (-r), and listing possible solutions (-l).

Often, a program will not be obviously terminating (according to Agda’s termination checker). The first piece of advice is this: don’t use well-founded recursion. It’s a huge hammer, and often you can get away with fiddling with the function (try inlining definitions, rewriting generic functions to monomorphic versions, or replacing with-blocks with helper functions), or using one of the more lightweight techniques out there.

When combining prescriptive and descriptive indices, ensure both are in constructor form. Exclude defined functions which yield difficult unification problems.

This is exactly the problem inspect solves: it runs your function, giving you a result, but also giving you a proof that the result is equal to running the function.
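A self-contained sketch of the idiom (the standard library's inspect is more general, but the idea is the same):

```agda
open import Agda.Builtin.Equality

-- Run a computation and remember a proof of where the result came from.
data Singleton {A : Set} (x : A) : Set where
  _with≡_ : (y : A) → x ≡ y → Singleton x

inspect : {A : Set} (x : A) → Singleton x
inspect x = x with≡ refl
```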

This one is a little confusing, because Agda’s notion of “irrelevance” is different from Idris’, or Haskell’s. In all three languages, irrelevance is used for performance: it means that a value doesn’t need to be around at runtime, so the compiler can elide it.

In dependently typed languages, this isn’t a distinction we can rely on. The line between runtime entities and compile-time entities is drawn elsewhere, so quite often types need to exist at runtime. As you might guess, though, they don’t always need to. The length of a length-indexed vector, for instance, is completely determined by the structure of the vector: why would you bother storing all of that information at runtime? This is what Idris recognizes, and what it tries to remedy: it analyses code for these kinds of opportunities for elision, and does so when it can. Kind of like Haskell’s fusion, though, it’s an invisible optimization, and there’s no way to make Idris throw a type error when it can’t elide something you want it to elide.

Three Tricks to make Termination Obvious

Canonical Structures in Agda using REWRITE

The Agda archives.

https://lists.chalmers.se/pipermail/agda/

https://lists.chalmers.se/pipermail/agda/2018/009985.html

https://lists.chalmers.se/pipermail/agda/2018/009986.html

https://lists.chalmers.se/pipermail/agda/2018/009950.html IO examples

https://lists.chalmers.se/pipermail/agda/2018/010031.html

https://lists.chalmers.se/pipermail/agda/2018/010158.html

https://lists.chalmers.se/pipermail/agda/2018/010159.html

https://lists.chalmers.se/pipermail/agda/2018/010195.html

https://lists.chalmers.se/pipermail/agda/2018/010263.html

https://lists.chalmers.se/pipermail/agda/2018/010305.html

Level polymorphism bloats almost every definition in the Standard Prelude

Indeed; to give another little data point, my personal experience is that I use --type-in-type with Agda almost always, and check predicativity by hand (usually not that hard, if you aren't doing something which will obviously be bad). I suspect a lot of people work this way, so as far as the social force is concerned, the solution which is eventually adopted in Agda X might need to be not much more bureaucratic than this

https://lists.chalmers.se/pipermail/agda/2018/010267.html https://lists.chalmers.se/pipermail/agda/2018/010276.html [Agda] The joys and sorrows of abstraction

Implicit generalization

Keeping Formal Verification in Bounds.

Agda Beginner(-ish) Tips, Tricks, and Pitfalls. Total Combinations.

Graphs are to categories as lists are to monoids.

Univalent Categories A formalization of category theory in Cubical Agda.

Resources for Teaching with Formal Methods.

Automatically And Efficiently Illustrating Polynomial Equalities in Agda. tweet.

Generic Zero-Cost Reuse for Dependent Types

Dependently typed languages are well known for having a problem with code reuse. Traditional non-indexed algebraic datatypes (e.g. lists) appear alongside a plethora of indexed variations (e.g. vectors). Functions are often rewritten for both non-indexed and indexed versions of essentially the same datatype, which is a source of code duplication.

Finger Trees in Agda

filter in Agda

Permutations By Sorting

Introduction to Univalent Foundations of Mathematics with Agda

Introductory Agda sessions

tactic arguments

agda quality code

Automatically and Efficiently Illustrating Polynomial Equalities in Agda

tips for matrices in agda

(Programming Languages) in Agda = Programming (Languages in Agda)

Vectors and Matrices in Agda

From type theory to setoids and back

improvements to the with construct

Lazily exploring a directory (co)tree to find a file with a given basename, in Agda

a cheatsheet to Agda demonstrating how to use Haskell for sideffects

Introduction to Univalent Foundations of Mathematics with Agda Martín Hötzel Escardó

a pretty slick way to get computation to do more for us. I guess it's what ssreflect was always getting at, in a way

Three Equivalent Ordinal Notation Systems in Cubical Agda

Hack your type theory with rewrite rules, also in Haskell.

Equality of functions in Agda

equality in mechanized mathematics

Agda vs Coq vs Idris

Agda 2.6.1 has been released!

Type-based formal verification

agda resources

Brigitte Pientka: Mechanizing Meta-Theory in Beluga https://vimeo.com/423668919

ZuriHac

Competing inheritance paths in dependent type theory: a case study in functional analysis

Agda editor support

Leibniz Equality is Isomorphic to Martin-Löf Identity, Parametrically

tactics

inference in agda

about fin (Idris). reddit.

the nature of tactics

The Agda universal algebra library and Birkhoff's theorem in Martin-Löf dependent type theory

induction-recursion and universes

The Agda Universal Algebra Library, Part 1: Foundation

Emacs

In tty mode, the binding C-c C-RET has no effect.

Workaround: manually bind C-c RET to 'proof-goto-point instead.

Emacs key binding commands

Bind command to C-RET in Emacs

9.1 Adding your own keybindings

Should we bind C-c C-j in terminals?

C-c C-RET (proof-goto-point) doesn't work in shells.

[Backspace Key](https://www.emacswiki.org/emacs/BackspaceKey)

Traditionally, Unix uses the ^H keystroke to send a backspace from or to a terminal. Emacs, not coming from a Unix background (see CategoryHistory), does not respect this tradition.

9.6 Why does the Backspace key invoke help?

Unable to hide welcome screen in Emacs

(custom-set-faces
 ;; custom-set-faces was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 '(default ((t (:family "Consolas" :foundry "outline" :slant normal :weight normal :height 120 :width normal)))))

 (setq inhibit-startup-screen t)

recent open files

Literate Programming: Empower Your Writing with Emacs Org-Mode hn

Selecting things in Emacs

Getting productive with selection and navigation in Emacs

GNU Emacs Survival Card for version 26

(**
* Induction in Coq
-----
#<i>#
Topics:
- recursive functions
- induction on lists
- induction on natural numbers
- rings and fields
- induction principles
#</i>#
-----
We'll need the list library for these notes.
*)
Require Import List.
Import ListNotations.
(**
(**********************************************************************)
** Recursive functions
The [List] library defines the list append operator, which in Coq is written as
infix operator [++], or as prefix function [app]. In OCaml, the same operator
is written [@]. In OCaml's standard library, you'll recall that append
is defined as follows:
<<
let rec append lst1 lst2 =
match lst1 with
| [] -> lst2
| h::t -> h :: (append t lst2)
>>
The Coq equivalent of that would be: *)
Fixpoint append {A : Type} (lst1 : list A) (lst2 : list A) :=
match lst1 with
| nil => lst2
| h::t => h :: (append t lst2)
end.
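(** For example (a quick check, not part of the original notes), Coq can
compute with [append], inferring the implicit type argument for us: *)
Compute append (1 :: 2 :: nil) (3 :: nil).
(* = [1; 2; 3] : list nat *)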
(** The [Fixpoint] keyword is similar to a [let rec] definition in OCaml. The
braces around [A : Type] in that definition make the [A] argument implicit,
hence we don't have to provide it at the recursive call to [append] in the
second branch of the pattern match. Without the braces, we'd have to write the
function as follows: *)
Fixpoint append' (A : Type) (lst1 : list A) (lst2 : list A) :=
match lst1 with
| nil => lst2
| h::t => h :: (append' A t lst2)
end.
(** The actual definition of [++] in the Coq standard library is a little more
complicated, but it's essentially the same idea. Here's that definition:
<<
Definition app (A : Type) : list A -> list A -> list A :=
fix app' lst1 lst2 :=
match lst1 with
| nil => lst2
| h :: t => h :: app' t lst2
end.
>>
The Coq [fix] keyword is similar to a [let rec] expression in OCaml, but where
the body of the expression is implicit: Coq [fix f x1 .. xn := e] is like OCaml
[let rec f x1 .. xn = e in f]. So in OCaml we could rephrase the above
definition as follows:
<<
let app : 'a list -> 'a list -> 'a list =
let rec app' lst1 lst2 =
match lst1 with
| [] -> lst2
| h :: t -> h :: app' t lst2
in
app'
>>
Now that we know how Coq defines [++], let's prove a theorem about it.
*)
Theorem nil_app : forall (A:Type) (lst : list A),
[] ++ lst = lst.
(** Intuition: appending the empty list to [lst] immediately returns
[lst], by the definition of [++]. *)
Proof.
intros A lst.
simpl.
trivial.
Qed.
(** The second step in that proof simplifies [[] ++ lst] to [lst]. That's
because of how [++] is defined: it pattern matches against its first argument,
which here is [[]], hence simply returns its second argument.
Next, let's prove that appending nil on the right-hand side also results in
[lst]: *)
Theorem app_nil : forall (A:Type) (lst : list A),
lst ++ [] = lst.
(* Intuition (incomplete): by case analysis on [lst].
- if [lst] is [[]], then trivially [[] ++ []] is [[]].
- if [lst] is [h::t], then ...? *)
Proof.
intros A lst. destruct lst as [ | h t].
- trivial.
- simpl. (* can't proceed *)
Abort.
(** When we get to the end of that proof, we are trying to show that [h :: (t ++
[]) = h :: t]. There's no way to make progress on that, because we can't
simplify [t ++ []] to just [t]. Of course as humans we know that holds. But to
Coq, that's a fact that hasn't yet been proved. Indeed, it is an instance of
the theorem we're currently trying to prove!
What's going wrong here is that case analysis is not a sufficiently powerful
proof technique for this theorem. We need to be able to _recursively_ apply the
theorem we're trying to prove to smaller lists. That's where _induction_ comes
into play.
(**********************************************************************)
** Induction on lists
_Induction_ is a proof technique that you will have encountered in math
classes before---including CS 2800. It is useful when you want to prove
that some property holds of all the members of an infinite set, such as
the natural numbers, as well as lists, trees, and other data types.
One classic metaphor for induction is _falling dominos_: if you arrange some
dominos such that each domino, when it falls, will knock over the next domino,
and if you knock over the first domino, then all the dominos will fall. Another
classic metaphor for induction is a _ladder_: if you can reach the first rung,
and if for any given rung the next rung can be reached, then you can reach any
rung you wish. (As long as you're not afraid of heights.)
What both of those metaphors have in common is
- a _base case_, in which something is done first. For the dominos, it's
knocking over the first domino; for the ladder, it's climbing the first rung.
And,
- an _inductive case_, in which a step is taken from one thing to the
the next. For the dominos, it's one domino knocking over the next; for the
ladder, it's literally taking the step from one rung to the next. In both
cases, it must actually be possible for the action to occur: if the dominos
or the rungs were spaced too far apart, then progress would stop.
A proof by induction likewise has a base case and an inductive case.
Here's the structure of a proof by induction on a list:
<<
Theorem. For all lists lst, P(lst).
Proof. By induction on lst.
Case: lst = nil
Show: P(nil)
Case: lst = h::t
IH: P(t)
Show: P(h::t)
QED.
>>
The _base case_ of a proof by induction on lists is for when the list is empty.
The _inductive case_ is when the list is non-empty, hence is the cons of a head
to a tail. In the inductive case, we get to assume an _inductive hypothesis_,
which is that the property [P] holds of the tail.
In the metaphors above, the inductive hypothesis is the assumption that we've
already reached some particular domino or rung. From there, the metaphorical
work we do in the inductive case of the proof is to show that from that domino
or rung, we can reach the next one.
An inductive hypothesis is exactly the kind of assumption we needed to get our
proof about appending nil to go through.
Here's how that proof could be written:
<<
Theorem: for all lists lst, lst ++ nil = lst.
Proof: by induction on lst.
P(lst) = lst ++ nil = lst.
Case: lst = nil
Show:
P(nil)
= nil ++ nil = nil
= nil = nil
Case: lst = h::t
IH: P(t) = (t ++ nil = t)
Show
P(h::t)
= (h::t) ++ nil = h::t
= h::(t ++ nil) = h::t // by definition of ++
= h::t = h::t // by IH
QED
>>
In Coq, we could prove that theorem as follows:
*)
Theorem app_nil : forall (A:Type) (lst : list A),
lst ++ nil = lst.
Proof.
intros A lst. induction lst as [ | h t IH].
- simpl. trivial.
- simpl. rewrite -> IH. trivial.
Qed.
(** The tactics used in that proof correspond exactly to the non-Coq proof
above.
The [induction] tactic is new to us. It initiates a proof by induction on its
argument, in this case [lst], and provides names for the variables to be used in
the cases. There aren't any new variables in the base case, but the inductive
case has variables for the head of the list, the tail, and the inductive
hypothesis. You could leave out those variables and simply write [induction
lst.], but that leads to a less human-readable proof.
In the inductive case, we use the [rewrite ->] tactic to rewrite [t ++
nil] to [t]. The [IH] says those terms are equal. That tactic replaces the
left-hand side of the equality with the right-hand side, wherever the left-hand
side appears in the subgoal. It's also possible to rewrite from right to left
with the [rewrite <-] tactic. If you leave out the arrow, Coq assumes that
you mean [->].
Here's another theorem we can prove in exactly the same manner. This
theorem shows that append is _associative_.
<<
Theorem: forall lists l1 l2 l3, l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3.
Proof: by induction on l1.
P(l1) = l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3
Case: l1 = nil
Show:
P(nil)
= nil ++ (l2 ++ l3) = (nil ++ l2) ++ l3
= l2 ++ l3 = l2 ++ l3 // simplifying ++, twice
Case: l1 = h::t
IH: P(t) = t ++ (l2 ++ l3) = (t ++ l2) ++ l3
Show:
P(h::t)
= h::t ++ (l2 ++ l3) = (h::t ++ l2) ++ l3
= h::(t ++ (l2 ++ l3)) = h::((t ++ l2) ++ l3) // simplifying ++, thrice
= h::((t ++ l2) ++ l3) = h::((t ++ l2) ++ l3) // by IH
QED
>>
In Coq, that proof looks more or less identical to our previous Coq proof
about append and nil:
*)
Theorem app_assoc : forall (A:Type) (l1 l2 l3 : list A),
l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3.
(* Intuition: above *)
Proof.
intros A l1 l2 l3.
induction l1 as [ | h t IH].
- trivial.
- simpl. rewrite -> IH. trivial.
Qed.
(**
(**********************************************************************)
** Induction on natural numbers
One of the classic theorems proved by induction is that [0 + 1 + ... + n] is
equal to [n * (n+1) / 2]. It uses proof by induction on the natural numbers,
which are the non-negative integers. The structure of a proof by induction on
the naturals is as follows:
<<
Theorem. For all natural numbers n, P(n).
Proof. By induction on n.
Case: n = 0
Show: P(0)
Case: n = k+1
IH: P(k)
Show: P(k+1)
QED.
>>
The base case is for zero, the smallest natural number. The inductive case
assumes that P holds of k, then shows that P holds of k+1.
The [induction] tactic in Coq works for inductive types---that is, types defined
with the [Inductive] keyword. You might, therefore, suspect that if we're going
to do induction over [nat]s, the type [nat] must be inductively defined. Indeed
it is. There is a famous inductive definition of the natural numbers that is
credited to Giuseppe Peano (1858-1932). In OCaml, that definition would be:
<<
type nat = O | S of nat
>>
The [O] constructor (that's the letter capital O) represents zero.
The [S] constructor represents the successor function---that is, the
function that adds one to its argument. So:
- 0 is [O]
- 1 is [S O]
- 2 is [S (S O)]
- 3 is [S (S (S O))]
- etc.
This is a kind of _unary_ representation of the naturals, in which we repeat
the symbol [S] a total of [n] times to represent the natural number [n].
The Coq definition of [nat] is much the same:
*)
Print nat.
(**
Coq responds with output that is equivalent to the following:
<<
Inductive nat : Set :=
| O : nat
| S : nat -> nat
>>
That is, [nat] has two constructors, [O] and [S], which are just like the OCaml
constructors we examined above. And [nat] has type [Set], meaning that [nat] is
a specification for program computations. (Or, a little more loosely, that
[nat] is a type representing program values.)
Anywhere we write something that looks like an integer literal in Coq, Coq
actually understands it as its expansion in the Peano representation defined
above. For example, [2] is understood by Coq as just syntactic sugar for [S (S
O)]. We can even write computations using those constructors:
*)
Compute S (S O).
(**
Coq responds, though, by reintroducing the syntactic sugar:
<<
= 2 : nat
>>
The Coq standard library defines many functions over [nat] using those
constructors and pattern matching, including addition, subtraction,
and multiplication. For example, addition is defined like this:
*)
Fixpoint my_add (a b : nat) : nat :=
match a with
| 0 => b
| S c => S (my_add c b)
end.
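(** As a quick sanity check (not in the original notes), we can ask Coq to
evaluate an application of [my_add]: *)
Compute my_add 2 3.
(* = 5 : nat *)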
(**
Note that we're allowed to use either [0] or [O] as a pattern, because
the former is just syntactic sugar for the latter. The second branch
of the pattern match is effectively calling [my_add] recursively with
[a-1] as its first argument, since [a = S c], meaning that [a] is the
successor of [c].
Now that we know how [nat] is defined inductively, let's try
to prove the classic theorem mentioned above about summation.
Moreover, let's prove that a program that computes the sum [0 + 1 + ... + n]
does in fact compute [n * (n+1) / 2]. First, we need to write
that program. In OCaml, we could write the program quite easily:
<<
let rec sum_to n =
if n=0 then 0
else n + sum_to (n-1)
>>
In Coq, it will turn out to be surprisingly tricky...
(**********************************************************************)
** Recursive functions, revisited
Here's a first attempt at defining [sum_to], which is just a direct translation
of the OCaml code into Coq. The [Fail] keyword before it tells Coq to expect
the definition to fail to compile. *)
Fail Fixpoint sum_to (n:nat) : nat :=
if n = 0 then 0 else n + sum_to (n-1).
(**
Coq responds:
<<
The command has indeed failed with message:
The term "n = 0" has type "Prop"
which is not a (co-)inductive type.
>>
The problem is that the equality operator [=] returns a proposition (i.e.,
something we could try to prove), whereas the [if] expression expects a Boolean
(i.e., [true] or [false]) as its guard. (Actually [if] is willing to accept any
value of an inductive type with exactly two constructors as its guard, and
[bool] is an example of such a type.)
To fix this problem, we need to use an equality operator that returns a [bool],
rather than a [Prop], when applied to two [nat]s. Such an operator is defined
in the [Arith] library for us: *)
Require Import Arith.
Locate "=?".
Check Nat.eqb.
(** Coq responds:
<<
Nat.eqb : nat -> nat -> bool
>>
We can now try to use that operator. Unfortunately, we discover a new problem:
*)
Fail Fixpoint sum_to (n:nat) : nat :=
if n =? 0 then 0 else n + sum_to (n-1).
(** Coq responds with output that contains the following lines:
<<
Recursive definition of sum_to is ill-formed.
...
Recursive call to sum_to has principal argument equal to
"n - 1" instead of a subterm of "n".
...
>>
Although the error message might be cryptic, you can see that Coq is complaining
about the recursive call in the [else] branch. For some reason, Coq is unhappy
about the argument [n-1] provided at that call. Coq wants that argument to be a
"subterm" of [n]. The words _term_ and _expression_ are synonymous here, so Coq
is saying that it wants the argument to be a subexpression of [n]. Of course
[n] doesn't have any subexpressions. So why is Coq giving us this error?
Before we can answer that question, let's look at a different recursive
function---one that implements an infinite loop: *)
Fail Fixpoint inf (x:nat) : nat := inf x.
(**
Coq responds very similarly to how it did with [sum_to]:
<<
Recursive definition of inf is ill-formed.
...
Recursive call to inf has principal argument equal to
"x" instead of a subterm of "x".
>>
The reason Coq rejects [inf] is that #<b>Coq does not permit any infinite
loops</b>#. That might seem strange, but there's an excellent reason for it.
Consider how [inf] would be defined in OCaml:
<<
# let rec inf x = inf x
val inf : 'a -> 'b = <fun>
>>
Let's look at the type of that function, using what we learned about
propositions-as-types in the previous lecture. The type ['a -> 'b]
corresponds to the proposition [A -> B], where [A] and [B] could
be any propositions. In particular, [A] could be [True] and [B]
could be [False], leading to the proposition [True -> False]. That's
a proposition that should never be provable: truth does not imply
falsehood. And yet, since [inf] is a program of that type, [inf]
corresponds to a proof of that proposition. So using [inf] we could
actually prove [False]:
<<
type void = {nope : 'a . 'a};;
let rec inf x = inf x;;
let ff : void = inf ();;
>>
The [void] type is how we represented [False] in the previous lecture.
The value [ff] above corresponds to a proof of [False].
So infinite loops are able to prove [False].
In OCaml we don't mind that phenomenon, because OCaml's purpose is not to be a
proof assistant. But in Coq it would be deadly: we should never allow the
proof assistant to prove false propositions. Coq therefore wants to prohibit
all infinite loops. But that's easier said than done! Recall from CS 2800 that
the _halting problem_ is undecidable: we can't write a program that precisely
determines whether another program will terminate. Well, the Coq compiler is a
program, and it wants to detect which programs terminate and which programs do
not---which is exactly what the halting problem says is impossible.
So instead of trying to do something impossible, Coq settles for doing something
possible but imprecise, specifically, something that prohibits all
non-terminating programs as well as prohibiting some terminating programs. Coq
enforces a syntactic restriction on recursive functions: there must always be an
argument that is _syntactically smaller_ at every recursive function
application. An expression [e1] is syntactically smaller than [e2] if [e1] is a
subexpression of [e2]. For example, [1] is syntactically smaller than [1-x],
but [n-1] is not syntactically smaller than [n]. It turns out this restriction
is sufficient to guarantee that programs must terminate: eventually, if every
call results in something smaller, you must reach something that is small enough
that you cannot make a recursive call on it, hence evaluation must terminate. A
synonym for "syntactically smaller" is _structurally decreasing_.
But that does rule out some programs that we as humans know will terminate yet
do not meet the syntactic restriction. And [sum_to] is one of them. Here is the
definition we previously tried: *)
Fail Fixpoint sum_to (n:nat) : nat :=
if n =? 0 then 0 else n + sum_to (n-1).
(** The recursive call to [sum_to] has as its argument [n-1], which
syntactically is actually bigger than the original argument of [n]. So Coq
rejects the program.
To finally succeed in defining [sum_to], we can make use of what we know about
how [nat] is defined: since it's an inductive type, we can pattern match on it:
*)
Fixpoint sum_to (n:nat) : nat :=
match n with
| 0 => 0
| S k => n + sum_to k
end.
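(** A quick check (not in the original notes) that [sum_to] computes the
expected sum 0 + 1 + 2 + 3 + 4: *)
Compute sum_to 4.
(* = 10 : nat *)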
(** The second branch of the pattern match recurses on an argument that is both
syntactically and arithmetically smaller, just as our definition of [my_add]
did, above. So Coq's syntactic restriction on recursion is satisfied, and the
definition is accepted as a program that definitely terminates.
(**********************************************************************)
** Inductive proof of the summation formula
Now that we've finally succeeded in defining [sum_to], we can prove
the classic theorem about summation.
Here's how we would write the proof mathematically:
<<
Theorem: for all natural numbers n, sum_to n = n * (n+1) / 2.
Proof: by induction on n.
P(n) = sum_to n = n * (n+1) / 2
Case: n=0
Show:
P(0)
= sum_to 0 = 0 * (0+1) / 2
= 0 = 0 * (0+1) / 2 // simplifying sum_to 0
= 0 = 0 // 0 * x = 0
Case: n=k+1
IH: P(k) = sum_to k = k * (k+1) / 2
Show:
P(k+1)
= sum_to (k+1) = (k+1) * (k+1+1) / 2
= k + 1 + sum_to k = (k+1) * (k+1+1) / 2 // simplifying sum_to (k+1)
= k + 1 + k * (k+1) / 2 = (k+1) * (k+1+1) / 2 // using IH
= 2 + 3k + k*k = 2 + 3k + k*k // simplifying terms on each side
QED
>>
Now let's do the proof in Coq. *)
Theorem sum_sq : forall n : nat,
sum_to n = n * (n+1) / 2.
Proof.
intros n.
induction n as [ | k IH].
- trivial.
- simpl. rewrite -> IH.
Abort.
(** The proof is working fine so far, but now we have a complicated algebraic
equation we need to prove:
<<
S (k + k * (k + 1) / 2) = fst (Nat.divmod (k + 1 + k * S (k + 1)) 1 0 0)
>>
([divmod] is part of how [/] is implemented in Coq.)
Although we could try to prove that manually using the definitions of all the
operators, it would be much nicer to get Coq to find the proof for us. It turns
out that Coq does have great support for finding proofs that involve _rings_:
algebraic structures that support addition and multiplication operations.
(We'll discuss rings in detail after we finish the current proof.) But we can't
use that automation here, because the equation we want to prove also involves
division, and rings do not support division operations.
To avoid having to reason about division, we could rewrite the theorem we want
to prove: by multiplying both sides by 2, the division goes away: *)
Theorem sum_sq_no_div : forall n : nat,
2 * sum_to n = n * (n+1).
Proof.
intros n.
induction n as [ | k IH].
- trivial.
- simpl.
Abort.
(** Now, after the call to [simpl], we don't have any division, but we also
don't have any expressions that look exactly like the left-hand side of the
inductive hypothesis. The problem is that [simpl] was too aggressive in
simplifying all the expressions. All we really want is to transform the
left-hand side of the subgoal, [2 * sum_to (S k)], into an expression that
contains the left-hand side of the inductive hypothesis, [2 * sum_to k].
Thinking about the definition of [sum_to], we ought to be able to transform [2 *
sum_to (S k)] into [2 * (S k + sum_to k)], which equals [2 * (S k) + 2 * sum_to
k]. That final expression does have the left-hand side of the inductive
hypothesis in it, as desired. Let's factor out that reasoning as a separate
"helper" theorem. In math, helper theorems are usually called _lemmas_. The
Coq keyword [Lemma] is synonymous with [Theorem]. *)
Lemma sum_helper : forall n : nat,
2 * sum_to (S n) = 2 * (S n) + 2 * sum_to n.
Proof.
intros n. simpl. ring.
Qed.
(** The proof above simplifies the application of [sum_to (S n)], then invokes a
new tactic called [ring]. That tactic is able to automatically search for
proofs of equations involving addition and multiplication of natural numbers.
Now that we have our helper lemma, we can use it to prove the theorem: *)
Theorem sum_sq_no_div : forall n : nat,
2 * sum_to n = n * (n+1).
Proof.
intros n.
induction n as [ | k IH].
- trivial.
- rewrite -> sum_helper.
rewrite -> IH.
ring.
Qed.
(** Once more, after doing the rewriting with the lemma and the inductive
hypothesis, we're left with algebraic equations that can be proved simply by
invoking the [ring] tactic.
Finally, we can use [sum_sq_no_div] to prove the original theorem involving
division. To do that, we need to first prove another lemma that shows we can
transform a multiplication into a division: *)
Lemma div_helper : forall a b c : nat,
c <> 0 -> c * a = b -> a = b / c.
Proof.
intros a b c neq eq.
rewrite <- eq.
rewrite Nat.mul_comm.
rewrite Nat.div_mul.
trivial.
assumption.
Qed.
(**
That lemma involves two library theorems, [Nat.mul_comm] and [Nat.div_mul]. How
did we know to use these? Coq can help us search for useful theorems. Right
after we [rewrite <- eq] in the above proof, our subgoal is [a = c * a / c]. It
looks like we ought to be able to cancel the [c] term on the right-hand side.
So we can search for a theorem that would help us do that. The [Search] command
takes wildcards and reports all theorems that match the pattern we supply, for
example:
*)
Search (_ * _ / _).
(** Unfortunately, the search results currently seem to be broken in Visual
Studio Code: we get only one matching library theorem, instead of all of them.
Doing the search in [coqide] or [coqtop] (the Coq REPL), or just browsing
through the web documentation, does reveal a useful theorem:
<<
Nat.div_mul: forall a b : nat, b <> 0 -> a * b / b = a
>>
That would let us cancel a term from the numerator and denominator, but it
requires the left-hand side of the equality to be of the form [a * b / b],
whereas we have [c * a / c]. The problem is that the two sides of the
multiplication are reversed. No worries; multiplication is commutative, and
there is a library theorem that proves it. Again, we could find that theorem: *)
Search (_ * _ = _ * _).
(** One of the results is:
<<
Nat.mul_comm: forall n m : nat, n * m = m * n
>>
Putting those two library theorems to use, we're able to prove the lemma as
above.
Finally, we can use that lemma to prove our classic theorem about summation. *)
Theorem sum_sq : forall n : nat,
sum_to n = n * (n+1) / 2.
Proof.
intros n.
apply div_helper.
- discriminate.
- rewrite sum_sq_no_div. trivial.
Qed.
(** When we use [apply div_helper] in that proof, Coq generates two new
subgoals---one for each of the propositions [c <> 0] and [c * a = b] in the type
of [div_helper].
_Summary_: wow, that was a lot of work to prove that seemingly simple classic
theorem! We had to figure out how to code [sum_to], and we had to deal with a
lot of complications involving algebra. This situation is not uncommon: the
theorems that we think of as easy with pencil-and-paper (like arithmetic) turn
out to be hard to convince Coq of, whereas the theorems that we think of as
challenging with pencil-and-paper (like induction) turn out to be easy.
(**********************************************************************)
** Rings and fields
In the proof we just did above about summation, we used a tactic called [ring]
that we said searches for proofs about algebraic equations involving
multiplication and addition. Let's look more closely at that tactic.
When we studied OCaml modules, we looked at _rings_ as an example of a
mathematical abstraction of addition, multiplication, and negation. Here is an
OCaml signature for a ring:
<<
module type Ring = sig
type t
val zero : t
val one : t
val add : t -> t -> t
val mult : t -> t -> t
val neg : t -> t
end
>>
We could implement that signature with a representation type [t] that is [int],
or [float], or even [bool].
The names given in [Ring] are suggestive of the operations they represent, but
to really specify how those operations should behave, we need to write some
equations that relate them. Below are the equations that (it turns out) fully
specify [zero], [one], [add], and [mult]. Rather than use those identifiers, we
use the more familiar notation of [0], [1], [+], and [*].
<<
0 + x = x
x + y = y + x
x + (y + z) = (x + y) + z
0 * x = 0
1 * x = x
x * y = y * x
x * (y * z) = (x * y) * z
(x + y) * z = (x * z) + (y * z)
>>
Technically these equations specify what is known as a _commutative semi-ring_.
It's a _semi_-ring because we don't have equations specifying negation yet.
It's a _commutative_ semi-ring because the [*] operation commutes. (The [+]
operation commutes too, but that's always required of a semi-ring.)
The first group of equations specifies how [+] behaves on its own. The second
group specifies how [*] behaves on its own. The final equation specifies how [+]
and [*] interact.
If we extend the equations above with the following two, we get a specification
for a _ring_:
<<
x - y = x + (-y)
x + (-x) = 0
>>
It's a remarkable fact from the study of _abstract algebra_ that those equations
completely specify a ring. Any theorem you want to prove about addition,
multiplication, and negation follows from those equations. We call the
equations the _axioms_ that specify a ring.
Rings don't have a division operation. Let's introduce a new operator called
[inv] (short for "inverse"), and let's write [1/x] as syntactic sugar for [inv
x]. If we take all the ring axioms and add the following axiom for [inv],
we get what is called a _field_:
<<
x * 1/x = 1 if x<>0
>>
A field is an abstraction of addition, multiplication, negation, and division.
Note that OCaml [int]s do not satisfy the [inv] axiom above. For example, [2 *
(1/2)] equals [0] in OCaml, not [1]. OCaml [float]s mostly do satisfy the field
axioms, up to the limits of floating-point arithmetic. And in mathematics, the
rational numbers and the real numbers are fields.
Coq provides two tactics, [ring] and [field], that automatically search for
proofs using the ring and field axioms. The [ring] tactic was already loaded for
us when we wrote [Require Import Arith] earlier in this file. We can use the
[ring] tactic to easily prove equalities that follow from the ring axioms. Here
are two examples. *)
Theorem plus_comm : forall a b,
a + b = b + a.
Proof.
intros a b. ring.
Qed.
Theorem foil : forall a b c d,
(a + b) * (c + d) = a*c + b*c + a*d + b*d.
Proof.
intros a b c d. ring.
Qed.
(** Coq infers the types of the variables above to be [nat], because the [+] and
[*] operators are defined on [nat].
The proofs that the [ring] tactic finds can be quite complicated. For example,
try looking at the output of the following command. It's so long that we won't
put that output in this file! *)
Print foil.
(** Of course, [ring] won't find proofs of equations that don't actually hold.
For example, if we had a typo in our statement of [foil], then [ring] would
fail. *)
Theorem broken_foil: forall a b c d,
(a + b) * (c + d) = a*c + b*c + c*d + b*d.
Proof.
intros a b c d. try ring.
Abort.
(** Here's a theorem that [ring], perhaps surprisingly, cannot prove. *)
Theorem sub_add_1 : forall a : nat, a - 1 + 1 = a.
Proof.
intros a.
try ring.
Abort.
(** What's going wrong here is that [nat] is really only a semi-ring, not a
ring. That is, [nat] doesn't satisfy the axioms about negation. Why? Remember
that the natural numbers stop at [0]; we don't get any negative numbers. So if
[a] is [0] in the above theorem, [a-1] actually evaluates to [0] rather than
[-1]. *)
Compute 0-1. (* 0 : nat *)
(** If we want to reason about the integers instead of the natural numbers, we
can use a library called [ZArith] for that. The name comes from the fact that
[Z] is used in mathematics to denote the integers. *)
Require Import ZArith.
Open Scope Z_scope.
(** The [Open Scope] command causes the [ZArith] library's scope to be used to
resolve names, hence [+] becomes the operator on [Z] instead of on [nat], as
does [-], etc. *)
Compute 0-1. (* -1 : Z *)
(** Now we can prove the theorem from before. *)
Theorem sub_add_1 : forall a : Z, a - 1 + 1 = a.
Proof.
intros a. ring.
Qed.
(** Before going on, let's close the [Z] scope so that the operators go back to
working on [nat], as usual. *)
Close Scope Z_scope.
Compute 0-1. (* 0 : nat *)
(** Coq also provides implementations of the rational numbers as a field, as
well as the real numbers as a field. To get the [field] tactic, we first need
to load the [Field] library. *)
Require Import Field.
(** The rational numbers are provided in a couple different Coq libraries; the
one we'll use here is [Qcanon]. In mathematics, [Q] denotes the rational
numbers, and [canon] indicates that the numbers are stored in a _canonical
form_---that is, as simplified fractions. For example, [Qcanon] would represent
[2/4] as [1/2], eliminating the common factor of [2] from the numerator and the
denominator. (The [QArith] library provides rational numbers that are not in
canonical form.) *)
Require Import Qcanon.
Open Scope Qc_scope.
Theorem frac_qc: forall x y z : Qc, z <> 0 -> (x + y) / z = x / z + y /z.
Proof.
intros x y z z_not_0.
field. assumption.
Qed.
Close Scope Qc_scope.
(** The real numbers are provided in the [Reals] library. Here's that same
theorem again. *)
Module RealExample.
(** This code is in its own module for an annoying reason: [Reals] redefines its
own [nil], which would interfere with the examples we want to give further below in
this file with lists. *)
Require Import Reals.
Open Scope R_scope.
Theorem frac_r : forall x y z, z <> 0 -> (x + y) / z = x / z + y /z.
Proof.
intros x y z z_not_0.
field. assumption.
Qed.
(** The assumption that [z <> 0] was needed in the above theorems to avoid
division by zero. If we omitted that assumption, the [field] tactic would leave
us with an unprovable subgoal, as in the proof below. *)
Theorem frac_r_broken : forall x y z, (x + y) / z = x / z + y /z.
Proof.
intros x y z.
field.
Abort.
Close Scope R_scope.
End RealExample.
(**
(**********************************************************************)
** Induction principles
When we studied the Curry-Howard correspondence, we learned that proofs
correspond to programs. That correspondence applies to inductive proofs
as well, and as it turns out, inductive proofs correspond to recursive
programs. Intuitively, that's because an inductive proof involves
an inductive hypothesis---which is an instance of the theorem that
is being proved, but applied to a smaller value. Likewise, recursive
programs involve recursive calls---which are like another instance
of the function that is already being evaluated, but on a smaller value.
To get a more concrete idea of what this means, let's look at the proof
value (i.e., program) that Coq produces for our original inductive
proof in these notes:
*)
Print app_nil.
(**
Coq responds:
<<
app_nil =
fun (A : Type) (lst : list A) =>
list_ind (fun lst0 : list A => lst0 ++ nil = lst0) eq_refl
(fun (h : A) (t : list A) (IH : t ++ nil = t) =>
eq_ind_r (fun l : list A => h :: l = h :: t) eq_refl IH) lst
: forall (A : Type) (lst : list A), lst ++ nil = lst
>>
That's dense, but let's start picking it apart. First, we see that [app_nil] is
a function that takes in two arguments: [A] and [lst]. Then it immediately
applies another function named [list_ind]. That function was defined for us in
the standard library, and it's what "implements" induction on lists. Let's
check it out: *)
Check list_ind.
(**
Coq responds:
<<
list_ind
: forall (A : Type) (P : list A -> Prop),
P nil ->
(forall (a : A) (l : list A), P l -> P (a :: l)) ->
forall l : list A, P l
>>
We call [list_ind] the _induction principle_ for lists. It is a proposition
that says, intuitively, that induction is a valid reasoning principle for lists.
In more detail, it takes these arguments:
- [A], which is the type of the list elements.
- [P], which is the property to be proved by induction. For example,
the property being proved in [app_nil] is
[fun (lst: list A) => lst ++ nil = lst].
- [P nil], which is a proof that [P] holds of the empty list. In other words,
a proof of the base case.
- A final argument of type [forall (a : A) (l : list A), P l -> P (a :: l)].
This is the proof of the inductive case. It takes an argument [a],
which is the head of a list, [l], which is the tail of a list, and
a proof [P l] that [P] holds of [l]. So, [P l] is the inductive
hypothesis. The output is of type [P (a :: l)], which is a proof
that [P] holds of [a::l].
Finally, [list_ind] returns a value of type [forall l : list A, P l],
which is a proof that [P] holds of all lists.
Ok, so that's the type of [list_ind]: a proposition asserting that
if you have a proof of the base case, and a proof of the inductive
case, you can assemble those to prove that a property holds of a list.
Next, what's the _value_ of [list_ind]? In other words, what's the
proof that [list_ind] itself is actually a true proposition?
*)
Print list_ind.
(**
Coq responds:
<<
list_ind =
fun (A : Type) (P : list A -> Prop) => list_rect P
...
>>
So [list_ind] takes in [A] and [P] and just immediately applies another
function, [list_rect], to [P]. (The name [rect] is not especially helpful to
understand, but alludes to [rec]ursion over a [t]ype.) Before we look at
[list_rect]'s actual implementation, let's look at our own equivalent
implementation that is easier to read: *)
Fixpoint my_list_rect
(A : Type)
(P : list A -> Prop)
(baseCase : P nil)
(inductiveCase : forall (h : A) (t : list A), P t -> P (h::t))
(lst : list A)
: P lst
:=
match lst with
| nil => baseCase
| h::t => inductiveCase h t
(my_list_rect A P baseCase inductiveCase t)
end.
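(** To see concretely that [my_list_rect] implements induction (a sketch,
not part of the original notes), we can reprove [app_nil] by applying it
directly, supplying the base case and the inductive case by hand: *)
Theorem app_nil' : forall (A:Type) (lst : list A),
lst ++ nil = lst.
Proof.
intros A lst.
apply (my_list_rect A (fun l : list A => l ++ nil = l)).
- reflexivity.
- intros h t IH. simpl. rewrite -> IH. reflexivity.
Qed.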
(** The arguments to [my_list_rect] are the same as the arguments to [list_ind]:
an element type, a property to be proved, a proof of the base case, and a proof
of the inductive case. Then [my_list_rect] takes an argument [lst], which is
the list for which we want to prove that [P] holds. Finally, [my_list_rect]
returns that proof specifically for [lst].
The body of [my_list_rect] constructs the proof that [P] holds of [lst].
It does so by matching against [lst]:
- If [lst] is empty, then [my_list_rect] returns the proof of the base case.
- If [lst] is [h::t], then [my_list_rect] returns the proof of the inductive
case. To construct that proof, it applies [inductiveCase] to [h] and [t] as
the head and tail. But [inductiveCase] also requires a final argument, which
is the proof that [P] holds of [t]. To construct that proof, [my_list_rect]
calls itself recursively on [t].
That recursive call is exactly why we said that inductive proofs are recursive
programs. The inductive proof needs evidence that the inductive hypothesis
holds of the smaller list, and recursing on that smaller list produces the
evidence.
It's not immediately obvious, but [my_list_rect] is almost just [fold_right].
Here's how we could implement [fold_right] in Coq, with a slightly different
argument order than the same function in OCaml: *)
Fixpoint my_fold_right
{A B : Type}
(init : B)
(f : A -> B -> B)
(lst : list A)
:=
match lst with
| nil => init
| h::t => f h (my_fold_right init f t)
end.
(** Now compare the body of [my_fold_right] with [my_list_rect]:
<<
my_fold_right's body:
match lst with
| nil => init
| h::t => f h (my_fold_right init f t)
end.
my_list_rect's body:
match lst with
| nil => baseCase
| h::t => inductiveCase h t (my_list_rect A P baseCase inductiveCase t)
end.
>>
Both match against [lst]. If [lst] is empty, both return an initial/base-case
value. If [lst] is non-empty, both recurse on the tail, then pass the result of
the recursive call to a function ([f] or [inductiveCase]) that combines that
result with the head. The only essential difference is that [f] does not take
[t] directly as an input, whereas [inductiveCase] does.
So there you have it: induction over a list is really just folding over the
list, eventually reaching the empty list and returning the proof of the base
case for it, then working its way back up the call stack, assembling an
ever-larger proof for each element of the list. #<b>#An inductive proof is a
recursive program.#</b>#
Going back to the actual definition of [list_rect], here it is: *)
Print list_rect.
(** Coq responds:
<<
list_rect =
fun (A : Type) (P : list A -> Type) (f : P nil)
(f0 : forall (a : A) (l : list A), P l -> P (a :: l)) =>
fix F (l : list A) : P l :=
match l as l0 return (P l0) with
| nil => f
| y :: l0 => f0 y l0 (F l0)
end
: forall (A : Type) (P : list A -> Type),
P nil ->
(forall (a : A) (l : list A), P l -> P (a :: l)) ->
forall l : list A, P l
>>
That uses different syntax, but it ends up defining the same function as
[my_list_rect].
Whenever you define an inductive type, Coq automatically generates the induction
principle and recursive function that implements it for you. For example, we
could define our own lists: *)
Inductive mylist (A:Type) : Type :=
| mynil : mylist A
| mycons : A -> mylist A -> mylist A.
(** Coq automatically generates [mylist_rect] and [mylist_ind] for us: *)
Print mylist_ind.
Print mylist_rect.
(**
** Summary
Coq excels as a proof assistant when it comes to proof by induction. Whenever
we define an inductive type, Coq generates an induction principle for us
automatically. That principle is really a recursive program that knows how to
assemble evidence for a proposition, given the constructors of the inductive
type. The [induction] tactic manages the proof for us, automatically figuring
out what the base case and the inductive case are, and automatically generating the
inductive hypothesis.
** Terms and concepts
- append
- base case
- field
- [fix]
- [Fixpoint]
- induction
- induction principle
- inductive case
- inductive hypothesis
- lemma
- Peano natural numbers
- [Prop] vs [bool]
- ring
- searching for library theorems
- semi-ring
- syntactically smaller restriction on recursive calls
** Tactics
- [field]
- [induction]
- [rewrite]
- [ring]
- tacticals: [try]
** Further reading
- _Software Foundations, Volume 1: Logical Foundations_.
#<a href="https://softwarefoundations.cis.upenn.edu/lf-current/toc.html">
Chapter 2 through 4: Induction, Lists, Poly</a>#.
- _Interactive Theorem Proving and Program Development_.
Chapters 6 through 10. Available
#<a href="https://newcatalog.library.cornell.edu/catalog/10131206">
online from the Cornell library</a>#.
*)
Require Import List ssreflect ssrbool.
From mathcomp Require Import eqtype ssrnat.
(* Let's see how one would define insertion sort in OCaml:
let rec insert n l =
match l with
| [] -> [n]
| x::xs -> if n <= x then n::x::xs else x::(insert n xs)
let rec sort l =
match l with
| [] -> []
| x::xs -> insert x (sort xs)
As you saw during the lesson, the [insert] function is just some
particular proof of [nat -> list nat -> list nat]: it is the proof
that builds a new list by inserting its first argument into its
second one. Like any proof, it can thus be defined in Coq using tactics.
*)
Definition insert (n:nat) (l:list nat) : list nat.
Proof.
(* The OCaml program is a recursive definition over l, so we do
exactly the same with the elim tactic. The induction hypothesis ih
stands for insert n xs: it is, in effect, the recursive call. *)
elim:l => [|x xs ih].
(* We start with the base case: we return the singleton list
containing only n. *)
apply:(n::nil).
(* We are now in the recursive case. We need to distinguish two cases,
depending on whether n <= x or not. *)
case h:(n <= x).
(* We are in the branch where n <= x. In this case, we can immediately
return the list n::x::xs *)
apply:(n::x::xs).
(* We now are in the branch where n > x. In this case, we want to
return x::(insert n xs). As we said, the recursive call is in fact
ih. *)
apply:(x::ih).
(* Our function is now defined using tactics. [Defined] checks that it
actually has the right type and defines it. *)
Defined.
(* We can check on some examples that it is indeed the insert function
we had in mind. Coq can actually compute with it! *)
Compute (insert 5 (1::2::3::4::6::7::8::9::nil)).
Compute (insert 5 (1::2::3::4::nil)).
(* Now do the same for the sort function, and test it on some examples.
Hint: it takes only three tactics to define it! *)
Definition sort (l:list nat) : list nat.
Proof.
(* TODO *)
Defined.
Compute (sort (7::9::5::3::0::4::2::1::8::6::nil)).
(* Add your own examples... *)
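(* A possible solution for [sort] (my sketch, not the official handout
solution), written under a different name so the exercise above stays
intact. Following the OCaml program: recurse on the list and insert the
head into the recursively sorted tail. *)
Definition sort_solution (l:list nat) : list nat.
Proof.
(* Recursive definition over l, exactly as for [insert]. *)
elim:l => [|x xs ih].
(* Base case: the empty list. *)
apply:(@nil nat).
(* Recursive case: ih is the recursive call [sort xs]. *)
apply:(insert x ih).
Defined.
Compute (sort_solution (3::1::2::nil)).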
(* Ok, so far we have understood that we can define functions using
tactics. But our sorting function is no better than the OCaml version:
we did not prove that it actually returns a sorted list!
With tactics, we will be able to build the sorting function AND prove
its correctness at the same time... *)
(* First, we need to define for a list of integers what it means to be
sorted: each element is smaller than all the following elements *)
Fixpoint smaller n l : Prop :=
match l with
| nil => true
| x::xs => n <= x /\ smaller n xs
end.
Fixpoint sorted l : Prop :=
match l with
| nil => true
| x::xs => smaller x xs /\ sorted xs
end.
(* We give a very simple fact about smaller: if a is smaller than all
the elements of a list l, and n <= a, then n is also smaller than all
the elements of l. *)
Lemma smaller_trans n a l : smaller a l -> n <= a -> smaller n l.
Proof.
elim:l => /=.
done.
move => b l ih [h1 h2] h3.
split.
move/(_ a n b):leq_trans => h4.
by apply:h4.
by apply:ih.
Defined.
(* To save time, we also give you some arithmetic lemmas that
you will need:
o leq_false_lt m n : (m <= n) = false -> (n < m)
o leq_false_leq n x : (n <= x) = false -> x <= n
o eq_refl (n:nat) : n == n
o gtn_eqF m n : m < n -> n == m = false
o leq_false_neq n a : (n <= a) = false -> n == a = false
Do not look at the proofs, just look at the statements above. *)
Lemma leq_trans n m p : m <= n -> n <= p -> m <= p.
Proof.
elim: n m p => [|i IHn] [|m] [|p]; try reflexivity.
move => h _. apply:h.
compute. discriminate 1.
move => _. compute => h. apply:h.
by move/(_ m p):IHn.
Defined.
Lemma ltnW m n : m < n -> m <= n.
Proof. by apply:leq_trans. Defined.
Lemma leq_false_lt m n : (m <= n) = false -> (n < m).
Proof.
elim: m n => [|m IHm] [|n].
compute. discriminate 1.
compute. discriminate 1.
reflexivity.
by move/(_ n):IHm.
Defined.
Lemma leq_false_leq n x : (n <= x) = false -> x <= n.
Proof.
move => h.
apply:ltnW.
by apply:leq_false_lt.
Defined.
Lemma eq_refl (n:nat) : n == n.
Proof.
elim:n => [|n ih].
reflexivity.
exact:ih.
Defined.
Lemma leqNgt m n : (m <= n) = ~~ (n < m).
Proof. by elim: m n => [|m IHm] [|n] //; exact: IHm n. Defined.
Lemma eqn_leq m n : (m == n) = (m <= n <= m).
Proof. elim: m n => [|m IHm] [|n] //; exact: IHm n. Defined.
Lemma gtn_eqF m n : m < n -> n == m = false.
Proof. by rewrite eqn_leq (leqNgt n) => ->. Defined.
Lemma leq_false_neq n a : (n <= a) = false -> n == a = false.
Proof.
move => h.
apply:gtn_eqF.
by apply: leq_false_lt.
Defined.
(* We are now ready to build the insert function and prove its
correctness at the same time. To do so, we use the Coq notation {l | P l},
which designates a list l that satisfies the predicate P: P is the
specification of the function.
The specification of the insert function is the following:
o if a is smaller than all the elements of l and a <= n, then a is
smaller than all the elements of (insert n l)
o if l is sorted then (insert n l) is sorted *)
Definition insert_spec n l :
{l' |
(forall a, smaller a l -> a <= n -> smaller a l') /\
(sorted l -> sorted l')}.
Proof.
(* The proof is a refinement of the proof of [insert], in which we
also build the proof of the specification of the function. So like
for defining [insert], we do an induction. *)
elim:l => [|x xs ih].
(* We present the base case. The answer is the singleton list
containing only n, together with a proof that this list satisfies
the specification. We first give the list, using the exists tactic.
*)
exists (n::nil).
(* Let's now prove the specification. It is a conjunction. *)
split.
(* On the first branch, we introduce a. *)
move => a.
(* We can simplify this goal: we know what [smaller a nil] and
[smaller a (n :: nil)] mean. *)
simpl.
(* In fact, this goal is trivially proved with done. *)
done.
(* Let's simplify the second branch: we know what [sorted nil] and
[sorted (n :: nil)] mean. *)
simpl.
(* In fact, this goal is trivially proved with done. *)
done.
(* Now do the recursive case. Do not forget to make a case analysis on
whether n <= x or not. You can destruct a proof of {l | P l}
using the [case] tactic as usual. *)
(* TODO *)
Defined.
(* Let's do the same for the sort function. *)
Definition sort_spec (l:list nat) : {l' | sorted l'}.
Proof.
(* TODO *)
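(* One possible script (my sketch, unverified against the course solution):
elim:l => [|x xs [l' ihl']].
exists nil. done.
case:(insert_spec x l') => l'' [_ h].
exists l''. by apply:h.
In the recursive case, the tail is sorted into l' (with ihl' : sorted l'),
and [insert_spec] inserts x while preserving sortedness. *)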
Defined.
(* We now have defined functions that are correct (with respect to our
specification) by construction. Finally, Coq offers an extraction
mechanism, which automatically exports them to OCaml, so we are sure
that the OCaml program is correct by construction. *)
Require Extraction.
Extraction Language Ocaml.
Set Extraction AccessOpaque.
Extraction "insert_sort.ml" sort_spec.
(* Have a look at the file "insert_sort.mli". You will observe that the
function [sort_spec] has type [nat list -> nat list]: the extraction
mechanism "forgets" the proof of the correctness of this function,
since it has no significance in OCaml. You can test this extracted
function using the file "test_sort.ml". *)
(* TO GO FURTHER *)
(* I am sure you all have noticed that our specification is incomplete:
the sort function must not only return a sorted list, but this list
must be a permutation of the initial list! We now do the same job
with a complete specification. *)
(* We first define our notion of permutation: (x::xs) is a permutation
of l2 if and only if remove x l2 = Some l2' and xs is a permutation
of l2'. *)
Fixpoint remove a (l:list nat) :=
match l with
| nil => None
| x::xs =>
if a == x then
Some xs
else
match remove a xs with
| Some xs' => Some (x::xs')
| None => None
end
end.
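(* Quick sanity checks of [remove] (my examples): *)
Compute (remove 2 (1::2::3::nil)).  (* = Some (1 :: 3 :: nil) *)
Compute (remove 5 (1::2::3::nil)).  (* = None *)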
Fixpoint permutation l1 l2 : Prop :=
match l1, l2 with
| nil, nil => true
| nil, _ => False
| x::xs, _ =>
match remove x l2 with
| Some l2' => permutation xs l2'
| None => False
end
end.
(* First prove that this relation is reflexive. *)
Lemma permutation_refl l: permutation l l.
Proof.
(* TODO *)
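(* One possible script (my sketch): induct on l; in the cons case the test
[x == x] rewrites to true using the [eq_refl] lemma proved above:
elim:l => [|x xs ih] /=.
done.
by rewrite eq_refl.
*)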
Defined.
(* Now define insertion and sort with the complete specification. *)
Definition insert_complete n l :
{l' |
(forall a, smaller a l -> a <= n -> smaller a l') /\
(remove n l' = Some l) /\
(sorted l -> sorted l' /\ permutation (n::l) l')}.
Proof.
(* TODO *)
Defined.
Definition sort_complete (l:list nat) : {l' | sorted l' /\ permutation l l'}.
Proof.
(* TODO *)
Defined.
Extraction Language Ocaml.
Set Extraction AccessOpaque.
Extraction "insert_sort.ml" sort_complete.

Agda

Installed on Windows.

Used cabal new-install.

Had to do it from Git Bash; compilation failed from PowerShell.

Symlink creation failed, had to locate the Agda packages in ~/AppData/Roaming/cabal/store/...

emacs agda-mode automatic installation failed, but adding the lines manually to .emacs did the trick.

Standard library has to be downloaded separately from here. Then I followed these instructions.

Coq

proof-general

As of 20/09/2018, use the "async" branch of proof-general.

I installed by cloning the git repo, not through MELPA.

HISTORY OF INTERACTIVE THEOREM PROVING

Data Structures and Functional Programming

Proof general documentation

Without using electric terminator, you can trigger processing the text up to the current position of the point with the key C-c C-RET, or just up to the next command with C-c C-n. We show the rest of the example in Isabelle with semi-colons, but these will not appear in the final text.

A proof script is a sequence of commands which constructs definitions, declarations, theories, and proofs in a proof assistant. Proof General is designed to work with text-based interactive proof assistants, where the mode of working is usually a dialogue between the human and the proof assistant.

Data Types as Inductively Defined Mathematical Collections

CIC, Gallina, Ltac, Vernacular

Coq is actually based on an extension of CIC called Gallina. The text after the
:= and before the period in the last code example is a term of Gallina. Gallina
includes several useful features that must be considered as extensions to CIC.
The important metatheorems about CIC have not been extended to the full breadth
of the features that go beyond the formalized language, but most Coq users do
not seem to lose much sleep over this omission.  Next, there is Ltac, Coq's
domain-specific language for writing proofs and decision procedures. We will
see some basic examples of Ltac later in this chapter, and much of this book is
devoted to more involved Ltac examples.  Finally, commands like Inductive and
Definition are part of the Vernacular, which includes all sorts of useful
queries and requests to the Coq system. Every Coq source file is a series of
vernacular commands, where many command forms take arguments that are Gallina
or Ltac programs. (Actually, Coq source files are more like trees of vernacular
commands, thanks to various nested scoping constructs.)

Vernacular grammar

more on induction principles

mutual induction more

tactics index

Coq standard library

Coq.Init.Prelude

Why does Coq have Prop?

Prop is very useful for program extraction because it allows us to delete parts of code that are useless.

But there are genuine design decisions one has to make when using Prop.

it is a design question which things to put in Prop. You need to know what the user wants, and he tells you what he wants by using Prop.

Coq : Back To Basics

In Coq, we have two sorts: Set and Prop. A type of sort Set is a specification, and a proof of a specification is a program. A type of sort Prop is a proposition.

Why do we need a sort Prop? With Set, we already have the proofs-as-programs and types-as-propositions correspondences.

From an algorithmic point of view, tactics can be dangerous since you have no control over the algorithm which is generated. This kind of type-driven development is better suited for proofs.

the Omega library used for inequalities

Prop_or_Set

If your type could have two or more distinguishable objects, put it in Set, otherwise put it in Prop.

Coq: Prop versus Set in Type(n)

What is different between Set and Type in Coq? [closed]

Library Universes

Inside Coq’s Type System 1

My unusual hobby

COQ LTAC 101

Company-coq

A collection of extensions for Proof General's Coq mode.

Intermediate Coq: "Programming In The Large" Roadmap

Between Software Foundations and Adam Chlipala's books, there is an enormous amount of excellent introductory material for "programming in the small" in Coq. On the other hand, whenever I try to write a Coq project of non-trivial size, I run into all sorts of issues. They include:

When should I use modules vs. typeclasses?
Is using setoids everywhere instead of eq overkill?
When should I use reflection?
When should I use refine?
When should I use Program (Russell)?
What options should I set? Implicit Arguments seems to be consensus, are there others?
What set of libraries should I default to?
...and, while I'm at it, here are some "coding in the small" things I can't figure out:

How do I make coinduction non-painful?
How do I use hints properly?
Any insight is greatly appreciated.

How do I make coinduction non-painful?

Paco: A Coq Library for Parameterized Coinduction

Have you tried the Paco approach? You don't have to go full Mendler style (co)-recursion, actually using open co-inductive types and the corresponding co-induction principle are already fairly useful.

Also, don't use co-inductive types defined as inductive ones. They're fundamentally broken, you should rely on primitive records when doing coinduction (up to the fact they're currently still somewhat buggy, but this is not a foundational issue).

Approximating Inductive Types in Coq

Is Agda without K less powerful?

The situation with Martin-Löf type theory and Axiom K is in some ways analogous to Euclidean geometry and the parallel postulate. With the parallel postulate more theorems can be proven, but those are only about Euclidean spaces. Without the parallel postulate provable theorems are true of non-Euclidean spaces too, and one has the freedom to add explicitly non-Euclidean axioms.

Axiom K seems intuitive, since we defined Agda's propositional equality with a single refl constructor. Why even bother with the mysterious non-refl equality proofs whose existence we can't disprove without K?

Can I extract a Coq proof as a Haskell function?

Thanks to Prof. Pierce's summer 2012 video 4.1 as Dan Feltey suggested, we see that the key is that the theorem to be extracted must provide a member of Type rather than the usual kind of propositions, which is Prop.

The Logic of Coq

Proof irrelevance / Axiom K

What is Axiom K?

When investigating the identity type in his thesis, Thomas Streicher introduced a new eliminator for the identity type which he called K (as the next letter in the alphabet after J, the standard eliminator for the identity type). It states that any proof p of an equality of the form x = x is itself equal to the trivial proof refl. From this, it follows that any two proofs p and q of any equation x = y are equal, hence the alternative name "uniqueness of identity proofs". What's remarkable is that he proved that this does not follow from the standard rules of type theory. I recommend reading Dan Licata's article on the homotopy type theory blog if you want to understand why not, he explains it much better than I could.

Conor McBride showed in his thesis ("Dependently typed functional programs and their proofs (2000)") that K is the only thing that dependent pattern matching really adds to dependent type theory.

Just Kidding: Understanding Identity Elimination in Homotopy Type Theory

Several current proof assistants, such as Agda and Epigram, provide uniqueness of identity proofs (UIP): any two proofs of the same propositional equality are themselves propositionally equal.

On the other hand, Coq does not provide uniqueness of identity proofs (but nor does it allow you to define higher-dimensional types that contract it, except by adding axioms).

Here’s where things get confusing: UIP is not a consequence of J, as shown by the first higher-dimensional interpretation of type theory, Hofmann and Streicher’s groupoid interpretation. This gives a model of type theory that validates J, but in which UIP does not hold. UIP can be added as an extra axiom, such as Streicher’s K:
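The rule the quote introduces, stated in Coq syntax (my transcription):

Axiom K : forall (A : Type) (x : A) (P : x = x -> Type),
  P eq_refl -> forall p : x = x, P p.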

Axiom K at ncatlab

Propositionally extensional type theory has some of the attributes of intensional type theory, and many type theorists use “extensional type theory” to refer only to the definitional version.

In type theory, the axiom K is an axiom that when added to intensional type theory turns it into extensional type theory — or more precisely, what is called here “propositionally extensional type theory”.

We can add as an axiom the “uniqueness of identity proofs” (axiom UIP) property that any two inhabitants of the same identity type Id_A(x, y) are themselves equal (in the corresponding higher identity type).

An alternative to adding this axiom is to allow the kind of dependent pattern matching present in Agda and Epigram, which allows you to define K.

The Coq FAQ talking with the rooster

Learning set theory through Coq (part 1)

GADTs vs Inductive Datatypes

Notice that it looks quite similar to the Haskell definition, with the type variables made explicit. An empty list in explicitly typed Haskell is VNil Int, whereas in Coq, it is vnil int. However, unlike Haskell, where there is a term-type-kind hierarchy, Coq has full dependent types with an infinite hierarchy of universes. One consequence of this is that we will frequently see arrows mapping members of one universe to members of a different universe.
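For reference, the kind of Coq definition the comparison is about, sketched here (my reconstruction, not the linked article's exact code):

Inductive vector (A : Type) : nat -> Type :=
| vnil : vector A 0
| vcons : forall n, A -> vector A n -> vector A (S n).

Check (vnil nat).  (* the type parameter is passed explicitly, as in [vnil int] *)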

Require, Import, Require Import

Vernacular Commands Index Variable

What is the difference between “definition” and “inductive” in Coq?

Fixpoint is similar to Definition, but allows a recursive definition (like let rec in ML). It's in fact syntactic sugar for Definition plus the explicit fixpoint combinator fix, but definitions made using Fixpoint are easier to read and write.
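For instance (my illustration), these two definitions produce the same function:

Fixpoint double (n : nat) : nat :=
  match n with 0 => 0 | S n' => S (S (double n')) end.

Definition double' : nat -> nat :=
  fix f (n : nat) : nat :=
    match n with 0 => 0 | S n' => S (S (f n')) end.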

In Coq, what does it mean to have an inductive type where the right-hand side of “:” is Prop?

Every time you prove something, Coq builds the proof object and verifies that it's well-formed. It's rare to write down constructions in Prop, you'd usually let a tactic do it for you, but the objects always exist under the hood. (End a proof with Defined. instead of Qed. and run Print mytheorem to see the proof object for mytheorem. It can get big, but it's sometimes instructive to look at that for small proofs.)
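To see this concretely (my example, not from the thread):

Lemma and_swap (P Q : Prop) : P /\ Q -> Q /\ P.
Proof. intros [p q]. split. exact q. exact p. Defined.

Print and_swap.
(* prints, roughly:
   and_swap = fun (P Q : Prop) (H : P /\ Q) =>
                match H with conj p q => conj q p end *)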

How to define set in coq without defining set as a list of elements

There are basically four possible choices to be made when defining sets in Coq, depending on your constraints on the base type of the set and your computation needs.

How to define finite set of N elements in Coq?

Does ssreflect assume excluded middle?

In standard Coq, le is encoded as an inductive type, whereas in ssreflect it's a computational function leq: nat -> nat -> bool.

Using the leq function for proofs is more "efficient" for the following reason: imagine you want to prove that 102 < 203. Using the standard Coq definition, you would have to build a large proof term, explicitly encoding every step of the proof.

ssreflect is only concerned with decidable types: types where equality can be decided, ... This means that for any two elements of the type T, you can get a constructive (in Set, not in Prop like the excluded middle) proof that they are either equal or different.
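Concretely (my example), the boolean comparison makes such facts provable by pure computation:

Require Import ssreflect ssrbool.
From mathcomp Require Import ssrnat.

Lemma leq_example : 102 < 203.
Proof. by []. Qed.  (* the goal unfolds to a boolean that computes to true *)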

Ordered seq in Coq/SSreflect

What is the difference between “Qed” and “Defined”?

There is some concept of "opacity" which Qed enforces but Defined does not.
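For instance (my example):

Definition two_opaque : nat. Proof. exact 2. Qed.
Definition two_transparent : nat. Proof. exact 2. Defined.

Eval compute in two_transparent.  (* = 2 *)
Eval compute in two_opaque.       (* stays stuck at [two_opaque]: it is opaque *)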

How is Coq's parser implemented?

this notation system is very powerful and it was probably one of the reasons of Coq's success. In practice, this is a source of much complication in the source code.

v8.7.1

New tactics: Variants of tactics supporting existential variables eassert,
eenough, etc. by Hugo Herbelin; Tactics extensionality in H and
inversion_sigma by Jason Gross; specialize with accepting partial bindings
by Pierre Courtieu.

Cumulative Polymorphic Inductive Types, allowing
cumulativity of universes to go through applied inductive types, by Amin
Timany and Matthieu Sozeau.

The SSReflect plugin by Georges Gonthier,
Assia Mahboubi and Enrico Tassi was integrated (with its documentation in
the reference manual) by Maxime Dénès, Assia Mahboubi and Enrico Tassi.

Programs as Proofs

Type classes in Coq; the Class vernacular

What's the difference between ADTs, GADTs, and inductive types?

Haskell for Coq programmers

A Tutorial on [Co-]Inductive Types in Coq

COQ documentation

Why haven't newer dependently typed languages adopted SSReflect's approach?

Firstly, where possible, predicates are expressed as boolean-returning functions rather than inductively defined datatypes.

Secondly, datatypes with invariants are defined as dependent records containing a simple datatype plus a proof of the invariant.

A common anti-pattern in dependently-typed languages and Coq in particular is to encode such algebraic properties into the definitions of the datatypes and functions themselves (a canonical example of such an approach are length-indexed lists). While this approach looks appealing, as it demonstrates the power of dependent types to capture certain properties of datatypes and functions on them, it is inherently non-scalable, as there will always be another property of interest, which has not been foreseen by the designer of the datatype/function, so it will have to be encoded as an external fact anyway. This is why we advocate the approach in which datatypes and functions are defined as close to the way they would be defined by a programmer as possible, and all necessary properties of them are proved separately.
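Both idioms, sketched (my illustration; [is_sorted] is a hypothetical boolean predicate, not ssreflect's actual [sorted]):

Require Import List Arith.
Import ListNotations.

(* (1) the predicate is a boolean-returning function *)
Fixpoint is_sorted (l : list nat) : bool :=
  match l with
  | [] | [_] => true
  | x :: ((y :: _) as tl) => (x <=? y) && is_sorted tl
  end.

(* (2) the invariant is a record field next to the plain datatype *)
Record sorted_list := SortedList {
  seq_of : list nat;
  sortedP : is_sorted seq_of = true
}.

(* on concrete lists the proof obligation is discharged by computation *)
Definition example_sorted : sorted_list := SortedList (1 :: 2 :: 3 :: nil) eq_refl.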

How do we overcome the compile time and runtime gap when programming in a Dependently Typed Language?

we can still reason symbolically on the unknowns n,m and argue that vn ++ vm has that type, involving n+m, even if we do not yet know what n+m actually is.
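Continuing the [vector] sketch from the GADTs note above (my code): the index of the result of append is the open term n + m, and the definition typechecks by exactly this kind of symbolic reasoning:

Fixpoint vapp (A : Type) (n m : nat) (v : vector A n) (w : vector A m)
  : vector A (n + m) :=
  match v in vector _ n0 return vector A (n0 + m) with
  | vnil _ => w                             (* 0 + m computes to m *)
  | vcons _ n' x v' =>                      (* S n' + m computes to S (n' + m) *)
      vcons A (n' + m) x (vapp A n' m v' w)
  end.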

Keeping information when using induction?

There is a general trick, which is to selectively rewrite the structured parameter to a variable, keeping the equality in the environment.

What does V stand for in the Coq file extension?

Coq in a hurry

On representations of permutations

A Short Course on Interactive Proofs in Coq/Ssreflect.

Finding bugs in Haskell code by proving it

Formal Reasoning in Coq — a Beginner's Guide

Naive Coq

Tricks you wish the Coq manual told you

POPL 2016: Programs and Proofs in the Coq Proof Assistant

Some questions about binary number representations (ZArith)

Bishop's work on type theory reddit

Papers

Generating Good Generators for Inductive Relations

Total Haskell is reasonable Coq

One Monad to Prove Them All

We follow her on her journey as she learns about concepts like free monads and containers, as well as the basics and restrictions of proof assistants like Coq. These concepts are well-known individually, but their interplay gives rise to a solution for Mona's problem that has not been presented before.

Verifying Monoidal String Matching in Liquid Haskell and Coq

a ssreflect tutorial

Reification by Parametricity Fast Setup for Proof by Reflection, in Two Lines of Ltac

Functional Pearl: Theorem Proving for All

Mtac2: Typed Tactics for Backward Reasoning in Coq

Demostración asistida por ordenador (computer-assisted proof)

POPL 2016

Towards Mac Lane’s Comparison Theorem for the (co)Kleisli Construction in Coq

certified algorithms for program slicing

C-path monad

Slides

The Mathematical Components library: principles and design choices

INF551 - Computational Logic: Artificial Intelligence in Mathematical Reasoning. The theory of sets.

Videos

How to use Coq with Proof General

How to use CoqIDE

Leibniz Equality implies Coq's Equality

A first proof with Coq (Frobenius rule)

At this point in the video:

we submit this to Coq (C-c C-n)

Introductory Proof: Commutativity of Addition in Coq

Introduction to Coq by Kimball Germane

Software foundations in Coq 0.1 - Benjamin Pierce full list

quote from here

the tactics language is a programming language in which you are imperatively manipulating a lambda term

you propose a theorem and then go into a different mode

Introduction to the Coq Proof Assistant - Andrew Appel

The dual Frobenius rule, part 2

At this point in the video

implication is just a special case of forall.

Maybe that's why intro works for both?

Coming Soon: Machine-Checked Mathematical Proofs in Everyday Software and Hardware Development

Functional Geekery Episode 101 – Adam Chlipala

awesome Coq

Code

Ensemble facts

Insertion sort

Mathematical Components

hs-to-coq

state as comonad reddit

parity nat list

Cours du Coq (Coq course).

Discrete math in Coq

advent of coq 2018

safety in programming languages. Size-Change Termination as a Contract. termination tweet

Proving tree algorithms for succinct data structures

jscoq

An Introduction to Programming and Proving with Dependent Types in Coq

sqrt(2) is irrational

Survey of category theory in Coq

Equations Reloaded (also: ChunkableMonoid)

Learn Coq in Y minutes

Programming with dependent types in Coq: inductive families and dependent pattern-matching.

the nature of tactics
