Coq

A few months ago I started messing around with The Coq Proof Assistant to figure out what exactly it was, and proceeded to go through the tutorial.

Coq provides a nice interface for doing proofs in an interactive session. It was a lot of fun but didn't seem particularly useful for my research.

As I've been playing with the notion of Total Functional Programming, I ended up reading (and actually doing the homework) from a book entitled "Type Theory and Functional Programming". This book is excellent, by the way! I highly recommend it to anyone interested in dependent types, proof theory, or type theory. It has a very lucid style and makes a lot of complex notions in type theory easy to understand. I even found an error in one of the homework examples (now listed in the errata), which is always fun.

After working through the first 5 chapters I started looking around for a system that would let me apply some of the principles that I had learned.

As it turns out, Coq is really a dependently typed functional programming language masquerading as a proof assistant! I've spent quite a lot of time over the past few weeks writing total functional programs in Coq and proving properties about them. I've done a bunch of simple things so far, including proofs of various correctness properties of insertion sort. I started with merge sort, but stalled when it got too complicated. I'm starting to get a feel for using Coq, however, and large proofs are getting much easier.
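
To give a flavour, here's a rough sketch (in current Coq syntax, with names of my own choosing, not the actual development) of insertion sort written as structurally recursive functions, together with one of the simpler properties you might prove about it:

Require Import List Arith.
Import ListNotations.

(* Insert x into a list, keeping it ordered if it already was. *)
Fixpoint insert (x : nat) (l : list nat) : list nat :=
  match l with
  | [] => [x]
  | y :: ys => if Nat.leb x y then x :: y :: ys else y :: insert x ys
  end.

(* Insertion sort: structurally recursive on the input list. *)
Fixpoint isort (l : list nat) : list nat :=
  match l with
  | [] => []
  | x :: xs => insert x (isort xs)
  end.

(* A simple property: sorting does not change the length. *)
Lemma insert_length : forall x l, length (insert x l) = S (length l).
Proof.
  intros x l; induction l as [| y ys IH]; simpl.
  - reflexivity.
  - destruct (Nat.leb x y); simpl; [reflexivity | rewrite IH; reflexivity].
Qed.

Lemma isort_length : forall l, length (isort l) = length l.
Proof.
  induction l as [| x xs IH]; simpl.
  - reflexivity.
  - rewrite insert_length, IH; reflexivity.
Qed.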

In my research I've been working on an implementation of the distillation algorithm (a form of supercompilation) for logic programming. As it turns out, the distillation algorithm as described by Geoff Hamilton could be a real boon to total functional programming in Coq.

In Coq all functions and other values are terms that can serve as witnesses of a proof. Inhabitation of a type is proved exactly by constructing a term of that type. The "tactic" system in Coq is basically a library that helps you build appropriate terms. Alternatively, you can supply the proofs directly by writing terms in the underlying functional programming language; those terms are the witnesses of their types.
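
As a tiny illustration (my own toy example), here is the same proposition proved once with tactics and once by writing the witness term by hand:

(* Built interactively with tactics. *)
Lemma and_swap_tac : forall A B : Prop, A /\ B -> B /\ A.
Proof.
  intros A B [Ha Hb]. split.
  - exact Hb.
  - exact Ha.
Qed.

(* The same type inhabited by giving the term directly. *)
Definition and_swap_term : forall A B : Prop, A /\ B -> B /\ A :=
  fun A B H => match H with conj a b => conj b a end.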

In order to avoid inconsistency, functions are not allowed to use general recursion, errors, or exceptions, as any of these would immediately lead to "proofs" of propositions that are not actually inhabited. A simple example (in Haskell) would be a function with the following structure.

loop :: Int -> a
loop x = loop x

Clearly "Int -> a" does not honestly describe this function: it never terminates, so its result is _|_ no matter what you pass it. Types in languages like ML and Haskell aren't quite what they say; they implicitly include the possibility of non-termination or errors. This (arguably) works out alright if you just expect to run your program, but if you are trying to prove useful properties with your type system it turns out to be pretty worthless.

Syntactic restrictions are therefore necessary to keep non-terminating functions out of Coq. The method chosen is to accept only structurally recursive functions, which can be checked with a simple syntactic criterion. In fact, Coq requires you to specify *which* argument of a function is the structurally decreasing one. This works out surprisingly well but can occasionally be a real pain (try writing a unification algorithm using nothing but structural recursion).
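
For example (an illustrative definition of my own, not from the post), addition can be declared structurally recursive on its first argument; the {struct ...} annotation names the decreasing argument explicitly:

(* Accepted: every recursive call peels a constructor off n. *)
Fixpoint plus' (n m : nat) {struct n} : nat :=
  match n with
  | 0 => m
  | S p => S (plus' p m)
  end.

(* Rejected by the termination checker, because the first argument of
   the recursive call (m) is not a structural subterm of n:

   Fixpoint bad (n m : nat) {struct n} : nat :=
     match n with
     | 0 => m
     | S p => bad m p
     end.
*)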

Coq 8.1 includes a way to define functions by giving a measure that provably decreases on each recursive call, or by using a well-founded relation (in conjunction with a proof), to show that the function terminates.
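
Roughly, the measure-based style looks like this (a sketch using the Function command from the Recdef library, written in current syntax; the lemma names and the lia tactic come from later standard libraries, so take it as an approximation of what 8.1 offers):

Require Import Arith Lia Recdef.

(* Euclid's algorithm is not structurally recursive on either argument,
   but the second argument strictly decreases under the measure. *)
Function gcd' (a b : nat) {measure (fun b => b) b} : nat :=
  match b with
  | 0 => a
  | S b' => gcd' b (a mod b)
  end.
Proof.
  (* Termination obligation: a mod b < b in the S branch. *)
  intros; subst; apply Nat.mod_upper_bound; lia.
Defined.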

Distillation likes to take functions from general recursion to structural tail recursion. If you could define functions in Coq and have them processed by distillation first, it would be very useful!

If you haven't tried Coq yet, you should. And if you haven't tried total functional programming yet, I suggest trying it in Coq.
