
Automated Deduction and Functional Programming

I just got "ML for the Working Programmer" in the mail a few days ago,
and have worked through it at a breakneck pace since receiving it. It
turns out that a lot of the material from the "Total Functional
Programming" book is also in this one. Paulson goes through the use
of structural recursion and extends the results by showing techniques
for proving a large class of programs terminating. Work with
co-algebras and bi-simulation didn't quite make it in, except for a
brief mention that dropping the covariance restriction on recursive
datatypes admits a type D := D → D, which is the type of programs in
the untyped lambda calculus, and hence liable to lead one into trouble.
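In SML syntax one can actually write down such a reflexive type and use it to build a looping term without any recursive definition at all. This is my own illustration, not an example from the book:

```sml
(* A reflexive datatype: a value of type D wraps a function D -> D.
   Up to the constructor, this is the equation D = D -> D. *)
datatype D = Fold of D -> D

(* Unwrap the constructor to recover the function. *)
fun unfold (Fold f) = f

(* delta corresponds to (\x. x x); applying it to itself gives the
   classic looping term omega, with no `fun` recursion anywhere. *)
val delta = Fold (fn x => unfold x x)
(* val omega = unfold delta delta    -- uncommenting this diverges *)
```

This is exactly the kind of type that a total language must rule out, since `omega` would be a well-typed non-terminating program.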

I have to say that having finished the book, this is the single most
interesting programming book that I've read since "Paradigms of
Artificial Intelligence Programming" and "Structure and Interpretation
of Computer Programs". In fact, I would rate this one above the other
two, though I think it might be because of my current interests.

The conjunction of "Total Functional Programming" and "ML for the
Working Programmer" has completely changed my world view of automated
deduction. Interestingly, my adviser's work looks even more appealing
from this perspective.

The crux of the issue is this: if we restrict ourselves to total
functional programming, as described in the aforementioned article, we
gain the ability to prove theorems directly by manipulating the
program source.
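As a small illustration of what proving by source manipulation looks like (a toy example of my own, not one from the book), take list append and the proposition that the empty list is a right identity:

```sml
(* Append, defined by structural recursion on the first list. *)
fun append []        ys = ys
  | append (x :: xs) ys = x :: append xs ys

(* Claim: append xs [] = xs, by structural induction on xs.
   Each step below is just unfolding a clause of the program text:
     append [] []        = []                   by the first clause
     append (x :: xs) [] = x :: append xs []    by the second clause
                         = x :: xs              by the induction hypothesis *)
```

Because every function is total and structurally recursive, these equational rewrites are valid proof steps rather than mere observations about one evaluation order.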

My adviser told me about this the first time I met him. At first it
seemed like a curiosity. It has, however, blossomed into a completely
different way of thinking about how proofs should be written.

I've spent the last few days converting proofs into program source,
which I then manipulate equationally. I have a few examples of
inductive proofs that can be written quite naturally as structurally
recursive functions.

datatype nat = Z | S of nat

fun lt n m =
    case m of
        Z => false
      | (S m') =>
            (case n of
                 Z => true
               | (S n') => lt n' m')

fun divide n d =
    case n of
        Z => (Z, Z)
      | (S n') =>
            let val (q', r') = divide n' d in
                case lt (S r') d of
                    true  => (q', S r')
                  | false => (S q', Z)
            end
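For concreteness, here is a quick check of the two functions (my own test harness, not part of the post), with a helper to read the unary numerals back as ints:

```sml
(* Hypothetical helper: interpret a unary numeral as an int. *)
fun toInt Z     = 0
  | toInt (S n) = 1 + toInt n

val four = S (S (S (S Z)))
val two  = S (S Z)

val true  = lt two four        (* 2 < 4 *)
val false = lt four two        (* not (4 < 2) *)

(* divide 4 2 should give quotient 2, remainder 0, since 4 = 2*2 + 0 *)
val (q, r) = divide four two
val (2, 0) = (toInt q, toInt r)
```

The `val` pattern bindings double as assertions: if any of them failed to match, the program would raise Bind.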

Both of these functions are direct "extractions" of programs from
proofs that I wrote. The second one comes from a proof with two
existential variables, which present themselves as the return values.
In the case of "lt" it simply computes a truth value.

The datatype nat = Z | S of nat defines natural numbers with the
correspondence Z=0, S(Z)=1, S(S(Z))=2 etc...

The proof is a proposition that goes like this:

∀n,d:nat. d>0 → ∃q,r:nat. n=dq+r ∧ 0≤r<d

You might notice I dropped the restriction on d in the actual source
code. That can be encoded with an auxiliary function in any case.
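One way to restore the d > 0 side condition (a sketch of my own, not Paulson's) is a wrapper that only accepts a strictly positive divisor:

```sml
(* The option type makes the precondition d > 0 explicit in the result:
   NONE corresponds to the case the proposition simply doesn't cover. *)
fun dividePos n d =
    case d of
        Z     => NONE                  (* d = 0: proposition is vacuous *)
      | (S _) => SOME (divide n d)     (* d > 0: the proof applies *)
```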

The proof proceeds inductively; you can essentially turn the divide
function inside out to recover the inductive proof.

The base case is n=0, which gives 0=dq+r, forcing r=0 and q=0 (since
d is greater than 0). This corresponds to the first branch of the case
structure above: (Z,Z) = (q,r), the two existential variables returned
with a valid result for n=0.

The inductive case is this. The induction hypothesis for n gives us:

n: ∃q,r. n=dq+r ∧ 0≤r<d

and we must show, for n+1:

(n+1): ∃q',r'. n+1=dq'+r' ∧ 0≤r'<d

From the hypothesis, n+1=dq+r+1. If r+1<d, we can take r'=r+1 and
q'=q, so solving the n case gives us a solution for n+1 directly.

Otherwise (when r+1<d fails), we look for a solution of the form
n+1 = d(q+1)+r', which we can rewrite as n+1 = d(q+1)+r' = dq+d+r' =
dq+r+1, so d+r'=r+1. But r+1 must be at least d, otherwise we wouldn't
have failed the test r+1<d; and r' can be no less than zero, so we
must choose r'=0. This results in (finally!)

n+1 = d(q+1), with r'=0 and q'=q+1

We have a proof! Looking at the source of the extracted program, you
can see that it mirrors the proof directly, short of a few
simplifications done by hand. The use of r and q to construct r' and
q' in the proof is the same as the use of r and q in the code for the
n+1 case.

I'm looking forward to a deeper understanding of the correspondences
between proofs and programs, with my newfound capacity to manually
produce functional programs from proofs by extraction. I'm also
curious to try my hand at generating termination proofs directly, and
at making a syntax that more directly supports total functional
programming.

