
Automated Deduction and Functional Programming

I just got "ML for the working programmer" in the mail a few days ago,
and worked through it at a breakneck pace since receiving it. It
turns out that a lot of the stuff from the "Total Functional
Programming" book is also in this one. Paulson goes through the use
of structural recursion and extends the results by showing techniques
for proving a large class of programs to be terminating. Work with
co-algebras and bi-simulation didn't quite make it in, except for a
brief mention about co-variant types leading to the possibility of a
type: D := D → D which is the type of programs in the untyped lambda
calculus, and hence liable to lead one into trouble.
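
To make that concrete, here is a minimal sketch (my own illustration, not from Paulson's book; the constructor name Lam is just a placeholder) of how such a type lets you write untyped self-application in ML:

datatype D = Lam of D -> D

(* application in the embedded untyped lambda calculus *)
fun app (Lam f) x = f x

(* the self-applying term (fn x => x x) *)
val omega = Lam (fn x => app x x)

Evaluating app omega omega loops forever, which is exactly the kind of trouble a total language has to rule out.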

I have to say that having finished the book, this is the single most
interesting programming book that I've read since "Paradigms of
Artificial Intelligence Programming" and "Structure and Interpretation
of Computer Programs". In fact, I would rate this one above the other
two, though I think it might be because of my current interests.

The conjunction of "Total Functional Programming" and "ML for the
Working Programmer" has completely changed my world view of automated
deduction. Interestingly, my adviser's work looks even more appealing
from this perspective.

The crux of the issue is this: if we restrict ourselves to total
functional programming, as described in the aforementioned article,
we gain the ability to prove theorems directly by manipulating the
program source.

My adviser told me about this the first time I met him. At first it
seemed like a curiosity. It has, however, blossomed into a completely
different way of thinking about how proofs should be performed.

I've spent the last few days converting proofs into program source,
which I then manipulate equationally. I have a few examples of
inductive proofs that can be written quite naturally as structurally
recursive functions.

datatype nat = Z | S of nat

(* lt n m: true exactly when n is strictly less than m *)
fun lt n m =
    case m of
        Z => false
      | (S m') =>
        (case n of
             Z => true
           | (S n') => lt n' m')

(* divide n d: returns (q, r) with n = d*q + r and 0 <= r < d, for d > 0 *)
fun divide n d =
    case n of
        Z => (Z, Z)
      | (S n') =>
        let val (q', r') = divide n' d
        in
            case lt (S r') d of
                true => (q', S r')
              | false => (S q', Z)
        end

Both of these functions are direct "extractions" of programs from
proofs that I wrote. The second comes from a proof with two
existential variables, and those existential variables present
themselves as the return values. In the case of "lt" it simply
computes a truth value.
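
As a tiny illustration of what "manipulating equationally" means here (my own example, not one of the extracted proofs), you can prove lt n (S n) = true for every n just by unfolding the source of lt and doing induction on n:

(* Claim: lt n (S n) = true for all n, by structural induction on n.

   Base case:
     lt Z (S Z)
       = case (S Z) of Z => false | (S m') => (case Z of Z => true | ...)
       = true

   Inductive case, assuming lt n (S n) = true:
     lt (S n) (S (S n))
       = lt n (S n)      by unfolding both case expressions in lt
       = true            by the induction hypothesis                    *)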

The datatype nat = Z | S of nat defines the natural numbers via the
correspondence Z = 0, S(Z) = 1, S(S(Z)) = 2, and so on.
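
For experimenting with these definitions it helps to have converters between nat and ML's int (my own helpers, not part of the proofs):

(* to_int: interpret a nat as a machine integer *)
fun to_int Z = 0
  | to_int (S n) = 1 + to_int n

(* from_int: build the nat for a non-negative integer *)
fun from_int 0 = Z
  | from_int k = S (from_int (k - 1))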

The proof is of a proposition that goes like this:

∀n,d:nat. d > 0 → ∃q,r:nat. n = dq + r ∧ 0 ≤ r < d

You might notice I dropped the restriction on d in the actual source
code. That can be encoded with an auxiliary function in any case.
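
For instance, such an auxiliary wrapper might look like this (just a sketch; the name safe_divide and the use of the option type are my own choices, not part of the extraction):

(* refuse to divide by zero: the proposition assumes d > 0 *)
fun safe_divide n d =
    case d of
        Z => NONE
      | (S _) => SOME (divide n d)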

The proof proceeds by induction; basically, you can turn the divide
function inside out to recover the inductive proof.

The base case is n=0, which gives 0 = dq + r, which forces r=0 and q=0
(since d is greater than 0). This corresponds to the first branch of
the case structure above: (Z,Z) = (q,r), so the two existential
variables are returned with a valid result for n=0.

The inductive case is this.

For n (the hypothesis): ∃q,r. n = dq + r ∧ 0 ≤ r < d
For n+1 (the goal): ∃q',r'. n+1 = dq' + r' ∧ 0 ≤ r' < d

if (r+1) < d
then n+1 = dq + (r+1)

We see that this gives us r' = r+1 and q' = q, so a solution for q' and r' is obtained directly from q and r (solving the n case hands us exactly what we need for n+1).

else n+1 = d(q+1)+r'

which we can rewrite as n+1 = d(q+1) + r' = dq + d + r' = dq + r + 1,
so d + r' = r + 1. Now r+1 must be at least d, otherwise we wouldn't
have failed the test r+1 < d; and since r < d we also have r+1 ≤ d,
so r+1 = d exactly and we must choose r' = 0. This results in (finally!)

n+1 = d(q+1), with r' = 0 and q' = q+1

We have a proof! If you look at the source of the extracted program,
you can see that it mirrors the proof directly, short of a few
simplifications done by hand. Making use of q and r (the witnesses
for n) to construct q' and r' corresponds to the recursive call in
divide, whose result is used to build the return value for the n+1
case.
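
To make the correspondence concrete, here is my gloss of how the proof steps line up with the branches of divide, plus a quick check using the from_int and to_int helpers above (7 = 3·2 + 1, so we expect (2,1)):

(* base case  n = 0   ~  Z branch:              return (Z, Z)
   r+1 < d            ~  lt (S r') d = true:    return (q', S r')
   r+1 = d            ~  lt (S r') d = false:   return (S q', Z)   *)

val (q, r) = divide (from_int 7) (from_int 3)
val check = (to_int q, to_int r)    (* evaluates to (2, 1) *)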

I'm looking forward to a deeper understanding of the correspondence
between proofs and programs now that I have a newfound capacity to
extract actual functional programs from proofs by hand. I'm also
curious to try my hand at generating termination proofs directly, and
at designing a syntax that more directly supports total functional
programming.
