### Proof by Program Transformation

Program transformation is a kind of meta-compilation in which the source and target languages are the same. In this sense it can be seen as an endomorphism on the set of programs in a language.

As Turchin pointed out, it is sometimes possible to prove properties of programs simply by performing a program transformation. Essentially, a program together with a predicate that verifies its behaviour can be replaced with a truth value, provided the transformation is sufficiently powerful. An obvious example, which can be calculated using partial evaluation, would be something like (max (< 3) (1::2::1::nil)). In fact, very complex properties can be proved automatically by Distillation, a particularly powerful program transformation system.
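As a minimal Haskell sketch of the same idea (not from the original post, and using `all` rather than the `max` above): a predicate applied to fully static data is something a partial evaluator can collapse to a constant.

```
-- Minimal sketch: a predicate applied to fully static input.
-- Since nothing here is dynamic, a partial evaluator can replace
-- the whole expression with the constant True at transformation time.
propAllBelow3 :: Bool
propAllBelow3 = all (< 3) [1, 2, 1]

main :: IO ()
main = print propAllBelow3  -- prints True
```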

In my research I've been working with the μ-Calculus, a temporal logic that is very expressive (LTL, CTL and CTL* can all be embedded in it) and well suited to describing reactive systems. It uses least and greatest fixed-points to describe temporal properties. This led me to consider infinite proofs, bi-simulation and co-induction in the context of distillation.

I was reading a paper [1], which led me to wonder whether Distillation could be used as a very simple means of determining whether two programs bi-simulate. Distillation can be viewed as a relation between programs, such that a program a and a program b are related by a D b iff a distils to b (that is, b is the result of running the distillation algorithm on a). If the bi-simulation relation is called ~ then we trivially have:

∀ a b c ∈ PROG. a D c ∧ b D c → a~b

This is obvious, since D is semantics-preserving and c must have the same semantics as both a and b. At first this looks a little silly: what good can it possibly be? Can we ever expect it to happen? Well, I was somewhat surprised to find that it does work for the simple examples I tried on paper. For instance, take the following programs.

```
iterate f x = x:(iterate f (f x))
map f (h:tl) = (f h):(map f tl)
cofix f = f (cofix f)
```

```
conjecture_1 f x = iterate f (f x)
conjecture_2 f x = map f (iterate f x)
```
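As a quick sanity check (not a proof, since comparing finite prefixes cannot establish bi-simulation), the two sides can be tested in Haskell; `iterate'` below is just a Haskell spelling of the `iterate` above, and the Prelude's `map` also handles the nil case, which is harmless here:

```
-- Haskell spellings of the definitions above.
iterate' :: (a -> a) -> a -> [a]
iterate' f x = x : iterate' f (f x)

conjecture1, conjecture2 :: (a -> a) -> a -> [a]
conjecture1 f x = iterate' f (f x)
conjecture2 f x = map f (iterate' f x)

main :: IO ()
main = print (take 10 (conjecture1 (+1) (0 :: Int))
              == take 10 (conjecture2 (+1) 0))  -- prints True
```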

We will assume that the ":" constructor is for a type of lists with no "nil" terminator, that is, we are only dealing with infinite lists. Supercompilation is strictly less powerful than Distillation, but sufficient for this example, and simpler to do on paper. We will therefore supercompile the two halves of the conjecture:

∀ f x. iterate f (f x) ~ map f (iterate f x)

to yield:

```
conjecture_1 f x = iterate f (f x)
-- unfold iterate
conjecture_1 f x = (f x):(iterate f (f (f x)))
-- generalise [iterate f (f x), iterate f (f (f x))] =>
--  iterate f (f x), which is just conjecture_1
conjecture_1 f x = (f x):(conjecture_1 f (f x))
conjecture_1 = cofix (\ g f x -> (f x):(g f (f x)))
```

```
conjecture_2 f x = map f (iterate f x)
-- unfold map
conjecture_2 f x = case (iterate f x) of h:tl => (f h):(map f tl)
-- unfold iterate
conjecture_2 f x = case (x:(iterate f (f x))) of h:tl => (f h):(map f tl)
-- case law, [h:=x, tl:=(iterate f (f x))]
conjecture_2 f x = (f x):(map f (iterate f (f x)))
-- generalise [map f (iterate f (f x)), map f (iterate f x)] =>
-- map f (iterate f x), which is just conjecture_2
conjecture_2 f x = (f x):(conjecture_2 f (f x))
conjecture_2 = cofix (\ g f x -> (f x):(g f (f x)))
```
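Both derivations bottom out in the same cofix term, and in Haskell that common form runs directly: for lazy streams, cofix coincides with the ordinary fixed-point combinator. A sketch, assuming the definitions above:

```
-- The shared form both conjectures distil to. For lazy streams,
-- cofix is just the usual fixed-point combinator.
cofix :: (a -> a) -> a
cofix f = f (cofix f)

common :: (a -> a) -> a -> [a]
common = cofix (\g f x -> f x : g f (f x))

main :: IO ()
main = print (take 4 (common (* 2) (1 :: Int)))  -- prints [2,4,8,16]
```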

So we can see that conjecture_1 and conjecture_2 are syntactically identical modulo alpha-conversion. The cofix was added just so that the function names didn't obscure the equality. We could also move to a De Bruijn representation to remove the need for alpha-equivalence, but I think you get the picture. In addition, it is trivial to verify that this is a valid co-recursive program satisfying the guard condition: the recursive call occurs under a constructor of the co-inductive type that is the co-domain of the co-recursive function (look at all those "co"s!).

The point is though that we have derived a syntactically identical program, and hence proved bi-simulation, using a completely automated procedure!

So the obvious question, of course, is: how often will this work? The technique relies on finding a canonical form. For non-recursive (NOREC) programs, we can probably always find a canonical form using supercompilation (the S relation, a subset of D). That is, I expect that:

∀ p,q ∈ NOREC. ∀ r,s. p S r ∧ q S s → r = s

This is probably obvious to someone with a better understanding of normal forms in the lambda calculus, but I haven't yet proved it for the language over which distillation is defined.
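To illustrate the claim (a sketch, not from the original post): the two non-recursive programs below differ syntactically but share the beta-eta normal form \f x -> f (f x), which is the kind of canonical form a supercompiler would map both to. In Haskell we can only observe their agreement extensionally:

```
-- Two syntactically different non-recursive programs with the same
-- normal form \f x -> f (f x); normalisation identifies them, while
-- here we merely check they agree on sample inputs.
twiceA :: (a -> a) -> a -> a
twiceA f = f . f

twiceB :: (a -> a) -> a -> a
twiceB f x = f (f x)

main :: IO ()
main = print (map (twiceA (+3)) [0, 1, 2 :: Int]
              == map (twiceB (+3)) [0, 1, 2])  -- prints True
```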

For recursively defined programs it cannot be the case that we can always find a canonical form: that would contradict the undecidability of equivalence for the lambda calculus. However, I'll stick my neck out and conjecture that it will work for all programs of suitably low complexity (PRIMREC, or ELEMENTARY maybe? [3]). This doesn't tell us anything about bi-simulation, however, since co-recursive programs don't have a complexity. Is there a co-complexity we could use? It would be really nice to have some class we could show is definitely canonisable.

And while I'm on a roll playing fast and loose, I'll say that I think Distillation represents "the right way" of finding "proof-net"-like structures for intuitionistic logic. That is, internally the distillation algorithm actually forms a graph, and in the case of conjecture_1 and conjecture_2 the graphs are identical. The internal representation used for distillation is then the form in which we remove "the bureaucracy of syntax" [2] for some subset of possible proofs.

For further reading on some of these ideas:

[1] Proof Methods for Corecursive Programs
[2] Down with the bureaucracy of syntax!
[3] The Expressive Power of Higher-order Types or, Life without CONS
[4] Deforestation, program transformation, and cut-elimination

### Managing state in Prolog monadically, using DCGs.

Prolog is a beautiful language which makes a lot of otherwise irritating rudimentary rule application and search easy. I have found it particularly nice when dealing with compilers that involve rule-based transformation from a source language L to a target language L'.

However, applying these rules generally requires keeping track of a context, and this context has to be explicitly threaded through the entire application, which involves a lot of irritating and error-prone sequence variables. This often leads to code looking a bit like this:

```
compile(seq(A, B), (ResultA, ResultB), S0, S2) :-
    compile(A, ResultA, S0, S1),
    compile(B, ResultB, S1, S2).
```

While not the worst thing, I've found it irritating and ugly, and I've made a lot of mistakes with incorrectly sequenced state variables. It's much easier when the sequencing is expressed by the textual order of the code rather than by variable numbering.

While they were not designed for this task, but rather for parsing, DCGs turn out to be a convenient …
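To make the idea concrete, here is how the explicit rule above might look in DCG notation (a sketch, not from the original post): the pair of state arguments disappears from the source text, and the DCG translation reinserts it mechanically.

```
% DCG version: the S0/S1/S2 accumulator pair is threaded implicitly.
% The expanded clause is exactly the hand-written one:
%   compile(seq(A,B), (RA,RB), S0, S2) :-
%       compile(A, RA, S0, S1), compile(B, RB, S1, S2).
compile(seq(A, B), (ResultA, ResultB)) -->
    compile(A, ResultA),
    compile(B, ResultB).
```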

### Generating etags automatically when needed

Have you ever wanted M-. (the Emacs command that finds the definition of the term under the cursor) to just "do the right thing" and jump to the most current definition site, while working in a language that doesn't have an inferior process set up to answer queries about source locations (as is done in Lisp, OCaml and some other languages with sophisticated Emacs interfaces)?

Well, fret no more. Here is an approach that will save the appropriate files and regenerate your TAGS file automatically when things change, ensuring that M-. takes you to the appropriate place.

You will have to reset tags-table-list (or set it when you first use M-.), and you'll want to change the language given to find and etags in the 'create-prolog-tags function (as you're probably not using Prolog), but otherwise it shouldn't require much customisation.

And finally, you will need to run etags once manually, or run 'M-x create-prolog-tags' in order to get the initia…

### Decidable Equality in Agda

So I've been playing with typing various things in System F which I had previously left with auxiliary well-formedness conditions. This includes substitutions and contexts, both of which are interesting to have well-typed versions of. Since I've been learning Agda, it seemed sensible to carry out this work in that language, as there is nothing like a problem to help you learn a language.

In the course of proving properties, I ran into the age-old problem of showing that equivalence between two objects is decidable. In this particular case, I need to show the decidability of equality over types in System F in order to state formation rules for variable contexts. We'd like a context Γ to contain (x:A) only if (x:B) does not occur in Γ when A ≠ B. To make statements about whether two types are equal or not, we need to be able to decide the question with a terminating procedure.
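For a flavour of what such a decision procedure looks like, here is a sketch in Lean 4 rather than Agda (the tiny type language and its constructor names are my own, purely illustrative):

```
-- A tiny System-F-style type language: variables and arrow types.
inductive Ty where
  | var   : Nat → Ty
  | arrow : Ty → Ty → Ty
  deriving DecidableEq

-- With decidable equality derived, `decide` can settle equality
-- questions by running the terminating comparison procedure:
example : Ty.arrow (Ty.var 0) (Ty.var 1) ≠ Ty.var 0 := by decide
```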

And so we arrive at our story. In Coq, equality is som…