### Proof by Program Transformation

Program transformation is a sort of meta-compilation in which the source and target languages are the same. In this sense it can be seen as an endomorphism on the set of programs in a language (not an automorphism: transformations are generally many-to-one, which is exactly what we exploit below).

As was pointed out by Turchin, it is sometimes possible to prove properties of programs simply by performing a program transformation. Essentially, a program together with a predicate that verifies its behaviour can be replaced with a truth value, provided the program transformation is sufficiently powerful. An obvious example, computable by mere partial evaluation, would be something like (max (< 3) (1::2::1::nil)). In fact very complex properties can be proved automatically by Distillation, a particularly powerful program transformation system.
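To make this concrete, here is a minimal Haskell sketch (the list and predicate are illustrative stand-ins, not the post's exact example): the property is a closed expression, so a partial evaluator can reduce it all the way down to a constant.

```haskell
-- A closed property: a predicate applied to concrete data. A partial
-- evaluator (or indeed plain evaluation) replaces the whole expression
-- with the constant True.
property :: Bool
property = all (< 3) [1, 2, 1]
```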

In my research I've been working with the μ-Calculus, a very powerful type of temporal logic (LTL, CTL and CTL* can all be embedded in it) that is well suited to describing reactive systems. It uses a notion of least and greatest fixed points to describe temporal properties. This led me to consider infinite proofs, bi-simulation and co-induction in the context of distillation.
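To make the fixed-point reading concrete (a textbook μ-Calculus illustration, not from the original post): invariance is a greatest fixed point, while eventuality is a least fixed point:

νZ. P ∧ [a]Z ("after every a-step, P keeps holding")

μZ. P ∨ ⟨a⟩Z ("some finite sequence of a-steps reaches P")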

I was reading a paper [1], which led me to wonder whether Distillation couldn't be used as a very simple means of determining whether two programs bi-simulate. Distillation can be viewed as a relation between programs, such that a program a and a program b are related by a D b iff a distils to b (that is, b is the result of running the distillation algorithm on a). If the bi-simulation relation is called ~ then we trivially have:

∀ a b c ∈ PROG. a D c ∧ b D c → a~b

This is obvious since D is semantics preserving, and c must have the same semantics as both a and b. This looks a little silly at first: what good can it possibly be? Can we ever expect it to happen? Well, I was somewhat surprised to find that it does work for the simple examples I tried on paper. For instance, let us take the following definitions.

```
iterate f x = x:(iterate f (f x))
map f (h:tl) = (f h):(map f tl)
cofix f = f (cofix f)
```

```
conjecture_1 f x = iterate f (f x)
conjecture_2 f x = map f (iterate f x)
```
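Before transforming anything, a quick empirical spot check (a Haskell transcription of the definitions above; the test values (+1) and 0 are my own): stream bi-simulation amounts to agreement on all finite prefixes, and the two sides do agree.

```haskell
import Prelude hiding (iterate, map)

iterate :: (a -> a) -> a -> [a]
iterate f x = x : iterate f (f x)

-- partial on purpose: we only ever apply it to infinite lists
map :: (a -> b) -> [a] -> [b]
map f (h:tl) = f h : map f tl

conjecture_1, conjecture_2 :: (a -> a) -> a -> [a]
conjecture_1 f x = iterate f (f x)
conjecture_2 f x = map f (iterate f x)

-- both prefixes are [1,2,3,4,5]
check :: Bool
check = take 5 (conjecture_1 (+1) 0) == take 5 (conjecture_2 (+1) 0)
```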

We will assume that the ":" constructor is for a type of lists with no "nil" terminator, that is, we are only dealing with infinite lists. Supercompilation is strictly less powerful than Distillation, but sufficient for this example, and simpler to do on paper. We will therefore supercompile the two halves of the conjecture:

∀ f x. iterate f (f x) ~ map f (iterate f x)

to yield:

```
conjecture_1 f x = iterate f (f x)
-- unfold iterate
conjecture_1 f x = (f x):(iterate f (f (f x)))
-- generalise [iterate f (f x), iterate f (f (f x))] =>
--  iterate f (f x), which is just conjecture_1
conjecture_1 f x = (f x):(conjecture_1 f (f x))
conjecture_1 = cofix (\ g f x -> (f x):(g f (f x)))
```

```
conjecture_2 f x = map f (iterate f x)
-- unfold map
conjecture_2 f x = case (iterate f x) of h:tl => (f h):(map f tl)
-- unfold iterate
conjecture_2 f x = case (x:(iterate f (f x))) of h:tl => (f h):(map f tl)
-- case law, [h:=x, tl:=(iterate f (f x))]
conjecture_2 f x = (f x):(map f (iterate f (f x)))
-- generalise [map f (iterate f (f x)), map f (iterate f x)] =>
-- map f (iterate f x), which is just conjecture_2
conjecture_2 f x = (f x):(conjecture_2 f (f x))
conjecture_2 = cofix (\ g f x -> (f x):(g f (f x)))
```

So we can see that conjecture_1 and conjecture_2 are syntactically identical modulo alpha-conversion. The cofix was added just so that the function name didn't obscure the equality. In fact we could also move to a De Bruijn representation to remove the need for alpha-conversion equivalence, but I think you get the picture. In addition, it is trivial to verify that this is a valid co-recursive program satisfying the guard condition: the co-recursive call happens directly under a constructor of the co-inductive type which is the co-domain of the co-recursive function (look at all those "co"s!).
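In Haskell terms (a transcription of the residual program above, with an illustrative test), guardedness is what makes the stream productive:

```haskell
-- the co-recursive call sits directly under the (:) constructor,
-- so each unfolding yields one more element of the stream
conj :: (a -> a) -> a -> [a]
conj f x = f x : conj f (f x)

-- e.g. take 5 (conj (+1) 0) == [1,2,3,4,5]
```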

The point, though, is that we have derived syntactically identical programs, and hence proved bi-simulation, using a completely automated procedure!

So the obvious question, of course, is: how often will this work? The technique relies on finding a canonical form. For non-recursive (NOREC) programs, we can probably always find a canonical form using supercompilation (the S relation, a subset of D). That is, I expect that:

∀ p,q,r,s ∈ NOREC. p D r ∧ q D s → r=s

This is probably obvious to someone with a better understanding of normal forms in the lambda calculus, but I haven't yet proved it for the language over which distillation is defined.
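For intuition, here is a trivial illustration (my own example, not from the post) of two syntactically different non-recursive programs sharing a canonical form:

```haskell
-- both definitions reduce to the residual \f x -> f x,
-- which is the shared canonical form
p, q :: (a -> b) -> a -> b
p f x = id f x
q f x = (\g -> g x) f
```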

For recursively defined programs, it can't be the case that we can always find a canonical form: that would contradict the undecidability of equivalence for the lambda calculus. However, I'll stick my neck out and conjecture that it will work for all programs of suitably low complexity (PRIMREC, or maybe ELEMENTARY? [3]). This doesn't tell us anything about bi-simulation, however, since co-recursive programs don't have a complexity. Is there a co-complexity we could use? It would be really nice to have some class we could show is definitely canonisable.

And while I'm on a roll playing fast and loose, I'll say that I think Distillation represents "the right way" of going about finding "proof-net"-like structures for intuitionistic logic. That is, internally the distillation algorithm actually forms a graph, and in the case of conjecture_1 and conjecture_2 the graphs are identical. The internal representation used for distillation, then, is actually the form in which we remove "the bureaucracy of syntax" [2] for some subset of possible proofs.

For further reading on some of these ideas:

[1] Proof Methods for Corecursive Programs
[2] Down with the bureaucracy of syntax!
[3] The Expressive Power of Higher-order Types or, Life without CONS
[4] Deforestation, program transformation, and cut-elimination

### Decidable Equality in Agda

So I've been playing with giving types in System F to various things which I had previously left with auxiliary well-formedness conditions. This includes substitutions and contexts, both of which are interesting to have well-typed versions of. Since I've been learning Agda, it seemed sensible to carry out this work in that language, as there is nothing like a problem to help you learn a language.

In the course of proving properties, I ran into the age-old problem of showing that equivalence is decidable between two objects. In this particular case, I need to show the decidability of equality over types in System F in order to state formation rules for variable contexts. We'd like a context Γ to contain (x:A) only if (x:B) does not occur in Γ when A ≠ B. For us to make statements about whether two types are equal or not, we need to be able to decide the question with a terminating procedure.
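As a rough sketch of the shape of the problem (transcribed into Haskell, with a hypothetical syntax of types; in Agda, decEq would return Dec (s ≡ t), carrying a proof either way, rather than a bare Bool):

```haskell
-- a hypothetical, minimal syntax of System F types, De Bruijn style
data Ty = TVar Int | Arr Ty Ty | Forall Ty
  deriving Show

-- structural recursion on both arguments guarantees termination
decEq :: Ty -> Ty -> Bool
decEq (TVar i)   (TVar j)   = i == j
decEq (Arr a b)  (Arr c d)  = decEq a c && decEq b d
decEq (Forall a) (Forall b) = decEq a b
decEq _          _          = False
```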

And so we arrive at our story. In Coq, equality is som…

### Formalisation of Tables in a Dependent Language

I've had an idea kicking about in my head for a while of making query plans explicit in SQL in such a way that one can be assured that the query plan corresponds to the SQL statement desired. The idea is something like a Curry-Howard correspondence in a relational setting. One could infer the plan from the SQL, the SQL from the plan, or do a sort of "type-checking" to make sure that the plan corresponds to the SQL.

The devil is always in the details however. When I started looking at the primitives I would need, it turned out that the low-level table joining operations are actually not that far from primitive SQL statements themselves. I decided to go ahead and formalise some of what would be necessary in Agda in order to get a better feel for the types of objects I would need and the laws which would be required to demonstrate that a plan corresponded with a statement.
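For flavour, here is a minimal sketch (in Haskell rather than Agda, with all names illustrative) of why a plan-level join primitive sits so close to the SQL it implements:

```haskell
-- a table is just a bag of rows, modelled here as a list
type Table r = [r]

-- a nested-loop join: barely lower-level than the SQL join it implements
joinOn :: (a -> k) -> (b -> k) -> (k -> k -> Bool)
       -> Table a -> Table b -> Table (a, b)
joinOn ka kb match xs ys =
  [ (x, y) | x <- xs, y <- ys, match (ka x) (kb y) ]
```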

Dependent types are very powerful and give you plenty of rope to hang yourself. It's always something of…

### Plotkin, the LGG and the MGU

Legend has it that a million years ago Plotkin was talking to his professor Popplestone, who said that unification (finding the most general unifier, or MGU) might have an interesting dual, and that Plotkin should find it. It turns out that the dual *is* interesting, and it is known as the Least General Generalisation (LGG). Plotkin apparently described the LGG both for terms and for clauses. I say apparently because I can't find his paper on-line.

The LGG for clauses is more complicated, so we'll come back to it after we look at the LGG of terms. We can see how the MGU is related to the LGG by looking at a couple of examples and the above image. We use the Prolog convention that function symbols start with lower case and variables start with upper case. The image above is organised as a DAG (Directed Acyclic Graph). DAGs are important here because they depict partial orders, and terms ordered by generality in fact form a lattice: the LGG of two terms is their join, while unification, when it succeeds, gives their meet.
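Before returning to the diagram, here is a small Haskell sketch of the LGG of terms (Plotkin-style anti-unification; the encoding and names are my own illustration):

```haskell
import Control.Monad (zipWithM)
import Control.Monad.State (State, evalState, get, modify)
import qualified Data.Map as M

data Term = Var String | Fn String [Term]
  deriving (Eq, Ord, Show)

-- anti-unification: identical subterms are kept; each distinct pair of
-- mismatched subterms is replaced by one fresh, shared variable
lgg :: Term -> Term -> Term
lgg s t = evalState (go s t) M.empty
  where
    go :: Term -> Term -> State (M.Map (Term, Term) String) Term
    go a b | a == b = pure a
    go (Fn f as) (Fn g bs)
      | f == g && length as == length bs = Fn f <$> zipWithM go as bs
    go a b = do
      seen <- get
      case M.lookup (a, b) seen of
        Just v  -> pure (Var v)
        Nothing -> do
          let v = "X" ++ show (M.size seen)
          modify (M.insert (a, b) v)
          pure (Var v)

-- e.g. lgg f(a,b) f(c,b)  ==>  f(X0,b)
```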

Essentially what we have done is drawn an (incomplete) Hasse diagram f…