
Proof Theory

Thanks to my brother I got a really cool book on proof theory called "Basic Proof Theory". It has a bunch of nice features, including a from-the-ground-up presentation of proof theory that should be relatively accessible to anyone with a background in mathematics. It demonstrates some of the connections provided by the Curry-Howard correspondence (which is my favourite part of the book). It also describes second-order logic, which is great because I've had very little formal exposure to it. Second-order logic is really beautiful since you can define all the connectives in terms of ∀, ∀2 and →. If you pun ∀ and ∀2 you have a really compact notation.
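
For instance, the standard impredicative encodings (writing ∀2 for the second-order quantifier) go like this:

⊥      ≡  ∀2 X. X
¬A     ≡  A → ⊥
A ∧ B  ≡  ∀2 X. (A → B → X) → X
A ∨ B  ≡  ∀2 X. (A → X) → (B → X) → X
∃x. A  ≡  ∀2 X. (∀x. (A → X)) → X

With the pun, every definition reads with a single ∀.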

The book also forced me to learn some things I hadn't wrapped my head around. One of those was Gentzen-style sequent calculus. This really turns out to be pretty easy when you have a good book describing it. I've even written a little sequent solver (in Lisp) since I found the proofs so much fun. The first-order intuitionistic sequent solver is really not terribly difficult to write. Basically I treat the proofs as goal directed, starting with a sequent of the form:

⇒ F

And try to arrive at leaves of the tree that all have the form:

A ⇒ A

I have already proven that 'F ⇒ F' for compound formulas F follows from 'A ⇒ A', so I didn't figure it was necessary to make the solver do it. The solver currently only works with propositional formulas (it solves a type theory where types are not parametric), but I'm interested in limited extensions, though I haven't thought much about them. I imagine I'd quickly get something undecidable if I'm not careful.
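
To give the flavour, here's a minimal sketch of this kind of goal-directed search for the implicational fragment (a toy, not the real solver; all the names are made up). To keep the search terminating, the left rule throws away the implication it uses, which costs completeness on some formulas; Dyckhoff's contraction-free calculus G4ip handles that properly.

;; Formulas are symbols (atoms) or lists like (-> a b).
(defun implication-p (f)
  (and (consp f) (eq (first f) '->)))

(defun provable-p (gamma goal)
  (or
   ;; Leaf of the tree: the axiom A => A.
   (and (member goal gamma :test #'equal) t)
   ;; R->: to prove Gamma => (-> A B), prove Gamma, A => B.
   (and (implication-p goal)
        (provable-p (cons (second goal) gamma) (third goal)))
   ;; L->: pick some (-> A B) in Gamma, prove A, then carry on with B.
   (some (lambda (f)
           (and (implication-p f)
                (let ((rest (remove f gamma :test #'equal :count 1)))
                  (and (provable-p rest (second f))
                       (provable-p (cons (third f) rest) goal)))))
         gamma)))

;; (provable-p '() '(-> a (-> b a)))         ; => T   (the K type)
;; (provable-p '() '(-> (-> (-> a b) a) a))  ; => NIL (Peirce's law)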

Anyhow, working with the sequent calculus got me thinking about →. In the book they present the rule R→ like this:


Γ, A ⇒ Δ, B
─────────────
Γ ⇒ A→B, Δ
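
For example, one application of R→ turns the axiom A ⇒ A into a proof of ⇒ A→A (taking Γ and Δ empty):

A ⇒ A
─────────
⇒ A→A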



This is a bit weird since there is nothing that goes the other direction; i.e., for none of minimal, intuitionistic or classical logic do you find a rule in which you introduce a connective on the left from formulas on the right. I started looking around for something that does this and I ran into Basic Logic. I haven't read the paper yet, so I can't really comment on it. I'll let you know after I'm done with it.


Popular posts from this blog

Decidable Equality in Agda

So I've been playing with typing various things in System F which previously I had left with auxiliary well-formedness conditions. This includes substitutions and contexts, both of which are interesting to have well-typed versions of. Since I've been learning Agda, it seemed sensible to carry out this work in that language, as there is nothing like a problem to help you learn a language.

In the course of proving properties, I ran into the age-old problem of showing that equality is decidable between two objects. In this particular case, I need to be able to show the decidability of equality over types in System F in order to have formation rules for variable contexts. We'd like a context Γ to have (x:A) only if (x:B) does not occur in Γ when (A ≠ B). For us to have statements about whether two types are equal or not, we're going to need to be able to decide that with a terminating procedure.
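
Concretely (my gloss here, assuming a de Bruijn representation so plain structural equality is the right notion): the decision procedure just follows the structure of the type. A₁ → B₁ equals A₂ → B₂ exactly when A₁ equals A₂ and B₁ equals B₂, ∀ A equals ∀ B exactly when A equals B, mismatched head constructors are answered no immediately, and the recursion into subterms is what guarantees termination.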

And so we arrive at our story. In Coq, equality is som…

Formalisation of Tables in a Dependent Language

I've had an idea kicking about in my head for a while of making query plans explicit in SQL in such a way that one can be assured that the query plan corresponds to the SQL statement desired. The idea is something like a Curry-Howard correspondence in a relational setting. One could infer the plan from the SQL, the SQL from the plan, or do a sort of "type-checking" to make sure that the plan corresponds to the SQL.

The devil is always in the details, however. When I started looking at the primitives that I would need, it turned out that the low-level table joining operations are actually not that far from primitive SQL statements themselves. I decided to go ahead and formalise some of what would be necessary in Agda in order to get a better feel for the types of objects I would need and the laws which would be required to demonstrate that a plan corresponded with a statement.
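
As a rough illustration (my example, not from the formalisation): a plan primitive like a nested-loop join of R and S on R.k = S.k is already more or less the statement SELECT * FROM R JOIN S ON R.k = S.k, so the correspondence laws have little distance to bridge.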

Dependent types are very powerful and give you plenty of rope to hang yourself. It's always something of…

Plotkin, the LGG and the MGU

Legend has it that a million years ago Plotkin was talking to his professor Popplestone, who said that unification (finding the most general unifier or the MGU) might have an interesting dual, and that Plotkin should find it. It turns out that the dual *is* interesting and it is known as the Least General Generalisation (LGG). Plotkin apparently described both the LGG for terms, and for clauses. I say apparently because I can't find his paper on-line.

The LGG for clauses is more complicated so we'll get back to it after we look at the LGG of terms. We can see how the MGU is related to the LGG by looking at a couple of examples and the above image. We use the Prolog convention that function symbols start with lower case, and variables start with upper case. The image above is organised as a DAG (Directed Acyclic Graph). DAGs are a very important structure in mathematics since DAGs are lattices.
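
To give a concrete (made-up) pair of examples: the LGG of f(a, b) and f(b, b) is f(X, b), the most specific term of which both are instances; dually, the MGU of f(X, b) and f(Y, Y) is {X ↦ b, Y ↦ b}, the most general substitution that makes the two terms identical, giving f(b, b).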

Essentially what we have done is drawn an (incomplete) Hasse diagram f…