
More Questions than Answers

Code transformation, or meta-compilation as it is sometimes called (the general notion covering techniques such as Partial Evaluation, Supercompilation, Deforestation, and my advisor's Distillation), is a powerful technique in computer programming. Its benefits (and drawbacks) are almost certainly not sufficiently studied.

I was just conversing with my room-mate Tom about meta-compilation, and I ventured the supposition that metacompilers are somewhat like the technology of the lathe. A huge number of technologies require a lathe in order to be produced efficiently; the lathe is a major nexus in the dependency graph of machining technology. It is almost a strict precondition for the mill, and the mill is the crux of modern machining: it allows you to construct almost any currently available machined part. Without the mill we really wouldn't have the industrial age at all. Do such things exist in computer programming?

Metacompiler technology is incredibly powerful. A metacompiler is usually considered a superset of a partial evaluator. It is a compilation technique that starts in the source language and ends in the source language, rather than in some target language as a standard compiler does. While this might at first sound trivial or irrelevant, a few examples can convince one that it is actually a very useful tool. (2*2) can be coded in most languages, but really it is just the literal 4. Partial evaluation will reduce this computation at compile time, eliminating the cost from the final executable. The power doesn't stop there, though. One particularly convincing example I found was the partial evaluation of a fairly simple grammar recogniser (parser), which reduced an NDFA directly to a DFA: this is basically the compilation process used for regexps.
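To make the (2*2) example concrete, here is a minimal sketch of partial evaluation as constant folding over a toy expression tree. Everything here is illustrative, not taken from any real partial evaluator: expressions are tuples, and the environment holds whatever inputs happen to be known at specialisation time.

```python
# A toy expression language: ("lit", n), ("var", name),
# ("add", a, b), ("mul", a, b).

def specialise(expr, env):
    """Partially evaluate expr given a partial environment env.

    Subterms whose values are known are reduced at "compile time";
    unknown subterms are residualised into the output expression.
    """
    tag = expr[0]
    if tag == "lit":
        return expr
    if tag == "var":
        name = expr[1]
        return ("lit", env[name]) if name in env else expr
    # Binary operators: specialise both sides, fold if both are literal.
    a = specialise(expr[1], env)
    b = specialise(expr[2], env)
    if a[0] == "lit" and b[0] == "lit":
        return ("lit", a[1] + b[1] if tag == "add" else a[1] * b[1])
    return (tag, a, b)

# (2*2) reduces to the literal 4 at specialisation time;
# x stays residual, while y*2 folds once y is known.
print(specialise(("mul", ("lit", 2), ("lit", 2)), {}))
print(specialise(("add", ("var", "x"),
                  ("mul", ("var", "y"), ("lit", 2))), {"y": 3}))
```

A real partial evaluator does far more (loop unrolling, specialising function calls, the binding-time analysis that decides what is static), but the residualise-or-reduce decision above is the core move.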

The Futamura projections give us some idea of just how powerful the technique is. If we have a metacompiler, we can metacompile an interpreter with respect to a program written in the source language of the interpreter, arriving at an executable in the language of the metacompiler. In fact, if we metacompile the metacompiler with respect to the interpreter, we can generate a compiler!
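The first projection can at least be gestured at with ordinary partial application: fixing an interpreter's program argument yields something that behaves like a compiled version of that program. This is only a hypothetical sketch with an invented two-instruction language; a real specialiser would additionally unfold the interpreter's dispatch loop rather than merely freeze an argument.

```python
from functools import partial

def interp(program, inp):
    """A trivial interpreter for a two-instruction source language."""
    acc = inp
    for instr in program:
        if instr == "double":
            acc = acc * 2
        elif instr == "inc":
            acc = acc + 1
        else:
            raise ValueError(f"unknown instruction: {instr}")
    return acc

# "Specialising" interp to a fixed program: the result is a function
# of the remaining input alone, i.e. a (naively) compiled program.
# A genuine metacompiler would residualise the straight-line code
# `acc * 2; acc + 1` with the interpretive dispatch gone entirely.
target = partial(interp, ["double", "inc"])

print(target(20))  # 41
```

The second projection is then the same trick one level up: specialise the specialiser itself to the interpreter, and the result maps any source program to such a target function, which is exactly what a compiler does.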

So I have a *lot* of questions about metacompilation. It sounds almost too good to be true (though there are good reasons to believe it isn't). Some of them are very technical, and I will probably save those for tomorrow's post. The following question, though, is more philosophical, and practical (can those two happen at the same time?).

Why aren't supercompilers/partial evaluators used as general compilation systems? If you can write a supercompiler in some nice, high-level language like OCaml, and then all you have to do is write an interpreter for your language of choice in order to produce a compiler, why isn't this done?

This seems like the holy grail of leverage, or code re-use. You could write one really good metacompiler for a good language for specifying languages (which ML was originally designed for, and of which OCaml is a descendant). At that point, supporting every other language (a front end, in the terminology of GCC) is simply the act of writing an interpreter. Writing an interpreter is *radically* simpler than building a sophisticated compiler; it is basically equivalent to a specification of the language. The process of language design can hardly be made easier than this, since interpreters are pretty much the minimal requirement for specifying the operational semantics of a language!
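As a sketch of just how little an interpreter demands, here is one for a toy stack language: each branch of this one function is, in effect, one rule of the language's operational semantics. The instruction set is invented purely for illustration.

```python
def run(program, stack=None):
    """Interpret a toy stack language.

    Each clause below is one rule of the language's operational
    semantics; the whole definition doubles as the language spec.
    """
    stack = [] if stack is None else stack
    for instr in program:
        if isinstance(instr, int):   # literals push themselves
            stack.append(instr)
        elif instr == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown instruction: {instr}")
    return stack

print(run([2, 3, "add", 4, "mul"]))  # [20]
```

Handing a definition like this to a metacompiler, rather than hand-writing a code generator for it, is exactly the leverage being argued for above.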

My question is: why isn't this general procedure carried out in practice? Are metacompilers not good enough in practice to produce high-quality, performant programs? Has it just not been tried? If not, I'd like to see some effort expended on this, since it seems like a crucial technology that could be leveraged far more than any of the "shared VM" projects like the CLR behind C#, with minimal cost to language implementors.

