Code transformation, or meta-compilation as it is sometimes called (the general notion covering techniques such as Partial Evaluation, Supercompilation, Deforestation, and my advisor's Distillation), is a powerful technique in computer programming. Its benefits (and drawbacks) are almost certainly not sufficiently studied.
I was just conversing with my room-mate Tom about meta-compilation, and I made the supposition that meta-compilers are somewhat like the technology of the lathe. There are a huge number of technologies that require a lathe in order to be produced efficiently, so a lathe can be viewed as a major nexus in the dependency graph of machining technology. The lathe is also very nearly a fixed precondition for the mill, and the mill is the crux of modern machining: it allows you to construct almost any currently available machined part. Without the mill we really wouldn't have the industrial age at all. Do such things exist in computer programming?
Metacompiler technology is incredibly powerful. It is usually considered a superset of partial evaluation. It is a compiler technique that starts in the source language and ends in the source language, rather than in some target language as a standard compiler does. While this might at first sound trivial or irrelevant, a few examples can convince one that it is actually a very useful tool. (2*2) can be coded in most languages, but really it is just the literal 4; partial evaluation will reduce this computation at compile time, eliminating the cost from the final executable. The power doesn't stop there, though. One particularly convincing example that I found was the partial evaluation of a fairly simple grammar recogniser (parser), which reduced the problem directly from an NDFA to a DFA, which is basically the compilation process used for regexps.
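To make the (2*2) example concrete, here is a minimal sketch of a partial evaluator over a toy arithmetic language (the type and function names are my own invention, not any particular system): subexpressions whose inputs are known get folded away at specialization time, and anything depending on an unknown variable is left behind as residual code.

```ocaml
(* A toy expression language with variables whose values may be unknown. *)
type expr =
  | Lit of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

(* Partially evaluate an expression given a (possibly incomplete)
   environment of known variable values. Fully known subtrees are
   folded to literals; unknown parts remain as residual code. *)
let rec peval env e =
  match e with
  | Lit _ -> e
  | Var x -> (try Lit (List.assoc x env) with Not_found -> e)
  | Add (a, b) ->
      (match peval env a, peval env b with
       | Lit m, Lit n -> Lit (m + n)
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match peval env a, peval env b with
       | Lit m, Lit n -> Lit (m * n)
       | a', b' -> Mul (a', b'))

(* peval [] (Mul (Lit 2, Lit 2))                  ==>  Lit 4
   peval [] (Add (Var "x", Mul (Lit 2, Lit 2)))   ==>  Add (Var "x", Lit 4) *)
```

The parser example is the same idea at a larger scale: once the grammar is a static input, the interpretive work of walking it can be specialized away, leaving only the residual recognition code.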
The Futamura projections give us some idea of just how powerful the technique is. If we have a metacompiler, we can metacompile an interpreter with respect to a program written in the source language of the interpreter, and arrive at an executable in the language of the metacompiler. In fact, if we metacompile the metacompiler with respect to the interpreter, we can generate a compiler!
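The projections are easiest to state as type signatures. Below is a rough sketch, in OCaml, of how I think of them; the names (SPECIALIZER, mix, mix_code, the 'a code type) are illustrative placeholders rather than an existing library, with 'a code standing for "the text of a program that computes an 'a".

```ocaml
(* A sketch of the first two Futamura projections. Everything here is
   an illustrative placeholder, not a real library. *)
module type SPECIALIZER = sig
  type 'a code

  (* The specializer ("mix"): given the code of a two-argument program
     and a value for its first (static) argument, produce residual code
     for a one-argument program. *)
  val mix : ('s * 'd -> 'r) code -> 's -> ('d -> 'r) code

  (* The specializer's own source text, in the uncurried form that mix
     expects (needed for self-application). *)
  val mix_code : ((('s * 'd -> 'r) code * 's) -> ('d -> 'r) code) code
end

module Futamura (S : SPECIALIZER) = struct
  open S

  (* An interpreter for some object language L, written in the
     specializer's language: it takes L source text plus the
     L-program's input, and returns that program's result. *)
  type ('d, 'r) interpreter = (string * 'd -> 'r) code

  (* 1st projection: specializing the interpreter to one L-program
     yields a compiled ("target") version of that program. *)
  let target (interp : ('d, 'r) interpreter) (src : string) : ('d -> 'r) code =
    mix interp src

  (* 2nd projection: specializing the specializer to the interpreter
     yields a compiler from L source text to target programs. *)
  let compiler (interp : ('d, 'r) interpreter) : (string -> ('d -> 'r) code) code =
    mix mix_code interp
end
```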
So I have a *lot* of questions about metacompilation. It sounds almost too good to be true (though there are good reasons to believe that it isn't). Some of them are very technical, and I will probably save those for tomorrow's post. The following question, though, is more philosophical, and practical (can those two happen at the same time?).
Why aren't supercompilers/partial evaluators used as general compilation systems? If you can write a supercompiler in some nice, high-level language like OCaml, and after that all you have to do is write an interpreter for your language of choice in order to produce a compiler, why isn't this done?
This seems like the holy grail of leverage, or code re-use. You could write one really good compiler for a good language for specifying languages (which is what ML was originally designed for, and of which OCaml is a descendant), and one really good metacompiler. At that point, implementing every other language (a front end, in the terminology of GCC) is simply the act of writing an interpreter, like the sketch below. Writing an interpreter is *radically* simpler than writing a sophisticated compiler; it is basically equivalent to a specification of the language. The process of language design can hardly be facilitated more than this, since an interpreter is pretty much the minimal requirement for specifying the operational semantics of a language!
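For a sense of just how small such a front end could be, here is a minimal sketch of the kind of definitional interpreter I have in mind (the toy object language and every name in it are invented for illustration). Per the first Futamura projection, specializing this interpreter with respect to a fixed program is what would turn that program into a compiled executable.

```ocaml
(* A definitional interpreter for a tiny object language: this is the
   whole "front end" one would hand to the metacompiler. *)
type term =
  | Int of int
  | Var of string
  | Lam of string * term
  | App of term * term
  | Prim of (int -> int -> int) * term * term   (* built-in binary op *)

type value =
  | VInt of int
  | VClo of string * term * env
and env = (string * value) list

(* Call-by-value evaluation: this function *is* the operational
   semantics of the toy language. *)
let rec eval (env : env) (t : term) : value =
  match t with
  | Int n -> VInt n
  | Var x -> List.assoc x env
  | Lam (x, body) -> VClo (x, body, env)
  | App (f, a) ->
      (match eval env f with
       | VClo (x, body, cenv) -> eval ((x, eval env a) :: cenv) body
       | VInt _ -> failwith "applied a non-function")
  | Prim (op, a, b) ->
      (match eval env a, eval env b with
       | VInt m, VInt n -> VInt (op m n)
       | _ -> failwith "primitive applied to non-integers")

(* e.g. eval [] (App (Lam ("x", Prim (( * ), Var "x", Var "x")), Int 3))
        ==>  VInt 9 *)
```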
My question is: why isn't this general procedure actually carried out in practice? Are metacompilers not good enough in practice to produce high-quality, performant programs? Has it just not been tried? If not, I'd like to see some effort expended on this, since it seems like a crucial technology that could be leveraged far more than any of the "shared VM" projects like C#, with minimal cost to language implementors.