I have a bunch of ideas that I don't have time to follow up on, but I'd like to retain them in some fashion, so here goes.
1.
A long time ago I had a question about the shortest assembly program (parameterised by the instruction set, of course!) that could encode the generators of the quaternions under multiplication. It is easy enough to program, but as I was doing it I found myself highly doubtful that I was choosing the "right" encoding. In fact I almost certainly wasn't. Would there not be some very tight encoding in terms of "normal" assembly language operators? This has been a persistent question in my mind, one that comes up frequently in different and often more general forms. If you want to make a fast compiler, how do you know what instructions to output? If you have a high-level specification, how can you avoid the ridiculously high overhead usually associated with extremely high-level languages?
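To make the question concrete, here is a minimal Python sketch of one obvious (and almost certainly suboptimal) encoding of the quaternion group Q8 under multiplication. The (sign, axis) representation and the multiplication table are choices I'm making here for illustration, not the encoding from the original exercise.

```python
# A deliberately naive encoding of Q8 = {±1, ±i, ±j, ±k}.
# Each element is a pair (sign, axis) with axis 0 = 1, 1 = i, 2 = j, 3 = k.
# `basis` gives the product of two positive basis elements as (sign, axis);
# the signs of the operands then just multiply through.  The point is not
# that this is a good encoding -- it almost certainly isn't -- but that it
# is the kind of "obvious" encoding a shortest-program search might beat.

basis = {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
    (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
    (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0),
}

def qmul(a, b):
    """Multiply two Q8 elements given as (sign, axis) pairs."""
    (sa, xa), (sb, xb) = a, b
    s, x = basis[(xa, xb)]
    return (sa * sb * s, x)

# i * j = k, and j * i = -k:
assert qmul((1, 1), (1, 2)) == (1, 3)
assert qmul((1, 2), (1, 1)) == (-1, 3)
```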
Today I found this fabulous paper (from 1987!) that deals with some of the basic issues involved in finding the shortest program for a given problem. It's called "Superoptimizer: A Look at the Smallest Program". I'd link to it, but I got it off the ACM, so if you are one of the few people who has a subscription you'll know where to find it, and otherwise I don't want to give the money-grubbers the link-love.
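The core idea is easy to caricature in a few lines: enumerate straight-line programs of increasing length and keep the first one that matches the specification on a set of test inputs. Here is a toy sketch of that loop. The five-instruction machine and the absolute-value spec are stand-ins I made up; the real superoptimizer of course works over an actual machine's instruction set and is much cleverer about testing candidates.

```python
# Toy superoptimizer: exhaustively enumerate instruction sequences of
# increasing length and return the first one matching spec() on all tests.
from itertools import product

# Each "instruction" maps the register pair (a, b) to a new pair.
INSTRS = {
    "neg a":    lambda a, b: (-a, b),
    "mov b, a": lambda a, b: (a, a),
    "add a, b": lambda a, b: (a + b, b),
    "sub a, b": lambda a, b: (a - b, b),
    "max a, b": lambda a, b: (max(a, b), b),
}

def spec(x):
    return abs(x)          # the function we want the shortest program for

TESTS = [-7, -1, 0, 1, 5, 42]

def run(seq, x):
    a, b = x, 0            # register a holds the input, b starts at 0
    for name in seq:
        a, b = INSTRS[name](a, b)
    return a

def superoptimize(max_len=3):
    for length in range(1, max_len + 1):
        for seq in product(INSTRS, repeat=length):
            if all(run(seq, x) == spec(x) for x in TESTS):
                return seq
    return None

print(superoptimize())     # e.g. ('mov b, a', 'neg a', 'max a, b')
```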
Some other thoughts in this region of my thought-space. Can you take a given specification with a proof and systematically transform it to enumerate the space of satisfying programs? If not, why not? Even if you can't, might it not be interesting just to perform large searches of this space? Are there ways of using the transformation operators such that programs only ever decrease, or increase minimally, in size? If so, then perhaps we can call the search good at some size threshold, since very large programs are unlikely to be good because of locality issues. Automatic Design of Algorithms Through Evolution is in a very similar vein.
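Here is a rough sketch of what I mean by searching with transformation operators under a size threshold: start from a correct but naive program, apply semantics-preserving rewrites, and refuse to explore anything more than a little larger than the best program seen so far. The tuple expression language and the handful of rewrite rules are invented purely for illustration.

```python
# Search the space reachable through semantics-preserving rewrites,
# only accepting candidates within `slack` nodes of the best found so far.
import heapq
from itertools import count

# Expressions: ("x",), ("const", n), ("neg", e), ("add", e1, e2), ("mul", e1, e2)
def size(e):
    return 1 + sum(size(c) for c in e[1:] if isinstance(c, tuple))

RULES = [
    lambda e: e[1][1] if e[0] == "neg" and e[1][0] == "neg" else None,   # --x -> x
    lambda e: e[1] if e[0] == "add" and e[2] == ("const", 0) else None,  # x+0 -> x
    lambda e: e[1] if e[0] == "mul" and e[2] == ("const", 1) else None,  # x*1 -> x
    lambda e: ("const", 0) if e[0] == "mul" and e[2] == ("const", 0) else None,
]

def rewrites(e):
    """All expressions reachable from e by one rewrite, at any subterm."""
    for rule in RULES:
        out = rule(e)
        if out is not None:
            yield out
    for i, child in enumerate(e[1:], start=1):
        if isinstance(child, tuple):
            for new_child in rewrites(child):
                yield e[:i] + (new_child,) + e[i + 1:]

def minimise(e, slack=1):
    """Explore smallest candidates first; prune anything more than
    `slack` nodes bigger than the best program seen so far."""
    tie = count()                      # heap tie-breaker
    best, frontier, seen = e, [(size(e), next(tie), e)], {e}
    while frontier:
        _, _, cur = heapq.heappop(frontier)
        if size(cur) < size(best):
            best = cur
        for nxt in rewrites(cur):
            if nxt not in seen and size(nxt) <= size(best) + slack:
                seen.add(nxt)
                heapq.heappush(frontier, (size(nxt), next(tie), nxt))
    return best

# ((x + 0) * 1) collapses to x:
print(minimise(("mul", ("add", ("x",), ("const", 0)), ("const", 1))))
```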
2.
Concurrency is a nasty problem. It doesn't have a nice formalism that everyone in the CS world can agree on. There must be like 500 different formalisms, all of them better (easier, not necessarily faster) than the ones we actually use (locking, threads, condition variables), but none of them stands out as "the right thing".
I recently found a paper called "Socially Responsive and Environmentally Friendly Logic". I love the title :) But aside from that, the formalism is very nice. It is something that I've contemplated a bit but never had the drive to actually work out formally. The basic idea comes from the observation that Classical Logic and Intuitionistic Logic can be viewed as two-player games. The game is pretty simple. If I have a formula φ, then to win I have to prove it. If I have ¬φ, then the roles swap: my opponent has to prove φ, and I win if he fails. If I have φ ∧ ψ, then my opponent gets to pick one of the conjuncts and I have to prove it. If I have ∀x.φ, then my opponent gets to pick any stand-in for x that he would like. You can probably guess the rest (or look it up). The logic in this paper breaks the essentially two-person nature of the game. One interesting practical feature of negation in the traditional logics is that it gives rise to continuations through the Curry-Howard correspondence. So what does negation give rise to in the N-player games defined by Abramsky? I'm not sure, but I suspect it might give process migration! Something worth looking into.
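To make the two-player game concrete before worrying about N players, here is a small Python sketch of the classical game (essentially Hintikka-style game semantics) over a finite domain, so that "my opponent picks a stand-in for x" is an actual move. The tuple encoding of formulas and the example facts are my own; this only covers the two-player case, not Abramsky's generalisation.

```python
# Game-theoretic semantics for first-order formulas over a finite domain.
# Negation swaps the players' roles; ∧ and ∀ are opponent moves,
# ∨ and ∃ are my moves.

def verifier_wins(phi, env, domain, facts):
    """True iff the Verifier (the player trying to prove phi) has a
    winning strategy, given variable bindings `env` and atomic `facts`."""
    op = phi[0]
    if op == "atom":                       # e.g. ("atom", "P", "x", "y")
        return (phi[1],) + tuple(env[v] for v in phi[2:]) in facts
    if op == "not":                        # roles swap
        return not verifier_wins(phi[1], env, domain, facts)
    if op == "and":                        # the opponent picks the conjunct
        return all(verifier_wins(p, env, domain, facts) for p in phi[1:])
    if op == "or":                         # I pick the disjunct
        return any(verifier_wins(p, env, domain, facts) for p in phi[1:])
    if op == "forall":                     # ("forall", "x", body): opponent picks x
        return all(verifier_wins(phi[2], {**env, phi[1]: d}, domain, facts)
                   for d in domain)
    if op == "exists":                     # I pick the witness
        return any(verifier_wins(phi[2], {**env, phi[1]: d}, domain, facts)
                   for d in domain)
    raise ValueError(op)

# ∀x ∃y. P(x, y) over {0, 1}, where P holds of each element and "the other":
facts = {("P", 0, 1), ("P", 1, 0)}
phi = ("forall", "x", ("exists", "y", ("atom", "P", "x", "y")))
print(verifier_wins(phi, {}, [0, 1], facts))   # True
```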