I was thinking about cut elimination on my way back from work today, due in large part to a post on the cut rule by sigfpe on A Neighborhood of Infinity.
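For reference, this is the cut rule of the sequent calculus: it lets a proof go through an intermediate formula A (a lemma), which is then discharged:

```latex
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}
     {\Gamma, \Sigma \vdash \Delta, \Pi}
\ \text{(cut)}
```

Cut elimination is the theorem that any proof using this rule can be rewritten into one that never does.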

It occurred to me that it is a bit strange that cut-free proofs can be so tremendously much larger than cut-full ones, while directly constructing cut-full proofs is so difficult. It seems somehow unfair.

As I was thinking about this, I realised that one could find cut-free proofs automatically and then reduce them with reverse cut elimination to produce extremely cut-full proofs. These proofs should be very economical computationally. If one keeps the terms that correspond to the proofs alongside them, one should be able to obtain a source-to-source translation that might have performance benefits.
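A toy illustration of the connection (my own sketch, not from the post): under the Curry-Howard reading, a cut is a lemma proved once and cited many times, like a let-binding in a program, while cut elimination inlines the lemma at every citation site. The names and representation below are invented purely for the example.

```python
# Terms are nested tuples; strings are leaves or references to bindings.

def size(term):
    """Count the nodes in a term represented as nested tuples/strings."""
    if isinstance(term, tuple):
        return 1 + sum(size(child) for child in term)
    return 1

def cut_free(depth):
    """Cut-free style: each level writes out the previous level twice."""
    term = "a"
    for _ in range(depth):
        term = ("pair", term, term)  # the subterm is duplicated wholesale
    return term

def cut_full(depth):
    """Cut-full style: each level is a named lemma cited twice by name."""
    bindings = [("t0", "a")]
    for i in range(depth):
        bindings.append((f"t{i+1}", ("pair", f"t{i}", f"t{i}")))
    return bindings

# The inlined term doubles at each level; the shared one grows linearly.
print(size(cut_free(20)))                     # 3145726 (= 3 * 2**20 - 2)
print(sum(size(d) for _, d in cut_full(20)))  # 81 (linear in depth)
```

Reverse cut elimination, on this reading, is common-subexpression abstraction: spotting the repeated subterm and hoisting it into a binding, which is why it could plausibly pay off as a source-to-source translation.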

Another, totally far-out idea came to me as well. Mathematicians use lemmas repeatedly, so perhaps the process of finding cuts is generally useful. Specifically, if a lemma is useful in simplifying one proof, maybe it is more likely to be useful in simplifying other proofs. One could come up with statistical measures over random syntactically valid sentences to test whether such broadly useful lemmas exist.