
Posts

Showing posts from 2005

The Logic of Space part III

OK, so last time we left off with a very informal discussion of Venn diagrams and how they relate to Boolean logic. First let us do a little set theory, and then we'll start drawing connections with the previous post to make it a bit more rigorous. We will stick with completely finite sets, so keep that in mind. A set is basically a collection of distinguishable objects. Sets have no notion of the number of times an object is in them; they simply contain an object or they do not. A set (if it is finite) can be written in terms of its elements, for instance: S = {a,b,c} is the set S with elements a, b, and c. A map can be thought of as arrows that leave from one set (the domain) and arrive at another (the codomain). We will also introduce a special set 2^S, which is an exponential or a "homset" called hom(S,2). S will be just a collection of elements as above, and 2 will be the collection of elements {0,1}. We can think of a homset as a collection of all ma
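The correspondence the excerpt is building toward can be made concrete. Here is a minimal sketch in Python (the helper names `to_char` and `from_char` are my own): subsets of a finite set S correspond exactly to maps S → 2.

```python
# A minimal sketch: subsets of a finite set S correspond to maps S -> 2,
# where 2 = {0, 1}.  A subset's "characteristic function" decides membership.

S = {"a", "b", "c"}

def to_char(subset):
    """Turn a subset of S into its characteristic function S -> {0, 1}."""
    return lambda x: 1 if x in subset else 0

def from_char(f):
    """Turn a map S -> {0, 1} back into the subset it describes."""
    return {x for x in S if f(x) == 1}

# hom(S, 2) has 2^|S| elements: one map for each subset of S.
print(2 ** len(S))                             # 8
print(sorted(from_char(to_char({"a", "c"}))))  # ['a', 'c'] — the round trip is lossless
```

This is why the set of maps is written 2^S: for |S| = 3 there are 2^3 = 8 such maps, one per subset.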

The Logic of Space Part II

I was checking my access logs and noticed that I was #1 on MSN for "logic of space", which is a bit embarrassing since my last post had so little content, so I've decided to make it up to those who end up here. We can start with Venn diagrams. Venn diagrams are spatial representations of logic; they can be used to reason in Boolean logic. A Boolean algebra can be thought of as a tuple < S,∨,∧,¬,0,1 > where S is a set (a collection of elements), ∨ and ∧ are binary operators, ¬ is a unary operator, and 0 and 1 are nullary operators (if you don't know what I'm talking about yet, don't worry). We can think of 0 and 1 as false and true respectively. In the diagram above, S represents everything in the picture. A and B are subsets of S, which we will write A < S, meaning that A is a proper subset of S. White in our pictures will always represent things that are not in the set of interest. When you look at these diagrams you can often make sense of them by re
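The Venn-diagram reading of the tuple < S,∨,∧,¬,0,1 > can be checked directly: take subsets of a small S, read ∨ as union, ∧ as intersection, and ¬ as complement within S. A small sketch (the particular S and A here are invented for illustration):

```python
# A Boolean algebra of subsets of S, matching the Venn picture:
# join = union, meet = intersection, not = complement within S,
# 0 = the empty set, 1 = S itself.

S = {1, 2, 3, 4}
join = lambda a, b: a | b
meet = lambda a, b: a & b
neg  = lambda a: S - a
zero, one = set(), S

A = {1, 2}
print(join(A, neg(A)) == one)   # True: A or not-A covers the whole picture
print(meet(A, neg(A)) == zero)  # True: A and not-A share nothing
```

The two printed laws (excluded middle and non-contradiction) are exactly the facts the shaded diagrams encode.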

The Logic of Space

In my search for a Quantum Logic I've started looking at formalisations of logic for classical mechanics. This seems like a reasonable place to start. Of course some of you know that you can easily get an intuitionistic logic from spatial operations, where meet and join are given by intersection and union (almost: we have to be careful about boundaries). For more on how this is done and its connection with modal logic, look at thatlogicblog. This is nearly enough to start reasoning about classical mechanics, since we can view space as 4-dimensionally static. I haven't worked through it yet since I've been a bit busy. I'll update you later. [Update: See The Logic of Space Part II]
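The "careful about boundaries" caveat is where the intuitionism comes from, and it can be seen on a tiny example. In the algebra of open sets, negation sends an open A to the largest open disjoint from A (the interior of the complement), and double negation need not give A back. A hedged sketch using the two-point Sierpinski space, whose only opens are {}, {1}, and {0,1}:

```python
# Why open sets give an *intuitionistic* rather than Boolean logic:
# negation of an open A is the largest open disjoint from A, and
# not-not-A can be strictly bigger than A.

OPENS = [set(), {1}, {0, 1}]  # the opens of the Sierpinski space

def neg(a):
    """Largest open set disjoint from a (interior of the complement)."""
    return max((u for u in OPENS if not (u & a)), key=len)

print(neg({1}))        # set(): only the empty open avoids {1}
print(neg(neg({1})))   # {0, 1}: so not-not {1} != {1}
```

So the law ¬¬A = A fails, which is precisely the Boolean law one gives up when meet and join come from spatial operations.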

Skolem's paradox

I was cruising the net looking for information about Skolemization when I stumbled on a paradox that I'd never seen before. Skolem's paradox is one of the most astounding paradoxes that I've ever seen. Here are a few links I dug up on the subject: The Löwenheim-Skolem Theorem, MOORE ON SKOLEM'S PARADOX, and Reflections on Skolem's Paradox. I don't totally understand all the implications of the paradox, but I do understand some of them, and I find it both fascinating and in some ways disheartening.

Wish List

I've often had questions that I wanted answers to and didn't know who to ask. Sometimes the questions are unanswerable unless some mighty alien intelligence were to come and give me the answers. Often, however, they are questions that should in principle be answerable. Here are some questions from my current wish list. 1. Is it the case that the operational procedures used to "prove" queries in logic programming languages are exactly program extraction? Am I missing something in this picture? 2. The Curry-Howard correspondence gives rise to a simple logic as the type calculus in functional programming languages. In a language like Haskell this means that you write mostly in the functional programming language, and most of the logical implications are inferred using Hindley-Milner type inference. If we go to the other extreme, we see things like Coq that allow us to do program extraction (extraction of a functional program) from a logical specification. M
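The Curry-Howard reading in question 2 can be sketched even without a rich type system: read conjunction as pairing and implication as the function arrow, so a proof of (A ∧ B) → A is literally the first-projection program. A toy illustration in Python with type hints (a real treatment needs a typed language like Haskell or Coq, where the checker verifies the "proof"):

```python
# Curry-Howard, informally: propositions are types, proofs are programs.
# Conjunction A ∧ B  ~  the pair type (A, B); implication A → B  ~  a function.
from typing import Callable, Tuple, TypeVar

A = TypeVar("A"); B = TypeVar("B"); C = TypeVar("C")

def proj1(p: Tuple[A, B]) -> A:
    """A 'proof' of (A and B) -> A: take the first component of the evidence."""
    return p[0]

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """A 'proof' that implication is transitive: chain the two proofs."""
    return lambda x: g(f(x))

print(proj1((1, "evidence for B")))                   # 1
print(compose(lambda x: x + 1, lambda y: y * 2)(3))   # 8
```

Program extraction, as in Coq, runs this reading in the other direction: from a checked proof of the specification, extract the program the proof already contains.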

More on I/O, effects and logic programming

I've been having some talks on #prolog about the best way to implement effects in logic programming languages and have come across some really good information, due in large part to "ski", who hangs out on #prolog. There seem to be quite a few ways to implement effects for I/O. I'm not sure at this point how many of them will be appropriate for implementing non-monotonic logic, by which I mean allowing the data store to grow and shrink. Apparently the monad strategy can move into the logic programming scene without too much difficulty; there are even a few papers about using monads in lambda-Prolog that I've come across lately. This method has a lot going for it, as it seems to have worked fairly well in Haskell and Clean. Another way is using linearity. This has a lot of appeal, since linear logic has a well-understood declarative semantics and it doesn't force us to use higher-order constructs in order to achieve I/O. I'm not sure how this wo
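To see what any of these effect strategies has to thread effects through, here is a toy sketch (the predicate and database are invented for illustration) of the nondeterministic search at the heart of logic programming. This is the behaviour the list monad captures in Haskell: each clause is a choice point, and failure silently prunes a branch, like backtracking in Prolog.

```python
# A toy rendering of:  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# Each comprehension clause is a choice point; the `if` is unification
# of the shared variable Y, and a mismatch just prunes that branch.

parent = [("tom", "bob"), ("bob", "ann")]

def grandparent(db):
    return [(x, z)
            for (x, y) in db       # choose a parent fact
            for (y2, z) in db      # choose another
            if y == y2]            # "unify" the shared variable

print(grandparent(parent))  # [('tom', 'ann')]
```

Interleaving real I/O or a mutable fact store with this kind of search is exactly where the monadic and linear approaches differ.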