Ok, so last time we left off with a very informal discussion about Venn diagrams and how they relate to Boolean Logic.

First, let us do a little set theory and then draw connections with the previous post to make things a bit more rigorous. We will stick entirely to finite sets, so keep that in mind.

A set is basically a collection of distinguishable objects. Sets have no notion of the number of times an object is in them; they simply contain an object or they do not. A finite set can be written in terms of its elements, for instance: S = {a,b,c} is the set S with elements a, b, and c.

A map can be thought of as a collection of arrows that leave from one set (the domain) and arrive at another (the codomain).

We will also introduce a special set 2^S, which is an exponential, or a "homset" called hom(S,2). S will be just a collection of elements as above, and 2 will be the set {0,1}. We can think of this homset as the collection of all maps from S into 2. In this particular case a map has a special property because of 2: it sends each element either to 0 or to 1. This means that we can fully describe a map from S to 2 simply by naming the set of elements that go to 1 (or, equivalently, the set that goes to 0). See the illustration below.
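To make the naming trick concrete, here is a small Python sketch (not from the post itself; the helper names `char_map` and `subset_of` are just for illustration) showing that a map into 2 and the subset it names carry exactly the same information:

```python
def char_map(S, A):
    """Build the characteristic map f : S -> {0,1} of a subset A of S."""
    return {x: (1 if x in A else 0) for x in S}

def subset_of(f):
    """Recover the subset that a map f : S -> {0,1} sends to 1."""
    return {x for x, bit in f.items() if bit == 1}

S = {'a', 'b', 'c'}
f = char_map(S, {'a', 'c'})
# Round-tripping recovers the subset: the map and the subset are two
# descriptions of the same thing.
assert subset_of(f) == {'a', 'c'}
```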

Since S is finite, 2^S will also be finite. In fact, if we use the naming trick above, where we associate each map with the set of elements it sends to 1, the set of all maps can be described as the set of all subsets of S. Try it out on paper with S = {a,b,c}: 2^S = {{}, {a}, {b}, {c}, {a,b}, {a,c}, {b,c}, {a,b,c}}.
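Rather than doing it on paper, we can enumerate 2^S mechanically. A quick Python sketch (the `powerset` helper is my own):

```python
from itertools import combinations

def powerset(S):
    """All subsets of S, i.e. the maps in hom(S, 2) named by what they send to 1."""
    elems = sorted(S)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

P = powerset({'a', 'b', 'c'})
assert len(P) == 2 ** 3                 # |2^S| = 2^|S| = 8
assert frozenset({'a', 'c'}) in P       # the subset easily missed on paper
```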

Ok, so there are a couple of operations on sets that we will be interested in. One is ∩, pronounced "intersection", and the other is ∪, pronounced "union". The intersection of two sets is simply the set of all elements that occur in both sets. So if we let A = {a,b,c,d} and B = {c,d,e,f} then A∩B = {c,d}. The union of two sets is the set of every element that is in either of the sets, so A∪B = {a,b,c,d,e,f}.
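In Python these two operations come built in on finite sets, so the example above is a one-liner sanity check:

```python
A = {'a', 'b', 'c', 'd'}
B = {'c', 'd', 'e', 'f'}

assert A & B == {'c', 'd'}                       # intersection: in both sets
assert A | B == {'a', 'b', 'c', 'd', 'e', 'f'}   # union: in either set
```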

Ok, now as it turns out we want to talk about another structure called a topology. A topology is a set, in our case S, together with a set of subsets of S (satisfying some closure conditions that we will gloss over for now). We will write it as the tuple <S, T>.

Now let us go back to Boolean algebras momentarily. We had a Boolean algebra B being a tuple <B, ∧, ∨, ¬, 0, 1>. Now let us identify B with 2^S, ∧ with ∩, ∨ with ∪, ¬ with set complement, 0 with {} and 1 with S. By doing this we have recovered a Boolean algebra simply by looking at the topology of S together with all of its subsets and the intersection and union operations!
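Because everything here is finite, we can verify the Boolean-algebra laws by brute force. A Python sketch that checks the complement and distributivity laws over every subset of a three-element S:

```python
from itertools import combinations

S = frozenset({'a', 'b', 'c'})
# All subsets of S: these are the elements of our Boolean algebra 2^S.
subsets = [frozenset(c) for r in range(4) for c in combinations(sorted(S), r)]

# Identify: meet = intersection, join = union, 0 = {}, 1 = S, ¬A = S - A.
for A in subsets:
    comp = S - A
    assert A | comp == S and A & comp == frozenset()   # complement laws
    for B in subsets:
        for C in subsets:
            assert A & (B | C) == (A & B) | (A & C)    # distributivity
```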

Ok, that is great, but it doesn't stop there. Topologies are not restricted to <S, 2^S>; we can look at other collections of subsets of S, and then it is interesting to ask whether we still get something like a Boolean algebra. It turns out, of course, that you don't get a Boolean algebra from just any set of subsets, but you do get some kind of algebra. The thing that you always get is called a lattice.

As it turns out, quantum mechanics also forms a kind of logic if we start with Hilbert space and use the idea of associating subspaces with propositions. It takes more than just the union and intersection of sets, however: we also need the concept of an algebraic closure operator (an operator C with the properties X ⊆ C(X), C(C(X)) = C(X), and X ⊆ Y → C(X) ⊆ C(Y)), since the join of Hilbert subspaces is a superset of their union. The closure in the case of Hilbert spaces sends a pair of subspaces to the space spanned by the basis vectors of both; this is actually the source of the "entanglement" in quantum mechanics which leads to so much confusion. From this we can look at the resulting lattice structure and find that it isn't much like a Boolean algebra at all, but is something called an "orthomodular lattice": a bounded lattice that is also an ortholattice (it has a complementation operation) and satisfies the orthomodular law, a weakened form of modularity. We can write the algebra as the tuple <S, ∧, ∨, ′, 0, 1>.
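As a toy stand-in for Hilbert-space closure, here is a Python sketch using linear span over GF(2) on 3-bit vectors (the `span` helper is purely illustrative, not anything from quantum mechanics proper). It checks the three closure axioms and shows the join, the span of the union, strictly exceeding the plain union:

```python
def span(X):
    """Closure operator: the GF(2)-linear span of a set of 3-bit vectors."""
    vecs = set(X) | {(0, 0, 0)}            # a span always contains zero
    changed = True
    while changed:                          # saturate under addition
        changed = False
        for u in list(vecs):
            for v in list(vecs):
                w = tuple(a ^ b for a, b in zip(u, v))  # XOR = GF(2) addition
                if w not in vecs:
                    vecs.add(w)
                    changed = True
    return frozenset(vecs)

X = {(1, 0, 0)}
Y = {(0, 1, 0)}
assert X <= span(X)                         # X ⊆ C(X)
assert span(span(X)) == span(X)             # C(C(X)) = C(X)
assert span(X) <= span(X | Y)               # monotonicity (X ⊆ X∪Y)
# The join strictly exceeds the union: (1,1,0) is in the span of the
# union but in neither individual span.
assert span(X | Y) > span(X) | span(Y)
```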

The modularity condition is very interesting but visually producing spaces that exhibit it is a bit tricky. I'll try to draw some nice pictures of spaces that display this behavior in an intuitive way in part IV.

P.S. I lied about getting to Heyting algebras didn't I! :)

Ok, I'll say something about it. When we talk about our typical notion of space as we learned it in high school, we run into some differences with the Boolean algebraic description. The topology that we provided for our Boolean space, 2^S, has an unusual property: if we look at the dual of 2^S, in the sense that we exchange 0 for 1 in our naming of the maps in hom(S,2), nothing changes. Suppose we have a complement operation in our topology, that is, an operator ¬ such that ¬A = S∖A, where ∖ produces the set of every element of S that is not also in A. Then the set of all complements of sets in 2^S is again 2^S. In topology we say that the tuple <S, T> consists of a set S and its open sets T; the complements of the open sets are the closed sets. In 2^S everything is both closed and open! In fact the silly mathematicians even say that such sets are "clopen".
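A quick Python check that complementation maps 2^S onto itself, which is exactly the "everything is clopen" property of the discrete topology:

```python
from itertools import combinations

S = frozenset({'a', 'b', 'c'})
# T = 2^S: the topology containing every subset of S.
T = {frozenset(c) for r in range(4) for c in combinations(sorted(S), r)}

complements = {S - A for A in T}
# The complement of every open set is again open, so every set
# is both closed and open: "clopen".
assert complements == T
```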

When we took algebra we learned that [0,1] is not the same as (0,1): the set that includes its boundary is not the same as the one that does not. This leads us to a different topology, in which we have a clear distinction between open and closed sets. In the standard description of space from high school algebra (that is, the topology induced by the Euclidean distance measure) there are only two sets that have the property of being clopen (can you guess what they are?).

So in a Heyting algebra we don't even bother with the notion of complementation, since it leads to complications. Instead we introduce a symbol → into our algebra, giving the algebra <S, ∧, ∨, →, 0, 1>. The symbol → is pronounced "implies".
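In the topological picture, A → B is the largest open set C with A ∩ C ⊆ B, and a pseudo-negation can be recovered as ¬A = A → 0. A brute-force Python sketch (the helper name is my own) on a small non-discrete topology shows this pseudocomplement coming apart from the set complement, so that excluded middle fails:

```python
def heyting_implies(A, B, opens):
    """A -> B: the largest open set C with A ∩ C ⊆ B (brute force)."""
    best = frozenset()
    for C in opens:
        if (A & C) <= B:
            best = best | C        # a union of opens is open in a topology
    return best

S = frozenset('abc')
# A non-discrete topology on S: a chain of opens.
opens = [frozenset(), frozenset('a'), frozenset('ab'), S]

A = frozenset('a')
bottom = frozenset()
not_A = heyting_implies(A, bottom, opens)   # pseudocomplement ¬A = (A -> 0)
assert not_A == frozenset()                 # NOT the set complement {'b','c'}
assert A | not_A != S                       # excluded middle fails here
```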

I promise to tell you more later :)

## Saturday, 10 December 2005

## Saturday, 3 December 2005

### The Logic of Space Part II

I was checking my access logs and noticed that I was #1 on MSN for "logic of space", which is a bit embarrassing since my last post had so little content, so I've decided to make it up to those who end up here.

We can start with Venn diagrams.

Venn diagrams are spatial representations of logic. They can be used to reason in Boolean logic. A Boolean algebra can be thought of as a tuple <S, ∨, ∧, ¬, 0, 1> where S is a set (a collection of elements), ∨ and ∧ are binary operators, ¬ is a unary operator, and 0 and 1 are nullary operators (if you don't know what I'm talking about yet, don't worry). We can think of 0 and 1 as false and true respectively.

In the diagram above, S represents everything in the picture. A and B are subsets of S, which we will write A<S, meaning that A is a proper subset of S. White in our pictures will always represent things that are not in the set of interest.

When you look at these diagrams you can often make sense of them by replacing the letters A, B, and S with familiar hierarchical objects. For example, take S as animals, A as mammals, and B as animals with legs. The statement A<S can be expressed as "All mammals are also animals". Since it is < (proper subset) and not ≤, the statement also implies that there are animals which are not mammals.

The symbols '∨' and '∧' can be used to produce new sets from subsets of S. '∨' is called 'join', but in our case it can be read as 'or'. '∧' is called 'meet' and can be read as 'and'.

In the above image we see A∨B. This can be read A or B, as in "Animals with legs or Mammals".

The above diagram, in light of our animals example, would represent mammals with legs (which of course is just about all mammals, excluding pinnipeds and whales).

Next we have the unary operator ¬, also called complementation or negation. In our case it gives us set complementation, by which we mean that ¬A represents all elements that are in the set S but not in the set A. With the animals example, ¬A is any animal that is not a mammal, which includes birds and reptiles.

With our Animals example we pictorially described the situation 'Not a mammal or not an animal with legs'. This set (¬A)∨(¬B) would have such elements as whales and lizards but would not include dogs.

De Morgan's law, '(¬A)∨(¬B) = ¬(A∧B)', is true for any subsets A and B of S in a Boolean algebra (but also in other algebras, including orthomodular lattices). This law is obeyed by our intuitive notion of inclusion with finite sets. Since ¬ is an involution in a Boolean algebra (¬¬x = x; read 'not not x' is equivalent to 'x'), we can also derive the dual formula: '(¬A)∧(¬B) = ¬(A∨B)'.
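The law is easy to machine-check on the animals example. A small Python sketch, using set difference for ¬:

```python
S = {'dog', 'whale', 'lizard', 'sparrow'}       # the animals
A = {'dog', 'whale'}                            # the mammals
B = {'dog', 'lizard', 'sparrow'}                # the animals with legs

lhs = (S - A) | (S - B)        # (¬A) ∨ (¬B): whales and lizards, but no dogs
rhs = S - (A & B)              # ¬(A ∧ B)
assert lhs == rhs              # De Morgan's law
assert S - (S - A) == A        # ¬ is an involution: ¬¬A = A
assert (S - A) & (S - B) == S - (A | B)   # the dual law
```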

Another important concept that will be more central in our next discussion, because complementation/negation is not a fundamental operation in Heyting algebras, is implication. In a Boolean algebra we can define the symbol → as 'A → B = (¬A) ∨ B'. At first this formula might be a bit difficult to interpret. It does however follow our intuitive notion of implication: if A is true then ¬A is false, so for the whole expression to hold B must be true.
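A small Python sketch of the set-theoretic version (the `implies` helper is just for illustration), together with the truth-value reading:

```python
S = {'dog', 'whale', 'lizard'}
A = {'dog', 'whale'}           # mammals
B = {'dog', 'lizard'}          # animals with legs

def implies(A, B, S):
    """A -> B, defined Boolean-style as (¬A) ∨ B inside the universe S."""
    return (S - A) | B

# "mammal -> has legs" holds exactly off the mammals-without-legs region.
assert implies(A, B, S) == {'dog', 'lizard'}

# On truth values: A -> B is false only when A holds and B fails.
for a in (False, True):
    for b in (False, True):
        assert ((not a) or b) == (not (a and not b))
```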

All of this discussion so far has been very informal and has been meant as an introduction to the ideas I plan to present next. In the next post I'll talk about the topology of Boolean algebras and Heyting algebras and how we can think of them pictorially.

## Thursday, 17 November 2005

### The Logic of Space

In my search for a Quantum Logic I've started looking at formalisations of logic for classical mechanics. This seems like a reasonable place to start.

Of course some of you know that you can easily get an intuitionistic logic from spatial operations, where meet and join are specified by intersection and union (almost; we have to be careful about boundaries). For more on how this is done and its connection with modal logic, look at thatlogicblog.

This is nearly enough to start reasoning about classical mechanics, since we can view space as a static four-dimensional block. I haven't worked through it yet since I've been a bit busy. I'll update you later.

[Update: See The Logic of Space Part II ]

## Wednesday, 26 October 2005

### Skolem's paradox

I was cruising the net looking for information about Skolemization when I stumbled on a paradox that I'd never seen before. Skolem's paradox is one of the most astounding paradoxes that I've ever seen. Here are a few links I dug up on the subject: The Löwenheim-Skolem Theorem, Moore on Skolem's Paradox, and Reflections on Skolem's Paradox. I don't totally understand all the implications of the paradox, but I do understand some of them, and I find it both fascinating and in some ways disheartening.

## Tuesday, 18 October 2005

### Wish List

I've often had questions that I wanted answers to and didn't know who to ask. Sometimes the questions are unanswerable unless some mighty alien intelligence were to come and hand me the answers. Often, however, they are questions that should in principle be answerable. Here are some questions from my current wish list.

1. Is it the case that the operational procedures that are used to "prove" queries in logic programming languages are exactly program extraction? Am I missing something in this picture?

2. The Curry-Howard correspondence gives rise to a simple logic as the type calculus in functional programming languages. In a language like Haskell this means that you write mostly in the functional programming language and most of the logical implications are inferred using Hindley-Milner type inference. If we go to the other extreme we see things like Coq that allow us to do program extraction (extraction of a functional program) from a logical specification. My question is whether it is possible to do something more like what Haskell does: namely, allow the user to occasionally describe the functional program associated with the specification. This would be like writing types and having program inference, rather than writing programs with type inference.

3. I've spent a lot of time thinking about transactional logic since it seems critical that we deal with schema evolution and the incorporation of new facts. How exactly does the Curry-Howard correspondence relate to schema evolution?

4. In the same vein as 3: is it possible to have schema evolution of types in a functional programming language by using atomic transactional code insertion? Clearly, without some sort of atomicity we will be able to arrive at inconsistent/type-unsafe intermediate stages.

5. What logical type calculus is sufficient to capture the Deutsch-Jozsa algorithm if we assume that the usual matrix algebra is the appropriate combinator calculus for implementing quantum algorithms (i.e. what is the explicit Curry-Howard correspondence)?

Anyone sensing a theme here? I'm totally in love with the CH correspondence. If you don't know about it, you ought to go look it up on Wikipedia and read a bit!

## Sunday, 28 August 2005

### More on I/O, effects and logic programming

I've been having some talks on #prolog about the best way to implement effects in logic programming languages and have come across some really good information, due in large part to "ski" who hangs out on #prolog.

There seem to be quite a few ways to implement effects for I/O. I'm not sure at this point how many of them will be appropriate for implementing non-monotonic logic, by which I mean allowing the data store to grow/shrink.

Apparently the monad strategy can move into the logic programming scene without too much difficulty. There are even a few papers about using monads in lambda-Prolog that I've come across lately. This method has a lot going for it, as it seems to have worked fairly well in Haskell and Clean.

Another way is using linearity. This has a lot of appeal since linear logic has a well understood declarative semantics and it doesn't force us to use higher order constructs in order to achieve I/O. I'm not sure how this would interact with changing the data store though.

Mercury uses the linearity approach. First we declare the main predicate to be of a determinate mode, meaning that there is no way that we can have other choices. Now the I/O resource is exhausted when used, and we don't have to worry about backtracking over the use of the resource.

All of this new information makes the end goal of a fully declarative system where effects can take place seem more feasible.
