Saturday, 10 December 2005

The Logic of Space part III

Ok, so last time we left off with a very informal discussion about Venn diagrams and how they relate to Boolean Logic.

First let us do a little set theory and then we'll start drawing connections with the previous post to make it a bit more rigorous. We will stick with completely finite sets, so keep that in mind.

A set is basically a collection of distinguishable objects. Sets have no notion of the number of times an object is in them; they simply contain an object or they do not. A set (if it is finite) can be written in terms of its elements, for instance: S = {a,b,c} is the set S with elements a, b, and c.

A map can be thought of as arrows that leave from one set (the domain) and arrive at another (the codomain).

We will also introduce a special set 2^S, which is an exponential, or a "homset" called hom(S,2). S will be just a collection of elements as above, and 2 will be the set of elements {0,1}. We can think of this homset as the collection of all maps from the elements of S into the elements of 2. In this particular case a map has some special properties because of 2: it either sends an element to 0 or it sends it to 1. This means that we can fully describe a map from S to 2 simply by naming either the set of elements that go to 1 or the set that go to 0. See the illustration below.
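The naming trick is easy to play with in code. Here is a minimal sketch in Python (the set S = {a,b,c} and the particular map f are just illustrative choices):

```python
# A map f : S -> 2 represented as a dict sending each element to 0 or 1.
S = {"a", "b", "c"}
f = {"a": 1, "b": 0, "c": 1}   # one particular map from S into {0, 1}

# Name the map by the set of elements it sends to 1 ...
ones = {x for x in S if f[x] == 1}

# ... or by the set it sends to 0; either one fully determines f.
zeros = {x for x in S if f[x] == 0}

print(sorted(ones))   # ['a', 'c']
print(sorted(zeros))  # ['b']
```

Note that the two name-sets partition S between them, which is why naming either one is enough.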



Since S is finite, 2^S will also be finite. In fact, if we use the naming trick above, where we associate each map with the set of elements that go to 1, the set of all maps can then be described as the set of all subsets of S. Try it out on paper with S={a,b,c}: 2^S = {{},{a},{b},{c},{a,b},{a,c},{b,c},{a,b,c}}.
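The paper exercise can also be done with a few lines of Python, sketched here with `itertools` (the `powerset` helper is our own illustrative name, not a library function):

```python
from itertools import chain, combinations

# Enumerate 2^S as the list of all subsets of S.
def powerset(s):
    s = sorted(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = {"a", "b", "c"}
subsets = powerset(S)
print(len(subsets))   # 8, i.e. 2 ** len(S)
```

The count 2^|S| is exactly why the exponential notation 2^S is used.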

Ok, so there are a couple of operations on sets that we will be interested in. One is ∩, which is pronounced "intersection", and the other is ∪, which is pronounced "union". The intersection of two sets is simply all elements that occur in both sets. So if we let A={a,b,c,d} and B={c,d,e,f} then A∩B={c,d}. The union of two sets is every element that is in either of the sets, so A∪B={a,b,c,d,e,f}.
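Python's built-in set type gives us both operations directly, so the worked example above can be checked in two lines:

```python
# Intersection and union on the example sets from the text.
A = {"a", "b", "c", "d"}
B = {"c", "d", "e", "f"}

print(sorted(A & B))  # intersection: ['c', 'd']
print(sorted(A | B))  # union: ['a', 'b', 'c', 'd', 'e', 'f']
```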

Ok, now as it turns out we want to talk about another structure called a topology. A topology is a set, in our case S, together with a set T of subsets of S (with some closure conditions that we will mostly gloss over: T must contain the empty set and S itself, and be closed under unions and finite intersections). We will write it as the tuple <S, T>.

Now let us go back to Boolean Algebras momentarily. We had a Boolean Algebra being a tuple <B, ∧, ∨, ¬, 0, 1>. Now let us identify B with 2^S, ∧ with ∩, ∨ with ∪, ¬ with set complementation, 0 with {} and 1 with S. By doing this we have recovered a Boolean algebra simply by looking at the topology of S together with all its subsets and the intersection and union operations!
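We can do a brute-force sanity check of this claim for a small S. The sketch below verifies the complement laws and distributivity (a law Boolean algebras satisfy, and which will fail in the quantum case later) over every subset of a three-element set:

```python
from itertools import chain, combinations

# Check that 2^S with A∧B = A∩B, A∨B = A∪B, ¬A = S - A,
# 0 = {} and 1 = S obeys the Boolean-algebra laws.
def powerset(s):
    s = sorted(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = frozenset({"a", "b", "c"})
P = powerset(S)

for A in P:
    assert A | (S - A) == S                  # A ∨ ¬A = 1
    assert A & (S - A) == frozenset()        # A ∧ ¬A = 0
    for B in P:
        for C in P:
            # distributivity: A ∧ (B ∨ C) = (A ∧ B) ∨ (A ∧ C)
            assert A & (B | C) == (A & B) | (A & C)

print("2^S with intersection and union forms a Boolean algebra")
```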

Ok, that is great, but it doesn't stop there. Topologies are not restricted to <S, 2^S>; we can look at other sets of subsets of S, and then it is interesting to ask whether we still get something like a Boolean algebra. It turns out that you don't get a Boolean algebra from just any set of subsets, but you do get some kind of algebra. The structure you always get is called a lattice.

As it turns out, Quantum Mechanics also forms a kind of logic if we start with Hilbert space and use the idea of associating the subspaces with a logic. It's more than just the union and intersection of sets, however: we also need the concept of an algebraic closure operator (an operator C with the properties X ⊆ C(X), C(C(X)) = C(X), and X ⊆ Y → C(X) ⊆ C(Y)), since the join of two Hilbert subspaces is a superset of their union. This is actually the source of "entanglement" in quantum mechanics, which leads to so much confusion. The closure in the case of Hilbert spaces takes a set of vectors to the subspace spanned by them. From this we can then look at the lattice structure and come to find that it isn't much like a Boolean Algebra at all, but is something called an "Orthomodular Lattice": a bounded lattice that is also an ortholattice (it has a complementation operation) and satisfies the orthomodularity condition. We can write the algebra as the tuple <S, ∧, ∨, ', 0, 1>.
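To see concretely that subspace lattices behave differently from Boolean algebras, here is a toy sketch (the names and the restriction to a handful of subspaces of the plane are illustrative simplifications, not the full Hilbert-space construction). We take {0}, three distinct lines through the origin of R^2, and R^2 itself. Meet is intersection, but join is the closure of the union, i.e. the span, which jumps all the way up to the whole plane for any two distinct lines. Distributivity then fails:

```python
# A tiny fragment of the subspace lattice of R^2.
ZERO, PLANE = "0", "R^2"
X, Y, D = "x-axis", "y-axis", "diagonal"   # three distinct lines

def meet(a, b):
    # Intersection of subspaces.
    if a == b:
        return a
    if a == PLANE:
        return b
    if b == PLANE:
        return a
    return ZERO   # distinct lines (or a line and {0}) meet only in {0}

def join(a, b):
    # Span of the union -- the closure, NOT the plain union.
    if a == b:
        return a
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return PLANE  # any two distinct lines together span the plane

# Distributivity fails: D ∧ (X ∨ Y) ≠ (D ∧ X) ∨ (D ∧ Y).
lhs = meet(D, join(X, Y))              # D ∧ R^2 = the diagonal
rhs = join(meet(D, X), meet(D, Y))     # 0 ∨ 0 = {0}
print(lhs, rhs)
```

The left side keeps the whole diagonal while the right side collapses to {0}; this "join overshooting the union" is exactly the closure-operator phenomenon described above.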

The orthomodularity condition is very interesting, but visually producing spaces that exhibit it is a bit tricky. I'll try to draw some nice pictures of spaces that display this behavior in an intuitive way in part IV.

P.S. I lied about getting to Heyting algebras didn't I! :)

Ok, I'll say something about it. When we talk about our typical notions of spaces as we learned them in high school, we run into some differences with the Boolean algebraic description. The topology that we provided for our Boolean space, 2^S, has an unusual property: if we look at the dual of 2^S, in the sense that we exchange 0 for 1 in our naming of sets for hom(S,2), nothing changes. Suppose we have a complement operation in our topology, that is, an operator ¬ such that ¬A = S∖A, where ∖ produces the set of every element of S that is not also in A. Then the set of all complements of sets in 2^S is again 2^S. In topology we say that the tuple <S, T> is a set S together with its open sets T. The complements of the open sets are the closed sets. In 2^S everything is both closed and open! In fact the silly mathematicians even say that such sets are "clopen".
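The "everything is clopen" claim is easy to verify mechanically for a small S (a sketch; `powerset` is our own helper name):

```python
from itertools import chain, combinations

def powerset(s):
    s = sorted(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

S = frozenset({"a", "b", "c"})
T = powerset(S)   # the topology whose open sets are ALL subsets of S

# The complement of every open set is itself open, so every set is clopen.
clopen = all((S - A) in T for A in T)
print(clopen)   # True
```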

When we took algebra we learned that [0,1] is not the same as (0,1). That is, the set that includes its boundary is not the same as the one that does not. This leads us to a different topology, in which we have a clear distinction between open and closed sets. In the standard description of space from high school algebra (that is, the topology induced by the Euclidean distance measure) there are only two sets that have the property of being clopen (can you guess what they are?).

So in a Heyting algebra we don't even bother with the notion of complementation, since it leads to complications. Instead we introduce a symbol → into our algebra, giving us the algebra <S, ∧, ∨, →, 0, 1>. The symbol → is pronounced "implies".
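To give a taste of how → works when the open sets of a topology form a Heyting algebra, here is a sketch on the smallest interesting example, the Sierpiński space on two points (the point names are illustrative). The standard construction takes A → B to be the largest open set U with U ∩ A ⊆ B, and a pseudo-complement can then be defined as ¬A = A → 0. Unlike in a Boolean algebra, double negation can fail:

```python
# Open sets of the Sierpinski space on points {p, q}: {}, {p}, {p, q}.
EMPTY = frozenset()
P = frozenset({"p"})
ALL = frozenset({"p", "q"})
OPENS = [EMPTY, P, ALL]

def union(sets):
    out = frozenset()
    for s in sets:
        out = out | s
    return out

def implies(a, b):
    # A -> B: the largest open set U such that U ∩ A ⊆ B.
    return union(u for u in OPENS if (u & a) <= b)

# Pseudo-complement via ¬A = A -> 0.
not_P = implies(P, EMPTY)          # the empty set
not_not_P = implies(not_P, EMPTY)  # the whole space, NOT P itself
print(not_P == EMPTY, not_not_P == ALL)
```

Here ¬¬{p} = {p,q} ≠ {p}, so ¬ is not an involution the way it was in the Boolean case; this is exactly the "complication" that motivates taking → as primitive.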

I promise to tell you more later :)

Saturday, 3 December 2005

The Logic of Space Part II

I was checking my access logs and noticed that I was #1 on MSN for "logic of space", which is a bit embarrassing since my last post had so little content, so I've decided to make it up to those who end up here.

We can start with Venn diagrams.




Venn diagrams are spatial representations of logic. They can be used to reason in Boolean logic. A Boolean Algebra can be thought of as a tuple <S, ∨, ∧, ¬, 0, 1> where S is a set (a collection of elements), ∨ and ∧ are binary operators, ¬ is a unary operator, and 0 and 1 are nullary operators (if you don't know what I'm talking about yet, don't worry). We can think of 0 and 1 as false and true respectively.

In the diagram above, S represents everything in the picture. A and B are subsets of S, which we will write A < S, meaning that A is a proper subset of S. White in our pictures will always represent things that are not in the set of interest.

When you look at these diagrams you can often make sense of them by replacing the letters A, B, and S with familiar hierarchical objects. For example, take S as animals, A as mammals, and B as animals with legs. The statement A < S can be expressed as "All mammals are also animals". Since it is < (proper subset) and not ≤, the statement also implies that there are animals which are not mammals.

The symbols '∨' and '∧' can be used to produce new sets from subsets of S. '∨' is called 'join', but in our case it can be read as 'or'. '∧' is called 'meet' and can be read as 'and'.




In the above image we see A∨B. This can be read A or B, as in "Animals with legs or Mammals".




The above diagram, in light of our animals example, would represent mammals with legs (which of course is just about all mammals, excluding whales and pinnipeds).




Next we have the unary operator ¬ also called complementation or negation. In our case it gives us set complementation by which we mean ¬A represents all elements that are in the set S but not in the set A. With the animals example ¬A is any animal that is not a mammal which includes birds and reptiles.
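All three operations so far can be played with directly in Python sets, using a tiny illustrative universe of animals (the particular species are just examples):

```python
# The animals example: S is the universe, A the mammals, B the legged animals.
S = {"dog", "whale", "lizard", "sparrow"}   # all animals
A = {"dog", "whale"}                        # mammals
B = {"dog", "lizard", "sparrow"}            # animals with legs

print(sorted(A | B))  # A ∨ B: mammals or legged animals
print(sorted(A & B))  # A ∧ B: mammals with legs -> ['dog']
print(sorted(S - A))  # ¬A: everything that is not a mammal
```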




With our Animals example we pictorially described the situation 'Not a mammal or not an animal with legs'. This set (¬A)∨(¬B) would have such elements as whales and lizards but would not include dogs.

De Morgan's law, '(¬A)∨(¬B)=¬(A∧B)', is true for any A and B in S in a Boolean algebra (but also in other algebras, including orthomodular lattices). This law is obeyed by our intuitive notion of inclusion with finite sets. Since ¬ is an involution in a Boolean algebra (¬¬x=x; read 'not not x' is equivalent to 'x'), we can also derive the dual formula: '(¬A)∧(¬B)=¬(A∨B)'.
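Since we are working with finite sets, both De Morgan laws can be checked exhaustively. This sketch verifies them for every pair of subsets of a three-element universe:

```python
from itertools import chain, combinations

S = frozenset({"a", "b", "c"})
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

# Check both De Morgan laws for every pair of subsets of S.
for A in subsets:
    for B in subsets:
        assert (S - A) | (S - B) == S - (A & B)   # (¬A)∨(¬B) = ¬(A∧B)
        assert (S - A) & (S - B) == S - (A | B)   # (¬A)∧(¬B) = ¬(A∨B)

print("De Morgan's laws hold for every pair of subsets")
```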



Another important concept, which will be more central in our next discussion because complementation/negation is not a fundamental operation in Heyting algebras, is implication. In a Boolean Algebra we can define the symbol → as 'A → B = (¬A) ∨ B'. At first this formula might be a bit difficult to interpret. It does, however, follow our intuitive notions of implication: if A → B holds and A is true, then ¬A is false, so B must be true.
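In set terms the definition reads A → B = (S∖A) ∪ B, and the intuition above becomes a checkable containment: wherever A holds, A → B forces B. A small sketch (the particular sets are illustrative):

```python
# Implication on subsets: A → B = (¬A) ∨ B = (S - A) | B.
S = frozenset({"a", "b", "c", "d"})
A = frozenset({"a", "b"})
B = frozenset({"b", "c"})

implies_AB = (S - A) | B
print(sorted(implies_AB))   # ['b', 'c', 'd']

# Wherever A holds, A → B forces B to hold.
assert A & implies_AB <= B
```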

All of this discussion so far has been very informal and has been meant as an introduction to the ideas I plan to present next. In the next post I'll talk about the topology of Boolean algebras and Heyting algebras and how we can think of them pictorially.