The Logic of Space part III

Ok, so last time we left off with a very informal discussion about Venn diagrams and how they relate to Boolean Logic.

First let us do a little set theory and then we'll start drawing connections with the previous post to make it a bit more rigorous. We will stick with completely finite sets, so keep that in mind.

A set is basically a collection of distinguishable objects. Sets have no notion of the number of times an object is in them; they simply contain an object or they do not. A set (if it is finite) can be written in terms of its elements, for instance: S = {a,b,c} is the set S with elements a, b and c.

A map can be thought of as arrows that leave from one set (the domain) and arrive at another (the codomain).

We will also introduce a special set 2^S, which is an exponential or a "homset" called hom(S,2). S will be just a collection of elements as above, and 2 will be the collection of elements {0,1}. We can think of this homset as the collection of all maps from the elements of S into the elements of 2. In this particular case the maps have a special property because of 2: a map either sends an element to 0 or it sends it to 1. This means that we can fully describe a map from S to 2 simply by naming either the set of elements that goes to 1 or the set that goes to 0. See the illustration below.
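
To make the naming trick concrete, here is a small Python sketch (the function names are my own illustration, not from the post): a map from S to 2 is completely determined by the subset of S that it sends to 1, and we can go back and forth between the two descriptions.

    # Sketch: a map f : S -> 2 is determined by the subset of S it sends to 1.
    S = {"a", "b", "c"}

    def map_from_subset(ones):
        """Build the map S -> {0, 1} that sends members of `ones` to 1."""
        return {x: (1 if x in ones else 0) for x in S}

    def subset_from_map(f):
        """Recover the subset of S that the map sends to 1."""
        return {x for x, bit in f.items() if bit == 1}

    f = map_from_subset({"a", "c"})
    print(f)                   # e.g. {'a': 1, 'b': 0, 'c': 1} (order may vary)
    print(subset_from_map(f))  # {'a', 'c'}: the round trip gives the subset back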



Since S is finite, 2^S will also be finite. In fact, if we use the naming trick above, where we associate each map with the set of elements that goes to 1, the set of all maps can be described as the set of all subsets of S. Try it out on paper with S={a,b,c}: 2^S = {{},{a},{b},{c},{a,b},{a,c},{b,c},{a,b,c}}.
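
If you'd rather let the computer do it, here is a small Python sketch that enumerates 2^S with itertools; for S = {a,b,c} it produces 2³ = 8 subsets.

    from itertools import combinations

    def powerset(S):
        """All subsets of S, one for each map in hom(S, 2)."""
        elems = list(S)
        return [set(c) for r in range(len(elems) + 1)
                       for c in combinations(elems, r)]

    subsets = powerset({"a", "b", "c"})
    print(len(subsets))  # 8 == 2 ** 3
    for sub in subsets:
        print(sub)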

Ok, so there are a couple of operations on sets that we will be interested in. One is ∩, which is pronounced "intersection", and the other is ∪, which is pronounced "union". The intersection of two sets is simply all elements that occur in both sets. So if we let A={a,b,c,d} and B={c,d,e,f} then A∩B={c,d}. The union of two sets is every element that is in either of the sets, so A∪B={a,b,c,d,e,f}.
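
In Python these are the built-in set operations & and |, so we can check the example directly:

    A = {"a", "b", "c", "d"}
    B = {"c", "d", "e", "f"}

    print(A & B)  # intersection: {'c', 'd'}
    print(A | B)  # union: {'a', 'b', 'c', 'd', 'e', 'f'}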

Ok, now as it turns out we want to talk about another structure called a topology. A topology is a set, in our case S, together with a set of subsets of S. (Strictly speaking the subsets have to include {} and S and be closed under unions and intersections, but since we are sticking with finite sets we won't fuss over this.) We will write it as the tuple <S, T>.

Now let us go back to Boolean algebras momentarily. We had a Boolean algebra B being a tuple <B, ∧, ∨, 0, 1>. Now let us identify B with 2^S, ∧ with ∩, ∨ with ∪, 0 with the empty set {}, and 1 with S itself. By doing this we have recovered a Boolean algebra simply by looking at the topology of S together with all of its subsets and the intersection and union operations!
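
We can spot-check this identification with a brute-force Python sketch over S = {a,b,c}: taking meet = ∩, join = ∪, 0 = {} and 1 = S, the distributive law holds and every element has a complement, just as a Boolean algebra requires. (The checks below are only a sample of the axioms, not a full verification.)

    from itertools import combinations

    S = frozenset({"a", "b", "c"})
    B = [frozenset(c) for r in range(len(S) + 1)
                      for c in combinations(S, r)]   # the carrier 2^S
    bot, top = frozenset(), S                        # 0 and 1

    # meet = &, join = |, complement of X = S - X
    assert all((X & (Y | Z)) == ((X & Y) | (X & Z))
               for X in B for Y in B for Z in B)      # distributivity
    assert all((X & (S - X)) == bot and (X | (S - X)) == top
               for X in B)                            # complements
    print("2^S with intersection and union behaves like a Boolean algebra")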

Ok, that is great, but it doesn't stop there. Topologies are not restricted to <S, 2^S>; we can look at other collections of subsets of S. And then it might be interesting to ask whether we still get something like a Boolean algebra. It turns out, of course, that an arbitrary topology does not give you a Boolean algebra, but it does give you some kind of algebra. The thing that you always get is called a lattice.
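
Here is a tiny illustration (my own example, not from the post): on S = {a,b} take the collection T = { {}, {a}, {a,b} }. It is closed under intersection and union, so it forms a lattice, but {a} has no complement inside T, so it is not a Boolean algebra.

    # S = {a, b}, T = {{}, {a}, {a, b}}: a lattice that is not a Boolean algebra.
    S = frozenset({"a", "b"})
    T = [frozenset(), frozenset({"a"}), S]

    # Closed under intersection and union, so meets and joins exist: a lattice.
    assert all((X & Y) in T and (X | Y) in T for X in T for Y in T)

    # But the open set {a} has no complement inside T, so T is not Boolean.
    a = frozenset({"a"})
    print(any((a & X) == frozenset() and (a | X) == S for X in T))  # False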

As it turns out, quantum mechanics also forms a kind of logic if we start with Hilbert space and use the idea of associating the subspaces with a logic. It's more than just the union and intersection of sets, however: we also need the concept of an algebraic closure operator (an operator C with the properties X ⊆ C(X), C(C(X)) = C(X), and X ⊆ Y → C(X) ⊆ C(Y)), since the join of two Hilbert subspaces is a superset of their union. The algebraic closure in the case of Hilbert spaces is the space spanned by the basis vectors of both sets. This is actually the source of "entanglement" in quantum mechanics, which leads to so much confusion. From this we can then look at the lattice structure and come to find that it isn't much like a Boolean algebra at all, but is something called an "orthomodular lattice": a bounded lattice that is also an ortholattice (it has a complementation operation) and satisfies the orthomodular law, a weakened form of the modularity condition. We can write the algebra as the tuple <S, ∧, ∨, ', 0, 1>.
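
A minimal illustration of join versus union (my own toy example): take the x-axis and the y-axis as two one-dimensional subspaces of the plane. The vector (1,1) lies in neither subspace, so it is not in their union, yet it is in their span, which is the join.

    # Join versus union for two subspaces of the plane.
    def in_x_axis(v):   # the subspace {(t, 0)}
        return v[1] == 0

    def in_y_axis(v):   # the subspace {(0, t)}
        return v[0] == 0

    w = (1, 1)
    print(in_x_axis(w) or in_y_axis(w))  # False: w is not in the set union
    # Yet w = 1*(1, 0) + 1*(0, 1), so w lies in the span of the union,
    # i.e. in the join of the two subspaces.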

The modularity condition is very interesting but visually producing spaces that exhibit it is a bit tricky. I'll try to draw some nice pictures of spaces that display this behavior in an intuitive way in part IV.

P.S. I lied about getting to Heyting algebras, didn't I? :)

Ok, I'll say something about it. When we talk about our typical notions of spaces as we learned them in high school, we run into some differences with the Boolean algebraic description. The topology that we provided for our Boolean space 2^S has an unusual property: if we look at the dual of 2^S, in the sense that we exchange 0 for 1 in our naming of sets for hom(S,2), nothing changes. Suppose we have a complement operation in our topology, that is, an operator ¬ such that ¬A = S∖A, where S∖A is the set of every element of S that is not also in A. Then the set of all complements of sets in 2^S is 2^S itself. In topology we say that the tuple <S, T> is the tuple of a set S and its open sets T; the complements of the open sets are the closed sets. In 2^S everything is both closed and open! In fact the silly mathematicians even say that they are "clopen".
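
A quick sanity check of the "clopen" claim in Python: in the topology consisting of all subsets of S, the complement of every open set is again open.

    from itertools import combinations

    S = frozenset({"a", "b", "c"})
    opens = [frozenset(c) for r in range(len(S) + 1)
                          for c in combinations(S, r)]  # the topology 2^S

    # The complement of every open set is again open: everything is clopen.
    print(all((S - X) in opens for X in opens))  # True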

When we took algebra we learned that [0,1] is not the same as (0,1); that is, the set that includes its boundary is not the same as the one that does not. This leads us to a different topology, in which we have a clear distinction between open and closed sets. In the standard description of space from high school algebra (that is, the topology induced by the Euclidean distance measure) there are only two sets that have the property of being clopen (can you guess what they are?).

So in a Heyting algebra we don't even bother with the notion of complementation, since it leads to complications. Instead we introduce a symbol → into our algebra, giving the algebra <S, ∧, ∨, →>. The symbol → is pronounced "implies".
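
To give → a concrete reading (this is the standard construction on the open sets of a topology, sketched with an example topology of my own choosing, not something from the post): A → B can be taken to be the largest open set C such that A ∩ C ⊆ B, that is, the union of all such opens.

    # A → B as the largest open set C with A ∩ C ⊆ B, over a small chain of
    # opens on S = {a, b, c}.
    S = frozenset({"a", "b", "c"})
    T = [frozenset(), frozenset({"a"}), frozenset({"a", "b"}), S]

    def implies(A, B):
        """A → B: the union of every open C such that A ∩ C ⊆ B."""
        result = frozenset()
        for C in T:
            if (A & C) <= B:
                result = result | C
        return result

    A, B = frozenset({"a", "b"}), frozenset({"a"})
    print(implies(A, B))  # frozenset({'a'})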

I promise to tell you more later :)
