Saturday, October 31, 2009

wtf?

And if I look at the mutual friend I know that this is probably legit ...

Friday, October 30, 2009

Computational thinking

I just read a little article from CMU's Jeannette Wing where she discusses "computational thinking" as a basic skill that everyone should have, and that should be taught in much the same way as a skill like critical thinking is at universities.

The idea is that computational thinking is a collection of mental tools, approaches, and metaphors, more or less drawn from computer science, that we can bring to everyday problems. This would include things like useful abstraction, suppressing detail and complexity, optimisation of solutions, formulation of algorithms and so on. Computational thinking also has one up on straight up "mathematical thinking" (which has all these goodies I've just listed), in that it requires one to wear the engineer's boots while wearing the mathematician's hat, bringing a vital pragmatic element into the mix that might well be missed amongst the glorious abstractions of a purer mathematical environment.

The article itself is pretty interesting, and could usefully be summed up by saying that she'd like to see everyone learn to embrace their hacker nature.

By the way, the points that she makes about education - about teaching programming to people outside of computer science and engineering departments - are exactly the kind of thing that Felleisen, Findler, Flatt, and Krishnamurthi are trying to accomplish with their wonderful book How to Design Programs, which is in fact the very text that a course called "Ways to think like a computer scientist" should be based on.

Tuesday, October 27, 2009

Rudiments : Exploding logics

A while back I was sitting in a pub with a mathematician, a psychologist, and a philosopher. And while this certainly sounds like I'm getting ready to tell some corny joke, the truth is that we were actually just trying to get a philosophy of mind discussion group off the ground, so it wasn't as strange as all that.
Anyway, at one point, the philosopher turned to the mathematician and asked him if he could explain something that he, the philosopher, had either learned in an elementary logic course, or had come across somewhere in his (admittedly vast, and very deep) reading.

He asked the mathematician to explain just how it is that, if we have a contradiction in a formal system, we can derive any conclusion we want.

I was a little surprised at the question - the guy who was asking is super smart - but was even more surprised when the mathematician couldn't answer it (not his field, apparently)!

So it seems to me that one can go pretty far in one's education without learning about the "principle of explosion", or - if you like your Latin - ex falso quodlibet, or ex falso sequitur quodlibet, that is, "from a contradiction, anything you like follows" (or something like that - you'll see I'm playing as fast and loose with my Latin as I am with my logic in what follows, so please forgive me).

A little groundwork

I'm going to give something of a semi-informal proof so that it can be understood by anyone who hasn't really done any formal logic. But since we're speaking about formal logic, there are a couple of things that one should know.

When one is starting out, a useful way of thinking about formal logic is to consider it a study of valid forms of reasoning. So when we're doing formal logic we're less concerned about what we're reasoning about than we are about the structure of arguments in general.
As a way of abstracting away from the details, logicians will do things like use symbols to represent basic units of meaning, rather than actual sentences in a natural language.
So instead of saying "the sky is blue", a logician will use the symbol (A). Symbols like (A), (B), (C) and so on are sometimes called primitives. We can think of these primitives as being able to represent very simple facts about the world (although they don't have to), and we can assign primitives some value, usually true or false. The advantage of using these kinds of symbols rather than actual sentences is that these symbols could stand for anything. So we know that what we discover about the shape of an argument is going to be true for any argument whatsoever, because we can substitute pretty much anything we like for our symbols (as long as what we're substituting can be given a value of true or false).

Logicians also use another set of symbols to combine these basic truth-bearing entities; these are called connectives. The basic vocabulary will consist of, at the very least, (AND) and (OR), which allow us to build up sentences from propositions in such a way that at each step, truth is preserved. We also need negation (~), which "flips the truth value" of a primitive - so if (A) means "the sky is blue", we can think of its negation (~A) as meaning "the sky is not blue".
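
If it helps to see this concretely, here's a minimal sketch in Python (purely my own illustration - the post itself involves no code): primitives are just truth values, and Python's own "and", "or" and "not" stand in for the connectives (AND), (OR) and (~).

```python
# A quick truth table for the connectives, using Python's boolean operators
# as stand-ins for (AND), (OR) and (~).
for p in (True, False):
    for q in (True, False):
        print(p, q, "| (p AND q):", p and q, "| (p OR q):", p or q, "| (~p):", not p)
```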

Since this post isn't a primer on sentential logic, I'll point you to a page that is. With a little common sense you should be able to follow most of it, but the problem is that the formal proof that we can get anything from a contradiction relies on some "moves" that are a little too slippery for common sense.

Let's say we have some proposition (A) - what true sentences can we derive from this?
Well, we can get (~~A) without a problem, since double negations cancel themselves out.
We can also get something like (A AND A), which, if we substitute "the sky is blue" into our sentence, gives us "the sky is blue AND the sky is blue" - and this, other than being redundant, is true. A sentence of the form (E AND F) will be true just in case both (E) and (F) are true.
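
In the same toy Python terms (again, just a sketch), both of these come out true whenever (A) is true:

```python
A = True               # "the sky is blue"

print(not (not A))     # (~~A): the double negation cancels out -> True
print(A and A)         # (A AND A): redundant, but still true   -> True
```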

But here is something odd - we can, for free, tag anything we want onto the end of our sentence using a move called disjunction introduction.
Let's say we have the primitive (C), which can stand for anything, including "cats can fly".
We can use disjunction introduction to derive the sentence (A OR C) - which, when translated, says "the sky is blue OR cats can fly" - and this move would be, in the simple kind of logic we're dealing with here, truth preserving. We say that (A OR C) is true, because the only thing that's required for truth over disjunction (OR) is that at least ONE of the two sentences is true.
(A) is true, and so (A OR C) is also true.
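
A quick brute-force check (again only a sketch) that disjunction introduction really is truth preserving: as long as (A) is true, (A OR C) comes out true no matter what value (C) takes.

```python
A = True                    # "the sky is blue"

for C in (True, False):     # (C) can stand for anything, including "cats can fly"
    print("C =", C, "-> (A OR C) =", A or C)   # True in both cases, because (A) is true
```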

An interesting aside here - can you see how, because of disjunction introduction, for any system we can derive an infinite number of true sentences?
If we know that (A) is true, we can just keep adding more "stuff" onto it using disjunction introduction forever and ever. We could derive long sentences like (A OR B OR C OR D OR E OR F OR G ... OR Z) and it would be true because we know that (A) is true, so all of the sentences we generate using this rule will be true as well.
As I said, it's a little odd, but that's the thing: while it's useful to think of AND and OR as being like "and" and "or" in natural language, they don't entirely match up.

The other thing we need to lean on is the disjunctive syllogism. Now, we can actually give a proof for this in sentential logic, but it's easier to understand in plain language, because we reason this way the whole time. The disjunctive syllogism is a way of reasoning from something like (A OR B) to either (A) or (B) by itself.
Let's say that I know that (A OR B), but, on top of that, I also know that (~A). Well, that makes it pretty obvious which of (A) or (B) is true: we know that (A) isn't true, so (B) must be the one that's true, and we can confidently add (B) to our collection of things we know to be true.
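
And here's the same kind of sanity check for the disjunctive syllogism (a sketch only): run through every assignment of truth values and confirm that whenever (A OR B) and (~A) both hold, (B) holds too.

```python
from itertools import product

# Disjunctive syllogism: from (A OR B) and (~A), infer (B).
for A, B in product((True, False), repeat=2):
    if (A or B) and (not A):        # both premises hold
        assert B                    # ...and (B) is indeed true in every such case
        print("A =", A, ", B =", B, "-> B is true")
```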

The proof

Well, let's say that we've got a contradiction, and from this contradiction we want to prove (C) - how do we go about that?

Let's say our contradiction is

(A AND ~A)

Which, using our example substitution, would say something like "the sky is blue AND the sky is not blue".
This means that we can infer both (A) and (~A) by themselves.
Okay, now, remember our rule about disjunction introduction - we can take any of our theorems and, for free, paste anything we want onto the end of a sentence (even if the sentence is a simple primitive).
We're interested in (C), so let's introduce it by pasting it onto our (A) using our rule for disjunction introduction.

(A OR C) - by disjunction introduction

Now, this wouldn't pose any problem for us, except for the fact that we actually HAVE (~A) as something we know. Why is this a problem?
Well, remember our disjunctive syllogism? If you agree that the disjunctive syllogism is indeed a truth preserving move, then we're in the awkward position that we can use that move to infer (C) from our theorem (A OR C).

That's really it - easy huh?
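
Laid out end to end, the derivation is only a handful of steps. Here it is as a little Python listing (my own recap, not formal proof machinery - and I've used the standard name "conjunction elimination" for the step the post glosses as inferring (A) and (~A) by themselves):

```python
# The whole derivation of (C) from the contradiction (A AND ~A).
derivation = [
    ("(A AND ~A)", "premise - our contradiction"),
    ("(A)",        "from 1, conjunction elimination"),
    ("(~A)",       "from 1, conjunction elimination"),
    ("(A OR C)",   "from 2, disjunction introduction - (C) pasted on for free"),
    ("(C)",        "from 3 and 4, disjunctive syllogism"),
]

for i, (sentence, justification) in enumerate(derivation, start=1):
    print(f"{i}. {sentence:12} {justification}")
```
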
There are actually a couple of ways of proving that we can derive anything from a contradiction, and there are three different derivations over on the Wikipedia page about the principle of explosion. Check it out if you want to see what a full formal proof of the principle looks like - they also give a semantic proof (I hope to cover some material from formal semantics properly in some other posts).

Does any of this sound dodgy to you though? Well, if it does, you're not the only one. If you find you don't like your logics to be of the exploding variety, mosey on down to Wikipedia and check out paraconsistent logics - these may be more to your taste.

What does this tell us?

To sum up, I want to discuss one of the real problems with being able to infer anything from a contradiction.

We can intuitively think of the true sentences that we derive as being a way of eliminating possible ways things can be.
For example, let's say that I don't know whether or not it's raining in London - let's agree to represent the basic fact "it's raining in London" with the symbol (L).
Before we check the weather online, we only really know (L OR ~L) - which translates to something like, "it's either raining in London, or it's not".
These are, for us who don't yet know the facts of the matter, the two possible ways that the world might be.
When we do hop online and see that it is in fact raining in London - that is, that (L) is true - we've effectively reduced the number of possible ways the world could be, given what we know.
If we wanted to, we could think about the amount of information that a sentence carries as being equivalent to the number of ways-the-world-can-be that are eliminated by us coming to know a fact.
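
As a toy illustration (the second primitive here is entirely made up by me for the example): with two primitives there are four possible "ways the world can be", and learning a single fact cuts that space in half.

```python
from itertools import product

# Two primitives: L = "it's raining in London", M = "it's raining in Madrid".
worlds = list(product((True, False), repeat=2))

# Before we check the weather, all we know is (L OR ~L), which rules nothing out.
print(len([(L, M) for L, M in worlds if L or not L]))   # 4 - every world survives

# After we learn that (L) is true, half the worlds are eliminated.
print(len([(L, M) for L, M in worlds if L]))            # 2
```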

You should be able to see where I'm going with this. If we hit a contradiction, and the principle of explosion holds, we find that our system gives us absolutely no information at all. Being able to derive whatever we want means that anything is possible: we can't use our system of logic to deduce any particular way the world could be, and all states remain possible for us, given what we "know".

Contradiction = knowing absolutely nothing.

Yeah, it gives me the chills too :)

Labuschagne - applied logic and Rudiments

Dr W. Labuschagne (Otago, formerly UNISA) and Prof. Heidema (UNISA) are apparently writing a book on applied logic together, parts of which are available online at Labuschagne's website here.

I'd suggest that anyone who is interested in learning about formal semantics, non-monotonic logics, philosophy of formal logic etc. take a look at it; working through the exercises will give you a good sense of the field and some of its applications. If it's anything like his other books, you'll either love or hate Dr Labuschagne's sense of humor - and you'll become intimately acquainted with the infamous light-fan system.

It seems to be a continuation of the old UNISA third-year philosophy course in formal semantics and applied logic, and it's something that I'm personally quite passionate about, but haven't had much time to look into, or do any work in, since finishing up with logic a couple of years ago.
So some of the posts that I'm going to be doing will make up a series of "rudiments", used in the same sense and spirit that our drummer friends use the term - I see these as being basic ideas and topics in philosophy of information, philosophy of mind, computer science, logic, psychology and so on, that one needs to have a grip on to understand higher levels of the discipline. This will also give me a chance to brush up on my own basics.

Sunday, October 25, 2009

Random links - stats and machine learning

There have been a few discussions of statistics, probability and machine learning on some of the feeds that I follow - here are some of the best suggested resources to come out of those discussions.

Andrew Ng's course on machine learning up at Stanford's "engineering everywhere" page is amazing. The notes for the course are pretty much a textbook on machine learning themselves.

For those of you who haven't yet downloaded Hastie, Tibshirani, and Friedman's The Elements of Statistical Learning, I'd suggest you take a look at it as soon as possible. Very comprehensive.

Another useful, and free, resource is MacKay's Information Theory, Inference, and Learning Algorithms.

Finally, be sure to check out videolectures.net - another top notch resource for, well, pretty much anything.

Wednesday, October 21, 2009

Formal introductions

*Edit: This page is about 6 years old now. A lot has changed since then, a lot has stayed the same. I finished my Masters, I had a kid, I started a company. As such, this page's use as an introduction is questionable at best. Don't trust it :) *

Recently, Michael over at ionian-enchantment suggested I start blogging seriously as a kind of intellectual exercise - or something like it.

So instead of going ahead and launching something new, I thought I'd start posting more regularly here, and with more focus. As such, I've cleared away the more diary-like entries.

I'd imagine that this blog will become more focused in terms of the kind of content that I post as we go along, but what can be expected for the moment will be posts dealing with topics in psychology, philosophy, and computer science.

As for myself: I'm a software developer at a small company called Thinkopen - we develop most of our software using PHP and the Zend Framework, sometimes do work in Drupal, and - very occasionally, when we have absolutely no other choice - we work with Microsoft products (mainly C#/ASP.NET apps). That isn't to say I don't like MS - I couldn't care less about the beef people have with them - I taught courses on their products for several years and am a fan of SQL Server, especially now that Management Studio has IntelliSense. It's just not our preferred development platform.

I've got a BA in classics, philosophy and logic from the University of South Africa (UNISA) - a correspondence institution, and the largest university in South Africa. I've also completed a qualification called the National Certificate in Datametrics, which allows one to take courses in computer science at undergraduate level without having to complete an entire degree.
I'm finishing up my honours at the moment, and will be starting my MA in cognitive science at UKZN in mid-2010.