Tuesday, October 27, 2009

Rudiments : Exploding logics

A while back I was sitting in a pub with a mathematician, a psychologist, and a philosopher. And while this certainly sounds like I'm getting ready to tell some corny joke, the truth is that we were actually just trying to get a philosophy of mind discussion group off the ground, so it wasn't as strange as all that.
Anyway, at one point, the philosopher turned to the mathematician and asked him if he could explain something that he, the philosopher, had either learned in an elementary logic course, or had come across somewhere in his (admittedly vast, and very deep) reading.

He asked the mathematician to explain just how it is that, if we have a contradiction in a formal system, we can derive any conclusion we want.

I was a little surprised at the question - the guy who was asking is super smart - but was even more surprised when the mathematician couldn't answer it (not his field, apparently)!

So it seems to me that one can go pretty far in one's education without learning about the "principle of explosion", or - if you like your Latin - ex falso quodlibet, or ex falso sequitur quodlibet, that is, "from a falsehood, anything follows" (or something like that - you'll see I'm playing as fast and loose with my Latin as I am with my logic in what follows, so please forgive me).

A little groundwork

I'm going to give something of a semi-informal proof so that it can be understood by anyone who hasn't really done any formal logic. But since we're speaking about formal logic, there are a couple of things that one should know.

When one is starting out, a useful way of thinking about formal logic is to consider it a study of valid forms of reasoning. So when we're doing formal logic we're less concerned about what we're reasoning about than we are about the structure of arguments in general.
As a way of abstracting away from the details, logicians will do things like use symbols to represent basic units of meaning, rather than actual sentences in a natural language.
So instead of saying "the sky is blue", a logician will use the symbol (A). Symbols like (A), (B), (C) and so on are sometimes called primitives. We can think of these primitives as being able to represent very simple facts about the world (although they don't have to), and we can assign primitives some value, usually true or false. The advantage to using these kinds of symbols rather than actual sentences is that these symbols could stand for anything. So we know that what we discover about the shape of an argument is going to be true for any argument whatsoever because we can substitute pretty much anything we like for our symbols (as long as what we're substituting can be given a value of true or false).

Logicians also use another set of symbols to combine these basic truth-bearing entities; these are called connectives. The basic vocabulary will consist of at least (AND) and (OR), which allow us to build up sentences from primitives in such a way that at each step, truth is preserved. We also need negation (~), which "flips the truth value" of a sentence - so if (A) means "the sky is blue", we can think of its negation (~A) as meaning "the sky is not blue".
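
As an aside, if it helps to see these pieces as something you can actually run, here's a tiny Python sketch of my own - the names and helper functions (A, C, NOT, AND, OR) are purely illustrative, not part of any standard toolkit. Primitives are just labels with truth values, and the connectives are ordinary truth functions:

    A = True    # "the sky is blue"
    C = False   # "cats can fly"

    def NOT(p):    return not p
    def AND(p, q): return p and q
    def OR(p, q):  return p or q

    print(NOT(A))       # False -- "the sky is not blue"
    print(NOT(NOT(A)))  # True  -- double negation cancels out
    print(AND(A, A))    # True  -- redundant, but true
    print(OR(A, C))     # True  -- only one true disjunct is needed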

This post isn't a primer on sentential logic, though, so I'll point you to a page that is. With a little common sense you should be able to follow most of what follows; the problem is that the formal proof that we can get anything from a contradiction relies on some "moves" that are a little too slippery for common sense.

Let's say we have some proposition (A) - what true sentences can we derive from this?
Well, we can get (~~A) without a problem, since double negations cancel themselves out.
We can also get something like (A AND A) - if we substitute "the sky is blue" we get "the sky is blue AND the sky is blue", which, other than being redundant, is true. A sentence of the form (E AND F) will be true just in case both (E) and (F) are true.

But here is something odd - we can, for free, tag anything we want onto the end of our sentence using a move called disjunction introduction.
Let's say we have the primitive (C), which can stand for anything, including "cats can fly".
We can use disjunction introduction to derive the sentence (A OR C) - which, when translated, says "the sky is blue OR cats can fly", and this move would be - in the simple kind of logic we're dealing with here - truth preserving. We say that (A OR C) is true, because the only thing that's required for the truth of a disjunction (OR) is that at least ONE of the disjuncts is true.
(A) is true, and so (A OR C) is also true.
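
If you want to convince yourself that the move really is truth preserving, here's a little sketch of my own (the tuple encoding and the function names are just illustrations) that represents sentences symbolically and evaluates them against a truth assignment:

    # A toy truth assignment: (A) is "the sky is blue", (C) is "cats can fly".
    truth = {"A": True, "C": False}

    def value(sentence):
        # A bare name looks up its value; a (left, "OR", right) tuple is
        # true when at least one side is true.
        if isinstance(sentence, str):
            return truth[sentence]
        left, _, right = sentence
        return value(left) or value(right)

    def disjunction_introduction(known_true, anything):
        # From a sentence we hold to be true, infer (known_true OR anything).
        return (known_true, "OR", anything)

    s = disjunction_introduction("A", "C")
    print(s, value(s))   # ('A', 'OR', 'C') True -- true because (A) is true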

An interesting aside here - can you see how, because of disjunction introduction, for any system we can derive an infinite number of true sentences?
If we know that (A) is true, we can just keep adding more "stuff" onto it using disjunction introduction forever and ever. We could derive long sentences like (A OR B OR C OR D OR E OR F OR G ... OR Z) and it would be true because we know that (A) is true, so all of the sentences we generate using this rule will be true as well.
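
Just as a throwaway illustration (mine, not anything canonical), you can watch those ever-longer sentences being built up in a loop:

    sentence = "A"   # known to be true
    for extra in ["B", "C", "D", "E"]:
        sentence = "(" + sentence + " OR " + extra + ")"
        print(sentence)   # each longer disjunction is still true, because A is in it
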
As I said, it's a little odd, but that's the thing: while it's useful to think of AND and OR as being like "and" and "or" in natural language, they don't entirely match up.

The other thing we need to lean on is the disjunctive syllogism. Now, we can actually give a proof for this in sentential logic, but it's easier to understand in plain language, because we reason this way the whole time. The disjunctive syllogism is a way of reasoning from something like (A OR B) to either (A) or (B) by itself.
Let's say that I know that (A OR B), but, on top of that, I also know that (~A). Well, that makes it pretty obvious which of (A) or (B) is true: we know that (A) isn't true, so (B) must be the one that's true, and we can confidently add (B) to our collection of things we know to be true.
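
Written out as a little sketch (again, the encoding and the function name are my own inventions for illustration), the rule takes (P OR Q) together with (~P) and hands back (Q):

    def disjunctive_syllogism(disjunction, negation):
        # disjunction is a pair (P, Q) standing for (P OR Q);
        # negation is ("NOT", P), the denial of the first disjunct.
        p, q = disjunction
        assert negation == ("NOT", p), "need the negation of the first disjunct"
        return q

    # We know (A OR B) and (~A), so (B) must be the true one.
    print(disjunctive_syllogism(("A", "B"), ("NOT", "A")))   # B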

The proof

Well, let's say that we've got a contradiction, and from this contradiction we want to prove (C) - how do we go about that?

Let's say our contradiction is

(A AND ~A)

Which, using our examples for substitution would say something like "the sky is blue AND the sky is not blue"
This means that we can infer both (A) and (~A) by themselves.
Okay, now, remember our rule about disjunction introduction - we can take any of our theorems and, for free, paste anything we want onto the end of a sentence (even if the sentence is a simple primitive).
We're interested in (C), so let's introduce it by pasting it onto our (A) using our rule for disjunction introduction.

(A OR C) - by disjunction introduction

Now, this wouldn't pose any problem for us, except for the fact that we actually HAVE (~A) as something we know. Why is this a problem?
Well, remember our disjunctive syllogism? If you agree that the disjunctive syllogism is indeed a truth-preserving move, then we're in the awkward position that we can use that move to infer (C) from our theorem (A OR C).

That's really it - easy huh?
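
If you'd like to see the two moves chained together, here's the whole derivation as one small sketch of my own, using the same toy encoding as above:

    def explode(a, not_a, anything):
        # Premises: a and not_a come from splitting the contradiction (A AND ~A).
        assert not_a == ("NOT", a), "the premises must really contradict each other"
        # Step 1: disjunction introduction gives (A OR anything).
        disjunction = (a, "OR", anything)
        # Step 2: disjunctive syllogism on (A OR anything) and (~A) yields anything.
        return disjunction[2]

    print(explode("A", ("NOT", "A"), "C"))   # C -- "cats can fly", derived for free
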
There are actually a few ways of proving that we can derive anything from a contradiction, and they give three different derivations over on the Wikipedia page about the principle of explosion. Check it out if you want to see what a full formal proof of the principle looks like - they give a semantic proof as well (I hope to cover some stuff from formal semantics properly in some other posts).

Does any of this sound dodgy to you though? Well, if it does, you're not the only one. If you find you don't like your logics to be of the exploding variety, mosey on down to Wikipedia and check out paraconsistent logics - these may be more to your taste.

What does this tell us?

To sum up, I want to discuss one of the real problems with being able to infer anything from a contradiction.

We can intuitively think of the true sentences that we derive as being a way of eliminating possible ways things can be.
For example, let's say that I don't know whether or not it's raining in London - let's agree to represent the basic fact "it's raining in London" with the symbol (L).
Before we check the weather online, we only really know (L OR ~L) - which translates to something like, "it's either raining in London, or it's not".
These are, for us who don't yet know the facts of the matter, the two possible ways that the world might be.
When we do hop online and see that it is in fact raining in London - that is, that (L) is true - we've effectively reduced the number of possible ways the world could be, given what we know.
If we wanted to, we could think about the amount of information that a sentence carries as being equivalent to the number of ways-the-world-can-be that are eliminated by us coming to know a fact.

You should be able to see where I'm going with this. If we hit a contradiction, and the principle of explosion holds, we find that our system gives us absolutely no information at all, because being able to derive whatever we want means that anything is possible: we can't use our system of logic to deduce any particular way the world could be, and all states are possible for us, given what we "know".
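
To make that concrete, here's a rough sketch of my own - loosely in the spirit of the possible-worlds idea, not a piece of real formal semantics - that lists every way the world could be for two primitives and counts how many survive each piece of "knowledge":

    from itertools import product

    # Every way the world could be for the two primitives (A) and (C).
    worlds = [dict(zip(("A", "C"), values))
              for values in product([True, False], repeat=2)]

    knowing_nothing    = worlds                                          # 4 worlds
    knowing_A          = [w for w in worlds if w["A"]]                   # 2 worlds
    knowing_A_and_notA = [w for w in worlds if w["A"] and not w["A"]]    # 0 worlds

    print(len(knowing_nothing), len(knowing_A), len(knowing_A_and_notA))   # 4 2 0
    # With no worlds left, "C is true in every remaining world" holds
    # vacuously -- which is the semantic face of the principle of explosion.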

Contradiction = knowing absolutely nothing.

Yeah, it gives me the chills too :)

3 comments:

  1. Rhiannon linked me - so blame her ;)

    Bertrand Russell's "An Inquiry into Meaning and Truth" will fascinate you.

    Have your mathematician and philosopher considered how this question is influenced by Chaos Theory? Or Systems Theory in general?

    It gives you "chills" :) like entropy? In a sense entropy is the universal version of a logic argument that keeps discarding possible 'truths' until only one (or none?) cold hard truth remains.

    Not that Systems Theory is free from unpleasant conclusions. The trouble of course with Systems Theory is that 'truth' becomes mutable/flexible. i.e. almost like saying:
    contradiction = knowing everything (but not knowing anything with absolute certainty). Existentialists and sophists love this; what better license to believe whatever is most convenient at a given time?

    Which brings us to the psychologist in the pub, who has remained strangely quiet. I wonder what he/she would have to say about how much certainty is healthy for a person? And how much uncertainty is healthy? Whether there is such a thing as a content/healthy mind that is either perfectly logical or totally irrational? Of course, there's a name for all of these schisms/dichotomies in the human mind: contradiction.

    Contradiction = the human condition. I suspect that's a proposition the mathematician, the philosopher and the psychologist could all accept. Of course they'd be biased since they're all human! If we could get a non-human adjudicator to evaluate, the results would be interesting. What would such an adjudicator say of the pure 'logic' which allows us humans to harness the thermo-nuclear power of the atom, or the sheer 'irrational' beauty of the Sistine Chapel?

    All of which is my roundabout way of arguing that fortunately pure 'logic' (for all its useful applications) cannot explain the totality of our existence.

    ciao

    H

    PS. Now, if we were to involve esoteric/metaphysical/religious people in this discussion, they'd tell you that "being able to derive whatever we want means that anything is possible" is a proof of the supernatural. Have you read Thomas Aquinas? Trouble with his argument (from his point of view, in any case) is that it can be used to prove the existence of Allah (Peace be Upon Him) or any other divine 'unknowable' being. Anyways, I digress.

  2. Hi Metoikos - thanks for the post.

    I'm not sure how chaos theory or systems theory affects these rather rarefied notions from formal logic, if they do at all.
    I'm not a physicist, a mathematician, or a systems theorist. I'm trying to explicate a very specific thing here and can't really comment on any of the implications outside of my field (more on that below).

    But I'm sure that my mathematician and philosopher didn't consider any of this, given that neither of them knew what the principle of explosion was or how it worked :)

    I guess that I could have been more precise in my definitions - I was really only trying to explicate a particular principle from Formal logic. And I should have specified that the stuff about "knowledge" at the end was meant to be a lead in to formal, possible-world semantics ...

    It would also be a mistake to try to conflate this notion of contradiction with other ordinary-language, or technical, senses of contradiction - within the sentential logic there is a very specific meaning to this term. I think that I will have to be more specific in future posts.

    What I don't understand is this - you seem like a smart guy, I mean, you seem to know a bunch about stuff that I have only a passing knowledge of - why, then, are you failing to get that I'm speaking about a result/principle within formal logic? I mean, clearly I've restricted myself to a teensy tiny domain of discourse - the principle of explosion within the sentential logic. I said as much in the introduction.

    Within your reply you bring up Systems theory, statistical physics, Thomism, Chaos theory, existentialism, and the history of philosophy (I may have missed some of them there). I can only really engage you on the last two - but still, I don't know why we would even be speaking about them given the extremely restricted domain that my post deals with? And given that its intended audience is, ideally, someone who hasn't ever heard of the principle of explosion, I'm not sure whether bringing in anything other than, say, a discussion about the validity of the syllogism it relies on, maybe paraconsistent logics, and AT A PUSH relevance logics is at all helpful?

    What I'd like to suggest is this - let's tackle one issue at a time. If you're really sincere about discussing this stuff, let's look at each issue by itself - say, starting with something like systems theory, or entropy in the information-theoretic sense - and this will give us the space to look at each issue properly. It will also give me time to read up about each topic so that I can make a meaningful contribution, and try to answer your questions in a useful, and educated, way.

    P.S. I'd prefer if we'd NOT get any religious / esoteric people involved (however, as a student of philosophy I don't have a problem with metaphysics, unless it's that nonsense spouted by the religious / esoteric people)

  3. I'd like to read a discussion like the one you propose, Blaize.

