Archive for the 'Logic' category

Finally: Gödel's Proof of Incompleteness!

Mar 12 2013 Published by under Incompleteness

Finally, we're at the end of our walkthrough of Gödel's great incompleteness proof. As a refresher, the basic proof sketch is:

  1. Take a simple logic. We've been using a variant of the Principia Mathematica's logic, because that's what Gödel used.
  2. Show that any statement in the logic can be encoded as a number using an arithmetic process based on the syntax of the logic. The process of encoding statements numerically is called Gödel numbering.
  3. Show that you can express meta-mathematical properties of logical statements in terms of arithmetic properties of their Gödel numbers. In particular, we need to build up the logical infrastructure that we need to talk about whether or not a statement is provable.
  4. Using meta-mathematical properties, show how you can create an unprovable statement encoded as a Gödel number.

What came before:

  1. Gödel numbering: The logic of the Principia, and how to encode it as numbers. This was step 1 in the sketch.
  2. Arithmetic Properties: what it means to say that a property can be expressed arithmetically. This set the groundwork for step 2 in the proof sketch.
  3. Encoding meta-math arithmetically: how to take meta-mathematical properties of logical statements, and define them as arithmetic properties of the Gödel numberings of the statements. This was step 2 proper.

So now we can move on to step three, where we actually see why mathematical logic is necessarily incomplete.

When we left off with Gödel, we'd gone through a very laborious process showing how we could express meta-mathematical properties of logical statements as primitive recursive functions and relations. We built up to being able to express one non-primitive-recursive property, which says that a given statement is provable:

pred provable(x) = {
  some y {
    proofFor(y, x)
  }
}

The reason for going through all of that was that we really needed to show how we could capture all of the necessary properties of logical statements in terms of arithmetic properties of their Gödel numbers.

Now we can get to the target of Gödel's effort. What Gödel was trying to do was show how to defeat the careful stratification of the Principia's logic. In the Principia, Russell and Whitehead had tried to avoid problems with self-reference by creating a very strict stratification, where each variable or predicate had a numeric level, and could only reason about objects from lower levels. So if natural numbers were the primitive objects in the domain being reasoned about, then level-1 objects would be things like specific natural numbers, and level-1 predicates could reason about specific natural numbers, but not about sets of natural numbers or predicates over the natural numbers. Level-2 objects would be sets of natural numbers, and level-2 predicates could reason about natural numbers and sets of natural numbers, but not about predicates over sets of natural numbers, or sets of sets of natural numbers. Level-3 objects would be sets of sets of natural numbers... and so on.

The point of this stratification was to make self-reference impossible. You couldn't make a statement of the form "This predicate is true": the predicate would be a level-N predicate, and only a level N+1 predicate could reason about a level-N predicate.

What Gödel did in the laborious process we went through in the last post is embed a model of logical statements in the natural numbers. That's the real trick: the logic is designed to work with a set of objects that are a model of the natural numbers. By embedding a model of logical statements in the natural numbers, he made it possible for a level-1 predicate (a predicate about a specific natural number) to reason about any logical statement or object. A level-1 predicate can now reason about a level-7 object! A level-1 predicate can reason about the set defined by a level-1 predicate: a level-1 predicate can reason about itself!

Now, we can finally start getting to the point of all of this: incompleteness! We're going to use our newfound ability to nest logical statements into numbers to construct an unprovable true statement.

In the last post, one of the meta-mathematical properties that we defined for the Gödel-numbered logic was immConseq, which defines when some statement x is an immediate consequence of a set of statements S. As a reminder, that means that x can be inferred from statements in S in one inference step.

We can use that property to define what it means to be a consequence of a set of statements: it's the closure of immediate consequence. We can define it in pseudo-code as:

def conseq(κ) = {
  K = κ + axioms
  added_to_k = false
  do {
    added_to_k = false
    for all c in immConseq(K) {
      if c not in K {
        add c to K
        added_to_k = true
      }
    }
  } while added_to_k
  return K
}

In other words, Conseq(κ) is the complete set of everything that can possibly be inferred from the statements in κ and the axioms of the system. We can say that there's a proof for a statement x in κ if and only if x ∈ Conseq(κ).

We can use the idea of Conseq to define a strong version of what it means for a logical system with a set of facts to be consistent. A system is ω-consistent if and only if there is not a statement a such that: a ∈ Conseq(κ) ∧ not(forall(v, a)) ∈ Conseq(κ).

In other words, the system is ω-consistent as long as you can never derive both a statement and the negation of its universal generalization. Strictly speaking, ω-consistency is a stronger property than plain consistency, but for our purposes, we can treat it as being pretty much the same thing. (Yes, that's a bit hand-wavy, but I'm not trying to write an entire book about Gödel here!)

(Gödel's version of the definition of ω-consistency is harder to read than this, because he's very explicit about the fact that Conseq is a property of the numbers. Roughly, his version says: there is no formula a with free variable v where subst(a, v, number(n)) ∈ Conseq(κ) for every natural number n, while not(forall(v, a)) ∈ Conseq(κ) as well. I'm willing to fuzz that, because we've shown that the statements and the numbers are interchangeable.)

Using the definition of ω-consistency, we can finally get to the actual statement of the incompleteness theorem!

Gödel's First Incompleteness Theorem: For every ω-consistent primitive recursive set κ of formulae, there is a primitive recursive predicate r(x) such that neither forall(v, r) nor not(forall(v, r)) is provable from κ (that is, neither one is in Conseq(κ)).

To prove that, we'll construct the predicate r.

First, we need to define a version of our earlier isProofFigure that's specific to the set of statements κ:

pred isProofFigureWithKappa(x, kappa) = {
  all n in 1 to length(x) {
    isAxiom(item(n, x)) or
    item(n, x) in kappa or
    some p in 0 to n {
      some q in 0 to n {
        immConseq(item(n, x), item(p, x), item(q, x))
      }
    }
  } and length(x) > 0
}

This is the same as the earlier definition - just specialized so that it ensures that every statement in the proof figure is either an axiom, a member of κ, or an immediate consequence of two earlier statements in the figure.

We can do the same thing to specialize the predicates proofFor and provable:

pred proofForStatementWithKappa(x, y, kappa) = {
  isProofFigureWithKappa(x, kappa) and
  item(length(x), x) = y
}

pred provableWithKappa(x, kappa) = {
  some y {
    proofForStatementWithKappa(y, x, kappa)
  }
}

If κ is the set of basic truths that we can work with, then provableWithKappa(x, κ) is equivalent to provable(x).

Now, we can define a predicate NotAProofWithKappa:

pred NotAProofWithKappa(x, y, kappa) = {
  not (proofForStatementWithKappa(x, subst(y, 19, number(y)), kappa))
}

Based on everything that we've done so far, NotAProofWithKappa is primitive recursive.

This is tricky, but it's really important. We're getting very close to the goal, and it's subtle, so let's take the time to understand this.

  • Remember that in a Gödel numbering, each prime number greater than 13 is a variable. So 19 here is just the name of a free variable in y.
  • Using the Principia's logic, the fact that variable 19 is free means that the statement is parametric in variable 19. For the moment, it's an incomplete statement, because it's got an unbound parameter.
  • What we're doing in NotAProofWithKappa is substituting the numeric coding of y for the value of y's parameter. When that's done, y is no longer incomplete: its unbound variable has been replaced by a binding.
  • With that substitution, NotAProofWithKappa(x, y, kappa) is true when x is not a proof of y(y).

What NotAProofWithKappa does is give us a way to check whether a specific sequence of statements x is not a proof of y.
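
To see the trick in miniature, here's a toy sketch in Python - my own illustration, working on plain strings instead of Gödel numbers, with number and subst as stand-ins for the real encoded versions:

def number(y):
    # Stand-in for Gödel numbering: just quote the string.
    return repr(y)

def subst(y, var, value):
    # Replace the free variable var in y with value.
    return y.replace(var, value)

y = "NotProvable(v19)"
print(subst(y, "v19", number(y)))
# prints: NotProvable('NotProvable(v19)') - y talking about its own encoding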

We want to expand NotAProofWithKappa to something universal. Instead of just saying that a specific sequence of statements x isn't a proof for y, we want to be able to say that no possible sequence of statements is a proof for y. That's easy to do in logic: you just wrap the statement in a "∀ x ( )". In Gödel numbering, we defined a function (gforall) that does exactly that. So the universal form of unprovability is:
∀ a (NotAProofWithKappa(a, y, kappa)).

In terms of the Gödel numbering, if we assume that the Gödel number for the variable a is 17, and the variable y is numbered as 19, we're talking about the statement p = forall(17, NotAProofWithKappa(17, 19, kappa)).

p is the statement that, for the logical statement y (the value of variable 19), there is no possible value of variable 17 (a) where a is a proof of y(y) in κ.

All we need to do now is show that we can make p become self-referential. No problem: we can just put number(p) in as the value of variable 19 (y). If we let q be the numeric value of the statement NotAProofWithKappa(17, 19, kappa) - the body of p, so that p = forall(17, q) - then:

r = subst(q, 19, number(p))

i = subst(p, 19, number(p)) = forall(17, r)

i says that there is no possible value of variable 17 (a) that proves p(p). But i is exactly p(p). In other words, i is unprovable: there exists no possible proof that there is no possible proof of i!

This is what we've been trying to get at all this time: self-reference! We've got a statement i which is able to express a property of itself. Worse, it's able to express a negative property of itself!

Now we're faced with two possible choices. Either i is provable - in which case, κ is inconsistent! Or else i is unprovable - in which case κ is incomplete, because we've identified a true statement that can't be proven!

That's it: we've shown that in the Principia's logic, using nothing but arithmetic, we can create a true statement that cannot be proven. If, somehow, it were to be proven, the entire logic would be inconsistent. So the Principia's logic is incomplete: there are true statements that cannot be proven true.

We can go a bit further: the process that we used to produce this result about the Principia's logic is actually applicable to other logics. There's no magic here: if your logic is powerful enough to do Peano arithmetic, you can use the same trick that we demonstrated here, and show that the logic must be either incomplete or inconsistent. (Gödel proved this formally, but we'll just handwave it.)

Looking at this with modern eyes, it doesn't seem quite as profound as it did back in Gödel's day.

When we look at it through the lens of today, what we see is that in the Principia's logic, proof is a mechanical process: a computation. If every true statement was provable, then you could take any statement S, and write a program to search for a proof of either S or ¬ S, and eventually, that program would find one or the other, and stop.

In short, you'd be able to solve the halting problem. The proof of the halting problem is really an amazingly profound thing: on a very deep level, it's the same thing as incompleteness, only it's easier to understand.
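
To make that concrete, here's a sketch of that search loop in Python. It's my own illustration, not anything from Gödel or Turing: is_proof_of stands in for a checker like our proofFor predicate, and neg builds the negation of a statement. If the logic were complete, this loop would always terminate:

def decide(s, is_proof_of, neg):
    candidate = 0
    while True:
        if is_proof_of(candidate, s):
            return True      # found a proof of s
        if is_proof_of(candidate, neg(s)):
            return False     # found a proof of not-s
        candidate += 1       # try the next number encoding a candidate proof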

But at the time that Gödel was working, Turing hadn't written his paper about the halting problem. Incompleteness was published in 1931; Turing's halting paper was published in 1936. This was a totally unprecedented idea when it was published. Gödel produced one of the most profound and surprising results in the entire history of mathematics, showing that the efforts of the best mathematicians in the world to produce the perfection of mathematics were completely futile.

5 responses so far

The Meta of Gödel

Feb 17 2013 Published by under Incompleteness

As you may be figuring out, there's a reason why I resisted walking through Gödel's proof of incompleteness for so long. Incompleteness isn't a simple proof!

To refresh your memory, here's a sketch of the proof:

  1. Take a simple logic. We've been using a variant of the Principia Mathematica's logic, because that's what Gödel used.
  2. Show that any statement in the logic can be encoded as a number using an arithmetic process based on the syntax of the logic. The process of encoding statements numerically is called Gödel numbering.
  3. Show that you can express meta-mathematical properties of logical statements in terms of arithmetic properties of their Gödel numbers. In particular, we need to build up the logical infrastructure that we need to talk about whether or not a statement is provable.
  4. Using meta-mathematical properties, show how you can create an unprovable statement encoded as a Gödel number.

What we've done so far is the first two steps, and part of the third. In this post, we saw the form of the Principia's logic that we're using, and how to numerically encode it as a Gödel numbering. We started on the third point in this post, by figuring out just what it means to say that things are encoded arithmetically. Now we can get to the part where we see how to encode meta-mathematical properties in terms of arithmetic properties of the Gödel numbering. In this post, we're going to build up everything we need to express syntactic correctness, logical validity, and provability in terms of arithmetical properties of Gödel numbers. (And, as a reminder, I've been using this translation of Gödel's original paper on incompleteness.)

This is the most complex part of the incompleteness proof. The basic concept of what we're doing is simple, but the mechanics are very difficult. What we want to do is define a set of predicates about logical statements, but we want those predicates to be expressed as arithmetic properties of the numerical representations of the logical statements.

The point of this is that we're showing that done in the right way, arithmetic is logic - that doing arithmetic on the Gödel numbers is doing logical inference. So what we need to do is build up a toolkit that shows us how to understand and manipulate logic-as-numbers using arithmetic. As we saw in the last post, primitive recursion is equivalent to arithmetic - so if we can show how all of the properties/predicates that we define are primitive recursive, then they're arithmetic.

This process involves a lot of steps, each of which is building the platform for the steps that follow it. I struggled quite a bit figuring out how to present these things in a comprehensible way. What I ended up with is writing them out as code in a pseudo-computer language. Before inventing this language, I tried writing actual executable code, first in Python and then in Haskell, but I wasn't happy with the clarity of either one.

Doing it in an unimplemented language isn't as big a problem as you might think. Even if this was all executable, you're not going to be able to actually run any of it on anything real - at least not before your hair turns good and gray. The way that this stuff is put together is not what any sane person would call efficient. But the point isn't to be efficient: it's to show that this is possible. This code is really all about searching; if we wanted to be efficient, this could all be done in a different representation, with a different search method that was a lot faster - but that would be harder to understand.

So, in the end, I threw together a simple language that's easy to read. This language, if it were implemented, wouldn't really even be Turing complete - it's a primitive recursive language.

Basics

We'll start off with simple numeric properties that have no obvious connection to the kinds of meta-mathematical statements that we want to talk about, but we'll use those to define progressively more and more complex and profound properties, until we finally get to our goal.

# divides n x == True if n divides x without remainder.
pred divides(n, x) = x mod n == 0

pred isPrime(0) = False
pred isPrime(1) = False
pred isPrime(2) = True
pred isPrime(n) = {
  all i in 2 to n - 1 {
    not divides(i, n)
  }
}

fun fact(0) = 1
fun fact(n) = n * fact(n - 1)

Almost everything we're going to do here is built on a common idiom. For anything we want to do arithmetically, we're going to find a bound - a maximum numeric value for it. Then we're going to iterate over all of the values smaller than that bound, searching for our target.

For example, what's the nth prime factor of x? Obviously, it's got to be smaller than x, so we'll use x as our bound. (You could find a tighter bound, but it doesn't matter. We don't care about efficiency!) To find the nth prime factor, we'll iterate over all of the numbers smaller than our bound x, and search for the smallest number which is prime, which divides x, and which is larger than the (n-1)th prime factor of x. We'll translate that into pseudo-code:

fun prFactor(0, x) = 0
fun prFactor(n, x) = {
  first y in 1 to x {
    isPrime(y) and divides(y, x) and prFactor(n - 1, x) < y 
  }
}

Similarly, for extracting values from strings, we need to be able to ask, in general: what's the nth prime number? This is nearly identical to prFactor above. The only difference is that we need a different bound. Fortunately, we know that the nth prime number can't be larger than the factorial of the previous prime plus 1. (Why does that bound work? Because nthPrime(n-1)! + 1 isn't divisible by any number from 2 through nthPrime(n-1), so its smallest prime factor has to be a prime larger than nthPrime(n-1).)

fun nthPrime(0) = 0
fun nthPrime(n) = {
  first y in 1 to fact(nthPrime(n - 1)) + 1  {
    isPrime(y) and y > nthPrime(n - 1)
  }
}

In composing strings of Gödel numbers, we use exponentiation. Given integers x and n, we can obviously compute x**n via primitive recursion. I'll define it below, but in the rest of this post, I'll write it as an operator in the language:

fun pow(n, 0) = 1
fun pow(n, i) = n * pow(n, i - 1)

String Composition and Decomposition

With those preliminaries out of the way, we can get to the point of defining something that's actually about one of the strings encoded in these Gödel numbers. Given a number x encoding a string, item(n, x) is the value of the nth character of x. (This is slow. This is really slow! We're getting to the limit of what a very powerful computer can do in a reasonable amount of time. But this doesn't matter. The point isn't that this is a good way of doing these things: it's that these things are possible. To give you an idea of just how slow this is, I started off writing the stuff in this post in Haskell. Compiled with GHC, which is a very good compiler, using item to extract the 6th character of an 8 character string took around 10 minutes on a 2.4GHz laptop.)

fun item(n, x) = {
  first y in 1 to x {
    divides(prFactor(n, x) ** y, x) and
      not divides(prFactor(n, x) ** (y + 1), x)
  }
}
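
If you want to see that concretely, here's a quick Python rendering of item - my own sketch, not part of the original construction. The nth character of an encoded string is just the exponent of the nth prime in the number's factorization:

def nth_prime(n):
    # Find the nth prime by trial division. Slow, but we don't care.
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, candidate)):
            count += 1
    return candidate

def item(n, x):
    # The exponent of the nth prime in x: the nth character of the string.
    p, e = nth_prime(n), 0
    while x % p == 0:
        x, e = x // p, e + 1
    return e

print(item(2, 2**9 * 3**17 * 5**11))  # prints 17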

Given a string, we want to be able to ask how long it is; and given two strings, we want to be able to concatenate them.

fun length(x) = {
  first y in 1 to x {
    prFactor(y, x) > 0 and prFactor(y + 1, x) == 0
  }
}

fun concat(x, y) = {
  val lx = length(x)
  val ly = length(y)

  first z in 1 to nthPrime(lx + ly)**(x + y) {
    (all n in 1 to lx {
        item(n, z) == item(n, x)
     }) and (all n in 1 to ly {
        item(n + lx, z) == item(n, y)
      }) 
  }
}

fun concatl([]) = 0
fun concatl(xs) = {
  concat(head(xs), concatl(tail(xs)))
}

fun seq(x) = 2**x

We want to be able to build statements represented as numbers from other statements represented as numbers. We'll define a set of functions that compose new strings from other strings, and predicates that check whether a particular string is a particular kind of syntactic element.

# x is a variable of type n.
pred vtype(n, x) = {
  some z in 17 to x {
    isPrime(z) and x == z**n
  }
}

# x is a variable
pred isVar(x) = {
  some n in 1 to x {
    vtype(n, x)
  }
}

fun paren(x) =
  concatl([seq(11), x, seq(13)])

# given the Gödel number for a statement x, find
# the Gödel number for not x.
fun gnot(x) =
  concat(seq(5), paren(x))

# Create the number for x or y.
fun gor(x, y) =
  concatl([paren(x), seq(7), paren(y)])

# Create the number for 'forall x(y)'.
fun gforall(x, y) =
  concatl([seq(9), seq(x), paren(y)])

# Create the number for x with n invocations of the primitive
# successor function.
fun succn(0, x) = x
fun succn(n, x) = concat(seq(3), succn(n - 1, x))

# Create the number n using successor and 0.
fun gnumber(n) = succn(n, seq(1))

# Check if a statement is type-1.
pred stype_one(x) = {
  some m in 1 to x {
    some n in 0 to x {
      (m == 1 or vtype(1, m)) and x == succn(n, seq(m))
    }
  }
}

# Check if a statement is a sign of type n.
pred stype(1, x) = stype_one(x)
pred stype(n, x) = {
  some v in 1 to x {
    vtype(n, v) and R(v)
  }
}

That last function contains an error: the translation of Gödel that I'm using says R(v) without defining R. Either I'm missing something, or the translator made an error.

Formulae

Using what we've defined so far, we're now ready to start defining formulae in the basic Principia logic. Formulae are strings, but they're strings with a constrained syntax.

pred elFm(x) = {
  some y in 1 to x {
    some z in 1 to x {
      some n in 1 to x {
        stype(n, y) and stype(n+1, z) and x == concat(z, paren(y))
      }
    }
  }
}

All this is doing is expressing the grammar rule in arithmetic form: an elementary formula is a predicate application P(x), where x is a variable of level n, and P is a variable of level n + 1.

The next grammar rule that we encode this way says how we can combine elementary formulae using operators. There are three operators: negation, disjunction, and universal quantification.

pred op(x, y, z) = {
  x == gnot(y) or
  x == gor(y, z) or 
  (some v in 1 to x { isVar(v) and x == gforall(v, y) })
}

And now we can start getting complex. We're going to define the idea of a valid sequence of formulae. x is a valid sequence of formulae when it's formed from a collection of formulae, each of which is either an elementary formula, or is produced from the formulae which occurred before it in the sequence using either negation, logical-or, or universal quantification.

In terms of a more modern way of talking about it, the syntax of the logic is a grammar. A formula sequence, in this system, is another way of writing the parse-tree of a statement: the sequence is the parse-tree of the last statement in the sequence.

pred fmSeq(x) = {
  all n in 0 to length(x) {
    elFm(item(n, x)) or
      some p in 0 to (n - 1) {
        some q in 0 to (n - 1) {
          op(item(n, x), item(p, x), item(q, x))
        }
      }
  }
}

The next one bugs me, because it seems wrong, but it isn't really! It's a way of encoding the fact that a formula is the result of a well-defined sequence of formulae. In order to ensure that everything stays primitive recursive, we're always thinking about sequences of formulae, where the later formulae are produced from the earlier ones. The goal of a sequence of formulae is to produce the last formula in the sequence. What this predicate is really saying is that a formula is a valid formula if there is some sequence of formulae where this is the last one in the sequence.

Rephrasing that in grammatical terms, a string is a formula if there is a valid parse tree for the grammar that produces the string.

pred isFm(x) = {
  some n in 1 to nthPrime(length(x)**2)**(x*length(x)**2) {
    fmSeq(n) and item(length(n), n) == x
  }
}

So, now, can we say that a statement is valid because it's parsed according to the grammar? Not quite. It's actually a familiar problem for people who write compilers. When you parse a program in some language, the grammar doesn't usually specify that variables must be declared before they're used. It's too hard to get that into the grammar. In this logic, we've got almost the same problem: the grammar hasn't restricted us to only use bound variables. So we need to have ways to check whether a variable is bound in a Gödel-encoded formula, and then use that to check the validity of the formula.

# The variable v is bound in formula x at position n.
pred bound(v, n, x) = {
  isVar(v) and isFm(x) and 
  (some a in 1 to x {
    some b in 1 to x {
      some c in 1 to x {
        x == concatl([a, gforall(v, b), c]) and
        isFm(b) and
        length(a) + 1 ≤ n ≤ length(a) + length(gforall(v, b))
      }
    }
  })
}

# The variable v is free in formula x at position n
pred free(v, n, x) = {
  isVar(v) and isFm(x) and
  v == item(n, x) and n ≤ length(x) and not bound(v, n, x)
}

pred free(v, x) = {
  some n in 1 to length(x) {
    free(v, n, x)
  }
}

To do logical inference, we need to be able to do things like replace a variable with a specific inferred value. We'll define how to do that:

# replace the item at position n in x with y.
fun insert(x, n, y) = {
  first z in 1 to nthPrime(length(x) + length(y))**(x+y) {
    some u in 1 to x {
      some v in 1 to x {
        x == concatl([u, seq(item(n, x)), v]) and
        z == concatl([u, y, v]) and
        n == length(u) + 1
      }
    }
  }
}

There are inference operations and validity checks that we can only do if we know whether a particular variable is free at a particular position.

# freePlace(k, v, x) is the k+1st place in x (counting from the end)
# where v is free.
fun freePlace(0, v, x) = {
  first n in 1 to length(x) {
    free(v, n, x) and
    not some p in n + 1 to length(x) {
      free(v, p, x)
    }
  }
}

fun freePlace(k, v, x) = {
  first n in 1 to freePlace(k - 1, v, x) {
    free(v, n, x) and
    not some p in n + 1 to freePlace(k - 1, v, x) {
      free(v, p, x)
    }
  }
}

# number of places where v is free in x
fun nFreePlaces(v, x) = {
  first n in 1 to length(x) {
    freePlace(n, v, x) == 0
  }
}

In the original logic, some inference rules are defined in terms of a primitive substitution operator, which we wrote as subst[v/c](a) to mean substitute the value c for the variable v in the statement a. We'll build that up in a couple of steps, using the freePlace function that we just defined.

# subst' replaces the first k free instances of v in x with y.
fun subst'(0, x, v, y) = x
fun subst'(k, x, v, y) =
  insert(subst'(k - 1, x, v, y), freePlace(k - 1, v, x), y)

# subst replaces all instances of v with y
fun subst(x, v, y) = subst'(nFreePlaces(v, x), x, v, y)

The next thing we're going to do isn't, strictly speaking, absolutely necessary. Some of the harder stuff we want to do will be easier to write using things like implication, which aren't built-in primitives of the Principia logic. To write those as clearly as possible, we'll define the full suite of usual logical operators in terms of the primitives.

# implication
fun gimp(x, y) = gor(gnot(x), y)

# logical and
fun gand(x, y) = gnot(gor(gnot(x), gnot(y)))

# iff
fun gequiv(x, y) = gand(gimp(x, y), gimp(y, x))

# existential quantification
fun gexists(v, y) = gnot(gforall(v, gnot(y)))

Axioms

The Peano axioms are valid logical statements, so they have Gödel numbers in this system. We could compute their value, but why bother? We know that they exist, so we'll just give them names, and define a predicate to check if a value matches them.

The form of the Peano axioms used in incompleteness is:

  1. Zero: ¬(succ(x1) = 0)
  2. Uniqueness: succ(x1) = succ(y1) ⇒ x1 = y1
  3. Induction: x2(0) ∧ ∀x1(x2(x1) ⇒ x2(succ(x1))) ⇒ ∀x1(x2(x1))

const pa1 = ...
const pa2 = ...
const pa3 = ...

pred peanoAxiom(x) = 
  (x == pa1) or (x == pa2) or (x == pa3)

Similarly, we know that the propositional axioms must have numbers. The propositional axioms are:

  1. p ∨ p ⇒ p
  2. p ⇒ p ∨ q
  3. p ∨ q ⇒ q ∨ p
  4. (p ⇒ q) ⇒ (r ∨ p ⇒ r ∨ q)

I'll show the translation of the first - the rest follow the same pattern.

# Check if x is a statement that is a form of propositional
# axiom 1: y or y => y
pred prop1Axiom(x) = {
  some y in 1 to x {
    isFm(y) and x == gimp(gor(y, y), y)
  }
}

pred prop2Axiom(x) = ...
pred prop3Axiom(x) = ...
pred prop4Axiom(x) = ...
pred propAxiom(x) = prop1Axiom(x) or prop2Axiom(x) or
    prop3Axiom(x) or prop4Axiom(x)

Similarly, all of the other axioms are written out in the same way, and we add a predicate isAxiom to check if something is an axiom. Next are the quantifier axioms, which are complicated, so I'll only write out one of them - the other follows the same basic scheme.

The two quantifier axioms are:

  1. ∀ v (a) ⇒ subst[v/c](a)
  2. ∀ v (b ∨ a) ⇒ (b ∨ ∀ v (a))

The first axiom needs a side condition: the term z that gets substituted for v can't contain any variable that would be captured - that is, bound in y at a position where v was free:

pred quantifier_axiom1_condition(z, y, v) = {
  not some n in 1 to length(y) {
    some m in 1 to length(z) {
      some w in 1 to z {
        w == item(m, z) and bound(w, n, y) and free(v, n, y)
      }
    }
  }
}

pred quantifier1Axiom(x) = {
  some v in 1 to x {
    some y in 1 to x {
      some z in 1 to x {
        some n in 1 to x {
          vtype(n, v) and stype(n, z) and
          isFm(y) and 
          quantifier_axiom1_condition(z, y, v) and
          x == gimp(gforall(v, y), subst(y, v, z))
        }
      }
    }
  }
}
      
pred quantifier2Axiom(x) = ...
pred quantifierAxiom(x) = quantifier1Axiom(x) or quantifier2Axiom(x)

We need to define a predicate for the reducibility axiom (basically, the Principia's version of the ZFC axiom of comprehension). The reducibility axiom is a schema: for any predicate a, ∃ u (∀ v (u(v) ⇔ a)). In our primitive recursive system, we can check if something is an instance of the reducibility axiom schema with:

pred reduAxiom(x) = {
  some u in 1 to x {
    some v in 1 to x {
      some y in 1 to x {
        some n in 1 to x {
          vtype(n, v) and
          vtype(n+1, u) and
          not free(u, y) and
          isFm(y) and
          x == gexists(u, gforall(v, gequiv(concat(seq(u), paren(seq(v))), y)))
        }
      }
    }
  }
}

Now, the set axiom. In the logic we're using, this is the axiom that defines set equality. It's written as ∀ x1 (x2(x1) ⇔ y2(x1)) ⇒ x2 = y2. Set equality is defined for all types of sets, so we need to have one version of the axiom for each level. We do that using type-lifting: we say that the axiom is true for type-1 sets, and that any type-lift of the level-1 set axiom is also a version of the set axiom.

fun typeLift(n, x) = {
  first y in 1 to x**(x**n) {
    all k in 1 to length(x) {
      (item(k, x) ≤ 13 and item(k, y) == item(k, x)) or
      (item(k, x) > 13 and item(k, y) == item(k, x) * prFactor(1, item(k, x))**n)
    }
  }
}

We haven't defined the type-1 set axiom. But we just saw the axiom above, and it's obviously a simple logical statement. That means that it's got a Gödel number. Instead of computing it, we'll just say that that number is called sa1. Now we can define a predicate to check if something is a set axiom:

val sa1 = ...

pred setAxiom(x) = {
  some n in 1 to x {
    x == typeLift(n, sa1)
  }
}

We've now defined all of the axioms of the logic, so we can now create a general predicate to see if a statement fits into any of the axiom categories:

pred isAxiom(x) =
  peanoAxiom(x) or propAxiom(x) or quantifierAxiom(x) or
  reduAxiom(x) or setAxiom(x)

Proofs and Provability!

With all of the axioms expressible in primitive recursive terms, we can start on what it means for something to be provable. First, we'll define what it means for some statement x to be an immediate consequence of some statements y and z. (Back when we talked about the Principia's logic, we said that x is an immediate consequence of y and z if either y is the formula z ⇒ x, or x is the formula ∀v(y).)

pred immConseq(x, y, z) = {
  y == gimp(z, x) or
  some v in 1 to x {
    isVar(v) and x == gforall(v, y)
  }
}

Now, we can use our definition of an immediate consequence to specify when a sequence of formulae is a proof figure. A proof figure is a sequence of statements where each statement in it is either an axiom, or an immediate consequence of two of the statements that preceded it.

pred isProofFigure(x) = {
  (all n in 0 to length(x) {
    isAxiom(item(n, x)) or
    some p in 0 to n {
      some q in 0 to n {
        immConseq(item(n, x), item(p, x), item(q, x))
      }
    }
  }) and
  length(x) > 0
}

We can say that x is a proof of y if x is a proof figure, and the last statement in x is y.

pred proofFor(x, y) =
  isProofFigure(x) and
  item(length(x), x) == y

Finally, we can get to the most important thing! We can define what it means for something to be provable! It's provable if there's a proof for it!

pred provable(x) = {
  some y {
    proofFor(y, x)
  }
}

Note that this last one is not primitive recursive! There's no way that we can create a bound for this: a proof can be any length.

At last, we're done with these definitions. What we've done here is really amazing: now, every logical statement can be encoded as a number. Every proof in the logic can be encoded as a sequence of numbers: if something is provable in the Principia logic, we can encode that proof as a string of numbers, and check the proof for correctness using nothing but (a whole heck of a lot of) arithmetic!

Next post, we'll finally get to the most important part of what Gödel did. We've been able to define what it means for a statement to be provable - we'll use that to show that there's a way of creating a number encoding the statement that something is not provable. And we'll show how that means that there is a true statement in the Principia's logic which isn't provable using the Principia's logic, which means that the logic isn't complete.

In fact, the proof that we'll do shows a bit more than that. It doesn't just show that the Principia's logic is incomplete. It shows that any consistent formal system like the Principia, any system which is powerful enough to encode Peano arithmetic, must be incomplete.

4 responses so far

Defining Properties Arithmetically (part 1): Gödel and Primitive Recursion

Feb 11 2013 Published by under Incompleteness

When I left off, we'd seen how to take statements written in the logic of the Principia Mathematica, and convert them into numerical form. What we need to see now is how to get meta-mathematical.

We want to be able to write second-order logical statements. The basic trick to incompleteness is that we're going to use the numerical encoding of statements to say that a predicate or relation is represented by a number. Then we're going to write predicates about predicates by defining predicates on the numerical representations of the first-order predicates. That's going to let us create a true statement in the logic that can't be proven with the logic.

To do that, we need to figure out how to take our statements and relations represented as numbers, and express properties of those statements and relations in terms of arithmetic. To do that, we need to define just what it means to express something arithmetically. Gödel did that by defining "arithmetically" in terms of a concept called primitive recursion.

I learned about primitive recursion when I studied computational complexity. Nowadays, it's seen as part of theoretical computer science. The idea, as we express it in modern terms, is that there are many different classes of computable functions. Primitive recursion is one of the basic complexity classes. You don't need a Turing machine to compute primitive recursive functions - they're a simpler class.

The easiest way to understand primitive recursion is that it's what you get in a programming language with integer arithmetic, and simple for-loops. The only way you can iterate is by repeating things a bounded number of times. Primitive recursion has a lot of interesting properties: the two key ones for our purposes here are: number theoretic proofs are primitive recursive, and every computation of a primitive recursive function is guaranteed to complete within a bounded amount of time.

The formal definition of primitive recursion, the way that Gödel wrote it, is quite a bit more complex than that. But it means the same thing.

We start with what it means to define a formula via primitive recursion. (Note the language that I used there: I'm not explaining what it means for a function to be primitive recursive; I'm explaining what it means to be defined via primitive recursion.) And I'm defining formulae, not functions. In Gödel's proof, we're always focused on numerical reasoning, so we're not going to talk about programs or algorithms, we're going to talk about the definition of formulae.

A formula \phi(x_1, x_2, ..., x_n) is defined via primitive recursion if, for some other formulae \psi and \mu:

  • Base: \phi(0, x_2, ..., x_n) = \psi(x_2, ..., x_n)
  • Recursive: \phi(i+1, x_2, ..., x_n) = \mu(i, \phi(i, x_2, ..., x_n), x_2, ..., x_n).

So, basically, the first parameter is a bound on the number of times that \phi can be invoked recursively. When it's 0, you can't invoke \phi any more.
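
If it helps to see that as code, here's a sketch of the schema in Python - my own illustration; psi and mu play the roles of \psi and \mu in the definition above:

def primitive_rec(psi, mu):
    # Build phi from psi and mu:
    #   phi(0, *xs)     = psi(*xs)
    #   phi(i + 1, *xs) = mu(i, phi(i, *xs), *xs)
    def phi(n, *xs):
        acc = psi(*xs)
        for i in range(n):  # a bounded loop - no unbounded search anywhere
            acc = mu(i, acc, *xs)
        return acc
    return phi

# Addition via the schema: add(0, x) = x; add(i+1, x) = succ(add(i, x)).
add = primitive_rec(lambda x: x, lambda i, acc, x: acc + 1)
print(add(3, 4))  # prints 7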

A formula is primitive recursive if it is defined from a collection of formulae \phi_1, ..., \phi_n where any formula \phi_i is defined via primitive recursion from \phi_1, ..., \phi_{i-1}, or from the primitive succ function from Peano arithmetic.

For any formula \phi_i in that sequence, the degree of the formula is the number of other primitive recursive formulae used in its definition.

Now, we can define a primitive recursive property: R(x_1, ..., x_n) is primitive recursive if and only if there exists a primitive recursive formula \phi such that R(x_1, ..., x_n) holds exactly when \phi(x_1, ..., x_n) = 0.
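
For a tiny example (again mine, in the same Python rendering): the relation "x is even" is primitive recursive, because it holds exactly when the primitive recursive function phi(x) = x mod 2 evaluates to 0:

def phi(x):
    return x % 2        # mod is primitive recursive

def is_even(x):
    return phi(x) == 0  # the relation holds iff phi evaluates to 0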

With primitive recursive formulae and relations defined, there's a bunch of theorems about how you can compose primitive recursive formulae and relations:

  1. Every function or relation that you get by substituting a primitive recursive function for a variable in a primitive recursive function/relation is primitive recursive.
  2. If R and S are primitive relations, then ¬R, R∧S, R∨S are all primitive recursive.
  3. If \phi(x_1, ..., x_n) and \psi(x_1, ..., x_n) are primitive recursive functions, then the relation R(x_1, ..., x_n) \Leftrightarrow (\phi(x_1, ..., x_n) = \psi(x_1, ..., x_n)) is also primitive recursive.
  4. Let xv and zv be finite-length tuples of variables. If the function \phi(xv) and the relation R(y, zv) are primitive recursive, then so are the relations:
    • S(xv, zv) \Leftrightarrow  (\exists y \le \phi(xv). R(y, zv))
    • T(xv, zv) \Leftrightarrow  (\forall y \le \phi(xv). R(y, zv))
  5. Let xv and zv be finite-length tuples of variables, and let \text{argmin}[y \le \phi(xv). R(y, zv)] be the smallest y \le \phi(xv) for which R(y, zv) is true, or 0 if there is no such value. Then if the function \phi(xv) and the relation R(y, zv) are primitive recursive, so is the function P(xv, zv) = \text{argmin}[y \le \phi(xv). R(y, zv)].

By these definitions, addition, subtraction, multiplication, and integer division are all primitive recursive.

Ok. So, now we've got all of that out of the way. It's painful, but it's important. What we've done is come up with a good formal description of what it means for something to be an arithmetic property: if we can write it as a primitive recursive relation or formula, it's arithmetic.

So now, finally, we're ready to get to the really good part! Now that we know what it means to define something arithmetically, we can see how to define meta-mathematical properties and formulae arithmetically. Next post, hopefully tomorrow, I'll start showing you arithmetic expressions of meta-math!

5 responses so far

Gödel Numbering: Encoding Logic as Numbers

Feb 07 2013 Published by under Incompleteness

The first step in Gödel's incompleteness proof was finding a way of taking logical statements and encoding them numerically. Looking at this today, it seems sort-of obvious. I mean, I'm writing this stuff down in a text file - that text file is a stream of numbers, and it's trivial to convert that stream of numbers into a single number. But when Gödel was doing it, it wasn't so obvious. So he created a really clever mechanism for numerical encoding. The advantage of Gödel's encoding is that it makes it much easier to express properties of the encoded statements numerically.

Before we can look at how Gödel encoded his logic into numbers, we need to look at the logic that he used. Gödel worked with the specific logic variant used by the Principia Mathematica. The Principia logic is minimal and a bit cryptic, but it was built for a specific purpose: to have a minimal syntax, and a complete but minimal set of axioms.

The whole idea of the Principia logic is to be purely syntactic. The logic is expected to have a valid model, but you shouldn't need to know anything about the model to use the logic. Russell and Whitehead were deliberately building a pure logic where you didn't need to know what anything meant to use it. I'd really like to use Gödel's exact syntax - I think it's an interestingly different way of writing logic - but I'm working from a translation, and the translator updated the syntax. I'm afraid that switching between the older Gödel syntax, and the more modern syntax from the translation would just lead to errors and confusion. So I'm going to stick with the translation's modernization of the syntax.

The basic building blocks of the logic are variables. Already this is a bit different from what you're probably used to in a logic. When we think of logic, we usually consider predicates to be a fundamental thing. In this logic, they're not. A predicate is just a shorthand for a set, and a set is represented by a variable.

Variables are stratified. Again, it helps to remember where Russell and Whitehead were coming from when they were writing the Principia. One of their basic motivations was avoiding self-referential statements like Russell's paradox. In order to prevent that, they thought that they could create a stratified logic, where on each level, you could only reason about objects from the level below. A first-order predicate would be a second-level object that could only reason about first-level objects. A second-order predicate would be a third-level object which could reason about second-level objects. No predicate could ever reason about itself or anything on its own level. This leveling property is a fundamental property built into their logic. The way the levels work is:

  • Type one variables, which range over simple atomic values, like
    specific single natural numbers. Type-1 variables are written as
    a1, b1.
  • Type two variables, which range over sets of atomic values, like
    sets of natural numbers. A predicate, like IsOdd, about specific natural numbers would be represented as a type-2 variable. Type-2 variables are
    written a2, b2, ...
  • Type three variables range over sets of sets of atomic values.
    For example, ordered pairs can be represented as sets of sets:
    the ordered pair (1, 4) would be represented by the set { {1}, {1, 4} }.
    A function - which, in set theoretic terms, is a set of ordered
    pairs - would then be represented by a type-4 variable: a set of
    sets of sets of values.

Using variables, we can form simple atomic expressions, which in Gödel's terminology are called signs. As with variables, the signs are divided into stratified levels:

  • Type-1 signs are variables, and successor expressions. Successor expressions
    are just Peano numbers written with "succ": 0, succ(0), succ(succ(0)),
    succ(a1), etc.
  • Signs of any type greater than 1 are just variables of that type/level.

Now we can assemble the basic signs into formulae. Gödel explained how to build formulae in classic logician's form, which I think is hard to follow, so I've converted it into a grammar:

 elementary_formula → signn+1(signn)
 formula → elementary_formula
 formula → ¬(formula)
 formula → (formula) ∨ (formula)
 formula → ∀ signn (formula)

That's all that's really included in Gödel's logic. It's enough: everything else can be defined in terms of combinations of these. For example, you can define logical and in terms of negation and logical or: (a ∧ b) ⇔ ¬ (¬ a ∨ ¬ b).

Next, we need to define the axioms of the system. In terms of logic the way I think of it, these axioms include both "true" axioms, and the inference rules defining how the logic works. There are five families of axioms.

  1. First, there's the Peano axioms, which define the natural numbers.
    1. ¬(succ(x1) = 0): 0 is a natural number, and it's not the successor of anything.
    2. succ(x1) = succ(y1) ⇒ x1 = y1: Successors are unique.
    3. (x2(0) ∧ ∀ x1 (x2(x1) ⇒ x2(succ(x1)))) ⇒ ∀ x1(x2(x1)): induction works on the natural numbers.
  2. Next, we've got a set of basic inference rules about simple propositions. These
    are defined as axiom schemata, which can be instantiated for any set of formulae
    p, q, and r.

    1. p ∨ p ⇒ p
    2. p ⇒ p ∨ q
    3. p ∨ q ⇒ q ∨ p
    4. (p ⇒ q) ⇒ (p ∨ r) ⇒ (q ∨ r)
  3. Axioms that define inference over quantification. v is a variable,
    a is any formula, b is any formula where v is not a free variable, and c is a sign of the same level as v, and
    which doesn't have any free variables that would be bound if it were inserted as
    a replacement for v.

    1. ∀ v (a) ⇒ subst[v/c](a): if formula a is true for all values of v, then you can substitute any specific value c for v in a, and a must still be true.
    2. (∀ v (b ∨ a)) ⇒ (b ∨ ∀ v (a))
  4. The Principia's version of the set theory axiom of comprehension:
    ∃ u (∀ v ( u(v) ⇔ a )).
  5. And last but not least, an axiom defining set equivalence:
    ∀ xi (xi+1(xi) ⇔ yi+1(xi)) ⇒ xi+1 = yi+1
So, now, finally, we can get to the numbering. This is quite clever. We're going to use the simplest encoding: for every possible string of symbols in the logic, we're going to define a representation as a number. So in this representation, we are not going to get the property that every natural number is a valid formula: lots of natural numbers won't be. They'll be strings of nonsense symbols. (If we wanted to, we could make every number be a valid formula, by using a parse-tree based numbering, but it's much easier to just let the numbers be strings of symbols, and then define a predicate over the numbers to identify the ones that are valid formulae.)

    We start off by assigning numbers to the constant symbols:

    Symbol  Numeric Representation
    0       1
    succ    3
    ¬       5
    ∨       7
    ∀       9
    (       11
    )       13

    Variables will be represented by powers of prime numbers greater than 13. For a prime number p, p will represent a type-1 variable, p^2 will represent a type-2 variable, p^3 a type-3 variable, etc.

    Using those symbol encodings, we can take a formula written as a sequence of symbols x1 x2 x3 ... xn, and encode it numerically as the product 2^x1 × 3^x2 × 5^x3 × ... × pn^xn, where pn is the nth prime number.

    For example, suppose I wanted to encode the formula: ∀ x1 (y2(x1) ∨ x2(x1)).

    First, I'd need to encode each symbol:

    1. "∀" would be 9.
    2. "x1"" = 17
    3. "(" = 11
    4. "y2" = 192 = 361
    5. "(" = 11
    6. "x1" = 17
    7. ")" = 13
    8. "∨" = 7
    9. "x2" = 172 = 289
    10. "(" = 11
    11. "x1" = 17
    12. ")" = 13
    13. ")" = 13

    The formula would thus be turned into the sequence: [9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]. That sequence would then get turned into the single number 2^9 · 3^17 · 5^11 · 7^361 · 11^11 · 13^17 · 17^13 · 19^7 · 23^289 · 29^11 · 31^17 · 37^13 · 41^13, which according to Hugs is the number (warning: you need to scroll to see it. a lot!):
    1,821,987,637,902,623,701,225,904,240,019,813,969,080,617,900,348,538,321,073,935,587,788,506,071,830,879,280,904,480,021,357,988,798,547,730,537,019,170,876,649,747,729,076,171,560,080,529,593,160,658,600,674,198,729,426,618,685,737,248,773,404,008,081,519,218,775,692,945,684,706,455,129,294,628,924,575,925,909,585,830,321,614,047,772,585,327,805,405,377,809,182,961,310,697,978,238,941,231,290,173,227,390,750,547,696,657,645,077,277,386,815,869,624,389,931,352,799,230,949,892,054,634,638,136,137,995,254,523,486,502,753,268,687,845,320,296,600,343,871,556,546,425,114,643,587,820,633,133,232,999,109,544,763,520,253,057,252,248,982,267,078,202,089,525,667,161,691,850,572,084,153,306,622,226,987,931,223,193,578,450,852,038,578,983,945,920,534,096,055,419,823,281,786,399,855,459,394,948,921,598,228,615,703,317,657,117,593,084,977,371,635,801,831,244,944,526,230,994,115,900,720,026,901,352,169,637,434,441,791,307,175,579,916,097,240,141,893,510,281,613,711,253,660,054,258,685,889,469,896,461,087,297,563,191,813,037,946,176,250,108,137,458,848,099,487,488,503,799,293,003,562,875,320,575,790,915,778,093,569,309,011,025,000,000,000.
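
    If you'd like to check that monster for yourself, here's a small Python sketch of the encoding - mine, not part of the original paper:

    def primes():
        # Generate the primes 2, 3, 5, ... by trial division.
        found, candidate = [], 1
        while True:
            candidate += 1
            if all(candidate % p for p in found):
                found.append(candidate)
                yield candidate

    def godel_encode(symbol_numbers):
        # The string [x1, x2, x3, ...] becomes 2**x1 * 3**x2 * 5**x3 * ...
        result = 1
        for p, x in zip(primes(), symbol_numbers):
            result *= p ** x
        return result

    print(godel_encode([9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]))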

    Next, we're going to look at how you can express interesting mathematical properties in terms of numbers. Gödel used a property called primitive recursion as an example, so we'll walk through a definition of primitive recursion, and show how Gödel expressed primitive recursion numerically.

5 responses so far

Gödel's Incompleteness

Feb 03 2013 Published by under Incompleteness, Logic

I've mentioned Gödel's incompleteness theorems many times on this blog, but I've never actually written about them in detail. I was asking, on twitter, for topics that readers would be interested in, and one thing that came up was to actually go through the proof of incompleteness. I'm happy to take the time to do that: incompleteness is one of the most beautiful and profound proofs that I've ever seen.

It takes a fair bit of effort though, so it's going to take a couple of posts just to get through the preliminaries. What I'm going to do is work with this translation of the original paper where Gödel published his first incompleteness proof. Before we can get to the actual proof, we need to learn a bit about the particular kind of logic that he used in his proof.

The story of incompleteness is really pretty interesting. In the late 19th and early 20th centuries, mathematicians started to get really interested in formal foundations. Why, exactly, this happened is somewhat of a debate, but the story I find most likely is that it's a result of Cantor and his naive set theory.

Set theory came along, and it seemed like such a great idea. It seems incredibly simple, and it makes many proofs in many different subject areas appear to be really simple. But there's a huge problem: it's not consistent.

The problem is what's come to be known as Russell's paradox. I've explained Russell's paradox a bunch of times on the blog. For this post, I'm going to do it a bit differently. As part of my prep for this series of posts, I've been reading the book Gödel's Proof, and in that book, they have a way of rephrasing Russell's paradox which I think clarifies it nicely, so I'm going to try their version.

If you think about the set of all sets, you can partition it into two non-overlapping subsets. There's a subset of normal sets, and a subset of abnormal sets. What makes a set abnormal is that it's self-referential - an abnormal set contains itself as a member.

So consider the set of all normal sets. Is it, itself, a normal set?

The answer is: argh! Because if the set of all normal sets is normal, then it must be a member of itself. But if it's a member of itself, it can't be normal. Similarly, if you start off saying that the set of all normal sets isn't normal, then by definition, it is normal. No matter what you do, you're screwed.

What Russell's paradox did was show that there was a foundational problem in math. You could develop what appeared to be a valid mathematical structure and theory, only to later discover that all the work you did was garbage, because there was some non-obvious fundamental inconsistency in how you defined it. But the way that foundations were treated simply wasn't strong or formal enough to be able to detect, right up front, whether you'd hard-wired an inconsistency into your theory. So foundations had to change, to prevent another Cantor incident.

So, eventually, along came two mathematicians, Russell and Whitehead, and they created an amazing piece of work called the Principia Mathematica. The Principia was supposed to be an ideal, perfect mathematical foundation. The Principia was supposed to have two key properties: it was supposed to be consistent, and it was supposed to be complete.

  • Consistent meant that the system would not allow any inconsistencies of any kind. If you used the logic and the foundations of the Principia, you couldn't even say anything like Russell's paradox: you couldn't even write it as a valid statement.
  • Complete meant that every true statement was provably true, every false statement was provably false, and every statement was either true or false.

A big part of how it did this was by creating a very strict stratification of logic. You could reason about a specific number, using level-0 statements. You could reason about sets of numbers using level-1 statements. And you could reason about sets of sets of numbers using level-2 statements. In a level-1 statement, you could make meta-statements about level-0 properties, but you couldn't reason about level-1. In level-2, you could reason about level-1 and level-0, but not about level-2. This meant that you couldn't make a set like the set of normal sets - because that set is formed using a predicate - and when you're forming a set like "the set of normal sets", the sets you can reason about are a level below you. You can form a level-1 set using a level-1 statement about level-0 objects. But you can't make a level-1 statement about level-1 objects! So self-referential systems would be completely impossible in the Principia's logic.

As a modern student of math, it's hard to understand what a profound thing they were trying to do. We've grown up learning math long after incompleteness became a well-known fact of life. (I read "Gödel Escher Bach" when I was a college freshman - well before I took any particularly deep math classes - so I knew about incompleteness before I knew enough to really understand what completeness would have meant!) The principia would have been the perfection of math, a final ultimate perfect system. There would have been nothing that we couldn't prove, nothing in math that we couldn't know!

What Gödel did was show that using the Principia's own system, and its own syntax, that not only was the Principia itself flawed, but that any possible effort like the Principia would inevitably be flawed!

With the incompleteness proof, Gödel showed that even in the Principia, even with all of the effort that it made to strictly separate the levels of reasoning, that he could form self-referential statements, and that those self-referential statements were both true and unprovable.

The way that he did it was simply brilliant. The proof was a sequence of steps.

  1. He showed that using Peano arithmetic - that is, the basic definition of natural numbers and natural number arithmetic - you could take any Principia-logic statement, and uniquely encode it as a number - so that every logical statement was a number, and every number was a specific logical statement.
  2. Then using that basic mechanic, he showed how you could take any property defined by a predicate in the Principia's logic, and encode it as an arithmetic property of the numbers. So a number encoded a statement, and the property of a number could be encoded arithmetically. A number, then, could be both a statement, and a definition of an arithmetic property of a statement, and a logical description of a logical property of a statement - all at once!
  3. Using that encoding, then - which can be formed for any logic that can express Peano arithmetic - he showed that you could form a self-referential statement: a number that was a statement about numbers, including the number that was the statement itself. And more, it could encode a meta-property of the statement in a way that was both true, and also unprovable: he showed how to create a logical property "There is no proof of this statement", which applied to its own numeric encoding. So the statement said, about itself, that it couldn't be proven.

The existence of that statement meant that the Principia - and any similar system! - was incomplete. Completeness means that every true statement is provably true within the system. But the statement encodes the fact that it cannot be proven. If you could prove it, the system would be inconsistent. If you can't, it's consistent, but incomplete.

We're going to go through all of that in detail. But to do that in a way where we can really understand it, we're going to need to start by looking at the logic used by the Principia. Then we'll look at how to encode Principia-logical statements as natural numbers using a technique called Gödel numbering, and how to express predicates numerically. And then, finally, we can look at how you can form the classic Gödel statement that shows that the Principia's system of logic is incomplete.
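To get a feel for the numbering step before we dig in, here's a minimal sketch of the flavor of the technique in Python: give each symbol a numeric code, and encode a sequence of symbols as a product of prime powers, so that unique factorization makes the encoding reversible. The symbol table here is invented for illustration - it's not the Principia's actual alphabet.

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for a sketch)."""
    found = []
    n = 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

SYMBOLS = {"0": 1, "s": 3, "=": 5, "(": 7, ")": 9}   # illustrative codes

def godel_number(statement):
    """Encode a symbol sequence as 2^c1 * 3^c2 * 5^c3 * ..."""
    g = 1
    for p, sym in zip(primes(), statement):
        g *= p ** SYMBOLS[sym]
    return g

# The statement "s(0) = s(0)" as a symbol sequence:
print(godel_number(["s", "(", "0", ")", "=", "s", "(", "0", ")"]))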


Programs As Proofs: Models and Types in the Lambda Calculus

Feb 29 2012 Published by under lambda calculus, Logic

Lambda calculus started off with the simple, untyped lambda calculus that we've been talking about so far. But one of the great open questions about lambda calculus was: was it sound? Did it have a valid model?

Church found that it was easy to produce some strange and nonsensical expressions using the simple lambda calculus. In order to try to work around those problems, and end up with a consistent system, Church introduced the concept of types, producing the simply typed lambda calculus. Once types hit the scene, things really went wild; the type systems for lambda calculi have never stopped developing: people are still finding new things to do by extending the LC type system today! Most lambda-calculus-based programming languages are based on the Hindley-Milner type system, which is a simplification of one of the standard sophisticated typed lambda calculi, called System F. There's even a Lambda Cube which can categorize the different type abstractions for lambda calculus (but alas, despite its name, it's not related to the time cube). Once people really started to understand types, they realized that the untyped lambda calculus was really just a pathologically simple instance of the simply typed lambda calculus: a typed LC with only one base type.

The semantics of lambda calculus are easiest to talk about in a typed version. For now, I'll talk about the simplest typed LC, known as the simply typed lambda calculus. One of the really amazing things about this, which I'll show, is that a simply typed lambda calculus is completely semantically equivalent to an intuitionistic propositional logic: each type in the program is a proposition in the logic; each β reduction corresponds to an inference step; and each complete function corresponds to a proof! Look below for how.

Types

The main thing that typed lambda calculus adds to the mix is a concept called base types. In a typed lambda calculus, you have some universe of atomic values which you can manipulate; those values are partitioned into the base types. Base types are usually named by single lower-case Greek letters. So, for example, we could have a type "σ", which consists of the set of natural numbers; a type "τ", which corresponds to boolean true/false values; and a type "γ", which corresponds to strings.

Once we have basic types, then we can talk about the type of a function. A function maps from a value of one type (the type of the parameter) to a value of a second type (the type of the return value). For a function that takes a parameter of type "γ", and returns a value of type "δ", we write its type as "γ → δ". "→" is called the function type constructor; it associates to the right, so "γ → δ → ε" parses as "γ → (δ → ε)".

To apply types to the lambda calculus, we do a couple of things. First, we need a syntax update so that we can include type information in lambda terms. And second, we need to add a set of rules to show what it means for a typed program to be valid.

The syntax part is easy. We add a ":" to the notation; the colon has an expression or variable binding on its left, and a type specification on its right. It asserts that whatever is on the left side of the colon has the type specified on the right side. A few examples:

\lambda x: \nu . x + 3
This asserts that the parameter, x, has type \nu, which we'll use as the type name for the natural numbers. (In case it's hard to tell, that's a Greek letter "nu" for natural.) There is no assertion of the type of the result of the function; but since we know that "+" is a function with type \nu \rightarrow \nu \rightarrow \nu, we can infer that the result type of this function will be \nu.
(\lambda x . x + 3): \nu \rightarrow \nu
This is the same as the previous, but with the type declaration moved out, so that it asserts the type for the lambda expression as a whole. This time we can infer that x : \nu because the function type is \nu \rightarrow \nu, which means that the function parameter has type \nu.
λ x : ν, y : δ . if y then x * x else x
This is a two parameter function; the first parameter has type ν, and the second has type δ. We can infer the return type, which is ν. So the type of the full function is ν → δ → ν. This may seem surprising at first; but remember that lambda calculus really works in terms of single parameter functions; multi-parameter functions are a shorthand for currying. So really, this function is: λ x : ν . (λ y : δ . if y then x * x else x); the inner lambda is type δ → ν; the outer lambda is type ν → (δ → ν).
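If it helps to see the currying shorthand in action, here's a quick sketch of that function in Python, using a boolean to stand in for the δ-typed parameter:

def f(x):
    # The outer lambda: takes x, returns the inner lambda.
    return lambda y: x * x if y else x

print(f(3)(True))    # 9 - the "then" branch: x * x
print(f(3)(False))   # 3 - the "else" branch: x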

To talk about whether a program is valid with respect to types (aka well-typed), we need to introduce a set of rules for type inference. Then we can verify that the program is type-consistent.

To talk about type inference, we need to talk about judgements. When we can infer the type of an expression using an inference rule, we call that inference a type judgement. Type inference and judgements allow us to reason about types in a lambda expression; and if any part of an expression winds up with an inconsistent type judgement, then the expression is invalid. (When Church started doing typed LC, one of the motivations was to distinguish between values representing "atoms", and values representing "predicates"; he was trying to avoid the Gödel-esque paradoxes, by using types to ensure that predicates couldn't operate on predicates.)

Type judgements are usually written in a sequent-based notation, which looks like a fraction where the numerator consists of statements that we know to be true, and the denominator is what we can infer from the numerator. In the numerator, we normally have statements using a context, which is a set of type judgements that we already know; a context is usually written as an uppercase Greek letter. If a type context \Gamma includes the judgement that x : \alpha, I'll write that as \Gamma :- x : \alpha.

Rule 1: Type Identity

\frac{\mbox{}}{x : \alpha :- x: \alpha}

This is the simplest rule: if we have no other information except a declaration of the type of a variable, then we know that that variable has the type it was declared with.

Rule 2: Type Invariance

\frac{\Gamma :- x:\alpha, x \neq y}{\Gamma + y:\beta :- x:\alpha}

This rule is a statement of non-interference. If we know that x:\alpha, then inferring a type judgement about any other term cannot change our type judgement for x.

Rule 3: Function Type Inference

\frac{\Gamma + x:\alpha :- y:\beta}{\Gamma :- (\lambda x:\alpha . y):\alpha \rightarrow \beta}

This statement allows us to infer function types given parameter types. If we know that the type of the parameter to a function is \alpha, and if, with our knowledge of the parameter type, we know that the type of the term that makes up the body of the function is \beta, then we know that the type of the function is \alpha \rightarrow \beta.

Rule 4: Function Application Inference

\frac{\Gamma :- x: \alpha \rightarrow \beta, \Gamma :- y: \alpha}{\Gamma :- (x y): \beta}

This one is easy: if we know that we have a function that takes a parameter of type \alpha and returns a value of type \beta, then if we apply that function to a value of type \alpha, we'll get a value of type \beta.

These four rules are it. If we can take a lambda expression, and come up with a consistent set of type judgements for every term in the expression, then the expression is well-typed. If not, then the expression is invalid.

So let's try taking a look at a simple lambda calculus expression, and seeing how inference works on it.

\lambda x y . y x

Without any type declarations or parameters, we don't know the exact type. But we do know that "x" has some type; we'll call that "α"; and we know that "y" is a function that will be applied with "x" as a parameter, so it's got parameter type α, but its result type is unknown. So using type variables, we can say "x:α, y:α→β". We can figure out what "α" and "β" are by looking at a complete expression. So, let's work out the typing of it with x="3", and y="λ a:ν . a*a". We'll assume that our type context already includes "*" as a function of type "ν→ν→ν", where ν is the type of natural numbers.

  • "λ x y . y x) 3 (λ a:ν . a \* a)": Since 3 is a literal integer, we know its type: 3:ν.
  • By rule 4, we can infer that the type of the expression "a\*a" where "a:ν" is "ν", and \*:ν→ν→ν so therefore, by rule 3 the lambda expression has type "ν→ν". So with type labelling, our expression is now: "(λ x y . y x) (3:ν) (λ a:ν.(a\*a):ν) : ν→ν".
  • So - now, we know that the parameter "x" of the first lambda must be "ν"; and "y" must be "ν→ν"; so by rule 4, we know that the type of the application expression "y x" must be "ν"; and then by rule 3, the lambda has type: "ν→(ν→ν)→ν".
  • So, for this one, both α and β end up being "ν", the type of natural numbers.
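Here's a minimal sketch of those four rules as an executable type checker in Python. The term representation and all of the names here (check, fn, and so on) are my own invention for illustration, not any standard library:

def fn(param, result):
    # The function type constructor: param -> result.
    return ("fn", param, result)

def check(term, ctx):
    """Return the type of term in the context ctx (a dict mapping
    variable names to types), or raise TypeError if there's no
    consistent type judgement."""
    kind = term[0]
    if kind == "var":
        # Rules 1 and 2: a variable has its declared type, and adding
        # judgements about other variables doesn't change that.
        _, name = term
        if name not in ctx:
            raise TypeError("unbound variable: " + name)
        return ctx[name]
    if kind == "lam":
        # Rule 3: if the body has type beta given x:alpha, then the
        # lambda has type alpha -> beta.
        _, name, param_type, body = term
        body_type = check(body, {**ctx, name: param_type})
        return fn(param_type, body_type)
    if kind == "app":
        # Rule 4: applying an alpha -> beta function to an alpha
        # yields a beta.
        _, func, arg = term
        func_type = check(func, ctx)
        arg_type = check(arg, ctx)
        if func_type[0] != "fn" or func_type[1] != arg_type:
            raise TypeError("ill-typed application")
        return func_type[2]
    raise TypeError("unknown term")

# (λ a:ν . a*a) applied to 3, with "*" : ν→ν→ν in the context:
square = ("lam", "a", "nu",
          ("app", ("app", ("var", "*"), ("var", "a")), ("var", "a")))
ctx = {"*": fn("nu", fn("nu", "nu")), "3": "nu"}
print(check(("app", square, ("var", "3")), ctx))   # prints nu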

So, now we have a simply typed lambda calculus. The reason that it's simply typed is that the type treatment here is minimal: the only way of building new types is through the unavoidable \rightarrow constructor. Other typed lambda calculi include the ability to define parametric types, which are types expressed as functions ranging over types.

Programs are Proofs

Here's where it gets really fun. Think about the types in the simply typed lambda calculus. Anything which can be formed from the following grammar is a lambda calculus type:

  • type ::= primitive | function | ( type )
  • primitive ::= α | β | δ | ...
  • function ::= type→type

The catch with that grammar is that you can create type expressions which, while they are valid type definitions, you can't write a single, complete, closed expression which will actually have that type. (A closed expression is one with no free variables.) When there is an expression that has a type, we say that the expression inhabits the type; and that the type is an inhabited type. If there is no expression that can inhabit a type, we say it's uninhabitable. Any expression which either can't be typed using the inference rules, or which is typed to an uninhabitable type is a type error.

So what's the difference between an inhabitable type and an uninhabitable type?

The answer comes from something called the Curry-Howard isomorphism. For a typed lambda calculus, there is a corresponding intuitionistic logic. A type expression is inhabitable if and only if the type is a provable theorem in the corresponding logic.

The type inference rules in lambda calculus are, in fact, the same as logical inference rules in intuitionistic logic. A type \alpha \rightarrow \beta can be seen either as a statement that this is a function that maps from a value of type \alpha to a value of type \beta, or as a logical statement that if we're given a fact \alpha, we could use that to infer the truth of a fact \beta.

If there's a logical inference chain from an axiom (a given type assignment) to an inferred statement, then the inferred statement is an inhabitable type. If we have a type \alpha \rightarrow \alpha, then given an inhabited type \alpha, we know that \alpha \rightarrow \alpha is inhabitable, because if \alpha is a fact, then \alpha \rightarrow \alpha is also a fact.

On the other hand, think of a different case: \alpha \rightarrow \beta. That's not a theorem, unless there's some other context that proves it. As a function type, that's the type of a function which, without any context of any kind, can take a parameter of type α, and return a value of a different type β. You can't do that - there's got to be some context which provides a value of type β - and to access the context, there's got to be something to allow the function to access its context: a free variable. It's the same thing in the logic and the lambda calculus: you need some kind of context to establish "α→β" as a theorem (in the logic) or as an inhabitable type (in the lambda calculus).
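A sketch of the same point, using Python's type-hint notation: the polymorphic identity function is a closed term that inhabits α → α, while no closed, total definition can inhabit a bare α → β.

from typing import TypeVar

A = TypeVar("A")
B = TypeVar("B")

def identity(x: A) -> A:
    # Inhabits alpha -> alpha: the type is a theorem, and this
    # function is its proof.
    return x

# There's no closed, total definition with this signature: to produce
# a B from nothing but an A, you'd need context - a free variable or
# an extra argument - supplying a B.
# def magic(x: A) -> B: ...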

It gets better. If there is a function whose type is a theorem in the corresponding intuitionistic logic, then the program that has that type is a proof of the theorem. Each beta reduction is equivalent to an inference step in the logic. This is what programming language geeks like me mean when we say "the program is the proof": a well-typed program is, literally, a proof of the theorem expressed by its type.

To connect back to the discussion about models: the typed lambda calculus and its corresponding intuitionistic logic are, in a deep sense, just different reflections of the same thing. We know that intuitionistic logic has a valid model. And that, in turn, means that the lambda calculus is valid as well. When we show that something is true using the lambda calculus, we can trust that it's not an artifact of an inconsistent system.


Models and Why They Matter

Feb 19 2012 Published by under lambda calculus, Logic, Numbers

As I said in the last post, Church came up with λ-calculus, which looks like it's a great formal model of computation. But there was a problem: Church struggled to find a model. What's a model, and why would that matter? That's the point of this post: to get a quick sense of what a model is, and why it matters.

A model is basically a mapping from the symbols of a logical system to some set of objects, such that all statements that you can prove in the logical system will be true about the corresponding objects. Note that when I say object here, I don't necessarily mean real-world physical objects - they're just something that we can work with, which is well-defined and consistent.

Why does it matter? Because the whole point of a system like λ-calculus is that we want to use it for reasoning. When you have a logical system like λ-calculus, you've built this system with its rules for a reason - because you want to use it as a tool for understanding something. The model provides you with a way of saying that the conclusions you derive using the system are meaningful. If the model isn't valid, if it contains any kind of inconsistency, then your system is completely meaningless: it can be used to derive anything.

So the search for a model for λ-calculus is really important. If there's a valid model for it, then it's a wonderful tool. If there isn't, then everything we derive with it is worthless.

So, now, let's take a quick look at a simple model, to see how a problem can creep in. I'm going to build a logic for talking about the natural numbers - that is, integers greater than or equal to zero. Then I'll show you how invalid results can be inferred using it; and finally show you how it fails by using the model.

One quick thing, to make the notation easier to read: I'm going to use a simple notion of types. A type is a set of atoms for which some particular one-parameter predicate is true. For example, if P(x) is true, I'll say that x is a member of type P. In a quantifier, I'll say things like \forall x \in P: \mbox{\em foo} to mean \forall x : P(x) \Rightarrow \mbox{\em foo}. Used this way, we can say that P is a type predicate.

How do we define natural numbers using logic?

First, we need an infinite set of atoms, each of which represents one number. We pick one of them, and call it zero. To represent the fact that they're natural numbers, we define a predicate {\cal N}(x), which is true if and only if x is one of the atoms that represents a natural number.

Now, we need to start using predicates to define the fundamental properties of numbers. The most important property of natural numbers is that they form a sequence. We define that idea using a predicate, \mbox{\em succ}(x,y), where \mbox{\em succ}(x,y) is true if and only if x = y + 1. To use that to define the ordering of the naturals, we can say: \forall x \in {\cal N}: \exists y \in {\cal N}: \mbox{\em succ}(y, x).

Or in English: every natural number has a successor - you can always add one to a natural number and get another natural number.

We can also define predecessor similarly, with two statements:

  1. \forall x \in {\cal N}: \exists y \in {\cal N}: \mbox{\em pred}(y, x).
  2. \forall x,y \in {\cal N}: \mbox{\em pred}(y,x) \Leftrightarrow \mbox{\em succ}(x,y)

So every number has a predecessor, and every number has a successor, and x is the predecessor of y if y is the successor of x.

To be able to define things like addition and subtraction, we can use successor. Let's define addition using a predicate Sum(x,y,z) which means "z = x + y".

  1. \forall x,y \in {\cal N}: \exists z \in {\cal N} : Sum(x,y,z)
  2. \forall x,y \in {\cal N}: Sum(x, 0, x)
  3. \forall x,y,z \in {\cal N}: \exists a,b \in {\cal N}: Sum(a,b,z) \land \mbox{\em succ}(a,x) \land \mbox{\em succ}(y,b) \Rightarrow Sum(x, y, z)

Again, in English: for any two natural numbers, there is a natural number that is their sum; x + 0 always equals x; and for any natural numbers, x + y = z is true if (x + 1) + (y - 1) = z.

Once we have addition, subtraction is easy: \forall x,y,z \in {\cal N} : diff(x,y,z) \Leftrightarrow Sum(z,y,x)

That's: x-y=z if and only if x=y+z.
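Just to see the recursive definition in action, here's a sketch in Python that treats Sum and diff as checkable relations, assuming ordinary Python integers stand in for the number atoms:

def sum_holds(x, y, z):
    # Rule 2: Sum(x, 0, x).
    if y == 0:
        return z == x
    # Rule 3, read as a recursion: Sum(x, y, z) if Sum(x+1, y-1, z).
    return sum_holds(x + 1, y - 1, z)

def diff_holds(x, y, z):
    # diff(x, y, z) iff Sum(z, y, x): x - y = z iff x = y + z.
    return sum_holds(z, y, x)

print(sum_holds(2, 3, 5))    # True: 2 + 3 = 5
print(sum_holds(2, 3, 6))    # False
print(diff_holds(5, 3, 2))   # True: 5 - 3 = 2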

We can also define greater than using addition:

\forall x,y \in {\cal N} : x > y \Leftrightarrow \mbox{\em succ}(x,y) \lor (\exists z \in {\cal N}: \mbox{\em succ}(x, z) \land z > y)

That's x > y if you can get to x from y by adding one repeatedly.

So, we've got a nice definition of natural numbers here, right?

Almost. There's one teeny little mistake.

We said that every natural number has a successor and a predecessor, and we also said that a number x is greater than a number y if you can get from y to x using a sequence of successors. That means that 0 has a predecessor, and that the predecessor of 0 is less than 0. But we're supposed to be defining the natural numbers! And one of the standard axioms of the natural numbers is that \forall x \in {\cal N}: 0 \le x. But we've violated that - we have both \forall x \in {\cal N}: 0 \le x, and
\exists x \in {\cal N}: 0 > x. And with a contradiction like that in the system, we can prove anything we want, anything at all. We've got a totally worthless, meaningless system.

That's why mathematicians are so particular about proving the validity of their models: because the tiniest error can mean that anything you proved with the logic might not be true - your proofs are worthless.


3-Valued Semantics

Mar 23 2011 Published by under Fuzzy Logic

Before we can move from three-valued logic to fuzzy logic, we need to take a look at semantics - both how conventional two-valued logic handles semantics, and how three-valued logics extend the basic semantic structure. This isn't exactly one of the more exciting topics I've ever written about - but it is important, and going through it now will set the groundwork for the interesting stuff - the semantics of a true fuzzy logic.

So far, we've looked at propositional 3-valued logics. Propositional logics aren't particularly interesting: you can't do or say much with them. What we really care about is predicate logics. But all we need to do is take the three-valued logics we've seen, and allow statements to be of the form predicate(object).

In a conventional first-order predicate logic, we define the semantics in terms of a model or interpretation of the logic. (Technically, a model and an interpretation aren't quite the same thing, but for our purposes here, we don't need to get into the difference.)

An interpretation basically takes a domain consisting of a set of objects or values, and does two things:

  1. For each atomic symbol in the logic, it assigns an object from the domain. That value is called the interpretation of the symbol.
  2. For each predicate in the logic, it assigns a set, called the extension of the predicate. The extension contains the tuples for which the predicate is true.

For example, we could use logic to talk about Scientopia. The domain would be the set of bloggers, and the set of blogs. Then we could have predicates like "Writes", which takes two parameters - A and B - and which is true if A is the author of the blog B.

Then the extension of "Writes" would be a set of pairs: { (MarkCC, Good Math/Bad Math), (Scicurious, Neurotic Physiology), ... }.

We can also define the counterextension, which is the set of pairs for which the predicate is not true. So the counterextension of "Writes" would contain values like { (MarkCC, Neurotic Physiology), ... }.

Given a domain of objects, and the extension of each predicate, we know the meaning of statements in the logic. We know what objects we're reasoning about, and we know the truth or falsehood of every statement. Importantly, we don't need to know the counterextension of a predicate: if we know the extension, then the counterextension is simply the complement of the extension.

In three-valued Lukasiewicz logic, that's not true, for reasons that should be obvious: if I(A) is the interpretation of the predicate A, then the complement of I(A) is not the same thing as I(\lnot A). L_3 requires three sets for a predicate: the extension, the counterextension, and the fringe. The fringe of a predicate P is the set of values x for which P(x) is N.

To be more precise, an interpretation I for first order L_3 consists of:

  1. A set of values, D, called the domain of the logic. This is the set of objects that the logic can be used to reason about.
  2. For each predicate P of arity n (that is, taking n arguments), three sets ext(P), cext(P), and fringe(P), such that:
    • the values of the members of all three sets are members of D^n.
    • the sets are mutually exclusive - that is, there is no value that is in more than one of the sets.
    • the sets are exhaustive: ext(P) \cup cext(P) \cup fringe(P) = D^n.
  3. For each constant symbol a in the logic, an assignment of a to some member of D: I(a) \in D

With the interpretation, we can look at statements in the logic, and determine their truth or falsehood. But when we go through a proof, we'll often have statements that don't operate on specific values - they use variables inside of them. In order to make a statement have a truth value, all of the variables in that statement have to be bound by a quantifier, or assigned to a specific value by a variable assignment. So given a statement, we can frequently only talk about its meaning in terms of variable assignments for it.

So, for example, consider a simple statement: P(x,y,z). Given a variable assignment v, P(x, y, z) is satisfied if (v(x), v(y), v(z)) \in ext(P); it's dissatisfied if (v(x), v(y), v(z)) \in cext(P). Otherwise, (v(x), v(y), v(z)) must be in fringe(P), and then the statement is undetermined.

The basic connectives - and, or, not, implies, etc. - all have defining rules like the above; they're obvious and easy to derive given the truth tables for the connectives, so I'm not going to go into detail. But it does get at least a little bit interesting when we get to quantified statements. To talk about those, we first need to define a construction called a variant. Given a variable assignment v, which maps all of the variables in a statement to values, an x-variant v' of v is a variable assignment where for every variable y except x, v'(y) = v(y). In other words, it's an assignment where all of the variables except x have the same value as in v.

Now we can finally get to the interpretation of quantified statements. A statement \forall x P is satisfied by a variable assignment v if P is satisfied by every x-variant of v; it's dissatisfied if P is dissatisfied by at least one x-variant of v. Otherwise, it's undetermined.

Similarly, an existentially quantified statement \exists x P is satisfied by v if P is satisfied by at least one x-variant of v; it's dissatisfied if P is dissatisfied by every x-variant of v. Otherwise, it's undetermined.
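Here's a minimal sketch of those rules in Python, over a finite domain. All of the names here (atom_value, forall, exists) are illustrative:

def atom_value(args, ext, cext):
    # An atomic statement is satisfied, dissatisfied, or - if the
    # argument tuple is in the fringe - undetermined.
    if args in ext:
        return "T"
    if args in cext:
        return "F"
    return "N"

def forall(domain, pred):
    # Satisfied if every x-variant satisfies P; dissatisfied if at
    # least one x-variant dissatisfies it; otherwise undetermined.
    values = [pred(d) for d in domain]
    if all(v == "T" for v in values):
        return "T"
    if any(v == "F" for v in values):
        return "F"
    return "N"

def exists(domain, pred):
    # Satisfied if at least one x-variant satisfies P; dissatisfied
    # if every x-variant dissatisfies it; otherwise undetermined.
    values = [pred(d) for d in domain]
    if any(v == "T" for v in values):
        return "T"
    if all(v == "F" for v in values):
        return "F"
    return "N"

# Domain {0, 1, 2}: P is true of 0, false of 1, and 2 is in the fringe.
ext, cext = {(0,)}, {(1,)}
P = lambda x: atom_value((x,), ext, cext)
print(forall([0, 1, 2], P))   # F - the variant with x=1 dissatisfies P
print(exists([0, 1, 2], P))   # T - the variant with x=0 satisfies P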

Finally, now, we can get to the most important bit: what it means for a statement to be true or false in L_3. A statement S is T (true) if it is satisfied by every variable assignment on I; it's F (false) if it's dissatisfied by every variable assignment on I, and it's N otherwise.


Fuzzy Logic vs Probability

Feb 02 2011 Published by under Fuzzy Logic

In the comments on my last post, a few people asked me to explain the difference between fuzzy logic and probability theory. It's a very good question.

The two are very closely related. As we'll see when we start looking at fuzzy logic, the basic connectives in fuzzy logic are defined in almost the same way as the corresponding operations in probability theory.

The key difference is meaning.

There are two major schools of thought in probability theory, and they each assign a very different meaning to probability. I'm going to vastly oversimplify, but the two schools are the frequentists and the Bayesians.

First, there are the frequentists. To the frequentists, probability is defined by experiment. If you say that an event E has a probability of, say, 60%, what that means to the frequentists is that if you could repeat an experiment observing the occurrence or non-occurrence of E an infinite number of times, then 60% of the time, E would have occurred. That, in turn, is taken to mean that the event E has an intrinsic probability of 60%.

The alternative is the Bayesians. To a Bayesian, the idea of an event having an intrinsic probability is ridiculous. You're interested in a specific occurrence of the event - and it will either occur, or it will not. So there's a flu going around; either I'll catch it, or I won't. Ultimately, there's no probability about it: it's either yes or no - I'll catch it or I won't. Bayesians say that probability is an assessment of our state of knowledge. To say that I have a 60% chance of catching the flu is just a way of saying that, given the current state of our knowledge, I can say with 60% certainty that I will catch it.

In either case, we're ultimately talking about events, not facts. And those events will either occur, or not occur. There is nothing fuzzy about it. We can talk about the probability of my catching the flu, and depending on whether we pick a frequentist or Bayesian interpretation, that means something different - but in either case, the ultimate truth is not fuzzy.

In fuzzy logic, we're trying to capture the essential property of vagueness. If I say that a person whose height is 2.5 meters is tall, that's a true statement. If I say that another person whose height is only 2 meters is tall, that's still true - but it's not as true as it was for the person 2.5 meters tall. I'm not saying that in a repeatable experiment, the first person would be tall more often than the second. And I'm not saying that given the current state of my knowledge, it's more likely that the first person is tall than the second. I'm saying that both people possess the property tall - but in different degrees.
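One way to picture that: a fuzzy predicate like tall is a membership function from heights to degrees of truth in [0, 1]. Here's a sketch in Python - the ramp endpoints are invented for illustration:

def tall(height_m):
    # Degree of truth of "this person is tall", as a linear ramp
    # between 1.5m (not tall at all) and 2.5m (fully tall).
    if height_m <= 1.5:
        return 0.0
    if height_m >= 2.5:
        return 1.0
    return (height_m - 1.5) / 1.0

print(tall(2.5))   # 1.0 - fully tall
print(tall(2.0))   # 0.5 - tall, but to a lesser degree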

Fuzzy logic is using pretty much the same tools as probability theory. But it's using them to try to capture a very different idea. Fuzzy logic is all about degrees of truth - about fuzziness and partial or relative truths. Probability theory is interested in trying to make predictions about events from a state of partial knowledge. (In frequentist terms, it's saying that I know that if I repeated this 100 times, E would happen in 60; in Bayesian terms, it's precisely a statement of partial knowledge: I'm 60% certain that E will happen.) But probability theory says nothing about how to reason about things that aren't entirely true or false.

And, in the other direction: fuzzy logic isn't particularly useful for talking about partial knowledge. If you allowed second-order logic, you could have fuzzy meta-predicates that described your certainty about crisp first-order predicates. But with first order logic (which is really where we want to focus our attention), fuzzy logic isn't useful for the tasks where we use probability theory.

So probability theory doesn't capture the essential property of meaning (partial truth) which is the goal of fuzzy logic - and fuzzy logic doesn't capture the essential property of meaning (partial knowledge) which is the goal of probability theory.


More 3-valued logic: Lukasiewicz and Bochvar

Jan 31 2011 Published by under Fuzzy Logic

Last time I wrote about fuzzy logic, we were looking at 3-valued logics, and I mentioned that there's more than one version of 3-valued logic. We looked at one, called K^S_3, Kleene's strong 3-valued logic. In K^S_3, we extended a standard logic so that for any statement, you can say that it's true (T), false (F), or that you don't know (N). In this kind of logic, you can see some of the effect of uncertainty. In many ways, it's a very natural logic for dealing with uncertainty: "don't know" behaves in a very reasonable way.

For example, suppose I know that Joe is happy, but I don't know if Jane is happy. So the truth value of "Happy(Joe)" is T; the truth value of "Happy(Jane)" is N. In Kleene, the truth value of "Happy(Joe) ∨ Happy(Jane)" is T: since "Happy(Joe)" is true, "Happy(Joe) ∨ anything" is true. And "Happy(Joe) ∧ Happy(Jane)" is N: since we know that Joe is happy, but we don't know whether or not Jane is happy, we can't know whether both Joe and Jane are happy. It's a rather vague way of handling vagueness (that is, it lets you say you're not sure, but it doesn't let you say how not sure you are), but insofar as it goes, it works nicely.
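For the strong Kleene connectives, there's a neat way to compute this: order the truth values F < N < T; then "and" takes the minimum of its arguments, and "or" takes the maximum. A quick sketch in Python:

order = {"F": 0, "N": 1, "T": 2}

def k_not(a):
    # Negation swaps T and F, and leaves "don't know" alone.
    return {"T": "F", "F": "T", "N": "N"}[a]

def k_and(a, b):
    # "and" is only as true as its less-true argument.
    return min(a, b, key=lambda v: order[v])

def k_or(a, b):
    # "or" is as true as its more-true argument.
    return max(a, b, key=lambda v: order[v])

happy_joe, happy_jane = "T", "N"
print(k_or(happy_joe, happy_jane))    # T - Joe's happiness settles it
print(k_and(happy_joe, happy_jane))   # N - it depends on Jane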

A lot of people, when they first see Kleene's three-valued logic, think that it makes so much sense that it somehow defines the fundamental, canonical three-valued logic, in the same way that, say, first-order predicate logic defines the fundamental two-valued predicate logic.

It isn't.

There are a bunch of different ways of doing three-valued logic. The difference between them is related to the meaning of the third value - which, in turn, defines how the various connectives work.

There are other 3-valued logics. We'll talk about two others. There's Bochvar's logic, and there's Lukasiewicz's. In fact, we'll end up building our fuzzy logic on Lukasiewicz's. But Bochvar is interesting in its own right. So we'll take a look at both.



