Started reading through Peter Smith’s “Gödel Without (Too Many) Tears” lecture notes yesterday, and I realized I hadn’t been as clear as I should have been in my treatment of Peano arithmetic. So I started typing some errata, and I ended up typing a super-long exposition about All the Logic I Know. This isn’t very much, because really I find mathematical logic a fascinatingly dull subject: it’s the sort of thing that looked cool when I was a kid because I could actually understand it, but now everything I read about it seems like this confusing haze of definitions and pseudophilosophical “theorems” about things actual math has left behind years ago. So yeah. What this means is: if you’re a logician, correct me if I get something wrong and please please try to convince me that your subject is cool and point me to a place where I can see that again. I feel bad casting scorn on an entire branch of math just because it feels too weird and abstract.

Anyway, read on if you’re curious. If you’re not, don’t. I’ll cite this post whenever I prove Gödel’s Theorems but probably at no other time, so you won’t really miss much. I do define **quantifiers**, which are these useful math symbols I might reuse: $\forall x$ means “for all $x$” and $\exists x$ means “there exists an $x$ such that.” Okay? Okay.

When I presented the Peano axioms yesterday, I left out a lot of their context. For Peano, Russell, and their contemporaries, such a system wasn’t just about axiomatizing math — it was about doing so in a *completely formal language*. The dream was that all of our theorems could be captured in strings of symbols that left no room for vagueness or misinterpretation, and then proved through sequences of statements all of which followed by specifically defined rules of inference that basically amounted to typographical operations on said strings of symbols. (I’m not a philosopher, but there’s probably a ton of stuff to dig around in as regards language and ambiguity in this period of time. For example, Wittgenstein was a friend and student of Russell’s, and the pretty-cool-though-not-too-mathy comic book *Logicomix* has some interesting pages about this relationship.) Gödel’s Incompleteness Theorems pretty much blasted this dream to smithereens: not only did they show that there were statements you couldn’t prove or disprove using Peano arithmetic, *they also showed that these statements were expressible in the language of Peano arithmetic itself*. In fact, Goodstein’s Theorem, which was only shown to be independent of PA in 1982, is a super-simple number theory statement that you can prove easily using ordinals in ZFC, but that you can’t prove in PA.
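To give a feel for how concrete Goodstein’s statement is, here’s a little sketch of mine (not anything from Smith’s notes) that computes Goodstein sequences: write $n$ in hereditary base-$b$ notation (base $b$, with the exponents themselves in base $b$, and so on), replace every $b$ by $b+1$, subtract 1, and repeat with the next base. Goodstein’s Theorem says every such sequence eventually reaches 0.

```python
def bump(n, b, c):
    """Rewrite n from hereditary base b to hereditary base c
    (i.e. replace every occurrence of b, including in exponents, by c)."""
    result, exp = 0, 0
    while n > 0:
        digit = n % b
        result += digit * c ** bump(exp, b, c)  # exponents are rewritten too
        n //= b
        exp += 1
    return result

def goodstein(n, steps):
    """The first `steps` terms of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    for _ in range(steps - 1):
        if n == 0:
            break
        n = bump(n, base, base + 1) - 1  # bump the base, then subtract 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3, 6))  # [3, 3, 3, 2, 1, 0]: dies quickly
print(goodstein(4, 4))  # [4, 26, 41, 60]: grows for an unimaginably long
                        # time before (provably!) hitting 0
```

The punchline is that “every Goodstein sequence terminates” is the kind of statement PA can state but, by Kirby and Paris’s result, not prove.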

Okay, so a **formal language** is a set of symbols, called an **alphabet**, with a set of strings of symbols in the alphabet that are called **statements, well-formed formulas** or **wffs**. We’re generally going to want to be able to tell whether a given string is a wff or not. (Okay, so we can formalize this part, too: “we want a *computer* to be able to tell whether a given string is a wff or not *in finite time*.” Now we start to step on the toes of theoretical CS and philosophy, in which it’s a common assumption that the deciding power of a computer, even a very specific kind of computer, is equivalent to that of a human.) A **theory** is a formal language together with a set of **axioms**, which are wffs, and **rules of inference**, which are operations that change wffs to other wffs. Any wff you can get from an axiom after a finite number of rules of inference is a **theorem**.

The ur-example is the **propositional calculus**. The language here has a set of variables $P_1, P_2, P_3, \dots$, parentheses $($ and $)$, a unary operator $\neg$, and binary operators $\wedge, \vee, \rightarrow, \leftrightarrow$. Then we can recursively construct wffs: any variable is a wff, and if $A$ and $B$ are wffs, then so is $(A\ast B)$ for $\ast$ any of the binary operators, and so are $\neg A$ and $\neg B$. So wffs look like $((P_1\wedge\neg P_2)\rightarrow P_3)$. The standard **interpretation** is that $\neg$ means “not”, $\wedge$ means “and”, $\vee$ means “or”, $\rightarrow$ means “implies”, and $\leftrightarrow$ means “if and only if”. You can give axioms but they suck: usually what’s done is to assume rules of inference but no axioms and say $A$ **entails** $B$ if $B$ is a theorem in the theory given by taking $A$ as an axiom. The rules of inference are things like “modus ponens”, which says that you can change $A$ and $A\rightarrow B$ to $B$: if you know one thing and you know it implies a second thing, you know the second thing.
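To make the “we can decide wff-hood” point concrete, here’s a toy recursive-descent checker of my own (not from the notes), using ASCII stand-ins: `~` for $\neg$, `&`, `|`, `->`, `<->` for the binary operators, single letters for variables, and requiring the parentheses that the recursive definition puts around every binary operation.

```python
BINOPS = ['<->', '->', '&', '|']   # longest first, so '<->' isn't read as '<' + '->'

def parse(s):
    """Try to read one wff off the front of s; return the unread remainder, or None."""
    if not s:
        return None
    if s[0].isalpha():                 # a variable is a wff
        return s[1:]
    if s[0] == '~':                    # if A is a wff, so is ~A
        return parse(s[1:])
    if s[0] == '(':                    # if A, B are wffs, so is (A op B)
        rest = parse(s[1:])
        if rest is None:
            return None
        for op in BINOPS:
            if rest.startswith(op):
                rest = parse(rest[len(op):])
                return rest[1:] if rest is not None and rest.startswith(')') else None
    return None

def is_wff(s):
    """True iff s is a well-formed formula: one wff with nothing left over."""
    return parse(s) == ''

print(is_wff('((P&~Q)->R)'))   # True
print(is_wff('P&Q'))           # False: binary operators need their parentheses
```

This always halts in time linear in the length of the string, which is the “computer can decide it in finite time” condition from above.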

The propositional calculus is assumed to be the underlying structure behind the “standard logic” of thought and math: if we think “Reading this blog will make me wiser, and I’m reading it, so I must be getting wiser,” we’re really applying modus ponens. This allows us to use the standard **interpretation** of $\wedge$ as “and”, etc., to talk about what’s *true* in addition to what we can *prove*: namely, a PC statement is **true** if whenever we plop down true English sentences in for its variables, interpret all the symbols, and read it off as an English sentence, we get a true statement. In terms of truth, PC has two good properties: it is **consistent**, which means that every theorem is true, and it is **complete**, which means that every true statement is a theorem. Actually, we can state these without using truth at all. There’s a propositional calculus theorem of the form $(P\wedge\neg P)\rightarrow Q$ for any $Q$: if a statement and its negation are both true, then *every statement is true*. Since basically every theory includes PC, a **consistent** theory is one which doesn’t prove statements of the form $(P\wedge\neg P)$, and a **complete** theory is one which proves either $P$ or $\neg P$ for every statement $P$. (We can prove both properties for PC by basically collapsing all true English statements to the single word “true” and all false ones to the single word “false”, turning the logical operators into operations on these two truth values, and then checking all of the combinations of “true” and “false” on a given wff. If you know about “logic gates” in electronics, you probably know what I mean.)
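That truth-table procedure fits in a few lines. This is my own illustration, with $A\rightarrow B$ rendered as $\neg A\vee B$:

```python
from itertools import product

def is_tautology(f, variables):
    """Check a statement (given as a Boolean function) on every
    assignment of True/False to its variables."""
    return all(f(*vals) for vals in product([True, False], repeat=len(variables)))

# (P ∧ ¬P) → Q: from a contradiction, anything follows.
print(is_tautology(lambda P, Q: (not (P and not P)) or Q, 'PQ'))  # True

# P ∨ Q is true under some assignments but not all, so it's not a theorem.
print(is_tautology(lambda P, Q: P or Q, 'PQ'))                    # False
```

Since a wff has finitely many variables, this check always terminates, which is why PC is decidable as well as consistent and complete.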

Gödel’s First Incompleteness Theorem showed that no consistent theory that includes Peano arithmetic is also complete. It will always have some statement for which we can’t prove either the statement or its negation: if we interpret the theory so that statements have a truth value, this means that the statement is either true and not provable, or false and not disprovable. If we add this statement in as a new axiom, the new theory is still consistent and still includes PA, so there’s now a new statement that is true and not provable. Do NOT let this get misinterpreted! This is a mathematical theorem about axiomatic theories, not about “thought systems” or theories about reality. Stupid people, especially intelligent design “theorists,” use this as a way to prove that “you can’t know everything” by applying it totally out of context. If you catch someone doing this, call them out on it and tell me about it, and I will bake and mail you cookies.

Okay, but what is PA? A first guess would be what Peter Smith calls **Baby Arithmetic**. The alphabet is the symbols $0, S, +, \times, =$, along with the PC alphabet; **terms** are of the form $0$, $St$, $(t+u)$, and $(t\times u)$, where $t$ and $u$ are terms; wffs are $t=u$, where $t$ and $u$ are terms, as well as the wffs given by the rules of PC; the axioms are just the first four PA axioms, everything but induction, along with the definitions of addition and multiplication from PA; and the rules of inference are just those of PC, most importantly substitution. Well, not quite. We’ve made our alphabet finite and free of variables, so we have to suffer elsewhere: “There is no $x$ such that $Sx=0$” is not an axiom. Instead, we have to assume the **axiom schema** $\neg(Sn=0)$, where $n$ runs through the numerals $0, S0, SS0, \dots$. This is an infinite number of axioms! And moreover we *need* to do this because there’s no way to talk about “every $x$” in BA. Everything else suffers, too: you can’t define $<$ because you can’t say “there exists a $z$ such that…”, and you can’t prove theorems about all numbers for the same reason. What you can prove, in fact, are the “theorems of simple calculation”: things like $SS0+SS0=SSSS0$. And the lengths of their proofs are directly proportional to the size of their numbers: it would take you about twice as long to prove that $SSSS0+SSSS0=SSSSSSSS0$. But these are also the only sorts of true statements that BA can express, so BA is also complete.
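Here’s a toy illustration of mine of why BA proofs grow with the numbers: adding numerals by unfolding the two defining equations $x+0=x$ and $x+Sy=S(x+y)$ takes one unfolding per $S$ in the second argument, plus one final use of $x+0=x$.

```python
def add_numerals(x, y, steps=0):
    """Add numerals written as strings of S's ending in '0', by unfolding
    x + 0 = x and x + Sy = S(x + y); returns (result, number of unfoldings)."""
    if y == '0':
        return x, steps + 1             # one use of the axiom x + 0 = x
    # y is S applied to y[1:]; apply x + Sy' = S(x + y')
    result, steps = add_numerals(x, y[1:], steps + 1)
    return 'S' + result, steps

two, four = 'SS0', 'SSSS0'
print(add_numerals(two, two))    # ('SSSS0', 3): the 2 + 2 = 4 calculation
print(add_numerals(four, four))  # ('SSSSSSSS0', 5): 4 + 4 = 8 takes more steps
```

So a BA proof of $n+n=2n$ is roughly $n$ lines long, which is fine for a formal system, since each individual proof is still finite.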

What BA and PC lack is **quantifiers**. These are things that let you talk about all or arbitrary numbers: the usual math symbols are $\forall$ and $\exists$, meaning “for all” and “there exists” respectively. These are actually really common symbols in terms of writing on blackboards, but I never found a time to introduce them here. So there you go. PC plus quantifiers on variables is called **first-order logic**: **expressions** in it are of the form $\forall x\,A$ and $\exists x\,A$, where $A$ is an expression in first-order logic in which $x$ is a variable that doesn’t occur in any quantifiers, in addition to the usual wffs of PC. We call such an $x$ a **free variable**, and when it appears in a quantifier it’s a **bound variable**. A wff in FOL is an expression in which every variable is bound. Adjoining FOL to BA, we roughly get a theory called Q or **Robinson Arithmetic**, in which we have axioms like $\forall x\,\neg(Sx=0)$, and similarly theorems about all numbers.
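The free/bound distinction is easy to compute. Here’s a sketch in a toy representation of my own (not Smith’s), where formulas are nested tuples and a quantifier removes its variable from the free set:

```python
# Formulas as nested tuples: ('forall', 'x', body), ('exists', 'y', body),
# ('not', A), ('and', A, B), etc., with atoms like ('=', 'x', 'y').

def free_vars(f):
    """The set of free variables of a formula in this toy representation."""
    op = f[0]
    if op in ('forall', 'exists'):
        return free_vars(f[2]) - {f[1]}       # the quantifier binds its variable
    if op == 'not':
        return free_vars(f[1])
    if op in ('and', 'or', 'implies', 'iff'):
        return free_vars(f[1]) | free_vars(f[2])
    return {t for t in f[1:] if t.islower()}  # atom: lowercase terms are variables

# In ∀x (x = y), x is bound and y is free:
print(free_vars(('forall', 'x', ('=', 'x', 'y'))))                   # {'y'}
# ∀x ∃y (x = y) is a wff: every variable is bound.
print(free_vars(('forall', 'x', ('exists', 'y', ('=', 'x', 'y')))))  # set()
```

A formula is a wff (a *sentence*) exactly when this function returns the empty set.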

Peano arithmetic uses **second-order logic**, where variables can now range over all *sets* of objects in addition to objects. We need this in order to formulate induction, which looks like $\forall M\,((0\in M\wedge\forall x\,(x\in M\rightarrow Sx\in M))\rightarrow\forall x\,(x\in M))$, where $M$ ranges over sets of numbers. Just like we could change the first-order axiom $\forall x\,\neg(Sx=0)$ to an infinite BA schema, we can change the second-order induction axiom to an infinite first-order schema, *sort of*. This will look like $(\varphi(0)\wedge\forall x\,(\varphi(x)\rightarrow\varphi(Sx)))\rightarrow\forall x\,\varphi(x)$, where $\varphi$ ranges over expressions of PA that have $x$ as their only free variable. The problem with this is that it only gives you instances of the second-order induction axiom for which $M$ is of the form $\{x:\varphi(x)\}$ for some expression $\varphi$ with one free variable (we say such an $M$ is **definable**). Assuming a finite alphabet (we can do this by replacing the variables $x_1, x_2, x_3, \dots$ with $x', x'', x''', \dots$), there are only a countable number of such expressions $\varphi$, but an uncountable number of subsets of $\mathbb{N}$. So not every set is definable, and this first-order axiom schema is weaker than induction.
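The countability half of that argument is very concrete: over a finite alphabet you can list every candidate defining expression, shortest first, so there are only countably many of them, while Cantor’s diagonal argument gives uncountably many subsets of $\mathbb{N}$. A quick sketch of the listing:

```python
from itertools import count, product

def strings(alphabet):
    """Enumerate all finite strings over a finite alphabet, shortest first.
    Every string (so every candidate expression) shows up at some finite index,
    which is exactly what 'countable' means."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield ''.join(chars)

gen = strings('abc')
print([next(gen) for _ in range(5)])   # ['a', 'b', 'c', 'aa', 'ab']
```

Of course most of these strings aren’t wffs, let alone expressions with one free variable, but a subset of a countable set is still countable.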

I don’t have any good examples of a statement the first-order schema can’t prove but full PA can, but there are some pretty good examples of things the first-order schema can prove but Robinson Arithmetic can’t. Robinson Arithmetic, or Q, doesn’t have induction, but it does have quantifiers. Since it includes BA, it can prove the theorem schema $0+n=n$ for every numeral $n$, but it can’t prove the statement $\forall x\,(0+x=x)$. This even though it has $\forall x\,(x+0=x)$ as an axiom. The reason is intuitively that the proof of $0+n=n$ takes a number of steps proportional to $n$, having to apply the recursive definition of addition $n$ times, so any proof of $\forall x\,(0+x=x)$ would take an infinite number of steps, which is no proof at all. On the other hand, in PA we’d just have to prove two statements: $0+0=0$ and $\forall x\,(0+x=x\rightarrow 0+Sx=Sx)$. Smith even gives a model of Q in which $0+a=a$ is false for a certain $a$, which you should check out if you’re curious (basically, without induction, the set of numbers can be larger than just $0, S0, SS0, \dots$, and we can rig addition so that $0+a\ne a$ for the “extra numbers” $a$). In fact, Gödel’s proof actually applies to anything that includes Q, and Q is more or less the weakest arithmetical theory it touches.
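To see the schema-versus-quantifier gap, here’s a toy proof generator of mine: the Q-style proof of the $n$-th instance $0+n=n$ has about $n$ lines, so the instances share no single finite proof, and that is exactly the gap induction closes.

```python
def prove_zero_plus(n):
    """Lines of a BA/Q-style proof of 0 + n = n, unfolding x + Sy = S(x + y)
    one S at a time. The n-th instance needs about n lines."""
    num = lambda k: 'S' * k + '0'
    lines = ['0 + 0 = 0']                       # axiom x + 0 = x, with x = 0
    for k in range(1, n + 1):
        lines.append(f'0 + {num(k)} = S(0 + {num(k-1)}) = {num(k)}')
    return lines

for line in prove_zero_plus(2):
    print(line)
# 0 + 0 = 0
# 0 + S0 = S(0 + 0) = S0
# 0 + SS0 = S(0 + S0) = SS0
```

In PA, by contrast, the base case and the induction step together are a fixed-length proof of $\forall x\,(0+x=x)$, covering all the instances at once.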

The axioms of PA are the following, in the language of PA:

1. $\forall x\,\neg(Sx=0)$
2. $\forall x\,\forall y\,(Sx=Sy\rightarrow x=y)$
3. $\forall x\,(x+0=x)$
4. $\forall x\,\forall y\,(x+Sy=S(x+y))$
5. $\forall x\,(x\times 0=0)$
6. $\forall x\,\forall y\,(x\times Sy=(x\times y)+x)$
7. $\forall M\,((0\in M\wedge\forall x\,(x\in M\rightarrow Sx\in M))\rightarrow\forall x\,(x\in M))$

I hope to have taught you how each of these differs from asserting infinitely many copies of the corresponding unquantified axioms. That was all I wanted to do, and I’m at 2k words! Gah!

