Peter Suber, Philosophy Department, Earlham College

As in the exercise hand-out, page and theorem numbers refer to Geoffrey Hunter, Metalogic, University of California Press, 1971. To see what Day 1, Day 2, Day 3, etc. correspond to, see my syllabus.

Answer x.y corresponds to Day x, exercise y. When a question has sub-questions, then answer x.y.z corresponds to Day x, question y, sub-question z.

1.1, 1.2. Two systems S and S' may have the same theorems but different axioms and rules. This difference means they will differ in their proof theory. In S, some wff A might be derivable from another wff B, but this derivation may fail in S'.

2.1. Statement i is certainly true, in that every terminating, semantically bug-free program is obviously effective. Programming languages express what computers can do, and every step a computer takes is 'dumb' (even if putting many of these steps together is 'intelligent').

Statement ii may be true, but it is uncertain and unprovable. We'll never know whether a definite class of methods (those that are programmable) coincides with an indefinite class of methods (those that satisfy our intuition about 'dumbness'). The claim that statement ii is true is called Church's Thesis, and will come up again on Day 27 (Hunter at 230ff).

Statement i is false in this sense, however: many ineffective methods are programmable. Every method with an infinite loop is both ineffective and programmable.
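To make the point concrete, here is a sketch in Python (an illustration of my own, not from Hunter). The Collatz procedure below is certainly programmable; whether it is effective, i.e. whether it returns an answer for every input, is a famous open question. And a program that looped forever on some input would be programmable yet ineffective.

```python
def collatz_halts(n):
    """Apply the Collatz rule until reaching 1, counting the steps.
    This method is plainly programmable; whether it always gives an
    answer (and so is effective) is open, since no one has proved
    that the loop terminates for every n >= 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

By contrast, a program whose body was simply `while True: pass` would be ineffective on every input, though perfectly programmable.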

4.1.1. True. ℵ1 is defined as the first cardinal greater than ℵ0. Moreover, without the continuum hypothesis, we can prove that c = 2^ℵ0 (see Hunter's metatheorem A16 in today's reading), but without the continuum hypothesis we cannot prove that 2^ℵ0 = ℵ1. Hence, we require the continuum hypothesis to assert that c = ℵ1.

4.1.2. False. The first part of the claim is false; ℵ1 is not defined as 2^ℵ0. But the second part of the claim is true; we do require the continuum hypothesis to assert that c = ℵ1. See the answer to the previous question.

4.3. The diagonal construction could be made for the rationals as easily as for the reals. But the new number produced by diagonalizing on the digits of the denumerable list of rationals would be irrational, and as such, not eligible to be added to the list of rationals. Therefore it would not show the incompleteness of the denumerable list of rationals, and hence would not show the uncountability of the rationals.
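A toy sketch of the construction in Python (my illustration, not Hunter's): given any denumerable listing of decimal expansions, we produce an expansion differing from row n at digit n. Restricting the new digits to 5 and 6 also guarantees a unique decimal expansion, sidestepping the 0.4999... = 0.5000... ambiguity.

```python
def diagonal_digit(listing, n):
    """Digit n of the diagonal number: chosen to differ from digit n
    of row n. Only 5 and 6 are used, so the resulting expansion is
    unique (no trailing 9s or 0s)."""
    return 5 if listing(n)[n] != 5 else 6

# A toy denumerable listing: row n is the expansion repeating the digit n % 10
toy = lambda n: [n % 10] * 50
diag = [diagonal_digit(toy, n) for n in range(10)]
```

The diagonal number differs from every row of the listing at some digit, so it cannot appear anywhere in the listing.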

4.4. Because 0.5000... = 0.4999..., if we did not eliminate one of them from the list, the list would contain repetitions which might throw off the one-to-one correspondence to be tested. We could convert 0.4999... to 0.5000..., rather than vice versa, however.

4.6.1. False. The rationals (for example) are dense but not continuous.

4.6.2. False. The irrationals are dense and uncountable but not continuous. (The rationals are merely countable.)

4.9.1. This is true by definition. We introduce the symbol "ℵ0" to represent the cardinality of N.

4.9.2. This is only known by proof; see Hunter's theorem A7 (at p. 33).

4.9.3. By proof, from theorem A16 and the continuum hypothesis.

4.9.4. By proof, from theorem A13.

4.9.5. By proof, from theorem A16 and the continuum hypothesis.

4.9.6. By proof, from the continuum hypothesis. The symbol "ℵa" (a > 0) is defined as the smallest cardinal greater than ℵa-1. It only represents the cardinal 2^ℵa-1 if we assume the continuum hypothesis.

4.9.7. By proof, from the generalized continuum hypothesis. See the answer to 4.9.6.

4.9.8. By proof, theorem A16.

4.9.9. By proof, from theorem A4.

4.9.10. By proof, from theorems A12 and A13.

4.9.11. By proof, from theorems A12 and A14.

4.9.12. By proof. The symbol "c" represents the cardinality of the continuum (by definition). But we must prove that the cardinality of the continuum equals the cardinality of the reals, which follows from theorems A4 and A7 (and lemmas in the proof of A7).

5.1. A function is computable iff there is an effective method for evaluating it. A method is effective iff it always gives an answer (for the right class of problems). If it balks at some appropriate input, then it is not effective. So a computable function must always return a value (for the right class of inputs). What we meant by 'the right class of inputs' here is that the arguments are members of the function's domain. If the function returns a value for every member of its domain, then it is a total function. A partial function would be undefined for at least one member of the domain.
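In Python terms (a sketch of my own): a total function returns a value for every member of its domain, while a partial function balks at some member.

```python
def successor(n):
    """Total on the natural numbers: returns a value for every
    member of its domain."""
    return n + 1

def reciprocal(x):
    """Partial if we take its intended domain to be all the rationals:
    it balks at x == 0 instead of returning a value."""
    return 1 / x
```

Over the nonzero rationals, `reciprocal` is total and the method of evaluating it is effective; over all the rationals, the method balks at 0 and so is not effective.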

5.3. There is no privilege for twoness. Just as n-adic connectives (n > 2) are translatable into dyadic connectives, so are dyadic connectives translatable into n-adic (n > 2). These are theoretically equivalent; if we prefer the dyadic, it is only in practice, for elegance, economy, simplicity.

5.6. "iv", truth-values to truth-values.

6.4. The fourth rule makes the list of rules sufficient, not only necessary, for defining wff-hood. Hence, it means there is an effective method for determining whether an arbitrary formula is a wff. Without this rule, our best method could say 'yes' infallibly but could not say 'no' infallibly. The rule makes the set of wffs into a decidable set.
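A sketch in Python of such an effective method, for a toy language whose wffs are atoms, negations ~A, and conditionals (A>B), with '>' standing in for the arrow (my illustration, not Hunter's definition):

```python
def is_wff(s, atoms=("p", "q", "r")):
    """Decide wff-hood for a toy language: an atom is a wff; ~A is a
    wff if A is; (A>B) is a wff if A and B are; nothing else is a wff.
    The final 'return False' is what the fourth rule licenses."""
    if s in atoms:
        return True
    if s.startswith("~"):
        return is_wff(s[1:], atoms)
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for i, ch in enumerate(s):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == ">" and depth == 1:
                # main connective found: test both halves
                return is_wff(s[1:i], atoms) and is_wff(s[i + 1:-1], atoms)
    return False
```

Without the closure rule, the method could verify wffs ('yes') but could never rule a string out ('no'); with it, the method always terminates with an answer.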

6.10. The truth-table test is effective.

7.5. Yes. Truth for I must be defined separately for each connective. Hence, the more connectives we use, the more complicated is our model theory.

We could use a distinct symbol for each truth-function, say, a natural number. But then we would have difficulty remembering the symbol for a given function. At the other extreme are the "universal" truth-functions | and ↓ (the dagger), which make wffs terribly long and hard to read. To use a small handful of connectives is a compromise based on human convenience, not required by the logic of propositions.

7.5.1. (p · q) ↔ ~(p → ~q).

7.5.2. (p ∨ q) ↔ (~p → q).

7.5.3. (p ↔ q) ↔ ~((p → q) → ~(q → p)).

Alternately, (p ↔ q) ↔ ((p → ~q) → ~(~p → q)).

7.5.4. (p ⊻ q) ↔ ((p → q) → ~(q → p)).

7.5.5, 7.5.7, 7.5.8. (p | q) ↔ (p → ~q).

7.5.6. (p ↓ q) ↔ ~(~p → q).
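These definitions can be checked mechanically by truth table. A sketch in Python (my illustration), writing `imp` for the conditional:

```python
from itertools import product

def imp(a, b):
    """Material implication as a Python truth-function."""
    return (not a) or b

def matches(defined, definition):
    """Do two dyadic truth-functions agree on all four rows?"""
    return all(defined(p, q) == definition(p, q)
               for p, q in product([True, False], repeat=2))

# 7.5.1: conjunction as ~(p -> ~q)
conj_ok = matches(lambda p, q: p and q, lambda p, q: not imp(p, not q))
# 7.5.2: disjunction as (~p -> q)
disj_ok = matches(lambda p, q: p or q, lambda p, q: imp(not p, q))
# 7.5.3: biconditional as ~((p -> q) -> ~(q -> p))
bicond_ok = matches(lambda p, q: p == q,
                    lambda p, q: not imp(imp(p, q), not imp(q, p)))
# 7.5.4: exclusive disjunction as ((p -> q) -> ~(q -> p))
xor_ok = matches(lambda p, q: p != q,
                 lambda p, q: imp(imp(p, q), not imp(q, p)))
# 7.5.5: stroke ('not both') as (p -> ~q)
stroke_ok = matches(lambda p, q: not (p and q), lambda p, q: imp(p, not q))
# 7.5.6: dagger ('neither') as ~(~p -> q)
dagger_ok = matches(lambda p, q: not (p or q),
                    lambda p, q: not imp(not p, q))
```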

7.7. Let "*" in *(A,B,C) be a triadic connective expressing ~(A ∨ B ∨ C), i.e. "all the following are false". With this truth-function we can express ~A thus: *(A,A,A). We can express A·B thus: *(*(A,A,A),*(B,B,B),*(B,B,B)). By Hunter's metatheorem 21.4, if we can express both ~A and A·B, then we can express all truth-functions.

Now let "#" in #(A,B,C) be a triadic connective expressing ~(A·B·C), i.e. "not all the following are true". With this truth-function we can express ~A thus: #(A,A,A). We can express A ∨ B thus: #(#(A,A,A),#(B,B,B),#(B,B,B)). By Hunter's metatheorem 21.3, if we can express both ~A and A ∨ B, then we can express all truth-functions.

From these two, it should be clear how to produce adequate n-adic truth-functions for any n > 2. Those like "*" will be generalized dagger functions. Those like "#" will be generalized stroke functions.
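The two constructions can likewise be verified by brute force over the truth table (a Python sketch of my own):

```python
from itertools import product

def star(a, b, c):
    """Generalized dagger: 'all of the following are false'."""
    return not (a or b or c)

def sharp(a, b, c):
    """Generalized stroke: 'not all of the following are true'."""
    return not (a and b and c)

# *(A,A,A) and #(A,A,A) both express negation
star_neg = all(star(a, a, a) == (not a) for a in (True, False))
sharp_neg = all(sharp(a, a, a) == (not a) for a in (True, False))
# the nested forms express conjunction and disjunction respectively
star_conj = all(star(star(a, a, a), star(b, b, b), star(b, b, b)) == (a and b)
                for a, b in product([True, False], repeat=2))
sharp_disj = all(sharp(sharp(a, a, a), sharp(b, b, b), sharp(b, b, b)) == (a or b)
                 for a, b in product([True, False], repeat=2))
```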

8.3. Clue. Recall the use we made of arithmetization in Day 3.

8.4. We've seen two ways so far in which later theorems can be longer than earlier ones. (1) Later theorems may instantiate one of the axiom-schemata rather than follow by MP from earlier theorems. These instantiations may be arbitrarily long. (2) A later theorem may be an interpolation by virtue of the interpolation theorem.

8.5. In PS we get this kind of closure by the definition of material implication or "→". We said that if A is a wff and if B is a wff, then A → B is a wff. Modus ponens says that from A → B and A, we may infer B. If it takes wffs as input, then A → B is a wff, and if so, then (by definition) B is a wff.

8.6. For axioms, no metalanguage is needed; indeed, axioms should be wffs of the formal language. For axiom schemata, however, a metalanguage is needed unless the rules of inference permit substitution. For the rules of inference, a metalanguage is usually needed unless the rules are expressed as functions. (See the next exercise.)

8.7. There are at least two ways to do this. We could turn MP into a function (1) that returned a truth-value or (2) that returned a wff.

1. In this form, modus ponens would look like this: MP(A,B) = T, where A and B are wffs and T is a truth-value. The function would be defined in the usual way: if A is a conditional statement and B is A's antecedent, then MP returns falsehood when A is true and B false; otherwise MP returns truth.
2. In this form, modus ponens would look like this: MP(A,B) = C, where A, B, and C are all wffs. We would define the function thus: When A is a conditional, B is A's antecedent, and C is A's consequent, then MP returns C; otherwise MP is undefined.
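The second version can be sketched in Python, modeling wffs as nested tuples, with ("->", A, B) for the conditional and `None` standing in for 'undefined' (the modeling choices are my own):

```python
def mp(a, b):
    """Modus ponens as a wff-returning function (version 2 above).
    Atoms are strings; ("->", A, B) is the conditional A -> B.
    Returns A's consequent when A is a conditional whose antecedent
    is B; otherwise None, standing in for 'undefined'."""
    if isinstance(a, tuple) and len(a) == 3 and a[0] == "->" and a[1] == b:
        return a[2]
    return None
```

So mp(("->", "p", "q"), "p") returns "q", while on any other pair of inputs the function is 'undefined'.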

8.8. There are several reasons. (1) Our rigorous concept of proof is meant to capture something of interest that occurs in the practice of reasoning, not only in mathematics but in every discipline, namely, an accumulation of reasons to accept a conclusion. It seems that all these accumulations are finite; at least we have never (yet!) been persuaded to accept a conclusion by an infinitely long series of propositions. This is the semantic motivation of our formal notion of proof. It is also Aristotle's objection to arguments involving an infinite regress. (2) If proofs are finite, then there is an effective method for determining whether an arbitrary series of wffs constitutes a proof. Since wffhood is decidable, the finitude of proofs makes proofhood decidable as well. Gödel uses this fact in the proof of his first incompleteness theorem. (3) It makes a proof into a "data structure" that is easily represented and used in metatheoretical reasoning. If proofs are finite, then we can represent them as a sequence of wffs, <A1, A2, A3, ..., An>, and perform operations on them as a sequence. For example, see the proofs of metatheorems 28.4, 32.12, and 45.12.
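Reason (2) can be put in miniature as a Python sketch (my own illustration, with wffs again modeled as nested tuples): because a candidate proof is a finite sequence, the checker below always terminates with a yes-or-no answer.

```python
def is_proof(lines, axioms):
    """Effective test of proofhood: every line must be an axiom or
    follow by modus ponens (on "->"-conditionals) from two earlier
    lines. Finiteness of `lines` is what makes the test terminate."""
    for i, wff in enumerate(lines):
        if wff in axioms:
            continue
        earlier = lines[:i]
        # some earlier line a must be b -> wff, for some earlier line b
        if any(a == ("->", b, wff) for a in earlier for b in earlier):
            continue
        return False
    return True
```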

Note that the second and third reasons say, in effect, that a certain concept of proof gives us leverage in conducting proofs.

The first reason may be paraphrased thus: when we ask why a certain conclusion is justified, we want to finish hearing the answer; a proof is supposed to satisfy our curiosity and answer our doubts. This requires that proofs be finite. However, the danger (or attraction) of this paraphrase is that it makes very long finite arguments into non-proofs. If an argument had 10^10 premises, we would never live to hear them all presented, so we would never finish hearing the answer to our question. When very long finite 'proofs' should count as proofs has become controversial recently. The proof of the completeness of the classification of the finite simple groups is over 1500 pages long, far too long to persuade in the traditional sense. A similar problem arises with computer-generated proofs, such as the one confirming the four-color conjecture for maps.

9.6. One model suffices to prove consistency for several reasons. (1) Most importantly, we defined our language so that the existence of a model implies consistency. We defined truth for I for "~" so that A and ~A are never true at the same time (in the same interpretation). This holds in every interpretation in which "~" takes its usual meaning. Therefore, if all the theorems of a system are true for the same interpretation, then none of their negations is true in that interpretation. See for example the proofs of metatheorem 32.6 and its converse, 32.13 (Day 13). Together these show that a set of TFPL wffs has a model iff it is p-consistent (or is m-consistent iff it is p-consistent).

Another way to see this: If some set of wffs Γ has a model, then the single wff which conjoins all the members of Γ will be true in at least one row of its truth table; hence it will be either a tautology or a contingency, anything but a contradiction.

There are other, secondary answers to this question which appeal to intuitions or intentions rather than definitions built into our formal language. (2) Consistency is sought for semantic reasons. If the intended interpretation is a model, then that's all we care about; the system is consistent in the sense we wanted. (3) No system is consistent if all interpretations must be models; for if we have the theorems abc and bc, then one interpretation will assign negation to "a", making these two theorems contradictory. (4) There is a deep assumption that all truths are consistent with one another. If the connectives keep their usual meanings (no unexpected negations), then a single model proves that all theorems are simultaneously true for the same I, which brings consistency in this sense.

9.7. If we don't limit the interpretation sought to those in which the connectives take their usual meanings, then any system we like can be rendered consistent by the device of interpreting the alphabet of the language so that no symbol takes the meaning of negation.

9.9. True. Simple consistency and p-consistency both describe the syntactic property that a contradiction cannot be derived from a set or system.

9.15. For help, see Hunter at p. 78, or Hofstadter at pp. 87-88, 94f, 453.

10.1. There are many ways to explain the legitimacy of this procedure. (1) In introductory logic courses, the procedure is called conditional proof. If we assume any proposition p, hypothetically, and then derive q, then we may non-hypothetically assert p → q. See any elementary logic textbook for a proof of the validity of this procedure. (2) Every conditional statement has a corresponding argument; that is, for every wff of the form p → q (that is tautologous) there is an inference from p to q (that is valid), and vice versa. (3) Since p → q is true whenever p is false, we need not worry about the case when p is false; if we prove that q follows from the affirmation of p, then we have proved the non-trivial part of the statement.

10.3. Hint: see the proof of Hunter's metatheorem 43.1.

11.1.1 - 11.1.10. False. False. False. True. True. False. True. False. True. True.

Remember that for TFPL a model of a wff is a truth-table row in which the wff comes out true. Hence in general, for TFPL, when asked whether every model of A is also a model of B, look at the truth-table columns for A and B. Is every row where A is true also a row where B is true?
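The advice above can be turned into a small Python sketch (my illustration): enumerate the truth-table rows, and check that every row making A true makes B true.

```python
from itertools import product

def holds(wff, row):
    """Evaluate a wff in one truth-table row (a dict of truth-values).
    Atoms are strings; ("~", A) and ("->", A, B) are compound wffs."""
    if isinstance(wff, str):
        return row[wff]
    if wff[0] == "~":
        return not holds(wff[1], row)
    if wff[0] == "->":
        return (not holds(wff[1], row)) or holds(wff[2], row)

def entails(a, b, atoms=("p", "q")):
    """Is every model (true row) of A also a model of B?"""
    rows = (dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms)))
    return all(holds(b, row) for row in rows if holds(a, row))
```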

11.4.1. For semantic completeness, essentially we want as theorems all strings of symbols in the language that are tautologies in the standard interpretation. We would fall far short of this if we had just one tautologous theorem, even if it is demonstrably biconditional with every other tautology. Logical equivalence shows that all tautologies are identical in truth-value (and truth conditions), but not that they are identical wffs considered syntactically as typographical objects. In this sense, "semantic" completeness is a kind of "syntactic" completeness, or has a syntactic as well as a semantic motivation.

12.1.1. If k = ℵ0 in metatheorem 31.15, then the loop in the middle of the proof wouldn't work. In the loop we repeatedly subtract 1 from k (by applying the deduction theorem), hoping eventually to reduce k to 0. When k is infinite, then k - 1 = k; after each iteration of the loop we'd be back where we started.

12.1.2. k cannot be infinite in metatheorem 31.15 because k represents the number of distinct propositional symbols in some arbitrary tautology A. But all tautologies are wffs, and all wffs in PS are finite in length.

Moreover, under a lemma for 31.15, we can take the atomic components of A, say B1, ..., Bk, and assert that (when valued in certain ways) they syntactically imply A: B1^I, ..., Bk^I ⊢ A^I. But if k were infinite, then this lemma would violate our requirement that derivations take a finite number of premises.

13.4.1. False. A maximal p-consistent set of PS wffs will contain at least some contingencies like p, q, or r. (Instead of p, it might contain ~p, and so on.) A "contingency" here is a proposition that is neither a tautology nor a contradiction. But because all PS's theorems are tautologies (proved in metatheorem 28.3, Day 11), no contingencies are theorems of PS. Therefore Γ might contain all theorems, but not only theorems.

Sub-exercise: Satisfy yourself that contingencies are consistent with tautologies, in this sense: each contingency of PS is consistent with all tautologies. Or: if Δ contained all and only tautologies, then if we added contingencies to Δ one at a time, then Δ could remain p-consistent arbitrarily long (or short); but when Δ became p-inconsistent, it would be on account of the inconsistency of two contingencies, not the inconsistency of two tautologies and not the inconsistency of a contingency and a tautology.

13.4.2. False. If Γ contained all non-theorems of PS, then it would contain both p and ~p, and then it would not be p-consistent.

13.7. Yes, there is an effective method here, but it is intractable. The job of testing all the wffs in a set for consistency entails testing all its subsets. Hence for n statements, we must perform 2^n tests; the underlying satisfiability problem is NP-complete. For a very brief introduction to NP-completeness, see the hand-out, Three Levels of Truth.
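The growth is easy to exhibit in Python (a sketch of my own): the subsets of an n-membered set number 2^n, so even a fast per-subset test is swamped for modest n.

```python
from itertools import combinations

def subsets(stmts):
    """Enumerate all subsets of a finite set of statements. There are
    2**n of them, which is why the naive consistency test, though
    effective, quickly becomes intractable."""
    for r in range(len(stmts) + 1):
        yield from combinations(stmts, r)
```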

13.10. The chief reason perhaps is that derivations (denoted by syntactic consequence) are finite by definition.

14.1.1. PS wants all and only tautologies as theorems. If it were complete in the sense that for every wff A, either A or ~A were a theorem, then either p or ~p would be a theorem, yet both of these are contingencies; hence PS would have contingent theorems.

14.1.2. No, PS would not necessarily be inconsistent. It would merely fail to support the metatheorem that if ⊢ A then ⊨ A (all theorems are logically valid). PS would have contingent theorems (but also tautologous theorems); such systems can be consistent. While we sometimes use this metatheorem as shorthand for consistency, it is actually stronger than consistency and may be false for some consistent systems with contingent theorems.

14.1.3. At least one necessary condition of a system that will find this kind of completeness desirable is that it will want to have contingencies among its theorems, and hence among its axioms. (Contingent axioms are called proper axioms as opposed to logical axioms.) Arithmetic turns out to be such a system. So does predicate logic with identity.

14.6. Syntactic completeness only means we cannot add an unprovable schema without triggering inconsistency. Metatheorem 33.2 only refers to the addition of an unprovable wff. Syntactically complete systems that cannot take the addition of any unprovable schemata can still take any number of additional unprovable wffs; 33.2 proves this.

14.8. Suppose that system S is consistent. Let A be any wff of S. Either A is already a theorem of S or it isn't. If it is, then it may consistently be added to the axiom set of S. If it isn't, then its negation may consistently be added, under 33.2.

14.8, 14.9. Clue. Review Post's proof of completeness of TFPL in Day 11. He showed there was an effective proof procedure for the system of TFPL in Whitehead and Russell's Principia Mathematica.

16.5. No. Atomic wffs lack quantifiers and connectives. Wffs without connectives cannot be logically valid and, therefore, should not appear as theorems in a consistent system of PL.

16.6. A model requires truth for I, not merely satisfaction.

16.7. Hint: You can say "exactly n" if you can say both "at least n" and "at most n".

If you want more than just this hint, see the end of my Translation Tips hand-out.

As we move through the metatheory of PL, reflect on the importance of the fact that we cannot express the natural numbers without identity or "=".

20.10.1. False. While A implies B, it is not the case that B implies A. Negation incompleteness is more than sufficient to prove absolute consistency; for some wff A, it shows both A and ~A to be non-theorems, when either alone would suffice for absolute consistency. For the very reason that negation incompleteness proves more than absolute consistency, we cannot get the implication from B to A, since absolute consistency does not imply negation incompleteness. For example, A may be a non-theorem while ~A is a theorem.

20.10.2. False. As it stands, statement C is not limited to closed wffs. Without such a limitation, A implies C, but C does not always imply A (unless S is inconsistent). With such a limitation the biconditional becomes true. A system is negation incomplete only if a closed wff is undecidable. So negation incompleteness implies the existence of undecidable wffs, but the existence of undecidable wffs (which might all be open) does not imply negation incompleteness.

20.10.3. False. C always implies B, but B does not always imply C. A system is absolutely consistent only if some wff is a non-theorem; this wff may be decidable or undecidable. If a decidable wff is a non-theorem (i.e. the negation of a theorem), then S is absolutely consistent but need not contain any undecidable wffs.

I haven't written many answers yet for exercises after Day 16. But as time permits, I will.

This file is an electronic hand-out for the course, Logical Systems.

Some of the logic symbols in this file are GIFs; see my Notes on Logic Notation on the Web. Some are HTML characters using the Symbol Font; see Alan Wood's guide to its symbols.

Peter Suber, Department of Philosophy, Earlham College, Richmond, Indiana, 47374, U.S.A.