Logic: Intro and Propositional Calculus

Brief History of Reasoning

450 BC  Stoics          propositional logic, inference (maybe)
322 BC  Aristotle       "syllogisms" (inference rules), quantifiers
1565    Cardano         probability theory (propositional logic + uncertainty)
1847    Boole           propositional logic
1879    Frege           first-order logic
1922    Wittgenstein    proof by truth tables
1930    Gödel           complete algorithm for FOL exists
1930    Herbrand        complete algorithm for FOL (reduce to propositional)
1931    Gödel           no complete algorithm for arithmetic exists
1960    Davis & Putnam  "practical" algorithm for propositional logic
1965    Robinson        "practical" algorithm for FOL: resolution

Symbolic Logic

Logic as Canonical Representation

Types of Logic in 173

More Logics and Their Properties

What does a logic commit to (express) as primitives? Ontological commitments (what exists: facts? objects? time? beliefs?) and epistemological commitments (what states of knowledge are there?).

LANGUAGE             ONTOLOGICAL                      EPISTEMOLOGICAL
Prop. logic          facts                            T/F/?
1st-order logic      facts, objects, relations        T/F/?
Temporal logic       FOL + time                       T/F/?
Prob. theory         facts                            prob. ⇒ degree of belief ∈ [0,1]
Fuzzy logic          degree of truth                  degree of belief ∈ [0,1]
Non-monotonic logic  FOL; a fact's truth can change   T/F/?
Modal logic          modal ops. on sentences          possible worlds

Propositional Calculus

Teleport to the Prop. Calc. PPT, courtesy of Hwee Tou Ng (National University of Singapore).

Boolean Functions

Boolean Functions and Circuits

The non-universality of AND, vs. NAND

Models: Definitions

More on Models 1

More on Models 2

More on Models 3

Model-Checking with TTs

Model-Checking with TTs (Again)

Let α = A ∨ B and KB = (A ∨ B) ∧ (B ∨ ∼ C).

Is it the case that KB |= α ?

Check all possible models: α must be true whenever KB is true.

A B C | (A ∨ B) | (B ∨ ∼C) | KB | α | KB ⇒ α
F F F |
F F T |
F T F |
F T T |
T F F |
T F T |
T T F |
T T T |

(The remaining columns are filled in below.)

The Relationship of Implication and Entailment

Recall α = A ∨ B and KB = (A ∨ B) ∧ (B ∨ ∼C).

         (1)      (2)    (1 ∧ 2)
A B C | A ∨ B | B ∨ ∼C |   KB   | α | KB ⇒ α
F F F |   F   |   T    |   F    | F |   T
F F T |   F   |   F    |   F    | F |   T
F T F |   T   |   T    |   T    | T |   T
F T T |   T   |   T    |   T    | T |   T
T F F |   T   |   T    |   T    | T |   T
T F T |   T   |   F    |   F    | T |   T
T T F |   T   |   T    |   T    | T |   T
T T T |   T   |   T    |   T    | T |   T

Last column: true in all models!

Does KB |= α?

α true whenever KB is true
or, Never KB = T and α = F
or, M(KB) ⊆ M(α)
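
As a sanity check, here is a minimal brute-force model-checker in Python for this example (a throwaway sketch; the names kb, alpha, and entails are ours, not from any standard library):

    from itertools import product

    def kb(a, b, c):                     # KB = (A ∨ B) ∧ (B ∨ ∼C)
        return (a or b) and (b or not c)

    def alpha(a, b, c):                  # α = A ∨ B
        return a or b

    # KB |= α iff α holds in every model of KB, i.e. M(KB) ⊆ M(α).
    def entails():
        return all(alpha(a, b, c)
                   for a, b, c in product([False, True], repeat=3)
                   if kb(a, b, c))

    print(entails())   # True: KB |= α

This enumerates all 2^3 models, exactly like the truth table above.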

Propositional Logic Semantics

Model-checking in PC with truth tables is an exponential algorithm: with n distinct symbols there are 2^n assignments, and we might have to check them all to find a satisfying one. Indeed SAT, the problem of finding a satisfying assignment for a PC sentence, is NP-complete; so is 3-SAT, the restriction in which every clause has at most 3 literals.

BEYOND TRUTH TABLES -- EFFICIENT MODEL CHECKING

NP-complete problem: SAT(isfiability) was the first problem proved NP-complete; a huge family of problems is equivalent to it.

Truth table has 2^n rows.

Wang Algorithm: an early domain-specific pruned search, involving a canonical form and special operations on clauses equivalent to some we'll see later. No Wikipedia article (opportunity there).

Nowadays, SAT-solvers. They are central to lots of current key problems like hardware and security-protocol verification. There's an annual competition, too. Today, SAT instances with tens of millions of variables can be solved.

Techniques in common with constraint satisfaction problems (CSPs), like N-Queens or Cryptarithmetic. The problem is always to assign one of a set of labels to each of a set of variables so that a set of constraints is satisfied.

TT Alternative: Inference (Preview)

Suppose we had axioms, or theorems, or identities, or rules of logic, or syllogisms, that let us rewrite a set of PL sentences into an equivalent one -- one that always has the same truth value as the AND of the sentences in the set. Then if we could rewrite our KB into our desired conclusion, we'd be done.

Some of these rules go back to Aristotle. Modus Ponens ("the mode that affirms") is (the comma means "AND"):
(B ⇒ A, B) ⇒ A
Resolution is:
[(¬ B ∨ A), B] ⇒ A
A very useful identity is (B ⇒ A) ⇔ (¬ B ∨ A),
which can be proved by TT. Using it, we can see Resolution and MP are very closely related.
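
Here is a quick TT verification in Python of that identity, and of the validity of MP and Resolution (a throwaway sketch; implies and the variable names are ours):

    from itertools import product

    def implies(p, q):       # material implication: p => q
        return q if p else True

    bools = [False, True]

    # The identity: (B => A) <=> (not B or A)
    print(all(implies(b, a) == ((not b) or a)
              for a, b in product(bools, repeat=2)))     # True

    # Modus Ponens: ((B => A) and B) => A is a tautology
    print(all(implies(implies(b, a) and b, a)
              for a, b in product(bools, repeat=2)))     # True

    # Resolution: (((not B) or A) and B) => A is a tautology
    print(all(implies(((not b) or a) and b, a)
              for a, b in product(bools, repeat=2)))     # True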

If you have had a logic class, chances are you proved logic theorems by using MP and other rules of inference --- CB did, and it's hard (choosing the right rule, say). However, which system would be better for automating the proof process? We'll see...

HORN CLAUSES

Resolution is complete, but if we don't need the full expressive power of FOPC, we can sometimes use only special sorts of clauses, especially (in Prolog, say) Horn clauses. See Wikipedia: Horn Clause.

Horn clauses are closed under resolution and have a quick decision algorithm. Horn clauses have the power of a Turing machine, but have some esoteric weaknesses compared to full FOPC. See, e.g., "Reasoning with Horn Clauses".

Consider PC sentences as ANDs of clauses, where each clause is an OR of (maybe negated) literals: e.g. ((p ∨ ∼q) ∧ t).

Definite Clause: has exactly one positive literal.

Fact: definite clause with no negative literals (it's just one literal)

Goal Clause: has no positive literals.

HORN CLAUSES

I find the * expression most intuitive.

Disjunctive Form
Definite Clause: ∼p ∨ ∼q ∨ ... ∨ ∼t ∨ u
Fact: u
* Goal Clause: ∼p ∨ ∼q ∨ ... ∨ ∼t
(Try to) show p, q, ..., t all hold, as in proof by contradiction: "at least one of these has to be false."

Implication Form
* Definite Clause: u ← p ∧ q ∧ ... ∧ t
As in Prolog: to prove u, prove p,q,...,t
Fact: u
Goal Clause: false ← p ∧ q ∧ ... ∧ t

Prolog Form
Definite Clause: A :- B, C, D.
Fact: A.
Goal Clause: :- B, C, D.

What's NOT a Horn Clause?
for example, in "not-Prolog",
A, B :- pred(x,y,Z). % "A or B is true if pred(..)".
Makes us queasy: search??

KBs with definite clauses:

  1. Look like a list of implications: easy to understand. Single positive literal is a fact. Prolog!
  2. Top-down and bottom-up reasoning (backward and forward chaining, respectively) are both intuitive and natural. Prolog uses backward.
  3. Entailment algorithm is linear in KB size(!!)

Forward and Backward chaining approaches to inference.
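
Here is a minimal sketch of that linear-time entailment idea for definite clauses, via forward chaining (the clause representation, the tiny example KB, and the name fc_entails are all ours):

    from collections import deque

    # Definite clauses as (premises, conclusion); a fact has no premises.
    # KB:  p.  q.  u <- p ∧ q.  r <- u ∧ p.
    clauses = [((), 'p'), ((), 'q'),
               (('p', 'q'), 'u'),
               (('u', 'p'), 'r')]

    def fc_entails(clauses, query):
        count = [len(prem) for prem, _ in clauses]   # unknown premises left
        inferred = set()
        agenda = deque(concl for prem, concl in clauses if not prem)
        while agenda:
            p = agenda.popleft()
            if p == query:
                return True
            if p in inferred:
                continue
            inferred.add(p)
            for i, (prem, concl) in enumerate(clauses):
                if p in prem:
                    count[i] -= 1             # one more premise now known
                    if count[i] == 0:
                        agenda.append(concl)  # clause fires
        return False

    print(fc_entails(clauses, 'r'))   # True: p, q give u, then r
    print(fc_entails(clauses, 's'))   # False: s is not entailed

Each symbol leaves the agenda at most once and each clause's counter drops at most once per premise, which is where the linear bound comes from. Backward chaining would instead start from the query and recurse on the premises of clauses that conclude it, as Prolog does.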

SAT and PC

Transform the PC sentence(s) (with ∧, ∨, ⇒, etc.) into Conjunctive Normal Form (coming up), which is the AND of clauses
(...) ∧ (A ∨ ∼B ∨ D ∨ ...) ∧ ...
SAT finds an (or all) assignment(s) of True or False to the variables such that the sentence is true. Clauses can help: resolution proofs use them, as do SAT solvers, which cleverly avoid considering all 2^N models. In the examples below, let ',' be ∨ and ';' be ∧.

(A, B, C); (B, ∼C, D); (A, ∼B, ∼D)
(E, F, G); (∼E, F, ∼G)
7 variables but falls apart into a 4-var and a 3-var problem: 16+8 models, not 128. Component Analysis.
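
A sketch of how a solver might detect those components, by merging clauses that share variables (the representation and the name components are ours; literal signs are ignored, since only variable sharing matters):

    # Clauses as sets of variable names, from the example above.
    clauses = [{'A', 'B', 'C'}, {'B', 'C', 'D'}, {'A', 'B', 'D'},
               {'E', 'F', 'G'}, {'E', 'F', 'G'}]

    def components(clauses):
        comps = []                                  # disjoint variable sets
        for cl in clauses:
            touching = [c for c in comps if c & cl]
            comps = [c for c in comps if not (c & cl)]
            comps.append(set(cl).union(*touching))  # merge overlapping sets
        return comps

    print(components(clauses))   # two components: {A,B,C,D} and {E,F,G}

Each component can then be handed to the solver separately: 2^4 + 2^3 = 24 models instead of 2^7 = 128.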

(A) is a unit clause, so we know its value. This gives unit propagation:
(A); (∼A, ∼B); (B, C)
A is T, so B is F, so C is T ... like resolution or MP. This time the result is linear!

(A, B); (A, ∼B): A is a pure symbol -- it does not appear as both A and ∼A. Here A is T, and B can be T or F.

(∼A, D); (A, B, C); (A, ∼B, C) -- C is pure and T.
We can also ignore clauses known to be true, so if we know D is T, we can ignore clause 1, and then A is pure as well.

A pure symbol can purify another, similar to unit clause propagation.
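
A minimal sketch of unit propagation in that spirit (literals are strings with '~' for negation; neg and unit_propagate are our names; a real solver would also detect the empty clause as a conflict):

    def neg(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    def unit_propagate(clauses):
        """Assign unit clauses until none remain: drop satisfied
        clauses, delete falsified literals from the rest."""
        assignment, clauses = {}, [set(c) for c in clauses]
        while True:
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                return assignment, clauses
            lit = units[0]
            assignment[lit.lstrip('~')] = not lit.startswith('~')
            clauses = [c - {neg(lit)} for c in clauses if lit not in c]

    # (A); (~A, ~B); (B, C)  ==>  A=T, so B=F, so C=T
    print(unit_propagate([{'A'}, {'~A', '~B'}, {'B', 'C'}]))
    # ({'A': True, 'B': False, 'C': True}, [])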

EFFICIENT MODEL-CHECKING:

The SAT problem: Find truth values for variables to make a set of clauses all true. 3-SAT is NP-complete. "A entails B" can be proved by testing UNsatisfiability of "A and not B".

Huge effort in SAT-solvers. SAT Solving Competition, many practical applications (hardware correctness, protocol correctness).

Davis-Putnam: a Complete Backtracking Algorithm

Basically a depth-first enumeration of models, with several tricks.

Early termination: we can tell whether S must be T or F even with a partially completed model. A clause is true if any of its literals is true, and the sentence is false if any clause is false.

'Pure Symbol' Heuristic. A pure symbol has the same sign in all clauses, so if S is satisfiable, some model makes the pure literal true: assign it. We can also ignore literals in clauses known to be true, so assigning one variable can purify another. In (A ∨ ∼B), (∼B ∨ ∼C), (C ∨ A), A and B are pure. If the model contains B = false, then (∼B ∨ ∼C) is true regardless of C's value: the clause can be ignored, and that purifies C.

'Unit Clause' Heuristic. A unit clause has one literal, but here we also include clauses in which all literals but one are assigned FALSE. Unit clauses force assignments of their variables (the literal must be true). The heuristic is to assign all unit clauses before moving on. As with pure symbols, assigning one unit clause can "unit-ify" another. Such a cascade of forced assignments is 'unit propagation', which is like forward chaining.

For example, with clauses (A), (∼A ∨ B): (A) is a unit clause, hence A must be true in a satisfying interpretation. That means in (∼A ∨ B), ∼A is false, so B must be true, and we're done.

If this little exercise reminds you of resolution and modus ponens, good.
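
Putting early termination and the two heuristics together gives the classic DPLL skeleton. Below is a minimal recursive sketch (same clause representation as before; dpll, neg, and true_under are our names, and this omits the CSP-style refinements discussed next):

    def neg(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    def true_under(lit, assignment):
        return assignment.get(lit.lstrip('~')) == (not lit.startswith('~'))

    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})
        # Early termination: drop true clauses, prune falsified literals.
        live = []
        for c in clauses:
            if any(true_under(l, assignment) for l in c):
                continue                          # clause already true
            c = {l for l in c if l.lstrip('~') not in assignment}
            if not c:
                return None                       # empty clause: fail early
            live.append(c)
        if not live:
            return assignment                     # all clauses satisfied
        # Unit clause heuristic: forced assignment, no backtracking needed.
        for c in live:
            if len(c) == 1:
                lit = next(iter(c))
                assignment[lit.lstrip('~')] = not lit.startswith('~')
                return dpll(live, assignment)
        # Pure symbol heuristic: safe to make the pure literal true.
        lits = {l for c in live for l in c}
        for lit in lits:
            if neg(lit) not in lits:
                assignment[lit.lstrip('~')] = not lit.startswith('~')
                return dpll(live, assignment)
        # Otherwise branch: try both values of some remaining variable.
        var = next(iter(lits)).lstrip('~')
        for val in (True, False):
            result = dpll(live, {**assignment, var: val})
            if result is not None:
                return result
        return None

    # Entailment via UNSAT: KB |= α iff KB ∧ ¬α is unsatisfiable.
    # KB = (A ∨ B) ∧ (B ∨ ∼C), ¬α = ∼A ∧ ∼B:
    print(dpll([{'A', 'B'}, {'B', '~C'}, {'~A'}, {'~B'}]))   # None: entailed

Returning the forced unit/pure assignment directly, rather than trying the other value on failure, is sound because those assignments never change satisfiability.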

GENERAL CSP TECHNIQUES IN BACKTRACKING SAT-SOLVERS

SAT as tree search: 2^N paths for N vars:

           P
          / \
         /   \
        Q     Q 
       / \   / \
      R   R R   R
     / \ ...   / \

Component Analysis: if the clauses fall into disjoint subsets (components in the constraint graph), they're independent and can be solved separately and in parallel.

Variable and Value Ordering: can use the degree heuristic to choose the variable that appears most frequently over all remaining clauses. Always assign T before F??

Intelligent Backtracking: backtrack to the cause of the problem; learn sets of conflict clauses.

Random Restart: no progress? Go back to the top and make some different random choices (as in variable and value selection). Don't forget the conflict clauses you've learned.

Clever Indexing: vital! This is why we want to be programmers, after all, hein? How do we answer questions like 'which variable appears most frequently?' or 'which clauses contain X as a positive literal?'... AND we are only interested in clauses not so far satisfied, so the indexing is dynamic... yikes.

INTELLIGENT BACKTRACKING: Looking Backward

Chronological backtracking is the normal, weak technique that backs up to the most recent decision (as in Prolog). But that decision might have nothing to do with the current failure. Why not keep, for each variable, a conflict set of the assignments that conflict with it? Then when we can't assign to a variable, backjump to the most recently assigned member of its conflict set.

Forward checking is equally powerful, really, but the general idea is to backtrack guided by the reason for failure: conflict-directed backjumping.

Constraint learning is the idea of finding the minimum set of variables from the conflict set that causes the problem. These variables and their values are a no-good, which can be remembered as a new constraint.