Logic is the beginning of wisdom, not the end.
Leonard Nimoy (as Spock)
4.1 Introduction to Propositional Logic
Propositional logic (also called Boolean logic or sentential logic) is the foundation of knowledge representation and automated reasoning in AI. It provides a formal language for expressing facts and a calculus for deriving new facts.
Why Logic in AI?
Advantages of Logic:
Declarative: Specify what is true, not how to compute it
Compositional: Build complex statements from simple ones using connectives
Unambiguous: Precise mathematical semantics
Inference: Automatically derive new knowledge from existing knowledge
Explainable: Reasoning steps are transparent and traceable
General: Apply same inference rules to any domain
Real-World Applications:
Expert systems (medical diagnosis, financial advising)
Hardware verification (chip design correctness)
Software verification (program correctness proofs)
Planning and scheduling systems
Natural language understanding
Semantic web and knowledge graphs
Circuit design and optimization
Database query optimization
4.2 Syntax of Propositional Logic
Atomic Propositions
Atomic propositions (or propositional symbols) are the basic building blocks:
Denoted by capital letters: P, Q, R, ...
Each represents a statement that is either true or false
Examples:
P: “It is raining”
Q: “I have an umbrella”
R: “I will get wet”
Logical Connectives
We combine atomic propositions using logical connectives:
| Connective | Symbol | Name | Meaning |
|---|---|---|---|
| Negation | ¬P | NOT | opposite of P |
| Conjunction | P ∧ Q | AND | both P and Q |
| Disjunction | P ∨ Q | OR | P or Q or both |
| Implication | P → Q | IF-THEN | if P then Q |
| Biconditional | P ↔ Q | IFF | P if and only if Q |
Well-Formed Formulas (WFF)
Definition: A formula is well-formed if constructed by these rules:
Every atomic proposition is a WFF
If α is a WFF, then ¬α is a WFF
If α and β are WFFs, then:
(α ∧ β) is a WFF
(α ∨ β) is a WFF
(α → β) is a WFF
(α ↔ β) is a WFF
Examples:
P ✓
(P ∧ Q) → R ✓
¬(P ∨ Q) ✓
P ∧ → Q ✗ (not well-formed)
4.3 Semantics: Meaning of Formulas
Truth Values
Every proposition has a truth value: true (T, 1) or false (F, 0).
Models
A model (or interpretation) is an assignment of truth values to all propositional symbols.
Example: For symbols P, Q, R, one model is: {P = true, Q = false, R = true}
Truth Tables for Connectives
Negation (¬):

| P | ¬P |
|---|---|
| T | F |
| F | T |

Conjunction (∧):

| P | Q | P ∧ Q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

Disjunction (∨):

| P | Q | P ∨ Q |
|---|---|---|
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |

Implication (→):

| P | Q | P → Q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

Key insight: P → Q is equivalent to ¬P ∨ Q

Biconditional (↔):

| P | Q | P ↔ Q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |

Key insight: P ↔ Q means (P → Q) ∧ (Q → P)
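As a quick sanity check, the five connectives map directly onto Python's boolean operators. A minimal sketch (function names are illustrative, not from the chapter's notebook):

```python
from itertools import product

# Each connective as a function on truth values.
def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def IMPLIES(p, q): return (not p) or q   # false only when p is true and q is false
def IFF(p, q): return p == q             # true exactly when both sides agree

# Reproduce the truth table for implication.
for p, q in product([True, False], repeat=2):
    print(p, q, IMPLIES(p, q))
```

The printed rows match the implication table above: the only false row is p = True, q = False.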
4.4 Logical Equivalences
Two formulas α and β are logically equivalent (α ≡ β) if they have the same truth value in all models.
Important Equivalences
Identity Laws: P ∧ T ≡ P, P ∨ F ≡ P
Domination Laws: P ∨ T ≡ T, P ∧ F ≡ F
Idempotent Laws: P ∧ P ≡ P, P ∨ P ≡ P
Double Negation: ¬¬P ≡ P
Commutative Laws: P ∧ Q ≡ Q ∧ P, P ∨ Q ≡ Q ∨ P
Associative Laws: (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R), (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)
Distributive Laws: P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R), P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
De Morgan’s Laws: ¬(P ∧ Q) ≡ ¬P ∨ ¬Q, ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
Implication Elimination: P → Q ≡ ¬P ∨ Q
Biconditional Elimination: P ↔ Q ≡ (P → Q) ∧ (Q → P)
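Each of these laws can be verified mechanically by enumerating all models. A small stdlib-only sketch (the helper name `equivalent` is ours, not the chapter's):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two formulas are logically equivalent iff they agree in every model."""
    return all(f(*vs) == g(*vs)
               for vs in product([True, False], repeat=nvars))

# De Morgan's law: not(P and Q) == (not P) or (not Q)
assert equivalent(lambda p, q: not (p and q),
                  lambda p, q: (not p) or (not q), 2)

# Distributive law: P and (Q or R) == (P and Q) or (P and R)
assert equivalent(lambda p, q, r: p and (q or r),
                  lambda p, q, r: (p and q) or (p and r), 3)
```

The same helper works for any pair of formulas over a handful of symbols; with n symbols it checks all 2^n models.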
4.5 Satisfiability, Validity, and Entailment
Satisfiability
A formula is satisfiable if there exists at least one model where it is true.
Example: P ∧ Q is satisfiable (true when P = true, Q = true)
A formula is unsatisfiable (a contradiction) if it is false in all models.
Example: P ∧ ¬P is unsatisfiable
Validity
A formula is valid (a tautology) if it is true in all models.
Examples:
P ∨ ¬P (Law of Excluded Middle)
¬(P ∧ ¬P) (Law of Non-contradiction)
(P → Q) ↔ (¬Q → ¬P) (Contrapositive)
Logical Entailment
A knowledge base KB entails a formula α (written KB ⊨ α) if α is true in all models where KB is true.
Example:
KB: P, P → Q
Then KB ⊨ Q (by Modus Ponens)
Key Relationship: KB ⊨ α if and only if KB ∧ ¬α is unsatisfiable.
This is the basis for refutation theorem proving.
4.6 Conjunctive Normal Form (CNF)
Definition
A formula is in Conjunctive Normal Form if it is a conjunction of clauses, where each clause is a disjunction of literals.
Literal: An atomic proposition or its negation (P or ¬P)
Clause: Disjunction of literals (e.g., P ∨ ¬Q ∨ R)
CNF: Conjunction of clauses
Examples:
(P ∨ ¬Q) ∧ (¬P ∨ R) ✓ (in CNF)
(P ∧ Q) ∨ R ✗ (not in CNF)
Conversion to CNF
Algorithm:
Eliminate implications and biconditionals: α → β ≡ ¬α ∨ β and α ↔ β ≡ (α → β) ∧ (β → α)
Move negation inward (De Morgan’s laws): ¬(α ∧ β) ≡ ¬α ∨ ¬β, ¬(α ∨ β) ≡ ¬α ∧ ¬β, ¬¬α ≡ α
Distribute ∨ over ∧: α ∨ (β ∧ γ) ≡ (α ∨ β) ∧ (α ∨ γ)
Example Conversion:
Convert to CNF: ¬(P → Q)
Eliminate implication: ¬(¬P ∨ Q)
De Morgan: ¬¬P ∧ ¬Q
Double negation: P ∧ ¬Q
Result: P ∧ ¬Q (in CNF: a conjunction of two unit clauses)
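As an illustration of the procedure, ¬(P → Q) converts to P ∧ ¬Q, and that claim can be checked by enumerating all four models (a stdlib-only sketch; the formulas are encoded as Python lambdas):

```python
from itertools import product

# Original formula not(P -> Q), encoding P -> Q as (not P) or Q
original = lambda p, q: not ((not p) or q)
# CNF result of the conversion: P and not Q
cnf = lambda p, q: p and (not q)

# The two formulas agree in all four models, so the conversion is sound.
assert all(original(p, q) == cnf(p, q)
           for p, q in product([True, False], repeat=2))
```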
4.7 Model Checking
Model checking determines if KB ⊨ α by enumerating all possible models.
Truth Table Enumeration
Algorithm: TT-ENTAILS?(KB, α)
begin
    symbols ← all propositional symbols in KB and α
    return TT-CHECK-ALL(KB, α, symbols, {})
end

Algorithm: TT-CHECK-ALL(KB, α, symbols, model)
begin
    if symbols is empty then
        if KB is true in model then
            return (α is true in model)
        else
            return true // KB false in model, so entailment holds vacuously
    else
        P ← FIRST(symbols)
        rest ← REST(symbols)
        return TT-CHECK-ALL(KB, α, rest, model ∪ {P: true})
           and TT-CHECK-ALL(KB, α, rest, model ∪ {P: false})
end

Complexity: O(2^n), where n is the number of symbols
Sound: Yes - only returns true if entailment holds
Complete: Yes - always terminates with correct answer
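A compact Python rendition of the same idea, iterating over models rather than recursing (KB and query are represented as functions over a model dictionary; this encoding is our sketch, not the chapter's notebook API):

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """KB entails alpha iff alpha holds in every model that satisfies KB.
    kb and alpha are functions from a model dict to a truth value."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False   # counterexample: KB true but alpha false
    return True

# KB: P and (P -> Q); by Modus Ponens, KB entails Q.
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
print(tt_entails(kb, lambda m: m["Q"], ["P", "Q"]))      # True
print(tt_entails(kb, lambda m: not m["P"], ["P", "Q"]))  # False
```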
4.8 Resolution Theorem Proving
Resolution is a single inference rule that is sound and refutation-complete for propositional logic.
Resolution Rule
From two clauses containing complementary literals, infer a new clause containing all the remaining literals:
From ℓ ∨ α and ¬ℓ ∨ β, infer α ∨ β
Example: From P ∨ Q and ¬Q ∨ R, infer P ∨ R.
Resolution Algorithm
To prove KB ⊨ α, show that KB ∧ ¬α is unsatisfiable:
Algorithm: PL-RESOLUTION(KB, α)
begin
    clauses ← CNF(KB ∧ ¬α)
    new ← {}
    loop
        for each pair (Ci, Cj) of clauses in clauses do
            resolvents ← PL-RESOLVE(Ci, Cj)
            if resolvents contains empty clause then
                return true // Proved by contradiction
            new ← new ∪ resolvents
        if new ⊆ clauses then
            return false // No new clauses, cannot prove
        clauses ← clauses ∪ new
end
Algorithm: PL-RESOLVE(Ci, Cj)
begin
    resolvents ← {}
    for each literal ℓ in Ci do
        if ¬ℓ in Cj then
            resolvent ← (Ci - {ℓ}) ∪ (Cj - {¬ℓ})
            resolvents ← resolvents ∪ {resolvent}
    return resolvents
end

Properties:
Sound: Only derives true conclusions
Refutation-complete: Can prove all entailments by contradiction
Complexity: Exponential in worst case
Resolution Example
Prove: KB ⊨ R, where KB = {P, P → Q, Q → R}
Convert to CNF:
P, ¬P ∨ Q, ¬Q ∨ R, ¬R (negated query)
Apply resolution:
Resolve ¬Q ∨ R and ¬R: get ¬Q
Resolve ¬P ∨ Q and ¬Q: get ¬P
Resolve P and ¬P: get empty clause
Empty clause derived → KB ⊨ R proved!
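A refutation like this can be run mechanically. In the sketch below (our own encoding, not the chapter's notebook), a literal is a (symbol, polarity) pair and a clause is a frozenset of literals; the example KB is P, P → Q, Q → R with negated query ¬R:

```python
def resolve(ci, cj):
    """All resolvents of two clauses. ("P", True) encodes P; ("P", False) encodes not-P."""
    resolvents = []
    for sym, pos in ci:
        if (sym, not pos) in cj:
            resolvents.append((ci - {(sym, pos)}) | (cj - {(sym, not pos)}))
    return resolvents

def pl_resolution(clauses):
    """Saturate the clause set; an empty resolvent signals a contradiction."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci != cj:
                    for r in resolve(ci, cj):
                        if not r:        # derived the empty clause
                            return True
                        new.add(r)
        if new <= clauses:               # no new clauses: cannot prove
            return False
        clauses |= new

# KB = {P, P -> Q, Q -> R} plus the negated query not-R, all in CNF.
clauses = {frozenset({("P", True)}),
           frozenset({("P", False), ("Q", True)}),
           frozenset({("Q", False), ("R", True)}),
           frozenset({("R", False)})}
print(pl_resolution(clauses))   # True: the empty clause is derivable
```

This naive saturation loop matches PL-RESOLUTION above; practical provers add subsumption and tautology deletion to keep the clause set small.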
4.9 DPLL Algorithm for SAT
The Davis-Putnam-Logemann-Loveland (DPLL) algorithm is an efficient method for determining satisfiability.
Key Ideas
Early termination: Stop as soon as formula is satisfied or unsatisfied
Pure symbol heuristic: If symbol appears only positive or only negative, assign accordingly
Unit clause heuristic: If clause has only one literal, that literal must be true
Backtracking search: Try assignments systematically
DPLL Algorithm
Algorithm: DPLL(clauses, symbols, model)
begin
    if every clause in clauses is true in model then
        return true
    if some clause in clauses is false in model then
        return false
    // Pure symbol heuristic
    P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)
    if P is non-null then
        return DPLL(clauses, symbols - P, model ∪ {P: value})
    // Unit clause heuristic
    P, value ← FIND-UNIT-CLAUSE(clauses, model)
    if P is non-null then
        return DPLL(clauses, symbols - P, model ∪ {P: value})
    // Splitting: try both values
    P ← FIRST(symbols)
    rest ← REST(symbols)
    return DPLL(clauses, rest, model ∪ {P: true})
        or DPLL(clauses, rest, model ∪ {P: false})
end

Improvements over basic search:
Much faster in practice
Still exponential worst-case
Foundation of modern SAT solvers
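The search above can be sketched in Python. This is a simplified rendition (the pure-symbol step is omitted for brevity, and the clause encoding as lists of (symbol, polarity) pairs is ours):

```python
def dpll(clauses, symbols, model):
    """Clauses are lists of (symbol, polarity) literals; model maps
    symbols to truth values assigned so far."""
    def status(clause):
        undecided = False
        for sym, pos in clause:
            if sym in model:
                if model[sym] == pos:
                    return True          # clause already satisfied
            else:
                undecided = True
        return None if undecided else False  # False: every literal falsified

    statuses = [status(c) for c in clauses]
    if all(s is True for s in statuses):
        return True                      # early termination: all satisfied
    if any(s is False for s in statuses):
        return False                     # early termination: a clause is dead

    # Unit clause heuristic: a single unassigned literal forces its value.
    for clause, s in zip(clauses, statuses):
        if s is None:
            free = [(sym, pos) for sym, pos in clause if sym not in model]
            if len(free) == 1:
                sym, pos = free[0]
                return dpll(clauses, [x for x in symbols if x != sym],
                            {**model, sym: pos})

    # Splitting: try both values of the first unassigned symbol.
    p, rest = symbols[0], symbols[1:]
    return (dpll(clauses, rest, {**model, p: True})
            or dpll(clauses, rest, {**model, p: False}))

# (P or Q) and (not-P or R) and not-R is satisfiable, e.g. P=F, Q=T, R=F.
clauses = [[("P", True), ("Q", True)], [("P", False), ("R", True)], [("R", False)]]
print(dpll(clauses, ["P", "Q", "R"], {}))   # True
```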
4.10 Horn Clauses and Forward/Backward Chaining
Horn Clauses
A Horn clause is a clause with at most one positive literal.
Forms:
Definite clause: Exactly one positive literal
Example: ¬P ∨ ¬Q ∨ R (equivalent to (P ∧ Q) → R)
Fact: Single positive literal
Example: P
Goal clause: No positive literals
Example: ¬P ∨ ¬Q (equivalent to ¬(P ∧ Q))
Why Horn clauses?
Many real-world KBs naturally expressed as Horn clauses
Inference is linear time in size of KB
Foundation of logic programming (Prolog)
Forward Chaining
Forward chaining is data-driven inference: start with facts, apply rules to derive new facts.
Algorithm: FORWARD-CHAIN(KB, query)
begin
    count ← table of number of premises for each rule
    inferred ← table, initially all false
    agenda ← queue of facts, initially all facts in KB
    while agenda is not empty do
        p ← POP(agenda)
        if p = query then
            return true
        if inferred[p] = false then
            inferred[p] ← true
            for each rule (premises → conclusion) where p is in premises do
                count[rule] ← count[rule] - 1
                if count[rule] = 0 then
                    ADD conclusion to agenda
    return false
end

Properties:
Complete: For definite clauses
Time: linear, O(n + m), where n = number of rules and m = number of symbols
Space: Linear in KB size
Monotonic: Inferred facts never retracted
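The premise-counting idea above is short enough to write out directly; a minimal sketch, with rules encoded as (premises, conclusion) pairs of our own devising:

```python
from collections import deque

def forward_chain(facts, rules, query):
    """Data-driven inference over definite clauses.
    rules: list of (premises, conclusion); premises is a list of symbols."""
    count = [len(premises) for premises, _ in rules]   # unsatisfied premises
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (premises, conclusion) in enumerate(rules):
                if p in premises:
                    count[i] -= 1
                    if count[i] == 0:          # all premises seen: rule fires
                        agenda.append(conclusion)
    return False

# Facts A, B with rules (A and B) -> C and C -> D derive D.
rules = [(["A", "B"], "C"), (["C"], "D")]
print(forward_chain(["A", "B"], rules, "D"))   # True
```

Each fact is processed once and each rule's counter is decremented at most once per premise, which is where the linear-time bound comes from.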
Backward Chaining
Backward chaining is goal-driven inference: start with query, work backward to find supporting facts.
Algorithm: BACKWARD-CHAIN(KB, query)
begin
    if query is a known fact in KB then
        return true
    for each rule (premises → conclusion) where conclusion = query do
        if BACKWARD-CHAIN-ALL(KB, premises) then
            return true
    return false
end

Algorithm: BACKWARD-CHAIN-ALL(KB, premises)
begin
    for each premise in premises do
        if not BACKWARD-CHAIN(KB, premise) then
            return false
    return true
end

Properties:
Complete: For definite clauses
Efficient: Only explores relevant parts of KB
Depth-first: Can get stuck in infinite loops (need loop detection)
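A Python sketch of the same recursion, including the loop detection the depth-first strategy needs (the (premises, conclusion) rule encoding is ours):

```python
def backward_chain(facts, rules, goal, visited=frozenset()):
    """Goal-driven inference: prove goal via some rule whose premises
    are all provable. The visited set guards against infinite loops."""
    if goal in facts:
        return True
    if goal in visited:          # loop detection: this goal is already pending
        return False
    visited = visited | {goal}
    for premises, conclusion in rules:
        if conclusion == goal:
            if all(backward_chain(facts, rules, p, visited) for p in premises):
                return True
    return False

# Same KB as the forward-chaining example: A, B with (A and B) -> C, C -> D.
rules = [(["A", "B"], "C"), (["C"], "D")]
print(backward_chain({"A", "B"}, rules, "D"))   # True
```

Note how only rules whose conclusion matches the current goal are ever examined, which is why backward chaining touches just the relevant part of the KB.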
Forward vs. Backward Chaining
| Aspect | Forward | Backward |
|---|---|---|
| Direction | Data-driven | Goal-driven |
| Strategy | Breadth-first | Depth-first |
| Complexity | O(n) | O(n) with memoization |
| Space | More | Less |
| Best for | Derive all consequences | Answer specific query |
4.11 Summary
Key Concepts
Syntax: Propositions and connectives
Semantics: Truth values and models
CNF: Standard form for inference
Entailment: Logical consequence
Model checking: Truth table enumeration
Resolution: Refutation-complete inference
DPLL: Efficient SAT solving
Horn clauses: Restricted but tractable
Forward/Backward chaining: Efficient for Horn clauses
Inference Methods Comparison
| Method | Complete | Sound | Complexity | Best For |
|---|---|---|---|---|
| Truth Tables | Yes | Yes | O(2^n) | Small formulas |
| Resolution | Yes* | Yes | Exponential | General CNF |
| DPLL | Yes | Yes | Exponential | SAT problems |
| Forward Chain | Yes** | Yes | Linear | Derive all facts |
| Backward Chain | Yes** | Yes | Linear | Specific queries |

*Refutation-complete, **For Horn clauses
Limitations of Propositional Logic
Cannot express:
Quantification (all, some)
Objects and relations
Functions
Lack of abstraction: Must enumerate all instances
Solution: First-Order Logic (Chapter 5)
4.12 Implementation
For complete Python implementations, see:
Chapter 4: Propositional Logic - Implementation
The implementation notebook includes:
Expression Classes: Symbol, Not, And, Or, Implies, Biconditional
Truth Table Generation: Enumerate all models
CNF Conversion: Transform any formula to CNF
Resolution Prover: Theorem proving by refutation
DPLL SAT Solver: Efficient satisfiability checking
Forward Chaining: Data-driven inference for Horn clauses
Backward Chaining: Goal-driven inference
Knowledge Base: Store and query facts and rules
Real-World Applications:
Logic puzzle solver (Einstein’s riddle)
Sudoku as SAT problem
Expert system for classification
Further Reading
Textbooks
Aggarwal, C. C. (2021). Artificial Intelligence: A Textbook. Springer. [Chapter 4]
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson. [Chapter 7]
Genesereth, M., & Nilsson, N. (1987). Logical Foundations of Artificial Intelligence. Morgan Kaufmann.
Classic Papers
Davis, M., & Putnam, H. (1960). A computing procedure for quantification theory. Journal of the ACM.
Robinson, J. A. (1965). A machine-oriented logic based on the resolution principle. Journal of the ACM.
Cook, S. A. (1971). The complexity of theorem-proving procedures. STOC.
Modern SAT Solvers
Biere, A., et al. (2009). Handbook of Satisfiability. IOS Press.
Marques-Silva, J., & Sakallah, K. A. (1999). GRASP: A search algorithm for propositional satisfiability. IEEE TC.