\(\newenvironment{bprooftree}{\begin{prooftree}}{\end{prooftree}}\)
24 Apr 2026

Logika v računalništvu (Simpson)

Warning: The notes are not optimized for HTML (yet). Without warranty.

Lecture 1: Propositional logic

Conjunction introduction and elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(\phi\)} \AxiomC{\(\psi\)} \RightLabel{\(\land i\)} \BinaryInfC{\(\phi \land \psi\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(\phi \land \psi\)} \RightLabel{\(\land e_{1}\)} \UnaryInfC{\(\phi\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(\phi \land \psi\)} \RightLabel{\(\land e_{2}\)} \UnaryInfC{\(\psi\)} \end{bprooftree} \end{gather*}
Disjunction introduction and elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(\phi\)} \RightLabel{\(\lor i_{1}\)} \UnaryInfC{\(\phi \lor \psi\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(\psi\)} \RightLabel{\(\lor i_2\)} \UnaryInfC{\(\phi \lor \psi\)} \end{bprooftree} \end{gather*}

Elimination is by case analysis: given \(\phi \lor \psi\), if both \(\phi \vdash \theta\) and \(\psi \vdash \theta\), conclude \(\theta\).

\begin{gather*} \begin{bprooftree} \AxiomC{\(\phi \lor \psi\)} \AxiomC{\(\phi\)} \noLine \def\extraVskip{0pt} \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(\theta\)} \def\extraVskip{2pt} \AxiomC{\(\psi\)} \noLine \def\extraVskip{0pt} \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(\theta\)} \def\extraVskip{2pt} \TrinaryInfC{\(\theta\)} \end{bprooftree} \end{gather*}
Implication introduction and elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(\phi\)} \noLine \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(\psi\)} \RightLabel{\(\rightarrow i\)} \UnaryInfC{\(\phi \rightarrow \psi\)} \end{bprooftree} \qquad\quad \begin{bprooftree} \AxiomC{\(\phi \rightarrow \psi\)} \AxiomC{\(\phi\)} \RightLabel{\(\rightarrow e\)} \BinaryInfC{\(\psi\)} \end{bprooftree} \end{gather*}

\(\rightarrow i\) discharges the assumption \(\phi\). \(\rightarrow e\) is modus ponens.

Negation introduction and elimination, falsum elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(\phi\)} \noLine \def\extraVskip{0pt} \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(\bot\)} \def\extraVskip{2pt} \RightLabel{\(\neg i\)} \UnaryInfC{\(\neg \phi\)} \end{bprooftree} \quad\qquad \begin{bprooftree} \AxiomC{\(\neg \phi\)} \AxiomC{\(\phi\)} \RightLabel{\(\neg e\)} \BinaryInfC{\(\bot\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(\bot\)} \RightLabel{\(\bot e\)} \UnaryInfC{\(\phi\)} \end{bprooftree} \end{gather*}

\(\neg i\) discharges \(\phi\). \(\bot e\) is ex falso quodlibet: from falsehood, anything follows.

Reductio ad absurdum

The rule that distinguishes classical from intuitionistic logic:

\begin{prooftree} \AxiomC{\(\neg \phi\)} \noLine \def\extraVskip{0pt} \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(\bot\)} \def\extraVskip{2pt} \RightLabel{raa} \UnaryInfC{\(\phi\)} \end{prooftree}

If assuming \(\neg \phi\) leads to \(\bot\), conclude \(\phi\). Compare with \(\neg i\): there, assuming \(\phi\) and deriving \(\bot\) gives \(\neg \phi\). RAA goes the other direction.

Distributivity of conjunction over disjunction

\[ p \land (q \lor r) \rightarrow (p \land q) \lor (p \land r) \]

\begin{logicproof}{3} \begin{subproof} p \land (q \lor r) & \(\text{assumption} \) \\ p & \(\land e(1) \) \\ q \lor r & \(\land e(1) \) \\ \begin{subproof} q & \(\text{assumption} \) \\ p \land q & \(\land i(2)(4) \) \\ (p \land q) \lor (p \land r) & \(\lor i(5)\) \end{subproof} \begin{subproof} r & \(\text{assumption} \) \\ p \land r & \(\land i(2)(7) \) \\ (p \land q) \lor (p \land r) & \(\lor i(8)\) \end{subproof} (p \land q) \lor (p \land r) & \(\lor e(3)(4{-}6)(7{-}9)\) \end{subproof} p \land (q \lor r) \rightarrow (p \land q) \lor (p \land r) & \(\rightarrow i (1{-}10)\) \end{logicproof}
Proof and the turnstile notation

A proof of a formula \(\phi\) from assumptions \(\Gamma\) is a sequence of formulas following the inference rules such that all assumptions used in the proof belong to \(\Gamma\). We write \(\Gamma \vdash \phi\).

Soundness and completeness of propositional natural deduction

\[ \Gamma \vdash \phi \iff \bigwedge \Gamma \rightarrow \phi \text{ is a tautology} \]

Soundness (\(\Rightarrow\)): If \(\Gamma \vdash \phi\), then \(\phi\) is true under every truth assignment making all formulas in \(\Gamma\) true.

Completeness (\(\Leftarrow\)): If \(\phi\) is true under every truth assignment making all formulas in \(\Gamma\) true, then \(\Gamma \vdash \phi\).

Curry-Howard correspondence: formulas as types

Propositional formulas correspond to types, and proofs correspond to terms (judgements \(t : A\)):

\[ \begin{array}{ll} \text{Formula} & \text{Type} \\ \hline p \text{ (atomic)} & \alpha \text{ (atomic type)} \\ \phi \land \psi & A \times B \text{ (product)} \\ \phi \lor \psi & A + B \text{ (coproduct/sum)} \\ \phi \rightarrow \psi & A \rightarrow B \text{ (function)} \\ \bot & 0 \text{ (empty type)} \end{array} \]
Product type introduction and elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(s : A\)} \AxiomC{\(t : B\)} \RightLabel{\(\Pi i\)} \BinaryInfC{\((s, t) : A \times B\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(t : A \times B\)} \RightLabel{\(\Pi e_{1}\)} \UnaryInfC{\(\pi_1(t) : A\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(t : A \times B\)} \RightLabel{\(\Pi e_{2}\)} \UnaryInfC{\(\pi_2(t) : B\)} \end{bprooftree} \end{gather*}

Computation rules: \(\pi_1(s, t) \longrightarrow s\) and \(\pi_2(s, t) \longrightarrow t\).

Function type introduction and elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(x : A\)} \noLine \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(t : B\)} \RightLabel{\(\rightarrow i\)} \UnaryInfC{\(\lambda x . t : A \rightarrow B\)} \end{bprooftree} \qquad \quad \begin{bprooftree} \AxiomC{\(t : A \rightarrow B\)} \AxiomC{\(u : A\)} \RightLabel{\(\rightarrow e\)} \BinaryInfC{\(tu : B\)} \end{bprooftree} \end{gather*}

Computation rule (beta reduction): \((\lambda x : A . t)\, u \longrightarrow t[x := u]\).

Substitution requires first renaming bound variables in \(t\) so they are distinct from free variables in \(u\).

Derived term: swap for products

Construct a term of type \(A \times B \rightarrow B \times A\).

\begin{logicproof}{2} \begin{subproof} x : A \times B & assumption \\ \pi_1(x) : A & \(\Pi e_1 (1)\) \\ \pi_2(x) : B & \(\Pi e_2 (1)\) \\ (\pi_2(x), \pi_1(x)) : B \times A & \(\Pi i (3)(2)\) \end{subproof} \lambda x : A \times B . (\pi_2(x), \pi_1(x)) : A \times B \rightarrow B \times A & \(\rightarrow i (1{-}4)\) \end{logicproof}
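
Under Curry-Howard this term has a direct programming reading. Below is a small illustrative sketch in Python, with tuples standing in for product types; the encoding is ours and not part of the formal calculus.

```python
# Hypothetical Python reading of the swap term: pi_1 and pi_2 become
# tuple projections, lambda abstraction becomes a function definition.
def swap(x):
    """A x B -> B x A, i.e. lambda x . (pi_2(x), pi_1(x))."""
    return (x[1], x[0])  # pair the projections in swapped order

assert swap((1, "a")) == ("a", 1)
```
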
Coproduct (sum) type introduction and elimination \begin{gather*} \begin{bprooftree} \AxiomC{\(t : A\)} \RightLabel{\(+ i_{1}\)} \UnaryInfC{\(\iota_1(t) : A + B\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(t : B\)} \RightLabel{\(+ i_2\)} \UnaryInfC{\(\iota_2(t) : A + B\)} \end{bprooftree} \end{gather*}

Elimination by case analysis:

\begin{prooftree} \AxiomC{\(s : A + B\)} \AxiomC{\(x : A\)} \noLine \def\extraVskip{0pt} \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(t : C\)} \def\extraVskip{2pt} \AxiomC{\(y : B\)} \noLine \def\extraVskip{0pt} \UnaryInfC{\(\vdots\)} \noLine \UnaryInfC{\(u : C\)} \def\extraVskip{2pt} \TrinaryInfC{\(\mathbf{case}\; s\; (\iota_1(x) \mapsto t \mid \iota_2(y) \mapsto u) : C\)} \end{prooftree}

Computation rules:

\begin{align*} \mathbf{case}\; \iota_1(s)\; (\iota_1(x) \mapsto t \mid \iota_2(y) \mapsto u) &\longrightarrow t[x := s] \\ \mathbf{case}\; \iota_2(s)\; (\iota_1(x) \mapsto t \mid \iota_2(y) \mapsto u) &\longrightarrow u[y := s] \end{align*}
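
As an informal illustration, the case construct and its computation rules can be sketched in Python with tagged pairs standing in for coproduct terms; the encoding and names below are our own, not part of the calculus.

```python
# Coproduct terms as tagged pairs: iota_1(a) and iota_2(b).
def iota1(a): return ("iota1", a)
def iota2(b): return ("iota2", b)

# case s (iota1(x) -> t | iota2(y) -> u), with the branches as functions.
def case(s, t, u):
    tag, val = s
    return t(val) if tag == "iota1" else u(val)

# The computation rules: case over an injection picks the matching branch.
assert case(iota1(5), lambda x: x + 1, lambda y: 0) == 6
assert case(iota2(5), lambda x: x + 1, lambda y: 0) == 0
```
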
Empty type

The empty type \(0\) has no introduction rule (no closed terms of type \(0\) can be constructed).

\begin{prooftree} \AxiomC{\(t : 0\)} \UnaryInfC{\(\mathbf{empty}(t) : A\)} \end{prooftree}

This is the type-theoretic counterpart of \(\bot e\) (ex falso quodlibet).

Lecture 2: First order logic and dependent type theory

Predicate logic: syntax

Extends propositional logic with predicates and quantifiers. \[ \phi ::= p(t_1, \dots, t_n) \mid \phi \land \phi \mid \phi \lor \phi \mid \phi \rightarrow \phi \mid \bot \mid \forall x . \phi \mid \exists x . \phi \] We add predicates expressing relations, and constants and function symbols expressing elements. Every function symbol has an arity \(\geq 1\); predicates have an arity \(\geq 0\).

Terms express elements: \[ t ::= x \mid c \mid f(t_1, \dots, t_n) \] where \(c\) represents constants, \(f\) a function symbol of arity \(n\).

Expressing properties of integers
  • Constants: \(0, 1\)
  • Function symbols: \(+, \cdot\) both of arity 2.

Examples of terms: \(z\), \(x + y\), \(x \cdot (y + (z + 1))\).

Predicates: \(=\) and \(\leq\), both of arity 2.

Example formula (with one free variable \(x\)): \[ \exists y . \exists z . \exists w . \exists v . \; x = y \cdot y + z \cdot z + w \cdot w + v \cdot v \]

The propositional universe

To integrate predicate logic and type theory, introduce a special type \(\Prop\) — the type of propositions, called the propositional universe.

Basic formation rules for \(\Prop\) \begin{gather*} \begin{bprooftree} \AxiomC{\(\phi : \Prop\)} \AxiomC{\(\psi : \Prop\)} \RightLabel{\(\land\)-form} \BinaryInfC{\(\phi \land \psi : \Prop\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(\phi : \Prop\)} \AxiomC{\(\psi : \Prop\)} \RightLabel{\(\lor\)-form} \BinaryInfC{\(\phi \lor \psi : \Prop\)} \end{bprooftree} \\ \begin{bprooftree} \AxiomC{\(\phi : \Prop\)} \AxiomC{\(\psi : \Prop\)} \RightLabel{\(\rightarrow\)-form} \BinaryInfC{\(\phi \rightarrow \psi : \Prop\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{} \RightLabel{\(\bot\)-form} \UnaryInfC{\(\bot : \Prop\)} \end{bprooftree} \end{gather*}

Often we make the context explicit, e.g.: \[ p : \Prop,\; q : \Prop \vdash p \land q \rightarrow q \land p : \Prop \]

Formation rules for quantifiers in \(\Prop\) \begin{gather*} \begin{bprooftree} \AxiomC{[\(x : A\)]} \noLine \UnaryInfC{\(\phi : \Prop\)} \RightLabel{\(\forall\)-form} \UnaryInfC{\(\forall x : A . \phi : \Prop\)} \end{bprooftree} \qquad\qquad\qquad \begin{bprooftree} \AxiomC{[\(x : A\)]} \noLine \UnaryInfC{\(\phi : \Prop\)} \RightLabel{\(\exists\)-form} \UnaryInfC{\(\exists x : A . \phi : \Prop\)} \end{bprooftree} \end{gather*}

An alternative is to include connectives and quantifiers as typed constants: \[ \rightarrow,\; \land,\; \lor : \Prop \rightarrow \Prop \rightarrow \Prop \] \[ \exists,\; \forall : (A \rightarrow \Prop) \rightarrow \Prop \]

Three judgement forms for \(\Prop\)
  1. \(t : A\) — \(t\) is of type \(A\)
  2. \(\phi : \Prop\) — \(\phi\) is a proposition
  3. \(\phi\) — \(\phi\) is true
Proof rules for the universal quantifier \begin{gather*} \begin{bprooftree} \AxiomC{[\(z : A\)]} \noLine \UnaryInfC{\(\phi\)} \RightLabel{\(\forall i\)} \UnaryInfC{\(\forall z : A . \phi\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(\forall x : A . \phi\)} \AxiomC{\(t : A\)} \RightLabel{\(\forall e\)} \BinaryInfC{\(\phi[x := t]\)} \end{bprooftree} \end{gather*}
  • In \(\forall i\), \(z\) must not appear free outside the box (i.e. not free in the context).
  • In \(\forall e\), every free occurrence of \(x\) in \(\phi\) is replaced by \(t\), with renaming of bound variables if necessary.
Proof rules for the existential quantifier \begin{gather*} \begin{bprooftree} \AxiomC{$t : A$} \AxiomC{$\phi[x := t]$} \RightLabel{$\exists i$} \BinaryInfC{$\exists x : A.\ \phi$} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{$\exists x : A.\ \phi$} \AxiomC{[$x : A$]} \noLine \UnaryInfC{$\phi$} \noLine \UnaryInfC{$\psi$} \RightLabel{$\exists e$} \BinaryInfC{$\psi$} \end{bprooftree} \end{gather*}

In \(\exists e\): \(\psi\) must be a proposition that makes sense outside the box, and \(\psi\) must not contain \(x\) free.

Unary predicate as a function \begin{logicproof}{2} p : A \rightarrow \Prop & assumption \\ \begin{subproof} x : A & assumption \\ px : \Prop & \(\rightarrow e\) \end{subproof} \forall x : A . px : \Prop & \(\forall\)-form (2--3) \end{logicproof}

Thus \(\underbrace{p : A \rightarrow \Prop}_{p \text{ is a unary predicate}} \vdash \forall x : A . px : \Prop\).

A unary predicate is a function from a type into propositions. For binary predicates we may use currying \(q : A \rightarrow A \rightarrow \Prop\) or pairs \(q : A \times A \rightarrow \Prop\).

\[ p : A \rightarrow \Prop,\; q : A \rightarrow \Prop \;\vdash\; (\forall x : A . px) \rightarrow (\forall x : A . qx) \rightarrow (\forall x : A . px \land qx) \] So to speak, the universal quantifier commutes with conjunction. Note this is an instance of RAPL (right adjoints preserve limits).

\begin{logicproof}{2} p : A \rightarrow \Prop & assumption \\ q : A \rightarrow \Prop & assumption \\ \forall x : A . px & assumption \\ \forall y : A . qy & assumption \\ z : A & assumption \\ pz & \(\forall e\) (3)(5) \\ qz & \(\forall e\) (4)(5) \\ pz \land qz & \(\land i\) (6)(7) \\ \forall z : A . pz \land qz & \(\forall i\) (5--8) \\ (\forall y : A . qy) \rightarrow (\forall z : A . pz \land qz) & \(\rightarrow i\) (4--9) \\ (\forall x : A . px) \rightarrow (\forall y : A . qy) \rightarrow (\forall z : A . pz \land qz) & \(\rightarrow i\) (3--10) \end{logicproof}
Judgements for types — the universe \(\Type\)

Replacing the grammar for types with a universe \(\Type\), our judgements are now:

  • \(\phi\) — \(\phi\) is true
  • \(t : A\) — \(t\) is of type \(A\)
  • \(\phi : \Prop\) — \(\phi\) is a proposition
  • \(A : \Type\) — \(A\) is a type

Formation rules for type constructors:

\begin{gather*} \begin{bprooftree} \AxiomC{\(A : \Type\)} \AxiomC{\(B : \Type\)} \BinaryInfC{\(A \times B : \Type\)} \noLine \UnaryInfC{\(A + B : \Type\)} \noLine \UnaryInfC{\(A \rightarrow B : \Type\)} \end{bprooftree} \quad \begin{bprooftree} \AxiomC{} \UnaryInfC{\(0 : \Type\)} \end{bprooftree} \end{gather*}

Thus when we introduce a concept, we give it four kinds of rule:

  1. Formation rules
  2. Introduction rules
  3. Elimination rules
  4. Computation rules
Dependent product example

\[ \Pi (n : \mathbb{N}) . \operatorname{Vector}_{\mathbb{N}}\, n \] is the type of functions \(f\) such that for each \(n : \mathbb{N}\), \(f(n) : \operatorname{Vector}_{\mathbb{N}}\, n\). Examples:

  • \(n \mapsto \overbrace{0 \cdots 0}^{n}\) — the zero sequence of length \(n\)
  • \(n \mapsto 1\, 2 \cdots n\) — the counting sequence of length \(n\)

Lecture 3: Dependent type theory 2

Natural numbers type

The type \(\operatorname{nat}\) is given by

\begin{gather*} \begin{bprooftree} \AxiomC{} \RightLabel{formation rule} \UnaryInfC{\(\operatorname{nat} : \Type\) } \end{bprooftree} \end{gather*} \begin{gather*} \begin{bprooftree} \AxiomC{} \UnaryInfC{\(0 : \operatorname{nat}\) } \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(t : \operatorname{nat}\)} \RightLabel{Introduction rules} \UnaryInfC{\(\operatorname{succ} t : \operatorname{nat}\) } \end{bprooftree} \end{gather*}
List types \begin{gather*} \begin{bprooftree} \AxiomC{\(A : \Type\)} \RightLabel{type of lists of elements of type \(A\)} \UnaryInfC{\(\operatorname{List} A : \Type\) } \end{bprooftree} \end{gather*} \begin{gather*} \begin{bprooftree} \AxiomC{\(A : \Type\)} \UnaryInfC{\(\operatorname{nil} : \operatorname{List} A\) } \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(a : A\)} \AxiomC{\(as : \operatorname{List} A\)} \RightLabel{Introduction} \BinaryInfC{\(\operatorname{cons} a\, as : \operatorname{List} A\) } \end{bprooftree} \end{gather*}

Alternatively, we can think of this type as given by

  1. A constant \(\operatorname{nil} : \operatorname{List} A\)
  2. A map \(\operatorname{cons} : A \rightarrow \operatorname{List} A \rightarrow \operatorname{List} A\)
Vector types \begin{gather*} \begin{bprooftree} \AxiomC{\(A : \Type\)} \AxiomC{\(n : \operatorname{nat}\)} \BinaryInfC{\(\operatorname{Vector} A \, n : \Type\)} \end{bprooftree} \end{gather*} \begin{gather*} \begin{bprooftree} \AxiomC{} \UnaryInfC{\(\operatorname{nil} : \operatorname{Vector} A\, 0\) } \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(a : A\)} \AxiomC{\(as : \operatorname{Vector} A\, n \)} \BinaryInfC{\(\operatorname{cons} n\, a \, as : \operatorname{Vector} A \, (\operatorname{succ} n)\)} \end{bprooftree} \end{gather*}

Alternatively, we can think of it as given by

  1. A constant \(\operatorname{nil} : \operatorname{Vec} A \, 0\)
  2. A map \(\operatorname{cons} : A \rightarrow \operatorname{Vec} A \, n \rightarrow \operatorname{Vec} A \, (\operatorname{succ} n)\)

The map including the dependent \(n\) means we really have a family of maps, which we give as a dependent product: \[ \operatorname{cons} : \prod(n : \operatorname{nat}) .\, A \rightarrow \operatorname{Vec} A \, n \rightarrow \operatorname{Vec} A \, (\operatorname{succ} n) \] This is why \(\operatorname{cons}\) takes the additional argument \(n\). Vectors form a dependent type.

Dependent product rules \begin{gather*} \begin{bprooftree} \AxiomC{[\(x : A\)]} \noLine \UnaryInfC{\(B : \Type\)} \RightLabel{\(\Pi\)f} \UnaryInfC{\(\Pi(x : A) . B : \Type\)} \end{bprooftree} \qquad \begin{bprooftree} \AxiomC{[\(x : A\)]} \noLine \UnaryInfC{\(t : B\)} \RightLabel{\(\Pi\)i} \UnaryInfC{\(\lambda x : A . t : \Pi(x : A) . B\) } \end{bprooftree} \qquad \begin{bprooftree} \AxiomC{\(s : \Pi (x : A).\, B\)} \AxiomC{\(t : A\)} \RightLabel{\(\Pi\)e} \BinaryInfC{\(st : B[t / x]\)} \end{bprooftree} \end{gather*}

We also have the computation rule \[ (\lambda x : A . t) u \longrightarrow t[u/x]. \]

Note that these rules match the rules for \(\rightarrow\). Once we have dependent products, we no longer need the \(\rightarrow\) type as a primitive, because it is definable: \(\Pi(x : A) . B\) reduces to the ordinary function type \(A \rightarrow B\) when \(B\) does not depend on \(x\).

Dependent sum types \begin{gather*} \begin{bprooftree} \AxiomC{\(A : \Type\)} \AxiomC{[\(x : A\)]} \noLine \UnaryInfC{\(B : \Type\)} \RightLabel{\(\Sigma\)-form} \BinaryInfC{\(\Sigma (x : A) . \, B : \Type\)} \end{bprooftree} \qquad \qquad \begin{bprooftree} \AxiomC{\(s : A\)} \AxiomC{\(t : B[s/x]\)} \RightLabel{\(\Sigma\)-intro} \BinaryInfC{\((s, t) : \Sigma(x : A). \, B\)} \end{bprooftree} \\ \begin{bprooftree} \AxiomC{\(t : \Sigma(x : A).\, B\)} \RightLabel{\(\Sigma\)-elim-1} \UnaryInfC{\(\pi_1(t) : A\) } \end{bprooftree} \quad \begin{bprooftree} \AxiomC{\(t : \Sigma(x : A).\, B\)} \RightLabel{\(\Sigma\)-elim-2} \UnaryInfC{\(\pi_2(t) : B[\pi_1(t) / x]\) } \end{bprooftree} \end{gather*}

We also define the computation rules

\begin{align*} \pi_1(s, t) &\longrightarrow s \\ \pi_2(s, t) &\longrightarrow t \end{align*}

Intuitively, \(\Sigma(x : A). B\) is the type where elements are tagged elements \[ \operatorname{in}_a(b) \] where \(a : A\) and \(b : B[a/x]\). Equivalently, we can give the tagging with pairs \((a, b)\) for \(a : A\) and \(b : B[a/x]\).

Product types

We can define product types using dependent sums: \[ A \times B := \Sigma(x : A) . \, B \] where \(B\) does not depend on \(x\).

Types and their elements
  • \(0\) has no elements
  • \(A \times B\) has pairs \((a, b)\) where \(a : A\) and \(b : B\)
  • \(A + B\) has tagged elements \(\iota_1(a)\) where \(a : A\) or \(\iota_2(b)\) where \(b : B\)
  • \(A \rightarrow B\) has functions of domain \(A\) and codomain \(B\)
  • \(\Pi (x : A).\, B\) has functions mapping elements \(a : A\) to elements of type \(B[a/x]\).
  • \(\Sigma (x : A). \, B\) has pairs \((a, b)\) where \(a : A\) and \(b : B[a/x]\)

We can also consider a primitive unit type.

Brouwer-Heyting-Kolmogorov interpretation of logic
  • \(\bot\) has no proof.
  • A proof of \(\phi \land \psi\) is a pair \((a, b)\) where \(a\) proves \(\phi\) and \(b\) proves \(\psi\).
  • A proof of \(\phi \lor \psi\) is either \(\iota_1(a)\) where \(a\) is a proof of \(\phi\) or \(\iota_2(b)\) where \(b\) is a proof of \(\psi\).
  • A proof of \(\phi \rightarrow \psi\) is a function that maps a proof of \(\phi\) to a proof of \(\psi\).
  • A proof of \(\forall x : A . \phi\) is a function that takes an element \(a : A\) to a proof of \(\phi[a/x]\).
  • A proof of \(\exists x : A . \phi\) is a pair \(a, b\) where \(a : A\) and \(b\) is a proof of \(\phi[a/x]\).
Propositions as types: Curry-Howard correspondence

We replace logical formulas (propositions) with types whose elements are proofs of the propositions.

\begin{align*} \bot &\longrightarrow 0 \\ \phi \land \psi &\longrightarrow A \times B \\ \phi \lor \psi &\longrightarrow A + B \\ \phi \rightarrow \psi &\longrightarrow A \rightarrow B \\ \forall x : A .\, \phi &\longrightarrow \Pi(x : A) . \, B \\ \exists x : A .\, \phi &\longrightarrow \Sigma(x : A) . \, B \end{align*}

With this approach we do not need an explicit propositional universe \(\Prop\). This works but we have a better way.

Propositions as subterminal types

This is the modern approach to embedding logic in type theory.

  • If \(\phi : \Prop\), then \(\phi : \Type\), i.e. \(\Prop\) is a subuniverse of \(\Type\)
  • \(\Prop\) is closed under \(\Pi\) types, meaning

    \begin{prooftree} \AxiomC{\(A : \Type\)} \noLine \UnaryInfC{\(x : A\)} \noLine \UnaryInfC{\(\phi : \Prop\)} \UnaryInfC{\(\Pi(x : A). \, \phi : \Prop\) } \end{prooftree}
  • If \(\phi : \Prop\), then \(\phi\) is subterminal: if \(a, b : \phi\), then \(a = b\).
  • We also take \(\Prop : \Type\) - impredicativity.
The Scott-Prawitz encoding of intuitionistic logic

Taking type theory with \(\Pi\) as the only type constructor is enough for logic. Adding inductive types as well allows us to do mathematics and computer science.

  • \(\bot\): \(\Pi(p : \Prop) . \,p\)
  • \(\phi \land \psi\): \( \Pi (p : \Prop) . (\phi \rightarrow \psi \rightarrow p) \rightarrow p \)
  • \(\phi \lor \psi\): \( \Pi (p : \Prop) . (\phi \rightarrow p) \rightarrow (\psi \rightarrow p) \rightarrow p \)
  • \(\phi \rightarrow \psi\): \(\phi \rightarrow \psi\)
  • \(\forall x : A . \phi\): \( \Pi (x : A) . \phi\)
  • \(\exists x : A . \phi\): \( \Pi (p : \Prop) . \left( \Pi(x : A) . (\phi \rightarrow p) \right) \rightarrow p\)

Lecture 4: Dependent type theory 3

We have the basic judgement forms

  • \(A : \Type\)
  • \(t : A\), which presupposes \(A : \Type\)
  • \(s \equiv t : A\), meaning \(s\) and \(t\) are judgementally equal.

and the derived judgment forms

  • \(\phi : \Prop\), an instance of \(t : A\) (with \(A := \Prop\))
  • \(\phi\) is true, meaning \(\_ : \phi\), so \(\phi\) must be inhabited. We say the term is a witness for \(\phi\).

We will look at judgemental equality today. It turns out to be crucial.

We want to produce an elimination rule for the naturals.

Induction rule \begin{gather*} \begin{bprooftree} \AxiomC{\(t : \operatorname{nat}\)} \AxiomC{\(P : \operatorname{nat} \rightarrow \Prop\)} \AxiomC{\(P0\)} \AxiomC{\(x : \operatorname{nat}, Px \vdash P(\operatorname{succ} x)\)} \QuaternaryInfC{\(Pt\)} \end{bprooftree} \end{gather*}

Instead of a rule-based formulation, we assume typed constants that are "equivalent" to the rules. Our introduction rules are terms \(0 : \operatorname{nat}\) and \(\operatorname{succ} : \operatorname{nat} \rightarrow \operatorname{nat}\). Note that we cannot derive that \(\operatorname{succ}\) is a function by our previous definition, but we can define \(\lambda x : \operatorname{nat} . \operatorname{succ} x : \operatorname{nat} \rightarrow \operatorname{nat}\).

Likewise, rather than defining the induction rule as above, we can formulate it as an axiom in logic.

Induction axiom

\[ \operatorname{Ind} : \forall P : \operatorname{nat} \rightarrow \Prop .\, \left( P0 \rightarrow (\forall x : \operatorname{nat} .\, Px \rightarrow P (\operatorname{succ} x)) \rightarrow \forall x : \operatorname{nat} .\, Px \right) \]

A mathematical proof in set theory derives the recursion principle from this: for any set \(A\) with \(a \in A\) and \(f : A \rightarrow A\), there exists a unique function \(r : \N \rightarrow A\) such that \(r(0) = a\) and \(r(n + 1) = f(r(n))\).

Building on the classical intuition, we generalize the induction axiom by replacing the \(\forall\) with the dependent product, constructing an explicit recursor function.

\[ R_{\operatorname{nat}} : \prod P : \operatorname{nat} \rightarrow \Type .\, P0 \rightarrow \left( \prod x : \operatorname{nat} .\, Px \rightarrow P(\operatorname{succ} x) \right) \rightarrow \prod x : \operatorname{nat}.\, Px \]

We can use the recursor to formulate a computation rule.

Say we have \(P : \operatorname{nat} \rightarrow \Type\), \(b : P0\), \(f : \prod (x : \operatorname{nat}) .\, Px \rightarrow P(\operatorname{succ} x)\). Then we have a term \[ R_{\operatorname{nat}}P\, b\, f\, : \prod(x : \operatorname{nat}) .\, Px \] which computes as

\begin{align*} R_{\operatorname{nat}} P\, b\, f\, 0 &\longrightarrow b : P0\\ R_{\operatorname{nat}} P\, b\, f\, (\operatorname{succ} n) &\longrightarrow f \, n\, \left( R_{\operatorname{nat}} P\, b\, f\, n\right) : P(\operatorname{succ} n) \end{align*}

We define the function

\begin{align*} \operatorname{add} : \operatorname{nat} &\rightarrow \operatorname{nat}\rightarrow \operatorname{nat}\\ \operatorname{add} &:= \lambda m : \operatorname{nat} .\, \lambda n : \operatorname{nat} .\, R_{\operatorname{nat}} (\lambda \_ : \operatorname{nat} .\, \operatorname{nat}) \,m\, (\lambda\_ : \operatorname{nat} .\, \lambda x : \operatorname{nat} .\, \operatorname{succ} x) \, n \end{align*}
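
As a sanity check, here is an untyped Python sketch of the recursor and of \(\operatorname{add}\); the motive \(P\) is erased, since Python is untyped, and the names are ours.

```python
# R_nat b f n: returns b at 0, and f(k, R_nat b f k) at succ k,
# mirroring the two computation rules above (motive erased).
def R_nat(b, f, n):
    return b if n == 0 else f(n - 1, R_nat(b, f, n - 1))

# add m n: base case m, step ignores the index and applies succ.
def add(m, n):
    return R_nat(m, lambda _, x: x + 1, n)

assert add(1, 1) == 2  # the computation carried out symbolically below
```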

Let us prove \(1 + 1 = 2\).

\begin{align*} \operatorname{add} \, (\operatorname{succ} 0)\, (\operatorname{succ} 0) &\longrightarrow^{ * } R_{\operatorname{nat}} (\lambda \_ : \operatorname{nat} .\, \operatorname{nat})\, (\operatorname{succ} 0)\, (\lambda \_ : \operatorname{nat} .\, \lambda x : \operatorname{nat} .\, \operatorname{succ} x)\, (\operatorname{succ} 0) \\ &\longrightarrow (\lambda \_ : \operatorname{nat} .\, \lambda x : \operatorname{nat}.\, \operatorname{succ} x) \, 0 \, (R_{\operatorname{nat}} (\lambda \_ : \operatorname{nat} .\, \operatorname{nat}) \, (\operatorname{succ} 0) \, (\lambda \_ : \operatorname{nat} .\, \lambda x : \operatorname{nat} .\, \operatorname{succ} x)\, 0) \\ &\longrightarrow^{ * } \operatorname{succ} (R_{\operatorname{nat}} \, (\lambda \_ : \operatorname{nat} .\, \operatorname{nat}) \, (\operatorname{succ} 0)\, (\lambda \_ : \operatorname{nat} .\, \lambda x : \operatorname{nat} .\, \operatorname{succ} x)\, 0) \\ &\longrightarrow \operatorname{succ} (\operatorname{succ} 0) \end{align*}

Note that a vector is given by \(\operatorname{nil} : \operatorname{Vec} \, A \, 0\) and \[ \operatorname{cons} : \prod (n : \operatorname{nat}) .\, A \rightarrow \operatorname{Vec} A\, n \rightarrow \operatorname{Vec}\, A\, (\operatorname{succ} n). \]

We define the recursor for vectors by

\begin{align*} R_{\operatorname{Vec}} : {}&\prod P : \left( \prod (n : \operatorname{nat}) .\, \operatorname{Vec} A\,n \rightarrow \Type \right) . \\ &P\, 0\, \operatorname{nil} \rightarrow {} \\ &\left( \prod (a : A) .\, \prod (n : \operatorname{nat}) .\, \prod (as : \operatorname{Vec} A\, n) .\, P\,n\,as \rightarrow P\,(\operatorname{succ} n)\, (\operatorname{cons}\,n\,a\,as) \right) \rightarrow {} \\ &\prod (n : \operatorname{nat}) .\, \prod (as : \operatorname{Vec} A\, n) .\, P\,n\,as \end{align*}

The computation rules are as follows.

Vector computation rules \begin{align*} R_{\operatorname{Vec}} P\, b\, f &: \prod (n : \operatorname{nat}) .\, \prod (as : \operatorname{Vec}\, A\, n) .\, P\,n\,as\\ R_{\operatorname{Vec}} P\, b\, f\, 0\, \operatorname{nil} &\longrightarrow b : P\,0 \,\operatorname{nil}\\ R_{\operatorname{Vec}} P\, b\, f\, (\operatorname{succ} n)\, (\operatorname{cons}\,n\,a\,as) &\longrightarrow f\,a\,n\,as\, (R_{\operatorname{Vec}} P\,b\,f\,n\,as) : P \,(\operatorname{succ} n)\,(\operatorname{cons}\,n\,a\,as) \end{align*}

We also specify equality as an inductive type.

  • Formation: \begin{prooftree} \AxiomC{\(A : \Type\)} \AxiomC{\(s : A\)} \AxiomC{\(t : A\)} \TrinaryInfC{\(s =_A t : \Prop\)} \end{prooftree}
  • Introduction: \begin{prooftree} \AxiomC{\(t : A\)} \UnaryInfC{\(\operatorname{refl}_{A, t}: t =_A t\) } \end{prooftree}
  • Eliminator:

    \begin{align*} J_A : &\prod C : \left( \prod (x : A) .\, \prod (y : A) .\, ((x =_A y) \rightarrow \Type) \right) .\\ &\left( \prod (x : A).\, C \, x\, x \, (\operatorname{refl}_{A, x}) \right) \rightarrow \\ &\prod (x : A) .\, \prod (y : A) .\, \prod (p : (x =_A y)) .\, C\,x\,y\,p \end{align*}
  • Computation: \[ J_A \, C\, f\,t\,t\, \operatorname{refl}_{A, t} \rightarrow f\, t \]

Martin-Löf's version has equality types, where \(s =_A t : \Type\) instead of equality merely being a proposition. This leads to Homotopy Type Theory.

Judgemental equality interacts with typing through a substitution rule:

\begin{prooftree} \AxiomC{\(u : B[s/x]\)} \AxiomC{\(s \equiv t : A\)} \RightLabel{sub} \BinaryInfC{\(u : B[t/x]\)} \end{prooftree}

Then we have a rule

\begin{prooftree} \AxiomC{\(s : A\)} \AxiomC{\(t : A\)} \AxiomC{\(s \rightarrow^{ * } u\)} \AxiomC{\(t \rightarrow^{ * } u\)} \QuaternaryInfC{\(s\equiv t : A\)} \end{prooftree}

Lecture 5: SAT Solving

We will use the following connectives as primitive.

\[ \phi ::= x \mid \neg \phi \mid \bot \mid \phi \lor \phi \mid \top \mid \phi \land \phi \] Other connectives can be defined.

We view a formula \(\phi\) with (at most) \(k\) propositional variables \(x_1,\dots, x_k\) as a function \[ \llbracket \phi \rrbracket : 2^k \rightarrow 2 \] encoding its truth table. The formula is enclosed in so-called semantic brackets.

  • A formula \(\phi\) in variables \(x_1,\dots, x_k\) is satisfiable if the true fiber is nonempty, i.e. if there exists \(\vec{b} \in 2^k\) such that

\[ \llbracket \phi \rrbracket(\vec{b}) = 1. \]

  • A formula is a tautology (valid) if the false fiber is empty, i.e. it is true for all assignments of truth values.
  • Formulae \(\phi\) and \(\psi\) are equivalent if \(\forall \vec{b} \in 2^k .\, \llbracket \phi \rrbracket(\vec{b}) = \llbracket \psi \rrbracket (\vec{b})\).

If \(\vec{b}\) satisfies \(\phi\) it is called a satisfying assignment. More generally, we might refer to it as a certificate.

The connection with the proof system is given by a pair of theorems.

Soundness theorem

If we can prove \(\phi\) from no assumptions in natural deduction, then \(\phi\) is valid.

Completeness theorem

If \(\phi\) is valid, then we can prove \(\phi\) in natural deduction.

Note that we need LEM for this.

\(\phi\) is valid \(\iff \neg \phi\) is not satisfiable.

The definitions of validity and satisfiability give rise to truth table algorithms, which simply check the property in question for all \(\vec{b} \in 2^k\). This takes exponential time, however.
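
For concreteness, a direct truth-table check can be sketched in a few lines of Python; here a formula is assumed to be given as a Boolean function on tuples of truth values, which is our own convention.

```python
from itertools import product

# Enumerate all 2^k assignments and test the property in question.
def satisfiable(phi, k):
    return any(phi(b) for b in product((False, True), repeat=k))

def tautology(phi, k):
    return all(phi(b) for b in product((False, True), repeat=k))

# x1 and (x2 or not x1): satisfiable, but not a tautology.
phi = lambda b: b[0] and (b[1] or not b[0])
assert satisfiable(phi, 2) and not tautology(phi, 2)
```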

The soundness theorem suggests another avenue of attack: searching for a proof of the formula in natural deduction. This is what proof systems provide. Since \(\phi\) is valid iff \(\neg\phi\) is unsatisfiable, correctly bounding the size of proofs also allows us to exhaust the proof space and establish unsatisfiability.

It turns out many search problems can be efficiently encoded as satisfiability problems.

A formula \(\phi\) is in negation normal form (NNF) if negations appear only directly in front of propositional variables. This is given by the grammar \[ \phi ::= x \mid \neg x \mid \bot \mid \phi \lor \phi \mid \top \mid \phi \land \phi \]

Using De Morgan's laws we do rewrites of the form

\begin{align*} \neg (\phi \lor \psi) &\longrightarrow (\neg \phi) \land (\neg \psi) \\ \neg (\phi \land \psi) &\longrightarrow (\neg \phi) \lor (\neg \psi) \\ \neg\neg \phi &\longrightarrow \phi \\ \neg \bot &\longrightarrow \top \\ \neg \top &\longrightarrow \bot \end{align*}

we arrive at the expected form. As stated, this is a nondeterministic algorithm.

For any formula \(\phi\), the rewrite algorithm terminates with formula \(\phi^{\text{nnf}}\) that is equivalent to \(\phi\).

The time complexity is \(O(\abs{\phi})\). Also \(\abs{\phi^{\text{nnf}}} = O(\abs{\phi})\).
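
A deterministic version of the rewrite is easy to sketch; below, formulas are encoded as nested tuples ("var", "not", "and", "or", plus the constants "bot" and "top"), an encoding of our own choosing.

```python
# Push negations inward with De Morgan, eliminating double negations.
def nnf(phi):
    if isinstance(phi, tuple) and phi[0] == "not":
        sub = phi[1]
        if sub == "bot": return "top"                 # neg bot -> top
        if sub == "top": return "bot"                 # neg top -> bot
        if isinstance(sub, tuple):
            if sub[0] == "not":                       # double negation
                return nnf(sub[1])
            if sub[0] == "and":                       # De Morgan
                return ("or", nnf(("not", sub[1])), nnf(("not", sub[2])))
            if sub[0] == "or":
                return ("and", nnf(("not", sub[1])), nnf(("not", sub[2])))
        return phi                                    # negated variable: a literal
    if isinstance(phi, tuple) and phi[0] in ("and", "or"):
        return (phi[0], nnf(phi[1]), nnf(phi[2]))
    return phi

assert nnf(("not", ("or", ("var", "x"), "bot"))) == \
    ("and", ("not", ("var", "x")), "top")
```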

  • A literal is a propositional variable \(x\) or the negation of one \(\neg x\).
  • A clause is a finite disjunction of literals \(L_1 \lor \cdots \lor L_m \) or \(\bot\).
  • A coclause is a finite conjunction of literals \(L_1 \land \cdots \land L_m \) or \(\top\).

A formula is in conjunctive normal form (CNF) if it is a finite conjunction of clauses: \[ D_1 \land \cdots \land D_n \text{ (or \(\top\))} \] where each \(D_i\) is a clause. A formula is in \(n\)-CNF if each clause contains at most \(n\) literals.

Dually, it is in disjunctive normal form if it is a finite disjunction of coclauses: \[ C_1 \lor \cdots \lor C_n. \]

We have an algorithm (see notes) that converts any formula \(\phi\) into an equivalent \(\phi^{\text{cnf}}\) in conjunctive normal form. A different algorithm works for DNF.

The SAT problem is a computational problem such that:

  • Input: A propositional formula \(\phi\)
  • Output: yes if \(\phi\) is satisfiable, no if \(\phi\) is not satisfiable.

In practice, we often care about the slightly more powerful "useful" SAT problem:

  • Input: A propositional formula \(\phi\)
  • Output: yes and a satisfying assignment if \(\phi\) is satisfiable, no if \(\phi\) is not satisfiable.

Famously, SAT is NP-complete; no worst-case efficient (polynomial-time) algorithm is known. Nevertheless, there are practical SAT solvers.

We can look at SAT for formulae in normal form. First, DNF.

  • Input: A formula in DNF.
  • Output: yes if satisfiable, no if not.

This is very easy to solve: we only need to check that a single disjunct is satisfiable, and each disjunct is a coclause, which is satisfiable iff it does not contain both a literal and its negation. Checking this takes at most quadratic time.

However, the conversion into DNF (likewise for CNF) involves an exponential blowup in the size of the formula. The problem is not interesting.

CNF is the more interesting normal form.

  • Input: A formula \(\phi\) in CNF.
  • Output: yes + a satisfying assignment if \(\phi\) is satisfiable, no otherwise.

This is what SAT solvers implement. Even though conversion into an equivalent CNF formula is hard, we can give a transformation into a CNF formula which, while not equivalent, gives us a one-to-one mapping of satisfying assignments.

The Tseytin transformation

Let \(\phi(x_1, \dots, x_k)\) be a propositional formula. The Tseytin transformation maps it to a formula \[ \phi^{T}(x_1, \dots, x_K), \] where \(K \geq k\) such that

  1. \(\phi^{T}\) is in 3-CNF,
  2. \((b_1, \dots, b_K) \mapsto (b_1, \dots, b_k) : 2^K \rightarrow 2^k\) is a bijection between satisfying assignments for \(\phi^T\) and \(\phi\).

\(\phi^T\) is computable in \(O(\abs{\phi})\) time.

We will understand the algorithm and its validity through an example. NB this is an IŠRM lecture so this is justified.

Start with \[ \phi \equiv (x_1 \land x_2) \lor \neg (x_3 \lor x_1) \] We write the formula as its abstract syntax tree.

\begin{forest} [{$\lor$} [{$\land$} [$x_1$] [$x_2$] ] [{$\neg$} [{$\lor$} [$x_3$] [$x_1$] ] ] ] \end{forest}

We introduce new variables to name the internal nodes of the tree. \(x_4\) is the root of the tree, \(x_5\) corresponds to the first \(\land\), \(x_6\) to the negation, \(x_7\) to the \(\lor\). Then we introduce the new formulas

\begin{align*} x_4 &\leftrightarrow x_5 \lor x_6 & & (\neg x_4 \lor x_5 \lor x_6) \land (\neg x_5 \lor x_4) \land (\neg x_6 \lor x_4) \\ x_5 & \leftrightarrow x_1 \land x_2 & \land\, & (\neg x_5 \lor x_1) \land (\neg x_5 \lor x_2) \land (\neg x_1 \lor \neg x_2 \lor x_5) \\ x_6 & \leftrightarrow \neg x_7 & \land\, & (\neg x_6 \lor \neg x_7) \land (x_7 \lor x_6) \\ x_7 & \leftrightarrow x_3 \lor x_1 & \land\, & (\neg x_7 \lor x_3 \lor x_1) \land (\neg x_3 \lor x_7) \land (\neg x_1 \lor x_7) \\ & & \land\, & x_4 \end{align*}

where the whole formula on the right is \(\phi^T\).
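
The same construction is easy to carry out mechanically. The sketch below works on abstract syntax trees in a tuple encoding of our own and emits clauses over integer variables in the DIMACS style (negation as arithmetic negation); the traversal order reproduces the numbering \(x_4, \dots, x_7\) from the example.

```python
# Tseytin: name each internal node with a fresh variable and emit the
# 3-CNF clauses for x <-> (subformula), plus the unit clause for the root.
def tseytin(phi, next_var):
    clauses = []
    def walk(t):
        nonlocal next_var
        if t[0] == "var":
            return t[1]
        x = next_var; next_var += 1
        if t[0] == "not":
            a = walk(t[1])
            clauses += [[-x, -a], [a, x]]               # x <-> not a
        elif t[0] == "and":
            a, b = walk(t[1]), walk(t[2])
            clauses += [[-x, a], [-x, b], [-a, -b, x]]  # x <-> a and b
        else:  # "or"
            a, b = walk(t[1]), walk(t[2])
            clauses += [[-x, a, b], [-a, x], [-b, x]]   # x <-> a or b
        return x
    root = walk(phi)
    return clauses + [[root]]

# phi = (x1 and x2) or not (x3 or x1); fresh variables start at 4.
phi = ("or", ("and", ("var", 1), ("var", 2)),
             ("not", ("or", ("var", 3), ("var", 1))))
print(tseytin(phi, 4))
```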

In practice we often need not even convert the formula, as many problems can be stated in CNF directly.

Let \((V, E)\) be a graph, where \(V\) is a finite set of vertices and \(E \subseteq V \times V\) is symmetric and irreflexive, i.e. without self-loops.

Then \((V, E)\) is 3-colorable if there exists a coloring function \(c : V \rightarrow \left\{ R, G, B \right\}\) such that for any edge \((v, v')\), we have \(c(v) \neq c(v')\).

We can express this as a SAT problem as follows. Construct a formula \(\phi_{G}\) with propositional variables \[ \left\{ r_v, g_v, b_v \mid v \in V\right\} \] indicating the color of each vertex \(v\), giving us \(3 \abs{V}\) propositional variables. Then \(\phi_{G}\) is constructed as the CNF formed as a conjunction of the following clauses.

  1. For every \(v \in V\), we include the clause \(r_v \lor g_v \lor b_v\) (it has at least one color) and the clauses \(\neg r_v \lor \neg g_v\), \(\neg g_v \lor \neg b_v\), \(\neg r_v \lor \neg b_v\) (it has at most one color).
  2. For every \(\left\{ v, v' \right\}\) in \(E\) we include the clauses \(\neg r_v \lor \neg r_{v'}\), \(\neg g_v \lor \neg g_{v'}\), \(\neg b_v \lor \neg b_{v'}\), i.e. the endpoints of an edge do not share a color.

Satisfying assignments for \(\phi_{G}\) are in obvious bijective correspondence with 3-colorings of the graph \(G\).
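
Generating this CNF is mechanical; here is a sketch where a literal is a triple (color, vertex, polarity), a representation of our own choosing.

```python
# Emit the clauses described above: at least one color and at most one
# color per vertex, and distinct colors across each edge.
def coloring_cnf(vertices, edges):
    clauses = []
    for v in vertices:
        clauses.append([("r", v, True), ("g", v, True), ("b", v, True)])
        for c1, c2 in (("r", "g"), ("g", "b"), ("r", "b")):
            clauses.append([(c1, v, False), (c2, v, False)])
    for v, w in edges:
        for c in "rgb":
            clauses.append([(c, v, False), (c, w, False)])
    return clauses

# A triangle: 4 clauses per vertex plus 3 per edge, 21 clauses in total.
assert len(coloring_cnf([1, 2, 3], [(1, 2), (2, 3), (1, 3)])) == 21
```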

We will specify a recursive algorithm \(\operatorname{DPLL}(\phi, \rho)\), where \(\rho\) is an assignment of truth values to the variables not occurring in \(\phi\). We apply the following rules, repeating until none is applicable:

  • (Empty conjunction elimination): If \(\phi\) is the empty conjunction \([]\), return \(\rho\) as the satisfying assignment.
  • (Empty clause elimination): If \(\phi\) contains an empty clause, return no
  • (Unit clause elimination): If \(\phi\) contains a unit clause (or a pure literal) \(L\), extend the assignment to \(\rho' = \rho, L\) and continue with DPLL on \(\rho'\) and \(\phi' = \operatorname{simplify}(\phi, L)\).

If none of the above apply, we choose some literal \(L\) appearing in \(\phi\) and continue with DPLL on \(\rho' = \rho, L\) and \(\phi' = \operatorname{simplify}(\phi, L)\).

If DPLL returns \(\rho''\) then we return \(\rho''\). Otherwise, we perform DPLL on \(\rho, \neg L\) and \(\phi' = \operatorname{simplify}(\phi, \neg L)\).
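
The whole procedure fits in a few lines. The sketch below represents CNF as a list of frozensets of nonzero integers (DIMACS-style literals) and omits the pure-literal rule, which the next paragraph argues is often not worth checking.

```python
# simplify(phi, L): drop clauses containing L, delete not-L elsewhere.
def simplify(cnf, lit):
    return [c - {-lit} for c in cnf if lit not in c]

def dpll(cnf, rho=()):
    if not cnf:                      # empty conjunction: rho satisfies phi
        return rho
    if any(not c for c in cnf):      # empty clause: dead end
        return None
    unit = next((next(iter(c)) for c in cnf if len(c) == 1), None)
    if unit is not None:             # unit clause elimination
        return dpll(simplify(cnf, unit), rho + (unit,))
    lit = next(iter(cnf[0]))         # branch, backtracking on failure
    return dpll(simplify(cnf, lit), rho + (lit,)) \
        or dpll(simplify(cnf, -lit), rho + (-lit,))

# The worked example below: (p v q)(~p v r)(~q v ~r)(~p v q v ~r), p,q,r = 1,2,3.
cnf = [frozenset(c) for c in ([1, 2], [-1, 3], [-2, -3], [-1, 2, -3])]
print(dpll(cnf))  # a satisfying assignment such as (-1, 2, -3)
```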

It turns out explicitly checking for pure literals tends to take more time than ignoring them. A literal is pure if its variable always appears either only negated or only nonnegated in the formula.

\[ \left[ (p \lor q) \land (\neg p \lor r) \land (\neg q \lor \neg r) \land (\neg p \lor q \lor \neg r) \right] \] At the start, \(\rho = \emptyset\).

  1. Choose \(p\) to be true. The simplification turns the formula into \[ \left[ (r) \land (\neg q \lor \neg r) \land (q \lor \neg r) \right] \] with \(\rho' = (p)\).
  2. \((r)\) is a unit clause, so we eliminate it. The simplification becomes \[ (\neg q) \land (q) \] with \(\rho' = (p, r)\)
  3. \((\neg q)\) is a unit clause, so we eliminate it. We get \[ [()] \] i.e. the singleton conjunction of the empty disjunction, i.e. falsehood.
  4. We backtrack. Choosing \(p\) didn't work, so we try \(\neg p\). Our formula becomes \[ \left[ (q) \land (\neg q \lor \neg r) \right] \] with \(\rho' = (\neg p)\).
  5. We have unit clause \(q\). \[ \left[ (\neg r) \right] \] with \(\rho' = (\neg p, q)\).
  6. We have unit clause \(\neg r\), giving us \[ \left[ \right] \] with \(\rho' = (\neg p, q, \neg r)\).

Lecture 6: Algorithmic problems in predicate logic

In predicate logic we have a signature: a set \(\mathcal{P}\) of relation symbols (predicates) and a set \(\mathcal{F}\) of function symbols, including constant symbols.

A predicate \(p\) has an arity \(\geq 0\). Function symbols likewise have arity \(\geq 0\).

\begin{align*} \phi &::= p(\underbrace{t, \dots, t}_{\text{arity } p}) \mid t = t \mid \neg \phi \mid \phi \land \phi \mid \phi \lor \phi \mid \forall x .\, \phi \mid \exists x .\, \phi \\ t &::= x \mid f(\underbrace{t, \dots, t}_{\text{arity } f}) \end{align*}

An example signature is \((\leq, 0, +)\), where \(\leq\) is a relation symbol and \(0\), \(+\) are function symbols, with respective arities 2, 0, 2. An example formula is \[ \forall x.\, 0 \leq x. \] We call such a formula with no free variables a sentence. We think of sentences as being true or false in a specific interpretation. The above formula is true in \(\N\) as normally understood, but we could just as easily have defined a perverse model with \(\leq := \geq\) where the formula is false. Likewise the formula is false in \(\Z\).

A structure is \(\mathcal{M} = (M, (P^{\mathcal{M}})_{P \in \mathcal{P}}, (f^{\mathcal{M}})_{f \in \mathcal{F}})\) where

  • \(M\) is a set (the domain or universe of quantification)
  • For every relation symbol \(P\) of arity \(n\), we have its interpretation \[ P^{\mathcal{M}} : M^n \rightarrow 2 \] where \(n\) is the arity of \(P\).
  • Every function symbol \(f\) of arity \(n\) is interpreted as a function on the universe \[ f^{\mathcal{M}} : M^n \rightarrow M. \]

An example of a structure on \((\leq, 0, +)\) is \(\mathcal{L}\) where

\begin{align*} L &:= 2^{ * } \\ 0^{\mathcal{L}} &:= \epsilon \text{ (the empty sequence)} \\ +^{\mathcal{L}} &:= \omega_1, \omega_2 \mapsto \omega_1 \omega_2 \text{ (concatenation)} \\ \leq^{\mathcal{L}} &:= \omega_1, \omega_2 \mapsto \begin{cases} \mathtt{t} & \text{ if } \omega_1 \text{ is a prefix of } \omega_2 \\ \mathtt{f} & \text{otherwise} \end{cases} \end{align*}

The sentence \[ \forall x \forall y .\, x + y = y + x \] holds in the natural structure \(\mathcal{N}\) on \(\N\), but not in \(\mathcal{L}\).

We write

\begin{align*} \mathcal{N} &\vDash \forall x \forall y .\, x + y = y + x \\ \mathcal{L} &\not\vDash \forall x \forall y .\, x + y = y + x \end{align*}

We write \(\mathcal{M} \vDash \phi\), read as \(\mathcal{M}\) satisfies \(\phi\).

We write \(\mathcal{M} \vDash_{\rho} \phi\) where \(\rho\) is an environment mapping variables to elements of \(M\). In particular

\begin{align*} \mathcal{M} \vDash_{\rho} P(t_1, \dots, t_n) &\iff P^{\mathcal{M}}(t_1^{\rho}, \dots, t_n^{\rho}) = 1 \\ \mathcal{M} \vDash_{\rho} t_1 = t_2 &\iff t_1^{\rho} = t_2^{\rho} \\ \mathcal{M} \vDash_{\rho} \phi_1 \land \phi_2 &\iff \mathcal{M} \vDash_{\rho} \phi_1 \text{ and } \mathcal{M} \vDash_{\rho} \phi_2 \\ \mathcal{M} \vDash_{\rho} \forall x .\, \phi &\iff \text{for all } a \in M, \, \mathcal{M} \vDash_{\rho[x \mapsto a]}\phi \end{align*}

where terms are evaluated by \(x^{\rho} = \rho(x)\) and \((f(t_1, \dots, t_n))^{\rho} = f^{\mathcal{M}}(t_1^{\rho}, \dots, t_n^{\rho})\).

For a sentence \(\phi\), whether or not \(\mathcal{M} \vDash_{\rho} \phi\) holds does not depend on the environment \(\rho\), so we write simply \(\mathcal{M} \vDash \phi\).

We cannot say \(\mathcal{L} \vDash x \leq y\), but we can say

\begin{align*} \mathcal{L} &\vDash_{\overset{x = 01}{y = 011}} x \leq y\\ \mathcal{L} &\not\vDash_{\overset{x = 000}{y = 0101}} x \leq y \end{align*}

A sentence \(\phi\) is valid if, for all structures \(\mathcal{M}\), we have \(\mathcal{M} \vDash \phi\).

Soundness

If \(\phi\) has a proof in natural deduction, then \(\phi\) is valid.

Completeness

If \(\phi\) is valid, then it has a natural deduction proof.

A formula \(\phi\) is satisfiable if there exists a structure such that \(\mathcal{M} \vDash \phi\).

  • Input: A sentence \(\phi\)
  • Output: yes if \(\phi\) is valid, no otherwise.
(Church/Turing 1936)

The validity problem is undecidable.

A problem is decidable if there exists an algorithm which decides it in finite time for every input.

It follows that the satisfiability problem is also undecidable.

  • Input: A sentence \(\phi\) and a purported ND proof of \(\phi\).
  • Output: yes if the proof is correct, no otherwise.

This is a practical and efficiently computable problem used in computer proof assistants.

Consider the following algorithm for validity.

  • Input: A sentence \(\phi\)
  • Algorithm: Systematically search the space of all proofs until we find a proof of \(\phi\). If we find such a proof, return yes.

This algorithm shows that validity is semidecidable. The semidecidability of validity is the basis of automatic theorem proving.

A problem is semidecidable if there exists an algorithm that returns yes in finite time if the answer is yes, and runs forever otherwise. In particular, it need not tell you no.

Under this definition we could say satisfiability is co-semidecidable. We can return no in finite time.

Define a signature \((\operatorname{connected}/2, \operatorname{server}/1, \operatorname{client}/1)\).

\begin{align*} \forall x &.\, \neg \operatorname{connected}(x, x) \\ \forall x, y &.\, \operatorname{connected}(x, y) \rightarrow \operatorname{connected}(y, x) \\ \forall x &.\, \operatorname{server}(x) \lor \operatorname{client}(x) \\ \forall x &.\, \neg \operatorname{server}(x) \lor \neg\operatorname{client}(x) \\ \forall x &.\, \operatorname{client}(x) \rightarrow \exists y .\, \operatorname{server}(y) \land \operatorname{connected}(x, y) \\ \forall x &.\, \operatorname{server}(x) \rightarrow \exists y .\, \operatorname{client}(y) \land \operatorname{connected}(x, y) \\ \forall x, y &.\, \operatorname{connected}(x, y) \rightarrow \neg\operatorname{server}(x) \lor \neg \operatorname{server}(y) \end{align*}

We can ask questions like: Is it consistent to have a network with (at least/exactly) 10 devices such that exactly 2 are servers? We express this with a formula like

\begin{align*} \exists x_1 \dots x_{10} &.\, \bigwedge\limits_{i = 1}^9 \bigwedge\limits_{j = i+1}^{10} x_i \neq x_j \\ \land & \exists x \exists y .\, x \neq y \land (\forall z .\, \operatorname{server}(z) \rightarrow z = x \lor z = y) \\ \land& \operatorname{server}(x) \land \operatorname{server}(y) \end{align*}
Finite satisfiability
  • Question: A sentence \(\phi\)
  • Answer: yes if there exists a finite structure \(\mathcal{M}\) such that \(\mathcal{M} \vDash \phi\), no otherwise.

Finite satisfiability is semidecidable. In fact we have an algorithm for useful finite satisfiability, which also outputs such a model \(\mathcal{M}\).

The algorithm works as follows. For every size \(n \in \N\) we enumerate all non-isomorphic structures of size \(n\), and for each structure \(\mathcal{M}\) we solve the model checking problem \(\mathcal{M} \vDash \phi\). If and when we find such a structure, we return \(\mathcal{M}\).

Model checking problem
  • Input: A pair \(\mathcal{M}, \phi\) where \(\mathcal{M}\) is a finite structure.
  • Output: yes if \(\mathcal{M} \vDash \phi\), no otherwise.

We do this by directly evaluating the definition of \(\mathcal{M} \vDash \phi\): since \(\mathcal{M}\) is finite, quantifiers range over the finite domain \(M\), so the check is algorithmic.
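
Concretely, the evaluator can be sketched as a direct transcription of the satisfaction clauses; formulas are nested tuples and predicate interpretations are Boolean functions, both conventions of our own.

```python
# M: the finite domain; preds: interpretations of predicate symbols;
# rho: the environment, a dict from variable names to elements of M.
def sat(M, preds, phi, rho):
    op = phi[0]
    if op == "pred":
        return preds[phi[1]](*(rho[x] for x in phi[2]))
    if op == "not":
        return not sat(M, preds, phi[1], rho)
    if op == "and":
        return sat(M, preds, phi[1], rho) and sat(M, preds, phi[2], rho)
    if op == "forall":  # quantifiers range over the finite domain
        return all(sat(M, preds, phi[2], {**rho, phi[1]: a}) for a in M)
    if op == "exists":
        return any(sat(M, preds, phi[2], {**rho, phi[1]: a}) for a in M)

# forall x . exists y . x <= y over the domain {0, 1, 2}.
phi = ("forall", "x", ("exists", "y", ("pred", "leq", ("x", "y"))))
assert sat({0, 1, 2}, {"leq": lambda a, b: a <= b}, phi, {})
```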

Bounded satisfiability
  • Question: A sentence \(\phi\) with a bound \(N > 0\)
  • Answer: yes if there exists a structure \(\mathcal{M}\) of size at most \(N\) such that \(\mathcal{M} \vDash \phi\), no otherwise.

This problem is decidable, though not efficiently so.

For the above simple network example, write \(T\) for the conjunction of the network axioms and \(Q\) for the query. Then we can run a bounded model check on \(T \land Q\) with bound \(N = 10\).

We can solve problems like this using tools like Alloy. They work using the following principle.

We will solve bounded satisfiability by reducing to SAT. Assume \(\phi\) involves no function symbols other than constants; equalities are allowed. The input sentence is \(\phi\) along with a bound \(N\).

We add \(N\) constants \(C_1, \dots, C_N\) to the signature. For each predicate symbol \(P\) of arity \(k\) we add propositional variables \(P_{i_1, \dots, i_k}\) for each tuple \(i_1, \dots, i_k \in [N]\).

We define a mapping \(\psi \mapsto \psi^{ * }\) from sentences to propositional formulas using the propositional variables. Then

\begin{align*} (\exists x .\, \phi)^{ * } &:= (\phi[C_1/x])^{ * } \lor (\phi[C_2/x])^{ * } \lor \cdots \lor (\phi[C_N/x])^{ * } \\ (\psi_1 \land \psi_2)^{ * } &:= \psi_1^{ * } \land \psi_2^{ * } \\ (P(C_{i_1}, \dots, C_{i_k}))^{ * } &:= P_{i_1, \dots, i_k} \end{align*}

Satisfying assignments for \(\psi^{ * }\) are in one-to-one correspondence with structures of size \(\leq N\) that satisfy \(\psi\).
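
The grounding step itself is simple to sketch: a quantifier over the bounded universe \([N]\) expands into a finite disjunction or conjunction. Below, a formula body is represented as a Python function of the constant's index, a convention of our own.

```python
# (exists x . phi)* expands to phi[C_1/x]* or ... or phi[C_N/x]*;
# universal quantifiers expand dually into conjunctions.
def ground_exists(body, N):
    return ("or",) + tuple(body(i) for i in range(1, N + 1))

def ground_forall(body, N):
    return ("and",) + tuple(body(i) for i in range(1, N + 1))

# (exists x . P(x))* with N = 3 becomes P_1 or P_2 or P_3.
assert ground_exists(lambda i: ("P", i), 3) == \
    ("or", ("P", 1), ("P", 2), ("P", 3))
```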

Lecture 7: SMT Solving

SMT stands for satisfiability modulo a theory.

  • We've looked at SAT solving which involves propositional logic. This is not very expressive, but it is practical.
  • First order logic (FOL) has good expressivity but the solving is impractical.
  • SMT is the sweet spot which is expressive enough for usage but still practical enough for solving.

A first order signature is given by function symbols and predicates. Typically we're interested in answering the question: is \(\phi\) satisfiable modulo a theory \(T\)?


For our purposes a theory \(T\) is determined by a set of structures \(\operatorname{Mod}_{T}\). We're interested in the question \[ \exists \mathcal{M} \in \operatorname{Mod}_{T} .\, \exists \text{environment } \rho .\, \mathcal{M} \vDash_{\rho} \phi \] NB \(\phi\) is quantifier-free.

SMT is useful for program verification. We construct a formula \(\phi(x_1, \dots, x_n)\) saying \(x_1, \dots, x_n\) are inputs on which the program behaves incorrectly. A satisfying assignment to \(\phi\) finds a buggy input. If unsatisfiable, there are no bugs.

  • EUF: Equality between uninterpreted functions
  • Signatures: just function symbols \[ x = y \land x = g(y) \land u = f(x, g(x)) \land v = f(y, x) \land u \neq v \]
  • We flatten the formula: no nesting of function symbols, every term named by a variable \[ x = y \land x = g(y) \land t = g(x) \land u = f(x, t) \land v = f(y, x) \land u \neq v \]
  • Perform congruence closure
  • Check consistency of the resulting congruence closure with the negated equalities

We have the congruence closure algorithm, which decides satisfiability for coclauses (conjunctions of literals). This is practical. So we can decide satisfiability of arbitrary quantifier-free formulas by reduction to DNF, but this is impractical. The main task of SMT solving is bootstrapping the coclause procedure to arbitrary quantifier-free formulas.
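
A minimal congruence closure can be sketched with union-find; the example constraints and naming below are ours, with flattened definitions v = f(args) kept in a table.

```python
# Union-find over term names.
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x
def union(x, y):
    parent[find(x)] = find(y)

# Constraints: x = y, u = f(x), v = f(y); defs hold the flattened terms.
union("x", "y")
defs = {"u": ("f", "x"), "v": ("f", "y")}

# Congruence: merge names whose definitions agree up to current classes.
changed = True
while changed:
    changed = False
    for a in defs:
        for b in defs:
            if find(a) != find(b) and \
               tuple(map(find, defs[a])) == tuple(map(find, defs[b])):
                union(a, b); changed = True

# x = y forces u = v, so adding the disequality u != v is unsatisfiable.
assert find("u") == find("v")
```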

We have two functions \(\operatorname{write}(a, i, v)\) and \(\operatorname{read}(a, i)\). The theory is axiomatised by

\begin{align*} i = j &\rightarrow \operatorname{read}(a, i) = \operatorname{read}(a, j) \\ i = j &\rightarrow \operatorname{read}(\operatorname{write}(a, i, v), j) = v \\ i \neq j &\rightarrow \operatorname{read}(\operatorname{write}(a, i, v), j) = \operatorname{read}(a, j) \end{align*}

Let's see how this reduces to EUF.

\begin{align*} i \neq j &\land u = \operatorname{read}(\operatorname{write}(a, i, v), j) \land v = \operatorname{read}(a, j) \land u \neq v \intertext{we also write this as} i \neq j &\land u = (i = j ~ ? ~ v : \operatorname{read}(a, j)) \land v = \operatorname{read}(a, j) \land u \neq v \\ i \neq j &\land u = \operatorname{read}(a, j) \land v = \operatorname{read}(a, j) \land u \neq v \end{align*}

We can now perform a satisfiability test in EUF. NB this is unsatisfiable.

We have theory that supports efficient satisfiability for coclauses. We now need to bootstrap that to arbitrary quantifier-free formulas. We have 2 approaches:

  1. Eager: Translate to a propositional SAT problem. (In practice this is doable, but we still want to present most of these problems as first-order formulas first)
  2. Lazy: Modify SAT solving algorithms based on DPLL. Use the coclause decision procedure.
  • Start with a flattened formula \[ x = f(y) \land y = f(x) \land x \neq y \]
  • Replace function terms with variables \[ x = f^y \land y = f^x \land x \neq y \land (x = y \rightarrow f^x = f^y) \] We obtain a first-order formula where all the atomic formulas are equalities between variables. Let \(k\) be the smallest number such that \(2^k \geq \) the number of variables.
  • We use bit blasting, described below.

    \begin{align*} (x_1 \leftrightarrow f_1^y) &\land (x_2 \leftrightarrow f_2^y) \land (y_1 \leftrightarrow f_1^x) \land (y_2 \leftrightarrow f_2^x) \land \neg (x_1 \leftrightarrow y_1 \land x_2 \leftrightarrow y_2) \land \\ &\left[ (x_1 \leftrightarrow y_1 \land x_2 \leftrightarrow y_2) \rightarrow (f_1^x \leftrightarrow f_1^y \land f_2^x \leftrightarrow f_2^y) \right] \end{align*}
  • We pass this to a SAT solver (using the Tseytin transformation)

Replace every variable \(x\) with \(k\) propositional variables \(x_1, \dots, x_k\). Replace every equality \(x = y\) with \[ x_1 \leftrightarrow y_1 \land x_2 \leftrightarrow y_2 \land \dots \land x_k \leftrightarrow y_k \] and \(x \neq y\) with the negation of the above. Submit to a SAT solver.
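
A tiny sketch of the per-equality replacement, rendering the biconditionals as strings just to show the shape (the rendering is our own):

```python
# x = y over k bits becomes a conjunction of per-bit biconditionals.
def blast_eq(x, y, k):
    return " & ".join(f"({x}{i} <-> {y}{i})" for i in range(1, k + 1))

print(blast_eq("x", "y", 2))               # (x1 <-> y1) & (x2 <-> y2)
print("~(" + blast_eq("x", "y", 2) + ")")  # x != y is the negation
```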

Linear arithmetic: we have terms built from \(0\), \(1\), \(+\) and \(a \cdot x\), where \(a\) is a numeric constant. We get formulas like \[ 3\cdot x + 4 \cdot y \leq 3 \cdot z. \] SAT for coclauses here is the feasibility problem for linear programming.

Lecture 8: Model checking 1 - Linear-time temporal logic (LTL)

<2026-04-07 Tue> The material this week is well covered by the textbook (LHR, ch. 3).

A mutual exclusion protocol

We imagine two systems which can both access a shared device, but only one at a time may do so. Write \(n\) for a noncritical state of a system and \(t\) for trying. Once one of the systems is in a trying state, it is attempting to access the shared device. If the device gives it access, we say the system becomes critical \(c\). If both of the systems are trying, then only one of them can gain access, the other remaining trying. Etc.

This is an example of a transition system.

NB I will try to put in the proper diagram.

We label states with the properties that hold there. First, \(G(\neg (c_1 \land c_2))\) holds, since we always have \(\neg (c_1 \land c_2)\). This is an example of a safety property, namely that our system always stays in a safe state. We also have liveness properties, i.e. that something desirable will eventually happen. For instance, \(G((t_1 \rightarrow F c_1) \land (t_2 \rightarrow F c_2))\), i.e. whenever one of the components tries to access the device, it will eventually gain access. Note here that the model does not itself guarantee this: with no further assumptions, an eager system could hog the resource. This property therefore does not hold. We have a counterrun \[ (n_1, n_2) \rightarrow (t_1, n_2) \rightarrow (t_1, t_2) \rightarrow (c_1, t_2) \rightarrow (n_1, t_2) \rightarrow (t_1, t_2) \rightarrow (c_1, t_2) \rightarrow \cdots \] This is an instance of a lasso run.

The idea of model checking is to check algorithmically whether the given assertions hold in the model.

A transition system is \((S, \rightarrow, L, \text{Atoms})\) where \(S\) is a finite set (of states), the relation \(\rightarrow \subseteq S \times S\) is the transition relation, \[ L : S \rightarrow \pot(\text{Atoms}) \] is the labelling function, and the atoms correspond to properties of states we are interested in.

Since we are using transition systems to model LTL, we also need an additional requirement. \[ \forall s \in S .\, \exists s' \in S .\, s \rightarrow s', \] i.e. we can always continue to a new state.

A simple definition includes the formulas and their modalities. \[ \phi ::= \underbrace{p}_{\in \text{Atoms}} \mid \neg \phi \mid \phi \land \phi \mid \phi \lor \phi \mid X\phi \mid F \phi \mid G \phi \mid \phi U \psi. \] where:

  • \(X\phi\) means \(\phi\) holds in the neXt state,
  • \(F\phi\) means that \(\phi\) holds at a point in the future,
  • \(G \phi\) means that \(\phi\) holds globally in the future
  • and \(\phi U \psi\) means \(\phi\) holds until \(\psi\) holds.


The satisfaction relation takes the form \[ \pi \vDash \phi. \] Here \(\pi\) is an infinite run through the transition system. We can write it as \[ \pi = s_1 \rightarrow s_2 \rightarrow s_3 \rightarrow s_4 \rightarrow \cdots \] A run is an infinite sequence of states related as above by a transition relation. Then we have

\begin{align*} \pi \vDash p &\iff p \in L(s_1) \\ \pi \vDash \neg \phi &\iff \pi \not\vDash \phi \\ \pi \vDash \phi \land \psi &\iff \pi \vDash \phi \land \pi \vDash \psi \\ \pi \vDash X\phi &\iff \pi^2 \vDash \phi \\ \pi \vDash G\phi &\iff \forall n \geq 1 .\, \pi^n \vDash \phi \\ \pi \vDash F\phi &\iff \exists n \geq 1 .\, \pi^n \vDash \phi \\ \pi \vDash \phi U \psi &\iff \exists n \geq 1 .\, \pi^n \vDash \psi \land \forall m .\, 1 \leq m < n .\, \pi^m \vDash \phi \end{align*}

where \(\pi^n\) means taking the tail of the run \(\pi\) starting at the \(n\)-th state, i.e. \[ \pi^n := s_n \rightarrow s_{n+1} \rightarrow s_{n+2} \rightarrow \cdots \] or more explicitly \(\pi^n = \lambda m .\, \pi(n + m -1)\).
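
Because a lasso run has only finitely many distinct suffixes, the satisfaction relation is directly computable on lassos. Below is a sketch: `labels` lists the atoms at positions 0..n-1 of the lasso, the successor of the last position wraps back to the loop start P, and the tuple encoding of formulas is our own.

```python
# Evaluate an LTL formula on the suffix of a lasso run starting at i.
def holds(phi, labels, P, i=0):
    n = len(labels)
    nxt = lambda j: j + 1 if j + 1 < n else P
    op = phi[0]
    if op == "atom": return phi[1] in labels[i]
    if op == "not":  return not holds(phi[1], labels, P, i)
    if op == "and":  return holds(phi[1], labels, P, i) and holds(phi[2], labels, P, i)
    if op == "or":   return holds(phi[1], labels, P, i) or holds(phi[2], labels, P, i)
    if op == "X":    return holds(phi[1], labels, P, nxt(i))
    # F, G, U need only inspect n + 1 consecutive positions: by then the
    # walk has visited every distinct suffix of the lasso.
    j = i
    for _ in range(n + 1):
        if op == "F" and holds(phi[1], labels, P, j): return True
        if op == "G" and not holds(phi[1], labels, P, j): return False
        if op == "U":
            if holds(phi[2], labels, P, j): return True
            if not holds(phi[1], labels, P, j): return False
        j = nxt(j)
    return op == "G"   # G survived every position; F and U found nothing

# Run s1 -> s2 -> s2 -> ... with L(s1) = {a}, L(s2) = {b}; loop at P = 1.
assert holds(("U", ("atom", "a"), ("atom", "b")), [{"a"}, {"b"}], 1)
assert holds(("G", ("F", ("atom", "b"))), [{"a"}, {"b"}], 1)
```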

  • Input: A transition system \(M\), starting state \(s_0\) and an LTL formula \(\phi\)
  • Output: yes if for all runs \(\pi\) through \(M\) starting at \(s_0\) we have \(\pi \vDash \phi\), no otherwise.

NB we also write this property as \(M, s_0 \vDash \phi\).

We can equivalently extend the no branch by also returning an infinite run \(\pi\) starting at \(s_0\) for which \(\pi \not\vDash \phi\). It turns out that for any bad run, we can also give a lasso run.

The liveness property for mutual exclusion was given above as \[ \phi_{\text{liveness}} = G((t_1 \rightarrow Fc_1) \land (t_2 \rightarrow Fc_2)) \] which quickly leads to a lasso counterrun as above. As an exercise, one can consider how to modify the model so that when both systems are trying, the allocation flip-flops between which system gains access. Another modification (which we cannot properly model yet) is to introduce a probabilistic choice. Alternatively, we can assume a fairness property. We express this assumption as \[ \phi_{\text{fairness}} = GF(t_1\land t_2) \rightarrow GF c_1 \land GF c_2 \] We now need to show that \(\phi_{\text{fairness}} \rightarrow \phi_{\text{liveness}}\). It is not immediately obvious how to solve this algorithmically. It turns out we can model this using Büchi automata.

A Büchi automaton defines a language of infinite words over a (wlog) binary alphabet. Its language is the set of infinite words it recognizes. Here an infinite word is accepted if there is some run of the automaton labelled by the word which passes through an accepting state an infinite number of times.

Lecture 9: Büchi automata

Model checking problem for LTL

  • input: Transition system \(\mathcal{M}\), starting state \(s\) and an LTL formula \(\phi\)
  • output: yes if \(\mathcal{M}, s \vDash \phi\); no otherwise, together with a lasso run \(\pi\) such that \(\pi \vDash \neg \phi\)

A finite automaton gives a means for specifying a language.

Language theory basics

Over an alphabet \(\Sigma\), a word is a finite sequence of symbols from \(\Sigma\), with \(\Sigma^{*}\) being the set of all words. A language is a set of words \(L \subseteq \Sigma^{*}\).

Language theory for \(\omega\)-words

An \(\omega\)-word is an infinite sequence of symbols from \(\Sigma\). We denote the set of \(\omega\)-words by \(\Sigma^{\omega}\).

An \(\omega\)-language is a set of \(\omega\)-words \(L \subseteq \Sigma^\omega\).

An automaton on \(\omega\)-words specifies an \(\omega\)-language. Büchi automata are particular automata on \(\omega\)-words.

Example

For \(\Sigma = \left\{ 0, 1 \right\}\) we will take \(A_1\) to specify \[ L_{\omega}(A_1) = \left\{ w \in \left\{ 0, 1 \right\}^{\omega} \mid w \text{ contains the substring } 1110 \text{ infinitely often} \right\}. \] For instance, \(\overline{111000}\) is in the language, since it contains \(1110\) in each repetition. On the other hand, \(\overline{1100}\) is not accepted by \(A_1\).

Define \[ L_{\omega}(A_2) = \left\{ w \in \left\{ 0, 1 \right\}^{\omega} \mid w \text{ contains only finitely many 0s} \right\}. \] This is given as a small 2-node automaton (a possible encoding appears in the sketch after the emptiness test below).

Define \(A_3\) as the machine above, but ending in an accepting node with no loop. Then no run can pass through the accepting state infinitely often, so \(L_{\omega}(A_3) = \emptyset\).

Büchi automaton

A nondeterministic Büchi automaton (NBA) over an alphabet \(\Sigma\) is formally a tuple \(\mathcal{A} = (Q, I, \Delta, F)\) where \(Q\) is a set of states of the automaton, \(I \subseteq Q\) a set of initial states, \(F \subseteq Q\) a set of final (accepting) states, and \(\Delta\) is the transition relation where \[ \Delta \subseteq Q \times \Sigma \times Q, \] i.e. each transition is a triple relating a source state and a target state via an element of the alphabet.

Then \[ L_{\omega}(\mathcal{A}) := \left\{ w \in \Sigma^{\omega} \mid \text{ there exists an accepting run of } \mathcal{A} \text{ on } w \right\}. \] We say \(\mathcal{A}\) accepts such a \(w\).

Emptiness testing for an NBA

Let \(\mathcal{A}\) be an NBA. Then \(L_{\omega}(\mathcal{A}) \neq \emptyset\) if and only if there exists a path in \(\mathcal{A}\) of the form \[ q_0 \rightarrow q_1 \rightarrow \cdots \rightarrow q_k, \] where \(q_0\) is initial, \(q_k\) is an accepting state, and \(q_k\) lies on a cycle. Such a path, together with the cycle, is called an accepting lasso. A sketch of this check follows.
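A sketch of the emptiness check in Python. The automaton below is a plausible two-state encoding of \(A_2\) above ("only finitely many 0s"); its concrete states and transitions are an assumption, since the diagram is not reproduced in these notes.

```python
from typing import Set, Tuple

# Assumed encoding of A2: q0 reads the finite noisy part, q1 (accepting)
# must read 1s forever.
delta: Set[Tuple[str, str, str]] = {
    ("q0", "0", "q0"), ("q0", "1", "q0"),  # consume the finite part with 0s
    ("q0", "1", "q1"),                     # guess: from here on, only 1s
    ("q1", "1", "q1"),
}
initial: Set[str] = {"q0"}
accepting: Set[str] = {"q1"}

def reachable(source: Set[str]) -> Set[str]:
    """All states reachable from `source` along the transition relation."""
    seen, stack = set(source), list(source)
    while stack:
        q = stack.pop()
        for (p, _, r) in delta:
            if p == q and r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def nonempty() -> bool:
    """L_omega(A) is nonempty iff some accepting state is reachable from an
    initial state and lies on a cycle: together that is an accepting lasso."""
    for q in accepting & reachable(initial):
        successors = {r for (p, _, r) in delta if p == q}
        if q in reachable(successors):  # q can come back to itself
            return True
    return False

print(nonempty())  # True: for instance 1^omega is accepted via q1
```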

ω-regular language

An \(\omega\)-language \(L \subseteq \Sigma^{\omega}\) is \(\omega\)-regular if there exists NBA \(\mathcal{A}\) such that \(L = L_{\omega}(\mathcal{A})\).

ω-regular languages closed under

\(\omega\)-regular languages are closed under

  • union
  • intersection
  • complement (this statement is also called Büchi's theorem).

Consider an LTL formula \(aUb\), and ask when \(\pi \vDash aUb\) holds for a run \[ \pi = s_1 \rightarrow s_2 \rightarrow s_3 \rightarrow s_4 \rightarrow \cdots \] The answer depends only on which atoms hold in the states along the run, which motivates the following definition.

Trace of a run

The trace of a run \(\pi = s_1 \rightarrow s_2 \rightarrow \cdots\) is the sequence \[ L(s_1), L(s_2), \dots \] This is an \(\omega\)-word over the alphabet \(\pot(\text{Atoms})\), i.e. an element of \((\pot(\text{Atoms}))^{\omega}\). The truth of a formula \(\phi\) on a run \(\pi\) depends only on the trace. We write \[ \operatorname{Traces}(\phi) = \left\{ w \in (\pot(\text{Atoms}))^{\omega} \mid \phi \text{ is true on any run with the trace } w\right\}. \]


For any \(\phi\), \(\operatorname{Traces}(\phi)\) is \(\omega\)-regular.

\noindent There is an algorithm to construct \(\mathcal{A}_{\phi}\) recognising \(\operatorname{Traces}(\phi)\).

Lecture 10: Probabilistic model checking (superficial)

  • There are many probabilistic models; today we will focus on DTMCs, discrete-time Markov chains.
  • There are likewise many probabilistic logics, such as PCTL and PCTL*. We will use LTL.
  • There is a whole technology of associated algorithms.

As a running example, we have a model which can only flip fair coins, and we wish to simulate a fair die. Tossing the coin three times leads to 8 states; from 2 of them we loop back, and the remaining 6 give the die faces.
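A sketch of this construction in Python; the mapping of the six accepted outcomes to die faces is an illustrative choice, not fixed by the lecture.

```python
import random
from collections import Counter

def fair_die() -> int:
    """Simulate a fair six-sided die with fair coin flips: three flips give
    8 equally likely outcomes; 6 map to faces, the other 2 loop back."""
    while True:
        flips = [random.randint(0, 1) for _ in range(3)]
        outcome = flips[0] * 4 + flips[1] * 2 + flips[2]  # a number in 0..7
        if outcome < 6:
            return outcome + 1  # accept: each face has probability 1/6 overall
        # outcomes 6 and 7 are the two loop-back states: flip again

print(Counter(fair_die() for _ in range(60000)))  # roughly uniform over 1..6
```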

Say we have a DTMC and an LTL formula \(\phi\). We're interested in the probability that a randomly generated run \(\pi\) satisfies \(\phi\). We need to make sense of the question and then actually compute the answer.

A discrete-time Markov chain (DTMC) is a transition system \((S, s_{\text{init}}, P, L)\) where (a small code sketch follows the list):

  • \(S\) is a set of states. In modelling situations, \(S\) is usually finite
  • \(s_{\text{init}} \in S\) is the initial state
  • \(L : S \rightarrow \pot(\mathrm{Atoms})\) the labelling function
  • \(P : S \times S \rightarrow [0, 1]\) is the transition probability matrix, which must satisfy \[ \forall s \in S .\, \sum\limits_{t \in S} P(s, t) = 1 \]
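A minimal DTMC sketch in Python; states, labels and probabilities are made up for illustration.

```python
from typing import Dict, Set

# A tiny illustrative DTMC.
S = ["s0", "s1", "s2"]
s_init = "s0"
P: Dict[str, Dict[str, float]] = {  # sparse rows of the transition matrix
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"s0": 1.0},
    "s2": {"s2": 1.0},
}
L: Dict[str, Set[str]] = {"s0": set(), "s1": {"a"}, "s2": {"b"}}

# Each row of P must be a probability distribution over successor states.
for s in S:
    assert abs(sum(P[s].values()) - 1.0) < 1e-9, f"row {s} is not stochastic"
```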

\noindent For a formula \(\phi\) we want to calculate \[ \mathbb{P}[\phi] = \mathbb{P} \left\{ \pi \in \operatorname{Runs} \mid \pi \vDash \phi \right\}. \] A run looks like \(\pi = s_1 \xrightarrow{p_1} s_2 \xrightarrow{p_2} \cdots\) where \(p_i = P(s_i, s_{i+1})\). We require \(p_i > 0\) for all \(i\). We assign probabilities to measurable sets of runs.

A cylinder set is determined by a finite run \(w = s_1 \xrightarrow{p_1} s_2 \xrightarrow{p_2} \cdots \xrightarrow{p_{n-1}} s_n\) where \(s_1 = s_{\text{init}}\) and all \(p_i > 0\). We define \[ \left< w \right> = \left\{ \pi \in \operatorname{Runs}\mid \pi \text{ extends } w \right\}. \] Then \[ \mathbb{P}(\left< w \right>) = \prod\limits_{i=1}^{n - 1} p_i. \]
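In code, the probability of a cylinder set is just the product of the transition probabilities along the finite run; a small sketch, reusing the illustrative DTMC from above:

```python
from math import prod

def cylinder_prob(w, P):
    """P(<w>): product of the transition probabilities along the finite run w."""
    return prod(P[w[i]][w[i + 1]] for i in range(len(w) - 1))

P = {"s0": {"s1": 0.5, "s2": 0.5}, "s1": {"s0": 1.0}, "s2": {"s2": 1.0}}
print(cylinder_prob(["s0", "s1", "s0", "s2"], P))  # 0.5 * 1.0 * 0.5 = 0.25
```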

The measurable subsets of \(\operatorname{Runs}\) are defined by

  • Every cylinder set \(\left< w \right>\) is measurable
  • If \(R_1, R_2,\dots\) are measurable sets, then so are \(\bigcup\limits_n R_n\) and \(\bigcap\limits_n R_n\).
  • If \(R\) is measurable, then so is its complement \(\operatorname{Runs} \setminus R\).
  • Both \(\emptyset\) and \(\operatorname{Runs}\) are measurable.

Let \(\mathcal{E}\) be the set of all measurable subsets of \(\operatorname{Runs}\). There is a unique probability function \(\mathbb{P} : \mathcal{E} \rightarrow [0, 1]\) satisfying

  • \(\mathbb{P}( \emptyset) = 0\) and \(\mathbb{P}(\operatorname{Runs}) = 1\)
  • \(\mathbb{P}(\left< w \right>)\) is as defined earlier
  • If \(R_1, R_2, \dots\) are pairwise disjoint measurable sets, then we have countable additivity, i.e. \[ \mathbb{P} \left( \bigcup\limits_n^{} R_n \right) = \sum\limits_n^{} \mathbb{P}(R_n). \]

For any formula \(\phi\), the set of runs \(\left\{ \pi \in \operatorname{Runs} \mid \pi \vDash \phi \right\}\) is measurable.

The idea of the proof uses Büchi automata. For model checking, given \((S, s_{\text{init}}, P, L)\) and \(\phi\), we answer queries such as \[ \mathbb{P}_{> q}\, \phi, \text{ i.e. whether } \mathbb{P}(\phi) > q, \] and likewise for the other comparison operators.

If we think back to our earlier nondeterministic model of device access, we can enhance it with a single probabilistic node. That way we get a Markov decision process, which combines probability and nondeterminism. Liveness is again \(G((t_1 \rightarrow Fc_1) \land (t_2 \rightarrow F c_2))\). We consider Markov runs, in which the nondeterministic choices have been resolved and only the probabilistic choices remain; in particular, Markov runs are Markov chains.

Tags: lecture-notes