Lectures on mathematical logic and theory of algorithms

Volzhsky University named after V.N. Tatishchev.

Lectures on mathematical logic and theory of algorithms.

Compiled by: Associate Professor S.V. Kaverin.

Chapter I. Algebra of Logic.

§1.1. Definition of a Boolean function.

A Boolean function y = f(x1,x2,…,xn) of n variables x1,x2,…,xn is any function in which both the arguments and the function itself take only the values 0 and 1; i.e., a Boolean function is a rule that assigns to an arbitrary tuple of zeros and ones

(x1,x2,…,xn) the value 0 or 1.

Boolean functions are also called logic-algebra functions, binary functions, and switching functions.

A Boolean function of n variables can be specified by a truth table in which the tuples of argument values are arranged in ascending order of their numbers: first comes the tuple that is the binary expansion of 0 (this tuple has number 0); then comes the tuple that is the binary expansion of 1, then of 2, 3, etc. The last tuple consists of n ones and is the binary expansion of the number 2^n − 1 (this order of the tuples will be called the lexicographic order). Since the count starts from 0, and on each of the 2^n tuples the value of a Boolean function can be either 0 or

1, we conclude that there are exactly 2^(2^n) different Boolean functions of n variables. Thus, for example, there are 16 Boolean functions of two variables, 256 of three, etc.

Example 1.1.1 (voting). Consider a device that records the adoption of a certain resolution by a "committee of three". Each committee member presses his own button when he approves the resolution. If a majority of the members vote in favor, the resolution is adopted, and this is registered by the recording device. Thus, the device implements the function f(x,y,z), whose truth table has the form

x        0 0 0 0 1 1 1 1
y        0 0 1 1 0 0 1 1
z        0 1 0 1 0 1 0 1
f(x,y,z) 0 0 0 1 0 1 1 1

A Boolean function is also uniquely specified by listing all the tuples on which it takes the value 0, or by listing all the tuples on which it takes the value 1. The function f obtained in the example can also be specified by the following system of equalities: f(0,0,0) = f(0,0,1) = f(0,1,0) = f(1,0,0) = 0.

The vector of values of a Boolean function y = f(x1,x2,…,xn) is the ordered tuple of all values of the function f, arranged in lexicographic order. For example, let a function of three variables f be given by the vector of values (0000 0010) and suppose we must find the tuple on which f takes the value 1. Since the 1 occupies the 7th place, and numbering in lexicographic order starts from 0, we must find the binary expansion of 6. Thus, the function f takes the value 1 on the tuple (110).
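Both constructions of this section can be reproduced in a few lines of Python (a sketch; the function name `f` and the helper code are ours, not part of the lecture):

```python
from itertools import product

# The "committee of three" majority function f(x, y, z):
# it equals 1 when at least two of the three arguments are 1.
def f(x, y, z):
    return 1 if x + y + z >= 2 else 0

# product() yields the tuples in lexicographic order: (0,0,0) has number 0.
table = [f(x, y, z) for (x, y, z) in product((0, 1), repeat=3)]
print(table)  # [0, 0, 0, 1, 0, 1, 1, 1]

# Reading a vector of values: the tuple with number k is the binary expansion of k.
vector = [0, 0, 0, 0, 0, 0, 1, 0]   # the function from the text
k = vector.index(1)                  # 6
print(format(k, '03b'))              # 110
```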

§1.2. Elementary Boolean functions.

Among Boolean functions, the so-called elementary Boolean functions stand out; by means of them, any Boolean function of any number of variables can be described.

1. The Boolean function f(x1,x2,…,xn) that takes the value 1 on all tuples of zeros and ones is called the constant 1, or the identical one. Designation: 1.

2. The Boolean function f(x1,x2,…,xn) that takes the value 0 on all tuples of zeros and ones is called the constant 0, or the identical zero. Designation: 0.

3. Negation is the Boolean function of one variable that takes the value 1 on the argument 0 and the value 0 on the argument 1. Other name: logical "not". Designation: ¬x (also written with an overbar).

4. Conjunction is the Boolean function of two variables that equals 1 if and only if both arguments equal 1. Other names: logical multiplication (product); logical "and". Designations: x&y, x·y, x∧y, min(x,y).

5. Disjunction is the Boolean function of two variables that equals 0 if and only if both arguments equal 0. Other names: logical addition (sum); logical "or". Designations: x∨y, max(x,y).

6. Implication is the Boolean function of two variables that equals 0 if and only if x = 1 and y = 0. Other name: logical consequence. Designations: x→y, x⊃y.

7. Equivalence is the Boolean function of two variables that equals 1 if and only if its arguments coincide. Designations: x~y, x↔y, x≡y.

8. Sum modulo 2 is the Boolean function of two variables that equals 1 if and only if its arguments differ. Other names: anti-equivalence; exclusive "or". Designations: x⊕y, x+y.

9. The Sheffer stroke is the Boolean function of two variables that equals 0 if and only if both arguments equal 1. Other names: negation of conjunction; logical "not-and". Designation: x|y.

10. The Peirce arrow is the Boolean function of two variables that equals 1 if and only if both arguments equal 0. Other names: negation of disjunction; logical "not-or"; Webb function. Designation: x↓y.

Comment. The symbols ¬, ∧, ∨, →, ~, ⊕, |, ↓ involved in the designations of the elementary functions will be called connectives or operations.

§1.3. Specifying Boolean functions using elementary ones.

If some Boolean functions are substituted into a logical function in place of its variables, the result is a new Boolean function, called a superposition of the substituted functions (a complex function). Using superposition, one can obtain more complex functions that may depend on any number of variables. A notation for a Boolean function in terms of the elementary Boolean functions will be called a formula realizing this function.

Example 1.3.1. Let the elementary Boolean function x∨y be given. Substitute the function x1↓x2 for x. We obtain a function of three variables, (x1↓x2)∨y. If for the variable y we substitute, say, x3⊕x4, we obtain a new function of four variables: (x1↓x2)∨(x3⊕x4). Clearly, every function obtained in this way is expressed through elementary Boolean functions.

Looking ahead, we note that any Boolean function can be represented as a superposition of elementary functions.

For a more compact notation of complex functions we adopt the following conventions: 1) the outer parentheses are omitted; 2) operations inside parentheses are performed first; 3) the priority of the connectives decreases in the following order: ¬, ∧, ∨, →, ~. For the remaining connectives (⊕, |, ↓), parentheses should for now be written explicitly.

Examples 1.3.2.

1. In the formula x∧y∨z the parentheses are placed as follows: ((x∧y)∨z), because by our convention the operation ∧ binds more strongly than the operation ∨.

2. In the formula x∨y~z→x the parentheses are placed as follows: ((x∨y)~(z→x)).

3. In the formula (x⊕y)~z→xy∨¬z the parentheses are placed as follows: ((x⊕y)~(z→((xy)∨(¬z)))).

4. The formula x→y→z is, by our convention, written incorrectly, because placing parentheses yields two different functions: ((x→y)→z) and (x→(y→z)).
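The ambiguity of x→y→z is easy to confirm by brute force (a small Python check; the helper `imp` is our own shorthand for implication):

```python
from itertools import product

imp = lambda a, b: (not a) or b  # implication a → b

# The two parenthesizations of x → y → z give different functions:
diff = [(x, y, z)
        for x, y, z in product((False, True), repeat=3)
        if imp(imp(x, y), z) != imp(x, imp(y, z))]
print(diff)  # [(False, False, False), (False, True, False)]
```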

§1.4. Significant and non-significant variables.

A Boolean function y = f(x1,x2,…,xn) depends essentially on the variable xk if there exists a tuple of values a1,…,a(k−1),a(k+1),…,an such that f(a1,…,a(k−1),0,a(k+1),…,an) ≠ f(a1,…,a(k−1),1,a(k+1),…,an).

In this case xk is called an essential variable; otherwise xk is called an inessential (dummy) variable. In other words, a variable is inessential if changing it never changes the value of the function.

Example 1.4.1. Let the Boolean functions f1(x1,x2) and f2(x1,x2) be defined by the following truth table:

x1 x2 f1(x1,x2) f2(x1,x2)
0  0  0         1
0  1  0         1
1  0  1         0
1  1  1         0

For these functions the variable x1 is essential, while the variable x2 is inessential.

Obviously, Boolean functions do not change their values when inessential variables are introduced (or removed). Therefore, in what follows, Boolean functions are considered up to inessential variables (in the example we may write f1(x1,x2) = x1, f2(x1,x2) = ¬x1).
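The definition of an essential variable translates directly into a brute-force test (a sketch; `essential_variables` is our own helper, written for illustration):

```python
from itertools import product

def essential_variables(f, n):
    """Indices k (0-based) such that f depends essentially on x_k."""
    essential = []
    for k in range(n):
        for rest in product((0, 1), repeat=n - 1):
            a0 = rest[:k] + (0,) + rest[k:]   # tuple with x_k = 0
            a1 = rest[:k] + (1,) + rest[k:]   # tuple with x_k = 1
            if f(*a0) != f(*a1):              # changing x_k changes the value
                essential.append(k)
                break
    return essential

f1 = lambda x1, x2: x1          # the functions of Example 1.4.1
f2 = lambda x1, x2: 1 - x1
print(essential_variables(f1, 2))  # [0]  -- only x1 is essential
print(essential_variables(f2, 2))  # [0]
```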

§1.5. Truth tables. Equivalent functions.

Knowing the truth tables of the elementary functions, one can compute the truth table of the function realized by a given formula.

Example 1.5.1. F1 = x1∧x2 ∨ (x1∧¬x2 ∨ ¬x1∧x2). Constructing its truth table shows that formula F1 realizes the disjunction x1∨x2.

Example 1.5.2. F2 = x1∧x2→x1. Its truth table contains only ones, so formula F2 realizes the constant 1.

Example 1.5.3. F3 = ((x1∧x2)⊕x1)⊕x2. Formula F3 realizes the disjunction x1∨x2.

Boolean functions f1 and f2 are called equivalent if their values coincide on every tuple (a1,a2,…,an) of zeros and ones. The notation for equivalent functions is f1 = f2.

According to Examples 1.5.1-1.5.3, we may write

x1∧x2 ∨ (x1∧¬x2 ∨ ¬x1∧x2) = x1∨x2;

x1∧x2→x1 = 1;

((x1∧x2)⊕x1)⊕x2 = x1∨x2.
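All three equalities can be verified mechanically by running over the four tuples (a Python check using 0/1 arithmetic, where ¬x is written as 1−x, ∨ as |, ∧ as &, and ⊕ as ^):

```python
from itertools import product

checked = 0
for x1, x2 in product((0, 1), repeat=2):
    # x1∧x2 ∨ (x1∧¬x2 ∨ ¬x1∧x2) = x1∨x2
    assert ((x1 & x2) | (x1 & (1 - x2)) | ((1 - x1) & x2)) == (x1 | x2)
    # x1∧x2 → x1 = 1   (the implication a→b is written as ¬a ∨ b)
    assert ((1 - (x1 & x2)) | x1) == 1
    # ((x1∧x2) ⊕ x1) ⊕ x2 = x1∨x2
    assert (((x1 & x2) ^ x1) ^ x2) == (x1 | x2)
    checked += 1
print(checked, "tuples checked")
```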

§1.6. Basic equivalences.

The equivalences given below are often useful when operating with Boolean functions.

Below, H, H1, H2, … denote arbitrary Boolean functions.

1. Law of double negation: ¬¬H = H.

2. Idempotency: H∨H = H, H&H = H.

3. Commutativity: H1*H2 = H2*H1, where the symbol * stands for one of the connectives &, ∨, ⊕, ~.

4. Associativity: H1*(H2*H3) = (H1*H2)*H3, where the symbol * stands for one of the connectives &, ∨, ⊕, ~.

5. Distributivity: H1&(H2∨H3) = (H1&H2)∨(H1&H3); H1∨(H2&H3) = (H1∨H2)&(H1∨H3); H1&(H2⊕H3) = (H1&H2)⊕(H1&H3).

6. De Morgan's laws: ¬(H1&H2) = ¬H1 ∨ ¬H2; ¬(H1∨H2) = ¬H1 & ¬H2.

7. Absorption rules: H1∨(H1&H2) = H1; H1&(H1∨H2) = H1.

8. Gluing laws: H1&H2 ∨ H1&¬H2 = H1; (H1∨H2)&(H1∨¬H2) = H1.

9. Inversion laws: H∨¬H = 1; H&¬H = 0.

10. Rules for operations with constants: H∨1 = 1, H&1 = H, H∨0 = H, H&0 = 0.

To verify the truth of any of the above equivalences, it suffices to construct the corresponding truth tables.

Note that the associativity of an operation allows it to be extended to any number of variables. For example, the notation x∨y∨z∨w is correct, because any placement of parentheses yields the same function. The commutativity of an operation allows the variables in a formula to be interchanged; for example, x∧y∧z∧w = w∧y∧x∧z.

§1.7. Functional completeness.

In a typical modern digital computer, numbers are represented by 0s and 1s, so the instructions that the processor executes are Boolean functions. Below we show that every Boolean function can be realized through conjunction, disjunction, and negation. Consequently, the required processor can be built from elements that implement conjunction, disjunction, and negation. This section is devoted to the question of whether there exist (and if so, which) other systems of Boolean functions with the property that all other functions can be expressed through them.

Let us introduce a number of classes of functions.

1. The class of functions preserving the constant 0, i.e., functions y = f(x1,x2,…,xn) such that f(0,0,…,0) = 0.

2. The class of functions preserving the constant 1, i.e., functions y = f(x1,x2,…,xn) such that f(1,1,…,1) = 1.

3. The class of self-dual functions, i.e., functions y = f(x1,x2,…,xn) such that f(x1,x2,…,xn) = ¬f(¬x1,¬x2,…,¬xn).

4. The class of linear functions, i.e., functions y = f(x1,x2,…,xn) that can be represented in the form f(x1,x2,…,xn) = c0⊕c1x1⊕c2x2⊕…⊕cnxn, where c0, c1, c2, …, cn are coefficients taking the value 0 or 1.

5. The class of monotone functions. On the set of tuples of zeros and ones B^n = {(x1,x2,…,xn) : xi ∈ {0,1}, i = 1,2,…,n} we introduce a partial order as follows:

(a1,a2,…,an) ≤ (b1,b2,…,bn) if and only if a1 ≤ b1, a2 ≤ b2, …, an ≤ bn. A function f(x1,x2,…,xn) is called monotone if, for any two elements of B^n,

(a1,a2,…,an) ≤ (b1,b2,…,bn) implies f(a1,a2,…,an) ≤ f(b1,b2,…,bn).

A system S of Boolean functions whose superpositions can represent any Boolean function is called functionally complete. One also says that a functionally complete system S of Boolean functions forms a basis in the logical space. A basis S is called minimal if removing any function from it turns the system into an incomplete one.

Completeness criterion (Post's theorem). A system S of Boolean functions is complete if and only if it contains at least one function that does not preserve the constant 0, at least one function that does not preserve the constant 1, at least one non-self-dual function, at least one non-linear function, and at least one non-monotone function.
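Post's criterion can be checked mechanically: compute the five properties from a function's value vector and test the system (a sketch; all names are ours, and the linearity test uses the polynomial transform described in Chapter III):

```python
from itertools import product

def post_properties(table, n):
    """table[k] = value of f on the tuple with number k (lexicographic order).
    Returns (preserves 0, preserves 1, self-dual, monotone, linear)."""
    sets = list(product((0, 1), repeat=n))
    t0 = table[0] == 0
    t1 = table[-1] == 1
    # self-dual: f(x1..xn) = ¬f(¬x1..¬xn); negating a tuple maps number k to 2^n-1-k
    s = all(table[k] != table[2**n - 1 - k] for k in range(2**n))
    # monotone: a <= b componentwise implies f(a) <= f(b)
    m = all(table[i] <= table[j]
            for i, a in enumerate(sets) for j, b in enumerate(sets)
            if all(p <= q for p, q in zip(a, b)))
    # linear: the polynomial mod 2 has no monomial of degree >= 2
    c = list(table)
    step = 1
    while step < len(c):                 # triangle (Möbius) transform over GF(2)
        for i in range(len(c)):
            if i & step:
                c[i] ^= c[i ^ step]
        step <<= 1
    lin = all(v == 0 for k, v in enumerate(c) if bin(k).count('1') >= 2)
    return t0, t1, s, m, lin

def complete(system):
    """Post's theorem: complete iff for each of the five classes
    the system contains a function outside that class."""
    props = [post_properties(t, n) for t, n in system]
    return all(any(not p[i] for p in props) for i in range(5))

NOT = ([1, 0], 1)
AND = ([0, 0, 0, 1], 2)
SHEFFER = ([1, 1, 1, 0], 2)
print(complete([NOT, AND]))   # True  -- S2 = {¬, &} is a basis
print(complete([SHEFFER]))    # True  -- S5 = {|} is a basis
print(complete([AND]))        # False
```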

Table 1.7 lists the properties of the elementary Boolean functions (the symbol * marks a property that the given function has).

Using Post's theorem and Table 1.7, one can build bases from the elementary functions by the following rule: choose any elementary Boolean function and supplement it, if necessary, with other functions so that together they satisfy the completeness criterion. Through the functions of such a basis one can express all other Boolean functions.

Let us construct some bases frequently used in applications.

Table 1.7

Name            Designation  Does not     Does not     Not        Not      Not
                             preserve 0   preserve 1   self-dual  linear   monotone
Constant 0      0                         *            *
Constant 1      1            *                         *
Negation        ¬            *            *                                *
Conjunction     &                                      *          *
Disjunction     ∨                                      *          *
Implication     →            *                         *          *        *
Equivalence     ~            *                         *                   *
Sum mod 2       ⊕                         *            *                   *
Sheffer stroke  |            *            *            *          *        *
Peirce arrow    ↓            *            *            *          *        *

1. The system of functions S1 = {¬, ∧, ∨} forms a basis. To reduce a Boolean function to a form containing only connectives from the basis S1, the following equivalences are useful: x→y = ¬x∨y; x~y = (¬x∨y)·(x∨¬y); x⊕y = ¬x·y ∨ x·¬y; x|y = ¬x∨¬y; x↓y = ¬x·¬y.

2. The system S2 = {¬, &} forms a basis. An arbitrary function can first be reduced to a form containing connectives from S1, after which one uses the relation x∨y = ¬(¬x·¬y).

3. The system S3 = {¬, ∨} forms a basis. An arbitrary function can first be reduced to a form containing connectives from S1, after which one uses the relation x·y = ¬(¬x∨¬y).

4. The system S4 = {1, &, ⊕} forms a basis. An arbitrary function can first be reduced to a form containing connectives from S1, after which one uses the relations ¬x = 1⊕x and x∨y = x⊕y⊕x·y.

5. The system S5 = {|} forms a basis. An arbitrary function can first be reduced to a form containing connectives from S2, after which one uses the relations x·y = ¬(x|y) = (x|y)|(x|y) and ¬x = x|x.

6. The system S6 = {↓} forms a basis. An arbitrary function can first be reduced to a form containing connectives from S3, after which one uses the relations x∨y = ¬(x↓y) = (x↓y)↓(x↓y) and ¬x = x↓x.

7. The system S7 = {→, 0} forms a basis.

Example 1.7.1. Write the function x~(y⊕z) in the basis S1 = {¬,∧,∨}: x~(y⊕z) = (¬x ∨ (y⊕z))·(x ∨ ¬(y⊕z)) = (¬x ∨ ¬y·z ∨ y·¬z)·(x ∨ ¬y·¬z ∨ y·z).
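The resulting S1 formula can be checked against the original function on all eight tuples (a Python sketch; `s1` is our name for the expression of Example 1.7.1):

```python
from itertools import product

def s1(x, y, z):
    # (¬x ∨ ¬y·z ∨ y·¬z)·(x ∨ ¬y·¬z ∨ y·z), written with 0/1 bitwise operators
    nx, ny, nz = 1 - x, 1 - y, 1 - z
    return (nx | (ny & z) | (y & nz)) & (x | (ny & nz) | (y & z))

for x, y, z in product((0, 1), repeat=3):
    target = 1 if x == (y ^ z) else 0   # x ~ (y ⊕ z)
    assert s1(x, y, z) == target
print("the S1 formula realizes x ~ (y ⊕ z)")
```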

Chapter II. Boolean algebra.

The set of all Boolean functions considered in the basis S1 = {¬, &, ∨} forms Boolean algebra. Thus, in Boolean algebra all formulas are written using three connectives: ¬, &, ∨. We partially examined the properties of this algebra in Chapter I (see, for example, the basic equivalences). This chapter deals with properties specific to Boolean algebra, and therefore throughout this chapter we deal only with the three functions ¬, &, ∨.

§2.1. Normal forms.

Normal forms are a syntactically unambiguous way of writing a formula realizing a given function.

If x is a logical variable and σ ∈ {0,1}, then the expression x^σ, equal to x if σ = 1 and to ¬x if σ = 0, is called a letter; equivalently, x^σ = 1 if and only if x = σ. The letters x and ¬x are called contrary. A conjunct is a conjunction of letters; a disjunct is a disjunction of letters. For example, the formulas x·y·z and x·y·x·x are conjuncts, the formulas x∨y∨z and x∨y∨x are disjuncts, and the formula z is both a conjunct and a disjunct.

A disjunctive normal form (DNF) is a disjunction of a finite number of conjuncts.

A conjunctive normal form (CNF) is a conjunction of a finite number of disjuncts.

More simply: a DNF is a sum of products, and a CNF is a product of logical sums. Examples.

1. x·y ∨ y·z ∨ x is a DNF (a sum of products).

2. (x∨y∨z)·(x∨y)·z is a CNF (a product of sums).

3. x∨y∨z∨w is both a DNF and a CNF.

4. x·y·z·w is both a DNF and a CNF.

5. (x∨x∨y)·(y∨z∨x)·z is a CNF.

6. x·y·z and x·y·x·x are DNFs.

7. x·(x∨y·z)·x·y·z is not a normal form (neither a DNF nor a CNF).

Let the function f be written in the basis S1. It is reduced to a normal form as follows:

1) use De Morgan's laws to transform the formula to a form in which negation signs apply only to individual variables;

2) apply the rule for removing double negations: ¬¬x = x;

3) apply the first distributive law, H1&(H2∨H3) = (H1&H2)∨(H1&H3), for reduction to DNF, and the second distributive law, H1∨(H2&H3) = (H1∨H2)&(H1∨H3), for reduction to CNF.

Any Boolean function has infinitely many DNF and CNF representations. For example, by additionally using the inversion laws and the rules for operations with constants, one can ensure that in each individual conjunct or disjunct every variable occurs at most once (either itself or its negation).

Example 2.1.1. To reduce to DNF we use the first law of distributivity.

x·¬(x·y·z)·(y∨z) = x·(¬x ∨ ¬y ∨ ¬z)·(y∨z) — this is a CNF

= (x·¬x ∨ x·¬y ∨ x·¬z)·(y∨z) = (0 ∨ x·¬y ∨ x·¬z)·(y∨z) = (x·¬y ∨ x·¬z)·(y∨z) =

= x·¬y·y ∨ x·¬z·y ∨ x·¬y·z ∨ x·¬z·z = 0 ∨ x·y·¬z ∨ x·¬y·z ∨ 0 =

= x·y·¬z ∨ x·¬y·z — this is a DNF

Example 2.1.2. To reduce to CNF we use the second law of distributivity.

x ∨ y·¬(x·¬y)·z = x ∨ y·(¬x∨y)·z = x ∨ y·z·(¬x∨y) =

= (x ∨ y·z)·(x ∨ ¬x ∨ y) = (x∨y)·(x∨z)·(1∨y) =

= (x∨y)·(x∨z) — this is a CNF
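The transformation can be confirmed by comparing both sides on all eight tuples (a Python check of Example 2.1.2 as stated above; the function names are ours):

```python
from itertools import product

def lhs(x, y, z):
    # the starting formula x ∨ y·¬(x·¬y)·z, with 0/1 bitwise operators
    return x | (y & (1 - (x & (1 - y))) & z)

def rhs(x, y, z):
    # the resulting CNF (x ∨ y)·(x ∨ z)
    return (x | y) & (x | z)

assert all(lhs(*t) == rhs(*t) for t in product((0, 1), repeat=3))
print("the CNF (x ∨ y)·(x ∨ z) is equivalent to the original formula")
```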

§2.2. Perfect normal forms.

If every term of a normal form contains all the variables (either themselves or their negations), and in each individual conjunct or disjunct every variable occurs exactly once (either itself or its negation), then the form is called a perfect normal form (SDNF or SCNF — perfect DNF or perfect CNF, respectively). Examples: let a function of three variables f(x,y,z) be given.

1. ¬x·y·z ∨ x·¬y·z ∨ x·y·z is a perfect DNF.

2. (x∨y∨z)·(x∨¬y∨z)·(¬x∨y∨¬z) is a perfect CNF.

3. (x∨y)·(x∨z) is not a perfect CNF, because, for example, the first sum does not include the variable z.

4. x·y·¬z is a perfect DNF.

Theorem 2.2.1.

1. Every Boolean function that is not identically 0 has exactly one SDNF, up to the order of its terms.

2. Every Boolean function that is not identically 1 has exactly one SCNF, up to the order of its terms.

We prove the theorem constructively, by solving the following problem: given a truth table, construct the SDNF.

We consider the solution using the example of the function f(x,y,z) given by a table (Table 2.2), for n = 3.

Example 2.2.1. Given the truth table (Table 2.2), construct the SDNF.

Table 2.2

x y z  basic conjunction  f(x,y,z)
0 0 0  ¬x·¬y·¬z           0
0 0 1  ¬x·¬y·z            1
0 1 0  ¬x·y·¬z            1
0 1 1  ¬x·y·z             0
1 0 0  x·¬y·¬z            0
1 0 1  x·¬y·z             1
1 1 0  x·y·¬z             1
1 1 1  x·y·z              1

The basic conjunctions (or constituents of 1) included in the table correspond to the specific tuples of zeros and ones taken by the variables x, y, z. Constituents of 1 are built by the following rule: a variable enters the product itself if it takes the value 1 on the given tuple; otherwise its negation enters the product. So, for example, on the tuple (0,0,1) the variables x and y take the value 0, and therefore their negations enter the product, while the variable z takes the value 1 and therefore enters the product itself. For the tuple (0,0,1), the constituent of 1 is ¬x·¬y·z.

Obviously, the conjunction ¬x·¬y·¬z equals 1 only on the tuple

(0,0,0), while ¬x·¬y·z equals 1 only on (0,0,1), etc. (see the table). Next, note that the disjunction of two basic conjunctions equals 1 on exactly the two tuples that correspond to these basic conjunctions. For example, the function ¬x·¬y·¬z ∨ ¬x·¬y·z equals 1 only on the two tuples (0,0,0) and (0,0,1), and on all other tuples it equals 0. Similarly, the disjunction of three basic conjunctions equals 1 on exactly the three tuples that correspond to these basic conjunctions, etc.

Thus we obtain the rule for constructing the SDNF: select the rows in which the function equals 1, and take the disjunction of the corresponding basic conjunctions; this yields the desired SDNF. For our example we have f = ¬x·¬y·z ∨ ¬x·y·¬z ∨ x·¬y·z ∨ x·y·¬z ∨ x·y·z.

Problem. Given a truth table, construct the SCNF.

A constituent of 0 for a tuple of zeros and ones (taken by the variables x, y, z) is constructed as follows: a variable enters the disjunction itself if it takes the value 0 on the given tuple; otherwise its negation enters the disjunction.

Rule for constructing the SCNF: select the rows in which the function equals 0, and take the conjunction of the corresponding constituents of 0. The result is the desired SCNF. For our example we have f = (x∨y∨z)·(x∨¬y∨¬z)·(¬x∨y∨z).

Comment. To construct the perfect CNF of a function f, it suffices to construct the perfect DNF of the function ¬f and then use f = ¬(¬f) and De Morgan's laws.

Let us construct the SCNF for our example on the basis of this comment.

1. We build the SDNF for the negation: ¬f = ¬x·¬y·¬z ∨ ¬x·y·z ∨ x·¬y·¬z.

2. We use De Morgan's laws: f = ¬(¬f) = ¬(¬x·¬y·¬z ∨ ¬x·y·z ∨ x·¬y·¬z) = ¬(¬x·¬y·¬z)·¬(¬x·y·z)·¬(x·¬y·¬z) = (x∨y∨z)·(x∨¬y∨¬z)·(¬x∨y∨z).

The described method of finding the SDNF (and SCNF) from the truth table is often more labor-intensive than the following algorithm.

1. To find the SDNF, first reduce the given formula to DNF.

2. If some conjunct K (i.e., K is a product of some variables or their negations) does not include, say, the variable y, then replace this conjunct with the equivalent formula K&(y∨¬y) and, applying the distributive law, reduce the resulting formula to DNF; if several variables are missing, then for each of them add the corresponding formula of the form (y∨¬y) to the conjunct.

3. If the resulting DNF contains several identical constituents of 1, keep only one of them. The result is the SDNF.

Comment: to construct the SCNF, to each disjunct D not containing, say, the variable y we add the formula y·¬y, i.e., we replace this disjunct with the equivalent formula D∨(y·¬y) and apply the second law of distributivity.

Example 2.2.2. Construct the SDNF of the function f = x ∨ y·z by equivalent transformations.

f = x ∨ y·z = x·(y∨¬y)·(z∨¬z) ∨ y·z·(x∨¬x) = x·y·z ∨ x·y·¬z ∨ x·¬y·z ∨ x·¬y·¬z ∨ y·z·x ∨ y·z·¬x = x·y·z ∨ x·y·¬z ∨ x·¬y·z ∨ x·¬y·¬z ∨ ¬x·y·z.

A DIGRESSION.

The computation of the SDNF has not only theoretical but also considerable practical significance. For example, many modern programs with a graphical interface use a visual table form for composing complex logical conditions: conditions are written in the cells, the cells of one column are considered to be joined by conjunction, and the columns are joined by disjunction; that is, they form a DNF (or vice versa, in which case a CNF is obtained). In particular, this is how the QBE (Query-by-Example) graphical interface, used to formulate logical conditions in queries to a DBMS, is designed.

Algorithm 2.2.1. Construction of the SDNF

Input: vector X: array of string — variable identifiers; matrix V: array of 0..1 — all the different tuples of variable values;

vector F: array of 0..1 — the corresponding function values.

Output: a sequence of symbols forming the record of the SDNF formula of the given function.

f := false (flag: the left operand of a disjunction has been output)
for i from 1 to 2^n do
    if F[i] = 1 then
        if f then
            yield "∨" (append a disjunction sign; the operator yield m outputs the symbol m)
        else
            f := true
        end if
        g := false (flag: the left operand of a conjunction has been output)
        for j from 1 to n do
            if g then
                yield "∧" (append a conjunction sign)
            else
                g := true
            end if
            if V[i, j] = 0 then
                yield "¬" (append a negation sign)
            end if
            yield X[j] (append a variable identifier)
        end for
    end if
end for
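A direct Python transcription of Algorithm 2.2.1 (a sketch; it generates the tuples itself instead of taking the matrix V as input, and the name `sdnf` is ours):

```python
from itertools import product

def sdnf(names, values):
    """Build the SDNF string of a function given by its value vector.

    names  -- variable identifiers, e.g. ["x", "y", "z"];
    values -- the 2^n function values in lexicographic order of the tuples.
    """
    terms = []
    # product() generates the tuples in exactly the lexicographic order used here.
    for row, value in zip(product((0, 1), repeat=len(names)), values):
        if value == 1:
            literals = [v if bit else "¬" + v for v, bit in zip(names, row)]
            terms.append("∧".join(literals))
    return " ∨ ".join(terms)

# The function of Example 2.2.1 (value vector 0,1,1,0,0,1,1,1):
print(sdnf(["x", "y", "z"], [0, 1, 1, 0, 0, 1, 1, 1]))
# ¬x∧¬y∧z ∨ ¬x∧y∧¬z ∨ x∧¬y∧z ∨ x∧y∧¬z ∨ x∧y∧z
```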

§2.3. DNF minimization by Quine's method.

Each formula contains a finite number of occurrences of variables. An occurrence of a variable is the place that the variable occupies in the formula. The problem is, for a given Boolean function f, to find a DNF representing this function and having the smallest number of occurrences of variables.

If x is a logical variable, and σœ(0,1) then the expression x σ =xx if if σσ== 10 .

called letter . Conjunct called conjunction of letters. For example, the formulas x ⋅ y ⋅ z and x ⋅ y ⋅ x ⋅ x are conjunctions . An elementary product is a conjunction in which any variable appears no more than once (either itself or its negation).

A formula f1 is called an implicant of a formula f if f1 is an elementary product and f1∧f = f1, i.e., for the functions corresponding to the formulas the inequality f1 ≤ f holds. An implicant f1 of a formula f is called prime (simple) if discarding any letter from f1 no longer yields an implicant of the formula f.

Example 2.3.1. Let us find all implicants and all prime implicants of the formula f = x→y. There are 8 elementary products in the variables x and y. For clarity, their truth tables are given below:

x y x→y  ¬x·¬y ¬x·y x·¬y x·y  ¬x ¬y x y
0 0 1    1     0    0    0    1  1  0 0
0 1 1    0     1    0    0    1  0  0 1
1 0 0    0     0    1    0    0  1  1 0
1 1 1    0     0    0    1    0  0  1 1

From the truth tables we conclude that the formulas ¬x·¬y, ¬x·y, x·y, ¬x, y are all the implicants of the formula x→y, and among these implicants the formulas ¬x and y are prime (the formula ¬x·y, for example, is not a prime implicant, since discarding the letter y we obtain the implicant ¬x).

The reduced DNF is the disjunction of all prime implicants of a given formula (function).

Theorem 2.3.1. Every Boolean function that is not the constant 0 can be represented by a reduced DNF.

In Example 2.3.1, the function corresponding to the formula x→y is represented by the formula ¬x∨y, which is its reduced DNF.

A reduced DNF may contain superfluous implicants whose removal does not change the truth table. If all superfluous implicants are removed from a reduced DNF, the resulting DNF is called a dead-end DNF.

Note that the representation of a function by a dead-end DNF is in general not unique. Choosing, among all the dead-end forms, a form with the smallest number of occurrences of variables yields a minimal DNF (MDNF).

Let us consider Quine's method for finding the MDNF representing a given Boolean function. We define the following three operations:

1. the full gluing operation: f·x ∨ f·¬x = f·(x∨¬x) = f;

2. the partial gluing operation:

f·x ∨ f·¬x = f·(x∨¬x) ∨ f·x ∨ f·¬x = f ∨ f·x ∨ f·¬x;

3. the elementary absorption operation: f·x^σ ∨ f = f, σ ∈ {0,1}.

Theorem 2.3.2 (Quine's theorem). If, starting from the SDNF of a function, one performs all possible partial gluing operations and then all possible elementary absorptions, the result is the reduced DNF, i.e., the disjunction of all its prime implicants.

Example 2.3.2. Let the function f(x,y,z) be given by the perfect DNF f = ¬x·y·¬z ∨ ¬x·y·z ∨ x·¬y·z ∨ x·y·¬z ∨ x·y·z. Then, performing in two stages all possible partial gluing operations and then the elementary absorptions, we have

f = ¬x·y ∨ y·¬z ∨ y·z ∨ x·z ∨ x·y = y ∨ x·z.

Thus, the reduced DNF of the function f is the formula y ∨ x·z.

In practice, when performing the partial gluing operations at each stage, one need not write out the terms taking part in these operations, writing only the results of all possible full gluings together with the conjuncts not taking part in any gluing.

Example 2.3.3. Let the function f(x,y,z) be given by the perfect DNF f = ¬x·¬y·z ∨ ¬x·y·z ∨ x·y·¬z ∨ x·y·z.

Then, performing the gluing operations and then the elementary absorptions, we have

f = x·y·(z∨¬z) ∨ y·z·(x∨¬x) ∨ ¬x·z·(y∨¬y) = x·y ∨ y·z ∨ ¬x·z.

To obtain a minimal DNF from the reduced DNF, the Quine matrix is used, which is constructed as follows. The column headings of the table contain the constituents of 1 of the perfect DNF, and the row headings contain the prime implicants of the obtained reduced DNF. An asterisk marks those intersections of rows and columns for which the conjunct in the row heading is contained in the constituent of 1 heading the column.

For Example 2.3.3, the Quine matrix has the form

       ¬x·¬y·z  ¬x·y·z  x·y·¬z  x·y·z
x·y                     *       *
y·z             *               *
¬x·z   *        *

For a dead-end DNF, one selects a minimum number of prime implicants whose disjunction preserves all the constituents of 1, i.e., such that every column of the Quine matrix contains an asterisk at its intersection with the row of one of the selected implicants. Among the dead-end DNFs so obtained, one with the smallest number of occurrences of variables is chosen as the minimal DNF.

In Example 2.3.3, using the Quine matrix, we find that the minimal DNF of the given function is x·y ∨ ¬x·z.
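The gluing and absorption stages of Quine's method can be sketched as follows (a sketch; terms are written over the alphabet {'0','1','-'}, where '-' marks an absent variable, and all names are ours):

```python
from itertools import product

def glue(terms):
    """One stage of gluing f·x ∨ f·¬x = f; unglued terms are kept (absorption)."""
    result, used = set(), set()
    for a, b in product(terms, repeat=2):
        diff = [i for i in range(len(a)) if a[i] != b[i]]
        # glue only when the terms differ in exactly one defined position
        if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
            result.add(a[:diff[0]] + ('-',) + a[diff[0] + 1:])
            used.update({a, b})
    return result | (set(terms) - used)

def prime_implicants(minterms, n):
    """All prime implicants of the function whose 1-tuples have the given numbers."""
    terms = {tuple(format(m, f'0{n}b')) for m in minterms}
    while True:
        new = glue(terms)
        if new == terms:
            return terms
        terms = new

# Example 2.3.3: f = ¬x¬yz ∨ ¬xyz ∨ xy¬z ∨ xyz, i.e. minterms 1, 3, 6, 7.
print(sorted(prime_implicants({1, 3, 6, 7}, 3)))
# [('-', '1', '1'), ('0', '-', '1'), ('1', '1', '-')]  -- y·z, ¬x·z, x·y
```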

Comment. To construct the minimal CNF of a function f, it suffices to construct the minimal DNF of the function ¬f and then

use f = ¬(¬f) and De Morgan's laws.

§ 2.4. Karnaugh maps.

Another way to obtain the prime implicants of formulas with a small number of variables (and hence to find the minimal DNF) is based on the use of so-called Karnaugh maps.

A Karnaugh map is a special kind of table that simplifies the process of finding minimal forms and is successfully used when the number of variables does not exceed six. Karnaugh maps for functions of n variables are a rectangle divided into 2^n cells. Each cell of the diagram is associated with a binary n-tuple. The values of the given function f from the truth table are entered into the corresponding squares; if a cell corresponds to 0, it is usually left empty.

Table 2.4.1 shows an example of the marking of a Karnaugh map for a function of three variables. The bottom four cells of the map correspond to the binary tuples in which the variable x takes the value 1, and the top four cells to the tuples in which x takes the value 0. The four cells forming the right half of the map correspond to the tuples in which the variable y takes the value 1, etc. Table 2.4.2 shows the marking of a Karnaugh map for n = 4 variables.

To construct a minimal DNF, the procedure of gluing the 1s is performed. The values 1 that are glued together correspond to neighboring cells, i.e., cells differing only in the value of one variable (in the graphic representation, separated by a vertical or horizontal line, taking into account the adjacency of the opposite outermost cells).

The process of gluing the 1s comes down to combining the 1-cells of the Karnaugh map into groups, observing the following rules:

1. The number of cells in one group must be a power of 2, i.e., 2^m, where m = 0, 1, 2, …

2. Each cell in a group of 2^m cells must have m neighboring cells in the group.

3. Each cell must belong to at least one group.

4. Each group should include as many cells as possible, i.e., no group should be contained in another group.

5. The number of groups should be minimal.

The function f is read off a gluing group as follows: the variables that keep the same value in all the cells of the group enter the conjunction; the value 1 corresponds to the variable itself, and the value 0 to its negation.

We now present templates that help to build coverings of 1s. To simplify the notation we do not mark the variables on the maps, but keep their designations as in Tables 2.4.1 and 2.4.2.

[Karnaugh-map templates of typical coverings of 1s, with the products read from them:
f = ¬y&x; f = ¬z&¬y; f = ¬x&y; f = ¬x&z; f = y&w; f = ¬x&¬y; f = ¬y&¬w; f = ¬y&¬z; f = ¬z&¬x; f = y&z&w; f = ¬y&¬z&¬w; f = x&y&¬z.
The final map gives the arrangement of 1s used in Example 2.4.1.]

Example 2.4.1. Build MDNF.

First we look for coverings of 1s by 16 cells that cover at least one uncovered 1. There are none. We pass to coverings by 8 cells: there are none of these either. We pass to coverings by 4 cells: there are two such coverings. We then pass to coverings by 2 cells: there is exactly one. Thus all 1s have become covered. Next we check whether some of the coverings can be removed so that all 1s remain covered. Finally we write out the MDNF: f = x·z ∨ y·w ∨ y·z·w.

Comment. To construct the minimal CNF of a function f, it suffices to construct the minimal DNF of the function ¬f and then use f = ¬(¬f) together with De Morgan's laws.

Chapter III. Zhegalkin algebra.

The set of Boolean functions considered in the Zhegalkin basis S4 = {⊕, &, 1} is called the Zhegalkin algebra.

Basic properties.

1. commutativity

H1⊕H2 = H2⊕H1, H1&H2 = H2&H1;

2. associativity

H1⊕(H2⊕H3) = (H1⊕H2)⊕H3, H1&(H2&H3) = (H1&H2)&H3;

3. distributivity

H1&(H2⊕H3) = (H1&H2)⊕(H1&H3);

4. properties of the constants: H&1 = H, H&0 = 0, H⊕0 = H;

5. H⊕H = 0, H&H = H.

Statement 3.1.1. All other Boolean functions can be expressed through the operations of Zhegalkin algebra:

¬x = 1⊕x, x∨y = x⊕y⊕xy, x↔y = 1⊕x⊕y, x→y = 1⊕x⊕xy, x↓y = 1⊕x⊕y⊕xy, x|y = 1⊕xy.
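These identities are easy to verify mechanically; the following sketch checks them over all 0/1 values (in Python, ^ is addition mod 2 and & is conjunction):

```python
# Brute-force check of Statement 3.1.1 over all Boolean values.
for x in (0, 1):
    for y in (0, 1):
        assert (1 - x) == (1 ^ x)                         # negation
        assert (x | y) == (x ^ y ^ (x & y))               # disjunction
        assert int(x == y) == (1 ^ x ^ y)                 # equivalence
        assert int(x <= y) == (1 ^ x ^ (x & y))           # implication
        assert int(not (x | y)) == (1 ^ x ^ y ^ (x & y))  # Peirce arrow (NOR)
        assert int(not (x & y)) == (1 ^ (x & y))          # Sheffer stroke (NAND)
print("all identities hold")
```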

Definition. A Zhegalkin polynomial (polynomial modulo 2) in n variables x1, x2, …, xn is an expression of the form c0 ⊕ c1x1 ⊕ c2x2 ⊕ … ⊕ cnxn ⊕ c12x1x2 ⊕ … ⊕ c12…n x1x2…xn, where the coefficients ck can take the values 0 or 1.

If the Zhegalkin polynomial does not contain products of individual variables, then it is called linear (linear function).

For example, f = x⊕yz⊕xyz and f1 = 1⊕x⊕y⊕z are Zhegalkin polynomials, the second being a linear function.

Theorem 3.1.1. Every Boolean function can be represented by a Zhegalkin polynomial, and in a unique way.

Let us present the main methods for constructing Zhegalkin polynomials from a given function.

1. The method of undetermined coefficients. Let P(x1, x2, …, xn) be the desired Zhegalkin polynomial realizing the given function f(x1, x2, …, xn). We write it in the form

P = c0 ⊕ c1x1 ⊕ c2x2 ⊕ … ⊕ cnxn ⊕ c12x1x2 ⊕ … ⊕ c12…n x1x2…xn.

Let us find the coefficients ck. To do this, we successively assign to the variables x1, x2, …, xn the values from each row of the truth table. As a result we obtain a system of 2^n equations in 2^n unknowns that has a unique solution. Solving it, we find the coefficients of the polynomial P(x1, x2, …, xn).
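For two variables the system is triangular and can be solved row by row; a small sketch (the helper name is ours) reproduces the computation of Example 3.1.1 below:

```python
def zhegalkin_coeffs(f):
    """Coefficients (c0, c1, c2, c12) of the Zhegalkin polynomial of a
    two-place Boolean function f, found by solving the triangular system
    row by row, as in the method of undetermined coefficients."""
    c0 = f(0, 0)
    c1 = f(1, 0) ^ c0
    c2 = f(0, 1) ^ c0
    c12 = f(1, 1) ^ c0 ^ c1 ^ c2
    return c0, c1, c2, c12

implication = lambda x, y: int(x <= y)     # x -> y
print(zhegalkin_coeffs(implication))       # (1, 1, 0, 1), i.e. 1 + x + xy mod 2
```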

2. A method based on transforming formulas over the set of connectives {¬, &}. Construct some formula Φ over the set of connectives {¬, &} realizing the given function f(x1, x2, …, xn). Then replace every subformula of the form ¬A with A⊕1, expand the brackets using the distributive law (see property 3), and then apply properties 4 and 5.

Example 3.1.1. Construct the Zhegalkin polynomial for the function f(x,y) = x→y.

Solution. 1. (The method of undetermined coefficients.) We write the desired polynomial in the form

P(x,y) = c0 ⊕ c1x ⊕ c2y ⊕ c12xy. Using the truth table

x     0 0 1 1
y     0 1 0 1
x→y   1 1 0 1

we find that f(0,0) = P(0,0) = c0 = 1, f(0,1) = P(0,1) = c0⊕c2 = 1, f(1,0) = P(1,0) = c0⊕c1 = 0, f(1,1) = P(1,1) = c0⊕c1⊕c2⊕c12 = 1.

From this we successively find c0 = 1, c1 = 1, c2 = 0, c12 = 1. Therefore x→y = 1⊕x⊕xy (compare with Statement 3.1.1).

2. (The method of formula transformation.) We have

x→y = ¬x ∨ y = ¬(x·¬y) = (x·(y⊕1))⊕1 = 1 ⊕ x ⊕ x·y.

Note that the advantage of the Zhegalkin algebra (compared with other algebras) is the arithmetization of logic, which makes transformations of Boolean functions quite simple. Its disadvantage compared with Boolean algebra is the cumbersomeness of its formulas.

Chapter IV. Statements. Predicates.

§4.1. Statements.

In constructing the algebra of logic we used a functional approach. However, this algebra could also be constructed constructively: first define the objects of study (statements), then introduce operations on these objects and study their properties. Let us give the formal definitions.

A statement is a declarative sentence of which one can say unambiguously whether it is true (value T, or 1) or false (value F, or 0) at a given moment of time. For example, "5 is a prime number", "the Esc key was pressed", etc. Using the connectives "not", "and", "or", "if …, then", "if and only if" (to which correspond the operations ¬, &, ∨, →, ↔ respectively), more complex statements (sentences) can be constructed. This is how the algebra of propositions is built.

To simplify the writing of complex statements, a precedence of the connectives is introduced: ¬, &, ∨, →, ↔; it allows one to omit superfluous brackets.

We call simple statements propositional variables.

Let's introduce the concept of a formula.

1. Propositional variables are formulas.

2. If A, B are formulas, then the expressions ¬A, A∧B, A∨B, A→B, A↔B are formulas.

3. Formulas are only expressions constructed in accordance with paragraphs 1 and 2.

A formula that takes the value T for all values of the propositional variables is called a tautology (or generally valid), and a formula that takes the value F for all values of the propositional variables is called contradictory (or unsatisfiable).

The description of the properties of propositional algebra is similar to the description of the corresponding functions in Boolean algebra, and we omit them.

§4.2. Predicates. Logical operations on predicates.

The subject of study in this chapter is predicates: mappings of arbitrary sets into the set of statements. In fact we are passing to a new level of abstraction, a transition of the kind made at school when passing from the arithmetic of real numbers to the algebra of numerical functions.

Definition 4.2.1. Let x1, x2, …, xn be symbols for variables of arbitrary nature. We call these variables subject variables. Let the tuples of variables (x1, x2, …, xn) range over the set M = M1×M2×…×Mn, which we call the subject domain (i.e. xi ∈ Mi, where Mi is called the domain of the variable xi). An n-place (n-ary) predicate defined on the subject domain M is a logical function on M that takes either the value T or the value F.

Example 4.2.1. D(x1,x2) = "the natural number x1 is divisible (without remainder) by the natural number x2" is a two-place predicate defined on the set of pairs of natural numbers M = N×N. Obviously, D(4,2) = T and D(3,5) = F.

Example 4.2.2. Q(x) = "x² < −1, x ∈ R" is a one-place predicate defined on the set of real numbers M = R. It is clear that Q(1) = F, Q(5) = F, and in general the predicate Q(x) is identically false, i.e.

Q(x) = F for all x ∈ R.

Example 4.2.3. B(x,y,z) = “x 2 +y 2

A predicate P defined on M is called identically true if it takes the value T for all values of the subject variables, and identically false if it takes the value F for all values of the subject variables. The predicate Q of Example 4.2.2 is identically false.

Since predicates are functions with values ​​in a set of statements where logical operations are introduced, these operations are naturally defined for predicates. Let P and Q be predicates defined on M. Then

1. (¬P)(x1, x2, …, xn) = ¬P(x1, x2, …, xn)

2. (P∧Q)(x1, x2, …, xn) = P(x1, x2, …, xn) ∧ Q(x1, x2, …, xn)

3. (P∨Q)(x1, x2, …, xn) = P(x1, x2, …, xn) ∨ Q(x1, x2, …, xn)

4. (P→Q)(x1, x2, …, xn) = P(x1, x2, …, xn) → Q(x1, x2, …, xn)

Predicates P and Q defined on M are called equivalent (written P = Q) if P(x1, x2, …, xn) = Q(x1, x2, …, xn) for every tuple (x1, x2, …, xn) of subject variables from M.

Theorem 4.2.1. The set of n-ary predicates defined on M forms a Boolean algebra of predicates. Thus the basic equivalences hold for them (see §1.6).

§4.3. Quantifiers and their properties.

Let P(x1, x2, …, xn) be an n-ary predicate defined on M. Fix xk = a. Define an (n−1)-ary predicate Q(x1, …, xk−1, xk+1, …, xn) by setting Q(x1, …, xk−1, xk+1, …, xn) = P(x1, …, xk−1, a, xk+1, …, xn). We say that the predicate Q is obtained from the predicate P by fixing the value of the k-th variable: xk = a.

Definition 4.3.1. Let P(x) be a one-place predicate. Associate with it the statement denoted ∀xP(x) (read "for all x, P(x)"), which is true if and only if P(x) is an identically true predicate. The statement ∀xP(x) is said to be obtained from the predicate P by attaching the universal quantifier over the variable x.

Definition 4.3.2. Let P(x) be a one-place predicate. Associate with it the statement denoted ∃xP(x) (read "there exists x such that P(x)"), which is false if and only if P(x) is an identically false predicate. The statement ∃xP(x) is said to be obtained from the predicate P by attaching the existential quantifier over the variable x.

Note 1. The quantifier symbols ∀ and ∃ are the turned Latin letters A and E, the first letters of the English words All and Exists.

Note 2. Statements may be regarded as predicates containing no variables, i.e. 0-place predicates (or predicates of arbitrary arity).

Let P(x1, x2, …, xn) be an n-ary predicate defined on M. Fix the values of the variables x1, …, xk−1, xk+1, …, xn. To the resulting one-place predicate Q(xk) we attach the universal (existential) quantifier and obtain a statement. Thus each fixed tuple of values of x1, …, xk−1, xk+1, …, xn is associated with a statement by means of the universal (existential) quantifier. The resulting (n−1)-ary predicate of the variables x1, …, xk−1, xk+1, …, xn is said to be obtained from the original predicate P(x1, x2, …, xn) by attaching the universal (existential) quantifier over the k-th variable. It is denoted ∀xk P(x1, x2, …, xn) (respectively, ∃xk P(x1, x2, …, xn)). The k-th variable (which is no longer free) is said to be bound by the universal (existential) quantifier.

Example 4.3.1. Let D(x1,x2) = "the natural number x1 is divisible (without remainder) by the natural number x2" be a two-place predicate.

Let us assign quantifiers successively to its variables. It's clear that

1) ∀x1∀x2 D(x1,x2) = 0; 2) ∀x2∀x1 D(x1,x2) = 0; 3) ∃x1∃x2 D(x1,x2) = 1;

4) ∃x2∃x1 D(x1,x2) = 1; 5) ∀x1∃x2 D(x1,x2) = 1; 6) ∃x2∀x1 D(x1,x2) = 1; 7) ∃x1∀x2 D(x1,x2) = 0; 8) ∀x2∃x1 D(x1,x2) = 1.
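These values can be checked mechanically over a finite initial segment of N (a finite check does not, of course, prove the universal statements over all of N, but on this predicate it agrees with them):

```python
N = range(1, 11)                      # a finite fragment of the naturals
D = lambda x1, x2: x1 % x2 == 0       # "x1 is divisible by x2"

assert not all(D(x1, x2) for x1 in N for x2 in N)       # forall forall = 0
assert any(D(x1, x2) for x1 in N for x2 in N)           # exists exists = 1
assert all(any(D(x1, x2) for x2 in N) for x1 in N)      # forall x1 exists x2 = 1
assert any(all(D(x1, x2) for x1 in N) for x2 in N)      # exists x2 forall x1 = 1 (x2 = 1)
assert not any(all(D(x1, x2) for x2 in N) for x1 in N)  # exists x1 forall x2 = 0
assert all(any(D(x1, x2) for x1 in N) for x2 in N)      # forall x2 exists x1 = 1 (x1 = x2)
print("bounded checks agree")
```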

Thus (by comparing items 7 and 8 of the last example) we have proved the theorem:

Theorem 4.3.1. Opposite quantifiers, generally speaking, do not commute.

Connectives and quantifiers are usually ordered by priority as follows: ¬, ∀, ∃, &, ∨, →, ↔.

Theorem 4.3.2 (basic equivalences containing quantifiers). The following equivalences hold:

1. De Morgan's laws

¬∀xP(x) = ∃x¬P(x),  ¬∃xP(x) = ∀x¬P(x)

2. Commutativity

∀x∀yP(x,y) = ∀y∀xP(x,y),  ∃x∃yP(x,y) = ∃y∃xP(x,y)

3. Distributivity

∀x(P(x) & Q(x)) = ∀xP(x) & ∀xQ(x),  ∃x(P(x) ∨ Q(x)) = ∃xP(x) ∨ ∃xQ(x)

4. Restricting the scope of quantifiers (the formula Q(y) does not contain x)

∀x(P(x) ∨ Q(y)) = ∀xP(x) ∨ Q(y),  ∃x(P(x) & Q(y)) = ∃xP(x) & Q(y)

5. For any two-place predicate

∃y∀xP(x,y) → ∀x∃yP(x,y) = 1

Chapter V. Formal theories.

§5.1. Definition of formal theory.

A formal theory (or calculus) Y is:

1. a set A of symbols forming the alphabet;

2. a set F of words in the alphabet A (F ⊆ A*), which are called formulas;

3. a subset B of formulas (B ⊆ F), which are called axioms;

4. a set R of relations on the set of formulas, which are called rules of inference.

The set A of symbols may be finite or infinite. Usually a finite set of letters is used to form symbols, with natural numbers attached as indices when necessary.

The set F of formulas is usually given by an inductive definition, for example by means of a formal grammar. As a rule, this set is infinite. The sets A and F together determine the language, or signature, of the formal theory.

The set B of axioms may be finite or infinite. If it is infinite, it is, as a rule, specified by a finite set of axiom schemes together with rules for producing concrete axioms from an axiom scheme.

The set R of inference rules is, as a rule, finite. Thus a calculus Y is a quadruple (A, F, B, R).

A derivation in the calculus Y is a sequence of formulas F1, F2, …, Fn such that for every k (1 ≤ k ≤ n) the formula Fk is either an axiom of the calculus Y or a direct consequence of some of the preceding formulas by a rule of inference.

A formula G is called a theorem of the calculus Y (derivable in Y, or provable in Y) if there exists a derivation F1, F2, …, Fn, G, which is called a derivation of the formula G, or a proof of the theorem G.

This is written as follows: ⊢ G.

A calculus Y is called consistent if not all of its formulas are provable. An equivalent definition: a calculus is consistent if the formulas F and ¬F (the negation of F) are never both derivable in it.

A calculus Y is called complete (or adequate) if every true statement corresponds to a theorem of the theory Y.

A formal theory Y is called decidable if there is an algorithm that, for every formula of the theory, determines whether this formula is a theorem of the theory Y or not.

§5.2. Propositional calculus.

Using the notion of a formal calculus, let us define the propositional calculus (PC).

The alphabet of PC consists of

1. the letters A, B, Q, R, P and others, possibly with indices (which are called propositional variables);

2. the logical symbols (connectives) ¬, &, ∨, →;

3. the auxiliary symbols ( , ).

The set of formulas of PC is defined inductively:

1. every propositional variable is a formula of PC;

2. if A, B are formulas of PC, then ¬A, A∧B, A∨B, A→B are formulas of PC;

3. an expression is a formula of PC if and only if this can be established by means of items 1 and 2.

Thus every formula of PC is built from propositional variables using the connectives ¬, ∧, ∨, →.

In the future, when writing formulas, we will omit some parentheses, using the same conventions as in the previous chapter.

The axioms of PC are the following formulas (for arbitrary formulas A, B, C):

1. A→(B→A);

2. (A→B)→((A→(B→C))→(A→C));

5. (A→B)→((A→C)→(A→(B∧C)));

8. (A→C)→((B→C)→((A∨B)→C));

9. (A→B)→((A→¬B)→¬A).

These formulas are called the axiom schemes of PC. Substituting concrete formulas into a scheme produces an instance of the axiom scheme.

The rule of inference of PC is the rule of detachment (modus ponens): if A and A→B are derivable formulas, then B is derivable as well.

Symbolically this is written: A, A→B ⊢ B.

For example, if the statements A∧B and A∧B→(A→C) are derivable, then by the rule of inference the statement A→C is derivable as well.

A formula G is said to be derivable from the formulas F1, F2, …, Fn (written F1, F2, …, Fn ⊢ G) if there exists a sequence of formulas F1, F2, …, Fk, G in which every formula is either an axiom, or belongs to the list F1, F2, …, Fn (whose members are called hypotheses), or is obtained from preceding formulas by the rule of inference. Derivability of G from the empty set of hypotheses (written ⊢ G) is equivalent to G being a theorem of PC.

Example 5.2.1. Let us show that the formula A→A is derivable in PC. To this end we construct a derivation of this formula:

1) in axiom scheme 2 replace B by A→A and C by A.

We obtain the axiom

(A→(A→A))→((A→((A→A)→A))→(A→A));

2) in axiom scheme 1 replace B by A. We obtain A→(A→A);

3) from 1 and 2 by modus ponens we conclude

(A→((A→A)→A))→(A→A);

4) in axiom scheme 1 replace B by A→A. We obtain A→((A→A)→A);

5) from items 3 and 4, by the rule of inference, ⊢ A→A.

Theorem 5.2.1.

1. If F1, F2, …, Fn, A, B are formulas of PC, Γ = {F1, F2, …, Fn} and Γ ⊢ A, then Γ, B ⊢ A (the number of hypotheses may be increased).

2. F1, F2, …, Fn ⊢ A if and only if F1∧F2∧…∧Fn ⊢ A (reduction of a set of hypotheses to a single hypothesis).

§5.3. Deduction theorem. Completeness of IV.

Theorem 5.3.1. (deduction theorem)

If Γ, B ⊢ A, then Γ ⊢ B→A, where Γ is a set of formulas, Γ = {F1, F2, …, Fn}.

Corollary 5.3.1. F1, F2, …, Fn ⊢ A if and only if ⊢ F1→(F2→…→(Fn−1→(Fn→A))…).

Proof. Let F1, F2, …, Fn ⊢ A. Applying the deduction theorem, we obtain F1, F2, …, Fn−1 ⊢ Fn→A. Similarly, F1, F2, …, Fn−2 ⊢ Fn−1→(Fn→A), and so on. Continuing the process the required number of times, we arrive at

⊢ F1→(F2→…→(Fn−1→(Fn→A))…).

To prove sufficiency, suppose ⊢ B, where B = F1→(F2→…→(Fn−1→(Fn→A))…). By Theorem 5.2.1, item 1, F1 ⊢ B. By the rule of inference we obtain F1 ⊢ F2→(…→(Fn−1→(Fn→A))…), and after n steps we have F1, F2, …, Fn ⊢ A.

Thus, thanks to Corollary 5.3.1, checking the derivability of a formula A from formulas F1, F2, …, Fn reduces to checking the provability of the formula

F1→(F2→…→(Fn−1→(Fn→A))…).

Recall that a formula A is called identically true (a tautology) if its value equals 1 for every assignment of values to the propositional variables. The following theorem reduces checking the provability of a formula to checking that it is identically true.

Theorem 5.3.2 (completeness theorem). A formula A is provable if and only if A is identically true (a tautology): ⊢A ⇔ A is a tautology.

Thus, to determine whether a formula is provable it suffices to compile its truth table. As is well known, there is an effective algorithm for constructing truth tables; hence PC is decidable.

Example 5.3.1. Let us prove that P ⊢ P. By the deduction theorem this is equivalent to ⊢ P→P. In turn, by the completeness theorem it suffices to prove that P→P is a tautology. Compiling the truth table for the formula P→P, we see that P→P is identically true and therefore provable.
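The decision procedure suggested by the completeness theorem, enumeration of the truth table, is easy to sketch (an illustration; the function name is ours):

```python
from itertools import product

def is_tautology(formula, variables):
    """Check identical truth by enumerating the whole truth table
    (the effective decision procedure behind the completeness theorem)."""
    return all(formula(*vals) for vals in product((0, 1), repeat=len(variables)))

# P -> P is a tautology, hence |- (P -> P) and P |- P:
assert is_tautology(lambda p: int(p <= p), ["P"])
# An instance of the scheme A -> (B -> A) is also a tautology:
assert is_tautology(lambda a, b: int(a <= int(b <= a)), ["A", "B"])
print("provable")
```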

Theorem 5.3.3 (consistency theorem). The calculus PC is consistent.

Proof. By the completeness theorem, any formula that is not identically true is unprovable in PC. Such a formula is, for example, A∧¬A.

A set of formulas Γ is called inconsistent (contradictory) if Γ ⊢ A∧¬A. If Γ is an inconsistent set of formulas, we denote this fact by Γ ⊢.

Statement 5.3.1. A formula A is derivable from a set of formulas Γ if and only if the set Γ ∪ {¬A} is inconsistent.

§5.4. Automatic proof of theorems.

Automatic theorem proving is the cornerstone of logic programming, artificial intelligence and other modern areas of programming. Generally speaking, there may be no algorithm which, for an arbitrary formula A, determines after a finite number of steps whether A is derivable in the calculus Y. However, for some simple formal theories (for example, the propositional calculus) and some simple classes of formulas (for example, the applied predicate calculus with a single one-place predicate), algorithms for automatic theorem proving are known. Below, using the propositional calculus as an example, we outline the basics of the resolution method, a classical and at the same time popular method of automatic theorem proving.

§5.5. The resolution method in PC.

Recall that if x is a logical variable and σ ∈ {0,1}, then the expression

x^σ = x if σ = 1, and x^σ = ¬x if σ = 0 (equivalently, x^σ = 1 if x = σ and x^σ = 0 if x ≠ σ)

is called a literal. The literals x and ¬x are called contrary. A conjunct is a conjunction of literals; a clause (disjunct) is a disjunction of literals.

Let D1 = B1 ∨ A and D2 = B2 ∨ ¬A be clauses. The clause B1 ∨ B2 is called the resolvent of the clauses D1 and D2 with respect to the literal A and is denoted res_A(D1, D2). A resolvent of D1 and D2 is their resolvent with respect to some literal; it is denoted res(D1, D2). Obviously, res(A, ¬A) = 0. Indeed, since A = A∨0 and ¬A = ¬A∨0, we have res(A, ¬A) = 0∨0 = 0. If the clauses D1 and D2 contain no contrary literals, then they have no resolvents.

Example 5.5.1. If

D1 = A∨B∨C, D2 = ¬A∨¬B∨Q, then

res_A(D1, D2) = B∨C∨¬B∨Q, res_B(D1, D2) = A∨C∨¬A∨Q, and res_C(D1, D2) does not exist.

Statement 5.5.1. If res(D1, D2) exists, then D1, D2 ⊢ res(D1, D2).

Let S = {D1, D2, …, Dn} be a set of clauses.

A sequence of formulas F1, F2, …, Fn is called a resolutive derivation from S if for every formula Fk one of the following conditions holds:

1. Fk ∈ S;

2. there exist j, l < k such that Fk = res(Fj, Fl).

Theorem 5.5.1 (completeness of the resolution method). A set of clauses S is inconsistent if and only if there exists a resolutive derivation from S ending in 0.

Note that the resolution method can be used to check the derivability of a formula F from a given set of formulas F1, F2, …, Fn. Indeed, the condition F1, F2, …, Fn ⊢ F is equivalent to the condition F1, F2, …, Fn, ¬F ⊢ (the set of formulas is inconsistent), which in turn is equivalent to the condition Q ⊢, where Q = F1∧F2∧…∧Fn∧¬F. Reduce the formula Q to CNF: Q = D1∧D2∧…∧Dm; then Q ⊢ ⇔ D1∧D2∧…∧Dm ⊢ ⇔ D1, D2, …, Dm ⊢. Thus the task of checking the derivability F1, F2, …, Fn ⊢ F reduces to checking the inconsistency of the set of clauses S = {D1, D2, …, Dm}, which is equivalent to the existence of a resolutive derivation of 0 from S.

Example 5.5.2. Check the relation A→(B→C), C·D→E, ¬F→D&¬E ⊢ A→(B→F) by the resolution method.

By Statement 5.3.1 we need to check the inconsistency of the set of formulas

S = {A→(B→C), C·D→E, ¬F→D&¬E, ¬(A→(B→F))}.

We reduce all the formulas of S to CNF:

S = {¬A∨¬B∨C, ¬(C·D)∨E, F∨D·¬E, A·B·¬F} = {¬A∨¬B∨C, ¬C∨¬D∨E, (F∨D)·(F∨¬E), A·B·¬F}.

Thus we obtain the set of clauses S = {¬A∨¬B∨C, ¬C∨¬D∨E, F∨D, F∨¬E, A, B, ¬F}.

Let's construct a resolutive conclusion from S ending with 0:

1. res_A(¬A∨¬B∨C, A) = ¬B∨C;

2. res_B(¬B∨C, B) = C;

3. res_D(¬C∨¬D∨E, F∨D) = ¬C∨E∨F;

4. res_E(¬C∨E∨F, F∨¬E) = ¬C∨F;

5. res_C(C, ¬C∨F) = F;

6. res(F, ¬F) = 0.

So we conclude that the formula A→(B→F) is derivable from the set of formulas A→(B→C), C·D→E, ¬F→D&¬E.
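The resolution procedure of this example can be sketched as a small program (an illustration; the clause representation and names are ours):

```python
from itertools import combinations

# Literals: "A" and "~A" are contrary. A clause is a frozenset of literals.
neg = lambda lit: lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(d1, d2):
    """All resolvents of clauses d1, d2 (the empty frozenset plays the role of 0)."""
    return [(d1 - {lit}) | (d2 - {neg(lit)}) for lit in d1 if neg(lit) in d2]

def derives_zero(clauses):
    """Saturate the set under resolution; by the completeness theorem the
    set is inconsistent iff the empty clause is eventually produced."""
    s = set(clauses)
    while True:
        new = {r for d1, d2 in combinations(s, 2) for r in resolvents(d1, d2)}
        if frozenset() in new:
            return True
        if new <= s:          # nothing new: saturated without deriving 0
            return False
        s |= new

# The clause set of Example 5.5.2 (negations written with ~):
S = [frozenset(c) for c in (
    {"~A", "~B", "C"}, {"~C", "~D", "E"}, {"F", "D"},
    {"F", "~E"}, {"A"}, {"B"}, {"~F"},
)]
print(derives_zero(S))   # True: the set is inconsistent
```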

Note that the resolution method also suffices to detect satisfiability of a given set of clauses S. To this end, include in S all clauses obtained by resolutive derivations from S. From the completeness theorem for the resolution method there follows

Corollary 5.5.1. If a set of clauses S contains the resolvents of all its elements, then S is satisfiable if and only if 0 ∉ S.

Chapter VI. Elements of the theory of algorithms.

§6.1. Algorithm definition.

A characteristic feature of our time is the widespread use of computers in solving problems in various fields of human activity. The problem, however, must first be solved algorithmically, i.e. a formal prescription must be given by following which one can obtain the final result for all problems of a certain type (this is an intuitive, not a rigorous, notion of algorithm). For example, an algorithm for finding the greatest common divisor of two natural numbers a and b is as follows:

1) decompose the number a into prime factors;

2) repeat step 1 for b and go to step 3;

3) form the product of the prime factors common to the decompositions of a and b, taking each with the smaller of the two exponents with which it occurs in the decompositions.
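This prescription can be sketched directly in code (an illustrative implementation; the function names are ours):

```python
def prime_factors(n):
    """Step 1: decompose n into prime factors, returned as {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcd_by_factoring(a, b):
    """Steps 2-3: factor both numbers, then multiply the common primes,
    each taken with the smaller of its two exponents."""
    fa, fb = prime_factors(a), prime_factors(b)
    g = 1
    for p in fa.keys() & fb.keys():
        g *= p ** min(fa[p], fb[p])
    return g

print(gcd_by_factoring(12, 18))   # 6  (12 = 2^2 * 3, 18 = 2 * 3^2)
```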

Having analyzed this example, we note the most important features (properties) of the algorithm:

1. Mass applicability: the algorithm applies not to a single problem but to a class of problems.

2. Discreteness: a clear division of the algorithm into separate stages (steps).

3. Determinism: the stages are organized so that it is always clear how to pass from one stage to the next.

4. Finiteness: applying the algorithm to a concrete problem yields the result after a finite sequence of steps.

Note that while exhibiting an algorithm is in itself a proof of its existence, proving the absence of an algorithm requires a rigorous mathematical definition of the concept.

Attempts to formalize the notion of algorithm led to the creation of the Turing machine, an imaginary device implementing an algorithm. The next step in defining the notion was the appearance of recursive functions, which formalize the notion of algorithm and realize the intuitive notion of computability. It was soon discovered that the set of recursive functions coincides with the set of functions computable on Turing machines. New notions proposed afterwards to capture the concept of algorithm all turned out to be equivalent to computability on Turing machines, and hence to recursive functions. The outcome of the continuing discussion of what an algorithm is became the statement now called Church's thesis.

Church's thesis. The notion of algorithm, i.e. of computability by some mechanical device, coincides with the notion of computability on Turing machines (and hence with the notion of recursive function). In other words, an algorithm is a process that can be represented by a functional scheme and realized by some Turing machine.

§6.2. Turing machine.

A Turing machine is an (abstract) device consisting of a tape, a control device and a read-write head.

The tape is divided into cells. Each cell contains exactly one symbol of the external alphabet A = {a0, a1, …, an}. One symbol of the alphabet A (we denote it Λ) is called blank, and a cell containing the blank symbol at a given moment is called empty (at that moment). The tape is assumed to be potentially unbounded in both directions.

The control device is at each moment of time in some state qj from the set Q = {q0, q1, …, qm} (m ≥ 1). The set Q is called the internal alphabet. One of the states (usually q0) is called final, and some other state (usually q1) initial.

The head moves along the tape so that at each moment it scans exactly one cell. The head can read the contents of the scanned cell and write into it, in place of the scanned symbol, some new symbol of the external alphabet A (possibly the same one).

During operation the control device, depending on its current state and the symbol scanned by the head, changes its internal state q, then orders the head to print a certain symbol of the external alphabet A in the scanned cell, and then orders the head either to stay in place, or to move one cell to the right, or to move one cell to the left. Having reached the final state, the machine stops.

A configuration on the tape (or machine word) is the set formed by:

1) the sequence a_i(1), a_i(2), …, a_i(s) of symbols of the external alphabet A written in the cells of the tape, where a_i(1) is the symbol in the leftmost cell, etc. (any such sequence is called a word);

2) the state q_r of the internal memory;

3) the number k of the scanned cell.

We write a machine configuration as follows:

a_i(1), a_i(2), …, a_i(r−1), [a_i(r) / q_r], a_i(r+1), …, a_i(s),

where the scanned (r-th) cell is marked as a "fraction", with the current state q_r written under the scanned symbol.

If the machine in the internal state qi, scanning a cell with the symbol au, at the next moment writes the symbol ar into this cell, passes to the internal state qs and moves along the tape, we say that the machine executes the command qi au → qs ar S, where the symbol S takes one of the values: −1 (move the head one cell left), +1 (move the head one cell right), 0 (the head stays in place). The list of all commands (quintuples) determining the operation of a Turing machine is called the program of this machine. The program is often given as a table: for the situation just described, the entry qs ar S stands at the intersection of row au and column qi (see Table 6.2.1).

Table 6.2.1.

     q0  …  qi       …  qm
au          qs ar S

If the program of the machine contains no quintuple for some pair (qi, au), a dash is placed at the intersection of row au and column qi.

Thus, a Turing machine is, by definition, a triple M = (A, Q, P), where A is the external alphabet, Q the alphabet of internal states, and P the program.

If the machine, started on some word P written on the tape, reaches the final state, it is said to be applicable to this word; the result of its work is the word written on the tape in the final configuration. Otherwise the machine is said to be inapplicable to the word P.

Example 6.2.1. Let us build a Turing machine that adds natural numbers written in the unary number system (i.e. written with the single symbol |; for example, 5 = |||||).

Solution. Consider the alphabet A = {|, +, Λ}.

The machine is determined by the following program:

Let us write out the configurations arising successively during the work of this machine on the initial word ||+|||, i.e. 2+3. When recording a configuration we use the convention that the state of the machine is written in parentheses to the right of the scanned letter.
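One possible program for such a machine (first replace '+' by '|', then erase the last stroke) can be given together with a minimal simulator; the state names and quintuples below are our own choice, not necessarily those of the lecture:

```python
BLANK = " "   # stands for the blank symbol of the external alphabet

def run(program, tape, state="q1", head=0, final="q0", max_steps=10_000):
    """Simulate a Turing machine. `program` maps (state, symbol) to
    (new_state, new_symbol, move), where move is -1, 0 or +1."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == final:
            break
        sym = cells.get(head, BLANK)
        state, cells[head], move = program[(state, sym)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# One possible adder for unary numbers, e.g. "||+|||" -> "|||||":
adder = {
    ("q1", "|"): ("q1", "|", +1),      # scan right over the first summand
    ("q1", "+"): ("q2", "|", +1),      # replace '+' by '|'
    ("q2", "|"): ("q2", "|", +1),      # scan right to the end of the word
    ("q2", BLANK): ("q3", BLANK, -1),  # step back onto the last '|'
    ("q3", "|"): ("q0", BLANK, 0),     # erase it and halt in q0
}
print(run(adder, "||+|||"))            # |||||
```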

Example 6.2.2. Build a Turing machine that doubles natural numbers written in the unary number system.

Solution. We construct the required machine over the alphabet A = {|, α, Λ}. The program of such a machine might look like this:

Let's apply the resulting machine to the word || .

Introducing the new letter α and replacing the original |'s by α allows us to distinguish the original symbols | from the new (appended) ones. The state q1 performs the replacement of | by α; the state q2 searches for an α to be replaced by |, and stops the machine when no α is found; the state q3 appends a | when an α has been replaced by |.

§6.3. Recursive functions

Let us agree that in this paragraph

1. the set N of natural numbers contains 0 (zero), i.e. N = {0, 1, 2, 3, …};

2. the functions considered, f = f(x1, x2, …, xn), are defined only when the variables take values in N, i.e. xi ∈ N;

3. the range of values of the functions is D ⊆ N;

4. the functions considered, f = f(x1, x2, …, xn), may be partially defined, i.e. not defined on all tuples of natural numbers.

Let us introduce the simplest functions

o(x) = 0, s(x) = x+1, I_n^m(x1, …, xn) = xm (1 ≤ m ≤ n).

These functions can be calculated using an appropriate mechanical device (for example, a Turing machine). Let us define operators that construct new functions based on one or more given functions.

Superposition operator. Let there be given a function f(x1, x2, …, xk) of k variables and k functions f1(x1, x2, …, xn), …, fk(x1, x2, …, xn) of n variables. The superposition of the functions f, f1, …, fk is the function φ(x1, x2, …, xn) = f(f1(x1, x2, …, xn), …, fk(x1, x2, …, xn)).

We say that φ is obtained by applying the superposition operator S^{k+1} to the functions f, f1, …, fk, and write φ = S^{k+1}(f, f1, …, fk). For example, S²(s, o) = s(o(x)) is the function identically equal to 1, and S²(s, s) = s(s(x)) is the function y(x) = x+2.

Primitive recursion operator. Let functions g(x1, x2, …, xn) and h(x1, x2, …, xn+2) be given. We construct a function f(x1, x2, …, xn+1). Let the values x1, x2, …, xn be fixed. Then we set:

f(x1, x2, …, xn, 0) = g(x1, x2, …, xn),

f(x1, x2, …, xn, y+1) = h(x1, x2, …, xn, y, f(x1, x2, …, xn, y)).

These equalities define the function f(x1, x2, …, xn+1) uniquely. The function f is said to be obtained by the primitive recursion operator R; one writes f = R(g, h).

Inductive definitions of functions (of which the primitive recursion operator is an instance) are not rare in mathematics. For example: 1) the power with a natural exponent is defined inductively: a⁰ = 1, a^{n+1} = aⁿ·a; 2) the factorial: 0! = 1, (n+1)! = n!·(n+1), and so on.

Definition. Functions obtainable from the simplest functions o(x) = 0, s(x) = x+1, I_n^m(x1, …, xn) = xm by applying the superposition and primitive recursion operators finitely many times are called primitive recursive.

Let us check that the function u(x, y) = x + y is primitive recursive. Indeed, we have: u(x, 0) = x, u(x, y+1) = x + y + 1 = u(x, y) + 1. This is a primitive recursion scheme, since x = I^1_1(x) and u(x, y) + 1 = s(u(x, y)). Here g(x) = I^1_1(x), and h(x, y, u) = s(u) = S^2(s, I^3_3)(x, y, u).
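The primitive recursion operator and the addition example above can be sketched in Python (an illustrative model; the recursion on y is unfolded as a loop, and the names are ours):

```python
def R(g, h):
    """Primitive recursion:
       f(x1..xn, 0)   = g(x1..xn)
       f(x1..xn, y+1) = h(x1..xn, y, f(x1..xn, y))"""
    def f(*args):
        *xs, y = args
        acc = g(*xs)
        for i in range(y):   # unfold f(xs, 0), f(xs, 1), ..., f(xs, y)
            acc = h(*xs, i, acc)
        return acc
    return f

# u(x, y) = x + y with g(x) = x and h(x, y, u) = u + 1
add = R(lambda x: x, lambda x, y, u: u + 1)
```

For instance, `add(3, 4)` unfolds the scheme four times and returns 7.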

It can similarly be proved that the functions m(x, y) = x · y, d(x, y) = x^y (we set 0^0 = 1 by definition), fact(x) = x!, and many others are primitive recursive.

Note that primitive recursive functions are everywhere defined (that is, defined for all values of their arguments). Indeed, the simplest functions o, s, I^n_m are everywhere defined, and applying the superposition and primitive recursion operators to everywhere defined functions again gives everywhere defined functions. So a function like

f(x, y) = x − y if x ≥ y, and undefined if x < y,

cannot be primitive recursive. We have no right to consider the function f(x, y) = x − y here, since function values must be natural numbers (hence non-negative). However, one can consider the function

÷ y = 0x − y ifif x x<≥y.y

Let's check that it is primitively recursive. Let us first prove that the function j(x)=xπ1 is primitive recursive. Indeed, j(0)=0. j(y+1)=(y+1)π1=y, which serves as a primitive recursion scheme for the function xπ1. Finally, xπ0=x, xπ(y+1)=(xπy)π1=j(xπy) is a primitive recursion scheme for xπy.
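The two recursion schemes just given translate directly into code. A self-contained sketch (R is the same model of the primitive recursion operator as before, redefined here so the example stands on its own):

```python
def R(g, h):
    """Primitive recursion: f(xs, 0) = g(xs); f(xs, y+1) = h(xs, y, f(xs, y))."""
    def f(*args):
        *xs, y = args
        acc = g(*xs)
        for i in range(y):
            acc = h(*xs, i, acc)
        return acc
    return f

# pred(x) = x "minus one", truncated at 0:  pred(0) = 0, pred(y+1) = y
pred = R(lambda: 0, lambda y, p: y)

# truncated subtraction:  x - 0 = x,  x - (y+1) = pred(x - y)
monus = R(lambda x: x, lambda x, y, m: pred(m))
```

Here `monus(7, 3)` gives 4 and `monus(3, 7)` gives 0, as the definition requires.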

A significantly broader class of functions than the primitive recursive ones is the class of recursive functions (see the definition below). In the literature these functions are also called partial recursive. To define them, we introduce one more operator.

Minimization operator. Let a function f(x_1, …, x_n, x_{n+1}) be given. Fix some values x_1, …, x_n of the first n variables and compute f(x_1, …, x_n, 0), f(x_1, …, x_n, 1), f(x_1, …, x_n, 2), and so on. If y is the smallest natural number for which f(x_1, …, x_n, y) = x_{n+1} (i.e. the values f(x_1, …, x_n, 0), f(x_1, …, x_n, 1), …, f(x_1, …, x_n, y−1) all exist and are not equal to x_{n+1}), then we put g(x_1, …, x_n, x_{n+1}) = y. Thus, g(x_1, …, x_n, x_{n+1}) = min{ y | f(x_1, …, x_n, y) = x_{n+1} }.

If no such y exists, then g(x_1, …, x_n, x_{n+1}) is considered undefined. So, three cases are possible:

1. f(x_1, …, x_n, 0), f(x_1, …, x_n, 1), …, f(x_1, …, x_n, y−1) exist and are not equal to x_{n+1}, and f(x_1, …, x_n, y) = x_{n+1};

2. f(x_1, …, x_n, 0), f(x_1, …, x_n, 1), …, f(x_1, …, x_n, y−1) exist and are not equal to x_{n+1}, but f(x_1, …, x_n, y) does not exist;

3. f(x_1, …, x_n, i) exists for all i ∈ N and is different from x_{n+1}.

If the first case occurs, then g(x_1, …, x_n, x_{n+1}) = y; in the second or third case, g(x_1, …, x_n, x_{n+1}) is not defined. A function g obtained in this way is said to be obtained from f using the minimization operator M. We write g = M(f).
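The minimization operator can be sketched in Python. Since the search for y can diverge (cases 2 and 3), the model below bounds the search and raises an error to stand in for "undefined"; the bound is our practical addition, not part of the definition:

```python
def M(f, bound=100_000):
    """g(x1..xn, t) = the least y with f(x1..xn, y) = t, else 'undefined'."""
    def g(*args):
        *xs, target = args
        for y in range(bound):
            if f(*xs, y) == target:   # first witness found
                return y
        raise ValueError("undefined: no witness found below the bound")
    return g

# Subtraction as the inverse of addition:
# sub(y, x) = min{ z | y + z = x }, i.e. x - y, undefined when x < y.
sub = M(lambda y, z: y + z)
```

Here `sub(2, 5)` returns 3, while `sub(5, 2)` raises, modeling the partiality of recursive functions.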

The minimization operator is an obvious generalization of the inverse-function operator. The generalization is quite deep, since the function f is not required to be one-to-one (as a function of x_{n+1}).

Definition. Functions that can be obtained from the simplest functions o(x) = 0, s(x) = x + 1, I^n_m(x_1, …, x_n) = x_m by applying the superposition, primitive recursion, and minimization operators a finite number of times are called recursive.

The class of recursive functions is wider than the class of primitive recursive functions, if only because it contains functions that are not everywhere defined. Let us prove, for example, that the function

f(x, y) = x − y if x ≥ y, and undefined if x < y,

is recursive. Indeed, f(x, y) = min{ z | y + z = x }, and it was established earlier that the function u(x, y) = x + y is primitive recursive.

Recursive functions reflect our intuitive understanding of the functions that some mechanical device can compute. In particular, they are computable on Turing machines (see the previous section). Conversely, every function computable on a Turing machine is recursive. We will not verify this fact, as it would take too much time and space. A complete proof can be found, for example, in the book by A.I. Maltsev, "Algorithms and Recursive Functions".

Note that not every function of natural arguments is recursive, not even every function of a single argument. The existence of non-recursive functions is the “mathematical reason” for the presence of algorithmically unsolvable problems.

§6.4. Algorithmically unsolvable problems.

In various branches of mathematics there are algorithmically unsolvable problems, i.e. problems for which no solution algorithm exists, not because it has not yet been invented, but because one is impossible in principle. Here, of course, "algorithm" must be understood in the precise sense of Turing machines and recursive functions.

Let us formulate one of these problems.

The halting problem for Turing machines. A Turing machine is an object defined by a finite number of parameters. All partial mappings from one finite set to another can be effectively enumerated, so each Turing machine can be assigned a natural number. Let T(n) be the Turing machine with number n. Some machines started on an empty tape eventually stop, and some run forever. The problem arises: given a natural number n, determine whether the Turing machine T(n), started on an empty tape, will stop or not. This problem is algorithmically undecidable. That is, there is no uniform procedure that, for each n, decides whether the machine T(n) halts. This does not prevent us from determining, for some particular machine, whether it halts; what does not exist is a single method that solves this for all machines at once.
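The diagonal argument behind this fact can be sketched in Python (an illustration only: functions stand in for numbered machines, and `halts` is any hypothetical decider, a name we introduce for the sketch):

```python
def make_diagonal(halts):
    """Given a purported halting decider, build a program it must get wrong."""
    def d():
        if halts(d):
            while True:   # decider says "halts", so loop forever
                pass
        # decider says "loops", so halt immediately
    return d

# Whatever a candidate decider answers on d, d's actual behaviour refutes it.
claims_nothing_halts = lambda program: False
d = make_diagonal(claims_nothing_halts)
d()  # d halts, contradicting the decider's answer of False
```

The same construction defeats every candidate decider: if it answers True on d, then d loops forever; if False, then d halts. So no single `halts` can be correct on all programs.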

§6.5. Algorithms and their complexities.

Given a problem, how to find an efficient algorithm to solve it? And if an algorithm is found, how can it be compared with other algorithms that solve the same problem? How to evaluate its quality? Questions of this kind are of interest to both programmers and those involved in the theoretical study of computing.

There are many criteria for evaluating algorithms. Most often, we will be interested in the order of growth of the time and memory capacity required to solve a problem as the input data increases. We would like to associate a number with each specific task, called its size , which would express a measure of the amount of input data. For example, the size of a matrix multiplication problem may be the largest size of the factor matrices.

The time taken by an algorithm as a function of problem size is called the time complexity of this algorithm. The behavior of this complexity in the limit of increasing problem size is called the asymptotic time complexity. Similarly one can define space complexity and asymptotic space complexity.

It is the asymptotic complexity of an algorithm that ultimately determines the size of the problems the algorithm can solve. If an algorithm processes inputs of size n in time c · n², where c is some constant, then we say that the time complexity of this algorithm is O(n²) (read "of order n squared").

One would think that the enormous increase in computing speed brought about by the current generation of digital computing machines would reduce the importance of efficient algorithms. However, the opposite happens. As computing machines get faster and faster and we can solve larger problems, it is the complexity of the algorithm that determines the increase in problem size that can be achieved as the machine speed increases.

Suppose we have five algorithms A1, A2, …, A5 with time complexities n, n·log n, n², n³, and 2ⁿ, respectively.

Here, the time complexity is the number of time units required to process an input of size n. Let the unit of time be one millisecond (1 s = 1000 ms). Then algorithm A1 can process an input of size 1000 in one second, while A5 can process an input of size no more than 9. Table 6.5.1 gives the sizes of problems that each of these five algorithms can solve in one second, one minute, and one hour.

Table 6.5.1.
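The entries of this table can be recomputed from the definitions: with complexity measured in milliseconds, the largest solvable size is the largest n with f(n) not exceeding the time budget. A sketch (taking log base 2 for A2 is our assumption):

```python
import math

# Nominal complexities of the five algorithms, in milliseconds.
complexities = {
    "A1": lambda n: n,
    "A2": lambda n: n * math.log2(n),
    "A3": lambda n: n ** 2,
    "A4": lambda n: n ** 3,
    "A5": lambda n: 2 ** n,
}

def max_size(f, budget_ms):
    """Largest n whose running time f(n), in ms, fits in the budget."""
    n = 1
    while f(n + 1) <= budget_ms:
        n += 1
    return n

second, minute, hour = 1_000, 60_000, 3_600_000
table = {name: [max_size(f, b) for b in (second, minute, hour)]
         for name, f in complexities.items()}
```

This reproduces the figures quoted in the text: in one second A1 handles size 1000 while A5 handles only size 9.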

Let's assume that the next generation of computers will be 10 times faster than the current one. Table 6.5.2 shows how the size of the problems we can solve will grow due to this increase in speed. Note that for algorithm A5, a tenfold increase in speed increases the size of the solvable problem by only three units (see the last line of Table 6.5.2), while for algorithm A3 the problem size more than triples.

Table 6.5.2.
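The entries of this second table can likewise be recomputed: on a machine that is 10 times faster, the new solvable size is the largest n' with f(n') at most 10·f(n). A sketch under the same assumptions as before:

```python
def size_after_speedup(f, old_n, factor=10):
    """New largest solvable size when the machine gets `factor` times faster."""
    budget = factor * f(old_n)
    n = old_n
    while f(n + 1) <= budget:
        n += 1
    return n

# A5 (2^n): a tenfold speedup adds only log2(10), about 3 units of size.
# A3 (n^2): the solvable size grows by a factor of sqrt(10), about 3.16.
```

For example, A5 goes from size 9 to size 12, while A3 goes from 31 to 98.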

Instead of the effect of increasing speed, let us now consider the effect of using a more efficient algorithm. Let's return to Table 6.5.1. If we take 1 minute as the basis for comparison, then by replacing algorithm A4 with A3 we can solve a problem 6 times larger, and by replacing A4 with A2 we can solve a problem 125 times larger. These results are far more impressive than the twofold improvement achieved by a tenfold increase in speed. If we take 1 hour as the basis for comparison, the difference is even more significant. From this we conclude that the asymptotic complexity of an algorithm is an important measure of its quality, one that promises to become even more important with further increases in computation speed.

Although our main attention here is on the order of growth, one must understand that an algorithm with a larger order of growth may have a smaller multiplicative constant (the constant c in the definition of O(f(n))) than an algorithm with a smaller order of growth. In that case, the algorithm with the faster-growing complexity may be preferable for small problems, possibly even for all the problems we are interested in. For example, suppose that the time complexities of algorithms A1, A2, A3, A4, A5 are actually 1000n, 100n·log n, 10n², n³, and 2ⁿ, respectively. Then A5 will be the best for problems of size 2 ≤ n ≤ 9, A3 for 10 ≤ n ≤ 58, A2 for 59 ≤ n ≤ 1024, and A1 for n > 1024.
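These crossover points are easy to verify numerically (a sketch; log base 2 is our assumption):

```python
import math

# Concrete costs, with constants, from the example above.
costs = {
    "A1": lambda n: 1000 * n,
    "A2": lambda n: 100 * n * math.log2(n),
    "A3": lambda n: 10 * n ** 2,
    "A4": lambda n: n ** 3,
    "A5": lambda n: 2 ** n,
}

def best(n):
    """Name of the cheapest algorithm for input size n."""
    return min(costs, key=lambda name: costs[name](n))
```

Evaluating best(n) over a range of sizes reproduces the stated intervals: A5 up to 9, A3 up to 58, A2 up to 1024, A1 beyond.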

LITERATURE.

1. F.A. Novikov. Discrete Mathematics for Programmers. St. Petersburg: Piter, 2001. 304 pp.

2. S.V. Sudoplatov, E.V. Ovchinnikova. Elements of Discrete Mathematics. Moscow: INFRA-M; Novosibirsk: NSTU Publishing House.

3. Ya.M. Erusalimsky. Discrete Mathematics. Moscow: University Book, 2001. 279 pp.

4. A. Aho, J. Hopcroft, J. Ullman. The Design and Analysis of Computer Algorithms. Moscow: Mir, 1979. 536 pp.

5. V.N. Nefedov, V.A. Osipova. A Course in Discrete Mathematics. Moscow: MAI Publishing House, 1992. 264 pp.

Mathematical logic and theory of algorithms – Course of lectures
Introduction.

    1. Purpose.
This course develops the knowledge and skills that form the theoretical foundation needed for posing and solving problems in computer science, and for a correct understanding of the limitations that arise when creating computational structures, algorithms, and information processing programs.

New sections of the discrete mathematics course, although implemented in the form of educational programs and lecture series, do not yet exist in the form of monographs, at least in Russian, since the discrete mathematics course for technical universities has been focused on the old applied problems that engineers used to solve. In mathematical logic, in particular, this was the minimization of logic circuits, which has lost its relevance today.

It is interesting to note that the theory of synthesis of logical circuits, having gone through an almost complete “biological cycle” before the eyes of one generation of researchers, is a very instructive example of how branches of technical sciences that are poorly connected with fundamental science are highly susceptible to obsolescence. Just 10 years ago, all technical journals were filled with articles on minimization and synthesis of logic circuits. Most of the minimization methods developed by scientists are now forgotten and are not in demand in practice. And those ideas that were considered purely theoretical at that time have found practical application in modern technology. For example, fuzzy logic, Petri nets, and algorithm theory have stood the test of time and are widely used in various fields of cybernetics and programming such as systems programming, computational complexity, and artificial intelligence.

The theory of algorithms, meanwhile, has become the central section of discrete mathematics. However, unlike most monographs in Russian, this course of lectures presents these topics as a means of solving practical, engineering problems.

As you know, after each decade, the hardware components of computers, operating systems, access tools and the programs themselves change radically. However, the structures and algorithms underlying them remain unchanged for much longer. These foundations began to be laid thousands of years ago, when formal logic was developed and the first algorithms were developed.

Mathematical logic and the theory of algorithms traditionally belong to fundamental science and are considered to have little connection with practice and are difficult to understand. Indeed, when J. Boole created the mathematical apparatus of Boolean algebra, it did not find practical application for a long time, but in the 20th century it was this mathematical apparatus that made it possible to design all computer components. Consequently, the first of these prejudices is successfully refuted by the development of computer technology.

As for the prejudice about the difficulty of understanding this discipline, it largely stems from the fact that books on mathematical logic and the theory of algorithms were written by mathematicians for mathematicians.

Now, when the capabilities of computing technology have increased many times over, and there are much more personal computers themselves than there are people who know how to use them effectively, understanding what can and cannot be done with the help of modern computing technology is of exceptional importance.

It was the general theory of algorithms that showed that there are problems unsolvable no matter how great the computing power, and its rapidly developing branch, the theory of computational complexity, gradually leads to the understanding that there are problems which are solvable but objectively hard: their complexity may be absolute in the sense that they are practically out of reach for modern computers.

This course set the following objectives:

1. Present all the issues under consideration as simply as possible, but not simpler than is required for a highly qualified specialist.

2. Practical problems of design and analysis of information systems are the starting point, and the formal apparatus is a means of systematically solving these problems. It is our deep conviction that a student is not a vessel that needs to be filled, but a torch that needs to be lit.

3. Each section of the course contains self-test questions. To master this course, the student must answer all these questions.

As a result of mastering this course, the student, based on a clear understanding of the relevant theoretical sections, should be able to:

Implement the simplest type of logical transformation of information in an arbitrary basis of logical functions;

Identify the logical structure in evidential reasoning in natural language, construct formal proof schemes and check their correctness.

1.2 Logical representations
Logical representations are descriptions of the system, process, or phenomenon under study in the form of a set of complex statements made up of simple (elementary) statements and logical connectives between them. Logical representations and their components are characterized by certain properties and by a set of admissible transformations over them (operations, inference rules, etc.) implementing the correct methods of reasoning, the laws of logic, developed in formal (mathematical) logic.

Methods (rules) for the formal representation of statements, for constructing new statements from existing ones by logically correct transformations, and methods for establishing the truth or falsity of statements are studied in mathematical logic. Modern mathematical logic includes two main sections: propositional logic and predicate logic, which subsumes it (Fig. 1.1). For their construction there are two approaches (languages), giving two variants of formal logic: the algebra of logic and logical calculi. There is a one-to-one correspondence between the basic concepts of these two languages of formal logic; their isomorphism is ultimately ensured by the unity of the underlying admissible transformations.

Fig. 1.1
The main objects of traditional branches of logic are statements.

A statement is a declarative sentence (an assertion, a judgment) of which it is meaningful to say that it is true or false. All scientific knowledge (laws and phenomena of physics, chemistry, biology, etc., mathematical theorems, and so on), events of everyday life, and situations arising in economics and management are formulated as statements. Imperative and interrogative sentences are not statements.

Examples of statements: “Twice two is four”, “We live in the 21st century”, “The ruble is the Russian currency”, “Alyosha is Oleg’s brother”, “The operations of union, intersection and complement are Boolean operations on sets”, “Man is mortal”, “Rearranging the terms does not change the sum”, “Today is Monday”, “If it rains, you should take an umbrella”.

In order to operate further with these sentences as statements, we must know for each of them whether it is true or false, i.e. know its truth value. Note that in some cases the truth or falsity of a statement depends on what specific reality (system, process, phenomenon) we are trying to describe with it. In this case the statement is said to be true (or false) in the given interpretation (context). In what follows we assume that the context is given and the statement has a definite truth value.

1.3 History of the development of mathematical logic

Logic as a science was formed in the 4th century BC. It was created by the Greek scientist Aristotle.

The word “logic” comes from the Greek “logos”, which means, on the one hand, “word” or “exposition” and, on the other, thinking. S.I. Ozhegov’s explanatory dictionary says: “Logic is the science of the laws of thinking and its forms.” In the 17th century the German scientist Leibniz planned to create a new science that would be “the art of computing truth”. In this logic, according to Leibniz, each statement would have a corresponding symbol, and reasoning would take the form of calculation. This idea of Leibniz, not finding understanding among his contemporaries, was neither disseminated nor developed and remained a brilliant conjecture.

Only in the middle of the 19th century did the English mathematician George Boole realize Leibniz's idea. In 1854 he wrote the work “An Investigation of the Laws of Thought”, which laid the foundations of the algebra of logic, in which laws similar to those of ordinary algebra hold, but where the letters denote not numbers but statements. In the language of Boolean algebra one can describe reasoning and “compute” its results. It does not, however, cover all reasoning, but only reasoning of a certain type; therefore Boolean algebra is regarded as a propositional calculus.

Boole's algebra of logic was the embryo of a new science - mathematical logic. In contrast, Aristotle's logic is called traditional formal logic. The name “mathematical logic” reflects two features of this science: firstly, mathematical logic is logic that uses the language and methods of mathematics; secondly, mathematical logic is brought to life by the needs of mathematics.

At the end of the 19th century, the set theory created by Georg Cantor seemed a reliable foundation for all of mathematics, including mathematical logic, at least for the propositional calculus (Boolean algebra), since the algebra of sets turned out to be isomorphic to the algebra of logic.

Mathematical logic itself became a branch of mathematics that at first seemed highly abstract and infinitely far from practical applications. However, this area did not long remain the domain of “pure” mathematicians. At the beginning of the 20th century (1910), the Russian scientist P.S. Ehrenfest pointed out the possibility of using the apparatus of Boolean algebra in telephony to describe switching circuits. In 1938-1940, works on the application of mathematical logic in digital technology appeared almost simultaneously by the Soviet scientist V.I. Shestakov, the American scientist C. Shannon, and the Japanese scientists Nakashima and Hanzawa. The first monograph devoted to the use of mathematical logic in the design of digital equipment was published in the USSR by the Soviet scientist M.A. Gavrilov in 1950. The role of mathematical logic in the development of modern microprocessor technology is extremely important: it is used in the design of computer hardware, in the development of all programming languages, and in the design of discrete automation devices.

Scientists from different countries made great contributions to the development of mathematical logic: Kazan University professor P.S. Poretsky, A. De Morgan, C. Peirce, A. Turing, A.N. Kolmogorov, K. Gödel, and others.

1.4 Questions for self-test.

1. Formulate the objectives of the course

Mathematical logic and theory of algorithms

Lecturer: A. L. Semenov

Lecture 1

Introduction

The problem of mathematical logic and the theory of algorithms

Mathematical results of mathematical logic and the theory of algorithms

Modern civilization and the role of MLiTA

Construction of mathematics. Set theory

Hilbert's program of mathematical study of mathematical activity

General idea

Results of the Hilbert program

Language and axioms of set theory. I. Examples

Logical symbols and their meaning (semantics)

Examples of axioms for the existence of sets

Introduction

The problem of mathematical logic and theory of algorithms

The problem solved by mathematical logic and the theory of algorithms is to build a system of mathematical definitions and theorems that allow one to mathematically describe and study the following types of human activity:

  • Proving theorems and defining mathematical concepts

  • Description of relationships between mathematical objects

  • Obtaining consequences from experimentally established statements, from hypotheses, etc.

  • Design of devices (mechanical, electronic, etc.) with specified properties and functions.

  • Creation and implementation of formal instructions (description and application of algorithms and programs)

  • Establishing a correspondence between the description of the required result and the algorithm designed to achieve this result (proof of correctness)
Mathematical logic and the theory of algorithms provide (mathematical, exact) criteria for correctness for the listed activities.

Mathematical results of mathematical logic and theory of algorithms

By carrying out this research, we will obtain results related to:

  1. Sets and relations that can be described in a particular language

  2. Sets of provable formulas

  3. Sets of true formulas (there is a fundamental difference with item 2)

  4. Sets of mathematical structures in which formulas from a given set are true

  5. Classes of functions that are calculated by algorithms

  6. The existence of an algorithm that determines the truth or provability of formulas

  7. Computational Complexity

  8. Complexity of objects
etc.

Modern civilization and the role of MLiTA

Significant progress in the development of mankind is associated with the creation of machines for processing material objects, receiving and transmitting energy (used by these machines), means of transport, lighting, etc.

For centuries, people have had the idea of creating machines to work not with matter and energy but with information objects. Moreover, such machines were created and even operated successfully, for example B. Pascal's adding machine for performing arithmetic operations.

In the first half of the 20th century, a general description was given of the possible methods of formal information processing by humans. By the middle of the 20th century, physical principles were discovered that made it possible to create devices capable of storing a great deal of information and processing it very quickly. Universal devices were created: computers, which can do everything that a person can do formally, but much faster.

Taking a very general view, we can say that mathematical logic provides the foundation for theoretical mathematics, and the theory of algorithms provides the foundation for computational practice (the use of computers). A more detailed examination shows, however, that many achievements of mathematical logic have found applications in the development and application of information technologies, and algorithmic considerations are also significant in various sections of pure mathematics.

Milestones of history

Important milestones in the development of modern ideas about what mathematical proofs and computations are were the achievements of German mathematicians of the late 19th and early 20th centuries.

Of particular importance was the address of David Hilbert (01/23/1862 - 01/14/1943) at the II International Congress of Mathematicians (Paris, 1900). There he formulated the 23 so-called Hilbert problems, the most important problems for the mathematics of his time and for the development of mathematics in the 20th century. Some of Hilbert's problems asked whether a particular mathematical statement is true; others were more like proposals to carry out a research program.

Problems I, II, X from Hilbert's list relate to the subject of mathematical logic and the theory of algorithms.

Of the seven mathematical Problems of the Millennium, the first also relates to our subject (it was not among Hilbert's Problems).

The course will discuss the above-mentioned problems in the field of mathematical logic and theory of algorithms and the modern view of them.

Organizational notes

The authors of the course believe that the best way to learn mathematics and become a mathematician is to do mathematics yourself. By mathematicians we here mean everyone for whom an essential part of their professional activity (and for many, their whole life) is working with mathematical objects using mathematical methods of activity. A significant part of the activity in the field of information technology, for example, is structured this way. By “doing mathematics” we mean solving problems, and, first of all, not those that are most often solved at school, or in the course of mathematical analysis in the first year of university. We mean tasks where you need to come up with something new. Of course, when mastering a new field of mathematics, many of these problems should be simple and long ago solved, but they are not fundamentally different from the problems that a professional mathematician and programmer has to solve.

Problems of the type we are talking about now will be formulated both in lectures and in course exercises. Not all of them will be entirely simple. Moreover, some of them were solved in mathematics only quite recently, some have not yet been solved, and some have no solution at all (but are nevertheless useful to work on).

We encourage you to engage in problem solving throughout the course and discuss them with your instructor (and, of course, with each other). This:


  • Will help you better understand the content of lectures and the entire mathematical area to which the course relates

  • Prepare you better for the exam and let you partially “pass” it in advance (problems solved during the course will be “credited” toward the exam; your teachers will tell you more about this)

  • Let you try yourself in a promising area of mathematics and, perhaps, choose it as your specialization at the University, which may be useful for your work after university.
Another place where problems are solved and their solutions discussed is the Proseminar on Mathematical Logic and Computer Science for junior students. There, along with simple tasks that allow you to understand the essence of the matter, you will immediately be offered both complex ones and those that have not yet been solved.

The materials of the next lecture will be posted on the Internet on the website http://lpcs.math.msu.su/vml2013/

In addition to them, you can get acquainted with the course content from the books:


  • N.K. Vereshchagin, A. Shen, Lectures on mathematical logic and theory of algorithms, ed. MCCME (mccme.ru)

  • I. A. Lavrov, L. L. Maksimova, Problems in set theory, mathematical logic and theory of algorithms,
which are also available on the Internet.

Finally, the last thing. One of the applied results of mathematical logic and the theory of algorithms is the automation of various components of mathematical activity. In particular, the verification of certain types of mathematical proofs can be automated. The scope of such automation naturally keeps expanding, owing to the development of the automation algorithms themselves, a better understanding by mathematicians of how to apply these algorithms, the accumulation of experience, and the growth of computing power. Today the most popular and effective software environment for automated proof verification is Coq. Our department runs a workshop on Coq, where you can learn how to use this environment and get a sense of its capabilities and limitations.

Construction of mathematics. Set theory

The modern structure of mathematics, in particular the way it is taught in universities, is based on set theory. Now we will recall some basic concepts of this theory.

Curly braces are often used to define sets. One can list all the elements of a given set inside them: {2, 14, 5.4}, or describe the set by a property: {x | x is a real number and sin(x) > 0}.

We use the following notation for set operations: membership of an element in a set ∊, inclusion of sets ⊂, union ∪, intersection ∩, difference \.
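These operations can be tried out directly in Python, whose built-in sets provide operator counterparts of the same notions (an illustrative aside):

```python
a = {2, 5, 14}
b = {5, 7}

assert 2 in a        # membership of an element in a set
assert {2, 5} <= a   # inclusion of sets (subset)
union = a | b        # union
inter = a & b        # intersection
diff = a - b         # difference
```

For example, with the sets above, the union is {2, 5, 7, 14}, the intersection is {5}, and the difference a \ b is {2, 14}.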

Let x, y be two objects. The ordered pair ⟨x; y⟩ is an object from which x and y are uniquely recovered; in other words, ⟨x; y⟩ = ⟨x′; y′⟩ → (x = x′ and y = y′).

The product x × y of two sets x and y is the set of all ordered pairs ⟨u; v⟩, where u ∊ x and v ∊ y.

Ordered n-tuples of objects and the n-th power xⁿ of a set x are defined similarly. We may agree that x¹ is x.
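In Python, itertools.product enumerates exactly these sets of tuples (an illustrative aside; the variable names are ours):

```python
from itertools import product

x = {0, 1}
y = {"a", "b", "c"}

xy = set(product(x, y))              # the product x × y: all pairs (u, v)
x_cubed = set(product(x, repeat=3))  # the third power of x: all triples
```

As expected, xy has |x|·|y| = 6 elements and x_cubed has 2³ = 8.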

A relation between sets x and y is any subset of their product x × y.

An n-ary relation on a set x is any subset of the n-th power of this set.

A relation f between x and y is called a function from x into y if the coincidence of the first components of any two elements of f implies the coincidence of their second components.

The domain of a function is the set of its first components.

If the domain of a function from x into y coincides with x, the function is said to map x into y, and we write f : x → y. The set of all functions mapping x into y is denoted y^x.
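The notation y^x is motivated by counting: there are |y|^|x| such functions. A sketch that enumerates them as explicit mappings (the function names are ours):

```python
from itertools import product

def all_functions(x, y):
    """Enumerate the set y^x of all functions from x into y, as dicts."""
    xs = sorted(x)
    ys = sorted(y)
    # choose an image in y independently for each element of x
    return [dict(zip(xs, images)) for images in product(ys, repeat=len(xs))]

funcs = all_functions({1, 2}, {0, 1, 2})  # |y|^|x| = 3^2 = 9 functions
```

Each dict is a function given by its set of pairs: the keys are the first components, the values the second.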

A function f : x → y is called a bijection between x and y (a bijection from x onto y, or an isomorphism of x and y) if the coincidence of the second components of elements of f implies the coincidence of the first components and, in addition, the second components of f make up the entire set y. Isomorphic sets are also called equivalent.

A set is called countable if it is equivalent to the natural series.

Task. Prove that every subset of the natural series is equivalent either to an initial segment of it (which, under the set-theoretic definition of the naturals, is the same thing as one of its elements) or to the entire natural series.

The basic observation formulated in the last task, that a part can be isomorphic to the whole, probably seems trivial to second-year students. But it was one of the first discoveries of set theory.

Finite sets can be compared by size: if one is isomorphic to a proper subset of another, then it has fewer elements. For infinite sets this is no longer true. This situation was described by Galileo Galilei in the following dialogue, which we quote in abbreviated form:

Discourses and Mathematical Demonstrations Relating to Two New Sciences, Pertaining to Mechanics and Local Motions

by Signor Galileo Galilei, Lincean, philosopher and first mathematician of His Serene Highness the Grand Duke of Tuscany

Salviati. ...the number of all numbers together, squares and non-squares, is greater than the number of squares alone; is it not?

Simplicio. I can't argue against this.

Salviati. There are as many squares as there are roots, since each square has its own root and each root its own square; no square can have more than one root and no root can have more than one square.

Sagredo. What needs to be done to find a way out of this situation?

Salviati. I do not see that any other decision is possible except to admit that the properties of equality, as well as of greater and lesser magnitude, have no place where we deal with infinity, and are applicable only to finite quantities. Therefore, when Signor Simplicio offers me several unequal lines and asks me how it can be that the larger of them does not contain more points than the smaller, I answer him that there are neither more, nor fewer, nor just as many, but an infinite number in each.

However, even among infinite sets there is still some order, as shown by the following theorem, also announced without proof by Cantor and soon proven by other German mathematicians.

Cantor–Bernstein theorem. Let there be a bijection between the set A and a subset of the set B, as well as a bijection between the set B and a subset of the set A. Then the sets A and B are equipotent.

Task. Prove the Cantor–Bernstein Theorem.
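For finite sets the chain construction used in the usual proof can be carried out literally. The sketch below is only an illustration on small, made-up injections f and g, not a proof: each element of A is traced backwards along its chain, and the chain's origin decides whether f or the inverse of g is used.

```python
def cantor_bernstein(A, f, g):
    """Build a bijection A -> B from injections f: A -> B and
    g: B -> A (given as dicts), following the chain argument.
    For each a, trace the chain backwards; if it starts at an
    element of B outside the range of f, send a to its g-preimage,
    otherwise send a to f(a)."""
    g_inv = {v: k for k, v in g.items()}
    f_inv = {v: k for k, v in f.items()}

    def stops_in_B(a):
        seen = set()
        while True:
            if a in seen:          # a cycle: treat like an A-chain
                return False
            seen.add(a)
            if a not in g_inv:     # chain starts in A
                return False
            b = g_inv[a]
            if b not in f_inv:     # chain starts in B outside f[A]
                return True
            a = f_inv[b]

    return {a: (g_inv[a] if stops_in_B(a) else f[a]) for a in A}

# Hypothetical small example: injections between {0,1,2} and {"x","y","z"}.
A = {0, 1, 2}
f = {0: "x", 1: "y", 2: "z"}
g = {"x": 1, "y": 2, "z": 0}
h = cantor_bernstein(A, f, g)
assert set(h) == A and set(h.values()) == {"x", "y", "z"}
```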

Task. Can any two sets be compared in cardinality? That is, is it true that for any A and B, either A is equipotent to a subset of B, or B is equipotent to a subset of A?

Among functions we single out properties: functions that take only the values 0 and 1. Every property specifies a relation, namely the set of elements on which its value is 1. Any function f : x → B is called a characteristic function (on x). Note that in our notation and conventions B = {0, 1} = 2. Thus the set of all characteristic functions on x is denoted B^x or 2^x.

Task. Construct an isomorphism between the set of characteristic functions on x and the set of subsets of the set x.
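A solution to this task is easy to make concrete in Python. Representing a characteristic function as a dict from elements of x to {0, 1} is our implementation choice; a characteristic function and a subset then determine each other:

```python
x = {"a", "b", "c"}

def char_to_subset(chi):
    """From a characteristic function (dict x -> {0, 1}) to the
    subset of x on which it equals 1."""
    return {e for e in x if chi[e] == 1}

def subset_to_char(s):
    """From a subset of x to its characteristic function."""
    return {e: (1 if e in s else 0) for e in x}

# The two maps are mutually inverse, which gives the isomorphism.
chi = {"a": 1, "b": 0, "c": 1}
assert subset_to_char(char_to_subset(chi)) == chi
assert char_to_subset(subset_to_char({"a", "c"})) == {"a", "c"}
```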

Task. Prove that the set of subsets of any set is not isomorphic to it.

Solution idea [Cantor's diagonal method]. Suppose an isomorphism G : x → 2^x exists. For each element y of our set x, consider the corresponding subset G(y). Can we collect from the elements of x a subset C that differs from the set G(y) "on the element y"? Or, what is the same thing, how do we construct a characteristic function f_C that differs from the characteristic function of the set G(y) "at the point y" for every y?

If x is the set of natural numbers, the proof takes a clear graphic form. We will call the number y that is mapped to the characteristic function f the number of the function f.


Function no. | Argument
             |  0  1  2  3  4
-------------+----------------
           0 |  1  0  0  1  0
           1 |  1  1  0  1  0
           2 |  1  0  0  1  1
           3 |  0  0  0  1  0
           4 |  0  1  0  1  0
         ... |  ..............

In this table, the row with number n contains the characteristic function with number n. The table does not contain the function obtained from its diagonal by changing 1 to 0 and 0 to 1 (the negation operation).
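The diagonal step can be checked mechanically on the finite fragment of the table shown above:

```python
# Finite fragment of the table: row n holds the characteristic
# function with number n (arguments 0..4).
table = [
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
]

# Negating the diagonal gives a function that differs from row n
# at argument n, so it cannot occur among the rows.
anti_diagonal = [1 - table[n][n] for n in range(len(table))]
assert all(anti_diagonal[n] != table[n][n] for n in range(len(table)))
assert anti_diagonal not in table
```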

Task. Prove that the set of real numbers is equivalent to the set of subsets of the natural series.

Hilbert's program of research into mathematical activity

General idea

To David Hilbert belongs the idea of mathematics as a field of mathematically describable activity, the realization of how important it is to study mathematical activity by the means of mathematics itself, and the formulation of a program for such research: the Hilbert Program.

Hilbert's program for studying mathematics can be summarized as follows:


  • Mathematics can be represented as a system of

    • axioms - statements that we accept as true - and

    • rules of proof - rules for obtaining new statements.

  • The practice of mathematical activity should convince us that the chosen system allows us to construct all the necessary proofs. Ideally, it could happen that every mathematical statement can be proven or disproved using this system. (This property is called completeness.)

  • Some mathematical proofs are "particularly reliable and convincing". These include, for example, arithmetic calculations. Using only such proofs, one could make sure that the system chosen for mathematics does not allow contradictions to be derived. (This property is called consistency.) Moreover, it might turn out that the completeness of mathematics can also be proven using simple, understandable and convincing reasoning.

The usefulness of the completeness property is clear. As a rule, when trying to prove a mathematical statement, we simultaneously look for its refutation. One would like to be sure that such activity will ultimately lead to a result, and that it is "only" a matter of our abilities and time. Hilbert believed: "This conviction of the solvability of every mathematical problem is a powerful incentive to the worker. We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus."

Note that the property of consistency is much more important. A priori one could imagine a scientific theory in which a contradiction sits somewhere on the periphery and concerns only unimportant questions. However, all major systems of mathematical proof are designed so that the provability of a single contradiction (for example, that the product of some two very large numbers equals some third number and also some fourth) immediately entails the provability of any mathematical statement. A contradiction cannot be "localized".

The first steps towards the goals of the Hilbert Program were taken even before it was formulated; moreover, the Program logically followed from them. Here are these steps.

Proofs. Logic. At the end of the 19th century, it became clear how to formalize the system of mathematical reasoning. This formalization was completed in the works of Gottlob Frege (11/8/1848 - 07/26/1925).

Set theory. Another achievement of mathematics at the end of the 19th century was the realization that all of mathematics can be uniformly represented in terms of sets (as is done in modern mathematics courses, and as we recalled above). This was done in the works of Georg Cantor (3.03.1845 - 6.01.1918).

Thus, all that remained was to choose a suitable system of axioms and proceed along the path of proving consistency and completeness. The hope of obtaining such proofs by simple and reliable means rested on the following. Applying axioms and proof rules looks like a fairly simple process of working with formulas. The formulas themselves are simple objects: strings of symbols. Mathematics comes to resemble a game, chess for example. Suppose we need to prove that some chess position is impossible. In principle this can, of course, be done by going through all possible chess games. But one can imagine simpler reasoning based, for example, on the fact that pieces are never added to the board, that bishops stay on squares of one color, and so on. Such reasoning will most likely use neither real numbers, nor integrals, nor even more complex mathematical objects.

A system of axioms for set theory was constructed mainly during the first decades of the 20th century; the first formulation close to the modern one belonged to Ernst Zermelo (27.7.1871 - 21.5.1953) and was published by him in 1908.

Results of the Hilbert Program

What happened to the Hilbert Program later? We will formulate this briefly now, and in the next course we will explain it in more detail.

On the one hand, the Program was successfully implemented:


  • Axiomatic set theory is the basis of modern mathematics.

  • In particular, in the thirties a group of mathematicians was formed under the collective pseudonym N. Bourbaki; the peak of its creative activity fell in the 1950s and 60s. This group consistently and systematically presented a significant part of modern mathematics as built on the basis of set theory.

At the same time, the Program fundamentally failed:

  • Mathematics is not complete and cannot be made complete.

  • The consistency of mathematics cannot be established by such reliable and convincing means (and not only by them).

This was established by Kurt Gödel (04/28/1906 – 01/14/1978) in the 1930s.

Language and axioms of set theory. I. Examples

We begin formulating a system of proofs in mathematics (set theory) with a description of the logical language.

Logical symbols and their meaning (semantics)

Boolean values: the symbols T (true) and F (false), or the symbols 1 and 0. We will denote the set of the two symbols 0 and 1 by B.

Logical operations: ¬ (not, negation), ∧ (and, conjunction), ∨ (or, disjunction), → (implies, implication), ≡ (equivalence) are applied to the symbols 1 (T) and 0 (F), and the result of their application is described by the following table:


A  B | ¬A | A∧B | A∨B | A→B | A≡B
-----+----+-----+-----+-----+----
0  0 |  1 |  0  |  0  |  1  |  1
0  1 |  1 |  0  |  1  |  1  |  0
1  0 |  0 |  0  |  1  |  0  |  0
1  1 |  0 |  1  |  1  |  1  |  1
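The table can be recomputed mechanically; the sketch below encodes implication A → B as ¬A ∨ B and equivalence as equality of values:

```python
# Recompute the truth table using Python's boolean operators.
rows = []
for A in (0, 1):
    for B in (0, 1):
        rows.append((A, B,
                     int(not A),          # negation
                     int(A and B),        # conjunction
                     int(A or B),         # disjunction
                     int((not A) or B),   # implication
                     int(A == B)))        # equivalence

expected = [
    (0, 0, 1, 0, 0, 1, 1),
    (0, 1, 1, 0, 1, 1, 0),
    (1, 0, 0, 0, 1, 0, 0),
    (1, 1, 0, 1, 1, 1, 1),
]
assert rows == expected
```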

Quantifiers, which of course you are also familiar with: ∃x (there exists x), ∀y (for every y).

Examples of axioms for the existence of sets

A number of axioms of set theory are statements about the existence of sets, including sets formed from other sets.

In order to talk about sets, and in particular to formulate the axioms concerning them, we must add to the logical symbols the symbols of set theory. How do we write down that x is the empty set, that is, a set that has no elements? For example, like this:

∀y ¬(y ∊ x) (for every y it is not true that y belongs to x)

We needed the membership symbol ∊. Let us add it to the alphabet of our language.

For us to have something to talk about at all, it would be good to be sure of the existence of at least one set. Let us start with the empty one (Ø):

∃x ∀y ¬(y ∊ x) [Axiom of the empty set]

We will want to define specific sets, properties of sets, and so on, and to introduce notation for them.


  • We take the empty set to be zero.

  • How do we obtain the next number from the number n? Add to all the elements of n one more element: n itself. That is, the elements of the number following n are all the elements of n together with n itself. All the numbers obtained in this way form the natural series N:

    • 1 is {0}

    • 2 is {0, 1} = {0, {0}}
Task. How many elements are in the number (set) 5? And in the number n?
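The construction of the numbers 0, 1, 2, ... described above is easy to carry out literally with Python's immutable frozenset (the function name von_neumann is ours):

```python
def von_neumann(n):
    """The set-theoretic natural number n: 0 is the empty set, and
    the number following k consists of all elements of k plus k."""
    k = frozenset()          # 0 = Ø
    for _ in range(n):
        k = k | {k}          # next number: elements of k, plus k itself
    return k

assert von_neumann(0) == frozenset()
assert von_neumann(1) == frozenset({frozenset()})
# The number n, as a set, has exactly n elements:
assert all(len(von_neumann(n)) == n for n in range(8))
```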

The existence of the natural series constructed in this way is guaranteed by the following axiom. For ease of understanding, we have divided it into parts and comment on these parts in square brackets:

∃s (∃u (u ∊ s ∧ ∀v ¬(v ∊ u)) [as s you can take the natural series N; it will contain u, a zero]

∧ ∀u (u ∊ s [for every u from s]

→ ∃v (v ∊ s [there will be a v from s,]

∧ ∀w (w ∊ v ≡ (w ∊ u ∨ w = u))))) [the one next after u] [Axiom of Infinity]

However, this axiom can be satisfied not only by the natural series, but also by other sets.

Task. Which, for example?

Task. How can we precisely describe the natural series we have constructed?

In mathematical constructions, operations on sets are used. Continuing along the path already begun, we must add axioms that guarantee the existence of the results of these operations. Here is one more example:

∀u ∃s ∀v (∀w (w ∊ v → w ∊ u) ≡ v ∊ s) [Power set axiom]

Task. Check that the last formula indeed asserts the existence of the set of all subsets of any given set.
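For a finite u, the set whose existence the axiom asserts can simply be listed. A small sketch (the helper name powerset is ours):

```python
from itertools import combinations

def powerset(u):
    """All subsets of u, whose existence the power set axiom asserts."""
    u = list(u)
    return [set(c) for r in range(len(u) + 1)
            for c in combinations(u, r)]

p = powerset({1, 2, 3})
assert len(p) == 2 ** 3            # |2^u| = 2^|u|
assert set() in p and {1, 2, 3} in p
```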

Of course, we will also need, for example, a set that is the intersection of two given sets, and so on.

Above we began gradually building up sets. It is clear how to continue along this path: for example, to construct the set of integers; then the set of rational numbers, as a set of pairs of integers with a certain equivalence relation on it; then the set of real numbers, and so on.