Discrete Markov processes. Markov processes: examples

Markov random processes are named after the outstanding Russian mathematician A. A. Markov (1856-1922), who first began to study the probabilistic relationships of random variables and created a theory that can be called "the dynamics of probabilities". Subsequently, the foundations of this theory became the initial basis for the general theory of random processes, as well as for such important applied sciences as the theory of diffusion processes, reliability theory, queuing theory, etc. At present, the theory of Markov processes and its applications are widely used in various fields of science, such as mechanics, physics, chemistry, etc.

Due to the comparative simplicity and clarity of the mathematical apparatus and the high reliability and accuracy of the solutions obtained, Markov processes have received special attention from specialists involved in operations research and the theory of optimal decision making.

Despite the above-mentioned simplicity and clarity, the practical application of the theory of Markov chains requires knowledge of some terms and basic principles that should be discussed before presenting examples.

As indicated, Markov random processes are special cases of random processes (RPs). Random processes, in turn, rest on the concept of a random function (RF).

A random function is a function whose value, for any value of the argument, is a random variable (RV). In other words, a random function is a function that takes a previously unknown form at each trial.

Examples of random functions: voltage fluctuations in an electrical circuit, the speed of a car on a section of road with a speed limit, the surface roughness of a part in a certain area, etc.

If the argument of a random function is time, the function is usually called a random process. There is another definition of a random process, closer to decision theory: a random process is a process of random change of the states of some physical or technical system with respect to time or some other argument.

It is easy to see that if we denote the state of the system by S(t) and plot its dependence on the argument t, such a dependence is a random function.

Random processes are classified according to the type of the states and of the argument t: a random process can have discrete or continuous states and discrete or continuous time.

In addition to the above classification of random processes, there is another important property, which describes the probabilistic connection between the states of a random process. For example, if in a random process the probability of the system passing to each subsequent state depends only on the previous state, then such a process is called a process without aftereffect.

Let us note, firstly, that a random process with discrete states and time is called a random sequence.

If a random sequence has the Markov property, then it is called a Markov chain.

On the other hand, if in a random process the states are discrete, time is continuous, and the absence of aftereffect (the Markov property) holds, then such a random process is called a continuous-time Markov process.

A Markov random process is said to be homogeneous if the transition probabilities remain constant during the process.

A Markov chain is considered given if two conditions are met:

1. There is a set of transition probabilities in the form of a matrix P = (p_ij), where p_ij is the probability of transition from state i to state j.

2. There is a vector of initial probabilities p(0) = (p1(0), ..., pm(0)) describing the initial state of the system.

In addition to the matrix form, the Markov chain model can be represented as a directed weighted graph (Fig. 1).

Fig. 1
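
For illustration, the two objects that define a Markov chain - the transition matrix and the initial probability vector - can be written down directly. A minimal sketch in Python (NumPy assumed; the probability values are hypothetical):

```python
import numpy as np

# Hypothetical 3-state chain: transition matrix P and initial vector p0.
P = np.array([
    [0.2, 0.5, 0.3],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
])
p0 = np.array([1.0, 0.0, 0.0])  # the system starts in state 1

# A transition matrix must be stochastic: rows sum to 1, entries >= 0.
assert np.allclose(P.sum(axis=1), 1.0) and (P >= 0).all()
assert np.isclose(p0.sum(), 1.0)
```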

The states of a Markov chain are classified in a certain way, taking into account the further behavior of the system.

1. Non-return set (Fig. 2).

Fig. 2.

In the case of a non-return set, any transitions within the set are possible. The system can leave this set, but cannot return to it.

2. Return set (Fig. 3).

Fig. 3.

In this case, any transitions within the set are also possible. The system can enter this set, but cannot leave it.

3. Ergodic set (Fig. 4).

Fig. 4.

In the case of an ergodic set, any transitions within the set are possible, but transitions from and to the set are excluded.

4. Absorbing set (Fig. 5)

Fig. 5.

When the system enters this set, the process ends.

In some cases, despite the randomness of the process, it is possible to control the distribution laws or the parameters of the transition probabilities to a certain extent. Such Markov chains are called controlled. Obviously, with the help of controlled Markov chains (CMCs) the decision-making process becomes especially effective, as will be discussed later.

The main feature of a discrete Markov chain (DMC) is that the time intervals between individual steps (stages) of the process are deterministic. However, in real processes this property often does not hold: the intervals turn out to be random, with some distribution law, although the Markov property of the process is preserved. Such random sequences are called semi-Markov.

In addition, taking into account the presence or absence of the sets of states mentioned above, Markov chains can be absorbing, if there is at least one absorbing state, or ergodic, if the transition probabilities form an ergodic set. In turn, ergodic chains can be regular or cyclic. Cyclic chains differ from regular ones in that a return to some state occurs only through a certain number of steps (cycles); regular chains do not have this property.
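
Whether a chain is absorbing can be read off its transition matrix: a state i is absorbing when p_ii = 1. A minimal sketch in Python (NumPy assumed; the matrix is hypothetical):

```python
import numpy as np

def absorbing_states(P: np.ndarray) -> list[int]:
    """Indices i with p_ii == 1: once entered, never left."""
    return [i for i in range(len(P)) if np.isclose(P[i, i], 1.0)]

# Hypothetical chain: state 2 is absorbing, so the chain is absorbing.
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
print(absorbing_states(P))  # [2]
```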

Queuing theory is one of the branches of probability theory. It considers probabilistic problems and mathematical models (up to now we considered deterministic mathematical models). Let us recall that:

A deterministic mathematical model describes the behavior of an object (system, process) from the standpoint of full certainty in the present and future.

A probabilistic mathematical model takes into account the influence of random factors on the behavior of an object (system, process) and therefore evaluates the future from the standpoint of the probabilities of certain events.

That is, here, as for example in game theory, problems are considered under conditions of uncertainty.

Let us first consider some concepts that characterize "stochastic uncertainty", when the uncertain factors in the problem are random variables (or random functions) whose probabilistic characteristics are either known or can be obtained from experience. Such uncertainty is also called "favorable" or "benign".

The concept of a random process

Strictly speaking, random disturbances are inherent in any process. It is easier to give examples of a random process than of a "non-random" one. Even, for example, the running of a clock (a seemingly strictly calibrated piece of work - "runs like clockwork") is subject to random changes (gaining, lagging, stopping). But as long as these disturbances are insignificant and have little effect on the parameters of interest to us, we can neglect them and treat the process as deterministic, non-random.

Let there be some system S (a technical device, a group of such devices, a technological system - a machine, a section, a workshop, an enterprise, an industry, etc.). A random process takes place in the system S if the system changes its state over time (passes from one state to another) in a previously unknown, random manner.

Examples: 1. System S is a technological system (a machine section). Machines break down from time to time and are repaired. The process taking place in this system is random.

2. System S is an aircraft flying at a given altitude along a specific route. Disturbing factors: weather conditions, crew errors, etc.; consequences: bumpiness, violation of the flight schedule, etc.

Markov random process

A random process occurring in a system is called Markovian if, for any moment of time t0, the probabilistic characteristics of the process in the future depend only on its state at the moment t0 and do not depend on when and how the system reached this state.

Let the system be in a certain state S0 at the moment t0. We know the characteristics of the state of the system in the present and everything that happened at t < t0 (the history of the process). Can we predict the future, i.e. what will happen at t > t0? Not exactly, but some probabilistic characteristics of the process in the future can be found. For example, the probability that after some time the system S will be in state S1 or will remain in state S0, etc.

Example. System S is a group of aircraft participating in air combat. Let x be the number of "red" planes and y the number of "blue" planes. At the moment t0 the numbers of surviving (not shot down) aircraft are, respectively, x0 and y0. We are interested in the probability that at some moment of time t > t0 the numerical superiority will be on the side of the "reds". This probability depends on the state the system was in at the moment t0, and not on when and in what sequence the planes were shot down before the moment t0.

In practice, Markov processes in their pure form are usually not encountered. But there are processes for which the influence of "prehistory" can be neglected, and when studying such processes Markov models can be used (queuing theory also considers non-Markov queuing systems, but the mathematical apparatus that describes them is much more complex).

In operations research, Markov random processes with discrete states and continuous time are of great importance.

A process is called a process with discrete states if its possible states S1, S2, ... can be listed in advance, and the transition of the system from state to state occurs "in a jump", almost instantly.

A process is called a process with continuous time if the moments of possible transitions from state to state are not fixed in advance but are uncertain, random, and can occur at any moment.

Example. A technological system (section) S consists of two machines, each of which can fail at a random moment of time, after which repair of the unit immediately begins; the repair also continues for an unknown, random time. The following system states are possible:

S0 - both machines are working;

S1 - the first machine is being repaired, the second is working;

S2 - the second machine is being repaired, the first is working;

S3 - both machines are being repaired.

Transitions of the system S from state to state occur almost instantly, at the random moments when one or the other machine fails or a repair is completed.

When analyzing random processes with discrete states, it is convenient to use a geometric scheme - the state graph. The vertices of the graph are the states of the system; the arcs are the possible transitions from state to state. For our example, the state graph is shown in Fig. 1.

Fig. 1. System state graph

Note. The transition from state S0 to S3 is not shown in the figure because the machines are assumed to fail independently of each other; we neglect the possibility of simultaneous failure of both machines.

Lecture 9. Markov processes

Markov processes

A random process occurring in a system is called Markovian if it has no aftereffect. That is, if we regard the current state of the process ξ(t0) as the present, the set of states {ξ(s), s < t0} as the past, and the set of states {ξ(u), u > t0} as the future, then for a Markov process, with the present fixed, the future does not depend on the past: it is determined by the present alone and does not depend on when and how the system came to this state.

Markov processes

Markov random processes are named after the outstanding Russian mathematician A. A. Markov, who first began studying the probabilistic connection of random variables and created a theory that can be called "the dynamics of probabilities". Subsequently, the foundations of this theory became the initial basis of the general theory of random processes, as well as of such important applied sciences as the theory of diffusion processes, reliability theory, queuing theory, etc.

Markov Andrey Andreevich (1856-1922)

Russian mathematician. Wrote about 70 works on number theory, the theory of function approximation, and probability theory. Significantly expanded the scope of the law of large numbers and the central limit theorem. He is the founder of the theory of random processes.

Markov processes

In practice, Markov processes in their pure form are usually not encountered. But there are processes for which the influence of "prehistory" can be neglected, and when studying such processes Markov models can be used. At present, the theory of Markov processes and its applications are widely used in a variety of fields.

Markov processes: applications

Biology: birth and death processes - populations, mutations, epidemics.

Physics: radioactive decay, the theory of elementary-particle counters, diffusion processes.

Chemistry: the theory of tracks in nuclear photoemulsions, probabilistic models of chemical kinetics.

Astronomy: the theory of fluctuations in the brightness of the Milky Way.

Queuing theory: telephone exchanges, repair shops, ticket offices, information desks, machine tools and other technological systems, control systems of flexible production systems, information processing by servers.

Markov processes

Let the system be in a certain state S0 at the current moment t0. We know the characteristics of the state of the system in the present and everything that happened at t < t0 (the history of the process). Can we predict the future, i.e. what happens at t > t0? Not exactly, but some probabilistic characteristics of the process in the future can be found. For example, the probability that after a while the system S will be in state S1 or will remain in state S0, etc.

Markov processes. Example

System S is a group of aircraft participating in air combat. Let x be the number of "red" planes and y the number of "blue" planes. At time t0 the numbers of surviving (not shot down) aircraft are, respectively, x0 and y0. We are interested in the probability that at some moment of time t > t0 the numerical superiority will be on the side of the "reds". This probability depends on the state the system was in at time t0, and not on when and in what sequence the planes were shot down before t0.

Discrete Markov chains

A Markov process with a finite or countable number of states and moments of time is called a discrete Markov chain. Transitions from state to state are possible only at integer moments of time.

Discrete Markov chains. Example

Suppose we are talking about successive coin tosses in the game of "toss": a coin is thrown at conditional moments of time t = 0, 1, ..., and at each step the player can win ±1 with the same probability 1/2. Thus, at moment t his total gain is a random variable ξ(t) with possible values j = 0, ±1, ....

Provided that ξ(t) = k, at the next step the payoff is already ξ(t+1) = k ± 1, taking the values j = k ± 1 with the same probability 1/2. We can say that here, with the corresponding probability, a transition occurs from the state ξ(t) = k to the state ξ(t+1) = k ± 1.
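
A quick simulation sketch of this game (Python with NumPy assumed; not part of the original lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def toss_game(steps: int) -> np.ndarray:
    """Total gain xi(t) after each of `steps` fair coin tosses."""
    increments = rng.choice([-1, 1], size=steps)  # +/-1 with probability 1/2
    return np.cumsum(increments)                  # xi(1), ..., xi(steps)

gains = toss_game(10)
print(gains)  # one random trajectory of the player's total gain
```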

Discrete Markov chains

Generalizing this example, we can imagine a system with a countable number of possible states, which over discrete time t = 0, 1, ... randomly moves from state to state. Let ξ(t) be its position at time t as a result of a chain of random transitions

ξ(0) -> ξ(1) -> ... -> ξ(t) -> ξ(t+1) -> ....

Discrete Markov chains

When analyzing random processes with discrete states, it is convenient to use a geometric scheme - a state graph. The vertices of the graph are the states of the system; the arcs of the graph are the possible transitions from state to state. (Example: the state graph of the game of toss.)

Discrete Markov chains

Let us denote all possible states by integers i = 0, ±1, .... Let us assume that for a known state ξ(t) = i, at the next step the system goes to the state ξ(t+1) = j with conditional probability

P(ξ(t+1) = j | ξ(t) = i),

regardless of its behavior in the past, or more precisely, regardless of the chain of transitions up to the moment t:

P(ξ(t+1) = j | ξ(t) = i; ξ(t-1) = i_{t-1}; ...; ξ(0) = i_0) = P(ξ(t+1) = j | ξ(t) = i).

This property is called Markovian.

Discrete Markov chains

The number

p_ij = P(ξ(t+1) = j | ξ(t) = i)

is called the probability of transition of the system from state i to state j in one step at time t+1. If the transition probability does not depend on t, the Markov chain is called homogeneous.

Discrete Markov chains

The matrix P whose elements are the transition probabilities p_ij is called the transition matrix:

       p11 ... p1n
P  =   p21 ... p2n
       ...
       pn1 ... pnn

It is stochastic, i.e. each row sums to one: Σ_j p_ij = 1, and p_ij ≥ 0.

Discrete Markov chains. Example

Transition matrix for the game of toss. The states are the possible total gains ..., k-2, k-1, k, k+1, k+2, ...; from each state k the chain passes to k-1 or to k+1 with probability 1/2, and all other entries of the (infinite) transition matrix are zero:

p_{k,k-1} = 1/2,  p_{k,k+1} = 1/2,  p_{k,j} = 0 for |j - k| ≠ 1.

Discrete Markov chains. Example

As a result of a chemical analysis of the soil, a gardener evaluates its condition with one of three numbers: good (1), satisfactory (2) or bad (3). As a result of observations over many years, the gardener noticed that the soil productivity in the current year depends only on its condition in the previous year. Therefore, the probabilities of transition of the soil from one state to another can be represented as a Markov chain with matrix P1:

        0.20 0.50 0.30
P1  =   0.00 0.50 0.50
        0.00 0.00 1.00

Discrete Markov chains. Example

However, by means of agricultural practices the gardener can change the transition probabilities in the matrix P1. Then matrix P1 is replaced by matrix P2:

        0.30 0.60 0.10
P2  =   0.10 0.60 0.30
        0.05 0.40 0.55

Discrete Markov chains

Let us consider how the states of the process change over time. We consider the process at successive moments of time, starting from moment 0. Let the initial probability distribution p(0) = (p1(0), ..., pm(0)) be given, where m is the number of states of the process and pi(0) is the probability of finding the process in state i at the initial moment of time. The probability pi(n) is called the unconditional probability of state i at time n ≥ 1; note that

Σ_{k=1..m} pk(n) = 1.

The components of the vector p(n) show which of the possible states of the chain are the most probable at time n.

Discrete Markov chains

Knowing the sequence {p(n)} for n = 1, ... allows one to form an idea of the behavior of the system over time. In a 3-state system with transition matrix

       p11 p12 p13
P  =   p21 p22 p23
       p31 p32 p33

we have, for example,

p2(1) = p1(0) p12 + p2(0) p22 + p3(0) p32,
p2(n+1) = p1(n) p12 + p2(n) p22 + p3(n) p32.

In general:

pj(1) = Σ_k pk(0) pkj,   pj(n+1) = Σ_k pk(n) pkj,

or in matrix form:

p(n+1) = p(n) P.

Discrete Markov chains. Example

For the matrix

        0.20 0.50 0.30
P1  =   0.00 0.50 0.50
        0.00 0.00 1.00

the sequence of distributions p(n) is:

n = 0: (1, 0, 0)
n = 1: (0.2, 0.5, 0.3)
n = 2: (0.04, 0.35, 0.61)
n = 3: (0.008, 0.195, 0.797)
n = 4: (0.0016, 0.1015, 0.8969)
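
The table can be reproduced with a few lines of Python (NumPy assumed), iterating p(n) = p(n-1) P:

```python
import numpy as np

P1 = np.array([[0.20, 0.50, 0.30],
               [0.00, 0.50, 0.50],
               [0.00, 0.00, 1.00]])
p = np.array([1.0, 0.0, 0.0])  # p(0)

for n in range(1, 5):
    p = p @ P1                 # p(n) = p(n-1) P
    print(n, p)
# 1 [0.2    0.5    0.3   ]
# 2 [0.04   0.35   0.61  ]
# 3 [0.008  0.195  0.797 ]
# 4 [0.0016 0.1015 0.8969]
```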

Discrete Markov chains

The transition matrix for n steps is P(n) = P^n, so that p(n) = p(0) P^n; in particular, p(2) = p(0) P^2 and P(2) = P^2. For the matrix

        0.20 0.50 0.30
P1  =   0.00 0.50 0.50
        0.00 0.00 1.00

and p(0) = (1, 0, 0) this gives p(2) = (0.04, 0.35, 0.61), and after four steps

          0.0016 0.1015 0.8969
P1^4  =   0.0000 0.0625 0.9375
          0.0000 0.0000 1.0000

whose first row is p(4) from the previous example.

Discrete Markov chains

How do Markov chains behave as n → ∞? For a homogeneous Markov chain, under certain conditions, the following property holds: p(n) → π as n → ∞. The limiting probabilities π_i ≥ 0 do not depend on the initial distribution p(0) and are determined only by the matrix P. In this case π is called the stationary distribution, and the chain itself is called ergodic. The ergodicity property means that as n increases, the probabilities of the states practically cease to change, and the system enters a stable operating mode.

Discrete Markov chains. Example

For the matrix

        0.20 0.50 0.30
P1  =   0.00 0.50 0.50
        0.00 0.00 1.00

          0 0 1
P(∞)  =   0 0 1
          0 0 1

and p(∞) = (0, 0, 1).

Discrete Markov chains. Example

For the matrix

        0.30 0.60 0.10
P2  =   0.10 0.60 0.30
        0.05 0.40 0.55

          0.1017 0.5254 0.3729
P(∞)  =   0.1017 0.5254 0.3729
          0.1017 0.5254 0.3729

and p(∞) = (0.1017, 0.5254, 0.3729).
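
Both limits can be checked numerically, e.g. by raising the matrix to a high power or by solving πP = π. A sketch in Python (NumPy assumed):

```python
import numpy as np

P2 = np.array([[0.30, 0.60, 0.10],
               [0.10, 0.60, 0.30],
               [0.05, 0.40, 0.55]])

# Brute force: P^n converges to a matrix with identical rows.
Pn = np.linalg.matrix_power(P2, 50)
print(Pn[0])  # ~ [0.1017 0.5254 0.3729]

# Equivalent: the stationary pi solves pi P = pi, sum(pi) = 1
# (left eigenvector of P for eigenvalue 1).
w, v = np.linalg.eig(P2.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(pi)     # ~ [0.1017 0.5254 0.3729]
```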

Markov processes with continuous time

A process is called a continuous-time process if the moments of possible transitions from state to state are not fixed in advance but are uncertain, random, and can happen at any time.

Example. A technological system S consists of two devices, each of which can fail at a random moment of time, after which repair of the unit immediately begins, also continuing for an unknown, random time. The following system states are possible:

S0 - both devices are working;
S1 - the first device is being repaired, the second is working properly;
S2 - the second device is being repaired, the first is working properly;
S3 - both devices are being repaired.

Markov processes with continuous time

Transitions of the system S from state to state occur almost instantly, at the random moments when one or the other device fails or its repair is completed. The probability of simultaneous failure of both devices can be neglected.

Event streams

A stream of events is a sequence of homogeneous events following one after another at random moments in time. The intensity λ of a stream of events is the average number of events per unit time.

Event streams

A stream of events is called stationary if its probabilistic characteristics do not depend on time. In particular, the intensity λ of a stationary stream is constant. A stream of events inevitably has condensations and rarefactions, but they are not of a regular nature, and the average number of events per unit time is constant and does not depend on time.

Event streams

A stream of events is called a stream without aftereffect if, for any two non-overlapping intervals of time, the number of events falling in one of them does not depend on how many events fall in the other. In other words, the events forming the stream appear at their moments of time independently of each other, each caused by its own reasons.

A stream of events is called ordinary if the probability of the occurrence of two or more events in an elementary interval Δt is negligible compared to the probability of the occurrence of one event, i.e. events appear in it one by one, and not in groups of several at once.

Event streams

A stream of events is called the simplest (or stationary Poisson) stream if it has three properties at once: (1) it is stationary, (2) ordinary, and (3) without aftereffect. The simplest stream has the simplest mathematical description. It plays the same special role among streams as the normal distribution does among other distribution laws: when a sufficiently large number of independent, stationary and ordinary streams (comparable to each other in intensity) are superimposed, the result is a stream close to the simplest.

Event streams

For the simplest stream with intensity λ, the interval T between neighboring events has an exponential distribution with density

p(x) = λ e^(-λx),  x ≥ 0.

For a random variable T having an exponential distribution, the mathematical expectation is the reciprocal of the parameter: M[T] = 1/λ.
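
A simulation sketch (Python with NumPy assumed): generate the simplest stream with intensity λ from exponential intervals and check that the mean interval is close to 1/λ:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0                                   # intensity: 2 events per unit time

gaps = rng.exponential(scale=1 / lam, size=100_000)  # intervals T
times = np.cumsum(gaps)                              # event moments

print(gaps.mean())              # ~ 0.5 = 1/lam (mean interval)
print(len(times) / times[-1])   # events per unit time ~ lam
```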

Markov processes with continuous time

Considering processes with discrete states and continuous time, we can assume that all transitions of the system S from state to state occur under the influence of simplest event streams (streams of calls, of failures, of recoveries, etc.). If all the event streams that transfer the system S from state to state are the simplest, the process occurring in the system is Markovian.

Markov processes with continuous time

Let the system in state Si be acted upon by a simplest stream of events. As soon as the first event of this stream appears, the system "jumps" from state Si into state Sj. Here λij denotes the intensity of the stream of events transferring the system from Si to Sj.

Markov processes with continuous time

Let the system S under consideration have possible states S0, S1, ..., Sn. The probability pij(t) is the probability of transition from state i to state j in time t. The probability of the i-th state, pi(t), is the probability that at time t the system is in state Si. Obviously, for any moment of time the sum of all state probabilities is equal to one:

Σ_i pi(t) = 1.

Markov processes with continuous time

To find all the state probabilities as functions of time, the Kolmogorov differential equations are compiled and solved - a special type of equations in which the unknown functions are the probabilities of the states.

For transition probabilities:

p'ij(t) = Σ_k pik(t) λkj.

For unconditional probabilities:

p'j(t) = Σ_k pk(t) λkj.

Kolmogorov Andrey Nikolaevich (1903-1987)

Great Russian mathematician.

Markov processes with continuous time

Here λ denotes the intensity of a failure stream and μ the intensity of a recovery stream.

Let the system be in state S0. It is transferred to state S1 by the stream of failures of the first device; its intensity is λ1 = 1/T1, where T1 is the average failure-free operation time of the first device. The system is transferred back from state S1 to S0 by the stream of recoveries of the first device; its intensity is μ1 = 1/Trep1, where Trep1 is the average repair time of the first device. The intensities of the event streams that transfer the system along all the arcs of the graph are calculated in a similar way.
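
As a sketch of how these equations can be handled numerically, the two-device example above can be integrated with SciPy. The intensities λ1, λ2, μ1, μ2 below are hypothetical values, and the generator encodes exactly the arcs of the state graph (no direct S0 ↔ S3 transitions):

```python
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2 = 1.0, 2.0   # hypothetical failure intensities of devices 1, 2
mu1, mu2 = 2.0, 3.0     # hypothetical repair (recovery) intensities

# Intensity (generator) matrix for states S0..S3; off-diagonal entry (i, j)
# is the intensity of the stream moving the system from Si to Sj.
Q = np.array([
    [-(lam1 + lam2), lam1,          lam2,          0.0         ],
    [ mu1,          -(mu1 + lam2),  0.0,           lam2        ],
    [ mu2,           0.0,          -(mu2 + lam1),  lam1        ],
    [ 0.0,           mu2,           mu1,          -(mu1 + mu2) ],
])

# Kolmogorov equations dp/dt = p Q with p(0) = (1, 0, 0, 0).
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])        # state probabilities approach a stationary regime
print(sol.y[:, -1].sum())  # stays 1: probability is conserved (rows of Q sum to 0)
```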

Queuing systems

Examples of queuing systems (QS): telephone exchanges, repair shops, ticket offices, information desks, machine tools and other technological systems, control systems of flexible production systems, information processing by servers, etc.

Queuing systems

A QS consists of a certain number of serving units called service channels (machines, robots, communication lines, cashiers, etc.). Every QS is designed to serve a stream of requests (demands) arriving at random moments of time. Service of a request continues for a random time, after which the channel is freed and is ready to receive the next request.

Queuing systems

The operation of a QS is a random process with discrete states and continuous time. The state of the QS changes abruptly at the moments certain events occur (the arrival of a new request, the end of a service, the moment when a request that is tired of waiting leaves the queue).

Queuing systems

Classification of queuing systems:

1. QS with refusals;
2. QS with a queue.

In a QS with refusals, a request that arrives at a time when all channels are busy receives a refusal, leaves the QS, and is not served. In a QS with a queue, a request that arrives when all channels are busy does not leave but joins the queue and waits for an opportunity to be served. QS with queues are divided into different types depending on how the queue is organized - limited or unlimited; restrictions may apply to the queue length, to the waiting time, and to the "service discipline".

Queuing systems

The subject of queuing theory is the construction of mathematical models connecting the given operating conditions of a QS (the number of channels, their performance, the rules of operation, the nature of the stream of requests) with the characteristics of interest - the performance indicators of the QS. These indicators describe the ability of the QS to cope with the stream of requests. They may be: the average number of requests served by the QS per unit time; the average number of busy channels; the average number of requests in the queue; the average waiting time for service, etc.
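
Such indicators are often easiest to estimate by simulation. A sketch (Python with NumPy assumed, not from the original text) for a single-channel QS with an unlimited queue, the simplest stream of requests and exponential service, estimating the average waiting time via the Lindley recursion:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, n = 0.8, 1.0, 100_000   # hypothetical arrival/service intensities

inter = rng.exponential(1 / lam, size=n)   # inter-arrival times
serv = rng.exponential(1 / mu, size=n)     # service times

# Lindley recursion: W_{k+1} = max(0, W_k + S_k - A_{k+1}).
w = np.zeros(n)
for k in range(n - 1):
    w[k + 1] = max(0.0, w[k] + serv[k] - inter[k + 1])

print(w.mean())   # ~ lam / (mu * (mu - lam)) = 4 for these intensities
```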


Exercise: construct the transition graph

Construct the transition graph for the Markov chain with transition matrix

0.30 0.70 0.00
0.10 0.60 0.30
0.50 0.50 0.00

Many operations that have to be analyzed when choosing an optimal solution develop as random processes depending on a number of random factors.

For a mathematical description of many operations developing in the form of a random process, the mathematical apparatus developed in probability theory for the so-called Markov random processes can be successfully applied.

Let us explain the concept of a Markov random process.

Let there be some system S whose state changes over time (the system S can be anything: an industrial enterprise, a technical device, a repair shop, etc.). If the state of the system S changes over time in a random, unpredictable way, we say that a random process takes place in the system S.

Examples of random processes:

price fluctuations in the stock market;

customer service in a hair salon or repair shop;

implementation of the supply plan for a group of enterprises, etc.

The specific course of each of these processes depends on a number of random, previously unpredictable factors, such as:

the arrival of unpredictable news about political changes on the stock market;

random nature of the flow of applications (requirements) coming from clients;

random interruptions in the implementation of the supply plan, etc.

DEFINITION. A random process occurring in a system is called Markovian (or a process without aftereffect) if it has the following property: for each moment of time t0, the probability of any state of the system in the future (at t > t0) depends only on its state in the present (at t = t0) and does not depend on when and how the system came to this state (i.e., on how the process developed in the past).

In other words, in a Markov random process, its future development depends only on the present state and does not depend on the “prehistory” of the process.

Let us look at an example. Let the system S represent a stock market that has existed for some time. We are interested in how the system will operate in the future. It is clear, at least to a first approximation, that the characteristics of future performance (e.g., the probability of a fall in the price of a particular stock within a week) depend on the state of the system at the present moment (a variety of factors, such as government decisions or election results, can intervene here) and do not depend on when and how the system reached its current state (do not depend on the nature of the price movements of these shares in the past).

In practice, we often encounter random processes that, to varying degrees of approximation, can be considered Markovian.

The theory of Markov random processes has a wide range of different applications. We will be mainly interested in the application of the theory of Markov random processes to the construction of mathematical models of operations, the course and outcome of which significantly depend on random factors.

Markov random processes are divided into classes depending on how and at what points in time the system S can change its states.

DEFINITION. A random process is called a process with discrete states if the possible states of the system s1, s2, s3, ... can be listed (numbered) one after another, and the process itself consists in the system S passing abruptly (instantly) from one state to another from time to time.

For example, a project S is developed jointly by two departments, each of which can make a mistake. The following system states are possible:

s1 - both departments are working normally;

s2 - the first department made a mistake, the second is working fine;

s3 - the second department made a mistake, the first is working fine;

s4 - both departments made a mistake.

The process taking place in the system is that it randomly at some points in time moves (“jumps”) from state to state. The system has a total of four possible states. Before us is a process with discrete states.

In addition to processes with discrete states, there are random processes with continuous states: these processes are characterized by a gradual, smooth transition from state to state. For example, the process of changing voltage in a lighting network is a random process with continuous states.

We will consider only random processes with discrete states.

When analyzing random processes with discrete states, it is very convenient to use a geometric scheme - the so-called state graph. A state graph geometrically depicts the possible states of the system and its possible transitions from state to state.

Let there be a system S with discrete states s1, s2, ..., sn.

Each state will be represented by a rectangle, and possible transitions (“jumps”) from state to state will be represented by arrows connecting these rectangles. An example of a state graph is shown in Fig. 4.1.

Note that the arrows mark only direct transitions from state to state; if the system can pass from state s2 to s3 only through s1, then the arrows mark only the transitions s2 -> s1 and s1 -> s3, but not s2 -> s3. Let us look at a few examples:

1. System S is a company that can be in one of five possible states:

s1 - works at a profit;

s2 - has lost its development prospects and ceased to generate profit;

s3 - has become a target for a potential takeover;

s4 - is under external administration;

s5 - the property of the liquidated company is sold at auction.

The state graph of the company is shown in Fig. 4.2.

Fig. 4.2

2. System S is a bank with two branches. The following system states are possible:

s1 - both branches operate at a profit;

s2 - the first branch operates without profit, the second operates at a profit;

s3 - the second branch operates without profit, the first operates at a profit;

s4 - both branches operate without profit.

It is assumed that there is no improvement in the condition.

The state graph is shown in Fig. 4.3. Note that the graph does not show a possible transition from state s1 directly to s4, which would occur if the bank at once began to operate at a loss. The possibility of such an event can be neglected, as practice confirms.

Fig. 4.3

3. System S is an investment company consisting of two traders (departments): I and II; each of them may at some point in time begin to operate at a loss. If this happens, the company's management immediately takes measures to restore the profitable operation of the department.

Possible system states:

s1 - the activities of both departments are profitable;

s2 - the first department is being restored, the second is operating at a profit;

s3 - the first department is operating at a profit, the second is being restored;

s4 - both departments are being restored.

The system state graph is shown in Fig. 4.4.

4. In the conditions of the previous example, the activities of each trader, before he begins to restore the profitable work of the department, are subject to study by the management of the company in order to take measures to improve it.

For convenience, we will number the states of the system not with one but with two indices: the first denotes the status of the first trader (1 - works at a profit, 2 - his activities are being studied by management, 3 - restores the profitable activity of the department); the second denotes the same states for the second trader. For example, s23 means: the activities of the first trader are being studied, the second is restoring profitable work.

Possible states of the system S:

s11 - the activities of both traders bring profit;

s12 - the first trader works at a profit, the activities of the second are studied by the company's management;

s13 - the first trader works at a profit, the second restores the profitable activity of the department;

s21 - the activities of the first trader are studied by management, the second works at a profit;

s22 - the activities of both traders are studied by management;

s23 - the work of the first trader is studied, the second trader restores the profitable activity of the department;

s31 - the first trader restores the profitable activity of the department, the second works at a profit;

s32 - the first trader restores the profitable activity of the department, the work of the second trader is studied;

s33 - both traders restore the profitable work of their departments.

There are nine states in total. The state graph is shown in Fig. 4.5.

A Markov process is a random process whose evolution after any given value of the time parameter t does not depend on the evolution preceding t, provided that the value of the process at this moment is fixed (in short: the "future" and the "past" of the process are independent of each other for a known "present").

The property defining a Markov process is usually called the Markov property; it was first formulated by A. A. Markov. However, already in the work of L. Bachelier one can discern an attempt to treat Brownian motion as a Markov process, an attempt that received justification after the research of N. Wiener (1923). The foundations of the general theory of continuous-time Markov processes were laid by A. N. Kolmogorov.

Markov property. There are definitions of a Markov process that differ significantly from each other. One of the most common is the following. Let a random process X(t) with values in a measurable space (E, B) be given on a probability space (Ω, F, P), where T is a subset of the real axis. Let N_t (respectively N^t) be the σ-algebra in F generated by the quantities X(s) for s ≤ t (respectively s ≥ t), where t ∈ T. In other words, N_t (respectively N^t) is the set of events associated with the evolution of the process up to the moment t (starting from the moment t). The process X(t) is called a Markov process if (almost surely) the Markov property

P(A ∩ B | X(t)) = P(A | X(t)) P(B | X(t)),  A ∈ N_t, B ∈ N^t,    (1)

holds for all t ∈ T; or, what is the same, if for any t ∈ T and any B ∈ N^t

P(B | N_t) = P(B | X(t)).    (2)

A Markov process for which T is contained in the set of natural numbers is called a Markov chain (however, the latter term is most often associated with the case of an at most countable E). If T is an interval of the real axis and E is at most countable, the Markov process is called a continuous-time Markov chain. Examples of continuous-time Markov processes are provided by diffusion processes and processes with independent increments, including Poisson and Wiener processes.

In what follows, for definiteness, we speak only of the case T = [0, ∞). Formulas (1) and (2) provide a clear interpretation of the principle of independence of the "past" and the "future" for a known "present", but the definition of a Markov process based on them turned out to be insufficiently flexible in the numerous situations where one must consider not one but a set of conditions of the type (1) or (2), corresponding to different, although consistent in a certain way, measures. Considerations of this kind led to the adoption of the following definition (see the references).

Let the following be given:

a) a measurable space (E, B), where the σ-algebra B contains all one-point sets in E;

b) a measurable space (Ω, F) equipped with a family of σ-algebras F_s^t ⊆ F (s ≤ t) such that F_s^t ⊆ F_{s'}^{t'} if [s, t] ⊆ [s', t'];

c) a function ("trajectory") x_t = x_t(ω) defining, for any t ≥ s, a measurable mapping of (Ω, F_s^t) into (E, B);

d) for each s ≥ 0 and x ∈ E, a probability measure P_{s,x} on the σ-algebra F_s = F_s^∞ such that the function P_{s,x}(B) is measurable with respect to x if s ≥ 0 and B ∈ F_s.

This collection is called a (non-terminating) Markov process defined in (E, B) if P_{s,x}-almost surely

P_{s,x}(x_u ∈ V | F_s^t) = P(t, x_t, u, V),    (3)

whatever s ≤ t ≤ u and V ∈ B may be. Here Ω is the space of elementary events, (E, B) is the phase space or state space, and P(s, x, t, V) is the transition function, or transition probability, of the process X(t). If E is endowed with a topology and B is the collection of Borel sets in E, it is customary to say that the Markov process is given in E. Typically, the definition of a Markov process includes the requirement that P(s, x, s, V) = 1_V(x), and then P(s, x, t, V) is interpreted as the probability of the event x_t ∈ V provided that x_s = x.

The question arises: can every Markov transition function P(s, x; t, V) given in a measurable space (E, B) be regarded as the transition function of some Markov process? The answer is positive if, for example, E is a separable locally compact space and B is the collection of Borel sets in E. Moreover, let E be a complete metric space and let

lim_{t↓s} (t - s)^{-1} sup_x P(s, x, t, A) = 0

for any ε > 0, where A is the complement of the ε-neighborhood of the point x. Then the corresponding Markov process can be taken to be right-continuous and having left limits (that is, its trajectories can be chosen so). The existence of a continuous Markov process is ensured by a strengthened condition of the same form (see the references). In the theory of Markov processes, the main attention is paid to processes that are homogeneous (in time). The corresponding definition assumes the system of objects a) - d) to be given, with the difference that for the parameters s and u appearing in its description only the value 0 is now allowed. The notation is also simplified:

F_t = F_0^t,  P_x = P_{0,x},  P(t, x, V) = P(0, x, t, V).

Further, the homogeneity of the space Ω is postulated, i.e. it is required that for any ω ∈ Ω and h ≥ 0 there exist ω' ∈ Ω such that x_t(ω') = x_{t+h}(ω) for all t ≥ 0. Owing to this, on the σ-algebra N, the smallest of the σ-algebras in Ω containing all events of the form {ω : x_s ∈ V}, time-shift operators θ_t are defined; they preserve the operations of union, intersection and subtraction of sets, and for them

θ_t {ω : x_s ∈ V} = {ω : x_{s+t} ∈ V}.

The collection X = (x_t, F_t, P_x) is called a (non-terminating) homogeneous Markov process defined in (E, B) if P_x-almost surely

P_x(x_{t+u} ∈ V | F_t) = P(u, x_t, V)    (4)

for all t, u ≥ 0 and V ∈ B. The transition function of the process X(t) is taken to be P(t, x, V), and, unless there are special reservations, it is additionally required that P(0, x, V) = 1_V(x). It is useful to keep in mind that when checking (4) it suffices to consider only sets of the form {x_s ∈ V}, and that in (4) F_t can always be replaced by the σ-algebra equal to the intersection of the completions of F_t over all possible measures. Often a probability measure μ (the "initial distribution") is fixed and a Markov random function (x_t, F_t, P_μ) is considered, where P_μ is the measure on F given by the equality

P_μ(B) = ∫ P_x(B) μ(dx).

A Markov process is called progressively measurable if, for every t > 0, the function (s, ω) → x_s(ω), s ≤ t, induces a measurable mapping of ([0, t] × Ω, B_t × F_t) into (E, B), where B_t is the σ-algebra of Borel subsets of [0, t]. Right-continuous Markov processes are progressively measurable. There is a way of reducing the inhomogeneous case to the homogeneous one (see the references), and in what follows we speak of homogeneous Markov processes.

Strict Markov property. Let a progressively measurable Markov process X = (x_t, F_t, P_x) be given in a measurable space (E, B). A function τ = τ(ω) is called a Markov moment if {τ ≤ t} ∈ F_t for all t ≥ 0. A set B is assigned to the family F_τ if B ∩ {τ ≤ t} ∈ F_t for all t ≥ 0 (most often F_τ is interpreted as the set of events associated with the evolution of X(t) up to the moment τ). For ω ∈ {τ < ∞} one sets x_τ(ω) = x_{τ(ω)}(ω).

A progressively measurable Markov process X is called a strict Markov process if, for any Markov moment τ and all t ≥ 0, x ∈ E and V ∈ B, the relation

P_x(x_{τ+t} ∈ V | F_τ) = P(t, x_τ, V)    (5)

(the strict Markov property) holds almost surely on the set {τ < ∞}. When checking (5) it suffices to consider only sets of the form {x_s ∈ V}. A strict Markov process is, for example, any right-continuous Feller Markov process in a topological space E. A Markov process is called a Feller Markov process if the function

T_t f(x) = ∫ P(t, x, dy) f(y)

is continuous in x whenever f is continuous and bounded.

Within the class of strict Markov processes certain subclasses are distinguished. Let a Markov transition function P(t, x, V), defined in a metric locally compact space E, be stochastically continuous:

lim_{t↓0} P(t, x, U) = 1

for any neighborhood U of each point x ∈ E. Then, if the operators T_t take the class of continuous functions vanishing at infinity into itself, the function P(t, x, V) corresponds to a standard Markov process X, i.e. a right-continuous strict Markov process for which x_{τ_n} → x_τ P_x-almost surely on the set {τ_n ↑ τ, τ < ∞}, where τ_n are Markov moments that do not decrease as n grows.

Terminating Markov process. Often a physical system is best described using a non-terminating Markov process, but only on a time interval of random length. In addition, even simple transformations of Markov processes can lead to a process with trajectories specified on a random interval (see Functional of a Markov process). Guided by these considerations, one introduces the concept of a terminating Markov process.

Let X̃ = (x̃_t, F̃_t, P̃_x) be a homogeneous Markov process in a phase space (Ẽ, B̃) having a transition function P̃(t, x, V), and let there exist a point e ∈ Ẽ and a function ζ = ζ(ω) such that x̃_t(ω) = e for t ≥ ζ(ω) and x̃_t(ω) ≠ e otherwise (if there are no special reservations, one takes ζ > 0). The new trajectory x_t(ω) is specified only for t < ζ(ω) by the equality x_t(ω) = x̃_t(ω), and F_t is defined as the trace of F̃_t on the set {t < ζ}.

The collection X = (x_t, F_t, P_x) is called a terminating Markov process obtained from X̃ by terminating (or killing) at the time ζ. The value ζ is called the moment of termination, or the lifetime, of the terminating Markov process. The phase space of the new process is (E, B), where E = Ẽ \ {e} and B is the trace of the σ-algebra B̃ on E. The transition function of the terminating Markov process is the restriction of P̃(t, x, V) to this set. The process X(t) is called a strict Markov process, or a standard Markov process, if it has the corresponding property. A non-terminating Markov process can be considered as a terminating Markov process with the moment of termination ζ = ∞. An inhomogeneous terminating Markov process is defined similarly.

Markov processes and differential equations. Markov processes of the Brownian-motion type are closely related to parabolic differential equations. The transition density p(s, x; t, y) of a diffusion process satisfies, under certain additional assumptions, the backward and forward Kolmogorov differential equations

∂p/∂s + a(s, x) ∂p/∂x + (1/2) b(s, x) ∂²p/∂x² = 0,    (6)

∂p/∂t + ∂[a(t, y) p]/∂y - (1/2) ∂²[b(t, y) p]/∂y² = 0,    (7)

where L(s, x) denotes the operator a(s, x) ∂/∂x + (1/2) b(s, x) ∂²/∂x², so that (6) can be written as ∂p/∂s + L(s, x) p = 0.

The function p(s, x; t, y) is the Green's function of equations (6)-(7), and the first known methods for constructing diffusion processes were based on existence theorems for this function for the differential equations (6)-(7). For a time-homogeneous process, the operator L(s, x) = L(x) coincides on smooth functions with the characteristic operator of the Markov process (see Transition operators semigroup).

Mathematical expectations of various functionals of diffusion processes serve as solutions of the corresponding boundary value problems for the differential equation (6). Let E_{s,x} be the mathematical expectation with respect to the measure P_{s,x}. Then the function

u1(s, x) = E_{s,x} f(x_T)

satisfies for s < T equation (6) and the condition u1(T, x) = f(x).

Likewise, the function

u2(s, x) = E_{s,x} ∫_s^T f(t, x_t) dt

satisfies for s < T the equation

∂u2/∂s + L(s, x) u2 = -f(s, x)

and the condition u2(T, x) = 0.

Let τ be the moment of first reaching the boundary ∂D of a domain D by the trajectory of the process. Then, under certain conditions, the function

u(s, x) = E_{s,x} φ(x_τ)

satisfies the equation

∂u/∂s + L(s, x) u = 0

and takes the values φ on the boundary ∂D.

The solution of the first boundary value problem for a general linear parabolic second-order equation

∂u/∂s + L(s, x) u + c(s, x) u = f(s, x)    (8)

under fairly general assumptions can be written in the form of a mathematical expectation of a functional of the corresponding diffusion process:

u(s, x) = E_{s,x} [ φ(τ, x_τ) exp(∫_s^τ c(t, x_t) dt) - ∫_s^τ f(t, x_t) exp(∫_s^t c(v, x_v) dv) dt ].    (9)

In the case when the operator L and the functions c, f do not depend on s, a representation similar to (9) is also possible for the solution of a linear elliptic equation. More precisely, the function

u(x) = E_x [ φ(x_τ) exp(∫_0^τ c(x_t) dt) - ∫_0^τ f(x_t) exp(∫_0^t c(x_v) dv) dt ]    (10)

under certain assumptions is a solution of the problem

L u + c u = f in D,  u = φ on ∂D.    (11)

In the case when the operator L degenerates (det b(s, x) = 0) or the boundary ∂D is not "good" enough, the boundary values may fail to be attained by the functions (9), (10) at individual points or on entire sets. The concept of a regular boundary point for the operator L has a probabilistic interpretation: at regular points of the boundary, the boundary values are attained by the functions (9), (10). The solution of problems (8), (11) makes it possible to study the properties of the corresponding diffusion processes and of their functionals.

There are methods for constructing Markov processes that do not rely on constructing solutions of equations (6), (7), for example, the method of stochastic differential equations, absolutely continuous change of measure, etc. This circumstance, together with formulas (9), (10), allows one to construct and study probabilistically the properties of boundary value problems for equation (8), as well as the properties of the solutions of the corresponding elliptic equations.

Since the solution of a stochastic differential equation is insensitive to degeneracy of the matrix b(s, x), probabilistic methods have been used to construct solutions of degenerate elliptic and parabolic differential equations. The extension of the averaging principle of N. M. Krylov and N. N. Bogolyubov to stochastic differential equations made it possible, using (9), to obtain the corresponding results for elliptic and parabolic differential equations. It turned out to be possible to solve certain difficult problems of studying the properties of solutions of equations of this type with a small parameter at the highest derivative by means of probabilistic considerations. The solution of the second boundary value problem for equation (6) also has a probabilistic meaning. The formulation of boundary value problems for an unbounded domain is closely related to the recurrence of the corresponding diffusion process.

In the case of a time-homogeneous process (L does not depend on s), the positive solution of the equation L*p = 0 coincides, up to a multiplicative constant, under certain assumptions with the stationary distribution density of the Markov process. Probabilistic considerations also turn out to be useful when considering boundary value problems for nonlinear parabolic equations. R. Z. Khasminsky.

Lit.: Markov A. A., "Izvestiya Fiziko-Matematicheskogo Obshchestva pri Kazanskom Universitete", 1906, v. 15, No. 4, p. 135-56; Bachelier L., "Ann. scient. École norm. super.", 1900, v. 17, p. 21-86; Kolmogorov A. N., "Math. Ann.", 1931, Bd 104, S. 415-458; Russian transl.: "Uspekhi Matematicheskikh Nauk", 1938, issue 5, p. 5-41; Chung Kai-lai, Homogeneous Markov chains, transl. from English, M., 1964; Feller W., "Ann. Math.", 1954, v. 60, p. 417-36; Dynkin E. B., Yushkevich A. A., "Probability theory and its applications", 1956, v. 1, issue 1, p. 149-55; Hunt G. A., Markov processes and potentials, transl. from English, M., 1962; Dellacherie C., Capacities and random processes, transl. from French, M., 1975; Dynkin E. B., Foundations of the theory of Markov processes, M., 1959; idem, Markov processes, M., 1963; Gikhman I. I., Skorokhod A. V., Theory of random processes, v. 2, M., 1973; Freidlin M. I., in: Itogi Nauki. Probability theory, mathematical statistics - Theoretical cybernetics. 1966, M., 1967, p. 7-58; Khasminsky R. Z., "Probability theory and its applications", 1963, v. 8, issue 1, p. 3-25; Ventzel A. D., Freidlin M. I., Fluctuations in dynamical systems under the influence of small random disturbances, M., 1979; Blumenthal R. M., Getoor R. K., Markov processes and potential theory, N.Y.-L., 1968; Getoor R. K., Markov processes: Ray processes and right processes, B., 1975; Kuznetsov S. E., "Probability theory and its applications", 1980, v. 25, issue 2, p. 389-93.