Lecture No. 5

Section 2. Random variables.

Topic 1. Distribution function, probability density and numerical characteristics of a random variable.

Purpose of the lecture: to give knowledge of the ways of describing random variables.

Lecture questions:

1. Numerical characteristics of random variables.
2. Mathematical expectation and its properties.
3. Characteristic function.
4. The difference between the generating and characteristic functions.

Literature:

L1 - Bocharov P. P., Pechinkin A. V. Probability Theory. Mathematical Statistics. 2nd ed. Moscow: FIZMATLIT, 2005. 296 p.

L2 - Gmurman V. E. Probability Theory and Mathematical Statistics: Textbook for Universities. 9th ed., stereotyped. Moscow: Vysshaya Shkola, 2005. 479 p., ill.

L3 - Nakhman A. D., Kosenkova I. V. Series. Probability Theory and Mathematical Statistics: Methodological Developments. Tambov: TSTU Publishing House, 2009.

L4 - Plotnikova S. V. Mathematical Statistics: Methodological Developments. Tambov: TSTU Publishing House, 2005 (PDF file).

When solving many problems, the characteristic function is used instead of the distribution function F(x) and the probability density p(x). With its help it proves convenient, for example, to determine certain numerical characteristics of a random variable and the distribution laws of functions of random variables.

The characteristic function of a random variable X is the Fourier transform of its probability density p(x):

θ(t) = M[e^(itX)] = ∫_{−∞}^{+∞} e^(itx) p(x) dx, (2.6.1)

where t is the real parameter serving as the argument of the characteristic function, and M[·] denotes the mathematical expectation of a random variable (see § 2.8).

Applying the inverse Fourier transform, we obtain a formula that determines the probability density of a random variable from its characteristic function:

p(x) = (1/2π) ∫_{−∞}^{+∞} e^(−itx) θ(t) dt. (2.6.2)

Since the dimension of p(x) is the inverse of the dimension of x, the product p(x)dx, and hence θ(t), is dimensionless. The argument t has the dimension inverse to that of x.

Using representation (2.5.7) of the probability density p(x) as a sum of delta functions, we can extend formula (2.6.1) to a discrete random variable:

θ(t) = Σ_k e^(itx_k) p_k. (2.6.3)
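Formula (2.6.3) is easy to evaluate numerically. The sketch below is our illustration, not part of the lecture (the helper name `char_func` is invented); it computes the characteristic function of a fair die and checks that θ(0) = 1 and |θ(t)| ≤ 1:

```python
import cmath

def char_func(values, probs, t):
    """Characteristic function of a discrete r.v.: theta(t) = sum_k e^{i t x_k} p_k."""
    return sum(p * cmath.exp(1j * t * x) for x, p in zip(values, probs))

# Fair die: x_k = 1, ..., 6, each with probability 1/6
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

print(char_func(values, probs, 0.0))       # theta(0) = 1
print(abs(char_func(values, probs, 1.7)))  # modulus never exceeds 1
```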

Sometimes, instead of the characteristic function itself, it is convenient to use its logarithm:

Ψ(t) = ln θ(t). (2.6.4)

The function Ψ(t) may be called the second (logarithmic) characteristic function of the random variable X.

Let us note the most important properties of the characteristic function.

1. The characteristic function satisfies the conditions

θ(0) = 1, |θ(t)| ≤ 1. (2.6.5)

2. For a symmetric distribution, when p(x) = p(−x), the imaginary part in (2.6.1) is zero, and therefore the characteristic function is a real even function of t. Conversely, if θ(t) takes only real values, then it is even and the corresponding distribution is symmetric.

3. If a random variable Y is a linear function Y = aX + b of a random variable X, then its characteristic function is determined by the expression

θ_Y(t) = M[e^(it(aX + b))] = e^(itb) θ_X(at), (2.6.6)

where a and b are constants.
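Property (2.6.6) can be verified directly for a discrete variable: the characteristic function of the transformed values must coincide with e^(itb)·θ_X(at). The distribution and the constants below are hypothetical, chosen only for the check:

```python
import cmath

def cf_discrete(values, probs, t):
    """theta(t) = sum_k p_k e^{i t x_k} for a discrete r.v."""
    return sum(p * cmath.exp(1j * t * x) for x, p in zip(values, probs))

values = [0, 1, 2]
probs = [0.5, 0.3, 0.2]
a, b, t = 3.0, -1.5, 0.7

# c.f. of Y = aX + b computed directly from the transformed values...
lhs = cf_discrete([a * x + b for x in values], probs, t)
# ...and via property (2.6.6): theta_Y(t) = e^{itb} * theta_X(at)
rhs = cmath.exp(1j * t * b) * cf_discrete(values, probs, a * t)
print(abs(lhs - rhs))  # ~0
```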

4. The characteristic function of a sum of independent random variables is equal to the product of the characteristic functions of the terms: if Y = X₁ + X₂ + … + X_n with independent X_k, then

θ_Y(t) = θ_{X₁}(t) θ_{X₂}(t) … θ_{X_n}(t). (2.6.7)

This property is especially useful, since otherwise finding the probability density of a sum of random variables requires repeated convolutions, which often causes difficulties.
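The equivalence of the two routes, convolution of distributions versus multiplication of characteristic functions, can be checked on a small example. The sketch below (our illustration) sums two independent Bernoulli variables with q = 0.2:

```python
import cmath
from itertools import product

def cf(values, probs, t):
    """theta(t) = sum_k p_k e^{i t x_k} for a discrete r.v."""
    return sum(p * cmath.exp(1j * t * x) for x, p in zip(values, probs))

# Two independent Bernoulli(0.2) variables
vals, probs = [0, 1], [0.8, 0.2]

# Distribution of the sum obtained by direct convolution
conv = {}
for (x, px), (y, py) in product(zip(vals, probs), repeat=2):
    conv[x + y] = conv.get(x + y, 0.0) + px * py

t = 1.3
lhs = cf(list(conv), list(conv.values()), t)  # c.f. of the convolved distribution
rhs = cf(vals, probs, t) ** 2                 # product of the two identical c.f.'s
print(abs(lhs - rhs))  # ~0
```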

Thus, in view of the one-to-one relationship between the distribution function, the probability density and the characteristic function, the latter can equally well be used to describe a random variable.

Example 2.6.1. A code combination of two pulses is transmitted through a communication channel with interference. Due to the independent action of the interference on these pulses, each of them can be suppressed with probability q = 0.2. It is required to determine: 1) the distribution series of the random variable X, the number of pulses suppressed by the interference; 2) the distribution function; 3) the probability density; 4) the characteristic function of X.

The discrete random variable X can take three values: x₁ = 0 (neither pulse is suppressed), x₂ = 1 (one pulse is suppressed), x₃ = 2 (both pulses are suppressed). The probabilities of these values are, respectively:

p₀ = (1 − q)² = 0.64, p₁ = 2q(1 − q) = 0.32, p₂ = q² = 0.04.
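Under the binomial model of the example (two independent pulses, each suppressed with probability q = 0.2), the distribution series, the distribution function F(x) = P(X < x) and the characteristic function can be computed directly. This is our sketch of the computation, not the textbook's solution:

```python
import cmath

q = 0.2      # probability that a single pulse is suppressed
p = 1 - q

# Distribution series of X, the number of suppressed pulses
series = {0: p * p, 1: 2 * q * p, 2: q * q}
print(series)  # probabilities approx. 0.64, 0.32, 0.04

# Distribution function F(x) = P(X < x) (the convention used in this course)
def F(x):
    return sum(pk for xk, pk in series.items() if xk < x)

print(F(0), F(1), F(2), F(3))  # approx. 0, 0.64, 0.96, 1

# Characteristic function theta(t) = sum_k p_k e^{i t x_k}
def theta(t):
    return sum(pk * cmath.exp(1j * t * xk) for xk, pk in series.items())

print(abs(theta(0.0)))  # 1.0
```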

From a discussion of this topic:

"Incidentally, you have just argued that the student should not need to know anything about uniform continuity, and now you offer him delta functions? I shall leave that without comment."

"I am glad to see you again on this topic, ready to discuss it regardless of remarks concerning me personally. The student must know everything he can be asked about, but above all he must master the system of concepts, their characterization and the relations between them; he should neither be confined to the narrow circle of the section of the course he is currently studying, nor be a walking reference book that constantly keeps in mind a large stock of functions failing one condition or another.

In the original problem it was required to establish whether a given function was the characteristic function of any random variable. The student receives such a problem when the concept of the characteristic function is introduced, and the goal of solving it is to consolidate an understanding of the relationship between the characteristic function and the distribution, as well as knowledge of the properties of characteristic functions.

There are two ways to show that a given function is a characteristic function: either find the density corresponding to it by the inverse Fourier transform and check that it is non-negative and satisfies the normalization condition, or prove that the given function is positive definite and refer to the Bochner-Khinchin theorem. The use of theorems on representing a random variable as a linear combination of Rademacher random variables does not in any way further the understanding of the basic properties of characteristic functions; moreover, as I noted above, such a solution contains a veiled Fourier series, i.e. it actually corresponds to the first method.

When it is required to show that a given function cannot be the characteristic function of any random variable, it is enough to establish the failure of one of the properties of a characteristic function: the value 1 at zero, modulus bounded by one, correct values of the moments of the distribution, uniform continuity. Checking the moment values computed from the given function is mathematically equivalent to checking uniform continuity, in the sense that the failure of either property equally disqualifies the function. However, checking the moment values is a formal procedure: differentiate and check. Uniform continuity, in the general case, has to be proved, which makes the success of the solution depend on the creative potential of the student, on his ability to 'guess'.

As part of the discussion of 'constructing' a random variable, I propose to consider a simple problem: construct a random variable whose characteristic function has a given form."

α_k(Y) = M[Y^k] = ∫_{−∞}^{+∞} φ^k(x) f(x) dx; µ_k(Y) = ∫_{−∞}^{+∞} (φ(x) − m_Y)^k f(x) dx,

the initial and central moments of a random variable Y = φ(X), expressed through the density f(x) of X.

Characteristic function of a random variable

Let Y = e^(itX), where X is a random variable with a known distribution law, t is a parameter, and i = √(−1).

The characteristic function of a random variable X is the mathematical expectation of the function Y = e^(itX):

υ_X(t) = M[e^(itX)] = Σ_{k=1}^{∞} e^(itx_k) p_k for a discrete random variable,

υ_X(t) = ∫_{−∞}^{+∞} e^(itx) f(x) dx for a continuous random variable.

Thus, the characteristic function υ_X(t) and the distribution law of a random variable are uniquely related by the Fourier transform. For example, the distribution density f(x) of a random variable X is uniquely expressed through its characteristic function by the inverse Fourier transform:

f(x) = (1/2π) ∫_{−∞}^{+∞} υ_X(t) e^(−itx) dt.

Basic properties of the characteristic function:

1. The characteristic function of the quantity Z = aX + b, where X is a random variable with characteristic function υ_X(t), is equal to

υ_Z(t) = M[e^(it(aX + b))] = e^(itb) υ_X(at).

2. The initial moment of order k of the random variable X is equal to

α_k(X) = υ_X^{(k)}(0) · i^(−k),

where υ_X^{(k)}(0) is the value of the k-th derivative of the characteristic function at t = 0.
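The moment property can be illustrated numerically: for a variable with a known characteristic function, a central-difference approximation of the first derivative at t = 0, multiplied by i^(−1), recovers the mean. A minimal sketch (our example, using a Bernoulli variable with success probability 0.3):

```python
import cmath

def cf(t):
    # c.f. of a Bernoulli(0.3) variable: 0.7 + 0.3 e^{it}
    return 0.7 + 0.3 * cmath.exp(1j * t)

# First initial moment: alpha_1 = cf'(0) * i^{-1}; approximate the derivative
h = 1e-6
deriv = (cf(h) - cf(-h)) / (2 * h)  # central difference at t = 0
alpha1 = (deriv / 1j).real
print(alpha1)  # approx. 0.3, the mean of Bernoulli(0.3)
```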

3. The characteristic function of the sum Y = Σ_{k=1}^{n} X_k of independent random variables is equal to the product of the characteristic functions of the terms:

υ_Y(t) = Π_{i=1}^{n} υ_{X_i}(t).

4. The characteristic function of a normal random variable with parameters m and σ is equal to

υ_X(t) = e^(itm − t²σ²/2).
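The closed form for the normal characteristic function can be checked against a direct numerical evaluation of the defining integral ∫ e^(itx) f(x) dx. The sketch below is our illustration; the quadrature routine and its parameters are ad hoc choices:

```python
import math, cmath

m, sigma = 1.0, 2.0

def normal_pdf(x):
    """Density of N(m, sigma^2)."""
    return math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def cf_numeric(t, lo=-30.0, hi=30.0, n=20000):
    """Approximate the integral of e^{itx} f(x) dx by the midpoint rule."""
    dx = (hi - lo) / n
    return sum(cmath.exp(1j * t * (lo + (k + 0.5) * dx)) * normal_pdf(lo + (k + 0.5) * dx)
               for k in range(n)) * dx

t = 0.8
closed_form = cmath.exp(1j * t * m - t ** 2 * sigma ** 2 / 2)
print(abs(cf_numeric(t) - closed_form))  # small quadrature error
```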

LECTURE 8 Two-dimensional random variables. Two-dimensional distribution law

A two-dimensional random variable (X, Y) is a set of two one-dimensional random variables that take values as a result of the same experiment.

Two-dimensional random variables are characterized by the sets of values Ω_X, Ω_Y of their components and by a joint (two-dimensional) distribution law. Depending on the type of the components X, Y, discrete, continuous and mixed two-dimensional random variables are distinguished.

A two-dimensional random variable (X, Y) can be represented geometrically as a random point (X, Y) on the xOy plane or as a random vector directed from the origin to the point (X, Y).

The two-dimensional distribution function of a two-dimensional random variable (X, Y) is equal to the probability of the joint occurrence of the two events {X < x} and {Y < y}:

F(x, y) = p({X < x} ∩ {Y < y}).

Geometrically, the two-dimensional distribution function F(x, y) is the probability that the random point (X, Y) falls into the infinite quadrant with vertex at the point (x, y), lying to the left of and below it. Within this quadrant the component X takes values smaller than the real number x, which corresponds to the distribution function F_X(x), and the component Y takes values smaller than the real number y, which corresponds to F_Y(y).

Properties of the two-dimensional distribution function:

1. 0 ≤ F(x, y) ≤ 1.

Proof. The property follows from the definition of the distribution function as a probability: a probability is a non-negative number not exceeding 1.

2. F (–∞, y) = F (x, –∞) = F (–∞, –∞) = 0,F (+∞, +∞) = 1.

3. F (x 1 ,y )≤ F (x 2 ,y ), if x 2 >x 1 ;F (x ,y 1 )≤ F (x ,y 2 ), if y 2 >y 1 .

Proof. Let us prove that F(x, y) is a non-decreasing function of the variable x. Consider the probability

p(X < x₂, Y < y) = p(X < x₁, Y < y) + p(x₁ ≤ X < x₂, Y < y).

Since p(X < x₂, Y < y) = F(x₂, y) and p(X < x₁, Y < y) = F(x₁, y), we obtain

F(x₂, y) − F(x₁, y) = p(x₁ ≤ X < x₂, Y < y) ≥ 0, whence F(x₂, y) ≥ F(x₁, y).

The argument for y is similar.

4. Transition to one-dimensional characteristics:

F(x, ∞) = p(X < x, Y < ∞) = p(X < x) = F_X(x);

F(∞, y) = p(X < ∞, Y < y) = p(Y < y) = F_Y(y).

5. Probability of hitting a rectangular region:

p(α ≤ X ≤ β; δ ≤ Y ≤ γ) = F(β, γ) − F(β, δ) − F(α, γ) + F(α, δ).

[Figure: rectangle with vertices (α, δ), (β, δ), (α, γ), (β, γ) on the xOy plane.]

The distribution function is the most universal form of the distribution law, used to describe both continuous and discrete two-dimensional random variables.
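The rectangle formula above can be checked on a case where F(x, y) is known in closed form. The sketch below (our example) takes two independent components uniform on [0, 1], for which F(x, y) = xy on the unit square, so the probability of the rectangle should equal its area:

```python
def clamp(u):
    """Restrict u to [0, 1], matching the one-dimensional U(0,1) d.f."""
    return max(0.0, min(1.0, u))

def F(x, y):
    """Joint d.f. of two independent U(0,1) components: F(x, y) = x*y on the unit square."""
    return clamp(x) * clamp(y)

a, b = 0.2, 0.5   # alpha, beta for X
d, g = 0.1, 0.9   # delta, gamma for Y

prob = F(b, g) - F(b, d) - F(a, g) + F(a, d)
print(prob)  # approx. 0.24 = (0.5 - 0.2) * (0.9 - 0.1)
```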

Distribution matrix

A two-dimensional random variable (X, Y) is discrete if the sets of values of its components Ω_X and Ω_Y are countable. The probabilistic characteristics of such variables are described by the two-dimensional distribution function and the distribution matrix.

The distribution matrix is a rectangular table that contains the values of the component X, Ω_X = (x₁, x₂, …, x_n), the values of the component Y, Ω_Y = (y₁, y₂, …, y_m), and the probabilities of all possible pairs of values p_ij = p(X = x_i, Y = y_j), i = 1, …, n, j = 1, …, m.

Properties of the distribution matrix:

1. Normalization condition: Σ_{i=1}^{n} Σ_{j=1}^{m} p_ij = 1.

2. Transition to the probability distribution series of the component X:

p_i = p(X = x_i) = Σ_{j=1}^{m} p_ij, i = 1, …, n.

3. Transition to the probability distribution series of the component Y:

p_j = p(Y = y_j) = Σ_{i=1}^{n} p_ij, j = 1, …, m.
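For a concrete matrix the marginal series are obtained by summing rows and columns. The 2×3 matrix below is hypothetical, invented only to illustrate the transition formulas:

```python
# Hypothetical 2x3 distribution matrix p_ij = P(X = x_i, Y = y_j)
P = [
    [0.10, 0.20, 0.10],   # row for x_1
    [0.25, 0.05, 0.30],   # row for x_2
]

# Normalization condition: all entries sum to 1
total = sum(sum(row) for row in P)
print(total)  # approx. 1

# Marginal distribution series of X (sum over j) and of Y (sum over i)
p_x = [sum(row) for row in P]
p_y = [sum(P[i][j] for i in range(len(P))) for j in range(len(P[0]))]
print(p_x)  # approx. [0.4, 0.6]
print(p_y)  # approx. [0.35, 0.25, 0.4]
```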

Two-dimensional distribution density

A two-dimensional random variable (X, Y) is continuous if its distribution function F(x, y) is continuous, differentiable with respect to each argument, and has a second mixed derivative ∂²F(x, y)/∂x∂y.

The two-dimensional distribution density f(x, y) characterizes the probability density in the vicinity of the point with coordinates (x, y) and is equal to the second mixed derivative of the distribution function:

f(x, y) = ∂²F(x, y)/∂x∂y.

The probability of hitting a region D is obtained by integrating the density over that region: p((X, Y) ∈ D) = ∫∫_D f(x, y) dx dy.

Properties of two-dimensional density:

1. f (x ,y )≥ 0.

2. Normalization condition:

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.
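The normalization condition can be verified numerically for a density given in closed form. The sketch below (our illustration; the grid parameters are arbitrary) integrates the standard bivariate normal density with independent components over a large box:

```python
import math

def f(x, y):
    """Standard bivariate normal density with independent components."""
    return math.exp(-(x * x + y * y) / 2) / (2 * math.pi)

# Midpoint-rule double integral over a box that holds almost all the mass
lo, hi, n = -8.0, 8.0, 400
d = (hi - lo) / n
total = sum(f(lo + (i + 0.5) * d, lo + (j + 0.5) * d)
            for i in range(n) for j in range(n)) * d * d
print(total)  # approx. 1 (normalization condition)
```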

A CHARACTERISTIC FUNCTION of a probability measure µ on the real line is the complex-valued function given on the entire number line by the formula

φ(t) = ∫ e^(itx) µ(dx).

The characteristic function of a random variable X is, by definition, the characteristic function of its probability distribution.

The method associated with the use of characteristic functions was first applied by A. M. Lyapunov and later became one of the main analytical methods of probability theory. It is used especially effectively in proving limit theorems of probability theory; for example, the proof of the central limit theorem for independent identically distributed random variables with finite second moments reduces to the elementary relation

(φ(t/√n))^n → e^(−t²/2) as n → ∞, where φ is the characteristic function of the centered and normalized summand.
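The limit relation behind the central limit theorem, (φ(t/√n))^n → e^(−t²/2) for a centered summand with unit variance, can be observed numerically. The sketch below is our illustration; it uses a symmetric two-point variable X = ±1 with probability 1/2, whose characteristic function is cos t:

```python
import cmath, math

def cf_centered(t):
    """c.f. of X = +/-1 with probability 1/2 each (mean 0, variance 1): cos t."""
    return complex(math.cos(t))

t = 1.1
for n in (10, 100, 10000):
    val = cf_centered(t / math.sqrt(n)) ** n
    print(n, abs(val - cmath.exp(-t * t / 2)))  # the gap shrinks as n grows
```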

Basic properties of characteristic functions:

1) φ(0) = 1, |φ(t)| ≤ 1, and φ is positive definite, i.e.

Σ_{k,l} φ(t_k − t_l) λ_k λ̄_l ≥ 0

for any finite collections of complex numbers λ_k and arguments t_k.

2) φ is uniformly continuous on the entire axis.

4) φ̄(t) = φ(−t); in particular, φ takes only real values (and is an even function) if and only if the corresponding probability measure is symmetric, i.e. µ(E) = µ(−E), where −E = {−x : x ∈ E}.

5) The characteristic function uniquely determines the measure; the inversion formula holds:

µ((a, b)) = lim_{T→∞} (1/2π) ∫_{−T}^{T} ((e^(−ita) − e^(−itb))/(it)) φ(t) dt

for any interval (a, b) whose endpoints have zero µ-measure. If φ is integrable (absolutely, if the integral is understood in the Riemann sense), then the corresponding distribution function has a density f, and

f(x) = (1/2π) ∫_{−∞}^{+∞} e^(−itx) φ(t) dt.

6) The characteristic function of the convolution of two probability measures (of the sum of two independent random variables) is the product of their characteristic functions.

The following three properties express the connection between the existence of moments of a random variable and the degree of smoothness of its characteristic function.

7) If E|X|^n < ∞ for some natural n, then for all natural r ≤ n there exist derivatives of order r of the characteristic function of the random variable X, and the equality

φ^{(r)}(t) = ∫ (ix)^r e^(itx) µ(dx)

holds; in particular, E X^r = i^(−r) φ^{(r)}(0).

8) If φ^{(2n)}(0) exists, then E X^{2n} < ∞.

9) If E|X|^n < ∞ for all n and the moments grow slowly enough that the series Σ_{n≥0} (it)^n E X^n / n! converges for |t| < R, then this series represents φ(t) for all |t| < R.

The use of the method of characteristic functions rests mainly on the properties listed above, as well as on the following two theorems.

Bochner's theorem (description of the class of characteristic functions). Let the function f be given on the real line and f(0) = 1. In order for f to be the characteristic function of some probability measure, it is necessary and sufficient that it be continuous and positive definite.

Lévy's theorem (continuity of the correspondence). Let (µ_n) be a sequence of probability measures and (φ_n) the sequence of their characteristic functions. Then (µ_n) converges weakly to some probability measure µ (i.e., ∫ψ dµ_n → ∫ψ dµ for an arbitrary continuous bounded function ψ) if and only if φ_n(t) converges at each point t to some function f continuous at zero; in the case of convergence, f is the characteristic function of µ. It follows that the relative compactness (in the sense of weak convergence) of a family of probability measures is equivalent to the equicontinuity at zero of the family of corresponding characteristic functions.

Bochner's theorem allows one to regard the Fourier-Stieltjes transform as an isomorphism between the semigroup (with respect to the convolution operation) of probability measures and the semigroup (with respect to pointwise multiplication) of positive definite continuous functions equal to one at zero. Lévy's theorem asserts that this algebraic isomorphism is also a topological homeomorphism if the semigroup of probability measures carries the topology of weak convergence and the semigroup of positive definite functions the topology of uniform convergence on bounded sets.
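Positive definiteness, the key hypothesis of Bochner's theorem, states that the quadratic form Σ φ(t_k − t_l) λ_k λ̄_l is real and non-negative for a genuine characteristic function. The sketch below (our illustration, with randomly chosen points and coefficients) checks this for the standard normal characteristic function e^(−t²/2):

```python
import cmath, random

def phi(t):
    """c.f. of the standard normal distribution."""
    return cmath.exp(-t * t / 2)

# Positive definiteness: sum_{k,l} phi(t_k - t_l) lam_k * conj(lam_l) >= 0
random.seed(0)
ts = [random.uniform(-3, 3) for _ in range(6)]
lams = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6)]

q = sum(phi(tk - tl) * lk * ll.conjugate()
        for tk, lk in zip(ts, lams) for tl, ll in zip(ts, lams))
print(q.real >= -1e-9, abs(q.imag) < 1e-9)  # True True
```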
Expressions for the characteristic functions of the basic probability distributions are known; for example, the characteristic function of the Gaussian measure with mean m and variance σ² is e^(itm − t²σ²/2).

For non-negative integer-valued random variables X, along with the characteristic function, its analogue, the generating function P(z) = E z^X, is used, connected with the characteristic function by the relation φ(t) = P(e^(it)).

The characteristic function of a probability measure in a finite-dimensional space R^n is defined similarly:

φ(t) = ∫ e^(i⟨t, x⟩) µ(dx),

where ⟨t, x⟩ denotes the scalar product. The facts formulated above also hold for characteristic functions of probability measures in R^n.

Lit.: Lukacs E. Characteristic Functions, trans. from English, Moscow, 1979; Feller W. An Introduction to Probability Theory and Its Applications, vol. 2, trans. from English, Moscow, 1967; Prokhorov Yu. V., Rozanov Yu. A. Probability Theory: Basic Concepts, Limit Theorems, Random Processes, 2nd ed., Moscow, 1973; Zolotarev V. M. One-Dimensional Stable Distributions, Moscow, 1983.

N. N. Vakhania.

Mathematical Encyclopedia. Moscow: Soviet Encyclopedia, ed. I. M. Vinogradov, 1977-1985.
