Determine the correlation function of a random process. Correlation function of a stationary process

9. Correlation function and its main properties.

For a complete description of random processes, the concept of the correlation function is introduced.

Two random processes may have equal mathematical expectations m_x(t), dispersions D_x(t) and standard deviations σ_x(t). It is assumed that the distribution law is normal. Nevertheless, the graphs show a sharp difference between the processes, despite their equal probabilistic characteristics.

For example, consider tracking an airplane. If at moment t1 it occupied position 1, then its possible position 2 at the next moment t2 is limited, i.e. the events (x1, t1) and (x2, t2) are not independent. The more inertial the object under study, the greater this interdependence, or correlation. The correlation function mathematically expresses the correlation of two functions or the correlation of a function with itself (the autocorrelation function). The function is defined as follows:

R_x(t1, t2) = M[(X(t1) − m_x(t1)) · (X(t2) − m_x(t2))],

where t1 and t2 are any moments in time, that is, t1, t2 ∈ T.

Correlation is a statistical relationship between two or more random variables.

The correlation function is a non-random function R_x(t1, t2) of two arguments which, for any pair of fixed argument values t1 and t2, is equal to the correlation moment of the corresponding sections X(t1) and X(t2) of the random process.

The correlation function is a function of time that specifies correlation in systems with random processes.

When the moments t1 and t2 coincide, the correlation function is equal to the dispersion. The normalized correlation function is calculated by the formula

ρ_x(t1, t2) = R_x(t1, t2) / (σ_x(t1) · σ_x(t2)),   |ρ_x(t1, t2)| ≤ 1,

where σ_x(t1) and σ_x(t2) are the standard deviations of the random function X(t) at t = t1 and t = t2, respectively.

To calculate the correlation function, the two-dimensional probability density f2(x1, x2; t1, t2) is required:

R_x(t1, t2) = ∫∫ (x1 − m_x(t1)) (x2 − m_x(t2)) f2(x1, x2; t1, t2) dx1 dx2.
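As a numerical illustration of this definition, the sketch below estimates the correlation moment of two sections by averaging over an ensemble of simulated realizations. The model process (a first-order recursion driven by Gaussian noise) and all parameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_t = 5000, 200          # number of realizations, number of time points
a = 0.95                         # assumed parameter of the model process

# Simulate an ensemble of realizations x[k, t] of a simple random process.
x = np.zeros((n_real, n_t))
for t in range(1, n_t):
    x[:, t] = a * x[:, t - 1] + rng.normal(size=n_real)

def corr_moment(x, t1, t2):
    """Estimate R_x(t1, t2) = M[(X(t1) - m(t1)) * (X(t2) - m(t2))] over the ensemble."""
    xc1 = x[:, t1] - x[:, t1].mean()
    xc2 = x[:, t2] - x[:, t2].mean()
    return float(np.mean(xc1 * xc2))

t1, t2 = 150, 160
print("R_x(t1, t2) ~", corr_moment(x, t1, t2))
print("R_x(t1, t1) ~ D_x(t1):", corr_moment(x, t1, t1), float(np.var(x[:, t1])))
```

The diagonal value R_x(t1, t1) reproduces the dispersion of the section, as stated above.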

Properties of correlation functions

1. The correlation function R_x(t1, t2) is symmetric with respect to its arguments:

R_x(t1, t2) = R_x(t2, t1),

which follows directly from the definition of the correlation function of X(t).

2. When an arbitrary non-random term φ(t) is added to the random function X(t), i.e. Z(t) = X(t) + φ(t), the correlation function does not change:

R_z(t1, t2) = R_x(t1, t2).

3. When the random function X(t) is multiplied by an arbitrary non-random factor ψ(t), the correlation function R_x(t1, t2) is multiplied by ψ(t1)·ψ(t2).


Lecture 6. Correlation functions of random processes
Plan.

1. The concept of the correlation function of a random process.

2. Stationarity in the narrow and broad senses.

3. Average value for the set.

4. Average value over time.

5. Ergodic random processes.
Mathematical expectation and dispersion are important characteristics of a random process, but they do not give a sufficient idea of the nature of its individual implementations. This is clearly seen from Fig. 6.1, which shows implementations of two random processes that are completely different in structure, although they have the same values of mathematical expectation and dispersion. The dashed lines in Fig. 6.1 show the values 3σ_x(t) for the random processes.
The process shown in Fig. 6.1, a proceeds relatively smoothly from one section to another, while the process in Fig. 6.1, b varies strongly from section to section. Therefore, the statistical connection between the sections in the first case is greater than in the second, but this cannot be established from either the mathematical expectation or the dispersion.

In order to characterize to some extent the internal structure of a random process, i.e. to take into account the relationship between the values of the random process at different points in time or, in other words, the degree of variability of the random process, it is necessary to introduce the concept of the correlation (autocorrelation) function of a random process.

The correlation function of a random process X(t) is a non-random function of two arguments R_x(t1, t2) which, for each pair of arbitrarily chosen argument values (time points) t1 and t2, is equal to the mathematical expectation of the product of the two random variables X(t1) and X(t2), the corresponding sections of the random process:

R_x(t1, t2) = M[X(t1) · X(t2)] = ∫∫ x1 x2 f2(x1, t1; x2, t2) dx1 dx2,   (6.1)

where f2(x1, t1; x2, t2) is the two-dimensional probability density.

A different expression for the correlation function is often used, written not for the random process X(t) itself but for its centered random component X°(t) = X(t) − m_x(t). The correlation function in this case is called centered and is determined from the relation

R°_x(t1, t2) = M[X°(t1) · X°(t2)] = M{[X(t1) − m_x(t1)][X(t2) − m_x(t2)]}.   (6.2)

Depending on how their statistical characteristics change over time, random processes are divided into stationary and non-stationary. A distinction is made between stationarity in the narrow sense and stationarity in the broad sense.

A random process X(t) is called stationary in the narrow sense if its n-dimensional distribution functions and probability densities, for any n, do not depend on the position of the time origin t, i.e.

f_n(x1, t1; …; xn, tn) = f_n(x1, t1 + τ; …; xn, tn + τ) for any τ.

This means that the two processes X(t) and X(t + τ) have the same statistical properties for any τ, i.e. the statistical characteristics of a stationary random process are constant in time. A stationary random process is a kind of analogue of a steady-state process in deterministic systems.

A random process X(t) is called stationary in the broad sense if its mathematical expectation is constant,

m_x(t) = m_x = const,

and the correlation function depends on only one variable, the difference of the arguments τ = t2 − t1:

R_x(t1, t2) = R_x(τ).   (6.5)

The concept of a random process stationary in the broad sense is introduced when only the mathematical expectation and the correlation function are used as statistical characteristics of the random process. The part of the theory of random processes that describes the properties of a random process through its mathematical expectation and correlation function is called correlation theory.

For a random process with a normal distribution law, the mathematical expectation and the correlation function completely determine its n-dimensional probability density. Therefore, for normal random processes the concepts of stationarity in the broad and narrow senses coincide.

The theory of stationary processes has been developed most fully and allows relatively simple calculations for many practical cases. It is therefore sometimes advisable to assume stationarity even in cases when the random process is non-stationary but its statistical characteristics do not have time to change appreciably during the considered period of operation of the system. In what follows, unless otherwise stated, random processes that are stationary in the broad sense will be considered.

In the theory of random processes, two concepts of average value are used. The first is the average value over the set (or mathematical expectation), which is determined from observation of the set of implementations of a random process at the same point in time. The set average is usually denoted by a wavy line above the expression describing the random function.

In general, the average value over a set is a function of time.

Another concept of average is the average value over time, which is determined from observation of a separate implementation x(t) of the random process over a sufficiently long time T. The time average is denoted by a straight line above the corresponding expression of the random function and is determined by the formula

x̄ = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt,   (6.7)

if this limit exists.

The average value over time is generally different for individual implementations of the set that define the random process.

In general, for the same random process the set average and the time average are different; however, for the so-called ergodic stationary random processes the set average coincides with the time average:

m_x = x̄.   (6.8)

Equality (6.8) follows from the ergodic theorem, which proves for certain stationary random processes that any statistical characteristic obtained by averaging over the set coincides, with probability arbitrarily close to unity, with the same characteristic averaged over time. The ergodic theorem has not been proven for all stationary processes, so in cases where it has not yet been proven one speaks of the ergodic hypothesis.

It should be noted that not every stationary process is ergodic.

Fig. 6.2 shows, as an example, a graph of a stationary non-ergodic process for which equality (6.8) does not hold. In the general case, the same random process can be ergodic with respect to some statistical characteristics and non-ergodic with respect to others. In what follows, we will assume that the ergodicity conditions for the mathematical expectation and the correlation function are satisfied.

The physical meaning of the ergodic theorem (or hypothesis) is profound and of great practical importance. To determine the statistical properties of an ergodic stationary process, simultaneous observation of many similar systems at an arbitrarily chosen point in time, which may be difficult to arrange, for example, when only one prototype exists, can be replaced by long-term observation of a single system. This fact underlies the experimental determination of the correlation function of a stationary random process from one implementation. Conversely, if a large batch of mass-produced products is available, similar studies can be carried out by simultaneous observation of all samples of the batch or of a sufficiently representative sample of them.
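The coincidence of the two averages for an ergodic process can be illustrated with a short sketch; the stationary model process and its parameters below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a, m = 0.9, 5.0                      # assumed parameters of a stationary model process

def realization(n, burn=500):
    """One realization of a stationary first-order model process with mean m."""
    e = rng.normal(size=n + burn)
    x = np.full(n + burn, m)
    for t in range(1, n + burn):
        x[t] = m + a * (x[t - 1] - m) + e[t]
    return x[burn:]                  # drop the start-up transient

# Set (ensemble) average: many realizations observed at one and the same time point.
set_avg = np.mean([realization(200)[-1] for _ in range(2000)])

# Time average: one long realization observed for a long time.
time_avg = realization(200_000).mean()

print("set average  ~", round(set_avg, 3))
print("time average ~", round(time_avg, 3))   # both are close to m = 5 for this ergodic process
```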

As can be seen from (6.5), the correlation function is an average over the set. In accordance with the ergodic theorem, for a stationary random process the correlation function can be defined as the time average of the product x(t) · x(t + τ), i.e.

R_x(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t + τ) dt,   (6.9)

where x(t) is any implementation of the random process.

The centered correlation function of an ergodic stationary random process is

R°_x(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} [x(t) − x̄] [x(t + τ) − x̄] dt.   (6.10)

Between the correlation functions R_x(τ) and R°_x(τ) there is the following connection:

R_x(τ) = R°_x(τ) + (x̄)².   (6.11)

Based on the ergodicity property, the dispersion D_x [see (19)] can be defined as the time average of the square of the centered random process, i.e.

D_x = lim_{T→∞} (1/2T) ∫_{−T}^{T} [x(t) − x̄]² dt.   (6.12)

Comparing expressions (6.10) and (6.12), one can see that the dispersion of a stationary random process is equal to the initial value of the centered correlation function:

D_x = R°_x(0).   (6.13)

Taking (6.11) into account, we can relate the dispersion to the correlation function R_x(τ):

D_x = R°_x(0) = R_x(0) − (x̄)².   (6.14)

From (6.14) and (6.15) it is clear that the dispersion of a stationary random process is constant, and therefore the standard deviation is also constant:

σ_x = √D_x = const.

The statistical properties of the connection between two random processes X(t) and G(t) can be characterized by the cross-correlation function R_xg(t1, t2), which for each pair of arbitrarily chosen argument values t1, t2 is equal to

R_xg(t1, t2) = M[X(t1) · G(t2)].   (6.18)

According to the ergodic theorem, instead of (6.18) we can write

R_xg(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) g(t + τ) dt,   (6.19)

where x(t) and g(t) are any implementations of the stationary random processes X(t) and G(t), respectively.

The cross-correlation function R_xg(τ) characterizes the mutual statistical relationship of the two random processes X(t) and G(t) at points in time separated by an interval τ. The value R_xg(0) characterizes this relationship at one and the same point in time.

From (6.19) it follows that

R_xg(τ) = R_gx(−τ).   (6.20)

If the random processes X(t) and G(t) are not statistically related to each other and have zero mean values, then their cross-correlation function is equal to zero for all τ. However, the converse conclusion, namely that if the cross-correlation function is zero then the processes are independent, can be drawn only in particular cases (in particular, for processes with a normal distribution law); in general it does not hold.
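Assuming ergodicity, R_xg(τ) can be estimated from one pair of jointly recorded realizations by time averaging. In the sketch below the two signals are synthetic and chosen only so that the cross-correlation has a visible peak; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(size=n)
g = np.roll(x, 5) + 0.5 * rng.normal(size=n)   # g(t) is a delayed, noisy copy of x(t)

def cross_corr(x, g, tau):
    """Time-average estimate of R_xg(tau) for (nearly) zero-mean realizations."""
    if tau >= 0:
        return float(np.mean(x[:n - tau] * g[tau:]))
    return float(np.mean(x[-tau:] * g[:n + tau]))

for tau in (0, 5, 20):
    print(f"R_xg({tau}) ~ {cross_corr(x, g, tau):+.3f}")   # the peak appears near tau = 5
```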

The centered correlation function R°_x(τ) of a non-random function of time is identically equal to zero. However, the correlation function R_x(τ) can also be calculated for non-random (regular) functions. Note, however, that when one speaks of the correlation function of a regular function x(t), this is understood simply as the result of formally applying to the regular function x(t) the operation expressed by the integral (6.9).

In order to characterize to some extent the internal structure of a random process, i.e. to take into account the relationship between the values of the random process at different points in time or, in other words, the degree of variability of the random process, the concept of the correlation (autocorrelation) function of the random process is introduced.

The correlation (or autocorrelation) function of a random process is a non-random function of two arguments which, for each pair of arbitrarily chosen argument values (time points), is equal to the mathematical expectation of the product of the two random variables in the corresponding sections of the random process:

R_x(t1, t2) = M[X(t1) · X(t2)].

The correlation function written for the centered random component is called centered and is determined from the relation

R°_x(t1, t2) = M[X°(t1) · X°(t2)].   (1.58)

In the literature these functions are also often called the covariance function and the autocorrelation function.

Depending on how their statistical characteristics change over time, random processes are divided into stationary and non-stationary. A distinction is made between stationarity in the narrow sense and stationarity in the broad sense.

A random process is called stationary in the narrow sense if its n-dimensional distribution functions and probability densities, for any n, do not depend on the position of the time origin. This means that the processes X(t) and X(t + τ) have the same statistical properties for any τ, i.e. the statistical characteristics of a stationary random process are constant in time. A stationary random process is a kind of analogue of a steady-state process in dynamic systems.

A random process is called stationary in the broad sense if its mathematical expectation is constant,

m_x = const,

and the correlation function depends only on one variable, the difference of the arguments τ = t2 − t1:

R_x(t1, t2) = R_x(τ).

The concept of a random process, stationary in the broad sense, is introduced when only the mathematical expectation and the correlation function are used as statistical characteristics of a random process. The part of the theory of random processes that describes the properties of a random process through its mathematical expectation and correlation function is called correlation theory.

For a random process with a normal distribution law, the mathematical expectation and the correlation function completely determine its n-dimensional probability density. Therefore, for normal random processes the concepts of stationarity in the broad and narrow senses coincide.

The theory of stationary processes has been developed most fully and allows relatively simple calculations for many practical cases. It is therefore sometimes advisable to assume stationarity even in cases when the random process is non-stationary but its statistical characteristics do not have time to change appreciably during the considered period of operation of the system.

In the theory of random processes, two concepts of average value are used. The first is the set average (or mathematical expectation), which is determined from observation of multiple implementations of a random process at the same point in time. The set average is usually denoted by a wavy line over the expression describing the random function.

In general, the set average is a function of time.

Another concept of average is the average over time, which is determined from observation of a separate implementation of the random process over a sufficiently long time. The time average is denoted by a straight line over the corresponding expression of the random function and is determined by the formula

x̄ = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt,   (1.62)

if this limit exists.

The time average is generally different for individual realizations of the set that define the random process.

In general, for the same random process, the set average and the time average are different, but for the so-called ergodic stationary random processes the average value over the set coincides with the average value over time:

In accordance with the ergodic theorem, for a stationary random process the correlation function can be defined as the time average over one implementation:

R_x(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t + τ) dt,   (1.64)

where x(t) is any implementation of the random process.

The centered correlation function of an ergodic stationary random process is

R°_x(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} [x(t) − x̄] [x(t + τ) − x̄] dt.   (1.65)

From expression (1.65) it can be seen that the variance of a stationary random process is equal to the initial value of the centered correlation function:

D_x = R°_x(0).

The subject of correlation analysis is the study of probabilistic dependencies between random variables.

Quantities are independent if the distribution law of each of them does not depend on the value assumed by the other. Examples of such quantities are the endurance limit of the part material and the theoretical stress concentration factor in the dangerous section of the part.

Quantities are related by a probabilistic, or stochastic, dependence if a known value of one quantity corresponds not to a specific value of the other, but to its distribution law. Probabilistic dependences arise when the quantities depend not only on factors common to both, but also on different random factors.

Complete information about the probabilistic relationship between two random variables is given by the joint distribution density f(x, y) or by the conditional distribution densities f(x/y), f(y/x), i.e. the distribution densities of the random variables X and Y when specific values of y and x, respectively, are given.

The joint density and the conditional distribution densities are related as follows:

f(x, y) = f(x) · f(y/x) = f(y) · f(x/y).

The main characteristics of probabilistic dependencies are the correlation moment and the correlation coefficient.

The correlation moment of two random variables X and Y is the mathematical expectation of the product of the centered random variables:

K_xy = M[(X − m_x)(Y − m_y)];

for discrete quantities

K_xy = Σ_i Σ_j (x_i − m_x)(y_j − m_y) p_ij;

for continuous quantities

K_xy = ∫∫ (x − m_x)(y − m_y) f(x, y) dx dy,

where m_x and m_y are the mathematical expectations of X and Y, and p_ij is the probability of the pair of values (x_i, y_j).

The correlation moment characterizes simultaneously the connection between the random variables and their scatter. In its dimension it corresponds to the variance of an individual random variable. To single out the characteristic of the relationship itself, one passes to the correlation coefficient, which characterizes the closeness of the relationship and can vary within the range −1 ≤ ρ ≤ 1:

ρ = K_xy / (S_x · S_y),

where S_x and S_y are the standard deviations of the random variables.

The values ρ = 1 and ρ = −1 indicate a functional dependence; the value ρ = 0 indicates that the random variables are uncorrelated.

Correlation is considered both between quantities and between events; multiple correlation, which characterizes the relationship among many quantities and events, is also considered.

In a more detailed analysis of the probabilistic relationship, the conditional mathematical expectations m_y/x and m_x/y are determined, i.e. the mathematical expectations of the random variables Y and X for given specific values of x and y, respectively.

The dependence of the conditional mathematical expectation m_y/x on x is called the regression of Y on X; the dependence of m_x/y on y corresponds to the regression of X on Y.

For normally distributed quantities Y and X the regression equations are:

for the regression of Y on X

m_y/x = m_y + ρ (S_y / S_x)(x − m_x);

for the regression of X on Y

m_x/y = m_x + ρ (S_x / S_y)(y − m_y).

The most important area of application of correlation analysis to reliability problems is the processing and generalization of the results of operational observations. The results of observing the random variables Y and X are represented by the paired values (x_i, y_i) of the i-th observation, where i = 1, 2, …, n, and n is the number of observations.

The estimate r of the correlation coefficient ρ is determined by the formula

r = Σ_i (x_i − x̄)(y_i − ȳ) / (n · s_x · s_y),

where x̄ and ȳ are estimates of the mathematical expectations m_x and m_y, respectively, i.e. the averages of the n observed values, and s_x, s_y are estimates of the standard deviations S_x and S_y:

s_x = √( Σ_i (x_i − x̄)² / n ),   s_y = √( Σ_i (y_i − ȳ)² / n ).
Denoting the estimates of the conditional mathematical expectations m_y/x and m_x/y by the corresponding empirical regression values, the empirical regression equations of Y on X and of X on Y are written in the same form as above, with r, x̄, ȳ, s_x and s_y in place of ρ, m_x, m_y, S_x and S_y.

As a rule, only one of the regressions has practical value.

With a correlation coefficient r=1 the regression equations are identical.
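A short sketch of processing paired observations (x_i, y_i) by the formulas above; the data are synthetic and all numerical values are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(10.0, 2.0, size=n)
y = 3.0 + 0.8 * x + rng.normal(0.0, 1.0, size=n)   # Y is probabilistically related to X

x_mean, y_mean = x.mean(), y.mean()
s_x = np.sqrt(np.sum((x - x_mean) ** 2) / n)        # estimates of S_x, S_y
s_y = np.sqrt(np.sum((y - y_mean) ** 2) / n)
r = np.sum((x - x_mean) * (y - y_mean)) / (n * s_x * s_y)   # estimate of rho

print("r =", round(r, 3))
# Empirical regressions of Y on X and of X on Y (same form as the normal-law regressions above).
b_yx = r * s_y / s_x
b_xy = r * s_x / s_y
print(f"regression of Y on X: y = {y_mean:.2f} + {b_yx:.2f}*(x - {x_mean:.2f})")
print(f"regression of X on Y: x = {x_mean:.2f} + {b_xy:.2f}*(y - {y_mean:.2f})")
```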

Question No. 63 Estimation of statistical parameters using confidence intervals

If the value of the parameter being estimated is given by a single number, it is called a point estimate. In most problems, however, it is necessary not only to find the most reliable numerical value but also to evaluate its degree of reliability.

One needs to know what error arises when the true parameter a is replaced by its point estimate, and with what degree of confidence one can expect that these errors will not exceed certain predetermined limits.

For this purpose, in mathematical statistics, so-called confidence intervals and confidence probabilities are used.

If an unbiased estimate ã of the parameter a has been obtained from experiment and the task is to estimate the possible error, then it is necessary to assign some sufficiently large probability β (for example, β = 0.9; 0.95; 0.99, etc.) such that an event with probability β can be considered practically certain.

In this case one can find a value of ε for which P(|ã − a| < ε) = β.

Fig. 3.1.1. Confidence interval diagram.

In this case, the practically possible errors arising when a is replaced by ã will not exceed ±ε. Errors of larger absolute value will occur only with the small probability α = 1 − β. With probability β the unknown parameter a will fall within the interval I_β = (ã − ε; ã + ε). The probability β can be interpreted as the probability that the random interval I_β covers the point a (Fig. 3.1.1).

The probability β is usually called the confidence probability, and the interval I_β the confidence interval. In Fig. 3.1.1 a symmetric confidence interval is shown; in general, symmetry is not required.

The confidence interval for the parameter a can be regarded as the interval of values of a that are consistent with the experimental data and do not contradict them.

By choosing a confidence probability β close to one, we want to have confidence that an event with such probability will occur if a certain set of conditions is met.

This is equivalent to neglecting the probability of the opposite event, which equals α = 1 − β. Note that setting the boundary for negligible probabilities is not a mathematical problem; the choice of such a boundary lies outside probability theory and is determined in each field by the degree of responsibility and the nature of the problems being solved.

At the same time, establishing too large a safety margin leads to an unjustifiably large increase in construction costs.
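As a minimal sketch of the procedure described above, a symmetric confidence interval for a mathematical expectation can be built from a sample. The sample data, the choice β = 0.95 and the normal-approximation quantile 1.96 are assumptions for illustration.

```python
import math
import random

random.seed(4)
sample = [random.gauss(50.0, 5.0) for _ in range(200)]   # observations of the quantity

n = len(sample)
a_hat = sum(sample) / n                                   # point estimate of the parameter a
s = math.sqrt(sum((v - a_hat) ** 2 for v in sample) / (n - 1))

beta = 0.95
z = 1.96                        # quantile of the normal law for beta = 0.95
eps = z * s / math.sqrt(n)      # half-width of the confidence interval

print(f"point estimate: {a_hat:.2f}")
print(f"I_beta = ({a_hat - eps:.2f}; {a_hat + eps:.2f}) covers a with probability ~{beta}")
```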


Question No. 65 Stationary random process.

A stationary random function is a random function all of whose probabilistic characteristics do not depend on the argument. Stationary random functions describe stationary processes of machine operation; non-stationary functions describe non-stationary processes, in particular transients: start-up, stopping, change of mode. The argument is time.

Stationarity conditions for random functions:

1. constancy of mathematical expectation;

2. constancy of dispersion;

3. the correlation function must depend only on the difference of the arguments, not on their values.

Examples of stationary random processes include: oscillations of an aircraft in steady-state horizontal flight; random noise in the radio, etc.

Each stationary process can be considered as continuing in time indefinitely; during research, any point in time can be chosen as the starting point. When studying a stationary random process over any period of time, the same characteristics should be obtained.

The correlation function of stationary random processes is an even function.

For stationary random processes, spectral analysis is effective, i.e. consideration in the form of harmonic spectra or Fourier series. Additionally, the spectral density function of a random function is introduced, which characterizes the distribution of dispersions over spectral frequencies.

Dispersion:

D_x = K_x(0) = ∫_{−∞}^{∞} S_x(ω) dω.

Correlation function:

K_x(τ) = ∫_{−∞}^{∞} S_x(ω) e^{jωτ} dω.

Spectral density:

S_x(ω) = (1/2π) ∫_{−∞}^{∞} K_x(τ) e^{−jωτ} dτ.

Stationary processes can be ergodic or non-ergodic. A process is ergodic if the time average of the stationary random function over a sufficiently long interval is approximately equal to the average over the individual implementations. For ergodic processes the characteristics are determined as time averages.
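The link between the correlation function and the spectral density can be checked numerically. The sketch below estimates K_x(τ) by time averaging over one realization (relying on ergodicity) and takes the Fourier transform of its even extension as an estimate of S_x(ω); the model process and the normalization convention are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, a = 200_000, 0.9

# One long realization of an assumed stationary model process (first-order recursion).
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()
x -= x.mean()                                # work with the centered realization

max_lag = 100
K = np.array([np.mean(x[:n - k] * x[k:]) for k in range(max_lag)])   # K_x(tau), tau >= 0

# Discrete Wiener-Khinchin estimate: transform of the even extension of K_x(tau);
# small negative values can appear because the sum is truncated at max_lag.
S = 2.0 * np.real(np.fft.rfft(K)) - K[0]

print("D_x = K_x(0)    ~", round(float(K[0]), 3))
print("theoretical D_x =", round(1.0 / (1.0 - a ** 2), 3))
print("first spectral bins:", np.round(S[:5], 2))
```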

Question No. 66 Reliability indicators of technical objects: single, complex, calculated, experimental, operational, extrapolated.

Reliability indicator is a quantitative characteristic of one or more properties that make up the reliability of an object.

A single reliability indicator is a reliability indicator that characterizes one of the properties that makes up the reliability of an object.

A complex reliability indicator is a reliability indicator that characterizes several properties that make up the reliability of an object.

Calculated reliability indicator is a reliability indicator, the values ​​of which are determined by the calculation method.

Experimental reliability indicator is a reliability indicator, the point or interval estimate of which is determined based on test data.

Operational reliability indicator – a reliability indicator, the point or interval estimate of which is determined based on operational data.

Extrapolated reliability indicator – a reliability indicator, a point or interval estimate of which is determined based on the results of calculations, tests and (or) operational data by extrapolating to another duration of operation and other operating conditions.



Question No. 68 Indicators of durability of technical objects and cars.

Gamma-percentage resource is the total operating time during which the object does not reach the limit state with probability γ, expressed as a percentage.

Average resource is the mathematical expectation of a resource.

Gamma-percentage service life is the calendar duration of operation during which the object does not reach the limit state with probability γ, expressed as a percentage.

Average service life is the mathematical expectation of service life.

Note. When using durability indicators, the starting point and the type of action after the onset of the limit state should be indicated (for example, the gamma-percentage life from the second major overhaul to write-off). Durability indicators counted from the commissioning of an object to its final decommissioning are called the gamma-percentage full resource (full service life) and the average full resource (full service life).


Question No. 71 Tasks and methods for predicting car reliability

There are three stages of forecasting: retrospection, diagnosis and prognosis. At the first stage, the dynamics of changes in machine parameters in the past are established, at the second stage the technical state of elements in the present is determined, at the third stage changes in the parameters of the state of elements in the future are predicted.

The main tasks of predicting the reliability of cars can be formulated as follows:

a) Predicting patterns of changes in vehicle reliability in connection with prospects for production development, the introduction of new materials, and increasing the strength of parts.

b) Assessing the reliability of designed vehicles before they are manufactured. This task arises at the design stage.

c) Predicting the reliability of a specific vehicle (or its component or assembly) based on the results of changes in its parameters.

d) Prediction of the reliability of a certain set of cars based on the results of a study of a limited number of prototypes. These types of problems have to be faced at the production stage.

e) Predicting the reliability of cars under unusual operating conditions (for example, when the temperature and humidity of the environment are higher than permissible, difficult road conditions, and so on).

Methods for predicting vehicle reliability are selected taking into account forecasting tasks, the quantity and quality of initial information, and the nature of the real process of changing the reliability indicator (predicted parameter).

Modern forecasting methods can be divided into three main groups: a) methods of expert assessments; b) modeling methods, including physical, physical-mathematical and information models; c) statistical methods.

Forecasting methods based on expert assessments consist of generalization, statistical processing and analysis of specialist opinions regarding the prospects for the development of this area.

Modeling methods are based on the basic principles of similarity theory. Based on the similarity of the indicators of modification A, the reliability level of which was previously studied, and some properties of modification B of the same car or its component, the reliability indicators of B are predicted for a certain period of time.

Statistical forecasting methods are based on extrapolation and interpolation of the predicted reliability parameters obtained in preliminary studies. The method relies on the patterns of change of vehicle reliability parameters over time.

Question No. 74 Mathematical methods of forecasting. Construction of mathematical models of reliability.

When predicting transmission reliability, the following models can be used: 1) the "weakest link" model; 2) dependent resources of the part elements; 3) independent resources of the part elements. The resource of the i-th element is determined from the relation:

x i = R i /r i ,

where R i is the quantitative value of the criterion of the i-th element at which its failure occurs;

r i – the average increment in the quantitative assessment of the criterion of the i-th element per unit of resource.

The values ​​of R i and r i can be random with certain distribution laws or constant.

For the option when R i are constant, and r i are variable and have a functional connection with the same random variable, consider the situation when a linear functional connection is observed between the values ​​of r i, which leads to the “weakest” link model. In this case, the reliability of the system corresponds to the reliability of the “weakest” link.

The model of dependent resources is implemented under loading according to the scheme, when there is a spread of operating conditions for mass-produced machines or uncertainty in the operating conditions of unique machines. The model of independent resources occurs when loading according to a scheme with specific operating conditions.

An expression for calculating the reliability of a system with independent resource elements.
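The expression itself is not reproduced in the text; as a sketch, the fragment below uses the usual series-system form, in which the probability of failure-free operation (PFFO) of the system is the product of the element PFFOs. The Weibull resource distributions and all numbers are assumptions for illustration only.

```python
import math

# Assumed Weibull resource distributions of three independent elements (illustrative only).
shapes = [1.5, 2.0, 1.2]          # Weibull shape parameters
scales = [800.0, 1200.0, 600.0]   # characteristic resources, hours

def element_pffo(t, shape, scale):
    """Probability of failure-free operation of a single element up to operating time t."""
    return math.exp(-((t / scale) ** shape))

def system_pffo(t):
    """Series system with independent element resources: product of the element PFFOs."""
    p = 1.0
    for shape, scale in zip(shapes, scales):
        p *= element_pffo(t, shape, scale)
    return p

for t in (100.0, 300.0, 600.0):
    print(f"t = {t:5.0f} h   system PFFO = {system_pffo(t):.3f}")
```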

Question No. 79 Schematic loading of the system, parts and elements (using the example of a transmission).

By transmission we mean the drive of the car as a whole or a separate, rather complex part of it, which for one reason or another needs to be isolated. The load on the transmission is determined by the power and speed components. The force component is characterized by torque, and the speed component is characterized by the angular velocity of rotation, which determines the number of loading cycles of transmission parts or the sliding speed of contact surfaces.

Depending on the type of part, the schematization of torque in order to obtain the load of the part may be different. For example, the load on gears and bearings is determined by the current value of the moments, and the torsional load on shafts is determined by the magnitude of its amplitude.

Based on operating conditions, the transmission load can be presented in the form of the following diagrams.

1. Each mode corresponds to a one-dimensional distribution curve.

2. For each mode we have n one-dimensional distribution curves (n is the number of machine operating conditions). The probability of operation in each of the conditions is specific.

3. For each mode we have one two-dimensional distribution of the current and average torque values.

Scheme 1 can be used for mass-produced machines under exactly the same operating conditions or for a unique machine under specific operating conditions.

Scheme 2 is not qualitatively different from Scheme 1, however, in some cases, for the calculation it is advisable that each operating condition correspond to a load curve.

Scheme 3 can characterize the load on the transmission of a unique machine, the specific operating conditions of which are unknown, but the range of conditions is known.

Question No. 82 Systematic approach to predicting the life of parts

From the reliability point of view, a car should be considered as a complex system formed of series-connected units, parts and elements.

Item resource:

T i = R i /r i ,

where R i is the quantitative value of the limit state criterion of the i-th element at which its failure occurs;

r_i is the average increment of the quantitative assessment of the limit-state criterion of the i-th element per unit of resource.

R_i and r_i can be random or constant; the following options are possible:

1. R_i random, r_i random;

2. R_i random, r_i constant;

3. R_i constant, r_i random;

4. R_i constant, r_i constant.

For the first three options, we consider R i to be independent random variables.

1. a) r_i independent. The reliability of the system is taken as the product of the probabilities of failure-free operation (PFFO) of the elements.

b) r_i random and related probabilistically:

f(r_i / r_j) = f(r_i, r_j) / f(r_j);

f(r_j / r_i) = f(r_i, r_j) / f(r_i).

If r_i and r_j depend on each other, then the resources also depend on each other, and the dependent-resources model of the elements is used for the calculation. Since the relationship is probabilistic, the method of conditional functions is used.

c) r_i random and functionally related. In this case the quantities depend on each other and the resources also depend on each other; because the dependence is functional, the connection is stronger than in the other cases.

2. The model of independent element resources. The PFFO of the system is equal to the product of the PFFOs of all elements.

3. The same cases as in option 1 are possible, except that in cases b) and c) the dependence of the resources is stronger because the R_i are constant. In case c), where the r_i are functionally related, a situation is possible in which the "weakest link" model applies.

R1, R2 are constants; r1, r2 are random; r1 = 1.5·r2; R1 = T1·r1; R2 = T2·r2.

If for any two specific values of r1 and r2 the same resource relation T1 > T2 holds, then element 2 is the "weakest" link, i.e. it determines the reliability of the system.
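A Monte Carlo sketch of this example (the distribution of r2 and all numbers are assumptions): because r1 and r2 are functionally related, the same element turns out to be the weakest link in every trial.

```python
import numpy as np

rng = np.random.default_rng(7)

R1, R2 = 900.0, 500.0        # assumed constant limit-state criteria of the two elements
r2 = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)   # random loading rate of element 2
r1 = 1.5 * r2                # functional connection between the loading rates

T1, T2 = R1 / r1, R2 / r2    # element resources T_i = R_i / r_i
T_sys = np.minimum(T1, T2)   # the system resource is set by the "weakest" element

print("element 2 is the weakest link in", 100.0 * float(np.mean(T2 < T1)), "% of trials")
print("mean system resource ~", round(float(T_sys.mean()), 1))
```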

Application of the weakest link model:

if there is an element in the system whose criterion R is significantly smaller than this criterion for all other elements, and all elements are loaded approximately equally;

if the criterion R is approximately the same for all elements, and the loading of one element is significantly higher than that of all the others.

Question No. 83 Determination of the service life of parts (shafts, or gears, or bearings of transmission units) based on experimental load conditions.

Determination of the life of rolling bearings.

To determine the durability of rolling bearings of transmission units and chassis, it is necessary to perform several types of calculations: for static strength, for contact fatigue, for wear.

Failure Model:

where f(R) is the resource distribution density;

the density and the distribution function of the resource for the i-th type of destructive process;

n – number of calculation types.

The most widely used is the calculation of rolling bearings for contact fatigue:

R = a_p C_d^{m_ρ} N_{050} [β …]^{−1},

where C_d is the dynamic load capacity; N_{050} is the number of cycles of the fatigue curve corresponding to a 50% probability of non-destruction of the bearing under the load C_d; m_ρ is the exponent (3 for ball bearings, 3.33 for roller bearings); the remaining factors are the loading frequency of the bearing when moving in the k-th gear and the distribution density of the reduced load when driving in the k-th gear under the i-th operating conditions.

Main features of the calculation.

1. Since for the bearing fatigue curve the dynamic load capacity C_d is introduced instead of the endurance limit (corresponding to a 90% probability of non-destruction at 10^6 cycles), it is necessary to pass to the fatigue curve corresponding to a 50% probability of non-destruction. Considering that the scatter of bearing life under the load C_d obeys the Weibull law, N_{050} = 4.7·10^6 cycles.

2. Integration in the formula is carried out from zero, and the parameters of the fatigue curve (m_ρ, N_{050} and C_d) are not adjusted. Therefore, under the condition … = const, rearranging the operations of summation and integration does not affect the value of R, so the calculations for the generalized load mode and for the individual load modes are identical. If the values differ significantly, the average resource R_ik is calculated separately for each gear:

R_ik = a_p C_d^{m_ρ} N_{050} [β …]^{−1},

and the overall formula can be written as

R = [ … ]^{−1}.

3. The reduced load on the bearing:

P = (K_Fr · K_v · F_r + K_Fa · F_a) · K_b · K_T · K_m,

where F_r, F_a are the radial and axial loads;

K_v is the rotation coefficient;

K_b is the safety coefficient;

K_T is the temperature coefficient;

K_m is the material coefficient;

K_Fr, K_Fa are the radial and axial load coefficients.

4. The relationship between the torque M on the shaft and the reduced load on the bearing:

P = K_P · M = (K_Fr · K_v · K_R + K_Fa · K_A) · K_b · K_T · K_m · M,

where K_P is the conversion factor;

K_R, K_A are the coefficients for converting the torque into the total radial and axial loads on the bearing.

The loading frequency of the bearing corresponds to its rotation frequency:

1000 U_Σα (2π r_ω) …,

where U_Σα is the total gear ratio of the transmission from the shaft to the driving wheels of the vehicle when the k-th gear is engaged.

5. The distribution density of the bearing resource and its parameters are calculated by the method of statistical modeling.
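Item 5 refers to statistical (Monte Carlo) modeling of the bearing-resource distribution. The sketch below is only an illustration: it uses the standard rating-life relation L = (C_d/P)^{m_ρ} in millions of revolutions in place of the full formula above, and the load distribution, C_d and rotation speed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

C_d = 30_000.0     # dynamic load capacity, N (assumed)
m_rho = 3.0        # exponent of the fatigue curve for ball bearings
n_rpm = 500.0      # shaft rotation speed, rev/min (assumed)

# Assumed scatter of the reduced load P on the bearing, N.
P = rng.normal(loc=4_000.0, scale=800.0, size=200_000)
P = P[P > 0.0]

# Standard rating-life relation in millions of revolutions, converted to hours.
L_mln_rev = (C_d / P) ** m_rho
L_hours = L_mln_rev * 1.0e6 / (n_rpm * 60.0)

print("mean bearing life      ~ %.0f h" % L_hours.mean())
print("90%% (gamma) life       ~ %.0f h" % np.quantile(L_hours, 0.10))
```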

Question No. 12 Specific material consumption of cars.

When determining the material consumption of a vehicle, the curb weight of the chassis is used. The expediency of using the chassis weight when assessing the material consumption of a vehicle is explained by the widespread production of specialized vehicles with bodies of various types, or other superstructures of different weights, installed on the chassis of the same base vehicle. That is why company brochures and catalogs for foreign trucks, as a rule, give the weight of the curb chassis rather than of the whole vehicle. At the same time, many foreign companies do not include the weight of standard and additional equipment in the weight of the equipped chassis, and the degree of fuel filling is specified differently in different standards.

To assess the material consumption of vehicles of various models objectively, they must be brought to a single configuration. In this case, the chassis load capacity is determined as the difference between the gross design weight of the vehicle and the curb weight of the chassis.

The main indicator of the material consumption of a vehicle is the specific mass of the chassis:

m_sp = (m_ch − m_f) / [(m_gr − m_ch) · P],

where m_ch is the curb weight of the chassis;

m_f is the mass of the fill (fuel and fluids) and equipment;

m_gr is the gross design weight of the vehicle;

P is the specified resource before major overhaul.

For a tractor vehicle, the gross weight of the road train is taken into account:

m_sp = (m_ch − m_f) / [(m_gr − m_ch) · K · P],

where K is the correction coefficient for tractor vehicles intended for operation as part of a road train,

K = m_rt / m_gr,

and m_rt is the gross weight of the road train.
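A small sketch of the two calculations above with made-up input masses (all numbers are assumptions chosen only for illustration).

```python
def specific_chassis_mass(m_chassis, m_fill, m_gross, resource_km, k_train=1.0):
    """Specific mass of the chassis per unit of chassis load capacity and resource."""
    load_capacity = m_gross - m_chassis          # chassis load capacity, kg
    return (m_chassis - m_fill) / (load_capacity * k_train * resource_km)

# Hypothetical truck: curb chassis 6.5 t, fill/equipment 0.4 t, gross design mass 18 t,
# specified resource before major overhaul 300 000 km.
print(specific_chassis_mass(6500.0, 400.0, 18000.0, 300_000.0))

# Tractor unit operated in a road train of 40 t gross weight: K = m_rt / m_gr.
print(specific_chassis_mass(6500.0, 400.0, 18000.0, 300_000.0, k_train=40000.0 / 18000.0))
```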




Interference in communication systems is described by methods of the theory of random processes.

A function is called random if, as a result of an experiment, it takes one form or another, and it is not known in advance which one. A random process is a random function of time. The specific form that a random process takes as a result of an experiment is called the implementation of a random process.

Fig. 1.19 shows a set of several (three) implementations of a random process. Such a collection is called an ensemble of realizations. For a fixed moment of time, the first experiment gives one specific value of the process, the second experiment another, and the third a third.

The random process is dual in nature. On the one hand, in each specific experiment it is represented by its implementation - a non-random function of time. On the other hand, a random process is described by a set of random variables.

Indeed, let us consider a random process at a fixed point in time. Then in each experiment it takes one value, and it is not known in advance which one. Thus, a random process considered at a fixed point in time is a random variable. If two moments of time t1 and t2 are fixed, then in each experiment we obtain two values, X(t1) and X(t2). Joint consideration of these values leads to a system of two random variables. When a random process is analyzed at N points in time, we arrive at a set, or system, of N random variables.

Mathematical expectation, dispersion and correlation function of a random process. Since a random process considered at a fixed point in time is a random variable, we can speak of the mathematical expectation and the dispersion of the random process:

m_x(t) = M[X(t)],   D_x(t) = M[(X(t) − m_x(t))²].

Just as for a random variable, the dispersion characterizes the spread of the values of the random process about the mean value. The larger the dispersion, the greater the probability of very large positive and negative values of the process. A more convenient characteristic is the standard deviation, which has the same dimension as the random process itself.

If a random process describes, for example, the change of the distance to an object, then the mathematical expectation is the average range in meters, the dispersion is measured in square meters, and the standard deviation is measured in meters and characterizes the spread of the possible range values about the average.

The mean and the dispersion are very important characteristics that allow one to judge the behavior of a random process at a fixed point in time. However, if it is necessary to estimate the "rate" of change of the process, observations at one point in time are not enough. For this purpose two random variables, considered jointly, are used. Just as for random variables, a characteristic of the connection, or dependence, between X(t1) and X(t2) is introduced. For a random process this characteristic depends on two moments in time and is called the correlation function:

R_x(t1, t2) = M[(X(t1) − m_x(t1)) (X(t2) − m_x(t2))].

Stationary random processes. Many processes in control systems proceed uniformly in time, and their basic characteristics do not change. Such processes are called stationary. The exact definition is as follows: a random process is called stationary if none of its probabilistic characteristics depend on a shift of the time origin. For a stationary random process the mathematical expectation, the dispersion and the standard deviation are constant: m_x = const, D_x = const, σ_x = const.

The correlation function of a stationary process does not depend on the time origin, i.e. it depends only on the time difference τ = t2 − t1:

R_x(t1, t2) = R_x(τ).

The correlation function of a stationary random process has the following properties:

1) R_x(0) = D_x;   2) R_x(−τ) = R_x(τ);   3) |R_x(τ)| ≤ R_x(0).

Often the correlation functions of processes in communication systems have the form shown in Fig. 1.20.

Fig. 1.20. Correlation functions of processes

The time interval over which the correlation function, i.e. the magnitude of the connection between values of the random process, decreases by a factor of M is called the correlation interval (correlation time) of the random process. Values of a random process separated in time by more than the correlation interval are weakly related to each other.

Thus, knowledge of the correlation function allows one to judge the rate of change of a random process.

Another important characteristic is the energy spectrum of a random process. It is defined as the Fourier transform of the correlation function:

S(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ.

Obviously, the inverse transformation also holds:

R(τ) = (1/2π) ∫_{−∞}^{∞} S(ω) e^{jωτ} dω.

The energy spectrum shows the power distribution of a random process, such as interference, on the frequency axis.

When analyzing an automatic control system (ACS), it is very important to determine the characteristics of the random process at the output of a linear system when the characteristics of the process at the input of the ACS are known. Let us assume that the linear system is specified by its impulse response w(t). Then the output signal at time t is determined by the Duhamel (convolution) integral:

y(t) = ∫_0^∞ w(τ) x(t − τ) dτ,

where x(t) is the process at the system input. To find the correlation function of the output, we write the product y(t) y(t + τ) and, after multiplying, take the mathematical expectation.
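As a numerical sketch of this idea, the fragment below passes a random input with a non-zero mean through an assumed first-order linear system given by its impulse response and estimates the mean and one value of the correlation function of the output; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n, dt = 200_000, 1.0

x = 2.0 + rng.normal(size=n)          # input process with a mean of 2 (assumed)

# Impulse response of an assumed first-order linear system, w(t) = (k/T)*exp(-t/T).
k, T = 1.0, 10.0
t = np.arange(0.0, 8.0 * T, dt)
w = (k / T) * np.exp(-t / T)

# Output via the discretized Duhamel (convolution) integral.
y = np.convolve(x, w, mode="full")[:n] * dt
y = y[len(w):]                        # drop the start-up transient

print("output mean ~", round(float(y.mean()), 3), "(steady-state gain * input mean =", k * 2.0, ")")

tau = 5
yc = y - y.mean()
print("R_y(%d) ~ %.3f" % (tau, float(np.mean(yc[:-tau] * yc[tau:]))))
```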