The least squares method is used for parameter estimation. We begin with its simplest special cases.

Example.

Experimental data on the values of the variables x and y are given in the table.

As a result of aligning these data, a second candidate function is obtained.

Using the least squares method, approximate these data by the linear dependence y = ax + b (find the parameters a and b). Find out which of the two lines better (in the sense of the least squares method) aligns the experimental data. Make a drawing.

The essence of the least squares method (LSM).

The task is to find the coefficients of the linear dependence at which the function of two variables a and b, F(a, b) = Σ(yᵢ − (axᵢ + b))², takes its smallest value. That is, for the found a and b, the sum of squared deviations of the experimental data from the resulting straight line is smallest. This is the whole point of the least squares method.

Thus, solving the example comes down to finding the extremum of a function of two variables.

Deriving formulas for finding coefficients.

A system of two equations in two unknowns is compiled and solved. We find the partial derivatives of the function F(a, b) with respect to the variables a and b and equate these derivatives to zero:

∂F/∂a = −2Σxᵢ(yᵢ − (axᵢ + b)) = 0,  ∂F/∂b = −2Σ(yᵢ − (axᵢ + b)) = 0.

We solve the resulting system of equations by any method (for example, by substitution or by Cramer's rule) and obtain the formulas for the coefficients of the least squares method (LSM):

a = (nΣxᵢyᵢ − ΣxᵢΣyᵢ) / (nΣxᵢ² − (Σxᵢ)²),  b = (Σyᵢ − aΣxᵢ)/n.

For these a and b, the function F(a, b) takes its smallest value. The proof of this fact is given below.

That is the whole least squares method. The formula for the parameter a contains the sums Σxᵢ, Σyᵢ, Σxᵢyᵢ, Σxᵢ², and the parameter n, the number of experimental data points. We recommend calculating these sums separately. The coefficient b is found after calculating a.
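As an illustration (a sketch, not part of the original text), here is this recipe in C++: accumulate the four sums separately, compute a, then b; the function and variable names are mine.

#include <vector>

// Sketch: least squares fit of y = a*x + b from the sums described above.
// Assumes x and y have equal size n >= 2 and the x_i are not all equal.
void fit_line(const std::vector<double>& x, const std::vector<double>& y,
              double& a, double& b) {
    const int n = (int)x.size();
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];           // sum of x_i
        sy  += y[i];           // sum of y_i
        sxy += x[i] * y[i];    // sum of x_i*y_i
        sxx += x[i] * x[i];    // sum of x_i^2
    }
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
    b = (sy - a * sx) / n;                          // intercept, found after a
}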

It's time to remember the original example.

Solution.

In our example n = 5. We fill in the table for the convenience of calculating the sums that enter into the formulas for the required coefficients.

The values in the fourth row of the table are obtained by multiplying the values of the 2nd row by the values of the 3rd row, for each number i.

The values in the fifth row of the table are obtained by squaring the values in the 2nd row, for each number i.

The values in the last column of the table are the sums of the values across the rows.

We use the formulas of the least squares method to find the coefficients a and b, substituting into them the corresponding values from the last column of the table:

Hence, y = 0.165x + 2.184 is the desired approximating straight line.

It remains to find out which of the lines, y = 0.165x + 2.184 or the second candidate function, better approximates the original data, that is, to make the estimate using the least squares method.

Error estimation of the least squares method.

To do this, calculate the sum of squared deviations of the original data from each of these lines; the smaller value corresponds to the line that better approximates the original data in the sense of the least squares method.

Since the first sum turns out to be smaller, the straight line y = 0.165x + 2.184 better approximates the original data.
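A sketch of this comparison in C++ (the names are mine): compute the sum of squared deviations for each candidate line and pick the line with the smaller value.

#include <cstddef>
#include <vector>

// Sketch: sum of squared deviations of the data from a candidate line y = a*x + b.
// Of two candidate lines, the one with the smaller value fits better (LSM sense).
double sum_sq_dev(const std::vector<double>& x, const std::vector<double>& y,
                  double a, double b) {
    double s = 0;
    for (std::size_t i = 0; i < x.size(); i++) {
        const double d = y[i] - (a * x[i] + b);  // vertical deviation of point i
        s += d * d;
    }
    return s;
}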

Graphic illustration of the least squares (LS) method.

Everything is clearly visible on the graphs. The red line is the found straight line y = 0.165x + 2.184, the blue line is the second candidate function, and the pink dots are the original data.

Why is this needed, why all these approximations?

I personally use it to solve problems of data smoothing, interpolation, and extrapolation (in the original example one might be asked to find the value of the observed quantity y at x = 3 or at x = 6 using the least squares method). But we'll talk more about this later in another section of the site.

Proof.

For the function F(a, b) to take its smallest value at the found a and b, it is necessary that at this point the matrix of the quadratic form of the second-order differential of the function be positive definite. Let's show this.
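In standard notation (a reconstruction, since the source formulas were images), the argument looks like this:

\[
F(a,b)=\sum_{i=1}^{n}\bigl(y_i-(ax_i+b)\bigr)^2,\qquad
H=\begin{pmatrix}
\partial^2 F/\partial a^2 & \partial^2 F/\partial a\,\partial b\\
\partial^2 F/\partial a\,\partial b & \partial^2 F/\partial b^2
\end{pmatrix}
=\begin{pmatrix}
2\sum x_i^2 & 2\sum x_i\\
2\sum x_i & 2n
\end{pmatrix}.
\]

The leading principal minors are \(2\sum x_i^2>0\) and \(\det H = 4\bigl(n\sum x_i^2-(\sum x_i)^2\bigr)>0\) (by the Cauchy-Schwarz inequality, as long as the \(x_i\) are not all equal), so \(H\) is positive definite and the stationary point is indeed a minimum.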

The essence of the least squares method is in finding the parameters of a trend model that best describes the tendency of development of some random phenomenon in time or space (a trend is a line that characterizes the tendency of this development). The task of the least squares method (LSM) comes down to finding not just some trend model, but the best, or optimal, model. This model will be optimal if the sum of squared deviations between the observed actual values and the corresponding calculated trend values is minimal (smallest):

S = Σ(yᵢ − ŷᵢ)² → min,

where (yᵢ − ŷᵢ)² is the squared deviation between the observed actual value and the corresponding calculated trend value,

yᵢ is the actual (observed) value of the phenomenon being studied,

ŷᵢ is the calculated value of the trend model,

n is the number of observations of the phenomenon being studied.

OLS is used quite rarely on its own. As a rule, it is most often used only as a necessary technical step in correlation studies. It should be remembered that the information basis of OLS can only be a reliable statistical series, and the number of observations should not be less than 4, otherwise the smoothing procedures of OLS may lose common sense.

The OLS toolkit boils down to the following procedures:

First procedure. It is determined whether there is any tendency at all for the resultant attribute to change when the selected factor-argument changes, or in other words, whether there is a connection between "y" and "x".

Second procedure. It is determined which line (trajectory) can best describe or characterize this trend.

Third procedure. The parameters of the regression equation that characterizes this line are calculated.

Example. Let's say we have information about the average sunflower yield for the farm under study (Table 9.1).

Table 9.1

Observation number | Yield, c/ha

Since the level of technology in sunflower production in our country has remained virtually unchanged over the past 10 years, it means, apparently, that fluctuations in yield during the analyzed period depended very much on fluctuations in weather and climatic conditions. Is this really true?

First OLS procedure. We test the hypothesis that there is a trend in sunflower yield changes depending on changes in weather and climatic conditions over the analyzed 10 years.

In this example, it is advisable to take the sunflower yield for "y" and the number of the observed year in the analyzed period for "x". Testing the hypothesis about the existence of any relationship between "x" and "y" can be done in two ways: manually and using computer programs. Of course, with computers available, this problem solves itself. But in order to better understand the OLS toolkit, it is advisable to test the hypothesis of a relationship between "x" and "y" manually, when only a pen and an ordinary calculator are at hand. In such cases, the hypothesis of a trend is best checked visually, by the location of the graphical image of the analyzed series of dynamics, the correlation field:

The correlation field in our example is located around a slowly increasing line. This in itself indicates the existence of a certain trend in changes in sunflower yield. It is impossible to speak of the presence of any tendency only when the correlation field looks like a circle, a ring, a strictly vertical or strictly horizontal cloud, or consists of chaotically scattered points. In all other cases, the hypothesis of a relationship between "x" and "y" is accepted, and the research continues.

Second OLS procedure. It is determined which line (trajectory) can best describe or characterize the trend of changes in sunflower yield over the analyzed period.

If a computer is available, the selection of the optimal trend occurs automatically. In "manual" processing, the optimal function is selected, as a rule, visually, by the location of the correlation field. That is, based on the type of graph, the equation of the line that best fits the empirical trend (the actual trajectory) is selected.

As is known, in nature there is a huge variety of functional dependencies, so it is extremely difficult to visually analyze even a small part of them. Fortunately, in real economic practice, most relationships can be described quite accurately either by a parabola, or a hyperbola, or a straight line. In this regard, with the “manual” option of selecting the best function, you can limit yourself to only these three models.

Straight line: y = a + bx;

Hyperbola: y = a + b/x;

Second-order parabola: y = a + bx + cx².

It is easy to see that in our example the trend in sunflower yield over the analyzed 10 years is best characterized by a straight line, so the regression equation will be the equation of a straight line.

Third procedure. The parameters of the regression equation characterizing this line are calculated, or in other words, an analytical formula is determined that describes the best trend model.

Finding the values of the parameters of the regression equation, in our case the parameters a and b, is the core of OLS. This process comes down to solving a system of normal equations:

na + bΣx = Σy,
aΣx + bΣx² = Σxy.   (9.2)

This system of equations can be solved quite easily by the Gauss method. Recall that as a result of the solution, in our example, the values of the parameters a and b are found. Thus, the found regression equation will have the form y = a + bx with the computed values of a and b.
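A sketch of this step in code (the function name and the variable names Sx, Sxx, Sy, Sxy are illustrative, not from the source): for a 2x2 system, the Gauss method reduces to a single elimination step.

// Sketch: solving the normal equations (9.2)
//   n*a  + Sx*b  = Sy
//   Sx*a + Sxx*b = Sxy
// by eliminating a from the second equation (one Gauss step).
void solve_normal_equations(double n, double Sx, double Sxx,
                            double Sy, double Sxy,
                            double& a, double& b) {
    const double det = n * Sxx - Sx * Sx;  // nonzero if the x_i are not all equal
    b = (n * Sxy - Sx * Sy) / det;
    a = (Sy - b * Sx) / n;
}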


Introduction

I am a mathematician and programmer. The biggest leap I took in my career was when I learned to say: "I do not understand anything!" Now I am not ashamed to tell a luminary of science giving me a lecture that I do not understand what he, the luminary, is telling me. And it's very difficult. Yes, admitting your ignorance is difficult and embarrassing. Who likes to admit that he doesn't know the basics of something? Due to my profession, I have to attend a large number of presentations and lectures, where, I admit, in the vast majority of cases I want to sleep, because I do not understand anything. And I don't understand because the huge problem of the current situation in science lies in mathematics: it assumes that all listeners are familiar with absolutely all areas of mathematics (which is absurd). Admitting that you don't know what a derivative is (we'll talk about what it is a little later) is shameful.

But I've learned to say that I don't know what multiplication is. Yes, I don't know what a subalgebra over a Lie algebra is. Yes, I don’t know why quadratic equations are needed in life. By the way, if you are sure that you know, then we have something to talk about! Mathematics is a series of tricks. Mathematicians try to confuse and intimidate the public; where there is no confusion, there is no reputation, no authority. Yes, it is prestigious to speak in as abstract a language as possible, which is complete nonsense.

Do you know what a derivative is? Most likely you will tell me about the limit of the difference quotient. In the first year of mathematics and mechanics at St. Petersburg State University, Viktor Petrovich Khavin defined the derivative for me as the coefficient of the first term of the Taylor series of the function at a point (this was a separate gymnastics to define the Taylor series without derivatives). I laughed at this definition for a long time until I finally understood what it was about. The derivative is nothing more than a simple measure of how similar the function we are differentiating is to the functions y=x, y=x^2, y=x^3.

I now have the honor of lecturing to students who are afraid of mathematics. If you are afraid of mathematics, we are on the same path. As soon as you try to read some text and it seems to you that it is overly complicated, know that it is poorly written. I assert that there is not a single area of mathematics that cannot be discussed "on the fingers" without losing accuracy.

Assignment for the near future: I assigned my students to understand what a linear-quadratic regulator is. Don't be shy: spend three minutes of your life and follow the link. If you don't understand anything, then we are on the same path. I (a professional mathematician-programmer) didn't understand anything either. And I assure you, you can figure this out "on your fingers." At the moment I don't know what it is, but I assure you that we will be able to figure it out.

So, the first lecture that I am going to give my students after they come running to me in horror, saying that a linear-quadratic regulator is a terrible thing you will never master in your life, is about the least squares method. Can you solve linear equations? If you are reading this text, then most likely not.

So, given two points (x0, y0), (x1, y1), for example, (1,1) and (3,2), the task is to find the equation of the line passing through these two points:

illustration

This line should have an equation of the following form:

y = alpha * x + beta

Here alpha and beta are unknown to us, but two points of this line are known:

alpha * x0 + beta = y0,
alpha * x1 + beta = y1.

We can write this system in matrix form:

[[x0, 1], [x1, 1]] * (alpha, beta)^T = (y0, y1)^T

Here we should make a lyrical digression: what is a matrix? A matrix is ​​nothing more than a two-dimensional array. This is a way of storing data; no further meanings should be attached to it. It depends on us exactly how to interpret a certain matrix. Periodically I will interpret it as a linear mapping, periodically as a quadratic form, and sometimes simply as a set of vectors. This will all be clarified in context.

Let's replace the concrete matrices with their symbolic representation:

A * x = b, where x = (alpha, beta)^T.

Then (alpha, beta) can be easily found:

x = A^{-1} * b

More specifically, for our previous data:

A = [[1, 1], [3, 1]], b = (1, 2)^T, whence (alpha, beta) = (1/2, 1/2).

Which leads to the following equation of the line passing through the points (1,1) and (3,2): y = x/2 + 1/2.
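The arithmetic behind this, reconstructed from the two given points (the source formulas were images):

\[
A=\begin{pmatrix}1&1\\3&1\end{pmatrix},\quad
b=\begin{pmatrix}1\\2\end{pmatrix},\quad
A^{-1}=\frac{1}{\det A}\begin{pmatrix}1&-1\\-3&1\end{pmatrix}
=-\frac12\begin{pmatrix}1&-1\\-3&1\end{pmatrix},
\]
\[
\begin{pmatrix}\alpha\\\beta\end{pmatrix}=A^{-1}b
=-\frac12\begin{pmatrix}1-2\\-3+2\end{pmatrix}
=\begin{pmatrix}1/2\\1/2\end{pmatrix},
\qquad y=\tfrac12 x+\tfrac12 .
\]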

Okay, everything is clear here. Let's find the equation of the line passing through three points: (x0,y0), (x1,y1) and (x2,y2):

Oh-oh-oh, but we have three equations for two unknowns! A standard mathematician will say that there is no solution. What will the programmer say? He will first rewrite the previous system of equations in the following form:

alpha * i + beta * j = b,

where i = (x0, x1, x2)^T, j = (1, 1, 1)^T, b = (y0, y1, y2)^T.

In our case, the vectors i, j, b are three-dimensional, therefore (in the general case) this system has no solution. Any vector (alpha*i + beta*j) lies in the plane spanned by the vectors (i, j). If b does not belong to this plane, then there is no solution (equality cannot be achieved in the equation). What to do? Let's look for a compromise. Let's denote by e(alpha, beta) exactly how far we are from equality:

e(alpha, beta) = alpha * i + beta * j - b

And we will try to minimize this error:

minimize ||e(alpha, beta)||^2 over alpha, beta

Why square?

We are looking not just for the minimum of the norm, but for the minimum of the square of the norm. Why? The minimum point itself is the same, but the square gives a smooth function (a quadratic function of the arguments (alpha, beta)), while the plain length gives a cone-shaped function that is non-differentiable at the minimum point. Brr. A square is more convenient.

Obviously, the error is minimized when the vector e is orthogonal to the plane spanned by the vectors i and j.

Illustration

In other words: we are looking for a straight line such that the sum of the squared lengths of the distances from all points to this straight line is minimal:

UPDATE: I have a problem here: the distance to the straight line should be measured vertically, not by orthogonal projection. This commenter is right.

Illustration

In completely different words (carefully, poorly formalized, but it should be clear): we take all possible lines between all pairs of points and look for the average line among them:

Illustration

Another explanation is straightforward: we attach a spring between each data point (here we have three) and the straight line we are looking for; the straight line of the equilibrium state is exactly what we seek.

Minimum quadratic form

So, given the vector b and the plane spanned by the column vectors of the matrix A (in this case (x0,x1,x2) and (1,1,1)), we are looking for a vector e with minimum squared length. Obviously, the minimum is achievable only for the vector e orthogonal to the plane spanned by the column vectors of the matrix A:

A^T * e(alpha, beta) = 0

In other words, we are looking for a vector x=(alpha, beta) such that:

A^T * A * x = A^T * b

Let me remind you that this vector x=(alpha, beta) is the minimum of the quadratic function ||e(alpha, beta)||^2:

Here it is useful to remember that a matrix can also be interpreted as a quadratic form; for example, the identity matrix ((1,0),(0,1)) can be interpreted as the function x^2 + y^2:

quadratic form

All this gymnastics is known under the name linear regression.
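As a sketch (not from the original post), the same normal equations written out in code: build the entries of A^T A and A^T b as dot products of the columns (x0,...,x_{m-1}) and (1,...,1), then solve the 2x2 system; the function name is mine.

#include <array>
#include <vector>

// Sketch: least squares line through m points via A^T A x = A^T b.
std::array<double,2> lsq_line(const std::vector<double>& xs,
                              const std::vector<double>& ys) {
    // Columns of A are i = (x_0,...,x_{m-1}) and j = (1,...,1).
    double aa = 0, ab = 0, bb = 0, ay = 0, by = 0;
    for (std::size_t k = 0; k < xs.size(); k++) {
        aa += xs[k] * xs[k];   // <i, i>
        ab += xs[k];           // <i, j>
        bb += 1;               // <j, j> = m
        ay += xs[k] * ys[k];   // <i, b>
        by += ys[k];           // <j, b>
    }
    const double det = aa * bb - ab * ab;
    return { (ay * bb - ab * by) / det,    // alpha
             (aa * by - ay * ab) / det };  // beta
}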

Laplace's equation with Dirichlet boundary condition

Now for the simplest real task: there is a certain triangulated surface, and it needs to be smoothed. For example, let's load a model of my face:

The original commit is available. To minimize external dependencies, I took the code of my software renderer, already described on Habr. To solve the linear system I use OpenNL, an excellent solver which is, however, very easy to install: you need to copy two files (.h+.c) to the folder with your project. All smoothing is done with the following code:

for (int d=0; d<3; d++) {
    nlNewContext();
    nlSolverParameteri(NL_NB_VARIABLES, verts.size());
    nlSolverParameteri(NL_LEAST_SQUARES, NL_TRUE);
    nlBegin(NL_SYSTEM);
    nlBegin(NL_MATRIX);
    for (int i=0; i<(int)verts.size(); i++) {
        // data-fitting row: the new vertex position should stay near the old one
        nlBegin(NL_ROW);
        nlCoefficient(i, 1);
        nlRightHandSide(verts[i][d]);
        nlEnd(NL_ROW);
    }
    for (unsigned int i=0; i<faces.size(); i++) {
        auto &face = faces[i]; // the three vertex indices of triangle i
        for (int j=0; j<3; j++) {
            // smoothing row: the two endpoints of each edge should coincide
            nlBegin(NL_ROW);
            nlCoefficient(face[ j     ],  1);
            nlCoefficient(face[(j+1)%3], -1);
            nlEnd(NL_ROW);
        }
    }
    nlEnd(NL_MATRIX);
    nlEnd(NL_SYSTEM);
    nlSolve();
    for (int i=0; i<(int)verts.size(); i++) {
        verts[i][d] = nlGetVariable(i);
    }
}

The X, Y and Z coordinates are separable, so I smooth them separately. That is, I solve three systems of linear equations, each with the number of variables equal to the number of vertices in my model. The first n rows of the matrix A have only one 1 per row, and the first n entries of the vector b hold the original model coordinates. That is, I tie a spring between the new position of a vertex and its old position: the new ones should not move too far from the old ones.

All subsequent rows of the matrix A (faces.size()*3 = the number of edges of all triangles in the mesh) have one occurrence of 1 and one occurrence of -1, with the corresponding entries of the vector b equal to zero. This means I put a spring on each edge of our triangular mesh: all edges try to have the same starting and ending point.

Once again: all vertices are variables, and they cannot move far from their original position, but at the same time they try to become similar to each other.

Here's the result:

Everything would be fine: the model is really smoothed, but it has moved away from its original border. Let's change the code a little:

for (int i=0; i<(int)verts.size(); i++) {
    float scale = border[i] ? 1000 : 1; // border vertices get a much stiffer spring
    nlBegin(NL_ROW);
    nlCoefficient(i, scale);
    nlRightHandSide(scale*verts[i][d]);
    nlEnd(NL_ROW);
}

In our matrix A, for the vertices on the border I add not a row of the form v_i = verts[i][d] but 1000*v_i = 1000*verts[i][d]. What does this change? It changes our quadratic form of error. Now a unit deviation of a border vertex will cost not one unit, as before, but 1000*1000 units. That is, we hung a stronger spring on the extreme vertices, and the solution will prefer to stretch the others more strongly. Here's the result:

Let's double the spring strength between the vertices:
nlCoefficient(face[ j ], 2); nlCoefficient(face[(j+1)%3], -2);

It is logical that the surface has become smoother:

And now even a hundred times stronger:

What is this? Imagine that we have dipped a wire ring in soapy water. The resulting soap film will try to have the least curvature possible while touching the border, our wire ring. This is exactly what we got by fixing the border and asking for a smooth surface inside. Congratulations, we have just solved Laplace's equation with Dirichlet boundary conditions. Sounds cool? But in reality, you just need to solve one system of linear equations.

Poisson's equation

Let's remember another cool name.

Let's say I have an image like this:

The picture is fine overall, but I don't like the chair.

I'll cut the picture in half:



And I will select the chair by hand:

Then I will pull everything that is white in the mask toward the left half of the picture, and at the same time, throughout the whole picture, I will require that the difference between two neighboring pixels be equal to the difference between the corresponding neighboring pixels of the right picture:

The constraints are again built row by row with OpenNL, analogously to the smoothing loops above; a sketch follows.
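A minimal sketch of what this loop plausibly looks like, reusing the OpenNL calls from the smoothing code; the names w, h, mask, left, right, the per-channel index d, and the row-major pixel indexing are my assumptions, since the original loop did not survive in the source.

for (int p = 0; p < w * h; p++) {   // one variable per pixel, channel d
    int x = p % w;
    if (mask[p]) {                  // white in the mask:
        nlBegin(NL_ROW);            // pin this pixel to the left image
        nlCoefficient(p, 1);
        nlRightHandSide(left[p][d]);
        nlEnd(NL_ROW);
    }
    if (x + 1 < w) {                // the difference with the right-hand neighbor
        nlBegin(NL_ROW);            // must match the difference in the right image
        nlCoefficient(p + 1,  1);
        nlCoefficient(p,     -1);
        nlRightHandSide(right[p + 1][d] - right[p][d]);
        nlEnd(NL_ROW);
    }
    // an analogous row constrains the difference with the neighbor below
}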

Here's the result:

The code and pictures are available.

If a certain physical quantity y depends on another quantity x, then this dependence can be studied by measuring y at different values of x. As a result of the measurements, a series of values is obtained:

x₁, x₂, ..., xᵢ, ..., xₙ;

y₁, y₂, ..., yᵢ, ..., yₙ.

Based on the data of such an experiment, one can construct a graph of the dependence y = ƒ(x). The resulting curve makes it possible to judge the form of the function ƒ(x). However, the constant coefficients that enter into this function remain unknown. They can be determined using the least squares method. Experimental points, as a rule, do not lie exactly on the curve. The least squares method requires that the sum of the squares of the deviations of the experimental points from the curve, i.e. Σ[yᵢ − ƒ(xᵢ)]², be the smallest.

In practice, this method is most often (and most simply) used in the case of a linear relationship, i.e., when

y = kx or y = a + bx.

A linear dependence is very widespread in physics. And even when the relationship is nonlinear, one usually tries to construct the graph so as to get a straight line. For example, if it is assumed that the refractive index of glass n is related to the light wavelength λ by the relation n = a + b/λ², then the dependence of n on λ⁻² is plotted on the graph.

Consider the dependence y = kx (a straight line passing through the origin). Let us compose the value φ, the sum of the squares of the deviations of our points from the straight line:

φ = Σ(yᵢ − kxᵢ)².

The value of φ is always positive and turns out to be smaller the closer our points are to the straight line. The least squares method states that k should be chosen so that φ has a minimum:

dφ/dk = −2Σxᵢ(yᵢ − kxᵢ) = 0,

or

k = Σxᵢyᵢ / Σxᵢ².  (19)

The calculation shows that the root-mean-square error in determining the value of k is equal to

S_k = √( Σ(yᵢ − kxᵢ)² / ((n − 1)·Σxᵢ²) ),  (20)

where n is the number of measurements.
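Formulas (19) and (20) as a C++ sketch (the function and variable names are mine):

#include <cmath>
#include <vector>

// Sketch of formulas (19) and (20) for the line y = kx through the origin.
double fit_k(const std::vector<double>& x, const std::vector<double>& y,
             double& s_k) {
    const int n = (int)x.size();
    double sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) { sxx += x[i] * x[i]; sxy += x[i] * y[i]; }
    const double k = sxy / sxx;                 // formula (19)
    double sres = 0;                            // sum of (y_i - k*x_i)^2
    for (int i = 0; i < n; i++) {
        const double d = y[i] - k * x[i];
        sres += d * d;
    }
    s_k = std::sqrt(sres / ((n - 1) * sxx));    // formula (20)
    return k;
}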

Let us now consider a slightly more difficult case, when the points must satisfy the formula y = a + bx (a straight line that does not pass through the origin).

The task is to find the best values of a and b from the available set of values xᵢ, yᵢ.

Let us again compose the quadratic form φ, equal to the sum of the squared deviations of the points xᵢ, yᵢ from the straight line:

φ = Σ(yᵢ − a − bxᵢ)²,

and find the values of a and b for which φ has a minimum:

∂φ/∂a = −2Σ(yᵢ − a − bxᵢ) = 0;

∂φ/∂b = −2Σxᵢ(yᵢ − a − bxᵢ) = 0.

This gives the system of normal equations

an + bΣxᵢ = Σyᵢ;  aΣxᵢ + bΣxᵢ² = Σxᵢyᵢ.

The joint solution of these equations gives

b = Σ(xᵢ − x̄)yᵢ / Σ(xᵢ − x̄)²,  (21)

a = ȳ − b·x̄.  (22)

The root-mean-square errors of the determination of a and b are equal to

S_b = √( Σ(yᵢ − a − bxᵢ)² / ((n − 2)·Σ(xᵢ − x̄)²) ),  (23)

S_a = S_b·√( Σxᵢ² / n ).  (24)

When processing measurement results by this method, it is convenient to summarize all the data in a table in which all the sums included in formulas (19)-(24) are calculated beforehand. The forms of these tables are given in the examples below.
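Formulas (21) through (24) as a C++ sketch (the names are mine); the same sums appear as the table columns used in the examples below.

#include <cmath>
#include <vector>

struct FitResult { double a, b, s_a, s_b; };

// Sketch of formulas (21)-(24) for the line y = a + b*x.
FitResult fit_ab(const std::vector<double>& x, const std::vector<double>& y) {
    const int n = (int)x.size();
    double xm = 0, ym = 0;
    for (int i = 0; i < n; i++) { xm += x[i]; ym += y[i]; }
    xm /= n; ym /= n;                          // means of x and y
    double sxx = 0, sxy = 0, sx2 = 0;
    for (int i = 0; i < n; i++) {
        sxx += (x[i] - xm) * (x[i] - xm);      // sum of (x_i - xbar)^2
        sxy += (x[i] - xm) * y[i];             // sum of (x_i - xbar)*y_i
        sx2 += x[i] * x[i];                    // sum of x_i^2
    }
    const double b = sxy / sxx;                // (21)
    const double a = ym - b * xm;              // (22)
    double sres = 0;                           // residual sum of squares
    for (int i = 0; i < n; i++) {
        const double d = y[i] - a - b * x[i];
        sres += d * d;
    }
    const double s_b = std::sqrt(sres / ((n - 2) * sxx));  // (23)
    const double s_a = s_b * std::sqrt(sx2 / n);           // (24)
    return {a, b, s_a, s_b};
}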

Example 1. The basic equation of the dynamics of rotational motion, ε = M/J (a straight line passing through the origin), was studied. At different values of the moment M, the angular acceleration ε of a certain body was measured. It is required to determine the moment of inertia of this body. The results of the measurements of the moment of force and the angular acceleration are listed in the second and third columns of Table 5.

Table 5

n | M, N·m | ε, s⁻² | M² | Mε | ε − kM | (ε − kM)²
1 | 1.44 | 0.52 | 2.0736 | 0.7488 | 0.039432 | 0.001555
2 | 3.12 | 1.06 | 9.7344 | 3.3072 | 0.018768 | 0.000352
3 | 4.59 | 1.45 | 21.0681 | 6.6555 | −0.08181 | 0.006693
4 | 5.90 | 1.92 | 34.81 | 11.328 | −0.049 | 0.002401
5 | 7.45 | 2.56 | 55.5025 | 19.072 | 0.073725 | 0.005435
∑ | – | – | 123.1886 | 41.1115 | – | 0.016436

Using formula (19) we determine

k = ΣMᵢεᵢ / ΣMᵢ² = 41.1115 / 123.1886 = 0.3337 kg⁻¹·m⁻².

To determine the root-mean-square error, we use formula (20):

S_k = √( 0.016436 / (4 · 123.1886) ) = 0.005775 kg⁻¹·m⁻².

Since J = 1/k, according to formula (18) we have

J = 1/k = 1/0.3337 = 2.996 kg·m²;  S_J = J·S_k/k;

S_J = (2.996 · 0.005775)/0.3337 = 0.05185 kg·m².

Having set the reliability P = 0.95, from the table of Student coefficients for n = 5 we find t = 2.78 and determine the absolute error ΔJ = 2.78 · 0.05185 = 0.1441 ≈ 0.2 kg·m².

Let's write the result in the form:

J = (3.0 ± 0.2) kg·m² at P = 0.95.


Example 2. Let's calculate the temperature coefficient of resistance of a metal using the least squares method. Resistance depends linearly on temperature:

Rₜ = R₀(1 + αt) = R₀ + R₀αt.

The free term determines the resistance R₀ at a temperature of 0 °C, and the slope is the product of the temperature coefficient α and the resistance R₀.

The results of the measurements and calculations are given in Table 6.

Table 6

n | t, °C | r, Ohm | t − t̄ | (t − t̄)² | (t − t̄)·r | r − bt − a | (r − bt − a)²·10⁻⁶
1 | 23 | 1.242 | −62.8333 | 3948.028 | −78.039 | 0.007673 | 58.8722
2 | 59 | 1.326 | −26.8333 | 720.0278 | −35.581 | −0.00353 | 12.4959
3 | 84 | 1.386 | −1.83333 | 3.361111 | −2.541 | −0.00965 | 93.1506
4 | 96 | 1.417 | 10.16667 | 103.3611 | 14.40617 | −0.01039 | 107.898
5 | 120 | 1.512 | 34.16667 | 1167.361 | 51.66 | 0.021141 | 446.932
6 | 133 | 1.520 | 47.16667 | 2224.694 | 71.69333 | −0.00524 | 27.4556
∑ | 515 | 8.403 | – | 8166.833 | 21.5985 | – | 746.804
∑/n | 85.83333 | 1.4005 | – | – | – | – | –

Using formulas (21), (22) we determine

b = R₀α = Σ(t − t̄)·r / Σ(t − t̄)² = 21.5985 / 8166.833 = 0.002645 Ohm/deg;

a = R₀ = r̄ − R₀α·t̄ = 1.4005 − 0.002645 · 85.83333 = 1.1735 Ohm.

Let us find the error in the determination of α. Since α = b/R₀, according to formula (18) we have:

S_α = α·√( (S_b/b)² + (S_a/R₀)² ) = 0.000132 deg⁻¹.

Using formulas (23), (24) we have

S_b = √( 746.804·10⁻⁶ / (4 · 8166.833) ) = 0.000151 Ohm/deg;

S_a = S_b·√( Σtᵢ²/n ) = 0.014126 Ohm.

Having set the reliability P = 0.95, from the table of Student coefficients for n = 6 we find t = 2.57 and determine the absolute error Δα = 2.57 · 0.000132 = 0.000338 deg⁻¹.

α = (23 ± 4)·10⁻⁴ deg⁻¹ at P = 0.95.


Example 3. It is required to determine the radius of curvature of a lens using Newton's rings. The radii rₘ of Newton's rings were measured and the numbers m of these rings were determined. The radii of Newton's rings are related to the radius of curvature of the lens R and the ring number by the equation

r²ₘ = mλR − 2d₀R,

where d₀ is the thickness of the gap between the lens and the plane-parallel plate (or the deformation of the lens),

λ is the wavelength of the incident light;

λ = (600 ± 6) nm.

If we denote
r²ₘ = y;
m = x;
λR = b;
−2d₀R = a,

then the equation will take the form y = a + bx.

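Although the numerical part of this example breaks off in the source, the substitutions above already determine how the physical quantities are recovered from the fitted a and b:

\[
R=\frac{b}{\lambda},\qquad d_0=-\frac{a}{2R}=-\frac{a\lambda}{2b}.
\]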

The results of the measurements and calculations are entered in Table 7.

Table 7

n | x = m | y = r²ₘ, 10⁻² mm² | m − m̄ | (m − m̄)² | (m − m̄)·y | y − bx − a, 10⁻⁴ | (y − bx − a)², 10⁻⁶
1 | 1 | 6.101 | −2.5 | 6.25 | −0.152525 | 12.01 | 1.44229
2 | 2 | 11.834 | −1.5 | 2.25 | −0.17751 | −9.6 | 0.930766
3 | 3 | 17.808 | −0.5 | 0.25 | −0.08904 | −7.2 | 0.519086
4 | 4 | 23.814 | 0.5 | 0.25 | 0.11907 | −1.6 | 0.0243955
5 | 5 | 29.812 | 1.5 | 2.25 | 0.44718 | 3.28 | 0.107646
6 | 6 | 35.760 | 2.5 | 6.25 | 0.894 | 3.12 | 0.0975819
∑ | 21 | 125.129 | – | 17.5 | 1.041175 | – | 3.12176
∑/n | 3.5 | 20.8548333 | – | – | – | – | –

Linear regression is widely used in econometrics because its parameters have a clear economic interpretation.

Linear regression comes down to finding an equation of the form

ŷₓ = a + bx or y = a + bx + ε.

An equation of this form allows one, given the parameter values, to obtain theoretical values of the resultant characteristic by substituting the actual values of the factor x into it.

The construction of linear regression comes down to estimating its parameters a and b. Estimates of the linear regression parameters can be found by different methods.

The classical approach to estimating linear regression parameters is based on the least squares method (OLS).

The least squares method allows us to obtain estimates of the parameters a and b at which the sum of squared deviations of the actual values of the resultant characteristic y from the calculated (theoretical) values ŷₓ is minimal:

Σ(y − ŷₓ)² → min.

To find the minimum of the function, the partial derivatives with respect to each of the parameters a and b must be calculated and set equal to zero.

Let us denote this sum by S; then:

S = Σ(y − a − bx)² → min.

Transforming the formula, we obtain the following system of normal equations for estimating the parameters a and b:

na + bΣx = Σy,
aΣx + bΣx² = Σxy.   (3.5)

Solving the system of normal equations (3.5) either by the method of sequential elimination of variables or by the method of determinants, we find the required estimates of the parameters a and b.
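In determinant form, the solution of system (3.5) is the familiar pair of formulas:

\[
b=\frac{n\sum xy-\sum x\sum y}{\,n\sum x^2-\bigl(\sum x\bigr)^2\,},\qquad
a=\bar y-b\,\bar x .
\]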

The parameter b is called the regression coefficient. Its value shows the average change in the result when the factor changes by one unit.

The regression equation is always supplemented with an indicator of the closeness of the connection. When linear regression is used, this indicator is the linear correlation coefficient r. There are different modifications of the formula for the linear correlation coefficient; some of them are given below:

r = b·σₓ/σ_y;  r = Σ(x − x̄)(y − ȳ) / √( Σ(x − x̄)²·Σ(y − ȳ)² ).

As is known, the linear correlation coefficient lies within the limits −1 ≤ r ≤ 1.

To assess the quality of the fit of the linear function, the square of the linear correlation coefficient, called the coefficient of determination, is calculated. The coefficient of determination characterizes the proportion of the variance of the resultant characteristic y that is explained by the regression in the total variance of the resultant characteristic:

r² = σ²_explained / σ²_total.

Accordingly, the value 1 − r² characterizes the share of the variance of y caused by the influence of other factors not taken into account in the model.
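A C++ sketch of the linear correlation coefficient (its square is the coefficient of determination); the names are mine:

#include <cmath>
#include <vector>

// Sketch: linear correlation coefficient r; r*r is the coefficient of determination.
double correlation(const std::vector<double>& x, const std::vector<double>& y) {
    const int n = (int)x.size();
    double xm = 0, ym = 0;
    for (int i = 0; i < n; i++) { xm += x[i]; ym += y[i]; }
    xm /= n; ym /= n;
    double cxy = 0, cxx = 0, cyy = 0;
    for (int i = 0; i < n; i++) {
        cxy += (x[i] - xm) * (y[i] - ym);
        cxx += (x[i] - xm) * (x[i] - xm);
        cyy += (y[i] - ym) * (y[i] - ym);
    }
    return cxy / std::sqrt(cxx * cyy);  // always within -1 <= r <= 1
}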

Questions for self-control

1. What is the essence of the least squares method?

2. How many variables does pairwise regression use?

3. What coefficient determines the closeness of the connection between the changes?

4. Within what limits is the coefficient of determination defined?

5. How is the parameter b estimated in correlation-regression analysis?


Nonlinear economic models. Nonlinear regression models. Transformation of variables.

Nonlinear economic models.

Transformation of variables.

Elasticity coefficient.

If there are nonlinear relationships between economic phenomena, then they are expressed using the corresponding nonlinear functions: for example, an equilateral hyperbola y = a + b/x, a second-degree parabola y = a + bx + cx², etc.

There are two classes of nonlinear regressions:

1. Regressions that are nonlinear with respect to the explanatory variables included in the analysis, but linear with respect to the estimated parameters, for example:

polynomials of various degrees: y = a + bx + cx², y = a + bx + cx² + dx³;

an equilateral hyperbola: y = a + b/x;

a semilogarithmic function: y = a + b·ln x.

2. Regressions that are nonlinear in the estimated parameters, for example:

a power function: y = a·xᵇ;

an exponential function: y = a·bˣ;

an exponent: y = e^(a + bx).

The total sum of squared deviations of the individual values of the resultant characteristic y from the mean value ȳ is caused by the influence of many reasons. Let us conditionally divide the entire set of reasons into two groups: the factor under study x and all other factors.

If the factor does not influence the result, then the regression line on the graph is parallel to the Ox axis and ŷ = ȳ.

Then the entire variance of the resultant characteristic is due to the influence of other factors, and the total sum of squared deviations coincides with the residual one. If other factors do not influence the result, then y is functionally related to x and the residual sum of squares is zero. In this case, the sum of squared deviations explained by the regression is the same as the total sum of squares.

Since not all points of the correlation field lie on the regression line, their scatter always occurs as a result both of the influence of the factor x, i.e., the regression of y on x, and of other causes (unexplained variation). The suitability of the regression line for forecasting depends on what part of the total variation of the characteristic y is accounted for by the explained variation.

Obviously, if the sum of squared deviations due to the regression is greater than the residual sum of squares, then the regression equation is statistically significant and the factor x has a significant impact on the result y.


The assessment of the significance of the regression equation as a whole is given by the F-test (Fisher criterion). In this case, the null hypothesis is put forward that the regression coefficient is equal to zero, i.e., b = 0, and hence the factor x does not affect the result y.

The direct calculation of the F-test is preceded by an analysis of variance. The central place in it is occupied by the decomposition of the total sum of squared deviations of the variable y from its mean value ȳ into two parts, the "explained" and the "unexplained":

Σ(y − ȳ)² is the total sum of squared deviations;

Σ(ŷₓ − ȳ)² is the sum of squared deviations explained by the regression;

Σ(y − ŷₓ)² is the residual sum of squared deviations.

Any sum of squared deviations is related to its number of degrees of freedom, i.e., the number of independent variations of a characteristic. The number of degrees of freedom is related to the number of population units n and to the number of constants determined from them. In relation to the problem under study, the number of degrees of freedom should show how many independent deviations out of the n possible are required to form a given sum of squares.

The dispersion per degree of freedom is D = (sum of squared deviations) / (number of degrees of freedom).

F-ratio (F-test):

F = D_fact / D_resid.

If the null hypothesis is true, then the factor dispersion and the residual dispersion do not differ from each other. To refute H₀, the factor dispersion must exceed the residual one several times. The English statistician Snedecor developed tables of critical values of the F-ratio for different levels of significance of the null hypothesis and different numbers of degrees of freedom. The tabular value of the F-test is the maximum value of the ratio of the dispersions that can occur due to random divergence for a given probability level of the null hypothesis. The calculated value of the F-ratio is considered reliable if it is greater than the tabular one.

In this case, the null hypothesis of the absence of a relationship between the characteristics is rejected and a conclusion is drawn about the significance of this relationship: if F_fact > F_table, H₀ is rejected.

If the value is less than the tabular one, F_fact < F_table, then the probability of the null hypothesis is higher than the specified level, and it cannot be rejected without a serious risk of drawing a wrong conclusion about the presence of a relationship. In this case, the regression equation is considered statistically insignificant, but H₀ is not rejected.

Standard error of regression coefficient

To assess the significance of the regression coefficient, its value is compared with its standard error, i.e., the actual value of Student's t-test, t_b = b/m_b, is determined and then compared with the tabular value at a certain significance level and number of degrees of freedom (n − 2).

Standard error of the parameter a:

m_a = σ_res·√( Σx² / (n·Σ(x − x̄)²) ),

where σ_res is the standard deviation of the regression residuals.

The significance of the linear correlation coefficient is checked using the error of the correlation coefficient m_r:

m_r = √( (1 − r²)/(n − 2) ),  t_r = r/m_r.

Total variance of the characteristic x:

σ²ₓ = Σ(x − x̄)² / n.

Multiple Linear Regression

Model building

Multiple regression is a regression of an effective characteristic with two or more factors, i.e., a model of the form

y = ƒ(x₁, x₂, …, x_p) + ε.

Paired regression can give good results in modeling if the influence of other factors affecting the object of study can be neglected. The behavior of individual economic variables cannot be controlled; i.e., it is not possible to ensure the equality of all other conditions when assessing the influence of a single factor under study. In this case, one should try to identify the influence of other factors by introducing them into the model, i.e., construct a multiple regression equation: y = a + b₁x₁ + b₂x₂ + … + b_px_p + ε.
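In matrix form (a standard sketch: X is the matrix of factor observations with a leading column of ones, y the vector of observations of the resultant characteristic, b the vector of coefficients), the OLS estimate is:

\[
y=Xb+\varepsilon,\qquad \hat b=(X^{\top}X)^{-1}X^{\top}y .
\]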

The main goal of multiple regression is to build a model with a large number of factors, determining the influence of each of them separately, as well as their combined impact on the modeled indicator. The specification of the model includes two ranges of issues: the selection of factors and the choice of the type of regression equation.