Orthogonal and orthonormal systems.

Definition 1. A system of vectors (a₁, …, aₘ) of a Euclidean space is called orthogonal if all its elements are pairwise orthogonal: (aᵢ, aⱼ) = 0 for i ≠ j.

Theorem 1. An orthogonal system of non-zero vectors is linearly independent.

Proof. Assume the system is linearly dependent: λ₁a₁ + … + λₘaₘ = 0 with, say, λ₁ ≠ 0. Let us scalarly multiply this equality by a₁. Taking into account the orthogonality of the system, we obtain λ₁(a₁, a₁) = 0, whence a₁ = 0, which contradicts the assumption that the vectors are non-zero. ∎

Definition 2. A system of vectors (e₁, …, eₘ) of a Euclidean space is called orthonormal if it is orthogonal and the norm of each element is equal to one.

It follows immediately from Theorem 1 that an orthonormal system of elements is always linearly independent. It follows in turn that in an n-dimensional Euclidean space an orthonormal system of n vectors forms a basis (for example, (i, j, k) in 3-dimensional space). Such a system is called an orthonormal basis, and its vectors are basis vectors.

The coordinates of a vector in an orthonormal basis are easily computed via the scalar product: if x = α₁e₁ + … + αₙeₙ, then αᵢ = (x, eᵢ). Indeed, multiplying this equality scalarly by eᵢ, we obtain the indicated formula.
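A small numerical illustration of this formula (a plain-Python sketch; the rotated basis of R² and the helper `dot` are my own choices, not from the text):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An orthonormal basis of R^2: the standard basis rotated by 45 degrees.
e1 = (1 / math.sqrt(2), 1 / math.sqrt(2))
e2 = (-1 / math.sqrt(2), 1 / math.sqrt(2))

x = (3.0, 1.0)

# Coordinates of x in the basis (e1, e2): alpha_i = (x, e_i).
a1, a2 = dot(x, e1), dot(x, e2)

# Reconstructing x from its coordinates recovers the original vector.
x_rec = tuple(a1 * u + a2 * v for u, v in zip(e1, e2))
assert all(abs(p - q) < 1e-12 for p, q in zip(x_rec, x))
```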

In general, all the basic quantities (the scalar product of vectors, the length of a vector, the cosine of the angle between vectors, etc.) have their simplest form in an orthonormal basis. Let us consider the scalar product: if x = α₁e₁ + … + αₙeₙ and y = β₁e₁ + … + βₙeₙ, then

(x, y) = α₁β₁ + α₂β₂ + … + αₙβₙ,

since (eᵢ, eᵢ) = 1 and all the other terms αᵢβⱼ(eᵢ, eⱼ), i ≠ j, are equal to zero. From here we immediately get:

|x| = √(α₁² + … + αₙ²),  cos φ = (x, y) / (|x||y|).

* Consider an arbitrary basis (f₁, …, fₙ). The scalar product in this basis will be equal to:

(x, y) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ γᵢⱼ αᵢβⱼ.

(Here αᵢ and βⱼ are the coordinates of the vectors in the basis (f), and γᵢⱼ = (fᵢ, fⱼ) are the scalar products of the basis vectors.)

The quantities γᵢⱼ form a matrix G, called the Gram matrix. In matrix form the scalar product looks like (x, y) = αᵀGβ. *
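A sketch of the Gram-matrix formula in plain Python (the non-orthogonal basis below is an illustrative choice of mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A non-orthogonal basis of R^2 (chosen for illustration).
f = [(1.0, 0.0), (1.0, 1.0)]

# Gram matrix: G[i][j] = (f_i, f_j).
G = [[dot(fi, fj) for fj in f] for fi in f]

# Coordinates of two vectors in the basis f.
alpha = (2.0, 1.0)   # x = 2*f1 + 1*f2 = (3, 1)
beta = (1.0, 3.0)    # y = 1*f1 + 3*f2 = (4, 3)

# Scalar product via the Gram matrix: sum over i, j of G[i][j]*alpha_i*beta_j.
s = sum(G[i][j] * alpha[i] * beta[j] for i in range(2) for j in range(2))

# Direct computation in standard coordinates agrees (both equal 15).
x, y = (3.0, 1.0), (4.0, 3.0)
assert abs(s - dot(x, y)) < 1e-12
```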

Theorem 2. In any n-dimensional Euclidean space there exists an orthonormal basis. The proof of the theorem is constructive and is known as the

9. Gram–Schmidt orthogonalization process.

Let (a₁, …, aₙ) be an arbitrary basis of an n-dimensional Euclidean space (such a basis exists because the space is n-dimensional). The algorithm for constructing an orthonormal basis from the given basis is as follows:

1. b₁ = a₁;  e₁ = b₁/|b₁|,  |e₁| = 1.

2. b₂ ⊥ e₁, since (e₁, a₂)e₁ is the projection of a₂ onto e₁:  b₂ = a₂ − (e₁, a₂)e₁;  e₂ = b₂/|b₂|,  |e₂| = 1.

3. b₃ ⊥ e₁, b₃ ⊥ e₂:  b₃ = a₃ − (e₁, a₃)e₁ − (e₂, a₃)e₂;  e₃ = b₃/|b₃|,  |e₃| = 1.

…

k. bₖ ⊥ e₁, …, bₖ ⊥ eₖ₋₁:  bₖ = aₖ − Σᵢ₌₁ᵏ⁻¹ (eᵢ, aₖ)eᵢ;  eₖ = bₖ/|bₖ|,  |eₖ| = 1.

Continuing the process, we obtain an orthonormal basis (e₁, …, eₙ).
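The k-step recipe above translates almost line for line into code. A minimal sketch in plain Python (the function name and the helper `dot` are my own, not from the text):

```python
import math

def dot(u, v):
    """Scalar product of two vectors given as sequences of numbers."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors a_1, ..., a_n.

    Step k: b_k = a_k - sum over i < k of (e_i, a_k) e_i, then e_k = b_k / |b_k|.
    """
    es = []
    for a in vectors:
        b = list(a)
        for e in es:
            c = dot(e, a)                              # coefficient (e_i, a_k)
            b = [bj - c * ej for bj, ej in zip(b, e)]  # subtract the projection
        norm = math.sqrt(dot(b, b))
        es.append([bj / norm for bj in b])
    return es

# Example: two independent vectors in R^3.
es = gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0)])
assert abs(dot(es[0], es[0]) - 1) < 1e-12   # unit length
assert abs(dot(es[0], es[1])) < 1e-12       # orthogonality
```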

Note 1. Using the considered algorithm, one can construct an orthonormal basis of any linear span, for example, an orthonormal basis of the linear span of a system of five-dimensional vectors whose rank equals three.



Example. x = (3, 4, 0, 1, 2), y = (3, 0, 4, 1, 2), z = (0, 4, 3, 1, 2).
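A sketch of running the process on these three vectors numerically (plain Python; the text does not give the resulting basis, so the code only verifies that the output is orthonormal):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [3.0, 4.0, 0.0, 1.0, 2.0]
y = [3.0, 0.0, 4.0, 1.0, 2.0]
z = [0.0, 4.0, 3.0, 1.0, 2.0]

es = []                      # the orthonormalized system e_1, e_2, e_3
for a in (x, y, z):
    b = a[:]
    for e in es:
        c = dot(e, a)
        b = [bj - c * ej for bj, ej in zip(b, e)]
    n = math.sqrt(dot(b, b))
    es.append([bj / n for bj in b])

# Verify orthonormality: (e_i, e_j) = 1 if i == j, else 0.
for i in range(3):
    for j in range(3):
        target = 1.0 if i == j else 0.0
        assert abs(dot(es[i], es[j]) - target) < 1e-9
```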

Note 2. Special cases

The Gram-Schmidt process can also be applied to an infinite sequence of linearly independent vectors.

Additionally, the Gram–Schmidt process can be applied to linearly dependent vectors. In this case it produces the zero vector at step j if aⱼ is a linear combination of a₁, …, aⱼ₋₁. If this can happen, then, to preserve the orthogonality of the output vectors and to prevent division by zero during normalization, the algorithm must check for zero vectors and discard them. The number of vectors produced by the algorithm then equals the dimension of the subspace spanned by the original vectors (i.e., the number of linearly independent vectors that can be singled out among them).
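This zero-check variant can be sketched as follows (plain Python; the tolerance `tol` is an illustrative choice for floating-point arithmetic):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt_rank(vectors, tol=1e-10):
    """Orthonormalize possibly dependent vectors, discarding (near-)zero b_j."""
    es = []
    for a in vectors:
        b = list(a)
        for e in es:
            c = dot(e, a)
            b = [bj - c * ej for bj, ej in zip(b, e)]
        n = math.sqrt(dot(b, b))
        if n > tol:          # keep only genuinely new directions
            es.append([bj / n for bj in b])
    return es

# a_3 = a_1 + a_2 is dependent, so only two orthonormal vectors survive.
basis = gram_schmidt_rank([(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 0.0)])
assert len(basis) == 2       # equals the rank of the input system
```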

10. Geometric vector spaces R1, R2, R3.

Let us emphasize that only the spaces R1, R2, R3 have a direct geometric meaning; the space Rn for n > 3 is an abstract, purely mathematical object.

1) Let a system of two vectors a and b be given. If the system is linearly dependent, then one of the vectors, say a, is expressed linearly through the other:

a = kb.

Two vectors connected by such a dependence are, as already mentioned, called collinear. So a system of two vectors is linearly dependent if and only if these vectors are collinear. Note that this conclusion applies not only to R3 but to any linear space.

2) Let a system in R3 consist of three vectors a, b, c. Linear dependence means that one of the vectors, say a, is expressed linearly through the rest:

a = kb + lc. (*)

Definition. Three vectors a, b, c in R3 that lie in the same plane or are parallel to the same plane are called coplanar.

(In the figure, on the left the vectors a, b, c are drawn in one plane; on the right the same vectors are drawn from different origins and are only parallel to one plane.)

So, if three vectors in R3 are linearly dependent, then they are coplanar. The converse is also true: if the vectors a, b, c from R3 are coplanar, then they are linearly dependent.

The vector product of a vector a by a vector b in space is a vector c satisfying the following requirements: 1) |c| = |a||b| sin φ, where φ is the angle between a and b; 2) c ⊥ a and c ⊥ b; 3) the triple (a, b, c) is right-handed.

Notation: c = [a, b] or c = a × b.

Consider an ordered triple of non-coplanar vectors a, b, c in three-dimensional space. Let us bring the origins of these vectors to a common point A (that is, choose an arbitrary point A in space and translate each vector in parallel so that its origin coincides with A). The ends of the vectors, with their origins placed at A, do not lie on one line, since the vectors are non-coplanar.

An ordered triple of non-coplanar vectors a, b, c in three-dimensional space is called right if, looking from the end of the vector c, the shortest rotation from a to b is seen counterclockwise. Conversely, if the shortest rotation is seen clockwise, the triple is called left.

Another definition is related to the right hand of a person (see figure), which is where the name comes from.

All right-handed (and left-handed) triples of vectors are called identically oriented.


An orthogonal system, if complete, can be used as a basis of the space. In this case the decomposition of any element x can be computed by the formulas x = Σₖ αₖeₖ, where αₖ = (x, eₖ)/(eₖ, eₖ).

The case when the norm of every element equals one is called an orthonormal system.

Orthogonalization

Any complete linearly independent system in a finite-dimensional space is a basis. Therefore, starting from an arbitrary basis, one can always pass to an orthonormal basis.

Orthogonal decomposition

When the vectors of a vector space are decomposed over an orthonormal basis, the calculation of the scalar product is simplified: (a, b) = Σₖ αₖβₖ, where αₖ = (a, eₖ) and βₖ = (b, eₖ).




Definition. Vectors a and b are called orthogonal (perpendicular) to each other if their scalar product is equal to zero, i.e. a · b = 0.

For non-zero vectors a and b, the vanishing of the scalar product means that cos φ = 0, i.e. φ = π/2. The zero vector is orthogonal to any vector, because a · 0 = 0.

Exercise. Let a and b be orthogonal vectors. Then it is natural to consider a + b as the diagonal of a rectangle with sides a and b. Prove that

|a + b|² = |a|² + |b|²,

i.e., the square of the length of the diagonal of a rectangle is equal to the sum of the squares of the lengths of its two non-parallel sides (the Pythagorean theorem).
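A quick numerical check of this identity for one concrete orthogonal pair (a plain-Python sketch; the vectors are my own choice):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = (3.0, 4.0, 0.0)
b = (4.0, -3.0, 5.0)
assert dot(a, b) == 0                         # a and b are orthogonal

s = tuple(ai + bi for ai, bi in zip(a, b))    # the diagonal a + b
# |a + b|^2 = |a|^2 + |b|^2 for orthogonal a, b
assert dot(s, s) == dot(a, a) + dot(b, b)     # 75 = 25 + 50
```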

Definition. A system of vectors a₁, …, aₘ is called orthogonal if any two vectors of this system are orthogonal.

Thus, for an orthogonal system of vectors a₁, …, aₘ the equality aᵢ · aⱼ = 0 holds for i ≠ j; i = 1, …, m; j = 1, …, m.

Theorem 1.5. An orthogonal system consisting of nonzero vectors is linearly independent.

□ We carry out the proof by contradiction. Suppose the orthogonal system of nonzero vectors a₁, …, aₘ is linearly dependent. Then

λ₁a₁ + … + λₘaₘ = 0, where not all λᵢ are equal to zero. (1.15)

Let, for example, λ₁ ≠ 0. Multiply both sides of equality (1.15) scalarly by a₁:

λ₁ a₁ · a₁ + … + λₘ aₘ · a₁ = 0.

All terms except the first are equal to zero due to the orthogonality of the system a₁, …, aₘ. Then λ₁ a₁ · a₁ = 0, from which it follows that a₁ = 0, which contradicts the condition. Our assumption turned out to be wrong. This means that the orthogonal system of nonzero vectors is linearly independent. ■

The following theorem holds.

Theorem 1.6. In the space Rn there always exists a basis consisting of orthogonal vectors (an orthogonal basis) (without proof).

Orthogonal bases are convenient primarily because the expansion coefficients of an arbitrary vector in such a basis are easily determined.

Suppose we need to find the decomposition of an arbitrary vector b in an orthogonal basis e₁, …, eₙ. Let us write the expansion of this vector with as yet unknown coefficients:

b = α₁e₁ + α₂e₂ + … + αₙeₙ. (1.16)

Let us multiply both sides of this equality scalarly by the vector e₁. By virtue of axioms 2° and 3° of the scalar product of vectors, we obtain

b · e₁ = α₁ e₁ · e₁ + α₂ e₂ · e₁ + … + αₙ eₙ · e₁.

Since the basis vectors e₁, …, eₙ are mutually orthogonal, all scalar products of distinct basis vectors vanish, so the first coefficient is determined by the formula

α₁ = (b · e₁) / (e₁ · e₁).

Multiplying equality (1.16) in turn by the other basis vectors, we obtain simple formulas for the expansion coefficients of the vector b:

αᵢ = (b · eᵢ) / (eᵢ · eᵢ),  i = 1, …, n. (1.17)

Formulas (1.17) make sense because eᵢ ≠ 0, so eᵢ · eᵢ > 0.
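Formulas (1.17) can be checked numerically with an orthogonal but non-normalized basis (a plain-Python sketch; the basis and vector are illustrative choices of mine):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# An orthogonal (not normalized) basis of R^2.
e = [(2.0, 0.0), (0.0, 3.0)]
b = (4.0, 9.0)

# Formula (1.17): alpha_i = (b, e_i) / (e_i, e_i).
alpha = [dot(b, ei) / dot(ei, ei) for ei in e]

# The expansion b = alpha_1 e_1 + alpha_2 e_2 reproduces b.
b_rec = tuple(alpha[0] * e1j + alpha[1] * e2j for e1j, e2j in zip(*e))
assert b_rec == b
```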

Definition. A vector a is called normalized (or unit) if its length equals 1, i.e. (a, a) = 1.

Any nonzero vector can be normalized. Let a ≠ 0. Then |a| > 0, and the vector a/|a| is a normalized vector.

Definition. A system of vectors e₁, …, eₙ is called orthonormal if it is orthogonal and the length of each vector of the system equals 1, i.e.

eᵢ · eⱼ = 0 for i ≠ j,  eᵢ · eᵢ = 1,  i, j = 1, …, n. (1.18)

Since an orthogonal basis always exists in the space Rn and the vectors of this basis can be normalized, an orthonormal basis always exists in Rn.

An example of an orthonormal basis of the space Rn is the system of vectors e₁ = (1, 0, …, 0), …, eₙ = (0, 0, …, 1) with the scalar product defined by equality (1.9). In this basis, formulas (1.17) for the coordinates of the decomposition of b take the simplest form: αᵢ = b · eᵢ.

Let a and b be two arbitrary vectors of the space Rn with the orthonormal basis e₁ = (1, 0, …, 0), …, eₙ = (0, 0, …, 1). Denote the coordinates of a and b in this basis by α₁, …, αₙ and β₁, …, βₙ respectively, and let us find the expression of the scalar product of these vectors through their coordinates in this basis, i.e., suppose that

a = α₁e₁ + … + αₙeₙ,  b = β₁e₁ + … + βₙeₙ.

From the last equalities, by virtue of the scalar product axioms and relations (1.18), we obtain

a · b = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ αᵢβⱼ (eᵢ · eⱼ) = Σᵢ₌₁ⁿ αᵢβᵢ.

Finally we have

a · b = α₁β₁ + α₂β₂ + … + αₙβₙ. (1.19)

Thus, in an orthonormal basis the scalar product of any two vectors is equal to the sum of the products of the corresponding coordinates of these vectors.

Let us now consider a completely arbitrary (generally speaking, not orthonormal) basis in the n-dimensional Euclidean space Rn and find an expression for the scalar product of two arbitrary vectors a and b through the coordinates of these vectors in that basis.

A system of functions {φₙ(x)}, n = 1, 2, …, is said to be orthogonal with weight ρ(x) on the segment [a, b] if

∫ₐᵇ φₙ(x) φₘ(x) ρ(x) dx = 0 for n ≠ m.

Examples. The trigonometric system 1, cos nx, sin nx; n = 1, 2, …, is an orthogonal system of functions with weight 1 on the segment [−π, π]. The Bessel functions Jν(μₙ^(ν)x), n = 1, 2, …, where μₙ^(ν) are the positive zeros of Jν, form for each ν > −1/2 an orthogonal system with weight x on the segment [0, 1].

If each function φₙ(x) of an orthogonal system also satisfies ∫ₐᵇ φₙ²(x) ρ(x) dx = 1, the system is called normalized.

The systematic study of orthogonal systems of functions began in connection with the Fourier method for solving boundary value problems of the equations of mathematical physics. This method leads, for example, to finding the solutions of the Sturm–Liouville problem for the equation [ρ(x)y′]′ + q(x)y = λy satisfying the boundary conditions y(a) + hy′(a) = 0, y(b) + Hy′(b) = 0, where h and H are constants. These solutions, the so-called eigenfunctions of the problem, form an orthogonal system with weight ρ(x) on the segment [a, b].

An extremely important class of orthogonal systems of functions, the orthogonal polynomials, was discovered by P. L. Chebyshev in his studies on interpolation by the method of least squares and on the problem of moments. In the 20th century research on orthogonal systems of functions has been carried out mainly on the basis of the Lebesgue integral and Lebesgue measure, which helped these studies grow into an independent branch of mathematics. One of the main problems of the theory is the expansion of a function f(x) in a series of the form Σ Cₙφₙ(x), where {φₙ(x)} is a normalized orthogonal system. If one writes formally f(x) = Σ Cₙφₙ(x) and allows term-by-term integration, then, multiplying this series by φₙ(x)ρ(x) and integrating from a to b, we get:

Cₙ = ∫ₐᵇ f(x) φₙ(x) ρ(x) dx. (*)

The coefficients Cₙ, called the Fourier coefficients of the function relative to the system {φₙ(x)}, have the following extremal property: among all linear expressions of the form Σₖ₌₁ⁿ bₖφₖ(x), the linear form Σₖ₌₁ⁿ Cₖφₖ(x) gives the smallest mean-square error

∫ₐᵇ ρ(x) (f(x) − Σₖ₌₁ⁿ bₖφₖ(x))² dx

for the same n.

The series Σₙ₌₁^∞ Cₙφₙ(x) with coefficients Cₙ calculated by formula (*) is called the Fourier series of the function f(x) with respect to the normalized orthogonal system {φₙ(x)}. For applications, the question of primary importance is whether a function f(x) is uniquely determined by its Fourier coefficients. Orthogonal systems for which this holds are called complete, or closed. The conditions of closedness can be stated in several equivalent forms. 1) Any continuous function f(x) can be approximated in the mean with any degree of accuracy by linear combinations of the functions φₖ(x); that is, the series Σ Cₙφₙ(x) converges in the mean to the function f(x). 2) For every function f(x) whose square is integrable with weight ρ(x), the Lyapunov–Steklov closedness condition is satisfied:

∫ₐᵇ ρ(x) f²(x) dx = Σₙ₌₁^∞ Cₙ².

3) There is no nonzero function with square integrable on the segment [a, b] that is orthogonal to all the functions φₙ(x), n = 1, 2, ….

If functions with an integrable square are regarded as elements of a Hilbert space, then normalized orthogonal systems of functions are systems of coordinate unit vectors of that space, and expansion in a normalized orthogonal system is the expansion of a vector in the unit vectors. With this approach, many concepts of the theory acquire a clear geometric meaning. For example, formula (*) means that the projection of a vector onto a unit vector equals the scalar product of the vector with that unit vector; the Lyapunov–Steklov equality can be interpreted as the Pythagorean theorem for an infinite-dimensional space: the square of the length of a vector equals the sum of the squares of its projections onto the coordinate axes; completeness of an orthogonal system means that the smallest closed subspace containing all the vectors of the system coincides with the whole space, and so on.
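As a numerical illustration of formula (*) (a plain-Python sketch with crude midpoint integration; the test function and the normalized sine system on [−π, π] with weight 1 are my own choices):

```python
import math

def integrate(f, a, b, n=20000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def phi(n):
    """Normalized sine system on [-pi, pi]: the integral of phi_n^2 equals 1."""
    return lambda x: math.sin(n * x) / math.sqrt(math.pi)

f = lambda x: 3 * math.sin(x) - 2 * math.sin(2 * x)

# Fourier coefficients C_n = integral of f * phi_n (weight rho = 1).
C1 = integrate(lambda x: f(x) * phi(1)(x), -math.pi, math.pi)
C2 = integrate(lambda x: f(x) * phi(2)(x), -math.pi, math.pi)

# f was built from sin x and sin 2x, so the coefficients are recovered:
assert abs(C1 - 3 * math.sqrt(math.pi)) < 1e-3
assert abs(C2 + 2 * math.sqrt(math.pi)) < 1e-3
```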




x = λ₀e + z, where z ⊥ L. To calculate λ₀, we scalarly multiply both sides of the equality by e. Since (z, e) = 0, we get (x, e) = λ₀(e, e) = λ₀.

Orthogonal and orthonormal systems

Definition 5.5. If L is a subspace of a Hilbert space H, then the collection M of all elements of H that are orthogonal to L is called the orthogonal complement of L.

Let us prove that M is also a subspace.

1) From property 3) for orthogonal elements it follows that M is a linear subset of the space H.

2) Let zₙ ∈ M and zₙ → z. By the definition of M, zₙ ⊥ y for any y ∈ L, and by property 4) for orthogonal elements we have z ⊥ y. Therefore z ∈ M, and M is closed.

For any x ∈ H, by Theorem 5.3 there is a unique expansion of the form x = y + z, where y ∈ L, z ∈ M; i.e., the subspaces L and M form an orthogonal decomposition of the space H.

Lemma 5.1. Let a finite or countable set of pairwise orthogonal subspaces Lₙ be given and let an element x ∈ H be representable in the form x = Σ yₙ, where yₙ ∈ Lₙ. Then such a representation is unique and yₙ = Pr_Lₙ x.

Definition 5.6. A system of orthogonal subspaces L n is called complete if in the space H there is no nonzero element orthogonal to all L n .

Definition 5.7. A finite or countable system of elements hₙ of a Hilbert space H is called orthogonal if hₙ ⊥ hₘ for n ≠ m.

Definition 5.8. An orthogonal system hₙ is called orthonormal if ||hₙ|| = 1.

Definition 5.9. An orthogonal system hₙ is called complete if there is no nonzero element x ∈ H such that x ⊥ hₙ for all n.

You can check that nonzero elements of the orthogonal system are linearly independent.

An example of a complete orthonormal system in l 2 is the system of all coordinate unit vectors.

The one-dimensional subspaces Lₙ generated by the elements hₙ are pairwise orthogonal. The projections of an element x onto these subspaces are calculated by the formula

Pr_Lₙ x = αₙhₙ.

The numbers αₙ = (x, hₙ) are called the Fourier coefficients of the element x relative to the system of elements hₙ.

Theorem 5.4. If an element x ∈ H can be represented in the form

x = Σₙ λₙhₙ, (5.7)

then this representation is unique and the coefficients are λₙ = (x, hₙ), the Fourier coefficients of x.

This representation of x is called the Fourier expansion (orthogonal expansion) of the element x over the elements hₙ.

Theorem 5.5. In order for every element x ∈ H to be representable by its Fourier expansion over the elements hₙ of an orthonormal system, it is necessary and sufficient that this system be complete.

From this theorem it follows that in an n-dimensional Hilbert space a complete orthonormal system must consist of n elements. On the other hand, if in an n-dimensional Hilbert space an arbitrary basis is given, consisting of pairwise orthogonal elements, then it follows from Theorem 5.5 that this system is complete.

Definition 5.10. A complete orthonormal system of elements is called an orthonormal basis of the Hilbert space.

Definition 5.11. The relation

Σₙ αₙ² = ||x||², (5.8)

where αₙ are the Fourier coefficients of the element x, is called the closedness equation.

Theorem 5.6. For an arbitrary orthonormal system (hₙ), the following statements about an element x ∈ H are equivalent:

1) for the element x ∈ H the Fourier expansion (5.7) is valid;

2) the element x ∈ H belongs to the subspace generated by the set of elements (hₙ);

3) for the element x ∈ H the closedness equation (5.8) is satisfied.

Corollary. From Theorems 5.5 and 5.6 it follows that an orthonormal system is complete if and only if the closedness equation is satisfied for every x ∈ H.

Theorem 5.7. If an element x ∈ H can be represented by its Fourier expansion (5.7) over the elements of the orthonormal system (hₙ), then for any y ∈ H

(x, y) = Σₙ αₙβₙ,

where αₙ are the Fourier coefficients of the element x and βₙ are the Fourier coefficients of the element y relative to the system (hₙ).

Theorem 5.8. A finite-dimensional normed space is separable.

Theorem 5.9. Any space with a countable basis is separable.

From Theorems 5.8 and 5.9 it follows that a finite or countable orthonormal basis can exist only in separable spaces.

Orthogonalization of a system of linearly independent elements

Let a finite or countable system of linearly independent elements g₁, g₂, … be given in a Hilbert space H. Let us construct an orthonormal system of elements h₁, h₂, … such that each hₙ has the form

hₙ = μₙ₁h₁-independent form μₙ₁g₁ + μₙ₂g₂ + … + μₙₙgₙ, (5.9)

and each gₙ has the form

gₙ = νₙ₁h₁ + νₙ₂h₂ + … + νₙₙhₙ. (5.10)

First, let us construct an orthogonal system of elements f₁, f₂, …, setting sequentially

f₁ = g₁,  fₙ = gₙ − Σₖ₌₁ⁿ⁻¹ λₙₖfₖ,  n = 2, 3, …. (5.11)

The coefficients λₙₖ must be selected in such a way that the elements f₁, f₂, … are pairwise orthogonal. Let the coefficients λₙₖ for the elements f₁, f₂, …, fₙ₋₁ already be found. Then for i < n

(fₙ, fᵢ) = (gₙ − Σₖ₌₁ⁿ⁻¹ λₙₖfₖ, fᵢ) = (gₙ, fᵢ) − Σₖ₌₁ⁿ⁻¹ λₙₖ(fₖ, fᵢ).

Since f₁, f₂, …, fₙ₋₁ are already pairwise orthogonal, (fₖ, fᵢ) = 0 for k ≠ i, and we get

(fₙ, fᵢ) = (gₙ, fᵢ) − λₙᵢ||fᵢ||².

Since each element fₙ is a linear combination of the linearly independent elements g₁, g₂, …, gₙ in which the coefficient at gₙ is unity, fₙ ≠ 0. In order for the condition (fₙ, fᵢ) = 0 to be satisfied, the coefficient λₙᵢ must be determined by the formula

λₙᵢ = (gₙ, fᵢ) / ||fᵢ||².

We have constructed an orthogonal system f₁, f₂, …. Now let us put

hₙ = fₙ / ||fₙ||.

The elements h₁, h₂, … are pairwise orthogonal, ||hₙ|| = 1, and each element hₙ is a linear combination of the elements g₁, g₂, …, gₙ, and therefore has the required form (5.9). On the other hand, formula (5.11) shows that each gₙ is a linear combination of the elements f₁, f₂, …, fₙ, and hence of the elements h₁, h₂, …, hₙ, i.e., it has the form (5.10). Thus, we have obtained the required orthonormal system.

Moreover, if the original system (gₙ) was infinite, then the orthogonalization process consists of an infinite number of steps, and the system (hₙ) will also be infinite. If the original system consists of m elements, then the resulting system will have the same number.

Note that from conditions (5.9) and (5.10) it follows that the linear spans of the systems of elements (gₙ) and (hₙ) coincide.

If L is a finite-dimensional subspace of the space H, and g₁, g₂, …, gₙ is an arbitrary basis of it, then, applying the orthogonalization process to the system (gₙ), we construct an orthonormal basis of the subspace L.

Isomorphism of an arbitrary separable Hilbert space with the space l²

Theorem 5.10. In a separable Hilbert space H containing nonzero elements, there exists a finite or countable orthonormal basis.

Proof.

By the definition of separability, there exists a countable everywhere dense set A in H. Let us number all the elements of A and select from A a finite or countable system B of linearly independent elements whose linear span coincides with the linear span of the set A. Then every element discarded from A is a linear combination of elements of the system B. We subject the system B to the orthogonalization process and construct a finite or countable orthonormal system of elements hₙ. Let us prove that it is complete.

Let x ∈ H be orthogonal to all hₙ. Since the elements of the system B are linear combinations of the elements hₙ, x is orthogonal to all elements of the system B. The set A differs from B in that it contains some further elements, each of which is a linear combination of elements of the system B; therefore, x is orthogonal to all elements of the set A. But since A is everywhere dense in H, x = 0 by property 5) for orthogonal elements. Thus, the completeness of the system of elements hₙ is proved.

Let us transfer the definitions of algebraic isomorphism and isometry for Euclidean spaces to any normed spaces.

Definition 5.12. Two normed spaces E and E₁ are called algebraically isomorphic and isometric if a one-to-one correspondence can be established between their elements so that:

a) algebraic operations on elements from E correspond to the same operations on their images in E 1 ;

b) the norms of the corresponding elements from E and from E 1 are equal.

Theorem 5.11. Every infinite-dimensional separable Hilbert space H is algebraically isomorphic and isometric to the space l 2 .

Proof.

By Theorem 5.10, there is a countable orthonormal basis h₁, h₂, …, hₙ, … in H. By Theorem 5.5, for any x ∈ H the expansion

x = Σₙ₌₁^∞ αₙhₙ

holds. To each element x ∈ H we assign the sequence of its coefficients (αₙ); since

Σₙ₌₁^∞ αₙ² = ||x||² < +∞, (5.12)

this sequence belongs to l². The vector a = (αₙ) will be called the image of the element x.

If αₙ are the Fourier coefficients of the element x and βₙ are the Fourier coefficients of the element y, then αₙ + βₙ are the Fourier coefficients of x + y, i.e., a + b is the image of the sum of the elements x and y. Similarly, it is verified that if a is the image of the element x, then λa is the image of the element λx. This means that algebraic operations on elements from H correspond to the same operations on their images in l².

Let us show that each vector a = (αₙ) ∈ l² is the image of some x ∈ H. To do this, for the given a we compose the series Σₙ₌₁^∞ αₙhₙ. Since the members of the series are pairwise orthogonal, and

Σₙ₌₁^∞ ||αₙhₙ||² = Σₙ₌₁^∞ αₙ² < +∞,

the series converges by Theorem 5.2. If we denote its sum by x, then by Theorem 5.4 the αₙ will be the Fourier coefficients of x; therefore, the given vector a will be its image.

Now let us check that the established correspondence between elements from H and vectors from l² is one-to-one. Indeed, if the vectors a and b are the images of elements x and y respectively, then, by what has been proved, a − b is the image of x − y, and by (5.12) ||a − b|| = ||x − y||. Therefore, if x ≠ y, then a ≠ b.

In other words, if the orthonormal system is complete, and two elements x and y have respectively the same Fourier coefficients, then x = y. This is not true for an incomplete system.

Thus, we have established a correspondence between the elements of H and the vectors of l² which is an algebraic isomorphism and, by (5.12), an isometry. The theorem is proved.

Now we prove that the isomorphism between H and l² also preserves the value of the scalar product.

Theorem 5.12. Under the isomorphism between the spaces H and l² established in Theorem 5.11, the scalar product of any two elements of H is equal to the scalar product of their images in l².

Proof. Let the vectors a and b be the images of the elements x and y respectively, a = (αₙ), b = (βₙ). Then

x = Σₙ₌₁^∞ αₙhₙ,  y = Σₙ₌₁^∞ βₙhₙ.

Taking into account Theorem 5.7 and the definition of the scalar product in l², we find

(x, y) = Σₙ₌₁^∞ αₙβₙ = (a, b). ■