Statistical physics and thermodynamics. Dynamic and statistical laws

Lecture 2.

Thermodynamics, statistical physics, information entropy

1. Information from thermodynamics and statistical physics. Distribution function. Liouville's theorem. Microcanonical distribution. The first law of thermodynamics. Adiabatic processes. Entropy. Statistical weight. Boltzmann's formula. Second law of thermodynamics. Reversible and irreversible processes.

2. Shannon information entropy. Bits, nats, trits, etc. Relationship between entropy and information.

This part belongs to lecture 1. It is better considered in section V (“The concept of entanglement of quantum states”).

The CNOT logic element (LE) is depicted as:

The value of (qu)bit a is preserved, while (qu)bit b changes according to the XOR rule:

The target bit b changes its state if and only if the control bit a is in state 1; the state of the control bit itself does not change.

The logical operation XOR (CNOT) illustrates why classical data can be cloned but quantum data cannot. Note that in the general case, by quantum data we will understand superpositions of the form

|ψ> = α|0> + β|1>, (1)

where α and β are complex numbers (the state amplitudes), and |α|² + |β|² = 1.

According to the truth table, if XOR is applied to Boolean data in which the second bit is in the “0” state (b) and the first is in the “X” state (a), then the first bit does not change, and the second becomes its copy:

U_XOR(X, 0) = (X, X), where X = "0" or "1".

In the quantum case, the data denoted by the symbol “X” should be considered superposition (1):

|X> = α|0> + β|1>.

Physically, the data can be encoded, for example, in the polarization basis |V> = 1, |H> = 0, i.e., (H, V) = (0, 1):

U_XOR |H>_a |H>_b = |H>_a |H>_b

and

U_XOR |V>_a |H>_b = |V>_a |V>_b.

It can be seen that state copying actually takes place. The no-cloning theorem states that it is impossible to copy an arbitrary quantum state. In the example considered, copying occurred because the operation was performed in its own basis (|0>, |1>), i.e., on a special case of a quantum state.

It would seem that the XOR operation can also be used to copy superpositions of two Boolean states, such as |45°> ∝ |V> + |H>:

But that's not true! The unitarity of quantum evolution requires that a superposition of input states be transformed into a corresponding superposition of output states:

U_XOR |45°>_a |H>_b = (1/√2)(|V>_a|V>_b + |H>_a|H>_b) ≡ |Φ+>. (2)

This is the so-called entangled state (Φ+), in which each of the two output qubits does not have a definite value (in this case, a definite polarization). This example shows that logical operations performed on quantum objects follow different rules than in classical computing processes.
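The difference can be checked directly with a small numerical sketch (Python/NumPy; the state labels follow the notation above): CNOT copies the basis states |0> and |1>, but, applied to the superposition (1), it produces the entangled state (2) rather than two independent copies.

import numpy as np

# Single-qubit basis states and the CNOT gate (control = first qubit, target = second)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Copying works for basis states: CNOT |X>|0> = |X>|X>
for X, ket in (("0", ket0), ("1", ket1)):
    out = CNOT @ np.kron(ket, ket0)
    print(f"CNOT |{X}>|0> =", out)          # equals |X>|X>

# For the superposition |45> = (|0> + |1>)/sqrt(2) the output is entangled,
# not the product |45>|45> that cloning would give
ket45 = (ket0 + ket1) / np.sqrt(2)
entangled = CNOT @ np.kron(ket45, ket0)      # (|00> + |11>)/sqrt(2)
product = np.kron(ket45, ket45)
print("CNOT |45>|0> =", entangled)
print("|45>|45>     =", product)
print("equal?", np.allclose(entangled, product))   # False: no cloning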

The next question arises: it would seem that the state of the output mode a can again be represented as a superposition of the form (3), and likewise the state of mode b. How can one show that this is not so, i.e., that it makes no sense to speak of the states of mode (bit) a and mode (bit) b separately at all?

Let's use the polarization analogy when

|45°>_a = (|V>_a + |H>_a)/√2. (3)

There are two ways. Path I is longer but more consistent. It is necessary to calculate the average values of the Stokes parameters for both output modes. The averages are taken over the wave function (2). If all of them except S0 turn out to be zero, then this state is unpolarized, i.e., mixed, and superposition (3) makes no sense. We work in the Heisenberg representation, in which the operators are transformed but the wave function is not.

So, we find the Stokes parameters in mode a:

S0 = a_V†a_V + a_H†a_H — the total beam intensity in mode a,

S1 = a_V†a_V − a_H†a_H — the proportion of vertical polarization,

S2 = a_+45†a_+45 − a_−45†a_−45 — the proportion of +45° polarization,

S3 = a_R†a_R − a_L†a_L — the proportion of right-hand circular polarization.

The wave function over which averaging is performed is taken in the form (2):

where the creation and annihilation operators in modes a and b act according to the rules:

(Do the calculations in Section V (see notebook); there, also calculate the probability of registering coincidences, i.e., the corresponding correlator.)

Path II is more visual, but less “honest”!

Let us find the dependence of the light intensity in mode a on the rotation angle of a polarizer placed in this mode. This is a standard quantum-optical way of checking state (2): the intensity should not depend on the rotation. At the same time, the analogous dependence of the number of coincidences has the form R_c ∝ cos²(θ1 − θ2). Such dependences were first obtained by E. Fry (1976) and A. Aspect (1985) and are often interpreted as proof of the nonlocality of quantum mechanics.

So, the experimental situation is depicted in the figure:

By definition,

I_a(θ) ∝ <a_θ† a_θ>,

where a_θ is the annihilation operator of mode a behind the polarizer. It is known that the transformation of the operators of two orthogonally polarized modes x and y when light passes through a polarizer oriented at an angle θ has the form

a_θ = a_x cos θ + a_y sin θ.

Substituting a_θ = a_V cos θ + a_H sin θ and averaging over state (2), only the diagonal terms <a_V†a_V> cos²θ and <a_H†a_H> sin²θ survive (all cross terms vanish), so that

I_a(θ) ∝ ½ cos²θ + ½ sin²θ = ½ — the intensity does not depend on the angle!

Physically, this happens because the wave function (2) does not factorize, and there is no point in talking about the states in modes a and b separately. Thus, it cannot be argued that mode a is in the superposition state (3)!

Comment. The calculation done (Path II) does not at all prove that the state in mode a is unpolarized. For example, if there were circularly polarized light in this mode, the result would be the same. A rigorous proof goes, for example, through the Stokes parameters (in Section V).
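A compact way to carry out such a rigorous check numerically is sketched below (Python/NumPy; the V/H basis and state (2) as above): tracing out mode b leaves mode a in a density matrix proportional to the identity, so the average Stokes parameters S1, S2, S3 all vanish.

import numpy as np

# Bell state (2): (|V>_a|V>_b + |H>_a|H>_b)/sqrt(2), basis order |V> = (1,0), |H> = (0,1)
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                # two-mode density matrix, 4x4

# Reduced density matrix of mode a: partial trace over mode b
rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_a)                                    # 0.5 * identity -> completely unpolarized

# Average Stokes parameters of mode a (Pauli matrices in the V/H basis)
s1 = np.array([[1, 0], [0, -1]])    # V minus H
s2 = np.array([[0, 1], [1, 0]])     # +45 minus -45
s3 = np.array([[0, -1j], [1j, 0]])  # right minus left circular
for name, s in (("S1", s1), ("S2", s2), ("S3", s3)):
    print(name, "=", np.real(np.trace(rho_a @ s)))   # all zero: mixed state, not superposition (3)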

Note that by acting in the same way, we can prove that the state in mode a before the CNOT element is polarized.

Here, averaging must be carried out over the wave function of the initial state (3). The result looks like this:

I_a(θ) ∝ cos²(θ − 45°), i.e., the maximum count rate is achieved at θ = 45°.

Information and entropy.

Without introducing the "operational" term "information" for now, we will argue in "everyday" language. That is, information is some knowledge about an object.

The following example shows that the concepts of information and entropy are closely related. Consider an ideal gas in thermodynamic equilibrium. The gas consists of a huge number of molecules moving in the volume V. The state parameters are pressure and temperature. The number of states of such a system is enormous. The entropy of the gas at thermodynamic equilibrium is maximal and, as follows from Boltzmann's formula, is determined by the number of microstates of the system. At the same time, we know nothing about which specific state the system occupies at a given moment in time: the information is minimal. Suppose that somehow, using a very fast device, we managed to "peek" at the state of the system at a given moment. We would thereby receive some information about it. One can even imagine that we photographed not only the coordinates of the molecules but also their velocities (for example, by taking several photographs one after another). Then, at every moment of time when information about the state of the system is available to us, the entropy tends to zero, because the system is in only one specific state out of the huge variety of possible ones, and that state is highly nonequilibrium. This example shows that information and entropy are indeed connected, and the nature of the connection is already emerging: the more information, the less entropy.

Information from thermodynamics and statistical physics.

Physical quantities that characterize the macroscopic states of bodies (many molecules) are called thermodynamic (including energy, volume). There are, however, quantities that appear as a result of the action of purely statistical laws and have meaning when applied only to macroscopic systems. Such, for example, are entropy and temperature.

Classical statistics

*Liouville's theorem. The distribution function is constant along the phase trajectories of the subsystem (we are talking about quasi-closed subsystems, so the theorem is valid only for not very large periods of time, during which the subsystem behaves as closed).

Here ρ = ρ(p1,…,ps, q1,…,qs) is the distribution function, or probability density. It is introduced through the probability dw of finding the subsystem in an element of phase space at a given moment of time: dw = ρ(p1,…,ps, q1,…,qs) dp dq, with the normalization

∫ ρ dp dq = 1. (4)

Finding the statistical distribution for any subsystem is the main task of statistics. If the statistical distribution is known, then it is possible to calculate the probabilities of different values of any physical quantity depending on the state of this subsystem (i.e., on the values of the coordinates and momenta):

<f> = ∫ f(p, q) ρ(p, q) dp dq.

*Microcanonical distribution.

The distribution function for a set of two subsystems (they are assumed to be closed, i.e., weakly interacting) factorizes: ρ12 = ρ1 ρ2. Therefore ln ρ — the logarithm of the distribution function — is an additive quantity. From Liouville's theorem it follows that the distribution function must be expressed in terms of such combinations of the variables p and q as remain constant when the subsystem moves as a closed system (such quantities are called integrals of motion). This means that the distribution function itself is an integral of motion. Moreover, its logarithm is also an integral of motion, and an additive one. In total, mechanics has seven independent additive integrals of motion: the energy, the three components of the momentum, and the three components of the angular momentum (for subsystem a: E_a(p, q), P_a(p, q), M_a(p, q)). The only additive combination of these quantities is

ln ρ_a = α_a + β E_a(p, q) + γ·P_a(p, q) + δ·M_a(p, q).

Moreover, these coefficients (there are seven of them) must be the same for all subsystems of the given closed system, while the coefficient α_a is determined from the normalization condition (4).

For normalization condition (4) to be satisfied, the function ρ(p, q) must turn to infinity at the points E0, P0, M0. A more precise formulation gives the expression

ρ = const · δ(E − E0) δ(P − P0) δ(M − M0)

— the microcanonical distribution.

The presence of the δ-functions ensures that ρ vanishes at all points of phase space where at least one of the quantities E, P, M is not equal to its given (average) value E0, P0, M0.

The six integrals P and M can be eliminated by enclosing the system in a rigid box in which it is at rest; then

ρ = const · δ(E − E0).

Physical entropy

Again we use the concept of an ideal gas.

Let a monatomic ideal gas with density n and temperature T occupy a volume V. We will measure temperature in energy units, so Boltzmann's constant will not appear. Each gas atom has an average kinetic energy of thermal motion equal to 3T/2. Therefore, the total thermal energy of the gas is equal to E = (3/2) nVT.

It is known that the gas pressure is equal to p = nT. If a gas can exchange heat with the external environment, then the law of conservation of gas energy looks like this:

dE = dQ − p dV. (5)

Thus, a change in the internal energy of a gas can occur both due to the work it does and due to the receipt of a certain amount of heat dQ from outside. This equation expresses the first law of thermodynamics, i.e., the law of energy conservation. It is assumed that the gas is in equilibrium, i.e., p = const throughout the entire volume.

If we assume that the gas is also in a state of thermodynamic equilibrium, T = const, then relation (5) can be regarded as describing an elementary process of variation of the gas parameters when they change very slowly, so that the thermodynamic equilibrium is not disturbed. It is for such processes that the concept of entropy S is introduced by means of the relation

dS = dQ / T. (6)

Thus, it is asserted that, in addition to the internal energy, an equilibrium gas has another internal characteristic associated with the thermal motion of the atoms. According to (5), (6), at constant volume dV = 0 the change in energy is proportional to the change in temperature, while in the general case

T dS = dE + p dV.

Since E = (3/2) NT, where N = nV = const is the total number of gas atoms, the last relation can be written as

dS = (3/2) N dT/T + N dV/V.

After integration we get

S = N [ (3/2) ln T + ln V ] + const = N ln(V T^{3/2}) + const.

The expression in square brackets represents the entropy per particle.

Thus, if both temperature and volume change in such a way that VT 3/2 remains constant, then the entropy S does not change. According to (6), this means that the gas does not exchange heat with the external environment, i.e. the gas is separated from it by heat-insulating walls. This process is called adiabatic.

Since p = nT = NT/V, the adiabat condition V T^{3/2} = const can also be written as

T V^{γ−1} = const, p V^{γ} = const,

where γ = 5/3 is called the adiabatic exponent. Thus, during an adiabatic process the temperature and pressure change with the density according to the law

T ∝ n^{γ−1}, p ∝ n^{γ}.
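A quick numerical check of these relations (a sketch with k_B = 1, an arbitrary particle number, and the entropy defined up to an additive constant): along an adiabat V T^{3/2} = const the entropy S = N ln(V T^{3/2}) stays constant, and p V^{5/3} is conserved as well.

import numpy as np

N = 1.0e3          # number of atoms (arbitrary illustrative value)
gamma = 5.0 / 3.0  # adiabatic exponent of a monatomic gas

def entropy(V, T):
    # S = N * ln(V * T**1.5) up to an additive constant, with k_B = 1
    return N * np.log(V * T**1.5)

# Start from (V1, T1) and expand adiabatically so that V*T**1.5 stays constant
V1, T1 = 1.0, 300.0
const = V1 * T1**1.5
for V2 in (2.0, 5.0, 10.0):
    T2 = (const / V2) ** (2.0 / 3.0)        # T ~ V**(-(gamma-1)) = V**(-2/3)
    p1, p2 = N * T1 / V1, N * T2 / V2       # p = N T / V
    print(f"V = {V2:5.1f}  T = {T2:7.2f}  dS = {entropy(V2, T2) - entropy(V1, T1):+.2e}"
          f"  pV^gamma ratio = {p2 * V2**gamma / (p1 * V1**gamma):.6f}")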

Boltzmann formula

As follows from Liouville's theorem, the distribution function w(E) has a sharp maximum at E = E0 (the average value) and is nonzero only in the vicinity of this point. Let us introduce the width ΔE of the curve w(E), defining it as the width of a rectangle whose height equals the value of the function w(E) at the maximum point and whose area equals unity (with proper normalization). From the interval of energy values ΔE we can pass to the number of states ΔΓ with energies belonging to ΔE (ΔE is, in fact, the average fluctuation of the energy of the system). The value ΔΓ then characterizes the degree of smearing of the macroscopic state of the system over its microscopic states. In other words, for classical systems ΔΓ is the size of the region of phase space in which the given subsystem spends almost all of its time. In the quasi-classical theory, a correspondence is established between the volume of a region of phase space and the number of quantum states contained in it: each quantum state corresponds to a cell of phase space with volume (2πħ)^s, where s is the number of degrees of freedom.

The value ΔΓ is called the statistical weight of the macroscopic state; it can be written as

ΔΓ = Δp Δq / (2πħ)^s.

The logarithm of the statistical weight is called the entropy:

S = ln ΔΓ,

where ΔΓ is the statistical weight, i.e., the number of microstates covered by the macrostate of the system under consideration.

In quantum statistics it is shown that for a completely determined (pure) state ΔΓ = 1 and S = 0. In the general case

S = −Σ_n w_n ln w_n = −Tr(ρ̂ ln ρ̂),

where ρ̂ is the statistical (density) matrix. Due to the linearity of the logarithm of the energy distribution function, S = −<ln w(E_n)> (*), where the averaging is carried out over the distribution function.

Since the number of states is in any case not less than one, entropy cannot be negative. S determines the density of levels in the energy spectrum of a macroscopic system. Due to the additivity of entropy, we can say that the average distances between the levels of a macroscopic body decrease exponentially with an increase in its size (i.e., the number of particles in it). The highest entropy value corresponds to complete statistical equilibrium.
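A toy illustration of statistical weight and entropy (a sketch with N two-level spins, the macrostate being the number n of "up" spins): the statistical weight is the binomial coefficient, the entropy S = ln ΔΓ is largest for the equilibrium macrostate n = N/2, and it is approximately additive for independent subsystems.

from math import comb, log

N = 100  # number of two-level spins (illustrative)

def S(n, N):
    """Entropy S = ln(statistical weight) of the macrostate 'n spins up out of N'."""
    return log(comb(N, n))

for n in (0, 10, 25, 50):
    print(f"n = {n:3d}   Gamma = {float(comb(N, n)):.3e}   S = {S(n, N):6.2f}")

# Approximate additivity: two independent subsystems of 50 spins each, both at equilibrium (n = 25)
print("S(50 of 100) =", round(S(50, 100), 2), " vs  S(25 of 50) + S(25 of 50) =", round(2 * S(25, 50), 2))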

Characterizing each macroscopic state of the system by the distribution of energy between the various subsystems, we can say that the states successively traversed by the system correspond to ever more probable distributions of energy. This growth of probability is large because of its exponential character, e^S: the exponent contains an additive quantity, the entropy. Thus, the processes occurring in a nonequilibrium closed system proceed in such a way that the system continuously passes from states with lower entropy to states with higher entropy. As a result, the entropy reaches the highest possible value, corresponding to complete statistical equilibrium.

Thus, if a closed system at some moment of time is in a nonequilibrium macroscopic state, then the most likely consequence at subsequent moments will be a monotonic increase of the entropy of the system. This is the second law of thermodynamics (R. Clausius, 1865). Its statistical justification was given by L. Boltzmann in 1870. Another definition:

if at some moment of time the entropy of a closed system differs from the maximum, then at subsequent moments the entropy does not decrease: it increases or, in the limiting case, remains constant. In accordance with these two possibilities, all processes occurring with macroscopic bodies are usually divided into irreversible and reversible. Irreversible processes are those accompanied by an increase in the entropy of the whole closed system (processes that would be their repetition in reverse order cannot occur, since the entropy would then have to decrease). Note, however, that a decrease in entropy can be caused by fluctuations. Reversible processes are those in which the entropy of the closed system remains constant and which can therefore also proceed in the opposite direction. A strictly reversible process is an ideal limiting case.

During adiabatic processes the system does not absorb or release heat: δQ = 0.

Comment (substantive). The statement that a closed system must, over a sufficiently long time (longer than the relaxation time), pass into a state of equilibrium applies only to a system under stationary external conditions. An example is the behavior of the large region of the Universe accessible to our observation (whose properties have nothing in common with the properties of an equilibrium system).

Information.

Let us consider a tape divided into cells — a classical register. If only one of two symbols can be placed in each cell, then the cell is said to contain one bit of information. It is obvious (see Lecture 1) that a register containing N cells holds N bits of information, and 2^N messages can be written in it. So, information entropy is measured in bits:

H_B = log2 Q_N = N. (7)

Here Q_N = 2^N is the total number of different messages. From (7) it is clear that the information entropy is simply equal to the minimum number of binary cells with which the given information can be recorded.

Definition (7) can be rewritten differently. Suppose we have a set of Q_N different messages. Let us find the probability that the message we need coincides with one randomly selected from the total number Q_N of different messages. It is obviously equal to P_N = 1/Q_N. Then definition (7) can be written as:

H_B = log2 Q_N = −log2 P_N. (8)

The greater the number of cells N, the smaller the probability P_N and the greater the information entropy H_B contained in this particular message.

Example. The number of letters of the alphabet is 32 (without the letter ё). The number 32 is the fifth power of two: 32 = 2^5. To match each letter with a specific combination of binary digits, one needs 5 cells. By adding capital letters to the lowercase ones, we double the number of characters we want to encode — there will be 64 = 2^6 — i.e., an extra bit of information is added, H_B = 6. Here H_B is the amount of information per letter (lowercase or uppercase). However, such a direct calculation of the information entropy is not entirely accurate, since some letters of the alphabet occur more rarely and others more often. The letters that occur rarely can be given a larger number of cells, while for the frequently occurring letters one can economize and assign them register states occupying a smaller number of cells. The exact definition of information entropy was given by Shannon:

H = −Σ_i p_i ln p_i. (9)

Formally, the derivation of this relationship can be justified as follows.

We showed above that S = −<ln w> = −Σ_n w_n ln w_n, due to the additivity of the logarithm of the distribution function and its linearity in the energy.

Let p_i be the distribution function of some discrete quantity f_i (for example, of the letter "o" in this text). If, using the function p_i, we construct the probability distribution of the various values of the quantity f = f1, f2, … f_N, then this function will have a sharp maximum at f = <f>, with Σ_i p_i = 1 (normalization). Then (generally speaking, this is true for the class of functions satisfying condition (*))

H = −Σ_i p_i ln p_i.

The summation is carried out over all characters (letters of the alphabet), and p i means the probability of the appearance of a symbol with a number i. As you can see, this expression covers both frequently used letters and letters whose likelihood of appearing in a given message is low.
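As a small illustration of formula (9) (the symbol probabilities below are made-up numbers, not real letter statistics): the entropy of a non-uniform distribution is smaller than the logarithm of the alphabet size, which is exactly the saving that an efficient code can exploit.

import math

def shannon_entropy(probs, base=math.e):
    """H = -sum p_i * log(p_i); base = e gives nats, base = 2 gives bits."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Hypothetical frequencies of 4 symbols (they must sum to 1)
probs = [0.5, 0.25, 0.15, 0.10]

print("uniform  H =", shannon_entropy([0.25] * 4, base=2), "bits")   # = 2 bits
print("skewed   H =", shannon_entropy(probs, base=2), "bits")        # < 2 bits
print("skewed   H =", shannon_entropy(probs), "nats")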

Since expression (9) uses the natural logarithm, the corresponding unit of information is called “nat”.

Expression (9) can be rewritten as

H = −<ln p>, (10)

where the brackets mean the usual classical averaging using the distribution function p i .

Comment. In the following lectures it will be shown that for quantum states

S = −Tr(ρ̂ ln ρ̂), (11)

where ρ̂ is the density matrix. Formally, expressions (10) and (11) look the same, but there is a significant difference. Classical averaging is performed over orthogonal (eigen)states of the system, while in the quantum case there can also be non-orthogonal states (superpositions). Therefore always H_quant ≤ H_class!

Formulas (8) and (9) use logarithms with different bases: in (8) the base is 2, and in (9) the base is e. The information entropies corresponding to these formulas are easily expressed through each other. Let us use the relation, in which M is an arbitrary number,

log2 M = ln M / ln 2.

Then, taking into account that H_bit = log2 Q and H_nat = ln Q, we get

H_bit = H_nat / ln 2 ≈ 1.44 H_nat

— the number of bits is almost one and a half times greater than the number of nats!

Reasoning in a similar way, we can obtain the relationship between entropies expressed in trits and bits:

H_bit = log2 3 · H_trit ≈ 1.585 H_trit.

In computer technology information is used in binary form (in bits). For reasoning in physics it is more convenient to use Shannon information (in nats), which can characterize any discrete information. One can always find the corresponding number of bits.
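The conversion discussed above amounts to dividing by the logarithm of the base; a one-line sketch:

import math

H_nats = 1.0                          # some entropy measured in nats
H_bits = H_nats / math.log(2)         # ~1.443 bits per nat
H_trits = H_nats / math.log(3)        # ~0.910 trits per nat
print(H_bits, H_trits, H_bits / H_trits)   # bits/trits = log2(3) ~ 1.585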

THE RELATIONSHIP BETWEEN ENTROPY AND INFORMATION. Maxwell's demon

This paradox was first considered by Maxwell in 1871 (see Fig. 1). Let some “supernatural” force open and close the valve in a vessel divided into two parts and containing gas. The valve is controlled by the rule that it opens if fast molecules moving from right to left touch it or if slow molecules hit it moving in the opposite direction. Thus, the demon introduces a temperature difference between two volumes without performing work, which violates the second law of thermodynamics.

Maxwell's demon. The demon establishes a pressure difference by opening the damper when the number of gas molecules hitting it from the left exceeds the number of hits from the right. This can be done in a completely reversible manner as long as the demon's memory stores the random results of his observations of the molecules; it is when this memory is later erased that the demon's memory (or his head) heats up. The irreversible step is not that information is accumulated, but that information is lost when the demon later clears his memory. Above: filling the demon's memory with bits of information is a random process. To the right of the dotted line is an empty memory area (all cells are in state 0); to the left are random bits. Below is the demon.

A number of attempts have been made to resolve the paradox, i.e., to exorcise the demon. For example, it was assumed that the demon could not extract information without doing work or without disturbing (i.e., heating) the gas — but it turned out that this is not so! Other attempts boiled down to the claim that the second law could be violated under the influence of certain "reasonable" or "thinking" forces (creatures). In 1929 Leo Szilard significantly "advanced" the solution of the problem, reducing it to a minimal formulation and highlighting the essential components. The main thing the demon needs to do is to establish whether a single molecule is located to the right or to the left of a sliding partition, which would allow heat to be extracted. This device was called the Szilard engine. However, Szilard did not resolve the paradox, because his analysis did not take into account how the measurement by which the demon learns whether the molecule is on the right or on the left affects the increase in entropy (see figure Szilard_demon.pdf). The engine operates in a six-step cycle. The engine is a cylinder with pistons at both ends; a partition can be inserted into the middle. The work of inserting the partition can be reduced to zero (Szilard showed this). There is also a memory unit (MU), which can be in one of three states: "Empty", "Molecule on the right", and "Molecule on the left". Initial state: MU = "Empty", the pistons are pushed out, the partition is withdrawn, the molecule has an average speed determined by the temperature of the thermostat (slide 1).

1. The partition is inserted, leaving the molecule on the right or left (slide 2).

2. The memory unit determines where the molecule is and goes into the "right" or "left" state.

3. Compression. Depending on the state of the MU, the piston moves in from the side where there is no molecule. This stage requires no work, because vacuum is being compressed (slide 3).

4. The partition is removed. The molecule begins to exert pressure on the piston (slide 4).

5. Working stroke. The molecule hits the piston, causing it to move back out. The energy of the molecule is transferred to the piston, so as the piston moves the molecule's average speed should decrease. However, this does not happen, since the walls of the vessel are kept at constant temperature: heat from the thermostat is transferred to the molecule, keeping its speed constant. Thus, during the working stroke, thermal energy supplied from the thermostat is converted into mechanical work performed by the piston (slide 6).

6. Clearing the MU, returning it to the "Empty" state (slide 7). The cycle is complete (slide 8 = slide 1).

It is surprising that this paradox was not resolved until the 1980s. During that time it was established that, in principle, any process can be performed in a reversible manner, i.e., without "payment" in entropy. Finally, Bennett in 1982 established the definitive connection between this statement and Maxwell's paradox. He proposed that the demon can indeed learn where the molecule is in Szilard's engine without doing work or increasing the entropy of the environment (the thermostat), and can thus perform useful work in one engine cycle. However, the information about the position of the molecule must remain in the demon's memory (Fig. 1). As more cycles are performed, more and more information accumulates in the memory. To complete the thermodynamic cycle, the demon must erase the information stored in memory. It is precisely this operation of erasing information that must be classified as the process that increases the entropy of the environment, as required by the second law. This completes the fundamentally physical part of the Maxwell's demon device.

These ideas received further development in the works of B. B. Kadomtsev.

Let us consider an ideal gas consisting of only one particle (Kadomtsev, “dynamics and information”). This is not absurd. If one particle is enclosed in a vessel of volume V with walls at temperature T, then sooner or later it will come into equilibrium with these walls. At each moment of time it is at a very specific point in space and at a very specific speed. We will carry out all the processes so slowly that the particle will have time, on average, to fill the entire volume and repeatedly change the magnitude and direction of the velocity during inelastic collisions with the walls of the vessel. Thus, the particle exerts an average pressure on the walls and has a temperature T and its velocity distribution is Maxwellian with temperature T. This system of one particle can be compressed adiabatically, its temperature can be changed, giving it the opportunity to come into equilibrium with the walls of the vessel.

The average pressure on the wall at N = 1 equals p = T/V, and the average density is n = 1/V. Let us consider the case of an isothermal process, T = const. From the first law, at T = const and p = T/V, we get

dQ = T dS = p dV = T dV/V,

because dE = 0 at constant temperature.

From here we find that the change in entropy does not depend on the temperature, so

dS = dV/V, S = ln(V / V0).

Here the integration constant V0 — the "particle size" — is introduced, with V0 ≪ V.

The work in an isothermal process is

A = ∫ p dV = T ln(V2 / V1) = T ΔS,

i.e., the work is determined by the difference in entropy.

Suppose we have ideal partitions that can be used to divide the vessel into parts without expending energy. Let us divide our vessel into two equal parts of volume V/2 each. The particle will then be in one of the halves — but we do not know which one. Suppose we have a device that allows us to determine which part the particle is in, for example a precision balance. Then, from the symmetric 50%/50% probability distribution over the two halves, we obtain a 100% probability for one of the halves — a "collapse" of the probability distribution occurs. Accordingly, the new entropy will be less than the original entropy by the amount ΔS = ln 2.

By decreasing the entropy, work can be done. To do this, it is enough to move the partition toward the empty volume until it disappears; the work will be equal to A = T ln 2. If nothing changed in the external world, then by repeating these cycles one could build a perpetual motion machine of the second kind. This is Maxwell's demon in Szilard's version. But the second law of thermodynamics forbids obtaining work from heat alone. This means that something must be happening in the outside world. What is it? Detecting the particle in one of the halves changes the information about the particle: of the two possible halves, the one in which the particle is located is singled out. This knowledge corresponds to one bit of information. The measurement process reduces the entropy of the particle (transferring it to a nonequilibrium state) and increases the information about the system (the particle) by exactly the same amount. If we repeatedly divide in half the previously obtained halves, quarters, eighths, etc., then the entropy will successively decrease and the information will increase! In other words,

the more we know about a physical system, the lower its entropy. If everything is known about the system, this means that we have transferred it to a highly nonequilibrium state, whose parameters are as far as possible from the equilibrium values. If in our model the particle can be localized in an elementary cell of volume V0, then S = 0 and the information reaches its maximum value, since the probability of finding the particle in a given cell is p_min = V0/V. If at subsequent moments of time the particle begins to fill a larger volume, the information will be lost and the entropy will increase. We emphasize that one must pay for information (according to the second law) by an increase in the entropy S_e of the external system, and this increase cannot be smaller than the information gained. Indeed, if for one bit of information the device (external system) increased its entropy by less than one bit, we could reverse the heat engine: by expanding the volume occupied by the particle, we would increase its entropy by ln 2, obtaining the work T ln 2, and the total entropy of the particle-plus-device system would decrease. But this is impossible according to the second law. Formally, ΔS + ΔS_e ≥ 0; therefore, a decrease in the entropy of the system (the particle) is accompanied by an increase in the entropy of the device.
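The bookkeeping of the Szilard engine is easy to put in numbers (a sketch in SI units at an illustrative room temperature): the work extracted per cycle is T ln 2, and erasing the one bit of memory must dissipate at least as much, so the total entropy balance never becomes negative.

import math

k_B = 1.380649e-23   # J/K
T = 300.0            # K, thermostat temperature (illustrative)

# Work extracted in one Szilard cycle after learning which half the particle is in
work_per_cycle = k_B * T * math.log(2)
print(f"work per cycle   = {work_per_cycle:.3e} J")          # ~2.9e-21 J

# Landauer bound: minimal heat dissipated when the 1-bit memory is erased
erase_heat_min = k_B * T * math.log(2)
print(f"erasure heat    >= {erase_heat_min:.3e} J")

# Entropy balance per cycle (in units of k_B): particle -ln 2, device >= +ln 2
dS_total = -math.log(2) + erase_heat_min / (k_B * T)
print("total entropy change >=", dS_total)                    # >= 0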

So, information entropy is a measure of the lack (or degree of uncertainty) of information about the actual state of a physical system.

Shannon information entropy:

H = −Σ_i p_i log2 p_i, where Σ_i p_i = 1 (this refers to two-level systems, such as bits: "0" and "1"; if the dimension is n, then H = log n — thus, for n = 3, H = log 3, and the number of states is 3).

The amount of information I (or simply the information) about the state of a classical system, obtained as a result of a measurement by an external device connected to the system under consideration by some communication channel, is defined as the difference between the information entropy corresponding to the initial uncertainty of the system's state, H0, and the information entropy of the final state of the system after the measurement, H. Thus,

I + H = H0 = const.

In the ideal case, when there is no noise or interference from external sources in the communication channel, the final probability distribution after the measurement reduces to one specific value, p_n = 1, i.e., H = 0, and the information obtained in the measurement attains its maximum value. Thus, the Shannon information entropy of a system has the meaning of the maximum information contained in the system; it can be determined under ideal conditions of measurement of the state of the system in the absence of noise and interference, when the entropy of the final state is zero:

I_max = H0.

Let us consider a classical logic element that can be in one of two equally probable logical states, "0" and "1". Such an element, together with its environment — the thermostat and the signal generated by an external, thermally insulated object — forms a single nonequilibrium closed system. The transition of the element to one of the states, for example to the state "0", corresponds to a decrease in the statistical weight of its state by a factor of 2 compared with the initial state (for three-level systems, by a factor of 3). Let us find the decrease in the Shannon information entropy, which corresponds to an increase in the amount of information about the element by one unit, called the bit:

ΔH = log2 2 = 1 bit.

Therefore, information entropy determines the number of bits that are required to encode information in the system or message in question.

LITERATURE

1. L. D. Landau, E. M. Lifshitz. Statistical Physics. Part 1. Moscow: Nauka, 1976.

2. M.A. Leontovich. Introduction to thermodynamics. Statistical physics. Moscow, Nauka, 1983. - 416 p.

3. B.B. Kadomtsev. Dynamics and information. UFN, 164, no. 5, 449 (1994).

10. Basic postulates of statistical thermodynamics

When describing systems consisting of a large number of particles, two approaches can be used: microscopic and macroscopic. In the first approach, based on classical or quantum mechanics, the microstate of the system is characterized in detail, for example, by the coordinates and momenta of every particle at every moment in time. A microscopic description requires solving the classical or quantum equations of motion for a huge number of variables. Thus, each microstate of an ideal gas in classical mechanics is described by 6N variables (N is the number of particles): 3N coordinates and 3N momentum projections.

The macroscopic approach, which is used by classical thermodynamics, characterizes only the macrostates of the system and uses a small number of variables for this, for example, three: temperature, volume and number of particles. If a system is in an equilibrium state, then its macroscopic parameters are constant, while its microscopic parameters change with time. This means that for each macrostate there are several (in fact, infinitely many) microstates.

Statistical thermodynamics establishes a connection between these two approaches. The basic idea is this: if each macrostate has many microstates associated with it, then each of them contributes to the macrostate. Then the properties of the macrostate can be calculated as the average over all microstates, i.e. summing up their contributions taking into account statistical weights.

Averaging over microstates is carried out using the concept of a statistical ensemble. An ensemble is an infinite set of identical systems located in all possible microstates corresponding to one macrostate. Each system of the ensemble represents one microstate. The entire ensemble is described by a distribution function over coordinates and momenta, ρ(p, q, t), which is defined as follows:

ρ(p, q, t) dp dq is the probability that a system of the ensemble is located in the volume element dp dq near the point (p, q) at the moment of time t.

The meaning of the distribution function is that it determines the statistical weight of each microstate in the macrostate.

From the definition follow the elementary properties of the distribution function:

1. Normalization

∫ ρ(p, q, t) dp dq = 1. (10.1)

2. Positive definiteness

ρ(p, q, t) ≥ 0. (10.2)

Many macroscopic properties of the system can be defined as the ensemble average of a function of the coordinates and momenta f(p, q):

<f> = ∫ f(p, q) ρ(p, q, t) dp dq. (10.3)

For example, the internal energy is the average of the Hamiltonian function H(p, q):

U = <H(p, q)> = ∫ H(p, q) ρ(p, q, t) dp dq. (10.4)

The existence of the distribution function constitutes the basic postulate of classical statistical mechanics:

The macroscopic state of the system is completely specified by some distribution function that satisfies conditions (10.1) and (10.2).

For equilibrium systems and equilibrium ensembles, the distribution function does not depend explicitly on time: ρ = ρ(p, q). The explicit form of the distribution function depends on the type of ensemble. There are three main types of ensembles:

1) The microcanonical ensemble describes isolated systems and is characterized by the variables E (energy), V (volume), and N (number of particles). In an isolated system all microstates are equally probable (the postulate of equal a priori probability):

ρ = const for E(p, q) = E. (10.5)

2) The canonical ensemble describes systems that are in thermal equilibrium with their environment. Thermal equilibrium is characterized by the temperature T; therefore, the distribution function also depends on the temperature:

ρ(p, q) = const · exp[ −H(p, q) / kT ] (10.6)

(k = 1.38·10⁻²³ J/K is the Boltzmann constant). The value of the constant in (10.6) is determined by the normalization condition (see (11.2)).

A special case of the canonical distribution (10.6) is the Maxwell distribution over speeds v, which is valid for gases:

ρ(v) = 4π v² (m / 2πkT)^{3/2} exp( −mv² / 2kT ). (10.7)

(m is the mass of a gas molecule). The expression ρ(v) dv describes the probability that a molecule has an absolute value of its velocity between v and v + dv. The maximum of function (10.7) gives the most probable speed of the molecules,

v_p = (2kT/m)^{1/2},

and the integral

<v> = ∫ v ρ(v) dv = (8kT/πm)^{1/2}

gives the average speed of the molecules.
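A short sketch for computing these speeds (standard constants; CO2 is taken only as an illustrative gas), which can also be used to check problems 10-5 to 10-9 below:

import math

k_B = 1.380649e-23      # J/K
N_A = 6.02214076e23     # 1/mol

def speeds(M_kg_per_mol, T):
    """Most probable and average speed from the Maxwell distribution."""
    m = M_kg_per_mol / N_A
    v_p = math.sqrt(2 * k_B * T / m)                # most probable speed
    v_avg = math.sqrt(8 * k_B * T / (math.pi * m))  # average speed
    return v_p, v_avg

# Example: CO2 (M = 0.044 kg/mol) at 300 K
v_p, v_avg = speeds(0.044, 300.0)
print(f"CO2 at 300 K: v_p = {v_p:.0f} m/s, <v> = {v_avg:.0f} m/s")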

If the system has discrete energy levels and is described quantum mechanically, then instead of the Hamiltonian function H(p, q) one uses the Hamiltonian operator Ĥ, and instead of the distribution function, the density matrix operator:

ρ̂ = const · exp( −Ĥ / kT ). (10.9)

The diagonal elements of the density matrix give the probability that the system is in the i-th energy state and has energy E_i:

ρ_ii = const · exp( −E_i / kT ). (10.10)

The value of the constant is determined by the normalization condition Σ_i ρ_ii = 1:

const = 1 / Σ_i exp( −E_i / kT ). (10.11)

The denominator of this expression is called the sum over states (see Chapter 11). It is of key importance for the statistical assessment of the thermodynamic properties of the system. From (10.10) and (10.11) one can find the number of particles N i having energy E i:

N_i = N · exp( −E_i / kT ) / Σ_j exp( −E_j / kT ). (10.12)

(N- total number of particles). The distribution of particles (10.12) over energy levels is called Boltzmann distribution, and the numerator of this distribution is the Boltzmann factor (multiplier). Sometimes this distribution is written in a different form: if there are several levels with the same energy E i, then they are combined into one group by summing the Boltzmann factors:

N_i = N · g_i exp( −E_i / kT ) / Σ_j g_j exp( −E_j / kT ). (10.13)

(g_i is the number of levels with energy E_i, i.e., the statistical weight, or degeneracy).

Many macroscopic parameters of a thermodynamic system can be calculated using the Boltzmann distribution. For example, average energy is defined as the average of energy levels taking into account their statistical weights:

<E> = Σ_i E_i g_i exp( −E_i / kT ) / Σ_i g_i exp( −E_i / kT ). (10.14)

3) The grand canonical ensemble describes open systems that are in thermal equilibrium and can exchange matter with the environment. Thermal equilibrium is characterized by the temperature T, and the equilibrium with respect to the number of particles by the chemical potential μ. Therefore, the distribution function depends on the temperature and on the chemical potential. We will not use an explicit expression for the distribution function of the grand canonical ensemble here.

In statistical theory it is proven that for systems with a large number of particles (~10²³) all three types of ensembles are equivalent to each other. The use of any ensemble leads to the same thermodynamic properties; therefore, the choice of one or another ensemble for describing a thermodynamic system is dictated only by the convenience of mathematical processing of the distribution functions.

EXAMPLES

Example 10-1. A molecule can be at either of two levels with energies 0 and 300 cm⁻¹. What is the probability that the molecule will be at the upper level at 250 °C?

Solution. We must apply the Boltzmann distribution; to convert the spectroscopic energy unit cm⁻¹ to joules, use the factor hc (h = 6.63·10⁻³⁴ J·s, c = 3·10¹⁰ cm/s): E = 300 · 6.63·10⁻³⁴ · 3·10¹⁰ = 5.97·10⁻²¹ J. The population of the upper of the two levels is then N₁/N = exp(−E/kT) / [1 + exp(−E/kT)] ≈ 0.304 at T = 523 K.

Answer. 0.304.
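The arithmetic of this example can be verified in a few lines (the same rounded constants as in the solution are assumed):

import math

k = 1.38e-23      # J/K (rounded, as in the solution above)
h = 6.63e-34      # J*s
c = 3.0e10        # cm/s
E = 300 * h * c   # 300 cm^-1 converted to joules
T = 250 + 273.15  # K

x = math.exp(-E / (k * T))
print(round(x / (1 + x), 3))   # population of the upper level, ~0.304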

Example 10-2. A molecule can be at a level with energy 0 or at one of three levels with energy E. At what temperature will a) all molecules be at the lower level, b) the number of molecules at the lower level will be equal to the number of molecules at the upper levels, c) the number of molecules at the lower level will be three times less than the number of molecules at the upper levels?

Solution. Let us use the Boltzmann distribution (10.13):

N₀/N = 1 / [1 + 3 exp(−E/kT)].

a) N₀/N = 1; exp(−E/kT) = 0; T = 0. As the temperature decreases, the molecules accumulate at the lower level.

b) N₀/N = 1/2; exp(−E/kT) = 1/3; T = E / [k ln 3].

c) N₀/N = 1/4; exp(−E/kT) = 1; T = ∞. At high temperatures the molecules are evenly distributed over the energy levels, because all the Boltzmann factors are almost the same and equal to 1.

Answer. a) T = 0; b) T = E / [k ln 3]; c) T = ∞.

Example 10-3. When any thermodynamic system is heated, the population of some levels increases and others decreases. Using Boltzmann's distribution law, determine what the energy of a level must be in order for its population to increase with increasing temperature.

Solution. The population is the fraction of molecules residing at a given energy level. By the condition of the problem, the derivative of this quantity with respect to temperature must be positive:

d(N_i/N)/dT = d/dT [ g_i exp(−E_i/kT) / Σ_j g_j exp(−E_j/kT) ] = (N_i/N) · (E_i − <E>) / kT² > 0.

In the second step we used the definition of the average energy (10.14). Thus, the population grows with temperature for all levels lying above the average energy of the system.

Answer. E_i > <E>.
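This conclusion is easy to verify numerically (a sketch with arbitrarily chosen level energies, expressed in units of k, i.e., E_i/k in kelvin): when T is raised slightly, only the populations of levels lying above the current average energy grow.

import math

levels = [0.0, 1.0, 2.0, 5.0]   # level energies E_i/k in kelvin (illustrative values)

def populations(T):
    w = [math.exp(-E / T) for E in levels]
    Z = sum(w)
    return [wi / Z for wi in w]

T = 2.0
p1, p2 = populations(T), populations(T * 1.01)
E_avg = sum(E * p for E, p in zip(levels, p1))
for E, a, b in zip(levels, p1, p2):
    print(f"E = {E:3.1f}:  population grows with T: {b > a},  E above <E>: {E > E_avg}")
print("<E> =", round(E_avg, 3))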

TASKS

10-1. A molecule can be at either of two levels with energies 0 and 100 cm⁻¹. What is the probability that the molecule will be at the lower level at 25 °C?

10-2. A molecule can be at either of two levels with energies 0 and 600 cm⁻¹. At what temperature will there be twice as many molecules at the upper level as at the lower one?

10-3. A molecule can be at a level with energy 0 or at one of three levels with energy E. Find the average energy of molecules: a) at very low temperatures, b) at very high temperatures.

10-4. When any thermodynamic system cools, the population of some levels increases and others decreases. Using Boltzmann's distribution law, determine what the energy of a level must be in order for its population to increase with decreasing temperature.

10-5. Calculate the most probable speed of carbon dioxide molecules at a temperature of 300 K.

10-6. Calculate the average speed of helium atoms under normal conditions.

10-7. Calculate the most probable speed of ozone molecules at a temperature of −30 °C.

10-8. At what temperature is the average speed of oxygen molecules equal to 500 m/s?

10-9. Under some conditions, the average speed of oxygen molecules is 400 m/s. What is the average speed of hydrogen molecules under the same conditions?

10-10. What fraction of molecules of mass m have a speed above the average at temperature T? Does this fraction depend on the mass of the molecules and on the temperature?

10-11. Using the Maxwell distribution, calculate the average kinetic energy of motion of molecules of mass m at temperature T. Is this energy equal to the kinetic energy at the average speed?

Statistical physics and thermodynamics

Statistical and thermodynamic research methods. Molecular physics and thermodynamics are branches of physics that study macroscopic processes in bodies, i.e., processes associated with the huge number of atoms and molecules contained in the bodies. To study these processes, two qualitatively different and mutually complementary methods are used: the statistical (molecular kinetic) method and the thermodynamic method. The first underlies molecular physics, the second thermodynamics.

Molecular physics is a branch of physics that studies the structure and properties of matter on the basis of molecular kinetic concepts, i.e., on the premise that all bodies consist of molecules in continuous chaotic motion.

The idea of the atomic structure of matter was expressed by the ancient Greek philosopher Democritus (460-370 BC). Atomism was revived only in the 17th century and developed in the works of scientists whose views on the structure of matter and on thermal phenomena were close to modern ones. The rigorous development of molecular theory dates back to the middle of the 19th century and is associated with the works of the German physicist R. Clausius (1822-1888), J. Maxwell, and L. Boltzmann.

The processes studied by molecular physics are the result of the combined action of a huge number of molecules. The laws of behavior of a huge number of molecules, being statistical laws, are studied by the statistical method. This method is based on the fact that the properties of a macroscopic system are ultimately determined by the properties of the particles of the system, the features of their motion, and the averaged values of the dynamic characteristics of these particles (speed, energy, etc.). For example, the temperature of a body is determined by the speed of the chaotic motion of its molecules, but since at any moment of time different molecules have different speeds, it can only be expressed through the average value of the speed of the molecules. One cannot speak of the temperature of a single molecule. Thus, the macroscopic characteristics of bodies have physical meaning only in the case of a large number of molecules.

Thermodynamics is a branch of physics that studies the general properties of macroscopic systems in states of thermodynamic equilibrium and the processes of transition between such states. Thermodynamics does not consider the microprocesses underlying these transformations; in this the thermodynamic method differs from the statistical one. Thermodynamics is based on two principles — fundamental laws established by generalizing experimental data.

The scope of application of thermodynamics is much wider than that of molecular kinetic theory, since there is no area of physics or chemistry in which the thermodynamic method cannot be used. However, on the other hand, the thermodynamic method is somewhat limited: thermodynamics says nothing about the microscopic structure of matter or about the mechanism of phenomena, but only establishes connections between the macroscopic properties of matter. Molecular kinetic theory and thermodynamics complement each other, forming a single whole, while differing in their research methods.

Basic postulates of molecular kinetic theory (MKT)

1. All bodies in nature consist of a huge number of tiny particles (atoms and molecules).

2. These particles are in continuous chaotic (disorderly) motion.

3. The movement of particles is related to body temperature, which is why it is called thermal movement.

4. Particles interact with each other.

Evidence for the validity of the MKT: diffusion of substances, Brownian motion, thermal conductivity.

Physical quantities used to describe processes in molecular physics are divided into two classes:

microparameters — quantities that describe the behavior of individual particles (the mass of an atom (molecule), its speed, momentum, the kinetic energy of individual particles);

macroparameters — quantities that cannot be reduced to individual particles but characterize the properties of the substance as a whole. The values of the macroparameters are determined by the combined action of a huge number of particles. Macroparameters are temperature, pressure, concentration, etc.

Temperature is one of the basic concepts that plays an important role not only in thermodynamics but in physics as a whole. Temperature is a physical quantity characterizing the state of thermodynamic equilibrium of a macroscopic system. In accordance with the decision of the XI General Conference on Weights and Measures (1960), only two temperature scales may currently be used: the thermodynamic scale and the International Practical scale, graduated respectively in kelvins (K) and degrees Celsius (°C).

On the thermodynamic scale the freezing point of water is 273.15 K (at the same pressure as in the International Practical Scale); therefore, by definition, the thermodynamic temperature and the International Practical temperature are related by

T = 273.15 + t.

The temperature T = 0 K is called absolute zero. Analysis of various processes shows that 0 K is unattainable, although it can be approached arbitrarily closely. 0 K is the temperature at which, theoretically, all thermal motion of the particles of a substance should cease.

In molecular physics, a relationship is derived between macroparameters and microparameters. For example, the pressure of an ideal gas can be expressed by the formula:

p = (1/3) n m₀ <v²>,

where m₀ is the mass of one molecule, n is the concentration, and <v²> is the mean square of the molecular speed. From the basic MKT equation one can obtain an equation convenient for practical use:

p = nkT.

font-size: 10.0pt">An ideal gas is an idealized gas model in which it is believed that:

1. the intrinsic volume of gas molecules is negligible compared to the volume of the container;

2. there are no interaction forces between molecules (attraction and repulsion at a distance);

3. collisions of molecules with each other and with the walls of the vessel are absolutely elastic.

An ideal gas is a simplified theoretical model of a gas. Nevertheless, the state of many gases under certain conditions can be described by this equation.

To describe the state of real gases, corrections must be introduced into the equation of state. The presence of repulsive forces, which counteract the penetration of other molecules into the volume occupied by a given molecule, means that the actual free volume in which the molecules of a real gas can move will be smaller, V_m − b, where b is the molar volume occupied by the molecules themselves.

The action of the attractive forces of the gas leads to the appearance of an additional pressure on the gas, called the internal pressure. According to van der Waals' calculations, the internal pressure is inversely proportional to the square of the molar volume, p' = a/V_m², where a is the van der Waals constant characterizing the forces of intermolecular attraction and V_m is the molar volume.

In the end we obtain the equation of state of a real gas, the van der Waals equation (for one mole):

(p + a/V_m²)(V_m − b) = RT.

Physical meaning of temperature: temperature is a measure of the intensity of the thermal motion of the particles of a substance. The concept of temperature is not applicable to an individual molecule; it makes sense to speak of temperature only for a sufficiently large number of molecules forming a certain amount of substance.

For an ideal monatomic gas, we can write the equation:

<ε> = (3/2) kT.

The first experimental determination of molecular speeds was carried out by the German physicist O. Stern (1888-1970). His experiments also made it possible to estimate the speed distribution of the molecules.

The "confrontation" between the potential binding energies of the molecules and the energies of their thermal motion (their kinetic energies) leads to the existence of the various aggregate states of matter.

Thermodynamics

By counting the number of molecules in a given system and estimating their average kinetic and potential energies, we can estimate the internal energy of a given system U.

For an ideal monatomic gas, U = (3/2) N k T.

The internal energy of a system can change as a result of various processes, for example, performing work on the system or imparting heat to it. So, by pushing a piston into a cylinder in which there is a gas, we compress this gas, as a result of which its temperature increases, i.e., thereby changing (increasing) the internal energy of the gas. On the other hand, the temperature of a gas and its internal energy can be increased by imparting a certain amount of heat to it - energy transferred to the system by external bodies through heat exchange (the process of exchanging internal energies when bodies come into contact with different temperatures).

Thus, we can speak of two forms of energy transfer from one body to another: work and heat. The energy of mechanical motion can be converted into the energy of thermal motion, and vice versa. In these transformations the law of conservation and transformation of energy is observed; as applied to thermodynamic processes this law is the first law of thermodynamics, established by generalizing centuries of experimental data:

Q = ΔU + A,

i.e., the heat imparted to the system goes to change its internal energy and to perform work on external bodies.

In a cyclic process the internal energy does not change, ΔU = 0, therefore A = Q. Heat engine efficiency: η = A / Q₁ = (Q₁ − Q₂) / Q₁.

From the first law of thermodynamics it follows that the efficiency of a heat engine cannot be more than 100%.

Postulating the existence of various forms of energy and the connection between them, the first principle of TD says nothing about the direction of processes in nature. In full accordance with the first principle, one can mentally construct an engine in which useful work would be performed by reducing the internal energy of the substance. For example, instead of fuel, a heat engine would use water, and by cooling the water and turning it into ice, work would be done. But such spontaneous processes do not occur in nature.

All processes in nature can be divided into reversible and irreversible.

For a long time, one of the main problems in classical natural science remained the problem of explaining the physical nature of the irreversibility of real processes. The essence of the problem is that the motion of a material point, described by Newton’s II law (F = ma), is reversible, while a large number of material points behave irreversibly.

If the number of particles under study is small (for example, two particles, as in frame (a) of the figure), then we cannot determine whether the time axis is directed from left to right or from right to left, since any sequence of frames is equally possible. This is a reversible phenomenon. The situation changes significantly if the number of particles is very large (frame (b)). In this case the direction of time is determined unambiguously: from left to right, since it is impossible to imagine that evenly distributed particles, by themselves, without any external influence, would gather in a corner of the "box". Such behavior, when the state of the system can change only in a certain sequence, is called irreversible. All real processes are irreversible.
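The asymmetry between "few" and "many" particles is easy to quantify (a sketch): the probability that all N non-interacting molecules are simultaneously found in one chosen half of the vessel is 2⁻ᴺ — appreciable for N = 2, but already unimaginably small for N ~ 100, let alone N ~ 10²³.

import math

for N in (2, 10, 100):
    # probability that all N molecules happen to be in the same chosen half
    print(f"N = {N:3d}:  P = 2^-{N} = {2.0**(-N):.3e}")

# For a macroscopic number of molecules even the logarithm is astronomical
N = 1e23
print("N = 1e23:  log10(P) =", -N * math.log10(2))   # about -3e22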

Examples of irreversible processes: diffusion, thermal conductivity, viscous flow. Almost all real processes in nature are irreversible: this is the damping of a pendulum, the evolution of a star, and human life. The irreversibility of processes in nature, as it were, sets the direction on the time axis from the past to the future. The English physicist and astronomer A. Eddington figuratively called this property of time “the arrow of time.”

Why, despite the reversibility of the behavior of one particle, does an ensemble of a large number of such particles behave irreversibly? What is the nature of irreversibility? How to justify the irreversibility of real processes based on Newton's laws of mechanics? These and other similar questions worried the minds of the most outstanding scientists of the 18th–19th centuries.

The second law of thermodynamics sets the directionality of all processes in isolated systems. Although the total amount of energy in an isolated system is conserved, its qualitative composition changes irreversibly.

1. In Kelvin's formulation, the second law is: “There is no process possible whose sole result would be the absorption of heat from a heater and the complete conversion of this heat into work.”

2. In another formulation: “Heat can spontaneously transfer only from a more heated body to a less heated one.”

3. The third formulation: “Entropy in a closed system can only increase.”

The second law of thermodynamics prohibits the existence of a perpetual motion machine of the second kind, i.e., a machine that would do work by transferring heat from a cold body to a hot one. The second law of thermodynamics points to the existence of two different forms of energy: heat, as a measure of the chaotic motion of particles, and work, associated with ordered motion. Work can always be converted into an equivalent amount of heat, but heat cannot be completely converted into work. Thus, a disordered form of energy cannot be transformed into an ordered one without additional actions.

We complete the transformation of mechanical work into heat every time we press the brake pedal in a car. But without any additional actions in a closed cycle of engine operation, it is impossible to transfer all the heat into work. Part of the thermal energy is inevitably spent on heating the engine, plus the moving piston constantly does work against friction forces (this also consumes a supply of mechanical energy).

But the meaning of the second law of thermodynamics turned out to be even deeper.

Another formulation of the second law of thermodynamics is the following statement: the entropy of a closed system is a non-decreasing function, that is, during any real process it either increases or remains unchanged.

The concept of entropy, introduced into thermodynamics by R. Clausius, initially seemed artificial. The outstanding French scientist H. Poincaré wrote about this: "Entropy seems somewhat mysterious in the sense that this quantity is inaccessible to any of our senses, although it has the real property of physical quantities, since, at least in principle, it is completely measurable."

According to Clausius's definition, entropy is a physical quantity whose increment is equal to the amount of heat , received by the system, divided by the absolute temperature:

In accordance with the second law of thermodynamics, in isolated systems, i.e. systems that do not exchange energy with the environment, a disordered state (chaos) cannot spontaneously transform into an ordered one. Thus, in isolated systems entropy can only increase. This regularity is called the principle of increasing entropy. According to this principle, any system strives for a state of thermodynamic equilibrium, which is identified with chaos. Since the increase in entropy characterizes the changes of closed systems over time, entropy acts as a kind of arrow of time.

We called the state with maximum entropy disordered, and the state with low entropy ordered. A statistical system, if left to itself, goes from an ordered to a disordered state with maximum entropy corresponding to given external and internal parameters (pressure, volume, temperature, number of particles, etc.).

Ludwig Boltzmann connected the concept of entropy with the concept of thermodynamic probability (statistical weight) W: S = k ln W, where k is the Boltzmann constant. Thus, any isolated system, left to its own devices, over time passes from a state of order to a state of maximum disorder (chaos).
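A small sketch (added for illustration, not from the lecture) makes Boltzmann's formula concrete: for N particles distributed over the two halves of a box, the statistical weight of the macrostate "n particles in the left half" is the binomial coefficient C(N, n), and S = k ln W is largest for the uniform distribution.

from math import comb, log

k = 1.380649e-23  # J/K, Boltzmann constant

def entropy(N: int, n_left: int) -> float:
    """S = k ln W for the macrostate 'n_left of N particles in the left half'."""
    W = comb(N, n_left)          # statistical weight of the macrostate
    return k * log(W)

N = 100
for n_left in (0, 25, 50):
    print(f"n_left = {n_left:3d}: W = {comb(N, n_left):.3e}, S = {entropy(N, n_left):.3e} J/K")
# The uniform macrostate (n_left = N/2) has the largest weight and hence the largest entropy;
# the macrostate with all particles in one half has W = 1 and S = 0.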

From this principle follows the pessimistic hypothesis of the heat death of the Universe, formulated by R. Clausius and W. Kelvin, according to which:

· the energy of the Universe is always constant;

· the entropy of the Universe is always increasing.

Thus, all processes in the Universe are directed towards achieving a state of thermodynamic equilibrium, corresponding to the state of greatest chaos and disorganization. All types of energy degrade, turning into heat, and the stars will end their existence, releasing energy into the surrounding space. A constant temperature will be established only a few degrees above absolute zero. Lifeless, cooled planets and stars will be scattered in this space. There will be nothing - no energy sources, no life.

This grim prospect was predicted by physics until the 1960s, although the conclusions of thermodynamics contradicted the results of research in biology and social sciences. Thus, Darwin's evolutionary theory testified that living nature develops primarily in the direction of improvement and complexity of new species of plants and animals. History, sociology, economics, and other social and human sciences have also shown that in society, despite individual zigzags of development, progress is generally observed.

Experience and practical activity have shown that the concept of a closed or isolated system is a rather crude abstraction that simplifies reality, since in nature it is difficult to find systems that do not interact with the environment. The contradiction began to be resolved when in thermodynamics, instead of the concept of a closed isolated system, the fundamental concept of an open system was introduced, that is, a system exchanging matter, energy and information with the environment.


STATISTICAL THERMODYNAMICS is a branch of statistical physics dedicated to the substantiation of the laws of thermodynamics on the basis of the laws of interaction and motion of the particles that make up a system. For systems in an equilibrium state it allows one to calculate thermodynamic potentials, to write down equations of state, and to formulate the conditions of phase and chemical equilibria. Nonequilibrium statistical thermodynamics provides a justification of the relations of nonequilibrium thermodynamics (the equations of transfer of energy, momentum and mass, and their boundary conditions) and allows one to calculate the kinetic coefficients entering the transfer equations. Statistical thermodynamics establishes a quantitative connection between the micro- and macro-properties of physical and chemical systems. Its calculation methods are used in all areas of modern theoretical chemistry.

Basic concepts. For the statistical description of macroscopic systems, J. Gibbs (1901) proposed using the concepts of a statistical ensemble and of phase space, which makes it possible to apply the methods of probability theory to the solution of problems. A statistical ensemble is a collection of a very large number of identical many-particle systems (i.e., "copies" of the system under consideration) that are in the same macrostate, which is specified by the state parameters; the microstates of the systems may differ. The basic statistical ensembles are the microcanonical, canonical, grand canonical, and isobaric-isothermal ensembles.

The microcanonical Gibbs ensemble is used when considering isolated systems (not exchanging energy E with the environment) that have a constant volume V and number of identical particles N (E, V and N are the state parameters of the system). The canonical Gibbs ensemble is used to describe systems of constant volume that are in thermal equilibrium with the environment (absolute temperature T) with a constant number of particles N (state parameters V, T, N). The grand canonical Gibbs ensemble is used to describe open systems that are in thermal equilibrium with the environment (temperature T) and in material equilibrium with a reservoir of particles (particles of all kinds are exchanged through the "walls" surrounding the system of volume V); the state parameters of such a system are V, T and μ, the chemical potential of the particles. The isobaric-isothermal Gibbs ensemble is used to describe systems in thermal and mechanical equilibrium with the environment at constant pressure P (state parameters T, P, N).

Phase space in statistical mechanics is a multidimensional space whose axes are all the generalized coordinates q_i and the conjugate momenta p_i (i = 1, 2, ..., M) of a system with M degrees of freedom. For a system consisting of N atoms, q_i and p_i correspond to the Cartesian coordinates and momentum components (α = x, y, z) of a certain atom j, and M = 3N. The sets of coordinates and momenta are denoted by q and p, respectively. The state of the system is represented by a point in a phase space of dimension 2M, and the change of the state of the system in time is represented by the motion of this point along a line called the phase trajectory. For the statistical description of the state of the system, the concepts of a phase volume element (an element of volume of phase space) and of the distribution function f(p, q) are introduced; the latter characterizes the probability density of finding the point representing the state of the system in an element of phase space near the point with coordinates p, q. In quantum mechanics, instead of the phase volume one uses the concept of the discrete energy spectrum of a system of finite volume, since the state of an individual particle is determined not by its momentum and coordinates but by a wave function, which in a stationary dynamical state of the system corresponds to its energy spectrum.

The distribution function of a classical system, f(p, q), characterizes the probability density of realization of a given microstate (p, q) in the volume element dΓ of phase space. The probability of the N particles being in an infinitesimal volume of phase space is equal to:

dw_N = f(p, q) dΓ_N,

where dΓ_N is the element of phase volume of the system in units of h^{3N}, h is Planck's constant; the divisor N! takes into account the fact that a rearrangement of identical particles does not change the state of the system. The distribution function satisfies the normalization condition ∫ f(p, q) dΓ_N = 1, since the system is certainly in some state. For quantum systems the distribution function determines the probability w_{i,N} of finding a system of N particles in the state specified by the set of quantum numbers i, with energy E_{i,N}, subject to the normalization Σ_i w_{i,N} = 1.

The average value at time t (i.e., over the infinitely small time interval from t to t + dt) of any physical quantity A(p, q), which is a function of the coordinates and momenta of all the particles of the system, is calculated with the help of the distribution function according to the rule (valid also for nonequilibrium processes):

⟨A⟩ = ∫ A(p, q) f(p, q, t) dΓ_N.

The integration over the coordinates is carried out over the entire volume of the system, and the integration over the momenta from -∞ to +∞. The thermodynamic equilibrium state of the system should be considered as the limit t → ∞. For equilibrium states, the distribution functions are determined without solving the equations of motion of the particles that make up the system. The form of these functions (the same for classical and quantum systems) was established by J. Gibbs (1901).

In the microcanonical Gibbs ensemble all microstates with a given energy E are equally probable, and the distribution function for classical systems has the form:

f(p, q) = A δ[H(p, q) - E],

where δ is the Dirac delta function and H(p, q) is the Hamiltonian function, equal to the sum of the kinetic and potential energies of all the particles; the constant A is determined from the normalization condition for f(p, q). For quantum systems, with the accuracy of specifying the state equal to ΔE, in accordance with the uncertainty relation between energy and time (and between the momentum and coordinate of a particle), the function w(E_k) = [g(E, N, V)]^{-1} if E ≤ E_k ≤ E + ΔE, and w(E_k) = 0 if E_k < E or E_k > E + ΔE. The quantity g(E, N, V) is the statistical weight, equal to the number of quantum states in the energy layer ΔE. An important relation of statistical thermodynamics is the connection between the entropy of the system and its statistical weight:

S(E, N, V) = k ln g(E, N, V), where k is the Boltzmann constant.

In the canonical Gibbs ensemble, the probability of the system being in a microstate determined by the coordinates and momenta of all N particles, or by the values E_{i,N}, has the form: f(p, q) = exp{[F - H(p, q)]/kT}; w_{i,N} = exp[(F - E_{i,N})/kT], where F is the free energy (Helmholtz energy), depending on the values of V, T, N:

F = -kT ln Z_N,

where Z_N is the statistical sum (in the case of a quantum system) or the statistical integral (in the case of a classical system), determined from the normalization condition for the functions w_{i,N} or f(p, q):


Z_N = Σ_i exp(-E_{i,N}/kT);   Z_N = ∫ exp[-H(p, q)/kT] dp dq / (N! h^{3N})

(the summation runs over all quantum states of the system, and the integration is carried out over the entire phase space).
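A minimal sketch (added here, not part of the source text) of how such a statistical sum is evaluated in practice for a quantum system with a discrete spectrum; the two-level spectrum used is an arbitrary illustrative choice.

import math

k = 1.380649e-23   # J/K, Boltzmann constant

def partition_function(levels_J, T):
    """Z = sum_i exp(-E_i / kT) over a discrete energy spectrum (energies in joules)."""
    return sum(math.exp(-E / (k * T)) for E in levels_J)

# Illustrative two-level system: ground state at 0, excited state at 2e-21 J (assumed values).
levels = [0.0, 2.0e-21]
for T in (100.0, 300.0, 1000.0):
    Z = partition_function(levels, T)
    F = -k * T * math.log(Z)     # Helmholtz free energy, F = -kT ln Z
    print(f"T = {T:6.1f} K: Z = {Z:.4f}, F = {F:.3e} J")

At low temperature Z tends to 1 (only the ground state is populated), and at high temperature it tends to the number of levels, exactly as the sum over states suggests.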

In the grand canonical Gibbs ensemble the distribution function and the statistical sum Ξ, determined from the normalization condition, have the form:

w_{i,N} = exp[(Ω + μN - E_{i,N})/kT],   Ξ = Σ_N Σ_i exp[(μN - E_{i,N})/kT],   Ω = -kT ln Ξ,

where Ω is the thermodynamic potential depending on the variables V, T, μ (the summation is carried out over all positive integers N). In the isobaric-isothermal Gibbs ensemble the distribution function and the statistical sum Q, determined from the normalization condition, have the form:

w_{i,N} = exp[(G - E_{i,N} - PV)/kT],   G = -kT ln Q,

where G is the Gibbs energy of the system (isobaric-isothermal potential, free enthalpy).

To calculate thermodynamic functions one may use any of these distributions: they are equivalent to one another and correspond to different physical conditions. The microcanonical Gibbs distribution is applied mainly in theoretical studies. For the solution of specific problems one considers ensembles in which energy is exchanged with the environment (canonical and isobaric-isothermal) or both energy and particles are exchanged (grand canonical ensemble). The latter is especially convenient for the study of phase and chemical equilibria. The statistical sums Z_N and Q make it possible to determine the Helmholtz energy F, the Gibbs energy G, and the thermodynamic properties of the system obtained by differentiating the statistical sum with respect to the corresponding parameters (per 1 mole of substance): the internal energy U = RT^2 (∂ln Z_N/∂T)_V, the enthalpy H = RT^2 (∂ln Q/∂T)_P, the entropy S = R ln Z_N + RT(∂ln Z_N/∂T)_V = R ln Q + RT(∂ln Q/∂T)_P, the heat capacity at constant volume C_V = 2RT(∂ln Z_N/∂T)_V + RT^2 (∂^2 ln Z_N/∂T^2)_V, the heat capacity at constant pressure C_P = 2RT(∂ln Q/∂T)_P + RT^2 (∂^2 ln Q/∂T^2)_P, etc. Correspondingly, all these quantities acquire a statistical meaning. Thus, the internal energy is identified with the mean energy of the system, which allows the first law of thermodynamics to be regarded as the law of conservation of energy for the motion of the particles making up the system; the free energy is related to the statistical sum of the system, and the entropy to the number of microstates g in a given macrostate, i.e. to the statistical weight of the macrostate, and hence to its probability. The meaning of entropy as a measure of the probability of a state is preserved for arbitrary (nonequilibrium) states. In the state of equilibrium an isolated system has the maximum entropy possible under the given external conditions (E, V, N), i.e. the equilibrium state is the most probable state (with maximal statistical weight). Therefore the transition from a nonequilibrium state to an equilibrium one is a process of transition from less probable states to more probable ones. This is the statistical meaning of the law of entropy increase, according to which the entropy of a closed system can only increase. At the absolute zero of temperature any system is in its ground state, in which w_0 = 1 and S = 0; this statement is the third law of thermodynamics. It is important that for an unambiguous determination of the entropy one must use the quantum description, since in classical statistics the entropy is defined only up to an arbitrary additive term.
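As a short continuation of the two-level sketch above (again an added illustration, not the article's own calculation), the internal energy and heat capacity per particle can be obtained by numerically differentiating ln Z with respect to temperature, mirroring the per-mole formulas above with k in place of R.

import math

k = 1.380649e-23
levels = [0.0, 2.0e-21]          # same illustrative two-level spectrum as above

def lnZ(T):
    return math.log(sum(math.exp(-E / (k * T)) for E in levels))

def U(T, h=1e-3):
    # U = k T^2 d(ln Z)/dT, derivative taken by a central difference
    return k * T * T * (lnZ(T + h) - lnZ(T - h)) / (2 * h)

def Cv(T, h=1e-3):
    # C_V = dU/dT, again by a central difference
    return (U(T + h) - U(T - h)) / (2 * h)

for T in (100.0, 300.0, 1000.0):
    Z = sum(math.exp(-E / (k * T)) for E in levels)
    exact_U = sum(E * math.exp(-E / (k * T)) for E in levels) / Z   # direct Boltzmann average
    print(f"T = {T:6.1f} K: U = {U(T):.3e} J (exact {exact_U:.3e}), C_V = {Cv(T):.3e} J/K")

The numerically differentiated U agrees with the direct Boltzmann average of the level energies, which is exactly the identification of internal energy with mean energy mentioned above.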

Ideal systems. The calculation of the statistical sums of most systems is a difficult task. It is significantly simplified if the contribution of the potential energy to the total energy of the system can be neglected. In this case the complete distribution function f(p, q) for the N particles of an ideal system is expressed through the product of single-particle distribution functions f_1(p, q):

f(p, q) = Π_{j=1}^{N} f_1(p_j, q_j).


The distribution of particles over microstates depends on their kinetic energy and on the quantum properties of the system caused by the identity of the particles. All particles are divided into two classes: fermions and bosons. The type of statistics the particles obey is uniquely related to their spin.

Fermi-Dirac statistics describes the distribution in a system of identical particles with half-integer spin 1/2, 3/2, ... in units of ħ = h/2π. A particle (or quasiparticle) obeying this statistics is called a fermion. Fermions include electrons, nucleons, nuclei with an odd number of nucleons, atoms with an odd difference between the numbers of nucleons and electrons, and quasiparticles (for example, holes in a solid), etc. This statistics was proposed by E. Fermi in 1926; in the same year P. Dirac discovered its quantum-mechanical meaning. The wave function of a system of fermions is antisymmetric, i.e. it changes sign upon interchange of the coordinates and spins of any pair of identical particles. Each quantum state can contain no more than one particle (the Pauli principle). The average number n_i of fermions in a state with energy E_i is determined by the Fermi-Dirac distribution function:

n_i = {1 + exp[(E_i - μ)/kT]}^{-1},

where i is a set of quantum numbers characterizing the state of the particle.

Bose-Einstein statistics describes systems of identical particles with zero or integer spin (0, ħ, 2ħ, ...). A particle or quasiparticle obeying this statistics is called a boson. This statistics was proposed by S. Bose (1924) for photons and developed by A. Einstein (1924) for gas molecules considered as composite particles made of an even number of fermions, for example nuclei with an even total number of protons and neutrons (the deuteron, the 4He nucleus, etc.). Bosons also include phonons in solids and in liquid 4He, excitons in semiconductors and dielectrics. The wave function of the system is symmetric with respect to the permutation of any pair of identical particles. The occupation numbers are not limited in any way, i.e. any number of particles can be in one state. The average number n_i of bosons in a state with energy E_i is described by the Bose-Einstein distribution function:

n_i = {exp[(E_i - μ)/kT] - 1}^{-1}.

Boltzmann statistics is a special case of quantum statistics, valid when quantum effects can be neglected (high temperatures). It considers the distribution of particles over momenta and coordinates in the phase space of one particle, and not in the phase space of all particles, as in the Gibbs distributions. As the minimal unit of volume of phase space (which has six dimensions: three coordinates and three momentum projections of a particle), in accordance with the quantum-mechanical uncertainty relation, one cannot choose a volume smaller than h^3. The average number n_i of particles in a state with energy E_i is described by the Boltzmann distribution function:

n_i = exp[(μ - E_i)/kT].
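The three occupation-number formulas differ only in the ±1 term in the denominator. The small sketch below (added for illustration; the values of (E - μ)/kT are chosen arbitrarily) shows how they converge when exp[(E - μ)/kT] is large, i.e. in the classical limit.

import math

def n_fermi(x):      # x = (E - mu)/kT
    return 1.0 / (math.exp(x) + 1.0)

def n_bose(x):       # requires x > 0, otherwise the occupation diverges
    return 1.0 / (math.exp(x) - 1.0)

def n_boltzmann(x):
    return math.exp(-x)

for x in (0.1, 1.0, 3.0, 10.0):
    print(f"(E-mu)/kT = {x:5.1f}: FD = {n_fermi(x):.4e}, BE = {n_bose(x):.4e}, MB = {n_boltzmann(x):.4e}")
# For large x all three occupation numbers coincide: the quantum statistics reduce to Boltzmann's.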

For particles that move according to the laws of classical mechanics in an external potential field U(r), the statistically equilibrium distribution function f_1(p, r) over the momenta p and coordinates r of the particles has the form: f_1(p, r) = A exp{-[p^2/2m + U(r)]/kT}. Here p^2/2m is the kinetic energy of a molecule of mass m, and the constant A is determined from the normalization condition. This expression is often called the Maxwell-Boltzmann distribution, while the Boltzmann distribution is the name given to the function

n(r) = n_0 exp[-U(r)/kT],

where n(r) = ∫ f_1(p, r) dp is the number density of particles at the point r (n_0 is the number density of particles in the absence of the external field). The Boltzmann distribution describes the distribution of gas molecules in a gravitational field (the barometric formula), of molecules and highly dispersed particles in a field of centrifugal forces, of electrons in non-degenerate semiconductors, and is also used to calculate the distribution of ions in dilute electrolyte solutions (in the bulk and at the boundary with an electrode), etc. At U(r) = 0 the Maxwell distribution follows from the Maxwell-Boltzmann distribution; it describes the distribution of particle velocities in a state of statistical equilibrium (J. Maxwell, 1859). According to this distribution, the probable number of molecules per unit volume whose velocity components lie in the intervals from u_i to u_i + du_i (i = x, y, z) is determined by the function:

n(u_x, u_y, u_z) du_x du_y du_z = n (m/(2πkT))^{3/2} exp[-m(u_x^2 + u_y^2 + u_z^2)/(2kT)] du_x du_y du_z.

The Maxwell distribution does not depend on the interaction between particles and is valid not only for gases but also for liquids and solids (if a classical description is possible for them), as well as for Brownian particles suspended in liquids and gases. It is used to count the number of collisions of molecules with one another in chemical reactions and with a surface.
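A sketch added for illustration (the nitrogen-like molecular mass and room temperature are assumed values): integrating the speed distribution built from the formula above reproduces the textbook mean speed sqrt(8kT/(πm)).

import math

k = 1.380649e-23          # J/K
T = 300.0                 # K, illustrative temperature
m = 4.65e-26              # kg, roughly the mass of an N2 molecule (assumed value)

def maxwell_speed_pdf(v):
    """f(v) = 4*pi*v^2 * (m/(2*pi*k*T))^(3/2) * exp(-m*v^2/(2*k*T))"""
    a = (m / (2.0 * math.pi * k * T)) ** 1.5
    return 4.0 * math.pi * v * v * a * math.exp(-m * v * v / (2.0 * k * T))

# Mean speed by simple numerical integration on a fine grid.
dv, vmax = 0.1, 5000.0
vs = [i * dv for i in range(int(vmax / dv) + 1)]
norm = sum(maxwell_speed_pdf(v) * dv for v in vs)
mean_v = sum(v * maxwell_speed_pdf(v) * dv for v in vs)

print(f"normalization ~ {norm:.4f}")
print(f"<v> numeric   = {mean_v:.1f} m/s")
print(f"<v> analytic  = {math.sqrt(8 * k * T / (math.pi * m)):.1f} m/s")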

Sum over states. The statistical sum in the canonical Gibbs ensemble for an ideal gas is expressed through the sum over the states of one molecule, Q_1:

Z_N = Q_1^N / N!,   Q_1 = Σ_i g_i exp(-E_i/kT),

where E_i is the energy of the i-th quantum level of the molecule (i = 0 corresponds to the zero level) and g_i is the statistical weight of the i-th level. In the general case the individual kinds of motion of the electrons, of the atoms and of groups of atoms within the molecule, as well as the motion of the molecule as a whole, are interconnected, but approximately they can be considered independent. Then the sum over the states of the molecule can be represented as a product of the individual factors associated with the translational motion (Q_trans) and with the intramolecular motions (Q_int):

Q_1 = Q_trans · Q_int,   Q_trans = λ (V/N),

where λ = (2πmkT/h^2)^{3/2}. For atoms, Q_int is the sum over the electronic and nuclear states; for molecules it is the sum over the electronic, nuclear, vibrational and rotational states. In the temperature range from 10 to 10^3 K an approximate description is usually used in which each of these kinds of motion is considered independently: Q_int = Q_el · Q_nucl · Q_rot · Q_vib / σ, where σ is the symmetry number, equal to the number of identical configurations that arise upon rotation of a molecule consisting of identical atoms or groups of atoms.

The sum over the states of electronic motion, Q_el, is equal to the statistical weight P_0 of the ground electronic state. In most cases the ground level is non-degenerate and is separated from the nearest excited level by a considerable energy, so that Q_el = P_0 = 1. However, in some cases, e.g. for the O_2 molecule, P_0 = 3: in the ground state the angular momentum of the molecule is nonzero and the level is degenerate, while the excited states may lie quite low. The sum over the nuclear states Q_nucl, due to the degeneracy of the nuclear spins, is equal to:

Q_nucl = Π_i (2s_i + 1), where s_i is the spin of nucleus i and the product is taken over all nuclei of the molecule. The sum over the states of vibrational motion is Q_vib = Π_i [1 - exp(-hν_i/kT)]^{-1}, where ν_i are the frequencies of the small vibrations and the product runs over the normal modes of the molecule (n being the number of atoms in it). The sum over the states of rotational motion of a polyatomic molecule with large moments of inertia can be treated classically [the high-temperature approximation, T/θ_i >> 1, where θ_i = h^2/(8π^2 k I_i) (i = x, y, z) and I_i is the principal moment of inertia for rotation about the i axis]: Q_rot = (πT^3/(θ_x θ_y θ_z))^{1/2}. For linear molecules with moment of inertia I the statistical sum is Q_rot = T/θ, where θ = h^2/(8π^2 k I).
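To make these molecular sums over states concrete, here is a small sketch (added for illustration, not from the article) evaluating Q_rot and Q_vib for a CO-like diatomic; the rotational temperature θ and the vibrational frequency ν are rounded, assumed values used purely as an example.

import math

k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J*s

# Illustrative CO-like parameters (assumed round numbers, not measured data):
theta_rot = 2.8       # K, rotational temperature theta = h^2/(8 pi^2 k I)
nu = 6.5e13           # Hz, vibrational frequency

def Q_rot_classical(T, theta, sigma=1):
    """High-temperature (classical) rotational sum for a linear molecule: T/(sigma*theta)."""
    return T / (sigma * theta)

def Q_rot_quantum(T, theta, lmax=200):
    """Direct sum over rotational levels: sum_l (2l+1) exp[-l(l+1) theta / T]."""
    return sum((2 * l + 1) * math.exp(-l * (l + 1) * theta / T) for l in range(lmax + 1))

def Q_vib(T, nu):
    """Harmonic-oscillator vibrational sum (energy counted from the zero level)."""
    return 1.0 / (1.0 - math.exp(-h * nu / (k * T)))

for T in (50.0, 300.0, 1000.0):
    print(f"T = {T:6.1f} K: Q_rot(classical) = {Q_rot_classical(T, theta_rot):8.2f}, "
          f"Q_rot(quantum sum) = {Q_rot_quantum(T, theta_rot):8.2f}, Q_vib = {Q_vib(T, nu):.4f}")

The classical value T/θ and the direct quantum sum agree closely already at 50 K, illustrating why the high-temperature approximation is adequate in the stated range.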

When calculating at temperatures above 10^3 K it is necessary to take into account the anharmonicity of the vibrations, the effects of interaction between the vibrational and rotational degrees of freedom, as well as the electronic states, the population of excited levels, etc. At low temperatures (below 10 K) quantum effects must be taken into account (especially for diatomic molecules). Thus, the rotational motion of a heteronuclear molecule AB is described by the formula:

Q_rot = Σ_{l=0}^{∞} (2l + 1) exp[-l(l + 1) θ/T],

where l is the rotational quantum number, while for homonuclear molecules A_2 (especially for H_2, D_2, T_2) the nuclear and rotational degrees of freedom interact with one another: Q_nucl,rot ≠ Q_nucl · Q_rot.

Knowing the sum over states allows one to calculate the thermodynamic properties of gases, including chemical equilibrium constants, the equilibrium degree of ionization, etc. Important in the theory of absolute reaction rates is the possibility of calculating the equilibrium constant of formation of the activated complex (transition state), which is represented as a modified molecule in which one vibrational degree of freedom is replaced by a degree of freedom of translational motion.

Non-ideal systems. In real gases the molecules interact with one another. In this case the sum over the states of the ensemble does not reduce to a product of the sums over the states of the individual molecules. If we assume that the intermolecular interactions do not affect the internal states of the molecules, the statistical sum of the system in the classical approximation for a gas consisting of N identical particles has the form:

Z_N = Z_N(ideal) · Q_N / V^N,   Q_N = ∫ ... ∫ exp[-U(r_1, ..., r_N)/kT] dr_1 ... dr_N,

where Q_N is the configuration integral, which takes into account the interaction of the molecules. Most often the potential energy U is considered as a sum of pair potentials: U = Σ_{i<j} U(r_ij), where U(r_ij) is the potential of central forces depending on the distance r_ij between molecules i and j. Many-particle contributions to the potential energy, orientation effects, etc. are also taken into account. The need to calculate the configuration integral arises when considering any condensed phases and phase boundaries. An exact solution of the many-body problem is practically impossible, therefore, to calculate the statistical sums and all the thermodynamic properties obtained from the statistical sums by differentiation with respect to the corresponding parameters, various approximate methods are used.

According to the so-called method of group (cluster) expansions, the state of the system is considered as a set of complexes (groups) consisting of different numbers of molecules, and the configuration integral decomposes into a set of group integrals. This approach allows one to represent any thermodynamic function as a series in powers of the density. The most important relation of this kind is the virial equation of state.
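As an illustration of the group-expansion idea (a sketch added here, not the article's own calculation), the first correction to the ideal-gas equation of state, the second virial coefficient B_2(T) = -2π ∫ (exp[-u(r)/kT] - 1) r^2 dr, is evaluated below for a hard-sphere potential, for which the analytic answer 2πσ^3/3 is known.

import math

def B2(u, T, k=1.380649e-23, rmax=5e-9, n=200000):
    """Second virial coefficient B2(T) = -2*pi * integral_0^inf (exp(-u(r)/kT) - 1) r^2 dr."""
    dr = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        total += (math.exp(-u(r) / (k * T)) - 1.0) * r * r * dr
    return -2.0 * math.pi * total

sigma = 3.0e-10  # m, illustrative hard-sphere diameter (assumed value)

def hard_sphere(r):
    # Infinite repulsion inside the core, no attraction outside.
    return float("inf") if r < sigma else 0.0

T = 300.0
print(f"B2 numeric  = {B2(hard_sphere, T):.3e} m^3")
print(f"B2 analytic = {2.0 * math.pi * sigma**3 / 3.0:.3e} m^3")

Swapping in a potential with an attractive well (for example a Lennard-Jones form) would make B_2 temperature dependent and negative at low temperatures, which is the virial-series signature of intermolecular attraction.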

For the theoretical description of the properties of dense gases, liquids and solutions of non-electrolytes, and of interfaces in these systems, the method of n-particle distribution functions is more convenient than the direct calculation of the statistical sum. In this method, instead of counting the statistical weight of each state with fixed energy, one uses the relations between the distribution functions f_n, which characterize the probability of finding n particles simultaneously at points of space with coordinates r_1, ..., r_n; for n = N, f_N = ∫ f(p, r) dp (here and below q_i = r_i). The single-particle function f_1(r_1) (n = 1) characterizes the density distribution of the substance. For a crystal it is a periodic function with maxima at the sites of the crystal structure; for gases and liquids in the absence of an external field it is a constant equal to the macroscopic density of the substance. The two-particle distribution function (n = 2) characterizes the probability of finding two particles at points 1 and 2; it determines the so-called correlation function g(|r_1 - r_2|) = f_2(r_1, r_2)/ρ^2, which characterizes the mutual correlation in the distribution of the particles. The corresponding experimental information is provided by X-ray and neutron scattering.

The distribution functions of dimensions n and n + 1 are connected by an infinite chain of coupled integro-differential Bogolyubov-Born-Green-Kirkwood-Yvon equations, whose solution is extremely difficult; therefore the effects of correlation between particles are taken into account by introducing various closure approximations, which determine how the function f_{n+1} is expressed through functions of lower dimension. Correspondingly, several approximate methods have been developed for calculating the functions f_n and, through them, all the thermodynamic characteristics of the system under consideration. The Percus-Yevick and hypernetted-chain approximations are the most widely used.

Lattice models of the condensed state have found wide application in the thermodynamic treatment of practically all physico-chemical problems. The entire volume of the system is divided into local regions with a characteristic size of the order of the molecular size v_0. In general, in different models the size of a local region may be either greater or less than v_0; in most cases they coincide. The transition to a discrete distribution of molecules in space substantially simplifies the enumeration of the different configurations. Lattice models take into account the interaction of the molecules with one another; the interaction is described by energy parameters. In a number of cases lattice models admit exact solutions, which makes it possible to assess the nature of the approximations used. With their help it is possible to consider many-particle and specific interactions, orientation effects, etc. Lattice models are fundamental in the study of, and in applied calculations for, solutions and highly inhomogeneous systems.

Numerical methods for determining thermodynamic properties are becoming increasingly important as computing technology develops. In the Monte Carlo method, multidimensional integrals are calculated directly, which makes it possible to obtain directly the statistical average of an observed quantity A(r_1, ..., r_N) over any of the statistical ensembles (for example, A may be the energy of the system). Thus, in the canonical ensemble the thermodynamic average has the form:

⟨A⟩ = ∫ A(r_1, ..., r_N) exp[-U(r_1, ..., r_N)/kT] dr_1 ... dr_N / ∫ exp[-U(r_1, ..., r_N)/kT] dr_1 ... dr_N.

This method is applicable to practically all systems; the average values obtained with it for limited volumes (N = 10^2-10^5) serve as a good approximation for the description of macroscopic objects and can be regarded as practically exact results.
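A compact Metropolis Monte Carlo sketch (an added illustration, not the article's procedure): canonical averages for a small one-dimensional Ising chain, with the sampled mean energy compared against the exact long-chain result -J·tanh(J/kT) per spin.

import math, random

def ising_1d_energy(spins, J):
    """Energy of a periodic 1D Ising chain: E = -J * sum_i s_i s_{i+1}."""
    N = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

def metropolis_mean_energy(N=100, J=1.0, kT=2.0, sweeps=4000, seed=1):
    random.seed(seed)
    spins = [random.choice((-1, 1)) for _ in range(N)]
    samples = []
    for sweep in range(sweeps):
        for _ in range(N):
            i = random.randrange(N)
            # Energy change if spin i is flipped (only its two bonds are affected).
            dE = 2.0 * J * spins[i] * (spins[(i - 1) % N] + spins[(i + 1) % N])
            if dE <= 0 or random.random() < math.exp(-dE / kT):
                spins[i] = -spins[i]
        if sweep >= sweeps // 2:                 # discard the first half as equilibration
            samples.append(ising_1d_energy(spins, J) / N)
    return sum(samples) / len(samples)

kT, J = 2.0, 1.0
print(f"<E>/N Monte Carlo: {metropolis_mean_energy(kT=kT, J=J):+.4f}")
print(f"<E>/N exact (long chain): {-J * math.tanh(J / kT):+.4f}")

The acceptance rule exp(-dE/kT) is exactly the canonical weight from the formula above, so the chain of configurations samples the Gibbs distribution without ever computing the statistical sum itself.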

In the molecular dynamics method, the evolution of the state of the system is followed by numerically integrating Newton's equations of motion for each particle (N = 10^2-10^5) with given potentials of interparticle interaction. The equilibrium characteristics of the system are obtained by averaging over the phase trajectories (over velocities and coordinates) for long times, after the Maxwellian velocity distribution of the particles has been established (the so-called thermalization period).
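A minimal sketch of the integration step used in molecular dynamics (an added illustration with a single particle in a harmonic potential and arbitrary reduced units, not the article's method): the velocity Verlet scheme advances positions and velocities and approximately conserves the total energy.

def velocity_verlet(x, v, force, m, dt, steps):
    """Integrate Newton's equation m*a = force(x) with the velocity Verlet scheme."""
    a = force(x) / m
    trajectory = []
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x) / m
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        trajectory.append((x, v))
    return trajectory

# Harmonic oscillator in reduced units: force = -kappa * x.
kappa, m, dt = 1.0, 1.0, 0.01
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -kappa * x, m=m, dt=dt, steps=2000)

for step in (0, 999, 1999):
    x, v = traj[step]
    E = 0.5 * m * v * v + 0.5 * kappa * x * x   # total energy, should stay close to 0.5
    print(f"step {step:4d}: x = {x:+.4f}, v = {v:+.4f}, E = {E:.6f}")

In a real simulation the same step is applied to every particle with forces from the interparticle potential, and the near-constancy of the total energy is the standard check of the integration.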

The limitations of the numerical methods are determined mainly by the capabilities of computers. Special computational techniques make it possible to circumvent the difficulties connected with the fact that it is not a real system that is considered but a small volume; this is especially important when long-range interaction potentials, phase transitions, etc. are taken into account.

Physical kinetics is a branch of statistical physics that provides a justification of the relations describing the transfer of energy, momentum and mass, as well as the influence of external fields on these processes. The kinetic coefficients are macroscopic characteristics of a continuous medium that determine the dependence of the fluxes of physical quantities (heat, momentum, mass of the components, etc.) on the gradients of temperature, concentration, hydrodynamic velocity, etc. that cause these fluxes. One must distinguish the Onsager coefficients, which enter the equations connecting the fluxes with the thermodynamic forces (the thermodynamic equations of motion), from the transfer coefficients (diffusion, thermal conductivity, viscosity, etc.), which enter the transfer equations. The former may be expressed through the latter using the relations between the macroscopic characteristics of the system, so in what follows only the transfer coefficients are considered.

To calculate the macroscopic transfer coefficients it is necessary to average over the probabilities of realization of the elementary transfer events using a nonequilibrium distribution function. The main difficulty is that the analytic form of the distribution function f(p, q, t) (t is time) is unknown (in contrast to the equilibrium state of the system, which is described by the Gibbs distribution functions obtained in the limit t → ∞). One considers n-particle distribution functions f_n(r, q, t), which are obtained from the functions f(p, q, t) by averaging over the coordinates and momenta of the remaining (N - n) particles:

f_n(p_1, ..., p_n, r_1, ..., r_n, t) = ∫ f(p, q, t) dp_{n+1} ... dp_N dr_{n+1} ... dr_N.

For these functions a system of equations can be written that allows one to describe arbitrary nonequilibrium states. Solving this system of equations is very difficult. As a rule, in the kinetic theory of gases and of gas-like quasiparticles (fermions and bosons), only the equation for the single-particle distribution function f_1 is used. Under the assumption that there is no correlation between the states of any particles (the hypothesis of molecular chaos), the so-called Boltzmann kinetic equation was derived (L. Boltzmann, 1872). This equation takes into account the change of the particle distribution under the action of external forces F(r, t) and of pair collisions between particles:

∂f_1/∂t + u·∂f_1/∂r + (F/m)·∂f_1/∂u = ∫∫ [f'_1(u', r, t) f'_1(u'_1, r, t) - f_1(u, r, t) f_1(u_1, r, t)] |u - u_1| σ(|u - u_1|, θ) dΩ du_1,

where f_1(u, r, t) and f_1(u_1, r, t) are the distribution functions of the particles before the collision, f'_1(u', r, t) and f'_1(u'_1, r, t) are the distribution functions after the collision; u and u_1 are the velocities of the particles before the collision, u' and u'_1 are the velocities of the same particles after the collision; |u - u_1| is the modulus of the relative velocity of the colliding particles, θ is the angle between the relative velocity u - u_1 of the colliding particles and the line connecting their centers, and σ(u, θ) dΩ is the differential effective cross-section for scattering of the particles into the solid angle dΩ in the laboratory coordinate system, which depends on the law of particle interaction. For a model of elastic rigid spheres of radius R one takes σ = 4R^2 cos θ. Within classical mechanics the differential cross-section is expressed in terms of the collision parameters b and ε (the impact parameter and the azimuthal angle of the line of centers): σ dΩ = b db dε, and the molecules are treated as centers of force with a potential depending on the distance. For quantum gases the expression for the differential effective cross-section is obtained on the basis of quantum mechanics, taking into account the influence of exchange effects on the probability of a collision.

If the system is in a state of statistical equilibrium, the collision integral St f is equal to zero and the solution of the Boltzmann kinetic equation is the Maxwell distribution. For nonequilibrium states, the solutions of the Boltzmann kinetic equation are usually sought as an expansion of the function f_1(u, r, t) in small parameters relative to the Maxwell distribution function. In the simplest (relaxation-time) approximation the collision integral is approximated as St f_1 ≈ -(f_1 - f_1^0)/τ, where f_1^0 is the equilibrium distribution function and τ is the relaxation time. For liquids the usual single-particle distribution function f_1 does not reveal the specifics of the phenomena, and consideration of the two-particle distribution function f_2 is required. However, for sufficiently slow processes, and in cases where the scales of the spatial inhomogeneities are significantly larger than the scale of the correlations between particles, one can use a locally equilibrium single-particle distribution function with the temperature, chemical potentials and hydrodynamic velocity that correspond to the small volume element under consideration. For it one can find a correction proportional to the gradients of temperature, hydrodynamic velocity and chemical potentials of the components, calculate the fluxes of momentum, energy and matter, and also justify the Navier-Stokes and related transfer equations. In this case the transfer coefficients turn out to be proportional to the space-time correlation functions of the fluxes of energy, momentum and matter of each component.

To describe matter in condensed phases and at interfaces, the lattice model of the condensed state is widely used. The evolution of the state of the system is described by the fundamental kinetic (master) equation for the distribution function P(q, t):

dP(q, t)/dt = Σ_{q'} [W(q' → q) P(q', t) - W(q → q') P(q, t)],

where P(q, t) = ∫ f(p, q, t) dp is the distribution function averaged over the momenta (velocities) of all N particles, describing the distribution of particles over the sites of the lattice structure (their number is N_y, N < N_y), and q is the number of a site or its coordinate. In the "lattice gas" model a particle may be present at a site (the site is occupied) or absent (the site is free); W(q → q') is the probability per unit time of the system passing from the state q, described by a complete set of particle coordinates, to another state q'. The first sum describes the contribution of all processes in which a transition into the given state q occurs, the second sum describes the departure from this state. In the case of an equilibrium distribution of particles (t → ∞), P(q) = exp[-H(q)/kT]/Q, where Q is the statistical sum and H(q) is the energy of the system in state q. The transition probabilities satisfy the principle of detailed balance: W(q' → q) exp[-H(q')/kT] = W(q → q') exp[-H(q)/kT]. On the basis of the equations for the functions P(q, t), kinetic equations are constructed for the n-particle distribution functions, which are obtained by averaging over the positions of all the other (N - n) particles. For small n these equations are used to describe transfer at the boundary with a gas, growth, phase transformations, etc. For interphase transfer, owing to the difference in the characteristic times of the elementary particle migration processes, an important role is played by the type of boundary conditions at the phase boundaries.

For small systems (number of sites N_y = 10^2-10^5) the system of equations for the function P(q, t) can be solved numerically by the Monte Carlo method. The stage of relaxation of the system to the equilibrium state makes it possible to consider various transient processes in studying the kinetics of phase transformations, growth, the kinetics of surface reactions, etc., and to determine their dynamic characteristics, including the transfer coefficients.

To calculate the transfer coefficients in the gaseous, liquid and solid phases, as well as at phase boundaries, various variants of the molecular dynamics method are actively used; it allows one to follow in detail the evolution of the system from times of ~10^-15 s to ~10^-10 s (at times of the order of 10^-10-10^-9 s and longer the so-called Langevin equation is used, i.e. Newton's equation with a stochastic term added on the right-hand side).

For systems with chemical reactions, the character of the particle distribution is strongly influenced by the relation between the characteristic times of transfer of the particles and of their chemical transformations. If the rate of chemical transformation is small, the particle distribution does not differ much from the case when there is no reaction. If the reaction rate is high, its influence on the character of the particle distribution is large, and it is then impossible to use mean particle densities (i.e. distribution functions with n = 1); the distribution must be described in more detail by means of the distribution functions f_n with n > 1. Important in describing the reactive fluxes of particles to a surface and the reaction rates are the boundary conditions.
