Statistical physics and thermodynamics. Dynamic and statistical laws

Statistical thermodynamics is the branch of statistical physics devoted to substantiating the laws of thermodynamics on the basis of the laws of interaction and motion of the particles that make up the system. For systems in an equilibrium state, statistical thermodynamics allows one to calculate thermodynamic potentials, write down equations of state, and determine the conditions of phase and chemical equilibria. Nonequilibrium statistical thermodynamics provides a justification for the relations of nonequilibrium thermodynamics (the equations of transfer of energy, momentum and mass, and their boundary conditions) and allows one to calculate the kinetic coefficients that enter the transfer equations. Statistical thermodynamics establishes a quantitative connection between the micro- and macro-properties of physical and chemical systems. Its calculation methods are used in all areas of modern theoretical chemistry.

Basic concepts. For the statistical description of macroscopic systems, J. Gibbs (1901) proposed using the concepts of the statistical ensemble and phase space, which make it possible to apply the methods of probability theory to the solution of problems. A statistical ensemble is a collection of a very large number of identical many-particle systems (i.e., "copies" of the system under consideration) located in the same macrostate, which is determined by the state parameters; the microstates of the systems may differ. The basic statistical ensembles are the microcanonical, canonical, grand canonical, and isobaric-isothermal ensembles.

The microcanonical Gibbs ensemble is used when considering isolated systems (not exchanging energy E with the environment) having a constant volume V and number of identical particles N (E, V and N are the state parameters of the system). The canonical Gibbs ensemble is used to describe systems of constant volume that are in thermal equilibrium with a thermostat (absolute temperature T) with a constant number of particles N (state parameters V, T, N). The grand canonical Gibbs ensemble is used to describe open systems that are in thermal equilibrium with a thermostat (temperature T) and in material equilibrium with a reservoir of particles (particles are exchanged through the "walls" surrounding the system of volume V). The state parameters of such a system are V, T and μ, the chemical potential of the particles. The isobaric-isothermal Gibbs ensemble is used to describe systems in thermal and mechanical equilibrium with the environment at constant pressure P (state parameters T, P, N).

Phase space in statistical mechanics is a multidimensional space whose axes are all the generalized coordinates q_i and conjugate momenta p_i (i = 1, 2, ..., M) of a system with M degrees of freedom. For a system consisting of N atoms, q_i and p_i correspond to the Cartesian coordinates and momentum components (α = x, y, z) of a certain atom j, and M = 3N. The sets of coordinates and momenta are denoted by q and p, respectively. The state of the system is represented by a point in a phase space of dimension 2M, and the change of the state of the system in time by the motion of that point along a line called the phase trajectory. For the statistical description of the state of the system, the concepts of phase volume (an element of the volume of phase space) and of the distribution function f(p, q) are introduced; the latter characterizes the probability density of finding the point representing the state of the system in an element of phase space near the point with coordinates p, q. In quantum mechanics, instead of phase volume one uses the concept of the discrete energy spectrum of a system of finite volume, because the state of an individual particle is determined not by momentum and coordinates but by a wave function, which in a stationary dynamic state of the system corresponds to a discrete spectrum of energy levels.

The distribution function of a classical system f(p, q) characterizes the probability density of the realization of a given microstate (p, q) in the volume element dΓ_N of phase space. The probability of the N particles being in an infinitesimal volume of phase space is equal to:

dw_N = f(p, q) dΓ_N,

where dΓ_N is the element of the phase volume of the system in units of h^3N, h being Planck's constant; the divisor N! takes into account the fact that a rearrangement of identical particles does not change the state of the system. The distribution function satisfies the normalization condition ∫ f(p, q) dΓ_N = 1, because the system is certainly in some state. For quantum systems, the distribution function determines the probability w_{i,N} of finding the system of N particles in the state given by the set of quantum numbers i, with energy E_{i,N}, subject to the normalization Σ_i w_{i,N} = 1.

The average value at time t (i.e., over an infinitely small time interval from t to t + dt) of any physical quantity A(p, q), which is a function of the coordinates and momenta of all the particles of the system, is calculated with the distribution function according to the rule (valid also for nonequilibrium processes):

⟨A⟩ = ∫ A(p, q) f(p, q, t) dΓ_N.

Integration over the coordinates is carried out over the entire volume of the system, and integration over the momenta from −∞ to +∞. The state of thermodynamic equilibrium should be considered as the limit t → ∞. For equilibrium states, the distribution functions are determined without solving the equations of motion of the particles composing the system. The form of these functions (the same for classical and quantum systems) was established by J. Gibbs (1901).

In the microcanonical Gibbs ensemble, all microstates with a given energy E are equally probable, and the distribution function for classical systems has the form:

f(p, q) = A·δ[E − H(p, q)],

where δ is the Dirac delta function, H(p, q) is the Hamiltonian function, equal to the sum of the kinetic and potential energies of all the particles; the constant A is determined from the normalization condition for f(p, q). For quantum systems, with an accuracy of specifying the energy equal to ΔE, consistent with the uncertainty relation between energy and time (and between the momentum and coordinate of a particle), the function w(E_k) = [g(E, N, V)]^(−1) if E ≤ E_k ≤ E + ΔE, and w(E_k) = 0 if E_k < E or E_k > E + ΔE. The quantity g(E, N, V) is the so-called statistical weight, equal to the number of quantum states in the energy layer ΔE. An important relationship of statistical thermodynamics is the connection between the entropy of the system and its statistical weight:

S(E, N, V) = k ln g(E, N, V), where k is the Boltzmann constant.
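The relation S = k ln g can be illustrated on the simplest countable model. The sketch below (a hypothetical system of N two-level spins, with the number of excited spins fixing the macrostate, an assumption of this illustration and not part of the article) computes the statistical weight g as a binomial coefficient and shows that the most probable macrostate carries the maximum entropy:

```python
from math import comb, log

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(N, n_up):
    """Boltzmann entropy S = k ln g for a macrostate of N two-level
    spins with n_up excited spins; g = C(N, n_up) microstates."""
    g = comb(N, n_up)
    return k_B * log(g)

S_mid = entropy(100, 50)   # the most probable macrostate (maximum g)
S_edge = entropy(100, 10)  # a much less probable macrostate
```

A macrostate realized by a single microstate (all spins down) has g = 1 and hence S = 0, in line with the discussion of the third law further on.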

In the canonical Gibbs ensemble, the probability of the system being in a microstate determined by the coordinates and momenta of all N particles, or by the values E_{i,N}, has the form: f(p, q) = exp{[F − H(p, q)]/kT}; w_{i,N} = exp[(F − E_{i,N})/kT], where F is the free energy (Helmholtz energy), depending on the values of V, T, N:

F = −kT ln Z_N,

where Z_N is the statistical sum (in the case of a quantum system) or the statistical integral (in the case of a classical system), determined from the normalization condition for w_{i,N} or f(p, q):


Z_N = Σ_i exp(−E_{i,N}/kT);

Z_N = ∫ exp[−H(p, q)/kT] dp dq / (N! h^3N)

(the sum runs over all quantum states of the system, and the integration is carried out over the entire phase space).

In the grand canonical Gibbs ensemble, the distribution function f(p, q, N) and the grand statistical sum Ξ, determined from the normalization condition, have the form:

f(p, q, N) = exp{[Ω + μN − H(p, q)]/kT},  Ξ = Σ_N exp(μN/kT) Z_N,

where Ω is the thermodynamic potential depending on the variables V, T, μ (the summation runs over all non-negative integers N), Ω = −kT ln Ξ. In the isobaric-isothermal Gibbs ensemble, the distribution function and the statistical sum Q, determined from the normalization condition, have the form:

w_{i,N}(V) = exp{[G − E_{i,N} − PV]/kT},  G = −kT ln Q,

where G is the Gibbs energy of the system (isobaric-isothermal potential, free enthalpy).

To calculate thermodynamic functions one may use any distribution: they are equivalent to one another and correspond to different physical conditions. The microcanonical Gibbs distribution is applied mainly in theoretical investigations. For solving specific problems, one considers ensembles in which there is an exchange of energy with the environment (the canonical and isobaric-isothermal ensembles) or an exchange of both energy and particles (the grand canonical ensemble). The latter is especially convenient for the study of phase and chemical equilibria. The statistical sums Z_N and Q make it possible to determine the Helmholtz energy F and the Gibbs energy G, as well as the thermodynamic properties of the system obtained by differentiating the statistical sums with respect to the corresponding parameters (per 1 mole of substance): the internal energy U = RT^2(∂ln Z_N/∂T)_V, the enthalpy H = RT^2(∂ln Q/∂T)_P, the entropy S = R ln Z_N + RT(∂ln Z_N/∂T)_V = R ln Q + RT(∂ln Q/∂T)_P, the heat capacity at constant volume C_V = 2RT(∂ln Z_N/∂T)_V + RT^2(∂^2 ln Z_N/∂T^2)_V, the heat capacity at constant pressure C_P = 2RT(∂ln Q/∂T)_P + RT^2(∂^2 ln Q/∂T^2)_P, etc. Correspondingly, all these quantities acquire a statistical meaning. Thus, the internal energy is identified with the average energy of the system, which allows one to consider it as the energy of the thermal motion of the particles composing the system; the free energy is related to the statistical sum of the system, and the entropy to the number of microstates g in a given macrostate, i.e., to the statistical weight of the macrostate, and hence to its probability. The meaning of entropy as a measure of the probability of a state is preserved for arbitrary (nonequilibrium) states. In the equilibrium state of an isolated
system, the entropy has the maximum possible value for the given external conditions (E, V, N), i.e., the equilibrium state is the most probable state (with maximum statistical weight). Therefore the transition from a nonequilibrium state to an equilibrium one is a process of transition from less probable states to more probable ones. This is the statistical meaning of the law of entropy increase, according to which the entropy of a closed system can only increase. At the absolute zero of temperature, any system is in its ground state, for which w_0 = 1 and S = 0. This statement constitutes the third law of thermodynamics. It is important that an unambiguous determination of the entropy requires the quantum description, because in classical statistics the entropy may be defined only up to an arbitrary additive term.
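As a numerical illustration of these relations, the following sketch (a hypothetical two-level system with an assumed level spacing, per particle rather than per mole; not data from the article) obtains the internal energy U = kT^2 (∂ln Z/∂T)_V by numerical differentiation of ln Z and checks it against the closed-form two-level result; the entropy then follows from S = (U − F)/T:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
eps = 1.0e-21      # assumed level spacing of a two-level system, J

def lnZ(T):
    """ln of the per-particle partition function z = 1 + exp(-eps/kT)."""
    return math.log(1.0 + math.exp(-eps / (k * T)))

def F(T):
    return -k * T * lnZ(T)                       # free energy, F = -kT ln z

def U(T, h=1e-4):
    """Internal energy U = kT^2 d(ln z)/dT via a central difference."""
    return k * T * T * (lnZ(T + h) - lnZ(T - h)) / (2.0 * h)

def S(T):
    return (U(T) - F(T)) / T                     # entropy from U = F + TS

T = 300.0
U_exact = eps / (math.exp(eps / (k * T)) + 1.0)  # closed form for two levels
```

The same numerical-derivative trick works for any partition function that can be evaluated pointwise, which is how such formulas are typically used in practice.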

Ideal systems. The calculation of the statistical sum of most systems is a difficult task. It is substantially simplified if the contribution of the potential energy to the total energy of the system can be neglected. In this case the complete distribution function f(p, q) for the N particles of an ideal system is expressed as a product of the single-particle distribution functions f_1(p_j, q_j):

f(p, q) = Π_{j=1...N} f_1(p_j, q_j).


The distribution of particles over microstates depends on their kinetic energy and on the quantum properties of the system dictated by the identity of the particles. All particles are divided into two classes: fermions and bosons. The type of statistics that the particles obey is uniquely related to their spin.

Fermi-Dirac statistics describes the distribution in a system of identical particles with half-integer spin 1/2, 3/2, ... in units of ħ = h/2π. A particle (or quasiparticle) that obeys this statistics is called a fermion. Fermions include the electron, the proton, the neutron, nuclei with an odd number of nucleons, and quasiparticles (for example, holes in semiconductors). This statistics was proposed by E. Fermi in 1926; in the same year P. Dirac discovered its quantum-mechanical meaning. The wave function of a system of fermions is antisymmetric, i.e., it changes sign upon the interchange of the coordinates and spins of any pair of identical particles. Each quantum state can contain no more than one particle (the Pauli principle). The average number n_i of fermions in a state with energy E_i is determined by the Fermi-Dirac distribution function:

n_i = {1 + exp[(E_i − μ)/kT]}^(−1),

where i is a set of quantum numbers characterizing the state of the particle.

Bose-Einstein statistics describes systems of identical particles with zero or integer spin (0, ħ, 2ħ, ...). A particle or quasiparticle that obeys this statistics is called a boson. This statistics was proposed by S. Bose (1924) for photons and developed by A. Einstein (1924) for gas molecules, considered as composite particles made up of an even number of fermions, e.g., nuclei with an even total number of protons and neutrons (the deuteron, the 4He nucleus, etc.). Bosons also include phonons in solids and in liquid 4He, and excitons in semiconductors and dielectrics. The wave function of the system is symmetric with respect to the interchange of any pair of identical particles. The occupation numbers are not limited in any way, i.e., any number of particles may exist in one state. The average number n_i of bosons in a state with energy E_i is described by the Bose-Einstein distribution function:

n_i = {exp[(E_i − μ)/kT] − 1}^(−1).

Boltzmann statistics is a special case of quantum statistics when quantum effects can be neglected (high temperatures). It considers the distribution of particles over momenta and coordinates in the phase space of a single particle, and not in the phase space of all the particles, as in the Gibbs distributions. As the minimum unit of volume of phase space, which has six dimensions (three coordinates and three projections of the particle momentum), in accordance with the quantum-mechanical uncertainty relation, one cannot choose a volume smaller than h^3. The average number n_i of particles in a state with energy E_i is described by the Boltzmann distribution function:

n_i = exp[(μ − E_i)/kT].
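The three occupation functions are easy to compare numerically. The sketch below (reduced units with kT = 1, an assumption of this illustration) shows that for (E − μ) ≫ kT the Fermi-Dirac and Bose-Einstein averages both reduce to the Boltzmann exponential:

```python
import math

def fermi_dirac(E, mu, kT):
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

def bose_einstein(E, mu, kT):
    # defined for E > mu; otherwise the occupation diverges
    return 1.0 / (math.exp((E - mu) / kT) - 1.0)

def boltzmann(E, mu, kT):
    return math.exp((mu - E) / kT)

# nondegenerate limit: (E - mu)/kT = 10
n_fd = fermi_dirac(10.0, 0.0, 1.0)
n_be = bose_einstein(10.0, 0.0, 1.0)
n_mb = boltzmann(10.0, 0.0, 1.0)
```

Since e^x − 1 < e^x < e^x + 1, the Bose-Einstein occupation always lies above and the Fermi-Dirac occupation below the Boltzmann value at the same E, μ, T.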

For particles that move according to the laws of classical mechanics in an external potential field U(r), the statistically equilibrium distribution function f_1(p, r) over the momenta p and coordinates r of the particles has the form: f_1(p, r) = A exp{−[p^2/2m + U(r)]/kT}. Here p^2/2m is the kinetic energy of a molecule of mass m, and the constant A is determined from the normalization condition. This expression is often called the Maxwell-Boltzmann distribution, while the Boltzmann distribution is the name given to the function

n(r) = n_0 exp[−U(r)/kT],

where n(r) = ∫ f_1(p, r) dp is the number density of particles at the point r (n_0 is the number density in the absence of the external field). The Boltzmann distribution describes the distribution of gas molecules in a gravitational field (the barometric formula) and of highly dispersed particles in the field of centrifugal forces, as well as of electrons in nondegenerate semiconductors; it is also used to calculate the distribution of ions in dilute electrolyte solutions (in the bulk and at the boundary with an electrode), etc. At U(r) = 0, the Maxwell distribution follows from the Maxwell-Boltzmann distribution; it describes the distribution of the velocities of particles in a state of statistical equilibrium (J. Maxwell, 1859). According to this distribution, the probable number of molecules per unit volume whose velocity components lie in the intervals from u_i to u_i + du_i (i = x, y, z) is determined by the function:

n(u_x, u_y, u_z) du_x du_y du_z = n (m/2πkT)^(3/2) exp[−m(u_x^2 + u_y^2 + u_z^2)/2kT] du_x du_y du_z.

The Maxwell distribution does not depend on the interaction between the particles and is valid not only for gases but also for liquids and solids (if a classical description is possible for them), as well as for Brownian particles suspended in gases and liquids. It is used to count the number of collisions of molecules with one another in chemical reactions and with a surface.
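A short sampling check of the Maxwell distribution (reduced units m = kT = 1, an assumption of this sketch): each velocity component is drawn as a Gaussian, and the mean speed of the resulting sample is compared with the analytic value ⟨u⟩ = (8kT/πm)^(1/2):

```python
import math, random

random.seed(42)
kT, m = 1.0, 1.0                 # reduced units (hypothetical particle)
sigma = math.sqrt(kT / m)        # standard deviation of each component

# A speed is the modulus of a 3-vector of Gaussian velocity components
N = 200_000
speeds = [math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))
          for _ in range(N)]

mean_speed = sum(speeds) / N
mean_sq = sum(s * s for s in speeds) / N
mean_speed_theory = math.sqrt(8.0 * kT / (math.pi * m))  # <u>
mean_sq_theory = 3.0 * kT / m                            # <u^2>
```

The sample averages agree with the analytic moments to within the expected statistical error of a few parts per thousand.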

Sum over the states of a molecule. The statistical sum in the canonical Gibbs ensemble is expressed through the sum over the states of a single molecule Q_1:

Z_N = Q_1^N / N!,  Q_1 = Σ_i g_i exp(−E_i/kT),

where E_i is the energy of the i-th quantum level of the molecule (i = 0 corresponds to the ground level) and g_i is the statistical weight of the i-th level. In the general case the individual kinds of motion of the electrons, of the atoms and groups of atoms within the molecule, and of the molecule as a whole are interrelated, but approximately they may be considered independent. Then the sum over states may be represented as a product of individual components associated with the translational motion (Q_trans) and with the intramolecular motions (Q_int):

Q_1 = Q_trans · Q_int,  Q_trans = l(V/N),

where l = (2πmkT/h^2)^(3/2). For atoms, Q_int is the sum over the electronic and nuclear states; for molecules, the sum over the electronic, nuclear, vibrational and rotational states. In the temperature range from 10 to 10^3 K an approximate description is usually used, in which each of the indicated kinds of motion is considered independently: Q_int = Q_el · Q_nucl · Q_rot · Q_vib / σ, where σ is the symmetry number, equal to the number of identical configurations that arise upon rotation of a molecule consisting of identical atoms or groups of atoms.

The sum over the states of electronic motion Q_el is equal to the statistical weight P_0 of the ground electronic state. In many cases the ground level is nondegenerate and separated from the nearest excited level by a significant energy; then Q_el = P_0 = 1. However, in some cases, e.g., for the O_2 molecule, P_0 = 3: in the ground state the angular momentum of the molecule is nonzero and the level is degenerate, and excited states may lie at quite low energies. The sum over the nuclear states Q_nucl, owing to the degeneracy in the nuclear spins, is equal to:

Q_nucl = Π_i (2s_i + 1),

where s_i is the spin of nucleus i and the product is taken over all nuclei of the molecule. The sum over the states of vibrational motion Q_vib = Π_i [1 − exp(−hν_i/kT)]^(−1), where ν_i are the frequencies of the small (normal) vibrations and the product runs over all vibrational modes of the molecule. The sum over the states of rotational motion of a polyatomic molecule with large moments of inertia may be considered classically [the high-temperature approximation, T/θ_i ≫ 1, where θ_i = h^2/(8π^2 k I_i) (i = x, y, z) and I_i is the principal moment of inertia for rotation about the axis i]: Q_rot = (πT^3/θ_x θ_y θ_z)^(1/2). For linear molecules with moment of inertia I, the statistical sum Q_rot = T/θ, where θ = h^2/(8π^2 kI).
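The rotational and vibrational factors are easy to evaluate for a concrete diatomic. The sketch below uses approximate characteristic temperatures for an N2-like molecule (θ_rot ≈ 2.88 K, θ_vib ≈ 3374 K, σ = 2; these numbers are assumptions of the illustration, not data from the article):

```python
import math

theta_rot = 2.88     # K, rotational characteristic temperature (assumed)
theta_vib = 3374.0   # K, vibrational characteristic temperature (assumed)
sigma = 2            # symmetry number of a homonuclear diatomic

def Q_rot(T):
    """Classical (high-temperature) rotational sum, divided by sigma."""
    return T / (sigma * theta_rot)

def Q_vib(T):
    """Harmonic vibrational sum, energy measured from the lowest level."""
    return 1.0 / (1.0 - math.exp(-theta_vib / T))

q_rot_300 = Q_rot(300.0)
q_vib_300 = Q_vib(300.0)
```

At room temperature many rotational states contribute (Q_rot on the order of 50), while the vibration is frozen out (Q_vib very close to 1), in line with the temperature ranges discussed above.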

When calculating at temperatures above 10^3 K it is necessary to take into account the anharmonicity of the vibrations, the effects of the interaction of the vibrational and rotational degrees of freedom, as well as the electronic states, the populations of excited levels, etc. At low temperatures (below 10 K) quantum effects must be taken into account (especially for diatomic molecules). Thus, the rotational motion of a heteronuclear molecule AB is described by the formula:

Q_rot = Σ_l (2l + 1) exp[−l(l + 1)θ/T],

where l is the rotational quantum number, while for homonuclear molecules A_2 (especially for H_2, D_2, T_2) the nuclear and rotational degrees of freedom interact with one another: Q_nucl-rot ≠ Q_nucl · Q_rot.

Knowledge of the sum over states makes it possible to calculate the thermodynamic properties of gases, including the chemical equilibrium constants, the equilibrium degree of ionization, etc. Of importance in the theory of absolute reaction rates is the possibility of calculating the equilibrium constant of formation of the activated complex (transition state), which is represented as a modified molecule in which one of the vibrational degrees of freedom is replaced by a translational degree of freedom.

Non-ideal systems. In real gases the molecules interact with one another. In this case the sum over states of the ensemble does not reduce to a product of the sums over states of the individual molecules. If it is assumed that the intermolecular interactions do not affect the internal states of the molecules, the statistical sum of the system in the classical approximation for a gas consisting of N identical particles has the form:

Z_N = (Q_1^N / N!)(Q_N / V^N),

where

Q_N = ∫...∫ exp[−U(r_1, ..., r_N)/kT] dr_1 ... dr_N.

Here Q_N is the configuration integral, which takes the interaction of the molecules into account. Most often the potential energy U is considered as a sum of pair potentials: U = Σ_{i<j} U(r_ij), where U(r_ij) is the potential of the central forces, depending on the distance r_ij between molecules i and j. Many-particle contributions to the potential energy, orientation effects, etc., are also taken into account. The need to calculate the configuration integral arises in the consideration of any condensed phases and of phase boundaries. An exact solution of the many-body problem is practically impossible; therefore, to calculate the statistical sum and all the thermodynamic properties obtained from it by differentiation with respect to the corresponding parameters, various approximate methods are used.

According to the so-called method of group expansions, the state of the system is considered as a set of complexes (groups) consisting of one, two, three, etc. molecules, and the configuration integral decomposes into a set of group integrals. This approach makes it possible to represent any thermodynamic function of the system as a series in powers of the density. The most important relationship of this kind is the virial equation of state.
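The first density correction in the virial equation of state, the second virial coefficient B2(T) = −2π ∫ (exp[−U(r)/kT] − 1) r^2 dr, can be checked numerically. The sketch below (reduced units and a simple midpoint quadrature, both assumptions of this illustration) recovers the known hard-sphere result B2 = 2πd^3/3:

```python
import math

def B2(u, kT, r_max=10.0, n=200_000):
    """Second virial coefficient -2*pi * int (exp(-u(r)/kT) - 1) r^2 dr
    by the midpoint rule; u(r) is a pair potential."""
    dr = r_max / n
    s = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        s += (math.exp(-u(r) / kT) - 1.0) * r * r
    return -2.0 * math.pi * s * dr

d = 1.0  # hard-sphere diameter (reduced units)

def hard_sphere(r):
    # infinite repulsion inside the core; exp(-inf) evaluates to 0.0
    return float('inf') if r < d else 0.0

B2_hs = B2(hard_sphere, kT=1.0)
B2_exact = 2.0 * math.pi * d**3 / 3.0  # known hard-sphere value
```

The same quadrature works for any short-ranged pair potential, e.g. a square well or Lennard-Jones form, where B2 also becomes temperature dependent.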

For the theoretical description of the properties of dense gases, liquids and solutions of nonelectrolytes, and of interfaces in these systems, the method of n-particle distribution functions is more convenient than a direct calculation of the statistical sum. In this method, instead of counting the statistical weight of each state with a fixed energy, one uses the relationships between the distribution functions f_n, which characterize the probability of n particles being found simultaneously at points of space with coordinates r_1, ..., r_n; for n = N, f_N = ∫ f(p, r) dp (here and below q_i = r_i). The single-particle function f_1(r_1) (n = 1) characterizes the density distribution of the substance. For a crystal it is a periodic function with maxima at the sites of the crystal structure; for gases or liquids in the absence of an external field it is a constant equal to the macroscopic density of the substance ρ. The two-particle distribution function (n = 2) characterizes the probability of finding two particles at the points 1 and 2; it determines the so-called correlation function g(|r_1 − r_2|) = f_2(r_1, r_2)/ρ^2, which characterizes the mutual correlation in the distribution of the particles. The corresponding experimental information is provided by scattering experiments.

The distribution functions of dimensions n and n + 1 are connected by an infinite chain of coupled integro-differential Bogolyubov-Born-Green-Kirkwood-Yvon equations, whose solution is extremely difficult; therefore the effects of correlation between the particles are taken into account by introducing various approximations that determine how the function f_n is expressed through functions of lower dimension. Correspondingly, several approximate methods have been developed for calculating the functions f_n, and through them all the thermodynamic characteristics of the system under consideration. The Percus-Yevick and hypernetted-chain approximations find the widest application.

Lattice models of the condensed state have found wide application in the thermodynamic treatment of almost all physicochemical problems. The entire volume of the system is divided into local regions with a characteristic size on the order of the molecular size v_0. In general, in different models the size of the local region may be either greater or smaller than v_0; in most cases they coincide. The transition to a discrete distribution of molecules in space substantially simplifies the enumeration of the various configurations. Lattice models take into account the interactions of the molecules with one another; the interaction energy is described by energy parameters. In a number of cases lattice models admit exact solutions, which makes it possible to assess the nature of the approximations used. With their help it is possible to consider many-particle and specific interactions, orientation effects, etc. Lattice models are fundamental in the study of, and in applied calculations for, solutions and highly inhomogeneous systems.

Numerical methods for determining thermodynamic properties are becoming ever more important as computing technology develops. In the Monte Carlo method, multidimensional integrals are calculated directly, which makes it possible to obtain directly the statistical average of an observed quantity A(r_1, ..., r_N) over any of the statistical ensembles (for example, A may be the energy of the system). Thus, in the canonical ensemble the thermodynamic average has the form:

⟨A⟩ = ∫ A(r_1, ..., r_N) exp[−U(r_1, ..., r_N)/kT] dr_1 ... dr_N / ∫ exp[−U(r_1, ..., r_N)/kT] dr_1 ... dr_N.

This method is applicable to practically all systems; the average values obtained with it for limited volumes (N = 10^2-10^5) serve as a good approximation for the description of macroscopic objects and may be considered as practically exact results.
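A minimal Metropolis Monte Carlo sketch (a toy one-dimensional Ising ring of N = 8 spins with assumed parameters J = 1, kT = 2; this model is an illustration, not part of the article): the canonical average energy estimated from the Markov chain is compared with the exact average obtained by enumerating all 2^8 microstates:

```python
import math, random, itertools

random.seed(1)
N, J, kT = 8, 1.0, 2.0   # toy Ising ring (assumed parameters)

def energy(s):
    return -J * sum(s[i] * s[(i + 1) % N] for i in range(N))

# Exact canonical average by enumeration of all 2^N microstates
states = list(itertools.product([-1, 1], repeat=N))
w = [math.exp(-energy(s) / kT) for s in states]
Z = sum(w)
E_exact = sum(wi * energy(s) for wi, s in zip(w, states)) / Z

# Metropolis Monte Carlo estimate of the same average
s = [1] * N
E, samples = energy(s), []
for step in range(200_000):
    i = random.randrange(N)
    dE = 2 * J * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        s[i] = -s[i]
        E += dE
    if step > 20_000:          # discard the equilibration period
        samples.append(E)
E_mc = sum(samples) / len(samples)
```

The acceptance rule exp(−ΔE/kT) generates states with canonical weights, so the chain average converges to the ensemble average without ever evaluating the statistical sum itself.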

In the molecular dynamics method, the evolution of the state of the system is followed by numerical integration of Newton's equations of motion for each individual particle (N = 10^2-10^5) with given potentials of interparticle interaction. The equilibrium characteristics of the system are obtained by averaging over the phase trajectories (over the velocities and coordinates) over long times, after the establishment of the Maxwellian velocity distribution of the particles (the so-called thermalization period).
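The core of the method, numerical integration of Newton's equations, can be shown on a single particle in a harmonic well (a stand-in potential assumed for this sketch). The velocity-Verlet scheme used below is a standard molecular dynamics integrator and conserves the total energy to high accuracy:

```python
# Velocity-Verlet integration of Newton's equation for one particle
# in a harmonic well (assumed stand-in for an interparticle potential)
m, k_spring = 1.0, 1.0
x, v = 1.0, 0.0
dt, steps = 0.01, 10_000

def force(x):
    return -k_spring * x

E0 = 0.5 * m * v * v + 0.5 * k_spring * x * x   # initial total energy
f = force(x)
for _ in range(steps):
    v += 0.5 * f / m * dt    # half kick
    x += v * dt              # drift
    f = force(x)
    v += 0.5 * f / m * dt    # half kick
E_final = 0.5 * m * v * v + 0.5 * k_spring * x * x
```

In a real simulation the same loop runs over N particles with pair forces, and time averages over the trajectory replace ensemble averages.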

Limitations on the use of numerical methods are determined mainly by the capabilities of the computer. Special computational techniques make it possible to circumvent the difficulties associated with the fact that it is not the real system that is considered but a small volume; this is especially important when long-range interaction potentials, phase transitions, etc., are taken into account.

Physical kinetics is the branch of statistical physics that provides a justification for the relations describing the transfer of energy, momentum and mass, as well as the influence of external fields on these processes. The kinetic coefficients are macroscopic characteristics of a continuous medium that determine the dependence of the flows of physical quantities (heat, momentum, mass of components, etc.) on the gradients of the quantities that cause these flows: temperature, hydrodynamic velocity, etc. It is necessary to distinguish the Onsager coefficients, which enter the equations connecting the flows with the thermodynamic forces (the thermodynamic equations of motion), from the transfer coefficients (diffusion coefficient, thermal conductivity, viscosity, etc.), which enter the transfer equations. The former may be expressed through the latter using relations between the macroscopic characteristics of the system, so in what follows only the transfer coefficients are considered.

To calculate the macroscopic transfer coefficients it is necessary to average over the probabilities of realization of the elementary transfer events using a nonequilibrium distribution function. The main difficulty is that the analytic form of the distribution function f(p, q, t) (t is the time) is unknown (in contrast to the equilibrium state of the system, which is described by the Gibbs distribution functions, obtained in the limit t → ∞). One considers the n-particle distribution functions f_n(r, q, t), which are obtained from the function f(p, q, t) by averaging over the coordinates and momenta of the remaining (N − n) particles.

For these functions a chain of equations may be written down that allows one to describe arbitrary nonequilibrium states. The solution of this chain of equations is very difficult. As a rule, in the kinetic theory of gases and of gas-like quasiparticles in solids (fermions and bosons), only the equation for the single-particle distribution function f_1 is used. Under the assumption that there is no correlation between the states of any particles (the hypothesis of molecular chaos), the so-called kinetic Boltzmann equation was derived (L. Boltzmann, 1872). This equation takes into account the change of the particle distribution under the action of external forces F(r, t) and of pair collisions between the particles:

∂f_1/∂t + u·(∂f_1/∂r) + (F/m)·(∂f_1/∂u) = St f_1,

St f_1 = ∫ [f_1(u′, r, t) f_1(u_1′, r, t) − f_1(u, r, t) f_1(u_1, r, t)] u_rel σ(u_rel, θ) dΩ du_1,

where f_1(u, r, t) and f_1(u_1, r, t) are the distribution functions of the particles before the collision, f_1(u′, r, t) and f_1(u_1′, r, t) are the distribution functions after the collision; u and u_1 are the velocities of the particles before the collision, u′ and u_1′ are the velocities of the same particles after the collision; u_rel = |u − u_1| is the modulus of the relative velocity of the colliding particles; θ is the angle between the relative velocity u − u_1 of the colliding particles and the line connecting their centers; σ(u_rel, θ) dΩ is the differential effective cross section for scattering of the particles into the solid angle dΩ in the laboratory coordinate system, which depends on the law of particle interaction. For the model of elastic rigid spheres of radius R, one takes σ = 4R^2 cos θ. Within classical mechanics the differential cross section is expressed through the collision parameters b and ε (the impact distance and the azimuthal angle of the line of centers): σ dΩ = b db dε, and the molecules are considered as centers of force with a potential depending on the distance. For quantum gases the expression for the differential effective cross section is obtained on the basis of quantum scattering theory, with allowance for the influence of exchange effects on the probability of a collision.

If the system is in a state of statistical equilibrium, the collision integral St f is equal to zero, and the solution of the kinetic Boltzmann equation is the Maxwell distribution. For nonequilibrium states, solutions of the kinetic Boltzmann equation are usually sought in the form of an expansion of the function f_1(u, r, t) in a series in small parameters relative to the Maxwell distribution function. In the simplest (relaxation-time) approximation, the collision integral is approximated as St f_1 ≈ −(f_1 − f_1^0)/τ, where f_1^0 is the equilibrium distribution function and τ is the relaxation time. For liquids the ordinary single-particle distribution function f_1 of the molecules does not reveal the specifics of the phenomena, and consideration of the two-particle distribution function f_2 is required. However, for sufficiently slow processes, and in cases where the scales of the spatial inhomogeneities are much larger than the scale of the correlations between the particles, one may use a locally equilibrium single-particle distribution function with a temperature, chemical potentials and hydrodynamic velocity that correspond to the small volume under consideration. For it one can find a correction proportional to the gradients of the temperature, hydrodynamic velocity and chemical potentials of the components, and calculate the flows of momentum, energy and matter, as well as justify the Navier-Stokes equation and the equations of heat conduction and diffusion. In this case the transfer coefficients turn out to be proportional to the space-time correlation functions of the flows of energy, momentum and matter of each component.
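The relaxation-time approximation can be illustrated by a spatially uniform deviation from equilibrium, for which the kinetic equation reduces to df/dt = −(f − f^0)/τ. The sketch below (assumed values τ = 2, f^0 = 1) integrates this decay with the Euler method and checks it against the exponential solution:

```python
import math

# Relaxation-time approximation: St f = -(f - f_eq)/tau, so a uniform
# deviation from equilibrium decays as exp(-t/tau)
tau = 2.0       # relaxation time (assumed)
f_eq = 1.0      # equilibrium value (assumed)
f = 3.0         # initial nonequilibrium value
dt, t = 1e-4, 0.0
while t < 1.0:
    f += -(f - f_eq) / tau * dt
    t += dt
f_theory = f_eq + (3.0 - f_eq) * math.exp(-1.0 / tau)
```

The deviation shrinks by the factor e^(−t/τ), which is the content of the relaxation approximation: all details of the collision integral are compressed into the single time scale τ.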

To describe matter in condensed phases and at interfaces, the lattice model of the condensed phase is widely used. The evolution of the state of the system is described by the fundamental kinetic master equation for the distribution function P(q, t):

dP(q, t)/dt = Σ_{q′} [W(q′ → q) P(q′, t) − W(q → q′) P(q, t)],

where P(q, t) = ∫ f(p, q, t) dp is the distribution function averaged over the momenta (velocities) of all N particles, which describes the distribution of the particles over the sites of the lattice structure (their number is N_y, N < N_y), and q is the number of a site or its coordinate. In the "lattice gas" model a particle may be present at a site (the site is occupied) or absent (the site is free); W(q → q′) is the probability per unit time of the transition of the system from the state q, described by the complete set of particle coordinates, to another state q′. The first sum describes the contribution of all processes leading into the given state q, and the second sum the departure from this state. In the case of an equilibrium distribution of the particles (t → ∞), P(q) = exp[−H(q)/kT]/Q, where Q is the statistical sum and H(q) is the energy of the system in the state q. The transition probabilities satisfy the principle of detailed balance: W(q′ → q) exp[−H(q′)/kT] = W(q → q′) exp[−H(q)/kT]. On the basis of the equations for the functions P(q, t), kinetic equations are constructed for the n-particle distribution functions, which are obtained by averaging over the positions of all the other (N − n) particles. For small n these kinetic equations are used to describe diffusion near a boundary with a gas, crystal growth, phase transformations, etc. For interphase transfer, owing to the differences in the characteristic times of the elementary migration processes of the particles, the type of boundary conditions at the phase boundaries plays an important role.
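The master equation and the detailed-balance condition can be demonstrated on the smallest possible case, two states with assumed energies E1 = 0 and E2 = 1 (in units where kT = 1): Metropolis-type rates satisfy detailed balance by construction, and integrating the master equation drives P(q, t) to the Boltzmann distribution:

```python
import math

# Two-state master equation with Metropolis-type rates satisfying
# detailed balance: W(1->2) exp(-E1/kT) = W(2->1) exp(-E2/kT)
kT, E1, E2 = 1.0, 0.0, 1.0            # assumed toy parameters
W12 = math.exp(-(E2 - E1) / kT)       # uphill rate
W21 = 1.0                             # downhill rate

db_lhs = W12 * math.exp(-E1 / kT)     # both sides of detailed balance
db_rhs = W21 * math.exp(-E2 / kT)

P1, P2 = 0.5, 0.5                     # start far from equilibrium
dt = 1e-3
for _ in range(20_000):
    flow = W21 * P2 - W12 * P1        # net probability flow into state 1
    P1 += flow * dt
    P2 -= flow * dt

Z = math.exp(-E1 / kT) + math.exp(-E2 / kT)
P1_eq = math.exp(-E1 / kT) / Z        # Boltzmann stationary value
```

Because the update moves probability between the two states symmetrically, the normalization P1 + P2 = 1 is conserved at every step, and the stationary point of the flow is exactly the Boltzmann ratio P2/P1 = exp[−(E2 − E1)/kT].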

For small systems (number of sites N_y = 10^2-10^5) the system of equations for the function P(q, t) may be solved numerically by the Monte Carlo method. The stage of relaxation of the system to the equilibrium state makes it possible to consider various transient processes in the study of the kinetics of phase transformations, of growth, and of surface reactions, and to determine their dynamic characteristics, including the transfer coefficients.

To calculate the transfer coefficients in the gaseous, liquid and solid phases, as well as at phase boundaries, various variants of the molecular dynamics method are actively used, which makes it possible to follow the evolution of the system in detail from times of ~10^-15 s up to ~10^-10 s (at times of the order of 10^-10-10^-9 s and longer the so-called Langevin equation is used; this is Newton's equation of motion containing a stochastic term on the right-hand side).

For systems with chemical reactions, the character of the particle distribution is strongly influenced by the relationship between the characteristic times of transfer and of chemical transformation. If the rate of chemical transformation is small, the particle distribution does not differ much from that in the absence of reaction. If the rate of the reaction is large, its influence on the character of the particle distribution is great, and it becomes impossible to use average particle concentrations (i.e., distribution functions with n = 1), as is done when the law of mass action is applied. It is then necessary to describe the distribution in more detail, using the distribution functions f_n with n > 1. Of importance in the description of the reactive flows of particles at a surface and of the reaction rates are the boundary conditions at the phase boundaries.

Lit.: Kubo R., Statistical Mechanics, trans. from English, Moscow, 1967; Zubarev D. N., Nonequilibrium Statistical Thermodynamics, Moscow, 1971; Isihara A., Statistical Physics, trans. from English, Moscow, 1973; Landau L. D., Lifshitz E. M.

Molecular physics is a branch of physics that studies the structure and properties of matter on the basis of so-called molecular-kinetic concepts. According to these concepts, any body - solid, liquid or gaseous - consists of a large number of very small isolated particles: molecules. The molecules of any substance are in disorderly, chaotic motion that has no preferred direction. Its intensity depends on the temperature of the substance.

Direct evidence of the existence of chaotic motion of molecules is Brownian motion. This phenomenon lies in the fact that very small (visible only through a microscope) particles suspended in a liquid are always in a state of continuous random movement, which does not depend on external causes and turns out to be a manifestation of the internal movement of matter. Brownian particles move under the influence of random impacts of molecules.

The molecular kinetic theory sets itself the goal of interpreting those properties of bodies that are directly observed experimentally (pressure, temperature, etc.) as the total result of the action of molecules. In doing so, it uses the statistical method, concerning itself not with the motion of individual molecules but only with the average values that characterize the motion of a huge collection of particles. Hence its other name: statistical physics.

Thermodynamics also deals with the study of various properties of bodies and changes in the state of matter.

However, unlike the molecular kinetic theory, thermodynamics studies the macroscopic properties of bodies and natural phenomena without being interested in their microscopic picture. Without introducing molecules and atoms into consideration, and without entering into a microscopic examination of processes, thermodynamics nevertheless allows one to draw a number of conclusions about how they proceed.

Thermodynamics is based on several fundamental laws (called the principles of thermodynamics), established on the basis of a generalization of a large body of experimental facts. Because of this, the conclusions of thermodynamics are very general.

Approaching changes in the state of matter from different points of view, thermodynamics and molecular kinetic theory complement each other, essentially forming one whole.

Turning to the history of molecular kinetic concepts, it should first of all be noted that ideas about the atomic structure of matter were expressed by the ancient Greeks. Among the ancient Greeks, however, these ideas were nothing more than a brilliant guess. In the 17th century atomism was reborn, no longer as a guess but as a scientific hypothesis. This hypothesis received particular development in the works of the brilliant Russian scientist and thinker M. V. Lomonosov (1711-1765), who attempted to give a unified picture of all the physical and chemical phenomena known in his time. He proceeded from a corpuscular (in modern terminology, molecular) concept of the structure of matter. Opposing the theory of caloric (a hypothetical thermal fluid whose content in a body determines the degree of its heating) that was dominant in his time, Lomonosov saw the "cause of heat" in the rotational motion of the particles of a body. Thus, Lomonosov essentially formulated molecular kinetic concepts.

In the second half of the 19th century. and at the beginning of the 20th century. Thanks to the works of a number of scientists, atomism turned into a scientific theory.

Classical and quantum statistical physics. Derivation of the Gibbs relation. Thermodynamic principles. Liouville's theorem and kinetic equations of Boltzmann and Ziegler. Methods of statistical physics in heterogeneous media.

1. Derivation of the Gibbs relation

Introductory notes. The central place in the mechanics of heterogeneous media is occupied by the derivation of constitutive equations. It is the constitutive equations that contain the specifics distinguishing media with different mechanical properties. There are various ways to derive them, both rigorous ones based on averaging methods and heuristic ones. The most common is a combination of thought experiments with thermodynamic principles. Both approaches are phenomenological, although the thermodynamic method is deeply developed and rests on fundamental physical laws. Clearly, the phenomenological derivation of the constitutive relations needs to be justified from general physical principles, in particular by statistical methods.

Statistical physics studies systems consisting of a huge number of identical or similar elements (atoms, molecules, ions, submolecular structures, etc.). In the mechanics of heterogeneous media, such elements are microinhomogeneities (pores, cracks, grains, etc.). Studying them using deterministic methods is almost impossible. At the same time, a huge number of these elements allows for the manifestation of statistical patterns and the study of this system using statistical methods.

Statistical methods are based on the concepts of the main system and subsystem. The main system (thermostat) is much larger than the subsystem, but both are in a state of thermodynamic equilibrium. The object of study in statistical physics is precisely the subsystem, which in continuum mechanics is identified with an elementary volume, and in heterogeneous mechanics with a volume of phases in an elementary volume.

The Gibbs method in statistical physics is based on the concepts of phase space and of trajectories in phase space. Phase space is the topological product of the coordinate and momentum spaces of all the particles composing the subsystem. Trajectories in phase space contain much superfluous information, for example, initial values and information about boundary conditions where the trajectory reaches the boundary. When describing a single trajectory in phase space, the ergodic hypothesis is usually invoked (or some surrogate of it that modifies it slightly but admits rigorous proof); the subtleties of its proof are not important here, and we do not dwell on them. The hypothesis allows one trajectory to be replaced by a whole ensemble of states, and this equivalent description gets rid of the superfluous information. The ensemble of states admits a simple and transparent interpretation: it can be pictured as a fictitious gas in phase space, described by a transport equation.

The statistical approach includes two levels of research: quantum and classical. Each microscopic inhomogeneity of a heterogeneous medium is described by continuum mechanics as a homogeneous body. It is assumed that quantum statistical physics has already been used in studying the mechanical and thermodynamic properties of these inhomogeneities. When we average over the random inhomogeneities of a heterogeneous medium, we treat these inhomogeneities as classical random objects. The lines of reasoning in quantum and classical statistical physics are very similar, though with some differences. In quantum statistics the phase volume takes discrete values. This, however, is not the only difference. In quantum statistics the fictitious gas is incompressible and undergoes only transport. In classical statistics the transport equation includes a term describing dissipative processes at the molecular level; formally it looks like a source. The divergence form of this source preserves the total mass of the fictitious gas while allowing its local disappearance and reappearance, a process resembling diffusion in phase space.

Further, on the basis of classical statistics, thermodynamics proper is expounded, including the thermodynamics of irreversible processes. The concepts of thermodynamic functions are introduced, with whose help the governing equations are derived. Poroelastic media include both conservative and dissipative processes: reversible elastic deformations occur in the skeleton, which represents a conservative thermodynamic system, while dissipative processes occur in the fluid. In a porous-viscous medium both phases (skeleton and fluid) are dissipative.

Microprocesses and macroprocesses. In heterogeneous media, a subsystem is an elementary volume that satisfies the postulates of heterogeneous media; in particular, it satisfies the conditions of local statistical homogeneity and local thermodynamic equilibrium. Accordingly, all objects and processes are divided by scale into microprocesses and macroprocesses. We describe macroprocesses by generalized coordinates $q_i$ and generalized forces $F_i$; here the subscript labels not only vector and tensor indices but also different quantities (including quantities of different tensor dimension). In considering microprocesses we use generalized coordinates $x_k$ and generalized velocities $\dot{x}_k$; these describe the motion of large molecules, their associations, and inhomogeneities, which are treated as classical objects. The phase space of the subsystem is formed by the coordinates and velocities of all the particles composing the given elementary volume.

It should be noted that in quantum mechanics the nature of the particles is strictly established: the number of particle species is finite, and the laws of their motion are known and uniform for each species. A completely different situation arises in the mechanics of heterogeneous media. As a rule, we have constitutive relations derived by phenomenological methods for each of the phases, while general constitutive relations for the entire elementary volume at the macro level are usually the subject of research. For this reason, the interaction of micro-level elements in heterogeneous media does not lend itself to standard research methods.

In this regard, new methods and approaches are required, which have not yet been fully developed. One such approach is Ziegler's generalization of Gibbs' theory. Its essence lies in some modification of the Liouville equation. This approach will be described in more detail below. We first give a standard presentation of Gibbs's theory, and then present ideas that generalize it.

The energy of the system changes due to work at the macro level, which is expressed by the relation

$\delta A = \sum_i F_i \, dq_i .$

It also changes due to the influx of heat $\delta Q$ associated with the motion of molecules. Let us write down the first law of thermodynamics in differential form:

$dE = \delta Q + \sum_i F_i \, dq_i . \qquad (1.1)$

We describe microprocesses by the Lagrange equations

$\frac{d}{dt} \frac{\partial L}{\partial \dot{x}_k} - \frac{\partial L}{\partial x_k} = 0 , \qquad (1.2)$

where $L = K - U$ is the Lagrange function, $K$ the kinetic energy, and $U$ the potential energy.

Gibbs' theory imposes the following restrictions. It is assumed that the potential energy depends on the microcoordinates and the macrocoordinates, $U = U(x, q)$, while the kinetic energy depends only on the microcoordinates and their velocities, $K = K(x, \dot{x})$. Under these conditions the Lagrange function does not depend explicitly on time or on the macro-velocities:

$L = L(x, \dot{x};\, q) .$

The approach based on the equations of motion in Lagrange form (1.2) can be replaced by the equivalent Hamiltonian formalism by introducing generalized momenta for the microcoordinates,

$p_k = \frac{\partial L}{\partial \dot{x}_k} ,$

and the Hamilton function

$H = \sum_k p_k \dot{x}_k - L ,$

which has the meaning of the total energy of the particle system. Let us write down the increment of the Hamilton function:

$dH = \sum_k \left( \dot{x}_k \, dp_k + p_k \, d\dot{x}_k \right) - \sum_k \left( \frac{\partial L}{\partial x_k} \, dx_k + \frac{\partial L}{\partial \dot{x}_k} \, d\dot{x}_k \right) - \sum_i \frac{\partial L}{\partial q_i} \, dq_i .$

By the definition of the momenta and the Lagrange equations of motion, this expression is transformed into

$dH = \sum_k \left( \dot{x}_k \, dp_k - \dot{p}_k \, dx_k \right) - \sum_i \frac{\partial L}{\partial q_i} \, dq_i , \qquad (1.2)$

from which follow Hamilton's equations of motion

$\dot{x}_k = \frac{\partial H}{\partial p_k} , \qquad \dot{p}_k = -\frac{\partial H}{\partial x_k} , \qquad (1.3a)$

where $H$ has the meaning of the energy of the system, as well as the additional identity

$\frac{\partial H}{\partial q_i} = -\frac{\partial L}{\partial q_i} . \qquad (1.3b)$

It should be noted here that the Lagrange and Hamilton functions are expressed through different arguments, so the last identity has a not entirely trivial meaning. Let us write down the differential expression (1.2) for one particle along its trajectory:

$\frac{dH}{dt} = \sum_k \left( \dot{x}_k \dot{p}_k - \dot{p}_k \dot{x}_k \right) + \sum_i \frac{\partial H}{\partial q_i} \, \dot{q}_i .$

Using (1.3), we transform this expression:

$\frac{dH}{dt} = \sum_i \frac{\partial H}{\partial q_i} \, \dot{q}_i .$

Consequently, the particle energy depends on time only through the generalized macrocoordinates. If they do not change with time, the energy is conserved.

Statistical method for describing a system. The lack of information about the initial conditions for system (1.3) and about its behavior at the boundary of the body can be overcome by a statistical approach to its study. Let this mechanical system have $n$ degrees of freedom associated with the microscopic variables: the position of all its points in ordinary three-dimensional space is characterized by the generalized coordinates $x_k$ ($k = 1, \dots, n$). Consider the phase space of the $2n$ variables $(x_1, \dots, x_n, p_1, \dots, p_n)$. The phase state is characterized by a point with these coordinates in $2n$-dimensional Euclidean space. In practice we always study a specific object that is part of some large (compared to the given object) system, the external environment, with which it usually interacts. Therefore, in what follows we speak of a subsystem (occupying part of the phase space) interacting with the full system (occupying the entire phase space).

Moving in the $2n$-dimensional space, a single trajectory gradually fills this entire phase space. Denote by $\Delta\Gamma$ that part of the phase-space volume in which the given subsystem spends "almost all its time," meaning the time during which the subsystem is in a quasi-equilibrium state. Over a sufficiently long period the phase trajectory passes through this region of phase space many times. Let us accept the ergodic hypothesis, according to which, instead of one moving point in phase space, one may consider a multitude of points forming a statistical ensemble. Passing to an infinitesimal elementary phase volume

$d\Gamma = dx_1 \cdots dx_n \, dp_1 \cdots dp_n ,$

we introduce a continuous distribution function $\rho$ by the relation

$dN = N \rho(x, p) \, \frac{d\Gamma}{h^{n}} .$

Here $dN$ is the number of points in the phase-volume element $d\Gamma$, $N$ is the total number of points in the entire phase space, and $h$ is a certain normalization coefficient with the dimension of action; it characterizes the statistical weight of the chosen element of phase-space volume. The distribution function satisfies the normalization condition

$\int \rho \, \frac{d\Gamma}{h^{n}} = 1 . \qquad (1.4)$

Let $\Delta t$ be the total time that the system spends within the elementary volume $\Delta\Gamma$, and $t$ the total time of motion of the representative point along its trajectory. In accordance with the ergodic hypothesis, we assume that

$\lim_{t \to \infty} \frac{\Delta t}{t} = \frac{\Delta N}{N} = \rho \, \frac{\Delta\Gamma}{h^{n}} . \qquad (1.5)$

Reasoning purely formally, we may suppose that phase space is filled with a fictitious gas whose density equals the density of the number of points in phase space. Conservation of the number of fictitious gas molecules is expressed by a transport equation in phase space, analogous to the law of conservation of mass in ordinary three-dimensional space. This conservation law is Liouville's theorem:

$\frac{\partial \rho}{\partial t} + \sum_k \left[ \frac{\partial (\rho \dot{x}_k)}{\partial x_k} + \frac{\partial (\rho \dot{p}_k)}{\partial p_k} \right] = 0 . \qquad (1.6)$

By virtue of Hamilton's equations, the condition of incompressibility of the phase fluid follows:

$\sum_k \left[ \frac{\partial \dot{x}_k}{\partial x_k} + \frac{\partial \dot{p}_k}{\partial p_k} \right] = \sum_k \left[ \frac{\partial^2 H}{\partial x_k \, \partial p_k} - \frac{\partial^2 H}{\partial p_k \, \partial x_k} \right] = 0 . \qquad (1.7)$

Let us introduce the convective derivative

$\frac{d}{dt} = \frac{\partial}{\partial t} + \sum_k \left[ \dot{x}_k \frac{\partial}{\partial x_k} + \dot{p}_k \frac{\partial}{\partial p_k} \right] .$

Combining (1.6) and (1.7), we obtain the transport equation of the phase fluid:

$\frac{\partial \rho}{\partial t} + \sum_k \left[ \dot{x}_k \frac{\partial \rho}{\partial x_k} + \dot{p}_k \frac{\partial \rho}{\partial p_k} \right] = 0 \quad \text{or} \quad \frac{d\rho}{dt} = 0 . \qquad (1.8)$
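The incompressibility of the phase flow can be illustrated numerically (an illustration not from the original; a harmonic-oscillator Hamiltonian is chosen as an assumption): a symplectic integrator maps a closed contour of initial conditions in the (x, p) plane to a contour enclosing the same area.

```python
import math

def leapfrog(x, p, dt, steps, omega=1.0):
    """Symplectic (leapfrog) integrator for H = p^2/2 + omega^2 x^2 / 2."""
    for _ in range(steps):
        p -= 0.5 * dt * omega**2 * x
        x += dt * p
        p -= 0.5 * dt * omega**2 * x
    return x, p

def polygon_area(pts):
    """Shoelace formula for the area enclosed by points in the (x, p) plane."""
    a = 0.0
    for (x1, p1), (x2, p2) in zip(pts, pts[1:] + pts[:1]):
        a += x1 * p2 - x2 * p1
    return abs(a) / 2.0

# a small circle of initial conditions in phase space
pts0 = [(1.0 + 0.1 * math.cos(t), 0.1 * math.sin(t))
        for t in (2 * math.pi * k / 100 for k in range(100))]
pts1 = [leapfrog(x, p, dt=0.01, steps=500) for x, p in pts0]

print(polygon_area(pts0), polygon_area(pts1))  # the two areas coincide: the phase flow is incompressible
```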

By virtue of the ergodic hypothesis, the density of the number of points in phase space is proportional to the probability density $w$ in the ensemble of states. Therefore equation (1.8) can be written as

$\frac{dw}{dt} = 0 . \qquad (1.9)$

In a state of equilibrium with constant external parameters, the energy of the microsystem, represented by the Hamiltonian, is conserved along a trajectory in phase space. In the same way, by (1.9), the probability density is conserved. It follows that the probability density is a function of the energy:

$w = w(H) . \qquad (1.10)$

The form of the dependence of $w$ on $H$ is easy to obtain by noting that the energies of subsystems add, while their probabilities multiply. The only functional dependence satisfying this condition is

$w = \exp\!\left( \frac{F - H}{kT} \right) . \qquad (1.11)$

This distribution is called canonical. Here $k$ is the Boltzmann constant; the quantities $F$ and $kT$ have the dimension of energy, and $F$ and $T$ are called the free energy and the temperature.

Let us define the internal energy as the average value of the true energy:

$E = \langle H \rangle = \int H w \, \frac{d\Gamma}{h^{n}} . \qquad (1.12)$

Substituting (1.11) here, we get

$E = \int H \exp\!\left( \frac{F - H}{kT} \right) \frac{d\Gamma}{h^{n}} .$

Entropy is defined as

$S = -k \langle \ln w \rangle = -k \int w \ln w \, \frac{d\Gamma}{h^{n}} . \qquad (1.13)$

Relation (1.13) introduces a new concept, entropy. The second law of thermodynamics states that in a nonequilibrium state of a system its entropy tends to increase, while in a state of thermodynamic equilibrium the entropy remains constant. Combining (1.12) and (1.13), and noting that $\ln w = (F - H)/kT$, we obtain

$F = E - TS . \qquad (1.14)$

Relation (1.14) is the basis for deriving the other thermodynamic functions that describe the equilibrium state of the subsystem.
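Relations (1.11)-(1.14) can be checked numerically on a toy discrete spectrum (the three energy levels are an arbitrary assumption for illustration): the free energy computed from the partition function satisfies F = E - TS.

```python
import math

def canonical(levels, kT):
    """Canonical distribution over discrete energy levels.

    Returns (F, E, S) with weights w_i = exp((F - E_i)/kT), the discrete
    analogue of (1.11); k is absorbed into kT, and S is in units of k.
    """
    Z = sum(math.exp(-e / kT) for e in levels)       # partition function
    F = -kT * math.log(Z)                            # free energy
    w = [math.exp((F - e) / kT) for e in levels]     # canonical weights
    E = sum(wi * e for wi, e in zip(w, levels))      # internal energy (1.12)
    S = -sum(wi * math.log(wi) for wi in w)          # entropy (1.13), units of k
    return F, E, S

F, E, S = canonical([0.0, 1.0, 2.0], kT=1.0)
print(F, E, S, F - (E - 1.0 * S))   # the last value is ~0, verifying F = E - TS (1.14)
```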

Suppose that inside the phase volume $\Delta\Gamma$ of the given subsystem the probability density is almost constant; in other words, the subsystem is weakly coupled to its environment and is in a state of equilibrium. For it the relation

$w = \mathrm{const} \cdot \delta(H - E) \qquad (1.15)$

holds, where $\delta$ is the delta function.

This distribution is called microcanonical, in contrast to the canonical distribution (1.11). At first glance the two distributions seem very different and even contradictory. In fact there is no contradiction between them. Consider a sphere of some radius in a multidimensional phase space of very many dimensions. In a thin spherical layer equidistant (in energy) from its surface, the number of points significantly exceeds the number of points inside the sphere. It is for this reason that distributions (1.11) and (1.15) differ little from each other.
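This geometric fact is easy to verify directly: the volume of a $d$-dimensional ball scales as $R^d$, so the fraction of its volume lying in an outer shell of relative thickness eps is $1 - (1 - \text{eps})^d$, which tends to unity as $d$ grows.

```python
def shell_fraction(d, eps):
    """Fraction of a d-dimensional ball's volume inside the outer shell of
    relative thickness eps: volume ~ R**d, so fraction = 1 - (1 - eps)**d."""
    return 1.0 - (1.0 - eps) ** d

for d in (3, 30, 300, 3000):
    print(d, shell_fraction(d, 0.01))  # a 1% shell holds almost everything at large d
```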

In order to satisfy the normalization relation (1.4), the probability density must equal

$w = \frac{\delta(H - E)}{\Omega(E)} , \qquad \Omega(E) = \int \delta(H - E) \, \frac{d\Gamma}{h^{n}} . \qquad (1.16)$

Let us substitute distribution (1.11) into the normalization relation (1.4),

$\int \exp\!\left( \frac{F - H}{kT} \right) \frac{d\Gamma}{h^{n}} = 1 ,$

and differentiate it. Taking into account that $H$ is a function of the macrocoordinates, we have

$\int \left[ \frac{dF - \sum_i \frac{\partial H}{\partial q_i} \, dq_i}{kT} - \frac{F - H}{kT^{2}} \, dT \right] \exp\!\left( \frac{F - H}{kT} \right) \frac{d\Gamma}{h^{n}} = 0 ,$

whence, by (1.12) and (1.13),

$dF = -S \, dT + \sum_i \left\langle \frac{\partial H}{\partial q_i} \right\rangle dq_i .$

Using (1.14), we transform this expression:

$dE = T \, dS + \sum_i F_i \, dq_i , \qquad F_i = \left\langle \frac{\partial H}{\partial q_i} \right\rangle . \qquad (1.17a)$

Here $T \, dS = \delta Q$ is the heat flow and $\sum_i F_i \, dq_i = \delta A$ is the work of external forces. This relation was first derived by Gibbs and bears his name. For a gas it has a particularly simple form:

$dE = T \, dS - P \, dV , \qquad (1.17b)$

where $P$ is the pressure and $V$ the volume.

At the phenomenological level a definition of temperature is also given. Note that the heat flow is not the differential of a thermodynamic function, whereas the entropy is such by definition. For this reason expression (1.17) contains the integrating factor $1/T$, and $T$ is called the temperature. One can take some working body (water or mercury) and introduce a scale of temperature change; such a body is called a thermometer. Let us write (1.17) in the form

$dS = \frac{1}{T} \left( dE - \sum_i F_i \, dq_i \right) .$

The temperature in this relation is an intensive quantity.

Generalized forces and displacements are thermodynamically conjugate quantities. Likewise, temperature and entropy are conjugate quantities, of which one is a generalized force and the other a generalized displacement. From (1.17) it follows that

$T = \frac{\partial E}{\partial S} , \qquad F_i = \frac{\partial E}{\partial q_i} . \qquad (1.18)$

By virtue of (1.14), the free energy obeys a similar differential expression,

$dF = -S \, dT + \sum_i F_i \, dq_i . \qquad (1.19)$

In this relation temperature and entropy, as conjugate quantities, change places, and expression (1.18) is modified to

$S = -\frac{\partial F}{\partial T} , \qquad F_i = \frac{\partial F}{\partial q_i} . \qquad (1.20)$
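Relation (1.20), S = -∂F/∂T, can be checked numerically on a toy discrete spectrum (the level values are an arbitrary assumption; units with k = 1): a finite-difference derivative of F = -kT ln Z reproduces the ensemble entropy.

```python
import math

def free_energy(levels, kT):
    """F = -kT ln Z for a discrete spectrum (units with k = 1)."""
    return -kT * math.log(sum(math.exp(-e / kT) for e in levels))

def ensemble_entropy(levels, kT):
    """S = -sum w ln w with canonical weights w_i = exp((F - E_i)/kT)."""
    F = free_energy(levels, kT)
    w = [math.exp((F - e) / kT) for e in levels]
    return -sum(wi * math.log(wi) for wi in w)

levels, T, h = [0.0, 0.7, 1.9], 1.3, 1e-5
S_diff = -(free_energy(levels, T + h) - free_energy(levels, T - h)) / (2 * h)
print(S_diff, ensemble_entropy(levels, T))   # the two values agree: S = -dF/dT
```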

In order to use these relationships, it is necessary to specify independent defining parameters and expressions for the thermodynamic functions.

A stricter definition of temperature can be given. Consider, for example, a closed (isolated) system consisting of two bodies in a state of thermodynamic equilibrium. Energy and entropy are additive quantities:

$E = E_1 + E_2 , \qquad S = S_1 + S_2 .$

Note that entropy is a function of energy, so $S_1 = S_1(E_1)$ and $S_2 = S_2(E_2)$. In the equilibrium state the entropy is stationary with respect to redistribution of energy between the two subsystems, i.e., with $dE_2 = -dE_1$,

$dS = \frac{\partial S_1}{\partial E_1} \, dE_1 + \frac{\partial S_2}{\partial E_2} \, dE_2 = \left( \frac{\partial S_1}{\partial E_1} - \frac{\partial S_2}{\partial E_2} \right) dE_1 = 0 .$

From this it directly follows that

$\frac{\partial S_1}{\partial E_1} = \frac{\partial S_2}{\partial E_2} . \qquad (1.21)$

The derivative of energy with respect to entropy is called the absolute temperature (or simply the temperature); this also follows directly from (1.17). Relation (1.21) means something more: in a state of thermodynamic equilibrium the temperatures of the bodies are equal,

$T_1 = T_2 . \qquad (1.22)$

Statistical physics occupies a prominent place in modern science and deserves special consideration. It describes how the parameters of a macrosystem are formed from the motions of particles; for example, thermodynamic parameters such as temperature and pressure are reduced to the momentum-energy characteristics of molecules. It does this by specifying some probability distribution. The adjective "statistical" comes from the Latin word status ("state"). This word alone is not enough to express the specifics of statistical physics: indeed, any physical science studies the states of physical processes and bodies. Statistical physics deals with an ensemble of states. The ensemble in this case presupposes a plurality of states, and not arbitrary ones, but ones correlating with the same aggregate state, which possesses integrative characteristics. Thus statistical physics involves a hierarchy of two levels, often called microscopic and macroscopic, and accordingly it examines the relationship between micro- and macrostates. The integrative features mentioned above are constituted only if the number of microstates is sufficiently large. For specific states this number has a lower and an upper limit, whose determination is a separate task.

As already noted, a characteristic feature of the statistical approach is the need to refer to the concept of probability. Using distribution functions, statistical average values (mathematical expectations) are calculated for characteristics that are inherent, by definition, at both the micro and macro levels. The connection between the two levels becomes particularly clear. The probabilistic measure of macrostates is the entropy S. According to the Boltzmann formula, it is proportional to the logarithm of the statistical weight W, i.e., of the number of ways of realizing a given macroscopic state:

$S = k \ln W .$

Entropy is greatest in the equilibrium state of the statistical system.
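The Boltzmann formula can be illustrated on a toy model (an assumption for illustration: N two-state particles, the macrostate being the number n of particles in the upper state): the statistical weight is the binomial coefficient, and the entropy S = ln W (in units of k) is greatest for the equilibrium macrostate.

```python
from math import comb, log

def entropy(N, n):
    """Entropy (in units of k) of the macrostate 'n of N two-state particles up':
    statistical weight W = C(N, n), Boltzmann formula S = ln W."""
    return log(comb(N, n))

N = 100
n_eq = max(range(N + 1), key=lambda n: entropy(N, n))
print(n_eq)  # the most probable (equilibrium) macrostate: n = 50
```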

The statistical approach was developed within the framework of classical physics, and it seemed that it was not applicable in quantum physics. In reality the situation turned out to be fundamentally different: in the quantum domain statistical physics is not limited to classical concepts and acquires a more universal character, while the very content of the statistical method is significantly refined.

The character of the wave function is of decisive importance for the fate of the statistical method in quantum physics. The wave function determines not the values of physical parameters but the probabilistic law of their distribution. This means that the main condition of statistical physics is satisfied, i.e., the assignment of a probability distribution. Its presence is a necessary and, apparently, sufficient condition for the successful extension of the statistical approach to the entire field of quantum physics.

In the field of classical physics it seemed that the statistical approach was not necessary, and that when it was used, this was only due to the temporary absence of methods truly adequate to the nature of physical processes. Dynamic laws, through which unambiguous predictability is achieved, were held to be more relevant than statistical ones.

Future physics, it was said, would make it possible to explain statistical laws by means of dynamic ones. But the development of quantum physics presented scientists with a complete surprise.

In fact, it became clear that it is not dynamic but statistical laws that are primary. It was statistical regularities that made it possible to explain dynamic laws: the so-called unambiguous description is simply a record of the most probable events. What is relevant is not unambiguous Laplacean determinism but probabilistic determinism (see paradox 4 from paragraph 2.8).

Quantum physics, by its very essence, is a statistical theory. This circumstance testifies to the enduring importance of statistical physics. In classical physics, the statistical approach does not require solving the equations of motion. Therefore, it seems that it is essentially not dynamic, but phenomenological. The theory answers the question “How do processes occur?”, but not the question “Why do they happen this way and not differently?” Quantum physics gives the statistical approach a dynamic character, phenomenology acquires a secondary character.

Statistical physics and thermodynamics

Statistical and thermodynamic research methods. Molecular physics and thermodynamics are branches of physics that study macroscopic processes in bodies, i.e., processes associated with the huge number of atoms and molecules the bodies contain. To study these processes, two qualitatively different and mutually complementary methods are used: the statistical (molecular kinetic) and the thermodynamic. The first underlies molecular physics, the second thermodynamics.

Molecular physics - a branch of physics that studies the structure and properties of matter based on molecular kinetic concepts, based on the fact that all bodies consist of molecules in continuous chaotic motion.

The idea of the atomic structure of matter was expressed by the ancient Greek philosopher Democritus (460-370 BC). Atomism was revived only in the 17th century, developing in works whose views on the structure of matter and thermal phenomena were close to modern ones. The rigorous development of molecular theory dates to the middle of the 19th century and is associated with the works of the German physicist R. Clausius (1822-1888), J. Maxwell, and L. Boltzmann.

The processes studied by molecular physics are the result of the combined action of a huge number of molecules. The laws of behavior of a huge number of molecules, being statistical laws, are studied by the statistical method. This method is based on the fact that the properties of a macroscopic system are ultimately determined by the properties of the particles of the system, the features of their motion, and the averaged values of the dynamic characteristics of these particles (speed, energy, etc.). For example, the temperature of a body is determined by the speed of the chaotic motion of its molecules, but since at any moment different molecules have different speeds, it can only be expressed through the average value of the speed of motion of the molecules; one cannot speak of the temperature of a single molecule. Thus the macroscopic characteristics of bodies have physical meaning only in the case of a large number of molecules.

Thermodynamics is a branch of physics that studies the general properties of macroscopic systems in a state of thermodynamic equilibrium and the processes of transition between such states. Thermodynamics does not consider the microprocesses underlying these transformations; in this the thermodynamic method differs from the statistical one. Thermodynamics is based on two principles, fundamental laws established as a result of the generalization of experimental data.

The scope of application of thermodynamics is much wider than that of molecular kinetic theory, since there are no areas of physics and chemistry in which the thermodynamic method cannot be used. On the other hand, the thermodynamic method is somewhat limited: thermodynamics says nothing about the microscopic structure of matter or about the mechanism of phenomena, but only establishes connections between the macroscopic properties of matter. Molecular kinetic theory and thermodynamics complement each other, forming a single whole but differing in their research methods.

Basic postulates of molecular kinetic theory (MKT)

1. All bodies in nature consist of a huge number of tiny particles (atoms and molecules).

2. These particles are in continuous chaotic(disorderly) movement.

3. The movement of particles is related to body temperature, which is why it is called thermal movement.

4. Particles interact with each other.

Evidence of the validity of the MKT: diffusion of substances, Brownian motion, thermal conductivity.

Physical quantities used to describe processes in molecular physics are divided into two classes:

microparameters: quantities that describe the behavior of individual particles (mass of an atom or molecule, speed, momentum, kinetic energy of individual particles);

macroparameters: quantities that cannot be reduced to individual particles but characterize the properties of the substance as a whole. The values of macroparameters are determined by the simultaneous action of a huge number of particles. Macroparameters are temperature, pressure, concentration, etc.

Temperature is one of the basic concepts that plays an important role not only in thermodynamics but in physics in general. Temperature is a physical quantity characterizing the state of thermodynamic equilibrium of a macroscopic system. In accordance with the decision of the XI General Conference on Weights and Measures (1960), only two temperature scales may currently be used, the thermodynamic and the International Practical, graduated respectively in kelvins (K) and degrees Celsius (°C).

On the thermodynamic scale, the freezing point of water is 273.15 K (at the same pressure as in the International Practical Scale); therefore, by definition, the thermodynamic temperature T and the International Practical temperature t are related by

T = 273.15 + t.

The temperature T = 0 K is called absolute zero. Analysis of various processes shows that 0 K is unattainable, although it may be approached as closely as desired. 0 K is the temperature at which, theoretically, all thermal motion of the particles of a substance should cease.

In molecular physics a relationship is derived between macroparameters and microparameters. For example, the pressure of an ideal gas can be expressed by the formula

p = (1/3) n m₀ ⟨v²⟩,

where m₀ is the mass of one molecule, n is the concentration, and ⟨v²⟩ is the mean square of the molecular speed. From this basic MKT equation one can obtain an equation convenient for practical use:

p = n k T.

An ideal gas is an idealized gas model in which it is assumed that:

1. the intrinsic volume of gas molecules is negligible compared to the volume of the container;

2. there are no interaction forces between molecules (no attraction or repulsion at a distance);

3. collisions of molecules with each other and with the walls of the vessel are absolutely elastic.

An ideal gas is a simplified theoretical model of a gas. But, the state of many gases under certain conditions can be described by this equation.
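As a quick numerical check of the relation p = nkT obtained from the basic MKT equation (the constants are standard reference values): under normal conditions it gives the well-known concentration of gas molecules.

```python
k = 1.380649e-23       # Boltzmann constant, J/K
T = 273.15             # normal conditions, K
p = 101325.0           # normal atmospheric pressure, Pa

n = p / (k * T)        # molecular concentration from p = n k T
print(n)               # ~2.7e25 m^-3 (the Loschmidt number)
```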

To describe the state of real gases, corrections must be introduced into the equation of state. The presence of repulsive forces, which counteract the penetration of other molecules into the volume occupied by a given molecule, means that the actual free volume in which the molecules of a real gas can move will be smaller: V_m − b, where b is the molar volume occupied by the molecules themselves.

The action of the attractive forces of the gas leads to an additional pressure on the gas, called the internal pressure. According to van der Waals' calculations, the internal pressure is inversely proportional to the square of the molar volume, i.e., p′ = a/V_m², where a is the van der Waals constant characterizing the forces of intermolecular attraction and V_m is the molar volume.

In the end we obtain the equation of state of a real gas, or the van der Waals equation (for one mole):

(p + a/V_m²)(V_m − b) = RT.

Physical meaning of temperature: temperature is a measure of the intensity of the thermal motion of the particles of a substance. The concept of temperature is not applicable to an individual molecule; only for a sufficiently large number of molecules, making up a certain amount of substance, does the term temperature make sense.
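A numerical comparison of the ideal-gas and van der Waals pressures; the constants a and b for CO2 are approximate literature values taken here as assumptions for illustration.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def p_ideal(T, Vm):
    """Ideal-gas pressure for one mole: p = R T / Vm."""
    return R * T / Vm

def p_vdw(T, Vm, a, b):
    """van der Waals pressure: (p + a/Vm^2)(Vm - b) = R T."""
    return R * T / (Vm - b) - a / Vm**2

# approximate literature constants for CO2 (an illustrative assumption)
a_co2, b_co2 = 0.364, 4.27e-5      # J*m^3/mol^2 and m^3/mol

T, Vm = 300.0, 1e-3                # 300 K, one litre per mole
print(p_ideal(T, Vm), p_vdw(T, Vm, a_co2, b_co2))
# at this density the attraction term noticeably lowers the pressure
```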

For an ideal monatomic gas we can write the equation for the average kinetic energy of a molecule:

⟨ε⟩ = (3/2) k T.

The first experimental determination of molecular speeds was carried out by the German physicist O. Stern (1888-1970). His experiments also made it possible to estimate the speed distribution of the molecules.

The "competition" between the potential binding energies of molecules and the kinetic energies of their thermal motion leads to the existence of the various aggregate states of matter.

Thermodynamics

By counting the number of molecules in a given system and estimating their average kinetic and potential energies, we can estimate the internal energy of a given system U.

For an ideal monatomic gas,

U = (3/2) ν R T,

where ν is the number of moles.

The internal energy of a system can change as a result of various processes, for example, performing work on the system or imparting heat to it. So, by pushing a piston into a cylinder in which there is a gas, we compress this gas, as a result of which its temperature increases, i.e., thereby changing (increasing) the internal energy of the gas. On the other hand, the temperature of a gas and its internal energy can be increased by imparting a certain amount of heat to it - energy transferred to the system by external bodies through heat exchange (the process of exchanging internal energies when bodies come into contact with different temperatures).

Thus, we can speak of two forms of energy transfer from one body to another: work and heat. The energy of mechanical motion can be converted into the energy of thermal motion, and vice versa. During these transformations the law of conservation and transformation of energy is observed; applied to thermodynamic processes this law is the first law of thermodynamics, established by generalizing centuries of experimental data: the heat supplied to a system goes to change its internal energy and to perform work against external forces, Q = ΔU + A.

In a cyclic process the system returns to its initial state, so ΔU = 0 and the work done per cycle equals the net heat received: A = Q₁ − Q₂, where Q₁ is the heat taken from the heater and Q₂ the heat given up to the cooler. The heat-engine efficiency is therefore η = A/Q₁ = (Q₁ − Q₂)/Q₁.
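The bookkeeping of the first law over one cycle can be sketched as follows (Python; the heat values are made up purely for illustration):

```python
def heat_engine_efficiency(Q_in, Q_out):
    """eta = A / Q_in = (Q_in - Q_out) / Q_in for one full cycle (delta U = 0)."""
    work = Q_in - Q_out  # first law over a cycle: A = Q1 - Q2
    return work / Q_in

# Illustrative numbers: 1000 J taken from the heater, 700 J dumped to the cooler.
print(heat_engine_efficiency(1000.0, 700.0))  # 0.3, i.e. 30%
```

An efficiency of 1 (100%) would require Q_out = 0, i.e. no heat given to the cooler, which the second law discussed below rules out.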

From the first law of thermodynamics it follows that the efficiency of a heat engine cannot exceed 100%.

Postulating the existence of various forms of energy and the connections between them, the first law of thermodynamics says nothing about the direction of processes in nature. In full accordance with the first law, one can mentally construct an engine in which useful work would be performed solely by reducing the internal energy of a substance. For example, instead of fuel a heat engine could use water: by cooling the water and turning it into ice, work would be done. But such spontaneous processes do not occur in nature.

All processes in nature can be divided into reversible and irreversible.

For a long time one of the main problems of classical natural science remained explaining the physical nature of the irreversibility of real processes. The essence of the problem is that the motion of a single material point, described by Newton's second law (F = ma), is reversible, whereas a system of a large number of material points behaves irreversibly.

If the number of particles under study is small (for example, the two particles in Fig. a), we cannot determine whether the time axis runs from left to right or from right to left, since either sequence of frames is equally possible. Such a phenomenon is reversible. The situation changes significantly if the number of particles is very large (Fig. b). In this case the direction of time is determined unambiguously: from left to right, since it is impossible to imagine that uniformly distributed particles would, by themselves, without any external influence, gather in a corner of the “box”. Such behavior, when the state of a system can change only in a definite sequence, is called irreversible. All real processes are irreversible.

Examples of irreversible processes: diffusion, thermal conductivity, viscous flow. Almost all real processes in nature are irreversible: this is the damping of a pendulum, the evolution of a star, and human life. The irreversibility of processes in nature, as it were, sets the direction on the time axis from the past to the future. The English physicist and astronomer A. Eddington figuratively called this property of time “the arrow of time.”

Why, despite the reversibility of the behavior of one particle, does an ensemble of a large number of such particles behave irreversibly? What is the nature of irreversibility? How to justify the irreversibility of real processes based on Newton's laws of mechanics? These and other similar questions worried the minds of the most outstanding scientists of the 18th–19th centuries.

The second law of thermodynamics establishes the direction of all processes in isolated systems. Although the total amount of energy in an isolated system is conserved, its qualitative composition changes irreversibly.

1. In Kelvin's formulation the second law reads: “No process is possible whose sole result is the absorption of heat from a heater and the complete conversion of this heat into work.”

2. In another formulation: “Heat can spontaneously transfer only from a more heated body to a less heated one.”

3. The third formulation: “Entropy in a closed system can only increase.”

The second law of thermodynamics prohibits the existence of a perpetual motion machine of the second kind, i.e. a machine that, operating in a cycle, would completely convert into work the heat received from a single reservoir. The second law points to the existence of two qualitatively different forms of energy transfer: heat, as a measure of the chaotic motion of particles, and work, associated with ordered motion. Work can always be converted into an equivalent amount of heat, but heat cannot be completely converted into work. Thus, a disordered form of energy cannot be transformed into an ordered one without additional compensating changes.

We convert mechanical work into heat every time we press the brake pedal of a car. But without additional actions, in a closed engine cycle it is impossible to convert all the heat into work. Part of the thermal energy is inevitably spent on heating the engine, and the moving piston also constantly does work against friction forces (this too consumes a supply of mechanical energy).

But the meaning of the second law of thermodynamics turned out to be even deeper.

Another formulation of the second law of thermodynamics is the following statement: the entropy of a closed system is a non-decreasing function, that is, during any real process it either increases or remains unchanged.

The concept of entropy, introduced into thermodynamics by R. Clausius, was initially perceived as artificial. The outstanding French scientist H. Poincaré wrote about this: “Entropy seems somewhat mysterious in the sense that this quantity is inaccessible to any of our senses, although it has the real property of physical quantities, since, at least in principle, it is completely measurable.”

According to Clausius's definition, entropy is a physical quantity whose increment is equal to the amount of heat received by the system divided by the absolute temperature at which it was received:

ΔS = Q/T.
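For a process at constant temperature this increment is easy to evaluate. A sketch in Python, taking the melting of ice as the standard textbook example (the specific heat of fusion is a reference value, the mass is illustrative):

```python
def entropy_change_isothermal(Q, T):
    """Clausius entropy increment dS = Q/T for heat Q received at constant temperature T (K)."""
    return Q / T

# Melting 100 g of ice at 273 K; specific heat of fusion of ice ~3.34e5 J/kg:
m, L, T = 0.100, 3.34e5, 273.0
dS = entropy_change_isothermal(m * L, T)
print(round(dS, 1))  # ~122.3 J/K
```

The entropy of the water grows as the ordered crystal turns into a disordered liquid, in line with the discussion of order and chaos below.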

In accordance with the second law of thermodynamics, in isolated systems, i.e. systems that exchange neither energy nor matter with the environment, a disordered state (chaos) cannot spontaneously transform into order. Thus, in isolated systems entropy can only increase. This regularity is called the principle of increasing entropy. According to this principle, any system strives toward a state of thermodynamic equilibrium, which is identified with chaos. Since the growth of entropy characterizes changes in closed systems over time, entropy acts as a kind of arrow of time.

We call a state with maximum entropy disordered and a state with low entropy ordered. A statistical system, if left to itself, passes from an ordered to a disordered state with the maximum entropy corresponding to the given external and internal parameters (pressure, volume, temperature, number of particles, etc.).

Ludwig Boltzmann connected the concept of entropy with the thermodynamic probability W, the number of microstates by which a given macrostate can be realized:

S = k ln W,

where k is the Boltzmann constant. Thus, any isolated system, left to its own devices, passes over time from a state of order to a state of maximum disorder (chaos).
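Boltzmann's relation can be illustrated by counting microstates directly. In the toy model sketched below (a construction of my own for illustration, not from the text), N distinguishable particles are split between the two halves of a box; W is the number of ways to place n of them in the left half:

```python
import math

k = 1.380649e-23  # J/K, Boltzmann constant

def entropy(N, n):
    """S = k ln W, where W = C(N, n) counts microstates with n of N particles in the left half."""
    W = math.comb(N, n)
    return k * math.log(W)

N = 20
S_corner = entropy(N, 0)       # all particles in one half: W = 1, so S = 0
S_even   = entropy(N, N // 2)  # even split: W = C(20, 10) = 184756, the maximum
print(S_corner, S_even)
```

The even split has by far the most microstates and hence the highest entropy, which is why a gas left to itself spreads uniformly rather than gathering in a corner.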

From this principle follows a pessimistic hypothesis about heat death of the Universe, formulated by R. Clausius and W. Kelvin, according to which:

· the energy of the Universe is always constant;

· the entropy of the Universe is always increasing.

Thus, all processes in the Universe are directed towards achieving a state of thermodynamic equilibrium, corresponding to the state of greatest chaos and disorganization. All types of energy degrade, turning into heat, and the stars will end their existence, releasing energy into the surrounding space. A constant temperature will be established only a few degrees above absolute zero. Lifeless, cooled planets and stars will be scattered in this space. There will be nothing - no energy sources, no life.

This grim prospect dominated physics until the 1960s, although the conclusions of thermodynamics contradicted the results of research in biology and the social sciences. Thus, Darwin's theory of evolution testified that living nature develops primarily in the direction of the improvement and growing complexity of species of plants and animals. History, sociology, economics and other social and human sciences likewise showed that, despite individual zigzags of development, society on the whole progresses.

Experience and practical activity have shown that the concept of a closed or isolated system is a rather crude abstraction that simplifies reality, since in nature it is difficult to find systems that do not interact with the environment. The contradiction began to be resolved when in thermodynamics, instead of the concept of a closed isolated system, the fundamental concept of an open system was introduced, that is, a system exchanging matter, energy and information with the environment.