Analysis of explicit numerical methods for solving stochastic differential equations

Ereshko Art. F.,

Computing Centre of the Russian Academy of Sciences,

Świętokrzyska Academy in Kielce, Poland

ANALYSIS OF EXPLICIT NUMERICAL METHODS FOR SOLVING STOCHASTIC DIFFERENTIAL EQUATIONS

The basic principles of constructing numerical methods for solving stochastic differential equations (SDEs) are considered. The problem of stiffness of SDE systems is analysed. For the one-dimensional Ito SDE, the approximation accuracy of existing explicit numerical methods is compared.

1. Introduction

The analysis and synthesis of stochastic dynamic systems often relies on the numerical solution of SDEs. For a number of tasks, such as filtering, identification, forecasting and optimal control, the numerical integration of the SDE must be performed in real time and, moreover, with a prescribed accuracy and stability. In this regard, a number of problems arise. On the one hand, very few SDEs have analytical solutions (mostly linear SDEs with additive or multiplicative noise, or nonlinear SDEs reducible to linear ones); on the other hand, the physical features of real dynamic systems give rise to stiffness, which adversely affects the numerical solution. Therefore, a particularly important stage in the design of a stochastic dynamic system is the choice of a scheme for the numerical solution of the SDE.

2. Principles of constructing numerical methods for solving stochastic differential equations

Currently, there are several approaches to constructing numerical schemes for solving SDEs. One possibility is to adapt existing schemes for ordinary differential equations (ODEs), taking into account the properties of stochastic integrals; another is to develop special methods specifically for SDEs. Most researchers follow the first approach, since the theory of the numerical solution of ODEs is well developed and it is relatively easy to draw analogies between ODEs and SDEs.

The simplest method (from a computational point of view) for approximating the solution of an SDE is the Euler method, adapted to SDEs by Maruyama in 1955 (the Euler-Maruyama scheme). This scheme satisfies many of the properties required of a numerical method (in particular, it converges, with strong order 0.5), but it also has a number of limitations (it is not always stable, the approximation error is rather high, etc.). To eliminate these shortcomings, and to increase the order of convergence of numerical schemes for solving SDEs, research has been and is still being carried out; its directions can be presented in the form of a diagram (see Fig. 1).
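As an illustration of the computational simplicity of this scheme, the following minimal Python sketch implements the Euler-Maruyama recursion for a scalar Ito SDE dX = a(t, X) dt + b(t, X) dW; the function names and the linear test coefficients are illustrative and are not taken from the paper.

```python
import numpy as np

def euler_maruyama(a, b, x0, t0, T, n_steps, rng):
    """Euler-Maruyama approximation of dX = a(t, X) dt + b(t, X) dW on [t0, T]."""
    dt = (T - t0) / n_steps
    t, x = t0, x0
    path = [x0]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))      # Wiener increment ~ N(0, dt)
        x = x + a(t, x) * dt + b(t, x) * dw    # explicit update, no derivatives needed
        t += dt
        path.append(x)
    return np.array(path)

# Illustrative linear SDE dX = 0.5*X dt + 0.2*X dW (geometric Brownian motion).
rng = np.random.default_rng(0)
path = euler_maruyama(lambda t, x: 0.5 * x, lambda t, x: 0.2 * x,
                      x0=1.0, t0=0.0, T=1.0, n_steps=1000, rng=rng)
print(path[-1])
```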

By analogy with the development of schemes for the numerical solution of ODEs, the order of convergence, the approximation accuracy and the stability can be increased by using a series expansion at the approximation point, i.e., derivatives of various orders of the variable and of the drift and diffusion coefficients. In the literature this approach is called the Taylor method. Its disadvantage is that at each approximation step it is necessary to evaluate multiple stochastic integrals associated with these derivatives. To avoid these computational difficulties, one can instead use several subdivisions of the approximation step (Runge-Kutta methods) or the results of previous steps (multi-step methods).

Both ordinary and stochastic systems of differential equations describing many physical, biological or economic phenomena exhibit "undesirable" behaviour when simulated with conventional numerical schemes and can be classified as ill-posed problems. In most cases, "undesirable" behaviour means very high instability of the numerical solution, associated with the so-called stiffness phenomenon. There are several possible explanations for this phenomenon.

The first reason is related to the technical capabilities of the computer. To achieve the desired accuracy, one can repeatedly subdivide the integration step. On the one hand, this leads to the accumulation of rounding errors and, as a result, to overflow of computer registers. On the other hand, very small integration steps require enormous computing time and likewise lead to the accumulation of rounding errors. The second reason is related to the physical nature of the system under consideration: the system describes processes with very different rates or gradients (this is typical primarily of ill-posed problems). This situation usually appears in boundary-layer problems (hydrodynamics), the skin effect (electromagnetism), chemical kinetics, etc. Finally, stiffness can be caused by both reasons at once. Therefore, when developing stable numerical methods, all of the above situations must be taken into account.

An analysis of the modern literature shows that the construction of numerical methods for stiff systems is in most cases based on the ideas of Hairer and Wanner. In their work they postulated that stiff systems cannot be solved by explicit methods and presented approaches based exclusively on implicit methods. It should be noted, however, that the direct application of these methods always involves an extremely complex procedure for determining the parameters of the scheme, based on a stability region specified in advance for the particular system under consideration. This makes the proposed approaches unsuitable for most of the applications mentioned above, but it allows us to highlight two important mathematical properties of stiffness. First, all stiff systems have a very wide spectrum (equivalently, widely differing Lyapunov exponents). Secondly, by the existence and uniqueness theorem, stiff systems are characterised by large values of the Lipschitz constant.
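The "wide spectrum" property can be quantified for a linear (or linearised) drift by the spread of the eigenvalues of its Jacobian; the 2x2 matrix in the sketch below is an illustrative example, not taken from the sources cited.

```python
import numpy as np

# Illustrative linear drift matrix with widely separated eigenvalues; the ratio
# of the largest to the smallest |Re(lambda)| is a common measure of stiffness.
A = np.array([[-1.0, 0.0],
              [0.0, -1000.0]])
eig = np.linalg.eigvals(A)
stiffness_ratio = max(abs(eig.real)) / min(abs(eig.real))
print(eig, stiffness_ratio)   # ratio of 1000 indicates a stiff system
```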

Thus, the analysis of the principles for constructing numerical schemes for SDEs shows the need for a thorough study of existing methods and, possibly, for a search for new methods when solving specific problems.

3. Explicit strong numerical schemes

Let us write the SDE in the Ito representation in general form

dX_t = a(t, X_t, θ) dt + b(t, X_t, θ) dW_t, X(t_0) = X_0, t ∈ [t_0, T], (3.1)

where W_t is a standard Wiener process, a(t, X_t, θ) and b(t, X_t, θ) are continuously twice differentiable drift and diffusion functions, and θ is a vector of parameters of appropriate dimension.

Obtaining a strong solution of SDE (3.1) is important in many practical problems; the purpose of this work is a comparative analysis of existing strong explicit numerical methods for solving SDEs.

Let us consider the case most common in the financial literature, the one-dimensional equation (3.1), using the following schemes: Euler, Milstein, Taylor, Runge-Kutta and two-step. In the one-dimensional case the Euler scheme has the form

Y_{n+1} = Y_n + a(t_n, Y_n) Δ + b(t_n, Y_n) ΔW_n, (3.2)

where Δ = t_{n+1} - t_n is the integration step and ΔW_n = W_{t_{n+1}} - W_{t_n} ~ N(0, Δ) is the Wiener increment. The Milstein scheme, which has strong order of convergence 1.0, appears as

Y_{n+1} = Y_n + a Δ + b ΔW_n + (1/2) b b'_x ((ΔW_n)² - Δ), (3.3)

where the coefficients and the derivative b'_x are evaluated at (t_n, Y_n).
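For concreteness, here is a short Python sketch of one Milstein step; the derivative b'_x is supplied analytically by the caller (an assumption made for this illustration), and the geometric-Brownian-motion coefficients are illustrative.

```python
import numpy as np

def milstein_step(x, t, dt, dw, a, b, db):
    """One Milstein step for dX = a(t, X) dt + b(t, X) dW.

    db is the derivative of b with respect to x. The extra term
    0.5*b*db*((dW)^2 - dt) raises the strong order from 0.5 to 1.0."""
    bx = b(t, x)
    return x + a(t, x) * dt + bx * dw + 0.5 * bx * db(t, x) * (dw * dw - dt)

# Illustrative use on dX = 0.5*X dt + 0.2*X dW, where b(t, x) = 0.2*x and db = 0.2.
rng = np.random.default_rng(1)
dt, x, t = 1e-3, 1.0, 0.0
for _ in range(1000):
    dw = rng.normal(0.0, np.sqrt(dt))
    x = milstein_step(x, t, dt, dw,
                      a=lambda t, x: 0.5 * x,
                      b=lambda t, x: 0.2 * x,
                      db=lambda t, x: 0.2)
    t += dt
print(x)
```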

The Taylor scheme of higher strong order (3.4) extends (3.3) with terms containing higher derivatives of the drift and diffusion coefficients and an additional Gaussian random variable ΔZ_n correlated with ΔW_n.

The two-step scheme (3.5) uses, in addition to Y_n, the approximation Y_{n-1} obtained at the previous step together with the corresponding Wiener increments.

Finally, the Runge-Kutta scheme (3.6) replaces the derivatives appearing in (3.3) and (3.4) by finite differences through supporting values of the solution, so that only evaluations of the drift and diffusion functions themselves are required.
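Since the original formula (3.6) is not reproduced here, the sketch below shows the standard derivative-free explicit strong order-1.0 scheme of Kloeden and Platen, which replaces the derivative b'_x of the Milstein term by a finite difference through a supporting value; it may differ in detail from the scheme used in the paper.

```python
import numpy as np

def rk_platen_step(x, t, dt, dw, a, b):
    """Derivative-free explicit strong order-1.0 step (Kloeden-Platen type):
    the Milstein derivative is replaced by a finite difference through the
    supporting value x_sup = x + a*dt + b*sqrt(dt)."""
    ax, bx = a(t, x), b(t, x)
    x_sup = x + ax * dt + bx * np.sqrt(dt)
    return (x + ax * dt + bx * dw
            + (b(t, x_sup) - bx) * (dw * dw - dt) / (2.0 * np.sqrt(dt)))
```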

4. Comparative analysis of approximation accuracy

Let Y_N denote the numerical approximation and X_T the analytical solution of SDE (3.1) at the end of the integration interval [t_0, T]. The approximation accuracy is assessed by the "absolute error" criterion

ε = E|X_T - Y_N|, (4.1)

where E is the mathematical expectation operator.

Let us replace the theoretical value of the "absolute error" criterion (4.1) by its statistical analogue based on Monte Carlo simulation. If M independent trajectories of the Wiener process are simulated and X_T^(j), Y_N^(j) denote the exact and the numerical solutions on the j-th trajectory, then the statistical analogue of criterion (4.1) is

ε̂ = (1/M) Σ_{j=1..M} |X_T^(j) - Y_N^(j)|. (4.2)
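The criterion (4.2) can be estimated with the following sketch, which advances a given numerical scheme and evaluates the exact solution on the same simulated Wiener path; the helper names (scheme_step, exact_T) are illustrative.

```python
import numpy as np

def absolute_error(scheme_step, exact_T, a, b, x0, T, n_steps, n_paths, seed=0):
    """Statistical analogue (4.2) of criterion (4.1): average |X_T - Y_N| over
    n_paths trajectories, with the exact solution evaluated on the same
    Wiener path as the numerical approximation."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        dw = rng.normal(0.0, np.sqrt(dt), n_steps)
        x, t = x0, 0.0
        for k in range(n_steps):
            x = scheme_step(x, t, dt, dw[k], a, b)
            t += dt
        total += abs(exact_T(x0, T, dw.sum()) - x)   # dw.sum() equals W_T
    return total / n_paths
```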

Let us compare the above schemes using the absolute-error criterion. As a first test example we study a linear SDE with constant homogeneous coefficients,

dX_t = a X_t dt + σ X_t dW_t, X(t_0) = X_0, (4.3)

whose analytical solution has the form

X_t = X_0 exp((a - σ²/2)(t - t_0) + σ(W_t - W_{t_0})).
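Assuming, as above, that the first test equation is the homogeneous linear SDE (4.3), its exact solution can be evaluated from the simulated value W_T and compared with the Euler approximation for a sequence of step lengths; the parameter values below are illustrative, not those of the paper.

```python
import numpy as np

mu, sigma, x0, T = 0.5, 0.2, 1.0, 1.0   # illustrative coefficients for (4.3)
n_paths = 2000
rng = np.random.default_rng(2)

for n_steps in (50, 100, 200, 400):
    dt, err = T / n_steps, 0.0
    for _ in range(n_paths):
        dw = rng.normal(0.0, np.sqrt(dt), n_steps)
        y = x0
        for k in range(n_steps):
            y += mu * y * dt + sigma * y * dw[k]               # Euler step
        x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dw.sum())
        err += abs(x_exact - y)
    print(n_steps, err / n_paths)   # the error shrinks roughly like sqrt(dt)
```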

The second test example is a nonlinear Ito SDE reducible, by a smooth change of variables, to a linear one and therefore possessing a closed-form general solution. In particular, for equation (4.4), a member of this family, an analytical solution is available.

For each test equation the integration step length Δ, the number of simulated trajectories M and the approximation accuracy (4.2) were varied. The calculation results are given in Tables 1-3; let us analyse them using the averaged criterion (4.2).

For the first and second test equations (see Tables 1 and 2), as the integration step length decreases and the order of convergence of the scheme increases, the approximation accuracy improves for all the numerical schemes studied.

However, this cannot be said of the third case, a stiff SDE (see Table 3). The absolute error could be computed for all combinations of the integration step length and the number of trajectories only for the Euler scheme and the two-step scheme.

Table 1. Approximation accuracy of the numerical solution of equation (4.3): absolute error (4.2) for the Euler, Milstein, Taylor, two-step and Runge-Kutta schemes at several integration step lengths and numbers of trajectories (values omitted).

For the Milstein, Taylor and Runge-Kutta schemes, register overflow occurred at the remaining combinations of step length and number of trajectories, which made further calculations impossible.

Thus it can be noted that, unlike the ODE case, when numerically integrating stiff SDEs one should use "simple" explicit methods, i.e., avoid methods that use multiple subdivisions of the approximation step or derivatives of the drift and diffusion functions. If a numerical solution of an SDE is required in tasks such as filtering or identification of SDE parameters by a Monte Carlo procedure, the integration step must in addition be chosen small enough for the explicit scheme to remain stable.
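The step-size restriction behind this recommendation can be seen already for the linear test SDE dX = λX dt + σX dW: the Euler-Maruyama scheme is mean-square stable only if (1 + λΔ)² + σ²Δ < 1, so for a stiff problem (large |λ|) the step must be very small. The parameter values below are illustrative.

```python
def euler_ms_stable(lam, sigma, dt):
    """Mean-square stability of Euler-Maruyama for dX = lam*X dt + sigma*X dW:
    E|Y_{n+1}|^2 = ((1 + lam*dt)^2 + sigma^2*dt) * E|Y_n|^2."""
    return (1.0 + lam * dt) ** 2 + sigma ** 2 * dt < 1.0

lam, sigma = -50.0, 1.0                   # illustrative stiff parameters
for dt in (0.1, 0.04, 0.02, 0.01):
    print(dt, euler_ms_stable(lam, sigma, dt))   # stable only for dt well below 2/|lam|
```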

Table 2. Approximation accuracy of the numerical solution of equation (4.4): absolute error (4.2) for the Euler, Milstein, Taylor, two-step and Runge-Kutta schemes at several integration step lengths and numbers of trajectories (values omitted).

Table 3. Approximation accuracy of the numerical solution of the stiff test equation (4.3) with stiff parameter values: absolute error (4.2) for the Euler, Milstein, Taylor, two-step and Runge-Kutta schemes at several integration step lengths and numbers of trajectories (values omitted).

Literature

1. Oksendal B. Stochastic differential equations. Berlin: Springer, 2000.

2. Tikhonov A. N., Arsenin V. Ya. Methods for Solving Ill-Posed Problems. M.: Nauka, 1986.

3. Kloeden P. E., Platen E. Numerical Solution of Stochastic Differential Equations. Berlin: Springer, 1999.

4. Burrage K., Tian T. Stiffly accurate Runge-Kutta methods for stiff stochastic differential equations // Computer Physics Communications. 2001. V. 142. P. 186 – 190.

5. Burrage K., Burrage P., Mitsui T. Numerical solutions of stochastic differential equations – implementation and stability issues // Journal of computational and applied mathematics. 2000. V. 125. P. 171 – 182.

6. Kuznetsov D. F. The three-step strong numerical methods of the orders of accuracy 1.0 and 1.5 for Ito Stochastic differential equations // Journal of Automation and Information Sciences. 2002. V. 34. No. 12. P. 22 – 35.

7. Gaines J. G., Lyons T. J. Variable step size control in the numerical solution of stochastic differential equations // SIAM Journal of Applied Mathematics. 1997. V. 57. No. 5. P. 1455 – 1484.

8. Hairer E., Wanner G. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Berlin: Springer-Verlag, 1996.

9. Lamberton D., Lapeyre B. Introduction to stochastic calculus applied to finance. London: Chapman and Hall. 2000.

10. Shiryaev A. N. Fundamentals of stochastic financial mathematics. M.: FAZIS, 1998.

11. Milstein G. N., Platen E., Schurz H. Balanced implicit methods for stiff stochastic systems // SIAM Journal of Numerical Analysis. 1998. V. 35. P. 1010 – 1019.

12. Filatova D., Grzywaczewski M., McDonald D. Estimating parameters of stochastic differential equations using a criterion function based on the Kolmogorov-Smirnov statistics // ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA. 2004. V. 8. – P. 93 – 99.

13. Nielsen J. N., Madsen H. Applying the EKF to stochastic differential equations with level effects // Automatica. 2001. V. 37. P. 107 – 112.

14. Nielsen J. N., Madsen H., Young P. C. Parametric estimation in stochastic differential equations: an overview // Annual Reviews in Control. 2000. V. 24. P. 83 – 94.

Let us return to the first-order dynamic equation (a system with 1/2 degrees of freedom), an example of which was the equation for small-amplitude fluctuations in a self-oscillator [the first formula in (29.1)], i.e., an equation of the form

dx/dt + γx = 0. (35.1)

We encounter the same equation in the problem of the velocity v of the one-dimensional motion of a particle of mass m in a medium with viscous friction, in the problem of the displacement s of the same particle when it is massless and attached to a spring with elasticity coefficient k, in the problem of the voltage V across the capacitance of an RC circuit, of the current I in an RL circuit, and so on.

In accordance with what was said in § 28, we expect that when the dynamic system (35.1) is acted upon by sufficiently "dense" (compared with the settling time 1/γ) homogeneous shocks, the response x(t) will be a continuous homogeneous Markov process with a transition probability w satisfying the Einstein-Fokker equation

∂w/∂t = -∂(Aw)/∂x + (1/2) ∂²(Bw)/∂x², (35.2)

i.e., equation (29.2), but in the one-dimensional case, when w does not depend on a second variable. According to the method motivated in § 28, the coefficient A in (35.2) is equal to the expression for dx/dt, i.e., to the right-hand side of equation (35.1): A(x) = -γx.

Under the initial condition w(x, 0 | x_0) = δ(x - x_0), the solution of equation (35.2) is expressed by the normal law (35.3), with mean x_0 e^{-γt} and variance

σ_x²(t) = (B/2γ)(1 - e^{-2γt}) (35.4)

[cf. (29.5) and (29.6)]. In the limit γt → ∞, i.e., for t large compared with the relaxation time 1/γ, formula (35.3) passes into a stationary distribution independent of x_0. In the problem of the velocity v of a particle in a viscous medium, when x = v, this stationary distribution must be Maxwellian, so that B = 2γkT/m. Similar expressions for B in the remaining problems listed above follow simply from the theorem on the equipartition of energy over the degrees of freedom: the mean energy of a system with 1/2 degree of freedom must equal kT/2 (in the present case m⟨v²⟩/2 = kT/2).

Such, under the assumptions made, is the purely probabilistic scheme for solving the fluctuation problem. Now we shall proceed differently. Let us introduce a random (fluctuation) force into equation (35.1):

dx/dt + γx = ξ(t). (35.5)

If, for definiteness, we discuss the problem of the motion of a particle in an unbounded viscous medium, then we are dealing with the equation of motion

m dv/dt + hv = F(t), (35.6)

in which the influence of the medium on the particle is divided into two parts: the systematic friction force -hv and the random force F(t). Assuming that the systematic friction force is given by Stokes' law (for a spherical particle of radius a we have h = 6πηa, where η is the viscosity of the liquid), we make two remarks.

First, the condition of laminar flow around the particle must be satisfied, i.e., the Reynolds number must be small:

Re = ρva/η ≪ 1,

where ρ is the density of the liquid. If for v we take the root-mean-square velocity of thermal motion [with m = (4/3)πa³ρ₀, where ρ₀ is the density of the particle material], i.e., if we take into account the fastest motions of the particle, then even for particles of molecular size a the resulting Reynolds number is very small, so that the laminarity condition is satisfied.

Secondly, the total systematic force acting on a sphere moving in a viscous incompressible fluid is given by the Boussinesq formula, which contains, besides the Stokes force -hv, a term with the added mass (equal to half the mass of the liquid displaced by the particle) and a term describing the viscous hydrodynamic aftereffect. In equation (35.6) only the first term of this total force F is retained, although for sufficiently fast motions the second and third terms are of the same order as the first. For the added-mass term this is not essential, since its role reduces merely to a change in the effective mass of the particle. More important is the third term, which expresses the viscous hydrodynamic aftereffect (see §§ 15 and 21); when it is taken into account, the system acquires an infinite number of degrees of freedom.

In the presence of the viscous (and thereby probabilistic) aftereffect, the mean-square displacement of a particle was found by V. V. Vladimirsky and Ya. P. Terletsky. The usual expression turns out to be valid only for time intervals t sufficiently large compared with the relaxation time. We shall restrict ourselves to the simplified formulation of the problem based on equation (35.5).

We shall treat this stochastic equation as if it were an ordinary differential equation. Integrating it under the initial condition x(0) = x_0, we get

x(t) = x_0 e^{-γt} + ∫_0^t e^{-γ(t-t')} ξ(t') dt'. (35.7)

Since, by assumption, ⟨ξ(t)⟩ = 0, averaging (35.7) over the ensemble of random forces gives

⟨x(t)⟩ = x_0 e^{-γt}, (35.8)

that is, for ⟨x⟩ the same dynamic law is obtained as from equation (35.1) and from the Einstein-Fokker equation (35.2). Let us now find the variance. According to (35.7) and (35.8),

σ_x²(t) = ⟨[x(t) - ⟨x(t)⟩]²⟩ = ∫_0^t ∫_0^t e^{-γ(2t - t' - t'')} ⟨ξ(t')ξ(t'')⟩ dt' dt'', (35.9)

and, therefore, to obtain σ_x² it is necessary to specify the correlation function of the random force, ⟨ξ(t')ξ(t'')⟩. One may take any correlation function allowed by the general restrictions on its form, but we shall make a special assumption, namely, that ξ(t) is a stationary delta-correlated process:

⟨ξ(t')ξ(t'')⟩ = C δ(t' - t''), (35.10)

where C is a constant. Note that the force impulse ∫_0^t ξ(t') dt' is then a continuous random function with independent increments and is therefore normally distributed for any t (§ 34).

Substituting (35.10) into (35.9), we find

σ_x²(t) = (C/2γ)(1 - e^{-2γt}). (35.11)

If we put C = B, this coincides with expression (35.4) for σ_x² obtained from the Einstein-Fokker equation (35.2).
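This variance can be checked numerically with a short sketch: the delta-correlated force contributes an increment of standard deviation sqrt(C dt) per Euler step, so (35.5) is integrated as dx = -γx dt + sqrt(C) dW. The values of γ, C and T are illustrative.

```python
import numpy as np

gamma, C, x0, T, dt = 2.0, 1.0, 1.0, 1.0, 1e-3
n_steps, n_paths = int(T / dt), 5000
rng = np.random.default_rng(3)

x = np.full(n_paths, x0)
for _ in range(n_steps):
    # Euler step for dx = -gamma*x dt + sqrt(C) dW (delta-correlated force)
    x += -gamma * x * dt + np.sqrt(C * dt) * rng.normal(size=n_paths)

var_sim = x.var()
var_theory = (C / (2.0 * gamma)) * (1.0 - np.exp(-2.0 * gamma * T))
print(var_sim, var_theory)   # the sample variance approaches (35.11)
```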

We have found only the moments, but more can be said. Since the force impulse is normally distributed over any interval, the difference x(t) - x_0 e^{-γt} is, according to (35.7), a sum (more precisely, the limit of a sum) of normally distributed quantities. Consequently, the distribution of x(t) is also Gaussian, with variance (35.11). This conditional distribution (conditional on x(0) = x_0) simply coincides with (35.3). Further, it is easy to verify by direct substitution that conditional probabilities of this type satisfy the Smoluchowski equation (they are transition probabilities), i.e., the process turns out to be Markovian. Thus, if in the stochastic differential equation (35.5) the random force ξ(t) is stationary and delta-correlated [see (35.10)], then the response x(t) is a diffusion Markov process whose transition probability satisfies the Einstein-Fokker equation with A(x) = -γx and B = C.

Both approaches, the one based on the Einstein-Fokker equation and the one based on the stochastic differential equation for the random function itself, turn out to be equivalent in the problem considered. This, of course, does not mean that they are identical beyond this problem. The Einstein-Fokker equation has, for example, an undoubted advantage in cases where certain restrictions are imposed on the set of possible values of the random function (the presence of reflecting or absorbing walls, etc.), which are taken into account simply by the corresponding boundary conditions. In the Langevin formulation of the problem, introducing such restrictions is quite difficult. On the other hand, as has already been emphasised, the Langevin method does not require the force to be delta-correlated.

It is perhaps worth noting that precisely in the case of a delta-correlated force, operating with the differential equation (35.5) has, in a certain sense, a conditional character. This equation is written not for the mean value ⟨x⟩ but for the instantaneous value x(t). But with infinitely frequent shocks the response x(t) is not a differentiable function, that is, dx/dt does not exist (in any of the probabilistic senses of the concept of a derivative). Thus the "differential equation" itself has only a certain symbolic meaning. This must be understood as follows.

Formal integration of equation (35.5) leads to the solution (35.7) for x(t), which no longer presents any difficulties, since the delta-correlated force ξ appears in it only under the integral sign. In other words, equation (35.5) is (in the case of a delta-correlated force) a mathematically loose notation for the subsequent, already perfectly meaningful and, ultimately, the only solution of interest to us, (35.7). The justification for this approach lies in the well-known advantages of operating with differential equations when formulating a problem: the possibility of proceeding from general dynamical laws, the possibility of using the entire existing arsenal of mathematical tools for obtaining a solution, and so on. Not to mention that for a non-delta-correlated force all these reservations become unnecessary: stochastic differential equations for the random functions themselves then acquire a perfectly definite mathematical content and, moreover, allow one to go beyond the class of Markov processes.

The constant C in the correlation function (35.10) obviously characterises the intensity of the random shocks. Let us return to variables in which the force and the response of the system are energetically conjugate, i.e., the product of the force and the time derivative of the response is the power delivered to the system. This is true, for example, for the force F(t) in equation (35.6), since the power transferred to the particle is F(t)v. Equation (35.6) becomes (35.5) upon division by the particle mass m, so that ξ = F/m and, in accordance with (35.10), the correlation function of the actual force is

⟨F(t')F(t'')⟩ = m²C δ(t' - t'').

We established above that in the problem of the velocity of a Brownian particle C = B = 2γkT/m, with γ = h/m. Therefore the constant in the correlation function of the force F is

m²C = 2hkT,

i.e., it is determined only by the coefficient of systematic friction h. In the problem of the current in a circuit, the random force must be understood as the random thermal emf (§ 28) and h as the active resistance R of the circuit, so that the correlation constant for the thermal emf is 2RkT.
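For orientation, the magnitude of this correlation constant is easy to compute; the resistance and temperature below are illustrative values, not taken from the text.

```python
k_B = 1.380649e-23           # Boltzmann constant, J/K
R, T = 1.0e3, 300.0          # illustrative resistance (ohm) and temperature (K)
C_emf = 2.0 * R * k_B * T    # correlation constant 2RkT of the thermal emf, V^2*s
print(C_emf)                 # about 8.3e-18 V^2*s
```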

A stochastic differential equation (SDE) is a differential equation in which one or more terms are of a stochastic nature, that is, they represent a stochastic (random) process. The solutions of such an equation are therefore themselves stochastic processes. The best-known and most frequently used example of an SDE is an equation containing a term describing white noise (which can be regarded as the derivative of a Wiener process). Other types of random fluctuations are also possible, for example jump processes.

Story

In the literature, the first use of an SDE is traditionally associated with the description of Brownian motion, carried out independently by Marian Smoluchowski (1906) and Albert Einstein (1905). However, SDEs had been used somewhat earlier (1900) by the French mathematician Louis Bachelier in his doctoral dissertation "Théorie de la spéculation" ("The Theory of Speculation"). Building on the ideas of that work, the French physicist Paul Langevin began to use SDEs in physics. Later, he and the Russian physicist Ruslan Stratonovich developed a more rigorous mathematical justification for SDEs.

Terminology

In physics, SDEs are traditionally written in the form of a Langevin equation, and, somewhat loosely, the SDE itself is often called a Langevin equation, although it can be written in other ways. An SDE in Langevin form consists of an ordinary (non-stochastic) differential equation plus an additional term describing white noise. The second common form is the Fokker-Planck equation, a partial differential equation that describes the evolution of the probability density in time. The third form, used more often in mathematics and financial mathematics, resembles the Langevin form but is written in terms of stochastic differentials (see below).

Stochastic calculus

Let T > 0, and let a(t, x) and b(t, x) be measurable functions satisfying, for some constants C and D, the linear growth condition

|a(t, x)| + |b(t, x)| ≤ C(1 + |x|)

and the Lipschitz condition

|a(t, x) - a(t, y)| + |b(t, x) - b(t, y)| ≤ D|x - y|.

Let Z be a random variable with finite second moment, independent of the Wiener process B_t. Then the stochastic differential equation with the given initial condition,

dX_t = a(t, X_t) dt + b(t, X_t) dB_t, 0 ≤ t ≤ T, X_0 = Z,

has a unique (in the sense of "almost surely") t-continuous solution X_t such that X is adapted to the filtration generated by Z and B_s, s ≤ t, and E ∫_0^T |X_t|² dt < ∞.

Application of stochastic equations

Physics

In physics, SDEs are often written in the form of a Langevin equation. For example, a first-order SDE system can be written as

dx_i/dt = f_i(x) + Σ_j g_{ij}(x) η_j(t),

where x = (x_1, ..., x_n) is the set of unknowns, f_i and g_{ij} are given functions, and η_j(t) are random functions of time, often called noise terms. This form is used because there is a standard technique for transforming an equation with higher derivatives into a system of first-order equations by introducing new unknowns. If the g_{ij} are constants, the system is said to be subject to additive noise; if they depend on the unknowns x, the noise is called multiplicative. Of these two cases, additive noise is the simpler: a solution of a system with additive noise can often be found using standard methods of ordinary calculus. In the case of multiplicative noise, however, the Langevin equation is not well defined in the sense of ordinary analysis and must be interpreted in terms of the Ito calculus or the Stratonovich calculus.

In physics, the main method of solving SDEs is to seek the solution in the form of a probability density and to transform the original equation into the Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation without stochastic terms; it determines the time evolution of the probability density, just as the Schrödinger equation determines the time evolution of the wave function of a system in quantum mechanics or the diffusion equation determines the evolution of a chemical concentration. Solutions can also be obtained numerically, for example by the Monte Carlo method. Other techniques use the path integral, relying on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be converted into the Schrödinger equation by a change of variables), or solve ordinary differential equations for the moments of the probability density.
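A minimal sketch of the Monte Carlo route: many Euler paths of an SDE are simulated and the histogram of the final values approximates the density that solves the corresponding Fokker-Planck equation. The Ornstein-Uhlenbeck process used here is an illustrative choice.

```python
import numpy as np

# Density of X_T for dX = -theta*X dt + sigma dW, estimated by Monte Carlo;
# the histogram approximates the Fokker-Planck solution at time T.
theta, sigma, x0, T, dt = 1.0, 0.5, 0.0, 2.0, 1e-3
n_paths = 20000
rng = np.random.default_rng(4)

x = np.full(n_paths, x0)
for _ in range(int(T / dt)):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

hist, edges = np.histogram(x, bins=50, density=True)
# Compare the sample spread with the known OU standard deviation at time T.
print(x.std(), np.sqrt(sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))))
```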

Probability theory and financial mathematics

Biology

Chemistry

Links

  • The Stochastic World - A Simple Introduction to Stochastic Differential Equations

Literature

  • Adomian George Stochastic systems. - Orlando, FL: Academic Press Inc., 1983.
  • Adomian George Nonlinear stochastic operator equations. - Orlando, FL: Academic Press Inc., 1986.
  • Adomian George Nonlinear stochastic systems theory and applications to physics. - Dordrecht: Kluwer Academic Publishers Group, 1989.
  • Øksendal Bernt K. Stochastic Differential Equations: An Introduction with Applications. - Berlin: Springer, 2003. - ISBN 3-540-04758-1
  • Teugels, J. and Sund B. (eds.) Encyclopedia of Actuarial Science. - Chichester: Wiley, 2004. - P. 523–527.
  • C. W. Gardiner Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. - Springer, 2004. - P. 415.
  • Thomas Mikosch Elementary Stochastic Calculus: with Finance in View. - Singapore: World Scientific Publishing, 1998. - P. 212. - ISBN 981-02-3543-7
  • Bachelier, L. Théorie de la spéculation (in French), PhD Thesis. - NUMDAM: http://www.numdam.org/item?id=ASENS_1900_3_17__21_0, 1900. - In English in the 1971 book "The Random Character of the Stock Market", ed. P. H. Cootner.

1. Among the Ito processes X = (X_t)_{t≥0} having a stochastic differential

dX_t = a(t, ω) dt + β(t, ω) dB_t, (1)

an important role is played by those whose coefficients a(t, ω) and β(t, ω) depend on ω only through the current value X_t(ω):

a(t, ω) = a(t, X_t(ω)), β(t, ω) = b(t, X_t(ω)), (2)

where a = a(t, x) and b = b(t, x) are measurable functions on R_+ × R. Thus, for example, the process

S_t = S_0 e^{at} e^{σB_t - σ²t/2}, (3)

called geometric, or economic, Brownian motion, has (by Ito's formula) the stochastic differential

dS_t = a S_t dt + σ S_t dB_t. (4)
The process

Y_t = e^{at + σB_t - σ²t/2} ( Y_0 + ∫_0^t e^{-(au + σB_u - σ²u/2)} du ) (5)

has, as is easy to verify, again using Ito's formula, the differential

dY_t = (1 + aY_t) dt + σY_t dB_t. (6)
(The process Y = (Y_t)_{t≥0} plays an important role in problems of quickest detection of changes in the local drift of a Brownian motion.) If

Z_t = S_t ( Z_0 + (c_1 - σc_2) ∫_0^t du/S_u + c_2 ∫_0^t dB_u/S_u ) (7)

with some constants c_1 and c_2, then, again using Ito's formula, one verifies that

dZ_t = (c_1 + aZ_t) dt + (c_2 + σZ_t) dB_t. (8)

In the examples given we started from the "explicit" form of the processes S = (S_t), Y = (Y_t), Z = (Z_t) and, using Ito's formula, obtained their stochastic differentials (4), (6) and (8). One can, however, change the point of view, namely, regard (4), (6) and (8) as stochastic differential equations for the unknown processes S = (S_t), Y = (Y_t), Z = (Z_t) and try to establish that the solutions found, (3), (5) and (7), are (in a certain sense) the only solutions of these equations.
Naturally, one must give a precise meaning to the very notion of a "stochastic differential equation", define what its "solution" is, and specify in what sense the "uniqueness" of a solution is to be understood. In defining all these concepts, discussed below, the key role is played by the notion of the stochastic integral introduced above.

2. We shall assume that a filtered probability space (stochastic basis) (Ω, F, (F_t)_{t≥0}, P) satisfying the usual conditions (item 2, § 7a) is given, and let B = (B_t, F_t)_{t≥0} be a Brownian motion. Let a = a(t, x) and b = b(t, x) be measurable functions on R_+ × R.
Definition 1. We say that the stochastic differential equation

dX_t = a(t, X_t) dt + b(t, X_t) dB_t (9)

with an F_0-measurable initial condition X_0 has a continuous strong solution (or simply a solution) X = (X_t)_{t≥0} if for each t > 0 the variables X_t are F_t-measurable,

P( ∫_0^t |a(s, X_s)| ds < ∞ ) = 1, (10)

P( ∫_0^t b²(s, X_s) ds < ∞ ) = 1, (11)

and (P-a.s.)

X_t = X_0 + ∫_0^t a(s, X_s) ds + ∫_0^t b(s, X_s) dB_s. (12)
Definition 2. Two continuous random processes X = (X_t)_{t≥0} and Y = (Y_t)_{t≥0} are called stochastically indistinguishable if for any t > 0

P( sup_{s≤t} |X_s - Y_s| > 0 ) = 0. (13)
Definition 3. We say that a measurable function f = f(t, x) defined on R_+ × R satisfies the local Lipschitz condition (with respect to the phase variable x) if for every n ≥ 1 there is a constant K(n) such that for all t > 0 and all |x| ≤ n, |y| ≤ n

|a(t, x) - a(t, y)| + |b(t, x) - b(t, y)| ≤ K(n)|x - y|. (14)

Theorem 1 (K. Ito). Let the coefficients a(t, x) and b(t, x) satisfy the local Lipschitz condition and the linear growth condition

|a(t, x)| + |b(t, x)| ≤ K(1 + |x|),

and let the initial condition X_0 be F_0-measurable. Then the stochastic differential equation (9) has a unique (up to stochastic indistinguishability) continuous strong solution X = (X_t, F_t), which is a Markov process.
There are generalisations of this result in various directions: the local Lipschitz condition can be weakened; dependence of the coefficients on ω (of a special nature) can be allowed; cases where the coefficients a = a(t; X_s, s ≤ t) and b = b(t; X_s, s ≤ t) depend on the "past" are considered. There are also generalisations to the multidimensional case, in which X = (X¹, ..., X^d) is a vector process, a = a(t, x) is a vector, b = b(t, x) is a matrix, and B = (B¹, ..., B^d) is a d-dimensional Brownian motion.
Of the various generalisations we mention only one, somewhat unexpected, result of A. K. Zvonkin, which states that for the existence of a strong solution of the stochastic differential equation

dX_t = a(t, X_t) dt + dB_t (15)

there is no need to require the local Lipschitz condition at all; measurability in (t, x) and uniform boundedness of the coefficient a(t, x) are sufficient. (A multidimensional generalisation of this result was obtained by A. Yu. Veretennikov.)

Thus, for example, the stochastic differential equation

dX_t = σ(X_t) dt + dB_t, X_0 = 0, (16)

with the "bad" coefficient

σ(x) = 1 for x ≥ 0, σ(x) = -1 for x < 0, (17)

has a strong solution, and moreover a unique one.
Note, however, that if instead of equation (16) we consider the equation

dX_t = σ(X_t) dB_t, X_0 = 0, (18)

with the same function σ(x), the situation changes dramatically: first, there are probability spaces on which this equation has at least two strong solutions, and, second, on some probability spaces this equation may have no strong solution at all.

To prove the first assertion, consider on the space of continuous functions ω = (ω_t)_{t≥0} with Wiener measure the coordinatewise defined Wiener process W = (W_t)_{t≥0}, i.e., such that W_t(ω) = ω_t, t ≥ 0. Then, by Lévy's theorem (see item 3 in § 3b), the process B = (B_t)_{t≥0} with

B_t = ∫_0^t σ(W_s) dW_s

is also a Wiener process (Brownian motion). And it is easy to see that

∫_0^t σ(W_s) dB_s = ∫_0^t σ²(W_s) dW_s = W_t,

since σ²(x) = 1. Thus, the process W = (W_t)_{t≥0} is (on the probability space under consideration) a solution of equation (18) with the specially chosen Brownian motion B. But since σ(-x) = -σ(x) for x ≠ 0, we also have

∫_0^t σ(-W_s) dB_s = -∫_0^t σ(W_s) dB_s = -W_t,

i.e., together with W = (W_t)_{t≥0} the process -W = (-W_t)_{t≥0} is also a solution of equation (18).
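The argument above can be illustrated with a discretised sketch: starting from a Wiener path W, the Brownian motion B is built from the increments σ(W) dW, and the discretised integrals ∫ σ(X) dB are then computed for X = W and X = -W. Since σ² = 1, the first reproduces W exactly; the second reproduces -W up to a single term coming from the grid point where W = 0 exactly (a null set in continuous time). The discretisation is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 100_000, 1e-4
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

sgn = lambda x: np.where(x >= 0.0, 1.0, -1.0)   # the function sigma(x) from (17)

dB = sgn(W[:-1]) * dW                  # increments of B_t = int sigma(W_s) dW_s

I_plus = np.cumsum(sgn(W[:-1]) * dB)    # int sigma(W) dB  = int sigma^2(W) dW = W
I_minus = np.cumsum(sgn(-W[:-1]) * dB)  # int sigma(-W) dB = -W (up to the point W_0 = 0)

print(np.max(np.abs(I_plus - W[1:])))   # 0: W solves the discretised equation
print(np.max(np.abs(I_minus + W[1:])))  # O(sqrt(dt)): only the grid point W_0 = 0 contributes
```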
As for the second assertion, suppose that the equation

X_t = ∫_0^t σ(X_s) dB_s

has a strong solution with respect to the flow of σ-algebras (F_t^B) generated by the Brownian motion B. From Lévy's theorem it follows that the process X = (X_t, F_t^B) is then itself a Brownian motion. By Tanaka's formula (see § 5c below and compare with the example in § 1b, Chapter II),

|X_t| = ∫_0^t σ(X_s) dX_s + L_t(0),

where

L_t(0) = lim_{ε↓0} (1/2ε) ∫_0^t I(|X_s| ≤ ε) ds

is the (Lévy) local time of the Brownian motion X at zero on the interval [0, t]. Therefore (P-a.s.)

B_t = ∫_0^t σ(X_s) dX_s = |X_t| - L_t(0),

and, consequently, F_t^B ⊆ F_t^{|X|}. The assumption that X is adapted to the flow (F_t^B) would then imply F_t^X ⊆ F_t^B ⊆ F_t^{|X|}, which is impossible, since the σ-algebra generated by |X| is strictly smaller than the one generated by X. Hence equation (18) has no strong solution on this probability space.