The meaning of the word "probability"

Probability as an ontological category reflects the degree of possibility of the emergence of some entity under given conditions. In contrast to the mathematical and logical interpretations of this concept, ontological probability does not bind itself to the obligation of quantitative expression. The meaning of probability is revealed in the context of understanding determinism and the nature of development in general.


PROBABILITY

a concept characterizing the quantitative measure of the possibility of the occurrence of a certain event under certain conditions. Scientific knowledge offers three interpretations of probability. The classical concept of probability, which arose from the mathematical analysis of games of chance and was most fully developed by B. Pascal, J. Bernoulli, and P. Laplace, defines probability as the ratio of the number of favorable cases to the total number of all equally possible cases. For example, when throwing a die with 6 faces, each face can be expected to land up with a probability of 1/6, since no face has an advantage over another. Such symmetry of experimental outcomes is deliberately taken into account when organizing games, but is comparatively rare in the study of objective events in science and practice. The classical interpretation gave way to the statistical concept of probability, which rests on actual observation of the occurrence of a certain event over a long run of experience under precisely fixed conditions. Practice confirms that the more often an event occurs, the greater the degree of the objective possibility of its occurrence, or its probability. The statistical interpretation of probability is therefore based on the concept of relative frequency, which can be determined experimentally. Probability as a theoretical concept never coincides with an empirically determined frequency; in many cases, however, it differs little in practice from the relative frequency found as a result of long observation. Many statisticians regard probability as a "double" of relative frequency, determined through the statistical study of the results of observations

or experiments. Less realistic was the definition of probability as the limit of the relative frequencies of mass events, or collectives, proposed by R. Mises. As a further development of the frequency approach to probability, a dispositional, or propensity, interpretation of probability was put forward (K. Popper, I. Hacking, M. Bunge, T. Settle). According to this interpretation, probability characterizes a property of the generating conditions, for example of an experimental setup, to produce a sequence of mass random events. It is precisely this disposition that gives rise to physical propensities, or predispositions, which can be checked by means of relative frequencies.

The statistical interpretation of probability dominates in scientific cognition, because it reflects the specific nature of the patterns inherent in mass phenomena of a random character. In many physical, biological, economic, demographic, and other social processes it is necessary to take into account the action of many random factors that are characterized by a stable frequency. Identifying these stable frequencies and assessing them quantitatively by means of probability makes it possible to reveal the necessity that forces its way through the cumulative action of many accidents. This is where the dialectic of the transformation of chance into necessity finds its manifestation (see F. Engels, in: K. Marx and F. Engels, Works, vol. 20, pp. 535-36).

Logical, or inductive, probability characterizes the relationship between the premises and the conclusion of non-demonstrative and, in particular, inductive reasoning. Unlike deduction, the premises of induction do not guarantee the truth of the conclusion but only make it more or less plausible. With precisely formulated premises, this plausibility can sometimes be assessed by means of probability. Its value is most often determined by comparative concepts (greater than, less than, or equal to), and sometimes numerically. The logical interpretation is often used to analyze inductive reasoning and to construct various systems of probabilistic logic (R. Carnap, R. Jeffrey). In the semantic concepts of logical probability, probability is often defined as the degree to which one statement is confirmed by others (for example, a hypothesis by its empirical data).

In connection with the development of theories of decision making and games, the so-called personalistic interpretation of probability has become widespread. Although probability here expresses the degree of the subject's belief in the occurrence of a certain event, the probabilities themselves must be chosen so that the axioms of the probability calculus are satisfied. Probability under this interpretation therefore expresses not so much subjective as reasonable belief. Consequently, decisions made on the basis of such probabilities will be rational, because they do not take into account the psychological characteristics and inclinations of the subject.

From an epistemological point of view, the difference between the statistical, logical, and personalistic interpretations of probability is that the first characterizes the objective properties and relations of mass phenomena of a random nature, whereas the last two analyze the features of the subjective, cognitive activity of a human being under conditions of uncertainty.

PROBABILITY

one of the most important concepts of science, characterizing a special systemic vision of the world, its structure, evolution, and knowledge. The specificity of the probabilistic view of the world is revealed through the inclusion of the concepts of randomness, independence, and hierarchy (the idea of levels in the structure and determination of systems) among the basic concepts of existence.

Ideas about probability originated in antiquity and applied to the characteristics of our knowledge: probabilistic knowledge was recognized as distinct both from reliable knowledge and from false knowledge. The impact of the idea of probability on scientific thinking and on the development of knowledge is directly related to the development of probability theory as a mathematical discipline. The origin of the mathematical doctrine of probability dates back to the 17th century, when a core of concepts was developed that admit quantitative (numerical) characterization and express the probabilistic idea.

Intensive application of probability to the development of cognition occurred in the second half of the 19th and the first half of the 20th century. Probability entered the structures of such fundamental sciences of nature as classical statistical physics, genetics, quantum theory, and cybernetics (information theory). Accordingly, probability personifies the stage in the development of science that is now defined as non-classical science. To reveal the novelty and features of the probabilistic way of thinking, it is necessary to proceed from an analysis of the subject of probability theory and the foundations of its numerous applications. Probability theory is usually defined as a mathematical discipline that studies the patterns of mass random phenomena under certain conditions. Randomness means that, within the framework of mass character, the existence of each elementary phenomenon does not depend on and is not determined by the existence of the other phenomena. At the same time, the mass character of the phenomena itself has a stable structure and contains certain regularities. A mass phenomenon divides quite strictly into subsystems, and the relative number of elementary phenomena in each subsystem (the relative frequency) is very stable. This stability is compared with probability. A mass phenomenon as a whole is characterized by a probability distribution, that is, by specifying the subsystems and their corresponding probabilities. The language of probability theory is the language of probability distributions. Accordingly, probability theory is defined as the abstract science of operating with distributions.

Probability gave rise in science to ideas about statistical patterns and statistical systems. The latter are systems formed from independent or quasi-independent entities; their structure is characterized by probability distributions. But how is it possible to form systems from independent entities? It is usually assumed that the formation of systems with integral characteristics requires sufficiently stable connections between their elements that cement the system. Statistical systems acquire stability through the presence of external conditions, an external environment, external rather than internal forces. The very definition of probability always rests on setting the conditions for the formation of the initial mass phenomenon. Another important idea characterizing the probabilistic paradigm is the idea of hierarchy (subordination). This idea expresses the relationship between the characteristics of individual elements and the integral characteristics of systems: the latter are, as it were, built on top of the former.

The importance of probabilistic methods in cognition lies in the fact that they make it possible to study and theoretically express the patterns of structure and behavior of objects and systems that have a hierarchical, “two-level” structure.

Analysis of the nature of probability is based on its frequency, statistical interpretation. At the same time, for a very long time an understanding of probability dominated in science that was called logical, or inductive, probability. Logical probability is concerned with the validity of a separate, individual judgment under certain conditions. Is it possible to assess the degree of confirmation (reliability, truth) of an inductive conclusion (a hypothetical conclusion) in quantitative form? During the development of probability theory such questions were repeatedly discussed, and degrees of confirmation of hypothetical conclusions came to be spoken of. This measure of probability is determined by the information available to a given person, his experience, his views on the world, and his psychological mindset. In all such cases the magnitude of probability is not amenable to strict measurement and lies, practically speaking, outside the competence of probability theory as a consistent mathematical discipline.

The objective, frequentist interpretation of probability established itself in science with considerable difficulty. Initially, the understanding of the nature of probability was strongly influenced by the philosophical and methodological views characteristic of classical science. Historically, the development of probabilistic methods in physics occurred under the determining influence of the ideas of mechanics: statistical systems were interpreted simply as mechanical ones. Since the corresponding problems were not solved by the strict methods of mechanics, assertions arose that turning to probabilistic methods and statistical laws is a result of the incompleteness of our knowledge. In the history of the development of classical statistical physics, numerous attempts were made to substantiate it on the basis of classical mechanics, and they all failed. In fact, probability expresses the structural features of a certain class of systems other than mechanical ones: the state of the elements of such systems is characterized by instability and by a special (not reducible to mechanics) nature of interactions.

The entry of probability into knowledge leads to the denial of the concept of rigid determinism and of the basic model of being and knowledge developed during the formation of classical science. The basic models represented by statistical theories are of a different, more general nature: they include the ideas of randomness and independence. The idea of probability is associated with the disclosure of the internal dynamics of objects and systems, a dynamics that cannot be entirely determined by external conditions and circumstances.

The concept of a probabilistic vision of the world, based on the absolutization of ideas about independence (just as the paradigm of rigid determination was before it), has now revealed its limitations. This shows most strongly in the transition of modern science to analytical methods for studying complex systems and to the physical and mathematical foundations of self-organization phenomena.


Clearly, every event has some degree of possibility of its occurrence (of its realization). To compare events quantitatively by the degree of their possibility, we obviously need to associate with each event a number that is larger the more possible the event is. This number is called the probability of the event.

The probability of an event is a numerical measure of the degree of objective possibility of the occurrence of this event.

Consider a stochastic experiment and a random event A observed in this experiment. Let's repeat this experiment n times and let m(A) be the number of experiments in which event A occurred.

The ratio

ν(A) = m(A) / n (1.1)

is called the relative frequency of event A in the series of experiments performed.

It is easy to verify the validity of the following properties:

0 ≤ ν(A) ≤ 1;

if A and B are incompatible (AB = ∅), then ν(A+B) = ν(A) + ν(B). (1.2)

The relative frequency is determined only after a series of experiments and, generally speaking, can vary from series to series. However, experience shows that in many cases, as the number of experiments increases, the relative frequency approaches a certain number. This fact of stability of the relative frequency has been repeatedly verified and can be considered experimentally established.

Example 1.19. If you toss one coin, no one can predict which side will land up. But if you toss a coin 2000 times, everyone will agree that heads will come up about 1000 times; that is, the relative frequency of heads is approximately 0.5.
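The stability described in this example is easy to see in simulation. Below is a minimal Python sketch (the function name and the fixed seed are our own choices, not part of the text) that tosses a simulated fair coin and prints the relative frequency of heads for series of growing length:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def heads_frequency(n_tosses):
    """Relative frequency of heads in n_tosses simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# As the number of tosses grows, the relative frequency settles near 0.5.
for n in (10, 100, 10_000):
    print(n, round(heads_frequency(n), 3))
```

The longer the series, the closer the printed frequency stays to 0.5, which is exactly the stability property discussed above.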

If, as the number of experiments increases, the relative frequency ν(A) tends to some fixed number, then event A is said to be statistically stable, and this number is called the probability of event A.

The probability of event A is the fixed number P(A) to which the relative frequency ν(A) of this event tends as the number of experiments increases, that is,

P(A) = lim(n→∞) ν(A). (1.3)

This definition is called the statistical definition of probability.

Let's consider a certain stochastic experiment whose space of elementary events consists of a finite or countably infinite set of elementary events ω1, ω2, …, ωi, …. Assume that each elementary event ωi is assigned a certain number pi, characterizing the degree of possibility of the occurrence of that elementary event and satisfying the following properties:

pi ≥ 0,  Σi pi = 1.

This number pi is called the probability of the elementary event ωi.

Let now A be a random event observed in this experiment, and let it correspond to a certain set of elementary events.

In this setting, the probability of event A is the sum of the probabilities of the elementary events favoring A (included in the corresponding set A):

P(A) = Σ (ωi ∈ A) pi. (1.4)

The probability introduced in this way has the same properties as relative frequency, namely:

0 ≤ P(A) ≤ 1;

and if AB = ∅ (A and B are incompatible), then

P(A+B) = P(A) + P(B).

Indeed, according to (1.4),

P(A+B) = Σ (ωi ∈ A+B) pi = Σ (ωi ∈ A) pi + Σ (ωi ∈ B) pi = P(A) + P(B).

In the last relation we used the fact that no elementary event can favor two incompatible events at the same time.

We especially note that probability theory does not indicate methods for determining the pi; they must be sought from practical considerations or obtained from a corresponding statistical experiment.
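Definition (1.4) and the additivity property can be sketched in a few lines of Python. The four-point space and the weights below are purely illustrative assumptions; exact fractions are used so the arithmetic is exact:

```python
from fractions import Fraction

# Hypothetical four-point space of elementary events with assigned
# probabilities p_i; the names w1..w4 and the weights are illustrative only.
p = {"w1": Fraction(1, 10), "w2": Fraction(2, 10),
     "w3": Fraction(3, 10), "w4": Fraction(4, 10)}

assert sum(p.values()) == 1  # the p_i must sum to one

def prob(event):
    """P(A) = sum of the probabilities of the elementary events in A (formula 1.4)."""
    return sum(p[w] for w in event)

A = {"w1", "w3"}
B = {"w2"}  # incompatible with A: no shared elementary events
print(prob(A))                            # 2/5
print(prob(A | B) == prob(A) + prob(B))   # additivity holds: True
```

Because A and B share no elementary events, the probability of their union is exactly the sum of their probabilities, as in property (1.2).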

As an example, consider the classical scheme of probability theory: a stochastic experiment whose space of elementary events consists of a finite number n of elements. Assume additionally that all these elementary events are equally possible, that is, the probabilities of the elementary events are equal: p(ωi) = pi = p. Since the probabilities sum to one, it follows that p = 1/n.

Example 1.20. When throwing a symmetrical coin, getting heads and tails are equally possible, their probabilities are equal to 0.5.

Example 1.21. When throwing a symmetrical die, all faces are equally possible, their probabilities are equal to 1/6.

Now let event A be favored by m elementary events; these are usually called the outcomes favorable to event A. Then

P(A) = m / n. (1.5)

We have obtained the classical definition of probability: the probability P(A) of event A is equal to the ratio of the number of outcomes favorable to event A to the total number of outcomes.

Example 1.22. The urn contains m white balls and n black balls. What is the probability of drawing a white ball?

Solution. The total number of elementary events is m + n. They are all equally possible. Of these, m favor event A. Hence, P(A) = m / (m + n).
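A short Python sketch of the classical definition applied to this urn problem; the specific counts (3 white, 7 black) are chosen by us for illustration:

```python
from fractions import Fraction

def classical_probability(m_favorable, n_total):
    """Classical definition (1.5): P(A) = m/n over equally possible outcomes."""
    return Fraction(m_favorable, n_total)

# Illustrative counts: 3 white and 7 black balls in the urn.
m_white, n_black = 3, 7
p_white = classical_probability(m_white, m_white + n_black)
print(p_white)  # 3/10
```

Using `Fraction` keeps the answer in the exact m/(m+n) form that the solution states.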

The following properties follow from the definition of probability:

Property 1. The probability of a certain event is equal to one.

Indeed, if an event is certain, then every elementary outcome of the trial favors the event. In this case m = n, hence

P(A) = m/n = n/n = 1. (1.6)

Property 2. The probability of an impossible event is zero.

Indeed, if an event is impossible, then no elementary outcome of the trial favors the event. In this case m = 0, therefore P(A) = m/n = 0/n = 0. (1.7)

Property 3. The probability of a random event is a number between zero and one.

Indeed, a random event is favored by only a part of the total number of elementary outcomes of the trial. That is, 0 ≤ m ≤ n, which means 0 ≤ m/n ≤ 1; therefore, the probability of any event satisfies the double inequality 0 ≤ P(A) ≤ 1. (1.8)

Comparing the definitions of probability (1.5) and relative frequency (1.1), we conclude: definition of probability does not require testing to be carried out in fact; the definition of relative frequency assumes that tests were actually carried out. In other words, the probability is calculated before the experiment, and the relative frequency - after the experiment.

However, calculating probability requires preliminary information about the number or probabilities of elementary outcomes favorable for a given event. In the absence of such preliminary information, empirical data are used to determine the probability, that is, the relative frequency of the event is determined based on the results of a stochastic experiment.

Example 1.23. A technical control department discovered 3 non-standard parts in a batch of 80 randomly selected parts. The relative frequency of occurrence of non-standard parts is ν(A) = 3/80.

Example 1.24. 24 shots were fired at a target, and 19 hits were recorded. The relative frequency of hitting the target is ν(A) = 19/24.

Long-term observations have shown that if experiments are carried out under identical conditions, in each of which the number of tests is sufficiently large, then the relative frequency exhibits the property of stability. This property is that in different experiments the relative frequency changes little (the less, the more tests are performed), fluctuating around a certain constant number. It turned out that this constant number can be taken as an approximate value of the probability.

The relationship between relative frequency and probability will be described in more detail and more precisely below. Now let us illustrate the property of stability with examples.

Example 1.25. According to Swedish statistics, the relative frequency of girl births in 1935, by month, is characterized by the following numbers (arranged in order of months, starting with January): 0.486; 0.489; 0.490; 0.471; 0.478; 0.482; 0.462; 0.484; 0.485; 0.491; 0.482; 0.473.

The relative frequency fluctuates around the number 0.481, which can be taken as an approximate value for the probability of having girls.
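As a quick check, the twelve monthly frequencies from the example can be averaged in Python; the computation below only restates the numbers given above:

```python
# Monthly relative frequencies of girl births (Sweden, 1935), from the example.
freqs = [0.486, 0.489, 0.490, 0.471, 0.478, 0.482,
         0.462, 0.484, 0.485, 0.491, 0.482, 0.473]

mean_freq = sum(freqs) / len(freqs)
spread = max(freqs) - min(freqs)
print(round(mean_freq, 3))  # 0.481 -- the stable value taken as the probability
print(round(spread, 3))     # 0.029 -- the fluctuations stay within a narrow band
```

The mean reproduces the 0.481 quoted in the text, and the small spread illustrates the stability property.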

Note that statistical data from different countries give approximately the same relative frequency value.

Example 1.26. Coin tossing experiments were carried out many times, in which the number of appearances of the “coat of arms” was counted. The results of several experiments are shown in the table.

So, let's talk about a topic that interests a lot of people. In this article I will answer the question of how to calculate the probability of an event. I will give formulas for such a calculation and several examples to make it clearer how this is done.

What is probability

Let's start with the fact that the probability that this or that event will occur is a certain amount of confidence in the eventual occurrence of some result. For this calculation, the classical probability formula is used, which lets you determine how likely the event you are interested in is. It looks like this: P = n/m, where n is the number of favorable outcomes and m is the total number of equally possible outcomes. The letters may differ from source to source, but that does not affect the essence.

Examples of probability

Using a simple example, let's analyze this formula and apply it. Say you have a certain event: a throw of a die, that is, a fair six-sided cube. We need to calculate the probability of rolling a 2 on it. For this you need the number of favorable events (n), in our case rolling a 2, and the total number of events (m). Rolling a 2 can happen in only one way, since only one face carries 2 points, so n = 1. Next we count the possible rolls on one die: 1, 2, 3, 4, 5, and 6, so there are 6 equally possible cases, that is, m = 6. Now, using the formula, we make a simple calculation P = 1/6 and find that the probability of rolling a 2 is 1/6, that is, quite low.

Let's also look at an example using colored balls in a box: 50 white, 40 black, and 30 green. You need to determine the probability of drawing a green ball. Since there are 30 balls of this color, there can only be 30 favorable events (n = 30); the number of all events is 120 (m = 120, the total number of balls). Using the formula, we calculate that the probability of drawing a green ball is P = 30/120 = 0.25, that is, 25%. In the same way you can calculate the probability of drawing a ball of a different color (for black it is about 33%, for white about 42%).
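The same calculation in a short Python sketch (the dictionary layout is our own choice; the counts are those from the example):

```python
counts = {"white": 50, "black": 40, "green": 30}
total = sum(counts.values())  # 120 balls in all

# P(color) = favorable outcomes / total outcomes, as in the formula P = n/m.
probs = {color: n / total for color, n in counts.items()}
print(probs["green"])            # 0.25
print(round(probs["black"], 2))  # 0.33
print(round(probs["white"], 2))  # 0.42
```

This reproduces the 25%, 33%, and 42% figures quoted in the text.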

In fact, formulas (1) and (2) are a shorthand for conditional probability based on a contingency table of characteristics. Let's return to the example discussed (Fig. 1). Suppose we learn that a family is planning to buy a widescreen TV. What is the probability that this family will actually buy such a TV?

Fig. 1. Widescreen TV buying behavior

In this case, we need to calculate the conditional probability P (purchase completed | purchase planned). Since we know that the family is planning to buy, the sample space does not consist of all 1000 families, but only those planning to buy a wide-screen TV. Of the 250 such families, 200 actually bought this TV. Therefore, the probability that a family will actually buy a wide-screen TV if they have planned to do so can be calculated using the following formula:

P (purchase completed | purchase planned) = number of families who planned and bought a wide-screen TV / number of families planning to buy a wide-screen TV = 200 / 250 = 0.8
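The reduced-sample-space calculation can be written out in Python; the counts are those quoted from Fig. 1:

```python
# Counts from the Fig. 1 example: of 1000 families surveyed,
# 250 planned a widescreen TV purchase, and 200 of those actually bought one.
n_planned = 250
n_planned_and_bought = 200

# Conditional probability restricted to the reduced sample space of planners.
p_bought_given_planned = n_planned_and_bought / n_planned
print(p_bought_given_planned)  # 0.8
```

The division by 250 rather than 1000 is exactly the restriction of the sample space to families that planned the purchase.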

Formula (2) gives the same result:

P(B|A) = P(A and B) / P(A),

where event A is that the family plans to purchase a widescreen TV, and event B is that it will actually buy one. Substituting the actual data into the formula, we get:

P(B|A) = (200/1000) / (250/1000) = 0.8

Decision tree

In Fig. 1 the families are divided into four categories: those who planned to buy a widescreen TV and those who did not, and those who bought such a TV and those who did not. A similar classification can be performed using a decision tree (Fig. 2). The tree shown in Fig. 2 has two branches corresponding to families who planned to purchase a widescreen TV and families who did not. Each of these branches splits into two additional branches corresponding to families who did and did not purchase a widescreen TV. The probabilities written at the ends of the two main branches are the unconditional probabilities of the events A and A′. The probabilities written at the ends of the four additional branches are the conditional probabilities of each combination of the events A and B. Conditional probabilities are calculated by dividing the joint probability of the events by the corresponding unconditional probability of each of them.

Fig. 2. Decision tree

For example, to calculate the probability that a family will buy a widescreen TV given that it planned to do so, one must determine the probability of the event purchase planned and completed and then divide it by the probability of the event purchase planned. Moving along the decision tree shown in Fig. 2, we get the same answer as before:

P (purchase completed | purchase planned) = (200/1000) / (250/1000) = 0.8

Statistical independence

In the widescreen TV example, the probability that a randomly selected family purchased a widescreen TV, given that it planned to do so, is 200/250 = 0.8. Recall that the unconditional probability that a randomly selected family purchased a widescreen TV is 300/1000 = 0.3. This leads to a very important conclusion: the prior information that the family was planning a purchase influences the probability of the purchase itself. In other words, these two events depend on each other. In contrast to this example, there are statistically independent events, whose probabilities do not depend on each other. Statistical independence is expressed by the identity P(A|B) = P(A), where P(A|B) is the probability of event A given that event B occurred, and P(A) is the unconditional probability of event A.

Note that events A and B are statistically independent if and only if P(A|B) = P(A). If, in a 2×2 contingency table of characteristics, this condition is satisfied for at least one combination of events A and B, it will hold for every other combination. In our example, the events purchase planned and purchase completed are not statistically independent, because information about one event affects the probability of the other.

Let's look at an example that shows how to test the statistical independence of two events. Let's ask 300 families who bought a widescreen TV if they were satisfied with their purchase (Fig. 3). Determine whether the degree of satisfaction with the purchase and the type of TV are related.

Fig. 3. Data characterizing the degree of satisfaction of widescreen TV buyers

Judging by these data,

P (customer satisfied | bought HDTV) = 64 / 80 = 0.80

At the same time,

P (customer satisfied) = 240 / 300 = 0.80

Therefore, the probability that a customer is satisfied given that the family purchased an HDTV set equals the unconditional probability of satisfaction, so these events are statistically independent: they are not related to each other.
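The independence check is a one-line comparison in Python, using the counts quoted from Fig. 3:

```python
# Counts from Fig. 3: 300 buyers in all, 80 bought an HDTV set
# (64 of them satisfied), and 240 satisfied buyers overall.
p_satisfied = 240 / 300           # unconditional probability of satisfaction
p_satisfied_given_hdtv = 64 / 80  # conditional probability given an HDTV purchase

# P(A|B) == P(A) is the criterion for statistical independence.
print(p_satisfied == p_satisfied_given_hdtv)  # True
```

Both ratios come out to 0.80, so the equality P(A|B) = P(A) holds and the events are independent.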

Probability multiplication rule

The formula for calculating conditional probability allows us to determine the probability of the joint event A and B. Solving formula (1),

P(A|B) = P(A and B) / P(B),

for the joint probability P(A and B), we obtain the general rule for multiplying probabilities. The probability of the event A and B is equal to the probability of event A, given that event B occurs, multiplied by the probability of event B:

(3) P(A and B) = P(A|B) * P(B)

Let's take as an example 80 families who bought a widescreen HDTV television (Fig. 3). The table shows that 64 families are satisfied with the purchase and 16 are not. Let us assume that two families are randomly selected from among them. Determine the probability that both customers will be satisfied. Using formula (3), we obtain:

P(A and B) = P(A|B) * P(B)

where event A is that the second family is satisfied with its purchase, and event B is that the first family is satisfied with its purchase. The probability that the first family is satisfied with its purchase is 64/80. However, the probability that the second family is also satisfied depends on the first family's response. If the first family does not return to the sample after the survey (selection without return), the number of respondents is reduced to 79. If the first family was satisfied with its purchase, the probability that the second family will also be satisfied is 63/79, since only 63 families satisfied with their purchase remain in the sample. Thus, substituting the specific data into formula (3), we obtain the following answer:

P(A and B) = (63/79)(64/80) = 0.638.

Therefore, the probability that both families are satisfied with their purchases is 63.8%.
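Both selection schemes can be verified with exact fractions in Python; the counts come from the Fig. 3 example:

```python
from fractions import Fraction

satisfied, total = 64, 80  # satisfied HDTV buyers from Fig. 3

# Selection without return: the second draw depends on the first (formula 3).
p_without = Fraction(satisfied, total) * Fraction(satisfied - 1, total - 1)
# Selection with return: the two draws are independent (formula 4).
p_with = Fraction(satisfied, total) ** 2

print(round(float(p_without), 3))  # 0.638
print(float(p_with))               # 0.64
```

The two results, 63.8% and 64.0%, match the answers in the text and show how small the dependence effect is for a sample of this size.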

Suppose that after the survey the first family returns to the sample. Determine the probability that both families will be satisfied with their purchase. In this case the probability that each family is satisfied with its purchase is the same, 64/80. Therefore P(A and B) = (64/80)(64/80) = 0.64. Thus, the probability that both families are satisfied with their purchases is 64.0%. This example shows that the choice of the second family does not depend on the choice of the first. Thus, replacing the conditional probability P(A|B) in formula (3) with the probability P(A), we obtain a formula for multiplying the probabilities of independent events.

The rule for multiplying the probabilities of independent events: if events A and B are statistically independent, the probability of the event A and B is equal to the probability of event A multiplied by the probability of event B.

(4) P(A and B) = P(A)P(B)

If this rule holds for events A and B, then they are statistically independent. Thus, there are two ways to determine the statistical independence of two events:

  1. Events A and B are statistically independent of each other if and only if P(A|B) = P(A).
  2. Events A and B are statistically independent of each other if and only if P(A and B) = P(A)P(B).

If, in a 2×2 contingency table, one of these conditions is satisfied for at least one combination of events A and B, it will be valid for every other combination.

Unconditional probability of an elementary event

(5) P(A) = P(A|B1)P(B1) + P(A|B2)P(B2) + … + P(A|Bk)P(Bk)

where the events B1, B2, …, Bk are mutually exclusive and exhaustive.

Let us illustrate the application of this formula using the example of Fig. 1. Using formula (5), we obtain:

P(A) = P(A|B1)P(B1) + P(A|B2)P(B2)

where P(A) is the probability that the purchase was planned, P(B1) is the probability that the purchase was made, and P(B2) is the probability that the purchase was not made.
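The same computation, carried out with exact fractions in Python; the counts 50 and 700 are derived from the figures quoted in the text (250 planned, 300 bought, 200 both planned and bought, out of 1000 families):

```python
from fractions import Fraction

# Counts around Fig. 1: 1000 families, 250 planned a purchase, 300 bought,
# 200 both planned and bought (so 50 planned but did not buy, 700 did not buy).
p_bought = Fraction(300, 1000)
p_not_bought = Fraction(700, 1000)
p_planned_given_bought = Fraction(200, 300)
p_planned_given_not_bought = Fraction(50, 700)

# Total probability formula (5): condition on the exhaustive events B1, B2.
p_planned = (p_planned_given_bought * p_bought
             + p_planned_given_not_bought * p_not_bought)
print(p_planned)  # 1/4, matching the 250/1000 families who planned a purchase
```

The result 1/4 agrees with the 250 planners among the 1000 surveyed families, confirming the decomposition.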

BAYES' THEOREM

The conditional probability of an event takes into account information that some other event has occurred. This approach can be used both to refine the probability taking into account newly received information, and to calculate the probability that the observed effect is a consequence of a specific cause. The procedure for refining these probabilities is called Bayes' theorem. It was first developed by Thomas Bayes in the 18th century.

Let's assume that the company mentioned above is researching the market for a new TV model. In the past, 40% of the TV models created by the company were successful, while 60% were not. Before announcing the release of a new model, marketing specialists carefully research the market and record demand. In the past, 80% of the successful models had received a favorable forecast, while 30% of the unsuccessful models had also received a favorable forecast. The marketing department has given a favorable forecast for the new model. What is the probability that the new TV model will be in demand?

Bayes' theorem can be derived from the definitions of conditional probability (1) and (2). To calculate the probability P(B|A), take formula (2):

P(B|A) = P(A and B) / P(A)

and substitute for P(A and B) the value from formula (3):

P(A and B) = P(A|B) * P(B)

Substituting formula (5) for P(A), we obtain Bayes' theorem:

(6) P(Bi|A) = P(A|Bi)P(Bi) / [P(A|B1)P(B1) + P(A|B2)P(B2) + … + P(A|Bk)P(Bk)]

where the events B1, B2, …, Bk are mutually exclusive and exhaustive.

Let us introduce the following notation: event S, the TV is in demand; event S', the TV is not in demand; event F, a favorable forecast; event F', an unfavorable forecast. Assume that P(S) = 0.4, P(S') = 0.6, P(F|S) = 0.8, P(F|S') = 0.3. Applying Bayes' theorem, we get:

P(S|F) = P(F|S)P(S) / [P(F|S)P(S) + P(F|S')P(S')] = 0.8·0.4 / (0.8·0.4 + 0.3·0.6) = 0.32/0.50 = 0.64

The probability of demand for a new TV model, given a favorable forecast, is 0.64. Thus, the probability of lack of demand given a favorable forecast is 1–0.64=0.36. The calculation process is shown in Fig. 4.
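The calculation above can be checked with a few lines of Python; this is a sketch using only the probabilities given in the problem statement.

```python
# TV-demand example: P(S) = 0.4, P(S') = 0.6, P(F|S) = 0.8, P(F|S') = 0.3
p_s, p_not_s = 0.4, 0.6
p_f_given_s, p_f_given_not_s = 0.8, 0.3

# Denominator of Bayes' theorem: total probability of a favorable forecast
p_f = p_f_given_s * p_s + p_f_given_not_s * p_not_s
# Posterior probability that the TV is in demand, given a favorable forecast
p_s_given_f = p_f_given_s * p_s / p_f  # approximately 0.64
```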

Fig. 4. (a) Calculation using Bayes' formula to estimate the probability of demand for the TVs; (b) Decision tree for studying demand for the new TV model

Let's look at an example of using Bayes' theorem for medical diagnostics. The probability that a person suffers from a particular disease is 0.03. A medical test can check if this is true. If a person is truly sick, the probability of an accurate diagnosis (saying that the person is sick when he really is sick) is 0.9. If a person is healthy, the probability of a false positive diagnosis (saying that a person is sick when he is healthy) is 0.02. Let's say that the medical test gives a positive result. What is the probability that a person is actually sick? What is the likelihood of an accurate diagnosis?

Let us introduce the following notation: event D, the person is sick; event D', the person is healthy; event T, the diagnosis is positive; event T', the diagnosis is negative. From the conditions of the problem it follows that P(D) = 0.03, P(D') = 0.97, P(T|D) = 0.90, P(T|D') = 0.02. Applying formula (6), we obtain:

P(D|T) = P(T|D)P(D) / [P(T|D)P(D) + P(T|D')P(D')] = 0.90·0.03 / (0.90·0.03 + 0.02·0.97) = 0.027/0.0464 ≈ 0.582

The probability that a person with a positive diagnosis is really sick is 0.582 (see also Fig. 5). Note that the denominator of Bayes' formula equals the probability of a positive diagnosis, i.e., 0.0464.
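The medical-diagnosis calculation follows the same pattern; a quick check in Python using the values from the problem statement:

```python
# Medical-diagnosis example: P(D) = 0.03, P(D') = 0.97,
# P(T|D) = 0.90, P(T|D') = 0.02
p_d, p_not_d = 0.03, 0.97
p_t_given_d, p_t_given_not_d = 0.90, 0.02

p_t = p_t_given_d * p_d + p_t_given_not_d * p_not_d  # probability of a positive test
p_d_given_t = p_t_given_d * p_d / p_t                # posterior probability of disease
```

Note how strongly the low base rate P(D) = 0.03 pulls the posterior down: even with 90% sensitivity, a positive test leaves an almost 42% chance the person is healthy.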

If the events H1, H2, …, Hn form a complete group, then the probability of an arbitrary event A can be calculated using the total probability formula:

P(A) = P(A/H1)·P(H1) + P(A/H2)·P(H2) + … + P(A/Hn)·P(Hn)

That is, the probability of the occurrence of event A is the sum of the products of the conditional probabilities of A, given the occurrence of the events Hi, and the unconditional probabilities of these events Hi. The events Hi are called hypotheses.

From the total probability formula follows Bayes' formula:

P(Hi/A) = P(A/Hi)·P(Hi) / P(A)

The probabilities P(Hi) of the hypotheses Hi are called a priori probabilities: the probabilities before the experiment is conducted.
The probabilities P(Hi/A) are called a posteriori (posterior) probabilities: the probabilities of the hypotheses Hi refined as a result of the experiment.


If the source data are given as percentages (%), convert them to fractions; for example, 60% becomes 0.6.

Example No. 1. A store receives light bulbs from two factories, the first factory's share being 25%. The defect rates at these factories are 5% and 10% of all manufactured products, respectively. The seller takes one light bulb at random. What is the probability that it is defective?
Solution: Denote by A the event "the light bulb is defective." The possible hypotheses about the origin of the bulb are: H1, "the bulb came from the first factory," and H2, "the bulb came from the second factory." Since the first factory's share is 25%, the probabilities of these hypotheses are P(H1) = 0.25 and P(H2) = 0.75.
The conditional probability that a bulb from the first factory is defective is p(A/H1) = 0.05, and from the second factory p(A/H2) = 0.10. By the total probability formula, the required probability that the seller took a defective bulb is
p(A) = 0.25·0.05 + 0.75·0.10 = 0.0125 + 0.075 = 0.0875
Answer: p(A) = 0.0875.
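A quick check of Example No. 1 in Python; the values are taken directly from the problem statement:

```python
priors = [0.25, 0.75]        # P(H1), P(H2): shares of the two factories
defect_rates = [0.05, 0.10]  # p(A/H1), p(A/H2): defect rates at each factory

# Total probability formula
p_defective = sum(p * q for p, q in zip(defect_rates, priors))  # 0.0875
```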

Example No. 2. A store received two equal batches of the same product. It is known that 25% of the first batch and 40% of the second batch are first-class goods. What is the probability that a randomly selected unit of the goods is not first class?
Solution:
Denote by A the event "the product is first class." The possible hypotheses about the origin of the product are: H1, "the product is from the first batch," and H2, "the product is from the second batch." Since the batches are of equal size, the probabilities of these hypotheses are P(H1) = 0.5 and P(H2) = 0.5.
The conditional probability that a product from the first batch is first class is p(A/H1) = 0.25, and from the second batch p(A/H2) = 0.40. By the total probability formula, the probability that a randomly selected unit is first class is
p(A) = P(H1)·p(A/H1) + P(H2)·p(A/H2) = 0.5·0.25 + 0.5·0.40 = 0.125 + 0.2 = 0.325
Then the probability that a randomly selected unit is not first class is 1 − 0.325 = 0.675.
Answer: 0.675.

Example No. 3. It is known that 5% of men and 1% of women are colorblind. A person chosen at random turned out not to be colorblind. What is the probability that this person is a man (assume there are equal numbers of men and women)?
Solution.
Event A: the person chosen at random is not colorblind.
First find the probability of this event by the total probability formula:
P(A) = P(A|H=man)·P(H=man) + P(A|H=woman)·P(H=woman) = 0.95·0.5 + 0.99·0.5 = 0.475 + 0.495 = 0.97
Then, by Bayes' formula, the probability that the person is a man is:
p = P(A|H=man)·P(H=man) / P(A) = 0.475/0.97 ≈ 0.4897
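Example No. 3 combines the total probability formula with Bayes' formula; a short sketch of the same computation:

```python
# Equal numbers of men and women; 5% of men and 1% of women are colorblind
p_man = p_woman = 0.5
p_ok_given_man, p_ok_given_woman = 0.95, 0.99  # probability of NOT being colorblind

p_ok = p_ok_given_man * p_man + p_ok_given_woman * p_woman  # total probability, 0.97
p_man_given_ok = p_ok_given_man * p_man / p_ok              # Bayes' formula
```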

Example No. 4. 4 first-year students, 6 second-year students, and 5 third-year students take part in a sports Olympiad. The probabilities that a first-, second-, or third-year student will win the Olympiad are 0.9, 0.7, and 0.8, respectively.
a) Find the probability of winning by a randomly selected participant.
b) Under the conditions of this problem, one student won the Olympiad. Which group does he most likely belong to?
Solution.
Event A - victory of a randomly selected participant.
Here P(H1) = 4/(4+6+5) = 0.267, P(H2) = 6/(4+6+5) = 0.4, P(H3) = 5/(4+6+5) = 0.333,
P(A|H1) = 0.9, P(A|H2) = 0.7, P(A|H3) = 0.8
a) P(A) = P(H1)*P(A|H1) + P(H2)*P(A|H2) + P(H3)*P(A|H3) = 0.267*0.9 + 0.4*0.7 + 0.333*0.8 = 0.787
b) By Bayes' formula, the posterior probabilities of the hypotheses are:
p1 = P(H1)·P(A|H1)/P(A) = 0.267·0.9/0.787 ≈ 0.3053
p2 = P(H2)·P(A|H2)/P(A) = 0.4·0.7/0.787 ≈ 0.3558
p3 = P(H3)·P(A|H3)/P(A) = 0.333·0.8/0.787 ≈ 0.3385
The maximum is p2, so the winner most likely is a second-year student.
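Both parts of Example No. 4 can be computed directly; a sketch using exact fractions for the priors rather than the rounded decimals:

```python
priors = [4/15, 6/15, 5/15]   # P(H1), P(H2), P(H3)
win_probs = [0.9, 0.7, 0.8]   # P(A|H1), P(A|H2), P(A|H3)

# a) Total probability of a win by a randomly selected participant
p_win = sum(p * q for p, q in zip(win_probs, priors))

# b) Posterior probabilities of the hypotheses by Bayes' formula
posteriors = [p * q / p_win for p, q in zip(win_probs, priors)]
most_likely = max(range(3), key=lambda i: posteriors[i])  # 0-based index
```

The largest posterior is the second one, so the winner is most likely a second-year student.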

Example No. 5. The company has three machines of the same type. One of them provides 20% of total production, the second – 30%, the third – 50%. In this case, the first machine produces 5% of defects, the second 4%, the third – 2%. Find the probability that a randomly selected defective product is produced by the first machine.
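The worked solution to Example No. 5 is not included in this excerpt; the following is a sketch of the computation, assuming the same Bayes approach as in the previous examples:

```python
priors = [0.2, 0.3, 0.5]           # P(H1), P(H2), P(H3): shares of total production
defect_rates = [0.05, 0.04, 0.02]  # defect rates of the three machines

# Total probability that a randomly selected product is defective
p_defect = sum(p * q for p, q in zip(defect_rates, priors))
# Posterior probability that a defective product came from the first machine
p_first_given_defect = defect_rates[0] * priors[0] / p_defect  # 0.01/0.032 = 0.3125
```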