Probability of an event

It is clear that every event has some degree of possibility of occurring (of being realized). To compare events quantitatively by the degree of their possibility, we obviously need to associate a number with each event, such that the more possible the event, the greater the number. This number is called the probability of the event.

The probability of an event is a numerical measure of the degree of objective possibility that this event occurs.

Consider a stochastic experiment and a random event A observed in this experiment. Let's repeat this experiment n times and let m(A) be the number of experiments in which event A occurred.

The ratio

ν(A) = m(A)/n (1.1)

is called the relative frequency of event A in the series of experiments performed.

It is easy to verify the following properties:

0 ≤ ν(A) ≤ 1; if A and B are incompatible (AB = ∅), then ν(A+B) = ν(A) + ν(B). (1.2)

The relative frequency is determined only after a series of experiments and, generally speaking, can vary from series to series. However, experience shows that in many cases, as the number of experiments increases, the relative frequency approaches a certain number. This fact of stability of the relative frequency has been repeatedly verified and can be considered experimentally established.

Example 1.19. If you toss one coin, no one can predict which side will land up. But if you toss a very large number of coins, everyone will say that about half of them will land heads up; that is, the relative frequency of heads is approximately 0.5.

If, with an increase in the number of experiments, the relative frequency of the event ν(A) tends to a certain fixed number, then it is said that event A is statistically stable, and this number is called the probability of event A.

The probability of event A is the fixed number P(A) to which the relative frequency ν(A) of this event tends as the number of experiments increases, that is,

P(A) = lim (n→∞) m(A)/n. (1.3)

This definition is called the statistical definition of probability.
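The stabilization of the relative frequency that this statistical definition relies on is easy to observe in a quick simulation (an illustrative sketch, not part of the original text; Python and a fair coin are assumed):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def relative_frequency(n_trials):
    """Relative frequency nu(A) = m(A)/n of heads in n_trials fair coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

# As the number of experiments grows, nu(A) settles near 0.5.
for n in (10, 100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

With more tosses the printed frequencies cluster ever closer to 0.5 — exactly the stability property described above.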

Consider a stochastic experiment whose space of elementary events consists of a finite or countably infinite set of elementary events ω1, ω2, …, ωi, …. Assume that each elementary event ωi is assigned a number pi characterizing the degree of possibility of its occurrence and satisfying the following properties: pi ≥ 0 and Σi pi = 1.

This number pi is called the probability of the elementary event ωi.

Now let A be a random event observed in this experiment, corresponding to a certain set of elementary events.

In this setting, the probability of event A is the sum of the probabilities of the elementary events favoring A (those included in the set A):

P(A) = Σ (ωi ∈ A) pi. (1.4)

The probability introduced in this way has the same properties as the relative frequency, namely:

and if AB = ∅ (A and B are incompatible),

then P(A+B) = P(A) + P(B).

Indeed, according to (1.4), P(A+B) = Σ (ωi ∈ A+B) pi = Σ (ωi ∈ A) pi + Σ (ωi ∈ B) pi = P(A) + P(B).

In the last relation we took advantage of the fact that not a single elementary event can favor two incompatible events at the same time.
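The additivity argument can be checked on a small discrete space. The four elementary events and their probabilities below are made-up illustrative values, not from the text:

```python
from fractions import Fraction

# Made-up discrete space: four elementary events with probabilities p_i summing to 1.
p = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 8), "w4": Fraction(1, 8)}
assert sum(p.values()) == 1

def prob(event):
    """Formula (1.4): P(A) = sum of p_i over the elementary events in A."""
    return sum(p[w] for w in event)

A = {"w1"}           # A and B share no elementary events, so they are incompatible
B = {"w2", "w3"}
assert A.isdisjoint(B)
print(prob(A | B), prob(A) + prob(B))  # 7/8 7/8 — additivity holds
```

No elementary event is counted twice in the sum precisely because A and B are disjoint.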

We especially note that probability theory does not indicate methods for determining p i; they must be sought from considerations of a practical nature or obtained from an appropriate statistical experiment.

As an example, consider the classical scheme of probability theory: a stochastic experiment whose space of elementary events consists of a finite number n of elements. Assume additionally that all these elementary events are equally possible, that is, p(ωi) = pi = p. Since the probabilities must sum to one, np = 1, and hence p = 1/n.

Example 1.20. When throwing a symmetrical coin, getting heads and tails are equally possible, their probabilities are equal to 0.5.

Example 1.21. When throwing a symmetrical die, all faces are equally possible, their probabilities are equal to 1/6.

Now let event A be favored by m elementary events; these are usually called the outcomes favorable to event A. Then

P(A) = m·p = m/n. (1.5)

We have obtained the classical definition of probability: the probability P(A) of event A is equal to the ratio of the number of outcomes favorable to A to the total number of outcomes.

Example 1.22. An urn contains m white balls and n black balls. What is the probability of drawing a white ball?

Solution. The total number of elementary events is m + n, and they are all equally probable. Event A is favored by m of them. Hence, P(A) = m/(m + n).
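A sketch of this classical computation; the counts 3 white and 9 black are arbitrary illustrative values:

```python
from fractions import Fraction

def urn_probability(m_white, n_black):
    """Classical definition (1.5): favorable outcomes / total outcomes."""
    return Fraction(m_white, m_white + n_black)

# Illustrative counts: 3 white and 9 black balls.
print(urn_probability(3, 9))  # 1/4
```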

The following properties follow from the definition of probability:

Property 1. The probability of a certain event is equal to one.

Indeed, if an event is certain, then every elementary outcome of the trial favors it. In this case m = n, hence

P(A) = m/n = n/n = 1. (1.6)

Property 2. The probability of an impossible event is equal to zero.

Indeed, if an event is impossible, then no elementary outcome of the trial favors it. In this case m = 0, therefore P(A) = m/n = 0/n = 0. (1.7)

Property 3. The probability of a random event is a number between zero and one.

Indeed, a random event is favored by only part of the total number of elementary outcomes. That is, 0 ≤ m ≤ n, which means 0 ≤ m/n ≤ 1; therefore, the probability of any event satisfies the double inequality 0 ≤ P(A) ≤ 1. (1.8)

Comparing the definitions of probability (1.5) and relative frequency (1.1), we conclude: the definition of probability does not require trials to actually be carried out, while the definition of relative frequency assumes that trials were actually performed. In other words, the probability is computed before the experiment, and the relative frequency after it.

However, calculating probability requires preliminary information about the number or probabilities of elementary outcomes favorable for a given event. In the absence of such preliminary information, empirical data are used to determine the probability, that is, the relative frequency of the event is determined based on the results of a stochastic experiment.

Example 1.23. A technical control department found 3 non-standard parts in a batch of 80 randomly selected parts. The relative frequency of non-standard parts is ν(A) = 3/80.

Example 1.24. 24 shots were fired at a target, and 19 hits were recorded. The relative frequency of hitting the target is ν(A) = 19/24.

Long-term observations showed that if experiments are carried out under identical conditions, in each of which the number of tests is sufficiently large, then the relative frequency exhibits the property of stability. This property is that in different experiments the relative frequency changes little (the less, the more tests are performed), fluctuating around a certain constant number. It turned out that this constant number can be taken as an approximate probability value.

The connection between relative frequency and probability will be described in more detail, and more precisely, below. For now, let us illustrate the stability property with examples.

Example 1.25. According to Swedish statistics, the relative frequency of girls born in 1935, by month, is characterized by the following numbers (arranged in order of months, starting with January): 0.486; 0.489; 0.490; 0.471; 0.478; 0.482; 0.462; 0.484; 0.485; 0.491; 0.482; 0.473.

The relative frequency fluctuates around the number 0.481, which can be taken as an approximate value of the probability of a girl being born.
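A one-liner confirms the figure 0.481 quoted for the Swedish data:

```python
# Monthly relative frequencies of girls born in Sweden, 1935 (from the text).
freqs = [0.486, 0.489, 0.490, 0.471, 0.478, 0.482,
         0.462, 0.484, 0.485, 0.491, 0.482, 0.473]
mean = sum(freqs) / len(freqs)
print(round(mean, 3))  # 0.481
```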

Note that statistical data from various countries give approximately the same value of the relative frequency.

Example 1.26. Coin-tossing experiments were carried out many times, counting the number of times heads appeared. The results of several such experiments are shown in the table.

To date, the open bank of Unified State Examination problems in mathematics (mathege.ru) contains problems whose solution is based on just one formula: the classical definition of probability.

The easiest way to understand the formula is with examples.
Example 1. There are 9 red balls and 3 blue balls in the basket. The balls differ only in color. We take out one of them at random (without looking). What is the probability that the ball chosen in this way will be blue?

Comment. In probability problems, something happens (here, our action of drawing a ball) that can have different results — outcomes. Note that a result can be viewed at different levels of generality. "We drew some ball" is a result. "We drew a blue ball" is a result. "We drew exactly this particular ball out of all possible balls" — this least generalized view of the result is called an elementary outcome. It is the elementary outcomes that are meant in the formula for calculating probability.

Solution. Now let's calculate the probability of choosing the blue ball.
Event A: “the selected ball turned out to be blue”
Total number of all possible outcomes: 9+3=12 (the number of all balls that we could draw)
Number of outcomes favorable for event A: 3 (the number of such outcomes in which event A occurred - that is, the number of blue balls)
P(A)=3/12=1/4=0.25
Answer: 0.25

For the same problem, let's calculate the probability of choosing a red ball.
The total number of possible outcomes will remain the same, 12. Number of favorable outcomes: 9. Probability sought: 9/12=3/4=0.75

The probability of any event always lies between 0 and 1.
Sometimes in everyday speech (but not in probability theory!) probabilities are given as percentages. The conversion between the mathematical and the everyday scale is done by multiplying (or dividing) by 100%.
Moreover, the probability is zero for events that cannot happen — impossible events. For example, in our example this would be the probability of drawing a green ball from the basket. (The number of favorable outcomes is 0, so P(A) = 0/12 = 0 by the formula.)
Events that are certain to happen, with no alternatives, have probability 1. For example, the probability that "the selected ball will be either red or blue" is 1 for our problem. (Number of favorable outcomes: 12, P(A) = 12/12 = 1.)

We have reviewed a classic example illustrating the definition of probability. All similar Unified State Examination problems in probability theory are solved using this formula.
In place of red and blue balls there may be apples and pears, boys and girls, learned and unlearned exam tickets, tickets that do or do not contain a question on some topic, defective and high-quality bags or garden pumps — the principle remains the same.

Unified State Exam probability problems in which you need to calculate the probability of an event happening on a specific day are formulated slightly differently. As in the previous problems, you need to determine what the elementary outcome is, and then apply the same formula.

Example 2. The conference lasts three days. On the first and second days there are 15 speakers, on the third day - 20. What is the probability that Professor M.’s report will fall on the third day if the order of reports is determined by drawing lots?

What is the elementary outcome here? Assigning the professor's report one of all possible serial numbers for a talk. 15 + 15 + 20 = 50 people participate in the draw. Thus, Professor M.'s report may receive one of 50 numbers, so there are 50 elementary outcomes.
What are the favorable outcomes? - Those in which it turns out that the professor will speak on the third day. That is, the last 20 numbers.
According to the formula, the probability P(A) = 20/50 = 2/5 = 0.4.
Answer: 0.4
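The lottery count in Example 2 can be reproduced by listing the 50 slots (an illustrative sketch):

```python
from fractions import Fraction

# One slot per talk: days 1 and 2 have 15 slots each, day 3 has 20.
slots = [1] * 15 + [2] * 15 + [3] * 20
total = len(slots)            # 50 equally likely elementary outcomes
favorable = slots.count(3)    # 20 slots fall on the third day
print(favorable, total, Fraction(favorable, total))  # 20 50 2/5
```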

The drawing of lots here establishes a random correspondence between people and ordered places. In Example 2, this correspondence was considered from the point of view of which place a particular person could take. The same situation can be approached from the other side: with what probability could a particular person end up in a specific place?

Example 3. The draw includes 5 Germans, 8 Frenchmen and 3 Estonians. What is the probability that the first (or second, or seventh, or last — it doesn't matter) will be a Frenchman?

The number of elementary outcomes is the number of all people who could get this place in the draw: 5 + 8 + 3 = 16 people.
Favorable outcomes: the Frenchmen, 8 people.
Required probability: 8/16=1/2=0.5
Answer: 0.5

There are also somewhat more creative problems about coins and dice; their solutions can be found on the prototype pages.

Here are a few examples of tossing a coin or dice.

Example 4. When we toss a coin, what is the probability of it landing heads?
There are 2 outcomes — heads or tails (the coin is assumed never to land on its edge). The favorable outcome is heads: 1.
Probability 1/2=0.5
Answer: 0.5.

Example 5. What if we toss a coin twice? What is the probability of getting heads both times?
The main thing is to determine which elementary outcomes we consider when tossing two coins. There are four possible results:
1) HH – heads both times
2) HT – heads the first time, tails the second
3) TH – tails the first time, heads the second
4) TT – tails both times
There are no other options, so there are 4 elementary outcomes. Only the first one is favorable.
Probability: 1/4=0.25
Answer: 0.25

What is the probability that two coin tosses result in tails exactly once?
The number of elementary outcomes is the same, 4. The favorable outcomes are the second and third, i.e. 2.
Probability of getting one tails: 2/4 = 0.5

In such problems, another formula may be useful.
If one toss of a coin has 2 possible results, then two tosses have 2·2 = 2² = 4 (as in Example 5), three tosses have 2·2·2 = 2³ = 8, four have 2·2·2·2 = 2⁴ = 16, …, and N tosses have 2·2·…·2 = 2^N possible results.

So, you can find the probability of getting 5 heads out of 5 coin tosses.
Total number of elementary outcomes: 2 5 =32.
Favorable outcomes: 1 (HHHHH – heads all 5 times).
Probability: 1/32=0.03125
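The 2^N counting argument can be verified by brute-force enumeration with itertools (an illustrative sketch; H = heads, T = tails):

```python
from itertools import product

def all_heads_probability(n):
    """Enumerate all 2**n equally likely toss sequences and count the all-heads one."""
    outcomes = list(product("HT", repeat=n))
    favorable = sum(1 for o in outcomes if set(o) == {"H"})
    return favorable, len(outcomes)

fav, total = all_heads_probability(5)
print(fav, total, fav / total)  # 1 32 0.03125
```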

The same is true for dice. One throw has 6 possible results, so two throws have 6·6 = 36, three throws 6·6·6 = 216, etc.

Example 6. We throw the dice. What is the probability that an even number will be rolled?

Total outcomes: 6, according to the number of sides.
Favorable: 3 outcomes. (2, 4, 6)
Probability: 3/6=0.5

Example 7. We throw two dice. What is the probability that the total will be 10? (round to the nearest hundredth)

For one die there are 6 possible outcomes. This means that for two, according to the above rule, 6·6=36.
What outcomes will be favorable for the total to roll 10?
10 must be decomposed into a sum of two numbers from 1 to 6. This can be done in two ways: 10 = 6 + 4 and 10 = 5 + 5. Hence the following options are possible for the dice:
(6 on the first and 4 on the second)
(4 on the first and 6 on the second)
(5 on the first and 5 on the second)
In total, 3 options. The required probability: 3/36 = 1/12 ≈ 0.08.
Answer: 0.08
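The same enumeration idea confirms Example 7:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely results of throwing two dice.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [o for o in outcomes if sum(o) == 10]   # (4, 6), (5, 5), (6, 4)
print(len(favorable), len(outcomes), Fraction(len(favorable), len(outcomes)))  # 3 36 1/12
```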

Other types of B6 problems will be discussed in a future How to Solve article.


What is probability?

When I first encountered this term, I would not have understood what it meant, so I will try to explain it clearly.

Probability is the chance that the event we want will happen.

For example, you decided to visit a friend; you remember the entrance and even the floor he lives on. But you forgot the number and location of the apartment. Now you are standing on the staircase, with three doors in front of you to choose from.

What is the chance (probability) that if you ring the first doorbell, your friend will open the door? There are only three apartments, and the friend lives behind only one of them. We can choose any door with equal chance.

But what is this chance?

There are three doors, and one of them is the right one. The probability of guessing by ringing the first doorbell: 1/3. That is, one time out of three you will guess correctly.

We want to know, having called once, how often will we guess the door? Let's look at all the options:

  1. You ring the 1st door
  2. You ring the 2nd door
  3. You ring the 3rd door

Now let's look at all the options for where the friend could be:

a. Behind the 1st door
b. Behind the 2nd door
c. Behind the 3rd door

Let's compare all the options in table form. A checkmark indicates options when your choice coincides with a friend's location, a cross - when it does not coincide.

As you can see, there are 3 × 3 = 9 possible combinations of your friend's location and your choice of door.

Of all 9 outcomes, 3 are favorable. That is, ringing the doorbell once, you guess right in 3 cases out of 9, i.e. with probability 3/9 = 1/3.

This is the probability: the ratio of the favorable outcomes (when your choice coincides with your friend's location) to the number of all possible outcomes.

This definition gives the formula. Probability is usually denoted by p, so:

p = (number of favorable outcomes) / (total number of outcomes)

Writing this out in words each time is inconvenient, so we will write m for the number of favorable outcomes and n for the total number of outcomes:

p = m/n

The probability can be written as a percentage; to do this, multiply the result by 100%.

The word "outcomes" has probably caught your eye. Since mathematicians call various actions (in our case, ringing a doorbell) experiments, the result of such an experiment is usually called an outcome.

Well, there are favorable and unfavorable outcomes.

Let's go back to our example. Suppose we rang one of the doors, but a stranger opened it. We guessed wrong. What is the probability that if we ring one of the remaining doors, our friend will open it?

If you thought the probability is still 1/3, that is a mistake. Let's figure it out.

We have two doors left, so we have two possible actions:

1) Ring the 1st door
2) Ring the 2nd door

The friend is definitely behind one of them (after all, he wasn't behind the one we rang):

a. Friend behind the 1st door
b. Friend behind the 2nd door

Let's draw the table again:

As you can see, there are only 2 × 2 = 4 options, of which 2 are favorable. That is, the probability is 2/4 = 1/2.

Why not 1/3?

The situation we considered is an example of dependent events. The first event is the first doorbell ring, the second event is the second doorbell ring.

They are called dependent because they influence the probabilities of subsequent actions. After all, if the friend had answered after the first ring, what would be the probability that he was behind one of the other two doors? Right, 0.

But if there are dependent events, there must also be independent ones? That's right, there are.

A textbook example is tossing a coin.

  1. Toss a coin once. What is the probability of getting heads? That's right — 1/2, because there are 2 options in total (heads or tails; we neglect the chance of the coin landing on its edge), and only one of them suits us.
  2. Suppose it came up heads. Fine, let's toss it again. What is the probability of getting heads now? Nothing has changed, everything is the same. How many options? Two. How many suit us? One. So the probability is again 1/2.

And even if it comes up heads a thousand times in a row, the probability of getting heads on the next toss will be the same: there are always 2 options and 1 favorable one.

It is easy to distinguish dependent events from independent ones:

  1. If the experiment is carried out once (they throw a coin once, ring the doorbell once, etc.), then the events are always independent.
  2. If an experiment is carried out several times (a coin is tossed several times, the doorbell is rung several times), then the first event is always independent. Then, if the number of favorable outcomes or the number of all outcomes changes, the events are dependent; if not, they are independent.

Let's practice determining probability a little.

Example 1.

The coin is tossed twice. What is the probability of getting heads twice in a row?

Solution:

Let's consider all possible options:

  1. Heads-heads
  2. Heads-tails
  3. Tails-heads
  4. Tails-tails

As you can see, there are only 4 options. Only 1 of them suits us. So the probability: 1/4 = 0.25.

If the problem simply asks to find the probability, the answer should be given as a decimal. If the answer were required as a percentage, we would multiply by 100%.

Answer: 0.25

Example 2.

In a box of chocolates, all the chocolates are packaged in the same wrapper. However, the fillings differ: some candies are with nuts, others with cognac, cherries, caramel or nougat.

What is the probability of taking one candy and getting a candy with nuts? Give your answer as a percentage.

Solution:

How many possible outcomes are there? As many as there are candies in the box: taking one candy, you get one of those available.

How many favorable outcomes? As many as there are candies with nuts in the box.

Answer:

Example 3.

A box contains balloons, some of which are white and the rest black.

  1. What is the probability of drawing a white ball?
  2. We added more black balls to the box. What is now the probability of drawing a white ball?

Solution:

a) The total number of outcomes equals the number of balls in the box; the white balls among them are the favorable outcomes.

The probability is the ratio of the two.

b) Now there are more balls in the box in total, while the number of white ones is unchanged, so the probability of drawing a white ball has decreased.

Answer:

Total probability

The sum of the probabilities of all possible events is equal to 1 (100%).

Let's say there are red and green balls in a box. What is the probability of drawing a red ball? Green ball? Red or green ball?

Probability of drawing a red ball

Green ball:

Red or green ball:

As you can see, the sum of the probabilities of all possible events is equal to 1 (100%). Understanding this point will help you solve many problems.

Example 4.

There are markers in the box: green, red, blue, yellow, black.

What is the probability of drawing NOT a red marker?

Solution:

Let's count the number of favorable outcomes.

A NOT-red marker means a green, blue, yellow or black one.

The probability that an event will not occur is equal to 1 minus the probability that the event will occur.
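The complement rule of this example can be sketched in code; the marker counts below are invented for illustration, since the original numbers did not survive extraction:

```python
from fractions import Fraction

# Hypothetical marker counts — chosen only for illustration.
counts = {"green": 4, "red": 3, "blue": 2, "yellow": 2, "black": 1}
total = sum(counts.values())                # 12 markers in all

p_red = Fraction(counts["red"], total)      # probability of drawing red
p_not_red = 1 - p_red                       # complement rule: P(not A) = 1 - P(A)
print(p_red, p_not_red)                     # 1/4 3/4
```

The same 3/4 is obtained directly by counting the non-red markers: (4 + 2 + 2 + 1)/12.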

Rule for multiplying the probabilities of independent events

You already know what independent events are.

What if you need to find the probability that two (or more) independent events will occur in a row?

Let's say we want to know the probability that if we toss a coin twice, we will see heads both times.

We have already found it: 1/4.

What if we toss a coin three times? What is the probability of seeing heads three times in a row?

Total possible options:

  1. Heads-heads-heads
  2. Heads-heads-tails
  3. Heads-tails-heads
  4. Heads-tails-tails
  5. Tails-heads-heads
  6. Tails-heads-tails
  7. Tails-tails-heads
  8. Tails-tails-tails

I don't know about you, but I made mistakes several times compiling this list. Wow! And only 1 option (the first) suits us, so the probability is 1/8.

For 5 tosses you could make the list of possible outcomes yourself. But mathematicians are not that hardworking.

They first noticed, and then proved, that the probability of a given sequence of independent events is obtained by multiplying together the probabilities of the individual events.

In other words, for independent events A and B: P(A and B) = P(A) · P(B).

Let's look at the example of the same ill-fated coin.

What is the probability of getting heads in one toss? 1/2. Now we toss the coin 5 times.

What is the probability of getting heads 5 times in a row? (1/2)⁵ = 1/32.

This rule doesn't only work if we are asked to find the probability that the same event will happen several times in a row.

If we wanted to find the probability of the sequence TAILS-HEADS-TAILS-TAILS in four consecutive tosses, we would do the same.

The probability of getting tails is 1/2, and heads is 1/2.

Probability of getting the sequence TAILS-HEADS-TAILS-TAILS: (1/2)·(1/2)·(1/2)·(1/2) = 1/16.

You can check it yourself by making a table.
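Instead of making the table by hand, the product rule for this sequence can be checked by enumeration (a sketch; H = heads, T = tails):

```python
from fractions import Fraction
from itertools import product

target = ("T", "H", "T", "T")               # tails, heads, tails, tails

# Product rule: each independent toss contributes a factor of 1/2.
p_rule = Fraction(1, 2) ** len(target)

# Brute force: the target is one of 2**4 equally likely sequences.
outcomes = list(product("HT", repeat=len(target)))
p_enum = Fraction(outcomes.count(target), len(outcomes))
print(p_rule, p_enum)  # 1/16 1/16
```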

The rule for adding the probabilities of incompatible events.

So, stop! A new definition: events are called incompatible if they cannot occur together in the same trial.

Let's figure it out. Take our worn-out coin and toss it three times.
Possible options:

  1. Heads-heads-heads
  2. Heads-heads-tails
  3. Heads-tails-heads
  4. Heads-tails-tails
  5. Tails-heads-heads
  6. Tails-heads-tails
  7. Tails-tails-heads
  8. Tails-tails-tails

Each of these eight sequences is a single specific outcome; no two of them can occur in the same series of tosses — they are incompatible events.

If we want to determine what is the probability of two (or more) incompatible events then we add up the probabilities of these events.

You need to understand that heads and tails on a single toss are two incompatible events.

If we want to determine the probability of one specific sequence occurring, we use the rule of multiplying probabilities.
What is the probability of getting heads on the first toss and tails on the second and third? (1/2)·(1/2)·(1/2) = 1/8.

But if we want to know the probability of getting one of several sequences — for example, heads coming up exactly once, i.e. the options HTT, THT and TTH — then we must add up the probabilities of these sequences.

In total, 3 of the 8 options suit us.

We get the same thing by adding up the probabilities of each sequence: 1/8 + 1/8 + 1/8 = 3/8.

Thus, we add probabilities when we want to find the probability of any one of several incompatible sequences of events.

There is a great rule to help you avoid getting confused when to multiply and when to add:

Let's go back to the example where we tossed a coin three times and wanted to know the probability of seeing heads exactly once.
What is going to happen?

One of these should fall out:
(heads AND tails AND tails) OR (tails AND heads AND tails) OR (tails AND tails AND heads).
This is how it turns out: (1/2·1/2·1/2) + (1/2·1/2·1/2) + (1/2·1/2·1/2) = 3/8.
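The AND/OR computation for "heads exactly once in three tosses" can be verified by enumeration (a sketch; H = heads, T = tails):

```python
from fractions import Fraction
from itertools import product

# All 8 equally likely sequences of three tosses.
outcomes = list(product("HT", repeat=3))
favorable = [o for o in outcomes if o.count("H") == 1]  # HTT, THT, TTH

# Addition rule: three incompatible sequences, each of probability (1/2)**3.
p_add = 3 * Fraction(1, 2) ** 3
p_enum = Fraction(len(favorable), len(outcomes))
print(p_add, p_enum)  # 3/8 3/8
```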

Let's look at a few examples.

Example 5.

There are pencils in the box: red, green, orange, yellow and black ones. What is the probability of drawing a red or a green pencil?

Solution:

Example 6.

If a die is thrown twice, what is the probability of getting a total of 8?

Solution.

How can we get 8 points?

(2 and 6) or (3 and 5) or (4 and 4) or (5 and 3) or (6 and 2).

The probability of getting any one particular face is 1/6.

We calculate the probability: 5 · (1/6) · (1/6) = 5/36.

Training.

I think now you understand when you need to calculate probabilities, when to add them, and when to multiply them. Is not it? Let's practice a little.

Tasks:

Let's take a deck containing 52 cards: 13 spades, 13 hearts, 13 clubs and 13 diamonds, from 2 to Ace in each suit.

  1. What is the probability of drawing clubs in a row (we put the first card pulled out back into the deck and shuffle it)?
  2. What is the probability of drawing a black card (spades or clubs)?
  3. What is the probability of drawing a picture (jack, queen, king or ace)?
  4. What is the probability of drawing two pictures in a row (we remove the first card drawn from the deck)?
  5. What is the probability, when taking two cards, of collecting the combination (jack, queen or king) + ace? The order in which the cards are drawn does not matter.

Answers:

If you were able to solve all the problems yourself, then you are great! Now you will crack probability theory problems in the Unified State Exam like nuts!

PROBABILITY THEORY. AVERAGE LEVEL

Let's look at an example. Suppose we throw a die. Do you know what a die is? It is a cube with numbers on its faces. As many faces, as many numbers: from 1 to — how many? — to 6.

So we roll the die and want one of two particular numbers to come up — and one of them does.

In probability theory, we say a favorable event occurred (not to be confused with a fortunate one).

If the other of the two numbers had come up, the event would also have been favorable. In total, only two favorable events can happen.

How many unfavorable ones? Since there are 6 possible events in total, the unfavorable ones number 6 − 2 = 4 (when any of the other four numbers comes up).

Definition:

Probability is the ratio of the number of favorable events to the number of all possible events. That is, probability shows what proportion of all possible events are favorable.

Probability is denoted by the Latin letter p (apparently from the English word probability).

It is customary to also express probability as a percentage. To do this, multiply the probability value by 100%. In the dice example, the probability is 2/6 = 1/3.

And as a percentage: about 33%.

Examples (decide for yourself):

  1. What is the probability of getting heads when tossing a coin? What is the probability of landing heads?
  2. What is the probability of getting an even number when throwing a die? Which one is odd?
  3. In a box of simple, blue and red pencils. We draw one pencil at random. What is the probability of getting a simple one?

Solutions:

  1. How many options are there? Heads and tails — just two. How many of them are favorable? Only one — heads. So the probability is 1/2.

    It's the same with tails: 1/2.

  2. Total options: 6 (as many as the cube has faces). Favorable ones: 3 (the even numbers 2, 4 and 6).
    Probability: 3/6 = 1/2. Of course, it's the same with odd numbers.
  3. Total: the number of all pencils in the box. Favorable: the number of plain ones. Probability: their ratio.

Total probability

All pencils in the box are green. What is the probability of drawing a red pencil? There is no chance: the probability is 0 (after all, there are 0 favorable events).

Such an event is called impossible.

What is the probability of drawing a green pencil? There are exactly as many favorable events as there are events in total (all events are favorable). So the probability is equal to 1, or 100%.

Such an event is called certain.

If a box contains green and red pencils, what is the probability of drawing a green or a red one? Again 1. Note that the probability of drawing green and the probability of drawing red together cover every possibility.

In sum, these probabilities equal exactly 1. That is, the sum of the probabilities of all possible events is equal to 1, or 100%.

Example:

In a box of pencils, among them are blue, red, green, plain, yellow, and the rest are orange. What is the probability of not drawing green?

Solution:

We remember that all the probabilities add up to 1. So, knowing the probability of drawing green, the probability of NOT drawing green is 1 minus that number.

Remember this trick: the probability that an event will not occur is equal to 1 minus the probability that the event will occur.

Independent events and the multiplication rule

You flip a coin twice and want it to come up heads both times. What is the likelihood of this?

Let's go through all the possible options and determine how many there are:

Heads-heads, tails-heads, heads-tails, tails-tails. What else? Nothing.

4 options in total. Only one of them suits us: heads-heads. So the probability equals 1/4.

Fine. Now let's flip a coin three times. Do the math yourself. Did it work out? (Answer: 1/8.)

You may have noticed that with each additional toss, the probability halves. The general rule is called the multiplication rule:

The probabilities of independent events are multiplied.

What are independent events? Everything is logical: these are those that do not depend on each other. For example, when we throw a coin several times, each time a new throw is made, the result of which does not depend on all previous throws. We can just as easily throw two different coins at the same time.

More examples:

  1. The dice are thrown twice. What is the probability of getting it both times?
  2. The coin is tossed three times. What is the probability that it will come up heads the first time, and then tails twice?
  3. The player rolls two dice. What is the probability that the sum of the numbers on them will be equal to 12?

Answers:

  1. The events are independent, which means the multiplication rule works: .
  2. The probability of heads is 1/2. The probability of tails is the same. Multiply: (1/2)·(1/2)·(1/2) = 1/8.
  3. 12 can only be obtained if two 6s are rolled: (1/6)·(1/6) = 1/36.

Incompatible events and the addition rule

Events are called incompatible if they cannot happen simultaneously; incompatible events that together cover all cases complement each other up to the total probability. For example, if we flip a coin, it can come up either heads or tails.

Example.

In a box of pencils, some are blue, some red, some green, some plain, some yellow, and the rest are orange. What is the probability of drawing green or red?

Solution .

The probability of drawing a green pencil is the number of green pencils divided by the total number of pencils. The same goes for red.

The favorable outcomes are all the green pencils plus all the red ones. This means that the probability of drawing green or red is the sum of the two individual probabilities:

P(green or red) = P(green) + P(red).

This is the addition rule: the probabilities of incompatible events add up.
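Since the original pencil counts did not survive, here is the same addition-rule calculation with made-up numbers (2 green and 3 red out of 20 pencils are my assumptions, not the article's original values):

```python
from fractions import Fraction

total = 20          # hypothetical total number of pencils
green, red = 2, 3   # hypothetical counts of green and red pencils

p_green = Fraction(green, total)
p_red = Fraction(red, total)

# Incompatible events: the probabilities simply add
p_green_or_red = p_green + p_red
print(p_green_or_red)  # 1/4
```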

Mixed type problems

Example.

The coin is tossed twice. What is the probability that the results of the rolls will be different?

Solution .

This means that if the first result is heads, the second must be tails, and vice versa. It turns out that there are two pairs of independent events, and these pairs are incompatible with each other. How do we avoid getting confused about where to multiply and where to add?

There is a simple rule for such situations. Try to describe what is going to happen using the conjunctions “AND” or “OR”. For example, in this case:

It should come up (heads and tails) or (tails and heads).

Where there is a conjunction “and” there will be multiplication, and where there is “or” there will be addition:

(1/2 · 1/2) + (1/2 · 1/2) = 1/4 + 1/4 = 1/2.

Try it yourself:

  1. What is the probability that if a coin is tossed twice, the coin will land on the same side both times?
  2. A die is thrown twice. What is the probability of getting a particular total of points?

Solutions:

Another example:

A coin is tossed twice. What is the probability that heads will appear at least once?

Solution:

The opposite event is no heads at all, that is, tails both times: 1/2 · 1/2 = 1/4. Therefore, the probability of at least one heads is 1 − 1/4 = 3/4.
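Both the "same side both times" practice problem and this "at least once" example can be checked by listing every outcome (a Python sketch):

```python
from itertools import product
from fractions import Fraction

tosses = list(product("HT", repeat=2))  # HH, HT, TH, TT

# Same side both times: (heads AND heads) OR (tails AND tails)
p_same = Fraction(sum(1 for t in tosses if t[0] == t[1]), len(tosses))

# Heads at least once: easier via negation, 1 - P(no heads at all)
p_no_heads = Fraction(sum(1 for t in tosses if "H" not in t), len(tosses))
p_at_least_one = 1 - p_no_heads

print(p_same)          # 1/2
print(p_at_least_one)  # 3/4
```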

PROBABILITY THEORY. BRIEFLY ABOUT THE MAIN THINGS

Probability is the ratio of the number of favorable outcomes to the number of all possible outcomes.

Independent events

Two events are independent if the occurrence of one does not change the probability of the other occurring.

Total probability

The sum of the probabilities of all possible events is equal to 1 (100%).

The probability that an event will not occur is equal to 1 minus the probability that the event will occur.

Rule for multiplying the probabilities of independent events

The probability of a certain sequence of independent events is equal to the product of the probabilities of each event

Incompatible events

Incompatible events are those that cannot possibly occur simultaneously as a result of an experiment. A set of incompatible events that covers every possible outcome forms a full group of events.

The probabilities of incompatible events add up.

Describe what should happen using the conjunctions “AND” or “OR”; then put a multiplication sign in place of “AND” and an addition sign in place of “OR”.

Well, the topic is over. If you are reading these lines, it means you are very cool.

Because only 5% of people are able to master something on their own. And if you read to the end, then you are in this 5%!

Now the most important thing.

You have understood the theory on this topic. And, I repeat, this... this is just super! You are already better than the vast majority of your peers.

The problem is that this may not be enough...

For what?

For successfully passing the Unified State Exam, for getting into college on a budget-funded place and, MOST IMPORTANTLY, for life.

I won’t convince you of anything, I’ll just say one thing...

People who have received a good education earn much more than those who have not received it. This is statistics.

But this is not the main thing.

The main thing is that they are MORE HAPPY (there are such studies). Perhaps because far more opportunities open up before them and life becomes brighter? I don't know...

But think for yourself...

What does it take to be sure to be better than others on the Unified State Exam and ultimately be... happier?

GET YOUR HAND IN BY SOLVING PROBLEMS ON THIS TOPIC.

You won't be asked for theory during the exam.

You will need to solve problems against the clock.

And if you haven't solved them (A LOT of them!), you'll definitely make a silly mistake somewhere or simply run out of time.

It's like in sports - you need to repeat it many times to win for sure.

Find a collection of problems wherever you like, necessarily with solutions and detailed analysis, and solve, solve, solve!

You can use our tasks (optional) and we, of course, recommend them.


In conclusion...

If you don't like our tasks, find others. Just don't stop at theory.

“Understood” and “I can solve” are completely different skills. You need both.

Find problems and solve them!

In my blog I publish a translation of the next lecture from the course “Principles of Game Balance” by game designer Jan Schreiber, who worked on projects such as Marvel Trading Card Game and Playboy: the Mansion.

Until now, almost everything we've talked about has been deterministic, and last week we took a close look at transitive mechanics, analyzing them in as much detail as I can explain. But so far we haven't paid attention to another aspect of many games: the non-deterministic aspects, in other words, randomness.

Understanding the nature of randomness is very important for game designers. We create systems that affect the user's experience in a given game, so we need to know how those systems work. If there is randomness in a system, we need to understand the nature of this randomness and know how to change it in order to get the results we need.

Dice

Let's start with something simple: throwing dice. When most people think of dice, they picture the six-sided die known as a d6. But most gamers have seen plenty of other dice: four-sided (d4), eight-sided (d8), twelve-sided (d12), twenty-sided (d20). If you're a real geek, you might even have a 30-sided or 100-sided die lying around somewhere.

If you're not familiar with the terminology, d stands for die, and the number after it is the number of sides it has. If the number appears before d, then it indicates the number of dice to be rolled. For example, in the game of Monopoly you roll 2d6.

So, in this case, “die” is a symbolic term. There are a huge number of other random number generators that don't look like plastic figures but perform the same function: they generate a random number from 1 to n. An ordinary coin can also be represented as a two-sided die, d2.

I've seen two designs of a seven-sided die: one looked like an actual die, and the other looked more like a seven-sided wooden pencil. The four-sided dreidel, also known as a teetotum, is equivalent to a four-sided die. The spinning-arrow board in Chutes & Ladders, where the result can range from 1 to 6, corresponds to a six-sided die.

A computer's random number generator can produce any number from 1 to 19 if the designer asks for it, even though there is no 19-sided die (in general, I'll say more about how numbers come up on a computer next week). All of these items look different, but they are equivalent: you have an equal chance of each of several possible outcomes.

Dice have some interesting properties we need to know about. First, the probability of rolling any given face is the same (I assume you're rolling a die of the correct geometric shape). If you want to know the average value of a roll (known to probability theory fans as the expected value), add up the values on all the faces and divide by the number of faces.

The sum of the values of all the faces of a standard six-sided die is 1 + 2 + 3 + 4 + 5 + 6 = 21. Divide 21 by the number of faces and you get the average value of a roll: 21 / 6 = 3.5. This is a special case, because we assume that all outcomes are equally likely.

What if you have special dice? For example, I saw a game with a six-sided die with special stickers on the faces: 1, 1, 1, 2, 2, 3, so it behaves like a strange three-sided die that is more likely to roll a 1 than a 2, and more likely to roll a 2 than a 3. What is the average roll for this die? Well, 1 + 1 + 1 + 2 + 2 + 3 = 10, divided by 6 gives 5/3, or approximately 1.66. So if you have such a special die and players roll three of them and add up the results, you know the total will average around 5, and you can balance the game on that assumption.
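That weighted-average arithmetic can be checked in a couple of lines (my sketch of the sticker-die example, not the lecture's own code):

```python
faces = [1, 1, 1, 2, 2, 3]  # the custom sticker die from the example

# Average (expected value): sum of the faces divided by their count
average = sum(faces) / len(faces)
print(average)  # 1.6666666666666667, i.e. 5/3

# Three such dice rolled and summed average out to exactly 5
print(sum(faces) * 3 / len(faces))  # 5.0
```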

Dice and Independence

As I already said, we proceed from the assumption that each side is equally likely to fall out. It doesn't matter how many dice you roll. Each dice roll is independent, meaning that previous rolls do not affect the results of subsequent ones. Given enough trials, you're bound to notice a pattern of numbers - for example, rolling mostly higher or lower values ​​- or other features, but that doesn't mean the dice are "hot" or "cold". We'll talk about this later.

If you roll a standard six-sided die and the number 6 comes up twice in a row, the probability that the next throw will result in a 6 is exactly 1/6. The probability does not increase because the die has “heated up”. At the same time, the probability does not decrease: it is incorrect to reason that the number 6 has already come up twice in a row, which means that now another side should come up.

Of course, if you roll a die twenty times and get a 6 each time, the chance that the twenty-first time you roll a 6 is pretty high: perhaps you just have the wrong die. But if the die is fair, each side has the same probability of landing, regardless of the results of the other rolls. You can also imagine that we replace the die each time: if the number 6 is rolled twice in a row, remove the “hot” die from the game and replace it with a new one. I apologize if any of you already knew about this, but I needed to clear this up before moving on.

How to make the dice roll more or less random

Let's talk about how to get different results on different dice. Whether you roll a die only once or several times, the game will feel more random when the die has more sides. The more often you have to roll the dice, and the more dice you roll, the more the results approach the average.

For example, in the case of 1d6+4 (that is, if you roll a standard six-sided die once and add 4 to the result), the average is a number between 5 and 10. If you roll 5d2, the average is also a number between 5 and 10. But the results of rolling 5d2 will mainly be the numbers 7 and 8, with other values coming up less often. The same range, even the same average value (7.5 in both cases), but the nature of the randomness is different.
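Here is that comparison made explicit by enumerating both distributions (a quick Python sketch, my addition):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# 1d6+4: six results (5..10), all equally likely
d6_plus_4 = Counter(r + 4 for r in range(1, 7))

# 5d2: 32 equally likely roll sequences, sums also ranging from 5 to 10
five_d2 = Counter(sum(r) for r in product((1, 2), repeat=5))

# 7 and 8 each come up 10/32 of the time with 5d2; every total is 1/6 with 1d6+4
for total in range(5, 11):
    print(total, Fraction(d6_plus_4[total], 6), Fraction(five_d2[total], 32))
```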

Wait a minute. Didn't I just say that dice don't "heat" or "cool"? Now I say: if you throw a lot of dice, the results of the rolls will approach the average. Why?

Let me explain. If you roll one die, each side has the same probability of landing. This means that if you roll a lot of dice over time, each side will come up about the same number of times. The more dice you roll, the more the total result will approach the average.

This is not because the numbers that have come up somehow “force” numbers that haven't come up yet. It's because a small streak of 6s (or 20s, or whatever) won't affect the result much if you roll the dice ten thousand more times and mostly get numbers near the average. Now you'll get a few big numbers, later a few small ones, and over time they will draw closer to the mean.

This doesn't happen because previous rolls affect the dice (seriously, a die is made of plastic, it doesn't have a brain to think, “Oh, it's been a while since you rolled a 2”), but because that's what usually happens with a lot of dice rolls.

Thus, it is quite easy to do the calculations for a single random dice roll, at least to find the average value. There are also ways to calculate “how random” something is and to say that the results of 1d6+4 will be “more random” than 5d2: with 1d6+4 the rolls are distributed more evenly. To measure this, you calculate the standard deviation: the larger it is, the more random the results. I'd rather not go through all those calculations today; I'll explain this topic later.

The only thing I'll ask you to remember is that, as a general rule, the fewer dice you roll, the greater the randomness. And the more sides a die has, the greater the randomness, since there are more possible values.
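Standard deviation makes the comparison concrete; a sketch (the two dice pools are the ones from the example above, the helper function is mine):

```python
from itertools import product

def std_dev(outcomes):
    """Standard deviation of a list of equally likely outcomes."""
    n = len(outcomes)
    mean = sum(outcomes) / n
    return (sum((x - mean) ** 2 for x in outcomes) / n) ** 0.5

d6_plus_4 = [r + 4 for r in range(1, 7)]
five_d2 = [sum(r) for r in product((1, 2), repeat=5)]

print(round(std_dev(d6_plus_4), 3))  # 1.708: 1d6+4 is "more random"
print(round(std_dev(five_d2), 3))    # 1.118: 5d2 clusters near 7.5
```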

How to Calculate Probability Using Counting

You may be wondering: how can we calculate the exact probability of getting a certain result? This is actually quite important for many games: if you roll a die, there is most likely some optimal result. The answer: we need to count two values. First, the total number of outcomes when throwing the dice, and second, the number of favorable outcomes. Dividing the second value by the first gives the desired probability. To get a percentage, multiply the result by 100.

Examples

Here's a very simple example. You want a 4 or higher on a single roll of a six-sided die. The maximum number of outcomes is 6 (1, 2, 3, 4, 5, 6). Of these, 3 outcomes (4, 5, 6) are favorable. So to calculate the probability, we divide 3 by 6 and get 0.5, or 50%.

Here's a slightly more complicated example. You want to roll an even number with 2d6. The maximum number of outcomes is 36 (6 options for each die; one die does not affect the other, so multiply 6 by 6 to get 36). The difficulty with questions of this type is that it is easy to count outcomes twice. For example, when rolling 2d6, there are two possible ways to get a 3: 1+2 and 2+1. They look the same, but the difference is which number shows on the first die and which on the second.

You can also imagine that the dice are different colors: say, in this case one die is red and the other blue. Then count the number of ways of rolling an even number:

  • 2 (1+1);
  • 4 (1+3);
  • 4 (2+2);
  • 4 (3+1);
  • 6 (1+5);
  • 6 (2+4);
  • 6 (3+3);
  • 6 (4+2);
  • 6 (5+1);
  • 8 (2+6);
  • 8 (3+5);
  • 8 (4+4);
  • 8 (5+3);
  • 8 (6+2);
  • 10 (4+6);
  • 10 (5+5);
  • 10 (6+4);
  • 12 (6+6).

It turns out that there are 18 options for a favorable outcome out of 36 - as in the previous case, the probability is 0.5 or 50%. Perhaps unexpected, but quite accurate.
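The red-and-blue counting above can be automated; a short Python check (my sketch):

```python
from itertools import product
from fractions import Fraction

# All 36 ordered outcomes of two six-sided dice (red die, blue die)
outcomes = list(product(range(1, 7), repeat=2))
even_sums = [o for o in outcomes if (o[0] + o[1]) % 2 == 0]

p_even = Fraction(len(even_sums), len(outcomes))
print(len(even_sums), "/", len(outcomes), "=", p_even)  # 18 / 36 = 1/2
```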

Monte Carlo Simulation

What if you have too many dice for this kind of counting? For example, you want to know the probability of getting a total of 15 or more when rolling 8d6. For eight dice there is a huge variety of different results, and counting them manually would take a very long time, even if we found a good way to group the different series of rolls.

In this case, the easiest way is not to count manually but to use a computer. There are two ways to calculate probability on a computer. The first method can give you an exact answer, but it involves a bit of programming or scripting. The computer will go through every possibility, count the total number of iterations and the number of iterations that match the desired result, and then report the answer. Your code might look something like the following:
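The code listing itself did not survive in the source; here is a minimal Python reconstruction of the exhaustive approach described above (my sketch, not the lecture's original code):

```python
from itertools import product

TARGET = 15
total = 0      # all possible 8d6 roll sequences (6**8 of them)
favorable = 0  # sequences whose faces sum to TARGET or more

# Walk every combination of eight dice, one roll sequence at a time
for rolls in product(range(1, 7), repeat=8):
    total += 1
    if sum(rolls) >= TARGET:
        favorable += 1

print(favorable, "/", total, "=", favorable / total)
```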

If you don't know programming and you need an approximate answer rather than an exact one, you can simulate the situation in Excel: roll 8d6 several thousand times and take the answer. To roll 1d6 in Excel, use the formula =FLOOR(RAND()*6,1)+1 (or simply =RANDBETWEEN(1,6)).

There's a name for the situation when you don't know the answer and just try over and over again: Monte Carlo simulation. It's a great solution when the probability is too hard to calculate directly. The great thing is that we don't need to understand the underlying math, and we know the answer will be “pretty good” because, as we already know, the more trials, the closer the result gets to the true answer.
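The same Monte Carlo estimate can be written in Python rather than Excel (my sketch; 100,000 trials is an arbitrary choice):

```python
import random

def roll_8d6_total():
    """Simulate one roll of 8d6 and return the sum of the faces."""
    return sum(random.randint(1, 6) for _ in range(8))

trials = 100_000
hits = sum(1 for _ in range(trials) if roll_8d6_total() >= 15)
estimate = hits / trials
print(estimate)  # close to the exact answer, within sampling error
```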

How to combine independent trials

If we're dealing with several repeated but independent trials, then the outcome of one throw does not affect the outcomes of the other throws.

How do you distinguish something dependent from something independent? Basically, if you can isolate each throw (or series of throws) of the dice as a separate event, then it is independent. For example, suppose we roll 8d6 and want a total of 15. This event cannot be divided into several independent dice rolls: to get the result you sum all the values, so the result that comes up on one die affects the results needed on the others.

Here's an example of independent rolls: You're playing a dice game, and you're rolling six-sided dice multiple times. The first roll must be a 2 or higher to stay in the game. For the second throw - 3 or higher. The third requires a 4 or higher, the fourth requires a 5 or higher, and the fifth requires a 6. If all five rolls are successful, you win. In this case, all throws are independent. Yes, if one throw is unsuccessful, it will affect the outcome of the entire game, but one throw does not affect the other. For example, if your second roll of the dice is very successful, this does not mean that the next rolls will be as good. Therefore, we can consider the probability of each roll of the dice separately.

If you have independent probabilities and you want to know what the probability is of all the events happening, you determine each individual probability and multiply them together. Another way: if you use the conjunction “and” to describe several conditions (for example, what is the probability of the occurrence of some random event and some other independent random event?) - count the individual probabilities and multiply them.

No matter what you think, never add up independent probabilities. This is a common mistake. To understand why this is wrong, imagine a situation where you are tossing a coin and want to know what the probability is of getting heads twice in a row. The probability of each side falling out is 50%. If you add up these two probabilities, you get a 100% chance of getting heads, but we know that's not true because it could have been tails twice in a row. If you instead multiply the two probabilities, you get 50% * 50% = 25% - which is the correct answer for calculating the probability of getting heads twice in a row.

Example

Let's return to the six-sided dice game, where you first need to roll 2 or higher, then 3 or higher, and so on up to 6. What are the chances that in a given series of five rolls all outcomes will be favorable?

As stated above, these are independent trials, so we calculate the probability for each individual roll and then multiply them together. The probability that the first roll succeeds is 5/6; the second, 4/6; the third, 3/6; the fourth, 2/6; the fifth, 1/6. Multiply them all together and you get approximately 1.5%. Wins in this game are quite rare, so if you add this element to your game, you'll need a fairly large jackpot.
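The chain of five multiplications is easy to verify (a quick Python check, my addition):

```python
from fractions import Fraction
from math import prod

# Minimum successful value on each of the five rolls: 2+, 3+, 4+, 5+, 6
needed = [2, 3, 4, 5, 6]

# P(roll >= n) on a d6 is (7 - n) / 6; multiply the five independent chances
p_win = prod(Fraction(7 - n, 6) for n in needed)
print(p_win)  # 5/324, about 1.5%
```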

Negation

Here's another useful hint: sometimes it is difficult to calculate the probability that an event will occur, but easier to determine the chance that it will not occur. For example, suppose we have another game: you roll 6d6 and win if you roll at least one 6. What is the probability of winning?

In this case, there are many options to consider. Maybe exactly one 6 is rolled: one of the dice shows a 6 and the others show numbers from 1 to 5, and there are 6 options for which die shows the 6. Or maybe two dice show a 6, or three, or even more, and each case needs its own calculation, so it's easy to get confused.

But let's look at the problem from the other side. You lose if none of the dice rolls a 6. In this case we have 6 independent trials. The probability that each die shows something other than 6 is 5/6. Multiply them together and you get about 33%. Thus the probability of losing is about one in three, and the probability of winning is about 67% (two in three).

From this example it's obvious: if you calculate the probability that an event will not occur, subtract the result from 100%. If the probability of winning is 67%, the probability of losing is 100% minus 67%, or 33%, and vice versa. If one probability is difficult to calculate but the opposite is easy, calculate the opposite and then subtract it from 100%.
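The negation trick in code form (a Python sketch of the 6d6 game above, my addition):

```python
from fractions import Fraction

p_not_six = Fraction(5, 6)   # one die shows anything but a 6
p_lose = p_not_six ** 6      # six independent dice, none shows a 6
p_win = 1 - p_lose           # negation: at least one 6 somewhere

print(float(p_lose))  # about 0.335
print(float(p_win))   # about 0.665
```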

We combine the conditions for one independent test

I said just above that you should never add probabilities across independent trials. Are there any cases where it is possible to sum up the probabilities? Yes, in one special situation.

If you want to calculate the probability for several unrelated favorable outcomes on a single trial, sum the probabilities of each favorable outcome. For example, the probability of rolling a 4, 5, or 6 on 1d6 is equal to the sum of the probability of rolling a 4, the probability of rolling a 5, and the probability of rolling a 6. This situation can be imagined this way: if you use the conjunction “or” in a question about probability (for example, what is the probability of one or another outcome of one random event?) - count the individual probabilities and sum them up.

Please note: when you calculate all possible outcomes of a game, the sum of their probabilities must equal 100%, otherwise your calculation is wrong. This is a good way to double-check your work. For example, suppose you analyzed the probability of every combination in poker. If you add up all your results, you should get exactly 100% (or at least fairly close: a calculator may introduce a small rounding error, but exact numbers added by hand should add up perfectly). If the sum does not converge, you most likely missed some combinations or computed some probabilities incorrectly, and the calculations need to be rechecked.

Unequal probabilities

So far we have assumed that each side of a die is rolled at the same frequency, because that is how dice seem to work. But sometimes you may encounter a situation where different outcomes are possible and they have different chances of appearing.

For example, in one of the expansions of the Nuclear War card game there is a playing field with an arrow that determines the result of a rocket launch. Most often the rocket deals normal damage, stronger or weaker, but sometimes the damage is doubled or tripled, or the rocket blows up on the launch pad and harms you, or some other event occurs. Unlike the arrow board in Chutes & Ladders or A Game of Life, the results in Nuclear War are not equally likely: some sections of the playing field are larger, and the arrow stops on them much more often, while other sections are very small, and the arrow stops on them rarely.

So, at first glance, the field looks something like our sticker die: 1, 1, 1, 2, 2, 3; it's essentially a weighted 1d3. We could divide all the sections into equal parts, find the smallest unit of measurement that everything is a multiple of, and then represent the situation as a d522 (or some other die), where the set of faces represents the same situation but with a larger number of outcomes. That's one way to solve the problem, and it's technically feasible, but there's a simpler option.

Let's go back to our standard six-sided die. We said that to calculate the average roll of a normal die you add up the values of all the faces and divide by the number of faces, but how exactly does that calculation work? There's another way to express it. For a six-sided die, the probability of each face is exactly 1/6. Now multiply the outcome on each face by the probability of that outcome (1/6 for every face in this case), then add up the resulting values. Summing (1 * 1/6) + (2 * 1/6) + (3 * 1/6) + (4 * 1/6) + (5 * 1/6) + (6 * 1/6), we get the same result (3.5) as in the calculation above. In fact, this is how we count every time: multiply each outcome by the probability of that outcome and sum.

Can we do the same calculation for the arrow on the playing field in Nuclear War? Of course we can. And if we sum up all the results found, we will get the average value. All we need to do is calculate the probability of each outcome for the arrow on the game board and multiply by the outcome value.

Another example

This method of calculating the average also works if the outcomes are equally likely but carry different payoffs, for example if you roll a die and win more on some faces than on others. Take a casino game: you place a bet and roll 2d6. If one of the three lowest-value numbers (2, 3, 4) or one of the four highest-value numbers (9, 10, 11, 12) comes up, you win an amount equal to your bet. The numbers with the very lowest and highest values are special: if you roll a 2 or a 12, you win twice your bet. If any other number comes up (5, 6, 7, 8), you lose your bet. It's a pretty simple game. But what is the probability of winning?

Let's start by counting how many times you can win. The maximum number of outcomes when rolling 2d6 is 36. What is the number of favorable outcomes?

  • There is 1 option that a 2 will be rolled, and 1 option that a 12 will be rolled.
  • There are 2 options that 3 will roll and 2 options that 11 will roll.
  • There are 3 options that a 4 will roll, and 3 options that a 10 will roll.
  • There are 4 options for rolling a 9.

Summing up all the options, we get 16 favorable outcomes out of 36. Thus, under normal conditions you will win 16 times out of 36 possible - the probability of winning is slightly less than 50%.

But in two of those sixteen cases you win twice as much; it's like winning twice. If you play this game 36 times, betting $1 each time, and each possible outcome comes up exactly once, you win a total of $18 (you actually win 16 times, but two of them count as two wins). If you play 36 times and win $18, doesn't that mean the odds are even?

Take your time. If you count the number of times you can lose, you get 20, not 18. If you play 36 times, betting $1 each time, you win a total of $18 when all the favorable outcomes occur. But you lose a total of $20 when all 20 unfavorable outcomes occur. As a result you fall slightly behind: you lose an average of $2 net for every 36 games (you could also say that you lose an average of 1/18 of a dollar per game). Now you see how easy it is to make a mistake here and calculate the probability incorrectly.
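The whole payout table folds into one expected-value calculation (a Python sketch; the payouts are the ones from the game described above):

```python
from itertools import product
from fractions import Fraction

def payout(total, bet=1):
    """Net result of one $1 bet for a given 2d6 total."""
    if total in (2, 12):            # special: double win
        return 2 * bet
    if total in (3, 4, 9, 10, 11):  # ordinary win
        return bet
    return -bet                     # 5, 6, 7, 8: lose the bet

# Average the net result over all 36 equally likely 2d6 outcomes
results = [payout(a + b) for a, b in product(range(1, 7), repeat=2)]
expected = Fraction(sum(results), len(results))
print(expected)  # -1/18: you lose about 5.6 cents per game on average
```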

Permutations

So far we have assumed that the order of the numbers when throwing dice does not matter. Rolling 2 + 4 is the same as rolling 4 + 2. In most cases, we manually count the number of favorable outcomes, but sometimes this method is impractical and it is better to use a mathematical formula.

An example of this situation comes from the dice game Farkle. At the start of each round you roll 6d6. If you're lucky and every possible result 1-2-3-4-5-6 comes up (a straight), you receive a big bonus. What is the probability of that happening? In this case there are many ways to get the combination.

The solution is as follows: one of the dice (and only one) must show the number 1. How many ways can that happen? There are 6: there are 6 dice, and any of them can show the 1. Accordingly, take that die and set it aside. Now one of the remaining dice must show the number 2; there are 5 options for that. Take that die and set it aside too. Then 4 of the remaining dice can show a 3, 3 of the remaining dice a 4, and 2 of the remaining dice a 5. Finally, one die is left that must show a 6 (in the last case there is only one die and no choice at all).

To count the number of favorable outcomes for a straight, we multiply all the separate independent possibilities: 6 x 5 x 4 x 3 x 2 x 1 = 720. Quite a large number of ways for this combination to come up.

To calculate the probability of getting a straight, we need to divide 720 by the number of all possible outcomes for rolling 6d6. What is the number of all possible outcomes? Each die can have 6 sides, so we multiply 6 x 6 x 6 x 6 x 6 x 6 = 46656 (a much larger number than the previous one). Divide 720 by 46656 and we get a probability of approximately 1.5%. If you were designing this game, it would be useful for you to know this so you could create a scoring system accordingly. Now we understand why in Farkle you get such a big bonus if you get a straight: this is a fairly rare situation.
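The counting argument can be cross-checked by brute force (a Python sketch, my addition):

```python
from itertools import product
from math import factorial

# Counting argument: 6 * 5 * 4 * 3 * 2 * 1 ways to place 1..6 on six dice
favorable = factorial(6)   # 720
total = 6 ** 6             # 46656
p_straight = favorable / total
print(p_straight)  # about 0.0154, i.e. roughly 1.5%

# Cross-check: count the roll sequences that contain each face exactly once
count = sum(1 for rolls in product(range(1, 7), repeat=6)
            if sorted(rolls) == [1, 2, 3, 4, 5, 6])
print(count)  # 720
```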

This result is also interesting for another reason. The example shows how rarely, over a short period, the result matches the long-run probability. Of course, if we tossed several thousand dice, the different faces would each come up quite often. But when we throw only six dice, it almost never happens that every face comes up. It becomes clear that it's silly to expect a straight now just because one hasn't come up for a long time, or to complain that your random number generator must be broken.

This leads us to the common misconception that all outcomes occur at the same frequency over a short period of time. If we throw dice several times, the frequency of each side falling out will not be the same.

If you've ever worked on an online game with a random number generator, you've most likely encountered a situation where a player writes to technical support complaining that it isn't random. He came to this conclusion because he killed 4 monsters in a row and received 4 identical rewards, rewards that should each drop only 10% of the time, so this obviously should almost never happen.

You do the math. The probability is 1/10 * 1/10 * 1/10 * 1/10, that is, 1 outcome out of 10 thousand: quite a rare case. That's what the player is trying to tell you. Is there a problem here?

It all depends on the circumstances. How many players are on your server? Suppose you have a fairly popular game and 100 thousand people play it every day. How many of them can kill four monsters in a row? Possibly all of them, several times a day, but let's assume half are busy trading items at auction, chatting on RP servers, or doing other in-game things, so only half of them hunt monsters. What is the probability that someone gets the same reward four times in a row? In this situation you can expect it to happen at least several times a day.
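This "someone, somewhere" effect is just the negation trick again at scale. A sketch, with the simplifying assumption (mine, not the lecture's) that each hunting player gets one shot at the 1-in-10,000 streak per day:

```python
# Assumed numbers from the discussion above: a 1-in-10,000 streak chance
# per hunting player per day, and 50,000 players hunting monsters.
p_streak = 1 / 10_000
players = 50_000

p_nobody = (1 - p_streak) ** players   # nobody sees the streak today
p_someone = 1 - p_nobody               # at least one player does

print(round(p_someone, 3))        # 0.993: near-certain on any given day
print(round(players * p_streak))  # 5 expected occurrences per day
```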

By the way, this is why it seems like someone wins the lottery every few weeks, even if that someone is never you or anyone you know. If enough people play regularly, chances are there will be at least one lucky player somewhere. But if you play the lottery yourself, you are unlikely to win; you're more likely to be invited to work at Infinity Ward.

Cards and dependence

We've discussed independent events, such as throwing dice, and now we have many powerful tools for analyzing the randomness in many games. Calculating probability is a little more complicated when it comes to drawing cards from a deck, because each card we draw affects the ones that remain in the deck.

If you have a standard 52-card deck, you draw the ten of hearts from it, and you want to know the probability that the next card is of the same suit, the probability has changed from the original because you have already removed one heart from the deck. Each card you remove changes the probability of the next card in the deck. Here a previous event affects the next one, so we call this probability dependent.

Please note that when I say "cards" I'm talking about any game mechanic where you have a set of objects and you remove one of them without replacement. A "deck of cards" here is analogous to a bag of chips from which you draw one chip, or an urn from which colored balls are drawn (I have never actually seen a game with an urn of colored balls, but probability teachers seem to prefer that example for some reason).

Dependency Properties

I would like to clarify that when we're talking about cards, I assume that you draw cards, look at them, and remove them from the deck. Each of these actions is an important property. If I had a deck of, say, six cards numbered 1 to 6, and I shuffled them, drew one card, then reshuffled all six cards before drawing again, that would be similar to throwing a six-sided die, because one result has no effect on the next ones. But if I draw cards and don't replace them, then by drawing the card numbered 1, I increase the likelihood that the next card I draw will be the 6. The probability keeps increasing until I eventually draw that card or shuffle the deck.

The fact that we look at the cards is also important. If I take a card out of the deck and don't look at it, I have no additional information, and in fact the probability does not change. This may sound counterintuitive. How can simply flipping a card over magically change a probability? But it is possible, because you can only calculate the probability of unknown items from what you do know.

For example, if you shuffle a standard deck, reveal 51 cards, and none of them is the queen of clubs, then you can be 100% sure that the remaining card is the queen of clubs. If you shuffle a standard deck and take out 51 cards without looking at them, the probability that the remaining card is the queen of clubs is still 1/52. With each card you reveal, you gain more information.

Calculating probability for dependent events follows the same principles as for independent ones, except that it is a little more complicated, because the probabilities change as cards are revealed. So you have to multiply many different values instead of multiplying the same value over and over. What this really means is that we have to combine all the calculations we have already learned.

Example

You shuffle a standard 52-card deck and draw two cards. What is the probability that you draw a pair? There are several ways to calculate this, but perhaps the simplest is this: what is the probability that the first card you draw makes a pair impossible? That probability is zero: whatever the first card is, three cards matching it remain in the deck. It doesn't matter which card we draw first; we still have a chance to draw a pair. So the probability of still being able to make a pair after the first card is drawn is 100%.

What is the probability that the second card matches the first? There are 51 cards left in the deck, and 3 of them match the first card (it would be 4 out of 52, but you already removed one of the matching cards when you drew the first one), so the probability is 3/51 = 1/17. So the next time the guy across the Texas Hold'em table says, "Cool, another pair? I'm feeling lucky today," you will know there is a high probability that he is bluffing.
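If you want to double-check that 1/17 without trusting the argument, a short enumeration (a Python sketch, not from the original article) can count every ordered two-card draw directly:

```python
from fractions import Fraction
from itertools import permutations

# Enumerate all 52*51 ordered two-card draws and count the pairs.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
draws = list(permutations(deck, 2))
pairs = sum(a[0] == b[0] for a, b in draws)   # same rank = a pair
print(Fraction(pairs, len(draws)))            # 1/17
```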

What if we add two jokers, so we have 54 cards in the deck, and want to know the probability of drawing a pair? The first card may be a joker, and then there will be only one matching card left in the deck rather than three. How do we find the probability here? We break the problem into cases and compute each possibility.

Our first card could be a joker or some other card. The probability of drawing a joker is 2/54; the probability of drawing some other card is 52/54. If the first card is a joker (2/54), then the probability that the second card matches it is 1/53. We multiply the values (we can multiply them because they are separate events and we want both to happen) and get 1/1431 - less than a tenth of a percent.

If you draw some other card first (52/54), the probability of matching it with the second card is 3/53. We multiply the values and get 78/1431 (slightly more than 5.5%). What do we do with these two results? They do not intersect, and we want to know the probability of either of them, so we add the values. We get a final result of 79/1431 (still about 5.5%).

If we wanted to be sure of the accuracy of the answer, we could calculate the probability of all the other possible outcomes: drawing a joker and not matching it with the second card, or drawing some other card and not matching it. Summing those probabilities together with the probability of winning, we would get exactly 100%. I won't give the math here, but you can try it yourself to double-check.
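Instead of doing that check by hand, the same enumeration idea extends to the 54-card deck. A Python sketch (my own illustration; I'm assuming the two jokers pair only with each other):

```python
from fractions import Fraction
from itertools import permutations

# 54-card deck: 13 ranks x 4 suits, plus two jokers that match each other.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
deck += [("joker", 0), ("joker", 1)]
draws = list(permutations(deck, 2))
pairs = sum(a[0] == b[0] for a, b in draws)
print(Fraction(pairs, len(draws)))   # 79/1431, about 5.5%
```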

Monty Hall Paradox

This brings us to a rather famous paradox that often confuses people: the Monty Hall paradox. It is named after the host of the TV show Let's Make a Deal. For those who have never seen the show, it was the opposite of The Price Is Right.

On The Price Is Right, the host (Bob Barker used to host; who is it now, Drew Carey? Never mind) is your friend. He wants you to win money or cool prizes. He tries to give you every opportunity to win, as long as you can guess how much the sponsors' items are actually worth.

Monty Hall behaved differently. He was like Bob Barker's evil twin. His goal was to make you look like an idiot on national television. If you were on the show, he was your opponent: you played against him, and the odds were in his favor. Perhaps I'm being too harsh, but looking at a show you were more likely to get onto if you wore a ridiculous costume, that's exactly the conclusion I come to.

One of the most famous setups on the show went like this: there are three doors in front of you - door number 1, door number 2, and door number 3. You may choose one door for free. Behind one of them is a magnificent prize - for example, a new car. Behind the other two doors there is nothing of value. They are supposed to humiliate you, so behind them is not just nothing but something stupid, for example a goat or a huge tube of toothpaste - anything but a new car.

You choose one of the doors, Monty is about to open it to let you know if you won or not... but wait. Before we find out, let's take a look at one of those doors you didn't choose. Monty knows which door the prize is behind, and he can always open the door that doesn't have a prize behind it. “Are you choosing door number 3? Then let's open door number 1 to show that there was no prize behind it." And now, out of generosity, he offers you the opportunity to exchange the selected door number 3 for what is behind door number 2.

At this point the question of probability arises: does this opportunity increase your probability of winning, does it decrease it, or does it stay unchanged? What do you think?

The correct answer: switching to the other door increases the probability of winning from 1/3 to 2/3. This is counterintuitive. If you haven't encountered this paradox before, you're most likely thinking: wait, by opening one door we magically changed the probability? As we've already seen with cards, this is exactly what happens when we get more information. Obviously, when you choose the first time, your probability of winning is 1/3. When one door opens, it does not change the probability of your first choice at all: it is still 1/3. But the probability that the other door is the right one is now 2/3.

Let's look at this example from a different perspective. You choose a door; the probability of winning is 1/3. I then offer to trade you both of the other two doors for yours, which is essentially what Monty Hall does. Sure, he opens one of them to show there's no prize behind it, but he can always do that, so it doesn't really change anything. Of course you will want to switch.

If you still don't quite buy it and need a more convincing explanation, click on this link to go to a great little Flash application that lets you explore the paradox in more detail. You can start playing with about 10 doors and then gradually work your way down to a game with three. There's also a simulator where you can play with any number of doors from 3 to 50, or run several thousand simulations and see how many times you would have won if you had played.

Choose one of three doors: the probability of winning is 1/3. Now you have two strategies: change your choice after a wrong door is opened, or not. If you do not change your choice, the probability stays 1/3, since the choice happens only at the first stage and you have to guess right away. If you do change, you win whenever you first chose a wrong door (then the other wrong one is opened, the right one remains, and by switching you take it). The probability of choosing a wrong door at the start is 2/3, so by switching you double your probability of winning.
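If the argument still feels slippery, a simulation settles it empirically. A minimal Monte Carlo sketch in Python (my own illustration, not part of the original article):

```python
import random

# Play many Monty Hall games and compare the two strategies.
random.seed(1)
trials = 100_000
stay_wins = switch_wins = 0
for _ in range(trials):
    prize = random.randrange(3)
    pick = random.randrange(3)
    # Monty opens a door that is neither your pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    switched = next(d for d in range(3) if d != pick and d != opened)
    stay_wins += (pick == prize)
    switch_wins += (switched == prize)
print(f"stay: {stay_wins / trials:.3f}, switch: {switch_wins / trials:.3f}")
```

Staying hovers around 0.333 and switching around 0.667, matching the 1/3 versus 2/3 argument above.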

(A remark from Maxim Soldatov, a teacher of higher mathematics and game balance specialist. Schreiber's original, of course, didn't include it, but without it this magical transformation is rather hard to grasp.)

And again about the Monty Hall paradox

As for the show itself: even if Monty Hall's opponents weren't good at math, he was. Here's what he did to change the game a little. If you chose the door with the prize behind it, which happened with probability 1/3, he would always offer you the option of switching. You'd pick the car, then trade it for a goat, and look pretty stupid - exactly what he wanted, since Hall was kind of an evil guy.

But if you chose a door with no prize behind it, he would offer you a switch only half the time; the rest of the time he would simply show you your new goat and you would leave the stage. Let's analyze this new game, in which Monty Hall can decide whether or not to offer you the chance to switch.

Suppose he follows this algorithm: if you choose the door with the prize, he always offers you the chance to switch; otherwise he is equally likely to offer you a switch or to hand you the goat. What is your probability of winning?

In one case out of three, you immediately choose the door with the prize, and the host offers you the chance to switch.

In the remaining two cases out of three (you initially chose a door without the prize), the host offers you a switch half the time and does not the other half.

Half of 2/3 is 1/3. So: in one case out of three you get the goat and leave; in one case out of three you chose a wrong door and the host offers a switch; and in one case out of three you chose the right door and he likewise offers a switch.

If the host offers a switch, we already know that the one case in three where he hands us the goat did not happen. That is useful information: it means our chances of winning have changed. Of the two equally likely cases in which we get the offer, in one we guessed right and in the other we guessed wrong. So if we are offered a switch at all, our probability of winning is 1/2, and from a mathematical point of view it makes no difference whether we keep our door or switch.

Like poker, this becomes a psychological game, not a mathematical one. Why did Monty offer you the choice? Does he think you're a simpleton who doesn't know that switching is the "right" move and will stubbornly cling to your choice (after all, psychologically the situation is harder when you've chosen a car and then lose it)?

Or does he, reckoning that you're smart and will switch, offer you the chance because he knows you guessed right the first time and will take the bait? Or maybe he's being uncharacteristically kind and nudging you toward something in your interest, because he hasn't given away a car in a while, the producers are saying the audience is getting bored, and it would be better to give away a big prize soon so the ratings don't drop?

In this way, Monty managed to sometimes offer the switch while the overall probability of winning stayed at 1/3. Remember that the probability that you lose outright is 1/3. The chance that you guess right immediately is 1/3, and in 50% of those cases you win (1/3 x 1/2 = 1/6).

The chance that you guess wrong at first but then get the offer to switch is 1/3, and in half of those cases you win (also 1/6). Add up the two mutually exclusive ways of winning and you get a probability of 1/3. So whether you keep your door or switch, your overall probability of winning throughout the game is 1/3.
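This bookkeeping is easy to verify by simulation. A Python sketch of the "selective Monty" variant (my own illustration, assuming the algorithm exactly as described: always offer a switch on a correct pick, offer it half the time otherwise):

```python
import random

# Selective Monty: the offer itself carries information, and either
# policy wins 1/3 of all games overall.
random.seed(7)
trials = 200_000
stay_wins = switch_wins = offers = 0
for _ in range(trials):
    prize, pick = random.randrange(3), random.randrange(3)
    offered = (pick == prize) or (random.random() < 0.5)
    if not offered:
        continue                      # shown the goat: a loss either way
    offers += 1
    stay_wins += (pick == prize)      # an "always stay" player wins these
    switch_wins += (pick != prize)    # an "always switch" player wins these
print(stay_wins / trials, switch_wins / trials)   # both about 1/3
print(switch_wins / offers)           # about 1/2 once an offer is made
```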

The probability is no greater than in the situation where you simply guessed a door and the host showed you what was behind it with no offer to switch. The point of the offer is not to change the probability, but to make the decision-making process more fun to watch on television.

By the way, this is one of the reasons poker can be so interesting: in most formats, cards are gradually revealed between the betting rounds (for example, the flop, turn, and river in Texas Hold'em), and if at the start of the hand you have one probability of winning, it changes after each betting round as more cards are revealed.

Boy and girl paradox

This brings us to another well-known paradox that, as a rule, puzzles everyone: the boy-and-girl paradox. It is the only thing I'm writing about today that isn't directly related to games (although I suppose that just means I should encourage you to create an appropriate game mechanic). It is more of a puzzle, but an interesting one, and to solve it you need to understand the conditional probability we discussed above.

The problem: I have a friend with two children, and at least one of them is a girl. What is the probability that the other child is also a girl? Assume that in any family the chances of having a girl or a boy are 50/50, and that this holds for each child.

In reality, some men carry more X-chromosome or more Y-chromosome sperm, so the odds shift slightly: if you know one child is a girl, the probability of a second girl is a little higher, and there are other conditions, such as hermaphroditism. But for this problem we will ignore all that and assume that each birth is an independent event and that boys and girls are equally likely.

Since we are talking about 1/2 chances, we intuitively expect the answer to be 1/2 or 1/4, or at least some number with a power of two in the denominator. But the answer is 1/3. Why?

The difficulty here is that the information we have reduces the number of possibilities. Suppose the parents are Sesame Street fans and, regardless of the children's sex, named them A and B. Under normal conditions there are four equally likely possibilities: A and B are two boys, A and B are two girls, A is a boy and B is a girl, A is a girl and B is a boy. Since we know that at least one child is a girl, we can rule out the possibility that A and B are two boys. This leaves us with three possibilities, still equally likely. If all possibilities are equally likely and there are three of them, then each has probability 1/3. In only one of these three options are both children girls, so the answer is 1/3.
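The counting argument fits in a few lines of Python (a sketch, not from the original article):

```python
from fractions import Fraction
from itertools import product

# Four equally likely families; condition on "at least one girl".
families = list(product("BG", repeat=2))            # BB, BG, GB, GG
with_girl = [f for f in families if "G" in f]       # rules out BB
both_girls = [f for f in with_girl if f == ("G", "G")]
print(Fraction(len(both_girls), len(with_girl)))    # 1/3
```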

And again about the paradox of a boy and a girl

The solution gets even more counterintuitive. Imagine that my friend has two children and one of them is a girl who was born on a Tuesday. Assume that under normal conditions a child is equally likely to be born on any of the seven days of the week. What is the probability that the other child is also a girl?

You might think the answer is still 1/3: what difference does Tuesday make? But even here our intuition fails us. The answer is 13/27, which is not just unintuitive but downright strange. What is going on here?

In fact, Tuesday changes the probability because we don't know which child was born on Tuesday, or whether perhaps both were. In this case we apply the same logic: we count all the possible combinations in which at least one child is a girl born on a Tuesday. As in the previous example, say the children are named A and B. The combinations look like this:

  • A is a girl born on Tuesday, B is a boy (7 possibilities, one for each day of the week on which the boy could have been born).
  • B is a girl born on Tuesday, A is a boy (also 7 possibilities).
  • A is a girl born on Tuesday, B is a girl born on a different day of the week (6 possibilities).
  • B is a girl born on Tuesday, A is a girl not born on Tuesday (also 6 possibilities).
  • A and B are both girls born on Tuesday (1 possibility; watch out for this one so as not to count it twice).

Adding up, we get 27 different equally likely combinations of children and birth days with at least one girl born on Tuesday. Of these, 13 are combinations with two girls. This also seems completely illogical - it seems this problem was invented purely to cause headaches. If you're still puzzled, game theorist Jesper Juhl's website has a good explanation of the issue.
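You can also let the computer do the 27-combination count. A Python sketch (day 2 arbitrarily stands in for Tuesday):

```python
from fractions import Fraction
from itertools import product

# A child is a (sex, weekday) pair: 14 equally likely kinds.
child = list(product("BG", range(7)))
# Keep families with at least one girl born on "Tuesday" (day 2).
families = [f for f in product(child, repeat=2) if ("G", 2) in f]
both_girls = [f for f in families
              if f[0][0] == "G" and f[1][0] == "G"]
print(len(families), len(both_girls))               # 27 and 13
print(Fraction(len(both_girls), len(families)))     # 13/27
```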

If you are currently working on a game

If there is randomness in the game you are designing, this is a great time to analyze it. Pick an element you want to analyze and first ask yourself what you expect its probability to be - what it should be in the context of the game.

For example, if you are making an RPG and wondering what the probability should be that the player defeats a monster in battle, ask yourself what win percentage feels right. With console RPGs, players usually get very upset when they lose, so it's better if they lose rarely - 10% of the time or less. If you're an RPG designer, you probably know better than I do, but you need a basic idea of what the probability should be.

Then ask yourself whether your probabilities are dependent (as with cards) or independent (as with dice). Analyze all the possible outcomes and their probabilities, and make sure they sum to 100%. And, of course, compare your results with your expectations: do the die rolls or card draws behave the way you intended, or do the values clearly need adjusting? And if you find any shortcomings, you can use the same calculations to determine exactly how much to change them.

Homework assignment

Your homework this week will help you hone your probability skills. Here are two dice games and a card game to analyze using probability, plus a strange game mechanic I once developed that will let you try out the Monte Carlo method.

Game #1 - Dragon Dice

This is a dice game my colleagues and I once came up with (thanks to Jeb Heavens and Jesse King) that specifically blows people's minds with its probabilities. It's a simple casino game called Dragon Dice: a gambling dice contest between you and the house.

You are given a normal 1d6 die. The goal of the game is to roll a number higher than the house's. The house is given a non-standard 1d6: the same as yours, except that one of its faces shows a dragon instead of the 1 (so the house's die is dragon-2-3-4-5-6). If the house rolls the dragon, it automatically wins and you lose. If you both roll the same number, it's a draw and you both roll again. Whoever rolls the higher number wins.

Of course, the whole thing isn't quite in the player's favor, because the house has the advantage of the dragon face. Or does it? That's what you have to calculate. But check your intuition first.

Let's say the payout is 2 to 1. So if you win, you keep your bet and receive twice your bet on top. For example, if you bet 1 dollar and win, you keep that dollar and get 2 more, for a total of 3 dollars. If you lose, you lose only your bet. Would you play? Do you intuitively feel the odds are better than 2 to 1, or worse? In other words, over 3 games on average, do you expect to win more than once, less than once, or exactly once?

Once you've got your intuition sorted, do the math. There are only 36 possible positions of the two dice, so you can easily count them all. If you're unsure about that 2-to-1 offer, consider this: say you played the game 36 times (betting $1 each time). For every win you gain 2 dollars, for every loss you lose 1, and a draw changes nothing. Add up all your likely winnings and losses and decide whether you come out ahead or behind. Then ask yourself how right your intuition was. And then realize what a villain I am.
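Once you've done the 36-case count by hand, a tiny script can check your bookkeeping. A Python sketch (my own, treating a draw as "no money changes hands," as the setup above suggests); it deliberately prints the result rather than stating it here:

```python
from itertools import product

# Dragon Dice: your 1-6 die vs. the house's dragon-2-3-4-5-6 die,
# with a 2-to-1 payout on a $1 bet.
player_die = [1, 2, 3, 4, 5, 6]
house_die = ["dragon", 2, 3, 4, 5, 6]
net = 0
for p, h in product(player_die, house_die):
    if h == "dragon":
        net -= 1          # dragon: automatic loss of the $1 bet
    elif p > h:
        net += 2          # win: keep the bet and gain $2
    elif p < h:
        net -= 1          # lose the bet
    # equal numbers: a reroll, counted here as a wash
print("net result over all 36 one-dollar games:", net)
```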

And, yes, if you've already thought about this question - I'm deliberately confusing you by misrepresenting the actual mechanics of dice games, but I'm sure you can overcome this obstacle with just a little thought. Try to solve this problem yourself.

Game #2 - Throw for Luck

This is a gambling dice game called "Throw for Luck" (also "Birdcage," because sometimes the dice aren't rolled but tumbled in a large wire cage reminiscent of the one from Bingo). The game is simple and basically boils down to this: bet, say, $1 on a number from 1 to 6, then roll 3d6. For each die that shows your number, you win $1 (and keep your original bet). If your number doesn't come up on any die, the casino takes your dollar and you get nothing. So if you bet on 1 and roll three 1s, you win $3.

Intuitively, it seems this game offers even odds. Each die is an individual 1-in-6 chance of winning, so summing over the three dice your chance of winning looks like 3 in 6. But remember, of course, that you are combining three separate dice, and you are only allowed to add probabilities for mutually exclusive outcomes of one and the same die. For the three dice together, you will need to multiply instead.

Once you enumerate all the possible outcomes (probably easier in Excel than by hand, since there are 216 of them), the game still looks like an even-odds bet at first glance. In reality, though, the casino has the better chance of winning. How much better? Specifically, how much money, on average, do you expect to lose each round of play?

All you have to do is add up the wins and losses of all 216 results and divide by 216, which should be simple enough. But as you can see, there are several pitfalls here, which is why I say: if you think this game gives you an even chance of winning, you've got it all wrong.
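Try the 216-outcome count yourself first; then a short enumeration can confirm your average. A Python sketch (by symmetry it doesn't matter which single number you bet on):

```python
from fractions import Fraction
from itertools import product

# Throw for Luck: bet $1 on a number, roll 3d6, win $1 per matching die,
# or lose the bet if none match.
bet_on = 1
total = Fraction(0)
rolls = list(product(range(1, 7), repeat=3))   # all 216 outcomes
for roll in rolls:
    hits = roll.count(bet_on)
    total += hits if hits else -1
print("average net per round:", total / len(rolls))
```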

Game #3 - 5 Card Stud Poker

If you've warmed up on the previous games, let's check what we know about conditional probability using this card game. Imagine poker with a 52-card deck - specifically five-card stud, where each player receives only 5 cards. You can't discard a card, you can't draw a replacement, and there are no shared cards: you get just your 5 cards.

A royal flush is 10-J-Q-K-A in a single suit. There are four suits, so there are four possible ways to draw a royal flush. Calculate the probability that you are dealt one.

One warning: remember that you can draw these five cards in any order. You might draw the ace first or the ten first; it doesn't matter. So as you do the math, keep in mind that there are actually more than four ways to get a royal flush if you count the cards as being dealt in order.
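After you've worked it out, here is one way to check the two counting styles against each other in Python (a sketch, not the article's own solution):

```python
from fractions import Fraction
from math import comb, factorial

# Unordered: 4 royal flushes out of C(52, 5) five-card hands.
p_unordered = Fraction(4, comb(52, 5))
# Ordered: 4 * 5! winning deals out of 52*51*50*49*48 ordered deals.
p_ordered = Fraction(4 * factorial(5), 52 * 51 * 50 * 49 * 48)
print(p_unordered, p_unordered == p_ordered)   # the two counts agree
```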

Game #4 - IMF Lottery

The fourth problem can't be solved so easily with the methods we've covered today, but you can easily simulate the situation, either with a program or in Excel. This problem is how you'll practice the Monte Carlo method.

I mentioned earlier the game Chron X that I once worked on, and it had one very interesting card: the IMF Lottery. Here's how it worked: you put it into play. At the end of each round there was a 10% chance that the card would leave play and that a random player would receive 5 units of resources for each chip that had accumulated on the card. The card entered play without a single chip, but each time it survived to the start of the next round, it gained one chip.

So there was a 10% chance that when you put it into play, the round would end, the card would leave play, and no one would get anything. If that didn't happen (90% chance), there was a 10% chance (actually 9%, since it's 10% of 90%) that it would leave play in the next round and someone would receive 5 units of resources. If the card survives yet another round (10% of the remaining 81%, so an 8.1% probability), someone receives 10 units; after another round, 15; another, 20; and so on. The question: what is the overall expected value of the number of resources you get from this card when it finally leaves play?

Normally we would try to solve this by calculating the probability of each outcome and multiplying it by that outcome's value. There is a 10% chance you get 0 (0.1 * 0 = 0), a 9% chance you get 5 units (0.09 * 5 = 0.45 resources), an 8.1% chance you get 10 (0.081 * 10 = 0.81 resources toward the expected value), and so on. Then we would sum it all up.

And now the problem is obvious: there is always a chance that the card won't leave play. It can stay in the game forever, for an infinite number of rounds, so there is no way to write out every probability. The methods we've learned today don't let us handle infinite recursion, so we'll have to approximate it.

If you're good enough at programming, write a program to simulate this card. You'll want a loop that initializes a variable to zero, generates a random number, and with a 10% chance exits the loop. Otherwise it adds 5 to the variable and the loop repeats. When it finally exits the loop, increment the total number of trials by 1 and add the variable's final value to the running total of resources. Then reset the variable and start over.

Run the program several thousand times. At the end, divide the total resources by the total number of runs: that's your Monte Carlo expected value. Run the whole thing a few times to check that the numbers you get are roughly the same. If the scatter is still large, increase the number of repetitions in the outer loop until the results start to agree. You can be confident that whatever number you converge on is approximately correct.
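The loop just described translates almost line for line into Python (a sketch of the described simulation, not code from the original article):

```python
import random

# Monte Carlo for the IMF Lottery: each round the card has a 10% chance
# of leaving play; every survived round adds 5 resources to the payout.
random.seed(42)
trials = 200_000
total_resources = 0
for _ in range(trials):
    resources = 0
    while random.random() >= 0.1:   # 90%: the card stays another round
        resources += 5
    total_resources += resources
print("Monte Carlo expected value:", total_resources / trials)
```

Run it a few times (or with different seeds) and the estimate should stay nearly the same; if it doesn't, raise `trials`.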

If you're new to programming (or even if you're not), here is a quick exercise to test your Excel skills. If you're a game designer, those skills will never go to waste.

The IF and RAND functions will come in very handy here. RAND takes no arguments; it just produces a random decimal number between 0 and 1. We usually combine it with FLOOR and a little addition and subtraction to simulate the die rolls I mentioned earlier. In this case, though, we just need a 10% chance that the card leaves play, so we can simply check whether the RAND value is less than 0.1 and not worry beyond that.

IF takes three arguments. In order: a condition that is either true or false, the value returned if the condition is true, and the value returned if it is false. So the following function returns 5 ten percent of the time and 0 the other 90 percent of the time: =IF(RAND()<0.1,5,0).

There are many ways to set this up, but I would use this formula for the cell representing the first round, say cell A1: =IF(RAND()<0.1,0,-1).

Here I'm using a negative value to mean "this card hasn't left play and hasn't paid out any resources yet." So if the first round ends and the card leaves play, A1 is 0; otherwise it is -1.

For the next cell, representing the second round: =IF(A1>-1, A1, IF(RAND()<0.1,5,-1)). If the first round ended with the card leaving play, A1 is 0 (the resource payout) and this cell simply copies that value. Otherwise A1 is -1 (the card hasn't left play yet), and this cell rolls again: 10% of the time it returns 5 units of resources, and the rest of the time it stays at -1. Apply this formula to additional cells to get additional rounds; the last cell in the row gives you the final result (or -1 if the card never left play in all the rounds you simulated).

Take that row of cells, which represents one full trial of the card, and copy and paste it several hundred (or thousand) rows down. We can't run an infinite test in Excel (a sheet has a limited number of cells), but we can at least cover most cases. Then pick one cell to hold the average of the results of all the trials; Excel helpfully provides the AVERAGE() function for this.

In Windows, at least, you can press F9 to recalculate all the random numbers. As before, do this several times and check that the values you get are similar. If the spread is too large, double the number of runs and try again.

Unsolved problems

If you happen to have a degree in probability theory and the problems above seem too easy, here are two problems I've been scratching my head over for years but, unfortunately, am not good enough at math to solve.

Unsolved Problem #1: IMF Lottery

The first unsolved problem is the previous homework assignment. I can easily apply the Monte Carlo method (in C++ or Excel) and be confident in the answer to "how many resources will the player receive," but I don't know exactly how to produce an exact, provable answer mathematically (it's an infinite series).

Unsolved problem #2: Sequences of face cards

This problem (and it, too, goes far beyond the scope of this blog) was given to me by a gamer friend more than ten years ago. Playing blackjack in Vegas, he noticed an interesting thing: drawing cards from an 8-deck shoe, he saw ten face cards in a row (a face card here counts as a 10, Jack, Queen, or King, so there are 16 of them in a standard 52-card deck, or 128 in a 416-card shoe).

What is the probability that this shoe contains at least one sequence of ten or more face cards, assuming it was shuffled fairly into random order? Or, if you prefer, what is the probability that no sequence of ten or more face cards occurs anywhere?

We can simplify the task. Here is a sequence of 416 items. Each item is 0 or 1, with 128 ones and 288 zeros scattered randomly through the sequence. How many ways are there to randomly interleave 128 ones with 288 zeros, and in how many of those ways does at least one group of ten or more consecutive ones occur?

Every time I set about solving this problem, it seemed easy and obvious to me, but as soon as I delved into the details, it suddenly fell apart and seemed simply impossible.

So don't rush to blurt out an answer: sit down, think it through, study the conditions, try plugging in real numbers. Everyone I've talked to about this problem (including several graduate students working in the field) reacted roughly the same way: "It's completely obvious... no, wait, it's not obvious at all." This is the case where I don't have a method for enumerating all the options. I could, of course, brute-force the problem with a computer algorithm, but it would be far more interesting to know the mathematical solution.
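For what it's worth, the brute-force computer approach just mentioned can at least pin down the ballpark. A Monte Carlo sketch in Python (an estimate only, not the provable answer the problem asks for):

```python
import random

def longest_run(seq):
    """Length of the longest run of consecutive 1s in seq."""
    best = cur = 0
    for x in seq:
        cur = cur + 1 if x == 1 else 0
        best = max(best, cur)
    return best

# 416-card shoe: 128 face cards (1s) and 288 other cards (0s).
random.seed(3)
shoe = [1] * 128 + [0] * 288
trials = 20_000
hits = 0
for _ in range(trials):
    random.shuffle(shoe)
    hits += (longest_run(shoe) >= 10)
print("estimated P(run of 10+ face cards):", hits / trials)
```

The estimate lands well under one percent, which is at least consistent with my friend seeing it exactly once in a lot of blackjack.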

In the frequency interpretation, probability is the ratio of the number of observations in which the event in question occurred to the total number of observations. This interpretation is acceptable when the number of observations or experiments is sufficiently large. For example, if about half of the people you meet on the street are women, you can say that the probability that the next person you meet will be a woman is 1/2. In other words, the frequency of an event's occurrence in a long series of independent repetitions of a random experiment can serve as an estimate of its probability.
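The stabilization of the relative frequency m(A)/n is easy to watch in a short simulation. This is a sketch with illustrative names; a pseudo-random draw stands in for real coin tosses, with 0.5 as the fair-coin probability from the coin example:

```python
import random

def relative_frequency(n, p_true=0.5):
    """m(A)/n: the fraction of n independent trials in which event A occurred."""
    m = sum(1 for _ in range(n) if random.random() < p_true)
    return m / n

# As n grows, the relative frequency settles near the fixed number 0.5.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

Typical output shows the estimate wandering for small n and hugging 0.5 ever more tightly as n increases.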

Probability in mathematics

In the modern mathematical approach, classical (that is, non-quantum) probability is given by Kolmogorov's axiomatics. Probability is a measure P defined on a set X, called the probability space. This measure must have the following properties:

  • P(X) = 1 (the probability of the whole space is one);
  • P(A) ≥ 0 for every event A (non-negativity);
  • countable additivity: if the sets A₁, A₂, … are pairwise disjoint, then P(A₁ ∪ A₂ ∪ …) = P(A₁) + P(A₂) + ….

From these conditions it follows that the probability measure P also has the property of finite additivity: if the sets A₁ and A₂ do not intersect, then P(A₁ ∪ A₂) = P(A₁) + P(A₂). To prove this, set all of A₃, A₄, … equal to the empty set and apply countable additivity.

The probability measure need not be defined for all subsets of the set X. It is enough to define it on a sigma-algebra consisting of some of the subsets of X. Random events are then defined as the measurable subsets of the space X, that is, as the elements of the sigma-algebra.
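On a finite sample space these axioms are concrete enough to check mechanically. A minimal sketch, using a fair die as the space X and taking the sigma-algebra to be all subsets (the names are my own):

```python
from fractions import Fraction

# Finite sample space: a single roll of a fair die.  Here the sigma-algebra
# can be taken as ALL subsets of X, so every subset is a measurable event.
X = frozenset(range(1, 7))

def P(event):
    """Uniform probability measure on X: each outcome carries mass 1/6."""
    assert event <= X, "an event must be a subset of the sample space"
    return Fraction(len(event), len(X))

A1 = frozenset({2, 4, 6})   # the event "even"
A2 = frozenset({1, 3})      # disjoint from A1

print(P(X) == 1)                    # axiom: the whole space has probability 1
print(P(A1 | A2) == P(A1) + P(A2))  # additivity for disjoint events
```

Both checks print True; exact `Fraction` arithmetic avoids any floating-point fuzz in verifying the axioms.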

The meaning of probability

When we find that the reasons for some possible fact actually occurring outweigh the reasons against it, we consider that fact probable; otherwise, improbable. This preponderance of positive grounds over negative ones (and vice versa) can come in an indefinite range of degrees, so that probability (and improbability) can be greater or lesser.

Complex individual facts do not admit an exact calculation of the degrees of their probability, but even here it is important to establish some broad subdivisions. For example, in the legal field, when a personal fact subject to trial is established on the basis of testimony, it always remains, strictly speaking, merely probable, and it is necessary to know how significant this probability is; Roman law adopted a fourfold division here: probatio plena (where the probability practically turns into certainty), then probatio minus plena, then probatio semiplena major, and finally probatio semiplena minor.

Beyond the probability of a fact itself, the question may arise, both in law and in morals (from a certain ethical point of view), of how likely it is that a given particular fact constitutes a violation of the general law. This question, which serves as the main motive in the religious jurisprudence of the Talmud, also gave rise to very complex systematic constructions and a huge dogmatic and polemical literature in Roman Catholic moral theology, especially from the end of the 16th century (see Probabilism).

The concept of probability admits a definite numerical expression only when applied to facts that belong to certain homogeneous series. In the simplest example, when someone tosses a coin a hundred times in a row, we have one general or large series (the sum of all tosses of the coin), consisting of two particular or smaller series, in this case numerically equal: the "heads" tosses and the "tails" tosses. The probability that this time the coin will land heads, that is, that this new member of the general series will belong to that one of the two smaller series, equals the fraction expressing the numerical ratio between that small series and the large one, namely 1/2; the same probability belongs to either of the two particular series.

In less simple examples the conclusion cannot be drawn directly from the data of the problem itself but requires prior induction. For instance: what is the probability that a given newborn will live to be 80 years old? Here there must be a general, or large, series of some number of people born in similar conditions and dying at different ages. This number must be large enough to smooth out random deviations, yet small enough to preserve the homogeneity of the series: for a person born, say, in St. Petersburg into a wealthy, cultured family, the city's entire million-strong population, a significant part of which consists of people from various groups who may die prematurely (soldiers, journalists, workers in dangerous professions), is too heterogeneous a group for a real determination of the probability. Suppose this general series consists of ten thousand human lives; it contains smaller series representing the numbers of people surviving to each particular age, one of which is the number of people living to 80.

But the size of this smaller series (like all the others) cannot be determined a priori; it is found purely inductively, through statistics. Suppose statistical studies have established that of 10,000 middle-class St. Petersburg residents, only 45 live to 80. Then this smaller series relates to the larger one as 45 to 10,000, and the probability that a given person belongs to this smaller series, that is, lives to be 80, is expressed by the fraction 0.0045. The study of probability from the mathematical point of view constitutes a special discipline: probability theory.
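The arithmetic of the example can be checked in a couple of lines (the numbers 45 and 10,000 are the text's own hypothetical statistics):

```python
survivors_to_80 = 45      # members of the smaller series: lived to age 80
series_size = 10_000      # size of the homogeneous general series
probability = survivors_to_80 / series_size
print(probability)        # → 0.0045
```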


Wikimedia Foundation. 2010.
