Unfinished sentences. Psychological test


James Bach recorded a video to demonstrate quick stress testing. In his example, the approach was to feed the application's wizard a huge amount of data, essentially forcing the application to overload itself.

The video is almost six minutes long. About halfway through, James says: “You might be wondering why I don’t stop now. The reason is that we are seeing a steady deterioration in the situation. We could stop now, but we might see something worse if we continue.” So he continued the test. Soon afterwards, James proposed heuristics for stopping: we stop when 1) we have identified a sufficiently serious problem, or 2) there is no obvious change in the program's behavior and the program as a whole is stable, or 3) the value of continuing the test no longer justifies the cost. Those were the heuristics for stopping that particular test.

About a year after I first saw this video, I decided to describe heuristics for stopping testing more fully in a column for Better Software magazine. James and I talked it over beforehand. The column can be found online. Another year later, the column grew into an informal talk, which I gave in several places.

About six months after that, we both came up with even more heuristics for stopping testing. We discussed them at STAR East 2009, and Dale Emery and James Lyndsay, who happened to be nearby at the time, joined the discussion. In particular, Dale suggested that in battle, firing can stop in several cases: a temporary lull, a “cease fire” command, a ceasefire agreement between the parties, the withdrawal of the parties to their initial positions, or the disarmament of the enemy. I found this interesting.

In general, I will now list all the heuristics we found. I emphasize that these stopping heuristics are precisely heuristics. Heuristics are quick, inexpensive ways of solving problems or making decisions. Heuristics are fallible: they may or may not work. Heuristics are not fully abstract; they can overlap and intersect with one another. Heuristics also depend on context, so they are meant to be applied by people who have the knowledge and skill to use them wisely. Below I have listed the heuristics and, for each of them, some questions you can use to check whether applying it is justified.

1. Heuristic"Timeit turned out. For many testing professionals, this is the most common heuristic: we stop testing when the allotted time for it runs out.

Have we obtained the information we need about the product? Is the risk of stopping testing too high? Was the deadline artificial or arbitrary? Will there be additional development that will require additional testing?

At one time, the Joseph Sacks sentence completion test impressed me with its ability to structure my chaotic ideas about the world around me, about myself and about others.

The technique includes 60 unfinished sentences, conditionally divided into 15 groups that characterize your attitude towards:

Friends;

Representatives of the same and opposite sex;

Sexual relationships;

Authority and Subordination;

Past and future.

Some groups of sentences also touch on the fears and concerns that affect your life, point to unresolved feelings of guilt, and shed light on your life goals.

Excluding the scoring and interpretation, the test takes 20 minutes or more.

Instructions: on the test form, complete each sentence with one or more words.

Test form

1. I think that my father rarely...

2. If everything is against me, then...

3. I always wanted...

4. If I held a leadership position...

5. The future seems to me...

6. My boss...

7. I know it’s stupid, but I’m afraid...

8. I think that a true friend...

9. When I was a child...

10. The ideal woman (man) for me is...

11. When I see a woman next to a man...

12. Compared to most other families, my family...

13. I work best with...

14. My mother and I...

15. I would do everything to forget...

16. If only my father wanted...

17. I think that I am capable enough to...

18. I could be very happy if...

19. If anyone works under my leadership...

20. I hope for...

21. At school my teachers...

22. Most of my friends don’t know that I’m afraid...

23. I don’t like people who...

24. Once upon a time...

25. I think that most boys (girls)…

26. Married life seems to me...

27. My family treats me like...

28. The people I work with...

29. My mother...

30. My biggest mistake was...

31. I would like my father...

32. My greatest weakness is...

33. My hidden desire in life...

34. My subordinates...

35. The day will come when...

36. When my boss approaches me...

37. I wish I could stop being afraid...

38. Most of all I love those people who...

39. If I were young again...

40. I think that most women (men)…

41. If I had a normal sex life...

42. Most of the families I know...

43. I like to work with people who...

44. I think that most mothers...

45. When I was young, I felt guilty if...

46. I think that my father...

47. When I'm unlucky, I...

48. Most of all in life I would like...

49. When I give instructions to others...

50. When I’m old…

51. People whose superiority over me I recognize...

52. My fears have more than once forced me...

53. When I’m not there, my friends...

54. My most vivid childhood memory is...

55. I really don’t like it when women (men)…

56. My sex life...

57. When I was a child, my family...

58. People who work with me...

59. I love my mother, but...

60. The worst thing that happened to me was...

Processing and interpretation of results

For each group of sentences, a score is derived that characterizes the given system of relations as positive (1), negative (2) or indifferent (0).

For example, “The future seems to me...”:

1) gloomy, bad, strange (2)

2) interesting, intriguing (1)

3) unclear, unknown (0)

Such a quantitative assessment makes it easier to identify a disharmonious system of relations. But more important, of course, is the qualitative study of the completed sentences.

Key

Group 1. Attitude towards father: 1, 16, 31, 46.
Group 2. Attitude towards yourself: 2, 17, 32, 47.
Group 3. Unrealized opportunities: 3, 18, 33, 48.
Group 4. Attitude towards subordinates: 4, 19, 34, 49.
Group 5. Attitude towards the future: 5, 20, 35, 50.
Group 6. Attitude towards superiors: 6, 21, 36, 51.
Group 7. Fears and concerns: 7, 22, 37, 52.
Group 8. Attitude towards friends: 8, 23, 38, 53.
Group 9. Attitude towards your past: 9, 24, 39, 54.
Group 10. Attitude towards the opposite sex: 10, 25, 40, 55.
Group 11. Sexual relations: 11, 26, 41, 56.
Group 12. Relationships with family: 12, 27, 42, 57.
Group 13. Attitude towards co-workers: 13, 28, 43, 58.
Group 14. Attitude towards mother: 14, 29, 44, 59.
Group 15. Feelings of guilt: 15, 30, 45, 60.
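For readers who want to tally the results automatically, here is a minimal Python sketch (not part of the original method; the English group names follow the key above, and the per-item ratings are assumed to have been assigned by hand, exactly as described in the scoring section):

```python
# Hypothetical sketch: summing Sacks-style ratings by group using the key above.
# Ratings per item: 1 = positive, 2 = negative, 0 = indifferent.

KEY = {
    "Attitude towards father": [1, 16, 31, 46],
    "Attitude towards yourself": [2, 17, 32, 47],
    "Unrealized opportunities": [3, 18, 33, 48],
    "Attitude towards subordinates": [4, 19, 34, 49],
    "Attitude towards the future": [5, 20, 35, 50],
    "Attitude towards superiors": [6, 21, 36, 51],
    "Fears and concerns": [7, 22, 37, 52],
    "Attitude towards friends": [8, 23, 38, 53],
    "Attitude towards your past": [9, 24, 39, 54],
    "Attitude towards the opposite sex": [10, 25, 40, 55],
    "Sexual relations": [11, 26, 41, 56],
    "Relationships with family": [12, 27, 42, 57],
    "Attitude towards co-workers": [13, 28, 43, 58],
    "Attitude towards mother": [14, 29, 44, 59],
    "Feelings of guilt": [15, 30, 45, 60],
}

def group_scores(ratings: dict[int, int]) -> dict[str, int]:
    """Sum the manually assigned ratings (0, 1 or 2) for the four items of each group."""
    return {group: sum(ratings.get(item, 0) for item in items)
            for group, items in KEY.items()}

if __name__ == "__main__":
    # Example: item 5 rated negative (2), item 20 positive (1), items 35 and 50 indifferent (0).
    sample = {5: 2, 20: 1, 35: 0, 50: 0}
    print(group_scores(sample)["Attitude towards the future"])  # 3
```

A higher total for a group simply flags it for closer qualitative reading, in line with the note above that the quantitative score only facilitates the analysis.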

The “Unfinished Sentences” technique has long been used in experimental psychological practice. However, it is not reserved for professionals alone: this test can serve as a starting point for studying yourself and your attitude towards others and the world in general. If the results upset you, do not rush to despair: discuss them with a psychologist or with someone you trust. In any case, each group of the test deserves separate discussion and reflection, as well as additional confirmation by other methods.

Organization of the testing process. Software development is largely a process of communicating information about the final program and translating that information from one form to another. Moreover, the overwhelming majority of software errors are caused by flaws in the organization of work, insufficient mutual understanding, and distortions introduced while transmitting and translating information.

Many errors can therefore be avoided by making the development process itself clearer. It follows that a separate verification step should be included at the end of each stage, with the goal of catching as many errors as possible before moving on to the next stage. For example, the specification is checked by comparing it with the output of the previous stage, and each error found is returned to the specification development process for correction.

In addition, specific testing processes should be targeted at specific development stages. This focuses each testing process on a translation step, resulting in a specific class of errors being captured.

Relationship between development and testing processes.

The actual testing process begins with checking the source code. For this purpose, static testing methods are used.

This is followed by module testing, which checks compliance with the module interface specifications, and then by testing of the interfaces and of the assembled modular structure, which checks compliance with the system design and/or the design of the structure of an individual program.

After this comes function testing, which consists of finding differences between the program and its external specification. When testing functions, functional testing techniques are usually used. It is assumed that at an earlier stage of testing modules, the required logic coverage criterion characteristic of structural testing methods is satisfied.

To compare the development results with the original goals, comprehensive testing, also called system testing, is carried out, in which all the software is tested as a whole. When the differences between the results obtained and the original development goals are examined, most attention is paid to translation errors made while developing the external specification. This makes comprehensive testing vital, because it is at this stage that the most serious errors are discovered.



The testing process ends with trials of the software, which check the completeness of the solution of the functional tasks, its quality, and the software's compliance with the technical documentation.

System testing. Unlike function testing, system tests cannot be derived from the external specification, since that would defeat the purpose of such testing. On the other hand, the document reflecting the goals of the system as such (in our case, the technical specification) cannot be used to formulate the tests directly, since by definition it does not contain precise descriptions.

The problem is resolved by using the operational user documentation. System tests are designed by analyzing the system's goals in the light of the user documentation. This practice makes it possible to compare not only the program with the source document, but also the results of its operation with the user documentation, and the user documentation with the source document.



There are several categories of tests, each aimed at testing specific purposes. These include implementation completeness testing, volume limit testing, load limit testing, usability testing, security testing, hardware configuration testing, compatibility testing, reliability testing, recovery testing, maintainability testing, installation usability testing, and documentation testing.

Testing the completeness of the implementation is the most obvious type of system testing; it checks that every item of the source document has been implemented. The procedure is to go through the source document sentence by sentence: if a sentence states a specific requirement, it is determined whether the program fulfils it.

Limit testing involves running a program on large volumes of data, preferably larger than the proposed operational volume. For example, a large source program is fed to the input of the compiler as a test, a program containing a thousand modules is fed to the input of the link editor, and a circuit containing thousands of components is fed to the input of the electronic circuit modeling program. The purpose of capacity testing is to demonstrate that the program cannot handle the amount of data specified in its original objectives.

Testing at maximum loads reflects the fact that the demand for memory and processing power while a task is being solved on a computer varies considerably with the composition and volume of the input data. At high input-data rates, the balance between the time available for solving the set of software tasks in real time and the actual computer performance may be disrupted. The purpose of testing at maximum loads is to show that the software does not meet its performance goals.

Usability testing involves identifying psychological (user) problems that arise during operation. This testing should establish, at a minimum, the following:

  1. Is the designed interface adapted to the end user's level of knowledge and training, and does it allow them to work in a real operating environment?
  2. Are the program's output messages meaningful, clear, and non-offensive?
  3. Is the error diagnosis clear?
  4. Do the entire set of user interfaces exhibit consistency and uniformity in syntax, conventions, semantics, format, style, and abbreviations?
  5. Does the system contain options that are excessive or unlikely to be used?
  6. Does the system issue any acknowledgments for all input messages?
  7. Is the software easy and pleasant to use?

Security testing consists of checking whether information is protected from unauthorized access. To test security, it is important to build tests that violate software security. One way to develop such tests is to study known security problems in similar existing systems and build tests that allow you to check how similar problems are solved in the system under test.

Hardware configuration testing is driven by the fact that operating systems, DBMSs, and communications systems must support multiple hardware configurations (for example, different types and numbers of I/O devices and communication lines, different amounts of memory, etc.). Often the number of possible configurations is too large to test the software on each of them. However, the program should be tested with at least each type of hardware at the minimum and maximum configurations. If you can change the configuration of the software itself, then you need to test all its possible configurations.

Compatibility testing is driven by the fact that most of the software being developed is not completely new. It often replaces imperfect, outdated information processing systems or manual processes. Therefore, when developing software, it is necessary to ensure compatibility with the environment in which the replaced systems operated, and, if necessary, create conversion procedures to ensure the transition from one data processing method to another. In this case, as with other forms of testing, tests should be focused on ensuring compatibility and operation of the conversion procedure.

The purpose of all types of testing is to increase software reliability, but if the source document stating the project goals contains specific requirements, for example a required mean time between failures or an acceptable number of errors, then the software under test must be examined against those requirements. This is done through reliability testing, for which a number of mathematical reliability models exist. Later, in the section on test completion criteria, two of them will be considered: the so-called Mills model and a simple intuitive model.

For operating systems, DBMSs, and telecommunication systems, it is often specified how the system should recover from software errors, hardware failures, and data errors. During system testing it is required to show that these recovery functions are not performed correctly; recovery testing is used for this purpose. Software errors can be deliberately introduced into the system to see whether it recovers from them, hardware failures can be simulated, and data errors (noise on communication lines or invalid pointer values in the database) can be deliberately created or simulated.

The source document sometimes contains specific goals for ease of maintenance, or maintainability, of the software. They may define the maintenance tools that must be supplied with the software (for example, memory dump programs, diagnostic programs, etc.), the average time to locate a bug, the procedures associated with maintenance, and the quality of the documentation of the program's internal logic. Naturally, all these goals must be tested; maintainability testing is used for this purpose.

The purpose of installability testing is to show that the goals of adapting the software to specific operating conditions are not met.

The system check also includes verifying the accuracy of the user documentation. Much of this verification takes place while the earlier system tests are being derived from that documentation. In addition, the user documentation should be inspected for accuracy and clarity, much like a source-code inspection. Any examples given in the documentation must be turned into tests and run against the software.

Test completion criteria. When testing, the question arises of when it should be considered complete, since it is impossible to determine whether the error just found is the last one.

In practice, two criteria are usually followed: testing stops when the time allotted for it in the work schedule has expired, or when all the tests run unsuccessfully, that is, they execute without revealing any errors.

Neither criterion is precise or logical enough: the first contains no assessment of the quality of testing and can be satisfied while doing nothing at all, while the second does not depend on the quality of the test data sets.

However, the second criterion can be improved by tying it to specific test design methodologies. For example, the completion condition for module testing can be defined as follows: the tests are derived in two ways, by satisfying combinatorial condition coverage and by boundary value analysis of the module interface specification, and testing stops only when all the resulting tests eventually run without revealing errors.

Completion of function testing can likewise be declared when the tests obtained by the functional diagram method, equivalence partitioning, and boundary value analysis all run without revealing errors.

However, these criteria are, first, useless at testing phases where particular methods are inapplicable, for example the system testing phase; second, such a measure is subjective, because there is no guarantee that the specialist applied the required methodology correctly and thoroughly; and third, instead of setting a goal and allowing the most suitable way of achieving it to be chosen, these criteria prescribe specific methods without stating any goals.

Sometimes a criterion is used that relies largely on common sense and on information about the number of errors found during testing. To apply it, the number of errors found is plotted against the time of their detection. The shape of the resulting curve determines whether it is worth continuing testing. The figure shows examples of plots of the number of errors against testing duration.

Dependence of the number of errors on the duration of testing.

The example shows that if testing has already run for a long time and the number of errors found keeps growing as testing continues, then testing should naturally be continued. If at some point the number of newly detected errors begins to fall, gradually tending to zero or reaching it, then the testing process can be considered complete.

However, this criterion is also not effective enough, since there is no certainty that, in the latter case, the number of detected errors will not start to grow again later.
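As a rough illustration of this criterion (not from the source; the window size and the stopping rule below are arbitrary choices made for the example), here is a short sketch that compares recent and earlier error-detection rates:

```python
# Hypothetical sketch: judging whether the error-detection curve is flattening.
# Input: the number of new errors found in each successive, equal-length test period.

def detection_trend(errors_per_period: list[int], window: int = 3) -> str:
    """Compare the average of the last `window` periods with the preceding `window` periods."""
    if len(errors_per_period) < 2 * window:
        return "not enough data: keep testing"
    recent = sum(errors_per_period[-window:]) / window
    earlier = sum(errors_per_period[-2 * window:-window]) / window
    if recent < earlier and recent <= 1:
        return "detection rate is falling towards zero: stopping may be justified"
    if recent < earlier:
        return "detection rate is falling, but errors are still being found: keep testing"
    return "detection rate is not falling: keep testing"

print(detection_trend([12, 9, 10, 7, 4, 2, 1, 0]))  # rate falling towards zero
```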

Another approach to determining the testing completion criterion is possible. Since the purpose of testing is to find errors, a certain predetermined number of errors can be selected as a criterion, corresponding to a certain part of the expected total number of errors. However, there are a number of problems with using this criterion. First, it is necessary to estimate the total number of errors in the program. Secondly, it is necessary to find out what percentage of these errors can be determined by testing. Finally, it is necessary to determine what part of the errors arose during the design process and during which testing phases it is advisable to identify them.

To estimate the total number of errors and identify the possible percentage of errors that can be detected by testing, you can use methods used in determining reliability indicators (reliability models), for example, using the Mills model or a simple intuitive model, which we will consider a little later.

Another way to obtain such an estimate is based on statistical averages widely used in industry. For example, the number of errors present in typical programs by the time coding is finished (before walk-throughs or inspections) is approximately 4 to 8 per 100 program statements.

The Mills model is based on seeding errors into the program in order to estimate, during testing, the number of genuine errors it still contains. By testing the program for some time and separating the detected errors into introduced ones and the program's own, one can estimate the number of errors originally contained in the program and the number remaining at the time of the assessment.

If S errors are randomly introduced into the program, and n + V errors are found during testing (n is the number of the program's own errors found, V is the number of introduced errors found), then the number of own errors originally contained in the program can be estimated by the formula N = S × n / V.

For example, if 20 own and 10 introduced errors have been detected, with the total number of initially introduced errors equal to 25, then N = 25 × 20 / 10 = 50, i.e. at this stage it is assumed that the program originally contained 50 of its own errors, and testing should continue.

The number N can be estimated after each new error detection.

The program must be debugged until all the introduced errors have been detected. Once they have all been found, a confidence level C can be calculated, indicating the probability that the estimate is correct:

C = S / (S + k + 1),

where k is the asserted number of the program's own errors, S is the number of introduced errors, and n is the number of detected own errors; the formula applies as long as n does not exceed k.

For example, if we claim that there are no errors left in the program (k = 0), 6 errors were introduced into the program and all of them were detected while no own errors were found, then C = 6 / (6 + 0 + 1) = 0.86. On the other hand, to achieve a confidence level of 0.98, 39 errors would have to be introduced into the program: C = 39 / (39 + 0 + 1) = 0.98.
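Both estimates are easy to compute directly; here is a minimal Python sketch (the function names are my own, and the formulas are the ones given above):

```python
# Hypothetical sketch of the Mills error-seeding estimates described above.

def estimated_own_errors(seeded: int, own_found: int, seeded_found: int) -> float:
    """N = S * n / V: estimated number of the program's own errors."""
    return seeded * own_found / seeded_found

def confidence(seeded: int, claimed_own: int, own_found: int) -> float:
    """C = S / (S + k + 1): confidence that the program holds at most k own errors,
    computed once every seeded error has been found."""
    if own_found > claimed_own:
        raise ValueError("more own errors were found than claimed; the formula does not apply")
    return seeded / (seeded + claimed_own + 1)

print(estimated_own_errors(seeded=25, own_found=20, seeded_found=10))  # 50.0
print(confidence(seeded=6, claimed_own=0, own_found=0))    # 0.857... (about 0.86)
print(confidence(seeded=39, claimed_own=0, own_found=0))   # 0.975 (about 0.98)
```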

The Mills model is not without shortcomings, the most significant being the need to introduce artificial errors (a poorly formalized process) and the rather loose choice of the value k (the claimed number of own errors), which rests solely on the intuition of the person making the assessment and therefore admits a strong subjective influence.

A simple intuitive model involves testing by two groups of programmers independently of each other, using independent test suites.

During the testing process, each group records all the errors it finds. When assessing the number of errors remaining in the program, the test results of both groups are collected and compared.

It turns out that the first group discovered N1 errors, the second group N2 errors, and N12 errors were discovered by both groups.

If we denote by N the unknown number of errors present in the program before testing began, then the testing efficiency of each group can be defined as E1 = N1 / N and E2 = N2 / N.

Assuming that the ability to detect errors is the same for both groups, we can conclude that if the first group found a certain fraction of all the errors, it would have found the same fraction of any randomly selected subset of them.

In particular, it can be assumed that the first group finds the same fraction of the errors detected by the second group, that is, N12 / N2 = N1 / N = E1.

The value of N12 is known, and E1 and E2 can be estimated as N12 / N2 and N12 / N1 respectively. Thus, the unknown number of errors in the program can be estimated by the formula N = N1 × N2 / N12.
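A minimal sketch of this two-group estimate (the function name and the sample figures below are invented for illustration, not taken from the source):

```python
# Hypothetical sketch of the simple intuitive (two-group) estimate described above.

def estimate_total_errors(n1: int, n2: int, n12: int) -> float:
    """N = N1 * N2 / N12: estimated number of errors present before testing began."""
    if n12 == 0:
        raise ValueError("no common errors found; the estimate is undefined")
    return n1 * n2 / n12

# Example: group 1 found 25 errors, group 2 found 30, and 15 of them were found by both.
n = estimate_total_errors(25, 30, 15)
print(n)                   # 50.0 errors estimated in total
print(n - (25 + 30 - 15))  # 10.0 errors estimated to remain undetected
```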

Taking this model further, and assuming that both testing groups have an equal probability of finding "common" errors, the estimate can also be expressed through the probability P(N12) of two independent groups detecting the same N12 "common" errors during testing.

To determine what share of errors arises during the design process, one can use data indicating that in large software systems approximately 40% of all errors are coding and logic-design errors, while the rest are introduced earlier, during design.

Based on this, let us look at an example. Suppose a program of 10,000 statements is being tested, and the number of errors remaining after inspection of the source code is estimated at 5 per 100 statements. The goal of testing is to detect 98% of the coding and logic errors and 95% of the design errors.

The total number of errors is 500. It is assumed that 200 of these are coding and logic errors, and 300 are design errors. Therefore, it is required to find 196 coding and logic errors and 285 design errors.

For reasons of common sense, it is logical to distribute the percentage of errors found by testing stages, as shown in the table.

Percentage of errors found, by testing stage:
module testing: 65% of the coding and logic errors;
function testing: 30% of the coding and logic errors and 60% of the design errors;
system testing: 3% of the coding and logic errors and 35% of the design errors.

Based on these numbers, the following criteria can be determined (a short calculation sketch follows the list).

  1. During the module testing stage, 130 errors must be found and corrected (65% of the estimated 200 coding and logic errors).
  2. During the system testing stage, 6 coding and logic errors and 105 design errors must be found and corrected (3% of 200 and 35% of 300).
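For completeness, a small sketch of the arithmetic behind this example (the percentages are those from the table and the goals stated above; the variable names are mine):

```python
# Hypothetical sketch of the error-budget arithmetic in the example above.

total_statements = 10_000
errors_per_100_statements = 5
total_errors = total_statements // 100 * errors_per_100_statements  # 500

coding_logic_errors = round(total_errors * 0.40)      # 200 coding and logic errors
design_errors = total_errors - coding_logic_errors    # 300 design errors

# Overall testing goals: 98% of coding/logic errors, 95% of design errors.
goal_coding_logic = round(coding_logic_errors * 0.98)  # 196
goal_design = round(design_errors * 0.95)              # 285

# Per-stage targets taken from the table above.
module_testing = round(coding_logic_errors * 0.65)                                  # 130
function_testing = round(coding_logic_errors * 0.30) + round(design_errors * 0.60)  # 60 + 180 = 240
system_testing = round(coding_logic_errors * 0.03) + round(design_errors * 0.35)    # 6 + 105 = 111

print(total_errors, goal_coding_logic, goal_design)      # 500 196 285
print(module_testing, function_testing, system_testing)  # 130 240 111
```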

Another obvious problem with this type of criterion is overestimation. What if, in the example above, fewer than 240 errors remain by the time function testing starts? Then, by this criterion, the function testing phase can never be completed. To avoid such a situation, the error-count criterion should be supplemented with a time period within which the errors must be found. If errors are being found quickly, testing at that stage should continue until the end of the allotted interval. If overestimation has occurred, i.e. the time has run out and the specified number of errors has not been detected, a disinterested expert should be invited to give an opinion on the cause: either the tests are ineffective, or the tests are good but the program really does contain few errors.

The best completion criterion is a combination of all three approaches considered. For module testing, the first of them is optimal, because at this phase most projects do not track the number of errors detected; what matters is that a defined set of test design methods is used. During the function and system testing phases, the stopping criterion may be reaching a specified number of detected errors or reaching the date set by the work schedule, provided that a plot of the number of errors against testing time shows a declining detection rate.

When should testing be stopped, and should it be stopped at all? Has the full amount of information been processed and has everything been taken into account? These questions are relevant for every tester. So let's pause for a minute and think: at what point is it necessary and possible to interrupt a testing process that tends towards infinity?

Reason for stopping: “Deadlines are running out! Time is money!”

Often a project has clearly defined deadlines that the customer is not always willing to move. In this case, the command to “finish testing!” is driven precisely by the deadlines, and this is an important criterion. Such a scenario cannot be called ideal (there is always sorely too little time for a complete check, and quality often suffers), but it does happen.

Example from practice. I remember a situation where the quality of a product suffered because of tight deadlines. We were testing an online store for household goods, and along with a new promotional discount for registered customers only, a bug crept in: several promotions that were valid at the time could no longer be activated. As a result, the release turned into “a release plus a couple more intense days of bug fixing”. It would probably have been much better to push the hard deadline back a day or two and allow the new functionality to be tested further... but in real life situations vary.

Conclusion. The main task of a tester under time pressure is to cover the maximum possible number of critical test cases (those of high and medium priority), to record all defects found (so that they are not lost in the churn of time and tasks), and to produce a report on the real volume of work done. As a result, the tester should have a complete picture of what has been tested and a list of what has not yet been checked (to define the scope of future work).

Reason for stopping: “This is not the final stop, just an intermediate one”

It happens that testing has to be forcibly paused, because “something” critically blocks a proper assessment of the object under test, and because of it the whole testing effort may later fall apart. In this case it is better to stop and wait until the problem is resolved.

Example from practice. We were testing a fairly large piece of medical software. On the test bench it was impossible to fully test the new functionality (sending emails to clients when a form was filled in, and the data in the client's personal account). The task was quite extensive and covered many aspects: activation of individual sections once documents are fully uploaded, restrictions on access to sections at a certain level of profile completion, and others. To the pre-release question “has everything been tested, and can we finish testing?” it was simply impossible to give an unambiguous answer: the check was partially blocked because the receipt of some emails could not be verified in the test environment. As a result, critical errors appeared on the client side at release: clients did not receive the necessary notification emails and therefore could not get full access to their profiles. To avoid such situations, the following solution was found: after some rework, sending and receiving emails became possible on the test bench, which made it possible to test this important part of the functionality further. Including these checks in all subsequent regression runs made it possible to assess the product's readiness more accurately before releasing it to the client side.

Conclusion. Analyzing the errors allowed us to draw the right conclusions and eliminate everything that was blocking us. And yet, in such a situation, reworking the test environment and tooling only after the errors have already been found can hardly be considered a good option. It would undoubtedly have been more appropriate to stop the testing process and complete the initially incomplete functionality of the test environment, in order to prevent critical problems from appearing on the client side at all.

Reason for stopping: “Stop, you can’t go on” or “Stop you can’t, go on”? Where do you put the comma, and why the confusion?

Is it right to fix bugs on the fly, during a test run that is not interrupted? Logic dictates that the process should be stopped and restarted from the beginning after the fix is made, since any correction of one error may introduce a dozen new ones.

Example from practice. A fairly common case, familiar to any tester, immediately comes to mind: critical bugs are discovered during testing when half of the test cases have already been checked and their results recorded. Sometimes developers fix a bug so quickly that they “forget” to notify the diligent tester, who is hurrying to get through all the planned regression test cases. As a result, testing continues after the bug fix instead of being stopped and started again, and some errors will never be detected.

Conclusion. In such situations it is important for the developer to report their fixes to the tester promptly, so that the tester can stop testing and re-run the test cases, either all of them or only the most critical, high-priority ones (if little time is left). This helps avoid future confusion over where new defects in the product came from and who is responsible for them.

Reason for stopping: “The order to retreat has arrived!”

It happens that the customer suspends the check literally at the last stage. There can be many reasons for this: a more important task has appeared, some functionality needs further clarification, release priorities have been reassessed, or the current plan has been revised. Our task is to pause the process without forgetting anything!

Example from practice. Once an almost fully tested release was postponed. It seemed that everything was ready: all the blocks had been thoroughly tested, all the tasks completed, and the finished product could be released to the delight of the users. But the customer suddenly decided it would be better to do everything quite differently, and the nearly finished release had to be put on hold. The downside of this situation is the tester's wasted time; the upside is the written test cases, which can be reused to test the functionality of other software.

Conclusion. In this case it is important for the tester to write high-quality test cases that can be used later, either on similar tasks or (if work resumes) on the cancelled or postponed release.

Reason for stopping: “That’s it, I’m tired, that’s enough!”

A stop can also happen simply because the tension has reached its peak. The desire to do as much as possible in as little time as possible can sometimes have a negative effect on the results of the work.

Example from practice. One of our projects had a rather long release, which we tested intensively and actively. The testing in my head did not stop even while I was asleep. And at the moment when an error appeared literally before my eyes, clearly “signaling” to me from the logs, I simply did not see it. At such moments you need to be able to tell yourself: “Stop, take a break, otherwise you will make a mistake, miss a bug, and your attentiveness will drop to zero!” And attentiveness is the tester's main quality. Of course, the process itself cannot simply be abandoned, but you must set aside personal time for rest.

Conclusion. In such cases, stopping testing is a mandatory and important point for the tester. When finishing work, you need to rest and switch to something else in order to avoid “blurred eyes”.

Reason for stopping: “Any doubts? Stop!”

Before each release, the tester, assessing the work done and the completed set of test cases, sums up: has everything been tested? Naturally, there will be a desire to continue the checking, which does not always fit the time available. Still, reasonable doubts must at least be voiced. Even if a bug is “caught” at the last stage and fixing it will delay the whole release, the error must never be left unattended; it is better to stop the running machine and give time for the fix.

Example from practice. In my experience there have been situations when an error was discovered in the very last steps (one might even say the last minutes) of regression testing. Was this the tester's fault (that is, mine)? Yes, and it was a good “kick” for further work on my mistakes. But the bug had to be eradicated. The problem was pulled out of the release for rework, and the release itself went quite successfully. Don't forget: the customer values the quality of the check more than meeting a deadline at the expense of quality.

Conclusion. Every step of the testing process matters. Poorly worked-through material or incomplete coverage of the task with test cases can cause the tester to miss an important, critical bug. No matter at what stage of testing this is discovered, it is important to understand that in such cases you must stop, assess the situation and decide on a further plan of work!

Reason for stopping: “According to my desire, stop!”

An important role in testing is played by the specialist's understanding of how important the product being made is. It is bad if a person is indifferent to the final product. In that case testing may stop simply because the tester is tired of the process itself (“it'll do!”).

On this occasion, an old joke comes to mind:
“A man had a suit made at a tailor's shop. He came home and put it on. His wife was horrified:
- What have they sewn for you? Look: one sleeve is longer, the other is shorter. The jacket's hems are uneven and the trouser legs are different. Take it all back!
The husband went back to the tailor:
- What have you sewn for me? Look! The trouser legs are different lengths!
- Just bend one leg at the knee, since you don't walk on straight legs anyway. And all will be well.
- Look, the sleeves are different lengths!
- So what? You don't hold your arms straight at your sides. Bend your elbows. There! Wonderful!
- And the hems of the jacket? What do I do with them?
- Just lean a little to one side. Everything is fine!
The man went out in his new suit. People at the bus stop:
- Look at that freak! And how well his suit fits!”

For a tester, a negligent attitude towards the process is simply unacceptable. All shortcomings eventually become apparent, which ultimately leads to disastrous results.

Example from practice. Fortunately, my colleagues and I have never encountered such situations: we love our work and respect end users (after all, our mistakes affect their experience of interacting with the product). I hope something like this never happens. The main thing is not to forget that this is possible and to avoid such cases.

Conclusion. Testing cannot be stopped merely at the tester's whim; every stop must be justified. The decision to stop the process is logical only when a whole set of conditions has been properly worked through: the full set of test cases for the task has been written, priorities have been set correctly so that time can be estimated for urgent or quick checks, all tasks have been fully analyzed and checked against the technical requirements at the initial familiarization stage, and everything has been taken into account at the release planning stage.

And finally... Drum roll... The last, but most desirable reason for stopping: “Ready, you can pick it up!”

When planning a new release begins, a specific testing plan, priorities and scope are laid down. Proper planning leads to positive and high-quality results. When all test results fully satisfy the quality criteria, you can safely say to yourself: “Stop, here we did everything we could!” But for this it is necessary that all the errors found are corrected, all planned test cases are passed (and not a single bug higher than minor is found), all necessary edits are made, and the result of acceptance testing is completely positive. And this actually happens! In this case, the customer is satisfied, and the tester can safely give himself a “medal” for good work. And how this sets you up for further “exploits”!

Example from practice. Some time ago we tested an update to a household-appliances website. The site was and remains quite popular, and the responsibility for the product was high. The result of the release was positive dynamics and improved statistics on the number of users placing orders online. That, of course, is a huge plus for the customer. For testers, the main indicator of a successful release is a product that is maximally adapted to the client and contains a minimal number of errors (or maybe none are left at all?!!!). Stopping testing in this case is quite natural, since it was planned within a clearly established time frame, taking into account all the necessary criteria.

Conclusion. To get a good result, it is important to take into account all factors in your work. Assessing and analyzing a task, writing test cases to cover it, calculating time and maximum care guarantee positive results in your work.

In conclusion

To summarize: the last scenario is the ideal way to stop testing, since it combines proper planning, detailed testing and a positive final acceptance stage. In the other cases, stops happen because of tester errors, at the customer's request, because of an insufficiently thought-out test plan or incorrect scheduling, or simply out of laziness (a quality that, by the way, is unacceptable in our profession).

Therefore, stopping in such cases does not bring the final positive result that you always want to achieve. In this situation it is important to draw the right conclusions and identify the cause of the main error. As a rule, it is poor time planning, fear of asking the developer whether fixes are ready (that is, poor communication between developer and tester), or careless writing of test cases due to incomplete familiarity with the specifications and requirements. Having identified the weak points in the project or in your own skills, you can start building a plan for more effective work.

To do this, you must always take into account the customer's conditions and wishes, the established time frame, and the degree to which the test cases cover the requirements stated and described in the task. Each point must be clearly worked out and discussed with both the customer and the developer. The tester must picture the scope of work: what kind of testing can be completed within the given time frame, how many test cases will be required, up to what point bug fixes are allowed, when the code freeze begins, and whether the number of bugs found permits the release of the product at all.

Of course, every project has its own characteristics, and no single reference approach is correct for all of them. And yet, taking into account the basic criteria above will ensure that the finished product best meets the requirements, and that the testing process, and the decision to stop it, is clear and logical.