Man-made chemical elements: how the first artificial elements were synthesized in the USSR and the USA

Of the 26 currently known transuranium elements, 24 are not found on our planet. They were created by man. How are heavy and superheavy elements synthesized?
The first list of thirty-three putative elements, A Table of Substances belonging to all the Kingdoms of Nature, which may be considered the Simplest Constituents of Bodies, was published by Antoine Laurent Lavoisier in 1789. Along with oxygen, nitrogen, hydrogen, seventeen metals and several other genuine elements, it included light, caloric and some oxides. Eighty years later, when Mendeleev drew up the Periodic Table, chemists knew 62 elements. By the beginning of the 20th century it was believed that 92 elements exist in nature, from hydrogen to uranium, although some of them had not yet been discovered. Nevertheless, already at the end of the 19th century scientists suspected the existence of elements beyond uranium in the periodic table (transuranium elements), but there was no way to find them. It is now known that the Earth's crust does contain trace amounts of elements 93 and 94, neptunium and plutonium. Historically, though, these elements were first produced artificially and only later discovered in minerals.
Of the first 94 elements, 83 have either stable or long-lived isotopes whose half-lives are comparable to the age of the Solar System (they came to our planet from the protoplanetary cloud). The lifetimes of the remaining 11 natural elements are much shorter, so they appear in the Earth's crust only transiently, as products of radioactive decay. But what about all the other elements, from 95 to 118? There are none on our planet. All of them were obtained artificially.
The first artificial element
The creation of artificial elements has a long history. The fundamental possibility became clear in 1932, when Werner Heisenberg and Dmitri Ivanenko concluded that atomic nuclei consist of protons and neutrons. Two years later, Enrico Fermi's group attempted to produce transuranium elements by irradiating uranium with slow neutrons. The assumption was that a uranium nucleus would capture one or two neutrons and then undergo beta decay, yielding element 93 or 94. They even rushed to announce the discovery of transuranium elements, which Fermi called ausonium and hesperium in his 1938 Nobel lecture. However, the German radiochemists Otto Hahn and Fritz Strassmann, together with the Austrian physicist Lise Meitner, soon showed that Fermi was mistaken: these nuclides were isotopes of already known elements, produced when uranium nuclei split into pairs of fragments of roughly equal mass. It was this discovery, made in December 1938, that made the nuclear reactor and the atomic bomb possible. The first element to be synthesized was not a transuranium element at all, but eka-manganese, predicted by Mendeleev. It had been sought in various ores, to no avail. In 1937, eka-manganese, later named technetium (from the Greek τεχνητός, "artificial"), was obtained at the Lawrence Berkeley National Laboratory by bombarding a molybdenum target with deuterium nuclei accelerated in a cyclotron.
Light projectiles
Elements 93 through 101 were obtained through the interaction of uranium or transuranium nuclei with neutrons, deuterons (deuterium nuclei) or alpha particles (helium nuclei). The first success belonged to the Americans Edwin McMillan and Philip Abelson, who in 1940 synthesized neptunium-239 following Fermi's idea: capture of slow neutrons by uranium-238 and the subsequent beta decay of uranium-239. The next element, number 94, plutonium, was first identified in early 1941 while studying the beta decay of neptunium-238 produced by deuteron bombardment of uranium at the cyclotron of the University of California at Berkeley. It soon became clear that plutonium-239 under slow neutrons is no less fissile than uranium-235 and can serve as the filling of an atomic bomb. All information about the production and properties of this element was therefore classified, and the article by McMillan, Glenn Seaborg (they shared the 1951 Nobel Prize for their discoveries) and their colleagues announcing the second transuranium element appeared in print only in 1946. The American authorities likewise delayed, for almost six years, publication of the discovery of element 95, americium, which Seaborg's group isolated at the end of 1944 from the products of neutron bombardment of plutonium in a nuclear reactor. A few months earlier, physicists from the same team had obtained the first isotope of element 96, with atomic weight 242, by bombarding plutonium-239 with accelerated alpha particles. It was named curium in recognition of the scientific achievements of Pierre and Marie Curie, beginning the tradition of naming transuranium elements after the classics of physics and chemistry. The 60-inch cyclotron at the University of California became the birthplace of three more elements: 97, 98 and 101. The first two were named after their place of birth, berkelium and californium.
Berkelium was synthesized in December 1949 by bombarding an americium target with alpha particles; californium followed two months later from the same bombardment of curium. Elements 99 and 100, einsteinium and fermium, were discovered during radiochemical analysis of samples collected around Enewetak Atoll, where on November 1, 1952, the Americans detonated the ten-megaton thermonuclear device "Mike", whose casing was made of uranium-238. During the explosion, uranium nuclei absorbed up to fifteen neutrons and then underwent chains of beta decays that led to the formation of these elements. Element 101, mendelevium, was discovered in early 1955. Seaborg, Albert Ghiorso, Bernard Harvey, Gregory Choppin and Stanley Thompson bombarded with alpha particles about a billion einsteinium atoms (a very small number, but no more were available) electrolytically deposited on gold foil. Despite the very high beam intensity (60 trillion alpha particles per second), only 17 mendelevium atoms were obtained, yet their radioactive and chemical properties were determined.
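As a back-of-the-envelope check, the numbers quoted above are enough to estimate a production cross-section with the thin-target formula N = Φ·t·n·σ. The article gives neither the beam-spot area nor the irradiation time, so the two values below are purely illustrative assumptions; this is a sketch, not a reconstruction of the actual experiment.

```python
# Rough thin-target estimate of the mendelevium production cross-section.
# Beam-spot area and irradiation time are NOT given in the article;
# the values marked "hypothetical" are illustrative assumptions only.

BARN = 1e-24  # cm^2

def cross_section(atoms_made, flux, time_s, target_atoms, spot_cm2):
    """sigma = N / (flux * t * areal_density), thin-target approximation."""
    areal = target_atoms / spot_cm2          # target atoms per cm^2
    return atoms_made / (flux * time_s * areal)

sigma = cross_section(
    atoms_made=17,       # Md atoms detected (from the text)
    flux=6e13,           # alpha particles per second (from the text)
    time_s=3 * 3600,     # assumed 3-hour irradiation (hypothetical)
    target_atoms=1e9,    # Es atoms on the foil (from the text)
    spot_cm2=0.1,        # assumed beam-spot area (hypothetical)
)
print(f"sigma ~ {sigma / (1e-3 * BARN):.1f} millibarn")
```

With these assumptions the result comes out in the millibarn range, which is the right order of magnitude for this kind of reaction; changing the assumed time or spot size shifts it proportionally.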
Heavy ions
Mendelevium was the last transuranium element produced with neutrons, deuterons or alpha particles. Obtaining the next elements would have required targets of element 100, fermium, which were then impossible to manufacture (even now, nuclear reactors yield fermium only in nanogram quantities). Scientists took a different route: they bombarded targets with ionized atoms whose nuclei contain more than two protons (these are called heavy ions). Accelerating ion beams required specialized accelerators. The first such machine, HILAC (Heavy Ion Linear Accelerator), was launched in Berkeley in 1957; the second, the U-300 cyclotron, at the Laboratory of Nuclear Reactions of the Joint Institute for Nuclear Research in Dubna in 1960. Later, the more powerful U-400 and U-400M machines were put into operation in Dubna. Another accelerator, UNILAC (Universal Linear Accelerator), has been operating since the end of 1975 at the German Helmholtz Center for Heavy Ion Research in Wixhausen, a district of Darmstadt. When targets of lead, bismuth, uranium or transuranium elements are bombarded with heavy ions, highly excited (hot) nuclei are formed that either fall apart or shed their excess energy by emitting (evaporating) neutrons. Sometimes these nuclei emit only one or two neutrons and then undergo other transformations, for example alpha decay. This type of synthesis is called cold. In Darmstadt it yielded the elements numbered 107 (bohrium) through 112 (copernicium). In the same way, in 2004, Japanese physicists created one atom of element 113 (a year earlier it had been obtained in Dubna). In hot fusion, the newborn nuclei lose more neutrons, from three to five. This is how Berkeley and Dubna synthesized elements 102 (nobelium) through 106 (seaborgium, named for Glenn Seaborg, under whose leadership nine new elements were created).
Later, six of the most massive superheavies, elements 113 through 118, were made in Dubna by this method. The International Union of Pure and Applied Chemistry (IUPAC) has so far approved only the names of elements 114 (flerovium) and 116 (livermorium).
Just three atoms
Element 118, with the temporary name ununoctium and symbol Uuo (under IUPAC rules, temporary element names are formed from the Latin and Greek roots of the digits of the atomic number: un-un-oct(ium) = 118), was created by the joint efforts of two groups: one in Dubna under Yuri Oganesyan and one at the Livermore National Laboratory under Kenton Moody, a student of Seaborg. Ununoctium sits below radon in the periodic table and may therefore be a noble gas. Its chemical properties, however, have not yet been determined, since physicists have created only three atoms of this element, with mass number 294 (118 protons, 176 neutrons) and a half-life of about a millisecond: two in 2002 and one in 2005. They were obtained by bombarding a target of californium-249 (98 protons, 151 neutrons) with ions of the heavy calcium isotope of mass 48 (20 protons and 28 neutrons), accelerated in the U-400 accelerator. The total number of calcium “bullets” was 4.1×10¹⁹, so the productivity of the Dubna “ununoctium generator” is extremely low. Still, according to Kenton Moody, the U-400 is the only machine in the world on which element 118 could be synthesized. “Each series of experiments on the synthesis of transuranium elements adds new information about the structure of nuclear matter, which is used to model the properties of superheavy nuclei. In particular, the work on element 118 allowed us to discard several earlier models,” recalls Kenton Moody. “We made the target from californium because heavier elements were not available in the required quantities. Calcium-48 carries eight extra neutrons compared with the most abundant isotope, calcium-40. When its nucleus fused with a californium nucleus, nuclei with 179 neutrons were formed. They were in highly excited and therefore particularly unstable states, from which they quickly emerged by shedding neutrons.
As a result, we obtained an isotope of element 118 with 176 neutrons. And these were genuine neutral atoms with a full set of electrons! Had they lived a little longer, we could have judged their chemical properties.”
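The nucleon bookkeeping in Moody's account is easy to verify: protons and neutrons must balance through fusion and neutron evaporation. A minimal sketch:

```python
# Nucleon bookkeeping for the Ca-48 + Cf-249 reaction described above.
# Nuclei are (Z, N) pairs; the mass number is A = Z + N.

Ca48 = (20, 28)    # projectile (from the text)
Cf249 = (98, 151)  # target (from the text)

def fuse(a, b):
    """Compound nucleus: protons and neutrons simply add."""
    return (a[0] + b[0], a[1] + b[1])

def evaporate(nucleus, n_neutrons):
    """Evaporation residue after shedding neutrons (Z unchanged)."""
    return (nucleus[0], nucleus[1] - n_neutrons)

compound = fuse(Ca48, Cf249)      # expect (118, 179), A = 297
residue = evaporate(compound, 3)  # expect (118, 176), A = 294

print("compound:", compound, "A =", sum(compound))
print("residue: ", residue, "A =", sum(residue))
```

The residue (118 protons, 176 neutrons, A = 294) matches the three atoms described in the text.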
Methuselah number 117
Element 117, temporarily known as ununseptium, came later, in March 2010. It was created on the same U-400 machine, where as before calcium-48 ions were fired at a target, this time of berkelium-249 synthesized at the Oak Ridge National Laboratory. When berkelium and calcium nuclei collided, highly excited ununseptium-297 nuclei (117 protons and 180 neutrons) appeared. The experimenters obtained six nuclei, five of which evaporated four neutrons each and became ununseptium-293, while the sixth emitted three neutrons and gave rise to ununseptium-294. Compared with ununoctium, ununseptium proved a real Methuselah: the half-life of the lighter isotope is 14 milliseconds, and of the heavier one fully 78 milliseconds! In 2012, Dubna physicists obtained five more atoms of ununseptium-293, and later several atoms of both isotopes. In the spring of 2014, scientists from Darmstadt reported the synthesis of four nuclei of element 117, two of which had atomic mass 294. The half-life of this “heavy” ununseptium measured by the German scientists was about 51 milliseconds, in good agreement with the Dubna estimates. Darmstadt is now preparing a project for a new linear heavy-ion accelerator with superconducting magnets that will allow the synthesis of elements 119 and 120. Similar plans are under way in Dubna, where the new DC-280 cyclotron is being built. It is quite possible that within a few years the synthesis of new superheavy transuranium elements will become feasible, and the creation of element 120, or even element 126 with 184 neutrons, and the discovery of the island of stability will become reality.
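Given the half-lives quoted above, the ordinary decay law N(t)/N₀ = exp(−t·ln 2/T½) shows how much longer the heavier isotope survives. A small illustration (the 50 ms comparison time is an arbitrary choice, not a figure from the article):

```python
import math

def surviving_fraction(t_ms, half_life_ms):
    """Radioactive decay law: N(t)/N0 = exp(-t * ln2 / T_half)."""
    return math.exp(-t_ms * math.log(2) / half_life_ms)

# Fraction of nuclei still alive 50 ms after production, for the two
# element-117 isotopes mentioned in the text (14 ms and 78 ms half-lives).
for name, t_half in [("isotope 293 (14 ms)", 14), ("isotope 294 (78 ms)", 78)]:
    print(f"{name}: {surviving_fraction(50, t_half):.3f}")
```

After 50 ms only about 8% of the lighter isotope remains, versus roughly 64% of the heavier one, which is what makes the longer-lived nuclei so much more convenient for detection.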
Long life on the island of stability
Inside nuclei there are proton and neutron shells, somewhat similar to the electron shells of atoms. Nuclei with completely filled shells are especially resistant to spontaneous transformations. The numbers of neutrons and protons corresponding to such shells are called magic; some have been determined experimentally, namely 2, 8, 20 and 28. Shell models make it possible to calculate the magic numbers of superheavy nuclei theoretically, though without full guarantee. There is reason to expect that neutron number 184 will be magic. It can pair with proton numbers 114, 120 and 126, the last of which should itself be magic. If so, then isotopes of elements 114, 120 and 126 containing 184 neutrons each would live much longer than their neighbors in the periodic table: minutes, hours or even years (this region of the table is usually called the island of stability). Scientists place their greatest hopes on the last of these, an isotope with a doubly magic nucleus.
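The shell-model idea above can be encoded in a few lines. Note that besides the 2, 8, 20 and 28 mentioned in the text, the numbers 50 and 82 (and 126 for neutrons) are also experimentally established, while 114, 120 and 126 for protons and 184 for neutrons are the theoretical predictions the text discusses:

```python
# Magic numbers: established values plus the *predicted* superheavy ones.
MAGIC_P = {2, 8, 20, 28, 50, 82, 114, 120, 126}  # 114/120/126 are predictions
MAGIC_N = {2, 8, 20, 28, 50, 82, 126, 184}       # 184 is a prediction

def doubly_magic(z, n):
    """A nucleus is doubly magic if both its proton and neutron shells close."""
    return z in MAGIC_P and n in MAGIC_N

# Known doubly magic nuclei and the island-of-stability candidates.
for z, n, label in [(20, 28, "Ca-48"), (82, 126, "Pb-208"),
                    (114, 184, "element 114, N=184"),
                    (126, 184, "element 126, N=184")]:
    print(label, doubly_magic(z, n))
```

All four cases print True; the last two are, of course, only as good as the shell-model extrapolation behind the predicted numbers.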
The Dubna method

When a heavy ion comes within the range of the nuclear forces of a target nucleus, a compound nucleus in an excited state can form. It either decays into fragments of roughly equal mass or emits (evaporates) several neutrons and passes into the ground (unexcited) state.
“Elements 113 to 118 were created with a remarkable method developed in Dubna under the leadership of Yuri Oganesyan,” explains Darmstadt team member Alexander Yakushev. “Instead of the nickel and zinc that were fired at targets in Darmstadt, Oganesyan took an isotope with a much lower atomic mass, calcium-48. Using lighter nuclei increases the probability of their fusion with target nuclei. The calcium-48 nucleus is also doubly magic, being composed of 20 protons and 28 neutrons, so Oganesyan's choice greatly improved the survival of the compound nuclei formed when the target is bombarded. After all, a nucleus can shed several neutrons and give rise to a new transuranium element only if it does not break into fragments immediately after birth. To synthesize superheavy elements this way, Dubna physicists made targets from transuranium elements produced in the USA: first plutonium, then americium, curium, californium and finally berkelium. Calcium-48 makes up only 0.187% of natural calcium. It is extracted with electromagnetic separators, an expensive procedure: one milligram of this isotope costs about $200, enough for an hour or two of bombarding a target, while experiments run for months. The targets themselves are even more expensive, with prices reaching a million dollars. The electricity bills are considerable too, since heavy-ion accelerators consume megawatts of power. All in all, the synthesis of superheavy elements is not a cheap pleasure.”

  • 7. Natural science as a phenomenon of universal human culture. Fundamental natural science directions: subject and methods of research.
  • 8. Reasons why the knowledge accumulated by the ancient civilizations of Babylon, Egypt, China cannot be considered scientific.
  • 9. Natural and social disasters that contributed to the origins of scientific knowledge in Ancient Greece.
  • 10. Principles and rules of true knowledge laid down by Thales of Miletus. The search for first principles and the concept of atomism (Leucippus and Democritus).
  • 12. Fundamentals of the doctrine of the movement of bodies according to Aristotle. The first system of the universe of Aristotle and Ptolemy.
  • 14. Reasons for the decline of interest in scientific knowledge, the rise of monotheistic religions, the role of Arab and Eastern peoples in the preservation and development of ancient Greek knowledge.
  • 15. Reasons for the development of criteria of scientific knowledge in the Middle Ages. Subsequent milestones in the development of the scientific method, its components and its creators.
  • 20. Types and mechanisms of fundamental interactions in nature.
  • 21. Manifestations of fundamental interactions in mechanics, thermodynamics, nuclear physics, chemistry, cosmology.
  • 22. Manifestations of fundamental interactions and structural levels of organization of matter.
  • 26. Specificity of the laws of nature in physics, chemistry, biology, geology, cosmology.
  • 27. Basic principles underlying the pictures of the universe from Aristotle to the present day.
  • 32. Modern implementation of the atomistic concept of Leucippus and Democritus. Generations of quarks and leptons. Intermediate bosons as carriers of fundamental interactions.
  • 34. Structure of chemical elements, synthesis of transuranium elements.
  • 35. Atomic-molecular “constructor” of the structure of matter. The difference between physical and chemical approaches in studying the properties of matter.
  • 40. Main tasks of cosmology. Solving the question of the origin of the Universe at different stages of the development of civilization.
  • 41. Physical theories that served as the basis for G. A. Gamow's theory of the “hot” Universe.
  • 42. Reasons for the short duration of the initial “eras” and “epochs” in the history of the Universe.
  • 43. The main events that took place in the era of quantum gravity. Problems of “modeling” these processes and phenomena.
  • 44. Explain from an energy point of view why the Age of Hadrons preceded the Age of Leptons.
  • 45. Energies (temperatures) at which radiation separated from matter and the Universe became “transparent”.
  • 46. Building material for the formation of the large-scale structure of the Universe.
  • 49. Properties of black holes and their detection in the Universe.
  • 50. Observed facts confirming the theory of the “hot” Universe.
  • 51. Methods for determining the chemical composition of stars and planets. The most common chemical elements in the Universe.
  • 34. Structure of chemical elements, synthesis of transuranium elements.

    In 1861, the outstanding Russian chemist A. M. Butlerov created and substantiated the theory of the chemical structure of matter, according to which the properties of substances are determined by the order of the bonds between atoms in molecules and by their mutual influence. In 1869, D. I. Mendeleev discovered one of the fundamental laws of natural science, the periodic law of the chemical elements, whose modern formulation reads: the properties of the chemical elements depend periodically on the electric charge of their nuclei.

    35. Atomic-molecular “constructor” of the structure of matter. The difference between physical and chemical approaches in studying the properties of matter.

    An atom is the smallest particle of a given chemical element. All atoms existing in nature are represented in Mendeleev's periodic system of elements.

    Atoms are connected into a molecule through chemical bonds based on electrical interaction. The number of atoms in a molecule can vary. A molecule can consist of one atom, two, three, or even several hundred atoms.

    Examples of diatomic molecules include CO, NO, O₂, H₂; of triatomic molecules, CO₂, H₂O, SO₂; of tetraatomic molecules, NH₃. Thus, a molecule consists of one or more atoms of one or of different chemical elements.

    A molecule can be defined as the smallest particle of a given substance that has its chemical properties. Between the molecules of any body there are forces of interaction - attraction and repulsion. The forces of attraction ensure the existence of the body as a whole. In order to divide the body into parts, considerable effort must be made. The existence of repulsive forces between molecules is revealed when trying to compress a body.

    40.Main tasks of cosmology. Solving the question of the origin of the Universe at different stages of the development of civilization.

    Cosmology is the study of the physical properties of the Universe as a whole. In particular, its goal is to create a theory of the entire region of space covered by astronomical observations, which is commonly called the Metagalaxy.

    As is known, the theory of relativity leads to the conclusion that the presence of large masses affects the properties of space-time. The properties of the usual Euclidean space (for example, the sum of the angles of a triangle, the properties of parallel lines) change near large masses or, as they say, space “curves.” This curvature of space created by individual masses (for example, stars) is very small.

    Thus, it should be expected that due to the curvature of space, a ray of light near the Sun should change its direction. Accurate measurements of the positions of stars near the Sun and the time of total solar eclipses make it possible to capture this effect, however, at the limit of measurement accuracy.

    However, the total effect of the gravitating (i.e., possessing attraction) masses of all galaxies and supergalaxies can cause a certain curvature of space as a whole, which will significantly affect its properties, and, consequently, the evolution of the entire Universe.

    Even the very formulation of the problem of determining (based on the laws of relativity) the properties of space and time with an arbitrary distribution of masses is extremely difficult. Therefore, some approximate schemes called models of the Universe are usually considered.

    The simplest of them are based on the assumption that matter in the Universe on large scales is distributed uniformly (homogeneity) and the properties of space are the same in all directions (isotropy). Such a space must have some curvature, and the corresponding models are called homogeneous isotropic models of the Universe.

    Solutions of Einstein's gravitational equations for a homogeneous isotropic model show that the distances between individual inhomogeneities, if one excludes their individual chaotic motions (peculiar velocities), cannot remain constant: the Universe must either contract or, in agreement with observations, expand. If we ignore the peculiar velocities of galaxies, then the speed of mutual recession of any two bodies in the Universe is greater, the greater the distance between them. For relatively small distances this dependence is linear, with the Hubble constant as the proportionality coefficient. It follows that the distance between any pair of bodies is a function of time, and the form of this function depends on the sign of the curvature of space. If the curvature is negative, the “Universe” expands forever. At zero curvature, corresponding to Euclidean space, the expansion decelerates, with the expansion rate tending to zero. Finally, the expansion of a “Universe” of positive curvature must at some epoch give way to contraction.

    In the latter case, owing to the non-Euclidean geometry, space must be finite, i.e. at any given moment it has a definite finite volume and a finite number of stars, galaxies, and so on. However, the Universe naturally can have no “boundaries” in any case.

    A two-dimensional model of such a closed three-dimensional space is the surface of an inflating balloon. Galaxies in this model are represented by flat figures drawn on the surface. As the balloon stretches, the surface area and the distances between the figures increase. Although in principle such a balloon can grow without limit, its surface area is finite at any moment, and yet its two-dimensional space (the surface) has no boundaries.

    The curvature of space in a homogeneous isotropic model depends on the value of the average density of matter. If the density is less than a certain critical value, the curvature is negative and the first case occurs. The second case (zero curvature) occurs at the critical density. Finally, when the density is greater than the critical value, the curvature is positive (the third case). During the expansion the absolute value of the curvature may change, but its sign remains constant.

    The critical density is expressed through the Hubble constant H and the gravitational constant G as ρcr = 3H²/(8πG); at H = 55 km/(s·Mpc) this gives ρcr ≈ 5×10⁻³⁰ g/cm³. Taking into account all the masses known in the Metagalaxy leads to an average density estimate of about 5×10⁻³¹ g/cm³.

    However, this is obviously a lower limit, since the mass of the invisible medium between galaxies is not yet known. The existing density estimate therefore gives no grounds for judging the sign of the curvature of real space.
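    The critical-density figure quoted above is easy to reproduce from ρcr = 3H²/(8πG); a quick sketch in Python using standard CGS constants:

```python
import math

G_CGS = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
MPC_CM = 3.0857e24   # one megaparsec in cm

def critical_density(H_km_s_Mpc):
    """rho_cr = 3 H^2 / (8 pi G), returned in g/cm^3."""
    H = H_km_s_Mpc * 1e5 / MPC_CM  # convert km/(s*Mpc) to s^-1
    return 3 * H**2 / (8 * math.pi * G_CGS)

# The text's dated value H = 55 km/(s*Mpc); modern estimates are ~70.
print(f"{critical_density(55):.2e} g/cm^3")
```

    This gives about 5.7×10⁻³⁰ g/cm³, consistent with the rounded 5×10⁻³⁰ g/cm³ in the text; with a modern H ≈ 70 km/(s·Mpc) the value is roughly 9×10⁻³⁰ g/cm³.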

    In principle, other ways of empirically selecting the most realistic model of the Universe are possible based on determining the redshift of the most distant objects (from which the light that reached us was emitted hundreds of millions and billions of years ago) and comparing these velocities with distances to objects found by other methods. In fact, in this way, the change in expansion rate over time is determined from observation. Modern observations are not yet so accurate that one can confidently judge the sign of the curvature of space. We can only say that the curvature of space in the Universe is close to zero.

    The Hubble constant, which plays such an important role in the theory of the homogeneous isotropic Universe, has a curious physical meaning. To see it, note that the reciprocal quantity 1/H has the dimension of time and equals 1/H ≈ 6×10¹⁷ s, or about 20 billion years. It is easy to see that this is the time required for the Metagalaxy to expand to its present state, provided the expansion rate did not change in the past. However, the question of the constancy of this rate, and of the stages of the expansion of the Universe preceding and following the present one, is still poorly understood.
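    The 1/H estimate is a one-line computation; with the dated value H = 55 km/(s·Mpc) used in the text one gets about 18 billion years, consistent with the rounded figure above:

```python
MPC_CM = 3.0857e24  # one megaparsec in cm
YEAR_S = 3.156e7    # seconds per year

def hubble_time_gyr(H_km_s_Mpc):
    """The Hubble time 1/H, converted to billions of years."""
    H = H_km_s_Mpc * 1e5 / MPC_CM  # s^-1
    return 1 / H / YEAR_S / 1e9

print(f"{hubble_time_gyr(55):.1f} Gyr")  # text's H = 55 km/(s*Mpc)
```

    With a modern H ≈ 70 km/(s·Mpc) the same formula gives about 14 billion years, close to the currently accepted age of the Universe.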

    Confirmation that the Universe was indeed once in some special state is the cosmic radio emission discovered in 1965 and called relict radiation (i.e., residual radiation). Its spectrum is thermal and reproduces the Planck curve for a temperature of about 3 K. (Note that the maximum of such radiation occurs at a wavelength of about 1 mm, close to the range of the electromagnetic spectrum accessible to observations from the Earth.)

    A distinctive feature of the cosmic microwave background radiation is the uniformity of its intensity in all directions (isotropy). It was this fact that allowed such weak radiation to be identified at all: it could not be associated with any particular object or region of the sky.

    The name “relict radiation” is used because this radiation must be a remnant of the radiation of the Universe from the era of its high density, when the Universe was opaque to its own radiation. Calculation shows that this must have been the case at densities ρ > 10⁻²⁰ g/cm³ (an average atom concentration of about 10⁴ cm⁻³), i.e. when the density was a billion times higher than it is today.

    Since the density varies inversely with the cube of the radius, then, assuming the expansion of the Universe in the past was the same as now, we find that in the era of opacity all distances in the Universe were 1000 times smaller. The wavelength λ was smaller by the same factor. Therefore quanta that now have a wavelength of 1 mm previously had a wavelength of about 1 μm, corresponding to the radiation maximum at a temperature of about 3000 K.
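    The scaling argument of this paragraph (lengths stretched by a factor of 1000, temperature lower by the same factor by Wien's law T ∝ 1/λmax, density lower by its cube) can be written out explicitly:

```python
# Scaling relations used above: if all distances were a factor f = 1000
# smaller in the opacity era, wavelengths were f times shorter, the
# blackbody temperature f times higher (Wien's law), and the density
# f**3 times larger.

f = 1000  # expansion factor since the opacity era (from the text)

lambda_now_mm = 1.0  # CMB peak wavelength today (from the text)
T_now_K = 3.0        # CMB temperature today (from the text)

lambda_then_um = lambda_now_mm / f * 1000  # convert mm to micrometres
T_then_K = T_now_K * f
density_ratio = f ** 3

print(lambda_then_um, "um,", T_then_K, "K, density x", density_ratio)
```

    The three outputs (1 μm, 3000 K, a factor of a billion in density) are exactly the figures quoted in this and the preceding paragraph.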

    Thus, the existence of cosmic microwave background radiation is not only an indication of the high density of the Universe in the past, but also of its high temperature (the “hot” model of the Universe).

    Whether the Universe ever passed through even denser states, accompanied by considerably higher temperatures, could in principle be judged from a similar study of relict neutrinos. For them the opacity of the Universe sets in at densities ρ ≈ 10⁷ g/cm³, which could occur only at comparatively very early stages of the development of the Universe. As with the cosmic microwave background, when the expanding Universe passes to a lower density, the neutrinos stop interacting with the rest of matter, “breaking away” from it, and thereafter undergo only the cosmological redshift due to the expansion. Unfortunately, the detection of such neutrinos, which by now must have energies of only a few ten-thousandths of an electron volt, is unlikely to be achieved in the near future.

    Cosmology, in principle, allows us to form an idea of the most general laws of the structure and development of the Universe. It is easy to understand how important this branch of astronomy is for the formation of a correct materialistic worldview. By studying the laws of the Universe as a whole, we understand even more deeply the properties of matter, space and time. Some of them, for example the properties of real physical space and time on large scales, can be studied only within the framework of cosmology. Its results are therefore of utmost importance not only for astronomy and physics, which gain the opportunity to refine their laws, but also for philosophy, which acquires extensive material for generalizing the laws of the material world.


    Synthesis of elements

    Back in the early 1940s, attempts were made to use the idea of the Big Bang to explain the origin of the chemical elements. The American researchers R. Alpher, G. Gamow and R. Herman suggested that at the earliest stages of its existence the Universe was a clump of superdense neutron gas (or, as they called it, “ylem”). Later, however, it was shown that a number of heavy elements could be formed in the interiors of stars through cycles of nuclear reactions, so the need for “ylem” seemed to disappear.

    Clarification of the chemical composition of the cosmos soon led to controversy. If one calculates how much hydrogen in the stars of our Galaxy should have “burned” into helium over its existence (10 billion years), it turns out that the observed amount of helium is 20 times greater than the theoretical calculations give. This means that helium must be formed not only by synthesis in the depths of stars but also by some other, very powerful process. In the end, researchers had to turn again to the idea of the Big Bang and look there for the source of the excess helium. This time success belonged to the famous Soviet scientists Academician Ya. B. Zeldovich and I. D. Novikov, who in a series of detailed works substantiated the theory of the Big Bang and the expanding Universe (Ya. B. Zeldovich, I. D. Novikov. Structure and Evolution of the Universe. Moscow, Nauka, 1975). The main propositions of this theory are as follows.

    The expansion of the Universe began with very high density and very high temperature. At the dawn of its existence, the Universe resembled a laboratory of high energies and high temperatures. But this, of course, was a laboratory that had no earthly analogies.

    The very “beginning” of the Universe, i.e. its state corresponding, according to theoretical calculations, to a radius close to zero, so far eludes even theoretical description. The fact is that the equations of relativistic astrophysics remain valid up to densities of the order of 10⁹³ g/cm³. The Universe, compressed to such a density, once had a radius of about one ten-billionth of a centimeter, i.e. it was comparable in size to a proton! The temperature of this micro-universe, which incidentally weighed no less than 10⁵¹ tons, was incredibly high and apparently close to 10³² degrees. This is what the Universe looked like an insignificant fraction of a second after the start of the “explosion”. At the very “beginning” both density and temperature go to infinity, i.e. this “beginning” is, in mathematical terminology, the special “singular” point at which the equations of modern theoretical physics lose their physical meaning. But this does not mean that there was nothing before the “beginning”: we simply cannot imagine what came before the conventional “beginning” of the Universe.

    In everyday life a second is an insignificant interval. But in the very first moments of the Universe's existence (counted conventionally from the “beginning”), a multitude of events unfolded within the first second. The term “expansion” here seems too weak and therefore inappropriate. No, it was not an expansion but a powerful explosion.

    By the end of one hundred-thousandth of a second after the “beginning,” the Universe in its tiny volume contained a mixture of elementary particles: nucleons and antinucleons, electrons and positrons, as well as mesons and light quanta (photons). In this mixture, according to Ya. B. Zeldovich, there were probably also the still-hypothetical gravitons and quarks (the interaction of gravitons with other particles determines the gravitational field, of which they are the quanta; quarks are the “basic building blocks” whose combinations give rise to the whole variety of particles; much effort and money has been spent on detecting quarks, but they have not yet been found), but the main role apparently still belonged to neutrinos.

    When the “age” of the Universe was one ten-thousandth of a second, its average density (10¹⁴ g/cm³) was already close to that of atomic nuclei, and the temperature had dropped to a few billion degrees. By this time nucleons and antinucleons had already annihilated, mutually destroying each other and turning into quanta of hard radiation. Only the neutrinos produced in particle interactions were preserved and kept accumulating, since neutrinos interact with other particles most weakly of all. This growing “sea” of neutrinos isolated the longest-lived particles, protons and neutrons, from one another and drove the transformation of protons and neutrons into each other and the birth of electron-positron pairs. What caused the subsequent predominance of particles over antiparticles in our world is unclear. Perhaps for some reason there was an initial asymmetry, the number of antiparticles always being less than the number of particles; or perhaps, as some scientists believe, an as yet unknown separation mechanism sorted particles from antiparticles, concentrating them in different parts of the Universe, so that somewhere antiparticles predominate (as particles do in our world), forming an antiworld.

    According to Ya. B. Zeldovich, “at the moment, there are quanta left in the Universe that we observe, as well as neutrinos and gravitons, which we cannot observe with modern means and, probably, will not be able to observe for many years.”

    Let's continue the quote:

    “So, over time, all particles in the Universe “die out”, only quanta remain. This is correct to within one hundred millionth. But in reality there is one proton or neutron for every hundred million quanta. These particles are preserved because they - the remaining particles - have nothing to annihilate with (at first, nucleons, protons and neutrons annihilated with their antiparticles). There are few of them, but it is from these particles, and not from quanta, that the Earth and planets, the Sun and stars consist" ( Earth and Universe, 1969, No. 3, p. 8 (Ya. B. Zeldovich. Hot Universe)).

    When the age of the Universe reached a third of a second, the density had fallen to 10⁷ g/cm³ and the temperature to 30 billion degrees. At this moment, according to Academician V. L. Ginzburg, the neutrinos separate from the nucleons and are no longer absorbed by them. Today these “primary” neutrinos traveling through space should have an energy of only a few ten-thousandths of an electronvolt. We do not know how to detect such neutrinos: to do so, the sensitivity of modern equipment would have to be increased hundreds of thousands of times. If this is ever achieved, the “primary” neutrinos will bring us valuable information about the first second of the Universe's life.
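
    The quoted energy scale can be checked with a one-line estimate (a sketch with assumed constants not given in the text: the Boltzmann constant and a relic-neutrino temperature of about 1.9 K, slightly below the photon background temperature):

```python
# Typical thermal energy of a relic neutrino today: E ~ k * T
k_eV_per_K = 8.617e-5   # Boltzmann constant in eV/K (assumed constant)
T_nu = 1.9              # approximate relic-neutrino temperature, K (assumed)

energy_eV = k_eV_per_K * T_nu
print(f"typical energy ≈ {energy_eV:.1e} eV")
```

    The result, a few times 10⁻⁴ eV, matches the “few ten-thousandths of an electronvolt” quoted above.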

    By the end of the first second, the Universe had expanded to a size roughly one hundred times greater than that of the modern Solar System, whose diameter is 15 billion km. At this point the density of its matter was 1 t/cm³ and the temperature about 10 billion degrees. Nothing here yet resembles modern space. There are no atoms or atomic nuclei familiar to us, and no stable elementary particles.

    Just 0.9 seconds earlier, at a temperature of 100 billion degrees, protons and neutrons existed in equal numbers. But as the temperature fell, the heavier neutrons decayed into protons, electrons and neutrinos, so the number of protons in the Universe steadily grew while the number of neutrons fell.

    The age of the Universe is three and a half minutes. Theoretical calculations put the temperature at this moment at 1 billion degrees and the density at a hundred times less than that of water. In just three and a half minutes the size of the Universe grew from almost zero to 40 light-years (for the expansion of space itself, the speed of light is not a limit). Conditions arose under which protons and neutrons began to combine into the nuclei of the lightest elements, chiefly hydrogen. A certain stabilization set in, and by the end of the fourth minute after the “first explosion” the Universe consisted, by mass, of 70% hydrogen and 30% helium. Such, probably, was the original composition of the most ancient stars. Heavier elements arose later, through processes occurring within stars.
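
    The helium figure can be roughly motivated by a standard back-of-the-envelope argument not spelled out in the text: if essentially every surviving neutron ends up bound in helium-4 (two protons plus two neutrons), the helium mass fraction follows directly from the neutron-to-proton ratio at the moment nuclei form. The commonly quoted ratio of about 1:7 gives roughly a quarter by mass, close to the figure above:

```python
# Back-of-the-envelope estimate (assumption: all surviving neutrons are
# locked into helium-4).  Then the helium mass fraction is
#   Y = 2 * (n/p) / (1 + n/p)
n_over_p = 1.0 / 7.0   # commonly quoted n/p ratio at the onset of nucleosynthesis

Y_helium = 2.0 * n_over_p / (1.0 + n_over_p)
print(f"helium mass fraction ≈ {Y_helium:.2f}")  # ≈ 0.25
```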

    The further history of the Universe is calmer than its turbulent beginning. The rate of expansion gradually slowed, the temperature, like the average density, gradually fell, and when the Universe was a million years old its temperature became so low (about 3500 kelvin) that protons and helium nuclei could capture free electrons and turn into neutral atoms. From this moment the modern stage of the evolution of the Universe essentially begins. Galaxies, stars and planets appear. Eventually, after many billions of years, the Universe became the way we see it.

    Perhaps some of the readers, amazed by the colossal numbers, far from the usual reality, will think that the history of the Universe, drawn in the most general terms, is only a theoretical abstraction, far from reality. But that's not true. The expanding universe theory explains the recession of galaxies. It is confirmed by many modern data about space. Finally, another very convincing experimental confirmation of the super-hot state of the ancient Universe was recently found.

    The primary plasma that initially filled the Universe consisted of elementary particles and radiation quanta, or photons: it was the so-called photon gas. The radiation density in the “micro-universe” was at first very high, but as the Universe expanded the “photon gas” gradually cooled, just as hot air would cool inside a continuously expanding closed vessel.

    Nowadays only subtle traces of the primary “heat” should remain. The energy of the quanta of the primary “photon gas” has fallen to a value corresponding to a temperature just a few degrees above absolute zero, so today this relic radiation should be brightest at short radio wavelengths, from millimeters to a few centimeters.
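
    The temperature scale can be checked with Wien's displacement law (a sketch with assumed constants not given in the text: the Wien constant and a relic temperature of about 3 K). The blackbody peak comes out near 1 mm, so the 7.3 cm detection described in the next paragraph sampled the long-wavelength tail of this spectrum:

```python
# Wien's displacement law: a blackbody spectrum peaks (in wavelength) at
#   lambda_max = b / T,  with b ≈ 2.898e-3 m*K  (assumed constant)
b_m_K = 2.898e-3
T_relic = 3.0   # rough relic temperature, K (assumed)

lambda_max_mm = b_m_K / T_relic * 1000.0
print(f"peak wavelength ≈ {lambda_max_mm:.2f} mm")
```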

    These are the theoretical predictions, and they are confirmed by observations. In 1965 American radio physicists discovered noise radio emission at a wavelength of 7.3 cm. This emission came uniformly from all points of the sky and was clearly not associated with any discrete cosmic radio source; neither terrestrial radio stations nor interference from radio equipment could account for it.

    Thus the cosmic microwave background radiation was discovered, a relic of the Universe's original, unimaginably high temperature. The “hot” model of the early Universe, theoretically calculated by Ya. B. Zeldovich and his students, was thereby confirmed.

    So, apparently, the Universe was born in a powerful “first explosion.” From an insignificantly small, yet super-heavy, super-dense and super-hot clot of matter and radiation there arose, over the course of several billion years, what we now call the Cosmos.

    When the Universe expanded from a very small but unimaginably dense clump of matter to cosmic dimensions, its gigantic, still very hot and super-dense ball probably disintegrated into many “fragments.” This could be a consequence, for example, of the heterogeneity of the ball and the different rates of processes occurring in it.

    Each of the “fragments,” consisting of prestellar matter with enormous reserves of energy, in turn disintegrated over time. It is possible that the decay products were quasars - the embryos of galaxies. As Academician V.A. Ambartsumyan and other researchers believe, the cores of quasars (as well as the cores of galaxies) contain prestellar matter, the properties of which we cannot yet determine, and their outer layers consist of plasma and gases, the density of which is only several times higher than the density of matter in galaxies. If this is so, then we must admit that the “first explosion” and subsequent secondary explosions ejected into space not only “fragments” of prestellar matter, but also diffuse matter - plasma, gases from which dust material was formed. At the same time, one must think that the initial content of gas and dust matter in the Universe was significantly higher than it is now.

    Be that as it may, according to our modern ideas, explosive processes prevailed in the Universe up to the stage at which galaxies appeared. But as we have seen, explosive processes are also characteristic of the galactic stage, although their intensity decreases as galaxies evolve: from violent releases of energy in the Markarian and Seyfert galaxies to the calm outflow of matter from the cores of galaxies such as ours. Thus the theory of the expanding Universe may be compatible with the concept of Academician Ambartsumyan, who, relying on his own discoveries and those of his collaborators, as well as on the work of foreign astronomers, extends the idea of a creative explosion to star-formation processes. According to this concept, all cosmic objects known to us (galaxies, stars, gas-dust nebulae) are born in an explosion from super-dense clumps of prestellar matter charged with huge reserves of energy. That is why stars appear in the form of expanding, initially compact groups of many thousands or millions of stars. This hypothesis seems to the author the most probable of all, and he therefore proposes the following “pedigree” of cosmic objects.

    The “primary atom,” i.e. the Universe in its primordial superdense state, and the primary fireball are the Earth's most distant ancestors, which of course produced, besides the planets, an almost countless progeny of cosmic objects of every kind.

    Some fragment of the fireball may have become the embryonic core of our Galaxy and, over time, acquired a stellar population. This embryonic galactic core and, probably, the stellar association that spun off from it, which included the Sun, are the next “relatives” of the Earth, closer to us in time.

    The proposed scheme for the evolution of the cosmos from the “first atom” to the stars is only a hypothesis that is subject to further development and testing. So far, no theory of the transformation of hypothetical “pre-stellar matter” into observable space objects exists, and this circumstance is one of the weak points in V. A. Ambartsumyan’s concept.

    On the other hand, the birth of stars through the condensation of rarefied gas and dust matter cannot be considered impossible; on the contrary, most astronomers still adhere to such a “condensation” hypothesis. Giant accumulations of gas and dust may have arisen at the stage of the “secondary” explosions of the fragments of the primary explosion. It can be assumed that the distribution of matter in them was initially uneven. The general rotation of such clusters probably generated powerful magnetic fields, under whose influence the structure of the gas and dust clouds could become filamentary. Under the action of gravitational forces, matter could then begin to concentrate in the thickenings (nodes) of these “filaments,” leading to the emergence of entire families of stars.

    This concept is still adhered to by most researchers, although it also has its weaknesses. It is quite possible that both concepts (“explosive” and “condensation”) do not exclude, but complement each other: after all, during the decay of prestellar matter, not only stars, but also nebulae appear. Maybe the matter of these nebulae will someday serve (or has already served many times) as the starting material for the condensation of stars and planets? Only future research will be able to bring complete clarity to this issue.

    The Big Bang theory, developed by Ya. B. Zeldovich and I. D. Novikov, neatly explained the “excess” of helium in the Universe. According to their calculations, already 100 seconds after the start of expansion the Universe contained 70% hydrogen and about 30% helium. The rest of the helium and the heavier elements appeared during the evolution of stars.

    Despite this great success, the horizon of the Big Bang theory is by no means cloudless. A number of facts have recently been discovered that do not fit into its framework (for more details see: V. P. Chechev, Ya. M. Kramarovsky. Radioactivity and the Evolution of the Universe. Moscow, Nauka, 1978). For example, galaxies are known that are clearly physically connected with one another and lie at an equal distance from us, yet have significantly different (sometimes 13-fold!) “redshifts.” It is also unclear why, at the same distance, spiral galaxies always have larger “redshifts” than elliptical ones. According to some data, the rate of expansion, the “swelling” of the Universe, is not the same in different directions, which contradicts the previously prevailing ideas about the strictly “spherical” shape of the expanding world.

    Finally, it has recently become clear that the velocities of galaxies relative to the cosmic microwave background are very small. They are measured not in thousands or tens of thousands of kilometers per second, as follows from the theory of the expanding Universe, but only in hundreds of kilometers per second. It turns out that the galaxies are practically at rest relative to the relict background of the Universe, which for a number of reasons can be regarded as an absolute frame of reference (for more details see: Development of Methods of Astronomical Research (A. A. Efimov. Astronomy and the Principle of Relativity). Moscow, Nauka, 1979, p. 545).

    How to overcome these difficulties is still unclear. If it turns out that the “redshift” in the spectra of galaxies is caused not by the Doppler effect but by some other process as yet unknown to us, the picture of the origin of the chemical elements drawn here may prove incorrect. Most likely, however, the Big Bang is not an illusion but a reality, and the theory of the “hot” expanding Universe is one of the most important scientific achievements of the 20th century.

    In conclusion, we note that no matter what views on the evolution of the Universe one adheres to, the indisputable fact remains unshakable - we live in a chemically unstable World, the composition of which is constantly changing.

    When uranium is bombarded with thermal neutrons, lighter elements with atomic numbers 35-65 are formed from it; this raised hopes that isotopes of elements 43 and 61 would also be found among the fragments. Compared with the state of the problem of obtaining elements 43, 61, 85 and 87 in 1930, noticeable progress could be discerned. First of all, the suspicion was confirmed that elements 43 and 61 are unstable substances that have become “extinct.” As for elements 85 and 87, they had long been recognized as decayed radioactive substances.
    In 1934, physicist Joseph Mattauch found an empirical rule that allows one to estimate the stability of isotope nuclei. According to Mattauch's rule, a second stable isotope cannot exist if the charge of its nucleus differs only by one from the charge of the nucleus of a known stable isotope with the same mass number. This pattern complements Harkins’ rule, according to which elements with an odd serial number (that is, an odd number of protons and electrons) are much less common on Earth, since the stability of their nuclei is low.
    In relation to elements 43 and 61, Mattauch's rule can be stated as follows. Based on their position in the periodic table, the mass number of element 43 should be about 98, and for element 61 - about 147. However, stable isotopes were already known for elements 42 and 44, as well as for elements 60 and 62 with masses from 94 to 102 and, accordingly, from 142 to 150. Since a second stable isotope with the same mass number cannot exist, elements 43 and 61 must have only unstable representatives. There is no doubt that elements 43 and 61 were once present on Earth in sufficient quantities. When our solar system arose, all the elements were formed through the combination of protons and neutrons. However, during the existence of the Earth - 4.6 billion years - their unstable representatives gradually completely disappeared. The only exceptions are those radioactive elements that could be constantly replenished within the natural radioactive series, because their parent substances - uranium or thorium - still exist on Earth, thanks to their half-lives of billions of years. Elements 43 and 61 do not belong to these natural radioactive series. Only if a long-lived isotope of these elements was available could one hope to detect radiochemical traces of it.
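    The exclusion argument above can be sketched in a few lines of code (a minimal sketch; the stable mass numbers of molybdenum and ruthenium in the 94-102 window are taken as given):

```python
# Mattauch's rule: two stable isobars whose nuclear charges differ by exactly
# one cannot coexist.  So any mass number already occupied by a stable isotope
# of molybdenum (Z = 42) or ruthenium (Z = 44) is closed to a stable isotope
# of technetium (Z = 43).
stable_mo = {94, 95, 96, 97, 98, 100}         # stable Mo mass numbers, 94-102 window
stable_ru = {96, 98, 99, 100, 101, 102, 104}  # stable Ru mass numbers

blocked = {A for A in range(94, 103) if A in stable_mo or A in stable_ru}
print(sorted(blocked))  # every candidate mass number from 94 to 102 is blocked
```

    Every plausible mass number for element 43 is thus excluded, which is why all technetium isotopes must be unstable.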
    While some scientists were still chasing false transuraniums, other researchers managed to find the coveted elements 43 and 87. Here is the story of their discovery. In 1936 Emilio Segrè, after his marriage, left Fermi and his colleagues and went to Palermo, the capital of Sicily, where the university had offered him the chair of physics. In Palermo, to his great regret, Segrè was unable to continue the research begun with Fermi: the university had no equipment whatever for work on radioactivity. Deciding quickly, the Italian scientist went to America to get to know the University of California at Berkeley, famed for its excellent equipment; the only cyclotron in the world was then located there. “The sources of radioactivity that I saw were truly astonishing for a person who had previously worked only with Ra-Be sources,” the physicist recalled.
    Segrè was especially interested in the cyclotron's deflector plate, which had to steer the beam of accelerated particles in the required direction. Because of collisions with high-energy particles (deuterons were being accelerated), this plate became very hot, so it had to be made of a refractory metal: molybdenum. It was to this metallic molybdenum, bombarded by deuterons, that the guest from Italy turned his attention. Segrè supposed that isotopes of the still unknown element 43 might be formed from molybdenum, element 42, by deuteron bombardment, perhaps according to the equation:
    Mo + D → X + n
    Natural molybdenum is a mixture of six stable isotopes. Segrè reasoned: what if at least one of the six possible radioactive isotopes of element 43 into which molybdenum could theoretically turn proved long-lived enough to survive the sea voyage to Sicily? For the Italian physicist intended to search for element 43 only at his home institute.
    The researcher set off on his way back with a piece of molybdenum plate from the Berkeley cyclotron in his pocket. At the end of January 1937 he began his investigation with the support of the mineralogist and analytical chemist Carlo Perrier. The two indeed found radioactive atoms whose chemical properties placed them between manganese and rhenium. The quantities of ecamanganese thus revived on Earth by human ingenuity were unimaginably small: from 10⁻¹⁰ to 10⁻¹² g of element 43!
    When Segrè and Perrier reported in July 1937 the synthesis of the first artificial element, one long extinct on Earth, it was a day that went down in history. For element 43 a very apt name was later found: technetium, from the Greek technetos, artificial. Will it ever be possible to obtain it in significant quantities and hold it in one's hands? This question could soon be answered in the affirmative, when it was found that uranium fission yields isotopes of element 43 in relatively high yield. The isotope with mass number 101 and a half-life of 14 minutes attracted particular attention: it was now assumed that Fermi's substance with the 13-minute half-life, the supposed element 93, was actually an isotope of element 43.
    The natural radioactive series had taken their final form; no one any longer dared to doubt this, especially after Dempster's mass-spectrographic identification of uranium-235. Yet there was a weak point in the uranium-actinium series. More than twenty years had passed since an “inaccuracy” was noted in this series, one that had almost been consigned to oblivion. Back in 1913/14 the English chemist Cranston and the Austrian radioactivity researchers Meyer, Hess and Paneth stumbled upon this discrepancy while studying actinium. As a beta emitter, actinium is known to transform into radioactinium, that is, into an isotope of thorium. When the scientists studied this transformation, they always observed a weak alpha radiation. This residual activity (about 1%) was also found by Otto Hahn in his experiments on the preparation of pure actinium. “I could not bring myself to attach significance to this small amount,” Hahn said later. He believed it was most likely an impurity.
    Many years passed. The French scientist Marguerite Perey, a member of the famous Radium Institute in Paris, picked up this trail again; she purified the actinium fractions very carefully and in September 1939 was able to report the successful isolation of a new radioactive isotope. It was the long-missing element 87, the alpha-emitting by-product responsible for that residual one percent of actinium's activity. Madame Perey had found a branch in a series thought to be complete, for the isotope of element 87 turns into actinium X just as the long-known radioactinium does. At Perey's suggestion, element 87 was named francium in honor of her homeland.
    True, chemists have not to this day achieved much success in studying element 87, for all francium isotopes are short-lived, decaying within milliseconds, seconds or minutes. For this reason the element has remained “uninteresting” for most chemical studies and practical uses. When needed, it is obtained artificially. Of course, francium can also be “obtained” from natural sources, but this is a dubious undertaking: 1 g of natural uranium contains only about 10⁻¹⁸ g of francium!
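    That figure can be roughly reproduced by a secular-equilibrium estimate (a sketch; the half-lives, the ~1.4% alpha branch of actinium-227 that produces francium-223, and the 0.72% share of uranium-235 in natural uranium are assumed constants, not from the text):

```python
# In secular equilibrium, the number ratio of a short-lived descendant to its
# long-lived ancestor is roughly (half-life of descendant / half-life of
# ancestor) times the branching fraction that feeds the descendant.
half_life_fr_yr = 22.0 / (60.0 * 24.0 * 365.25)  # Fr-223: ~22 minutes, in years
half_life_u235_yr = 7.04e8                        # U-235 half-life, years
branch = 0.0138                                   # Ac-227 alpha branch to Fr-223
u235_fraction = 0.0072                            # U-235 share of natural uranium

atoms_per_u235 = half_life_fr_yr / half_life_u235_yr * branch
mass_fr_per_g_U = atoms_per_u235 * u235_fraction * (223.0 / 238.0)
print(f"~{mass_fr_per_g_U:.0e} g of francium per gram of natural uranium")
```

    The estimate lands at a few times 10⁻¹⁸ g per gram of uranium, the same order as the figure quoted above.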
    When the periodic table was discovered, 23 elements were missing from it; now only two remained: 61 and 85. How did the hunt for these elements proceed? In the summer of 1938 Emilio Segrè again went to Berkeley, intending to study the short-lived isotopes of element 43. Such research had to be undertaken on site: isotopes with short half-lives would not “survive” the journey to Italy. Scarcely had he arrived in Berkeley when Segrè learned that returning to fascist Italy had become impossible for him because of the racial laws. He remained in Berkeley and continued his work there.
    Berkeley's more powerful cyclotron could accelerate alpha particles to high energies; having overcome the so-called Coulomb barrier, such alpha particles were able to penetrate even the nuclei of heavy atoms. Now Segrè saw an opportunity to transform bismuth, element 83, into the unknown element 85. Together with the Americans Corson and MacKenzie, he bombarded bismuth nuclei with alpha particles of 29 MeV to carry out the following process:
    ²⁰⁹Bi + ⁴He → ²¹¹X + 2n
    The reaction succeeded. When the researchers completed their first joint paper on 1 March 1940, they only cautiously voiced the idea of “the possible production of a radioactive isotope of element 85.” Soon afterwards they were certain: element 85 had been produced artificially before it was found in nature. That find in nature fell, a few years later, to the Englishwoman Leigh-Smith and the Swiss researcher Minder at the Bern Institute, who were able to show that element 85 is formed in the thorium radioactive series through a side branch. For the element they chose the name anglo-helvetium, which was criticized as a verbal absurdity. The Austrian researcher Berta Karlik and her collaborator Bernert soon found element 85 in the other natural radioactive series as well, likewise as a by-product. The right to name this element, which occurs only in traces, nevertheless remained with Segrè and his collaborators: it is now called astatine, from the Greek for “unstable.” After all, the most stable isotope of this element has a half-life of only 8.3 hours.
    By this time Professor Segrè was also trying to synthesize element 61. Meanwhile it had become clear that both neighbors of this element in the periodic table, neodymium and samarium, are weakly radioactive. At first this seemed surprising, for radioactivity was then thought to be a property of the heaviest elements. Neodymium, element 60, emitted beta rays and therefore had to turn into element 61. That this unknown element had not yet been isolated was probably due to its rapid radioactive decay. What to do? Again the answer was to obtain the desired element artificially: since element 61 could not be found in nature, physicists tried to synthesize it.
    In 1941/42 the scientists Law, Pool, Quill and Kurbatov of Ohio State University bombarded the rare-earth element neodymium with deuterons accelerated in a cyclotron. They detected radioactive isotopes of a new element, which they called cyclonium. However, it remained only a trace on photographic film.
    And what of Emilio Segrè's efforts? He irradiated praseodymium, element 59, with alpha particles. But working up the isotopes of element 61 that he undoubtedly synthesized proved too difficult; their separation from the other rare-earth elements failed.
    One inconclusive study was reported from Finland. Back in 1935, the chemist Eremetse began analyzing concentrates of a mixture of samarium and neodymium oxides for the natural content of the 61st element. Several tons of apatite were processed for this purpose.
    The first stage of the struggle for element 61 ended in a draw; not even the proposed name “cyclonium” could be accepted.

    Synthesized (artificial) chemical elements are elements first identified as products of artificial synthesis. Some of them (the heaviest transuranium elements and all transactinoids) are apparently absent in nature; others were subsequently discovered in trace quantities in the earth's crust (technetium, promethium, astatine, neptunium, plutonium, americium, curium, berkelium, californium), in the photospheres of stars (technetium and possibly promethium), and in the shells of supernovae (californium and, probably, its decay products: berkelium, curium, americium and lighter elements).

    The last element found in nature before being synthesized artificially was francium (1939). The first chemical element to be synthesized was technetium, in 1937. As of 2012, elements up to ununoctium, atomic number 118, have been synthesized by nuclear fusion or fission, and attempts have been made to synthesize the subsequent superheavy transuranium elements. The synthesis of new transactinoids and superactinoids continues.

    The best-known laboratories that have synthesized several new elements and tens or hundreds of new isotopes are the Lawrence Berkeley and Livermore national laboratories in the USA, the Joint Institute for Nuclear Research in the USSR/Russia (Dubna), the GSI Helmholtz Centre for Heavy Ion Research in Germany, the Cavendish Laboratory of the University of Cambridge in the UK, the Institute of Physical and Chemical Research (RIKEN) in Japan, and others. In recent decades, international teams have worked on the synthesis of elements at the American, German and Russian centers.

    • 1 Discovery of synthesized elements by country
      • 1.1 USSR, Russia
      • 1.2 USA
      • 1.3 Germany
      • 1.4 Contested priorities and joint results
        • 1.4.1 USA and Italy
        • 1.4.2 USSR and USA
        • 1.4.3 Russia and Germany
        • 1.4.4 Russia and Japan
    • 2 Notes
    • 3 Links

    Discovery of synthesized elements by country

    USSR, Russia

    The elements nobelium (102), flerovium (114), ununpentium (115), livermorium (116), ununseptium (117), ununoctium (118) were synthesized in the USSR and Russia.

    USA

    In the USA the following elements were synthesized: promethium (61), astatine (85), neptunium (93), plutonium (94), americium (95), curium (96), berkelium (97), californium (98), einsteinium (99), fermium (100), mendelevium (101), and seaborgium (106).

    Germany

    The elements hassium (108), meitnerium (109), darmstadtium (110), roentgenium (111), and copernicium (112) were synthesized in Germany.

    Contested priorities and joint results

    For a number of elements, priority is recognized as shared by decision of the joint IUPAC/IUPAP commission or remains disputed:

    USA and Italy

    Technetium (43) - obtained jointly at an accelerator in Berkeley, California, and chemically identified in Palermo, Sicily.

    USSR and USA

    Lawrencium (103), rutherfordium (104), dubnium (105).

    Russia and Germany

    Bohrium (107).

    Russia and Japan

    Ununtrium (113).

    Notes

    1. Emsley, John. Nature's Building Blocks: An A-Z Guide to the Elements. New ed. New York: Oxford University Press, 2011. ISBN 978-0-19-960563-7.
    2. The institute in Dubna became the fourth in the world in the number of discovered isotopes
    3. Isotope ranking reveals leading labs (in English).
    4. http://flerovlab.jinr.ru/rus/elements.html
    5. Temporary name for the 115th element; the name Langevinia has been proposed.
    6. Temporary name for the 117th element;
    7. Temporary name for the 118th element; The name Moscovian was proposed.
    8. R. C. Barber et al. Discovery of the transfermium elements // Pure and Applied Chemistry. 1993. Vol. 65, No. 8. P. 1757-1814.
    9. Recently I have repeatedly had to write about the situation with the violation of the priority of Soviet scientists in the synthesis of superheavy
    10. About priority protection
    11. Chemistry: Periodic Table: darmstadtium: historical information
    12. http://element114.narod.ru/Projects/ao-iupac.html
    13. About priority protection
    14. Temporary name for the 113th element; The names of becquerelia, japonium, rykenium, and nihonium have been proposed.