What does "light field" mean?

The object of study in this area is the process of radiant energy transfer. The light field is inseparable from the electromagnetic radiation field, but is qualitatively different from it, since it sets aside the question of the nature of light. The field is macroscopic with respect to time and space: the spatial and temporal structure of the electromagnetic radiation field is not considered in light field theory. In essence, it is geometry with the idea of energy transfer introduced into it.

The term “light field” was used by A. A. Gershun in a classic scientific work on the radiometric properties of light in three-dimensional space (1936). He introduced a vector representation of certain quantities into the existing framework of theoretical photometry, which made it possible to pose a question new to lighting engineering, the quantitative assessment of lighting quality, and in many cases to solve it successfully.

The term Light Field was later redefined by computer graphics researchers.

Earlier, in 1846, Michael Faraday, in his lecture “Thoughts on Ray Vibrations”, first suggested that light should be interpreted as a field, much like the magnetic fields he had been working on for several years by that time.


Literature

  • Gershun A. A. The Light Field. Moscow, 1936.



See what “Light field” is in other dictionaries:

    LIGHT FIELD - the field of the light vector; the spatial distribution of light fluxes. The theory of the light field is a branch of theoretical photometry. The basic characteristics of the light field are the light vector, which determines the magnitude and direction of the transfer of radiant energy, and a scalar quantity, the mean spherical... Physical Encyclopedia


    Light field - the field of the light vector (see Light vector, Vector field). The theory of the light field is a branch of theoretical photometry (see Photometry) in which the distribution of illumination is found using general methods for calculating the spatial distribution of light flux...

    light field - šviesos laukas statusas T sritis fizika atitikmenys: angl. light field; vok. Lichtfeld, n; rus. поле света, n, световое поле, n; pranc. champ de lumière, m, champ lumineux, m ... Fizikos terminų žodynas

    NONLINEAR OPTICS - a branch of optics covering the study of the propagation of powerful light beams in solids, liquids, and gases and their interaction with matter. A strong light field changes the optical characteristics of the medium (refractive index, absorption coefficient), which become... Physical Encyclopedia

    Nonlinear optics - a branch of physical optics covering the study of the propagation of powerful light beams in solids, liquids, and gases and their interaction with matter. With the advent of lasers, optics gained sources of coherent... Great Soviet Encyclopedia

    FOURIER OPTICS - a branch of optics in which the transformation of light fields by optical systems is studied by means of Fourier analysis (spectral decomposition) and the theory of linear filtering. The first use of spectral-decomposition ideas in optics is associated with the names of J.... Physical Encyclopedia

    Gershun, Andrey Alexandrovich - Andrey Aleksandrovich Gershun. Date of birth: October 9 (22), 1903. Place of birth: St. Petersburg. Date of death: ... Wikipedia

    QUANTUM OPTICS - a branch of statistical optics that studies the microstructure of light fields and optical phenomena in which the quantum nature of light manifests itself. The idea of the quantum structure of radiation was introduced by the German physicist M. Planck in 1900. The statistical interference structure of fields... Physical Encyclopedia

    RADIO HOLOGRAPHY - a method of recording, reconstructing, and transforming the wavefront of electromagnetic waves of the radio range, in particular the microwave range. Its methods are direct analogues of the methods of optical holography: as there, the holographic process comes down to obtaining (recording)... Physical Encyclopedia


The light field after the filter forms three beams. The third beam, corresponding to the last term in (5.56), is deflected from the axis in the opposite direction.

The light field Ui(x, y) corresponds to the first exposure.

Solenoidal light fields are fields in an airless, uniformly bright space.

This light field represents the diffraction of a plane wave incident on the hologram. It can be seen that only first-order diffraction occurs, as it should be when the transmittance coefficient (38.14) changes according to the harmonic law [cf.

By scanning the light field of the object reconstructed from the recording H, this probe will register exactly the same signals as when registering the field reflected directly from the object O. From such measurements one can determine, with very high accuracy, the smallest details of the structure of an object that, in general, no longer exists. For technical applications the latter is far more important than creating the illusion of an object's presence in the human brain: after all, accuracy and objectivity are exactly what modern technology needs.

Let the object light field Ui(x, y) be imaged by a positive lens into a certain plane H in image space. To simplify further reasoning, we assume that the surface of the object coincides with the front focal plane of the lens.

Calculating the light field for the case of large x (up to ~10^8) is very complicated and is carried out on a computer. However, the picture of the field obtained from the calculations agrees well with the one that follows from simple geometric considerations.

The momentum of the light field is equal to the sum of the momenta of the photons. The representation of the light field as a collection of photons replaces the classical picture of light waves. The latter should be regarded as a special case, just as classical mechanics is a special (limiting) case of quantum mechanics.

In weak light fields, single-photon ionization occurs; in strong light fields, multiphoton ionization becomes possible. It is the extremely high photon flux density in a laser beam that makes multiphoton ionization feasible, and it has been observed experimentally in rarefied vapors of alkali metals.

In a strong light field in a nonlinear medium, optical waves can interact not only with each other, but also with acoustic and molecular vibrations of matter.

In powerful light fields, or in strongly nonlinear media, the higher terms of the polarization expansion cease to be small: once χ_n E^(n-1) ~ χ_1, expansion (1) loses its meaning and the corresponding series (2) ceases to converge. Such problems arise, in particular, when studying the saturation of a transition in a system of two-level atoms in an electric field.


Light field

The field of the light vector (see Light vector, Vector field). The theory of the light field is a branch of theoretical photometry in which the distribution of illumination is found using general methods for calculating the spatial distribution of light flux. The projection of the light vector onto any direction passing through a point is equal to the difference in the illuminances of the two sides of a small area placed at that point perpendicular to that direction. The magnitude and position of the light vector do not depend on the coordinate system. The theory of the light field uses the concept of light lines, analogous to the concept of lines of force in the classical theory of physical fields.

Wikipedia

Light field

Light field - a function that describes the amount of light propagating in any direction through any point in space. In 1846, Michael Faraday, in his lecture “Thoughts on Ray Vibrations”, first suggested that light should be interpreted as a field, much like the magnetic fields he had been working on for several years by that time. The phrase “light field” was used by A. A. Gershun in a classic scientific work on the radiometric properties of light in three-dimensional space (1936). The phrase was later redefined by computer graphics researchers.

At present, the latest technology in the field of virtual reality is light field technology. The words come up often, but there is little explanation of what is actually meant by them. Even though the technology (like many other currently fashionable technologies) is quite old (Michael Faraday already proposed interpreting light as a field), it still holds many dark spots for the ordinary layman, and not everyone, myself included, fully understands its capabilities in virtual reality and realistic rendering.

So, the light field is a function that describes the amount of light propagating in any direction through any point in space.
The easiest way to describe it is as a function of two planes.
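
To make the two-plane description concrete, here is a minimal sketch in Python with NumPy (the array sizes and names are illustrative assumptions, not any standard API): a discretized light field is just a 4D table, and one ray, defined by where it crosses the two planes, is one sample.

```python
import numpy as np

# Hypothetical discretized light field: L[u, v, s, t] is the radiance of the
# ray that passes through point (u, v) on the first plane and (s, t) on the
# second. Sizes are illustrative: 8x8 angular samples, 64x64 spatial samples.
U, V, S, T = 8, 8, 64, 64
L = np.zeros((U, V, S, T), dtype=np.float32)

def radiance(u: int, v: int, s: int, t: int) -> float:
    """Radiance along the single ray defined by its two plane crossings."""
    return float(L[u, v, s, t])
```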

Capturing light fields.
And here we move on to practical application. It is precisely this two-plane function that modern light field cameras use: the plane of the lens and the plane of the sensor. In essence, that alone would be an ordinary photograph; but what we need is a light field, that is, data from different directions. To obtain different points of view, a large number of cameras is required.


However, this is a rather complex engineering task (to say nothing of the fact that different sensors can give different values for white balance, for example). Therefore, in front of the single huge sensor of a plenoptic camera (as such cameras are called), an array of microlenses is placed, each of which focuses the image onto its own section of the sensor.

As you understand, a whole array of images forms on one and the same sensor, which means the resolution of each image is small compared to the sensor's capabilities: to get a 1-megapixel photo you need at least a 10-megapixel sensor.
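
The trade-off is simple arithmetic; a back-of-the-envelope sketch (the numbers are illustrative, chosen to match the 10-to-1 figure above):

```python
# Back-of-the-envelope: a plenoptic sensor divides its pixels between
# spatial resolution (number of microlenses) and angular resolution
# (pixels behind each microlens). All numbers are assumed.
sensor_megapixels = 10
views_per_axis = 3            # 3x3 sub-aperture views behind each microlens
angular_samples = views_per_axis ** 2
photo_megapixels = sensor_megapixels / angular_samples
print(photo_megapixels)       # ~1.1 MP final image from a 10 MP sensor
```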
Instead of an array of microlenses, you can also use a simple plate with holes, on the pinhole-camera principle. This is much cheaper than lenses but hurts the light-gathering ability.
Mitsubishi Electric's MERL research laboratory resorted to a coded aperture: a special mask of transparent and opaque areas placed in front of the sensor. It is claimed that this avoids the loss of image resolution. But the topic died down back in 2009, and nothing has been heard of it since.
But what is all the fuss about? What does an array of cameras give us compared to a regular high-resolution photo? The camera array does two things.
1. Changing the focus distance.


Now no object is doomed to stay out of focus: by integrating the data from all the images, you can choose any focus (in practice this depends on the resolution: the lower it is, the fewer the possibilities). A code sketch of this integration follows after point 2.
2. Slight change in viewpoint.


Just for the sake of this effect.

Let me remind you that you cannot move your head beyond certain limits, but within those limits you are completely free. In essence, this is simply an extension of the capabilities of 360 video, offering deeper immersion.
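
Returning to point 1: below is a minimal shift-and-add sketch of that integration over sub-aperture views, in the same Python/NumPy style. The data layout, the integer shifts, and the wrap-around rolling are my simplifying assumptions, not any camera vendor's algorithm.

```python
import numpy as np

def refocus(views: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-add refocusing over sub-aperture views.

    views: array of shape (U, V, H, W), one image per viewpoint (u, v).
    alpha: focus parameter; each view is shifted in proportion to its
    offset from the central viewpoint, then all views are averaged.
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            # np.roll wraps at the borders; a real pipeline would pad
            # and interpolate sub-pixel shifts instead.
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

Varying `alpha` sweeps the synthetic focus plane through the scene; `alpha = 0` simply averages all views.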

Rendering light fields.
Now let's turn to rendering light fields, and back to the distant year 1996.

As we can see, the same planes and methods are used. Two images are created.
On the left is an array of projections of the (u,v) plane onto the (s,t) plane; that is, the entire front plane (the perspective view) is projected onto a small part of the back plane (the sensor). This is the perspective view from one point of the sensor through the lens; from another point the view will be slightly different.
On the right are the angular distributions of light around points of the back plane (s,t); these are essentially reflectance maps, associated with the perspective views. Both arrays are integrated, and a correct image is built from them, without constructing models, textures, and so on. Only two images.
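
In code terms, rendering a novel view reduces to a ray lookup per pixel. A schematic sketch (nearest-neighbor lookup for brevity; the 1996 method interpolates over the sixteen surrounding samples, and all names here are illustrative):

```python
import numpy as np

def render_ray(L: np.ndarray, u: float, v: float, s: float, t: float) -> float:
    """Look up the radiance of one ray in a discretized light field.

    L has shape (U, V, S, T); (u, v) and (s, t) are the ray's continuous
    intersection coordinates with the two planes, already scaled to index
    units. Nearest-neighbor for brevity; quadrilinear interpolation over
    the 16 neighboring samples gives smoother results.
    """
    ui = int(np.clip(round(u), 0, L.shape[0] - 1))
    vi = int(np.clip(round(v), 0, L.shape[1] - 1))
    si = int(np.clip(round(s), 0, L.shape[2] - 1))
    ti = int(np.clip(round(t), 0, L.shape[3] - 1))
    return float(L[ui, vi, si, ti])
```

To render a whole frame, you cast one ray per pixel from the virtual camera, intersect it with both planes, and call the lookup; no geometry is ever built.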
However, the fundamental disadvantages of light fields are already visible: jerky, jumping images and low resolution, combined with a rather large amount of data. This miserable lion in the video (albeit with a full 360-degree view) weighs as much as 400 MB. True, compression algorithms can bring that figure down to about 3 megabytes.
But the basic principle is not very different from the tricks of the programmers of old, who, with the help of a pile of sprites, showed us “3D” on ancient computers and consoles. And if you think that much has changed since 1996, you are very much mistaken. Here is a modern rendering of light fields.

If you look closely, you can see the twitching and jumping; watch the boxes at the end of the video.

But let's take the idea of rendering light fields further. Light fields are by no means 3D models, and working with them is more like working in Photoshop than in a design studio: there is no work with polygons, and hence no normals, no ray tracing, no ray casting.
Take lighting, for example. Here it is computed completely differently: a regular 360-degree photo of the environment is taken, a light map is built from it, and that map is then mixed with the light field (the pile of images from different angles) of the model.
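
As I read it, that mixing can be sketched as a weighted sum of directional images; this is purely my illustration of the idea, not the actual pipeline of any product:

```python
import numpy as np

def relight(model_views: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """Mix directional images of a model with an environment light map.

    model_views: (N, H, W, 3), the model captured as lit from N fixed
    directions. env_weights: (N,), brightness of the captured 360-degree
    environment in those same N directions. The relit image is just a
    weighted sum of images; no ray tracing is involved.
    """
    w = env_weights / env_weights.sum()
    return np.tensordot(w, model_views, axes=(0, 0))
```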

https://www.youtube.com/watch?v=UUvAVjUnE8M
Quite realistic and no ray tracing. And most importantly, super fast.
And, of course, both the lighting and the model can be dynamic video rather than static images.
The shadow projection can also be easily calculated from the silhouette of a certain frame.

Light field displays.
First, let's just create a hologram from Star Wars.
We take an anisotropic mirror, place it on a rotating platform at 45 degrees to the horizontal, and shine a high-frame-rate projector onto it from above. Each viewing angle gets its own image. And hello, Star Wars!
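
The demands on the projector follow from simple arithmetic (the values below are assumptions for illustration, not the specs of a particular device):

```python
# A spinning-mirror display must redraw every view on every revolution.
projector_fps = 4320          # assumed high-speed projector frame rate
revolutions_per_sec = 15      # assumed platform rotation speed
views_per_revolution = projector_fps // revolutions_per_sec
print(views_per_revolution)   # 288 distinct viewing angles, ~1.25 deg apart
```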

Enough playing; let's move on to serious VR problems.
For example, the conflict between vergence and accommodation of our eyes. To clarify: vergence is the simultaneous movement of both eyes in opposite directions that maintains a single binocular image. If a virtual object is very close to the “camera”, the eyes simultaneously try to converge their optical axes on it (vergence) and to focus on it (accommodation), which causes unpleasant sensations, including symptoms of motion sickness and fatigue of the eye muscles, often accompanied by a headache. In short, the eye must be allowed to focus at different distances, and a light field can be used for this. In the NE-LF (Near Eye Light Field) helmet, instead of one screen panel, two are installed, one behind the other, about five millimeters apart. This design is a “light field stereoscope”: the images on the two panels have different zones of sharpness and together form a single light field. This gives the eye something natural to focus on and relieves the discomfort.
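
The conflict itself is easy to put into numbers. A small sketch with assumed values: vergence follows the virtual object, while accommodation stays locked to the fixed optical distance of the screen.

```python
import math

IPD = 0.064           # interpupillary distance, m (typical assumed value)
screen_dist = 1.5     # optical distance of the headset screen, m (assumed)
virtual_dist = 0.3    # distance of the virtual object, m (assumed)

# The eyes converge as if the object were at 0.3 m...
vergence_deg = 2 * math.degrees(math.atan(IPD / 2 / virtual_dist))
# ...but must keep focusing at the fixed screen distance.
accommodation_diopters = 1 / screen_dist
print(vergence_deg, accommodation_diopters)  # the mismatch is the problem
```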

But this is all a surrogate. Nvidia has developed a prototype of real light field glasses, with an array of microlenses on top of an OLED display: in effect, a light field camera turned inside out.

As a result, the picture is sharp right next to the eyes, so there is no need for a bulky box on your face; the eyes do not get tired, and everything feels natural to them.
Guess what's wrong? What was wrong with the light field camera? What is its main disadvantage? That's right: resolution.

Well, last on the list but the most intriguing is the mysterious startup Magic Leap, which promises us light field technology for augmented reality.

With its own operating system and other goodies.

The public knows absolutely nothing about the startup. Nothing at all. A mystery shrouded in darkness. And yet it has managed to raise $2 billion in investment, Karl! Not on Kickstarter, of course, but from large companies. And, of course, it showed them something that made their hands reach for their wallets without a second thought. Who would give that kind of money for a couple of videos?
Just recently, Magic Leap surprised us by demonstrating its technical achievements.

Do you know what this is? Plastic? Glass? A lens? A screen? You guessed wrong. Don't even try.
Magic Leap says it is a light field photonic chip! No more, no less. Naturally, it is a nanotechnology product with a price tag to match. Have you caught your breath?
Now let's try to figure it out. Let's dig into the patents.

Diffractive optical elements (DOEs) can be understood as very thin “lenses” that provide beam shaping, beam splitting, and scattering or homogenization. Magic Leap uses a linear diffraction grating combined with circular lenses to split the beam at the wavefront and create beams with the desired focus. This directs the light into your eyes and makes it appear to come from the correct focal plane.

These DOEs are extremely thin, comparable to the wavelength of the light they control. Their main disadvantage is that each is strictly tied to one specific function: they cannot operate at different wavelengths or change properties for different focus points in real time. Therefore several different DOEs have to be used, each tuned to a specific focal length. Several layers of DOEs are stacked, and it is claimed they can be switched on and off; combinations of layers are used for intermediate focus values. Changing the active set of DOEs changes the path along which light leaves the photonic light field chip. In addition, Magic Leap has grandly, with a mysterious air, assured us that it has learned to create darkness with light: if one DOE is placed on the inner surface of the lens and another on the outer, light can be suppressed in much the same way as in noise-cancelling headphones. An excerpt from the patent:
Such a system could be used to suppress light from a planar waveguide relative to background or real-world light, somewhat like noise-canceling headphones.
Each DOE has its own focal plane (layer), and their composition makes up the final image. Yes, this is a multilayer photonic nanochip. There's nothing you can do about it.
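
As I understand the patent, the switchable layers combine roughly like stacked thin lenses whose optical powers add, so a few layers yield many intermediate focal planes. A toy illustration (entirely my reading, with made-up diopter values):

```python
from itertools import combinations

# Assumed optical powers (diopters) of individually switchable DOE layers.
layer_powers = [0.25, 0.5, 1.0, 2.0]

# Every on/off combination of layers gives a different total focal power,
# the way stacked thin lenses add their powers.
focal_settings = sorted({
    round(sum(c), 2)
    for r in range(len(layer_powers) + 1)
    for c in combinations(layer_powers, r)
})
print(focal_settings)  # 16 combinations -> 16 candidate focal planes
```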

Or a piece of glass and a 2 billion dollar scam)).
And finally, I would like to mention one more, almost forgotten, way to create a light field.

True, the resolution doesn’t shine here either.

Limitations and advantages.
The main advantage is extremely high realism and naturalness, almost cinematic. Considering how important this is for immersion in VR, this direction will clearly not be abandoned. I would remind you, however, that photogrammetry methods give similar results.

In principle, the two methods are very close, since photogrammetry is also built from video and photos; but, unlike light fields, it generates not piles of images but standard polygonal models covered with photo textures. Unfortunately, those come out quite heavy (high-poly) and far from optimal. In fact, light fields can be converted into a 3D model using photogrammetry methods (though not very easily), and it is quite easy to “screenshot” a light field from a 3D model.

So one thing can very well lead to another.
We must understand that light fields are not polygonal models. They are not interactive. They can be video animation, but not skeletal computer animation: this is volumetric video, nothing more. They know nothing of collisions and volume, although you can hide collision boxes inside them and switch animations with scripts. But procedural animation, destructible objects, ragdoll, and other such features are impossible. These are backgrounds and backdrops rather than actual interactive NPCs. Of course, a large number of pre-recorded animations can mitigate this drawback, but then the data volumes for light fields exceed all reasonable limits. I repeat: this is a pile of photos taken from almost every angle, and for animation they are not photos but videos. Large enough models (a room, for example) can take up tens of gigabytes.

On the other hand, unlike polygonal models, their complexity and polygon count do not matter. Light fields are extremely economical with computing resources (though merciless on memory) and can easily deliver the 90 frames per second needed for virtual reality without a thousand-dollar video card. The complexity of the object can, however, affect its compression: a cube compresses in video much better than a human model. Again, unlike models, there are no restrictions on polygons and the like, only on video size. And for computing resources it does not matter which video is being played: Avatar or The Simpsons, the player does not care.
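
The memory claim is easy to sanity-check with rough numbers (all assumed):

```python
# Rough storage estimate for a captured light field (all numbers assumed).
views = 60 * 30               # e.g. a 60x30 grid of viewpoints around an object
megapixels = 1                # per-view resolution, MP
bytes_per_pixel = 3           # 8-bit RGB
raw_gb = views * megapixels * 1e6 * bytes_per_pixel / 1e9
print(raw_gb)                 # ~5.4 GB uncompressed for static capture alone,
                              # hence the need for aggressive video compression
```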
It is my deepest conviction that mixed technologies will be used.
Photogrammetry + light fields = cinematic + interactivity.
And whoever is now building software for photogrammetric scanning, game engines for rendering light fields, and hardware-accelerated video codecs for streaming without delays may well hit the jackpot. So far, though, nobody can even stream full HD over Wi-Fi within a single room with a latency of even 50 ms, all the hardware Miracast devices notwithstanding. So it is not that simple.