Increasing the depth of field for multiview 3D images
One of the overarching goals for 3D displays is to allow viewers to perceive depth naturally. This can be achieved through ‘continuous parallax,’ in which the view of a scene changes smoothly as the viewing position shifts. Indeed, continuous parallax is the principal factor that governs the creation of natural viewing conditions with a 3D display. It is also the reason why eyes do not suffer from the vergence-accommodation conflict when people view natural scenes and objects. Continuous parallax can be achieved with a type of 3D display known as a ‘super-multiview display’ (a concept first introduced in the 1990s1).
In a super-multiview display, at least two different image views must be provided simultaneously to each eye of a viewer. Through this process, a viewer can perceive depth in much the way they would with a hologram. In other words, a monocular sense of depth is achieved by simultaneously projecting two different-view images onto each pupil of a viewer's eyes. Although the realization of a monocular sense of depth, with 26 continuously projected different-view images directed to the eyes of viewers, was reported in 2011,2 no additional information on this subject has since been published.
We have therefore been investigating ways to verify the effects of increasing the number of simultaneously projected different-view images to each eye of a subject. As part of this work we have developed a super-multiview condition simulator that can be used to project up to four different-view images to each eye simultaneously.3 With our simulator we can address a number of important questions that need to be answered before a commercial super-multiview display is built. For instance, is it possible to obtain a monocular sense of depth with more than two simultaneously projected images? In addition, does the focusable depth range increase proportionally with the number of different-view images that are simultaneously projected to the pupils?
Our super-multiview effect simulator consists of four projectors (two for each eye) that we use to separately project the images to different locations on the pupil plane (see Figure 1). We align the two projectors for each eye at 90° so that the optical axis of the objective for the first (third) projector crosses that of the second (fourth) projector at a half-mirror. Each half-mirror is oriented at 45° to the optical axes of the two objectives. In this way, a single half-mirror combines the optical axes of two projectors.
In our simulator design we also include a liquid crystal display shutter at the input aperture of each projector. Each shutter consists of three strips that are aligned vertically. For each eye, the two shutters are aligned so that their strip patterns are shifted by one strip width relative to each other (see Figure 2). For our four different-view image operation, the first and third strips of each shutter are in the ‘on’ position. Two strips in the first shutter thus overlap with those in the second shutter. The first and third strips of shutters 1 and 2 project the first/third and second/fourth different-view images, respectively. Each shutter strip operates at a rate of 30Hz, and the displayed images therefore flicker. The shutter configuration differs slightly in each of the different-view operations (see Figure 2). For the one-image view operation (i.e., stereoscopic image operation), only the second strip of the first projector for each eye is on, whereas in the two-image view operation the second strip of both projectors for each eye is on (at 60Hz). Lastly, in the three-image view operation the first and third strips of the first projector (at 30Hz), and the first strip of the second projector (at 60Hz), are on.
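The four shutter configurations described above can be summarized in a small table. The sketch below is our own illustrative encoding (the names `proj1`/`proj2` and the data layout are not from the article); it records which strips of each eye's two shutters are on in each view mode, and checks that the number of open strips matches the number of views delivered per eye.

```python
# Per-eye shutter configurations for the 1-4 view modes described in the text.
# Keys are the number of simultaneously projected views; values map each of
# the two projectors for one eye to the set of 'on' strip indices (1-3).
# (The identifiers below are illustrative, not from the original article.)
SHUTTER_ON = {
    1: {"proj1": {2}, "proj2": set()},     # stereoscopic: strip 2, projector 1
    2: {"proj1": {2}, "proj2": {2}},       # strip 2 of both projectors, 60Hz
    3: {"proj1": {1, 3}, "proj2": {1}},    # proj1 strips at 30Hz, proj2 at 60Hz
    4: {"proj1": {1, 3}, "proj2": {1, 3}}, # all four strips at 30Hz
}

def views_delivered(mode):
    """Number of views reaching one eye: one per open shutter strip."""
    return sum(len(strips) for strips in SHUTTER_ON[mode].values())

# Sanity check: each mode opens exactly as many strips as views it delivers.
for mode in (1, 2, 3, 4):
    assert views_delivered(mode) == mode
```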
The last component of our simulator is the spherical mirror we use as the image projection screen. The combined optical axes from each eye cross at the center of this mirror. In addition, the images from the input apertures of both eyes are focused at a distance of 750mm from the center of the mirror (with a 65mm separation in the horizontal direction). The image of the input aperture has a size of 5mm at the focus distance. We express the depth of field (DOF) for our simulator in terms of the diopter value (D). D is a measure of optical power, equal to 1/d (where d is the designed viewing distance of the 3D display, in meters). We calculate the focusable depth range as (1±0.3)D, which can be translated as (0.7∼1.3)/d. When d is 750mm, the focusable depth range for stereoscopic image viewing is therefore about 577–1071.4mm.
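The arithmetic above can be checked with a few lines of code. This sketch assumes the (1±0.3)D depth-of-field model stated in the text (the function name and parameters are ours): the near limit corresponds to the higher optical power 1.3/d, and the far limit to the lower power 0.7/d.

```python
def focusable_range_mm(d_mm, rel_dof=0.3):
    """Focusable depth limits (near, far) in mm for a design viewing
    distance d_mm, under the (1 +/- rel_dof) * D depth-of-field model."""
    base_diopter = 1000.0 / d_mm                     # D = 1/d, with d in meters
    near = 1000.0 / (base_diopter * (1 + rel_dof))   # higher power -> closer limit
    far = 1000.0 / (base_diopter * (1 - rel_dof))    # lower power -> farther limit
    return near, far

near, far = focusable_range_mm(750)
print(round(near, 1), round(far, 1))  # -> 576.9 1071.4
```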
We have tested our simulator on a 23-year-old male subject with visual acuity close to 2.0. For these measurements, we used a Grand Seiko WAM-5500 accommodometer and projected an image of a Maltese cross. We moved the projected image so that its perceived depth ranged from 350 to 1650mm from the viewer. Our results (see Figure 3) clearly show that the DOF of the viewer's eyes increases as the number of different-view images increases. Furthermore, our simultaneous projection of four different-view images provides a greater DOF than stereoscopic image viewing. We do not, however, find any evidence that a monocular sense of depth was induced with two different-view images.
In summary, we have developed a super-multiview effect simulator with which we can simultaneously project up to four different-view images onto each eye of a viewer. With this kind of 3D display it is possible to achieve continuous parallax and allow viewers to perceive depth. In our simulator design we include four projectors (with a liquid crystal display shutter at each input aperture) and a spherical mirror as the image projection screen. Test measurements we made with our system indicate that the viewer's depth of field increases with the number of different-view images that are used. We are currently working on a project in which super-multiview content is being developed for telepresence services based on 5G communication networks.
This work was supported as part of the Giga KOREA Project, through the Development of Interactive and Realistic Massive Giga-Content Technology program (GK16C0100).
Beom-Ryeol Lee has been a principal researcher for ETRI since 1989 and is also an assistant professor at the University of Science and Technology, Republic of Korea. His research interests include super-multiview imaging and digital holographic content.