Hologram interaction based on brain wave measurement
In recent decades, developers of 3D content displays have investigated techniques such as stereoscopy, multi-view, integral imaging, and holography for applications in entertainment and medical imaging. One obstacle they face is that visual depth cues, such as motion parallax and disparity, can induce visual discomfort and fatigue if they are not all correctly provided to the observer.1–6 For instance, crosstalk and the mismatch between accommodation and convergence (where the eye switches between focusing on near and far objects) are often considered the main sources of visual fatigue. Only holography enables reproduction of natural viewing conditions, but several technical bottlenecks remain before this approach is fully viable. In the few experimental setups currently available,7 the main concern is that spatial light modulators (SLMs) cannot display complete hologram information: SLMs designed for amplitude-only or phase-only modulation are available, but not both at the same time. As a result, the twin image and zeroth-order contributions are visible in the reconstruction plane and degrade the overall image quality. To observe the object, an off-axis configuration is necessary to spatially separate it from these undesired contributions. Consequently, only a few studies exist on human interaction with holograms.
Most of the interfaces developed so far are based on hand and gesture recognition. However, these kinds of interfaces are not suitable for disabled persons or for any situation where movement capabilities are limited. An alternative channel is a brain-wave-based interface. Electroencephalogram (EEG) signals of brain electrical activity can easily be recorded by electrodes placed on the head. We can then observe the EEG in real time and extract evoked potentials (electrical potentials from the nervous system) that are time-locked to specific events or stimuli. These evoked potentials can be used for interaction with the external environment. For example, steady-state visual evoked potentials (SSVEPs) are synchronized brain responses to repetitive visual stimuli such as oscillating flickers.3 When a person focuses on a visual stimulus oscillating between black and white at a rate of more than 3Hz, a strong peak is evoked at the same frequency as the repetitive stimulus. The second and third harmonics are also observable in the frequency domain of the EEG signal. As a result, changes in the SSVEP components can be used as an alternative interface to detect the person's intention.5
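The frequency-domain detection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it scores each candidate stimulus frequency by the spectral power at its fundamental and harmonics, and the function name, frequency tolerance, and synthetic test signal are all assumptions for the example.

```python
import numpy as np

def detect_ssvep(eeg, fs, stim_freqs, harmonics=3, band=0.2):
    """Hypothetical helper: score each candidate flicker frequency by the
    summed spectral power at its fundamental and first harmonics, and
    return the best-scoring frequency."""
    # Hann window reduces spectral leakage before the FFT
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f0 in stim_freqs:
        power = 0.0
        for h in range(1, harmonics + 1):
            # Sum power in a narrow band around each harmonic of f0
            mask = np.abs(freqs - h * f0) <= band
            power += spectrum[mask].sum()
        scores.append(power)
    return stim_freqs[int(np.argmax(scores))]

# Synthetic check: 4s of noise plus a strong 5Hz SSVEP-like component
np.random.seed(0)
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = 0.5 * np.random.randn(t.size) + np.sin(2 * np.pi * 5.0 * t)
detected = detect_ssvep(eeg, fs, [3.75, 4.285, 5.0, 6.0])
print(f"detected flicker: {detected} Hz")
```

In a real system, the same scoring would run on a sliding window of the live EEG stream, with a decision threshold tuned to suppress false positives.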
We designed a holographic system with our SSVEP-based brain-computer interface.8 The hologram was displayed on a liquid-crystal-on-silicon SLM with a resolution of 600×1024 pixels and a pixel pitch of 7.6μm. We used computer-generated hologram data to recreate a 2.1×1×1.3mm image of a teapot.9 The reconstruction distance was originally 150mm, and we used optical lenses to magnify the reconstructed plane and make it fit the designed viewing window. Four flickers located to the left, right, top, and bottom of the hologram image were presented to viewers. White-and-black paradigms, in which squares continuously oscillate between white and black at a given frequency, have previously been used to evoke SSVEP responses. However, we used a threatening picture as the visual stimulus: emotional pictures of threat or fear can evoke strong SSVEP responses because neural mechanisms are very sensitive to the effects of defense mechanisms on cognitive control function.5 SSVEPs can be evoked for frequencies up to 75Hz, but high frequencies evoke very weak SSVEP amplitudes, especially when they exceed the critical fusion frequency (above which an intermittent light source appears constant to the eye). Furthermore, although the alpha band (8–12Hz) evokes the strongest SSVEP amplitudes, it also increases false-positive responses. Thus, we used four low frequencies (3.75, 4.285, 5, and 6Hz). Figure 1 shows our experimental setup.
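The article does not state how the four frequencies were chosen, but they happen to match integer frame divisions of a 60Hz display refresh rate (60/16, 60/14, 60/12, 60/10), a common way to generate stable flicker rates. The sketch below illustrates that assumption; the refresh rate and function name are hypothetical.

```python
# Assumption for illustration: flicker frequencies derived by toggling a
# stimulus every n display frames on a 60 Hz monitor. The article does not
# confirm this derivation, but its four frequencies match 60/n exactly.
REFRESH_HZ = 60.0

def flicker_frequency(frames_per_period):
    """One full black/white flicker cycle spans this many display frames."""
    return REFRESH_HZ / frames_per_period

for n in (16, 14, 12, 10):
    print(f"{n:2d} frames/period -> {flicker_frequency(n):.3f} Hz")
```

Frame-locked flicker of this kind avoids drift between the stimulus and the display, which keeps the SSVEP peak narrow and easier to detect.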
When a clear and strong SSVEP signal was detected for one of the four flickers, the off-axis angle of the hologram was modified numerically10 so that the object and twin image were translated in the X and Y directions (see Figure 2). We performed the numerical modification by multiplying the holographic data by a spatial carrier according to Equation 1:

H_off-axis(x, y) = H_initial(x, y) · exp[i(2π/λ)(x sinϕ + y sinψ)],   (1)
where H_initial and H_off-axis are the holograms before and after modification of the off-axis angle, (x, y) are the coordinates in the hologram plane, ϕ and ψ are the desired off-axis angles in the X and Y directions, and λ is the wavelength (532nm in our experiment). When the angle was too small, the object, twin image, and zeroth order overlapped in the central part of the image. The maximum angle, however, was limited by the resolution of the SLM and by the presence of higher diffraction orders, which we spatially filtered out with a mask. One subject was asked to gaze at a specific flicker to move the hologram in the desired direction, so as to find the position where the visual quality of the object was optimal. The subject was able to control the hologram intentionally. However, to prove the feasibility of SSVEP-based hologram interaction, further experiments with a larger number of subjects are required. In future work, we will test different kinds of flickers to enhance brain responses and improve the ease of use of our method. The display system also needs improvement, and we require additional functions beyond mere translation in the XY plane. Ideally, the proposed brain interface could be used for interaction with floating objects through 3D flickers located at different depths.
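The spatial-carrier multiplication of Equation 1 can be sketched numerically. This is an illustrative implementation under stated assumptions, not the authors' code: the pixel pitch and wavelength come from the article, while the function name and the Nyquist estimate of the maximum angle (carrier period of at least two pixels) are assumptions for the example.

```python
import numpy as np

WAVELENGTH = 532e-9   # m, wavelength used in the experiment
PITCH = 7.6e-6        # m, SLM pixel pitch from the article

def apply_off_axis_carrier(hologram, phi, psi):
    """Sketch of Equation 1: multiply a complex hologram by a spatial
    carrier exp[i(2*pi/lambda)(x*sin(phi) + y*sin(psi))], which shifts
    the object and twin image in X and Y at reconstruction."""
    ny, nx = hologram.shape
    # Physical pixel coordinates centered on the SLM
    x = (np.arange(nx) - nx // 2) * PITCH
    y = (np.arange(ny) - ny // 2) * PITCH
    X, Y = np.meshgrid(x, y)
    carrier = np.exp(1j * 2 * np.pi / WAVELENGTH
                     * (X * np.sin(phi) + Y * np.sin(psi)))
    return hologram * carrier

# Assumed sampling bound: the carrier must span at least two pixels,
# giving sin(theta_max) = wavelength / (2 * pitch) for this SLM.
theta_max = np.arcsin(WAVELENGTH / (2 * PITCH))
print(f"max off-axis angle ~ {np.degrees(theta_max):.2f} deg")
```

The carrier is a pure phase ramp, so it leaves the hologram amplitude untouched; only the propagation direction of the reconstructed orders changes.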
This work was supported by the GigaKOREA project (GK15D0100, Development of Telecommunications Terminal with Digital Holographic Table-top Display, and GK15C0100, Development of Interactive and Realistic Massive Giga-Content Technology).