High-frequency polychromatic visual stimuli for new interactive display systems
Brain–computer interfaces (BCIs) offer an intuitive mode of operation that uses electrical brain activity to communicate with external electronic devices. Over the past decade, BCI systems have been used for assistive living applications.1 In addition, 3D technologies are now widely available and are frequently used for virtual reality and augmented reality applications. As vision is the dominant human sense, it is thought that BCI-enabled interactive displays (especially 3D displays) will also have a broad range of applications in gaming and e-learning.
The steady state visual evoked potential (SSVEP) is an example of a BCI modality that can be induced by visual stimuli. The SSVEP is the brain's natural response to repetitive stimuli that are modulated at a constant frequency. It is thought that the SSVEP may be the most suitable modality for brain–display interactions (BDIs) because of its non-intrusiveness, ease of detection, and high information transfer rate.2 The functional architecture of BDI systems is illustrated in Figure 1. SSVEP-based BCI systems have been developed in recent years because of these attractive features. To induce strong SSVEP responses, however, most of these systems use visual stimuli in a low-frequency band (less than 20Hz).3 Unfortunately, bright lights that flicker in this frequency range can be distracting to viewers, and they can cause visual fatigue, migraine headaches, and even photosensitive epilepsy attacks.
Our group is taking the first steps in the development of a display-embedded SSVEP stimulus. As part of this work, we have proposed using a combination of flickering red-green lights to create an imperceptible flickering visual stimulus that can elicit an SSVEP at the basic flickering frequency.4 We have thus conducted a series of experiments to investigate whether the acuity of foveal vision (i.e., sharp central vision) can be used to overcome the limits of the high-frequency SSVEP (HF-SSVEP).5
For our experiments, we hypothesize that the fovea centralis—because of its high photopic visual acuity (i.e., in well-lit conditions)—should be capable of producing a detectable SSVEP in response to stimulus flashes above the flicker fusion threshold. We therefore measured the signal-to-noise ratios (SNRs) of human foveal SSVEP responses (see Figure 2). Although these HF-SSVEP responses are weaker than those in the alpha band (7.5–12.5Hz)—the prominent electroencephalography (EEG) signal observed during normal awake resting in humans6—they can still yield appreciable SNRs because other EEG signals also diminish in strength at high frequencies. The evaluation of our subjects' flicker perceptions (see Figure 2), however, shows that even at stimulation frequencies of 25–50Hz the flickering was uncomfortable and at an unacceptable level.
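To make the SNR measurement concrete, the sketch below shows one common way to estimate the SNR of an SSVEP response from an EEG epoch: the spectral power at the stimulation frequency divided by the mean power of neighboring frequency bins. The function name, parameters, and synthetic 40Hz test signal are our own illustrative assumptions, not the exact method used in the study.

```python
import numpy as np

def ssvep_snr(eeg, fs, f_target, n_neighbors=4):
    """SNR of an SSVEP response: power in the FFT bin at the stimulation
    frequency divided by the mean power of the neighboring bins.
    (Illustrative definition; the study's exact metric may differ.)"""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_target))  # bin closest to the target
    neighbors = np.r_[k - n_neighbors:k, k + 1:k + 1 + n_neighbors]
    return spectrum[k] / spectrum[neighbors].mean()

# Synthetic example: a weak 40Hz SSVEP buried in noise, 2s epoch at 250Hz.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
print(ssvep_snr(eeg, fs, 40))  # well above 1 when a response is present
```

Even a response much weaker than the background noise amplitude stands out in this measure, because the signal power is concentrated in one narrow frequency bin while the noise is spread across the whole spectrum.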
White single light sources are generally chosen as the visual stimuli in SSVEP-based BCI systems, but these traditional stimuli are not designed to be embedded in display images: the encoded flickering causes shifts in the image color, which must then be corrected with digital image processing. We therefore chose polychromatic flickering lights as our experimental visual stimuli.4 In particular, we propose using high-frequency (32 and 40Hz) flickering red/green mixed stimuli, with modulated amplitudes and relative-phase offsets, to understand more about color perception within the red-green channel of the human eye. In our experiments, we used high-frequency polychromatic lights—with a higher flickering frequency (40Hz) and a larger phase offset (180°)—to induce distinct SSVEP responses with only small levels of associated flicker sensation (see Figure 3).4
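The role of the relative-phase offset can be sketched as follows: both channels flicker at the same base frequency, and a 180° offset puts them in antiphase, so their sum (a rough proxy for luminance) stays nearly constant while the chromaticity still modulates—which plausibly explains the reduced flicker sensation. The function name, sampling rate, and parameterization below are illustrative assumptions, not the study's actual stimulus code.

```python
import numpy as np

def red_green_stimulus(f_flicker, phase_offset_deg, fs=1000, duration=0.5,
                       depth=1.0):
    """Red/green channel waveforms for a polychromatic flicker stimulus.
    Both channels are sinusoidally modulated at the same base frequency;
    the second channel is shifted by a relative-phase offset.
    (Illustrative parameterization only.)"""
    t = np.arange(0, duration, 1.0 / fs)
    phi = np.deg2rad(phase_offset_deg)
    red = 0.5 * (1 + depth * np.sin(2 * np.pi * f_flicker * t))
    green = 0.5 * (1 + depth * np.sin(2 * np.pi * f_flicker * t + phi))
    return t, red, green

# Antiphase (180°): the summed intensity is constant, so little flicker.
t, r, g = red_green_stimulus(40, 180)
print(np.ptp(r + g))  # ~0: the two channels cancel in the sum

# In phase (0°): the summed intensity swings with the full flicker depth.
t, r, g = red_green_stimulus(40, 0)
print(np.ptp(r + g))  # ~2: both channels flicker together
```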
Although 3D displays offer realistic sensations and various stimuli with depth information, few studies of SSVEP-based BCI systems have been conducted in 3D environments.7,8 We have therefore investigated the potential improvements to SSVEP-based BCI systems that can be enabled by 3D displays. To reduce the negative effects of flicker sensations in our study, we set the flickering frequency of the 3D stimuli to 30Hz on a patterned-retarder display.9 Two parameters of the 3D stimuli—disparity and crosstalk—affect the subject's 3D perception: large disparity angles cause a severe mismatch in eye convergence, and high crosstalk levels cause leakage of image light, both of which make it more difficult for subjects to form 3D perception. We therefore chose the data from the larger disparity angles (±3° and ±4°) to classify the 3D perception as either ‘perceived successfully,’ ‘perceived sometimes,’ or ‘perceived unsuccessfully’ (see Figure 4). Our results validate our original hypothesis, i.e., that SSVEP responses are affected by 3D perception. In addition, a one-way analysis of variance showed significant differences (p < 0.001) between the successfully and unsuccessfully perceived conditions.
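A one-way analysis of variance of this kind can be run with standard tools. The sketch below uses SciPy on fabricated SSVEP SNR values for the three perception categories—the numbers are purely illustrative and are not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical SSVEP SNR values grouped by 3D-perception outcome.
# All numbers are fabricated for illustration only.
rng = np.random.default_rng(1)
perceived = rng.normal(8.0, 1.0, 20)   # 'perceived successfully'
sometimes = rng.normal(6.5, 1.0, 20)   # 'perceived sometimes'
failed    = rng.normal(5.0, 1.0, 20)   # 'perceived unsuccessfully'

# One-way ANOVA: do the group means differ more than chance would allow?
f_stat, p_value = f_oneway(perceived, sometimes, failed)
print(f"F = {f_stat:.2f}, p = {p_value:.2g}")  # small p => means differ
```

A small p-value (as in the study's p < 0.001) indicates that at least one group mean differs from the others, i.e., that SSVEP strength varies with the 3D-perception outcome.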
We have proposed the use of high-frequency polychromatic SSVEP stimuli—with imperceptible flickers—for the development of new BDI systems. Our latest results show that 3D stimuli may successfully be used in SSVEP-based BCI systems that are integrated with 3D displays. We therefore hope to achieve full integration of imperceptible SSVEP stimuli and current display technologies to create the next generation of interactive display systems. In the next steps of our work, we expect to develop the necessary technology and to embed the optimal imperceptible SSVEP stimuli into the LED backlight of liquid crystal displays, organic LED displays, and quantum-dot LED displays. In addition, we aim to apply SSVEP-based BDI to a glaucoma detection device that will hopefully replace the traditional (and expensive) diagnostic technology.10
This work was partially supported by grants from Taiwan's Ministry of Science and Technology (academic research projects NSC-102-2221-E-009-167-MY3, NSC-102-2221-E-009-168-MY3, and MOST-103-2218-E-009-012).
Fang-Cheng Lin received his BS and MS in physics from National Cheng Kung University, Taiwan, in 2000 and 2002, respectively, and his PhD from National Chiao Tung University (NCTU). From 2009 to 2010 he was a visiting scientist at Philips Research in The Netherlands. He is currently an assistant research fellow at the Display Institute and is working on e-paper, brain–computer interfaces, and the development of eco-display systems. As a PhD student, he won the 2009 JSID Outstanding Student Paper of the Year award.
Yu-Yi Chen received her BS and MS in 2010 and 2011, respectively, and is currently a PhD candidate in the Department of Photonics. Her current research interests include brain-computer interfaces, biomedical signal processing, display human vision evaluation, and 3D displays.
John K. Zao received his BS in engineering science and his MS degree in electrical engineering from the University of Toronto, Canada. He also obtained his SM degree and PhD in computer science from Harvard University. From 1994 to 1999 he served as a senior member of the technical staff in the Information Security Department of BBN Technologies, and then as a principal member of the technical staff from 1999 to 2002. He has also served as the principal investigator of several DARPA (Defense Advanced Research Projects Agency)-funded research projects. He joined the Computer Science Department in 2004 and was elected as an IEEE senior member in 2001. His current research interests include imperceptible polychromatic stimuli for visual brain–computer interactions, and pervasive biomedical telemonitoring based on fog and cloud computing.
Yi-Pai Huang received his BS from National Cheng Kung University in 1999 and then obtained his PhD from the Institute of Electro-Optical Engineering at NCTU. He is currently an associate professor in the Department of Photonics and the Display Institute. In 2005, while at the AU Optronics (AUO) Corporation, he successfully developed the Advanced-MVA LCD for next-generation products. His current research interests are advanced display systems, display human vision evaluation, 3D displays, and display optics. He won the SID 2001 Best Student Paper Award, the SID 2004 Distinguished Student Paper Award, the 2005 Golden Thesis Award of the Acer Foundation, and the 2005 AUO Bravo Award.
Li-Wei Ko received his BS in mathematics from National Chung Cheng University in 2001, as well as his MS degree in educational measurement and statistics and PhD in electrical statistics from NCTU in 2004 and 2007, respectively. He is currently an executive officer of the Brain Research Center and an assistant professor in the Department of Biological Science and Technology. He is also a visiting scholar at the University of California, San Diego. His research covers neural networks, neural fuzzy systems, machine learning, brain–computer interfaces, and computational neuroscience. He is an associate editor for IEEE Transactions on Neural Networks and Learning Systems.
Han-Ping Shieh received his BS from National Taiwan University in 1975 and his PhD in electrical and computer engineering from Carnegie Mellon University in 1987. He has been a professor at the Institute of Electro-Optical Engineering, and at the Microelectronics and Information Research Center since 1992. Before that he was a research staff member at the IBM TJ Watson Research Center from 1988. He is now the vice chancellor of the University System of Taiwan and the AUO Chair Professor. He has also held an appointment as the Chang Jiang Scholar at Shanghai Jiao Tong University since 2010. He is a fellow of IEEE, OSA, and the Society for Information Display.
Yijun Wang received his BE and PhD in biomedical engineering from Tsinghua University, China, in 2001 and 2007, respectively. In 2006 he was a visiting researcher at the University Medical Center Hamburg-Eppendorf, Germany, and from 2007 to 2008 he was a research fellow at Tsinghua University's Institute of Neural Engineering. He is currently an assistant project scientist at the Swartz Center for Computational Neuroscience (SCCN). His research interests include brain–computer interfaces, biomedical signal processing, and machine learning.
Tzyy-Ping Jung received his BS in electronics engineering from NCTU in 1984, and his MS and PhD from Ohio State University in 1989 and 1993, respectively. He was then a research associate with the National Research Council of the National Academy of Sciences, and at the Salk Institute's Computational Neurobiology Laboratory. He is currently a research scientist and co-director of the Center for Advanced Neurological Engineering at UCSD. He is also an associate director of the SCCN, an adjunct professor of bioengineering, and a professor in the NCTU Department of Computer Science. His research interests lie in the areas of biomedical signal processing, cognitive neuroscience, machine learning, time-frequency analysis of human EEGs, functional neuroimaging, neural engineering, and brain–computer interfaces and interactions.