
Proceedings Paper

Eye-tracking for human-centered mixed reality: promises and challenges
Author(s): Aaron L. Gardony; Robert W. Lindeman; Tad T. Brunyé

Paper Abstract

Eye-tracking hardware and software are being rapidly integrated into mixed reality (MR) technology. Cognitive science and human-computer interaction (HCI) research demonstrate several ways eye-tracking can be used to gauge user characteristics, intent, and status, as well as to provide active and passive input control to MR interfaces. In this paper, we argue that eye-tracking can be used to ground MR technology in the cognitive capacities and intentions of users, and that such human-centered MR is important for MR designers and engineers to consider. We detail relevant and timely research in eye-tracking and MR and offer suggestions and recommendations to accelerate the development of eye-tracking-enabled human-centered MR, with a focus on recent research findings. We identify several promises that eye-tracking holds for improving MR experiences. In the near term, these include user authentication, gross interface interactions, monitoring visual attention across real and virtual scene elements, and adaptive graphical rendering enabled by relatively coarse eye-tracking metrics. In the far term, hardware and software advances will enable gaze-depth-aware foveated MR displays and attentive MR user interfaces that track user intent and status using fine and dynamic aspects of gaze. Challenges, such as current technological limitations, difficulties in translating lab-based eye-tracking metrics to MR, and heterogeneous MR use cases, are considered alongside cutting-edge research working to address them. With a focused research effort grounded in an understanding of the promises and challenges for eye-tracking, human-centered MR can be realized to improve the efficacy and user experience of MR.

Paper Details

Date Published: 19 February 2020
PDF: 18 pages
Proc. SPIE 11310, Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100T (19 February 2020); doi: 10.1117/12.2542699
Author Affiliations
Aaron L. Gardony, U.S. Army Combat Capabilities Development Command Soldier Ctr. (United States)
Tufts Ctr. for Applied Brain and Cognitive Sciences (United States)
Robert W. Lindeman, Univ. of Canterbury (New Zealand)
Tad T. Brunyé, U.S. Army Combat Capabilities Development Command Soldier Ctr. (United States)
Tufts Ctr. for Applied Brain and Cognitive Sciences (United States)


Published in SPIE Proceedings Vol. 11310:
Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR)
Bernard C. Kress; Christophe Peroz, Editor(s)

© SPIE.