
Proceedings Paper

Man-machine interaction in the 21st century--new paradigms through dynamic scene analysis and synthesis (Keynote Speech)
Author(s): Thomas S. Huang; Michael T. Orchard

Paper Abstract

The past twenty years have witnessed a revolution in the use of computers in virtually every facet of society. While this revolution has been largely fueled by dramatic technological advances, the efficient application of this technology has been made possible through advances in the paradigms defining the way users interact with computers. Today's massive computational power would probably have limited sociological impact if users still communicated with computers via the binary machine language codes used in the 1950's. Instead, this primitive paradigm was replaced by keyboards and ASCII character displays in the 1970's, and the 'mouse' and multiple-window bit-mapped displays in the 1980's. As continuing technological advances make even larger computational power available in the future, advanced paradigms for man-machine interaction will be required to allow this power to be used efficiently in a wide range of applications. Looking ahead into the 21st century, we see paradigms supporting radically new ways of interacting with computers. Ideally, we would like these interactions to mimic the ways we interact with objects and people in the physical world, and, to achieve this goal, we believe that it is essential to consider the exchange of video data into and out of the computer. Paradigms based on visual interactions represent a radical departure from existing paradigms, because they allow the computer to actively seek out information from the user via dynamic scene analysis. For example, the computer might enlarge the display when it detects that the user is squinting, or it might reorient a three-dimensional object on the screen in response to detected hand motions. This contrasts with current paradigms in which the computer relies on passive switching devices (keyboard, mouse, buttons, etc.) to receive information. Feedback will be provided to the user via dynamic scene synthesis, employing stereoscopic three-dimensional display systems.
To exploit the synergism between analysis and synthesis, we will need a common data representation used by both. To illustrate, we give some typical scenarios which could be made possible by these new paradigms.

Paper Details

Date Published: 1 November 1992
PDF: 2 pages
Proc. SPIE 1818, Visual Communications and Image Processing '92, (1 November 1992); doi: 10.1117/12.131460
Author Affiliations
Thomas S. Huang, Univ. of Illinois/Urbana-Champaign (United States)
Michael T. Orchard, Univ. of Illinois/Urbana-Champaign (United States)

Published in SPIE Proceedings Vol. 1818:
Visual Communications and Image Processing '92
Petros Maragos, Editor(s)
