
Proceedings Paper

Crossmodal information for visual and haptic discrimination
Author(s): Flip Phillips; Eric J. L. Egan

Paper Abstract

Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximal perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to more pragmatic examinations of their psychophysical relationship. To better understand the nature of this interaction, we performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using unimodal or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. The spatial frequency of object features also affected performance differentially across the range used in these experiments. The sculpted objects were scanned in 3D, and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequencies were harder to sculpt from haptic input alone than from visual input alone; the opposite held for objects with low spatial frequencies. The psychophysical discrimination and comparison experiments yielded similar findings: there is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature detail. The existence of non-universal (i.e., modality-specific) representations explains the poor crossmodal performance. Our current findings suggest that haptic and visual information are either integrated into a multimodal representation, or that each remains independent with a somewhat efficient translation between them. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.
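
The abstract only summarizes the stimulus-generation step. A minimal sketch of one plausible reading of "statistically specified in the Fourier domain" (a sphere whose radial displacement map has a power-law amplitude spectrum with random phases, band-limited to control the spatial frequency of surface features) might look like the Python below; every name and parameter here (fourier_blob, beta, cutoff, amp) is illustrative rather than the authors' actual method:

```python
import numpy as np

def fourier_blob(n=128, beta=2.0, cutoff=8.0, amp=0.15, seed=0):
    """Sphere perturbed by a displacement map specified in the Fourier
    domain: amplitudes fall off as f**(-beta/2) (power ~ 1/f**beta) up
    to a spatial-frequency cutoff, and phases are random."""
    rng = np.random.default_rng(seed)
    # Integer frequency grid for an n x n displacement map over (theta, phi).
    fx = np.fft.fftfreq(n) * n
    fy = np.fft.fftfreq(n) * n
    f = np.hypot(fx[None, :], fy[:, None])
    with np.errstate(divide="ignore"):
        amplitude = np.where((f > 0) & (f <= cutoff), f ** (-beta / 2), 0.0)
    phases = np.exp(2j * np.pi * rng.random((n, n)))
    # Taking .real is a shortcut; a Hermitian-symmetric spectrum would
    # yield a real field directly.
    noise = np.fft.ifft2(amplitude * phases).real
    noise *= amp / np.abs(noise).max()          # normalize peak displacement
    # Wrap the displacement map onto a unit sphere: r(theta, phi) = 1 + noise.
    theta = np.linspace(0.0, np.pi, n)          # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n)      # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    r = 1.0 + noise
    x = r * np.sin(T) * np.cos(P)
    y = r * np.sin(T) * np.sin(P)
    z = r * np.cos(T)
    return x, y, z                              # n x n surface grids
```

Lowering cutoff yields smooth, low-frequency blobs; raising it adds the fine surface detail that, per the abstract, favors vision over touch.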
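
Likewise, the metric and statistical comparison of scanned sculptures against their targets is not spelled out in the abstract. A toy version, under the strong simplifying assumption that both shapes have been resampled as radial maps r(theta, phi) on a common grid (real scans would first need registration and resampling), could be:

```python
import numpy as np

def compare_shapes(r_orig, r_sculpt):
    """Toy shape comparison for two radial maps on the same grid:
    RMS radial error (metric) and Pearson correlation (statistical)."""
    diff = r_sculpt - r_orig
    rms = float(np.sqrt(np.mean(diff ** 2)))
    corr = float(np.corrcoef(r_orig.ravel(), r_sculpt.ravel())[0, 1])
    return rms, corr
```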

Paper Details

Date Published: 10 February 2009
PDF: 15 pages
Proc. SPIE 7240, Human Vision and Electronic Imaging XIV, 72400H (10 February 2009); doi: 10.1117/12.817167
Author Affiliations:
Flip Phillips, Skidmore College (United States)
Eric J. L. Egan, The Ohio State Univ. (United States)


Published in SPIE Proceedings Vol. 7240:
Human Vision and Electronic Imaging XIV
Editors: Bernice E. Rogowitz; Thrasyvoulos N. Pappas

© SPIE.