Comparing collaborative interaction in different virtual environments

Haptics functionality and an immersive environment aid the joint manipulation of objects in virtual reality.
27 November 2007
Qonita Shahab, Yong-Moo Kwon, Heedong Ko, Maria Mayangsari, Shoko Yamasaki, and Hiroaki Nishino

Virtual reality technology is used to engage humans in a simulated environment, typically for entertainment, training, or education. The development of supporting hardware and software tools, such as display and interaction devices and physics-simulation libraries, has been accelerating. Progress has been especially rapid in haptics, the application of touch sensations using force, vibration, or motion. These advances affect collaborative virtual environments (CVEs), in which multiple users work together by interacting with objects in a shared environment. In such cases, all of the participants' inputs must be combined in real time to determine an object's behavior, and the way this is implemented varies across virtual reality systems.

Several research studies have examined interaction techniques between users in CVEs, especially cases in which multiple users handle the same object. We built the Virtual Dollhouse application to demonstrate concurrent object manipulation, in which several people act on an object together: for example, lifting a block at the same time. In this application, two people collaborate to build a dollhouse from several building blocks, a hammer, and several nails. Network support enables participants in different places to work jointly within the simulation and to see the results of each other's actions.
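The core of concurrent manipulation is that one object's motion depends on every participant's input at once. As a minimal sketch (the function names and numbers here are illustrative, not taken from Virtual Dollhouse), jointly lifting a block can be modeled by summing each user's upward force against gravity:

```python
# Illustrative sketch of concurrent object manipulation: the grip
# forces of all users are summed and weighed against gravity to
# decide whether the shared block rises. Names/values are hypothetical.

GRAVITY = 9.8  # m/s^2

def block_acceleration(mass_kg, user_forces_n):
    """Net vertical acceleration of a block lifted by several users.

    user_forces_n: upward force (in newtons) contributed by each user.
    """
    net_force = sum(user_forces_n) - mass_kg * GRAVITY
    return net_force / mass_kg

# One user pushing 15 N cannot lift a 2 kg block alone,
# but two users together can.
solo = block_acceleration(2.0, [15.0])        # negative: block stays down
duo = block_acceleration(2.0, [15.0, 15.0])   # positive: block rises
```

In this toy model the block's behavior is a pure function of the combined inputs, which is exactly the quantity a CVE must evaluate in real time each simulation step.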

Manipulating the same object's attributes generates the most events, or changes in the CVE, which must be communicated throughout the environment.1 We therefore focused our study on changes to a single object's attributes, that is, situations where the object's response depends on the combined inputs of the collaborating users. The first issue we addressed in our research is the effect of haptics on collaborative interaction.2 The second is the possibilities for collaboration between users in different types of virtual environments.3
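The event mechanism described above can be sketched in a few lines: whenever a shared object's attribute changes, an event is generated and delivered to every connected participant so all copies of the scene stay consistent. This is a generic illustration, not the article's actual networking code; the class and field names are hypothetical.

```python
# Hypothetical sketch of event propagation in a CVE: an attribute
# change on a shared object is broadcast to all participants.

class SharedObject:
    def __init__(self, name, listeners):
        self.name = name
        self.attributes = {}
        # callbacks standing in for the network link to each remote peer
        self.listeners = listeners

    def set_attribute(self, key, value):
        self.attributes[key] = value
        event = {"object": self.name, "attr": key, "value": value}
        for notify in self.listeners:  # broadcast the change to every peer
            notify(event)

# Two simulated peers record the events they receive.
peer_a, peer_b = [], []
block = SharedObject("block_1", [peer_a.append, peer_b.append])
block.set_attribute("position_z", 0.5)
# both peers now hold the same {"object": "block_1", ...} event
```

Because concurrent manipulation changes a single object's attributes many times per second, this broadcast path dominates the event traffic, which is why it was the focus of the study.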

To address the first issue, we tested versions of Virtual Dollhouse with and without haptics functionality, comparing them over the Internet as well as over a local area network.4 To address the second issue, we examined Virtual Dollhouse use by participants with only a PC and by those with a cave automatic virtual environment (CAVE), an immersive display,5 as seen in Figure 1. We analyzed the usefulness of the immersive environment to follow up on evidence6 that it holds the key to effective remote collaboration.


Figure 1. Using the Virtual Dollhouse application, a participant inside the CAVE uses SPIDAR to interact with a PC user.

Visual and color feedback indicating the status of the object each user is operating on allowed participants to work together effectively, even during voiceless collaboration between distant sites, as seen in Figure 2. We also concluded that, compared to analog devices such as joysticks, a 3D device like the space interface device for artificial reality (SPIDAR)7 is more intuitive for tasks in which users select and move a set of objects. Moreover, an immersive display environment is more suitable than a non-immersive one for simulating object manipulation that requires force and the sensation of weight.


Figure 2. Two users at a distance from each other collaborate effectively over the Internet without voice communication.

From our tests of the application over different networks and in varying environments, we conclude that haptics functionality via a force-feedback device helps participants feel each other's presence. It also allows joint work to be performed more efficiently (that is, with less wasted time). However, network delays degraded the smoothness of the haptic feedback. In the future, we will study possible solutions to this problem and update our algorithm.
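One generic way to soften the jitter that delayed force samples cause is to smooth them before rendering; the sketch below uses simple exponential smoothing. This is an assumption for illustration only, not the authors' algorithm or planned solution.

```python
# Illustrative exponential smoothing of jittery force readings that
# arrive over a laggy network link. Not the authors' method.

def smooth_forces(samples, alpha=0.3):
    """Exponentially smooth a sequence of raw force readings.

    alpha: weight of the newest sample (0 < alpha <= 1);
    smaller values give smoother but laggier output.
    """
    smoothed = []
    current = samples[0]
    for s in samples:
        current = alpha * s + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

raw = [0.0, 5.0, 0.0, 5.0, 0.0]  # abrupt swings from delayed packets
out = smooth_forces(raw)          # gentler curve fed to the device
```

The trade-off is familiar: smoothing suppresses the spikes that make delayed haptics feel rough, at the cost of slightly blunting genuine force changes.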


Qonita Shahab, Yong-Moo Kwon, Heedong Ko, Maria Mayangsari
Imaging Media Research Center
Korea Institute of Science and Technology
Seoul, Korea
Shoko Yamasaki, Hiroaki Nishino
Department of Computer Science and Intelligent Systems
Oita University
Oita, Japan
