Proceedings Volume 8649

The Engineering Reality of Virtual Reality 2013


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 15 March 2013
Contents: 8 Sessions, 21 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2013
Volume Number: 8649

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8649
  • Welcome to Future Tech 3000
  • Can I Borrow Your Swiss Army Knife?
  • Every Picture Tells a Story, Don't It?
  • Discover Secret Worlds
  • Panel Session: Art, Science, and Immersion: Data-Driven Experience
  • Viewing Ports of Call
  • Interactive Paper Session
Front Matter: Volume 8649
Front Matter: Volume 8649
This PDF file contains the front matter associated with SPIE Proceedings Volume 8649, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Welcome to Future Tech 3000
CalVR: an advanced open source virtual reality software framework
Jürgen P. Schulze, Andrew Prudhomme, Philip Weber, et al.
We developed CalVR because none of the existing virtual reality software frameworks offered everything we needed, such as cluster-awareness, multi-GPU capability, Linux compatibility, multi-user support, collaborative session support, or custom menu widgets. CalVR combines features from multiple existing VR frameworks into an open-source system, which we use in our laboratory on a daily basis, and for which dozens of VR applications have already been written at UCSD and at other research laboratories worldwide. In this paper, we describe the philosophy behind CalVR, its standard and unique features and functions, its programming interface, and its inner workings.
CAVE2: a hybrid reality environment for immersive simulation and information analysis
Alessandro Febretti, Arthur Nishimoto, Terrance Thigpen, et al.
Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D, and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and VTK applications.
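A quick back-of-envelope check of the quoted resolution figures, assuming 1366 × 768 panels (a common resolution for 46-inch stereoscopic LCDs of that generation; the abstract does not state the panel resolution):

```latex
% Assumed panel resolution: 1366 x 768 (not stated in the abstract above).
\[
72\ \text{panels} \times (1366 \times 768)\ \text{pixels}
  \approx 7.55 \times 10^{7}\ \text{pixels} \approx 75\ \text{Mpixels in 2D},
\]
\[
\tfrac{1}{2} \times 75\ \text{Mpixels} \approx 37\ \text{Mpixels per eye in stereoscopic 3D},
\]
% consistent with the quoted 74 and 37 Mpixel figures, allowing for rounding
% and for pixels lost at panel borders.
```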
Can I Borrow Your Swiss Army Knife?
MASCARET: creating virtual learning environments from system modelling
Ronan Querrec, Paola Vallejo, Cédric Buche
The design process for a Virtual Learning Environment (VLE) such as that put forward in the SIFORAS project (SImulation FOR training and ASsistance) means that system specifications can be differentiated from pedagogical specifications. System specifications can also be obtained directly from the specialists' expertise, that is to say, directly from Product Lifecycle Management (PLM) tools. To do this, the system model needs to be considered as a piece of VLE data. In this paper we present MASCARET, a meta-model which can be used to represent such system models. In order to ensure that the meta-model is capable of describing, representing and simulating such systems, MASCARET is based on SysML, a standard defined by the OMG.
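To make "the system model as a piece of VLE data" concrete, here is a deliberately toy sketch; it is not taken from MASCARET, whose meta-model is far richer, and every name in it is an illustrative assumption. The point is only that the simulation instantiates entities from a SysML-like block description rather than hard-coding their behavior:

```typescript
// Toy sketch only: a SysML-like block description consumed as data by the
// virtual environment. MASCARET's actual meta-model is far richer than this.
interface BlockDef {
  name: string;
  operations: string[]; // behaviors the VLE can trigger on the entity
}

// Hypothetical system model, as a domain expert (or a PLM export) might define it.
const pumpModel: BlockDef = {
  name: "HydraulicPump",
  operations: ["start", "stop", "setPressure"],
};

// The VLE builds an interactive entity from the model, not from custom code.
function instantiate(def: BlockDef): Record<string, () => void> {
  const entity: Record<string, () => void> = {};
  for (const op of def.operations) {
    entity[op] = () => console.log(`${def.name}.${op}() invoked in the VLE`);
  }
  return entity;
}

instantiate(pumpModel).start(); // -> "HydraulicPump.start() invoked in the VLE"
```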
Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization
Semay Johnston, Luc Renambot, Daniel Sauter
Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL’s value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.
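As a flavor of the kind of material such a methodology covers, the sketch below renders side-by-side stereo with raw WebGL by splitting the canvas into per-eye viewports. It is a minimal illustration under our own assumptions (a stub scene and a hard-coded eye offset), not code from the paper:

```typescript
// Minimal side-by-side stereo loop in raw WebGL. The scene itself is a stub;
// the per-eye viewport/scissor split is the point of the sketch.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const maybeGl = canvas.getContext("webgl");
if (!maybeGl) throw new Error("WebGL is not supported in this browser");
const gl = maybeGl; // non-null from here on

// Stub: a real renderer would apply `eyeOffset` to the view matrix and draw.
function drawScene(eyeOffset: number): void {
  void eyeOffset;
}

function drawStereoFrame(): void {
  const w = canvas.width;
  const h = canvas.height;
  gl.enable(gl.SCISSOR_TEST);

  // Left eye on the left half of the canvas.
  gl.viewport(0, 0, w / 2, h);
  gl.scissor(0, 0, w / 2, h);
  drawScene(-0.032); // half an assumed 64 mm interocular distance, in meters

  // Right eye on the right half.
  gl.viewport(w / 2, 0, w / 2, h);
  gl.scissor(w / 2, 0, w / 2, h);
  drawScene(+0.032);

  requestAnimationFrame(drawStereoFrame);
}
requestAnimationFrame(drawStereoFrame);
```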
FreeVR: honoring the past, looking to the future
William R. Sherman, Daniel Coming, Simon Su
Fifteen years of experience in designing and implementing a VR integration library have produced a wealth of lessons upon which we can further build and improve our capability to write worthwhile virtual reality applications. The FreeVR virtual reality library is a mature library, yet continues to progress and benefit from the insights and requests encountered during application development. We compare FreeVR with the standard provisions of virtual reality integration libraries, and provide an in-depth look at FreeVR itself. We examine what design decisions worked, and which fell short. In particular, we look at how the features of FreeVR serve to restore applications of the past into working condition and aid in providing longevity to newly developed applications.
An industrial approach to design compelling VR and AR experience
Simon Richir, Philippe Fuchs, Domitile Lourdeaux, et al.
The convergence of technologies currently observed in the fields of VR, AR, robotics and consumer electronics reinforces the trend of new applications appearing every day. But when transferring knowledge acquired from research to businesses, research laboratories are often at a loss because of a lack of knowledge of the design and integration processes involved in creating a product at industrial scale. In fact, the innovation approaches that take a good idea from the laboratory to a successful industrial product are often little known to researchers. The objective of this paper is to present the results of the work of several research teams that have finalized a working method for researchers and manufacturers that allows them to design virtual or augmented reality systems and enable their users to enjoy "a compelling VR experience". That approach, called "the I2I method", presents 11 phases, from "Establishing technological and competitive intelligence and industrial property" to "Improvements", through the "Definition of the Behavioral Interface, Virtual Environment and Behavioral Software Assistance". As a result of the experience gained by various research teams, this design approach benefits from contributions from current VR and AR research. Our objective is to validate and continuously move such multidisciplinary design team methods forward.
3D interactive augmented reality-enhanced digital learning systems for mobile devices
Kai-Ten Feng, Po-Hsuan Tseng, Pei-Shuan Chiu, et al.
With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology, and such enhancement can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX on digital learning can be greatly improved with the adoption of the proposed IARL systems.
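The abstract names the VOT component but does not detail its algorithm, so the following is only a generic constant-velocity predictor of the kind such trackers often use to keep a 3-D model anchored on frames where recognition fails; nothing in it is taken from the paper:

```typescript
// Generic constant-velocity tracker sketch (an assumption, not the paper's VOT).
interface Pose2D { x: number; y: number; t: number } // pixels, pixels, seconds

class VelocityPredictor {
  private prev: Pose2D | null = null;
  private vx = 0;
  private vy = 0;

  // Call whenever the pattern recognizer actually finds the target.
  update(obs: Pose2D): void {
    if (this.prev && obs.t > this.prev.t) {
      const dt = obs.t - this.prev.t;
      this.vx = (obs.x - this.prev.x) / dt;
      this.vy = (obs.y - this.prev.y) / dt;
    }
    this.prev = obs;
  }

  // Call on frames where recognition failed, to extrapolate the pose.
  predict(t: number): Pose2D | null {
    if (!this.prev) return null;
    const dt = t - this.prev.t;
    return { x: this.prev.x + this.vx * dt, y: this.prev.y + this.vy * dt, t };
  }
}
```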
Every Picture Tells a Story, Don't It?
Using the computer-driven VR environment to promote experiences of natural world immersion
In December 2011, over 800 people experienced the exhibit <1>:“der”//pattern for a virtual environment, created for the fully immersive CAVE™ at the University of Wisconsin-Madison. This exhibition took my nature-based photographic work and reinterpreted it for virtual reality (VR). Varied responses such as “It’s like a moment of joy,” “I had to see it twice,” or “I’m still thinking about it weeks later” were common. Although an implied goal of my 2D artwork is to create a connection that makes viewers more aware of what it means to be a part of the natural world, these six VR environments opened up an unexpected area of inquiry that my 2D work has not. Even as the experience was mediated by machines, there was a softening at the interface between technology and human sensibility. Somehow, for some people, through the unlikely auspices of a computer-driven environment, the project spoke to a human essence that they connected with in a way that went beyond all expectations and felt completely out of my hands. Other interesting behaviors were noted: in some scenarios some spoke of intense anxiety, acrophobia, claustrophobia, even fear of death when the scene took them underground. These environments were believable enough to cause extreme responses and disorientation for some people; were fun, pleasant and wonder-filled for most; and were liberating, poetic and meditative for many others. The exhibition seemed to promote imaginative skills, creativity, emotional insight, and environmental sensitivity. It also revealed the CAVE™ to be a powerful tool that can encourage uniquely productive experiences. Quite by accident, I watched as these nature-based environments revealed and articulated an essential relationship between the human spirit and the physical world. The CAVE™ is certainly not a natural space, but there is clear potential to explore virtual environments as a path to better and deeper connections between people and nature. We have long regarded contact with nature as restorative, and those poetic reflections of Thoreau and others are now confirmed by research: studies are showing that contact with nature can produce faster, greater recovery from stress and other illnesses, reduction in anger, and an increased sense of well-being. Additionally, I discovered that the novelty of a virtual reality experience can bring new focus and fresh attention to elements of our world that we have grown immune to. The ‘Boletus edulis’ in one scene, for instance, seemed more remarkable and mysterious in VR than it might have if seen in the backyard. A VR environment can be used to create opportunities to experience being in the world differently. Here, visitors can be inside of an egg that is inside of a nest that is held by tree branches over a creek bed in a floating landscape where a light spring snow is falling. We are liberated from the worldly limitations of our body. The question is this: in an anti-natural environment, can immersants in a CAVE™ become more ecologically sympathetic and spiritually connected? Although the exhibit has not yet been put through any formal testing, my observations amount to a remarkable vision of what VR might provide for us as an instrument to expand consciousness and promote wellness. Creating exceptional, transformative experiences may seem like a lofty goal for VR, but that purpose is at the heart of any art-making process.
One’s Colonies: a virtual reality environment of oriental residences
This paper is a statement about my virtual reality environment project, One’s Colonies, and a description of the creative process of the project. I was inspired by the buildings of my hometown in Taiwan, whose architectural style is very different from that of the United States. By analyzing the unique style of dwellings in Taiwan, I want to demonstrate how differences in geography, weather and culture change the appearance of living space. Through this project I want to express the relationship between architectural style and cultural difference, and how the emotional condition or characteristics of residents are affected by their residences.
Mrs. Squandertime
In this paper we discuss Mrs. Squandertime, a real-time, persistent simulation of a virtual character, her living room, and the view from her window, designed to be a wall-size, projected art installation. Through her large picture window, the eponymous Mrs. Squandertime watches the sea: boats, clouds, gulls, the tide going in and out, people on the sea wall. The hundreds of images that compose the view are drawn from historical printed sources. The program that assembles and animates these images is driven by weather, time, and tide data constantly updated from a real physical location. The character herself is rendered photographically in a series of slowly dissolving stills which correspond to the character's current behavior.
Discover Secret Worlds
There's an app for that shirt! Evaluation of augmented reality tracking methods on deformable surfaces for fashion design
Silvia Ruzanka, Ben Chang, Katherine Behar
In this paper we present appARel, a creative research project at the intersection of augmented reality, fashion, and performance art. appARel is a mobile augmented reality application that transforms otherwise ordinary garments with 3D animations and modifications. With appARel, entire fashion collections can be uploaded in a smartphone application, and “new looks” can be downloaded in a software update. The project will culminate in a performance art fashion show, scheduled for March 2013. appARel includes textile designs incorporating fiducial markers, garment designs that combine multiple markers with the human body, and iOS and Android apps that apply different augments, or “looks”, to a garment. We discuss our philosophy for combining computer-generated and physical objects, and share the challenges we encountered in applying fiducial markers to the 3D curvatures of the human body.
Augmented reality: past, present, future
A great opportunity has made it possible to carry out cultural, historical, architectural and social research of great international cultural interest: the realization of a museum whose main theme is the visit and discovery of a monument of great prestige, the monumental building the “Steri” in Palermo. The museum is divided into sub-themes, including one above all that has aroused such international interest that an application has been submitted to include the museum in the cultural heritage of UNESCO: a museum path through the cells of the Inquisition, which are located inside some buildings of the monumental complex. The project as a whole brings together the various competences involved: history, chemistry, architecture, topography, drawing, representation, virtual communication and informatics. The resulting museum will be the sum of the contributions of all these disciplines. This research deals with methodology, implementation, fruition, the virtual museum and its goals, 2D graphic restitution, effects on cultural heritage and the landscape environment, augmented reality, 2D and 3D surveying, touch screens, photogrammetric survey, photographic survey, representation, 3D drawing, and more.
Vroom: designing an augmented environment for remote collaboration in digital cinema production
Todd Margolis, Tracy Cornish
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the “real world”. Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept, enhancing dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting, and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), directional and spatialized audio, gigapixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.
New perspectives and limitations in the use of virtual reality in the rehabilitation of motor disorders
Alessandro De Mauro, Aitor Ardanza, Esther Monge, et al.
Several studies have shown that both virtual and augmented reality are technologies suitable for rehabilitation therapy due to their inherent ability to simulate real daily-life activities while improving patient motivation. In this paper we first present the state of the art in the use of virtual and augmented reality applications for the rehabilitation of motor disorders, and second we focus on the analysis of the results of our project. In particular, the requirements of patients with cerebrovascular accidents, spinal cord injuries and cerebral palsy for the use of virtual and augmented reality systems will be detailed.
Panel Session: Art, Science, and Immersion: Data-Driven Experience
Art, science, and immersion: data-driven experiences
This panel and dialog-paper explores the potentials at the intersection of art, science, immersion and highly dimensional, “big” data to create new forms of engagement, insight and cultural forms. We will address questions such as: “What kinds of research questions can be identified at the intersection of art + science + immersive environments that can’t be expressed otherwise?” “How is art+science+immersion distinct from state-of-the art visualization?” “What does working with immersive environments and visualization offer that other approaches don’t or can’t?” “Where does immersion fall short?” We will also explore current trends in the application of immersion for gaming, scientific data, entertainment, simulation, social media and other new forms of big data. We ask what expressive, arts-based approaches can contribute to these forms in the broad cultural landscape of immersive technologies.
Viewing Ports of Call
Nomad devices for interactions in immersive virtual environments
Paul George, Andras Kemeny, Frédéric Merienne, et al.
Renault is currently setting up a new CAVE, a five-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4K projectors and two 2K projectors, as well as an additional 3D HD collaborative powerwall. Renault’s CAVE aims at answering the needs of the various vehicle conception steps [1]. Starting from vehicle Design, through the subsequent Engineering steps, Ergonomic evaluation and perceived quality control, Renault has built up a list of use cases and carried out an early software evaluation in the four-sided CAVE of Institut Image, called MOVE. One goal of the project is to study interactions in a CAVE, especially with nomad devices such as an iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a four-sided homemade low-cost virtual reality room, powered by ultra-short-range and standard HD home projectors.
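As an illustration of the multi-touch manipulation style the paper explores, the sketch below maps a two-finger pinch to the scale of a virtual object. It uses the browser's TouchEvent API purely for concreteness; the element id and the forwarding step are assumptions, and this is not the authors' implementation:

```typescript
// Pinch-to-scale sketch for a nomad-device controller (illustrative only).
let startDist = 0;   // finger separation when the pinch began, in pixels
let startScale = 1;  // object scale when the pinch began
let objectScale = 1; // current scale of the manipulated virtual object

function dist(a: Touch, b: Touch): number {
  return Math.hypot(b.clientX - a.clientX, b.clientY - a.clientY);
}

const pad = document.getElementById("touchpad")!; // hypothetical touch surface
pad.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    startDist = dist(e.touches[0], e.touches[1]);
    startScale = objectScale;
  }
});
pad.addEventListener("touchmove", (e: TouchEvent) => {
  if (e.touches.length === 2 && startDist > 0) {
    objectScale = startScale * (dist(e.touches[0], e.touches[1]) / startDist);
    // A CAVE client would forward `objectScale` to the VR application here.
  }
});
```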
Analysis of tactors for wearable simulator feedback: a tactile vest architecture
Current training simulators for police officers and soldiers lack two critical qualities for establishing a compelling sense of immersion within a virtual environment: a strong disincentive to getting shot, and accurate feedback about the bodily location of a shot. This research addresses these issues with a hardware architecture for a Tactical Tactile Training Vest (T3V). In this study, we evaluate the design space of impact “tactors” and present a T3V prototype that can be viscerally felt. The research focuses on determining the optimal design parameters for maximizing tactor hitting energy, since the energy transferred to the projectile directly relates to the quality of the disincentive. The complete T3V design will include an array of these tactors on the front and back of the body to offer accurate spatial feedback. The impact tactor created and tested for this research is an electromagnetic projectile launcher, similar to a solenoid but lower profile and higher energy. Our best tactor produced a projectile energy of approximately 0.08 Joules at an efficiency just above 0.1%. Users in an informal pilot study described the feeling as "surprising," "irritating," and "startling," suggesting that this level of force is approaching our target level of disincentive.
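The two quoted numbers together imply the per-impact energy budget; a one-line derivation using only the values above:

```latex
\[
E_{\mathrm{in}} \;=\; \frac{E_{\mathrm{proj}}}{\eta}
  \;\approx\; \frac{0.08\ \mathrm{J}}{0.001} \;=\; 80\ \mathrm{J}\ \text{per impact},
\]
% which is why tactor efficiency matters for a wearable vest that must
% drive an array of such launchers.
```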
Use of virtual reality to promote hand therapy post-stroke
Daria Tsoupikova, Nikolay Stoykov, Randy Vick, et al.
A novel artistic virtual reality (VR) environment was developed and tested for use as a rehabilitation protocol for post-stroke hand rehabilitation therapy. The system was developed by an interdisciplinary team of engineers, art therapists, occupational therapists, and VR artists to improve patients' motivation and engagement. Specific exercises were developed to explicitly promote the practice of therapeutic tasks requiring hand and arm coordination for upper extremity rehabilitation. Here we describe system design, development, and user testing for efficiency, subject satisfaction and clinical feasibility, and we report results of the completed qualitative, pre-clinical pilot study of the system's effectiveness for therapy. Fourteen stroke survivors with chronic hemiparesis participated in a single training session within the environment to gauge user response to the protocol through a custom survey. Results indicate that users found the system comfortable, enjoyable and tiring, found the instructions clear, and reported a high level of satisfaction with the VR environment and with the variety and difficulty of the rehabilitation tasks. Most patients reported very positive impressions of the VR environment and rated it highly, appreciating its engagement and motivation. We are currently conducting a 6-week longitudinal intervention study in stroke survivors with chronic hemiparesis. Initial results from the first subjects demonstrate that the system is operational and can facilitate therapy for post-stroke patients with upper extremity impairment.
Collaborative imaging of urban forest dynamics: augmenting re-photography to visualize changes over time
Ruth West, Abby Halley, Jarlath O’Neil-Dunne, et al.
The ecological sciences face the challenge of making measurements that detect subtle changes, sometimes over large areas, across varied temporal scales. The challenge is thus to measure patterns of slow, subtle change occurring along multiple spatial and temporal scales, and then to visualize those changes in a way that makes important variations visceral to the observer. Imaging plays an important role in ecological measurement, but existing techniques often rely on approaches that are limited with respect to their spatial resolution, view angle, and/or temporal resolution. Furthermore, integrating imagery acquired through different modalities is often difficult, if not impossible. This research envisions a community-based, participatory approach built around augmented rephotography of ecosystems. We show a case study for the purpose of monitoring the urban tree canopy. The goal is to explore, for a set of urban locations, the integration of ground-level rephotography with available LiDAR data, and to create a dynamic view of the urban forest and its changes across various spatial and temporal scales. This case study gives the opportunity to explore various augments to improve the ground-level image capture process, protocols to support 3D inference from the contributed photography, and both in-situ and web-based visualizations of change over time.
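One common rephotography aid, sketched below under our own assumptions (element ids, blend factor), is ghosting the reference photograph over the live camera view so a contributor can line up the new shot; the paper's actual capture augments may differ:

```typescript
// Ghosted-overlay alignment aid for rephotography (illustrative sketch).
const video = document.getElementById("camera") as HTMLVideoElement;
const reference = document.getElementById("refPhoto") as HTMLImageElement;
const canvas = document.getElementById("view") as HTMLCanvasElement;
const ctx2d = canvas.getContext("2d")!;

function drawAlignmentAid(): void {
  ctx2d.drawImage(video, 0, 0, canvas.width, canvas.height); // live camera view
  ctx2d.globalAlpha = 0.4; // semi-transparent historical reference on top
  ctx2d.drawImage(reference, 0, 0, canvas.width, canvas.height);
  ctx2d.globalAlpha = 1.0;
  requestAnimationFrame(drawAlignmentAid);
}

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  void video.play();
  requestAnimationFrame(drawAlignmentAid);
});
```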
Interactive Paper Session
Integral virtual display for long distance view
When driving a vehicle, if the information needed for driving can be displayed as in Augmented Reality (AR), the driver can operate the vehicle more effectively. For example, it is useful to display images indicating the intersection where the vehicle should turn next. Unlike conventional AR systems such as see-through head-mounted displays, which cover a large viewing field, the head-up display (HUD) currently used in vehicles to show the speedometer, tachometer and so on can present optical virtual images at a long viewing distance. It is difficult, however, to apply a HUD to AR because of its narrow viewing field. To overcome this, we propose a system in which the HUD is divided into many small optical systems. A convex lens array and elemental images, similar to those of Integral Photography (IP), are applied; one elemental image corresponds to, and sits in front of, each lens, which generates the virtual image. In this paper, a theoretical formula for the positional relations among the elemental images is solved to create continuous virtual images. Moreover, we simulated the system with a ray-tracing method.
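The paper's positioning formula is not reproduced in the abstract; the standard thin-lens relation (textbook optics, not taken from the paper) already shows why an elemental image placed just inside the focal length yields a distant virtual image:

```latex
% Thin lens: a = elemental-image distance, f = focal length, s_i = image distance.
\[
\frac{1}{s_i} \;=\; \frac{1}{f} - \frac{1}{a}
\quad\Longrightarrow\quad
|s_i| \;=\; \frac{a f}{f - a} \qquad (a < f),
\]
% so as a approaches f, the upright virtual image recedes toward infinity,
% giving a long-distance view from a compact lens-array package.
```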