
Proceedings Paper

Becoming Dragon: a mixed reality durational performance in Second Life
Author(s): Micha Cárdenas; Christopher Head; Todd Margolis; Kael Greco

Paper Abstract

The goal for Becoming Dragon was to develop a working, immersive Mixed Reality system, using a motion capture system and a head-mounted display to control a character in Second Life, a Massively Multiplayer Online 3D environment, in order to examine a number of questions regarding identity, gender and the transformative potential of technology. The performance was accomplished through a collaboration between Micha Cárdenas, the performer and technical director, Christopher Head, Kael Greco, Benjamin Lotan, Anna Storelli and Elle Mehrmand. The plan for this project was to model the performer's physical environment so that they could live in the virtual environment for extended periods of time, using an approach of Mixed Reality in which the physical world is mapped into the virtual.

I remain critical of the concept of Mixed Reality, as it presents realities as totalities and as objective essences independent of interpretation through the symbolic order. Part of my goal with this project is to explore identity as a process of social feedback, in the sense that Donna Haraway describes "becoming with" [iii], as well as to explore the concept of Reality Spectrum that Augmentology.com discusses, thinking about states such as AFK (Away From Keyboard) that are in between virtual and corporeal presence [iv]. Both of these ideas are ways of overcoming the dualisms of mind/body, real/virtual and self/other that have long been a problematic part of thinking about technology. Toward thinking beyond these binaries, Anna Munster offers a concept of the enfolding of body and technology [v], building on Gilles Deleuze's notion of the baroque fold. She writes that "the superfold... opens up for us a twisted topology of code folding back upon itself without determinate start or end points: we now live in a time and space in which body and information are thoroughly imbricated" [vi]. She elaborates on this notion of body and code becoming with each other: "the incorporeal vectors of digital information draw out the capacities of our bodies to become other than matter conceived as a mere vessel for consciousness or a substrate for signal... we may also conceive of these experiences as a new territory made possible by the fact that our bodies are immanently open to these kinds of technically symbiotic transformations" [vii].

A number of the technologies used in this performance were chosen in an attempt to blur the line between the actual and the digital, such as motion capture, live video streaming into Second Life and 3D fabrication of physical copies of Second Life avatars. The performance was developed using the following components:

- An eMagin Z800 immersive head-mounted display (HMD), which allowed the performer to move around in the physical environment within Calit2 and still remain "in game". Head tracking and stereoscopic imagery help to provide a realistic feeling of immersion. We built on the University of Michigan 3D (UM3D) lab's stereoscopic patch for the Second Life client, updating it to work with the latest version of Second Life.

- A motion tracking system. A Vicon MX40+ motion capture system was installed in the Visiting Artist Lab at CRCA, which served as the physical performance space, sending real-time motion tracking data to a PC running Windows. The plan was to use this data to map physical motion in the real world back into game space, so that, for example, the performer could easily reach their food source or the restroom. We developed a C++ bridge, including a parser for the Vicon real-time data stream, that communicates this data to the Second Life server to produce changes in avatar and object positions based on real physical movement. The goal was to get complete body gestures into Second Life in near real time.

- A Puredata patch called Lila, developed by Shahrokh Yadegari of UCSD, which was used to modulate the performer's voice, providing a voice for Second Life chat that was less gendered and less human.

Paper Details

Date Published: 27 January 2009
PDF: 13 pages
Proc. SPIE 7238, The Engineering Reality of Virtual Reality 2009, 723807 (27 January 2009); doi: 10.1117/12.806260
Author Affiliations
Micha Cárdenas, Univ. of California, San Diego (United States)
Christopher Head, Univ. of California, San Diego (United States)
Todd Margolis, Univ. of California, San Diego (United States)
Kael Greco, Univ. of California, San Diego (United States)


Published in SPIE Proceedings Vol. 7238:
The Engineering Reality of Virtual Reality 2009
Ian E. McDowall; Margaret Dolinsky, Editor(s)

© SPIE.