Fast projection mapping onto a moving planar surface
Video projection onto real-world surfaces, known as projection mapping, has been attracting much attention as a key technology for advanced display-based user experiences and for new digital-media art. These projection systems create futuristic impressions once they are successfully installed, but the installation process, which includes geometric and photometric calibration, remains tedious. Furthermore, if the system must adapt to a moving object, installation becomes even more difficult.
Recently, considerable efforts have been made to precisely align projected images on a moving surface using projector-camera systems.1–3 Most of these existing systems, however, suffer from low frame rates, which cause severe display latency. Physically steering the optical axis of a coaxially arranged camera-projector pair has also been proposed as a way to enable fast tracking projection.4 Unfortunately, this tracking control is limited to pan and tilt motions.
In our approach,5 we introduce a hardware-accelerated projection system that quickly adapts a projected image to a planar surface undergoing arbitrary 3D motion, so that the image appears to stick to the surface. We achieve this quick adaptation by applying a 2D projective transformation to the image at a maximum rate of 1300 frames per second.
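Concretely, a 2D projective transformation (a homography) maps image coordinates through a 3×3 matrix followed by a perspective divide. The following minimal NumPy sketch illustrates the mapping; the function name and the example matrix are ours, chosen for illustration only.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 projective transformation (homography).

    Points are lifted to homogeneous coordinates, multiplied by H, and
    divided by the resulting third coordinate (the perspective divide).
    """
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3)
    mapped = homog @ H.T                                  # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]

# A pure translation by (10, 5), expressed as a homography:
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
mapped = apply_homography(H, [[0.0, 0.0], [2.0, 3.0]])
```

A general homography additionally encodes rotation, scaling, shear, and perspective foreshortening, which is why a single 3×3 matrix suffices to pin an image onto an arbitrarily tilted plane.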
To enable this quick adaptation, we have developed a projective transformation engine on a field-programmable gate array embedded within a digital light processing projector. In our approach, the supply of video content is decoupled from the geometric transformation, which is performed rapidly in hardware. For tracking, the PC connected to the projector need only send, for each frame, a 3×3 matrix that specifies the 2D projective transformation. Compared with a more straightforward approach,6 in which a projector accepts a high-speed video sequence as input, our system takes the computational load off the external PC.
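The per-pixel operation such an engine performs can be sketched in software as an inverse-mapping warp: for each destination pixel, the inverse homography gives the source pixel to sample. The sketch below is purely illustrative; the function name and the nearest-neighbor sampling are our assumptions, not the actual hardware design.

```python
import numpy as np

def warp_image(src, H, out_shape):
    """Inverse-map warp: for each destination pixel (x, y), look up the
    source pixel at H^-1 @ (x, y, 1) with nearest-neighbor sampling.
    Destination pixels that map outside the source image stay black.
    """
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    homog = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    src_pts = homog @ Hinv.T
    src_pts = src_pts[:, :2] / src_pts[:, 2:3]   # perspective divide
    sx = np.round(src_pts[:, 0]).astype(int)
    sy = np.round(src_pts[:, 1]).astype(int)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out = np.zeros((h, w), dtype=src.dtype)
    out.reshape(-1)[valid] = src[sy[valid], sx[valid]]
    return out
```

Because only the 3×3 matrix changes from frame to frame while the source image stays fixed, updating the display reduces to re-running this warp, which is exactly the workload that lends itself to hardware acceleration.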
Our system (see Figure 1) consists of a USB 3.0 camera that captures images at 450 frames per second, a notebook PC, and the high-speed projector (connected to the PC by a USB 2.0 cable). Figure 2 shows a snapshot obtained with the camera during a tracking projection experiment, in which the logo of Tohoku University is projected so that it stays aligned at the center of a white rectangular surface. The results of this experiment are also presented in a video that is available on the Internet.7
To accomplish tracking projection with an uncalibrated projector-camera pair, our system visually tracks both the object surface and the projected content region. We track the surface quadrangle in the camera image by robustly fitting lines to its four edges. The projected content region is then tracked by iteratively minimizing the sum of differences of pixel intensities. These tracking results yield 2D projective mappings between the surface coordinate frame and the camera image, and between the projector image and the camera image, which allows us to align the projector image to the surface coordinate frame without any calibration information.
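The alignment step above amounts to chaining the two estimated mappings: if one homography takes surface coordinates to camera coordinates and another takes projector coordinates to camera coordinates, their composition through the camera frame takes surface coordinates directly into the projector image, with no calibrated intrinsics or extrinsics involved. A minimal sketch (the function and variable names are ours):

```python
import numpy as np

def surface_to_projector(H_surf_to_cam, H_proj_to_cam):
    """Compose the two tracked homographies so that content defined in
    surface coordinates lands on the correct projector pixels.

    Going surface -> camera -> projector means applying H_surf_to_cam,
    then the inverse of H_proj_to_cam.
    """
    return np.linalg.inv(H_proj_to_cam) @ H_surf_to_cam
```

Note that if the two tracked homographies coincide, the composition is the identity, as expected: the projector image already sits exactly on the surface frame.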
In summary, we have developed a system and a method for fast tracking projection onto a planar surface. Our technique handles full projective transformations and does not depend on camera-projector calibration. In future work we will attempt to overcome some limitations of our current implementation. For example, we need to extend the visual tracking to handle moving pictures, so that we can project movies instead of still images. Enhancing the quality of the projected image by improving the hardware implementation is another important issue that we will address.
Shingo Kagami received his PhD in mathematical engineering and information physics from the University of Tokyo, Japan, in 2003 and is currently an associate professor. His research interests include systems, architectures, and algorithms for high-speed vision processing and real-time sensory information processing.