
Electronic Imaging & Signal Processing

Jitter-Camera provides super-resolution video with controlled sub-pixel detector shifts

Resolution is improved by controlled shifting of the image detector between frames, and by applying an adaptive super-resolution algorithm
20 April 2006, SPIE Newsroom. DOI: 10.1117/2.1200603.0174

Video cameras must produce images at a reasonable frame rate and with a reasonable depth of field. These requirements impose fundamental physical limits on the spatial resolution of the image detector. As a result, current cameras produce video with a very low resolution. This can be improved by a computational technique called super resolution, which exploits the inter-frame differences caused by camera motion.

However, a moving camera introduces motion blur, which limits super-resolution quality. In this work, we address the problem of motion blur with a novel device we call the Jitter-Camera, which captures images in a special way that avoids motion blur while providing optimal input for a super-resolution algorithm. In addition, we address the problem of applying super resolution to dynamic scenes with a novel adaptive algorithm.

There is a large body of work on resolution enhancement using super-resolution reconstruction.1–6 These algorithms typically assume that a set of displaced images is given as input. With a video camera, the displacements can be obtained by moving the camera while capturing the images, but the camera motion introduces additional blur. This degrades the image, as shown in Figure 1.
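The degradation caused by camera motion can be modeled as convolution with a blur kernel along the motion path. A minimal sketch of this degradation model, assuming uniform horizontal translation during the integration time (the box kernel and function name are illustrative, not from the article):

```python
import numpy as np

def motion_blur_1d(frame, blur_len):
    """Blur a frame horizontally with a box kernel of length blur_len,
    modeling uniform camera translation during the integration time."""
    kernel = np.ones(blur_len) / blur_len
    # Convolve each row independently; 'same' keeps the frame size.
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, frame)

# A sharp vertical edge...
frame = np.zeros((4, 8))
frame[:, 4:] = 1.0
# ...is smeared across several pixels by the simulated motion blur.
blurred = motion_blur_1d(frame, 3)
```

Super-resolution must undo this convolution on top of the sampling, which is why even a known blur limits the achievable enhancement.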


Figure 1. Motion blur limits image enhancement using super-resolution reconstruction (results generated with a known simulated motion blur).
 

To avoid this problem, our camera uses controlled sub-pixel shifts of the detector. We use very precise actuators to move the detector by exactly a half pixel between frames. Using a hardware controller and camera trigger, we can ensure that these shifts occur only between integration times and that the detector is motionless otherwise. We thus provide images with the required displacements without adding motion blur.
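The capture scheme can be illustrated with a small simulation in which the scene is represented at twice the detector resolution, so a half-detector-pixel shift corresponds to one scene pixel. All names and the 2x2 box-integration model of the pixel footprint are assumptions of this sketch, not a description of the prototype's optics:

```python
import numpy as np

def jitter_capture(scene, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Simulate the Jitter-Camera: for each half-pixel detector shift
    (one scene pixel here, since the scene is sampled at twice the
    detector resolution), integrate 2x2 scene pixels per detector
    pixel. The detector is motionless during each integration, so the
    frames are displaced but contain no motion blur."""
    frames = []
    for dy, dx in shifts:
        shifted = np.roll(scene, (-dy, -dx), axis=(0, 1))
        h, w = shifted.shape
        # 2x2 box integration models the detector-pixel footprint.
        frame = shifted[:h - h % 2, :w - w % 2]
        frame = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        frames.append(frame)
    return frames

scene = np.arange(64, dtype=float).reshape(8, 8)
frames = jitter_capture(scene)  # four shifted low-resolution frames
```

Each returned frame samples the scene at a distinct half-pixel offset, which is exactly the input a super-resolution algorithm needs.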

Figure 2 shows the Jitter-Camera prototype. The camera is connected to a computer using a standard FireWire interface, and appears to be a regular FireWire camera.


Figure 2. Left: The Jitter-Camera prototype with its cover open. Two mechanical micro-actuators shift the board camera; the actuators and the camera are synchronized so that the camera is motionless during integration time. Right: The Jitter-Camera is operated by two stand-alone controllers, each driving one translation stage. The controllers are synchronized through their digital I/O ports, and a digital I/O line also triggers the camera, resulting in a fully synchronized system.
 

To handle dynamic scenes, our adaptive super-resolution algorithm first divides each frame into blocks (we use MPEG-sized blocks to allow future use of hardware for block-motion estimation). The blocks are then classified into three classes: stationary, moving, and moving with occlusions. Finally, the images are warped and super-resolution reconstruction is performed.
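The classification step might be sketched as simple block matching against the previous frame: a low zero-motion residual means the block is stationary, a good match elsewhere in a small search window means pure motion, and a poor best match suggests occlusion. The thresholds and search range below are illustrative choices, not values from the article:

```python
import numpy as np

def classify_block(frame_prev, frame_t, y, x, bs=16, search=4,
                   still_thresh=2.0, match_thresh=8.0):
    """Classify one bs x bs block via block matching against the
    previous frame. Returns 'stationary', 'moving', or
    'moving+occlusion'. Thresholds and search range are illustrative."""
    block = frame_t[y:y + bs, x:x + bs]
    # Zero-motion residual first: a cheap test for stationary blocks.
    if np.abs(block - frame_prev[y:y + bs, x:x + bs]).mean() < still_thresh:
        return "stationary"
    # Search a small window for the best-matching displaced block.
    best = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or yy + bs > frame_prev.shape[0]
                    or xx + bs > frame_prev.shape[1]):
                continue
            cand = frame_prev[yy:yy + bs, xx:xx + bs]
            best = min(best, np.abs(block - cand).mean())
    # A good match means pure motion; a poor one suggests occlusion.
    return "moving" if best < match_thresh else "moving+occlusion"
```

A real implementation would search both forward and backward in time, as described below, but the per-block decision has this general shape.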

For stationary blocks we use all available frames in the time window for super resolution. For moving blocks, we search forward and backward in time for blocks that match, and use as many as we can find within the given time window. Blocks with motion and occlusion cannot be reconstructed using multiple-image super resolution, and are therefore interpolated. Figure 3 shows the Jitter-Camera super-resolution results for stationary text, and Figure 4 shows the results of the adaptive super-resolution algorithm on a dynamic scene.
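For a stationary block, four frames captured at half-pixel shifts populate a sampling grid at twice the resolution. A schematic sketch of that sample placement follows; a real reconstruction must also deconvolve the detector-pixel footprint, so this shows only the interleaving that the jitter makes possible (function and argument names are hypothetical):

```python
import numpy as np

def interleave_2x(f00, f01, f10, f11):
    """Ideal 2x grid placement for a stationary block: four frames
    captured at detector shifts of (0,0), (0,half), (half,0), and
    (half,half) pixels are interleaved onto the finer 2x grid."""
    h, w = f00.shape
    hi = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    hi[0::2, 0::2] = f00  # unshifted samples
    hi[0::2, 1::2] = f01  # shifted half a pixel right
    hi[1::2, 0::2] = f10  # shifted half a pixel down
    hi[1::2, 1::2] = f11  # shifted both ways
    return hi
```

Because the Jitter-Camera guarantees exact half-pixel displacements, every position on the finer grid receives a genuine measurement rather than an interpolated value.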


Figure 3. The improvement in the super-resolution result for stationary input is very clear. It is possible to read the text in the super-resolution reconstructed image, while most of the original image is indecipherable. Note that demosaicing color artifacts were corrected as a byproduct of the super-resolution reconstruction algorithm.
 

Figure 4. The enhancement of different regions of a dynamic scene can vary due to motion and occlusions. Compare the results for the woman's left arm (moving and occluding) to the child's arm (stationary) and the man's arm (moving but not occluding).
 

Here, we have seen that motion blur significantly degrades super-resolution results. The proposed solution is the Jitter-Camera, a video camera that captures images optimal for super-resolution reconstruction without introducing motion blur. Recent advances may make it possible to embed the jitter mechanism inside the detector chip. Jittering could then be added to regular video cameras as an option that significantly increases spatial resolution while keeping other factors, such as frame rate, unchanged. For detailed information about the Jitter-Camera and the adaptive super-resolution algorithm, the interested reader is referred to Refs. 7 and 8.

This research was conducted at the Columbia Vision and Graphics Center in the Computer Science Department at Columbia University. It was funded in part by an ONR contract (N00014-03-1-0023) and an NSF ITR grant (IIS-00-85864).


Authors
Moshe Ben-Ezra
Siemens Corporate Research
Princeton, NJ
Assaf Zomet
Human Eyes Technologies
Jerusalem, Israel
 
Shree Nayar
Columbia University
New York, NY