Riding the range

Whether applied in a camera or during post-processing, high dynamic range imaging delivers images that are truer to life.
27 January 2007
Kristin Lewotsky
Nearly all of us have had the experience: you're on vacation, maybe, standing outdoors and photographing the interior of a room that you can see perfectly well. When you look at the image, however, the detail inside is blacked out because of the limited dynamic range of the camera.
This is an issue that has plagued professional and amateur photographers alike, until the advent of high dynamic range (HDR) imaging, that is. HDR imaging means producing an image with a broader dynamic range so that light and dark areas can be imaged in more realistic detail.
Imagine the inside of a room. A shadowy area, perhaps under a table, has a light-meter reading of one. A mid-tone level in the same room is probably closer to 100, while a spot where sunlight hits a surface directly is nearer 10,000, and the sun itself can reach 10 million. An HDR image attempts to discriminate among all these values to yield a final image that more accurately represents real-world appearances.
Must be complicated, right? Not really. The easiest method simply requires imaging the same scene five to seven times, using a standard camera, at exposure times ranging from, say, 0.25 s down to a millisecond. This suite of over-exposed to under-exposed images provides the low-dynamic range input data for the software that will create the HDR image.
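For readers who want to try this, the merge step can be reproduced with off-the-shelf tools. The following is a minimal sketch, assuming OpenCV's photo module and hypothetical file names and exposure times: it recovers the camera's response curve and fuses the bracketed frames into a single floating-point radiance map.

```python
# Minimal sketch of merging an exposure-bracketed series into an HDR
# radiance map with OpenCV. The file names and exposure times below are
# hypothetical; any aligned, tripod-shot bracket of the same scene works.
import cv2
import numpy as np

files = ["exp_0.25s.jpg", "exp_0.03s.jpg", "exp_0.004s.jpg", "exp_0.001s.jpg"]
times = np.array([0.25, 0.03, 0.004, 0.001], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera's response curve from the bracket, then fuse the
# low-dynamic-range frames into one floating-point HDR image
# (the Debevec-Malik approach implemented in OpenCV's photo module).
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)

merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Radiance .hdr output preserves the full range for later tone mapping.
cv2.imwrite("scene.hdr", hdr)
```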

Limits of Lenses
The HDR data comes into play primarily in small areas of high contrast where a traditionally exposed image would fall into detector noise. In other words, the HDR image provides a master set of data from which useful subsets can be taken: an image of a cave that shows the paintings inside, for example, or the interior of a cathedral with light streaming in through the stained-glass windows.
It's not a completely accurate rendering of the scene, however, because of the effects of veiling glare. "Lenses have a fair amount of flare," says former longtime Polaroid scientist John McCann of McCann Imaging (Belmont, MA). "When you follow the ISO standards for measuring veiling glare, it takes an extraordinarily good lens to get it to less than 2% in a camera." An eight-element lens, for example, has 16 surfaces that contribute to glare.
"There's nothing the matter with HDR imaging, but there is a limit that's imposed by the lens of the camera and the lens of the eye, and that turns out to be smaller than some people think," says McCann. "Occasionally, you'll find somebody who claims they've captured 6 log units of dynamic range in a scene when in fact it may be 2 log units if there's a lot of white in the scene and 4 log units if it's a very dark scene. Veiling glare is an image-dependent distortion of in-camera scene radiance estimates."
He is quick to note that HDR imaging is a valuable technique, however. "There's a lot of benefit from HDR imaging, but it works not because you have an accurate rendition of the world, it works because the HDR image gives better quantization of the data and makes images that have better detail separation."
Although it would seem that the HDR files must be enormous, that isn't the case, says HDR pioneer Paul Debevec, professor at the University of Southern California (Los Angeles, CA). "For most of the applications we're talking about (visual effects, medical imaging, or automotive rendering), people tend to use uncompressed image file formats or lossless compression, so the actual additional information necessary for an HDR image is really a modest increase."
Different Approaches
An image with pixel values spanning the equivalent of 0 to 1 million is all well and good, but most displays and photographic prints operate with values of 0 to 255. How do you make use of the expanded dynamic range? One approach is tone mapping, a post-processing method that converts an HDR image into a regular-dynamic-range image, using the HDR data to bring out detail in very light or very dark areas, though at the price of reduced discrimination elsewhere.
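As an illustration of what a tone mapper does, here is a minimal sketch of a global, Reinhard-style compression curve in Python/NumPy. It assumes a floating-point BGR image such as the merged result above; production tone mappers add local adaptation on top of this basic idea.

```python
# Minimal global tone-mapping sketch: a Reinhard-style L/(1+L) curve
# applied to a floating-point HDR image (BGR channel order, as produced
# by the OpenCV merge above). Real tone mappers add local adaptation,
# but this is the essential compression step.
import numpy as np

def tonemap_global(hdr, key=0.18, eps=1e-6):
    # Luminance from linear RGB (Rec. 709 weights; channels are B, G, R).
    lum = 0.0722 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.2126 * hdr[..., 2]
    # Scale so the scene's log-average luminance lands on a mid-grey "key".
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    # Compress: bright values approach 1 instead of clipping to white.
    compressed = scaled / (1.0 + scaled)
    ratio = compressed / (lum + eps)
    ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
    return (ldr * 255).astype(np.uint8)
```

Calling tonemap_global(hdr) on the merged image yields an 8-bit picture ready for an ordinary display.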

Actual HDR displays are under development at companies such as Brightside Technologies (Vancouver, BC, Canada), which replaces the typical fluorescent backlight of an LCD with an array of individually modulated white-light LEDs. The company reports a contrast ratio of over 200,000:1 and a peak luminance above 3,000 cd/m².
On the image-capture side, other teams are working on more sophisticated approaches that can create an HDR image in a single shot. Jonas Unger of Linköping University (Norrköping, Sweden) is focusing on reprogramming sensor firmware. Shree Nayar of Columbia University (New York, NY) is experimenting with spatially varying optical masks placed over the detector.
Charles Sodini and colleagues at the Massachusetts Institute of Technology (Cambridge, MA) developed a CMOS imager that can adjust pixel response in near-real time. With the help of an ASIC or an FPGA, the detector captures an image, assesses the pixel exposure, then determines the appropriate wide-dynamic-range transfer function to apply to pixel timing.
"It extends the dynamic range from approximately 60 dB to 120 dB-it can acquire a pixel that's a million times brighter than the most dimly lit pixel," says Sodini, who helped found SMaL Camera Technologies (since sold to Cypress Semiconductor) to commercialize the technology. The camera is currently deployed in a Mercedes night vision system.
Beyond automotive, other applications include medical imaging, photography, and cinematography, particularly in the area of visual effects. By photographing a reflective sphere in HDR, filmmakers can record the full range of light illuminating a scene and use it to synthesize matching lighting for computer-generated elements. Thus, a computer graphics character could be inserted in a scene with human actors and be undetectable.
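The sphere trick can be made concrete with a short sketch. Assuming an orthographically photographed mirrored ball saved as a hypothetical mirror_ball.hdr, centered and filling the frame, the code below converts each pixel into a world direction and the radiance arriving from it, which is the raw material an image-based-lighting renderer uses.

```python
# Sketch of reading an HDR photograph of a mirrored sphere (a "light
# probe") and turning each pixel into a world direction plus the radiance
# arriving from that direction. The file name is hypothetical, and an
# orthographic view with the ball centered and filling the frame is assumed.
import cv2
import numpy as np

probe = cv2.imread("mirror_ball.hdr", cv2.IMREAD_UNCHANGED)  # float32 BGR
h, w = probe.shape[:2]

directions, radiances = [], []
for y in range(h):
    for x in range(w):
        # Normalized coordinates on the ball, in [-1, 1].
        u = 2.0 * (x + 0.5) / w - 1.0
        v = 1.0 - 2.0 * (y + 0.5) / h
        r2 = u * u + v * v
        if r2 > 1.0:
            continue  # pixel lies outside the sphere
        n = np.array([u, v, np.sqrt(1.0 - r2)])   # ball surface normal
        i = np.array([0.0, 0.0, -1.0])            # incoming camera ray
        # Reflect the camera ray about the normal: the result is the
        # direction in the scene whose light this pixel recorded.
        d = i - 2.0 * np.dot(i, n) * n
        directions.append(d)
        radiances.append(probe[y, x])
```

Feeding those direction-radiance pairs into a renderer as an environment map is, in essence, how a synthetic character ends up lit by the same light as the live-action plate.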
So next time you're at the movie theater watching a freakily realistic ghoul from outer space, just remember, it could be courtesy of HDR imaging.

Read more about imaging techniques and research in the SPIE Newsroom at spie.org/newsroom. This news website features technical content centered on 13 technical communities, one of which is Electronic Imaging & Signal Processing. Stay up to date on the latest research by bookmarking the site today.

HDR at Electronic Imaging
Want to learn more about high dynamic range imaging? Attend the Perceptual Issues in High-Dynamic Range Imaging session at the IS&T/SPIE Electronic Imaging symposium (28 January to 1 February, 2007; San Jose, CA). Session chair Thrasyvoulos Pappas of Northwestern University (Evanston, IL) has crafted a notable session on the technology.

You'll hear McCann and collaborator Alessandro Rizzi of the University of Milan, Italy, talk further about their work in veiling glare and the limitations it imposes on HDR imaging [paper #6492-41]. Brightside Technologies will provide details on their HDR displays [paper #6492-36]. Other researchers will discuss tone mapping [paper #6492-38], the HDR imaging pipeline [paper #6492-40], and evaluating HDR images [paper #6492-39].
Another good source on the technology is Paul Debevec's home page at www.debevec.org. There you can view some wonderful images of cathedrals and even a video of the Parthenon in Greece, illuminated by simulated sunlight gathered in Los Angeles.

Kristin Lewotsky
Kristin Lewotsky is a technology writer based in Amherst, NH.
