Visible images are typically much sharper and clearer than infrared ones. Yet, infrared images can be very valuable for measuring temperature and identifying potential problems in predictive maintenance and building sciences applications. Thermographers need the data from both and can therefore benefit from a real-time video camera that blends visible and infrared information into a single image.
A camera of this type generally has separate visible-light and infrared optical paths and sensors (see Figure 1). Because the paths are slightly offset, the combined image suffers from parallax errors at the short to moderate distances used in most thermography applications. Although parallax is a function of distance to the target, our research has found that it can be corrected automatically with a new camera design.
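To see why parallax matters most up close, consider a simple pinhole-style estimate: the image offset between the two optical paths scales with the baseline between them and inversely with target distance. The function below is an illustrative sketch, not the camera's actual correction; the baseline, focal length, and pixel pitch values are assumed for the example.

```python
def parallax_shift_px(distance_m, baseline_m=0.02, focal_mm=10.0, pitch_um=25.0):
    """Approximate parallax offset, in infrared pixels, between two
    parallel optical paths for a target at the given distance.

    shift ~ focal_length * baseline / (distance * pixel_pitch)

    All parameter defaults are illustrative assumptions, not the
    specifications of any particular camera.
    """
    focal_m = focal_mm * 1e-3
    pitch_m = pitch_um * 1e-6
    return focal_m * baseline_m / (distance_m * pitch_m)

# The offset shrinks as 1/distance: significant up close,
# negligible at longer ranges.
print(round(parallax_shift_px(0.5), 1))   # roughly 16 px at 0.5 m
print(round(parallax_shift_px(10.0), 1))  # under 1 px at 10 m
```

Because the shift depends only on distance (for a fixed camera geometry), knowing the focus distance is enough to compute and remove it.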
Visible-light cameras generally have a field of view (FOV) twice that of infrared optics. In our research, we used software algorithms to select a portion of the visible image comparable in size to the infrared FOV. The two were then alpha-blended into a single image, producing the spatial detail of a digital camera and the temperature measurement of an infrared camera. The infrared lens has a shallow depth of field that provides an excellent means of determining target distance: because the visible optics remain in focus at all usable distances, when the operator adjusts the infrared focus, the camera automatically selects the matching section of the visible image to blend with the infrared.
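The crop-and-blend step can be sketched in a few lines of numpy. This is a simplified illustration under assumed conditions: the visible FOV is exactly twice the infrared FOV, both frames are float grayscale, and the focus-derived parallax correction arrives as a pixel offset; a real camera pipeline would use proper resampling rather than nearest-neighbor.

```python
import numpy as np

def blend(visible, infrared, alpha=0.5, offset=(0, 0)):
    """Alpha-blend an infrared frame with the matching crop of a
    wider visible frame.

    Assumes the visible FOV is twice the infrared FOV, so the matching
    region is the center half of the visible image, shifted by the
    focus-derived parallax `offset` (rows, cols).
    """
    vh, vw = visible.shape
    ih, iw = infrared.shape
    # Center crop covering half the visible frame, shifted by parallax.
    r0 = vh // 4 + offset[0]
    c0 = vw // 4 + offset[1]
    crop = visible[r0:r0 + vh // 2, c0:c0 + vw // 2]
    # Nearest-neighbor resample of the crop to the infrared resolution.
    rows = np.arange(ih) * crop.shape[0] // ih
    cols = np.arange(iw) * crop.shape[1] // iw
    matched = crop[np.ix_(rows, cols)]
    # alpha=1.0 -> infrared only, alpha=0.0 -> visible only.
    return alpha * infrared + (1.0 - alpha) * matched
```

Varying `alpha` moves the result continuously between the visible-only and infrared-only extremes, which is the control the operator adjusts when choosing a blend level.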
To humans, visible images appear sharper and clearer than infrared images for several reasons. First, visible detector arrays have millions of elements, far more than an infrared detector array. Second, visible images can be displayed in the same colors, shades, and intensities seen by the human eye, so their structure and character are more easily interpreted than those of infrared images.
Finally, visible images are not used to measure temperature and are typically generated with reflected radiation. This can produce sharp contrasts and depict intensity differences: for example, a thin white line can be distinguished when it is next to a thin black line. It is possible to have distinct infrared reflection contrasts when a surface of low emissivity (high infrared reflectance) is next to a surface of high emissivity (low infrared reflectance). However, it is unusual to have surfaces with sharp temperature differences next to each other. Heat transfer between close objects often washes out differences and produces temperature gradients instead.
Effectively combining visible and infrared images offers distinct advantages for thermographers. Figure 1 shows an example of how a combined image improves spatial detail and clarity. It is not possible to see the background detail and embossed writing on the motor mount in the infrared-only image. However, in the blended image, these details can be seen nearly as well as in the visible-only image.
Because the camera matches infrared and visible images pixel for pixel, an operator can identify the location of infrared points of interest on the target by noting where the features are in the blended image. Once the infrared image is in focus, the operator can switch to the visible-light image and read temperatures directly on it. For example, the ‘visible only’ panel of Figure 1 shows the hottest spot annotated on the image at 125.7°F.
Figure 1. Infrared-only, visible-only, and blended images from the camera yield different information for the same scene.
Figure 2 shows the same scene with different amounts of infrared blending. The images were taken in picture-in-picture mode, where the center of the display can be set to infrared-only, visible-only, or a blend of the two.
Figure 2. Picture-in-picture mode shows the center portion of the display with different amounts of blending of infrared information.
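Picture-in-picture compositing is straightforward once the two frames are registered. The sketch below assumes the infrared frame has already been parallax-corrected and resampled to the same size as the visible frame; the window fraction and blend weight are illustrative parameters.

```python
import numpy as np

def picture_in_picture(visible, infrared, alpha=0.5, frac=0.5):
    """Render a picture-in-picture frame: the outer region shows the
    visible image, and a centered window covering `frac` of each
    dimension shows an alpha blend with the (registered, same-size)
    infrared image.

    alpha=1.0 makes the window infrared-only; alpha=0.0, visible-only.
    """
    h, w = visible.shape
    out = visible.astype(float).copy()
    wh, ww = int(h * frac), int(w * frac)
    r0, c0 = (h - wh) // 2, (w - ww) // 2
    win = slice(r0, r0 + wh), slice(c0, c0 + ww)
    out[win] = alpha * infrared[win] + (1.0 - alpha) * visible[win]
    return out
```

Keeping the surround visible-only preserves scene context while the operator studies the blended center.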
Figure 3 shows how the camera can be used to pinpoint the location of a poorly insulated area; on the blended image, it appears as a small blemish. Figure 4 depicts the use of Color-Alarm mode. With this setting, the infrared portion of the image is rendered in grayscale, except in areas where temperatures meet the alarm criteria.
Figure 3. Blending a low-contrast infrared scene with a visible image aids in the identification of an infrared point of interest.
Figure 4. Blending images allows viewing of a color-alarm threshold set at 300°F.
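The color-alarm rendering can be sketched as a per-pixel mask over the radiometric data: pixels below the threshold get a grayscale intensity, and pixels at or above it get a color palette. The red palette below is purely illustrative, not the camera's actual colormap.

```python
import numpy as np

def color_alarm(temps_f, threshold_f=300.0):
    """Render an infrared frame in grayscale, except where the
    temperature meets the alarm threshold, which stays in color.

    `temps_f` is a 2-D array of per-pixel temperatures in deg F.
    Returns an (H, W, 3) RGB image in the 0..1 range. The red
    alarm palette is an illustrative assumption.
    """
    lo, hi = temps_f.min(), temps_f.max()
    norm = (temps_f - lo) / max(hi - lo, 1e-9)     # 0..1 intensity
    rgb = np.stack([norm, norm, norm], axis=-1)    # grayscale base
    alarm = temps_f >= threshold_f                 # boolean alarm mask
    rgb[alarm] = np.stack([norm[alarm],            # red-weighted alarm pixels
                           0.2 * norm[alarm],
                           np.zeros_like(norm[alarm])], axis=-1)
    return rgb
```

Because only the alarm pixels carry color, they stand out immediately against the grayscale surround, which is the point of the mode.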
Figure 5 shows how a laser spot can be used to identify the location of infrared points of interest on a target. At reasonable distances, the laser dot on the target can be seen in the visible image. With the proper percentage of infrared blending, the laser dot can also be seen in the blended version. The operator can then adjust the camera until the laser dot in the blended image matches the infrared spot of interest. The laser dot then marks the point of interest on the target.
Figure 5. These images show the sequence for locating a hot spot with a laser pointer.
Parallax issues have been a hindrance to combining visible and infrared images. But with our novel approach, we automatically solve this problem for cameras that are equipped with visible and infrared optics. Our research has led to the development of a commercial camera design that benefits those working in predictive maintenance and the building sciences. Specifically, it provides greatly improved spatial detail for infrared images and aids in the identification of the exact location of infrared points of interest.
The author wishes to acknowledge the work of the Fluke Thermography Engineering Team that invented and developed this camera. The team was led by Kirk Johnson and Tom McManus and supported by Peter Bergstrom, Brian Bernald, Pierre Chaput, Lee Kantor, Mike Loukusa, Corey Packard, Tim Preble, Eugene Skobov, Justin Sheard, Ed Thiede, and Mike Thorson. The author also wishes to acknowledge Tony Tallman, who developed the software that made these images possible.