Third-generation thermal cameras acquire IR radiation using an array of detectors placed at the focal plane of the imaging system. Unfortunately, these focal-plane-array detectors differ in their responses. This nonuniformity is the main cause of fixed-pattern noise (FPN), which appears in the acquired frames as 'granularity noise.' To model the detected signal, the incident radiation is assumed to be related to the readout signal through a linear relationship defined by two time-varying parameters, i.e., the gain coefficient and the offset; their time dependence accounts for the slow temporal drift that characterizes FPN. Under this model, nonuniformity correction (NUC) amounts to estimating the gain and offset parameters of each detector.
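As a toy illustration of this linear model, the following numpy sketch (array size and noise levels are hypothetical) builds a readout y = g·x + o from a uniform scene and inverts it with the true per-detector parameters:

```python
import numpy as np

# Hypothetical 4x4 detector array: the true irradiance x maps to the
# readout y through per-detector gain g and offset o (y = g*x + o).
rng = np.random.default_rng(0)
x = np.full((4, 4), 100.0)                     # uniform scene irradiance
g = 1.0 + 0.05 * rng.standard_normal((4, 4))   # gain nonuniformity
o = 2.0 * rng.standard_normal((4, 4))          # offset nonuniformity
y = g * x + o                                  # readout with fixed-pattern noise

# NUC inverts the model with the estimated parameters (here the true ones):
x_hat = (y - o) / g
print(np.allclose(x_hat, x))                   # True
```

In practice g and o are unknown and drifting, which is exactly what the calibration techniques below try to estimate.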
Two kinds of NUC techniques are generally used: reference-based and scene-based methods. The former are the best-established NUC approach, in which the gain and offset of each detector are estimated by imaging two blackbodies at two different temperatures. Unfortunately, reference-based techniques present several drawbacks: they require halting normal camera operation to track the temporal FPN drift, they add mechanical complexity to the system, and they require blackbody sources. To overcome these disadvantages, several scene-based NUC techniques have been proposed that rely only on processing the data captured by the imaging system. Scene-based techniques have drawbacks of their own, primarily a heavy computational load, less accurate calibration, and performance that depends on the scene's content.
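The reference-based (two-point) calibration can be sketched as follows. This is an illustrative numpy snippet, not the authors' implementation: L1 and L2 stand for the known radiances of the two blackbodies, and Y1 and Y2 for the mean readout of each detector while staring at them.

```python
import numpy as np

def two_point_nuc(Y1, Y2, L1, L2):
    """Two-point (blackbody) calibration: solve y = g*L + o per detector."""
    g = (Y2 - Y1) / (L2 - L1)   # per-detector gain estimate
    o = Y1 - g * L1             # per-detector offset estimate
    return g, o

# Simulated example with hypothetical values:
rng = np.random.default_rng(1)
g_true = 1.0 + 0.1 * rng.standard_normal((3, 3))
o_true = rng.standard_normal((3, 3))
L1, L2 = 50.0, 150.0
Y1, Y2 = g_true * L1 + o_true, g_true * L2 + o_true

g_est, o_est = two_point_nuc(Y1, Y2, L1, L2)
# Correct a readout of an intermediate radiance (L = 100):
corrected = (g_true * 100.0 + o_true - o_est) / g_est
print(np.allclose(corrected, 100.0))  # True
```

Because the FPN drifts, this calibration has to be repeated periodically, which is why it interrupts normal camera operation.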
Among scene-based techniques, Scribner's least-mean-squares-based algorithm has been widely studied because of its small computational load and excellent NUC capabilities.1 However, these techniques are affected by ghosting, a collateral effect of NUC caused by strong edges in the processed scene. Ghosting seriously affects device sensitivity and degrades the visibility of the acquired IR images: artifacts appear in the processed frames and can remain visible even after the object that generated them has left the sensor's field of view. With Scribner's algorithm, ghosting is generated by inaccuracies in the spatial estimates for the processed image. Because the algorithm employs linear spatial filters, the spatial estimate becomes less accurate close to the image's edge structures.
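The core LMS iteration can be sketched as below. This is a simplified numpy illustration of the idea, with hypothetical step size and scene values: the corrected frame is compared against a linear spatial estimate (here a 3×3 box average), and the per-detector parameters are nudged to reduce the error.

```python
import numpy as np

def local_mean(x):
    # 3x3 box average (wrap-around borders keep the sketch numpy-only);
    # this is the linear spatial filter that causes trouble near edges.
    return sum(np.roll(np.roll(x, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def scribner_step(y, g, o, eta=1e-5):
    """One LMS iteration of a sketch of Scribner's scene-based NUC.

    x_hat is the corrected frame; the desired signal is a local spatial
    mean of x_hat, so the update drives each corrected pixel toward its
    neighbourhood and washes out the fixed pattern over time.
    """
    x_hat = g * y + o
    e = x_hat - local_mean(x_hat)          # error w.r.t. spatial estimate
    return x_hat, g - 2 * eta * e * y, o - 2 * eta * e

# Static uniform scene corrupted by simulated FPN (hypothetical values):
rng = np.random.default_rng(2)
scene = np.full((16, 16), 100.0)
y = (1 + 0.05 * rng.standard_normal(scene.shape)) * scene \
    + 2.0 * rng.standard_normal(scene.shape)
g, o = np.ones_like(y), np.zeros_like(y)
for _ in range(300):
    x_hat, g, o = scribner_step(y, g, o)
print(np.var(x_hat) < np.var(y))           # FPN variance is reduced
```

On a static uniform scene the fixed pattern is averaged away; near a real edge, however, the box average mixes pixels across the edge, which is the estimation inaccuracy that produces ghosting.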
Methods for reducing ghosting artifacts have been proposed. Vera and Torres enhanced Scribner's NUC method by adapting the algorithm's learning rate to the content of the processed scene.2 Zhang and Shi presented a de-ghosting technique where the correction coefficients are updated based on information related to the edges of the scene.3 Recently, two other de-ghosting techniques were proposed, in particular a temporal-statistics4 and a bilateral-filter (BF)-based de-ghosting approach.5
We successfully tested this latter BF technique using real IR sequences characterized by strong edges and hot points, i.e., the main causes of ghosting artifacts in the original Scribner algorithm.5 BF is a nonlinear filter that is commonly known for its capability to smooth images while preserving edges. We replaced the linear filter in Scribner's algorithm with the BF to improve the accuracy of the spatial estimates corresponding to edge structures (see Figure 1).
Figure 1. Block scheme of the proposed nonuniformity correction. A bilateral filter replaces the linear filter of the original Scribner algorithm. For a given pixel (i, j) in the current frame n, y_i,j(n) is the input signal, g_i,j(n) and o_i,j(n) are the fixed-pattern-noise (FPN) correction parameters, e_i,j(n) is the error signal for updating the FPN correction parameters, and x̂_i,j(n) is the desired corrected output signal.
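A brute-force bilateral filter can be sketched as follows (an illustrative numpy implementation, not the one used in the paper; parameter names are our own). The weights combine spatial closeness (sigma_s) and intensity similarity (sigma_r), so averaging does not cross strong intensity edges:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=10.0):
    """Brute-force bilateral filter (illustrative, not optimized)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((H, W))
    norm = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Neighbour plane at offset (dy, dx):
            shifted = pad[radius + dy: radius + dy + H,
                          radius + dx: radius + dx + W]
            # Spatial weight x range (intensity) weight:
            w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                       - (shifted - img)**2 / (2 * sigma_r**2))
            out += w * shifted
            norm += w
    return out / norm

# A sharp step edge survives the filtering (values are hypothetical):
img = np.zeros((6, 8))
img[:, 4:] = 100.0
out = bilateral_filter(img)
print(out[3, 3], out[3, 4])   # both sides stay close to 0 and 100
```

Because pixels across the step differ by far more than sigma_r, their range weight is essentially zero, so the spatial estimate near the edge is built only from same-side pixels, which is what suppresses the ghosting.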
We evaluated the performance of the BF-based NUC algorithm through analysis of real IR high-dynamic-range images, which typically present strong edges such as hot points, separation edges caused by horizon effects, or intensity transitions between sky and ground regions. On these sequences, FPN was simulated according to laboratory measurements.6 We compared the results with those from edge-detection (ED)-based de-ghosting. As the figure of merit we adopted the offset estimation mean square error (MSE): we first calculate the per-pixel error as the difference between the simulated and estimated offsets, then sum the squared errors over all pixels, and finally normalize by the variance of the simulated offset matrix and by the total number of pixels.
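The figure of merit just described reduces to a few lines of numpy (function and variable names are ours):

```python
import numpy as np

def offset_mse(o_true, o_est):
    """Normalized offset-estimation MSE: summed squared per-pixel error,
    divided by the variance of the simulated offset matrix and by the
    total number of pixels."""
    err = o_true - o_est
    return np.sum(err**2) / (np.var(o_true) * o_true.size)

# Sanity checks with hypothetical values:
o_true = np.array([[0.0, 2.0], [0.0, 2.0]])   # variance 1
print(offset_mse(o_true, o_true))              # 0.0 (perfect estimate)
print(offset_mse(o_true, o_true + 1.0))        # 1.0 (uniform unit error)
```

Normalizing by the variance of the simulated offsets makes the metric independent of how strong the injected FPN is, so curves from different simulations are comparable.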
The BF-based de-ghosting outperforms both the original Scribner and the ED methods (see Figure 2, where the MSE is shown on a logarithmic scale to better emphasize the variations of the curves). This conclusion is confirmed by visual inspection of a single frame, extracted from the sequence and corrected using the three different techniques: see Figure 3(a) to (d). The ghosting artifacts inside the dotted red boxes in Figure 3(b) and (c) are significantly mitigated by our proposed BF technique.
Figure 2. Comparison of the offset estimation mean square error (MSE). D×D pixels is the filter dimension, and σr is the standard deviation of the bilateral-filter (BF) Gaussian kernel in the intensity domain. The MSE is lower for our Scribner-BF method. Nr.: Number.
Figure 3. Comparison of the corrected frames. (a) Original reference frame. (b) Scribner's algorithm without de-ghosting. (c) Edge-detection de-ghosting. (d) Bilateral-filter de-ghosting.
We carried out several experiments to test the robustness of the new technique, achieving good results in both indoor and outdoor scenarios. Since the BF preserves edges, the presence of the horizon and of hot objects can be easily managed without generating ghosting artifacts. Our ongoing work aims at making this technique suitable for real-time applications. Our current research focuses on improving the BF to obtain an automatic setting for its parameters (a step the literature identifies as very critical), efficiently implementing the proposed algorithm on a graphics-processing-unit-based system, and assessing the computational load of several fast, recently proposed7,8 approximations of the BF.
Alessandro Rossi, Marco Diani, Giovanni Corsini
Department of Information Engineering, University of Pisa
Alessandro Rossi is a PhD student in remote sensing. His research interests are in digital signal and image processing for IR systems.
Marco Diani is an associate professor. His main research fields are image and signal processing for remote sensing.
Giovanni Corsini is a full professor. His main research fields include signal and image processing for hyperspectral and IR images.
1. D. A. Scribner, K. A. Sarkady, Adaptive retina-like pre-processing for imaging detector arrays, Proc. IEEE Int'l Conf. Neur. Netw. 3, pp. 1955-1960, 1993.
2. E. Vera, S. Torres, Fast adaptive nonuniformity correction for infrared focal-plane array detectors, EURASIP J. Appl. Signal Process. 13, pp. 106-117, 2005.
3. T. Zhang, Y. Shi, Edge-directed adaptive nonuniformity correction for staring infrared plane arrays, Opt. Eng. 45, no. 1, pp. 016402, 2006.
4. A. Rossi, M. Diani, G. Corsini, Temporal statistics de-ghosting for adaptive non-uniformity correction in infrared focal plane arrays, Electron. Lett. 46, no. 5, pp. 348-349, 2010.
5. A. Rossi, M. Diani, G. Corsini, Bilateral filter-based adaptive nonuniformity correction for infrared focal-plane array systems, Opt. Eng. 49, no. 5, pp. 057003, 2010.
6. S. Torres, M. Hayat, Kalman filtering for adaptive nonuniformity correction in infrared focal-plane arrays, J. Opt. Soc. Am. A 20, no. 3, pp. 470-480, 2003.
7. F. Durand, J. Dorsey, Fast bilateral filtering for the display of high-dynamic-range images, Proc. 29th Annu. Conf. Comput. Graph. Interact. Techn., pp. 257-266, 2002.
8. G. Guarnieri, S. Marsi, G. Ramponi, Fast bilateral filter for edge-preserving smoothing, Electron. Lett. 42, no. 7, pp. 396-397, 2006.