Electronic Imaging & Signal Processing

Turbulence mitigation methods for sea scenario imaging

An adapted processing chain enhances targets of interest in camera images that have a non-static background, and can thus enable longer-range classification and identification of ships.
29 November 2016, SPIE Newsroom. DOI: 10.1117/2.1201609.006732

In general, visual and infrared (IR) imagery is degraded by atmospheric turbulence, and this degradation worsens with distance, so it especially hampers long-range observation. Existing processing algorithms for such data are based on the assumption of a static background. Indeed, our standard method provides good results for several land [1] and harbor [2] scenes, but it cannot be used for sea scenarios for two main reasons: the background is not static, and the targets exhibit complex motion. At sea, turbulence can therefore limit the classification and identification of ships (e.g., for security and defense purposes). Although several other groups are working on super-resolution algorithms and turbulence mitigation techniques [3-5], those algorithms also assume a constant background.

In our standard turbulence mitigation method [6], shown in Figure 1(a), we first perform 'global motion compensation' (also known as registration) by estimating the alignment between subsequent image frames and compensating for the differences [7]. After registration, we apply so-called dynamic super-resolution (DSR), in which the aligned images are combined into an enhanced output image [8, 9]. Good alignment is essential at this stage: if the alignment is poor, different parts of the scene are combined. Following DSR, we perform a sharpening step to correct for the blurring caused by averaging turbulence-induced local distortions in the images. We process color images by performing the super-resolution step on the luminance channel and then merging this enhanced luminance image with the colors of the original images [10].
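In simplified form, the standard chain can be sketched as follows. This is a minimal stand-in, not the published implementation: an exhaustive integer-shift search replaces the registration estimator, plain temporal averaging replaces dynamic super-resolution, and an unsharp mask provides the sharpening step. All function names and parameters are illustrative.

```python
import numpy as np

def register_translation(ref, frame, max_shift=8):
    """Find the integer (dy, dx) shift that best aligns `frame` to `ref`
    by exhaustive search (a stand-in for the registration estimator)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(frame, (dy, dx), axis=(0, 1)) - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def box_blur(img, r=1):
    """(2r+1) x (2r+1) mean filter, used here for the unsharp mask."""
    acc = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / (2 * r + 1) ** 2

def mitigate(frames, gain=1.0):
    """Align all frames to the first, average them (temporal filtering in
    place of full dynamic super-resolution), then sharpen the result."""
    ref = frames[0]
    aligned = [np.roll(f, register_translation(ref, f), axis=(0, 1))
               for f in frames]
    fused = np.mean(aligned, axis=0)
    return fused + gain * (fused - box_blur(fused))  # unsharp mask
```

For color input, the same fusion would run on the luminance channel only, with the original chrominance merged back afterwards.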


Figure 1. Flow diagrams for (a) the standard turbulence mitigation method, for low-turbulence situations, and (b) the region of interest (ROI)-based turbulence mitigation method, applicable for sea scenarios. Visualizations for (c) translation-only motion and (d) translation and rotation motion estimation methods are also shown.

With this standard algorithm, the first challenge in dealing with sea scenarios is that it assumes a static background with enough structure for motion estimation. This assumption does not hold for most sea scenarios, where the water contains moving structures such as waves and wakes. In addition, for sea scenarios the aim is usually to improve the imaging of a foreground object of interest rather than the background. The second challenge is accurately estimating the motion of the ship. In the simplest form of motion estimation, we use a translation-only model: see Figure 1(c). This model performs well for static objects and for objects whose apparent motion can be assumed to be approximately linear (i.e., small or directional). We can also select a small region of interest (ROI) on a larger object, within which the motion can be approximated as linear. However, if the movement of the ship is more complex (e.g., due to rolling), the motion cannot be estimated with a translation-only motion model.
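One standard frequency-domain technique for such a translation-only estimate is phase correlation; whether this matches the estimator used in our published chain is not stated here, so the sketch below is an illustration under the assumption of integer-pixel, circularly wrapped shifts:

```python
import numpy as np

def phase_correlate(ref, img):
    """Return the integer (dy, dx) shift that aligns `img` to `ref`,
    found as the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12       # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint correspond to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

In an ROI-based chain this would run on the cropped ROI rather than the full frame; real crops violate the circular-shift assumption, so windowing and subpixel peak interpolation are needed in practice.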

In our most recent work, we therefore propose an adapted processing chain that provides optimal turbulence mitigation for sea scenarios [11]. With this novel approach we can improve the classification and identification range of ships in visual and IR images. When multiple frames of a scene are available, we can also use temporal filtering to produce higher-quality frames and thus improve images that are degraded by turbulence.

To deal with the non-static background, we have added an extra module, ROI selection, to our standard methodology, as shown in Figure 1(b). With this module we select the approximate position of the object in the image that we wish to improve. The ROI can be obtained from another sensor (e.g., a radar), through object detection in the images, or through manual operation (in this work, we used manually selected ROIs). In the next stage of our updated methodology, we replace the global motion compensation with motion compensation based on the ROI estimation. We then apply our standard turbulence mitigation technique to the ship rather than to the background. If most of the structure in the selected ROI is part of the object, we obtain a correct motion estimation (for registration of the object). In addition, for cases where the motion cannot be captured by a translation-only model, we use a new two-step methodology: see Figure 1(d). First, we estimate the local motion (i.e., the motion in certain parts of the image) by determining the optical flow [12] over a given grid on the ship. We then use the random sample consensus (RANSAC) method [13] to extract an ROI motion model (e.g., an affine or projective movement).
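The two-step estimation can be sketched as follows: simple block matching stands in for the optical flow computation, and a minimal RANSAC then fits an affine ROI motion model to the local vectors. Grid positions, patch sizes, and thresholds are illustrative assumptions, not values from our implementation.

```python
import numpy as np

def local_motion(ref, frame, pts, patch=7, search=4):
    """Step 1: per-point displacement by block matching over a small
    search window (a stand-in for dense optical flow on a grid)."""
    h = patch // 2
    vecs = []
    for y, x in pts:
        tpl = ref[y - h:y + h + 1, x - h:x + h + 1]
        best, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                win = frame[y + dy - h:y + dy + h + 1,
                            x + dx - h:x + dx + h + 1]
                err = np.sum((win - tpl) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        vecs.append(best)
    return np.array(vecs, float)

def fit_affine(src, dst):
    """Least-squares 3x2 affine model M with dst ~= [src, 1] @ M."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """Step 2: fit affine models to random 3-point samples, keep the one
    with the most inliers, then refit on all inliers (RANSAC)."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best_inl = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(np.hstack([src, ones]) @ M - dst, axis=1)
        inl = resid < tol
        if best_inl is None or inl.sum() > best_inl.sum():
            best_inl = inl
    return fit_affine(src[best_inl], dst[best_inl]), best_inl
```

The grid points and their local vectors form the correspondences (src, dst = src + vecs) fed to `ransac_affine`; local matches corrupted by waves or occlusion are then rejected as outliers.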

To demonstrate our proposed method, we processed a number of different image sequences, including both ships with translational movement and ships with more complex movement. For example, our results for a small incoming boat are shown in Figure 2(a). For this sequence we used a translation-only model for the motion estimation. Finer details are visible in the processed images than in the original versions; for example, it is almost possible to read the name of the boat in the processed image. In addition, a ghost of a buoy appears in the background of the processed center image. This artifact arises because the motion estimation was conducted on the boat, so the unprocessed colors were merged with the processed (and in this case smeared) luminance of the background. We also present our processing results for an image of a drilling rig in Figure 2(b). Here we again applied translation-only motion estimation, and many more details are visible after processing (e.g., it is possible to read or guess the name plate of the rig). In another example, shown in Figure 2(c), we used translation-only motion estimation to process an image sequence of the Alfa Britannia. As with the previous examples, the name of the ship can be read from a close-up of the processed image (but not of the original).


Figure 2. Three image sequences that illustrate the power of the proposed processing algorithm. (a) Images of a pilot boat. Images used for three different input ROIs are shown in the top row, and the results after processing are shown in the bottom row. (b) Image of a drilling rig obtained from a distance of 13km (original is on the left, processed image is on the right). The license plate reads ‘PL-Q13a-PA.’ (c) The Alfa Britannia ship imaged from a distance of 8.3km (original is on the left, processed image is on the right). Insets show close-ups of the ship's name.

For ships with a more complex motion, we use our two-step approach to estimate the movement. An example of the local motion estimation is shown in Figure 3(a). These local estimates are then used to derive an ROI translation-rotation motion model, as shown in Figure 3(b) and (c). From these results it is apparent that more of the ship's details are visible after processing, but the noise is also amplified. With our approach it is therefore necessary to balance the level of noise reduction against the amount of image sharpening and resolution enhancement.


Figure 3. (a) Illustration of the two-step approach for complex movement estimation. Local motion estimation is marked with yellow arrows, and the estimated ROI translation-rotation motion model is denoted by green arrows. The resulting images obtained from the (b) rotation and (c) affine motion processing.

In summary, we have presented a new software-based turbulence mitigation method for imaging of sea scenarios. Our proposed processing chain improves the object of interest (i.e., a ship) within a selected ROI, rather than improving the background image. After processing, many more details can be seen than in the original images. We have also demonstrated that our approach works well for objects with no apparent movement (e.g., a drilling rig) and for objects with linear movement. Our technique thus allows classification and identification of objects at greater distances. Although we have presented preliminary results for objects with complex movements and for partial occlusion by waves, the motion estimation for these cases still needs further development. We also hope to detect the waves around a boat and to adjust the motion estimation and turbulence mitigation processing accordingly.


Judith Dijk, Klamer Schutte, Robert Nieuwenhuizen
TNO Intelligent Imaging
The Hague, The Netherlands

Judith Dijk is a senior research scientist and has been working at TNO since 2003. Her current research includes image enhancement, image understanding, and electro-optical sensor systems.


References:
1. P. B. W. Schwering, R. A. W. Kemp, K. Schutte, Image enhancement technology research for army applications, Proc. SPIE 8706, p. 87060O, 2013. doi:10.1117/12.2017855
2. A. R. W. Kemp, J. F. de Groot, S. P. van den Broek, D.-J. J. de Lange, J. Dijk, P. B. W. Schwering, Results of optical detection trials in harbour environment, Proc. SPIE 6943, p. 69430Y, 2008. doi:10.1117/12.778185
3. S. C. Park, M. K. Park, M. G. Kang, Super-resolution image reconstruction: a technical overview, IEEE Signal Process. Mag. 20, p. 21-36, 2003.
4. C. S. Huebner, Image enhancement methods for turbulence mitigation and the influence of different color spaces, Proc. SPIE 9641, p. 96410J, 2015. doi:10.1117/12.2196297
5. S. Gladysz, R. B. Gallé, Comparison of image restoration algorithms in the context of horizontal-path imaging, Proc. SPIE 8355, p. 83550X, 2012. doi:10.1117/12.936362
6. A. W. M. van Eekeren, J. Dijk, K. Schutte, Turbulence mitigation methods and their evaluation, Proc. SPIE 9249, p. 92490O, 2014. doi:10.1117/12.2067327
7. T. Q. Pham, M. Bezuijen, L. J. van Vliet, K. Schutte, C. L. L. Hendriks, Performance of optimal registration estimators, Proc. SPIE 5817, p. 133-144, 2005. doi:10.1117/12.603304
8. K. Schutte, D.-J. J. de Lange, S. P. van den Broek, Signal conditioning algorithms for enhanced tactical sensor imagery, Proc. SPIE 5076, p. 92-100, 2003. doi:10.1117/12.487720
9. A. W. M. van Eekeren, K. Schutte, J. Dijk, P. B. W. Schwering, M. van Iersel, N. J. Doelman, Turbulence compensation: an overview, Proc. SPIE 8355, p. 83550Q, 2012. doi:10.1117/12.918544
10. J. Dijk, R. J. M. den Hollander, Image enhancement for noisy color imagery, Proc. SPIE 7113, p. 71131A, 2008. doi:10.1117/12.800274
11. J. Dijk, K. Schutte, R. P. J. Nieuwenhuizen, Turbulence mitigation methods for sea scenarios, Proc. SPIE 9987, p. 99870E, 2016. doi:10.1117/12.2243165
12. B. K. P. Horn, B. G. Schunck, Determining optical flow, Artific. Intell. 17, p. 185-203, 1981.
13. M. A. Fischler, R. C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24, p. 381-395, 1981.