Turbulence mitigation methods for sea scenario imaging
In general, visual and IR imagery is degraded by turbulence caused by atmospheric conditions. Furthermore, this degradation worsens over longer distances and thus especially hampers long-range observations. Existing processing algorithms for such data sets are based on the assumption of a static background. Indeed, our standard method provides good results for several land[1] and harbor[2] scenes, but this approach cannot be used for sea scenarios for two main reasons: the non-static background, and targets that have complex movements. At sea, turbulence can therefore affect the classification and identification of ships (e.g., for security and defense purposes). Although several other groups are working on super-resolution algorithms and turbulence mitigation techniques,[3–5] those algorithms also assume a constant background.
In our standard turbulence mitigation method[6] (see Figure 1(a)), we first perform ‘global motion compensation’ (also known as registration) by estimating the alignment between subsequent image frames and compensating for the alignment differences.[7] After registration, we apply so-called dynamic super-resolution (DSR). In this step, the aligned images are combined into an enhanced output image.[8,9] Good alignment is essential at this stage: if the alignment is poor, different parts of the scene will be combined. Following DSR, we perform a sharpening step to correct for blurring that is caused by averaging the turbulence-induced local distortions in the images. We process color images by performing the super-resolution step on the luminance image, and then merging this enhanced luminance image with the colors of the original images.[10]
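The three steps of this chain can be illustrated with a minimal NumPy sketch. This is not our actual implementation: integer-pixel phase correlation stands in for the registration step, plain frame averaging stands in for DSR, and a fixed unsharp mask stands in for the sharpening step. The function names and the `sharpen` parameter are illustrative only.

```python
import numpy as np

def estimate_shift(ref, frame):
    # Phase correlation: the peak of the inverse-transformed cross-power
    # spectrum gives the translation of `frame` relative to `ref`.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular peak positions to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def box_blur(img, r=1):
    # (2r+1) x (2r+1) moving average with circular boundary handling.
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def mitigate(frames, sharpen=0.5):
    # 1. Register every frame to the first one and compensate the shift.
    ref = frames[0]
    aligned = [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in frames]
    # 2. Fuse the aligned frames (plain averaging stands in for DSR).
    fused = np.mean(aligned, axis=0)
    # 3. Unsharp mask to counter the blur introduced by averaging.
    return fused + sharpen * (fused - box_blur(fused))
```

With exact circular shifts this registration recovers the motion perfectly; real imagery requires subpixel-accurate estimation and proper boundary handling.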

With this standard algorithm, the first challenge in dealing with sea scenarios is that the method relies on a static background that contains enough structure for motion estimation. This assumption fails for most sea scenarios, i.e., when the water contains moving structures such as waves and wakes. In addition, for sea scenarios the aim is usually to improve the imaging of a foreground object of interest rather than the background. The second challenge is accurately estimating the motion of the ship. In the simplest form of motion estimation, we use a translation-only model: see Figure 1(c). This model performs well for static objects and for objects whose apparent motion can be assumed to be approximately linear (i.e., small or directional). We can also select a small region of interest (ROI) on a larger object, for which we can approximate the motion as linear. However, if the movement of the ship is more complex (e.g., due to rolling), the motion cannot be estimated with a translation-only motion model.
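To make the translation-only model concrete, the following sketch estimates a single (dy, dx) shift restricted to a selected ROI by brute-force block matching. The name `roi_shift` and its parameters are hypothetical, and a real system would use subpixel-accurate estimation rather than an integer search.

```python
import numpy as np

def roi_shift(ref, frame, roi, max_d=5):
    # Translation-only motion estimate restricted to a region of interest.
    # roi = (y0, y1, x0, x1); returns the integer (dy, dx) that minimizes
    # the sum of squared differences between the ROI of `ref` and the
    # correspondingly shifted window in `frame`.
    y0, y1, x0, x1 = roi
    patch = ref[y0:y1, x0:x1]
    best, best_cost = None, np.inf
    for dy in range(-max_d, max_d + 1):
        for dx in range(-max_d, max_d + 1):
            cand = frame[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            cost = np.sum((cand - patch) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

Because the search only looks inside the ROI, moving water outside it does not disturb the estimate, provided most of the structure inside the ROI belongs to the object.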
In our most recent work, we therefore propose an adapted processing chain that provides optimal turbulence mitigation for sea scenarios.[11] With this novel approach we can thus improve the classification and identification range of ships in visual and IR images. When multiple frames of a scene are available, we can also use temporal filtering to produce higher-quality frames and thus improve images that are degraded by turbulence.
To deal with the non-static background problem, we have added an extra module (ROI selection) to our standard methodology, as shown in Figure 1(b). With this module we can select the approximate position of the object in the image that we wish to improve. The ROI can be selected by using another sensor (e.g., a radar), through object detection in the images, or through manual operation (in this work, we have used manually selected ROIs). In the next stage of our updated methodology, we replace the global motion compensation with motion compensation that is based on the ROI estimation. We then follow our standard turbulence mitigation technique for the ship rather than for the background. If most of the structure in the selected ROI is part of the object, we will obtain a correct motion estimation (for registration of the object). In addition, for cases where motion estimation cannot be achieved with a translation-only model, we use a new two-step methodology: see Figure 1(d). First, we estimate the local motion (i.e., motion in certain parts of the image) by determining the optical flow[12] over a given grid on the ship. We then use the random sample consensus (RANSAC) method[13] to extract an ROI motion model (e.g., an affine or projective movement).
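The second step of this two-step methodology (fitting a global ROI motion model to local motion vectors with RANSAC) can be sketched as follows. This is a generic illustration, not our production code: it assumes the optical-flow step has already produced point correspondences on a grid, and the names `fit_affine`, `apply_affine`, and `ransac_affine` are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine model A (2x3) such that dst ~ A @ [x, y, 1].
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

def apply_affine(A, pts):
    return pts @ A[:, :2].T + A[:, 2]

def ransac_affine(src, dst, iters=200, tol=1.0, rng=None):
    # RANSAC: repeatedly fit an affine model to a minimal 3-point sample,
    # count how many correspondences it explains within `tol` pixels,
    # and refit on the largest inlier set found.
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(A, src) - dst, axis=1)
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers])
```

The minimal-sample strategy is what makes the fit robust: flow vectors that land on waves or wakes become outliers that simply never join the winning inlier set, so the recovered model describes the ship's motion rather than the water's.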
To demonstrate our proposed method, we present results for a number of different image sequences (including both ships with a translational movement and ships with a more complex movement). For example, our results for a small, incoming boat are shown in Figure 2(a). For this sequence, we used a translation-only model for the motion estimation. We observe that finer details are visible in the processed images than in the original versions. For example, it is almost possible to read the name of the boat in the processed image. In addition, we see the ghost of a buoy in the background of the (processed) center image. This artifact arises because we conducted the motion estimation on the boat, which meant that the unprocessed colors were merged with the processed (and in this case, smeared) luminance of the background. We also present our processing results for an image of a drilling rig, in Figure 2(b). In this example, we also applied translation-only motion estimation and, again, we find that many more details are visible after processing (e.g., it is possible to read or guess the name plate of the rig). In another example (see Figure 2(c)), we used translation-only motion estimation to process an image sequence of the Alfa Britannia. As with the previous examples, it is possible to read the name of the ship from a close-up of the processed image (but not of the original).

For ships with a more complex motion, we use our two-step approach to estimate the movement. We illustrate an example of the local motion estimation in Figure 3(a). Such estimations are then used to estimate an ROI translation-rotation motion, as shown in Figure 3(b) and (c). From these results, it is apparent that more of the ship's details are visible after processing, but the noise is also amplified. With our approach it is therefore necessary to balance the level of noise reduction with the amount of image sharpening and resolution enhancement.
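This trade-off can be seen directly on a pure-noise input: the stronger the sharpening, the more the noise standard deviation grows. The `unsharp` helper and its `amount` parameter below are an illustrative stand-in for the sharpening step, not part of our actual processing chain.

```python
import numpy as np

def unsharp(img, amount, r=1):
    # img + amount * (img - local mean): `amount` trades extra sharpness
    # against noise amplification.
    blur = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            blur += np.roll(img, (dy, dx), axis=(0, 1))
    blur /= (2 * r + 1) ** 2
    return img + amount * (img - blur)

# On pure noise, any "detail" the sharpening adds is amplified noise.
rng = np.random.default_rng(0)
noise = rng.normal(size=(128, 128))
levels = [float(unsharp(noise, a).std()) for a in (0.0, 0.5, 1.5)]
```

The noise level grows monotonically with `amount`, which is why the sharpening strength has to be tuned jointly with the noise reduction.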

In summary, we have presented a new method for software-based turbulence mitigation in the imaging of sea scenarios. Our proposed processing chain is based on improving the object of interest (i.e., a ship) in a selected ROI, rather than improving the background image. After processing, we find that many more details can be seen than in the original images. We have also demonstrated that our approach works well for objects with no apparent movement (e.g., a drilling rig) and for objects with linear movement. Our technique thus allows classification and identification of objects to be performed at greater distances. Although we have presented preliminary results for objects with complex movements and for objects partially occluded by waves, we still need to further develop and improve the motion estimation for these cases. We also aim to detect waves around a boat and to adjust the motion estimation and turbulence mitigation processing for those waves.
Judith Dijk is a senior research scientist and has been working at TNO since 2003. Her current research includes image enhancement, image understanding, and electro-optical sensor systems.