The current 10m class of telescopes (e.g., the Large Binocular Telescope and the Very Large Telescope), as well as the upcoming class of 40m telescopes (e.g., the European Extremely Large Telescope, the Thirty Meter Telescope, and the Giant Magellan Telescope), all require adaptive optics (AO) systems to compensate for the effects of atmospheric turbulence. Although single-conjugate AO systems can be used to flatten the wavefront from a guide star, in most cases the science target is not coincident with the guide star. The stratified structure of the atmosphere therefore causes the light beams from the science target and from the guide star to be affected by different aberrations. As such, even a perfect correction for the guide star cannot be a perfect correction for the target. The resulting point spread function (PSF) is thus degraded and is variable across the astronomical image.
With recently developed complex AO techniques (e.g., multi-conjugate AO), PSF uniformity can be improved. Some residual PSF variation in the field of view, however, is still possible. Depending on the necessary degree of PSF stability imposed by the science targets, the space variation of the PSF in AO observations can therefore be a significant issue. This can, however, be addressed with image processing methods. The case of a space-variant PSF is computationally tractable only under the assumption that the PSF smoothly varies across the image domain. A sectioning approach has therefore been proposed,1, 2 in which the image is decomposed into sub-domains (or patches). Within each of these sub-domains, the PSF is assumed to be approximately space invariant. An alternative interpolation approach has also been proposed,3–5 with which the discontinuity of the PSF from one sub-domain to another can be suppressed.
We have therefore developed a software package, known as ‘Patch,’ to deblur images that have been degraded by a spatially variable PSF.6, 7 Our package is written in the Interactive Data Language (IDL) and can be downloaded for free from the Internet.8 The method that we have implemented in our software is an improvement to the sectioning approach. We decompose the input image into partially overlapping sub-domains (the size of the overlapping regions depends on the extent of the PSF). We then reconstruct each sub-domain separately by means of a deconvolution method. This method includes boundary-effect corrections9 so that artifacts at the edges of the sub-domains (caused by Gibbs oscillations) can be prevented. We then obtain the reconstructed image as a mosaic of the results.
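The decomposition and mosaic steps described above can be sketched as follows. This is a minimal Python/NumPy illustration of the idea, not the IDL implementation distributed with Patch; the function names and the square-grid assumption are ours:

```python
import numpy as np

def split_into_patches(image, n, overlap):
    """Decompose a square image into an n x n grid of sub-domains,
    each enlarged by `overlap` pixels on every side (clipped at the
    image borders). Returns the enlarged patches and bookkeeping
    slices locating each patch's central core."""
    size = image.shape[0] // n
    patches, cores = [], []
    for i in range(n):
        for j in range(n):
            r0, c0 = i * size, j * size
            r1, c1 = r0 + size, c0 + size
            # enlarged (partially overlapping) region for deconvolution
            er0, ec0 = max(r0 - overlap, 0), max(c0 - overlap, 0)
            er1 = min(r1 + overlap, image.shape[0])
            ec1 = min(c1 + overlap, image.shape[1])
            patches.append(image[er0:er1, ec0:ec1])
            # where this sub-domain sits in the image, and where its
            # core sits inside the enlarged patch
            cores.append((slice(r0, r1), slice(c0, c1),
                          slice(r0 - er0, r0 - er0 + size),
                          slice(c0 - ec0, c0 - ec0 + size)))
    return patches, cores

def mosaic(patches, cores, shape):
    """Reassemble the (deconvolved) sub-domains, keeping only each
    patch's central core and discarding the overlap margins."""
    out = np.zeros(shape)
    for patch, (r, c, pr, pc) in zip(patches, cores):
        out[r, c] = patch[pr, pc]
    return out
```

Discarding the overlap margins at the mosaic stage is what suppresses the edge artifacts that each local deconvolution would otherwise leave at its borders.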
Our Patch graphical user interface (GUI) consists of three panels, one for each step of the reconstruction (input, deconvolution, and output). The main inputs for the software are the observed image, a set of local PSFs (defined on a regular grid, with one PSF centered in each image sub-domain), and an estimate of the background. The local PSFs must be estimated separately. The first panel (see Figure 1) shows the input image. In the case illustrated here, the image is a simulated star cluster, observed by the Hubble Space Telescope before its Corrective Optics Space Telescope Axial Replacement (COSTAR) correction was performed.10 Each sub-domain in this image is enlarged to a suitable (and automatically computed) size, so that partially overlapping domains can be used for each deconvolution. The user can set a larger value if specific image features need to be taken into account.
Figure 1. Screenshot of the first panel from the Patch software graphical user interface (GUI). In this panel the input image and the set of point spread functions (PSFs) can be loaded. In addition, the overlapping region is automatically computed and/or a user-defined value can be set. The input image shown here is a star cluster observed by the Hubble Space Telescope. It is overlain by a regular grid, centered to each image sub-domain, which defines the local PSFs.
In the second panel of our GUI (see Figure 2), the deconvolution method can be set. The user can choose either the Richardson-Lucy11, 12 or the scaled gradient projection (SGP)13 method, both with corrections for boundary effects. In Figure 2, the SGP method has been chosen, and the algorithm will be stopped when the so-called data-fidelity function (i.e., the Kullback-Leibler divergence between the model and the data) is approximately constant (given a tolerance of 10−5). Our algorithm stopping rules are based on a variety of criteria, details of which have been published previously.6 The results of the deconvolution step can then be visualized and saved in the final panel (see Figure 3). The residual map, defined as the difference between the input image and the convolution (with the input PSFs) of the reconstructed object, can also be visualized and saved.
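As an illustration of this stopping rule, here is a minimal Richardson-Lucy iteration in Python/NumPy. It assumes periodic boundary conditions via the FFT, rather than the boundary-effect corrections used in Patch, and all names here are ours:

```python
import numpy as np

def kl_distance(data, model, eps=1e-12):
    # generalized Kullback-Leibler divergence between data and model
    m = model + eps
    return np.sum(data * np.log((data + eps) / m) + m - data)

def richardson_lucy(data, psf, max_iter=500, tol=1e-5):
    """Richardson-Lucy deconvolution, stopped when the relative change
    of the KL data-fidelity function drops below `tol`."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # PSF assumed centered
    conv = lambda x, h: np.real(np.fft.ifft2(np.fft.fft2(x) * h))
    x = np.full_like(data, data.mean())       # flat, flux-matched start
    prev = np.inf
    for _ in range(max_iter):
        model = conv(x, otf) + 1e-12
        x = x * conv(data / model, np.conj(otf))   # multiplicative update
        kl = kl_distance(data, conv(x, otf))
        if abs(prev - kl) <= tol * abs(kl):        # fidelity ~ constant
            break
        prev = kl
    return x

def residual_map(data, reconstruction, psf):
    # input image minus the reconstruction re-convolved with the PSF
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return data - np.real(np.fft.ifft2(np.fft.fft2(reconstruction) * otf))
```

With a normalized PSF, each Richardson-Lucy update conserves the total flux of the data, which is one reason the method is well suited to photometric work.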
Figure 2. Screenshot of the second panel from the Patch GUI. In this panel the background information and CCD camera information are set. The deconvolution algorithm, the algorithm stopping rule, and the total number of iterations are also chosen at this stage (i.e., before starting the reconstruction).
Figure 3. Screenshot of the third panel from the Patch GUI. In this panel the results of the deconvolution step are shown, and can be saved as a flexible image transport system (FITS) file. The residual information, which is based on the statistics of the result, can also be displayed and saved.
We have tested our methodology on a set of simulated images. These images are of point-like sources in a crowded stellar field, as well as extended extragalactic sources.6, 7 For both of these scenarios, we simulated the images by assuming a strongly variable PSF across the field of view. In the case of the stellar objects, our tests show that the photometric and astrometric accuracy increases (and the number of artifacts simultaneously decreases) when the number of sub-domains increases. The color–magnitude diagrams (CMDs) for different reconstructions of the same input image—obtained by dividing it into different numbers of sub-domains—are shown in Figure 4. We simulated the observation in the J-band and in the K-band (with central wavelengths of 1.27 and 2.12μm, respectively), and the color is defined as the difference between the J-band and K-band magnitudes. We see a clear improvement in the photometric accuracy (i.e., there is a narrower spread of data) over the range of CMDs. Furthermore, our extragalactic source simulations indicate that it is possible to retrieve the morphological properties of the extended object with a good level of precision.
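For reference, the magnitudes entering these diagrams follow the standard astronomical definition, m = ZP − 2.5 log10(flux), so the J−K color is simply a magnitude difference. A minimal sketch (the zero point and flux values here are illustrative, not taken from the simulations):

```python
import math

def magnitude(flux, zero_point=25.0):
    # standard astronomical magnitude from an integrated flux;
    # the zero point is an arbitrary choice for this illustration
    return zero_point - 2.5 * math.log10(flux)

# J-K color of a star measured in both bands (illustrative fluxes)
j_mag = magnitude(1500.0)
k_mag = magnitude(2300.0)
color = j_mag - k_mag  # positive: the star is brighter in K than in J
```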
Figure 4. Color–magnitude diagrams for different section choices (i.e., n×n, as shown in the top left of each plot) of the input image.6 The input color–magnitude diagram is plotted in red for each case. The difference between the J-band and K-band magnitudes (with central wavelengths of 1.27 and 2.12μm, respectively) is shown on the x-axis, and the K-band magnitude is shown on the y-axis.
We have developed a new method—and accompanying software—for the deblurring of post-AO images characterized by space-variant PSFs. We have tested our approach on both diffuse and stellar images. The results that we obtained are satisfactory and allow us to achieve good image reconstructions. We are now in the process of testing our software on real AO images, and we hope to publish a paper on our findings shortly.
This work has been partially supported by the National Institute for Astrophysics, under project TECNO-INAF 2010 (Exploiting the adaptive power: a dedicated free software to optimize and maximize the scientific output of images from present and future adaptive optics facilities).
Andrea La Camera
Department of Informatics, Bioengineering, Robotics, and Systems Engineering
University of Genoa
Andrea La Camera has a PhD in computer science and is currently a postdoctoral researcher. His research activities are mainly within the field of inverse problems and are focused on astronomical image deconvolution.
Astronomical Observatory of Bologna
National Institute for Astrophysics (INAF)
Laura Schreiber obtained her PhD in astronomy from the University of Bologna, Italy, in 2009. In her current work, she focuses on AO instrumentation and data processing.
1. A. F. Boden, D. C. Redding, R. J. Hanisch, J. Mo, Massively parallel spatially variant maximum-likelihood restoration of Hubble Space Telescope imagery, J. Opt. Soc. Am. A 13, p. 1537-1545, 1996.
2. M. Aubailly, M. C. Roggemann, T. J. Schulz, Approach for reconstructing anisoplanatic adaptive optics images, Appl. Opt. 46, p. 6055-6063, 2007.
3. J. G. Nagy, D. P. O'Leary, Restoring images degraded by spatially variant blur, Soc. Indust. Appl. Math. J. Sci. Comp. 19, p. 1063-1082, 1998.
4. M. Hirsch, S. Sra, B. Scholkopf, S. Harmeling, Efficient filter flow for space-variant multiframe blind deconvolution, Proc. IEEE Conf. Comp. Vision Pattern Recognit., p. 607-614, 2010. doi:10.1109/CVPR.2010.5540158
5. L. Denis, E. Thiebaut, F. Soulez, Fast model of space-variant blurring and its application to deconvolution in astronomy, Proc. IEEE Int'l Conf. Image Process. 18, p. 2817-2820, 2011. doi:10.1109/ICIP.2011.6116257
6. A. La Camera, L. Schreiber, E. Diolaiti, P. Boccacci, M. Bertero, M. Bellazzini, P. Ciliegi, A method for space-variant deblurring with application to adaptive optics imaging in astronomy, Astron. Astrophys. 579, p. A1, 2015. doi:10.1051/0004-6361/201525610
7. P. Ciliegi, A. La Camera, L. Schreiber, M. Bellazzini, M. Bertero, P. Boccacci, E. Diolaiti, et al., Image restoration with spatially variable PSF, Proc. SPIE 9148, p. 91482O, 2014. doi:10.1117/12.2055914
9. M. Bertero, P. Boccacci, A simple method for the reduction of boundary effects in the Richardson-Lucy approach to image deconvolution, Astron. Astrophys. 437, p. 369-374, 2005.
11. W. H. Richardson, Bayesian-based iterative method of image restoration, J. Opt. Soc. Am. 62, p. 55-59, 1972.
12. L. B. Lucy, An iterative technique for the rectification of observed distributions, Astron. J. 79, p. 745-754, 1974.
13. S. Bonettini, R. Zanella, L. Zanni, A scaled gradient projection method for constrained image deblurring, Inverse Prob. 25, p. 015002, 2009. doi:10.1088/0266-5611/25/1/015002