This PDF file contains the front matter associated with SPIE Proceedings Volume 11396, including the Title Page, Copyright information, and Table of Contents.
Sampling is a fundamental aspect of any implementation of compressive sensing. Typically, the choice of sampling method is guided by the reconstruction basis. However, this approach can be problematic with respect to certain hardware constraints and is not responsive to domain-specific context. We propose a method for defining an order for a sampling basis that is optimal with respect to capturing variance in data, thus allowing for meaningful sensing at any desired level of compression. We focus on the Walsh-Hadamard sampling basis for its relevance to hardware constraints, but our approach applies to any sampling basis of interest. We illustrate the effectiveness of our method on the Physical Sciences Inc. Fabry-Pérot interferometer sensor multispectral dataset, the Johns Hopkins Applied Physics Lab FTIR-based longwave infrared sensor hyperspectral dataset, and a Colorado State University Swiss Ranger depth image dataset. The spectral datasets consist of simulant experiments, including releases of chemicals such as glacial acetic acid (GAA) and sulfur hexafluoride (SF6). We combine our sampling and reconstruction with the adaptive coherence estimator (ACE) and bulk coherence for chemical detection, and we incorporate an algorithmic threshold for ACE values to determine the presence or absence of a chemical. We compare results across sampling methods in this context. We achieve successful chemical detection at a compression rate of 90%. For all three datasets, we compare our sampling approach to standard orderings of the sampling basis such as random, sequency, and an analog of sequency that we term 'frequency.' In one instance, the peak signal-to-noise ratio was improved by over 30% across a test set of depth images.
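A minimal sketch of one way such a variance-based ordering could be computed is shown below, assuming a set of representative training vectors and the Hadamard matrix from scipy.linalg; the function name and the training-set criterion are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import hadamard

def variance_ordered_hadamard(training_vectors, n):
    """Order the rows of an n x n Hadamard matrix by the variance of their
    projections over a set of training vectors (shape: samples x n).

    Hedged sketch of variance-based ordering; the paper's exact criterion
    may differ.
    """
    H = hadamard(n)                      # entries +/-1; n must be a power of 2
    coeffs = training_vectors @ H.T      # projection of each sample onto each row
    row_variance = coeffs.var(axis=0)    # variance captured by each basis row
    order = np.argsort(row_variance)[::-1]
    return H[order], order

# Usage (hypothetical data): keep only the first m rows for m/n compression.
# X_train = np.random.randn(500, 64)
# H_ordered, order = variance_ordered_hadamard(X_train, 64)
# measurements = X_train @ H_ordered[:6].T   # ~90% compression (6 of 64 rows)
```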
Quantitative phase imaging (QPI) provides enhanced contrast for weakly absorbing specimens such as biological tissues under optical light and soft materials under X-ray. In this work, we develop a model-based phase retrieval framework by integrating the physics principles of phase imaging with a deep learning-based approach. Both the measurements and the forward model are used as inputs to a model-based neural network. The features of the object and the regularization weight of the established priors are learned by minimizing the difference between the network output and the ground truth during the training process. The method is tested on phase imaging of handwritten digit patterns and biological cells in a simulation of propagation-based TIE (transport of intensity equation) phase retrieval. We achieve enhanced accuracy for the phase retrieval compared to non-model-based end-to-end neural networks and reduce the computation cost compared to traditional model-based iterative reconstruction algorithms.
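For context, the propagation-based phase retrieval referenced above is governed by the transport of intensity equation; the following is the standard textbook form, stated here for reference rather than quoted from the paper:

```latex
% Transport of intensity equation (TIE); k = 2\pi/\lambda is the wavenumber,
% I the intensity, \phi the phase, and \nabla_{\perp} the transverse gradient.
\[
  -k \, \frac{\partial I(x, y, z)}{\partial z}
  \;=\; \nabla_{\perp} \cdot \bigl[\, I(x, y, z)\, \nabla_{\perp} \phi(x, y, z) \,\bigr]
\]
```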
With the increasing amount of available medical data, computing power, and network speed, modern medical imaging is facing an unprecedented amount of data to analyze and interpret. Phenomena such as Big Data-omics stemming from several diagnostic procedures and novel multi-parametric imaging modalities tend to produce almost unmanageable quantities of data. The paper addresses this context on the premise that a novel paradigm in massive data processing and automation is necessary in order to improve diagnostics and facilitate personalized and precision medicine for each patient. Traditional machine learning concepts have demonstrated many shortcomings when it comes to correctly diagnosing fatal diseases. At the same time, static graph networks are unable to capture the fluctuations in brain processing and monitor disease evolution. Therefore, artificial intelligence and deep learning are increasingly applied in oncologic medical imaging because they excel at providing quantitative assessments of biomedical imaging characteristics. On the other hand, novel concepts borrowed from modern control theory have paved the path for a dynamic graph theory that can predict neurodegenerative disease evolution and replace longitudinal studies. We chose two important topics, brain data processing and oncologic imaging, to show the relevance of these concepts. We believe that these novel paradigms will impact multiple facets of radiology, but we are convinced that they are unlikely to replace radiologists any time in the near future, since many challenges remain in their clinical implementation.
Hyperspectral imaging spectrometers are useful in numerous applications including remote sensing, environmental monitoring, surveillance, mineralogy, and precision agriculture. Historically, high cost and complexity have limited the number of fielded hyperspectral imagers. The Computational Reconfigurable Imaging Spectrometer (CRISP) sensor is a novel hyperspectral imaging spectrometer suitable for high-resolution air- or space-based missions. CRISP uses a computational imaging approach to reduce the system's overall size and complexity. It exploits platform motion and a spectrally coded focal-plane mask to temporally modulate the optical spectrum, enabling simultaneous measurement of multiple spectral bins (i.e., multiplexing). The novel design enables high performance from smaller and less-expensive components (e.g., uncooled microbolometers), and is thus suitable for small space and air platforms. This talk discusses our demonstrator system (including recent flight results) and compares it to theory. Our flights demonstrate plume detection using an uncooled, airborne, longwave infrared CRISP imaging spectrometer. We discuss progress in developing algorithms that enable spectral recovery in the presence of motion blur, using the CRISP architecture to advantage. These algorithms enable a fast scanning mode, trading off computational complexity and reconstruction quality for fast area coverage rate.
Camera images often contain more information than the human eye can notice. Despite numerous influences degrading image quality, the image data often contain information, or at least traces of it, that would be interesting to examine but that lies below the recognition threshold. Quite often, hard-to-see information needs to be extracted from degraded images or image parts. This paper focuses on making such information visible. First, a novel dedicated retinex algorithm uncovers details hidden by low contrast. Unlike other retinex variants, it improves contrast not only in dark image regions but at any gray value or color level. Additionally, it avoids halo effects to a high degree. Second, a dedicated image sharpening method uncovers details hidden by blurring or haziness and clarifies washed-out details. In comparison to known methods, this is done more effectively while image noise and compression artifacts are increased significantly less. Both proposed image enhancement methods are easy to implement. The given results show them to be powerful tools for uncovering image details and increasing the amount of visualized information, which eventually improves the visual range of the camera.
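For readers unfamiliar with the retinex family, the sketch below shows a generic single-scale retinex baseline in Python; it is only an orientation aid under the usual log-illumination assumption and does not reproduce the paper's dedicated variant (halo suppression, enhancement at all gray and color levels) or its sharpening method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1e-6):
    """Generic single-scale retinex: log image minus log of a Gaussian-blurred
    illumination estimate. Baseline sketch only; NOT the paper's algorithm.
    """
    image = image.astype(np.float64) + eps
    illumination = gaussian_filter(image, sigma=sigma) + eps
    reflectance = np.log(image) - np.log(illumination)
    # Stretch the result back to [0, 1] for display.
    lo, hi = reflectance.min(), reflectance.max()
    return (reflectance - lo) / (hi - lo + eps)
```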
For centuries, humans have discovered the physical laws that underpin our world. What if the next Einstein or Newton is not a human, but a machine? Machines that are physics-aware can transform a multitude of fields and are poised to enable unexpected and meaningful feats in science and engineering. In this paper, we survey methods germane to the imaging sciences, where we observe a very special convergence of millennia of optical theory with decades of digital photography.
Single-photon sensor technology is rapidly emerging as the optical sensor technology of choice in specialized low-flux imaging applications such as long-range LiDAR, fluorescence microscopy and non-line-of-sight imaging. We ask the question: Can single-photon sensors be used more broadly as general-purpose image sensors for passive 2D intensity imaging? We derive a photon flux estimator using the number of photons detected in a fixed exposure time by a dead-time-limited single-photon avalanche diode (SPAD) sensor. Unlike a conventional image sensor pixel that has a hard saturation limit due to its full well capacity, our SPAD-based passive imaging method has a non-linear response that never saturates. This enables SPADs to operate not only at extremely low photon flux levels but also at extremely high flux levels, several orders of magnitude higher than the saturation limit of conventional image sensors. We present a comprehensive theoretical analysis of the effect of various design parameters on the signal-to-noise ratio and dynamic range of a passively operated SPAD pixel, and also demonstrate the dynamic range improvement experimentally.
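One commonly used form of such a dead-time-corrected estimator is sketched below; the form and symbols are standard in the SPAD literature and are stated here as an assumption, since the paper's derivation may include additional corrections (e.g., dark counts or afterpulsing). With N photon detections in exposure time T, dead time τ_d, and photon detection efficiency η, the flux estimate is

```latex
\[
  \hat{\Phi} \;=\; \frac{N}{\eta \,\bigl( T - N \, \tau_d \bigr)}
\]
```

Because N can approach, but never reach, T/τ_d as the flux grows, the response compresses rather than clipping, which is the soft-saturation behavior described above.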
This work focuses on a methodology to improve spatio-temporal performance in optical imaging systems. We investigate the potential of the optical event-based sensor (EBS) technology to reach spatio-temporal performance limits beyond that of imaging systems employing conventional focal-plane arrays. Specifically, we investigate EBS performance under object/scene/platform motion, where its spatio-temporal performance degrades. We propose a hardware motion compensation sub-system and experimentally demonstrate that the performance of a moving EBS can be recovered through effective reduction of platform motion. This demonstration confirms that the EBS can deliver significantly improved spatio-temporal performance, while under motion.
X-ray images of low-density materials, such as soft tissue, provide inherently low contrast due to their subtle attenuation differences. However, differences in phase imparted to x rays can be substantial, giving significantly improved contrast. The barrier to widespread implementation of x-ray phase imaging is that most phase techniques require high spatial coherence of the x-ray beam. We have previously demonstrated that employing structured illumination produced by a stainless steel wire mesh can significantly loosen this coherence requirement. We present a computational model utilizing ray tracing that allows us to explore the system's design space and to optimize our phase reconstruction algorithms.
Imaging is the transfer of information from the object to the sensor. This transfer is typically mediated by a lens. However, this is not the only option. In this presentation, we will describe alternatives that can enable unusual imaging systems with concomitant advantages including a needle microscope, hyperspectral cameras and optics-less cameras. These imaging systems produce data that are not easily interpretable by humans, but nevertheless could be valuable for inferencing. We will explore the co-optimization of algorithms with hardware to enable previously impossible imaging tasks.
In our work, we propose a deep learning solution to complete RGB-D images that are acquired by an NIR structured-light scanner with an additional RGB camera that measures the visible spectrum. Building on prior work on image inpainting, we designed and trained a neural network architecture that takes the available fringe and color images as well as the reliably measured depth information and completes the depth images. We particularly focus on occlusion-caused image artifacts that naturally occur due to geometric visibility constraints. Hence, we are able to reconstruct a dense depth image from the viewpoint of the RGB camera, which can be used for further post-processing and visualization purposes.
This talk will describe new microscopy methods and computational algorithms that use computational imaging to enable 3D phase measurement in samples that are thick or incur multiple scattering, such as embryos or whole organisms. We use image reconstruction algorithms that are based on large-scale nonlinear non-convex optimization combined with unrolled neural networks to model the multiple scattering effects of light passing through the sample. This enables us to reconstruct 3D refractive index maps from angle-coded illumination, even with samples that incur significant scattering. We further discuss engineering of data capture for computational microscopes by end-to-end learned design.
Fourier integral microscopy (FiMic), also referred to as Fourier light field microscopy (FLFM) in the literature, was recently proposed as an alternative to conventional light field microscopy (LFM). FiMic is designed to overcome the non-uniform lateral resolution limitation specific to LFM. By inserting a micro-lens array at the aperture stop of the microscope objective, the Fourier integral microscope directly captures, in a single shot, a series of orthographic views of the scene from different viewpoints. We propose an algorithm for the deconvolution of FiMic data that combines the well-known Maximum Likelihood Expectation Maximization (MLEM) method with total variation (TV) regularization to cope with noise amplification in conventional Richardson-Lucy deconvolution.
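As an illustration of the general idea, a minimal sketch of Richardson-Lucy/MLEM iterations with a total-variation correction factor (in the spirit of Dey et al.) is given below; it assumes a single shift-invariant 2D PSF and does not reproduce the FiMic light-field forward model or the paper's exact regularization.

```python
import numpy as np
from scipy.signal import fftconvolve

def mlem_tv_deconvolve(y, psf, n_iter=50, lam=0.002, eps=1e-8):
    """Richardson-Lucy / MLEM deconvolution with a TV prior (sketch only)."""
    psf_flip = psf[::-1, ::-1]
    x = np.full_like(y, y.mean(), dtype=np.float64)
    for _ in range(n_iter):
        # Data term: standard multiplicative MLEM update.
        blurred = fftconvolve(x, psf, mode="same") + eps
        ratio = fftconvolve(y / blurred, psf_flip, mode="same")
        # TV term: divergence of the normalized gradient field.
        gx, gy = np.gradient(x)
        norm = np.sqrt(gx**2 + gy**2) + eps
        div = np.gradient(gx / norm, axis=0) + np.gradient(gy / norm, axis=1)
        x = x * ratio / np.clip(1.0 - lam * div, eps, None)
    return x
```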
Computational imaging techniques that rely on a compressed set of measurements and exploit prior information such as target size, scene sparsity, transceiver radiation pattern, etc., are rapidly gaining popularity in areas such as medical and security imaging, remote sensing, and automotive radar, as they can significantly reduce the SWAP-C (Size, Weight, Power, and Cost) of hardware modules, especially at millimeter-wave frequencies. In this article, we propose using the covariance matrix of a large ensemble of representative targets to form a diagonalizing basis in which the transformed scene voxels are uncorrelated. In this basis, we introduce a method of image reconstruction, Covariance Likelihood based Regularization (CLR), in which transformed voxels with low likelihood according to the ensemble statistics are penalized. We also discuss another method, Thresholded Eigenbasis (TE), which involves thresholding the eigenvalues of the covariance matrix and reconstructing the transformed scene voxels in a lower-dimensional approximate basis. We use these techniques to reconstruct images from simulations of measurements made using a W-band (75 - 110 GHz) imaging system, where the linear imaging matrix is carefully designed based on vector electromagnetics and realistic hardware. Based on these reconstruction results, we discuss the opportunities and challenges for these methods, including scenarios where TE provides improved reconstruction speed and CLR provides improved accuracy.
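A minimal sketch of the thresholded-eigenbasis idea follows, under the assumption that the ensemble covariance is formed from vectorized training scenes; the function name, the energy-fraction threshold, and the least-squares reconstruction in the comments are illustrative choices, not the paper's CLR formulation.

```python
import numpy as np

def thresholded_eigenbasis(scene_ensemble, energy_fraction=0.99):
    """Form a reduced basis from eigenvectors of the ensemble covariance,
    keeping enough eigenvalues to capture a given energy fraction (sketch).
    """
    mean = scene_ensemble.mean(axis=0)
    cov = np.cov(scene_ensemble, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum, energy_fraction)) + 1
    return mean, eigvecs[:, :k]                       # decorrelating reduced basis

# Reconstruction in the reduced basis (A: linear imaging matrix, y: measurements):
# mean, B = thresholded_eigenbasis(training_scenes)
# c, *_ = np.linalg.lstsq(A @ B, y - A @ mean, rcond=None)
# scene_estimate = mean + B @ c
```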
Compressed ultrafast photography (CUP) is a novel computational imaging modality that synergizes compressed sensing with streak imaging. In data acquisition, a transient spatiotemporal (x,y,t) scene is compressively recorded in a single snapshot by successively going through spatial encoding, temporal shearing, and spatiotemporal integration. In movie reconstruction, a compressed-sensing reconstruction algorithm is employed to recover the datacube of the transient event. Compared with pump-probe ultrafast imaging schemes, CUP is capable of single-shot, receive-only, ultrafast imaging of non-repetitive or difficult-to-reproduce transient events. Compared with other single-shot ultrafast imaging techniques, CUP does not require specialized active illumination and possesses high light throughput.
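The acquisition chain described above is commonly written as a cascade of linear operators, and reconstruction as a regularized inverse problem; the notation below follows the standard CUP formulation in the literature and is included for reference rather than quoted from this talk:

```latex
\[
  E(m, n) \;=\; \mathbf{T}\,\mathbf{S}\,\mathbf{C}\; I(x, y, t),
  \qquad
  \hat{I} \;=\; \operatorname*{arg\,min}_{I \,\ge\, 0}\;
  \tfrac{1}{2}\,\bigl\lVert E - \mathbf{T}\mathbf{S}\mathbf{C}\, I \bigr\rVert_2^2
  \;+\; \beta\,\Phi(I)
\]
% C: spatial encoding by the pseudo-random mask, S: temporal shearing by the
% streak camera, T: spatiotemporal integration on the detector,
% \Phi: regularizer (e.g., total variation) with weight \beta.
```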
The compressive sensing (CS) technique is a novel tool for reconstructing images, normally sparse in a transform domain, from fewer samples than those required by conventional imaging systems. However, the methods used for signal reconstruction within the CS approach still present implementation problems, mainly due to their intensive computational demand and high power consumption. These drawbacks need to be addressed if this approach is to be used in systems such as autonomous drone flight or other embedded applications that additionally require very short processing times. In this paper we evaluate the use of a hardware-based parallel processing architecture for the implementation of the Orthogonal Matching Pursuit (OMP) algorithm, one of the most efficient CS reconstruction algorithms developed so far. To characterize performance, we target different maximum allowed processing times to reach the minimum image resolutions required by each system of interest, using different sparse amounts (16 and 64) of single-pixel generated samples per image. We also target a final image quality above 20 dB in terms of the peak signal-to-noise ratio (PSNR). To reduce the execution and processing times required to generate each image, we propose implementing parallel kernels in the hardware platform for each of the operations required by the algorithms under study. In the proposed implementation, the reconstructed images are used to generate video streams on which the system makes decisions in continuous time, so each single image (frame) reconstruction cannot exceed 30 ms in order to keep the frame rate above 33 frames per second (fps), the minimum required for an acceptable video stream. Implementing a variation of the OMP algorithm on a graphics processing unit (GPU) with a parallel architecture approach yields processing times four to five times shorter than those obtained with a central processing unit (CPU) based implementation for the same purpose.
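For reference, a plain NumPy sketch of the OMP iteration being parallelized is shown below; it is a serial CPU reference under standard assumptions (roughly normalized columns of the sensing matrix), and the paper's GPU kernels and timing optimizations are not reproduced.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-9):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit all selected coefficients by
    least squares. Serial reference sketch, not the paper's GPU variant.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0          # never pick the same atom twice
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coeffs
    return x
```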
We experimentally design, construct and test a compact optical system featuring a vortex diffractive waveplate placed in the aperture stop of the lens system. The wavefront coding camera diffracts polarized coherent illumination in the system's focal plane and reduces the peak intensity of the beam by two orders of magnitude while simultaneously recovering an unpolarized incoherent background image.
We report a new computational super-resolution (SR) imaging technique, termed coded aperture super-resolution imaging (CASR), which modulates the point spread function (PSF) of the imaging system by rotating the aperture pattern. The pattern is designed in an anisotropic manner so that the PSF spreads across multiple pixels and contains clues about high-frequency structure. A fundamental difference between our approach and conventional multi-image super-resolution is that CASR accounts for the diffraction effect explicitly, with no need for relative motion between the scene and the detector. With CASR, we design and construct two sets of programmable-aperture photoelectric imaging systems in the visible spectrum. The achievable equivalent Nyquist sampling frequency of the detectors is increased by a factor of 3.57. Furthermore, the technique can be flexibly applied to long-distance high-resolution (HR) detection due to its advantages of fast response, no mechanical movement, and resistance to airflow disturbance.
One of the fundamental assumptions of compressive sensing (CS) is that a signal can be reconstructed from a small number of samples by solving an optimization problem with an appropriate regularization term. Two standard regularization terms are the L1 norm and the total variation (TV) norm. We present a comparison of CS reconstruction results based on these two approaches in the context of chemical detection, and we demonstrate that optimization based on the L1 norm outperforms optimization based on the TV norm. Our comparison is driven by CS sampling, reconstruction, and chemical detection in two real-world datasets: the Physical Sciences Inc. Fabry-Pérot interferometer sensor multispectral dataset and the Johns Hopkins Applied Physics Lab FTIR-based longwave infrared sensor hyperspectral dataset. Both datasets contain releases of chemical simulants such as glacial acetic acid, triethyl phosphate, and sulfur hexafluoride. For chemical detection we use the adaptive coherence estimator (ACE) and bulk coherence, and we propose algorithmic ACE thresholds to define the presence or absence of a chemical of interest in both uncompressed data cubes and reconstructed data cubes. The uncompressed data cubes provide an approximate ground truth. We demonstrate that optimization based on either the L1 norm or the TV norm results in successful chemical detection at a compression rate of 90%, but we show that L1 optimization is preferable. We present quantitative comparisons of chemical detection on reconstructions from the two methods, with an emphasis on the number of pixels with an ACE value above the threshold.
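In a common formulation (standard in the CS literature; the paper's exact solvers and parameter choices are not restated here), the two reconstructions being compared solve

```latex
\[
  \hat{x}_{\ell_1} \;=\; \operatorname*{arg\,min}_{x}\;
    \tfrac{1}{2}\lVert \Phi x - y \rVert_2^2 + \lambda\,\lVert \Psi x \rVert_1,
  \qquad
  \hat{x}_{\mathrm{TV}} \;=\; \operatorname*{arg\,min}_{x}\;
    \tfrac{1}{2}\lVert \Phi x - y \rVert_2^2 + \lambda\,\mathrm{TV}(x)
\]
% \Phi: sampling matrix, y: compressed measurements, \Psi: sparsifying transform,
% TV(x): total-variation seminorm, \lambda: regularization weight.
```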
The use of diffractive optics on space-based multi-spectral sensors allows for the creation of large-aperture systems in small cube satellite packages. However, the current technology only allows for the imaging of one narrow spectral band. By using computational imaging, the number of spectral bands imaged can be increased while still maintaining a lightweight satellite. We have designed and built a diffractive plenoptic camera (DPC) that combines a diffractive optic and a light field camera in order to capture multiple spectral bands of vegetation and calculate a normalized difference vegetation index (NDVI). This paper will derive equations relating system parameters to performance and will capture a multi-spectral scene with the DPC. The analysis yielded design equations for the spectral range, spectral resolution, and field of view. The experimental results found that the DPC was able to determine the location of the vegetation, but with reduced NDVI values in comparison to a grating spectrometer. Additionally, artifacts such as the zeroth order of diffraction and spectra occupying the same spatial location were found to contribute to the reduction of the NDVI values. Near the borders of the refocused image, false values were found as a result of optical aberrations. Overall, the DPC shows potential to become a space-based multi-spectral sensor, but the contribution from artifacts and aberrations needs to be reduced. Future work includes using different diffractive optic designs, using an intermediate-image diffractive plenoptic camera, and applying 3D deconvolution techniques.
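For reference, the NDVI values compared above follow the standard definition (the DPC's specific band centers are not restated here):

```latex
\[
  \mathrm{NDVI} \;=\; \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}
                           {\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}
\]
% \rho_{NIR}, \rho_{Red}: reflectance in the near-infrared and red bands.
```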
In recent years, research efforts in the field of digital holography have expanded significantly, due to the ability to obtain high-resolution intensity and phase images. The information contained in these images has become of great interest to the machine learning community, with applications spanning a wide portfolio of research areas including bioengineering. In this work, we seek to demonstrate a high-fidelity simulation of holographic recording. By accurately representing the diffraction, aberrations, and speckle introduced via propagation of a coherent light source through a series of optical elements and the object itself, we will accurately predict the optical interference of the object and reference waves at the recording plane. We will show that the optical transformation that predicts the complex field at the recording plane can be generalized for arbitrary holographic recording configurations using matrix optics. In addition, we will provide a detailed description of digital phase reconstruction and aberration compensation for a variety of off-axis holographic configurations. Reconstruction errors will be presented for the various holographic recording geometries and complex-field objects. The generalized holographic simulation described in this work seeks to motivate the use of reconstructions of the simulated holograms to populate a database that can be used to train machine learning algorithms aimed at classifying relevant objects recorded through a variety of holographic setups.
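The matrix-optics generalization mentioned above typically composes ray-transfer (ABCD) matrices for each optical element; the textbook forms below are given for orientation and are not the paper's specific recording configurations:

```latex
\[
  M_{\mathrm{free}}(d) \;=\; \begin{pmatrix} 1 & d \\ 0 & 1 \end{pmatrix},
  \qquad
  M_{\mathrm{lens}}(f) \;=\; \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix},
  \qquad
  M_{\mathrm{system}} \;=\; M_k \cdots M_2\, M_1
\]
% The composite ABCD matrix then enters the Collins (generalized Fresnel)
% integral to propagate the complex field to the recording plane.
```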
In recent years, intelligent image processing systems have been developing intensively. When solving problems of high complexity, modern machine vision methods are required to increase the efficiency of digital image processing in the presence of working-scene variability, object heterogeneity, and interference. One of the trends in the development of modern information technologies is the development of highly efficient methods and algorithms for analyzing signals and images corrupted by background noise. Constructing highly effective techniques and algorithms for image denoising requires a priori knowledge of the characteristics of the distorting interference; in practice, such information is in most cases missing. In this paper, we develop a new image denoising method that uses a bank of filters together with maximum likelihood estimation. We propose a new approach to using a set of heterogeneous digital image filters, such as a median filter, a Gabor filter, a non-local means filter, a spline filter, a wavelet filter, and others. The feasibility of this approach follows from the fact that, as a rule, the filtering process is analyzed under the assumption of a Gaussian noise distribution, whereas the effectiveness of various filtering methods on real images recorded against background noise differs. This is because, under real observation conditions, the noise distribution may differ from a Gaussian one, which explains the difference in the qualitative filtering characteristics obtained when the same image is processed by different filters. Experimental studies have shown the operability and high efficiency of the developed method, which improves the quality of image filtering.
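A minimal sketch of one way a maximum-likelihood combination of a heterogeneous filter bank could look is given below, assuming independent Gaussian residuals per filter so that the ML estimate reduces to inverse-variance weighting; the filter choices and weighting scheme are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def filter_bank_ml_denoise(noisy, eps=1e-12):
    """Combine heterogeneous filter outputs by inverse-variance weighting
    (the ML estimate under an independent-Gaussian-residual assumption).
    Illustrative sketch: the paper's bank also includes Gabor, non-local
    means, spline, and wavelet filters, and its likelihood model may differ.
    """
    outputs = [
        median_filter(noisy, size=3),
        gaussian_filter(noisy, sigma=1.0),
        gaussian_filter(noisy, sigma=2.0),
    ]
    # Estimate each filter's residual variance from its method noise.
    variances = [np.var(noisy - out) + eps for out in outputs]
    weights = np.array([1.0 / v for v in variances])
    weights /= weights.sum()
    return sum(w * out for w, out in zip(weights, outputs))
```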