Proceedings Volume 11351

Unconventional Optical Imaging II



Volume Details

Date Published: 17 April 2020
Contents: 18 Sessions, 39 Papers, 52 Presentations
Conference: SPIE Photonics Europe 2020
Volume Number: 11351

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 11351
  • Advanced Methods: Tomography I
  • Advanced Methods: Tomography II
  • Advanced Methods: Scattering
  • Advanced Methods: Microscopy
  • Modelling, Computing, Design: Co-design
  • Applications: Biomed II
  • Hot Topics II
  • Modelling, Computing, Design: Deep Learning I
  • Modelling, Computing, Design: Deep Learning II
  • Modelling, Computing, Design: Computational Imaging
  • Advanced Methods: QPI/DH
  • Terahertz Imaging I: Joint Session
  • Advanced Methods: Advanced Devices and Modalities for Imaging I
  • Advanced Methods: Advanced Devices and Modalities for Imaging II
  • Advanced Methods: Multi-Hyperspectral
  • Poster Session
  • 11351 Additional Presentations
Front Matter: Volume 11351
Front Matter: Volume 11351
This PDF file contains the front matter associated with SPIE Proceedings Volume 11351, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Advanced Methods: Tomography I
Unconventional optical coherence tomography (Conference Presentation)
Adrian G. H. Podoleanu, Adrian Bradu, Ramona C. Cernat, et al.
We have introduced the Master Slave (MS) interferometry method to address the limitations due to the use of the conventional FT or its derivatives in OCT data processing. The novel MS technology replaces the FT operator with a parallel batch of correlators. An electrical signal proportional to the channeled spectrum at the interferometer output is correlated with P masks, producing P signals, one for each of the P points in the A-scan. In this way, it is possible to: (i) directly access the information from selected depths in the sample placed in the slave interferometer; (ii) eliminate the process of resampling required by the conventional FT-based technology, with immediate consequences in improving the decay of sensitivity with depth, achieving the expected axial resolution limit and reducing the time to display an en-face OCT image, while slightly lowering the cost of the OCT assembly; and (iii) tolerate the dispersion left unbalanced in the slave interferometer. The lecture will present several developments based on the MS-OCT technology, such as: (a) an equivalent OCT/SLO (scanning laser ophthalmoscopy), where no extra optical channel for the SLO is needed; (b) coherence revival swept source OCT employing the MS tolerance to dispersion; (c) Gabor filtering, where a large number of repetitions with different focus adjustments can be performed more time-efficiently than when employing FT-based OCT; (d) MS phase processing, which opens novel avenues in phase- and polarization-sensitive modalities; (e) achieving the theoretical axial resolution when using an ultra-wide broadband source such as a supercontinuum laser; (f) down-conversion OCT that can deliver an en-face OCT image from a sample in real time, irrespective of the tuning speed of the swept source, where the mask signals are generated in real time (by a physical master interferometer) while sweeping the frequency of the swept source.
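As a rough illustration of the correlator-based processing described above, the following Python sketch correlates a measured channeled spectrum with a bank of P masks to obtain the P points of an A-scan. It is an editorial illustration, not the authors' code: the cosine masks, the DC removal and the toy numbers are placeholder assumptions (in MS-OCT the masks come from measurements or a model of the master interferometer).

```python
import numpy as np

def ms_oct_ascan(channeled_spectrum, masks):
    """Master/Slave-style sketch: correlate the measured channeled spectrum
    with P precomputed masks, one per depth, to build an A-scan.

    channeled_spectrum : (N,) array, interferometer output vs. wavenumber
    masks              : (P, N) array, reference channeled spectra for P depths
    """
    spectrum = channeled_spectrum - channeled_spectrum.mean()  # remove DC term
    # The inner product of the spectrum with each mask plays the role of a correlator.
    return np.abs(masks @ spectrum)

# Toy example: masks are cosines whose fringe frequency encodes depth (assumption).
N, P = 1024, 128
k = np.linspace(0.0, 1.0, N)                  # normalized wavenumber axis
depths = np.arange(1, P + 1)
masks = np.cos(2 * np.pi * np.outer(depths, k))
spectrum = 0.8 * np.cos(2 * np.pi * 40 * k)   # single reflector at "depth" 40
ascan = ms_oct_ascan(spectrum, masks)
print(int(np.argmax(ascan)) + 1)              # -> 40
```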
Advanced Methods: Tomography II
Recent progress in tomographic diffractive microscopy
N. Verrier, M. Debailleul, O. Haeberlé
Tomographic diffractive microscopy (TDM), also known as phase tomography, synthetic aperture microscopy… is becoming a more and more mature technique. It is an extension of holographic microscopy, with controlled conditions of illumination. Several views of the sample are numerically combined to reconstruct a 3-D image. Commercial implementations are even already available, but there are still challenges to be addressed to improve the method. We present possible approaches to further speed up acquisitions, simplify reconstructions and/or improve sensitivity/contrast.
Large-scale polarization contrast optical diffraction tomography (Conference Presentation)
Jeroen Kalkman, Jos van Rooij
We demonstrate large-scale, high-sensitivity optical diffraction tomography (ODT) imaging of zebrafish. We make this possible through three improvements. First, we optimize the field of view by using an ODT set-up with a high magnification-to-numerical-aperture ratio and phase stepping. Second, we decrease the noise in the reconstructed images by off-axis sample placement, numerical focus tracking, and acquisition of a large number of projections. Third, we optimize the tissue clearing procedure to prevent scattering and refraction. We demonstrate our technique by imaging a zebrafish over a 4.1x4.1x5.5 mm3 volume with 4 micrometer spatial resolution. In addition, we demonstrate, with the same set-up, combined phase and polarization contrast optical diffraction tomography imaging. We use the phase and amplitude of the digital hologram to reconstruct the refractive index and (scaled) birefringence, respectively. Birefringence contrast imaging is demonstrated on zebrafish and shows high-contrast images of the muscle tissue, something that is not well visible in conventional phase-based optical diffraction tomography.
Mirror-effect based tomographic diffraction microscopy
Nicolas Verrier, Ludovic Foucault, Matthieu Debailleul, et al.
Tomographic Diffraction Microscopy (TDM) is a technique that makes it possible to assess the 3D complex refractive index of the investigated sample without fluorescent labeling. TDM is therefore a method of choice for the characterization of biological samples or functionalized surfaces. TDM is a generalization of Digital Holographic Microscopy with full control of the angle of illumination over the object. The angle can be modified either by sweeping the illumination over the object, or by rotating the object while maintaining the angle of illumination. Combining several hundred acquisitions, it is possible to retrieve full 3D information about both the refraction and absorption of the object. Nevertheless, the time needed for data acquisition may become prohibitive for routine investigations or dynamic sample imaging. Moreover, simultaneous reflection and transmission characterization of a sample remains an experimental challenge. Recently, a method called “Mirror-Assisted Tomographic Diffraction Microscopy” (MA-TDM) has been proposed [Opt. Lett. 35, 1857 (2010)], which theoretically allows isotropic 3D resolution to be achieved by combining, in a simpler fashion, reflection and transmission modes. When transparent samples are considered, one can take advantage of this mirroring effect to limit the number of acquired holograms, while maintaining the resolution of TDM. We propose to demonstrate this concept, using a specific preparation of the sample. It will be shown that, using an adequate data processing scheme, it is possible to reconstruct 3D objects using an annular illumination sweep, thus limiting the number of acquisitions. This study paves the way to a versatile TDM configuration allowing for both reflection and transmission acquisitions from a single image acquisition.
On 3D imaging systems based on scattered ionizing radiation
Cécilia Tarpau, Javier Cebeiro, Mai K. Nguyen, et al.
The use of scattered radiation for tomographic reconstruction continues to be a current challenge for designing future imaging systems. In the energy range of X and gamma rays used for biomedical imaging and non-destructive evaluation, Compton scattering is the predominant effect. The one-to-one correspondence between the angle and energy of scattered radiation allows the exploitation of the energy information for reconstruction and, consequently, the emergence of modalities of Compton Scattering Tomography (CST). For two-dimensional systems, the modelling of the imaging process leads to new Radon transforms on circular arcs according to the geometry of the modality. In this context, a new modality, named Circular Compton Scattering Tomography (CCST), has been proposed recently. This system is made of a fixed source and a ring of detectors passing through the source. The purpose of this work is to extend this modality to three dimensions. Two geometries are proposed in this study: the first is made of a fixed source and fixed detectors placed on a sphere passing through the source, and the second considers detectors placed on a cylinder that also contains the fixed source. These three-dimensional setups conserve the assets of CCST, which include faster scanning compared to existing systems, convenience for small-object scanning, a fixed system (avoiding mechanical rotation) and compactness compared to planar configurations. The modelling of image acquisition in these new cases leads to Radon transforms on spindle tori resulting from the revolution of a circle about the source-detector axis. Numerical simulations are carried out in this work and show the theoretical feasibility of these systems.
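The energy-angle correspondence that CST exploits is the standard Compton relation. The short sketch below (an illustration, not taken from the paper) converts a measured scattered-photon energy into the scattering angle, which in turn fixes the circular arc (2D) or spindle torus (3D) on which the scattering site must lie.

```python
import numpy as np

M_E_C2_KEV = 511.0  # electron rest energy in keV

def compton_scattered_energy(e0_kev, theta):
    """Energy of a photon of initial energy e0_kev after Compton scattering by angle theta (rad)."""
    return e0_kev / (1.0 + (e0_kev / M_E_C2_KEV) * (1.0 - np.cos(theta)))

def scattering_angle(e0_kev, e_scat_kev):
    """Invert the Compton relation: recover the scattering angle from the measured energy."""
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_scat_kev - 1.0 / e0_kev)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Example: a 140 keV photon detected at 110 keV corresponds to a scattering angle near 90 degrees.
print(np.degrees(scattering_angle(140.0, 110.0)))
```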
Advanced Methods: Scattering
Fluorescence speckle image correlation spectroscopy (Conference Presentation)
Anirban Sarkar, Irène Wang, Aditya Katti, et al.
Fluorescence correlation spectroscopy is used extensively for the quantitative characterization of biomolecules at very low concentration. However, light aberration and scattering from tissues are two major factors that strongly affect the results. Although an adaptive optics arrangement can correct the aberrations of light to some extent, it fails completely to eliminate the light scattering effect. Recently, by exploiting the optical memory effect and the fact that the autocorrelation of a speckle pattern is a sharply peaked point spread function, non-invasive imaging of a fluorescent sample through a scattering medium has become possible. However, it is also very challenging to measure the dynamic properties of fluorescent molecules or particles through a scattering layer due to the poor signal-to-noise ratio. In this study, we employ a modality based on speckle cross-correlation, enabled via the optical memory effect, to study two-dimensional (2D) diffusion of fluorescent particles hidden behind a scattering film. We realized a 2D diffusing model system by confining fluorescent polystyrene beads of 1 µm diameter at the water/air interface behind a TiO2 diffuser. The experimental set-up was built in an epifluorescence configuration. The fluorescent beads were excited by an illumination speckle generated by the incident light in a plane-wave geometry while passing through the disordered TiO2 film. Similarly, the emitted fluorescent signal also traversed the same TiO2 film to generate the detection speckle, which was eventually recorded by a high frame rate CMOS camera. The experimental set-up has also been modelled numerically, where the speckle pattern is generated by a spherical wave transmitted through a scattering object in an optical microscope. Moreover, the dependence of the speckle size on the numerical aperture, the magnification, and the distance of the focal plane from the bead plane has also been studied. The numerical results have been compared with the experimental values to estimate the speckle size. Furthermore, we have evaluated the 2D diffusion constant by monitoring the widening of the 2D speckle cross-correlation function versus lag time. This result has been compared with that obtained with the single particle tracking method without the scattering layer. Quantitative agreement between the results obtained by the speckle cross-correlations and the single particle tracking technique without the diffuser establishes the potential application of this technique in correlation spectroscopy. Superimposed speckle patterns from multiple beads were also studied and the results will be presented at the conference.
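To make the cross-correlation analysis concrete, here is a minimal numpy sketch of the computation described above; it is an editorial illustration, not the authors' code, and the peak-width fitting routine (fit_peak_width) as well as the calibration from pixels to physical units are left as assumptions.

```python
import numpy as np

def speckle_cross_correlation(frame_a, frame_b):
    """FFT-based 2D cross-correlation of two mean-subtracted speckle frames."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    return np.fft.fftshift(np.real(np.fft.ifft2(fa * np.conj(fb))))

def peak_widths_vs_lag(frames, max_lag, fit_peak_width):
    """Width of the central cross-correlation peak as a function of lag time.

    fit_peak_width : user-supplied routine (e.g. a 2D Gaussian fit of the central
                     peak) returning a width measure in pixels -- an assumption here.
    For free 2D Brownian motion the mean squared displacement grows as 4*D*tau, so
    the diffusion constant D follows from a linear fit of the peak broadening
    against lag time tau (after calibration of the pixel size).
    """
    widths = []
    for lag in range(1, max_lag + 1):
        corr = np.mean([speckle_cross_correlation(frames[t], frames[t + lag])
                        for t in range(len(frames) - lag)], axis=0)
        widths.append(fit_peak_width(corr))
    return np.asarray(widths)
```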
Imaging the optical properties of turbid media with single-pixel detection
A. J. M. Lenz, P. Clemente Pesudo, V. Climent, et al.
In this contribution, we present an optical imaging system with structured illumination and integrated detection, based on the Kubelka-Munk light propagation model, for the spatial characterization of the scattering and absorption properties of turbid media. The proposed system is based on the application of single-pixel imaging techniques to achieve spatial resolution. Our strategy allows us to retrieve images of the absorption and scattering properties of a turbid-medium slab by using integrating spheres with photodiodes as bucket detectors. We validate our idea by imaging the absorption and scattering coefficients of a spatially heterogeneous phantom and an organic sample.
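The Kubelka-Munk two-flux model mentioned above relates the diffuse reflectance and transmittance of a slab to its absorption (K) and scattering (S) coefficients. The sketch below implements the standard forward relations for a slab of thickness d; the pixel-wise inversion strategy noted in the closing comment is an assumption, not necessarily the authors' procedure.

```python
import numpy as np

def kubelka_munk_rt(K, S, d):
    """Kubelka-Munk reflectance and transmittance of a slab of thickness d with
    absorption coefficient K and scattering coefficient S (per unit length).
    Standard two-flux result over a non-reflecting background; illustration only.
    """
    a = (S + K) / S
    b = np.sqrt(a**2 - 1.0)
    denom = a * np.sinh(b * S * d) + b * np.cosh(b * S * d)
    return np.sinh(b * S * d) / denom, b / denom   # (R, T)

# Hedged per-pixel inversion idea: with bucket measurements of R and T obtained
# through single-pixel imaging, K and S maps could be recovered pixel-wise, e.g.
# by a two-parameter least-squares fit of the model above.
R, T = kubelka_munk_rt(K=0.1, S=1.0, d=2.0)
print(R, T)
```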
Advanced Methods: Microscopy
Multiparametric, label-free two-photon imaging of metabolic function (Conference Presentation)
Metabolism plays a critical role in the function of cells and tissues. Changes in metabolic function are hallmarks of numerous conditions, including aging, cancer, obesity and neurodegenerative diseases. Such changes are highly dynamic and heterogeneous at the microscopic scale. Label-free, two-photon microscopic imaging offers opportunities to monitor and characterize such aspects of metabolic function non-destructively. Intensity and lifetime fluorescence measurements from endogenous NAD(P)H and FAD can serve as sensitive indicators of metabolic changes associated with redox state and mitochondrial organization. In fact, a combination of optical metabolic readouts including the redox ratio, NADH bound fraction, and mitochondrial clustering can provide important insights on the specific nature of the metabolic pathway perturbation that yielded the optical changes. Assessment of such information has the potential to improve our understanding, detection, and treatment of numerous diseases. For example, our studies highlight that the use of multiple optical metabolic readouts enables sensitive and specific detection of changes that are associated with adipose tissue type (brown vs beige vs white) and responses to stimuli. In addition, we have discovered that our ability to characterize depth-dependent variations within epithelial tissues, such as the skin and cervix, plays a key role in identifying alterations that occur at the onset of cancer and may be used to develop improved, non-invasive methods for cancer diagnosis. Monitoring of immune system cell activation is another important application area. Such findings pave the way for exploiting label-free, metabolic imaging techniques to understand dynamic cell interaction and their role in the development and treatment of a number of human diseases.
Laser scanning optical-frequency-comb microscopy for multivariate measurement related to amplitude and phase (Conference Presentation)
Takeo Minamikawa, Shota Nakano, Eiji Hase, et al.
Laser-scanning optical microscopy is widely used for the observation of microstructures and the analysis of molecular functions of samples with tightly focused light. Spectroscopic information is also available if a broadband light source is employed. General laser-scanning optical microscopy observes optical intensity by employing a sample- or laser-scanning system for the analysis of samples via reflectance, scattering, absorbance, and laser-induced phenomena. Another visualization method uses the optical phase, which can enhance the image contrast of highly transparent materials and nano-step structures. However, broadband spectroscopic phase-contrast imaging with a laser-scanning configuration is challenging because an interferometric configuration is required to retrieve the phase information at each wavelength. If the simultaneous measurement of amplitude and phase spectra is enabled in laser-scanning microscopy, it becomes possible to realize multivariate measurements that analyze more detailed information about samples, such as the complex refractive index and polarization characteristics, with tightly focused light. To overcome these limitations, in this study we proposed an optical-frequency-comb (OFC)-based laser-scanning optical microscopy. The OFC technique enables fast Fourier transform spectroscopy by using two well-defined OFC lasers without any mechanical scan in the time domain. The combination of laser-scanning optical microscopy and the OFC technique realized the simultaneous and spectroscopic observation of quantitative amplitude and phase images with tight focusing down to the diffraction limit. Furthermore, we realized the analysis of polarization by the direct observation of the amplitude and phase of the orthogonal components. We applied the proposed method to the observation of nano-step structures, phase objects and anisotropic materials to provide a proof-of-principle demonstration of the proposed method. Our proposed approach will serve as a unique and powerful tool for characterizing materials via complete characterization of optical information such as amplitude, phase, polarization and spectrum.
A shadow image microscope based on an array of nanoLEDs
Joan Canals, Victor Moro, Nil Franch, et al.
This work presents a compact, low-cost and straightforward shadow-imaging microscopy technique based on spatially resolved nano-illumination instead of spatially resolved detection. Independently addressable nano-LEDs on a regular 2D array provide the resolution of the microscope by illuminating the sample in contact with the LED array and creating a shadow image on a photodetector located on the opposite side. The microscope prototype presented here is composed of a GaN chip with an 8x8 array of 5 μm LEDs on a 10 μm pitch as light sources and a commercial CMOS image sensor with an integrated lens used as a light collector. We describe the microscope prototype and analyze the effect of the sensing-area size on image reconstruction.
Modelling, Computing, Design: Co-design
Can phase masks extend depth-of-field in localization microscopy?
Olivier Lévêque, Caroline Kulcsár, Hervé Sauer, et al.
In localization microscopy, the positions of isolated fluorescent emitters are estimated with a resolution better than the diffraction limit. In order to image thick samples, which are common in biological applications, there is considerable interest in extending the depth-of-field of such microscopes so as to make their accuracy as invariant as possible to defocus. For that purpose, we propose to optimize annular binary phase masks placed in the pupil of the microscope in order to generate a point spread function for which the localization accuracy is almost invariant along the optical axis. The optimization criterion is defined as the localization accuracy in the plane, expressed in terms of the Cramér-Rao bound. We show that the optimal masks significantly increase the depth-of-field of single-molecule imaging techniques relative to a standard microscope objective.
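For readers unfamiliar with the criterion, the following generic sketch computes the Cramér-Rao bound on lateral localization accuracy from the Fisher information of a pixelated Poisson measurement; the PSF model, photon numbers and numerical-derivative step are placeholders, and the mask-optimization loop is only outlined in the closing comment (this is an editorial illustration, not the authors' code).

```python
import numpy as np

def crb_xy(psf_model, theta, photons, background, eps=1e-3):
    """Cramér-Rao bound on lateral localization for a pixelated detector under
    Poisson noise (textbook sketch).

    psf_model(x0, y0) -> normalized PSF sampled on the pixel grid (sums to 1)
    theta = (x0, y0) emitter position; photons = expected signal photons;
    background = expected background photons per pixel.
    """
    x0, y0 = theta
    mu = photons * psf_model(x0, y0) + background
    # Numerical derivatives of the expected pixel counts w.r.t. the position.
    d_dx = photons * (psf_model(x0 + eps, y0) - psf_model(x0 - eps, y0)) / (2 * eps)
    d_dy = photons * (psf_model(x0, y0 + eps) - psf_model(x0, y0 - eps)) / (2 * eps)
    # Fisher information matrix for independent Poisson pixels.
    fisher = np.array([[np.sum(d_dx * d_dx / mu), np.sum(d_dx * d_dy / mu)],
                       [np.sum(d_dy * d_dx / mu), np.sum(d_dy * d_dy / mu)]])
    return np.sqrt(np.diag(np.linalg.inv(fisher)))  # CRB standard deviations (x, y)

# A mask-design loop would evaluate this bound for PSFs generated with candidate
# pupil-plane binary phase masks over a range of defocus values and keep the mask
# whose worst-case (or average) bound over the axial range is smallest.
```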
What is the depth of field reachable in practice with generic binary phase masks and digital deconvolution?
We investigate the practical behavior of a co-optimized hybrid system involving a generic binary phase mask and digital deconvolution. We perform experiments with a case-study optical system, with the observed scene lit by LEDs of different colors. By imaging a real scene and a depth-of-field (DoF) target, we show that the DoF reachable in practice matches with good accuracy the one predicted by simulation in the case of monochromatic illumination. We also characterize the drop in performance when using this type of system with an actual illumination wavelength departing from the nominal one.
Image sensor with parallel signal processing for motion detection
The paper describes the design of an unconventional, biologically inspired image sensor. It contains numerous optical channels similar to the facets of a natural compound eye. Each channel has several photodetectors with pre-amplifiers and a microcontroller with a multi-channel analog-to-digital converter. The signals coming from the photodetectors in each channel are amplified, converted into digital form and processed by a microcontroller. All channels independently perform parallel image processing and image analysis. All microcontrollers are attached to a microcontroller network. They send data through this network only if useful signals are registered. These microcontrollers can be reprogrammed to perform various image processing operations, including gradient search, spatial filtration, temporal filtration, signal correlation, neural network simulation and others. This design completely differs from the traditional image sensor architecture, which includes a mega-pixel focal plane array with sequential signal read-out and a multi-core digital signal processor. The proposed architecture can be considered as a large set of identical channels - “smart groups” of several pixels with read-out electronics and a digital microcontroller that can extract only the useful data and send it out. The working prototype of this image sensor has demonstrated the ability to measure the distribution of speed and direction of optical flow throughout its field of view in a very short time. The advantages and possible applications of this sensor are also discussed.
Applications: Biomed II
Video lens-free microscopy of human cells: from standard 2D to 3D organoids culture (Conference Presentation)
Cédric Allier, Sophie Morel, Anthony Berdeu, et al.
Research is continuously developing imaging methods to better understand the structure and function of biological systems. In this paper, we describe our work to develop lens-free microscopy as a novel means to observe and quantify cells in 2D and 3D cell culture conditions. At first, we developed a lens-free video microscope based on multiple-wavelength acquisitions to perform time-lapse 2D imaging of dense cell cultures inside the incubator. We demonstrated that novel phase retrieval techniques enable imaging thin cell samples at high concentration (~15000 cells over a large field of view of 29.4 mm2). The experimental data can next be further analyzed with existing cell profiling and tracking algorithms. As an example, we showed that a 7-day acquisition of a culture of HeLa cells leads to a dataset featuring 2x10^6 cell point measurements and 10^4 cell cycle tracks. Recently, we extended our work to the video-microscopy of 3D organoid cultures. We showed the capability of lens-free microscopy to perform 3D+time acquisitions of 3D organoid cultures. To our knowledge, our technique is the only one able to reconstruct very large volumes of 3D cell culture (~5 mm3) by phase contrast imaging. This new means of microscopy allowed us to observe a broad range of phenomena present in 3D environments, e.g. self-organization, displacement of large clusters, and merging and interconnection over long distances (>1 mm). In addition, this 3D microscope can capture the interactions of single cells and organoids with their 3D environment, e.g. traction forces generated by large cell aggregates over long distances, up to 1.5 mm. Overall, lens-free microscopy techniques favor ease of use and label-free experimentation as well as time-lapse acquisitions of large datasets. Importantly, we consider that these lens-free microscopy techniques can thus expand the repertoire of phenomena that can be studied within 2D and 3D organoid cultures.
Chromatic aberration-based phase and fluorescence microscope for cell cycle study (Conference Presentation)
Ondrej Mandula, Jean-Philippe Kleman, Francoise Lacroix, et al.
We designed a particularly simple, compact and robust microscope for phase and fluorescence imaging. The phase-contrast image is reconstructed from a single, approximately 100 µm defocused image with an algorithm based on constrained optimization of a Fresnel diffraction model. The fluorescence image is recorded in focus. No mechanical movement of the sample, the objective or any other part of the system is needed to switch between the phase-contrast and fluorescence modalities. The change of focus between phase (out-of-focus) and fluorescence (in-focus) imaging is achieved with chromatic aberration specifically enhanced by the optical design of our system. Our microscope is sufficiently compact (10x10x10 cm^3) to fit into a standard biological incubator. The simple and robust design reduces vibration and drift of the sample. The absence of motorized components makes the system robust and resistant to the humid conditions inside the biological incubator. These aspects greatly facilitate long-term observation of cell cultures. We can observe a thousand cells in parallel in a single field of view (1 mm^2) with resolution down to 2 µm. We show a FUCCI-labelled HeLa cell culture observed over three days directly in the incubator. FUCCI (fluorescent ubiquitination-based cell-cycle indicator) is a genetically encoded, two-colour (red and green) indicator of progression through the cell cycle: cells in G1 phase show red fluorescent nuclei, while cells in S, G2 and M phases display green fluorescence within the nuclei. We use the phase images for segmentation and tracking of the individual cells, which allows us to determine the level of fluorescence in each cell in the green and red fluorescence channels. We compare the obtained statistics with the data from a flow cytometer acquired at the end of the observation. We show that we can produce a statistically relevant time-resolved measurement of a cell population while keeping access to the individual cells.
Hot Topics II
Computational microscopy (Conference Presentation)
Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. Computers can replace bulky and expensive optics by solving computational inverse problems. This talk will describe new microscopes that use computational imaging to enable 3D fluorescence and phase measurement using simple hardware and advanced image reconstruction algorithms that are based on large-scale nonlinear non-convex optimization.
Modelling, Computing, Design: Deep Learning I
Intelligent imaging under extreme conditions (Conference Presentation)
Recently, artificial intelligence techniques such as deep learning (DL) have shown great potential in solving various inverse problems in computational imaging. In this presentation we will focus on the use of DL for computational imaging under certain extreme conditions. Three use cases will be discussed. Two of the three concern environmental conditions, namely imaging with extremely low light and imaging through very thick scattering media. The third one is about the neural network itself. We demonstrate that a neural network does not need to be trained at all before it can be used for some specific computational imaging tasks.
Deep neural networks for single-pixel compressive video reconstruction
Antonio Lorente Mur, Bruno Montcel, Françoise Peyrin, et al.
Single-pixel imaging is a paradigm that enables the capture of an image from a single point detector using a spatial light modulator. This approach is particularly interesting for optical set-ups where pixelated arrays of detectors are either too expensive or too cumbersome (e.g., multispectral, infrared imaging). It acquires the inner product between the image of the scene and a set of user-defined patterns that are sequentially uploaded onto the spatial light modulator. Compressed data acquisition reduces the acquisition time, although it leads to an ill-posed reconstruction problem, which is very challenging for real-time applications. Recently, neural networks have emerged as competitive alternatives to traditional reconstruction methods. Neural networks are parametric models that are trained by exploiting large datasets. Their noniterative nature allows for fast reconstructions, which opens the door to real-time image reconstruction from compressed acquisition. In this study, we evaluate the different networks for static and dynamic imaging. In particular, we introduce a recurrent neural network that is designed to exploit the spatiotemporal redundancy in videos via a memory state. We validate our algorithms on simulated data from the UCF-101 dataset, with a resolution of 128x128 pixels and a compression ratio of 98%. We also show experimentally that we can resolve small spectral differences in the spectrum of human skin measured in vivo.
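A minimal sketch of the single-pixel acquisition model and a linear baseline reconstruction is given below; it is an editorial illustration (random patterns, placeholder scene, pseudo-inverse solver), whereas the paper replaces the reconstruction step with trained, non-iterative neural networks, including a recurrent network with a memory state for video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward model of single-pixel acquisition: each measurement is the inner
# product of the (vectorized) scene with one pattern displayed on the modulator.
n = 128 * 128                      # image resolution used in the paper
m = int(0.02 * n)                  # ~98% compression -> 2% of n measurements
patterns = rng.standard_normal((m, n))   # stand-in patterns (assumption; the
                                         # actual pattern basis differs)
scene = rng.random(n)              # placeholder scene
measurements = patterns @ scene    # what the single point detector records

# Baseline linear reconstruction (least-squares via the pseudo-inverse).
# The paper's networks replace this step with a trained, non-iterative mapping;
# the recurrent variant additionally propagates a memory state across frames to
# exploit spatio-temporal redundancy in videos.
x_hat = np.linalg.pinv(patterns) @ measurements
print(x_hat.shape)
```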
Modelling, Computing, Design: Deep Learning II
Alternation of inverse problem and deep learning approaches for phase retrieval with lens-free microscopy (Conference Presentation)
Cédric Allier, Lionel Hervé, Olivier Cioni, et al.
Lens-free microscopy aims at recovering a sample image from diffraction measurements. The acquisitions are usually processed with an inverse problem approach to retrieve the sample image (phase and absorption). Perfect reconstruction of the sample image is, however, difficult to achieve, mostly because of the lack of phase information in the recording process. Recently, deep learning has been used to circumvent this challenge. Convolutional neural networks can be applied to the reconstructed image in a single pass to improve, e.g., the signal-to-noise ratio or the spatial resolution. Here, as an alternative, we propose to alternate between the two classes of algorithms, i.e., between the inverse problem approach and the data-driven approach. In doing so we intend to improve the reconstruction results, but also, importantly, to address the concerns associated with the use of deep learning, namely the generalization and hallucination problems. To demonstrate the applicability of our novel approach, we address the case of floating-cell samples acquired by means of lens-free microscopy. This is a challenging case with many phase wrapping artifacts that has never been solved using inverse problem approaches alone. We demonstrate that our approach is successful in performing the phase unwrapping and that it can next be applied to a very different cell sample, namely cultures of adherent mammalian cell lines.
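The alternation described above can be summarized by the following schematic, plug-and-play-style loop; the forward/adjoint operators, the step size and the network are all assumptions standing in for the actual lens-free model and trained CNN, so this is an editorial sketch rather than the authors' algorithm.

```python
import numpy as np

def alternate_inverse_and_network(y, forward, adjoint, network, x0, step, n_outer):
    """Alternate a gradient step on the data-fidelity term ||forward(x) - y||^2
    with a pass of a trained network acting on the current reconstruction.

    forward / adjoint : hologram formation model and its adjoint (assumptions)
    network           : any image-to-image CNN acting on the reconstruction
    """
    x = x0
    for _ in range(n_outer):
        # Inverse-problem half-step: move x toward consistency with the data.
        residual = forward(x) - y
        x = x - step * adjoint(residual)
        # Data-driven half-step: let the network clean up artifacts
        # (e.g. phase wrapping) that the physics-based step cannot resolve.
        x = network(x)
    return x
```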
Real-time imaging through moving scattering layers via a two-step deep learning strategy
Meihua Liao, Shanshan Zheng, Dajiang Lu, et al.
Many methods have demonstrated that it is possible to reconstruct an object hidden behind scattering layers. However, it remains a big challenge when dealing with dynamic and/or time-variant scattering media. Speckle correlation is a breakthrough technique which can noninvasively retrieve the image of an object from a single-shot captured pattern, but it does not allow imaging in real time because of the complicated iterative process. Recently, deep learning has attracted great attention in scattering imaging, but existing approaches usually employ an end-to-end mode, so that the scattering medium must be fixed during the training and testing process. Here, we develop a two-step deep learning strategy for imaging through moving scattering layers. In our proposed scheme, speckle autocorrelation de-noising and object image reconstruction from the autocorrelation are trained separately using two convolutional neural networks. Optical experiments show that our proposed scheme has outstanding performance for real-time imaging through moving scattering layers.
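The quantity feeding the first network is the speckle autocorrelation, which can be computed with a few FFTs (Wiener-Khinchin theorem), as in this illustrative sketch (not the authors' code):

```python
import numpy as np

def speckle_autocorrelation(speckle):
    """Autocorrelation of a speckle frame via the Wiener-Khinchin theorem
    (inverse FFT of the power spectrum). Within the optical memory effect this
    approximates the autocorrelation of the hidden object, which is what the
    reconstruction step works from. Illustration only.
    """
    s = speckle - speckle.mean()
    power_spectrum = np.abs(np.fft.fft2(s)) ** 2
    acorr = np.real(np.fft.ifft2(power_spectrum))
    return np.fft.fftshift(acorr) / s.size

# In the two-step scheme described above, one network would denoise this
# autocorrelation and a second network would recover the object image from it,
# replacing the iterative phase-retrieval step of classical speckle correlation.
```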
Deep learning based phase retrieval in quantitative phase microscopy
We propose and demonstrate a new phase retrieval method based on a deep neural network (DNN) structure. By inputting only one sample interferogram, measured from an off-axis holography based quantitative phase microscope (QPM), the DNN can output an accurate quantitative phase image of the sample without using a calibration interferogram, therefore significantly simplifying the measurement procedure. Importantly, our method can eliminate the need to perform phase unwrapping, making it easy to achieve real-time phase retrieval on different program platforms. We used different types of cells as test samples to characterize the performance of our method, and we found that the accuracy of our DNN-based phase retrieval method is similar to that of the standard Fourier transform based phase retrieval method, while the background phase noise is reduced. Considering that the experimental procedures and image processing steps are significantly simplified, we envision this new phase retrieval method will make QPM more easily accessible in bioimaging and material metrology applications in the future.
Modelling, Computing, Design: Computational Imaging
Building an inverse approach for the reconstruction of in-line holograms: a parallel with Fienup’s phase retrieval technique (Conference Presentation)
Fabien Momey, Loïc Denis, Thomas Olivier, et al.
In this paper, we propose to present the general ingredients involved in an inverse problems methodology dedicated to the reconstruction of in-line holograms, and compare it with the classical Gerchberg-Saxton or Fienup alternating projection strategies for phase retrieval [1,2,3]. An inverse approach [4,5] consists in retrieving an optimal solution to a reconstruction/estimation problem from a dataset, knowing an approximate model of its formation process. The problem is generally formulated as an optimization problem that aims at fitting the model to the data, while favoring a priori knowledge on the targeted information using regularizations and constraints. An appropriate resolution method has to be designed, based on a convex optimization framework. We develop the end-to-end inverse problems methodology on a case study: the reconstruction of an in-line hologram of a collection of weakly dephasing objects. This simple problem allows us to explain the relevant physical considerations (type of objects, diffraction physics) needed to derive the appropriate model, and to present classical constraints and regularizations that can be used in image reconstruction. Starting from these ingredients, we introduce a simple yet efficient method to solve this inverse problem, belonging to the class of proximal gradient algorithms [6,7]. A special focus is put on the connections between the numerous alternating projection strategies derived from Fienup’s phase retrieval technique and the inverse problems framework. In particular, an interpretation of Fienup’s algorithm as iterates of a proximal gradient descent for a particular cost function is given. We discuss the advantages provided by the inverse problems methodology. We illustrate both strategies on reconstructions from simulated and experimental holograms of micrometric beads. The results show that the transition from alternating projection techniques to the inverse problems formulation is straightforward and advantageous.
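A generic proximal-gradient iteration of the kind discussed above can be sketched as follows; the hologram-formation operator, its adjoint, the step size and the proximity operator are placeholders, not the specific choices of the paper.

```python
import numpy as np

def proximal_gradient(y, H, H_adj, prox, x0, step, n_iter):
    """Generic proximal gradient descent for min_x 0.5*||H(x) - y||^2 + g(x).

    H / H_adj : hologram formation model and its adjoint (e.g. propagation of a
                weakly dephasing transmittance to the sensor) -- assumptions.
    prox      : proximity operator of the regularizer g (e.g. soft-thresholding
                for a sparsity prior, or a projection enforcing a support or
                positivity constraint, as in alternating-projection methods).
    """
    x = x0
    for _ in range(n_iter):
        grad = H_adj(H(x) - y)        # gradient of the data-fidelity term
        x = prox(x - step * grad)     # proximal (or projection) step
    return x

# With 'prox' chosen as a hard projection onto an object-domain constraint set and
# a unit step, one recovers an alternating-projection flavour close to Fienup's
# error-reduction scheme, which is the connection discussed in the abstract.
```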
Analysis of three-dimensional objects in quantitative phase contrast microscopy: a validity study of the planar approximation for spherical particles
Jérôme Dohet-Eraly, Loïc Méès, Thomas Olivier, et al.
Phase contrast microscopy is highly valuable in medicine, biology, fluid dynamics, etc., as it allows in-focus observation of transparent or semi-transparent objects, which are difficult to analyze with conventional bright field microscopy. Indeed, such samples mainly affect the phase of the optical field, i.e., the shape of the wavefront, but not the light intensity. Consequently, techniques providing quantitative phase contrast in microscopy, e.g., digital holography, are suitable for transparent object characterization: assessing the thickness, or more precisely the optical thickness, of an object directly from its phase profile is a very common approximation. However, the phase profile in the object median plane is generally different from its thickness profile, as actual three-dimensional objects cause wavefront distortion. This paper discusses the validity and limitations of this approximation. The presented study considers simulated homogeneous, transparent, spherical particles. The optical field behind the particle, computed using Mie theory, is back-propagated to the object plane by means of the Rayleigh–Sommerfeld propagation equation. We have shown that the approximation is better for larger spheres and, to a certain extent, for a smaller refractive index difference between the object and the surrounding medium. Moreover, the error in assessing the object thickness directly from the central value of the phase profile has been studied. Considering, for example, a siliceous sphere in oil or in air, the error increases rapidly above 5% for diameters smaller than the illumination wavelength. The impact of a slight defocus has also been studied.
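For reference, the planar (projection) approximation under test simply maps the local sphere thickness to a phase, as in the short sketch below; the rigorous Mie calculation and Rayleigh–Sommerfeld back-propagation used in the paper are not reproduced here, and the bead parameters in the example are illustrative.

```python
import numpy as np

def projected_phase_sphere(radius, delta_n, wavelength, x, y):
    """Phase profile of a homogeneous sphere under the planar ('projection')
    approximation: phase = (2*pi/lambda) * delta_n * thickness, with
    thickness t(r) = 2*sqrt(R^2 - r^2) inside the sphere and 0 outside.
    """
    r2 = x**2 + y**2
    thickness = 2.0 * np.sqrt(np.clip(radius**2 - r2, 0.0, None))
    return 2.0 * np.pi / wavelength * delta_n * thickness

# Example: 2 um diameter bead (n = 1.45) in oil (n = 1.40), 633 nm illumination.
x = np.linspace(-2e-6, 2e-6, 256)
X, Y = np.meshgrid(x, x)
phi = projected_phase_sphere(1e-6, 1.45 - 1.40, 633e-9, X, Y)
print(phi.max())   # central phase = (2*pi/lambda) * delta_n * 2R ~ 0.99 rad
```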
Active chromatic depth from defocus for industrial inspection
Benjamin Buat, Pauline Trouvé-Peloux, Frédéric Champagnat, et al.
In this paper we propose a new concept for a compact 3D sensor dedicated to industrial inspection, combining chromatic Depth From Defocus (DFD) and structured illumination. Depth is estimated from a single image using local estimation of the defocus blur. As industrial objects usually show poor texture information, which is crucial for DFD, we rely on structured illumination. In contrast with state-of-the-art approaches for active DFD, which project sparse patterns on the scene, our method exploits a dense textured pattern and provides dense depth maps of the scene. Besides, to overcome the depth ambiguity and dead zone of DFD with a classical camera, we use an unconventional lens with chromatic aberration, providing spectrally varying defocus blur in the camera color channels. We provide comparisons of depth estimation performance for several projected patterns at various scales, based on simulations and real experiments. The proposed method is then qualitatively evaluated on a real industrial object. Finally we discuss the perspectives of this work, especially in terms of co-design of a 3D active sensor using DFD.
Computational multimodal and multifocus 3D microscopy
Julia R. Alonso, Alejandro Silva, Miguel Arocena
Multimodal microscopy aims at combining complementary images obtained from different imaging techniques. We have developed a custom-built microscope that is capable of separate or simultaneous image acquisition from multiple optical imaging modalities, such as bright-field and fluorescence microscopy. The use of an electrically focus-tunable lens (ETL) in the microscope allows us to acquire multi-focus z-stacks of thick 3D biological samples. This information can be combined by post-processing algorithms, allowing for the reconstruction of "new" images with relevant information, e.g., extended depth of field, as well as 3D visualization of the sample. After calibration of the ETL and image registration of the z-stack, the algorithms are performed in the Fourier domain without segmentation of the focused regions or estimation of the depth map, which usually introduce inaccuracies into the reconstruction.
Inverse problem approach for the reconstruction of lateral shearing digital holograms
Dylan Brault, Thomas Olivier, Corinne Fournier, et al.
Unstained biological samples (e.g. cells or bacteria) are mostly transparent objects, optically described by their optical thickness and refractive index changes. Knowledge of this information could help to better identify, or at least classify, cells according to their type or state. Holographic microscopy techniques are effective methods to obtain quantitative phase profiles of biological samples. These techniques, however, may require high temporal stability to measure cell thickness fluctuations. A simple and low-cost way to ensure temporal stability consists in using a “common path” configuration. In this configuration the reference and signal beams follow the same optical path, leading to high temporal stability. The beam paths are split by a glass plate whose thickness introduces a lateral shift between the beams reflected by its front and back surfaces. This configuration is an off-axis holographic microscopy setup, since the glass plate introduces an angle between the two reflected spherical wavefronts. The inverse problem approach proposes to reconstruct the objects directly from the holograms, without any filtering of the signal and with prior information on the objects. In this framework, a good knowledge of the image formation model is important. We propose a reconstruction algorithm based on a parametric inverse problem approach to reconstruct holograms of phase objects acquired with the lateral shearing digital holographic system. Assuming the noise in the data to be white and Gaussian, it mainly consists in fitting a model to the data. The algorithm is applied to out-of-focus off-axis holograms of silica micro-beads recorded with the lateral shearing configuration.
Advanced Methods: QPI/DH
Partial coherence effects in digital holographic microscopy (Conference Presentation)
Frank Dubois, Jérôme Dohet-Eraly, Catherine Yourassowsky
Accounting for the nonstationary correlated noise in digital holography (Conference Presentation)
In in-line digital holography, the background of the recorded images is sometimes much higher than the signal of interest. It can originate, for example, from the diffraction of dust or from fringes coming from multiple reflections in the optical components. It is often correlated, nonstationary and not constant over time. Detecting a weak signal superimposed on such a background is challenging. Detection of the pattern then requires a statistical modeling of the background. In this work, spatial correlations are locally estimated from several background images. A fast algorithm that computes detection maps is derived. The proposed approach is evaluated on images obtained from experimental data recorded with a holographic microscope.
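A textbook version of the detection statistic for a known pattern in correlated Gaussian background is the whitened matched filter sketched below; it illustrates the role of the locally estimated covariance but is not the fast algorithm derived in the paper, and the patch extraction and regularization choices are assumptions.

```python
import numpy as np

def detection_map(patches, signature, background_patches):
    """Matched-filter detection statistic for a known pattern in correlated
    Gaussian background noise (textbook sketch).

    patches            : (M, d) data patches extracted around candidate locations
    signature          : (d,) expected pattern of the signal of interest
    background_patches : (K, d) patches from signal-free background images,
                         used to estimate the local covariance.
    Returns one test statistic per patch; thresholding yields the detection map.
    """
    mu = background_patches.mean(axis=0)
    cov = np.cov(background_patches, rowvar=False)
    cov += 1e-6 * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])  # regularize
    w = np.linalg.solve(cov, signature)              # whitened matched filter
    norm = np.sqrt(signature @ w)
    return (patches - mu) @ w / norm
```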
Fourier ptychographic microscopy using Fresnel propagation with reduced number of images
High-throughput microscopy, in the sense of large areas imaged at high resolution, demands costly hardware such as objective lenses with high numerical aperture and high-sensitivity cameras, typically combined with lateral mechanical scanning of the sample. The field of view and the resolution of an imaging system depend strongly on the applied objective lens, with higher resolution coming at the cost of a smaller field of view. To address this limitation of conventional microscopes, both aperture synthesis and phase retrieval techniques are combined in the recent computational imaging approach of Fourier Ptychographic Microscopy (FPM). The gigapixel space-bandwidth product of FPM is obtained by combining, through phase retrieval, low-resolution images obtained with illumination diversity, which is facilitated by ensuring that the input images overlap in the Fourier domain. In practice, the illumination is achieved using one LED at a time from an LED array. A drawback of FPM is that it requires long acquisition times and has significant computational cost. Here, we present a refined FPM procedure that uses Fresnel propagation and reduces the number of exposures through multiplexing and symmetry considerations, thus slashing the amount of data and the processing time. The multiplexing strategy works by illuminating groups of three LEDs that are chosen from one half-plane of the LED array – an approach valid for pure amplitude samples. We have experimentally demonstrated that the FPM-recovered image has approximately the same resolution as recovery based on one exposure from each of the LEDs.
Terahertz Imaging I: Joint Session
Terahertz reflective ptychography
C. Tang, Y. Zhao, F. Tan, et al.
As a widely used lensless coherent diffraction imaging approach, ptychography has recently been implemented in the terahertz range. Combined with the unique penetration property of THz radiation, THz ptychography allows inspecting visually opaque samples and retrieving both amplitude and phase information. Here, we demonstrate, for the first time to our knowledge, THz ptychography in a reflective configuration. Due to the large wavelength (96.5 μm), THz ptychography requires a small recording distance. Particular care has been taken in the experimental setup to minimize the recording distance. The extended ptychographic iterative engine (ePIE) is applied to quantitatively reconstruct the amplitude and phase of the object wavefront. A lateral resolution of 180 μm and a depth resolution of 1.1 μm are achieved.
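For context, the core ePIE update at a single scan position can be sketched as follows; the propagators, the reflective THz geometry and the update constants are placeholders rather than the exact implementation used here.

```python
import numpy as np

def epie_update(obj, probe, diffraction_amp, propagate, back_propagate,
                alpha=1.0, beta=1.0):
    """Single ePIE update at one scan position (schematic).

    obj, probe       : complex arrays (object patch at this position, probe)
    diffraction_amp  : measured amplitude (sqrt of the recorded intensity)
    propagate / back_propagate : wavefield propagators to/from the detector plane
    """
    exit_wave = obj * probe
    detector_wave = propagate(exit_wave)
    # Detector-plane constraint: keep the phase, impose the measured modulus.
    corrected = diffraction_amp * np.exp(1j * np.angle(detector_wave))
    new_exit = back_propagate(corrected)
    diff = new_exit - exit_wave
    obj_new = obj + alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * diff
    probe_new = probe + beta * np.conj(obj) / (np.abs(obj).max() ** 2) * diff
    return obj_new, probe_new
```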
Advanced Methods: Advanced Devices and Modalities for Imaging I
Ultrahigh speed imaging, from vacuum tube technology to solid state sensors, a state of the art (Conference Presentation)
Ultra-high-speed imaging is widely used in material characterization, biochemistry, time-resolved spectroscopy, fluorescence lifetime imaging, photoluminescence, diffuse optical tomography, etc. The very fastest cameras are based on vacuum tube technology, such as the streak tube, image intensifier or photomultiplier, and achieve the best temporal resolution among direct light detection devices. For instance, a time-gated camera operating with an image intensifier tube is able to sample a scene with a spatial resolution of 1000x1000 pixels with a gate of 1 ns, leading to a pixel rate of 10^15 pixels per second, i.e., one petapixel/s. In order to manage this tremendous pixel rate, the acquired images are temporally stored on a phosphorous screen and are read out with a classical camera at a conventional pixel rate. This approach is known as the burst imaging concept, which consists in keeping the images in the sensor during the acquisition and reading them out afterward. Thus, an intensified gated camera is able to record only one image of a single fast event, but by using several cameras acquiring the same scene through a light splitter, a movie can be recorded with a temporal resolution of 1 ns. In order to push the temporal resolution further, the concept of streak imaging can be used. It consists in reducing the spatial information to a single line of pixels instead of a full frame in order to enhance the temporal resolution. The history of high-speed imaging shows that this approach can improve the temporal resolution by a factor of 100 to 1000. For instance, a streak camera, also operating with a vacuum tube device, can still sample one petapixel/s but offers a temporal resolution of 1 picosecond with a reduced spatial resolution of 1x1000 pixels. The drawbacks of this technology are that vacuum tube-based cameras are bulky, fragile and expensive. This is the reason why much work is ongoing on the design of specific solid-state sensors for high-speed imaging. Conventional CMOS or CCD sensors are limited by the extraction speed of the image to a pixel rate of a few gigapixels per second. On the contrary, systems storing the video frames inside the sensor are not limited by the readout speed, and the so-called burst image sensors (BIS) achieve a pixel rate of more than 1 terapixel per second. State-of-the-art video BIS can sample and store about 100 frames of a 2D scene with a frame rate of more than one megaframe per second (fps), up to 100 megafps, by using on-chip analog or digital memory. The latest achievements in ultra-high-speed CMOS image sensors using the streak imaging concept push the line rate up to several gigafps and offer sub-nanosecond temporal resolution. Moreover, these innovative sensors add new features such as post-trigger acquisition, unrealizable with vacuum tube-based cameras.
Full-field all-optical snapshot technique for QUADrature (FAST-QUAD) demodulation of optical signals at radio-frequencies: principle and experimental proof-of-concept (Conference Presentation)
Swapnesh Panigrahi, Julien Fade, Romain Agaisse, et al.
Optical intensity modulation/demodulation techniques have long found numerous applications in telemetry, free-space communications and the optical characterization of scattering media. Upgrading these techniques to a full-field, real-time imaging modality can allow massive multiplexing, an essential asset not only for 3D imaging or optical communications, but also for imaging in turbid media (medical diagnosis, underwater vision, imaging in colloids, or navigational aid for safe transport). In this context, we have recently proposed a new concept of Full-field All-optical Snapshot Technique for QUADrature demodulation imaging (FAST-QUAD), whose capacity for real-time image demodulation has been demonstrated up to frequencies of 500 kHz, without requiring any synchronization between the receiver and the intensity-modulated source(s) in the imaged scene. This technique relies on an all-optical architecture, at the heart of which are an electro-optical crystal and appropriate polarization optics, making it possible to spatially multiplex four transmission "gates" in quadrature with each other (0°, 90°, 180°, 270°), addressing four sub-images detected on the same single standard sensor (CCD/CMOS). This setup behaves as a quadrature lock-in detection circuit, well known in electronics, but in the optical domain and in a massively spatially multiplexed way, using the acquisition time of the camera as a low-pass integrator. This optical module can therefore be inserted in front of any camera, and allows the number of electronic components to be minimized. This property provides FAST-QUAD with a major asset, as its operating frequency is fully and continuously tunable in the RF range, which allowed us to establish an experimental proof-of-concept between 0 Hz (DC) and 500 kHz on the first prototype built in the laboratory. We will detail the instrumental conception of this prototype as well as the calibration/processing pipeline developed. Experimental validation results and examples of application of the FAST-QUAD approach will also be presented.
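The four-gate quadrature demodulation at the heart of FAST-QUAD follows the standard four-bucket relations sketched below; the sign and scale conventions are illustrative, and the actual system additionally relies on the calibration/processing pipeline mentioned in the abstract.

```python
import numpy as np

def quadrature_demodulate(i0, i90, i180, i270):
    """Recover modulation amplitude and phase from four sub-images integrated
    behind gates shifted by 0, 90, 180 and 270 degrees (standard four-bucket
    quadrature demodulation; conventions here may differ from FAST-QUAD's).
    """
    in_phase = i0 - i180      # proportional to A*cos(phi)
    quadrature = i90 - i270   # proportional to A*sin(phi)
    amplitude = 0.5 * np.hypot(in_phase, quadrature)
    phase = np.arctan2(quadrature, in_phase)
    dc = 0.25 * (i0 + i90 + i180 + i270)
    return amplitude, phase, dc
```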
Imaging systems based on active optical signal converters
Maxim V. Trigub, Dmitriy V. Shiyanov, Nikolay A. Vasnev, et al.
The use of metal-atom active media makes it possible to convert optical signals while transferring the adjusted contrast. Owing to the properties of metal vapor active media, the intensity of the signal can be increased within a narrow spectral range. As a result, the signal-to-noise ratio can be dramatically increased. This makes it possible to build active optical systems for high-speed imaging of processes that are hidden by high-intensity radiation. In these systems, active filtration of the image is available thanks to the induced transitions of metal atoms. Active optical systems with metal vapor brightness amplifiers (called laser monitors) have shown high efficiency in reducing the effect of background radiation on the observed processes in real-time mode. Moreover, the use of different active media makes it possible to change the spectral content of the input signal while increasing its intensity. In this work, the results of research on such systems are presented. A method for transforming IR images into visible images is discussed. The work presents the results of using different active optical systems for high-speed imaging.
Characterization of double-deformable-mirror adaptive optics for IR beam shaping in hyperspectral imaging
Mohammad Azizian Kalkhoran, Ann Fitzpatrick, A. Douglas Winter, et al.
Vibrational microspectroscopy via Fourier transform infrared (FTIR) spectroscopy faces an experimental trade-off among signal-to-noise ratio (SNR), acquisition time, spatial resolution, and sample coverage. This is mainly associated with the type of broadband source: e.g. low-brightness thermal sources with high flux for large-field-of-view imaging at low resolution, or the low étendue of synchrotron radiation infrared (SRIR) for diffraction-limited scanning microanalysis at high magnification [1]. Adaptive optics (AO), in this case a deformable mirror (DM), is a potent tool for tackling the problem by modulating the intensity of the high-brightness structured SRIR beam toward a homogeneous field illumination for IR imaging at high magnification. The latter is required for an efficient coupling of the SRIR source to a multi-pixel detector such as a focal plane array (FPA) [2]. Additionally, the DM makes it possible to achieve different beam shapes, optimized for different Cassegrain IR objectives. Regardless, the quality of the generated beam relies upon the performance of the adaptive elements, i.e. the actuators and their linear and reproducible response to the applied voltage. Moreover, the beam shaping capability of a single DM in controlling the beam position and angle is limited by its actuator influence functions. In this work, we implemented two DMs for intensity shaping of the complex SRIR beam. A variation of multi-conjugate AO is implemented to characterize the performance of the DMs and their actuator transfer functions at multiple locations. An IR-sensitive microbolometer array has been optically conjugated to the focal plane of individual actuators and to the far field of the DM, in order to probe the corresponding actuation response. By analysing each actuator's response individually, a measure of linear independence, uniformity of response, and cross-coupling can be obtained over a spectral range from the visible to the near and mid IR. Additionally, by assembling the vectorized version of each actuator response, the transfer matrix can be formed. This matrix describes the relationship between the actuation effect on the beam and the response of the IR microbolometer at the given conjugate planes. Based on this characterization, we assess the stability of the deformable mirror for open-loop (i.e. without feedback) operation.
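The transfer-matrix characterization described above can be summarized by the following sketch, which assembles the matrix from individually poked actuators and derives open-loop commands by a regularized least-squares inversion; the poking procedure, units and regularization are assumptions, not the authors' calibration pipeline.

```python
import numpy as np

def build_transfer_matrix(actuator_responses):
    """Assemble the transfer matrix: column j is the vectorized microbolometer
    response to a unit command on actuator j (generic linear-response sketch).
    """
    return np.column_stack([r.ravel() for r in actuator_responses])

def open_loop_commands(transfer_matrix, target_pattern, rcond=1e-3):
    """Least-squares actuator commands that best approximate a target intensity
    pattern in open loop, via a regularized pseudo-inverse of the matrix."""
    return np.linalg.pinv(transfer_matrix, rcond=rcond) @ target_pattern.ravel()
```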
Advanced Methods: Advanced Devices and Modalities for Imaging II
Piezo-actuated adaptive elements in scanning microscopy (Conference Presentation)
In order to obtain 3D information about an object, laser-scanning techniques like confocal microscopy require a scan in three dimensions. The axial scan is commonly achieved by mechanical translation of the objective or the object. However, inertia is a problem, which limits the achievable scan rates and leads to motion artifacts. The use of adaptive optical elements bears the potential to overcome these limitations. Adaptive lenses have been applied in different kinds of microscopes to perform the axial scan without the need for any mechanical translation. In this contribution, we introduce a novel bi-actor adaptive lens that enables manipulation of both the focus position and the specimen-induced spherical aberrations that occur in deep tissue applications, as well as systematic scan-induced aberrations. To achieve the desired lens behavior against environmental influences and hysteresis effects, in-situ monitoring based on digital holography or partitioned-aperture wavefront sensing is applied. Experiments on zebrafish and on phantom samples prove the capabilities of our approach. Besides axial scanning, adaptive lateral scanning is also addressed: lateral scans are often realized using galvo scanners. Although this approach works well, it requires a folded beam path, resulting in bulky setups. We introduce piezo-actuated adaptive prisms as a suitable alternative that enables an optical setup in an in-line transmissive configuration instead. With this device, wavefront tilts of up to ±7° can be induced, enabling lateral scanning. We show characterization measurements and first proof-of-concept applications of the adaptive prisms.
Near-infrared active and selective polarization imaging by orthogonality-breaking: calibration of the acquisition chain
Active polarimetric imaging by orthogonality breaking is an alternative polarimetric imaging method developed at the Institut FOTON, Rennes. By illuminating a sample with a dual-frequency dual-polarization (DFDP) beam whose polarizations are orthogonal, it is possible to characterize its diattenuation and the orientation of the anisotropy in a single acquisition. However, this technique is not sensitive to other polarimetric effects such as birefringence or pure depolarization, and requires a detection/demodulation chain that introduces non-linearity effects and does not allow results to be obtained quantitatively. In this paper, after a presentation of the orthogonality-breaking imaging system, we will detail the calibration/correction protocol which is now implemented to take the effects of non-linearities into account. Then, we will show that it is possible, by adding a polarimetric analysis module, to make this method sensitive to the main polarimetric effects. The results obtained on a simulated operational scene will be presented.
Large-area transmission modulators for 3D time-of-flight imaging
The indirect time-of-flight principle is one possibility for building a three-dimensional (3D) camera system. Available products based on this principle mostly use special CMOS sensors for demodulation of the optical signal at the receiver. This special CMOS chip can be replaced by a standard image sensor combined with a quantum-well electroabsorption modulator. In this case, the modulator heavily influences the 3D camera performance, and the characteristics of large-area devices are of particular interest. Transmission electroabsorption modulators with sizes in the square-millimeter range have been fabricated for operating wavelengths of 850 and 940 nm. While the 850 nm devices were realized as non-resonant structures, a resonant design was developed for the 940 nm devices to overcome the limitation in the number of quantum wells. Investigations of the static and dynamic behavior show extinction ratios up to 2.5 dB and corner frequencies up to 30 MHz. A single-point distance measurement setup demonstrates the high potential of the devices for the 3D application.
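For context, a minimal sketch of textbook four-phase indirect time-of-flight demodulation is given below (numpy, with an assumed 20 MHz modulation frequency). It illustrates the principle that a modulator-plus-standard-sensor camera relies on; it is not the authors' processing chain, and the sign convention of the phase estimate may differ from theirs.

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # assumed modulation frequency, Hz (illustrative)

def itof_depth(i0, i90, i180, i270):
    """Four-phase indirect time-of-flight demodulation (textbook form).

    i0..i270 are intensity frames captured with the demodulating element
    driven at relative phases of 0, 90, 180 and 270 degrees.
    """
    phase = np.arctan2(i270 - i90, i0 - i180)       # wrapped phase
    phase = np.mod(phase, 2 * np.pi)                # map into [0, 2*pi)
    depth = C * phase / (4 * np.pi * F_MOD)         # round trip -> factor 1/2
    amplitude = 0.5 * np.hypot(i270 - i90, i0 - i180)
    return depth, amplitude
```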
Advanced Methods: Multi-Hyperspectral
Hyperspectral phase retrieval
Vladimir Katkovnik, Igor Shevkunov, Karen Eguiazarian
Hyperspectral (HS) imaging retrieves information from data acquired across a broad range of spectral channels. The object to be reconstructed is a 3D cube in which two coordinates are spatial and the third is spectral. We assume that this cube is complex-valued, i.e. characterized by spatially and spectrally varying amplitude and phase. The observations are squared magnitudes measured as intensities summed over the spectrum. HS phase retrieval is formulated as the reconstruction of an HS complex-valued object cube from intensity observations corrupted by Gaussian noise. The considered observation model, the projection of the object onto the sensor plane, includes varying delay operators such that identical but mutually phase-shifted broadband copies of the object interfere at the sensor plane. The derived iterative algorithm includes an original proximity spectral analysis operator and sparsity modeling for complex-valued 3D cubes. It is demonstrated that the HS phase retrieval problem can be solved without the random phase coding of wavefronts typical of conventional phase retrieval techniques. The performance of the new algorithm for phase imaging is demonstrated in simulation tests and in the processing of experimental data.
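The sketch below encodes one plausible reading of this observation model: each spectral channel interferes with its own delayed copy, and the sensor records the intensities summed over the mutually incoherent channels. The function and variable names are illustrative assumptions, and the exact forward model of the paper may differ.

```python
import numpy as np

def hs_intensity(u_cube, wavenumbers, delay):
    """Hypothetical forward-model sketch for HS phase retrieval.

    u_cube:       complex object cube, shape (K, H, W), one 2D field per
                  spectral channel k with wavenumber wavenumbers[k].
    wavenumbers:  array of shape (K,).
    delay:        optical path delay introduced by the varying delay operator.

    Each channel interferes with its own phase-delayed copy; the sensor sums
    the resulting intensities over the (mutually incoherent) channels.
    """
    phases = np.exp(1j * wavenumbers * delay)             # shape (K,)
    interfered = u_cube * (1.0 + phases[:, None, None])   # object + delayed copy
    return np.sum(np.abs(interfered) ** 2, axis=0)        # (H, W) intensity
```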
Efficient light collection from a micromirror array: towards simultaneous hyperspectral and hypertemporal mapping of luminophores (Conference Presentation)
The digital micromirror device (DMD) serves in many computational optical setups as a means of encoding an image with a desired pattern. Its most prominent use is in so-called single-pixel camera experiments, where light reflected from the DMD is collected onto a single-pixel detector. This often requires efficient and homogeneous collection of light from a relatively large chip onto the small area of an optical fiber or spectrometer slit. The effort is further complicated by the fact that the DMD acts as a diffractive element, which becomes especially prominent in the infrared (IR) spectral region. Light diffraction causes serious spectral inhomogeneities in the light collection. We studied the effect of light diffraction with a whiskbroom hyperspectral camera. Based on this knowledge, we designed a variety of approaches to light collection that use combinations of lenses, off-axis parabolic mirrors, a diffuser, a light concentrator, and integrating spheres. Using an identical optical setup, we mapped the efficiency and spectral homogeneity of each approach. The selected benchmark was the ability to collect the light into fiber spectrometers working in the visible and IR range (up to 2500 nm). As expected, we found that integrating spheres provide homogeneous light collection, which, however, suffers from low efficiency. The best compromise between the performance parameters was provided by the combination of an engineered diffuser with an off-axis parabolic mirror. We used this configuration to build a computational microscope able to carry out hyperspectral imaging of a sample in a broad spectral range (400-2500 nm) and to map photoluminescence (PL) decay via the time-correlated single-photon counting technique. This allowed us to create one-to-one maps of absorption and PL inhomogeneities in samples. We see such a setup as an ideal tool for studying the properties of luminophores and the effect of inhomogeneities on PL properties.
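A quick way to see why diffraction matters is to apply the grating equation to the micromirror pitch: the diffraction-order angles shift strongly with wavelength, so a fixed collection geometry favours some bands over others. The sketch below uses an illustrative pitch and illumination angle (assumptions, not the parameters of the actual chip or setup).

```python
import numpy as np

# DMD treated as a periodic structure: grating equation
# d * (sin(theta_i) + sin(theta_m)) = m * lambda.
d = 7.6e-6                                  # assumed mirror pitch, m
theta_i = np.deg2rad(24.0)                  # assumed illumination angle

def order_angle(wavelength, m):
    """Diffraction angle of order m, or None if the order is evanescent."""
    s = m * wavelength / d - np.sin(theta_i)
    return float(np.degrees(np.arcsin(s))) if abs(s) <= 1.0 else None

for wl in (0.5e-6, 1.0e-6, 2.5e-6):         # visible to short-wave IR
    print(wl, [order_angle(wl, m) for m in range(-3, 4)])
```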
Mapping the optical dielectric response of isolated monolayer MoS2 by push-broom microspectroscopy
Xingchen Dong, Michael H. Köhler, Kun Wang, et al.
Two-dimensional van der Waals materials are attractive for photonics and optoelectronics due to their distinctive layer-dependent optical properties. Optical properties based on light-matter interactions have been revealed by modern imaging and spectroscopy techniques. Hyperspectral imaging microscopy working in line-scan mode (push-broom microspectroscopy) can provide abundant spectral information over a large area compared to conventional spectroscopy techniques, with a higher acquisition speed than point-scan techniques such as atomic force microscopy and Raman imaging microscopy. This contribution studies in depth the reconstruction of 3D datacubes and the extraction of the optical responses of the sample. Monolayer MoS2, a subclass of semiconducting two-dimensional materials, is fabricated by mechanical exfoliation on a SiO2/Si substrate with an oxide thickness of 285 nm. The isolated monolayer MoS2 is observed and identified with a conventional optical microscope. The custom-built push-broom microspectroscope is used to scan the region of interest, with the whole spectrum of each line recorded at every frame. The spectral information of every point is collected and 3D spectral data sets are reconstructed for feature extraction and property analysis. To realize thickness mapping of the flakes, linear unmixing is employed to calculate the abundance of isolated monolayer MoS2 on the SiO2/Si substrate, improving flake identification performance. The characteristic spectrum of monolayer MoS2 is acquired by averaging the spectra from the monolayer MoS2 flake. The optical dielectric response is then analyzed by Kramers-Kronig-constrained analysis and Fresnel-law-based analysis. The optical dielectric function is calculated and compared based on the refractive index and medium thickness. This detailed analysis of optical dielectric responses highlights the feasibility of push-broom microspectroscopy for two-dimensional materials characterization.
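As a generic stand-in for the abundance-estimation step, the sketch below performs pixel-wise linear unmixing by non-negative least squares. The data layout, endmember choice, and use of scipy's NNLS solver are assumptions for illustration, not the paper's processing pipeline.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(cube, endmembers):
    """Pixel-wise linear unmixing by non-negative least squares.

    cube:        datacube of shape (H, W, B) with B spectral bands.
    endmembers:  reference spectra of shape (M, B), e.g. monolayer MoS2
                 and the bare SiO2/Si substrate.
    Returns abundance maps of shape (H, W, M).
    """
    H, W, B = cube.shape
    E = endmembers.T                                 # (B, M) mixing matrix
    abundances = np.zeros((H * W, E.shape[1]))
    for i, spectrum in enumerate(cube.reshape(-1, B)):
        abundances[i], _ = nnls(E, spectrum)         # non-negative coefficients
    return abundances.reshape(H, W, -1)
```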
Endoscopic probe for multispectral 3D measurements and imaging
Demid D. Khokhlov, Alexander S. Machikhin, Alexey V. Gorevoy, et al.
Industrial endoscopic remote visual inspection and minimally invasive endoscopic procedures in medicine rely on a wide range of visualization and measurement techniques for inspecting the inner surfaces of hard-to-reach objects. In common practice, three-dimensional visualization, three-dimensional geometrical measurement, and spectral imaging are realized independently in different specialized endoscopic devices. Simultaneous implementation of these techniques in a single versatile endoscopic system may increase the efficiency of inspection and diagnostic procedures. We propose a combined approach to multispectral stereoscopic endoscopic imaging. A prototype endoscopic probe able to carry out remote three-dimensional geometrical measurements as well as spectral visualization and measurements is demonstrated.
Poster Session
Digital holographic imaging based on Mach-Zehnder near common-path interferometer
Liudmila Burmak
The paper addresses near common-path digital holography schemes based on a Mach-Zehnder interferometer. These schemes can be implemented as compact add-on modules for conventional imaging devices (microscopes, endoscopes, etc.) and allow the phase structure of objects to be measured. The schemes have no light losses at the beam splitter used to combine the beams. This is especially important because a large fraction of the light energy is lost in forming the reference beam by spatial filtering.
Residue determined threshold phase unwrapping method
In this paper, we propose a robust phase unwrapping algorithm applicable to optical interferometry, based on combining residue theory with local phase information to mask out discontinuous regions during unwrapping. Unlike previous methods, which require a subjective, empirical choice of the threshold on the second differences of a locally unwrapped phase, this technique sets the threshold value in a straightforward way. The technique aims to minimize the loss of phase information caused by erroneously flagged pixels and simplifies the unwrapping process. Experimental results on complex discontinuous objects are presented to illustrate the validity of the technique.
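For reference, the standard residue computation that such an algorithm builds on can be written compactly: the wrapped phase differences summed around every 2x2 pixel loop equal 2*pi times an integer charge of +1, -1 or 0. The sketch below implements only this textbook step, not the authors' thresholding method.

```python
import numpy as np

def wrap(phi):
    """Wrap phase differences into (-pi, pi]."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def residue_map(psi):
    """Residues of a wrapped phase map psi (2D array), standard definition."""
    d1 = wrap(psi[:-1, 1:] - psi[:-1, :-1])   # top edge,    left  -> right
    d2 = wrap(psi[1:, 1:] - psi[:-1, 1:])     # right edge,  top   -> bottom
    d3 = wrap(psi[1:, :-1] - psi[1:, 1:])     # bottom edge, right -> left
    d4 = wrap(psi[:-1, :-1] - psi[1:, :-1])   # left edge,   bottom -> top
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
```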
Quantitative 3D fluorescence diffuse optical tomography with 3D ultrasound imaging registration in small animal study
An innovative dual-modality 3D fluorescence/3D ultrasound tomography system is demonstrated in this work. The system includes an electron-multiplying charge-coupled device (EMCCD), a 660 nm fiber-coupled laser, and a customized 10-15 MHz single-element ultrasound transducer on a 3D rotating scanning device. We combined multiple ultrasound images from different sections to obtain a whole-body 3D mesh of the mouse, which provides anatomical information. In this study, we demonstrated the accuracy of the system through an experiment on 4T1 tumor-bearing nude mice.
Non-rigid registration of 3D point clouds of deformed liver models with Open3D and PyCPD
The medical field has always benefited from the latest technological advances, such as radiography, robotics, or, more recently, augmented reality. Progress in image analysis and augmented reality has led to major therapeutic advances in surgery as well as in diagnosis. One of the most important techniques of medical image analysis is registration. Image registration is the process of matching two or more images: more concretely, it consists in finding the transformation that minimizes the difference between them. The transformation can be rigid (composed of rotations and translations only), affine (composed of rotations, translations, and scalings), or non-rigid. Even though rigid registration may seem easy to perform, developing fast, precise, and robust rigid registration of complex objects remains challenging, especially for 3D objects. One of the best-known and most widely used rigid-registration algorithms is the Iterative Closest Point algorithm, implemented notably in the Open3D library. However, this method cannot handle non-rigid registration, which is why we use the Coherent Point Drift algorithm with non-rigid deformations, through the PyCPD library. In this paper, we present an efficient method for the non-rigid registration of deformed liver models that is robust to translations, rotations, and cropping, although it fails on the most complex cases.
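A minimal usage sketch combining the two libraries mentioned above is given below. The file names, voxel size, and alpha/beta smoothness values are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
import open3d as o3d
from pycpd import DeformableRegistration
from scipy.spatial import cKDTree

# Hypothetical input files: a reference liver surface and a deformed one.
source = o3d.io.read_point_cloud("liver_reference.ply")
target = o3d.io.read_point_cloud("liver_deformed.ply")

# Downsample so that Coherent Point Drift stays tractable.
X = np.asarray(target.voxel_down_sample(5.0).points)   # target (fixed)
Y = np.asarray(source.voxel_down_sample(5.0).points)   # source (moving)

# Non-rigid CPD; alpha and beta control the smoothness of the deformation
# field (illustrative values).
reg = DeformableRegistration(X=X, Y=Y, alpha=2.0, beta=2.0)
TY, _ = reg.register()                                  # warped source points

# Quality check: mean distance from each warped point to its nearest target point.
d, _ = cKDTree(X).query(TY)
print("mean nearest-neighbour residual:", d.mean())
```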
Line-field confocal optical coherence tomography based on a Mirau interferometer
Weikai Xue, Jonas Ogien, Olivier Levecq, et al.
Line-field confocal optical coherence tomography (LC-OCT) is an imaging technique based on time-domain OCT with line illumination and line detection. The focus is continuously adjusted during the depth scan of the sample, yielding a lateral resolution (~1 μm) similar to the axial resolution at a central wavelength of ~800 nm. The LC-OCT prototypes reported so far have all been based on a Linnik-type interferometer. In this paper we present an LC-OCT device based on a Mirau interferometer, which has the advantage of being more compact and lighter. In vivo imaging of human skin with a resolution of 1.3 μm × 1.1 μm (lateral × axial) is demonstrated at 12 frames per second over a field of 0.9 mm × 0.4 mm (lateral × axial).
Liquid-crystal polarization state generator
An important problem in imaging polarimetry occurs when the optical axis of the system and the center of the camera sensor become misaligned. This typically happens after rotating the polarizing-element mounts in order to change the input state of polarization. This work presents a liquid-crystal polarization state generator, devoid of moving parts, which can generate any arbitrary state of polarization (SOP) on the Poincaré sphere through phase-shift manipulation of two voltage-controlled variable retarders. The proposed optical system consists of a linear polarizer followed by two liquid-crystal retarders (LCR1 and LCR2) and a quarter-wave plate. We show that by varying the retardance of LCR1 while keeping the LCR2 retardance constant, the SOP moves along the corresponding meridian of the Poincaré sphere; when the reverse is done, the SOP follows a trajectory along the given parallel. Experimental results are compared to numerical simulations in which we calculate the Stokes parameters and represent the SOP trajectories on the Poincaré sphere as the voltage addressed to the LCRs is changed. Good agreement between theory and experiment is obtained when the Fabry-Perot interference effects in these variable retarders are taken into account. The system can also be used as a polarization state analyzer; to verify its performance as an analyzer, the Mueller matrix of a retarder plate is determined by imaging polarimetry.
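A simple Mueller-calculus simulation reproduces this qualitative behaviour. In the sketch below the element orientations are illustrative choices rather than those of the actual setup, and Fabry-Perot effects in the retarders are ignored.

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for a frame rotation by angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def retarder(theta, delta):
    """Mueller matrix of a linear retarder, fast axis at angle theta."""
    m = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, np.cos(delta), np.sin(delta)],
                  [0, 0, -np.sin(delta), np.cos(delta)]])
    return rot(-theta) @ m @ rot(theta)

POLARIZER_H = 0.5 * np.array([[1, 1, 0, 0],
                              [1, 1, 0, 0],
                              [0, 0, 0, 0],
                              [0, 0, 0, 0]])

def generated_stokes(delta1, delta2, th1=np.pi/4, th2=0.0, th_qwp=np.pi/4):
    """Output Stokes vector of the assumed polarizer + LCR1 + LCR2 + QWP chain."""
    s_in = np.array([1.0, 0.0, 0.0, 0.0])          # unpolarized input light
    chain = (retarder(th_qwp, np.pi / 2) @ retarder(th2, delta2)
             @ retarder(th1, delta1) @ POLARIZER_H)
    return chain @ s_in

# Sweeping delta1 with delta2 fixed traces a closed curve on the Poincare sphere.
curve = np.array([generated_stokes(d, 0.0) for d in np.linspace(0, 2 * np.pi, 90)])
```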
Review of spectral and polarization imaging systems
Sumera Sattar, Pierre-Jean Lapray, Alban Foulonneau, et al.
Spectral and Polarization Imaging (SPI) is an emerging sensing method that combines the acquisition of both spectral and polarization information of a scene. It can benefit various applications such as appearance characterization from measurement, reflectance property estimation, diffuse/specular component separation, and material classification. In this paper, we present a review of recent SPI systems from the literature and describe them in terms of the technology employed, imaging conditions, and targeted application.
Extraction of information of an object hidden behind a diffusing layer using dual beam illumination
Arnav Tamrakar, Surya Kumar Gautam, Dinesh N. Naik
Extracting information about an object hidden behind a translucent obstacle is a difficult task, and it becomes much more complicated when the obstacle is a diffusing or scattering medium. The scattering medium randomizes the phase, which cannot be retrieved or undone by simple techniques. Many interference-based approaches have been proposed to address this goal. They have produced good results but are also expensive and complex because they require a separate reference beam to form the interference pattern. We propose a novel method in which the hidden object is illuminated simultaneously with two mutually shifted beams; the two beams reflected from the object then produce an interference pattern, so no separate reference beam is required. The purpose of the dual-beam illumination is to eliminate the need for a shearing device to introduce shear between the object fields, making the technique simpler and more cost-effective. To validate the technique, a simulation is performed in which we extract information about the deformation of the object under loading. In addition, various parameters and their effects on the extraction of information are discussed. Because the technique inherits the merits of shearography, the gradient information of the hidden object can also be detected.
Polarimetric imaging microscopy in real-time
Juan M. Llaguno, Ariel Fernández
Imaging polarimetry allows visual information to be extracted from the polarization of light, with recent applications ranging from remote sensing with satellite images to the study of the anisotropic properties of biomolecules, as well as the development of denoising algorithms. In particular, polarized-light microscopy can reveal, among other properties, particular changes in the structure of some cells. In conventional approaches to polarimetric microscopy, a sample is imaged with a rotating polarizer and analyzer, so that the different polarization states are acquired under a time-division strategy. We propose a space-division multiplexing technique in which multiview sensing of a microscopic sample through an adequate polarization mask allows us to obtain the Stokes parameters of the sample in real time. Validation experiments retrieving the degree of linear polarization and the angle of polarization of the sample are presented.
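Assuming, for illustration, a four-view polarization mask with analysers at 0°, 45°, 90° and 135° (the actual mask of the paper may differ), the linear Stokes parameters and the derived quantities follow directly, as in the sketch below.

```python
import numpy as np

def stokes_from_views(i0, i45, i90, i135):
    """Linear Stokes parameters from four simultaneously captured views
    with analysers at 0, 45, 90 and 135 degrees (illustrative assumption)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)            # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear pol.
    aop = 0.5 * np.arctan2(s2, s1)                           # angle of pol., rad
    return s0, s1, s2, dolp, aop
```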
Spherical object segmentation in digital holographic microscopy by deep-learning
Digital holographic microscopy can image both absorbing and translucent objects. Due to the presence of twin-images and out-of-focus objects, the task of segmenting the objects from a back-propagated hologram is challenging. This paper investigates the use of deep neural networks to combine the real and imaginary parts of the back-propagated wave and produce a segmentation. The network, trained with pairs of back-propagated simulated holograms and ground truth segmentations, is shown to perform well even in the case of a mismatch between the defocus distance of the holograms used during the training step and the actual defocus distance of the holograms at test time.
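A minimal sketch of the kind of pre-processing implied here, angular-spectrum back-propagation of the hologram followed by stacking the real and imaginary parts as the two network input channels, is shown below. The sampling parameters and sign convention are assumptions; evanescent components are simply clipped.

```python
import numpy as np

def backpropagate(hologram, wavelength, pixel_pitch, z):
    """Angular-spectrum back-propagation of an in-line hologram.

    Returns an array of shape (2, ny, nx): real and imaginary parts of the
    back-propagated wave, suitable as a two-channel network input.
    """
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))   # clip evanescent
    field = np.fft.ifft2(np.fft.fft2(hologram) * np.exp(-1j * kz * z))
    return np.stack([field.real, field.imag])
```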
Optical architectures for pattern recognition with the generalized Hough transform
Ariel Fernández
The generalized Hough transform (GHT) is a well-established technique for recognizing geometrical features in images corrupted by noise, with disconnected boundaries, or where the target is partially occluded. As an alternative to the conventional procedure of directly imaging the scene of interest and mapping it to an accumulator (Hough parameter) space, the GHT of the image can be obtained directly in an incoherently illuminated optical architecture whose pupil is codified according to the target of interest. The parallel processing inherent to optical devices then allows real-time performance and shift-invariant pattern recognition. Moreover, by exploiting the redundancy derived from multiview sensing of the input and its out-of-focus capture with an adequate pupil array, the GHT can be obtained in a snapshot with invariance to target shift, scale, and orientation. Finally, in order to enhance the robustness of the original algorithm in detecting an object from a single image, we can also match a pair of corresponding (according to a perspective shift) templates to a given stereo pair of a 3D scene, since the redundancy resulting from the simultaneous transformation of both images can overcome the drawbacks (resulting, for example, from occlusion) that affect the separate matching of an individual template to a given image.
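For translation-only detection, and ignoring the gradient-orientation indexing of the R-table, the GHT accumulator reduces to the cross-correlation of the scene edge map with the (reflected) template edge map, which is essentially the operation the incoherent optical correlator performs. A minimal digital sketch of that reduced case follows; edge extraction is assumed to have been done beforehand.

```python
import numpy as np

def ght_accumulator(edge_map, template_edges):
    """Translation-only GHT accumulator as an FFT-based cross-correlation.

    Both inputs are binary 2D arrays of identical shape; peaks of the result
    mark candidate positions of the template's reference point.
    """
    F = np.fft.fft2(edge_map)
    T = np.fft.fft2(template_edges)
    return np.real(np.fft.ifft2(F * np.conj(T)))
```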
Fourier ptychographic microscopy and Mueller matrix microscopy: differences and complementarity
Anastasia Bozhok, Jean Dellinger, Yoshitate Takakura, et al.
When a light wave passes through a sample, it undergoes a phase delay related to the optical path it has taken. The amount of the shift is proportional to the product of the refractive index and the thickness of the sample and cannot be measured using conventional light microscopy. Furthermore, the velocity of light propagation in an optically anisotropic medium may depend on its polarization state; this causes a phase shift between the polarization components of the oscillating electric field called retardance. Both quantitative phase and polarimetric retardance are commonly used to examine biological tissues. This work investigates the complementarity of the information retrieved by two optical modalities: Fourier Ptychographic Microscopy and Mueller Matrix Microscopy. We present the two microscopes we constructed and compare the results obtained on histological slides for experimental validation.
Simple and low-cost method for particulate matter size determination based on far-field interference pattern image processing
In this work, a simple optical method for measuring particle size distribution is investigated as part of a particulate matter monitor. Particulate matter, defined as solid or liquid particles suspended in air, is widely considered one of the main air pollutants and a source of smog and global warming. Different particle sizes pose different health hazards: particles with diameters less than 10 µm (PM10) can enter the nasal cavity, those smaller than 7 µm (PM7) can penetrate the throat, and those smaller than 2.5 µm (PM2.5) can enter the lungs. These health risks have driven global interest in monitoring PM sizes and concentrations. The setup used in this work consists of a laser source in the visible spectral range (633 nm wavelength) illuminating opaque particulate matter with a diameter distribution in the order of 10 µm to 50 µm. The far-field diffraction pattern is captured using a CCD camera, without stringent requirements on the pixel size or the number of pixels. The captured image is processed to identify the diffraction pattern due to each particle, from which its size can be retrieved. This method offers an inexpensive, non-intrusive, and simple implementation for finding the size distribution of particulate matter. With good potential for further improvement and system miniaturization, particle sizing by laser diffraction pattern analysis opens the door to inexpensive personal air-quality monitoring for real-time consumer applications.
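The size-retrieval step can be illustrated with the first minimum of the far-field pattern: by Babinet's principle an opaque particle produces the same diffraction rings as an aperture of equal diameter, with the first Airy minimum at sin(theta) = 1.22*lambda/d. The geometry in the sketch below is an illustrative assumption, not a measurement from the paper.

```python
import numpy as np

def particle_diameter(r_min, z, wavelength=633e-9):
    """Estimate the diameter of an opaque particle from the radius r_min of the
    first dark ring of its far-field diffraction pattern recorded at distance z."""
    sin_theta = r_min / np.hypot(r_min, z)        # angle of the first minimum
    return 1.22 * wavelength / sin_theta

# Example: first dark ring 8 mm from the axis, camera 0.5 m from the particle.
print(particle_diameter(8e-3, 0.5))               # roughly 4.8e-5 m, i.e. ~48 um
```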
Deep learning-based image transmission through a multi-mode fiber
Image transmission through a multi-mode fiber is a difficult task given the complex interference of light in the fiber, which leads to random speckle patterns at its distal end. With traditional methods and techniques, it is impractical to reconstruct a high-resolution input image using only the intensity of the corresponding output speckle. In this work, we train three Convolutional Neural Networks (CNNs) with input-output pairs of a multi-mode fiber and test the learning on images outside the training set. The three deep learning models have modern UNet, ResNet and VGGNet architectures and are trained with 31,200 grey-scale handwritten letters of the Latin alphabet. After training, 5,200 images outside the training set are used for testing, and the models successfully reconstruct the input images from the output speckle patterns with average fidelities ranging from 81% to 90%. Our results show the superiority of the ResNet-based architecture over UNet and VGGNet in reconstruction accuracy, achieving up to 97% fidelity in a short time, which can be attributed to the success of the ResNet architecture in learning non-linear systems compared to its counterparts. We believe that the application of machine learning techniques to imaging, along with its contributions to biophysics, can reshape the telecommunication industry and will thus be a cornerstone of future optics and photonics studies.
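As an illustration only, a toy model of the speckle-to-image mapping is sketched below in PyTorch. It is far smaller than the UNet/ResNet/VGGNet models described above, the framework choice and image size are assumptions, and the random tensors merely stand in for measured speckle/letter pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeckleToImage(nn.Module):
    """Tiny illustrative CNN mapping a 64x64 speckle intensity pattern
    to a 64x64 reconstructed letter (not the architectures of the paper)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = SpeckleToImage()
speckle = torch.rand(8, 1, 64, 64)        # stand-in batch of output speckles
target = torch.rand(8, 1, 64, 64)         # stand-in batch of input letters
reconstruction = model(speckle)           # same spatial size as the input
loss = F.mse_loss(reconstruction, target)
loss.backward()                           # one illustrative training step
```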
11351 Additional Presentations
Etendue conserving light shaping using irregular fly's eye condensers (Conference Presentation)