Moore's law has fostered the steady growth of digital image processing, yet computational complexity remains a significant bottleneck for many applications. At the same time, research in the field of optical image processing has matured, potentially bypassing the limitations of digital approaches and giving rise to new applications. Additionally, from an image acquisition perspective, the rapid convergence of digital imaging devices is driving robust industrial growth of photonics technologies.

Already, photonics-based enablers can be found in a myriad of imaging and visualization applications such as head-mounted displays, light-field and holographic displays, image sensing, illumination systems, and high-performance light engines - all of which represent significant volumes in the photonics market. Along with the growing interest in emerging multimedia applications, the demand for new photonics enablers is steadily increasing, and new technologies are continuously being created to meet these needs.

In miniaturizing digital cameras, new challenges emerge when striving for high performance combined with mass-volume production. This requires the design of sophisticated lens elements and new types of imaging optics, optimized image processing pipelines, compact, high-performance sensors, etc. Microscopy and tomography solutions typically rely on the combined deployment of digital, optical and photonics components where technological advances give rise to unprecedented accuracy and functionality.

These new applications have specific requirements and pose new challenges for optical design. Finally, we have recently observed the rapid emergence of (deep) learning-based solutions in imaging, processing, and visualization.

This conference aims to create a joint forum for both research and application communities in optics, photonics, and digital imaging technologies to share expertise, solve present-day application bottlenecks and propose new application areas. Consequently, this conference has a broad scope, ranging from basic to applied research. The conference sessions will address (but not be limited to) the following topics:
Conference 12998

Optics, Photonics and Digital Technologies for Imaging Applications VIII

9 - 11 April 2024 | Londres 2/Salon 7, Niveau/Level 0
  • 1: Biomedical Image Processing
  • 2: Machine Learning and Image Processing
  • Hot Topics II
  • 3: Camera Optics
  • 4: Computer-generated Holography I
  • 5: Computer-generated Holography II
  • 6: Computational Microscopy
  • Posters-Wednesday
  • Hot Topics III
  • 7: Augmented Reality and Holographic Display Systems
  • 8: Computational Imaging
  • 9: Computer Vision Applications
  • Digital Posters
Session 1: Biomedical Image Processing
9 April 2024 • 13:30 - 14:30 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: Adrian Bradu, Univ. of Kent (United Kingdom)
12998-1
Author(s): José Carlos Moreno Tagle, Jimena Olveres, Boris Escalante Ramírez, Univ. Nacional Autónoma de México (Mexico)
9 April 2024 • 13:30 - 13:50 CEST | Londres 2/Salon 7, Niveau/Level 0
This study explores the impact of synthetic medical images, created with Stable Diffusion, on neural network training for lung condition classification. Using a hybrid dataset combining real and synthetic images, diverse state-of-the-art vision models were trained. Neural networks learned effectively from synthetic data; their performance is similar or superior to that of models trained purely on real images, as long as the training is carried out under equal conditions. We selected ConvNeXt-Small as our test architecture. However, models trained on hybrid data seem to reach a performance limit when exploring different training regimes. In contrast, a simpler architecture trained with only real images can take advantage of more complex training regimes to raise its final performance. Both models were evaluated on a real-image-only validation dataset provided by a radiologist. The study concludes by comparing the performance of our top AI models with that of radiologists.
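As an illustrative sketch only (not the authors' code; the folder paths, epoch count and hyperparameters are assumptions), the hybrid-training setup described above can be approximated by concatenating real and synthetic image folders and fine-tuning a ConvNeXt-Small classifier:

```python
# Hypothetical sketch of hybrid real + synthetic training; paths are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real = datasets.ImageFolder("data/real_lung_images", transform=tf)    # assumed path
synthetic = datasets.ImageFolder("data/sd_synthetic", transform=tf)   # assumed path
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

model = models.convnext_small(weights="IMAGENET1K_V1")
model.classifier[2] = nn.Linear(model.classifier[2].in_features, len(real.classes))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # assumed number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```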
12998-2
Author(s): Meryem B. Avci, Sevim D. Yasar, Arif E. Cetin, Izmir Biomedicine and Genome Ctr. (Turkey)
9 April 2024 • 13:50 - 14:10 CEST | Londres 2/Salon 7, Niveau/Level 0
Our talk introduces a fully automated imaging-based liquid processing platform to revolutionize traditional pipetting in cell culture studies. This system minimizes operational errors by automating processes, significantly enhancing efficiency. With expedited liquid handling and cell imaging capabilities, it maps cellular information. The platform comprises key modules, including Motion, Pipetting, Imaging, and Software, providing precise XYZ-axis movement, fluid transfer, cell imaging, and algorithm-driven post-processing and hardware control. Utilizing a CNC-based Motion Module, the platform navigates well-plates precisely based on g-code. The Imaging Module displays cells, while the Pipetting Module ensures efficient solution handling. The system software coordinates the modules, captures and processes data, and guides investigations into cellular pathways and therapeutic profiling. The platform incorporates an incubator with customizable settings, maintaining optimal conditions for cellular analyses. This comprehensive system signifies a significant leap in laboratory technology, promising heightened precision, efficiency, and adaptability for advancing cellular research.
12998-3
Author(s): Maria Castro-Fernandez, Guillermo V. Socorro-Marrero, Carlos Vega, Nerea Márquez-Suárez, Cristina Marcello, Raquel León, Instituto Univ. de Microelectrónica Aplicada, Univ. de Las Palmas de Gran Canaria (Spain); Himar Fabelo, Fundación Canaria Instituto de Investigación Sanitaria de Canarias (Spain), Instituto Univ. de Microelectrónica Aplicada, Univ. de Las Palmas de Gran Canaria (Spain); Gustavo M. Callicó, Instituto Univ. de Microelectrónica Aplicada, Univ. de Las Palmas de Gran Canaria (Spain)
9 April 2024 • 14:10 - 14:30 CEST | Londres 2/Salon 7, Niveau/Level 0
The incidence of skin cancer, the most common cancer, has increased in recent decades, yet it can have a five-year survival rate of over 99% if treated early. This work describes a novel hyperspectral dermoscope for early skin cancer detection, able to capture spatial and spectral information in the visible (VIS) and near-infrared (NIR) ranges by using Liquid Crystal Tunable Filters (LCTFs). KURIOS-VB1 and KURIOS-XE2 filters were used for the VIS and NIR ranges, respectively, providing 136 wavelengths with 5 nm of spectral resolution. A dichroic mirror combines the output light paths, illuminating the skin's surface via a fiber optic ring light. Reflected light is captured by a 1.3-megapixel monochrome camera. Additionally, a custom hand-held 3D-printed part integrates the optics and control circuitry. The proposed characterization method used to optimize the camera exposure time for each wavelength has proven effective in obtaining a flat white reference and gathering information in the range of 450 to 1050 nm and, especially, at critical wavelengths such as the test wavelengths evaluated closer to the limit bands of the LCTFs (450 and 600 nm for VIS, and 750 and 900 nm for NIR).
Session 2: Machine Learning and Image Processing
9 April 2024 • 14:30 - 15:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: Gabriel Cristóbal Perez, Instituto de Óptica "Daza de Valdés" (Spain)
12998-5
Author(s): Alan Mauricio Camargo, Jimena Olveres, Boris Escalante-Ramírez, Univ. Nacional Autónoma de México (Mexico)
9 April 2024 • 14:30 - 14:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Deep learning models (DLMs) encounter challenges in medical image segmentation and classification tasks, primarily due to the requirement for a substantial volume of annotated images, which are both time-consuming and expensive to acquire. In our work, we utilize neural style transfer (NST) to augment a small dataset of ultrasound images, significantly improving DLM performance. Additionally, we explore style interpolation to generate new target styles specifically tailored for ultrasound images. In summary, our objective is to demonstrate the potential utility of neural style transfer in scenarios with limited datasets, particularly in the context of ultrasound images, using the breast ultrasound image dataset of Al-Dhabyani et al. (2020).
12998-6
Author(s): Carlos Minutti-Martinez, Boris Escalante-Ramirez, Jimena Olveres, Univ. Nacional Autónoma de México (Mexico)
9 April 2024 • 14:50 - 15:10 CEST | Londres 2/Salon 7, Niveau/Level 0
Explainability and bias mitigation are crucial aspects of deep learning (DL) models for medical image analysis. Generative AI, particularly autoencoders, can enhance explainability by analyzing the latent space to identify and control variables that contribute to biases. By manipulating the latent space, biases can be mitigated in the classification layer. Furthermore, the latent space can be visualized to provide a more intuitive understanding of the model's decision-making process. In our work, we demonstrate how the proposed approach enhances the explainability of the decision-making process, surpassing the capabilities of traditional methods like GradCam. Our approach effectively identifies and mitigates biases in a straightforward manner, without necessitating model retraining or dataset modification, showing how Generative AI has the potential to play a pivotal role in addressing explainability and bias mitigation challenges, enhancing the trustworthiness and clinical utility of DL-powered medical image analysis tools.
12998-7
Author(s): Gloria Bueno, Lucia Sanchez, Univ. de Castilla-La Mancha (Spain); Elvira Perona, M. Angeles Muñoz, Alejandro Hiruelas, Univ. Autónoma de Madrid (Spain); Jesus Salido, Univ. de Castilla-La Mancha (Spain); Gabriel Cristóbal, Instituto de Óptica (Spain)
9 April 2024 • 15:10 - 15:30 CEST | Londres 2/Salon 7, Niveau/Level 0
Obtaining high-quality images for training AI models in the field of plankton identification, particularly cyanobacteria, is a challenging and time-critical task that necessitates the expertise of biologists. Data augmentation techniques, including conventional methods and GANs, can improve model performance, but GANs typically require large training datasets to produce high-quality results. To tackle this issue, we employed the StyleGAN2ADA model on a dataset of 34 cyanobacteria genera. We evaluated the generated images using both qualitative and quantitative metrics. Qualitative assessments involved a psychophysical test conducted by three expert biologists to identify shape or texture deviations that might impede visual classification. Additionally, three non-reference image quality metrics based on perceptual features were used for quantitative assessment. Images meeting quality standards were incorporated into classification models, resulting in a 20% performance improvement compared to the original dataset. This comprehensive evaluation process ensured the suitability of generated images for enhancing model performance.
12998-8
Author(s): Rodrigo Ramos, Univ. Nacional Autónoma de México (Mexico); Jimena Olveres, Boris Escalante-Ramírez, Univ. Nacional Autónoma de México (Mexico), Ctr. de Estudios en Computación Avanzada, Univ. Nacional Autónoma de México (Mexico)
9 April 2024 • 15:30 - 15:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Previous research into interpretable CNN classifiers involved comparing semantic segmentation masks against heat maps for visual explanations. We propose to cut and test CNN classifiers with enhanced explainability in the middle layers within a fully convolutional network (FCN). Semantic segmentation, vital in computer vision for object recognition, demands efficiency for performance, energy, and hardware costs. The study involves characterizing and comparing our minimal FCN against other lightweight segmentation models. The emphasis lies on data consumption as a determinant for model evaluation, especially with marginal differences in accuracy. Characterizing FCN architectures by data requirements will enable comprehensive comparisons for specific applications.
Break
Coffee Break 15:50 - 16:30
Hot Topics II
9 April 2024 • 16:30 - 18:05 CEST | Auditorium Schweitzer, Niveau/Level 0
Session Moderator:
Anna Mignani, Istituto di Fisica Applicata "Nello Carrara" (Italy)
2024 Symposium Chair

16:30 hrs
Welcome and Opening Remarks
Speaker Introduction
13004-500
Author(s): Kathy Lüdge, Technische Univ. Ilmenau (Germany)
9 April 2024 • 16:35 - 17:20 CEST | Auditorium Schweitzer, Niveau/Level 0
Optical cavities with nonlinear elements and delayed self-coupling are widely explored candidates for photonic reservoir computing (RC). For time series prediction applications that appear in many real-world problems, energy efficiency, robustness and performance are key indicators. With this contribution I want to clarify the role of internal dynamic coupling and timescales on the performance of a photonic RC system and discuss routes for optimization. By numerically comparing various delay-based RC systems, e.g., quantum-dot lasers, spin-VCSELs (vertical-cavity surface-emitting lasers), and semiconductor amplifiers, regarding their performance on different time series prediction tasks, two messages are emphasized: First, a concise understanding of the nonlinear dynamic response (bifurcation structure) of the chosen dynamical system is necessary in order to use its full potential for RC and prevent operation with unsuitable parameters. Second, the input scheme (optical injection, current modulation, etc.) crucially changes the outcome, as it changes the direction of the perturbation and therewith the nonlinearity. The input can be further utilized to externally add a memory timescale that is needed for the chosen task and thus offers easy tunability of RC systems.
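The generic reservoir-computing workflow referred to here (drive a fixed nonlinear dynamical system with the input and train only a linear readout) can be sketched in software; the following toy echo-state network illustrates that workflow on a one-step-ahead prediction task and is not a simulation of the laser systems discussed in the talk:

```python
# Toy software reservoir (echo state network); all sizes and signals are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 3000                              # reservoir size, sequence length
u = np.sin(0.2 * np.arange(T + 1)) + 0.1 * rng.standard_normal(T + 1)  # toy series

W_in = rng.uniform(-0.5, 0.5, N)              # fixed input weights
W = rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])          # reservoir update (weights never trained)
    states[t] = x

target = u[1:T + 1]                           # one-step-ahead prediction task
W_out = np.linalg.lstsq(states[500:], target[500:], rcond=None)[0]  # linear readout
pred = states @ W_out
print("NMSE:", np.mean((pred[500:] - target[500:])**2) / np.var(target[500:]))
```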
13012-500
Author(s): José Capmany Francoy, Univ. Politècnica de València (Spain)
9 April 2024 • 17:20 - 18:05 CEST | Auditorium Schweitzer, Niveau/Level 0
Programmable photonic circuits manipulate the flow of light on a chip by electrically controlling a set of tunable analog gates connected by optical waveguides. Light is distributed and spatially rerouted to implement various linear functions by interfering signals along different paths. A general-purpose photonic processor can be built by integrating this flexible hardware in a technology stack comprising an electronic monitoring and controlling layer and a software layer for resource control and programming. This processor can leverage the unique properties of photonics in terms of ultra-high bandwidth, high-speed operation, and low power consumption while operating in a complementary and synergistic way with electronic processors. This talk will review the recent advances in the field and will also delve into the potential application fields for this technology, including communications, 6G systems, interconnections, switching for data centers, and computing.
Session 3: Camera Optics
10 April 2024 • 08:40 - 10:20 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: Tomasz Kozacki, Warsaw Univ. of Technology (Poland)
12998-9
Author(s): Sébastien Héron, Laure Lee, Yann Semet, Rémi Barrère, Thales Research & Technology (France)
10 April 2024 • 08:40 - 09:00 CEST | Londres 2/Salon 7, Niveau/Level 0
Freeform optics bring new degrees of freedom to optical systems and require the ability both to describe any surface (continuous or not) and to optimize its shape together with the geometry of the entire system. This increases the number of variables, and therefore the complexity of the fitness function to be minimized in order to obtain the highest optical performance. Most proprietary algorithms from commercial solutions cannot handle more than tens of variables and/or noisy function landscapes, limiting the implementation of such freeform surfaces in optical systems. Here, the CMA-ES algorithm is coupled to parallel ray-tracing simulations able to cover the high computational demand. The benefit of such state-of-the-art evolutionary optimization algorithms is one-step convergence obtained by exploring the entire landscape of solutions without the need for any starting optical architecture.
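A minimal sketch of an optimization loop of this kind, assuming the `cma` Python package and a placeholder merit function standing in for the actual ray-tracing simulation (the function name, variable count and parallelization are illustrative only):

```python
# Hypothetical sketch: CMA-ES over freeform coefficients with parallel merit evaluation.
import cma
import numpy as np
from multiprocessing import Pool

def trace_merit(coeffs: np.ndarray) -> float:
    # Placeholder for a ray-tracing simulation returning a scalar merit
    # (e.g. RMS spot size) for the given freeform coefficients.
    return float(np.sum(coeffs**2) + 0.01 * np.random.randn())

if __name__ == "__main__":
    x0 = np.zeros(30)                            # e.g. 30 freeform coefficients
    es = cma.CMAEvolutionStrategy(x0, 0.5)       # initial step size sigma0 = 0.5
    with Pool() as pool:
        while not es.stop():
            candidates = es.ask()                        # sample a population
            fitness = pool.map(trace_merit, candidates)  # parallel "ray tracing"
            es.tell(candidates, fitness)                 # update the search distribution
    print("best coefficients:", es.result.xbest)
```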
12998-10
Author(s): Alexander V. Laskin, Vadim Laskin, AdlOptica Optical Systems GmbH (Germany); Aleksei Ostrun, ITMO Univ. (Russian Federation)
10 April 2024 • 09:00 - 09:20 CEST | Londres 2/Salon 7, Niveau/Level 0
Extending the depth of field (DOF) of imaging optics is a longstanding challenge in machine vision, microscopy, photography and cinematography. The achromatic and aplanatic multi-focus optics foto-foXXus provides a 5-fold extension of the DOF of camera lenses by simultaneously forming several foci separated along the optical axis. When imaging a scene, several images of each object are formed along the optical axis. The inevitable decrease in image contrast can be compensated using specific image processing algorithms. This method is very effective when capturing black-and-white QR codes or in vision-based robotic arms for detecting the shape and size of objects. Direct measurements of MTF and through-focus MTF curves confirm the extended depth of focus and, consequently, depth of field in object space. The paper presents a description of the foto-foXXus devices, MTF and through-focus MTF measurements obtained on an MTF test bench, and examples of imaging real objects demonstrating the effective extension of the depth of field.
12998-11
Author(s): Christos Katopodis, Ioanna Zergioti, National Technical Univ. of Athens (Greece); Dimitrios Papazoglou, Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas (Greece)
10 April 2024 • 09:20 - 09:40 CEST | Londres 2/Salon 7, Niveau/Level 0
We present a novel multichannel optical imaging system capable of high-resolution imaging across a wide field of view (FOV) of 7x7 mm² with 4x magnification. Composed of microlens arrays (MLAs) and micro-aperture arrays, our design circumvents the traditional trade-off between resolution and field size. Each channel of the system is optically isolated by micro-aperture arrays acting as field stops, ensuring high-quality imaging without crosstalk. A 5x5-step micro-scanning technique extends the imaging capability to the entire FOV. Experimental validation of the prototype, which employs commercially available MLAs and fabricated micro-aperture arrays, demonstrates agreement with theoretical predictions, achieving clear imaging without the need for a large sensor. This approach promises significant advancements in applications requiring detailed imaging over large areas.
12998-12
Author(s): Hugo Maurey, ICube (France), Optiive SAS (France); Patrice Twardowski, ICube (France); Robin Pierron, Optiive SAS (France); Philippe Gerard, Manuel Flury, ICube (France)
10 April 2024 • 09:40 - 10:00 CEST | Londres 2/Salon 7, Niveau/Level 0
Compact wide-angle lenses with adjustable focus have a broad range of applications in emerging technologies. The standard approach to designing lenses with variable focal length is to move a group of lenses along the optical axis. These systems are often bulky due to the need for displacements of a few millimeters or centimeters. To create a compact wide-angle lens with a variable focal length, we propose a system combining Alvarez and aspherical lenses. The complete system is 18 mm long and offers a cutoff frequency above 223 lp/mm for three wavelengths in the visible range (486, 588 and 656 nm), low vignetting, an 87.5° FOV, and an object distance range from 25 cm to infinity.
12998-13
Author(s): Silas O'Toole, Dominic Zerulla, Univ. College Dublin (Ireland)
10 April 2024 • 10:00 - 10:20 CEST | Londres 2/Salon 7, Niveau/Level 0
Active and adaptive optics enable the correction of optical imperfections and adaptation to environmental conditions, with applications in astronomy and laboratory-based optical experiments. An actively controllable lens and mirror technology is presented here, utilizing transparent oxides and Joule heating to generate shaped lenses and mirrors with variable focal length. By inducing local temperature changes, it can create customizable lenses and mirrors on a millisecond timescale, depending on the oxide or metallic layer construction. These elements can be combined to create multi-focal lenses and deformable mirrors for imaging and astronomy. This approach offers advantages over traditional piezo-driven adaptive optics by enabling shape-changing, non-linear adjustments within each element's area.
Break
Coffee Break 10:20 - 10:50
Session 4: Computer-generated Holography I
10 April 2024 • 10:50 - 12:20 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: Tomoyoshi Shimobaba, Chiba Univ. (Japan)
12998-14
Author(s): Fan Wang, Chiba Univ. (Japan); David Blinder, Chiba Univ. (Japan), Vrije Univ. Brussel (Belgium); Tomoyoshi Ito, Tomoyoshi Shimobaba, Chiba Univ. (Japan)
10 April 2024 • 10:50 - 11:20 CEST | Londres 2/Salon 7, Niveau/Level 0
Spherical waves emanating from a point are usually modeled in wavefront recording planes perpendicular to their direction of propagation, leading to a symmetric wavefield, typically referred to as a point spread function (PSF). But when the wavefront recording plane is tilted with respect to the hologram plane, this wavefield becomes asymmetric and is typically obtained by the rotation of the frequency domain. This work aims to derive the asymmetric PSF (aPSF) analytically directly in the spatial domain, allowing for the accurate and efficient use of tilted wavefront recording planes in computer-generated holography.
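For reference (the notation below is an assumption; the paper's tilted-plane derivation is not reproduced here), the familiar symmetric PSF of a point source recorded in a plane perpendicular to the propagation direction is

\[
u(x,y) = \frac{\exp(i k r)}{r},
\qquad r = \sqrt{(x-x_0)^2 + (y-y_0)^2 + z_0^2},
\qquad k = \frac{2\pi}{\lambda},
\]

which the asymmetric PSF derived in the paper generalizes to wavefront recording planes tilted with respect to the hologram plane.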
12998-15
Author(s): Nabil Madali, b<>com (France), Institut d'Electronique et de Télécommunications de Rennes (France); Antonin Gilles, b<>com (France); Patrick Gioia, b<>com (France), Orange SA (France); Luce Morin, b<>com (France), Institut d'Electronique et de Télécommunications de Rennes (France)
10 April 2024 • 11:20 - 11:40 CEST | Londres 2/Salon 7, Niveau/Level 0
Accurately estimating 3D optical flow in computer-generated holography poses a challenge due to the scrambling of 3D scene information during hologram acquisition. Therefore, to estimate the scene motion between consecutive frames, the scene geometry should be recovered first. Recent studies have demonstrated that a 3D RGB-D representation can be extracted from an input hologram with relatively low error under well-chosen numerical reconstruction parameters. However, limited attention has been given to how the produced error can impact the flow estimation algorithms. Therefore, in this study, we evaluate different learning/non-learning methodologies for recovering 3D scene geometry. Next, we analyze the types of distortions produced by these methods and attempt to minimize estimation error using spatial and temporal constraints. Finally, we compare the performance of several state-of-the-art methods to estimate the 3D optical flow vectors on the recovered sequence of RGB-D images.
12998-16
Author(s): David Blinder, Vrije Univ. Brussel (Belgium), imec (Belgium), Chiba Univ. (Japan); Fan Wang, Chiba Univ. (Japan); Colas Schretter, Vrije Univ. Brussel (Belgium), imec (Belgium); Kakue Takashi, Tomoyoshi Shimobaba, Chiba Univ. (Japan); Peter Schelkens, Vrije Univ. Brussel (Belgium), imec (Belgium)
10 April 2024 • 11:40 - 12:00 CEST | Londres 2/Salon 7, Niveau/Level 0
Color holographic displays typically independently manipulate and combine light for three different wavelengths. Recent advances have made it possible to jointly encode a single extended-phase spatial light modulator (SLM) pattern modulating all colors simultaneously to display holograms at higher framerates and qualities. However, this inevitably leads to "color replicas", where the objects at one wavelength are replicated at different depths for different colors, leading to disturbances in the viewing experience, thereby limiting its usefulness for 3D displays. We propose a novel coded illumination scheme to decorrelate the different color signals, eliminating the color replicas. We present the novel joint-color coding CGH algorithm, as well as an additional calibration algorithm, showing significant improvements in visual quality with a minor modification to the optical display setup.
12998-17
Author(s): Antoine Lagrange, b<>com (France), IMT Atlantique Bretagne-Pays de la Loire (France); Antonin Gilles, b<>com (France); Kevin Heggarty, Bruno Fracasso, IMT Atlantique Bretagne-Pays de la Loire (France)
10 April 2024 • 12:00 - 12:20 CEST | Londres 2/Salon 7, Niveau/Level 0
Holograms computed with conventional generation methods are composed of complex values encoded in floating point format that must be converted to pure amplitude or phase signals with quantized values in order to be visualized on current holographic displays. During the conversion process, part of the signal information is lost and conversion noise appears in the reconstructed scene. To enhance the visualization quality, one can use the well-known error diffusion algorithm, which rejects the conversion noise outside of the reconstruction area by propagating the quantization error of each pixel to its neighbors according to a set of fixed diffusion weights. In this paper, we propose a novel and efficient Graphics Processing Unit implementation of the view-dependent error diffusion method which only uses a limited set of diffusion weights to improve the algorithm parallelization while keeping a significant visual quality improvement.
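As a point of reference for the method described above, plain error-diffusion quantization of a complex hologram to a limited number of phase levels can be sketched as follows; this is the classic Floyd-Steinberg-style serial scheme with fixed weights, not the view-dependent GPU implementation proposed in the paper:

```python
# Minimal serial sketch of error-diffused phase quantization (illustrative only).
import numpy as np

def error_diffuse_phase(hologram: np.ndarray, levels: int = 256) -> np.ndarray:
    """Quantize a complex hologram to `levels` phase values, diffusing the
    complex quantization error of each pixel to its unprocessed neighbors."""
    field = hologram.astype(np.complex128).copy()
    out = np.zeros(field.shape, dtype=np.float64)
    h, w = field.shape
    step = 2 * np.pi / levels
    # Floyd-Steinberg weights for the pixels right, down-left, down, down-right
    weights = [((0, 1), 7 / 16), ((1, -1), 3 / 16), ((1, 0), 5 / 16), ((1, 1), 1 / 16)]
    for y in range(h):
        for x in range(w):
            phase = np.angle(field[y, x])
            q = np.round(phase / step) * step            # quantized phase value
            out[y, x] = q
            err = field[y, x] - np.exp(1j * q)           # complex conversion error
            for (dy, dx), wgt in weights:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    field[ny, nx] += wgt * err           # push error to neighbors
    return out
```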
Break
Lunch/Exhibition Break 12:20 - 13:50
Session 5: Computer-generated Holography II
10 April 2024 • 13:50 - 14:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: David Blinder, Vrije Univ. Brussel (Belgium)
12998-18
Author(s): Patrick Gioia, Orange SA (France), b<>com (France); Antonin Gilles, Antoine Lagrange, Anas El Rhammad, b<>com (France); San Vu-Ngoc, Univ. de Rennes 1 (France)
10 April 2024 • 13:50 - 14:10 CEST | Londres 2/Salon 7, Niveau/Level 0
Linear Canonical Transformations are a powerful tool for expressing and combining optical systems in a concise and efficient way. In addition to their computational simplicity, they provide a deep understanding of how the space and frequency components are transformed. The space-frequency domain, also called phase space, is transformed according to symplectic linear mappings under the action of such operators, which form a subset of widely studied objects in mathematical physics called Fourier Integral Operators. Considering holograms in phase space has numerous advantages, such as the possibility of relating scrambled data to meaningful portions of 3D scenes, allowing advanced processing such as editing or masking. In this paper we review linear symplectic techniques applied to paraxial optics and show how space-frequency representations such as Gabor frames can efficiently be used to compute wavefield evolution when traversing complex optical systems.
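For concreteness, one common convention for the one-dimensional linear canonical transform and its symplectic phase-space action is given below (normalizations vary across the literature, so this is stated as an assumption rather than the paper's exact definition):

\[
\mathcal{L}_M\{f\}(u) \;=\; \frac{1}{\sqrt{i\,b}}
\int_{-\infty}^{\infty}
\exp\!\left[\frac{i\pi}{b}\left(a\,t^2 - 2\,u\,t + d\,u^2\right)\right] f(t)\,dt,
\qquad
M = \begin{pmatrix} a & b \\ c & d \end{pmatrix},\quad ad - bc = 1,\quad b \neq 0,
\]

with the phase-space (space-frequency) coordinates transforming as \((x,\nu)^{\mathsf{T}} \mapsto M\,(x,\nu)^{\mathsf{T}}\).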
12998-19
Author(s): Jinze Sha, Andrew Kadis, Benjamin Wetherfield, Roubing Meng, Zhongling Huang, Dilawer Singh, Antoni Wojcik, Timothy D. Wilkinson, Univ. of Cambridge (United Kingdom)
10 April 2024 • 14:10 - 14:30 CEST | Londres 2/Salon 7, Niveau/Level 0
Despite many years of development in computer-generated holography, perfect phase-only holograms for most target images are still not possible to compute. All computational phase retrieval algorithms end up with some error between the target image and the reconstruction of the computer-generated hologram (CGH), except for specific targets. This research focuses on the fundamental limits of phase-only CGH quantized to limited bit-depth levels from an information theory point of view, revealing the information capacity of CGH and its effect on reconstruction quality, with an attempt to quantify how hard a target image is for phase-only hologram computation.
12998-21
Author(s): Anas El Rhammad, Antonin Gilles, b<>com (France); Patrick Gioia, Orange SA (France); Antoine Lagrange, b<>com (France)
10 April 2024 • 14:30 - 14:50 CEST | Londres 2/Salon 7, Niveau/Level 0
In this paper, we investigate the application of Gabor frames (GFs) as an effective TF analysis tool for compressing digital holograms. Our choice of GFs stems from their notable flexibility and accuracy in TF decomposition. Unlike some other techniques, GFs offer the advantage of accommodating both overcomplete and orthonormal signal representations. Furthermore, GFs have a robust mathematical foundation, opening doors to a broad spectrum of potential applications beyond compression. First, we provide an overview of essential concepts in GF theory, such as dual GFs and the analysis and synthesis operators. Second, we illustrate how GFs can be employed for digital hologram representation in the phase-space domain. For compression purposes, we substitute the STFT used in the JPEG Pleno Holography codec with tight GFs and compare their encoding performance. We present and thoroughly discuss the rate-distortion graphs, shedding light on the efficacy of GFs in digital hologram lossy compression.
Break
Coffee Break 14:50 - 15:30
Session 6: Computational Microscopy
10 April 2024 • 15:30 - 17:30 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: Yunfeng Nie, Vrije Univ. Brussel (Belgium)
12998-22
Author(s): Ségolène Martin, Ctr. de Vision Numérique, CentraleSupélec, Univ. Paris-Saclay, Institut National de Recherche en Informatique et en Automatique (France); Julien Ajdenbaum, Emilie Chouzenoux, Ctr. de Vision Numérique (France); Laetitia Magnol, EpiMaCT, INSERM, INRAE (France), Univ. de Limoges (France); Véronique Blanquet, INSERM (France), Univ. de Limoges (France); Jean-Christophe Pesquet, Ctr. de Vision Numérique (France); Claire Lefort, XLIM (France)
10 April 2024 • 15:30 - 15:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Multiphoton microscopy (MPM) is an advanced imaging technique used in biological and biomedical research to visualize and study living tissues and cells with high resolution. It is particularly well suited for deep tissue imaging, as it minimizes photodamage and provides improved penetration compared to traditional microscopy techniques such as confocal microscopy. The point spread function (PSF) plays a crucial role in multiphoton microscopy and describes how a point source of light is imaged as a spatial distribution in the microscope. Understanding the 3D PSF is essential for deconvolution and other post-processing techniques used to reconstruct 3D images from a stack of 2D images. We develop a computational solution for PSF estimation based on the imaging of micro-objects larger than the resolution limit. We use fluorescent microspheres with a diameter of 1 µm and estimate the PSF from the deformations observed in the images of these microspheres. A deconvolution strategy illustrates the performance of our method, where we successfully restore an unsliced whole striated skeletal muscle utilizing the PSF estimated with the 1 µm diameter beads.
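For illustration, a deconvolution step of the kind mentioned above could look like the following generic Richardson-Lucy routine (this assumes a 2D observed image `img` and an estimated PSF array `psf`; it is not the authors' specific estimation pipeline):

```python
# Basic Richardson-Lucy deconvolution sketch using FFT-based convolution.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, n_iter=30, eps=1e-12):
    """Iteratively estimate the underlying object given the observed image and PSF."""
    psf = psf / psf.sum()                     # the PSF must be normalized
    psf_mirror = psf[::-1, ::-1]              # flipped PSF for the correlation step
    estimate = np.full(img.shape, img.mean(), dtype=np.float64)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = img / (blurred + eps)         # multiplicative correction factor
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```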
12998-23
Author(s): Amir Mohammad Ketabchi, Koç Univ. (Turkey); Berna Morova, Istanbul Technical Univ. (Turkey); Nima Bavili, Alper Kiraz, Koç Univ. (Turkey)
10 April 2024 • 15:50 - 16:10 CEST | Londres 2/Salon 7, Niveau/Level 0
This study explores the advantages of line-scanning confocal microscopy using a digital light projector (DLP) and a rolling-shutter CMOS camera to enhance image contrast. By projecting a sequence of illumination lines onto a sample and synchronizing with the camera's rolling shutter, an acceptable improvement in image contrast is achieved. Then, using a dataset, a conditional generative adversarial network (cGAN) was trained. The trained network showed promising results in comparison with the ground-truth images.
12998-24
Author(s): Maria Josef Lopera Acosta, Vrije Univ. Brussel (Belgium), Univ. EAFIT (Colombia); Jorge García-Sucerquia, Univ. Nacional de Colombia Sede Medellín (Colombia); Yunfeng Nie, Heidi Ottevaere, Vrije Univ. Brussel (Belgium); Carlos Trujillo, Univ. EAFIT (Colombia)
10 April 2024 • 16:10 - 16:30 CEST | Londres 2/Salon 7, Niveau/Level 0
The numerical simulation of Digital Lensless holographic microscopy (DLHM) involves computationally diffracting a spherical wavefront onto a sample and propagating it to a recording plane. Various computational approaches for DLHM simulation exist, providing different accuracy levels depending on the geometrical conditions of the setup and sample properties. This study evaluates different DLHM simulation methods from the perspectives of the Structural Similarity to experimental holographic recordings and the corresponding time cost. This research finds that an angular spectrum-based method excels in replicating experimental results while maintaining high computational efficiency. This work aids in selecting optimal simulation techniques for DLHM, balancing computational speed and result accuracy.
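One standard building block used in such simulations, angular-spectrum free-space propagation of a sampled complex field, can be sketched as follows (the function and parameter names are illustrative; a full DLHM simulation additionally models the spherical point-source illumination and the recording geometry):

```python
# Angular spectrum propagation of a complex field sampled with pixel pitch dx.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))   # longitudinal wavenumber
    H = np.exp(1j * kz * z) * (arg > 0)            # free-space transfer function,
    return np.fft.ifft2(np.fft.fft2(field) * H)    # evanescent components suppressed
```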
12998-25
Author(s): Álvaro Cuevas, Daniel Tiemann, Robin Camphausen, ICFO - Institut de Ciències Fotòniques (Spain); Iris Cusini, ICFO - Institut de Ciències Fotòniques (Spain), Politecnico di Milano (Italy); Antonio Panzani, Federica Villa, Politecnico di Milano (Italy); Valerio Pruneri, ICFO - Institut de Ciències Fotòniques (Spain), ICREA - Institució Catalana de Recerca i Estudis Avançats (Spain)
10 April 2024 • 16:30 - 16:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Classical and linear measurements are bound by the shot-noise limit. In optics, the sensitivity increases (i) with the square root of the number of photons detected, or (ii) with the number of photon-sample interactions. Case (i) is limited by how safe or efficient the power level is, while (ii) is limited by how to achieve and resolve any number of interactions. We report a versatile interference contrast imaging technique, which extracts more information per photon resource than any linear phase imager to date. It is based on a non-resonant multipass design that allows case (ii) to be implemented efficiently and extracts holographic information by using a single-photon camera. It has been designed as a wide-field imaging technique (i.e., without requiring pixel scanning), able to image highly transparent/reflective samples, with noise reduction beyond 0.22 in less than 7 rounds.
12998-26
Author(s): Aleksandra Ivanina, Advanced Research Ctr. for Nanolithography (Netherlands), Vrije Univ. Amsterdam (Netherlands); Benjamin Lochocki, Advanced Research Ctr. for Nanolithography (Netherlands); Lyubov V. Amitonova, Vrije Univ. Amsterdam (Netherlands), Advanced Research Ctr. for Nanolithography (Netherlands)
10 April 2024 • 16:50 - 17:10 CEST | Londres 2/Salon 7, Niveau/Level 0
Controlling and understanding light propagation through a multimode fiber (MMF) requires knowledge of its optical transmission matrix (TM). Holography is essential for extracting phase information from intensity measurements, enabling the TM measurement. Usually, complex optical fields are retrieved through their interference with a plane wavefront. Here we demonstrate and study TM measurements of an MMF using a self-reference approach, emphasizing its strengths and limitations. We focus on compensating for phase fluctuations to enhance image quality. The efficiency of this approach in precise TM measurements is experimentally confirmed by demonstrating high-quality light focusing, as well as the transmission of complex patterns through an MMF. This work enhances the understanding of self-reference holography in complex scattering media and its practical applications, particularly in studying and controlling light within MMFs.
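As a toy numerical illustration of why the TM matters (a random Gaussian matrix stands in for a measured fiber TM; this is not the self-reference measurement scheme itself), phase-conjugating one row of the TM focuses light onto the corresponding output mode:

```python
# Focusing through a scattering medium via phase conjugation of one TM row.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in TM: 64 output speckle grains x 256 input modes, complex Gaussian entries.
T = (rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))) / np.sqrt(2)

target = 10                                   # output speckle grain to focus on
field_in = np.exp(-1j * np.angle(T[target]))  # phase-only conjugate of that row
out = T @ field_in
enhancement = np.abs(out[target])**2 / np.mean(np.abs(out)**2)
print(f"intensity enhancement at target: {enhancement:.1f}x")
```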
12998-27
Author(s): Harbinder Singh, Univ. de Castilla-La Mancha (Spain); Manuel Forero, Nuray Agaoglu, Univ. de Ibagué (Colombia); Gloria Bueno, Oscar Déniz, Univ. de Castilla-La Mancha (Spain); Gabriel Cristóbal, Instituto de Óptica "Daza de Valdés" (Spain)
10 April 2024 • 17:10 - 17:30 CEST | Londres 2/Salon 7, Niveau/Level 0
A novel ImageJ plugin is designed to extend the depth-of-field (DoF) by seamlessly fusing a series of multi-focus images, allowing for in-depth analysis. Moreover, it has been tested on multi-exposure image stacks, demonstrating its adeptness in preserving intricate details within both poorly and brightly illuminated regions of 3-D specimens. The significance of this capability becomes particularly apparent when dealing with images that exhibit a limited DoF and varying exposure settings under low signal-to-noise ratio conditions. The plugin's effectiveness has been thoroughly validated through the processing and analysis of numerous image stacks featuring diverse diatom and cyanobacteria species. The proposed methodology incorporates a two-scale decomposition (TSD) scheme, complemented by the refinement of weight maps using edge-preserving filtering (EPF). This dual approach ensures the preservation of fine details in the fused image while simultaneously minimizing noise. Such innovations make this plugin a valuable tool for researchers and analysts working with complex image datasets.
Posters-Wednesday
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Conference attendees are invited to attend the Photonics Europe poster session on Wednesday evening. Come view the posters, enjoy light refreshments, ask questions, and network with colleagues in your field. Authors of poster papers will be present to answer questions concerning their papers. Attendees are required to wear their conference registration badges to the poster sessions.

Poster Setup: Wednesday 10:00 - 17:30 hrs
Poster authors, view poster presentation guidelines and set-up instructions at http://spie.org/EPE/poster-presentation-guidelines.
12998-32
Author(s): Salaheddine Toubi, CEA-LETI (France); Elise Ghibaudo, IMEP-LAHC (France); Christophe Martinez, CEA-LETI (France)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Conventionally, LCoS, µ-OLEDs, µ-LEDs, and LBS are the principal micro-displays used in near-eye displays. We propose an alternative display concept, which offers increased flexibility for its integration with the optical combiner, resulting in a more efficient energy yield. The concept is based on photonic integrated circuits (PICs) in the visible range, active light extraction components using liquid crystals, and pixelated holograms. The combination of these elements enables the generation of an emissive point whose properties (position, emission angle, and divergence) are adjustable. We describe our concept and compare the expected performance with conventional solutions.
12998-45
Author(s): María-Baralida Tomás, Belen Ferrer, David Mas, Univ. de Alicante (Spain)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Object tracking with subpixel accuracy is crucial in experiments where the object's apparent movement on the camera sensor is very small. To achieve high subpixel accuracy, it is necessary to find a balance among the camera's features, the object-to-camera distance and the target speed. Additionally, selecting the appropriate algorithm is fundamental for accurately determining the target's position. Tracking targets with high subpixel accuracy makes the system very sensitive to thermal errors, since heating of the electronics can lead to drifts and distortions in the final image. In our presentation, we will show different combinations that ensure precise subpixel accuracy while accounting for the observed thermal distortions. Following our results with Basler cameras, our recommendation is to use the lowest target speed and a temporal resolution that yield an apparent interframe shift of less than 0.004 pixels, together with at least 2 hours of stabilization time.
12998-47
Author(s): Belen Ferrer, María-Baralida Tomás, David Mas, Univ. de Alicante (Spain)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
The design and adjustment of image-based detection and tracking systems require the use of calibrated sliders on which to place the targets. Scientific-grade sliders often provide high precision and repeatability (1 micron), although they are frequently expensive and can only move small, lightweight targets, limiting their applications. In this work, we propose the use of photographic sliders as precision displacement systems. Since these systems are designed for artistic purposes, precise calibration is necessary for their use in accurate displacements. Calibration consists of tracking a circular object through centroid detection at different speeds and comparing the results with a scientific-grade slider. The tests were conducted using a Thorlabs DDS050 linear slider and an Edelkrone One photographic slider. The results demonstrate that the photographic slider delivers a similar level of precision at just one-fifth of the cost of the Thorlabs slider.
12998-48
Author(s): David Mas, María-Baralida Tomás, Belen Ferrer, Univ. de Alicante (Spain)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Tracking objects with subpixel accuracy offers significant advantages for motion and deformation analysis. Under suitable conditions, it is possible to achieve accuracies exceeding 0.01 pixels, but these conditions can only be maintained for a short period of time. Device heating can cause sensor and housing expansions, leading to image distortions. To observe thermal effects, we captured static sequences of a binary target composed of a matrix of circles. Images were taken every two minutes for a duration of 15 hours. To study image deformation, the position of each circle was analyzed by locating its centroid. We used non-cooled cameras equipped with two types of aluminum heat sinks: one placed on top and the other surrounding the camera. We assessed the extent of deformation and compared the effects of each heat sink. Based on the results obtained, we can estimate the magnitude and type of deformation produced.
12998-49
Author(s): Ipek Anil Atalay Ipek, Erdem Sahin, Tampere Univ. (Finland); Christine Guillemot, Institut National de Recherche en Informatique et en Automatique (France); Humeyra Caglayan, Tampere Univ. (Finland)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Addressing challenges in fluorescence microscopy, especially chromatic aberration and limited depth of field, is crucial for accurate imaging. Our presentation introduces a design methodology, emphasizing the integration of end-to-end optimized full-color metasurface optics with a differentiable learning framework. Considering a 4f microscopy optical setup, our design actively learns a metasurface and subsequently combines it with a tailored image reconstruction algorithm. This ensures enhanced performance in extreme extended depth of field and broadband imaging scenarios, yielding sharp and distortion-free images.
12998-50
Author(s): Felix Zilske, Leif O. Harders, Fachhochschule Westküste (Germany); Anna Kersten, Marc Schnurawa, BioConsult SH GmbH & Co. KG (Germany); Stephan Hußmann, Fachhochschule Westküste (Germany)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Deep-learning-based semantic segmentation has so far been used increasingly in medicine, but it has much greater potential. For example, the areas of certain objects or habitats in nature can be determined. In the Wadden Sea of the North Sea, conservationists have been tracking the development of various habitats for several years. Mostly, very fuzzy satellite data are used for this. The results of the analyses must then be validated. This is usually done with very imprecise methods, such as stepping on mussel beds and counting the number of steps on mussels. With the help of drone imagery and various segmentation algorithms, a method was developed that works accurately and efficiently.
12998-51
Author(s): Leif Ole Harders, Fachhochschule Westküste (Germany); Thorsten Ufer, Andreas Wrede, Landwirtschafts-kammer Schleswig-Holstein (Germany); Stephan Hußmann, Fachhochschule Westküste (Germany); Eberhard Hartung, Christian-Albrechts-Univ. zu Kiel (Germany)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Advanced information technologies, such as the fifth-generation mobile networks (5G), unmanned aerial vehicles (UAV), and artificial intelligence (AI), build the foundation for integrated autonomous systems in smart agriculture that contribute to the sustainable transformation and optimization of agricultural processes, such as weed control. Weed control is a particular challenge for specialty crops, as it is labor-intensive, and the widely used chemical weed control is increasingly subject to legal restrictions. This paper proposes a sustainable weed control approach based on tree crown detection using remote sensing data and evaluates three state-of-the-art deep learning architectures.
12998-52
Author(s): Yusuke Sando, Yutaro Goto, Makoto Kawamura, Osaka Research Institute of Industrial Science and Technology (Japan)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
To realize 3D displays observable from all directions, we propose utilizing a hyperboloidal mirror to reflect the light in all directions. The mirror has two focal points in an imaging relation. Thus, an image displayed near one focal point is reconstructed as a virtual image near the other focal point after reflection by the hyperboloidal mirror. However, the imaging magnification ratio depends on the propagation direction, which is one source of distortion in hyperboloidal mirror imaging. In this study, the propagation range of each image is physically limited so that the magnification ratio becomes approximately constant. This limitation also enables the display of multiple different images in different propagation directions. In addition, multiple parallax images are optically superimposed and displayed near one focal point. We demonstrated a full-parallax multi-view 3D display with horizontal and vertical viewing zones of 135° and 60°, respectively.
12998-53
Author(s): Taemi Jung, Jong-Ha Lee, Keimyung Univ. (Korea, Republic of)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
As societies age, the global elderly population is increasing, resulting in a rise in the incidence of pressure ulcers. As a result, efficient management methods for pressure ulcers are increasingly necessary. Currently, however, healthcare professionals diagnose pressure ulcers subjectively through visual examination. Our study employs a deep learning algorithm to diagnose the degree of skin inflammation. Based on this diagnosis, we suggest using a device that adjusts its light output for low-level laser therapy, a method effective in wound treatment. Pressure ulcers can progress to stages 3 and 4, characterized by reversible inflammation; if not treated, they may become chronic wounds, thus highlighting the importance of early detection and prevention. In our tests, the device demonstrated an accuracy of 90%. Furthermore, we tested the treatment's efficacy through animal experiments, which indicated that the treatment led to an 85% decrease in immune cells in mice with pressure ulcers.
12998-54
Author(s): Piotr Garbat, Warsaw Univ. of Technology (Poland)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
This research paper showcases the results of an investigation into computational imaging and deep-learning techniques for detecting pits in cherries. The study analyzes different concepts of vision systems for cherry pit detection and evaluates two image acquisition methods. In this work, two different computational imaging techniques are considered to improve recognition accuracy. The paper also provides an overview of deep learning architectures and explores integrating polarizing camera information to enhance prediction accuracy. To this end, the paper presents and describes early and late fusion methods for image classification. The cherry classification results validate the efficacy of the proposed approach and suggest an improved solution. Acknowledgment: This work was supported by the National Centre for Research and Development, project POIR.01.01.01-00-1045/17.
12998-55
Author(s): Hugo Didier Longines Tapía, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (Mexico); Jesús García-Ramírez, Univ. Nacional Autónoma de México (Mexico); Boris Escalante-Ramírez, Jimena Olveres, Univ. Nacional Autónoma de México (Mexico), Ctr. de Estudios en Computación Avanzada (Mexico)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Deep neural networks automatically extract features; however, in many cases, the features extracted by the classifier are biased by the classes during the training of the model. Analyzing 3D medical images can be challenging due to the high number of channels in the images, which require long training times when using complex deep models. To address this issue, we propose a two-step approach: (i) We train an autoencoder to reconstruct the input images using some channels in the volume. As a result, we obtain a hidden representation of the images. (ii) Shallow models are then trained with the hidden representation to classify the images using an ensemble of features. To validate the proposed method, we use 3D datasets from the MedMNIST archive. Our results show that the proposed model achieves similar or even better performance than ResNet models, despite having significantly fewer parameters (approximately 14,000 parameters).
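A conceptual sketch of the two-step pipeline described above (the architecture, latent size, 28³ volume shape and the random-forest readout are assumptions for illustration, not the authors' model):

```python
# (i) Train a small autoencoder on 3D volumes; (ii) feed its latent codes to a
# shallow classifier. Shapes follow the 28x28x28 MedMNIST 3D convention.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class AE(nn.Module):
    def __init__(self, in_dim=28 * 28 * 28, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def fit_autoencoder(model, volumes, epochs=10):
    # Full-batch training for brevity; real training would use mini-batches.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, _ = model(volumes)
        loss = nn.functional.mse_loss(recon, volumes.flatten(1))
        opt.zero_grad()
        loss.backward()
        opt.step()

def train_pipeline(volumes, labels):
    # volumes: float tensor of shape (n, 1, 28, 28, 28); labels: integer array.
    ae = AE()
    fit_autoencoder(ae, volumes)
    with torch.no_grad():
        _, z = ae(volumes)                    # step (i): hidden representation
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(z.numpy(), labels)                # step (ii): shallow classifier
    return ae, clf
```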
12998-56
Author(s): Denis Ojdanic, Technische Univ. Wien (Austria); Christopher Naverschnigg, Technischen Univ. Wien (Austria); Andreas Sinn, Georg Schitter, Technische Univ. Wien (Austria)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
This paper presents the evaluation of object detectors and trackers within a parallel software architecture to enable long distance UAV detection and tracking in real-time using a telescope-based optical system. The architecture combines computationally expensive deep learning-based object detectors with traditional object trackers to achieve a detection and tracking rate of 100 fps. Four object detectors, FRCNN, SSD, Retinanet and FCOS, are fine-tuned on a custom UAV dataset and integrated together with three trackers, Medianflow, KCF and MOSSE, into a parallel software architecture. The evaluation is conducted on a separate set of test images and videos. The combination of FRCNN and Medianflow shows the best performance in terms of intersection over union and center location offset on the video test set, enabling detection and tracking of UAVs at 100 fps.
12998-57
Author(s): Alberto Daniel Fuentes-Villegas, Haydee O. Hernández, Jimena Olveres, Boris Escalante-Ramírez, Univ. Nacional Autónoma de México (Mexico)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Most bacteria classifiers created with neural networks and/or image processing methods are unable to generalize when used with different databases of images acquired with the same type of acquisition system, even if the sample preparation is similar. In this work, we introduce an ensemble of deep neural networks designed for the classification of bacteria in a broad context. We use a dataset comprising Actinomyces, Escherichia, Staphylococcus, Lactobacillus, and Micrococcus bacteria with Gram staining, which was acquired through brightfield microscopy from various sources. To normalize the diversity of image characteristics, we applied domain generalization and adaptation techniques. Subsequently, we used phenotypic characteristics, such as the color reaction to Gram staining and morphology, to classify the bacteria.
12998-58
Author(s): Lucas Cervantes, Boris Escalante-Ramírez, Jimena Olveres, Univ. Nacional Autónoma de México (Mexico)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
This study explores the application of video frame interpolation (VFI) as an augmentation technique for video datasets, focusing on cardiac ultrasound videos. Leveraging Niklaus et al.'s VFI model designed for general applications, we were able to develop, through transfer learning, a specialized model tailored for generating cardiac ultrasound images. Furthermore, in collaboration with a cardiologist from Centro Médico ABC, we conducted qualitative evaluations of our model's performance. The expert analysis confirms that "The generated images could be considered real. They are consistent with the images used to generate them." This VFI-based approach complements traditional augmentation techniques and can be further refined through rapid model fine-tuning to improve image precision and quality if needed.
12998-61
Author(s): Yushi Zheng, Univ. College Dublin (Ireland); Min Wan, TU Eindhoven (Netherlands); John J. Healy, Univ. College Dublin (Ireland)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
In recent years, many novel phase space distributions have been proposed and one of the more independently interesting is the Bai distribution function (BDF), which has been shown to interpolate between the instantaneous autocorrelation function and the Wigner distribution function, and to link the geometrical and wave optical descriptions in the Fresnel domain. Currently, the Bai distribution function is only defined for continuous signals. However, for both simulation and experimental purposes, the signals must be discrete. This necessitates the development of a BDF analysis workflow for discrete signals. In this paper, we will analyse the sampling requirements imposed by the BDF, and demonstrate their correctness by comparing the continuous BDF of continuous test signals with their numerically approximated counterparts. Our results will permit more accurate simulations using BDFs, which will be useful in applying them to problems in, e.g., partial coherence.
12998-62
Author(s): Roubing Meng, Jinze Sha, Zhongling Huang, Timothy D. Wilkinson, Univ. of Cambridge (United Kingdom)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
A small field of view is one of the main limitations that prevent holographic displays from becoming successful commercial products. Although several methods have been proposed and have shown some ability to extend the replay field, they either introduce high-cost components or require complex experimental setups. In this paper, we propose a new method to extend the field of view of holographic display systems. The proposed method is based on an off-axis holographic display with two laser beams and one SLM. The SLM is illuminated by the two laser beams from different angles, and the hologram displayed is synchronized with the alternating laser beams. Experimental results demonstrate that the proposed method can extend the replay field by a factor of two with high-quality image reconstruction, lower cost and a simple experimental setup.
12998-63
Author(s): Moncy Sajeev Idicula, Warsaw Univ. of Technology (Poland); Kai Wen, Warsaw Univ. of Technology (Poland), Xidian Univ. (China); Michal Józwik, Warsaw Univ. of Technology (Poland); Hyon-Gon Choo, Electronics and Telecommunications Research Institute (Korea, Republic of); Peng Gao, Xidian Univ. (China); Tomasz Kozacki, Warsaw Univ. of Technology (Poland)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Digital holographic microscopy (DHM) can be used as a non-contact profilometric instrument for revealing the topography of microscopic objects. Its utility is constrained by computation inaccuracies and data deficiencies; in particular, aberrations can substantially impact the precision of numerical reconstruction when applied to digital holographic profilometry. Spherical-wave illumination scanning digital holographic profilometry (SWS-DHP) was developed for profiling demanding samples with large depth and high NA. Since classical aberration compensation proved inadequate, this paper proposes a new aberration compensation method enabling measurements of high-NA objects. Unlike the classical method, it is based on the propagation of the object and illumination waves, so aberrations are automatically corrected within the entire 3D volume of the reconstruction. The investigation follows a model-based approach; the accuracy of the new method is tested numerically and experimentally on high-NA, high-depth objects.
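The compensation scheme itself is the authors' contribution, but its stated building block, numerical propagation of the object and illumination waves, is commonly implemented with the angular spectrum method. A minimal NumPy sketch follows; the wavelength, pixel pitch, aperture and propagation distance are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 (square array, pixel pitch dx)
    over a distance z with the angular spectrum method; evanescent
    components are suppressed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)

# Example: plane wave through a 100-um-radius aperture, propagated 1 mm.
n, dx, wl = 512, 2e-6, 633e-9
Y, X = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
u0 = (X ** 2 + Y ** 2 < (100e-6) ** 2).astype(complex)
u1 = angular_spectrum_propagate(u0, wl, dx, 1e-3)
```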
12998-64
Author(s): Evita Serpa, Ilze Ceple, Evita Kassaliete, Gunta Krumina, Univ. of Latvia (Latvia)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
In studies analyzing fixation stability, measurements are taken at different sampling rates, but how the sampling rate affects this fixation parameter has not been extensively investigated. Fixation stability is commonly quantified using the bivariate contour ellipse area (BCEA). The aim of this study is to determine whether the sampling rate of an eye tracker affects the measurement of fixation stability. Participants were adults aged 20 to 30 years. Their eye movements during fixation were recorded with the Tobii Pro Fusion eye tracker. The fixation target was presented on a computer monitor, and eye movements were recorded at three sampling rates: 60 Hz, 120 Hz, and 250 Hz. The results demonstrated a strong correlation between each participant's BCEA measurements across all sampling rates used. However, the overall data showed no significant effect of sampling rate on the fixation stability measurement.
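For reference, the BCEA is conventionally computed as 2kπ·σx·σy·√(1−ρ²), where σx and σy are the standard deviations of horizontal and vertical gaze position, ρ is their correlation, and k fixes the enclosed proportion P through P = 1 − e⁻ᵏ. A minimal sketch follows; the 68% proportion and the synthetic gaze data are assumptions, not values from the study.

```python
import numpy as np

def bcea(x_deg, y_deg, proportion=0.68):
    """Bivariate contour ellipse area (deg^2) of a set of gaze samples."""
    k = -np.log(1.0 - proportion)              # ~1.14 for P = 0.68
    sx = np.std(x_deg, ddof=1)
    sy = np.std(y_deg, ddof=1)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

# Example on synthetic data mimicking 10 s of fixation at 250 Hz.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.3, 2500)                 # horizontal position, deg
y = rng.normal(0.0, 0.2, 2500)                 # vertical position, deg
print(f"BCEA = {bcea(x, y):.3f} deg^2")
```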
12998-65
Author(s): Stefan Mark Jensen, Gavrielle R. Untracht, Madhu Veettikazhy, Technical Univ. of Denmark (Denmark); Esben Ravn Andresen, Univ. de Lille (France); Peter Eskil Andersen, Technical Univ. of Denmark (Denmark)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Optical coherence tomography (OCT) holds tremendous potential for disease diagnostics and image-guided surgery if the technology can be utilized clinically in vivo. However, at depth, rapid signal attenuation due to multiple scattering diminishes imaging contrast. Our prior work demonstrates that decoupling the illumination and collection fields by introducing a spatial offset between them leads to preferential collection of multiply scattered light, thereby improving contrast at depth. Initial results indicate that multicore fibers (MCFs) provide the expected decoupling of the illumination and collection fields. We compare with conventional endoscopic designs for common-path OCT implementation and propose a novel design for optimized imaging based on a lensed MCF.
12998-67
Author(s): Aurelien Argy, Florin Baumann, Jelil Belheine, Pierre Bibal-Sobeaux, Benoit Brouillet, Hugo Maurey, Patrice Twardowski, Lab. des sciences de l'Ingénieur, de l'Informatique et de l'Imagerie (France)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
The usual optimization methods in optical design, such as those available in CodeV and Zemax, are powerful but strongly dependent on the starting system and the chosen merit function; moreover, the number of lenses is fixed in advance. We present a novel global optimization approach employing saddle-point construction that overcomes these limitations. The method facilitates systematic exploration of the design space by constructively adding new lenses, leading to innovative, high-performance optical solutions. As a case study, we consider a wide-angle eyepiece with six lenses. Our method retrieves the solutions obtained by the CodeV global optimizer, plus a few additional ones with fewer lenses. At present, this optimization approach is limited to on-axis systems with spherical lenses.
12998-68
CANCELED: Sampling error analysis of FTIR and design of low noise sampling system
Author(s): Xiangning Lu, Min Huang, Wei Han, Lulu Qian, Zhanchao Wang, Aerospace Information Research Institute (China)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
This paper introduces the sampling errors that can cause spectral noise in a Fourier transform infrared (FTIR) spectrometer. The main sources of sampling error are position changes of the interferometer's moving mirror, jitter during its motion, circuit delay, and other random noise. MATLAB is used for modelling and simulation, and the influence of these sampling errors on spectral noise is illustrated. Equal-time and equal-optical-path-difference sampling are compared, and an equal-time acquisition system based on oversampling and digital filtering is proposed. The sampling system does not require high precision from the moving-mirror drive mechanism, reduces the time delay introduced by the analog filter circuit, simplifies the structure and circuit design of the spectrometer, and improves the quality of the infrared spectrum.
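As a toy software illustration of the proposed acquisition chain (oversample at equal time steps, then filter and decimate digitally instead of relying on an analog anti-aliasing filter), the SciPy sketch below processes a synthetic interferogram; the ADC rate, fringe frequency and decimation factor are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import signal

fs_adc = 1.0e6        # oversampled ADC rate (Hz), illustrative
f_fringe = 5.0e3      # dominant interferogram fringe frequency (Hz)
t = np.arange(0, 0.02, 1.0 / fs_adc)
interferogram = np.cos(2 * np.pi * f_fringe * t) + 0.05 * np.random.randn(t.size)

# FIR low-pass filtering + decimation by 16: out-of-band noise is removed
# digitally before the spectrum is computed.
filtered = signal.decimate(interferogram, q=16, ftype="fir", zero_phase=True)
spectrum = np.abs(np.fft.rfft(filtered * np.hanning(filtered.size)))
```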
12998-69
Author(s): Wenhao Zhao, Zhanchao Wang, Zixuan Zhang, Yan Sun, Aerospace Information Research Institute (China); Min Huang, Aerospace Information Research Institute (China), Univ. of Chinese Academy of Sciences (China); Lulu Qian, Aerospace Information Research Institute (China)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
To match the image input interface of an embedded image processing system to the image output interface of a camera, this article presents an FPGA-based image interface conversion system. It receives image data over HDMI from an industrial camera and, after processing and conversion, forwards the data to the DVP interface of the embedded processor. Images captured by the camera at a resolution of 1920x1080 are received through HDMI; after a 1024x1024 region of interest (ROI) is extracted and the data are converted from RGB to YUV format, the output images are sent to the embedded processor through the DVP interface.
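A floating-point software model of the data-path stages named in the abstract (centre ROI crop plus RGB-to-YUV conversion) is sketched below; the BT.601 coefficients, the centred crop and the float arithmetic are assumptions for illustration, whereas the actual FPGA design would use fixed-point logic.

```python
import numpy as np

def roi_and_rgb_to_yuv(frame_rgb, roi=1024):
    """Crop a centred roi x roi window from an HxWx3 RGB frame and
    convert it to YUV using BT.601 full-range coefficients."""
    h, w, _ = frame_rgb.shape
    y0, x0 = (h - roi) // 2, (w - roi) // 2
    block = frame_rgb[y0:y0 + roi, x0:x0 + roi].astype(np.float32)
    r, g, b = block[..., 0], block[..., 1], block[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return np.clip(np.stack([y, u, v], axis=-1), 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
print(roi_and_rgb_to_yuv(frame).shape)  # (1024, 1024, 3)
```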
12998-70
Author(s): Petr M. Pivkin, Artem A. Ershov, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Vladimir A. Grechishnikov, Lyudmila A. Uvarova, Vladimir A. Kuznetsov, Anton M. Yazev, Michail Yu. Prus, Moscow State Univ. of Technology (Russian Federation); Dazhong Wang, Shanghai Univ. of Engineering Science (China); Xiaohui Jiang, Univ. of Shanghai for Science and Technology (China); Rao Yao, Shanghai Univ. of Engineering Science (China); Alexey B. Nadykto, Moscow State Univ. of Technology (Russian Federation)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Reverse engineering is a key element of the production process and of its technological preparation. This work demonstrates, for the first time, the possibility of preparing production and collecting key indicators that allow a digital twin of the technological process to be recreated and the technological aspects of the design to be displayed. Such indicators include the width of the cut layer and the cutting zone of a conical cutter during multi-axis positioning, obtained by processing a group of images of machined products. The work constructs an analytical model for the automated creation of machining paths based on improved B-splines, which significantly improves smoothness compared with numerical path-generation methods. Actual technological indicators of the machining process can be identified, and their dependencies numerically formalized, by determining the influence of the helical surface on the precise positioning of the end mill, with compensation along each axis during 5-axis machining.
12998-71
Author(s): Petr M. Pivkin, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Anton M. Yazev, Artem A. Ershov, Lyudmila A. Uvarova, Moscow State Univ. of Technology (Russian Federation); Alexey B. Nadykto, Moscow State Univ. of Technology "STANKIN" (Russian Federation)
On demand | Presented live 10 April 2024
In precision engineering, screw surfaces on critical parts of equipment have become widespread. This work proposes a new method and practical recommendations for measuring geometric accuracy, making linear and angular measurements, and studying the properties of helical surfaces using scanning electron microscopy and specialized equipment for monitoring the accuracy of helical surfaces. The uniqueness of the approach lies in forming key indicators for classifying and filtering a set of specialized measurement techniques based on scanning and digital image processing. The approach improves accuracy by up to a factor of ten compared with existing methods. The work thus justifies the practical adoption of the identified key measurement schemes, in comparison with other existing methods, for helical surfaces with an alternating angle of inclination of the tangent to the profile in the axial section.
12998-77
Author(s): Sumesh Nair, Guo-Fong Hong, Chai-Wei Hsu, Yvonne Yuling Hu, Shean-Jen Chen, National Yang Ming Chiao Tung Univ. (Taiwan)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Our research addresses agricultural pest detection challenges through a state-of-the-art approach, deploying YOLO-NAS with SORT. These changes lead to a substantial 30% improvement in overall detection accuracy compared to YOLOv7 with SORT, particularly benefiting the identification of small objects. Employing the Intel RealSense D405 for three-dimensional data, the YOLO-NAS+SORT approach achieves real-time tracking with a detection speed of 50 ms per frame. Notably, the smallest object, a 2-cm caterpillar, is recognized at 21x12 pixels from a distance of 35 cm. This innovation holds promise for integration with various technologies, from robot arms for targeted caterpillar removal to stand-off methods such as laser pest targeting. Offering efficient and precise pest control solutions, this work contributes to sustainable agriculture and the critical need for effective, environmentally friendly practices.
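The SORT half of the pipeline associates new detections with existing tracks frame by frame. A minimal sketch of that association step is shown below (Hungarian matching on an IoU cost matrix); the box format, the threshold and the omission of SORT's Kalman prediction step are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """SORT-style association: Hungarian matching on 1 - IoU, keeping
    only pairs whose IoU clears the threshold."""
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]

print(associate([(10, 10, 50, 50)], [(12, 11, 52, 49), (200, 200, 240, 240)]))  # [(0, 0)]
```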
12998-78
Author(s): Tomoyoshi Shimobaba, Fan Wang, Chiba Univ. (Japan); Yogi Udjaja, Chiba Univ. (Japan), Bina Nusantara Univ. (Indonesia); Takashi Nishitsuji, Toho Univ. (Japan); Atsushi Shiraki, Tomoyoshi Ito, Chiba Univ. (Japan)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Techniques for suppressing ringing artifacts have been proposed, but they are time-consuming and memory-intensive. This study presents a ringing-artifact reduction method based on the Fresnel integrals.
12998-79
Author(s): Alya Al Abdouli, Univ. of Sharjah (United Arab Emirates); Aisha Al Ali, Univ. of Sharjah (United Arab Emirates); Talal Bonny, Univ. of Sharjah (United Arab Emirates)
On demand | Presented live 10 April 2024
This research delves into the realm of image encryption, employing both VHDL and Python for comprehensive analysis and implementation, and focusing on chaotic oscillators as a fundamental element of cryptographic algorithms. Specifically, we present a comparative study of two distinct chaotic oscillators: the well-known Lorenz oscillator and the single-switch oscillator. The primary objective is to assess and contrast the performance of these chaotic systems in the context of image encryption. The encryption scheme operates on a 512x512 grayscale image, whose pixels are first shuffled and then XORed with values from the chaotic oscillators. Performance is compared using the pixel intensity histogram, correlation, Peak Signal-to-Noise Ratio (PSNR) and Number of Pixels Change Rate (NPCR), and each method is evaluated before and after shuffling. The research shows that the single-switch oscillator yields less correlation between neighboring pixels than the Lorenz oscillator; moreover, the single-switch encryption method achieves an NPCR of 99.60%, while the Lorenz encryption method yields 99.53%.
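A minimal NumPy sketch of the shuffle-then-XOR idea, with a keystream derived from a forward-Euler Lorenz trajectory, is given below for illustration only; the integration step, the byte-extraction rule and the keyed permutation are assumptions, the single-switch oscillator is not modelled, and no claim of cryptographic security is made.

```python
import numpy as np

def lorenz_keystream(n, x=0.1, y=0.0, z=0.0,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01):
    """Generate n bytes from a Lorenz trajectory (forward Euler)."""
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = int(abs(x) * 1e6) % 256
    return out

def encrypt(image, seed=42):
    """Shuffle pixel positions with a keyed permutation, then XOR the
    shuffled pixels with the chaotic keystream."""
    flat = image.flatten()
    perm = np.random.default_rng(seed).permutation(flat.size)
    return (flat[perm] ^ lorenz_keystream(flat.size)).reshape(image.shape)

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
cipher = encrypt(img)
```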
12998-4
Author(s): Asma Almakhzoumi, Talal Bonny, Mohammad Al-Shabi, Univ. of Sharjah (United Arab Emirates)
10 April 2024 • 17:45 - 19:45 CEST | Galerie Schweitzer, Niveau/Level 0
Malaria, a significant global health concern, necessitates precise diagnostic tools for effective management. This study introduces an innovative approach to malaria detection using advanced machine-learning techniques. By harnessing convolutional neural networks (CNNs) and deep learning, the research presents a robust framework for automated malaria detection through microscopic images of red blood cells. The study evaluates three key algorithms—CNN, VGG-16, and Support Vector Machine (SVM)—using a meticulously curated dataset of 27,560 images. Results highlight the VGG-16 model’s exceptional accuracy, achieving 98.5%. Transfer learning is pivotal in its success, demonstrating the power of pre-trained models for medical image analysis. This research advances automated disease diagnosis, particularly in resource-limited settings. Future work involves fine-tuning algorithms, exploring ensemble techniques, and enhancing interpretability for broader healthcare applications.
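As an illustration of the transfer-learning setup described (a pre-trained VGG-16 backbone with a new classification head), here is a hedged Keras sketch; the input size, head layers, optimizer settings and the directory-based data pipeline mentioned in the comments are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

# Frozen VGG-16 features + small trainable head for binary
# parasitized-vs-uninfected cell classification (illustrative settings).
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(128, 128, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data pipelines over the cell-image dataset,
# e.g. built with tf.keras.utils.image_dataset_from_directory(...), then:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```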
Hot Topics III
11 April 2024 • 09:00 - 10:35 CEST | Auditorium Schweitzer, Niveau/Level 0
Session Moderator:
Thierry Georges, Oxxius (France)
2024 Symposium Chair

9:00 hrs
Welcome and Opening Remarks
Speaker Introduction
12995-501
3D laser nanoprinting (Plenary Presentation)
Author(s): Martin Wegener, Karlsruher Institut für Technologie (Germany)
11 April 2024 • 09:05 - 09:50 CEST | Auditorium Schweitzer, Niveau/Level 0
3D laser nanoprinting based on multi-photon absorption (or multi-step absorption) has become an established commercially available and widespread technology. Here, we focus on recent progress concerning increasing print speed, improving the accessible spatial resolution beyond the diffraction limit, increasing the palette of available materials, and reducing instrument cost.
13006-501
Author(s): Vasilis Ntziachristos, Helmholtz Zentrum München GmbH (Germany)
11 April 2024 • 09:50 - 10:35 CEST | Auditorium Schweitzer, Niveau/Level 0
Biological discovery is a driving force of biomedical progress. With rapidly advancing technology to collect and analyze information from cells and tissues, we generate biomedical knowledge at rates never before attainable. Nevertheless, converting this knowledge into patient benefit remains a slow process. To accelerate the path to healthcare solutions, it is important to complement this culture of discovery with a culture of problem-solving in healthcare. The talk focuses on recent progress with optical and optoacoustic technologies, as well as computational methods, which open new paths for solutions in biology and medicine. Particular attention is given to the use of these technologies for early detection and monitoring of disease evolution. The talk further presents new classes of imaging systems and sensors for assessing biochemical and pathophysiological parameters of systemic diseases, which complement knowledge from -omics analytics and drive integrated solutions for improving healthcare.
Break
Coffee Break 10:35 - 11:00
Session 7: Augmented Reality and Holographic Display Systems
11 April 2024 • 11:00 - 12:00 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: Tomasz Kozacki, Warsaw Univ. of Technology (Poland)
12998-29
Author(s): Leehwan Hwang, Seunghyun Lee, Kwangwoon Univ. (Korea, Republic of)
11 April 2024 • 11:00 - 11:20 CEST | Londres 2/Salon 7, Niveau/Level 0
The research focuses on enhancing the uniformity of reconstructed holographic images in near-eye display systems by employing off-axis Holographic Optical Elements (HOE). Near-eye displays are crucial for augmented reality and virtual reality applications, but they often suffer from non-uniform illumination, causing image quality issues. Off-axis HOEs, an innovative optical component, have been investigated to mitigate these problems. Off-axis HOEs can significantly improve image uniformity by redirecting and diffusing light rays across the display area. This technique reduces issues like hotspots and uneven brightness, providing a more consistent and visually appealing holographic experience for users. Researchers in this study leverage their expertise in optics, holography, and display technologies to develop and implement off-axis HOEs, aiming to address the challenges associated with image uniformity in near-eye displays. This work contributes to the advancement of immersive display technologies and their applications in fields such as virtual reality and mixed reality.
12998-30
Author(s): Maksymilian Chlipala, Moncy Sajeev Idicula, Rafal Kukolowicz, Warsaw Univ. of Technology (Poland); Maria L. Cruz, Univ. Panamericana (Mexico); Juan Martínez-Carranza, Tomasz Kozacki, Warsaw Univ. of Technology (Poland)
11 April 2024 • 11:20 - 11:40 CEST | Londres 2/Salon 7, Niveau/Level 0
A holographic near-eye display (HNED) provides realistic reconstruction of 3D objects with all human physiological cues. It also has a big advantage over other near-eye display designs because it ensures three-dimensional reconstruction without the vergence-accommodation conflict. To make HNEDs a feasible technology, a large field of view (FOV) and full-color reconstruction are necessary. While a large FOV can be reached in HNED configurations by combining optical elements with a high-resolution SLM, color reconstruction presents a more challenging issue. Full-color display in an HNED can be achieved by techniques such as temporal multiplexing or frequency division, but a major disadvantage of these works is that they do not report a large FOV. In this paper, we compare time-multiplexing and frequency-division techniques applied in an HNED system that reconstructs large 3D full-color scenes using RGB illumination. The display is supported by a fast and accurate hologram generation technique. The obtained numerical and optical reconstructions prove that the display reconstructs high-quality, large color objects.
12998-31
Author(s): Sung Kyu Kim, Ki Hyuk Yoon, Jinho Yoon, Korea Institute of Science and Technology (Korea, Republic of)
11 April 2024 • 11:40 - 12:00 CEST | Londres 2/Salon 7, Niveau/Level 0
A typical near-eye AR display presents an image containing depth information, but it can only accurately focus at the depth of a specific, optically formed virtual screen. Images at depths away from the virtual screen therefore become blurred, deteriorating the clarity of the virtual information. In addition, eye fatigue occurs due to the vergence-accommodation conflict (VAC) inherent in general 3D displays, which can cause serious problems when the AR system is used for long periods. One way to address this problem is to increase the depth of field (DOF); however, optical systems that increase the depth of focus are complex and add volume and weight, so these issues must be addressed as well. This paper presents the construction principles and fabrication results of an AR optical system that implements a realistic, commercially usable extended DOF (EDOF) using a simpler optical system.
Break
Lunch Break 12:00 - 13:30
Session 8: Computational Imaging
11 April 2024 • 13:30 - 15:10 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: John J. Healy, Univ. College Dublin (Ireland)
12998-33
Author(s): Sébastien Bourdel, Olivier Gazzano, ONERA (France); Maxime Cavillon, Institut de Chimie Moléculaire et des Matériaux d'Orsay (France); Guillaume Druart, ONERA (France); Matthieu Lancry, Institut de Chimie Moléculaire et des Matériaux d'Orsay (France)
11 April 2024 • 13:30 - 13:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Hyperspectral cameras collect and process spectral information for each pixel over a given field of view and can therefore extract more information than ordinary cameras. One of the technological challenges of hyperspectral cameras is to create a compact architecture that is also robust and stable. The solution we present uses a 3D photonic integrated circuit (3D PIC) made of a 2D array of waveguides written in 3D by a femtosecond laser inside a glass chip. Compared with free-space optical systems, 3D PICs allow better control over the propagation and better mechanical stability. We will show how to perform hyperspectral imaging with an array of waveguide-based static Mach-Zehnder interferometers, then present our numerical model of the overall size of the 3D PIC, and finally describe our experiments to characterize the waveguides and find the most suitable set of inscription parameters for hyperspectral applications.
12998-34
Author(s): Jinyan Liu, Colas Schretter, Artem Shcheglov, Heidi Ottevaere, Yunfeng Nie, Vrije Univ. Brussel (Belgium)
11 April 2024 • 13:50 - 14:10 CEST | Londres 2/Salon 7, Niveau/Level 0
We propose digital correction methods for recovering aberrated spectra in a portable, low-cost, miniaturized grating-based spectrometer, which is modelled in optical and numerical simulations. To realize the digital spectrum recovery, different point spread function (PSF) modelling approaches for wavelength-dependent PSFs are proposed and implemented in four digital correction algorithms: inverse Fourier transform (IFT) deconvolution, Wiener deconvolution, Lucy-Richardson (LR), and the Landweber iterative algorithm. Results show that both the LR and Landweber algorithms can improve the spectral resolution by about a factor of two. The enhanced spectral resolution is comparable to that of commercial table-top spectrometers, while our spectrometer has a much smaller packaging volume of about one cubic inch.
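Of the four correction algorithms, Richardson-Lucy deconvolution is compact enough to sketch directly. The version below assumes a single shift-invariant 1-D PSF and a synthetic doublet spectrum, whereas the paper handles wavelength-dependent PSFs, so it is illustrative only.

```python
import numpy as np

def richardson_lucy_1d(measured, psf, n_iter=50):
    """Plain Richardson-Lucy deconvolution of a 1-D spectrum with a
    normalised instrument line shape 'psf'."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full(measured.shape, measured.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same") + 1e-12
        estimate *= np.convolve(measured / blurred, psf_mirror, mode="same")
    return estimate

# Synthetic example: two closely spaced lines blurred by a Gaussian PSF.
x = np.arange(512)
truth = np.exp(-0.5 * ((x - 250) / 2.0) ** 2) + np.exp(-0.5 * ((x - 262) / 2.0) ** 2)
psf = np.exp(-0.5 * (np.arange(-25, 26) / 6.0) ** 2)
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy_1d(blurred, psf)   # doublet becomes resolvable again
```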
12998-35
Author(s): Yue Wang, John Healy, Univ. College Dublin (Ireland)
11 April 2024 • 14:10 - 14:30 CEST | Londres 2/Salon 7, Niveau/Level 0
This paper compares Orthogonal Matching Pursuit (OMP) and Iterative Hard Thresholding (IHT) algorithms in digital holography, focusing on their ability to handle phase discontinuities, an under-explored area compared to the study of Gibbs ringing artifacts in image reconstruction. We simulate a digital holographic environment to test both algorithms, analyzing their performance and computational efficiency in the presence of phase discontinuities. Our results offer valuable insights into the advantages and limitations of OMP and IHT, with significant implications for digital holography applications in fields like medical imaging, microscopy, and non-destructive testing.
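For readers unfamiliar with one of the two algorithms compared, a minimal real-valued OMP implementation is sketched below; the noiseless Gaussian sensing matrix is an illustrative assumption, whereas the paper applies the algorithms in a simulated holographic setting.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily add the column of A most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Example: recover a 5-sparse vector of length 200 from 60 measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x_true, sparsity=5)
print(np.linalg.norm(x_hat - x_true))   # close to zero if the support is recovered
```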
12998-36
Author(s): Alessandro Tontini, Fondazione Bruno Kessler (Italy); Sonia Mazzucchi, Roberto Passerone, Nicolò Broseghini, Univ. degli Studi di Trento (Italy); Leonardo Gasparini, Fondazione Bruno Kessler (Italy)
11 April 2024 • 14:30 - 14:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Reliable operation under strong photon flux is an essential requirement for safe and robust operation of SPAD-based direct time-of-flight (d-ToF) LiDAR systems, both for autonomous driving and for industrial applications. In this work, we present a histogram post-processing method for SPAD-based d-ToF depth measurement systems that compensates for the nonlinear behavior of the SPAD, yielding a linearized histogram of timestamps even under strong background illumination and correcting the pile-up distortion that arises when the histogram of timestamps is corrupted by variability in the intensity of the background and the reflected laser light. The proposed approach is first demonstrated with simulations, based on a physical model for the computation of the optical power budget and a numerical engine for the generation of the simulated train of timestamps, and then with measurements from a real d-ToF sensor.
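The paper's own post-processing method is not detailed in the abstract. As background only, the classical Coates-style correction sketched below shows how a first-photon-only timestamp histogram can be linearized; the constant background flux, bin count and cycle count are illustrative assumptions.

```python
import numpy as np

def coates_correction(hist, n_cycles):
    """Linearize a SPAD timestamp histogram (earliest bin first) for the
    first-photon-only distortion: returns the estimated per-cycle photon
    rate in each bin."""
    hist = np.asarray(hist, dtype=float)
    earlier = np.concatenate(([0.0], np.cumsum(hist)[:-1]))
    p = hist / np.maximum(n_cycles - earlier, 1.0)
    return -np.log(np.clip(1.0 - p, 1e-12, 1.0))

# Constant background of 0.02 photons/bin/cycle: the raw histogram decays
# with time, but the corrected rate is ~0.02 in every bin (up to shot noise).
rate, n_bins, n_cycles = 0.02, 200, 100_000
rng = np.random.default_rng(0)
first_bin = rng.geometric(1.0 - np.exp(-rate), size=n_cycles) - 1
hist = np.bincount(first_bin[first_bin < n_bins], minlength=n_bins)
corrected = coates_correction(hist, n_cycles)
```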
12998-38
Author(s): Shubham Tiwari, Indian Institute of Technology Delhi (India)
11 April 2024 • 14:50 - 15:10 CEST | Londres 2/Salon 7, Niveau/Level 0
The transport of intensity equation (TIE) is a non-interferometric technique for quantitative phase imaging (QPI). However, the resolution and sensitivity with which the phase is measured via TIE are still limited by factors such as noise, boundary conditions, and conditions on the auxiliary function. In this work we use an optimization method to improve the phase maps of biological samples obtained by solving the TIE. In TIE we have intensity measurements from three different focal planes; by updating the TIE phase with the intensity information from these three planes, the sensitivity and resolution of the recovered phase can be significantly improved. A cost-minimization scheme is implemented across the planes, and in each plane the required field at focus is updated. A simultaneous pupil recovery is also possible with this method, which can provide information about the presence of aberrations. The method was applied to the TIE phase of RBCs and osteosarcoma cells, and the results show a significantly improved phase map.
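The optimization refinement is the paper's contribution and is not reproduced here. For context, the standard FFT-based TIE solution under a uniform-intensity approximation, which such refinements typically start from, can be sketched as follows; the regularization constant and the central-difference derivative in the usage comment are assumptions, and sign conventions vary between references.

```python
import numpy as np

def tie_phase(d_i_dz, i0, wavelength, dx, reg=1e-9):
    """FFT-based TIE phase retrieval under the uniform-intensity
    approximation: solves laplacian(phi) = -(k / I0) * dI/dz."""
    k = 2.0 * np.pi / wavelength
    rows, cols = d_i_dz.shape
    fy = np.fft.fftfreq(rows, d=dx)
    fx = np.fft.fftfreq(cols, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    laplace_sym = 4.0 * np.pi ** 2 * (FX ** 2 + FY ** 2)
    rhs = -(k / i0) * d_i_dz
    phi_hat = -np.fft.fft2(rhs) / (laplace_sym + reg)   # inverse Laplacian
    phi_hat[0, 0] = 0.0                                  # undefined DC term
    return np.real(np.fft.ifft2(phi_hat))

# Usage with three measured planes I_minus, I_0, I_plus separated by dz:
# d_i_dz = (I_plus - I_minus) / (2.0 * dz)
# phi = tie_phase(d_i_dz, I_0.mean(), wavelength=532e-9, dx=3.45e-6)
```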
Break
Coffee Break 15:10 - 15:30
Session 9: Computer Vision Applications
11 April 2024 • 15:30 - 16:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Session Chair: John J. Healy, Univ. College Dublin (Ireland)
12998-39
Author(s): Alim Yolalmaz, Jeroen Kalkman, Technische Univ. Delft (Netherlands)
11 April 2024 • 15:30 - 15:50 CEST | Londres 2/Salon 7, Niveau/Level 0
Non-invasive, automated, and continuous 3D plant imaging is important for studying plant development, performing digital phenotyping, and detecting plant diseases. In this study, we reconstructed 3D structural and fluorescence plant images using an automated, monocular-vision-based structure-from-motion technique requiring only two RGB images. By using different exposure durations and RGB spectral filters, we acquire both white-light structural information and fluorescence functional information in a single acquisition. The combined structural and functional information enables us to observe and locate disease in 3D on lettuce plants with autofluorescing downy mildew. We demonstrate the effect of important parameters such as exposure duration and sampling frequency on the 3D reconstruction quality. We believe that our work will help plant biologists and plant breeders understand plant-pathogen interactions and plant development, and exploit this for breeding more disease-resistant crops.
12998-40
Author(s): Erion Pikoulis, Konstantinos Blekos, Dimitrios Kosmopoulos, Univ. of Patras (Greece)
11 April 2024 • 15:50 - 16:10 CEST | Londres 2/Salon 7, Niveau/Level 0
The olive oil industry plays a significant role in the global agricultural economy, and the quality of olive oil depends greatly on the quality and ripeness of the olives used in its production. Accurate and efficient sorting and classification of olive fruit are crucial steps in optimizing olive oil yield and quality. In this work, we propose a novel approach to automating the classification of olive fruit by ripeness and quality using computer vision techniques. The visual system consists of a segmentation and classification deep network based on the YOLO architecture. In practice, despite the processing, unexpected foreign objects (e.g., leaves or twigs) may also be present, which can lead to erroneous classification into one of the existing classes. The experimental results validate the utility of the approach, with high classification accuracy against expert annotation and high detection rates for outlier objects. The speed of the system ensures a high production throughput.
12998-42
Author(s): Rongrong Qin, Weiyuan Yao, Ning Wang, Aerospace Information Research Institute (China)
11 April 2024 • 16:10 - 16:30 CEST | Londres 2/Salon 7, Niveau/Level 0
Mie lidar has been widely applied to retrieve the vertical distribution of aerosol optical coefficients. However, few studies have explored strategies for quantitatively retrieving aerosol mass profiles from lidar observations. To meet the rising demand for the spatial and temporal distribution of aerosol mass concentration, an iterative algorithm is proposed for profiling aerosol mass composition, as well as the extinction coefficient, from spaceborne dual-wavelength lidar data. By constructing the relationship between mixed aerosol mass profiles and optical properties at different wavelengths, new constraints are introduced to improve the accuracy of the lidar ratio. Meanwhile, aerosol composition profiles can also be deduced from an a priori estimation of aerosol composition and the intrinsic optical features of the aerosols. The method has been verified with simulated lidar signals and CALIOP data, suggesting potential applicability in satellite data processing.
12998-43
Author(s): Lei Liu, Xin Tan, Nanjing Univ. of Science and Technology (China)
11 April 2024 • 16:30 - 16:50 CEST | Londres 2/Salon 7, Niveau/Level 0
A forward vehicle detection method based on a vehicle-width matching detection algorithm and the AdaBoost iterative algorithm is proposed in this paper. The method first designs structural Haar features according to the characteristics of the vehicle ahead, computes the corresponding feature values of each sample using the new features combined with the integral image, and then trains weak classifiers with the AdaBoost iterative algorithm. The detection window is filtered by the vehicle-width matching algorithm to realize forward vehicle detection. Experimental results show that the target detection results obtained by this algorithm not only guarantee a good detection rate but also effectively shorten the time required for classifier training and target detection.
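The integral image is what makes Haar-feature evaluation fast enough for AdaBoost training and sliding-window detection. A minimal sketch of that building block follows; the particular two-rectangle feature and window size are illustrative, not the structural features designed in the paper.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended so box sums
    need no boundary checks."""
    ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar feature: top half minus bottom half of an
    h x w window anchored at (r, c)."""
    top = box_sum(ii, r, c, r + h // 2, c + w)
    bottom = box_sum(ii, r + h // 2, c, r + h, c + w)
    return top - bottom

img = np.random.randint(0, 256, (120, 160))
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 40, 60, 24, 36))
```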
Digital Posters

The posters listed below are available exclusively for online viewing during the week of SPIE Photonics Europe 2024.

12998-46
Author(s): Talal Bonny, Maryam Al Jaziri, Mohammad Al-Shabi, Univ. of Sharjah (United Arab Emirates)
11 April 2024 • 20:00 CEST | On Demand
Brain tumors represent a critical health challenge, underscoring the urgency of accurate detection for timely intervention. Our study addresses this vital need by employing advanced machine learning techniques. We introduce a novel approach utilizing Convolutional Neural Networks (CNNs) for precise brain tumor classification. Our methodology incorporates tailored data preprocessing, a specialized network architecture, and rigorous training to enhance accuracy. Furthermore, we integrate the renowned VGG16 architecture into our approach. Preliminary results showcase the potential of our algorithm, particularly the VGG16 variant, in surpassing conventional methods for brain tumor detection. With a remarkable 91.00% accuracy rate for VGG16 - Scenario 1 and a significantly improved 78.33% accuracy for CNN - Scenario 2, our findings highlight the superiority of our CNN-based methodology in achieving higher accuracy. As we continue to refine our approach, we anticipate making significant contributions to the medical field’s ability to accurately diagnose brain tumors.
12998-59
Author(s): Ramna Khalid, Isma Javed, MLab, STI Unit, The Abdus Salam International Centre for Theoretical Physics (Italy), Information Technology Univ. of the Punjab (Pakistan); Humberto Cabrera, MLab, STI Unit, The Abdus Salam International Centre for Theoretical Physics (Italy); Masoomed Dashtdar, Department of Physics, Shahid Beheshti Univ. (Iran, Islamic Republic of); Muhammad Qasim Mehmood, Information Technology Univ. of the Punjab (Pakistan); Muhammad Zubair, King Abdullah Univ. of Science and Technology (Saudi Arabia)
11 April 2024 • 20:00 CEST | On Demand
Quantitative phase imaging (QPI)-based study of cancerous cell morphology, viability, and proliferation attracts the attention of pathologists and researchers. In this article, we introduce a customized QPI-based imaging tool for investigating malignant blood cells for the early detection of cancer. The proposed tool measures optical path length variations, enabling label-free, high-resolution imaging of blood cells and precise quantification of cellular parameters such as volume, thickness, and dry mass. The proposed configuration, referred to as a self-referencing QPI system, uses a common-path laser beam to generate the interferogram. Moreover, the technique allows numerical focusing, so it is not necessary to place the imaging device at the image plane of the magnifying lens, thereby reducing human error and time consumption. The non-invasive nature of the proposed imaging system minimizes patient discomfort and enables real-time monitoring of disease progression.
12998-60
Author(s): Nasir Mahmood, King Abdullah Univ. of Science and Technology (Saudi Arabia)
11 April 2024 • 20:00 CEST | On Demand
This study presents a unique helicity-dependent broadband multifunctional metasurface platform to manipulate visible light for futuristic imaging applications. The proposed platform can integrate multiple optical phenomena into a single-layered metadevice to generate several uncorrelated, spin-dependent responses across the targeted visible spectrum (470–650 nm). As a proof of concept, we showcased a range of all-dielectric transmissive metadevices that carried diverse spin-multiplexed phase profiles. These devices effectively produced diffraction-limited focusing and structured light beams with unique characteristics. We conducted a thorough examination of each designed metasurface using specific visible wavelengths, and the diffracted light demonstrated superior and consistent broadband performance. The presented approach, which leverages the benefits of multifunctional platforms and compact metadevices, holds promise for various applications, such as optical interconnects, microscopy, and biomedical imaging.
Conference Chair
Vrije Univ. Brussel (Belgium)
Conference Chair
Warsaw Univ. of Technology (Poland)
Program Committee
Wyant College of Optical Sciences (United States)
Program Committee
Vrije Univ. Brussel (Belgium)
Program Committee
Univ. of Kent (United Kingdom)
Program Committee
Tsinghua Univ. (China)
Program Committee
Princeton Univ. (United States)
Program Committee
Univ. of Cambridge (United Kingdom)
Program Committee
Consejo Superior de Investigaciones Científicas (Spain)
Program Committee
Ecole Polytechnique Fédérale de Lausanne (Switzerland)
Program Committee
Univ. Nacional Autónoma de México (Mexico)
Program Committee
Univ. College Dublin (Ireland)
Program Committee
National Univ. of Ireland, Maynooth (Ireland)
Program Committee
Vrije Univ. Brussel (Belgium)
Program Committee
Toho Univ. (Japan)
Program Committee
Inha Univ. (Korea, Republic of)
Program Committee
Canon Information Systems Research (Australia)
Program Committee
Chiba Univ. (Japan)
Program Committee
Univ. of Patras (Greece)
Additional Information

View call for papers

 

What you will need to submit:

  • Presentation title
  • Author(s) information
  • Speaker biography (1000-character max including spaces)
  • Abstract for technical review (200-300 words; text only)
  • Summary of abstract for display in the program (50-150 words; text only)
  • Keywords used in search for your paper (optional)
  • Check the individual conference call for papers for additional requirements (i.e. extended abstract PDF upload for review or instructions for award competitions)
Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.