Optics and photonics, like their electronics counterparts, have slowly but steadily migrated from the analog age to the digital age. The term 'digital' refers not only to the end functionality, as it does in 'digital electronics,' but also to the way optical components are designed, fabricated and integrated within systems:

    Numerical algorithms allow the design of nonconventional optics from macroscopic (freeform optics) to nanoscopic scales (metasurfaces, photonic crystals, plasmonics...).
    Novel wafer-scale lithography techniques, freeform diamond turning and additive optical manufacturing processes allow for their mass production.
    Dynamic behavior is a key feature of digital optics, as in switchable, tunable and reconfigurable elements.
    Digital techniques enhance their functionality, as with computational imaging, display or sensing.

This conference aims at combining all aspects of digital optics around the five following topics, which are today gaining massive interest in academia, research institutions, defense, industry and consumer systems.


Novel optics for augmented, mixed and virtual reality systems (AR, MR, VR)
  • novel optics for imaging, display and sensing in compact AR/VR/MR systems
  • technologies and techniques to improve visual comfort in binocular near-to-eye displays
  • optics, displays and optical architectures matched to the human visual perception system.

Digital optics for image formation
  • LCOS, micro-OLED, DLP and micro-iLED display technologies for AR and VR
  • novel holographic and light-field display technologies
  • novel laser and LED scanning display engines.

Computational optics for display, imaging and sensing
  • computational imaging and display techniques and technologies
  • single-pixel, lensless and integral flat imaging and sensing devices
  • compression technologies for holographic and light-field displays, and their standardization.

Switchable, tunable and digitally reconfigurable optics
  • dynamic vision impairment correction
  • dynamic digital optics for varifocal, multifocal, light-field and holographic displays
  • tunable optics to enhance visual comfort (VAC mitigation, pupil steering, optical foveation...).

Digital optics for sensing
  • compact gaze, eye and pupil tracking, iris recognition systems and algorithms
  • 3D depth cameras for spatial mapping
  • novel sensors for head and hand tracking (optical and non-optical).
    Conference 11788

    Digital Optical Technologies 2021
    • Digital Optical Technologies Plenary Session
    • Special Focus: Keynote Session I
    • Special Focus: Keynote Session II
    • Special Focus: Keynote Session III
    • Optical Metrology Plenary Session
    • 1: Digital Optics for AR, VR and MR Systems
    • 2: Novel Materials and Processes for Digital Optics in AR
    • 3: Digital Optics for Sensing
    • 4: Computational Optics for Display, Imaging and Sensing
    • 5: Digital Optics for Image Formation
    • 6: Switchable, Tunable and Digitally Reconfigurable Optics
    Session LIVE: Digital Optical Technologies Plenary Session
    Livestream: 21 June 2021 • 12:30 - 13:30 CEST | Zoom
    11788-600
    Author(s): Hiroki Kikuchi, Sony Corp. (Japan)
    On demand
    Sony is leveraging its "3R Technology" - Reality, Real-time, and Remote - to inspire Kando (emotion) value creation. Immersive, large-screen displays give us a sense of reality, as if we were traveling around the world while staying at home. Automotive sensors give real-time feedback to drivers, providing safety and comfort. AR/MR/VR technology connects people who are separated remotely and enriches their communication. Photonics is one of Sony's core technologies and the foundation of the core devices which create the values of Reality, Real-time and Remote. In this talk, Sony's unique photonic device technologies are introduced, including micro-displays, AR/MR/VR, light field displays and laser devices. The prospects for the evolution of these technologies will also be presented.
    Session LIVE: Special Focus: Keynote Session I
    Livestream: 21 June 2021 • 14:15 - 15:45 CEST | Zoom
    11786-33
    Author(s): Adam P. Wax, Duke Univ. (United States)
    On demand
    OCT adoption is somewhat limited by the lack of an effective means for obtaining adequate image penetration in highly scattering tissue. One example is the ability to observe subtle changes in the layers of skin tissue where microcirculation occurs, generally anywhere between 1-4 mm deep, a depth currently beyond the penetration of traditional OCT systems. DA-OCT offers the attractive potential to image deeper into tissues, exposing morphology that may otherwise go undetected. The approach is based on off-axis scanning, which uses distinct illumination and collection apertures to accept a larger proportion of the quasi-ballistic signal. Here we present the translation of DA-OCT to imaging large tissue volumes using a broadband SLD centered about 1.3 μm, paired with a dynamic focus-tracking method to create an enhanced depth of focus.
    11788-1
    Author(s): Frederik Bachhuber, Olaf Claussen, Zhengyang Lu, Clemens Ottermann, Simone Ritter, Bianca Schreder, Ruediger Sprengard, Stefan Weidlich, Ute Woelfel, SCHOTT AG (Germany)
    On demand
    Waveguide technology is widely believed to constitute the most promising approach to realizing affordable and fully immersive Augmented Reality (AR) / Mixed Reality (MR) devices. For all major technology platforms (diffractive, reflective, or holographic), specialty-grade high-index optical glass is the central component for achieving some of the key features of AR devices, such as field of view, MTF, or weight. We will provide insights into SCHOTT’s roadmap for dedicated glass development for the AR sector and discuss the latest achievements with high relevance for the industry. Producing an optical glass which enables the entry of AR devices into the consumer market is a game of trade-offs between the desired properties.
    Session LIVE: Special Focus: Keynote Session II
    Livestream: 22 June 2021 • 08:30 - 10:15 CEST | Zoom
    11782-14
    Author(s): Saoucene Hassad, Lab. d'Acoustique de l'Univ. du Maine, CNRS (France); Kouider Ferria, Larbi Bouamama, Univ. Ferhat Abbas Sétif 1 (Algeria); Pascal Picart, Lab. d'Acoustique de l'Univ. du Maine (France)
    On demand
    Data acquisition and processing are critical issues for high-speed applications, especially for three-dimensional imaging and analysis. Digital holographic tomography is a potential approach that can quantitatively measure the three-dimensional distribution of the refractive index of any phase object or transparent specimen. Generally, tomography is operated by acquiring projections of the sample and numerically mapping those projections onto a 3D representation using an inverse problem, such as the filtered back-projection algorithm. From the practical point of view, there are mainly two ways of recording the data: the set of data can be acquired by varying the illumination angle, or by rotating the sample. In both approaches, the sample and the optical set-up must be highly stationary while the illumination beam or the object is rotated. Another option is to acquire the necessary set of data simultaneously in a single shot and then process it, which would have the advantage of permitting 3D imaging of non-stationary targets or transient, time-varying objects. The use of multiple camera sensors is complicated and not cost-efficient, so this paper presents a proof of concept for a novel approach based on three-color digital holography and the use of a single monochromatic sensor. The principle is based on off-axis holography and spatial multiplexing of multi-wavelength holograms. Three wavelengths from three different laser lines are used to illuminate the target at different incidence angles. The reference beams from the lasers are combined into a single three-color beam, and the spatial frequencies of the reference waves are adjusted so as to allow the spatial multiplexing of digital holograms with the monochromatic sensor. After de-multiplexing and processing the color holograms, the amplitude and phase of the target along the views are obtained. Further processing to compensate for aberrations of the set-up is proposed and discussed. As proof of concept, we provide results for the 3D shape of a ball reconstructed using the inverse Radon transform. These first results are adequate to be exploited in the study of the acoustic field of an ultrasound transducer at a frequency of 40 kHz.
    11785-34
    Author(s): Byoungho Lee, Youngjin Jo, Dongheon Yoo, Juhyun Lee, Seoul National University (Korea, Republic of)
    On demand
    Near-eye displays (NEDs) for augmented and virtual reality (AR/VR) are in the spotlight because they can provide far more immersive experiences than were possible before. By virtue of recent progress in sensors, optics, and computer science, several commercial products are already available, and the consumer market is expanding rapidly. However, several challenging issues remain before AR and VR NEDs can become part of our daily lives. Here, we will explore these issues and important topics for AR and VR, and introduce some ideas to overcome them: diffractive optical elements (DOEs), retinal projection displays, and 3D displays with focus cues. First, unlike VR with its simple optical system, AR, which needs to merge an artificial image with the outer scene, requires additional optics. Diffractive elements have the merit of being thin and transparent, making them suitable for the image combiner. Among them, holographic optical elements (HOEs) have great potential, as they can record a desired volume grating, from a simple lens to a complex wavefront, using light interference. Second, for NEDs to be worn for a long time, they must address visual fatigue as well as form factor. Retinal projection displays can effectively prevent the vergence-accommodation conflict even with a simple optical design: the light rays from the display are adjusted to converge into a small point using a lens, which ensures a wide depth range in which the images are clearly visible. Furthermore, it is possible to provide observers with accurate focus cues to alleviate visual fatigue via multi-layer displays and holographic displays. Recently, we conceived a tomographic NED that can reproduce dense focal planes. We confirm that this system provides quasi-continuous focus cues, semi-original contrast, and considerable depth of field. The experimental results of our prototypes are explained, and we also describe recent activities on the use of deep learning in holographic NED systems.
    11784-7
    Author(s): Claudia Conti, CNR-Istituto di Scienze del Patrimonio Culturale (Italy); Alessandra Botteon, Istituto di Fisica del Plasma "Piero Caldirola", Consiglio Nazionale delle Ricerche (Italy); Christopher Corden, Ioan Notingher, The Univ. of Nottingham (United Kingdom); Pavel Matousek, STFC Rutherford Appleton Lab. (United Kingdom)
    On demand
    Recent advances in micro Spatially Offset Raman Spectroscopy (micro-SORS), an optical spectroscopy method able to non-invasively investigate the molecular composition of the subsurface of turbid materials at the microscale, will be presented. Recent research topics include the application of micro-SORS to non-invasively reconstruct the diffusion profiles of conservation treatments applied in calcium-based matrices; the first in-situ surveys of prestigious panel paintings with a portable micro-SORS prototype derived by modifying a commercial portable Raman spectrometer; and proof-of-concept experiments coupling micro-SORS with the Time-Gated Raman Spectral Multiplexing method for the non-invasive suppression of fluorescence originating from the subsurface.
    Session LIVE: Special Focus: Keynote Session III
    Livestream: 23 June 2021 • 14:00 - 15:30 CEST | Zoom
    11783-1
    Author(s): Lynford L. Goddard, Univ. of Illinois (United States)
    On demand
    In this talk, I will discuss several new forms of optical microscopy that my group has developed in recent years. Our goal was to recover tiny nanoscale features using a conventional microscope. This problem is challenging because of the low signal-to-noise ratio for such features. In the first method, we introduced the regularized pseudo-phase and used it to measure nanoscale defects, minute amounts of tilt in patterned samples, and severely noise-polluted nanostructure profiles in optical images. We also extended the method to study the dynamics of droplet condensation using environmental scanning electron microscopy. In the second method, we built upon the electrodynamic principles (mechanical work and force) of the light-matter interaction and applied them to sense sub-10-nm-wide perturbations. In the third method, we introduced the concepts of electromagnetic canyons and non-resonance amplification using nanowires and applied these concepts to directly view individual perturbations (25-nm radius = λ/31) in a nanoscale volume.
    11786-67
    Author(s): Donald T. Miller, Indiana Univ. (United States)
    On demand
    Vision starts when light is captured by photoreceptors, specialized cells in the retina that set fundamental limits on what we can see and are unfortunately lost in many blinding diseases. While photoreceptors carry considerable clinical and scientific importance in ophthalmology and vision science, means to assess their function and health at the level of individual cells remain limited. Recent advances in adaptive optics optical coherence tomography (AO-OCT) imaging systems have enabled photoreceptor cells to be observed and tracked with unprecedented 3D resolution and sensitivity in the living human eye. This imaging capability has allowed the dynamics of these cells to be studied in exquisite detail, in particular the nanoscale transients the cells generate after being stimulated by light. These changes have been found to carry fundamental information about the photoreceptor’s physiology. Here, I will describe the capability of AO-OCT to image, track, and quantify these minuscule cell dynamics and how these measurements are being used to study vision and to assess cell dysfunction and health in disease.
    Session LIVE: Optical Metrology Plenary Session
    Livestream: 23 June 2021 • 16:30 - 17:30 CEST | Zoom
    11782-500
    Author(s): Peter J. de Groot, Zygo Corporation (United States)
    On demand
    Optical instruments have long played a role in manufacturing, and strong arguments favor accelerated adoption of fast, non-contact measurements of surfaces, shapes and positions as an enabler for Industry 4.0. High-precision techniques such as optical interferometry have advanced considerably and have found applications ranging from semiconductor wafer lithography to automotive engine production. Even though there are clear benefits, there are obstacles to the more widespread adoption of optical techniques for dimensional measurements. Many of these obstacles are technical, such as vibration sensitivity and metrological traceability, but others reflect the cultural gaps between academia, makers of optical instruments, standards organizations and end users. In this talk, I propose that understanding these cultural differences can assist in advancing optical methods for the most critical needs of data-driven manufacturing.
    Session 1: Digital Optics for AR, VR and MR Systems

    Presentations scheduled in this session will be live-streamed on Monday 21 June, 16:15 to 18:10 hrs CEST


    To view the presentation timing and to connect to this live session, please follow the Live Link at:
    https://spie.org/digital-optical-technologies/event/monday-live-stream-presentations-digital-optical-technologies/2601620

    The link will be live 15 minutes prior to the announced start of the session.

    Note that times for the live broadcast are all Central European Summer Time, CEST (UTC+2:00 hours)
    11788-3
    Author(s): Kristine Kalnica-Dorosenko, Nadezda Brujeva, Aiga Svede, Univ. of Latvia (Latvia); Sandra Valeina, Children's Clinical University Hospital (Latvia)
    On demand
    The classical treatment option for amblyopia is occlusion of the non-amblyopic eye. The newest methods involve specialized computer and phone games and applications that involve both eyes in visual processing during treatment and stimulate binocularity. The aim of this work was to assess the efficiency of the specialized phone application Duovision® in the treatment of amblyopia in preschool-age children. There were 30 participants (5-8 years old): 16 had occlusion therapy and 14 played the specialized phone application Duovision®. The visual acuity of the amblyopic eye as well as stereopsis were evaluated at near and far distances before the treatment and 2 and 4 months after the beginning of treatment. The results show statistically significant improvement in visual acuity and stereovision in both treatment groups after four months of therapy, with a similar extent of improvement in both groups. Specialized phone applications for amblyopia treatment may be recommended to patients from age 3, once they are able to use a mobile phone, who want to improve the visual acuity of the amblyopic eye and are not willing to use occlusions. The only requirement for using specialized applications is that patients need to have binocular single vision. In conclusion, the use of specialized phone applications is an alternative to occlusion therapy for amblyopia treatment.
    11788-4
    Author(s): Ivan Naumov, Don State Technical Univ. (Russian Federation); Mikhail Sinakin, Olga Sinakina, LLC "DAR" (Russian Federation); Viacheslav V. Voronin, Moscow State Univ. of Technology "STANKIN" (Russian Federation)
    On demand
    The COVID-19 pandemic has unexpectedly transformed access to, and the organization of, repair services for industrial equipment. Operation, maintenance and breakdowns of industrial equipment require interaction with the specialists of the equipment manufacturer. The personal presence of a specialist repairman at an industrial facility leads to high financial costs, consisting of losses from prolonged downtime of equipment and production in general, plus transportation costs. A way out in such situations can be remote collaboration between enterprise engineers and an expert from the equipment supplier, organized using a remote-assistant hardware and software complex. The article compares AR systems across numerous parameters in the following groups: software functionality, technical requirements for software operation, data security, and compatibility. This analysis will allow potential users of such systems to determine the optimal implementation approach in production based on the required parameters. The correct choice of such systems will ensure that employees can work safely and efficiently in production, reduce the risk of incorrect work and reduce production downtime.
    11788-5
    Author(s): Stefano Gampe, Ulrich Haiss, Oliver Vauderwange, Dan Curticapean, Hochschule Offenburg (Germany)
    On demand
    The paper describes the implementation of practical laboratory settings in a virtual environment. With the entry of VR glasses into the mass market, there is a chance to establish educational and training applications for teaching materials and practical work. Our project therefore focuses on the realization of virtual experiments and environments that give users a deep insight into selected subfields of optics and photonics. Our goal is not to substitute the hands-on experiments but to extend them. By means of VR glasses, the user is offered the possibility to view the experiment from several angles and to make changes through interactive control functions. During the VR application, additional context-related information is displayed: using object recognition, the specific graphics and texts for the respective object are loaded and shown at the appropriate place. Thus, complex facts are conveyed in an informative way. The prototype is developed using the Unity engine and can thus be exported to different platforms and end devices. Another major advantage of virtual simulations over the real situation is the high degree of controllability as well as easy repeatability: with slight modifications, entire experiments can be reused. Our research aims to acquire new knowledge in the field of e-learning in association with VR technology, and here we try to answer a core question about the compatibility of the individual media components.
    Session 2: Novel Materials and Processes for Digital Optics in AR

    Presentations scheduled in this session will be live-streamed on Monday 21 June, 16:15 to 18:10 hrs CEST


    To view the presentation timing and to connect to this live session, please follow the Live Link at:
    https://spie.org/digital-optical-technologies/event/monday-live-stream-presentations-digital-optical-technologies/2601620

    11788-7
    Author(s): Martin Bues, Isabel Pilottek, Stephan Prinz, DELO Industrie Klebstoffe GmbH & Co. KGaA (Germany)
    On demand
    Imprint materials for optical applications can have a multitude of different properties, dictated by the function they must fulfill. Apart from optical properties such as refractive index, wavelength-dependent transmission or scattering behavior, a frequently neglected parameter is the thermo-mechanical behavior. We will discuss the importance of balancing all these properties to achieve the best performance in different application and production scenarios, based on simulations and experimental tests.
    11788-8
    Author(s): Niyazi Ulas Dinc, Giulia Panusa, Christophe Moser, Demetri Psaltis, Ecole Polytechnique Fédérale de Lausanne (Switzerland)
    On demand
    Obtaining a varying refractive index distribution has long attracted great interest in the optics community as a route to gradient-index (GRIN) optics. The conventional way to store and process data in GRIN media is through volume holograms, where the recording is done by optical means, which prevents independently accessing each point in the volume. Additive manufacturing, specifically two-photon polymerization, provides this ability. Considering the scalability advantage of 3D implementations of computation architectures and the power and speed advantages of optics, there lie many opportunities for additively manufactured GRIN optics performing complex tasks. Independent access to each voxel in the fabrication volume opens the way for digital optimization techniques to design GRIN optics, since each calculated voxel can be translated into fabrication. In this work, Learning Tomography (LT), a nonlinear optimization algorithm originally developed for optical diffraction tomography, is used as the optimization framework to calculate the refractive index distribution needed to perform computation tasks such as matrix multiplication. Here, instead of imaging an object as in optical diffraction tomography, we calculate the 3D GRIN element that performs the desired task as defined by its input-output relation, which can be chosen such that a computational functionality is satisfied. We report functional, robust GRIN elements whose refractive index dynamic range (>0.005) is comparable to that of conventional holography materials. We present the digital optimization methodology, with details on the beam propagation method as the forward model and the corresponding error-reduction scheme for the desired input-output mapping, along with experimental verification of the approach and details of the fabrication process by additive manufacturing.
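The beam propagation method serving as the forward model in such an optimization alternates a diffraction step in the frequency domain with a thin phase screen per index slice. The following is a purely illustrative paraxial split-step sketch in numpy (our own simplified variant; grid, wavelength and index values are assumptions, not the authors' implementation):

```python
import numpy as np

def bpm_propagate(field, n_slices, dz, wavelength, dx, n0=1.5):
    """Split-step beam propagation through a stack of index slices.

    field    : complex 2D field at the input plane
    n_slices : iterable of 2D refractive-index maps, one per z-step
    dz, dx   : axial step and transverse sampling (same length units)
    n0       : background index used for the diffraction half of each step
    """
    ny, nx = field.shape
    k0 = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    # Paraxial (Fresnel) transfer function for one dz step in the background medium.
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2) / n0)
    for n_map in n_slices:
        field = np.fft.ifft2(np.fft.fft2(field) * H)         # diffraction step
        field = field * np.exp(1j * k0 * (n_map - n0) * dz)  # index-contrast phase screen
    return field
```

Because both steps are pure phase operations, the propagation is unitary: a plane wave in a uniform medium passes through unchanged in magnitude, and total energy is conserved, which is a useful sanity check on any BPM implementation.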
    11788-9
    Author(s): Friedrich-Karl Bruder, Johannes Frank, Sven Hansen, Alexander Lorenz, Christel Manecke, Richard Meisenheimer, Covestro AG (Germany); Jack Mills, Covestro LLC (United States); Lena Pitzer, Igor Pochorovski, Thomas Roelle, Covestro AG (Germany)
    On demand
    Bayfol® HX photopolymer films have proven themselves as easy-to-process recording materials for volume holographic optical elements (vHOEs) and are available at industrial scale. Their full-color (RGB) recording and replay capabilities are two of their major advantages. Bayfol® HX is compatible with plastic processing techniques like thermoforming, film insert molding and casting. Therefore, Bayfol® HX has made its way into applications in the field of augmented reality such as head-up displays (HUD) and head-mounted displays (HMD), in free-space combiners, in plastic optical waveguides, and in transparent screens. Bayfol® HX can be adopted for a variety of applications. To open up further applications, we address sensitization into the near-infrared region (NIR) and increase the achievable index modulation n1 beyond 0.06. In this paper, we will report on our latest developments in these fields.
    11788-31
    Author(s): David Stark, Carsten Schulze, Marcel Demmler, Matthias Nestler, Michael Zeuner, scia Systems GmbH (Germany)
    On demand
    Localized ion beam etching has been adapted for fabricating surface relief gratings with locally varying slant angle and trench depth. A hard mask provides basic grating features like period and duty cycle. The anisotropic etch process transfers the hard mask pattern into the substrate. The ion beam dwell time defines the trench depth while the angle of incidence defines the slant angle; both can be varied locally and independently of each other. Reactive ion beam etching prevents redeposition of sputtered material to the trench side walls. The process is available on wafer level and thus scalable for volume production.
    Session 3: Digital Optics for Sensing

    Presentations scheduled in this session will be live-streamed on Tuesday 22 June, 13:15 to 14:00 hrs CEST



    To view the presentation timing and to connect to this live session, please follow the Live Link at:
    https://spie.org/digital-optical-technologies/event/tuesday-live-stream-presentations-digital-optical-technologies/2601621

    11788-10
    Author(s): Johannes Meyer, Thomas Schlebusch, Robert Bosch GmbH (Germany); Hans Spruit, Jochen Hellmig, TRUMPF Photonic Components B.V. (Netherlands); Enkelejda Kasneci, Eberhard Karls Univ. Tübingen (Germany)
    On demand
    The integration of gaze gesture sensors in next-generation smart glasses will enable novel interaction concepts. However, consumer smart glasses are very demanding in the domains of power consumption, integration capability and robustness against ambient illumination. We propose a novel gaze gesture sensor based on laser feedback interferometry (LFI), which provides eye features like rotational velocity directly without demanding image processing. In combination with a tailored gaze gesture classification algorithm, exceptional classification performance and negative latency have been achieved.
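Laser feedback interferometry encodes target velocity in the Doppler beat frequency of the self-mixing signal; for motion along the beam axis the standard relation is v = λ·f_D/2. A trivial helper illustrating that relation (function and parameter names are ours, not taken from the paper):

```python
def doppler_velocity(f_beat_hz: float, wavelength_m: float) -> float:
    """Line-of-sight velocity from the self-mixing Doppler beat frequency.

    v = lambda * f_D / 2; the factor of 2 accounts for the round trip
    of the light to the moving surface and back into the laser cavity."""
    return wavelength_m * f_beat_hz / 2.0
```

For example, with an 850 nm laser (a typical near-infrared wavelength, assumed here) a 1 MHz beat frequency corresponds to 0.425 m/s along the beam, which illustrates why such a sensor can report rotational eye velocity without any image processing.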
    11788-11
    Author(s): Trung-Hieu Tran, Sven Simon, Univ. Stuttgart (Germany); Bo Chen, Detlef Russ, Daniel Claus, Institut für Lasertechnologien in der Medizin und Messtechnik (Germany)
    On demand
    This paper discusses a 3D depth-sensing system for mobile applications consisting of a structured-light illuminator based on a vertical-cavity surface-emitting laser (VCSEL) array and an embedded imaging system equipped with a high-speed CMOS sensor and a field-programmable gate array (FPGA) device. For structured-light projection, an elliptical array pattern generated with two-photon-polymerized micro-optics on top of the VCSEL is employed. To solve the correspondence problem and estimate object distances, we propose lightweight and robust algorithms that favor hardware implementation. We validate the proposed approach with initial experimental results.
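Once the correspondence problem is solved, each matched feature yields a disparity that a rectified pinhole model converts to depth via Z = f·b/d. A minimal sketch of that triangulation step, with hypothetical parameter values (the paper does not state its calibration):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole triangulation for a rectified projector-camera pair: Z = f * b / d.

    Units: focal length and disparity in pixels; baseline and returned depth
    in metres. A larger disparity means a closer object."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For instance, an assumed 800-pixel focal length and 5 cm projector-camera baseline place a feature with 20 pixels of disparity at 2 m; the same arithmetic maps cheaply onto FPGA fixed-point logic, which is one reason disparity-based pipelines suit embedded hardware.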
    11788-13
    Author(s): Markus Miller, Alexandar Savic, Bedouin Sassiya, Susanne Menzel, Karl Joachim Ebeling, Rainer Michalzik, Univ. Ulm (Germany)
    On demand
    We present an indirect time-of-flight 3D imaging system using a common image sensor in combination with an electroabsorption modulator (EAM) that allows the depth information of a 3D scene to be captured in a single shot with the high lateral resolution of today’s image sensors. The main system components are a laser source for illumination, a standard image sensor, and a large-area (about one square millimeter) transmissive EAM in front of the sensor. Both the modulator and the laser source are modulated with the same frequency. The depth information is obtained from the phase delay of the light emitted from the source, scattered at the object and arriving at the sensor. The EAM has a p-i-n layer structure made from AlGaAs semiconductors on a GaAs substrate, with InGaAs quantum wells (QWs) in the intrinsic (i-) region for an operating wavelength of around 940 nm. Using the quantum-confined Stark effect, the absorption of the QWs, and thus the light transmission through the device, can be changed by varying the applied reverse bias voltage. The modulation contrast and operation frequency of both EAM and source strongly influence the depth resolution of the camera. We have designed, processed, and experimentally characterized large-area resonant EAMs with low insertion loss and high extinction ratios beyond 6 dB. We measured the dynamic behavior of the fabricated EAMs and derived the equivalent circuit parameters, which are used to devise a passive external circuit improving the modulation speed to the 100 MHz range. The static and dynamic operation characteristics of high-power vertical-cavity surface-emitting laser (VCSEL) array sources have been investigated, as well as their matching with the resonant modulators. Interfacing the modulator to a Raspberry Pi image sensor has allowed us to build a first prototype camera system, the characterization of which will be presented at the conference.
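The phase-delay principle behind indirect time-of-flight imaging is commonly implemented with four correlation samples ("buckets") taken 90° apart, from which the phase and then the depth follow. A generic sketch of that demodulation (names and the 100 MHz example are ours for illustration; the paper does not detail its readout):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_from_buckets(c0, c90, c180, c270):
    """4-bucket demodulation: correlation samples c_k ~ cos(phi - theta_k)
    taken at theta = 0/90/180/270 degrees give the modulation phase via atan2."""
    return np.arctan2(c90 - c270, c0 - c180) % (2 * np.pi)

def depth_from_phase(phase_rad, f_mod_hz):
    """Round-trip phase delay -> object distance in metres.

    The factor 4*pi (rather than 2*pi) accounts for the light travelling
    to the object and back; the unambiguous range is c / (2 * f_mod)."""
    return C * phase_rad / (4 * np.pi * f_mod_hz)
```

At a modulation frequency in the 100 MHz range mentioned in the abstract, the unambiguous range is c/(2f) ≈ 1.5 m, and a phase delay of π corresponds to roughly 0.75 m.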
    11788-14
    Author(s): Zhihui Li, XiongJun Cao, Xianlin Song, Nanchang Univ. (China)
    On demand
    The visualization software Amira is used to build three-dimensional signal models for photoacoustic imaging (PAI). Using threshold adjustment, the reconstructed three-dimensional high-resolution photoacoustic image model contains all the high-resolution information of the original images, which means that the image can be reconstructed by a fusion method. Compared with other methods, the method in this paper realizes high-resolution fusion of three-dimensional photoacoustic information and is flexible, fast, and efficient. It can greatly reduce time and mechanical layout cost, and expand the scope of high-resolution imaging. The method facilitates subsequent applications of PAI and has great advantages for short-term dynamic physiological imaging with high resolution requirements.
    Session 4: Computational Optics for Display, Imaging and Sensing

    Presentations scheduled in this session will be live-streamed on Tuesday 22 June, 14:20 to 15:20 hrs


    To view the presentation timing and to connect to this live session, please follow the Live Link at:
    https://spie.org/digital-optical-technologies/event/tuesday-live-stream-presentations-digital-optical-technologies/2601621

    The link will be live 15 minutes prior to the announced start of the session.

    Note that times for the live broadcast are all Central European Summer Time, CEST (UTC+2:00 hours)
    11788-15
    Author(s): Jing Wang, Bo Li, Aojie Zhao, Xianlin Song, Nanchang Univ. (China)
    On demand
    We build a virtual simulation platform for compressed-sensing photoacoustic tomography by combining compressed-sensing reconstruction algorithms with photoacoustic imaging based on the k-Wave simulation toolbox. On the one hand, compressed sensing can reduce sampling rates, accelerating imaging. On the other hand, it can relax the demands on hardware devices and facilitate the transmission and storage of data. The k-Wave toolbox is used to build simulation models for the propagation of photoacoustic fields, the recording of photoacoustic signals, and image reconstruction. We validated the performance of the simulation platform by imaging a vascular network. The results show that the virtual simulation platform for compressed-sensing photoacoustic tomography can achieve high-quality photoacoustic imaging with less data. The virtual platform can provide theoretical guidance for the application of compressed sensing in photoacoustic imaging.
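As an illustration of the recovery step such a platform relies on, here is a minimal compressed-sensing sketch using a generic random sensing matrix and ISTA (iterative soft-thresholding), rather than the actual photoacoustic forward model used by the paper; all dimensions and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D problem: a k-sparse signal recovered from m < n random measurements.
n, m, k = 256, 96, 6
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(0.0, 1.0, size=k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # random sensing matrix
y = A @ x_true                                      # compressed measurements

# ISTA for min 0.5 * ||Ax - y||^2 + lam * ||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L    # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

With far fewer measurements than unknowns (96 vs. 256), the sparse signal is recovered to a small relative error, which is the core data-reduction argument of the abstract.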
    11788-17
    Author(s): Kota Kumagai, Shun Miura, Yoshio Hayasaki, Utsunomiya Univ. (Japan)
    On demand
    11788-18
    Author(s): Protim Bhattacharjee, Anko Börner, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
    On demand
    11788-19
    Author(s): Mohamed Abdelazim, Home (Canada); Ahmed Hamza, Univ. of Portsmouth (United Kingdom); Abdelrahman Abdelazim, Blackpool and The Fylde College (United Kingdom); Djamel Ait-Boudaoud, Univ. of Portsmouth (United Kingdom)
    On demand
    High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is the state-of-the-art video coding standard developed by the Joint Collaborative Team on Video Coding (JCT-VC). It achieves up to 50% better data compression at the same level of video quality than its predecessor (H.264/AVC). One of the main challenges of HEVC coding is its overall performance, as the improved quality and bit rate of the standard come with significant performance overhead. This motivates significant research into reducing the overall complexity. In this paper, we explore the effect of applying downsampling and upsampling of coded video on the performance of HEVC. The downsampling is applied to all video frames before the encoding process begins, and the upsampling is applied to all video frames after the decoding process is completed. We use an average filter for downsampling and a machine-learning-based network (SRCNN) for upsampling. In contrast to other methods, the downsampling and upsampling are applied at the frame level, outside the encoding/decoding processes, and not at the block level. Our experiments show that downsampling and upsampling can improve HEVC encoding/decoding performance by up to 50% in some sequences, with limited impact on the output encoded bit rate and decoded video quality. The performance comparison is done using different quantization values.
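A minimal sketch of the frame-level pre/post-processing described above, using an average filter for 2x downsampling; the nearest-neighbour upsampling is only a placeholder for where the paper applies the learned SRCNN network, and the sample frame is illustrative.

```python
import numpy as np

def downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Average-filter 2x downsampling of a whole frame before encoding."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_2x(frame: np.ndarray) -> np.ndarray:
    """Placeholder nearest-neighbour 2x upsampling; the paper uses SRCNN here."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

frame = np.array([[10., 20., 30., 40.],
                  [10., 20., 30., 40.],
                  [50., 60., 70., 80.],
                  [50., 60., 70., 80.]])
small = downsample_2x(frame)    # this quarter-resolution frame goes to the encoder
print(small.tolist())           # [[15.0, 35.0], [55.0, 75.0]]
restored = upsample_2x(small)   # decoder output would be fed to SRCNN instead
print(restored.shape)           # (4, 4)
```

Because both steps act on whole frames outside the codec, the HEVC encoder and decoder themselves run unmodified, at a quarter of the pixel count.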
    11788-30
    Author(s): Tom N. Jacoby, Andrew A. Herbert, Manuel Caraballo-Sanchez, Daniel P. Rhodes, Rob E. Stevens, Adlens Ltd (United Kingdom)
    On demand
    Head-mounted display (HMD) technologies are improving in resolution and brightness, but generally do not solve three key issues: prescription, accommodation, and presbyopia. Eyeglasses worn within head-mounted devices reduce their optical quality, eye-tracking efficacy, and comfort; they also add stray light and reflections and increase bulk. Fixed inserts are more compact, but require many stock keeping units (SKUs), are incompatible with shareability, and have achieved a low market share. Adjustable lenses present a low-SKU, integrated, on-demand solution to these issues, but with some remaining technology challenges. We show how the spherical-optics adjustable non-round fluid-filled lens may be extended to general ophthalmic prescriptions by the inclusion of astigmatism correction on an arbitrary axis. We also describe methods to produce the long-lifetime fluid-filled lens with an anti-reflective surface. Finally, we define the rules for building a minimal-thickness, minimal-weight liquid lens.
    Session 5: Digital Optics for Image Formation

    Presentations scheduled in this session will be live-streamed on Tuesday 22 June, 15:40 to 16:35 hrs CEST; see the Live Link above.
    11788-20
    Author(s): Peter Brick, Alexander Günther, Stefan Groetsch, OSRAM Opto Semiconductors GmbH (Germany)
    On demand
    Progress in LED technology has enabled a variety of new application scenarios, including in automotive exterior lighting. A characteristic feature of this field is the dynamic behavior of the lighting scenario. Whereas in the past mechanical adjustments or several disjoint light sources were necessary, the newest µLED technology enables a high-resolution source array based on a monolithic stack of IC chip, opto chip, and converter layer. Still, an optical system is required to project the µLED array onto the road. The optics discussed here has to serve multiple functions, providing imaging quality, bright illumination, and stray light control.
    11788-21
    Author(s): Tatsuya Ichikawa, Akitsuna Takagi, Sony Corp. (Japan); Nishiki Yamada, Kyushu Univ. (Japan); Kazuichiro Itonaga, Sony Corp. (Japan); Hajime Nakanotani, Chihaya Adachi, Kyushu Univ. (Japan)
    On demand
    There are currently various near-infrared (NIR) light sources for sensing applications. To expand the application of NIR sensing, we are advancing the research and development of a self-emissive NIR-OLED microdisplay featuring a compact form factor and active drive as a new sensing light source. In the NIR region, where the energy gap is narrow, excitons of organic emitters readily decay non-radiatively to the ground state, so it is challenging to obtain high-efficiency light emission. To overcome this problem, a TADF (thermally activated delayed fluorescence) material with a rigid, strongly electron-withdrawing accepting unit is used as an assisting dopant for a molecule that fluoresces in the 900 nm band. This realizes near-infrared emission with longer wavelength, higher efficiency, and higher durability than a conventional device. Furthermore, actively driving this highly efficient NIR-OLED material on a silicon substrate requires a top-emission structure, so the optical design was optimized for NIR-band emission via the microcavity effect. The optimum structure for high-efficiency NIR emission was adopted by examining the structure around the emission layer and the encapsulation process. By integrating this high-efficiency NIR-OLED device with a CMOS-based high-definition backplane formed on a silicon substrate, a high-efficiency NIR-OLED microdisplay with a pixel pitch of 7.8 μm, a maximum external quantum efficiency of approximately 1%, and an emission wavelength of over 900 nm can potentially be realized. Through this study, it was confirmed that high-efficiency NIR light emission with fine pixels is possible in principle, and it is expected to contribute to power saving and miniaturization as a new sensing light source applicable to new value creation.
    11788-22
    Author(s): Nikolay Primerov, Jose Ojeda, Stefan Gloor, Nicolai Matuschek, Marco Rossetti, Antonino Castiglia, Marco Malinverni, Marcus Duelk, Christian Vélez, EXALOS AG (Switzerland)
    On demand
    We demonstrate a miniaturized, full-color RGB light source module for near-to-eye display systems, incorporating three semiconductor laser diodes (LDs) that are integrated on a free-space micro-optical bench together with collimation optics and wavelength filters. The ultra-compact package has a footprint of 4.15 mm x 4.4 mm with an optical height of 2.9 mm (volume = 0.053 cm³) and an optical window through which the collimated and collinearly aligned RGB optical beams exit. This source is, to our knowledge, the smallest and most lightweight RGB LD module with a collimated beam output.
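A quick arithmetic check of the quoted module volume, using the footprint and optical height given in the abstract:

```python
# Footprint 4.15 mm x 4.4 mm, optical height 2.9 mm -> volume in cm^3.
volume_mm3 = 4.15 * 4.4 * 2.9
volume_cm3 = volume_mm3 / 1000.0
print(round(volume_cm3, 3))  # 0.053, matching the quoted 0.053 cm^3
```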
    11788-23
    Author(s): Peter Andras Kara, Budapest Univ. of Technology and Economics (Hungary); Attila Barsi, Holografika Kft. (Hungary); Roopak Tamboli, Indian Institute of Technology Hyderabad (India); Mary Guindy, Holografika Kft. (Hungary); Maria Martini, Kingston Univ. (United Kingdom); Tibor Balogh, Holografika Kft. (Hungary); Aniko Simon, Sigma Technology (Hungary)
    On demand
    In this paper, we provide a series of recommendations on the viewing distance of light field displays. The displays are separately analyzed within the context of their own use cases, taking into account the key performance indicators of both the apparatus and the visualized content, the various environmental conditions, as well as the relevant use-case-scenario-specific necessities and the professional requirements. The investigated use cases include medical imaging, telepresence, resource exploration, prototype review, training and education, gaming, digital signage, cinematography, cultural heritage exhibition, air traffic control and driver assistance systems.
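One common baseline behind viewing-distance recommendations of this kind is the distance at which a single pixel subtends one arcminute of visual angle (roughly 20/20 acuity). The sketch below is illustrative only, not the paper's methodology, and the 1 mm pixel pitch is a hypothetical value.

```python
import math

def min_viewing_distance_m(pixel_pitch_mm: float, acuity_arcmin: float = 1.0) -> float:
    """Distance at which one pixel subtends the given visual angle.

    Closer than this, individual pixels become resolvable to a viewer with
    the stated acuity (1 arcmin corresponds roughly to 20/20 vision).
    """
    theta = math.radians(acuity_arcmin / 60.0)
    return (pixel_pitch_mm / 1000.0) / math.tan(theta)

print(round(min_viewing_distance_m(1.0), 2))  # 3.44 m for a 1 mm pixel pitch
```

Use-case-specific factors (content type, environment, binocular geometry) then push the recommended distance away from this purely acuity-based floor.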
    11788-24
    Author(s): Tibor Balogh, Attila Barsi, Holografika Kft. (Hungary); Peter Andras Kara, Budapest Univ. of Technology and Economics (Hungary); Mary Guindy, Holografika Kft. (Hungary); Aniko Simon, Sigma Technology (Hungary); Zsolt Nagy, Holografika Kft. (Hungary)
    On demand
    In this paper, we introduce the implementation of a 3D light field LED wall prototype, focusing on the challenges in designing the optical elements and describing the necessary software components. For the optical design, we aimed at minimizing optical aberration and maximizing heat tolerance. The software system was built to be run on GPUs and to be flexible enough to handle various configurations of display geometry and electronics. We have created a pipeline of two distinct phases. The first phase is rendering the camera rays, and the second phase enables transmission in the correct byte order.
    Session 6: Switchable, Tunable and Digitally Reconfigurable Optics

    Presentations scheduled in this session will be live-streamed on Tuesday 22 June, 16:40 to 17:20 hrs CEST; see the Live Link above.
    11788-25
    Author(s): Yoshitomo Isomae, Nariyasu Sugawara, Sony Corp. (Japan); Nobuo Iwasaki, Tomoaki Honda, Sony Group Corp. (Japan); Koichi Amari, Sony Corp. (Japan)
    On demand
    A comprehensive overview of design requirements and issues in the development of phase-only spatial light modulators (SLMs) based on liquid crystal on silicon (LCOS) technologies is presented. Phase-only SLMs enable many applications, such as holographic three-dimensional (3D) displays, head-up displays for automobiles, laser processing, 3D printers, optical communication, and optical computing. Our LCOS panel achieves high image quality, represented by a world-class high contrast ratio. In addition, our LCOS has high reflectivity, high-definition pixels, and high durability against high-power light. In a phase-only SLM, high reflectivity means high light-use efficiency, high-definition pixels achieve a wide diffraction angle, and high durability against light enables the use of high-power lasers. In this presentation, we will report the development of a phase-only SLM based on LCOS technologies and promising candidate applications for it.
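The link between high-definition pixels and wide diffraction angle mentioned above follows the standard grating relation for a pixelated SLM: the finest displayable grating has a period of two pixels. The wavelength and pitch below are hypothetical example values, not the panel's actual specifications.

```python
import math

def max_diffraction_angle_deg(wavelength_nm: float, pixel_pitch_um: float) -> float:
    """First-order deflection half-angle of a pixelated phase-only SLM.

    The finest displayable (Nyquist) grating has a period of two pixels,
    giving theta = arcsin(lambda / (2 * p)).
    """
    lam = wavelength_nm * 1e-9
    p = pixel_pitch_um * 1e-6
    return math.degrees(math.asin(lam / (2.0 * p)))

# Hypothetical: green light on a 3.74 um pitch LCOS panel.
print(round(max_diffraction_angle_deg(532.0, 3.74), 2))  # 4.08 deg
```

Halving the pixel pitch roughly doubles this angle, which is why high-definition pixels directly widen the usable field of a holographic display.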
    11788-26
    Author(s): Ainars Ozols, Edmunds Zutis, EuroLCDs, SIA (Latvia); Roberts Zabels, Elza Linina, Lightspace Technologies, SIA (Latvia); Kriss Osmanis, Lightspace Technologies (Latvia); Ilmars Osmanis, Lightspace Technologies, SIA (Latvia)
    On demand
    In this work we provide a comprehensive electro-optical characterization of switchable liquid-crystal diffuser elements in relation to their physical parameters (cell spacing, size, functional layers). Measured parameters include transmission spectra, haze values, effective viewing angle, response time, and others. Essential image-quality metrics are also evaluated. The results are discussed in terms of optimization for near-to-eye and volumetric display architectures under different conditions or modes of application.
    11788-27
    Author(s): Ahmed B. Ayoub, Demetri Psaltis, Ecole Polytechnique Fédérale de Lausanne (Switzerland)
    On demand
    11788-29
    Author(s): Xianlin Song, Nanchang Univ. (China)
    On demand
    We designed a multi-focus acoustic lens to expand the detection range of photoacoustic signals and thereby achieve large depth-of-field photoacoustic detection. We used COMSOL to simulate the ultrasonic field transmitted by the designed multi-focus acoustic lens and compared it with a single-focus acoustic lens. The results show that the depth of field of the multi-focus acoustic lens is about 3.4 times that of the conventional single-focus acoustic lens. The depth of field can be further adjusted by changing the number of foci of the multi-focus lens.
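The gain from multiple foci can be pictured as the union of the individual depth-of-field intervals along the acoustic axis. The sketch below is purely illustrative; the interval values are hypothetical and not the simulated COMSOL results.

```python
def merged_coverage(intervals):
    """Total axial extent (mm) covered by the union of depth-of-field intervals."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:
            # Overlapping zones fuse into one continuous in-focus region.
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return sum(hi - lo for lo, hi in merged)

# Hypothetical foci: one focus vs. three staggered foci with overlapping zones.
single = merged_coverage([(9.0, 11.0)])
multi = merged_coverage([(8.0, 10.0), (9.5, 11.5), (11.0, 13.0)])
print(single, multi)  # 2.0 5.0
```

Staggering the focal depths so that adjacent zones overlap yields one continuous, extended in-focus region, which is the mechanism behind the reported depth-of-field increase.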
    Conference Chair
    Microsoft Corp. (United States)
    Conference Chair
    Sony Corp. (Japan)
    Program Committee
    Holografika Kft. (Hungary)
    Program Committee
    Ctr. Suisse d'Electronique et de Microtechnique SA (Switzerland)
    Program Committee
    Arie den Boef
    ASML Netherlands B.V. (Netherlands)
    Program Committee
    Harvard School of Engineering and Applied Sciences (United States)
    Program Committee
    Northwestern Univ. (United States)
    Program Committee
    Hochschule Offenburg (Germany)
    Program Committee
    HOLOEYE Photonics AG (Germany)
    Program Committee
    Utsunomiya Univ. (Japan)
    Program Committee
    Bosch Sensortec GmbH (Germany)
    Program Committee
    Ecole Polytechnique Fédérale de Lausanne (Switzerland)
    Program Committee
    College of Optical Sciences, The Univ. of Arizona (United States)
    Program Committee
    Fu-Chung Huang
    nVIDIA Corp. (United States)
    Program Committee
    Univ. of Connecticut (United States)
    Program Committee
    Sabina Jeschke
    RWTH Aachen Univ. (Germany)
    Program Committee
    Norbert Kerwien
    Carl Zeiss AG (Germany)
    Program Committee
    Hiroki Kikuchi
    Sony Corp. (Japan)
    Program Committee
    Microsoft Corp. (United States)
    Program Committee
    Douglas R. Lanman
    Facebook Technologies, LLC (United States)
    Program Committee
    Seoul National Univ. (Korea, Republic of)
    Program Committee
    Limbak 4PI S.L. (Spain)
    Program Committee
    Lightspace Technologies, SIA (Latvia)
    Program Committee
    Silvania F. Pereira
    Technische Univ. Delft (Netherlands)
    Program Committee
    Univ. du Maine (France)
    Program Committee
    Virginia Polytechnic Institute and State Univ. (United States)
    Program Committee
    Ecole Polytechnique Fédérale de Lausanne (Switzerland)
    Program Committee
    Medizinische Univ. Innsbruck (Austria)
    Program Committee
    The Ctr. for Freeform Optics (United States)
    Program Committee
    Vrije Univ. Brussel (Belgium)
    Program Committee
    SCHOTT AG (Germany)
    Program Committee
    Adlens Ltd. (United Kingdom)
    Program Committee
    SeeReal Technologies GmbH (Germany)
    Program Committee
    SUSS MicroOptics SA (Switzerland)
    Program Committee
    Stanford Univ. (United States)
    Program Committee
    Goertec Electronics, Inc. (United States)
    Program Committee
    LightTrans International UG (Germany)