Proceedings Volume 7001

Photonics in Multimedia II

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 May 2008
Contents: 6 Sessions, 21 Papers, 0 Presentations
Conference: SPIE Photonics Europe 2008
Volume Number: 7001

Table of Contents

  • Front Matter: Volume 7001
  • 3D and Immersive Displays
  • Camera Sensors and Signal Processing
  • Laser and LED Projection
  • Applications and Optical Subsystems
  • Poster Session
Front Matter: Volume 7001
Front Matter: Volume 7001
This PDF file contains the front matter associated with SPIE Proceedings Volume 7001, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
3D and Immersive Displays
Optical characterization and measurements of autostereoscopic 3D displays
3D, or autostereoscopic, display technologies offer attractive ways to enrich the multimedia experience. However, characterizing and comparing 3D displays has been challenging, because consistent measurement methods have lacked agreed definitions and displays with similar specifications may appear quite different. We have previously investigated how the optical properties of autostereoscopic (3D) displays can be measured objectively and which main characteristics define the perceived image quality. In this paper the discussion is extended to cover viewing freedom (VF), and the definition of the optimum viewing distance (OVD) is elaborated. VF is the volume within which the eyes must be located to see an acceptable 3D image. The characteristics limiting the VF volume are proposed to be 3D crosstalk, luminance difference and color difference. Since 3D crosstalk can be presumed to dominate the quality of the end-user experience, and in our approach forms the basis for calculating the other optical parameters, the reliability of the 3D crosstalk measurements is investigated and its effect on the derived VF definition is evaluated. We have performed comparative 3D crosstalk measurements with different measurement device apertures, and we report the effect of different measurement geometries on the results for actual 3D displays.
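As a point of reference for the measurements discussed above, the sketch below shows one common way to express 3D crosstalk from luminance readings taken at one eye position; the formula and the numbers are illustrative and are not taken from the paper.

    # Hedged sketch: 3D crosstalk at one eye position from luminance measurements
    # of standard test patterns (illustrative definition, not the paper's exact one).
    # L_wb: intended view white, unintended view black
    # L_bw: intended view black, unintended view white
    # L_bb: both views black (display black level)
    def crosstalk_percent(L_wb: float, L_bw: float, L_bb: float) -> float:
        """Leakage from the unintended view relative to the intended view, in percent."""
        return 100.0 * (L_bw - L_bb) / (L_wb - L_bb)

    # Example: 1.2 cd/m^2 of leakage over a 0.05 cd/m^2 black level, against 180 cd/m^2
    print(crosstalk_percent(180.0, 1.2, 0.05))  # ~0.64 %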
Demonstration of a polarization-based full-color stereoscopic projection display using liquid crystal on silicon panels and light emitting diodes
We present a single optical system that can simultaneously generate two linearly polarized full-color images with orthogonal states of polarization. The system architecture of the optical core is discussed. Four liquid crystal on silicon panels are used to modulate the two images. We also discuss the design of the illumination system with light emitting diodes as light sources. The contrast of both images is simulated. A proof-of-concept demonstrator is built and experimentally characterized; it is capable of two-dimensional and three-dimensional image display. Three-dimensional images can be perceived, independent of the tilt angle of the viewer's head, by wearing specific polarization-sensitive eyeglasses and placing a quarter-wave retarder at the projector's output. Key component specifications for improving the performance of the demonstrator setup are reviewed.
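The following Jones-calculus sketch illustrates, in general terms, why a quarter-wave retarder at the projector output can make the left/right separation insensitive to head tilt; it is a generic illustration, not the authors' design data.

    import numpy as np

    # Hedged illustration: orthogonal linear polarizations passed through a
    # quarter-wave plate at 45 deg become opposite circular polarizations, whose
    # handedness does not change when the analyzing eyeglass is rotated.
    def quarter_wave_plate(theta):
        """Jones matrix of a quarter-wave plate with its fast axis at angle theta (rad)."""
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        retarder = np.array([[1, 0], [0, 1j]])   # quarter-wave retardance
        return rot @ retarder @ rot.T

    left_image = np.array([1, 0])    # horizontally polarized channel (assumed)
    right_image = np.array([0, 1])   # vertically polarized channel (assumed)
    qwp = quarter_wave_plate(np.pi / 4)
    print(np.round(qwp @ left_image, 3))    # [0.5+0.5j, 0.5-0.5j]: circular, one handedness
    print(np.round(qwp @ right_image, 3))   # [0.5-0.5j, 0.5+0.5j]: opposite handedness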
Development of monocular and binocular multi-focus 3D display systems using LEDs
Sung-Kyu Kim, Dong-Wook Kim, Jung-Young Son, et al.
Multi-focus 3D display systems are developed and their ability to satisfy eye accommodation is tested. Here, multi-focus refers to providing a monocular depth cue at various depth levels. By achieving this multi-focus function, we developed 3D display systems for one eye and for both eyes that can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and the binocular convergence 3D effect of the systems are tested; evidence that accommodation is satisfied and experimental results on binocular 3D fusion are presented for the proposed 3D display systems.
Diffractive exit-pupil expander with a large field of view
A concept of asymmetric exit-pupil expansion for head-worn virtual displays is introduced. An expression for the achievable field of view (FOV) of a stack of asymmetric pupil expanders is derived. Comparison with the symmetric case indicates the possibility of doubling the horizontal FOV for given material parameters and spectral bandwidth. Using the parameter values of readily available plastics, a horizontal field of view approaching the viewing conditions of typical desktop monitors should be possible. Moreover, the illumination uniformity can be improved through optimized positioning of the out-coupling diffraction gratings.
Compact near-to-eye display with integrated gaze tracker
Toni Järvenpää, Viljakaisa Aaltonen
A Near-to-Eye Display (NED) offers a big-screen experience to the user anywhere, anytime: it provides a way to perceive an image larger than the physical device itself. Commercially available NEDs tend to be quite bulky and uncomfortable to wear. However, by using very thin plastic light guides with diffractive structures on their surfaces, many of the known deficiencies can be notably reduced. These Exit Pupil Expander (EPE) light guides enable a thin, light, user-friendly and high-performing see-through NED, which we have demonstrated. To enable efficient interaction with the displayed UI, we have also integrated a video-based gaze tracker into the NED. The narrow beam of an infrared light source is divided and expanded inside the same EPEs to produce wide collimated beams out of the EPE towards the eyes. A miniature video camera images the cornea, and the eye gaze direction is accurately calculated by locating the pupil and the glints of the infrared beams. After a simple and robust per-user calibration, the data from the highly integrated gaze tracker reflects the user's focus point in the displayed image, which can be used as an input device for the NED system. Realizable applications range from eye typing to gaming, and far beyond.
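As a generic illustration of the per-user calibration step mentioned above (not necessarily the authors' method), a common pupil/glint approach fits a low-order polynomial that maps the pupil-minus-glint vector in camera coordinates to a point in the displayed image:

    import numpy as np

    # Hedged sketch of a typical pupil/glint gaze calibration: a second-order
    # polynomial mapping fitted by least squares from a few calibration fixations.
    def poly_features(vecs):
        x, y = vecs[:, 0], vecs[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_gaze_mapping(pupil_glint_vecs, screen_points):
        """Per-user least-squares fit of the calibration mapping."""
        A = poly_features(np.asarray(pupil_glint_vecs, dtype=float))
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, dtype=float), rcond=None)
        return coeffs                                    # shape (6, 2)

    def gaze_point(coeffs, pupil_glint_vec):
        """Map one pupil-minus-glint vector to a point in the displayed image."""
        return (poly_features(np.asarray([pupil_glint_vec], dtype=float)) @ coeffs)[0]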
Synfograms: a new generation of holographic applications
Odile Meulien Öhlmann, Dietmar Öhlmann D.D.S., Stanislovas J. Zacharovas
The new synthetic four-dimensional printing technique (Syn4D), the Synfogram, introduces time (animation) into the spatial configuration of imprinted three-dimensional shapes. While lenticular solutions offer 2 to 9 stereoscopic images, Syn4D offers large-format, full-color, true 3D visualization printing of 300 to 2500 frames imprinted as holographic dots. Over the past two years, Syn4D high-resolution displays have proved extremely effective for museum presentations, engineering design, automobile prototyping and virtual advertising presentations, as well as for portrait and fashion applications. The main advantage of Syn4D is that it offers a very easy way to use a variety of digital media, including most 3D modelling programs, 3D scanning systems, video sequences, digital photography and tomography, as well as the Syn4D camera track system for live recording of spatial scenes changing in time. The use of a digital holographic printer in conjunction with Syn4D image acquisition and processing devices separates printing from image creation, making four-dimensional printing similar to conventional digital photography, where imaging and printing are usually separated in space and time. Besides making content easy to prepare, Syn4D has also developed new display and lighting solutions for trade shows, museums, POP, merchandising, etc. The introduction of Synfograms is opening new applications for real-life and virtual 4D displays. In this paper we analyse the 3D market, the properties of Synfograms and their specific applications, the problems we have encountered and the solutions we have found, and we discuss customer demand and the need for new product development.
Camera Sensors and Signal Processing
Small pixel development for novel CMOS image sensors
G. Agranov, J. Ladd, T. Gilton, et al.
Modern trends in camera module design for both mobile and DSC applications are driving the race to shrink pixel size and increase pixel array size. At the same time, higher demands on the quality of color images - DSC-like quality for mobile applications - require maintaining a large pixel capacity, quantum efficiency (QE) and sensitivity to preserve color image quality. This becomes extremely difficult as the pixel shrinks. This paper discusses the Common Element Pixel Architecture (CEPA) for image sensors with small pixels, as well as new pixel designs and process changes that have enabled a new generation of image sensors with high-performance 2.2-μm, 1.75-μm and smaller pixels. Advanced image-capture algorithms help to overcome the challenges associated with the limited capacity of small pixels. The paper considers an HDR mode of operation for the small pixel and its effect on image quality. Achieving good color crosstalk performance is one of the big challenges in CMOS image sensors with small pixels. The paper presents results of an experimental study of crosstalk for different pixel sizes, analyzes the effect of crosstalk on color image quality and on the signal-to-noise ratio after color processing, and discusses ways of reducing crosstalk for small pixels.
Inorganic color filters by MOCVD for CMOS imager and colorimetry
Samir Guerroudj, François Roy, Jean-Luc Deschanvres
A set of three thin films transmitting in the red, green and blue wavelength ranges has been demonstrated using the aerosol MOCVD technique. These thin films were elaborated with different organometallic precursors and deposited on glass or fused silica at temperatures in the range of 350°C to 550°C. Physicochemical characterization enabled us to identify the phases responsible for the color. The red filter consists of a thin film of hematite (Fe2O3) with a transmittance peak of 75% at 630 nm. The green thin film is composed of cobalt-doped ZnO with a transmittance peak of 56% at 540 nm. The blue thin film is composed of cobalt-doped Al2O3 with a transmittance peak of 65% at 450 nm. Moreover, the absorbance spectra are discussed in relation to the physicochemical characteristics of the deposited films. The best triplet is then used to evaluate the color reconstruction: using a set of spectral files, a "color toolbox" software package optimizes the 3-by-3 color matrix, the white balance and the offset by the method of least squares.
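The least-squares fit mentioned above can be sketched as follows; the patch data and variable names are illustrative, not the paper's.

    import numpy as np

    # Hedged sketch: fit a 3x3 color matrix plus a per-channel offset by least
    # squares, mapping measured filter/sensor RGB to reference tristimulus values.
    measured = np.random.rand(24, 3)        # e.g. responses for 24 test patches (illustrative)
    target = np.random.rand(24, 3)          # e.g. reference XYZ or sRGB values (illustrative)

    # Augment with a constant column so the same fit also returns the offset term.
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    ccm, offset = coeffs[:3].T, coeffs[3]   # 3x3 color matrix and per-channel offset

    corrected = measured @ ccm.T + offset   # apply the fitted correction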
Characterizing spatial crosstalk effects in small pixel image sensors
The popularity of miniaturized CMOS image sensors in embedded platforms, such as mobile telephones, is driving the move to increasingly small pixel pitches. The resulting pixels suffer from increased sensitivity to microlens misalignment and degraded crosstalk performance as a direct result of their reduced size. This paper presents a novel application of pixel scan techniques to characterize microlens misalignment, the effect of microlens misalignment on crosstalk, and crosstalk performance in general. Pixel scans are performed on 2.2 μm pitch sensors under monochromatic light. A series of scans is taken for each device under test, sweeping the incident light across and beyond the visible spectrum. The captured data are remapped from image space into a pixel space. Analysis of how the scans develop over the course of the spectral sweep provides insight into the primary directional sources of crosstalk. Further processing derives approximations of the pixel spectral responses at various microlens misalignments. The device under test is likely to have its microlens layer misaligned by an unknown amount, which must be corrected for. This misalignment is characterized by identifying common positional offsets between the peaks of in-band channels in the recorded scans. The spectral responses can then be used to estimate the effects of microlens misalignment on colour and crosstalk performance across the imaging array. The techniques detailed in the paper are designed to be run on unmodified product dice and do not require expensive test devices.
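The peak-offset idea described above can be illustrated with a short sketch (illustrative only; the paper's actual processing is not reproduced here):

    import numpy as np

    # Hedged sketch: estimate a common microlens offset from per-channel pixel-space
    # scans by locating each in-band channel's response peak (here, its centroid)
    # and averaging its displacement from the pixel centre.
    def peak_offset(scan_2d):
        """Centroid of a pixel-space scan relative to the pixel centre, in pixel units."""
        scan = np.asarray(scan_2d, dtype=float)
        ys, xs = np.indices(scan.shape)
        total = scan.sum()
        cy, cx = (ys * scan).sum() / total, (xs * scan).sum() / total
        centre = (np.array(scan.shape) - 1) / 2.0
        return cy - centre[0], cx - centre[1]

    def common_misalignment(channel_scans):
        """Average offset over in-band channels, attributed to the microlens layer."""
        offsets = np.array([peak_offset(s) for s in channel_scans])
        return offsets.mean(axis=0)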
Embedded processor extensions for image processing
Mathieu Thevenin, Michel Paindavoine, Laurent Letellier, et al.
The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. The use in mobile phones of applications such as video telephony, matrix code readers and biometrics requires a degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.
Integration and characterization of spin on dielectric materials in image sensor devices
Hai Reznik, Ruth Shima Edelstein, Michal Shach-Caplan, et al.
Continuously increasing performance requirements for CMOS image sensor based digital camera devices demand significant improvement of the optical part of the device as well as improved robustness to camera module assembly. The construction of the optical structures is the key element in improving device efficiency and sensitivity. This is especially true for the small-pixel sensors used in mobile phone applications, where the pitch is reduced to integrate more pixels on the same area of semiconductor surface. Traditionally, the optical stack is based on organic, photoresist-like materials. The introduction of inorganic Spin On Dielectric (SOD) materials opens several new options. Two novel applications of these materials are presented in this paper. In the first, a waveguide is formed in the device back end and filled with a high refractive index SOD (RI = 1.652 at 650 nm) to improve optical performance. The second employs a low refractive index SOD (RI ~ 1.4 at 650 nm) topcoat, which enables easier microlens engineering and optimization and additionally offers mechanical protection of the organic microlenses. The two integration schemes are presented along with SOD material characteristics and processing details.
Laser and LED Projection
Visible lasers for mobile projection
U. Steegmüller, M. Kühnelt, H. Unold, et al.
Visible laser sources are attracting considerable interest as enablers of ultra-small, embedded laser scanning projection devices. We report recent progress in the development of red, green and blue semiconductor-based laser sources. Red and blue are achieved with edge-emitting laser diodes, whereas green uses frequency-doubled, optically pumped semiconductor lasers. Green lasers have turned out to be on the critical path for the technical and commercial success of laser displays: because all current approaches are based on frequency doubling, the green source is the major contributor to cost, size and power consumption. Important parameters such as size, efficiency, output power, beam quality and modulation bandwidth are discussed.
Scanning laser beam displays
Maarten Niesten, Randy Sprague, Josh Miller
A projector with a height of 7 mm has been developed. The projector uses a two-dimensional MEMS scanning mirror, red and blue diode lasers, and a second-harmonic green laser. This projector module is able to display images at WVGA resolution while consuming 1.5 W. Due to the collimated nature of laser beams, the display has a depth of focus that is virtually unlimited. Future MEMS developments will lead to even thinner projection modules. Furthermore, this projection technology enables additional display systems such as head-up displays for vehicles.
Requirements on LEDs in etendue limited light engines
Light engines used in projection systems often set constraints on the design and system application of the LED light source. In these advanced optical systems, the optical extent of the LED light source is limited by the etendue of the imager; the etendue is defined as the product of the emitting area and the solid angle of emission. This paper shows how the LED light source is constrained by the laws of optics and how these limits influence the light source design. To achieve an efficient system design, the variables that must be optimized include the primary optics, the LED package design and the chip technology. The LED light sources best suited to these applications and requirements are also presented and discussed.
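A rough worked example of this etendue constraint is sketched below; the panel size, illumination angle and the Lambertian-source approximation G = πA·sin²θ are illustrative assumptions, not figures from the paper.

    import math

    # Hedged worked example: with the common approximation G = pi * A * sin^2(theta),
    # an LED emitting into a full hemisphere cannot usefully be larger than the
    # imager etendue divided by pi.
    def etendue(area_mm2, half_angle_deg, n=1.0):
        return math.pi * area_mm2 * (n * math.sin(math.radians(half_angle_deg))) ** 2

    imager_G = etendue(area_mm2=8.0 * 6.0, half_angle_deg=12.0)  # assumed 8 mm x 6 mm panel, ~f/2.4 illumination
    led_area_limit = imager_G / math.pi                          # hemispherical (90 deg) emitter
    print(f"imager etendue ~ {imager_G:.1f} mm^2 sr, usable LED area < {led_area_limit:.1f} mm^2")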
Applications and Optical Subsystems
Optical design of a compact illumination system for LED projection displays
In this publication we investigate the optical design of an illumination system with a fly's-eye integrator for LED projection displays. We compare the performance of CPC-like collimators and tapered light pipes with respect to their optical efficiency. We show that tapered light pipes with a lens are more efficient and can also be used to collimate the light of rectangular LED modules. Using these tapered light pipes, we design an illumination system with tilted collimators. This adapted 2F processor makes a more compact illumination system possible.
Optical links in handheld multimedia devices
S. van Geffen, J. Duis, R. Miller
Emerging applications in handheld multimedia devices such as mobile phones, laptop computers, portable video games and digital cameras require increased screen resolutions and are driving higher aggregate bit rates between the host processor and the display(s), enabling services such as mobile video conferencing, video on demand and TV broadcasting. Larger displays and smaller phones require complex mechanical 3D hinge configurations that strive to combine maximum functionality with compact build volumes. Conventional galvanic interconnections such as Micro-Coax and FPC, carrying parallel digital data between the host processor and the display module, may produce Electromagnetic Interference (EMI) and suffer bandwidth limitations caused by small cable size and tight cable bends. To reduce the number of signals through a hinge, the mobile phone industry, organized in the MIPI (Mobile Industry Processor Interface) alliance, is currently defining an electrical interface transmitting serialized digital data at speeds above 1 Gbps. This interface allows for electrical or optical interconnects. Above 1 Gbps, optical links may offer a cost-effective alternative because of their flexibility, increased bandwidth and immunity to EMI. This paper describes the development of optical links for handheld communication devices. A cable assembly based on a special Plastic Optical Fiber (POF), selected for its mechanical durability, is terminated with a small-form-factor molded lens assembly which interfaces between an 850 nm VCSEL transmitter and a receiving device on the printed circuit board of the display module. A statistical approach based on a Lean Design For Six Sigma (LDFSS) roadmap for new product development seeks an optimum link definition that is robust and low cost while meeting the power consumption requirements appropriate for battery-operated systems.
Optical link utilizing polymer optical waveguides: application in multimedia device
To realize a high-speed, slim data link in a multimedia device, we have developed a compact and highly flexible optical link module utilizing a polymer optical waveguide. With this module, 1.25 Gbps high-speed data transmission has been successfully demonstrated. The module has a transmitter and a receiver, compactly packaged at each end of a film optical waveguide to provide easy electrical connection to the board. This electrical connection configuration achieves a more compact connection to the electrical circuit board than the conventional configuration based on an optical connector. For the flexible optical link module, a highly bendable polymer film optical waveguide has been developed using a unique replication technology. The propagation loss of the optical waveguide is 0.07 dB/cm at 850 nm, and the bending loss is below 0.2 dB after one million cycles at a bending radius of 1 mm. This performance makes a practical board-to-board data link through the hinge of a multimedia device feasible.
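Using the loss figures quoted above, a back-of-the-envelope optical budget might look like the sketch below; the waveguide length and the coupling losses are assumptions for illustration, not values from the paper.

    # Hedged link-budget arithmetic (illustrative assumptions marked below).
    length_cm = 10.0                        # assumed waveguide length through the hinge
    propagation_loss_db = 0.07 * length_cm  # 0.07 dB/cm quoted above
    bending_loss_db = 0.2                   # worst case after 1 million 1 mm-radius bend cycles
    coupling_loss_db = 2 * 1.0              # assumed 1 dB per coupling interface
    total_loss_db = propagation_loss_db + bending_loss_db + coupling_loss_db
    print(f"estimated optical loss budget used: {total_loss_db:.1f} dB")  # ~2.9 dB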
Poster Session
Fractal dimension and neural network based image segmentation technique
QiWei Lin, Feng Gui
A new image segmentation scheme based on combining fractal-dimension features with self-organizing neural network clustering is presented in this paper. Feature extraction is a very important step in image segmentation; therefore, in order to extract more effective fractal features from images, especially remote sensing images, a new image feature extraction and segmentation method was developed. The method extracts fractal features from a series of images obtained by convolving the original image with various masks that enhance its edge, line, ripple and spot features. A five-dimensional feature vector is then obtained, in which each element is the fractal dimension of the original image or of one of the four convolved images. Finally, the image is segmented by combining the nearest-neighbor classifier with the self-organizing neural network. Applying the presented algorithm to several practical remote sensing images, the experimental results show that the proposed method improves the feature description ability and segments the images accurately.
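One element of such a feature vector can be sketched as below with a simple differential box-counting estimator of a grey-level image's fractal dimension; the estimator is a generic one and not necessarily the one used in the paper.

    import numpy as np

    # Hedged sketch: differential box-counting estimate of the fractal dimension
    # of a grey-level image (one element of the five-dimensional feature vector).
    def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
        img = np.asarray(img, dtype=float)
        counts = []
        for s in sizes:
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            # number of grey-level boxes of side s needed to cover each block
            span = blocks.max(axis=(1, 3)) - blocks.min(axis=(1, 3))
            counts.append((np.ceil(span / s) + 1).sum())
        # slope of log(count) against log(1/size) approximates the fractal dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope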
A novel image fusion algorithm based on wavelet transforms
QiWei Lin, Feng Gui
A novel image fusion algorithm based on the wavelet transform and an edge-keeping method is proposed in this paper. After the DWT, each image is decomposed into different frequency bands. The spatial frequency and the contrast within the low-frequency sub-band are measured to determine the best choice of the low-frequency component of the fused image. For the high-frequency sub-bands, the coefficients with the maximal absolute gradient values are selected. The experimental results show that the proposed algorithm preserves most of the useful information in the original images, and that the clarity and contrast of the fused image are improved compared with the original images.
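A minimal sketch of this kind of DWT-domain fusion is given below; the selection rules are simplified stand-ins (average for the low band, maximum absolute coefficient for the high bands) rather than the paper's spatial-frequency, contrast and gradient measures.

    import numpy as np
    import pywt

    # Hedged sketch of wavelet-domain fusion with simplified selection rules.
    def fuse_dwt(img_a, img_b, wavelet="db2"):
        cA_a, highs_a = pywt.dwt2(np.asarray(img_a, dtype=float), wavelet)
        cA_b, highs_b = pywt.dwt2(np.asarray(img_b, dtype=float), wavelet)

        # Low-frequency sub-band: simple average as a stand-in for the
        # spatial-frequency/contrast based choice described in the abstract.
        cA = 0.5 * (cA_a + cA_b)

        # High-frequency sub-bands: keep the coefficient with the larger magnitude,
        # standing in for the maximal-absolute-gradient selection.
        highs = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                      for ha, hb in zip(highs_a, highs_b))
        return pywt.idwt2((cA, highs), wavelet)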
Design and fabrication of quartz-based micro prism array of dual-view display by using reactive ion etching
In this paper, a quartz-based micro-prism array structure is proposed as the "parallax barrier"; the device is designed to fit a 2.2-inch LCD panel, the most popular size for mobile-phone TV. The optical simulation software LightTools was used to verify that the designed structure works. The parameters considered include the viewing angle, the viewing distance, the vertex angle of the prisms, the refractive index of quartz (1.46) and the sub-pixel width (66 μm). One million rays emitted from the dual-view display panel were simulated; as a result, the designed quartz-based micro-prism array structure successfully separates the images from odd and even sub-pixels towards two different viewers, and the viewing angle is 80°, as required. The key to guiding the red, green and blue light from the different sub-pixels precisely into the same direction is the arrangement of the vertex angles of the corresponding prisms (R: 47.1°, G: 47.2°, B: 47.4°). Three lithography steps and reactive ion etching fabricate the required angles and sizes precisely. The left and right images generated by the designed dual-view display are quite clear, show no color difference, and agree well with the simulation.
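As a rough, hedged consistency check of the quoted numbers (not the authors' ray trace), a single-surface Snell's-law estimate for an assumed right-angle prism facet already yields a view separation near 80°:

    import math

    # Hedged estimate: a ray leaving the panel along the display normal meets the
    # slanted facet of a right-angle prism at an internal incidence angle of
    # (90 deg - vertex angle); Snell's law then gives its deviation from the normal.
    def exit_angle_deg(vertex_angle_deg, n=1.46):
        theta_in = math.radians(90.0 - vertex_angle_deg)   # internal incidence (assumed geometry)
        theta_out = math.asin(n * math.sin(theta_in))      # refraction into air
        return math.degrees(theta_out - theta_in)          # deviation from the display normal

    print(exit_angle_deg(47.1))  # ~41 deg per side, i.e. a total view separation near 80 deg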