Proceedings Volume 3640

Three-Dimensional Image Capture and Applications II

Joseph H. Nurre, Brian D. Corner
View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 8 March 1999
Contents: 5 Sessions, 24 Papers, 0 Presentations
Conference: Electronic Imaging '99 (1999)
Volume Number: 3640

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Three-Dimensional Image Capture Hardware
  • Human Body Imaging
  • Surface Fitting and Reconstruction
  • Real-Time Applications for 3D Imaging
  • Poster Session
Three-Dimensional Image Capture Hardware
CyberModeler: a compact 3D scanner based on monoscopic camera
Yukinori Matsumoto, Kouta Fujimura, Toru Kitamura
A 3D scanner based on a monoscopic camera is presented in this paper. The scanner, CyberModeler, is capable of reconstructing 3D shapes as well as obtaining texture information of a target object from multiple camera views. The CyberModeler is a compact, easy-to-use, and inexpensive system because it does not require any special light source such as a laser. Several new techniques are featured in the CyberModeler: voting-based Shape-from-Silhouette, calibration using the Hough method, and texture acquisition using an energy minimization technique. These lead to robust use in real-world environments as well as wide applicability to viewing applications. Our experiments showed that not only was the quality of the generated models high enough for viewing, but the modeling speed was also at an acceptable practical level.
Real-time active range finder using light intensity modulation
Takeo Azuma, Kenya Uomori, Atsushi Morimura
We propose a new real-time active range finder system, which operates at video frame rate. Our system consists of a laser line pattern marker, a rotating mirror, a camera with an optical narrow bandpass filter, and a pipelined image signal processor. The vertical laser line pattern is horizontally scanned by the video-synchronized rotating mirror. The pattern is modulated by alternating a monotonically decreasing intensity function with a monotonically increasing one. The bandpass filter selectively transmits the laser light to reduce the disturbance from background light. Depth images are obtained using the intensity ratio of successive video field images, the coordinates of each pixel, and the baseline length. The intensity ratio specifies the plane containing the projected line pattern, while the coordinates of each pixel specify a line that passes through the pixel position on the CCD and the center of the lens. Depth is calculated from the intersection of the specified plane and line. The image signal processor performs the above calculation using lookup tables (LUTs) within a video frame. In this paper, we evaluate the measurable range, precision, and color properties of our system. Experimental results show that a measurable range of two meters is obtained with a laser power of 50 mW, the standard deviations of the depth images with 5-by-5 median filtering are about 1 percent of the measured depth, and the object color has little effect on the measured depth except when the reflectance of the object is very small.
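To make the plane-line triangulation concrete, here is a minimal sketch (not the authors' implementation): the ratio of two successive field images identifies the scan angle of the projected laser plane, the pixel position defines a viewing ray, and depth is their intersection. The function name, the assumed linear ratio-to-angle mapping, and the geometry conventions are illustrative assumptions.

```python
import numpy as np

def depth_from_intensity_ratio(I_inc, I_dec, u, focal_px, baseline,
                               theta_min, theta_max):
    """I_inc, I_dec: pixel intensities under the increasing/decreasing ramps.
    u: horizontal pixel coordinate relative to the optical centre (pixels).
    focal_px: focal length in pixels; baseline: projector-camera distance."""
    ratio = I_inc / (I_inc + I_dec + 1e-9)                # invariant to reflectance
    theta = theta_min + ratio * (theta_max - theta_min)   # assumed linear LUT
    phi = np.arctan2(u, focal_px)                         # ray angle from optical axis
    # Camera at the origin, projector offset by `baseline` along x; the light
    # plane leaves the projector at angle theta from the baseline. Intersecting
    # that plane with the camera ray gives the depth:
    z = baseline * np.tan(theta) / (1.0 + np.tan(phi) * np.tan(theta))
    return z
```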
3D profiling by optical demodulation with an image intensifier
Heinrich A. Hoefler, Volker Jetter, Elmar E. Wagner
The acquirement of 3D-information is currently achieved by stereophotography, line and grid projection techniques or by laser scanning in combination with a fast distance measuring device. This paper describes a new principle using a single CCD-camera with an optical demodulator in front of it. The scene is illuminated by a high frequency intensity modulated light source. Demodulating the backscattered light by a gateable image intensifier yields a grey level image which directly corresponds to the object's form. Intensity variations within the image due to inhomogeneous object reflectivity or illumination intensity are overcome by a phase shift technology. Possible applications for such a 3D- camera industrial automation, medical and industrial endoscopic analyses, robotics or 3D-digitalization.
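A minimal sketch of the phase-shift idea behind such continuous-wave demodulation (not the authors' system): assuming sinusoidal modulation and four gated exposures shifted by 90 degrees each, reflectivity and illumination factors cancel in the arctangent ratio and the recovered phase is proportional to range. The exact shift scheme used in the paper is not specified here.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_phase_steps(I0, I90, I180, I270, f_mod):
    """Each Ik is the demodulated grey-level image for one phase offset."""
    phase = np.arctan2(I270 - I90, I0 - I180)   # albedo and ambient terms cancel
    phase = np.mod(phase, 2 * np.pi)            # wrap to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)      # unambiguous over half a modulation wavelength
```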
Multispectral pattern projection range finder
Tatsuo Sato
Although many types of range finders have been developed recently, with most methods it is difficult to build practical equipment for measuring small objects. We therefore developed a new range finder based on a color pattern projection method to measure small objects such as the ball bumps of a BGA package. The range finder consists of an LCD projector, a color CCD camera, and a PC with a video input/output interface. The projection pattern consists of color stripes whose color changes continuously, like a rainbow. Range is calculated from the distortion of the color stripes in the image obtained by the CCD camera. In general, with a color pattern projection method, it is known that the color of the object surface may give erroneous measurement results. We developed a method to cancel this error. Three kinds of projection patterns are prepared and projected in turn; the three patterns are phase shifted by 1/3 cycle from each other. Three images are taken by the CCD camera, one per pattern. The images taken while the three phase-shifted patterns are projected are shifted back in phase and synthesized into one image by pixel-wise addition. In our prototype equipment, the whole measuring area is 32 mm by 43 mm by 22 mm, where the height corresponds to the range. In this case, although a projection pattern has only one cycle, we can use repetitive projection patterns to improve the range accuracy; the height of the measuring area then decreases according to the repetition count. To evaluate the range accuracy, we measured ceramic gauge blocks whose thicknesses are known. The worst-case range error was about 1 percent of full scale using patterns repeated 5.5 times, at which point the equipment had a measuring area of 4 mm in height.
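As an aside, a standard three-step phase-shift recovery illustrates why three patterns shifted by 1/3 cycle can cancel the effect of surface color: the albedo and ambient terms drop out of the arctangent ratio. This sketch shows that textbook formulation, not the pixel-wise addition synthesis described in the paper.

```python
import numpy as np

def stripe_phase(I1, I2, I3):
    """I1, I2, I3: images under patterns shifted by -1/3, 0, +1/3 cycle.
    Returns the stripe phase independently of surface reflectance."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```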
Color digitizing and modeling of free-form 3D objects
Timothee Jost, Christian L. Schutz, Heinz Huegli
This paper deals with the problem of capturing the color information of physical 3D objects using a class of digitizers that provide both color and range data, such as range finders based on structured lighting. The problem typically arises in modeling procedures that aim at building a realistic virtual 3D model. The color data delivered by such scanners basically express the reflected color intensity of the object and not its intrinsic color. A consequence is the presence of strong color discontinuities on the reconstructed model, which result from acquisitions performed under different illumination conditions. The paper considers three approaches to remove these discontinuities and obtain the desired intrinsic color data. The first converts the reflected color intensity into the intrinsic color by computation, using a reflectance model and known acquisition parameters. The use of simple reflectance models is considered: Lambert and Phong, for perfectly diffuse and for mixed diffuse and specular reflection respectively. The second approach is a hardware solution. It aims at using a nearly constant, diffuse, and omnidirectional illumination over the visible parts of the object. A third method combines the first, computational approach with the use of several known illumination sources. An experimental comparison of these three approaches is finally presented.
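A minimal sketch of the first (computational) approach under a Lambertian model: the intrinsic color is the measured reflected color divided by the geometric shading term n.l, given an assumed known light direction and per-vertex normals derived from the range data. Variable names are illustrative, and the Phong (specular) case is omitted.

```python
import numpy as np

def intrinsic_color_lambert(reflected_rgb, normals, light_dir, eps=1e-3):
    """reflected_rgb: (N, 3) measured colors; normals: (N, 3) unit normals;
    light_dir: (3,) unit vector pointing toward the light source."""
    shading = np.clip(normals @ light_dir, eps, None)   # n.l, clamped away from zero
    return reflected_rgb / shading[:, None]             # per-channel intrinsic albedo
```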
Robust cooperation concept for low-level vision modules
Holger Lange, Jean-Christophe Culioli
An innovative approach in modern low-level vision systems is the integration of modules in order to overcome the ill-posedness of each individual module. We present a Robust Cooperation Concept for large heterogeneous systems based on a decomposition and coordination scheme derived from Uzawa's algorithm. This cooperation concept provides an open system, maintains the modularity of the system, and guarantees the most suitable resolution method for each module by allowing optimal resolution parameters to be used locally. The cooperation increases the possibilities for calculating physical properties and improves the calculation results of the modules. The cooperation system is less sensitive to the choice of parameters and is robust with respect to noise and poor experimental conditions. We validate this concept through the realization of the cooperation between two modules, (stereo) photometry and stereo vision. Results are shown for real imagery.
Moly: a prototype handheld 3D digitizer with diffraction optics
Thomas Ditto, Douglas A. Lyon
A working hand-held 3D digitizer, Moly, is demonstrated. It is distinguished by a magnification feature which is made possible by special diffraction optics that minimize the perspective effects typical of conventional triangulation. As a result this innovative device illuminates its target with a collimated laser projector that produces a sheet of light of uniform height at all working distances. The diffraction optics afford improved depth-of-field compared to triangulation scanners of equivalent resolution. This prototype also employs dual magnetic wave detectors to facilitate freedom of movement for both the digitizer and the subject. The instrument was designed primarily to digitize human faces and figures for applications in art and medicine.
Examining laser triangulation system performance using a software simulation
Jeffery S. Collier, Joseph H. Nurre
The inventions of the laser diode, the microcomputer, and the CCD camera have made possible the new technology of triangulation measurement systems. Current applications range from scanning the insides of old pipes to a vision tool for the blind. As such, it is important that techniques be developed to minimize the error in laser triangulation measurement systems. Due to the nonlinear nature of the problem and the fact that error depends on an ever-changing and vast number of subjects, a computer simulation was written to examine the trade-off between occlusion and data quality. A computer simulation allows for a large amount of flexibility: the software gives the user the ability to calculate the error for a given triangulation configuration without having to build and test the actual hardware. This paper describes and demonstrates the use of the simulator. A virtually limitless range of laser triangulation systems can be modeled, and most subjects represented as CAD files can be used in the computer simulation.
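A hedged sketch of the occlusion-versus-accuracy trade-off such a simulator explores: for a simple triangulation geometry, the depth error caused by a one-pixel detector error shrinks as the triangulation angle (and hence the baseline) grows, while the risk of occlusion grows with that same angle. The geometry, numbers, and names below are illustrative, not taken from the paper.

```python
import numpy as np

def depth_error_per_pixel(z, baseline, focal_px):
    """Approximate depth error for a 1-pixel error in the imaged spot position,
    from z = f*b/d (disparity d in pixels), so |dz/dd| = z^2 / (f*b)."""
    return z**2 / (focal_px * baseline)

for angle_deg in (10, 20, 40):
    z, f = 0.5, 2000.0                      # 0.5 m stand-off, 2000 px focal length
    b = z * np.tan(np.radians(angle_deg))   # baseline implied by the triangulation angle
    print(angle_deg, "deg ->", 1e6 * depth_error_per_pixel(z, b, f), "um per pixel")
```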
Human Body Imaging
Generating animated sequences from 3D whole-body scans
Roy P. Pargas, Murtuza Chhatriwala, Daniel Mulfinger, et al.
3D images of human subjects are, today, easily obtained using 3D whole-body scanners. 3D human images can provide static information about the physical characteristics of a person, information valuable to professionals such as clothing designers, anthropometrists, medical doctors, physical therapists, athletic trainers, and sculptors. Can 3D human images be used to provide more than static physical information? The research described in this paper attempts to answer this question by explaining a way that animated sequences may be generated from a single 3D scan. The process starts by subdividing the human image into segments and mapping the segments to those of a human model defined in a human-motion simulation package. The simulation software provides information used to display movement of the human image. Snapshots of the movement are captured and assembled to create an animated sequence. All of the postures and motion of the human images come from a single 3D scan. This paper describes the process involved in animating human figures from static 3D whole-body scans, presents an example of a generated animated sequence, and discusses possible applications of this approach.
Extracting surface area coverage by superimposing 3D scan data
Peng Li, Brian D. Corner, Steven Paquette
Surface area coverage is an important feature for evaluating the functionality of personal protective equipment and clothing. This paper presents an approach for calculating the surface area coverage of protective clothing by superimposing two 3D whole-body scan images: a scan of a 'nude' human body and a scan of the clothed body. The basic approach is to align the two scans and calculate the per-vertex distance field between the two scanned surfaces. Because the clothed body has an extra surface layer relative to the nude scan, the distance field may be used to define covered and uncovered regions by setting a distance threshold based on the thickness of the clothing or equipment. This paper discusses the procedures required for estimating surface area coverage, including data slicing, sorting, mesh generation, and the computation of the distance field. Although the method is straightforward to describe, some difficulties related to human body scanning had to be overcome in its practical application. These challenges included: 1) registration of two scan data sets with different shapes; 2) the frequent occurrence of void data, especially in the clothing scan; and 3) tissue compression and deformation caused by the clothing or equipment. This paper discusses these problems and our current solutions.
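A sketch of the per-vertex distance-field idea, assuming both scans are already registered in a common frame: for each vertex of the 'nude' scan, find the nearest point on the clothed scan; vertices farther away than a threshold tied to the garment thickness are labelled covered. The KD-tree is just one convenient nearest-neighbour structure, not necessarily the authors' choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def covered_mask(nude_vertices, clothed_vertices, thickness_mm=5.0):
    """nude_vertices: (N, 3) and clothed_vertices: (M, 3), both in mm."""
    tree = cKDTree(clothed_vertices)
    dist, _ = tree.query(nude_vertices)     # per-vertex distance field
    return dist > thickness_mm              # True where clothing covers the body

# Coverage fraction by vertex count (an area-weighted version would use
# per-vertex areas of the nude mesh):
# coverage = covered_mask(nude, clothed).mean()
```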
Multiple structured light system for the 3D measurement of feet
Hansjoerg Gaertner, Jean-Francois Lavoie, Eric Vermette, et al.
In the field of custom foot orthoses, biomechanics specialists take negative casts of the patient's feet and produce a positive cast on which they apply corrective elements. The corrected positive cast is then used to thermoform an orthosis. Several production steps can be simplified or eliminated by a 3D acquisition of the underside of the foot. Such a complete custom footwear system, developed by Neogenix Technologies Inc., was reported last year at the IS&T/SPIE symposium. A major improvement aimed at maximizing the coverage of the underside of the foot has since been achieved by using a multiple structured light projection technique. This paper describes the patent-pending hardware setup and the software-based extraction of range data.
Efficient free-form surface representation with application in orthodontics
Sameh M. Yamany, Ahmed M. El-Bialy
Orthodontics is the branch of dentistry concerned with the study of the growth of the craniofacial complex. The detection and correction of malocclusion and other dental abnormalities is one of the most important and critical phases of orthodontic diagnosis. This paper introduces a system that can assist in automatic orthodontic diagnosis. The system can be used to classify skeletal and dental malocclusion from a limited number of measurements. It is not intended to deal with severe cases but is aimed at cases more likely to be encountered in epidemiological studies. Prior to the measurement of the orthodontic parameters, the position of the teeth in the jaw model must be detected. A new free-form surface representation is adopted for the efficient and accurate segmentation and separation of teeth from a scanned jaw model. The new representation encodes the curvature and surface normal information into a 2D image. Image segmentation tools are then used to extract structures of high or low curvature. By iteratively removing these structures, individual tooth surfaces are obtained.
Robust 3D reconstruction system for human jaw modeling
Sameh M. Yamany, Aly A. Farag, David Tazman, et al.
This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning, and surgical simulation. Dentistry requires accurate 3D representations of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs, and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data are acquired with an intraoral video camera. A modified shape-from-shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototyping machine.
Surface Fitting and Reconstruction
Reconstruction of complete 3D object model from multiview range images
Yi-Ping Hung, Chu-Song Chen, Ing-Bor Hsieh, et al.
In this paper, we designed and implemented a method that can register and integrate range images obtained from different viewpoints for building complete 3D object models. The method contains three major parts: (1) registration of range images and estimation of the parameters of the rigid-body transformations, (2) integration of redundant surface patches and generation of triangulated mesh surface models, and (3) reduction of the triangular mesh and texture mapping. We developed the RANSAC-based DARCES technique to estimate the parameters of the rigid-body transformation between two partially overlapping range images without requiring initial estimates. Then we used a circular ICP procedure to reduce the global registration error. We also used the consensus surface algorithm combined with the marching cubes method to generate triangular meshes. Finally, by texture blending and mapping, we can reconstruct a virtual 3D model containing both geometrical and texture information.
Error sensitivity of rotation angles in the ICP algorithm
Byung-Uk Lee, Chul-Min Kim, Rae-Hong Park
The accuracy of the iterative closest point (ICP) algorithm, which is widely employed in image registration, depends on the complexity of the shape of the object under registration. Objects with complex features yield higher reliability in estimating the registration parameters. For objects with rotational symmetry, a cylinder for example, rotation about the center axis cannot be distinguished. We derive the sensitivity of the rotation error of the ICP algorithm from the curvature of the error function near the minimum-error position. We approximate the defined error function by a second-order polynomial and show that the coefficient of the second-order term is related to the reliability of the estimated rotation angle; the coefficient is also related to the shape of the object. In the known-correspondence case, the reliability can be expressed by the second moment of the input image. Finally, we apply the sensitivity formula to a simple synthetic object and to ellipses, and verify that the predicted orientation variance of the ICP algorithm is in good agreement with computer simulations.
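An illustrative sketch of this second-order analysis (not the paper's derivation): sample the point-to-closest-point error around the optimum rotation, fit E(theta) ~ a*theta^2 + b*theta + c, and treat a larger quadratic coefficient a as higher reliability (smaller variance) of the rotation estimate. The rotation axis, angle range, and error metric below are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotation_error_curvature(model_pts, data_pts,
                             angles=np.linspace(-0.05, 0.05, 21)):
    """Curvature of the ICP-style error function for small rotations about z."""
    tree = cKDTree(model_pts)
    errs = []
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        d, _ = tree.query(data_pts @ R.T)     # closest-point distances
        errs.append(np.mean(d**2))
    quad, lin, const = np.polyfit(angles, errs, 2)
    return quad                               # large curvature -> reliable angle estimate
```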
Slicing, fitting, and linking (SFL): a modular triangulation approach
Elsayed E. Hemayed, Aly A. Farag
Slicing-fitting-linking (SFL) is a fast triangulation technique that guarantees building a closed mesh with consistent normals. The proposed technique can be used with different surface reconstruction cues such as laser scanning, stereo, SFS, and CT/MRI. The output of SFL can be in the form of STL files, which are suitable for most rapid prototyping machines. The technique has three tasks. The first task is to slice the 3D data points into 2D cross sections parallel to each other. The second task is to fit a curve to the data points of each cross section. The third task is to link the fitted curves to form the mesh. A detailed description of the algorithm is presented in this paper.
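A minimal sketch of the slicing step only (illustrative, with assumed names): bin unorganized 3D points into cross sections of constant thickness along z; the fitting and linking tasks would then operate on each returned slice.

```python
import numpy as np

def slice_points(points, slice_thickness):
    """points: (N, 3) array; returns a list of (M_i, 2) xy cross sections."""
    z = points[:, 2]
    index = np.floor((z - z.min()) / slice_thickness).astype(int)
    return [points[index == k, :2] for k in range(index.max() + 1)]
```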
Spherical harmonic surface representation with feedback control
Sarp Ertuerk, Tim J. Dennis
Spherical harmonic (SH) surface representation is commonly used for modeling rigid and non-rigid objects. SH parameters are evaluated by fitting a surface constructed from a sum of harmonics to the raw 3D object data. Least-squares error fitting is used and the parameters are computed in a sequential process. Residual error feedback is proposed to control the fitting accuracy. The residual radial error is available at each sample point after the computation of each parameter and is used in the computation of the next harmonic parameter. The SH representation order can be incremented until the harmonic model is sufficiently close to the object surface, or until the gain in accuracy is so small as to be worthless. In this way the representation order is automated. The raw object surface data are likely to contain areas that have no sample data. Although small and moderate-sized gaps are automatically limited during SH parameter computation, large areas can develop spurious high-amplitude harmonics. The ability of the sample pattern to 'control' a particular basis function can be checked during the computation process, and the coefficients of uncontrolled harmonics are set to zero to eliminate spurious features. The model is reconstructed in wire-frame form using geodesic sampling of the harmonic representation.
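A sketch of fitting a star-shaped surface r(azimuth, polar) with spherical harmonics and stopping once the residual stops improving, in the spirit of the feedback control described above. This is an illustrative batch least-squares version, not the authors' sequential per-coefficient scheme, and the stopping tolerance is an assumption.

```python
import numpy as np
from scipy.special import sph_harm

def fit_sh_radius(azimuth, polar, radius, max_degree=8, tol=1e-3):
    """azimuth in [0, 2*pi), polar in [0, pi]; radius: sampled radial distances."""
    prev_res = np.inf
    for degree in range(1, max_degree + 1):
        cols = []
        for n in range(degree + 1):
            for m in range(0, n + 1):
                Y = sph_harm(m, n, azimuth, polar)
                cols.append(Y.real)
                if m != 0:
                    cols.append(Y.imag)        # real basis from the complex harmonics
        A = np.stack(cols, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, radius, rcond=None)
        residual = np.linalg.norm(radius - A @ coeffs)
        if prev_res - residual < tol * len(radius):   # gain too small: stop raising the order
            break
        prev_res = residual
    return coeffs, degree
```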
3D reconstruction, visualization, and measurement of MRI images
Abhijit S. Pandya, Pritesh P. Patel, Mehul B. Desai, et al.
This paper primarily focuses on taking 2D medical image data, which often come from magnetic resonance imaging, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate, and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed, and neural network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying the characteristics and progression of 'fat necrosis'. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, more generally, to use all the 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool for neurological surgery planning and for time and cost reduction.
Real-Time Applications for 3D Imaging
Three steps to make shape from shading work consistently on real scenes
Holger Lange
For a long time, shape from shading has had difficulty working on real scenes. It was formulated with oversimplified physical models, which made its application to real scenes difficult and its integration with other modules inconsistent, because the physical models of the other modules were more complex and different. These limiting physical models were orthographic projection, the restriction to a light source at infinity, and the Lambertian diffusion model. It could also only calculate either depth or photometric characteristics; one of them had to be known, which is a priori difficult for real scenes. This work presents shape from shading in a general formulation using more complex physical models, which can cope with the complexity of real scenes and which are consistent with other modules. The physical models are perspective projection, point light sources and ambient light, and Phong's reflection model dealing with both diffuse and specular reflection. In order to overcome the ill-posedness of the shape-from-shading module and to calculate both depth and photometric characteristics consistently at the same time, we use stereo photometry and cooperation with stereo vision.
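For reference, a sketch of the Phong image-formation model assumed in such a formulation (point source plus ambient light): this is the predicted brightness that a shape-from-shading cost function would compare against the measured image. The parameter names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def phong_intensity(normal, to_light, to_viewer, k_a, k_d, k_s, shininess,
                    ambient, source):
    """Predicted brightness at one surface point under the Phong model."""
    n = normal / np.linalg.norm(normal)
    l = to_light / np.linalg.norm(to_light)
    v = to_viewer / np.linalg.norm(to_viewer)
    r = 2.0 * np.dot(n, l) * n - l                   # mirror reflection of l about n
    diffuse = k_d * max(np.dot(n, l), 0.0)
    specular = k_s * max(np.dot(r, v), 0.0) ** shininess
    return k_a * ambient + source * (diffuse + specular)
```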
Real-time monitoring of icebreaker propeller blades' ice load using underwater laser ranging system
Andre Morin, Michel Arsenault, Merv H. Edgecombe, et al.
Navigation in arctic waters presents a formidable challenge to a ship's propulsion system, as large ice pieces impinging on the propeller blades may result in stresses exceeding the strength of the blade material. Damage to propellers is costly and can also spell disaster if a ship becomes disabled in a remote area. To prevent such situations, design practice must be improved and validated against experimental data. In this paper we present the design of a system that performs ice load measurements. This system is based on conventional triangulation and uses an array of laser beams aimed at the propeller blades to monitor their deformations in real time. As the propeller rotates, each point range sensor describes an arc of a circle on the blades. Using template-matching techniques, the range values for these series of arcs can be used to infer the actual ice-induced blade deformations. The current system provides range measurements at a rate of 2 kHz on three different channels. The system accuracy is 0.5 mm at distances in excess of 3 meters.
Depth-based selective image reconstruction using spatiotemporal image analysis
Tetsuji Haga, Kazuhiko Sumi, Manabu Hashimoto, et al.
In industrial plants, a remote monitoring system that eliminates the need for physical tour inspection is often considered desirable. However, the image sequence obtained from a mobile inspection robot is hard to interpret because the objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique that removes the needless objects from the foreground and electronically recovers the occluded background. Our algorithm is based on spatiotemporal analysis, which enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real sequence obtained from the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.
Poster Session
Development of a 3D digitizer for breast surgery procedures
Jorge Rodriguez-Larena, Fernando Canal Bienzobas
The planning of a breast reconstruction surgical operation requires measuring meaningful anthropometric points directly on the patient, from which distances, areas, and volumes have to be calculated. In this paper, we propose using a 3D optical digitizer to perform this task.
Reconstruction of the surface of the human body from 3D scanner data using B-splines
Ioannis Douros, Laura Dekker, Bernard F. Buxton
There is an increasing number of applications that require the construction of computerized human body models. Recently, Hamamatsu Photonics has developed an accurate and fast scanner based on position-sensitive photon detectors, capable of providing a dense representation of the body in a few seconds. The work presented here is designed to exploit such a scanner's capabilities. An algorithm is introduced that deals with the surface-from-curves problem and can be combined with an existing curves-from-points algorithm to solve the surface-from-points problem. The algorithm takes as input a set of B-spline curves and uses them to drive a fast and robust surface generation process. This is done by adequately sampling the curves, in a manner that incorporates explicit assumptions about human body geometry and topology. The result is a compound, multi-segment, yet entirely smooth surface that may be used to calculate body volume and surface area.
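A sketch of one way the curve sampling that feeds such a surface generation step could look (illustrative, not the authors' code): fit a closed B-spline to one horizontal slice of scanner points and resample it at a fixed number of parameter values, so that corresponding samples on successive slices can be joined into a surface mesh.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_slice(xy, n_samples=64, smoothing=1.0):
    """xy: (N, 2) points of one body cross section, ordered along the contour."""
    tck, _ = splprep([xy[:, 0], xy[:, 1]], s=smoothing, per=True)  # closed B-spline
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x, y = splev(u, tck)
    return np.column_stack([x, y])       # n_samples points, comparable across slices
```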
3D profilometry using a dynamically configurable confocal microscope
Sungdo Cha, Paul C. Lin, Lijun Zhu, et al.
Confocal microscopy is a powerful tool that has been used in the development of 3D profilometers for depth-section image capture and surface measurements. Previously developed confocal microscopes operated by scanning a single point, or an array of points, over the surface of a sample. The 3D profilometer we constructed acquires measurement data using a confocal microscopy technique in which transverse surface scanning is performed by a digital micromirror device (DMD). The DMD is imaged onto the object's surface, allowing confocal scanning of the field of view at faster than video rate without physical movement of the sample. 3D reconstruction is performed a posteriori from stacks of 2D image planes acquired at different depths. A description of the experimental setup, with system design issues and solutions, is presented. Backscatter noise and diffraction noise due to the periodic micromirror structure are minimized using spatial filtering and polarization coding techniques. Using a 100x objective, the longitudinal point spread function was measured at 2.1 micrometers, with a simultaneous transverse resolution of 228.0 lines/mm. The optical resolution performance of our microscope, with real-time scanning provided by the DMD, is shown to be effectively equivalent to that of conventional confocal microscopes. The 3D imaging capabilities of our scanning system using the DMD were demonstrated on various objects.
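A minimal sketch of the a-posteriori reconstruction step (illustrative assumptions): in confocal profilometry each pixel is brightest in the image plane closest to the surface, so a simple height map is the depth of the per-pixel intensity peak in the acquired stack. Sub-plane refinement by fitting a parabola around the peak is common but omitted here.

```python
import numpy as np

def height_map_from_stack(stack, depths):
    """stack: (Z, H, W) intensity images; depths: (Z,) depth of each plane."""
    peak_index = np.argmax(stack, axis=0)   # (H, W) index of the brightest plane
    return depths[peak_index]               # (H, W) surface height map
```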