Proceedings Volume 7239

Three-Dimensional Imaging Metrology

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 19 January 2009
Contents: 9 Sessions, 32 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2009
Volume Number: 7239

Table of Contents

  • Front Matter: Volume 7239
  • Theory and New Methods for 3D Surface Sensing I
  • Theory and New Methods for 3D Surface Sensing II
  • Measurement Standards and Calibration
  • Coordinate Metrology
  • Applications
  • Artifact-based Characterization
  • Measurement Uncertainty
  • Interactive Paper Session
Front Matter: Volume 7239
This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 7239, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Theory and New Methods for 3D Surface Sensing I
Basic theory on surface measurement uncertainty of 3D imaging systems
Three-dimensional (3D) imaging systems are now widely available, but standards, best practices, and comparative data have started to appear only in the last 10 years or so. The need for standards is mainly driven by users and product developers who are concerned with 1) the applicability of a given system to the task at hand (fit for purpose), 2) the ability to compare fairly across instruments, 3) instrument warranty issues, and 4) cost savings through 3D imaging. The evaluation and characterization of 3D imaging sensors and algorithms require the definition of metric performance. The performance of a system is usually evaluated using quality parameters such as spatial resolution, uncertainty, accuracy, and complexity. These are quality parameters that most people in the field can agree upon. The difficulty arises in defining a common terminology and procedures to evaluate them quantitatively through metrology and standards definitions. This paper reviews the basic principles of 3D imaging systems. Optical triangulation and time-delay (time-of-flight) measurement systems were selected to explain the theoretical and experimental strands adopted in this paper. The intrinsic uncertainty of optical distance measurement techniques, the parameterization of a 3D surface, and systematic errors are covered. Experimental results on a number of scanners (Surphaser®, HDS6000®, Callidus CPW 8000®, ShapeGrabber® 102) support the theoretical descriptions.
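A short numeric sketch of the intrinsic-uncertainty discussion above, using the standard first-order relation for triangulation-based ranging; the symbols (stand-off z, focal length f, baseline b, spot-localization uncertainty δp) and the example values are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch (not from the paper): the standard range-uncertainty
# relation for a single-point optical triangulation sensor,
#   delta_z ≈ z^2 / (f * b) * delta_p,
# where z is the stand-off distance, f the effective focal length,
# b the triangulation baseline, and delta_p the uncertainty of the
# detected spot position on the sensor.

def triangulation_range_uncertainty(z, f, b, delta_p):
    """Approximate 1-sigma range uncertainty of a triangulation sensor."""
    return (z ** 2) / (f * b) * delta_p

if __name__ == "__main__":
    # Hypothetical numbers: 1 m stand-off, 50 mm lens, 200 mm baseline,
    # 1 um spot-localization uncertainty.
    dz = triangulation_range_uncertainty(z=1.0, f=0.050, b=0.200, delta_p=1e-6)
    print(f"range uncertainty ≈ {dz * 1e6:.0f} um")  # ≈ 100 um
```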
Design and implementation of an inexpensive LIDAR scanning system with applications in archaeology
Andrew Willis, Yunfeng Sui, William Ringle, et al.
This paper describes the development of a system and associated software capable of capturing 3D LIDAR data from surfaces up to 20 m from the sensor. The chief design concern was to minimize cost, which for this initial system is approximately $10.5k (USD). Secondary considerations include portability, robustness, and size. The system hardware consists of two motors and a single-point sensor capable of measuring the range to a single surface point. The motors redirect the emitted laser along lines nearly equivalent to those specified by a spherical coordinate system, generating a spherical range image, r = f(φ, θ). This article describes the technical aspects of the scanner design, including a bill of materials for the scanner components and the mathematical model for the measured 3D point data. The system was built in 2007 and has since been used in the field twice: (1) for scanning ruins and underground cisterns within Mayan cities near Merida, Mexico, and (2) for scanning the ruins of a Crusader castle at Apollonia-Arsuf, located on the Mediterranean shore near Herzliya, Israel. Using this system in these vastly different environments has provided a number of useful insights or "best practices" on the use of inexpensive LIDAR sensors, which are discussed in this paper. We also discuss a measurement model for the generated data and an efficient, easy-to-implement algorithm for polygonizing the measured 3D (x, y, z) data. Specific applications of the developed system to archaeological and anthropological problems are discussed.
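To make the spherical range image r = f(φ, θ) concrete, here is a minimal conversion from (r, φ, θ) samples to Cartesian points. The angle conventions and the absence of mounting-offset terms are simplifying assumptions; the paper's measurement model includes calibration parameters omitted here.

```python
import numpy as np

# Minimal sketch (assumed conventions, not the authors' exact model):
# convert a spherical range image r = f(phi, theta) from a pan/tilt
# single-point LIDAR into Cartesian (x, y, z) points.

def spherical_to_cartesian(r, phi, theta):
    """r: range [m]; phi: azimuth [rad]; theta: elevation [rad]."""
    x = r * np.cos(theta) * np.cos(phi)
    y = r * np.cos(theta) * np.sin(phi)
    z = r * np.sin(theta)
    return np.stack([x, y, z], axis=-1)

# Example: a 10 x 20 grid of ranges over a small angular window.
phi = np.linspace(-0.5, 0.5, 20)
theta = np.linspace(-0.25, 0.25, 10)
PHI, THETA = np.meshgrid(phi, theta)
R = np.full_like(PHI, 5.0)          # hypothetical 5 m readings
points = spherical_to_cartesian(R, PHI, THETA)
print(points.shape)                  # (10, 20, 3)
```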
Theory and New Methods for 3D Surface Sensing II
Characterization of modulated time-of-flight range image sensors
A number of full-field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high-frequency modulation response of these image sensors using a picosecond laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post-processing of the acquired range measurements.
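The phase-shift recovery described above is often implemented with four samples taken 90° apart in the modulation period; the sketch below shows that common recipe as an illustration (the function name and the four-sample choice are assumptions, not the paper's characterization method).

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def range_from_four_samples(a0, a1, a2, a3, mod_freq_hz):
    """Per-pixel range from four samples taken 90 degrees apart in phase."""
    phase = np.mod(np.arctan2(a1 - a3, a0 - a2), 2 * np.pi)  # envelope phase shift
    return C * phase / (4 * np.pi * mod_freq_hz)

# Example: synthesize the four samples for a pixel at ~2.5 m with 30 MHz modulation.
f_mod = 30e6
true_phase = 4 * np.pi * f_mod * 2.5 / C
samples = [np.cos(true_phase - k * np.pi / 2) for k in range(4)]
print(range_from_four_samples(*samples, f_mod))  # ~2.5
```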
Range imager performance comparison in homodyne and heterodyne operating modes
Range imaging cameras measure depth simultaneously for every pixel in a given field of view. In most implementations the basic operating principles are the same: a scene is illuminated with an intensity-modulated light source and the reflected signal is sampled using a gain-modulated imager. Previously we presented a unique heterodyne range imaging system that employed a bulky and power-hungry image intensifier as the high-speed gain-modulation mechanism. In this paper we present a new range imager using an internally modulated image sensor that is designed to operate in heterodyne mode but can also operate in homodyne mode. We discuss homodyne and heterodyne range imaging and the merits of the various types of hardware used to implement these systems. Following this we describe in detail the hardware and firmware components of our new ranger. We experimentally compare the two operating modes and demonstrate that heterodyne operation is less sensitive to some of the limitations suffered in homodyne mode, resulting in better linearity and ranging precision. We conclude by showing various qualitative examples that demonstrate the system's three-dimensional measurement performance.
Three-dimensional shape measurement of aspheric refractive optics by pattern transmission photogrammetry
Marcus Petz, Marc Fischer, Rainer Tutsch
For fast, accurate, and robust shape measurement of specular surfaces there are several powerful measurement techniques based on deflectometry. For boundary surfaces of transparent objects like refractive optics, by contrast, deflectometry has so far been limited to the measurement of only one surface in reflection. This is unsatisfactory from a metrological point of view, as the geometrical relation between the two surfaces, which substantially defines the optical function, is lost. In this work a new deflectometric approach is presented that works in transmission and allows the simultaneous measurement of both surfaces of refractive optics. The basic idea of the approach is to calculate the unknown surface geometry of a transparent object by iteratively adapting a surface model to the observed light-ray deflections. The main problem in this case is the ambiguity of the refraction at the two boundary surfaces of a lens, as there are multiple possible solutions that produce the same measurement results. This is solved by combining four different views of the object under test, which allows an unambiguous solution to be found. An experimental measurement setup is presented, and results of different simulations and tests are discussed in this paper.
A 3D imaging system for inspection of large underwater hydroelectric structures
François Mirallès, Julien Beaudry, Michel Blain, et al.
A novel robotic 3D imaging system for the inspection of large underwater hydroelectric structures is proposed. This system has been developed at the Research Institute of Hydro-Quebec and is based on a camera-laser ensemble mounted on a mobile platform. Mechanical, electronic and software design aspects, overall operational modalities, and proof of concept results are presented. These results were obtained in the course of inspection trials carried out under normal operating conditions at the site of two Hydro-Quebec hydroelectric dams.
Subtraction stereo: a stereo camera system that focuses on moving regions
Kazunori Umeda, Yuuki Hashimoto, Tatsuya Nakanishi, et al.
This study aims at developing a practical stereo camera suitable for applications such as surveillance, in which detection of anomalies or measurement of moving people is required. In such applications the targets to be measured are usually in motion. In this paper, "subtraction stereo" is proposed, which focuses on motion information to increase the robustness of stereo matching. It realizes robust measurement of range images by detecting moving regions with each camera and then applying stereo matching to the detected moving regions. Measurement of the three-dimensional position, height, and width of a target object using subtraction stereo is discussed. The basic algorithm is implemented on a commercially available stereo camera, and the effectiveness of subtraction stereo is verified by several experiments using this camera.
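A rough sketch of the subtraction-stereo idea as described in the abstract, assembled from standard OpenCV building blocks; the specific background-subtraction and block-matching classes and their parameters are illustrative choices, not the authors' implementation.

```python
import cv2
import numpy as np

# Detect moving regions in each camera with background subtraction, then keep
# block-matching disparities only where both views report motion (crude joint mask).
bg_left = cv2.createBackgroundSubtractorMOG2()
bg_right = cv2.createBackgroundSubtractorMOG2()
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def subtraction_stereo(left_gray, right_gray):
    mask_l = bg_left.apply(left_gray)                       # moving pixels, left
    mask_r = bg_right.apply(right_gray)                     # moving pixels, right
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    moving = (mask_l > 0) & (mask_r > 0)                    # crude joint motion mask
    disparity[~moving] = np.nan                             # keep only moving regions
    return disparity
```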
Measurement Standards and Calibration
Target penetration of laser-based 3D imaging systems
Geraldine S. Cheok, Kamel S. Saidi, Marek Franaszek
The ASTM E57.02 Test Methods Subcommittee is developing a test method to evaluate the ranging performance of a 3D imaging system. The test method will involve measuring either the distance between two targets or the distance between an instrument and a target. The first option is necessary because some instruments cannot be centered over a point and will require registration of the instrument coordinate frame into the target coordinate frame. The disadvantage of this option is that registration will introduce an additional error into the measurements. The advantage is that this type of measurement, a relative measurement, is what is typically used in field applications. A potential target geometry suggested for the test method is a planar target. The ideal target material would be diffuse, have uniform reflectivity for wavelengths between 500 nm and 1600 nm (the wavelengths of most commercially available 3D imaging systems), and exhibit minimal or no penetration of the laser into the material. A possible candidate material for the target is Spectralon. However, several users have found that there is some penetration of the laser into Spectralon, and this is confirmed by the material manufacturer. The effect of this penetration on the range measurement is unknown. This paper presents an attempt to quantify the laser penetration depth into the Spectralon material for four 3D imaging systems.
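One way to quantify the apparent penetration bias mentioned above is to fit a plane to the scanned points on the Spectralon target and measure its signed offset from an independently known reference plane. The sketch below assumes such a reference is available; function and variable names are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]              # normal = direction of least variance

def apparent_penetration(scan_points, ref_point, ref_normal):
    """Mean signed distance of the scanned surface behind the reference plane."""
    c, _ = fit_plane(scan_points)
    return float(np.dot(c - ref_point, ref_normal))
```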
Surface-dependent 3D range camera self-calibration
Derek D. Lichti, Denis Rouzaud
This paper reports on an investigation designed to quantify the systematic and random error properties of range measurements from the SwissRanger SR-3000 range camera as a function of reflecting-surface color. This is achieved with an integrated self-calibrating bundle adjustment of image co-ordinate and range observations of a network of targets having three different colors (black, mid-level grey and white). Four different self-calibration adjustments are performed: one per target color and a combined one comprising all targets. The systematic effects of the different target colors are modeled with one rangefinder offset parameter per color. Results show considerable differences (up to 75 mm) between the different rangefinder offset parameters. The stochastic properties of the range observations, measured in terms of the residual root mean square error, also differed considerably among the adjustment cases. Range observations to black targets were found to be much noisier than those of the other targets, with white being the least noisy. High correlations (up to 0.96) between the rangefinder offset and perspective center co-ordinates were found in all adjustments.
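The per-color rangefinder offset model can be illustrated in isolation: given range residuals grouped by target color, each offset is essentially the mean residual of its group. The paper estimates these parameters inside a self-calibrating bundle adjustment; the stand-alone averaging below (with made-up numbers) only shows the form of the error model.

```python
import numpy as np

def rangefinder_offsets(residuals_mm, colors):
    """Mean range residual per target color, a toy stand-in for the per-color offset."""
    residuals_mm = np.asarray(residuals_mm, dtype=float)
    colors = np.asarray(colors)
    return {c: residuals_mm[colors == c].mean() for c in np.unique(colors)}

print(rangefinder_offsets([12.0, 15.0, -55.0, -60.0],
                          ["white", "white", "black", "black"]))
```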
Phase coding and absolute calibration for a low-cost fringe projection system
Giovanna Sansoni, Marco Trebeschi
A whole-field profilometer for the acquisition of free-form shapes is presented. The system is based on the projection of a single pattern of Ronchi fringes and on optical triangulation. A novel approach to phase-measuring profilometry is implemented to index the field of view. The system is calibrated in an absolute way in order to obtain dense point clouds in a global reference system. The device is reconfigurable to the measurement problem, portable and rugged, and well suited to multi-view acquisition of free-form shapes.
Range camera calibration based on image sequences and dense comprehensive error statistics
This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that investigate individual distortion factors separately, in the presented approach all calculations are based on the same data set, which is captured without auxiliary devices serving as a high-order reference, but with the camera guided by hand. Flat, circular targets stuck on a planar whiteboard, with known positions, are automatically tracked throughout the amplitude layer of long image sequences. These image observations are introduced into a bundle block adjustment, which on the one hand results in the determination of the interior orientation. Capitalizing on the known planarity of the imaged board, the reconstructed exterior orientations furthermore allow the derivation of reference values for the actual distance observations. Eased by the automatic reconstruction of the camera's trajectory and attitude, comprehensive statistics are generated, which are accumulated into a five-dimensional matrix in order to remain manageable. The marginal distributions of this matrix are inspected for the purpose of system identification, whereupon its elements are introduced into another least-squares adjustment, finally leading to clear range correction models and parameters.
Coordinate Metrology
Dimensional measurement traceability of 3D imaging data
Steve Phillips, Michael Krystek, Craig Shakarji, et al.
This paper discusses the concept of metrological traceability to the International System of Units (SI) unit of length, the meter. We describe how metrological traceability is realized, give a recent example of the standardization of laser trackers, and discuss progress and challenges to the traceability of 3D imaging data.
An industrial comparison of coordinate measuring systems equipped with optical sensors: the VideoAUDIT Project
Simone Carmignato, Alessandro Voltan
The main results of an industrial inter-laboratory comparison for CMMs equipped with optical sensors are presented in this paper. The comparison, named VideoAUDIT, was organized and coordinated by the Laboratory of Industrial and Geometrical Metrology of the University of Padova and carried out in Italy and Switzerland from August 2007 to June 2008. A total of 16 CMMs from different companies participated in the project, using different kinds of optical sensors. The participants were asked to measure a set of calibrated artefacts, following detailed procedures. The audit items were chosen according to the following criteria: (1) objects that can be measured with different types of optical sensors and (2) inclusion of both reference artefacts for performance verification and common industrial products. Special attention was paid to the design of the comparison in order to respect proficiency-testing rules; in particular, the long-term stability of the audit items was checked during the comparison as a main requirement. An important task of the comparison was to test the ability of the participants to determine the uncertainty of their measurements.
Traceable optical coordinate metrology applications for the micro range
Wiebke Ehrig, Ulrich Neuschaefer-Rube, Michael Neugebauer, et al.
Optical sensors are gaining increasing importance in the field of coordinate metrology. Especially for micro-range measurements, different optical sensor principles (e.g. white-light interferometers, autofocus sensors, and confocal microscopes) are used. Micro measurement covers the detection and evaluation of measurands for length, size, and form of geometrical structures in the range between 1 μm and 1 mm. These reduced dimensions lead to increased requirements for the applied measuring technique and the verification of the measurement systems. The Physikalisch-Technische Bundesanstalt (PTB) works intensively on the development of suitable measurement standards and test procedures to make a broader industrial use of CMMs with optical sensors possible. The test procedures are analogous to the well-established tests for classical coordinate measuring machines (CMMs). For this purpose, adequate, miniaturized reference standards were manufactured, calibrated, and tested, considering the specific characteristics of optical sensors. This paper gives a summary of this work, and an outlook on future developments is given.
Tactile-optical 3D sensor applying image processing
Ulrich Neuschaefer-Rube, Mark Wissmann
The tactile-optical probe (the so-called fiber probe) is a well-known probe in micro-coordinate metrology. It consists of an optical fiber with a probing element at its end. This probing element is adjusted in the imaging plane of the optical system of an optical coordinate measuring machine (CMM). It can be illuminated through the fiber by an LED. The position of the probe is detected directly by image processing algorithms available in every modern optical CMM, not by deflections at the fixation of the probing shaft. Therefore, the probing shaft can be very thin and flexible. This facilitates measurement with very small probing forces and the realization of very small probing elements (diameters down to 10 μm). A limitation of this method is that at present the probe does not have full 3D measurement capability. At the Physikalisch-Technische Bundesanstalt (PTB), several arrangements and measurement principles for a full-3D tactile-optical probe have been implemented and tested successfully in cooperation with Werth-Messtechnik, Giessen, Germany. This contribution provides an overview of the results of these activities.
Experimental study on performance verification tests for coordinate measuring systems with optical distance sensors
Optical sensors are increasingly used for dimensional and geometrical metrology. However, the lack of international standards for testing optical coordinate measuring systems currently limits the traceability of measurements and the straightforward comparison of different optical systems. This paper presents an experimental investigation of artefacts and procedures for testing coordinate measuring systems equipped with optical distance sensors. The work is aimed at contributing to the standardization of testing methods. The VDI/VDE 2617-6.2:2005 guideline, which is probably the most complete document currently available for testing systems with optical distance sensors, is examined with specific experiments. Results from the experiments are discussed, with particular reference to the tests used for determining the following characteristics: error of indication for size measurement, probing error, and structural resolution. Particular attention is given to the use of artefacts other than gauge blocks for determining the error of indication for size measurement.
Applications
Stereo optical tracker for standoff monitoring of position and orientation
W. D. Sherman, T. L. Houk, J. M. Saint Clair, et al.
A Precision Optical Measurement System (POMS) has been designed, constructed, and tested for tracking the position (x, y, z) and orientation (roll, pitch, yaw) of models in Boeing's 9-77 Compact Radar Range. A stereo triangulation technique is implemented using two remote sensor units separated by a known baseline. Each unit measures pointing angles (azimuth and elevation) to optical targets on a model. Four different reference systems are used for calibration and alignment of the system's components and two platforms. Pointing-angle data and calibration corrections are processed at high rates to give near real-time feedback to the mechanical positioning system of the model. The positional accuracy of the system is ±0.010 inches at a distance of 85 feet while using low-RCS reflective tape targets. The precision measurement capabilities and applications of the system are discussed.
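The two-station angle-only triangulation at the heart of such a system can be sketched as the least-squares intersection of two pointing rays; the geometry below (sensor positions, azimuth/elevation conventions) is assumed and is not the POMS calibration model.

```python
import numpy as np

def ray_direction(az, el):
    """Unit pointing direction from azimuth and elevation angles [rad]."""
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def triangulate(p1, az1, el1, p2, az2, el2):
    """Midpoint of the shortest segment between the two pointing rays."""
    d1, d2 = ray_direction(az1, el1), ray_direction(az2, el2)
    # Solve for ray parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)], [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```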
Scan image registration in industrial inspection of propeller blades
David W. Allen, Jacob J. Reiser, James D. Machin, et al.
We present a method for automatically determining scan image overlap regions and computing image registration for relatively featureless free-form manufactured objects like propeller blades. The method is applicable when the design of the manufactured object exists as a NURBS surface. Incorporated in a comprehensive propeller inspection program, the method performs in an industrial propeller-manufacturing environment without operator involvement.
Using 3D range cameras for crime scene documentation and legal medicine
Gianluca Cavagnini, Giovanna Sansoni, Marco Trebeschi
Crime scene documentation and legal medicine analysis are part of a very complex process aimed at identifying the offender starting from the collection of evidence at the scene. This part of the investigation is very critical, since the crime scene is extremely volatile, and once it is removed it cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid in the documentation step, since (i) the measurement is contactless and (ii) the process required to edit and model the 3D data is quite similar to the reverse engineering procedures originally developed for the manufacturing field. In this paper we show the most important results obtained in our experiments.
Artifact-based Characterization
Characterization of three algorithms for detecting surface flatness defects from dense point clouds
Pingbo Tang, Burcu Akinci, Daniel Huber
Surface flatness assessment is required for controlling the quality of various products, such as building and mechanical components. During such assessments, inspectors collect data capturing surface shape and use it to identify flatness defects, i.e., surface regions deviating from a reference plane by more than the tolerance. Laser scanners can deliver accurate, dense 3D point clouds capturing detailed surface shape for flatness defect detection in minutes. However, few studies explore algorithms for detecting surface flatness defects from dense point clouds or provide quantitative analysis of defect detection performance. This paper presents three surface-flatness-defect detection algorithms and our experimental investigations for characterizing their performance. We created a test bed composed of several flat boards with defects of various sizes on them, and used it to test two scanners and three algorithms. The results are reported as a set of performance maps indicating which types of defects are detected under which conditions (scanner, scanning distance, angular resolution, selected defect detection algorithm, etc.). Our analysis shows that scanning distance and angular resolution substantially influence detection accuracy. Comparative analyses of the scanners and defect detection algorithms are also presented.
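As a minimal illustration of the kind of algorithm being characterized (not necessarily one of the paper's three), a reference plane can be fit to the point cloud and points deviating beyond the tolerance flagged as defects.

```python
import numpy as np

def flatness_defects(points, tolerance):
    """Flag points whose out-of-plane deviation from a least-squares plane exceeds tolerance."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    deviation = (points - centroid) @ normal        # signed distance to the fitted plane
    return np.abs(deviation) > tolerance            # boolean defect mask

# Synthetic 1 m x 1 m board with 0.2 mm noise and a 5 mm bump on 100 points.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(10_000, 2))
z = rng.normal(scale=0.2e-3, size=10_000)
z[:100] += 5e-3
cloud = np.column_stack([xy, z])
print(flatness_defects(cloud, tolerance=3e-3).sum())  # ≈ 100 flagged points
```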
Resolution characterization of 3D cameras
Resolution analysis is a well-established 2D imaging topic in which particular targets are used for equipment characterization. These concepts can be extended to 3D imaging through the use of specific three-dimensional target objects. The core of this paper is the experimental characterization of seven different 3D laser scanners through the extraction of resolution, accuracy, and uncertainty parameters from a 3D target object. Processing range maps acquired at the same nominal resolution leads to different results in terms of z-resolution, optical resolution, and linear and angular accuracy. The aim of this research is to suggest a characterization process, mainly based on resolution and accuracy parameters, that allows a reliable comparison of 3D scanner performance.
Evaluating laser range scanner lateral resolution in 3D metrology
David MacKinnon, J. Angelo Beraldin, Luc Cournoyer, et al.
In this study, the lateral resolution of laser range scanners is investigated. A standardized method is proposed and demonstrated for quantifying the lateral surface resolvability of a laser range scanner through the use of an appropriately designed artefact. A new metric for lateral surface resolution, the limit of surface resolvability, is presented; it is obtained using what is referred to as the wedge test. Results of applying this metric and test method to laser range scanners are also presented.
Measurement Uncertainty
Unified computation of strict maximum likelihood for geometric fitting
A new numerical scheme is presented for strictly computing the maximum likelihood (ML) solution of geometric fitting problems. Intensively studied in the past are methods that first transform the data into a computationally convenient form and then assume Gaussian noise in the transformed space. In contrast, our method assumes Gaussian noise in the original data space. It is shown that the strict ML solution can be computed by iteratively using existing methods. Our method is then applied to ellipse fitting and fundamental matrix computation. It is also shown to encompass optimal correction, e.g., computing perpendiculars to an ellipse and triangulating stereo images. While such applications have been studied individually, our method generalizes them into an application-independent form from a unified point of view.
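For readers unfamiliar with geometric fitting under Gaussian noise in the original data space, the toy example below fits a circle by minimizing orthogonal distances; it is only a simple analogue of the ML formulation, not the paper's unified iterative scheme for ellipses and fundamental matrices.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle_ml(points):
    """Geometric (orthogonal-distance) circle fit, i.e. ML under isotropic Gaussian noise."""
    x, y = points[:, 0], points[:, 1]

    def residuals(p):
        cx, cy, r = p
        return np.hypot(x - cx, y - cy) - r       # orthogonal distances to the circle

    x0 = np.array([x.mean(), y.mean(), np.std(points)])
    return least_squares(residuals, x0).x          # (cx, cy, r)

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([2 + 3 * np.cos(t), -1 + 3 * np.sin(t)])
pts += rng.normal(scale=0.05, size=pts.shape)
print(fit_circle_ml(pts))                          # ≈ [2, -1, 3]
```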
Ways to verify performance of 3D imaging instruments
Work is currently underway to establish standards for the performance verification of 3D imaging instruments, including scanners, imagers, and flash LIDAR devices. This paper discusses the figures of merit to be considered in evaluating alternative methods and proposes specific ranging test protocols. Experimental results are reviewed.
Proposed procedure for a distance protocol in support of ASTM-E57 standards activities on 3D imaging
J.-A. Beraldin, L. Cournoyer, M. Picard, et al.
The performance of 3D imaging systems needs to be evaluated using a common terminology and common test procedures. This evaluation is necessary because three-dimensional imaging systems are measuring instruments, and the spatial coordinates they provide are only estimates of the 3D surfaces being sampled. These coordinates need to be accompanied by a quantitative statement of their uncertainty to be meaningful. The statement of uncertainty is based on comparisons with standards traceable to the national unit of length (SI units). We describe and present experimental results of a procedure to evaluate the distance measurement uncertainty of medium-range laser scanners (range between 2 m and 100 m). The procedure is based on the evaluation of point-to-point distance errors using a custom-made Reference Test Object (RTO) and a certified 3D laser tracker as a reference. This work is proposed as a possible protocol to the American Society for Testing and Materials (ASTM) E57.02 3D Imaging Systems standards committee on test methods.
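The core quantity of such a protocol, the point-to-point distance error, can be sketched as follows; target indexing and data are hypothetical, and the full procedure (RTO design, tracker alignment, statistics) is not reproduced.

```python
import numpy as np

def distance_errors(scanner_xyz, tracker_xyz, pairs):
    """Scanner-measured minus tracker-measured distance for each target pair."""
    errors = []
    for i, j in pairs:
        d_scan = np.linalg.norm(scanner_xyz[i] - scanner_xyz[j])
        d_ref = np.linalg.norm(tracker_xyz[i] - tracker_xyz[j])
        errors.append(d_scan - d_ref)
    return np.array(errors)
```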
A 3-D anisotropic diffusion filter for speckle reduction in 3-D ultrasound images
Jinshan Tang, Qingling Sun
In this paper, we study speckle reduction for 3-D ultrasound images and develop a 3-D anisotropic diffusion (AD) filter. The filter works directly in the 3-D image domain and can overcome the limitations of the 2-D anisotropic diffusion filter and the traditional 3-D anisotropic diffusion filter. The proposed algorithm uses the normalized gradient in place of the gradient in the computation of the diffusion coefficients, which reduces speckle effectively while preserving edges. Experiments have been performed on real 3-D ultrasound images, and the results show the effectiveness of the proposed 3-D anisotropic diffusion filter.
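A compact sketch of Perona-Malik-style anisotropic diffusion on a 3-D volume with a normalized gradient magnitude inside the diffusion coefficient, which is one reading of the abstract; the paper's exact coefficient and normalization may differ.

```python
import numpy as np

def anisotropic_diffusion_3d(volume, n_iter=10, kappa=1.0, dt=0.1):
    """Explicit anisotropic diffusion on a 3-D array, normalized-gradient variant."""
    u = volume.astype(float).copy()
    for _ in range(n_iter):
        gx, gy, gz = np.gradient(u)
        mag = np.sqrt(gx**2 + gy**2 + gz**2)
        norm_mag = mag / (mag.mean() + 1e-12)          # normalized gradient magnitude
        c = np.exp(-(norm_mag / kappa) ** 2)           # diffusion coefficient
        # Divergence of (c * grad u).
        div = (np.gradient(c * gx, axis=0)
               + np.gradient(c * gy, axis=1)
               + np.gradient(c * gz, axis=2))
        u += dt * div
    return u
```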
Interactive Paper Session
Three-dimensional reconstruction from multiple reflected views within a realist painting: an application to Scott Fraser's "Three way vanitas"
Brandon M. Smith, David G. Stork, Li Zhang
The problem of reconstructing a three-dimensional scene from single or multiple views has been thoroughly studied in the computer vision literature, and recently has been applied to problems in the history of art. Criminisi pioneered the application of single-view metrology to reconstructing the fictive spaces in Renaissance paintings, such as the vault in Masaccio's Trinità and the plaza in Piero della Francesca's Flagellazione. While the vast majority of realist paintings provide but a single view, some provide multiple views, through mirrors depicted within their tableaus. The contemporary American realist Scott Fraser's Three way vanitas is a highly realistic still-life containing three mirrors; each mirror provides a new view of the objects in the tableau. We applied multiple-view reconstruction methods to the direct image and the images reflected by these mirrors to reconstruct the three-dimensional tableau. Our methods estimate virtual viewpoints for each view using the geometric constraints provided by the direct view of the mirror frames, along with the reflected images themselves. Moreover, our methods automatically discover inconsistencies between the different views, including ones that might elude careful scrutiny by eye, for example the fact that the height of the water in the glass differs between the direct view and that in the mirror at the right. We believe our work provides the first application of multiple-view reconstruction to a single painting and will have application to other paintings and questions in the history of art.
Three dimensional map construction using a scanning laser range finder
Yau-Zen Chang, Shih-Tseng Lee
This paper presents the development of a three-dimensional environment reconstruction system using a laser range finder. The URG-04LX laser range finder, provided by Hokuyo Inc., is designed to efficiently provide two-dimensional distance information. To enhance the capability of the device, we developed a rotation mechanism that gives it a sweeping motion for stereo data collection. Geometric equations are derived that include misalignment parameters which are unavoidable in manufacturing and assembly. The parameters are calibrated from practical measurements of three mutually perpendicular planes. The calibration is formulated as an optimization problem solved using the Nelder-Mead simplex algorithm. The validity of the calibration scheme is demonstrated by the reconstruction of several real-world scenes.
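Ignoring the misalignment parameters that the paper calibrates, the basic geometry of turning swept 2-D scans into 3-D points looks like the following; the choice of sweep axis and frame conventions are assumptions.

```python
import numpy as np

def scan_to_points(ranges, beam_angles, sweep_angle):
    """ranges, beam_angles: 1-D arrays for one 2-D scan; sweep_angle: tilt of the scan plane [rad]."""
    # Points in the scan plane of the 2-D range finder.
    x = ranges * np.cos(beam_angles)
    y = ranges * np.sin(beam_angles)
    z = np.zeros_like(ranges)
    pts = np.stack([x, y, z], axis=-1)
    # Rotate the scan plane about the y-axis by the sweep angle (ideal, misalignment-free case).
    c, s = np.cos(sweep_angle), np.sin(sweep_angle)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return pts @ rot.T
```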
System for conveyor belt part picking using structured light and 3D pose estimation
J. Thielemann, Ø. Skotheim, J. O. Nygaard, et al.
Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
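The alignment step of such a pipeline can be sketched with a bare-bones point-to-point ICP; the geometric-primitive pre-processing that the paper relies on for a good initial pose is omitted, and all parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(template, scene, n_iter=30):
    """Point-to-point ICP: returns (R, t) mapping the template into the scene."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(scene)
    src = template.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)                   # closest scene point per template point
        dst = scene[idx]
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance (Kabsch)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:              # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```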
3D imaging acquisition, modeling, and prototyping for facial defects reconstruction
Giovanna Sansoni, Marco Trebeschi, Gianluca Cavagnini, et al.
A novel approach that combines optical three-dimensional imaging, reverse engineering (RE), and rapid prototyping (RP) for mold production in facial prosthesis reconstruction is presented. A commercial laser-stripe digitizer is used to perform multi-view acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpt the virtual prosthesis. Two physical models, of the deformed face and of the 'repaired' face, are produced: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.
Estimating angle-dependent systematic error and measurement uncertainty for a conoscopic holography measurement system
Anna Paviotti, Simone Carmignato, Alessandro Voltan, et al.
The aim of this study is to assess angle-dependent systematic errors and measurement uncertainties for a conoscopic holography laser sensor mounted on a coordinate measuring machine (CMM). The main contribution of our work is the definition of a methodology for deriving point-sensitive systematic and random errors, which must be determined in order to evaluate the accuracy of the measuring system. An ad hoc three-dimensional artefact was built for the task. The experimental test was designed so as to isolate the effects of angular variations from those of other influence quantities that might affect the measurement result. We identify the measurand best suited to assessing angle-dependent errors and present preliminary results on the expression of the systematic error and measurement uncertainty as a function of the zenith angle for the chosen measurement system and sample material.