Proceedings Volume 7447

Videometrics, Range Imaging, and Applications X


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 24 August 2009
Contents: 8 Sessions, 20 Papers, 0 Presentations
Conference: SPIE Optical Engineering + Applications 2009
Volume Number: 7447

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7447
  • Range Systems I
  • Systems Development
  • Industrial/System Metrology
  • Range Systems II
  • Image Sequence Analysis/Tracking
  • System Calibration and Characterization
  • Applications
Front Matter: Volume 7447
Front Matter: Volume 7447
This PDF file contains the front matter associated with SPIE Proceedings Volume 7447, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Range Systems I
Range calibration for terrestrial laser scanners and range cameras
Range cameras and terrestrial laser scanners provide 3D geometric information by directly measuring the range from the sensor to the object. Calibration of the ranging component has not yet been studied systematically, and this paper provides a first overview. The proposed approaches differ in the object space features used for calibration, the calibration models themselves, and the environmental conditions that may be required. A number of approaches are reviewed within this framework and discussed. For terrestrial laser scanners, an improvement in accuracy by a factor of up to two is typical, whereas range camera calibration still lacks a proper model, and large systematic errors typically remain.
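As an illustration of what a range calibration model can look like, the sketch below fits a simple additive-plus-scale error model to residuals against reference ranges by linear least squares. This is a generic example, not one of the specific models reviewed in the paper.

```python
# Illustrative sketch (not the paper's specific models): estimate a simple
# range error model  r_true ≈ r_obs - (a0 + a1 * r_obs)  from observations
# against reference ranges, via linear least squares.
import numpy as np

def fit_range_error_model(r_obs, r_ref):
    """Fit e(r) = a0 + a1*r to the residuals r_obs - r_ref."""
    r_obs = np.asarray(r_obs, dtype=float)
    residuals = r_obs - np.asarray(r_ref, dtype=float)
    A = np.column_stack([np.ones_like(r_obs), r_obs])   # design matrix [1, r]
    (a0, a1), *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return a0, a1

def correct_range(r_obs, a0, a1):
    """Apply the estimated correction to new range observations."""
    return r_obs - (a0 + a1 * r_obs)
```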
Range sensors on marble surfaces: quantitative evaluation of artifacts
While 3D imaging systems are widely available and used, clear statements about the possible influence of material properties on the acquired geometric data are still rare. Marble, a material very often encountered in Cultural Heritage, is known to produce geometric errors with range sensing technologies, and the magnitude of these errors reported in the literature seems to vary considerably between studies. In this article a thorough investigation with different types of active range sensors used on four types of marble surfaces has been performed. Two triangulation-based active sensors, employing laser stripe and white light pattern projection respectively, and one PW-TOF laser scanner were used in the experiments. The analysis gave rather different results for the two categories of instrument: light penetration was negligible for the triangulation-based equipment (below 50 microns with the laser stripe and even less with the pattern projection device), while for the TOF system it was two orders of magnitude larger, quantitatively evidencing a source of systematic error that any surveyor engaged in 3D scanning of Cultural Heritage sites and objects should take into account and correct.
Proposed traceable structural resolution protocols for 3D imaging systems
David MacKinnon, J.-Angelo Beraldin, Luc Cournoyer, et al.
A protocol for determining structural resolution using a potentially traceable reference material is proposed. Where possible, terminology was selected to conform to that published in the ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, which obtain spatial data from the total field of view at once, and 3D range scanners, which accumulate spatial data over the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.
Systems Development
Real-time phase-stamp range finder with improved accuracy
This paper proposes a method for compensating errors in the phase-stamp range finder (PSRF) previously proposed by the author. The PSRF consists of a time-domain correlation image sensor (CIS), a sheet of light (SOL), and three-phase sinusoidal reference signals supplied to the CIS. The PSRF produces a range image at the frame rate of the CIS by recording, at each pixel, the "phase stamp" of the reference signals at the time the SOL passes over the object during the frame period. A problem with the previous PSRF system is that the reconstructed range image suffers from artifacts that appear as an undulation and a random noise pattern superimposed on the original surface shape of the object. Experimental results confirm the effectiveness of the proposed method in suppressing the errors in the CIS output images as well as the artifacts in the range images of the PSRF.
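For context, the sketch below shows a standard way a phase stamp can be recovered from three sinusoidal correlation outputs, assuming reference phases of 0/120/240 degrees; this is an illustrative assumption, not necessarily the exact formulation used in the paper.

```python
# Minimal sketch of three-phase "phase stamp" recovery, assuming reference
# signals at 0, 120 and 240 degrees (an assumption made for illustration).
import numpy as np

def phase_stamp(s0, s1, s2):
    """Recover the phase from three correlation outputs
    s_k = A + B*cos(phi - 2*pi*k/3), pixel-wise (arrays or scalars)."""
    return np.arctan2(np.sqrt(3.0) * (s1 - s2), 2.0 * s0 - s1 - s2)

# The recovered phase encodes the time within the frame at which the sheet of
# light crossed each pixel; with the known sweep geometry this maps to range.
```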
Application of inverse square law for 3D sensing
In this work, we present a novel concept for sensing the 3D surface profile of a scene along with its color image. The method includes two isotropic light sources placed at different geometric locations, an optical apparatus for aligning the light rays projected onto the scene, and a high precision camera. To determine pixel-wise distance information, the system captures two sequential images while strobing each light source in turn. The intensity at each pixel location in these two images is then used to calculate the distance of the object point corresponding to that pixel, generating a 3D surface profile of the scene. The approach is suitable for synchronously capturing color information and sensing 3D distance information in high definition format.
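A minimal sketch of the inverse-square-law idea follows, under the simplifying assumptions that the two sources are separated by a known baseline along the viewing direction and that albedo and incidence angle cancel in the intensity ratio; the paper's full model is not reproduced here.

```python
# Simplified per-pixel depth from the inverse square law: I ∝ 1/d^2 for each
# source, so the ratio of the two captured intensities yields the distance
# given the known source baseline (idealizing assumptions for illustration).
import numpy as np

def depth_from_two_flashes(I_near, I_far, baseline):
    """I_near, I_far: images captured under the nearer and farther source.
    Returns the distance from the nearer source to each scene point."""
    ratio = np.sqrt(I_near.astype(float) / np.maximum(I_far.astype(float), 1e-9))
    return baseline / np.maximum(ratio - 1.0, 1e-9)
```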
Industrial/System Metrology
Performance study of non-contact surface measurement technology for use in an experimental fusion device
A. Brownhill, R. Brade, S. Robson
The upgrade of the EFDA-JET experimental fusion device has generated interest in remote non-contact surface measurement of the protective metallic tile surfaces inside the machine during shutdown periods. The measurement of gap and step features of 0.35-2 mm and 0.04-0.2 mm respectively, of deposition and erosion on planar facets, and of the form of the complete vessel required specific testing to understand whether existing systems could meet project requirements. This paper describes investigations of planar facets posed at differing angles, using both fringe projection and tracked laser triangulation non-contact measurement technologies. The systems demonstrate typical plane-fitting performance of the order of 20 μm RMS, but the results highlight systematic discrepancies in the collected data.
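The RMS plane-fitting figure implies fitting a best-fit plane to the measured facet points and reporting the spread of the orthogonal residuals; the sketch below shows the standard least-squares plane fit via SVD behind that kind of number (illustrative, not the paper's code).

```python
# Standard least-squares plane fit (via SVD): fit a plane to measured facet
# points and report the RMS of the orthogonal residuals.
import numpy as np

def plane_fit_rms(points):
    """points: (N, 3) array of XYZ measurements of a nominally planar facet."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal     # signed orthogonal distances
    return normal, centroid, np.sqrt(np.mean(residuals**2))
```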
Efficient embedded plate position measurement system for large plant construction
H. Yokoyama, Y. Yamamoto, S. Ebata, et al.
The number of large plants (energy, industrial, etc.) being planned and constructed around the world has increased tremendously in recent times. Reducing the construction costs involved in developing these plants lowers the initial investment required, ensuring a more economical use of monetary resources. However, construction work still requires considerable skill and labor, so new systems and processes for construction cost reduction are needed. In this investigation, an efficient automatic method for measuring the fixed positions of embedded plates was examined by linking digital photogrammetry with CAD data. The developed measurement system was applied at a large plant construction site, and its efficiency was verified. This paper reports these results.
A real-time 3D scanning system for pavement rutting and pothole detections
Rutting and potholes are common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and safe traffic. This paper introduces a real-time, automated inspection system for detecting these distress features using high-speed transverse scanning. The detection principle is based on the dynamic generation and characterization of 3D pavement profiles obtained from structured light measurements. The system implementation involves three main tasks: multi-view coplanar calibration, sub-pixel laser stripe location, and pavement distress recognition. The multi-view coplanar scheme was employed in the calibration procedure to increase the number of feature points and to distribute them across the field of view of the camera, which greatly improves the calibration precision. The laser stripe locating method was implemented in four steps: median filtering, coarse edge detection, fine edge adjustment, and stripe curve mending with interpolation by cubic splines. The pavement distress recognition algorithms include line segment approximation of the profile, searching for feature points, and parameter calculation. The parameters of a curve segment between two feature points, such as width, depth and length, were used to differentiate rutting and potholes under different constraints. Preliminary experimental results show that the system is capable of locating these pavement distresses and meets the needs of real-time, accurate pavement inspection.
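As a simplified illustration of sub-pixel laser stripe location, the sketch below takes the intensity-weighted centroid around the brightest pixel in each image column; this is a common baseline, not the paper's exact four-step method.

```python
# Simplified sub-pixel laser stripe location: per image column, take the
# intensity-weighted centroid around the brightest pixel (window and
# threshold values are hypothetical).
import numpy as np

def stripe_centers(image, window=5, min_intensity=30):
    """image: 2D grayscale frame with a roughly horizontal laser stripe.
    Returns, per column, the sub-pixel row of the stripe (NaN if absent)."""
    rows, cols = image.shape
    centers = np.full(cols, np.nan)
    for c in range(cols):
        column = image[:, c].astype(float)
        peak = int(np.argmax(column))
        if column[peak] < min_intensity:
            continue                              # no stripe in this column
        lo, hi = max(0, peak - window), min(rows, peak + window + 1)
        w = column[lo:hi]
        centers[c] = np.sum(np.arange(lo, hi) * w) / np.sum(w)
    return centers
```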
Range Systems II
DTM generation in forested area using multiple return pulses from airborne laser scanner
Airborne laser scanning is widely adopted for city modeling, DTM (Digital Terrain Model) generation, monitoring of electrical power lines, and detection of forest areas. In general, airborne laser scanning acquires 3D point cloud data of the ground surface or objects using multiple return pulses (first, last and intermediate pulses). Filtering to distinguish on- and off-terrain points in the point clouds collected by airborne laser scanners has long been an issue, and various filtering methods have been developed for generating DTMs from point cloud data. Waveform information (range, pulse amplitude, pulse width) collected by recent laser scanner systems has been receiving more attention for improving the classification of the point cloud into on- and off-terrain points. Waveform information helps to classify the point cloud; however, robust filtering to distinguish on- and off-terrain points is still an issue. The main problem is the robust extraction of the lowest points, which represent the ground surface. Since many filtering and classification methods for robust extraction of the lowest points have been proposed using waveform information, the problem reduces to extraction of the last pulse, as the last pulse indicates the lowest points. With this motivation, a filtering and classification approach for DTM generation in forested areas using multiple return pulses instead of waveform information is investigated in this paper.
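A very simple baseline for this idea, shown below, keeps only the last returns and takes the lowest point per grid cell as a first approximation of the terrain surface; it is a generic sketch, not the filtering approach proposed in the paper.

```python
# Baseline sketch: lowest-point gridding of last returns as a crude DTM seed.
import numpy as np

def lowest_point_dtm(points, return_number, num_returns, cell=1.0):
    """points: (N, 3) XYZ; return_number/num_returns: per-point pulse indices.
    Returns a dict mapping (ix, iy) grid cells to the minimum elevation."""
    last = points[return_number == num_returns]          # last returns only
    cells = {}
    for x, y, z in last:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        if key not in cells or z < cells[key]:
            cells[key] = z
    return cells
```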
Classification of mobile terrestrial laser point clouds using semantic constraints
With mobile terrestrial laser scanning, laser point clouds of large urban areas can be acquired rapidly while driving at normal speed. Classification of the laser points is beneficial to city reconstruction from laser point clouds, but manual classification can be rather time-consuming due to the huge number of laser points. Although the pulse return is often used to automate classification, it can only distinguish a limited set of types, such as vegetation and ground. In this paper we present a new method which classifies mobile terrestrial laser point clouds using only coordinate information. First, the point cloud of a whole urban scene is segmented, and geometric properties of each segment are computed. Then semantic constraints for several object types are derived from observation and knowledge. These constraints concern not only the geometric properties of the semantic objects but also regulate the topological and hierarchical relations between objects. A search tree is formulated from the semantic constraints and applied to the laser segments for interpretation. A 2D map can provide the approximate locations of buildings and roads as well as the roads' dominant directions, so it is integrated to reduce the search space. The applicability of this method is demonstrated with a Lynx data set of the city of Enschede and a Streetmapper data set of the city of Esslingen. Four object types (ground, road, building façade, and traffic symbols) are classified in these data sets.
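The fragment below is a heavily simplified stand-in for the semantic-constraint idea: it classifies segments from a few geometric properties using hypothetical thresholds, without the topological and hierarchical relations or the search tree described in the paper.

```python
# Illustrative rule-based classification of laser segments from geometric
# properties only (thresholds are hypothetical; not the paper's search tree).
def classify_segment(seg):
    """seg: dict with 'height' (m above local ground), 'area' (m^2),
    'verticality' (0..1) and 'planarity' (0..1) of the segment."""
    if seg["height"] < 0.3 and seg["planarity"] > 0.8:
        return "ground/road"
    if seg["verticality"] > 0.8 and seg["area"] > 20.0:
        return "building facade"
    if seg["height"] > 1.5 and seg["area"] < 2.0:
        return "traffic symbol"
    return "unclassified"
```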
Image Sequence Analysis/Tracking
Markerless motion capture: the challenge of accuracy in capturing animal motions through model based approaches
Emiliano Gambaretto, Stefano Corazza
This paper presents the application of model-based markerless motion capture technology to general environments and quadrupeds. Some of the authors' recent results are discussed together with the open challenges related to the capture of animal motion. Despite its very recent history, markerless motion capture already represents both a valuable alternative to marker-based approaches and, in some circumstances, the only viable solution. One such case is animal capture, where positioning markers on the animal is very challenging, when possible at all. An example of markerless tracking of animal motion is shown, together with a virtual validation that provides quantitative evidence of the robustness and accuracy of the presented method.
Automated tracking of a figure skater by using PTZ cameras
Tomohiko Haraguchi, Tsuyoshi Taki, Junichi Hasegawa
In this paper, a system for automated real-time tracking of a figure skater moving on an ice rink using PTZ cameras is presented. The system is intended to support skating training, for example as a tool for recording and evaluating a skater's performance. In the system's processing procedure, the ice rink region is first extracted from a video image by a region growing method, and then one of the hole components in the obtained rink region is extracted as the skater region. If no hole component exists, the skater region is estimated from horizontal and vertical intensity projections of the rink region. Each camera is automatically panned and/or tilted so as to keep the skater region near the center of the image, and zoomed so as to keep the height of the skater region within an appropriate range. In experiments using 5 practical skating videos, the extraction rate of the skater region was almost 90%, and tracking with camera control was successful in almost all cases.
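The control logic can be sketched as below: pan/tilt steps proportional to the skater region's offset from the image centre, and a zoom step chosen to keep its height within a target band. Gains and thresholds are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the PTZ control idea (gains/thresholds are hypothetical).
def ptz_command(bbox, frame_w, frame_h, k_pan=0.05, k_tilt=0.05,
                h_min=0.25, h_max=0.45):
    """bbox: (x, y, w, h) of the skater region in pixels.
    Returns (pan_step, tilt_step, zoom_step) commands."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    pan = k_pan * (cx - frame_w / 2.0)        # positive: pan right
    tilt = k_tilt * (cy - frame_h / 2.0)      # positive: tilt down
    ratio = h / float(frame_h)
    zoom = 1 if ratio < h_min else (-1 if ratio > h_max else 0)
    return pan, tilt, zoom
```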
System Calibration and Characterization
Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry
Hirofumi Chikatsu, Yoji Takahashi
The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that such cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10 megapixel sensors have appeared on the market in Japan. In these circumstances, an important question arises: can mobile phone cameras take the place of consumer grade digital cameras in close range photogrammetric applications? In order to evaluate the potential of mobile phone cameras in close range photogrammetry, this paper presents a comparative evaluation between mobile phone cameras and consumer grade digital cameras with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicality of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to expand the market in digital photogrammetric fields.
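For readers unfamiliar with the lens distortion being compared, the sketch below shows the standard radial distortion model (Brown model with coefficients k1, k2) typically estimated in such calibration tests; it is given for context only and is not taken from the paper.

```python
# Standard radial lens distortion model with coefficients k1, k2.
def apply_radial_distortion(x, y, k1, k2):
    """x, y: ideal (undistorted) image coordinates relative to the principal
    point. Returns the distorted coordinates."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```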
Statistical analysis of measurement processes for time-of-flight cameras
Faisal Mufti, Robert Mahony
In recent years, the demand for 3D vision systems has increased in fields such as detection and recognition, motion modelling, 3D environment reconstruction, and tracking. This has motivated the development of range imaging technology, especially Time-of-Flight (TOF) cameras, which provide direct measurement of the distance between the camera and the targeted surface. These devices have an advantage over traditional range sensors in their ability to provide range data at frame rate over a full image array. The quality of measurement of these sensors depends heavily on the signal-to-noise ratio (SNR) of the incoming signal and the subsequent processing algorithms. In phase-shift TOF cameras, phase-shift sampling is used to measure the amplitude, phase and offset (intensity) of the received signal. Each of these measurements has an associated statistical distribution that affects the SNR of the TOF signal, limiting the reliability of the 3D range data. It is crucial to understand the statistical distributions of these three parameters for accurate distance measurement analysis, especially in low SNR scenarios. In this paper, we provide explicit noise models for the three parameters of amplitude, phase and intensity. We use this analysis to provide an improved estimation of error in range measurement.
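For context, the textbook four-phase (0/90/180/270 degree) demodulation below shows how amplitude, phase and offset are obtained from the correlation samples of a phase-shift TOF camera; the paper analyses the statistical distributions of exactly these three quantities.

```python
# Standard four-bucket demodulation for a phase-shift TOF camera.
import numpy as np

def demodulate_four_bucket(A0, A1, A2, A3, mod_freq):
    """A0..A3: correlation samples at 0, 90, 180, 270 degrees (arrays)."""
    c = 299_792_458.0                                   # speed of light [m/s]
    phase = np.arctan2(A3 - A1, A0 - A2)                # received phase shift
    amplitude = 0.5 * np.sqrt((A3 - A1) ** 2 + (A0 - A2) ** 2)
    offset = 0.25 * (A0 + A1 + A2 + A3)                 # background intensity
    distance = c * np.mod(phase, 2 * np.pi) / (4 * np.pi * mod_freq)
    return amplitude, phase, offset, distance
```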
Performance evaluation of macro lens in digital close range photogrammetry
Recently, the documentation and visualization of various cultural heritage objects have been receiving attention, and small Buddha statues, such as those less than 10 cm tall stored inside the body of a larger Buddha statue, are also part of this heritage. Zoom lenses are generally used to document these small objects and thus conserve the cultural heritage. However, certain issues arise in the use of zoom lenses for such digital documentation, including image sharpness and distortions that change with the focal length setting; in particular, the depth of field is an issue from an application standpoint, such as the documentation of small cultural heritage objects. Macro lenses, on the other hand, can capture sharp images of small objects at short working distances, and their depth of field is governed by the aperture of the camera. In order to evaluate the effectiveness of macro lenses in digital close range photogrammetry, a macro lens and a zoom lens were mounted on a digital single lens reflex camera (Canon EOS 20D, 8.2 megapixels). The first part of this paper presents comparative evaluations of both lenses with respect to lens distortion, imaging mode, and calibration aspects. The results indicate that macro lenses are more suitable for digital close range photogrammetry. Calibration tests are performed to demonstrate the effectiveness and practicality of macro lenses in close range photogrammetric applications.
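The dependence of close-up depth of field on aperture can be made concrete with the common approximation DOF ≈ 2·N·c·(m + 1)/m² for magnification m, f-number N and circle of confusion c. This is a general photographic formula, not one taken from the paper.

```python
# Quick close-up depth of field calculation using a common approximation.
def depth_of_field_mm(f_number, coc_mm, magnification):
    """Approximate total depth of field in millimetres for close-up imaging."""
    return 2.0 * f_number * coc_mm * (magnification + 1.0) / magnification ** 2

# Example: at 1:2 magnification (m = 0.5), f/8 and c = 0.019 mm (APS-C sensor),
# the depth of field is roughly 2*8*0.019*1.5/0.25 ≈ 1.8 mm.
```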
Three-dimensional object recognition using a monoscopic camera system and CAD geometry information
Current research conducted by the Institute for Photogrammetry at the Universitaet Stuttgart aims at determining a cylinder head's pose using a single monochromatic camera. The work is related to the industrial project RoboMAP, in which the recognition result will be used as initialization information for a robot to position other sensors over the cylinder head. For this purpose a commercially available view-based algorithm is applied, which itself requires the object's geometry as a priori information. We describe the general functionality of the approach and present the results of our latest experiments. The results show that both the accuracy and the processing time suit the project's requirements very well, provided the image acquisition is prepared properly.
Applications
Payload systems and tracking algorithms for photogrammetric measurement of parachute shape
Mark R. Shortis, Stuart Robson, Tom W. Jones, et al.
Parachute systems play a critical role in many science and military missions. Currently, NASA and the U.S. Army air delivery systems programs are evaluating measurement technologies to support experimental and qualification testing of new and modified parachute concepts. Experiments to validate the concept of parachute shape measurement have been conducted in a controlled, indoor environment using both fixed and payload cameras. The paper will provide further detail on the rationale for the experiments, the design of the payload systems, the indoor and outdoor testing, and the subsequent data analysis to track and visualise the shape of the parachute.
Combined use of photogrammetric and computer vision techniques for fully automated and accurate 3D modeling of terrestrial objects
Commercial software able to automatically create an accurate 3D model from an arbitrary sequence of terrestrial images is not currently available. This paper presents a methodology capable of processing markerless blocks of terrestrial digital images to perform a twofold task: (i) determine the exterior orientation parameters by using progressive robust feature-based matching followed by Least Squares Matching refinement and a bundle adjustment; (ii) extract dense point clouds by using multi-image matching based on diverse image primitives. The final result is an accurate surface model with characteristics similar to those achievable with range-based sensors. Throughout the processing workflow the natural texture of the object is used, so images and calibration parameters are the only inputs. The method exploits Computer Vision and Photogrammetric techniques and combines their advantages in order to automate the process while ensuring a precise and reliable solution. To verify the accuracy of the developed methodology, several comparisons with manual measurements, total station data and 3D laser scanning were also carried out.
Analysis of keratoscopic images for detecting fixational eye movements and ocular surface deformation
A sequence of videokeratoscopic images was registered using the commercially available E300 instrument at a rate of 50 fps. During the 20 second measurement, the subject's head was firmly fixed. The acquired images were analyzed to detect fixational eye movements and corneal surface deformation. For this purpose two rings were extracted from each frame and ellipses were fitted to them using the least squares method. The time series of the ellipse geometric parameters were considered: the minor and major axis lengths as well as the ellipse center and orientation. The frequency spectra of these parameters were obtained by applying the Fast Fourier Transform. The longitudinal position of the corneal apex was monitored thanks to the cone side viewer installed inside the videokeratoscope. The average amplitude of the variation of the ellipse axis lengths is around 20 μm, and that of the ellipse orientation around 0.1 rad. In the frequency characteristics of the signals, a peak corresponding to the heart rate appears. No clear relationship was found between the variations of the fitted ellipse parameters and the longitudinal position of the corneal apex. The fixational eye movements were examined using two different methods. One consists of calculating the correlation function between the first and successive frames of the sequence and searching for its maximum. The other is based on tracking the center of the ellipse fitted to a particular ring of the videokeratoscopic image. The accuracy of the second method was found to be higher. The simple methods proposed in this work can extend the application of videokeratoscopic measurements.
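A minimal sketch of the ellipse-based analysis follows: an algebraic least-squares fit of a conic to the extracted ring points, from which the ellipse centre and orientation are recovered. This is a standard approach shown for illustration, not the authors' code.

```python
# Algebraic least-squares ellipse fit and recovery of centre/orientation.
import numpy as np

def fit_ellipse(x, y):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to ring points (x, y arrays).
    Returns (centre_x, centre_y, orientation_rad)."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(D, np.ones_like(x, dtype=float), rcond=None)[0]
    # Centre: where the conic gradient vanishes.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    theta = 0.5 * np.arctan2(b, a - c)          # major-axis orientation
    return cx, cy, theta
```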