
Enhancing robot perception with 3D sensors

Advanced sensing can provide recognition and location capabilities that improve robot operation in dynamic environments.
2 November 2009, SPIE Newsroom. DOI: 10.1117/2.1200910.1840

Industrial robots are traditionally programmed to perform a single task repetitively on fixed objects. Many applications, however, require greater flexibility to accommodate a wide variety of parts in unconstrained locations. This is common in high-volume, high-flexibility manufacturing operations such as job shops, military vehicle maintenance, and aerospace manufacturing. Advanced sensing methods are required to give these robots the perception needed to operate in dynamic environments.

2D machine vision is commonly used to register, and in some cases identify, parts whose locations are unknown. It is often applied in belt-conveyor systems, where the part's position on the belt is not fixed. However, 2D methods often fail when the object's position can vary in three dimensions. Newer methods that combine single or stereo cameras with a model of the part can provide its full six-degree-of-freedom position and orientation for accurate robot guidance.1 However, these methods require prior knowledge of the part and training on its geometry. Highly flexible manufacturing operations need an advanced sensing system that can recognize and accurately locate objects without significant manual teaching. At Southwest Research Institute (SwRI), we are developing methods that use 3D spatial sensors to recognize an object by matching it to features in a model database and then registering it for robotic operations.

Our approach is based on a method outlined by Ip and Gupta2 and uses 3D sensors to generate point clouds (sets of points in space) that represent a partial surface model of the object. We used a laser line triangulation sensor for testing (see Figure 1), but other structured-light or time-of-flight systems could be used. Because most robotic guidance applications run in real time, we assume that only a single view of the part is available, so the point cloud covers only part of the object's surface. With only partial surface topology, local feature extraction is necessary to produce the attributes used to match the part to a pre-existing database.

Figure 1. A laser line scanner is used to create a 3D profile of a part.
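As a rough illustration of this sensing step, the successive profiles produced by a line scanner can be stacked into a single point cloud by stamping each profile with its position along the sweep axis. The sketch below is a minimal, hypothetical example in Python with NumPy; the function name, coordinate conventions, and sweep geometry are assumptions for illustration, not the article's implementation:

```python
import numpy as np

def scan_to_point_cloud(profiles, sweep_step):
    """Stack successive laser-line profiles into one 3D point cloud.

    profiles   : list of (N, 2) arrays, each giving (x, z) points measured
                 along one laser line in sensor coordinates (e.g., mm)
    sweep_step : distance the sensor (or part) advances between profiles
    """
    points = []
    for i, profile in enumerate(profiles):
        # Each profile lies in a plane of constant y along the sweep axis
        y = np.full((profile.shape[0], 1), i * sweep_step)
        points.append(np.hstack([profile[:, :1], y, profile[:, 1:2]]))
    return np.vstack(points)
```

A real scanner would also apply the sensor's calibration and the robot's pose to place each profile in a common frame; here the sweep is assumed to be a straight, uniform translation.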

Using a variation of the watershed algorithm,3 we segment the data into regions. The algorithm clusters the points into sets bounded by areas where the curvature changes rapidly, yielding regions with relatively consistent topology (see Figure 2). From each region, we extract features such as surface area, moment of area, centroid, and normal vector. We apply the same segmentation and feature-extraction algorithms to the database of parts. Comparing the feature sets yields high-probability matches between the database and the sensed regions, and a ranked set of candidate objects is generated from the number and quality of matches for each object.

Figure 2. A segmented and color-coded surface scan.
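Two of the per-region features mentioned above, the centroid and the normal vector, can be sketched compactly. A common way (not necessarily the authors') to estimate a region's normal is principal component analysis: the eigenvector of the region's covariance with the smallest eigenvalue points along the direction of least variance, i.e., roughly normal to the surface patch. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def region_features(points):
    """Simple matching features for one segmented region.

    points : (N, 3) array of 3D points belonging to the region
    Returns the centroid and a unit normal estimated by PCA
    (eigenvector of the covariance with the smallest eigenvalue).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # least-variance direction
    return centroid, normal
```

Note the PCA normal is defined only up to sign; a real system would orient it consistently (e.g., toward the sensor) before comparing it against database features.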

We perform two stages of verification and registration on the potential matches. The first and quickest is a least-squares fit of the centroids of the matching regions: if the two data sets match well, the optimal rigid-body transformation that aligns the centroids leaves only a small residual error. We then subject the best candidates from the least-squares alignment to the iterative closest point (ICP) algorithm,4 which aligns the data sets point by point. ICP provides the final registration transformation and also verifies the quality of the match from the distance error between the aligned surfaces. Finally, the object identity and rigid-body transformation are passed to the robot for processing.
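The least-squares rigid fit described above is a standard closed-form problem: given corresponding point pairs (here, matched region centroids), the optimal rotation can be recovered from an SVD of the cross-covariance matrix (the Kabsch method), and ICP then repeats the same fit with correspondences re-estimated by nearest neighbor at each iteration. The sketch below shows the fit and its residual, assuming NumPy; it is an illustration of the standard technique, not the article's code:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) aligning corresponding
    points src -> dst via the Kabsch/SVD method, plus RMS residual."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    aligned = src @ R.T + t
    residual = np.sqrt(np.mean(np.sum((aligned - dst) ** 2, axis=1)))
    return R, t, residual
```

In the two-stage scheme, a large residual from this fit rejects a candidate cheaply; only the survivors pay for full point-by-point ICP, which alternates nearest-neighbor matching with exactly this kind of rigid refit until the alignment converges.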

We tested the method on an experimental database of over 1,500 parts. Parts were chosen at random, and synthetic test data were generated by simulating the sensor output. Part-recognition performance exceeded 90%, and with an iterative approach we achieved nearly 100% recognition accuracy. Registration of the parts was within the accuracy of the sensor data.

To enable robots to operate in unconstrained and dynamic environments, advanced sensors and perception algorithms are necessary to provide intelligent decision-making. We have shown that low-cost 3D sensors can reliably identify unknown objects and accurately locate them for robotic processing. Future work will include creating robot motion paths in real time from the sensor data alone. We have already conducted feasibility demonstrations of fully automated painting applications that require no human teaching or intervention and that generate the robot paths automatically.

Clay Flannigan
Robotics and Automation Engineering
Southwest Research Institute
San Antonio, TX

Clay Flannigan is the manager of the Robotics and Automation Engineering Section. His interests include advanced sensing systems and robotic technologies for manufacturing applications. He has expertise in large-scale robotic systems and the application of 3D sensing and metrology. He has a BS and MS in mechanical engineering from Case Western Reserve University.