Developing 3D vision systems for transparent objects

Noncontact method captures UV-induced fluorescence to digitize clear objects.
30 July 2012
Fabrice Mériaudeau, David Fofi, Christophe Stolz and Rindra Rantoson

Machine vision systems, which entered industrial use 25 years ago, are used by manufacturers for quality and process control. Over the years, machine vision has greatly benefited from improvements in sensor resolution and spectral bands as well as from inexpensive computing resources. As a result, today's systems can process the large amount of information (including multispectral and even polarization information1) required for high-level image processing or classification schemes. Meanwhile, new products with more complex appearance and shape continue to enter the market, thus creating a need for systems that are able to recover and control the 3D aspects of complex objects.

3D scanning has been investigated for several decades, but most of the proposed approaches assume a diffuse or near-diffuse reflectance off the object's surface. The literature on the subject is usually divided into active and passive techniques. Active light techniques include laser range scanning, coded structured light systems, digital holography, and time-of-flight scanners. Passive techniques mainly use stereovision, photogrammetry, or shape measurements obtained from techniques such as shading, optical flow, motion, or focus.

In conventional 3D scanning methods, users spray a thin layer of powder onto the transparent (or mirror-surfaced) object to make the surface opaque and diffuse prior to digitization. This extra step is time consuming (the object must be cleaned afterwards) and troublesome, and the final accuracy depends on the thickness and homogeneity of the powder layer. If the vision system is being used to detect defects, some defects may be mistaken for mere non-homogeneities of the powder coating and thus missed in the final classification step.

Various methods have been developed to eliminate the need for the powder-coating procedure. Wolfgang Heidrich and colleagues published an exhaustive review in 2010,2 which we recently updated.3 Most of these methods require prior knowledge about the object or assumptions about how light interacts with the object's surface, and they are not yet ready for industrial implementation.

Among these methods, we believe that the approach with the highest potential for industrial adoption lies in obtaining the object's shape using UV light.4 The novelty of this approach lies in exploiting the fluorescence generated at the object surface when it is irradiated with a UV laser (using a specific triangulation approach associated with a fluorescence point-tracking method). We tested two experimental configurations. The first relies on point projection (see Figure 1). The second is an extension of the former in which the point is converted to a line by a hemi-cylindrical lens. Our initial setup was composed of a laser and camera, each in a fixed position, and a translation stage. The UV laser produced an elliptical beam about 2mm in diameter. The color CCD camera had a maximum spectral sensitivity between 400 and 700nm, a resolution of 480×640, a focal length of 8mm, and a lens f-number of 1.4. Under UV irradiation, the transparent surface of the object fluoresces, emitting a diffuse visible light. The bright spot (about 4×2mm on a flat surface) is then imaged by the camera and used to estimate the object's depth by triangulation from the structured light source.5 The object to be digitized is placed on a translation table that offers accurate horizontal and vertical displacements, with an error of 2% or less for a 1μm motion. Thus, the 2D coordinates X and Y are defined by the translation of the table, while the depth coordinate Z can be estimated by the triangulation scheme. See Figure 2 for results from our imaging method.
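The triangulation principle behind the point-projection configuration can be sketched as follows. This is a minimal illustration, not the authors' calibrated implementation: it assumes an idealized pinhole camera at the origin looking along the Z axis, with the laser offset by a known baseline along X and its beam at a known angle to the optical axis. All names and numbers are illustrative.

```python
import math

def fluorescence_spot_depth(x_px, f_px, baseline_mm, theta_rad):
    """Depth Z (mm) of the fluorescent spot by laser-camera triangulation.

    The camera is modeled as a pinhole at the origin looking along +Z;
    the UV laser sits at offset `baseline_mm` along X, its beam making
    angle `theta_rad` with the optical axis. The spot's image column
    `x_px` (measured from the principal point, focal length `f_px` in
    pixels) fixes the camera ray; intersecting that ray with the known
    laser ray gives the depth:

        Z = b * f / (x + f * tan(theta))
    """
    return baseline_mm * f_px / (x_px + f_px * math.tan(theta_rad))
```

In the setup described above, X and Y come directly from the translation table, so each acquisition contributes one (X, Y, Z) point to the surface map.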


Figure 1. Experimental configuration for digitizing a transparent object using UV-induced fluorescence. Only the support of the object moves. The UV laser induces a spot on the surface of the object to fluoresce in the visible, and the camera images the fluorescence. As the support moves, the fluorescence data is used to build a map of the surface. A similar system can generate a UV line, by placing a hemi-cylindrical lens between the laser and the object.

Figure 2. a) Plastic bottle. b) 3D model of the plastic bottle, digitized with our method. c) The associated error map, with a mean error of 80μm and standard deviation of 90μm.

In a variation on that scheme, we also added a cylindrical lens to create a UV line (effective size ∼120×1mm on a flat surface). The initial beam is enlarged into a stripe that maintains a Gaussian intensity distribution, while the estimated surface power density of the UV laser striking a point on the object is reduced by a factor of 100. However, fluorescence is still induced on the transparent surface and visible to the camera. Therefore, we can apply a triangulation approach based on a structured UV source stripe6 to ascertain the object's 3D shape. This design reduces the acquisition time as well as the processing time. We present results for a line-generated digitization in Figure 3.
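Because the stripe keeps a Gaussian intensity profile, its image position can be located with sub-pixel precision before triangulation. The sketch below shows one common approach for structured-light stripes, an intensity-weighted centroid per image row; it is an illustrative stand-in, not the authors' exact extraction method, and the threshold value is an assumption.

```python
import numpy as np

def stripe_peaks(image, min_intensity=10.0):
    """Locate a roughly vertical fluorescent stripe with sub-pixel precision.

    For each image row, the stripe's Gaussian intensity profile is
    summarized by its intensity-weighted centroid across columns.
    Rows whose peak intensity falls below `min_intensity` are treated
    as having no fluorescence signal and are skipped. Returns a list
    of (row, column) pairs; each column can feed the same triangulation
    used for the single spot, so one frame yields a whole profile.
    """
    peaks = []
    cols = np.arange(image.shape[1], dtype=float)
    for r, row in enumerate(np.asarray(image, dtype=float)):
        if row.max() < min_intensity:
            continue  # no usable fluorescence in this row
        centroid = float((cols * row).sum() / row.sum())
        peaks.append((r, centroid))
    return peaks
```

Running this per frame while the table translates produces the dense point cloud shown in Figure 3, at a fraction of the acquisition time of point-by-point scanning.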


Figure 3. (left) Original glass. (center) 3D reconstruction obtained with line projection. (right) Associated error map, which shows a mean error of 140μm and standard deviation of 120μm.

We tested more than ten different objects of various shapes, thicknesses, and materials (including both glass and plastic).4 The mean deviation error of our method can be as low as 100μm, which is highly accurate and indicates the potential of this method. (This is better than measurements obtained by a commercial scanner of transparent objects coated with powder.) We are now developing a commercial device using these techniques with an industrial company for quality control inspection of transparent objects.


Fabrice Mériaudeau, David Fofi, Christophe Stolz, Rindra Rantoson
Laboratory for Electronics, Information and Imaging (Le2i)
University of Burgundy
Le Creusot, France

Fabrice Mériaudeau is a university professor and the director of the Le2i. His research interests focus on image processing for artificial vision inspection, particularly systems using nonconventional imaging modalities such as UV, IR, or polarization, as well as medical imaging.

David Fofi is a professor, head of the Computer Vision department of the Le2i, and coordinator of the Erasmus Mundus Masters in Vision and Robotics. His research interests include multiple-view geometry, catadioptric vision, projector-camera systems, and structured light. He has participated in and led several French and European projects in the field of computer vision.

Christophe Stolz received his MSc degree in automatics and industrial computing systems in 1995 from the Upper Alsace University in Mulhouse, France. He obtained his PhD in optical signal processing from the same university in 2000. In 2001, he worked as a research assistant in the Photonic Systems Laboratory (LSP) at Louis Pasteur University in Strasbourg, France. The same year he was appointed assistant professor in the Le2i on the 3D Vision team. His research mainly concerns optical and digital image processing, specifically polarimetric methods applied to shape measurement and to 2D or 3D quality control.

Rindra Rantoson received her MS in applied mathematics, specializing in scientific computation, from Toulouse University, France in 2005, followed by her MS in imaging and spatial remote sensing in 2006 from the same university. She obtained her PhD with the Le2i in November 2011, in conjunction with Glaizer Group, a private company for which she has been working since her master's graduation. Her PhD topic was 3D reconstruction of transparent objects through noncontact measurement, for which polarization, stereovision, and basic triangulation techniques were explored and extended to wavelengths beyond the visible range.


References:
1. http://www.fluxdata.com/. Company website. Accessed 26 July 2012.
2. I. Ihrke, K. N. Kutulakos, H. P. A. Lensch, M. Magnor, W. Heidrich, Transparent and specular object reconstruction, Comp. Graphics Forum 29(8), p. 2400-2426, 2010.
3. F. Mériaudeau, R. Rantoson, D. Fofi, C. Stolz, Review and comparison of nonconventional imaging systems for 3D digitization of transparent objects, J. Electron. Imaging 21(2), p. 021105, 2012. doi:10.1117/1.JEI.21.2.021105
4. R. Rantoson, C. Stolz, D. Fofi, F. Meriaudeau, Optimization of transparent objects digitization from visible fluorescence UV-induced, Opt. Eng. 51(2), p. 033601, March 2012.
5. F. Marzani, Y. Voisin, L. F. L. Y. Voon, A. Diou, Calibration of a 3D reconstruction system using a structured light source, Opt. Eng. 41(2), p. 484-492, 2002. doi:10.1117/1.1427673
6. C. H. Chen, A. C. Kak, Modeling and calibration of a structured light scanner for 3D robot vision, Proc. IEEE Conf. on Robotics and Automation, p. 807-815, 1987. doi:10.1109/ROBOT.1987.1087958