Proceedings Volume 10991

Dimensional Optical Metrology and Inspection for Practical Applications VIII


Volume Details

Date Published: 26 July 2019
Contents: 9 Sessions, 22 Papers, 19 Presentations
Conference: SPIE Defense + Commercial Sensing 2019
Volume Number: 10991

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10991
  • 3D Metrology Applications
  • Metrology Analysis I
  • Metrology Analysis II
  • 3D Methods I
  • 3D Methods II
  • 3D Methods III
  • Metrology for Additive Manufacturing I
  • Metrology for Additive Manufacturing: Critical Technology Review
Front Matter: Volume 10991
This PDF contains the front matter associated with SPIE Proceedings Volume 10991, including the title page, copyright information, table of contents, and author and committee lists.
3D Metrology Applications
Wide-field 3D imaging with an LED pattern projector for accurate skin feature measurements via Fourier transform profilometry
Accurate 3D imaging of human skin features with structured light methods is hindered by subsurface scattering, the presence of hairs, and patient movement. In this work, we propose a wide-field 3D imaging system capable of reconstructing large areas, e.g., the whole surface of the forearm, with an axial accuracy on the order of 10 microns for measuring scattered skin features such as lesions. By pushing the limits of grating projection we obtain high-quality fringes within a limited depth of field. We use a second projector for accurate positioning of the object. With two or more cameras we achieve independent 3D reconstructions that are automatically merged in a global coordinate system. With the positioning strategy, we acquire two consecutive images for absolute phase retrieval using Fourier transform profilometry to ensure accurate phase-to-height mapping. Encouraging experimental results show that the system is able to precisely measure skin features scattered over a large area.
Toward an automatic 3D measurement of skin wheals from skin prick tests
Andrés G. Marrugo, Lenny A. Romero, Jesus Pineda, et al.
The skin prick test (SPT) is the standard method for the diagnosis of allergies. It consists of placing an array of allergen drops on the skin of a patient, typically the volar forearm, and pricking them with a lancet to provoke a specific dermal reaction described as a wheal. The diagnosis is performed by measuring the diameter of the skin wheals, although wheals are not usually circular, which leads to measurement inconsistencies. Moreover, the conventional approach is to measure their size with a ruler, a method that has proven prone to inter- and intra-observer variation. We have developed a 3D imaging system for the 3D reconstruction of the SPT. Here, we describe the proposed method for the automatic measurement of the wheals based on 3D data processing to yield reliable results. The method is based on a robust parametric fit to the 3D data that provides the diameter directly. We evaluate the repeatability of the system by performing 3D reconstructions for different object poses. Although the system provides higher measurement accuracy, we also compare the results to those produced by a physician.
In-situ measurement of aspherics with sub-aperture deflectometry for precision optical manufacturing
Xiangchao Zhang, Xueyang Xu, Zhenqi Niu, et al.
The measurement of aspheric optics has attracted considerable attention in precision engineering, and efficient in-situ measurement technologies are urgently required. Phase measuring deflectometry is a powerful measuring method for complex specular surfaces. In this paper, an in-situ measurement method is developed based on sub-aperture deflectometry. A complete measuring procedure is developed, including initial calibration, self-adaptive calibration, route planning, image acquisition, phase retrieval, gradient calculation, surface reconstruction, and sub-aperture stitching. Several key points concerning the sub-aperture measurement are investigated, and effective solutions are proposed to balance the measuring accuracy and aperture, to overcome the height/slope ambiguity, and to eliminate the stitching errors caused by point sampling and measuring errors. The measuring flexibility and stability can be greatly improved compared to the existing SCOTS measuring approach.
Metrology Analysis I
Sources of errors in structured light 3D scanners
Prem Rachakonda, Bala Muralikrishnan, Daniel Sawyer
Structured light scanners have been commercially available for over a decade, and some commercial scanners are evaluated using one of two German guidelines, VDI/VDE 2634 parts 2 and/or 3. Several other research groups have developed physical artifacts that are agnostic to instrument construction and purpose-driven. The use of such guidelines and artifacts is not well understood for instruments that have a variety of sensor configurations and capabilities. It is also not clear if these guidelines/artifacts are sensitive to all the sources of error present in these systems. In this context, this paper will describe the ongoing activities at NIST to study various sources of error in structured light scanners with the objective of characterizing their performance.
Comparing Hilbert transform profilometry and Fourier transform profilometry (Conference Presentation)
Three-dimensional (3D) shape measurement methods based on fringe analysis can achieve high resolution and high accuracy. In Fourier transform profilometry (FTP), a single fringe pattern is sufficient to recover the carrier phase for 3D shape measurement. Essentially, the FTP method applies the Fourier transform to a fringe image and extracts the desired carrier phase by applying a band-pass filter. Though successful, the single-pattern FTP method has the following major limitations: 1) it is sensitive to noise; 2) it is difficult to accurately measure an object surface with strong texture variations; and 3) it is difficult to measure detailed complex surface structures. To alleviate the influence of the averaged background (i.e., DC) signal, the modified FTP method was proposed, which takes another fringe pattern to remove the DC component. Even though it is more robust, the modified FTP method still cannot achieve high accuracy for complex surface geometry or objects with strong texture. This is because, to properly recover the carrier phase, FTP requires a properly designed filter, and the carrier phase might be polluted by surface texture or geometry. Hilbert transform profilometry, in contrast, is based on an inherent property of the Hilbert transform: it shifts the phase of a sine function by $\pi/2$. For a fringe pattern without a DC component, the phase can be directly retrieved using the Hilbert transform without filtering. This paper examines the differences between these two methods and presents both simulation and experimental comparison results.
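To make the contrast concrete, the following minimal sketch (our illustration, not the authors' implementation) extracts the wrapped carrier phase of a synthetic, DC-free fringe signal in both ways; the carrier frequency, test phase, and band-pass width are arbitrary assumptions, and numpy/scipy are assumed to be available.

    # Illustrative sketch only: wrapped carrier-phase extraction from a DC-free
    # synthetic fringe via (a) Fourier band-pass filtering (FTP) and (b) the
    # Hilbert transform.  Carrier frequency, phase, and filter width are assumed.
    import numpy as np
    from scipy.signal import hilbert

    x = np.arange(1024)
    f0 = 1.0 / 16.0                               # assumed carrier frequency (cycles/pixel)
    phi = 0.5 * np.sin(2 * np.pi * x / 400.0)     # assumed test phase
    fringe = np.cos(2 * np.pi * f0 * x + phi)     # DC component already removed

    # (a) FTP: keep only the +f0 lobe with a band-pass filter, then take the angle
    F = np.fft.fft(fringe)
    freqs = np.fft.fftfreq(x.size)
    band = (freqs > 0.5 * f0) & (freqs < 1.5 * f0)
    phase_ftp = np.angle(np.fft.ifft(F * band))

    # (b) Hilbert: the analytic signal shifts the sine component by pi/2, so its
    # angle yields the wrapped carrier phase directly, with no filter design
    phase_hilbert = np.angle(hilbert(fringe))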
Motion-induced error reduction for phase shifting profilometry using double-shot-in-single-illumination technique
This research proposes a motion-induced error reduction method for phase shifting profilometry. In particular, each illuminated fringe pattern will be captured twice in one projection cycle when imaging a highly dynamic scene, resulting in two sets of phase-shifted fringe images. A phase map will be computed for each phase shifting set in preparation for error analysis. Finally, motion-induced phase errors will be compensated by examining the difference between the two phase maps obtained from the two phase shifting sets. This method uses defocused 1-bit binary patterns to bypass rigid camera-projector synchronization, which has potential for high-speed applications.
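As a rough illustration of the double-shot idea (the authors' error model is not reproduced here), the sketch below computes a wrapped phase map from each of the two N-step image sets and uses their difference as a simple error estimate; numpy is assumed, and wrapping across the 2*pi boundary is ignored for brevity.

    # Simplified sketch of the double-shot idea; numpy assumed, 2*pi wrapping ignored.
    import numpy as np

    def nstep_phase(images):
        """Wrapped phase from an N-step set with equal phase shifts of 2*pi/N."""
        deltas = 2 * np.pi * np.arange(len(images)) / len(images)
        num = sum(I * np.sin(d) for I, d in zip(images, deltas))
        den = sum(I * np.cos(d) for I, d in zip(images, deltas))
        return np.arctan2(-num, den)

    def compensate(set_a, set_b):
        """Combine the two phase maps; their difference serves as the error estimate."""
        phi_a = nstep_phase(set_a)
        phi_b = nstep_phase(set_b)
        error_estimate = 0.5 * (phi_b - phi_a)    # crude stand-in for the paper's analysis
        return phi_a + error_estimate             # equivalent to averaging the two maps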
Efficient correspondence search algorithm for GOBO projection-based real-time 3D measurement
Patrick Dietrich, Stefan Heist, Peter Lutzke, et al.
Many robot-operated automation tasks require real-time reconstruction of accurate 3D data. While our sensors, which are based on GOBO projection-aided stereo matching between two cameras, allow for high acquisition frame rates, the 3D reconstruction calculation is very time-consuming. In order to find corresponding pixels between the cameras, it is necessary to search for the best match among all pixels within the geometrically possible image area. The well-established method for this search is to compare each candidate pixel by temporal cross-correlation of the brightness-value sequences of both pixels. This is computationally intensive and precludes fast, real-time applications on standard PC hardware. We introduce a new algorithm that reduces the number of calculations needed to compare two pixels down to two binary operations per comparison. To achieve this, we pre-calculate a bit string of binary features for each pixel of both cameras. Two pixels can then be compared by counting the number of bits that differ between the two bit strings. Our algorithm's results are accurate to a few pixels and require a second, cross-correlation-based refinement. In practice, our algorithm (including the pre-calculation and refinement steps) is much faster than a traditional, purely cross-correlation-based search, while maintaining a similar level of accuracy.
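The comparison step described above can be illustrated with a short sketch: each pixel's temporal brightness sequence is reduced to a packed bit string, and two candidate pixels are compared with one XOR and one bit count. The feature definition below (thresholding against the temporal mean) is our assumption, not necessarily the paper's; numpy is assumed.

    # Sketch of the bit-string comparison; the feature definition is an assumption.
    import numpy as np

    def binary_features(sequence, axis=-1):
        """Pack one bit per sample: 1 where the value exceeds the temporal mean."""
        bits = (sequence > sequence.mean(axis=axis, keepdims=True)).astype(np.uint8)
        return np.packbits(bits, axis=axis)

    def hamming(a, b):
        """Number of differing bits between two packed bit strings (XOR + popcount)."""
        return int(np.unpackbits(np.bitwise_xor(a, b)).sum())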
Metrology Analysis II
Three-dimensional shape measurement of specular object with discontinuous surfaces by direct phase measuring deflectometry
Phase measuring deflectometry (PMD) has been widely studied to obtain the three-dimensional (3D) shape of specular surfaces. Due to the procedure of slope integration, complicated specular components having discontinuous surfaces cannot be measured by the existing PMD methods. This paper presents a novel direct PMD (DPMD) method to solve the problem of measuring discontinuous specular objects. A mathematical model is established to directly relate the absolute phase and depth data, and a hardware measuring system has been set up. The system parameters are calibrated by using a plane mirror and a translation stage. The 3D shapes of an artificial specular step, a monolithic multi-mirror array having multiple specular surfaces, and a reflective diamond-distribution surface have been measured. The experimental results verify that the proposed DPMD-based method successfully measures the full-field 3D shape of specular objects having discontinuous surfaces accurately and effectively.
Fringe analysis based on convolutional neural networks (Conference Presentation)
Over the past few decades, tremendous efforts have been devoted to developing various techniques for fringe analysis, and they can be broadly classified into two categories: (1) phase-shifting (PS) methods, which require multiple fringe patterns to extract phase information, and (2) spatial phase demodulation methods, which allow phase retrieval from a single fringe pattern, such as the Fourier transform (FT), windowed Fourier transform (WFT), and wavelet transform (WT) methods. Compared with spatial phase demodulation methods, the multiple-shot phase-shifting techniques are generally more robust and can achieve pixel-wise phase measurement with higher resolution and accuracy. Furthermore, phase-shifting measurements are quite insensitive to non-uniform background intensity and fringe modulation. Nevertheless, due to their multi-shot nature, these methods are difficult to apply to dynamic measurements and are more susceptible to external disturbance and vibrations. Thus, for many applications, phase extraction from a single fringe pattern is desired, which falls under the purview of spatial fringe analysis. Here, we demonstrate experimentally, for the first time to our knowledge, that the use of convolutional neural networks can substantially enhance the accuracy of phase demodulation from a single fringe pattern. Deep learning is a powerful machine learning technique that employs artificial neural networks with multiple layers of increasingly richer functionality. The effectiveness of the proposed method is verified using carrier fringe patterns in the scenario of fringe projection profilometry. Experimental results demonstrate its superior performance, in terms of accuracy and edge preservation, over two representative single-frame techniques: Fourier transform profilometry and windowed Fourier profilometry.
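As an illustration only (the authors' network architecture is not reproduced), a fringe-analysis CNN of this kind can be sketched as a small fully convolutional network that maps a single fringe image to two output maps from which a wrapped phase is obtained via the arctangent; the layer widths below are arbitrary assumptions, and PyTorch is assumed.

    # Illustrative architecture only; layer widths are assumptions.  Requires PyTorch.
    import torch
    import torch.nn as nn

    class FringeCNN(nn.Module):
        def __init__(self, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, 2, 3, padding=1),    # numerator and denominator maps
            )

        def forward(self, fringe):                    # fringe: (B, 1, H, W)
            num_den = self.net(fringe)
            # wrapped phase, shape (B, H, W)
            return torch.atan2(num_den[:, 0], num_den[:, 1])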
Bi-frequency temporal phase unwrapping using deep learning
In fringe projection profilometry (FPP), multi-frequency phase unwrapping, a classical algorithm for temporal phase unwrapping (TPU), can eliminate phase ambiguities and obtain the unwrapped phase with the aid of additional wrapped phase maps with different fringe periods. However, the principle of multi-frequency phase unwrapping requires multiple groups of fringe patterns with different fringe periods to eliminate the phase ambiguities of the high-frequency wrapped phase, which is not suitable for high-speed 3D measurement. If only two fringe patterns with different frequencies are projected, the reliability of multi-frequency phase unwrapping decreases significantly. Inspired by deep learning techniques, in this study we demonstrate that deep neural networks can learn to perform temporal phase unwrapping after appropriate training, which substantially improves the reliability of phase unwrapping compared with the traditional multi-frequency TPU approach, even when high-frequency fringe patterns are used. In our experiment, a challenging problem in TPU is that the wrapped phase of 64-period fringe patterns cannot be directly unwrapped using only a single low-frequency phase map, but it is easily resolved by our method. Experimental results demonstrate that the proposed temporal phase unwrapping method using deep learning provides the best unwrapping reliability, enabling absolute 3D measurement of objects with complex surfaces.
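For reference, the conventional bi-frequency temporal phase unwrapping that the learned approach is compared against can be written in a few lines (the notation and function name below are ours): the fringe order of the wrapped high-frequency phase is recovered by scaling the absolute low-frequency phase and rounding.

    # Conventional bi-frequency temporal phase unwrapping (reference method); numpy assumed.
    import numpy as np

    def unwrap_bifrequency(phi_high_wrapped, phi_low_abs, f_high, f_low):
        """Absolute high-frequency phase from its wrapped map and an absolute low-frequency map."""
        k = np.round((phi_low_abs * f_high / f_low - phi_high_wrapped) / (2 * np.pi))
        return phi_high_wrapped + 2 * np.pi * k       # k is the recovered fringe order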
Fitting contrast by least square method for phase-shifting interferometry of unknown and arbitrary phase-steps under high non-uniform illumination
This work presents a very robust and non-iterative algorithm for phase retrieval in phase-shifting interferometry with three unknown and unequal phase steps under illumination conditions of high spatial non-uniformity. First, the background light is eliminated by subtracting two interferograms, thereby obtaining two secondary patterns. Second, the object phase is algebraically eliminated from the two secondary patterns to obtain a single equation in three unknowns: the modulation light and the two phase steps. Third, the square of the modulation light is approximated by a polynomial of degree K, and we demonstrate that the equation can then be rewritten in the form of an error function. Fourth, the coefficients of the modulation approximation and the phase steps are computed by applying the least squares method. Some advantages of this approach are its capacity to support high spatial variations in the illumination and in the object phase.
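The least-squares step can be illustrated generically (the paper's specific error function is not reproduced here): a degree-K polynomial is fitted to a spatially varying quantity, such as a one-dimensional profile of the squared modulation, via a linear design matrix.

    # Generic least-squares polynomial fit; not the paper's specific error function.
    import numpy as np

    def fit_polynomial(x, y, K):
        """Least-squares coefficients of a degree-K polynomial y(x)."""
        A = np.vander(x, K + 1)                       # design matrix [x^K ... x 1]
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs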
3D Methods I
Single-shot 3D shape reconstruction using multi-wavelength pattern projection
Chen Zhang, Anika Brahm, Andreas Breitbarth, et al.
This paper presents an approach for single-shot 3D shape reconstruction using a multi-wavelength array projector and a stereo-vision setup of two multispectral snapshot cameras. Thus, a sequence of six to eight aperiodic fringe patterns can be simultaneously projected at different wavelengths by the developed array projector and captured by the multispectral snapshot cameras. For the calculation of 3D point clouds, a computational procedure for pattern extraction from single multispectral images, denoising of multispectral image data, and stereo matching is developed. In addition, a proof-of-concept is provided with experimental measurement results, showing the validity and potential of the proposed approach.
Real-time high dynamic range 3D scanning with RGB camera
This paper introduces a novel real-time high dynamic range 3D scanning method with an RGB camera, which utilizes the camera's varying color sensitivity and the projector's dark time to alleviate saturation-induced measurement error. The differing color responses of the R, G, and B channels create three different intensity levels, and an additional capture at the projector's bright-to-dark transition state doubles the total number of intensity levels to six. Finally, saturation-induced errors can be alleviated by choosing the unsaturated pixel of best quality among the images at the six intensity levels. Experimental results will be presented to demonstrate the success of such a method.
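The final selection step lends itself to a compact sketch (our illustration, not the authors' pipeline): given the same fringe captured at several effective intensity levels, each pixel takes the brightest value that is not saturated; the saturation threshold below is an assumed value, and numpy is assumed.

    # Per-pixel selection of the brightest unsaturated value; threshold is assumed.
    import numpy as np

    def select_unsaturated(stack, saturation=250):
        """stack: (L, H, W) array of the same fringe captured at L intensity levels."""
        levels = stack.astype(np.int32)
        levels[levels >= saturation] = -1             # mark saturated pixels invalid
        best = levels.argmax(axis=0)                  # brightest unsaturated level per pixel
        return np.take_along_axis(stack, best[None], axis=0)[0]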
Pattern projection in the short-wave infrared (SWIR): accurate, eye-safe 3D shape measurement
Stefan Heist, Martin Landmann, Martin Steglich, et al.
3D sensors based on pattern projection are a popular measuring instrument for the three-dimensional acquisition of people, e.g., for identification purposes or for human-machine interaction. State-of-the-art sensors typically project the pattern(s) at a wavelength of 850 or 940 nm. Although illumination at these wavelengths is barely perceptible or completely imperceptible to the human eye, almost 80% or 50%, respectively, of the incident radiation reaches the retina. The short-wave infrared (SWIR) is well suited to making the 3D measurement of faces not only free of disturbance, but also considerably easier to keep below the limits for retinal exposure. For instance, at a wavelength of 1450 nm, only a negligible portion of the incident radiation reaches the lens of the eye, let alone the retina. Since the terrestrial solar spectrum has a minimum in this wavelength range, the susceptibility to natural ambient light is also reduced. We have therefore developed an SWIR 3D scanner, which we present and characterize in this article.
Large-volume NIR pattern projection sensor for continuous low-latency 3D measurements
For continuous, low-latency, irritation-free 3D measurements in large volumes, dot-pattern- or time-of-flight-based sensors have traditionally been used. However, their measurement accuracy and temporal stability limit their application in demanding medical or industrial contexts, and practical solutions also need to remain cost-effective. To meet these requirements, we started from a simple GOBO-based 3D sensor with aperiodic sinusoidal pattern projection (using a near-infrared (NIR) LED) designed for medium-sized measurement volumes. By tuning the system for large-volume operation, we obtained a reasonable combination of measurement accuracy and speed. The current realization covers a volume of up to 4.0 m x 2.2 m x 1.5 m (width x height x depth). The 3D data is acquired at up to 20 fps at resolutions of up to 1000 x 500 px with true end-to-end latencies below 140 ms. We present the system architecture, consisting of GigE Vision cameras, a high-power LED-driven projection unit using a GOBO wheel, and the compute backend for the online GPU-based temporal-pattern-correlation 3D calculation and filtering. To compensate for the low pattern intensity due to the short exposure time, we operate the cameras in 2x2 binning. Furthermore, the optics are tuned for large apertures to maximize light throughput. We characterize the sensor system with respect to measurement quality through quantitative evaluations including probing error, sphere-spacing error, and flatness measurement error. By comparison with another 3D sensor as a baseline, we show the benefits of our approach. Finally, we present measurement examples from human-machine interface (HMI) applications.
Methods for addressing multiple reflections in a structured light profiler
Structured light, using either laser lines or projected patterns, has gained wide use in profiling parts ranging from extrusions to complex assemblies. A line of light is projected from one angle and viewed from another to provide a view of the profile of the surface using the principles of triangulation. There can be many sources of noise in these systems, such as speckle or surface texture, which have been addressed by numerous methods. One of the more difficult challenges often encountered with structured light systems is the presence of reflections of the line of light that are not expected or wanted. These outlier reflections may be due to multiple reflections from one surface to another or, in the case of transparent surfaces, from other surfaces behind the surface being contoured. This paper will discuss these challenges and the commonly used assumptions that may not be sufficient to sort out the right light profile, then present several new methods that allow the separation of the desired profile data from the noise.
3D Methods II
High dynamic range 3D shape measurement based on multispectral imaging
High-speed and high-accuracy three-dimensional (3D) measurement plays an important role in numerous areas. Recently proposed binary defocusing techniques have enabled a speed breakthrough by utilizing 1-bit binary fringe patterns with the advanced digital light processing (DLP) projection platform. To enhance phase quality and measurement accuracy, extensive research has also been conducted to modulate and optimize the binary patterns spatially or temporally. However, it remains challenging for such techniques to measure objects with a high dynamic range (HDR) of surface reflectivity. To overcome this problem, this paper proposes a novel HDR 3D measurement method based on spectral modulation and multispectral imaging. By modulating the illumination light and acquiring the fringe patterns with a multispectral camera, high-contrast HDR fringe imaging and 3D measurement can be achieved. Experiments were carried out to demonstrate the effectiveness of the proposed strategy.
Multi-axis heterodyne interferometry for simultaneous observation of 5 degrees of freedom using a single beam
James Perea, Brad Libbey, George Nehmetallah
A multi-axis heterodyne interferometer concept is under development for observation of five degrees of dynamic freedom using a single illumination source. This paper presents a laboratory system that combines elements of heterodyne Doppler vibrometry, holography, and digital image correlation to simultaneously quantify in-plane translation, out-of-plane rotation, and out-of-plane displacement at the nanometer scale. The sensor concept observes a dynamic object by mixing a single optical field with heterodyne reference beams and collecting the combined fields at the image and Fourier planes simultaneously. Polarization and frequency multiplexing are applied to separate two segments of a receive Mach-Zehnder interferometer. Different optical configurations are utilized: one segment produces a focused image of the optical field scattered off the object, while the other produces an optical Fourier transform of that field. Utilizing the amplitude and phase from each plane allows quantification of multiple components of transient motion using a single, orthogonal beam.
Simultaneous high-speed measurement of 3D surface shape and temperature
Martin Landmann, Stefan Heist, Patrick Dietrich, et al.
Pattern projection-based three-dimensional (3D) measurement systems are widely used for contactless, nondestructive, and full-field optical 3D shape measurements. 3D reconstruction is performed between one camera and the projector or between two cameras by detection and triangulation of corresponding image points. In order to record fast processes, such as people in motion or even explosions and crashes, we have recently developed a 3D stereo sensor consisting of two high-speed cameras and a GOBO projection-based high-speed pattern projector. The system, which works in the visible wavelength range (VIS), enables us to successfully reconstruct 3D point clouds of athletes in action, crash tests, or airbag inflations. However, as such processes usually exhibit local temperature changes, simultaneously measuring the surface temperature would be of great benefit. Therefore, we have extended our existing high-speed 3D sensor by including an additional high-speed longwave infrared (LWIR) camera, which detects radiation in the spectral range between 7.5 and 12 μm. The setup allows us to map the recorded temperature data onto the reconstructed 3D points. We present the design of this novel 5D (three spatial coordinates, temperature, and time) sensor and the process of simultaneously calibrating the VIS cameras and the LWIR camera in a common coordinate system. Moreover, we show first promising measurements of an inflating airbag and a basketball player conducted at a frame rate of 1 kHz.
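The temperature-mapping step can be sketched under a standard pinhole-camera assumption (this is our illustration, not the authors' code): once the LWIR camera's intrinsics and pose are calibrated in the common coordinate system, each reconstructed 3D point is projected into the LWIR image and the temperature at that pixel is attached to the point.

    # Pinhole-projection sketch for attaching LWIR temperatures to 3D points; numpy assumed.
    import numpy as np

    def map_temperature(points_xyz, K, R, t, lwir_image):
        """points_xyz: (N, 3); K: 3x3 LWIR intrinsics; R, t: LWIR pose; lwir_image: temperature map."""
        cam = R @ points_xyz.T + t.reshape(3, 1)      # world -> LWIR camera frame
        uv = (K @ cam)[:2] / cam[2]                   # perspective projection to pixels
        u = np.clip(np.round(uv[0]).astype(int), 0, lwir_image.shape[1] - 1)
        v = np.clip(np.round(uv[1]).astype(int), 0, lwir_image.shape[0] - 1)
        return lwir_image[v, u]                       # one temperature per 3D point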
3D Methods III
Dual-mode snapshot interferometric system for on-machine metrology (Conference Presentation)
We present a dual-mode snapshot interferometric system (DMSIS) for measuring both surface shape and surface roughness to meet the urgent need for on-machine metrology in optical fabrication. Two different modes, an interferometer mode and a microscopy mode, are achieved using a Linnik configuration. To realize snapshot measurement, a pixelated polarization camera is used to capture four phase-shifted interferograms simultaneously. We have demonstrated its performance for off-line and on-machine metrology by mounting it on a diamond turning machine.
Compact snapshot freeform null testing with adaptive optics (Conference Presentation)
We present a snapshot, adaptive null interferometric system for measuring freeform surfaces that uses a deformable mirror as the null corrector to increase the measurement range. To compensate for the wavefront of different surfaces, a computer-controlled deformable mirror is used as an adaptive wavefront corrector. A deformable mirror control algorithm based on the stochastic parallel gradient descent algorithm has been developed to drive the deformable mirror to null the interference fringes. Snapshot phase measurement is employed in the optimization process to increase the iteration speed. The surface shape of the deformable mirror is measured by a deflectometry system to calculate the shape of the surface under test.
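For illustration, a minimal stochastic parallel gradient descent (SPGD) loop for driving the deformable mirror toward a null fringe might look as follows; the gain, perturbation size, and the hypothetical apply_and_measure callback (set actuator voltages, grab a snapshot phase map, return a scalar fringe metric to minimize) are all assumptions, not the authors' parameters.

    # Minimal SPGD loop; gain, perturbation size, and the measurement callback are assumed.
    import numpy as np

    def spgd(apply_and_measure, n_actuators, gain=0.5, sigma=0.02, iterations=200):
        """apply_and_measure(u) sets actuator commands u and returns a scalar fringe metric."""
        u = np.zeros(n_actuators)                     # actuator command vector
        for _ in range(iterations):
            du = sigma * np.random.choice([-1.0, 1.0], n_actuators)   # random perturbation
            j_plus = apply_and_measure(u + du)
            j_minus = apply_and_measure(u - du)
            u -= gain * (j_plus - j_minus) * du       # parallel gradient-estimate update
        return u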
Metrology for Additive Manufacturing I
X-ray computed tomography instrument performance evaluation: Detecting geometry errors using a calibrated artifact
Bala Muralikrishnan, Meghan Shilling, Steve Phillips, et al.
X-ray computed tomography (XCT) is uniquely suitable for non-destructive dimensional measurements of delicate or internal features produced, for example, by additive manufacturing. While XCT has long been used in medical imaging, it has been used for industrial dimensional measurements only in recent years. The error sources in XCT of industrial components are still a topic of active research. One subgroup of potential error sources in XCT measurements is uncorrected XCT instrument geometry errors, such as detector misalignment or rotation stage errors; these are the focus of this paper. We demonstrate the effect of some instrument geometry errors on measurements performed on a calibrated artifact and compare the results to those obtained through simulations. The overall objective of this work is to support ongoing efforts to develop documentary national and international standards for performance evaluation of XCT instruments. In this study, we focus on cone-beam XCT instruments.
3D on machine metrology for conformal printing of conductors and dielectrics onto complex 3D surfaces
Rajesh Ramamurthy, Harry Chiu, Kevin Harding, et al.
This report evaluates some of the challenges faced with 2D camera-based on-machine metrology and the potential of using 3D sensors for such Direct Write applications. Specifically, in order to fully exploit 3D direct write technology on surfaces inclined more than 45 degrees to the print direction, non-planar motion employing 4th and 5th rotary axes is often necessary. This report will outline a procedure for high-accuracy rotary axis calibration. Furthermore, the use of an online metrology solution to enable tuning of the rotary axes as well as online print characterization will be detailed. These efforts provide a fresh impetus to the use of 3D sensors for on-machine monitoring applications in additive manufacturing.
Metrology for Additive Manufacturing: Critical Technology Review
Benchmark measurements for additive manufacturing of metals (Conference Presentation)
Additive manufacturing (AM) of metals is a rapidly growing advanced manufacturing paradigm that promises unparalleled flexibility in the production of parts with complex geometries. However, the extreme processing conditions create position-dependent microstructures, residual stresses, and properties that complicate certification. Quantitative modeling of these characteristics is critical, but model validation requires rigorous measurements including comprehensive in situ monitoring of the melt pool behavior, along with microstructure, residual stress, and property characterizations. Ideally, such benchmark measurements must be accepted broadly by the international AM community so that meaningful comparisons can be made. I will describe our establishment of the Additive Manufacturing Benchmark Test Series (AM-Bench), a continuing series of highly controlled benchmark measurements for additive manufacturing that modelers around the world are now using to test their simulations.
Evaluation of technologies for autonomous visual inspection of additive manufacturing (AM)
Greg A. Finney, Christopher M. Persons, Jacob R. Whitten, et al.
IERUS Technologies, under subcontract to Tethers Unlimited, is developing a machine vision inspection system for the validation of metallic components additively manufactured in space. The effort has begun with a survey of vision technologies, including stereo vision, structure from motion, light field imaging, and structured illumination. Using the optical data, 3D point clouds will be registered as the object is viewed from multiple orientations. From the point cloud data, a mix of deterministic and machine learning algorithms will be used to identify geometric primitives that can be compared to those in the computer-aided design (CAD) model. In addition, the system will estimate the surface roughness. Based upon the tolerances required within the CAD model, pass/fail criteria will be established, and the system will determine whether the part passes, fails, or cannot be evaluated. At the end of the current phase, IERUS will perform a demonstration using a prototype system on a challenge artifact provided by NASA.
In-process imaging of morphology and temperature for laser welding and selective laser melting (Conference Presentation)
Tristan G. Fleming, Troy R. Allen, Stephen G. L. Nestor, et al.
Directly measuring morphology and temperature changes during laser processing (such as in keyhole welding and selective laser melting) can help us understand, optimize, and control the manufacturing process on the fly. Even with such great potential, the technical requirements for such in situ metrology are high due to the fast nature of the highly localized dynamics, all the while in the presence of bright backscatter and blackbody radiation, and possible obstructions such as molten ejecta and plumes. We have demonstrated that by exploiting coherent imaging through a single-mode fiber inline with the processing lens, we can image morphology at line rates up to 312 kHz, with sufficient robustness to achieve closed-loop control of the manufacturing process. Applied to metal additive manufacturing, inline coherent imaging can directly measure powder layer thickness and uniformity, and formed track roughness, including the onset of balling. Inline coherent imaging measures morphology dynamics, but that is only part of the story: temperature is also key to final part quality. Standard thermal imaging exploits blackbody radiation but is plagued by the highly variable emissivity of the region of interest, making quantitative measurement challenging. We were able to exploit the same apparatus used for coherent imaging to collect surface temperature profiles. Since we spectrally resolve a wide signature, we overcome the emissivity problem and measure absolute temperature on the micron scale during laser processing.
Process monitoring strategy for metal additive using off-the-shelf metrology
Making metal parts by means of laser consolidation has gained much publicity in recent years. However, unlike laser welding or traditional machining, the experience base of know-how for producing good parts is not yet well established. There are many variables in metal additive part production, such as the source energy, the speed of the source movement, the consistency of the material, and any flaws or “inclusions” in the prepared work material. These issues are not unlike finding the best feed and speed for machining, or working from a near-net-shape casting without encountering porosity, cracks, or voids in the work material. There already exists a wide range of optical gages and inspection tools, ranging from IR sensors to laser gages to high-resolution video, in wide use in other manufacturing sectors. This paper will examine the needs for monitoring and measuring metal additive production that have been identified by the industry and reference those needs against the commercial tools of optical metrology and inspection. Specific examples of possible applications will be presented through a matrix of potential solutions, including the pros and cons of each method.