This excerpt is from the SPIE Press book *Discrimination of Subsurface Unexploded Ordnance*.

Buried unexploded ordnance (UXO) poses a persistent, challenging, and expensive cleanup problem. Whether on military practice lands or at the sites of past conflicts, many dropped bombs and fired projectiles failed to explode when they penetrated the ground, restricting access to millions of acres at thousands of sites in the US alone. Globally, the problem is even more extensive and dauntingly diverse. The cleanup of UXO sites is particularly challenging because detection must be extraordinarily reliable and remediation extremely careful. Beyond the challenges of problematic terrain, the sheer diversity of possible ordnance types compounds the difficulties inherent in the fuzziness of what practicable sensors provide. In most locations where some ordnance did not explode, many more items have indeed detonated; clutter is abundant, and its sensor responses are often similar to those of comparably sized UXO. Necessarily conservative practices to date ensure an enormous false alarm rate, and thus cleanup costs are very high.

Against this background, recent developments provide a heartening story; the particulars are engaging, and there is a happy ending. Spanning the last ten or fifteen years, the narrative proceeds over a continuum of all aspects of the problem:

- head scratching over fundamental physics and phenomenology;
- new, successful modeling and analysis methods;
- informative and reassuring engagements of those methods with data;
- design, development, and testing of new instruments that provide expanded and superior data;
- innovative processing techniques that draw on both the modeling and the instrumentation developments; and, finally,
- highly successful discrimination performance in blind field tests at live cleanup sites.

[excerpt from **Chapter 1: The Problem and Its Nature**]

**Section 1.1 Discrimination, Inverse Problems, and What Follows**

For the most part, one cannot make discrimination decisions by examining recorded signals alone; they typically vary greatly as a function of the sensor–object configuration, which is inherently unknown. Instead, data must be related to a model of the sensor–object interaction. It is via such a model that one can infer underlying parameters that are not just functions of individual signals. Overall, the three essential constituents of the task are:

- electromagnetic geophysical sensing systems;
- models that enable an analyst to locate a target and estimate its intrinsic, distinguishing parameters; and
- classifiers to determine whether a target of interest (TOI) or target not of interest (TNOI) produced the data.

Specifically for the discrimination portion of this task, a complete system requires the following essential elements:

1. **Adequate response models**. They should ideally be able to treat cases with multiple or complex heterogeneous objects, and possibly also treat contributions from the surrounding environment. Item 1 refers first and foremost to physical models or the concepts at their root. Closely linked to the definition and execution of such models is the issue of
2. **representation of response**, together with tractable, convenient, and revealing **parameterization** of that representation.
3. **Appropriate and adequate data** to support the chosen models and the inference of their parameters. Desirably sophisticated models and response representation are for naught if data cannot be obtained to support their requirements. Data diversity is key, and data quality control (QC) is vital.
4. Efficient and reliable **optimization, search, and inversion algorithms**. Data of desirable quality and diversity are not useful if they cannot be inverted effectively for the parameters sought. Intelligent formulation of the computational problem, smart algorithms, and well-directed constraints are generally required to achieve the necessary stability, efficiency, and reduction of ambiguity.
5. Systems for **end-stage processing and decision making**. All of the previous steps are designed to feed ultimately into processes that produce ranked dig/no-dig determinations. Together with systematic techniques for data manipulation, statistical concepts and treatments come to the fore here.

In terms of the dynamics of development, there is a great deal of interaction amongst the first three items, such that no one of them really precedes or follows the others. Available instruments require that modelers confront specific kinds of sensor–object interactions, along with the way in which those interactions are reported. This fact effectively defines or at least constrains the modeler’s task. Models and the feasibility of extracting their parameters from data may simultaneously direct developments in instrumentation.

The previous list inherently concerns inverse problems, from the most focused level (i.e., how can the field data be treated to infer the specific quantities of interest?) to the broadest level (i.e., is whatever caused this signal a UXO?). To illuminate matters in this domain, let us treat the following general equation as posing, alternatively, a forward problem, a direct inverse solution, and a general inverse problem:

**A**(α, β, …)·**q** = **d**, (1.1)

where an uppercase bold letter (**A**) indicates a matrix or tensor, with prominent exceptions to be noted. A lowercase bold letter indicates a vector, e.g., of sources **q** producing the vector of data **d**. The specifics of **A** derive from the relevant physics, geometrical configuration, boundary, or other conditions; and the matrix incorporates some parameters α, β, …. In a causal view, the entities on the left produce the observations (data, output) on the right.

In the *forward problem*, everything on the left side of Eq. (1.1) needed to obtain the output **d** is known, including the source strengths **q** as well as all geometry, applied conditions, etc., that produce the structure and parameters in **A**. One need only turn the computational crank (multiplication) to produce the result. Many inversion approaches, particularly those based on optimization searches, rely on the repeated execution of forward calculations using prospective parameters and/or **q** values. Along with direct inverse solutions, this forward calculation is what most engineers and scientists were taught in general physics and math courses.
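As a minimal numerical sketch (the matrix and source values below are arbitrary, made-up numbers, not any particular sensor model), the forward problem amounts to a single matrix–vector multiplication:

```python
import numpy as np

# Hypothetical 4x3 system matrix A: in a real problem its entries
# would derive from the physics, geometry, and parameters of the
# sensor-object model.
A = np.array([[2.0, 0.5, 1.0],
              [0.0, 1.5, 0.3],
              [1.2, 0.0, 2.0],
              [0.4, 0.9, 0.7]])

# Known source strengths q.
q = np.array([1.0, 2.0, 0.5])

# Forward problem: turn the computational crank to obtain the data d.
d = A @ q
print(d)  # -> [3.5, 3.15, 2.2, 2.55]
```

This is the inexpensive, well-behaved calculation that optimization-based inversion schemes execute repeatedly for each set of prospective parameter and source values.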

At the outset of the *direct inverse solution*, **A** and **d** are known, but **q** is unknown. Assuming that the problem has been properly formulated, the measurements taken, and the system structured so that **A** has no problematic features (something of a leap, as will be seen), the relation in Eq. (1.1) can be inverted mechanically. That is, one may bring to bear a straightforward algorithm with a set of reliable, codified steps that will produce the solution **q**. Loosely speaking, the causes in the forward problem are known, and the result is computed; in the direct inverse solution, the result is known, and the causes are calculated. In general, it is advisable to reduce at least parts of general inversion calculations to well-posed direct solutions. As shown below, exploiting even the direct inverse calculation may be more fraught than it initially seems.
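A toy sketch of the direct inverse solution, again with an arbitrary, well-conditioned made-up matrix: here **d** is manufactured from a known **q** so that the mechanical, codified inversion step (a least-squares solve) can be checked against the truth.

```python
import numpy as np

# Hypothetical well-conditioned system matrix (illustrative only).
A = np.array([[2.0, 0.5, 1.0],
              [0.0, 1.5, 0.3],
              [1.2, 0.0, 2.0],
              [0.4, 0.9, 0.7]])

# Manufacture observed data d from a known q_true, so recovery
# can be verified.
q_true = np.array([1.0, 2.0, 0.5])
d = A @ q_true

# Direct inverse solution: a reliable, codified algorithm
# (least squares) produces q from A and d in one step.
q_est, *_ = np.linalg.lstsq(A, d, rcond=None)
print(q_est)  # recovers approximately [1.0, 2.0, 0.5]
```

When **A** has problematic features (near-rank-deficiency, poor conditioning), this mechanical step becomes fraught in exactly the way the text warns, and regularization or constraints enter the picture.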

A *general inverse problem* is distinct from the two previous calculations and is also harder to define precisely. The data **d** are known, at least to some degree of certainty. The essential question is: what caused these data? The form of the left side of Eq. (1.1) may not be as accommodating as in the direct inversion. Some constituents of **A** (e.g., α, β, …) may themselves be unknown and may be part of the solution being sought, in addition to **q**. For example, the parameters may correspond to such items as source position, material properties, or geometrical orientations. It is much more difficult to codify this kind of inverse calculation than in the other two instances. Pitfalls abound, depending on the specific formulation and the computational measures taken. Groping searches may be required, essentially guessing likely answers (left side of the equation) and performing repeated forward calculations based on those values. Analysts try to zero in on source and parameter values that work best according to some measure of agreement between calculated and observed **d** values, while also satisfying any conditions and constraints. It is tempting then to treat this best result as an approximation of “true” input and parameter values, but a number of issues contribute to ambiguity here. Substantially different sets of possible sources and parameters could result in reasonable approximations of the same data. Optimization searches may get stuck around local error minima. The inevitable noise and error in the system may make it unclear how well a global error minimum has been identified.
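A minimal sketch of such a groping search, with a made-up parameterized model (the matrix, the parameter α, and the values below are illustrative assumptions, not any specific sensor physics): an unknown α now enters **A** itself, so the search guesses α values, solves the inner linear problem for **q** at each guess, and keeps whichever combination best reproduces the observed **d**.

```python
import numpy as np

def system_matrix(alpha):
    # Hypothetical parameterized A(alpha); alpha might stand in for,
    # say, an unknown source depth or orientation in a real model.
    return np.array([[alpha, 0.5],
                     [0.3,   alpha**2],
                     [1.0,   0.2]])

# Synthetic "observed" data from a hidden truth: alpha = 1.5, q = [2, 1].
d_obs = system_matrix(1.5) @ np.array([2.0, 1.0])

# Groping search: guess alpha over a grid, run the inner (linear)
# inversion for q at each guess, and score each candidate by the
# misfit between predicted and observed data.
best = None
for alpha in np.linspace(0.5, 3.0, 251):
    A = system_matrix(alpha)
    q, *_ = np.linalg.lstsq(A, d_obs, rcond=None)
    misfit = np.linalg.norm(A @ q - d_obs)
    if best is None or misfit < best[0]:
        best = (misfit, alpha, q)

misfit, alpha_est, q_est = best
print(alpha_est, q_est)
```

On this grid the search lands back on α = 1.5 with **q** ≈ [2, 1]. Even so, the ambiguities the text describes lurk nearby: other (α, **q**) combinations can approximate the same data closely, a coarser grid or noisier **d** can leave the search stuck near such a competitor, and constraints or additional data diversity are what break the tie.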

Successes notwithstanding, much work remains to be done, if only because formidable settings abound, including rugged, vegetated terrain, wetlands, and underwater sites. For more information about this challenging yet vital field, read the full Tutorial Text *Discrimination of Subsurface Unexploded Ordnance*.

**Kevin O’Neill** received a B.A. magna cum laude from Cornell University and an M.A., M.S.E., and Ph.D. from Princeton University.

After a National Science Foundation Postdoctoral Fellowship with the Thayer School of Engineering, Dartmouth College, and at the U.S. Army Cold Regions Research and Engineering Laboratory (CRREL), he became a Research Civil Engineer with CRREL.

His research has focused on porous media transport phenomena and geotechnically relevant electromagnetics. He has been a Visiting Fellow with the Department of Agronomy, Cornell University, and a Visiting Scientist with the Center for Electromagnetic Theory and Applications, Massachusetts Institute of Technology. Since 1984, he has been an adjunct faculty member with the Thayer School of Engineering, Dartmouth College.

His current research interests include electromagnetic remote sensing of surfaces, layers, and, in particular, buried objects such as unexploded ordnance.