Electronic Imaging & Signal Processing

Underwater 3D imaging technique identifies bottom mines

From OE Reports Number 183 - March 1999

An Interview with Anthony Gleckler, Senior Systems Engineer, Arete Associates, Sherman Oaks, CA.

28 March 1999, SPIE Newsroom. DOI: 10.1117/2.6199903.0002

What are the different ways of imaging underwater, and why do we need this technology?

There are a couple of different techniques used for underwater imaging. One is just a passive camera. The issue there is that you have problems with lighting: if you try to shine a bright light on an object, you tend to just see the backscatter from the water itself. So people have tried different techniques.

One is called a line scan system, which has the receiver and the illumination light pointing from different places, so the light paths only intersect at the bottom. The backscatter in between isn't in the path of the receiver. This technique comes up with some fairly nice images but has some limitations: if bottom conditions change rapidly, you can get out of the intersecting region and not get any light in.

Another technique is called laser range gating. That's where you send out a pulsed light that illuminates a very broad area and then you turn your camera on a few nanoseconds later to capture just the bottom image, gating out the reflection from the water itself.

The third technique is ours. It involves using a streak tube lidar system that measures the time of flight through the water. You get the water signal, but it's kept in separate pixels from the bottom data, so you can easily remove the water backscatter. The range-gated systems tend to capture more of the water and not give you any of the range information. Ours is the only technique that gives a regular picture and provides you with the actual depth and 3D shapes of the objects on the bottom.

You send out a pulse of light underwater, and you're looking for the time for that pulse of light to come back to your receiver. So anything that is in between the target and receiver is range-gated out?

That's what a range-gated camera would do. We actually collect all that information. The key difference between an air-based system and a water-based system is that the water, particularly in the near field close to your transmitter, gives a backscatter return that is actually brighter than the return you get from the bottom.

Normally, if you don't range-gate that out, it actually swamps out your image. What we do is use a time-resolved detector, a streak tube (another possibility is a photomultiplier tube [PMT]), for measuring the return from that outgoing pulse at every 1-in. interval, say, as it goes out and until it hits the bottom. So, what you'd see is a very bright return from the backscatter of the water near the receiver, and then the return exponentially decays, and you would see another little bump that represents the bottom. The time position of that bump is the range to the target and the height of the bump gives us the reflectivity.
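The waveform described here, a bright near-field water return that decays exponentially with a small bump at the bottom, can be sketched numerically. Below is a minimal peak-finding example, not Arete's actual processing; the sampling rate, decay constant, and target depth are illustrative assumptions.

```python
import numpy as np

# Speed of light in water (refractive index ~1.33); assumed value for this sketch.
C_WATER = 3.0e8 / 1.33  # m/s

def bottom_range(waveform, dt):
    """Locate the bottom return in one time-resolved channel.

    waveform -- received intensity sampled at interval dt (seconds).
    Returns (range_m, reflectivity): the time position of the bottom
    bump gives the range, and its height gives the reflectivity.
    """
    d = np.diff(waveform)
    peak0 = int(np.argmax(waveform))             # bright near-field water backscatter
    # First index after the water peak where the decaying return turns upward
    rise = peak0 + 1 + int(np.argmax(d[peak0 + 1:] > 0))
    i = rise + int(np.argmax(waveform[rise:]))   # peak of the bottom bump
    range_m = C_WATER * (i * dt) / 2.0           # two-way travel time -> one-way range
    return range_m, float(waveform[i])

# Synthetic return: exponentially decaying water backscatter plus a small
# bottom bump at 80 ns (about 9 m down, sampled at 1 GHz).
dt = 1e-9
t = np.arange(200) * dt
water = 100.0 * np.exp(-t / 20e-9)
bottom = 5.0 * np.exp(-((t - 80e-9) / 3e-9) ** 2)
rng, refl = bottom_range(water + bottom, dt)
```

Note that a plain global maximum would find the water backscatter, not the bottom, which is exactly why the near-field return has to be handled before the bump can be read out.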

What's the advantage of a streak tube over a PMT?

The benefit of the streak tube is that it can have hundreds or thousands of time-resolved channels running at the same time, which allows you to collect much more information with each laser pulse compared to a PMT, which is a single- or few-channel device. Also, in terms of intensity, a streak tube provides a much larger dynamic range than PMTs.

In a PMT, light comes in, gets converted to electrons, goes through a series of gain stages, and then comes out as an electrical signal, which you then have to digitize. You need several sizable integrated circuit (IC) chips for each channel you want to digitize. For lidar configurations, you want to sample the return light at about 1 GHz. People quote an 8-bit dynamic range for the analog-to-digital (A/D) converters that can take samples at that rate, but it's really more like 6 or 7 bits.

In a streak tube (Figure 1), incoming light is converted to electrons by a photocathode, then the electrons are swept by voltage plates onto a phosphor screen [much like a CRT] on the back end, which intensifies the image. You get about 300 photons out of the phosphor screen for each electron. The image persists and is easily captured with a 12-bit CCD, which provides 2^12, or 4096, levels of grayscale.
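The level counts quoted in this comparison follow directly from the bit depths, since an n-bit digitizer resolves 2**n levels:

```python
# Grayscale levels for the bit depths discussed above: 2**n levels
# for an n-bit converter.
levels = {bits: 2 ** bits for bits in (6, 7, 8, 12)}
for bits, n in levels.items():
    print(f"{bits}-bit: {n} levels")
```

So a 12-bit CCD resolves 4096 levels, against the 64 to 128 effective levels of a 6- or 7-bit A/D path.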

Figure 1. Streak tube architecture. A line image is formed on the slit photocathode. The photons are converted to electrons and accelerated towards the phosphor screen (anode), where the slit image is displayed. The sweep- or deflection plates move the position of the slit image in time, effectively giving a series of line images at different times.

Because each column of pixels in the CCD represents a separate time-resolved detector channel, we can have a thousand channels all in one sensor (Figure 2). If you tried to get a hundred channels of PMTs, you'd have to have hundreds of ICs back there doing the amplification and digitization. You would have a much more complex system and wouldn't have nearly the dynamic range.

Figure 2. (top) Typical streak tube imaging lidar (STIL) data collection illustration -- note: coverage of the dimension perpendicular to the plane of data collection is achieved by either motion of the sensor (i.e., a pushbroom system) or a 1D scanner. (bottom) CCD image for the target, where each row is the slit image at a different time, and each column represents a single channel time-resolved detector.
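With each CCD column acting as one channel and each row as one time sample, a range image and a reflectivity image fall out of a per-column peak search. A hedged sketch follows; the fixed water-backscatter gate (`water_rows`) and the frame dimensions are illustrative, not the real system's values.

```python
import numpy as np

C_WATER = 3.0e8 / 1.33  # speed of light in water, m/s (assumed)

def frame_to_line(frame, dt, water_rows):
    """Turn one streak-tube CCD frame into one line of range/reflectivity pixels.

    frame      -- 2D array, time rows x channel columns
    dt         -- time per CCD row, seconds
    water_rows -- early rows dominated by water backscatter (assumed known
                  here; a real system would estimate or gate this out)
    """
    sub = frame[water_rows:, :]                 # drop near-field backscatter rows
    idx = np.argmax(sub, axis=0) + water_rows   # bottom-return row in each channel
    ranges = C_WATER * (idx * dt) / 2.0         # two-way time -> range per pixel
    reflectivity = frame[idx, np.arange(frame.shape[1])]
    return ranges, reflectivity

# Demo: synthetic 200-row x 5-channel frame with one bright return per channel.
frame = np.zeros((200, 5))
frame[np.array([80, 85, 90, 95, 100]), np.arange(5)] = 10.0
ranges, refl = frame_to_line(frame, 1e-9, 40)
```

Stacking these lines as the tow body moves forward (the pushbroom motion in Figure 2) builds up the full 2D range and contrast images.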

You're using lidar. What about sonar?

We work in conjunction with sonar systems for several of our underwater applications. The issue with sonar is that, although it's wonderful for propagating through water, it doesn't have the resolution needed to differentiate between things that are mines and things that are simply mine-like. That's one of the problems the Navy has in their mine countermeasures effort. If they want to clear an area or make sure an area is clear so that their ships can go through, they use a sonar system first. Because sonar systems have poor resolution, they get lots of false alarms. So, they have to send in Navy divers to investigate each one of those contacts, which is costly and time-consuming.

Figure 3. A ruggedized lidar system that will fit into a very small area (19-in. length and 15-in. diameter) of an unmanned underwater vehicle. (Left) STIL shown mounted in the underwater tow-body. (Right) 3D model of the sensor package.

Figure 4. Prototype STIL system mounted on the floor of a glass-bottom boat. The system used an air-cooled LiteCycles DPSS frequency-doubled Nd:YAG laser (200-Hz PRF, 4.5-ns pulsewidth, 4 mJ/pulse) and a Photonis streak tube with custom control electronics.

One of the things we're doing under contract with the Office of Naval Research (ONR) is to implement what's called an electro-optic identification capability (EOID), using our streak tube imaging lidar (STIL) technology. The STIL is encased in this tow body (Figures 3 and 4), which is a torpedo-like structure that is dragged through the water by a helicopter. It will have a sonar system in it. The Navy will make one pass with the sonar system, find targets, and then they'll come back and make another pass with the STIL system. They do this because the swath width of any electro-optic system is much less than the swath width of the sonar system. Each contact is then identified with the electro-optic ID system.

What's the resolution with sonar?

Some of those numbers are classified.

What about yours? Is that classified too?

It's a complicated answer because the resolution is a function of the water, how turbid the water is and how much it is scattering the light. If the water were perfectly clear, we would be getting on the order of 1/2-in. resolution when the tow body is 30 feet off the bottom.

That's pretty good. Are you looking for mines that are floating up against ships, or are you talking about bottom mines?

There are a variety of threats: surface mines, tethered mines, and bottom mines. Surface mines and tethered mines tied just beneath the surface typically go off with a direct contact or proximity fuse of some kind. Such mines can usually be found by normal mine sweepers and airborne lidar. The Navy is at work improving the airborne lidar systems. There are a couple of versions out there, but the problem is you can't penetrate the water very deeply from above because water absorbs radiation so rapidly.

The mines that are tethered in the middle region in terms of depth are actually much easier to quantify with the sonar systems. The ones that are tethered deeper -- in the bottom volume of water -- and the ones that are actually sitting on the ocean floor are the trickier ones. Those are the ones that sonar systems have difficulty identifying as true targets. They are the ones that the electro-optic ID, or streak tube, is going after (Figure 5).

Figure 5. (a) Photo of bottom target used in experiment. (b) Contrast/reflectivity image of target at a depth of 25 ft. (c) Range image of target where brighter is closer -- note dark spot in upper left part of target (this is one of the two 2-in. holes in the target that can be seen in (a); the STIL system is ranging down through the hole to the bottom of the target; the other hole was covered by a weight used to keep the mine on the ocean bottom). (d) 3D surface reconstruction with contrast data mapped onto surface. (e) 1D cut through range image, with actual target profile, that shows an excellent match.

I would have guessed that mines on the bottom wouldn't affect any ships above it.

The bottom mines pose a significant threat to the Navy. They're passively listening and looking for the acoustic and/or magnetic signature of a passing ship. The ship has engines and other things that make noise and that signature can be detected by a mine. In addition, a large metal object like a ship will affect the magnetic field of the earth. So, if you have a little magnetometer down there that measures the deviation of the earth's magnetic field, it can detect the passing of a large ship.

Bottom mines don't damage the ship with the explosion. Instead, when they go off, they create a big air bubble that rises and lifts the ship out of the ocean so that it breaks under its own weight.

What's the maximum effective range of a mine?

You're getting into classified numbers, but they can be quite deep.

What kind of laser are you using?

We use an Nd:YAG laser (Figure 4) that's doubled, so it's operating at 532 nm, which is very close to the peak of the transmission curve for ocean water.

What's the power?

We typically try to run on the order of 5 mJ per pulse and run anywhere from 30 to 400 Hz, depending on the application. Typically, they're 5-ns pulses.
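Those duty numbers pin down the average optical power, which is just pulse energy times repetition rate. A quick check:

```python
# Average optical power implied by the figures quoted above:
# 5 mJ per pulse at repetition rates from 30 to 400 Hz.
pulse_energy_j = 5e-3
avg_power_w = {prf: pulse_energy_j * prf for prf in (30, 400)}
# roughly 0.15 W of average power at 30 Hz, and 2 W at 400 Hz
```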

You said you have other applications for this technology. Can you tell us about those?

Figure 6. Bathymetric data of the ocean bottom near the island of Kahoolawe, Hawaii.

We're investigating a large number of different applications. One is airborne bathymetry (Figure 6), which is measuring water depth. There is a lot of work being done surveying waterways for shipping and monitoring what's happening with coastal regions. A boat with a sonar system just can't cover that much area with high resolution. If you use an airborne system for bathymetry, you can cover a lot more area in a very quick and economical fashion.

Another area is terrestrial mapping, using the lidar system to map the environment for proper planning and development -- for instance, to ensure proper water runoff. This requires real-time kinematic (RTK) GPS to get very high accuracy on the plane's position so that we can reconstruct the 3D map of the surface. The nice thing for the surveyors, the people who are interested in this, is that not only do they get a surface map, they also get a picture of the area with the data perfectly registered, because the contrast information and range information are in each pixel. In their surveys, they can actually look at the picture, correlate it with that lump in the 3D map, and identify it as a building, some trees, or whatever (Figure 7). It looks like there's a fairly sizeable market for these kinds of activities.
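As a minimal illustration (not Arete's processing chain) of why precise RTK positioning matters: in the simplest geometry, each range pixel is referenced against the aircraft's GPS altitude to produce a ground elevation, so any error in the platform position goes straight into the map.

```python
def ground_elevation(aircraft_alt_m, slant_range_m):
    # Nadir-looking simplification: a real system also corrects each pixel
    # for scan angle and platform attitude before georeferencing it.
    return aircraft_alt_m - slant_range_m

# A 0.5 m error in the plane's reported altitude becomes a 0.5 m error
# in every elevation pixel, hence the need for RTK-grade positioning.
elev = ground_elevation(500.0, 487.5)
```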

Figure 7. Terrestrial mapping data. (a) Aerial photo of the buildings being surveyed. (b) Single laser shot showing raw data for one line image, indicated by single white line in (a). (c) Range image of area outlined in square in (a) generated by reconstructing from the individual line images.

We also have a grant from the Department of Commerce to monitor fish stocks. We fly over the ocean and look at fish schools in order to estimate populations. Also, one concern is dolphin bycatch, which is catching dolphins when fishing for tuna. If you can identify dolphins with a particular school, the fishermen can avoid that school and go after another one where there are no dolphins around.

What's the swath of your system compared to that of sonar?

For airborne or for under water?

How about both?

Under water, we can typically run on the order of a 50-ft. swath. It depends on how high above the bottom you are. We have a 70 degree field-of-view with that. The sonar numbers are classified again, but they're bigger than that.

And airborne, you can do a lot bigger than that.

Oh, yes. It depends on what you're trying to do. If you're trying to look at small objects -- for instance, mines -- you want to have decent resolution, so the swath isn't too wide. On the other hand, if you're looking for schools of fish, you can have 1-ft. pixels, and a 1000-ft.-wide swath.
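The swath figures in this exchange follow from simple flat-bottom geometry: a full field of view theta at height h covers about 2*h*tan(theta/2). A sketch, where the heights are back-solved for illustration rather than quoted system numbers:

```python
import math

def swath_width(height, fov_deg):
    """Swath covered on a flat bottom by a sensor at the given height
    with a full field of view of fov_deg degrees."""
    return 2.0 * height * math.tan(math.radians(fov_deg) / 2.0)

# With the 70-degree field of view mentioned above, a ~50-ft underwater
# swath corresponds to flying roughly 36 ft off the bottom, and a
# ~1000-ft airborne swath to roughly 715 ft of altitude (illustrative).
underwater = swath_width(36, 70)
airborne = swath_width(715, 70)
```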

What's the penetration of the lidar into water?

Again, it depends on water clarity. Near shore, the water tends to have a lot of particles from runoff, nearby rivers, or the beach environment. It's very turbid. I've been in water where I could only see a foot or two, even with a lidar system. But in reasonably clear water, where you might go scuba diving, or out in the open ocean where you're away from shore, you can see down a hundred feet or more. I've lived in Hawaii and been in water where you can clearly see the bottom almost 200 feet down. As a simple rule of thumb, the lidar can see about twice as far as what you can see with your eye.
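The clarity dependence can be made concrete with a Beer-Lambert sketch: the lidar return from depth z is attenuated by exp(-2*c*z) over the two-way path, where c is the beam attenuation coefficient of the water. The coefficients below are illustrative assumptions, not measured values from the interview.

```python
import math

def two_way_transmission(c_per_m, depth_m):
    """Fraction of light surviving the round trip to depth_m in water
    with beam attenuation coefficient c_per_m (Beer-Lambert)."""
    return math.exp(-2.0 * c_per_m * depth_m)

# Illustrative coefficients: ~0.05/m for very clear open ocean,
# ~1/m for a turbid near-shore environment.
clear = two_way_transmission(0.05, 30)   # 30 m down in clear water
turbid = two_way_transmission(1.0, 3)    # only 3 m down in turbid water
```

The exponential makes the point: a factor-of-20 change in water clarity changes the usable depth by roughly the same factor, which is why the answer to "how deep can you see" always starts with the water.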

 Anthony D. Gleckler has been a Senior Systems Engineer at Arete Associates (Sherman Oaks, CA) since 1997. Prior to Arete, he worked at the Kaman Aerospace Electro-Optics Development Center, the W. M. Keck Observatory, and IBM. He has 11 publications in the areas of adaptive optics and laser diode characterization and has written technical reports (many classified) on imaging lidar systems and image processing algorithms for target detection and classification. He was interviewed by Frederick Su.