
GIS, remote sensing, and computer graphics merge to offer 3D imaging

An interview with Nickolas Faust, Georgia Institute of Technology

From OE Reports Number 150 - June 1996
June 1996, SPIE Newsroom. DOI: 10.1117/2.6199606.0001

What is the background of geographic information systems (GIS)?

Geographic information systems have existed for a long time, but they weren't called geographic information systems then. They were called geographic databases or something like that. Some of the earliest work in geographic database development, what I call the start of geographic information systems, was in the early 1970s at Harvard University in the School of Landscape Architecture. A guy named Carl Steinitz, who was doing spatial geographic analysis around the Boston area, started a database that contained information about the geographic area. That system used a package they had developed called Grid, which was a complex model written in Fortran. Students in the School of Landscape Architecture were required to use Grid and to do modeling as part of their graduate degrees. The students rebelled, saying that they didn't go there to learn computer programming.

They weren't physics majors or engineers.

Yes. You have two different types of landscape architects, the kind that play in bushes and move trees and the sort that plan national parks and do all sorts of environmental analysis. It was really the latter that was involved with GIS.

The group that was there included people like Jack Dangermond, who is now the head of one of the largest geographic information companies, ESRI (Environmental Systems Research Institute). A couple of other guys, Lawrie Jordan and Bruce Rado, who were from the state of Georgia, had gone up there at about the same time. They were all part of the group that had rebelled. They wound up getting the School of Landscape Architecture to develop a much simpler and easier-to-use version of Grid. That package was called IMGRID, which stood for Interactive Manipulation Grid. It was more keyword oriented, and it was relatively easy for the students to create their own models using spatial data. They could actually do projects and environmental assessments. That, I really consider, was the first raster GIS. It was basically a raster system based on grid cells. There was previous work done in vector representations of spatial data in Canada.

You're talking software here. How does that relate to photographs from space?

When LANDSAT went up in the early '70s, it became a principal source of geographic information. Up to that time, maps were digitized. The U.S. Geological Survey had 7 1/2 minute quadrangles (1:24,000 scale) and 1-degree quadrangles at a scale of 1:250,000. The source of the data for those maps--elevation and features (roads, streams, etc.)--came from airborne flights. When the earth resources satellites went up, they became a major source of information that not only showed you where buildings, office parks, and forests were, but also helped you get a feel for exactly what was on the land, such as croplands, grass for pasture, and even individual buildings. It became a way to gather what is called land cover information. So, in the middle to late '70s, that became a principal source of data that went into GIS systems.

That was the focus of a company named ERDAS (Earth Resources Data Analysis Systems), which was a spinoff of our group here at Georgia Tech. It used GIS and some of the same techniques that Harvard had developed, but it basically integrated that type of geographic data with imagery. So ERDAS was one of the first companies to integrate the use of imagery and geographic information systems data.

Parallel with the development of IMGRID, another branch of GIS was starting to be developed that really didn't have anything to do with imagery. It was taking maps, digitizing roads, and doing everything in polygons rather than in grids. There was a whole other set of software and techniques developed for polygon overlay in this polygon GIS.

When you say polygon, you mean that instead of having a rectangular grid, you can have any shape of polygon as your unit?

Right. You would have any kind of arbitrary polygonal boundary and inside that boundary, you can have different attributes such as 'this area is owned by John Doe. It contains this many acres. It's a field and cotton is growing on it.' That polygon is a set of vectors, nodes, and links between the nodes. It's an enclosed area with attributes attached to it. On a grid, on the other hand, the same data can be represented, but every point has an attribute that says what it is and each point has exactly the same area.

I don't understand. Each point has the same area?

Well, in an image, you have a pixel size. In a satellite image, the pixel size from LANDSAT is currently 30 m. So, you have an image, and each point on the image is 30 X 30 m. A lake, in an image, may be a set of a hundred pixels that are all considered lake. A polygon, on the other hand, would be just the outside boundary of that lake with an attribute that would say it's a lake in between. It's a different representation of data. It turns out that there's a lot of detail in imagery that would take too long to put into polygons. Sometimes when you want to put imagery information into a polygonal GIS, you wind up losing information because you have to generalize the information in the imagery to put it into a polygonal GIS.
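To make the distinction concrete, here is a minimal sketch (in Python, with an invented grid, coordinates, and attribute names) of the same lake stored both ways: as a grid of equal-area cells and as a polygon boundary with attributes attached.

    # Minimal sketch contrasting raster and polygon (vector) representations
    # of the same lake. All values and attribute names are illustrative.

    PIXEL_SIZE_M = 30  # LANDSAT Thematic Mapper pixel, 30 x 30 m

    # Raster view: every cell carries an attribute and covers the same area.
    raster = [
        ["forest", "lake",  "lake"],
        ["forest", "lake",  "lake"],
        ["field",  "field", "lake"],
    ]
    lake_cells = sum(row.count("lake") for row in raster)
    lake_area_raster = lake_cells * PIXEL_SIZE_M ** 2  # 5 cells -> 4500 m^2

    # Polygon view: only the boundary is stored, with attributes attached.
    lake_polygon = {
        "vertices": [(30, 0), (90, 0), (90, 90), (60, 90), (30, 60)],  # metres
        "attributes": {"type": "lake", "owner": "John Doe"},
    }

    def shoelace_area(vertices):
        """Planar polygon area via the shoelace formula."""
        area = 0.0
        for i, (x1, y1) in enumerate(vertices):
            x2, y2 = vertices[(i + 1) % len(vertices)]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    lake_area_polygon = shoelace_area(lake_polygon["vertices"])
    print(lake_area_raster, lake_area_polygon)  # 4500 vs 4950; the two need not agree

Generalizing detailed imagery into a handful of polygons is exactly where the information loss described above occurs.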

You talked about the history in the civilian sector. Didn't this play a large role in the military as well?

Yes, there's always been a lot of work going on in the military in terms of terrain data. In the mid-'70s, we actually did a lot of work here--and a lot of other people did--with the Army Engineers Topographic Laboratory. Their job was to create terrain data and maps for the military, mainly the army. The Defense Mapping Agency was also building these maps. In the '70s and '80s, they decided to switch from paper maps and go into digital maps. Once you have digital maps, you wind up creating geographic information systems data. The geographic information system is essentially a way to present that map data.

The military started using imagery a long time ago. They were instrumental in making sure the imagery came in geometrically corrected to the map of the same area so that you could look at how things changed over time.

You mentioned LANDSAT. You said one pixel gave a resolution of 30 X 30 m on the ground. I've heard that one of the great things happening these days is that the resolution is improving greatly.

The early LANDSAT, launched in '72, had a resolution of 79 X 57 m. So it was a nonsquare pixel. The LANDSAT in the early '80s had a new sensor called the Thematic Mapper. It had a pixel size of 30 X 30 m. SPOT is a French satellite and it has two different types of systems. One is a color infrared digital camera that has three bands--a green, red, and near-infrared band. Its resolution is 20 X 20 m.

On the same satellite is another sensor that creates panchromatic information, like a black-and-white photograph. Its resolution is 10 X 10 m. Each sensor has a different pixel size, and each also has different color information associated with it.

I heard that we're getting better than 10 meters.

Right. There are three or four companies in the U.S. that have permission to launch satellites with as good as 1-m resolution. The first one is probably going up this year. I believe NASA is launching a satellite called Lewis and another one called Clark as part of an evaluation of small satellites with small launchers. It's basically a 3-m multispectral sensor. In addition to that, private companies such as Lockheed and Space Imaging are launching their own satellites that'll have resolution from 3 m down to 1 m. Some of them are multispectral and some are panchromatic.

Okay, what's the difference between multispectral and panchromatic?

Panchromatic is like black-and-white film. Multispectral is like color film, so you get more color information. The black-and-white image is fine for showing the position of things, but it's pretty hard to interpret vegetation health and things like that from it.

As you go to higher and higher resolution, such as 1 meter, what can you use it for?

From what I understand, the principal application for the 1-m type data is going to be mapping for counties or even individuals on a land-parcel basis. They're trying to hit a market that is dominated by these mom-and-pop air photo firms that go out and take aerial images to create detailed maps of a subdivision or other small areas. It's not really the traditional GIS market, which is more natural-resources oriented, where you're trying to do vegetation analysis or crop estimation. That's too small a market. These guys are after a much bigger fish, and that market is the worldwide detailed-mapping market.

I would guess they'd have military applications too.

Oh, yes. The military has its own set of satellites, but you know a lot of times it's easier to get commercial data and actually use it. And for countries that don't have their own surveillance satellites, this would be an obvious way to do surveillance.

What is the resolution for military satellites?

That's definitely classified. I can't tell you. Just the idea that the U.S. is allowing companies to launch satellites with 1-m resolution should give you some idea of what's going on. The reason they're doing so is because the French are going to launch a satellite system called Hermes, which is a spin-off of their military technology, that will have 1- to 3-m resolution. So, if the U.S. wasn't going to do it, the French were and then they'd sell the data to the U.S. That competition overrode a lot of the traditional military secrecy.

So, there will be companies offering these images and you have to buy them from whoever puts the satellites up?

Yes. They may offer them over the Internet and/or they may make hard copies of them. I doubt there is room in this market for all of these companies; I think there will be a shakeout.

Can you reiterate who these companies are?

The three companies that will have near-term, high-resolution satellites are Lockheed Space Imaging, Earthwatch, and Resource 21, which is a consortium that includes agribusiness.

What about 3D imaging, 3D GIS?

One of the new innovations that has come out in the last few years is the merger of three technologies: remote sensing, GIS, and computer graphics visualization. Remote sensing and GIS merged in the last decade with ERDAS, Intergraph, and a few other companies that used imagery behind their vector and polygon data.

The ability to create a 3D perspective scene has always been attractive because we live in a 3D world. If you want to understand geographic data, it makes sense to visualize it in 3D. The problem before was the computer systems; they were too slow. I've been doing 3D imagery for maybe 15 years. It used to take hours to generate one perspective scene. Now with all the computing advances, the Pentium chip and graphic computers such as those from Silicon Graphics and Evans and Sutherland, there's a level of computing power available on a work station that allows you to create 3D scenes in real time. It's essentially a workstation-based flight simulator. By generating those scenes in real time, you can query and analyze the GIS. What can I see from the top of this hill? Or if I'm on the 34th floor of the Coke building in Atlanta and I look north, what's my view? Viewsheds (an analogy to watersheds) have always been an important part of what the military is doing in terms of terrain analysis. But GIS people never really considered viewsheds as part of their analysis capability.

The military always had two types of use for geographic information. One was called terrain analysis, which was line-of-sight view from the top of things. What can I see from the top of this hill? It got into fields of fire and things like that.

The other use for GIS had to do with mobility. It analyzed the terrain. What is it going to take to move a tank from here to there, based on the soils, vegetation, roads, and stuff like that?

Those were always two separate types of analyses. The civilian GIS never really did terrain analysis. But when you started to include the visualization in the GIS, the terrain analysis sort of came along for free.
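The line-of-sight test at the heart of a viewshed can be sketched in a few lines. The Python example below uses a made-up one-dimensional terrain profile and cell size; a cell is visible from the observer if the slope of the sight line to it is at least as steep as any slope encountered so far.

    # A minimal line-of-sight sketch along one row of an elevation grid,
    # the core test behind a viewshed. The terrain profile is invented.

    def visible_cells(profile, observer_height=1.8, cell_size=30.0):
        """Return which cells of a 1D terrain profile are visible from cell 0.

        profile: ground elevations in metres; the observer stands on profile[0].
        """
        eye = profile[0] + observer_height
        visible = [True]           # the observer's own cell
        max_slope = float("-inf")  # steepest sight-line slope seen so far
        for i, z in enumerate(profile[1:], start=1):
            slope = (z - eye) / (i * cell_size)
            visible.append(slope >= max_slope)
            max_slope = max(max_slope, slope)
        return visible

    # A hill at cell 3 hides the lower ground behind it.
    print(visible_cells([100, 102, 105, 120, 104, 103]))
    # [True, True, True, True, False, False]

A full viewshed repeats this test along rays in every direction from the observer.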

Is there a difference between remote sensing and GIS? To me, GIS data is involved in remote sensing.

No, it's actually the other way around. Remote sensing data is involved in GIS. Remote sensing provides land cover types of data for a GIS. A GIS is a set of layers of information. It could be elevation data, soils data, population data, geology data, hydrology data, or vegetation data. A GIS has many more different layers of information for one geographic area than remote sensing. Remote sensing gives you information used to create one layer in a GIS, which is land cover.

Remote sensing gives you images taken by satellite or aircraft then.

Yes. It gives you images in multiple spectral bands. By doing things like pattern recognition, you can look at those images and determine where trees are and things like that. Then that information gets fed into a GIS. There's another distinction here. I consider remote sensing imagery as data and I consider what's in a GIS as information. So you take data, you go through some process, and you create information. Then it becomes part of a GIS.
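As a rough illustration of that data-to-information step, the sketch below classifies each pixel of a tiny, invented multispectral image by the nearest class mean (a minimum-distance classifier). The band values and class signatures are purely hypothetical.

    # Turning multispectral "data" into a land-cover "information" layer
    # with a minimum-distance-to-means classifier. Values are invented.
    import math

    # Class signatures: mean reflectance in (green, red, near-infrared).
    signatures = {
        "water":  (20, 15, 10),
        "forest": (40, 30, 90),
        "urban":  (70, 75, 60),
    }

    def classify_pixel(pixel):
        """Assign the class whose mean spectrum is closest (Euclidean distance)."""
        return min(signatures, key=lambda c: math.dist(pixel, signatures[c]))

    # A tiny 2 x 2 multispectral image: each pixel is (green, red, NIR).
    image = [
        [(22, 14, 12), (38, 33, 85)],
        [(68, 70, 58), (41, 28, 95)],
    ]
    land_cover = [[classify_pixel(p) for p in row] for row in image]
    print(land_cover)  # [['water', 'forest'], ['urban', 'forest']]

The resulting land-cover grid is the kind of layer that gets loaded into a GIS alongside elevation, soils, and the rest.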

I've seen photographs from LANDSAT or SPOT, where certain areas are red and certain areas are blue and from the geology and the reflection of the light, they can tell what kind of rock is down there, what kind of formations, etc., to look for petroleum.

Some of the remote sensing bands allow differentiation between different rock types. There's different spectral response between, say, acidic (e.g., granite, gneiss) and basaltic rocks (e.g., olivine). Some of the spectral bands, for example, in LANDSAT can pick that out. But remember each of those spectral bands is an image. A person can look at it and interpret it. That image is not information until that person interprets it.

I heard that not only are sensors achieving finer resolution, but that they are using more bands.

One of the other things that NASA is launching this summer is a hyperspectral sensor. Instead of one to three spectral bands, it's going to have hundreds of spectral bands. Each bandwidth is 10 nm, real small slices. If you look at a spectral curve, you get what is essentially a continuous curve.

The problem with the LANDSAT bands and the SPOT bands is that they are wide bandwidths, because they need to cover enough to get the chlorophyll content of leaves or something like that. Instead of getting the information that is really present in the electromagnetic spectrum, which would differentiate between different vegetation types, it sort of globs it all together. It averages all the radiance in a broad band. So the difference between hyperspectral and traditional remote sensing is that instead of broad bands where you have one number that represents chlorophyll, you basically will have a full spectrum that will allow you to differentiate between different types of plants, for example. This also helps in differentiating between different rock types.
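A small numerical example shows the effect. In the sketch below, two invented plant spectra sampled every 10 nm average to the same value over a single broad band, even though their narrow-band curves have clearly different shapes.

    # Why a broad band "globs together" spectral detail: averaging many
    # narrow 10-nm samples into one number hides features that the full
    # hyperspectral curve preserves. The spectra are invented.

    def broadband_average(radiance, band_start, band_end):
        """Average the narrow-band samples falling inside one broad band."""
        samples = [v for wl, v in radiance.items() if band_start <= wl < band_end]
        return sum(samples) / len(samples)

    # Two hypothetical plant spectra sampled every 10 nm from 600 to 690 nm.
    plant_a = {600: 8, 610: 7, 620: 6, 630: 5, 640: 4,
               650: 5, 660: 6, 670: 7, 680: 8, 690: 9}
    plant_b = {600: 5, 610: 6, 620: 7, 630: 8, 640: 9,
               650: 8, 660: 7, 670: 6, 680: 5, 690: 4}

    # One broad "red" band averages both plants to the same number (6.5)...
    print(broadband_average(plant_a, 600, 700), broadband_average(plant_b, 600, 700))
    # ...even though their narrow-band curves are clearly different shapes.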

What range will all these hundreds of bands cover? From the near infrared through the ultraviolet?

Basically, when you get into the UV, the atmospheric absorption is too great. So you don't get too much out of that, especially for a space platform. So, it's generally blue through the near infrared, up to about 2 µm.

Anything else?

With the hyperspectral bands, the new high resolution sensors, and what I call a virtual 3D GIS, people can immerse themselves in a virtual reality geographic data system. The GIS and remote sensing data are going to be more understandable because it'll look like it does when you look out your window. And that will appeal to a wider audience.

Nickolas L. Faust is the Associate Director of the Center for GIS and Spatial Analysis Technologies (GISSAT) and Head of the Image Processing Branch of the Electro-Optics, Environment and Materials Laboratory at the Georgia Tech Research Institute (GTRI), Georgia Institute of Technology.

Faust holds a BSc in physics and an MS in geophysical sciences from the Georgia Institute of Technology. In addition to his early work as an aerospace engineer/physicist for NASA, he has worked at Georgia Tech since 1972 in a variety of capacities. Currently he coordinates research in the integration of remote sensing, GIS, and visualization technologies. Faust is cochair of the International Society for Photogrammetry and Remote Sensing (ISPRS) Commission II Working Group on Hardware and Software Aspects of GIS, is president of the Georgia/South Carolina region of ASPRS, and is a member of ASPRS, SPIE, ISPRS, IEEE, and Sigma Xi. He was elected to the Space Technology Hall of Fame in 1993 and has received two NASA Certificates of Recognition.

He was an SPIE AeroSense conference chair in April and the technical director of GIS/LIS, Atlanta in 1992. Faust has numerous publications to his name. He was interviewed by Frederick Su.