
Remote Sensing

Deep Thought: Mapping the World, One Automated Step at a Time

Christian Heipke of Leibniz University Hannover discusses deep learning for remote sensing.

15 August 2018, SPIE Newsroom. DOI: 10.1117/2.2201808.02

Mapping is inherent in what we humans as explorers do, according to Dr. Christian Heipke of the Institute of Photogrammetry and GeoInformation at Leibniz University Hannover. "Both George Washington and James Cook were surveyors," says Heipke. "When they traveled, they would first try to get a map of the unknown territory. Why did they do that? Because in order to develop any new territory, you need to know what's already there. Of course, the Earth is mapped by now, but physical changes still occur, and so do man-made ones, of course. Thus, a good understanding of the current version of your environment is a condition sine qua non for any planning, for any development, be it a commercial project or a national park. Environmental monitoring is the same thing," he continues.

"Although there may be permissible changes, some of them are not so nice: water gets polluted or air gets polluted, so environmental monitoring is more of an alarm system in that sense. Another example would be a volcano, which generates ground motion before an eruption: you want to monitor for that as well, e.g. using GPS or radar techniques. These are all encompassed by remote sensing as a larger entity, and geospatial information is at the heart of any decision about the development of our surroundings."

Deep learning for mapping
[Photo: Christian Heipke, Institute of Photogrammetry and GeoInformation (IPI), Leibniz Universität Hannover, Germany]

In September, at the SPIE European Remote Sensing symposium in Berlin, Heipke will discuss the ways in which deep learning is used in geospatial-related aerial and satellite image analysis. "In terms of mapping," he says, "we pretty much have most of the raw data: we now have satellites which provide nearly one-meter resolution every day, of just about every corner on Earth; 40 years ago, we had one satellite which would come by only every few weeks." But, he says, data can become outdated very quickly. "In some ways, having mapped the Earth is good news, because we can use the data we already have to train a supervised image classification system: if we take, say, our Geographic Information System (GIS) database, which contains all the geospatial data acquired some time ago, and look at a more recent image, there will be some changes, but most of it will be the same. We then have a chance to work out from the image data and the GIS data which parts have changed, and we can use those to predict change as well."
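The workflow Heipke describes, training a classifier on the existing (slightly outdated) GIS labels and flagging where its output on a newer image disagrees with that layer, can be sketched roughly as follows. The class codes and the toy scene are illustrative assumptions, not taken from any particular system:

```python
import numpy as np

# Illustrative class codes (assumed): 0 = forest, 1 = farmland, 2 = settlement

def flag_changes(gis_labels: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels where the classifier's output on the new
    image disagrees with the (possibly outdated) GIS layer.

    The classifier is assumed to have been trained on the GIS labels
    themselves: since most of the scene is unchanged, those labels are
    mostly correct, which is exactly the trick described above.
    """
    return gis_labels != predicted

# Toy 4x4 scene: the GIS layer says everything is farmland (1), but the
# classifier sees settlement (2) in one corner of the new image.
gis = np.ones((4, 4), dtype=int)
pred = gis.copy()
pred[2:, 2:] = 2

mask = flag_changes(gis, pred)
print(int(mask.sum()))  # 4 pixels flagged as possibly changed
```

In a real system the comparison would be done per GIS object rather than per pixel, with some tolerance for classification noise, but the core idea of disagreement-as-change-candidate is the same.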

This does require, Heipke notes, a robust system that utilizes both automation and human decision-making: "Let's say, according to the earlier version of the database, there should be a field, but the classification results reveal, 'This really looks like a roundabout.' Of course, you have two choices: you can just update the database and say, 'Okay, it's a roundabout,' but the safer way may be for the computer to suggest to the human operator, 'Hey, why don't you change that part of the database into something other than a field? I, the computer, suggest it could be a roundabout.'"
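This "suggest rather than overwrite" policy could look something like the sketch below; the detection tuple layout and the `Suggestion` record are hypothetical, chosen only to illustrate the human-in-the-loop step:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    object_id: int
    old_class: str
    new_class: str
    confidence: float

def review_queue(detections):
    """Turn classifier detections into suggestions for a human operator.

    Each detection is (object_id, old_class, new_class, confidence).
    Nothing is written to the database automatically: every apparent
    change becomes a Suggestion that an operator confirms or rejects.
    """
    return [Suggestion(*d) for d in detections
            if d[1] != d[2]]  # only objects whose class appears to have changed

detections = [
    (101, "field", "roundabout", 0.91),  # the example from the interview
    (102, "forest", "forest", 0.99),     # unchanged: no suggestion needed
]
for s in review_queue(detections):
    print(f"Object {s.object_id}: {s.old_class} -> {s.new_class} "
          f"(confidence {s.confidence:.2f}) - please confirm")
```

The design choice is simply that the machine narrows the operator's attention to the flagged objects, while the final write to the database stays with the human, as Heipke advocates.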

[Figure] Semi-automated map updating: top, superimposition of aerial image and GIS database content (green: forest; yellow: farm land; reddish: settlement area); bottom, GIS objects (in red) which were automatically found to need an update.


For the moment, the final decision would be human-made. "I've been deeply involved in these developments," says Heipke. "And one of the things I have come to learn is how brilliantly the human brain works at automatically processing images. Even after all the success of deep learning, it's nowhere near what we do every day."

The human brain remains more reliable than the current automation of deep learning because ultimately, he says, humans are more flexible. "We can combine what we learn from examples and what we learn from abstract mathematics," Heipke points out. "So after you've seen, say, a dog run across the street, when you're driving, your brain can substitute a kid in place of the dog. And then things change dramatically in terms of consequences and therefore which actions need to be taken, and that impacts how you drive. Even though you might not have seen a kid run across the street in real life, you know about kids, you know about families; linking such unrelated parts of our human life is something I don't see deep learning being able to do at the moment. We're not operating in the future, we're not talking about 50 years from now, but, for the next couple of years I'm very certain that this is not going to happen."

How far can deep learning go?
Heipke's central point is that humans have a whole history of innate knowledge that places us a little bit ahead of computer learning for the moment. "I would claim this is the case," he agrees. "It's about these disparate things which we have no apparent difficulty at all to link; for us, these connections in our mind, they just happen."

That said, deep learning still does plenty of heavy lifting when it comes to our mapping endeavors. "Today, what we do is we take an image, so it could be from a satellite, it could be from your smartphone, could be anything in between, could be from a microscope," says Heipke. "And we start automatically processing that image. You can call it automatic image processing or artificial intelligence or, if you want the buzzword, deep learning; it's pretty much the same thing." That deep-learning element works within a system of remote-sensing technologies that monitor nearly every aspect of the world around us, from agriculture to coastlines, from urban sprawl to green forests. "In agriculture, there's a constant change from day to day as the plants grow," says Heipke. "And they should be growing in the way I, as a farmer, want them to grow. So if that's not the case, I may need to use more water or more fertilizer or whatever. So that's observing and monitoring your food crop. A totally different task needs to be carried out after a natural disaster: insurance companies are interested in seeing what the damage is, and typically they will also want to see what that area looked like before the disaster, just to make sure that whatever is being claimed as having been destroyed in the disaster had in fact not been destroyed beforehand. Forestry, whether it's monitoring similar to the agriculture case or monitoring for illegal logging, is a hot topic in areas like Brazil." Other applications involving automatic image processing include traffic monitoring, predicting potential traffic jams using car data and, of course, automatic driving.

There is some hype in terms of just how much deep learning can achieve, and how quickly; ultimately, in order to learn, the computers need examples beforehand. "It's replication, yes," says Heipke. "It's another thing entirely to duplicate human intelligence, where you have to be prepared for unknown, unexpected events. Having said that, in a normal mapping exercise, you can of course make sure that the computer will have seen more and more information, leading to more fully automated tasks." It's this concept that Heipke says he thrives on at the moment. "This idea of trying to see how much you can and should automate the image processing is a very complicated, very difficult subject, but a very interesting one as well. That's the most interesting aspect at the moment: trying to see how far we can push automation."