Optics of Google Earth
Custom optics and sensor systems power Google Earth imagery.
Since first being unveiled more than a decade ago, Google Earth has awed users the world over with its ability to let us vicariously and remotely travel the globe for free, touching down in cities and landscapes far and wide using nothing more than a basic computer and the Internet. Google Earth displays images of the Earth’s surface at varying resolutions, looking straight down or at an oblique angle, allowing us to see things like cities and houses.
The roots of Google Earth lie in EarthViewer 3D, a program created by Keyhole, a software development company specializing in geospatial data visualization applications. Initially released in 2001, EarthViewer was the first product to stream nearly unlimited, high-quality 3D imagery over the Internet, making satellite and aerial imagery accessible to the public.
In October 2004, Google acquired Keyhole and one year later re-released EarthViewer as Google Earth.
Generally speaking, Google Earth works by superimposing images obtained from satellites, aerial photography, and geographic information systems (GIS) onto a three-dimensional globe. This creates, in essence, a giant, multi-terabyte, high-resolution image of the entire Earth.
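The scale of that "giant image" follows from how web mapping services tile the globe. As a rough sketch (using the standard Web-Mercator tile-pyramid math common to web maps; Google Earth's exact tiling scheme is proprietary and not described here), ground resolution per pixel halves with each zoom level:

```python
import math

EARTH_RADIUS_M = 6_378_137  # WGS84 equatorial radius

def ground_resolution_m(zoom: int, tile_size: int = 256) -> float:
    """Meters per pixel at the equator for a Web-Mercator tile pyramid."""
    return 2 * math.pi * EARTH_RADIUS_M / (tile_size * 2 ** zoom)

for z in (0, 10, 19):
    print(f"zoom {z:2d}: {ground_resolution_m(z):12.2f} m/px")
```

At zoom 0 the whole Earth fits in one 256-pixel tile at roughly 156 km per pixel; by zoom 19 each pixel covers about 30 cm of ground, comparable to the best commercial satellite imagery.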
While most of the imagery found in Google Earth is captured by commercial satellites, some is provided to Google by city and state governments, and some is even acquired via high-resolution cameras mounted on kites and balloons.
One of the primary providers of Google Earth images is DigitalGlobe, which has partnered with Google since 2005. Founded in 1992 as WorldView Imaging, DigitalGlobe boasts a fleet of commercial satellites that provide images to a multitude of government and commercial customers around the world.
IKONOS, launched in 1999 by Space Imaging and later folded into DigitalGlobe’s fleet through the company’s 2013 merger with GeoEye, was the first commercial sub-meter-resolution imaging satellite. It was followed two years later by DigitalGlobe’s own QuickBird, which at 61 cm was the highest-resolution commercial imaging satellite then available.
In 2007, DigitalGlobe launched WorldView-1 (50-cm resolution), followed by WorldView-2 in 2009 (46-cm resolution) and WorldView-3 in 2014 (31-cm resolution); GeoEye-1 (41-cm resolution), launched in 2008, joined the fleet through the GeoEye merger. WorldView-4 (31-cm resolution) was planned for launch in October 2016.
“Typically we think of our satellites as big digital cameras. And on any given day, we have four of them orbiting the Earth collecting more than 3 million square km of imagery daily,” said Kumar Navulur, director of next-generation products at DigitalGlobe. “This turns out to be approximately 1.2 billion square km every year, or roughly 6-7 times the Earth’s land mass. And we have been collecting since 2000, so we have close to 6 billion square km of imagery going back to 2000.”
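Those figures hang together. As a quick sanity check (the value for Earth's land area is an approximation, not a number from the article):

```python
daily_km2 = 3_000_000           # "more than 3 million square km ... daily"
yearly_km2 = daily_km2 * 365    # close to the quoted "approximately 1.2 billion"
land_area_km2 = 149_000_000     # Earth's total land area (approximate)

print(yearly_km2 / land_area_km2)  # ≈ 7.3 land-mass equivalents per year
```

That ratio lands right in the quoted "roughly 6-7 times the Earth's land mass."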
Capturing images in space is “a whole different ballgame” from capturing images on Earth, Navulur emphasized, and it dramatically influences hardware and software designs for the satellites and imaging components.
“When you talk about gathering images in space, you need to realize that our satellites go from pole to pole, so half the time they are in the Sun and half the time in the dark,” Navulur explained. “As a result, the temperature changes are extreme as well — in most cases a few hundred degrees from dark to light — so we have to ensure that the temperature on the satellite remains constant. To do this we include a thermal blanket so the electronics have a consistent temperature, which means the data is always consistent and accurate as well.”
Another critical factor is stability. “We build big satellites because when you are going 17,000 miles per hour, any little shaking or vibrations can cause your images to be blurry or to be off by a few hundred meters,” he said.
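A back-of-the-envelope calculation shows why tiny pointing errors matter so much; the orbit altitude used here is an assumed, WorldView-3-like value, not a figure from the article:

```python
ALTITUDE_M = 617_000  # assumed orbit altitude, roughly WorldView-3's

def ground_offset_m(pointing_error_urad: float, altitude_m: float = ALTITUDE_M) -> float:
    """Small-angle approximation: ground displacement = altitude * angular error."""
    return altitude_m * pointing_error_urad * 1e-6

print(ground_offset_m(100))  # 100 µrad of jitter → ~62 m on the ground
```

From roughly 600 km up, an angular error of only a few hundred microradians (a few hundredths of a degree) already shifts the image footprint by hundreds of meters, which is why so much mass and engineering goes into keeping the platform still.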
But perhaps the most important feature is the optical imaging system, which is designed to capture sunlight reflected off the Earth’s surface, be it land mass, water, ice, sand, pavement, buildings, etc. This data is then compressed and transmitted back to Earth for processing and archiving. WorldView-3, for example, can send 1.2 GB of data to Earth every second.
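A rough estimate makes clear why compression before transmission is essential. The scene size, pixel depth, and ground sample distance below are illustrative assumptions, not figures confirmed for any specific downlink:

```python
scene_km2 = 10_000          # one large collection pass (assumption)
gsd_m = 0.31                # WorldView-3-class panchromatic ground sample distance
bits_per_pixel = 11         # assumed radiometric depth
link_bytes_per_s = 1.2e9    # "1.2 GB of data to Earth every second"

pixels = scene_km2 * 1e6 / gsd_m ** 2
raw_bytes = pixels * bits_per_pixel / 8
print(f"{raw_bytes / 1e9:.0f} GB raw, "
      f"{raw_bytes / link_bytes_per_s / 60:.1f} min to downlink uncompressed")
```

Even at 1.2 GB/s, a single uncompressed scene of this size would tie up the downlink for on the order of minutes, and the satellite collects continuously.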
In addition to panchromatic and multispectral image data, most of the satellites also collect data outside the visible RGB bands, including near-infrared and shortwave-infrared data.
The optical payload in DigitalGlobe’s satellites comprises an optical telescope unit and a sensor subsystem. The optics in the telescope include a 1.1-m clear aperture; a primary mirror, secondary mirror, tertiary mirror, and two folding mirrors; and a proprietary fiber material that holds them all in alignment. The telescope also features an outer barrel assembly and a sun shield that works like a sun visor or baseball cap to keep direct sun off the telescope and help maintain optical alignment.
“When manufacturing a 1.1 m lens, the curvature needs to be really perfect because at the end, it’s all about the data quality,” Navulur said. “You need perfect curvature of the lens so that the light bouncing off the Earth is captured precisely.”
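A simple diffraction-limit estimate shows why an aperture around 1.1 m, figured nearly perfectly, is what sub-half-meter imagery demands. The orbit altitude and wavelength below are assumptions (roughly WorldView-3's orbit and green light), not values given in the article:

```python
def diffraction_gsd_m(aperture_m: float, altitude_m: float,
                      wavelength_m: float = 550e-9) -> float:
    """Rayleigh-limited angular resolution (1.22 * lambda / D),
    projected onto the ground from orbit altitude."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

print(diffraction_gsd_m(1.1, 617_000))  # ~0.38 m
```

The result is in the same range as the 31- to 50-cm resolutions of the WorldView satellites: the physics of diffraction, not just detector size, sets the floor, so any figuring error in the optics directly degrades the delivered resolution.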
Harris Corp., which has been supplying optical telescope units and sensor subsystems to DigitalGlobe for many years, specializes in just this sort of precision, noted Craig Oswald, manager of commercial imaging in the space and intelligence systems segment at Harris.
“We have leveraged our capabilities from our heritage telescopes into a medium-sized optical system for DigitalGlobe,” Oswald said. “The wavefront error in our optics is very low, coupled with high MTF (modulation transfer function), line-of-sight stability, and precision structures.”
Also unique to the telescope are the thermally controlled optics, which are designed to ensure that the optics and support structure don’t fall out of alignment, he added.
“While on the ground, many materials will absorb a small amount of water or moisture,” Oswald said. “However, once in space, the water will be released, slightly shrinking the telescope by some number of microns. Therefore, Harris utilizes a unique, patented fiber material that produces the high resolution imagery that DigitalGlobe desires for their customers.”
All Harris optics are finished using magnetorheological finishing (MRF), a precise, fluid-based optical polishing method that can impart complex, highly detailed features on optical surfaces with very few defects, he added.
“Mirror manufacturing and precision alignment are very difficult, but using MRF has helped us streamline the polishing, reduce costs, and speed up the process,” Oswald said.
The camera itself is also unique: it does not use a shutter. Rather, it features an imaging bar with a pushbroom sensor, a line of detectors arranged perpendicular to the spacecraft’s direction of flight. These detectors can gather more light than other types of sensors because they dwell on a given area for a longer time, like a long exposure on a camera.
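A minimal toy model of pushbroom imaging, with a stand-in radiance function invented purely for illustration: the detector line is fixed, and spacecraft motion supplies the second image dimension.

```python
def ground_radiance(x: int, y: int) -> int:
    """Stand-in for sunlight reflected from the ground at
    cross-track position x and along-track position y."""
    return (x * y) % 7

def pushbroom_scan(width: int, n_lines: int) -> list:
    """At each time step the fixed cross-track detector line reads one row;
    stacking successive rows as the spacecraft advances builds a 2-D image."""
    return [[ground_radiance(x, t) for x in range(width)]
            for t in range(n_lines)]

image = pushbroom_scan(width=6, n_lines=4)
print(image)
```

The sensor only ever "sees" one line of ground at a time, which is why there is no need for a shutter: the image strip grows continuously as the satellite flies.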
In a single pass over a given area, the sensors can capture a swath 16 km (nearly 10 miles) wide and 100 to 200 km long. They can also capture contiguous areas by sweeping two swaths side by side, so in a single 100 × 100-km pass the satellite can capture 10,000 to 15,000 square km, according to Navulur.
“This is important because it means that we can, for example, capture the entire city of Denver in one pass,” he said.
The incoming light is focused onto arrays of CCD chips: eight to 12 banks of arrays (depending on the satellite), each containing 10,000 to 12,000 detectors. Harris also manufactures these sensor subsystems, which have a high signal-to-noise ratio so that the information being extracted is more “pure,” Navulur noted.
“We have patented technology that is unique and used in all our sensor systems that allows for the collection of panchromatic, multispectral, and shortwave infrared imagery at the same time,” Oswald said.
“But the data processing unit is really the workhorse. We put out 6 gigabits per second of data and produce a lot of imagery, and it all has to get down to the ground. Because DigitalGlobe can’t downlink uncompressed images, they rely on Harris technology to compress them in a way that gets them to the ground quickly.”
The camera also features time-delay integration settings that automatically control how much light is detected, depending on what type of surface the satellite is flying over.
“If you are capturing reflected light from water, for example, it has a very small signal because all of the light is absorbed by the water. So over water, you slow down the scan so you get more light,” Navulur said.
“But on sand and ice, it is very different because so much light is reflected. We want to make sure when collecting this data that we don’t saturate. And although there are manual settings, most of the time we let the satellite figure out the right settings.”
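A toy model captures that trade-off. In time-delay integration (TDI), the same ground pixel is summed across several detector stages as it sweeps along the array; the signal levels and full-well capacity below are invented for illustration, and real systems also adjust the line rate:

```python
def tdi_read(signal_per_stage: float, stages: int,
             full_well: float = 100.0) -> float:
    """Sum the same ground pixel across N TDI stages,
    clipping at the detector's saturation (full-well) level."""
    return min(signal_per_stage * stages, full_well)

# Dark water: weak signal, so many stages build up usable counts.
water = tdi_read(signal_per_stage=1.5, stages=32)      # 48.0, well exposed
# Bright sand or ice: strong signal, so few stages avoid saturation.
sand = tdi_read(signal_per_stage=20.0, stages=4)       # 80.0, still unsaturated
sand_bad = tdi_read(signal_per_stage=20.0, stages=32)  # clipped at 100.0
print(water, sand, sand_bad)
```

Choosing too many stages over a bright target saturates the detector and destroys radiometric information, which is exactly the situation the automatic settings are there to avoid.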
–Kathy Kincade is a freelance science and technology writer based in California (USA).