Optics of Google Earth
Custom optics and sensor systems power the imagery in Google Earth. (An SPIE Professional magazine article.)
Since it was first unveiled more than a decade ago, Google Earth has awed users around the world with its ability to let us travel the globe remotely and for free, touching down in cities and landscapes far and wide, using nothing more than a basic computer and an Internet connection.
Google Earth displays images of the Earth's surface at varying resolutions, viewed either straight down or at an oblique angle, allowing us to make out features such as cities and individual houses.
Generally speaking, Google Earth works by superimposing images obtained from satellites, aerial photography, and geographic information systems (GIS) onto a three-dimensional globe. This creates, in essence, a giant, multi-terabyte, high-resolution image of the entire Earth.
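Google has not published the exact details of its imagery pipeline, but systems that drape imagery over a globe typically divide the map into a pyramid of tiles indexed by zoom level. As a rough, illustrative sketch (not Google's actual code), the widely used Web Mercator tiling scheme maps a latitude/longitude pair to a tile index like this:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert a latitude/longitude (degrees) to Web Mercator tile
    indices (x, y) at the given zoom level. At zoom z the world is
    divided into 2^z by 2^z tiles; zoom 0 is a single tile."""
    n = 2 ** zoom
    lat = math.radians(lat_deg)
    # Longitude maps linearly; latitude uses the Mercator projection.
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

# The equator/prime meridian falls in the lower-right tile at zoom 1.
print(latlon_to_tile(0.0, 0.0, 1))  # → (1, 1)
```

Each deeper zoom level quadruples the tile count, which is how a viewer can stream only the high-resolution tiles covering the area currently on screen rather than the entire multi-terabyte mosaic.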
While most of the imagery found in Google Earth is captured by commercial satellites launched by DigitalGlobe and other companies, some is provided to Google by city and state governments, and some is even acquired via high-resolution cameras mounted on kites and balloons.
The roots of Google Earth lie in the 2001 release of EarthViewer 3D, the first product to stream nearly unlimited, high-quality 3D imagery over the Internet, making satellite and aerial imagery accessible to the general public.
In October 2004, Google acquired Keyhole, the company that developed EarthViewer 3D, and one year later re-released EarthViewer as Google Earth.
Today, custom optics and sensor systems power the imagery found on Google Earth.
An article in the October 2016 issue of SPIE Professional looks at the cameras, telescopes, sensors, optical coatings, and other optics and photonics devices and technologies that keep the systems stable and thermally constant in space.