Extracting 3D images from lunar orbiter Kaguya data

Geometric analysis of movies captured by a single high-definition TV camera gives depth to 2D images from space.
14 March 2011, SPIE Newsroom. DOI: 10.1117/2.1201102.003406

The Japanese lunar orbiter SELENE (Selenological and Engineering Explorer)1 was launched from the Japan Aerospace Exploration Agency's Tanegashima Space Center on 14 September 2007. Its mission ended on 11 June 2009. The spacecraft is now better known by its nickname Kaguya, after a legendary Japanese moon princess. The scientific objectives were to determine the origin and evolution of the Moon and take measurements of the lunar environment. Fourteen scientific instruments were mounted on the orbiter to achieve these objectives, and it was also fitted with telephoto and wide-angle high-definition TV (HDTV) cameras to record still and moving pictures of the Moon and Earth.2,3 Among these are views of the full rise of Earth and well-known craters on the lunar surface.4

There is substantial interest in rendering these views as realistically as possible. Accordingly, we wished to convert the 2D moving picture data captured by a single HDTV camera mounted on the orbiter into stereoscopic (3D) images.5 Figure 1 details the camera imaging in schematic form. Here, Pl and Pr correspond to the positions of the orbiter at times t1 and t1+Δt, and Δt denotes the time offset. The zl- and zr-axes correspond to the optical axes of the cameras at times t1 and t1+Δt, and Pc denotes the convergence point of the axes. If we compare the images captured by the HDTV camera at times t1 and t1+Δt, Earth appears to have moved vertically with respect to the lunar surface as a result of the distance traveled by the orbiter during time Δt. Consequently, rotating these two images by 90 degrees produces a pair of images with a horizontal disparity in Earth's position with respect to the Moon. Viewing the images through a device that restricts display of the left and right pictures to the corresponding eyes gives the observer a stereoscopic impression of depth.
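The frame-pairing idea above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the rotation direction and the assignment of frames to eyes depend on the orbit direction and are assumptions here.

```python
import numpy as np

def stereo_pair_from_frames(frame_t, frame_t_dt):
    """Turn two frames from a single moving camera into a stereo pair.

    frame_t and frame_t_dt are frames captured at times t1 and t1 + Δt.
    The orbiter's motion makes Earth shift vertically between them, so
    rotating both frames by 90 degrees converts that vertical shift into
    the horizontal disparity a stereoscopic display expects.
    """
    # Which rotation direction, and which frame feeds which eye, depends
    # on the orbit direction; the choice below is an assumption.
    left = np.rot90(frame_t)       # left-eye image
    right = np.rot90(frame_t_dt)   # right-eye image
    return left, right
```

Any time offset Δt that yields a comfortable disparity can be used; larger offsets give a stronger but potentially more distorted depth impression.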

Figure 1. Schematic diagram of capture specifics. Pl and Pr correspond to the positions of the orbiter at times t1 and t1+Δt, and Δt denotes the time offset. The zl- and zr-axes correspond to the optical axes of the cameras at times t1 and t1+Δt, and Pc denotes the convergence point of the axes.

In general, stereoscopic images are captured using a parallel or converged (toed-in) camera configuration.6 Figure 1 shows the positions of the left (Pl) and right (Pr) cameras set symmetrically in this arrangement. In contrast, the position from which the right image is actually captured, as the orbiter moves around the Moon collecting image data, is displaced along the zr-axis from point Pr, where ideally it should be for convergence. This displacement causes spatial distortion that must be compensated to avoid viewing discomfort.

A typical approach to solving the problem is to correct for the displacement along the zr-axis in the right image using depth data for the lunar surface, which can be obtained with computer graphics software. The orbiter also carries a laser altimeter that measures the required altitude data.7 However, for simplicity we applied a less-complicated method to compensate for the spatial distortion. We generated depth data assuming that the lunar surface is spherical (i.e., we ignored the craters) and used this data to correct the right images taken at Pr. Figure 2 shows a flow chart that describes the process.
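The spherical-Moon simplification amounts to intersecting each camera ray with a sphere. The sketch below illustrates that geometry under assumed values: a nadir-pointing camera frame and a nominal 100 km orbit altitude; the function name and conventions are ours, not from the authors' implementation.

```python
import numpy as np

# Illustrative values; the real geometry comes from the orbit data.
R_MOON = 1737.4e3   # mean lunar radius [m]
ALTITUDE = 100e3    # nominal orbit altitude [m] (assumption)

def spherical_depth(ray_dirs, altitude=ALTITUDE, radius=R_MOON):
    """Depth to a spherical Moon along unit viewing rays.

    ray_dirs: (N, 3) unit vectors in a camera frame whose origin sits at
    (0, 0, radius + altitude), so the -z axis points at the sphere's
    centre. Returns the distance to the near intersection, or NaN for
    rays that miss the sphere or hit it behind the camera.
    """
    o = np.array([0.0, 0.0, radius + altitude])   # camera centre
    # Solve |o + s*d|^2 = radius^2 for s: s^2 + 2(d.o)s + |o|^2 - r^2 = 0.
    b = ray_dirs @ o                              # per-ray d.o
    c = o @ o - radius**2
    disc = b**2 - c
    s = -b - np.sqrt(np.where(disc >= 0, disc, np.nan))   # near root
    return np.where((disc >= 0) & (s > 0), s, np.nan)
```

For the nadir ray (0, 0, −1) this returns exactly the orbit altitude, which is a quick sanity check on the quadratic.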

Figure 2. Flow chart for converting HDTV movie images into stereoscopic images corrected for spatial distortion. HD: High definition.

Figure 3 shows stereoscopic images without and with our corrections. The top right corner of each picture is enlarged. Both the left and right images were offset horizontally so that the lunar surface was not directly in front of the display. A vertical disparity can be observed in the stereoscopic image in Figure 3(a), where no correction has been made. The disparity is reduced by correcting the right image using the depth data that we generated by assuming a spherical lunar surface: see Figure 3(b).
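The horizontal offset applied to both images adds a uniform disparity that moves the whole scene in depth relative to the display plane. A minimal sketch of that step, with an assumed sign convention (positive offset pushes the scene back) and zero-filled edges:

```python
import numpy as np

def shift_convergence(left, right, offset_px):
    """Add a uniform horizontal disparity to a stereo pair.

    Shifting the left and right images in opposite directions by
    offset_px moves the whole scene in depth, e.g. so the lunar surface
    does not appear in front of the display. The sign convention here
    is an assumption; columns vacated by the shift are zero-filled.
    """
    out_l, out_r = np.zeros_like(left), np.zeros_like(right)
    if offset_px > 0:
        out_l[:, offset_px:] = left[:, :-offset_px]   # shift left image right
        out_r[:, :-offset_px] = right[:, offset_px:]  # shift right image left
    else:
        out_l[:], out_r[:] = left, right
    return out_l, out_r
```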

Figure 3. Stereoscopic images (a) without and (b) with corrections for spatial distortion using data generated under the assumption that the Moon has a spherical surface.

In summary, we have described a method of converting moving pictures captured by a single HDTV camera into stereoscopic images. We rotated sequences from an orbiter movie of the Moon by 90 degrees to make Earth appear horizontally displaced in one image with respect to the other. We then used the rotated images as stereoscopic data to render pictures in 3D. However, the result was spatially distorted because the points traversed as the orbiter moved around the Moon differed from the symmetrical left and right camera positions of the ideal convergence configuration. Because these distortions cause discomfort for viewers, we added depth data to the right images that assumed a spherical Moon. We intend to continue our research in this direction to develop methods such as integral imaging and holography that convert 2D data into pictures that appear 3D without the aid of special glasses.

Masato Miura, Jun Arai, Hisayuki Sasaki, Makoto Okui, Fumio Okano
Science and Technology Research Laboratories Japan Broadcasting Corporation (NHK)
Tokyo, Japan
Masato Miura received his BS (2004), MS (2005), and PhD (2008) in computer and systems engineering from Kobe University, Hyogo, Japan. Since 2008 he has been with NHK. His current research interests include 3D TV systems.
Junichi Yamazaki
NHK Engineering Services Inc.
Tokyo, Japan
Shin-ichi Sobue
Japan Aerospace Exploration Agency
Ibaraki, Japan

1. http://www.kaguya.jaxa.jp/index_e.htm Kaguya (SELENE) home page, Japan Aerospace Exploration Agency. Accessed 16 January 2011.
2. S. Mitsuhashi, J. Yamazaki, M. Yamauchi, J. Tachino, HDTV system onboard lunar explorer Kaguya (SELENE), ITE Tech. Rep. 32, pp. 13-17, 2008. In Japanese.
3. R. Honda, S. Mitsuhashi, J. Yamazaki, M. Yamauchi, J. Tachino, M. Shirao, Initial results of imaging of lunar features by high-definition television (HDTV) on board SELENE (Kaguya), 39th Lunar Planet. Sci. Conf., 2008. http://www.lpi.usra.edu/meetings/lpsc2008/pdf/1876.pdf
4. http://wms.selene.jaxa.jp/selene_viewer/en/observation_mission/hdtv/ Kaguya image gallery, Japan Aerospace Exploration Agency. Accessed 20 January 2011.
5. M. Miura, J. Arai, J. Yamazaki, H. Sasaki, M. Okui, S. Sobue, F. Okano, Geometric analysis on stereoscopic images captured by single high-definition television camera on lunar orbiter Kaguya (SELENE), Proc. SPIE 7690, pp. 79600T, 2010. doi:10.1117/12.849635
6. A. Woods, T. Docherty, R. Koch, Image distortions in stereoscopic video systems, Proc. SPIE 1915, pp. 36-48, 1993. doi:10.1117/12.157041
7. H. Araki, S. Tazawa, H. Noda, Y. Ishihara, S. Goossens, S. Sasaki, N. Kawano, I. Kamiya, H. Otake, J. Oberst, C. Shum, Lunar global shape and polar topography derived from Kaguya-LALT laser altimetry, Science 323, pp. 897-900, 2009.