In recent years, pressure from the aerospace industry and other quarters to make satellites smaller and smarter has driven significant research and development into using existing, reliable equipment for an increasing range of missions. For example, substantial effort has gone into recruiting star trackers, currently the most accurate attitude sensors on board spacecraft, to perform space-based surveillance of space objects and to guide autonomous rendezvous. In these missions, the star trackers exploit the reflectivity (albedo) of space objects to observe orbiting spacecraft, automatically detecting and tracking specific targets and estimating their motion parameters, such as orbit and pose (i.e., position and attitude). Although the image background seen by a star tracker (starlight) is uncomplicated, image smearing, background clutter, and artifacts caused by special illumination conditions and relative motion between observer and target all present particular challenges.
Much of the research in this area has focused on autonomous satellite detection, model-based satellite tracking, and motion estimation of specific targets. However, current solutions depend on other specialized sensors, such as advanced video-guidance sensors (image sensors that employ active laser sources) and light detection and ranging (lidar). Although these sensors have many advantages, ongoing computer-vision research suggests that star trackers can perform the same functions while decreasing the power consumption and mass of the satellites. Moreover, current solutions require manual operations in the initial stages, whereas a well-trained computer-vision system can automatically detect, track, and analyze targets. Perfecting such systems is now a major focus of the field.
We have designed software that simulates space missions using star trackers (see Figure 1). The software was applied to far-range (100–10 km), mid-range (10–1 km), and near-range (1 km–100 m) operations and simulated the pictures taken by on-board star trackers (see Figure 2). In the far range, the observer satellite rotates slowly to estimate its own attitude and identifies nonstars (e.g., sensor noise and targets). Based on these results, the star tracker captures the target in its field of view and measures the azimuth and elevation angles to estimate the target's orbit. After completing the surveillance, the system tracks the target in orbit and maneuvers to get closer. The image-processing modules in the mid-range include autonomous target detection, tracking, and location. In the near range, pose estimation of the target is the essential functionality for autonomous rendezvous.
Figure 1. Modular structure of the space-surveillance simulation software application. C++: Programming language.
Figure 2. Images from the simulation. (a) The space mission as shown by the Satellite Tool Kit. (b) Sensor image in the mid-range (15,972m). (c) Sensor image in the near range (494m). (d) Computer-aided design model of the target.
Previous methods1–4 tackled problems such as star identification, nonstar detection, and orbit estimation in the far range. These techniques assume that the observer satellite is in a static mode. Consequently, image smear and artifacts (nonstars) caused by the rotation rate of the observer satellite are a substantial drawback of these methods. Our solution aims to achieve autonomous star identification in the presence of sensor motion and artifacts. An additional focal point of our research is to use star trackers to estimate the pose of targets.5,6 We have shown how a learning-based method enables such estimates employing image-based sensors. Our approach does not require targets to have specialized markers, nor does it call for initial guesses or stringent motion constraints to reduce computational complexity.
We propose a novel recognition algorithm to solve nonstar identification in the star image.7 First, we define a pattern, called a flower code, composed of angular distances and circular angles to describe the properties of the pivot star. Second, we employ a three-step strategy to find the correspondence between the sensor and catalog patterns, including initial lookup-table matching, cyclic dynamic matching, and validation. The flower code and matching strategy together form a complete scheme that is stable against a range of errors in star position introduced by the spacecraft's tumbling rate, and is effective at distinguishing between stars and nonstars, as well as listing the nonstars for tracking.
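A minimal sketch of the flower-code idea follows, assuming the simplest plausible reading: each neighbor star contributes a "petal" consisting of its distance from the pivot star and its circular angle around it, and the petals are ordered cyclically so the pattern can be matched under rotation. The exact encoding in the published algorithm may differ.

```python
import math

# Illustrative "flower code"-style pattern (assumed details, not the authors'
# exact formulation): each neighbor star yields a (distance, circular angle)
# pair about the pivot star, sorted by angle for cyclic matching.

def flower_code(pivot, neighbors):
    """pivot, neighbors: (x, y) star centroids in image coordinates.
    Returns petals sorted by circular angle: [(distance, angle), ...]."""
    px, py = pivot
    petals = []
    for (x, y) in neighbors:
        dx, dy = x - px, y - py
        dist = math.hypot(dx, dy)                  # separation from the pivot
        ang = math.atan2(dy, dx) % (2 * math.pi)   # circular angle around the pivot
        petals.append((dist, ang))
    petals.sort(key=lambda p: p[1])                # cyclic ordering by angle
    return petals
```

Matching such a pattern against a catalog pattern can then tolerate a cyclic shift of the petals, which is what the cyclic dynamic matching step exploits.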
As soon as target detection has been achieved, we proceed with estimating the pose of the targets to enable autonomous rendezvous or proximity operations.8 Figure 2 shows a computer-aided design model of a hypothetical satellite and target images. We have chosen to base our system on silhouettes, which can be extracted fairly reliably from images. Pose is denoted by the 6D vector X = [α (yaw), β (pitch), γ (roll), Tx, Ty, Tz], where T denotes translation. For the pose-estimation problem, we are interested in recovering X from the image features Z: X = f(Z). We use multiple mapping functions to estimate the possible poses of the query image and, finally, resolve the ambiguity between candidate poses through tracking.
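The learning-based mapping X = f(Z) can be sketched as a lookup against a training set of (feature, pose) pairs. The snippet below is an assumed stand-in: a k-nearest-neighbour query replaces the authors' learned mapping functions, but it illustrates how several candidate poses can be returned for later disambiguation by tracking.

```python
import numpy as np

# Assumed sketch of learning-based pose estimation: a training set maps
# silhouette feature vectors Z to 6D poses X = (yaw, pitch, roll, Tx, Ty, Tz).
# A k-nearest-neighbour lookup stands in for the learned mapping functions,
# returning several candidate poses; ambiguity is resolved later by tracking.

def candidate_poses(query_z, train_z, train_x, k=3):
    """Return the k training poses whose features lie closest to query_z."""
    d = np.linalg.norm(train_z - query_z, axis=1)  # feature-space distances
    idx = np.argsort(d)[:k]                        # indices of k nearest samples
    return train_x[idx]
```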
In summary, we have described our efforts to extend the use of star trackers to space-surveillance situations. We have also presented ongoing computer-vision research with star trackers in carrying out two different mission functionalities. Results based on simulations show that our methods perform well. Because the special lighting conditions in space usually generate high-contrast images, description of the target features and those of the stars is a key stage in both techniques.9 We have developed a flower code to represent the star pattern for autonomous star identification and employed a generic Fourier descriptor to represent the shape of targets for pose estimation. In the near future we intend to incorporate the two functionalities into a real system to test performance. We are currently testing the methods on real sky images from a ground-based camera, with acceptable results.
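The generic Fourier descriptor mentioned above can be sketched as follows, in a simplified form of the standard construction: the silhouette is resampled on a polar grid about its centroid, and the 2D Fourier-transform magnitudes, normalized by the DC term, give a rotation- and scale-insensitive shape descriptor. The grid resolution and sampling details here are assumptions, not the authors' parameters.

```python
import numpy as np

# Simplified generic Fourier descriptor (GFD) for a binary silhouette mask.
# Sampling parameters (n_rad, n_ang) are illustrative assumptions.

def gfd(mask, n_rad=8, n_ang=16):
    """Return a normalized GFD-style feature vector for a binary mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # shape centroid
    r = np.hypot(ys - cy, xs - cx)
    t = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)
    rmax = r.max() or 1.0
    polar = np.zeros((n_rad, n_ang))
    ri = np.minimum((r / rmax * n_rad).astype(int), n_rad - 1)
    ti = np.minimum((t / (2 * np.pi) * n_ang).astype(int), n_ang - 1)
    np.add.at(polar, (ri, ti), 1.0)                    # polar occupancy histogram
    F = np.abs(np.fft.fft2(polar))                     # spectral magnitudes
    return (F / F[0, 0]).ravel()                       # DC normalization
```

Rotating the silhouette shifts the angular axis of the polar grid, which leaves the spectral magnitudes largely unchanged; this is what makes such descriptors attractive for matching target shapes across unknown orientations.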
Jiaqi Gong, Junbin Gong, Jinwen Tian
National Laboratory for Multi-spectral Information
Huazhong University of Science and Technology (HUST)
Jiaqi Gong received his BS in communication engineering at the China University of Geosciences (Wuhan) in 2004. He is currently a PhD candidate at the Institute for Pattern Recognition and Artificial Intelligence. He specializes in object recognition and space surveillance. In 2009, he presented a paper on the flower algorithm at SPIE's Sixth International Symposium on Multispectral Image Processing and Pattern Recognition.
Junbin Gong is a postdoctoral research associate at the Institute for Pattern Recognition and Artificial Intelligence. His research focuses on integrated navigation technology, image and video analysis, and high-speed signal-processing system design.
Jinwen Tian received his PhD in pattern recognition and intelligent systems from HUST (1998). He is a professor and PhD supervisor in the areas of pattern recognition and artificial intelligence. His main research topics include remote-sensing image analysis, wavelet analysis, image compression, computer vision, and fractal geometry.