

Decision fusion for classifying hyperspectral imagery with high spatial resolution

Combining supervised and unsupervised feature recognition improves classification accuracy by alleviating the impact of trivial intraclass variations.
19 August 2009, SPIE Newsroom. DOI: 10.1117/2.1200908.1733

Hyperspectral imaging is a relatively new technology in remote sensing. It deploys hundreds of spectral bands to collect image data for the same area. The high spectral resolution offers the potential for more accurate land-cover classification than instruments with coarse spectral resolution, such as multispectral imaging sensors. However, the classification problem is challenging because of intrinsic intraclass variations: samples in the same class may have different spectral signatures. If a hyperspectral image also has high spatial resolution, the problem becomes more serious, since intraclass variations then exist in both the spectral and spatial domains.

As the accuracy of individual classifiers cannot be improved beyond certain hard limits, many studies have been undertaken to develop and analyze combinations of results from different classifiers. The general aim is to obtain better results than any classifier can deliver on its own.1,2 Unlike feature-level fusion, which extracts and combines features to improve performance, decision-level fusion combines the results from individual classifiers into a final decision. Most decision-fusion approaches use supervised classifiers as base learners: all the classifiers need training, so the results can only be as good as the training data. To avoid the possible negative influence of limited training-data quality, we propose a method that combines supervised and unsupervised classifiers.


Figure 1. Test 1 image.

In general, supervised feature recognition provides better classification than its unsupervised alternative. However, in addition to training-data limitations, supervised classifiers may overclassify some homogeneous areas. An unsupervised classifier, although possibly less powerful, generally recognizes such spectrally homogeneous areas fairly well. Fusing the two approaches may therefore yield better performance: the impact of trivial spectral variations is alleviated, and subtle differences between spectrally similar pixels are not overly exaggerated. Although the individual classifiers are pixel-based, the fused product yields results similar to those of object-based techniques.3 However, the overall performance is less sensitive to region segmentation.

Support-vector machine (SVM) and K-means clustering are typical supervised and unsupervised classifiers, respectively. An SVM classifier constructs a hyperplane that maximizes the margin between two classes, achieving high class separability, and SVMs have been used successfully for hyperspectral-image classification. A K-means classifier partitions the data cloud into clusters, assigning each data point to a cluster according to a minimum-distance criterion. After classification results have been obtained from both approaches, the K-means clustering map serves as a region segmentation of the SVM classification map: spatially adjacent pixels grouped into the same cluster by K-means are reclassified using the majority-voting rule on the SVM result. In other words, all pixels in each locally segmented region are assigned to the single class that most of its pixels belong to under the SVM-based decision.
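
The fusion step is simple enough to sketch. Below is a minimal illustration in Python, assuming scikit-learn's SVC and KMeans and SciPy's connected-component labeling; the RBF kernel, the cluster count, and the fuse_decisions name are our illustrative choices, not details from the article.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fuse_decisions(cube, train_pixels, train_labels, n_clusters=8):
    """Fuse a per-pixel SVM map with K-means segments by majority vote.

    cube:         (rows, cols, bands) hyperspectral image
    train_pixels: (n_samples, bands) training spectra
    train_labels: (n_samples,) integer class labels
    """
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)

    # Supervised step: per-pixel SVM classification.
    svm_map = SVC(kernel="rbf").fit(train_pixels, train_labels).predict(flat)
    svm_map = svm_map.reshape(rows, cols)

    # Unsupervised step: K-means clustering of the same pixels.
    km_map = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
    km_map = km_map.reshape(rows, cols)

    # Each spatially connected patch of one cluster acts as a segment;
    # every pixel in a segment is reassigned the segment's majority SVM label.
    fused = svm_map.copy()
    for c in range(n_clusters):
        segments, n_seg = ndimage.label(km_map == c)
        for s in range(1, n_seg + 1):
            mask = segments == s
            fused[mask] = np.bincount(svm_map[mask]).argmax()
    return fused
```

Note that the segmentation comes entirely from the unsupervised map, so the fused result inherits the spatial smoothness of the K-means clusters while keeping the class labels of the stronger supervised classifier.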

The hyperspectral data we used were acquired by the airborne Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor over the Mall in Washington, DC, using 210 bands covering 0.4–2.4μm at approximately 2.8m spatial resolution. The water-absorption bands were deleted, leaving 191 bands for analysis. The original data has 1280×307 pixels.
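
The bad-band removal amounts to a simple indexing operation. Here is a hypothetical sketch in Python; the article does not list which bands were deleted, so the indices below are placeholders chosen only to leave 191 of the 210 bands.

```python
import numpy as np

def drop_water_bands(cube, bad_bands):
    """Remove water-absorption bands from a (rows, cols, bands) cube."""
    keep = np.setdiff1d(np.arange(cube.shape[2]), bad_bands)
    return cube[:, :, keep]

# Hypothetical usage with a stand-in array for the DC Mall cube.
cube = np.random.rand(1280, 307, 210)
bad = np.r_[101:112, 136:144]             # 19 hypothetical band indices
print(drop_water_bands(cube, bad).shape)  # (1280, 307, 191)
```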

The original image was cropped to a subimage of 304×301 pixels, 'Test 1,' shown in Figure 1 in pseudocolor. It includes six classes: road, grass, shadow, trail, tree, and roof. Figure 2(a) shows the classification result using SVM. Compared with Figure 1, we can see that some misclassifications have occurred among roof, trail, and road pixels. Figure 2(b) shows the K-means classification map, where the misclassifications between roof and trail pixels are obvious. Figure 2(c) shows the fused decision, where the roof areas become smoother and many roof pixels previously misclassified as trail or road have been corrected. The overall accuracy (OA) improves from 92.86 to 96.71% and the κ coefficient from 0.9177 to 0.9593. OA is the proportion of correctly classified test samples, equivalent to a per-class accuracy average weighted by each class's share of the test set. The κ coefficient is related to OA but also corrects for the agreement expected by chance, which is computed from both correctly and incorrectly classified sample percentages.
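
For reference, both metrics follow directly from the confusion matrix. The sketch below (in Python, with hypothetical toy counts) computes OA as the observed agreement and κ as that agreement corrected for chance.

```python
import numpy as np

def oa_and_kappa(confusion):
    """OA and kappa from a confusion matrix (rows: reference, cols: predicted)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    oa = np.trace(c) / n                                # observed agreement
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2   # chance agreement
    return oa, (oa - pe) / (1 - pe)                     # Cohen's kappa

# Toy two-class example (hypothetical counts):
print(oa_and_kappa([[90, 10],
                    [ 5, 95]]))  # OA = 0.925, kappa = 0.85
```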


Figure 2. Classification results for Test 1 data. SVM: Support-vector machine.

The original image was also cropped to a second subimage, 'Test 2,' containing 266×304 pixels (see Figure 3). It includes seven classes: road, grass, water, shadow, trail, tree, and roof. Figure 4(a) shows the classification result using SVM. Compared with Figure 3, we can see some misclassifications among roof, trail, and road pixels, as well as among shadow, road, and water pixels. Figure 4(b) shows the K-means classification map, containing obvious misclassifications between roof and trail pixels, and between shadow and water pixels. Figure 4(c) shows the fused decision, where the improvement in roof regions is significant. The OA value increases from 95.58 to 98.33% and the κ coefficient from 0.9465 to 0.9798.


Figure 3. Test 2 image.

Figure 4. Classification results for Test 2 data.

In conclusion, we propose a decision-fusion approach that combines supervised and unsupervised classifiers. The final output takes advantage of the power of the SVM-based method in class separation and the capability of the K-means classifier to reduce the impact of intraclass variations in spectrally homogeneous regions. The approach simply adopts the majority-voting rule, yet achieves the same goal as object-based approaches. Currently, the individual classifiers use no spatial information; for images with high spatial resolution, incorporating spatial features may further improve classification accuracy. This is the direction of our future work.


Qian Du
Mississippi State University
Mississippi State, MS

Qian (Jenny) Du is an associate professor. She has been working on hyperspectral image processing and analysis for many years and has authored hundreds of technical publications.