
Proceedings Paper
Deep learning on hyperspectral data to obtain water properties and bottom depths

Format | Member Price | Non-Member Price
---|---|---
 | $17.00 | $21.00
Paper Abstract
Developing accurate methods to determine bathymetry, bottom type, and water column optical properties from hyperspectral imagery is an ongoing scientific problem. Recent advances in deep learning have made convolutional neural networks (CNNs) a popular method for classification and regression on complex datasets. In this paper, we explore the use of CNNs to extract water depth, bottom type, and inherent optical properties (IOPs) from hyperspectral imagery (HSI) of water. We compare the CNN results to other machine learning algorithms: k-nearest-neighbors (KNN), stochastic gradient descent (SGD), random forests (RF), and extremely randomized trees (ET). This work is an inverse problem in which we seek to find the water properties that impact the reflectance and hence the collected HSI. The data includes optically shallow water, in which the bottom can be seen, and optically deep water, in which the bottom cannot be seen and does not affect the reflectance. The scalar optical properties we find through regression are chlorophyll (CHL), colored dissolved organic matter (CDOM), and total suspended sediments (TSS). For the case of optically shallow water, we classify the bottom type among 114 different substrates. The results demonstrate that for finding water depth, bottom type, and IOPs in the case of optically shallow water, the CNN has better performance than the other machine learning methods. For regression of the IOPs in optically deep water, the extremely randomized trees method outperforms the CNN. We further investigate the mechanisms of these results and discuss hyperparameter tuning strategies that may improve deep learning accuracy.
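The non-CNN baselines named in the abstract can be set up with scikit-learn. The sketch below is illustrative only: it regresses water depth from synthetic spectra generated by a toy exponential-attenuation forward model, not the paper's radiative-transfer data, and the model hyperparameters are assumptions rather than the authors' configurations.

```python
# Hedged sketch: comparing the KNN, SGD, RF, and ET baselines on a toy
# depth-regression task with synthetic "hyperspectral" reflectance.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import SGDRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 2000, 60

# Toy forward model: reflectance decays exponentially with depth at a
# band-dependent attenuation rate, plus sensor noise.
depth = rng.uniform(0.5, 20.0, n_samples)        # water depth (m)
attenuation = rng.uniform(0.05, 0.3, n_bands)    # per-band attenuation
spectra = np.exp(-np.outer(depth, attenuation))
spectra += rng.normal(0.0, 0.01, spectra.shape)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, depth, random_state=0)

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "SGD": SGDRegressor(max_iter=2000, tol=1e-4),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "ET": ExtraTreesRegressor(n_estimators=100, random_state=0),
}
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: R^2 = {r2:.3f}")
```

On this smooth synthetic mapping the tree ensembles and KNN should recover depth well; the linear SGD baseline is expected to lag, consistent with the abstract's framing of the inversion as a nonlinear problem.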
Paper Details
Date Published: 7 May 2019
PDF: 8 pages
Proc. SPIE 11018, Signal Processing, Sensor/Information Fusion, and Target Recognition XXVIII, 110180Y (7 May 2019); doi: 10.1117/12.2519881
Published in SPIE Proceedings Vol. 11018:
Signal Processing, Sensor/Information Fusion, and Target Recognition XXVIII
Ivan Kadar; Erik P. Blasch; Lynne L. Grewe, Editor(s)
Author Affiliations:
Kristen Nock, U.S. Naval Research Lab. (United States)
Elizabeth Gilmour, U.S. Naval Research Lab. (United States)
Paul Elmore, U.S. Naval Research Lab. (United States)
Eric Leadbetter, U.S. Naval Research Lab. (United States)
Nina Sweeney, U.S. Naval Research Lab. (United States)
Frederick Petry, U.S. Naval Research Lab. (United States)
© SPIE. Terms of Use
