
Proceedings Paper
Surgical aid visualization system for glioblastoma tumor identification based on deep learning and in-vivo hyperspectral images of human patients
Paper Abstract
Brain cancer surgery aims to resect the tumor accurately while preserving the patient's quality of life as much as possible. There is a clinical need for non-invasive techniques that can provide reliable, real-time assistance for tumor resection during surgical procedures. Hyperspectral imaging (HSI) has emerged as a non-invasive, non-ionizing technique that can assist neurosurgeons in this difficult task. In this paper, we explore the use of deep learning (DL) techniques for processing hyperspectral (HS) images of in-vivo human brain tissue. We developed a surgical aid visualization system capable of offering guidance to the operating surgeon to achieve a successful and accurate tumor resection. The HS database employed is composed of 26 in-vivo hypercubes from 16 different human patients, among which 258,810 labelled pixels were used for evaluation. The proposed DL methods achieve an overall accuracy of 95% and 85% for binary and multiclass classifications, respectively. The proposed visualization system generates a classification map formed by combining the DL map with an unsupervised clustering via a majority voting algorithm. This map can be adjusted by the operating surgeon to find the configuration best suited to the current situation during the surgical procedure.
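The fusion step described in the abstract, combining the supervised DL classification map with an unsupervised clustering map via majority voting, can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation; the function and variable names are ours, and we assume both maps are same-shaped 2-D arrays of integer labels, with each cluster assigned the DL class that is most frequent among its pixels:

```python
import numpy as np

def majority_voting_fusion(dl_map, cluster_map):
    """Assign every pixel of each unsupervised cluster the class that the
    supervised (DL) map predicts most often inside that cluster.

    dl_map, cluster_map: 2-D integer label arrays of identical shape.
    Returns a fused 2-D label map of the same shape.
    """
    fused = np.zeros_like(dl_map)
    for cluster_id in np.unique(cluster_map):
        mask = cluster_map == cluster_id
        # Majority vote: most frequent DL class among this cluster's pixels
        classes, counts = np.unique(dl_map[mask], return_counts=True)
        fused[mask] = classes[np.argmax(counts)]
    return fused

# Toy 4x4 example: the clustering smooths out isolated DL mislabels
dl = np.array([[0, 0, 1, 1],
               [0, 1, 1, 1],
               [2, 2, 1, 1],
               [2, 2, 2, 1]])
clusters = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 1, 1],
                     [2, 2, 2, 2]])
print(majority_voting_fusion(dl, clusters))
```

In this toy case, the stray DL labels inside clusters 0 and 2 are overridden by each cluster's majority class, which is the kind of spatial regularization the combined map provides.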
Paper Details
Date Published: 8 March 2019
PDF: 11 pages
Proc. SPIE 10951, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, 1095110 (8 March 2019); doi: 10.1117/12.2512569
Published in SPIE Proceedings Vol. 10951:
Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
Baowei Fei; Cristian A. Linte, Editor(s)
Author Affiliations
Himar Fabelo , The Univ. of Texas at Dallas (United States)
Univ. de Las Palmas de Gran Canaria (Spain)
Martin Halicek , The Univ. of Texas at Dallas (United States)
Emory Univ. and Georgia Institute of Technology (United States)
Samuel Ortega, Univ. de Las Palmas de Gran Canaria (Spain)
Adam Szolna, Hospital Univ. de Gran Canaria Doctor Negrin (Spain)
Univ. de Las Palmas de Gran Canaria (Spain)
Jesus Morera, Hospital Univ. de Gran Canaria Doctor Negrin (Spain)
Roberto Sarmiento, Univ. de Las Palmas de Gran Canaria (Spain)
Gustavo M. Callico, Univ. de Las Palmas de Gran Canaria (Spain)
Baowei Fei, The Univ. of Texas at Dallas (United States)
The Univ. of Texas Southwestern Medical Ctr. (United States)
© SPIE. Terms of Use
