
Proceedings Paper

Step-wise identification of ultrasound-visible anatomical landmarks for 3D visualization of scoliotic spine
Author(s): Zachary Baum; Ben Church; Andras Lasso; Tamas Ungi; Christopher Schlenger; Daniel P. Borschneck; Parvin Mousavi; Gabor Fichtinger

Paper Abstract

PURPOSE: Identification of vertebral landmarks with ultrasound is a challenging task. We propose a step-wise, computer-guided landmark identification method for developing 3D spine visualizations from tracked ultrasound images.

METHODS: Transverse process bone patches were identified to generate an initial spine segmentation in real-time from live ultrasound images. A modified k-means algorithm was adapted to provide an initial estimate of landmark locations from the ultrasound image segmentation. Because this initial estimation does not always place a landmark on every segmented image patch, further processing that exploits the spine's symmetries may improve the result captured from the sequences. Five healthy subjects received thoracolumbar ultrasound scans. Their real-time ultrasound image segmentations were used to create 3D visualizations for initial validation of the method.

RESULTS: The resulting visualizations conform to the parasagittal curvature of the ultrasound images. Our processing can correct the initial estimation to reveal the underlying structure and curvature of each subject's spine. However, the visualizations are typically truncated and suffer from dilation or expansion near their superior-most and inferior-most points.

CONCLUSION: Our methods encompass a step-wise approach to bridge the gap between ultrasound scans and 3D visualization of the scoliotic spine, generated using vertebral landmarks. Though a lack of ground-truth imaging prevented complete validation of the workflow, patient-specific deformation is clearly captured in the anterior-posterior curvatures. The frequency of user interaction required to complete the correction methods presents a challenge in moving toward full automation and requires further attention.
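The paper's modified k-means step is not specified in this abstract. As a rough illustration of the underlying idea only, a plain k-means over the segmented bone-patch coordinates could yield one candidate landmark per cluster; the function name, point format, and random initialization below are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def estimate_landmarks(points, k, iters=50, seed=0):
    """Sketch: plain k-means over segmented bone-patch coordinates.

    points: (N, 3) array of patch positions from the tracked ultrasound
    segmentation (assumed format); k: expected number of landmarks.
    Returns a (k, 3) array of cluster centroids as landmark estimates.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids on randomly chosen input points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old centroid if a cluster is empty.
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break  # converged
        centroids = new
    return centroids
```

A production variant would need the paper's modifications, e.g. handling patches that receive no landmark, which the abstract addresses with a symmetry-based correction step.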

Paper Details

Date Published: 8 March 2019
PDF: 8 pages
Proc. SPIE 10951, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, 1095129 (8 March 2019); doi: 10.1117/12.2512648
Author Affiliations:
Zachary Baum, Lab. for Percutaneous Surgery, Queen's Univ. (Canada)
Ben Church, Lab. for Percutaneous Surgery, Queen's Univ. (Canada)
Andras Lasso, Lab. for Percutaneous Surgery, Queen's Univ. (Canada)
Tamas Ungi, Lab. for Percutaneous Surgery, Queen's Univ. (Canada)
Christopher Schlenger, Premier Chiropractic (United States)
Daniel P. Borschneck, Queen's Univ. (Canada)
Parvin Mousavi, Medical Informatics Lab., Queen's Univ. (Canada)
Gabor Fichtinger, Lab. for Percutaneous Surgery, Queen's Univ. (Canada)


Published in SPIE Proceedings Vol. 10951:
Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
Baowei Fei; Cristian A. Linte, Editor(s)

© SPIE.