
Proceedings Paper
Automatic thoracic body region localization
Paper Abstract
Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region such as head & neck, thorax, abdomen, pelvis, and extremities. For automating image analysis and consistency of results, standardizing definitions of body regions and the various anatomic objects, tissue regions, and zones in them becomes essential. Assuming that a standardized definition of body regions is available, a fundamental early step needed in automated image and object analytics is to automatically trim the given image stack into image volumes exactly satisfying the body region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a (set of) reference object(s), segments it (them) roughly, and identifies virtual landmarks for the object(s). The geometric relationship between these landmarks and the boundary locations of body regions in the craniocaudal direction is then learned through a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images of 180 near whole-body PET/CT scans (which include 34 whole-body PET/CT scans), the mean localization errors for the superior (TS) and inferior (TI) boundaries of the thorax, expressed as a number of slices (slice spacing ≈ 4 mm), are found to be 3 and 2 slices using the skeleton and 3 and 5 slices using the pleural spaces as reference objects, or approximately 13 and 10 mm (skeleton) and 10.5 and 20 mm (pleural spaces), respectively. Improvements to this performance via optimal selection of objects and virtual landmarks, as well as other object analytics applications, are currently being pursued.
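The landmark-to-boundary regression described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes that virtual landmarks can be obtained by recursively subdividing the segmented reference object's voxel cloud along principal axes and taking sub-cloud centroids, and it stands in scikit-learn's MLPRegressor for the neural network regressor; the array names (X_train, y_train, test_mask) are hypothetical placeholders.

```python
# Illustrative sketch of the pipeline described in the abstract:
# rough segmentation -> virtual landmarks -> neural network regression of the
# craniocaudal thoracic boundary slice indices (TS, TI).
import numpy as np
from sklearn.neural_network import MLPRegressor

def virtual_landmarks(mask, n_splits=2):
    """Assumed landmark extractor: recursively split the object's voxel cloud
    along its first principal axis and use the sub-cloud centroids as
    'virtual' (not necessarily on-object) landmarks."""
    pts = np.argwhere(mask > 0).astype(float)      # (N, 3) voxel coordinates
    landmarks = [pts.mean(axis=0)]                 # whole-object centroid
    clouds = [pts]
    for _ in range(n_splits):
        next_clouds = []
        for c in clouds:
            centered = c - c.mean(axis=0)
            # first principal direction of this sub-cloud
            axis = np.linalg.svd(centered, full_matrices=False)[2][0]
            proj = centered @ axis
            for part in (c[proj <= 0], c[proj > 0]):
                if len(part):
                    landmarks.append(part.mean(axis=0))
                    next_clouds.append(part)
        clouds = next_clouds
    # A real pipeline would enforce a fixed landmark count across scans so that
    # every scan yields a feature vector of the same length.
    return np.concatenate(landmarks)

# Training: one landmark feature vector per scan; targets are the known TS and
# TI slice indices. X_train (n_scans, n_features) and y_train (n_scans, 2) are
# hypothetical arrays built from the training scans.
# reg = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000).fit(X_train, y_train)
# ts_slice, ti_slice = reg.predict(virtual_landmarks(test_mask)[None, :])[0]
```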
Paper Details
Date Published: 3 March 2017
PDF: 6 pages
Proc. SPIE 10134, Medical Imaging 2017: Computer-Aided Diagnosis, 101343X (3 March 2017); doi: 10.1117/12.2254862
Published in SPIE Proceedings Vol. 10134:
Medical Imaging 2017: Computer-Aided Diagnosis
Samuel G. Armato III; Nicholas A. Petrick, Editor(s)
Author Affiliations
PeiRui Bai, Shandong Univ. of Science and Technology (China) and Univ. of Pennsylvania (United States)
Jayaram K. Udupa, Univ. of Pennsylvania (United States)
YuBing Tong, Univ. of Pennsylvania (United States)
ShiPeng Xie, Univ. of Pennsylvania (United States) and Nanjing Univ. of Posts and Telecommunications (China)
Drew A. Torigian, Univ. of Pennsylvania (United States)
© SPIE.
