This conference is primarily concerned with applications of medical imaging data in the engineering of therapeutic systems. Original papers are requested in the conference topic areas. Submissions that cross over between this conference and others at SPIE Medical Imaging, and which would be appropriate for combined sessions, are also welcome.

We intend to give special recognition to authors whose conference proceedings are accompanied by open-source access to datasets and software.


BEST STUDENT PAPER AWARD
We are pleased to announce that a sponsored cash prize will be awarded to the best student paper in this conference. Qualifying applications will be evaluated by the awards committee. Manuscripts will be judged based on scientific merit, impact, and clarity. The winner will be announced during the conference, and the presenting author will be awarded a cash prize.

To be eligible for the Best Student Paper Award, you must:
  • be a student without a doctoral degree (undergraduate, graduate, or PhD student)
  • submit your abstract online, and select “Yes” when asked if you are a full-time student, and select yourself as the speaker
  • be listed as the speaker on an accepted paper within this conference
  • have conducted the majority of the work to be presented
  • submit an application for this award with a preliminary version of your manuscript for judging by 1 December 2023, along with a recommendation from your advisor confirming your student status
  • submit the final version of your manuscript through your SPIE.org account by 31 January 2024
  • present your paper as scheduled.

Nominations
All submitted papers will be eligible for the award if they meet the above criteria.


YOUNG SCIENTIST AWARD
We are pleased to announce the Young Scientist Award in this conference. Qualifying applications will be evaluated by the awards committee. Manuscripts will be judged based on scientific merit, impact, and clarity. The winner will be announced during the conference and the presenting author will be awarded a cash prize.

To be eligible for the Young Scientist Award, you must:
  • submit your abstract online and select yourself as the speaker
  • be listed as the speaker on an accepted paper within this conference
  • have conducted the majority of the work to be presented
  • be an early-career scientist (student or postdoctoral fellow)
  • submit an application for this award with a preliminary version of your manuscript for judging by 1 December 2023, along with a recommendation from your advisor
  • submit the final version of your manuscript through your SPIE.org account by 31 January 2024
  • present your paper as scheduled.



POSTER AWARD
The Image-Guided Procedures, Robotic Interventions, and Modeling conference will feature a cum laude poster award. All posters displayed at the meeting for this conference are eligible. Posters will be evaluated at the meeting by the awards committee. The winners will be announced during the conference, and the presenting authors will be recognized and awarded a cash prize and a certificate.

Conference 12928

Image-Guided Procedures, Robotic Interventions, and Modeling

19 - 22 February 2024 | Pacific C
Sessions
  • SPIE Medical Imaging Awards and Plenary
  • Monday Morning Keynotes
  • 1: Robotic Assistance
  • 2: Moving Targets
  • 3: Tracking and Localization
  • Posters - Monday
  • Tuesday Morning Keynotes
  • 4: Surgical Data Science/Video Analysis
  • 5: Neurosurgery/Neurotology
  • 6: Joint Session with Conferences 12928 and 12932
  • Live Demonstrations Workshop
  • Publicly Available Data and Tools to Promote Machine Learning: an interactive workshop exploring MIDRC
  • 3D Printing and Imaging: Enabling Innovation in Personalized Medicine, Device Development, and System Components
  • Establishing Ground Truth in Radiology and Pathology
  • Wednesday Morning Keynotes
  • 7: Image Segmentation/Registration
  • 8: Spine / Orthopaedic Surgery
  • 9: Deep Image Analysis for Image-Guided Interventions
  • Thursday Morning Keynotes
  • 10: Novel Imaging and Visualization
  • 11: Interventional Radiology
  • 12: Joint Session with Conferences 12925 and 12928
  • Digital Posters
SPIE Medical Imaging Awards and Plenary
18 February 2024 • 5:30 PM - 6:30 PM PST | Town & Country A

5:30 PM - 5:40 PM:
Symposium Chair Welcome and Best Student Paper Award announcement
First-place winner and runner-up of the Robert F. Wagner All-Conference Best Student Paper Award
Sponsored by:
MIPS and SPIE

5:40 PM - 5:45 PM:
New SPIE Fellow acknowledgments
Each year, SPIE promotes Members as new Fellows of the Society. Join us as we recognize colleagues of the medical imaging community who have been selected.

5:45 PM - 5:50 PM:
SPIE Harrison H. Barrett Award in Medical Imaging
Presented in recognition of outstanding accomplishments in medical imaging
12927-501
Author(s): Cynthia Rudin, Duke Univ. (United States)
18 February 2024 • 5:50 PM - 6:30 PM PST | Town & Country A
We would like deep learning systems to aid radiologists with difficult decisions instead of replacing them with inscrutable black boxes. "Explaining" the black boxes with XAI tools is problematic, particularly in medical imaging where the explanations from XAI tools are inconsistent and unreliable. Instead of explaining the black boxes, we can replace them with interpretable deep learning models that explain their reasoning processes in ways that people can understand. One popular interpretable deep learning approach uses case-based reasoning, where an algorithm compares a new test case to similar cases from the past ("this looks like that"), and a decision is made based on the comparisons. Radiologists often use this kind of reasoning process themselves when evaluating a new challenging test case. In this talk, I will demonstrate interpretable machine learning techniques through applications to mammography and EEG analysis.
Monday Morning Keynotes
19 February 2024 • 8:30 AM - 10:45 AM PST | Town & Country A
Session Chairs: Weijie Chen, U.S. Food and Drug Administration (United States), Susan M. Astley, The Univ. of Manchester (United Kingdom), Jeffrey Harold Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States), Maryam E. Rettmann, Mayo Clinic (United States)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:45 AM:
Award announcements

  • Robert F. Wagner Award finalists for conferences 12927, 12928, and 12932
  • Computer-Aided Diagnosis Best Paper Award
  • Image-Guided Procedures, Robotic Interventions, and Modeling student paper and Young Scientist Award

12927-403
Author(s): Curtis P. Langlotz, Stanford Univ. School of Medicine (United States)
19 February 2024 • 8:45 AM - 9:25 AM PST | Town & Country A
Artificial intelligence and machine learning (AI/ML) are powerful tools for building computer vision systems that support the work of clinicians, leading to high interest and explosive growth in the use of these methods to analyze clinical images. These promising AI techniques create computer vision systems that perform some image interpretation tasks at the level of expert radiologists. In radiology, deep learning methods have been developed for image reconstruction, imaging quality assurance, imaging triage, computer-aided detection, computer-aided classification, and radiology documentation. The resulting computer vision systems are being implemented now and have the potential to provide real-time assistance, thereby reducing diagnostic errors, improving patient outcomes, and reducing costs. We will show examples of real-world AI applications that indicate how AI will change the practice of medicine and illustrate the breakthroughs, setbacks, and lessons learned that are relevant to medical imaging.
12928-404
Author(s): Lena Maier-Hein, Deutsches Krebsforschungszentrum (Germany)
19 February 2024 • 9:25 AM - 10:05 AM PST | Town & Country A
Intelligent medical systems adept at acquiring and analyzing sensor data to offer context-sensitive support are at the forefront of modern healthcare. However, various factors, often not immediately apparent, significantly hinder the effective integration of contemporary machine learning research into clinical practice. Using insights from my own research team and extensive international collaborations, I will delve into prevalent issues in current medical imaging practices and offer potential remedies. My talk will highlight the vital importance of challenging every aspect of the medical imaging pipeline from the image modalities applied to the validation methodology, ensuring that intelligent imaging systems are primed for genuine clinical implementation.
12932-408
Author(s): Nebojsa Duric, Univ. of Rochester (United States), Delphinus Medical Technologies (United States)
19 February 2024 • 10:05 AM - 10:45 AM PST | Town & Country A
Ultrasound tomography (UST) is an emerging medical imaging modality that has found its way into clinical practice after its recent approval by the Food and Drug Administration (FDA) for breast cancer screening and diagnostics. As an active area of research, UST also shows promise for applications in brain, prostate, limb and even whole-body imaging. The historical development of ultrasound tomography is rooted in the idea of “seeing with sound” and the concept borrows heavily from diverse disciplines, including oceanography, geophysics and astrophysics. A brief history of the field is provided, followed by a review of current reconstruction methods and imaging examples. Unlike other imaging modalities, ultrasound tomography in medicine is computationally bounded. Its future advancement is discussed from the perspective of ever-increasing computational power and Moore's Law.
Session 1: Robotic Assistance
19 February 2024 • 11:10 AM - 12:30 PM PST | Pacific C
Session Chairs: Kristy K. Brock, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States), David M. Kwartowitz, Grand Canyon Univ. (United States)
12928-1
Author(s): Yicheng Hu, Yixuan Huang, Anthony Song, Craig K. Jones, Johns Hopkins Univ. (United States); Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States), The Univ. of Texas M.D. Anderson Cancer Ctr. (United States); Burcu Basar, Patrick A. Helm, Medtronic, Inc. (United States); Ali Uneri, Johns Hopkins Univ. (United States)
On demand | Presented live 19 February 2024
Finding desired scan planes in ultrasound imaging is a critical first task that can be time-consuming, influenced by operator experience, and subject to inter-operator variability. This work presents a new approach that leverages deep reinforcement learning to automate probe positioning during intraoperative ultrasound imaging. A dueling deep Q-network is applied and evaluated for kidney imaging. The agent was trained on images resliced from CT images, with a novel reward function that used image features. Evaluations on an independent test dataset demonstrated the agent’s ability to reach target views with an accuracy of 76% ± 8% within an average of 18 ± 11 steps.
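For readers less familiar with the dueling Q-network mentioned in this abstract, the sketch below illustrates the general dueling architecture (separate state-value and advantage streams combined into Q-values). It is a minimal, generic PyTorch illustration with placeholder dimensions, not the authors' model or reward function.

```python
# Illustrative dueling Q-network head (generic sketch, not the authors' model).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, n_features: int, n_actions: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU())
        # Separate streams for state value V(s) and action advantages A(s, a).
        self.value = nn.Linear(256, 1)
        self.advantage = nn.Linear(256, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        v = self.value(h)                      # (batch, 1)
        a = self.advantage(h)                  # (batch, n_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

# Example: Q-values for a batch of 4 states with 6 candidate probe motions.
q = DuelingQNet(n_features=128, n_actions=6)(torch.randn(4, 128))
```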
12928-2
Author(s): Yuan Shi, Michael A. Kokko, Thayer School of Engineering at Dartmouth (United States); Joseph A. Paydarfar, Dartmouth-Hitchcock Medical Ctr. (United States); Ryan J. Halter, Thayer School of Engineering at Dartmouth (United States)
On demand | Presented live 19 February 2024
Cancers of the head and neck (oral cavity, pharynx, and larynx) represent the seventh most common cancer in the world [1]. In 2023, there will be an estimated 66,920 new cases diagnosed with approximately 15,400 deaths in the United States [2]. The incidence is increasing and is forecasted to rise by 30% by 2030 [1]. Minimally invasive transoral robotic surgery (TORS) is an effective approach for head and neck cancer management with demonstrated excellent oncologic and functional outcomes and low surgical morbidity [3–5]. However, the lack of haptic feedback in TORS poses increased risks of positive surgical margins in select areas and neurovascular complications [6–9]. Proposing that surgical navigation with image guidance has the potential to compensate for the sensory deficit, we have previously demonstrated the feasibility of intra-operative imaging [10] and robotic instrument tracking [11] in TORS. This paper describes the development of a surgical navigation framework utilizing intra-operative imaging and instrument tracking and its integration with the da Vinci Surgical System.
12928-3
Author(s): Sarah C. Nanziri, The George Washington Univ. (United States); Van Khanh Lam, Pavel Yarmolenko, Children's National Hospital (United States); Lucas Hintz, The George Washington Univ. (United States); Gang Li, Children's National Hospital (United States); Hadi Fooladi Talari, Children's National Health System (United States); Kevin Cleary, Children's National Hospital (United States); Anthony L. Gunderman, Yue Chen, Georgia Institute of Technology (United States); Dimitri Sigounas, The George Washington Univ. (United States)
On demand | Presented live 19 February 2024
Minimally invasive approaches for intracerebral hemorrhage evacuation have shown promising results in improving patient outcomes. However, these approaches are still disruptive to surrounding normal brain tissue and do not allow for the near-total evacuation of a hemorrhage. In this MRI-guided study, we assessed an MRI-compatible robotic aspiration device in a sheep brain phantom. The robot was advanced into the clot and aspiration was performed with real-time intraoperative MR imaging. The volume of the clot was reduced by 83% in 21 seconds and the phantom did not have any unexpected damage from the procedure.
12928-4
Author(s): Ange Lou, Yamin Li, Xing Yao, Yike Zhang, Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 19 February 2024
The accurate reconstruction of surgical scenes from surgical videos is critical for various applications, including intraoperative navigation and image-guided robotic surgery automation. However, previous approaches, mainly relying on depth estimation, have limited effectiveness in reconstructing surgical scenes with moving surgical tools. To address this limitation and provide accurate 3D position prediction for surgical tools in all frames, we propose a novel approach called SAMSNeRF that combines Segment Anything Model (SAM) and Neural Radiance Field (NeRF) techniques. Our approach generates accurate segmentation masks of surgical tools using SAM, which guides the refinement of the dynamic surgical scene reconstruction by NeRF. Our experimental results on public endoscopy surgical videos demonstrate that our approach successfully reconstructs high-fidelity dynamic surgical scenes and accurately reflects the spatial information of surgical tools. Our proposed approach can significantly enhance surgical navigation and automation by providing surgeons with accurate 3D position information of surgical tools during surgery.
Session 2: Moving Targets
19 February 2024 • 1:40 PM - 3:20 PM PST | Pacific C
Session Chairs: Elvis C.S. Chen, Robarts Research Institute (Canada), David R. Holmes, Mayo Clinic (United States)
12928-5
Author(s): Patrick K. Carnahan, Charles C. X. Yuan, John Moore, Western Univ. (Canada); Gianluigi Bisleri, Univ. of Toronto (Canada); Daniel Bainbridge, London Health Sciences Ctr. (Canada); Terry M. Peters, Elvis C. S. Chen, Western Univ. (Canada)
On demand | Presented live 19 February 2024
Accurate models of the mitral valve are highly valuable for studying the physiology of the heart and its various pathologies, as well as creating physical replicas for cardiac surgery training. Currently, heart simulator technologies that rely on patient-specific data are used to create valve replicas. Alternatively, mathematical models of the mitral valve have been developed for computational applications. However, no study in the current literature mathematically models both the mitral valve’s leaflets and its saddle-shaped annulus in a single design. This results in anatomic inaccuracies in current models, as either only the leaflets or the saddle-shaped annulus are realistically modelled. Mathematical models to date have not been replicated as dynamic, physical valves and validated in a heart simulator system. We propose a new parametric representation of the mitral valve based on a combination of valve models from prior literature, combining both accurate leaflet shape and annular geometry. A physical silicone replica of the model is created and validated in a pulse duplicator using a transesophageal echocardiography probe with color Doppler imaging.
12928-6
Author(s): Matilde Pazzaglia, Atefeh Abdolmanafi, Gerardo Tibamoso Pedraza, Ecole de Technologie Supérieure (Canada); Nagib Dahdah, Div. of Pediatric Cardiology, CHU Sainte-Justine (Canada), Ctr. de Recherche du CHU Sainte-Justine (Canada); Luc Duong, Ecole de Technologie Supérieure (Canada)
On demand | Presented live 19 February 2024
Kawasaki disease, predominantly affecting children, can lead to potential complications in the coronary arteries, potentially causing inflammation of blood vessel walls if left untreated. Intravascular Optical Coherence Tomography (IV-OCT) offers vital coronary artery imaging guidance to cardiologists, but its operation demands skilled expertise and adherence to intricate protocols. Our study introduces a novel approach utilizing polyvinyl alcohol cryogel (PVA-c) to fabricate patient-specific coronary OCT phantoms. These phantoms closely mimic human tissue, serving as valuable tools for training cardiologists and deepening understanding of the OCT image formation process. By designing 3D molds based on real OCT arterial images, we create PVA-c phantoms that capture the morphological characteristics and visual features of diseased coronary arteries. Our findings indicate that these phantoms effectively emulate the structures and appearances observed in OCT, closely resembling human tissue.
12928-7
Author(s): Sarah Latus, Marica Kulas, Johanna Sprenger, Technische Univ. Hamburg-Harburg (Germany); Debayan Bhattacharya, Technische Univ. Hamburg-Harburg (Germany), Universitätsklinikum Hamburg-Eppendorf (Germany); Philippe Christophe Breda, Lukas Wittig, Universitätsklinikum Hamburg-Eppendorf (Germany); Tim Eixmann, Gereon Hüttmann, Medizinisches Laserzentrum Lübeck GmbH (Germany); Lennart Maack, Technische Univ. Hamburg-Harburg (Germany); Dennis Eggert, Christian Betz, Universitätsklinikum Hamburg-Eppendorf (Germany); Alexander Schlaefer, Technische Univ. Hamburg-Harburg (Germany)
On demand | Presented live 19 February 2024
The increasing incidence of laryngeal carcinomas requires approaches for early diagnosis and treatment. In clinical practice, white light endoscopy of the laryngeal region is typically followed by biopsy under general anesthesia. Optical coherence tomography (OCT) has been proposed to study sub-surface tissue layers at high resolution. However, accessing the region of interest requires miniature OCT probes that can be inserted in the working channel of a laryngoscope. Typically, such probes generate single column depth images which are difficult to interpret. We propose a novel approach using endoscopic images to spatially align these images. Given the natural tissue motion and movements of the laryngoscope, resulting OCT images show a three-dimensional representation of sub-surface structures, which is simpler to interpret. We present a motion tracking method and assess the precision of spatial alignment. Furthermore, we demonstrate the in-vivo application, illustrating the benefit of spatially meaningful alignment of OCT images to study laryngeal tissue.
12928-8
Author(s): Yubo Fan, Han Liu, Jack H. Noble, Benoit M. Dawant, Vanderbilt Univ. (United States)
On demand | Presented live 19 February 2024
Localizing the electrode array (EA) in cochlear implant (CI) postoperative computed tomography (CT) images is needed in image-guided CI programming, which has been shown to improve hearing outcomes. Postoperative images with adequate image quality are required to allow the EA to be reliably and precisely localized. However, these images are sometimes affected by motion artifacts, which can make the localization task unreliable or cause it to fail. Thus, flagging these low-quality images prior to subsequent clinical use is important. In this work, we propose to assess image quality by using a 3D convolutional neural network to classify the level (no/mild/moderate/severe) of the motion artifacts that affect the image. To address the challenges of subjective annotations and class imbalance, several techniques (a new loss term, an oversampling strategy, and motion artifact simulation) are used during training. Results demonstrate the proposed method has the potential to reduce the time and effort spent on image quality assessment, which is traditionally performed by visual inspection.
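The abstract above names an oversampling strategy and a new loss term without detailing either; as a generic illustration of how class imbalance is commonly handled, the snippet below combines a weighted sampler with class-weighted cross-entropy in PyTorch. It is a hypothetical sketch, not the authors' training code.

```python
# Generic handling of class imbalance: weighted sampling + class-weighted loss.
# Hypothetical illustration only; the paper's actual loss term is not specified here.
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.tensor([0, 0, 0, 0, 1, 1, 2, 3])        # e.g. no/mild/moderate/severe
class_counts = torch.bincount(labels, minlength=4).float()

# Oversample rare classes: each sample drawn with probability ~ 1 / count(class).
sample_weights = 1.0 / class_counts[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)

# Weight the loss so that errors on rare classes cost more.
loss_fn = torch.nn.CrossEntropyLoss(weight=class_counts.sum() / (4 * class_counts))
```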
12928-9
Author(s): Qi Chang, William E. Higgins, The Pennsylvania State Univ. (United States)
On demand | Presented live 19 February 2024
Lung cancer management relies on 3D computed tomography (CT) imaging and bronchoscopy, offering detailed airway views and anatomical information. However, integrating these data sources is challenging due to the difficulty in obtaining depth and camera pose from bronchoscopic videos. Recent use of deep-learning networks for estimating this information faces hurdles in gathering training data (paired frames + depth). Generative adversarial networks (GANs) help by transforming CT endoluminal views into synthesized bronchoscopic frames, aligning them with CT-derived depth maps for training. However, this domain transformation method lacks the use of sequential frame knowledge, such as photometric consistency, and cannot predict camera ego-motion. Addressing this limitation, a self-supervised training strategy is used for the Monodepth2 architecture, incorporating domain transformation and photometric consistency. This enhances depth and ego-motion prediction in bronchoscopic frames. Results on test data show accurate predictions, and reference scaling factors derived from these tests facilitate real-world applications.
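Photometric consistency of the kind referenced above is typically enforced, in Monodepth2-style training, as a weighted mix of SSIM and L1 differences between a frame and its view synthesized from predicted depth and ego-motion. The sketch below shows only that loss term with a simplified SSIM; it is a generic illustration under those assumptions, not the authors' implementation.

```python
# Generic photometric reconstruction loss (SSIM + L1), as used in Monodepth2-style
# self-supervised depth training. Illustrative sketch, not the authors' code.
import torch
import torch.nn.functional as F

def photometric_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.85):
    """pred/target: (B, C, H, W) images; returns per-pixel photometric error."""
    l1 = (pred - target).abs().mean(1, keepdim=True)

    # Simplified local SSIM with 3x3 average pooling.
    mu_x, mu_y = F.avg_pool2d(pred, 3, 1, 1), F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    ssim_err = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

    return alpha * ssim_err + (1 - alpha) * l1
```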
Session 3: Tracking and Localization
19 February 2024 • 3:50 PM - 5:30 PM PST | Pacific C
Session Chairs: Ziv R. Yaniv, National Institute of Allergy and Infectious Diseases (United States), Eric J. Seibel, Univ. of Washington (United States)
12928-10
Author(s): Morgan J. Ringel, Winona L. Richey, Vanderbilt Univ. (United States); Jon S. Heiselman, Memorial Sloan-Kettering Cancer Ctr. (United States); Alexander W. Stabile, Vanderbilt Univ. (United States); Ingrid M. Meszoely, Vanderbilt Univ. Medical Ctr. (United States); Michael I. Miga, Vanderbilt Univ. (United States)
On demand | Presented live 19 February 2024
Breast conserving surgery is a common treatment option for women with early-stage breast cancer, but these procedures have high and variable reoperation rates due to positive resection margins. This work proposes an image guidance system for breast conserving surgery that combines stereo camera soft tissue monitoring with nonrigid registration for deformation correction. A series of breast phantom deformation experiments were performed to demonstrate system capabilities, and validation studies with human volunteers are ongoing. Overall, this system may allow for better navigation and tumor localization during breast conserving surgeries.
12928-11
Author(s): Fangjie Li, Huilin Xu, Shanelle D. Cao, Jinchi Wei, Dante Rhodes, Johns Hopkins Univ. (United States); Jeffrey H. Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States), Johns Hopkins Univ. (United States); Luis F. Gonzalez, The Johns Hopkins Univ. School of Medicine (United States); Ali Uneri, Johns Hopkins Univ. (United States)
On demand | Presented live 19 February 2024
This work presents a new system for electromagnetic catheter navigation during endovascular interventions. A custom catheter instrument was designed and constructed to integrate a single 5-DoF EM coil sensor at its tip. The tracked sensor was used in (1) dynamically reconstructing the instrument shape as it is advanced or retracted within the vessels; (2) visualizing the tip direction to guide it through vessel bifurcations; and (3) registering its path to vessel centerlines to provide image overlay. Experimental studies demonstrate sufficient accuracy (4.1 mm and 3.4°) for guiding the catheter through the main arteries.
12928-12
Author(s): Nati Nawawithan, Jeff Young, Patric Bettati, Armand P. Rathgeb, Kelden T. Pruitt, Jordan Frimpter, Henry Kim, Jonathan Yu, Davis Driver, Amanuel Shiferaw, Aditi Chaudhari, The Univ. of Texas at Dallas (United States); Brett A. Johnson, Jeffrey Gahan, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); James Yu, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States), The Univ. of Texas at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
On demand | Presented live 19 February 2024
Minimally invasive surgical techniques have improved patient outcomes and postoperative recovery, but they are limited by their field of view and difficulty in locating subsurface targets. Our proposed solution applies an augmented reality (AR) based system to overlay pre-operative images acquired from magnetic resonance imaging (MRI) onto the target organ, providing the location of subsurface lesions and a proposed surgical guidance path in real time. An infrared motion tracking camera system was employed to obtain real-time position data of the phantom model and surgical instruments. To perform hologram registration, fiducial markers were used to track and map virtual coordinates to the real world. Phantom models of each organ were constructed to test the reliability of the AR system. Our results show a registration root-mean-square error of 2.42 ± 0.79 mm and a procedural targeting error of 4.17 ± 1.63 mm using our AR-guided laparoscopic system.
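Fiducial-based hologram registration of the kind described above usually reduces to estimating a rigid transform between matched marker coordinates (e.g., with the Kabsch/Procrustes method) and reporting the root-mean-square residual. The NumPy sketch below illustrates that step under those assumptions; it is not the authors' AR system code.

```python
# Generic rigid (Kabsch) registration between matched fiducial coordinates,
# reporting the RMS residual. Illustrative only; not the authors' AR pipeline.
import numpy as np

def register_rigid(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) matched fiducial positions. Returns R (3x3), t (3,), rmse."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    residuals = dst - (src @ R.T + t)
    rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return R, t, rmse
```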
12928-13
Author(s): Sydney Wilson, Western Univ. (Canada), Robarts Research Institute (Canada); David W. Holdsworth, Robarts Research Institute (Canada), Western Univ. (Canada)
On demand | Presented live 19 February 2024
Locating non-palpable lesions and lymph nodes during cancer surgery is crucial for management of the disease. Unfortunately, precise localization of radiolabeled lesions during radioguided surgery is not always possible, especially when using a high-energy radiotracer. This research investigates the use of deep learning algorithms to improve the resolution of lesion detection in a hand-held gamma probe. Preliminary results demonstrate that a neural network achieves up to a 10-fold improvement in resolution compared to existing clinically available gamma probes for detection of high-energy radionuclides. These results show promise for efficiently guiding a surgeon towards the lesion of interest and thus improving the surgical accuracy.
12928-14
Author(s): Philipp Gehrmann, Tom L. Koller, Jan Klein, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)
On demand | Presented live 19 February 2024
Surgical Navigation Systems (SNS) employing optical tracking systems (OTS) have become the industry standard for computer-aided position tracking of medical instruments and patients. However, OTS face challenges due to line-of-sight issues caused by occluded or contaminated markers. To overcome these limitations, this paper proposes a novel approach that uses real surgery data to simulate occlusion and evaluate instrument visibility, with the aim of developing a markerless system with multiple RGBD cameras, AI-based techniques, and optical-geometrical postprocessing for precise instrument tracking. The simulation introduces the "task occlusion score" (TOS) to measure average instrument occlusion. Results indicate that optimal camera placement for visibility is above the situs, contrary to traditional setups. This simulation enhances the usability of navigated surgery, offering potential for marker-based systems with different marker geometries, and further possibilities for optimizing tracking accuracy using multiple cameras.
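The "task occlusion score" above is described only as a measure of average instrument occlusion; one minimal interpretation of such a metric is sketched below. The exact formulation in the paper may differ.

```python
# Hypothetical sketch of an average-occlusion score over a task:
# fraction of instrument pixels hidden in each frame, averaged over all frames.
# The paper's exact "task occlusion score" definition may differ.
import numpy as np

def task_occlusion_score(visible_masks, full_masks) -> float:
    """visible_masks/full_masks: lists of boolean (H, W) arrays per frame."""
    per_frame = []
    for vis, full in zip(visible_masks, full_masks):
        total = full.sum()
        if total == 0:
            continue                       # instrument not in this frame
        per_frame.append(1.0 - vis.sum() / total)
    return float(np.mean(per_frame)) if per_frame else 0.0
```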
Posters - Monday
19 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A

Conference attendees are invited to attend the SPIE Medical Imaging poster session on Monday evening. Come view the posters, enjoy light refreshments, ask questions, and network with colleagues in your field. Authors of poster papers will be present to answer questions concerning their papers. Attendees are required to wear their conference registration badges.

Poster Presenters:
Poster Setup Period: 7:30 AM – 5:00 PM Monday

  • In order to be considered for a poster award, it is recommended to have your poster set up by 1:00 PM Monday. Judging may begin after this time. Posters must remain on display until the end of the Monday evening poster session, but may be left hanging until 1:00 PM Tuesday. After 1:00 PM on Tuesday, posters will be removed and discarded.
View poster presentation guidelines and set-up instructions at
spie.org/MI/Poster-Presentation-Guidelines

12928-55
Author(s): Kai Chen, Sreeram Kamabattula, Kiran Bhattacharyya, Intuitive Surgical, Inc. (United States)
On demand | Presented live 19 February 2024
Machine learning models that detect surgical activities in endoscopic videos are instrumental in scaling post-surgical video review tools that help surgeons improve their practice. However, it is unknown how well these models generalize across various surgical techniques practiced at different institutions. In this paper, we examined the possibility of using surgical site information for a more tailored, better-performing model on surgical procedure segmentation. Specifically, we developed an ensemble model consisting of site-specific models, meaning each individual model was trained on videos from a specific surgical site. We showed that the site-specific ensemble model consistently outperforms the state-of-the-art site-agnostic model. Furthermore, by examining the representation of video-frames in the latent space, we corroborated our findings with similarity metrics comparing videos within and across sites. Lastly, we proposed model deployment strategies to manage the introduction of videos from a new site or sites with insufficient data.
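As a rough picture of the site-specific ensemble idea described above, the sketch below routes a prediction to the matching site model when the site is known and otherwise averages all site models. The routing and weighting here are hypothetical, not the authors' scheme.

```python
# Sketch of a site-specific ensemble: route to a per-site model, or average them.
# Hypothetical illustration; the paper's routing/weighting scheme is not specified here.
import torch

def ensemble_predict(frame_features: torch.Tensor, site_models: dict, site_id=None):
    """If the surgical site is known, use its model; otherwise average all models."""
    if site_id in site_models:
        return site_models[site_id](frame_features)
    probs = [m(frame_features).softmax(dim=-1) for m in site_models.values()]
    return torch.stack(probs).mean(dim=0)
```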
12928-56
Author(s): Pengcheng Chen, Nicole M. Gunderson, Andrew Lewis, Jason R. Speich, Michael P. Porter, Eric J. Seibel, Univ. of Washington (United States)
On demand | Presented live 19 February 2024
3D reconstruction of cystoscopy plays a crucial role in urological observation and guided treatment. By creating 3D models of the bladder, physicians can quickly assess conditions such as bladder cancer and monitor patients over time. However, existing 3D reconstruction methods face challenges like texture loss and slow computation. In this study, we achieved dynamic cystoscopy scene reconstruction using Neural Radiance Fields (NeRF). NeRF restores scenes with limited views and features, overcoming texture loss issues. We employed Instant-NGP to accelerate NeRF computation using hash encoding, significantly reducing computation time. Compared to SfM, NeRF exhibits stronger resistance to interference, making it a promising method in endoscopy. NeRF has the potential to provide rapid and comprehensive recording for remote diagnosis of bladder abnormalities in future robotic-assisted flexible cystoscopy.
12928-57
Author(s): Shreyasi Mandal, Indian Institute of Technology Kanpur (India); Srinjoy Bhuiya, Univ. of Alberta (Canada); Elodie Lugez, Toronto Metropolitan Univ. (Canada)
On demand | Presented live 19 February 2024
This research proposes a novel, deep-learning-based method for catheter path reconstruction in high-dose-rate prostate brachytherapy. The proposed method incorporates a lightweight spatial attention-based convolutional neural network to accurately segment volumetric ultrasound images in near real-time and a 3D catheter path reconstruction algorithm. Using automated data augmentation, structured dropout, and batch normalization techniques, the model training pipeline was designed to be robust to various issues, including overfitting and limited annotated data. The model detected 98% of the tested catheter paths and achieved faster inference times than existing methods. This 3D path-tracking pipeline has the potential to significantly improve the accuracy and efficiency of high-dose-rate prostate brachytherapy.
12928-58
Author(s): Abdelkrim Belhaoua, Tom R. L. Kimpe, Stijn Crul, Barco N.V. (Belgium)
On demand | Presented live 19 February 2024
The rise of minimally invasive surgery (MIS) can mainly be attributed to the exponential growth in technology and the evolution of laparoscopic instrumentation over the past two decades. Deep learning has had a major impact on a range of surgical applications, such as workflow optimization, surgical training, intraoperative assistance, patient safety, and efficiency. However, it also requires high computational and memory resources. There has been considerable research into optimizing deep learning models to balance performance and accuracy under limited resources. Techniques like post-training quantization can significantly reduce model size and latency. In this paper, we explore TensorRT-based techniques with a YOLO-based instrument detection method on edge devices to achieve real-time inference without compromising accuracy under limited compute. The paper also reviews how deep learning and edge computing intersect and how to optimize deep learning models for edge devices with limited resources.
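To make the post-training quantization idea above concrete, the snippet below applies generic dynamic int8 quantization to a small PyTorch model. This only illustrates the concept; the paper itself works with TensorRT-based optimization, whose API is not shown here.

```python
# Generic post-training dynamic quantization in PyTorch (weights to int8).
# Illustrates the concept only; the paper uses TensorRT-based optimization instead.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 80)).eval()
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# The quantized model is smaller and typically faster for CPU inference.
x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)
```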
12928-59
Author(s): Austin Kao, William E. Higgins, The Pennsylvania State Univ. (United States)
On demand | Presented live 19 February 2024
The examination of suspicious peripheral pulmonary lesions (PPLs) is an important part of lung cancer diagnosis. The physician performs bronchoscopy and then employs radial-probe endobronchial ultrasound (RP-EBUS) to examine and biopsy suspect lesions. Physician skill, however, plays a significant part in the success of these procedures. This has driven the introduction of image-guided bronchoscopy systems. Unfortunately, such systems do not provide guidance on how to use RP-EBUS. Our recently proposed image-guided bronchoscopy system does offer guidance for both the bronchoscope and RP-EBUS. Unfortunately, the system relies on a time-consuming, error-prone, manual approach to generate device maneuvers. We propose an automatic approach for creating a complete set of device maneuvers for both the bronchoscope and the RP-EBUS probe. Results show that planning the device maneuvers, which previously took on the order of 5 minutes or more per ROI, is reduced to under one second.
12928-60
Author(s): Yuri F. Hudak, Timo J. C. Oude Vrielink, Fons van der Sommen, Technische Univ. Eindhoven (Netherlands)
On demand | Presented live 19 February 2024
Accurate and reliable medical image analysis, particularly in lung nodule segmentation, plays a crucial role in data-driven healthcare assistance technologies. Current evaluation metrics for segmentation algorithm performance lack specificity to individual use cases and may not adequately assess the accuracy of 2D segmentation in context. In this preliminary work, we propose a novel evaluation approach that incorporates use case-specific evaluation metrics, focusing particularly on the spatial congruence and mass center accuracy of the nodule segmentation in the context of robot-assisted image-guided interventions. By promoting the adoption of use case-specific metrics, we aim to improve the performance of segmentation algorithms, and ultimately, the outcome of critical healthcare procedures.
12928-61
Author(s): Wenzhangzhi Guo, Yanlin Huang, Joel C. Davies, Univ. of Toronto (Canada); Vito Forte, The Hospital for Sick Children (Canada); Eitan Grinspun, Univ. of Toronto (Canada); Lueder A. Kahrs, Univ. of Toronto Mississauga (Canada)
On demand | Presented live 19 February 2024
It is common for facial reconstructive surgeons to take pre- and post-operative images from patients to keep track of their healing progress. However, current guidelines only focus on hardware and lighting setup. Thus, most pre- and post-operative images are taken from different perspectives. This makes them not suitable for quantitative analysis such as comparing with simulation results, as it is very difficult to compare paths and distances in two face photos taken from vastly different perspectives. To address this issue, we propose an application to ensure the pre- and post-operative images are taken from the same perspective. We build a mobile application where we first record the face pose of the pre-operative image. When taking the post-operative image, we compare the face pose of the current frame with the pre-operative pose and only take a photo when the difference is below a threshold. We performed a comparison of taking post-operative images with the proposed application and the phone camera on six head models. Experimental results show that the alignment error for the proposed application is only 1/3 of that of the phone camera, proving the effectiveness of our system.
12928-62
Author(s): Hannah Jungreuthmayer, Medizinische Univ. Wien (Austria), Univ. Wien (Austria), ACMIT GmbH (Austria); S. M. Ragib Shahriar Islam, ACMIT GmbH (Austria); Ander Biguri, Univ. of Cambridge (United Kingdom); Gernot Kronreif, ACMIT GmbH (Austria); Wolfgang Birkfellner, Medizinische Univ. Wien (Austria); Sepideh Hatamikia, Danube Private Univ. GmbH (Austria), ACMIT GmbH (Austria)
On demand | Presented live 19 February 2024
Cone Beam CT (CBCT) has become a routine clinical imaging modality in interventional radiology. Extended Field of View (FOV) CBCT is of great clinical importance for many medical applications, especially for cases where the Volume of Interest (VOI) is outside the standard FOV. In this study, we investigate FOV extension by optimizing customized source-detector CBCT trajectories using Simulated Annealing (SA), a heuristic search optimization algorithm. The SA algorithm explores different elliptical trajectories within a given parameter space, attempting to optimize image quality in a given VOI. Kinematic constraints (e.g., due to collisions of the imager with the patient or other medical devices) are taken into account when designing the trajectories. Our experimental results have shown that the proposed customized trajectories can lead to an extended FOV and enable improved visualization of anatomical structures in extreme positions while respecting the available kinematic constraints.
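Simulated annealing, as used above, perturbs a candidate solution and accepts worse candidates with a temperature-dependent probability so the search can escape local optima. The sketch below is a generic SA loop over abstract trajectory parameters; the cost function, step size, and kinematic-constraint check are placeholders, not the authors' formulation.

```python
# Generic simulated-annealing loop over trajectory parameters (illustrative only).
# `image_quality_cost` and `violates_constraints` are placeholders for the paper's
# VOI image-quality objective and kinematic feasibility check.
import math
import random

def anneal(x0, image_quality_cost, violates_constraints,
           t0=1.0, t_min=1e-3, cooling=0.95, steps_per_t=20, step=0.1):
    x, cost = list(x0), image_quality_cost(x0)
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = [xi + random.uniform(-step, step) for xi in x]
            if violates_constraints(cand):
                continue
            c = image_quality_cost(cand)
            # Accept better solutions always, worse ones with probability exp(-d/T).
            if c < cost or random.random() < math.exp(-(c - cost) / t):
                x, cost = cand, c
        t *= cooling
    return x, cost
```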
12928-63
Author(s): Ryan B. Duke, Xiaoyao Fan, William R. Warner, Thayer School of Engineering at Dartmouth (United States); Linton T. Evans, Dartmouth-Hitchcock Medical Ctr. (United States), Geisel School of Medicine, Dartmouth College (United States); Songbai Ji, Thayer School of Engineering at Dartmouth (United States), Worchester Polytechnic Institute (United States); Sohail K. Mirza, Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States), Dartmouth-Hitchcock Medical Ctr. (United States), Geisel School of Medicine, Dartmouth College (United States)
On demand | Presented live 19 February 2024
Modern spinal procedures are moving to smaller exposures for patient welfare. The surgical scene in these procedures is constantly changing during surgery due to intervertebral motion. Hand-held stereovision (HHS) systems can be used to drive a deformation model to generate an updated CT using intraoperative data; however, they require a large spine exposure for robust data collection. This study uses simulated narrow exposures to test the robustness of the deformation model. The three HHS datasets were manually segmented in the following ways: out to the transverse process, out to the facet joints, and out to the lamina. The mean L2 norms for the transverse process, facet, and lamina segmentation data are 2.04 ± 1.10 mm, 3.18 ± 2.18 mm, and 4.59 ± 2.28 mm, respectively.
12928-64
Author(s): Nora Dimitrova, Armin Teubert, Reutlingen Univ. (Germany); Tim Klopfer, Anna Manawapat-Klopfer, Orthopädisch Chirurgie Bayreuth (Germany); Thomas Notheisen, Heiko Baumgartner, Christoph Emanuel Gonser, BG Klinik Tübingen (Germany); Ramy Zeineldin, Oliver Burgert, Reutlingen Univ. (Germany)
On demand | Presented live 19 February 2024
This study introduces an AR system for nail implantation in complex tibial fractures, using a CNN to accurately segment bone and metal objects from pre- and post-operative CT data. Successful segmentation of bone and metal, even in cases with artifacts, is demonstrated. Integration into clinical workflows could enhance surgical outcomes and safety by reducing radiation exposure and intervention time.
12928-66
Author(s): Bruno Silva, Life and Health Sciences Research Institute, Univ. do Minho (Portugal); Sandro Queirós, Marcos Fernández-Rodríguez, Life and Health Sciences Research Institute (Portugal), ICVS/3B’s - PT Government Associate Laboratory (Portugal); Bruno Oliveira, 2Ai –School of Technology, IPCA (Portugal), LASI – Associate Laboratory of Intelligent Systems (Portugal), Algoritmi Center, School of Engineering, University of Minho (Portugal); Helena R. Torres, Pedro Morais, 2Ai –School of Technology (Portugal), LASI – Associate Laboratory of Intelligent Systems (Portugal); Lukas R. Buschle, KARL STORZ SE & Co. KG (Germany); Jorge Correia-Pinto, Estevão Lima, Life and Health Sciences Research Institute (Portugal), ICVS/3B’s - PT Government Associate Laboratory (Portugal); João L. Vilaça, 2Ai –School of Technology (Portugal), LASI – Associate Laboratory of Intelligent Systems (Portugal)
On demand | Presented live 19 February 2024
Inspired by the "What Matters in Unsupervised Optical Flow" study, the goal of this work is to evaluate the performance of the ARFlow architecture for unsupervised optical flow in the context of tracking keypoints in laparoscopic videos. This assessment could provide insight into the applicability of ARFlow and similar architectures for this particular application, as well as their strengths and limitations. To do so, we use the SurgT challenge’s dataset and metrics to evaluate the tracker’s accuracy and robustness and its relationship with distinct network components. Our results corroborate some of the findings reported by Jonschkowski et al. However, certain components demonstrate a distinct behavior, possibly indicating underlying issues, namely ones intrinsic to the application, that impact overall performance and which may have to be addressed in the context of soft-tissue trackers. These results point to potential bottlenecks and areas that future work may target.
12928-67
Author(s): Kaelyn Button, David C. Zaretksy, Univ. at Buffalo (United States), Canon Stroke and Vascular Research Ctr. (United States); Kasey Pfleging, Megan Malueg, Marissa Kruk, Jeffrey Mullin, Univ. at Buffalo (United States); Ciprian N. Ionita, Univ. at Buffalo (United States), Canon Stroke and Vascular Research Ctr. (United States)
On demand | Presented live 19 February 2024
Improved Adult Spinal Deformity (ASD) surgery outcomes can be achieved with precise anatomical and biomechanical models. Traditional cadavers, limited by scarcity and cost, may not fully meet specific anatomical needs. Patient-tailored 3D-printed spine models offer a promising alternative. This study, leveraging medical image segmentation, CAD, and advanced 3D printing techniques, explores the potential of patient-specific 3D-printed spine models.
12928-68
Author(s): Ryodai Fukushima, Tokyo Univ. of Science (Japan); Toshihiro Takamatsu, National Cancer Ctr. (Japan); Konosuke Sato, Kyohei Okubo, Masakazu Umezawa, Tokyo Univ. of Science (Japan); Nobuhiro Takeshita, Hiro Hasegawa, National Cancer Ctr. (Japan); Hideo Yokota, RIKEN Ctr. for Advanced Photonics (Japan); Kohei Soga, Hiroshi Takemura, Tokyo Univ. of Science (Japan)
On demand | Presented live 19 February 2024
Laparoscopic surgery is a minimally invasive approach to cancer resection, and the number of such procedures is expected to increase. However, because a typical laparoscope can only receive visible light, there is a risk of accidentally damaging nerves that are similar in color to other tissues. To solve this problem, near-infrared (NIR) light (approximately 700-2,500 nm) is considered effective because it enables component analysis based on the molecular vibrations specific to biomolecules. Previously, we developed NIR multispectral imaging (MSI) laparoscopy, which acquires the NIR spectrum at 14 wavelengths with a band-pass filter. However, since the number of wavelengths is limited, the optimal wavelength for identification cannot be studied. In this study, we developed the world's first laparoscopic device capable of NIR hyperspectral imaging (HSI) with an increased number of wavelengths. Furthermore, NIR-HSI was conducted in a living pig, and machine learning was used to identify nerves and other tissues with an accuracy of 0.907.
12928-69
Author(s): Regine Büter, Roger D. Soberanis-Mukul, Paola Ruiz Puentes, Johns Hopkins Univ. (United States); Ahmed Ghazi, The Johns Hopkins Medical Institutions (United States); Jie Ying Wu, Vanderbilt Univ. (United States); Mathias Unberath, Johns Hopkins Univ. (United States)
On demand | Presented live 19 February 2024
This work explores the potential of different head-worn eye-tracking solutions for tele-robotic surgery, as metrics derived from gaze tracking and pupillometry show promise for cognitive load assessment. Current eye-tracking solutions face challenges in tele-robotic surgery due to close-range interactions, leading to extreme pupil angles and occlusion. A matched-user study was performed to compare the effectiveness of the Tobii Pro 3 Glasses and the Pupil Labs Core with regard to the stability of the estimated gaze and pupil diameter. Results show that both systems perform similarly in both regards, provided the calibration is not outdated.
12928-70
Author(s): Haley E. Stoner, Keith D. Paulsen, Sohail K. Mirza, Xiaoyao Fan, Ryan B. Duke, William R. Warner, Thayer School of Engineering at Dartmouth (United States)
On demand | Presented live 19 February 2024
Artifacts from robotic system components over the region of surgical interest (ROSI) must be mitigated to avoid complications and to provide accurate guidance for the surgeon. This study defines a large MRI phantom design for specimen submersion to verify and quantify artifact generation from robotic system components, as well as to provide a better visualization platform for robotic performance during preliminary testing and evaluation. The main topics of focus for the phantom design are fluid selection, phantom shape, phantom containment material, and 3D-printed artifact measurement evaluation grids. After image equalization of the acquired MRI images, image uniformity was determined with the ACR method, while the SNR and CNR values were calculated in Fiji. The results illustrated the preferred environmental constraints according to the main topics: food-grade mineral oil, a cylindrical shape, motion artifact interference, and a PETG 3D-printed grid.
12928-71
Author(s): Mahdie Hosseini, Shiva Shaghaghi, You K. Hao, Yubing Tong, Yusuf Akhtar, Mostafa Al-Noury, Caiyun Wu, Univ. of Pennsylvania (United States); Oscar H. Mayer, Joseph M. McDonough, Patrick J. Cahill, Jason B. Anari, The Children's Hospital of Philadelphia (United States); Drew A. Torigian, Jayaram K. Udupa, Univ. of Pennsylvania (United States)
On demand | Presented live 19 February 2024
The article introduces a new non-invasive quantitative method for evaluating regional diaphragmatic structure and function in pediatric patients with thoracic insufficiency syndrome (TIS), before and after VEPTR surgery. Despite minimal changes in diaphragm shape, we observed significant improvement in diaphragm motion after surgery, indicating a positive impact on diaphragmatic function. This promising approach offers comprehensive insights into TIS patient management, potentially leading to improved treatment planning and patient outcomes.
12928-72
Author(s): Gaspard Tonetti, Grenoble INP, Univ. Grenoble Alpes (France), VetAgro Sup (France), CNRS (France); Cecilie Våpenstad, SINTEF (Norway), Norwegian Univ. of Science and Technology (Norway); Nabil Zemiti, Univ. de Montpellier (France), Lab. d'Informatique de Robotique et de Microelectronique de Montpellier (France); Sandrine Voros, INSERM (France)
On demand | Presented live 19 February 2024
Providing manual formative feedback in minimally invasive surgery training requires an expert observer whose availability is limited. Using a simple deep learning method and descriptive motion features, we developed an automatic method to assess technical surgical skills. Our method outperforms the state-of-the-art technique for robotic minimally invasive surgery skills assessment and is also suitable for non-robotic laparoscopic training. As opposed to most methods that classify students in broad skill level categories, we focused on predicting the ratings of specific surgical technical skills. Therefore students can know where to direct their training efforts.
12928-73
Author(s): Chih-Wei Chang, Shaoyan Pan, The Winship Cancer Institute of Emory Univ. (United States); Zhen Tian, The Univ. of Chicago (United States); Tonghe Wang, Memorial Sloan-Kettering Cancer Ctr. (United States); Marian Axente, Joseph Shelton, The Winship Cancer Institute of Emory Univ. (United States); Tian Liu, Mount Sinai Medical Ctr. (United States); Justin Roper, Xiaofeng Yang, The Winship Cancer Institute of Emory Univ. (United States)
On demand | Presented live 19 February 2024
The advent of computed tomography significantly improves patients’ health regarding diagnosis, prognosis, and treatment planning and verification. However, tomographic imaging escalates concomitant radiation doses to patients, increasing the potential for radiation-induced secondary cancer by 4%. We demonstrate the feasibility of a data-driven approach to synthesize volumetric images using patients’ surface images, which can be obtained from a zero-dose surface imaging system. This study includes 500 computed tomography (CT) image sets from 50 patients. Compared to the ground-truth CT, the synthetic images result in evaluation metric values of 26.9 ± 4.1 Hounsfield units, 39.1 ± 1.0 dB, and 0.965 ± 0.011 for the mean absolute error, peak signal-to-noise ratio, and structural similarity index measure, respectively. This approach provides a data integration solution that can potentially enable real-time imaging, which is free of radiation-induced risk and could be applied to image-guided medical procedures.
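For reference, the three figures quoted above correspond to standard image-similarity metrics. The sketch below computes MAE and PSNR directly and delegates SSIM to scikit-image; the HU data range is an assumed parameter, and this is not the study's evaluation script.

```python
# Standard image-similarity metrics (MAE in HU, PSNR in dB, SSIM) between a
# synthetic and a ground-truth CT volume. Illustrative; data_range is an assumption.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(synthetic: np.ndarray, reference: np.ndarray, data_range: float = 2000.0):
    mae = np.abs(synthetic - reference).mean()
    mse = ((synthetic - reference) ** 2).mean()
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    ssim = structural_similarity(reference, synthetic, data_range=data_range)
    return mae, psnr, ssim
```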
12928-74
Author(s): Sihong He, Siu-Chun M. Ho, McGovern Medical School, The Univ. of Texas Health Science Ctr. at Houston (United States); Andrew Kuhls-Gilcrist, Todd Erpelding, Canon Medical Systems USA, Inc. (United States); Richard Smalling, Memorial Hermann Heart and Vascular Institute (United States), McGovern Medical School, The Univ. of Texas Health Science Ctr. at Houston (United States)
On demand | Presented live 19 February 2024
Structural heart disease (SHD) is a recently recognized subset of heart disease, and minimally invasive, transcatheter treatments for SHD rely heavily on guidance from multiple imaging modalities. Mentally integrating the information from these images can be challenging during procedures and can take up time and increase radiation exposure. This study used the free Unity graphics engine and tailored LabVIEW and Python algorithms, along with deep learning, to merge echocardiography, CT-derived 3D heart models, and fiber optic shape sensing data with fluoroscopic imaging. Tests were performed on a patient specific ballistic gel heart model. This is the first attempt at fusing the above four imaging modalities together and can pave the way for more advanced guidance techniques in the future.
12928-75
Author(s): Seoyoung Lee, Hyoyi Kim, KAIST (Korea, Republic of); Haeyoung Kim, SAMSUNG Medical Ctr. (Korea, Republic of); Seungryong Cho, KAIST (Korea, Republic of)
On demand | Presented live 19 February 2024
It has been reported that individuals may develop vertebral compression fracture (VCF) after stereotactic body radiotherapy (SBRT), and it is necessary to identify possible risk groups prior to performing SBRT. In this study, we propose a multi-modal deep network for risk prediction of VCF after SBRT that uses clinical records, CT images, and radiotherapy factors altogether without explicit feature extraction. The retrospective study was conducted on a cohort of 131 patients who received SBRT for spinal bone metastasis. A 1-D feature vector was generated from clinical information. We cropped a 3-D patch of the lesion area from pretreatment CT images and planning dose images. We designed a three-branch multi-modal deep learning network. From the k-fold validation and ablation study, our proposed multi-modal network showed the best performance with an area under the curve (AUC) of 0.7605 and an average precision (AP) of 0.7273. The prediction model would play a valuable role not only in the treated patients’ welfare but also in the treatment planning for those patients.
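A three-branch multi-modal network of the kind described above can be pictured as two small 3-D image encoders (CT patch and planning-dose patch) and a clinical-feature encoder whose outputs are concatenated before a risk head. The PyTorch sketch below is a generic illustration with assumed layer sizes, not the authors' architecture.

```python
# Generic three-branch fusion network: a 1-D clinical branch and two 3-D image
# branches concatenated before a risk head. Layer sizes are assumptions.
import torch
import torch.nn as nn

class ThreeBranchNet(nn.Module):
    def __init__(self, n_clinical: int):
        super().__init__()
        def conv_branch():
            return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.ct, self.dose = conv_branch(), conv_branch()
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(8 + 8 + 16, 32), nn.ReLU(),
                                  nn.Linear(32, 1))      # VCF risk logit

    def forward(self, ct, dose, clin):
        z = torch.cat([self.ct(ct), self.dose(dose), self.clinical(clin)], dim=1)
        return self.head(z)

logit = ThreeBranchNet(n_clinical=12)(torch.randn(2, 1, 32, 32, 32),
                                      torch.randn(2, 1, 32, 32, 32),
                                      torch.randn(2, 12))
```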
12928-76
Author(s): William R. Warner, Xiaoyao Fan, Ryan B. Duke, Kristen L. Chen, Chengpei Li, Haley E. Stoner, Thayer School of Engineering at Dartmouth (United States); Kirthi S. Bellamkonda, Linton T. Evans, Richard J. Powell, Dartmouth-Hitchcock Medical Ctr. (United States); Sohail K. Mirza, Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States)
On demand | Presented live 19 February 2024
Tracked intraoperative ultrasound (iUS) is growing in use. Accurate spatial calibration is essential to enable iUS navigation. Utilizing sterilizable probes introduces new challenges that can be solved by a time-of-surgery calibration that is robust, efficient, and user-independent, performed within the sterile field. This study demonstrates a smart line detection scheme to perform calibration based on video acquisition data and investigates the effect of pose variation on the accuracy of a plane-based calibration. A user-independent US video is collected of a calibration phantom, and a smart line detection and tracking filter is applied to the video-tracking data pairs to remove poor calibration candidates. A localized point target phantom is imaged to provide a TRE assessment of the calibration. The tracking data are decoupled into 6 degrees of freedom, and these ranges are iteratively reduced to study the effect on spatial calibration accuracy, indicating the amount of pose variation required during video acquisition to maintain high TRE accuracy. This work facilitates a larger development toward user-independent, video-based iUS calibration at the time of surgery.
12928-77
Author(s): Rintaro Miyazaki, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori, Nagoya Univ. (Japan)
On demand | Presented live 19 February 2024
This paper describes an adaptive octree cube refinement method for deformable organ models. Surgical simulation is one of the most promising approaches to surgical training. Laparoscopic surgery simulators are already in practical use and have been evaluated for their effectiveness. To realize a high-quality simulator, it is important to efficiently process organ deformation models. In this study, we extend adaptive mesh refinement and apply it to an octree cube structure. Refinement of the structure is performed based on the grasping position. This approach improves the resolution of the octree around the grasping position. In addition, it makes it easier to detect interference between the grasp model and the high-resolution grid of the octree. Simulation results showed 199 cubes before and 339 cubes after refinement, and the frame rate decreased from 44.1 FPS to 32.4 FPS, which is still within real-time processing.
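To make the adaptive refinement idea concrete, the sketch below shows a minimal octree node that recursively subdivides the cube containing a query point (e.g., the grasping position) down to a size threshold. It is a generic data-structure illustration, not the simulator's implementation.

```python
# Minimal octree refinement around a query point (e.g. a grasping position).
# Generic illustration of adaptive subdivision; not the simulator's implementation.
class OctreeNode:
    def __init__(self, center, half_size):
        self.center, self.half_size = center, half_size
        self.children = []

    def contains(self, p):
        return all(abs(p[i] - self.center[i]) <= self.half_size for i in range(3))

    def refine(self, p, min_half_size):
        """Recursively subdivide the cube containing p down to min_half_size."""
        if not self.contains(p) or self.half_size <= min_half_size:
            return
        if not self.children:
            h = self.half_size / 2
            self.children = [
                OctreeNode([self.center[0] + dx * h,
                            self.center[1] + dy * h,
                            self.center[2] + dz * h], h)
                for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        for child in self.children:
            child.refine(p, min_half_size)

root = OctreeNode([0.0, 0.0, 0.0], 1.0)
root.refine([0.3, -0.2, 0.1], min_half_size=0.125)
```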
12928-78
Author(s): Marcos Fernández-Rodríguez, Life and Health Sciences Research Institute, Univ. do Minho (Portugal), School of Medicine, Univ. do Minho (Portugal); Bruno Silva, Sandro Queirós, Life and Health Sciences Research Institute, Univ. do Minho (Portugal); Helena R. Torres, Applied Artificial Intelligence Laboratory (Portugal); Bruno Oliveira, Life and Health Sciences Research Institute, Univ. do Minho (Portugal); Pedro Morais, Applied Artificial Intelligence Laboratory, Instituto Politécnico do Cávado e do Ave (Portugal); Lukas R. Buschle, KARL STORZ SE & Co. KG (Germany); Jorge Correia-Pinto, Life and Health Sciences Research Institute (Portugal), School of Medicine (Portugal); Estevão Lima, Life and Health Sciences Research Institute, Univ. do Minho (Portugal); João L. Vilaça, Applied Artificial Intelligence Laboratory, Instituto Politécnico do Cávado e do Ave (Portugal)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Surgical instrument segmentation in laparoscopy is an essential building block for computer-assisted surgical systems, yet the dynamic setting of laparoscopic surgery still makes precise segmentation difficult. The nnU-Net framework excels at semantic segmentation but analyzes single frames without temporal information. Optical flow (OF) estimates motion and represents it in a single frame, thereby encoding temporal information, and in surgery the instruments are often the structures that move most. This work explores how including OF in the nnU-Net architecture affects its performance on the surgical instrument segmentation task.
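As a hedged illustration of how dense optical flow could be attached to each frame as an additional input channel (using OpenCV's Farnebäck estimator; the abstract does not state which OF method or fusion strategy the authors use):

    import cv2
    import numpy as np

    def add_flow_channel(prev_gray, curr_gray, curr_rgb):
        """Stack a dense optical-flow magnitude map onto an RGB frame as a fourth channel."""
        # Farnebäck dense flow: (pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)                      # (H, W) motion strength
        magnitude = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX)
        return np.dstack([curr_rgb, magnitude.astype(np.uint8)])      # (H, W, 4) network input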
12928-79
Author(s): Kristen L. Chen, Chengpei Li, Xiaoyao Fan, Scott Davis, Thayer School of Engineering at Dartmouth (United States); Linton T. Evans, Dartmouth-Hitchcock Medical Ctr. (United States); Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States), Dartmouth-Hitchcock Medical Ctr. (United States), Norris Cotton Cancer Ctr. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
In image-guided neurosurgery, preoperative magnetic resonance (pMR) images are rigidly registered with the patient's head in the operating room. Image-guided systems incorporate this spatial information to provide real-time information on where surgical instruments are located with respect to preoperative imaging. The accuracy of these systems becomes degraded due to intraoperative brain shift. To account for brain shift, we previously developed an image-guidance updating framework that incorporates brain shift information, acquired by registering the intraoperative stereovision (iSV) surface with the pMR surface, to create an updated magnetic resonance image (uMR). To register the iSV and pMR surfaces, the two surfaces must share matching features. To capture features falling outside of the brain volume, we developed a method to improve feature extraction that performs a selective dilation in the region of the stereovision surface. The goal of this method is to capture useful features that can be used to improve image registration.
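A minimal sketch of what such a selective dilation might look like, assuming binary feature and region-of-interest masks and SciPy's morphology tools (the authors' actual implementation is not detailed in the abstract):

    import numpy as np
    from scipy import ndimage

    def selective_dilation(feature_mask, region_mask, iterations=3):
        """Dilate a binary feature mask, but keep the newly added voxels only
        inside a region of interest (e.g., near the stereovision surface)."""
        feature = np.asarray(feature_mask, dtype=bool)
        region = np.asarray(region_mask, dtype=bool)
        dilated = ndimage.binary_dilation(feature, iterations=iterations)
        return feature | (dilated & region)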
12928-80
Author(s): Sunder Neelakantan, Tanmay Mukherjee, Texas A&M Univ. (United States); Bradford J. Smith, Univ. of Colorado (United States); Kyle Myers, Texas A&M Univ. (United States); Rahim R. Rizi, Univ. of Pennsylvania (United States); Reza Avazmohammadi, Texas A&M Univ. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Several lung diseases, including ventilator-induced and radiation-induced lung injuries, lead to alterations in regional lung mechanics. Thus, there has been growing interest in quantifying the health of lung parenchyma using regional biomechanical markers. Image registration through dynamic imaging has emerged as a powerful tool to assess the kinematic and deformation behavior of lung parenchyma during respiration. However, the difficulty in validating image registration estimates of lung deformation, primarily due to the lack of ground-truth deformation data, has limited its use in clinical settings. To address this barrier, we developed a method to convert a finite-element (FE) mesh of the lung into a phantom computed tomography (CT) image, which advantageously carries the ground-truth information included in the FE model. The phantom CT images generated from the FE mesh were able to replicate the geometry of the lung and large airways included in the FE model. A series of high-quality phantom images, generated from the FE model simulating the respiratory cycle, will allow for the validation and evaluation of image registration estimates of lung deformation.
12928-81
Author(s): Amirreza Heshmat, Caleb S. O'Connor, Jun Hong, Jessica Albuquerque Marques Silva, Iwan Paolucci, Aaron K. Jones, Bruno C. Odisio, Kristy K. Brock, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Percutaneous microwave ablation (MWA) is a minimally invasive technique to treat liver tumors. The Pennes bioheat equation describes heat distribution in tissue, including factors such as the blood perfusion rate (BPR) and metabolic heat (MH). We employed 3D patient-specific models and sensitivity analysis to examine how BPR and MH affect MWA results. Numerical simulations using a triaxial antenna at 65 W of power demonstrated that lower BPR led to less damage and complete tumor destruction, and models without MH showed less liver damage. The study highlights the importance of tailored ablation parameters for personalized treatments, revealing the impact of BPR and MH on MWA outcomes.
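For reference, the Pennes bioheat equation referred to above is commonly written as

\[ \rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \omega_b \rho_b c_b (T_b - T) + Q_m + Q_{ext}, \]

where \(\rho\) and \(c\) are the tissue density and specific heat, \(k\) the thermal conductivity, \(\omega_b\) the blood perfusion rate, \(\rho_b\) and \(c_b\) the blood density and specific heat, \(T_b\) the arterial blood temperature, \(Q_m\) the metabolic heat source, and \(Q_{ext}\) the externally applied (microwave) heating term.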
12928-82
Author(s): Bipasha Kundu, Zixin Yang, Richard Simon, Cristian A. Linte, Rochester Institute of Technology (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Nonrigid surface-based soft tissue registration is crucial in surgical navigation systems yet faces challenges due to the complex surface structures of intra-operative data. By employing nonrigid registration, surgeons can achieve a real-time visualization of the patient's complex pre- and intra-operative anatomy in a common coordinate system to improve navigation accuracy. To address limited access to liver registration methods, we compare the robustness of three open-source optimization-based nonrigid registration methods and one data-driven method to a reduced visibility ratio (reduced partial views of the surface) and an increasing deformation level (mean displacement), reported as the root mean square error (RMSE) between the pre- and intra-operative liver surface meshes following registration. The Gaussian Mixture Model-Finite Element Model (GMM-FEM) consistently yields a lower post-registration error than the other three tested methods in the presence of both reduced visibility ratio and increased intra-operative surface displacement, therefore offering a potentially promising solution for pre- to intra-operative nonrigid liver surface registration.
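Assuming point-to-point correspondences after registration, the reported error metric is the standard root mean square error over N corresponding pre-operative (registered) surface points \(p_i\) and intra-operative points \(q_i\):

\[ \mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \lVert p_i - q_i \rVert^2}. \]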
12928-83
Author(s): Hannah G. Mason, Ziteng Liu, Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Cochlear implants (CIs) are neural prosthetics for patients with severe-to-profound hearing loss. CIs induce hearing sensation by stimulating auditory nerve fibers (ANFs) using an electrode array that is surgically implanted into the cochlea. After the device is implanted, an audiologist programs the CI processor to optimize hearing performance. Without knowing which ANFs are being stimulated by each electrode, audiologists must rely solely on patient performance to inform programming adjustments. Patient-specific neural stimulation modeling has been proposed to assist audiologists, but requires accurate localization of ANFs. In this paper, we propose an automatic neural-network-based method for atlas-based localization of the ANFs. Our results show that our method is able to produce smooth ANF predictions that are more realistic than those produced by a previously proposed semi-manual localization method. Accurate and realistic ANF localizations are critical for constructing patient-specific ANF stimulation models for model guided CI programming.
12928-85
Author(s): Ethan Wilke, Jesse F. d'Almeida, Jason Shrand, Tayfun Ertop, Nicholas L. Kavoussi, Amy Reed, Duke Herrell, Robert J. Webster, Vanderbilt Univ. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
As surgical robotics are made progressively smaller, and their actuation systems simplified, the opportunity arises to re-evaluate how we integrate them into operating room workflows. Several research groups have shown that robots can be made so small and light that they can become hand-held tools. This hand-held paradigm enables robots to fit much more seamlessly into existing clinical workflows. In this paper, we compare an onboard user interface approach against the traditional offboard approach. In the latter, the surgeon positions the robot, and a support arm holds it in place while the surgeon operates the manipulators using the offboard surgeon console. The surgeon can move back and forth between the robot and the console as often as desired. Three experiments were conducted, and results show that the onboard interface enables statistically significantly faster performance in a point-touching task performed in a virtual reality environment.
12928-86
Author(s): Connor Mitchell, Robarts Research Institute (Canada); Shuwei Xing, Robarts Research Institute (Canada), Western Univ. (Canada); Derek W. Cool, London Health Sciences Ctr. (Canada), Robarts Research Institute (Canada); David Tessier, Robarts Research Institute (Canada); Aaron Fenster, Robarts Research Institute (Canada), Western Univ. (Canada)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
CT-guided renal tumor ablation is considered an alternative treatment for small renal tumors, typically 4 cm or smaller, especially for patients who are ineligible for nephron-sparing surgery. For this procedure, the radiologist must compare the pre-operative with the post-operative CT to determine the presence of residual tumor, and distinguishing between malignant and benign kidney tumors poses a significant challenge. To automate this tumor coverage evaluation step and assist the radiologist in identifying kidney tumors, we propose a coarse-to-fine U-Net-based model to segment kidneys and masses. We used the TotalSegmentator tool to obtain an approximate segmentation and region of interest of the kidneys, which was input to our 3D segmentation network, trained using the nnUNet library, to fully segment the kidneys and the masses within them. Our model achieved an aggregated Dice score of 0.777 on testing data, and on local CT kidney data with tumors collected from the London Health Sciences University Hospital, it achieved a Dice score of 0.7 for tumor segmentation. These results indicate the model will be useful for tumor identification and evaluation.
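The Dice score quoted above is the standard overlap measure between a predicted and a reference segmentation; a minimal NumPy version (illustrative only) is:

    import numpy as np

    def dice_score(pred_mask, gt_mask):
        """Dice similarity coefficient between two binary segmentation masks."""
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0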
12928-87
Author(s): Regina W. K. Leung, Ge Shi, Western Univ. (Canada); Christina A. Lim, Matthew Van Oirschot, Schulich School of Medicine & Dentistry, Western Univ. (Canada)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
With a growing role for virtual simulation in skills training, we propose a low-cost solution to automate the creation of high-fidelity 3D holographic hand animations for surgical skills training, using motion capture data from the Oculus Quest 2 mixed reality headset. Using this methodology, we successfully developed a 3D holographic animation of the one-handed knot ties used in surgery. Regarding the quality of the produced animation, our qualitative pilot study demonstrated learning of knot ties from the holographic animation comparable to in-person demonstration. Furthermore, participants found learning knot ties from the holographic animation easier and more effective, were more confident in their mastery of the skill than with in-person demonstration, and found the animation comparable to real hands, showing promise for surgical skills training applications.
12928-88
Author(s): Michael A. Kokko, Ryan J. Halter, Thayer School of Engineering at Dartmouth (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Reconstruction of stereoendoscopic video has been explored for guiding minimally-invasive procedures across many surgical subspecialties, and may play an increasingly important role in navigation as stereo-equipped robotic systems become more widely available. Capturing stereo video for the purpose of offline reconstruction requires dedicated hardware, a mechanism for temporal synchronization, and video processing tools that perform accurate clip extraction, frame extraction, and lossless compression for archival. This work describes a minimal hardware setup comprising entirely off-the-shelf components for capturing video from the da Vinci and similar 3D-enabled surgical systems. Software utilities are also provided for synchronizing data collection and accurately handling captured video files. End-to-end testing demonstrates that all processing functions (clipping, frame cropping, compression, un-compression, and frame extraction) operate losslessly, and can be combined to generate reconstruction-ready stereo pairs from raw surgical video.
12928-89
Author(s): Soyoung Park, Sahaja Acharya, Matthew Ladra, The Johns Hopkins Univ. School of Medicine (United States); Junghoon Lee, Johns Hopkins Univ. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
CT image synthesis (sCT) from MRI is necessary for MR-only treatment planning, MRI-based quality assurance, and treatment assessment in radiotherapy (RT). For pediatric cancer patients, reducing ionizing radiation from CT scans is preferred, which makes MRI-based RT planning especially beneficial. We investigated a 3D conditional generative adversarial network (cGAN)-based transfer learning approach for accurate pediatric sCT generation. Our model was first trained on adult data and then fine-tuned on pediatric data. We compared three training scenarios: (1) training on 50 adult datasets, (2) training on combined 50 adult and 50 pediatric patient datasets, and (3) fine-tuning the model pre-trained on 50 adult datasets using 50 pediatric datasets. The 3D cGAN with transfer learning showed significantly better synthesis performance than the other models, with an average mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index of 51.99 HU, 24.74, and 0.80, respectively. The proposed 3D cGAN-based transfer learning accurately synthesized pediatric CT images from MRI, bringing pediatric MR-only RT within reach.
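The three reported image-quality metrics can be computed, for example, with NumPy and scikit-image as sketched below; the HU data range is an assumption for illustration and is not taken from the paper:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def sct_metrics(sct, ct, data_range=4000.0):
        """MAE (HU), PSNR, and SSIM between a synthetic CT and the reference CT volume."""
        sct = np.asarray(sct, dtype=np.float64)
        ct = np.asarray(ct, dtype=np.float64)
        mae = np.abs(sct - ct).mean()
        psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
        ssim = structural_similarity(ct, sct, data_range=data_range)
        return mae, psnr, ssim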
12928-90
Author(s): Yike Zhang, Eduardo Davalos Anaya, Dingjie Su, Ange Lou, Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
For those experiencing severe-to-profound sensorineural hearing loss, the cochlear implant (CI) is the preferred treatment. Augmented reality (AR) surgery may improve CI procedures and hearing outcomes. Typically, AR solutions for image-guided surgery rely on optical tracking systems to register pre-operative planning information to the display so that hidden anatomy or other information can be overlaid, co-registered with the view of the surgical scene. In this work, our goal is to develop a method that permits direct 2D-to-3D registration of the microscope video to the pre-operative CT scan without the need for external tracking equipment. Our proposed solution surface-maps a portion of the incus in the video and determines the pose of this structure relative to the surgical microscope by solving the perspective-n-point problem to achieve 2D-to-3D registration. This registration can then be applied to pre-operative segmentations of other hidden anatomy, as well as the planned electrode insertion trajectory, to co-register this information for AR display.
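A minimal sketch of the perspective-n-point step using OpenCV's solvePnP on hypothetical CT-space landmarks and their 2-D image locations (the authors' surface-mapping and correspondence steps are not shown):

    import cv2
    import numpy as np

    def register_2d_to_3d(points_3d_ct, points_2d_video, camera_matrix, dist_coeffs=None):
        """Estimate the microscope pose from N >= 4 CT-space points and their pixel locations."""
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(np.asarray(points_3d_ct, dtype=np.float64),
                                      np.asarray(points_2d_video, dtype=np.float64),
                                      camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("PnP pose estimation failed")
        rotation, _ = cv2.Rodrigues(rvec)       # 3x3 rotation matrix
        return rotation, tvec                   # CT-to-camera rigid transform for the AR overlay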
12928-91
Author(s): Michael A. Kokko, Thayer School of Engineering at Dartmouth (United States); Andrew Y. Lee, Geisel School of Medicine, Dartmouth College (United States); Joseph A. Paydarfar, Dartmouth-Hitchcock Medical Ctr. (United States); Ryan J. Halter, Thayer School of Engineering at Dartmouth (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
The transoral approach to resecting oral and oropharyngeal tumors carries lower morbidity than open surgery but has a high positive-margin rate. When margins are positive, it is critical that resection specimens be accurately oriented in anatomical context for gross and microscopic evaluation, and that surgeons, pathologists, and other care team members share an accurate spatial awareness of margin locations. With clinical interest in digital pathology on the rise, this work outlines a proposed framework for generating 3D specimen models intraoperatively via robot-integrated stereovision and using these models to visualize involved margins in both ex vivo (flattened) and in situ (conformed) configurations. Preliminary pilot study results suggest that stereo specimen imaging can be easily integrated into the transoral robotic surgery workflow and that the expected accuracy of raw reconstructions is around 1.60 mm. Ongoing data collection and technical development will support a full system evaluation.
12928-92
Author(s): Owen Anderson, Biomedical Imaging Resource Core, Mayo Clinic (United States); Nicholas Hugenberg, Deepa Mandale, Songnan Wen, Tasneem Naqvi, David R. Holmes, Mayo Clinic (United States)
On demand | Presented live 19 February 2024
Show Abstract + Hide Abstract
Point-of-care ultrasound (POCUS) describes ultrasound imaging that is portable, fast, and accessible; such a device can now perform an echocardiogram while connected to a smartphone. While the accessibility of performing a test has been greatly improved, expertise is still required to produce usable results and diagnoses. The goal of this study is to improve the clinical utility of mobile ultrasound echocardiograms with machine learning. By integrating artificial intelligence into this workflow, feedback can be given to the provider during operation to maximize the usability of the ultrasound data and allow more tests to be performed properly. The Intel GETi framework was used to create computer vision models that quantify the readability of frames taken from an echocardiogram; these models determine the quality and the orientation of each frame. Feedback from these models can alert the user to proper positioning and technique to gather good ultrasound data. The accuracy of the models ranges from 77% to 99%, depending on factors such as how the model was trained and the ratio of training to testing data. Testing accuracy can also be improved with
Tuesday Morning Keynotes
20 February 2024 • 8:30 AM - 10:00 AM PST | Town & Country A
Session Chairs: Barjor Sohrab Gimi, Univ. of Massachusetts Chan Medical School (United States), Andrzej Krol, SUNY Upstate Medical Univ. (United States), John E. Tomaszewski, Univ. at Buffalo (United States), Aaron D. Ward, Western Univ. (Canada)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:40 AM:
Robert F. Wagner Award finalists announcements for conferences 12930 and 12933

12930-406
Author(s): Frank J. Rybicki, The Univ. of Arizona College of Medicine (United States); Leonid Chepelev, University of Toronto (Canada)
20 February 2024 • 8:40 AM - 9:20 AM PST | Town & Country A
Show Abstract + Hide Abstract
Medical imaging data is often used inefficiently, and this happens most often for patients with abnormal imaging who require a complex procedure. This talk describes those patients, how their medical images undergo Computer Aided Design (CAD), and how that data reaches a Final Anatomic Realization, one of which is 3D printing. This talk highlights “keys” to “unlock” value when this clinical service line is performed in a hospital, and the critical role for medical engineers who work in that infrastructure. The talk includes medical oversight, data generation, and a specific, durable definition of value for medical devices that are 3D printed in hospitals. The talk also includes clinical appropriateness, and how it folds into accreditation for 3D printing in hospitals and universities. Up to the minute information on reimbursement for medical devices that are 3D printed in hospitals and universities will be presented.
12933-409
Author(s): David S. McClintock, Mayo Clinic (United States)
20 February 2024 • 9:20 AM - 10:00 AM PST | Town & Country A
Show Abstract + Hide Abstract
The use of artificial intelligence in healthcare is a current hot topic, generating tons of excitement and pushing multiple academic medical centers, startups, and large established IT companies to dive into clinical AI model development. However, amongst that excitement, one topic that has lacked direction is how healthcare institutions, from small clinical practices to large health systems, should approach AI model deployment. Unlike typical healthcare IT implementations, AI models have special considerations that must be addressed prior to moving them into clinical practice. This talk will review the major issues surrounding clinical AI implementations and present a scalable, standardized, and responsible framework for AI deployment that can be adopted by many different healthcare organizations, departments, and functional areas.
Session 4: Surgical Data Science/Video Analysis
20 February 2024 • 10:30 AM - 12:30 PM PST | Pacific C
Session Chairs: Cristian A. Linte, Rochester Institute of Technology (United States), William E. Higgins, The Pennsylvania State Univ. (United States)
12928-15
Author(s): Stefanie Speidel, Nationales Centrum für Tumorerkrankungen Dresden (Germany)
20 February 2024 • 10:30 AM - 11:10 AM PST | Pacific C
Show Abstract + Hide Abstract
Increasingly powerful technological developments in surgery, such as modern operating rooms (OR) featuring digital, interconnected, and robotic devices, provide a huge amount of valuable data that can be used to improve patient therapy. Although a lot of data is available, the human ability to exploit these possibilities, especially in a complex and time-critical situation such as surgery, is limited and depends strongly on the experience of the surgical staff. This talk addresses AI-assisted surgery with a specific focus on the analysis of intraoperative video data. The goal is to democratize surgical skills and enhance the collaboration between surgeons and cyber-physical systems by quantifying surgical experience and making it accessible to machines. Several examples of optimizing the therapy of the individual patient along the surgical treatment path are given. Finally, remaining challenges and strategies to overcome them are discussed.
12928-16
Author(s): Ling Ma, Kelden T. Pruitt, Baowei Fei, The Univ. of Texas at Dallas (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Laparoscopic and robotic surgery, as one type of minimally invasive surgery (MIS), has gained popularity due to the improved surgeon ergonomics, instrument precision, operative time, and postoperative recovery. Hyperspectral imaging (HSI) is an emerging medical imaging modality, which has proved useful for intraoperative image guidance. Snapshot hyperspectral cameras are ideal for intraoperative laparoscopic imaging because of their compact size and light weight, but low spatial resolution can be a limitation. In this work, we developed a dual-camera laparoscopic imaging system that comprises a high-resolution color camera and a snapshot hyperspectral camera, and we employ super-resolution reconstruction to fuse the images from both cameras to generate high-resolution hyperspectral images. The experimental results show that our method can significantly improve the resolution of hyperspectral images without compromising the image quality or spectral signatures. The proposed super-resolution reconstruction method is promising to promote the employment of high-speed hyperspectral imaging in laparoscopic surgery.
12928-17
Author(s): James Yu, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States), The Univ. of Texas at Dallas (United States); Kelden T. Pruitt, Nati Nawawithan, The Univ. of Texas at Dallas (United States); Brett A. Johnson, Jeffrey Gahan, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Augmented reality (AR) has seen increased interest and attention for its application in surgical procedures. While previous works have utilized pre-operative imaging such as computed tomography or magnetic resonance images, registration methods still lack the ability to accurately register deformable anatomical structures across modalities and dimensionalities. This is especially true of minimally invasive abdominal surgeries due to limitations of the monocular laparoscope. Surgical scene reconstruction is a critical component towards AR-guided surgical interventions and other AR applications such as remote assistance or surgical simulation. In this work, we show how to generate a dense 3D reconstruction with camera pose estimations and depth maps from video obtained with a monocular laparoscope utilizing a state-of-the-art deep-learning-based visual simultaneous localization and mapping (vSLAM) model. The proposed method can robustly reconstruct surgical scenes using real-time data and provide camera pose estimations without stereo or other sensors, which increases its usability and is less intrusive.
12928-18
Author(s): Daiwei Lu, Yifan Wu, Xing Yao, Vanderbilt Univ. (United States); Nicholas L. Kavoussi, Vanderbilt Univ. Medical Ctr. (United States); Ipek Oguz, Vanderbilt Univ. (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Ureteroscopic intrarenal surgery comprises the passage of a flexible ureteroscope through the ureter into the kidney and is commonly used for the treatment of kidney stones or upper tract urothelial carcinoma (UTUC). Flexible ureteroscopes (fURS) are limited by their visualization ability and fragility, which can cause missed regions during the procedure in hard-to-visualize locations and/or due to scope breakage. This contributes to a high recurrence rate for both kidney stone and UTUC patients. We introduce an automated patient-specific analysis for determining viewability in the renal collecting system using pre-operative CT scans.
12928-19
Author(s): Ange Lou, Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Depth estimation in surgical video plays a crucial role in many image-guided surgery procedures. However, it is difficult and time consuming to create depth map ground truth datasets in surgical videos due in part to inconsistent brightness and noise in the surgical scene. Therefore, building an accurate and robust self-supervised depth and camera ego-motion estimation system is gaining more attention from the computer vision community. Although several self-supervision methods alleviate the need for ground truth depth maps and poses, they still need known camera intrinsic parameters, which are often missing or not recorded. Moreover, the camera intrinsic prediction methods in existing works depend heavily on the quality of datasets. In this work, we aim to build a self-supervised depth and ego-motion estimation system which can predict not only accurate depth maps and camera pose, but also camera intrinsic parameters. We propose a cost-volume-based supervision approach to give the system auxiliary supervision for camera parameters prediction.
Session 5: Neurosurgery/Neurotology
20 February 2024 • 1:40 PM - 3:20 PM PST | Pacific C
Session Chairs: Pierre Jannin, Lab. Traitement du Signal et de l'Image (France), Junghoon Lee, Johns Hopkins Univ. (United States)
12928-20
Author(s): Prasad Vagdargi, Ali Uneri, Stephen Z. Liu, Craig K. Jones, Alejandro Sisniega, Johns Hopkins Univ. (United States); Junghoon Lee, The Johns Hopkins Univ. School of Medicine (United States); Patrick A. Helm, Medtronic, Inc. (United States); William S. Anderson, Mark Luciano, The Johns Hopkins Univ. School of Medicine (United States); Gregory D. Hager, Johns Hopkins Univ. (United States); Jeffrey H. Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Recent neurosurgical techniques require accurate targeting of deep-brain structures in the presence of deformation due to CSF egress during surgical access. We introduce a vision-based navigation solution using NeRFs for 3D neuroendoscopic reconstruction on the Robot-Assisted Ventriculoscopy (RAV) platform. An end-to-end 3D reconstruction method using posed images was developed and integrated with RAV. System performance was evaluated in terms of geometric accuracy, precision and runtime across multiple clinically feasible trajectories, achieving accurate sub-mm projected error. Clinical neuroendoscopic video reconstruction and registration was successfully achieved with sub-mm geometric accuracy and high precision.
12928-21
Author(s): Chengpei Li, Dartmouth College (United States); Xiaoyao Fan, Ryan B. Duke, Kristen L. Chen, Thayer School of Engineering at Dartmouth (United States); Linton T. Evans, Dartmouth-Hitchcock Medical Ctr. (United States); Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
In image-guided cranial surgeries, brain deformation post-dura opening impacts image guidance accuracy. A biomechanical model updates pre-op MR images using intraoperative stereovision (iSV) for accuracy. Traditional methods require manual cortical surface segmentation from iSV, demanding expertise and time. This study introduces the Fast Segment Anything Model (FastSAM), a deep learning approach, for automatic segmentation from iSV. FastSAM's performance was compared with manual segmentation and a U-Net model in a patient case, focusing on segmentation accuracy (Dice coefficient) and image updating accuracy (target registration errors; TRE). All methods showed high Dice coefficients (>0.95). FastSAM and manual segmentation had similar TREs (2.6 ± 0.7 mm), better than U-Net (3.1 ± 0.5 mm). FastSAM's performance aligns with manual segmentation in accuracy, suggesting its potential to replace manual methods for efficiency and reduced user dependency.
12928-22
Author(s): Lina Mekki, Sahaja Acharya, Matthew Ladra, Junghoon Lee, Johns Hopkins Univ. (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Radiation therapy (RT) planning for pediatric brain cancer is a challenging task. RT plans are typically optimized using CT, thus exposing patients to ionizing radiation. Manual contouring of organs-at-risk (OARs) is time-consuming, particularly difficult due to the small size of brain structures, and suffers from inter-observer variability. While numerous methods have been proposed to solve MR-to-CT image synthesis or OAR segmentation separately, only a handful of methods tackle both problems jointly, and even fewer are developed specifically for pediatric brain cancer RT. We propose a multi-task convolutional neural network to jointly synthesize CT from MRI and segment OARs (eyes, optic nerves, optic chiasm, brainstem, temporal lobes, and hippocampi) for pediatric brain RT planning.
12928-23
Author(s): John S. H. Baxter, Pierre Jannin, Univ. de Rennes 1 (France)
20 February 2024 • 2:40 PM - 3:00 PM PST | Pacific C
Show Abstract + Hide Abstract
Measuring errors in neuro-interventional pointing tasks is critical to better evaluating human experts as well as machine learning algorithms. If the target is highly ambiguous, different experts may fundamentally select different targets while believing them to refer to the same region, a phenomenon called an error of type. This paper investigates the effects of changing the prior distribution on a Bayesian model for errors of type specific to transcranial magnetic stimulation (TMS) planning. Our results show that a particular prior can be chosen which is analytically solvable, removes spurious modes, and returns estimates that are coherent with the TMS literature. This is a step towards a fully rigorous model that can be used in system evaluation and machine learning.
12928-24
Author(s): Erin L. Bratu, Vanderbilt Univ. (United States); Katelyn A. Berg, Andrea J. DeFreese, Rene H. Gifford, Vanderbilt Univ. Medical Ctr. (United States); Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
In an effort to improve hearing outcomes for cochlear implant recipients, many computational models of electrical stimulation in the inner ear have been developed to provide clinicians with objective information that can assist their decision-making. These models span a range of complexity, including highly detailed, patient-specific models designed to more accurately simulate an individual's experience. One limitation of these models is the large amount of data required to create them, with the resulting model being highly optimized to a single set of measurements. Thus, it is desirable to create a new model of equal or better quality that does not require this data and that is adaptable to new sets of clinical data. In this work, we present a methodology for one component of such a model, which uses simulations of voltage spread in the cochlea to estimate patient-specific electric potentials.
Session 6: Joint Session with Conferences 12928 and 12932
20 February 2024 • 3:50 PM - 5:30 PM PST | Pacific C
Session Chairs: Purang Abolmaesumi, The Univ. of British Columbia (Canada), Josquin Foiret, Stanford Univ. School of Medicine (United States)
12928-25
Author(s): Muhammad Awais, Mais Altaie, Caleb S. O'Connor, Austin H. Castelo, Hop S. Tran Cao, Kristy K. Brock, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
This research addresses the challenges of liver resection planning and execution, leveraging intraoperative ultrasound (IOUS) for guidance. We propose an AI-driven solution to enhance real-time vessel identification (inferior vena cava (IVC), right hepatic vein (RHV), left hepatic vein (LHV), and middle hepatic vein (MHV)) using a visual saliency approach that integrates attention blocks into a novel U-Net model. The study encompasses a dataset of IOUS video recordings from 12 patients, acquired during liver surgeries. Employing leave-one-out cross-validation, the model achieves mean Dice scores of 0.88 (IVC), 0.72 (RHV), 0.53 (MHV), and 0.78 (LHV). This approach holds the potential to improve liver surgery by enabling precise vessel segmentation, with future prospects including broader vasculature segmentation and real-time application in the operating room.
12932-32
Author(s): Tiana Trumpour, Robarts Research Institute (Canada); Jamiel Nasser, Univ. of Waterloo (Canada); Jessica R. Rodgers, Univ. of Manitoba (Canada); Jeffrey Bax, Lori Gardi, Robarts Research Institute (Canada); Lucas C. Mendez, Kathleen Surry, London Regional Cancer Program (Canada); Aaron Fenster, Robarts Research Institute (Canada)
20 February 2024 • 4:10 PM - 4:30 PM PST | Pacific C
Show Abstract + Hide Abstract
Brachytherapy is a common treatment technique for cervical cancer. Radiation is delivered using specialized applicators or needles inserted into the patient under medical imaging guidance. However, advanced imaging modalities may be unavailable in underfunded healthcare centers, motivating a need for accessible imaging techniques during brachytherapy procedures. This work focuses on the development and validation of a spatially tracked mechatronic arm for 3D trans-abdominal and trans-rectal ultrasound imaging. The arm allows automated acquisition and inherent registration of two 3D ultrasound images, resulting in a fused image volume of the whole female pelvic region. The results of our preliminary testing demonstrate this technique as a suitable alternative to advanced imaging for providing visual information to clinicians during brachytherapy applicator insertions, potentially aiding in improved patient outcomes.
12928-26
Author(s): Andrew S. Kim, Chris Yeung, Queen's Univ. (Canada); Robert Szabo, Óbuda Univ. (Hungary); Kyle Sunderland, Rebecca Hisey, David Morton, Queen's Univ. (Canada); Ron Kikinis, Brigham and Women's Hospital (United States); Babacar Diao, Univ. Cheikh Anta Diop (Senegal); Parvin Mousavi, Tamas Ungi, Gabor Fichtinger, Queen's Univ. (Canada)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
Percutaneous nephrostomy is a commonly performed procedure to drain urine and provide relief in patients with hydronephrosis. Current percutaneous nephrostomy needle guidance methods can be difficult, expensive, or not portable. We propose an open-source, real-time 3D anatomical visualization aid for needle guidance with live ultrasound segmentation and 3D volume reconstruction using deep learning and free, open-source software. Participants performed needle insertions with the visualization aid and with conventional ultrasound needle guidance. Guidance with the visualization aid showed significantly higher accuracy, while differences in needle insertion time and success rate were not statistically significant at our sample size. Participants mostly responded positively to the visualization aid, and 80% found it easier to use than ultrasound-only needle guidance. We found that the real-time 3D anatomical visualization aid for needle guidance produced increased accuracy and an overall mostly positive user experience.
12932-33
Author(s): Keshav Bimbraw, Haichong K. Zhang, Worcester Polytechnic Institute (United States)
20 February 2024 • 4:50 PM - 5:10 PM PST | Pacific C
Show Abstract + Hide Abstract
This research introduces an innovative mirror-based ultrasound system for hand gesture classification and explores data analysis using Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. Hand gesture recognition with ultrasound has gained interest for prosthetic control and human-computer interaction. Traditional methods for hand gesture estimation place an ultrasound probe perpendicular to the forearm, causing discomfort and interfering with arm movement. To address this, a novel approach utilizing acoustic reflection is proposed, wherein a convex ultrasound probe is positioned in parallel alignment with the forearm and a mirror is placed at a 45-degree angle for transmission and reception of ultrasound waves. This positioning enhances stability and reduces arm strain. CNNs and ViTs are employed for feature extraction and classification. The system's performance is compared to the traditional perpendicular method, demonstrating comparable results. The experimental outcomes showcase the potential of the system for efficient hand gesture recognition.
12928-27
Author(s): Purnima Rajan, Martin Hossbach, Pezhman Foroughi, Alican Demir, Christopher Schlichter, Clear Guide Medical (United States); Karina Gattamorta, Shayne Hauglum, School of Nursing and Health Studies, Univ. of Miami (United States)
On demand | Presented live 20 February 2024
Show Abstract + Hide Abstract
This paper presents the development and evaluation of an educational system for training and assessing student skills in ultrasound-guided interventional procedures. The system consists of an ultrasound needle guidance system which overlays virtual needle trajectories on the ultrasound screen, and custom anatomical phantoms tailored to specific anesthesiology procedures. The system utilizes artificial intelligence-based optical needle tracking. It serves two main functions: skill evaluation, providing feedback to students and instructors, and as a learning tool, guiding students in achieving correct needle trajectories. The system was evaluated in a study with nursing students, showing significant improvements in guided procedures compared to non-guided ones.
Live Demonstrations Workshop
20 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
Session Chairs: Karen Drukker, The Univ. of Chicago (United States), Lubomir M. Hadjiiski, Michigan Medicine (United States), Horst Karl Hahn, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)


The goal of this workshop is to provide a forum for systems and algorithms developers to show off their creations. The intent is for the audience to be inspired to conduct derivative research, for the demonstrators to receive feedback and find new collaborators, and for all to learn about the rapidly evolving field of medical imaging. The Live Demonstrations Workshop invites participation from all attendees of the SPIE Medical Imaging symposium. Workshop demonstrations include samples, systems, and software demonstrations that depict the implementation, operation, and utility of cutting-edge as well as mature research. Having an accepted SPIE Medical Imaging paper is not required for giving a live demonstration. A certificate of merit and $500 award will be presented to one demonstration considered to be of exceptional interest.

Award sponsored by:
Siemens Healthineers
Publicly Available Data and Tools to Promote Machine Learning: an interactive workshop exploring MIDRC
20 February 2024 • 5:30 PM - 7:00 PM PST | Pacific A
Session Chairs: Weijie Chen, U.S. Food and Drug Administration (United States), Heather M. Whitney, The Univ. of Chicago (United States)

View Full Details: spie.org/midrc-workshop

In this interactive hands-on workshop exploring the infrastructure and resources of the Medical Imaging and Data Resource Center (MIDRC), we will introduce the data collection and curation methods; the user portal for accessing data including tools designed specifically for cohort building; system evaluation approaches and tools including evaluation metric selection; as well as tools for diversity assessment, identification and mitigation of bias and more.

3D Printing and Imaging: Enabling Innovation in Personalized Medicine, Device Development, and System Components
20 February 2024 • 5:30 PM - 7:00 PM PST | Town & Country A

Join this technical event on 3D printing and imaging and hear how it is enabling innovation in personalized medicine, device development, and system components. This special session consists of four presentations followed by a panel discussion.

12925-801
20 February 2024 • 5:30 PM - 5:32 PM PST | Town & Country A
12925-168
Author(s): Jonathan M. Morris, Mayo Clinic (United States)
20 February 2024 • 5:32 PM - 5:49 PM PST | Town & Country A
Show Abstract + Hide Abstract
Over the last 17 years, Mayo Clinic has become a world leader in a field now known as point-of-care manufacturing. Using additive manufacturing, we focus on five distinct areas: first, creating diagnostic anatomic models for each surgical subspecialty from diagnostic imaging; second, manufacturing custom patient-specific sterilizable osteotomy cutting guides for ENT, OMFS, orthopedics, and orthopedic oncology; third, building simulators and phantoms using a combination of special effects and 3D printing; fourth, using 3D printers to create custom phantoms, phantom holders, and other custom medical devices such as pediatric airway devices, proton beam appliances, and custom jigs and fixtures for the department and hospital; and finally, transferring the digital twins into virtual and augmented reality environments for preoperative surgical planning and immersive educational tools. Mayo Clinic has scaled this endeavor to all three of its main campuses, including Jacksonville, FL and Scottsdale, AZ, to complete the enterprise approach. In doing so we have been able to advance patient care locally as well as assist in building the national IT, regulatory, billing, RSNA 3D SIG, and quality control infrastructure needed to assure scaling across this and other countries.
12925-169
Author(s): Alex Grenning, The Jacobs Institute, Inc. (United States)
20 February 2024 • 5:49 PM - 6:06 PM PST | Town & Country A
Show Abstract + Hide Abstract
Engineers often design products to work within available test fixtures. Test fixtures define the goal posts for device evaluation, so it is important that they accurately represent the critical conditions of operation and are supported with justification for regulatory review. This presentation explores the role of 3D printing and model design workflows in producing anatomically relevant test fixtures that can be used to guide, and more importantly accelerate, the device development process. The Jacobs Institute is a one-of-a-kind, not-for-profit vascular medical technology innovation center whose mission is to accelerate the development of next-generation technologies in vascular medicine through collisions of physicians, engineers, entrepreneurs, and industry.
12925-171
Author(s): Devarsh Vyas, Benjamin Johnson, 3D Systems Corp. (United States)
20 February 2024 • 6:06 PM - 6:23 PM PST | Town & Country A
Show Abstract + Hide Abstract
AM is already a widely adopted manufacturing process used to produce millions of medical devices and healthcare products every year. Common uses for AM include the printing of patient-specific surgical implants and instruments derived from imaging data, and the manufacturing of metal implants and instruments with features that are impossible to fabricate using traditional subtractive manufacturing. In addition to reducing costs, patient-specific solutions, such as customized surgical plans and personalized implants, aim to improve surgical outcomes for patients and give surgeons more options and more flexibility in the OR. With advancements in technology, implants are 3D printed in various materials and at various manufacturing sites, including at the point of care. 3D Systems collaborates with medical device manufacturers and health systems to develop personalized health solutions and is the leader in the design, manufacturing, and regulatory approval of 3D-printed patient-specific implants in various materials and technologies.
12925-170
Author(s): David W. Holdsworth, Western Univ. (Canada)
20 February 2024 • 6:23 PM - 6:40 PM PST | Town & Country A
Show Abstract + Hide Abstract
Additive manufacturing has not realized its full potential due to a number of factors. The regulatory environment for medical devices is geared towards conventional manufacturing techniques, making it challenging to certify 3D-printed devices. Additive manufacturing may still not be competitive when scaled up for industrial production, and the need for post-processing may negate some of its benefits. The promises and challenges of additive manufacturing will be explored in the context of medical imaging device design.
12925-802
20 February 2024 • 6:40 PM - 7:00 PM PST | Town & Country A
Establishing Ground Truth in Radiology and Pathology
20 February 2024 • 5:30 PM - 7:00 PM PST | Palm 4

Establishing ground truth is one of the hardest parts of an imaging experiment. In this workshop we'll talk to pathologists, radiologists, an imaging scientist (who evaluates imaging technology without ground truth), and an FDA staff scientist (who creates his own ground truth) to determine how best to deal with this difficult problem.

Moderator:
Ronald Summers, National Institutes of Health (United States)

Panelists:
Richard Levenson, Univ. of California, Davis (United States)
Steven Horii, Univ. of Pennsylvania (United States)
Abhinav Kumar Jha, Washington Univ., St. Louis (United States)
Miguel Lago, U.S. Food and Drug Administration (United States)

Wednesday Morning Keynotes
21 February 2024 • 8:30 AM - 10:00 AM PST | Town & Country A
Session Chairs: Claudia R. Mello-Thoms, Univ. Iowa Carver College of Medicine (United States), Hiroyuki Yoshida, Massachusetts General Hospital (United States), Shandong Wu, Univ. of Pittsburgh (United States)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:40 AM:
Robert F. Wagner Award finalists announcements for conferences 12929 and 12931

12929-405
Author(s): Robert M. Nishikawa, Univ. of Pittsburgh (United States)
21 February 2024 • 8:40 AM - 9:20 AM PST | Town & Country A
Show Abstract + Hide Abstract
Image perception, observer performance, and technology assessment have driven many of the advances in breast imaging. Technology assessment metrics were used to develop mammography systems, first with screen-film mammography and then with digital mammography and digital breast tomosynthesis. To optimize these systems clinically, it became necessary to determine what type of information a radiologist needed to make a correct diagnosis. Image perception studies helped define what spatial frequencies were necessary to detect breast cancers and how different sources of noise affected detectability. Finally, observer performance studies were used to show that advances in the imaging system led to better detection and diagnoses by radiologists. In parallel to these developments, these three concepts were used to develop computer-aided diagnosis systems. In this talk, I will highlight how image perception, observer performance, and technology assessment were leveraged to produce technologies that allow radiologists to be highly effective in detecting breast cancer.
12931-407
Author(s): Gordon J. Harris, Massachusetts General Hospital (United States)
21 February 2024 • 9:20 AM - 10:00 AM PST | Town & Country A
Show Abstract + Hide Abstract
Within academia, there are challenges to building and sustaining software platforms and translating them into widely available tools with national or global use. Hurdles include identifying significant needs, acquiring funding, implementing commercial-grade development processes and user experience design, and choosing a sustainable financial model and licensing plan. In addition, moving beyond the academic sphere into the commercial realm requires an investment in business processes and skills, including branding, marketing, sales, operations, regulatory/compliance, legal, and fundraising expertise. Experiences licensing from academia are shared, illustrated with two examples. First, a clinical trials imaging informatics platform will be discussed, developed initially to manage all the clinical trials imaging assessments within a Comprehensive Cancer Center and now licensed commercially for use in over 4,200 active clinical trials at 18 cancer centers, including 12 NCI-designated sites. Second, a web-based medical imaging framework will be covered: an open-source software platform that has become the standard for over a thousand academic and industry software projects.
Session 7: Image Segmentation/Registration
21 February 2024 • 10:30 AM - 12:30 PM PST | Pacific C
Session Chairs: John S. H. Baxter, Univ. de Rennes 1 (France), Satish E. Viswanath, Case Western Reserve Univ. (United States)
12928-28
Author(s): Michael I. Miga, Vanderbilt Univ. (United States)
21 February 2024 • 10:30 AM - 11:10 AM PST | Pacific C
Show Abstract + Hide Abstract
While modern medical imaging coupled to contemporary methods in machine learning has allowed for dramatic expansions of diagnostic discrimination, similar advances in procedural medicine have lagged due to systematic barriers associated with the intrinsic data limitations of the procedural environment. This reality motivates many questions, both exhilarating and provocative. The assertion in this talk is that treatment platform technologies of the future will need to be intentionally designed for the dual purpose of treatment and discovery. While it is difficult to be prescient on the forms that these forward-thinking systems will take, it is clear that new requirements associated with data integration/acquisition, automation, real-time computation, and cost will likely be critical factors. Exemplar surgical and interventional technologies will be discussed that involve complex biophysical models, methods of automation and procedural field surveillance, efforts toward data-driven procedures and therapy forecasting, and approaches integrating disease phenotypic biomarkers. The common thread to the work is the use of computational models driven by sparse procedural data as a constraining environment to enable guidance and therapy delivery.
12928-29
Author(s): Jon S. Heiselman, Memorial Sloan-Kettering Cancer Ctr. (United States); Morgan J. Ringel, Vanderbilt Univ. (United States); Jayasree Chakraborty, William R. Jarnagin, Memorial Sloan-Kettering Cancer Ctr. (United States); Michael I. Miga, Vanderbilt Univ. (United States)
On demand | Presented live 21 February 2024
Show Abstract + Hide Abstract
Image registration often requires retrospective tuning of model parameters to optimize registration accuracy. However, these procedures may not produce results that optimally generalize to inter- and intra-dataset variabilities. We present a parameter estimation framework based on the Akaike Information Criterion (AIC) that permits dynamic runtime adaptation of model parameters by maximizing the informativeness of the registration model against the specific data constraints available to the registration. This parameter adaptation framework is implemented in a frequency band-limited reconstruction approach to efficiently resolve modal harmonics of soft tissue deformation in image registration. Our approach automatically selects optimal model complexity via AIC to match informational constraints via a parallel-computed ensemble model that achieves excellent TRE without the need for any hyperparameter tuning.
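In its generic form, the Akaike Information Criterion balances goodness of fit against model complexity,

\[ \mathrm{AIC} = 2k - 2\ln\hat{L}, \]

where \(k\) is the number of estimated parameters (here, plausibly the number of retained modal harmonics in the band-limited deformation basis) and \(\hat{L}\) is the maximized likelihood of the model given the available data constraints; the complexity minimizing AIC is selected at runtime.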
12928-30
Author(s): Mingzhe Hu, Xiaofeng Yang, Shaoyan Pan, Emory Univ. (United States)
On demand | Presented live 21 February 2024
Show Abstract + Hide Abstract
This work presents GhostMorph, an innovative model for deformable inter-subject registration in medical imaging, inspired by GhostNet's principles. GhostMorph addresses the computational challenges inherent in medical image registration, particularly in deformable registration where complex local and global deformations are prevalent. By integrating Ghost modules and 3D depth-wise separable convolutions into its architecture, GhostMorph significantly reduces computational demands while maintaining high performance. The study benchmarks GhostMorph against state-of-the-art registration methods using the Liver Tumor Segmentation Benchmark (LiTS) dataset, demonstrating its comparable accuracy and improved computational efficiency. GhostMorph emerges as a viable, scalable solution for real-time and resource-constrained clinical scenarios, marking a notable advancement in medical image registration technology.
12928-31
Author(s): Murong Yi, Ruxiao Duan, Zhikai Li, Jeffrey H. Siewerdsen, Ali Uneri, Junghoon Lee, Craig K. Jones, Johns Hopkins Univ. (United States)
On demand | Presented live 21 February 2024
Show Abstract + Hide Abstract
We propose an uncertainty-aware model for accurate deformation field generation and risk estimation via a joint synthesis and registration network. By framing warping-field prediction as a pixel-wise regression problem, we employ pixel-wise evidential deep learning to predict uncertainties. Visualized uncertainty maps revealed a strong correlation between high warping magnitude and high uncertainty. Numeric outcomes on segmentation maps substantiated the benefit of uncertainty integration, yielding significantly better results than training without uncertainty, which shows that introducing uncertainty into the registration network holds great promise.
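One common formulation of pixel-wise evidential regression (deep evidential regression) has the network predict Normal-Inverse-Gamma parameters \((\gamma, \nu, \alpha, \beta)\) at each pixel, from which the prediction and its uncertainties follow as

\[ \mathbb{E}[\mu] = \gamma, \qquad \mathbb{E}[\sigma^2] = \frac{\beta}{\alpha - 1}, \qquad \mathrm{Var}[\mu] = \frac{\beta}{\nu(\alpha - 1)}, \]

i.e., an aleatoric term \(\mathbb{E}[\sigma^2]\) and an epistemic term \(\mathrm{Var}[\mu]\); whether the authors use exactly this parameterization is not stated in the abstract.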
12928-32
Author(s): Ziteng Liu, Erin L. Bratu, Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 21 February 2024
Show Abstract + Hide Abstract
Although cochlear implants (CIs) have been remarkably successful at restoring sound sensation, the electroneural interface is typically unknown to the audiologists who tune CI programming. Thus, many programming sessions are needed and often lead to suboptimal results. Previously, our group developed an ANF localization approach to simulate the neural response triggered by CIs, but that method relies heavily on manual adjustment and is error-prone. In this work, we introduce a fully automatic and accurate ANF localization method in which the peripheral and central axons of an ANF are estimated individually based on five sets of automatically generated landmarks; the fast marching method is used to find geodesic paths between landmarks, and cylindrical coordinate systems are constructed from the landmarks to smoothly interpolate trajectories between them. Experiments show that the proposed method outperforms the original method and achieves strong performance both qualitatively and quantitatively.
Session 8: Spine / Orthopaedic Surgery
21 February 2024 • 1:40 PM - 3:20 PM PST | Pacific C
Session Chairs: Jeffrey H. Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States), Stefanie Speidel, Nationales Centrum für Tumorerkrankungen Dresden (Germany)
12928-33
Author(s): Abdullah Thabit, Maartje Eijssen, Mohamed Benmahdjoub, Bart Cornelissen, Mark G. van Vledder, Theo van Walsum, Erasmus MC (Netherlands)
On demand | Presented live 21 February 2024
Rib fractures occur in 10% of all trauma patients and can be observed in X-ray and CT scans, allowing for better surgical planning. However, translating the surgical plan to the operating table through mental mapping remains challenging. Using augmented reality (AR), a preoperative plan can be visualized intraoperatively in the surgeon's field of view, allowing more accurate determination of the size and location of the incision for optimal access to the fractured ribs. This study aims to evaluate the use of AR for guidance in rib fracture procedures. To that end, an AR system using the HoloLens 2 was developed to visualize surgical incisions directly overlaid on the patient. To evaluate the feasibility of the system, a torso phantom was scanned for preoperative planning of the incision lines. A user study with 13 participants was conducted to align the preoperative model and delineate the visualized incisions. For a total of 39 delineated incisions, a mean distance error of 3.6 mm was achieved. The study shows the potential of AR as an alternative to the traditional palpation approach for locating rib fractures, which has an error of up to 5 cm.
12928-34
Author(s): Jinchi Wei, Debarghya China, Kai Ding, Johns Hopkins Univ. (United States); Neil Crawford, Norbert Johnson, Globus Medical Inc. (United States); Nicholas Theodore, The Johns Hopkins Univ. School of Medicine (United States); Ali Uneri, Johns Hopkins Univ. (United States)
On demand | Presented live 21 February 2024
Image-guided spine surgery relies on surgical trackers for real-time localization of surgical instruments; however, navigation accuracy is susceptible to local changes in anatomy due to patient repositioning or changes imparted during the procedure. This study presents an ultrasound-guided approach and an integrated real-time system for verifying and recovering tracking accuracy following spinal deformations. The approach combines deep-learning segmentation of the posterior vertebral cortices with a multi-step point-to-surface registration to map reconstructed US features to the 3D CT image. The solution was evaluated in cadaver specimens with induced deformation and demonstrated 1.7 ± 0.3 mm of registration error in localizing vertebrae.
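The authors' multi-step point-to-surface registration is not detailed in the abstract; as a hedged simplification, the sketch below performs a rigid ICP that repeatedly matches ultrasound feature points to their nearest CT surface vertices. Array shapes and the single-step rigid model are assumptions for illustration only.

import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(us_points, surface_points, n_iter=30):
    # us_points: (N, 3) reconstructed ultrasound features
    # surface_points: (M, 3) vertices of the CT-derived vertebral surface
    tree = cKDTree(surface_points)
    R, t = np.eye(3), np.zeros(3)
    moved = us_points.copy()
    for _ in range(n_iter):
        _, idx = tree.query(moved)                 # closest surface vertex
        target = surface_points[idx]
        src_c, tgt_c = moved.mean(0), target.mean(0)
        H = (moved - src_c).T @ (target - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:              # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = tgt_c - R_step @ src_c
        moved = (R_step @ moved.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step     # accumulate the transform
    return R, t, moved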
12928-35
Author(s): Lucas Hintz, Sarah C. Nanziri, The School of Medicine & Health Sciences, The George Washington Univ. (United States); Sarah Dance, Kochai Jawed, Matthew Oetgen, Children's National Medical Ctr. (United States); Tamas Ungi, Gabor Fichtinger, Queen's Univ. (Canada); Christopher Schlenger, Verdure Imaging Inc. (United States); Kevin Cleary, Children's National Medical Ctr. (United States)
On demand | Presented live 21 February 2024
We evaluated the feasibility of using AI-segmented 3D spine ultrasound to assess scoliosis in pediatric patients. Our system uses motion-tracking cameras to track a wireless ultrasound probe and waist belt, in conjunction with proprietary SpineUs™ software that uses neural networks to build a volumetric reconstruction of the spine in real time on a laptop computer. Transverse process angles from the ultrasound reconstructions and the patients' radiographic imaging were compared for five pediatric patients; the results demonstrate a strong linear correlation between the angles obtained from the two imaging methods, with minimal variance. The SpineUs™ system shows promise as a potential alternative to x-ray imaging for reducing radiation dose in children, and it integrates into a busy clinic workflow with minimal disruption and minimal additional staff training.
12928-36
Author(s): Yunbo Shao, Skyline High School (United States); Shuo Li, Case Western Reserve Univ. (United States)
On demand | Presented live 21 February 2024
Estimating the severity of scoliosis is time-consuming and imprecise. This paper contributes to a fully automated method for estimating Cobb angles, a measurement commonly used to grade scoliosis, using a specialized image segmentation model trained on x-rays to automatically identify vertebrae: the Adaptive Loss Engine for X-ray Segmentation (ALEXS). Beyond training, performance is further improved by altering the original x-ray image without changing the locations of the vertebrae: sharpening the image and increasing its contrast allowed ALEXS to identify many more vertebrae than before. Based on the results obtained, ALEXS combined with altered images produces superior results compared with some previous attempts. These improvements allow a more accurate end-to-end process for automatically diagnosing scoliosis.
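Once the vertebrae are segmented, the Cobb angle is commonly taken as the angle between the two most oppositely tilted vertebrae in the curve. The sketch below assumes each segmented vertebra has already been summarized by a tilt angle (for example from its principal axis); this is a simplified illustration, not the ALEXS pipeline.

import numpy as np

def cobb_angle(tilt_degrees):
    # tilt_degrees: per-vertebra tilt angles (degrees) along the curve.
    tilt = np.asarray(tilt_degrees, dtype=float)
    return float(tilt.max() - tilt.min())

# Hypothetical tilts for T5..T12 measured from segmentation masks:
print(cobb_angle([4.0, 9.5, 14.0, 10.0, 2.0, -6.0, -12.5, -17.0]))  # 31.0 degrees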
12928-37
Author(s): Anshuj Deva, Tatiana A. Rypinski, Bhavin Soni, Parvathy Pillai, Laurence D. Rhines, Claudio E. Tatsui, Christopher Alvarez-Breckenridge, Robert Y. North, Justin E. Bird, Jeffrey H. Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
On demand | Presented live 21 February 2024
Statistical surgical process modeling (SPM) presents a powerful framework for computational modeling of workflow and emerging technologies and for analysis of key outcome variables. This work developed statistical SPMs for image-guided spine surgery based on fluoroscopy or CT + navigation, quantifying the benefits of advanced imaging, registration, and planning methods in terms of cycle time, radiation dose, and geometric accuracy.
Session 9: Deep Image Analysis for Image-Guided Interventions
21 February 2024 • 3:50 PM - 5:30 PM PST | Pacific C
Session Chairs: Shuo Li, Case Western Reserve Univ. (United States), Matthieu Chabanas, Univ. Grenoble Alpes (France)
12928-38
Author(s): You K. Hao, Jayaram K. Udupa, Yubing Tong, Univ. of Pennsylvania (United States); Tiange Liu, Yanshan Univ. (China); Caiyun Wu, Dewey Odhner, Drew A. Torigian, Univ. of Pennsylvania (United States)
On demand | Presented live 21 February 2024
To bring natural intelligence (NI), in the form of anatomical information, into AI methods effectively, we recently introduced the hybrid intelligence (HI) concept (NI + AI) for image segmentation [20,21]. This HI system has shown remarkable performance and robustness to image deficiencies. In this paper, we introduce several advances in modeling the NI component that make the HI system substantially more efficient. We demonstrate a 9- to 40-fold computational improvement in the auto-segmentation task for RT planning on clinical studies obtained from 4 different RT centers, while retaining the previous system's state-of-the-art accuracy in segmenting 28 objects in the thorax and neck.
12928-39
Author(s): Mohamed Harmanani, Paul F. R. Wilson, Queen's Univ. (Canada); Fahimeh Fooladgar, The Univ. of British Columbia (Canada); Amoon Jamzad, Mahdi Gilany, Queen's Univ. (Canada); Minh Nguyen Nhat To, The Univ. of British Columbia (Canada); Brian Wodlinger, Exact Imaging Inc. (Canada); Purang Abolmaesumi, The Univ. of British Columbia (Canada); Parvin Mousavi, Queen's Univ. (Canada)
On demand | Presented live 21 February 2024
This work presents a detailed study of several Image Transformer architectures for the classification of prostate cancer in ultrasound images. It seeks to establish a baseline for the performance of these architectures on cancer detection, both in specific regions of interest (ROI-scale methods) and across the entire biopsy core using multiple ROIs (multi-scale methods). This work also introduces a novel framework for multi-objective learning with transformers by combining the loss for individual ROI predictions with the loss for the core prediction, thereby improving performance over baseline methods.
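The multi-objective idea of combining per-ROI and core-level losses can be written compactly. The sketch below is a hedged illustration: the equal weighting, the mean-pooling of ROI logits into a core prediction, and the function name are assumptions, not the authors' formulation.

import torch
import torch.nn.functional as F

def multi_objective_loss(roi_logits, roi_labels, core_label, alpha=0.5):
    # roi_logits: (N_roi, 2) transformer outputs for the ROIs of one biopsy core
    # roi_labels: (N_roi,) ROI-level cancer labels (long)
    # core_label: scalar tensor with the core-level label
    roi_loss = F.cross_entropy(roi_logits, roi_labels)
    core_logits = roi_logits.mean(dim=0, keepdim=True)   # assumed aggregation
    core_loss = F.cross_entropy(core_logits, core_label.view(1))
    return alpha * roi_loss + (1.0 - alpha) * core_loss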
12928-40
Author(s): Nahid Nazifi, Institut National des Sciences Appliquées Centre Val de Loire (France), Institute de Sistemas e Robotica (Portugal); Helder Araujo, Univ. de Coimbra (Portugal); Gopi Krishna Erabati, Institute de Sistemas e Robotica (Portugal); Omar Tahri, VIBOT-ImViA, Univ. de Bourgogne (France)
On demand | Presented live 21 February 2024
Wireless Capsule Endoscopy (WCE) offers a promising approach to painless endoscopic imaging of the entire small bowel, as well as other parts of the gastrointestinal tract, through wireless image capture, enabling detection of various diseases and pathologies. However, existing capsules are often passive, necessitating active control to enhance localization and identification accuracy. We propose a deep learning-based system to estimate camera position for active endoscopic capsule robots. Unsupervised methods rely on synthetic data, where pose network outputs guide depth predictions. Our network incorporates supervision through image warping based on predicted depth and ego-motion, with a comprehensive loss covering image synthesis, depth, and pose. We introduce a visual transformer into the visual odometry pipeline for improved accuracy, building on the Pyramid Vision Transformer (PVT) structure to address its limitations. Our framework incorporates PVTv2, enabling joint training of depth and pose networks for single-image depth regression: consecutive frames are used to predict depth maps and relative poses, supervised via a photometric loss.
12928-41
Author(s): Gayoung Kim, Johns Hopkins Univ. (United States); Majd Antaki, Elekta (United States); Ehud J. Schmidt, Michael Roumeliotis, Akila N. Viswanathan, Junghoon Lee, Johns Hopkins Univ. (United States)
On demand | Presented live 21 February 2024
We developed an MRI-guidance system for cervical cancer brachytherapy that provides automatic segmentation of organs-at-risk and the high-risk clinical target volume (HR-CTV), together with real-time active needle tracking. The segmentation module comprises a coarse segmentation step for organ localization, followed by fine segmentation models trained separately for each organ; size-dependent segmentation is performed for the HR-CTV. The needle-tracking module communicates with active stylets and displays the stylet-tip location and orientation on the MRI in real time. These modules were incorporated into a brachytherapy treatment planning system and validated on five cervical cancer cases, demonstrating clinical utility in increasing procedure efficiency.
12928-42
Author(s): Minh Q. Vu, Yubo Fan, Jack H. Noble, Vanderbilt Univ. (United States)
On demand | Presented live 21 February 2024
Cochlear implant insertion using percutaneous cochlear access involves drilling a single hole through the skull surface and traversing the facial recess, a region approximately 1.0–3.5 mm in width bounded posteriorly by the facial nerve and anteriorly by the chorda tympani. It is therefore very important that these structures are segmented accurately for trajectory planning. In this work, we propose the use of a conditional generative adversarial network (cGAN) to automatically segment the facial nerve. Our network uses a weakly supervised approach, trained on a small sample of 12 manually segmented images and an additional 120 images segmented automatically using atlas-based methods. We also leverage endpoint predictions generated by the network to fix noisy or disconnected segmentations by postprocessing the facial nerve skeleton with a minimum cost path search. Our method generated segmentations with an average mean surface error of only 0.24 mm, improving upon the original method by ~50%.
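A minimum cost path between two predicted endpoints can bridge a gap in a disconnected segmentation. The sketch below uses scikit-image's route_through_array over a cost map derived from the network probabilities; the library choice, cost definition, and function name are assumptions, not the authors' exact search function.

import numpy as np
from skimage.graph import route_through_array

def bridge_gap(prob_map, endpoint_a, endpoint_b):
    # prob_map: segmentation probability volume; endpoints: voxel index tuples.
    # Cost = 1 - probability (kept strictly positive) so the path prefers
    # voxels the network already believes belong to the facial nerve.
    cost = 1.0 - np.clip(prob_map, 0.0, 1.0) + 1e-3
    path, _ = route_through_array(cost, endpoint_a, endpoint_b,
                                  fully_connected=True, geometric=True)
    return np.array(path)   # voxel indices of the bridging path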
Thursday Morning Keynotes
22 February 2024 • 8:30 AM - 10:00 AM PST | Town & Country A
Session Chairs: Rebecca Fahrig, Siemens Healthineers (Germany), John M. Sabol, Konica Minolta Healthcare Americas, Inc. (United States), Ke Li, Univ. of Wisconsin School of Medicine and Public Health (United States), Olivier Colliot, Ctr. National de la Recherche Scientifique (France), Jhimli Mitra, GE Research (United States)

8:30 AM - 8:35 AM:
Welcome and introduction

8:35 AM - 8:40 AM:
Award announcements

  • Robert F. Wagner Award finalists for conferences 12925 and 12926
  • Physics of Medical Imaging Best Student Paper Award
  • Image Processing Best Paper Award
12925-401
Author(s): David W. Holdsworth, Western Univ. (Canada)
22 February 2024 • 8:40 AM - 9:20 AM PST | Town & Country A
Additive manufacturing (i.e. 3D printing) offers transformative potential in the development of biomedical devices and medical imaging systems, but at the same time presents challenges that continue to limit widespread adoption. Within medical imaging, 3D printing has numerous applications including device design, radiographic collimation, anthropomorphic phantoms, and surgical visualization. Continuous technological development has resulted in improved plastic materials as well as high-throughput fabrication in medical-grade metal alloys. Nonetheless, additive manufacturing has not realized its full potential, due to a number of factors. The regulatory environment for medical devices is geared towards conventional manufacturing techniques, making it challenging to certify 3D-printed devices. Additive manufacturing may still not be competitive when scaled up for industrial production, and the need for post-processing may negate some of the benefits. In this talk, we will describe the current state of 3D printing in medical imaging and explore future potential, including links to 3D design and finite-element modeling.
12926-402
Author(s): Shuo Li, Case Western Reserve Univ. (United States)
22 February 2024 • 9:20 AM - 10:00 AM PST | Town & Country A
Foundation models are rapidly emerging as a transformative force in medical imaging, leveraging extensive datasets and sophisticated pre-trained models to decode and interpret complex medical images. Our presentation will begin with an in-depth exploration of the essential concepts that underpin these models, with a special emphasis on the synergy between vision-language models and medical imaging. We aim to elucidate how the integration of prompts, language, and vision catalyzes a groundbreaking shift in the foundations of artificial intelligence. We will then analyze how these four critical elements (prompts, language, vision, and foundation models) will collaboratively shape state-of-the-art AI solutions in medical imaging. Our objective is to ignite a vibrant dialogue about leveraging the collective strength of these components at the SPIE Medical Imaging Conference.
Session 10: Novel Imaging and Visualization
22 February 2024 • 10:30 AM - 12:30 PM PST | Pacific C
Session Chairs: Jack H. Noble, Vanderbilt Univ. (United States), Terry Yoo, The Univ. of Maine (United States)
12928-43
Author(s): Baowei Fei, The Univ. of Texas at Dallas (United States)
22 February 2024 • 10:30 AM - 11:10 AM PST | Pacific C
Advanced imaging and enhanced visualization techniques are critical for precision surgical interventions that can improve outcomes and save lives. Various imaging modalities and visualization approaches are being developed to aid surgeons in completing procedures with high accuracy, reducing inadvertent errors and reoperation rates. For example, one emerging modality, hyperspectral imaging, has been increasingly explored for image-guided surgery, including laparoscopic procedures. Augmented reality systems have also been developed to enhance the visualization of human organs and lesions for potential applications in biopsy and surgery. Advanced imaging, enhanced visualization, AI tools, and surgical robotics will revolutionize the operating room of the future.
12928-44
Author(s): Nicholas E. Pacheco, Shang Gao, Worcester Polytechnic Institute (United States); Kevin Cleary, Rahul Shah, Children's National Hospital (United States); Haichong K. Zhang, Loris Fichera, Worcester Polytechnic Institute (United States)
On demand | Presented live 22 February 2024
Tonsillectomy, one of the most common surgical procedures worldwide, is often associated with postoperative complications, particularly bleeding. Tonsil laser ablation has been proposed as a safer alternative; however, its adoption has been limited because it can be difficult for a surgeon to visually control the thermal interactions that occur between the laser and the tissue. In this study, we propose to monitor the ablation caused by a CO2 laser on ex vivo tonsil tissue using photoacoustic imaging. Soft tissue’s unique photoacoustic spectra were used to distinguish between ablated and non-ablated tissue. Our results demonstrate that photoacoustic imaging was able to visualize necrosis formation and calculate the necrotic extent, offering the potential for improved tonsil laser ablation outcomes.
12928-45
Author(s): Qi Chang, Vahid Daneshpajooh, Patrick D. Byrnes, The Pennsylvania State Univ. (United States); Danish Ahmad, Jennifer Toth, Rebecca Bascom, Penn State College of Medicine (United States); William E. Higgins, The Pennsylvania State Univ. (United States)
On demand | Presented live 22 February 2024
Lung cancer, a leading cause of global cancer-related deaths, can be detected early with a combination of bronchoscopy imaging techniques: white-light (WLB), autofluorescence (AFB), and narrow-band imaging (NBI). However, each modality requires a separate, tedious manual examination, leaving no direct link between the three sources. To address this, we propose a framework for multimodal video synchronization and fusion, built into an interactive graphical system. Key airway video-frame landmarks and lesion frames are noted, registered, and fused to a patient's CT-based 3D airway tree model. Our method eases user interaction, is skill-independent, and facilitates true multimodal analysis of a bronchoscopic airway exam.
12928-46
Author(s): Yubing Tong, Jayaram K. Udupa, Univ. of Pennsylvania (United States); Joseph M. McDonough, The Children's Hospital of Philadelphia (United States); Caiyun Wu, Lipeng Xie, Mostafa Alnoury, Mahdie Hosseini, Shiva Shaghaghi, Leihui Tong, Univ. of Pennsylvania (United States); Samantha Gogel, David M. Biko, Oscar H. Mayer, Jason B. Anari, The Children's Hospital of Philadelphia (United States); Drew A. Torigian, Univ. of Pennsylvania (United States); Patrick J. Cahill, The Children's Hospital of Philadelphia (United States)
On demand | Presented live 22 February 2024
Free-breathing quantitative dynamic MRI (QdMRI) provides a practical solution for evaluating the regional dynamics and architecture of the thorax in patients with thoracic insufficiency syndrome (TIS). Our current aim is to investigate whether QdMRI can also measure thoracic architecture in TIS patients before and after surgery, as well as in healthy children. Architectural parameters (3D distances and angles among multiple object centroids) were computed and compared via t-tests. The distance between the right lung and the right hemi-diaphragm is larger at end-inspiration than at end-expiration for both TIS patients and healthy children, and after surgery it becomes closer to that of healthy children.
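Centroid-based distances and angles of the kind described above are straightforward to compute from binary object masks. The sketch below is illustrative only; the choice of objects and the angle definition are assumptions.

import numpy as np

def centroid_mm(mask, spacing):
    # Centroid of a binary object mask, in millimetres.
    return np.argwhere(mask).mean(axis=0) * np.asarray(spacing)

def centroid_distance_and_angle(mask_a, mask_b, mask_c, spacing):
    # Distance between the centroids of A and B, and the angle at B formed by
    # the three centroids (e.g. right lung, right hemi-diaphragm, spine).
    a, b, c = (centroid_mm(m, spacing) for m in (mask_a, mask_b, mask_c))
    distance_ab = float(np.linalg.norm(a - b))
    u, v = a - b, c - b
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return distance_ab, angle_deg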
12928-47
Author(s): Rowan Fraser, Patric Bettati, Jeff Young, Armand P. Rathgeb, Shashank R. Sirsi, Baowei Fei, The Univ. of Texas at Dallas (United States)
On demand | Presented live 22 February 2024
Neuroblastoma is the most common extracranial solid tumor in children and can be fatal if not treated. High-intensity focused ultrasound is an emerging, non-invasive way of treating tissue deep within the body; it avoids ionizing radiation and the long-term side effects associated with radiation-based treatments. The goal of this project was to develop the rendering component of an augmented reality (AR) system for guided treatment of neuroblastoma with focused ultrasound. Our project focuses on taking 3D models of neuroblastoma lesions obtained from PET/CT and displaying them in our AR system in near real time for use by physicians. We used volume ray casting with raster graphics as our rendering method, as it allows real-time editing of our 3D DICOM data. We implemented the ability to set a custom transfer function, set custom intensity cutoff points, and extract regions of interest via cutting planes. In the future, we hope to incorporate this work into a complete system for focused ultrasound treatment by adding ultrasound simulation, visualization, and deformable registration.
Session 11: Interventional Radiology
22 February 2024 • 1:40 PM - 3:20 PM PST | Pacific C
Session Chairs: Michael I. Miga, Vanderbilt Univ. (United States), Baowei Fei, The Univ. of Texas at Dallas (United States)
12928-48
Author(s): David Ng, Hristo N. Nikolov, Robarts Research Institute (Canada), Western Univ. (Canada); Elizabeth Tai, Western Univ. (Canada); Daniel Gelman, David W. Holdsworth, Maria Drangova, Robarts Research Institute (Canada), Western Univ. (Canada)
On demand | Presented live 22 February 2024
This study aimed to develop a versatile vascular model of a liver with a tumour with applications in training interventional radiologists as well as in research programs to improve embolization therapy. The phantom uses exchangeable, single-use tumour models fabricated using 3D-printing, while mimicking the anatomical, hemodynamic, and radiographic properties of the liver. The modular phantom was used to mimic fluoroscopically guided embolization procedures, demonstrating the visual characteristics of the procedures, including reflux that would lead to non-target embolization. The 3D printed modular phantom design represents an adaptable and versatile model for training and research applications.
12928-49
Author(s): Kyvia Pereira, Morgan J. Ringel, Michael I. Miga, Vanderbilt Univ. (United States)
On demand | Presented live 22 February 2024
This study introduces a simulation approach combining eXtended Finite Element Method (XFEM) for retraction modeling with medical image updates to enhance target visualization and localization accuracy. XFEM simulates tissue retraction, representing complex mechanical behavior during surgery. Utilizing XFEM-derived displacement fields, preoperative images are updated, aiding in visualizing tissue deformation. Experimental validation shows an average displacement error from 1.5 to 2.1 mm, showcasing significant improvement in target accuracy compared to traditional methods.
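Updating a preoperative image with a simulated displacement field amounts to resampling the image at displaced coordinates. The sketch below uses SciPy's map_coordinates with backward warping; the array layout and the deformed-to-original displacement convention are assumptions, not the authors' XFEM pipeline.

import numpy as np
from scipy.ndimage import map_coordinates

def update_image(preop, displacement):
    # preop:        (Z, Y, X) preoperative volume
    # displacement: (3, Z, Y, X) voxel displacements mapping deformed -> original
    grid = np.indices(preop.shape).astype(float)
    sample_coords = grid + displacement
    return map_coordinates(preop, sample_coords, order=1, mode='nearest')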
12928-50
Author(s): Patric Bettati, Jeff Young, Armand P. Rathgeb, Nati Nawawithan, The Univ. of Texas at Dallas (United States); Jeffrey Gahan, Brett A. Johnson, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Ryan Aspenleiter, Fintan Browne, Aditi Chaudhari, Aditya Guin, Varin Sikand, Grant Webb, Jeremy Sherey, Alsadiq Shammet, The Univ. of Texas at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
On demand | Presented live 22 February 2024
A limitation of image-guided biopsies is the lack of optimal visualization of the organ and its surrounding structures, leading to missed target lesions. In this study, we propose an augmented reality (AR) system to increase the accuracy of biopsies. Our AR-guided biopsy system uses high-speed motion tracking technology and an AR headset to display a holographic representation of the organ, lesions, and other structures of interest superimposed on real objects. We apply this system to prostate biopsy by incorporating preoperative computed tomography (CT) scans and real-time ultrasound images. This AR system enables clinicians to gain a better understanding of the lesion’s real-time location. With the enhanced visualization of the prostate, lesion, and surrounding organs, surgeons can perform prostate biopsies with increased accuracy. Our AR-guided biopsy system yielded an average targeting accuracy of 2.94 ± 1.04 mm and can be applied for real-time guidance of prostate biopsy as well as other biopsy procedures.
12928-51
Author(s): Pinyo Taeprasartsit, phenoMapper, LLC (United States); Jan Sebek, Punit Prakash, Kansas State Univ. (United States); Robert F. Short, Dayton VA Medical Ctr. (United States); Henky Wibowo, phenoMapper, LLC (United States)
On demand | Presented live 22 February 2024
We present preliminary results of a machine learning-based local tumor progression (LTP) prediction model to aid physicians in optimizing patient selection for microwave ablation (MWA) treatment. Our model uses specialized three-channel 3D data: pre-ablation CT (channel 1), post-ablation CT depicting the resulting ablation zone (channel 2), and the overlap of the tumor and ablation zone (channel 3). By spatially registering the pre- and post-ablation CTs, we establish a clear spatial relationship between the tumor and the ablation zone. The model achieved a C-statistic (AUC) of 0.849, outperforming prior work.
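The three-channel input described above can be assembled in a few lines once the CTs are registered. This is a hedged sketch of that data layout; variable names and the mask-based overlap definition are assumptions.

import numpy as np

def build_three_channel_volume(pre_ct, post_ct, tumor_mask, ablation_mask):
    # channel 1: pre-ablation CT (tumor)
    # channel 2: post-ablation CT (ablation zone)
    # channel 3: overlap of the tumor and ablation zone masks
    overlap = (tumor_mask.astype(bool) & ablation_mask.astype(bool)).astype(np.float32)
    return np.stack([pre_ct, post_ct, overlap], axis=0)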
12928-52
Author(s): Wenchao Cao, Andrew Missert, Ahmad Parvinian, Daniel Adamo, Brian Welch, Matthew Callstrom, Christopher Favazza, Mayo Clinic (United States)
On demand | Presented live 22 February 2024
Metal artifacts from cryoablation probes interfere with probe placement and ablation monitoring for CT-guided interventional oncology procedures. We developed an approach to training deep learning-based metal artifact reduction (MAR) models that uses phantom-based methods for simulating metal artifacts as well as novel loss functions and data augmentation steps to achieve optimal results. Qualitative comparisons demonstrate that the proposed method can reduce probe-induced metal artifacts while maintaining a high level of anatomic detail. The proposed method does not require access to raw projection data and therefore can be applied to any combination of probes and CT scanners.
Session 12: Joint Session with Conferences 12925 and 12928
22 February 2024 • 3:50 PM - 5:30 PM PST | Town & Country A
Session Chairs: Maryam E. Rettmann, Mayo Clinic (United States), Michael A. Speidel, Univ. of Wisconsin School of Medicine and Public Health (United States)
12925-58
Author(s): Kuan (Kevin) Zhang, Andrea Ferrero, MyungHo In, Christopher P. Favazza, Mayo Clinic (United States)
22 February 2024 • 3:50 PM - 4:10 PM PST | Town & Country A
We aimed to demonstrate the ability of spectral CT to provide temperature mapping within the treatment volume during CT-guided hypo- and hyperthermal tumor ablations. We collected high-dose spectral CT data spanning a wide range of temperatures and generated look-up tables that map CT signal to temperature for the ranges of clinical interest in cryoablation and hyperthermal ablation. Using electron density images generated from the spectral CT data, we demonstrated a sensitivity to temperature changes of 1.2 and 4.1 HU-equivalent per 10 °C in the freezing and heating temperature ranges, respectively. At the clinical radiation dose level for our interventional oncology practice, we obtained a maximum precision of 7 °C and 2 °C within a 33 mm³ ROI of the electron density images for freezing and heating temperatures, respectively. This information was used to develop a clinic-ready CT thermometry protocol that was independently validated and demonstrated median absolute errors of 12.2 °C and 3.4 °C for freezing and heating temperature data, respectively.
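Applying such a look-up table at run time reduces to interpolating the measured electron-density signal against calibration points. The sketch below is illustrative only: the calibration values are hypothetical (the spacing merely mirrors the reported heating sensitivity of about 4.1 HU-equivalent per 10 °C) and do not come from the study's actual LUTs.

import numpy as np

# Hypothetical heating-range calibration (signal change vs. temperature).
lut_signal_hu = np.array([0.0, 4.1, 8.2, 12.3])   # HU-equivalent change from baseline
lut_temp_c    = np.array([20.0, 30.0, 40.0, 50.0])

def signal_to_temperature(roi_signal_hu):
    # Map the mean electron-density signal in an ROI to temperature via the LUT.
    return np.interp(roi_signal_hu, lut_signal_hu, lut_temp_c)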
12928-53
Author(s): Saba Sadatamin, Institute of Biomedical Engineering, Univ. of Toronto (Canada), Posluns Ctr. for Image Guided Innovation & Therapeutic Intervention, The Hospital for Sick Children (Canada); Sara Ketabi, Univ. of Toronto (Canada), The Hospital for Sick Children (Canada); Elise Donszelmann-Lund, Posluns Ctr. for Image Guided Innovation & Therapeutic Intervention, The Hospital for Sick Children (Canada); Saba Abtahi, Yuri Chaban, Institute of Biomedical Engineering, Univ. of Toronto (Canada); Steven Robbins, Richard Tyc, Monteris Medical, Inc. (Canada); Farzad Khalvati, The Hospital for Sick Children (Canada), Univ. of Toronto (Canada); Adam C. Waspe, Posluns Ctr. for Image Guided Innovation & Therapeutic Intervention, The Hospital for Sick Children (Canada), Univ. of Toronto (Canada); Lueder A. Kahrs, Univ. of Toronto Mississauga (Canada), Institute of Biomedical Engineering, Univ. of Toronto (Canada); James M. Drake, Posluns Ctr. for Image Guided Innovation & Therapeutic Intervention, The Hospital for Sick Children (Canada), Institute of Biomedical Engineering, Univ. of Toronto (Canada)
On demand | Presented live 22 February 2024
Magnetic Resonance-guided Laser Interstitial Thermal Therapy (MRgLITT) is a minimally invasive brain tumor treatment involving the insertion of a laser fiber guided by real-time MR thermometry images. However, repositioning the laser is invasive, and accurately predicting thermal spread close to heat sinks poses challenges. To address this issue, we propose the development of MR thermometry prediction using artificial intelligence (AI) modeling. U-Net was trained to model the nonlinear mapping from anatomical magnetic resonance imaging (MRI) planning images to MR thermometry, enabling neurosurgeons to predict heat propagation and choose the best laser trajectory before treatment.
12925-59
Author(s): Martin G. Wagner, Paul F. Laeseke, Amish N. Raval, Michael A. Speidel, Univ. of Wisconsin-Madison (United States)
22 February 2024 • 4:30 PM - 4:50 PM PST | Town & Country A
This study investigates the robustness of device segmentation and tracking in continuous-sweep limited-angle fluoroscopy, a technique developed to provide real-time 3D device navigation during catheter-based procedures. A porcine study is presented in which image sequences at different noise levels were acquired and the device was automatically tracked using a deep learning-based segmentation approach.
12928-54
Author(s): Alexander Lu, Alejandro L. Montes, Johns Hopkins Univ. (United States); Lonny Yarmus, The Johns Hopkins Univ. School of Medicine (United States); Jeffrey Thiboutot, Ali Uneri, Johns Hopkins Univ. (United States); Jeffrey H. Siewerdsen, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States); Wojciech Zbijewski, Alejandro Sisniega, Johns Hopkins Univ. (United States)
22 February 2024 • 4:50 PM - 5:10 PM PST | Town & Country A
Lung cancer, the second most common cancer in the United States, is diagnosed and staged through the analysis of biopsy specimens, often obtained through transbronchial biopsy (TBB). However, accurate TBB of small nodules is hindered by CT body divergence, the misalignment between pre-operative CT and intra-operative coordinate frames. We propose a comprehensive image guidance system that leverages a stationary multi-source fluoroscopy imager together with deformable 3D/2D registration to solve for a motion field parameterized by implicit neural representations (INR), jointly tracking pulmonary and bronchoscopic motion. We evaluate our algorithm using a simulated imaging chain and a 4D-CT dataset, as well as on simulated TBB. Using 5 views, we demonstrate a median landmark TRE of 1.42 mm and a bronchoscope tip error of 2.8 mm. This is a promising 3D image guidance approach for improving the accuracy of TBB using a multi-view stationary imager and estimation of patient motion through deformable 3D/2D registration, and it can be extended to track respiratory and bronchoscope motion over time for real-time navigation.
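An INR-parameterized motion field replaces a dense displacement grid with a small coordinate network. The PyTorch sketch below is a generic illustration of that idea; the positional encoding, layer widths, and two-layer MLP are assumptions and not the authors' architecture.

import torch
import torch.nn as nn

class MotionINR(nn.Module):
    # Maps a normalized 3D coordinate to a 3D displacement, so the deformable
    # motion field is represented by the MLP weights rather than a voxel grid.
    def __init__(self, hidden=128, n_freq=6):
        super().__init__()
        self.n_freq = n_freq
        in_dim = 3 * 2 * n_freq                    # sin/cos positional encoding
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz):                        # xyz: (N, 3) in [-1, 1]
        freqs = 2.0 ** torch.arange(self.n_freq, device=xyz.device) * torch.pi
        enc = xyz[..., None] * freqs               # (N, 3, n_freq)
        enc = torch.cat([enc.sin(), enc.cos()], dim=-1).flatten(1)
        return self.mlp(enc)                       # (N, 3) displacement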
12925-60
Author(s): Grace M. Minesinger, Paul F. Laeseke, Ayca Z. Kutlu, Michael A. Speidel, Martin G. Wagner, Univ. of Wisconsin-Madison (United States)
22 February 2024 • 5:10 PM - 5:30 PM PST | Town & Country A
A deformable liver motion model was developed to advance treatment planning for CBCT-guided histotripsy. The model is FEM-based and informed by displacements not only at external boundaries (liver and gallbladder surfaces) but also at internal ones (vessel surfaces). This method can accurately predict how the target volume has deformed between a high-quality diagnostic scan used for sophisticated treatment planning and the day-of-treatment CBCT, accounting for changes in patient positioning.
Digital Posters
The posters listed below are available exclusively for online viewing during the week of SPIE Medical Imaging 2024.
Conference Chair
The Univ. of Texas MD Anderson Cancer Ctr. (United States)
Conference Chair
Mayo Clinic (United States)
Program Committee
The Univ. of British Columbia (Canada)
Program Committee
Univ. de Rennes 1 (France)
Program Committee
The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
Program Committee
Univ. Grenoble Alpes (France)
Program Committee
Robarts Research Institute (Canada)
Program Committee
Ruprecht-Karls-Univ. Heidelberg (Germany)
Program Committee
Siemens Healthineers (Germany)
Program Committee
The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. (United States)
Program Committee
Queen's Univ. (Canada)
Program Committee
Thayer School of Engineering at Dartmouth (United States)
Program Committee
Univ. of Washington (United States)
Program Committee
The Pennsylvania State Univ. (United States)
Program Committee
Mayo Clinic (United States)
Program Committee
Univ. de Rennes 1 (France)
Program Committee
Grand Canyon Univ. (United States)
Program Committee
Case Western Reserve Univ. (United States)
Program Committee
Rochester Institute of Technology (United States)
Program Committee
Vanderbilt Univ. (United States)
Program Committee
Nagoya Univ. (Japan)
Program Committee
Queen's Univ. (Canada)
Program Committee
Vanderbilt Univ. (United States)
Program Committee
Univ. of Washington (United States)
Program Committee
National Ctr. for Tumor Diseases Dresden (Germany)
Program Committee
Queen's Univ. (Canada)
Program Committee
Case Western Reserve Univ. (United States)
Program Committee
Vanderbilt Univ. (United States)
Program Committee
Hochschule Mannheim (Germany)
Program Committee
National Institute of Allergy and Infectious Diseases (United States)
Program Committee
The Univ. of Maine (United States)
Additional Information
For information on application for the Robert F. Wagner All-Conference Best Student Paper Award, the Young Scientist Award: Image-Guided Procedures, Robotic Interventions, and Modeling, and the Student Paper Award: Image-Guided Procedures, Robotic Interventions, and Modeling, view the SPIE Medical Imaging Awards page