This conference is primarily concerned with applications of medical imaging data in the engineering of therapeutic systems. Original papers are requested in the following topic areas:
Submissions that cross over between this conference and others at SPIE Medical Imaging, and which would be appropriate for combined sessions, are also welcomed.

JOINT SESSIONS:
We plan to continue co-hosting joint sessions with the US Imaging and Tomography (MI108) and Imaging Informatics for Healthcare (MI107) conferences.

Awards:
The Image-Guided Procedures, Robotic Interventions, and Modeling conference features three awards:
Limited student travel awards are also available, for which all student authors of SPIE Medical Imaging papers are eligible to apply.

TOPIC AREAS: For this conference only

During the submission process, you must choose no more than three topics from the following list to assist in the review process.
Conference 12034

Image-Guided Procedures, Robotic Interventions, and Modeling

In person: 22 - 24 February 2022
  • Awards and Plenary Session
  • 1: Neuro Interventions and Applications
  • 2: Keynote and Robot-Assisted Interventions
  • Tuesday/Wednesday Poster Viewing
  • 3: Video-based Interventional Applications
  • 4: Modeling Applications in Image-Guided Therapy
  • Workshop on Careers at the Intersection of Physics, Engineering, Medical Imaging, and Image-Guided Interventions: SPIE and AAPM Perspectives
  • 5: AI-Based Image Segmentation, Classification, and Detection Techniques
  • 6: Image-Guided Ultrasound Interventions: Joint Session with Conferences 12034 and 12038
  • 7: Image-Guided Intervention Workflow, Training, and Skill Assessment
  • Special 50th Anniversary
  • 8: Imaging Physics in Image-Guided Interventions: Joint Session with Conferences 12031 and 12034
  • Wednesday Poster Session
  • 9: Image-Guided Therapy Applications
  • 10: Benchmarking and Assessment in Image-Guided Interventions: Joint Session with Conferences 12034 and 12037
  • 11: Novel Techniques in Image-Guided Interventions
  • 12: Image Registration
Information
Post-deadline abstracts are not accepted for this conference.
Awards and Plenary Session
In person: 21 February 2022 • 4:00 PM - 5:15 PM PST
Session Chairs: Metin N. Gurcan, Wake Forest Baptist Medical Ctr. (United States), Robert M. Nishikawa, Univ. of Pittsburgh (United States)
4:00 pm: Symposium Chair Welcome and Best Student Paper Award Announcement
The first place winner and runner up of the Robert F. Wagner All-Conference Student Paper Award will be announced.
4:15 pm: SPIE 2022 President's Welcome and New SPIE Fellows Acknowledgements
4:20 pm: SPIE Harrison H. Barrett Award in Medical Imaging
This award will be presented in recognition of outstanding accomplishments in medical imaging.
12032-300
Author(s): Jennifer N. Avari Silva, Washington Univ. in St. Louis (United States)
In person: 21 February 2022 • 4:30 PM - 5:15 PM PST
With the increased availability of extended reality (XR) devices in the marketplace, there has been rapid development of medical XR applications spanning education, training, rehabilitation, pre-procedural planning, and intra-procedural use. We will explore various use cases to understand the importance of matching technology to use case, focusing on intra-procedural use cases, which generally carry the highest risk to patient and medical provider but may offer the most sizable benefit to patient and procedure.
Session 1: Neuro Interventions and Applications
In person: 22 February 2022 • 8:00 AM - 9:40 AM PST
Session Chairs: Pierre Jannin, Lab. Traitement du Signal et de l'Image (France), David R. Haynor, Univ. of Washington (United States)
12034-1
Author(s): John S. H. Baxter, Univ. de Rennes 1 (France); Stéphane Croci, Antoine Delmas, Luc Bredoux, SYNEIKA (France); Jean-Pascal Lefaucheur, Paris Est Créteil Univ. (France), Henri-Mondor Hospital (France); Pierre Jannin, Univ. de Rennes 1 (France)
In person: 22 February 2022 • 8:00 AM - 8:20 AM PST
Transcranial magnetic stimulation is a non-invasive therapeutic procedure in which specific cortical brain regions are stimulated in order to disrupt abnormal neural behaviour. This procedure requires the annotation of a number of cortical point targets, which is often performed by a human expert. However, there is a large degree of variability between experts that cannot be readily described using common error models from computer-assisted interventions, as the errors are often a "difference of type" rather than a "difference of degree." To model these, we propose a simple probabilistic model of the agreement between annotations, allowing the error to be better described.
12034-2
Author(s): Satyananda Kashyap, Hakan Bulu, Ashutosh Jadhav, IBM Research - Almaden (United States); Ronak Dholakia, MicroVention, Inc. (United States); Amon Liu, Ayl Consulting, LLC (United States); Hussain Rangwala, William R. Patterson, MicroVention, Inc. (United States); Mehdi Moradi, IBM Research - Almaden (United States)
In person: 22 February 2022 • 8:20 AM - 8:40 AM PST
The accurate measurement of the dimensions of the aneurysm sac is a critical step in treatment planning for cerebral aneurysms. We report a semi-automatic and a fully automatic method for segmenting the sac in standard two-dimensional DSA images obtained within the operating suite prior to implanting intrasaccular devices for aneurysm treatment. We showed that our architecture, which uses an EfficientNet encoder in place of the standard UNet encoder, provides a significant improvement in Dice coefficient on the segmentation task.
12034-3
Author(s): Ahmet Yildiz, Brigham and Women's Hospital, Harvard Medical School (United States); Timothy Minicozzi, The Rivers School (United States), Brigham and Women's Hospital, Harvard Medical School (United States); Franklin King, Fumirato Masaki, Garth Rees Cosgrove, Walid Ibn Essayed, Nobuhiko Hata, Brigham and Women's Hospital, Harvard Medical School (United States)
In person: 22 February 2022 • 8:40 AM - 9:00 AM PST
Deep Brain Stimulation (DBS) is a well-established intervention for treating a variety of neurosurgical disorders, including Parkinson's disease. Our goal is to design an iCT-guided device and to determine whether it can become a viable option for DBS implantation by subjecting the platform to a validation study to assess its accuracy. We found that our platform yields an average Target Point Error (TPE) of 2.09±0.9 mm and 2.52±0.6 mm for frontal and parietal entry points, respectively. We conclude that our iCT-guided platform is capable of replacing MRI-guided devices in circumstances where the shorter imaging cycles of CT scanners are imperative.
12034-4
Author(s): Xiaoyao Fan, Alex Hartov, David W. Roberts, Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States)
In person: 22 February 2022 • 9:00 AM - 9:20 AM PST
Patient registration enables image guidance by establishing a transformation between patient space and image space. In this study, we present an automatic patient registration method using intraoperative stereovision (iSV). Patient registration was achieved via an initial alignment using composite iSV skin patches and a further refinement using exposed cortical surface. The average TRE across 6 cases was 1.91±0.61 mm using landmarks on the cortical surface that were identifiable in both iSV and pMR, with a computational efficiency of ~10 min. These results suggest potential OR applications using intraoperative stereovision for automatic patient registration in image-guided open cranial surgery.
12034-5
Author(s): Runze Han, Craig K. Jones, Pengwei Wu, Prasad Vagdargi, Xiaoxuan Zhang, Ali Uneri, Junghoon Lee, Johns Hopkins Univ. (United States); Mark M. Luciano, William S. Anderson, The Johns Hopkins Hospital (United States); Patrick A. Helm, Medtronic, Inc. (United States); Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States)
In person: 22 February 2022 • 9:20 AM - 9:40 AM PST
We report a deep learning-based method that solves deformable MR-to-CBCT registration using a joint synthesis and registration (JSR) network, proposed for neuro-endoscopy surgical guidance. The JSR network first encodes the MR and CBCT images into latent variables via MR and CBCT encoders, which are then decoded by two branches: image synthesis branches for MR-to-CT and CBCT-to-CT synthesis, and a registration branch for intra-modality registration in an intermediate (synthetic) CT domain. The two branches are jointly optimized, encouraging the encoders to extract features pertinent to both synthesis and registration. Both semi-supervised and unsupervised variants of JSR were evaluated and compared to state-of-the-art registration methods (ANTs, VoxelMorph) and image-synthesis-based registration methods. Both JSR variants achieved superior registration accuracy (Dice and TRE) compared to these methods, while maintaining diffeomorphism and a fast runtime of less than 3 seconds.
Session 2: Keynote and Robot-Assisted Interventions
In person: 22 February 2022 • 10:10 AM - 12:10 PM PST
Session Chairs: Cristian A. Linte, Rochester Institute of Technology (United States), Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States)
12034-500
Author(s): Ann Majewicz, The Univ. of Texas at Austin (United States)
In person: 22 February 2022 • 10:10 AM - 11:10 AM PST
Human-generated, preventable errors, particularly those made intra-operatively, can lead to morbidity and mortality for the patient, poor training outcomes for residents, and high costs for the hospital. Surgical robotic systems could be designed to avoid these errors and improve training outcomes by interpreting, reacting to, and assisting human behavior. This talk will describe some novel data-driven methods to predict, in real-time, surgical style, expertise levels, and task difficulty; as well as present new systems that could be used to assist with surgical intervention or training in a variety of domains.
12034-6
Author(s): Adrian Florea, Dominick S. Ropella, Ernar Amanov, Duke Herrell, Robert J. Webster, Vanderbilt Univ. (United States)
In person: 22 February 2022 • 11:10 AM - 11:30 AM PST
Concentric tube robots (CTRs) have been explored for a variety of surgical applications in confined and narrow spaces in the human body due to their miniaturization potential and enhanced dexterity. In the design of actuation systems for CTRs, the roller gear has been suggested as a way to enable simultaneous two-degree-of-freedom (DoF) control for a CTR. Yet to date, this idea has only been applied to single-arm systems, which are not capable of more complex procedures where multiple tools are required. Here we explore the design and evaluation of a multi-arm CTR utilizing these roller gears.
12034-7
Author(s): Yuan Shi, Dartmouth College (United States); Yajie Lou, Columbia Univ. (United States); Iroha Shirai, Princeton Univ. (United States); Xiaotian Wu, Massachusetts General Hospital (United States), Harvard Medical School (United States); Joseph A. Paydarfar, Dartmouth-Hitchcock Medical Ctr. (United States); Ryan J. Halter, Dartmouth College (United States)
In person: 22 February 2022 • 11:30 AM - 11:50 AM PST
Surgical navigation using intraoperative imaging has not been demonstrated for transoral robotic surgery (TORS). This proof-of-concept study shows the possibility, for the first time, of integrating a CT-compatible oral retractor system, electromagnetic tracking, and the da Vinci Surgical System. A cadaver experiment was performed following standard TORS procedures, and the TLE was 3.46±0.77 mm. The real-time positions of tracked robotic instruments were visualized in tri-planar CT images and successfully displayed on the surgeon's console and on the vision cart via TilePro. This study validates the feasibility of the proposed navigation system and lays an important foundation for safe and effective image-guided TORS.
12034-8
Author(s): Yiqun Q. Ma, Grace J. Gang, Johns Hopkins Univ. (United States); Tina Ehtiati, Siemens Healthineers (Germany); Tess Reynolds, The Univ. of Sydney (Australia); Tom Russ, Ruprecht-Karls-Univ. Heidelberg (Germany); Wenying Wang, Clifford Weiss, Nicholas Theodore, Kelvin Hong, Jeffrey H. Siewerdsen, Joseph W. Stayman, Johns Hopkins Univ. (United States)
In person: 22 February 2022 • 11:50 AM - 12:10 PM PST
Metal artifacts are a difficult challenge for cone-beam CT (CBCT), especially in intra-operative imaging; the high attenuation makes this a missing-data problem. Increasingly, modern robotic C-arms provide the flexibility for non-circular orbits, which can improve sampling completeness and thus reduce metal artifacts. In this work, we implement non-circular orbits on a clinical Siemens Artis zeego robotic C-arm to test their capability for metal artifact reduction on a challenging phantom. Importantly, we restrict our implementation to standard built-in functions to demonstrate generalizability; the only custom component is a simple software tool for data extraction. The results show drastically reduced metal artifacts using non-circular orbits alone, improved even further by adding a simple metal artifact reduction algorithm.
Tuesday/Wednesday Poster Viewing
In person: 22 February 2022 • 12:00 PM - 7:00 PM PST
Posters will be on display Tuesday and Wednesday with extended viewing until 7:00 pm on Tuesday. The poster session with authors in attendance will be Wednesday evening from 5:30 to 7:00 pm. Award winners will be identified with ribbons during the reception. Award announcement times are listed in the conference schedule.
Session 3: Video-based Interventional Applications
In person: 22 February 2022 • 1:20 PM - 3:00 PM PST
Session Chairs: William E. Higgins, The Pennsylvania State Univ. (United States), Eric J. Seibel, Univ. of Washington (United States)
12034-9
Author(s): William E. Higgins, Wennan Zhao, Danish Ahmad, Jennifer Toth, Rebecca Bascom, The Pennsylvania State Univ. (United States)
In person: 22 February 2022 • 1:20 PM - 1:40 PM PST
Radial-probe endobronchial ultrasound (RP-EBUS) is commonly used to visualize extraluminal structures and confirm target lesion locations. Unfortunately, physician skill in using RP-EBUS varies greatly. On another front, image-guided bronchoscopy systems have been developed to assist with bronchoscopy navigation. However, these systems offer no direct linkage and guidance for RP-EBUS localization. We propose an image-guidance methodology that introduces capabilities for off-line procedure planning and intra-operative guidance of RP-EBUS invocation and usage.
12034-10
Author(s): Cheng Wang, Yuichiro Hayashi, Masahiro Oda, Nagoya Univ. (Japan); Takayuki Kitasaka, Aichi Institute of Technology (Japan); Hitotsugu Takabatake, Minami Sanjyo Hospital (Japan); Masaki Mori, Sapporo Kosei Hospital (Japan); Hirotoshi Honma, Hokkaido Univ. (Japan); Hiroshi Natori, Nishioka Hospital (Japan); Kensaku Mori, Nagoya Univ. (Japan)
In person: 22 February 2022 • 1:40 PM - 2:00 PM PST
This paper describes a branching level estimation method using the tracking result of the bronchial orifice structure in branches. Since the bronchus has a tree-like structure with many branches, it would be beneficial to physicians if the location of the bronchoscope among the branches were provided. Hence, branching level estimation is the core of coarse bronchoscope-tracking-based navigated bronchoscopy. A previous method used changes in the number of bronchial orifices (BOs) and the camera moving direction for branching level estimation, but it cannot observe the changes of each BO region. Therefore, we extract BO regions using a virtual depth image from deep learning and track these regions across real bronchoscope images. The branching level is estimated based on the results of BO tracking. Experimental results showed that the average accuracy of the branching level estimation is 92.1%.
12034-11
Author(s): Pengcheng Chen, Chen Gong, Andrew Lewis, Yaxuan Zhou, Eric J. Seibel, Blake Hannaford, Univ. of Washington (United States)
In person: 22 February 2022 • 2:00 PM - 2:20 PM PST
Over 1 million cystoscopies are performed annually in the USA. Robotic assistance of the procedure is being considered to overcome healthcare disparities due to geographic distances. To ensure safety, real-time navigation by simultaneous localization and mapping (SLAM) of the cystoscope is desired. However, the near-featureless bladder wall and the irregular movement of the manually operated monocular cystoscope have made SLAM-based navigation a difficult challenge. In this work, we develop a robot-assisted approach in a bladder phantom using a commercial flexible cystoscope. With a remote-controlled robot, we have succeeded in creating a series of real-time reconstructions of the bladder interior wall. In post-processing, we combine those results and achieve a 3D reconstruction. We compared SLAM performance under robotic and manual control at different scanning speeds, which shows that robotic assistance provides significantly more robust and accurate reconstructions.
12034-12
Author(s): Yukiya Sato, Chiba Univ. (Japan)
In person: 22 February 2022 • 2:20 PM - 2:40 PM PST
In this study, we improved the accuracy of ego-motion estimation in organs by using the outputs of a CycleGAN, which enables narrow-band optical observation by learning pseudo-style transformations, as inputs to a recent self-supervised ego-motion estimation framework. We compared the trajectories estimated by ORB-SLAM2, MonoDepth2, and our proposed framework against the reference organ shape points and found that our framework showed the lowest error (4.15 mm) among the existing methods.
12034-13
Author(s): Prasad Vagdargi, Ali Uneri, Craig K. Jones, Pengwei Wu, Runze Han, Johns Hopkins Univ. (United States); Mark G. Luciano, William S. Anderson, The Johns Hopkins Univ. School of Medicine (United States); Patrick A. Helm, Medtronic, Inc. (United States); Gregory D. Hager, Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States)
In person: 22 February 2022 • 2:40 PM - 3:00 PM PST
A system for real-time 3D neuroendoscopic video reconstruction using simultaneous localization and mapping (SLAM) is presented, using neuroendoscopic imaging to reconstruct a sparse point-cloud representation of the intraoperative anatomy in a computationally efficient, multithreaded procedure. Experiments performed on ventricle phantoms yielded sub-mm reconstruction accuracy with minimal residual bias. SLAM provided a 23× speedup in runtime compared to prior methods. The system demonstrated sub-mm accuracy in cadaveric experiments, with a real-time localization update rate of 7 Hz. Neuroendoscopic video reconstruction was shown to achieve sub-mm error and enable real-time target localization in both phantom and cadaveric studies.
Session 4: Modeling Applications in Image-Guided Therapy
In person: 22 February 2022 • 3:30 PM - 4:50 PM PST
Session Chairs: Michael I. Miga, Vanderbilt Univ. (United States), Amber L. Simpson, Queen's Univ. (Canada)
12034-14
Author(s): Ziteng Liu, Jack H. Noble, Vanderbilt Univ. (United States)
In person: 22 February 2022 • 3:30 PM - 3:50 PM PST
Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensory-based hearing loss. In previous research, our group has developed methods that use patient-specific electrical characteristics to simulate the activation pattern of auditory nerves. However, estimating those electrical characteristics requires extensive computation resources. In this paper, we proposed a deep-learning-based method using Cycle GANs to coarsely estimate the patient-specific electrical characteristics. These estimates can then be further optimized using a limited range conventional searching strategy. The results show that our proposed method can generate high-quality predictions and largely improve the speed of constructing models.
12034-15
Author(s): Roman Vasyltsiv, Xin Qian, Zhigang Xu, Samuel Ryu, Wei Zhao, Adrian Howansky, Stony Brook Univ. (United States)
In person: 22 February 2022 • 3:50 PM - 4:10 PM PST
This work investigates the feasibility of using a C-arm x-ray imaging system to track high dose rate (HDR) brachytherapy sources in vivo using cone-beam tomography and highly constrained image reconstruction. Monte Carlo methods are used to simulate the imaging workflow for such a system and investigate the impact of detector, acquisition, and image reconstruction parameters on its achievable spatiotemporal resolution. Tradeoffs in the system's spatial, temporal, and dose characteristics are determined. The results indicate that 4D resolutions on the order of 1 mm and 1-2 seconds may be achieved, which is acceptable for certain HDR brachytherapy applications.
12034-16
Author(s): Mohammad Mahmudur Rahman Khan, Rueben Banalagay, Robert F. Labadie, Jack H. Noble, Vanderbilt Univ. (United States)
In person: 22 February 2022 • 4:10 PM - 4:30 PM PST
The final outcome of cochlear implant (CI) surgery can be improved by pre-surgical planning using an accurate segmentation of intra-cochlear anatomy. In this paper, we investigate intra-cochlear segmentation performance as a function of the image acquisition parameters. A dataset of 110 pseudo-CTs was generated from 11 µCTs. An active shape model-based method was evaluated for segmenting the intra-cochlear structures. Our analysis shows that the segmented volume has a significantly strong correlation with both resolution and reconstruction filtering parameters. This is important information for clinicians who prepare pre-surgical plans using these segmentations.
12034-17
Author(s): Zhiguo Zhou, Univ. of Central Missouri (United States); Meijuan Zhou, Xi'an Jiaotong Univ. (China); Zhilong Wang, Peking Univ. Cancer Hospital & Institute (China); Xi Chen, Xi'an Jiaotong Univ. (China)
In person: 22 February 2022 • 4:30 PM - 4:50 PM PST
Recently, immunotherapy with immune checkpoint inhibitors has significantly improved the survival rate and reduced recurrence risk in metastatic melanoma. Accurately predicting immunotherapy response is therefore of great importance for improving treatment effectiveness. We aim to develop a new automated multi-objective model with hyperparameter optimization (AutoMO-HO) to improve treatment outcome prediction performance. Delta-radiomics features, which capture the difference between pre- and post-treatment radiomic features, were used in this study. Several hyperparameters must be set manually before training; since Bayesian optimization can tune hyperparameters efficiently, it is introduced to develop AutoMO-HO.
Workshop on Careers at the Intersection of Physics, Engineering, Medical Imaging, and Image-Guided Interventions: SPIE and AAPM Perspectives
In person: 22 February 2022 • 5:00 PM - 7:00 PM PST
Session Chairs: David R. Holmes, Mayo Clinic (United States), Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States)
This workshop is intended to expose attendees to career paths at the intersection of medical imaging, physics, engineering, and medical physics, aimed at non-clinical professionals. The event will feature panelists who have embraced such careers as research scientists, engineers, or physicists, and will provide concrete examples of their education paths, qualifications, typical responsibilities, and potential avenues for embarking on similar career paths.
Session 5: AI-Based Image Segmentation, Classification, and Detection Techniques
In person: 23 February 2022 • 8:00 AM - 9:40 AM PST
Session Chairs: Satish E. Viswanath, Case Western Reserve Univ. (United States), David R. Holmes, Mayo Clinic (United States)
12034-19
Author(s): Michael B. Allan, Mohammad H. Jafari, Nathan V. Woudenberg, The Univ. of British Columbia (Canada); Oron Frenkel, Darra Murphy, Tracee Wee, Rob D'Ortenzio, Yong Wu, James Roberts, Naoya Shatani, St. Paul's Hospital (Canada); Ang Nan Gu, Samira Sojoudi, Purang Abolmaesumi, The Univ. of British Columbia (Canada)
In person: 23 February 2022 • 8:20 AM - 8:40 AM PST
The accurate computerized assessment of obstetric ultrasound is a challenging task due to the noisy nature of ultrasound images and the presence of complex anatomies. We propose a multi-branch deep learning architecture to identify multiple anatomies in obstetric sonography. The model is trained to segment the uterus and gestational sac regions, and to place landmark points denoting the crown and rump of the fetus. We conduct experiments with varying model sizes, presenting a trade-off between accuracy and efficiency. We found that one of our larger models is appropriate for a laboratory setting, while one of the smaller models suits mobile POCUS devices.
12034-20
Author(s): Juhwan Lee, Justin N. Kim, Case Western Reserve Univ. (United States); Gabriel T. R. Pereira, Univ. Hospitals Cleveland Medical Ctr. (United States); Yazan Gharaibeh, Chaitanya Kolluru, Case Western Reserve Univ. (United States); Vladislav N. Zimin, Luis A. P. Dallan, Univ. Hospitals Cleveland Medical Ctr. (United States); Ammar Hoori, Case Western Reserve Univ. (United States); Giulio Guagliumi, Ospedale Papa Giovanni XXIII (Italy); Hiram G. Bezerra, Univ. of South Florida (United States); David L. Wilson, Case Western Reserve Univ. (United States)
In person: 23 February 2022 • 8:40 AM - 9:00 AM PST
We developed a new method for automated detection of microchannels in intravascular optical coherence tomography images. The proposed method includes three main steps: pre-processing, identification of microchannel candidates, and microchannel classification. Our method provided excellent segmentation of microchannels, with a Dice coefficient of 0.811, sensitivity of 92.4%, and specificity of 99.9%. It has great potential to enable highly automated, objective, repeatable, and comprehensive evaluations of vulnerable plaques and treatments, and we believe it is promising for both research and clinical applications.
12034-21
Author(s): S. M. Kamrul Hasan, Cristian A. Linte, Rochester Institute of Technology (United States)
In person: 23 February 2022 • 9:00 AM - 9:20 AM PST
We propose a novel method that incorporates uncertainty estimation to detect failures in the segmentation masks generated by CNNs; our study further showcases the potential of our model to evaluate the correlation between the uncertainty and the segmentation errors for a given model. Furthermore, we introduce a multi-task, cross-task learning consistency approach to enforce the correlation between the pixel-level and geometric-level tasks. Our experiments justify the effectiveness of our model for segmentation and uncertainty estimation of the left ventricle, right ventricle, and myocardium at the end-diastole and end-systole phases from cine MRI available through the MICCAI 2017 ACDC Challenge dataset.
12034-22
Author(s): Yixuan Huang, Craig K. Jones, Xiaoxuan Zhang, Ashley Johnston, Nafi Aygun, Timothy Witham, Johns Hopkins Univ. (United States); Patrick A. Helm, Medtronic, Inc. (United States); Jeffrey H. Siewerdsen, Ali Uneri, Johns Hopkins Univ. (United States)
In person: 23 February 2022 • 9:20 AM - 9:40 AM PST
A neural network was developed for automatic vertebrae labeling in Long-Film images – a novel intraoperative imaging modality with extended field-of-view. A MultiSlot network architecture was designed to utilize the unique slot-collimated geometry and consolidate information from the overlapping image regions. Using Long-Films from multiple views, a MultiView architecture was implemented to pair detections and jointly perform classification. The proposed solution achieved 92.2% and 89.7% labeling accuracy on AP and Lateral views, respectively. Effective incorporation of long-contextual image data from multiple perspectives provided by our solution offers a promising means of accurate vertebrae labeling in spine surgery.
Session 6: Image-Guided Ultrasound Interventions: Joint Session with Conferences 12034 and 12038
In person: 23 February 2022 • 10:10 AM - 12:10 PM PST
Session Chairs: Purang Abolmaesumi, The Univ. of British Columbia (Canada), Jørgen Arendt Jensen, Technical Univ. of Denmark (Denmark)
12034-23
Author(s): Mohamed A. Abbass, Sherif Hussein, Mohamed M. Saleh, Military Technical College (Egypt); Mohamed S. Abdel-All, Mohamed Basyouny, Maadi Armed Forces Medical Complex (Egypt); Ahmed Omar, Military Technical College (Egypt)
In person: 23 February 2022 • 10:10 AM - 10:30 AM PST
The feasibility of monitoring radiofrequency thermal ablation (RFA) using echo decorrelation imaging in ex vivo hepatocellular carcinoma (HCC) was evaluated in this paper. RFA sessions (N = 5) were performed using a commercial RF generator following a standard-of-care protocol. HCC specimens were obtained from patients who underwent liver resection or liver transplant. RFA was guided and monitored using 128-element 8 MHz ultrasound arrays. Echo decorrelation images and integrated backscatter (IBS) images were computed using 20 RF beamformed frames acquired during the ablation process. Segmented tissues were co-registered with the corresponding ultrasound images. Both imaging methods were assessed using receiver operating characteristic (ROC) curves, and areas under the ROC curve (AUC) were computed to compare the methods statistically. Results showed that echo decorrelation imaging predicted RFA in ex vivo HCC tissue more accurately than IBS (AUC = 0.878 vs. 0.77, respectively).
12038-6
Author(s): Tiana Trumpour, Western Univ. (Canada), Robarts Research Institute (Canada); Jessica R. Rodgers, Queen's Univ. (Canada); David Tessier, Robarts Research Institute (Canada); Lucas C. Mendez, Douglas A. Hoover, David D'Souza, London Regional Cancer Program (Canada); Kathleen Surry, London Regional Cancer Program (Canada), Western Univ. (Canada); Aaron Fenster, Western Univ. (Canada), London Regional Cancer Program (Canada), Robarts Research Institute (Canada)
In person: 23 February 2022 • 10:30 AM - 10:50 AM PST
High dose rate brachytherapy is a common procedure used in the treatment of gynecological cancers to irradiate malignant tumors while sparing surrounding healthy tissue. While treatment may be delivered using a variety of applicator types, a hybrid technique consisting of an intracavitary applicator and interstitial needles provides highly localized placement of the radioactive sources. For an accurate procedure, identification of the applicator and the interstitial needle tips is necessary. To improve treatment outcomes we propose the use of image fusion to combine three-dimensional transabdominal and transrectal ultrasound images for the complete visualization of the applicator, needle tips, and surrounding anatomy.
12034-24
Author(s): Jamie Alexis Goco, Mohammad H. Jafari, The Univ. of British Columbia (Canada); Christina Luong, Teresa Tsang, Vancouver General Hospital (Canada); Purang Abolmaesumi, The Univ. of British Columbia (Canada)
In person: 23 February 2022 • 10:50 AM - 11:10 AM PST
Left ventricle internal dimension (LVID) is an important measurement in two-dimensional echocardiography, as it provides information about the structural integrity and systolic function of the heart. To decrease the high inter-observer variability seen when measuring LVID, we propose a fully automatic and efficient landmark detection network, adapted from uncertainty-driven video landmark detection (U-LanD), that automatically measures LVID and calculates ejection fraction. The model performs better in accuracy and precision than previous models in the literature. As a lightweight model that does not rely on electrocardiogram tracings, it can easily be integrated into mobile applications with limited computing resources, such as point-of-care ultrasound.
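The abstract does not state how ejection fraction is derived from LVID; one standard dimension-based approach is the Teichholz formula, sketched below with illustrative (hypothetical) measurements.

```python
def teichholz_volume(lvid_cm):
    # Teichholz formula: LV volume (mL) estimated from a single
    # internal dimension (cm): V = 7.0 / (2.4 + D) * D^3
    return 7.0 / (2.4 + lvid_cm) * lvid_cm ** 3

def ejection_fraction(lvidd_cm, lvids_cm):
    # EF (%) from end-diastolic and end-systolic LVID measurements.
    edv = teichholz_volume(lvidd_cm)  # end-diastolic volume
    esv = teichholz_volume(lvids_cm)  # end-systolic volume
    return 100.0 * (edv - esv) / edv

# Illustrative normal-range measurements: LVIDd = 5.0 cm, LVIDs = 3.2 cm.
ef = ejection_fraction(5.0, 3.2)
```

With these toy inputs the EF lands in the normal range (around 65%); the landmark network's contribution is supplying the two LVID measurements automatically.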
12034-25
Author(s): Han Liu, Michelle K. Sigona, Li Min Chen, Charles F. Caskey, Benoit M. Dawant, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 11:10 AM - 11:30 AM PST
Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic ultrasound method that has been clinically approved to thermally ablate regions of the thalamus. CT imaging is currently the gold standard for estimating acoustic properties of an individual skull during clinical procedures, but CT imaging exposes patients to radiation and increases the overall number of imaging procedures. A method to estimate acoustic properties of the skull without CT would therefore be desirable. Here, we synthesized CT images from routinely acquired T1-weighted MRI by using a 3D patch-based conditional generative adversarial network. We compared the performance of synthetic CT to real CT images using a treatment planning software and found that the number of active elements and skull density ratios between real and synthesized CT had Pearson's correlation coefficients of 0.9213 and 0.9128, respectively. Our work demonstrates the feasibility of replacing real CT with MR-synthesized CT for TcMRgFUS planning.
12038-7
Author(s): Claire K. Park, Sam Papernick, Nathan Orlando, Robarts Research Institute (Canada), Western Univ. (Canada); Melanie Jonnalagadda, Western Univ. (Canada); Jeffrey Bax, Lori Gardi, Kevin Barker, David Tessier, Robarts Research Institute (Canada); Aaron Fenster, Robarts Research Institute (Canada), Western Univ. (Canada)
In person: 23 February 2022 • 11:30 AM - 11:50 AM PST
We present an alternative, adaptable, and cost-effective spatially tracked 3DUS system for automated whole-breast 3D ultrasound (US) imaging. This paper describes the system design, optimization of spatial tracking, and the multi-image registration and fusion of acquired 3DUS images in a tissue-mimicking phantom and first proof-of-concept healthy volunteer study. The system contains a clinician-operated manipulator enabling six degrees-of-freedom for motion, and an in-house 3DUS scanner, adaptable to any US transducer. Spatial tracking was optimized, then compound motions were assessed within a clinically relevant workspace. Multi-image registration and fusion of acquired 3DUS images were performed, demonstrating potential utility as a bedside point-of-care (POC) approach toward automated whole-breast 3DUS in women with dense breasts.
12034-26
Author(s): Shuwei Xing, Robarts Research Institute (Canada), Western Univ. (Canada); Terry M. Peters, Aaron Fenster, Elvis C. S. Chen, Robarts Research Institute (Canada); Derek W. Cool, Amol Mujoomdar, Leandro Cardarelli Leite, Western Univ. (Canada); Joeana Cambranis Romero, Robarts Research Institute (Canada)
In person: 23 February 2022 • 11:50 AM - 12:10 PM PST
In percutaneous thermal ablation, complete coverage of the targeted focal tumor by the ablation zone, with a sufficient safety margin of 5-10 mm, is required to ensure tumor eradication. However, conventional 2D ultrasound-guided procedures are limited in their ability to estimate tumor coverage intra-procedurally, since evaluation relies on only one or a few 2D US images. In this paper, we evaluated the surface error and volume accuracy of tumor coverage estimation using 3D US images. Results demonstrated that this approach can provide sufficient knowledge of intra-procedural tumor coverage and an opportunity to correct the ablation applicator position or modify the thermal ablation delivery.
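The coverage-plus-margin criterion above can be checked directly on binary masks; the sketch below uses a toy 2D geometry on a 1 mm grid (a circular tumor inside a concentric ablation zone), not the paper's data or method.

```python
import numpy as np

# Toy masks on a 1 mm isotropic grid (illustrative geometry).
y, x = np.mgrid[-20:21, -20:21]
r = np.sqrt(x ** 2 + y ** 2)
tumor = r <= 5          # segmented tumor (radius 5 mm)
ablation = r <= 12      # predicted ablation zone (radius 12 mm)

# Full coverage: no tumor voxel lies outside the ablation zone.
covered = not np.any(tumor & ~ablation)

# Minimum margin: distance from each tumor-rim voxel to the nearest
# non-ablated voxel (brute force is fine on this small grid).
rim = np.argwhere(tumor & (r > 4))
outside = np.argwhere(~ablation)
min_margin = np.linalg.norm(
    rim[:, None, :] - outside[None, :, :], axis=2).min()

meets_margin = covered and min_margin >= 5.0  # 5 mm safety margin
```

On a real 3D US segmentation the same two quantities (coverage and minimum margin) are what would flag the need to reposition the applicator.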
Session 7: Image-Guided Intervention Workflow, Training, and Skill Assessment
In person: 23 February 2022 • 1:20 PM - 2:20 PM PST
Session Chairs: Pierre Jannin, Lab. Traitement du Signal et de l'Image (France), Stefanie Speidel, National Ctr. for Tumor Diseases Dresden (Germany)
12034-27
Author(s): Elizabeth H. Klosa, Queen's Univ. (Canada); Rebecca Hisey, Lab. for Percutaneous Surgery, Queen's Univ. (Canada); Tahmina Nazari, Erasmus MC (Netherlands); Theo Wiggers, Incision Academy (Netherlands); Boris Zevin, Queen's Univ. (Canada); Tamas Ungi, Gabor Fichtinger, Lab. for Percutaneous Surgery, Queen's Univ. (Canada)
In person: 23 February 2022 • 1:20 PM - 1:40 PM PST
This study aims to train a neural network to identify tissues in a low-cost inguinal hernia phantom. Identifying the tissues will allow us to recognize the tool-tissue interactions needed for task recognition of an open inguinal hernia repair. Five surgeons wore head-mounted cameras to record themselves performing the repair. Eight simulated tissues were segmented throughout frames of the videos to be used in training a U-Net. The U-Net was found to identify the simulated tissues sufficiently well for use in task recognition.
12034-28
Author(s): Dhruv Patel, Queen's Univ. (Canada); Erik Ziegler, Rob Lewis, Radical Imaging LLC (United States); Parvin Mousavi, Queen's Univ. (Canada); Alireza Sedghi, Radical Imaging LLC (United States)
In person: 23 February 2022 • 1:40 PM - 2:00 PM PST
AI models are often task- and data-specific, requiring input images of specific modalities and sequences. Although DICOM metadata can provide such information, it is inconsistent and non-standard, with many missing values across metadata fields. In this paper, we present a deep learning-based unsupervised clustering algorithm that groups radiological images based solely on their pixel data. After training our deep clustering network, a human reader labels each cluster by examining the modality, body part, and orientation. At test time, the trained clustering model identifies the group an image belongs to, allowing consistent and correct metadata labels to be assigned. Experimental evaluation shows that we can successfully cluster images with 94% accuracy without using any labeled data.
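The cluster-then-label workflow can be illustrated with a minimal sketch: two well-separated Gaussian blobs stand in for the learned image embeddings (the paper's deep clustering network is not reproduced here), a tiny 2-means loop clusters them, and a human-supplied name per cluster propagates to every member.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for learned embeddings: one blob per (hypothetical) modality.
ct_like = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
mr_like = rng.normal([3.0, 3.0], 0.1, size=(50, 2))
embeddings = np.vstack([ct_like, mr_like])

# Minimal 2-means clustering (Lloyd's algorithm).
centers = embeddings[[0, -1]].copy()
for _ in range(20):
    d = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    centers = np.array([embeddings[assign == k].mean(axis=0)
                        for k in range(2)])

# A human reader inspects one example per cluster and names it once;
# every image in that cluster then inherits the metadata label.
cluster_names = {assign[0]: "CT", assign[-1]: "MR"}
labels = [cluster_names[a] for a in assign]
```

The point of the design is that the expensive human step scales with the number of clusters, not the number of images.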
12034-29
Author(s): Olivia O'Driscoll, Rebecca Hisey, Queen's Univ. (Canada); Matthew S. Holden, Daenis Camire, Carleton Univ. (Canada); Jason Erb, Daniel Howes, Gabor Fichtinger, Tamas Ungi, Queen's Univ. (Canada)
In person: 23 February 2022 • 2:00 PM - 2:20 PM PST
Computer-assisted surgical skill assessment methods have traditionally relied on tracking tool motion with expensive sensors. Recent advances in object detection networks have made it possible to quantify tool motion using only a camera. This study determines the feasibility of using metrics computed with object detection by comparing them to widely accepted metrics computed using traditional tracking methods in central venous catheterization. Both video and tracking data were recorded from participants performing central venous catheterization on a venous access phantom. An object detection network was trained to recognize the ultrasound probe and syringe in the video data. Skill assessment metrics were computed from the video and tracking data, then compared using Spearman rank correlation. The video-based metrics correlated significantly with the tracked metrics, suggesting that object detection could be a feasible skill assessment method for central venous catheterization.
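The comparison step above is a rank correlation between two versions of the same metric. A minimal Spearman implementation (Pearson correlation of ranks, assuming no ties) with hypothetical path-length values for six trainees:

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks.
    # Double argsort yields 0-based ranks for tie-free data.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical path-length metric (mm) for six trainees, measured once
# with the tracker and once from video-based object detection.
tracked = np.array([120.0, 95.0, 200.0, 150.0, 80.0, 170.0])
video   = np.array([118.0, 99.0, 190.0, 160.0, 85.0, 155.0])

rho = spearman_rho(tracked, video)
```

A rho close to 1 means the video-based metric ranks trainees the same way the sensor-based metric does, which is what the study needs to establish feasibility.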
Special 50th Anniversary
In person: 23 February 2022 • 2:20 PM - 3:00 PM PST
Session Chairs: Cristian A. Linte, Rochester Institute of Technology (United States), Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States)
This session is dedicated to the memory of Dr. Richard (Rich) A. Robb, Professor of Biophysics and Computer Science at the Mayo Clinic.

The session will celebrate the 50th anniversary of SPIE Medical Imaging and the Image-Guided Procedures, Robotic Interventions, and Modeling conference.
Session 8: Imaging Physics in Image-Guided Interventions: Joint Session with Conferences 12031 and 12034
In person: 23 February 2022 • 3:30 PM - 5:30 PM PST
Session Chair: Rebecca Fahrig, Siemens Healthineers (Germany)
12034-30
Author(s): Sepideh Hatamikia, ACMIT GmbH (Austria), Medizinische Univ. Wien (Austria); Ander Biguri, Univ. College London (United Kingdom); Gernot Kronreif, ACMIT GmbH (Austria); Joachim Kettenbach, Institut für Diagnostische und Interventionelle Radiologie und Nuklearmedizin, Landesklinikum (Austria); Tom Russ, Ruprecht-Karls-Univ. Heidelberg (Germany); Wolfgang Birkfellner, Medizinische Univ. Wien (Austria)
In person: 23 February 2022 • 3:30 PM - 3:50 PM PST
Precise placement of needles plays a crucial role in percutaneous procedures, as it helps to achieve higher diagnostic accuracy and accurate tumor targeting. C-arm cone-beam computed tomography (CBCT) has the potential to precisely image the anatomy in the direct vicinity of the needle. However, exact needle positioning is very difficult due to strong metal artifacts around the needle. In this study, we evaluate the performance of prior image constrained compressed sensing (PICCS) CBCT reconstruction in the presence of metal objects. Our results confirm the high performance of PICCS in reducing needle artifacts using both circular and non-conventional trajectories under kinematic constraints.
12034-31
Author(s): Chih-Wei Chang, Yang Lei, Serdar Charyyev, Emory Univ. (United States); Shuai Leng, Mayo Clinic (United States); Tim Yoon, Jun Zhou, Xiaofeng Yang, Liyong Lin, Emory Univ. (United States)
In person: 23 February 2022 • 3:50 PM - 4:10 PM PST
12034-32
Author(s): Kevin Treb, Xu Ji, Mang Feng, Ran Zhang, Sarvesh Periyasamy, Paul F. Laeseke, Ke Li, Univ. of Wisconsin-Madison (United States)
In person: 23 February 2022 • 4:10 PM - 4:30 PM PST
C-arm x-ray systems with flat panel detectors (FPDs) capable of cone-beam CT (CBCT) are suboptimal for low-contrast imaging tasks due to wide-beam geometry and limitations of FPDs. Photon counting detectors (PCDs) offer solutions to these limitations. To introduce narrow-beam PCD-CT to the interventional suite, we previously developed a prototype C-arm imaging system with a strip PCD. In this work, we present a data acquisition method to enlarge the z-coverage of the C-arm PCD-CT which involves back-and-forth gantry sweeps with automatic table translation for step-and-shoot acquisitions. The step-and-shoot C-arm PCD-CT improved low-contrast visibility and visualization of fine structures compared to FPD-CBCT.
12031-74
Author(s): Joseph F. Whitehead, Carson A. Hoffman, Paul F. Laeseke, Michael A. Speidel, Martin G. Wagner, Wisconsin Institutes for Medical Research (United States), Univ. of Wisconsin School of Medicine and Public Health (United States)
In person: 23 February 2022 • 4:30 PM - 4:50 PM PST
A motion compensated quantitative digital subtraction angiography approach is presented which allows calculating blood flow velocities from 2D contrast-enhanced x-ray sequences with respiratory and cardiac motion. Phantom and animal studies were performed to evaluate the performance with and without motion compensation. The proposed technique could provide quantitative endpoints for interventional procedures, such as liver embolization, and could improve patient outcomes.
12031-75
Author(s): Tim Vöth, Ziehm Imaging GmbH (Germany), Deutsches Krebsforschungszentrum (Germany); Thomas König, Ziehm Imaging GmbH (Germany); Elias Eulig, Michael Knaup, Deutsches Krebsforschungszentrum (Germany); Klaus Hörndler, Ziehm Imaging GmbH (Germany); Marc Kachelriess, Deutsches Krebsforschungszentrum (Germany)
In person: 23 February 2022 • 4:50 PM - 5:10 PM PST
Today, 2D+T fluoroscopy is usually used for image guidance in interventional radiology. For challenging procedures, 4D (3D+T) image guidance would be advantageous. The difficulty in realizing X-ray-based 4D interventional guidance lies in the development of an extremely dose efficient reconstruction algorithm. To this end, we improve on a previously presented algorithm for the reconstruction of interventional tools. By incorporating temporal information into a 3D convolutional neural network, we reduce the number of X-ray projections that need to be acquired for the 3D reconstruction of guidewires from four to two, thereby halving dose and decreasing the demands put on imaging devices implementing the algorithm. In experiments with two moving guidewires in an anthropomorphic phantom, we observe little deviation of our 3D reconstructions from the ground truth.
12031-76
Author(s): Benjamin D. Killeen, Shreya Chakraborty, Greg Osgood, Mathias Unberath, Johns Hopkins Univ. (United States)
In person: 23 February 2022 • 5:10 PM - 5:30 PM PST
During internal fixation of fractures, it is often challenging to safely position a K-wire due to the projective nature of X-ray images, especially in complex anatomy like the superior pubic ramus. This can result in excess acquisitions and repeat attempts. A perception-based algorithm that interprets interventional radiographs to infer the likelihood of cortical breach might reduce both. Here, we present first steps toward developing such an algorithm. We use an in silico strategy for collection of X-rays with and without cortical breach and demonstrate its suitability for machine learning by training an algorithm to detect cortical breach for fully-inserted K-wires.
Wednesday Poster Session
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
All symposium attendees are invited to attend the evening Wednesday Poster Session to view the high-quality posters and engage the authors in discussion. Attendees are required to wear their conference registration badges to access the Poster Session. Authors may set up their posters starting Tuesday 22 February.*

*In order to be fully considered for a Poster Award, it is recommended to have your poster set up by 12:00pm on Tuesday 22 February 2022. Posters should remain on display until the end of the Poster Session on Wednesday.
12034-53
Author(s): Vivian van Asperen, Josefien van den Berg, Fleur Lycklama, Victoria Marting, Technische Univ. Delft (Netherlands); Ruisheng Su, Matthijs van der Sluijs, Theo van Walsum, Sandra Cornelissen, Erasmus MC (Netherlands); Wim van Zwam, Maastricht Univ. Medical Ctr. (Netherlands); Jeanette Hofmeijer, Univ. Twente (Netherlands); Aad van der Lugt, Erasmus MC (Netherlands)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
In this study, a fully automated artery/vein (A/V) classification system is proposed for periprocedural 2D DSA imaging during endovascular thrombectomy in stroke patients. The system uses unsupervised machine learning (UML) with different characteristics of the time-intensity curves (TICs) as input. Experiments compared different input features, numbers of clusters, and UML algorithms. The system achieved an average accuracy of 76% when using eight TIC-derived input features and clustering into two clusters. It has the potential to be used in a clinical setting after more elaborate preprocessing of DSA images, allowing standardized analysis and judgement of these images.
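TIC-derived features of the kind the abstract mentions can be sketched on synthetic curves; the gamma-variate model and the specific features below (time-to-peak, peak value, area under the curve) are illustrative choices, not the paper's exact feature set.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)  # illustrative DSA frame times (s)

def gamma_tic(t, t0, alpha, beta):
    # Gamma-variate model, a common parametric form for contrast TICs.
    tt = np.clip(t - t0, 0.0, None)
    return tt ** alpha * np.exp(-tt / beta)

artery = gamma_tic(t, 1.0, 2.0, 0.8)  # early, sharp enhancement
vein = gamma_tic(t, 3.5, 2.0, 1.5)    # later, broader enhancement

def tic_features(curve):
    # Example TIC-derived features fed to the clustering step.
    dt = t[1] - t[0]
    return {"ttp": t[np.argmax(curve)],   # time-to-peak
            "peak": curve.max(),          # peak intensity
            "auc": curve.sum() * dt}      # area under the curve

fa, fv = tic_features(artery), tic_features(vein)
```

Arteries enhancing earlier than veins is precisely what makes a small feature vector like this separable by unsupervised clustering.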
12034-54
Author(s): Anja Pantovic, Caroline Essert, ICube, Univ. de Strasbourg (France); Irène Ollivier, Les Hôpitaux Univs. de Strasbourg (France)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
For pharmacoresistant epilepsy patients, accurate localization of stereoelectroencephalography (SEEG) electrodes is crucial to designing a resection plan before surgically removing the epileptogenic zone. We propose to train 2D and 3D versions of the U-Net neural network architecture to automatically segment the electrode contacts from CT scans with good accuracy. Our models are evaluated on 18 patient image datasets and compared using different metrics. Both networks achieve segmentation in less than 6 seconds. They are robust to electrode bending and do not need any prior information to make fast and accurate predictions.
12034-55
Author(s): Torsten Hopp, Luca Springer, Carl Gross, Karlsruher Institut für Technologie (Germany); Saskia Grudzenski-Theis, Universitätsmedizin Mannheim (Germany); Franziska Mathis-Ullrich, Nicole Ruiter, Karlsruher Institut für Technologie (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Mouse-specific therapy planning is an essential step for blood-brain barrier opening using focused ultrasound (FUS) to treat neurodegenerative diseases. For our therapy planning approach based on acoustic simulations, we propose to automatically segment the mouse skull and brain from magnetic resonance imaging, which is typically used in combination with FUS for monitoring purposes. The proposed method is based on a multi-step approach involving traditional image processing algorithms. The method is evaluated with four in vivo datasets obtained with different parameters. The median MCC score across all slices of the four datasets was 0.85 for the brain segmentation, 0.69 for the overall skull segmentation, and 0.78 for the skull cap. Finally, to showcase the application, a successful acoustic simulation based on the segmentation is presented.
12034-56
Author(s): Aurélien de Turenne, Ctr. Hospitalier Univ. de Rennes (France), Univ. de Rennes 1 (France); François Eugène, Univ. de Rennes 1 (France), Ctr. Hospitalier Univ. de Rennes (France); Raphaël Blanc, La Fondation Ophtalmologique Adolphe de Rothschild (France); Jérôme Szewczyk, Institut des Systèmes Intelligents et Robotiques, Sorbonne Univ. (France); Pascal Haigron, Univ. de Rennes 1 (France), Ctr. Hospitalier Univ. de Rennes (France)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Mechanical thrombectomy has become a reference therapy for cerebral stroke. Nevertheless, catheterization involves a technical gesture that can be very difficult or impossible in complex anatomical configurations. Pre-operative images can help physicians during both the pre-operative phase (visualization of segmented navigation structures) and the per-operative phase (segmented navigation structures projected onto 2D per-operative X-rays) of the intervention. The objective of this work is to propose a method for segmenting the endovascular path for mechanical thrombectomy from pre-operative images. A simple U-Net is used to segment the aortic arch, and a cascaded U-Net is used to segment the common and internal carotid arteries.
12034-57
Author(s): Daniel Mensing, mediri GmbH (Germany), Fraunhofer-Institut für Digitale Medizin MEVIS (Germany); Johannes Gregori, mediri GmbH (Germany); Jürgen Jenne, mediri GmbH (Germany), Fraunhofer-Institut für Digitale Medizin MEVIS (Germany), German Cancer Research Center (Germany); Michael Stritt, mediri GmbH (Germany); Björn Gerold, Theraclion (France); Matthias Günther, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany), mediri GmbH (Germany), Univ. Bremen (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Varicose vein treatment with high-intensity focused ultrasound (HIFU) relies on diagnostic ultrasound imaging for monitoring of the procedure. We introduce a neural network that leverages the longitudinal dimension of ultrasound data, combined with different methods of data preparation, to provide robust segmentation of veins in real time. We further trained the model to track the target tissue outside the image borders and to predict the segmentation for future frames. In conclusion, we show that we improve on the current state of the art by adding robustness and segmentation of future frames while maintaining accuracy.
12034-58
Author(s): Asta Olafsdottir, Univ. College London (United Kingdom); David Butt, Addie Majed, Mark Falworth, The Royal National Orthopaedic Hospital NHS Trust (United Kingdom); Matthew J. Clarkson, Wellcome/EPSRC Ctr. for Interventional and Surgical Sciences (United Kingdom); Stephen A. Thompson, Wellcome/EPSRC Ctr. for Interventional and Surgical Sciences (United Kingdom), Univ. College Hospital (United Kingdom)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
In this paper we introduce SciKit-SurgeryGlenoid, an open-source toolkit for the measurement of glenoid version. SciKit-SurgeryGlenoid contains implementations of the four most frequently used glenoid version measurement algorithms, enabling easy and unbiased comparison of the different techniques. We present the results of using the software on 10 sets of pre-operative CT scans taken from patients who subsequently underwent shoulder replacement surgery. We further compare these results with those obtained from a commercial implant planning software.
12034-59
Author(s): Hannah Büchner, Maximilian Malik, Reutlingen Univ. (Germany); Florian Laux, Eberhard Karls Univ. Tübingen (Germany), BG Klinik Tübingen (Germany); Heiko Baumgartner, BG Klinik Tübingen (Germany); Fabian Springer, Eberhard Karls Univ. Tübingen (Germany); Oliver Burgert, Reutlingen Univ. (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
The aim of this project is the automatic classification of total hip endoprosthesis components in 2D X-ray images. A ground truth based on 76 X-ray images was created. We used an image processing pipeline consisting of a segmentation step performed by a convolutional neural network and a classification step performed by a support vector machine. The best segmentation results were achieved using a U-Net architecture. For classification, SVM architectures performed better than neural networks. The overall image processing pipeline performed well, but the ground truth needs to be extended.
12034-60
Author(s): Zixin Yang, Richard Simon, Cristian A. Linte, Rochester Institute of Technology (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
12034-61
Author(s): Christiaan Viviers, Technische Univ. Eindhoven (Netherlands), Philips (Netherlands); Joël de Bruijn, Technische Univ. Eindhoven (Netherlands); Lena Filatova, Philips (Netherlands); Peter H. N. de With, Fons van der Sommen, Technische Univ. Eindhoven (Netherlands)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Recent 6D object pose estimation methods can be trained on small datasets, making them attractive for the X-ray domain, where medical data is scarce. We refine the SingleShotPose model to estimate the pose of an object in X-ray images. The model regresses 2D control points and calculates the pose through 2D/3D correspondences using PnP, adjusted for X-ray acquisition geometry, allowing a single trained model to be used across all cone-beam-based X-ray geometries. With a high 5-cm/5-degree accuracy, it is comparable with state-of-the-art alternatives while requiring significantly fewer real training examples and being applicable in real-time applications.
12034-62
Author(s): Ted Shi, Maysam Shahedi, Kayla Caughlin, James D. Dormer, Ling Ma, The Univ. of Texas at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Heart segmentation on computed tomography (CT) is of great importance due to the prevalence of cardiovascular disease. Computer-assisted approaches continue to offer an accurate, efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve the accuracy needed to compete with expert segmentation. In this approach, we generate point-distance maps from a fixed number of points selected on the cardiac surface. A 3D fully convolutional neural network was then trained on these maps to produce a segmentation. Testing our method with different numbers of points, we achieved average Dice scores from 0.742 to 0.917 across all four chambers, and average Dice scores of 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively. This point-guided, modality-independent segmentation approach demonstrates promising performance for heart chamber delineation.
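The Dice score used throughout these segmentation abstracts has a compact definition on binary masks; a minimal sketch with toy 2D masks (real evaluations operate on 3D volumes):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2 * |A ∩ B| / (|A| + |B|)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy example: two overlapping 4x4 squares on an 8x8 grid.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt   = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
```

Here the masks share a 3x3 overlap, giving Dice = 18/32 = 0.5625; a perfect match gives 1.0, which is the scale against which the per-chamber scores above should be read.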
12034-63
Author(s): Hannah G. Mason, Jack H. Noble, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Cochlear implants (CIs) are neural prosthetics used to treat severe-to-profound hearing loss. After implantation, the process of fine-tuning the implant is expedited if the audiologist has tools to approximate which auditory nerve fiber regions the implant is stimulating. Auditory nerves travel from the cochlea to the brain via the internal auditory canal (IAC). We present a method for segmenting the IAC from a CT image using weakly supervised 3D U-Nets with a region-based level set loss term to assist with localizing the nerve fibers. Preliminary results indicate that this approach successfully improves IAC localization.
12034-64
Author(s): Chih-Wei Chang, Yang Lei, Tonghe Wang, Jun Zhou, Liyong Lin, Jeffrey D. Bradley, Tian Liu, Xiaofeng Yang, Emory Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
12034-65
Author(s): Yubing Tong, Zihan Huang, Jayaram K. Udupa, Leihui Tong, Drew A. Torigian, Chamith S. Rajapakse, Univ. of Pennsylvania (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
We propose an automatic algorithm for bone segmentation from 3T hip MRI via a deep CNN, comprising two stages: 1) automatic localization of the acetabulum and femur using the femoral head as a reference, an approach inspired by observation of thousands of hip MR images; and 2) based on the femoral-head localization, a 2D bounding box (BBox) is set up for each object, followed by a U-Net that segments the target object within the BBox. Ninety 3T hip MRI images were utilized in this study, and the segmentation results are comparable with the ground-truth masks from manual segmentation.
12034-66
Author(s): Ahmad Qasem, Zhiguo Zhou, Univ. of Central Missouri (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Tumor segmentation is a critical step in the diagnosis and treatment of head and neck cancers, as it informs the development of the treatment plan and ultimately the treatment outcome and any complications. In this preliminary study, we aim to develop a fully automatic hybrid neural network (HNN) for the localization and segmentation of tumors in PET/CT images through a combination of Faster R-CNN and U-Net. The proposed model was evaluated by measuring accuracy, sensitivity, and specificity, achieving average values of 0.959, 0.930, and 0.962, respectively.
12034-67
Author(s): Avani Muchhala, Prathyush Chirra, Katelin Amann, Case Western Reserve Univ. (United States); Jacob Kurowski, Cleveland Clinic (United States); Satish E. Viswanath, Case Western Reserve Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
The goal of this study was to identify radiomic features on baseline MRE associated with disease activity and treatment outcomes in pediatric Crohn's disease (pCD), as well as to investigate potential associations of radiomics with serum-based pCD subtypes. A Random Forest classifier using the most relevant radiomic features achieved an area under the ROC curve (AUC) of 0.83 in distinguishing diseased patients from healthy subjects and an AUC of 0.85 in distinguishing non-responders from responders, in leave-one-out cross-validation. Top-ranked Gabor and Laws features were correlated with serum markers for anemia, inflammation risk, vitamin deficiency, and immune activity.
12034-68
Author(s): Djalal Fakim, Western Univ. (Canada); Hareem Nisar, Robarts Research Institute (Canada), Western Univ. (Canada); John T. Moore, Robarts Research Institute (Canada); Terry M. Peters, Elvis C. S. Chen, Robarts Research Institute (Canada), Western Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Recent recognition of the poor prognosis of significant tricuspid regurgitation has resulted in increased indication for tricuspid valve interventions. A key procedure for patient selection and intraoperative assessment of interventions involves manually determining the location of the vena contracta (VC), which can be time consuming depending on its location. Decreasing the time required for VC visualization would potentially decrease intraprocedural time, reducing anesthesia time and hospital costs. There is currently no commercially available automatic VC detection system. We present a method to automatically localize the VC using 3D intracardiac echocardiography (ICE) on a simplified phantom as a proof of concept.
12034-69
Author(s): Hareem Nisar, Patrick K. Carnahan, Robarts Research Institute (Canada), Western Univ. (Canada); Djalal Fakim, Humayon Akhuanzada, Western Univ. (Canada); David Hocking, London Health Sciences Ctr. (Canada); Terry M. Peters, Robarts Research Institute (Canada); Elvis C. S. Chen, Robarts Research Institute (Canada), Western Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
A suitable alternative to fluoroscopy-guided vascular navigation during transcatheter interventions is tracked intracardiac echocardiography (ICE). ICE can navigate and scan the vessels and reconstruct a vascular roadmap to be followed by surgical tools and catheters. Currently, there is an unmet need for an accurate real-time vessel segmentation algorithm that works with radial ICE images. In this study, we address this challenge using a deep learning-based approach. The results show that a U-Net architecture can perform vessel segmentation with 90% accuracy.
12034-70
Author(s): Marine Y. Shao, David Huson, Univ. of the West of England (United Kingdom); James Clark, Royal Cornwall Hospitals NHS Trust (United Kingdom)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Laparoscopic treatment of gallstones is a complex procedure requiring multiple skills, including laparoscopic ultrasound imaging. The procedure has many benefits but has yet to be widely adopted because of its challenging steps. Surgical simulation can provide a useful way to train for the procedure. In this study, a surgical simulator made of silicone is tested quantitatively and qualitatively. The results show that the density of the silicone is within the same range as the density of tissue, but the speed of sound in the silicone is slower, resulting in deformed images. A solution is to apply image processing to create more realistic images.
12034-71
Author(s): Yichuan Tang, Isaac F. Abouaf, Aditya Malik, Ryosuke Tsumura, Jakub T. Kaminski, Worcester Polytechnic Institute (United States); Igor Sorokin, Univ. of Massachusetts Medical School (United States); Haichong K. Zhang, Worcester Polytechnic Institute (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Percutaneous nephrolithotomy (PCNL), a kidney stone removal procedure, requires needle insertion through the patient's skin toward stones in the kidney. Existing ultrasound image-guided needle insertion for PCNL faces the challenge of keeping the needle tip visible during insertion. The goal of this paper is to develop a needle insertion device that provides an intuitive way to monitor the needle insertion path by reflecting the ultrasound waves in line with the needle path.
12034-72
Author(s): Andre Mühlenbrock, Gabriel Zachmann, Univ. Bremen (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Various novel autonomous lighting systems for illuminating the surgical field consist of many swiveling light modules fixed to the ceiling instead of two or three movable surgical lamps. For such a new type of lighting system for operating rooms, the initial placement of the light modules is of great importance, since the light modules cannot be moved during the surgery. In this paper, we develop and evaluate a method for optimizing the arrangement of light modules using point cloud recordings of real surgeries. In our optimization results, we achieve up to 41% higher minimal illumination compared to naive arrangements.
12034-73
Author(s): Hamed Hooshangnejad, Kai Ding, Johns Hopkins Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
The standard RT clinical workflow imposes substantial burdens on patients. Multiple image acquisitions for diagnosis and RT planning increase the travel, cost, and wait time before actual RT treatment, which is critical for cancer patients. Diagnostic CT (dCT) has been shown to be suitable for RT planning; however, differences in planning CT (pCT) acquisition setup (e.g., table curvature) and motion management procedures (e.g., deep inspiration/active breath control) make its direct use infeasible. In this study, we present the feasibility of a fully automatic image adaptation method: a novel 3D-CNN-based method is designed to adapt the dCT to the pCT, omitting the need to acquire multiple scans before treatment delivery and reducing the cost and length of the RT treatment pathway.
12034-74
Author(s): Lucas March, Jessica R. Rodgers, Amoon Jamzad, Alice Santilli, Doug McKay, Rebecca Hisey, Gabor Fichtinger, Parvin Mousavi, John F. Rudan, Martin Kaufmann, Kevin Yi Mi Ren, Queen's Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Minimizing positive margins around removed basal cell carcinoma lesions is essential for successful treatment; however, detecting remaining cancer can be challenging. The iKnife system can discriminate between healthy and cancerous tissue but lacks spatial information on cautery position. We propose a deep learning approach to recognize surgical and iKnife acquisition phases in intraoperative videos and subsequently track the cautery location, with the future intention of synchronizing this information with iKnife data. Results show promise as a step towards a clinically useful tool to provide guidance on the locations of suspicious margins with minimal disruption to surgical workflow, potentially reducing recurrence.
12034-75
Author(s): Catherine Austin, Rebecca Hisey, Olivia O'Driscoll, Daenis Camire, Jason Erb, Daniel Howes, Tamas Ungi, Gabor Fichtinger, Queen's Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
12034-76
Author(s): Neha Rajan, Georgia Institute of Technology (United States); Mark Korinek, John Lieske, David R. Holmes, Mayo Clinic (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Kidney endoscopy videos show Randall's plaques and plugs, which are precursors to kidney stones. This study used an image quality assessment model trained with active learning to identify endoscopy frames suitable to serve as key frames of a video. The model scores frames according to how useful they are in visualizing the kidney, plaques, and plugs. The model, trained for 300 epochs, achieved an ROC AUC of 0.9625 on sampled test data. Fluctuating loss was observed; the fluctuations decreased after 250 epochs, indicating improved model performance. Frame scores became more accurate with continued training.
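The reported AUC of 0.9625 summarizes how well the model's frame scores rank useful frames above poor ones. As a minimal illustration (not the study's own evaluation code), AUC can be computed directly from its pairwise definition — the probability that a randomly chosen positive frame outranks a randomly chosen negative one:

```python
def roc_auc(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical frame scores: three useful frames (1) and one poor frame (0).
scores = [0.9, 0.8, 0.2, 0.3]
labels = [1, 1, 1, 0]
auc = roc_auc(scores, labels)  # 2 of 3 positives outrank the negative
```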
12034-77
Author(s): Jennifer Muller, Dicom Director (United States), Villanova Univ. (United States); Jennifer Lapier, Dicom Director (United States), Geisel School of Medicine (United States); Andrew Whitaker, Dicom Director (United States), Washington Univ. in St. Louis (United States); David Pearlstone, Dicom Director (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
This study developed the first holographic interface for defining patient-specific white matter pathways associated with DBS symptom relief. Our aim was to use advanced patient-specific probabilistic tractography and semi-automatic registration of target structures to overcome limitations of tractography. IntravisionXR was used to render gray matter, target regions, and white matter tractography into 3D models. Holographic reconstructions of individualized target regions and motor/sensory white matter pathways were successfully created with accurate spatial and anatomical relationships. We propose this general framework can be repeated to develop the anatomical understanding necessary for the evolution of advanced targeting methods in functional neurosurgery.
12034-78
Author(s): Raquel Leon, Univ. de Las Palmas de Gran Canaria (Spain); Sofia H. Gelado, Univ. of Glasgow (United Kingdom); Samuel Ortega, Univ. de Las Palmas de Gran Canaria (Spain), Norwegian Institute of Fisheries & Aquaculture (Norway); Laura Quintana, Univ. de Las Palmas de Gran Canaria (Spain); Adam Szolna, Juan F. Piñeiro, Hospital Univ. de Gran Canaria Doctor Negrin (Spain); Francisco Balea-Fernandez, Univ. de Las Palmas de Gran Canaria (Spain); Jesus Morera, Hospital Univ. de Gran Canaria Doctor Negrin (Spain); Bernardino Clavo, Hospital Univ. de Gran Canaria Doctor Negrin (Spain), La Fundación Canaria Instituto de Investigación Sanitaria de Canarias (Spain); Gustavo M. Callicó, Univ. de Las Palmas de Gran Canaria (Spain)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Accurate identification of tumor boundaries during brain cancer surgery determines the quality of life of the patient. Several intraoperative guidance tools are currently employed during tumor resection, but each has limitations. Hyperspectral imaging (HSI) is emerging as a non-invasive, non-ionizing technique to assist the neurosurgeon during surgical procedures. In this paper, an analysis of in-vivo and ex-vivo human brain cancer samples using HSI was performed to evaluate the correlation between the two types of samples. Spectral ratios of oxygenated and deoxygenated hemoglobin were employed to discriminate between different tissue types. The comparison indicated that ex-vivo samples generate higher hemoglobin ratios than in-vivo samples. Moreover, vascular-enhanced maps were generated using the spectral ratio, targeting real-time intraoperative surgical assistance.
12034-79
Author(s): Annika Hänsch, Jürgen W. Jenne, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany); Neeraj Upadhyay, Carsten Schmeel, Veronika Purrer, Ullrich Wüllner, Universitätsklinikum Bonn (Germany); Jan Klein, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Transcranial therapy with focused ultrasound under the control of magnetic resonance imaging (tcMRgFUS) enables targeted thermal ablation of brain tissue, for example for tremor treatment. Fiber tracking can be used to determine therapy-relevant pathways in the brain. We propose to use an algorithm that relies on the definition of seed and include regions of interest, and to automatically segment these regions using deep learning. The U-Nets are trained on T1 images and color-coded direction maps, with reference data generated by atlas registration, and can segment the required regions also on independent test data.
12034-80
Author(s): Juliane Müller, Martin Oelschlägel, Christian Schnabel, Gerald Steiner, Edmund Koch, Stephan Sobottka, Gabriele Schackert, Universitätsklinikum Carl Gustav Carus Dresden, TU Dresden (Germany); Matthias Kirsch, Universitätsklinikum Carl Gustav Carus Dresden, TU Dresden (Germany), Asklepios Klinik Schildautal Seesen (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Intraoperative visualization of the resection borders of brain tumors remains a significant challenge for neurosurgeons. Recent advances in image-guided therapy could provide real-time information about the location, type, and extent of tumor tissue to improve resection accuracy. In particular, intraoperative thermal imaging, as a non-invasive imaging method, can uncover the thermal heterogeneity of various tissue types. This work investigated the potential of thermal imaging for the intraoperative demarcation of the tumor borders of cortical gliomas. Applied as a passive imaging method, it distinguished healthy tissue from pathological changes; the presented active approach achieves an even more precise delineation of tissue alterations.
12034-81
Author(s): William R. Warner, Xiaoyao Fan, Ryan B. Duke, Tahsin M. Khan, Dartmouth College (United States); Songbai Ji, Worcester Polytechnic Institute (United States); Steven P. Baltic, Dartmouth-Hitchcock Medical Ctr. (United States); Sohail K. Mirza, Dartmouth College (United States), PEERClinic (United States); Keith D. Paulsen, Dartmouth College (United States), Dartmouth-Hitchcock Medical Ctr. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Successful navigation in spine surgery relies on an accurate representation of the spine's intraoperative pose. However, the spine's position can move between preoperative imaging and instrumentation. In this study, we measure this motion as a preoperative-to-intraoperative change in lordosis. We investigate the effect this change has on navigation accuracy and the degree to which a hand-held intraoperative stereovision system (iSV) for patient registration can account for this motion. Using six live pig specimens, we find that the preoperative-to-intraoperative change in spinal pose is highly correlated with navigation accuracy. iSV can account for pose changes, as its accuracy is uncorrelated with pose change.
12034-82
Author(s): Han Liu, Vanderbilt Univ. (United States); Kathryn L. Holloway, Virginia Commonwealth Univ. (United States); Dario J. Englot, Vanderbilt Univ. Medical Ctr. (United States); Benoit M. Dawant, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Epilepsy is the fourth most common neurological disorder and affects people of all ages worldwide. Deep Brain Stimulation (DBS) has emerged as an alternative treatment when anti-epileptic drugs or resective surgery cannot lead to satisfactory outcomes. To facilitate the planning of the procedure, it is desirable to develop an algorithm to automatically localize the DBS stimulation target, i.e., Anterior Nucleus of Thalamus (ANT). In this work, we perform an extensive comparative study by benchmarking various localization methods for ANT-DBS. Our results show that the deep-learning-based localization methods that are trained with pseudo labels can achieve a comparable performance to the inter-rater and intra-rater variability and are orders of magnitude faster than traditional methods.
12034-83
Author(s): Kristen L. Chen, Thayer School of Engineering at Dartmouth (United States); Chen Li, Thayer School of Engineering at Dartmouth (United States); Tahsin M. Khan, Xiaoyao Fan, Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
12034-84
Author(s): Tyler Salvador, Jacob Slagle, Children's National Hospital (United States); Greg Chaprnka, Michael Agronin, Direct Dimensions, Inc. (United States); Kevin Cleary, Anuradha Dayal, Children's National Hospital (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
We have developed a mobile photogrammetry cart that can be used to obtain 3D models of infant leg anatomy. The purpose is to assess infants with clubfoot by providing quantitative data. Traditional 3D scanning methods are not feasible because infants will not stay still for any length of time. The system consists of 40 Raspberry Pis programmed to take synchronized images, which we process to create 3D models of the infant's lower body. We have successfully tested the system with one healthy, two-month-old volunteer. The next step is to collect clubfoot data in an IRB-approved study.
12034-85
Author(s): Joeana Cambranis Romero, Terry M. Peters, Elvis C. S. Chen, Western Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
We have developed a prototype of a surgical navigation system (SNS) for guiding an ablation needle percutaneously. A magnetic tracking system and a mini-stereotactic aiming device were used for the development of the SNS. The guidance system displays, in real time, a virtual needle path and the estimated ablation zone before needle insertion. Our hypothesis is that our mini-stereotactic guidance system will improve the targeting accuracy for the focal treatment of Hepatocellular Carcinoma, thanks to the mechanical stabilization provided by the aiming device in conjunction with the magnetic tracking.
12034-86
Author(s): Patric Bettati, James D. Dormer, Maysam Shahedi, The Univ. of Texas at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Ultrasound-guided biopsy is widely used for disease detection and diagnosis. We plan to register preoperative imaging, such as positron emission tomography/computed tomography (PET/CT) and/or magnetic resonance imaging (MRI), with real-time intraoperative ultrasound imaging for improved localization of suspicious lesions that may not be seen on ultrasound but are visible on other imaging modalities. Once the image registration is completed, we will combine the images from two or more imaging modalities and use the Microsoft HoloLens 2 augmented reality headset to display 3D segmented lesions and organs from previously acquired images alongside real-time ultrasound images. In this work, we are developing a multi-modal, real-time 3D augmented reality system for potential use in ultrasound-guided prostate biopsy.
12034-87
Author(s): Brianna Jacobson, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
12034-88
Author(s): Takaya Oguchi, Chiba Univ. (Japan)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Non-tumorous lesions, a type of early breast cancer, are difficult to locate accurately during surgery. To address this problem, we propose a system that overlays the resection area, created from preoperative MRI images, on the camera image of a tablet device. The system uses ChArUco markers, which are more robust to motion-blurred images than conventional AR markers. Experiments evaluating the recognition rate on blurred images suggest that ChArUco markers are recognized at a higher rate than conventional AR markers.
12034-89
Author(s): Maryam E. Rettmann, Stephan Hohmann, Hiroki Konishi, Laura Newman, Amanda Deisher, Jon Kruse, Kenneth Merrell, Robert Foote, Michael Herman, Douglas Packer, Mayo Clinic (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
External beam ablation therapy can potentially treat cardiac arrhythmias non-invasively by targeting arrhythmogenic myocardial tissue. A challenge of treating cardiac tissue with beam ablation therapy is cardiac motion, which is typically compensated for by expanding the target volume, potentially causing collateral damage to surrounding healthy tissue. This collateral damage could be minimized by gating the beam delivery to a portion of the cardiac cycle. Image-guided interventions are often validated using a swine model. In prior work, we evaluated cardiac motion using anatomic landmarks in multi-phase cardiac computed tomography volumes of swine hearts across the left atria and ventricles. In the current work, we extend this analysis by quantifying cardiac motion using implanted fiducial clips across all four chambers of the heart, which is important for determining when to gate the beam during clinical treatment.
12034-90
Author(s): Daniel R. Allen, Terry M. Peters, Elvis C. S. Chen, Western Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Advancements in head-mounted display (HMD) technology have led to a rapid rise in the development of augmented reality (AR) applications in image-guided surgery. With the emergence of HMDs with built-in cameras, such as the Microsoft HoloLens and HTC Vive, vision-based tracking techniques have been gaining traction in various surgical navigation systems. However, before such systems can be adopted in clinical practice, the accuracy of the underlying vision-based tracking mechanisms must be evaluated. We therefore developed a co-calibration framework for registering a vision-based tracking system and an external ground-truth tracking system into a common coordinate frame to evaluate the absolute tracking error. We demonstrate our framework using optical tracking as ground truth and evaluate the tracking error of the Vuforia vision-based tracking Software Development Kit (SDK).
12034-91
Author(s): Eric Knull, Claire K. Park, Robarts Research Institute (Canada), Western Univ. (Canada); David Tessier, Jeffrey Bax, Robarts Research Institute (Canada); Aaron Fenster, Robarts Research Institute (Canada), Western Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Magnetic resonance imaging (MRI)-guided prostate focal laser ablation (FLA) shows potential as an alternative treatment method for localized prostate cancer. We previously developed an MRI-compatible mechatronic guidance system capable of needle positioning and delivery within an in-bore MRI environment. This paper presents an improved multi-fiducial structure for the robust registration of the mechatronic system and MRI coordinate space, and comparison and validation of mechatronics-assisted MRI-guided needle delivery to virtual targets (simulating localized focal zones) in tissue-mimicking prostate phantoms. The improved registration method significantly improves MRI-guided needle delivery in the prostate model enabling a small ablation region for MR-guided FLA therapy.
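The multi-fiducial registration described above is, at its core, a point-based rigid alignment between two coordinate spaces. The paper's actual implementation is not given here, but a standard least-squares solution is the Kabsch/Arun SVD method; a minimal sketch:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0,0,0],[1,0,0],[0,1,0],[0,0,1],[1,1,1]], float)
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
fre = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()  # fiducial registration error
```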
12034-92
Author(s): Sarah Said, Karlsruher Institut für Technologie (Germany); Paola Clauser, Medizinische Univ. Wien (Austria); Nicole Ruiter, Karlsruher Institut für Technologie (Germany); Pascal Baltzer, Medizinische Univ. Wien (Austria); Torsten Hopp, Karlsruher Institut für Technologie (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
In earlier work, we proposed a tool for matching MRI and spot mammograms that combines a biomechanical-model-based registration, to match MRI and full X-ray mammograms, with an image-based registration, to align full X-ray mammograms and spot mammograms. In this paper, we focus on developing and evaluating novel methods for the image-based registration between full and spot mammograms. Results for seven patients from the Medical University of Vienna are presented. The median target registration error (TRE) of the image-based registration is 12.7 mm.
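Target registration error, as reported above, is simply the residual Euclidean distance at target landmarks after registration. A minimal sketch with hypothetical 2-D landmark coordinates:

```python
import numpy as np

def target_registration_error(moved, fixed):
    """Per-landmark Euclidean distance after registration (N x D arrays)."""
    moved = np.asarray(moved, float)
    fixed = np.asarray(fixed, float)
    return np.linalg.norm(moved - fixed, axis=1)

# Two hypothetical landmarks, registered positions vs. ground truth (mm).
moved = np.array([[0.0, 0.0], [3.0, 4.0]])
fixed = np.array([[0.0, 1.0], [0.0, 0.0]])
tre = target_registration_error(moved, fixed)   # per-landmark errors: 1 mm, 5 mm
median_tre = float(np.median(tre))
```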
12034-93
Author(s): Winona L. Richey, Jon S. Heiselman, Morgan J. Ringel, Vanderbilt Univ. (United States); Ingrid M. Meszoely, Vanderbilt Univ. Medical Ctr. (United States); Michael I. Miga, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Show Abstract + Hide Abstract
Surgical resection is standard of care for the majority of breast cancer patients. This work presents a modeling framework to correct for breast shape changes between imaging and surgery. This work uses the linearized iterative boundary reconstruction approach to model breast deformations due to abduction of the arm. Model performance is compared to rigid registration techniques, and then enhanced with inclusion of subsurface points near the tumor to mimic the inclusion of biopsy clips or localization markers. The method can more accurately predict deformations of breast structures: the tumor, 10 surface points, and 14 subsurface targets.
12034-94
Author(s): Rohan C. Vijayan, Niral Sheth, Lina Mekki, Alexander Lu, Ali Uneri, Alejandro Sisniega, Johns Hopkins Univ. (United States); Jessica Maggaragia, Gerhard Kleinszig, Sebastian Vogt, Siemens Healthineers (Germany); Jeffrey Thiboutot, Hans Lee, Lonny Yarmus, Johns Hopkins Medicine (United States); Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
A method is proposed to resolve deformable respiratory motion in fluoroscopically guided pulmonary interventions using a locally rigid / globally deformable 3D-2D registration for motion-compensated overlay of planning data. The algorithm performs a rigid 3D-2D registration of CBCT to a small local region of interest in a fluoroscopic image and uses a novel preprocessing workflow to drive the registration towards soft-tissue structures (e.g., lung airways). Using parameters acquired in phantom studies, the algorithm successfully compensated for airway motion in porcine studies, yielding mean TRE ranging from 1.2-4.9 mm compared to 2.0-13.1 mm using conventional 3D-2D registration.
12034-95
Author(s): Yang Lei, Zhen Tian, Tonghe Wang, Marian Axente, Justin Roper, Kristin Higgins, Jeffrey D. Bradley, Tian Liu, Xiaofeng Yang, Emory Univ. School of Medicine (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
12034-96
Author(s): Lukasz Fura, Norbert Zolek, Tamara Kujawska, Institute of Fundamental Technological Research (Poland)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
The aim of this research was to develop a numerical tool to predict the location and extent of the necrotic lesion formed locally inside ex-vivo tissue as a result of exposure to a pulsed HIFU beam. The proposed model is based on numerical simulation of non-linear acoustic wave propagation and heat transfer in heterogeneous media using the k-Wave toolbox. The simulation results were compared with experimental data from previous studies. The 90% agreement between the results is sufficient to proceed with further studies aimed at numerical optimization of the temporal and spatial parameters of tumor thermal ablation.
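The heat-transfer half of such a model is commonly a Pennes-type bioheat equation. Purely as an illustration of that equation (a 1-D explicit finite-difference step with illustrative, non-calibrated parameters, not the paper's k-Wave-based model):

```python
import numpy as np

def bioheat_step(T, dt, dx, k=0.5, rho=1050.0, c=3600.0,
                 w_b=0.5, c_b=3600.0, T_a=37.0, Q=None):
    """One explicit Euler step of the 1-D Pennes bioheat equation:
    rho*c*dT/dt = k*d2T/dx2 - w_b*c_b*(T - T_a) + Q.  Parameters illustrative."""
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    src = np.zeros_like(T) if Q is None else Q
    return T + dt * (k * lap - w_b * c_b * (T - T_a) + src) / (rho * c)

x = np.linspace(0.0, 0.02, 41)                      # 2 cm of tissue
T = np.full_like(x, 37.0)                           # start at body temperature
Q = np.where(np.abs(x - 0.01) < 0.002, 5.0e6, 0.0)  # focal heat deposition, W/m^3
dt, dx = 0.01, x[1] - x[0]
for _ in range(500):                                # 5 s of simulated sonication
    T = bioheat_step(T, dt, dx, Q=Q)
peak = float(T.max())                               # focal temperature rise
```

The explicit scheme is stable here because the diffusion number k·dt/(ρ·c·dx²) is far below 0.5; a real model would add the acoustic pressure field as the source term.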
12034-97
Author(s): Tahsin M. Khan, Chen Li, Xiaoyao Fan, Thayer School of Engineering at Dartmouth (United States); Joshua P. Aronson, Dartmouth-Hitchcock Medical Ctr. (United States); Kristen L. Chen, Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
The placement of electrodes during deep brain stimulation (DBS) surgery can affect the efficacy of treating neurodegenerative diseases. The accuracy of surgical navigation based on preoperative images can be degraded due to intraoperative brain shift. In this study, we extended our lab’s brain deformation model to update preoperative MR scans to address brain shift more accurately during DBS surgeries. We used ventricle sparse deformation data to estimate the whole brain displacement and deform preoperative MR (preMR) to generate updated MR (uMR) scans. The uMR was shown to be qualitatively similar to its ground truth postoperative MR (postMR). The quantitative accuracy of the uMR will be assessed by determining target registration errors (TREs) of landmarks in the sub-ventricular area. This study may be able to demonstrate improved accuracy of model-based image updating in DBS procedures and ultimately lead to its use for intraoperative brain shift compensation during DBS procedures.
12034-98
Author(s): Jeff Young, Maysam Shahedi, James D. Dormer, The Univ. of Texas at Dallas (United States); Brett A. Johnson, Jeffrey Gahan, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
In this paper, polyvinyl chloride (PVC)-plasticizer was explored as an economical material to reliably create long lasting, hyper-realistic kidney phantoms with contrast under both ultrasound and X-ray imaging. The radiodensity of varying formulations of PVC-based gels were characterized to allow adjustable image intensity and contrast. A workflow was established which can be easily adapted to match radiodensity values of other organs and tissues in the body. Lesions with affordable contrast agents were integrated into the phantom to mimic the presence of tumors. The kidney phantoms were imaged under ultrasound and computed tomography scanners to verify the contrast enhancement. Finally, the durability and shelf life of our PVC-based phantoms were observed to be vastly superior to that of agar-based phantoms. The work presented here allows extended periods of usage and storage for each kidney phantom while preserving anatomical detail and contrast for a low cost of materials.
12034-99
Author(s): Noa Chazot, Joeana Cambranis Romero, Terry M. Peters, Elvis C. S. Chen, Western Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
A multi-modality (magnetic resonance imaging (MRI) and ultrasound (US)) polyvinyl alcohol cryogel (PVA-c) liver phantom was developed. PVA-c is a tissue-mimicking material that also exhibits temperature memory. The phantom has simulated tumours and closed-loop vasculature with simulated blood flow. To enable computed tomography (CT) compatibility, tungsten powder was incorporated into the tumour mixture as a contrast agent. Differentiation between the liver tissue, vasculature, and simulated tumours was clearly visualized in US and CT imaging. We hypothesize that the closed-loop vasculature will improve modelling of the ablation zone by simulating the "heat sink effect".
12034-100
Author(s): Andrew Wilson, Harald Scheirich, Beatriz Paniagua, Kitware, Inc. (United States); Tung Nyugen, Raymond White, The Univ. of North Carolina at Chapel Hill (United States); Venkata Arikatla, Kitware, Inc. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Interactive multimodal surgical simulators are powerful tools allowing efficient and objective assessment of surgical skills. Bilateral sagittal split osteotomy requires precise cutting of the mandible with a motorized saw, and surgeons rely on visual and haptic cues that are hard to train for through existing curricula. In this paper, we present a new algorithm for low-cost, precise sawing of mandible bone, capable of providing realistic force feedback at haptic rates. Our method treats the bone surface as an evolving level set, while saw movement is governed by a six-degree-of-freedom rigid body solver with a mass-spring-damper that supplies force feedback.
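The mass-spring-damper coupling mentioned above is the standard virtual-coupling idea from haptics: the force fed back to the user is proportional to the displacement and velocity of the device relative to the simulated proxy. A minimal sketch with illustrative (not paper-specified) gains:

```python
def coupling_force(x, v, k=500.0, c=5.0):
    """Spring-damper restoring force (N): F = -k*x - c*v."""
    return -k * x - c * v

def step(x, v, dt, m=0.1, k=500.0, c=5.0):
    """One semi-implicit Euler step of the proxy dynamics."""
    a = coupling_force(x, v, k, c) / m
    v = v + dt * a
    x = x + dt * v
    return x, v

# Release the proxy 1 cm from rest; it should settle back toward zero.
x, v = 0.01, 0.0
for _ in range(2000):          # 2 s at a 1 kHz haptic rate
    x, v = step(x, v, 0.001)
```

With these gains the system is underdamped but decays quickly, so the displacement is essentially zero after two simulated seconds.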
12034-101
Author(s): Raymond Weiming Chan, Carleton Univ. (Canada); Rebecca Hisey, Queen's Univ. (Canada); Matthew S. Holden, Carleton Univ. (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
For medical education to shift away from skills assessment by experts and professionals, robust and accessible methods of skill evaluation are needed. Automated skills assessment of ultrasound-guided needle insertions has previously been explored using 3D motion tracking data. We use simulated projections to imitate data gathered from video recordings of needle insertion procedures, and investigate the viability of 2D motion tracking data, compared to 3D, in distinguishing between novice and expert subjects.
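One intuition behind comparing 2D and 3D tracking is that common motion metrics, such as total path length, survive projection, though they can only shrink. A toy sketch of projecting a 3-D trajectory and comparing path lengths (illustrative, not the study's pipeline):

```python
import numpy as np

def project_to_plane(points3d, drop_axis=2):
    """Orthographic 'camera view': discard one coordinate of 3-D tracking data."""
    pts = np.asarray(points3d, float)
    keep = [a for a in range(pts.shape[1]) if a != drop_axis]
    return pts[:, keep]

def path_length(pts):
    """Total distance travelled along a tracked trajectory."""
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

# Hypothetical needle-tip trajectory (mm).
traj3d = np.array([[0, 0, 0], [1, 0, 1], [1, 1, 2]], float)
len3d = path_length(traj3d)                      # 2*sqrt(2)
len2d = path_length(project_to_plane(traj3d))    # 2.0 after dropping depth
```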
12034-102
Author(s): Ryan B. Duke, Xiaoyao Fan, Thayer School of Engineering at Dartmouth (United States); Songbai Ji, Thayer School of Engineering at Dartmouth (United States), Worcester Polytechnic Institute (United States); Sohail K. Mirza, Thayer School of Engineering at Dartmouth (United States); Keith D. Paulsen, Thayer School of Engineering at Dartmouth (United States), Geisel School of Medicine (United States), Dartmouth-Hitchcock Medical Ctr. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
Hand-held stereovision (HHS) is an accurate method for marker-less registration and image updating in image-guided surgery. This study compares the accuracy of data collection via video streams against traditional snapshot data collection. Errors for one snapshot acquisition and four video acquisitions were 1.03±0.24 mm, 1.13±0.34 mm, 1.18±0.59 mm, 1.19±0.47 mm, and 3.23±2.27 mm, respectively; each sequential video stream was collected at a higher sweep speed, calculated as 7.22±1.95 mm/s, 12.73±7.75 mm/s, 19.19±12.74 mm/s, and 30.14±13.33 mm/s, respectively. These data show that image acquisition via video streams at relatively low speeds has accuracy comparable to that of snapshot image acquisition.
12034-103
Author(s): William E. Higgins, Trevor K. Kuhlengel, The Pennsylvania State Univ. (United States); Rebecca Bascom, Penn State College of Medicine (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
To determine a patient’s lung cancer stage, a physician performs a lymph node staging procedure using bronchoscopy to sample tissue from multiple chest lymph nodes. Despite best practice guidelines recommending a comprehensive sampling of the existing nodes, most physicians tend to sample the bare minimum of locations. While image-guided bronchoscopy systems have shown much promise for lung-cancer management, they do not create comprehensive plans for a staging procedure. To bridge this gap, we propose a full image-based methodology for guiding comprehensive, multi-destination lymph node staging procedures. The complete methodology involves: 1) a procedure planning protocol to optimize bronchoscope sampling order; and 2) a guidance system designed for guiding multi-destination staging procedures.
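The sampling-order optimization in step 1 is essentially a routing problem over lymph node stations. As an illustrative stand-in (the paper's actual planning protocol is not specified here), a greedy nearest-neighbour ordering over hypothetical station centroids:

```python
import numpy as np

def greedy_sampling_order(sites, start=0):
    """Nearest-neighbour ordering of sampling sites, starting from `start`.
    A simple heuristic, not the paper's optimization."""
    sites = np.asarray(sites, float)
    unvisited = set(range(len(sites)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        cur = sites[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(sites[i] - cur))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Four hypothetical lymph node station centroids (mm, scanner coordinates).
stations = [[0, 0, 0], [50, 0, 0], [10, 5, 0], [60, 5, 0]]
order = greedy_sampling_order(stations)  # visits the near pair before the far pair
```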
12034-104
Author(s): Wenda Li, Yuichiro Hayashi, Masahiro Oda, Nagoya Univ. (Japan); Takayuki Kitasaka, Aichi Institute of Technology (Japan); Kazunari Misawa, Aichi Cancer Ctr. Hospital (Japan); Kensaku Mori, Nagoya Univ. (Japan), National Institute of Informatics (Japan)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
In this paper, we propose self-supervised depth estimation with an uncertainty-weighted joint loss function for laparoscopic videos. Although self-supervised learning has achieved impressive depth estimation performance using pose estimation as an auxiliary task, it still yields undesirable pose estimation results. Unlike streetscape datasets, laparoscope motion is constrained by the minimally invasive surgery setting, and estimating laparoscope poses with complex rotations from RGB images is challenging. To address this issue, we propose an improved self-supervised depth estimation with a relative pose loss for stereo laparoscopic videos. Furthermore, we adopt homoscedastic uncertainty to weight the loss function and balance the subtasks.
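Homoscedastic uncertainty weighting, in the style popularized by Kendall et al., combines sub-task losses with learned log-variances so that noisier tasks are automatically down-weighted while a regularizer prevents the variances from growing without bound. A minimal numeric sketch (not necessarily the authors' exact formulation):

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses with homoscedastic uncertainty:
    total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)."""
    losses = np.asarray(losses, float)
    log_vars = np.asarray(log_vars, float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

# Two sub-tasks (e.g. depth and relative pose); s_i = 0 reduces to a plain sum.
total = uncertainty_weighted_loss([0.8, 0.2], [0.0, 0.0])
```

In training, the `log_vars` would be learnable parameters updated by the same optimizer as the network weights.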
12034-105
Author(s): Peter Jackson, Kelly Merrell, Richard Simon, Cristian A. Linte, Rochester Institute of Technology (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM PST
We implement and test a method for performing patient registration using a tracked camera. We used a simplified patient phantom to which a virtual kidney model featuring landmarks is registered. This setup mimics a situation in which a surgeon navigates a tracked needle to renal landmarks percutaneously while relying on pre-procedural imaging, optical tracking, and surface video imaging. We conducted several experiments under both optimal and purposely altered phantom registration to show the effect of patient mis-registration on subsequent navigation and to demonstrate the use of camera-based registration correction to restore navigation to an acceptable uncertainty.
Session 9: Image-Guided Therapy Applications
In person: 24 February 2022 • 8:00 AM - 9:40 AM PST
Session Chairs: Kristy K. Brock, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States), Matthieu Chabanas, Univ. Grenoble Alpes (France)
12034-33
Author(s): Baochang Zhang, Mai Bui, Technische Univ. München (Germany); Cheng Wang, Zhongda Hospital, Southeast Univ. (China); Felix Bourier, Heribert Schunkert, Deutsches Herzzentrum München (Germany); Nassir Navab, Technische Univ. München (Germany)
In person: 24 February 2022 • 8:00 AM - 8:20 AM PST
In this paper, a two-stage deep learning framework for real-time guidewire segmentation and tracking is proposed. In the first stage, a YOLOv5s detector trained on both original and synthetic X-ray images outputs bounding boxes of candidate guidewires. A refinement module based on spatiotemporal constraints is then incorporated to robustly localize the guidewire and remove false detections. In the second stage, a novel, efficient network segments the guidewire within each detected bounding box. The network contains two major modules: a Hessian-based enhancement embedding module and a dual self-attention module. Quantitative and qualitative evaluations on clinical intra-operative images demonstrate that the proposed approach significantly outperforms our baselines as well as the current state of the art for this task.
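The refinement module exploits the fact that a guidewire moves little between consecutive frames. A toy stand-in for such a spatiotemporal constraint (the `min_iou` threshold and box format are illustrative; the paper's actual constraints are more elaborate) keeps only detections that overlap the previous-frame location:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def filter_detections(boxes, prev_box, min_iou=0.3):
    """Keep candidate boxes consistent with the previous-frame guidewire
    location; spatially distant detections are treated as false positives."""
    return [b for b in boxes if iou(b, prev_box) >= min_iou]
```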
12034-34
Author(s): Martin G. Wagner, Sarvesh Periyasamy, Joseph F. Whitehead, Paul F. Laeseke, Michael A. Speidel, Univ. of Wisconsin-Madison (United States)
In person: 24 February 2022 • 8:20 AM - 8:40 AM PST
A C-arm fluoroscopy-based 3D needle navigation technique is presented, where continuous back-and-forth rotation of the C-arm gantry allows frame-by-frame 3D reconstruction of the needle which can then be superimposed and displayed within a 3D CBCT acquired prior to needle insertion. This technique could provide a new image guidance method for percutaneous procedures using only single plane C-arm systems.
12034-35
Author(s): Ali Uneri, Corey Simmerer, Wojciech Zbijewski, Runze Han, Johns Hopkins Univ. (United States); Gerhard Kleinszig, Sebastian Vogt, Siemens Healthineers (Germany); Kevin Cleary, Children's National Health System (United States); Jeffrey H. Siewerdsen, Johns Hopkins Univ. (United States); Babar Shafiq, Johns Hopkins Medicine (United States)
In person: 24 February 2022 • 8:40 AM - 9:00 AM PST
Accurate, image-based planning of joint reduction based on intraoperative cone-beam CT forms the basis for precise robotic assistance and quantitative fluoroscopic guidance. The proposed approach combines statistical shape and pose modeling of the ankle joint to: (1) automatically segment individual bones; and (2) identify the target pose for the dislocated fibula to establish a plan for reduction. Leave-one-out analysis of the atlas members demonstrated accurate segmentation with 0.6 mm mean surface distance error and predicted the fibula pose within 1.6 mm and 1.8°. Future work will expand evaluation and analyze the appropriateness of the contralateral ankle as a patient-specific template.
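The 0.6 mm figure is a mean surface distance, which for two sampled surfaces can be computed symmetrically from nearest-neighbour distances. A brute-force sketch (fine for illustration-sized point clouds; real meshes would use a k-d tree):

```python
import numpy as np

def mean_surface_distance(a, b):
    """Symmetric mean surface distance between two surface point clouds
    (N x 3 and M x 3 arrays): average nearest-neighbour distance taken
    in both directions, then averaged."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairs
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```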
12034-36
Author(s): Fatemeh Zabihollahy, Akila Viswanathan, Ehud J. Schmidt, Junghoon Lee, Johns Hopkins Univ. (United States)
In person: 24 February 2022 • 9:00 AM - 9:20 AM PST
Brachytherapy (BT) combined with external beam radiotherapy (EBRT) is the standard treatment for cervical cancer. Accurate segmentation of the tumor and nearby organs at risk (OAR) is necessary for accurate radiotherapy (RT) planning. While OAR segmentation has been widely studied, showing promising performance, accurate tumor and/or corresponding clinical target volume (CTV) segmentation has been less explored. In cervical cancer RT, magnetic resonance (MR) imaging is used as the standard imaging modality to define the CTV, which is very challenging as the microscopic spread of tumor cells is not clearly visible even in MRI. We propose a convolutional neural network (CNN) approach to delineate CTV from T2-weighted (T2W) MR images. The reported segmentation accuracy demonstrated the potential of the proposed method for efficiently and robustly delineating CTV on MRI, thus significantly helping physicians improve the CTV segmentation workflow for cervical cancer RT planning.
12034-37
Author(s): Xianjin Dai, Yang Lei, Tonghe Wang, Zhen Tian, Jun Zhou, Mark McDonald, David S. Yu, Beth B. Ghavidel, Jeffrey D. Bradley, Tian Liu, Xiaofeng Yang, Emory Univ. (United States)
In person: 24 February 2022 • 9:20 AM - 9:40 AM PST
Session 10: Benchmarking and Assessment in Image-Guided Interventions: Joint Session with Conferences 12034 and 12037
In person: 24 February 2022 • 10:10 AM - 12:10 PM PST
Session Chair: Sandy Engelhardt, Ruprecht-Karls-Univ. Heidelberg (Germany)
12037-24
Author(s): Joshua R. Chen, Jonathan M. Morris, Adam J. Wentworth, Victoria A. Sears, Andrew M. Duit, Eric R. Erie, Kiaran P. McGee, Shuai Leng, Mayo Clinic (United States)
In person: 24 February 2022 • 10:10 AM - 10:30 AM PST
Three-dimensional (3D) printing has been shown to have a great impact on patient care by generating patient-specific 3D anatomic models and osteotomy guides from volumetric images. However, there are multiple printing technologies, variability between vendors, and inter-printer variability within a single vendor, all of which affect print accuracy. In this study, we investigated printing accuracy across a diverse selection of 3D printers commonly used in the medical field. We found that (1) material jetting and vat photopolymerization printers were the most accurate; (2) printers using the same 3D printing technology but from different vendors showed differences in accuracy; and (3) there were differences in accuracy between printers from the same vendor using the same printing technology but different models/generations. These results provide guidance on how to appropriately choose 3D printers in practice and avoid potentially detrimental consequences.
12034-38
Author(s): Abdullah Thabit, Wiro J. Niessen, Eppo B. Wolvius, Theo van Walsum, Erasmus MC (Netherlands)
In person: 24 February 2022 • 10:30 AM - 10:50 AM PST
In recent years, head-mounted displays such as the Microsoft HoloLens have presented an attractive alternative to traditional navigation systems. Both mono- and stereo-vision in the HoloLens have been reported for marker tracking, but no evaluation has compared the accuracy of the two approaches. In this work, we investigate the tracking performance of various camera setups in the HoloLens in relation to different parameters. Our results show that mono-vision localizes markers more accurately than stereo-vision when high resolution is used, at the expense of higher frame-processing time. As an alternative, we propose a combined low-resolution mono-stereo tracking setup that outperforms each tracking approach individually. We further discuss our findings and their implications for navigation in surgical interventions.
12034-39
Author(s): Michael A. Kokko, Thayer School of Engineering at Dartmouth (United States); John D. Seigne, Dartmouth-Hitchcock Medical Ctr. (United States); Douglas W. Van Citters, Thayer School of Engineering at Dartmouth (United States); Ryan J. Halter, Thayer School of Engineering at Dartmouth (United States)
In person: 24 February 2022 • 10:50 AM - 11:10 AM PST
This work introduces a novel, geometrically accurate anatomical phantom model of surgical exposure in the upper urinary tract. Silicone kidneys, ureters, and vasculature were molded based on the anatomy of a representative study subject. These were constrained to a rigid spinal model inside an acrylic housing designed to mimic the inner surface of the abdominal cavity, with wool batting simulating occluding adipose tissue. Initial CT evaluation showed good subjective correspondence with clinical tomography and physiologically relevant 25–30 mm kidney displacement between orientations. Stereoendoscopic views of partially occluded structures show promise for developing and validating image guidance tools for surgical exposure.
12037-25
Author(s): Trent Benedick, Lisa Meng, Eric Manning, The Univ. of Southern California (United States); Anh Le, Univ. at Buffalo (United States); Brent J. Liu, The Univ. of Southern California (United States)
In person: 24 February 2022 • 11:10 AM - 11:30 AM PST
We created an informatics web application that uses algorithmic analysis of historical cases to introduce uniformity and data-driven methods into radiation therapy treatment planning by providing planning templates and treatment benchmarking. The underlying database consists of historical DICOM-RT objects from which we extract spatial quantitative features. These features are used to generate a list of historical cases similar to the current case, which a physician can then use as templates for treatment planning.
12034-40
Author(s): Saba Adabi, Tzu-Chi Tzeng, Yading Yuan, The Mount Sinai Hospital (United States)
In person: 24 February 2022 • 11:30 AM - 11:50 AM PST
In this study, we developed a 3D dose-prediction framework based on scale attention networks (SA-Net) and signed distance maps. The attention mechanism fine-tunes the weights of each scale feature to emphasize the key scales while adaptively suppressing the less important ones. The proposed method was tested on prostate cancer treated with VMAT. The average difference between the predicted and clinically planned dose was 0.94 Gy (equivalent to 2.1% of the 45 Gy prescription dose). These findings show that our framework is feasible for automating treatment planning in prostate cancer radiotherapy.
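Signed distance maps, used here as an input representation, encode each voxel's distance to a structure boundary with the sign indicating inside versus outside. A sketch using SciPy's Euclidean distance transform (the sign convention is an assumption; conventions vary):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance map of a binary structure mask:
    negative inside the structure, positive outside."""
    mask = np.asarray(mask, bool)
    outside = distance_transform_edt(~mask)  # distance to the structure
    inside = distance_transform_edt(mask)    # distance to the background
    return outside - inside
```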
12034-41
Author(s): Shadab Momin, Yang Lei, Jiahan Zhang, Tonghe Wang, Justin Roper, Jeffrey D. Bradley, Pretesh Patel, Tian Liu, Xiaofeng Yang, The Winship Cancer Institute of Emory Univ. (United States)
In person: 24 February 2022 • 11:50 AM - 12:10 PM PST
Session 11: Novel Techniques in Image-Guided Interventions
In person: 24 February 2022 • 1:20 PM - 3:00 PM PST
Session Chairs: Ivo Wolf, Hochschule Mannheim (Germany), Terry Yoo, The Univ. of Maine (United States)
12034-42
Author(s): Sydney Wilson, Claire K. Park, Western Univ. (Canada), Robarts Research Institute (Canada); Jeffrey Bax, Kevin Barker, Hristo Nikolov, Robarts Research Institute (Canada); Aaron Fenster, David Holdsworth, Western Univ. (Canada), Robarts Research Institute (Canada)
In person: 24 February 2022 • 1:20 PM - 1:40 PM PST
There is high demand for multi-modality, intraoperative image guidance systems that enable clinicians to perform tumour margin assessment in real time. To address this need, we describe a novel dual-modality image guidance system comprising a focused gamma probe and a commercially available ultrasound probe that simultaneously acquires molecular and anatomical data. The gamma probe was optimized using Monte Carlo simulations to achieve high resolution and sensitivity in a remote focal plane. A custom-designed holder was then created to integrate the gamma probe with the ultrasound probe. This proof-of-concept demonstrates the proposed configuration for a real-time, radio-ultrasound-guided imaging system.
12034-43
Author(s): Yubing Tong, Jayaram K. Udupa, You Hao, Lipeng Xie, Univ. of Pennsylvania (United States); Joseph McDonough, The Children's Hospital of Philadelphia (United States); Caiyun Wu, Univ. of Pennsylvania (United States); Carina Lott, Jason B. Anari, The Children's Hospital of Philadelphia (United States); Drew A. Torigian, Univ. of Pennsylvania (United States); Patrick Cahill, The Children's Hospital of Philadelphia (United States)
In person: 24 February 2022 • 1:40 PM - 2:00 PM PST
We integrated our previous work on TIS into a software system, QdMRI, to address key questions in this domain: (1) how to effectively acquire free-breathing dynamic MR images; (2) how to assess thoracic structures, such as the left and right lungs separately, from the acquired images; (3) how to depict the respiration-induced dynamics of thoracic structures; and (4) how to use the structural and functional information to evaluate surgery-based TIS treatment and design the surgical plan. The QdMRI system can also be applied in scoliosis-related and other applications, not only for children but also for adults.
12034-44
Author(s): Laura Connolly, Amoon Jamzad, Arash Nikniazi, Rana Poushimin, Jean Michel Nunzi, John F. Rudan, Gabor Fichtinger, Parvin Mousavi, Queen's Univ. (Canada)
In person: 24 February 2022 • 2:00 PM - 2:20 PM PST
In this paper, we evaluate a combined optical and acoustic imaging approach as a proof-of-concept for a robotic cavity scanning system. We use throughput broadband spectroscopy and ultrasound imaging to demonstrate the viability of this approach for tissue characterization and detecting sample heterogeneity in breast conserving surgery. Tissue phantoms that are designed to represent heterogeneous tissue conditions are used for image acquisition. From the acquired images, we perform optical characterization of the tissue based on the absorption of broadband light and acoustic characterization with machine learning. Our preliminary results suggest that this is a viable, non-destructive imaging approach for tissue characterization.
12034-45
Author(s): Xinyu Kang, Mohammad H. Jafari, Mohammad M. Kazemi, The Univ. of British Columbia (Canada); Christina Luong, Teresa Tsang, Vancouver General Hospital (Canada); Purang Abolmaesumi, The Univ. of British Columbia (Canada)
In person: 24 February 2022 • 2:20 PM - 2:40 PM PST
Design of a lightweight and robust video-based deep learning model for ejection fraction (EF) estimation in portable mobile environments remains a challenge. Here we propose a modified Tiny Video Network (TVN) with sampling-free uncertainty estimation for video-based EF measurement in echocardiography (echo). We achieve a comparable accuracy with the state-of-the-art video-based model, while having a small model size. Moreover, we consider the aleatoric uncertainty in our network to model the inherent noise and ambiguity of EF labels in echo data to improve robustness. The proposed network is suitable for real-time video-based EF estimation compatible with portable mobile devices.
12034-46
Author(s): Kush Hari, Vanderbilt Univ. (United States); Rohan C. Vijayan, Johns Hopkins Univ. (United States); Ma Luo, Jaime Tierney, Jon S. Heiselman, Lola B. Chambless, Reid C. Thompson, Michael I. Miga, Vanderbilt Univ. (United States)
In person: 24 February 2022 • 2:40 PM - 3:00 PM PST
The goal of this project was to create an application to render brain shift simulations. In the app’s patient-positioning mode, a patient’s preoperative MR data were loaded into a virtual operating room where the neurosurgeon could adjust the head orientation and craniotomy location. This information was then used to establish the boundary conditions of a deformation model that calculates multiple brain shift solutions, capturing a range of possible positional effects. These results were loaded into the app’s simulation mode to juxtapose the preoperative images and the simulated intraoperative images as a function of head orientation and surgical forces.
Session 12: Image Registration
In person: 24 February 2022 • 3:30 PM - 5:30 PM PST
Session Chairs: Amber L. Simpson, Queen's Univ. (Canada), Ziv R. Yaniv, National Institute of Allergy and Infectious Diseases (United States)
12034-47
Author(s): Jon S. Heiselman, Memorial Sloan-Kettering Cancer Ctr. (United States), Vanderbilt Univ. (United States); William R. Jarnagin, Memorial Sloan-Kettering Cancer Ctr. (United States); Michael I. Miga, Vanderbilt Univ. (United States)
In person: 24 February 2022 • 3:30 PM - 3:50 PM PST
Successful estimation of registration error provides immense opportunities for controlling risks associated with navigation during image-guided surgery. In this work, two uncertainty metrics are leveraged to classify error thresholds for detecting inaccurate regions in sparse-data-driven elastic registration. Regions of the organ where deformable registration error exceeded the average magnitude of rigid registration error were predicted with an AUC above 0.87, and regions with TRE greater than 10 mm were predicted with an AUC of 0.8. These capabilities enhance clinical confidence in image-guided technologies for deforming organs by enabling immediate quantification and communication of navigational reliability and system accuracy during soft tissue surgery.
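The reported AUC values measure how well an uncertainty metric ranks inaccurate regions above accurate ones. The same quantity can be computed directly via the rank-sum (Mann–Whitney) identity; a small sketch:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve as the probability that a randomly chosen
    positive outscores a randomly chosen negative (ties counted half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))

# E.g. labels could mark regions whose TRE exceeds a 10 mm threshold and
# scores an uncertainty metric; AUC then quantifies how well the metric
# flags the inaccurate regions.
```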
12034-48
Author(s): James Huang, The Univ. of Texas at Dallas (United States); Junyu Guo, Ivan Pedrosa, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 24 February 2022 • 3:50 PM - 4:10 PM PST
In this study, we proposed an iterative affine registration method to perform initial global alignment of the kidneys from dynamic contrast-enhanced MRI, followed by a convolutional neural network (CNN) trained for deformable registration between two images. The proposed registration method was applied successively across the frames of the 3D DCE-MR images to reduce motion effects in the kidney compartments (cortex, medulla, and cavities). Successful reduction in the motion effects caused by patient respiratory motion during image acquisition allows for further kinetic analysis of the kidney. Original and registered images were analyzed and compared using dynamic intensity curves of the kidney compartments, target registration error of anatomical markers, image subtraction, and simple visual assessment. The proposed deep learning-based approach to correct motion effects in abdominal 3D DCE-MRI data can be applied to various kidney MR imaging applications.
12034-49
Author(s): Morgan J. Ringel, Winona L. Richey, Jon S. Heiselman, Ma Luo, Vanderbilt Univ. (United States); Ingrid M. Meszoely, Vanderbilt Univ. Medical Ctr. (United States); Michael I. Miga, Vanderbilt Univ. (United States)
In person: 24 February 2022 • 4:10 PM - 4:30 PM PST
Breast conserving surgery is a common procedure for early-stage breast cancer patients, and supine MR breast imaging can more closely represent the tumor surgical presentation compared to conventional pendant positioning. Utilization of preoperative imaging for surgical guidance requires an accurate image-to-physical registration. Three registration techniques were investigated: (1) a point-based rigid registration using synthetic fiducials, (2) a non-rigid biomechanical model-based registration using sparse data, and (3) a data-dense 3D image-to-image-based registration used as a comparison metric. Registration accuracy significantly improved from (1) to (2) to (3), and this analysis may inform future development of image guidance systems for lumpectomy procedures.
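Technique (1), point-based rigid registration over paired fiducials, has a classic closed-form SVD solution (Arun/Umeyama); whether this paper uses that exact solver is an assumption. A sketch:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of paired
    fiducial points (N x 3 arrays) via the SVD of the cross-covariance,
    with the reflection case corrected by the determinant sign."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0] * (H.shape[0] - 1)
                + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t                                 # dst ≈ src @ R.T + t
```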
12034-50
Author(s): Yabo Fu, Yang Lei, Tonghe Wang, Marian Axente, Justin Roper, Jeffrey D. Bradley, Tian Liu, Xiaofeng Yang, Emory Univ. (United States)
In person: 24 February 2022 • 4:30 PM - 4:50 PM PST
12034-51
Author(s): Shuwei Xing, Robarts Research Institute (Canada), Western Univ. (Canada); Terry M. Peters, Aaron Fenster, Elvis C. S. Chen, Robarts Research Institute (Canada); Derek W. Cool, Western Univ. (Canada); Lori Gardi, Jeffrey Bax, Robarts Research Institute (Canada)
In person: 24 February 2022 • 4:50 PM - 5:10 PM PST
3D US imaging has attracted much attention in percutaneous liver ablation because it provides sufficient volumetric information. However, 3D US shares the limitation of conventional 2D US in visualizing cases with poor tumor contrast. Our objective is to investigate a new ablation paradigm based on our previously developed 3D US system. In this paper, we developed a 2D/3D US/CT-guided liver ablation system. Results demonstrated that our system provides accurate tracking, with an unsigned error of 1.79 ± 0.46 mm, and correctly visualizes the complementary information from the two imaging modalities in real time. This work is a step towards a system to guide ablation procedures.
12034-52
Author(s): Batoul Dahman, Jean-Louis Dillenseger, Lab. Traitement du Signal et de l'Image, Univ. de Rennes 1 (France)
In person: 24 February 2022 • 5:10 PM - 5:30 PM PST
This paper presents a supervised-learning convolutional neural network framework for transesophageal ultrasound/computed tomography 2D image registration. A siamese architecture of convolutional layers extracts features from the moving and fixed images, analogous to dense local descriptors; these feature maps are concatenated, and a registration network directly outputs the parameter set of the rigid registration. The registration computation time is around 3 ms with a median target registration error of 2.2 mm, compared with 70 s and 2.7 mm, respectively, for a classical iterative method.
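Since the network regresses a rigid parameter set, evaluating it comes down to applying the predicted 2D rigid transform to landmarks and reporting the median target registration error. A sketch (the `(theta, tx, ty)` parameterization is an assumption about the exact parameter set):

```python
import numpy as np

def median_tre(params, moving_pts, fixed_pts):
    """Median target registration error after applying a 2D rigid
    transform (theta, tx, ty) to N x 2 landmark points."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    warped = np.asarray(moving_pts, float) @ R.T + np.array([tx, ty])
    return float(np.median(np.linalg.norm(warped - fixed_pts, axis=1)))
```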
Conference Chair
Rochester Institute of Technology (United States)
Conference Chair
Johns Hopkins Univ. (United States)
Program Committee
The Univ. of British Columbia (Canada)
Program Committee
The Univ. of Texas M.D. Anderson Cancer Ctr. (United States)
Program Committee
Univ. Grenoble Alpes (France)
Program Committee
Robarts Research Institute (Canada)
Program Committee
Sandy Engelhardt
Ruprecht-Karls-Univ. Heidelberg (Germany)
Program Committee
Siemens Healthineers (Germany)
Program Committee
The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. (United States)
Program Committee
Queen's Univ. (Canada)
Program Committee
Thayer School of Engineering at Dartmouth (United States)
Program Committee
Univ. of Washington (United States)
Program Committee
The Pennsylvania State Univ. (United States)
Program Committee
Mayo Clinic (United States)
Program Committee
Univ. de Rennes 1 (France)
Program Committee
Grand Canyon Univ. (United States)
Program Committee
Western Univ. (Canada)
Program Committee
Vanderbilt Univ. (United States)
Program Committee
Nagoya Univ. (Japan)
Program Committee
Queen's Univ. (Canada)
Program Committee
Vanderbilt Univ. (United States)
Program Committee
Mayo Clinic (United States)
Program Committee
Univ. of Washington (United States)
Program Committee
Queen's Univ. (Canada)
Program Committee
National Ctr. for Tumor Diseases Dresden (Germany)
Program Committee
Tamas Ungi
Queen's Univ. (Canada)
Program Committee
Case Western Reserve Univ. (United States)
Program Committee
Robert J. Webster
Vanderbilt Univ. (United States)
Program Committee
Northern Digital Inc. (Canada)
Program Committee
Hochschule Mannheim (Germany)
Program Committee
National Institute of Allergy and Infectious Diseases (United States)
Program Committee
The Univ. of Maine (United States)