Plenary Event
SPIE Medical Imaging Awards and Plenary
18 February 2024 • 5:30 PM - 6:30 PM PST | Town & Country A 

5:30 PM - 5:40 PM:
Symposium Chair Welcome and Best Student Paper Award Announcement
Despina Kontos, Columbia Univ. Irving Medical Ctr. (United States), and Joseph Lo, Duke Univ. School of Medicine (United States), will welcome all SPIE Medical Imaging 2024 attendees and announce the first-place winner and runner-up of the Robert F. Wagner All-Conference Best Student Paper Award.

Award Sponsored by:


5:40 PM - 5:45 PM:
Acknowledgment of New SPIE Fellows
Each year, SPIE promotes Members as new Fellows of the Society. Fellows are Members of distinction who have made significant scientific and technical contributions in the multidisciplinary fields of optics, photonics, and imaging. They are honored for their technical achievement and for their service to the general optics community and to SPIE in particular. Join us as we welcome members of the medical imaging community who have been selected this year as new SPIE Fellows.


5:45 PM - 5:50 PM:
SPIE Harrison H. Barrett Award in Medical Imaging
Presented in recognition of outstanding accomplishments in medical imaging.


5:50 PM - 6:30 PM:
Interpretable deep learning in medical imaging

Cynthia Rudin, Duke Univ. (United States)

We would like deep learning systems to aid radiologists with difficult decisions instead of replacing them with inscrutable black boxes. "Explaining" the black boxes with XAI tools is problematic, particularly in medical imaging where the explanations from XAI tools are inconsistent and unreliable. Instead of explaining the black boxes, we can replace them with interpretable deep learning models that explain their reasoning processes in ways that people can understand. One popular interpretable deep learning approach uses case-based reasoning, where an algorithm compares a new test case to similar cases from the past ("this looks like that"), and a decision is made based on the comparisons. Radiologists often use this kind of reasoning process themselves when evaluating a new challenging test case. In this talk, I will demonstrate interpretable machine learning techniques through applications to mammography and EEG analysis.
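The case-based "this looks like that" idea described above can be illustrated with a minimal nearest-prototype sketch. This is a hypothetical toy example, not the architecture presented in the talk: it assumes that images have already been reduced to feature embeddings, and it scores each class by its best-matching prototype so the winning prototype serves as human-readable evidence for the decision.

```python
import numpy as np

def classify_by_prototypes(test_embedding, prototypes, labels):
    """Case-based reasoning sketch: compare a new case to stored
    prototype cases ("this looks like that") and score each class
    by its best-matching prototype's cosine similarity.

    test_embedding : (d,) feature vector for the new case
    prototypes     : (n, d) feature vectors of past prototypical cases
    labels         : (n,) class label for each prototype
    """
    # Cosine similarity between the test case and every prototype
    t = test_embedding / np.linalg.norm(test_embedding)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = p @ t

    # Each class is scored by its most similar prototype; that
    # prototype is the interpretable "evidence" for the decision.
    classes = np.unique(labels)
    scores = {c: sims[labels == c].max() for c in classes}
    best_class = max(scores, key=scores.get)
    best_proto = int(np.argmax(np.where(labels == best_class, sims, -np.inf)))
    return best_class, best_proto, scores

# Toy example: 2-D "embeddings" of past cases
prototypes = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = np.array(["benign", "benign", "malignant"])
cls, proto_idx, scores = classify_by_prototypes(
    np.array([0.95, 0.05]), prototypes, labels
)
print(cls, proto_idx)  # the prediction and which past case it "looks like"
```

Unlike a post-hoc XAI explanation, the comparison to a specific past case is the model's actual decision process, so the explanation cannot disagree with the prediction.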

Cynthia Rudin is the Earl D. McLean, Jr. Professor of Computer Science and Engineering at Duke University. She directs the Interpretable Machine Learning Lab, and her goal is to design predictive models that people can understand. Her lab applies machine learning in many areas, such as healthcare, criminal justice, and energy reliability. She holds degrees from the University at Buffalo and Princeton. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (the "Nobel Prize of AI"). She received a 2022 Guggenheim fellowship, and is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the Association for the Advancement of Artificial Intelligence.



Event Details

FORMAT: General session with live audience Q&A to follow the plenary presentation.
MENU: Coffee, decaf, and tea will be available outside the presentation room.
SETUP: An assortment of classroom- and theater-style seating.