The evidence for AI in medical imaging

Clinician, researcher, and public-health professor Nehmat Houssami talks to SPIE about digital mammography, digital breast tomosynthesis, and the future of AI in imaging interpretation
05 February 2021
By Karen Thomas
3D mammography (breast tomosynthesis) could become a standard screening tool for breast cancer. Credit: Getty Images

Nehmat Houssami is a clinician, researcher, public health physician, and professor of public health at the University of Sydney, where she leads a breast cancer research program. She co-chairs the university's Artificial Intelligence in Cancer Care group, serves as a co-editor of The Breast, and is the National Breast Cancer Foundation's inaugural Cancer Research Leadership Fellow.

At SPIE Medical Imaging, 15-19 February, Houssami will give a keynote presentation on adopting artificial intelligence into clinical practice using mammography screening as an example.

What led to your interest in breast-cancer screening/treatment?
I have worked as a breast clinician longer than I have been active in research; I've been working in breast diagnostics for around 30 years. Naturally, my patients are an ongoing inspiration for me to look for ways to improve early detection and treatment, which is why I started to do research in this field. Having also trained as a public health physician, my focus includes the health of populations, for which cancer-screening programs are an important strategy. My research has largely focused on breast-cancer screening — in particular, evaluating new technologies and how tests affect clinical outcomes — aligning with my clinical work.

Your plenary talk will focus on AI for mammography. How can AI enhance mammography/medical imaging?
This is the challenging question — and issue. AI for mammography can definitely support and enhance interpretation, and can make breast screening more efficient and potentially more effective. However, this is an emerging field, so we also need to be aware that there is much more to do to obtain higher-quality evidence on how AI can be used to improve detection and care for patients, as well as to identify whether there may be harmful, negative, or even unforeseen consequences. I anticipate that AI for imaging interpretation, in particular for cancer screening (not limited to breast cancer, but also lung and similar cancers), is a field that will experience rapid growth. Those developing AI for imaging need to be aware that dialogue with various stakeholders — including end-users of AI systems — must commence early to ensure that AI applications have clinical utility. AI for mammography can potentially complement the human reader and pick up small cancers missed by the human eye, but that will only serve cancer-screening programs if it does not come at the expense of many false alarms. What I hope AI could do to improve medical imaging interpretation is to be trained and programmed to detect the more biologically aggressive cancers; I think that's where it might really make a difference. We need to move from simply looking at the Area Under the Curve (AUC) of Receiver Operating Characteristic (ROC) curves, to measuring how including AI in the imaging pathway improves clinical metrics.
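To make that last point concrete, here is a minimal, hypothetical sketch (not from the interview; it assumes Python with NumPy and scikit-learn and uses simulated scores) of the difference between a standalone AUC summary and an operating-point measure such as sensitivity at a fixed false-positive rate, which sits closer to the recall rates a screening programme actually has to manage.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical example only: 1 = cancer, 0 = normal;
# "scores" stands in for an AI system's suspicion-of-malignancy output.
rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(50), np.zeros(950)])   # illustrative case mix
scores = np.concatenate([rng.normal(0.7, 0.15, 50),     # cancers tend to score higher
                         rng.normal(0.3, 0.15, 950)])

# Standalone discrimination summary
print("AUC:", roc_auc_score(y_true, scores))

# Operating-point view: sensitivity at a fixed false-positive rate,
# closer to the trade-off a screening programme must manage.
fpr, tpr, thresholds = roc_curve(y_true, scores)
target_fpr = 0.05                                        # e.g. tolerate ~5% false alarms
idx = np.searchsorted(fpr, target_fpr)
print(f"Sensitivity at ~{target_fpr:.0%} false-positive rate:", tpr[idx])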

What are some of the challenges to adopting AI into medical practice?
There are many challenges, and this is something I will talk about in my presentation. AI development and research have mostly focused on providing evidence of technical and performance capability, i.e. measures showing that an AI system can undertake a specific task, such as the accuracy of AI in image interpretation. But trials specifically reporting clinical outcomes will be required — particularly where AI contributes to the clinical decision-making process — before it can be adopted into routine clinical practice. There are also challenges related to the specific context of the health system, such as establishing the best model for integrating AI, and developing monitoring and explainability techniques for end-users. There is a lot more work to be done in that area before AI can be fully translated into health-system-embedded patient care.

There are also other challenges, not unique to healthcare AI, but that are made more critical or complex due to the personal nature of clinical care: these include data privacy and security, and a whole mix of the ethical, legal, and social implications (ELSI) of AI that must be better assessed as part of the process of evaluating and implementing AI for medical care.

Nehmat Houssami, professor of public health and National Breast Cancer Foundation Research Leadership Fellow at the University of Sydney's Faculty of Medicine and Health. Credit: University of Sydney

What has made breast-cancer screening an exemplary area for AI technology application?
Well, mammography screening is an example of AI application where evidence is emerging — particularly in very recent years — that AI can equal, or potentially outperform, the human reader. As I will highlight in my presentation, the convergence of several factors has led to this: digital imaging (in this case digital mammography and also tomosynthesis); the nature of the mammography interpretive task (feature and pattern recognition, a repetitive, time-consuming task that can be learnt from data); the arrival of a catalyst a few years ago, the international DREAM challenge, which provided a large data source for AI training; and the existence of a health-system 'need' to resource large-volume screen-reading without compromising cancer detection.

Can you give us a basic description of tomosynthesis? How does it compare to mammography?
In digital mammography — which replaced the "old" film mammography — a digital detector captures the x-rays and converts the information to a digital image. The x-ray is acquired in two different views with the breast under compression, so each image shows the breast tissue largely superimposed, which can obscure some cancers and can sometimes create the appearance of lesions simply because normal breast tissue overlaps.

Digital breast tomosynthesis (DBT), a tomographic x-ray of the breast, reduces some of the confounding effect of overlapping tissue by acquiring multiple low-dose x-rays for each view of the compressed breast from different angles, with the x-ray tube moving in a limited arc (typically 15-50°) over the compressed breast. These low-dose "projection" images are mathematically reconstructed into a stack of thin slices of the breast that can be viewed as a cine-loop or by scrolling through single slices. Because each slice contains much less overlapping tissue than a mammography image, DBT helps better visualise abnormalities, or at least facilitates discriminating between normal and abnormal appearances on the image. It does, however, take longer to interpret DBT than digital mammography because there are many more images to scroll through.
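As a rough illustration of the reconstruction idea described above (a sketch only, not the algorithm used by any particular DBT system; the geometry, angles, and array names are hypothetical), the Python/NumPy snippet below implements simple shift-and-add tomosynthesis: each projection is shifted in proportion to the height of the plane of interest and the shifted images are averaged, so structures in that plane reinforce while out-of-plane tissue blurs.

import numpy as np

def shift_and_add(projections, angles_deg, plane_height_mm, pixel_mm=0.1):
    """Very simplified shift-and-add tomosynthesis reconstruction.

    projections     : list of 2D arrays, one low-dose image per tube angle
    angles_deg      : tube angle (degrees) for each projection, within a limited arc
    plane_height_mm : height of the breast plane to bring into focus
    """
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, ang in zip(projections, angles_deg):
        # Parallel-beam approximation: a structure at height h appears shifted
        # laterally by roughly h * tan(angle) in the detector plane.
        shift_px = int(round(plane_height_mm * np.tan(np.radians(ang)) / pixel_mm))
        recon += np.roll(proj, -shift_px, axis=1)
    return recon / len(projections)

# Hypothetical use: 9 projections over a +/-15 degree arc, placeholder images.
angles = np.linspace(-15, 15, 9)
projs = [np.zeros((1500, 2000)) for _ in angles]
slice_at_20mm = shift_and_add(projs, angles, plane_height_mm=20.0)

Clinical DBT systems use filtered backprojection or iterative reconstruction rather than this toy parallel-beam geometry, but the shift-and-add picture conveys why in-plane detail sharpens while overlapping tissue above and below the slice is smeared out.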

Attend the SPIE Medical Imaging Digital Forum for Houssami's presentation on the use of AI in health systems and other exciting developments in the medical imaging community.
