Pragmatic approaches for optimizing radiation dose in x-ray computed tomography
A novel adaptive non-local-means denoising method permits the radiation dose of any computed tomography scanner to be reduced by approximately 50% without affecting diagnostic performance.
Ongoing developments in x-ray computed tomography (CT) imaging have enabled an ever-increasing number of clinical applications, many of which have supplanted less-accurate or more-invasive diagnostic tests. As a result, over 80 million CT scans are performed annually in the USA. The low doses of ionizing radiation from medical imaging examinations, including CT, have not been shown to increase the risk of adverse health effects. However, the fact that high doses of radiation increase the risk of developing some cancers raises the possibility that low doses might also carry some risk. Thus, it is imperative that the radiation doses used in medical imaging be kept as low as reasonably achievable, provided that the diagnostic accuracy of the examination is maintained.
Our team has worked to address two fundamental limitations with respect to radiation dose reduction. First, most dose reduction techniques require purchasing new CT scanners with costly features, such as iterative reconstruction. Second, users subjectively select the amount of dose reduction to be applied, without evidence that the diagnostic performance will be preserved. We have developed a method for reducing image noise that can be applied to any scanner—even older ones—to enable dose reduction, and have developed methods to quantitatively determine the impact of dose reduction on diagnostic performance.
In CT, image noise increases as dose is decreased; if noise can be reduced after the fact, lower doses can be used when scanning the patient. Our noise reduction algorithm uses an adaptive non-local-means (aNLM) filter that operates on the reconstructed images. To find the optimal level of dose reduction for a specific diagnostic task, we have conducted human and model observer performance trials that measure, for example, the sensitivity and specificity with which a subtle lesion can be detected. Current clinical parameters are used to establish reference performance, and the lowest dose setting that provides non-inferior performance relative to the reference is deemed optimal for that task.
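The dose–noise tradeoff underlying this approach follows from photon (quantum) statistics: image noise scales roughly as the inverse square root of dose. A minimal numeric sketch of that relationship (the function name and values are illustrative, not taken from the authors' software):

```python
import math

def relative_noise(dose_fraction):
    """Quantum noise scales as ~1/sqrt(dose):
    halving the dose raises noise by a factor of sqrt(2) ~ 1.41."""
    return 1.0 / math.sqrt(dose_fraction)

print(relative_noise(0.5))   # ~1.414: 50% dose -> ~41% more noise
print(relative_noise(0.25))  # 2.0: quarter dose -> double the noise
```

Under this simple model, a denoiser that removes the roughly 41% excess noise restores the original noise level at half the dose, which is consistent with the approximately 50% reduction reported here.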
Our denoising method estimates the local noise level in the CT images using an analytical model based on photon statistics and scanner geometry.1 There are two main advantages of the aNLM method. First, there is no need to access raw CT data; it can be implemented with any CT scanner as a post-processing step. Second, aNLM estimates the local noise level during denoising, which optimally reduces noise throughout the imaging volume without sacrificing structural edge information.
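As a sketch of the idea only (not the authors' implementation: the windowed standard-deviation noise estimate below stands in for their analytical photon-statistics model, and the patch and search sizes are arbitrary choices), a non-local-means filter whose strength adapts to a local noise estimate might look like:

```python
import numpy as np

def local_noise_estimate(img, win=5):
    """Crude local noise estimate: standard deviation in a sliding window.
    (Stand-in for the article's analytical, geometry-aware noise model.)"""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].std()
    return out

def adaptive_nlm(img, patch=3, search=7, h_scale=1.0):
    """Non-local means with filtering strength tied to the local noise level."""
    pr, sr = patch // 2, search // 2
    pad = sr + pr
    p = np.pad(img, pad, mode="reflect").astype(float)
    sigma = local_noise_estimate(img)
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = p[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            h2 = (h_scale * max(sigma[i, j], 1e-6)) ** 2  # adapt to local noise
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = p[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch similarity
                    w = np.exp(-d2 / h2)
                    wsum += w
                    acc += w * p[ni, nj]
            out[i, j] = acc / wsum
    return out
```

Because the smoothing parameter tracks the local noise estimate, denoising is stronger where the image is noisier, while the patch-similarity weighting avoids averaging across structural edges.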
Traditional metrics of image quality assume a linear system; once nonlinear algorithms such as aNLM or iterative reconstruction are applied, these metrics are no longer adequate. Thus, to evaluate different algorithms and dose levels, we study an algorithm's impact on actual diagnostic performance. However, determining optimal dose levels with human readers requires that variations in both patient data and reader performance be taken into consideration, making these multireader, multicase (MRMC) studies extremely time consuming and expensive to perform. We therefore aim to develop model (mathematical) observers whose performance is highly correlated with that of trained readers, as these algorithms can be applied quickly to phantom data sets, reducing the expense and uncertainty associated with MRMC studies. Our channelized-Hotelling model observer uses Gabor channels to predict human performance for diagnostic tasks such as detection and localization of simple low-contrast lesions in a uniform background, and discrimination of differences in the boundaries of medium-contrast lesions,2–4 providing evidence that the lowest dose level at which performance is maintained can be determined efficiently.
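A channelized-Hotelling observer with Gabor channels can be sketched as follows. This is a minimal illustration of the technique, not the published observer: the channel parameters, image sizes, and two-class simulation in the usage are assumptions.

```python
import numpy as np

def gabor_channel(size, freq, theta):
    """One 2-D Gabor channel (cosine phase) centered in a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 6.0) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def cho_dprime(signal_imgs, noise_imgs, channels):
    """Detectability index d' of a channelized Hotelling observer.

    Each image is reduced to a few channel responses; the Hotelling
    template is the inverse pooled channel covariance applied to the
    mean signal-present vs. signal-absent difference."""
    U = np.stack([c.ravel() for c in channels])           # channels x pixels
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T  # signal-present responses
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T    # signal-absent responses
    dmean = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # pooled channel covariance
    w = np.linalg.solve(S, dmean)                         # Hotelling template
    return float(np.sqrt(dmean @ w))
```

In use, d' summarizes task performance in a single figure of merit; tracking how it changes as simulated dose (i.e., noise level) is lowered identifies the lowest dose at which detectability is preserved.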
In summary, our adaptive denoising and model observer algorithms offer a promising way to reduce image noise, and hence radiation dose, in CT without compromising diagnostic performance. To date, we have used human and model observers to demonstrate that our denoising technique can be used to reduce dose by approximately 50% in a range of examination types and diagnostic tasks without affecting reader performance (see Figure 1). Our next step will be to extend our model observer methodology to tasks involving complex anatomical backgrounds.
Cynthia McCollough is a professor of medical physics and biomedical engineering who works with numerous co-investigators on projects seeking to detect and quantify disease using CT imaging. She is the author of over 200 peer-reviewed publications and holds 10 patents.