Graphics processing units for biomedical imaging
Until a few years ago, clock scaling was the dominant method of boosting computer performance: electronic components were designed to switch at ever-higher speeds so that the clock driving them could be run faster. However, clock scaling began to show its limitations about 10 years ago.1 One way to continue increasing performance, and to sidestep the problems associated with clock scaling, is to abandon serial computation (in which a single execution flow exists at any time and instructions are executed one after another) and develop hardware and software components that are capable of parallel computing.2, 3 Multi-core central processing units (CPUs), which aggregate more than one core in the same physical package, represent one route toward achieving this.
Graphics processing units (GPUs) have recently become a viable alternative to multi-core CPUs. No longer confined to rendering scenes for gaming and media applications, today's GPUs are sophisticated enough that general-purpose parallel algorithms can be coded and run on them. GPU technology has found application in many scientific fields, ranging from signal processing to medical imaging, and from life sciences to fluid dynamics. As a result of the mass production and constant hardware improvements that they enjoy, GPUs offer inexpensive, state-of-the-art computational power.
Modern GPU devices have a large number of cores, making them suitable for many applications in biomedical imaging that typically require the processing of large amounts of data. We have developed GPU software that enables image reconstruction of emission tomography data collected with FastSPECT II.4, 5 FastSPECT II is a single-photon-emission computed tomography (SPECT) imaging system for small-animal imaging. It consists of 16 stationary modular gamma-ray cameras6 that are interfaced to a computer station for data acquisition and processing via dedicated circuitry. The size of the field of view is approximately 42×42×54 mm³. As Figure 1 shows, the system achieves high sensitivity over a volume large enough to accommodate a mouse.4, 5
Our reconstruction code implements the maximum-likelihood expectation-maximization (MLEM) algorithm, which is a popular choice for many image-reconstruction problems.7–10 Briefly, the MLEM algorithm begins with an estimate of the reconstructed image (e.g., a uniform image) and uses the measured data to refine this estimate by performing a certain number of iterations. At each iteration, a forward-projection step propagates the current estimate from image space to data space. Mathematically, this step can be formalized as a matrix multiplication between the system matrix,11 H, and the current image estimate. The measured data are then divided, component by component, by the result of the forward-projection step, thus providing correction factors. A matrix multiplication by the transpose of H backpropagates these correction factors to image space. The result of this last step is then multiplied, component by component, by the current estimate of the reconstructed image, thereby giving a refined estimate to be used in the next iteration of the algorithm.
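The iteration just described can be sketched in a few lines of NumPy. This is an illustrative serial implementation, not the FastSPECT II code: the function name, the explicit dense matrix H, and the small safeguard against division by zero are our own choices for the sketch, whereas the actual system computes H on the fly on the GPU.

```python
import numpy as np

def mlem(H, g, n_iter=30, eps=1e-12):
    """One possible MLEM reconstruction sketch.

    H      : (M, N) system matrix mapping image space to data space
    g      : (M,) measured projection data
    n_iter : number of MLEM iterations
    Returns the (N,) image estimate after n_iter iterations.
    """
    f = np.ones(H.shape[1])                 # uniform initial estimate
    sens = H.sum(axis=0)                    # sensitivity: column sums of H
    for _ in range(n_iter):
        g_est = H @ f                       # forward projection into data space
        ratio = g / np.maximum(g_est, eps)  # component-wise correction factors
        f *= (H.T @ ratio) / np.maximum(sens, eps)  # back-project and update
    return f
```

A useful sanity check on any MLEM implementation is count preservation: after an iteration, the total number of counts in the forward-projected estimate equals the total number of measured counts.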
Our parallel implementation of the MLEM algorithm takes advantage of the capabilities of modern GPU devices. For example, instead of storing H in memory, we are able to calculate its non-zero entries on the fly, as they are required by the reconstruction algorithm. The non-zero entries can be calculated from a Gaussian fit (described by a limited number of fitting coefficients) of data that were collected during system calibration. Because modern GPUs are much faster at performing floating-point operations than fetching large amounts of data from memory, this approach greatly improves performance. Our approach also makes it easier to perform reconstructions on an arbitrarily fine scale by simple interpolation of the Gaussian-fitting coefficients.12
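A minimal sketch of this idea follows. The function names, the 2D-Gaussian camera-response model, and the 1D coefficient interpolation are hypothetical simplifications for illustration (the real calibration grid is 3D and the fit model has more parameters); they show only the principle of recomputing entries of H from a handful of fitted coefficients instead of fetching them from memory.

```python
import numpy as np

def h_column(mu, sigma, amp, det_xy):
    """Compute one column of H on the fly: the modeled camera response for a
    single voxel, taken here to be a 2D Gaussian on the detector face.

    mu     : (2,) fitted centroid of the response on the camera
    sigma  : (2,) fitted widths
    amp    : fitted amplitude
    det_xy : (P, 2) detector pixel coordinates
    Returns the (P,) response values (the non-zero entries of the column).
    """
    d = (det_xy - mu) / sigma
    return amp * np.exp(-0.5 * np.sum(d * d, axis=1))

def interp_coeffs(z, z_cal, coeffs_cal):
    """Linearly interpolate fitted coefficients between calibration positions,
    enabling reconstruction on a grid finer than the calibration grid
    (1D illustration only).

    z          : query position
    z_cal      : (K,) calibration positions, increasing
    coeffs_cal : (K, C) fitted coefficients at each calibration position
    Returns the (C,) interpolated coefficients at z.
    """
    return np.array([np.interp(z, z_cal, c) for c in coeffs_cal.T])
```

Because each column of H is regenerated from a few coefficients, the trade is a handful of exponentials per entry in exchange for avoiding large memory transfers, which suits the arithmetic-heavy, bandwidth-limited profile of GPU hardware described above.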
To assess the performance of our algorithm, we performed a simple simulation study. Beginning with a simulated bone scan of a mouse, we ran 30 iterations of the MLEM algorithm on a machine equipped with eight NVIDIA Tesla K40 GPUs. The total processing time was less than six minutes, representing a significant speedup compared with traditional hardware. Results are shown in Figure 2, and the full reconstruction can be seen in a short video.13
We have implemented an MLEM algorithm on GPU hardware, enabling fast 3D image reconstruction for biomedical imaging applications. The amount of GPU computational power and memory that is projected to be available in the next few years will allow new theoretical approaches for analyzing imaging systems and processing the data that they produce. In one such approach, we are treating digital imaging systems with large numbers of acquired bits per event as part of continuous-to-continuous mappings. In such a case, an object (which is a function of continuous variables) is imaged, and the imaging system collects a list of parameters that vary over a continuous space. This gives rise to a representation of an imaging system via a continuous-to-continuous mathematical operator that relates the object that is being imaged to the collected data. This operator can then be analyzed mathematically, thereby providing insights regarding the fundamental properties of the imaging system. We are also working to integrate the concept of information content of a photon14 into GPU-based reconstruction code.
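One standard way to write such a continuous-to-continuous operator makes the idea concrete; the notation below is generic rather than taken from a specific system model:

```latex
g(\mathbf{r}) \;=\; (\mathcal{H} f)(\mathbf{r})
             \;=\; \int h(\mathbf{r} \mid \mathbf{r}')\, f(\mathbf{r}')\, \mathrm{d}^3 r' ,
```

where \(f\) is the object (a function of the continuous spatial variable \(\mathbf{r}'\)), \(g\) is the data, parameterized by the continuous variable \(\mathbf{r}\), and the kernel \(h(\mathbf{r} \mid \mathbf{r}')\) characterizes the imaging system's response. Analyzing the operator \(\mathcal{H}\) (e.g., its null space and singular-value spectrum) is what provides the insights into fundamental system properties mentioned above.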
The authors would like to acknowledge funding by the National Institutes of Health (grants R37 EB000803 and P41 EB002035).
University of Arizona
Luca Caucci is a member of the research faculty and earned his PhD in optical sciences from the University of Arizona. His research interests include list-mode data processing, signal detection, parameter estimation, adaptive imaging, parallel computing, and list-mode digital radiology.
Lars R. Furenlid is a professor in the Department of Medical Imaging and the College of Optical Sciences. He earned his PhD in physical chemistry from the Georgia Institute of Technology. His research interests include scintillation and solid-state detectors, methods of optics, pulse-processing electronics, digital data acquisition, and data reconstruction with a variety of computational methods.