Mobile image analysis for medical applications
Smartphones and portable (or wearable) devices incorporate sensors and substantial computing power, offering a practical, accurate, and low-cost platform for medical diagnosis and monitoring. Furthermore, as smartphone systems and apps grow more user-friendly, they become accessible to a broader section of society.
For medical apps, imaging is a key component. Most smartphones are equipped with image sensors that capture photos in considerable detail, at resolutions exceeding 10 megapixels. This enables analysis of photos or videos for initial self-diagnosis of disease, self-monitoring of health conditions, and preliminary examinations. Here, we describe recent research on novel medical solutions using smartphones and mobile imaging.
Several image processing systems1 already exist for automatic diagnosis of melanoma, the most aggressive form of skin cancer, which is often curable if detected early. These use dermoscopic images, taken under controlled clinical conditions with a liquid medium or a non-polarized light source and magnifiers, to reveal features below the skin surface. However, dermoscopic imaging is beyond the capability of the standard cameras in most smartphones.
As an alternative solution, we combined high-resolution images taken on a smartphone with on-device signal processing algorithms for melanoma detection (see Figure 1).2–4 Generally, an automatic detection system comprises three stages: segmentation, feature extraction, and classification. The key to achieving high accuracy is extracting suitable features to characterize the mole. We use fast detection and fusion of two segmentation algorithms to localize the mole region (see Figure 2), together with new features that mathematically quantify the color variation and border irregularity of the mole. These features are specific to skin cancer detection and are suitable for mobile imaging and on-device processing. Our system uses a selection mechanism that takes the coordinates of the feature values into account to identify the more discriminative features. In addition, we use a classifier array and a classification-result fusion procedure to compute the detection result. Currently, our system achieves greater than 80% sensitivity and specificity.4
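To make the three stages concrete, the following Python sketch fuses two simple segmentations, extracts toy color-variation and border-irregularity features, and hands them to a classifier. It is a minimal illustration written with scikit-image and scikit-learn; the particular segmentation methods (Otsu thresholding and k-means), feature definitions, and support vector machine classifier are our assumptions for the example, not the published algorithm.

# Minimal sketch of a mole-analysis pipeline: fuse two segmentations,
# extract simple color/border features, and classify. Illustrative only;
# not the algorithm described in the cited papers.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_otsu(img_rgb):
    """Segmentation #1: global Otsu threshold on the grey-level image."""
    grey = rgb2gray(img_rgb)
    return grey < threshold_otsu(grey)          # moles are darker than skin

def segment_kmeans(img_rgb):
    """Segmentation #2: two-cluster k-means in RGB space."""
    pixels = img_rgb.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=2, n_init=5, random_state=0).fit_predict(pixels)
    dark = np.argmin([pixels[labels == k].mean() for k in (0, 1)])
    return labels.reshape(img_rgb.shape[:2]) == dark

def fuse_masks(m1, m2):
    """Fusion: keep the largest connected region of the intersection."""
    lab = label(m1 & m2)
    if lab.max() == 0:
        return m1 | m2
    sizes = np.bincount(lab.ravel())
    sizes[0] = 0
    return lab == sizes.argmax()

def extract_features(img_rgb, mask):
    """Toy color-variation and border-irregularity features."""
    lesion = img_rgb[mask]
    color_var = lesion.std(axis=0).mean()        # spread of R, G, B inside lesion
    props = regionprops(mask.astype(int))[0]
    irregularity = props.perimeter ** 2 / (4 * np.pi * props.area)  # 1.0 = circle
    return np.array([color_var, irregularity])

def classify(train_X, train_y, test_X):
    """Classification stage: a single SVM trained on labelled feature vectors."""
    clf = SVC(kernel="rbf").fit(train_X, train_y)
    return clf.predict(test_X)

In a classifier array, several such models would be trained (for example on different feature subsets) and their outputs fused, e.g. by majority vote, to produce the final detection result.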

A further application of our approach is in wound assessment, which is critical for the management of pressure ulcers (also called bedsores). Caused by the death of skin and underlying tissue due to sustained pressure, such wounds are common in diabetic patients, accounting for 85% of non-traumatic lower-extremity amputations in the United States. Reliably assessing the wound grade, type, severity, and healing process requires accurate and objective measurements such as area, perimeter, and volume. Depending on the application, wound assessment techniques can be divided into those that estimate the size of the wound (area/perimeter or volume), those that model the wound's appearance, and those that perform a complete evaluation of the bedsore.
There are several fully automated techniques that use image processing for both wound size estimation and tissue classification.5,6 These methods estimate the volume of the wound and its characteristics by computing a 3D model using structured light, photogrammetry, or structure from motion. In most cases, it is necessary to place external markers (of specific size and color) near the bedsore for camera calibration, so as to avoid illumination and glare distortions during the acquisition process. The only exception is a method proposed by Wang and coworkers7 that uses a mobile device for complete evaluation of the wound. Experimental results of this approach show an acceptable level of accuracy, with the only drawback being the auxiliary hardware needed during image acquisition.
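As a rough illustration of how a reference marker provides the scale needed for size estimation, the sketch below converts a wound's pixel count into square centimeters using a circular marker of known diameter placed next to the wound. The masks, the circular marker shape, and the diameter value are assumptions made for this example; they are not taken from the cited systems.

# Sketch: estimate wound area from a single photo using a reference marker
# of known physical size for scale calibration. The marker and wound masks
# are assumed to be given (e.g. from manual outlining or any segmentation).
import numpy as np

def pixels_per_cm(marker_mask, marker_diameter_cm):
    """Derive the image scale from a circular marker of known diameter."""
    area_px = marker_mask.sum()                 # marker area in pixels
    diameter_px = 2 * np.sqrt(area_px / np.pi)  # equivalent-circle diameter
    return diameter_px / marker_diameter_cm

def wound_area_cm2(wound_mask, scale_px_per_cm):
    """Convert the wound's pixel count into square centimeters."""
    return wound_mask.sum() / scale_px_per_cm ** 2

# Usage with boolean masks from the same image, e.g. a 2cm-diameter sticker:
# scale = pixels_per_cm(marker_mask, marker_diameter_cm=2.0)
# print(wound_area_cm2(wound_mask, scale))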
In future work, we plan to research issues surrounding acceptance and adoption of these smartphone/algorithm applications.8
Ngai-Man (Man) Cheung is an assistant professor.