Smartphones and portable (or wearable) devices incorporate sensors and enhanced computing ability, offering a practical, accurate, and low-cost solution for medical diagnosis and monitoring. Furthermore, as smartphone systems and apps grow more user-friendly, they become accessible to a broader section of society.
For medical apps, imaging is a key component. Most smartphones are equipped with image sensors that capture photos in significant detail, at resolutions above 10 megapixels. This enables analysis of photos or videos for initial self-diagnosis of disease, self-monitoring of health conditions, and preliminary examinations. Here, we describe recent research on novel medical solutions using smartphones and mobile imaging.
There are currently several image-processing systems1 for automatic diagnosis of melanoma, the most aggressive form of skin cancer, which is often curable if detected early. These systems use dermoscopic images, taken under controlled clinical conditions using a liquid medium or a non-polarized light source and magnifiers, to reveal features below the skin surface. However, dermoscopic imaging is beyond the capability of the standard cameras in most smartphones.
As an alternative solution, we combined high-resolution images taken on a smartphone with on-device signal-processing algorithms for melanoma detection (see Figure 1).2–4 Generally, an automatic detection system comprises three stages: segmentation, feature extraction, and classification. The key to achieving high accuracy is extracting suitable features to characterize the mole. We used fast detection and fusion of two segmentation algorithms to localize the mole region (see Figure 2), and new features to mathematically quantify the color variation and border irregularity of the mole. These features are specific to skin cancer detection and are suitable for mobile imaging and on-device processing. Our system uses a selection mechanism that takes into account the coordinates of the feature values to identify the more discriminative features. In addition, we use a classifier array and a classification-result fusion procedure to compute the detection results. Currently, our system achieves sensitivity and specificity of greater than 80%.4
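The three-stage pipeline described above can be sketched in code. This is a minimal illustration with toy stand-ins — two global-threshold segmentations fused by intersection, per-channel color spread, a compactness measure for border irregularity, and a placeholder linear classifier — not the published algorithm; all function names and parameters here are hypothetical.

```python
import numpy as np

def segment_fused(gray):
    """Toy stand-in for fusing two segmentation algorithms: two global
    thresholds (mean and median), combined by intersection. Moles are
    assumed darker than the surrounding skin."""
    mask_a = gray < gray.mean()
    mask_b = gray < np.median(gray)
    return mask_a & mask_b

def color_variation(rgb, mask):
    """Per-channel intensity spread inside the mole; high values suggest
    the color variegation associated with melanoma."""
    return np.array([rgb[..., c][mask].std() for c in range(3)])

def border_irregularity(mask):
    """Compactness perimeter^2 / (4*pi*area): 1 for a perfect circle,
    larger for ragged borders."""
    area = mask.sum()
    # interior pixels have all four 4-connected neighbors inside the mask
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return perimeter ** 2 / (4 * np.pi * area)

def classify(features, weights, bias):
    """Placeholder single linear classifier; the classifier array and
    result-fusion step are not reproduced here."""
    return float(features @ weights + bias) > 0
```

In practice each stage would be replaced by the stronger components the text describes (fast segmentation fusion, discriminative feature selection, and a fused classifier array), but the data flow — mask, feature vector, decision — is the same.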
Figure 1. Early melanoma detection using a smartphone for mobile imaging.2
Figure 2. Mobile image analysis algorithm to detect melanoma. ai: average intensities of specific skin mole regions.3, 4
A further application of our approach is in wound assessment, which is critical for the management of pressure ulcers (also called bedsores). Caused by the death of skin and underlying tissue due to sustained pressure, such wounds are common in diabetic patients, and account for 85% of non-traumatic lower-extremity amputations in the United States. Reliably assessing the wound grade, type, severity, and healing process requires accurate and objective measurements such as area, perimeter, and volume. Depending on the application, wound assessment techniques can be divided into those that estimate the size of the wound (area/perimeter or volume), those that model the wound's appearance, and those that perform a complete evaluation of the bedsore.
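Given a segmented wound mask and a known image scale, the basic size measurements mentioned above reduce to pixel counting. The following is a minimal sketch, not any of the cited methods: the `mm_per_px` scale and the optional depth map are assumed to come from a separate calibration or 3D-reconstruction step.

```python
import numpy as np

def wound_measurements(mask, depth_mm=None, mm_per_px=1.0):
    """Area, perimeter, and (optionally) volume of a segmented wound.

    mask      : boolean array, True inside the wound
    depth_mm  : optional array of wound depth below the skin plane (mm)
    mm_per_px : physical edge length of one pixel (from calibration)
    """
    px_area = mm_per_px ** 2
    area_mm2 = mask.sum() * px_area

    # perimeter: count wound pixels that touch background (4-connectivity)
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter_mm = (mask & ~interior).sum() * mm_per_px

    volume_mm3 = None
    if depth_mm is not None:
        # integrate depth over the wound area
        volume_mm3 = float(depth_mm[mask].sum()) * px_area
    return area_mm2, perimeter_mm, volume_mm3
```

Tracking these numbers across visits is what makes the healing process quantifiable, which is why accurate segmentation and calibration matter so much in the systems discussed below.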
There are several fully automated techniques using image processing for both wound size estimation and tissue classification.5, 6 These methods estimate the volume of the wound and its characteristics by computing a 3D model using structured light, photogrammetry, or structure from motion. In most cases, it is necessary to place external markers (of specific size and color) near the bedsore for camera calibration, to avoid illumination and glare distortions during the acquisition process. The only exception is a method proposed by Wang and coworkers,7 which uses a mobile device for complete evaluation of the wound. Experimental results of this approach show an acceptable level of accuracy, with the only drawback being the auxiliary hardware needed during image acquisition.
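Because the marker's physical size is known, it fixes the image scale. A minimal sketch of this calibration idea, assuming a circular marker that has already been segmented (the helper name and inputs are hypothetical, and real systems also correct for perspective and color):

```python
import numpy as np

def mm_per_pixel(marker_mask, marker_diameter_mm):
    """Estimate image scale from a circular reference marker of known size.

    marker_mask        : boolean segmentation of the marker
    marker_diameter_mm : the marker's true diameter
    """
    area_px = marker_mask.sum()
    # for a circle, diameter in pixels = 2 * sqrt(area / pi)
    diameter_px = 2.0 * np.sqrt(area_px / np.pi)
    return marker_diameter_mm / diameter_px
```

The resulting scale factor is what converts pixel-based wound measurements (area, perimeter) into physical units.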
In future work, we plan to research issues surrounding acceptance and adoption of these smartphone/algorithm applications.8
Ngai-Man Cheung, Victor Pomponiu, Dothanh Toan, Hossein Nejati
Singapore University of Technology and Design
Ngai-Man (Man) Cheung is an assistant professor.
1. I. Maglogiannis, C. N. Doukas, Overview of advanced computer vision systems for skin lesions characterization, IEEE Trans. Inf. Technol. Biomed. 13(5), p. 721-733, 2009.
2. T.-T. Do, Y. Zhou, H. Zheng, H. Nejati, N.-M. Cheung, D. Koh, R. Sosa, et al., Design of a mobile imaging system for early diagnosis of skin cancer, Proc. IEEE Life Sci. Grand Chall. Conf., 2013.
3. T.-T. Do, Y. Zhou, H. Zheng, N.-M. Cheung, D. Koh, Early melanoma diagnosis with mobile imaging, Proc. IEEE Annu. Int'l Conf. Eng. Med. Biol. Soc., p. 6752-6757, 2014.
4. D. Toan, V. Pomponiu, Y. Zhou, Z. Haitian, C. Zhao, N.-M. Cheung, D. Koh, et al., Designing a mobile imaging system for early melanoma detection, Proc. Am. Acad. Dermatol. Annu. Meet., 2015. doi:10.1016/j.jaad.2015.02.366
5. S. Treuillet, B. Albouy, Y. Lucas, Three-dimensional assessment of skin wounds using a standard digital camera, IEEE Trans. Med. Imag. 28(5), p. 752-762, 2009.
6. H. Wannous, Y. Lucas, S. Treuillet, Enhanced assessment of the wound-healing process by accurate multiview tissue classification, IEEE Trans. Med. Imag. 30(2), p. 315-326, 2011.
7. M. A. L. Wang, P. C. Pedersen, D. M. Strong, B. Tulu, E. Agu, R. Ignotz, Smartphone-based wound assessment system for patients with diabetes, IEEE Trans. Biomed. Eng. 62(2), p. 477-488, 2015.
8. C. Zhao, N.-M. Cheung, R. Sosa, D. Koh, Design self-diagnosis applications for non-patients, Proc. 33rd Annu. ACM Conf. Human Factors Comput. Syst. (CHI EA '15), p. 1433-1438, 2015.