
Proceedings Paper

Deep learning improves mobile-phone microscopy (Conference Presentation)
Author(s): Yair Rivenson; Hatice Ceylan Koydemir; Hongda Wang; Zhensong Wei; Zhengshuang Ren; Harun Gunaydin; Yibo Zhang; Zoltan Gorocs; Kyle Liang; Derek Tseng; Aydogan Ozcan

Paper Abstract

Mobile-phone-based microscopy often uses 3D-printed opto-mechanical designs and inexpensive optical components that are not optimized for microscopic imaging of specimens. For example, the illumination source is often a battery-powered LED, which can create spectral distortions in the acquired image. Mechanical misalignments of the optical components and the sample holder, as well as inexpensive lenses, lead to spatial distortions at the microscale. Furthermore, mobile phones are equipped with CMOS image sensors with a pixel size of ~1-2 µm, which results in an inferior signal-to-noise ratio compared to benchtop microscopes, which are typically equipped with much larger pixels, e.g., ~5-10 µm. Here, we demonstrate a supervised learning framework, based on a deep convolutional neural network, for substantial enhancement of smartphone microscope images by eliminating spectral aberrations, increasing the signal-to-noise ratio, and improving the spatial resolution of the acquired images. Once trained, the deep neural network is fixed and rapidly outputs an image matching the quality of a benchtop microscope image in a feed-forward, non-iterative manner, without the need for any modeling of the aberrations in the mobile imaging system. This framework is demonstrated using pathology slides of thin tissue sections and blood smears, validating its superior performance even with highly compressed images, which is especially suitable for telemedicine applications with restricted bandwidth and storage requirements. This deep learning-powered approach can be broadly applied to various mobile microscopy systems for point-of-care medicine and global health-related applications.
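The abstract describes a feed-forward, image-to-image network trained on co-registered smartphone/benchtop image pairs. The paper's actual architecture, loss, and training details are not given here; as a rough illustration only, a minimal PyTorch sketch of such a supervised enhancement network (plain residual CNN, L1 loss, Adam optimizer, and patch sizes are all assumptions) might look like:

```python
# Minimal sketch of a supervised image-to-image enhancement CNN
# (illustrative only; not the authors' architecture or training recipe).
import torch
import torch.nn as nn

class EnhancementCNN(nn.Module):
    """Feed-forward network mapping a smartphone microscope image
    to an estimate of the corresponding benchtop microscope image."""
    def __init__(self, channels=3, features=64, num_layers=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Residual connection: the network learns the correction
        # (spectral/spatial distortions, noise) rather than the full image.
        return x + self.net(x)

def train_step(model, optimizer, phone_batch, benchtop_batch, loss_fn=nn.L1Loss()):
    """One supervised step on registered (phone, benchtop) image pairs."""
    optimizer.zero_grad()
    pred = model(phone_batch)
    loss = loss_fn(pred, benchtop_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = EnhancementCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Placeholder tensors standing in for registered image patches.
    phone = torch.rand(4, 3, 128, 128)
    benchtop = torch.rand(4, 3, 128, 128)
    print(train_step(model, opt, phone, benchtop))
    # Once trained, enhancement is a single non-iterative forward pass:
    with torch.no_grad():
        enhanced = model(phone)
```

The residual formulation reflects the abstract's framing: the network corrects aberrations and noise in an already-informative input image, and inference requires no model of the optics, only one forward pass.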

Paper Details

Date Published: 18 September 2018
Proc. SPIE 10772, Unconventional and Indirect Imaging, Image Reconstruction, and Wavefront Sensing 2018, 107720Q (18 September 2018); doi: 10.1117/12.2320864
Author Affiliations
Yair Rivenson, Univ. of California, Los Angeles (United States)
Hatice Ceylan Koydemir, Univ. of California, Los Angeles (United States)
Hongda Wang, Univ. of California, Los Angeles (United States)
Zhensong Wei, Univ. of California, Los Angeles (United States)
Zhengshuang Ren, Univ. of California, Los Angeles (United States)
Harun Gunaydin, Univ. of California, Los Angeles (United States)
Yibo Zhang, Univ. of California, Los Angeles (United States)
Zoltan Gorocs, Univ. of California, Los Angeles (United States)
Kyle Liang, Univ. of California, Los Angeles (United States)
Derek Tseng, Univ. of California, Los Angeles (United States)
Aydogan Ozcan, Univ. of California, Los Angeles (United States)


Published in SPIE Proceedings Vol. 10772:
Unconventional and Indirect Imaging, Image Reconstruction, and Wavefront Sensing 2018
Jean J. Dolne; Philip J. Bones, Editor(s)

© SPIE.