
Proceedings Paper

Cross-modality super-resolution in fluorescence microscopy enabled by generative adversarial networks (Conference Presentation)
Author(s): Hongda Wang; Yair Rivenson; Yiyin Jin; Zhensong Wei; Ronald Gao; Harun Gunaydin; Laurent A. Bentolila; Comert Kural; Aydogan Ozcan

Paper Abstract

We present a cross-modality super-resolution microscopy method based on the generative adversarial network (GAN) framework. Using a trained convolutional neural network, our method takes a low-resolution image acquired with one microscopic imaging modality and super-resolves it to match the resolution of an image of the same sample captured with another, higher-resolution microscopy modality. This cross-modality super-resolution method is purely data-driven, i.e., it does not rely on any knowledge of the image formation model or the point spread function. First, we demonstrated the success of our method by super-resolving wide-field fluorescence microscopy images captured with a low-numerical-aperture objective (NA = 0.4) to match the resolution of images captured with a higher-NA objective (NA = 0.75). Next, we applied our method to confocal microscopy to super-resolve closely spaced nanoparticles and Histone3 sites within HeLa cell nuclei, matching the resolution of stimulated emission depletion (STED) microscopy images of the same samples. Our method was also verified by super-resolving diffraction-limited total internal reflection fluorescence (TIRF) microscopy images to match the resolution of TIRF-SIM (structured illumination microscopy) images of the same samples, which revealed endocytic protein dynamics in SUM159 cells and in the amnioserosa tissue of a Drosophila embryo. The super-resolved object features in the network output show strong agreement with the ground-truth SIM reconstructions, which were synthesized using nine diffraction-limited TIRF images, each acquired with structured illumination. Beyond resolution enhancement, our method also offers an extended depth of field and an improved signal-to-noise ratio (SNR) in the network-inferred images compared to the corresponding ground-truth images.
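
To make the training setup described in the abstract concrete, below is a minimal, illustrative sketch of adversarial training of a super-resolution generator on registered low-/high-resolution image pairs, combining a pixel-wise fidelity loss with an adversarial loss. It is written in PyTorch as an assumption, not the authors' stated framework, and the network sizes, loss weights, and optimizer settings are placeholder choices rather than the authors' published architecture.

# Illustrative sketch only: GAN-style training of a super-resolution generator
# on registered (low-resolution, high-resolution) image pairs. All architecture
# and hyperparameter choices here are assumptions, not the authors' published setup.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy fully convolutional generator: low-res modality -> super-resolved estimate."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)  # residual prediction of the missing high-frequency detail

class Discriminator(nn.Module):
    """Toy discriminator: scores whether an image looks like the high-resolution modality."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(features, features * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(features * 2, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(lr_img, hr_img, adv_weight=0.01):
    """One training step on a batch of registered (low-res, high-res) image pairs."""
    # Discriminator update: real high-res images vs. detached generator outputs.
    fake = G(lr_img).detach()
    d_loss = bce(D(hr_img), torch.ones(hr_img.size(0), 1)) + \
             bce(D(fake), torch.zeros(hr_img.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: pixel-wise fidelity to the high-res target plus adversarial realism.
    sr = G(lr_img)
    g_loss = l1(sr, hr_img) + adv_weight * bce(D(sr), torch.ones(hr_img.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Random tensors stand in for a batch of registered image pairs of the same field of view.
low, high = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(train_step(low, high))

In practice, the paired training data would come from imaging the same fields of view with the two modalities (e.g., low-NA and high-NA objectives, or confocal and STED), with the pairs spatially registered before training.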

Paper Details

Date Published: 9 September 2019
Proc. SPIE 11088, Optical Sensing, Imaging, and Photon Counting: From X-Rays to THz 2019, 110880E (9 September 2019);
Author Affiliations:
Hongda Wang, Univ. of California, Los Angeles (United States)
Yair Rivenson, Univ. of California, Los Angeles (United States)
Yiyin Jin, Univ. of California, Los Angeles (United States)
Zhensong Wei, Univ. of California, Los Angeles (United States)
Ronald Gao, Univ. of California, Los Angeles (United States)
Harun Gunaydin, Univ. of California, Los Angeles (United States)
Laurent A. Bentolila, Univ. of California, Los Angeles (United States)
Comert Kural, The Ohio State Univ. (United States)
Aydogan Ozcan, Univ. of California, Los Angeles (United States)


Published in SPIE Proceedings Vol. 11088:
Optical Sensing, Imaging, and Photon Counting: From X-Rays to THz 2019
Oleg Mitrofanov, Editor(s)

© SPIE.