Calibrating multiple microscopes with a smartphone
How does one build an inexpensive array of handheld microscopes for measuring microscopic dynamic events over a large field of view (FOV)? The challenges of building such an instrument lie in estimating the spatial, temporal, and color properties of each handheld microscope, as well as in seamlessly integrating the individual fields of view into one large FOV. These calibration challenges of building inexpensive camera arrays have been encountered and researched in close-range photogrammetry and multicamera computer vision applications.1–4 Previous research has aimed to reconstruct 3D scenes, but our ultimate objective is to image live cells in a culture dish. An entire dish 10cm in diameter cannot be imaged at the rate of cell-state dynamics with a combination of a single-camera microscope and a motorized stage. In our experience, acquiring about 17% of the dish (18×22 tiles with 10% overlap) takes around 22 minutes with a Zeiss motorized stage.
To explore the above calibration challenges with arrays of microscopes, we first assembled a linear array of two digital handheld microscopes (Dino X Lite AM-413MT5, 12 frames per second, 1280×1024 image pixel dimensions) and connected them to a computer via USB. These microscopes are currently used primarily for skin and scalp dermatology and printed circuit board inspection. Although macroscale calibration methodologies can be applied at the microscale, microscopic resolution imposes much higher quality specifications on the calibration objects and therefore much higher costs. As a result, the same calibration objects cannot be used. We assessed high-resolution smartphone displays (e.g., the iPhone 4S Retina display, 0.079mm/pixel) as alternative calibration objects by comparing them to traditional calibration objects (i.e., a Gretag Macbeth color chart, a stage micrometer for pixel-to-millimeter conversion, and a set of shapes with known geometry and locations for pose estimation): see Figure 1.

Next, we developed calibration methods to perform pixel-to-millimeter conversion, red-green-blue (RGB) color normalization, and microscope pose estimation using the high-resolution LCD of an iPhone. The iPhone LCD is placed under the array of microscopes as illustrated in Figure 1 (right). It renders temporally varying pixel intensities that act as a dynamic virtual calibration object. We currently use three virtual calibration objects. The first is a dynamic web page that varies the intensity of each red, green, and blue channel for color calibration: see Figure 2 (right). The left image in Figure 2 shows a microscope image of a green square printed on paper, imaged in the configuration shown in Figure 1 (top left). The constraints of printing and imaging paper give the pixels a variety of colors and intensities and yield a static, semi-regular structure. The right image in Figure 2 shows a microscope image of a green pattern rendered by the smartphone LCD: it has very little variation in color and intensity, a very regular structure, and changes intensity and color over time at a known rate (i.e., green at intensity 255 → green at intensity 0 → red at intensity 255 → …).
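To make the color-normalization step concrete, the sketch below fits a linear 3×3 color transform between two microscopes from matched average colors of rendered calibration patches and reports the mean Euclidean distance between camera colors before and after the transform, the metric reported in Table 1. The patch values are hypothetical placeholders, and the least-squares formulation is an illustrative assumption rather than the exact procedure in our software.

```python
# Minimal sketch: estimate a linear RGB transform that maps camera B's colors
# onto camera A's, from matched average colors of the rendered calibration
# patches. The numbers below are hypothetical placeholders; in practice they
# come from averaging each microscope's pixels while the LCD cycles through
# known intensities.
import numpy as np

# Matched observations: rows are average (R, G, B) values of the same rendered
# patch as seen by each microscope (hypothetical values).
cam_a = np.array([[250.1,  12.3,  10.8],
                  [ 11.2, 248.7,  14.5],
                  [  9.8,  13.1, 246.2],
                  [128.4, 127.9, 126.3]], dtype=float)
cam_b = np.array([[243.6,  18.9,  16.2],
                  [ 16.8, 240.1,  20.7],
                  [ 14.3,  19.4, 238.8],
                  [121.7, 122.5, 119.9]], dtype=float)

# Least-squares fit of a 3x3 matrix M such that cam_b @ M ≈ cam_a.
M, *_ = np.linalg.lstsq(cam_b, cam_a, rcond=None)
corrected = cam_b @ M

# Mean Euclidean distance between average camera colors, before and after the
# linear color transformation (the metric used in Table 1).
before = np.linalg.norm(cam_a - cam_b, axis=1).mean()
after = np.linalg.norm(cam_a - corrected, axis=1).mean()
print(f"distance before: {before:.2f}, after: {after:.2f}")
```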

The second is a static web page with a checkerboard pattern for spatial calibration (see Figure 3). The third is a dynamic web page with moving lines in two orthogonal directions with known line spacing and motion vectors for pose estimation (see Figure 4). Our custom-developed software processes the LCD renderings captured by each microscope to determine the calibration parameters. The preliminary accuracy results for traditional and virtual calibration objects are summarized in Table 1.
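As an illustration of the pixel-to-millimeter conversion, a minimal sketch can detect the rendered checkerboard corners in one microscope frame, measure the average corner spacing in camera pixels, and divide by the known physical spacing on the display (the checkerboard square size in display pixels times the 0.079mm/pixel pitch). The pattern dimensions, square size, and file name below are assumptions for illustration and do not reproduce our custom software.

```python
# Sketch: pixels-per-mm from a microscope image of the checkerboard rendered
# on the phone LCD. PATTERN_SIZE and SQUARE_DISPLAY_PX are assumed values;
# DISPLAY_MM_PER_PX is the iPhone 4S pixel pitch quoted in the text.
import cv2
import numpy as np

PATTERN_SIZE = (5, 5)          # inner corners of the rendered checkerboard (assumed)
SQUARE_DISPLAY_PX = 2          # checkerboard square size in display pixels (assumed)
DISPLAY_MM_PER_PX = 0.079      # iPhone 4S display pixel pitch
square_mm = SQUARE_DISPLAY_PX * DISPLAY_MM_PER_PX

img = cv2.imread("microscope_frame.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN_SIZE)
if not found:
    raise RuntimeError("checkerboard not detected")
corners = cv2.cornerSubPix(
    img, corners, (5, 5), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))

# Average distance between horizontally adjacent corners, in camera pixels.
pts = corners.reshape(PATTERN_SIZE[1], PATTERN_SIZE[0], 2)
step_px = np.linalg.norm(np.diff(pts, axis=1), axis=2).mean()

print(f"pixels per mm: {step_px / square_mm:.2f}")
```

In principle, the same detected corners, paired with their known positions on the display, could also be passed to OpenCV's solvePnP (given camera intrinsics) to estimate pose, although in our experiments pose is estimated from the moving-line pattern.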


Table 1. Preliminary accuracy results for traditional and virtual calibration objects.

| Calibration type | Metric | Traditional | Virtual |
|---|---|---|---|
| Color | Initial Euclidean distance between average camera colors | 15.92 | 7.48 |
| Color | Euclidean distance after linear color transformation | 14.72 | 6.86 |
| Spatial resolution | Mean of pixel-to-mm measurements | 710.12 | 711.16 |
| Spatial resolution | Standard deviation of pixel-to-mm measurements | 2.35 | 4.56 |
| Pose | Distance between microscope camera centers (mm) | Roughly 34–35 | 34.57 |
| Pose | Angle to next camera (β, degrees) | 77 | 76.4 |
| Pose | Angle between cameras (δ, degrees) | 79 | 79.8 |
Our results show that virtual object-based calibration is overall as accurate as physical object-based calibration (see Table 1). In other words, the results of camera integration are similar whether we use traditional or virtual calibration objects. However, the virtual calibration objects have several key advantages. They can be changed quickly and at negligible cost. Rendering them on an LCD and imaging them with the microscopes yields a higher signal-to-noise ratio than imaging traditional calibration objects (see Figure 3). In addition, they can include dynamic patterns and support acquiring calibration data at much higher rates (see Table 2). Higher acquisition rates are important for achieving higher statistical significance in the calibration results.
Table 2. Acquisition rate (data per minute) for traditional and virtual calibration objects.

| Calibration type/object | Traditional | Virtual |
|---|---|---|
| Color | 2 | 600 |
| Resolution | 1 | 3600 |
| Pose | 0.1 | 8 |
In the future, we plan to investigate the relationship between virtual object rendering and the display properties,5 and to acquire real video streams to study live cells, nematodes, and insect behavior.
This work has been supported by the National Institute of Standards and Technology (NIST) 2013 Summer Undergraduate Research Fellowship (SURF) Program. We would like to thank Ganesh Saiprasad and Kiran Bhadriraju at NIST for providing additional comments on the work.
Disclaimer
Commercial products are identified in this document to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the products identified are necessarily the best available for the purpose.
Peter Bajcsy is a computer scientist at NIST working on automatic transfer of image content to knowledge. His scientific interests include image processing, machine learning, and computer and machine vision. He has co-authored more than 24 journal papers, eight book chapters, and close to 100 conference papers.
Mary Brady is the manager of the Information Systems Group in NIST's Information Technology Laboratory. The group focuses on developing measurements, standards, and underlying technologies that foster innovation throughout the information life cycle from collection and analysis to sharing and preservation.
Jacob Siegel is a computer engineering major at the University of Maryland, College Park. He participated in the NIST 2013 SURF program. His research interests include camera calibration and image processing.