Proceedings Volume 4266

Microarrays: Optical Technologies and Informatics

Michael L. Bittner, Yidong Chen, Andreas N. Dorsel, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 4 June 2001
Contents: 4 Sessions, 27 Papers, 0 Presentations
Conference: BiOS 2001 The International Symposium on Biomedical Optics 2001
Volume Number: 4266

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Image Data Analysis
  • Detecting Signals: How and What
  • Data Normalization and Quality Control
  • Analysis of Multiple Expression Profiles
  • Data Normalization and Quality Control
Image Data Analysis
Generic and robust approach for the analysis of spot array images
Norbert Braendle, Horst Bischof, Hilmar Lapp
We present a generic and robust image analysis approach applicable to image data resulting from a broad class of hybridization experiments. The ultimate image analysis goal is to automatically assign a quantity to every array element (spot), giving information about the hybridization signal. Irrespective of the quantification strategy, the most important preliminary information to extract about a spot is the mapping between its location in the digital image and its position in the spot grid (grid fitting). We present a grid fitting approach divided into a spot amplification step (matched filter), a rotation estimation step (Radon transform), and a grid spanning step. Quantification of the hybridization signals is performed with different fitting approaches. The primary approach is a robust fitting of a parametric model with the help of M-estimators. The main advantage of parametric spot fitting is its ability to cope with overlapping spots. If the goodness-of-fit is insufficient, a semi-parametric spot fitting is employed instead.
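As a rough illustration of the grid fitting pipeline described above, the sketch below amplifies spots with a Gaussian matched filter and estimates the grid rotation from the variance of Radon projections. It assumes NumPy, SciPy, and scikit-image; the function name and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon

def estimate_grid_rotation(image, spot_sigma=3.0, max_tilt=5.0):
    """Estimate the rotation (degrees) of the spot grid in an array image."""
    # Matched filter: a Gaussian kernel roughly matching the spot profile
    # amplifies spot-like structures and suppresses background noise.
    amplified = gaussian_filter(image.astype(float), sigma=spot_sigma)
    # Radon transform: projections taken parallel to the grid rows are
    # strongly periodic, so their variance peaks at the true grid angle.
    tilts = np.linspace(-max_tilt, max_tilt, 101)
    sinogram = radon(amplified, theta=tilts + 90.0, circle=False)
    return tilts[np.argmax(sinogram.var(axis=0))]
```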
GLEAMS: a novel approach to high-throughput genetic microarray image capture and analysis
Zheng Zheng Zhou, Jaime A. Stein, Qien Zhou Ji
GLEAMS is a robust, stable and accurate image capture and quantification method for microarrays. It is capable of fully and automatically detecting and quantifying the expression spots. This can be done in a batch mode, without human intervention, achieving a high throughput of parallel data processing. Simple-to-use visual tools are provided to estimate parameters and to submit, monitor and control job execution. The unsupervised batch auto-alignment is based on a novel method requiring only knowledge of the number of rows and columns of dots in the array. Distances between dots along rows and columns are estimated from the image's autocorrelation function. This is also used to align the array with the sides of the image. Applying intensity and geometric constraints to the cross-correlation function between the image and a template sub-array, the location of the sub-arrays can be determined. Carefully implemented, the algorithm can approach human vision in its sensitivity and accuracy in finding the general positions of dots in a microarray image. Subsequent spot quantification uses Otsu's thresholding method followed by morphological operations, including the application of a constraining shape mask. Segmentation techniques are applied to detect and remove speckles from the targets and to ensure the veracity of the data extracted.
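Two of the GLEAMS ingredients named above lend themselves to short sketches: estimating the dot pitch from the image autocorrelation, and segmenting spots with Otsu's threshold plus a morphological opening to remove speckles. This is a minimal approximation assuming NumPy and scikit-image, not the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk

def estimate_spacing(image):
    """Estimate the dot pitch (pixels) along rows from the autocorrelation."""
    row_profile = image.astype(float).mean(axis=0)
    row_profile -= row_profile.mean()
    acf = np.correlate(row_profile, row_profile, mode="full")
    acf = acf[acf.size // 2:]  # keep non-negative lags only
    # The first local maximum after lag 0 corresponds to the dot pitch.
    peaks = np.where((acf[1:-1] > acf[:-2]) & (acf[1:-1] > acf[2:]))[0] + 1
    return int(peaks[0]) if peaks.size else None

def segment_spots(image):
    """Binary spot mask via Otsu thresholding plus small-speckle removal."""
    mask = image > threshold_otsu(image)
    return opening(mask, disk(2))  # opening removes speckles, keeps round spots
```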
Statistical issues in signal extraction from microarrays
Tracy Bergemann, Filemon Quiaoit, Jeffrey J. Delrow, et al.
Microarray technologies are increasingly used in biomedical research to study genome-wide expression profiles in the post-genomic era. Their popularity is largely due to their high throughput and affordability. For example, microarrays have been applied to studies of the cell cycle, regulatory circuitry, cancer cell lines, tumor tissues, and drug discovery. One obstacle facing the continued success of applying microarray technologies, however, is the random variation present on microarrays: within signal spots, between spots, and among chips. In addition, signals extracted by available software packages seem to vary significantly. Despite a variety of software packages, it appears that there are two major approaches to signal extraction. One approach is to focus on the identification of signal regions and hence estimation of signal levels above background levels. The other approach is to use the distribution of intensity values as a way of identifying relevant signals. Building upon both approaches, the objective of our work is to develop a method that is statistically rigorous as well as efficient and robust. Statistical issues to be considered here include: (1) how to refine grid alignment so that the overall variation is minimized, (2) how to estimate the signal levels relative to the local background levels as well as the variance of this estimate, and (3) how to integrate red and green channel signals so that the ratio of interest is stable, while simultaneously relaxing distributional assumptions.
Groundtruth approach to accurate quantitation of fluorescence microarrays
Laura Mascio Kegelmeyer, Lisa Tomascik-Cheeseman, Melinda S. Burnett, et al.
To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results substantially closer to the known groundtruth for these samples.
Statistical inference methods for gene expression arrays
Robert Nadon, Peide Shi, Adonis Skandalis, et al.
Gene expression arrays present unique challenges for statistical inference. The typically small number of replicated expression values in array studies makes the use of standard parametric statistical tests problematic. Such tests have low sensitivity and return potentially inaccurate probability values. This paper describes novel alternative statistical modeling procedures which circumvent these difficulties by pooling random error estimates obtained from replicate expression values. The procedures, which can be used with both micro- and macro-arrays, include outlier detection, confidence intervals, statistical tests of differences between conditions, and statistical power analysis for determining the number of replicates needed to detect between-condition differences of a specified magnitude. The methods are illustrated with experimental data.
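A minimal sketch of the pooling idea: because per-gene variance estimates from a handful of replicates are unstable, a common error term is estimated by pooling across genes and then used in a z-like test. This is a simplified stand-in for the paper's procedures, with hypothetical function and parameter names.

```python
import numpy as np
from scipy.stats import norm

def pooled_test(cond_a, cond_b):
    """cond_a, cond_b: (genes x replicates) arrays of log-expression values."""
    diff = cond_a.mean(axis=1) - cond_b.mean(axis=1)
    # Pool the residual variance over all genes in both conditions.
    pooled_var = 0.5 * (cond_a.var(axis=1, ddof=1).mean()
                        + cond_b.var(axis=1, ddof=1).mean())
    se = np.sqrt(pooled_var * (1.0 / cond_a.shape[1] + 1.0 / cond_b.shape[1]))
    z = diff / se
    return z, 2.0 * norm.sf(np.abs(z))  # two-sided p-values per gene
```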
Rank-based algorithms for analysis of microarrays
Wei-min Liu, Rui Mei, Daniel M. Bartell, et al.
Analysis of microarray data often involves extracting information from the raw intensities of spots of cells and making certain calls. Rank-based algorithms are powerful tools for providing probability values of hypothesis tests, especially when the distribution of the intensities is unknown. For our current gene expression arrays, a gene is detected by a set of probe pairs consisting of perfect match and mismatch cells. The one-sided upper-tail Wilcoxon signed rank test is used in our algorithms for absolute calls (whether a gene is detected or not), as well as comparative calls (whether a gene is increased, decreased, or not significantly changed in one sample compared with another). We also test the possibility of using only perfect match cells to make calls. This paper focuses on absolute calls. We have developed error analysis methods and software tools that allow us to compare the accuracy of the calls in the presence or absence of mismatch cells at different target concentrations. The use of nonparametric rank-based tests is not limited to absolute and comparative calls on gene expression chips. They can also be applied to other oligonucleotide microarrays for genotyping and mutation detection, as well as to spotted arrays.
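The absolute call can be illustrated directly with SciPy's Wilcoxon signed rank test applied to perfect match minus mismatch intensities for one probe set. The alpha cutoff below is a placeholder; the paper's actual calling thresholds are not reproduced.

```python
import numpy as np
from scipy.stats import wilcoxon

def absolute_call(pm, mm, alpha=0.05):
    """Return 'Present' if PM intensities significantly exceed MM intensities."""
    pm, mm = np.asarray(pm, float), np.asarray(mm, float)
    # One-sided upper-tail test of H0: median(PM - MM) <= 0.
    _, p = wilcoxon(pm - mm, alternative="greater")
    return ("Present" if p < alpha else "Absent"), p
```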
Detecting Signals: How and What
RAD (RNA abundance database): an infrastructure for array data analysis
Elisabetta Manduchi, Angel Pizarro, Christian Stoeckert Jr.
Analysis of array-based gene expression experiments is challenging, particularly when multiple experiments are involved, and it presents data management challenges as well. Selecting well-measured spots and normalizing raw data are basic steps required for subsequent analyses, especially those involving comparisons over a collection of experiments. Other preprocessing steps might also be needed for certain analyses. The most appropriate criteria for spot selection, for normalization, and for other data transformations depend on the experiments under study and on the questions investigated. Furthermore, comparing experiments appropriately requires knowledge of how the experiments were performed and of the samples that were used, in sufficient detail to understand their degree of similarity. Approaches taken in RAD to address these issues will be presented. These include the storage of raw and processed data along with history and parameter tracking, and the use of ontologies to provide precise, consistent experimental descriptions.
Sophisticated lenses for microarray analysis
Frank Guse, Jakob Bleicher
The correct choice of the optical system is of major importance for the performance of fluorescence-based microarray analysis instruments. A variety of different approaches is possible. The basic concepts are (a) imaging of an extended field onto a spatially resolving detector, (b) scanning the microarray with a flat-field lens, and (c) scanning by laterally shifting a confocal optical system. Depending on the approach, the optical system may range from a lightweight lens of a few grams to a heavy system a hundred millimeters in diameter. The approach also determines the achievable optical performance in terms of spatial resolution on the microarray and numerical aperture. Whereas increasing the numerical aperture captures more fluorescent photons, a higher spatial resolution avoids cross-talk between spots. Considering the optics alone, however, will not yield the most efficient concept. The optical system is one integrated part among a number of complex, challenging components. Therefore, the requirements on the optics often have to be balanced against other system-specific constraints. A custom-tailored optical design for an instrument under development allows the best compromise and is thereby a fundamental step toward achieving maximum overall performance.
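The trade-off mentioned above between numerical aperture and captured photons follows from the collected solid angle. The sketch below is a textbook estimate, not specific to this paper: a lens of numerical aperture NA in a medium of index n collects the solid-angle fraction (1 - cos θ)/2 of isotropically emitted fluorescence, where sin θ = NA/n.

```python
import math

def collection_fraction(na, n=1.0):
    """Fraction of isotropically emitted photons collected at numerical aperture na."""
    theta = math.asin(na / n)           # half-angle of the collection cone
    return (1.0 - math.cos(theta)) / 2.0

# collection_fraction(0.5) -> ~0.067, collection_fraction(1.0) -> 0.5,
# illustrating why a higher NA captures substantially more photons.
```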
Designing oligo libraries taking alternative splicing into account
Avi Shoshan, Vladimir Grebinskiy, Avner Magen, et al.
We have designed sequences for DNA microarrays and oligo libraries taking alternative splicing into account. Alternative splicing is a common phenomenon, occurring in more than 25% of human genes. In many cases, different splice variants have different functions, are expressed in different tissues, or may indicate different stages of disease. When designing sequences for DNA microarrays or oligo libraries, it is therefore important to take into account the sequence information of all mRNA transcripts: when a gene has more than one transcript (as a result of alternative splicing, alternative promoter sites, or alternative poly-adenylation sites), all of them should be considered in the design. We have used the LEADS transcriptome prediction system to cluster and assemble the human sequences in GenBank and to design optimal oligonucleotides for all human genes with a known mRNA sequence based on the LEADS predictions.
Signal amplification on microarrays: techniques and advances in tyramide signal amplification (TSA)
Karl E. Adler, Mary C. Tyler, Alvydas Mikulskis, et al.
Increased sensitivity for differential mRNA expression analysis on microarrays is rapidly becoming a serious need as the technology matures. Current techniques using directly cyanine-labeled targets are effective for expression analysis of abundant mRNA sources but have limited utility where mRNA quantities are limited. Tyramide signal amplification (TSA™) applied to microarray detection provides dramatic improvements in sensitivity, allowing the reduction of sample sizes by as much as 200-fold. The technique includes hapten labeling of two separate RNA populations, microarray hybridization, and detection of each hapten with sequential signal amplification steps. The system uses fluorescein and biotin nucleotide analogs as the hapten pair. Hybridized fluorescein- and biotin-labeled targets are sequentially reacted with horseradish peroxidase and cyanine 3 and cyanine 5 tyramides, resulting in numerous depositions of these fluorophores on the array. Differential gene expression analysis of LNCaP and PC3 prostate cancer cell lines using one microgram of total RNA and TSA detection indicates good correlation with results obtained starting with 100 micrograms (µg) of total RNA in a conventional cyanine 3 and cyanine 5 nucleotide analog labeling and detection system (i.e., the direct method).
Multichannel expression analysis of submicrogram total RNA samples without enzymatic amplification using a one-day protocol
Robert C. Getts
Typical gene expression array analysis requires relatively large quantities of total or poly A+ RNA. Samples prepared by techniques such as laser capture microdissection (LCM) and single-cell expression analysis yield relatively little RNA and traditionally require the purification of poly A+ message and subsequent enzymatic amplification before analysis on an array. This type of analysis may be biased toward the detection of certain messages, is labor intensive and time consuming, and requires considerable expertise for reproducible success. The 3DNA™ Submicro™ expression detection kit has been developed to detect low-level expression from a microgram or less of total RNA in one day. The method does not require enzymatic amplification or the direct incorporation of a modified nucleotide during probe synthesis, and it is simple and easy to use. The 3DNA™ detection system is based on patented DNA dendrimers that contain hundreds of fluorescent labels. Signal is generated by the dendrimer after it binds to the cDNA probe (sample) via hybridization of the dendrimer to a capture sequence that is part of the original reverse transcription primer. A 50- to 200-fold improvement of specific signal over noise compared with direct incorporation methods has been demonstrated. The theory and use of the 3DNA™ Submicro™ technology will be discussed for 2-, 3-, and 4-channel analysis.
Comparative examination of probe labeling methods for microarray hybridization
David I. Burke, Karen Woodward, Robert A. Setterquist, et al.
For detection of differential gene expression, confocal laser-based scanners are now capable of analyzing microarrays using one to five wavelengths. This allows investigators to choose among several labeling methods. Here we compare direct incorporation and indirect methods (amino-allyl and dendrimers) for labeling cDNA probes. We assessed the reproducible sensitivity of each probe preparation method in two ways: first, by comparing hybridization intensities at the limit of signal detection, and second, by measuring the lowest detectable concentration of a known ratio of mixed DNA (spikes). The limit-of-detection assay was performed using arrays of mixed targets consisting of a serially diluted human-specific gene fragment (HU1) and undiluted DNA of the chloramphenicol acetyltransferase (CAT) gene. Then, individual single-target arrays of CAT and HU1 DNA were used to determine the lowest detectable spike ratio for each labeling method. The results of this study will be presented and their significance for the analysis of microarrays will be discussed.
Data Normalization and Quality Control
Estimation of the confidence limits of oligonucleotide-array-based measurements of differential expression
Glenda Delenstarr, Herb Cattell, Chao Chen, et al.
Microarrays can be used to simultaneously measure the differential expression states of many mRNAs in two samples. Such measurements are limited by systematic and random errors. Systematic errors include labeling bias, imperfect feature morphologies, mismatched sample concentrations, and cross-hybridization. Random errors arise from chemical and scanning noise, particularly for low signals. We have used a combination of fluor-exchanged two-color labeling and improved normalization methods to minimize systematic errors from labeling bias, imperfect features, and mismatched sample concentrations. On-array specificity control probes and experimentally proven probe design algorithms were used to correct for cross-hybridization. Random errors were reduced via automated non-uniform feature flagging and an advanced scanner design. We have scored feature significance using established statistical tests. We have then estimated the intrinsic random measurement error as a function of average probe signal via sample self-comparison experiments (human K-562 cell mRNA). Finally, we have combined all of these tools in the analysis of differential expression measurements between K-562 cells and HeLa cells. The results establish the importance of eliminating systematic errors and of objectively assessing the effects of random errors in producing reliable estimates of differential expression.
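The self-comparison step described above can be sketched as follows: with the same sample in both channels, the spread of log-ratios within bins of average signal estimates the intrinsic measurement error at each signal level. The bin count is a placeholder; this is not the authors' code.

```python
import numpy as np

def error_vs_signal(red, green, n_bins=20):
    """Estimate random error as a function of signal from a self-self array."""
    M = np.log2(red) - np.log2(green)        # should scatter around 0 here
    A = 0.5 * (np.log2(red) + np.log2(green))
    edges = np.quantile(A, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(A, edges) - 1, 0, n_bins - 1)
    # Per bin: mean signal level and the standard deviation of log-ratios.
    return np.array([(A[idx == b].mean(), M[idx == b].std(ddof=1))
                     for b in range(n_bins)])
```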
Maximum-likelihood estimation of optimal scaling factors for expression array normalization
Alexander J. Hartemink, David K. Gifford, Tommi S. Jaakkola, et al.
Data from expression arrays must be comparable before they can be analyzed rigorously on a large scale. Accurate normalization improves the comparability of expression data because it seeks to account for sources of variation obscuring the underlying variation of interest. Undesirable variation in reported expression levels originates in the preparation and hybridization of the sample as well as in the manufacture of the array itself, and may differ depending on the array technology being employed. Published research to date has not characterized the degree of variation associated with these sources, and results are often reported without tight statistical bounds on their significance. We analyze the distributions of reported levels of exogenous control species spiked into samples applied to 1280 Affymetrix arrays. We develop a model for explaining reported expression levels under an assumption of primarily multiplicative variation. To compute the scaling factors needed for normalization, we derive maximum likelihood and maximum a posteriori estimates for the parameters characterizing the multiplicative variation in reported spiked control expression levels. We conclude that the optimal scaling factors in this context are weighted geometric means, and we determine the appropriate weights. The optimal scaling factor estimates so computed can be used for subsequent array normalization.
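Assuming purely multiplicative (log-normal) variation, as the paper does, a weighted geometric mean scaling factor can be sketched as below. The inverse-variance weighting shown is an assumption standing in for the weights actually derived in the paper.

```python
import numpy as np

def scaling_factor(control_levels, log_variances):
    """Weighted geometric mean of spiked-control levels on one array."""
    logs = np.log(np.asarray(control_levels, float))
    w = 1.0 / np.asarray(log_variances, float)  # inverse-variance weights
    w = w / w.sum()
    # Averaging logs with weights and exponentiating gives the
    # weighted geometric mean used as the array's scaling factor.
    return np.exp(np.sum(w * logs))
```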
Normalization for cDNA microarray data
Yee Hwa Yang, Sandrine Dudoit, Percy Luu, et al.
There are many sources of systematic variation in microarray experiments which affect the measured gene expression levels. Normalization is the term used to describe the process of removing such variation, e.g. for differences in labeling efficiency between the two fluorescent dyes. In this case, a constant adjustment is commonly used to force the distribution of the log-ratios to have a median of zero for each slide. However, such global normalization approaches are not adequate in situations where dye biases can depend on spot overall intensity and location on the array (print-tip effects). This paper describes normalization methods that account for intensity and spatial dependence in the dye biases for different types of cDNA microarray experiments, including dye-swap experiments. In addition, the choice of the subset of genes to use for normalization is discussed. The subset selected may be different for experiments where only a few genes are expected to be differentially expressed and those where a majority of genes are expected to change. The proposed approaches are illustrated using gene expression data from a study of lipid metabolism in mice.
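A minimal sketch contrasting the two normalizations discussed above: global median-centering of log-ratios M = log2(R/G) versus intensity-dependent centering that subtracts a lowess fit of M against average log-intensity A. It uses statsmodels' lowess; the smoothing fraction is a placeholder, and per-print-tip grouping is omitted for brevity.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def normalize(red, green, frac=0.3):
    """Return globally and intensity-normalized log-ratios for one slide."""
    M = np.log2(red) - np.log2(green)
    A = 0.5 * (np.log2(red) + np.log2(green))
    m_global = M - np.median(M)                # constant (global) adjustment
    fit = lowess(M, A, frac=frac, return_sorted=False)
    m_intensity = M - fit                      # intensity-dependent adjustment
    return m_global, m_intensity
```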
Normalization and error estimation for biomedical expression patterns
Stanley D. Luck
We describe methods for the analysis of expression patterns based on a vector representation of intensities. Normalization for multiple experiments is iterative and is obtained by identifying a vector that optimally correlates expression patterns. Error estimates for intensities are obtained by analyzing deviations in expression relative to the correlation vector. A normalization method for applying corrections from spiked, two-color fluorescence measurements is also described.
Random signal model for cDNA microarrays
The images resulting from cDNA microarrays are highly random. There are many aspects to this randomness, including spot size, shape, intensity, uniformity, and circularity, as well as both foreground and background noise. This paper presents a random model for the generation of microarray images. The model is complicated and contains over 20 parameters. It can be used to test microarray imaging algorithms and to simulate the effects of various dependencies within the image formation process.
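An illustrative fragment of such a generative model, rendering a single spot with random center jitter, radius, peak intensity, and additive background noise. The model's 20+ parameters are not reproduced here; the few below are stand-ins.

```python
import numpy as np

def simulate_spot(size=32, rng=None):
    """Render one synthetic microarray spot on a noisy background."""
    rng = rng or np.random.default_rng()
    yy, xx = np.mgrid[0:size, 0:size]
    cy, cx = size / 2 + rng.normal(0, 1.5, 2)   # random center jitter
    r = rng.uniform(5, 9)                        # random spot radius
    peak = rng.lognormal(mean=6.0, sigma=0.5)    # random peak intensity
    spot = peak * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * (r / 2) ** 2))
    background = rng.normal(100.0, 10.0, spot.shape)  # additive background noise
    return spot + background
```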
Analysis of Multiple Expression Profiles
Processing and modeling genome-wide expression data using singular value decomposition
Orly Alter, Patrick O. Brown, David Botstein
We describe the use of singular value decomposition in transforming genome-wide expression data from genes x arrays space to reduced diagonalized eigengenes x eigenarrays space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent additive or multiplicative noise, experimental artifacts, or even irrelevant biological processes enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively.
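The decomposition can be sketched with a plain SVD of the genes x arrays matrix, zeroing the modes inferred to represent noise before reconstruction. Which eigengenes count as noise is experiment-specific; keeping only the leading modes below is purely illustrative.

```python
import numpy as np

def svd_filter(X, keep):
    """X: genes x arrays matrix; keep: number of significant modes to retain."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rows of Vt are eigengenes; columns of U are eigenarrays.
    s_filtered = np.where(np.arange(s.size) < keep, s, 0.0)
    return (U * s_filtered) @ Vt  # data reconstructed without noisy modes
```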
Classification of microarray data with penalized logistic regression
Paul H. C. Eilers, Judith M. Boer, Gert-Jan van Ommen, et al.
Classification of microarray data needs a firm statistical basis. In principle, logistic regression can provide it, modeling the probability of membership of a class with (transforms of) linear combinations of explanatory variables. However, classical logistic regression does not work for microarrays, because generally there will be far more variables than observations. One problem is multicollinearity: estimating equations become singular and have no unique and stable solution. A second problem is over-fitting: a model may fit a data set well but perform badly when used to classify new data. We propose penalized likelihood as a solution to both problems. The values of the regression coefficients are constrained in a similar way as in ridge regression. All variables play an equal role; there is no ad hoc selection of the most relevant or most expressed genes. The dimension of the resulting system of equations is equal to the number of variables and generally will be too large for most computers, but it can be dramatically reduced with the singular value decomposition of some matrices. The penalty is optimized with AIC (Akaike's Information Criterion), which essentially is a measure of prediction performance. We find that penalized logistic regression performs well on a public data set (the MIT ALL/AML data).
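In the same spirit, a ridge-penalized logistic regression over all genes can be sketched with scikit-learn. Cross-validation is used below to choose the penalty, standing in for the paper's AIC criterion and explicit SVD reduction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fit_penalized(X, y, penalties=(0.01, 0.1, 1.0, 10.0)):
    """X: samples x genes (far more genes than samples); y: class labels."""
    best = None
    for lam in penalties:
        # L2 penalty constrains coefficients as in ridge regression.
        clf = LogisticRegression(penalty="l2", C=1.0 / lam, max_iter=5000)
        score = cross_val_score(clf, X, y, cv=5).mean()
        if best is None or score > best[0]:
            best = (score, lam, clf)
    return best[2].fit(X, y)  # refit the best-penalty model on all data
```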
Statistical approaches to analyzing multichip data
James Roy Johnson, Patrick Hurban, Jeff Woessner, et al.
Processing large quantities of microarrays designed for high-throughput gene expression profiling presents a completely new set of challenges that must be addressed if biologically meaningful data are to be generated for statistical analysis. Sources of variation fall naturally into two classes: instrument variation and biological variation. Each source of variation must be adequately addressed by controlling systematic instrument and operation error, building empirically derived error models, and adequately characterizing the variability observed in biological controls. Finally, the tools used to derive biological meaning from gene expression profiling data must tie closely to the error models and the processes used to generate these data. Robust statistical techniques are appropriate methods for the analysis of gene expression profiling data derived from microarrays, provided the sources of variation are adequately characterized and quantified. No matter how complex or powerful the analysis tools may be, if they are not designed and utilized in this context then the results may remain questionable. At Paradigm Genetics, the implementation of these techniques within the gene expression profiling platform for the mustard plant Arabidopsis thaliana is providing a basis for integrated analysis of microarray data.
Finding robust linear expression-based classifiers
Seungchan Kim, Edward R. Dougherty, Junior Barrera, et al.
A key goal for the use of gene-expression microarrays is to perform classification via different expression patterns. The typical small sample obtained and the large numbers of variables make the task of finding good classifiers extremely difficult, from the perspectives of both design and error estimation. This paper addresses the issue of estimation variability, which can result in large numbers of gene sets that have highly optimistic error estimates. It proposes performing classification on probability distributions derived from the original sample points by spreading the mass of those points to make classification more difficult while retaining the basic geometry of the point locations. This is done in a parameterized fashion, based on the degree to which the mass is spread. The method is applied to linear classifiers.
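A hedged sketch of the spreading idea: each sample point is replaced by several jittered copies (spreading its mass) before a linear classifier is fit, so gene sets that separate only the exact sample points score worse. The spread, copy count, and the choice of linear discriminant analysis are placeholders, not the paper's parameterization.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_on_spread(X, y, spread=0.2, copies=20, rng=None):
    """Fit a linear classifier on mass-spread versions of the sample points."""
    rng = rng or np.random.default_rng()
    # Spread each point's mass: jittered copies retain the basic geometry
    # of the point locations while making classification harder.
    Xs = np.repeat(X, copies, axis=0)
    Xs = Xs + rng.normal(0, spread, Xs.shape)
    ys = np.repeat(y, copies)
    return LinearDiscriminantAnalysis().fit(Xs, ys)
```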
Parallel computing methods for analyzing gene expression relationships
Edward B. Suh, Edward R. Dougherty, Seungchan Kim, et al.
This paper presents a parallel program for assessing the codetermination of gene transcriptional states from large-scale simultaneous gene expression measurements with cDNA microarrays. The parallel program is based on a nonlinear statistical framework recently proposed for the analysis of gene interaction via multivariate expression arrays. Parallel computing is key in applying the statistical framework to a large set of genes because a prohibitive amount of computer time is required on a classical single-CPU machine. Our parallel program, named the Parallel Analysis of Gene Expression (PAGE) program, exploits the inherent parallelism exhibited in the proposed codetermination prediction models. By running PAGE on 64 processors of a Beowulf cluster (a clustered parallel system), an analysis of melanoma cDNA microarray expression data was completed within 12 days of computer time, an analysis that would have required about one and a half years on a single-CPU computing system. A data visualization program, named the Visualization of Gene Expression (VOGE) program, has been developed to help interpret the massive amount of quantitative information produced by PAGE. VOGE provides graphical data visualization and analysis tools with filters, histograms, and access to other genetic databanks for further analysis of the quantitative information.
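The structure PAGE exploits is embarrassingly parallel: each candidate gene set is scored independently, so the scan distributes naturally over worker processes. The sketch below uses Python's multiprocessing as a single-machine analogue of the Beowulf setup; score_gene_set is a hypothetical placeholder for the paper's prediction-model evaluation.

```python
from multiprocessing import Pool

def score_gene_set(gene_set):
    # Placeholder for evaluating one codetermination prediction model;
    # each gene set is scored independently of all the others.
    return len(gene_set)  # hypothetical stand-in score

def parallel_scan(gene_sets, workers=64):
    # Independent scores let the scan fan out across a pool of workers
    # (guard with `if __name__ == "__main__":` when run as a script).
    with Pool(processes=workers) as pool:
        return pool.map(score_gene_set, gene_sets)
```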
Time series inference from clustering
Edward R. Dougherty, Junior Barrera, Marcel Brun, et al.
This paper presents a toolbox for analyzing inferences drawn from clustering. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. These classes represent different random vectors. Each random vector is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random vectors. Clustering algorithms are evaluated based on class variance and on performance improvement with respect to increasing numbers of experimental replications. The study is presented on a website, which includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. There, the toolbox is applied to gene-expression clustering based on cDNA microarrays using real data.
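A minimal sketch of the evaluation loop described above: sample points from known classes (mean plus independent noise), cluster them, and count misassignments after matching cluster labels to classes. k-means stands in for whichever clustering algorithm is under test.

```python
import numpy as np
from itertools import permutations
from sklearn.cluster import KMeans

def clustering_error(means, sigma, n_per_class, rng=None):
    """Fraction of points clustered inconsistently with their generating class."""
    rng = rng or np.random.default_rng()
    pts = np.vstack([m + rng.normal(0, sigma, (n_per_class, len(m)))
                     for m in means])
    truth = np.repeat(np.arange(len(means)), n_per_class)
    labels = KMeans(n_clusters=len(means), n_init=10).fit_predict(pts)
    # Cluster labels are arbitrary: score under the best label permutation.
    best = min(int((labels != np.asarray(p)[truth]).sum())
               for p in permutations(range(len(means))))
    return best / len(truth)
```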
Analysis of gene expression data of the NCI 60 cancer cell lines using a Bayesian hierarchical effects model
Jae K. Lee, Uwe Scherf, Lawrence H. Smith, et al.
Since the end of the last decade, NCI has been performing large-scale screening of anticancer drug compounds and molecular targets on a pool of 60 cell lines of various types of cancer. In particular, a complete set of cDNA expression array data on the 60 cell lines is now available. To discover differentially expressed genes in each type of cancer cell line, we need to estimate a large number of genetic parameters, especially interaction effects for all combinations of cancer types and genes, by decomposing the total variance into biological and array-instrumental components. This error decomposition is important for identifying subtle genes with low biological variability. An innovative statistical method is required for simultaneously estimating more than 100,000 parameters of interaction effects and error components. We propose a Bayesian statistical approach based on the construction of a hierarchical model adopting the parameterization of a linear effects model. The estimation of the model parameters is performed by Markov chain Monte Carlo, a recent computer-intensive statistical resampling technique. We have identified novel genes whose effects had not been revealed by previous clustering approaches to the gene expression data.
Genetic network models: a comparative study
Currently, the need arises for tools capable of unraveling the functionality of genes based on the analysis of microarray measurements. Modeling genetic interactions by means of genetic network models provides a methodology to infer functional relationships between genes. Although a wide variety of different models has been introduced so far, it remains, in general, unclear what the strengths and weaknesses of each of these approaches are and where these models overlap and differ. This paper compares different genetic modeling approaches that attempt to extract the gene regulation matrix from expression data. A taxonomy of continuous genetic network models is proposed, and the following important characteristics are suggested and employed to compare the models: inferential power, predictive power, robustness, consistency, stability, and computational cost. Where possible, synthetic time series data are employed to investigate some of these properties. The comparison shows that although genetic network modeling might provide valuable information regarding genetic interactions, current models show disappointing results on simple artificial problems. For now, the simplest models are favored because they generalize better, but more complex models will probably prevail once their bias is more thoroughly understood and their variance is better controlled.
Simulator for gene expression networks
Hugo A. Armelin, Junior Barrera, Edward R. Dougherty, et al.
This paper presents a simulator for gene expression networks, based on the model of chain dynamical systems (CDS). It gives the definition of CDS, describes the simulator architecture, the language adopted for describing CDS, and the available outputs. Finally, a real genetic network is studied: a subsystem of the genetic network that controls cell cycle of adrenocortical cells of the Y1 cultured cell line.
Data Normalization and Quality Control
Modified confocal scanner system for microarrays
Ian C. Hsu, Ray Chen, Yen L. Chen
Based on the original design of the confocal scanner system by Dr. Eisen and his colleagues at Stanford Univ., we have developed a modified confocal scanner system. The modified scanner uses only one PMT instead of the two PMTs in its predecessor. About half of the dichroic beamsplitters were replaced by reflection mirrors in the modified system. We are therefore able to reduce the laser power by a factor of 1.5. This enhances the stability of the laser power and increases the accuracy of the detection system. The size of the modified system was decreased thanks to the simplified optics. The optical system and the preliminary scanning results of this modified confocal scanner system will be presented in this paper. We will also discuss some possible further improvements of the system.