Comparing CMOS sensor technologies
Machine vision imaging technology is used in applications such as automatic inspection and process control. The industry is typically driven by image quality, inspection throughput, and cost, all of which involve a core hardware component: the industrial camera. Several aspects of the camera system can be improved: objects can be viewed in greater detail by increasing the resolution of a camera under the same optical field-of-view conditions; measurement uncertainty can be reduced by lowering the noise floor of the camera; and increasing the frame rate can relieve what is often the bottleneck of a system. The requirements of a particular application normally predetermine camera resolution and throughput, so the camera noise floor—see Figure 1(a)—is the key factor with which different camera manufacturers and technologies are usually compared.1 However, full well depth is another critical, but often overlooked, parameter for end users of machine vision/inspection technologies.
Full well depth is a measure (in electrons) of the maximum charge that can be generated and stored by each pixel of an image sensor. This parameter sets the noise floor of a camera—see Figure 1(b)—with a larger full well giving a lower overall noise floor and a greater dynamic range (the ratio of full well depth to overall read noise). For CMOS image sensors, full well depth is proportional to the photon-sensitive area per pixel. Large pixels, however, mean a lower sensor resolution (pixel array density) for a given chip area, and recovering that resolution with a bigger chip dramatically increases production cost.
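The dynamic-range definition above (full well depth over overall read noise, expressed in dB) can be sketched in a few lines. The numbers used here are illustrative placeholders, not values from Table 1:

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range in dB: ratio of full well depth to overall read noise."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical sensor: 30 ke- full well, 5 e- read noise
print(f"{dynamic_range_db(30_000, 5):.1f} dB")  # 75.6 dB
```

Doubling the full well at the same read noise adds about 6 dB, which is why large-pixel sensors dominate the dynamic-range column of comparison tables.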
Table 1. Comparison of currently leading CMOS 2D imager technologies.

| CMOS chip | Pixel size (μm) | Full well (ke−) | Readout noise (e−) | Dynamic range (dB) | FPS |
We compare the currently leading types of CMOS 2D imager technology in Table 1. The type I camera has the largest pixel size and therefore the biggest full well depth and the lowest noise floor (see Figure 2). In the type II and type III technologies, photon-sensitive area is traded for more transistors to give higher frame rates. Type V cameras represent typical scientific CMOS technology and have the smallest pixel size, giving the highest image resolution; full well depth and read noise are balanced by compromising on frame rate, and these cameras require active cooling to achieve their best performance. In the type IV technology, all of these factors are balanced to satisfy the requirements of the mainstream market.
When applications operate in the shot-noise zone, with a strong input signal, the noise profile of type I cameras can be up to 4 gray levels (GL) lower than that of the other camera types (in 8-bit mode, with a 200 GL average background). This is particularly important in applications such as the inspection of semiconductor micro-defects, where the contrast is typically only a few gray levels. If the camera noise floor is higher than the defect contrast, the defect cannot be distinguished from the rough image background. However, a large photon-sensitive area commonly leaves room for fewer transistors per pixel. This can contribute to large numbers of dark electrons or high read noise, which affects inspections with a weak input signal.
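The link between full well depth and shot noise in gray levels can be made concrete. Assuming the full ADC range maps linearly onto the full well, Poisson shot noise at a fixed background level shrinks (in GL) as the full well grows. The full well values below are hypothetical, chosen only to show the trend, not figures from Table 1:

```python
import math

def shot_noise_gl(full_well_e: float, background_gl: int = 200, bit_depth: int = 8) -> float:
    """Shot-noise standard deviation in gray levels (GL) at a given background.

    Assumes the full ADC range (0 .. 2^bits - 1) maps linearly onto the full well.
    """
    max_gl = 2 ** bit_depth - 1
    electrons = background_gl / max_gl * full_well_e  # signal in electrons
    sigma_e = math.sqrt(electrons)                    # Poisson shot noise
    return sigma_e * max_gl / full_well_e             # convert back to gray levels

# Hypothetical full well depths:
print(f"large pixel (30 ke-): {shot_noise_gl(30_000):.2f} GL")  # ~1.30 GL
print(f"small pixel (2 ke-):  {shot_noise_gl(2_000):.2f} GL")   # ~5.05 GL
```

A few-GL defect contrast sits above the first camera's shot noise but below the second's, which is why the large full well matters for micro-defect inspection.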
The same CMOS technologies manufactured by different companies can exhibit noise floor variations (see Figure 3), because manufacturers hold different opinions on the optimum operating point of a specific chip. The maximum full well depth threshold can therefore differ among cameras, resulting in different noise floor profiles. This causes variations in the saturation point (where the noise floor decreases near saturation) and in the shot-noise amplitude. Camera sensitivity—the conversion ratio of the input photon signal to the output signal strength—is another important factor in determining the final output signal-to-noise ratio. Although type IV cameras have 20% lower full well depth than type II cameras, their sensitivity is four times higher (see Figure 4). The measured image signal-to-noise ratio is therefore higher than for the type II cameras (see Figure 5), by up to 74% on one particular semiconductor wafer inspection platform.
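A simple shot-plus-read-noise model shows why higher sensitivity can outweigh a somewhat smaller full well. All figures below are hypothetical, chosen only to echo the article's stated trend (type IV: 20% lower full well, four times the sensitivity of type II), not measured values:

```python
import math

def snr_db(photons: float, sensitivity: float, full_well_e: float, read_noise_e: float) -> float:
    """Output SNR (dB) under a shot-noise + read-noise model.

    sensitivity: electrons generated per incident photon; signal clips at full well.
    """
    electrons = min(photons * sensitivity, full_well_e)
    noise = math.sqrt(electrons + read_noise_e ** 2)  # shot + read noise in quadrature
    return 20 * math.log10(electrons / noise)

light = 50_000  # photons per pixel per exposure (hypothetical)
print(f"type II: {snr_db(light, 0.15, 30_000, 8):.1f} dB")
print(f"type IV: {snr_db(light, 0.60, 24_000, 8):.1f} dB")
```

At the same illumination, the more sensitive sensor collects more electrons and so sits higher on the shot-noise curve, despite its lower full well, matching the direction of the Figure 5 comparison.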
The development of CMOS technologies has improved the technical performance of industrial cameras. Nevertheless, specific cameras and technologies should be selected carefully according to the application requirements. Current trends in CMOS camera development focus on large image sensors with smaller individual pixels, which will be useful for surveillance applications. Those working on machine vision and inspection applications, in contrast, are developing high-speed cameras with moderate sensor resolution. This leaves room for larger photon-sensitive areas and more transistors, giving higher full well depths and overall lower noise profiles.
Wei Zhou has been working in the semiconductor defect inspection industry for more than 10 years. He has extensive experience with machine vision and inspection applications, and he specializes in system-level work that covers camera, imaging/illumination, and image-processing algorithms. He obtained his PhD from the University of Minnesota and holds several patents.