
Proceedings Paper

Development of a single-channel, three-view imaging system with classification model for defect and damage assessment of freefalling cereal grains
Author(s): I-Chang Yang; Stephen R. Delwiche; Y. Martin Lo

Paper Abstract

Currently, inspection of wheat in the United States for grade and class is performed by human visual analysis. This is a time-consuming operation, typically taking several minutes per sample. Digital imaging research has addressed this issue over the past two decades, with success in recognizing differing wheat classes and in distinguishing wheat from non-wheat species. Detection of wheat kernel defects, whether from damage or disease, has been a greater challenge. A study has been undertaken that uses high-speed black-and-white imaging at 10-bit photometric resolution to detect damaged kernels one kernel at a time. The system, composed of hardware (camera, lighting, power supplies, and data acquisition card), software (LabVIEW and MATLAB), and analytical (MATLAB and SAS) components, is designed to a) capture images of free-falling kernels at opposing angles through the use of optical-grade mirrors, b) parameterize the images, and c) perform classification. The system operates with a 1/30,000 s exposure time, though with restrictions on image transfer rate (60 Hz) and on the image processing routines for feature extraction (currently conducted offline). Fifty samples of hard red and white wheat subjected to weather-related damage during plant development were used in this study. Parametric (linear discriminant analysis) and non-parametric (k-nearest neighbor) classification models were tested to determine the image features that best foster recognition of the damage conditions of mold, sprout, and black tip. The morphological features used in classification included area, projected volume, perimeter, elliptical eccentricity, and major and minor axis lengths. Textural features from calculated gray level co-occurrence matrices (including contrast, correlation, energy, and homogeneity) are also under consideration, though not reported herein. So far, our results indicate that with as few as three image parameters, classification accuracy (damaged vs. sound) approaches 85 to 90 percent. Information learned from this study is intended to streamline feature extraction in image-based high-speed sorting.
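As a rough illustration of the analysis pipeline the abstract describes (not the authors' code), the sketch below shows how the named morphological features and GLCM texture measures could be extracted for a single kernel image in MATLAB, and how the two model families tested (linear discriminant analysis and k-nearest neighbor) would be fit. File names, the segmentation step, the gray-level count, and the neighbor count are assumptions; projected volume, which is not a standard regionprops property, is omitted.

    % Illustrative sketch only: extract morphological and GLCM texture
    % features for one kernel image, then fit LDA and k-NN classifiers.
    I  = imread('kernel_view1.png');      % single-channel kernel image (assumed file name)
    bw = imbinarize(I);                   % segment kernel from background
    bw = bwareafilt(bw, 1);               % keep the largest connected region

    % Morphological features named in the abstract
    m = regionprops(bw, 'Area', 'Perimeter', 'Eccentricity', ...
                    'MajorAxisLength', 'MinorAxisLength');

    % Texture features from a gray-level co-occurrence matrix
    glcm = graycomatrix(I, 'Offset', [0 1], 'NumLevels', 64, 'Symmetric', true);
    t    = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});

    features = [m.Area, m.Perimeter, m.Eccentricity, ...
                m.MajorAxisLength, m.MinorAxisLength, ...
                t.Contrast, t.Correlation, t.Energy, t.Homogeneity];

    % Given a feature matrix X (one row per kernel) and labels y
    % ('damaged' / 'sound'), the two classifier types would be fit as:
    % ldaModel = fitcdiscr(X, y);                   % linear discriminant analysis
    % knnModel = fitcknn(X, y, 'NumNeighbors', 5);  % k-nearest neighbor (k assumed)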

Paper Details

Date Published: 5 May 2012
PDF: 8 pages
Proc. SPIE 8369, Sensing for Agriculture and Food Quality and Safety IV, 83690E (5 May 2012); doi: 10.1117/12.921419
Author Affiliations:
I-Chang Yang, National Science Council (Taiwan)
Stephen R. Delwiche, USDA Agricultural Research Service (United States)
Y. Martin Lo, Univ. of Maryland, College Park (United States)


Published in SPIE Proceedings Vol. 8369:
Sensing for Agriculture and Food Quality and Safety IV
Moon S. Kim; Shu-I Tu; Kuanglin Chao, Editor(s)
