Detecting crops and weeds in precision agriculture

The spatial distribution of weeds can be mapped from digitally recorded images and used to target the application of herbicides.
09 September 2008
Christelle Gee, Jérémie Bossu, Gawain Jones, and Frédéric Truchetet

In recent decades, precision agriculture, a practice geared to delivering ‘the right dose at the right place at the right moment,’ has become possible with the development of remote sensors. In particular, spatial vegetative heterogeneity within fields can be digitally recorded using vision systems embedded either in agricultural machinery or in small remotely piloted aircraft. These images can then be used to identify regions of high weed content so that herbicide applications can be focused, and reduced. We previously developed two such systems for weed detection in crop fields.

A multispectral camera embedded in an aircraft (see Figure 1) enabled us to obtain images from which the spatial distribution of weeds can be estimated. Various classification methods (artificial neural networks, k-nearest neighbor) were used to categorize pixels as either crop or weed.1,2 The second experimental system incorporates an imaging system in a real-time precision sprayer (see Figure 2). Here, images from a monochrome CCD camera mounted in front of the tractor are used to discriminate crops from weeds with a 2D Gabor filtering process based on spatial information.3 This approach generates a weed infestation map under the assumption that the periodic signal in the image corresponds to the crop rows. A pinhole camera model is then used to translate the weed patch coordinates from the image into real-world coordinates, so as to time the triggering of a series of actuators fixed in front of each nozzle on the spraying boom.
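
The article does not reproduce the filtering pipeline itself; as a minimal sketch of the general idea, the following Python snippet (NumPy and SciPy, with illustrative parameter values and function names chosen here) builds a real 2D Gabor kernel tuned to a known row period and orientation, and uses the strength of its response to attribute vegetation pixels either to the periodic crop rows or to weeds.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(wavelength, theta, sigma, size):
    """Real-valued 2D Gabor kernel: a cosine carrier of the given wavelength
    (in pixels) and orientation theta, under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # coordinate along the carrier
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def split_crop_weed(vegetation_mask, row_period_px, row_angle_rad):
    """Toy crop/weed split: vegetation pixels whose Gabor response is strong are
    attributed to the periodic crop rows, the remainder to weeds. The 50%-of-max
    threshold and the envelope width are illustrative choices, not the authors'."""
    kernel = gabor_kernel(wavelength=row_period_px,
                          theta=row_angle_rad,
                          sigma=row_period_px,            # envelope comparable to one row period
                          size=int(4 * row_period_px) | 1)  # odd kernel size
    response = fftconvolve(vegetation_mask.astype(float), kernel, mode="same")
    row_like = response > 0.5 * response.max()
    crop = vegetation_mask & row_like
    weed = vegetation_mask & ~row_like
    return crop, weed
```

In practice the row period and orientation would not be hard-coded; they can be estimated from the image itself, for example by frequency analysis or a Hough transform, before the filter is applied.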


Figure 1. A camera with a four-filter wheel, covering the blue, green, red, and infrared ranges, is embedded in a small remotely piloted aircraft.

However, few manual ground assessments have been done to evaluate the performance of these image-processing techniques. Hence, a method to validate the potential of such crop/weed discrimination algorithms is needed. When the initial parameters of the scene (crop and weed locations, weed infestation rates) are known, it is possible to assess and compare the efficiency of crop/weed discrimination algorithms such as Gabor filtering, the Hough transform, and the wavelet transform.
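
Because a simulated scene supplies the true label of every plant or pixel, an algorithm's output can be scored against it directly. The Python sketch below is only an illustration of such scoring; the metric names and formulas are choices made here, not taken from the article.

```python
import numpy as np

def crop_weed_scores(truth_is_weed, predicted_is_weed):
    """Confusion-matrix scores for a binary crop/weed labelling.
    Both inputs are boolean arrays of identical shape (per pixel or per plant)."""
    tp = np.sum(truth_is_weed & predicted_is_weed)     # weed correctly detected
    fp = np.sum(~truth_is_weed & predicted_is_weed)    # crop wrongly flagged as weed
    fn = np.sum(truth_is_weed & ~predicted_is_weed)    # weed missed
    tn = np.sum(~truth_is_weed & ~predicted_is_weed)   # crop correctly kept
    return {
        "weed_detection_rate": tp / max(tp + fn, 1),    # sensitivity to weeds
        "crop_false_alarm_rate": fp / max(fp + tn, 1),  # crop that would be sprayed
        "overall_accuracy": (tp + tn) / truth_is_weed.size,
    }
```

The same scoring can be applied to the outputs of Gabor-filter, Hough-transform, or wavelet-based detectors run on identical virtual scenes, which is what makes the comparison meaningful.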


Figure 2. The real-time precision sprayer. Actuators placed in front of each nozzle of the spraying boom are triggered according to readings from a speed sensor and images from a camera mounted at the front of the tractor.

First, a two-dimensional virtual agronomic image was created in simulation software using a statistical approach that models the spatial distribution of plants.4,5 In this virtual field, the crop plants are laid out in a periodically arranged sowing pattern whose design mimics either monocotyledonous or dicotyledonous plants: see Figure 3(a). The weed plants are positioned by a mixture of two stochastic processes (Poisson and Neyman–Scott) that model uniform and patchy distributions, respectively. The discrete statistics were developed under the assumption that the weed spatial distribution is a random process with no memory between successive events (two generated images). Moreover, as crop fields typically contain few weeds, the occurrence of weed plants relative to crop plants is set very low. In this model, the initial inter-row weed infestation rate (WIR) is an adjustable parameter (20% in the example of Figure 3).
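
As a rough sketch of this kind of simulation, under stated assumptions (the field size, plant densities, cluster parameters, and the infestation-rate formula below are illustrative choices, not the model's actual values), crop plants can be placed on a periodic sowing grid, uniformly scattered weeds drawn from a Poisson process, and patchy weeds from a Neyman–Scott cluster process:

```python
import numpy as np

rng = np.random.default_rng(0)
FIELD_W, FIELD_H = 10.0, 10.0   # field extent in metres (illustrative)
INTER_ROW = 0.16                # inter-row width of 16 cm, as in Figure 3
INTRA_ROW = 0.05                # spacing along the row (assumed)

# Crop: periodically arranged sowing pattern on a regular grid.
rows = np.arange(0.0, FIELD_W, INTER_ROW)
along = np.arange(0.0, FIELD_H, INTRA_ROW)
crop_xy = np.array([(x, y) for x in rows for y in along])

# Uniform weeds: homogeneous Poisson process of intensity lam (plants per m^2).
lam = 5.0
n_uniform = rng.poisson(lam * FIELD_W * FIELD_H)
uniform_weeds = rng.uniform([0, 0], [FIELD_W, FIELD_H], size=(n_uniform, 2))

# Patchy weeds: Neyman-Scott process -- Poisson-distributed parent points, each
# spawning a Gaussian-scattered cluster (cluster size and spread are assumed).
n_parents = rng.poisson(0.3 * FIELD_W * FIELD_H)
parents = rng.uniform([0, 0], [FIELD_W, FIELD_H], size=(n_parents, 2))
clusters = [p + rng.normal(scale=0.25, size=(rng.poisson(20), 2)) for p in parents]
patchy_weeds = np.vstack(clusters) if clusters else np.empty((0, 2))

weeds_xy = np.vstack([uniform_weeds, patchy_weeds])

# Illustrative infestation rate: weeds as a percentage of all simulated plants
# (the article's exact WIR definition may differ).
wir = 100.0 * len(weeds_xy) / (len(weeds_xy) + len(crop_xy))
print(f"simulated weed infestation rate: {wir:.1f}%")
```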

Next, a virtual camera with predefined intrinsic (CCD size, focal length, and distortion) and extrinsic (translation and rotation) parameters is placed in the field, and a snapshot of the virtual field is taken. A projective transformation (pinhole camera model) then yields a black-and-white virtual image: see Figure 3(b). This image can be used in place of an actual image of a crop field to test the efficiency of crop/weed discrimination algorithms.
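
A minimal sketch of such a projection is given below, assuming an ideal distortion-free pinhole camera; the focal length, image size, and rotation convention are illustrative, while the camera height and pitch are taken from the Figure 3 example.

```python
import numpy as np

def pinhole_project(points_world, focal_px, image_size, cam_height, pitch_deg):
    """Project ground-plane points (x, y, z = 0, in metres) to pixel coordinates
    using an ideal pinhole camera tilted downward by `pitch_deg` and placed
    `cam_height` metres above the ground. All parameter values are illustrative."""
    w, h = image_size
    K = np.array([[focal_px, 0.0, w / 2.0],
                  [0.0, focal_px, h / 2.0],
                  [0.0, 0.0, 1.0]])                  # intrinsic matrix
    pitch = np.radians(pitch_deg)
    # Rotation about the camera x-axis (tilt toward the ground) and translation
    # placing the camera above the world origin.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(pitch), -np.sin(pitch)],
                  [0.0, np.sin(pitch), np.cos(pitch)]])
    t = np.array([0.0, 0.0, cam_height])
    pts = np.column_stack([points_world, np.zeros(len(points_world))])  # z = 0
    cam = (R @ pts.T).T + t          # world frame -> camera frame
    uvw = (K @ cam.T).T              # camera frame -> homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective division -> pixel coordinates

# Example: two ground points one inter-row width apart, seen from 1.20 m with a
# 65 degree pitch, matching the viewing geometry quoted for Figure 3(b).
pixels = pinhole_project(np.array([[0.0, 1.0], [0.16, 1.0]]),
                         focal_px=800.0, image_size=(640, 480),
                         cam_height=1.20, pitch_deg=65.0)
print(pixels)
```

Inverting the same model, i.e., intersecting each pixel's viewing ray with the ground plane, is what allows weed patches detected in the image to be located in field coordinates for the sprayer.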

This method provides a useful tool for evaluating the robustness of spatial discrimination algorithms. It has been designed to allow easy algorithm integration and can be configured to test multiple experimental designs. We are now interested in applying it to new discrimination algorithms and are working to expand the model by adding further parameters. In particular, we plan to explore the use of spectral data to further discriminate crops from weeds.


Figure 3. (a) A virtual wheat field with a mixture of both uniform and patchy weed spatial distributions (WIR = 20%). (b) Resultant virtual wheat field picture: inter-row width = 16cm, camera height = 1.20m, pitch angle = 65°, and WIR = 20%.

Christelle Gee, Jérémie Bossu, Gawain Jones
UP-GAP
Établissement National d'Enseignement Supérieur Agronomique de Dijon (ENESAD)
Dijon, France

Christelle Gee was born in Dijon on 31 October 1969. She received a master's degree in chemical physics at Orsay University, Paris, in 1997 and a PhD in chemical physics at the same institution. She is interested in precision agriculture, specifically the development of optical sensors in the context of reducing herbicide use by characterizing plant distributions.

Jérémie Bossu was born in Dijon on 30 July 1976. He received a master's degree in image processing at the University of Burgundy in 2004. He is researching computer vision and image processing in precision agriculture.

Gawain Jones was born in Vendenesse les Charolles, France, on 18 November 1982. He received a master's degree in image processing at the University of Burgundy in 2006. His research interests include modeling of crop fields (spatial and spectral approaches) for testing and validating the accuracy of crop/weed discrimination algorithms.

Frédéric Truchetet
Le2i
University of Bourgogne
Le Creusot, France

Frédéric Truchetet was born in Dijon on 13 October 1951. He received a master's degree in physics at the University of Burgundy in 1973 and a PhD in electronics at the same institution. He is a full professor at Le2i, UMR 5158 uB-CNRS, where his research focuses on image processing for artificial vision inspection, particularly on wavelet transformation, multiresolution edge detection, and image compression.

