
A Deep Convolutional Neural Network Based Image Processing Framework for Monitoring the Growth of Soybean Crops

Published by the American Society of Agricultural and Biological Engineers, St. Joseph, Michigan (www.asabe.org)

Citation:  2021 ASABE Annual International Virtual Meeting, Paper No. 2100259 (doi:10.13031/aim.202100259)
Authors:   Nipuna Chamara A.H.M, Khalid H Alkady, Hongyu Jin, Frank Bai, Ashok Samal, Yufeng Ge
Keywords:   Crop Monitoring, Deep Convolutional Neural Network, Edge-computing, In-field, IoT, MATLAB, Phenocams

Abstract.

While information about crops can be derived from many modalities, including hyperspectral imaging, multispectral imaging, fluorescence imaging, and 3D laser scanning, low-cost RGB imaging sensors are a more practical and feasible alternative for continuous crop monitoring. In this research, an image processing pipeline was developed to monitor the growth of soybean crops in a research field of the University of Nebraska-Lincoln using RGB images collected over 30 days by overhead phenocams, each built from a Raspberry Pi Zero with a camera module, with images saved to an SD card. The images were stored in the JPG file format at a resolution of 1920 × 512 pixels and first denoised using a pretrained denoising deep convolutional neural network (DCNN). A semantic segmentation network, named SoySegNet, was then used to isolate the canopy of the soybean crops from the background. SoySegNet is a DeepLab v3+ DCNN developed through transfer learning from the ResNet-18 DCNN. It was trained with 119 pixel-labeled images plus additional images generated using data augmentation techniques (i.e., random translation and reflection), which increased the size of the dataset used for training, validation, and testing. SoySegNet identified the soybean canopy with a pixel-level accuracy of 94%. Various vegetation indices (i.e., excess green index, excess green minus excess red, vegetative index, color index of vegetation, visible atmospherically resistant index, red-green-blue vegetation index, modified green-red vegetation index, and normalized difference index) were then computed from the segmented field images to monitor the growth rate of the soybean crops. Furthermore, the proposed pipeline was extended to count soybean leaves in the segmented images using a deep neural network based on the You Only Look Once (YOLO) architecture, named SoyCountNet. SoyCountNet consists of a ResNet-50 DCNN as the feature extraction network and an object detection subnetwork. It was trained with the same 119 labeled images used for SoySegNet, with the leaves annotated using bounding boxes, and data augmentation was again used to enlarge the training, validation, and testing datasets. SoyCountNet counted soybean leaves in the segmented field images with a precision of 0.36. This research demonstrated that the proposed image processing pipeline, in conjunction with low-cost RGB imaging devices, can provide a reliable and cost-effective framework for continuous crop monitoring. A novel application of this framework would be to generate meaningful data about the crop in real time on edge computing devices in Low Power Wide Area Network (LPWAN) based agricultural Internet of Things (IoT) sensor networks.
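The paper's code is not reproduced here, but a minimal MATLAB sketch of the denoising step might look like the following, assuming the Image Processing Toolbox's pretrained DnCNN (the file names are hypothetical; MATLAB's DnCNN operates on grayscale images, so the RGB frame is denoised channel by channel):

```matlab
% Denoise a raw phenocam frame with MATLAB's pretrained DnCNN.
net = denoisingNetwork('DnCNN');          % pretrained denoising DCNN

rgb = imread('soybean_frame.jpg');        % hypothetical file name
denoised = zeros(size(rgb), 'like', rgb);
for c = 1:3                               % DnCNN expects grayscale input,
    denoised(:,:,c) = denoiseImage(rgb(:,:,c), net);  % so denoise per channel
end
imwrite(denoised, 'soybean_frame_denoised.jpg');
```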

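Similarly, the SoySegNet training described above could be sketched with MATLAB's Computer Vision Toolbox. The directory paths, class names, translation range, and training options below are assumptions, and the [512 1920 3] input size assumes the 1920 × 512 frames are 512 pixels high:

```matlab
% Sketch of SoySegNet: DeepLab v3+ with a ResNet-18 backbone, trained on
% pixel-labeled images with random translation/reflection augmentation.
imds = imageDatastore('images/');                       % assumed paths
classes = ["soybean" "background"];
pxds = pixelLabelDatastore('labels/', classes, [1 0]);

imageSize = [512 1920 3];                 % H x W x C of the phenocam frames
lgraph = deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');

augmenter = imageDataAugmenter( ...
    'RandXReflection', true, ...
    'RandXTranslation', [-10 10], ...     % assumed translation range (pixels)
    'RandYTranslation', [-10 10]);
trainingData = pixelLabelImageDatastore(imds, pxds, ...
    'DataAugmentation', augmenter);

opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 30, 'MiniBatchSize', 4, 'Shuffle', 'every-epoch');
soySegNet = trainNetwork(trainingData, lgraph, opts);

% Inference: label each pixel of a denoised frame, then keep the canopy.
C = semanticseg(denoised, soySegNet);
mask = (C == "soybean");
```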
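The vegetation indices are per-pixel arithmetic on the chromatic coordinates of the segmented frames. A sketch for a subset of the listed indices, using the canopy mask produced by the segmentation step above (the eps terms nudge the denominators away from zero):

```matlab
% Vegetation indices from normalized RGB (chromatic coordinates); only
% ExG, ExG-ExR, VARI, and MGRVI are shown here for brevity.
I = im2double(denoised);
total = sum(I, 3) + eps;
r = I(:,:,1) ./ total;
g = I(:,:,2) ./ total;
b = I(:,:,3) ./ total;

ExG   = 2*g - r - b;                            % excess green index
ExGR  = ExG - (1.4*r - g);                      % excess green minus excess red
VARI  = (g - r) ./ (g + r - b + eps);           % visible atmospherically resistant index
MGRVI = (g.^2 - r.^2) ./ (g.^2 + r.^2 + eps);   % modified green-red vegetation index

% Track growth by averaging each index over the soybean canopy pixels.
meanExG = mean(ExG(mask));
```

Finally, a leaf detector in the spirit of SoyCountNet could be assembled around a ResNet-50 feature extractor. The abstract does not state which YOLO version was used, so MATLAB's YOLO v2 builder is a stand-in here, and the anchor boxes, feature layer, and training table are all assumptions:

```matlab
% Sketch of SoyCountNet: a YOLO-style leaf detector on a ResNet-50 backbone.
anchorBoxes = [32 32; 64 64; 96 96];      % assumed leaf-sized anchors
baseNet = resnet50;
lgraph = yolov2Layers([512 1920 3], 1, anchorBoxes, baseNet, ...
    'activation_40_relu');                % assumed feature extraction layer

% leafData: table with columns {imageFilename, leaf}, where 'leaf' holds
% [x y w h] bounding boxes for every labeled soybean leaf (assumed format).
opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 20, 'MiniBatchSize', 8);
soyCountNet = trainYOLOv2ObjectDetector(leafData, lgraph, opts);

% Leaf count = number of boxes detected in a segmented frame.
[bboxes, scores] = detect(soyCountNet, denoised);
leafCount = size(bboxes, 1);
```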