
Design and Development of a Broiler Mortality Removal Robot

Guoming Li1,2,7*, Gary Daniel Chesser3*, Joseph L. Purswell4, Christopher L. Magee4, Richard S. Gates1,2,5, Yijie Xiong6


Published in Applied Engineering in Agriculture 38(6): 853-863 (doi: 10.13031/aea.15013). © 2022 American Society of Agricultural and Biological Engineers.


1    Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, Iowa, USA.

2    Department of Animal Science, Iowa State University, Ames, Iowa, USA.

3    Department of Agricultural and Biological Engineering, Mississippi State University, Mississippi State, Mississippi, USA.

4    Poultry Research Unit, USDA Agricultural Research Service, Mississippi State, Mississippi, USA.

5    Egg Industry Center, Iowa State University, Ames, Iowa, USA.

6    Departments of Animal Science and Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, Nebraska, USA.

7    Department of Poultry Science, The University of Georgia, Athens, Georgia, USA.

*    Correspondence: gmli@uga.edu; dchesser@abe.msstate.edu

The authors have paid for open access for this article. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License https://creativecommons.org/licenses/by-nc-nd/4.0/

Submitted for review on 3 January 2022 as manuscript number PAFS 15013; approved for publication as a Research Article by Community Editor Dr. Jun Zhu of the Plant, Animal, & Facility Systems Community of ASABE on 22 October 2022.

Mention of trade names or commercial products in this article is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.


Abstract. Manual collection of broiler mortality is time-consuming, unpleasant, and laborious. The objectives of this research were: (1) to design and fabricate a broiler mortality removal robot from commercially available components to automatically collect dead birds; (2) to compare and evaluate deep learning models and image processing algorithms for detecting and locating dead birds; and (3) to examine detection and mortality pickup performance of the robot under different light intensities. The robot consisted of a two-finger gripper, a robot arm, a camera mounted on the robot’s arm, and a computer controller. The robot arm was mounted on a table, and 64 Ross 708 broilers between 7 and 14 days of age were used for the robot development and evaluation. The broiler shank was the target anatomical part for detection and mortality pickup. Deep learning models and image processing algorithms were embedded into the vision system and provided location and orientation of the shank of interest, so that the gripper could approach and position itself for precise pickup. Light intensities of 10, 20, 30, 40, 50, 60, 70, and 1000 lux were evaluated. Results indicated that the deep learning model “You Only Look Once (YOLO)” V4 was able to detect and locate shanks more accurately and efficiently than YOLO V3. Higher light intensities improved the performance of the deep learning model detection, image processing orientation identification, and final pickup performance. The final success rate for picking up dead birds was 90.0% at the 1000-lux light intensity. In conclusion, the developed system is a helpful tool towards automating broiler mortality removal from commercial housing, and contributes to further development of an integrated autonomous set of solutions to improve production and resource use efficiency in commercial broiler production, as well as to improve well-being of workers.

Keywords. Automation, Broiler, Deep learning, Image processing, Mortality, Robot arm.

Modern broiler production systems continue to be intensified and scaled up, with approximately 27,000 to 29,000 birds in a typical house (Gates et al., 2008). In these commercial production systems, broiler mortality can be attributed to severe disease, metabolic problems, and inappropriate environmental conditions and management practices, with a mortality rate of about 5% in a typical 7-week production cycle (National Chicken Council, 2020). As a routine task, farm workers spend a large amount of time daily identifying, gathering, and removing dead birds. Broiler genetic strains have been continuously selected for efficient and fast growth, and bird body weight increases rapidly from less than 500 g within the first two weeks to over 3000 g after week 7 (Aviagen, 2022). Manual mortality collection is relatively simple for young birds and can often be completed within one house walk-through due to the light weight and small size of dead birds. However, as birds grow during a production cycle, workers may strategically pile dead birds during inspection and remove the carcasses at the end of inspection (Holland, 2005), often requiring multiple, time-consuming, and arduous trips through each broiler house. Long-term exposure to harsh working environments containing concentrated ammonia, dust, and odors can pose health risks for farmers (Carey et al., 2004), and delayed mortality removal may increase insect population pressure and the risk of disease spreading via direct contact or vector transmission (Dent et al., 2008). As a result, effective and autonomous solutions for broiler mortality removal are needed to reduce labor, enhance farm biosecurity, and improve the well-being of workers.

Robotics have provided many solutions for reducing manual labor in other industries and have recently been employed to facilitate poultry production. Contemporary robots are integrated autonomous systems containing components for perception, reasoning/learning, communication, and task planning and execution (Ren et al., 2020). Some proactive robotics companies have tried to robotize repetitive and laborious tasks in poultry production. The French robotics company Octopus Biosafety (2022) designed the Octopus Poultry Safe robots for the broiler industry. The robots are equipped with scarifiers to scarify litter, environmental sensors to monitor temperature, relative humidity, and ammonia, and laser pointers to stimulate bird activity. Another French robotics company, Tibot (2022), designed smaller broiler robots with functions similar to the Octopus robots. The robots run along the litter floor of a broiler house with prongs running through the litter to keep it aerated. A poultry facility company in the Netherlands, Big Dutchman (2022), developed the ChickenBoy robot, which is mounted on rails above the flock and travels slowly throughout a house to collect information on ambient environmental conditions (e.g., humidity, temperature, airspeed, and carbon dioxide) and thermographic imaging to assess bird welfare and health. More European robotics companies are devoting their efforts to developing robotic systems to assist precision management of broilers; however, owing to relatively high initial investment, robots have not been widely used in the U.S. broiler industry.

Robotic solutions have also been researched for automating hen production. Li (2016) designed and constructed a robot platform to search for dead layers in a stair-step cage system. Multiple stimuli, including sound, light, and vibration, were applied in front of cages to encourage live birds to move, and consequently, motionless birds were flagged as mortalities. Vroegindeweij et al. (2018) developed a robot with a bent helical spring mounted in front that senses the location of floor eggs and collects them; the robot can navigate autonomously for more than 3000 m while avoiding obstacles and dealing with the presence of hens in cage-free housing systems. Chang et al. (2020) developed an egg-collecting robot consisting of a chassis frame, an egg-picking mechanism, an egg-sorting device, an egg storage tank, an egg collection channel, and a control box. The robot successfully collected floor eggs for free-range laying hens. Li et al. (2021a) combined a robot arm, a soft end effector, and a deep learning-based computer vision system to collect floor eggs as well. In summary, most poultry robotics research projects have focused on floor eggs in the egg industry.

Only one robotic study has addressed broiler mortality removal. Liu et al. (2021) developed an integrated robot system that can travel through a commercial broiler house along predefined straight-line paths, identify dead chickens, sweep them onto a conveyor belt, and store them. Although the robot had 95% precision for collecting dead broilers, the embedded vision system was developed with only 110 images, leaving the generalizability and robustness of the system unknown. The development dataset contained images of cocks and laying hens, with housing and environmental conditions fairly different from those of the U.S. broiler industry. The conveyor system may work well for collecting dead broilers in open areas, but may not reach broilers in secluded areas near corners or under feeding or drinking lines, due to its lack of flexibility of movement (one degree of freedom). Therefore, a robust broiler mortality removal robot is still needed. Inspired by the poultry robotics team in the Netherlands (Vroegindeweij et al., 2018) and the Georgia Tech Research Institute (Usher et al., 2017), our team plans to separate the robot development into robot path planning, a robot arm, and a ground robotic vehicle with an accurate indoor navigation system, and to integrate them once all parts are developed. This study focuses on the robot arm and vision systems for broiler mortality pickup in simulated broiler production environments.

Practical points should be considered in the design and development of a broiler mortality removal robot. Both the bent helical spring (Vroegindeweij et al., 2018) and the collection channel (Chang et al., 2020) proved limited in their ability to reach secluded areas where dead birds may appear; this movement limitation may be addressed by flexible robot arms. Additionally, robot arms can be integrated into mobile platforms, including ground vehicles, ceiling-mounted rail systems, and unmanned aerial vehicles, to collect dead birds across the large spaces associated with typical broiler house construction. However, the performance of picking up dead birds using robot arms remains unclear. A robust vision system is critical for a robot to perceive dead birds. As deep learning-based computer vision systems have been increasingly utilized in broiler production (Li et al., 2021c), they may have the potential to support the dead bird identification process. Efficient deep learning models balancing processing speed and detection accuracy should be verified to precisely detect and locate dead birds in images (Huang et al., 2017), and matched image processing algorithms need to be developed to provide pinpoint orientation information for grasping the necessary parts with robot arms. Light intensities vary within a broiler house due to bird requirements, light leakage, coverage by infrastructure (e.g., feeders, drinkers), etc. (Olanrewaju et al., 2006). A robust means of recognizing and picking up dead birds under various light intensities is needed for a broiler mortality removal robot.

The objectives of this study were: (1) to design and construct a robot to automatically collect dead birds using commercially available components; (2) to compare and evaluate deep learning models and image processing algorithms for detecting and locating dead birds; and (3) to examine dead bird detection and pickup performance of the robot under different controlled light intensities.

Materials and Methods

Birds and System

The robot system development was conducted in the USDA-ARS Poultry Research Unit at Mississippi State (fig. 1). Pine shavings were obtained from a local store and spread on a table to simulate conditions of a broiler house floor. The robot arm (Gen 3, Kinova Inc., Boisbriand, QC, Canada) was mounted on a table and connected to a laptop for control. The laptop was equipped with 32 GB RAM, 9th Generation Intel® Core™ i9-9900K processor, and NVIDIA GeForce RTX 2080 8GB GPU card. The robot arm can move freely with 7 degrees of freedom (DoF) and has a maximum payload of 2000 g. The adaptive two-finger gripper (Robotiq 2F-85, Kinova Inc., Boisbriand, QC, Canada) was installed at the end of the arm to grasp dead birds, and a camera (Intel® RealSense™, Intel Corporation, Santa Clara, Calif.) (table 1) mounted on the arm before the gripper was used to capture top-view images of dead birds.

Figure 1. Illustration of the system setup for the design and development of the robot.
Table 1. Information about the hand-mounted camera.
Parameter | Value
Model | Intel RealSense D435
Maximum focal length (mm) | 1080
Depth accuracy (%) | <2 at 2 m
Maximum output resolution (pixels) | Depth: 1920×1080; RGB: 1280×720
Ideal range (m) | 0.3-3
Field of view (°) | Depth: 87×58; RGB: 69×42
Frame rate (fps) | Depth: 90; RGB: 30

A total of 64 Ross 708 broilers were used for the robot development. The body weight of these birds ranged from 58.0 to 587.4 g (table 2) and thus did not exceed the maximum payload of the robot. The system design was conducted with mortalities from an ongoing research trial conducted according to the guidelines of the USDA-ARS Animal Care and Use Committee at the Mississippi State, Mississippi location (protocol 21-3 and date of approval 10 February 2021), resulting in an uneven number of birds at different bird ages. These birds had been dead for 24 to 96 h, and in most cases were stiff from rigor mortis. The lying posture of dead broilers was determined after consultation with broiler farm managers to simulate commercial conditions.

Table 2. Information about broilers used for robot development.[a]
Days of Bird Age | Number of Birds Used | Average BW (mean±SD, g) | Maximum BW (g) | Minimum BW (g)
7 | 1 | 58.0 | - | -
9 | 13 | 155.4±29.7 | 197.6 | 100.0
14 | 50 | 462.5±52.1 | 587.4 | 352.4

    [a]    BW is body weight and SD is standard deviation.

Overall Design of the Robot

The overall workflow of the robot operation consisted of initialization steps, shank detection and localization, shank orientation identification, and mortality pickup (fig. 2). The shank is the unfeathered portion between the tibiotarsal joint (“hock”) and the metatarsophalangeal joint. Details of shank determination are presented in Sections 2.4 and 3.1. The initialization steps were to initialize the robot, open the camera, and acquire top-view images for further analysis.

Shank detection and localization were completed via a deep learning model. The models were comparatively evaluated (Section 2.5), and the optimal one was used to extract shank information such as shank indices, coordinates (x, y), and dimensions (width and height). The detected shank with minimum index was cropped and analyzed via image processing algorithms to extract shank orientation relative to the robot’s position. Image processing included cropping, saturation channel extraction, edge detection, and line detection. Details of these processing algorithms are presented in Section 2.6.

The extracted shank coordinates were converted from the image coordinate system to the robot arm coordinate system and fed to the robot for positioning the gripper above the shank of interest. The gripper then rotated to match the shank orientation based on the detected angle Ø between the camera or gripper orientation and the shank in a horizontal plane. Finally, the gripper grasped the shank of the dead bird, the bird was picked up, the robot moved, and the bird was dropped at the storage place.

Figure 2. Overall workflow for the broiler mortality removal robot.
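To make this sequence concrete, the following minimal Python sketch walks through one pickup cycle. The Robot class and helper names are hypothetical stand-ins for the Kinova control interface, not the authors' implementation, and the pixel-to-robot coordinate conversion is deferred to Section 2.2.

    class Robot:
        """Hypothetical stand-in for the Kinova Gen3 control interface."""
        def move_gripper_above(self, x_cm, y_cm):
            print(f"move gripper above ({x_cm:.1f}, {y_cm:.1f}) cm")
        def rotate_gripper(self, angle_deg):
            print(f"rotate gripper to {angle_deg:.1f} deg")
        def grip_lift_store(self):
            print("grip shank, lift bird, drop at storage")

    def pickup_cycle(robot, shanks):
        """One pickup cycle following figure 2. `shanks` holds tuples of
        (x_cm, y_cm, angle_deg) already converted to the robot frame."""
        if not shanks:
            return False                 # no shank detected in this view
        x, y, angle = shanks[0]          # detected shank with minimum index
        robot.move_gripper_above(x, y)   # position gripper above the shank
        robot.rotate_gripper(angle)      # match the detected shank orientation
        robot.grip_lift_store()          # grasp, pick up, and store the bird
        return True

    pickup_cycle(Robot(), [(5.2, -3.1, 32.0)])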

Coordinate Transformation between the End Effector and Vision System

The robot has seven DoF, corresponding to seven Cartesian coordinate systems, and only the coordinate system at the end of the arm was validated to build a connection between the end effector and the hand-mounted camera for broiler mortality pickup. Only the RGB images from the Intel RealSense camera were used for robot control, as they are favorable for most deep learning object detection algorithms. In the desired position, the gripper and camera were perpendicular to the litter floor, so that the camera could capture clear top-view images of dead broilers. The imaging distance between the camera and the litter floor was approximately 20 cm, and the pixel-to-distance conversion factor for the coordinate transformation was 38.4 pixels/cm based on manual measurement.

Absolute values of coordinates in different coordinate systems were of no interest, and only the speeds of linear and angular movements of the robot arm from the gripper position to the desired position of a body part (e.g., shank) were used for the robot control and calibrated using equations 1-7.

        (1)

        (2)

        (3)

        (4)

        (5)

For long-axis-based rotation,

        (6)

For short-axis-based rotation,

        (7)

where

LSx, LSy, and LSz = linear speeds of the robot arm in the x, y, and z axes of the robot Cartesian coordinate system, respectively,

ASx, ASy, and ASz = angular speeds of the robot arm in the x, y, and z axes of the robot Cartesian coordinate system, respectively,

(Xe, Ye) = centroid coordinates in the imagery coordinate system,

WI, HI = width and height of an image, 1280 and 720 pixels in this case, respectively,

CNV = pixel-to-distance conversion factor, 38.4 pixels/cm in this case,

h1 = gripper height, 16.3 cm in this case,

T = robot manipulation period, 2 s in this case,

Ø = orientation of a desired body part in the imagery coordinate system,

long- or short-axis-based rotation = the gripper rotates to fit the long or short axis of a targeted body part.
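The calibration logic can be illustrated with a short Python sketch. The speed expressions below (pixel offset from the image center, converted to centimeters via CNV, divided by the manipulation period T) and the image-to-robot axis mapping are simplifying assumptions for illustration, not equations 1-7 verbatim.

    # Constants from Section 2.2
    W_I, H_I = 1280, 720   # image width and height (pixels)
    CNV = 38.4             # pixel-to-distance conversion factor (pixels/cm)
    H1 = 16.3              # gripper height (cm)
    T = 2.0                # robot manipulation period (s)

    def linear_speeds(xe, ye):
        """Linear speeds (cm/s) moving the gripper from the image center to
        the shank centroid (Xe, Ye) within one period T; the axis mapping
        between image and robot frames is an assumption for illustration."""
        lsx = (xe - W_I / 2) / CNV / T   # pixel offset -> cm -> cm/s
        lsy = (ye - H_I / 2) / CNV / T
        lsz = -H1 / T                    # descend by the gripper height
        return lsx, lsy, lsz

    def angular_speed(phi_deg, long_axis=True):
        """Angular speed (deg/s) rotating the gripper into long-axis (cf. eq. 6)
        or short-axis (cf. eq. 7) alignment with the orientation angle."""
        target = phi_deg if long_axis else phi_deg - 90.0
        return target / T

    # Example: shank centroid at pixel (900, 500), oriented at 32 deg
    print(linear_speeds(900.0, 500.0), angular_speed(32.0))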

Lighting Environments and Image Acquisition for Robot Development

Light intensities of 10 to 70 lux at 10-lux intervals were set to mimic lighting variations due to bird age, blockage by infrastructure, and light leakage. A light meter (LT300, Extech Instruments, Waltham, Mass.) placed at bird level directly underneath the camera, together with voltage transformer adjustment, was used to set the light intensities of interest. The 1000-lux intensity was reached with the supplemental lamp (fig. 1) to test whether supplemental lighting improved shank recognition performance. For each intensity, images were captured by the camera at 5-s intervals while the camera rotated together with the gripper. This operation created variation within the dataset. One to four birds with different postures were partially or completely recorded. Duplicated or similar images were removed. The number of images used was 1653 for 10 lux, 1825 for 20 lux, 1732 for 30 lux, 1707 for 40 lux, 1826 for 50 lux, 1852 for 60 lux, 1725 for 70 lux, and 1941 for 1000 lux. The roughly uniform distribution of image counts across intensities may help the model infer shanks without light intensity bias.

Testing of the Dead Bird Pickup Performance via Manual Robot Control

Although a shank was assumed to be the suitable part for pickup, this needed to be verified. The robot was manually controlled to pick up seven parts (i.e., head, neck, wing, hock, shank, toe, and whole body, fig. 3a). Advantages and disadvantages of picking up different parts were compared. Furthermore, the geometric relationships (perpendicular vs. non-perpendicular) between the optimal selected part for pickup and the gripper long axis were determined to gain knowledge of robot control (fig. 3b). Each relationship was measured 30 times.


Figure 3. Illustration of mortality pickup performance testing via manual robot control: (a) body parts and (b) gripper-shank relationship. Ø is the angle between the shank and gripper in a horizontal plane. The camera lens is parallel with the long axis of the gripper.

Comparative Evaluation of Deep Learning Models for Shank Recognition and Localization

You Only Look Once (YOLO) is a set of well-recognized deep learning models for real-time processing. Among the YOLO model family, YOLO V3 and V4 are two recently released models that improve processing speed and detection accuracy over the original model (Redmon and Farhadi, 2018; Bochkovskiy et al., 2020), but the optimal model for recognizing and locating shanks remained to be determined. Although more recent YOLO models have been published, they were not considered here because of incompatibility with the current robot working environment. Future versions of the robot arm may offer better compatibility with deep learning object detection models and help address this computing environment limitation.

YOLO V3 and V4 were executed in the Darknet environment. A free cloud server (Google Colab) with a Tesla P100-PCIE-16GB GPU was used for model training. Most training configurations were similar between the two models, including a batch size of 64, resized images of 416×416 pixels, subdivisions of 16, momentum of 0.95, decay of 0.0005, a learning rate of 0.001, max batches of 48000, etc. The images described in Section 2.3 were labeled by trained technicians and used for model training and evaluation, with 80% for training and 20% for testing. Bounding boxes were drawn for the part of interest and exported in YOLO format for model development. During training, the training loss across epochs was plotted to visualize model performance in real time. Once the loss reached a plateau without large variations, the training process was stopped and the corresponding model weights were saved for further evaluation and implementation.
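For context, trained Darknet weights can be deployed for shank detection through OpenCV's DNN module, as sketched below; the file names are illustrative placeholders, and this deployment path is one possible option rather than the authors' implementation (the study ran the models in Darknet itself).

    import cv2

    # Load the trained YOLO V4 network; the .cfg/.weights names are hypothetical.
    net = cv2.dnn.readNetFromDarknet("yolov4-shank.cfg", "yolov4-shank.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    image = cv2.imread("topview.jpg")   # top-view image from the arm-mounted camera
    class_ids, scores, boxes = model.detect(image, confThreshold=0.25,
                                            nmsThreshold=0.45)
    for (x, y, w, h), score in zip(boxes, scores):
        # Report each detected shank's centroid for the coordinate transform
        print(f"shank at ({x + w / 2:.0f}, {y + h / 2:.0f}) px, conf = {score:.2f}")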

Development and Optimization for Image Processing Algorithms

Appropriate design determines the efficiency of image processing algorithms. Both HSV (hue, saturation, and value) and RGB (red, green, and blue) define color channels of images, but HSV is designed to fit human perception of light (Gonzalez and Woods, 2002). The skin of chicken shanks was more visually saturated than litter; that is, the color of shank skin was darker than that of litter (dark yellow vs. light yellow). The saturation channel (fig. 2) may retain the major features of concern (i.e., chicken shanks) while excluding unnecessary features (i.e., litter); therefore, it was selected to produce grayscale images. The Canny algorithm is a popular edge detection method and was used to generate binary images from saturation-channel images (Rong et al., 2014). It reduced noise (i.e., small particles outside areas of interest), found the intensity gradient of images (i.e., outer edges of shanks), and removed unnecessary pixels (i.e., litter pixels and skin pixels) in the neighborhood of gradient directions. The Hough transform is a popular tool for line detection due to its robustness to noise and missing data and was selected to detect shank edge orientation (Fernandes and Oliveira, 2008). The shank edge orientation represented the shank orientation with respect to the gripper or camera position.

The parameters (e.g., kernel sizes) of the algorithms were fine-tuned to obtain optimal performance. Sixty images containing 2-6 shanks per image were processed for each light intensity using the developed algorithm, and the orientations of shank edges in these images were measured manually for algorithm evaluation. A technician sat in front of the monitor displaying shank images and used a digital protractor to obtain the manual measurements.
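A minimal OpenCV sketch of the orientation pipeline is given below; the kernel sizes and thresholds are illustrative placeholders, not the fine-tuned values used in the study.

    import math
    import cv2
    import numpy as np

    def shank_orientation(crop_bgr):
        """Estimate shank orientation (degrees) inside a detected bounding box:
        saturation channel -> Canny edges -> Hough line detection."""
        hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
        saturation = hsv[:, :, 1]                         # shank skin more saturated than litter
        blurred = cv2.GaussianBlur(saturation, (5, 5), 0) # noise reduction
        edges = cv2.Canny(blurred, 50, 150)               # binary edge map
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                                minLineLength=20, maxLineGap=5)
        if lines is None:
            return None
        # Take the longest detected line segment as the shank edge
        x1, y1, x2, y2 = max(lines[:, 0, :],
                             key=lambda l: math.hypot(l[2] - l[0], l[3] - l[1]))
        return math.degrees(math.atan2(y2 - y1, x2 - x1))

    # Example with a synthetic crop containing one saturated diagonal stroke
    crop = np.zeros((200, 200, 3), dtype=np.uint8)
    cv2.line(crop, (30, 170), (170, 30), (0, 180, 220), 8)
    print(shank_orientation(crop))   # about -45 deg (or the 135 deg equivalent)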

Testing for Final Performance

The robot was finally embedded with the optimal deep learning model, the fine-tuned image processing algorithms, and the developed robot control algorithms to recognize the location and orientation of the optimal anatomical part to be picked up. Then, different numbers of dead birds were placed onto the litter, and the robot was operated automatically 60 times to pick up the dead birds under each of the eight light intensities. The final performance of the robot for collecting broiler mortality was determined.

Evaluation Metrics

Three sets of evaluation metrics were developed to assess deep learning model performance, image processing algorithm success, and robot pickup performance. The metrics were organized by light intensity, and all evaluations were conducted on the local computer.

The metrics for deep learning included precision, recall, F1 score, and root mean square error (RMSE) (eqs. 8-11). Higher values for the first three indicate better model performance, while smaller RMSE values indicate smaller prediction errors and better performance. The processing speed (frames per second, fps) was calculated by dividing the number of images processed by the total processing time, and larger values indicate faster processing.

\text{Precision} = \frac{TP}{TP + FP}        (8)

\text{Recall} = \frac{TP}{TP + FN}        (9)

F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}        (10)

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(\hat{x}_i - x_i)^2 + (\hat{y}_i - y_i)^2\right]}        (11)

where TP, FP, and FN are the numbers of true positive, false positive, and false negative detections, respectively; \hat{x}_i, \hat{y}_i are the ith predicted coordinates in a horizontal plane; x_i, y_i are the ith ground truth coordinates; and N is the total number of measured samples.
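In code, these metrics reduce to a few lines; the sketch below assumes equations 8-11 take the standard forms given above.

    import numpy as np

    def detection_metrics(tp, fp, fn):
        """Precision, recall, and F1 score (eqs. 8-10) from detection counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    def localization_rmse(pred_xy, true_xy):
        """RMSE (eq. 11) between predicted and ground truth shank centroids;
        inputs are N x 2 arrays of (x, y) coordinates."""
        pred, true = np.asarray(pred_xy), np.asarray(true_xy)
        return float(np.sqrt(np.mean(np.sum((pred - true) ** 2, axis=1))))

    # Example: 95 true positives, 5 false positives, 15 false negatives
    print(detection_metrics(95, 5, 15))
    print(localization_rmse([[10, 12], [20, 18]], [[11, 12], [19, 17]]))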

The metrics for image processing algorithm evaluation were mainly the errors between the predicted orientation (\hat{O}_j) and the ground truth orientation (O_j) (eq. 12). Histograms of the errors ranging from -135° to 135° at 45° intervals were also constructed. The mean absolute error (MAE) was used to average the absolute errors and summarize the overall performance of the algorithms (eq. 13).

Error_j = \hat{O}_j - O_j        (12)

MAE = \frac{1}{m}\sum_{j=1}^{m}\left|Error_j\right|        (13)

where |\cdot| is the absolute operator, Error_j is the error between the jth predicted orientation (\hat{O}_j) and the ground truth orientation (O_j), and m is the total number of evaluated errors.

The metrics for robot pickup performance evaluation were operation time and success rate. Operation time was recorded and reported by Python during each round of operation. Success rate was obtained by dividing the number of successful mortality pickups by the total number of robot operations.

Results

Performance of Picking Up Dead Birds via Manual Control

Hands-on experience with the robot was first gained through manual control for picking up birds by different body parts (table 3). Some anatomical parts (e.g., head and whole body), despite being easily gripped or identified, were physically compressed and insufficiently strong to prevent tissue damage, causing contamination of the robot and litter. Other parts were visually unidentifiable when birds were young (e.g., neck), too soft to be picked up (e.g., wing), or too small to be picked up (e.g., hock and toe). Based on considerations of biosecurity and robust pickup and detection performance across bird ages, the broiler shank was selected as the optimal part to be identified and picked up by the robot, although detection performance in dark environments needed improvement.

Table 3. Major advantages and disadvantages for pickup of different broiler body parts via manual robot control.
Part Picked Up | Major Advantage | Major Disadvantage
Head | Visually identifiable | Physically compressed and insufficiently strong for pickup
Neck | Readily picked up for old birds | Not visually obvious for young birds
Wing | Large for pickup | Feathers being torn off
Hock | Firm for pickup | Excessive feather coverage
Shank | Firm for pickup and visually identifiable | Color features similar to those of litter in dark environments
Toe | Firm for pickup | Too small, and color features similar to those of litter in dark environments
Whole body | Easy pickup for young birds | Too large for picking up old birds, and the body integrity of very young birds being compromised

Figure 4 presents the performance of the robot for picking up dead birds under two relationships between the broiler shank and the gripper long axis. The success rate was 93.3% for the perpendicular relationship and 86.6% for the non-perpendicular one. As the success rate of the former was 6.7% higher than that of the latter, image processing algorithms for extracting shank orientation are necessary to orient the gripper perpendicular to the broiler shank of interest.

Figure 4. Pickup performance of the robot under two geometric relationships.

Performance of Detecting and Locating Dead Birds via Deep Learning Models

The performance of the two YOLO models is presented in table 4. The precision, recall, and F1 score were 76.1% to 82.4% for YOLO V3 and 86.3% to 95.1% for YOLO V4. The RMSE was similar between the two models, and YOLO V4 processed one more image per second than YOLO V3. Considering its higher detection performance and faster processing speed, YOLO V4 was selected as the deep learning model for detecting and locating chicken shanks.

Table 4. Model performance for detecting and locating broiler shank via two deep learning models.[a]
Model | Precision (%) | Recall (%) | F1 Score (%) | RMSE (mm) | Processing Speed (fps)
YOLO V3 | 82.4 | 76.1 | 79.1 | 4.5 | 6
YOLO V4 | 95.1 | 86.3 | 90.5 | 4.8 | 7

    [a]    YOLO is You Only Look Once, RMSE is root mean square error, and fps is frame per second.

The performance of YOLO V4 across the eight light intensities is further presented in table 5. The precision, recall, and F1 score increased by 17.7% to 23.0% as intensity increased from 10 to 30 lux, but grew only 1.8% to 4.9% from 30 to 1000 lux. The RMSE ranged from 4.5 to 5.3 mm among the eight light intensities.

Table 5. Model performance for detecting and locating broiler shank across eight light intensities.[a]
Light Intensity (lux) | Precision (%) | Recall (%) | F1 Score (%) | RMSE (mm)
10 | 74.5 | 69.4 | 71.8 | 4.5
20 | 86.1 | 72.2 | 78.6 | 4.8
30 | 97.5 | 87.1 | 92.0 | 4.9
40 | 98.8 | 88.5 | 93.4 | 5.0
50 | 98.9 | 88.6 | 93.5 | 4.9
60 | 98.9 | 92.5 | 95.6 | 4.1
70 | 99.0 | 93.3 | 96.0 | 5.1
1000 | 99.3 | 92.0 | 95.8 | 5.3

    [a]    RMSE is root mean square error. The model performance was evaluated for YOLO V4 only.

Performance of Detecting Shank Orientation Via Image Processing

The errors between predicted orientations and ground truth orientations are depicted in figure 5. The percentage of errors within 45° (indicating small deviation) was 57.2% for 10 lux, 83.1% for 20 lux, 81.7% for 30 lux, 89.9% for 40 lux, 82.7% for 50 lux, 86.6% for 60 lux, 89.2% for 70 lux, and 97.6% for 1000 lux. The MAE was the largest (38.7°) at 10 lux and smallest (12.5°) at 1000 lux.

Final Performance of the Broiler Mortality Removal Robot

The final performance of the robot is presented in figure 6. The success rate for automatically finding and picking up dead birds increased from 53.3% to 80.0% with light intensities increasing from 10 to 30 lux. Further increasing the light intensity from 30 to 60 lux resulted in a small incremental improvement of 6.7%, and success rates stabilized at 86.7% to 90.0% among 60, 70, and 1000 lux. The overall operation time ranged from 70.5 to 77.8 s/round.

Discussion

Part Orientation

The broiler mortality removal robot belongs to the class of pick-and-place robots, which require accurate location and orientation information for control (Zeng et al., 2018). Even with manual control and the optimal perpendicular shank-gripper orientation, precise localization of the chicken shank was problematic, resulting in missed alignments, unsuccessful grasping, and compromised pickup performance. In contrast to manual control, the robot automatically pinpointed shanks with an RMSE of 4.5 to 5.3 mm (a deviation of <1% within an image) once the shanks were detected by the model, thus improving pickup performance. Interestingly, the robot still achieved high pickup performance (an 86.6% success rate) even when broiler shanks were not perpendicular to the gripper long axis. Based on our observations, the robot failed in pickup only when the gripper was parallel to, or intersected narrowly with (intersecting angle <15°), the shanks. Because bird pickup was not overly sensitive to other relationships between the gripper long axis and the broiler shank, the robot may successfully pick up dead birds even when their shank orientation is not exactly detected (fig. 5).


Figure 5. Histogram of errors between prediction and ground truth and mean absolute errors for different light intensities.

Deep Learning Model

Various deep learning models differ in detection accuracy and processing speed (Huang et al., 2017). Compared to YOLO V3, YOLO V4 had 13.3% to 15.4% higher precision, recall, and F1 score and a 16.7% higher processing speed, consistent with the results reported by Bochkovskiy et al. (2020). The improved performance of YOLO V4 may be attributed to its advanced connection scheme (e.g., weighted residual connections), normalization strategy (e.g., cross mini-batch normalization), upgraded training hyperparameters (e.g., self-adversarial training), etc. Therefore, YOLO V4 was selected as the shank detector in this case. However, the YOLO V4 architecture, despite being favorable for real-time processing, was not robust enough to handle large variations, so it missed shanks in some extreme environments (e.g., 10-lux light intensity) and had compromised recall. More complex models [e.g., region-based convolutional neural networks (Li et al., 2021b)] and corresponding computing devices for the robot should be researched in the future.

Figure 6. Success rates for automatically picking up dead birds under different light intensities.

Image Processing

Acceptable performance of the image processing algorithms (i.e., an MAE of 12.5° at 1000 lux) resulted from a combination of factors, including precise deep learning detection results, appropriate algorithm design (e.g., saturation channel extraction, Canny edge detection, and Hough transform), and parameter tuning. Similarly, our previous work demonstrated that extensively analyzing the bounding box results from deep learning detection was favorable for continuously tracking bird behavior when image processing was involved (Li et al., 2021c). With the detected bounding box, the receptive field of the image processing algorithms was narrowed, and the scene to be analyzed became simple compared to the whole image. Some error-causing factors cannot be ignored in image processing algorithm development, such as bird occlusion and overlapping, varying lighting conditions, complex backgrounds, random shadows, etc. (Okinda et al., 2020).

Light Intensity Effect

Lower light intensities reduced the performance of the deep learning model detection, image processing orientation identification, and consequently final pickup performance. Such reduction resulted from features that could not distinguish broiler shanks from other objects in dark environments. Previous testing suggested that deep learning models were not subject to light intensity effects for egg detection (Li et al., 2020). Features (e.g., color, texture, saturation) of broiler shanks differ from those of eggs, and broiler shanks were not visually recognizable under the 10-lux light intensity (fig. 3), thus leading to detection failure. This phenomenon indicates that embedding extra lamps into the robot to illuminate the detection environment may be cost-efficient and helpful for boosting mortality detection and pickup performance in broiler production with dim lights (Olanrewaju et al., 2006). However, the benefits of supplemental light tended to saturate above 60 lux; to save robot operation energy, needlessly increasing light intensity to improve detection and pickup performance is not suggested. The other issue with extremely bright lights is their effect on live birds: there could be significant fear and stress responses to very bright lights in a typical broiler operation (Olanrewaju et al., 2006). Infrared or near-infrared imaging could be another option because it captures images based on the surface temperature of objects and is insensitive to light intensity (Yanagi et al., 2002). Imaging via the infrared technique should be completed before shanks reach the same temperature as the background, which should be validated in the future. Meanwhile, cost efficiency and economical adoption by producers should be considered.

Bird Size Effect

The effect of bird size on detection and pickup performance was not systematically examined due to the uneven number of dead birds across bird ages, but it cannot be ignored in future robot development. Per observation, shank skin color was less saturated (i.e., lighter yellow) for younger birds, which may make shanks less distinguishable from fresh litter due to similar color features. Additional algorithms should be developed for detecting the shanks of young birds. The robot vision system may not capture complete shanks of older birds due to the larger shank size and the limited field of view of the vision system. Elevating the camera to capture more of the scene may be considered, and pixel-to-distance conversion factors should be optimized according to camera height. Typical bird body weight exceeds 2000 g (the maximum payload of the robot) after five weeks of age (Aviagen, 2022); therefore, the robot's capability of holding birds of different weights, sizes, and ages should be further investigated.

Other Causes of Mortality Pickup Failure

Besides the aforementioned factors, other causes of pickup failure should not be ignored. The gripper of the robot adaptively deformed from its normal shape to a twisted shape when it hit tough obstacles. Although the adaptive deformation is favorable for protecting the gripper from breaking, the twisted gripper was unable to grasp broiler shanks successfully after a collision. Figure 7 illustrates challenges related to closely curled shanks (left), overlapping birds (center), and litter covering the shank of interest (right). It should be noted that these cases were determined and randomized in the lab after consultation with broiler farm managers to simulate commercial conditions, and the specific frequency or percentage of each case was not quantified. A more reliable and robust gripper that can withstand collision forces without deformation should be developed.

Figure 7. Sample cases of pickup failure: occlusion by other shanks (left), bird overlapping (center), and occlusion by litter (right). In the right image, the yellow part represents the chicken leg occluded by litter.

Future Work

Developing a robust broiler mortality removal robot that operates autonomously in commercial broiler houses involves complicated procedures. This study achieved the development of the robot arm for broiler mortality pickup, which will be embedded onto a mobile unmanned ground vehicle. The research team has also developed navigation algorithms for the ground robot to roam throughout a broiler house in search of broiler mortalities. Three-dimensional machine vision sensors (e.g., LiDAR) are planned for installation on the ground robot to assist sensing and navigation in a GPS-denied broiler house environment, and appropriate integration of all necessary components, including detection, navigation, servo control, and pickup, should be researched as well. The end effector and camera were fixed together and positioned at a fixed height (~20 cm). As the camera was placed close to the floor and perpendicular to it, the acquired images contained very little distortion affecting hand-eye connections; therefore, our simple coordinate transformation method worked. However, additional calibration procedures [e.g., placing a checkerboard in front of a robotic vision system and building homogeneous transformation matrices (Tabb and Ahmad Yousef, 2017)] are needed for applications with severely distorted views in the acquired images.

Conclusion

A broiler mortality removal robot was developed to automatically collect dead birds. The broiler shank was the target anatomical part for detection and mortality pickup, with a perpendicular orientation between gripper long axis and broiler shank resulting in higher success rates than the non-perpendicular orientation. For detecting and locating broiler shanks, the YOLO V4 outperformed the YOLO V3 with regard to precision, recall, F1 score, and processing speed. Lower light intensities reduced the performance of the deep learning model detection, image processing orientation identification, and final pickup performance. The final success rate for picking up dead birds was 90.0% at the 1000-lux light intensity.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.

Acknowledgments

This research was funded by the USDA Agricultural Research Service Cooperative Agreement number 58-6064-015. The authors appreciate the experimental assistance and facility maintenance from USDA technicians and undergraduate assistants at Mississippi State University.

References

Astill, J., Dara, R. A., Fraser, E. D. G., Roberts, B., & Sharif, S. (2020). Smart poultry management: Smart sensors, big data, and the internet of things. Comput. Electron. Agric., 170, 105291. https://doi.org/10.1016/j.compag.2020.105291

Aviagen. (2022). ROSS 708: Performance objectives. Retrieved from https://en.aviagen.com/assets/Tech_Center/Ross_Broiler/RossxRoss708-BroilerPerformanceObjectives2022-EN.pdf

Big Dutchman. (2022). ChickenBoy: Higher productivity and increased welfare with an autonomous robot and artificial intelligence. Retrieved from https://www.bigdutchman.com/en/poultry-growing/products/detail/chickenboy/

Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv:2004.10934, 1-17. https://arxiv.org/abs/2004.10934v1

Carey, J. B., Lacey, R. E., & Mukhtar, S. (2004). A review of literature concerning odors, ammonia, and dust from broiler production facilities: 2. Flock and house management factors. J. Appl. Poult. Res., 13(3), 509-513. https://doi.org/10.1093/japr/13.3.509

Chang, C.-L., Xie, B.-X., & Wang, C.-H. (2020). Visual guidance and egg collection scheme for a smart poultry robot for free-range farms. Sensors, 20(22), 6624. https://doi.org/10.3390/s20226624

Dent, J. E., Kao, R. R., Kiss, I. Z., Hyder, K., & Arnold, M. (2008). Contact structures in the poultry industry in Great Britain: Exploring transmission routes for a potential avian influenza virus epidemic. BMC Veterinary Research, 4(1), 27. https://doi.org/10.1186/1746-6148-4-27

Fernandes, L. A. F., & Oliveira, M. M. (2008). Real-time line detection through an improved Hough transform voting scheme. Pattern Recognit., 41(1), 299-314. https://doi.org/10.1016/j.patcog.2007.04.003

Gates, R. S., Casey, K. D., Wheeler, E. F., Xin, H., & Pescatore, A. J. (2008). U.S. broiler housing ammonia emissions inventory. Atmos. Environ., 42(14), 3342-3350. https://doi.org/10.1016/j.atmosenv.2007.06.057

Gonzalez, R. C., & Woods, R. E. (2002). Digital image processing (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Holland, W. C. (2005). US6939218B1: Method and apparatus of removing dead poultry from a poultry house. Google Patents.

Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A.,... Guadarrama, S. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (pp. 7310-7311). Piscataway, NJ: IEEE. http://doi.org/10.1109/CVPR.2017.351

Li, G., Chesser, G. D., Huang, Y., Zhao, Y., & Purswell, J. L. (2021a). Development and optimization of a deep-learning-based egg-collecting robot. Trans. ASABE, 64(5), 1659-1669. https://doi.org/10.13031/trans.14642

Li, G., Huang, Y., Chen, Z., Chesser, G. D., Purswell, J. L., Linhoss, J., & Zhao, Y. (2021b). Practices and applications of convolutional neural network-based computer vision systems in animal farming: A review. Sensors, 21(4), 1492. https://doi.org/10.3390/s21041492

Li, G., Hui, X., Chen, Z., Chesser, G. D., & Zhao, Y. (2021c). Development and evaluation of a method to detect broilers continuously walking around feeder as an indication of restricted feeding behaviors. Comput. Electron. Agric., 181, 105982. https://doi.org/10.1016/j.compag.2020.105982

Li, G., Xu, Y., Zhao, Y., Du, Q., & Huang, Y. (2020). Evaluating convolutional neural networks for cage-free floor egg detection. Sensors, 20(2), 332. https://doi.org/10.3390/s20020332

Li, T. (2016). Study on caged layer health behavior monitoring robot system. PhD diss. China Agricultural University, College of Engineering.

Liu, H.-W., Chen, C.-H., Tsai, Y.-C., Hsieh, K.-W., & Lin, H.-T. (2021). Identifying images of dead chickens with a chicken removal system integrated with a deep learning algorithm. Sensors, 21(11), 3579. https://doi.org/10.3390/s21113579

National Chicken Council. (2020). U.S. broiler performance. Retrieved from https://www.nationalchickencouncil.org/about-the-industry/statistics/u-s-broiler-performance/

Octopus Biosafety. (2022). The XO solution for efficient and responsible poultry farming. Retrieved from https://www.octopusbiosafety.com/en/animal-welfare/

Okinda, C., Nyalala, I., Korohou, T., Okinda, C., Wang, J., Achieng, T.,... Shen, M. (2020). A review on computer vision systems in monitoring of poultry: A welfare perspective. Artif. Intell. Agric., 4, 184-208. https://doi.org/10.1016/j.aiia.2020.09.002

Olanrewaju, H. A., Thaxton, J. P., Dozier, W. A., Purswell, J., Roush, W. B., & Branton, S. L. (2006). A review of lighting programs for broiler production. Int. J. Poult. Sci., 5(4), 301-308. https://doi.org/10.3923/IJPS.2006.301.308

Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv:1804.02767, 1-6. https://arxiv.org/abs/1804.02767v1

Ren, G., Lin, T., Ying, Y., Chowdhary, G., & Ting, K. C. (2020). Agricultural robotics research applicable to poultry production: A review. Comput. Electron. Agric., 169, 105216. https://doi.org/10.1016/j.compag.2020.105216

Rong, W., Li, Z., Zhang, W., & Sun, L. (2014). An improved CANNY edge detection algorithm. Proc. 2014 IEEE Int. Conf. on Mechatronics and Automation (pp. 577-582). Piscataway, NJ: IEEE. https://doi.org/10.1109/ICMA.2014.6885761

Schwean-Lardner, K., Fancher, B. I., Gomis, S., Van Kessel, A., Dalal, S., & Classen, H. L. (2013). Effect of day length on cause of mortality, leg health, and ocular health in broilers. Poult. Sci., 92(1), 1-11. https://doi.org/10.3382/ps.2011-01967

Tabb, A., & Ahmad Yousef, K. M. (2017). Solving the robot-world hand-eye(s) calibration problem with iterative methods. Mach. Vis. Appl., 28, 569-590. https://doi.org/10.1007/s00138-017-0841-7

Tibot. (2022). Broiler farming-Spoutnic NAV. Retrieved from https://www.tibot.fr/elevage/robot-poulet-chair/

Tottori, J., Yamaguchi, R., Murakawa, Y., Sato, M., Uchida, K., & Tateyama, S. (1997). The use of feed restriction for mortality control of chickens in broiler farms. Avian Dis., 41(2), 433-437. https://doi.org/10.2307/1592200

Usher, C. T., Daley, W. D., Joffe, B. P., & Muni, A. (2017). Robotics for poultry house management. ASABE Paper No. 1701103. St. Joseph, MI: ASABE. https://doi.org/10.13031/aim.201701103

Vroegindeweij, B. A., Blaauw, S. K., IJsselmuiden, J. M., & van Henten, E. J. (2018). Evaluation of the performance of PoultryBot, an autonomous mobile robotic platform for poultry houses. Biosyst. Eng., 174, 295-315. https://doi.org/10.1016/j.biosystemseng.2018.07.015

Yanagi, T., Xin, H., & Gates, R. S. (2002). A research facility for studying poultry responses to heat stress and its relief. Appl. Eng. Agric., 18(2), 255-260. https://doi.org/10.13031/2013.7787

Zeng, A., Song, S., Yu, K.-T., Donlon, E., Hogan, F. R., Bauza, M.,... Rodriguez, A. (2018). Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. Proc. 2018 IEEE Int. Conf. on Robotics and Automation (ICRA) (pp. 3750-3757). Piscataway, NJ: IEEE. https://doi.org/10.1109/ICRA.2018.8461044