MODELING AND RECOGNIZING BEHAVIOR PATTERNS OF LAYING HENS IN FURNISHED CAGES
Published by the American Society of Agricultural and Biological Engineers, St. Joseph, Michigan www.asabe.org
Citation: Proceedings of the Seventh International Symposium, 18-20 May 2005 (Beijing, China), publication date 18 May 2005, 701P0205 (doi:10.13031/2013.18430)
Authors: T. Leroy, E. Vranken, E. Struelens, B. Sonck, J. and D. Berckmans
Keywords: Computer vision, image processing, poultry housing, behavior, classification
Automated surveillance of individual animal behavior, by means of low-cost cameras and computer vision techniques, can generate continuous data that provide an objective measure of behavior without disturbing the animals.
The specific purpose of this study was to develop an automatic computer vision technique for continuously quantifying six types of behavior of an individual laying hen (standing, sitting, sleeping, preening, scratching, and pecking), and to compare the results with those of current human observation.
For this purpose, a model-based algorithm was developed, based on the fact that behavior can be described as a time series of subsequent postures. The quantification of the hen's posture consists of its position, its orientation, and a set of parameters describing its shape, obtained by fitting a point distribution model to the hen's outline. By applying this algorithm to subsequent images in a video sequence, the successive values of the hen's posture parameterization represent the hen's behavior within that sequence. A model for each behavior type is created by clustering the set of posture parameterizations calculated from training video sequences with known behavior, provided by a trained ethologist. To classify the unknown behavior in a new video fragment, its posture parameter time series is calculated with the same algorithm and matched against each of the trained behavior models; the fragment is then classified as the behavior type whose model gives the best match. (Both steps are sketched in the code examples below.)
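As an illustration of the posture-quantification step, the following minimal sketch extracts a posture vector (position, orientation, shape parameters) from one hen outline using a point distribution model. It assumes the model (mean shape and orthonormal PCA shape modes) has already been trained offline on hand-labeled outlines; the simple similarity alignment stands in for the paper's actual fitting procedure, and all names are illustrative.

```python
# A minimal sketch of the posture-quantification step, assuming the
# point distribution model (mean shape and orthonormal PCA shape modes)
# has already been trained on hand-labeled outlines. The similarity
# alignment below is a simplified stand-in for the paper's fitting
# procedure, and all names are illustrative.
import numpy as np

def posture_from_outline(outline, mean_shape, shape_modes):
    """Return [x, y, orientation, shape params] for one hen outline.

    outline, mean_shape: (n_points, 2) arrays of corresponding points.
    shape_modes: (n_modes, 2 * n_points) orthonormal PCA shape modes.
    """
    position = outline.mean(axis=0)           # hen's position = centroid
    centred = outline - position
    ref = mean_shape - mean_shape.mean(axis=0)

    # Least-squares similarity fit (complex Procrustes): find the
    # rotation and scale that best map the mean shape onto the outline.
    denom = (ref ** 2).sum()
    a = (centred * ref).sum() / denom
    b = (ref[:, 0] * centred[:, 1] - ref[:, 1] * centred[:, 0]).sum() / denom
    orientation = np.arctan2(b, a)
    scale = np.hypot(a, b)

    # Undo rotation and scale, then project the residual shape
    # deviation onto the PCA modes to obtain the shape parameters.
    c, s = np.cos(-orientation), np.sin(-orientation)
    aligned = centred @ np.array([[c, -s], [s, c]]).T / scale
    shape_params = shape_modes @ (aligned - ref).ravel()
    return np.concatenate([position, [orientation], shape_params])
```

Running this on every frame of a fragment yields the posture time series used in the next step.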
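The clustering and matching steps might then look like the sketch below. The paper does not specify the clustering algorithm or the matching score, so the per-behavior k-means codebooks and the mean nearest-center distance used here are assumptions, and `fit_behavior_models` and `classify_fragment` are hypothetical names.

```python
# A minimal sketch of behavior-model training and classification over
# per-frame posture vectors. K-means codebooks and a mean
# nearest-center distance are assumptions, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

def fit_behavior_models(training_postures, n_clusters=8):
    """training_postures: dict mapping a behavior label (e.g. "sleeping")
    to an (n_frames, n_params) array of posture vectors pooled from the
    labeled training fragments. Returns one codebook per behavior."""
    models = {}
    for behavior, postures in training_postures.items():
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        km.fit(postures)
        models[behavior] = km.cluster_centers_
    return models

def classify_fragment(posture_series, models):
    """Assign a new fragment's (n_frames, n_params) posture time series
    to the behavior whose codebook matches it best, i.e. the lowest
    mean distance from each frame to its nearest cluster center."""
    best_behavior, best_score = None, np.inf
    for behavior, centers in models.items():
        dists = np.linalg.norm(
            posture_series[:, None, :] - centers[None, :, :], axis=2
        ).min(axis=1)
        if dists.mean() < best_score:
            best_behavior, best_score = behavior, dists.mean()
    return best_behavior
```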
The system was tested on a set of over 14,000 video fragments of a single hen in a cage, each fragment containing one of the six behavior types. Classification rates ranged from 70% to 96% for five of the six behaviors, but reached only 21% for pecking, due to unreliable tracking of the chicken's head. The best results were obtained for sleeping (96%) and standing (90%).