
Computer Vision Approach for Tile Drain Outflow Rate Estimation

Sierra N. Young1, Meng Han2, Joshua M. Peschel3,*


Published in Applied Engineering in Agriculture 39(2): 153-165 (doi: 10.13031/aea.15157). Copyright 2023 American Society of Agricultural and Biological Engineers.


1Civil and Environmental Engineering, Utah State University, Logan, Utah, USA.

2Civil and Environmental Engineering, University of Illinois, Urbana, Illinois, USA.

3Agricultural and Biosystems Engineering, Iowa State University, Ames, Iowa, USA.

*Correspondence: peschel@iastate.edu

The authors have paid for open access for this article. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License: https://creativecommons.org/licenses/by-nc-nd/4.0/

Submitted for review on 26 April 2022 as manuscript number ITSC 15157; approved for publication as a Research Article by Community Editor Dr. Seung-Chul Yoon of the Information Technology, Sensors, & Control Systems Community of ASABE on 26 January 2023.


Abstract. This article presents a computer vision-based approach for monitoring water flow at outlet points of a tile drain system. The approach relies only on video capture of events at outlet points, thus a camera can be installed remotely and without contact with water. The algorithm detects, identifies, and tracks flows by motion, shape, and color features and measures flow rate based on a proposed model and two provided dimensions. The software was tested in a laboratory environment with three different target flow rate conditions: 0.312, 0.946, and 1.58 L/s (5, 15, and 25 gal/min). Flow rates reported by the computer vision approach are within 12% of the ground-truth flow rate baseline. The results of this work show that computer vision can be used as a reliable method for monitoring outlet flows from free-standing outlet structures under laboratory conditions. This work opens the possibility of applying computer vision techniques in tile drain monitoring from outlet points with mobile video recording devices in the field.

Keywords. Morphological transformations, Outflow detection, Outlet flow, Real-time application.

Subsurface (or ‘tile’) drainage is a common water table management practice for agricultural lands to reduce flooding and maintain productivity. These drainage systems are used extensively throughout humid and wet climates in the U.S. Midwest, Canada, and northern Europe with persistent or seasonally high water tables (Pavelis, 1987; Gilliam et al., 1999). While tile drains can improve field conditions leading to increased agricultural productivity, these systems alter the hydrologic flow pathways and the rates of nutrients and sediment transported into receiving waterways (Laubel et al., 1999; Macrae et al., 2007; King et al., 2015). The manipulation of flow routes by installing subsurface drainage systems can increase the discharge of excess water, including after precipitation events (King et al., 2014). Further, tile drain outlet flow measurements are critical for water balance and hydrological studies as surface flow is highly influenced by drainage flow for tile-drained watersheds (De Schepper et al., 2015). Also, the inclusion of tile drain discharge in coupled surface water-groundwater models is necessary for calibration (Hansen et al., 2013). Understanding outlet flows is particularly important for complex systems which might drain water from multiple fields with different management (fig. 1). In this context, outlet monitoring also aids in understanding discharges and identifying possible design problems and causes for failure. Therefore, monitoring the magnitude and duration of outlet flow rates at tile drain outlets is critical for assessing drainage performance and understanding the effects of subsurface drainage on watershed hydrology.

The current state of practice for outlet flow monitoring is to use sensors or other instrumentation for monitoring flow. These include systems such as ultrasonic flow meters (Enamorado et al., 2007; Szejba and Bajkowski 2019), pressure transducers (Abendroth et al., 2022), instrumentation installed at drain pump stations (Scherer and Jia, 2010), and proportional flow weirs (Vendelboe et al., 2016; Abendroth et al., 2022). However, instrumentation and sensor systems can be expensive, require multiple components, and are placed in harsh environments, which decreases their expected lifetime. Further, outlet and drain access may be limited, inhibiting the installation and maintenance of instrumentation. Modeling approaches can predict flow without instrumentation by simulating the hydrologic responses of drained agricultural systems, often using widely available and standard tools such as DRAINMOD (Skaggs et al., 2012; Youssef et al., 2018); however, model accuracy relies on accurate identification of input parameters (such as soil hydraulic parameters), hydrologic variables (such as rainfall estimates), and information regarding tile drain conditions, which may not always be available or accurate (Singh et al., 2006). Therefore, methods are needed for low-cost, quick-deployment, extended-term outlet monitoring to enable timely action when flow events occur.

Figure 1. Example agriculture subsurface (tile) drain outlets shown from (a) a side view and (b) a top view.

Computer vision has emerged as a key, low-cost method for non-contact monitoring and surveilling water-related infrastructure within our environment (Jeanbourquin et al., 2011; Narayanan et al., 2014; Iqbal et al., 2021; Sabbatini et al., 2021). Typical applications for computer vision approaches include water level estimation, flood detection, and surface velocity estimation. Water level can be measured directly in an image using a gauge or reference marker within the frame, where marker features correspond to physical dimensions used for calibration and measurement (Gilmore et al., 2013; Sabbatini et al., 2021). Photogrammetric techniques can detect camera movements to make this process more robust to external disturbances (Lin et al., 2018). Alternatively, reference objects within an image, including static objects such as structural elements (Jafari et al., 2021), or dynamic objects such as people (Vallimeena et al., 2018) and cars (Park et al., 2021), can calibrate pixel dimensions to estimate water level. A few studies have estimated outlet flow rates using water level information and computational fluid dynamics (Fach et al., 2008); however, extreme rainfall events limit the collection of water level information. Further, the calibration rating curve from flow depth to flow rate is a global and general hydraulic model, which may not capture local hydraulic effects (Jeanbourquin et al., 2011).

Computer vision approaches can also estimate water flow velocity. Two commonly used methods include Particle Image Velocimetry (PIV) (Willert and Gharib, 1991) and Particle Tracking Velocimetry (PTV). PIV determines vector velocity fields by measuring the displacement of numerous particles that follow the motion of the fluid. PTV uses a similar approach but relies on identifying and tracking single particles (Ohmi and Li, 2000). These methods often require illuminating seeded or preexisting tracer particles in the fluid for detection and tracking. For example, lasers illuminated small particles in a river environment for velocity tracking (Tauro et al., 2016). River surface flows were also measured using PIV methods in low flow conditions using rice crackers as seeded particles in video collected from a helicopter (Fujita and Hino, 2003). Since the development of PIV methods, image processing techniques have been developed that do not require particle illumination or seeded particles. These methods use computer vision approaches to detect and track visual features between subsequent frames, such as surface ripples or small white caps. These surface feature tracking methods successfully estimated water velocities using RGB (Fujita and Hino, 2003; Jeanbourquin et al., 2011) and stereo (Sirazitdinova et al., 2018) imaging systems in open channel flow environments.

In addition to velocity estimation in open channel flow, computer vision methods exist for estimating water levels, velocities, and flows from imaging sensors placed within pipe structures for monitoring. In these environments, flow rates or velocities are directly computed using geometric features extracted from images. One study used contrast adjustments to detect water level boundary lines within sewer pipes, and a deep learning approach performed semantic segmentation of the wetted pipe area (Ji et al., 2020). Then, geometric features, including hydraulic radius, were calculated given known pipe dimensions to estimate flow. In addition to geometry-based approaches, deep learning models can estimate water levels in sewer pipes using classification and regression methods (Haurum et al., 2020) and detect defects within sewer and stormwater pipes (Tennakoon et al., 2018). However, computer vision-based methods to directly measure flow at stormwater and sewer system outlet points are lacking.

To address this need, the objectives of this study were to develop and validate a non-contact, vision-based approach to serve as a proof-of-concept method suitable for detecting flow events and estimating flow rates from free-standing, free-flowing tile drain outlet structures. A computer vision-based approach was developed using RGB imagery to detect outlet flow from tile drains and measure flows ranging from approximately 0.312 to 1.58 L/s (5 to 25 gal/min). Vision-based flow estimates were compared to validation measurements collected from in-line flow meters and volumetric measurements in a controlled experiment. This new method does not require physical access to the structure to measure flow, which is desirable in situations where the outlet may be difficult to access.

Materials and Methods

Laboratory Setup and Data Collection

The laboratory setup was a self-recirculating system with a controllable flow rate. The setup included a large reservoir container with a pump that directed water up through a flow meter and valve, and ultimately through a horizontal pipe representing the outlet (fig. 2). An adjustable in-line valve controlled the flow magnitude. A 3/4 in. flow meter with an embedded totalizer (Atlas Scientific, Long Island City, N.Y.) obtained flow measurements in real time. An Arduino microcontroller recorded flow data, and a pre-filter prevented sediment from entering the meter and affecting readings. An iPhone 5 (Apple, Cupertino, Calif.) with an 8-megapixel camera, mounted on a tripod, recorded video of the experiments. Original video was recorded at 30 fps with a resolution of 1080 × 1920 pixels.

Figure 2. Overview of the lab setup consisting of a horizontal outlet pipe, corrugated pipe tubing, valve, pump, pre-filter, flow meter, and reservoir. Arrows represent the direction of flow.

Flow Parameters of Interest

The following parameters were calculated for each experiment: flow occurrence, flow duration, and outlet flow rate. Each parameter and corresponding ground truth data are described in detail below.

Occurrence

The occurrence of a flow event is the presence of any flow, regardless of duration and rate. Flow occurrence was determined by visually inspecting video frames.

Duration

Duration is the elapsed time between the beginning and ending of a flow event. The beginning of an event occurs when flow first starts to exit the pipe; the end of an event occurs when the flow is no longer exiting the pipe. Duration was also determined by visually inspecting video frames for the start and end of a flow event and subtracting their corresponding time stamps.

Flow Rate

Flow rate is the volume of outlet flow per unit time. A flow meter measured ground-truth flow rate data during the low flow experiments. However, the vision-based approach captured flow at the outlet at a different point in time than the flow meter, because the flow arrives at the flow meter before it reaches the outlet of the horizontal pipe. Therefore, a time stamp shift was used to temporally align the flow data: if flow was first detected by the flow meter at the Nth second of the flow meter data and at the (N+S)th second of the video data, then the video stream data were shifted back by S seconds. Flow rates for the medium and high flows were calculated by averaging the volume of water collected over time.
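The shift described above amounts to subtracting S from every video timestamp. A minimal sketch (the helper name and sample values are illustrative, not from the study):

```python
def align_video_to_meter(meter_onset_s, video_onset_s, video_samples):
    """Shift (time, flow) video samples back by S = video_onset - meter_onset
    seconds so that both series report flow onset at the same time."""
    shift = video_onset_s - meter_onset_s  # S in the text
    return [(t - shift, q) for t, q in video_samples]

# Hypothetical example: meter sees flow at 10.0 s, video at 12.0 s (S = 2.0 s)
samples = [(12.0, 0.0), (12.2, 0.30), (12.4, 0.31)]
aligned = align_video_to_meter(meter_onset_s=10.0, video_onset_s=12.0,
                               video_samples=samples)
```

After alignment, the first video sample carries the meter's onset timestamp, so the two series can be compared point by point.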

Modeling Pipe Outlet Flow

Flow through the horizontal pipe in the laboratory setup was determined using the continuity equation:

Q = vA    (1)

Figure 3. Diagram illustrating the flow calculation model. R is the radius of horizontal pipe, D is the depth of flow in the horizontal pipe, H is the vertical distance between the bottom of the horizontal pipe and the surface of the container, W is the horizontal traveling distance of flow, and points 1-3 mark the necessary locations for parabola fitting determined by the vision-based approach.

where Q is the flow rate, v is the flow velocity, and A is the wetted area in the horizontal pipe (fig. 3). This model requires four assumptions: (i) The cross section of the horizontal pipe is circular; this is a reasonable assumption given that tile drain outlets are often round structures. (ii) Horizontal velocity does not decay while the flow travels from the horizontal pipe to the receiving water body (or receptacle), given that the traveling distance is relatively short. The initial horizontal velocity, i.e., the velocity of water as it exits the horizontal pipe, is taken as the flow velocity. The outlet flow is assumed to be free-falling with an initial vertical velocity of zero; consequently, the trajectory of the flow after leaving the outlet is modeled as a parabola. (iii) Two real-world physical dimensions are provided: the vertical distance between the bottom of the horizontal pipe and the top of the container (H), and the radius of the outlet (R) (fig. 3). The following calculations of horizontal velocity and wetted area assume that H and R are provided. (iv) The camera captures imagery at a perpendicular angle to the outlet pipe and flow plane.

Based on equation 1 and the four assumptions above, the flow velocity v and the wetted area A in the pipe are calculated as follows. Once Points 1, 2, and 3 are determined from the captured video, the pixel distance corresponding to H is y2 − y1; similarly, the pixel distance corresponding to W is x2 − x1.

Therefore, according to assumption (iv), the real-world distance W can be expressed in terms of the provided dimension H and the pixel distances as:

W = H · (x2 − x1)/(y2 − y1)    (2)

The traveling time t of the outlet flow traveling from the pipe to the top of the container can be calculated as:

t = √(2H/g)    (3)

where g is the gravitational constant. Horizontal flow velocity, vh, remains constant according to assumption (ii). Consequently,

vh = W/t    (4)

Once the horizontal flow is determined, the wetted area A in the pipe can be calculated. First, the depth of flow D is calculated from the pixel locations of Points 1 and 3 and the provided dimension H as:

D = H · (y1 − y3)/(y2 − y1)    (5)

A cross section of the horizontal outlet pipe is shown in figure 4. Chord length L is calculated by the Pythagorean Theorem:

L = 2·√(R² − (R − D)²)    (6)

The central angle θ in radians is calculated as:

θ = 2·arccos((R − D)/R)    (7)

Therefore, the wetted area A in the pipe is calculated as:

A = (R²/2)·(θ − sin θ)    (8)

Finally, equations 1, 4, and 8 can be used to calculate the outlet flow rate:

Q = vh·A = W·√(g/(2H)) · (R²/2)·(θ − sin θ)    (9)
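Equations 1-9 reduce to a short computation once the pixel coordinates of Points 1-3 and the two real-world dimensions are known. A minimal sketch in Python, assuming SI units and an image y axis that increases downward (the function name and example pixel values are illustrative, not from the study):

```python
import math

def outlet_flow_rate(x1, y1, x2, y2, x3, y3, H, R, g=9.81):
    """Flow rate Q (m^3/s) from pixel coordinates of Points 1-3 and the
    two provided real-world dimensions H and R (m); equations 1-9."""
    scale = H / (y2 - y1)               # meters per pixel (assumption iv)
    W = (x2 - x1) * scale               # horizontal travel distance (eq. 2)
    D = (y1 - y3) * scale               # depth of flow in the pipe (eq. 5)
    t = math.sqrt(2 * H / g)            # free-fall travel time (eq. 3)
    v = W / t                           # horizontal flow velocity (eq. 4)
    theta = 2 * math.acos((R - D) / R)  # central angle (eq. 7)
    A = (R ** 2 / 2) * (theta - math.sin(theta))  # wetted area (eq. 8)
    return v * A                        # continuity (eqs. 1, 9)

# Example with the laboratory dimensions (H = 13.97 cm, R = 3.81 cm)
# and hypothetical pixel coordinates for Points 1-3:
Q = outlet_flow_rate(x1=0, y1=120, x2=50, y2=220, x3=0, y3=100,
                     H=0.1397, R=0.0381)
```

For these sample coordinates the sketch yields a flow rate on the order of the study's low flow condition; only H and R need to be supplied as ground-truth dimensions, matching the model's assumption (iii).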

Figure 4. Pipe cross section. The wetted area is highlighted in blue.

Image Processing Workflow

Horizontal tile drain outlet pipes vary in color, dimension, and length; therefore, there are no easily detectable common features or descriptors for robust detection of horizontal pipes across different videos. Similarly, water detection is a difficult computer vision task because of reflection and transparency. Given these challenges, this study implemented motion detection using background subtraction to detect the occurrence and duration of horizontal outlet flow events. Before applying background subtraction, videos were downsampled to 480 × 84 pixels at 5 fps to facilitate real-time computation. After motion detection, morphological closing was applied to the background-subtracted frame to improve motion detection performance, and the percentage of pixels detected as motion determines whether the detection is true motion or noise. The final step is to identify flow events and discriminate pipe outlet flow from other motion within the frame using shape and color features. An overview of this approach is shown in figure 5, and each step is described in detail below.

Motion Detection Using Background Subtraction

Several assumptions were made to apply background subtraction for motion detection. First, it was assumed that the camera was static while capturing video, and that the distance from the camera to the flow is fixed. It was also assumed that the flow is not occluded, and that there is only one flow scene in any given video. Given these assumptions, background subtraction calculates the foreground mask by performing a subtraction between the current frame and a background model containing the static part of the scene. There are many available background subtraction models (Bradski, 2000; Sobral, 2013). To determine the best model for this scenario of outlet flow detection, a variety of models were tested on captured video (Bradski, 2000; Sobral, 2013). A brief comparison among the three top performing algorithms is shown in figure 6.

Figure 6 shows the output of three different background subtraction motion detection approaches: two mixture of Gaussians (MOG) approaches [the MOG (KaewTraKulPong and Bowden, 2002) and MOG2 (Zivkovic and Van Der Heijden, 2006) models] and a weighted moving variance (WMV) model (Sobral, 2013). Compared to the source frames, the MOG implementation (fig. 6b) results in incomplete detection, while MOG2 (fig. 6c) introduces noise and detects part of the static scene. The WMV model largely avoids both problems, resulting in a complete, outlined flow detection. In addition to the quality of motion detection, computational cost is another important criterion. With similar detection performance, WMV is much less computationally expensive than other models tested in the set of background subtraction models (e.g., Yao and Odobez, 2007; Hofmann et al., 2012). Therefore, WMV was selected as the background subtraction model for flow detection in this application.
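A weighted moving variance model of the kind selected here can be sketched compactly: each pixel's intensity variance, weighted toward the most recent frames, is compared against a threshold. This numpy sketch follows the common three-frame formulation; the weights and variance threshold are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def wmv_mask(frames, weights=(0.5, 0.3, 0.2), var_thresh=100.0):
    """Foreground mask from the last three grayscale frames using a
    weighted moving variance; the newest frame gets the largest weight."""
    f = np.stack(frames[::-1]).astype(float)            # newest frame first
    w = np.asarray(weights, dtype=float)[:, None, None]
    mean = (w * f).sum(axis=0)                          # weighted mean
    var = (w * (f - mean) ** 2).sum(axis=0)             # weighted variance
    return var > var_thresh                             # True = motion

# Static background with a bright "flow" region appearing in the last frame
a = np.zeros((20, 20)); b = np.zeros((20, 20)); c = np.zeros((20, 20))
c[5:10, 5:10] = 200.0
mask = wmv_mask([a, b, c])
```

Static pixels have zero variance and stay in the background, while the newly appearing region is flagged as foreground; unlike the MOG family, no per-pixel Gaussian mixture needs to be maintained, which is consistent with WMV's lower computational cost.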

Figure 5. Diagram illustrating an overview of the primary image processing steps to estimate outlet flow from captured video.
Figure 6. Performance evaluations of different background subtraction models under flow conditions. (a) Source frame with flow. (b) Detected region by MOG. (c) Detected region by MOG2. (d) Detected region by WMV.

Morphological Transformations

Despite the improved performance of the WMV model, background subtraction can still result in incomplete coverage of detected pixels due to irregular reflections; therefore, morphological transformations were applied after background subtraction. Small holes inside the foreground objects were closed using dilation followed by erosion. Dilation is a process in which an image A is convolved with a kernel B. As kernel B is scanned over image A, the pixel value at the anchor point, usually the center of the kernel, is replaced by the maximum pixel value overlapped by B. Dilation closes holes within the detected region; however, it also expands the existing borders of detected objects. This is addressed by applying erosion after dilation. Erosion differs from dilation only in that the pixel value at the anchor point is replaced by the minimum pixel value overlapped by the kernel. Morphological closing improved the performance of motion and flow detection. Results from morphological closing shown in figure 7c are overlapped onto the source image shown in figure 7a. The detected outlet flow is a more complete object compared to the immediate output from background subtraction. The resulting frame after morphological closing shows the detected motion and is used as a binary mask to help determine the numerical threshold for motion detection.
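Morphological closing (dilation followed by erosion) can be illustrated with a small numpy sketch using a 3 × 3 kernel and zero padding; this is an illustrative implementation, not the study's code:

```python
import numpy as np

def _morph(mask, op):
    """3x3 dilation (op=np.max) or erosion (op=np.min) on a binary mask;
    the anchor point takes the max/min value overlapped by the kernel."""
    p = np.pad(mask, 1)                     # zero padding at the borders
    h, w = mask.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return op(np.stack(windows), axis=0)

def closing(mask):
    """Morphological closing: dilation followed by erosion."""
    return _morph(_morph(mask, np.max), np.min)

# A detected flow region with a small hole left by irregular reflections
m = np.zeros((7, 7), dtype=np.uint8)
m[1:6, 1:6] = 1
m[3, 3] = 0                                 # hole inside the foreground
closed = closing(m)
```

The hole inside the region is filled by the dilation step, and the subsequent erosion shrinks the expanded borders back to the original footprint, which is exactly the behavior described above.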

Distinguishing between Motion Events

Results from background subtraction can be sensitive to noise within the scene, so a method was developed to determine whether detected motion is “true” motion or background noise. Detected motion is represented as white pixels in the frame using the binary mask output after the morphological transformation. The percentage of mask pixels determined the magnitude threshold, or size, for outlet flow motion within the frame. A threshold for this parameter was determined by capturing videos with the three different flow rates shown in figure 8 and calculating the percentage of pixels detected as flow within the frame. As shown in figure 8, when there is no flow, the percentage is zero or near zero. When flow starts, the percentage of white pixels increases and remains steady during the flow event. After the pump is turned off, the percentage recovers to the no-flow level. The percentage drops to the scale of 10^-4 when the flow is very low (i.e., dripping from the outlet). Therefore, the motion threshold was set to 10^-4. If the percentage of motion-detected pixels in a frame is below this threshold, the motion is regarded as environmental disturbance and the scene is regarded as static. When the percentage of pixels is above this threshold, the scene is considered to have true motion. Note that this threshold value can easily be calibrated for other imaging setups.
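The thresholding rule reduces to a one-line check on the binary mask. A hedged sketch (function and variable names are illustrative):

```python
import numpy as np

MOTION_THRESHOLD = 1e-4  # fraction of frame pixels, as calibrated above

def has_true_motion(mask, threshold=MOTION_THRESHOLD):
    """True when the fraction of foreground (white) pixels in the binary
    mask exceeds the noise threshold described above."""
    return np.count_nonzero(mask) / mask.size > threshold

# A static scene yields an (almost) empty mask; a flow event fills a region.
static = np.zeros((200, 200), dtype=bool)
flowing = static.copy()
flowing[100:150, 40:100] = True
```

Because the check is a fraction of the frame rather than an absolute pixel count, the same threshold transfers across frame resolutions, which is why it can be recalibrated easily for other imaging setups.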

Figure 7. Enhanced performance of detection after morphological closing. (a) Sample flow source frame, (b) frame after background subtraction, (c) frame after morphological closing, (d) overlap of frame (c) on (a).
Figure 8. Percentage of white pixels under three flow rate conditions.

After motion detection, shape and color features of the detected motion were used to classify flow events versus other motion events (such as external disturbances). Direct use of color values is challenging, as the color of detected flow events is not constant. Since color is partly subject to lighting conditions, a histogram of values independent of brightness was calculated as a feature for comparison between motion events. Original Red-Green-Blue (RGB) frames were converted to Hue-Saturation-Value (HSV), and histograms were calculated using the H and S channels. When motion was detected, the H-S histogram of the detected region was computed, and its correlation with a baseline histogram was calculated. An H-S histogram of motion detected from the medium flow condition was used as the comparison baseline for this application. The correlation of two histograms H1 and H2 with N bins can be calculated as follows (Bradski, 2000):

d(H1, H2) = Σ_I (H1(I) − H̄1)·(H2(I) − H̄2) / √[ Σ_I (H1(I) − H̄1)² · Σ_I (H2(I) − H̄2)² ]    (10)

where

H̄k = (1/N) · Σ_J Hk(J)    (11)

The correlation is between [0, 1], where 1 is a perfect correlation between two histograms, and 0 is no correlation. In addition to color, a shape feature baseline was also used for comparison between motion events. Given the physical characteristics of the outlet structures considered, the shape of horizontal outlet flow is parabolic regardless of flow rate. Therefore, the outlet flow parabola from the medium flow condition was selected as the baseline contour shape. Contours of the detected motion were compared with the baseline parabola by matching their Hu Moments (Hu, 1962) as:

I2(A, B) = Σ_{i=1..7} |m_i^A − m_i^B|    (12)

and

m_i^A = sign(h_i^A) · log|h_i^A|    (13)

m_i^B = sign(h_i^B) · log|h_i^B|    (14)

where h_i^A and h_i^B are the Hu Moments of contours A and B, respectively. The smaller I2(A,B) is, the more similar the shapes of the two contours. If the thresholds for histogram correlation and contour shape matching are T_C and T_S, respectively, then detected motion in one frame is determined to be flow only if:

d(H_motion, H_baseline) ≥ T_C    (15)

and

I2(C_motion, C_baseline) ≤ T_S    (16)

Finally, the occurrence of an outlet flow event was considered true only if detected motion met the criteria in equations 15 and 16 for three consecutive frames.
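The two criteria can be sketched as follows, assuming the H-S histograms and the seven Hu Moments have already been computed for the detected motion and the baseline. The threshold values t_c and t_s below are placeholders, as the study's values for T_C and T_S are not restated here:

```python
import numpy as np

def hist_correlation(h1, h2):
    """Equations 10-11: normalized correlation between two H-S histograms."""
    d1 = h1 - h1.mean()
    d2 = h2 - h2.mean()
    return (d1 * d2).sum() / np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())

def shape_distance(hu_a, hu_b):
    """Equations 12-14: distance between two contours' Hu Moments."""
    m_a = np.sign(hu_a) * np.log(np.abs(hu_a))
    m_b = np.sign(hu_b) * np.log(np.abs(hu_b))
    return np.abs(m_a - m_b).sum()

def is_flow(h_motion, h_base, hu_motion, hu_base, t_c=0.7, t_s=0.5):
    """Equations 15-16: motion is flow only if both criteria hold.
    t_c and t_s are placeholder thresholds, not the study's values."""
    return (hist_correlation(h_motion, h_base) >= t_c
            and shape_distance(hu_motion, hu_base) <= t_s)

# Hypothetical 5-bin H-S histograms for a detected region and the baseline
baseline_hist = np.array([5.0, 9.0, 14.0, 7.0, 2.0])
motion_hist = np.array([4.0, 10.0, 13.0, 8.0, 2.0])
corr = hist_correlation(motion_hist, baseline_hist)
```

Requiring both a high color correlation and a low shape distance, for three consecutive frames, is what lets the method reject the hand and bottle disturbances in the later test cases.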

Flow Estimation

Only the region within the image containing the detected outlet flow is used when measuring flow occurrence, duration, and rate. Flow is estimated using geometrical features calculated from the binary image mask. According to equation 9, the only parameters that require calculation from image data for measuring outlet flow rate are the coordinates of Points 1, 2, and 3 (fig. 3). The corresponding points and dimensions of D, H, and W in the binary mask output from flow motion detection are shown in figure 9.

Figure 9. Locations of Points 1-3 and dimensions D, H, and W in a binary detection image.

Based on the assumption that the flow trajectory is parabolic, and assuming the origin (x3, y3) = (0, 0) is located at the top left corner of the image, the coordinates y1 and x2 can be determined by fitting a parabola y = Ax^2 + Bx + C to the lowest flow trajectory that satisfies the conditions A ≥ 0, B ≥ 0, and C ≥ 0, such that the initial vertical velocity is zero or positive downward and the depth of flow is positive. The three unknown parameters can be solved using three points on the parabola. The algorithm for determining parameters A, B, and C aims to decrease uncertainties by averaging several accepted results. As shown in figure 10a, assume there are ten points on the flow that can be used to fit the parabola. In total, there are 120 combinations of three points that can be used to fit the parabola. For each set of three points, A, B, and C are calculated and are only considered candidates if they satisfy the above criteria; otherwise they are discarded. This iterative process continues until all possible combinations have been evaluated. The final values of A, B, and C are obtained by averaging all candidates. Two sample results are shown in figure 10. After a parabola is fitted, the coordinates of Points 1-3 and the dimensions D, H, and W can be determined. Given the provided real-world dimensions R and H, the flow rate is calculated using equation 9.
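The combination-averaging fit can be sketched with numpy and itertools; the function name is illustrative, and noise-free sample points are used for demonstration:

```python
from itertools import combinations
import numpy as np

def fit_parabola(points):
    """Fit y = A*x^2 + B*x + C by solving every 3-point combination and
    averaging the candidates that satisfy A >= 0, B >= 0, C >= 0
    (the image y axis points downward, so gravity makes A non-negative)."""
    candidates = []
    for triple in combinations(points, 3):
        xs, ys = zip(*triple)
        M = np.array([[x ** 2, x, 1.0] for x in xs])
        try:
            a, b, c = np.linalg.solve(M, np.array(ys))
        except np.linalg.LinAlgError:
            continue                      # repeated x values: no unique fit
        if a >= 0 and b >= 0 and c >= 0:
            candidates.append((a, b, c))
    return tuple(np.mean(candidates, axis=0))  # averaged A, B, C

# Ten noise-free points on y = 0.01 x^2 + 0.5 x + 2, as in the 10-point,
# C(10,3) = 120 combination example described above
pts = [(x, 0.01 * x ** 2 + 0.5 * x + 2.0) for x in range(10)]
A, B, C = fit_parabola(pts)
```

With noisy detections, averaging over all accepted 3-point solutions damps the influence of any single poorly placed point, which is the stated purpose of the iteration.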

Implementation of Flow Detection Approach as Windows Application

The above approach was developed into a standalone Windows application for processing pre-recorded video. The application allows a user to select a video clip of a flow event, provide the required physical dimensions, and visualize the resulting flow rates in real time as the application is running. It also creates a .csv file that records the flow rate at a time interval of 0.2 s. This application and example test videos are available for download at https://github.com/peschelgroup/outlet_vision_flow, and a video of how to use the application on a test video is included as Supplementary Video S1.

Analysis of Exported Flow Data

After applying the vision-based approach, data post-processing steps were completed to evaluate the results. Note that these steps are not part of the application described above; they are a user-defined set of post-processing steps that can be modified according to user needs once the flow data are exported as a .csv file. First, outliers were detected for all flow rates using the isoutlier function in Matlab (MathWorks, version 2021b, Natick, Mass.) and were defined as values more than three scaled median absolute deviations from the median. All outliers were removed from the data set and replaced with the mean value. The data were then smoothed by averaging over a moving window of 2 s. Finally, only flow rate estimates during steady-state conditions were extracted and used in estimates of error compared to the ground truth values.
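These post-processing steps were performed in MATLAB; the following Python sketch reimplements the same rules (three scaled median absolute deviations, mean replacement, 2 s moving average) for illustration, using MATLAB's default scaled-MAD factor of about 1.4826:

```python
import numpy as np

def postprocess(flow, window_s=2.0, dt=0.2):
    """Replace outliers (> 3 scaled MADs from the median) with the mean of
    the remaining samples, then smooth with a moving average over a 2 s
    window. Python analogue of the MATLAB isoutlier-based steps above."""
    flow = np.asarray(flow, dtype=float)
    med = np.median(flow)
    smad = 1.4826 * np.median(np.abs(flow - med))   # scaled MAD
    outliers = np.abs(flow - med) > 3 * smad
    cleaned = flow.copy()
    cleaned[outliers] = flow[~outliers].mean()      # mean replacement
    k = int(round(window_s / dt))                   # samples per window
    kernel = np.ones(k) / k
    return np.convolve(cleaned, kernel, mode="same")

# Hypothetical 5 fps flow series (L/s) with one obvious outlier at index 3
rates = [0.26, 0.27, 0.25, 0.52, 0.26, 0.27,
         0.26, 0.25, 0.27, 0.26, 0.26, 0.27]
smoothed = postprocess(rates)
```

At 5 fps the 2 s window spans 10 samples; the window length and `dt` are parameters precisely because these steps are user-defined rather than fixed by the application.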

Results and Discussion

The proposed method was tested in a laboratory simulation with pipe and setup dimensions of H = 13.97 cm (5.5 in.) and R = 3.81 cm (1.5 in.). Both qualitative evaluations of flow detection and quantitative comparisons of occurrence, duration, and flow rate measurements with ground truth data are presented for five test cases: (1) low flow rate, (2) medium flow rate, (3) high flow rate, and the introduction of external motion disturbances in the frame during high flow rates, including (4) hand motions with different gestures, and (5) object motions using a plastic bottle.

Figure 10. (a) Example points on flow trajectory that assist in parabola fitting. (b) Fitted parabola for a steady flow event. (c) Fitted parabola for an ending flow event.

Occurrence

Approximately 2-min videos were captured for each test case. Each video first contains a static period with no flow, then outlet flow begins and stabilizes. Eventually the pump is turned off, outlet flow begins to decay, and the flow terminates. Frames were extracted from videos at 5 fps (0.2 s interval). For test cases (1)-(3), the occurrence was recorded as the first frame in which complete outlet flow can be visually observed. The frame selected by visual observation was compared to the frame selected by the computer vision approach. Example results for flow occurrence detection are shown in figure 11 for the medium flow rate test case, and occurrence results are summarized in table 1. On average, motion detection by the computer vision approach occurs 1.0 s later than the ground-truth frame.

Table 1. Occurrence comparison results.

Flow Condition    Ground Truth (s)    Vision Approach (s)    Difference (s)
Low                     10.0                 10.8                  0.8
Medium                  17.6                 18.2                  0.6
High                    18.0                 19.6                  1.6
Figure 11. Flow occurrence detection process over five consecutive frames under medium flow rate conditions (from 17.4 to 18.2 s, with 0.2 s interval). Ground-truth flow occurrence is detected at 17.6 s (b). Vision approach reports flow occurrence at 18.2 s (e), indicating that the three previous frames at 17.6 s (b), 17.8 s (c), and 18.0 s (d) met the flow detection criteria.

For test cases (1)-(3), outlet flow from the pipe was identified correctly using the proposed approach. Figure 12 shows results for the more challenging test cases (4) and (5) in which moving objects are introduced into the frame. When there is no flow, the moving objects are not detected as motion, and when there are both moving objects and flow present in the frame, only the motion from the outlet flow is detected. Flow occurrence results for the low and high flow cases are included as Supplemental Figures S2 and S3, respectively.

Duration

The duration was calculated using the beginning and ending timestamps for both the ground-truth and vision approaches (note the beginning time stamp is the same as the detected occurrence frame described previously). Duration results are shown in table 2. On average, the vision approach underestimated the flow duration by 2.23% compared to the ground-truth duration values.

Flow Rate

Vision-based estimates of flow rate were compared to ground truth data. All vision-based flow estimates were made at 5 fps (every 0.2 s) and subsequently smoothed by averaging data over a moving window of 2 s. Average flow rates were calculated using the smoothed data and compared to the mean ground-truth flow rate for steady-state conditions only, and a percent error in average flow estimates is reported for each flow rate.

Low Flow Rate

For the low flow rate condition, ground-truth flow rate was measured using the flow meter described previously. Raw flow rate data reported by the proposed vision approach is shown in figure 13a. One outlier was detected in the vision-based flow estimates (0.52 L/s at 96.6 s); this outlier was removed and replaced with the mean flow rate. Smoothed data and the mean flow rates for both the vision-based estimate and ground truth data are shown in figure 13b. The mean flow rate estimated by the vision-based approach was 0.251 L/s, resulting in a 5.9% underestimation compared to the ground truth flow rate of 0.267 L/s during steady-state conditions.

Figure 12. Top row: results of flow detection for case (4). Bottom row: flow detection for case 5. (a)(e) No flow and moving objects. (b-c)(f-g) Flow and moving object. Flow is detected, but moving objects are not. (d)(h) No moving objects, flow detected correctly.
Table 2. Flow duration results.
Flow Condition | Ground Truth (s) | Vision Approach (s) | Percent Error (%)
Low            | 100.0            | 98.0                | -2.0
Medium         | 105.0            | 103.2               | -1.7
High           | 101.0            | 98.0                | -3.0

Medium Flow Rate

Ground-truth medium flow rate was measured by averaging the volume of water collected over time, which was 0.953 L/s during steady-state conditions. Raw flow rate data reported by the proposed approach is shown in figure 14a. No outliers were detected in the vision-based flow estimates. Smoothed data and the mean flow rates for both the vision-based estimate and ground truth data are shown in figure 14b. The mean flow rate estimated by the vision-based approach was 0.865 L/s, resulting in a 9.2% underestimation compared to the ground truth flow rate during steady-state conditions.

High Flow Rate

Similar to the medium flow rate, ground truth flow rate data for the high flow condition was measured by averaging the volume of water collected during the flow duration, which was 1.656 L/s. Raw flow rate data reported by the proposed approach is shown in figure 15a. Several outliers were detected in the high flow rate estimates (0.702 L/s at 40 s, 1.98 L/s at 41.6 s, and 1.86 L/s at 41.8 s); these outliers were removed and replaced with the mean flow rate. Smoothed data and the mean flow rates for both the vision-based estimate and ground truth data are shown in figure 15b. The mean flow rate estimated by the vision-based approach was 1.470 L/s, resulting in an 11.2% underestimation compared to the ground truth flow rate during steady-state conditions.

Discussion

This method was successful in estimating the occurrence and duration of flow events with high accuracy. On average, duration was underestimated by only 2.23%, or 2.17 s, and occurrence detection was delayed on average by 1.0 s. The proposed method requires that flow is detected in three consecutive frames to avoid false detections due to noise; therefore, since the captured video is processed at 5 fps, this results in at least 0.6 s of delay, which accounts for nearly all of the delay in the low (0.8 s) and medium (0.6 s) flow conditions. However, the largest delay (1.6 s) occurred for the high flow rate condition. This may be a result of larger water surface areas, which can increase the feature complexity of the flow object between frames, causing larger flow rates to take longer to stabilize. This can be seen in figure 16, which includes frames from the start of the flow event to the first frame of detection. Despite this slight delay, the method still detected outlet flow during high flow conditions.
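The consecutive-frame requirement behaves like a simple debounce; a minimal sketch (function name and return convention are illustrative):

```python
def first_detection_time(flow_flags, fps=5.0, k=3):
    """Return the elapsed time (s) of the frame at which flow is first
    confirmed, requiring k consecutive flow-positive frames to suppress
    spurious single-frame detections. Returns None if never confirmed."""
    run = 0
    for i, flag in enumerate(flow_flags):
        run = run + 1 if flag else 0
        if run == k:
            return i / fps  # time of the confirming frame
    return None
```

A noisy single-frame detection resets the counter, so only sustained motion from the outlet flow triggers a confirmed occurrence.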

Figure 13. (a) Raw vision-based estimates of flow rate, and (b) smoothed flow rate data shown with the average vision-based estimate (red dashed line) and the ground truth flow rate (green line) for the low flow condition during steady state, indicated by the black dashed bounding box.
Figure 14. (a) Raw vision-based estimates of flow rate, and (b) smoothed flow rate data shown with the average vision-based estimate (red dashed line) and the ground truth flow rate (green line) for the medium flow condition during steady state, indicated by the black dashed bounding box.

The proposed method was also evaluated for measuring the flow rate of water leaving the outlet. For all flow conditions, the proposed method underestimated flow rates compared to ground truth measurements. Average percent errors for flow rate estimates during steady-state conditions were 5.9%, 9.2%, and 11.2% for the low, medium, and high flow conditions, respectively. One potential reason for this underestimation may be inaccuracies in parabola fitting. In particular, the flow estimation is driven by the flow depth D and the horizontal travel distance of the flow W, which are estimated from the points (x1, y1) through (x3, y3). In the detection of the flow water object, there may be some noise or pixels omitted along the boundary, which may lead to slight inaccuracies in the locations of the points used to calculate W and D. A larger driver of this underestimation is likely the model assumption that the horizontal velocity at the bottom of the pipe structure is uniform across the entire wetted area. In reality, the velocity at the bottom surface of the pipe is slower than the surface velocity; this difference grows as the flow rate and wetted area increase, which also explains the larger error for higher flow rates. However, the flow depth D and wetted area A are still considered in the flow estimation approach, which limits the overall error even for the high flow condition.
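The uniform-velocity assumption described above is consistent with a projectile-trajectory discharge relation, in which the jet's horizontal velocity is recovered from the fall depth D and travel distance W and multiplied by the wetted area A. The sketch below is an illustrative formulation under that assumption; the article's exact model may differ:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def trajectory_flow_rate(W, D, A):
    """Discharge Q = v * A, with the horizontal velocity v recovered from
    projectile motion: the jet falls depth D (m) in t = sqrt(2*D/G) while
    travelling horizontal distance W (m). Assumes v is uniform over the
    wetted area A (m^2), the simplification the text identifies as a
    source of underestimation at higher flows."""
    t = math.sqrt(2.0 * D / G)   # fall time over depth D
    v = W / t                    # uniform horizontal velocity
    return v * A                 # volumetric flow rate, m^3/s
```

Because Q scales linearly with W, any boundary-pixel noise that shortens the measured travel distance translates directly into a proportionally lower flow estimate.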

While this approach was largely successful in detecting flow events, measuring the event duration, and estimating flow rate, one limitation is that this analysis was conducted in a laboratory environment. This was a necessary first step to validate the proposed approach and model, which largely relies on pipe and outlet flow geometry, to estimate flow rates. The approach was robust against motion in the scene that did not obstruct the view of the outlet flow; however, this analysis was limited to single objects in each frame. Additional analyses are needed to understand performance under a wider range of objects and types of motion. Additionally, more data are needed to verify the robustness of this approach in a real-world environment. While the physical flow model can still be used to estimate flow rate in real-world conditions, other approaches may be needed to detect and segment the outlet flow from the background, especially in cluttered environments. Approaches such as deep learning for flow object detection may be useful; however, these approaches require significant amounts of training data, which is a limitation as outlet flow events may be relatively rare and difficult to capture. Therefore, future work includes collecting and simulating real-world outlet flow events for developing more robust approaches for flow detection and flow estimation in real-world environments.

Figure 15. (a) Raw vision-based estimates of flow rate, and (b) smoothed flow rate data shown with the average vision-based estimate (red dashed line) and the ground truth flow rate (green line) for the high flow condition during steady state, indicated by the black dashed bounding box.

Conclusions

Tile drain outlet flow monitoring is a major concern for understanding drainage function and determining the impacts of subsurface drainage on watershed hydrology and nutrient and sediment transport. This study addressed a gap in existing outlet monitoring technology by developing a low-cost, accurate, vision-based method to characterize outlet flow events in terms of occurrence, duration, and rate from free-standing outlet points. Overall, this approach was successful in detecting flow events and measuring their duration, and it estimated outlet flow rates with errors ranging from 5.9% to 11.2%. Additionally, an open-source application was developed and made available to enable flow rate data to be extracted from recorded video. This study validated the feasibility of monitoring tile drain outlet flow events visually and without in situ instrumentation, which is important for monitoring sites that are difficult to access for sensor installation and maintenance. This approach adds value to the agricultural water management and computer vision communities by focusing on outlet flow from tile drain structures, which has not been as widely studied as flow and velocity estimation in open channel flows.

While this study demonstrated the feasibility of such an approach, there are several areas for future work. First, the model could be modified to compensate for non-uniform horizontal initial velocities across the depth of flow within the pipe, for example by incorporating the Manning-Strickler equation, since the current method treats the trajectory of flow at the bottom of the pipe structure as the horizontal velocity across the entire wetted area. Note, however, that this would require knowledge of additional pipe properties, including material and roughness, on which the current method does not rely. Second, the computer vision algorithm could be tested under more variable conditions to move toward field deployment, as aspects of the proposed approach, such as the magnitude threshold for detecting flow motion and the set of flow features used to distinguish motion types (i.e., flow vs. non-flow motion), will likely need site-specific tuning for improved performance. Finally, data storage and transmission need to be considered for long-term field implementation. This includes developing data reduction algorithms or methods that process and then discard the video in favor of retaining and transmitting only the numerical flow data, as storing and transmitting large video files is often impractical at remote field locations.
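The Manning-Strickler correction mentioned above would supply a mean velocity that depends on pipe roughness; a minimal sketch in SI units, with illustrative parameter values, shows why material and roughness knowledge would become necessary:

```python
def manning_velocity(R, S, n):
    """Mean flow velocity (m/s) from the Manning-Strickler equation,
    v = (1/n) * R**(2/3) * S**(1/2), where R is the hydraulic radius (m),
    S the energy slope (dimensionless), and n the Manning roughness
    coefficient. Unlike the trajectory-only model, this requires knowing
    the pipe material's roughness n."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

# Illustrative values (assumed, not from the article): a smooth plastic
# pipe (n ~ 0.01), hydraulic radius 0.05 m, slope 0.001
v_example = manning_velocity(0.05, 0.001, 0.01)
```

This trades the current method's independence from pipe properties for a velocity profile that better reflects boundary friction at the pipe invert.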

Acknowledgments

This work was sponsored by Global Quality Corp.

Supplemental Information

Supplemental information for this manuscript can be found on Figshare at: https://doi.org/10.13031/19448552.v1
