
Visual Slam and Obstacle Avoidance Based on RGB-D Camera

Published by the American Society of Agricultural and Biological Engineers, St. Joseph, Michigan www.asabe.org

Citation:  2017 ASABE Annual International Meeting 1700522 (doi:10.13031/aim.201700522)
Authors:   Yan Yu, Xiaolin Zhang, Zhiqin Wang, Decheng Wang, Guanghui Wang, Xiaopeng Bai, Qingyun Sun
Keywords:   Path planning, vision, obstacles, image processing

Abstract.

Simultaneous Localization and Mapping (SLAM) refers to the problem of building and updating a map of an unknown environment with a robot while simultaneously determining the robot's location within that map. Solving this problem removes the need for a priori knowledge of an unfamiliar environment when a robot is designed to explore or navigate on its own. Studies on SLAM have therefore been carried out widely in related areas such as self-driving cars, autonomous robots, and unmanned aerial vehicles. Dissanayake et al. (2001) proved that SLAM is solvable and presented an algorithm implemented on a vehicle that used millimeter-wave (MMW) radar to provide environment information. Davison et al. (2007) described a real-time algorithm for reconstructing the 3D trajectory of a monocular camera and applied it to a full-size humanoid robot to achieve real-time 3D SLAM. Klein and Murray (2007) presented an algorithm that effectively selects keyframes and matches visual features, and it is regarded as a high-accuracy SLAM method for monocular video. Engelhard et al. (2011) presented an approach that uses an RGB-D camera to generate 3D models of indoor objects. Huang et al. (2011) described a system for autonomous flight that uses an RGB-D camera for mapping; they reported a 0.387 m/s velocity error and a 3.39% gross failure rate, which outperformed the manned aerial vehicle. Henry et al. (2012) investigated the robustness and feasibility of a system that uses an RGB-D camera to conduct SLAM and found that an RGB-D camera can yield robust frame matching and loop-closure detection. Endres et al. (2014) used only an RGB-D camera to generate highly accurate 3D maps for SLAM; RANSAC and the g2o framework were used to estimate the transformation between two frames and to reduce estimation error. It can be seen that using an RGB-D camera to solve the SLAM problem has gained popularity over other methods because of its low cost and direct acquisition of depth data. Therefore, in this group project, emphasis is placed on using an RGB-D camera to achieve SLAM.
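The abstract refers to RANSAC-based estimation of the transformation between two RGB-D frames (Endres et al., 2014). The sketch below is not code from the paper; it is a minimal illustration of that front-end step, assuming OpenCV and NumPy: ORB features are matched between two frames, the frame-1 keypoints are back-projected to 3D with the depth map, and the relative pose is estimated with RANSAC-based PnP. The camera intrinsics and the estimate_relative_pose helper are hypothetical placeholders, not values or functions from the authors.

    # Minimal sketch of RGB-D frame-to-frame pose estimation with RANSAC.
    # Intrinsics below are assumed placeholder values, not from the paper.
    import cv2
    import numpy as np

    fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # assumed pinhole intrinsics
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

    def estimate_relative_pose(gray1, depth1, gray2):
        """Estimate the rigid transform from frame 1 to frame 2.

        gray1, gray2: grayscale images; depth1: depth image in meters.
        Returns (R, t) or None if too few valid correspondences remain.
        """
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(gray1, None)
        kp2, des2 = orb.detectAndCompute(gray2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        pts3d, pts2d = [], []
        for m in matches:
            u, v = kp1[m.queryIdx].pt
            z = depth1[int(v), int(u)]
            if z <= 0:                      # skip pixels with no depth reading
                continue
            # Back-project the frame-1 keypoint to 3D using the depth map
            pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
            pts2d.append(kp2[m.trainIdx].pt)

        if len(pts3d) < 6:
            return None
        pts3d = np.array(pts3d, dtype=np.float64)
        pts2d = np.array(pts2d, dtype=np.float64)

        # RANSAC-based PnP rejects mismatched features while estimating the pose
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d, pts2d, K, None, reprojectionError=3.0)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec

In a full pipeline such as the one Endres et al. describe, poses estimated this way would be added as constraints to a graph optimizer (e.g., g2o) to reduce accumulated error; that back-end step is omitted here.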
