This paper presents a method for the classification and localization of road signs in 3D space using a neural network and a point cloud obtained from a laser range finder (LIDAR). To accomplish this task and train the neural network (based on the Faster R-CNN architecture), a dataset was collected. The trained convolutional network is used as part of a ROS node that fuses the obtained classifications, data from the camera, and LIDAR measurements. The output of the system is a set of images with bounding boxes and point clouds corresponding to real signs on the road. The introduced method was tested on a dataset acquired from a self-driving car under different road conditions and performed well.
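The fusion step described in the abstract can be illustrated by projecting LIDAR points into the camera image and keeping those that fall inside a detected sign's bounding box. The sketch below is a minimal illustration under assumed conditions, not the paper's implementation: the point cloud is assumed to be already transformed into the camera frame, and the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) are hypothetical values.

```python
# Minimal sketch of camera-lidar fusion for a detected road sign.
# Assumptions (not from the paper): points are already in the camera
# coordinate frame (x right, y down, z forward), and the intrinsics
# below are placeholder values for an illustrative full-HD camera.

def project_points(points, fx, fy, cx, cy):
    """Project 3D points in the camera frame to pixel coordinates.

    Points behind the camera (z <= 0) map to None.
    """
    pixels = []
    for x, y, z in points:
        if z <= 0:
            pixels.append(None)
            continue
        u = fx * x / z + cx
        v = fy * y / z + cy
        pixels.append((u, v))
    return pixels

def points_in_box(points, box, fx=700.0, fy=700.0, cx=640.0, cy=360.0):
    """Return the 3D points whose projections land inside a 2D
    bounding box given as (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = box
    kept = []
    for pt, px in zip(points, project_points(points, fx, fy, cx, cy)):
        if px is not None and u_min <= px[0] <= u_max and v_min <= px[1] <= v_max:
            kept.append(pt)
    return kept
```

The points returned for each bounding box form the per-sign point cloud that, together with the classified image crop, makes up the system's output.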
KEYWORDS: Cameras, Video, Visualization, Clouds, Prototyping, Video acceleration, Detection and tracking algorithms, Camera shutters, Video processing, Direct methods
This paper presents a comparison of four of the most recent ROS-based monocular SLAM-related methods (ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM) and analyzes their feasibility for a mobile robot application in an indoor environment. We tested these methods on video data recorded with a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both the feature-based methods (ORB-SLAM, REMODE) and the direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in detecting volumetric objects, corners, obstacles, and other local features. However, we encountered difficulties in recovering the homogeneously colored walls typical of offices, since all of these methods left empty spaces in the reconstructed sparse 3D scene. This may cause collisions of an autonomously guided robot with featureless walls and thus limits the applicability of maps obtained by the considered monocular SLAM-related methods for indoor robot navigation.