KEYWORDS: Projection systems, Cameras, Calibration, Fringe analysis, Image processing, Camera calibration, 3D projection, 3D modeling, 3D image processing
Metric three-dimensional reconstruction by fringe projection profilometry requires calibrating the employed camera and projector. However, the calibration process is more difficult for projectors than for cameras. This work presents a reconstruction method where the projector parameters are not required explicitly. For this, we assume the projector follows the pinhole model and single-axis fringe projection is employed. The theoretical principles are explained, and the proposed method is validated experimentally by a metric three-dimensional reconstruction. The results provide a theoretical framework for further generalization, including implicit camera calibration and lens distortion, while keeping the metric reconstruction capability.
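The fringe-analysis step underlying such a system can be illustrated with standard four-step phase-shifting, where four fringe images shifted by π/2 yield the wrapped phase. This is a minimal sketch of that generic step, not the paper's projector-implicit method; the signal and constants are hypothetical.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic fringe signal over one image row (hypothetical test data).
x = np.linspace(0.0, 4.0 * np.pi, 256)
phi_true = np.angle(np.exp(1j * x))            # ground truth, wrapped to (-pi, pi]
A, B = 0.5, 0.4                                # background and modulation
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)                 # recovered wrapped phase
```

The arctangent cancels both the background A and the modulation B, which is why phase-shifting is robust to illumination variations across the scene.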
KEYWORDS: Pose estimation, Cameras, Matrices, 3D metrology, Singular value decomposition, 3D image processing, Video, Navigation systems, Error analysis, Global Positioning System
Location and pose estimation are essential tasks for robot navigation. Conventional global positioning systems can perform poorly due to environmental or indoor interference. Alternatively, vision-based location and pose estimation systems may be more suitable for indoor and outdoor applications. However, vision-based systems still require improved robustness and operational performance in uncontrolled environments. In this work, a visual pose estimation method for robot navigation in an uncontrolled environment is proposed. The theoretical principles of pose estimation are reviewed and the usefulness of the proposed approach in a navigation sequence is shown. The results show that the proposed method is feasible for robot navigation applications.
KEYWORDS: Cameras, Virtual reality, RGB color model, Human-machine interfaces, Coded apertures, 3D modeling, Visual process modeling, Video processing, Video, MATLAB
Modern advances in optical metrology and computer vision have provided an unprecedented ability to generate a wide variety of 3D digital content. The mouse, trackpad, and touch screen are typical 2D interfaces for interacting with digital content. However, such interfaces are restrictive for manipulating 3D content such as models, object scans, and environments. In this work, a 3D pointer based on stereo vision to interact virtually with digital 3D objects is proposed. The theoretical principles and the experimental calibration procedure are provided. The proposed 3D pointer is evaluated experimentally by simple interaction routines with objects reconstructed by fringe projection profilometry.
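The core operation of a stereo-vision pointer is recovering a 3D position from two calibrated views. A minimal linear (DLT) triangulation sketch follows; the projection matrices, baseline, and point are hypothetical, not the calibration from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 matrix P to (u, v) pixels."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

# Hypothetical calibrated stereo pair: shared intrinsics, 0.1 m baseline.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 2.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless correspondences the SVD null vector recovers the point exactly; with real detections it gives the algebraic least-squares solution.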
KEYWORDS: Projection systems, Navigation systems, Image segmentation, Mobile robots, Digital imaging, Dynamical systems, Cameras, 3D image processing, 3D displays
Experimental platforms are necessary to evaluate the performance of algorithms in different navigation scenarios. Physical platforms require materials and time to create a single experimental scenario. This approach becomes impractical for exhaustive evaluation in different scenarios because of the prohibitive increase of resources, time, and space. This paper proposes a multi-projector system that reduces time and cost by projecting dynamically designed scenes for vehicle navigation experiments. Theoretical principles of perspective projection and mosaicing are reviewed. The dynamic platform is presented for different vehicle navigation cases. The results show that the proposed approach is feasible for vehicle navigation evaluation.
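The perspective projection reviewed in such work follows the pinhole model: world points are transformed to the device frame and mapped to pixels by the intrinsic matrix. A minimal sketch, with hypothetical intrinsics (a projector can be modeled as an inverse camera under the same equations):

```python
import numpy as np

def project_points(K, R, t, Xw):
    """Pinhole perspective projection: Nx3 world points -> Nx2 pixels.
    K: 3x3 intrinsics; R, t: world-to-device rotation and translation."""
    Xc = Xw @ R.T + t          # world frame -> device frame
    uvw = Xc @ K.T             # device frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

# Hypothetical device aligned with the world frame, looking down +Z.
K = np.array([[800.0, 0.0, 512.0], [0.0, 800.0, 384.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.25, 2.0]])
uv = project_points(K, np.eye(3), np.zeros(3), pts)
# The on-axis point lands at the principal point (512, 384).
```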
Perspective distortion is a typical transformation reproduced by the pinhole model. However, camera lenses introduce radial distortion that reduces the accuracy of image processing tasks, such as lane detection for visual navigation. This paper proposes an image warping method based on the distorted pinhole camera model for lane detection applications. The theoretical principles of the imaging process are analyzed. The usefulness of this method is illustrated by estimating the pose of a ground vehicle using lane lines. The results show that the proposed approach is feasible for visual feedback in robot navigation applications.
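A common way to include radial distortion in the pinhole model is the polynomial model on normalized coordinates, inverted by fixed-point iteration when warping images back to the ideal perspective view. A minimal sketch with hypothetical coefficients (not the paper's calibrated values):

```python
import numpy as np

def distort(xn, yn, k1, k2=0.0):
    """Apply polynomial radial distortion to normalized coordinates."""
    r2 = xn * xn + yn * yn
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f

def undistort(xd, yd, k1, k2=0.0, iters=20):
    """Invert the radial model by fixed-point iteration."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / f, yd / f
    return xn, yn

# Round trip with a hypothetical barrel-distortion coefficient k1 = -0.2.
xd, yd = distort(0.3, -0.2, k1=-0.2)
xu, yu = undistort(xd, yd, k1=-0.2)
```

The iteration converges quickly for moderate distortion because the correction factor is close to one near the image center.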
Pose estimation is an essential task in many mobile robot navigation systems. Visual guidance provides a feasible means for pose estimation using the observed scene information as reference. This work presents an approach to estimate the pose of a mobile robot based on projective transformations. First, the Hough transform is used for lane detection. Next, a projective transformation is computed using the detected lines as reference. Finally, the robot's pose is estimated from the resulting projective transformation. The theoretical principles and computational implementation are analyzed. Experimental results of a visual navigation experiment are presented to validate the usefulness of the proposed approach.
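The middle step of this pipeline, computing a projective transformation from detected references, can be sketched with the standard DLT estimate of a homography from four point correspondences (e.g., intersections of the detected lane lines). The matrix and points below are hypothetical.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3) with dst ~ H*src from N >= 4 point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

def apply_h(H, pts):
    """Apply a homography to Nx2 points."""
    q = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical ground-truth transformation and reference points.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
H_est = homography_dlt(src, apply_h(H_true, src))
```

With exactly four exact correspondences the estimate is unique up to scale; with more (noisy) points, the SVD gives the algebraic least-squares solution.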
Uncalibrated camera-projector fringe projection systems are unable to provide metric three-dimensional measurements. The main difficulty for camera-projector calibration is that independent calibration of the devices is cumbersome and susceptible to alignment errors. In this paper, an efficient and accurate method for calibration of a camera-projector pair is proposed. The operating principle and computational implementation are analyzed. The metric measurement of a three-dimensional object is carried out to demonstrate the efficiency and accuracy of the proposed method.
Length measurements provide important information about the three-dimensional world. This is especially useful for decision making in robot vision, path planning in autonomous navigation, and people identification in security applications. In this work, we present a length measurement method based on perspective transformations using an uncalibrated camera. The theoretical principles are analyzed and the computational implementation is discussed. The usefulness of our proposal is verified experimentally by measuring relative lengths from experimental monocular images.
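One classical route to relative lengths from an uncalibrated view is a projective invariant: the cross ratio of four collinear points is preserved by any perspective transformation. This sketch illustrates that invariance (it is an illustration of the principle, not necessarily the paper's exact method; the map coefficients are hypothetical).

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by scalar positions."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# A 1D projective map t -> (2t + 1) / (0.3t + 1), modeling perspective
# foreshortening of positions along an imaged line (hypothetical values).
pts = np.array([0.0, 1.0, 2.0, 4.0])          # world positions on the line
mapped = (2.0 * pts + 1.0) / (0.3 * pts + 1.0)  # image positions

cr_world = cross_ratio(*pts)
cr_image = cross_ratio(*mapped)               # equal to cr_world
```

Because the cross ratio survives imaging, three known collinear reference points let one solve for an unknown fourth position, hence a relative length, without calibrating the camera.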
The capture of panoramic images requires complex and specialized cameras. However, high-quality panoramic images can be constructed digitally by stitching several images captured with conventional low-cost cameras. In this work, an image stitching method based on projective transformations is proposed. The theoretical principles and computational implementation are presented. Experimental panoramic images are composed to validate the usefulness of our method.
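The compositing step of homography-based stitching is typically implemented by inverse mapping: each canvas pixel is sent through H⁻¹ to find its source pixel. A minimal grayscale sketch with nearest-neighbor sampling and a hypothetical pure-translation homography:

```python
import numpy as np

def warp_into(canvas, img, H):
    """Paste img into canvas under homography H (img -> canvas),
    using inverse mapping with nearest-neighbor sampling (grayscale)."""
    Hinv = np.linalg.inv(H)
    h, w = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    q = Hinv @ np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx = np.round(q[0] / q[2]).astype(int)   # source column per canvas pixel
    sy = np.round(q[1] / q[2]).astype(int)   # source row per canvas pixel
    ih, iw = img.shape
    valid = (sx >= 0) & (sx < iw) & (sy >= 0) & (sy < ih)
    out = canvas.copy()
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out

# Toy panorama: place a 10x10 patch at x-offset 10 on a 10x20 canvas.
H = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
pano = warp_into(np.zeros((10, 20)), np.ones((10, 10)), H)
```

Inverse mapping avoids the holes that forward mapping leaves when the transformation stretches the source image; real stitchers additionally blend the overlap region.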