As autonomous vehicles (AVs) become more prevalent on the roads, there is an increased focus on the reliability of the sensor technology underlying the decisions made by the AI. Continuous calibration of visual sensors, such as LiDAR and cameras, is essential for the commercial development and societal acceptance of fully autonomous vehicles. As we move towards full autonomy, it is reasonable to demand that sensor tolerances be minimized. The traditional calibration methods used today are time consuming, require extensive setup and configuration, and are consequently prone to errors because they rely on human intervention. Furthermore, there is a risk of sensor errors creeping into the system due to the one-time nature of calibration and the environmental factors that can affect sensors during the operation of an autonomous vehicle. Recently, various target-less calibration methods have been proposed, based on traditional feature picking as well as deep learning, but they still require initial calibration parameters to be effective. Our method does not rely on obtaining initial calibration parameters from visual frames or on ground truth values. This makes the approach more versatile for calibration in AVs and consequently improves reliability in navigation and decision making on the road. We propose an improvement on our previously reported robust continuous calibration approach, which uses one or more objects identified in the visual frames to calibrate one visual sensor with respect to another without relying on initial calibration parameters or ground truth values. Our approach extracts the object point cloud data (PCD), downsamples it, and cost-optimizes the extracted feature points to compute the extrinsic calibration parameters. Without initial calibration parameters, our method uses multiple frames to calibrate a sensor with respect to any other sensor, provided that PCD can be generated or derived from the sensors. Our approach performs significantly better when no initial calibration values are provided and can thus be applied continuously to recalibrate any sensors that become mis-calibrated during the operation of the AV. We have tested our method on the publicly available KITTI dataset and benchmark our results against state-of-the-art methodologies. Our goal is to automate the process of object detection and point cloud extraction and to evaluate its speed and accuracy.
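The abstract does not detail the cost optimization itself; the following is only a minimal sketch of the general extract-downsample-align pipeline it describes, using the Open3D library and plain point-to-point ICP as a stand-in for the paper's optimization. The library choice, voxel size, and identity initialization are assumptions for illustration, not the authors' implementation (which, unlike standard ICP, is designed not to depend on initial calibration parameters).

```python
import numpy as np
import open3d as o3d


def estimate_extrinsics(source_points, target_points, voxel_size=0.1):
    """Sketch: align object PCD from one sensor to another and return a 4x4 transform.

    source_points, target_points: (N, 3) numpy arrays of the extracted object
    point clouds from the two sensors (e.g. LiDAR and a camera-derived depth map).
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(source_points)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(target_points)

    # Downsample the extracted object point clouds to keep optimization cheap.
    src_down = src.voxel_down_sample(voxel_size)
    tgt_down = tgt.voxel_down_sample(voxel_size)

    # Point-to-point ICP as a placeholder for the paper's cost optimization;
    # identity initialization stands in for "no initial calibration parameters".
    result = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down,
        max_correspondence_distance=voxel_size * 5,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation  # extrinsic parameters as a 4x4 matrix
```

In a continuous-calibration setting, estimates from multiple frames (and multiple detected objects) would be accumulated and fused rather than trusting a single-frame alignment.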
Ali Hasnain, Pierre-Yves Laffont, Shukri Bin Abdul Jalil, Kutluhan Buyukburc, Pierre-Yves Guillemet, Samuel Wirajaya, Liqiang Khoo, Teng Deng, Jean-Charles Bazin
A fundamental cause of visual discomfort in traditional head-mounted displays (HMDs) for Virtual and Augmented Reality (VR/AR) is the vergence-accommodation conflict (VAC). We propose and develop a novel headset prototype based on the Verifocal™ platform that eliminates VAC by dynamically adjusting the focal plane of the display. The developed headset employs an eye tracking system to track the user's gaze and determine the virtual object being observed by the user. The varifocal display then moves the focal plane of the display to dynamically match the depth of the observed object, providing correct focus cues to the user. We achieve the focal plane adjustment by moving the headset's stereo displays independently using small, lightweight, and low-power piezoelectric actuators. Such actuators offer a very low response time with high positional accuracy, while operating at high speed and virtually silently. Furthermore, our headset prototype has a large adjustable dioptric range for each eye and thus also provides the capability to correct myopia and anisometropia, allowing vision-impaired users to use the headset without eyeglasses. In conclusion, our approach makes it possible for the user to observe close or far objects in the virtual environment with reduced eye strain by providing consistent depth cues for all viewing distances.
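The abstract does not describe the prototype's optics, so the sketch below only illustrates the standard thin-lens relation a varifocal display of this kind could use to map an observed object's depth to a display position; the focal length value and function names are hypothetical and not taken from the paper.

```python
def display_distance_for_depth(depth_m: float, lens_focal_length_m: float = 0.040) -> float:
    """Distance from eyepiece lens to display that places the virtual image at depth_m.

    Thin-lens relation with a virtual image at distance depth_m:
        1/d_display - 1/depth_m = 1/f   =>   d_display = 1 / (1/f + 1/depth_m)
    As depth_m -> infinity, d_display -> f; closer virtual objects require the
    display (driven here by piezoelectric actuators) to move closer to the lens.
    """
    return 1.0 / (1.0 / lens_focal_length_m + 1.0 / depth_m)


# Example: gaze depth of 0.5 m vs. 10 m with a hypothetical 40 mm eyepiece.
print(display_distance_for_depth(0.5))   # ~0.0370 m
print(display_distance_for_depth(10.0))  # ~0.0398 m
```

A per-eye diopter offset (e.g. subtracting a user's myopia prescription from 1/depth_m before converting back to a distance) is one way such a mechanism could also provide the vision-correction capability mentioned in the abstract.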