We present our research efforts toward deploying 3-D sensing technology on an under-vehicle inspection robot. The 3-D sensing modality is robust to variations in ambient lighting and illumination, and offers ease of visualization, mobility, and increased confidence in inspection. We leverage laser-based range-imaging techniques to reconstruct the scene of interest and address various design challenges in the scene-modeling pipeline. On these 3-D mesh models, we propose a curvature-based surface feature for interpreting the reconstructed 3-D geometry. The curvature variation measure (CVM), which we define as an entropic measure of curvature, quantifies surface complexity indicative of the information present in the surface. We segment the digitized mesh models into smooth patches and represent the automotive scene as a graph network of patches, with the CVM at each node describing the corresponding surface patch. We demonstrate the descriptiveness of the CVM on manufacturer CAD and laser-scanned models.
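As a rough illustration of an entropic curvature measure of this kind, one could histogram the per-vertex curvatures of a patch and take the Shannon entropy of the resulting distribution. The function name, binning scheme, and sample data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def curvature_variation_measure(curvatures, bins=32):
    # Histogram the per-vertex curvature values of a patch and
    # return the Shannon entropy (in bits) of the distribution.
    hist, _ = np.histogram(curvatures, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                    # drop empty bins (0 log 0 := 0)
    return -np.sum(p * np.log2(p))

# A patch of constant curvature collapses into a single bin
# (entropy 0); a complex patch spreads across bins (entropy > 0).
flat = np.full(1000, 0.01)
bumpy = np.random.default_rng(1).uniform(-1.0, 1.0, 1000)
```

Under this sketch, a low CVM flags simple, smooth patches, while high values indicate geometrically rich patches.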
The purpose of this research is to investigate imaging-based methods for reconstructing 3D CAD models of real-world objects. The methodology uses structured-lighting technologies such as coded-pattern projection and laser-based triangulation to sample 3D points on the surfaces of objects and then reconstructs these surfaces from the dense point samples. This reverse engineering (RE) research presents reconstruction results for a military tire that is important to tire-soil simulations. The main limitation of this approach is the level of accuracy that imaging-based systems currently offer relative to more traditional coordinate measuring machine (CMM) systems. The benefit, however, is the potential for denser point samples and increased scanning speeds, and with time, imaging technologies should continue to improve to compete with CMM accuracy. This approach to RE should lead to high-fidelity models of manufactured and prototyped components for comparison against the original CAD models and for simulation analysis. This paper focuses on the data collection and view registration problems within the RE pipeline.
KEYWORDS: 3D modeling, Data modeling, Systems modeling, Data acquisition, 3D acquisition, Motion models, 3D scanning, Data fusion, Error analysis, Sensors
3D models of real-world environments are becoming increasingly important for a variety of applications: vehicle simulators can be enhanced through accurate models of real-world terrain and objects; robotic security systems can benefit from as-built layouts of the facilities they patrol; and vehicle dynamics modeling and terrain-impact simulation can be improved through validation models generated by digitizing real tire/soil interactions. Recently, mobile scanning systems have been developed that allow 3D scanning systems to undergo the full range of motion necessary to acquire such real-world data in a fast, efficient manner. As with any digitization system, these mobile scanning systems have systematic errors that adversely affect the 3D models they produce. In addition to the errors introduced by the individual sensors, these systems also have uncertainties associated with fusing data from several instruments. Thus, one of the primary foci for 3D model building is to perform the data fusion and post-processing of the models in such a manner as to reconstruct the 3D geometry of the scanned surfaces as accurately as possible, while alleviating the uncertainties posed by the acquisition system. We have developed a modular scanning system that can be configured for a variety of application resolutions, as well as the algorithms necessary to fuse and process the acquired data. This paper presents the acquisition system and the tools used for constructing 3D models under uncertain real-world conditions, along with experimental results on both synthetic and real 3D data.
The Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at the University of Tennessee is currently developing a modular approach to unmanned systems to increase mission flexibility and aid system interoperability for security and surveillance applications. The main focus of the IRIS research is the development of sensor bricks, where the term brick denotes a self-contained system that consists of the sensor itself, a processing unit, wireless communications, and a power source. Prototypes of a variety of sensor bricks have been developed, including a thermal imaging brick, a quad video brick, a 3D range brick, and a nuclear (gamma-ray and neutron) detection brick. These bricks have been integrated in a modular fashion into mobility platforms to form functional unmanned systems. Research avenues include sensor processing algorithms, system integration, communications architecture, multi-sensor fusion, sensor planning, sensor-based localization, and path planning. This research is focused toward security and surveillance applications such as under-vehicle inspection, wide-area perimeter surveillance, and high-value asset monitoring. This paper presents an overview of the IRIS research activities in modular robotics and includes results from prototype systems.
State-of-the-art unmanned ground vehicles are capable of understanding and adapting to arbitrary road terrain for navigation. The robotic mobility platforms, mounted with sensors, detect and report security concerns for subsequent action. Often, the localization information from the unmanned vehicle alone is not sufficient for deploying army resources. In such a scenario, a three-dimensional (3D) map of the area that the ground vehicle has surveyed along its trajectory would provide a priori spatial knowledge for directing resources in an efficient manner. To that end, we propose a mobile, modular imaging system that incorporates multi-modal sensors for mapping unstructured, arbitrary terrain. Our proposed system leverages 3D laser-range sensors, video cameras, global positioning systems (GPS), and inertial measurement units (IMU) toward the generation of photo-realistic, geometrically accurate, geo-referenced 3D terrain models. Based on a summary of state-of-the-art systems, we address the need for, and the challenges in, the real-time deployment, integration, and visualization of data from multiple sensors. We document design issues concerning each of these sensors and present a simple temporal alignment method to integrate multi-sensor data into textured 3D models. These 3D models, in addition to serving as a priori information for path planning, can also be used in simulators that study vehicle-terrain interaction. Furthermore, we show that our 3D models possess the accuracy required even for crack detection in road-surface inspection of airfields and highways.
KEYWORDS: Video, Global Positioning System, Cameras, Sensors, Motion estimation, Imaging systems, Robotics, 3D modeling, Data acquisition, Video processing
Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation and has been a much-researched topic. There are currently two main categories of pose estimation techniques: pose from hardware and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated and can be used to augment and/or supplant each other according to the needs of the application. Both approaches have their uses, but each also has degenerate cases in which it fails to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite expensive and can be restrictive in their environments of operation. Pose-from-video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature-matching process, and moving objects. This paper experimentally evaluates the cost/benefit trade-off between pose from video and hardware pose estimation and provides a guide as to which system should be used in which scenarios.
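As a toy illustration of the inter-frame motion step in a pose-from-video pipeline, one can estimate a least-squares rigid motion from matched feature points. The Kabsch-style sketch below, with invented synthetic data, is an assumption for illustration and not the system evaluated here:

```python
import numpy as np

def rigid_motion_2d(src, dst):
    # Least-squares rotation R and translation t with dst ≈ src @ R.T + t
    # (Kabsch / orthogonal Procrustes on matched 2D feature points).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

# Synthetic "tracked features": rotate by 0.3 rad, translate by (1, -2).
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(0).random((10, 2))
dst = src @ R_true.T + np.array([1.0, -2.0])
R_est, t_est = rigid_motion_2d(src, dst)
```

In a real PfV system this step runs on noisy, imperfect feature matches, which is precisely where the accuracy degradations discussed above arise.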
KEYWORDS: 3D modeling, Image registration, 3D image processing, Systems modeling, Data modeling, Unmanned vehicles, 3D acquisition, Imaging systems, Sensors, Image sensors
The focus of this paper is the reconstruction of 3D representations of real-world scenes and objects using multiple sensors, with the enhancement of the autonomy and mobility of unmanned vehicles as one of its main applications. The sensors considered are primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry. One of the most important technical challenges we address is the registration task, in both its multi-modal and single-modality aspects. Our work is based on a unified approach that formulates the correspondence problem as an optimization task. In this context we developed a criterion that can be used for 3D free-form shape registration. The new criterion is derived from simple Boolean matching principles by approximation and relaxation. Technically, the main advantages of the proposed approach are convexity in the neighborhood of the alignment parameters and continuous differentiability, which allow the use of standard gradient-based optimization techniques. The proposed algorithm also allows for significant automation of the scene-modeling task by reducing the intervention of human operators in the tedious image registration task. Furthermore, we show that the criterion can be computed in linear time, permitting the fast implementations critical to many applications of autonomous mobile platforms.
KEYWORDS: Inspection, Sensors, Computing systems, Data acquisition, Wireless communications, Data processing, Data modeling, Computer architecture, Ultraviolet radiation, Sensing systems
In this paper, a mobile scanning system for real-time under-vehicle inspection is presented, founded on a "brick" architecture. In this architecture, the inspection system is decomposed into bricks of three kinds: sensing, mobility, and computing. These bricks are physically and logically independent and communicate with each other wirelessly. Each brick is composed of five modules: data acquisition, data processing, data transmission, power, and self-management. These five modules can be further decomposed into submodules with well-defined functions and interfaces. Based on this architecture, the system is built from four bricks: two sensing bricks consisting of a range scanner and a line CCD, one mobility brick, and one computing brick. The sensing bricks capture geometric and texture data of the under-vehicle scene, while the mobility brick provides positioning data along the motion path. Data from these three modalities are transmitted to the computing brick, where they are fused to reconstruct a 3D under-vehicle model for visualization and threat inspection. This system has been successfully used in several military applications and has proved to be an effective, safer method for national security.
Our research efforts focus on the deployment of 3D sensing capabilities to a multi-modal under-vehicle inspection robot. In this paper, we outline the various design challenges in automating the 3D scene-modeling task. We employ laser-based range imaging techniques to extract the geometry of a vehicle's undercarriage and present our results after range integration. We perform shape analysis on the digitized triangle-mesh models by segmenting them into smooth surface patches based on the curvedness of the surface. Using a region-growing procedure, we then obtain the patch adjacency. On each of these patches, we apply our definition of the curvature variation measure (CVM) as a descriptor of surface-shape complexity. We base the information-theoretic CVM on shape curvature, extracting shape information as the entropic measure of curvature to represent a component as a graph network of patches. The CVM at the nodes of the graph describes the surface patch. We then demonstrate our algorithm with results on automotive components. With a priori manufacturer information about the CAD models in the undercarriage, we approach the technical challenge of threat detection with our surface-shape description algorithm on the laser-scanned geometry.
KEYWORDS: 3D modeling, Data modeling, Global Positioning System, Robotics, Buildings, Navigation systems, Data acquisition, 3D image processing, 3D acquisition, Laser range finders
In order to effectively navigate any environment, a robotic vehicle needs to understand the terrain and obstacles native to that environment. Knowledge of its own location and orientation, and knowledge of the region of operation, can greatly improve the robot’s performance. To this end, we have developed a mobile system for the fast digitization of large-scale environments to develop the a priori information needed for prediction and optimization of the robot’s performance. The system collects ground-level video and laser range information, fusing them together to develop accurate 3D models of the target environment. In addition, the system carries a differential Global Positioning System (GPS) as well as an Inertial Navigation System (INS) for determining the position and orientation of the various scanners as they acquire data. Issues involved in the fusion of these various data modalities include: Integration of the position and orientation (pose) sensors’ data at varying sampling rates and availability; Selection of "best" geometry in overlapping data cases; Efficient representation of large 3D datasets for real-time processing techniques. Once the models have been created, this data can be used to provide a priori information about negative obstacles, obstructed fields of view, navigation constraints, and focused feature detection.
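One common way to handle pose sensors sampled at different rates is to interpolate the pose stream to the range scanner's timestamps. The sketch below, with invented sample rates and data, illustrates that idea for position; orientation would need quaternion slerp rather than per-axis interpolation:

```python
import numpy as np

# Hypothetical timestamps: GPS fixes at 1 Hz, scanner lines at 10 Hz.
gps_t  = np.array([0.0, 1.0, 2.0, 3.0])
gps_xy = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.3], [3.0, 0.1]])
scan_t = np.arange(0.0, 3.0, 0.1)

# Linearly interpolate each position axis onto the scanner's clock,
# giving every scan line a pose estimate despite the slower GPS rate.
scan_xy = np.column_stack(
    [np.interp(scan_t, gps_t, gps_xy[:, k]) for k in range(2)])
```

More sophisticated fusion (e.g., Kalman filtering of GPS and INS) would also weigh each sensor's uncertainty, but timestamp alignment of this kind is the first step in any case.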
In this paper, we describe efforts made to implement multiperspective mosaicking of infrared and color video data for the purpose of under vehicle inspection. It is desired to create a large, high-resolution mosaic that may be used to quickly visualize the entire scene shot by a camera making a single pass underneath the vehicle. Several constraints are placed on the video data in order to facilitate the assumption that the entire scene in the sequence exists on a single plane. Therefore, a single mosaic is used to represent a single video sequence. Phase correlation is used to perform motion analysis in this case.
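Phase correlation recovers inter-frame translation from the normalized cross-power spectrum of two frames. A minimal sketch, assuming pure integer translation and periodic image content:

```python
import numpy as np

def phase_correlate(a, b):
    # Normalized cross-power spectrum keeps only phase; its inverse
    # FFT peaks at the translation taking frame a to frame b.
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap shifts past half the image size to negative values.
    h, w = a.shape
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
# phase_correlate(img, shifted) recovers the (5, -3) shift.
```

Under the single-plane assumption described above, each such shift places one video frame into the growing mosaic.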
The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGV). The targeted missions for these UGV systems include -- but are not limited to -- under-vehicle threat assessment, stand-off check-point inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this general overview, we further focus on a specific problem in which we collect 3D range scans of under-vehicle carriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory behind these algorithms and present results from applying them to actual vehicle scans.
In this paper we present a new method for the registration of multiple sensors applied to a mobile robotic inspection platform. Our main technical challenge is automating the integration process for various multimodal inputs, such as depth maps and multi-spectral images. This task is approached through a unified framework based on a new registration criterion that can be employed for both 3D and 2D datasets. The system embedding this technology reconstructs 3D models of scenes and objects that are inspected by an autonomous platform in high-security areas. The models are processed and rendered with corresponding multi-spectral textures, which greatly enhances both human and machine identification of threat objects.
Superquadrics can represent a large variety of objects with only a few parameters and a single equation. We present a superquadric representation strategy for automotive parts composed of 3-D triangle meshes. Our strategy consists of two major steps: part decomposition and superquadric fitting. This approach is original in two respects. First, it can successfully represent multipart objects with superquadrics by applying part decomposition. Second, superquadrics recovered by our approach achieve high confidence and accuracy because watertight 3-D surfaces are used. A novel, generic 3-D part-decomposition algorithm based on curvature analysis is also proposed. Experimental results demonstrate that the proposed part-decomposition algorithm efficiently segments multipart objects into meaningful single parts, and the proposed superquadric representation strategy can then successfully represent each individual part of the original objects with a superquadric model.
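For reference, a superquadric in canonical pose is commonly described by its inside-outside function, which is the "single equation" referred to above. The sketch below evaluates that function; the fitting step, which minimizes a residual of F over the surface points, is omitted:

```python
import numpy as np

def superquadric_F(p, a1, a2, a3, e1, e2):
    # Inside-outside function of a canonical superquadric:
    # F < 1 inside, F = 1 on the surface, F > 1 outside.
    x, y, z = np.abs(p)             # |.| handles the fractional powers
    return (((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1)
            + (z / a3) ** (2 / e1))

# The unit sphere is the special case a1 = a2 = a3 = 1, e1 = e2 = 1.
on_surface = superquadric_F(np.array([1.0, 0.0, 0.0]), 1, 1, 1, 1, 1)
inside = superquadric_F(np.array([0.5, 0.0, 0.0]), 1, 1, 1, 1, 1)
```

Varying the two shape exponents e1 and e2 morphs the same equation between ellipsoids, boxes, and cylinders, which is what makes the representation so compact.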
The concept of multiresolution analysis applied to irregular meshes has become increasingly important. Previous contributions proposed a variety of methods using simplification and/or subdivision algorithms to build a mesh pyramid. In this paper, we propose a multiresolution analysis framework for irregular meshes with attributes, based on simplification and subdivision algorithms that build a mesh pyramid. We introduce a surface relaxation operator that allows us to build a non-uniform subdivision at low computational cost. Furthermore, we generalize the relaxation operator to attributes such as color, texture, and temperature. The attribute analysis provides additional information about the analyzed models, enabling more complete processing. We show the efficiency of our framework through a number of applications, including filtering, denoising, and adaptive simplification.
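An umbrella-style relaxation, which moves each vertex attribute toward the average of its neighbors, gives a flavor of such an operator. The function and the toy chain mesh below are illustrative assumptions, not the operator proposed in the paper:

```python
import numpy as np

def relax(values, neighbors, lam=0.5):
    # Umbrella relaxation: blend each vertex attribute with the mean
    # of its neighbors' (old) values; applies equally to geometry or
    # to scalar attributes such as temperature.
    out = values.copy()
    for i, nbrs in enumerate(neighbors):
        out[i] = (1 - lam) * values[i] + lam * np.mean(values[nbrs])
    return out

# Oscillating attribute on a 5-vertex chain; one relaxation pass
# smooths it toward a constant.
vals = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
smoothed = relax(vals, nbrs)
```

Iterating such an operator over a mesh pyramid is what enables filtering and denoising of both the surface and its attributes.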
In view of the problems associated with under-machine inspection, there is a need to develop remote diagnostics systems capable of exploring narrow areas, capturing data and images from various modalities, and displaying the results at a remote location, thus easing the identification and diagnosis of various machine problems. In this paper, we present a diagnostics system that is remotely controlled and can be deployed with a variety of imaging sensors to capture data. The software allows the user to segment the images and mosaic the data for a thorough inspection.
KEYWORDS: 3D modeling, Data modeling, Motion models, Video, 3D image processing, Cameras, Systems modeling, Laser scanners, 3D video streaming, Atomic force microscopy
In this paper we describe a new method for modeling objects with known generic shape, such as human faces, from video and range data. The method combines the strengths of active laser scanning and passive shape-from-motion techniques. Our approach first reconstructs a few feature points that can be reliably tracked throughout a video sequence of the object. These features are mapped to corresponding 3D points in a generic 3D model reconstructed from dense and accurate range data acquired only once. The resulting set of 3D-3D matches is used to warp the generic model into the actual object visible in the video stream using thin-plate spline interpolation. Our method avoids the dense-matching problems encountered in stereo algorithms. Furthermore, in the case of face reconstruction, it provides dense models while not requiring invasive laser scanning of faces.
We present a new superquadric-based object representation strategy for automotive parts. Starting from a 3D watertight surface model, a part-decomposition step first segments the original multi-part objects into their constituent single parts. Each single part is then represented by a superquadric. This approach is original in two respects: first, it can represent complicated shapes, e.g., multi-part objects, by utilizing part decomposition as a preprocessing step; second, superquadrics recovered using our approach achieve high confidence and accuracy due to the 3D watertight surfaces utilized. A novel, generic 3D part-decomposition algorithm based on curvature analysis is also proposed; it is generic and flexible owing to the popularity of triangle meshes in the 3D graphics community. The proposed algorithms were tested on a large set of 3D data, and the experimental results demonstrate that our part-decomposition algorithm can efficiently segment complicated shapes, in our case automotive parts, into meaningful single parts, and that our superquadric representation strategy can then successfully represent each part (where possible) of the complicated objects.
Automatic tracking is essential for a 24-hour intruder-detection system and, more generally, for surveillance. This paper presents adaptive background generation and the corresponding moving-region detection techniques for a pan-tilt-zoom (PTZ) camera using a geometric-transform-based mosaicing method. A complete system, including adaptive background generation, moving-region extraction, and tracking, is evaluated with realistic experiments. Specifically, experimental results include generated background images, a detected moving region, and input video with bounding boxes around moving objects. These experiments show that the proposed system can monitor moving targets in wide-open areas by panning and tilting automatically in real time.
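Adaptive background generation is often built on an exponential running average, with thresholded differencing for moving-region extraction. The sketch below shows that generic idea only; it is not the paper's mosaicing-based method, and the update rate and threshold are illustrative assumptions:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Exponential running average: slowly absorb scene changes so
    # stationary objects fade into the background model.
    return (1 - alpha) * bg + alpha * frame

def moving_regions(bg, frame, thresh=25):
    # Foreground mask: pixels that differ strongly from the background.
    return np.abs(frame.astype(float) - bg) > thresh

# A bright 2x2 block appearing on an all-dark background is flagged
# as a moving region, while the background slowly adapts toward it.
bg = np.zeros((8, 8))
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 255.0
mask = moving_regions(bg, frame)
bg = update_background(bg, frame)
```

For a PTZ camera, each frame must additionally be warped into the mosaic's coordinate frame before this per-pixel comparison is meaningful.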
Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).
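Inter-rater agreement of this sort is typically summarized with a reliability coefficient. As a simple illustration, with invented measurements and Pearson r as a stand-in for the intraclass correlation coefficient usually reported in such studies:

```python
import numpy as np

# Hypothetical paired measurements (mm) of the same craniofacial
# distance taken by two raters on five subjects.
rater_a = np.array([31.2, 28.4, 35.0, 30.1, 33.6])
rater_b = np.array([31.0, 28.9, 34.7, 30.4, 33.2])

# Pearson r as a simple agreement proxy: values near 1 indicate the
# two raters rank and scale the subjects almost identically.
r = np.corrcoef(rater_a, rater_b)[0, 1]
```

Formal reliability studies would report the intraclass correlation coefficient instead, since Pearson r ignores systematic offsets between raters.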