KEYWORDS: 3D modeling, Data modeling, Systems modeling, Data acquisition, 3D acquisition, Motion models, 3D scanning, Data fusion, Error analysis, Sensors
3D models of real-world environments are becoming increasingly important for a variety of applications: vehicle simulators can be enhanced through accurate models of real-world terrain and objects; robotic security systems can benefit from as-built layouts of the facilities they patrol; and vehicle dynamics modeling and terrain-impact simulation can be improved through validation models generated by digitizing real tire/soil interactions. Recently, mobile scanning systems have been developed that allow 3D scanning systems to undergo the full range of motion necessary to acquire such real-world data quickly and efficiently. As with any digitization system, these mobile scanning systems have systematic errors that adversely affect the 3D models they are attempting to digitize. In addition to the errors contributed by the individual sensors, these systems also carry uncertainties associated with the fusion of data from several instruments. Thus, one of the primary foci of 3D model building is to perform the data fusion and post-processing of the models in such a manner as to reconstruct the 3D geometry of the scanned surfaces as accurately as possible, while mitigating the uncertainties introduced by the acquisition system. We have developed a modular scanning system that can be configured for a variety of application resolutions, as well as the algorithms necessary to fuse and process the acquired data. This paper presents the acquisition system and the tools used to construct 3D models under uncertain real-world conditions, along with experimental results on both synthetic and real 3D data.
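As a minimal sketch of the core fusion step described above, the example below transforms a single laser range return into world coordinates using the scanner's estimated pose. The function name, planar beam geometry, and rotation-plus-translation pose representation are illustrative assumptions, not the actual interfaces of the system described in the paper.

```python
import numpy as np

def range_to_world(range_m, beam_angle_rad, sensor_pose):
    """Project one planar range reading into world coordinates.

    sensor_pose is assumed to be an (R, t) pair: a 3x3 rotation matrix and a
    3-vector translation giving the scanner's pose in the world frame.
    """
    R, t = sensor_pose
    # Point in the scanner frame: the beam sweeps in the scanner's x-y plane.
    p_sensor = np.array([range_m * np.cos(beam_angle_rad),
                         range_m * np.sin(beam_angle_rad),
                         0.0])
    # Rigid-body transform into the world frame.
    return R @ p_sensor + t

# Example: a 5 m return at 30 degrees from a scanner mounted 1.5 m above the origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])
print(range_to_world(5.0, np.deg2rad(30.0), (R, t)))
```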
KEYWORDS: Video, Global Positioning System, Cameras, Sensors, Motion estimation, Imaging systems, Robotics, 3D modeling, Data acquisition, Video processing
Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation, and has been a much-researched topic. There are currently two main categories of pose estimation techniques: pose from hardware, and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at the specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated, and can be used to augment and/or supplant each other according to the needs of the application. Both pose from video and hardware pose estimation have their uses, but each also has its degenerate cases in which it fails to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite expensive and can be restrictive in their environments of operation. Pose from video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature matching process, and moving objects. This paper experimentally evaluates the cost/benefit trade-off between pose from video and hardware pose estimation, and provides a guide to which system should be used under which scenarios.
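The sketch below illustrates the basic pose-from-video pipeline described above (feature detection, inter-frame matching, and relative-pose recovery) using OpenCV primitives. It is a generic monocular sketch under assumed inputs (two grayscale frames and a known intrinsic matrix K), not the specific PfV system evaluated in the paper; note that translation is recovered only up to scale.

```python
import cv2
import numpy as np

def relative_pose(frame_a, frame_b, K):
    """Estimate rotation and unit-scale translation between two video frames.

    K is the 3x3 camera intrinsic matrix; absolute scale cannot be recovered
    from a single monocular image pair.
    """
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    # Brute-force Hamming matching; ambiguous matches are a known failure mode.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # RANSAC rejects outliers caused by noisy imagery and moving objects.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t
```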
We describe a strategy for the content-based compression of mammograms. In this two-step strategy, the clinically important structures are first identified via a fractal-based segmentation method. Then, a modified version of JPEG2000 is applied such that the structures extracted in the first step are compressed losslessly, while the remaining regions are compressed lossily. Preliminary results demonstrate that this strategy can achieve high compression ratios (up to 50:1) without compromising the diagnostic quality of the mammograms.
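A minimal sketch of the two-tier idea, assuming an already-computed segmentation mask: the extracted structures are written losslessly and the remainder lossily. PNG and JPEG stand in here for the paper's modified JPEG2000 codec purely for illustration, and the function name and output paths are hypothetical.

```python
import numpy as np
from PIL import Image

def content_based_compress(image, mask, lossy_quality=30):
    """Split an image into ROI / background tiers and compress each separately.

    `image` is a 2-D uint8 array; `mask` is a boolean array marking the
    clinically important structures found by the segmentation step.
    """
    roi = np.where(mask, image, 0).astype(np.uint8)
    background = np.where(mask, 0, image).astype(np.uint8)

    # Lossless tier preserves the diagnostically important structures.
    Image.fromarray(roi).save("roi.png")
    # Lossy tier trades fidelity for a much smaller background payload.
    Image.fromarray(background).save("background.jpg", quality=lossy_quality)
```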
KEYWORDS: 3D modeling, Data modeling, Global Positioning System, Robotics, Buildings, Navigation systems, Data acquisition, 3D image processing, 3D acquisition, Laser range finders
In order to effectively navigate any environment, a robotic vehicle needs to understand the terrain and obstacles native to that environment. Knowledge of its own location and orientation, and knowledge of the region of operation, can greatly improve the robot’s performance. To this end, we have developed a mobile system for the fast digitization of large-scale environments to develop the a priori information needed for prediction and optimization of the robot’s performance. The system collects ground-level video and laser range information, fusing them to develop accurate 3D models of the target environment. In addition, the system carries a differential Global Positioning System (GPS) as well as an Inertial Navigation System (INS) for determining the position and orientation of the various scanners as they acquire data. Issues involved in the fusion of these various data modalities include: integration of the position and orientation (pose) sensors’ data at varying sampling rates and availability; selection of the "best" geometry in overlapping data cases; and efficient representation of large 3D datasets for real-time processing techniques. Once the models have been created, this data can be used to provide a priori information about negative obstacles, obstructed fields of view, navigation constraints, and focused feature detection.
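One of the fusion issues listed above is aligning pose data sampled at rates different from the scanner's. The sketch below resamples GPS positions and INS orientations at the laser timestamps using linear interpolation for position and spherical linear interpolation for orientation; the function name, data layout, and sample rates are assumptions for illustration, not the system's actual processing chain.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(pose_times, positions, orientations, scan_times):
    """Resample GPS/INS pose estimates at the laser scanner's timestamps.

    pose_times:    (N,) sample times of the pose sensors
    positions:     (N, 3) positions from GPS
    orientations:  scipy Rotation holding N orientations from the INS
    scan_times:    (M,) timestamps at which range scans were acquired
    """
    # Linear interpolation is a reasonable first-order model for position.
    xyz = np.column_stack([np.interp(scan_times, pose_times, positions[:, i])
                           for i in range(3)])
    # Spherical linear interpolation keeps interpolated orientations valid.
    slerp = Slerp(pose_times, orientations)
    return xyz, slerp(scan_times)

# Example with synthetic data: 1 Hz pose samples, scan timestamps in between.
pose_t = np.array([0.0, 1.0, 2.0])
pos = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
ori = Rotation.from_euler("z", [0, 10, 20], degrees=True)
xyz, rot = interpolate_pose(pose_t, pos, ori, np.array([0.25, 0.5, 1.25]))
```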
This paper presents some preliminary validation results from the content-based compression (CBC) of digitized mammograms for transmission, archiving, and, ultimately, telemammography. Unlike traditional compression techniques, CBC is a process by which the content of the data is analyzed before the compression takes place. In this approach, the data is partitioned into two classes of regions and a different compression technique is performed on each class. The intended result achieves a balance between data compression and data fidelity. For mammographic images, the data is segmented into two non-overlapping regions: (1) background regions, and (2) focus-of-attention regions (FARs) that contain the clinically important information. Subsequently, the former regions are compressed using a lossy technique, which attains large reductions in data, while the latter regions are compressed using a lossless technique in order to maintain the fidelity of these regions. In this case, results show that compression ratios averaging 5-10 times greater than those of lossless compression alone can be achieved, while preserving the fidelity of the clinically important information.
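The reported gains are expressed as compression ratios. A small sketch of how such a ratio could be computed from the two compressed tiers is given below; the file names and raw image size are purely hypothetical examples carried over from the earlier two-tier sketch.

```python
import os

def compression_ratio(raw_bytes, tier_paths):
    """Ratio of the uncompressed image size to the total size of the compressed tiers."""
    return raw_bytes / sum(os.path.getsize(p) for p in tier_paths)

# Hypothetical 4096 x 4096 mammogram stored at 2 bytes per pixel.
raw_bytes = 4096 * 4096 * 2
tiers = ["roi.png", "background.jpg"]  # outputs of the earlier two-tier sketch
if all(os.path.exists(p) for p in tiers):
    print("CBC ratio: %.1f:1" % compression_ratio(raw_bytes, tiers))
```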