KEYWORDS: Visualization, Clouds, 3D modeling, Data modeling, LIDAR, Video, 3D image processing, 3D visualizations, Long wavelength infrared, Data storage
Mobile LIDAR scanning typically provides captured 3D data in the form of 3D 'point clouds'. Combined with colour
imagery, these data produce coloured point clouds or, if further processed, polygon-based 3D models. The use of point
clouds is simple and rapid, but their visualisation can appear ghostly and diffuse. Textured 3D models provide high-fidelity
visualisation, but their creation is time consuming, difficult to automate and can modify key terrain details. This paper
describes techniques for the visualisation of fused multispectral 3D data that approach the visual fidelity of polygon-based
models with the rapid turnaround and detail of 3D point clouds. The general approaches to data capture and data
fusion are identified as well as the central underlying mathematical transforms, data management and graphics
processing techniques used to support rapid, interactive visualisation of very large multispectral 3D datasets.
Performance data with respect to real-world 3D mapping as well as illustrations of visualisation outputs are included.
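The data management techniques mentioned above are not specified in the abstract, but interactive visualisation of very large point clouds commonly relies on level-of-detail subsampling. As an illustrative sketch only (the function name, voxel size and synthetic data below are assumptions, not taken from the paper), a minimal voxel-grid subsampler in NumPy might look like this:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points that fall in the same voxel
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)          # guard against NumPy version differences
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)       # accumulate per-voxel coordinate sums
    return sums / counts[:, None]          # per-voxel centroids

# synthetic 10 m cube of points (illustrative data, not from the paper)
rng = np.random.default_rng(0)
cloud = rng.random((100_000, 3)) * 10.0
coarse = voxel_downsample(cloud, voxel_size=1.0)
```

A coarse level built this way can be drawn when the viewpoint is far away, with finer levels streamed in on demand.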
This paper aims to show the benefits that Applied Multidimensional Fusion, data fusion and related techniques can bring to Urban Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) systems. This is demonstrated through the practical application of some of the multidimensional fusion research conducted in the United Kingdom. The paper highlights work in the areas of super-resolution, joint fusion, multi-resolution target detection and identification, and task-based image and video fusion assessment. Work done to date has produced practical, pertinent research products with direct applicability to the problems posed.
This paper presents a fast and robust approach to surface creation and feature extraction. The methodology iteratively segments point clouds until a set bound is reached, and concentrates on the construction of planar surfaces. To achieve this, vegetation is first filtered out and planar surfaces are then created using Delaunay triangulation. The surface creation process uses point clouds segmented according to the fluctuation of surface normals within the segmented cubes. Results produced with this technique show the effect of imposing geometric constraints on the reconstruction to generate realistic surfaces.
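The abstract does not give the details of the planarity test or the triangulation step. A minimal sketch of the general idea, assuming a PCA plane fit, an RMS-distance planarity threshold and SciPy's Delaunay triangulation (the tolerance value and helper names below are illustrative assumptions, not the paper's method), could be:

```python
import numpy as np
from scipy.spatial import Delaunay

def plane_normal(points):
    """Least-squares plane normal via SVD (direction of smallest variance)."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[-1]

def is_planar(points, tol=0.05):
    """Flag a cell as planar when RMS distance to its best-fit plane is small."""
    d = (points - points.mean(axis=0)) @ plane_normal(points)
    return np.sqrt(np.mean(d ** 2)) < tol

def triangulate_planar_patch(points):
    """Project a planar patch onto its best-fit plane, then triangulate in 2D."""
    n = plane_normal(points)
    # build an in-plane basis orthogonal to the normal
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    uv = (points - points.mean(axis=0)) @ np.stack([u, v], axis=1)
    return Delaunay(uv).simplices          # (m, 3) triangle index array

# synthetic near-planar ground patch (illustrative data)
rng = np.random.default_rng(1)
patch = np.column_stack([rng.random(200), rng.random(200),
                         0.01 * rng.standard_normal(200)])
tris = triangulate_planar_patch(patch)
```

Cells that fail the planarity check would be subdivided and tested again, matching the iterative segmentation described above.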
This paper presents an algorithm for aligning 2D video to 3D point clouds. It is a vignette of ongoing research in 3D urban environment modelling, whose aim is to produce accurate, fast and usable 3D maps of the dynamic urban environment. The paper describes the development of the algorithm, followed by the processing and implementation procedure used to produce a realistic 3D model of an urban environment from the 3D point cloud and RGB video collected by the system. To allow further discussion, the paper concludes with the results of draping 2D video frames onto a solid surface developed from 3D point clouds.
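The core geometric step in draping video onto 3D geometry is projecting 3D points through a camera model into a frame and sampling colour there. The sketch below illustrates this with a standard pinhole model (intrinsics K, pose R, t); the function names and the nearest-pixel sampling are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D world points into a pinhole camera (intrinsics K, pose R, t)."""
    cam = points_world @ R.T + t           # world frame -> camera frame
    in_front = cam[:, 2] > 0               # keep only points ahead of the camera
    pix = cam[in_front] @ K.T
    pix = pix[:, :2] / pix[:, 2:3]         # perspective divide -> pixel coords
    return pix, in_front

def drape_colours(points_world, frame, K, R, t):
    """Sample RGB from a video frame for every point that projects inside it."""
    h, w = frame.shape[:2]
    pix, in_front = project_points(points_world, K, R, t)
    cols = np.round(pix).astype(int)       # nearest-pixel sampling
    ok = ((cols[:, 0] >= 0) & (cols[:, 0] < w) &
          (cols[:, 1] >= 0) & (cols[:, 1] < h))
    colours = np.zeros((len(points_world), 3), dtype=frame.dtype)
    idx = np.flatnonzero(in_front)[ok]     # indices of visible, in-frame points
    colours[idx] = frame[cols[ok, 1], cols[ok, 0]]
    return colours
```

Given a registered camera pose per video frame, repeatedly applying this per frame accumulates colour over the whole cloud or draped surface.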