Spectral LiDAR analysis can be enabled by the use of spatial context, spatial structure, and prior information in the form of map data. LiDAR intensity imagery is analyzed here using an object-based approach that segments the data according to vector information obtained from OpenStreetMap and other vector map sources. Polygons and features from the map vectors are used to establish regions of interest for analysis, which automates the training process for traditional statistical classifiers and machine learning algorithms. Map-derived objects can contain multiple spectral components, which must be resolved to define the primary object and its semantic label.
Data from the Optech Titan airborne laser scanner were collected over Monterey, CA, in three wavelengths (532 nm, 1064 nm, and 1550 nm) in October 2016 by the National Center for Airborne LiDAR Mapping (NCALM). LiDAR waveform data at 532 nm from the Optech Titan were analyzed for the forested area at Point Lobos State Park. Standard point cloud processing was done using LAStools. Waveform data were converted into pseudo "hypercubes" in order to facilitate use of the analysis structures developed for hyperspectral imagery. Returns were classified with ENVI tools such as Support Vector Machines (SVM), Spectral Angle Mapper (SAM), Maximum Likelihood, and K-means. Using this analog to hyperspectral data analysis to classify vegetation and terrain, SVM classification of the full-waveform data improved low-vegetation classification by 40% and differentiated tree types (pine/cypress) at 40–60% accuracy.
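The Spectral Angle Mapper named above compares each pixel spectrum to a set of reference spectra by the angle between them, which makes it insensitive to overall brightness. A minimal numpy sketch of the idea (function and parameter names are illustrative, not the ENVI API):

```python
import numpy as np

def sam_classify(cube, references, max_angle=0.1):
    """Spectral Angle Mapper sketch: assign each pixel of an
    (ny, nx, nbands) cube to the reference spectrum with the smallest
    spectral angle; pixels with no reference within max_angle get -1."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    refs = np.asarray(references, dtype=float)
    # cosine of the angle between every pixel and every reference
    num = flat @ refs.T
    denom = (np.linalg.norm(flat, axis=1, keepdims=True)
             * np.linalg.norm(refs, axis=1))
    angles = np.arccos(np.clip(num / denom, -1.0, 1.0))
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1  # unclassified
    return labels.reshape(cube.shape[:-1])
```

Because the angle ignores magnitude, a pixel that is a scaled copy of a reference spectrum (e.g. the same material under different illumination or return strength) still classifies to that reference.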
Data from the Optech Titan airborne laser scanner were collected over Monterey, CA, in three wavelengths (532 nm, 1064 nm, and 1550 nm) in May 2016 by the National Center for Airborne LiDAR Mapping (NCALM). Analysis techniques have been developed using spectral technology largely derived from the analysis of spectral imagery. Data are analyzed as individual points, rather than with techniques that emphasize spatial binning. The primary tool that allows for this exploitation is the N-Dimensional Visualizer contained in the ENVI software package. The results show significant improvement in classification accuracy compared to results obtained with standard LiDAR analysis tools.
LiDAR waveform analysis is a relatively new activity in the area of laser scanning. The work described here explores a different approach to visualization and analysis, following the structure that has evolved for the analysis of imaging spectroscopy data (hyperspectral imaging). The waveform data are transformed into 3-dimensional data structures that provide x-y position information and a z-coordinate consisting of the digitized waveform. This allows representation of the data in spatial and waveform space, extraction of characteristic spectra, and development of regions of interest, and it enables the application of standard spectral classification tools such as the maximum likelihood classifier.
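The transformation described above amounts to gridding each return's digitized waveform into a cube indexed by (row, column, time bin), so that hyperspectral tooling can treat the time axis like a band axis. A minimal sketch under assumed inputs (the array names, cell size, and last-writer-wins collision rule are illustrative choices, not the authors' pipeline):

```python
import numpy as np

def waveforms_to_cube(points, waveforms, cell=1.0):
    """Grid full-waveform LiDAR returns into a pseudo 'hypercube'.

    points    : (n, 2) array of x, y positions
    waveforms : (n, nbins) array of digitized return amplitudes
    cell      : grid cell size in the units of x and y

    Returns an (ny, nx, nbins) array. Cells receiving several
    waveforms keep the last one written; averaging or taking the
    max per cell would be equally reasonable.
    """
    points = np.asarray(points, dtype=float)
    waveforms = np.asarray(waveforms, dtype=float)
    x0, y0 = points.min(axis=0)
    cols = ((points[:, 0] - x0) / cell).astype(int)
    rows = ((points[:, 1] - y0) / cell).astype(int)
    cube = np.zeros((rows.max() + 1, cols.max() + 1, waveforms.shape[1]))
    cube[rows, cols] = waveforms
    return cube
```

Once the data are in this shape, each cell's waveform can be treated exactly like a pixel spectrum: plotted, averaged over a region of interest, or fed to a spectral classifier.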
Data from the Optech Titan are analyzed here for purposes of terrain classification, adding a spectral component to the LiDAR point cloud analysis. Nearest-neighbor sorting techniques are used to create a merged point cloud from the three channels. The merged point cloud is analyzed using spectral techniques that exploit color and derived spectral products (pseudo-NDVI), as well as LiDAR features such as height values and return number. Standard spectral image classification techniques are used to train a classifier, and analysis was done with a supervised Maximum Likelihood classification. Terrain classification results show an overall accuracy improvement of 10% and a kappa coefficient increase of 0.07 over a raster-based approach.
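The nearest-neighbor merge and pseudo-NDVI steps can be sketched in a few lines of numpy. This is a brute-force illustration under assumed inputs (two of the three channels, hypothetical array names and distance threshold); a KD-tree would be the practical choice at full point densities:

```python
import numpy as np

def merge_channels(xyz_green, i_green, xyz_nir, i_nir, max_dist=1.0):
    """Attach the nearest 1064 nm (NIR) intensity to each 532 nm
    (green) point. Brute-force nearest neighbor for clarity;
    points with no NIR neighbor within max_dist get NaN."""
    xyz_green = np.asarray(xyz_green, dtype=float)
    xyz_nir = np.asarray(xyz_nir, dtype=float)
    d = np.linalg.norm(xyz_green[:, None, :] - xyz_nir[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    nir = np.asarray(i_nir, dtype=float)[nearest]
    nir[d.min(axis=1) > max_dist] = np.nan
    return np.column_stack([i_green, nir])

def pseudo_ndvi(green, nir):
    """NDVI analog built from LiDAR channel intensities:
    (NIR - green) / (NIR + green)."""
    return (nir - green) / (nir + green)
```

The same merge would be repeated for the 1550 nm channel, after which each point carries a three-band "spectrum" plus the native LiDAR attributes (height, return number) used as classifier features.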
Computer vision and photogrammetric techniques have been widely applied to digital imagery to produce high-density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space that carry surface temperature information. The work presented here evaluates the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset was processed to produce a dense point cloud for 3D evaluation.
The Naval Postgraduate School (NPS) Remote Sensing Center (RSC) and research partners have completed a remote sensing pilot project in support of California post-earthquake-event emergency response. The project goals were to dovetail emergency management requirements with remote sensing capabilities to develop prototype map products for improved earthquake response. NPS coordinated with emergency management services and first responders to compile information about essential elements of information (EEI) requirements. A wide variety of remote sensing datasets, including multispectral imagery (MSI), hyperspectral imagery (HSI), and LiDAR, were assembled by NPS for the purpose of building imagery baseline data and to demonstrate the use of remote sensing to derive ground surface information for use in planning, conducting, and monitoring post-earthquake emergency response. WorldView-2 data were converted to reflectance, orthorectified, and mosaicked for most of Monterey County, CA. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data acquired at two spatial resolutions were atmospherically corrected and analyzed in conjunction with the MSI data. LiDAR data at point densities from 1.4 pts/m2 to over 40 pts/m2 were analyzed to determine digital surface models. The multimodal data were then used to develop change detection approaches and products and other supporting information. Analysis results from these data, along with other geographic information, were used to identify and generate multi-tiered products tied to the level of post-event communications infrastructure (internet access + cell, cell only, no internet/cell). Technology transfer of these capabilities to local and state emergency response organizations gives emergency responders new tools in support of post-disaster operational scenarios.
When LiDAR data are collected, the intensity information is recorded for each return and can be used to produce an image resembling those acquired by passive imaging sensors. This research evaluated LiDAR intensity data to determine its potential for use as baseline imagery where optical imagery is unavailable. Two airborne LiDAR datasets collected at different point densities and laser wavelengths were gridded and compared with optical imagery. Optech Orion C200 laser data were compared with the corresponding 1541 nm spectral band from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Optech ALTM Gemini LiDAR data collected at 1064 nm were compared to the WorldView-2 (WV-2) 949–1043 nm NIR2 band. Intensity images were georegistered and spatially resampled to match the optical data. The Pearson product-moment correlation coefficient was calculated between datasets to determine similarity. Comparison of the full LiDAR datasets yielded correlation coefficients of approximately 0.5. Because LiDAR returns from vegetation are known to be highly variable, a Normalized Difference Vegetation Index (NDVI) was calculated from the optical imagery, and the intensity and optical imagery were separated into vegetated and non-vegetated categories. Comparison of the LiDAR intensity for non-vegetated areas to the optical imagery yielded coefficients greater than 0.9. These results demonstrate that LiDAR intensity data may substitute for optical imagery where only LiDAR is available.
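The NDVI-masked comparison described above reduces to computing a Pearson coefficient separately over vegetated and non-vegetated pixels. A minimal sketch, assuming gridded, co-registered arrays and an illustrative NDVI threshold of 0.3 (the paper does not state its threshold):

```python
import numpy as np

def masked_pearson(intensity, optical, ndvi, veg_threshold=0.3):
    """Pearson correlation between a gridded LiDAR intensity image and
    a co-registered optical band, split into vegetated and
    non-vegetated pixels using an NDVI threshold."""
    a, b, v = (np.ravel(x).astype(float) for x in (intensity, optical, ndvi))
    veg = v > veg_threshold
    r_veg = np.corrcoef(a[veg], b[veg])[0, 1]     # vegetated pixels
    r_non = np.corrcoef(a[~veg], b[~veg])[0, 1]   # non-vegetated pixels
    return r_veg, r_non
```

The pattern reported in the abstract corresponds to r_non approaching 1 (bare surfaces correlate strongly across sensors) while r_veg stays low, since vegetation returns are highly variable.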