The Air Force’s Rapid Airfield Damage Assessment (RADA) process was conceived as a means of evaluating airfield pavement assets after attacks to inform subsequent threat mitigation and repair efforts. The classification and geolocation of small objects of interest (< 7.5 cm), such as unexploded ordnance, is a critical component of this assessment process. In its original form, RADA was conducted manually, exposing teams of service members to dangerous and unknown conditions for hours at a time. In an effort to both expedite and remotely automate this critical task, researchers are developing small Uncrewed Aerial Systems (sUAS) equipped with various sensor payloads to perform object detection across the compromised airfield environment. Hyperspectral imaging has been specifically targeted as a promising sensor solution due to its enhanced discriminatory power in classifying materials. This study focuses on how measurements of these small objects are affected by changes in the parameters that govern operation of the drone-sensor system. Radiometric precision and spatial resolution are evaluated with respect to changes in flight speed, altitude, shutter speed, gain, and frames per second under realistic field conditions. Within the ranges evaluated for each system parameter, the drone-sensor system presented here spectrally and spatially resolves objects captured by just a few pixels with sufficient accuracy and precision for the RADA application.
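To illustrate how these flight parameters trade off against spatial resolution, the short Python sketch below computes across-track ground sample distance (from the standard pinhole camera model) and along-track pixel smear (platform speed times exposure time). All function names and numeric values are illustrative assumptions for a generic push-broom imager, not parameters reported by the study.

# Hypothetical sketch: how altitude, speed, and shutter speed constrain
# spatial resolution for a push-broom hyperspectral imager.
# All numbers below are illustrative assumptions, not values from the study.

def ground_sample_distance(altitude_m: float, focal_length_mm: float,
                           pixel_pitch_um: float) -> float:
    """Across-track GSD (m/pixel) from the pinhole camera model."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def along_track_smear(speed_mps: float, exposure_s: float) -> float:
    """Distance (m) the platform travels during one exposure."""
    return speed_mps * exposure_s

if __name__ == "__main__":
    gsd = ground_sample_distance(altitude_m=30.0, focal_length_mm=16.0,
                                 pixel_pitch_um=7.4)
    smear = along_track_smear(speed_mps=2.0, exposure_s=1.0 / 250.0)
    print(f"GSD: {gsd * 100:.1f} cm/pixel")         # resolution at nadir
    print(f"Smear: {smear * 100:.1f} cm/exposure")  # motion-blur contribution
    # A < 7.5 cm target must span at least a few pixels after accounting
    # for both GSD and smear, which constrains altitude and speed jointly.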
If an airfield operated by the U.S. Air Force is attacked, the current methodology for assessing its condition is a slow, manual inspection process that exposes personnel to dangerous conditions. Advances in drone technology, remote sensing, deep learning, and computer vision have sparked interest in developing autonomous remote solutions. While digital image processing techniques have matured in recent decades, a lack of application-specific training data presents significant obstacles to developing reliable solutions for detecting specific objects amongst rubble, debris, variations in pavement types, changing surface features, and other variable runway conditions. Consequently, near-surface hyperspectral imaging has been proposed as an alternative to RGB digital imaging due to its discriminatory power in classifying materials. Spatio-spectral data acquired by hyperspectral imagers help address common challenges presented by data scarcity and scene complexity; however, raw data acquired by hyperspectral sensors must first undergo a reflectance correction process before they can be of use. This paper presents an expedient method, tailored to airfield damage assessment, for performing autonomous reflectance correction on near-surface hyperspectral data using in-scene pavement materials with a known spectral reflectance. Unlike most reflectance correction methods, this process eliminates the need for human intervention with the sensor (or its data) pre- or post-flight and does not require pre-staged reference targets or an additional downwelling irradiance sensor. Positive initial results from real-world flights over pavements are presented and compared to traditional methods of reflectance correction. Three separate flight tests report mean errors between 2% and 2.5% using the new method.
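As a rough sketch of how an in-scene reference correction of this kind might be implemented, the outline below applies a per-band gain derived from pavement pixels with a known reflectance curve. The function name, the pixel-selection mask, and the known pavement reflectance input are assumptions for illustration only, not the paper's actual implementation.

import numpy as np

# Minimal sketch of a single-reference, in-scene reflectance correction,
# in the spirit of the method described above. How the pavement pixels
# are identified and how their reflectance curve is obtained are assumed.

def reflectance_from_in_scene_reference(cube: np.ndarray,
                                        reference_mask: np.ndarray,
                                        reference_reflectance: np.ndarray) -> np.ndarray:
    """
    cube: raw hyperspectral cube, shape (rows, cols, bands)
    reference_mask: boolean array (rows, cols) flagging in-scene pavement pixels
    reference_reflectance: known pavement reflectance, shape (bands,)
    Returns an estimated reflectance cube with the same shape as `cube`.
    """
    # Mean raw signal over the reference pavement pixels, per band.
    ref_signal = cube[reference_mask].mean(axis=0)   # (bands,)
    # Per-band gain mapping raw signal to reflectance.
    gain = reference_reflectance / ref_signal        # (bands,)
    return cube * gain                               # broadcasts over all pixels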
When fielding near-surface hyperspectral imaging systems for computer vision applications, raw data from a sensor are often corrected to reflectance before analysis. This research presents an expedient and flexible methodology for performing spectral reflectance estimation using in situ asphalt cement concrete or Portland cement concrete pavement as a reference material. Then, to evaluate this reflectance estimation method’s utility for computer vision applications, four datasets are generated to train machine learning models for material classification: (1) a raw signal dataset, (2) a normalized dataset, (3) a reflectance dataset corrected with a standard reference material (polytetrafluoroethylene), and (4) a reflectance dataset corrected with a pavement reference material. Various machine learning algorithms are trained on each of the four datasets, and all converge to excellent training accuracy (>94%). Models trained on the raw or normalized signals, however, did not exceed 70% accuracy when tested against new data captured under different illumination conditions, while models trained using either reflectance dataset saw almost no drop between training and testing accuracy. These results quantify the importance of reflectance correction in machine learning workflows using hyperspectral data while also confirming the practical viability of the proposed reflectance correction method for computer vision applications.
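The train/test comparison described here can be sketched as follows; the classifier choice (a random forest), the unit-norm normalization scheme, and all variable names are illustrative assumptions rather than the study's actual models or preprocessing.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Illustrative sketch of the evaluation pattern described above.
# Construction of the spectra (X_*) and labels (y_*) for the raw,
# normalized, and two reflectance variants is assumed.

def l2_normalize(spectra: np.ndarray) -> np.ndarray:
    """Scale each spectrum to unit length (one common normalization)."""
    return spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

def evaluate(X_train, y_train, X_test, y_test) -> tuple[float, float]:
    """Train a classifier and report training vs. testing accuracy."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return (accuracy_score(y_train, model.predict(X_train)),
            accuracy_score(y_test, model.predict(X_test)))

# Hypothetical usage: X_* are (n_samples, n_bands) arrays, y_* label vectors.
# datasets = {"raw": (X_raw_tr, X_raw_te),
#             "normalized": (l2_normalize(X_raw_tr), l2_normalize(X_raw_te)),
#             "reflectance_ptfe": (X_ptfe_tr, X_ptfe_te),
#             "reflectance_pavement": (X_pave_tr, X_pave_te)}
# for name, (Xtr, Xte) in datasets.items():
#     train_acc, test_acc = evaluate(Xtr, y_train, Xte, y_test)
#     print(f"{name}: train={train_acc:.2%}, test={test_acc:.2%}")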