Hyperspectral airborne sensing systems frequently employ spectral signature databases to detect materials. To achieve high detection and low false alarm rates, it is critical to retrieve accurate reflectance values from the camera’s digital number (DN) output. A one-time camera calibration converts DN values to reflectance. However, changes in solar angle and atmospheric conditions distort the reflected energy, reducing the system's detection performance.
Changes in solar angle and atmospheric conditions introduce both additive (offset) and multiplicative (gain) effects in each waveband. A gain and offset correction can mitigate these effects. Correction methods based on radiative transfer models require equipment to measure the solar angle and atmospheric conditions. Other methods use known reference materials in the scene to calculate the correction, but require an operator to identify the locations of these materials. Our unmanned airborne vehicle application can use no additional equipment and cannot rely on operator intervention. Applicable automated correction approaches typically analyze gross scene statistics to find the gain and offset values. Airborne hyperspectral systems have high ground resolution but limited fields of view, so an individual frame does not include all the variation necessary to accurately calculate global statistics.
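As a point of reference, the per-band model is linear in each waveband; below is a minimal sketch of applying such a correction to a data cube (the function name and array layout are assumptions, not the paper's code):

```python
import numpy as np

def correct_cube(dn_cube, gain, offset):
    """Apply a per-band gain/offset correction to a hyperspectral cube.

    dn_cube : (rows, cols, bands) array of raw digital numbers (DN)
    gain    : (bands,) multiplicative correction for each waveband
    offset  : (bands,) additive correction for each waveband
    """
    # reflectance estimate = gain * DN + offset, broadcast over the band axis
    return dn_cube * gain + offset
```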
In the present work, we describe a novel approach to automatically estimating atmospheric and solar effects from the hyperspectral data itself. Our approach is based on Hough transform matching of background spectral signatures with materials extracted from the scene. Scene materials are identified with low-complexity agglomerative clustering. We show detection results with data gathered from recent field tests.
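The abstract leaves the matching step at a high level, but the general Hough idea can be sketched as follows: for a single waveband, each pairing of a clustered scene material with a background library signature votes for the line it implies in (gain, offset) space, and the most-voted cell gives that band's distortion estimate. Everything below (grid parameters, function name, voting rule) is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def hough_gain_offset(scene_vals, library_vals, gains, offsets):
    """Vote in (gain, offset) space for one waveband.

    scene_vals   : band values of materials clustered from the scene
    library_vals : band values of background library signatures
    gains        : 1-D grid of candidate gains
    offsets      : uniform 1-D grid of candidate offsets
    """
    acc = np.zeros((gains.size, offsets.size), dtype=int)
    step = offsets[1] - offsets[0]
    for s in scene_vals:
        for r in library_vals:
            for i, g in enumerate(gains):
                # observed = g * reference + o, so each (s, r) pair votes
                # along the line o = s - g * r
                j = int(round((s - g * r - offsets[0]) / step))
                if 0 <= j < offsets.size:
                    acc[i, j] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return gains[i], offsets[j]
```

Inverting the winning line, (DN - offset) / gain, is itself a per-band gain/offset correction of the form sketched earlier.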
Today, requirements for imaging systems are shifting from standard definition to high definition (HD). Along with HD imagers, users also require embedded metadata and high-precision pointing. These requirements place new demands on the opto-mechanical, thermal, and electrical subsystems of stabilized platforms that use these imagers. This paper discusses the impact of HD imagers on gimbal design, including requirements for better stabilization, better thermal management, and better electronics to handle the associated high data rates. We also discuss how the requirements for wide area surveillance sensors will further impact the gimbal designs for these sensors.
Forward looking infrared (FLIR) and radar (X-band or Ku-band) sensors are potential components in external hazard monitoring systems for general aviation aircraft. We are investigating the capability of these sensors to provide hazard information to the pilot when normal visibility is reduced by meteorological conditions. Fusing detection results from FLIR and radar sensors can improve hazard detection performance. We have developed a demonstration fusion system for the detection of runway incursions. In this paper, we present our fusion system, along with detection results from data recorded on approach to landing during clear daylight, overcast daylight, and clear night conditions.
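As an illustration of what detection-level fusion of this kind can look like (the association gate, the probabilistic-OR rule, and all names below are assumptions made for the sketch, not the demonstration system's design):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # position on the runway plane, metres (illustrative)
    y: float
    confidence: float  # detector confidence in [0, 1]

def fuse(flir_dets, radar_dets, gate=10.0):
    """Associate FLIR and radar detections by proximity; unmatched
    detections from either sensor pass through unchanged."""
    fused, matched = [], set()
    for f in flir_dets:
        # nearest unmatched radar detection within the association gate
        cand = [(((f.x - r.x) ** 2 + (f.y - r.y) ** 2) ** 0.5, i)
                for i, r in enumerate(radar_dets) if i not in matched]
        hit = min((c for c in cand if c[0] < gate), default=None)
        if hit is not None:
            i = hit[1]
            matched.add(i)
            r = radar_dets[i]
            # probabilistic OR: a hazard is reported if either sensor saw it
            conf = 1.0 - (1.0 - f.confidence) * (1.0 - r.confidence)
            fused.append(Detection((f.x + r.x) / 2, (f.y + r.y) / 2, conf))
        else:
            fused.append(f)
    fused.extend(r for i, r in enumerate(radar_dets) if i not in matched)
    return fused
```

The OR-style combination raises confidence when both sensors agree, which is one way fusing FLIR and radar can outperform either sensor alone.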
Forward Looking Infrared (FLIR) sensors are potential components in hazard monitoring systems for general aviation aircraft. FLIR sensors can provide images of the runway area when normal visibility is reduced by meteorological conditions. We are investigating short wave infrared (SWIR) and long wave infrared (LWIR) cameras. Pre-recorded video taken from an aircraft on approach to landing provides raw data for our analysis. This video includes approaches under four conditions: clear morning, cloudy afternoon, clear evening, and clear night. We used automatic object detection techniques to quantify the ability of these sensors to alert the pilot to potential runway hazards. Our analysis is divided into three stages: locating the airport, tracking the runway, and detecting vehicle-sized objects. The success or failure of locating the runway provides information on the ability of the sensors to provide situational awareness. Tracking the runway position from frame to frame provides information on the visibility of runway features, such as landing lights or runway edges, in the scene. Detecting small objects quantifies clutter and provides information on the ability of these sensors to image potential hazards. In this paper, we present results from our analysis of sample approach video.
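The three-stage analysis lends itself to a simple pipeline skeleton. The stage functions below are injected placeholders, since the abstract does not specify their interfaces; only the control flow is being illustrated:

```python
def analyze_approach(frames, locate_airport, track_runway, detect_objects):
    """Run the three-stage analysis over an approach video.

    locate_airport(frame)          -> runway region or None (stage 1)
    track_runway(frame, region)    -> updated region or None if lost (stage 2)
    detect_objects(frame, region)  -> vehicle-sized candidates (stage 3)
    """
    region, log = None, []
    for frame in frames:
        if region is None:
            region = locate_airport(frame)        # stage 1: situational awareness
            log.append(("acquire", region is not None, []))
        else:
            region = track_runway(frame, region)  # stage 2: feature visibility
            objs = detect_objects(frame, region) if region else []
            log.append(("track", region is not None, objs))
    return log

# usage with trivial stand-ins, e.g.:
#   analyze_approach(frames, lambda f: "rwy", lambda f, r: r, lambda f, r: [])
```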
The latest generation of heavily armored vehicles and the proliferation of IEDs in urban combat environments dictate that electro-optical systems play a greater role in situational awareness for ground vehicles. FLIR Systems has been addressing the needs of the ground vehicle community by developing unique sensor systems that combine thermal imaging and electro-optical sensors, advanced image processing, and networking capabilities into compact, cost-effective packages.
This paper will discuss one of those new products, the WideEye II. The WideEye II combines long wave infrared and electro-optical sensors in a single integrated package with a 180-degree field of view to meet the critical needs of the warfighter. It includes seamless electronic stitching of the 180-degree image and state-of-the-art networking capability that allows it to be operated standalone or fully integrated with modern combat vehicle systems. The paper will discuss system tradeoffs and capabilities of this new product and show potential applications for its use.
Single and multi-sensor imaging systems are being improved every day through the use of image processing, but there are limits to what software can do alone. The capabilities of image processing software can be improved by careful design of the optical and mechanical components of the imaging system. This paper explores the interaction between opto-mechanical design and real-time image processing for airborne imaging systems. We discuss the design of components for multiple imager systems to support both visual and assisted target recognition applications. Critical concepts include boresight alignment, low distortion optics, and pixel matching across multiple imagers for both image fusion and multi-spectral target detection. Incorporation of these concepts into our latest designs has enhanced both image quality and the effectiveness of our imaging systems. In this paper, we discuss opto-mechanical design considerations for individual cameras and look at the tradeoffs between mechanical and software design for providing effective imagery from multiple cameras.
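As a small illustration of pixel matching across boresight-aligned imagers, a pre-calibrated 3x3 homography can map pixels from one imager into another for fusion or multi-spectral detection (the numbers below are made up; a real system would calibrate them):

```python
import numpy as np

def register_pixel(u, v, H):
    """Map pixel (u, v) in imager A to its location in imager B.

    For boresight-aligned, low-distortion optics viewing a distant scene,
    a single homography H approximates the pixel correspondence.
    """
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # perspective divide

# illustrative homography: a pure shift of (+3.5, -1.2) pixels
H = np.array([[1.0, 0.0, 3.5],
              [0.0, 1.0, -1.2],
              [0.0, 0.0, 1.0]])
print(register_pixel(100.0, 200.0, H))  # -> (103.5, 198.8)
```

The better the opto-mechanical alignment and distortion control, the closer this mapping is to a simple shift, and the less work the software has to do per frame.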
We have investigated the use of forward looking infrared (FLIR) sensors to verify aircraft navigation information during approach and landing. Our research includes the development of an experimental primary flight display (PFD) integrated with a synthetic vision system (SVS). The effectiveness of a traditional SV display is limited by navigation equipment position and orientation errors, database limitations, and lack of knowledge of temporary obstacles. However, integrating information from the navigation system with an external FLIR sensor has the potential to increase the information provided to the pilot, improving flight safety. In prior work, we developed software to correct aircraft orientation inaccuracies. Our algorithm locates the runway in a long wave infrared (LWIR) image and uses the extracted runway location to validate and correct the SV system's understanding of aircraft orientation. Evaluations demonstrated that this orientation correction worked well when there were no position errors. However, uncorrected position inaccuracies introduce errors into the pitch and heading correction estimates as the aircraft approaches the runway. To address this problem, we have developed a new algorithm to separate the image effects of orientation and position errors. This allows our system to correct for both orientation and position errors. We evaluated our system using LWIR video and navigation data recorded by test aircraft during runway approaches. Our results show significant improvements in correction accuracy using dual orientation and position estimation compared to orientation correction alone.
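One way to picture the joint orientation-and-position estimate is as a pose solution from the extracted runway outline (a sketch assuming four extracted runway corners and a calibrated camera; OpenCV's PnP solver stands in for the paper's estimator):

```python
import numpy as np
import cv2  # OpenCV

def correct_pose(runway_world, runway_image, K):
    """Recover orientation and position together from runway corners.

    Fitting rotation alone would absorb any position error into spurious
    pitch/heading corrections as the aircraft nears the runway, which is
    the failure mode the dual estimate avoids.

    runway_world : (4, 3) runway corner coordinates, metres
    runway_image : (4, 2) corner pixels extracted from the LWIR image
    K            : 3x3 camera intrinsic matrix
    """
    ok, rvec, tvec = cv2.solvePnP(
        runway_world.astype(np.float32),
        runway_image.astype(np.float32),
        K.astype(np.float32),
        np.zeros(4, dtype=np.float32))  # assume lens distortion removed
    return (rvec, tvec) if ok else None  # orientation + position corrections
```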
Experiments and analysis were used to determine the number of resolvable cycles across an alphanumeric character required for readability. This has serious implications for the resolution needed for a surveillance camera to present a “readable” image to a human. Fourier analysis was used to predict the number of cycles required for readability. Using two-dimensional Fourier transforms, the set of 26 English letters and 10 Arabic numerals was analyzed and classified. This theory is supported by empirical data based on user identification of random English letters and Arabic numerals. The results strongly indicate that accurate readability (defined as 90% correctness or better) can be accomplished with approximately 2.8 cycles across a block letter. This appears to suggest a lower resolution requirement than that generally accepted for unknown target identification. The reason is the limited data set of only 36 alphanumeric characters, of which the observer possesses a priori knowledge. Moreover, the ability to read an alphanumeric character is a steep function of the resolution between 2 and 3 cycles per character height. The probability of correct “Reading” can be expressed similarly to that of Detection, Recognition, and Identification by using a postscript such as “Read90”.
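The band-limiting step behind such an analysis can be sketched with a 2-D FFT (array conventions below are assumptions; this is not the study's code): the character image is low-pass filtered so that only a chosen number of cycles per character height survive.

```python
import numpy as np

def limit_cycles(char_img, cycles):
    """Band-limit a character image to `cycles` cycles per character height.

    char_img : 2-D array with the character tightly cropped, so its
               height spans the full image height
    cycles   : resolvable cycles to retain across the character
    """
    rows, cols = char_img.shape
    F = np.fft.fftshift(np.fft.fft2(char_img))
    # spatial frequencies expressed in cycles per character height
    fy = np.fft.fftshift(np.fft.fftfreq(rows)) * rows
    fx = np.fft.fftshift(np.fft.fftfreq(cols)) * rows
    mask = (fx[None, :] ** 2 + fy[:, None] ** 2) <= cycles ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Sweeping the cycle count from 2 to 3 and asking observers to read the filtered letters is one way to visualize the steep readability transition reported above.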