Integrating a passive millimeter wave (PMMW) camera, the Global Positioning System (GPS), and the Differential Global Positioning System (DGPS) provides a pilot with a visual precision approach and landing capability in inclement weather, conceivably down to CAT III conditions. A DARPA-funded, NASA Langley-managed Technology Reinvestment Program (TRP) consortium consisting of Honeywell, TRW, Boeing, and Composite Optics is demonstrating the PMMW camera. The TRW-developed PMMW camera images the runway through fog, smoke, and clouds, day or night. The Global Air Traffic Program Office entered into a Cooperative Research and Development Agreement (CRDA) with Honeywell to demonstrate DGPS. The Honeywell-developed DGPS provides precision navigational data to within 1 m of error, where GPS has roughly 100 m of error. In inclement weather, the runway approach is initiated using GPS data until the aircraft is within range to receive DGPS data. The runway is presented to the pilot as a PMMW image viewed on a Head-Up Display (HUD) or Head-Mounted Display (HMD). Once DGPS data is available, precise runway and horizon symbology is computed in the Flight Display Computer and overlaid on the PMMW image. Image processing algorithms operate on the PMMW image to identify and highlight obstacles on the runway. The integrated system thus gives the pilot enhanced situation awareness of the runway approach in inclement weather. When a DGPS ground station is not available at the landing area, image processing algorithms, again operating on the PMMW image, generate the runway and horizon symbology; GPS provides the algorithm with initial conditions for runway location and perspective, and the algorithm then locates and highlights the runway and any obstacles on it. Honeywell Technology Center is performing research on integrating the PMMW, DGPS, and GPS technologies to provide the pilot with the most necessary features of each system, namely visibility, accuracy, obstacle detection, runway overlay, horizon symbology, and availability.
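As a minimal sketch of the mode logic described above, the following Python fragment illustrates the GPS-to-DGPS handover and the choice between DGPS-computed and image-derived symbology. All names, types, and the switchover rule are hypothetical illustrations, not the flight system's actual interfaces; only the 1 m and 100 m accuracy figures come from the abstract.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NavFix:
    lat: float      # degrees
    lon: float      # degrees
    error_m: float  # horizontal position error

# Illustrative accuracies from the abstract: GPS ~100 m, DGPS ~1 m.
GPS_ERROR_M = 100.0
DGPS_ERROR_M = 1.0

def select_nav_source(gps: NavFix, dgps: Optional[NavFix]) -> NavFix:
    """Fly the approach on GPS until DGPS corrections are received,
    then switch to the precise DGPS solution."""
    return dgps if dgps is not None else gps

def symbology_mode(fix: NavFix) -> str:
    """Decide how runway/horizon symbology is generated: computed from
    the precise DGPS fix, or extracted from the PMMW image itself
    (seeded by the coarse GPS fix) when no ground station is available."""
    return "dgps-computed" if fix.error_m <= DGPS_ERROR_M else "image-derived"

if __name__ == "__main__":
    gps = NavFix(37.47, -122.12, error_m=GPS_ERROR_M)
    dgps = NavFix(37.47, -122.12, error_m=DGPS_ERROR_M)
    print(symbology_mode(select_nav_source(gps, None)))   # image-derived
    print(symbology_mode(select_nav_source(gps, dgps)))   # dgps-computed
```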
Architecture optimization requires numerous inputs, from hardware to software specifications. Varying these input parameters to obtain a system architecture that is optimal with respect to cost, specified performance, and method of upgrade considerably increases development cost, because the space of possible configurations is effectively unbounded and cannot be captured by simple enumeration or a set of inequalities. We address the use of a PC-based tool that applies genetic algorithms to optimize the architecture of an avionics synthetic vision system, specifically a passive millimeter wave system implementation.
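A genetic algorithm of the kind such a tool employs could be sketched as follows. The genome (candidate architecture parameters), the toy cost model, and all identifiers below are invented for illustration; the abstract does not describe the real tool's parameter set or fitness function.

```python
import random

# Hypothetical architecture parameters for a PMMW-based synthetic
# vision system; the genome and cost model are illustrative only.
CHOICES = {
    "frame_rate_hz": [15, 30, 60],
    "resolution":    [(320, 240), (640, 480)],
    "processor":     ["dsp", "fpga", "cpu"],
    "link_bw_mbps":  [10, 100],
}

def random_arch():
    return {k: random.choice(v) for k, v in CHOICES.items()}

def fitness(arch):
    """Toy cost/performance trade-off standing in for the real
    multi-objective evaluation (cost, performance, upgradeability)."""
    perf = arch["frame_rate_hz"] * arch["resolution"][0] / 320
    cost = {"dsp": 2, "fpga": 3, "cpu": 1}[arch["processor"]] + arch["link_bw_mbps"] / 50
    return perf / cost

def mutate(arch):
    child = dict(arch)
    k = random.choice(list(CHOICES))
    child[k] = random.choice(CHOICES[k])
    return child

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in CHOICES}

def optimize(pop_size=20, generations=50):
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]   # keep the best quarter
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

print(optimize())
```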
For an automatic target recognition (ATR) technology contract sponsored by the US Marine Corps Systems Command and by Coastal Systems Station, Honeywell designed, mapped to Khoros, and evaluated state-of-the-art algorithms for target discrimination from an airborne platform. Honeywell's baseline approach to improving the robustness of traditional algorithms is functional maximization over representations of algorithm performance as a function of image metrics and algorithm parameters. A hill-climbing algorithm revises the ATR parameter values in the direction of the largest gradient of this performance function, attaining improved performance over a greater variety of scenarios than those for which the system was trained. The baseline ATR algorithms implemented for this program are designed to exploit spectral features to enhance target cueing reliability. An innovative approach is discussed for mapping three of the individual waveband images from a multispectral image array into a feature map that achieves high target-versus-background contrast. Experimental results are shown for flight test imagery.
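The hill-climbing parameter revision can be illustrated with a short, self-contained sketch: gradient ascent by finite differences on a stand-in performance surface. The performance function and metric values below are toy assumptions; only the steepest-ascent update mirrors the approach the abstract describes.

```python
import numpy as np

def performance(params: np.ndarray, image_metrics: np.ndarray) -> float:
    """Stand-in for the performance surface P(params, metrics). In the
    program described above, such a surface is learned from training
    imagery; here it is a toy quadratic with a single optimum."""
    target = 0.5 * image_metrics   # optimum depends on scene metrics
    return -float(np.sum((params - target) ** 2))

def hill_climb(params, image_metrics, step=0.1, eps=1e-3, iters=100):
    """Revise ATR parameters in the direction of the largest gradient
    of the performance surface (finite-difference gradient ascent)."""
    p = np.asarray(params, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps
            grad[i] = (performance(p + dp, image_metrics)
                       - performance(p - dp, image_metrics)) / (2 * eps)
        norm = np.linalg.norm(grad)
        if norm < 1e-6:
            break
        p += step * grad / norm   # move along steepest ascent
    return p

metrics = np.array([1.0, 0.4, 2.0])   # e.g. contrast, clutter, SNR proxies
tuned = hill_climb(np.zeros(3), metrics)
print(tuned)   # converges toward 0.5 * metrics
```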
Maximum utilization of national airspace resources requires the development of systems that provide adverse-weather landing guidance and allow continued operations in low-visibility conditions. The overall system, known as an enhanced situation awareness system (ESAS), encompasses a broad range of functions, including a forward vision system (FVS). The FVS, the part of ESAS on which this paper focuses, consists of forward-looking imaging sensors and associated processors that collectively penetrate the atmospheric conditions. The FVS provides a spectrum of services to the flight crew and to the aircraft in general. A series of image processing techniques crucial to FVS operation has been developed and implemented at Honeywell. The techniques fall into three core categories: image enhancement, feature extraction, and object recognition and tracking. In this paper, the issues involved in each category of processing are discussed, the most promising algorithms are described, and preliminary results of the image processing are presented. The sensor types explored to date include visible-band TV, FLIR, and 35 GHz radar; results are shown on data from the visible-band and 35 GHz radar imaging sensors.
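A toy rendition of the three processing categories, using only NumPy, might look like the following. The enhancement, edge extraction, and blob detection steps are deliberately simplistic stand-ins for illustration, not Honeywell's FVS algorithms.

```python
import numpy as np

def enhance(img: np.ndarray) -> np.ndarray:
    """Image enhancement: global histogram equalization to stretch
    the limited dynamic range typical of PMMW/radar imagery."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 255))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[img.astype(np.uint8)].astype(np.uint8)

def extract_edges(img: np.ndarray, thresh: float = 30.0) -> np.ndarray:
    """Feature extraction: gradient-magnitude edge map, from which
    runway boundaries and the horizon line could be fit."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def detect_candidates(edges: np.ndarray):
    """Object recognition (toy): report coordinates of edge pixels as
    candidate objects for highlighting/tracking on the display."""
    ys, xs = np.nonzero(edges)
    return list(zip(ys.tolist(), xs.tolist()))[:10]   # first few candidates

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder sensor frame
candidates = detect_candidates(extract_edges(enhance(frame)))
print(len(candidates), "candidate features")
```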