Passive millimeter wave (PMMW) sensors have been proposed as forward vision sensors for enhanced vision systems used in low-visibility aircraft landing. This work reports on progress achieved to date in the development and manufacturing of a demonstration PMMW camera. The unit is designed to be ground and flight tested starting in 1996. The camera presents a real-time true image of the forward scene on a head-up or head-down display unit. With appropriate head-up symbology and accurate navigation guidance provided by global positioning satellite receivers on board the aircraft, pilots can autonomously (without ground assist) execute Category III low-visibility take-offs and landings on non-equipped runways. We shall discuss the utility of fielding these systems for airlines and other users.
Landing aircraft in inclement weather and taxiing amid numerous obstacles are major issues in air traffic control for both military and civilian aviation. Onboard sensors are needed to penetrate smoke, fog, and haze and to provide enough resolution for the automated detection and recognition of runways and obstacles. The performance of automatic target recognition (ATR) systems using thermal infrared (FLIR) images is limited by the low intensity contrast of terrestrial scenes. We are developing a thermal imaging technique in which each image pixel captures a combination of intensity and polarization data simultaneously. Polarization images offer useful contrast for different surface orientations; this contrast should facilitate image segmentation and classification of objects. In this paper, we describe a combination of two innovative technologies: a polarization-sensitive thermal imaging sensor and a suite of polarimetric-specific automatic object detection and recognition algorithms. The sensor has been able to capture polarization data from the thermal emissions of automobiles; surface orientations can be measured in the same image frame as the temperature distribution. A set of performance metrics will be defined for evaluating the algorithms. We will discuss our evaluation of the algorithms on synthetic images such as would be captured with the polarization-sensitive sensor, and compare the polarimetric-specific ATR performance with that of conventional FLIR-based ATR.
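To make the polarization contrast concrete, the sketch below computes per-pixel linear Stokes parameters and the degree and angle of linear polarization from four analyzer images (0, 45, 90, and 135 degrees). This is a generic formulation, not the authors' sensor pipeline; the analyzer angles and the segmentation threshold are assumptions.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from four analyzer images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0-deg vs. 90-deg preference
    s2 = i45 - i135                      # +45-deg vs. -45-deg preference
    return s0, s1, s2

def polarization_contrast(i0, i45, i90, i135, eps=1e-6):
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)               # angle of polarization (radians)
    return dolp, aop

# Example segmentation cue (threshold is illustrative): smooth man-made surfaces
# tend to emit more strongly polarized thermal radiation than rough backgrounds.
# dolp, aop = polarization_contrast(i0, i45, i90, i135); mask = dolp > 0.05
```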
The Autonomous Landing Guidance program is partly funded by the US Government under the Technology Reinvestment Project. The program consortium consists of avionics and other equipment vendors, airlines and the USAF. A Sextant Avionique HUD is used to present flight symbology in cursive form as well as millimeter wave radar imagery from Lear Astronics equipment and FLIR Systems dual-channel, forward-looking, infrared imagery. All sensor imagery is presented in raster form. A future aim is to fuse all imagery data into a single presentation. Sensor testing has been accomplished in a Cessna 402 operated by the Maryland Advanced Development Laboratory. Development testing is under way in a Northwest Airlines simulator equipped with HUD and image simulation. Testing is also being carried out using United Airlines Boeing 727 and USAF C-135C (Boeing 707) test aircraft. The paper addresses the technology utilized in the sensor and display systems as well as the modifications made to accommodate these elements in the aircraft. Additions to the system test aircraft include global positioning systems, inertial navigation systems and extensive data collection equipment. Operational philosophy and benefits for both civil and military users are apparent. Approach procedures have been developed allowing use of Category I ground installations in Category III conditions.
The APALS™ system is a precision approach and landing system designed to enable low-visibility landings at many more airports than is now possible. Engineering development of the APALS™ system began in October 1992, culminating in the pre-production Advanced Development Model (ADM) system currently undergoing flight testing. The paper focuses on the Cat III accuracy and integrity requirements defined by ICAO Annex 10 and the Required Navigation Performance (RNP) Tunnel Concept, and describes the resulting ADM architecture developed to meet them. The primary measurement is made with the aircraft's weather radar, which provides the range and range-rate information needed to update the precision navigation state vector. The system uses stored terrain map data as references for map matching with Synthetic Aperture Radar maps. A description of the pre-production flight test program is included. Testing is being conducted at six different airports around the country, demonstrating system performance in various environmental conditions (precipitation, heavy foliage, sparse terrain, over water and turbulence). ADM flight test results of 131 successful CAT II hand-flown approaches at Albuquerque, NM and Richmond, VA are presented. Detailed statistical analysis of these results indicates that the APALS™ system meets the RNP for Cat III.
In 1992 Eurocopter Deutschland and Daimler-Benz Aerospace started a research program, HELIRADAR, to investigate the feasibility of a piloting radar based on the so-called ROSAR technology. While available radar instruments are not capable of guiding a helicopter pilot safely under poor visibility conditions, due to lack of resolution and lack of height information, ROSAR technology, a Synthetic Aperture Radar based on ROtating antennas, promises to overcome these deficiencies. Based on ROSAR technology, HELIRADAR has been designed to provide a video-like image whose resolution is good enough to safely guide a helicopter pilot to the target destination under poor visibility conditions. A rotating antenna achieves an effect similar to that of a conventional Synthetic Aperture Radar and thereby yields very high resolution. This principle is especially well suited for helicopters, since it allows for a stationary carrier platform. Additional rotating arms with antennas integrated in their tips are mounted on top of the rotor head. While rotating, the antennas scan the environment from various viewing angles without requiring any movement of the carrier platform itself. The complete transmitter/receiver system is rigidly mounted on top of the rotor axis of the helicopter. The antennas are mounted at the four ends of a cross and rotate at the same speed as the rotor. The received radar signals are transferred through the center of the rotor axis down into the cabin of the helicopter, where they are processed in a PolyCluster-type high-performance digital signal processor.
Enhanced vision or synthetic vision is receiving increasing attention as a possible substitute for traditional autoland systems as a means of achieving landings in low-visibility conditions. The attraction centers on the assertion that these sensor-based systems, when combined with a modern head-up display, offer greater capability at more runway ends than autoland, which requires a suitable runway with a high-integrity radio signal and complementary runway lighting. Enhanced vision also promises new capability for taxi and takeoff guidance. This paper explores the operational characteristics of several possible systems and compares their respective operational capability and economic contribution.
The creation of a millimeter-wave radiovision system with super-Rayleigh resolution is considered. Based on a phased antenna array (PAA) radiometer, methods of scene scanning (survey) and output-signal processing are developed to provide imaging with resolution an order of magnitude higher than the Rayleigh limit. Optimal mathematical methods (of the reduction, deconvolution, and decomposition type) for processing the output signals of the scanning radiometer are developed. An effective mathematical model of PAA pattern control is built. The interrelation between the sensitivity and resolution of the radiometric measurement-and-computation system with the PAA is investigated, including the case of high noise levels. Algorithms for resolving scene elements with super-resolution are developed. As a result, the 8-mm-band measurement-and-computation system with an antenna array created for environmental investigations provides a tenfold increase in radiometer resolution over the Rayleigh limit.
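As a sketch of the deconvolution-type processing mentioned above (not the authors' specific algorithm), the following iterative Richardson-Lucy deconvolution of a one-dimensional radiometer scan recovers detail finer than the antenna beamwidth when the beam pattern is known and the noise level permits; the iteration count is an illustrative assumption.

```python
import numpy as np

def richardson_lucy_1d(scan, beam, iterations=50):
    """Deconvolve a measured 1-D radiometer scan with a known antenna pattern.
    scan : non-negative antenna temperatures; beam : sampled beam pattern."""
    beam = beam / beam.sum()
    beam_rev = beam[::-1]
    estimate = np.full_like(scan, scan.mean(), dtype=float)
    for _ in range(iterations):
        predicted = np.convolve(estimate, beam, mode="same")
        ratio = scan / np.maximum(predicted, 1e-12)
        estimate *= np.convolve(ratio, beam_rev, mode="same")
    return estimate

# E.g., two point sources closer together than the Rayleigh limit merge in the
# raw scan but separate again after a few tens of iterations (noise permitting).
```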
The use of various imaging sensors which can penetrate obscuring visual phenomena (such as fog, snow, smoke, etc.) has been proposed to enable aircraft landings in low visibility conditions. In this paper, we examine the use of two particular sensors, infrared and millimeter wave imaging radar, to aid in the landing task. Several image processing strategies are discussed and demonstrated which could improve the efficacy of an operational concept in which an image is presented to the pilot on a Head-Up Display. The possible strategies include the use of aircraft navigation information to help improve image quality, warping of the images to be geometrically conformal with the pilot's eye-point, use of a priori knowledge about the landing geometry to aid with sensor registration and processing, and fusion of multiple image sources.
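One of the strategies above, warping the sensor image to be geometrically conformal with the pilot's eye-point, can be illustrated with a planar-runway homography. The sketch below assumes OpenCV is available and that the four runway corners have already been located in the sensor image and predicted for the eye-point view from navigation data and the known landing geometry; it is an illustration, not the paper's implementation.

```python
import numpy as np
import cv2

def conformal_warp(sensor_img, corners_sensor, corners_eye):
    """Warp a sensor image so a planar runway appears as it would from the
    pilot's eye-point. corners_* are 4x2 arrays of the same runway corners."""
    H = cv2.getPerspectiveTransform(corners_sensor.astype(np.float32),
                                    corners_eye.astype(np.float32))
    h, w = sensor_img.shape[:2]
    return cv2.warpPerspective(sensor_img, H, (w, h))
```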
We report progress on our development of a color night vision capability, using biological models of opponent-color processing to fuse low-light visible and thermal IR imagery and render it in real time in natural colors. Preliminary results of human perceptual testing are described for a visual search task, the detection of embedded small low-contrast targets in natural night scenes. The advantages of color fusion over two alternative grayscale fusion products are demonstrated in the form of consistent, rapid detection across a variety of low-contrast (+/- 15% or less) visible and IR conditions. We also describe advances in our development of a low-light CCD camera capable of imaging in the visible through near-infrared in starlight at 30 frames/sec with wide intrascene dynamic range, and the locally adaptive dynamic range compression of this imagery. Example CCD imagery is shown under controlled illumination conditions, from full moon down to overcast starlight. By combining the low-light CCD visible imager with a microbolometer array LWIR imager, a portable image processor, and a color LCD on a chip, we can realize a compact design for a color fusion night vision scope.
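The sketch below gives a much simpler false-color mapping than the opponent-color model described above, fusing a normalized visible band and thermal band into an RGB rendering; the channel assignments and the crude visible-minus-IR opponent term are illustrative assumptions, not the authors' processing.

```python
import numpy as np

def normalize(img):
    """Stretch an image to [0, 1] between its 1st and 99th percentiles."""
    img = img.astype(np.float32)
    lo, hi = np.percentile(img, (1, 99))
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def fuse_false_color(visible, thermal):
    v, t = normalize(visible), normalize(thermal)
    opponent = np.clip(v - t, 0.0, 1.0)          # crude visible-minus-IR term
    rgb = np.stack([t, v, opponent], axis=-1)    # R <- IR, G <- visible, B <- V-IR
    return (255.0 * rgb).astype(np.uint8)
```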
A multisensor suite that consists of a forward-looking infrared camera, a radar, and a low-light television camera is likely to be a useful component of enhanced and synthetic vision systems. We examine several aspects of the signal processing needed to combine the individual sensor outputs effectively. First, we discuss transformations of individual sensor image data to a common representation, with a focus on rectification of radar data without relying on a flat-earth assumption. We then describe a novel approach to image representation that minimizes loss of information in the transformation process. Finally, we discuss an optimal algorithm for fusion of radar and infrared images.
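As a hedged illustration of what a statistically optimal pixel-level combination can look like once both sensors are rectified to a common grid (this is not the paper's algorithm), the snippet below fuses radar and infrared images by inverse-variance weighting, which minimizes the variance of the fused estimate under independent Gaussian sensor noise; the per-pixel variances are assumed known from calibration.

```python
import numpy as np

def fuse_inverse_variance(radar, ir, var_radar, var_ir):
    """Fuse two rectified images per pixel; var_* are per-pixel (or scalar)
    noise variances assumed known from sensor calibration."""
    w_r = 1.0 / var_radar
    w_i = 1.0 / var_ir
    return (w_r * radar + w_i * ir) / (w_r + w_i)
```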
We describe a novel technique for displaying video in which image frames are continually accumulated within the display device to form an evolving mosaic. This 'video mosaic' effectively extends the field of view and resolution of the source camera, and can aid in tasks such as surveillance and teleoperation.
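A minimal sketch of the accumulation step, assuming each incoming frame already has a homography registering it to mosaic coordinates (e.g., from feature-based registration) and that OpenCV is available; the simple replace-on-overlap compositing rule is an illustrative choice, not the paper's method.

```python
import numpy as np
import cv2

def accumulate_mosaic(frames, homographies, mosaic_size):
    """frames: grayscale images; homographies: 3x3 frame-to-mosaic transforms;
    mosaic_size: (width, height) of the output canvas."""
    mosaic = np.zeros((mosaic_size[1], mosaic_size[0]), np.float32)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame.astype(np.float32), H, mosaic_size)
        mask = cv2.warpPerspective(np.ones_like(frame, np.float32), H, mosaic_size)
        valid = mask > 0.5
        mosaic[valid] = warped[valid]   # newest pixels replace older ones
    return mosaic.astype(np.uint8)
```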
This paper describes an approach to finding linear objects, e.g. powerlines and runways, for synthetic vision applications with a multifunction 35 GHz radar. The approach is based on a combination of traditional radar signal processing, such as the CFAR algorithm, and image processing techniques such as the Hough transform. It is assumed that the objects are visible as a sequence of single reflectors on a line. The proposed method ensures that the probability of detection or the false alarm rate of a line-type object is independent of the position. In the first step, a CFAR algorithm detects the possible points along the line. All detected objects are described by a list of attributes, from which some relevant ones can be chosen. Subsequently they are transformed to the Hough space, where lines are described by a slope and a distance parameter. A threshold is calculated which ensures a constant false alarm rate or a constant probability of detection. In the next step a clustering algorithm with a special distance measure is used to find all possible lines in the Hough space. After transforming back to the original space, the plausibility is checked and a final selection is made. The performance of the approach is shown by applying the method described above to simulated and measured data. The paper describes the calculation of the false alarm rate, the probability of detection and the calculation of the threshold.
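A condensed sketch of the two detection stages described above: a one-dimensional cell-averaging CFAR applied to a power profile, followed by a Hough accumulation over the detected points. The guard/reference window sizes, the scale factor, and the accumulator resolution are illustrative assumptions, not the paper's values.

```python
import numpy as np

def ca_cfar(power, guard=2, ref=8, scale=4.0):
    """Return indices where power exceeds scale * the local noise estimate."""
    hits = []
    for i in range(ref + guard, len(power) - ref - guard):
        noise = np.r_[power[i-ref-guard:i-guard], power[i+guard+1:i+guard+ref+1]]
        if power[i] > scale * noise.mean():
            hits.append(i)
    return np.array(hits)

def hough_accumulate(points, rho_res=1.0, theta_bins=180):
    """points: Nx2 array of (x, y) detections; returns (accumulator, thetas)."""
    thetas = np.linspace(0, np.pi, theta_bins, endpoint=False)
    rho_max = np.hypot(points[:, 0].max(), points[:, 1].max())
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_rho, theta_bins), int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[idx, np.arange(theta_bins)] += 1
    return acc, thetas

# Peaks in acc above a CFAR-derived threshold correspond to candidate lines.
```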
The shift in defense strategy towards more rapid power projection and deployment drives a need for improved transport rotorcraft mission effectiveness. More efficient cargo handling during sling load operations will decrease time in hover, thereby reducing overall mission timelines. Furthermore, less fuel expenditure in hover will increase the effective mission range. In addition, expected improvements in pilot situational awareness will increase safety during these precise rotorcraft handling operations. This is particularly applicable for naval operations during adverse sea states. This paper outlines a concept for a vision system to improve rotorcraft external cargo handling operations. It presents the operational concept, sensor characteristics, image processing approach, pilot display format, and demonstration strategy.
Probabilistic relaxation has been used previously as the basis for the development of an algorithm to label features extracted from an image with corresponding features from a model. The algorithm can be executed in a deterministic manner, making it particularly appropriate for real-time methods. In this paper, we show how the method may be adapted to image sequences, taken from a moving camera, in order to provide navigation information. We show how knowledge of the camera motion can be incorporated into the labelling algorithm in order to provide better real-time performance and improved robustness.
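For readers unfamiliar with the technique, the sketch below shows one generic form of a probabilistic-relaxation update for feature-to-model labelling; the compatibility tensor and the simple multiplicative support rule are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def relaxation_step(P, R):
    """One deterministic relaxation update.
    P : (n_features, n_labels) current label probabilities.
    R : (n_features, n_labels, n_features, n_labels) compatibility coefficients."""
    # support[i, a] = mean over features j of sum_b R[i, a, j, b] * P[j, b]
    support = np.einsum('iajb,jb->ia', R, P) / P.shape[0]
    P_new = P * support
    return P_new / np.maximum(P_new.sum(axis=1, keepdims=True), 1e-12)

# Iterating a fixed number of such steps is fully deterministic, which is what
# makes the scheme attractive for real-time labelling from a moving camera.
```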
Enhanced Situation Awareness Systems and Human Factors (Air Vehicles)
In military aircraft operations under hazardous conditions, crews are often overloaded by tasks resulting from the complexity of the threat situation and the associated low-level flight trajectory planning. The Crew Assistant Military Aircraft, a knowledge-based system under development for improving the crew's situational awareness, offers a promising approach. This paper presents two main aspects of the ongoing program: the Tactical Situation Interpreter (TSI) and the Low Altitude Planner (LAP). These modules represent the military-specific components of the system. The TSI calculates a so-called threat map based upon a list of tactical elements, such as SAMs and fighters, and the respective threat models. In addition, it provides decision-aiding functions such as distance-to-threat calculations in order to transform sensor data into an intuitive format matched to the crew's information-processing capabilities. The LAP calculates a low-altitude trajectory through the operation area based upon digital terrain data and the TSI's threat map. The data are visualized in a 2D map display. Developments for 3D enhanced vision integration are in progress and will be displayed in the Mission Management Technology Demonstrator.
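A minimal sketch of what a threat-map calculation of this kind can look like: each tactical element contributes a lethality value that falls off with distance from its site, and the map keeps the maximum over all threats. The fall-off model and field names are assumptions for illustration, not the TSI's actual threat models.

```python
import numpy as np

def threat_map(grid_x, grid_y, threats):
    """grid_x, grid_y : 2-D coordinate arrays (e.g., from np.meshgrid), in km.
    threats : list of dicts like {'x': ..., 'y': ..., 'range': ..., 'lethality': ...}."""
    tmap = np.zeros_like(grid_x, dtype=float)
    for t in threats:
        dist = np.hypot(grid_x - t['x'], grid_y - t['y'])
        # full lethality inside the engagement range, linear fall-off out to 2x range
        level = t['lethality'] * np.clip(2.0 - dist / t['range'], 0.0, 1.0)
        tmap = np.maximum(tmap, level)
    return tmap

# A low-altitude planner can then penalize candidate trajectory cells by tmap.
```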
The IntraFormation Positioning System is a networked relative navigation system currently being developed for rendezvous, join-up, and formation flight of Air Force helicopters and fixed wing aircraft in instrument meteorological conditions. The system is designed to be integrated into existing aircraft and will display relative positions of all aircraft within a formation, as well as the relative positions of other formations participating in coordinated missions. The system uses a Global Positioning System receiver integrated with the aircraft Inertial Navigation System to generate accurate aircraft position and velocity data. These data are transmitted over a data link to all participating aircraft and displayed as graphic symbols at the relative range and bearing to own aircraft on a situational awareness display format similar to a radar plan position indicator. Flight guidance computation is based on the difference between a desired formation slot position and current aircraft position relative to the formation lead aircraft. This information is presented on the flight director display allowing the pilot to null out position errors. The system is being developed for the Air Force Special Operations Command; however, it is applicable to all aircraft desiring improved formation situational awareness and formation flight coordination.
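The display geometry described above reduces, at formation ranges, to converting a partner aircraft's reported position into range and relative bearing from own ship. The sketch below uses a local flat-earth approximation and is only an illustration of that computation, not the fielded system's implementation.

```python
import math

EARTH_RADIUS_M = 6371000.0

def range_and_bearing(own_lat, own_lon, own_heading_deg, tgt_lat, tgt_lon):
    """Return (range in metres, bearing in degrees clockwise from own nose)."""
    dlat = math.radians(tgt_lat - own_lat)
    dlon = math.radians(tgt_lon - own_lon)
    north = dlat * EARTH_RADIUS_M
    east = dlon * EARTH_RADIUS_M * math.cos(math.radians(own_lat))
    rng = math.hypot(north, east)
    true_bearing = math.degrees(math.atan2(east, north)) % 360.0
    relative_bearing = (true_bearing - own_heading_deg) % 360.0
    return rng, relative_bearing
```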
Among the numerous human factors issues related to Enhanced Vision Systems, the decision-making process appears quite critical for certification purposes. An experimental setup based on a simplified aircraft environment including a HUD was developed in the framework of the FANSTIC II program. A stimulation technique based on recordings of IR sensors obtained during weather-penetration test flights was used to study the visual cues involved in the decision process during approaches on IR imagery. A study of visual scanning strategies was also conducted in order to follow the process dynamically. Results show good consistency in the pattern of visual cues used by different pilots in making their decision. Decision delays were found to be in the region of 5 - 6 seconds, with little difference between FLIR and visible images. The introduction of symbology superimposed on the imagery noticeably modifies visual scanning patterns. In this case, scanning is deeply influenced by the pilot's previous experience.
A consortium of government agencies and major defense contractors has been assembled for the Autonomous Landing Guidance (ALG) Technology Reinvestment Project (TRP). The purpose of the ALG TRP is to develop a prototype system that will integrate a variety of existing technologies to augment landing, take-off and ground operations in low-visibility conditions, particularly on runways that are neither equipped nor approved for Category (CAT) II or III precision approaches. The Advanced Cockpits Branch of Wright Laboratory (WL/FIGP) supported the consortium by addressing pilot-vehicle interface issues associated with the integration of ALG systems into aircraft cockpits. WL/FIGP conducted a study investigating the integration of symbology and sensor imagery on the Head-Up Display (HUD) in an ALG-equipped aircraft. Four HUD symbology sets were evaluated under a number of sensor/visibility conditions using WL/FIGP's Transport Aircraft Cockpit simulator. Objective measures of pilot performance, as well as subjective workload and acceptability ratings, were collected and analyzed to determine the optimum integration of symbology and sensor imagery in the context of an ALG precision approach and landing.
A preliminary study was conducted to investigate the use of reference markers found in the head-fixed frame as an aid to reference frame awareness during aircraft flight while using a helmet-mounted display. Three reference-cueing displays were compared: (1) Sparse Reference display: all cockpit and airframe markers removed except for the instrument panel, (2) Cockpit Reference display: entire cockpit environment visible, and (3) Geo/Cockpit Reference display: cockpit environment visible with the addition of a surrounding wire-frame globe. The visual scenery was displayed to subjects using a helmet-mounted virtual reality device that had a 40 x 50 degree field-of-view liquid crystal display. The study involved six pilots. The task was to locate targets from aural alert information. The aural alerts were based in either the Aircraft reference frame (i.e. target clock position relative to the aircraft nose) or the World reference frame (i.e. target bearing). These tasks were conducted while the subject rode through abrupt maneuvering flight at low level in a fixed-base Cobra helicopter simulator. Performance measures of the pilot's ability to discriminate the intended target from secondary targets in the visual field were collected, as well as subjective ratings for each reference display. The Geo/Cockpit Reference display produced the highest target detection scores for both Aircraft- and World-referenced alerts. The highest overall detection scores were produced when World-referenced alerts were issued while using the Geo/Cockpit display. The Cockpit display scores were higher than the Sparse display's for both alert types. Subjective scores showed pilot preference for the Geo/Cockpit Reference display over the other two displays for both Aircraft- and World-referenced alerts. A secondary exploratory experiment using the same tasking as the initial experiment was also conducted to observe the effect of peripheral cues. Target detection scores for both alert types decreased when peripheral cues were removed from the Cockpit display. Detection scores also decreased with the removal of peripheral cues from the Geo/Cockpit display during the Aircraft-referenced alerts.
Considerable research has been done regarding the use of enhanced vision as a means to enable a vehicle operator to 'see' through bad weather or obscuration such as smoke and dust. This research has generally emphasized forward-looking infra-red (Flir) and millimeter-wave (radar) technologies. Flir is an acceptable approach if modest performance is all that is required. Millimeter-wave radar has distinct advantages over Flir in certain cases, but generally requires operator training to interpret various display-screen presentations. The Northrop Grumman Corporation has begun a major sensor-development program to develop a prototype (eye-safe) laser-illuminator/range-gated camera system. The near-term goal is to field a system that would deliver a minimum of 3000-foot penetration of worst-case fog/obscurant. This image would appear on a display as a high-resolution monochromatic image. This paper will explore the concept, the proposed automotive application, and the projected cost.
In this paper a robust method for visual motion estimation under ego-motion is developed. The possible application of this method is image sequence analysis of road traffic or airport runway/taxiway scenes, where the camera is located in a moving vehicle. The method combines an application independent estimation of visual motion with specific methods for instantaneous detection of the vanishing point in the image plane and of the over-road location of the camera. The stationary background is separated from the obstacles while detecting the ego-motion corrected visual motion of on-road objects.
A passive range estimation algorithm developed at NASA Ames Research Center for pilot- aiding during low-altitude helicopter operations has been tested off-line using imagery and motion data collected in flight, and has demonstrated excellent results using this real-world data. In developing a realtime computer architecture for this algorithm, several candidate parallel processing approaches have been previously investigated. A realtime parallel computer system based on the most-promising approach has now been implemented. This paper describes the hardware and software architecture of the realtime passive ranging system. The performance attained with this system indicates that realtime passive ranging can be achieved with a computer system compact enough for installation on board a helicopter.
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, the artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined as well as its position relative to runways or helicopter landing spots. The vision data are required to improve position estimates when GPS is available only in the S/A mode. The architectural design of the machine perception system allows the connection of other processing modules, for example a radar sensor, using the pre-defined interface structure. The system presented also incorporates a control module which uses estimated vehicle states for navigation and control in order to conduct automatic flight and landing. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
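A heavily simplified sketch of the kind of multi-sensor state estimation involved: a one-axis constant-velocity Kalman filter that accepts position fixes from either GPS or the vision module, each with its own measurement variance. The filter structure and the noise values are illustrative assumptions, not the system's actual estimator.

```python
import numpy as np

class PositionFilter:
    def __init__(self, dt=0.02):
        self.x = np.zeros(2)                      # state: [position, velocity]
        self.P = np.eye(2) * 100.0                # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.Q = np.diag([0.01, 0.1])             # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])           # both sensors observe position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        """z : position measurement (GPS fix or vision-derived), r : its variance."""
        S = self.H @ self.P @ self.H.T + r
        K = self.P @ self.H.T / S
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

# f = PositionFilter(); f.predict(); f.update(z_gps, r=25.0); f.update(z_vision, r=4.0)
```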
Visual perception is one of the most important elements of driving in that it enables the driver to understand and react appropriately to the situation along the path of the vehicle. The driver's visual perception is best supported while driving during the day. Noticeable decrements in visual acuity, range of vision, depth of field and color perception occur at night and under certain weather conditions. Indirect viewing sensors, utilizing various technologies and spectral bands, may assist the driver's normal mode of driving. Critical applications in the military as well as other official activities may require driving at night without headlights. In these latter cases, it is critical that the device, being the only source of scene information, provide the scene cues needed for driving on, and oftentimes off, the road. One can speculate about the scene information that a driver needs, such as road edges, terrain orientation, detection of people and objects in or near the path of the vehicle, and so on. But the perceptual qualities of the scene that give rise to these perceptions are little known and thus not quantified for the evaluation of indirect viewing devices. This paper discusses driving with headlights and compares the scene content with that provided by a thermal system in the 8 - 12 micrometer spectral band, which may be used for driving at some time. The benefits and advantages of each are discussed, as well as their limitations in providing information useful to the driver, who must make rapid and critical decisions based upon the scene content available. General recommendations are made for potential avenues of development to overcome some of these limitations.
Passive millimeter wave (MMW) imaging provides a breakthrough capability for driver vision enhancement to counter the blinding effects of inclement weather. This type of sensor images in a manner analogous to an infrared or visible camera, but receives its energy from the MMW portion of the electromagnetic spectrum. Technology has progressed to the point where MMW radiometric systems offer advantages to a number of vision applications. We report on our developmental 94 GHz radiometric testbed, and the eventual technological evolutions that will help MMW radiometers and radars meet military and commercial market needs.
Designing infrared driver viewers for use in military vehicles poses a unique set of technical challenges to ensure operational usability. This paper describes driver, operational, and technical considerations that must be addressed for successful integration of infrared driver viewers in military vehicles. Discussed are requirements derived from user testing that influenced driver viewer design. The Driver's Vision Enhancer sensor and display are described. Also discussed are considerations that derive from the state of flat-panel display technology, including image intensity, field of view, and magnification. Relevant vehicle issues such as sensor position and parallax are described. Driver issues such as display resolution, brightness, and symbology are addressed. Finally, design alternatives, solutions, and ongoing challenges are presented and discussed.
The purpose of this paper is to show some of the techniques used in IR driver training. Topics include the differences between interpreting IR displays and visual image analysis. The physical laws of radiant energy are not always understood by the student. An example of TI's experience with Driver's Vision Enhancer training is included. A driver's view captured on videotape, along with an example of computer-based training, will be discussed.
The development and widespread use of IR systems which utilize the recently developed thermally stabilized or uncooled silicon microbolometer focal planes can be enhanced if attention is paid early on to developing performance specifications that allow for commonality between systems. An assertion is made that, for the vast majority of systems, a common set of building-block components can be created which, through economies of scale, will enhance the early development and fielding of new systems without having to wait for an individual system's volume to become great enough to bring system costs down. This building-block approach will be discussed along with some potential interface characteristics.
Enhanced Situation Awareness Systems and Human Factors (Air Vehicles)
Sensor technology development has progressed to the point where practical Enhanced and Synthetic Vision System solutions are now realizable. An integral and vital element of such systems is the head-up display, since it has the unique ability to accurately and spatially display images that correlate to the real world, which is an essential requirement. Images thus presented are termed 'conformal'. This paper describes a new-generation HUD designed for these applications and the progress made toward its certification.
The Federal Aviation Administration (FAA) is evaluating Forward Looking Infrared (FLIR) and IR focal plane array technology as part of the Airport Surface Traffic Automation program. Under this program a new application for these technologies in aviation will be developed. The goal of the program is to evaluate FLIR and IR cameras for enhanced Air Traffic Control surface surveillance in all weather conditions for some major airports. Initially, FLIR cameras will be installed at airports with varying traffic densities to analyze and compare their capabilities along with other IR camera systems, displays and security/surveillance software. These cameras will be evaluated for both technical and operational performance. This paper discusses the initial studies that demonstrated the usefulness of FLIR technology for search and rescue situations and multiple coverage for integration with automatic surveillance systems. The general operation of the three IR camera systems to be evaluated in this study is presented. Finally, a concept for the possible integration of FLIR and IR technology with current automatic surface surveillance systems under development such as the Airport Movement Area Safety System and the Airport Traffic Identification System programs is proposed. This paper will conclude with a review of the FAA's future plans for evaluating a microbolometer based, uncooled IR camera system.