This PDF file contains the front matter associated with SPIE Proceedings Volume 6957, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
A major problem in obtaining FAA approval for infrared EVS-based operations under poor-visibility conditions is the
lack of correlation between runway visual range (RVR) and IR attenuation or range. The "IR advantage" in fog, although often
substantial, varies greatly as a function of detailed droplet-size distribution. Real-time knowledge of the IR extinction at
a given destination is key to reliable operations with lower decision heights. We propose the concept of a Runway
Infrared Range (RIRR), to be measured by a ground-based IR transmissometer. Although RVR determination now
utilizes single-point scatterometry, the very (Mie) scattering mechanism that often affords a significant IR range
advantage necessitates a return to two-point transmissometry. As an adaptation of RVR technology, RIRR will include
separate determinations of background-scene and runway/approach-light ranges, respectively. The latter algorithm,
known as Allard's Law, will encompass background level, light settings, visible extinction, and camera performance
(usually at short-wave IR). The assumptions and validity of this RIRR parallel those for the traditional RVR. Also,
through extended monitoring at a hub, the RIRR may be inexpensively surveyed throughout a fog season, thus allowing
the economic benefits of IR-based EVS for that site to be predicted.
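For illustration, Allard's Law relates the illuminance E received from a light of intensity I at range R through an atmosphere with extinction coefficient sigma as E = I·exp(−σR)/R². The sketch below shows how a light-based range could be obtained by inverting this relation numerically; the intensity, extinction, and threshold values are hypothetical and are not taken from the paper.

```python
import math

def allard_illuminance(intensity_cd, sigma_per_m, range_m):
    """Allard's Law: illuminance (lux) of a point light of intensity I (candela)
    seen at range R (m) through a uniform atmosphere with extinction sigma (1/m)."""
    return intensity_cd * math.exp(-sigma_per_m * range_m) / range_m ** 2

def light_visual_range(intensity_cd, sigma_per_m, threshold_lux,
                       r_near=1.0, r_far=50_000.0):
    """Largest range at which the light still exceeds the detection threshold.

    Illuminance falls monotonically with range, so a simple bisection suffices.
    """
    if allard_illuminance(intensity_cd, sigma_per_m, r_near) < threshold_lux:
        return 0.0  # light not detectable even at the near limit
    for _ in range(60):
        mid = 0.5 * (r_near + r_far)
        if allard_illuminance(intensity_cd, sigma_per_m, mid) >= threshold_lux:
            r_near = mid
        else:
            r_far = mid
    return r_near

# Hypothetical values: 10 kcd approach light, extinction 3e-3 /m, 1e-6 lux threshold
print(f"{light_visual_range(10_000, 3e-3, 1e-6):.0f} m")
```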
The acquisition of approach and runway lights by an imager is critical to landing-credit operations with EVS.
Using a GPS clock, LED sources are pulsed at one-half the EVS video rate of 60 Hz or more. The camera then
uses synchronous (lock-in) detection to store the imaged lights in alternate frames, with digital subtraction of the
background for each respective frame-pair. Range and weather penetration, limited only by detector background
shot-noise (or camera system noise at night), substantially exceed those of the human eye. An alternative is the use
of short-wave infrared cameras with eyesafe laser diode emitters. Also, runway identification may be encoded on
the pulses. With standardized cameras and emitters, an "instrument qualified visual range" may be established.
The concept extends to portable beacons at austere airfields, and to see-and-avoid sensing of other aircraft
including UAVs.
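As a rough illustration of the synchronous frame-pair subtraction described above (not the flight hardware implementation; the array shape and pulse phase are assumptions), a minimal sketch:

```python
import numpy as np

def lockin_light_image(frames):
    """Synchronous (lock-in) detection of lights pulsed at half the video rate.

    frames: array of shape (N, H, W); the pulsed lights are assumed ON in even
    frames and OFF in odd frames, with the phase known from the shared GPS clock.
    Returns the background-subtracted light image averaged over frame pairs.
    """
    on = frames[0::2].astype(np.float64)
    off = frames[1::2].astype(np.float64)
    n = min(len(on), len(off))
    # Subtracting each OFF frame from its paired ON frame removes the slowly
    # varying background scene; averaging many pairs beats down the noise.
    return (on[:n] - off[:n]).mean(axis=0)
```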
To improve the situation awareness of an aircrew during poor visibility, different approaches have emerged over the past
few years. Enhanced vision systems (EVS), which are based upon sensor images, are one such approach. They improve the situation
awareness of the crew, but at the same time introduce certain operational deficits. EVS present sensor data that might
be difficult to interpret, especially if the sensor used is a radar sensor. In particular, an unresolved problem of fast-scanning
forward-looking radar systems in the millimeter waveband is the inability to measure the elevation of a target.
In order to circumvent this problem, an effort was made to reconstruct the missing elevation from a series of images. This
could be described as a "stereo radar" approach and is similar to reconstruction using photographs (angle-angle
images) from different viewpoints to rebuild the depth information. Two radar images (range-angle images) with
different bank angles can be used to reconstruct the elevation of targets.
This paper presents the fundamental idea and the methods of the reconstruction. Furthermore, experiences with real data
from EADS's "HiVision" MMCW radar are discussed. Two different approaches are investigated. First, a fusion of
images with variable bank angles is calculated for different elevation layers, and image processing reveals identical
objects in these layers; those objects are compared with regard to contrast and dimension to extract their elevation. The
second approach compares short fusion pairs from two different flights with different, nearly constant bank angles;
accumulating those pairs with different offsets delivers the exact elevation.
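The following toy sketch illustrates only the underlying geometry and is not the authors' method: it assumes a single, known sensor position for both measurements and brute-forces the unresolved antenna elevation angle so that the two range/azimuth loci, taken at different bank angles, intersect at one world point.

```python
import numpy as np

def roll_matrix(phi):
    """Rotation about the longitudinal (x) axis by the bank angle phi (rad)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def locus_point(rng, psi, phi, eps):
    """World point for measured range and antenna azimuth psi at bank phi,
    assuming the (unmeasured) antenna elevation is eps."""
    u_ant = np.array([np.cos(eps) * np.cos(psi),
                      np.cos(eps) * np.sin(psi),
                      np.sin(eps)])
    return rng * (roll_matrix(phi) @ u_ant)   # un-roll into the level frame

def reconstruct_height(meas_a, meas_b, eps_grid=np.linspace(-0.3, 0.3, 601)):
    """meas = (range_m, antenna_azimuth_rad, bank_rad) for the same target.

    Brute-force the unresolved elevation angle in each measurement so that the
    two loci coincide; return the z-coordinate (height) of the matched point.
    """
    pa = np.array([locus_point(*meas_a, e) for e in eps_grid])
    pb = np.array([locus_point(*meas_b, e) for e in eps_grid])
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return 0.5 * (pa[i, 2] + pb[j, 2])
```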
Multiple high-sensitivity sensors can be usefully exploited within military airborne enhanced vision systems
(EVS) to provide enhanced situational awareness. To realise such benefits, the imagery from the discrete sensors must be
accurately combined and enhanced prior to image presentation to the aircrew. Furthermore, great care must be taken not to
introduce artefacts or false information through the image processing routines. This paper outlines developments
made to a specific system that uses three collocated low-light-level cameras. In addition to seamlessly merging the
individual images, sophisticated processing techniques are used to enhance image quality and to remove optical
and sensor artefacts such as vignetting and CCD charge smear. The techniques have been designed and tested to be
robust across a wide range of scenarios and lighting conditions, and the results presented here highlight the increased
performance of the new algorithms over standard EVS image processing techniques.
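The paper's artefact-removal algorithms are not reproduced here; as a generic illustration of the kind of correction involved, the sketch below applies a simple flat-field correction for vignetting. The calibration frames and normalisation are assumptions for this example.

```python
import numpy as np

def flat_field_correct(frame, flat_field, dark_frame, eps=1e-6):
    """Generic flat-field correction to remove lens vignetting.

    flat_field: calibration image of a uniformly illuminated scene;
    dark_frame: calibration image with the lens capped (fixed-pattern offset).
    """
    gain = flat_field.astype(np.float64) - dark_frame
    gain /= max(gain.mean(), eps)               # normalise so the average gain is 1
    return (frame - dark_frame) / np.maximum(gain, eps)
```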
The Crew Vehicle Interface (CVI) group of the Integrated Intelligent Flight Deck Technologies (IIFDT) has done
extensive research in the area of Synthetic Vision (SV), and has shown that SV technology can substantially
enhance flight crew situation awareness, reduce pilot workload, promote flight path control precision and improve
aviation safety. SV technology is being extended to evaluate its utility for lunar and planetary exploration
vehicles. SV may hold significant potential for many lunar and planetary missions since the SV presentation
provides a computer-generated view of the terrain and other significant environment characteristics independent
of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of
the computer-generated scene lighting, terrain coloring, and virtual camera angles, which may provide invaluable
visual cues to pilots/astronauts. In addition, important vehicle state information, such as forward and down velocities,
altitude, and fuel remaining, may be conformally displayed on the view to enhance trajectory control and awareness of
vehicle system status. This paper discusses preliminary SV concepts for tactical and strategic displays for a lunar
landing vehicle. The technical challenges and potential solutions to SV applications for the lunar landing mission
are explored, including the requirements for high-resolution lunar terrain maps and for accurate vehicle position and
orientation, which are essential in providing lunar Synthetic Vision System (SVS) cockpit displays. The
paper also discusses the technical challenge of creating an accurate synthetic terrain portrayal using an ellipsoidal
lunar digital elevation model, which eliminates projection errors and can be efficiently rendered in real time.
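As a simple illustration of rendering terrain on a curved lunar datum rather than a map projection, the sketch below converts selenographic coordinates and DEM heights to body-fixed Cartesian points. It assumes a spherical reference radius of 1737.4 km (a common lunar datum) rather than the ellipsoidal model discussed in the paper.

```python
import numpy as np

LUNAR_RADIUS_M = 1_737_400.0   # commonly used spherical reference radius for lunar DEMs

def selenographic_to_cartesian(lat_deg, lon_deg, height_m):
    """Convert selenographic latitude/longitude plus DEM height to body-fixed
    Cartesian coordinates, so terrain is rendered on the curved reference
    surface instead of a map projection (avoiding projection errors)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    r = LUNAR_RADIUS_M + np.asarray(height_m, dtype=np.float64)
    x = r * np.cos(lat) * np.cos(lon)
    y = r * np.cos(lat) * np.sin(lon)
    z = r * np.sin(lat)
    return np.stack([x, y, z], axis=-1)
```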
The feasibility of an EVS head-down procedure is examined that may provide the same operational benefits under low
visibility as the FAA rule on Enhanced Flight Visibility, which requires the use of a head-up display (HUD). The main
element of the described EVS head-down procedure is the crew procedure within the cockpit for flying the approach. The
task sharing between Pilot-Flying and Pilot-Not-Flying is arranged such that multiple head-up/head-down transitions can
be avoided. The Pilot-Flying uses the head-down display for acquisition of the necessary visual cues in the EVS
image. The Pilot-Not-Flying monitors the instruments and looks for the outside visual cues.
This paper reports on simulation activities that complete a series of simulation and validation activities carried out within
the framework of the European project OPTIMAL. The results support the trend already observed in some preliminary
investigations. They suggest that pilots can fly an EVS approach using the proposed EVS head-down display with the
same kind of performance (accuracy) as they do with the HUD. There appears to be no loss of situation awareness, and there is no significant trend indicating that the use of the EVS head-down display leads to higher workload compared to the EVS HUD approach. In conclusion,
an EVS head-down display may also be a feasible option for obtaining additional operational credit under low visibility conditions.
This paper describes flight tests of Honeywell's synthetic vision primary flight display (SV PFD)
system prototype for helicopter applications. The primary differences between fixed-wing and
helicopter SV PFDs, and the challenges of developing such an integrated system suitable for helicopter
applications, are discussed. The visual environment and flight symbology specifically developed for
helicopter SV PFD are described. The flight test results and pilot evaluations of a prototype display
system on-board Honeywell's ASTAR helicopter are discussed.
Superresolution of images is an important step in many applications like target recognition where the input images are
often grainy and of low quality due to bandwidth constraints. In this paper, we present a real-time superresolution
application implemented in ASIC/FPGA hardware, and capable of 30 fps of superresolution by 16X in total pixels.
Consecutive frames from the video sequence are grouped and the registered values between them are used to fill the
pixels in the higher resolution image. The registration between consecutive frames is evaluated using the algorithm
proposed by Schaum et al. The pixels are filled by averaging a fixed number of frames associated with the smallest error
distances. The number of frames (the number of nearest neighbors) is a user-defined parameter, whereas the weights in
the averaging process are obtained by inverting the corresponding error distances. A Wiener filter is used to post-process
the image. Different input parameters, such as the input image size, the enlargement factor and the number of nearest
neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in
Matlab Simulink as well as in the hardware, which gives us a fine balance between the number of bits and performance.
The algorithm runs at real-time speed and produces very good superresolution results.
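The ASIC/FPGA implementation is not reproduced here; the sketch below illustrates the general shift-and-add idea with inverse-error weighting, assuming the per-frame registration shifts and error distances are already available (e.g. from a registration step such as the one cited). A scale factor of 4 corresponds to the 16X increase in total pixels.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, errors, scale=4, k=8):
    """Toy shift-and-add superresolution with inverse-error weighting.

    frames: list of low-resolution images; shifts: per-frame (dy, dx) sub-pixel
    registration against the reference frame; errors: per-frame registration
    residuals. The k frames with the smallest residuals are accumulated onto a
    high-resolution grid, each weighted by the inverse of its residual.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    wgt = np.zeros_like(acc)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for i in np.argsort(errors)[:k]:
        weight = 1.0 / (errors[i] + 1e-9)
        dy, dx = shifts[i]
        # place each low-res pixel at its shifted position on the high-res grid
        yi = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (yi, xi), weight * frames[i])
        np.add.at(wgt, (yi, xi), weight)
    return acc / np.maximum(wgt, 1e-9)   # unfilled pixels stay 0; a filtering step would follow
```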
Enhanced tracking is accomplished by increasing the resolution, frame rate and processing capabilities in tracking
dynamic regions of interest for vision applications. In many proven algorithms, the ability to distinguish an object and
track it is dependent on the system performance in more than one attribute. We have conducted studies on proven
techniques such as Active Appearance Models, Principal Component Analysis and eigentracking. All perform better as
camera resolution and frame rate increase. Additional opportunities have been observed by
combining these techniques and taking advantage of multicore CPUs and GPU graphics card processing. Results from an
8-megapixel commercial sensor combined with a Field Programmable Gate Array (FPGA) are presented, and algorithm
performance is compared with downscaled images of the same scenes and with simulated typical 30 Hz frame rates versus
the 120 Hz to 300 Hz typical of this smart camera.
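As a small illustration of the eigentracking idea referred to above (a generic PCA appearance model, not the authors' implementation), the following sketch scores a candidate patch by its reconstruction error in a learned eigenbasis.

```python
import numpy as np

def fit_eigenbasis(train_patches, n_components=10):
    """Learn a PCA appearance model (mean + eigen-images) from vectorised patches."""
    X = train_patches.reshape(len(train_patches), -1).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def appearance_score(candidate_patch, mean, basis):
    """Eigentracking-style score: residual of the candidate patch outside the
    learned appearance subspace (smaller means a better match)."""
    x = candidate_patch.reshape(-1).astype(np.float64) - mean
    coeffs = basis @ x                 # project onto the eigenbasis
    return np.linalg.norm(x - basis.T @ coeffs)
```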
For Unmanned Aerial Vehicles (UAVs), autonomous forms of autoland are being pursued that do not depend on special,
deployability-restraining ground-based equipment for the generation of the reference path to the runway. Typically,
these forms of autoland use runway location data from an onboard database to generate the reference path to the desired
location. Synthetic Vision (SV) technology provides the opportunity to use conformally integrated guidance reference
data to 'anchor' the goals of such an autoland system into the imagery of the nose-mounted camera. A potential use of
this is to support the operator in determining whether the vehicle is flying towards the right location in the real world,
e.g., the desired touchdown position on the runway. Standard conformally integrated symbology, representing, e.g., the
future pathway and runway boundaries, supports conformance monitoring and detection of latent positioning errors.
Additional integration of landing performance criteria into the symbology supports assessment of the severity of these
errors, further aiding the operator in deciding whether the automated landing should be allowed to continue.
This paper presents the design and implementation of an SV overlay for UAV autoland procedures that is intended for
conformance and integrity monitoring during final approach. It provides a preview of mode changes and decision points,
and it supports the operator in assessing the integrity of the guidance solution in use.
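As an illustration of how database-derived runway geometry could be drawn conformally over the camera image, the sketch below uses a generic pinhole-camera projection; the intrinsics and frame conventions are assumptions and this is not the authors' implementation.

```python
import numpy as np

def project_to_image(points_world, cam_pos, R_cam_from_world, fx, fy, cx, cy):
    """Pinhole projection of world points (e.g. database runway corners) into
    the nose-camera image, for conformal overlay on the live video.

    R_cam_from_world rotates world-frame vectors into the camera frame
    (x right, y down, z along the optical axis); fx, fy, cx, cy are intrinsics.
    """
    p_cam = (R_cam_from_world @ (points_world - cam_pos).T).T
    in_front = p_cam[:, 2] > 1.0                    # keep points ahead of the camera
    u = fx * p_cam[:, 0] / p_cam[:, 2] + cx
    v = fy * p_cam[:, 1] / p_cam[:, 2] + cy
    return np.stack([u, v], axis=-1), in_front
```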
With the tremendous increase in the number of air passengers in recent years, aviation safety has become of
utmost importance. At any given time, several flights may be lining up for landing. Landing in
good visibility conditions is not a problem; the problem arises under poor visibility conditions,
especially fog. The pilot finds it difficult to land the aircraft in poor visibility because
it is difficult to spot the runway clearly. This paper presents a novel method for detecting the runway and
hazards on it in poor visibility conditions using image processing techniques.
The first step is to obtain images of a runway on a clear day, compute the smoothness coefficient,
perform edge detection using the SUSAN edge detection algorithm, and finally develop a database of
the smoothness coefficients and edge-detected images. For the foggy images, we then compute the smoothness
coefficient. Typically, foggy images have low contrast. Hence, before we perform edge detection, we enhance
the image using Multi-Scale Retinex (MSR). MSR provides the low-contrast enhancement and color constancy
required to enhance foggy images by performing non-linear spatial/spectral transforms. After enhancement,
the next step is to run the same edge detection algorithm with appropriate thresholds. Finally, we determine a
hazard by comparing the edge-detected images obtained under clear and foggy conditions. The paper also
compares the results of the SUSAN edge detection algorithm with state-of-the-art edge detection techniques.
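A minimal sketch of the Multi-Scale Retinex enhancement step, assuming a single-channel image and illustrative surround scales (the paper's parameter choices are not reproduced):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250)):
    """Multi-Scale Retinex: the log of the image minus the log of its Gaussian
    surround, averaged over several scales, to restore contrast (and, applied
    per channel to colour images, approximate colour constancy)."""
    img = image.astype(np.float64) + 1.0
    out = np.zeros_like(img)
    for sigma in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
    out /= len(sigmas)
    # stretch to 8-bit range for display and subsequent edge detection
    out = (out - out.min()) / (out.max() - out.min() + 1e-9)
    return (255 * out).astype(np.uint8)
```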
Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field
(KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation
(NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation
Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing
during low-visibility conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance
cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial
flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway
edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated
from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted
from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low
visibility conditions during the flight test period was limited, results from previous fog range testing concluded that UV
EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent
analysis has shown that existing runway lights emit sufficient UV radiation without the need for augmentation other than
lens replacement with UV-transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA
requirements for EFVS approaches. Combined with a Synthetic Vision System (SVS), UV EFVS would function as both
a precision landing aid and an integrity monitor for the GPS and SVS database.
This paper introduces a sensor-modeling framework to support the test and evaluation of External Hazard Monitor
configurations and algorithms within the Intelligent Integrated Flight Deck (IIFD). The paper, furthermore, examines
various runway hazards that may be encountered during aircraft approach procedures and classifies sensors that are
suited to detect these hazards. The work performed for this research is used to evaluate sensing technologies to be
implemented in the IIFD, a key component of NASA's Next Generation Air Transportation System. To detect objects on
or near airport runways, an aircraft will be equipped with a monitoring system that interfaces to one or more airborne
remote sensors and is capable of detection, classification, and tracking of objects in varying weather conditions. Physical
properties of an object such as size, shape, thermal signature, reflectivity, and motion are considered when evaluating the
sensor most suitable for detecting a particular object. The results will be used to assess the threat level associated with
the objects in terms of severity and risk using statistical methods based on the sensor's measurement and detection
capabilities. The sensors being evaluated include airborne laser range scanners, forward-looking infrared (FLIR),
three-dimensional imagers, and visible light cameras.
The current state of technology limits the spectral sensitivity and field of view of any single sensor. To compensate
for this limitation, individual sensors are configured in groupings of varying sensor types. In systems utilizing multiple
sensors, operators can become overwhelmed with data. The effectiveness of these new multi-sensor systems requires the
construction of an environment that optimizes the delivery of maximum intelligence to the operator so as to provide as
much information about the environment as possible. GE Fanuc / Octec has developed such an environment: a panoramic,
near-immersive visualization tool that is intuitive to use and seamlessly integrates multiple sensors into a natural view,
with important intelligence information retained and enhanced. We will discuss methods to generate and inject metadata,
such as automatic detection and tracking of possible threats, and the fusing of multi-spectral streams into this environment,
significantly raising the level of situational awareness for the operator.
Boeing has developed and flight demonstrated a distributed aperture enhanced and synthetic vision system for integrated
situational awareness. The system includes 10 sensors, 2 simultaneous users with head mounted displays (one via a
wireless remote link), and intelligent agents for hostile fire detection, ground moving target detection and tracking, and
stationary personnel and vehicle detection. Flight demonstrations were performed in 2006 and 2007 on an MD-530 "Little
Bird" helicopter.
Flight tests were conducted at Cambridge-Dorchester Airport (KCGE) and Easton Municipal Airport / Newnam Field
(KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Kollsman Enhanced Vision System (EVS-I)
infrared camera. These tests were sponsored by the MITRE Corporation's Center for Advanced Aviation System
Development (CAASD) and the Federal Aviation Administration. Imagery of the EVS-I infrared camera, HUD
guidance cues, and out-the-window video were each separately recorded at an engineering workstation for each
approach, roll-out, and taxi operation. The EVS-I imagery was displayed on the HUD with guidance cues generated by
the mission computer. Inertial flight path data were also recorded separately. Enhanced Flight Vision System
(EFVS) approaches were conducted from the final approach fix to runway flare, touchdown, roll-out and taxi using the
HUD and EVS-I sensor as the only visual reference. Flight conditions included a two-pilot crew, day, night, non-precision
course-offset approaches, ILS approaches, crosswind approaches, and missed approaches. Results confirmed the
feasibility of safely conducting down-to-the-runway precision approaches in low visibility to runways with and without
precision approach systems, when consideration is given to proper aircraft instrumentation, pilot training, and acceptable
procedures. Operational benefits include improved runway occupancy rates and reduced delays and diversions.
Enhanced Vision (EV) and synthetic vision (SV) systems may serve as enabling technologies to meet the challenges of
the Next Generation Air Transportation System (NextGen) Equivalent Visual Operations (EVO) concept - that is, the
ability to achieve or even improve on the safety of Visual Flight Rules (VFR) operations, maintain the operational
tempos of VFR, and even, perhaps, retain VFR procedures independent of actual weather and visibility conditions. One
significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO
concept objective. A piloted simulation experiment was conducted to evaluate the effects of the presence or absence of
Synthetic Vision, the location of this information during an instrument approach (i.e., on a Head-Up or Head-Down
Primary Flight Display), and the type of airport lighting information on landing minima. The quantitative data from this
experiment were analyzed to begin the definition of performance-based criteria for all-weather approach and landing
operations. Objective results from the present study showed that better approach performance was attainable with the
head-up display (HUD) compared to the head-down display (HDD). A slight improvement in HDD
performance was shown when SV was added, as the pilots descended below 200 ft to a 100 ft decision altitude, but this
improvement was not tested for statistical significance (nor was it expected to be statistically significant). The touchdown
data showed that, regardless of the display concept flown (SV HUD, Baseline HUD, SV HDD, Baseline HDD), a majority
of the runs were within the defined performance-based approach and landing criteria in all the visibility levels, approach
lighting systems, and decision altitudes tested. For this visual flight maneuver, RVR appeared to be the most significant
influence on touchdown performance. The approach lighting system clearly impacted the pilot's ability to descend to 100
ft height above touchdown based on existing Federal Aviation Regulation (FAR) 91.175 using a 200 ft decision height,
but did not appear to influence touchdown performance or approach path maintenance.
Extending previous work by Doehler and Bollmeyer, we describe a new implementation of an imaging radar
simulator. Our approach is based on modern computer graphics hardware, making heavy use of recent
technologies such as vertex and fragment shaders. Furthermore, to allow for a nearly realistic image, we generate
radar shadows by implementing shadow-map techniques on the programmable graphics hardware. The particular
implementation is tailored to imitate millimeter wave (MMW) radar but could easily be extended to other types of
radar systems.
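The paper implements shadowing on the GPU with shadow maps; the sketch below shows the equivalent occlusion test on the CPU for a single radar ray, assuming terrain heights sampled at each range gate.

```python
import numpy as np

def radar_shadow_mask(terrain_heights, gate_ranges, sensor_height):
    """Occlusion test along a single radar ray (the CPU analogue of a shadow map).

    terrain_heights: terrain height at each range gate along the ray;
    a gate is shadowed if a closer gate reaches a larger elevation angle
    as seen from the sensor.
    """
    angles = np.arctan2(terrain_heights - sensor_height, gate_ranges)
    horizon = np.maximum.accumulate(angles)   # highest elevation angle seen so far
    return angles < horizon                   # True where the terrain lies in radar shadow
```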
Surface movement is one of the most challenging phases of flight. To support the flight crew in this critical flight phase
and to prevent serious incidents and accidents, of which runway incursions are by far the most safety-critical, the electronic
airport moving map display has, over the past decade, evolved into the key technology for increasing the flight crew's
situational awareness on the airport surface.
However, the airport moving map is limited to quasi-static airport information due to the envisaged 28-day update cycle
of the underlying Aerodrome Mapping Database (AMDB), and thus does not include information on safety-relevant
short-term and temporary changes such as runway closures or restrictions. Currently, these are conveyed on paper
through the Pre-Flight Information Bulletin (PIB), a plain-language compilation of current Notices to Airmen
(NOTAM) and other information of an urgent character. In this context, the advent of airport moving map technology leads
to a disparity in the conspicuousness of information, resulting in the danger that, e.g., a runway that is not displayed as
closed on the airport moving map might be perceived as open even if contrary NOTAM information exists on paper
elsewhere in the cockpit. This calls for an integrated representation of PIB/NOTAM and airport moving map information.
Piloted evaluations conducted by the Institute of Flight Systems and Automatic Control have already confirmed the
high operational relevance of presenting runway closures on an airport moving map.
Based on the results of these trials, this paper expands our previous work by addressing the various pre-requisites of an
integral NOTAM visualization, ranging from the development of appropriate symbology to an operational concept enabling
the transition from conventional to electronic, machine-readable NOTAM information without shifting responsibility
and workload from the dispatcher to the flight deck. Employing Synthetic Vision techniques, a complete symbology
set for various cases of runway closures and other applicable runway and airport restrictions is derived, and the requirements
on the underlying machine-readable NOTAM data are discussed. Finally, the concept of an electronic
Pre-Flight Information Bulletin (ePIB) is used to facilitate the gradual integration of this technology in an airline operational
workflow.
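As an illustration only, a hypothetical machine-readable runway-closure record is sketched below; the field names are invented for this example and are not the AMDB/NOTAM schema discussed in the paper.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RunwayClosureNotam:
    """Hypothetical machine-readable runway-closure record (illustrative fields only)."""
    airport_icao: str        # e.g. "EDDF"
    runway_designator: str   # must match the AMDB runway identifier, e.g. "07L/25R"
    valid_from: datetime
    valid_until: datetime
    closure_extent: str      # "full", or a portion referenced to the runway geometry

def is_active(notam: RunwayClosureNotam, now: datetime) -> bool:
    """The closure symbology is drawn on the moving map only while the NOTAM is in force."""
    return notam.valid_from <= now <= notam.valid_until
```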
A new, open specification for embedded interchange formats for Airport Mapping Databases has been established in the
ARINC 816 document. The new specification has been evaluated in a prototypical implementation of ground and
airborne components. A number of advantages and disadvantages compared to existing solutions have been identified
and are outlined in this paper. A focus will be on new data elements used for automatic label placement on airport maps.
Possible future extensions are described as well.
Considerable interest continues, both in the aerospace industry and in the military, in the concept of autonomous
landing guidance. As previously reported, BAE Systems has been engaged for some time on an
internally funded program to replace the high-voltage power supply, tube and deflection amplifiers of its head-up
displays with an all-digital, solid-state illuminated image system, based on research into the requirements for such
a display as part of an integrated Enhanced Vision System.
This paper describes the progress made to date in realising and testing a weather-penetrating system
incorporating an all-digital head-up display as its pilot-machine interface.
We used a desktop computer game environment to study the effect of Field-of-View (FOV) on cybersickness. In particular,
we examined the effect of differences between the internal FOV (iFOV, the FOV which the graphics generator uses
to render its images) and the external FOV (eFOV, the FOV of the presented images as seen from the physical viewpoint
of the observer). Somewhat counter-intuitively, we find that congruent iFOVs and eFOVs lead to a higher incidence of
cybersickness. A possible explanation is that the incongruent conditions were too extreme, thereby reducing the
experience of vection. We also studied the user experience (appraisal) of this virtual environment as a function of the
degree of cybersickness. We find that cybersick participants experience the simulated environment as less pleasant and
more arousing, and possibly also as more distressing. Our present findings have serious implications for desktop
simulations used both in military and in civilian training, instruction and planning applications.
This paper describes a system that calculates aircraft visual range with instrumentation alone. A unique message is
encoded using modified binary phase shift keying and continuously flashed at high speed by ALSF-II runway approach
lights. The message is sampled at 400 frames per second by an aircraft-borne high-speed camera. The encoding is
designed to avoid visible flicker and to minimize the required frame rate. Instrument-qualified visual range is identified as the largest
distance at which the aircraft system can acquire and verify the correct, runway-specific signal. Scaled testing indicates
that, if the system were implemented on one full ALSF-II fixture, instrument-qualified visual range could be established at 5
miles in clear weather conditions.
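A rough sketch of how the sampled light intensity could be correlated against the known runway-specific code follows; the rates, threshold, and normalisation are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

def detect_runway_code(samples, code, frame_rate=400.0, chip_rate=50.0, threshold=0.5):
    """Correlate the tracked light's intensity trace against a known BPSK code.

    samples: mean intensity of the light in each camera frame (at least one code
    length long); code: the runway-specific +/-1 chip sequence. The signal is
    verified when the normalised correlation peak exceeds the threshold.
    """
    sps = int(frame_rate / chip_rate)                 # camera samples per chip
    template = np.repeat(np.asarray(code, dtype=np.float64), sps)
    x = samples - np.mean(samples)                    # remove the background level
    t = template - template.mean()
    corr = np.correlate(x, t, mode="valid")
    # crude normalisation against the first window; adequate for a sketch
    peak = np.max(np.abs(corr)) / (np.linalg.norm(x[: len(t)]) * np.linalg.norm(t) + 1e-12)
    return peak, bool(peak > threshold)
```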