This PDF file contains the front matter associated with SPIE Proceedings Volume 7328, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
This paper discusses a sensor simulator/synthesizer framework that can be used to test and evaluate various sensor
integration strategies for the implementation of an External Hazard Monitor (EHM) and Integrated Alerting and
Notification (IAN) function as part of NASA's Integrated Intelligent Flight Deck (IIFD) project. The IIFD project, under
NASA's Aviation Safety program, "pursues technologies related to the flight deck that ensure crew workload and
situational awareness are both safely optimized and adapted to the future operational environment as envisioned by
NextGen." Within the simulation framework, various inputs to the IIFD and its subsystems, the EHM and IAN, are
simulated, synthesized from actual collected data, or played back from actual flight test sensor data. Sensors and avionics
included in this framework are TCAS, ADS-B, forward-looking infrared sensors, vision cameras, GPS, inertial navigators,
EGPWS, laser detection and ranging sensors, altimeters, communication links with ATC, and weather radar. The
framework is implemented in Simulink®, a modeling environment developed by The MathWorks. This environment
allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of
feedback from the pilot on the performance of the aircraft. Specifically, this paper addresses the architecture of the
simulator, the sensor model interfaces, the timing and database (environment) aspects of the sensor models, the user
interface of the modeling environment, and the various avionics implementations.
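The framework itself is built in Simulink, but as a rough illustration of the sensor model interface idea described above, the following Python sketch shows one plausible shape for it: a common step interface that simulated, synthesized, and playback sensors all implement. All class and method names here are illustrative assumptions, not the paper's actual API.

```python
# A minimal, hypothetical sketch of a common sensor-model interface; the real
# framework is implemented in Simulink, and every name below is an assumption.
from abc import ABC, abstractmethod

class SensorModel(ABC):
    """Each sensor is either simulated, synthesized from collected data,
    or played back from recorded flight test data."""

    def __init__(self, update_rate_hz):
        self.update_rate_hz = update_rate_hz  # drives the simulation scheduler

    @abstractmethod
    def step(self, t, ownship_state, environment):
        """Return this sensor's output at simulation time t."""

class PlaybackSensor(SensorModel):
    """Replays pre-recorded flight test sensor data, indexed by time."""

    def __init__(self, update_rate_hz, recording):
        super().__init__(update_rate_hz)
        self.recording = recording  # list of (timestamp, sample) pairs

    def step(self, t, ownship_state, environment):
        # Return the most recent recorded sample at or before time t.
        samples = [s for ts, s in self.recording if ts <= t]
        return samples[-1] if samples else None
```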
A fixed-base simulation experiment was conducted in NASA Langley Research Center's Integration Flight Deck
simulator to investigate enabling technologies for equivalent visual operations (EVO) in the emerging Next Generation
Air Transportation System operating environment. EVO implies the capability to achieve or even improve on the safety
of current-day Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and perhaps even retain
VFR procedures, all independent of the actual weather and visibility conditions. Twenty-four air transport-rated pilots
evaluated the use of Synthetic/Enhanced Vision Systems (S/EVS) and eXternal Vision Systems (XVS) technologies as
enabling technologies for future all-weather operations. The experimental objectives were to determine the feasibility of
XVS/SVS/EVS to provide for all weather (visibility) landing capability without the need (or ability) for a visual
approach segment and to determine the interaction of XVS/EVS and peripheral vision cues for terminal area and surface
operations. Another key element of the testing investigated the pilot's awareness and reaction to non-normal events (i.e.,
failure conditions) that were unexpectedly introduced into the experiment. These non-normal runs served as critical
determinants in the underlying safety of all-weather operations. Experimental data from this test are cast into
performance-based approach and landing standards which might establish a basis for future all-weather landing
operations. Glideslope tracking performance appears to have improved with the elimination of the approach visual
segment. This improvement can most likely be attributed to the fact that the pilots did not have to simultaneously
perform glideslope corrections and find required visual landing references in order to continue a landing. Lateral
tracking performance was excellent regardless of the display concept being evaluated or whether or not there were
peripheral cues in the side window. Although workload ratings were significantly lower when peripheral cues were
present compared to when there were none, these differences appear to be operationally inconsequential. Larger display
concepts tested in this experiment showed significant situation awareness (SA) improvements and workload reductions
compared to smaller display concepts. With a fixed display size, a color display was more influential in SA and
workload ratings than a collimated display.
Boeing has developed algorithms and processing to detect power lines and cables in passive imagery from a wide variety
of sources. The algorithm has been demonstrated with imagery from visible, medium-wave and long-wave infrared
(MWIR and LWIR), and Passive MilliMeter Wave (PMMW) sensors. Flight demonstrations of the real-time system
have been performed with both visible and LWIR image sensors. In the flight demonstrations, an LWIR image sensor
was used for both day and night detection in both rural and urban settings, with detection ranges in excess of 15,000 ft.
The processing system is capable of processing image sizes up to 1024x1024 pixels at 30 frames per second.
We present a series of algorithms and preliminary work toward developing a fully autonomous, real-time lightning
severity prediction capability enabling one-hour-ahead forecasting based on local lightning strike characteristics. Our
approach characterizes total, cloud-to-cloud, and cloud-to-ground strikes as input variables to derive the number of strikes
In numerous computer vision applications, enhancing the quality and resolution of captured video can be
critical. Acquired video is often grainy and low quality due to motion, transmission bottlenecks, etc.
Postprocessing can enhance it. Superresolution greatly reduces the effect of camera jitter to deliver smooth,
stabilized, high-quality video. In this paper, we extend previous work on a real-time superresolution
application implemented in ASIC/FPGA hardware. A gradient based technique is used to register the
frames at the sub-pixel level. Once we get the high resolution grid, we use an improved regularization
technique in which the image is iteratively modified by applying back-projection to get a sharp and
undistorted image. The algorithm was first tested in software and then migrated to hardware, achieving 320x240 to
1280x960 upscaling at about 30 frames per second, a 16x increase in total pixels. Various input parameters, such as
the input image size, the enlargement factor, and the number of nearest neighbors, can be tuned conveniently by the
user. We use a maximum word size of 32 bits to implement the algorithm in MATLAB Simulink as well as in FPGA
hardware, which provides a good balance between word size and
performance. The proposed system is robust and highly efficient. We have shown the performance
improvement of the hardware superresolution over the software version (C code).
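As a rough illustration of the back-projection regularization described above, the following Python sketch upsamples a single pre-registered low-resolution frame and iteratively refines it against a simulated imaging model. The Gaussian blur model, step size, and single-frame simplification are assumptions made for brevity; the paper's real-time pipeline registers multiple frames at the sub-pixel level in hardware.

```python
# A minimal sketch of iterative back-projection superresolution under an
# assumed Gaussian blur + decimation imaging model (single frame only).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_back_projection(lr, scale=4, iters=10, sigma=1.0, step=0.5):
    """Upsample `lr` by `scale` and iteratively sharpen the estimate."""
    hr = zoom(lr, scale, order=1)                 # initial high-resolution guess
    for _ in range(iters):
        # Simulate the imaging process: blur, then decimate to the LR grid.
        simulated_lr = gaussian_filter(hr, sigma)[::scale, ::scale]
        error = lr - simulated_lr                 # residual on the LR grid
        hr += step * zoom(error, scale, order=1)  # back-project the residual
    return hr

lr_frame = np.random.rand(240, 320)               # stand-in for a 320x240 frame
hr_frame = iterative_back_projection(lr_frame)    # 1280x960 (array shape 960, 1280)
```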
Since their introduction by Kohonen, Self-Organizing Maps (SOMs) have been used in various forms for surface
reconstruction. They offer robust and fast approximations of manifold data from unstructured input points while
being relatively easy to implement. On the other hand, SOMs have certain disadvantages when used in settings
where sparse, reliable, and spatially unbounded data occur. For example, airborne lidar sensors
generate a continuous stream of point data while flying above terrain. We introduce modifications of the SOM's
data structure to adapt it to unbounded data. Furthermore, we introduce a new variation of the learning rule
called rapid learning that is feasible for sparse but rather reliable data. We demonstrate examples where the
surroundings of an aircraft can be reconstructed in almost real time.
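For context on the rule the authors modify, the sketch below shows the classical Kohonen update for a 2D grid of 3D weight vectors fitting point data. The paper's unbounded data structure and "rapid learning" variant are not reproduced here; the decay schedules and parameters are illustrative assumptions.

```python
# A minimal sketch of the classical online SOM update (not the paper's
# rapid-learning rule); all hyperparameters are illustrative.
import numpy as np

def som_update(weights, point, t, lr0=0.5, sigma0=2.0, tau=100.0):
    """One update of a (rows, cols, 3) grid of weights toward a 3D point."""
    rows, cols, _ = weights.shape
    # Best-matching unit (BMU): the grid node closest to the input point.
    dists = np.linalg.norm(weights - point, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Exponentially decaying learning rate and neighborhood radius.
    lr = lr0 * np.exp(-t / tau)
    sigma = sigma0 * np.exp(-t / tau)
    # Pull every node toward the point, weighted by grid distance to the BMU.
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid_d2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
    weights += lr * h[..., None] * (point - weights)
    return weights
```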
In recent years, the number of helicopter accidents in HEMS operations has increased significantly. This is partly caused
by an increase in the number of operations, but also by the fact that operators have been extending helicopter
operations into the night and into adverse weather conditions.
Based on this fact, the NTSB has started a concerted effort "to improve the safety of emergency medical services
flights".
The project "Pilot Assistance System for Helicopters" which was started in 2003 and funded by the German Federal
Ministry of Economics and Technology tried to find solutions to exactly these types of problems. The project should
provide an electronic system which improves the situational awareness of the pilot and support the pilot in routes
planning considering all external and internal constraints. It should monitor the execution of the helicopter flight during
the mission taking care of all hazards like terrain, obstacles, bad weather zones, air traffic and airspace restrictions. The
system developed was called PILAS and included a projection of the current environmental situation of the helicopter
into the future as well as a module for proposing alternate solutions for avoiding hazardous situations. During all phases
of the flight the system should be an alerted assistant to the pilot.
This paper addresses the design and implementation of a conceptual Enhanced/Synthetic Vision Primary Flight Display
format. The goal of this work is to explore the means to provide the operator of a UAV with an integrated view of the
constraints for the velocity vector, resulting in an explicit depiction of the margins/boundaries of the multi-dimensional
maneuver space. For non-time-critical situations, this is expected to provide support when the operator has the authority
to manually set avoidance maneuvers, or approve, veto or modify velocity vector changes proposed by the automation.
The integration of the upper bounds of the maneuver space, resulting from energy constraints, and the lower bounds,
resulting from terrain, will be illustrated. Additionally, the application of a maneuver cost function will be discussed, for
identifying and prioritizing conflict avoidance options from an integrated multi-dimensional maneuver space, and
communicating those to the operator. Although the integrated avoidance functions have been developed with the UAV
application in mind, they have equal merit for manned aircraft. The need for specific GUI elements depends on the level
of authority of the system and the role of the operator/pilot, which may differ between manned and unmanned
applications.
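As a toy illustration of how such a cost function could rank avoidance options, the sketch below scores candidate maneuvers with a weighted sum of state changes. The terms and weights are purely illustrative assumptions, not the cost function developed in the paper.

```python
# A hypothetical maneuver cost: penalize energy (speed), altitude, and
# heading changes; lower cost means a preferred avoidance option.
def maneuver_cost(dv, d_alt, d_hdg, w_energy=1.0, w_alt=2.0, w_hdg=0.5):
    return w_energy * abs(dv) + w_alt * abs(d_alt) + w_hdg * abs(d_hdg)

# Candidates as (delta speed m/s, delta altitude m, delta heading deg).
candidates = [(0.0, 150.0, 0.0), (5.0, 0.0, 20.0), (0.0, 0.0, 45.0)]
best = min(candidates, key=lambda m: maneuver_cost(*m))  # -> (5.0, 0.0, 20.0)
```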
In addition to flight and system status or sensor data, synthetic vision systems visualize information stored in databases
on board the aircraft in an intuitive manner on flight deck displays. For example, through the three-dimensional depiction
of terrain or traffic information on the primary flight display, the pilot's overall situational awareness can be optimized.
Today's implementations typically create the image using a perspective projection onto a planar image plane.
Commonly, azimuthal angles of view between 30° and about 90° are used for this projection, which significantly limits
the peripheral viewing area. Using larger angles of view for the perspective projection leads to steadily increasing
compression in the image center and stretching at the image borders.
These problems of the depiction of large angles of view have been resolved through the use of a non-planar projection,
which projects the image onto a non-planar surface. In order to depict this curved surface on the planar display plane,
a second projection must be applied. The non-planar projection allows the depiction of objects on the PFD without
length distortions for large angles of view.
By depicting large angles of view in synthetic vision systems, elements of the peripheral viewing area can be visualized.
Aircraft flying abeam the ownship, or topographic features like mountain valleys located next to the current aircraft
position can be presented to the pilot on the primary flight display. Test flights in a research simulator revealed a strong
acceptance of the non-planar projection by the study group of professional pilots.
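To illustrate why a non-planar projection avoids the distortions described above, the sketch below contrasts a planar perspective projection with a projection onto a vertical cylinder. The paper does not specify its projection surface, so the cylindrical surface here is an illustrative assumption.

```python
# Planar vs. cylindrical projection of a camera-frame point (z forward);
# the cylinder keeps horizontal scale proportional to the azimuth angle.
import numpy as np

def planar_project(p, f=1.0):
    """Perspective projection onto a plane: stretches at large azimuths."""
    x, y, z = p
    return f * x / z, f * y / z

def cylindrical_project(p, f=1.0):
    """Projection onto a vertical cylinder: uniform horizontal scale."""
    x, y, z = p
    return f * np.arctan2(x, z), f * y / np.hypot(x, z)

# A point 60 degrees off boresight: the planar coordinate grows with
# tan(60 deg) ~ 1.73, the cylindrical one only with the angle ~ 1.05 rad.
p = (np.sin(np.radians(60)), 0.0, np.cos(np.radians(60)))
print(planar_project(p), cylindrical_project(p))
```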
This paper describes display concepts and flight test evaluations of flight management system
(FMS) flight plan integration into Honeywell's synthetic vision (SV) integrated primary flight
display systems (IPFD). The prototype IPFD displays consist of a primary flight symbology overlay
with flight path information and flight director guidance cues on the SV external 3D background
scenes. The IPFD conformal perspective-view background displays include terrain and obstacle
scenes generated with Honeywell's enhanced ground proximity warning system (EGPWS)
databases, runway displays generated with commercial FMS databases, and 3D flight plan
information coming directly from on-board FMS systems. The flight plan display concepts include
3D waypoint representations with altitude constraints, terrain tracing curves and vectors based on
airframe performance, and required navigation performance (RNP) data. The considerations for
providing flight crews with intuitive views of complex approach procedures with minimal display
clutter are discussed. Flight test data collected on board a Honeywell Citation Sovereign aircraft and pilot feedback
are summarized, with emphasis on test results involving approaches into terrain-challenged airfields with complex
FMS approach procedures.
By 2025, U.S. air traffic is predicted to increase 3-fold, straining the current air traffic management system, which may
not be able to accommodate this growth. In response to this challenge, a consortium of industry, academia, and
government agencies has proposed a revolutionary new concept for U.S. aviation operations, termed the Next
Generation Air Transportation System or "NextGen". Many key capabilities are being identified to enable NextGen,
including the concept of "net-centric" operations whereby each aircraft and air services provider shares information to
allow real-time adaptability to ever-changing factors such as weather, traffic, flight trajectories, and security. Data link is
likely to be the primary means of communication in NextGen. Because NextGen represents a radically different
approach to air traffic management and requires a dramatic shift in the tasks, roles, and responsibilities for the flight
deck, there are numerous research issues and challenges that must be overcome to ensure a safe, sustainable air
transportation system. Flight deck display and crew-vehicle interaction concepts are being developed that proactively
investigate and overcome potential technology and safety barriers that might otherwise constrain the full realization of
NextGen.
Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation
time, operations in low level flight, and landings in unknown areas. The research project PILAS, a cooperation between
Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, the Universities of Darmstadt and Munich,
and funded by the German government, approached this problem by researching a pilot assistance system that supports
the pilots during all phases of flight.
The databases required for the specified helicopter missions include different types of topographical and cultural data for
graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR
segments. The most critical databases for the PILAS system however are highly accurate terrain and obstacle data. While
RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically
operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated
response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted
into a vector format.
This paper gives a short overview of the complete PILAS system and then focuses on the generation of the required
high quality databases.
Forward Looking Infrared (FLIR) sensors are potential components in hazard monitoring systems for general aviation
aircraft. FLIR sensors can provide images of the runway area when normal visibility is reduced by meteorological
conditions. We are investigating short wave infrared (SWIR) and long wave infrared (LWIR) cameras. Pre-recorded
video taken from an aircraft on approach to landing provides raw data for our analysis. This video includes approaches
under four conditions: clear morning, cloudy afternoon, clear evening, and clear night. We used automatic object
detection techniques to quantify the ability of these sensors to alert the pilot to potential runway hazards. Our analysis is
divided into three stages: locating the airport, tracking the runway, and detecting vehicle sized objects. The success or
failure of locating the runway provides information on the ability of the sensors to provide situational awareness.
Tracking the runway position from frame to frame provides information on the visibility of runway features, such as
landing lights or runway edges, in the scene. Detecting small objects quantifies clutter and provides information on the
ability of these sensors to image potential hazards. In this paper, we present results from our analysis of sample approach
video.
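As a simple stand-in for the small-object detection stage described above, the following sketch flags vehicle-sized hot spots in an 8-bit infrared frame using thresholding and connected-component analysis. The threshold and area limits are illustrative assumptions, not the authors' detector.

```python
# A minimal detection sketch: warm-pixel threshold + blob size filtering.
import cv2

def detect_small_objects(ir_frame, min_area=4, max_area=400):
    """Return bounding boxes of small warm blobs in an 8-bit IR frame."""
    _, mask = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                    # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area:     # keep vehicle-sized blobs only
            boxes.append((x, y, w, h))
    return boxes
```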
In 2008 the German Aerospace Center (DLR) started the project ALLFlight (Assisted Low Level Flight and Landing on
Unprepared Landing Sites). This project deals with the integration of a full scale enhanced vision sensor suite onto the
DLR's research helicopter EC135. This sensor suite consists of a variety of imaging sensors, including a color TV
camera and an uncooled thermal infrared camera. Two different ranging sensors are also part of this sensor suite: an
optical radar scanner and a millimeter-wave radar system. Both radar systems are equipped with specialized software for
experimental modes, such as terrain mapping and ground scanning. To be able to process and display the huge incoming
flood of data from these sensors, a compact high performance sensor co-computer system (SCC) has been designed and
realized, which can easily be installed into the helicopter's cargo bay. A sophisticated, high performance, distributed
data acquisition, recording, processing, and fusion software architecture has been developed and implemented during the
first project year. The paper describes the challenging mechanical integration of such a comprehensive sensor suite onto
the EC135 and explains the hardware and software architecture concept and its implementation on the SCC.
BAE Systems has developed a solution to the problem of helicopter low visibility landings. The passive part of the
system displays 3-D conformal symbology on a tracked helmet mounted display. This provides the pilot with situational
awareness while the helicopter is in brownout/whiteout conditions. A millimetric-wave radar detects any stationary or
dynamic obstacles within the landing zone. The pilot is presented with a synthetic view of the area, which is produced
using advanced signal and display algorithms.
This paper discusses the integration of Forward-looking Infrared (FLIR) and traffic information from, for example, the
Automatic Dependent Surveillance-Broadcast (ADS-B) or the Traffic Information Service-Broadcast (TIS-B). The goal
of this integration method is to obtain an improved state estimate of a moving obstacle within the Field-of-View of the
FLIR with added integrity. The focus of the paper will be on the approach phase of the flight. The paper will address
methods to extract moving objects from the FLIR imagery and geo-reference these objects using outputs of both the
onboard Global Positioning System (GPS) and the Inertial Navigation System (INS). The proposed extraction method
uses a priori airport information and terrain databases. Furthermore, state information from the traffic information
sources will be extracted and integrated with the state estimates from the FLIR. Finally, a method will be addressed that
performs a consistency check between both sources of traffic information. The methods discussed in this paper will be
evaluated using flight test data collected with a Gulfstream V in Reno, NV (GVSITE) and simulated ADS-B.
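As a rough sketch of the geo-referencing step, the code below intersects a detected pixel's viewing ray with a level ground plane using a GPS/INS-derived camera pose. The flat-ground simplification and all names are illustrative assumptions; the paper additionally uses a priori airport and terrain databases.

```python
# Hypothetical pinhole geo-referencing: pixel -> ray -> ground intersection
# in a North-East-Down (NED) frame where +z points down.
import numpy as np

def georeference_pixel(u, v, K, R_cam_to_ned, cam_pos_ned, ground_down):
    """Map pixel (u, v) to a NED position on a level ground plane."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_ned = R_cam_to_ned @ ray_cam                    # rotate into NED
    # Scale the ray so it reaches the ground plane at depth `ground_down`.
    t = (ground_down - cam_pos_ned[2]) / ray_ned[2]
    return cam_pos_ned + t * ray_ned
```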