This PDF file contains the front matter associated with SPIE Proceedings Volume 10197, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Mesopic vision (−2 to 1 log cd/m²), where rods and cones are responsive simultaneously, lies between rod-driven scotopic and cone-driven photopic vision. Due to the technical and psychophysiological challenges of assessing mesopic vision, the visual system is not well quantified over the mesopic luminance range. Mesopic vision combined with hypoxia is of particular interest in applications where Army aviators must rely on color-coded visual information displayed under mesopic conditions at altitude. Results are presented from three vision tests (the Lanthony Desaturated-15, the Cone Contrast Test, and the Color Assessment and Diagnosis system) demonstrating that normobaric hypoxia (12% oxygen, approximating 14,000 feet) disrupts mesopic color vision, although not all tests were equally sensitive to this phenomenon. Additionally, respiratory data showed that Army aviators increased their tidal volume under normobaric hypoxia, indicating that respiratory rate alone is insufficient for assessing physiologic adaptation to hypoxia. Under mesopic hypoxic conditions, the known high oxygen demand of rods may reduce the retinal oxygen available for cones, thereby diminishing color sensitivity as well as other cone functions. Color vision under mesopic conditions warrants further examination, with particular attention to hypoxia as an important confound in high-terrain, aviation, and aerospace applications.
There is mounting evidence that the military's emphasis on photopic visual performance standards, in the absence of mesopic and scotopic standards, is a major oversight. The lack of an individualized mesopic capability represents a scientific knowledge gap that could impact soldier survivability. Data from 24 subjects (who had participated in a project on “Facilitating the transition from bright to dim environments”) have been re-assessed with regard to their individual dark adaptation response variability. A reduced ‘bleaching’ stimulus of 378 foot-lamberts (fL), lasting only 5 minutes, often elicited a rod-cone break in a matter of seconds. The time required for each subject to subsequently detect and identify as many as possible of 10 reduced-intensity light sources was measured. The 2° lighted targets ranged from 0.250 fL to 0.001 fL in luminance. The dark adaptation time to detect the 0.002 fL target was 60.81 ± 26.9 seconds (mean ± standard deviation) for the control condition; the clear spectacle lens group took 78.99 ± 44.34 seconds. Grouped data analysis indicated no statistically significant difference between the data sets by t-test (p ≤ 0.06). Yet there is a difference in the degree of response variability, both between the grouped data sets and across individual responses. Variation between the control and lens-wearing conditions can readily be explained by 8-10% transmission losses from surface reflections and direct glare interference. Nevertheless, both iterations exhibited a varied facility, or speed, of dark adaptation, which further varied on the test-retest condition, meaning there was no learning effect from the first test to the second one. This same visual sensitivity variability characteristic is duplicated within the realm of visual resolution.
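As a minimal sketch of the grouped comparison reported above, the snippet below runs a two-sample t-test on hypothetical per-subject dark-adaptation times drawn to match the published means and standard deviations; the actual subject-level data are not public, and `scipy.stats.ttest_ind` is used here only as a generic stand-in for the paper's analysis.

```python
# Hedged illustration: hypothetical dark-adaptation times (seconds) to detect
# the 0.002 fL target, sampled to match the reported group statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(60.81, 26.90, size=24)     # control condition, 24 subjects
clear_lens = rng.normal(78.99, 44.34, size=24)  # clear spectacle lens condition

t_stat, p_value = stats.ttest_ind(control, clear_lens, equal_var=False)  # Welch's t-test
print(f"control: {control.mean():.1f} +/- {control.std(ddof=1):.1f} s")
print(f"lens:    {clear_lens.mean():.1f} +/- {clear_lens.std(ddof=1):.1f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```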
Degraded visual environments are a serious concern for modern sensing and surveillance systems. Fog is of particular interest because it frequently forms along our coastlines, disrupting border security and surveillance. Fog presents hurdles for intelligence and reconnaissance by preventing data collection with optical systems for extended periods. We will present recent results from our work operating optical systems in our controlled fog experimental chamber. This facility is a 180-foot-long, 10-foot-wide, and 10-foot-tall structure with over 60 spray nozzles to achieve uniform aerosol coverage with various particle sizes, distributions, and densities. We will discuss the physical formation of fog in nature and how our generated fog compares. In addition, we will discuss fog distributions and characterization techniques. We will investigate the biases of different methods and discuss which techniques are appropriate for realistic environments. Finally, we will compare the data obtained from our characterization studies against accepted models (e.g., MODTRAN) and validate the usage of this unique capability as a controlled experimental realization of natural fog formations. By proving this capability, we will enable the testing and validation of future fog-penetrating optical systems and provide a platform for optical propagation experiments in a known, stable, and controlled environment.
Scattering environments, such as fog, pose a challenge for many detection and surveillance active sensing operations on both ground and air platforms. For example, current autonomous vehicles rely on a range of optical sensors that are affected by degraded visual environments. Real-world fog conditions can vary widely depending on the location and the environmental conditions during its creation. In our previous work we showed that polarized light, specifically circular polarization, can increase signal and range through scattering environments such as fog. In this work we investigate, via simulation, the effect of changing fog particle sizes and distributions on polarization persistence for both circularly and linearly polarized light. We present polarization-tracking Monte Carlo results for a range of realistic monodisperse particle sizes as well as varying particle size distributions as a model of scattering environments. We systematically vary the monodisperse particle size, the mean particle size of a distribution, the particle size distribution width, and the number of distribution lobes (bimodal), as they affect polarized light transmission through a scattering environment. We show that the circular polarization signal persists better than the linear polarization signal for most variations of the particle distribution parameters.
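The sketch below illustrates how the particle-size ensembles varied in such a study might be generated for a Monte Carlo run: monodisperse, single-mode distributions of varying mean and width, and a bimodal mixture. The log-normal form and all parameter values are illustrative assumptions, not the distributions used in the paper.

```python
# Hypothetical particle-size ensembles for a polarization-tracking Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # number of scattering particles to sample

def monodisperse(radius_um, n):
    return np.full(n, radius_um)

def lognormal(mean_um, gsd, n):
    # gsd: geometric standard deviation controlling the distribution width
    return rng.lognormal(mean=np.log(mean_um), sigma=np.log(gsd), size=n)

def bimodal(mean1_um, mean2_um, gsd, frac1, n):
    n1 = int(frac1 * n)
    return np.concatenate([lognormal(mean1_um, gsd, n1),
                           lognormal(mean2_um, gsd, n - n1)])

ensembles = {
    "monodisperse 4 um":     monodisperse(4.0, n),
    "lognormal 4 um, narrow": lognormal(4.0, 1.2, n),
    "lognormal 4 um, wide":   lognormal(4.0, 1.8, n),
    "bimodal 2 um / 10 um":   bimodal(2.0, 10.0, 1.3, 0.7, n),
}
for name, r in ensembles.items():
    print(f"{name:24s} mean {r.mean():5.2f} um  std {r.std():5.2f} um")
```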
As infrared (IR) imaging technologies improve for the commercial market, optical filters that complement this technology are critical to extending the adoption and benefit of thermal imaging across industrial and manufacturing markets. Thermal imaging, specifically shortwave infrared (SWIR) through longwave infrared (LWIR), provides the means for an observer to collect thermal information from a scene, whether temperature gradients or spectral signatures of materials. This is beneficial in applications such as chem/bio sensing, where a chemical species that is present or being emitted could compromise personnel or the environment. Because an environment contains an abundance of information, the difficulty lies in the observer's ability to extract it. The use of optical filters paired with thermal imaging provides the means to interrogate a scene by looking at unique infrared signatures. The more efficiently the optical filter transmits the wavelengths of interest, or suppresses other wavelengths, the greater the finesse of the imaging system. Such optical filters can be fabricated in the form of micro-spheres, which can be dispersed into a scene, where the optical filter's intimate interaction with the scene can supply information to the observer specific to material properties and temperature. To this end, Lumilant has made great progress in the design and fabrication of such micro-sphere optical filters. By engineering the optical filter's structure, the optical response can be tuned to the individual application.
Synthetic vision systems are becoming common in the business jet community. The perspective display of terrain presents complex information in a visual manner that pilots are accustomed to. Research and flight testing are underway to allow low-noise supersonic business jet operations. Widespread acceptance will require regulatory changes and the ability for pilots to predict and manage where the generated sonic boom will impact people on the ground. A display of the sonic boom impact will be needed for preflight and inflight planning. This paper details the CONOPS, algorithm development, and human-machine considerations of a synthetic vision display design incorporating a sonic boom carpet. Using a NASA-developed algorithm, sonic boom predictions, Mach cut-off, and sound pressure levels are calculated for current and modified flight plans. The algorithm output is transformed into georeferenced objects presented on navigation and guidance displays, where pilots can determine whether the current flight plan avoids the generation of sonic booms in noise-sensitive areas. If pilots maneuver away from the flight plan, a dynamically computed predicted boom carpet is presented, in which the algorithm is fed an extrapolation of the current flight path. The resulting depiction is a sonic boom footprint that changes location as the aircraft maneuvers. Using a certain look-ahead time for the prediction, the pilot has the ability to shift the location where boom intensity will be at a maximum. Considerations of allowable sound levels for various locations on the ground are incorporated for comparison of the real-time and predicted sonic boom.
Virtual Reality (VR) simulations have become a major component of US military and commercial training. VR is an attractive training method because it is readily available at a lower cost than traditional training methods. This has led to a staggering increase in demand for VR technology and research. To meet this demand, game engines such as Unity3D and Unreal have made substantial efforts to support various forms of VR, including the HTC Vive, smartphone-enabled devices like the GearVR, and, with appropriate plugins, even fully immersive Cave Automatic Virtual Environment (CAVE™) systems. Because of this hardware diversity, there is a need to develop VR applications that can operate on several systems, also known as cross-platform development. The goal in developing applications for all these types of systems is to create a consistent user experience across the devices. Maintaining this consistent user experience is challenging, because many VR devices have fundamental differences. Research has begun to explore ways of developing one application for multiple systems. The Virtual Reality Applications Center (VRAC) developed a VR football “Game Day” simulation that was deployed to three devices: a CAVE™, an Oculus Rift HMD, and a mobile HMD. Development of this application presented many learning opportunities regarding cross-platform development. There is no single approach to achieving consistency across VR systems, but the authors hope to disseminate best practices in cross-platform VR development through the Game Day application example. This research will help the US military develop applications to be deployed across many VR systems.
This paper describes operational concepts, certification considerations, and initial user evaluations for using an avionics system for low visibility taxi operations. The ability to taxi to/from the runway safely and efficiently in low visibility conditions is becoming increasingly important as airplanes and aircrews become equipped and approved for low visibility landing and takeoff.
In the United States, surface operations below 1200 ft Runway Visual Range (RVR) generally encourage the airport authority to have an approved Low Visibility Operations / Surface Movement Guidance and Control System (LVO/SMGCS) plan, which specifies certain markings, signs, lighting, and controls in the airport movement area. Operations below 600 ft RVR, or 500 ft RVR under U.S. Federal Aviation Administration (FAA) Order 8000.94 enablement, require additional infrastructure in movement and non-movement areas and may also require Airport Surface Detection Equipment (ASDE or ASDE-X) or a suitable substitute1 to allow the ground controllers to adequately monitor surface traffic. Surface operations below 300 ft RVR are generally precluded (or need extra controls such as the use of a “follow-me” truck).
The operational concepts described in this paper show how on-board systems based on synthetic and enhanced vision systems may substitute for or augment an airport's existing infrastructure. These on-board systems, along with proper additional mitigations under the LVO/SMGCS concept of Protected Low Visibility Taxi Routes (PLOVTR), would enable an appropriately equipped aircraft and qualified crew to safely taxi in low visibility conditions.
Head-Worn Displays (HWDs) are envisioned as a possible equivalent to a Head-Up Display (HUD) in commercial and general aviation. A simulation experiment was conducted to evaluate whether the HWD can provide a level of performance equivalent to or better than a HUD in terms of unusual attitude recognition and recovery. A prototype HWD with an ambient vision capability was tested; the ambient vision cueing was varied (on/off) as an independent variable in the attitude awareness testing. The simulation experiment was conducted in two parts: 1) short unusual attitude recovery scenarios in which the aircraft is placed in an unusual attitude and a single-pilot crew recovered the aircraft; and 2) a two-pilot crew operating in a realistic flight environment with "off-nominal" events to induce unusual attitudes. The data showed few differences in unusual attitude recognition and recovery performance between the tested head-down, head-up, and head-worn display concepts. The effect of the presence or absence of ambient vision stimulation was inconclusive. The ergonomic influences of the head-worn display, necessary to implement the ambient vision experimentation, may have influenced the pilot ratings and acceptance of the concepts.
In this work, we present the framework surrounding the development of a mmW radar image-based algorithm for wire recognition and classification for rotorcraft operation in degraded visual environments. While a mmW sensor image lacks the optical resolution and perspective of an IR or LIDAR sensor, it currently presents the only true see-through mitigation under the heaviest of degraded vision conditions. Additionally, the mmW sensor produces a high-resolution radar map that has proven to be exceedingly interpretable, especially to a familiar operator. Building on these clear advantages, the mmW radar image-based algorithm is trained and evaluated against independent mmW imagery collected from a live flight test in a relevant environment. The foundation of our approach is image processing and machine learning techniques utilizing radar-based signal properties along with sensor and platform information for added robustness. We discuss some of the requirements and practical challenges of standalone algorithm development, and lastly present some preliminary examples using existing development tools and discuss the path for continued advancement and evaluation.
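As an illustrative stand-in for one image-processing stage of wire detection, the sketch below extracts thin, near-linear features from a radar intensity image with a probabilistic Hough transform (OpenCV). The paper's actual feature set, classifier, and use of sensor/platform metadata are not reproduced; the parameters and the synthetic test image are assumptions.

```python
# Extract line-segment candidates that could correspond to suspended wires.
import cv2
import numpy as np

def wire_candidates(radar_img_8u, min_len_px=80):
    """Return line segments (x1, y1, x2, y2) from an 8-bit intensity image."""
    edges = cv2.Canny(radar_img_8u, 50, 150)                 # thin-edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=min_len_px,
                            maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    # Synthetic test image: noisy background plus one faint diagonal "wire".
    img = (np.random.rand(256, 256) * 40).astype(np.uint8)
    cv2.line(img, (10, 200), (250, 60), color=220, thickness=1)
    print(wire_candidates(img)[:3])
```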
Multi-waveband infrared (IR) sensors capture more spectral information about atmospheric particles and may provide better penetration through dust under dynamically changing conditions. Therefore, enhancing the visibility of multi-waveband infrared images obtained in a degraded visual environment (DVE) is an important way to improve perception of the environment under DVE conditions. In this paper, we present a system to enhance visibility in DVE conditions by modifying the wavelet coefficients of multi-waveband IR images. In the proposed system, input multi-waveband IR images are transformed into the wavelet domain using an integer lifting wavelet transformation. The low-frequency wavelet coefficients in each waveband are independently modified by an adaptive histogram equalization technique to improve the contrast of the images. To process high-frequency wavelet coefficients, a joint edge-mapping filter is applied to the multi-waveband high-frequency wavelet coefficients to find an edge map for each subband of wavelet coefficients; then a nonlinear filter is used to remove noise and enhance edge coefficients. Finally, the inverse lifting wavelet transformation is applied to the modified multi-waveband wavelet coefficients to obtain enhanced multi-waveband IR images. We tested the proposed system with degraded multi-waveband IR images obtained from a helicopter landing in brownout conditions. Our experimental results show that the proposed system is effective for enhancing the visibility of multi-waveband IR images under DVE conditions.
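A simplified, single-band sketch of that processing chain is shown below: forward wavelet transform, adaptive histogram equalization of the approximation band, edge-preserving shrinkage of the detail bands, and inverse transform. Here `pywt.dwt2` stands in for the paper's integer lifting transform, and the joint multi-band edge map is reduced to a per-band soft threshold; wavelet choice and thresholds are illustrative assumptions.

```python
import numpy as np
import pywt
from skimage import exposure

def enhance_band(ir_band, wavelet="haar", noise_sigma=0.02):
    img = ir_band.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)          # normalize to [0, 1]

    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)

    # Contrast-enhance the low-frequency (approximation) coefficients.
    span = np.ptp(cA) + 1e-9
    cA_eq = exposure.equalize_adapthist((cA - cA.min()) / span, clip_limit=0.02)
    cA_eq = cA_eq * span + cA.min()

    # Soft-threshold the high-frequency coefficients: suppress small (noise)
    # responses while keeping strong (edge) responses.
    def shrink(c):
        return np.sign(c) * np.maximum(np.abs(c) - noise_sigma, 0.0)

    return pywt.idwt2((cA_eq, (shrink(cH), shrink(cV), shrink(cD))), wavelet)

if __name__ == "__main__":
    band = np.random.rand(128, 128)      # placeholder for one IR waveband
    print(enhance_band(band).shape)
```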
With the development of a myriad of imaging sensors and associated image processing algorithms to address the DVE problem, relative performance evaluations have become increasingly important. In this paper, we introduce image quality metrics that have been selected for DVE applications. These quality metrics are based on the human visual system and consider factors such as induced processing noise, information content, and preserved image detail. These measures are shown to be useful for the evaluation of imaging sensors and associated processing. In addition, these measures provide direction for tuning and optimizing DVE local area processing (LAP) algorithms. Results will be shown for sample test images and for dust trials of a camera with various image processing algorithms.
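Two generic reference metrics in the spirit of the factors listed above (information content and preserved detail) are sketched below. These are textbook illustrations for orientation only, not the HVS-based metrics developed for the DVE evaluations.

```python
import numpy as np

def entropy_bits(img, bins=256):
    """Shannon entropy of the grey-level histogram (information content)."""
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max() + 1e-9))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def edge_preservation(reference, processed):
    """Correlation of gradient magnitudes between reference and processed images."""
    gr = np.hypot(*np.gradient(reference.astype(float)))
    gp = np.hypot(*np.gradient(processed.astype(float)))
    return float(np.corrcoef(gr.ravel(), gp.ravel())[0, 1])

if __name__ == "__main__":
    ref = np.random.rand(64, 64)
    proc = ref + 0.05 * np.random.randn(64, 64)    # mildly degraded copy
    print(entropy_bits(ref), edge_preservation(ref, proc))
```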
Typically referred to as brownout or whiteout, degraded visual environments (DVE) more accurately include all forms of materially reduced visibility in which pilots and operators may become disorientated. These conditions include very low night illumination levels, adverse meteorological conditions (clouds and precipitation) and obscurant particulates (dust storms and sea spray), which can all reduce photopic and thermal target contrasts with respect to their background environment. DVE operations are a significant and persistent safety concern for autonomous or piloted platforms at all points of their mission cycle, including launch, transit, execution, egress and recovery. Impacted platforms of all sizes can include aircraft (from drones to hybrid airships), ground vehicles and maritime vessels (surface or submerged). This paper seeks to highlight the capabilities of the Met Office over both land and sea in accurately forecasting the presence of DVE through Numerical Weather Prediction (NWP). These forecasts can then be further leveraged with Met Office expertise in environmental impacts to predict the likely effects on sensor acquisition ranges. These sensor impacts are robustly modelled through Tactical Decision Aids (TDAs) that can take account of platform range, elevation, sensor waveband, target characteristics and background environment, in addition to many other pertinent parameters. The Met Office is the United Kingdom's National Meteorological Service (NMS), tasked to provide weather information and weather-related warnings. Within the realm of defence engagement, the Met Office has provided key elements of geospatial intelligence capability for multinational operations worldwide for over a century. The Met Office supports its defence stakeholders, such as NATO and the USAF, by assimilating over 10 million daily global observations into its Unified Model (UM) for NWP. The UM provides the input for the Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) to predict atmospheric impacts on sensor performance within the context of TDAs, such as Neon for thermal contrast and MONIM for night illumination levels. This global UM forecast data, together with high temporal-spatial resolution regional sub-models, enables further environmental impact prediction for atmospheric dispersion events such as volcanic ash and radiological incidents through the NAME capability (Numerical Atmospheric-dispersion Modelling Environment). In addition to capabilities in Space Weather, the Met Office also specializes in operational ocean monitoring and forecasting services to support safe operations in the marine environment, and has also evolved to cater for, amongst others, marine security, commercial operations, licensing for marine operations and environmental monitoring.
There is a need to develop fast vision systems capable of supporting real-time operations that require split-second decision making. To perform at high speed, these vision systems are subject to stringent latency requirements, which hinder their light-sensing elements from collecting a meaningful number of photons with an acceptable signal-to-noise ratio (SNR). As a result, high amplifier gains end up amplifying large amounts of noise along with image content. Rockwell Collins has developed an all-digital vision system, dubbed the Integrated Digital Vision System (IDVS), with very low latency, capable of operating in real time in conditions ranging from complete darkness to daylight. This paper presents an algorithmic approach to denoising IDVS frames based on state-of-the-art image denoising algorithms, including Block Matching 3D (BM3D) and Non-Local Means (NLM), that are modified to meet IDVS hardware restrictions.
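For reference, a standard (unconstrained) non-local means denoiser is sketched below using scikit-image, as a baseline against which hardware-adapted NLM/BM3D variants like those described above could be compared. The noise estimate and patch parameters are illustrative assumptions, and none of the IDVS-specific modifications are shown.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_frame(frame):
    """Baseline NLM denoising of a single grayscale frame (float array)."""
    sigma = float(np.mean(estimate_sigma(frame)))
    return denoise_nl_means(frame, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

if __name__ == "__main__":
    clean = np.linspace(0, 1, 128 * 128).reshape(128, 128)
    noisy = clean + 0.1 * np.random.randn(128, 128)
    print(denoise_frame(noisy).shape)
```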
The paper proposes a semantic segmentation algorithm based on Convolutional Neural Networks (CNNs) for the problem of presenting multispectral sensor-derived images in Enhanced Vision Systems (EVS). A CNN architecture based on a residual SqueezeNet with deconvolutional layers is presented. To create an in-domain training dataset for the CNN, a semi-automatic workflow using photogrammetric techniques is described. Experimental results are shown for problem-oriented images obtained by the TV and IR sensors of the EVS prototype in a set of flight experiments.
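A minimal sketch of a SqueezeNet-encoder / deconvolution-decoder segmentation network is given below, assuming PyTorch and torchvision. The residual connections and the exact decoder of the paper's architecture are not reproduced; the class count and layer sizes are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import squeezenet1_1

class SqueezeSeg(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # SqueezeNet feature extractor as the encoder (512-channel output).
        self.encoder = squeezenet1_1(weights=None).features
        # Deconvolutional (transposed-convolution) decoder back toward input size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = SqueezeSeg(num_classes=4)
    logits = net(torch.randn(1, 3, 256, 256))
    print(logits.shape)   # per-class logits at roughly the input resolution
```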
Operating in a degraded visual environment (DVE) poses a significant risk to helicopter operations. A DVE can be caused by partial or total loss of visibility from airborne dust, sand, or snow stirred up by the helicopter's rotor downwash. A DVE can cause a loss of spatial orientation and situational awareness, which has on several occasions led to controlled flight into terrain, ground obstacle collisions, and the loss of aircraft and personnel. DVEs have driven the development of new display technologies, which in turn present new challenges, including the integration of scene imagery, visual symbology, tactile cueing, and aural cueing. In a full-motion DVE simulation study with seven test pilots, we evaluated aural and tactile cueing along with sensor imagery displayed on either a helmet-mounted display (HMD) or a panel-mounted display (PMD). The symbology and forward-looking infrared (FLIR) imagery were presented on a UH-60M PMD or an SA Photonics high-definition, wide field-of-view, binocular HMD. Additionally, the synergistic effects of aural and tactile cues were assessed. Tactile cues were presented via belt, shoulder harness, and seat cushion using electromechanical tactile stimulators. Aural cues were presented via an HGU-56/P rotary-wing aircrew helmet. The compatibility and effectiveness of each combination of FLIR sensor imagery, selected display, and aural and tactile cueing set were evaluated with quantitative measures of flight performance, pilot subjective reports, and pilot psychophysiological measures.
Donald E. Swanberg, John G. Ramiccio, Deborah Russell, Kathryn A. Feltman, Aaron M. McAtee, Rolf Beutler, Angus H. Rupert, Ian P. Curry, Michael Wilson, et al.
The U.S. Army Aeromedical Research Laboratory has transformed its NUH-60 Blackhawk simulator into a degraded visual environment (DVE) test bed capable of assessing integrated cueing technologies and their impact on flight performance. It is a unique simulator equipped with the Lift Simulator Modernization Program database and an enhanced brownout/whiteout model that replicates typical DVE conditions. The simulator features environmental temperature control and is a full-visual, full-motion simulator with six degrees of freedom. The flight simulator consists of a simulator compartment containing a cockpit, an instructor/operator station, and an observer station. It is equipped with eight Dell XIG visual image generator systems that simulate natural helicopter environments for day, dusk, night, and NVG with blowing sand or snow. The visual scene databases are created using satellite imagery of real-world locations. New sensor imaging capabilities produce realistic visuals that allow testing of DVE countermeasures. The simulator is also equipped with USAARL's Tactile Situation Awareness System (TSAS), which stimulates the pilot through belt-worn and seat-cushion “tactors” that vibrate to convey, through the sense of touch, specific aircraft flight parameters such as drift, direction, and altitude. In addition, a glass cockpit façade allows UH-60 Mike model functionality. The simulator is now being used as part of the U.S. Army Research, Development and Engineering Command's DVE mitigation program.
Areté Associates, in partnership with the Army's Night Vision and Electronic Sensors Directorate (NVESD), has demonstrated in flight a fused LIDAR/LWIR/EO system that provides a pilotage aid for rotorcraft operations in degraded visual environments (DVE). Areté's purpose-built DVE lidar provides full-waveform processing to enable novel dust suppression and weak-target detection techniques. This lidar system is integrated with wide field-of-view, high-resolution LWIR and EO cameras to provide full situational awareness with fast update rates. The sensor fusion system creates a high-fidelity 3D world model in real time, including the ground surface, terrain features, hazards, and obscurant distributions. This model is used to construct an informative and intuitive cockpit display in real time. The system also incorporates offline terrain and image databases to augment the live sensor data in areas beyond the sensor fields of view. This system will be tested in flight in DVE in 2016.
Color night vision is a high priority for the defense community. The ability to see color images in extremely low light is also of high commercial value for use in a range of degraded visual conditions or as a vision aid for sight-impaired users. Creative MicroSystems has been working on a digital night vision system using a multi-aperture approach that enables video-rate color night vision. The approach delivers increased operational performance and a substantial cost reduction, and uses enhanced computer vision algorithms running on commercial off-the-shelf processors. Early systems have achieved color imaging in moonless night conditions in a very compact form factor.
The solution space to the DVE sensor problem can be considered a continuum in which the goal is to minimize Size, Weight, Power, and Cost (SWAP-C) and complexity while simultaneously maximizing performance. Performance is often achieved at the expense of SWAP-C and complexity. The core DVE sensor system technologies can be grouped into three broad areas: (1) sensors (e.g., LiDAR, radar, or imaging); (2) data processing, such as fusion or sensor processing; and (3) synthetic vision and symbology. Much of the body of DVE sensor research has focused on advancing the current state of the art in one or more of these particular areas, such as advanced sensing and/or data processing technologies. However, the difficulties of integrating such a DVE technology into an aircraft and obtaining the proper hardware/software certification(s) for flight are often not considered. Both adversely impact SWAP-C and complexity. In this paper we examine the solution space of the DVE sensor problem, identify the key drivers of SWAP-C and complexity, and present strategies for their mitigation.
The paper discusses recent results of flight tests performed in brownout with the Hensoldt (formerly Airbus DS Electronics and Border Security) LiDAR system. The SferiSense® LiDAR system was mounted on an Mi-2 test platform as part of the complete DVE system SFERION® to undergo the tests. To optimize brownout capability, minor firmware modifications were made relative to the serial-production SferiSense® LiDAR system, which is in operational use on the NH90 transport helicopter. Dust echoes were also filtered out by a real-time filtering algorithm. Numerous approaches into own-ship-generated dust (light to heavy) as well as fan-generated dust clouds (chalk 2 scenarios) were performed. The paper discusses the results and shows under which conditions the LiDAR can still look through the dust cloud. The contribution of high-resolution, real-time 3D LiDAR data to overall DVE system usage is also discussed.
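A toy illustration of one common dust-rejection idea for multi-return LiDAR is sketched below: dust tends to produce early, weak returns, while the ground or a hard obstacle produces a later, stronger one, so the last return above an intensity threshold is kept. The actual real-time filter in the SferiSense firmware is not described in the abstract, and this sketch is not that algorithm; the threshold and sample values are assumptions.

```python
import numpy as np

def filter_dust_returns(pulses, intensity_threshold=0.3):
    """pulses: list of (ranges, intensities) arrays per emitted pulse,
    with returns assumed ordered by increasing range."""
    kept = []
    for ranges, intens in pulses:
        strong = intens >= intensity_threshold
        if strong.any():
            kept.append(ranges[strong][-1])   # last strong return = hard target
        # else: pulse fully scattered/absorbed by dust, no point kept
    return np.array(kept)

if __name__ == "__main__":
    pulse = (np.array([12.0, 14.5, 62.3]),    # meters: two dust hits + ground
             np.array([0.15, 0.20, 0.85]))
    print(filter_dust_returns([pulse]))       # -> [62.3]
```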
Laser scanners based on Risley prism pair technology offer several advantages, including the generation of a multitude of scan patterns, non-overlapping patterns, and a conical Field-Of-View (FOV) with high data density around the center. The geometry and material properties of the prisms define the conical FOV of the sensor, which can typically be set between 15° and 120°. However, once the prisms are defined, the FOV cannot be changed. Neptec Technologies, in collaboration with Defence Research and Development Canada, has developed a unique scanner prototype using two pairs of Risley prisms. The first pair defines a small 30° FOV, which is then steered within a larger 90° Field-Of-Regard (FOR) by the second pair of prisms. This offers the advantage of a high-resolution scan-pattern footprint that can be steered quickly and randomly within a larger area, eliminating the need for mechanical steering equipment. The OPAL Double Risley Pairs (DRP) prototype was recently evaluated at Yuma Proving Ground with the scanner positioned atop a tower and overlooking various types of targets while dust was generated by a helicopter. Results will be presented for clear and dusty conditions, showing examples of moving a high-resolution FOV within the FOR.
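A first-order (thin-prism, small-angle) model of such a double Risley-pair scanner is sketched below: each prism contributes a fixed deflection that rotates about the optical axis, and the pointing direction is approximately the vector sum of the four contributions. The deflection angles, rotation rates, and steering setting are illustrative assumptions, not the OPAL DRP design values.

```python
import numpy as np

def risley_pattern(t, prisms):
    """prisms: list of (deflection_deg, rate_hz, phase_rad) per prism."""
    xy = np.zeros((2, t.size))
    for defl, rate, phase in prisms:
        ang = 2 * np.pi * rate * t + phase
        xy += defl * np.vstack([np.cos(ang), np.sin(ang)])
    return xy   # azimuth/elevation deflection in degrees (small-angle approx.)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 20000)
    # Fast pair: two 7.5 deg prisms spinning at nearby rates -> ~30 deg FOV rosette.
    fine_pair = [(7.5, 97.0, 0.0), (7.5, -103.0, 0.0)]
    # Slow pair: two 15 deg prisms held at the same orientation -> static 30 deg
    # offset of the fine pattern, giving up to a ~90 deg field of regard.
    steer_pair = [(15.0, 0.0, 0.5), (15.0, 0.0, 0.5)]
    xy = risley_pattern(t, fine_pair + steer_pair)
    print(xy.min(axis=1), xy.max(axis=1))
```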
Originally developed for military applications, head-mounted displays (HMDs) are becoming more common for commercial virtual and augmented reality applications. One of the fundamental requirements of an HMD is the field of view (FOV), the apparent angular size of the virtual image as seen by the user. The size of the FOV has a major impact on the overall bulk, weight, resolution, cost, and implied value of the HMD, so careful consideration must be given when determining how large it should be. There are key visual human-factors guidelines that indicate how the field of view affects the user's ability to accomplish specific tasks. These guidelines are addressed in this paper.
As the military increases its reliance upon and continues to develop Helmet-Mounted Displays (HMDs), it is paramount that HMDs be developed to meet the operational needs of the Warfighter. During the development cycle, questions always arise concerning the operational requirements of the HMD, including luminance, contrast, color, resolution, and other performance parameters. When color is implemented in HMDs, which are eyes-out, see-through displays, visual perception issues become an increased concern. In particular, when HMD symbology combines with the ambient scene, the combination can produce a false perception of either or both images. Work describing the daylight luminance requirements for viable symbology has been previously published. Here we present a follow-up evaluation of possible color choices and the effect that these choices have on perception and situational awareness. Special emphasis is placed on the evaluation of hues in CIE chromaticity space and their combination, as well as perceptual issues concerning color-deficient versus normal observers. In addition, choices are evaluated in terms of display technology and information content.
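The see-through mixing problem can be illustrated with a short calculation: light from the symbology and from the ambient scene add at the eye, so the perceived chromaticity is the luminance-weighted (additive) combination of the two. The tristimulus values below are illustrative, not values from the evaluation.

```python
import numpy as np

def xyY_to_XYZ(x, y, Y):
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

def mix_chromaticity(symbol_xyY, background_xyY):
    XYZ = xyY_to_XYZ(*symbol_xyY) + xyY_to_XYZ(*background_xyY)  # additive mix
    X, Y, Z = XYZ
    s = X + Y + Z
    return X / s, Y / s, Y    # mixed chromaticity (x, y) and total luminance

if __name__ == "__main__":
    green_symbol = (0.30, 0.60, 300.0)     # (x, y, luminance cd/m^2), illustrative
    sky_background = (0.31, 0.33, 3000.0)
    print(mix_chromaticity(green_symbol, sky_background))
    # Against a bright background the mixed chromaticity collapses toward the
    # background's, which is how hue confusions arise in see-through displays.
```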
The use of conformal symbology in color head-worn displays (HWDs) opens up a range of new possibilities on modern flight decks. The capability of color augmentation seems especially useful for low-level flight in degraded visual environments. Helicopter flights in these conditions, including brownout caused by swirling dust or sand particles, can often lead to spatial disorientation (SD) and result in a significant number of controlled flight into terrain (CFIT) accidents. While first-generation color-capable conformal displays are deployed, practical guidelines for the use of color in these see-through interfaces are yet to be established. A literature survey is carried out to analyze available knowledge of color use in conformal displays and to identify established methodologies for human-factors experimentation in this domain. First, the key human factors involved in color HWDs are outlined, including hardware design aspects as well as perceptual and attentional aspects. Second, research on color perception is mapped out, focusing on investigations of luminance contrast requirements, modeling of color space blending, and development of color correction solutions. Third, application-based research on colored conformal symbology is reviewed, including several simulations and flight experiments. The analysis shows that established luminance contrast requirements need to be validated and that performance effects of colored HWD symbology need more objective measurement. Finally, practical recommendations are made for further research. This literature study thus establishes a theoretical framework for future experimental efforts in colored conformal symbology. The Institute of Flight Guidance of the German Aerospace Center (DLR) anticipates conducting experiments within this framework.
Head-mounted displays (HMDs) generally exhibit significant image distortion, which must be reduced/eliminated prior to effective use. Additionally, biocular or binocular near-eye displays must be carefully aligned to enable overlapping two- or three-dimensional image synthesis without causing eye strain, fatigue, or performance loss. Typically, HMDs include distortion correction maps supplied by the manufacturer that are often generated by theoretical calculations that do not precisely match the as-built optical system or account for manufacturing variance. However, HMD users often assert that manufacturer-supplied distortion maps are not accurate enough for some alignment-critical applications. In this work we present the design and validation of a relatively low cost alignment and distortion characterization toolset (hardware and software) for characterization of biocular HMDs. This toolset is able to replicate the ocular alignment of most human observers by emulating a user’s ocular position to examine both on- and off-axis distortion and alignment over a wide range of viewing angles and eye positions. This enables accurate characterization of distortion changes experienced as a user’s eyes move to view different regions of the display (e.g., viewing off-boresight symbols in a well-aligned HMD or viewing a new alignment after an HMD has “slipped” to a slightly different position). The toolset characterizes distortion through image registration of distorted patterns displayed in the HMD to undistorted reference patterns. This work is intended to be of interest to HMD manufacturers, vision scientists, and operators of biocular HMDs for use in precision-critical applications.
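A reduced sketch of the distortion-characterization step is shown below: given point correspondences between the undistorted reference pattern and its observed position in the eyepiece, a simple radial distortion model r' = r(1 + k1 r² + k2 r⁴) is fit by least squares. The real toolset registers whole images across eye positions; this point-pair version, the model order, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_radial_distortion(ref_xy, obs_xy):
    """ref_xy, obs_xy: (N, 2) arrays of normalized image coordinates."""
    r_ref = np.linalg.norm(ref_xy, axis=1)
    r_obs = np.linalg.norm(obs_xy, axis=1)
    # r_obs = r_ref * (1 + k1*r_ref**2 + k2*r_ref**4)  ->  linear in (k1, k2)
    A = np.column_stack([r_ref**3, r_ref**5])
    k, *_ = np.linalg.lstsq(A, r_obs - r_ref, rcond=None)
    return k   # (k1, k2)

if __name__ == "__main__":
    ref = np.random.uniform(-1, 1, size=(200, 2))
    r = np.linalg.norm(ref, axis=1, keepdims=True)
    obs = ref * (1 - 0.08 * r**2 + 0.01 * r**4)   # synthetic barrel distortion
    print(fit_radial_distortion(ref, obs))        # recovers ~[-0.08, 0.01]
```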
In pursuit of fully automated display optimization, the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is evaluating a variety of approaches, including the effects of viewing distance and magnification on target acquisition performance. Two such approaches are the Targeting Task Performance (TTP) metric, which NVESD developed to model target acquisition performance over a wide range of conditions, and a newer Detectivity metric based on matched-filter analysis by the observer. While NVESD has previously evaluated the TTP metric for predicting the peak-performance viewing distance as a function of blur, no such study has been done for noise-limited conditions. In this paper, the authors present a study of human task performance versus viewing distance for images with noise, using both metrics. Experimental results are compared to predictions made with the Night Vision Integrated Performance Model (NV-IPM). The potential impact of the results on the development of automated display optimization is discussed, as well as the assumptions that must be made about the targets being displayed.
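For orientation, a generic matched-filter observer calculation in the spirit of a detectivity-style metric is sketched below: the ideal-observer SNR for a known target signature in additive white noise is the root signal energy divided by the noise standard deviation. This is a textbook formulation, not NVESD's metric definition or the NV-IPM implementation, and the target chip is hypothetical.

```python
import numpy as np

def matched_filter_snr(target_chip, noise_sigma):
    """target_chip: 2-D array of target-minus-background contrast values."""
    signal_energy = np.sqrt(np.sum(target_chip.astype(float) ** 2))
    return signal_energy / noise_sigma

if __name__ == "__main__":
    # Illustrative 16x16 target with 0.2 contrast against noise of sigma = 0.05.
    chip = 0.2 * np.ones((16, 16))
    print(matched_filter_snr(chip, noise_sigma=0.05))   # SNR = 64
```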
Creative Microsystems Corporation is developing the Holographic Imageguide Display (HID), a rugged, lightweight, sunlight-readable, high-resolution see-through display for augmented or mixed reality that we believe can provide increased situational awareness for many military applications. The display excels in situations where a user must maintain their normal sight (such as shooting a weapon), or when size and weight are a primary consideration (such as behind a night vision device). The display uses holographic technology to project images through an imageguide directly to the user's eye. The system connects to standard image signal inputs and has a small form factor for its support electronics. The result is a full-color, low-cost display that can superimpose images onto a user's field of view. Creative Microsystems has integrated the display into several data and video systems, including geolocated symbology, head trackers, image fusion, and augmented/mixed reality for operators, simulations, training, and mission command.
Industry and academia have repeatedly demonstrated the transformative potential of Augmented Reality (AR) guided assembly instructions. In the past, however, computational and hardware limitations often dictated that these systems be deployed on tablets or other cumbersome devices. Tablets often impede worker progress by diverting a user's hands and attention, forcing them to alternate between the instructions and the assembly process. Head Mounted Displays (HMDs) overcome those diversions by allowing users to view the instructions hands-free while simultaneously performing an assembly operation. Thanks to rapid technological advances, wireless commodity AR HMDs are becoming commercially available. Specifically, the pioneering Microsoft HoloLens provides an opportunity to explore a hands-free HMD's ability to deliver AR assembly instructions and what a user interface looks like for such an application. Such an exploration is necessary because it is not certain how previous research on user interfaces will transfer to the HoloLens or other new commodity HMDs. In addition, while new HMD technology is promising, its ability to deliver a robust AR assembly experience is still unknown. To assess the HoloLens' potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof-of-concept application. The features focused upon when building the prototype were user interfaces, dynamic 3D assembly instructions, and spatially registered content placement. The research showed that while the HoloLens is a promising system, there are still areas that require improvement, such as tracking accuracy, before the device is ready for deployment in a factory assembly setting.
The recent evolution of cockpit design has moved from the established glass cockpit in new directions. Among them is the virtual enhancement of cockpits by augmented reality (AR) and virtual reality (VR) displays. Helmet-mounted see-through displays are well known in aviation, but opaque VR displays are also of increasing interest. This technology enables the pilot to use virtual instrumentation as an add-on to the real cockpit; even totally virtualized instrumentation is possible. Furthermore, VR technology allows fast prototyping and pilot training in cockpit environments that are still in development, before even a single real instrument is built. We show how commercial off-the-shelf VR hardware can be used to build a prototyping environment. We demonstrate the advantages and challenges of using software engines originally built for the games industry. We describe our own integration concept, which reuses as much of our own software as possible and allows integration with minimal parallel development.
Modern systems are starting to include a wide variety of sensors and camera systems for applications such as night vision, degraded-vision operation, and others. Systems that require multiple coordinated sensors (including sensor fusion) for ISR, navigation in degraded environments, or infrared countermeasures are constantly trying to increase throughput to carry higher-resolution images and video in real time. The need for ever-higher throughput challenges system designers on every level, including the physical interface. Simply moving video efficiently from point to point or within a network is a challenge. ARINC 818, the Avionics Digital Video Bus, continues to expand into high-speed sensor applications because of its low latency, robustness, and high throughput capabilities.
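A back-of-the-envelope throughput check for carrying a sensor video stream over a serial video link of this kind is sketched below. The candidate line rates, the 8b/10b encoding efficiency, and the framing overhead figure are illustrative assumptions for sizing purposes, not values taken from the ARINC 818 specification.

```python
def required_payload_gbps(width, height, bits_per_pixel, frames_per_second):
    """Raw pixel payload of an uncompressed video stream, in Gb/s."""
    return width * height * bits_per_pixel * frames_per_second / 1e9

def link_fits(payload_gbps, line_rate_gbps, encoding_efficiency=0.8, framing=0.95):
    """True if the payload fits within the usable fraction of the line rate."""
    return payload_gbps <= line_rate_gbps * encoding_efficiency * framing

if __name__ == "__main__":
    # Hypothetical fused-sensor stream: 1920x1200 pixels, 16 bits/pixel, 60 Hz.
    payload = required_payload_gbps(1920, 1200, 16, 60)
    print(f"payload: {payload:.2f} Gb/s")
    for rate in (1.0625, 2.125, 4.25, 8.5):   # candidate serial line rates (Gb/s)
        print(f"  {rate:6.4g} Gb/s link: {'ok' if link_fits(payload, rate) else 'too slow'}")
```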
Degraded Visual Environments (DVE) can significantly restrict rotorcraft operations during their most common mission profiles of terrain flight and off-airfield operations. The user community has been seeking solutions that will allow pilotage in DVE and mitigate the additional risks and limitations resulting from the degraded visual scene. To achieve this solution, there must be a common understanding of the DVE problem, the history of solutions to this point, and the full range of solutions that may lead to future rotorcraft pilotage in the DVE. There are three major technologies that contribute to rotorcraft operations in the DVE: flight control, cueing and sensing, and all three must be addressed for an optimal solution. Increasing aircraft stability through flight control improvements will reduce pilot workload and facilitate operations in both Degraded Visual Environments and Good Visual Environments (GVE), and therefore must be a major piece of all DVE solutions. Sensing and cueing improvements are required to gain a level of situational awareness which can permit low-level flight and off-airfield landings, while avoiding contact with terrain or obstacles that are not visually detectable by the flight crew. The question of how this sensor information is presented to the pilot is a subject of debate among those working to solve the DVE problem. There are two major philosophies in the field of DVE sensor and cueing implementation. The first is that the sensor should display an image which allows the pilot to perform all pilotage tasks as they would fly under visual flight rules (VFR). The second is that the pilot should follow an algorithm-derived, sensor-cleared, precision flight path, presented as cues for the pilot to fly as they would under instrument flight rules (IFR). There are also combinations of these two methods that offer differing levels of assistance to the pilots, ranging from aircraft flight symbology overlaid on the sensor image, to symbols that augment the displayed image and help a pilot interpret the scene, to a complete virtual reality that presents a display of the sensed world without any “see-through” capability. These options can utilize two primary means of transmitting a sensor image and cueing information to the pilot: a helmet-mounted display (HMD) or a panel-mounted display (PMD). This paper will explore the trade space between DVE systems that depend on an image and those that utilize guidance algorithms, for both the PMD and HMD, as recently demonstrated during the 2016 and 2017 NATO flight trials in the United States, Germany, and Switzerland.
In recent years, the number of offshore wind farms has been increasing rapidly. Coastal European countries in particular are building numerous offshore wind turbines in the Baltic Sea, the North Sea, and the Irish Sea. During both construction and operation of these wind farms, many specially equipped helicopters are on duty. Due to their flexibility, their hover capability, and their higher speed compared to ships, these aircraft perform important tasks such as helicopter emergency medical services (HEMS) as well as passenger and freight transfer flights. The missions often include specific challenges like platform landings or hoist operations to drop off workers onto wind turbines. However, adverse weather conditions frequently limit helicopter offshore operations. In such scenarios, the application of aircraft-mounted sensors and obstacle databases together with helmet-mounted displays (HMDs) seems to offer great potential to improve the operational capabilities of the helicopters used. By displaying environmental information in a visually conformal manner, these systems mitigate the loss of visual reference to the surroundings. This helps the pilots maintain proper situational awareness. This paper analyzes the specific challenges of helicopter offshore operations in wind parks by means of an online survey and structured interviews with pilots and operators. Further, the work presents how our previously introduced concept of an HMD-based virtual flight deck could enhance helicopter offshore missions. The advantages of this system – for instance, its “see-through-the-airframe” capability and its highly flexible cockpit setup – enable us to design entirely novel pilot assistance systems. The knowledge gained will be used to develop a virtual cockpit that is tailor-made for helicopter offshore maneuvers.