This PDF file contains the front matter associated with SPIE Proceedings Volume 11019, including the Title Page, Copyright information, Table of Contents, and Author and Conference Committee lists.
The ability to fully operate rotary-wing aircraft in natural and aircraft-induced degraded visual environments (DVE) is a challenge that is being addressed by the US Army Aviation science and technology community. When the term DVE is discussed, most people think of the challenges associated with dust landings; however, the problem involves much more than the final landing phase of a mission. The objective of a DVE system is to enable aircrews to fly a mission under visual flight rules in a degraded environment and/or instrument meteorological conditions. Much like night vision goggles allowed Army Aviation to “own the night,” the capability to safely operate in any weather condition will provide aviators the ability to “own the environment.” The US Army Combat Capabilities Development Command Degraded Visual Environment – Mitigation (DVE-M) program is a multi-year demonstration program that is evaluating key technologies to enable full spectrum operations in degraded environments. The three main components of a DVE solution are flight controls, sensors, and cueing. Improvements in all three areas, along with the development of an advanced and modular architecture, will provide increased capability to safely operate in DVEs. In order to provide a true capability, these technologies must be integrated into an optimized solution consisting of fused sensor data, a comprehensive cueing solution, and advanced flight controls that include autonomous, sensor-driven guidance capabilities. The integrated system developed by the DVE-M program will be demonstrated in multiple locations with degraded environments and will build upon the test data and lessons learned from the 2016 and 2017 NATO flight trials.
During Operation Enduring Freedom and Operation Iraqi Freedom (OEF/OIF), there were 375 non-combat rotorcraft losses from October 2001 to September 2009. Human factors issues accounted for 78% of these non-combat losses, with controlled flight into terrain (CFIT) and night/degraded visual environment (DVE) operations as the leading causes. This high percentage reflects the low survivability rate of DVE/night mishaps. Normal human visual sensitivity has been shown to be highly variable under low-luminance, low-contrast conditions (Lattimore, 2017). Furthermore, Thibos et al. (2002) demonstrated significant structural aberration, with accompanying decreased image quality, in a normal population of dilated healthy eyes. Additionally, Watson (2015) computed widely varied human optical point spread functions in normal, dilated eyes. Finally, Bartholomew et al. (2016) assessed visual sensitivity under varied luminance in 234 healthy collegians; only 4.1% of those with photopic 20/20 acuity were predictive of mesopic acuity responsiveness. Based on these data, nearly all aviators may be at increased risk of poor DVE-related visual and flight performance. The greatest risk factor Army aviators face when confronted with DVE conditions is therefore perhaps the visual performance status of their own eyes. Contrast sensitivity testing under mesopic, naturally dilated night conditions, using any of the available contrast sensitivity tests, could be used to determine one's native or baseline capability under DVE conditions. The systematic compilation of such data could prompt the military services to re-evaluate their vision performance standards and restructure awareness training.
Army Aviation mission scenarios carry inherent risks. These risks stem from weather conditions, anticipated flight maneuvers, aircraft sensors and technologies, and from aviator training, experience, attentiveness, and physiological condition. We have initiated the development of an Aviator Risk Assessment Model (AvRAM) aimed at modeling the multitude of factors that interact in a highly complex fashion to affect flight safety and mission success. A single risk assessment score is derived, representing the summation of a range of weighted composite risk assessment scores based on mathematical modeling of mission complexity, operational stressors, and pilot particulars.
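As an illustration of this structure, the sketch below computes a single score as the weighted sum of per-category composites; the category names follow the abstract, but the factor names, weights, and [0, 1] normalization are placeholder assumptions, not the AvRAM model itself.

# Illustrative weighted composite risk score in the spirit of AvRAM.
# Factor names, weights, and the [0, 1] normalization are assumptions.

# Each category holds (weight, {factor: score in [0, 1]}).
CATEGORIES = {
    "mission_complexity": (0.40, {"night_ops": 0.8, "formation_flight": 0.5}),
    "operational_stressors": (0.35, {"weather": 0.6, "dust": 0.7}),
    "pilot_particulars": (0.25, {"experience": 0.2, "fatigue": 0.4}),
}

def composite_score(factors: dict) -> float:
    """Average the normalized factor scores within one category."""
    return sum(factors.values()) / len(factors)

def avram_score(categories: dict) -> float:
    """Sum the weighted per-category composite scores into a single score."""
    return sum(w * composite_score(f) for w, f in categories.values())

if __name__ == "__main__":
    print(f"Single risk assessment score: {avram_score(CATEGORIES):.3f}")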
Operating a helicopter in offshore wind parks in DVE caused by clouds or fog can endanger crew and aircraft due to the presence of unseen obstacles. Utilizing on-board sensors such as Lidar or radar, one can sense and record obstacles that are potentially dangerous. One major challenge is then to display the resulting raw sensor data in a way that the crew, and especially the pilot, can make use of without being distracted from their actual task. Augmented and mixed reality applications are expected to play an important role here. By displaying the data in a see-through helmet-mounted display (HMD), the pilot can be made aware of obstacles that are currently obscured by degraded visual conditions or even by parts of their own helicopter. Because all of this can be accomplished in a single HMD, no attention sharing between the outside view and a head-down instrument is necessary. DLR continuously tests and evaluates both flight-qualified and consumer-grade HMDs. One particularly well-known system is the Microsoft HoloLens. DLR will integrate this low-cost HMD into its test helicopter ACT/FHS. As a first step toward this goal, a Microsoft HoloLens was integrated into DLR's simulator AVES. In this paper the integration process is detailed. The simulation capabilities are described, especially for conformal, open-loop Lidar sensor data. Furthermore, first concepts of the display format are shown, and strengths and drawbacks of the HoloLens in a cockpit environment are discussed.
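To illustrate the conformal-display step such an integration must perform, the sketch below projects a world-fixed Lidar return through aircraft and head poses into HMD pixel coordinates; the frame chain and pinhole camera model are illustrative assumptions, not the AVES implementation.

import numpy as np

# Minimal sketch of conformal symbology placement: a world-fixed Lidar point
# is transformed through aircraft and head poses into HMD screen coordinates.

def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def project_conformal(p_world, T_world_to_body, T_body_to_head, f_px, cx, cy):
    """Project a world point into head-tracked display pixel coordinates."""
    p = np.append(p_world, 1.0)
    p_head = T_body_to_head @ T_world_to_body @ p   # world -> body -> head
    x, y, z = p_head[:3]
    if z <= 0:                  # behind the display plane; nothing to draw
        return None
    return (f_px * x / z + cx, f_px * y / z + cy)

# Example: identity aircraft and head poses, obstacle 50 m straight ahead.
T_wb = make_pose(np.eye(3), np.zeros(3))
T_bh = make_pose(np.eye(3), np.zeros(3))
print(project_conformal(np.array([0.0, 0.0, 50.0]), T_wb, T_bh,
                        f_px=1000.0, cx=640.0, cy=360.0))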
This paper describes the integration of a Degraded Visual Environment (DVE) system on a civil-certified H145 (BK117 D-2) helicopter. The DVE system consists of a LiDAR, an EVS camera, and a head-tracked head/helmet-mounted display system (HMD) integrated into the Helionix digital avionics suite. It presents a head-up display of flight symbology combined with Enhanced Vision System (EVS) imagery and 3D conformal symbology based on fused LiDAR and database data, and it also provides a Combined Vision System (CVS) display on the Helionix head-down displays. All sensors were prototypically integrated into the H145 demonstrator in a production-representative manner, allowing for potential series production of the system. The flight trials focused on military as well as civil HEMS missions and were used to verify the intended function and to evaluate installed DVE system performance.
Degraded visual environments remain a concern for national security applications that require continuous imaging. Fog, due to its prevalence in coastal regions, interrupts surveillance, harbor security, and transportation, with notable economic impact. Fog reduces visibility by scattering ambient light and active illumination, thereby obscuring the environment and limiting operational capacity. Here we present our work on two major capabilities for testing polarized light transport in fog-like conditions caused by water droplet aerosols. Sandia utilizes a fully polarimetric Monte Carlo simulation to predict transmission performance in varying degraded optical conditions. We have extended this simulation to cover the conditions produced by our second capability, which allows for repeatable testing of optical systems in a man-made fog analogue: a facility for performing optical propagation experiments in a known, stable, and controlled environment where fog can be made on demand. The facility is a temperature-controlled chamber, 180 ft by 10 ft by 10 ft, fitted with 80 agricultural spray nozzles. We discuss the characterization of the fog and the instrumentation used to perform it. We summarize and present new results from the work performed under the internally funded research program that developed Sandia's fog capabilities, including a short-wave infrared snapshot imaging polarimeter for enhanced contrast in degraded visual environments and investigations of the degradation of image quality in the long-wave infrared waveband with passive and active illumination.
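For orientation, a scalar skeleton of such a transmission Monte Carlo is sketched below: photons random-walk through a slab of given optical depth with Henyey-Greenstein scattering, and those exiting the far side are tallied. The actual Sandia simulation is fully polarimetric (tracking Stokes vectors through Mueller matrices); the asymmetry parameter, albedo, and photon count here are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def sample_hg(g: float) -> float:
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return rng.uniform(-1.0, 1.0)
    frac = (1 - g * g) / (1 - g + 2 * g * rng.uniform())
    return (1 + g * g - frac * frac) / (2 * g)

def slab_transmission(tau, g=0.85, albedo=0.99, n_photons=20000, max_events=500):
    """Fraction of launched photons exiting the far side of a slab of optical depth tau."""
    exited = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                      # depth (optical-depth units), z-direction cosine
        for _ in range(max_events):
            z += -np.log(rng.uniform()) * mu  # exponentially distributed free path
            if z >= tau:                      # crossed the far boundary: transmitted
                exited += 1
                break
            if z <= 0.0 or rng.uniform() > albedo:
                break                         # backscattered out of the slab, or absorbed
            # Update the z-direction cosine after scattering by angle acos(ct)
            # about the current direction, with uniform random azimuth phi.
            ct, phi = sample_hg(g), rng.uniform(0.0, 2.0 * np.pi)
            mu = mu * ct + np.sqrt(max(0.0, (1 - mu**2) * (1 - ct**2))) * np.cos(phi)
    return exited / n_photons

print(slab_transmission(tau=5.0))   # e.g. a visibility-limiting optical depth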
The attractive features of free-space Non-Line-Of-Sight (NLOS) communication systems operating in the ultraviolet C-band between 200 and 280 nm are the significantly reduced solar irradiance at ground level and the intense scattering which, in combination with strong absorption, ensures covertness against distant eavesdroppers or jammers. In the majority of the experimental surveys published so far, the performance of point-to-point links has been evaluated under clear atmosphere, without taking weather conditions into account. In this work, it is shown that harsh atmospheric conditions due to fog can be advantageous for short-distance NLOS transmissions at 265 nm. First, the impact of fog on the losses of the diffuse wireless channels was investigated theoretically. An experimental survey of both the losses and the performance of low-rate amplitude-modulated transmissions for two atmospheric cases followed. The satisfactory relation between scattering and absorption at 265 nm was first verified by deploying outdoor NLOS point-to-point links under clear atmosphere. The transmitter consisted of four Light Emitting Diodes, and the optical part of the receiver included a filter and a Photo-Multiplier Tube. The beneficial impact of artificially generated fog on scattering was then exploited not only to enhance system performance but also to identify the modification of the conditions. The experimental results showed a clear decrease of both the losses and the Bit Error Rate under fog conditions, making such a system a strong candidate for low-rate communications in a dense atmosphere.
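For context, the channel losses investigated above are commonly cast in the NLOS UV literature in the empirical form below; this standard model is given as a reference point and is not necessarily the exact formulation used in the paper.

% Common empirical NLOS UV path-loss model: r is the baseline range,
% xi the path-loss factor, alpha the path-loss exponent. Both depend on
% the Tx/Rx elevation-angle geometry and on the atmospheric scattering
% and absorption coefficients k_s and k_a.
L(r) = \xi \, r^{\alpha}

Fog increases the scattering coefficient k_s, redirecting more energy toward the receiver in a NLOS geometry and thereby reducing L(r), which is consistent with the reduced losses and Bit Error Rate reported above.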
Heavy fogs and other highly scattering environments pose a challenge for many commercial and national security sensing systems. Current autonomous systems rely on a range of optical sensors for guidance and remote sensing that can be degraded by highly scattering environments. In our previous and ongoing simulation work, we have shown that polarized light can increase signal or range through a scattering environment such as fog. Specifically, we have shown that circularly polarized light maintains its polarization through a larger number of scattering events, and thus over greater range, than linearly polarized light. In this work we present design and testing results for active polarization imagers at short-wave infrared and visible wavelengths. We explore multiple polarimetric configurations for the imager, focusing on linear and circular polarization states. Testing was performed in the Sandia Fog Facility, a 180 ft by 10 ft chamber that can create fog-like conditions for optical testing. This facility offers a repeatable fog scattering environment ideally suited to testing the imager's performance in fog. We show that circularly polarized imagers can penetrate fog better than linearly polarized imagers.
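In Stokes-vector terms, the linear and circular configurations mentioned above reduce to estimating the degrees of linear and circular polarization. A minimal sketch follows, assuming a six-measurement division-of-time polarimeter (0/45/90/135 degree linear plus right/left circular analyzers); actual imager configurations vary.

import numpy as np

def stokes(i0, i45, i90, i135, i_r, i_l):
    """Stokes vector (S0, S1, S2, S3) from analyzer intensity measurements."""
    s0 = i0 + i90            # total intensity (equals i45 + i135 and i_r + i_l)
    return np.array([s0, i0 - i90, i45 - i135, i_r - i_l], dtype=float)

def dolp(s):
    """Degree of linear polarization: sqrt(S1^2 + S2^2) / S0."""
    return np.hypot(s[1], s[2]) / s[0]

def docp(s):
    """Degree of circular polarization: |S3| / S0."""
    return abs(s[3]) / s[0]

# Example: light that stayed mostly circularly polarized through the fog.
s = stokes(i0=0.52, i45=0.50, i90=0.48, i135=0.50, i_r=0.9, i_l=0.1)
print(f"DoLP = {dolp(s):.2f}, DoCP = {docp(s):.2f}")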
Fog is a commonly occurring degraded visual environment which disrupts air traffic, ground traffic, and security imaging systems. For many applications of interest, spatial resolution is required to identify elements of the scene. However, studying the effects of fog on resolution degradation is difficult because the composition of naturally occurring fogs is variable and data collection is reliant on changing weather conditions. For our study, we used the Sandia National Laboratories fog facility to generate repeatable, characterized fog conditions. Sandia's well-characterized fog generation allowed us to relate the resolution degradation of active and passive long-wave infrared (LWIR) imagers to the properties of the fog. Additionally, the fogs we generated were denser than naturally occurring fogs, which allowed long-range imaging to be emulated over the shorter optical path lengths obtainable in a laboratory environment.
In this presentation, we experimentally investigate the resolution degradation of LWIR wavelengths at realistic fog droplet sizes. Transmission of LWIR wavelengths has been studied extensively in the literature; to date, however, there are few experimental results quantifying the resolution degradation of LWIR imagery in fog. We present experimental results on resolution degradation for both passive and active LWIR systems. The degradation of passive imaging was measured using a 37 °C blackbody with slant-edge resolution targets. The active imaging resolution degradation was measured using a polarized CO2 laser reflecting off a set of bar targets. We found that the relationship between meteorological optical range and resolution degradation was more complicated than can be described by attenuation alone.
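A slant-edge measurement of this kind reduces to the standard ESF-to-LSF-to-MTF chain. The sketch below shows those core steps on a one-dimensional, super-sampled edge profile; the edge-angle estimation and projection binning of a full ISO 12233-style analysis are omitted, and the synthetic erf-shaped edge is for illustration only.

import numpy as np
from scipy.special import erf

def mtf_from_edge(esf: np.ndarray) -> np.ndarray:
    """MTF from an edge spread function: differentiate to the line spread
    function, window to suppress spectral leakage, FFT, normalize to DC."""
    lsf = np.gradient(esf)                  # ESF -> LSF
    lsf *= np.hanning(lsf.size)             # taper the tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                     # normalize so MTF(0) = 1

# Synthetic blurred edge (erf-shaped, sigma = 2 px) as a stand-in for a
# measured slant-edge profile.
x = np.linspace(-20.0, 20.0, 400)
esf = 0.5 * (1 + erf(x / (np.sqrt(2) * 2.0)))
freq = np.fft.rfftfreq(esf.size, d=x[1] - x[0])   # cycles per pixel
print(mtf_from_edge(esf)[:5], freq[:5])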
A novel, low-cost, camera-based method of detecting Continuous Wave (CW) lasers has been developed at DSTL. The detector uses a simple optical modification to a standard colour camera, combined with image processing techniques, to distinguish lasers from other illumination sources and to measure the wavelength, direction, and irradiance of the laser light. Such a detector has applications in collecting information on aircraft laser dazzle incidents: providing the evidence required to report on aircrew laser exposure events and to assess whether engagements are eye safe. A prototype was developed using entirely Commercial Off-The-Shelf (COTS) components, costing ≈£600, and assessed under laboratory conditions, demonstrating the capability to measure laser wavelengths to ±5 nm and irradiances to ±10%. A realistic hand-held laser engagement scenario, using a range of relevant wavelengths and irradiances, was simulated during the Moonraker trial, where the prototype measured laser wavelengths to an accuracy of ±10 nm and peak irradiances to ±25%. Comparisons with a COTS laser detector showed equivalent performance. This technology offers a low-cost approach to CW laser detection that is capable of extracting a range of parameters whilst maintaining a relatively wide Field of View (FOV) and good angular resolution.
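The abstract does not specify the optical modification; one plausible realization, assumed here purely for illustration, is a transmission diffraction grating in front of the lens, in which case the laser wavelength follows from the displacement of the first-order spot via the grating equation.

import numpy as np

# Hypothetical grating-based wavelength recovery (the optic is an assumption):
# m * lambda = d * sin(theta), with tan(theta) = dx / f in the image plane.

def wavelength_nm(dx_px, focal_px, lines_per_mm, order=1):
    """Wavelength from the pixel displacement of the m-th diffraction order."""
    d_nm = 1e6 / lines_per_mm                # grating period in nm
    theta = np.arctan2(dx_px, focal_px)      # diffraction angle from pixel offset
    return d_nm * np.sin(theta) / order

# Example: 300 lp/mm grating, 2000 px focal length, first-order spot 325 px
# from the zero order -> roughly a green (~535 nm) laser pointer.
print(f"{wavelength_nm(325, 2000, 300):.0f} nm")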
Current degraded visual environment (DVE) solutions primarily support aviation and augment a pilot's ability to operate in a degraded environment, but information formatted for a pilot to navigate safely is not necessarily in the right format for dismounted operators or for mission planners in the command and control center. The need exists for a real-time 3D common operating picture (COP) generating system that can provide enhanced mission planning capabilities as well as the real-time status and location of operating forces within that 3D COP. The capabilities and challenges addressed are: 1) real-time processing of all disparate sensor data; 2) a database implementation that allows clients to query the COP for specific users, devices, timeframes, and locations; and 3) representation of the 3D COP to command and control elements as well as to forward-deployed users of HoloLens, flat panel display, and iOS devices. The proposed Real-time Intelligence Fusion Service (RIFS) will operate in real time by receiving disparate data streams from sensors such as LiDARs, radars, and various localization methods, fusing them into a COP, and sending the COP to requesting clients. RIFS would allow forward-deployed personnel and commanders to maintain a high degree of real-time passive situational awareness in 3D space, ultimately increasing operational tempo and significantly mitigating risk to forward-deployed forces.
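As a sketch of challenge 2), the fragment below shows the kind of query interface a COP store could expose, filtering tracked entities by device, timeframe, and 3D region; the entity fields, class names, and append-only ingest are assumptions for illustration, not the RIFS design.

import time
from dataclasses import dataclass, field

@dataclass
class TrackedEntity:
    entity_id: str
    device: str                            # e.g. "HoloLens", "iOS", "LiDAR-node"
    position: tuple                        # (x, y, z) in the common 3D frame
    timestamp: float = field(default_factory=time.time)

class CommonOperatingPicture:
    def __init__(self):
        self._entities = []

    def ingest(self, entity: TrackedEntity) -> None:
        """Fuse one localization update into the picture (append-only here)."""
        self._entities.append(entity)

    def query(self, device=None, since=None, bbox=None):
        """Filter the COP by device type, timeframe, and 3D bounding box."""
        def keep(e):
            if device and e.device != device:
                return False
            if since and e.timestamp < since:
                return False
            if bbox:
                lo, hi = bbox
                if not all(l <= p <= h for p, l, h in zip(e.position, lo, hi)):
                    return False
            return True
        return [e for e in self._entities if keep(e)]

cop = CommonOperatingPicture()
cop.ingest(TrackedEntity("alpha-1", "HoloLens", (12.0, 4.5, 0.0)))
print(cop.query(device="HoloLens"))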
A Commercial Aviation Safety Team (CAST) study of 18 loss-of-control events determined that a lack of external visual references was a contributing factor in 17 of them. CAST recommended that manufacturers develop and implement virtual day-VMC display systems, such as synthetic vision (SV) or equivalent systems (CAST Safety Enhancement, SE-200). In support of this recommended action, CAST has requested studies to define minimum requirements for virtual day-visual meteorological conditions (VMC) displays to improve flight crew awareness of airplane attitude. NASA's research into virtual day-VMC displays, known as synthetic vision systems, is intended to support intuitive flight crew attitude awareness similar to a day-VMC environment, especially if the displays can be designed to create visual dominance. A study was conducted to evaluate the utility of ambient vision (AV) cues paired with virtual Head-Up Display (HUD) symbology on a prototype head-worn display (HWD) during recovery from unusual attitudes in a simulated environment. The virtual-HUD component meets the requirement that the HWD may be used as an equivalent display to the HUD. The presence of AV cueing leverages the potential that a HWD has over the HUD for spatial disorientation prevention. The simulation study was conducted as a single-pilot operation, under realistic flight scenarios, with off-nominal events occurring that were capable of inducing unusual attitudes. Independent variables of the experiment included: 1) AV capability (on vs. off); 2) AV display opaqueness (transparent vs. opaque); and 3) display location (HWD vs. traditional head-down displays). AV cues were only present when the HWD was being worn by the subject pilot.
Use of binocular night vision devices by military aircrew has been associated with visual fatigue. Misalignment between the two binocular images may be one source of this fatigue and could degrade task performance. The eyes have some degree of tolerance to optical misalignment; however, published tolerance limits vary widely and may have limited relevance for military pilots flying long missions. We used a simulated flying task to investigate the effect of misalignment on task performance and visual fatigue. A simulated helicopter flying task was presented simultaneously with three secondary tasks relevant to the visual, auditory, and cognitive demands experienced by military aircrew. Task performance was objectively assessed using tracking errors, response times, and incorrect responses. Eight participants were exposed to a controlled level of optical misalignment by attaching customised lenses to a Tobii Pro 2 eye tracker. The misaligned condition was compared with an aligned condition with identical workload. Pupil diameter, peripheral skin temperature, and ECG data were collected during the task as objective markers of fatigue. Our results showed that misalignment induced significant degradation of task performance, both in terms of longer response times and an increased number of incorrect responses. Misalignment was also associated with significantly increased fatigue, as measured by reductions in peripheral skin temperature and pupil diameter. Moreover, typical task-related modulations in heart rate and heart rate variability were significantly impaired in the misaligned condition.
The macular pigment is an accumulation of the dietary carotenoids lutein, zeaxanthin, and meso-zeaxanthin throughout the retina, but principally in the region corresponding to the central 15° of the visual field. Since the macular pigment absorbs light in the 400 to 520 nm range, it acts as a spectral filter over the photoreceptors, attenuating incident light within the macular pigment absorption spectrum. The between-subject average macular pigment optical density is about 0.2 to 0.6 log units, with a range reportedly between 0 and 1.5 log units depending on the sampled population. Some people can increase their macular pigment optical density by increasing their consumption of lutein and zeaxanthin, which may have consequences for visibility in degraded visual environments (DVE). Specifically, nutritional and dietary interventions have produced statistically significant enhancements in visual tasks such as low-contrast target detection, contrast sensitivity, and glare resistance and recovery. The question is whether these changes are operationally meaningful. The present paper models macular pigment optical density effects on mesopic vision using the current CIE recommendation for scotopic-to-photopic weighting to define mesopic spectral sensitivity. Since the scotopic spectrum overlaps that of the macular pigment more than does the photopic spectrum, the effect of the macular pigment increases as vision transitions from photopic to scotopic conditions. Our mesopic visibility model, an elaboration of our previously reported photopic and scotopic models, captures this effect and applies it to light sources common in current cultural lighting and to reflectance spectra we previously evaluated and reported.
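For reference, the CIE recommended mesopic spectral sensitivity referred to above is a linear blend of the photopic and scotopic luminous efficiency functions:

% CIE recommended mesopic spectral sensitivity: V(lambda) is the photopic and
% V'(lambda) the scotopic luminous efficiency function; the adaptation
% coefficient m runs from 0 (fully scotopic) to 1 (fully photopic), and M(m)
% normalizes the blended function to a peak of unity.
V_{\mathrm{mes}}(\lambda) \;=\; \frac{m\,V(\lambda) + (1 - m)\,V'(\lambda)}{M(m)}

Because V'(λ) peaks near 507 nm, well inside the macular pigment absorption band, decreasing m increases the spectral overlap with the pigment, which is precisely the effect the visibility model above captures.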
This review aims to clarify the parameters affecting binocular rivalry, in order to improve comfort for users of monocular augmented reality devices. Augmented reality devices allow users to see virtual information superimposed on the environment. The particularity of monocular systems is that they do not stimulate the two eyes in the same way and can therefore induce binocular rivalry. This occurs when the brain is unable to merge the different images presented to each eye and perception alternates between them; it can cause visual fatigue, headache, and visual suppression. Binocular rivalry can be characterized in terms of alternation rate, predominance (the total proportion of the binocular rivalry viewing time for which a stimulus is dominant), and average dominance duration (across all individual dominance periods). The literature suggests that these variables depend on the conditions of use and on the visual stimuli available to the subject. Notably, several parameters have an impact, including contrast, spatial frequency, and brightness. The impact of other parameters, such as ocular dominance, remains the subject of debate. With respect to the latter, the literature describes various definitions and tests, and it appears that there are three main forms: motor, acuity, and sensory, the last being of interest for binocular rivalry.
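Written out from the definitions above, with T the total viewing time, t_{i,k} the k-th dominance period of stimulus i, and n_i the number of such periods:

% Predominance of stimulus i, and its average dominance duration.
P_i = \frac{1}{T}\sum_{k=1}^{n_i} t_{i,k},
\qquad
\bar{t}_i = \frac{1}{n_i}\sum_{k=1}^{n_i} t_{i,k}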
Head-worn displays (HWDs) and aircraft-mounted sensors are common means of supporting helicopter pilots who operate in degraded visual environments. The use of see-through HWDs is beneficial in brownout and adverse weather conditions because these displays can visualize occluded real-world features like the horizon, nearby obstacles, or the desired landing spot. The German Aerospace Center (DLR) investigates an enhanced vision concept called “Virtual Cockpit”. Instead of a see-through display, an immersive HWD is used to give helicopter pilots an enhanced out-the-window view. As shown in previous publications, virtual reality (VR) technology creates benefits for several applications. This contribution explores the advantages and limitations of displaying an exocentric perspective view on the VR glasses. Moving the pilot's eye point out of the cockpit to a viewpoint behind and above the aircraft appears to be especially useful in situations where the pilot's natural view is degraded by the structure of their own aircraft. Moreover, it is beneficial for certain maneuvers in which the real location of the pilot's eye is not optimal for capturing the whole situation. The paper presents results from a simulator study with 8 participants, in which the developed symbology was tested in confined-area hover and landing scenarios. The 3D exocentric perspective views increased spatial awareness in the tested scenarios and significantly reduced the required head motion. Further research is needed regarding attitude awareness with such displays. Apart from helicopter operations, the results may also be relevant for remote piloting solutions and for other types of vehicles with restricted external vision.
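A minimal sketch of constructing such an exocentric eye point is given below: the viewpoint is placed a fixed distance behind and above the aircraft and oriented to look at it. The offset distances, frame conventions, and look-at construction are illustrative assumptions, not DLR's implementation.

import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation whose forward axis points at the target."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    return np.stack([right, true_up, -fwd])   # rows: camera x, y, z axes

def exocentric_eye(ac_pos, ac_heading_rad, back=30.0, above=15.0):
    """Eye point 'back' metres behind and 'above' metres over the aircraft."""
    h = ac_heading_rad
    behind = -back * np.array([np.cos(h), np.sin(h), 0.0])
    return ac_pos + behind + np.array([0.0, 0.0, above])

# Example: aircraft at (100, 50, 20) m heading due "east" (90 deg).
ac = np.array([100.0, 50.0, 20.0])
eye = exocentric_eye(ac, np.deg2rad(90.0))
print(eye)
print(look_at(eye, ac))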
Following research reported by the authors to SID in 2017 and SPIE in 2018, this paper presents an expanded set of results from psychophysical research conducted to determine 255 Just Noticeable Color Differences (JNCDs). Given that transmissive displays will continue to create their color palette via color sub-pixel gray levels, and that the number of such gray levels will remain 255 (plus black) for most avionic, vetronic, and shipboard applications, the authors propose to identify a set of gamma values, unique to a transmissive display's three color channels, which conform to the intensity difference thresholds of the tritanoptic visual system. The research method, including procedure, equipment, stimuli, and test subjects, is identified. Results include raw data on the psychophysically determined intensity difference thresholds and a range of gammas for the red, green, and blue color channels of a hypothetical, JND-adapted, 256-gray-level transmissive display.
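The sketch below illustrates the role such per-channel gammas play, mapping an 8-bit gray level to normalized channel luminance; the gamma values shown are placeholders, not the psychophysically determined results reported in the paper.

# Gray-level-to-luminance mapping with per-channel gamma; the gamma values
# are placeholder assumptions, not the paper's measured results.

GAMMAS = {"red": 2.2, "green": 2.2, "blue": 2.2}

def channel_luminance(gray_level: int, channel: str) -> float:
    """Normalized luminance of one color channel for gray level 0..255."""
    return (gray_level / 255.0) ** GAMMAS[channel]

# Under a power-law encoding, the luminance step between adjacent gray levels
# grows with level; matching gamma to the eye's intensity-difference
# thresholds aims to make each step approximately one JND.
steps = [channel_luminance(g + 1, "green") - channel_luminance(g, "green")
         for g in (10, 128, 250)]
print(steps)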
Modern off-the-visor Helmet Mounted Display Systems (HMDS) present today's pilots with unprecedented and intuitive access to flight, tactical, and sensor information for advanced situational awareness, precision accuracy, and pilot safety in all lighting conditions, day and night. In particular, integrated night vision sensors provide pilots with enhanced situational awareness under Very Low Light Level (VLLL) conditions. This in turn increases safety and recovery rate during carrier flight operations, especially under the VLLL conditions encountered on a moonless, overcast night. A recent GAO report identified a technical risk specific to Helmet Mounted Displays operating under these conditions: light from the Liquid Crystal Display (LCD) backlight escapes through gaps between the pixels, creating a “green glow” on the screen and limiting the pilot's night-adapted vision. This is particularly problematic for night carrier landings under VLLL background luminance conditions. This presentation discusses the benefits and challenges associated with emissive microdisplay technologies for Helmet Mounted Display applications, and Organic LED (OLED) microdisplays in particular. Among the primary benefits of emissive display technologies is the potential to resolve the deficiency colloquially known as “green glow.” We discuss the challenges encountered during the development and integration of OLED microdisplays into an off-the-visor Helmet Mounted Display System.
There is emerging demand for multi-ship, sensor-based 3D world modeling (3DWM) to provide safe flight guidance in degraded visual environments (DVE). For illustration, we consider the leader-follower scenario, where the leader platform has the perception sensor suite (e.g., LIDAR, RADAR, Optical Structure from Motion (SfM)) used to create a dynamic 3DWM depicting objects in the environment. The 3DWM is then distributed to the follower platform for use in pilot cueing in DVE, to avoid obstacles or to reduce pilot workload. Leader navigation state errors can cause inaccurate placement of free/occupied space in the common mapping frame, which can lead to a safety-critical failure to avoid obstacles. To ensure safe obstacle avoidance, an overbound on the leader's navigation errors relative to the common mapping frame must be known with high confidence; a similar overbound must also be established for the follower vehicle. To support this need for high-integrity navigation, this paper adapts existing high-integrity navigation standards as a framework for this scenario. Based on this framework, a sample error decomposition is performed, and potential navigation solutions that meet these example requirements are defined. Finally, the modifications needed to the 3DWM measurement update process to accurately represent navigation uncertainty in the 3DWM are highlighted.
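One conservative way to fold a navigation-error overbound into an occupancy-style 3DWM update, sketched below, is to dilate every occupied cell by the overbound radius before declaring nearby space free. The grid layout, Chebyshev-ball dilation, and spherical overbound are illustrative assumptions, not the paper's method.

import numpy as np

def inflate_occupancy(occupied: np.ndarray, cell_size_m: float,
                      nav_overbound_m: float) -> np.ndarray:
    """Mark as occupied every cell within the navigation-error overbound of
    an occupied cell (3D grid; cubic dilation kept simple for the sketch)."""
    r = int(np.ceil(nav_overbound_m / cell_size_m))
    out = occupied.copy()
    for i, j, k in np.argwhere(occupied):
        out[max(i - r, 0):i + r + 1,
            max(j - r, 0):j + r + 1,
            max(k - r, 0):k + r + 1] = True
    return out

grid = np.zeros((20, 20, 20), dtype=bool)
grid[10, 10, 10] = True                     # one obstacle return from the leader
safe_map = inflate_occupancy(grid, cell_size_m=1.0, nav_overbound_m=2.5)
print(safe_map.sum(), "cells flagged occupied after inflation")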
Unmanned aerial vehicles (UAVs) have become widespread and are used for applications ranging from real estate marketing and bridge inspection to defense and military applications. These applications have in common some form of autonomous navigation that requires a good localization capability at all times. Most UAVs use a combination of global navigation satellite systems (GNSS) and an inertial measurement unit (IMU) to perform this task. Unfortunately, GNSS are subject to signal unavailability and all sorts of interference impeding the UAV's ability to self-localize. In this paper, we propose a new algorithm to perform localization in GNSS-denied environments using a relative visual localization technique. We developed a new measure, based on local feature points extracted with ORB, to estimate the likelihood that a previously captured image was taken at a position close to the current UAV location. The measure is embedded in a particle filter in which IMU data is used to reduce the number of images that must be analyzed to perform localization. The resulting method has shown significant improvements in both accuracy and execution time in comparison to previous approaches.
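A minimal sketch of such an ORB-based likelihood measure, using standard OpenCV calls, is given below; the Lowe-ratio threshold and the match-count normalization are illustrative choices, not necessarily those of the paper.

import cv2

# Fraction of good ORB descriptor matches as a likelihood proxy that a stored
# reference image was taken near the UAV's current position.

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def match_likelihood(query_img, ref_img, ratio=0.75):
    """Likelihood proxy in [0, 1] from Lowe-ratio-filtered ORB matches."""
    _, d1 = orb.detectAndCompute(query_img, None)
    _, d2 = orb.detectAndCompute(ref_img, None)
    if d1 is None or d2 is None:
        return 0.0
    good = 0
    for pair in matcher.knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good / max(len(d1), 1)

# In a particle filter, each particle's weight would be multiplied by the
# likelihood of the reference image nearest its hypothesized position, with
# the IMU-driven motion model pruning which images need to be compared at all.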
The U.S. Army has collected W-band Radar Cross Section (RCS) data on various rotorcraft landing area clutter types and potential hazards. In collecting these data, two unique features were observed that, to our knowledge, have not been reported in the literature:
• When assessing the clutter reflectivity (σo) of long, dry grass in flat terrain with moist soil, a very sharp reduction in the σo value was observed with decreasing elevation (EL) angle. However, σo began to increase rapidly when the EL angle became less than about 5°.
• When assessing the RCS of chain-link fencing, the peak response occurred at normal incidence for small EL angles. However, as the EL angle was increased, the off-normal angle of the maximum RCS response shifted by more than 35°.
The causes of these seemingly anomalous effects are described in this paper.