In this paper we propose a dynamic DBSCAN-based method to cluster and visualize unclassified and potentially dangerous obstacles in data sets recorded by a LiDAR sensor. The sensor delivers data sets at short time intervals, so a spatial superposition of multiple data sets is created. We use this superposition to build clusters incrementally. Knowledge about the position and size of each cluster is used to fuse clusters and to stabilize them across multiple time frames. Cluster stability is a key feature for providing a smooth, non-distracting visualization to the pilot. Only a few lines indicate the position of threatening unclassified points where a hazardous situation could arise if the helicopter comes too close. Clustering and visualization form part of an entire synthetic vision processing chain in which the LiDAR points support the generation of a real-time synthetic view of the environment.
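The following is a minimal sketch of the superposition-and-clustering idea described above, assuming scikit-learn's DBSCAN and an axis-aligned bounding-box criterion for fusing clusters; the function names and the parameter values (eps, min_samples, the fusion margin) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_superposition(scans, eps=1.5, min_samples=5):
    """Superimpose several consecutive LiDAR scans and cluster the result.

    scans: list of (N_i, 3) point arrays in a common world frame.
    Returns a dict mapping cluster label -> (min_corner, max_corner).
    """
    points = np.vstack(scans)  # spatial superposition of all scans
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    clusters = {}
    for lbl in np.unique(labels):
        if lbl == -1:          # DBSCAN marks noise points with -1
            continue
        members = points[labels == lbl]
        clusters[int(lbl)] = (members.min(axis=0), members.max(axis=0))
    return clusters

def should_fuse(box_a, box_b, margin=0.5):
    """Fusion criterion: axis-aligned boxes closer than `margin` metres.

    Applied to boxes from consecutive time frames, the same test can be
    used to match clusters over time and keep the visualization stable.
    """
    return bool(np.all(box_a[0] - margin <= box_b[1]) and
                np.all(box_b[0] - margin <= box_a[1]))
```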
Low-level helicopter operations in a Degraded Visual Environment (DVE) are still a major challenge and bear the risk of potentially fatal accidents. DVE generally encompasses all degradations of the pilot's visual perception, ranging from night conditions through rain and snowfall to fog, and possibly even blinding sunlight or unstructured outside scenery. Each of these conditions reduces the pilot's ability to perceive visual cues in the outside world, degrading performance and ultimately increasing the risk of mission failure and of accidents such as Controlled Flight Into Terrain (CFIT). The basis for the presented solution is a fusion of processed and classified high-resolution ladar data with database information, with the potential to also include other sensor data such as forward-looking or 360° radar. This paper reports on a pilot assistance system that aims to give the essential visual cues back to the pilot by displaying 3D-conformal cues and symbols in a head-tracked Helmet Mounted Display (HMD), combined with a synthetic view on a head-down Multi-Function Display (MFD). Each flight phase and each flight envelope requires different symbology sets and different possibilities for the pilots to select specific support functions. Several functionalities have been implemented and tested in a simulator as well as in flight. The symbology ranges from obstacle warning symbology through terrain enhancements such as grids or ridge lines to various waypoint symbols supporting navigation. While some adaptations can be automated, it emerged as essential that symbology characteristics and completeness can be selected by the pilot to match the relevant flight envelope and outside visual conditions.
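As a purely hypothetical illustration of such phase-dependent symbology selection with a pilot override, the sketch below maps flight phases to symbol sets; the phase names and symbol sets are invented for this example and are not the system's actual configuration.

```python
# Default symbology per flight phase (invented example values).
DEFAULT_SYMBOLOGY = {
    "cruise":    {"ridge_lines", "waypoints", "obstacle_warnings"},
    "low_level": {"terrain_grid", "wire_symbols", "obstacle_warnings"},
    "landing":   {"landing_zone_marker", "drift_vector", "obstacle_warnings"},
}

def active_symbology(phase, pilot_on=frozenset(), pilot_off=frozenset()):
    """Automatic set for the flight phase, adjusted by pilot selection."""
    base = DEFAULT_SYMBOLOGY.get(phase, set())
    return (base | set(pilot_on)) - set(pilot_off)

# Example: in low-level flight the pilot disables the terrain grid.
print(active_symbology("low_level", pilot_off={"terrain_grid"}))
```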
Helicopter pilots often have to deal with bad weather conditions and degraded views. Such situations may significantly decrease the pilots' situational awareness. The worst-case scenario is a complete loss of visual reference during an off-field landing due to brownout or whiteout. In order to increase the pilots' situational awareness, helicopters nowadays are equipped with different sensors that gather information about the terrain ahead of the helicopter. Synthetic vision systems capture and classify sensor data and visualize them on multi-function displays or the pilots' head-up displays. This requires the input data to be reliably classified into obstacles and ground.
In this paper, we present a regularization-based terrain classifier. Regularization is a popular segmentation method in computer vision and is used in active contours. For a real-time application scenario with LiDAR data, we developed an optimization that uses different levels of detail depending on the accuracy of the sensor. After a preprocessing step in which points that cannot be ground are removed, the method fits a shape underneath the recorded point cloud. Once this shape is calculated, the points on or below it are classified as ground and thus distinguished from elevated objects. Finally, we demonstrate the quality of our segmentation approach by applying it to operational flight recordings. This method forms part of an entire synthetic vision processing chain in which the classified points support the generation of a real-time synthetic view of the terrain as an assistance tool for the helicopter pilot.
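To make the ground-fitting step concrete, here is a toy sketch that replaces the paper's regularization with a grid-based lower envelope plus simple smoothing; the cell size, tolerance, and filter width are assumptions, and the accuracy-dependent levels of detail of the real method are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def classify_ground(points, cell=2.0, tolerance=0.3, smooth=3):
    """Label each (x, y, z) point as ground if it lies near a smoothed
    lower envelope fitted underneath the point cloud.

    points: (N, 3) array; returns a boolean mask (True = ground).
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                       # shift to non-negative indices
    env = np.full(ij.max(axis=0) + 1, np.inf)
    np.minimum.at(env, (ij[:, 0], ij[:, 1]), points[:, 2])  # per-cell min z
    env[np.isinf(env)] = env[~np.isinf(env)].max()          # fill empty cells
    env = uniform_filter(env, size=smooth)     # crude stand-in for regularization
    return points[:, 2] <= env[ij[:, 0], ij[:, 1]] + tolerance
```

Points above the smoothed envelope by more than the tolerance are treated as elevated objects; everything else is ground.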
One of the major causes of hazardous situations in aviation is a lack of pilot situational awareness. Common causes of degraded situational awareness are brownout and whiteout situations, low-level flights, and flights in DVE. In this paper, we propose Advanced Synthetic Vision (ASV), a modern situational awareness solution. ASV combines Synthetic Vision and Enhanced Vision in order to provide the pilot with the most timely information without being restricted to the spatial coverage of the synthetic representation. The advantages over a common Enhanced Synthetic Vision System are the following: (1) ASV uses 3D ladar data instead of a 2D sensor; the 3D point cloud is classified in real time to distinguish between ground, wires, poles, and buildings. (2) The classified sensor data is fused with onboard database contents such as elevation or obstacles. The entire data fusion is performed in 3D, i.e., the output is a merged 3D scenario instead of a blended 2D image; once the sensor stops recording due to occlusion, ASV switches to pure database mode. (3) The merged data is passed to a 3D visualization module, which is fully configurable in order to support synthetic views on head-down displays as well as more abstract augmented representations on helmet-mounted displays. (4) The extendable design of ASV supports the graphical linking of functions such as 3D landing aid, TAWS, or navigation aids (a minimal pipeline sketch follows).
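Below is a minimal, hypothetical sketch of the ASV data flow in (1)-(4); the class and method names (Frame, ASVPipeline, fuse, render) are illustrative assumptions rather than the authors' API.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    points: list        # classified 3D points (ground, wires, poles, buildings)
    sensor_valid: bool  # False when the ladar is occluded or stops recording

@dataclass
class ASVPipeline:
    database: dict                     # onboard elevation / obstacle database
    visualizers: list = field(default_factory=list)  # head-down and HMD renderers

    def process(self, frame: Frame):
        if frame.sensor_valid:
            scene = self.fuse(frame.points, self.database)  # merged 3D scenario
        else:
            scene = {"database": self.database}             # pure database mode
        for view in self.visualizers:   # each renderer is configured separately
            view.render(scene)

    @staticmethod
    def fuse(points, database):
        # Placeholder: a real fusion would merge sensor geometry with the
        # database's elevation and obstacle layers into one 3D scenario.
        return {"sensor": points, "database": database}
```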
KEYWORDS: Visualization, Brain activation, Brain, Functional magnetic resonance imaging, Volume rendering, Digital video recorders, Opacity, Particles, Convolution, 3D metrology
Modern medical imaging provides a variety of techniques for the acquisition of multi-modality data. A typical
example is the combination of functional and anatomical data from functional Magnetic Resonance Imaging
(fMRI) and anatomical MRI measurements. Usually, the data resulting from each of these two methods is
transformed to 3D scalar-field representations to facilitate visualization. A common method for the visualization
of anatomical/functional multi-modalities combines semi-transparent isosurfaces (SSD, surface shaded display)
with other scalar visualization techniques like direct volume rendering (DVR). However, partial occlusion and
visual clutter that typically result from the overlay of these traditional 3D scalar-field visualization techniques
make it difficult for the user to perceive and recognize visual structures. This paper addresses these perceptual issues with a new visualization approach for anatomical/functional multi-modalities. The idea is to reduce the occlusion effects of an isosurface by replacing its surface representation with a sparser line representation. These lines are chosen along the principal curvature directions of the isosurface and rendered with a flow visualization method called line integral convolution (LIC). Applying the LIC algorithm yields fine line structures that improve the perception of the isosurface's shape such that it can be rendered with low opacity values. An interactive visualization is achieved by executing the algorithm entirely on the graphics processing
unit (GPU) of modern graphics hardware. Furthermore, several illumination techniques and image compositing
strategies are discussed for emphasizing the isosurface structure. We demonstrate our method for the example
of fMRI/MRI measurements, visualizing the spatial relationship between brain activation and brain tissue.
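For readers unfamiliar with LIC, the toy 2D version below shows the core idea of convolving a noise texture along field lines; the paper's method instead traces principal curvature directions over an isosurface and runs entirely on the GPU, neither of which this CPU sketch attempts.

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10, step=0.5):
    """Convolve `noise` along streamlines of the vector field (vx, vy)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):          # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break                  # streamline left the image
                    total += noise[i, j]
                    count += 1
                    norm = np.hypot(vx[i, j], vy[i, j]) or 1.0
                    px += sign * step * vx[i, j] / norm  # advect along field
                    py += sign * step * vy[i, j] / norm
            out[y, x] = total / max(count, 1)
    return out

# Example: smearing white noise along a circular field reveals its streamlines.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
img = lic_2d(-(ys - 32), xs - 32, np.random.rand(64, 64))
```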