This PDF file contains the front matter associated with SPIE Proceedings Volume 9653, including the Title Page, Copyright information, Table of Contents, Introduction, Authors, and Conference Committee listing.
This new security development is expected to increase interest from Northern European states in supporting the development of conceptually new stealthy ground platforms, incorporating a decade of advances in technology and experience from stealth platforms at sea and in the air. The scope of this case study is to draw on experience from where development left off. At the end of the 1990s there was growing interest in stealth for combat vehicles in Sweden, and an ambitious technology-demonstrator project was launched. One of its outcomes was a proposed Systems Engineering process tailored for signature management, presented to SPIE in 2002 (Olsson et al., "A systems approach...", Proc. SPIE 4718). The process was used by BAE Systems Hägglunds AB in the development of a Swedish multirole armored platform (Swedish acronym SEP). Before development was completed, Swedish procurement policy shifted from domestic development towards Government Off-The-Shelf, preceded by a Swedish Armed Forces change of focus from purely national defense towards expeditionary missions. Lessons learned of value for future development are presented. They are deduced from interviews with key personnel, on the procurement and industry sides respectively, and from document reviews.
In the development of visual (VIS) and infrared (IR) camouflage for signature management, the aim is to design the surface properties of an object to spectrally match or adapt to a background, thereby minimizing the contrast perceived by a threatening sensor. The so-called 'ladder model' relates the requirements for task measure of effectiveness to surface structure properties through the intermediate steps of signature effectiveness and object signature. It is intended to link material properties via platform signature to military utility, and vice versa. Spectral design of a surface aims to give it the desired wavelength-dependent optical response for a specific application of interest. Six evaluation criteria were stated, with the aim of aiding the process of setting requirements on camouflage and of evaluating it. The six criteria correspond to properties such as reflectance, gloss, emissivity, and degree of polarization, as well as dynamic properties and broadband or multispectral properties. These criteria have previously been exemplified on different kinds of materials and investigated separately. Anderson and Åkerlind further point out that the six criteria have rarely been considered or described together in one and the same publication. The required level of each property must be specified individually for each specific situation and environment in order to minimize the contrast between a target and a background. The criteria, or properties, are not fully independent of one another; how they are correlated is part of the theme of this paper. Owing to limited space, however, priorities have been set, and not all interconnections between the six criteria are considered in this work.
The ladder step preceding the choice of suitable materials and structures (not covered here) comprises the object signature and the decision of what the spectral response should be for a specific environment. The chosen spectral response should give a low detection probability (DP). How detection probability connects to image-analysis tools and to the implementation of the six criteria is part of this work.
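As a toy illustration of how a spectral contrast measure might feed such a detection-probability assessment, the sketch below computes an RMS Weber contrast between hypothetical target and background reflectance spectra. The metric, function name and sample values are illustrative assumptions, not the paper's actual method:

```python
import math

def rms_spectral_contrast(target_refl, background_refl):
    """RMS Weber contrast between target and background reflectance
    spectra sampled at the same wavelengths (0 = perfect spectral match)."""
    terms = [((t - b) / b) ** 2 for t, b in zip(target_refl, background_refl)]
    return math.sqrt(sum(terms) / len(terms))

# Hypothetical NIR reflectance samples at four wavelengths.
background = [0.40, 0.45, 0.50, 0.55]
matched    = [0.40, 0.45, 0.50, 0.55]   # ideal camouflage
mismatched = [0.20, 0.30, 0.60, 0.70]

print(rms_spectral_contrast(matched, background))     # 0.0
print(rms_spectral_contrast(mismatched, background))  # ~0.34
```

A real requirement would weight this contrast by sensor spectral response and scene statistics; the point here is only that each of the six criteria can in principle be reduced to a scalar contrast entering a DP estimate.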
We present a methodology to determine the camouflage effectiveness of static nets in a SAR image. There is currently no commonly recognized methodology for this within the signature-management community. One step towards establishing a common methodology is to use a standardized target to be camouflaged; we use the STANdard Decoy for CAmouflage Materials (STANDCAM) target developed by the German Army, WTD 52, Oberjettenberg. As the first step of the method, an ISAR measurement of the STANDCAM with a camouflage configuration is acquired. The ISAR data is then blended with SAR data acquired in field trials, and in the final SAR image a contrast metric between the target and the background is extracted. This contrast is taken as the measure of camouflage effectiveness. As an example, we present ISAR measurements and determine the camouflage effectiveness in a SAR image, using SAR blending, for static nets with different electrical conductivity and design. The methodology thus provides a measure of the effectiveness of a static net on the STANDCAM target.
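The abstract does not define the contrast metric itself; a minimal sketch of one plausible choice, the mean target-to-background intensity ratio in decibels over hypothetical pixel sets, might look like this (values invented for illustration):

```python
import math

def contrast_db(target_pixels, background_pixels):
    """Mean target-to-background intensity ratio, expressed in dB.
    Lower values indicate more effective camouflage."""
    mt = sum(target_pixels) / len(target_pixels)
    mb = sum(background_pixels) / len(background_pixels)
    return 10.0 * math.log10(mt / mb)

# Hypothetical intensities (linear power units) from a blended SAR image.
bare_target = [9.0, 12.0, 10.5, 11.0]   # uncamouflaged decoy pixels
netted      = [1.4, 1.1, 1.3, 1.2]      # decoy under a conductive net
clutter     = [1.0, 0.9, 1.2, 1.1]      # surrounding background pixels

print(contrast_db(bare_target, clutter))
print(contrast_db(netted, clutter))
```

An effective net drives the contrast toward 0 dB, i.e. the target becomes statistically indistinguishable from the clutter.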
Detection of a camouflaged object in natural scenes requires the target to be distinguishable from its local background. The development of any new camouflage pattern therefore has to rely on a well-founded test methodology, correlated with the final purpose of the pattern, as well as an evaluation procedure containing the optimal criteria for (i) discriminating between the targets and, eventually, (ii) producing a final ranking of the targets.
In this study we present results from a recent camouflage assessment trial in which human observers were used in a search-by-photo methodology to assess generic test camouflage patterns. We conducted the study to investigate possible improvements in camouflage patterns for battle dress uniforms, as a comparative study of potential generic patterns intended for use in arid (sparsely vegetated, semi-desert) areas.
We developed a test methodology intended to be simple, reliable and realistic with respect to the operational benefit of camouflage. We therefore chose to conduct a human-based observer trial founded on imagery of realistic targets in natural backgrounds. Inspired by a recent, similar trial in the UK, we developed new, purpose-built software to conduct the observer trial. Our preferred assessment methodology, the observer trial, was based on target recordings in 12 different but operationally relevant scenes, collected in a dry and sparsely vegetated area (Rhodes). The scenes were chosen with the intention of spanning as broadly as possible. The targets were human-shaped mannequins, situated identically in each scene to allow a relative comparison of camouflage effectiveness per scene. Significance tests among the targets' performances were carried out with non-parametric tests, as the corresponding detection-time distributions were in general difficult to parameterize.
From the trial, comprising 12 different scenes from sparsely vegetated areas, we collected detection-time distributions for 6 generic targets through visual search by 148 observers. We found that the targets performed differently within a single scene, as shown by their corresponding detection-time distributions. Furthermore, we obtained an overall ranking across all 12 scenes by performing a weighted sum over scenes, intended to retain as much of the vital information on the targets' signature effectiveness as possible. Our results show that it was possible to measure the targets' performance relative to one another also when summing over all scenes.
We also compared the ranking based on our preferred criterion (detection time) with a secondary one (probability of detection) to assess the sensitivity of a final ranking to the test set-up and evaluation criterion. We found our observer-based approach well suited to discriminating between similar targets and to assigning numeric values to the observed differences in performance. We believe the approach will serve well as a tool whenever different aspects of camouflage are to be evaluated and understood further.
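The non-parametric comparison of detection-time distributions can be sketched with a rank-sum (Mann-Whitney) U statistic. The implementation below and the sample times are illustrative only, not the trial's actual data or software:

```python
def mann_whitney_u(a, b):
    """U statistic of sample a versus sample b, using average ranks for ties.
    A large U means the values in a tend to be larger than those in b."""
    pooled = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                       # extend over a run of tied values
        avg_rank = (i + j) / 2.0 + 1.0   # average 1-based rank of the run
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r_a = sum(ranks[: len(a)])
    return r_a - len(a) * (len(a) + 1) / 2.0

# Hypothetical detection times (seconds) for two candidate patterns.
pattern_a = [3.1, 4.7, 5.0, 8.2, 12.5]
pattern_b = [1.2, 1.9, 2.4, 3.5, 4.0]

print(mann_whitney_u(pattern_a, pattern_b))  # 23.0 of a maximum 25
```

A large U for pattern_a says observers took longer to find it, i.e. it camouflages better; the statistic makes no assumption about the shape of the detection-time distributions, which is why such tests suit data that resist parameterization.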
This paper presents a survey of the main applicable technical principles and mechanisms for adaptive camouflage in the visible (VIS) and infrared (IR) spectral ranges. All principles are described by their method of operation and by technical data such as the active spectral range, the degree and speed of adaptation, weight, power consumption, robustness, usability, lifetime, and technology readiness level (TRL). The paper allows the different principles to be compared and assessed with regard to their application in an adaptive camouflage system.
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it complemented the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is currently integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain, taking advantage of recent advances in GPU computing techniques. Recent evolutions concern mainly the realistic, physical rendering of reflections; the rendering of both radiative and thermal shadows; the use of procedural techniques for managing and rendering very large terrains; the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures; and lastly, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests, and is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs with experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.
Guidance of weapon systems relies on sensors to analyze target signatures. Defense weapon systems also need to detect and then identify threats, likewise using sensors. Sensor performance depends strongly on conditions, e.g. time of day, atmospheric propagation and background. Visible cameras are very efficient in diurnal fine-weather conditions, long-wave infrared sensors for night vision, and radar systems for seeing through the atmosphere and/or foliage. Moreover, multi-sensor systems, combining several collocated sensors with associated fusion algorithms, provide better efficiency (typically for Enhanced Vision Systems). But these sophisticated systems are all the more difficult to conceive, assess and qualify. In that frame, multi-sensor simulation is much needed. This paper focuses on multi-sensor simulation tools.
The first part reviews the state of the art of such simulation workbenches, with a special focus on SE-Workbench. SE-Workbench is described with regard to infrared/EO sensors, millimeter-wave sensors, active EO sensors and GNSS sensors. Then a general overview of the objectives of simulating target and background signatures is presented, depending on the type of simulation required (parametric studies, open-loop simulation, closed-loop simulation, hybridization of software simulation and hardware, etc.). After this review, the paper presents some basic requirements for simulation implementation, such as deterministic behavior, mandatory for repeating a simulation many times in parametric studies. Several technical topics are then discussed, such as the rendering technique (ray tracing vs. rasterization), the implementation (CPU vs. GPGPU) and the trade-off between physical accuracy and computational performance. Examples of results using SE-Workbench are shown and discussed.
An infrared camera serving as a weapon subsystem for automatic guidance is a key component of military platforms such as missiles. The associated image processing, which controls the navigation, needs to be assessed intensively. Experimentation in the real world is very expensive, which is the main reason why hybrid simulation, also called Hardware-In-the-Loop (HWIL), is increasingly required. In that field, IR projectors can cast fluxes of IR photons directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images to be displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its product SE-FAST-HWIL, presenting the methodology and lessons learned at OKTAL-SE, with examples in the frame of the SE-Workbench. The presentation focuses on trials on real, operationally complex 3D cases. In particular, three topics that are very sensitive with regard to image-generator performance are detailed: 3D sea-surface representation, particle-system rendering (especially to simulate flares), and sensor-effects modelling. Beyond "projection mode", some information is given on the new SE-FAST-HWIL capabilities dedicated to "injection mode".
In the field of early warning, one depends on reliable image exploitation: only if the applied detection and tracking algorithms work efficiently can the threat-approach alert be given fast enough to ensure automatic initiation of the countermeasure. In order to evaluate the performance of those algorithms for a certain electro-optical (EO) sensor system, test sequences need to be created as realistically and comprehensively as possible. Since both background and target signatures depend on the environmental conditions, detailed knowledge of the meteorology and climatology is necessary. Trials measuring these environmental characteristics serve as a solid basis, but may only represent the conditions during a rather short period of time. To represent the entire variation of meteorology and climatology that the future system will be exposed to, the application of comprehensive atmospheric modelling tools is essential.
This paper gives an introduction to the atmospheric modelling tools currently used at Fraunhofer IOSB to simulate spectral background signatures in the infrared (IR) range. It also demonstrates how those signatures are affected by changing atmospheric and climatic conditions. In conclusion, and with a special focus on the modelling of different cloud types, sources of error and limits are discussed.
Every vessel moving in the sea imprints a perturbation on the wave structure of the sea and forms a so-called wake. These wakes can be used to detect a target and can also help in identifying its characteristics. Several studies have concentrated on detecting a target wake using either radar or infrared sensors. We model the infrared and radar signatures of the wake and the sea-surface background and investigate the synergy between the two bands. The primary goal of this work is a comparative study of the two bands, in order to discriminate which sensor gives the more reliable detection in which scenario.
Reflectance spectra of vegetative background areas are measured and their variation is analyzed. It is shown that the variation across different samples is significantly larger than the measurement accuracy. Furthermore, differences between various measurement procedures are discussed.
The Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) is a core component of the Met Office NEON Tactical Decision Aid (TDA). Within NEON, the HT-FRTC has for a number of years been used to predict the infrared apparent thermal contrasts between different surface types as observed by an airborne sensor. To achieve this, the HT-FRTC is supplied with the inherent temperatures and spectral properties of these surfaces (i.e. ground target(s) and backgrounds). A key strength of the HT-FRTC is its ability to take into account the detailed properties of the atmosphere, which in the context of NEON tend to be provided by a Numerical Weather Prediction (NWP) forecast model. While water vapour and ozone are generally the most important gases, additional trace gases are now being incorporated into the HT-FRTC. The HT-FRTC also includes an exact treatment of atmospheric scattering based on spherical harmonics. This allows for the treatment of several different aerosol species and of liquid and ice clouds. Recent developments can even account for rain and falling snow. The HT-FRTC works in Principal Component (PC) space and is trained on a wide variety of atmospheric and surface conditions, which significantly reduces the computational requirements regarding memory and processing time. One clear-sky simulation takes approximately one millisecond at the time of writing. Recent developments allow the training of HT-FRTC to be both completely generalised and sensor independent. This is significant as the user of the code can add new sensors and new surfaces/targets by supplying extra files which contain their (possibly classified) spectral properties. The HT-FRTC has been extended to cover the spectral range of Photopic and NVG sensors. One aim here is to give guidance on the expected, directionally resolved sky brightness, especially at night, again taking the actual or forecast atmospheric conditions into account. 
Recent developments include light level predictions during the period of twilight.
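The PC-space approach that makes such fast simulation possible can be illustrated in miniature: train a principal-component basis on a set of low-rank synthetic "spectra" via an SVD, then represent any spectrum by a handful of scores instead of hundreds of channels. All data, dimensions and names below are invented; this is not the HT-FRTC code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 500 "spectra" on 200 wavelength bins, each a
# random abundance mixture of three fixed Gaussian endmembers (rank 3).
wl = np.linspace(0.0, 1.0, 200)
endmembers = np.exp(-((wl - np.array([0.2, 0.5, 0.8])[:, None]) ** 2) / 0.02)
abundances = rng.dirichlet(np.ones(3), size=500)
train = abundances @ endmembers

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:10]                       # keep the 10 leading components

def to_pc(spectrum):
    """Project a spectrum onto the truncated PC basis."""
    return basis @ (spectrum - mean)

def from_pc(scores):
    """Reconstruct a spectrum from its PC scores."""
    return mean + basis.T @ scores

recon_err = np.abs(from_pc(to_pc(train[0])) - train[0]).max()
print(recon_err)   # tiny: 10 scores stand in for 200 channels
```

Working on the scores rather than the full spectral grid is what cuts memory and run time; the cost is that the basis must be trained on conditions broad enough to cover whatever the code later encounters, which is exactly why generalized, sensor-independent training matters.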
The Fraunhofer thermal object model (FTOM) predicts the temperature of an object as a function of the environmental conditions. The model has an outer layer exchanging radiation and heat with the environment and, beneath it, a stack of layers modifying the thermal behavior. The innermost layer is at a constant or variable temperature called the core temperature. The properties of the model (six parameters) are fitted to minimize the difference between the prediction and a time series of measured temperatures. The model can be used for very different objects, such as backgrounds (e.g. meadow, forest, stone, or sand) or vehicles.
The two-dimensional enhancement was developed to model more complex objects with non-planar surfaces and heat conduction between adjacent regions. In this model we call the small, thermally homogeneous, interacting regions thermal pixels. For each thermal pixel, the orientation and the identities of the adjacent pixels are stored in an array. In this version, seven parameters have to be fitted. The model is limited to convex geometry in order to reduce the complexity of the heat exchange and to allow a higher number of thermal pixels.
To test the model, time series of thermal images of a test object (CUBI) were analyzed. The square sides of the cubes were modeled as 25 thermal pixels (5 × 5). In the time series of thermal images, small areas the size of the thermal pixels were analyzed to generate data files that can easily be read by the model.
The program was developed in MATLAB, with the final version in C++ using the OpenMP multiprocessor library. The differential equation for heat transfer is the time-consuming part of the computation and was programmed in C. The comparison shows good agreement between the measured temperatures and both the fitted and the unfitted thermal pixels, indicating the model's ability to predict the temperatures of the whole object.
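To make the layered heat-balance idea concrete, the sketch below integrates a one-dimensional stack: the outer layer exchanges solar load, convection and thermal radiation with the environment, interior layers conduct, and a fixed core sits at the bottom. Every coefficient, the radiation-at-air-temperature simplification, and the layer count are hypothetical choices for illustration, not the FTOM's fitted parameters:

```python
def step(temps, t_air, solar, core, dt,
         h_conv=10.0, absorptance=0.6, emissivity=0.9, cond=5.0, cap=2.0e4):
    """One explicit Euler step for a stack of layer temperatures (K).
    temps[0] is the outer surface; a fixed core lies below temps[-1].
    Surroundings are crudely assumed to radiate at air temperature."""
    sigma = 5.670e-8                        # Stefan-Boltzmann constant
    new = temps[:]
    q0 = (absorptance * solar               # absorbed solar load (W/m2)
          + h_conv * (t_air - temps[0])     # convective exchange
          + emissivity * sigma * (t_air ** 4 - temps[0] ** 4)
          + cond * (temps[1] - temps[0]))   # conduction into the stack
    new[0] = temps[0] + dt * q0 / cap
    for i in range(1, len(temps)):          # interior layers: conduction only
        below = core if i == len(temps) - 1 else temps[i + 1]
        qi = cond * (temps[i - 1] - temps[i]) + cond * (below - temps[i])
        new[i] = temps[i] + dt * qi / cap
    return new

temps = [280.0, 280.0, 280.0]               # three layers, all at 280 K
for _ in range(3600):                       # one sunny hour, dt = 1 s
    temps = step(temps, t_air=285.0, solar=600.0, core=285.0, dt=1.0)
print(temps)                                # the outer layer warms fastest
```

Fitting a handful of such coefficients to a measured temperature time series is the essence of the parameter estimation described above; the thermal-pixel version repeats this balance per region and adds lateral conduction between neighbours.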
The two most important atmospheric transmission bands in the infrared are the midwave band at 3–5 μm and the longwave band at 8–12 μm. For a given infrared detector, a common question is: which of the two, if either, gives the better performance? The question is not without controversy. In this study, an analysis is undertaken to assess the relative merits of infrared detectors operating in either spectral band, based on the recently defined thermal figure of merit known as the detected thermal contrast. Under ideal limiting conditions, by considering a range of targets whose spectral emissivities vary as functions of both wavelength and temperature, we impugn a number of results previously reported in the literature regarding detector performance as measured by the detected thermal contrast. For the two broad types of detectors considered, the midwave band is found to give the better performance, for both thermal and quantum detectors, across a range of target types.
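The flavor of such a band comparison can be reproduced in outline directly from the Planck function: integrate the in-band radiance of a 300 K blackbody and compare the fractional change per kelvin in each band. This is only a simplified stand-in for the paper's detected thermal contrast, ignoring atmosphere, emissivity variation and detector response:

```python
import math

C1 = 1.191042972e-16   # W*m^2, first radiation constant (2hc^2)
C2 = 1.438777e-2       # m*K, second radiation constant (hc/k)

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return C1 / (lam ** 5 * (math.exp(C2 / (lam * T)) - 1.0))

def band_radiance(lo, hi, T, n=2000):
    """Trapezoid-integrated in-band radiance between lo and hi (metres)."""
    dl = (hi - lo) / n
    vals = [planck(lo + i * dl, T) for i in range(n + 1)]
    return dl * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def relative_contrast(lo, hi, T, dT=0.1):
    """(1/M) dM/dT: fractional in-band radiance change per kelvin."""
    m = band_radiance(lo, hi, T)
    return (band_radiance(lo, hi, T + dT) - m) / (dT * m)

T = 300.0
mwir = relative_contrast(3e-6, 5e-6, T)
lwir = relative_contrast(8e-6, 12e-6, T)
print(mwir, lwir)   # MWIR shows the larger fractional contrast at 300 K
```

The larger fractional contrast of the midwave band at terrestrial temperatures is the classical argument in its favor; whether it translates into better detector performance once noise and emissivity spectra enter is exactly the question the study addresses.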
Hyperspectral and multispectral imagery of shorelines, collected from airborne and shipborne platforms, is used after pushbroom imagery corrections based on inertial motion units, augmented global-positioning data and Kalman filtering. Corrected radiance or reflectance images are then used to optimize synthetic high-spatial-resolution spectral signatures resulting from an optimized data-fusion process. The process is demonstrated using littoral-zone features from imagery acquired in the Gulf of Mexico region. Shoreline imagery along the Banana River, Florida, is presented using a technique that numerically embeds targets in both higher-spatial-resolution multispectral images and lower-spatial-resolution hyperspectral imagery. The fusion process utilizes optimization procedures that include random selection of regions and pixels in the imagery and minimization of the difference between the synthetic and observed signatures. The optimized data-fusion approach allows detection of spectral anomalies in the resolution-enhanced data cubes. Spectral-spatial anomaly detection is demonstrated using numerically embedded line targets within actual imagery. The approach allows one to test spectral-signature anomaly detection and to identify features and targets. The optimized data-fusion techniques and software allow sensitivity analysis and optimization in the singular-value-decomposition (SVD) model-building process and in the numerical selection of the 2-D Butterworth cutoff frequency and order. The data-fusion "synthetic imagery" forms a basis for spectral-spatial resolution enhancement, for optimal band selection, and for remote-sensing algorithm development within "spectral anomaly areas".
Sensitivity analysis demonstrates that the data-fusion methodology is most sensitive to (a) the pixels and features used in the SVD model-building process and (b) the 2-D Butterworth cutoff frequency, optimized by application of the K-S non-parametric test. The image-fusion protocol is transferable to sensor data acquired from other platforms, including moving platforms, as demonstrated.
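As a small illustration of the 2-D Butterworth step, the sketch below applies a frequency-domain Butterworth low-pass to a noisy synthetic image. The cutoff and order values are arbitrary, and the example is not the authors' fusion software:

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order):
    """2-D Butterworth low-pass transfer function in FFT layout.
    cutoff is the half-power radius as a fraction of the Nyquist frequency."""
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    d = np.hypot(u, v) / 0.5               # radial frequency over Nyquist
    return 1.0 / (1.0 + (d / cutoff) ** (2 * order))

def lowpass(img, cutoff=0.2, order=2):
    """Filter an image by multiplying its spectrum with the transfer function."""
    h = butterworth_lowpass(img.shape, cutoff, order)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

# Hypothetical band: a smooth periodic pattern plus pixel-level noise.
rng = np.random.default_rng(1)
x = np.arange(64)
img = 0.5 + 0.25 * np.outer(np.cos(2 * np.pi * x / 64),
                            np.sin(2 * np.pi * x / 64))
noisy = img + 0.2 * rng.standard_normal(img.shape)
smoothed = lowpass(noisy)
print(np.abs(smoothed - img).mean(), np.abs(noisy - img).mean())
```

Choosing the cutoff and order is the trade-off the sensitivity analysis above probes: too low a cutoff discards genuine spatial detail from the high-resolution band, too high a cutoff lets sensor noise leak into the fused product.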
Hyperspectral remote sensing data can be used in civil and military applications to detect and classify target objects that cannot be reliably separated using broadband sensors. The comparatively low spatial resolution is compensated by the fact that small targets, even below the image resolution, can still be classified. The goal of this paper is to determine the target-size to spatial-resolution ratio required for successful classification of different target and background materials. Airborne hyperspectral data is used to simulate data with known mixture ratios and to estimate the detection threshold for given false alarm rates. The data was collected in July 2014 over Greding, Germany, using airborne aisaEAGLE and aisaHAWK hyperspectral sensors. On the ground, various target materials were placed on natural background: four quadratic molton patches with an edge length of 7 meters in black, white, grey and green, and two different types of polyethylene (camouflage nets) with an edge length of approximately 5.5 meters. Synthetic data is generated from the original data using spectral mixtures: target signatures are linearly combined with different background materials in specific ratios. The simulated mixtures are appended to the original data, and the target areas are removed for evaluation. Commonly used classification algorithms, e.g. Matched Filtering and the Adaptive Cosine Estimator, are used to determine the detection limit. Fixed false alarm rates are employed to find and analyze regions where false alarms usually occur first. A combination of 18 targets and 12 backgrounds is analyzed for three VNIR and two SWIR data sets of the same area.
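The synthetic-mixture generation and a matched-filter score can be sketched as follows; the spectra, dimensions and noise levels are invented for illustration and do not represent the Greding data:

```python
import numpy as np

def mix(target, background, fill_fraction):
    """Linear subpixel mixture: the target occupies fill_fraction of the pixel."""
    return fill_fraction * target + (1.0 - fill_fraction) * background

def matched_filter_score(pixel, target, bg_mean, bg_cov):
    """Classical matched-filter statistic (about 1 for a full-pixel target,
    about 0 for pure background)."""
    d = target - bg_mean
    inv = np.linalg.inv(bg_cov)
    return (d @ inv @ (pixel - bg_mean)) / (d @ inv @ d)

rng = np.random.default_rng(2)
bands = 20
target = np.full(bands, 0.8)              # invented flat target spectrum
bg_mean = np.full(bands, 0.3)
bg = bg_mean + 0.02 * rng.standard_normal((500, bands))
cov = np.cov(bg, rowvar=False)

score_sub = matched_filter_score(mix(target, bg[1], 0.25), target, bg_mean, cov)
score_bg = matched_filter_score(bg[0], target, bg_mean, cov)
print(score_sub, score_bg)   # the 25 % subpixel target scores near 0.25
```

Sweeping the fill fraction downward until the score distribution of mixed pixels overlaps that of the background, at the chosen false alarm rate, is one way to read off the detection threshold the paper estimates.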
The paper describes a newly developed method that uses a hyperspectral camera to determine the homogeneity of the spectral characteristics of camouflage surfaces. The color patterns of camouflage surfaces are usually checked pointwise, and it is subsequently assumed that the spectral characteristics of the pattern are the same over the whole camouflage surface. The essential properties of a hyperspectral camera make it possible to determine the level of spectral homogeneity of the surface. Although the acquisition of a hyperspectral image is fairly easy, the evaluation of the HS datacube poses specific problems connected with homogeneity of illuminance, optical-system aberrations, transformation to reflectance and spectral unmixing. All these measurement aspects have to be taken into account to correctly determine the homogeneity of the spectral characteristics of camouflage surfaces.
The spectral behavior of textile camouflage materials in the electro-optical spectral range is analyzed and compared with different backgrounds. It is shown that it is difficult to develop camouflage materials that match a vegetative background in the NIR and SWIR spectral ranges. The problem of water absorption spectral features is discussed. In addition, the effect of different surface finishes of the textiles is shown.
Conventional munitions are not guided by sensors and therefore often miss the target, particularly if the target is mobile. The miss distance of such munitions can be decreased by incorporating sensors that detect the target and guide the munition during flight. This paper is concerned with a Precision Guided Munition (PGM) equipped with an infrared (IR) sensor and a millimeter wave (MmW) radar. The three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight rate needed to intercept it, an Extended Kalman Filter is composed whose state vector consists of the cascaded state vectors of the missile dynamics and the target dynamics. The line-of-sight angle is measured by centroiding the target image from the infrared seeker at 40 Hz. The centroid estimate in the focal plane is updated at 10 Hz: the centroids of four consecutive images are averaged, yielding a time-averaged centroid and thereby introducing some measurement delay. The miss distance achieved when the image processing delays are included is 1.45 m.
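The target motion model can be sketched as a discrete-time simulation of a second-order Gauss-Markov process, x'' + 2ζωx' + ω²x = w(t), driven by white noise w. The parameter values, time step and integration scheme below are illustrative assumptions, not the paper's actual configuration:

```python
import math
import random

# Illustrative sketch: one axis of the tank's motion simulated as a
# second-order Gauss-Markov process driven by white noise.
# omega, zeta, q and dt are assumed values for demonstration only.

def simulate_gauss_markov2(steps, dt, omega=0.5, zeta=0.7, q=1.0, seed=0):
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    path = []
    for _ in range(steps):
        w = rng.gauss(0.0, math.sqrt(q / dt))      # discretized white noise
        a = -2.0 * zeta * omega * v - omega**2 * x + w
        v += a * dt                                 # simple Euler integration
        x += v * dt
        path.append(x)
    return path

# 10 s of lateral motion sampled at the seeker's 40 Hz frame rate:
lateral = simulate_gauss_markov2(steps=400, dt=0.025)
```

Running two independent instances of this process, one per axis, gives the forward and lateral target motion that the Extended Kalman Filter must track.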
Recent technological advancements in hardware have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.
Graphics Processing Units (GPUs) are powerful devices with large processing capacity for parallel jobs. The detection of objects in a scene requires a large number of independent pixel operations on the video frames that can be performed in parallel, making the GPU a good choice of processing platform. This paper concentrates on background subtraction techniques [2] to detect the objects present in the scene. The foreground pixels are extracted from the processed frame and compared to the corresponding pixels of the background model. Using a connected-component detector, neighboring pixels are grouped into blobs which correspond to the detected foreground objects. The new blobs are compared to the blobs formed in the previous frame to determine whether the corresponding object has moved.
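The two stages described above — background subtraction and connected-component grouping — can be sketched serially as follows; on a GPU, each per-pixel comparison would run in parallel. The grayscale frame representation and the threshold value are illustrative assumptions:

```python
from collections import deque

# CPU sketch of the detection pipeline: per-pixel background subtraction
# followed by 4-connected component labeling. Frames are 2D lists of
# grayscale values; the difference threshold is an assumed value.

def foreground_mask(frame, background, threshold=25):
    """Mark pixels that differ from the background model."""
    return [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def connected_components(mask):
    """Group 4-connected foreground pixels into blobs (lists of pixels)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:  # breadth-first flood fill
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs
```

Matching each blob's centroid against the blobs of the previous frame then yields the per-object motion used for tracking.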
This paper introduces an interactive recognition assistance system for imaging reconnaissance. The system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types; this is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available, and as a consequence the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features derived from the image signatures. The infrastructure analysis mode pursues the goal of analyzing the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to the corresponding object features and is finally able to recognize the object type. The system offers the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains, such as ships or land vehicles. Each domain has its own feature tree, developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to only those objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features such as the width and length of the object. This step makes it possible to automatically reduce the set of possible object types offered to the image analyst by the interactive recognition assistance system.
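The narrowing of the solution set by feature selection amounts to filtering a catalog of object types by feature subset containment. The catalog fragment below is a hypothetical illustration, not RecceMan data:

```python
# Sketch of how selecting features narrows the candidate object types.
# The feature catalog below is a hypothetical fragment for illustration.

OBJECT_FEATURES = {
    "frigate":          {"hull", "superstructure", "gun turret", "helipad"},
    "cargo ship":       {"hull", "superstructure", "deck cranes"},
    "main battle tank": {"tracks", "gun turret"},
}

def candidates(selected_features, catalog=OBJECT_FEATURES):
    """Return object types whose feature set contains all selected features."""
    return sorted(name for name, feats in catalog.items()
                  if selected_features <= feats)

# Each additional feature reduces the solution set:
# candidates({"gun turret"})          -> ['frigate', 'main battle tank']
# candidates({"gun turret", "hull"})  -> ['frigate']
```

A per-domain feature tree adds hierarchy on top of this flat filtering, but the reduction principle is the same.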
Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are e.g. recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are changeable objects such as trees, and compression or transmission artifacts. To enable the use of automatic change detection within the interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and varying influence parameters (e.g. image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view the generation of synthetic data by simulation tools has to be considered.
In this paper, the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted, and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data can be considered realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
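The two processing components applied to corresponding frames can be sketched in miniature: a brute-force translational registration (sum of absolute differences), followed by change mask extraction on the aligned frames. Real UAV imagery requires a full homography rather than a pixel shift; the search window and threshold here are illustrative assumptions:

```python
# Minimal sketch of registration plus change mask extraction between two
# corresponding frames. Real frames need a homography; the integer-shift
# search and the difference threshold are simplifying assumptions.

def register_shift(ref, img, max_shift=2):
    """Find the integer (dy, dx) shift of img that best matches ref (min SAD)."""
    h, w = len(ref), len(ref[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        sad += abs(ref[y][x] - img[sy][sx])
                        n += 1
            score = sad / n
            if best is None or score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

def change_mask(ref, img, shift, threshold=30):
    """Per-pixel change mask after compensating the estimated shift."""
    dy, dx = shift
    h, w = len(ref), len(ref[0])
    return [[0 <= y + dy < h and 0 <= x + dx < w
             and abs(ref[y][x] - img[y + dy][x + dx]) > threshold
             for x in range(w)]
            for y in range(h)]
```

On the synthetic VBS3 clips, rendering artifacts (slight heading differences, vegetation disparity, 3D parallax) appear as small residuals that a suitable threshold suppresses.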
Background: Algorithms show difficulties in distinguishing weak signals of a target from a cluttered background, a task that humans tend to master relatively easily. We conducted two studies to identify how various degrees of clutter influence operator performance and search patterns in a visual target detection task.
Methods: First, 8 male subjects had to look for specific female targets within a heavily cluttered public area. Subjects were supported by differing amounts of markings that helped them to identify females in general. We presented video clips and analyzed the search patterns. Second, 18 subject matter experts had to identify targets on a heavily frequented motorway intersection. We presented them with video material from a UAV (Unmanned Aerial Vehicle) surveillance mission. The video image was subdivided into three zones: the central zone (CZ), a circular area of 10°; the peripheral zone (PZ), corresponding to the 4:3 format; and the hyper-peripheral zone (HPZ), representing the lateral region specific to the 16:9 format. We analyzed fixation densities and task performance.
Results: We found an approximately U-shaped correlation between the number of markings in a video and the degree of structure in search patterns as well as performance. For the motorway surveillance task we found a difference in mean detection time for CZ vs. HPZ (p=0.01) and PZ vs. HPZ (p=0.003) but no difference for CZ vs. PZ (p=0.491). There were no differences in detection rate for the respective zones. We found the highest fixation density in CZ decreasing towards HPZ.
Conclusion: We were able to demonstrate that markings could increase surveillance operator performance in a cluttered environment as long as their number is kept in an optimal range. When performing a search task within a heavily cluttered environment, humans tend to show rather erratic search patterns and spend more time watching central picture areas.
Our understanding of camouflage, in military as well as in evolutionary perspectives, has been developing over the last 100 years, and in that period several underlying principles have emerged. It has turned out in the recent decade that background pattern matching alone may not be sufficient to conceal targets, because of the ubiquitous and revealing information contained in the edges of a target. In this paper we study one concealment strategy, so-called disruptive coloration, which predicts that high-contrast patches placed at the target's outline will impede detection by creating false target edges for the observer. Disruptive coloration is counterintuitive, as it may impede detection despite the fact that the patches themselves may be poorly concealed. In military environments the "disruptive approach" to camouflage has been textbook material for decades, yet very little evidence supporting this idea has been reported, especially for the concealment of human targets in natural sceneries. We report here experimental evidence from a field study, containing detection data from 12 unique natural scenes (5 testing the disruptive effect, 7 as reference tests), with both human targets and human observers, showing that disruptively colored camouflage patches along a human's outline (the head) may increase detection time significantly compared with a similar human target concealed only by background matching. Hence, our results support the idea that disruptive coloration may impede detection, and likewise that the best concealment is achieved when disruptive coloration is added to a target that already matches the background reasonably well. This study raises important questions about the current understanding of human vision and concealment, as well as about any approach to describing the human visual system mathematically.
The statistical methods discussed in this paper are drawn from the areas of machine learning and data mining as well as from descriptive statistics. These techniques are discussed with a focus on their applicability to the results of observer trials, in order to evaluate the effectiveness of signature measures. Signature measures aim at changing the apparent signature of an object, e.g. a vehicle; they can provide camouflage against infrared sensors, or they can be used for deception. Observer trials provide an efficient method for evaluating the effectiveness of such measures. The department of Signatorics of Fraunhofer IOSB developed a software tool named CARPET (Computer Aided inteRactive Performance Evaluation Tool) for the realization of observer trials. The benefit of this system is the reproducibility and uniformity of the trials for every observer. The results from this system consist of marks that were placed at particular times, as well as the computer mouse positions recorded for each human observer. Based on the information gathered from these marks, together with the known target object positions, the statistical treatment can be performed. For the statistics, it has to be known to which target object each mark belongs. The first problem considered in this paper is therefore the correct labeling of the marks according to the target objects. The labeling is done using an expectation-maximization scheme with the k-means clustering algorithm. The next step involves a second labeling: a linear discriminant is used to decide whether a mark should be considered a hit or a miss for each particular target object. After these decisions, a receiver operating characteristic (ROC) analysis is performed in order to evaluate the detectability of each target object. Furthermore, the sample mean and sample covariance formulas are applied to the so-called hit sets in order to approximate a Gaussian distribution for each hit set.
These Gaussians facilitate the evaluation of the accuracy and the precision of the hit sets. Accuracy and precision offer information about the quality of the marks set by the observers.
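A much-simplified sketch of the two labeling steps: each mark is assigned to the nearest known target position (the degenerate case of the k-means step, with target positions as fixed cluster centers), then declared hit or miss by a plain distance threshold, which stands in for the paper's linear discriminant. All coordinates and the radius are illustrative:

```python
import math

# Simplified sketch of mark labeling: nearest-target assignment (k-means
# with fixed centers) plus a distance-threshold hit/miss decision, which
# stands in for the linear discriminant used in the paper.
# Coordinates and hit_radius are illustrative assumptions.

def label_marks(marks, targets, hit_radius=5.0):
    labeled = []
    for mx, my in marks:
        dists = [math.hypot(mx - tx, my - ty) for tx, ty in targets]
        k = min(range(len(targets)), key=dists.__getitem__)
        labeled.append((k, dists[k] <= hit_radius))  # (target index, is_hit)
    return labeled

targets = [(0.0, 0.0), (100.0, 0.0)]
marks = [(2.0, 1.0), (97.0, 3.0), (55.0, 40.0)]
# label_marks(marks, targets) -> [(0, True), (1, True), (1, False)]
```

Sweeping the hit radius (or, in the full method, the discriminant threshold) over a range of values produces the operating points needed for the ROC analysis.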
The TNO Human Factors Search 2 dataset is a valuable resource for studies in target detection, providing researchers with observational data against which image-based target distinctness metrics and detection models can be tested. The observational data provided with the Search 2 dataset were created by human observers searching colour images projected from a slide projector. Many studies of target distinctness metrics are, however, carried out not on colour images but on images that have been converted to greyscale by various means, usually for ease of analysis and meaningful interpretation. The utility of a metric is usually assessed by analysing the correlation between the metric results and the recorded observational results. However, the question remains how well results from contrast metrics computed on monochromatic images can be expected to compare with observational results obtained from colour images. We present the results of a photosimulation experiment conducted using a monochromatic representation of the Search 2 dataset, together with an analysis of several target distinctness metrics. The monochromatic images presented to observers were created by processing the Search 2 images into the L*, a* and b* colour space representations and presenting the L* (lightness) image. The results of this experiment are compared with the original Search 2 results, showing strong correlation (0.83) between the monochrome and colour experiments in terms of correct target detection, and also in terms of search time. Target distinctness metrics computed from these images are compared with the results of the photosimulation experiments and with the original Search 2 results.
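The conversion from a colour image to its L* (lightness) channel follows the standard sRGB to XYZ to CIELAB path; only the Y (luminance) component is needed for L*. This sketch assumes 8-bit sRGB input with the D65 white point; whether the Search 2 processing used exactly these constants is not stated here:

```python
# Sketch of the colour-to-lightness conversion: 8-bit sRGB -> relative
# luminance Y -> CIE 1976 L*. Assumes sRGB primaries and D65 white point.

def srgb_to_lightness(r, g, b):
    """CIE L* (lightness) of an 8-bit sRGB pixel."""
    def linearize(c):
        # Invert the sRGB transfer function.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Relative luminance Y for sRGB / Rec. 709 primaries:
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    # CIE 1976 lightness (two-piece definition):
    return 116.0 * y ** (1.0 / 3.0) - 16.0 if y > 0.008856 else 903.3 * y
```

Applying this per pixel and discarding a* and b* yields the monochromatic representation; because L* is approximately perceptually uniform, it is a more principled greyscale than a simple channel average.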
Over the past 50 years, the majority of detection models used to assess visible signatures have been developed and validated using static imagery. Among these models are the German-developed CAMAELEON (CAMouflage Assessment by Evaluation of Local Energy, Spatial Frequency and OrieNtation) model and the U.S. Army's Night Vision and Electronic Sensors Directorate (NVESD) ACQUIRE and TTP (Targeting Task Performance) models. All of these models gathered the human observer data necessary for development and validation from static images in photosimulation experiments. In this paper, we compare the results of a field observation trial to those of a static photosimulation experiment.
The probability of detection obtained from the field observation trial was compared to the detection probability obtained from the static photosimulation trial. The comparison showed good correlation between the field trial and the static image photosimulation detection probabilities, with a calculated Spearman correlation coefficient of 0.59. The photosimulation detection task was found to be significantly harder than the field observation detection task, suggesting that static image photosimulation results used to develop and validate maritime visible signature evaluation tools may need correction to represent detection in field observations.