This PDF file contains the front matter associated with SPIE Proceedings Volume 11865, including the Title Page, Copyright information, and Table of Contents.
In this study, we have investigated reflectance spectra of different snow types under various conditions. Snow reflectance is interesting from a camouflage point of view, as snow covers large land areas in many parts of the world during winter, at high altitudes, or at high (or low) latitudes. Snow differs from other natural constituents such as soil and vegetation by its high reflectance, particularly in the visible wavelength range, and it is therefore important to characterize its reflectance properties further. From a concealment point of view, it is also important to study differences, and similarities, in the reflectance of snow and vegetation (including frost-covered vegetation) beyond the wavelengths visible to the naked eye, especially because reflected light from vegetation is increasingly dominated by water content as the wavelength increases. Finally, it is of interest to study the snow coverage needed to mask signatures of underlying objects and to observe whether snow reflectance, as a function of layer thickness, fits existing models of light reflected by thin, semi-transparent layers. We found that fresh powder snow, wet snow, coarse snow, and deep (and older) snow layers had similar reflectance spectra, albeit with important differences. Fresh powder snow reflected more light than wet, older, or coarse snow, yet all snow types remained distinctly different from vegetation for wavelengths below about 1000 nm. For longer wavelengths, however, the differences between pure snow and green vegetation were much less pronounced. Finally, the reflectance of frost-covered vegetation deviated from that of pure vegetation, but to a much lesser degree than pure snow. The layer thickness needed to mask underlying surfaces was studied for coarse snow distributed evenly onto a green reference object, and we found the characteristic thickness (corresponding to a specific weight of snow per unit area) needed to effectively hide the spectral reflectance signature of the underlying surface.
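As a hedged illustration of the thin-layer analysis mentioned above, the sketch below fits a simple saturating-exponential masking model to reflectance measured as a function of snow layer thickness. The exponential form, the variable names, and the example data are assumptions for illustration only, not the measured results or the exact model used in the study.

```python
# Hypothetical sketch: fit a saturating-exponential masking model
#   R(d) = R_snow - (R_snow - R_bg) * exp(-d / d0)
# to reflectance versus snow-layer thickness d at one wavelength.
import numpy as np
from scipy.optimize import curve_fit

def masking_model(d, r_snow, r_bg, d0):
    """Reflectance of a snow layer of thickness d over a darker background."""
    return r_snow - (r_snow - r_bg) * np.exp(-d / d0)

# Made-up example measurements: thickness in cm, reflectance at one wavelength.
thickness = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
reflectance = np.array([0.10, 0.35, 0.52, 0.70, 0.80, 0.83])

popt, _ = curve_fit(masking_model, thickness, reflectance, p0=(0.85, 0.10, 1.0))
r_snow, r_bg, d0 = popt
print(f"characteristic thickness d0 = {d0:.2f} cm")
# A layer several times d0 thick effectively hides the background signature.
```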
The development of state-of-the-art surveillance technology forces nations to develop camouflage with advanced capabilities. Owing to the abundance and wide distribution of leaves, their spectral properties are often mimicked by camouflage materials to decrease the conspicuity of users operating in woodland theaters. Before interacting with the covered object or the soil, light usually interacts with several leaf layers, for example in tree canopies. Knowledge of the spectral characteristics of multi-layered leaves is therefore essential for remote sensing applications and for the endeavor to develop undetectable camouflage materials that mimic nature. The literature is currently scarce on research investigating the spectral characteristics of both multi-layered leaves and camouflage materials. In this work, we intend to reduce this knowledge gap with studies of the spectral properties of multiple layers of oak leaves and of a generic camouflage net. We measured the reflectance and transmittance of samples with 1–8 layers between 250 and 2500 nm and found that the penetration depth varies with wavelength and sample type, ranging from just a couple of layers for visible light to three or six layers at infrared wavelengths for the camouflage net and the leaf samples, respectively. Moreover, we fitted the reflectance data of the samples with an uncomplicated plate model, here named the extinction model, which we used at selected wavelengths to estimate the transmittance and absorptance values of the multi-layered samples. The model predicts the spectral values of the samples with high accuracy, especially those of the leaf samples, and proves to be a promising tool that may replace experiments when time or resources are limited.
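The extinction (plate) model itself is not spelled out in the abstract; as a hedged sketch, the snippet below uses the classical Stokes adding equations for a stack of identical plates, which is one common plate-model formulation for propagating single-layer reflectance and transmittance to N layers. The single-layer values are placeholders, not the measured oak-leaf or net data.

```python
# Hedged sketch: propagate single-layer reflectance/transmittance to n layers
# with the classical Stokes adding equations (one standard plate-model variant,
# not necessarily the exact "extinction model" of the paper).
def stack_layers(r1, t1, n):
    """Reflectance, transmittance, and absorptance of n identical layers."""
    r, t = r1, t1
    for _ in range(n - 1):
        denom = 1.0 - r1 * r            # multiple reflections between the stacks
        r = r1 + (t1 ** 2) * r / denom
        t = t1 * t / denom
    return r, t, 1.0 - r - t            # absorptance from energy conservation

# Placeholder single-leaf values at one wavelength (illustrative only).
for n in range(1, 9):
    R, T, A = stack_layers(0.45, 0.40, n)
    print(f"{n} layers: R={R:.3f}  T={T:.3f}  A={A:.3f}")
```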
This paper presents three experiments from our HyperGreding’19 campaign that combine multi-temporal hyperspectral data to address several essential questions in target detection. The experiments were conducted over Greding, Germany, using a Headwall VNIR/SWIR co-aligned sensor mounted on a drone flying at an altitude of 80 m. Additionally, high-resolution aerial RGB data, GPS measurements, and reference data from a field spectrometer were recorded to support the hyperspectral data pre-processing and the evaluation of the individual experiments. The focus of the experiments is the detectability of camouflage materials and camouflaged objects. When the goal is to transfer hyperspectral analysis to a practical setting, the analysis must be robust to realistic and changing conditions. The first experiment investigates the SAM and SAMZID approaches for change detection to demonstrate their usefulness for detecting moving objects within the recorded scene; the goal is to eliminate unwanted changes such as shadowed areas. The second experiment evaluates the detection of different camouflage net types over two days. This includes camouflage nets that lie in shadow during one flight and are brightly illuminated in another due to varying solar elevation angles during the day. We demonstrate the performance of typical hyperspectral target detection and classification approaches for robust detection under these conditions. Finally, the third experiment aims to detect objects and materials behind the cover of camouflage nets, using a camouflage garage, and we show that some materials can be detected using an unmixing approach.
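The spectral angle mapper (SAM) used in the first experiment compares each pixel spectrum with a reference signature through the angle between them; the minimal sketch below illustrates this on a synthetic cube. Array shapes, the embedded target, and the detection threshold are illustrative assumptions, not the campaign's processing chain.

```python
# Minimal spectral angle mapper (SAM) sketch for target detection.
import numpy as np

def sam_map(cube, reference):
    """Spectral angle (radians) between each pixel spectrum and a reference.

    cube:      (rows, cols, bands) hyperspectral image
    reference: (bands,) target or background signature
    """
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    num = flat @ reference
    denom = np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12
    angles = np.arccos(np.clip(num / denom, -1.0, 1.0))
    return angles.reshape(cube.shape[:2])

# Synthetic example: 100 x 100 pixels, 150 bands, with an embedded target patch.
rng = np.random.default_rng(0)
cube = rng.uniform(0.1, 0.6, size=(100, 100, 150))
target_signature = rng.uniform(0.1, 0.6, size=150)
cube[40:45, 40:45] = target_signature

detection = sam_map(cube, target_signature) < 0.05   # small angle = close match
print("detected pixels:", int(detection.sum()))
```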
The main performance characteristics of defensive systems against incoming threats such as RPGs, rockets, or missiles are the capacity for early detection at long range with high probability and a low false alarm rate, short-time recognition and declaration of the threat, and its tracking and localization. Infrared multiband imaging is one of the electro-optic techniques used to detect, track, and classify propelled threats by discriminating the exhaust plume and the emitting body against other background sources. In most cases the target is an unresolved object and appears in the detector smaller than a pixel or covering only a few pixels; however, the launch eject, boost, and sustained propulsion phases can be sensed during flight. Before the system design, an analysis is carried out to constrain key performance parameters for defined use cases and scenarios. This work presents a mixed approach for the infrared imager, using commercial key performance parameters, experimental measurements, and parametric and ray-tracing based modeling to fully characterize the main radiometric, angular, and temporal responses. Optical sub-pixel detection considers the intrinsic parameters of the wide-field-of-view optics and the focal plane array sensor, such as the angular distortion, the point spread function (PSF), the detector spatial and temporal noise, and the radiometric response. Extrinsic inputs to the system are the radiative threat and background and the atmospheric transmission medium, where the terrain and sky types and the absorption, scattering, and turbulence influence the effective contrast and range. The infrared dynamic signatures of several threats have been simulated, with eject, boost, or sustained propulsion and nose aerodynamic heating, embedded into real infrared scenarios. The overall model generates a synthetic environment of the sub-pixel target signature against the background clutter, taking into account the frame-to-frame evolution according to the expected trajectory of the propelled threat. It is therefore possible to simulate, at sensor level, the signal and spatial time history of the fast dynamic target and to tailor design parameters for a better signal-to-clutter ratio and range performance. Additionally, the final simulated results can be used as an input for the development of detection and tracking algorithms.
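As a hedged illustration of the unresolved-target radiometry described above, the sketch below spreads the aperture irradiance of a point target over a Gaussian point spread function and forms a simple signal-to-clutter ratio. The Gaussian PSF, the SCR definition, and all numerical values are illustrative assumptions rather than the paper's model.

```python
# Hedged sketch: irradiance of an unresolved (sub-pixel) target spread over a
# Gaussian PSF, followed by a simple signal-to-clutter ratio.
import numpy as np

I_target = 50.0      # assumed in-band radiant intensity of plume + body [W/sr]
tau_atm = 0.6        # assumed atmospheric transmittance over the path
R = 3000.0           # range to the threat [m]

E_aperture = I_target * tau_atm / R**2          # point-source irradiance [W/m^2]

sigma_px = 0.7                                  # assumed PSF width in pixels
x = np.arange(-3, 4)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma_px**2))
psf /= psf.sum()
pixel_signal = E_aperture * psf                 # per-pixel share of the signal

clutter_std = 1e-9                              # assumed background clutter std [W/m^2]
scr = pixel_signal.max() / clutter_std
print(f"peak pixel irradiance {pixel_signal.max():.3e} W/m^2, SCR = {scr:.1f}")
```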
Many applications rely on thermal imagers to complement or replace visible light sensors in difficult imaging conditions. Recent advances in machine learning have opened the possibility of analyzing or enhancing such images, yet these methods require large annotated databases. Training approaches that leverage data augmentation via simulated and synthetically generated images could therefore offer promising prospects. Here, we report on a method that uses generative adversarial nets (GANs) to synthesize images of a complementary contrast. Starting from a dual-modality dataset of co-registered visible and thermal images, we trained a GAN to generate synthetic thermal images from visible images and vice versa. Our results show that the procedure yields sharp synthesized images that might be used to augment dual-modality datasets or assist in visual interpretation, yet are also subject to the limitations imposed by the contrast independence between thermal and visible images.
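The abstract does not specify the GAN architecture; as a hedged sketch, the snippet below shows a pix2pix-style training step (adversarial plus L1 loss) for visible-to-thermal translation with toy convolutional networks and random tensors in place of real co-registered data. Network sizes, the loss weight, and the data are placeholder assumptions.

```python
# Hedged pix2pix-style sketch (adversarial + L1 loss) for visible -> thermal
# translation; toy networks and random tensors stand in for real data.
import torch
import torch.nn as nn

gen = nn.Sequential(                      # visible (3 ch) -> thermal (1 ch)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
disc = nn.Sequential(                     # judges (visible, thermal) pairs
    nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 3, padding=1))       # patch-level real/fake logits

opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

visible = torch.rand(4, 3, 64, 64)        # placeholder co-registered batch
thermal = torch.rand(4, 1, 64, 64)

# Discriminator step: real pairs vs. generated pairs.
fake = gen(visible)
d_real = disc(torch.cat([visible, thermal], dim=1))
d_fake = disc(torch.cat([visible, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the real thermal image.
d_fake = disc(torch.cat([visible, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, thermal)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```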
We present two approaches to decompose and neutralize the distortion generated by an imaging system when measuring BRDF data of materials.
BRDF acquisition time can be considerably shortened using imaging systems, making it possible to gather detailed characterizations of multiple materials in very short times. The drawback of this acquisition strategy is that several processing steps have to be applied in order to obtain data that are comparable to those of the standard sampling method. Two of them are the directional mapping to spherical coordinates and the correction of the radiometric distortions due to geometry and inter-reflections that occur during the acquisition process.
Here we present two approaches that address the decomposition of the various interactions that occur when measuring BRDF material functions with an imaging system. The first one uses the measurement of a mirror as a Kronecker delta function input to the imaging system and focuses on the inter-reflections within the curved projection surface. The second approach addresses the three main contributions that hinder the direct comparison between sampled and imaged measurements and models their dependence on the associated geometry. The components considered are the curved projection surface, the inter-reflections, and the effects associated with the imaging system itself.
We introduce the idea of treating the distorting factors as two separable components: the first depends only on the surface geometry and the camera properties, while the second contains all interactions tied to the reflectance characteristics of the probe. This decoupling allows us to examine the distorting factors independently. Current results are presented and discussed, and the implications for the radiometric, quantitative accuracy of the method are summarized.
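As a deliberately simplified sketch of the separable-component idea, the snippet below treats the measured image as the product of a probe-independent system factor, estimated once from the mirror (Kronecker-delta) measurement, and a probe-dependent factor, then divides the former out. Real inter-reflections are not strictly multiplicative, so this is an illustrative approximation with placeholder arrays, not the method of the paper.

```python
# Greatly simplified sketch of the separable-component idea: estimate a
# probe-independent system factor from the mirror (delta-input) measurement and
# divide it out of the sample measurement.
import numpy as np

rng = np.random.default_rng(1)
system_factor = 1.0 + 0.2 * rng.random((256, 256))   # geometry + camera effects
true_probe = 0.3 * np.ones((256, 256))               # ideal probe response

mirror_meas = system_factor * 1.0        # mirror treated as a unit-response probe
sample_meas = system_factor * true_probe # measured probe image

corrected = sample_meas / np.maximum(mirror_meas, 1e-9)
print("max correction error:", float(np.abs(corrected - true_probe).max()))
```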
A hyperspectral camera system captures information in a large number of narrow wavelength bands, in contrast to a multispectral camera with only a few bands across the electromagnetic spectrum. A hyperspectral data cube can provide a significant amount of information for target detection. However, such systems are bulky and generate enormous amounts of data, so real-time processing is challenging for lightweight airborne platforms and wearable sensor systems. With recent advances in CMOS image sensor and colour filter technologies, multispectral camera systems have become compact enough for lightweight applications. This paper demonstrates that a few selected bands from a multispectral camera, combined with signature-based machine learning techniques, can provide accurate target detection. The study used a four-band multispectral system and a 138-band hyperspectral system mounted on a drone platform to detect a camouflage sheet of size 250 cm × 65 cm from different heights. The results will have application in the development of compact spectral image sensor technology suitable for aerial and handheld or helmet/body-mounted applications.
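As a hedged sketch of the band-selection idea, the snippet below keeps a handful of bands from a synthetic 138-band cube and trains a simple signature-based classifier on labelled camouflage and background pixels. The band indices, the classifier choice, and the synthetic data are illustrative assumptions, not the bands or models used in the study.

```python
# Hedged sketch: detect a camouflage signature using only a few selected bands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_bands = 138
cube = rng.uniform(0.1, 0.5, size=(200, 200, n_bands))       # background pixels
camo_signature = rng.uniform(0.1, 0.5, size=n_bands)
cube[50:70, 80:100] = camo_signature + rng.normal(0, 0.01, (20, 20, n_bands))

selected_bands = [20, 55, 90, 120]      # assumed multispectral-like band subset
X = cube[:, :, selected_bands].reshape(-1, len(selected_bands))
labels = np.zeros((200, 200), dtype=int)
labels[50:70, 80:100] = 1               # ground-truth camouflage mask
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X).reshape(200, 200)     # illustrative in-sample prediction
print("detected camouflage pixels:", int(pred.sum()))
```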
Simulation techniques are highly useful for assessing camouflage, and the role of movement, under widely ranging lighting, weather, and background conditions. However, a sufficient level of fidelity of the simulated scenes is required to draw conclusions. Here, live recordings were obtained of moving soldiers, and simulations of similar scenes were created. To assess the fidelity of the simulation, a search experiment was carried out in which performance on recorded and simulated scenes was compared. Several movies of bushland environments were shown (recorded as well as simulated scenes), and participants were instructed to find the moving target as rapidly as possible within a time limit. In another experiment, the visual conspicuity of the targets was measured. For static targets it is well known that conspicuity (i.e., the maximum distance at which a target can be detected in the visual periphery) is a valid measure of camouflage efficiency, as it predicts visual search performance. In the present study, we investigate whether conspicuity also predicts search performance for moving targets. In the conspicuity task, participants saw a short (560 ms) part of the movies used for the search experiments. This clip was presented in a loop such that the target moved forward, backward, forward, and so on. Conspicuity was determined as follows: a participant starts by fixating a location in the scene far away from the target so that he/she is not able to detect it. Next, the participant fixates progressively closer to the target location until the target can just be detected in peripheral vision; at this point, the distance to the target is recorded. As with static stimuli, we show that visual conspicuity predicts search performance. This suggests that conspicuity may be used to establish whether simulated scenes have sufficient fidelity to be used for camouflage assessment (including the effect of motion).
Laboratory target detection experiments are widely used to assess camouflage techniques for effectiveness in the field. Some research has suggested that, in maritime environments, target detection in the laboratory (photosimulation) differs from field observations. This difference suggested that the dynamic nature of real-world search tasks, not represented in still images, could be a critical element in field detection. To explore the effect of dynamic elements for inclusion in laboratory experiments, we have initiated a series of studies including videosimulation. In this paper, we extend our previous work, exploring the link between field observations, photosimulation, and videosimulation using data obtained at a field trial conducted in Darwin (Australia) with small boat targets. Both laboratory-based experiments (photo- and videosimulation) presented the stimuli on an EIZO colour-calibrated monitor, and a Tobii eye tracker was used to record eye movements. Comparing the probability of detection (Pd) from the field observations and the videosimulation experiment yielded a Pearson correlation coefficient (PCC) of 0.43 and a mean absolute error (MAE) of 0.23 from the identity function, whereas comparing the field observations to photosimulation yielded a PCC of 0.45 and an MAE of 0.20. These new results show the opposite trend to that reported in Culpepper et al. (2015): the new results show the laboratory experiments to be mostly easier than the field observations, whereas our 2015 results showed that field observations were easier than photosimulation.
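The agreement statistics quoted above can be reproduced directly from paired detection probabilities; the short sketch below computes the Pearson correlation coefficient and the mean absolute error from the identity line for placeholder Pd values (the numbers are not the trial data).

```python
# Sketch: Pearson correlation and mean absolute error from the identity line
# between field and laboratory detection probabilities (placeholder values).
import numpy as np

pd_field = np.array([0.20, 0.45, 0.60, 0.75, 0.90])   # illustrative only
pd_lab   = np.array([0.35, 0.50, 0.70, 0.80, 0.95])

pcc = np.corrcoef(pd_field, pd_lab)[0, 1]
mae = np.mean(np.abs(pd_lab - pd_field))    # deviation from the identity function
print(f"PCC = {pcc:.2f}, MAE = {mae:.2f}")
```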
Surveying for man-made objects in photographic images is of utmost importance for various military and civilian applications. In this paper, we present two supervised approaches for classifying a photographic image as containing either dominant natural or man-made regions. The first approach has low complexity, with features extracted from a statistical model of multi-scale sub-band coefficients of natural scenes. The second approach is based on traditional robust feature extraction along with recent deep methods. We evaluate the performance of these approaches on two popular image databases composed of a mixture of man-made and natural scene photographs, and compare them in terms of classification accuracy as well as computational complexity. While the traditional robust-feature-based classification approach appears to be an obvious choice for such a task, we conclude that low-complexity approaches cannot be discounted for real-time applications. Finally, we also construct a likelihood map for quick localisation of man-made regions within mixed images, which could help speed up the detection process.
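As a hedged sketch of the likelihood-map idea, the snippet below scores overlapping windows of an image and assembles the scores into a coarse map. The window size, stride, and the trivial edge-density score are placeholders standing in for the paper's statistical and deep classifiers.

```python
# Hedged sketch: coarse likelihood map for man-made regions from window scores.
import numpy as np

def manmade_score(patch):
    """Placeholder score: man-made regions often show strong gradients/edges."""
    gx = np.abs(np.diff(patch, axis=1)).mean()
    gy = np.abs(np.diff(patch, axis=0)).mean()
    return gx + gy

def likelihood_map(image, win=32, stride=16):
    rows = (image.shape[0] - win) // stride + 1
    cols = (image.shape[1] - win) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i*stride:i*stride+win, j*stride:j*stride+win]
            out[i, j] = manmade_score(patch)
    return out

image = np.random.default_rng(0).random((256, 256))   # placeholder grey image
print(likelihood_map(image).shape)                     # coarse (15, 15) map
```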
The development of new sensing capabilities, such as hyperspectral sensors, small man-portable infrared sensors, battlefield radars, and faster AI-based data processing, requires new capabilities of camouflage materials. Military systems also have to work in changing environments, which calls for adaptivity.
ACAMSII is an EU-funded project within the Preparatory Action for Defence Research (PADR) programme under grant agreement No 800871. Scientists from Sweden, Germany, Portugal, France, Lithuania, and the Netherlands cooperate in the project. ACAMSII conducts applied research on improved camouflage for the soldier with the purpose of providing increased survivability in diversified environments.
The ACAMSII multifunctional camouflage demonstrator consists of three main parts:
• The IR adaptivity is realized by integrating phase-changing and low-emissivity materials into the printed, camouflage-patterned textile.
• Adaptation to background colours and lighting conditions in the visible range is managed by driving light-emitting diodes (LEDs) embedded in a panel of clothing textile material. The LEDs are controlled by means of a camera and a light sensor placed on the back. Thermochromic materials give further adaptation.
• Woven fabric coated with microwave-shielding and absorbing materials reduces the radar signature.
For a soldier combat system, not only the camouflage properties are of importance; the comfort properties of the system are also evaluated.
In this paper, emphasis is put on military needs (top-down, or market pull), on the scientific aspects and technology readiness level of available technologies and materials (technology push, or bottom-up), and on how to build a consortium in an international collaboration during a pandemic. The concept of the EDA Overarching Strategic Research Agenda is introduced and the project's relation to it is explained. Only general aspects and results from the three-year project, ending in 2021, are presented here, whereas specific details of the camouflage system are presented in the other papers of this session.
In this paper, opto-electronic methods for adaptive camouflage for the soldier are presented. This work is part of the EU-funded project ACAMSII, “Adaptive CAMouflage for the Soldier II”, in which a future soldier system is being developed and implemented that provides adaptive camouflage against all relevant sensor threats, ranging from the visible to thermal IR and radar. The assessment of possible technologies for adaptation of the visual signature is presented and discussed, and the three most promising technologies are discussed in detail: LEDs, OLEDs, and electronic paper. Concerning the optical properties, this paper focuses mainly on single colours and intensities, not on pattern generation and evaluation. Optical properties such as luminance and the impact of reflectance are studied and compared. In particular, the appearance of LEDs embedded in a textile and the impact of reflected ambient light are analyzed. A discussion of colour space properties as a function of illuminance and the surrounding background textile is also presented. It turned out that OLED and electronic paper devices have considerable disadvantages compared to LEDs; therefore, LEDs are the most promising opto-electronic components for adaptive camouflage in the visible spectral range. Concerning the NIR spectral range, a discussion of the optical requirements and the application of NIR diodes in clothing is also presented.
Within the EDA-funded project ACAMSII, “Adaptive camouflage for the soldier”, a future soldier system is being developed and implemented that provides adaptive camouflage against all relevant sensor threats, ranging from the visible to thermal and radar. To provide adaptive camouflage in the visible domain, LEDs are integrated in the clothing. A proof-of-principle demonstrator of this subsystem was developed and implemented by TNO in collaboration with its international ACAMSII partners. The demonstrator consists of a panel of clothing material with a printed camouflage pattern and embedded LEDs. The camouflage pattern adapts to the background and lighting conditions by means of a camera and a light sensor: the camera registers the background colours and pattern, and the light sensor registers the illumination of the panel. In situations in which the printed camouflage provides sufficient protection (darker backgrounds), the LEDs are switched off. The small LEDs are invisible when switched off and blend together when they are turned on and viewed from larger distances (e.g. 50 m). When turned on, the colours and luminance of the LEDs are matched to those of the background. A control unit (mini-laptop) is connected to the LEDs, the camera, and the light sensor. Using the camera image, the light sensor output, and the printed pattern as input, it calculates the LED input levels required to generate the desired output (camouflage pattern), taking into account the light reflected from the clothing and the input-output relationship of the LEDs. The control unit automatically updates the LED output in response to changes in background and illumination. In our presentation we show how the control loop is implemented and demonstrate that the system adapts well to changes in background and lighting. Different implementations are presented and discussed.
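A minimal sketch of the control-loop calculation described above is given below, assuming a simple additive model (apparent panel colour equals LED emission plus ambient light reflected by the printed textile) and a gamma-style LED input-output curve. Both the model and all numbers are placeholder assumptions, not the implemented ACAMSII controller.

```python
# Minimal control-loop sketch: apparent colour = LED emission + reflected
# ambient light; a gamma-style curve approximates the LED input-output relation.
import numpy as np

def led_drive(desired_rgb, ambient_rgb, textile_reflectance, led_max, gamma=2.2):
    """Drive levels (0..1) needed so the panel output matches the desired colour."""
    reflected = ambient_rgb * textile_reflectance          # printed pattern + light sensor
    needed = np.clip(desired_rgb - reflected, 0.0, None)   # what the LEDs must add
    linear = np.clip(needed / led_max, 0.0, 1.0)           # LEDs stay off if pattern suffices
    return linear ** (1.0 / gamma)                         # invert the I/O curve

desired_rgb = np.array([0.30, 0.45, 0.20])     # background colour from the camera
ambient_rgb = np.array([0.50, 0.50, 0.45])     # illumination from the light sensor
textile_reflectance = np.array([0.25, 0.35, 0.15])
print(led_drive(desired_rgb, ambient_rgb, textile_reflectance, led_max=0.6))
```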
In this paper, the results of the development and investigation of textile-based radar-absorbing materials for protection against battlefield radar are presented. This research has been carried out within the project “Adaptive Camouflage for the Soldier” (ACAMSII), which has received funding from the European Union’s Preparatory Action for Defence Research (PADR) programme under grant agreement No 800871.
To develop fabrics with microwave-shielding and absorbing properties, samples of woven fabrics were coated with compositions containing inherently conducting polymers (ICPs), carbon-based formulations, or their mixtures. Screen-printing and knife-over-roll coating techniques were applied, as our aim was to develop fabrics coated with a conductive layer only on the back side of the camouflage-pattern-printed fabric, so that it could be integrated in the military camouflage clothing system.
In the radar threat evaluation carried out as part of the ACAMSII project, it was pointed out that a major threat to dismounted soldiers comes from battlefield radars, which commonly operate in the X and Ku bands. Consequently, the reflection and transmission properties of the developed textile fabrics were investigated over a frequency range of 6–18 GHz, which covers the frequencies relevant to the application.
It was found that the shielding effectiveness (SE) as well as the absorption properties depend not only on the amount and type of conductive paste applied to the fabric, but also on the construction parameters of the fabrics and their finishing before coating.
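For reference, shielding effectiveness and absorption are commonly derived from the measured reflection and transmission coefficients; the short sketch below applies the standard S-parameter relations with placeholder values rather than the measured fabric data.

```python
# Standard S-parameter relations for a planar sample (placeholder values):
#   SE [dB] = -20 log10 |S21|,  reflectance = |S11|^2,
#   absorptance A = 1 - |S11|^2 - |S21|^2.
import numpy as np

s11, s21 = 0.55, 0.12          # illustrative reflection/transmission coefficients
se_db = -20.0 * np.log10(abs(s21))
absorptance = 1.0 - abs(s11) ** 2 - abs(s21) ** 2
print(f"SE = {se_db:.1f} dB, A = {absorptance:.2f}")
```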
Within the EU-funded project “Adaptive camouflage for the soldier” (ACAMSII), led by Totalförsvarets forskningsinstitut - FOI (Sweden) with six other participants from five countries, namely CITEVE and DAMEL (Portugal), IOSB (Germany), FTMC (Lithuania), TNO (The Netherlands), and SAFRAN (France), a future soldier system is being developed for adaptive camouflage against all relevant sensor threats, ranging from the visible to thermal and radar. Considering the huge challenge of achieving a solution for adaptive camouflage that is expected to meet several requirements of soldier concealment in different wavelength regions, several architectures were envisaged. The most promising architectures were realized in a design concept into which possible concealment technologies can be integrated. The overall design concept comprises a multilayer clothing system combining active and passive adaptation mechanisms, improving camouflage efficiency in either woodland or urban environments. A tri-layer system was conceived, comprising an inner layer (underwear), a middle layer (combat uniform), and an outer layer (adaptive camouflage system). Requirements concerning scenario, soldier signature, design, ergonomics, weight, power consumption, usability, and operational functionality were considered, and two research lines ensued, resulting in two architectures: a) one based on light-emitting diode (LED) technology and b) one based on thermochromic pigment technology. In parallel, research was performed on several formulations, including thermochromic pigments, metal oxides, conductive polymers, and carbon, for thermal infrared (TIR) signature reduction.
Decoys can be used in warfare to affect the enemy's intelligence and situational awareness as well as to protect military equipment and soldiers. Since much of current, and especially future, reconnaissance is based on advanced wide-spectral-range sensors, in particular airborne sensors carried by drones and satellites, it is imperative that a modern decoy be capable of creating a credible signature in the most commonly used sensors. To this end, a foil system was designed and fabricated using printed-electronics technologies that produces sensor responses for thermal imaging, radar, and visible or hyperspectral imaging. The foil has a matrix of heating elements that can be heated in an arbitrary pattern to form any desired thermal signature. The radar response was created by laminating a metal layer onto the foil, and the foil was painted with a layer of camouflage paint in order to achieve an authentic hyperspectral and visible response. The roll-to-roll manufactured, general-purpose decoy foil allows mass-production-scale benefits, since the same foil can be used to mimic any target by means of a programmable heater controller.
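As a hedged sketch of the programmable-heater idea, the snippet below maps a desired apparent-temperature pattern onto per-element heater duty cycles with a simple linear scaling between ambient and a maximum element temperature. The linear mapping and all values are assumptions; the real relationship depends on the foil's thermal properties and the controller design.

```python
# Hedged sketch: map a desired thermal pattern onto heater duty cycles with a
# simple linear scaling between ambient and the maximum element temperature.
import numpy as np

def duty_cycles(target_temps_c, ambient_c=10.0, max_element_c=90.0):
    """Per-element duty cycles (0..1) for a matrix of heating elements."""
    frac = (target_temps_c - ambient_c) / (max_element_c - ambient_c)
    return np.clip(frac, 0.0, 1.0)

# Desired apparent-temperature pattern, e.g. a warm engine block on a cooler hull.
pattern = np.full((8, 16), 15.0)
pattern[2:5, 3:7] = 60.0
print(duty_cycles(pattern).round(2))
```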
A proof-of-concept (POC) demonstrator focusing on colour reproduction was designed to investigate the associated principles in the visible spectral range for adaptive camouflage. This demonstrator was set up to deliver a fair replica of uniformly specified coloured backgrounds under various background light conditions using a controlled LED-matrix panel. For the panel adaptation, a camera monitors the predefined background colour while a regulator-circuit programme simultaneously processes this camera signal to adjust the LED colours. Extensive spectrometer measurements carefully quantify the colour replication accuracy. During these measurements, the demonstrator and its regulator-circuit programme were continually developed to handle three major concerns: RGB-channel saturation, RGB-channel cross-response, and background-reflection correction. The second concern (RGB-channel cross-response) received the most attention; it is caused by a mismatch between the spectral sensitivity of the measuring camera (CSS) and the emitted spectral range of the LED panel (SEP) and is relevant in darker environmental conditions. This mismatch causes a complex, non-linear interaction between changes in the set RGB values of the LED panel and the camera measuring these changes for regulator-circuit processing, so that changes in one RGB component inadvertently affect the other RGB components. If not properly handled, this leads to erroneous RGB-signal generation for the LED-matrix panel and eventually to problematic colour deviations between the panel and the background. By investigating these emission and sensitivity components with spectrometer measurements, specific white balance settings of the camera were determined that avoid such RGB-channel cross-response; this replaces, in a more elegant way, the extensive characterization of the cross-response by response functions. The background reflection was corrected by established standard procedures and worked correctly under the mainly considered darker surrounding illumination, where potential colour deviations could be examined more closely. The application of such correction principles, especially with the strategy developed here for a carefully chosen white balance setting (WB) with respect to the regulation-circuit method used, allows significantly improved colour replication.
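The cross-response that the white-balance strategy avoids can also be described, as noted above, by response functions; a hedged linearized sketch is shown below, in which a 3x3 matrix maps the set LED RGB values to the camera-measured RGB values and is inverted to pre-compensate the commanded values. The matrix entries and target values are placeholders.

```python
# Hedged linearized sketch of RGB-channel cross-response: a 3x3 matrix maps the
# set LED RGB values to the camera-measured RGB values; inverting it
# pre-compensates the commanded values (placeholder entries).
import numpy as np

M = np.array([[0.90, 0.08, 0.02],     # measured R also picks up some G and B
              [0.10, 0.85, 0.05],
              [0.03, 0.07, 0.90]])

desired_measured = np.array([0.40, 0.55, 0.30])      # target camera reading
set_rgb = np.linalg.solve(M, desired_measured)       # pre-compensated command
set_rgb = np.clip(set_rgb, 0.0, 1.0)                 # respect channel saturation
print("commanded RGB:", set_rgb.round(3))
print("resulting reading:", (M @ set_rgb).round(3))
```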
The long-range capabilities of maritime surveillance systems depend on the quality of their images. Poor weather conditions such as haze, fog, or smog can severely hamper the ability to observe a thermal imaging scene accurately, which affects the ability to detect, identify, and track any object of interest within it. An image-processing technique known as dehazing is therefore required in these types of systems. In this paper, state-of-the-art image dehazing algorithms are applied to long-range thermal images (~5 km, ~9 km, ~15 km), and their image restoration quality is compared using quantitative metrics such as structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and feature similarity (FSIM). The results from this benchmark study can guide the choice of a suitable dehazing technique for maritime surveillance systems.
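A hedged sketch of the benchmark metrics is given below using scikit-image's PSNR and SSIM implementations on placeholder images; FSIM is not part of scikit-image and is omitted here.

```python
# Sketch: image-quality metrics used in the benchmark, computed with
# scikit-image on placeholder images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                 # stand-in haze-free frame
dehazed = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, dehazed, data_range=1.0)
ssim = structural_similarity(reference, dehazed, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```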
Modern industrial systems and robotic complexes use machine vision systems to increase productivity and automate processes, and deep learning systems are used to assist the operator and make decisions. The creation of industrial systems within the framework of "Industry 4.0" and "Industry 5.0" is impossible without precise control of the position of objects in space. Implementing machine vision systems makes it possible to monitor the position of robot elements, objects, and humans. Information about objects and their constituent parts allows their movement to be predicted on the basis of accumulated knowledge and the embedded program algorithm. Analysis of the position of a material point in space, and forecasting of its movement without a priori information about the process, can be carried out using neural networks (which require a large amount of knowledge for training) or computational methods (which require exact conditions). On the other hand, the recorded data are affected by noise. Noise arises from sensor non-idealities (the CCD matrix does not consist of pure groups but contains impurities), the influence of external factors (electrical interference, inter-pixel interaction, etc.), uneven accumulation and drainage of charge in the CCD matrix, pixel failure, inaccuracy during subsequent digitization of the data (quantization noise), and so on. Methods of data accumulation and averaging, neural networks, or computational methods can be used to suppress the noise component. This article discusses a data-filtering method based on the analysis of vector spaces and the formation of an optimal solution according to a combined criterion that minimizes the distance between the best approximation, monotonicity, and the minimum deviation of the difference of the norms of the data vectors. The article contains four theorems and their proofs. Practical examples of data processing were obtained for various applications, including forecasting, searching for object boundaries in images, and data filtering. On a set of test sequences, the results were compared with data processing by approximating functions, which showed that the results coincide with an accuracy of 100%. In the presence of a noise component, the accuracy is more than 94% for a noise standard deviation of 25% of the signal power. It should be noted that the proposed filtering method can reduce the effect of the noise component in the absence of a priori information about the membership of the clean signal in any parametric class of functions; when such information is available, approximation of the realization by a function from the corresponding class may be more accurate.
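The accumulation-and-averaging baseline mentioned above (not the proposed vector-space method) is illustrated in the hedged sketch below: averaging N frames of a static scene reduces zero-mean sensor noise by roughly a factor of sqrt(N).

```python
# Sketch of the accumulation-and-averaging baseline: averaging N frames of a
# static scene reduces zero-mean noise by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((128, 128))                              # static scene
frames = clean + rng.normal(0, 0.1, size=(16, 128, 128))    # 16 noisy captures

averaged = frames.mean(axis=0)
print("single-frame noise std:", float((frames[0] - clean).std()))
print("averaged noise std    :", float((averaged - clean).std()))
```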