Given the rapid development of artificial intelligence, especially in the areas of reconnaissance, detection and recognition, it has become necessary to consider methods for concealing one's own military units from this new threat. This publication provides an overview of counter-AI approaches against enemy reconnaissance and of the possibilities for assessing the effectiveness of these methods. It focuses on explainable AI and the camouflaging of key features, as well as on dual attribute adversarial attack camouflage: mathematically optimised patterns that drive an AI-based classifier to an incorrect classification or simply suppress the correct classification. We also discuss the robustness of these patterns.
Deep Learning based architectures such as Convolutional Neural Networks (CNNs) have become quite efficient in recent years at detecting camouflaged objects that a human observer would easily overlook. Consequently, countermeasures have been developed in the form of adversarial attack patterns, which can confuse CNNs by causing false classifications while maintaining the original camouflage properties in the visible spectrum. In this paper, we describe the steps involved in generating suitable adversarial camouflage patterns based on the Dual Attribute Adversarial Camouflage (DAAC) technique proposed in [Wang et al. 2021], which is designed to evade detection by artificial intelligence as well as by human observers. The aim is to develop an efficient camouflage with the added ability to confuse more than a single network without compromising camouflage against human observers. To achieve this, two different approaches are suggested and the results of initial tests are presented.
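For illustration, the sketch below shows the kind of two-term patch optimisation loop that such dual attribute approaches build on: an attack loss that suppresses the true-class score combined with a visual loss that keeps the patch close to a reference camouflage texture. This is a minimal sketch under stated assumptions, not the DAAC implementation of [Wang et al. 2021]; the pretrained ResNet-50, the fixed patch placement, the patch size and the loss weighting are all placeholders.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Minimal sketch of an adversarial patch optimisation loop (not the original DAAC code):
# the patch is optimised so that a pretrained classifier no longer predicts the true
# class, while a simple distance term keeps the patch close to a reference camouflage
# texture for the human observer.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()

patch = torch.rand(1, 3, 64, 64, device=device, requires_grad=True)   # assumed patch size
reference = torch.rand(1, 3, 64, 64, device=device)                   # reference camouflage texture (placeholder)
background = torch.rand(1, 3, 224, 224, device=device)                # scene image (placeholder)
true_class = 717                                                       # assumed label to suppress
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste(scene, p, y=80, x=80):
    """Insert the patch into the scene at a fixed location (simplified placement)."""
    out = scene.clone()
    out[:, :, y:y + p.shape[2], x:x + p.shape[3]] = p.clamp(0, 1)
    return out

for step in range(200):
    optimizer.zero_grad()
    logits = model(paste(background, patch))
    attack_loss = logits[0, true_class]            # push the true-class score down
    visual_loss = F.mse_loss(patch, reference)     # stay visually close to the reference texture
    loss = attack_loss + 10.0 * visual_loss        # loss weighting is an assumption
    loss.backward()
    optimizer.step()
```

In a practical setting the fixed placement would be replaced by random transformations of the patch over many scene images so that the resulting pattern transfers to real viewpoints.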
The threat of AI-based surveillance and reconnaissance systems that has emerged in recent years has made it necessary to develop new camouflage and deception measures directed against them. A primary example is adversarial attack camouflage, achieved by employing specifically calculated digital patterns that are more or less conspicuous to human observers but can effectively deceive an AI. In most cases, however, only photo manipulations showing the pattern in an optimal frontal position are used to evaluate its effectiveness. This paper presents a comprehensive evaluation methodology that examines both the visual conspicuity and the effectiveness of AI camouflage and deception methods as a function of spatial and angular positioning, in order to provide an evaluation measure as well as guidance for the application of patch camouflage. The distances and viewing angles at which DAAC remains effective are investigated to produce a spatial effectiveness map. The shape, extent and intensity of the effectiveness range can then be used as an evaluation measure.
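A minimal sketch of how such a spatial effectiveness map could be sampled is given below; it emulates distance by rescaling the patched scene and viewing angle by an affine shear, and records the drop in true-class confidence per cell. The classifier, the scale and angle grids, and the shear-based angle approximation are illustrative assumptions, not the evaluation pipeline of this paper.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

# Sketch of a spatial effectiveness map (assumed procedure): the patched scene is
# rescaled to emulate distance and sheared to emulate viewing angle, and the drop in
# true-class confidence is recorded for each (distance, angle) cell.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
scene = torch.rand(1, 3, 224, 224)      # placeholder for a patched scene image
true_class = 717                         # assumed ground-truth label

distances = np.linspace(1.0, 0.2, 9)     # relative apparent size (proxy for range)
angles = np.linspace(-60, 60, 13)        # viewing angle in degrees (proxy for aspect)
effect_map = np.zeros((len(distances), len(angles)))

with torch.no_grad():
    for i, scale in enumerate(distances):
        size = max(int(224 * scale), 32)
        small = TF.resize(scene, [size, size], antialias=True)
        small = TF.resize(small, [224, 224], antialias=True)   # back to network input size
        for j, ang in enumerate(angles):
            view = TF.affine(small, angle=0.0, translate=[0, 0],
                             scale=1.0, shear=[float(ang), 0.0])
            prob = torch.softmax(model(view), dim=1)[0, true_class].item()
            effect_map[i, j] = 1.0 - prob    # high value = detection suppressed
```

The shape and extent of the high-valued region in effect_map then correspond to the effectiveness range described above.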
In recent years, AI-based algorithms have increased significantly in both popularity and efficiency across numerous applications. Since such artificial neural networks can also be used for military reconnaissance, it is necessary to consider methods to avoid or impede enemy detection or recognition by automated AI systems. However, the features that make an object salient to a human observer do not transfer to AI-based systems, since the features an AI uses for classification are largely dependent on its training data and opaque.
In this work, we aim to show ways of understanding an AI's decisions using LIME or Grad-CAM, and thereby find ways to decrease classification performance in order to develop camouflage against AI, or to deceive it with adversarial attacks. Camouflage measures can then be evaluated with these methods for their effectiveness against AI, and by combining this with camouflage performance evaluation against human observers using existing methods, we try to find the best possible trade-off for combined camouflage against both threats.
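As an illustration of the Grad-CAM part of this approach, the following sketch computes a class-activation heat map for a pretrained CNN via hooks on its last convolutional block, highlighting the regions the network relied on for its decision. The model, the chosen layer and the placeholder input are assumptions; LIME would typically be applied through its own library.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Minimal Grad-CAM sketch (assumed setup): gradients of the target class score with
# respect to the last convolutional feature map weight that map and yield a coarse
# heat map of the image regions driving the classification.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["feat"] = output

def bwd_hook(_, grad_input, grad_output):
    gradients["feat"] = grad_output[0]

layer = model.layer4[-1]                      # last conv block (a common Grad-CAM choice)
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

image = torch.rand(1, 3, 224, 224)            # placeholder input image
logits = model(image)
target = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, target].backward()

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalised heat map
```

Regions with high values in cam indicate key features that a camouflage measure should suppress or alter.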
In this paper we present a conspicuity quantification model based on anomaly detection. The model extracts numerous local image parameters, in first-order and higher-order (transformation-based) statistics, and calculates local conspicuity by a multiscale center-surround comparison: a point in an image draws attention if it differs significantly from its surroundings in one or more relevant parameters. This is also biologically substantiated, as many parts of the visual system compute center-surround differences, for example in color or luminance.
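A minimal sketch of such a multiscale center-surround comparison on a single feature channel (e.g. luminance) is shown below; the Gaussian scale pairs and the absolute-difference measure are assumptions rather than the exact parametrization used in the model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of a multiscale center-surround comparison on one feature channel:
# points that differ from their surround at any scale receive high conspicuity.
def center_surround_conspicuity(feature, scales=((1, 4), (2, 8), (4, 16))):
    conspicuity = np.zeros_like(feature, dtype=float)
    for center_sigma, surround_sigma in scales:       # assumed scale pairs
        center = gaussian_filter(feature, center_sigma)      # local (center) estimate
        surround = gaussian_filter(feature, surround_sigma)  # neighborhood (surround) estimate
        conspicuity += np.abs(center - surround)              # center-surround difference
    return conspicuity / len(scales)

luminance = np.random.rand(256, 256)                  # placeholder luminance channel
conspicuity_map = center_surround_conspicuity(luminance)
```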
In our work we focused on biologically relevant parameters, as the camouflage is targeted against human observers. In the first-order statistics we considered, among others, local luminance, perceptual color difference in CIELAB color space, r.m.s. contrast and entropy. In the transformation-based higher-order statistics we considered spatial frequency distribution, power spectra, orientation bias and quefrency analysis via Fourier transformation, as well as linear feature extraction via the Radon transformation.
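The following sketch illustrates a few of these local parameters on a placeholder image; the window sizes, the mean-background reference color and the use of the simple CIE76 color difference are assumptions chosen for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage import color
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.transform import radon

# Sketch of a few first- and higher-order local parameters (assumed window sizes):
# local luminance and r.m.s. contrast from the L* channel, color difference to a mean
# background color in CIELAB, local entropy, and two transformation-based measures.
rgb = np.random.rand(256, 256, 3)                         # placeholder image in [0, 1]
lab = color.rgb2lab(rgb)

local_luminance = uniform_filter(lab[..., 0], size=15)
local_mean_sq = uniform_filter(lab[..., 0] ** 2, size=15)
rms_contrast = np.sqrt(np.maximum(local_mean_sq - local_luminance ** 2, 0.0))

mean_background = lab.reshape(-1, 3).mean(axis=0)          # mean background color (assumed reference)
delta_e = np.linalg.norm(lab - mean_background, axis=-1)   # CIE76 color difference

lum_uint8 = (lab[..., 0] / 100.0 * 255).astype(np.uint8)
local_entropy = entropy(lum_uint8, disk(7))                # local entropy, 7-pixel radius window

power_spectrum = np.abs(np.fft.fftshift(np.fft.fft2(lab[..., 0]))) ** 2   # spatial frequency content
sinogram = radon(lab[..., 0], theta=np.linspace(0.0, 180.0, 36),
                 circle=False)                              # linear feature extraction
```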
This enables, first of all, a comprehensive parametrization of camouflage patterns and textures, providing a similarity rating of textures against a mean background, and in particular it facilitates the calculation of conspicuity maps in which eye-catching regions of an image are highlighted.
In this work we show that the linear combination of these conspicuity maps, gathered at different scales, provides a good value for local conspicuity and therefore directly serves as a useful quantification of camouflage: a camouflaged object that draws as little attention as possible is quantified by a low conspicuity value and thus receives a good camouflage rating.
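A minimal sketch of this final rating step is given below, assuming equal weights, per-map normalization and a binary object mask; the weighting and normalization scheme are placeholders rather than the calibrated combination used in the model.

```python
import numpy as np

# Sketch of the final rating step: feature-wise conspicuity maps from different
# scales are linearly combined, and the camouflage rating of a marked object region
# is better the lower its mean conspicuity.
def combine_maps(maps, weights=None):
    maps = [m / (m.max() + 1e-8) for m in maps]            # normalise each map (assumed)
    weights = weights or [1.0 / len(maps)] * len(maps)     # equal weights by default (assumed)
    return sum(w * m for w, m in zip(weights, maps))

def camouflage_score(conspicuity, object_mask):
    """Mean conspicuity inside the object region: lower means better camouflage."""
    return float(conspicuity[object_mask].mean())

maps = [np.random.rand(256, 256) for _ in range(3)]        # placeholder per-feature maps
mask = np.zeros((256, 256), dtype=bool)
mask[100:150, 100:150] = True                              # placeholder object region
score = camouflage_score(combine_maps(maps), mask)
```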