The use of camouflage is widespread in the biological domain, and camouflage has also been used extensively by armed forces around the world to make visual detection and classification of objects of military interest more difficult. The recent advent of ever more autonomous military agents raises the questions of whether camouflage can have a similar effect on autonomous agents as it has on human agents, and if so, what kind of camouflage will be effective against such adversaries. In previous work, we have shown that image classifiers based on deep neural networks can be confused by patterns generated by generative adversarial networks (GANs). Specifically, we trained a classifier to distinguish between two ship types, military and civilian. We then used a GAN to generate patterns that, when overlaid on parts of military vessels (frigates), made the classifier confuse the modified frigates with civilian vessels. We termed such patterns "adversarial camouflage" (AC), since they effectively camouflage the frigates with respect to the classifier. The type of adversarial attack described in our previous work is a so-called white box attack, a term describing adversarial attacks devised with full knowledge of the classifier under attack. This is in contrast to black box attacks, which target unknown classifiers. In our context, the ultimate goal is to design a GAN capable of black box attacks, in other words, a GAN that will generate AC that is effective across a wide range of neural network classifiers. In the current work, we study techniques to improve the robustness of our GAN-based approach by investigating whether a GAN can be trained to fool a selection of neural network-based classifiers, or to reduce the confidence of their classifications to a degree that makes them unreliable.
Our results indicate that it is indeed possible to weaken a wider range of neural network classifiers by training the generator on several classifiers.
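The idea of training a single generator against several classifiers at once can be sketched as follows. This is a minimal illustration, not the method from the paper: the "classifiers" are toy random linear models, and the GAN training loop is replaced by a crude black-box search that perturbs the patch and keeps changes that lower the mean "military" confidence across the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for an ensemble of classifiers: each maps a flattened
# "image" to a military-class confidence via a fixed random linear layer.
def make_classifier(seed, dim=64):
    w = np.random.default_rng(seed).normal(size=dim)
    return lambda x: 1.0 / (1.0 + np.exp(-(x @ w)))  # sigmoid confidence

classifiers = [make_classifier(s) for s in range(3)]

def ensemble_confidence(image, patch, mask):
    """Mean 'military' confidence over all classifiers for a patched image."""
    patched = np.where(mask, patch, image)
    return float(np.mean([clf(patched) for clf in classifiers]))

image = rng.normal(size=64)
mask = np.zeros(64, dtype=bool)
mask[:16] = True                     # the patch covers only part of the "vessel"
patch0 = rng.normal(size=64)

# Crude search standing in for GAN training: keep perturbations that
# lower the confidence of the whole ensemble, not just one classifier.
initial = ensemble_confidence(image, patch0, mask)
patch, best = patch0.copy(), initial
for _ in range(500):
    candidate = patch + 0.1 * rng.normal(size=64)
    score = ensemble_confidence(image, candidate, mask)
    if score < best:
        patch, best = candidate, score
```

The key point carried over from the abstract is only the objective: the patch is scored against the mean output of all classifiers, so an update helps only if it weakens the ensemble as a whole.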
Different types of imaging sensors are frequently employed for detection, tracking and classification (DTC) of naval vessels. A number of countermeasure techniques are currently employed against such sensors, and with the advent of ever more sensitive imaging sensors and sophisticated image analysis software, the question becomes what to do in order to render DTC as hard as possible. In recent years, progress in deep learning has resulted in algorithms for image analysis that often rival human beings in performance. One approach to fooling such algorithms is the use of adversarial camouflage (AC). Here, the appearance of the vessel we wish to protect is structured in such a way that it confuses the software analyzing the images of the vessel. In our previous work, we added patches of AC to images of frigates. The patches were placed on the hull and/or superstructure of the vessels. The results showed that these patches were highly effective, tricking a previously trained discriminator into classifying the frigates as civilian. In this work we study the robustness and generality of such patches. The patches have been degraded in various ways, and the resulting images fed to the discriminator. As expected, the more the patches are degraded, the harder it becomes to fool the discriminator. Furthermore, we have trained new patch generators, designed to create patches that will withstand such degradations. Our initial results indicate that the robustness of AC patches may be increased by adding degrading filters in the training of the patch generator.
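The "degrading filters" idea can be sketched in a few lines. The specific degradations below (additive noise and a box blur) and the averaging scheme are illustrative assumptions, not the filters used in the paper; the point is only that the patch is scored on randomly degraded copies of itself, so the generator is rewarded for patches that survive degradation.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(patch, sigma=0.05):
    """Additive sensor-like noise (illustrative degradation)."""
    return patch + rng.normal(scale=sigma, size=patch.shape)

def box_blur(patch, k=3):
    """Simple box blur, a stand-in for loss of sharpness."""
    return np.convolve(patch, np.ones(k) / k, mode="same")

def degrade(patch):
    """One random degradation pass, as a training-time filter would apply."""
    return box_blur(add_noise(patch))

def robust_score(patch, score_fn, n=8):
    """Average a score over several randomly degraded copies of the patch."""
    return float(np.mean([score_fn(degrade(patch)) for _ in range(n)]))

patch = rng.normal(size=32)
score = robust_score(patch, score_fn=lambda p: p.mean())
```

In actual training, `score_fn` would be the discriminator's confidence, and the generator would be optimized against this degradation-averaged score instead of the clean one.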
The use of different types of camouflage is a longstanding technique employed by armed forces in order to avoid detection, classification or tracking of objects of military interest. Typically, the use of such camouflage is intended to fool human observers. However, in future battle theaters one must expect to face weapons that are "artificially intelligent" in some way, and the question then arises as to whether the same types of camouflage will be effective against such weapons. An equally important question is whether it is possible to design camouflage specifically to confuse "artificially intelligent" adversaries, and what such camouflage might look like. It is this latter question that is the object of the study reported here. In particular, we consider whether carefully designed patterns of camouflage will have a detrimental effect on the performance of neural networks trained to distinguish among different ship classes. We train a neural network to distinguish between different types of military and civilian vessels, and specifically require the network to determine whether the vessel is military or civilian. We then use this network to train a second network, a generative adversarial network, that will generate patterns to overlay on parts of the vessels in such a way as to thwart the performance of the first network. We show that such adversarial camouflage is very effective in confusing the original classification network.
Infrared (IR) imagery is frequently used in security/surveillance and military image processing applications. In this article we will consider the problem of outlining military naval vessels in such images. Obtaining these outlines is important for a number of applications, for instance in vessel classification.
Extracting such an outline is essentially a complex image segmentation task, and we use a neural network designed for this purpose. Neural networks have recently shown great promise in a wide range of image processing applications, and image segmentation is no exception in this regard. The main drawback when using neural networks for this purpose is the need for substantial amounts of data in order to train the networks. This problem is of particular concern for our application due to the difficulty in obtaining IR images of military vessels.
In order to alleviate this problem we have experimented with using alternatives to true IR images for training the neural networks. Although such data cannot capture the exact nature of real IR images, they capture that nature to a degree where they contribute substantially to the training and final performance of the neural network.
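A training set of the kind described above could be assembled as in the sketch below. Everything here is an assumption made for illustration: the arrays stand in for image tensors, the counts are invented, and the up-weighting of the scarce real IR samples is one plausible way to keep them from being drowned out by the plentiful synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_ir(n, shape=(32, 32)):
    """Stand-in for cheaply generated synthetic 'IR-like' images."""
    return rng.normal(loc=0.2, scale=0.1, size=(n, *shape))

def real_ir(n, shape=(32, 32)):
    """Stand-in for the scarce real IR recordings."""
    return rng.normal(loc=0.25, scale=0.15, size=(n, *shape))

# Many synthetic images, few real ones; the real images are weighted up
# so that the scarce data still influences training.
x_sim, x_real = simulated_ir(900), real_ir(100)
x_train = np.concatenate([x_sim, x_real])
sample_weight = np.concatenate([np.full(900, 1.0), np.full(100, 3.0)])
```

A segmentation network would then be trained on `x_train` with `sample_weight` passed to the loss, so each real IR image counts for several synthetic ones.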
Well-known detection metrics based on Johnson criteria or Target Task Performance (TTP) models were developed for land-based targets [1,2]. In this paper we investigate whether, and how, these metrics can be applied to recognition and identification of ships at sea. Large sea targets distinguish themselves from land-based targets by their large aspect ratio, when seen broadside, and their relatively large and hot plume. We shall only address the second of these two issues here. First, however, we shall investigate how the simple Johnson approach to recognition and identification stacks up against a TTP approach. The Johnson approach has clear and simple criteria to measure the target task performance. To apply the TTP model, N50 (V50) values need to be found through observer trials. We avoid these trials here and instead estimate the criteria based on a comparison of the models. From analysis of LWIR and MWIR recordings of a multipurpose ship running outbound and inbound tracks, we find little difference between the two metrics. As mentioned, we study the effect of the plume on task performance ranges by considering two different estimates for the target contrast: the average contrast, and the root of the sum of squares of this contrast and the standard deviation of the contrast. We argue that the plume skews the recognition and identification ranges to much too optimistic values when the standard deviation is included. In other words, although the plume helps to detect the target, it does not help the recognition or identification task. It seems a more careful definition of the temperature contrast needs to be applied when these models are used.
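The two contrast estimates compared above can be written down directly. The temperature values below are invented for illustration; the last two pixels mimic a hot plume, which inflates the standard deviation and hence the second estimate, exactly the skew the abstract warns about.

```python
import numpy as np

# Toy pixel temperatures over the target, and a background temperature.
target = np.array([2.0, 2.5, 3.0, 9.0, 10.0])  # last values mimic a hot plume
background = 1.0

delta = target - background

# Estimate 1: the plain average contrast.
avg_contrast = delta.mean()

# Estimate 2: root of the sum of squares of the average contrast
# and the standard deviation of the contrast.
rss_contrast = np.sqrt(avg_contrast**2 + delta.std()**2)
```

With these numbers `avg_contrast` is 4.3 while `rss_contrast` is roughly 5.5: the plume's spread enters only the second estimate, which is why ranges computed from it come out too optimistic for recognition and identification.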
A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient.
The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
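The exchangeable-module idea can be sketched as follows. The actual framework is written in C++; this Python sketch only illustrates the design principle, and the stage interface, the two toy segmentation algorithms and their names are all assumptions, not the framework's API.

```python
from typing import Callable, List
import numpy as np

# Hypothetical module interface: each stage maps an array to an array
# (image in, image or mask out), so stages can be exchanged freely.
Stage = Callable[[np.ndarray], np.ndarray]

def mean_threshold_segmentation(image: np.ndarray) -> np.ndarray:
    """Toy algorithm A: foreground = pixels above the mean intensity."""
    return (image > image.mean()).astype(np.uint8)

def median_threshold_segmentation(image: np.ndarray) -> np.ndarray:
    """Toy algorithm B: a second, exchangeable segmentation stage."""
    return (image > np.median(image)).astype(np.uint8)

def run_pipeline(image: np.ndarray, stages: List[Stage]) -> np.ndarray:
    """Run the stages in order; swapping a stage touches nothing else."""
    for stage in stages:
        image = stage(image)
    return image

img = np.arange(16.0).reshape(4, 4)
mask_a = run_pipeline(img, [mean_threshold_segmentation])
mask_b = run_pipeline(img, [median_threshold_segmentation])
```

Because every stage satisfies the same interface, exchanging one segmentation algorithm for another is a one-line change to the stage list, which mirrors the exchangeability claim in the abstract.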