Minimizing the electro-optical signatures of soldiers against modern sensors is a challenging task, but one of high importance for soldiers who need to remain undetected. Optimizing camouflage uniforms for winter conditions efficiently reduces soldier signatures in winter scenes, especially in the visual spectrum. Snow usually dominates winter scenes and is difficult to mimic because the spectral properties of snow change with several parameters, such as grain size, structure, and wetness. Developing efficient winter camouflage thus requires knowledge and data on the spectral properties of snow. This paper presents spectral data on common snow types in Norway and evaluates the camouflage performance of several winter uniforms of different colors and patterns. We assessed and ranked the camouflage performance of the uniforms quantitatively in the visible spectrum using an observer-based photosimulation in which many soldiers searched for targets in various Norwegian winter scenes. By collecting a large number of detection times, indicating how difficult it was for an observer to detect each camouflage in each of the unique winter scenes, it was possible to rank the camouflage targets quantitatively. The results show how each camouflage performed (given by time of detection or as a percentage) compared to all the other camouflages in the test for each scene. The photosimulation method is time-consuming, but it gives a realistic estimation of camouflage performance over the different scenes. We discuss the performance of the various winter camouflages in relation to their pattern and similarity to snow (color coordinates).
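The abstract does not specify how the pooled detection times are aggregated into a ranking; one plausible approach, sketched below under assumptions (the function name, the median as summary statistic, and the example times are all hypothetical, not taken from the paper), is to summarize each camouflage by its median detection time and sort so that harder-to-detect targets rank first:

```python
from statistics import median

def rank_by_detection_time(detection_times):
    """Rank camouflage targets by median observer detection time.

    detection_times: dict mapping camouflage name -> list of detection
    times in seconds, pooled over observers and scenes. A longer median
    detection time indicates better camouflage performance.
    """
    medians = {name: median(times) for name, times in detection_times.items()}
    # Sort descending: hardest-to-detect camouflage first.
    return sorted(medians.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical detection times (seconds) for three illustrative uniforms:
times = {
    "white_solid": [12.1, 15.4, 9.8],
    "grey_pattern": [6.3, 7.9, 5.5],
    "green_summer": [2.0, 1.8, 2.4],
}
ranking = rank_by_detection_time(times)
# ranking[0] is the best-performing camouflage in this toy example.
```

A per-scene percentage score, as mentioned in the abstract, could then be derived by normalizing each camouflage's detection time against the others in the same scene.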
The use of Unmanned Ground Vehicles (UGVs) in defence applications is increasing, and much research effort is being put into the field. Many defence vehicle producers are also developing UGV platforms. However, the autonomy functionality of these systems is still in need of improvement. At the Norwegian Defence Research Establishment, a project for developing an autonomous UGV was started in 2019, using a Milrem THeMIS 4.5 from Milrem Robotics as the base platform. In this paper we describe the modifications made to the vehicle to make it ready for autonomous operations. We have added three cameras and a lidar as vision sensors; for navigation we have added a GNSS receiver, an IMU, and a velocity radar; and all sensors receive a common time stamp from a time server. The vision and navigation sensors have been mounted on a common aluminium profile at the front of the vehicle, ensuring that the direction in which the vision sensors observe is known with as little uncertainty as possible. In addition to the hardware modifications, a control software framework has been developed on top of Milrem's controller. The vehicle is interfaced using ROS2 and is controlled by sending velocity commands for each belt. We have developed a hardware abstraction module that interfaces the vehicle and adds additional safety features, a trajectory tracking module, and a ROS simulation framework. The control framework has been field tested, and results are presented in the paper.
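The abstract states that the vehicle is controlled by sending a velocity command for each belt. For a skid-steered tracked platform, the standard differential-drive relation maps a body-frame twist (forward speed and yaw rate) to per-belt speeds. The sketch below illustrates that relation only; the function name and the track width value are assumptions for illustration, not details from the THeMIS interface:

```python
def twist_to_track_speeds(v, omega, track_width):
    """Convert a body-frame twist into per-belt speeds for a
    skid-steered tracked vehicle.

    v           -- forward speed (m/s)
    omega       -- yaw rate (rad/s), positive counter-clockwise
    track_width -- lateral distance between belt centerlines (m)

    Returns (left_speed, right_speed) in m/s.
    """
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    return left, right

# Example: 1 m/s forward with a gentle left turn, assumed 2 m track width.
left, right = twist_to_track_speeds(1.0, 0.5, 2.0)
# left = 0.5 m/s, right = 1.5 m/s: the outer (right) belt runs faster.
```

In a ROS2 setup such as the one described, a trajectory tracking module would typically compute the twist, and a hardware abstraction layer would apply a conversion of this kind before forwarding the commands to the vehicle controller.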