Maritime surveillance systems employing thermal imaging encounter numerous challenges, as image quality significantly affects their effective viewing range. Adverse weather conditions such as haze, fog, and smog can obscure thermal imaging scenes, complicating the detection, identification, and tracking of objects of interest. For instance, these systems must track moving ships from a considerable distance using thermal imaging while adapting to dynamic backgrounds and varying weather conditions. Image quality assessment, a crucial research area, evaluates the perceived quality of an image. Image quality standards often align with human perception, adopting user-focused approaches that consider an observer's ability to perform specific tasks, as outlined in the Johnson criteria. However, in real-time maritime surveillance applications, these criteria may prove inadequate for capturing relevant image properties. This study explores the general factors that measure the dynamic range of maritime surveillance thermal images, along with specific challenges in interpreting images using various quality assessment parameters.
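As an illustration of the kind of dynamic range measurement discussed above, the sketch below computes a percentile-based intensity span and RMS contrast for a thermal frame. This is a minimal example, not the study's actual procedure; the percentile bounds and the synthetic 14-bit frame are illustrative assumptions.

```python
# Minimal sketch (not the study's procedure): percentile-based dynamic
# range and RMS contrast for a 16-bit-container thermal frame.
import numpy as np

def dynamic_range_metrics(frame, lo_pct=1.0, hi_pct=99.0):
    """Return simple dynamic-range statistics for a thermal image array."""
    frame = frame.astype(np.float64)
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])   # robust min/max
    rms_contrast = frame.std() / max(frame.mean(), 1e-12)
    return {"p1": lo, "p99": hi, "used_range": hi - lo,
            "rms_contrast": rms_contrast}

# Example on a synthetic 14-bit frame stored in uint16 (assumed format).
rng = np.random.default_rng(0)
frame = rng.normal(8000, 600, size=(512, 640)).clip(0, 2**14 - 1).astype(np.uint16)
print(dynamic_range_metrics(frame))
```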
Unmanned Aerial Vehicles (UAVs) are gaining popularity for applications such as surveillance, monitoring, and mapping. However, navigating UAVs in low-feature or GPS-denied areas poses a significant challenge, as conventional GPS-based methods for estimating velocity become ineffective. Optical flow algorithms have emerged as a promising approach for UAV velocity estimation in such scenarios. This paper proposes a novel method for UAV velocity estimation in a vision-based navigation system for GPS-denied, low-feature environments. The proposed method is evaluated and compared against various optical flow methods in terms of computational efficiency, accuracy, and robustness when predicting UAV velocity. Understanding the strengths and limitations of these optical flow methods will aid the development and deployment of UAV navigation systems in challenging environments.
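To make the velocity estimation idea concrete, the following hedged sketch uses OpenCV's dense Farneback optical flow between consecutive downward-looking frames and converts the mean pixel flow into a metric velocity via a pinhole model. The altitude, focal length, and frame rate are assumed inputs; this is not the paper's specific method.

```python
# Illustrative sketch: dense Farneback flow converted to a metric velocity
# estimate for a nadir-pointing camera (pinhole approximation).
import cv2
import numpy as np

def estimate_velocity(prev_gray, curr_gray, altitude_m, focal_px, fps):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_flow = flow.reshape(-1, 2).mean(axis=0)   # average flow in px/frame
    # Ground displacement per pixel ~ altitude / focal length (pinhole model)
    metres_per_px = altitude_m / focal_px
    vx, vy = mean_flow * metres_per_px * fps       # m/s in image axes
    return vx, vy
```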
Object detection is a critical task in computer vision, with applications ranging from robotics to surveillance. Traditional RGB-based methods often struggle in low-light, high-speed, or high-dynamic-range scenarios, resulting in blurred or low-contrast images. In this paper, we present a novel algorithmic approach that fuses event data from event cameras with RGB images to improve object detection performance in real time. Event cameras, unlike traditional frame-based cameras, provide high temporal resolution and dynamic range, capturing intensity changes asynchronously at the pixel level. Our method leverages the complementary strengths of event data and RGB images to reconstruct blurred images while retaining contrast information from the original data. We propose an algorithmic pipeline that first fuses event data and RGB images, followed by a reconstruction step that generates enhanced images suitable for object detection tasks. The pipeline does not rely on deep learning techniques, making it computationally efficient and well suited for real-time applications. To validate the effectiveness of our approach, we compare its performance against popular YOLO benchmarks for object detection. We also assess real-time metrics to demonstrate the practicality of our method in time-sensitive applications.
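A simplified, hypothetical stand-in for the fusion step is sketched below: events (x, y, polarity) are accumulated into an edge-like map and blended with the blurred RGB luminance to restore contrast. This is not the paper's pipeline; the blending weight and the event tuple format are assumptions.

```python
# Hedged sketch of event/RGB fusion: accumulate event polarities into an
# edge-strength map, then blend it with the RGB luminance channel.
import numpy as np
import cv2

def fuse_events_rgb(rgb, events, alpha=0.5):
    """rgb: HxWx3 uint8; events: iterable of (x, y, polarity in {-1, +1})."""
    h, w = rgb.shape[:2]
    event_map = np.zeros((h, w), np.float32)
    for x, y, p in events:
        event_map[y, x] += p                       # signed event accumulation
    event_map = cv2.normalize(np.abs(event_map), None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    return cv2.addWeighted(gray, 1.0 - alpha, event_map, alpha, 0)
```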
Image enhancement is an ongoing research problem that the community is addressing through the development of fusion algorithms. Such techniques typically involve reconstructing RGB images by removing environmental artifacts and enhancing desired features. Infrared imagery is also widely used to improve situational awareness in low-visibility scenarios. Recently, learning-based approaches have been used for fusion, extracting meaningful representations from images and capturing latent features that would otherwise be inaccessible to conventional image processing algorithms. Although the viability of RGB-infrared image fusion has been thoroughly demonstrated in the literature, the inadequacies of RGB images in these algorithms' pipelines remain evident. For example, RGB images often exhibit artifacts such as sudden exposure changes or motion blur when illumination or scene content changes abruptly. A novel imaging sensor operating in the visible light spectrum has been developed to address these issues. In this paper, we explore the cutting-edge paradigm of Neuromorphic Vision Sensors (NVS), a class of asynchronous analog imaging platforms that operate based on changes in pixel luminosity within a scene. Compared to frame-based counterparts, NVS improves scene interpretation, processing time, reaction time, and power consumption. Deep-learning reconstruction networks are evaluated in this study to determine the applicability of existing state-of-the-art multi-modal image fusion techniques when NVS data is used in place of RGB data. Metrics such as signal-to-noise ratio (SNR) and pixel-wise error are used as benchmarks.
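For reference, the two benchmark metrics named above can be computed as in the following minimal sketch; the epsilon guard and float conversion are implementation choices, not part of the study.

```python
# Minimal sketch of the cited benchmark metrics: SNR in dB and mean
# absolute pixel-wise error between a reconstruction and a reference.
import numpy as np

def snr_db(reference, reconstruction):
    reference = reference.astype(np.float64)
    noise = reference - reconstruction.astype(np.float64)
    return 10.0 * np.log10((reference ** 2).mean()
                           / max((noise ** 2).mean(), 1e-12))

def mean_abs_pixel_error(reference, reconstruction):
    return np.abs(reference.astype(np.float64)
                  - reconstruction.astype(np.float64)).mean()
```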
KEYWORDS: Cameras, Calibration, Stereoscopic cameras, 3D modeling, 3D metrology, Infrared cameras, Thermography, 3D vision, 3D image processing, Stereo vision systems
Thermal imaging can be used to characterize heated objects in numerous applications, including security and surveillance, where it is vital to locate an object within a 3D space. This can be achieved with stereoscopic vision, in which optical sensors are used to construct a 3D image of the space they observe. To advance computer vision research in the thermal modality, stereo thermal-infrared camera calibration must be precise and effective. This research presents a calibration board for thermal stereo vision systems that provides visual thermal contrast and enables corner detection within the pattern. In an unstructured and dynamic environment, this thermal stereo vision system provides a 3D volumetric measurement for depth estimation of the whole scene within the optical sensors' combined field of view. Laboratory experiments yielded a mean calibration re-projection error of 0.4 pixels and an estimated accuracy of 97.26% for 3D depth measurement at a distance of around 6 m.
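The depth-estimation step that follows calibration can be sketched as below: a disparity map from rectified thermal pairs is converted to depth via Z = fB/d. The StereoSGBM parameters are placeholders, not the paper's settings.

```python
# Hedged sketch of depth-from-disparity after stereo calibration and
# rectification (Z = focal_length * baseline / disparity).
import cv2
import numpy as np

def depth_map(rect_left, rect_right, focal_px, baseline_m):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)   # illustrative parameters
    disp = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                       # mask invalid matches
    return focal_px * baseline_m / disp            # depth in metres
```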
Maritime surveillance systems’ long-range capabilities depend on the quality of their images. Poor weather conditions such as haze, fog, or smog can severely hamper the ability to observe a thermal imaging scene accurately, which in turn affects the ability to detect, identify, and track any object of interest within it. An image processing technique known as dehazing is required in these types of systems. In this paper, state-of-the-art image dehazing algorithms are applied to long-range thermal images (~5 km, ~9 km, ~15 km) and their image restoration quality is compared using quality metrics such as structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and feature similarity (FSIM). The results of this benchmark study can guide the choice of a suitable dehazing technique for maritime surveillance systems.
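Two of the cited metrics have standard scikit-image implementations, as in this example (FSIM has no standard scikit-image implementation and is omitted; grayscale inputs and an 8-bit data range are assumed):

```python
# Example scoring of a dehazed result against a reference frame using
# scikit-image; assumes grayscale uint8 images.
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def score_dehazing(reference, dehazed):
    ssim = structural_similarity(reference, dehazed, data_range=255)
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
    return {"SSIM": ssim, "PSNR_dB": psnr}
```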
One of the main challenges in video analysis is recovering the original video frames from annotations and text markings added by a graphical user interface (GUI) tool, particularly when the original video is not available. Removing annotations from video frames is essential for any subsequent algorithm development, such as noise removal, dehazing, object detection, recognition, and identification in video, tracking of specific objects in the maritime environment, and further testing. In this research work, we developed an algorithm that removes all annotations from any portion of a video frame without affecting the integrity of the original video content. We present a novel technique that removes unnecessary annotations and markers using a progressive switching median filter with wavelet thresholding. Experimental studies have shown that the annotation-free images generated by the proposed method can be used for the development of any baseline algorithm.
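A simplified, illustrative version of the idea is sketched below: pixels covered by an annotation mask are filled progressively with median values, and the frame is then smoothed by wavelet soft-thresholding via PyWavelets. The mask source, pass count, and wavelet choice are assumptions; this is not the authors' exact filter.

```python
# Simplified sketch: progressive median replacement of masked annotation
# pixels, followed by universal-threshold wavelet denoising.
import numpy as np
import pywt
from scipy.ndimage import median_filter

def remove_annotations(frame, mask, passes=3, ksize=5):
    """frame: 2-D float array; mask: True where annotation pixels sit."""
    out = frame.astype(np.float64).copy()
    for _ in range(passes):                        # progressive switching step
        med = median_filter(out, size=ksize)
        out[mask] = med[mask]                      # replace only marked pixels
    coeffs = pywt.wavedec2(out, "db4", level=2)    # wavelet thresholding step
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
    thr = sigma * np.sqrt(2 * np.log(out.size))          # universal threshold
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, "soft") for c in lvl)
                            for lvl in coeffs[1:]]
    return pywt.waverec2(coeffs, "db4")
```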
Geometric camera calibration for long-wave infrared (LWIR) cameras is essential for determining a camera’s intrinsic and extrinsic parameters, which map 3D world coordinate points onto 2D image coordinate points. In this paper, new calibration board materials are proposed for making a planar checkerboard to calibrate LWIR cameras. These materials are low-cost items and, to the best of our knowledge, have not previously been used for infrared camera calibration. The resulting calibration board exhibited high thermal infrared image contrast and enabled consistent detection of checkerboard corners, allowing geometric calibration of LWIR-waveband cameras. The proposed method is tested on three different LWIR cameras in an indoor environment, and the thermal imaging quality of the new board is compared with that of an existing painted calibration board. The calibration was carried out using the “MATLAB Single Camera Calibrator Tool” for each of the three LWIR cameras, showing an overall calibration mean reprojection error of less than 0.35 pixels. The proposed calibration board is beneficial in terms of precision, low-cost construction, and easy reusability.
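The abstract uses MATLAB's calibrator; the equivalent OpenCV workflow is sketched below for reference. The board dimensions and square size are illustrative assumptions.

```python
# Hedged OpenCV equivalent of the single-camera calibration workflow:
# detect checkerboard corners in thermal frames, then solve for intrinsics.
import cv2
import numpy as np

def calibrate(thermal_images, board=(9, 6), square_mm=30.0):
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts = [], []
    for img in thermal_images:                     # grayscale LWIR frames
        found, corners = cv2.findChessboardCorners(img, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, thermal_images[0].shape[::-1], None, None)
    return rms, K, dist   # rms: overall reprojection error in pixels
```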
Maritime surveillance contributes to the security of ports, oil platforms, and littoral zones by monitoring and controlling a maritime region to detect unusual activities such as unlicensed fishing boats, pirate attacks, and human trafficking. Maritime surveillance systems face many design challenges. For instance, these systems must track moving vessels at long distances against a dynamic background, using infrared imaging and under various weather conditions. This work presents a benchmark study of the performance of different state-of-the-art tracking algorithms for marine vehicles using mid-wave infrared (MWIR) images. A comprehensive study was conducted to highlight the advantages and disadvantages of the tracking algorithms.
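One way such a benchmark can be scored is sketched below: an OpenCV tracker is run over an MWIR sequence and compared to ground-truth boxes by intersection-over-union. The scoring protocol here is a generic assumption, not the paper's; tracker availability depends on the opencv-contrib build.

```python
# Hedged benchmark sketch: mean IoU of one tracker over one sequence.
import cv2

def iou(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(aw * ah + bw * bh - inter)

def run_tracker(tracker, frames, gt_boxes):
    tracker.init(frames[0], gt_boxes[0])           # boxes are (x, y, w, h)
    scores = []
    for frame, gt in zip(frames[1:], gt_boxes[1:]):
        ok, box = tracker.update(frame)
        scores.append(iou(box, gt) if ok else 0.0)
    return sum(scores) / len(scores)               # mean overlap

# e.g. run_tracker(cv2.TrackerCSRT_create(), frames, gt_boxes)
```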
Tracking moving objects in a video sequence is a critical problem in wide-area airborne surveillance systems due to their cyber-physical nature. The large amount of imagery data generated by an unmanned aerial vehicle (UAV) requires a high-data-rate link for real-time video streaming to the ground control station. Free space optics (FSO) is a promising data-link technology for UAVs owing to its easy installation, long-range duplex operation, and high bandwidth with greater security. The main issue with FSO communication is beam attenuation due to various atmospheric impairments. In this work, the characteristics of FSO communication links from a UAV to the ground station are investigated at various data rates and wavelengths, using Polarization Shift Keying (PolSK) and On-Off Keying (OOK) modulation techniques to improve data communication performance under various UAE weather conditions. The proposed system’s degradation model accounts for rain, fog, haze, and the UAV’s height above the ground, and a case study examines the effects of these conditions on an FSO-based UAV communication system. The results compare the performance of the mobile optical link in the presence of atmospheric effects under UAE weather conditions, with plots of bit error rate (BER), attenuation, and distance highlighting the benefits of FSO.
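A textbook-style simplification of such a link budget is sketched below: Beer-Lambert attenuation in dB/km followed by an OOK bit error rate derived from a Q-factor. The linear Q-versus-received-power mapping and all numeric values are crude illustrative assumptions, not the paper's degradation model.

```python
# Simplified link-budget sketch: atmospheric loss then OOK BER via Q-factor.
from math import erfc, sqrt

def ook_ber(p_tx_dbm, alpha_db_per_km, dist_km,
            q_per_db=0.5, p_floor_dbm=-40.0):
    p_rx_dbm = p_tx_dbm - alpha_db_per_km * dist_km   # Beer-Lambert loss
    # Crude assumption: Q-factor grows linearly with margin above the floor.
    q = max(q_per_db * (p_rx_dbm - p_floor_dbm), 0.0)
    return 0.5 * erfc(q / sqrt(2))                    # OOK BER for Q-factor q

# Illustrative attenuation values for haze through dense fog (dB/km):
for alpha in (4.3, 20.0, 75.0):
    print(alpha, ook_ber(20.0, alpha, 1.5))
```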
KEYWORDS: Free space optics, Unmanned aerial vehicles, Atmospheric turbulence, Modulation, Data communications, Atmospheric modeling, Surveillance, Computer security, Video surveillance, Video
Surveillance using aerial systems provides a different vantage point for monitoring, maintaining security, or locating targets of interest. The imaging data is live-streamed to the ground using various types of datalinks. However, transferring the large amount of imagery and video information from an unmanned aerial vehicle (UAV) requires a high data rate. Free space optics (FSO) is a high-bandwidth optical wireless communication method suited to UAVs, offering fast and secure data transmission. In this work, an FSO channel model is investigated under atmospheric turbulence to analyze the average spectral efficiency (ASE) for various modulation schemes, including Polarization Shift Keying, On-Off Keying, and Coherent Optical Wireless Communication. The atmospheric turbulence channel model depends on various parameters, such as the refractive index structure parameter and whether the turbulence is strong or weak. In this work, the refractive index structure parameter is varied to capture turbulence differences across the UAV’s operational altitudes. Furthermore, the misalignment error between the overall system’s receiver and transmitter is considered in the analysis. The results show the performance of the optical link at different altitudes and with different modulation schemes, enabling optimization of the link’s performance.
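The altitude dependence of the refractive index structure parameter is commonly captured by the Hufnagel-Valley model, sketched below with the standard HV 5/7 constants; these constants are the textbook defaults, not necessarily the paper's fitted values.

```python
# Hufnagel-Valley model for Cn^2 versus altitude (HV 5/7 constants:
# rms wind 21 m/s, ground-level turbulence A0 = 1.7e-14 m^(-2/3)).
import math

def cn2_hv(h_m, v_rms=21.0, a0=1.7e-14):
    return (0.00594 * (v_rms / 27.0) ** 2 * (1e-5 * h_m) ** 10
            * math.exp(-h_m / 1000.0)
            + 2.7e-16 * math.exp(-h_m / 1500.0)
            + a0 * math.exp(-h_m / 100.0))

for h in (50, 200, 1000, 5000):     # illustrative UAV altitudes in metres
    print(h, cn2_hv(h))
```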
The following paper presents a benchmarking study of the performance of thirty state-of-the-art background subtraction algorithms. In this work, we test multiple background subtraction methods on drone imagery taken under various weather conditions in the UAE region, comparing the quality of the foreground masks these algorithms extract. Visual Studio and MATLAB were used to perform the comparison simulations, yielding a comprehensive background subtraction study that indicates the advantages and disadvantages of each algorithm. The algorithms must be robust to stabilization errors, able to cope with insufficient information resulting from weather conditions such as wind, haze, and heat, and able to handle dynamic backgrounds.
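As a small illustration of foreground mask extraction, the following sketch runs two of OpenCV's built-in background subtractors over a video; the input filename is hypothetical, and a thirty-algorithm benchmark would require additional toolkits not shown here.

```python
# Example: per-frame foreground masks from two OpenCV background subtractors.
import cv2

subtractors = {
    "MOG2": cv2.createBackgroundSubtractorMOG2(detectShadows=True),
    "KNN": cv2.createBackgroundSubtractorKNN(detectShadows=True),
}

cap = cv2.VideoCapture("drone_sequence.mp4")   # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for name, sub in subtractors.items():
        mask = sub.apply(frame)                # per-pixel foreground mask
cap.release()
```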
In typical Intelligence, Surveillance and Reconnaissance (ISR) missions, persistent surveillance is commonly defined as the exercise of automatic intelligence discovery by monitoring a wide coverage area at high altitude from an aerial platform (manned or unmanned). Such a platform can be large enough to carry a matrix of high-resolution cameras and a rack of high-performance image processing and exploitation units (PEU). The majority of small unmanned aerial vehicles (sUAV) can carry optical payloads, allowing them to take aerial images from strategic viewpoints. This capability is a key enabler for an immense number of applications, such as crowd monitoring, search and rescue, surveillance, and industrial inspection. The constrained onboard processing power, together with the strict limits on sUAV flight time, is among the serious challenges that must be overcome to enable cost-effective persistent surveillance based on sUAV platforms. In this paper, we conduct a feasibility study for a potential sUAV-based persistent surveillance system with a tethered power supply.
Tracking a moving object in a video sequence is a critical problem in wide-area surveillance applications. Our proposed system provides high-resolution, low-frame-rate motion imagery surveillance over a city-sized region, within which multiple moving objects are tracked simultaneously in real time. The system faces multiple challenges, including significant camera motion, strong parallax, tracking many moving objects with few pixels per target, single-channel data, and a low video frame rate. In this work, we propose a new method for parallax rectification and stabilization using a wavelet-based scale-invariant feature transform (SIFT) flow technique. The results were compared with existing state-of-the-art methods. Various challenges associated with the detection and tracking of multiple objects in wide-area surveillance systems are discussed.
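A simplified stand-in for the stabilization idea is sketched below: sparse SIFT matches and a RANSAC homography align consecutive frames. The paper's wavelet-based SIFT flow operates densely; this sparse version only illustrates the general concept.

```python
# Hedged sketch: frame-to-frame stabilization via SIFT matches and a
# RANSAC-estimated homography (sparse stand-in for dense SIFT flow).
import cv2
import numpy as np

def stabilize(prev, curr):
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(prev, None)
    kp2, d2 = sift.detectAndCompute(curr, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)  # curr -> prev
    return cv2.warpPerspective(curr, H, prev.shape[1::-1])
```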