Infrared imaging technology has become indispensable across numerous fields due to its unique capabilities. However, small object detection in infrared images remains challenging because of the inherent noise and low contrast of infrared imagery, and the small target size further increases the difficulty. To address these problems, we propose a small object detection method for infrared images based on YOLOv8 with an attention mechanism and multi-level feature fusion. First, an attention mechanism is introduced to extract long-range image features, which reduces the adverse effect of complex image backgrounds. Second, multi-level feature fusion is used in the detection neck to recover image details for small objects. Experimental results show that the proposed method improves the detection performance for small objects in infrared images.
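As an illustration of the kind of attention the abstract refers to, the sketch below implements a generic non-local self-attention block over a convolutional feature map to capture long-range dependencies. It is not the authors' module; the class name, reduction factor, and residual weighting are assumptions.

```python
# Minimal sketch of a non-local (self-attention) block for long-range features.
# Illustrative only; not the module described in the paper.
import torch
import torch.nn as nn

class LongRangeAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                      # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out  # residual connection

# Example: attend over a 40x40 feature map with 256 channels
feat = torch.randn(1, 256, 40, 40)
print(LongRangeAttention(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```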
Infrared images reveal thermal signatures, enabling enhanced visibility and detection of objects, especially in low-light conditions or scenarios where conventional cameras struggle. However, detecting small moving objects, such as cars, drones, and people, in infrared images is challenging because ground clutter and complex image backgrounds produce many false positives in the detection results. To address this problem, a two-stage moving object detection method is proposed in this paper. First, a newly designed attention mechanism is inserted into the feature extraction backbone of the YOLOv8 model, which helps the detection framework focus on the object region, reducing interference from complex image backgrounds and increasing the detection rate. Second, a false positive (FP) reduction strategy based on object motion analysis is proposed: detections in the previous video frame are associated with those in the current frame, the target motion pattern is analyzed, and FPs that do not conform to the expected movement patterns are removed. Experimental results show that the proposed method not only reduces false positives but also increases the detection rate.
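The following sketch illustrates one simple way such a motion-based FP filter could work: each current detection is associated with the nearest detection in the previous frame and kept only if its displacement is plausible. The box format and the thresholds are assumptions, not the paper's settings.

```python
# Minimal sketch of frame-to-frame association plus a motion-consistency check
# for false positive removal. Thresholds and (cx, cy, w, h) box format assumed.
import numpy as np

def filter_by_motion(prev_boxes, curr_boxes, max_shift=30.0, min_shift=0.5):
    """Keep current detections whose nearest match in the previous frame
    moved a plausible distance (neither static clutter nor an impossible jump)."""
    kept = []
    prev_centers = np.array([b[:2] for b in prev_boxes]) if prev_boxes else np.empty((0, 2))
    for box in curr_boxes:
        if prev_centers.size == 0:
            kept.append(box)            # no history yet, keep detection
            continue
        dists = np.linalg.norm(prev_centers - np.array(box[:2]), axis=1)
        shift = dists.min()             # displacement to the closest previous detection
        if min_shift <= shift <= max_shift:
            kept.append(box)            # consistent with a moving object
    return kept

prev = [(100.0, 120.0, 20.0, 15.0)]
curr = [(108.0, 123.0, 20.0, 15.0),     # plausible motion -> kept
        (400.0, 50.0, 18.0, 14.0)]      # no nearby predecessor -> removed
print(filter_by_motion(prev, curr))
```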
In many scene classification applications, the variety of surface objects, high within-category diversity, and between-category similarity pose challenges for the classification framework. Most CNN-based classification methods extract image features from only a single network layer, which makes it difficult to capture complete image information in complex scenes. We propose transfer deep combined convolutional activations (TDCCA), a novel approach that integrates both low-level and high-level features. Extensive comparative experiments are conducted on the UC Merced, Aerial Image, and NWPU-RESISC45 databases. The results reveal that the proposed TDCCA achieves higher accuracy than other state-of-the-art methods.
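A minimal sketch of combining a low-level and a high-level activation from a pretrained CNN into one descriptor, in the spirit of TDCCA, is shown below; the VGG-16 backbone, the chosen layer indices, and global average pooling are illustrative assumptions rather than the paper's exact design.

```python
# Sketch: concatenating pooled activations from two depths of a pretrained CNN.
# Backbone, layer indices and pooling are assumptions for illustration only.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def combined_descriptor(image):
    """image: (1, 3, 224, 224) normalized tensor -> concatenated feature vector."""
    feats = []
    x = image
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in (15, 29):                       # a mid-level and a deep conv block
                feats.append(x.mean(dim=(2, 3)))    # global-average-pool each map
    return torch.cat(feats, dim=1)                  # low-level + high-level features

desc = combined_descriptor(torch.randn(1, 3, 224, 224))
print(desc.shape)  # torch.Size([1, 768]) for layers 15 (256 ch) and 29 (512 ch)
```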
Images are an important source of information in modern warfare, and making effective use of that information allows reconnaissance capability to be exploited fully. Images captured in fog are degraded by the light scattering, absorption, and other effects of suspended atmospheric particles, and the resulting quality loss creates many difficulties for battlefield reconnaissance and recognition. Combining the dark and bright channel priors (bi-channel priors), superpixels are used as local regions, so that local transmission and atmospheric light values are estimated more reliably and efficiently. Furthermore, adaptive bi-channel priors are developed to correct erroneous estimates of transmission and atmospheric light for white and black pixels that fail to satisfy the assumptions of the bi-channel priors. Experimental results demonstrate that white and black pixels in the restored UAV images are reproduced with excellent fidelity, that the proposed method restores images better both quantitatively and qualitatively, and that it yields substantial improvements in real-time defogging.
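The sketch below shows the kind of per-superpixel dark/bright channel statistics the bi-channel priors rely on; the SLIC segmentation and its parameters are assumptions used only for illustration.

```python
# Sketch: dark and bright channel statistics computed per superpixel.
# SLIC parameters and scikit-image usage are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic

def bi_channel_per_superpixel(img, n_segments=400):
    """img: HxWx3 float image in [0, 1]. Returns per-pixel dark and bright
    channel maps, each constant within a superpixel."""
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=0)
    dark = np.zeros(img.shape[:2])
    bright = np.zeros(img.shape[:2])
    for lab in np.unique(labels):
        mask = labels == lab
        region = img[mask]                 # all pixels (and channels) of one superpixel
        dark[mask] = region.min()          # dark channel prior statistic
        bright[mask] = region.max()        # bright channel prior statistic
    return dark, bright

img = np.random.rand(120, 160, 3)
dark, bright = bi_channel_per_superpixel(img)
print(dark.shape, bright.shape)
```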
To realize detection and precise positioning of small-caliber visual optical system targets, a study of precision detection based on laser active reconnaissance was carried out, exploiting the "cat eye effect" of photoelectric systems. The effects of parameters such as laser emission power, receiving aperture, and detection distance on detection performance were simulated. Experiments verified the ability to detect small-caliber targets at a given laser power. The experimental results show that a visual optical system with a 10 mm aperture can be accurately detected at 700 m under a laser peak power of 1000 W and a laser divergence angle of 1.5 mrad.
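For orientation, the sketch below computes a highly simplified active-laser link budget of the kind the abstract says was simulated (peak power, beam divergence, target aperture, receiving aperture, range). The cat-eye retro-reflection is collapsed into a single effective reflectance and the return divergence is assumed equal to the outgoing one, so this is illustrative only and not the paper's model.

```python
# Sketch: order-of-magnitude active-laser link budget. All formulas and values
# are illustrative assumptions, not the paper's simulation model.
import math

def received_power(p_peak, divergence_rad, range_m, target_diam_m,
                   receiver_diam_m, retro_reflectance=0.1, atm_transmission=0.8):
    """Return a rough estimate of the retro-reflected power (W) at the receiver."""
    beam_radius = divergence_rad * range_m / 2.0          # spot radius at the target
    irradiance = p_peak * atm_transmission / (math.pi * beam_radius ** 2)
    p_on_target = irradiance * math.pi * (target_diam_m / 2.0) ** 2
    # power sent back toward the receiver after retro-reflection and return path loss
    p_back = p_on_target * retro_reflectance * atm_transmission
    receiver_area = math.pi * (receiver_diam_m / 2.0) ** 2
    # assume the return beam spreads over a cone comparable to the outgoing one
    return_spot_area = math.pi * (divergence_rad * range_m / 2.0) ** 2
    return p_back * receiver_area / return_spot_area

# Conditions quoted in the abstract: 1000 W peak power, 1.5 mrad divergence,
# 10 mm target aperture, 700 m range (the receiver diameter here is assumed)
print(received_power(1000.0, 1.5e-3, 700.0, 0.01, 0.05))
```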
Unmanned aerial vehicles are widely used in military and civil areas, which requires vision processing suited to the specific usage scenario. Haze or fog can impair the context-awareness capability of aerial vehicles and degrade their target tasks. Images captured in hazy scenes suffer from degradation, including poor contrast, color distortion, and incomplete information, which causes many difficulties in subsequent processing. A simple and effective single-image dehazing algorithm based on the atmospheric scattering model and optimization of an image quality evaluation is proposed in this paper. Three image quality parameters, image entropy, standard deviation, and Fourier amplitude, are combined to establish the image quality evaluation function. On the basis of this function, the candidate defogged image with the optimal quality score is chosen as the final result. Results show that this method has lower computational complexity, simpler operations, and improved real-time performance.
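A minimal sketch of such a quality-driven selection is given below: the score combines image entropy, standard deviation, and mean Fourier amplitude, and the candidate defogged image with the best score is chosen. The grayscale input, the equal weighting, and the normalization constants are assumptions.

```python
# Sketch: no-reference quality score (entropy + std + Fourier amplitude) used
# to pick the best candidate dehazed image. Weights and normalization assumed.
import numpy as np

def quality_score(img):
    """img: HxW grayscale array in [0, 1]."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0), density=True)
    hist = hist[hist > 0] / hist[hist > 0].sum()
    entropy = -np.sum(hist * np.log2(hist))        # image entropy
    std = img.std()                                # global contrast
    fourier = np.abs(np.fft.fft2(img)).mean()      # mean spectral amplitude
    # scale the three terms to comparable ranges before summing (assumed weighting)
    return entropy / 8.0 + std / 0.5 + fourier / (img.size ** 0.5)

def pick_best(candidates):
    """Return the candidate dehazed image with the highest quality score."""
    return max(candidates, key=quality_score)

candidates = [np.random.rand(64, 64) * s for s in (0.4, 0.7, 1.0)]
best = pick_best(candidates)
print(quality_score(best))
```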