An innovative deep learning-based solution to address motion blur in radar images, resulting from radar platform or target movement, is presented in this work. Leveraging a Convolutional Neural Network (CNN), the proposed method learns a mapping from blurred to deblurred images, while a separate CNN estimates the point spread function (PSF) of the motion blur. This estimated PSF is then used to reconstruct deblurred images, optimising the reconstruction process by integrating the relationship between the input image, estimated PSF, and ground truth into the training loss term. Trained on a comprehensive dataset of simulated blurred and deblurred radar images, generated from a numerical imaging model, the model exhibits exceptional performance, outperforming state-of-the-art methods across varying degrees and lengths of blur. Specifically, testing on 6,410 images yields mean squared error (MSE) and structural similarity index (SSIM) scores of 0.0086 and 0.9398, respectively. Additionally, validation on experimental measurements showcases promising results. This comprehensive evaluation underscores the effectiveness and versatility of the proposed approach, offering significant advancements in radar image processing for various applications such as target detection, recognition, surveillance, and navigation.
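The abstract does not give the exact loss, but a minimal sketch of how a re-blur consistency term might couple the input image, the estimated PSF and the ground truth is shown below (PyTorch, single-channel images assumed; the function name and weighting factor are illustrative assumptions, not the authors' implementation):

    import torch
    import torch.nn.functional as F

    def deblur_loss(deblurred, psf_est, blurred_input, ground_truth, weight=0.1):
        # Pixel-wise fidelity between the network output and the ground truth.
        fidelity = F.mse_loss(deblurred, ground_truth)
        # Re-blur consistency: applying the estimated PSF to the ground truth
        # should reproduce the blurred input image. Assumes an odd PSF size so
        # that 'same' padding is exact; conv2d performs correlation, which
        # coincides with convolution for a symmetric PSF.
        k = psf_est.shape[-1]
        reblurred = F.conv2d(ground_truth, psf_est.view(1, 1, k, k), padding=k // 2)
        consistency = F.mse_loss(reblurred, blurred_input)
        return fidelity + weight * consistency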
Conventional microwave imaging can provide high-quality reconstructed images, but is limited by increased hardware complexity and slow data acquisition speed. Although computational imaging (CI)-based systems have been developed as alternatives, they may require substantial computational power and time. To reduce the hardware complexity and computational burden associated with scene reconstructions in CI applications, in this paper, a conditional generative adversarial network (cGAN) is presented to achieve image reconstruction, where the back-scattered measurement is regarded as both the condition and the input of the proposed network. With the testing dataset, the average values of the normalized mean squared error (NMSE) and the normalized mean absolute error (NMAE) are 0.0474 and 0.2267, respectively. In addition, a noise analysis is conducted, showing the reliability of the proposed network in noisy settings.
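The two figures of merit are standard; one common definition is sketched below in NumPy (the paper may use a slightly different normalization, so the exact formulae are an assumption):

    import numpy as np

    def nmse(recon, truth):
        # Normalized mean squared error of a reconstruction against the ground truth.
        return np.sum(np.abs(recon - truth) ** 2) / np.sum(np.abs(truth) ** 2)

    def nmae(recon, truth):
        # Normalized mean absolute error.
        return np.sum(np.abs(recon - truth)) / np.sum(np.abs(truth))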
This paper introduces a 3-D near-field microwave imaging approach, combining a special 2-D multiple-input multiple-output (MIMO) structure with orthogonal coding and Fourier domain processing. The proposed MIMO coded generalized reduced dimension Fourier algorithm effectively reduces data dimensionality while preserving valuable information, streamlining image reconstruction. Through mathematical derivations, we show how the proposed approach includes phase and amplitude compensators and reduces the computational complexity while mitigating propagation loss effects. Numerical simulations confirm the approach’s satisfactory performance in terms of information retrieval and processing speed.
This paper proposes a novel approach to 3-D microwave imaging using dynamic metasurface antennas in a multistatic configuration. By introducing a panel-to-panel model and a preprocessing technique, raw measurements are converted into the space-frequency domain for efficient data acquisition and reconstruction. Adapting the range migration algorithm in this work enables fast Fourier-based image reconstruction. Simulation results showcase the effectiveness of the proposed method, highlighting its potential for real-world applications.
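The panel-to-panel model and preprocessing cannot be reproduced from the abstract alone, but a generic range-migration step in the spatial-frequency domain, used here purely as an illustration of the Fourier-based reconstruction idea, might look as follows (NumPy; the simple summation over the frequency axis stands in for a proper Stolt interpolation):

    import numpy as np

    def rma_reconstruct(data, k, kx, ky, z0):
        # data: preprocessed samples D(kx, ky, k) in the space-frequency domain,
        # i.e. after the 2-D FFT over the aperture coordinates.
        KX, KY, K = np.meshgrid(kx, ky, k, indexing='ij')
        kz2 = 4.0 * K ** 2 - KX ** 2 - KY ** 2
        kz = np.sqrt(np.maximum(kz2, 0.0))
        # Back-propagate to the target plane z0 and discard evanescent components.
        propagated = data * np.exp(1j * kz * z0) * (kz2 > 0)
        # Collapse the frequency axis (crude stand-in for Stolt interpolation)
        # and return to the spatial domain.
        image = np.fft.ifft2(propagated.sum(axis=-1))
        return np.abs(np.fft.fftshift(image))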
Conventional microwave imaging-based approaches can produce high-quality image reconstructions. At the same time, these techniques typically suffer from increased hardware complexity, cost and slow data acquisition speeds. Although computational imaging (CI)-based systems have been developed as an alternative, they may demand significant computational power and time, especially in the calculation and storage of the transfer function (or the sensing matrix) of the CI system. Previous work, however, considers only the scenario where the transmitter and receiver share the same set of aperture distribution fields. To address this challenge, this paper presents a new technique, where the sensing matrix is calculated directly from the aperture fields of the antennas in a CI system. Here, the transmitter and the receiver apertures can be different and they do not necessarily need to have the same field distributions. With the testing dataset, the average value of the normalized mean squared error (NMSE) is 0.0243. In addition, compared to the traditional method, the proposed network reduces the computation time for the sensing matrix by approximately 67%. The proposed network can predict the sensing matrix from two different sets of aperture distribution fields with high accuracy while significantly saving computation time.
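For context, under the first Born approximation each row of the sensing matrix associated with one transmit/receive mode pair is the element-wise product of the two aperture-radiated fields evaluated on the scene grid; a sketch of this forward relation (not the paper's learned predictor) is given below:

    import numpy as np

    def sensing_matrix(E_tx, E_rx):
        # E_tx, E_rx: complex fields radiated by the transmit and receive apertures,
        # evaluated on the scene grid and shaped (n_measurements, n_voxels).
        # The two apertures need not share the same field distributions.
        return E_tx * E_rx

    # Forward model: measurement vector g = H @ f for a scene reflectivity vector f.
    # g = sensing_matrix(E_tx, E_rx) @ f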
Deep learning methodologies are extensively applied in addressing two-dimensional (2D) and three-dimensional (3D) computer vision challenges, encompassing tasks like object detection, super-resolution (SR), and classification. Radar imagery, however, contends with lower resolution compared to optical counterparts, posing a formidable obstacle in developing accurate computer vision models, particularly classifiers. This limitation stems from the absence of high-frequency details within radar imagery, complicating precise predictions by classifier models. Common strategies to mitigate this issue involve training on expansive datasets or employing more complex models, potentially susceptible to overfitting. However, generating sizeable datasets, especially for radar imagery, is challenging. Presenting an innovative solution, this study integrates a Convolutional Neural Network (CNN)-driven SR model with a classifier framework to enhance radar classification accuracy. The SR model is trained to upscale low-resolution millimetre-wave (mmW) images to high-resolution (HR) counterparts. These enhanced images serve as inputs for the classifier, distinguishing between threat and non-threat entities. Training data for the two CNN stages is generated utilising a numerical model simulating a near-field coded-aperture computational imaging (CI) system. Evaluation of the resulting dual CNN model with simulated data yields a remarkable classification accuracy of 95%, accompanied by rapid inference time (0.193 seconds), rendering it suitable for real-time threat classification applications. Further validation with experimentally generated reconstruction data attests to the model’s robustness, achieving a classification accuracy of 94%. This integrated approach presents a promising solution for enhancing radar imagery analysis accuracy, offering substantial implications for real-world threat detection scenarios.
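A minimal sketch of such a two-stage pipeline is given below (PyTorch); the layer sizes, upscaling factor and class count are illustrative assumptions rather than the architecture used in the study:

    import torch
    import torch.nn as nn

    class SRNet(nn.Module):
        # Lightweight SR model: learns a residual on a bicubically upsampled
        # low-resolution mmW image.
        def __init__(self, scale=2):
            super().__init__()
            self.upsample = nn.Upsample(scale_factor=scale, mode='bicubic',
                                        align_corners=False)
            self.body = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, x):
            up = self.upsample(x)
            return up + self.body(up)

    class ThreatClassifier(nn.Module):
        # Binary threat / non-threat classifier operating on the SR output.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # Inference sketch: logits = ThreatClassifier()(SRNet()(low_res_image))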
In this paper, first, the structure of a linear sparse periodic array for two-dimensional scanning is described. Then, based on its characteristics, an algorithm is presented for fast image reconstruction of the scene in a near-field (NF) multistatic terahertz imaging scenario. Although the basis of this algorithm is developed in the Fourier domain, it is compatible with the non-uniform structure of the array and also takes into account the phase deviations caused by multistatic imaging in NF. The performance of the proposed approach is evaluated with numerical data obtained from electromagnetic simulations in FEKO as well as experimental data. The results are discussed in terms of computational time on the central processing unit and graphics processing unit as well as the quality of the reconstructed image.
In recent years, dynamic metasurface antennas (DMAs) have been proposed as an efficient alternative platform for computational imaging, which can drastically simplify the hardware architecture. In this paper, we first mathematically describe the existing solution to be able to convert raw measurements obtained by a DMA in the frequency-space domain into raw data on Fourier bases. Next, an optimization problem based on compressive sensing theory is defined, through which only a limited share of the total frequency/spatial data will be needed. The converted/retrieved data are used to reconstruct the image in the Fourier domain. The performance of the corresponding image reconstruction techniques (with/without Stolt interpolation operation) is evaluated in terms of the quality of the reconstructed image (both visually and quantitatively) and computational time with computer simulations.
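The compressive-sensing retrieval step can be illustrated with a generic sparsity-regularized solver; the ISTA sketch below (NumPy) is only a stand-in for the optimization problem defined in the paper, and the operator A, data g and regularization weight are assumptions:

    import numpy as np

    def ista(A, g, lam=0.01, iters=200):
        # Iterative soft-thresholding for min_x 0.5*||A x - g||_2^2 + lam*||x||_1,
        # here used to recover the limited share of frequency/spatial data.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(iters):
            z = x - step * (A.conj().T @ (A @ x - g))
            # Complex soft-thresholding (proximal operator of the l1 norm).
            mag = np.abs(z)
            x = np.where(mag > step * lam,
                         (1.0 - step * lam / np.maximum(mag, 1e-12)) * z, 0)
        return x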
An approach to designing multiple waveforms in a multiple-input multiple-output (MIMO) system is presented so that the full capacity of the transmitting and receiving antennas can be utilized at the same time. On the transmitter side, the antenna elements are classified into different groups according to their specific signals. On the receiver side, we use a multi-resolution analysis to retrieve the signals of each channel. Due to the superior characteristics of the FMCW signal, especially in terms of sampling, an FMCW radar is considered in the proposed approach. To adapt the introduced system to multistatic near-field imaging, we use more accurate models than the effective phase center principle. This contributes to the successful reconstruction of the scene image by efficient Fourier-based image reconstruction methods. The performance of the proposed approach is confirmed by numerical simulations.
This paper proposes a simple design method for a multi-static aperiodic array to achieve 220 GHz sparse imaging, and a corresponding image reconstruction algorithm based on the Fast Fourier Transform (FFT) and sparse data recovery. The proposed aperiodic sparse array originates from the linear sparse periodic array (SPA) and can further reduce the number of sampling points and transceivers, and hence the system cost, compared to an SPA-based imaging system. A low-rank matrix recovery technique with principal component pursuit by the alternating directions method (PCPADM) is used to recover the missing data caused by the sparse sampling. In order to achieve fast image reconstruction, an FFT-based matched filtering method is used in which multistatic-to-monostatic conversion and interpolation are applied for data pre-processing. The proposed imaging scheme has been verified in experiments. An imaging resolution of 6 mm is achieved at 1.4 m with a 192 mm × 300 mm field of view, with a significantly reduced reconstruction time in comparison to the generalized synthetic aperture focusing technique (GSAFT).
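The PCPADM solver itself is beyond the scope of an abstract, but the low-rank recovery idea can be illustrated with a simplified singular-value-thresholding completion of the sparsely sampled data matrix (NumPy; the threshold, step size and iteration count are illustrative assumptions):

    import numpy as np

    def svt_complete(M, mask, tau=None, step=1.2, iters=100):
        # M: data matrix with zeros at the entries skipped by sparse sampling.
        # mask: boolean array, True where a sample was actually acquired.
        if tau is None:
            tau = 5.0 * np.sqrt(M.size)
        Y = np.zeros_like(M)
        for _ in range(iters):
            U, s, Vh = np.linalg.svd(Y, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vh     # shrink singular values
            Y = Y + step * mask * (M - X)               # enforce agreement on observed entries
        return X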
Computational millimetre-wave (mmW) imaging and machine learning have followed parallel tracks since their inception. Recent developments in computational imaging (CI) have significantly improved the imaging capabilities of mmW imaging systems. Machine learning algorithms have also gained huge popularity among researchers in the recent past, with several approaches being investigated to make use of them in imaging systems. One such algorithm, the image classifier, has gained significant traction in applications such as security screening and traffic surveillance. In this article, we present the first steps towards a machine learning integrated CI physical model for image classification at mmW frequencies. The dataset used for training the CI system is generated using the developed single-pixel CI forward model, eliminating the need for traditional raster-scanning based imaging techniques.
KEYWORDS: Radar, Receivers, Radar signal processing, Antennas, Signal processing, Information security, Data acquisition, Compressed sensing, Complex systems, Coded apertures
Radar systems for direction of arrival (DoA) estimation have been the subject of significant research with applications ranging from security to channel sounding and automotive radars. Conventional DoA retrieval techniques rely on an array based system architecture as the receiving unit, typically synthesized at the Nyquist limit. This classical array based approach makes it necessary to collect the received radar signals from multiple channels, and process them using DoA estimation algorithms to retrieve the DoA information of incoming far-field sources. A challenge with this multi-pixel approach is that, as the operating frequency is increased, the number of antennas (and hence the number of data acquisition channels) also increases. This can result in a rather complex system architecture at the receiver unit, especially at millimetre-wave and submillimetre-wave frequencies. As an enabling technology for the compressive sensing paradigm, a single-pixel based coded aperture can substantially simplify the physical hardware layer for DoA estimation. A significant advantage of this technique is that the received data from the source is compressed into a single channel, circumventing the necessity to have array-based multiple channels to retrieve the DoA information. In this work, we present a passive compressive sensing radar technique for DoA estimation using a single-frequency, dynamically reconfigurable wave-chaotic metasurface antenna as a receiver. We demonstrate that by using spatiotemporally incoherent measurement modes generated by the coded programmable metasurface aperture to encode and compress source-generated far-field incident waves into a single channel, we can retrieve high-fidelity DoA patterns from compressed measurements.
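A compact sketch of how a DoA spectrum can be formed from single-channel compressed measurements is shown below (NumPy); the matched-filter estimator and the variable names are illustrative, and a sparse-recovery solver could be substituted for higher fidelity:

    import numpy as np

    def doa_spectrum(g, modes, steering):
        # g: compressed single-channel measurements, one sample per aperture mode.
        # modes: (n_modes, n_elements) complex weights of the programmable
        #        metasurface aperture for each measurement mode.
        # steering: (n_elements, n_angles) far-field steering vectors over the
        #           candidate directions of arrival.
        H = modes @ steering                  # sensing matrix: angles -> measurements
        return np.abs(H.conj().T @ g)         # matched-filter DoA spectrum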
In this paper, we describe the recent development of new algorithms applied to short-range radar imaging. Facing the limitations of classical backpropagation algorithms, the use of techniques based on Fast Fourier Transforms has led to substantial image computation accelerations, especially for Multiple-Input Multiple-Output systems. The necessary spatial interpolation and zero-padding steps are still particularly limiting in this context, so it is proposed to replace them with a more efficient matrix technique, showing improvements in memory consumption, image computation speed and reconstruction quality.
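The exact matrix formulation is not given in the abstract, but the general idea of trading a zero-padded FFT plus interpolation for a direct (possibly non-uniform) DFT matrix product can be sketched as follows (NumPy; one-dimensional case, names are illustrative):

    import numpy as np

    def nudft_matrix(k_samples, x_grid):
        # Explicit DFT matrix mapping the uniform image grid x_grid to the
        # measured, possibly non-uniform, spatial frequencies k_samples.
        # Applying it as a dense product avoids zero-padding and interpolation,
        # at the cost of an O(M*N) matrix-vector multiplication.
        return np.exp(-1j * np.outer(k_samples, x_grid))

    # Usage sketch: spectrum = nudft_matrix(k, x) @ image_line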
Conventionally, resolution characterization of an imaging radar is performed by means of analyzing the diffraction limited point-spread-function (PSF) pattern of the radar. Such an analysis is straightforward and can easily be implemented at microwave and millimeter-wave frequencies using simple point-scatter targets. However, it poses significant challenges at submillimeter-wave (or THz) frequencies due to the strong scattering response of secondary objects that are used to align the PSF targets for imaging at these frequencies. As a result, the reconstructed PSF patterns suffer from artifacts caused by the secondary objects present in the background. In this work, we present the use of the acoustic levitation principle to obtain the PSF characterization of a 340 GHz stand-off imaging radar. We show that using a water droplet acoustically levitated at the focal point of the 340 GHz imaging radar, high-fidelity PSF characterization of the radar is achieved, revealing the resolution limits of the radar while exhibiting good signal-to-noise ratio (SNR).
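Once an artifact-free PSF is measured, the resolution figure follows directly from its half-power width; a trivial sketch of that extraction is given below (NumPy; assumes a one-dimensional cut through the PSF with uniform sample spacing):

    import numpy as np

    def half_power_width(psf_cut, spacing):
        # Approximate -3 dB width of a 1-D cut through the measured PSF,
        # with 'spacing' the sample spacing of the cut (e.g. in millimetres).
        power = np.abs(psf_cut) ** 2
        return np.count_nonzero(power >= 0.5 * power.max()) * spacing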
In this paper, we review modern advances in microwave and millimeter-wave computational frequency-diverse imaging, and submillimeter-wave radar systems. We first present a frequency-diverse computational imaging system developed by Duke University for security-screening applications at K-band (17.5-26.5 GHz) frequencies. Following this, we show a millimeter-wave spotlight imaging concept and its conceptual integration with the K-band system as an interesting example of sensor fusion. We also demonstrate the application of computational frequency-diverse imaging for polarimetric imaging and phase retrieval problems. We show that using the concept of computational frequency-diverse imaging and quasi-random measurement bases, high-fidelity images of objects can be retrieved without the need for any mechanical scanning apparatus and phase shifting circuits. Increasing the frequency-band of operation, we also demonstrate a 340 GHz radar developed by the Jet Propulsion Laboratory and its application for standoff detection. We demonstrate a new technique to characterize the point-spread-function (PSF) of radars operating at submillimeter-wave frequencies.
In this paper, a spotlight imaging system integrated with a frequency-diverse aperture is presented for security-screening applications. The spotlight imager consists of holographic metasurface antennas that can dynamically be tuned to radiate spotlight patterns allowing the extraction of high-resolution images from a constrained field-of-view (FOV). The reconfigurable holographic metasurface antennas consist of a metasurface layer used to modulate the guided-mode reference to an aperture field of interest producing the desired radiated wavefronts. The reconfigurable operation is achieved in an all-electronic manner without the need for any mechanical moving apparatus or phase shifting circuits. The spotlight aperture operates at a single frequency, 75 GHz, within the W-band frequency regime (75 – 110 GHz) and is used for the high-resolution identification of threat objects while the frequency-diverse aperture operates at K-band frequencies (17.5 – 26.5 GHz) and is used for low-resolution detection purposes. The scene to be imaged is first interrogated using the K-band aperture at low resolution and the constrained-FOV is imaged using the W-band system to achieve the identification of threat objects.
Computational imaging is a proven strategy for obtaining high-quality images with fast acquisition rates and simpler hardware. Metasurfaces provide exquisite control over electromagnetic fields, enabling the radiated field to be molded into unique patterns. The fusion of these two concepts can bring about revolutionary advances in the design of imaging systems for security screening. In the context of computational imaging, each field pattern serves as a single measurement of a scene; imaging a scene can then be interpreted as estimating the reflectivity distribution of a target from a set of measurements. As with any computational imaging system, the key challenge is to arrive at a minimal set of measurements from which a diffraction-limited image can be resolved. Here, we show that the information content of a frequency-diverse metasurface aperture can be maximized by design, and used to construct a complete millimeter-wave imaging system spanning a 2 m by 2 m area, consisting of 96 metasurfaces, capable of producing diffraction-limited images of human-scale targets. The metasurface-based frequency-diverse system presented in this work represents an inexpensive, but tremendously flexible alternative to traditional hardware paradigms, offering the possibility of low-cost, real-time, and ubiquitous screening platforms.
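In the computational-imaging formulation described here, the measurements g are related to the scene reflectivity f through a sensing matrix H whose rows are the scene-projected field patterns; a minimal reconstruction sketch (NumPy) is shown below, with the matched-filter and regularized least-squares estimators as generic examples rather than the system's actual reconstruction chain:

    import numpy as np

    def reconstruct(g, H, method='matched'):
        # g: frequency-diverse measurements, H: (n_measurements, n_voxels) sensing matrix.
        if method == 'matched':
            return H.conj().T @ g                         # matched-filter estimate
        # Tikhonov-regularized least squares (the weight is illustrative).
        lam = 1e-2 * np.linalg.norm(H, 2) ** 2
        A = H.conj().T @ H + lam * np.eye(H.shape[1])
        return np.linalg.solve(A, H.conj().T @ g)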