Space-time adaptive processing (STAP) is an important radar and sonar technique used to suppress clutter and jamming. However, traditional constant false alarm rate (CFAR) cascade detection methods struggle to provide the explicit locations and numbers of targets and jammers, while general-purpose data-driven object detectors usually consume a large number of floating-point operations (FLOPs) and parameters. To address this problem, in this letter we propose a new idea: a dedicated data-driven object detector that predicts bounding boxes and class probabilities directly from STAP power images (STAPDet). This idea exploits the characteristics of STAP to customize the detector architecture. Specifically, STAPDet first introduces an ultra-lightweight backbone that effectively recognizes the clearly distinguishable STAP objects. Second, the detector enlarges the receptive field of the detection head to cover the limited scales of STAP objects instead of relying on a complicated neck to fuse multi-scale features. Last, STAPDet adopts a single detection head to predict the sparse STAP objects with better simplicity and fewer parameters. Experiments on real-world data demonstrate that STAPDet provides accurate location and number information of objects while greatly reducing computational complexity and parameter count compared with existing state-of-the-art counterparts. These results validate the effectiveness of our idea and suggest a new perspective on designing efficient dedicated detectors.
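The abstract does not include code; the following is a minimal PyTorch sketch of the kind of architecture it describes (a tiny backbone, no multi-scale neck, and a single detection head whose receptive field is widened with dilated convolutions). All class names, layer sizes, and the output layout are assumptions for illustration, not the authors' released design.

```python
# Hypothetical sketch of a STAPDet-style detector: lightweight backbone, no neck,
# single dilated-convolution detection head. Sizes are illustrative only.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Ultra-lightweight feature extractor for single-channel STAP power images."""
    def __init__(self, channels=(8, 16, 32)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class SingleHead(nn.Module):
    """One detection head; dilation enlarges the receptive field instead of a neck."""
    def __init__(self, in_ch=32, num_classes=2, num_anchors=1):
        super().__init__()
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=2, dilation=2, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
        )
        # Per anchor: 4 box offsets + 1 objectness score + class scores.
        self.pred = nn.Conv2d(in_ch, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, feat):
        return self.pred(self.context(feat))

class STAPDetSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = TinyBackbone()
        self.head = SingleHead(num_classes=num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

if __name__ == "__main__":
    model = STAPDetSketch()
    dummy = torch.randn(1, 1, 128, 128)   # one STAP power image
    print(model(dummy).shape)              # torch.Size([1, 7, 16, 16])
```

The sketch keeps the parameter count small by using a single prediction scale; decoding the raw head output into boxes and class probabilities would follow the usual anchor-based post-processing, which is omitted here.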
KEYWORDS: Neural networks, Copper, Data modeling, Convolutional neural networks, Field programmable gate arrays, Control systems, Convolution, Data storage, Matrices
Convolutional neural networks are widely used in image recognition, but the associated models are computationally demanding, and several solutions have been proposed to accelerate their computation. Sparse neural networks are an effective way to reduce computational complexity; however, most current acceleration schemes do not fully exploit this sparsity. In this paper, we design an acceleration unit using an FPGA as the hardware platform. The accelerator achieves parallel acceleration through multiple CU modules and eliminates unnecessary operations with a Match module to improve efficiency. Experimental results show that at ninety percent sparsity, performance improves by a factor of 3.2.
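As a rough software model of the zero-skipping idea described above, the sketch below pairs only nonzero weights with their activations before accumulating, so the multiply-accumulate work scales with the number of nonzero weights rather than the full kernel size. The function name, array shapes, and the 90% sparsity setting are assumptions for illustration, not the paper's hardware design.

```python
# Hypothetical software model of sparsity-aware acceleration: a "match" step selects
# the nonzero weight positions so no cycles are spent multiplying by zero.
import numpy as np

def sparse_dot(weights: np.ndarray, activations: np.ndarray) -> float:
    """Dot product that skips positions where the weight is zero."""
    nonzero_idx = np.flatnonzero(weights)           # indices surviving the "match" stage
    return float(weights[nonzero_idx] @ activations[nonzero_idx])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal(1000)
    w[rng.random(1000) < 0.9] = 0.0                 # roughly 90% sparsity, as in the experiment
    a = rng.standard_normal(1000)
    print("result:", sparse_dot(w, a), "useful MACs:", np.count_nonzero(w))
```

In hardware, the selected weight/activation pairs would be streamed to parallel compute units; this model only illustrates why skipping zero operands reduces the useful work at high sparsity.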