Convolutional neural networks are being applied in an increasingly wide range of scenarios, and they can be migrated to the infrared domain. To meet the miniaturization and low-power requirements of embedded devices, a convolutional-layer accelerator is designed on an FPGA. The author shrinks the model by roughly a factor of four through low-bit quantization, reduces invalid computation through padding processing, and improves computational efficiency through data streaming and parallel computing, effectively reducing the computation time of the convolutional layers. Finally, taking the SSD algorithm as an example on the FPGA, the author reduces the computation time to about one tenth of the CPU computation time. Meanwhile, the degradation in the overall detection metric mAP50 (mean average precision) caused by quantization stays within 3%, and the degradation in detection rate and false alarm rate stays within 1%.
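The roughly fourfold model-size reduction follows from storing each weight in a low-bit integer format instead of 32-bit floating point. The abstract does not specify the exact scheme, so the following is a minimal sketch assuming symmetric per-tensor 8-bit quantization (int8 takes 1 byte per weight versus 4 bytes for float32, giving the ~4x reduction); the function names are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: symmetric per-tensor int8 quantization.
# Assumes the model is quantized from float32 to int8 (~4x smaller storage);
# this is an illustration, not the paper's exact scheme.

def quantize_int8(weights):
    """Map float32 weights to int8 codes with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0  # largest magnitude maps to +/-127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.0]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Rounding error per weight is bounded by scale / 2, which drives the
# small (<3%) drop in mAP50 reported in the abstract.
```

Per-channel scales and calibration on real activation statistics would tighten the error further, but the per-tensor version above is enough to show where the 4x storage saving and the bounded accuracy loss come from.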