Video processing in real-time in FPGA
Presentation + Paper, 7 September 2018
Abstract
Computer vision algorithms have a high computational cost. Several algorithms have been developed to extract image features that support classification, object and face recognition, and related tasks, and they have been implemented on general-purpose computers, DSPs, and GPUs, but these solutions do not meet real-time constraints. To improve the performance of these algorithms, we implement the SURF algorithm on an embedded system (FPGA) for application in uncontrolled environments that require real-time response. In this work we develop an FPGA implementation of SURF to reduce processing time for video and image processing, compare the processing time against other devices, and evaluate the features found in the images; these features are invariant to scale, rotation, and lighting. The SURF algorithm localizes interest points (features) and is used in facial recognition, object detection, stereo vision, and similar applications. Because the algorithm processes a large amount of data, its computational cost is high; to reduce this cost we implemented lookup tables (LUTs) and optimized the code to reduce execution time. With this work we seek the best way to implement the algorithm on embedded systems for use in uncontrolled environments and autonomous robots.
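The paper itself does not include code; as a hedged illustration of what a software (CPU) baseline for the timing comparison described above might look like, the sketch below detects SURF interest points on a single grayscale frame with OpenCV's contrib module and measures the detection time. The input filename and the Hessian threshold are assumptions made for the example, not values taken from the paper.

    import time
    import cv2  # requires an opencv-contrib build with non-free algorithms enabled

    # Load one video frame in grayscale (filename is hypothetical).
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # SURF interest-point detector; the Hessian threshold (400) is illustrative.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    # Time detection and description, serving as a CPU reference point for the
    # FPGA timing comparison described in the abstract.
    t0 = time.perf_counter()
    keypoints, descriptors = surf.detectAndCompute(img, None)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0

    print(f"{len(keypoints)} SURF interest points in {elapsed_ms:.1f} ms")

Running a sketch like this per frame on the target images gives a software reference against which an FPGA implementation's per-frame latency can be compared.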
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Erick Morales and Roberto Herrera "Video processing in real-time in FPGA", Proc. SPIE 10751, Optics and Photonics for Information Processing XII, 107510Z (7 September 2018); https://doi.org/10.1117/12.2322021
KEYWORDS
Image filtering, Image processing, Field programmable gate arrays, Detection and tracking algorithms