This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference
points in the environment. Image pre-processing, component labeling, and feature extraction modules were modeled at the
register-transfer (RT) level and synthesized for implementation on a field-programmable gate array (FPGA). The
extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for
computation of distance and angle. A CMOS imaging sensor operating at a 27 MHz clock frequency was used in our
experiments to produce a video stream at 75 frames per second. The image component labeling and feature
extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the
component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing
the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz
clock frequency. In this paper, we present the performance analysis, device utilization, and power consumption of the
designed system. The FPGA-based machine vision system that we propose offers a high frame rate, low latency, and
much lower power consumption than commercially available smart camera solutions.
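The abstract leaves the geometric model behind the distance and angle computation implicit. Below is a minimal C sketch of the MicroBlaze-side calculation, assuming a pinhole camera and two reference points at a known physical separation whose centroids are delivered by the feature-extraction hardware; FOCAL_PX, CX, BASELINE_M, and the centroid_t layout are illustrative assumptions, not values from the paper.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Hypothetical camera parameters -- not taken from the paper. */
#define FOCAL_PX   800.0   /* focal length, in pixels                  */
#define CX         320.0   /* principal point, x coordinate (pixels)   */
#define BASELINE_M 0.50    /* known physical gap between the two
                              reference points, in metres              */

/* Centroid of a labeled image component, as produced by the
 * feature-extraction stage (format assumed for illustration). */
typedef struct { double x, y; } centroid_t;

/* Pinhole-model range: the farther the camera, the smaller the pixel
 * separation between the two reference points. Assumes both blobs are
 * resolved, i.e. the separation is nonzero. */
static double distance_m(centroid_t a, centroid_t b)
{
    double sep_px = fabs(b.x - a.x);
    return FOCAL_PX * BASELINE_M / sep_px;
}

/* Bearing of the reference pair's midpoint relative to the camera's
 * optical axis, in radians. */
static double angle_rad(centroid_t a, centroid_t b)
{
    double mid_x = 0.5 * (a.x + b.x);
    return atan2(mid_x - CX, FOCAL_PX);
}

int main(void)
{
    /* Example centroids, as they might arrive from the FPGA pipeline
     * over the FSL interface. */
    centroid_t p = { 250.0, 240.0 }, q = { 410.0, 240.0 };

    printf("distance: %.2f m\n", distance_m(p, q));
    printf("angle:    %.1f deg\n", angle_rad(p, q) * 180.0 / M_PI);
    return 0;
}
```

Keeping only this lightweight trigonometry on the soft-core processor is consistent with the reported split: the pixel-rate work stays in the hardware pipeline, and the MicroBlaze handles a small, per-frame computation.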
Implementing vision systems on wireless smart cameras using embedded platforms involves a number of challenges
caused by the large amount of data and the limited resources available. The common challenges include limited memory,
limited processing capability, limited bandwidth, and, in the case of battery-operated systems, power consumption.
Research in this field usually focuses on developing a specific solution for a particular problem. To implement vision
systems on an embedded platform, designers must first investigate the resource requirements of a design; failure to do
so may result in additional design time and cost to meet the specifications.
There is a need for a tool that can predict the resource requirements for the development and
comparison of vision solutions in wireless smart cameras. To accelerate the development of such a tool, we have used a
system taxonomy, which shows that the majority of vision systems for wireless smart cameras share common
functionality focused on object detection, analysis, and recognition. In this paper, we have investigated the arithmetic complexity and
memory requirements of vision functions by using the system taxonomy and have proposed an abstract complexity model. To
demonstrate the use of this model, we have analysed a number of implemented systems and shown that the
complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions.
The study will assist researchers and designers in predicting the resource requirements of different classes of vision
systems implemented on wireless smart cameras with reduced time and effort. This, in turn, will simplify the
comparison and generalization of solutions for wireless smart cameras.
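The abstract describes the complexity model only at a high level. As a rough illustration of the idea, the C sketch below estimates the arithmetic load and line-buffer memory of one vision function from the frame size and frame rate; the vision_func_t fields and the 3x3 convolution figures are assumptions made for this example, not the paper's actual model.

```c
#include <stdio.h>

/* One entry of a hypothetical abstract complexity model: per-pixel
 * arithmetic cost and line-buffer memory for a vision function. */
typedef struct {
    const char *name;
    double ops_per_pixel;   /* arithmetic operations per pixel      */
    int    buffer_lines;    /* image lines that must be buffered    */
    int    bytes_per_pixel; /* storage width of buffered pixels     */
} vision_func_t;

/* Estimated arithmetic load, in operations per second. */
static double ops_per_sec(vision_func_t f, int w, int h, double fps)
{
    return f.ops_per_pixel * w * h * fps;
}

/* Estimated on-chip memory, in bytes (line buffers only). */
static double mem_bytes(vision_func_t f, int w)
{
    return (double)f.buffer_lines * w * f.bytes_per_pixel;
}

int main(void)
{
    /* A 3x3 convolution: 9 multiply-accumulates per pixel and two
     * buffered lines -- typical figures, assumed for illustration. */
    vision_func_t conv3x3 = { "conv3x3", 9.0, 2, 1 };

    int w = 640, h = 480;
    double fps = 25.0;

    printf("%s: %.1f Mops/s, %.2f KB buffer\n", conv3x3.name,
           ops_per_sec(conv3x3, w, h, fps) / 1e6,
           mem_bytes(conv3x3, w) / 1024.0);
    return 0;
}
```

Parameterizing each vision function by a few scalar costs in this way is what lets such a model predict, before implementation, whether a candidate pipeline fits the memory, processing, and power budget of a given smart-camera platform.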