KEYWORDS: Calibration, Stereo vision systems, Cameras, Spherical lenses, Lenses, Visual process modeling, 3D modeling, Image processing, 3D image processing, Digital signal processing
A fish-eye lens is a short-focal-length (f = 6–16 mm) lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. The literature shows that a multiple-view geometry system built with fish-eye lenses obtains a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model of conventional stereo vision are unsuitable for this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system composed of four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information from the whole global observation space and simultaneously acquire a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
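The abstract does not fix a specific fish-eye projection model, but wide-FOV lenses of this kind are commonly described by the equidistant model r = f·θ, where θ is the ray's angle from the optical axis. The following sketch projects a 3D camera-frame point under that assumed model; the focal length in pixels and the optical-centre coordinates are illustrative values, not calibrated ones.

```python
import math

def equidistant_project(X, Y, Z, f=300.0, cx=640.0, cy=480.0):
    """Project a 3D camera-frame point to fish-eye image coordinates
    using the equidistant model r = f * theta (an assumed model; f is
    expressed in pixels, (cx, cy) is the optical centre)."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle from the optical axis
    phi = math.atan2(Y, X)                    # azimuth around the axis
    r = f * theta                             # equidistant radial mapping
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

Note that θ = 90° (a ray perpendicular to the optical axis) still maps to a finite radius f·π/2, which is how such a lens can image a wider-than-hemispherical field.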
Spherical stereo vision is a stereo vision system built with fish-eye lenses, in which the stereo algorithms conform to a spherical model. Epipolar geometry describes the relationship between the two imaging planes of a stereo vision system based on the perspective projection model. In an uncorrected fish-eye image, however, an epipolar line is not a straight line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored, and a nonlinear epipolar rectification method is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Regions (MSER) take grayscale as the independent variable and use local extrema of area variation as the detection result. The literature demonstrates that MSER depends only on the gray-level variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the proposed nonlinear epipolar rectification method: the intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set for spherical stereo vision. Experiments show that this study achieves the expected results.
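The "local extremum of the area variation" that MSER tests can be made concrete. For a nested family of extremal regions, one tracks the region area a(t) as the grey threshold t sweeps; a region is maximally stable where the relative area change q(t) = (a(t+Δ) − a(t−Δ)) / a(t) has a local minimum. The sketch below implements only this stability criterion on a hypothetical area sequence; in a real detector the areas would come from a component tree of the image, which the abstract does not detail.

```python
def mser_stability(areas, delta=1):
    """MSER stability q(t) = (a(t+delta) - a(t-delta)) / a(t) for a
    sequence of region areas indexed by grey threshold t. Maximally
    stable thresholds are the local minima of q (illustrative only;
    'areas' would come from a component tree of a real image)."""
    return {t: (areas[t + delta] - areas[t - delta]) / areas[t]
            for t in range(delta, len(areas) - delta)}
```

A plateau in a(t) (the region barely changes over several thresholds) yields a small q(t), which is exactly the resolution- and structure-independent behaviour the abstract attributes to MSER.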
Because an uncorrected fish-eye image has lower resolution and more severe nonlinear distortion the farther a pixel lies from the image center, traditional feature matching methods, which first correct the distortion and then match features, cannot achieve good performance for fish-eye lenses. The Center-Symmetric Local Binary Pattern (CS-LBP) is a descriptor based on neighborhood grayscale information with strong grayscale and rotation invariance. In this paper, CS-LBP is combined with the Scale Invariant Feature Transform (SIFT) to solve feature point matching on uncorrected fish-eye images. We first extract interest points in a pair of fish-eye images with SIFT, then describe the regions around the interest points with CS-LBP. Finally, the similarity of the regions is evaluated by the chi-square distance to obtain a unique pair of points: for a specified interest point, the corresponding point in the other image can be found. Experimental results show that the proposed method achieves satisfying matching performance on uncorrected fish-eye images. This study should help extend the applications of fish-eye lenses in the field of 3D reconstruction and panorama restoration.
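The two ingredients named above are standard and easy to sketch. CS-LBP compares the four centre-symmetric pairs of an 8-neighbourhood, giving a 4-bit code per pixel (so a 16-bin histogram per region), and region similarity is then measured by the chi-square distance between histograms. The threshold T and patch layout below are illustrative; the abstract does not specify the paper's exact parameters.

```python
def cs_lbp(img, y, x, T=0.01):
    """4-bit Centre-Symmetric LBP code at pixel (y, x) of a 2-D grayscale
    array: bit i compares the i-th centre-symmetric neighbour pair."""
    n = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
         img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for i in range(4):                 # the 4 centre-symmetric pairs
        if n[i] - n[i + 4] > T:
            code |= 1 << i
    return code

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two descriptor histograms."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))
```

Matching then amounts to, for each SIFT interest point in one image, picking the candidate in the other image whose CS-LBP histogram minimises the chi-square distance.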
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision can "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information from the whole global observation space and simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method. Applications that will benefit from PSSV include video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic environment mapping and attitude estimation.
KEYWORDS: Cameras, Calibration, Visual process modeling, Spherical lenses, Lenses, Mathematical modeling, Stereo vision systems, 3D modeling, 3D vision, Imaging systems
In the geometric calibration of stereoscopic cameras, the objective is to determine a set of parameters that describe the mapping from 3D reference coordinates to 2D image coordinates and indicate the geometric relationship between the cameras. While various methods for stereo cameras with ordinary lenses can be found in the literature, stereoscopic vision with extremely wide-angle lenses has been much less discussed. Spherical stereoscopic vision is increasingly attractive for computer vision applications, but its use for 3D measurement is limited by the lack of an accurate, general, and easy-to-use calibration procedure. Hence, we present a geometric model for spherical stereoscopic vision with extremely wide-angle lenses, build a corresponding generic mathematical model, and propose a method for calibrating its parameters. The paper shows practical results from the calibration of two high-quality panomorph lenses mounted on cameras with 2048×1536 resolution. The stereoscopic vision system is flexible: the position and orientation of the cameras can be adjusted freely. The calibration results include the interior orientation, the exterior orientation, and the geometric relationship between the two cameras. The achieved level of calibration accuracy is very satisfying.
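One tiny piece of such an interior-orientation calibration can be sketched without the paper's full model. If the lens is assumed to follow the equidistant mapping r = f·θ, the focal parameter has a closed-form least-squares estimate from measured (θ, r) pairs on a calibration target: f* = Σ rᵢθᵢ / Σ θᵢ². This is only an illustration of the idea, not the generic model or panomorph calibration the abstract describes.

```python
def fit_equidistant_focal(samples):
    """Least-squares focal parameter for the assumed equidistant model
    r = f * theta, from (theta, r) pairs measured on a calibration
    target: f* = sum(r_i * theta_i) / sum(theta_i ** 2)."""
    num = sum(th * r for th, r in samples)
    den = sum(th * th for th, _ in samples)
    return num / den
```

A full procedure like the one in the paper would jointly estimate this interior model, the optical centre, and the exterior orientation (R, t) of each camera, typically by nonlinear minimisation of reprojection error.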
Omnidirectional vision is significant because of its advantage of acquiring the full 360° horizontal field of vision simultaneously. In this paper, an embedded omnidirectional vision navigator (EOVN) based on a fish-eye lens and embedded technology is presented. A fish-eye lens is one of the special ways to establish omnidirectional vision; however, it comes with an unavoidable, inherent and severe distortion. A unique integrated navigation method based on target tracking is proposed. It is composed of multi-target recognition and tracking, distortion rectification, spatial location and navigation control, and is called RTRLN. To adapt to different indoor and outdoor navigation environments, we embed mean-shift and dynamic threshold adjustment into the particle filter algorithm to improve the efficiency and robustness of tracking. RTRLN has been implemented on an independently developed embedded platform; the EOVN works like a smart camera based on CMOS+FPGA+DSP. It can guide various vehicles in outdoor environments by tracking diverse marks hanging in the air. Experiments prove that the EOVN is particularly suitable for guidance applications with high requirements on precision and repeatability, and the research results have passed practical application tests.
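The particle filter that RTRLN builds on is the standard bootstrap predict-weight-resample loop. The sketch below shows one step for a 1-D target position; it is only an illustration of the base algorithm, not the paper's mean-shift or dynamic-threshold extensions, and all noise parameters are made up.

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=2.0):
    """One predict-weight-resample step of a bootstrap particle filter
    for a 1-D target position (illustrative; not the RTRLN pipeline)."""
    # predict: diffuse each particle with a random-walk motion model
    moved = [p + random.gauss(0.0, motion_std) for p in particles]
    # weight: Gaussian likelihood of the measurement given each particle
    w = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2) for p in moved]
    total = sum(w)
    if total == 0.0:                 # degenerate case: fall back to uniform
        w, total = [1.0] * len(moved), float(len(moved))
    w = [x / total for x in w]
    # multinomial resample (mean-shift refinement would slot in here)
    return random.choices(moved, weights=w, k=len(moved))
```

The posterior estimate of the target position is simply the mean of the resampled particle set, and the loop repeats for every new frame.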
This paper aims to promote the application of fish-eye lenses. Accurate parameter calibration and effective distortion rectification of an imaging device are of utmost importance in machine vision. A fish-eye lens produces a hemispherical field of view of an environment, which is significant because of its advantage of panoramic sight within a single compact visual scene; however, the fish-eye image has unavoidable, inherent and severe distortion. A precise optical center is the precondition for calibrating the other parameters and correcting the distortion; therefore, three different optical center calibration methods are investigated for diverse applications. A Support Vector Machine (SVM) and a Spherical Equidistance Projection Algorithm (SEPA) are integrated to replace traditional rectification methods. SVM is a machine learning method based on statistical learning theory with good capabilities for fitting, regression and classification. In this research, the SVM provides a mapping table between the fish-eye image and a standard image natural to human eyes. Two novel training models are designed, and SEPA is applied to improve the rectification of the edge of the fish-eye image. The validity and effectiveness of our approach are demonstrated on real images.
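The "mapping table" idea can be made concrete under the spherical equidistance assumption: for each pixel of the rectified perspective image one computes the source pixel in the fish-eye image, and the table of these lookups is what the SVM would learn from training pairs. The closed-form entry below assumes the equidistant model r = f·θ on the fish-eye side and a pinhole model on the rectified side; all focal and centre values are illustrative.

```python
import math

def perspective_to_fisheye(u, v, f_persp, f_fish, cx, cy):
    """One mapping-table entry: for rectified perspective pixel (u, v),
    given relative to its image centre, return the source pixel in the
    fish-eye image under an assumed equidistant (spherical equidistance)
    projection. A closed-form stand-in for the SVM-learned table."""
    r_p = math.hypot(u, v)
    theta = math.atan2(r_p, f_persp)   # ray angle from the optical axis
    if r_p == 0.0:
        return cx, cy                  # centre pixel maps to the centre
    s = (f_fish * theta) / r_p         # equidistant radius / perspective radius
    return cx + s * u, cy + s * v
```

Rectification is then a backward warp: sample the fish-eye image at each table entry. Precomputing the table once is what makes the method fast enough for real-time use.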
Omni-directional vision is significant because of its advantage of acquiring all visual information simultaneously. In this paper, an integrated omni-directional vision tracker based on a CMOS+FPGA+DSP configuration is implemented. A fisheye lens is one of the most efficient ways to establish an omni-directional vision system; however, it comes with an unavoidable inherent distortion. An imaging system model consisting of the fisheye lens and the embedded tracker is proposed. A novel beacon with a particular topological shape can be identified by dedicated recognition processes. The particle filter is programmed as an interleaved structure that processes the same step of several particle filters simultaneously; we call it the Multiple Intersection Particle Filter (MIPF). MIPF makes multi-target tracking efficient and successful on the embedded platform. A rectification technique based on the equidistant projection theorem is used to correct distorted image points. The localization method uses only the image positions of two objects to estimate the spatial position and orientation of the AGV (automated guided vehicle). With target recognition, visual tracking, rectification and object positioning realized on the embedded omni-directional vision tracker, autonomous navigation has been demonstrated on an experimental AGV.