Under coherent illumination, several existing approaches require either angle scanning or rotation of a diffuser to reconstruct an image through opaque scattering media. We propose a linear model that restores the hidden object from the actual power spectrum perturbed by the scattering layer. Experimental results confirm that, given an accurate Fourier power-spectrum pattern, the algorithm quickly converges to the unique correct reconstruction, and that the method can recover a high-accuracy image of an object hidden behind the scattering medium from a single-shot power spectrum.
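As an illustration of recovering an object from a single measured power spectrum, the sketch below uses classic error-reduction phase retrieval, a standard stand-in rather than the linear model proposed above: the Fourier magnitude is constrained to the square root of the power spectrum, while non-negativity and a support mask are enforced in the object domain. The function name, iteration count, and support mask are illustrative assumptions.

```python
# Minimal error-reduction phase retrieval sketch (not the authors' linear model).
import numpy as np

def error_reduction(power_spectrum, support, n_iter=200, seed=0):
    """Recover a non-negative object whose Fourier magnitude matches
    sqrt(power_spectrum) inside the given binary support mask."""
    rng = np.random.default_rng(seed)
    magnitude = np.sqrt(power_spectrum)
    # Start from a random non-negative guess inside the support.
    obj = rng.random(power_spectrum.shape) * support
    for _ in range(n_iter):
        spectrum = np.fft.fft2(obj)
        # Fourier-domain constraint: keep the phase, replace the magnitude.
        spectrum = magnitude * np.exp(1j * np.angle(spectrum))
        obj = np.real(np.fft.ifft2(spectrum))
        # Object-domain constraint: non-negative and zero outside the support.
        obj = np.clip(obj, 0, None) * support
    return obj
```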
An approach for measuring the reflectivity of ultra-weak fiber Bragg gratings (FBGs) is demonstrated. The scheme combines a reference high-reflectivity FBG with the FBG under test; the two gratings have different center wavelengths. Using two reference FBGs, the reflectivity of one measured FBG is calculated to be 0.298% and 0.287%, respectively. To eliminate the influence of the side lobes of the reference FBG's reflection spectrum on the ultra-weak measurement, the reflection spectra of the reference FBG and the measured FBG are acquired separately under the same input intensity. On this basis, the reflectivity of an ultra-weak FBG is measured to be 0.00916% (-40.38 dB) and 0.00803% (-40.95 dB) with the two reference FBGs. The results show that the method is feasible.
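A minimal sketch of the comparison idea described above, under the assumption (not the paper's stated formula) that, for identical input intensity, the unknown reflectivity scales the known reference reflectivity by the ratio of the two reflected peak powers; all power values in the example are made up.

```python
# Hypothetical ratio-based estimate of an ultra-weak FBG reflectivity.
import math

def reflectivity_from_reference(p_measured_mw, p_reference_mw, r_reference):
    """Estimate the measured FBG reflectivity from reflected peak powers
    recorded under identical launch conditions."""
    r_measured = r_reference * (p_measured_mw / p_reference_mw)
    r_db = 10.0 * math.log10(r_measured)   # reflectivity expressed in dB
    return r_measured, r_db

# Example with made-up powers: a 0.3% reference and a peak 30 dB weaker.
r, r_db = reflectivity_from_reference(p_measured_mw=1e-3,
                                      p_reference_mw=1.0,
                                      r_reference=0.003)
print(f"R = {r:.3e} ({r_db:.2f} dB)")
```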
The potential of object tracking research in computer vision is well established, but previous object-tracking methods, which assume only continuous and smooth motion, are limited in handling abrupt motion. We introduce an efficient algorithm to address this limitation: a feature-driven (FD) motion model based on features-from-accelerated-segment-test (FAST) feature matching, formulated within the particle-filtering framework. Extensive evaluations demonstrate that this motion model significantly improves the ability of existing methods to handle abrupt motion, and it can be incorporated into most existing particle-filter trackers.
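The sketch below illustrates one way a FAST-matching-driven proposal could be wired into a particle filter; it is a plausible reading of the FD idea, not the authors' exact formulation. Matched FAST corners between consecutive frames give a median displacement that re-centres the particle proposals, so a large jump is still covered. OpenCV and NumPy are assumed, and describing the FAST corners with ORB's binary descriptors is an implementation choice.

```python
# Feature-driven particle proposal sketch using FAST corner matching.
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=25)
orb = cv2.ORB_create()          # used only to describe the FAST corners
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def feature_driven_shift(prev_gray, curr_gray):
    """Median displacement of matched FAST corners between two frames."""
    kp1 = fast.detect(prev_gray, None)
    kp2 = fast.detect(curr_gray, None)
    kp1, des1 = orb.compute(prev_gray, kp1)
    kp2, des2 = orb.compute(curr_gray, kp2)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matches = matcher.match(des1, des2)
    if not matches:
        return np.zeros(2)
    shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
              for m in matches]
    return np.median(shifts, axis=0)

def propose_particles(particles, shift, sigma=5.0):
    """Move every (x, y) particle by the estimated shift plus Gaussian noise."""
    noise = np.random.normal(0.0, sigma, particles.shape)
    return particles + shift + noise
```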
The problem of low and nonuniform resolution has long been a research topic in omnidirectional imaging. A novel fusion method based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper to fuse the pair of complementary panoramic images unwrapped from the inner and outer regions of the omni-image. This work can be regarded as a continuation of the complementary-structure catadioptric imaging technique based on multiple mirrors. Specifically, the high-frequency details are extracted by nonsubsampled directional filters only in the horizontal and vertical directions, since the complementarity mostly exists in these two orthogonal directions. The horizontal and vertical high-frequency coefficients of the fused image are selected from the panoramic image that has the advantage in the corresponding direction. The fusion rule for the low-frequency coefficients is an improved selection scheme that takes the high-frequency directional vector into account. Simulation experiments based on decimation and interpolation show that the proposed fusion method outperforms existing fusion methods in both visual quality and objective evaluation. Experiments on real indoor and outdoor scenes further demonstrate that imaging with our prototype omnidirectional sensor benefits from the proposed fusion.
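The directional selection rule can be sketched as follows, assuming the NSCT decomposition itself is supplied by an external library and only the horizontal and vertical high-frequency subbands are passed in as arrays; the local-energy window size is an illustrative choice.

```python
# Sketch of the per-direction coefficient selection (NSCT decomposition omitted).
import numpy as np
from scipy.ndimage import uniform_filter

def select_directional(coef_inner, coef_outer, win=5):
    """Pick, pixel by pixel, the subband coefficient with higher local energy."""
    energy_inner = uniform_filter(coef_inner ** 2, size=win)
    energy_outer = uniform_filter(coef_outer ** 2, size=win)
    return np.where(energy_inner >= energy_outer, coef_inner, coef_outer)

# Usage idea: apply the same rule to the horizontal and to the vertical
# high-frequency subbands, then choose the low-frequency band in a way that
# is consistent with which panorama dominated each direction.
```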
Addressing the problem of low and nonuniform resolution in omnidirectional imaging, a complementary catadioptric imaging method using multiple mirrors is proposed. Owing to the mirror reflections, each spatial object has two imaging positions, in the inner and outer regions of the sampled omnidirectional image, generated from two different optical paths. As an instance, a prototype omnidirectional sensor based on complementary double imaging is designed, containing two convex mirrors (a cone and a hyperboloid) in conjunction with a plane mirror. The design constraints and the optical geometric model of imaging are analyzed in detail. Mathematical analysis shows two improvements in resolution: (1) a more uniform resolution distribution and (2) complementary resolution distributions in the radial and tangential directions between the inner and outer images. To demonstrate this remarkable complementarity, a simple fusion experiment based on wavelet decomposition and reconstruction is performed on a pair of cylindrical panoramic images unwrapped from one omnidirectional image.
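A minimal sketch of a wavelet-based fusion step of the kind used in this experiment, assuming PyWavelets and two same-sized grayscale panoramas; the wavelet family, decomposition level, and max-absolute detail rule are illustrative choices rather than the paper's exact settings.

```python
# Simple wavelet fusion of the inner and outer panoramas.
import numpy as np
import pywt

def wavelet_fuse(pano_inner, pano_outer, wavelet="db2", level=2):
    """Average the approximation band; keep the detail coefficient with the
    larger magnitude at every position."""
    c1 = pywt.wavedec2(pano_inner, wavelet, level=level)
    c2 = pywt.wavedec2(pano_outer, wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]                      # low-frequency band
    for d1, d2 in zip(c1[1:], c2[1:]):                   # (cH, cV, cD) tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)
```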
In video sequence analysis, background modeling and moving-target detection is an important problem. To obtain precise target contours, the Gaussian mixture model (GMM) is one of the most frequently used models when the background is stable, but the result is not ideal when the GMM is used alone. A top-down local hierarchical GMM (LHGMM) is therefore proposed in this paper. First, a particle filter is used to track the targets and obtain their rough external contours. Within that region, a block-based GMM and a pixel-based GMM are used to update the background and detect the moving targets. The detection results are then fed back to the tracking stage to refine the update region. Experimental results show that the proposed algorithm adapts to changes in target motion and tracks the targets accurately.
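As a rough sketch of restricting GMM-based detection to the tracked region, the code below uses OpenCV's MOG2 subtractor as a stand-in for the paper's block- and pixel-level GMMs, and assumes the particle-filter tracker supplies a bounding box for each frame.

```python
# GMM foreground detection limited to a tracked region of interest.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def detect_in_roi(frame, roi):
    """Apply the GMM subtractor, keep foreground only inside the tracked box."""
    x, y, w, h = roi
    fg = subtractor.apply(frame)                              # 255 fg, 127 shadow
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]    # discard shadows
    mask = np.zeros_like(fg)
    mask[y:y + h, x:x + w] = fg[y:y + h, x:x + w]
    return mask
```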
In this paper, a method for detecting whether the behavior of a single person in a video sequence is abnormal is proposed. First, after pre-processing, the background model is built with a Gaussian mixture model (GMM), and shadows are removed at the same time. Then, color-shape information and the randomized Hough transform (RHT) are used to extract the zebra crossing and segment the background. Finally, the bounding rectangle and the centroid of the detected person are used to judge whether the person's behavior is abnormal.
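The final decision step might look like the sketch below; the rule and thresholds are illustrative assumptions (centroid off the zebra-crossing mask, or a bounding box wider than it is tall as a rough fall indicator), not the paper's exact criterion.

```python
# Hypothetical rectangle/centroid abnormality check for one detected contour.
import cv2

def is_abnormal(contour, crossing_mask, max_aspect=1.2):
    """Flag a person whose centroid lies outside the crossing mask or whose
    bounding box is wider than it is tall."""
    x, y, w, h = cv2.boundingRect(contour)
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return False
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    off_crossing = crossing_mask[cy, cx] == 0
    fallen = (w / float(h)) > max_aspect
    return off_crossing or fallen
```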