This study introduces a method for analyzing how the choice of reference camera influences super-resolution reconstruction in camera-array imaging systems. Simulations and experimental validation show that conventional, uniformly arranged camera arrays often fail to achieve the optimal super-resolution result, regardless of which reference camera is chosen. Introducing a central camera into the conventional arrangement, either by adding one or by substituting it for an existing camera, not only improves the super-resolution result but also significantly enhances the system's robustness to variations in object distance. This advancement notably increases the practical utility of sparse camera-array systems.
Because of its small field of view and shallow depth of field, a microscope can capture only 2D images of an object. To observe the three-dimensional structure of micro objects, a microscopy image reconstruction algorithm based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two respects: first, it increases the number of propagation directions; second, during expansion, different expansion radii and repetition counts are set according to the angle between the normal vector of the seed patch and the direction vector of the line through the seed patch center and the camera center. Compared with the original PMVS, the new algorithm produces three times as many 3D points, and the holes on the vertical sides are eliminated.
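The angle-dependent expansion rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `expansion_params` and the specific scaling of radius and repetition count with the angle are assumptions.

```python
import numpy as np

def expansion_params(patch_normal, patch_center, camera_center,
                     base_radius=1.0, base_times=1):
    """Choose an expansion radius and repetition count for a seed patch.

    Hypothetical rule: patches viewed obliquely (large angle between the
    patch normal and the ray to the camera) expand farther and more often,
    which helps fill holes on the vertical sides of the object.
    """
    view_dir = camera_center - patch_center
    view_dir = view_dir / np.linalg.norm(view_dir)
    n = patch_normal / np.linalg.norm(patch_normal)
    # Angle between the patch normal and the viewing ray, in radians.
    angle = np.arccos(np.clip(np.dot(n, view_dir), -1.0, 1.0))
    # Larger angle -> larger radius and more expansion passes (assumed scaling).
    radius = base_radius * (1.0 + angle / (np.pi / 2))
    times = base_times + int(angle // (np.pi / 6))
    return radius, times

# A patch facing the camera head-on expands with the base radius, once.
r, t = expansion_params(np.array([0., 0., 1.]), np.zeros(3), np.array([0., 0., 5.]))
```

A patch whose normal is perpendicular to the viewing ray (a vertical side) would receive roughly double the radius under this assumed scaling.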
A new method is proposed to compare and evaluate the accuracy and robustness of Perspective-n-Point (PnP) algorithms, based on the principle that the position and pose of a camera in the world coordinate system are unique when the camera captures an image of a target. First, a world coordinate system, a camera coordinate system, and several target coordinate systems are established, with the transformations from the world coordinate system to each target coordinate system known. Second, the transformations from each target coordinate system to the camera coordinate system are computed in both simulation and experiment, from which the transformations from the world coordinate system to the camera coordinate system are obtained. Third, the mean and variance of these transformations are calculated; the variance can be used to evaluate the accuracy and robustness of the algorithms. Finally, the EPnP and LHM algorithms are compared using the proposed method. The results show that LHM is more accurate and more robust than EPnP, consistent with the previous comparison method based on rotation and translation errors.
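The evaluation principle can be sketched in a few lines: compose each estimated target-to-camera transform with the known world-to-target transform, then measure the spread of the resulting world-to-camera poses. The helper names below are illustrative, and the example translations are made up for demonstration.

```python
import numpy as np

def world_to_camera(T_world_to_target, T_target_to_camera):
    """Compose 4x4 homogeneous transforms: world -> target -> camera."""
    return T_target_to_camera @ T_world_to_target

def pose_spread(translations):
    """Mean and per-axis variance of the world->camera translations.

    If the PnP algorithm were exact, every target would yield the same
    camera pose, so a smaller variance indicates a more accurate and
    more robust algorithm.
    """
    t = np.asarray(translations, dtype=float)
    return t.mean(axis=0), t.var(axis=0)

# Translations recovered from three targets by a hypothetical PnP run (metres):
ts = [[1.00, 2.00, 5.00], [1.02, 1.99, 5.01], [0.98, 2.01, 4.99]]
mean, var = pose_spread(ts)
```

The rotation part can be treated the same way, e.g. by measuring the spread of rotation angles about the mean rotation.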
To obtain an accurate integral PSF of an extended depth of field (EDOF) microscope based on a liquid tunable lens and the volumetric sampling (VS) method, a method based on statistics and inverse filtering, using a quantum-dot fluorescent nanosphere as a point source, is proposed in this paper. First, a number of raw quantum-dot images were captured separately with the focal length of the liquid lens fixed and with it swept over the exposure time. Second, the raw images in each set were summed and averaged to obtain two noise-reduced mean images. Third, the integral PSF was obtained by computing the inverse Fourier transform of the ratio of the Fourier transform of the fixed-focus mean image to that of the focal-sweep mean image. Finally, experimental results show that the image restored using the measured integral PSF has good quality and no artifacts.
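A regularized spectral division of this general shape underlies such inverse-filtering steps. This sketch is an assumption, not the paper's exact formulation: here the focal-sweep mean image is deconvolved by the fixed-focus mean image (the usual direction for recovering a sweep PSF from a near-point source), and a Tikhonov term `eps` is added to stabilize the division.

```python
import numpy as np

def integral_psf(mean_fixed, mean_swept, eps=1e-3):
    """Recover the focal-sweep (integral) PSF by regularized inverse filtering.

    mean_fixed : mean point-source image with the focal length fixed
    mean_swept : mean point-source image with the focal length swept
    eps        : small regularization constant (assumed, not from the paper)
    """
    F_fixed = np.fft.fft2(mean_fixed)
    F_swept = np.fft.fft2(mean_swept)
    # Regularized spectral division, then back to the spatial domain.
    ratio = F_swept * np.conj(F_fixed) / (np.abs(F_fixed) ** 2 + eps)
    psf = np.real(np.fft.ifft2(ratio))
    return psf / psf.sum()
```

When the fixed-focus image is nearly a delta (a well-focused point source), the recovered PSF is essentially the normalized sweep image itself.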
To avoid the "stair-casing effect" in the disparity map when dealing with slanted planes, curved surfaces, and weakly textured regions, an improved fast dense stereo matching algorithm based on disparity plane estimation is proposed. First, a set of support points is extracted from the edges of the original matching images and their description images. Second, Delaunay-triangulated disparity planes are computed from all the support points. Third, the sub-pixel disparity map is computed from the best support points and the parameters of the Delaunay-triangulated disparity planes. Finally, experimental results show that the stair-casing effect caused by slanted planes, curved surfaces, and weakly textured regions is eliminated by the presented method. In addition, the proposed method takes less than 600 ms on a one-megapixel image on average.
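The key idea behind the sub-pixel disparities can be shown with one triangle. Each Delaunay triangle of support points defines a plane d = a·x + b·y + c, and interior pixels read their disparity off that plane instead of snapping to integer values. This is a minimal sketch; the function names are illustrative.

```python
import numpy as np

def disparity_plane(p1, p2, p3):
    """Fit d = a*x + b*y + c through three support points (x, y, d)."""
    A = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    d = np.array([p1[2], p2[2], p3[2]])
    return np.linalg.solve(A, d)  # (a, b, c)

def disparity_at(plane, x, y):
    """Sub-pixel disparity of pixel (x, y) on the given plane."""
    a, b, c = plane
    return a * x + b * y + c

# A slanted surface: disparity grows linearly with x across the triangle.
plane = disparity_plane((0, 0, 10.0), (4, 0, 12.0), (0, 4, 10.0))
```

Interior pixels of this triangle get smoothly varying disparities (e.g. 11.0 at its centroid-side point (2, 2)), which is exactly what removes the stair-casing on slanted planes.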
A nozzle angle measurement system based on monocular vision is proposed to achieve high-speed, non-contact angle measurement of a rocket engine nozzle. The measurement system consists of two illumination sources, a lens, a target board with spots, a high-speed camera, an image acquisition card, and a PC. The target board was fixed to the end of the rocket engine nozzle. Images of the target board, which moved together with the swinging nozzle, were captured by the high-speed camera and transferred to the PC through the image acquisition card. A data processing algorithm was then used to obtain the swing angle of the nozzle. Experiments show that the accuracy of the swing-angle measurement was 0.2° and the measurement frequency was up to 500 Hz.
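The core geometric step can be illustrated with a deliberately simplified 2-D reading: the in-plane rotation of the line joining two tracked spot centroids. The actual system recovers the 3-D swing angle from the full spot pattern; this reduction and the function name are assumptions for illustration only.

```python
import math

def swing_angle_deg(spot_a, spot_b):
    """In-plane swing angle from two spot centroids on the target board.

    Illustrative 2-D reduction (an assumption, not the paper's algorithm):
    the angle of the line through the two spots relative to the horizontal.
    """
    dx = spot_b[0] - spot_a[0]
    dy = spot_b[1] - spot_a[1]
    return math.degrees(math.atan2(dy, dx))
```

Tracking the centroids frame by frame at the camera's rate is what allows angle readings at up to 500 Hz.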
In this paper, we first introduce the concept of depth of field (DOF) in machine vision systems, which serves as a basic building block for our study. We then present related work on the generalization of the fundamental methods and the current status of DOF extension, followed by a detailed analysis of the principles and performance of some representative extended depth-of-field (EDOF) technologies. Finally, we offer some predictions about the prospects of EDOF technologies.
Conventional optical imaging systems are limited by a fundamental trade-off between the depth of field (DOF)
and signal-to-noise ratio. Apart from a large DOF, a constant magnification within a certain depth range is
particularly essential for visual measurement systems. In this paper, we present a novel visual measurement
system with extended DOF and depth-invariant magnification. A varifocal liquid lens is employed to sweep its
focus within a single exposure of the detector, after which a blurred image is captured. The blurred image is
subsequently reconstructed to form a sharp extended DOF image by filtering with a single blur kernel. The
experimental results demonstrate that our method can extend the DOF of a conventional visual measurement
system by over 10 times, while the change in the magnification within the extended DOF remains less than 1%.
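Because the focal-sweep PSF is nearly depth-invariant, the whole blurred image can be restored with one kernel. A standard way to do this, sketched here with a Wiener filter as an assumption about the reconstruction step (the abstract only says "filtering with a single blur kernel"), is:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    """Restore a focal-sweep image with a single blur kernel (Wiener filter).

    `kernel` must have the same shape as `blurred`, centred at index (0, 0).
    `snr` is an assumed signal-to-noise ratio used for regularization.
    """
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR), applied in the frequency domain.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))
```

With a perfectly sharp (delta) kernel the filter reduces to a mild uniform attenuation, which is a convenient sanity check.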
To obtain high speed and accuracy, passive autofocusing systems with a two-stage searching strategy have been widely applied in microscopy. The core of a passive autofocusing system is the selection of the focus criterion function, which significantly affects the efficiency and accuracy of autofocusing. To help choose the best algorithm, this paper proposes a comparison method for the focus criterion functions in the two searching stages. The ranking methodology for the first stage is proposed for the first time, and the one for the second stage is improved on the basis of previous work. An overall quantitative evaluation score is also introduced in both stages. The proposed ranking methodology for the different focus algorithms is tested on synthetic defocused images, which are first simulated from high-definition images by Gaussian filtering according to the defocus imaging principle and then corrupted with noise of different levels. Finally, to verify the effectiveness of the new ranking methodology, real defocused images are captured to evaluate all of the algorithms; the result matches the comparison results on the synthetic image sets.
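Two classic focus criterion functions of the kind being compared, together with a crude defocus model, can be sketched as follows. The specific functions shown (grey-level variance and a squared-gradient measure) are common examples, not necessarily the exact set ranked in the paper, and the 3×3 mean filter stands in for the Gaussian defocus model.

```python
import numpy as np

def variance_focus(img):
    """Grey-level variance: higher for sharper images."""
    return float(np.var(img))

def tenengrad_focus(img):
    """Sum of squared first differences, a simple gradient-based criterion."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def box_blur(img):
    """Crude defocus model: 3x3 mean filter on the interior (stand-in for a Gaussian)."""
    h, w = img.shape
    out = img.astype(float).copy()
    out[1:-1, 1:-1] = sum(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(float)
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out
```

A good criterion must score the sharp image above its defocused version; ranking methods then compare criteria by how monotonically and steeply the score falls off with defocus.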
The selection of an image quality assessment criterion plays a key role when an illuminant with auto-regulated intensity is designed for a vision measurement system. In this work, eight image quality assessment functions were compared and analyzed while the illumination intensity was adjusted from weak to strong. Four different workpieces, each representing a typical spatial frequency, were used in the comparison. The experimental results show that the two best functions are the image variance function and the gray standard deviation function.
An optical wavefront becomes aberrated when it propagates through an index-of-refraction-variant, turbulent fluid. This paper relates optical wavefront distortions to fluid-mechanical behavior. It derives the Interfacial-Refraction-Index-Thickness (IRIT) approach on the basis of the eikonal equation and simulates the turbulent refractive index field using high-gradient refractive-index interfaces instead of the full turbulent information. This simulation method characterizes the optical behavior of the turbulent flow by its refractive index field, introduces the interfacial physical thickness as the inverse of the refractive-index gradient magnitude, and quantifies the optical wavefront distortions by the optical path difference (OPD). When the modeled OPD profile was compared with the original full aero-optical OPD profile, good agreement was found even though the model retains only approximately 50% of the refractive-index information. The results show that large-scale aero-optical distortions emerge from high-gradient interfaces and that the IRIT approach is useful for simulating aero-optical effects. They further indicate that the IRIT approach can reproduce the large-scale optical distortions from the high-gradient information alone and can be used for the prediction and correction of aero-optical distortions, as well as for aero-optical control and optimization by modifying the high-gradient interfaces.
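The two quantities at the heart of the approach, the OPD computed from a refractive-index field and the reduction to high-gradient regions, can be sketched numerically. The discretization, the quantile-based selection of high-gradient cells, and the function names are assumptions made for illustration.

```python
import numpy as np

def opd(n_field, dz):
    """Optical path difference across a 2-D refractive-index slice.

    Integrates n along the propagation direction (axis 0) and removes the
    spatial mean, giving OPD per transverse position in the units of dz.
    """
    opl = n_field.sum(axis=0) * dz
    return opl - opl.mean()

def high_gradient_reduce(n_field, keep=0.5):
    """Keep only the fraction `keep` of cells with the largest |dn/dz|,
    replacing the rest with the mean index (assumed IRIT-style reduction)."""
    g = np.abs(np.gradient(n_field, axis=0))
    thresh = np.quantile(g, 1.0 - keep)
    return np.where(g >= thresh, n_field, n_field.mean())
```

The reported result, that the reduced field retaining roughly 50% of the refractive-index information reproduces the large-scale OPD, corresponds to `opd(high_gradient_reduce(n))` tracking `opd(n)`.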
Under the high magnification of micro-vision systems, the original metal texture appears as bright spots of different shapes and sizes and mixes with real flaws such as maculae, pockmarks, and corrosion. The image background is complex, and the image brightness is non-uniform because of spherical reflection, so it is difficult to extract the flaws accurately with traditional segmentation methods. In this paper, a new method for automatic flaw detection is presented: the flaws are first located approximately by removing the uniform background and the bright spots of the metal surface texture, and then detected accurately using a double-window method. The experimental results show that this method effectively detects various flaws on the surface of a metal sphere.
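One plausible form of a double-window test can be sketched as follows. This is an assumption about what "double-window" means (an inner window compared against a larger surrounding window), not the paper's exact formulation; window sizes and the threshold rule are illustrative.

```python
import numpy as np

def is_flaw(img, cy, cx, inner=3, outer=9, k=2.0):
    """Double-window flaw test at candidate pixel (cy, cx) - illustrative sketch.

    Assumed rule: a flaw is declared when the mean grey level inside a small
    inner window deviates from the mean of a larger surrounding window by
    more than k times that window's standard deviation.
    """
    hi, ho = inner // 2, outer // 2
    inner_patch = img[cy - hi:cy + hi + 1, cx - hi:cx + hi + 1].astype(float)
    outer_patch = img[cy - ho:cy + ho + 1, cx - ho:cx + ho + 1].astype(float)
    deviation = abs(inner_patch.mean() - outer_patch.mean())
    return deviation > k * (outer_patch.std() + 1e-9)
```

On a bright, slowly varying background, such a local test is insensitive to the non-uniform brightness that defeats a single global threshold.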