This paper presents an adaptive mesh grid scaling algorithm for the simulation of laser propagation. The algorithm effectively improves the far-field laser spot resolution in simulations of focused laser propagation, and its effectiveness is verified by simulating the diffraction of various laser beam types. Diffraction exists in all laser beam propagation: the divergence angle is directly proportional to the laser wavelength and inversely proportional to the laser aperture. The proportionality coefficient varies with the type of laser beam, such as a Gaussian beam or a flat-topped beam with different obscuration ratios, for which accurate theoretical values are available. The simulation results of the adaptive mesh scaling algorithm agree well with these theoretical values, which verifies the algorithm's feasibility and accuracy.
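The stated proportionality between divergence angle, wavelength, and aperture can be illustrated numerically. The sketch below is an illustration under textbook assumptions, not the paper's algorithm: it compares the far-field half-angle divergence of an ideal Gaussian beam, θ = λ/(πw₀), with the first-null half-angle of a uniformly illuminated circular aperture, θ = 1.22λ/D, showing how the coefficient depends on beam type.

```python
import math

def gaussian_divergence(wavelength, waist):
    """Far-field half-angle divergence of an ideal Gaussian beam (rad)."""
    return wavelength / (math.pi * waist)

def airy_first_null(wavelength, aperture):
    """Half-angle of the first Airy null for a uniform circular aperture (rad)."""
    return 1.22 * wavelength / aperture

# Hypothetical example values (not taken from the paper):
wavelength = 1.064e-6   # m, a common near-infrared laser line
waist = 0.01            # m, Gaussian beam waist radius
aperture = 0.02         # m, circular aperture diameter

print(gaussian_divergence(wavelength, waist))  # ~3.39e-5 rad
print(airy_first_null(wavelength, aperture))   # ~6.49e-5 rad
```

Both angles scale linearly with wavelength and inversely with the beam size; only the dimensionless coefficient differs, which is exactly the beam-type dependence the abstract describes.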
High power fiber lasers can be incoherently combined to form the basis for high energy laser applications. Incoherent combining of fiber lasers has a number of advantages over other beam combining methods. However, the far-field beam quality of an incoherently combined laser array can still be significantly degraded by atmospheric optical turbulence. In this article, a general scaling law for the propagation of an incoherently combined laser array through the atmosphere is developed using theoretical analysis and the common stochastic wave optics technique, focusing mainly on the effects of diffraction and atmospheric optical turbulence. The scaling law developed in the present work differs from standard scaling laws in its definition of irradiance. We show that the far-field irradiance and beam spread of any incoherently combined laser array, regardless of the near-field beamlet geometry, can be obtained in terms of four basic parameters: laser power, effective field area, pupil field factor, and the Fried parameter. The results show that the formula is simple yet accurately predicts far-field peak irradiance and beam spread under varying levels of atmospheric turbulence, regardless of the near-field beamlet geometry.
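The abstract does not give the authors' scaling law explicitly, but the kind of relationship it describes can be sketched with a widely used textbook approximation: the diffraction-limited on-axis irradiance of a uniform circular aperture, P·A/(λ²z²), degraded by the common long-exposure Strehl approximation [1 + (D/r₀)^(5/3)]^(−6/5), where r₀ is the Fried parameter. This is not the paper's formula, only an illustration of how power, aperture area, and r₀ enter such laws.

```python
import math

def peak_irradiance(power, diameter, wavelength, distance, r0):
    """Approximate long-exposure far-field peak irradiance (W/m^2).

    Diffraction-limited on-axis irradiance of a uniform circular
    aperture, P*A/(lambda^2 * z^2), scaled by the common turbulence
    Strehl approximation [1 + (D/r0)^(5/3)]^(-6/5).
    """
    area = math.pi * diameter ** 2 / 4.0
    diff_limited = power * area / (wavelength ** 2 * distance ** 2)
    strehl = (1.0 + (diameter / r0) ** (5.0 / 3.0)) ** (-6.0 / 5.0)
    return diff_limited * strehl

# Hypothetical example values (not from the paper):
print(peak_irradiance(power=1e4, diameter=0.3, wavelength=1.07e-6,
                      distance=5e3, r0=0.05))
```

As r₀ shrinks (stronger turbulence) the predicted peak irradiance falls, while for r₀ ≫ D the diffraction-limited value is recovered.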
Stereo vision is generally used to obtain 3D information in traditional three-dimensional measurement: at least two cameras are calibrated in advance, and resection is then performed to obtain three-dimensional coordinates. Obtaining 3D information therefore requires at least two cameras (or two views), because a single camera can only capture 2D information. In addition, only the 3D spatial position at the moment an image is captured can be recovered. When measuring the 3D miss distance of a weapon in high-velocity motion, such as a missile, it is difficult to capture the image at the exact moment the weapon touches the target because of the limited camera frame rate (fps). Hence, only the position at the moment before impact can be obtained, which introduces error into the miss distance estimate. In this paper, a fast miss distance estimation method using shadows and a single view (i.e., a single camera) is proposed. The method uses only one camera and exploits the fact that the intersection of the axes of the weapon and its shadow is the image projection of the actual point at which the weapon touches the target. Because the proposed method does not need to capture the image at the moment of impact, it does not require a high frame rate, which extends the range of suitable cameras. Experimental results indicate that our method outperforms traditional stereo vision in accuracy, numerical stability, and computational speed for miss distance estimation.
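The geometric core of the method, intersecting the weapon's axis with its shadow's axis in the image, reduces to intersecting two 2D lines. A minimal sketch using homogeneous coordinates (the point names and values are hypothetical; the paper's axis-fitting and detection steps are not shown):

```python
def cross(a, b):
    """Cross product of 3-vectors, used for homogeneous line/point algebra."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def axis_intersection(p1, p2, q1, q2):
    """Intersect the image line through p1,p2 with the line through q1,q2.

    Points are (x, y) pixel coordinates. The line through two points is
    their homogeneous cross product; two lines meet at the cross product
    of their coefficients. Returns (x, y) or None for parallel lines.
    """
    l1 = cross((p1[0], p1[1], 1.0), (p2[0], p2[1], 1.0))
    l2 = cross((q1[0], q1[1], 1.0), (q2[0], q2[1], 1.0))
    x = cross(l1, l2)
    if abs(x[2]) < 1e-12:
        return None  # weapon and shadow axes are parallel in the image
    return (x[0] / x[2], x[1] / x[2])

# weapon axis through (0,0)-(2,2); shadow axis through (0,2)-(2,0)
print(axis_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```

In practice each axis would be fitted to many detected points along the weapon and its shadow, but the intersection step itself is this simple, which is consistent with the method's low computational cost.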
KEYWORDS: 3D acquisition, Calibration, Cameras, 3D metrology, 3D image processing, Photogrammetry, Imaging systems, 3D vision, Covariance matrices, Stereoscopy
Photogrammetry with stereo vision is widely used in computer vision and SLAM (simultaneous localization and mapping); its key steps are calibration and intersection measurement. Calibration obtains the intrinsic and extrinsic parameters, including the principal point, focal length, and pose. Intersection measurement obtains 3D information after calibration, including position, velocity, and rotation. In some cases, such as visual monitoring cameras (VMCs), photogrammetry involves a large field of view, long camera-to-target distances, and a wide measuring range, which increase the difficulty of calibration and make it impossible to place 3D control points arbitrarily. Moreover, the distance from the target area to the 3D control point area strongly affects the accuracy of intersection measurement. In this paper, we propose a new method for placing 3D control points that covers both planar and non-planar scenes and can distinguish between the two, so that the appropriate planar or non-planar calibration method can be applied in each case. In addition, we analyze the layout of 3D control points to relate measuring accuracy to the distance from the target area to the control point area. Experiments on synthetic data and real images show that the measuring error grows with this distance, and that to improve accuracy the 3D control points should be strictly planar or strictly non-planar, not quasi-planar.
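Distinguishing planar from non-planar control point layouts can be illustrated with a simple geometric test. The sketch below is a minimal illustration under our own assumptions, not the paper's method: it fits a plane through the first non-collinear triple of points and checks the perpendicular distance of every point to it (a robust implementation would instead use a least-squares fit, e.g. via SVD of the centred point cloud).

```python
def is_planar(points, tol=1e-6):
    """Decide whether a set of 3D control points lies on a single plane.

    points: list of (x, y, z) tuples. Fits a plane through the first
    non-collinear triple and checks all perpendicular distances.
    """
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    p0 = points[0]
    normal = None
    for i in range(1, len(points) - 1):
        n = cross(sub(points[i], p0), sub(points[i + 1], p0))
        if dot(n, n) > tol:
            normal = n
            break
    if normal is None:
        return True  # all points collinear, hence trivially planar
    norm = dot(normal, normal) ** 0.5
    return all(abs(dot(sub(p, p0), normal)) / norm <= tol for p in points)

flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
bent = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.5)]
print(is_planar(flat), is_planar(bent))  # True False
```

A tolerance band between "strictly planar" and "strictly non-planar" would flag the quasi-planar layouts that the experiments identify as harmful to accuracy.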
To speed up camera external calibration, ensure that the two control point sets are the same size, and increase the matching rate of camera images after projection transformation, we establish a fast control point extraction method and propose a point set matching method based on Delaunay triangulation. Experimental results show that the extraction method extracts all control points correctly, with no spurious or missed detections. In addition, we generate simulation images with projection transformations according to the camera imaging principle. The simulations show that the point set matching method can handle projection transformations and improves the matching rate within a limited angle range. Finally, experiments show that the relative error of external calibration based on our method is below 0.3% compared with manual external calibration.
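The projection transformation used to generate the simulation images is, for a planar scene, a 3×3 homography applied in homogeneous coordinates. A minimal sketch (the matrix values are hypothetical, and the Delaunay-based matching itself is not shown):

```python
def apply_homography(H, points):
    """Map 2D points through a 3x3 homography (projection transformation).

    H is a 3x3 matrix given as nested lists; points are (x, y) tuples.
    Each point is lifted to (x, y, 1), multiplied by H, and divided by
    the resulting homogeneous coordinate w.
    """
    out = []
    for x, y in points:
        u = H[0][0] * x + H[0][1] * y + H[0][2]
        v = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((u / w, v / w))
    return out

# Hypothetical homography: uniform scale plus a mild perspective term.
H = [[2.0,   0.0, 0.0],
     [0.0,   2.0, 0.0],
     [0.001, 0.0, 1.0]]
print(apply_homography(H, [(100.0, 50.0)]))
```

Because a homography distorts distances but preserves incidence, matching the transformed point sets via the structure of their triangulations, as the proposed Delaunay-based method does, is more robust than matching on raw coordinates.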