Many deblurring and blur kernel estimation methods use a maximum a posteriori approach or deep learning-based classification techniques to sharpen an image and/or predict the blur kernel. We propose a regression approach using convolutional neural networks (CNNs) to predict the parameters of linear motion blur kernels: the length and orientation of the blur. We analyze the relationship between the length and angle of linear motion blur that can be represented as digital filter kernels. A large dataset of blurred images is generated using a suite of blur kernels and used to train a regression CNN to predict the length and angle of the motion blur. The coefficients of determination for estimation of length and angle are found to be greater than or equal to 0.89, even in the presence of significant additive Gaussian noise, up to a variance of 10% (SNR of 10 dB). Using our estimated kernel in a nonblind image deblurring method, the sum-of-squared-differences error ratio demonstrates higher cumulative histogram values than comparison methods, with most test images yielding an error ratio of less than or equal to 1.25.
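A linear uniform motion blur kernel of the kind described above is fully determined by its two parameters. The sketch below (illustrative only, not the paper's data-generation code) rasterizes a line segment of the given length and angle through the kernel center and normalizes the result to sum to one:

```python
import numpy as np

def linear_motion_blur_kernel(length, angle_deg, size=None):
    """Rasterize a line segment of the given length (pixels) and angle
    (degrees, counterclockwise from the x-axis) through the kernel center,
    then normalize so the kernel sums to one."""
    if size is None:
        size = max(int(np.ceil(length)) | 1, 3)  # odd size: well-defined center
    kernel = np.zeros((size, size))
    center = size // 2
    theta = np.deg2rad(angle_deg)
    n = max(4 * int(np.ceil(length)), 2)  # dense sampling along the segment
    for t in np.linspace(-length / 2.0, length / 2.0, n):
        col = int(round(center + t * np.cos(theta)))
        row = int(round(center - t * np.sin(theta)))  # image rows grow downward
        if 0 <= row < size and 0 <= col < size:
            kernel[row, col] += 1.0
    return kernel / kernel.sum()
```

Convolving a sharp image with such a kernel over a grid of (length, angle) pairs yields the kind of training set the regression CNN requires.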
Non-uniform motion blur, including effects commonly encountered in blur associated with atmospheric turbulence, can be estimated as a superposition of locally linear uniform blur kernels. Linear uniform blur kernels are modeled using two parameters: length and angle. In recent work, we demonstrated the use of a regression-based convolutional neural network (CNN) for robust blind estimation of the length and angle parameters of linear uniform blur kernels. In this work, we extend the regression-based CNN approach to analyze patches in images and estimate the parameters of a locally linear motion blur kernel, allowing us to model the blur field. We analyze the effectiveness of this patch-based approach versus patch size for two problems: synthetic images generated as a superposition of locally linear blurs, and synthetic images generated with a Zernike polynomial-based wavefront distortion applied at the pupil plane.
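The patch-based extension can be sketched as a tiling loop around any per-patch estimator. In the sketch below, `estimate_params` is a hypothetical stand-in for the regression CNN; only the tiling into a blur field is illustrated:

```python
import numpy as np

def blur_field(image, patch, estimate_params):
    """Tile an image into non-overlapping square patches of side `patch`
    and apply a per-patch (length, angle) estimator, returning a
    (rows, cols, 2) array of local blur parameters.
    `estimate_params` is a placeholder for the regression CNN."""
    H, W = image.shape[:2]
    rows, cols = H // patch, W // patch
    field = np.zeros((rows, cols, 2))
    for i in range(rows):
        for j in range(cols):
            p = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            field[i, j] = estimate_params(p)  # (length, angle) for this patch
    return field
```

The choice of `patch` trades off spatial resolution of the blur field against the amount of image evidence available to each local estimate, which is the trade-off studied here.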
Facial classification has numerous real-world applications in fields such as security and surveillance. However, images collected at long range through the atmosphere exhibit spatially and temporally varying blur and geometric distortion due to turbulence, making facial identification challenging. A multispectral facial classification approach utilizing machine learning is proposed for long-range imaging. A method for simulating turbulence effects is applied to a multispectral face image database to generate turbulence-degraded images. The performance of the machine learning method on this classification task is assessed to explore the effectiveness of multispectral imaging for improving classification accuracy over long ranges.
This work presents an extended analysis of atmospheric refraction effects captured by time-lapse imagery for near-ground and near-horizontal paths. Monthly trends and multipath analysis of image shift caused by refraction during daytime are studied. Nighttime shift measurements during moonlit nights are also presented. Advanced nonlinear machine learning approaches for image shift prediction are implemented and the performance of the models is evaluated.
A Deep Echo State Neural Network is used to predict the total intensity at a detector, the standard deviation of intensity over the detector area, and the center of intensity for a deep-turbulence example. A short description of the reason for choosing a Deep Echo State Network, a full description of the network optimization, and an example using 30 seconds of data are given. Specifically, indications are that this type of network can handle the nonstationary and nonlinear aspects of laser propagation through long-distance, deep atmospheric turbulence. The network shows a remarkable ability to predict future signals. At this time, more work needs to be done on optimizing the network to achieve even better results.
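Echo state networks keep a fixed random recurrent reservoir and train only a linear readout, which makes them cheap to fit to streaming intensity data. The sketch below is a generic, minimal single-layer ESN (not the Deep Echo State Network used in this work); the reservoir size, spectral radius, and ridge value are illustrative defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

class EchoStateNetwork:
    """Minimal echo state network: a fixed random recurrent reservoir
    with a ridge-regression-trained linear readout."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, ridge=1e-6):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale recurrent weights so the largest eigenvalue modulus equals
        # spectral_radius < 1 (a common heuristic for the echo state property).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.ridge, self.n_res = W, ridge, n_res

    def _states(self, U):
        x, X = np.zeros(self.n_res), []
        for u in U:
            x = np.tanh(self.W_in @ u + self.W @ x)  # leaky-free reservoir update
            X.append(x)
        return np.array(X)

    def fit(self, U, Y):
        X = self._states(U)
        # Ridge-regularized least squares for the readout weights.
        self.W_out = np.linalg.solve(
            X.T @ X + self.ridge * np.eye(self.n_res), X.T @ Y)
        return self

    def predict(self, U):
        return self._states(U) @ self.W_out
```

For example, trained on a scalar signal with a one-step-ahead target, the readout learns to predict the next sample from the reservoir state; a deep variant stacks several such reservoirs.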
We develop and study two approaches for the prediction of optical refraction effects in the lower atmosphere. Refraction can cause apparent displacement or distortion of targets when viewed by imaging systems, or produce steering when propagating laser beams. Low-cost, time-lapse camera systems were deployed at two locations in New Mexico to measure image displacements of mountain-ridge targets due to atmospheric refraction as a function of time. Measurements for selected days were compared with image displacement predictions provided by (1) a ray-tracing evaluation of numerical weather prediction data and (2) a machine learning algorithm with measured meteorological values as inputs. The modeling approaches are described, and the target displacement predictions from both were found to be consistent with the field imagery in overall amplitude and phase. However, short-time variations in the experimental results were not captured by the predictions; sampling limitations and uncaptured localized events were contributing factors.
This work details the analysis of time-lapse images with a point-tracking image processing approach, along with the use of an extensive numerical weather model, to investigate image displacement due to refraction. The model is applied to create refractive profile estimates along the optical path for the days of interest. Ray-trace analysis through the model profiles is performed and comparisons are made with the measured displacement results. Additionally, a supervised machine learning algorithm is used to build a predictive model that estimates the apparent displacement of an object from a set of measured meteorological values taken in the vicinity of the camera. The predictions are again compared with the field-imagery results.
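For a near-horizontal path, a ray in a stratified atmosphere curves toward higher refractive index with curvature approximately dn/dh, so the ray-trace step reduces to integrating the paraxial ray equation along the path. The following is a simplified sketch under the assumptions of a constant vertical gradient and small angles, not the numerical-weather-model ray tracer used in this work:

```python
def trace_ray(path_len, step, dndh):
    """Integrate the paraxial ray equation d(theta)/ds ~= dn/dh for a
    near-horizontal ray through a stratified atmosphere with a constant
    vertical refractive-index gradient dndh (1/m).
    Returns the elevation angle (rad) and height offset (m) at path end."""
    theta, h = 0.0, 0.0
    for _ in range(int(path_len / step)):
        theta += dndh * step  # ray bends toward increasing n
        h += theta * step     # accumulate vertical displacement
    return theta, h
```

With a typical negative daytime gradient, the integrated angle at the receiver maps directly to an apparent angular displacement of the target, which is what the point-tracking measurements record in camera pixels.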