Telescopes and imaging interferometers with sparsely filled apertures can be lighter and less expensive than conventional filled-aperture telescopes. However, their greatly reduced MTFs cause significant blurring and loss of contrast in the collected imagery. Image reconstruction algorithms can correct the blurring completely when the signal-to-noise ratio (SNR) is high, but only partially when the SNR is low. This paper compares linear (Wiener) and nonlinear (iterative maximum likelihood) algorithms for image reconstruction under a variety of circumstances: high and low SNR, Gaussian-noise- and Poisson-noise-dominated regimes, and a variety of aperture configurations and degrees of sparsity. The quality metric employed to compare algorithms is image utility as quantified by the National Imagery Interpretability Rating Scale (NIIRS). On balance, a linear reconstruction algorithm with a power-law power-spectrum estimate performed best.
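The linear reconstruction described above can be sketched as follows. This is a minimal illustration assuming the sparse-aperture OTF and the noise power spectral density are known; the power-law object-spectrum parameters (alpha, p) are placeholders, not values from the paper.

```python
import numpy as np

def wiener_reconstruct(blurred, otf, noise_psd, alpha=1e3, p=2.5):
    """Linear (Wiener) reconstruction with a power-law object power-spectrum model.

    blurred   : observed image (2-D array)
    otf       : optical transfer function of the sparse aperture, same shape,
                zero frequency at element [0, 0]
    noise_psd : scalar or array noise power spectral density
    alpha, p  : illustrative power-law parameters, S_obj(f) ~ alpha / f**p
    """
    ny, nx = blurred.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0 / max(ny, nx)            # avoid division by zero at DC
    object_psd = alpha / f**p              # assumed power-law object spectrum

    H = otf
    # Wiener filter: conj(H) * S_obj / (|H|^2 * S_obj + S_noise)
    W = np.conj(H) * object_psd / (np.abs(H)**2 * object_psd + noise_psd)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))
```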
A Bayesian optimization scheme is presented for reconstructing fluorescent yield and lifetime, the absorption coefficient, and the scattering coefficient in turbid media, such as biological tissue. The method utilizes measurements at both the excitation and emission wavelengths for reconstructing all unknown parameters. The effectiveness of the reconstruction algorithm is demonstrated by simulation and by application to experimental data from a tissue phantom containing a fluorescent agent.
We propose a family of new algorithms that can be viewed as a generalization of the Algebraic Reconstruction Techniques (ART). These algorithms can be tailored for trade-offs between convergence speed and memory requirements. They can also be made to include Gaussian a priori image models. A key advantage is that they can handle arbitrary data acquisition schemes. Approximations are required for practical-sized image reconstruction. We discuss several approximations and demonstrate numerical simulation examples for computed tomography (CT) reconstruction.
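The generalized algorithms themselves are not specified in the abstract; as a point of reference, a minimal sketch of the classical ART (Kaczmarz) iteration that the family generalizes might look like this, with the relaxation parameter and sweep count chosen for illustration only.

```python
import numpy as np

def art(A, b, n_sweeps=10, relax=1.0, x0=None):
    """Classical ART (Kaczmarz) baseline: cycle through the rows of A and
    project the current estimate onto each measurement hyperplane.

    A : (m, n) system matrix (e.g., a CT projection matrix)
    b : (m,) measured data
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norm2 = np.einsum('ij,ij->i', A, A)   # squared row norms
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norm2[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norm2[i] * A[i]
    return x
```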
Sequential diversity imaging uses images from a video camera outfitted with an adaptive optic (AO) to improve the images of an extended object. Phase changes introduced by the AO provide the diversity. The technique estimates both the object and the time-varying wavefront introduced by the optical medium, including atmospheric distortion and changes in the camera. The wavefront estimate is used to control the AO and no other wavefront sensing mechanism is needed. We show computer simulations in which the imagery is improved by about a factor of three, provided that the AO changes are made about ten times faster than changes in the medium. Any camera with adaptive focus and digital processing could use the method.
We postulate that, under anisoplanatic conditions involving imaging through turbulent media over a wide area, spatial frequency content that is normally lost outside the aperture of an imaging instrument under unperturbed viewing conditions can be aliased into the aperture. Simulations are presented that reinforce this premise. We apply restoration algorithms designed to correct non-uniform distortions to a real image sequence and observe the de-aliased high-frequency content. We claim that this constitutes super-resolution, and that it is only possible in anisoplanatic imaging scenarios, where the point spread function is position dependent as a result of atmospheric turbulence.
The predominant effect of the atmosphere on the incoming wavefront from an astronomical object is the introduction of a phase distortion, resulting in a speckle image at the ground-based telescope. Deconvolution from wavefront sensing is an imaging technique used to compensate for the degradation due to atmospheric turbulence, in which the point spread function is estimated from the wavefront sensing data. In this approach, however, information about the point spread function contained in the speckle images themselves is not utilised. This paper investigates the joint use of wavefront sensing data and speckle images to reconstruct the point spread function and the object in a Bayesian framework. Results on experimental data demonstrate the feasibility of this approach even at very low light levels.
We analyze the quality of reconstructions obtained when using the multi-frame blind deconvolution (MFBD) algorithm and the bispectrum algorithm to reconstruct images from atmospherically degraded data that are corrupted by detector noise. In particular, the quality of the reconstructions is analyzed in terms of the fidelity of the estimated Fourier phase spectra. Both the biases and the mean square phase errors of the Fourier spectrum estimates are calculated and analyzed. The comparison is made in terms of the Fourier phase spectra because both the MFBD and bispectrum algorithms can estimate Fourier phase information from the image data themselves without requiring knowledge of the system transfer function, and because Fourier phase plays a dominant role in image quality. Computer-simulated data are used for the comparison so that true biases and mean square errors in the estimated Fourier phase spectra can be calculated. For the parameters in this study, the bispectrum algorithm produced less-biased phase estimates than the MFBD algorithm in all cases. The MFBD algorithm produced mean square phase errors comparable to or lower than those of the bispectrum algorithm for good seeing and few data frames, while the converse is true for many data frames and poor seeing.
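As an illustration of the comparison metrics, the sketch below shows how per-frequency bias and mean-square phase error could be computed from an ensemble of reconstructions when the true object is available, as it is for simulated data; it is not the authors' evaluation code.

```python
import numpy as np

def phase_error_stats(estimated_imgs, true_img):
    """Bias and mean-square error of Fourier phase estimates over an ensemble
    of reconstructions, with phase differences wrapped to (-pi, pi].

    estimated_imgs : (K, ny, nx) stack of reconstructed images
    true_img       : (ny, nx) true object
    """
    true_phase = np.angle(np.fft.fft2(true_img))
    est_phase = np.angle(np.fft.fft2(estimated_imgs, axes=(-2, -1)))
    # wrap the phase difference into (-pi, pi] before averaging
    diff = np.angle(np.exp(1j * (est_phase - true_phase)))
    bias = diff.mean(axis=0)        # per-frequency bias
    mse = (diff**2).mean(axis=0)    # per-frequency mean-square phase error
    return bias, mse
```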
The inverse problem of determining the structure (atomic coordinates) of a helical molecule from measurements of the intensities of x-rays diffracted from a disordered, oriented, polycrystalline fiber of the molecule is considered. The problem is highly underdetermined, but can be solved by incorporating additional geometric and steric information. However, current solution methods do not allow for disorder in the fiber specimen. A method for solving this problem for disordered fibers is described that utilizes current solution methods by iteratively modifying the diffraction data to account for the disorder. The method is successfully applied to diffraction data from a disordered DNA fiber.
We describe an automated target tracking algorithm based on a linear spectral estimation technique termed the PDFT algorithm. Typically, the PDFT algorithm is applied to obtain high resolution images from scattered field data by incorporating prior information about the target shape into the reconstruction process. In this investigation, the algorithm is used iteratively to determine the target location and a target signature that can be used as input to an automated target recognition system. The implementation and evaluation of the algorithm are discussed in the context of low resolution imaging systems, with special reference to foliage penetration radar and ground penetrating radar.
For weakly scattering permittivities, each measurement of the scattered far field can be interpreted as a sampling point of the Fourier transform of the object. Furthermore, each sampling point can be accessed by more than one combination of wavelength, propagation direction, and polarization of the incident field. This means that a set of measurements which access the same sampling point can be regarded as redundant. For strongly scattering objects the Fourier diffraction slice theorem does not apply. We show that measurements which are redundant in the weakly scattering case can be exploited to resolve difficulties associated with imaging strongly scattering objects. One-dimensional geometries are investigated to estimate the potential that redundant data sets offer for addressing the inverse scattering problem for strongly and multiply scattering objects. In addition, we discuss preliminary results for solving 2D imaging problems.
Inpainting is an image interpolation problem with broad applications in image processing and digital technology. This paper presents our recent efforts in developing inpainting models based on Bayesian and variational principles. We discuss several geometric image (prior) models, their role in the construction of variational inpainting models, the resulting Euler-Lagrange differential equations, and their numerical implementation.
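As a concrete, hedged illustration of the variational pipeline (prior model, Euler-Lagrange equation, numerical scheme), the sketch below uses the simplest quadratic (Dirichlet) prior, whose Euler-Lagrange equation is Laplace's equation inside the inpainting domain; the geometric priors discussed in the paper would replace this with nonlinear updates. Boundaries are handled periodically for brevity.

```python
import numpy as np

def harmonic_inpaint(image, mask, n_iter=2000):
    """Variational inpainting with a quadratic (Dirichlet) prior: the
    Euler-Lagrange equation is Laplace's equation inside the hole,
    solved here by Jacobi iteration on the missing pixels only.

    image : 2-D array with arbitrary values inside the hole
    mask  : boolean array, True where pixels are missing
    """
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # average of the four neighbours (Jacobi step for Laplace's equation)
        nb = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                     np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = nb[mask]          # update only the missing pixels
    return u
```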
In seismic data processing, we often need to interpolate or extrapolate missing spatial locations in a domain of interest. The reconstruction problem can be posed as an inverse problem in which one attempts to recover the complete band-limited seismic wavefield from inadequate and incomplete data. However, the problem is often ill posed due to factors such as inaccurate knowledge of the bandwidth and noise. In this case, regularization can be used to obtain a unique and stable solution. In this paper, we formulate band-limited data reconstruction as a minimum-norm least squares problem in which an adaptive DFT-weighted norm regularization term is used to constrain the solution. In particular, the regularization term is updated iteratively using the modified periodogram of the estimated data. The technique allows adaptive incorporation of prior knowledge of the data, such as the spectrum support and the shape of the spectrum. The adaptive regularization can be accelerated using FFTs and an iterative solver such as the preconditioned conjugate gradient algorithm. Examples on synthetic and real seismic data illustrate the improvement of the new method over damped least squares estimation.
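A hedged sketch of the adaptive DFT-weighted-norm idea (not the authors' exact implementation): an outer loop updates spectral weights from the modified periodogram of the current estimate, and an inner conjugate-gradient solve uses FFTs to apply the regularized normal-equation operator. All parameter values below are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def dft_weighted_reconstruct(data, sampled, n_outer=5, n_cg=50, eps=1e-3, mu=0.1):
    """Band-limited trace interpolation with an adaptive DFT-weighted norm.

    data    : 1-D array on a regular grid, zeros at missing positions
    sampled : boolean array, True where data were actually observed
    """
    n = data.size
    m = data.astype(float).copy()
    w = np.ones(n)                       # spectral weights from the periodogram

    for _ in range(n_outer):
        # Normal-equation operator: S^T S x + mu * F^H diag(1/(w+eps)) F x,
        # where S samples the observed traces and F is the DFT.
        def apply(x):
            fit = np.where(sampled, x, 0.0)
            reg = mu * np.real(np.fft.ifft(np.fft.fft(x) / (w + eps)))
            return fit + reg

        A = LinearOperator((n, n), matvec=apply, dtype=float)
        rhs = np.where(sampled, data, 0.0)
        m, _ = cg(A, rhs, x0=m, maxiter=n_cg)

        # update the weights from the periodogram of the current estimate
        w = np.abs(np.fft.fft(m))**2
        w /= w.max()

    return m
```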
Three-dimensional (3-D) object surface reconstruction is an important step toward non-destructive measurement of surface area and volume. The laser triangulation technique has been widely used for obtaining 3-D information. However, the 3-D data obtained from triangulation are usually not dense or complete enough for surface reconstruction, especially for objects with irregular shapes. When surfaces are fitted to such sparse 3-D data, inaccuracy in measuring the surface area or calculating the volume of the object is inevitable.
A computer vision technique combining laser triangulation and the distance transform has been developed to improve the measurement accuracy for objects with irregular shapes. A 3-D wire-frame model is generated first with all available 3-D data. Each pixel within the image boundary is then assigned distance information using the distance transform, and this information is used as a constraint for surface fitting and interpolation. With this additional information from the distance transform, more accurate surface approximation can be achieved. The measurement accuracy of this technique is compared with that of other interpolation techniques for the volume measurement of oyster meats.
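The exact fitting procedure is not given in the abstract; the sketch below illustrates one simple way distance-transform values could constrain the interpolation, by modelling height as a smooth function of each pixel's distance to the object boundary. The polynomial model and its degree are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_constrained_surface(mask, sample_rc, sample_z, poly_deg=3):
    """Illustrative use of the distance transform as an interpolation constraint:
    fit surface height as a function of distance-to-boundary from the sparse
    laser-triangulation points, then evaluate it at every pixel.

    mask      : boolean silhouette of the object (True inside the boundary)
    sample_rc : (k, 2) integer row/col coordinates of measured 3-D points
    sample_z  : (k,) measured heights at those points
    """
    sample_rc = np.asarray(sample_rc)
    dist = distance_transform_edt(mask)                 # distance to boundary
    d_samples = dist[sample_rc[:, 0], sample_rc[:, 1]]

    # low-order polynomial fit of height versus distance-to-boundary
    coeffs = np.polyfit(d_samples, np.asarray(sample_z, dtype=float), poly_deg)
    return np.where(mask, np.polyval(coeffs, dist), 0.0)
```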
Many significant features of images are represented in their Fourier transform. The spectral phase of an image can often be measured more precisely than the magnitude for frequencies of up to a few GHz; however, the spectral magnitude is the only measurable quantity in many imaging applications. In this paper, the reconstruction of complex-valued images from either the phase or the magnitude of their Fourier transform is addressed. Conditions for unique representation of a complex-valued image by its spectral magnitude combined with additional spatial information are investigated and presented. Three types of reconstruction algorithms for complex-valued images are developed and introduced: (1) algorithms that reconstruct a complex-valued image from the magnitude of its discrete Fourier transform and part of its spatial samples, based on the autocorrelation function; (2) iterative algorithms based on the Gerchberg-Saxton approach; and (3) algorithms that reconstruct a complex-valued image from its localized Fourier transform magnitude. The advantages of the proposed algorithms over presently available approaches are presented and discussed.
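For the second class of algorithms, a minimal sketch of a Gerchberg-Saxton style iteration is given below, alternating between the known spatial-domain and Fourier-domain magnitudes; the random initialization and iteration count are illustrative.

```python
import numpy as np

def gerchberg_saxton(spatial_mag, fourier_mag, n_iter=200, seed=0):
    """Gerchberg-Saxton style iteration: alternately enforce the known
    spatial-domain magnitude and the known Fourier-domain magnitude,
    keeping the current phase estimate in each domain.
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * 2 * np.pi * rng.random(spatial_mag.shape))
    g = spatial_mag * phase                          # initial complex-image guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))   # impose Fourier magnitude
        g = np.fft.ifft2(G)
        g = spatial_mag * np.exp(1j * np.angle(g))   # impose spatial magnitude
    return g
```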
We show that if the seismic data d are related to the migration image by m_mig = L^T d, then m_mig is a blurred version of the actual reflectivity distribution m, i.e., m_mig = (L^T L) m. Here L is the acoustic forward modeling operator under the Born approximation, where d = L m. The blurring operator (L^T L), or point spread function, distorts the image because of defects in the seismic lens, i.e., the small source-receiver recording aperture and irregular/coarse geophone-source spacing. These distortions can be partly suppressed by applying the deblurring operator (L^T L)^{-1} to the migration image to get m = (L^T L)^{-1} m_mig. This deblurred image is known as a least squares migration (LSM) image if (L^T L)^{-1} L^T is applied to the data d using a conjugate gradient method, and is known as a migration deconvolved (MD) image if (L^T L)^{-1} is applied directly to the migration image m_mig in (k_x, k_y, z) space. The MD algorithm is an order of magnitude faster than LSM, but it employs more restrictive assumptions.
We also show that deblurring can be used to filter out coherent noise in the data, such as multiple reflections. The procedure is to, e.g., decompose the forward modeling operator into primary and multiple reflection operators, d = (L_prim + L_multi) m, invert for m, and find the primary reflection data by d_prim = L_prim m. This method is named least squares migration filtering (LSMF). The above three algorithms (LSM, MD, and LSMF) might be useful for attacking problems in optical imaging.
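Assuming the forward operator L and its adjoint L^T are available as callable functions from the modeling code, a minimal conjugate-gradient sketch of the LSM solve (L^T L) m = L^T d might look like this; it is not the authors' implementation, and the iteration count is illustrative.

```python
import numpy as np

def lsm_cg(L, Lt, d, shape, n_iter=30):
    """Least-squares migration sketch: solve (L^T L) m = L^T d with plain
    conjugate gradients, using only the forward operator L and its adjoint
    Lt supplied as functions.
    """
    m = np.zeros(shape)
    r = Lt(d)                       # residual of the normal equations (m = 0)
    p = r.copy()
    rs_old = np.vdot(r, r).real
    rs0 = rs_old
    for _ in range(n_iter):
        q = Lt(L(p))                # apply the blurring operator L^T L
        alpha = rs_old / np.vdot(p, q).real
        m += alpha * p
        r -= alpha * q
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12 * rs0:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return m
```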
The restoration of images formed through atmospheric turbulence is usually attempted by operating on a sequence of speckle images, because the high spatial frequencies in each speckle image are effectively retained, though reduced in magnitude and distorted in phase. However, speckle imaging requires that the light be quasi-monochromatic. An alternative possibility, discussed here, is to capture a sequence of images through a broadband filter, correct for any local warping due to position-dependent tip-tilt effects, and average over a large number of images. In this preliminary investigation, we simulate several optical transfer functions to compare the signal levels in each case. The investigation followed encouraging results that we obtained recently using a blind-deconvolution approach. The advantages of such a method are that narrow-band filtering is not required, simplifying the equipment and allowing more photons for each short-exposure image, and that the method lends itself to restoration over fields of view wider than the isoplanatic patch without the need to mosaic. The preliminary conclusions are that, so long as the ratio of the telescope objective diameter, D, to the Fried parameter, r0, is less than about 5, the method may be a simple alternative to speckle imaging.
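A minimal sketch of the averaging step described above, assuming only a global (not position-dependent) tip-tilt shift per frame and integer-pixel registration by cross-correlation; the local-warping correction discussed in the text is not included.

```python
import numpy as np

def shift_and_add(frames, ref=None):
    """Tip-tilt corrected averaging: register each short-exposure frame to a
    reference via the peak of the FFT-based cross-correlation, then average
    the aligned frames. Shifts are applied circularly for brevity.
    """
    frames = np.asarray(frames, dtype=float)
    if ref is None:
        ref = frames[0]
    Fref = np.conj(np.fft.fft2(ref))
    out = np.zeros_like(ref)
    for frame in frames:
        xcorr = np.real(np.fft.ifft2(np.fft.fft2(frame) * Fref))
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        out += np.roll(frame, (-dy, -dx), axis=(0, 1))   # undo the measured shift
    return out / len(frames)
```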
The Phase Diverse Speckle (PDS) problem is formulated mathematically as Multi-Frame Blind Deconvolution (MFBD) together with a set of Linear Equality Constraints (LECs) on the wavefront expansion parameters. This MFBD-LEC formulation is quite general and, in addition to PDS, it allows the same code to handle a variety of different data collection schemes specified as data (the LECs) rather than in the code. It also relieves us from having to derive new expressions for the gradient of the wavefront parameter vector for each type of data set. The idea is first presented with a simple formulation that accommodates Phase Diversity, Phase Diverse Speckle, and Shack-Hartmann wavefront sensing. Various generalizations are then discussed that allow many other types of data sets to be handled.
Background: Unless auxiliary information is used, the blind deconvolution problem for a single frame is not well posed, because the object and PSF information in a data frame cannot be separated. There are different ways of bringing auxiliary information to bear on the problem. MFBD uses several frames, which helps somewhat because the solutions are constrained by the requirement that the object be the same, but this is often not enough to obtain useful results without further constraints. One class of MFBD methods constrains the solutions by requiring that the PSFs correspond to wavefronts over a certain pupil geometry, expanded in a finite basis. This is an effective approach, but there is still a uniqueness problem in that different phases can give the same PSF. Phase Diversity and the more general PDS methods are special cases of this class of MFBD, in which the observations are usually arranged so that in-focus data are collected together with intentionally defocused data, sacrificing information about the object for more information about the aberrations. The known differences and similarities between the phases are used to obtain better estimates.
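As an illustration of how LECs on the wavefront parameters can be enforced, the sketch below projects a parameter vector onto the affine set defined by A p = b after an unconstrained update step. Whether the authors apply such a projection or eliminate constrained parameters analytically is not stated in the abstract, so treat A, b, and the projection step as assumptions.

```python
import numpy as np

def project_onto_lec(p, A, b):
    """Project a wavefront-parameter vector p onto the affine set {p : A p = b}.
    A and b encode the data-collection scheme (e.g., common aberrations plus
    known focus offsets in phase diversity).
    """
    A = np.atleast_2d(A)
    residual = A @ p - b
    # p_proj = p - A^T (A A^T)^{-1} (A p - b); lstsq handles rank deficiency
    correction, *_ = np.linalg.lstsq(A @ A.T, residual, rcond=None)
    return p - A.T @ correction
```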
In a digital camera, the MTF of the optical system must include a low-pass filter in order to avoid aliasing. The MTF of incoherent imaging is usually, and in principle, far from an ideal low-pass filter. Theoretically, a digital ARMA filter can be used to compensate for this drawback. In practice, such deconvolution filters suffer from instability because of time-variant noise and the space-variance of the MTF. In addition, in a line scanner the MTF in the scan direction differs slightly from one scanned image to the next. Therefore inverse filtering will not operate satisfactorily in an unknown environment. A new concept is presented which solves both problems using a priori information about an object, e.g., that parts of it are known to be binary. This information is enough to achieve a stable space- and time-variant ARMA deconvolution filter. The best results are achieved using nonlinear filtering and pattern feedback.
The new method was used to improve the bit-error rate (BER) of a high-density matrix-code scanner by more than one order of magnitude. An audio scanner will be demonstrated which reads 12 seconds of music in CD quality from an audio-coded image of 18 mm × 55 mm.
A far-field radar range has been constructed at the University of Massachusetts Lowell Submillimeter-Wave Technology Laboratory to investigate electromagnetic scattering and imagery of threat military targets located in forested terrain. The radar system, operating at X-band, uses 1/35th-scale targets and scenes to acquire VHF/UHF signature data. The trees and ground planes included in the measurement scenes have been dielectrically scaled in order to properly model the target/clutter interaction. The signature libraries acquired by the system could be used to help develop automatic target recognition algorithms. Target recognition in forested areas is difficult because trees can have a signature larger than that of the target, and the rather long wavelengths required to penetrate the foliage canopy further complicate recognition by limiting image resolution. The measurement system and imaging algorithm will be presented, as well as a validation of the measurements obtained by comparing measured signatures with analytical predictions. Preliminary linear co-polarization (HH, VV) and cross-polarization (HV, VH) data will be presented for an M1 tank in both forested and open-field scenarios.
Predicting the future state of a random dynamic signal based on corrupted, distorted, and partial observations is vital for proper real-time control of a system that includes time delay. Motivated by problems from Acoustic Positioning Research Inc., we consider the continual automated illumination of an object moving within a bounded domain, which requires object location prediction due to inherent mechanical and physical time lags associated with robotic lighting. Quality computational predictions demand high fidelity models for the coupled moving object signal and observation equipment pair. In our current problem, the signal represents the vector position, orientation, and velocity of a stage performer. Acoustic observations are formed by timing ultrasonic waves traveling from four perimeter speakers to a microphone attached to the performer. The goal is to schedule lighting movements that are coordinated with the performer by anticipating his/her future position based upon these observations using filtering theory.
Particle system based methods have experienced rapid development and have become an essential technique of contemporary filtering strategies. Hitherto, researchers have largely focused on continuous-state particle filters, ranging from traditional weighted particle filters to adaptive refining particle filters, readily able to perform path-space estimation and prediction. Herein, we compare the performance of a state-of-the-art refining particle filter to that of a novel discrete-space particle filter on the acoustic positioning problem. By discrete-space particle filter we mean a Markov chain that counts particles in discretized cells of the signal state space in order to form an approximate unnormalized distribution of the signal state. For both filters mentioned above, we will examine issues such as the mean time to localize a signal, the fidelity of filter estimates at various signal-to-noise ratios, computational costs, and the effect of signal fading; furthermore, we will provide visual demonstrations of filter performance.
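For reference, here is a minimal sketch of the traditional weighted (bootstrap) particle filter that serves as the continuous-state baseline; the motion and observation models (propagate, likelihood, init_sampler) are user-supplied placeholders, not the authors' performer-tracking models.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles, propagate, likelihood,
                              init_sampler, seed=0):
    """Traditional weighted (bootstrap) particle filter. Returns the filtered
    mean state at each observation time.

    propagate(particles, rng) -> particles  : samples the signal motion model
    likelihood(y, particles)  -> weights    : observation model (e.g., ultrasonic
                                              time-of-flight likelihood)
    init_sampler(n, rng)      -> particles  : draws the initial state ensemble
    """
    rng = np.random.default_rng(seed)
    particles = init_sampler(n_particles, rng)        # (N, state_dim)
    means = []
    for y in observations:
        particles = propagate(particles, rng)         # sample the signal model
        w = likelihood(y, particles)                  # weight by the observation
        w = w / w.sum()
        means.append(w @ particles)                   # weighted state estimate
        # multinomial resampling to avoid weight degeneracy
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]
    return np.array(means)
```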
An iterative optimization algorithm that can be used for speckle reduction and segmentation of synthetic aperture radar (SAR) images is presented. The method performs a fast restoration as a first step, followed by segmentation as a second step.
We have worked with 3-look simulated and real ERS-1 amplitude images. The iterative filter is based on a membrane-model Markov random field (MRF) approximation optimized by a synchronous local iterative method (SLIM). The final form of the restoration gives a total sum preserving regularization (TSPR).
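A hedged sketch of a synchronous local iteration for a membrane-model MRF with a simple sum-preserving rescaling; the authors' SLIM optimization and TSPR formulation may differ in detail, and all parameter values are illustrative.

```python
import numpy as np

def membrane_mrf_restore(image, n_iter=50, lam=2.0, sigma2=1.0):
    """Synchronous (Jacobi-style) local iteration for a membrane-model MRF:
    quadratic smoothness prior plus a quadratic data term, with a global
    rescaling that keeps the total image sum fixed.
    """
    d = image.astype(float)
    u = d.copy()
    total = d.sum()
    for _ in range(n_iter):
        # sum of the four nearest neighbours (periodic boundaries for brevity)
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        # synchronous minimiser of the local quadratic energy at each pixel
        u = (d / sigma2 + lam * nb) / (1.0 / sigma2 + 4.0 * lam)
        u *= total / u.sum()            # preserve the total sum
    return u
```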
In this paper, a Bayesian-based image reconstruction scheme is utilized for estimating a high resolution temperature map of the top of the earth’s atmosphere using the GOES-8 (Geostationary Operational Environmental Satellite) imager infrared channels. By simultaneously interpolating the image while estimating temperature, the proposed algorithm achieves a more accurate estimate of the sub-pixel temperatures than could be obtained by performing these operations independently of one another. The proposed algorithm differs from other Bayesian-based image interpolation schemes in that it estimates brightness temperature as opposed to image intensity and incorporates a detailed optical model of the GOES multi-channel imaging system.
The temperature estimation scheme is compared to deconvolution via pseudo-inverse filtering using two metrics. One metric is the mean squared temperature error. This metric describes the radiometric accuracy of the image estimate. The second metric is the recovered Modulation Transfer Function (MTF) of the image estimate. This method has traditionally been used to evaluate the quality of image recovery techniques. It will be shown in this paper that there is an inconsistency between these two metrics in that an image with high spatial frequency content can be reconstructed with poor radiometric accuracy. The ramifications of this are discussed in order to evaluate the two metrics for use in quantifying the performance of image reconstruction algorithms.
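In simplified form, the two metrics could be computed as below; the "recovered MTF" here is taken as the ratio of radially averaged Fourier magnitudes of the estimate and the truth, which is an illustrative definition rather than the paper's procedure.

```python
import numpy as np

def evaluation_metrics(estimate, truth):
    """Mean squared temperature error and an illustrative 'recovered MTF'
    (ratio of radially averaged Fourier magnitudes of estimate and truth)."""
    mse = np.mean((estimate - truth) ** 2)

    Fe = np.abs(np.fft.fftshift(np.fft.fft2(estimate)))
    Ft = np.abs(np.fft.fftshift(np.fft.fft2(truth)))
    ny, nx = truth.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    # radial averages of the magnitude spectra
    counts = np.bincount(r.ravel())
    prof_e = np.bincount(r.ravel(), Fe.ravel()) / counts
    prof_t = np.bincount(r.ravel(), Ft.ravel()) / counts
    recovered_mtf = prof_e / np.maximum(prof_t, 1e-12)
    return mse, recovered_mtf
```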