KEYWORDS: Zoom lenses, Imaging systems, Imaging spectroscopy, Image processing, Video, RGB color model, Prototyping, Signal to noise ratio, Matrices, Chemical elements
Linear algebra has been an important tool in optical design and imaging-system analysis for hundreds of years. More recently, matrix theory has enabled the development of image processing, particularly with the introduction of large-scale computer processing. The ability to approximate matrices as Toeplitz allows matrix multiplications to be carried out with fast transforms and convolutions, permitting much faster implementations of many image-processing applications. There remains a large class of problems for which no Toeplitz representation is feasible, particularly those requiring the inversion of a large matrix that is often ill-conditioned or formally singular. In this article we discuss a technique for providing an approximate solution to problems that are formally singular. We develop a method for solving problems with a high degree of singularity (those for which the number of equations is far less than the number of variables). Several examples are presented to illustrate the utility of the overall technique. The use of the method for solving small underdetermined problems is presented as an introduction to the use and limitations of the solution. The technique is applied to digital zoom, and the results are compared with standard interpolation techniques. The development of multispectral datacubes for tomographic-type multispectral imaging systems is shown with several simulated results based on real data.
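As a minimal illustration of the kind of formally singular problem addressed here (the sizes below are hypothetical, and this is the standard minimum-norm pseudoinverse solution rather than the article's method):

```python
import numpy as np

# Hypothetical sizes: 20 equations, 400 unknowns (formally singular).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 400))   # measurement matrix
x_true = rng.standard_normal(400)    # unknown signal
b = A @ x_true                       # observed data

# The Moore-Penrose pseudoinverse yields the minimum-norm solution
# consistent with the 20 measurements.
x_hat = np.linalg.pinv(A) @ b

print(np.allclose(A @ x_hat, b))                        # residual is (numerically) zero
print(np.linalg.norm(x_hat) <= np.linalg.norm(x_true))  # minimum-norm property
```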
While able to measure the red, green, and blue channels, color imagers are not true spectral imagers capable of spectral measurements. In a previous paper, it was demonstrated that a low-resolution visible spectrum of a naturally illuminated outdoor scene can be estimated from RGB values measured by a conventional color imager. In this paper we present a refined algorithm and document the results of a study to estimate visible source spectra from solar-illumination scenes using reflectance spectra generated from the USGS database.
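A hedged sketch of the underlying idea (not the paper's refined algorithm): if reflectances are modeled with a few basis spectra, three RGB measurements can determine three basis coefficients. All array names and sizes below are illustrative stand-ins.

```python
import numpy as np

# S (n_wavelengths x 3): assumed camera spectral sensitivities.
# B (n_wavelengths x 3): basis spectra, e.g. leading principal components
# of a reflectance library such as the USGS database (stand-ins here).
rng = np.random.default_rng(1)
n_wl = 31                              # e.g. 400-700 nm in 10 nm steps
S = np.abs(rng.standard_normal((n_wl, 3)))
B = np.linalg.qr(rng.standard_normal((n_wl, 3)))[0]

rgb = np.array([0.4, 0.5, 0.3])        # measured pixel values

# rgb = S.T @ (B @ c)  =>  solve the 3x3 system for coefficients c.
M = S.T @ B
c = np.linalg.solve(M, rgb)
spectrum_est = B @ c                   # estimated low-resolution spectrum
```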
KEYWORDS: Zoom lenses, Algorithm development, Visual information processing, Current controlled current source, Diffraction, Super resolution, Video, Visualization
One of the goals of superresolution has been to achieve interpolation in excess of some externally imposed physical constraint. Initially this was the optical diffraction limit; more recently the Nyquist limit of sampled-data systems has also become an issue. Regardless of the setting, the limitation is the same: there generally are not enough available degrees of freedom to perform an interpolation without severe loss of information. While some success has been achieved in superresolution, magnification is generally limited to less than 2. In this paper we present a method in which context-based basis functions are developed for digital zoom at magnifications assumed to be greater than 2. Although the number of degrees of freedom is still less than the number formally required, because the basis functions are developed for scenes similar to those presented for interpolation, they are more efficient than those developed without regard to context.
The technique is presented together with several still-image and video examples of digital zoom at magnifications of 5 and 10. Results are compared with conventional bicubic-spline interpolation. Parallelization of the technique on graphics processors is discussed toward real-time implementation.
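One plausible reading of context-based basis zoom, sketched below under stated assumptions (PCA basis, block-averaging downsampler; the paper's actual basis construction is not specified here):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 5                                   # magnification (e.g. 5x)
hi, lo = 10, 2                          # 10x10 high-res patch -> 2x2 low-res

# Training: PCA basis from high-res patches of context-similar imagery.
train = rng.random((500, hi * hi))      # stand-in for real training patches
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:4].T                            # keep 4 basis vectors (hi*hi x 4)

# D: block-averaging downsampling operator (lo*lo x hi*hi).
D = np.zeros((lo * lo, hi * hi))
for r in range(lo):
    for c in range(lo):
        for dr in range(M):
            for dc in range(M):
                D[r * lo + c, (r * M + dr) * hi + (c * M + dc)] = 1.0 / M**2

# Zoom: a 2x2 low-res patch gives 4 equations for 4 basis coefficients.
patch_lo = rng.random(lo * lo)
coeffs, *_ = np.linalg.lstsq(D @ B, patch_lo - D @ mean, rcond=None)
patch_hi = mean + B @ coeffs            # 10x10 interpolated patch
```

The context enters through the training patches: a basis learned from similar scenes spends its few degrees of freedom on the structure actually present, rather than on a generic smoothness prior.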
Despite renewed interest, remote sensing of explosives has proven difficult due to the low vapor pressure of the agents. In this paper we discuss a method to detect residues of explosive agents on fabric and clothing using multispectral imaging. Such a technique will aid in the detection of bomb-making activities and the individuals involved. While limited to line-of-sight operation, multispectral imaging has much to recommend it, including the inspection of clothing in public places, luggage, and potential locations for bomb manufacture.
This paper presents the basic techniques developed for the detection of trace TNT and reports the results of several limited field trials. Imaging hardware is discussed, and the processing methodology is reviewed with demonstrations of the difficulty of distinguishing explosives from commonly found false targets. The use of other spectral bands is presented with the goal of eliminating common false targets.
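One generic multispectral detection approach of this kind (shown purely for illustration; the paper's processing chain is not detailed in the abstract) compares each pixel's band vector against a library signature via the spectral angle:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle in radians between a pixel band vector and a reference."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(3)
cube = rng.random((64, 64, 8))          # hypothetical 8-band image
tnt_signature = rng.random(8)           # stand-in for a library signature

angles = np.apply_along_axis(spectral_angle, 2, cube, tnt_signature)
detections = angles < 0.05              # threshold set by false-alarm budget
```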
While able to measure the red, green, and blue channels, color imagers are not true spectral imagers capable of spectral measurements. In this paper we present a processing technique for estimating low-resolution visible spectra from RGB imagery components. Such a methodology will find application in separating reflectivity components from source illumination, as performed in Retinex or linear models.
Numerical processing is discussed and formulated within a non-regularized iterative restoration methodology. Numerical stability and convergence are considered relative to speed and accuracy, as are the effects of calibration data on overall results. Implementation is demonstrated together with several suggestions for real-time processing. Imager calibration methodology is presented together with several results. Limitations of the method are discussed, as well as directions for further investigation.
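A minimal sketch of one standard iterative restoration update of the kind discussed above (Landweber iteration with a positivity constraint; the paper's exact update rule is not specified here):

```python
import numpy as np

def restore(A, b, n_iter=200):
    """Iteratively solve A x ~= b for a non-negative x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/sigma_max^2 converges
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)     # gradient step on ||Ax - b||^2
        x = np.maximum(x, 0.0)               # enforce physical non-negativity
    return x
```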
Tomographic spectral imagers owe much of their development to sophisticated numerical processing. In order to reduce system size and complexity, mechanical detail has often been replaced with ever-increasing algorithm sophistication. In developing the Field Multiplexed Dispersive Imaging Spectrometer (FMDIS), the processing has been broken into two steps: one which deconvolves the solution spatially and a second which deconvolves the solution spectrally. The first step is characterized by large inversion matrices and a few iterations (typically fewer than 10), while the second requires small matrices and a large number of iterations (hundreds to millions). Iterative processing has been employed due to the physical nature of the data: inversions must be robust to moderate amounts of noise and calibration uncertainty.
In this paper we present a deterministic pseudo-inversion technique to replace the second iterative processing step in FMDIS datacube generation. It is shown to be within the required limits of accuracy and can speed up processing by an order of magnitude or more. While not intended to replace the iterative solution technique, it provides a fast means of processing data when speed is more important than accuracy. Implementation of the solution algorithm is discussed relative to the overall solvability of the underdetermined system of equations. Several results are shown from a visible instrument with 33 colors which contrast the two techniques.
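One standard way to build such a deterministic pseudo-inverse (the paper's construction may differ) is a truncated SVD, precomputed once so that each restoration reduces to a single matrix multiply:

```python
import numpy as np

def truncated_pinv(A, rel_tol=1e-2):
    """Pseudo-inverse that discards ill-conditioned singular modes."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 200))     # stand-in underdetermined kernel
A_pinv = truncated_pinv(A)             # computed once, offline
x = A_pinv @ rng.standard_normal(60)   # per-frame cost: one multiply
```

The truncation threshold trades accuracy against robustness to noise and calibration uncertainty, mirroring the speed-versus-accuracy trade-off described above.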
Hyperspectral imaging spectrometers have proven to be both versatile and powerful instruments, with applications in areas as diverse as medical diagnosis, land usage, military target detection, and art forgery. In many applications scanning systems cannot be effectively employed and true "flash" operation is necessary. Multiplex systems have been developed which can gather information in multiple spectral bands simultaneously and then produce a datacube after mathematical restoration. Such systems enjoy compact size, robust construction, low cost, and zero moving parts, at the price of highly complex mathematical restoration operations. Currently the limiting feature of tomographic hyperspectral imagers such as the FMDIS [1,2] is the speed of restoration. Due to the large size of the restoration kernel, restorations are typically recursive and require many iterations to achieve satisfactory results. Little can be done to make the systems smaller, since the size is determined by the number of colors and the pixel size of the focal plane arrays (FPAs) employed. Thus, techniques must be investigated to speed up the restoration, either by reducing the number of iterations or by reducing the number of operations within an iteration. Since it is assumed that little can be done to reduce the number of operations in an iteration (the operations are already performed in sparse format), we investigate reducing the number of iterations through mathematical acceleration. We assume this acceleration will be advantageous regardless of the mechanism (PC-based or a dedicated processor such as a gate array) by which the restoration is implemented.
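As a hedged sketch of one form of mathematical acceleration (Nesterov-style momentum added to a gradient iteration; the specific acceleration studied in the paper is not detailed in the abstract), note that the per-iteration cost is unchanged while the iteration count typically drops substantially:

```python
import numpy as np

def accelerated_restore(A, b, n_iter=100):
    """Momentum-accelerated gradient iteration for A x ~= b."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for k in range(1, n_iter + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum extrapolation
        x_prev = x
        x = y + step * A.T @ (b - A @ y)           # same cost per iteration
    return x
```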
The ability to remotely detect explosive devices and explosive materials has generated considerable interest over the last several years. The study of remote sensing of explosives dates back many decades, but recent world events have forced the technology to respond to changing conditions and to bring new technologies to the field in shorter development times than previously thought possible.
Several applications have proven both desirable and elusive to date. Most notable is the desire to sense explosives within containers, sealed or otherwise. This requires a sensing device to penetrate the walls of the container, a difficult task when the container is steel or thick cement. Another is the desire to detect explosive vapors which escape from a container into the ambient air. This has been made difficult because explosives are generally formulated to have extremely low vapor pressure, which makes many gas-detection techniques weak candidates for explosive vapor detection [1]. Because of the many difficulties of general remote explosive detection, we have attempted to bound the problem into a series of achievable steps, the first of which is a simple remote detection of TNT-bearing compounds. Before discussing our technology, we will first discuss our choice for attacking the problem in this manner.
The operation of tomographic spectral imaging devices is analogous to problems in image restoration, with similar restoration techniques. Generally the problem is cast as the restoration of a sparse, singular kernel, where accuracy and computational speed are trade-off issues. While there is much conventional wisdom concerning the ability to restore such systems, experience has shown that the situation is often less bleak than imagined. Results of the restoration of several tomographic instruments are presented, with a series of improvements that are the result of both ad hoc numerical techniques and theoretical constraints. The influence of physical hardware on restoration results is discussed, as well as counterintuitive lessons learned from a multi-year program to develop efficient restoration techniques for tomographic imaging spectrometers.
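A hedged illustration of restoring such a sparse, singular kernel with an off-the-shelf Krylov solver (LSQR); the instruments' production code is not shown in the abstract and may use different techniques:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(5)
# Stand-in sparse, underdetermined kernel (500 equations, 2000 unknowns).
A = sparse_random(500, 2000, density=0.01, format="csr", random_state=5)
b = A @ rng.random(2000)

# LSQR needs only sparse matrix-vector products, matching the
# sparse-kernel structure, and tolerates the singular system.
x = lsqr(A, b, atol=1e-8, btol=1e-8)[0]
```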
A test was undertaken in Tucson, AZ to simultaneously measure the four components of the Stokes vector with a Lenticular Prismatic Polarization Integrating (LPPI) Filter. Simultaneous measurements were taken with a tomographic hyperspectral imaging instrument. Data were taken in the visible spectral band of a variety of scenes over a diurnal period to find portions of objects which possessed a relatively high degree of polarization (DOP; total, linear, and circular). Spectral content of both natural and man made objects was analyzed as well as the spectral content of the areas which possessed a relatively high DOP to ascertain if relatively high DOP objects have similar spectra. The objective of the study is the development of techniques which enable estimation of the DOP of objects from an analysis of the spectral content alone, thus enabling both multispectral processing and polarization detection without the need for separate polarization instrumentation.
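The degree-of-polarization quantities referenced above follow from the standard Stokes definitions (the per-pixel array names below are hypothetical):

```python
import numpy as np

def degrees_of_polarization(I, Q, U, V):
    """Total, linear, and circular DOP from per-pixel Stokes images."""
    dop_total = np.sqrt(Q**2 + U**2 + V**2) / I
    dop_linear = np.sqrt(Q**2 + U**2) / I
    dop_circular = np.abs(V) / I
    return dop_total, dop_linear, dop_circular

rng = np.random.default_rng(6)
I = rng.uniform(0.5, 1.0, (4, 4))              # synthetic Stokes images
Q, U, V = 0.1 * rng.standard_normal((3, 4, 4))
dop_t, dop_l, dop_c = degrees_of_polarization(I, Q, U, V)
```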
Tomographic imaging spectrometers have proven to be a cost-effective means to achieve moderate-resolution spectral imaging. Instruments have been constructed in the visible which have demonstrated acceptable performance. Infrared instruments have been developed and tested as proofs of concept; however, optimization issues remain. In this paper we discuss the tradeoffs among optical design, disperser design, and mathematical restoration. While the final design choice is often application dependent, we show that issues such as more effective use of the focal plane array, increasing the signal-to-noise ratio, and removal of self-emission in the infrared all impact the restoration algorithm, each with its own tradeoffs. We introduce the Field Multiplexed Dispersive Imaging Spectrometer (FMDIS), an alternate tomographic imaging spectrometer design. Results of FMDIS designs and restorations will be presented.
Computed Tomographic Imaging Spectrometers are described and shown to be capable of providing real-time flash multispectral imagery. Restoration equations and techniques for optimizing performance are described. Experimental results are shown illustrating system optimization and practical usage of CTIS systems.
The advent of imaging spectroscopy has enabled the construction of optical sensors that provide hyperspectral imagery on scales previously unattainable. Whereas multiband imagery in several spectral bands has been available for some time, the new generation of instruments is capable of providing imagery in hundreds or thousands of spectral bands. The price of increased measurement resolution is both greater system complexity and an increased data-processing burden.
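The scale of that processing burden follows from simple arithmetic; the sensor parameters below are hypothetical:

```python
# Illustrative arithmetic: raw-data volume grows linearly with band count.
rows, cols, bands = 1024, 1024, 500    # assumed hyperspectral cube
bits_per_sample, frames_per_sec = 16, 30

bytes_per_cube = rows * cols * bands * bits_per_sample // 8
print(bytes_per_cube / 2**30, "GiB per cube")                  # ~0.98 GiB
print(bytes_per_cube * frames_per_sec / 2**30, "GiB/s at video rate")
```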
One of the new instrument designs for producing hyperspectral imagery is the Computed Tomographic Imaging Spectrometer (CTIS). This instrument relies on a computer-generated holographic mask as a dispersing element, together with relatively conventional optical elements and arrays. Design philosophy is discussed relative to system requirements for using hyperspectral imaging in missile and fire control systems. Issues of optical throughput, dispersion, mask complexity, and producibility are discussed. Results are shown for masks manufactured to operate in the visible and infrared regions.
In concert with the design issues of the Computed Tomographic Imaging Spectrometer, the data processing and reduction are discussed both for remote sensing and for typical missile and fire control applications. System tradeoffs between algorithm complexity and mission are presented with regard to current algorithms and their implementation.
Completed systems are presented and results from both first and second-generation instruments are displayed. Deviation of actual operation from expectations is discussed relative to plans for further development.
A technique for shape recognition that is invariant to scale and rotation is presented. The technique employs the counts of bit quads, the basic 2 x 2 elements of binary (0,1) imagery, for each object. The feature vector is a scaled version of the bit-quad counts, which allows a distance to be defined between unknown objects and a collection of known prototypes. Recognition is accomplished by using this distance metric as a classifier. An example is provided that recognizes an automobile shape from a set of six prototypes. Several experiments are performed that change the scale and relative rotation of the unknown; in all cases the correct automobile is identified from the set of six prototypes. A second example considers the effects of boundary noise on classification and points out the advantage of employing noise smoothing prior to feature extraction. The technique has the advantages of simplicity, pipeline implementation, and low storage requirements.
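A minimal sketch of a bit-quad feature extractor and nearest-prototype classifier (the histogram normalization below is one plausible scaling, not necessarily the paper's; prototype images are assumed supplied by the user):

```python
import numpy as np

def bit_quad_features(img):
    """Histogram of the 16 possible 2x2 patterns in a binary (0,1) image."""
    img = np.asarray(img, dtype=np.uint8)
    # Encode each 2x2 neighborhood as a 4-bit pattern index.
    codes = (img[:-1, :-1] + 2 * img[:-1, 1:] +
             4 * img[1:, :-1] + 8 * img[1:, 1:])
    counts = np.bincount(codes.ravel(), minlength=16).astype(float)
    return counts / counts.sum()          # scaling for size invariance

def classify(unknown, prototypes):
    """Index of the nearest prototype by distance between feature vectors."""
    f = bit_quad_features(unknown)
    dists = [np.linalg.norm(f - bit_quad_features(p)) for p in prototypes]
    return int(np.argmin(dists))
```

The appeal noted in the abstract is visible in the sketch: features come from a single raster pass over 2 x 2 neighborhoods (amenable to pipelining), and each object is stored as only 16 numbers.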