Accurate image registration is a prerequisite for most systems utilising two or more imaging sensors. This can often be
accomplished off-line in the laboratory using appropriate test targets and calibration sources but achieving and
maintaining registration accuracy automatically in the field is a significant challenge. This paper presents an efficient
image registration algorithm capable of automatically registering dual waveband image streams upon system start-up and
then producing updated transform coefficients during live operation. The algorithm is fully automatic and constrained to
ensure reliable operation with minimal or no operator supervision. Robustness to large initial alignment errors is
demonstrated using a selection of challenging multimodal image sets. In addition, a novel high performance adaptive
image fusion algorithm for maximising fused image quality in the presence of sensor noise is presented.
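The abstract does not specify the registration algorithm; as an illustrative sketch only, one common intensity-based approach to multimodal alignment maximises mutual information between the two wavebands. The pure-translation model and exhaustive search below are simplifying assumptions for brevity, not the paper's method.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def register_translation(ref, mov, search=10):
    """Exhaustive search for the integer pixel shift maximising MI.
    np.roll wraps at the borders, a simplification acceptable only
    for shifts small relative to the image size."""
    best_mi, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref, shifted)
            if mi > best_mi:
                best_mi, best_shift = mi, (dy, dx)
    return best_shift
```

In a deployed system the search would run coarse-to-fine over a richer transform (e.g. affine), with the recovered coefficients refreshed during live operation as the abstract describes.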
Image fusion technology is increasingly used within military systems. However, the migration of the
technology to non-defence applications has been limited, both in terms of functionality and processing performance. In
this paper, the development of a low-cost automatic registration and adaptive image fusion system is described. In order
to fully exploit commercially available processor hardware, an alternative registration and image fusion approach has
been developed and the results are presented. Additionally, the software design offers interface flexibility and user
programmability; these features are illustrated through a number of different applications.
Many image fusion systems involving passive sensors require the accurate registration of the sensor data prior to
performing fusion. Since depth information is not readily available in such systems, all registration algorithms are
intrinsically approximations based upon various assumptions about the depth field. Although this is often overlooked, many
registration algorithms can break down in certain situations, and this may adversely affect image fusion performance.
In this paper, we discuss a framework for quantifying the accuracy and robustness of image registration algorithms
which allows a more precise understanding of their shortcomings. In addition, some novel algorithms have been
investigated that overcome some of these limitations. A second aspect of this work has considered the treatment of
images from multiple sensors whose angular and spatial separation is large and where conventional registration
algorithms break down (typically greater than a few degrees of separation). A range of novel approaches is reported
which exploit the use of parallax to estimate depth information and reconstruct a geometrical model of the scene. The
imagery can then be combined with this geometrical model to render a variety of useful representations of the data.
These techniques (which we term Volume Registration) show great promise as a means of gathering and presenting 3D
and 4D scene information for both military and civilian applications.
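As a hedged illustration of the depth-from-parallax step described above (the paper's actual reconstruction method is not given here), the rectified two-view relations below convert disparity to depth and back-project pixels into camera coordinates; all parameter names are assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified two-view relation Z = f * B / d; non-positive
    disparities are mapped to infinite depth."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixels (u, v) at depths z (same-shape arrays) into
    camera coordinates, giving the 3D points from which a geometrical
    scene model can be built."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```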
The reliable detection and tracking of missile plumes in sequences of infrared images is a crucial factor in developing
infrared missile warning systems for use in military and civil aircraft. This paper discusses the development of a set of
algorithms that allow missile plumes to be detected, tracked and classified according to their perceived motion in the
image plane. The aim is to classify the missile motion so as to provide an indication of the guidance law which is being
used and, hence, to determine the type of missile that may be present and allow the appropriate countermeasures to be
deployed. The algorithms allow for the motion of the host platform and they determine the missile motion relative to the
fixed background provided by the scene. The tracks produced contain sufficient information to allow good
discrimination between several standard missile types.
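As an illustration only: a missile guided towards the host on a near-constant bearing appears almost stationary in the background-compensated image plane, whereas a crossing target drifts steadily. The toy classifier below encodes that distinction; the threshold and categories are assumptions, not the paper's classification scheme.

```python
import numpy as np

def classify_track(centroids_px, fps, drift_thresh=0.5):
    """Toy classifier of a missile track's image-plane motion.
    centroids_px: (N, 2) background-compensated track positions.
    A near-stationary track is consistent with a constant-bearing
    (collision-course) approach; steady drift suggests a crossing
    target. Threshold in px/s is illustrative only."""
    v = np.diff(centroids_px, axis=0) * fps        # velocity per frame
    mean_speed = float(np.linalg.norm(v.mean(axis=0)))
    if mean_speed < drift_thresh:
        return "constant-bearing (possible collision course)"
    return "crossing / drifting"
```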
Many airborne imaging systems contain two or more sensors, but they typically only allow the operator to view the output of one sensor at a time. Often the sensors contain complementary information which could be of benefit to the operator, and hence there is a need for image fusion. Previous papers by these authors have described the techniques available for image alignment and image fusion. This paper discusses the implementation of a real-time image alignment and fusion system in a police helicopter. The need for image fusion and the requirement of fusion systems to pre-align images are reviewed. The techniques implemented for image alignment and fusion are then discussed. The hardware installed in the helicopter and the system architecture are described, as well as the particular difficulties of installing a 'black box' image fusion system with existing sensors. The methods necessary for field of view matching and image alignment are described. The paper concludes with an illustration of the performance of the image fusion system, as well as some feedback from the police operators who use the equipment.
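For near-axis pixels under a pinhole model, field of view matching reduces to resampling one sensor onto the other's angular grid. The sketch below computes the required scale factor from the two horizontal fields of view; it is a simplified model, not the installed system's calibration procedure.

```python
import math

def fov_match_scale(fov_a_deg, width_a_px, fov_b_deg, width_b_px):
    """Scale factor for resampling sensor B onto sensor A's angular
    grid, using the near-axis angular size of one pixel (IFOV) of
    each sensor. Pinhole model, lens distortion ignored."""
    ifov_a = math.tan(math.radians(fov_a_deg / 2)) / (width_a_px / 2)
    ifov_b = math.tan(math.radians(fov_b_deg / 2)) / (width_b_px / 2)
    return ifov_b / ifov_a   # >1 means B's pixels span more angle
```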
Detection of anomalies in hyperspectral clutter is an important task in military surveillance. Most algorithms for unsupervised anomaly detection make either explicit or implicit assumptions about hyperspectral clutter statistics: for instance, that the abundance is either normally distributed or elliptically contoured. In this paper we investigate the validity of such claims. We show that while non-elliptical contouring is not necessarily a barrier to anomaly detection, it may be possible to do better. We show how various generative models, which replicate the competitive behaviour of vegetation at a mathematically tractable level, lead to hyperspectral clutter statistics that do not have elliptically contoured (EC) distributions. We develop a statistical test and a method for visualising the degree of elliptical contouring of real data. Having observed that, in common with the generative models, much real data fails to be elliptically contoured, we develop a new method for anomaly detection that has good performance on non-EC data.
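For an elliptically contoured distribution, whitened samples have unit directions uniform on the sphere and radii independent of direction. The diagnostic below exploits that property; it is a minimal sketch of one possible check, not the statistical test developed in the paper.

```python
import numpy as np

def ec_diagnostic(X):
    """Rough check of elliptical contouring for data X of shape (N, D).
    After whitening, EC data have unit directions uniform on the
    sphere and radii independent of direction, so both statistics
    returned here should be close to zero for EC data."""
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Z = (Xc @ V) / np.sqrt(np.clip(w, 1e-12, None))  # whitened samples
    r = np.linalg.norm(Z, axis=1)                    # radii
    U = Z / r[:, None]                               # unit directions
    mean_dir = float(np.linalg.norm(U.mean(axis=0)))
    r_dir_corr = max(abs(float(np.corrcoef(r, U[:, j])[0, 1]))
                     for j in range(U.shape[1]))
    return mean_dir, r_dir_corr
```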
Image registration is in some senses the 'forgotten problem' in multi-sensor image exploitation. At present, image registration typically involves using a constant registration transform to align images from sensors fixed relative to each other and separated by as small a distance as possible. For more challenging situations, in which sensors are more widely spaced and not fixed relative to each other (e.g. teams of Uninhabited Air Vehicles, UAVs), the registration problem becomes far more complex. The theory of solving the registration problem for such situations is poorly understood, with outstanding issues including the expected optimal image alignment, how to perform automatic registration, and how to address time-varying image registration. As a precursor to the solution of such problems we present an analysis of the likely errors associated with these more complex registration problems. Results of trials using current state-of-the-art technology are presented, followed by initial concepts for improving these results.
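One concrete source of the errors analysed here is parallax: a registration transform that assumes a single scene depth misaligns objects at other depths. Under a small-angle approximation the induced image-plane error is f*B*|1/Z - 1/Z0|, as in the sketch below (a simplified model, with all parameter names assumed).

```python
def parallax_error_px(baseline_m, depth_m, assumed_depth_m, focal_px):
    """Image-plane misregistration, in pixels, when a planar-scene
    transform assumes depth Z0 = assumed_depth_m but an object sits at
    Z = depth_m, for sensors separated by baseline_m (small angles)."""
    return focal_px * baseline_m * abs(1.0 / depth_m - 1.0 / assumed_depth_m)

# Example: a 0.5 m baseline, a 1000 px focal length, and a transform
# tuned for 1 km misaligns an object at 100 m by about 4.5 pixels.
print(parallax_error_px(0.5, 100.0, 1000.0, 1000.0))
```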
It is now established that hyperspectral images of many natural backgrounds have statistics with fat-tails. In spite of this, many of the algorithms that are used to process them appeal to the multivariate Gaussian model. In this paper we consider biologically motivated generative models that might explain observed mixtures of vegetation in natural backgrounds. The degree to which these models match the observed fat-tailed distributions is investigated. Having shown how fat-tailed statistics arise naturally from the generative process, the models are put to work in new anomaly detection and un-mixing algorithms. The performance of these algorithms is compared with more traditional approaches.
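A simple way to quantify fat tails, sketched below under a multivariate Gaussian null, is to compare a high quantile of the squared Mahalanobis distances with the corresponding chi-square quantile; a ratio well above one indicates heavier-than-Gaussian tails. This is a generic diagnostic, not the paper's model comparison.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_tail_excess(X, q=0.99):
    """Ratio of the empirical q-quantile of squared Mahalanobis
    distances for X (shape (N, D)) to the chi-square quantile a
    Gaussian would give; values well above 1 indicate fat tails."""
    Xc = X - X.mean(axis=0)
    P = np.linalg.inv(np.cov(Xc, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", Xc, P, Xc)
    return float(np.quantile(d2, q) / chi2.ppf(q, df=X.shape[1]))
```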
Infrared cameras can detect the heat signatures of missile plumes in the mid-wave infrared waveband (3-5 microns) and are being developed for use, in conjunction with advanced tracking algorithms, in Missile Warning Systems (MWS). However, infrared imagery is also liable to contain appreciable levels of noise and significant levels of thermal clutter, which can make missile detection and tracking very difficult. This paper discusses the use of motion-based methods for the detection, identification and tracking of missiles, utilising the apparent motion of a missile plume against background clutter. Using a toolbox designed for the evaluation of missile warning algorithms, detection and tracking algorithms have been developed, tested and evaluated using a mixture of real, synthetic and composite infrared imagery.
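A minimal sketch of the motion-based idea, assuming the platform's ego-motion has already been estimated as a global image shift: compensate the previous frame, difference, and threshold. Real MWS pipelines add temporal filtering and track confirmation, which are not shown here.

```python
import numpy as np

def motion_detect(prev_frame, curr_frame, shift, thresh=5.0):
    """Difference the current frame against the previous frame after
    compensating the estimated platform motion (here a global integer
    shift (dy, dx)), then threshold to flag moving hot points."""
    comp = np.roll(np.roll(prev_frame.astype(float), shift[0], axis=0),
                   shift[1], axis=1)
    diff = np.abs(curr_frame.astype(float) - comp)
    return diff > thresh   # boolean detection mask
```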
The idea of combining multiple image modalities to provide a single, enhanced picture offering added value to the observer or processor is well established, but the technology to realise it is somewhat less mature. In the past few years computing power has advanced sufficiently to finally enable affordable, real-time image fusion systems to become a reality and the field has started to move out of the research laboratories and into real products. Although algorithmic techniques for fusing images are now well known and understood, challenges remain with regard to exploiting different sensor modalities, robustness to environmental and operational conditions and proving performance benefit, to name but a few. This paper provides a broad review of the field of image fusion, from initial research published to the latest technology being developed and systems being deployed. Particular emphasis is placed on image fusion developments that have been made for the military community, which were mainly designed to exploit low light devices and thermal imagers. Wider applications of image fusion are also considered as well as all of the main technologies required to produce real-time image fusion systems. A summary of current and near-term products is given, as well as the latest research trends and end-user analyses reported to date.
Many modern imaging and surveillance systems contain more than one sensor. For example, most modern airborne imaging pods contain at least visible and infrared sensors. Often these systems have a single display that can only show data from one camera at a time, thereby failing to exploit the benefit of having simultaneous multi-spectral data available to the user. It can be advantageous to capture all spectral features within each image and to display a fused result rather than single-band imagery. This paper discusses the key processes necessary for an image fusion system and then describes how they were implemented in a real-time, rugged hardware system. The problems of temporal and spatial misalignment of the sensors and the process of electronic image warping must be solved before the image data is fused. The techniques used to align the two inputs to the fusion system are described and a summary is given of our research into automatic alignment techniques. The benefits of different image fusion schemes are discussed and those that were implemented are described. The paper concludes with a summary of the real-time implementation of image alignment and image fusion by Octec and Waterfall Solutions and the problems that have been encountered and overcome.
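The abstract does not state which fusion schemes were implemented; as one common pixel-level example, the two-scale sketch below averages the low-pass bands of the aligned images and keeps, per pixel, the stronger detail component. The Gaussian low-pass and its width are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_two_scale(a, b, sigma=2.0):
    """Average the low-pass bands of two aligned images and keep, per
    pixel, the larger-magnitude detail component, so strong features
    from either waveband survive in the fused result."""
    a, b = a.astype(float), b.astype(float)
    base_a, base_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    det_a, det_b = a - base_a, b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return 0.5 * (base_a + base_b) + detail
```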
Traditional missile warning systems (MWSs) have tended to use the ultra-violet waveband, where ambient intensity levels tend to be low and the resultant false alarm rate is comparatively small. The development of modern infrared imagers has generated interest in their use in MWSs. Infrared cameras can detect the heat signatures of missile plumes, which peak in the mid-wave (3-5 micron) infrared band, but their imagery can also contain appreciable levels of noise, including intermittent defects that are of the same size as the potential targets. Typically, both missiles and defects will only occupy a few pixels in each image. This paper reviews a project concerned with developing an MWS algorithm toolbox for use in evaluating infrared MWSs. In particular, the paper discusses some of the main problems associated with detecting and tracking missiles in infrared imagery from a moving platform in the presence of localised image noise.
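As an illustration of the small-target problem described, the classic pre-filter below subtracts a local median so that few-pixel features stand out; because pixel defects survive this filter too, a persistence test across registered frames is still needed to separate true missile tracks from intermittent defects. This is a generic sketch, not the toolbox's algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def point_target_residual(frame, size=5):
    """Subtract a local median background so that features only a few
    pixels across stand out; returns the residual normalised by its
    standard deviation, ready for thresholding."""
    f = frame.astype(float)
    residual = f - median_filter(f, size=size)
    return residual / (residual.std() + 1e-6)
```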