A model-based multi-sensor fusion framework has previously been developed that supports improved target recognition
by fusing target signature information obtained from sensor imagery [1], [2]. Image- based signature features, however,
are not the only source of information that may be exploited to advantage by a target recognition system. This paper
presents a review of the key features of the model-based fusion framework and shows how it can be expanded to
support information derived from imaging sensors as well as data from other non-imaging sources. The expanded
model-based multi-source framework supports not only the combination of image data, such as Synthetic Aperture
Radar (SAR) and electro-optical (EO), but also various types of non-image data that may be derived from those or
other sensor measurements. The paper illustrates the flexibility of the model-based framework by describing the
combination of spatial information from an imaging sensor with scattering characteristics derived from polarimetric
phase history data. The multi-source fusion is achieved by relating signature features to specific structural elements on
the 3-D target geometry. The 3-D model is used as a sensor-neutral, view-independent common reference for the
combination of multi-source information.
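As a rough illustration of this idea (not taken from the papers; every class, field, and value below is hypothetical), the fusion bookkeeping can be pictured as attaching features from each source to named structural elements of a shared 3-D target model, for example in Python:

    # Hypothetical sketch: associating multi-source signature features with
    # structural elements of a 3-D target model, which serves as the
    # sensor-neutral, view-independent reference for fusion.
    from dataclasses import dataclass, field

    @dataclass
    class StructuralElement:
        """One part of the 3-D target geometry (e.g., turret, hull edge)."""
        element_id: str
        vertices: list                       # 3-D coordinates defining the element
        features: dict = field(default_factory=dict)

    @dataclass
    class TargetModel:
        """Sensor-neutral 3-D reference onto which evidence is accumulated."""
        name: str
        elements: dict = field(default_factory=dict)

        def attach_feature(self, element_id, source, feature):
            """Relate a signature feature (e.g., SAR peak location, EO edge,
            polarimetric scattering type) to a specific structural element."""
            self.elements[element_id].features.setdefault(source, []).append(feature)

    # Example: spatial evidence from an imaging sensor and a scattering
    # attribute from polarimetric phase history, fused on the same element.
    model = TargetModel("vehicle", {"turret": StructuralElement("turret", [])})
    model.attach_feature("turret", "SAR_image", {"peak_dB": 12.5, "pixel": (104, 87)})
    model.attach_feature("turret", "polarimetric", {"scattering": "dihedral"})

Because every feature is indexed by a structural element rather than by a sensor view, evidence from any new source can be added without re-registering it against the other sensors' image planes.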
Synthetic aperture radar (SAR) is an all-weather sensor that has provided breakthrough remote sensing capabilities for
both civilian and military applications. SAR differs from real-aperture sensors in that it achieves fine resolution
using signal processing techniques that are based on certain assumptions about the relative dynamics between the sensor
and the scene. When these assumptions are violated, the quality of the SAR imagery degrades, impacting its
interpretability.
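A minimal sketch of this sensitivity, assuming a simple quadratic azimuth phase model with arbitrary illustrative numbers (none of which come from the paper), compares the compressed azimuth response of a point target with and without an uncompensated phase error:

    # Illustrative only: an uncompensated quadratic phase error, such as
    # unmeasured sensor or target motion can produce, smears the azimuth
    # impulse response after matched-filter compression.
    import numpy as np

    n = 512
    t = np.linspace(-0.5, 0.5, n)                    # slow time (normalized)
    chirp_rate = 800.0                               # assumed azimuth chirp rate
    ideal = np.exp(1j * np.pi * chirp_rate * t**2)   # ideal point-target history
    phase_error = np.exp(1j * np.pi * 40.0 * t**2)   # uncompensated quadratic error

    reference = np.conj(ideal)                       # matched filter
    for label, signal in [("ideal", ideal), ("with error", ideal * phase_error)]:
        compressed = np.abs(np.fft.fftshift(np.fft.fft(signal * reference)))
        peak = compressed.max()
        width_3db = np.sum(compressed > peak / np.sqrt(2))   # crude -3 dB width
        print(f"{label}: peak={peak:.1f}, -3 dB width={width_3db} bins")

The lowered peak and broadened mainlobe in the "with error" case are a simple stand-in for the resolution and interpretability losses described above.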
This paper describes the development of a simulation testbed for evaluating the effects of SAR-specific error sources on
image quality, including effects that originate with the sensor (e.g. system noise, uncompensated motion), as well as
effects that originate in the scene (e.g. target motion, wind-blown trees). The simulation generates synthetic video
phase history and can accommodate a variety of sensor collection trajectories, acquisition geometries, and image
formation options. The simulation approach will be described, example outputs will be shown, and initial results
relating simulation inputs to image quality measures will be presented.
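To make the simulation idea concrete, the following sketch (geometry, carrier frequency, and motion values are invented for illustration and are not the testbed's actual parameters) generates slow-time phase history for a scene-center point scatterer along a straight-line collection trajectory and shows how a scene-originated input, target radial motion, maps to an observable azimuth peak shift after compression:

    # Hedged sketch: synthetic phase history for one point scatterer,
    # with and without target motion, compressed against a stationary
    # reference to expose the resulting azimuth displacement.
    import numpy as np

    c, fc = 3e8, 10e9                      # assumed X-band carrier
    wavelength = c / fc
    n_pulses, prf, v_platform = 256, 500.0, 150.0
    t = (np.arange(n_pulses) - n_pulses / 2) / prf
    sensor = np.stack([v_platform * t,             # along-track motion (x)
                       np.full(n_pulses, -10e3),   # 10 km standoff (y)
                       np.full(n_pulses, 3e3)], 1) # 3 km altitude (z)

    def phase_history(target_velocity_y):
        """Two-way phase vs. slow time for a scene-center point scatterer,
        optionally moving in the range (y) direction."""
        target = np.stack([np.zeros(n_pulses),
                           target_velocity_y * t,
                           np.zeros(n_pulses)], 1)
        r = np.linalg.norm(sensor - target, axis=1)
        return np.exp(-1j * 4 * np.pi * r / wavelength)

    reference = np.conj(phase_history(0.0))
    for v_t, label in [(0.0, "stationary"), (2.0, "2 m/s radial motion")]:
        compressed = np.abs(np.fft.fftshift(np.fft.fft(phase_history(v_t) * reference)))
        shift = int(np.argmax(compressed) - n_pulses / 2)
        print(f"{label}: peak={compressed.max():.0f}, azimuth bin shift={shift}")

Relating such inputs (here, a target radial velocity) to measurable image effects (here, a peak shift, with smearing for faster or more complex motion) is the kind of input-to-quality mapping the testbed is intended to quantify at full fidelity.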