The ability to passively reconstruct a scene in 3D provides significant benefit to Situational Awareness systems
employed in security and surveillance applications. Traditionally, passive 3D scene modelling techniques, such as Shape
from Silhouette, require images from multiple sensor viewpoints, acquired either through the motion of a single sensor or
from multiple sensors. As a result, the application of these techniques often attracts high costs and presents numerous
practical challenges. This paper presents a 3D scene reconstruction approach based on exploiting scene shadows, which
only requires information from a single static sensor. This paper demonstrates that a large amount of 3D information
about a scene can be interpreted from shadows; shadows reveal the shape of objects as viewed from a solar perspective
and additional perspectives are gained as the sun arcs across the sky. The approach has been tested on synthetic and real
data and is shown to be capable of reconstructing 3D scene objects where traditional 3D imaging methods fail. Provided the shadows within a scene are discernible, the proposed technique is able to reconstruct 3D objects that are camouflaged, obscured or even outside the sensor's Field of View. The proposed approach can be applied in a range of applications, for example urban surveillance, checkpoint and border control, critical infrastructure protection, and the identification of concealed or suspicious objects or persons that would normally be hidden from the sensor viewpoint.
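As a rough illustration of the shadow-carving idea described in this abstract (not the authors' actual algorithm), the Python sketch below removes voxels whose sun-ray footprint on the ground plane is observed to be lit at any observation time; the shadow masks, sun directions and the ground_to_pixel mapping are hypothetical inputs assumed for the example.

```python
import numpy as np

def carve_with_shadows(voxel_occupancy, voxel_centres, shadow_masks, sun_dirs,
                       ground_to_pixel, ground_z=0.0):
    """Hypothetical shadow-carving sketch: a voxel is carved away if, for any
    observation, the point where its sun ray meets the ground plane is lit."""
    for mask, s in zip(shadow_masks, sun_dirs):
        s = np.asarray(s, dtype=float)                # unit vector pointing towards the sun
        # Intersect the ray voxel -> ground (direction -s) with the plane z = ground_z.
        t = (voxel_centres[:, 2] - ground_z) / s[2]
        ground_xy = voxel_centres[:, :2] - t[:, None] * s[:2]
        for i in np.flatnonzero(voxel_occupancy):
            r, c = ground_to_pixel(ground_xy[i, 0], ground_xy[i, 1])
            if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1] and not mask[r, c]:
                voxel_occupancy[i] = False            # a lit footprint rules the voxel out
    return voxel_occupancy
```

Each solar position acts as an additional virtual viewpoint, so the surviving volume tightens as the sun arcs across the sky, which is the intuition behind the approach.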
Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse
and compact spatial feature descriptors and show much potential for defence and security applications. This paper
considers the extension of such techniques to include information from the temporal domain, to improve utility in
applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal
descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can
distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are
presented, and the relative merits of the approach are discussed.
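For readers unfamiliar with how sparse spatial descriptors can be carried into the temporal domain, the following minimal OpenCV/Python sketch matches SIFT keypoints between consecutive video frames and returns their displacement vectors; it is a generic illustration under standard OpenCV APIs, not the specific spatio-temporal descriptor or tracker proposed in the paper.

```python
import cv2
import numpy as np

def sift_frame_to_frame_flow(prev_gray, curr_gray, ratio=0.75):
    """Match SIFT keypoints between consecutive frames and return the feature
    positions in the first frame plus their frame-to-frame displacements."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2)), np.empty((0, 2))

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    pts, flow = [], []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:           # Lowe's ratio test
            p1 = np.array(kp1[m.queryIdx].pt)
            p2 = np.array(kp2[m.trainIdx].pt)
            pts.append(p1)
            flow.append(p2 - p1)                      # spatial feature + temporal displacement
    return np.array(pts), np.array(flow)
```

Displacements of this kind, accumulated over a sequence, could for example be examined for the periodic gait signatures that separate bipeds from quadrupeds; the paper's own discriminator may differ in detail.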
This paper discusses a novel image noise reduction strategy based on the use of adaptive image filter kernels. Three
adaptive filtering techniques are discussed and a case study based on a novel Adaptive Gaussian Filter is presented. The
proposed filter allows the noise content of the imagery to be reduced whilst preserving edge definition around important
salient image features. Conventional adaptive filtering approaches are typically based on the adaptation of one or two
basic filter kernel properties and use a single image content measure. In contrast, the technique presented in this paper is
able to adapt multiple aspects of the kernel size and shape automatically according to multiple local image content
measures which identify pertinent features across the scene. Example results which demonstrate the potential of the
technique for improving image quality are presented. It is demonstrated that the proposed approach provides superior
noise reduction capabilities over conventional filtering approaches on a local and global scale according to performance
measures such as Root Mean Square Error, Mutual Information and Structural Similarity. The proposed technique has
also been implemented on a Commercial Off-the-Shelf Graphical Processing Unit platform and demonstrates excellent
performance in terms of image quality and speed, with real-time frame rates exceeding 100 Hz. A novel method employed to help leverage the gains of the processing architecture without compromising performance is also discussed.
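As a simplified, hypothetical illustration of content-adaptive Gaussian filtering: the paper adapts multiple kernel properties using multiple content measures, whereas the Python sketch below varies only an isotropic kernel width according to a single Sobel edge-strength measure.

```python
import numpy as np
from scipy.ndimage import sobel

def adaptive_gaussian_filter(img, sigma_min=0.5, sigma_max=3.0, radius=4):
    """Smooth with a per-pixel Gaussian whose width shrinks near strong edges."""
    img = img.astype(np.float64)

    # Single content measure: normalised Sobel gradient magnitude (edge strength).
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    grad /= grad.max() + 1e-12

    # High gradient -> small sigma (preserve edges); flat regions -> large sigma.
    sigma = sigma_max - (sigma_max - sigma_min) * grad

    pad = np.pad(img, radius, mode="reflect")
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma[r, c] ** 2))
            out[r, c] = np.sum(k * pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]) / k.sum()
    return out
```

Mapping strong local gradients to a small sigma preserves edge definition while flat, noisy regions receive the heaviest smoothing, which is the same qualitative trade-off the abstract describes.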
For the purpose of detecting clouds over large areas, it is necessary to use satellite imagery. Although a variety of techniques for cloud detection and cloud height estimation exist, they often make assumptions concerning the radiometry and spectral coverage available to the sensor payload. This paper explores the use of registration shifts observed between dual-bank single-band image pairs from the DMC multi-spectral imager to detect and estimate the height of clouds. The Disaster Monitoring Constellation (DMC) comprises a network of five disaster-monitoring micro-satellites built by Surrey Satellite Technology Ltd (SSTL). Each DMC satellite carries a multi-spectral imager (MSI) consisting of two banks of three channels. The proposed technique uses the narrow angle between imagers to discern altitude; it is comparable to stereo imaging but, using satellite telemetry, is able to determine absolute cloud height without reference to the ground surface. Simulations have shown that with a sub-degree angle between imagers and an appropriate sub-pixel registration scheme, vertical accuracies of the order of a few hundred metres may be extracted. Preliminary results using phase correlation registration to achieve sub-pixel accuracy between DMC images have helped confirm the viability of the technique and will be presented alongside the simulations.
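To make the parallax reasoning concrete, the Python/scikit-image sketch below measures the sub-pixel shift between two single-band images by phase correlation and converts it to a cloud-top height under a simple small-angle geometry; the ground sample distance and bank angle shown are placeholder values, not actual DMC MSI parameters.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def cloud_height_from_parallax(band_a, band_b, gsd_m=32.0, bank_angle_deg=0.5):
    """Estimate cloud-top height from the apparent registration shift between
    two single-band images acquired through banks separated by a small angle.
    gsd_m and bank_angle_deg are illustrative placeholders."""
    # Sub-pixel registration shift via phase correlation.
    shift = phase_cross_correlation(band_a, band_b, upsample_factor=100)[0]
    parallax_px = float(np.hypot(*shift))

    # Small-angle parallax geometry: ground offset = height * tan(bank angle),
    # so height = (shift in pixels * GSD) / tan(angle).
    return parallax_px * gsd_m / np.tan(np.deg2rad(bank_angle_deg))
```

Under these placeholder values, a registration precision of a tenth of a pixel corresponds to a vertical resolution of a few hundred metres, consistent with the accuracy figure quoted in the abstract.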