This PDF file contains the front matter associated with SPIE Proceedings Volume 9841 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Geoinformatic Emerging Architectures and Technologies
The U.S. Geological Survey (USGS) initiated the Requirements, Capabilities and Analysis for Earth Observations (RCA-EO) activity in the Land Remote Sensing (LRS) program to provide a structured approach to collecting, storing, maintaining, and analyzing user requirements and Earth observing system capabilities information. RCA-EO enables an agency to collect information on current key Earth observation products, services, and projects and to evaluate, at different organizational levels, how reliant they are on Earth observation data from all sources, including spaceborne, airborne, and ground-based platforms. Within the USGS, RCA-EO has engaged over 500 subject matter experts in this assessment and evaluated the impacts of more than 1000 different Earth observing data sources on 345 key USGS products and services. This paper summarizes Landsat impacts at various levels of the USGS organizational structure and highlights the subject matter experts' feedback regarding Landsat data and Landsat-derived products. This feedback is expected to inform future Landsat mission decision making. The RCA-EO approach can be applied in a much broader scope to derive comprehensive knowledge of Earth observing system usage and impacts, informing product and service development and remote sensing technology innovation beyond the USGS.
This paper proposes a novel automated method for classifying the gender of subjects engaged in running. The machine learning pipeline consists of a preprocessing step using principal component analysis, followed by classification with linear discriminant analysis, a nonlinear support vector machine, or a decision stump with AdaBoost. The dataset consists of 49 subjects (25 males, 24 females, 2 trials each), all equipped with approximately 80 retroreflective markers. In each trial the subject's entire body moves unrestrained through a capture volume at a self-selected running speed, producing highly realistic data. The classification accuracy under leave-one-out cross validation for the 49 subjects improves from 66.33% using linear discriminant analysis to 86.74% using the nonlinear support vector machine, and further to 87.76% with the decision stump and AdaBoost classifier. The experimental findings suggest that linear classification approaches are inadequate for classifying gender in a large dataset with subjects running in a moderately uninhibited environment.
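The pipeline the abstract describes can be sketched as follows. This is our illustration, not the paper's code: the motion-capture marker data are not available, so random features and labels stand in, and for brevity PCA is fit once on all trials (a strict protocol would refit it per fold).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 240))   # stand-in for 49 subjects' marker features
y = rng.integers(0, 2, size=49)  # stand-in gender labels

def pca(X, k):
    # Project centered data onto the top-k right singular vectors
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def lda_predict(Xtr, ytr, xte):
    # LDA with equal priors: nearest class mean under a shared covariance
    mus = [Xtr[ytr == c].mean(axis=0) for c in (0, 1)]
    S = np.cov(Xtr.T) + 1e-6 * np.eye(Xtr.shape[1])  # ridge for stability
    Sinv = np.linalg.inv(S)
    dists = [(xte - m) @ Sinv @ (xte - m) for m in mus]
    return int(np.argmin(dists))

Z = pca(X, 10)
correct = 0
for i in range(len(y)):  # leave-one-out cross validation
    mask = np.arange(len(y)) != i
    correct += lda_predict(Z[mask], y[mask], Z[i]) == y[i]
print(f"LOOCV accuracy: {correct / len(y):.2%}")
```

With random stand-in labels the accuracy hovers near chance; the point is the structure of the PCA-then-classify, leave-one-out loop, with the LDA step swappable for an SVM or boosted stump.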
Smartphones have put powerful sensor arrays in nearly everyone's pockets. By fusing the data from these sensors, it is possible to estimate the phone's current orientation. In this study we use a three-axis gimbal to compare the performance of multiple orientation estimation algorithms. Controlling the position of the gimbal allows us to compare the known device orientation to the estimated orientation. Using this method, we determine where each algorithm's faults lie and where it begins to break down; by repeating the same movements for each algorithm, we compare them against one another.
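Comparing a known gimbal orientation to an algorithm's estimate requires a scalar error measure. A standard choice (our construction, not necessarily the paper's) is the angle of the relative rotation between the two orientations expressed as unit quaternions:

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion [w, x, y, z] is its inverse rotation
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product of two quaternions
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def angular_error_deg(q_true, q_est):
    # Angle of the rotation taking q_true to q_est;
    # abs() handles the double cover (q and -q are the same rotation)
    dq = quat_mul(quat_conj(q_true), q_est)
    return np.degrees(2 * np.arccos(np.clip(abs(dq[0]), -1.0, 1.0)))

# Known gimbal pose: 90-degree rotation about z; estimate: identity
q_gimbal = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
q_est = np.array([1.0, 0.0, 0.0, 0.0])
print(angular_error_deg(q_gimbal, q_est))  # ~90.0
```

Logging this error over a scripted gimbal trajectory yields the per-algorithm breakdown curves the study describes.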
Multi-INT fusion of GEOINT and IMINT can enable performance optimization in target detection and target tracking problem domains,1 among others. Contextual information, which defines the relationship of foreground to background scene content, is a source of GEOINT for which various online repositories exist today, including OpenStreetMap (OSM)2 and the United States Geological Survey (USGS).3 However, because the world's landscape is dynamic and ever-changing, such contextual information can easily become stale and irrelevant if not maintained. In this paper we discuss our approach to providing the latest relevant context by performing automated, scene-generated background context segmentation and classification at near-nadir look angles to delineate roadways, parking lots, buildings, and natural areas. This information can be used in a variety of ways, including augmenting context data from repositories, supporting mission pre-planning, and serving real-time missions, so that GEOINT and IMINT fusion can occur and enable significant performance advantages in target detection and tracking applications anywhere in the world.
Multi-focus image fusion is becoming increasingly prevalent, as there is a strong initiative to maximize the visual information in a single image by fusing the salient data from multiple images. This allows an analyst to make decisions more efficiently, based on a larger amount of information, because multiple images need not be cross-referenced. The contourlet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to capture directional and anisotropic properties while being designed to decompose the discrete two-dimensional domain. Many studies have developed and validated algorithms for wavelet image fusion, but the contourlet has not been as thoroughly studied. When contourlet coefficients are substituted for wavelet coefficients in image fusion algorithms, the result is contourlet image fusion. There are a multitude of methods for fusing these coefficients together, and the results demonstrate that there is an opportunity for fusing coefficients in the contourlet domain for multi-focus images. This paper compares the algorithms with a variety of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based assessments, to select the best image fusion method.
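The coefficient substitution described above works because the fusion rule itself is transform-agnostic. A common rule for multi-focus data, sketched here on stand-in arrays as an illustration (a real pipeline would first decompose both images and apply the rule per subband, whether wavelet or contourlet):

```python
import numpy as np

def fuse_abs_max(c_a, c_b):
    # Absolute-maximum rule: keep, at each position, the coefficient with the
    # larger magnitude, on the assumption that in-focus detail produces
    # stronger transform responses
    return np.where(np.abs(c_a) >= np.abs(c_b), c_a, c_b)

# Stand-in subband coefficients from two source images
c_a = np.array([[3.0, -1.0], [0.5, 4.0]])
c_b = np.array([[-2.0, 2.5], [0.7, -1.0]])
print(fuse_abs_max(c_a, c_b))  # picks 3.0, 2.5, 0.7, 4.0
```

Reconstructing the fused image from the fused coefficients with the inverse transform completes the pipeline.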
There is a strong initiative to maximize the visual information in a single image by fusing the salient data from multiple images. Many multi-focus imaging systems exist that could provide better image data if their images were fused together. A fused image allows an analyst to make decisions based on a single image rather than cross-referencing multiple images. The bandelet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to calculate geometric flow in localized regions and decompose the image over an orthogonal basis in the direction of the flow. Many studies have developed and validated algorithms for wavelet image fusion, but the bandelet has not been well investigated. This study investigates the use of bandelet coefficients in place of wavelet coefficients in modified versions of image fusion algorithms. There are many different methods for fusing these coefficients together for multi-focus and multi-modal images, such as the simple average, absolute min and max, principal component analysis (PCA), and a weighted average. This paper compares the image fusion methods with a variety of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based assessments.
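Two of the fusion rules named above can be written generically; they apply unchanged whether the coefficients come from a wavelet or a bandelet decomposition. This is our illustration of the standard rules, not the paper's code:

```python
import numpy as np

def fuse_average(c_a, c_b):
    # Simple average rule: equal weight to both sources
    return 0.5 * (c_a + c_b)

def fuse_pca(c_a, c_b):
    # PCA rule: weights come from the leading eigenvector of the 2x2
    # covariance between the two coefficient sets, normalized to sum to one,
    # so the source with more variance (more detail) gets more weight
    cov = np.cov(np.vstack([c_a.ravel(), c_b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0] * c_a + w[1] * c_b

rng = np.random.default_rng(1)
c_a = rng.normal(scale=2.0, size=(4, 4))  # stand-in subband coefficients
c_b = rng.normal(scale=0.5, size=(4, 4))
fused = fuse_pca(c_a, c_b)
```

Because `c_a` here has larger variance, the PCA rule weights it more heavily than the simple average would.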
Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in image fusion is difficult because many types of imagery data can be fused using different multi-resolution transforms, each of which provides its own coefficients for fusion, creating a large number of possibilities. This paper seeks to understand how the selection of the multi-resolution transform could be automated for different applications, starting in the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective in each sub-domain, identifying one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the characteristics of the fusion inputs and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion can be particularly useful when fusing imagery data captured at multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly combined. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based measures, have been developed for such comparisons. However, accurate assessment of visual quality is hard to scale, which requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used with multi-focus data. Preliminary results show that the Tsallis entropy and spatial frequency (SF) metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
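Of the no-reference metrics mentioned, spatial frequency (SF) is simple enough to show directly. It is the root-sum-square of the RMS row-wise and column-wise first differences of the image; higher SF indicates more local intensity activity, a proxy for sharpness. A minimal sketch on synthetic patches:

```python
import numpy as np

def spatial_frequency(img):
    # SF = sqrt(RF^2 + CF^2), where RF/CF are the RMS of horizontal and
    # vertical first differences of the grayscale image
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.hypot(rf, cf)

flat = np.full((8, 8), 100.0)          # uniform patch: no activity
stripes = np.tile([0.0, 255.0], (8, 4))  # alternating columns: high activity
print(spatial_frequency(flat))     # 0.0
print(spatial_frequency(stripes))  # 255.0
```

Because a blurred region loses high-frequency content, a fused image that correctly selects the in-focus regions should score higher SF than either source image over those regions.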
Motion video analysis is a challenging task, particularly when real-time analysis is required. An important issue is therefore how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions which perform subtasks of the analysis; recent progress in automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention; a gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design that aims to combine the qualities of the human observer's perception with automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking after track loss. The second addresses the initialization of tracking algorithms without motion segmentation, where the operator must provide the system with the object's image region in order to start the tracking algorithm.
Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data; applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is evident when comparing the performance of H.264, designed for standard-definition data, to HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study evaluates how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source software implementations, x264 and x265; these software-only comparisons establish a baseline for how much improvement can generally be expected of HEVC over H.264. Then, specific solutions leveraging different types of hardware are selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of the implementation, HEVC is found to provide similar video quality to H.264 at 40% lower data rates for resolutions greater than 1280x720, roughly 1 megapixel (MPx). For resolutions below 1 MPx, H.264 is an adequate solution, though a small (roughly 20%) compression boost is gained by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS platforms.