This PDF file contains the front matter associated with SPIE Proceedings Volume 9124 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Virtual dimensionality (VD) has received considerable interest as a means of specifying the number of spectrally distinct signatures, denoted by p. To date, all available techniques are eigen-based approaches that use eigenvalues or eigenvectors to estimate the value of p. However, when eigenvalues are used to estimate VD, as in the Harsanyi-Farrand-Chang method or hyperspectral signal subspace identification by minimum error (HySime), there is no way to determine what the spectrally distinct signatures actually are. On the other hand, when eigenvectors or singular vectors are used to estimate VD, as in the maximal orthogonal complement algorithm (MOCA), those vectors do not represent real signal sources. Most importantly, currently available methods for estimating VD run into two major issues: the value of VD is fixed at a constant, and no means is provided for finding the signal sources of interest. In fact, the number of spectrally distinct signatures defined by VD should adapt to the target signal sources of interest; for example, the number of endmembers should differ from the number of anomalies. In this paper we develop a second-order statistics approach to determining the value of VD and the virtual endmember basis.
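The paper proposes a second-order statistics approach; as background, a minimal eigenvalue-comparison sketch in the spirit of the Harsanyi-Farrand-Chang idea illustrates how eigen-based VD estimates are typically formed. The fixed margin threshold here is an arbitrary assumption, not the Neyman-Pearson detector used by the actual method.

    import numpy as np

    def vd_eigen_sketch(cube, threshold=0.01):
        """Rough VD estimate: count bands where the correlation-matrix
        eigenvalue noticeably exceeds the covariance-matrix eigenvalue.
        `cube` is (rows, cols, bands); `threshold` is an assumed margin."""
        x = cube.reshape(-1, cube.shape[-1]).astype(float)
        R = (x.T @ x) / x.shape[0]                 # sample correlation matrix
        K = np.cov(x, rowvar=False)                # sample covariance matrix
        eig_r = np.sort(np.linalg.eigvalsh(R))[::-1]
        eig_k = np.sort(np.linalg.eigvalsh(K))[::-1]
        return int(np.sum(eig_r - eig_k > threshold * eig_r))

    # usage: p = vd_eigen_sketch(hyperspectral_cube)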
Global positioning system (GPS) signals reflected from the ocean surface can be used for various remote sensing purposes. In this paper, we develop a facet model to simulate the GPS signal received from a 2-D large-scale sea surface. In this model, the sea surface is envisaged as a two-scale profile on which the long waves are locally approximated by planar facets. The microscopic profile within a facet is assumed to be represented by a set of sinusoidal ripple patches. The complex reflective function of each facet is evaluated by a modified formula of the original Bass and Fuks' two-scale model, in which the phase factor of each facet carries the capillary wave modification. The scattering field and the bistatic scattering coefficient of the facet model are derived in detail. Using the received GPS signal, we give a detailed analysis of the polarization and scattering properties of the GPS signal scattered from the sea surface.
Region growing is a popular segmentation algorithm for 2-D images that produces a spatially connected region of interest. How to extend this method effectively to hyperspectral image processing is a problem that needs deeper discussion. In this paper, three ways of applying region growing in a hyperspectral scenario are explored to separate oil from sea water. Furthermore, in order to reduce the influence of sunlight, a modification to the growing rule is proposed that takes the properties of the local region into account. Finally, a normalized ATGP is used to obtain more potential targets. The experimental results show that combining unmixing techniques with region growing outperforms the other methods.
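The abstract does not give the exact growing rule; a minimal sketch of spectral-angle-based region growing on a hyperspectral cube, assuming a single seed pixel and a fixed angle threshold, is shown below.

    import numpy as np
    from collections import deque

    def spectral_angle(a, b):
        """Angle (radians) between two spectra."""
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def region_grow(cube, seed, max_angle=0.05):
        """Grow a region from `seed` (row, col) over a (rows, cols, bands)
        cube, adding 4-neighbors whose spectral angle to the seed spectrum
        stays below `max_angle` (an assumed threshold)."""
        rows, cols, _ = cube.shape
        ref = cube[seed]
        mask = np.zeros((rows, cols), dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not mask[rr, cc]:
                    if spectral_angle(cube[rr, cc], ref) < max_angle:
                        mask[rr, cc] = True
                        queue.append((rr, cc))
        return mask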
The point spread function (PSF) is one of the key indicators characterizing the signal transfer characteristics of an imaging system. The edge method is well suited to estimating the PSF of remote sensing imaging systems because it is easy to implement and robust to noise. In this paper, a double-knife-edge method is proposed to recover degraded images using a precisely estimated PSF of the imaging system. The motion-blur direction is first estimated by image differentiation. Two orthogonal edges, one of which lies in the same direction as the main motion blur, are selected from the candidate edges via the Hough transform and used to obtain edge spread functions (ESFs). From these ESFs, a more accurate PSF is derived and used to deconvolve the degraded image with an image restoration algorithm based on total variation (TV) deconvolution, which suppresses artifacts and noise. The experimental results show that this algorithm reconstructs remote sensing images adaptively and efficiently, and the reconstructed image has better PSNR, MSE and MTF than the original degraded image.
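The PSF recovery itself is not spelled out in the abstract; a minimal sketch of the standard knife-edge chain (edge spread function, its derivative as a line spread function, and the MTF as the normalized Fourier magnitude) is given below, assuming an already-extracted edge profile.

    import numpy as np

    def esf_to_mtf(esf):
        """Knife-edge chain: ESF -> LSF (derivative) -> MTF (|FFT|, normalized).
        `esf` is a 1-D intensity profile sampled across the edge."""
        esf = np.asarray(esf, dtype=float)
        lsf = np.gradient(esf)                    # line spread function
        lsf /= lsf.sum() + 1e-12                  # normalize area to 1
        mtf = np.abs(np.fft.rfft(lsf))
        return mtf / mtf[0]                       # MTF(0) = 1

    # usage: mtf = esf_to_mtf(edge_profile); PSF estimation would combine
    # two orthogonal LSFs, which is the double-knife-edge idea of the paper.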
In remote sensing, modern sensors produce multi-dimensional images; for example, hyperspectral images contain hundreds of spectral bands. In many image processing applications, segmentation is an important step. Traditionally, most image segmentation and edge detection methods have been developed for one-dimensional (single-band) images. For multi-dimensional images, the outputs of the individual spectral band images are typically combined under certain rules or through decision fusion. In this paper, we propose a new edge detection algorithm for multi-dimensional images using second-order statistics. First, we reduce the dimensionality of the input images using principal component analysis. Then we apply multi-dimensional edge detection operators that utilize second-order statistics. Experimental results are promising compared to conventional one-dimensional edge detectors such as the Sobel filter.
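The second-order operators themselves are not given in the abstract; a minimal sketch of the first stage only, PCA dimensionality reduction followed by a conventional Sobel gradient magnitude on the leading component, is shown below for comparison purposes.

    import numpy as np
    from scipy.ndimage import sobel

    def pca_edge_sketch(cube, n_components=3):
        """PCA-reduce a (rows, cols, bands) cube, then compute a Sobel
        gradient magnitude on the first principal-component image."""
        rows, cols, bands = cube.shape
        x = cube.reshape(-1, bands).astype(float)
        x -= x.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        pcs = (x @ vt[:n_components].T).reshape(rows, cols, n_components)
        pc1 = pcs[..., 0]
        gx, gy = sobel(pc1, axis=1), sobel(pc1, axis=0)
        return np.hypot(gx, gy)                   # edge strength map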
Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. It is therefore mandatory to provide hardware implementations of this type of algorithm in order to meet the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on the one hand, introducing the whole C-language description into CatapultC; on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC and then connecting and controlling them with an RTL description written without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for an SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All of this demonstrates that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.
An improved classified DCT-based compression algorithm for hyperspectral images is proposed. Because the variation of pixel values within one band of a hyperspectral image is large, the traditional DCT is not very efficient for spectral decorrelation (compared with the optimal KLT). The proposed algorithm is designed to deal with this problem. Our algorithm begins with a 2D wavelet transform in the spatial domain. After that, the obtained spectral vectors are clustered into different subsets based on their statistical characteristics, and a 1D DCT is performed on every subset. The classification consists of three steps so that the statistical features are fully used. In step 1, a mean-based clustering is performed to obtain basic subsets. Step 2 refines the clustering by the range of the spectral vector curve. Spectral vector curves whose maximum and minimum values are located in different intervals are separated in step 3. Since the vectors in one subset are close to each other both in value and in statistical characteristics, which implies a strong correlation within the subset, the performance of the DCT can be very close to that of the KLT while the computational complexity is much lower. After the DWT and DCT in the spatial and spectral domains, an appropriate 3D-SPIHT coding scheme is applied to the transformed coefficients to obtain a bit stream with scalability. Results show that the proposed algorithm retains all the desirable features of the compared state-of-the-art algorithms while remaining efficient, and it also outperforms the non-classified algorithms at the same bit rates.
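The clustering rules are only summarized above; a minimal sketch of the core idea, grouping spectral vectors by their mean and applying a 1-D DCT within each subset, is given below (the two-cluster mean split is an assumption standing in for the paper's three-step clustering).

    import numpy as np
    from scipy.fft import dct

    def classified_dct_sketch(spectra):
        """`spectra` is (num_pixels, bands). Split vectors into two subsets
        by comparing their mean to the global mean, then 1-D DCT each subset."""
        means = spectra.mean(axis=1)
        labels = (means > means.mean()).astype(int)
        coeffs = np.empty_like(spectra, dtype=float)
        for k in (0, 1):
            idx = np.where(labels == k)[0]
            if idx.size:
                coeffs[idx] = dct(spectra[idx], axis=1, norm='ortho')
        return coeffs, labels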
Among discrete orthogonal transforms, the Karhunen-Loeve transform (KLT) achieves the optimal spectral decorrelation for hyperspectral data compression in the minimum mean square error sense. A common approach in such spectral decorrelation techniques is to select m coefficients using some threshold value and to treat the remaining coefficients as zero, which results in a loss of information. In order to preserve more information about small targets, this paper focuses on a new technique called the joint KLT-Lasso. The Lasso is applied to the KLT coefficients: sparse loadings are obtained by placing the Lasso constraint on the KLT regression coefficients, so that more coefficients are shrunk to exactly zero. The goal of our new method is to introduce a limit on the sum of the absolute values of the KLT coefficients, so that some coefficients become zero without using any threshold value. Simulations on different hyperspectral data sets showed encouraging results.
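For an orthonormal transform such as the KLT, the Lasso penalty reduces to soft-thresholding of the transform coefficients; the sketch below illustrates that reduction (the penalty value lam is an assumption, not a value from the paper).

    import numpy as np

    def klt_lasso_sketch(spectra, lam=0.1):
        """`spectra` is (num_pixels, bands). Compute the KLT from the sample
        covariance, project, and soft-threshold the coefficients, which is
        the exact Lasso solution for an orthonormal basis."""
        x = spectra - spectra.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        _, vecs = np.linalg.eigh(cov)             # KLT basis (eigenvectors)
        coeffs = x @ vecs                         # KLT coefficients
        shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
        return shrunk, vecs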
To simultaneously compress multichannel climate data, the Wavelet Subbands Arranging Technique (WSAT) is studied. The proposed technique is based on the wavelet transform and has been designed to improve the transmission of voluminous climate data. The WSAT method significantly reduces the number of transmitted or stored bits in a bit stream while preserving the required quality. In the proposed technique, arranging the wavelet subbands of the input channels yields more efficient compression for multichannel climate data by building appropriate parent-offspring relations among the wavelet coefficients. To test and evaluate the proposed technique, data from the Nevada climate change database are utilized. Based on the results, the proposed technique is an appropriate choice for the compression of multichannel climate data, providing a significantly high compression ratio at low error.
Anomaly detection finds data samples whose signatures are spectrally distinct from those of their surrounding samples. Unfortunately, it cannot discriminate the detected anomalies from one another. To accomplish this task, it requires a measure of spectral similarity, such as the spectral angle mapper (SAM) or spectral information divergence (SID), to determine whether one detected anomaly differs from another. However, this raises the challenging issue of how to find an appropriate threshold value for this purpose; interestingly, this issue has not received much attention in the past. This paper investigates anomaly discrimination that can differentiate detected anomalies without using any spectral measure. The idea is to make use of an unsupervised target detection algorithm, the Automatic Target Generation Process (ATGP), coupled with an anomaly detector to distinguish detected anomalies. Experimental results show that the proposed methods are indeed very effective in anomaly discrimination.
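Neither detector is specified beyond its name in the abstract; the sketch below pairs the standard global RX anomaly detector with a basic ATGP loop (repeated orthogonal projection of the maximum-norm pixel), which is one plausible reading of the proposed combination rather than the paper's exact procedure.

    import numpy as np

    def rx_detector(cube):
        """Global RX: Mahalanobis distance of each pixel from the background."""
        x = cube.reshape(-1, cube.shape[-1]).astype(float)
        mu = x.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(x, rowvar=False))
        d = x - mu
        return np.einsum('ij,jk,ik->i', d, inv_cov, d).reshape(cube.shape[:2])

    def atgp(cube, num_targets=5):
        """ATGP: repeatedly take the max-norm pixel and project the data
        onto the orthogonal complement of the targets found so far."""
        x = cube.reshape(-1, cube.shape[-1]).astype(float)
        targets, proj = [], x.copy()
        for _ in range(num_targets):
            idx = int(np.argmax(np.sum(proj ** 2, axis=1)))
            targets.append(x[idx])
            U = np.stack(targets, axis=1)                   # bands x t
            P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)  # orthogonal projector
            proj = x @ P.T
        return np.array(targets)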
Low-resolution satellite images caused by serious degradation in remote sensing weaken their practical utility. An effective algorithm for high-resolution remote sensing image reconstruction is proposed to recover degraded images using a precisely estimated modulation transfer function (MTF) of the imaging system obtained from a curved knife edge. The curved edge is chosen automatically and robustly among many candidate edges, and it provides higher precision than a straight edge. To suppress artifacts and noise, the total variation (TV) method is applied as well. The experiments show that this algorithm is suitable for recovering a high-resolution image with a high signal-to-noise ratio (SNR).
Ongoing research at Los Alamos National Laboratory studies the Earth’s radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events are dispersed through the ionosphere and appear as broadband nonlinear chirps at a receiver on-orbit. They occur in the presence of additive noise and structured clutter, making their classification challenging. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. Conventional feature extraction techniques using analytical dictionaries, such as a short-time Fourier basis or wavelets, are not comprehensively suitable for analyzing the broadband RF pulses under consideration here. We explore an alternative approach based on non-analytical dictionaries learned directly from data, and extend two dictionary learning algorithms, K-SVD and Hebbian, for use with satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features, and discuss their performance in terms of event classification. We also use principal component analysis to analyze and compare the respective learned dictionary spaces to the real data space.
A compressed coded aperture based imaging warning system with a low-resolution optical sensor is proposed in this paper; it is specifically designed to support the demands of rapid, high-resolution, long-range detection and warning in complex battlefield environments. After analyzing the tactical and technical specifications, the key techniques of this novel warning system are discussed and designed, including the optical imaging module, the image-processing module, the warning control module and the interface unit. The optical imaging module is used for image compression; the coded image is then mathematically reconstructed to a high-resolution image by the image-processing module. The presented super-resolution reconstruction algorithm is efficient and robust. Combining compressed coded imaging simulation with coded-image super-resolution reconstruction, the experiments show that the compressed coded aperture imaging warning system has a longer detectable range and higher resolution, making it a promising option for the defence of important targets.
In this study, we propose an adaptive filter with multiple constraints based on the generalized sidelobe canceller (GSC) structure for target detection in hyperspectral images. The proposed filtering approach can alleviate the performance degradation in target detection caused by estimation errors in the spectral signature of the desired target or by random noise from unknown interference. First, we design an optimal filter that minimizes the interference effect under multiple constraints, including a unit gain response on the desired target and null responses on the undesired targets. The optimal filter can detect the desired target, suppress the undesired targets and minimize the interference effect. Next, an adaptive filter with the GSC structure is proposed to transform the constrained minimization problem into an equivalent unconstrained minimization. The GSC structure contains two branches: the upper branch is a filter with fixed weights wf designed by the multiple constraints to preserve the desired target and the interference; the lower branch contains a blocking matrix B and an adaptive filter with weights wa. Matrix B blocks the desired target and preserves the interference. The adaptive filter can then be designed to minimize the interference effect without constraints. Simulations validate the effectiveness of the proposed adaptive filter with the GSC structure, which is robust to random errors in the spectral signature of the desired target.
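The constrained filter described above is the standard linearly constrained minimum variance (LCMV) design; a minimal closed-form sketch, assuming a known sample correlation matrix and a constraint matrix holding the desired and undesired target signatures, is shown below (the GSC decomposition into wf, B and wa is omitted).

    import numpy as np

    def lcmv_weights(R, C, f):
        """LCMV filter: minimize w^T R w subject to C^T w = f.
        R: (bands, bands) sample correlation matrix of the data.
        C: (bands, k) constraint matrix (desired + undesired signatures).
        f: (k,) constraint responses, e.g. [1, 0, ..., 0]."""
        Rinv_C = np.linalg.solve(R, C)
        return Rinv_C @ np.linalg.solve(C.T @ Rinv_C, f)

    # usage (hypothetical signatures d and u1):
    # C = np.column_stack([d, u1]); f = np.array([1.0, 0.0])
    # w = lcmv_weights(R, C, f); detection_output = data_matrix @ w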
This paper presents a new method to perform endmember extraction with the same accuracy as the well-known Winter's N-Finder algorithm but with less computational effort. In particular, our proposal makes use of the Orthogonal Subspace Projection (OSP) algorithm, as well as the information provided by the dimensionality reduction step that takes place prior to the endmember extraction itself. The results obtained using the proposed methodology demonstrate that more than half of the computing time is saved with negligible variation in the quality of the extracted endmembers, compared with the results obtained with Winter's N-Finder algorithm. Moreover, this is achieved independently of the amount of noise and/or the number of endmembers in the hyperspectral image under processing.
Endmember variability presents a great challenge in endmember finding, since a true endmember may be contaminated by many unknown factors. This paper develops a pixel purity index (PPI) based approach to resolving this issue. It is known that endmember candidates must have PPI counts greater than 0. Using this fact, we can start with all data samples whose PPI counts are greater than 0 and cluster them into p endmember classes, where the value of p can be determined by the virtual dimensionality (VD). We further develop an endmember identification algorithm to select the true endmembers from these p classes. The proposed technique therefore proceeds in three stages: it first uses PPI to produce a set of endmember candidates, then uses a clustering algorithm to group the PPI-generated candidates into p endmember classes, and finally applies an algorithm that extracts the true endmembers from the p endmember classes.
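A minimal sketch of the PPI counting step itself, projecting every pixel onto random unit vectors ("skewers") and incrementing the count of the extreme pixels, is given below; the number of skewers is an arbitrary assumption.

    import numpy as np

    def ppi_counts(cube, num_skewers=1000, seed=0):
        """Pixel purity index: for each random skewer, the pixels with the
        largest and smallest projections get their purity count incremented."""
        x = cube.reshape(-1, cube.shape[-1]).astype(float)
        rng = np.random.default_rng(seed)
        skewers = rng.normal(size=(num_skewers, x.shape[1]))
        skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
        counts = np.zeros(x.shape[0], dtype=int)
        for s in skewers:
            proj = x @ s
            counts[np.argmax(proj)] += 1
            counts[np.argmin(proj)] += 1
        return counts.reshape(cube.shape[:2])

    # pixels with counts > 0 are the candidates that are then clustered into p classes.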
Spectral mixture analysis is one of the major techniques in hyperspectral remote sensing image analysis. Endmember extraction is a necessary step for spectral mixture analysis when endmember information is unknown. If the endmembers are assumed to be pure pixels present in the image scene, endmember extraction amounts to searching for the most distinct pixels. Popular algorithms based on simplex volume maximization (e.g., N-FINDR) and spectral signature similarity (e.g., Vertex Component Analysis, VCA) belong to this type. N-FINDR is a parallel-searching method, where all the endmembers are determined simultaneously; VCA is a sequential-searching method, finding endmembers one after another, which greatly reduces the computational cost. In this paper, we focus on VCA-based endmember extraction. In particular, we propose a new searching approach that makes the extracted endmembers more distinct. Experiments on real data show that it improves the quality of the extracted endmembers.
Endmember extraction has recently received considerable interest in hyperspectral imagery. However, several issues in endmember extraction may have been overlooked. The first and foremost is the very term endmember extraction: many algorithms claiming to be endmember extraction algorithms do not actually extract true endmembers but rather find potential endmember candidates, referred to as virtual endmembers (VEs). Secondly, how difficult it is for an algorithm to find VEs is primarily determined by two key factors, endmember variability and endmember discriminability. While the former issue has been addressed recently in the literature, the latter has not yet been investigated. This paper develops a Fisher's-ratio approach to finding VEs, using a criterion defined as the ratio of endmember variability to endmember discriminability.
This paper presents a progressive band processing (PBP) version of an endmember finding algorithm, the simplex growing algorithm (SGA), called PBP-SGA, which allows users to perform SGA band by band progressively. Several advantages are gained from this approach. First of all, PBP-SGA does not require data dimensionality reduction, since PBP begins with a lower band dimension and gradually increases the band dimension, band by band, until it achieves the desired results. Secondly, PBP can process SGA whenever bands become available, without waiting for all band information to be received; as a result, PBP-SGA can be used for data transmission and communication. Thirdly, PBP-SGA can help identify which bands are crucial during the process of finding endmembers. Finally, PBP-SGA is feasible to implement in real time according to the Band SeQuential (BSQ) format.
Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF), is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmember-wise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF. In MKCNMF, the two problems of optimizing the matrices and selecting the proper kernel are solved jointly. The performance of the proposed unmixing algorithm is evaluated via experiments on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.
Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above-the-cloud computing (ATCC) concept aims to create ad hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of such data. The model includes appropriate identification features for this highly distributed environment. It also deals with business-model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with the unique constraints of the orbital environment and tags data with business-model (contractual) obligation data.
The problem of wave scattering by rough surfaces has been studied extensively by scientists and engineers because of its wide applications in science and technology. In this letter, the physical optics method, a high-frequency technique, is presented for analyzing scattering from rough surfaces. In addition, NVIDIA's compute unified device architecture (CUDA) takes advantage of graphics processing units (GPUs) for parallel computing and greatly improves the speed of computation. As there is a large amount of data to deal with, a parallelization concept based on the GPU is presented to further improve the computational efficiency. Finally, the simulation times of the CPU-based and GPU-based physical optics methods are compared, and a good acceleration effect is observed.
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimization improved MIC performance by 3.4x. Furthermore, the optimized MIC code is 7.0x faster than the optimized multi-threaded code on the four CPU cores of a single-socket Intel Xeon E5-2603 running at 1.8 GHz.
This paper presents computational models of microstrip antennas using the CST software. The main objective of this paper is to evaluate an alternative way to miniaturize the dimensions of microstrip antennas. To this end, a coating made of a ceramic with a high dielectric constant was considered for two different cases. Scattering parameters (S11) and radiation patterns were obtained for both structures and compared with standard microstrip antennas for the S and C bands. Finally, the results show the possibility of reducing the dimensions by 22% to 31% and demonstrate the feasibility of implementing and developing these antennas.
JPEG2000 is an important image compression technique that has been used successfully in many fields. Due to the increasing spatial, spectral and temporal resolution of remotely sensed imagery, fast decompression of remote sensing data is becoming a very important and challenging objective. In this paper, we develop an implementation of JPEG2000 decompression on graphics processing units (GPUs) for fast decoding of a codeblock-based parallel compression stream. We use one CUDA block to decode one frame. Tier-2 is still decoded serially, while Tier-1 and the IDWT are processed in parallel. Since our encoded stream is block-based parallel, meaning each codeblock is independent of the others, we process each block in Tier-1 with one thread. For the IDWT, we use one CUDA block to process one line and one CUDA thread to process one pixel. We investigate the speedups that can be gained by the GPU implementation with respect to serial CPU-based implementations. Experimental results reveal that our implementation achieves significant speedups compared with the serial implementations.
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components, such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, whereas subgrid-scale parameterizations provide an estimation of small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). These have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance and compatibility and allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature; they included vectorization of the code to utilize the vector units inside each CPU, and memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
The availability of hyperspectral images, which are used in military and civilian applications such as target recognition, surveillance, geological mapping and environmental monitoring, has increased in recent years. Because of their large data volume and special importance, lossless compression methods for hyperspectral images now exist that mainly exploit the strong spatial or spectral correlation. C-DPCM-APL is a method that achieves the highest lossless compression ratio on the CCSDS hyperspectral images acquired in 2006, but it has the longest processing time among existing lossless compression methods because it determines the optimal prediction length for each band. C-DPCM-APL obtains its compression performance mainly by using the optimal prediction length while ignoring the correlation between the reference bands and the current band, which is a crucial factor that influences the precision of prediction. Considering this, we propose a method that selects reference bands according to the atmospheric absorption characteristics of hyperspectral images. Experiments on the CCSDS 2006 image data set show that the proposed method reduces the computational complexity heavily without degrading its lossless compression performance compared to C-DPCM-APL.
In order to meet the needs of different users for the quality of remote sensing images in heterogeneous network environments, an online remote sensing image progressive transmission model is constructed in which remote sensing image compression and decompression are synchronized with transmission. At the same time, a pipeline-based multi-threaded acceleration method is proposed that resolves the asynchrony between compression, decompression and transmission so as to improve the efficiency of progressive transmission. Finally, resumption of interrupted downloads is implemented to improve the end-user interactive experience. Experimental results show that the overall processing speed is nearly doubled without reducing image transmission quality when the proposed progressive transmission and real-time compression model is used.
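The paper's pipeline is not described in detail; a minimal producer/consumer sketch using Python's standard threading and queue modules, with placeholder compress and send callables (assumptions), illustrates how compression and transmission can be overlapped.

    import queue
    import threading

    def run_pipeline(tiles, compress, send, depth=4):
        """Overlap compression and transmission: one thread compresses image
        tiles, another drains the bounded queue and transmits them."""
        buf = queue.Queue(maxsize=depth)

        def producer():
            for tile in tiles:
                buf.put(compress(tile))     # blocks when the pipeline is full
            buf.put(None)                   # sentinel: no more tiles

        def consumer():
            while True:
                item = buf.get()
                if item is None:
                    break
                send(item)

        threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()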
Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methodologies to the environmental sciences, using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. We use a modified Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using CoSA: unsupervised Clustering of Sparse Approximations. We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska (USA). Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties (e.g., soil moisture and inundation), and topographic/geomorphic characteristics. In this paper, we explore learning from both raw multispectral imagery and normalized band difference indexes. We explore a quantitative metric to evaluate the spectral properties of the clusters, in order to potentially aid in assigning land cover categories to the cluster labels.
In hyperspectral image classification, each hyperspectral pixel can be represented by a linear combination of a few training samples from a training dictionary. Assuming the training dictionary is available, the hyperspectral pixel can be recovered from a minimal number of training samples by solving a sparse representation problem; the weighted coefficients of the training samples are then obtained, and the class of the pixel can be determined. This process is called classification based on sparse representation. However, traditional sparse classification algorithms have not fully utilized spatial information, and their classification accuracy is relatively low. In this paper, in order to improve classification accuracy, a new sparse classification algorithm based on a First-Order Neighborhood System Weighted (FONSW) constraint is proposed. Compared with other sparse classification algorithms, the experimental results show that the proposed algorithm produces a smoother classification map and higher classification accuracy.
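The FONSW constraint itself is not detailed in the abstract; a minimal sketch of the underlying sparse representation classifier, solving a small orthogonal matching pursuit per pixel and assigning the class with the smallest reconstruction residual, is given below (without the spatial weighting the paper adds).

    import numpy as np

    def omp(D, y, k):
        """Orthogonal matching pursuit: pick k atoms of dictionary D (bands x N)
        that best represent y, returning the sparse coefficient vector."""
        residual, support = y.copy(), []
        coeffs = np.zeros(D.shape[1])
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            sub = D[:, support]
            sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residual = y - sub @ sol
        coeffs[support] = sol
        return coeffs

    def src_classify(D, labels, y, k=5):
        """Assign y to the class whose atoms reconstruct it best."""
        coeffs = omp(D, y, k)
        errors = {}
        for c in np.unique(labels):
            mask = labels == c
            errors[c] = np.linalg.norm(y - D[:, mask] @ coeffs[mask])
        return min(errors, key=errors.get)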
In this paper, a feature extraction method using a very simple local averaging filter for hyperspectral image classification is proposed. The method potentially smoothes out trivial variations as well as noise of hyperspectral data, and simultaneously exploits the fact that neighboring pixels tend to belong to the same class with high probability. The spectral-spatial features, which are extracted and fed into a following classifier with locality preserving character in the experimental setup, are compared with other features, such as spectral only and wavelet-features. Simulated results show that the proposed approach facilitates superior discriminant features extraction, thereby yielding significant improvement in hyperspectral image classification performance.
To improve the use of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Stereo remote sensing information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. First, the three-dimensional structure can be characterized by 3D Zernike descriptors (3DZD). However, different parameters of the 3DZD describe three-dimensional structures of different complexity, and they need to be carefully selected for the various objects on the ground. Second, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of the 3DZD and image features contains a great deal of redundant information, and this redundancy may not improve the classification accuracy and can even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method effectively improves both the computational efficiency and the classification accuracy.
Anomaly detection is becoming increasingly important in hyperspectral data exploitation due to the use of high spectral resolution, which can uncover many unknown substances that cannot be visualized or known a priori. Unfortunately, in real-world applications with no ground truth available, its effectiveness is generally assessed by visual inspection, which is the only means of evaluating its performance qualitatively; in this case, background information provides an important piece of information to help image analysts interpret the results of anomaly detection. Interestingly, this issue has never been explored in anomaly detection. This paper investigates the effect of the background on anomaly detection via various degrees of background suppression. It decomposes anomaly detection into a two-stage process, where the first stage is background suppression, which enhances the anomaly contrast against the background, followed by a matched filter to increase anomaly detectability by intensity. In order to see how background suppression changes progressively with the data samples, causal anomaly detection is further developed to show how an anomaly detector performs background suppression sample by sample, with sample-varying spectral correlation. Finally, a 3D ROC analysis is used to evaluate the effect of background suppression on anomaly detection.
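The two-stage decomposition is described only at a high level; one common reading, sketched below, is a global background-suppression term (the RX-style Mahalanobis distance) followed by a matched filter toward a known target signature. Both the signature and the way the two stages are combined are assumptions for illustration, not the paper's exact detector.

    import numpy as np

    def two_stage_detector(cube, target):
        """Stage 1: background suppression via the inverse background
        covariance (RX-style). Stage 2: matched filter toward `target`."""
        x = cube.reshape(-1, cube.shape[-1]).astype(float)
        mu = x.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(x, rowvar=False))
        centered = x - mu
        suppression = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
        matched = centered @ (inv_cov @ (target - mu))
        return (suppression * matched).reshape(cube.shape[:2])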
Conventional image quality assessment algorithms, such as the peak signal-to-noise ratio (PSNR), mean square error (MSE) and structural similarity (SSIM), need the original image as a reference. They are not applicable to remote sensing images, for which the original image cannot be assumed to be available. In this paper, a no-reference image quality assessment (NRIQA) algorithm is presented to evaluate the quality of remote sensing images. Since blur and noise (including stripe noise) are the common distortion factors affecting remote sensing image quality, a comprehensive evaluation factor is modeled to assess blur and noise by analyzing the image's visual properties for different stimuli, combined with SSIM based on the human visual system (HVS), and to assess the stripe noise using phase congruency (PC). The experimental results show that this algorithm is an accurate and reliable method for remote sensing image quality assessment.
As the overall cost of launches and satellites continues to fall, the variety and purpose of the data collected continue to evolve. This evolution requires a revised set of standards for best practices with regard to academic, governmental and industrial communication and scheduling design. With deliberate consideration of communication and scheduling design, the throughput of data passed via the ever more crowded and noisy limited-bandwidth channels can be improved. This study outlines how implementing a revised standard with regard to ground station scheduling and communication impacts the expected throughput from the satellite itself.
Target detection is one of the most important applications in hyperspectral remote sensing image analysis. Sparse representation has been shown to be effective in hyperspectral target detection. In this method, a sparse representation of a pixel in hyperspectral imagery is a linear combination of a few data vectors from a data dictionary. A training dictionary consisting of both target and background samples in the same feature space is first constructed, and test pixels are sparsely represented by decomposing them over the dictionary. Although sparse representation is considered to preserve the main information of most pixels, its inherent indeterminacy may lead to different representations of the same or similar pixels. In this paper, a manifold regularized sparsity model is proposed to deal with this problem. A graph regularization term is incorporated into the sparsity model under the manifold assumption that similar data pixels should have similar sparse representations. A modified simultaneous version of the SP algorithm (SSP) is then implemented to obtain the recovered sparse vectors, which are composed of the sparse coefficients corresponding to both the target sub-dictionary and the background sub-dictionary. Once the sparse vectors are obtained, the residual between the original test samples and the estimate recovered from the target sub-dictionary, as well as the residual with respect to the background sub-dictionary, are calculated to determine the class of the test pixels. The proposed algorithm is applied to real hyperspectral images to detect targets of interest. Experimental results show that this model achieves more accurate target detection than conventional sparse models.
A novel IR polarization staring imaging system employing a four-camera array is designed for target detection and recognition, especially for man-made targets hidden in a complex battlefield. The design is based on the difference in the polarization characteristics of infrared radiation, which is particularly remarkable between artificial objects and the natural environment. The system employs four cameras simultaneously to capture the polarization difference, replacing the commonly used systems that engage only one camera. Since both types of systems have to obtain intensity images in four different directions (I0, I45, I90, I-45), the four-camera design allows better real-time capability and lower error without the mechanical rotating parts that are essential to one-camera systems. Information extraction and detailed analysis demonstrate that the captured polarization images include valuable polarization information, which can effectively increase the images' contrast and make it easier to segment the target, even a hidden target, from various scenes.
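From the four intensity images mentioned above, the linear Stokes parameters and the degree of linear polarization follow from standard relations; a minimal sketch, assuming co-registered images as NumPy arrays (with I-45 playing the role of I135), is shown below.

    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters and degree/angle of linear polarization
        from four co-registered intensity images."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + 1e-12)
        aop = 0.5 * np.arctan2(s2, s1)            # angle of polarization
        return s0, s1, s2, dolp, aop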
In most digital imaging applications, high-resolution images or videos are desired for later processing and analysis. The desire for high resolution stems from two principal application areas: improvement of pictorial information for human interpretation, and better representation for automatic machine perception. While the image sensor limits the spatial resolution of the image, the image details are also limited by the optical system, due to diffraction and aberration [1]. Monocentric lenses are an attractive option for gigapixel cameras because the symmetrical design focuses light identically coming from any direction. Marks and Brady proposed a monocentric lens design imaging 40 gigapixels with an f-number of 2.5 and resolving 2 arcsec over a 120-degree field of view [2]. Recently, Cossairt, Miau and Nayar proposed a proof-of-concept gigapixel computational camera consisting of a large ball lens shared by several small planar sensors coupled with a deblurring step [3]. The design consists of a ball element, resulting in a lens that is both inexpensive to produce and easy to align. Because the resolution of a spherical lens is fundamentally limited by geometric aberrations, the imaging characteristics of the ball lens are expressed through its geometrical aberrations, for which the general equations of the primary aberrations are given. The effect of shifting the stop position on the aberrations of a ball lens is discussed. The variation of the axial chromatic aberration with the Abbe V-number for different values of the refractive index is analyzed. The variation with the f-number of the third-order spherical aberration, the fifth-order spherical aberration and the spherical aberration obtained directly from ray tracing is discussed. Other imaging evaluation merits, such as the spot diagram, the modulation transfer function (MTF) and the encircled energy, are also described. Most of the analysis of the ball lens is carried out using OSLO optics software from Lambda Research Corporation [4].
The fundamental problem of the modulation transfer function (MTF) from the viewpoint of the lens designer is to find the relation between the MTF and the geometrical aberrations. The spherical aberration is developed into a polynomial expansion. The incoherent point spread function (PSF) of the optical imaging system is derived from the diffraction integral in the presence of aberrations. The optical transfer function (OTF) is the Fourier transform of the PSF, and the modulus of the OTF is the MTF. The relation between the spherical aberration and the MTF is evaluated by a numerical integration method. The normalized MTF is numerically calculated for various amounts of spherical aberration. A comparison is made between the MTF of the corrected spherical aberration using the optimum design for the minimum root-mean-square (RMS) wavefront aberration and that for the minimum peak-to-valley (P-V) wavefront aberration.
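The Fourier relationships in the abstract can be sketched numerically. The following is a minimal, hedged illustration, not the paper's computation: a pure spherical-aberration wavefront W(rho) = W040 * rho^4 over a circular pupil, the PSF obtained from the pupil by FFT, and the MTF taken as the modulus of the normalized OTF; the grid size, sampling, and W040 values are assumptions.

```python
import numpy as np

def mtf_with_spherical_aberration(w040_waves, n=512, pupil_fraction=0.25):
    """PSF and MTF for a circular pupil carrying W(rho) = W040 * rho^4 (in waves).

    w040_waves     : spherical aberration at the pupil edge, in waves
    n              : grid size
    pupil_fraction : pupil diameter as a fraction of the grid (sets sampling margin)
    """
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.sqrt(xx**2 + yy**2) / pupil_fraction      # normalised pupil radius
    pupil = (rho <= 1.0).astype(float)
    wavefront = w040_waves * rho**4                    # aberration in waves
    p = pupil * np.exp(1j * 2.0 * np.pi * wavefront)   # generalised pupil function

    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(p))))**2
    psf /= psf.sum()
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))   # OTF = FT of the PSF
    mtf = np.abs(otf) / np.abs(otf).max()                        # MTF = |OTF|, normalised
    return psf, mtf

for w040 in (0.0, 0.25, 1.0):       # diffraction-limited, quarter-wave, one wave
    _, mtf = mtf_with_spherical_aberration(w040)
    c = mtf.shape[0] // 2
    print(f"W040 = {w040:4.2f} waves, low-frequency MTF samples:",
          np.round(mtf[c, c:c + 5], 3))
```

Increasing W040 visibly depresses the mid-frequency MTF relative to the aberration-free case, which is the dependence the paper quantifies and then compares for the minimum-RMS and minimum-P-V balancing strategies.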
Multiple scattering of light in opaque materials such as white paint and human tissue forms a volume speckle field, which greatly reduces the imaging depth and degrades the imaging quality. A novel approach is proposed to focus light through a turbid medium using amplitude modulation with a genetic algorithm (GA) operating on speckle patterns. Compared with phase modulation, the amplitude modulation approach, in which each element of the spatial light modulator (SLM) is either zero or one, is much easier to implement. Theoretical and experimental results show that the GA is better suited to low signal-to-noise ratio (SNR) environments than existing amplitude control algorithms such as binary amplitude modulation. The circular Gaussian distribution model and Rayleigh-Sommerfeld diffraction theory are employed in our simulations to describe the turbid medium and the light propagation between optical devices, respectively. It is demonstrated that the GA technique can achieve a higher overall enhancement, converges much faster than the other algorithms, and outperforms all of them at high noise levels. Focusing through a turbid medium has potential applications in the observation of cells and protein molecules in biological tissues and other structures at the micro/nano scale.
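To make the optimization loop concrete, here is a minimal sketch of a GA searching for a binary amplitude mask that maximizes the intensity at a single target speckle grain. The turbid medium is reduced to one row of a circular-Gaussian transmission matrix, and the population size, mutation rate, and segment count are arbitrary assumptions; the paper's full Rayleigh-Sommerfeld propagation and noise model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                                    # number of binary SLM segments (assumed)
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
                                           # medium response at the target: circular Gaussian

def intensity(mask, noise=0.0):
    """Focal intensity for a binary (0/1) amplitude mask, with optional detection noise."""
    return np.abs(t @ mask) ** 2 + noise * rng.normal()

def genetic_focus(pop_size=40, generations=200, p_mut=0.02, noise=0.0):
    """Minimal GA: binary chromosomes are SLM masks, fitness is the focal intensity."""
    pop = rng.integers(0, 2, size=(pop_size, N))
    for _ in range(generations):
        fit = np.array([intensity(m, noise) for m in pop])
        pop = pop[np.argsort(fit)[::-1]]           # rank the population
        elite = pop[: pop_size // 2]               # keep the better half as parents
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, N)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            flip = rng.random(N) < p_mut                 # random bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([elite, children])
    fit = np.array([intensity(m, noise) for m in pop])
    return pop[np.argmax(fit)], fit.max()

baseline = np.mean([intensity(rng.integers(0, 2, N)) for _ in range(100)])
best_mask, best_I = genetic_focus()
print(f"enhancement over a random mask: {best_I / baseline:.1f}x")
```

Because the fitness is evaluated on whole masks rather than one segment at a time, a measurement-noise term added in intensity() perturbs the ranking far less than it perturbs the per-segment decisions of binary amplitude modulation, which is the robustness argument the abstract makes.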
Particle swarm optimization (PSO) is exploited in an optical focusing system that shapes the phase of the incident light, which can break the diffraction limit and enhance the focal intensity through highly scattering media. The focusing system is mainly composed of a spatial light modulator (SLM), a lens, and the highly scattering medium placed behind the lens. The stepwise sequential algorithm and the continuous sequential algorithm are sensitive to noise, and the genetic algorithm converges slowly. Compared with these algorithms theoretically and experimentally, PSO is robust, effective, and able to converge rapidly, obtaining the best solution by following the search of the optimal particle in the solution space. The capability of focusing beyond the diffraction limit and increasing the focal intensity through dynamic scattering media could benefit biological microscopy and imaging through turbid environments.
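The sketch below shows the PSO update applied to SLM phase patterns against the same toy scattering model as above (one complex transmission vector); the particle count, inertia, and acceleration coefficients are generic textbook values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                      # number of SLM phase segments (assumed)
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # medium response at the focus

def focal_intensity(phases):
    """Intensity at the target speckle grain for a given SLM phase pattern."""
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

def pso_focus(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: each particle is a candidate phase pattern in [0, 2*pi)^N."""
    pos = rng.uniform(0, 2 * np.pi, (n_particles, N))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([focal_intensity(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    gbest_val = pbest_val.max()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.mod(pos + vel, 2 * np.pi)          # keep phases on the SLM range
        vals = np.array([focal_intensity(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.max() > gbest_val:
            gbest_val = pbest_val.max()
            gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, gbest_val

ideal = np.sum(np.abs(t)) ** 2               # intensity with perfectly matched phases
_, best = pso_focus()
print(f"PSO reaches {100 * best / ideal:.1f}% of the ideal phase-conjugate focus")
```

All particles are steered jointly toward the personal and global bests, so the search exploits whole measured patterns rather than single-segment updates, which is what gives PSO its noise robustness and fast convergence relative to the sequential algorithms.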
To acquire high-resolution IR polarization images, a pixel-level image reconstruction method was introduced, aimed at IR polarization imaging systems based on the multi-aperture principle. The geometric mapping relation between the images was studied first and forms the basis of the method. The parameters of the mapping relation were calculated, and the pixels of each acquired image were then mapped to a virtual digital plane, at which precise, resolution-enhanced polarization images could be obtained by taking advantage of the pixel deviations and rearranging the pixels. Experimental results demonstrated that the algorithm could help the multi-aperture imaging system easily render precise, high-resolution polarization images.
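The abstract does not specify the form of the geometric mapping, so the following is only a structural sketch under a simplifying assumption: each sub-aperture image is related to the virtual plane by a known translational offset, and pixels are rearranged onto an upsampled grid by nearest-neighbor accumulation. The function name, the 2x scale, and the half-pixel shifts are all hypothetical.

```python
import numpy as np

def rearrange_to_virtual_plane(images, shifts, scale=2):
    """Map pixels of several low-resolution sub-aperture images onto one
    upsampled "virtual" grid using known sub-pixel shifts, averaging wherever
    several samples land on the same virtual pixel.

    images : list of 2-D arrays with identical shape (H, W)
    shifts : list of (dy, dx) offsets of each image, in low-resolution pixels
    scale  : upsampling factor of the virtual plane
    """
    h, w = images[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(images, shifts):
        vy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        vx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (vy, vx), img)       # accumulate samples on the virtual plane
        np.add.at(cnt, (vy, vx), 1)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# toy usage: four stand-in sub-aperture images with half-pixel offsets
rng = np.random.default_rng(3)
truth = rng.random((32, 32))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
images = [truth for _ in shifts]
hi = rearrange_to_virtual_plane(images, shifts)
print(hi.shape)                             # (64, 64)
```

A calibrated system would replace the assumed translations with the measured mapping parameters between cameras, but the rearrangement onto a finer common grid is the step that yields the resolution-enhanced polarization images.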
Visual tracking is an important task in computer vision. Although much research has been done in this area, some problems remain; one of them is drifting. To handle this problem, a new appearance model update method based on a forward-filtering backward-smoothing particle filter is proposed in this paper. The previous appearance model is smoothed by exploiting information from the current frame, instead of being updated instantly as in traditional tracking methods. It has been shown that smoothing based on future observations makes previous and current predictions more accurate, so the appearance model updated by our approach is more accurate. At the same time, online tracking is achieved, in contrast with some previous work in which the smoothing is done offline. With the smoothing procedure, the tracker is more accurate and less likely to drift than traditional ones. Experimental results demonstrate the effectiveness of the proposed method.
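To show the mechanism behind the update rule, here is a generic forward-filtering backward-smoothing (FFBS) particle sketch on a toy scalar state standing in for an appearance coefficient; it is not the paper's tracker, and the dynamics, noise levels, and particle counts are arbitrary assumptions. The smoothing weights reuse the forward particles but re-weight them with information from later frames.

```python
import numpy as np

rng = np.random.default_rng(4)
T, Np = 30, 200
q, r = 0.1, 0.5                                   # process / observation noise std (assumed)
truth = np.cumsum(rng.normal(0, q, T))            # latent "appearance coefficient"
obs = truth + rng.normal(0, r, T)                 # per-frame noisy observations

def fwd_filter():
    """Bootstrap particle filter; stores weighted particles for every frame."""
    parts, wts = np.zeros((T, Np)), np.zeros((T, Np))
    x = rng.normal(0, 1, Np)
    for t in range(T):
        x = x + rng.normal(0, q, Np)                       # propagate
        w = np.exp(-0.5 * ((obs[t] - x) / r) ** 2)         # weight by likelihood
        w /= w.sum()
        parts[t], wts[t] = x, w
        x = x[rng.choice(Np, Np, p=w)]                     # resample
    return parts, wts

def backward_smooth(parts, wts):
    """Marginal smoothing weights w_{t|T} via the standard FFBS recursion."""
    sw = wts.copy()
    for t in range(T - 2, -1, -1):
        # transition density f(x_{t+1}^j | x_t^i) for all particle pairs
        f = np.exp(-0.5 * ((parts[t + 1][:, None] - parts[t][None, :]) / q) ** 2)
        denom = f @ wts[t] + 1e-300                        # sum_k w_t^k f(x_{t+1}^j | x_t^k)
        sw[t] = wts[t] * ((f / denom[:, None]).T @ sw[t + 1])
        sw[t] /= sw[t].sum()
    return sw

parts, wts = fwd_filter()
sw = backward_smooth(parts, wts)
filt = (parts * wts).sum(axis=1)
smth = (parts * sw).sum(axis=1)
print("filtered RMSE :", np.sqrt(np.mean((filt - truth) ** 2)))
print("smoothed RMSE :", np.sqrt(np.mean((smth - truth) ** 2)))
```

The smoothed estimate of each past state typically has a lower error than the filtered one because it incorporates the later observation, which is exactly why revising the previous appearance model with the current frame reduces drift while the tracker still runs online (only a short lag of frames needs to be revisited).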
Visual tracking is one of the significant research directions in computer vision. Although the standard random ferns tracking method performs well thanks to the random spatial arrangement of its binary tests, the effect of image locality on the ferns' description ability is ignored, which prevents them from describing the object more accurately and robustly. This paper proposes a novel spatial arrangement of binary tests that divides the bounding box into grids in order to preserve more details of the image for visual tracking. Experimental results show that this method can improve tracking accuracy effectively.
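Since the abstract does not spell out the exact arrangement, the sketch below is one plausible reading: instead of drawing pixel-pair tests from the whole bounding box, each test is confined to one cell of a grid over the box, so every region contributes to the fern code. The grid size, tests per cell, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def make_grid_tests(box_h, box_w, grid=4, tests_per_cell=2):
    """Generate pixel-pair binary tests confined to the cells of a grid over the
    bounding box, rather than drawn at random from the whole box."""
    tests = []
    ch, cw = box_h // grid, box_w // grid
    for gy in range(grid):
        for gx in range(grid):
            for _ in range(tests_per_cell):
                y1, y2 = rng.integers(gy * ch, (gy + 1) * ch, size=2)
                x1, x2 = rng.integers(gx * cw, (gx + 1) * cw, size=2)
                tests.append((y1, x1, y2, x2))
    return tests

def fern_code(patch, tests):
    """Evaluate the binary tests on a patch and pack them into one integer,
    which indexes the fern's posterior histogram."""
    code = 0
    for (y1, x1, y2, x2) in tests:
        code = (code << 1) | int(patch[y1, x1] > patch[y2, x2])
    return code

patch = rng.integers(0, 256, (32, 32))
tests = make_grid_tests(32, 32, grid=4, tests_per_cell=1)   # 16 tests -> 16-bit code
print(f"fern code: {fern_code(patch, tests):016b}")
```

Constraining the tests this way guarantees that local appearance changes anywhere in the box flip at least some bits of the code, which is how the grid arrangement retains the image details that a purely random arrangement can miss.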