KEYWORDS: Hyperspectral imaging, Associative arrays, Surface plasmons, Data modeling, Signal to noise ratio, Chemical elements, Convex optimization, Detection and tracking algorithms, 3D image processing, Instrument modeling
Spectral pixels in a hyperspectral image are known to lie in a low-dimensional subspace. The Linear Mixture Model
states that every spectral vector is closely approximated by a linear combination of a small set of signatures. When no
prior knowledge of the representing signatures is available, they must be extracted from the image data before the
abundances of each vector can be determined. The whole process is often referred to as unsupervised endmember
extraction and unmixing.
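In symbols (a standard statement of the model, with notation chosen here for illustration), each observed spectrum r is modeled as

```latex
r = M a + n = \sum_{i=1}^{k} a_i \, m_i + n,
\qquad a_i \ge 0, \qquad \sum_{i=1}^{k} a_i = 1,
```

where the columns m_i of M are the endmember signatures, a is the abundance vector, and n is noise; the non-negativity and sum-to-one conditions are the hyperspectral constraints referred to below.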
The Linear Mixture Model can be extended to the Sparse Mixture Model R = MS + N, in which not only single pixels but
the whole hyperspectral image has a sparse representation over a dictionary M built from the data itself, and the abundance
vectors (columns of S) are sparse at the same locations. The endmember extraction and unmixing tasks can then be carried
out concurrently by solving for a row-sparse abundance matrix S. In this paper, we pose a convex optimization problem
and use simultaneous sparse recovery techniques to find S. This approach promises a globally optimal solution, rather
than the suboptimal solutions of iterative methods that extract endmembers one at a time. We use the mixed l1/l2 norm
of S to promote row-sparsity in simultaneous sparse recovery, and then impose additional hyperspectral constraints on the
abundance vectors (such as non-negativity and sum-to-one).
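As a concrete illustration (not the authors' implementation; the proximal-gradient solver, the self-dictionary construction, and the names lambda_reg and n_iter are assumptions of this sketch), the row-sparse recovery step could be approximated by minimizing 0.5*||R - MS||_F^2 + lambda * sum of row l2 norms of S, with non-negative S:

```python
# Minimal sketch of simultaneous sparse recovery for R = M S + N with a
# row-sparsity (mixed l1/l2) penalty, solved by proximal gradient descent.
# The dictionary M is built from the data itself, as in the Sparse Mixture Model.
import numpy as np

def row_sparse_unmix(R, lambda_reg=0.1, n_iter=500):
    """R: (bands x pixels) data matrix. Returns a non-negative, row-sparse S."""
    M = R                                   # self-dictionary: every pixel is a candidate endmember
    P = R.shape[1]
    S = np.zeros((P, P))
    step = 1.0 / (np.linalg.norm(M, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = M.T @ (M @ S - R)            # gradient of 0.5 * ||R - M S||_F^2
        V = np.maximum(S - step * grad, 0)  # gradient step, then project onto non-negativity
        row_norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lambda_reg / (row_norms + 1e-12))
        S = V * shrink                      # row-wise group soft-thresholding (promotes row sparsity)
    return S
```

Rows of S with large l2 norm mark the pixels selected as endmembers; in a full implementation the sum-to-one constraint mentioned above would be enforced in a subsequent abundance refinement step.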
Endmember extraction in Hyperspectral Images (HSI) is a critical step for target detection and abundance estimation. In
this paper, we propose a new approach to endmember extraction that takes advantage of the sparsity of the linear
representation of an HSI's spectral vectors. Sparsity is measured by the l0 norm of the abundance vector, and it is well
known that the l1 norm closely mimics the l0 norm in promoting sparsity while keeping the minimization problem convex
and tractable. Adding an l1-norm term to the objective function yields a constrained quadratic program that can be
solved effectively using Linear Complementarity Programming (LCP). Unlike existing methods, which require
expensive computations in each iteration, LCP only requires pivoting steps, which are extremely simple and efficient for
the unmixing problem since the number of signatures in the reconstructing basis is reasonably small. Preliminary
experiments with the proposed methods for both supervised and unsupervised abundance decomposition showed results
competitive with least-squares-based methods such as Fully Constrained Least Squares (FCLS). Furthermore,
combining our unsupervised decomposition with anomaly detection yields a strong target detection algorithm compared
to methods that require prior information about the target and background signatures.
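As a minimal sketch (not the authors' LCP pivoting routine; the function name, the lambda_reg value, and the use of a generic bound-constrained solver are assumptions), the per-pixel l1-regularized problem can be posed as a bound-constrained quadratic program. Because the abundances are non-negative, the l1 penalty reduces to a linear term, and the KKT conditions of this program form exactly a linear complementarity problem:

```python
# Per-pixel sketch of l1-regularized, non-negative abundance estimation.
# Assumes the signature matrix M is known or was extracted beforehand.
import numpy as np
from scipy.optimize import minimize

def sparse_abundance(r, M, lambda_reg=0.01):
    """r: (bands,) pixel spectrum, M: (bands x k) signature matrix. Returns a >= 0."""
    k = M.shape[1]

    def obj(a):
        resid = M @ a - r
        # 0.5*||M a - r||^2 + lambda*||a||_1, where ||a||_1 = sum(a) since a >= 0
        return 0.5 * resid @ resid + lambda_reg * a.sum()

    def grad(a):
        return M.T @ (M @ a - r) + lambda_reg

    # The paper solves the equivalent LCP by pivoting; a generic bound-constrained
    # quasi-Newton solver stands in here for illustration.
    res = minimize(obj, np.full(k, 1.0 / k), jac=grad,
                   method="L-BFGS-B", bounds=[(0.0, None)] * k)
    return res.x
```

A sum-to-one constraint, as in FCLS, could be appended to this program in the same way, at the cost of one additional equality constraint.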