Markov Random Fields (MRF) are powerful methods to introduce contextual knowledge into image processing. In this paper, we aim to show that they are well adapted to many SAR applications, especially when using graphs of primitives. Three main applications are presented within the Markovian framework: SAR image interpretation, road network detection, and 3D reconstruction. For the last application, three situations are considered: interferometry, interferometry using an additional optical image, and radargrammetry with an optical image. This paper gathers some previous and current works on the use of MRF for SAR image analysis.
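As a concrete illustration of the Markovian framework, the following sketch runs Iterated Conditional Modes on a binary pixel lattice with an Ising-style prior. The 0/1 labels, the weight beta, and the 4-neighbour lattice are illustrative assumptions: the paper works with SAR likelihoods and graphs of primitives rather than this toy model.

```python
import numpy as np

def icm_denoise(noisy, beta=1.5, n_iter=5):
    """Iterated Conditional Modes for a binary MRF with an Ising-style prior.

    Per-pixel energy: unary = (label != observation), pairwise = beta times
    the number of 4-neighbours carrying a different label. Labels are 0/1.
    (Illustrative sketch; real SAR models use speckle likelihoods and graphs
    of primitives rather than a pixel lattice.)
    """
    labels = noisy.copy()
    H, W = labels.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best_label, best_energy = labels[i, j], np.inf
                for cand in (0, 1):
                    unary = float(cand != noisy[i, j])
                    pair = 0.0
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            pair += beta * float(cand != labels[ni, nj])
                    if unary + pair < best_energy:
                        best_energy, best_label = unary + pair, cand
                labels[i, j] = best_label
    return labels
```

ICM converges quickly but only to a local minimum of the energy; stochastic optimisers trade speed for better minima.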
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
We describe a new approach for performing pseudo-imaging of point energy sources from spectral-temporal sensor data collected using a rotating-prism spectrometer. Pseudo-imaging, which involves the automatic localization, spectrum estimation, and identification of energetic sources, can be difficult for dim sources and/or noisy images, in data containing multiple sources so closely spaced that their signatures overlap, or where sources move during data collection. The new approach is specifically designed for these difficult cases. It is developed within an iterative maximum-entropy framework which incorporates an efficient optimization over the space of all model parameters and mappings between image pixels and sources, or clutter. The optimized set of parameters is then used for detection, localization, tracking, and identification of the multiple sources in the data. The paper includes results computed from experimental data.
Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an
object from one or more measurement frames that are blurred and noisy realizations of that object. The blind nature
of MFBD algorithms permits the reconstruction process to proceed without having separate measurements or
knowledge of the blurring functions in each of the measurement frames. This is accomplished by estimating the object
common to all the measurement frames jointly with the blurring functions that are different from frame to frame. An
issue of key importance is understanding how accurately the object pixel intensities can be estimated with the use of
MFBD algorithms. Here we present algorithm-independent lower bounds to the variances of estimates of the object
pixel intensities to quantify the accuracy of these estimates when the blurring functions are estimated pixel by pixel.
We employ support constraints on both the object and the blurring functions to aid in making the inverse problem
unique. The lower bounds are presented as a function of the sizes and shapes of these support regions and the number
of measurement frames.
Novel algorithms to suppress impulsive noise in 3D color images are presented. Some of them have demonstrated effectiveness in preserving inherent characteristics of the images, such as edges, details, and chromaticity. A robust algorithm combining order statistics, vector directional, and adaptive methods is developed, applying three-dimensional video processing to suppress the noise. Several algorithms are extended from 2D to 3D for video processing. The results show that the proposed Video Adaptive Vector Directional filter outperforms the video versions of the Median M-type K-Nearest Neighbour, Vector Median, Generalized Vector Directional, K-Nearest Neighbour, α-trimmed Mean, and Median filters. All of them are evaluated in simulations using the PSNR, MAE, and NCD criteria.
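For intuition, a minimal sketch of the order-statistics component follows: a vector median over a 3x3x3 spatio-temporal window, which rejects impulsive colour outliers while keeping chromaticity. The window size and the plain Euclidean cost are assumptions; the Video Adaptive Vector Directional filter additionally orders vectors by angular (directional) distance and adapts its parameters.

```python
import numpy as np

def vector_median_3d(video, t, i, j):
    """Vector median over a 3x3x3 spatio-temporal window of an RGB video.

    video: array (T, H, W, 3). Returns the window vector minimising the sum
    of Euclidean distances to all other window vectors, so a single impulsive
    colour outlier cannot win. (Sketch of the order-statistics component
    only; directional and adaptive weighting are omitted.)
    """
    T, H, W, _ = video.shape
    window = []
    for dt in (-1, 0, 1):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                tt, ii, jj = t + dt, i + di, j + dj
                if 0 <= tt < T and 0 <= ii < H and 0 <= jj < W:
                    window.append(video[tt, ii, jj].astype(float))
    window = np.array(window)
    # aggregated distance of each vector to every other window vector
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :],
                           axis=2).sum(axis=1)
    return window[np.argmin(dists)]
```

Applying the filter at every pixel of every frame yields the denoised sequence; the temporal extent of the window is what distinguishes the 3D (video) versions from their 2D counterparts.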
A novel algorithm to address the reprojection of MODIS level 1B imagery is proposed. The method is based on a simultaneous 2D search of the latitude and longitude fields using local gradients. In the case of MODIS, the gradient search is realized in two steps, an inter-segment and an intra-segment search, which helps to resolve the discontinuity of the latitude/longitude fields caused by overlap between consecutively scanned MODIS multi-detector image segments. It can also be applied to the reprojection of imagery obtained by single-detector scanning systems, such as AVHRR, or push-broom systems, such as MERIS. The structure of the algorithm allows equal efficiency with either the nearest-neighbor or the bilinear interpolation mode.
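The gradient-search idea can be sketched for a single image segment as Newton iterations on a 2x2 Jacobian built from local latitude/longitude gradients, with bilinear interpolation between grid samples. This is a minimal intra-segment sketch under simplifying assumptions: the inter-segment step that handles the MODIS bowtie overlap is omitted, and the synthetic linear lat/lon fields in the usage example are illustrative only.

```python
import numpy as np

def locate(lat, lon, lat0, lon0, i=1.0, j=1.0, n_iter=20):
    """Find fractional image coordinates (i, j) whose latitude/longitude
    match the target (lat0, lon0) by Newton steps on the 2x2 Jacobian of
    local lat/lon gradients (single-segment sketch)."""
    def bilinear(f, y, x):
        y0 = min(max(int(np.floor(y)), 0), f.shape[0] - 2)
        x0 = min(max(int(np.floor(x)), 0), f.shape[1] - 2)
        dy, dx = y - y0, x - x0
        return ((1 - dy) * (1 - dx) * f[y0, x0] + (1 - dy) * dx * f[y0, x0 + 1]
                + dy * (1 - dx) * f[y0 + 1, x0] + dy * dx * f[y0 + 1, x0 + 1])

    for _ in range(n_iter):
        # residual between the current position's lat/lon and the target
        r = np.array([bilinear(lat, i, j) - lat0,
                      bilinear(lon, i, j) - lon0])
        # 2x2 Jacobian from central-difference local gradients (step = 1 px)
        J = np.array([
            [bilinear(lat, i + 0.5, j) - bilinear(lat, i - 0.5, j),
             bilinear(lat, i, j + 0.5) - bilinear(lat, i, j - 0.5)],
            [bilinear(lon, i + 0.5, j) - bilinear(lon, i - 0.5, j),
             bilinear(lon, i, j + 0.5) - bilinear(lon, i, j - 0.5)],
        ])
        step = np.linalg.solve(J, r)
        i, j = i - step[0], j - step[1]
    return i, j
```

Once the fractional source coordinates are found, either the nearest pixel or a bilinear blend of the four surrounding pixels can be resampled, which is why both interpolation modes come at the same cost.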
This paper presents an ongoing study of the estimation of cloud-top height using only geometrical methods. It is based on the hypothesis that an infra-red camera is on board a satellite and that pairs of images cover nearly the same scene. Stereo-vision techniques are therefore explored in order to test the methodology for height retrieval, and in particular the results of several stereo matching techniques are evaluated. This study covers area-based matching algorithms in their basic versions, without considering any further optimisation steps to improve the results. Dense depth maps are the final outputs, whose reliability is verified by computing error statistics with respect to a set of Digital Terrain Elevation Data, used as ground truth for a set of nearly cloud-free images. A set of real image pairs from the Along-Track Scanning Radiometer 2 (ATSR2) 11 μm data set has been considered. The evaluated errors range between 0.75 and 0.80 km, which is not a particularly bad result compared to the 1 km resolution of an ATSR2 pixel.
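A basic area-based matcher of the kind evaluated here can be sketched as a sum-of-squared-differences search along the epipolar row. The block size and disparity range below are illustrative assumptions, and, matching the paper's "basic versions", no optimisation or sub-pixel refinement is applied.

```python
import numpy as np

def disparity_map(left, right, block=3, max_disp=5):
    """Dense disparity by area-based matching: for each left-image pixel,
    slide a block x block window along the same (epipolar) row of the right
    image and keep the horizontal shift minimising the sum of squared
    differences (SSD). Border pixels are left at zero disparity."""
    H, W = left.shape
    r = block // 2
    disp = np.zeros((H, W), dtype=int)
    for i in range(r, H - r):
        for j in range(r, W - r):
            patch = left[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            best, best_d = np.inf, 0
            for d in range(0, max_disp + 1):
                if j - d - r < 0:
                    break  # candidate window would leave the image
                cand = right[i - r:i + r + 1,
                             j - d - r:j - d + r + 1].astype(float)
                ssd = ((patch - cand) ** 2).sum()
                if ssd < best:
                    best, best_d = ssd, d
            disp[i, j] = best_d
    return disp
```

Disparity is then converted to height through the known viewing geometry; the reliability checks in the paper operate on exactly this kind of dense map.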
Surveillance camera automation and camera network development are growing areas of interest. This paper proposes an approach to enhance camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register constitute the auxiliary information exploited. The approach takes into account the spherical shape of the Earth and realistic terrain slopes. Considering also forests, it determines visible and shadowed regions. Its efficiency arises from reduced dimensionality in the visibility computation. Image processing is aided by predicting certain features of the visible terrain in advance. The features include the distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background fits well, and its potential as a knowledge aid for various purposes becomes apparent.
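The core of a visibility computation over a spherical Earth can be sketched per terrain profile: each sample's elevation is first lowered by the curvature drop d²/(2R), and a sample is visible when its elevation angle from the camera exceeds the running maximum over all nearer samples. This is a minimal sketch; the paper's dimensionality reduction, forest canopy handling, and flight obstacle register are not modelled.

```python
import numpy as np

EARTH_R = 6_371_000.0  # mean Earth radius, m

def visible_along_profile(heights, spacing, cam_height):
    """Line-of-sight visibility along a terrain profile seen from a camera
    mounted cam_height metres above the first sample.

    heights: terrain elevations (m) at regular `spacing` (m). Each target
    elevation is reduced by the spherical-Earth drop d^2 / (2R); a sample is
    visible when its (small-angle) elevation angle exceeds the running
    maximum of all nearer samples.
    """
    heights = np.asarray(heights, dtype=float)
    cam_z = heights[0] + cam_height
    best = -np.inf
    vis = np.zeros(len(heights), dtype=bool)
    vis[0] = True
    for k in range(1, len(heights)):
        d = k * spacing
        z = heights[k] - d * d / (2.0 * EARTH_R)  # curvature correction
        angle = (z - cam_z) / d                    # small-angle tangent
        if angle > best:
            vis[k] = True
            best = angle
    return vis
```

Sweeping such profiles over all azimuths yields the visible/shadowed partition of the terrain; attaching the per-sample distance and terrain class gives exactly the advance features described above.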
In this paper we propose an analysis of the effects of the multiresolution fusion process on the accuracy provided by supervised classification algorithms. In greater detail, the rationale of this analysis is to understand under what conditions the merging process can increase or decrease the classification accuracy of different labeling algorithms. On the one hand, the multiresolution fusion process is expected to increase the classification accuracy of simple classifiers, characterized by linear or "moderately" non-linear discriminant functions. On the other hand, the spatial and spectral artifacts unavoidably included in the fused images can decrease the accuracy of more powerful classifiers, characterized by strongly non-linear discriminant functions. In this last case, in fact, the classifier is intrinsically able to extract and emphasize all the information present in the original images without any need for a merging procedure. These effects may differ across fusion methodologies and classification techniques. Several experiments are carried out by applying the different fusion and classification techniques to an image acquired by the Quickbird sensor over the city of Pavia (Italy). From these experiments it is possible to derive interesting conclusions on the effectiveness and appropriateness of the different investigated multiresolution fusion techniques with respect to classifiers of different complexity and capacity.
In the mid-1980s, image fusion received significant attention from researchers in remote sensing and image processing, as SPOT 1 (launched in 1986) provided high-resolution (10 m) Pan images and low-resolution (20 m) MS images. Since that time, much research has been done to develop effective image fusion techniques. Image fusion is a technique used to integrate the geometric detail of a high-resolution panchromatic (Pan) image and the color information of a low-resolution multispectral (MS) image to produce a high-resolution MS image.
Many methods, such as Principal Component Analysis (PCA), the Multiplicative Transform, the Brovey Transform, and the IHS Transform, have been developed in the last few years, producing good-quality fused images. These images are usually characterized by high information content, but with significantly altered spectral information. There are also some limitations in these fusion techniques. The most significant problem is color distortion. A major reason for the significant color distortion provoked by many fusion techniques is the wavelength extension of some satellite panchromatic images. Unlike the panchromatic band of the SPOT and IRS sensors, the wavelength range of the new satellites is extended from the visible into the near infrared. This difference significantly changes the gray values of the new panchromatic images. Therefore, traditional image fusion techniques, useful for fusing SPOT Pan with other MS images, cannot achieve quality fusion results for the new satellite images.
More recently, new techniques have been proposed, such as the Wavelet Transform, the Pansharp Transform, and the Modified IHS Transform. These techniques seem to reduce the color distortion problem and to keep the statistical parameters nearly unchanged.
Ideally, the methods used to fuse image data sets should preserve the spectral characteristics of the original multispectral input image. While many techniques exist that emphasize the preservation of spectral characteristics, they do not take into account the resolution ratio of the input images. Usually the spatial resolution of the panchromatic image is two (Landsat 7, SPOT 1-4) or four times (Ikonos, Quickbird) better than that of the multispectral images. This paper is an attempt to fuse the high-resolution panchromatic and low-resolution multispectral bands of the EO-1 ALI sensor. ALI collects nine multispectral bands at 30 m resolution and a panchromatic band with three times better resolution (10 m). ALI has a panchromatic band narrower than the corresponding band of Landsat 7. It also has two narrower bands in the spectral range of Landsat 7 band 4, and an extra narrower band near the spectral range of Landsat 7 band 1.
In this study we compare the efficiency of seven fusion techniques, specifically the Gram-Schmidt, Modified IHS, PCA, Pansharp, Wavelet, LMM (Local Mean Matching), and LMVM (Local Mean and Variance Matching) fusion techniques, for the fusion of ALI data. Two ALI images collected over the same area have been used. In order to quantitatively measure the quality of the fused images we performed the following checks: first, we examined the visual qualitative result; then, we examined the correlation between the original multispectral and the fused images, as well as all the statistical parameters of the histograms of the various frequency bands.
All the fusion techniques improve the resolution and the visual result. In contrast to the fusion of other data (ETM, SPOT 5, Ikonos, and Quickbird), all the algorithms provoke only small changes in the statistical parameters.
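As a reference point for the classical techniques discussed above, the Brovey transform can be sketched in a few lines: each co-registered (upsampled) MS band is scaled by the ratio of the Pan image to the summed MS intensity, which injects Pan spatial detail while keeping band ratios, and hence chromaticity, fixed. The equal band weighting and the epsilon guard against division by zero are assumptions of this sketch.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    """Brovey-transform pan-sharpening.

    ms:  (H, W, B) multispectral image, already upsampled and co-registered
         to the panchromatic grid.
    pan: (H, W) panchromatic image.
    Each band is multiplied by pan / sum(ms bands), so the fused bands sum
    to the Pan image while their ratios (chromaticity) are preserved.
    """
    intensity = ms.sum(axis=2) + eps
    return ms * (pan / intensity)[:, :, None]
```

The same ratio structure explains the color distortion noted above: when the Pan band's wavelength range extends into the near infrared, the injected ratio no longer matches the visible-band intensity.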
In computer vision, stereoscopic image analysis is a well-known technique capable of extracting the third (vertical) dimension. Starting from this knowledge, the Remote Sensing (RS) community has spent increasing efforts on the exploitation of Ikonos one-meter resolution stereo imagery for high-accuracy 3D surface modelling and elevation data extraction. In previous works our team investigated the potential of neural adaptive learning to solve the correspondence problem in the presence of occlusions. In this paper we present an experimental evaluation of an improved version of the neural-based stereo matching method when applied to Ikonos one-meter resolution stereo images affected by occlusion problems. Disparity maps generated with the proposed approach are compared with those obtained by an alternative stereo matching algorithm implemented in a (non-)commercial image processing software toolbox. To compare competing disparity maps, quality metrics recommended by the evaluation methodology proposed by Scharstein and Szeliski (2002, IJCV, 47, 7-42) are adopted.
This paper presents the participation of the SIC in the EuroSDR (European Spatial Data Research, formerly known as OEEPE) contest on road extraction. After presenting the framework of the EuroSDR contest, our approach for road extraction is described. It consists of a line detector based on edge detection, using a straightness constraint obtained from geometric moments to filter out non-straight segments. Those segments are then filtered according to the NDVI (vegetation index), since roads are made of material different from vegetation. Resulting figures for the completeness, correctness, and localization precision of the road segments are discussed for the EuroSDR data and compared to the results of the other participants in the contest.
Phase Congruency is introduced as a frequency-domain method to detect features in high-resolution remotely sensed imagery. Three types of objects were selected from IKONOS pan imagery of Nanjing: paddy, road, and workshop objects. The Phase Congruency feature images were obtained by applying the Phase Congruency model to these images with 2-octave log-Gabor wavelet filters over 5 scales and 6 orientations. The outputs of the space-domain detectors Sobel and Canny are also presented for comparison with Phase Congruency. The results show that the magnitude of the Phase Congruency response is largely independent of local image illumination and contrast, and that Phase Congruency marks a line with a single response, not two. A set of results follows, illustrating the effects of varying filter parameters and noise on the calculation of Phase Congruency. It is found that Phase Congruency can obtain more accurate localization than space-domain detectors because it does not need low-pass filtering to suppress noise first. The results also show that noise is successfully ignored in the smooth regions of the image, unlike the Canny detector, whose results fluctuate all over the image.
In this paper we address the problem of registering images acquired under unknown conditions, including acquisition at different times, from different points of view, and possibly with different types of sensors, where conventional approaches based on feature correspondence or area correlation are likely to fail or provide unreliable estimates. The result of image registration can be used as an initial step for many remote sensing applications such as change detection, terrain reconstruction, and image-based sensor navigation. The key idea of the proposed method is to estimate a global parametric transformation between images (e.g. a perspective or affine transformation) from a set of local, region-based estimates of rotation-scale-translation (RST) transformations. These RST-transformations form a cluster in rotation-scale space. Each RST-transformation is registered by matching in log-polar space the regions centered at the locations of the corresponding interest points. Estimation of the correspondence between interest points is performed simultaneously with registration of the local RST-transformations. Then a subset of corresponding points or, equivalently, a subset of local RST-transformations is selected by a robust estimation method, and a global transformation, which is not biased by outliers, is computed from it. The method is capable of registering images without any a priori knowledge about the transformation between them. The method was tested on many images taken under different conditions by different sensors and on thousands of calibrated image pairs. In all cases the method shows very accurate registration results. We demonstrate the performance of our approach using several datasets and compare it with another state-of-the-art method based on the SIFT descriptor.
We explicitly formulate a family of kernel-based methods for (supervised and partially supervised) multitemporal classification and change detection. The novel composite kernels developed account simultaneously for the static and temporal cross-information between pixels of subsequent images. The methodology also takes into account spectral, spatial, and temporal information, and contains the familiar difference and ratioing methods in the kernel space as particular cases. It also permits straightforward fusion of multisource information. Several scenarios are considered in which partial or complete labeled information at the prediction time is available. The developed methods are then tested under different classification frameworks: (1) inductive support vector machines (SVM), and (2) the one-class support vector data description (SVDD) classifier, in which only samples of a class of interest are used for training. The proposed methods are tested on a challenging real problem for urban monitoring. The composite kernel approach is additionally used as a fusion methodology to combine synthetic aperture radar (SAR) and multispectral data, and to integrate spatial and textural information at different scales and orientations through Gabor filters. Good results are observed in almost all scenarios; the SVDD classifier demonstrates robust multitemporal classification and adaptation capabilities when little labeled information is available, and SVMs show improved performance in the change detection approach.
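The composite-kernel construction can be sketched as a weighted sum of per-date RBF kernels plus a symmetrised cross-kernel between acquisition dates, so static and temporal cross-information enter a single Gram matrix. The RBF choice, the weight mu, and the shared gamma below are illustrative assumptions, not the paper's tuned configuration.

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    """Gaussian RBF kernel matrix between row-wise sample sets."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

def composite_kernel(Xt1, Xt2, mu=0.5):
    """Composite multitemporal kernel.

    Xt1, Xt2: (n, d) feature matrices for the same n pixels at dates t1, t2.
    Returns mu*K(t1,t1) + (1-mu)*K(t2,t2) plus the symmetrised cross-kernel
    K(t1,t2), a valid (symmetric, PSD) kernel usable by any SVM/SVDD solver.
    """
    K1 = rbf(Xt1, Xt1)
    K2 = rbf(Xt2, Xt2)
    K12 = rbf(Xt1, Xt2)
    return mu * K1 + (1 - mu) * K2 + 0.5 * (K12 + K12.T)
```

Feeding such a precomputed Gram matrix to a kernel classifier is also how multisource fusion works in this framework: each information source (SAR, multispectral, Gabor texture) contributes its own kernel block to the sum.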
This paper presents a novel split-based approach to automatic and unsupervised detection of changes caused by tsunamis in large-size multitemporal SAR images. Unlike standard methods, the proposed approach can detect changes in large images in a consistent and reliable way even when the prior probability of the class of changed pixels is very small (and therefore the extension of the changed area is small). The method is based on: i) pre-processing and comparison of the images; ii) sea identification and masking; iii) split-based analysis. The proposed system has been developed for properly identifying damage induced by tsunamis along coastal areas. Nevertheless, the presented approach is general and can be used (with small modifications) for damage assessment in different kinds of problems with different types of multitemporal remote sensing images. Experimental results obtained on multitemporal RADARSAT-1 SAR images of Sumatra Island (Indonesia) confirm the effectiveness of the proposed split-based approach.
Since high-spatial-resolution satellite imagery became available, the use of remote sensing data has become very important for nuclear monitoring and verification purposes. For the detection of small structural objects in high-resolution imagery, recent object-based procedures appear more effective than the traditional pixel-based approaches.
The detection of undeclared changes within facilities is a key issue of nuclear verification. Monitoring nuclear sites based on a satellite imagery database requires the automation of image processing steps. The change detection procedures in particular should automatically discriminate significant changes from the background. Besides detection, the identification and interpretation of changes are also crucial.
This paper proposes a new targeted change detection methodology for nuclear verification. Pixel-based change detection and object-based image analysis are combined to detect, identify, and interpret significant changes within nuclear facilities using multitemporal satellite data. The methodology and its application to case studies on Iranian nuclear facilities are presented.
Though widely used for spectral discrimination of materials, the spectral angle mapper (SAM) metric exhibits some limitations, due to its lack of monotonicity as the number of components, i.e., spectral bands, increases. This paper proposes an outcome of the band add-on (BAO) decomposition of SAM, known as BAO-SAM, for assessing compressed hyperspectral data. Since the material discrimination capability of BAO-SAM is superior to that of SAM, the underlying idea is that if the BAO-SAM between compressed and uncompressed data is kept low, the discrimination capability of the compressed data will be preserved. Experimental results on AVIRIS data show that BAO-SAM characterizes spectral distortion better than SAM does. Furthermore, the possibility of developing a BAO-SAM-bounded compression method is investigated. Such a method is likely to be useful for a variety of applications concerning hyperspectral image analysis.
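For reference, the baseline SAM metric that the BAO decomposition starts from can be computed directly as the angle between two pixel spectra, which makes its invariance to spectral scaling (illumination) explicit. This sketch covers plain SAM only; the BAO-SAM decomposition itself is defined in the paper.

```python
import numpy as np

def sam(x, y):
    """Spectral angle mapper between two spectra (1D band vectors).

    Returns the angle (radians) between the pixel vectors; scaling either
    spectrum leaves the angle unchanged, so SAM is illumination-invariant.
    """
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Evaluating this angle between each original pixel and its reconstruction after compression gives the spectral-distortion map that BAO-SAM refines band by band.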
We propose a new low-complexity algorithm for lossless compression of hyperspectral imagery using lookup tables along with a predictor selection mechanism. We first compute a locally averaged interband scaling (LAIS) factor for an estimate of the current pixel from the co-located one in the previous band. We then search via lookup tables in the previous band for the two nearest causal pixels that are identical to the pixel co-located with the current pixel. The pixels in the current band co-located with those causal pixels are used as two potential predictors. The predictor closest to the LAIS estimate is chosen for the current pixel. The method pushes lossless compression of the AVIRIS hyperspectral imagery to a new high, with an average compression ratio of 3.47.
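The prediction step described above can be sketched as follows. Two points are simplifying assumptions of the sketch: a backward scan over the causal pixels stands in for the constant-time lookup tables, and the three-neighbour average is one plausible reading of "locally averaged".

```python
import numpy as np

def lut_predict(cur, prev, i, j):
    """LAIS-LUT-style prediction of cur[i, j] from the previous band
    (requires i >= 1 and j >= 1 so the causal neighbours exist).

    1. LAIS factor: mean of cur/prev ratios over three causal neighbours,
       scaled by prev[i, j] to give the LAIS estimate.
    2. Scan the causal part of `prev` backwards for the two most recent
       pixels equal to prev[i, j]; the co-located pixels of `cur` are the
       candidate predictors (a scan stands in for the O(1) lookup tables).
    3. Return the candidate closest to the LAIS estimate.
    """
    neigh = [(i, j - 1), (i - 1, j), (i - 1, j - 1)]
    ratios = [cur[a, b] / prev[a, b] for a, b in neigh if prev[a, b] != 0]
    lais = (np.mean(ratios) if ratios else 1.0) * prev[i, j]
    candidates = []
    for a in range(i, -1, -1):  # walk causal pixels, most recent first
        cols = range(j - 1, -1, -1) if a == i else range(prev.shape[1] - 1, -1, -1)
        for b in cols:
            if prev[a, b] == prev[i, j]:
                candidates.append(cur[a, b])
                if len(candidates) == 2:
                    break
        if len(candidates) == 2:
            break
    if not candidates:
        return lais
    return min(candidates, key=lambda p: abs(p - lais))
```

The residual cur[i, j] minus this prediction is what the entropy coder compresses; when interband structure repeats, the lookup candidates match exactly and the residual collapses to zero.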
It is often necessary to compress remote sensing (RS) data such as optical or radar images. This is needed for transmitting them via communication channels from satellites and/or for storing them in databases for later analysis of, for instance, scene temporal changes. Such images are generally corrupted by noise, and this factor should be taken into account when selecting a data compression method and its characteristics, in particular the compression ratio (CR). In contrast to the case of data transmission via a communication channel, where the channel capacity can be the crucial factor in selecting the CR, in the case of archiving original remote sensing images the CR can be selected using different criteria. The basic requirement could be to provide such quality of the compressed images as is appropriate for further use (interpretation) of the images after decompression. In this paper we propose a blind approach to quasi-optimal compression of noisy optical and side-look aperture radar images. It presumes that the noise variance is either known a priori or pre-estimated using the corresponding automatic tools. It is then shown that it is possible to automatically set a CR that produces efficient noise reduction in the original images while introducing minimal distortions to the remote sensing data at the compression stage. For radar images, it is desirable to apply a homomorphic transform before compression and the corresponding inverse transform after decompression. Real-life examples confirming the efficiency of the proposed approach are presented.
In this paper a novel semifragile watermarking scheme for images with multiple bands is presented. We propose to treat the remote sensing image as a whole, using a vector quantization approach, instead of processing each band separately. This scheme uses the signature of the multispectral or hyperspectral image to embed the mark in it and detects a modification of the original image, e.g. the replacement of a part of the image with another part of the same image, or any other similar manipulation. A modification of the image means modifying the signature of each point, all bands simultaneously, because in multispectral images it does not make sense to modify a single band of all those that compose the image. The original multispectral or hyperspectral image is segmented into three-dimensional blocks and, for each block, a tree-structured vector quantizer is built using all bands at the same time. These trees are manipulated with an iterative algorithm until the resulting image, compressed by the manipulated tree, satisfies all the conditions imposed by that tree, which represents the embedded mark. Each tree is partially modified according to a secret key in order to avoid copy-and-replace attacks; this key determines the internal structure of the tree and also the resulting distortion, in order to make the resulting image robust against near-lossless compression. The results show that the method works correctly with multispectral and hyperspectral images and detects copy-and-replace attacks using segments of the same image as well as basic modifications of the marked image.
JPEG-LS1 is the new ISO/ITU standard for lossless and near-lossless compression of 2D continuous-tone images. For contemporary and future ultraspectral sounder data, which features good correlations in disjoint spectral channels, we develop an MST-embedded JPEG-LS (Minimum Spanning Tree embedded JPEG-LS) to achieve higher compression gains through MST channel reordering. Unlike previous non-embedded MST work with other cost functions used only for data preprocessing, the MST-embedded JPEG-LS uniquely uses the sum of absolute median prediction errors as the cost function for the MST to determine each optimal pair of predicting and predicted channels. The MST can be embedded within JPEG-LS because of the same median prediction used in JPEG-LS. The advantage of this embedding is that the median prediction residuals are available to JPEG-LS after MST channel reordering without recalculation. Numerical experiments show that the MST-embedded JPEG-LS yields an average compression ratio of 2.81, superior to the 2.46 obtained with JPEG-LS for the 10 standard ultraspectral granules.
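The channel-reordering idea can be illustrated with a minimal minimum-spanning-tree sketch. Note that the edge cost below is a simplified sum of absolute inter-channel differences, a stand-in for the paper's sum of absolute median prediction errors, and Prim's insertion order stands in for the codec's actual traversal of the tree.

```python
import numpy as np

def mst_channel_order(cube):
    """Sketch: order spectral channels along a minimum spanning tree built
    with Prim's algorithm. cube has shape (channels, rows, cols). Returns
    an insertion order and the predicting parent of each channel."""
    n = cube.shape[0]
    flat = cube.reshape(n, -1).astype(float)
    # Simplified pairwise cost: sum of absolute inter-channel differences.
    cost = np.abs(flat[:, None, :] - flat[None, :, :]).sum(axis=2)
    in_tree = [0]
    parent = {0: None}
    best = cost[0].copy()            # cheapest known edge into each channel
    best_from = np.zeros(n, dtype=int)
    order = [0]
    for _ in range(n - 1):
        best[in_tree] = np.inf       # never re-select tree members
        j = int(np.argmin(best))
        parent[j] = int(best_from[j])
        in_tree.append(j)
        order.append(j)
        closer = cost[j] < best      # relax frontier edges through j
        best_from[closer] = j
        best = np.minimum(best, cost[j])
    return order, parent
```

Each channel is then predicted from its parent in the tree, so strongly correlated channel pairs end up adjacent in the coding order.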
SVM classification has great potential in remote sensing. The nature of SVM classification also provides opportunities for accurate classification from relatively small training sets, especially if interest is focused on a single class. Five approaches to reducing training set size from that suggested by conventional heuristics are discussed: intelligent selection of the most informative training samples, selective class exclusion, acceptance of imprecise descriptions for spectrally distinct classes, the adoption of a one-class classifier and a focus on boundary regions. All five approaches were able to reduce the training set size required considerably below that suggested by conventional widely used heuristics without significant impact on the accuracy with which the class of interest was classified. For example, reductions in training set size of ~90% from that suggested by a conventional heuristic are reported with the accuracy of the class of interest remaining nearly constant at ~95% and ~97% from the user's and producer's perspectives respectively.
We propose a contextual unsupervised classification method for geostatistical data based on a combination of the Ward clustering method and Markov random fields (MRF). The image is clustered into classes using not only the spectra of pixels but also spatial information. For the classification of remote sensing data of low spatial resolution, the treatment of mixed pixels is important. From the knowledge that most mixed pixels are located on boundaries between land covers, we first detect edge pixels and remove them from the image. We then introduce a new measure of the spatial adjacency of the classes, which is used in an MRF-based update of the class labels. Clustering of the edge pixels is performed as a final step. It is shown that the proposed method gives higher accuracy than the conventional clustering method does.
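The MRF-based relabeling step can be sketched with a simple ICM-style update. The energy form below (spectral distance to a class mean minus beta times the number of agreeing 4-neighbours) and the beta value are illustrative assumptions, not the paper's exact adjacency measure.

```python
import numpy as np

def icm_update(img, labels, means, beta=1.0, iters=3):
    """Sketch of an ICM-style MRF relabeling: each pixel takes the class
    minimizing spectral distance to the class mean minus beta times the
    number of agreeing 4-neighbours. img: (h, w, bands)."""
    h, w = labels.shape
    k = means.shape[0]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                energies = []
                for c in range(k):
                    data = np.sum((img[i, j] - means[c]) ** 2)   # spectral term
                    agree = sum(labels[i2, j2] == c              # spatial term
                                for i2, j2 in ((i-1, j), (i+1, j),
                                               (i, j-1), (i, j+1))
                                if 0 <= i2 < h and 0 <= j2 < w)
                    energies.append(data - beta * agree)
                labels[i, j] = int(np.argmin(energies))
    return labels
```

Isolated mislabeled pixels are outvoted by their neighbours, while the spectral term keeps genuine class boundaries in place.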
Consider a confusion matrix obtained by a classifier of land-cover categories. Usually, misclassification rates are not uniformly distributed over the off-diagonal elements of the matrix: some categories are easily distinguished from the others, and some are not. The loss function used by AdaBoost ignores this difference. If we derive a classifier that is efficient at separating categories close to the remaining categories, the overall accuracy may be improved. In this paper, an exponential loss function with different costs for misclassification is proposed for multiclass problems. Costs due to misclassification should be pre-assigned. Then, we obtain an empirical cost risk function to be minimized, and the minimizing procedure is established (Cost AdaBoost). Similar treatments for logit loss functions are discussed, and a Spatial Cost AdaBoost is also proposed. Our purpose is originally to minimize the expected cost; if the costs are defined appropriately, they are also useful for reducing error rates. A simple numerical example shows that the proposed method is useful for reducing error rates.
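A minimal binary sketch of the cost-weighted exponential loss idea: misclassification costs scale the initial sample weights, so expensive samples dominate the boosting updates. The paper's Cost AdaBoost is multiclass; this decision-stump version only illustrates the mechanism.

```python
import numpy as np

def cost_adaboost(X, y, costs, rounds=10):
    """Sketch of cost-sensitive AdaBoost with decision stumps (binary
    labels in {-1, +1}). costs[i] scales the exponential loss of
    misclassifying sample i."""
    w = costs / costs.sum()                 # cost-weighted initial weights
    stumps, alphas = [], []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):         # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        a = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, f] > thr, 1, -1)
        w = w * np.exp(-a * y * pred)       # exponential loss update
        w /= w.sum()
        stumps.append((f, thr, sign))
        alphas.append(a)
    def predict(Xq):
        s = sum(a * sg * np.where(Xq[:, f] > t, 1, -1)
                for a, (f, t, sg) in zip(alphas, stumps))
        return np.where(s >= 0, 1, -1)
    return predict
```

Raising the cost of one class shifts the weighted error, so the selected stumps lean toward protecting that class even at the expense of the other.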
In traditional unsupervised classification methods, the number of clusters usually needs to be assigned subjectively by analysts; but in most situations the prior knowledge of the research subject is difficult to acquire, so the most suitable number of clusters is very difficult to define. Therefore, in this research an effective heuristic unsupervised classification method, the Genetic Algorithm (GA), is introduced and tested, because through its mathematical model and optimization procedure it can determine the best cluster numbers and centers automatically. Furthermore, two well-known models, the Davies-Bouldin index and the K-Means algorithm, which are adopted by most research on pattern classification, are integrated with the GA as fitness functions. In short, in this research a heuristic method, the Genetic Algorithm, is adopted and integrated with two different indices as fitness functions to automatically interpret the clusters of satellite images for unsupervised classification. The classification results were compared to conventional ISODATA results, and to ground truth information derived from a topographic map, for the estimation of classification accuracy. All image-processing programs were developed in MATLAB, and the GA unsupervised classifier was tested on several example images.
The Maximum Noise Fraction (MNF) transformation is frequently used to reduce multi-/hyperspectral data dimensionality. It explores the data to find the most informative features, i.e. the ones explaining the maximum signal-to-noise ratio. However, the MNF requires knowledge of the noise covariance matrix. In actual applications such information is not available a priori; thus, it must be estimated from the image or from dark reference measurements. Many MNF-based techniques have been proposed in the literature to overcome this major disadvantage of the MNF transformation. However, such techniques have some limits or require a priori knowledge that is difficult to obtain. In this paper, a new MNF-based feature extraction algorithm is presented: the technique exploits a linear multiple-regression method and a noise variance homogeneity test to estimate the noise covariance matrix. The procedure can be applied directly to the image in an unsupervised fashion. To the best of our knowledge, the MNF is usually performed to remove the noise content from multi-/hyperspectral images, while its impact on image classification is not well explored in the literature. Thus, the proposed algorithm is applied to an AVIRIS data set and its impact on classification performance is evaluated. Results are compared to the ones obtained by the widely used PCA and by the Min/Max Autocorrelation Fraction (MAF), which is an MNF-based technique.
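As a rough sketch of an MNF computation: the noise covariance below is estimated from horizontal pixel differences (a common shift-difference heuristic, not the paper's regression-plus-homogeneity-test estimator), the data are noise-whitened with a Cholesky factor, and components are ordered by decreasing signal-to-noise ratio.

```python
import numpy as np

def mnf(cube, n_components=3):
    """Sketch of a Maximum Noise Fraction transform (cube: rows x cols x
    bands). Noise covariance comes from a shift-difference heuristic."""
    b = cube.shape[-1]
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)
    S = np.cov(X, rowvar=False)                   # signal-plus-noise covariance
    d = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b) / np.sqrt(2.0)
    N = np.cov(d, rowvar=False)                   # noise covariance estimate
    L = np.linalg.cholesky(N + 1e-10 * np.eye(b))
    Li = np.linalg.inv(L)
    vals, vecs = np.linalg.eigh(Li @ S @ Li.T)    # eigenproblem in whitened space
    order = np.argsort(vals)[::-1]                # largest SNR first
    W = (Li.T @ vecs)[:, order[:n_components]]
    return X @ W, W
```

The whitening step is what distinguishes MNF from PCA: components are ranked by signal-to-noise ratio rather than by raw variance.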
The aim of this paper is to assess and compare the performance of two kernel-based classification methods based on two different approaches: on the one hand, the Support Vector Machine (SVM), which in recent years has shown excellent results for hard classification of hyperspectral data; on the other hand, a detection method called Kernel Orthogonal Subspace Projection (KOSP), proposed in a recent paper.1 To this aim, the widely used "Indian Pines" AVIRIS dataset is adopted, and a common test protocol has been considered: both methods have been tested adopting the one-vs-rest strategy, i.e. by performing the detection of each spectral signature (representing one of the N classes) and by considering the spectral signatures of the remaining N - 1 classes as background. The same dimensionality of the training set is also considered in both approaches.
A method for extracting statistics from hyperspectral data and generating synthetic scenes suitable for scene generation models is presented. Regions composed of a general surface type with a small intrinsic variation, such as a forest or crop field, are selected. The spectra are decomposed using a basis set derived from spectra present in the scene and the abundances of the basis members in each pixel spectrum found. Statistics such as the abundance means, covariances and channel variances are extracted. The scenes are synthesized using a coloring transform with the abundance covariance matrix. The pixel-to-pixel spatial correlations are modeled by an autoregressive moving average texture generation technique. Synthetic reflectance cubes are constructed using the generated abundance maps, the basis set and the channel variances. Enhancements include removing any pattern from the scene and reducing the skewness. This technique is designed to work on atmospherically-compensated data in any spectral region, including the visible-shortwave infrared HYDICE and AVIRIS data presented here. Methods to evaluate the performance of this approach for generating scene textures include comparing the statistics of the synthetic surfaces and the original data, using a signal-to-clutter ratio metric, and inserting sub-pixel spectral signatures into scenes for detection using spectral matched filters.
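The coloring transform used for synthesizing abundance statistics can be sketched in a few lines: white Gaussian samples are multiplied by a Cholesky factor of the target covariance, so the synthetic samples reproduce its second-order statistics (the covariance and mean below are toy values).

```python
import numpy as np

def colored_samples(mean, cov, n, rng):
    """Sketch of a coloring transform: white Gaussian samples gain the
    target covariance via multiplication by its Cholesky factor."""
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n, len(mean)))   # white samples
    return mean + z @ L.T                     # colored samples

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
samples = colored_samples(np.array([0.3, 0.7]), cov, 20000, rng)
```

With enough samples the empirical covariance converges to the target, which is exactly the property the synthetic abundance maps rely on.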
Classification of hyperspectral data is one of the most challenging problems in the analysis of remote sensing images. The complexity of this process depends on both the properties of the data (non-stationary spectral signatures of classes, intrinsic high dimensionality) and the practical constraints in ground-truth data collection (which result in a small ratio between the number of training samples and spectral channels). Among the methods proposed in the literature for classification of hyperspectral images, semisupervised procedures (which integrate in the learning phase both labeled and unlabeled samples) and systems based on Support Vector Machines (SVMs) seem to be particularly promising. In this paper we introduce a novel Progressive Semisupervised SVM technique (PS3VM) designed for the analysis of hyperspectral remote sensing data, which exploits a semisupervised process according to an iterative procedure. The proposed technique improves the one presented in [1,2], exhibiting three main advantages: i) an adaptive selection of the number of iterations of the semisupervised learning procedure; ii) an effective model-selection strategy; iii) a high stability of the learning procedure. To assess the effectiveness of the proposed approach, an extensive experimental analysis was carried out on a hyperspectral image acquired by the Hyperion sensor over the Okavango Delta (Botswana).
In addition to typical random noise, remote sensing hyperspectral images are generally affected by non-periodic, partially deterministic disturbance patterns due to the image formation process and characterized by a high degree of spatial and spectral coherence. This paper presents a new technique that faces the problem of removing the spatially coherent noise known as vertical striping (VS), usually found in images acquired by push-broom sensors, in particular the Compact High Resolution Imaging Spectrometer (CHRIS). The correction is based on the hypothesis that the vertical disturbance presents higher spatial frequencies than the surface radiance. The proposed method introduces a way to exclude the contribution of the spatial high frequencies of the surface from the destriping process, based on the information contained in the spectral domain. Performance of the proposed algorithm is tested on sites of different nature, several acquisition modes (different spatial and spectral resolutions) and covering the full range of possible sensor temperatures. In addition, synthetic realistic scenes have been created, adding modeled noise for validation purposes. Results show an excellent rejection of the noise pattern with respect to the original CHRIS images. The analysis shows that high-frequency VS is successfully removed, although some low-frequency components remain. In addition, the dependency of the noise patterns on the sensor temperature has been found to agree with the theoretical one, which confirms the robustness of the presented approach. The approach has proven to be robust, stable in VS removal, and a tool for noise modeling. The general nature of the procedure allows it to be applied for destriping images from other spectral sensors.
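A crude one-dimensional sketch of destriping under the stated hypothesis that striping has higher spatial frequency than the surface: the high-frequency residual of the column-mean profile estimates the stripe pattern, which is then subtracted. The paper's spectral-domain exclusion of surface high frequencies is not reproduced here.

```python
import numpy as np

def destripe(img, win=5):
    """Sketch: estimate vertical striping as the high-frequency part of
    the column-mean profile and subtract it. Assumes stripes vary faster
    across columns than the underlying surface radiance does."""
    col = img.mean(axis=0)                        # column mean profile
    pad = win // 2
    padded = np.pad(col, pad, mode="edge")
    smooth = np.convolve(padded, np.ones(win) / win, mode="valid")
    stripe = col - smooth                         # high-frequency striping
    return img - stripe[None, :]
```

Averaging over rows suppresses scene detail while keeping the column-wise gain pattern, so the residual after smoothing is dominated by the stripes.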
Orthogonal subspace projection (OSP) has been used in hyperspectral image processing for automatic target detection and image classification. Existing OSP-based approaches for target detection require a priori knowledge of all undesired signatures present in the input scene. In this paper, we propose a new technique for target detection which does not require a priori knowledge of the non-target signatures present in the input scene. The length of any pixel vector containing the target reduces significantly when it is projected in a direction orthogonal to the target signature. Thus the ratio of the length of the original pixel vector to that of the projected pixel vector yields a high value for pixels containing the target. Therefore, an OSP-based parameter along with noise-adjusted principal component analysis (NAPCA) is introduced in this paper for target detection in hyperspectral images. For noisy images, NAPCA is used as a preprocessing step to reduce the effects of noise as well as to reduce the spectral dimension, thereby yielding better target detection capability while enhancing the computational efficiency. For noise-free input scenes, or when a very small amount of noise is present in the input scene, principal component analysis (PCA) may be used instead of NAPCA. The OSP-based technique requires that the number of spectrally distinct signatures present in the input scene be less than the number of spectral bands. The proposed algorithm yields very good results even when this criterion is not satisfied.
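The projection-ratio detector described above is easy to sketch: build the projector onto the orthogonal complement of the target signature and compare pixel lengths before and after projection. The NAPCA preprocessing step is omitted here.

```python
import numpy as np

def osp_ratio(pixels, target):
    """Sketch of the projection-ratio detector: project each pixel
    orthogonally to the target signature; pixels containing the target
    shrink sharply, so their length ratio is large."""
    d = target / np.linalg.norm(target)
    P = np.eye(len(d)) - np.outer(d, d)       # orthogonal-complement projector
    proj = pixels @ P.T
    return np.linalg.norm(pixels, axis=1) / (np.linalg.norm(proj, axis=1) + 1e-12)
```

Background pixels with little energy along the target direction keep nearly all their length under the projection, giving ratios near 1, while target-bearing pixels score much higher.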
In this paper a classification scheme based on recurrent neural networks is presented. Neural networks may be viewed as a mathematical model composed of many non-linear computational elements, called neurons, operating in parallel and massively connected by links characterized by different weights. It is well known that conventional feedforward neural networks can be used to approximate any spatially finite function given a set of hidden nodes. Recurrent neural networks are fundamentally different from feedforward architectures in the sense that they not only operate on an input space but also on an internal state space - a trace of what has already been processed by the network. This capability is referred to as the internal memory of recurrent networks. The general objectives of this paper are to describe, demonstrate and test the potential of simple recurrent artificial neural networks for dark formation detection using SAR satellite images over the sea surface. The type and the architecture of the network are subjects of research. Input to the networks is the original SAR image, and the network is called to classify the image into dark formations and clean sea. Elman's and Jordan's recurrent networks have been examined, and Jordan's networks have been recognized as more suitable for dark formation detection. A specific Jordan architecture with five inputs, three hidden neurons and one output is proposed for dark formation detection, as it classifies correctly more than 95.5% of the data set.
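The forward pass of a Jordan network with the proposed 5-3-1 topology can be sketched as follows; the previous output is fed back into the hidden layer as a context unit. The weights here are random placeholders, since the trained values are not published in the abstract.

```python
import numpy as np

def jordan_forward(x_seq, Wxh, Wch, bh, Why, by):
    """Sketch of a Jordan recurrent network forward pass (5 inputs,
    3 hidden neurons, 1 output): the previous *output* is fed back into
    the hidden layer as a context unit."""
    context = np.zeros(Why.shape[0])               # fed-back previous output
    outputs = []
    for x in x_seq:
        h = np.tanh(Wxh @ x + Wch @ context + bh)  # hidden layer
        y = 1.0 / (1.0 + np.exp(-(Why @ h + by)))  # sigmoid output in (0, 1)
        context = y
        outputs.append(float(y[0]))
    return outputs

rng = np.random.default_rng(2)
Wxh = 0.5 * rng.standard_normal((3, 5))            # placeholder weights
Wch = 0.5 * rng.standard_normal((3, 1))
Why = 0.5 * rng.standard_normal((1, 3))
outs = jordan_forward([rng.standard_normal(5) for _ in range(4)],
                      Wxh, Wch, np.zeros(3), Why, np.zeros(1))
```

Feeding back the output rather than the hidden state is what distinguishes the Jordan architecture from the Elman networks also examined in the paper.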
In this paper, we present results from a study on classifiers for automatic oil slick classification in ENVISAT ASAR images. First, based on our basic statistical classifier, we improve the classification performance by introducing regularization of the covariance matrices. The new improved classifier reduces the false alarm rate from 19.6% to 13.1%. Second, we compare the statistical classifier with SVM, finding that the statistical classifier outperforms SVM for this particular application. Experiments are done on a set of 103 SAR images.
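The covariance regularization idea can be sketched with a standard shrinkage-toward-identity form; the mixing parameter alpha and this particular regularizer are assumptions, as the abstract does not specify the exact scheme used.

```python
import numpy as np

def shrink_covariance(X, alpha=0.1):
    """Sketch of covariance regularization for a Gaussian classifier:
    shrink the sample covariance toward a scaled identity so it stays
    well-conditioned even with few training samples (alpha is an
    assumed tuning knob)."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    return (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)
```

With fewer samples than features the raw sample covariance is singular and cannot be inverted for a Gaussian discriminant; the shrunk estimate is always positive definite.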
In this paper computational intelligence, referring here to the synergy of neural networks and genetic algorithms, is deployed in order to determine a near-optimal neural network for the classification of dark formations into oil spills and look-alikes. Optimality is sought in the framework of a multi-objective problem, i.e. the minimization of the input features used and, at the same time, the maximization of overall testing classification accuracy. The proposed method consists of two concurrent actions. The first is the identification of the subset of features that results in the highest classification accuracy on the testing data set, i.e. feature selection. The second, parallel, process is the search for the neural network topology, in terms of the number of nodes in the hidden layer, that is able to yield optimal results with respect to the selected subset of features. The results show that the proposed method, i.e. concurrently evolving features and neural network topology, yields superior classification accuracy compared to sequential floating forward selection as well as to using all features together. The accuracy matrix is deployed to show the generalization capacity of the discovered neural network topology on the evolved subset of features.
We describe and analyse a generalization of a parametric segmentation technique adapted to Gamma-distributed SAR images to a simple nonparametric noise model. The partition is obtained by minimizing the stochastic complexity of a version of the SAR image quantized on Q levels, and leads to a criterion with no parameters to be tuned by the user. We analyse the reliability of the proposed approach on synthetic images. The quality of the obtained partition is studied for different possible strategies; in particular, we discuss the reliability of the proposed optimization procedure. Finally, we precisely study the performance of the proposed approach in comparison with the statistical parametric technique adapted to Gamma noise. These studies are carried out by analyzing the number of misclassified pixels, the standard Hausdorff distance and the number of estimated regions.
This paper deals with the detection of non-metallic anti-personnel (AP) land mines using stepped-frequency ground penetrating radar (SF-GPR). The class of so-called Independent Component Analysis (ICA) methods represents a powerful tool for such detection. Various ICA algorithms have been introduced in the literature; therefore there is a need to compare these methods. In this contribution, four of the most common ICA methods are studied and compared to each other regarding their ability to separate the target and clutter signals. These are the extended Infomax, the FastICA, the Joint Approximate Diagonalization of Eigenmatrices (JADE), and the Second Order Blind Identification (SOBI). The four algorithms have been applied to the same data set, which was collected using an SF-GPR. The area under the Receiver Operating Characteristic (ROC) curve has been used to compare the clutter removal efficiency of the different algorithms. All four methods have given approximately consistent results; however, both the JADE and SOBI methods have shown better performance than Infomax and FastICA.
High resolution sonars are required to detect and classify mines on the sea-bed. Synthetic aperture sonar increases the sonar cross-range resolution by several orders of magnitude while maintaining or increasing the area search rate. The resolution is, however, strongly dependent on the precision with which the motion errors of the platform can be estimated. The term micro-navigation is used to describe this very special requirement for sub-wavelength relative positioning of the platform. Therefore, algorithms were designed to estimate those motion errors and to correct for them during the (ω, k)-reconstruction phase. To validate the quality of the motion estimation algorithms, a single-transmitter/multiple-receiver simulator was built, allowing the generation of multiple point targets with or without surge, sway and/or yaw motion errors. The surge motion estimation is shown on real data, which were taken during a sea trial in November 2003 with the low-frequency (12 kHz) side-scan sonar (LFSS) moving on a rail positioned on the sea-bed near Marciana Marina on Elba Island, Italy.