This paper gives an overview of image registration algorithms and presents a new algorithm that can be used to register images of the same or different modalities. In particular, a correlation-based scheme is used, but instead of grey values it correlates numbers formed from different combinations of the extracted local Walsh coefficients of the images.
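As a minimal sketch of the idea (not the authors' exact coefficient combinations), local Walsh coefficients can be computed by projecting image patches onto the Walsh-Hadamard basis, and the resulting coefficient maps compared by normalized cross-correlation:

```python
import numpy as np
from scipy.linalg import hadamard

def local_walsh_coefficients(img, block=4):
    """Project each non-overlapping block x block patch onto the 2-D
    Walsh (Hadamard) basis and return the coefficient stack.
    `block` must be a power of two."""
    H = hadamard(block)                  # rows are 1-D Walsh functions
    h, w = img.shape
    h, w = h - h % block, w - w % block
    coeffs = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            patch = img[i:i+block, j:j+block]
            row.append(H @ patch @ H.T / block**2)   # 2-D Walsh transform
        coeffs.append(row)
    return np.array(coeffs)              # (h//block, w//block, block, block)

def ncc(a, b):
    """Normalized cross-correlation between two coefficient sets,
    the similarity measure used in place of grey-value correlation."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Because the zeroth Walsh coefficient is just the patch mean, the representation subsumes grey-value correlation while adding local structure information.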
Polar orbital satellites with low spatial resolution sensors, such as the AVHRR, provide global coverage with a short repetition period. The data are directly transmitted to ground stations and can be distributed immediately after data acquisition. Near-real-time applications can be implemented if adequate processing tools are available. One task usually needed is the geometric correction of image data. Automatic methods, based on satellite orbital parameters, can in some cases provide satisfactory results. However, the identification of Ground Control Points (GCPs) is generally required in order to achieve registration errors below the pixel size. A fully automatic method for the geometric registration of AVHRR data is proposed here. The method comprises four stages: (i) an initial image transformation based on orbital parameters, (ii) segmentation of this image into 3 main classes and 9 additional classes of mixed water and land at various levels, (iii) automatic GCP collection by image matching, and (iv) final image production combining both orbital and GCP information. The method was tested on ten images of the Iberian Peninsula and proved effective in accurately geo-referencing sub-sections of a medium-sized AVHRR scene in a few minutes.
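The GCP-based refinement in stage (iv) can be sketched as a least-squares fit of a geometric transform to the collected control points; a plain 2-D affine model is assumed here for illustration (the actual AVHRR model combines orbital and GCP information):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares estimate of the 2-D affine transform taking GCP
    image coordinates (src) to reference coordinates (dst).
    src, dst: (n, 2) arrays with n >= 3 ground control points."""
    src = np.asarray(src, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])        # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dst, dtype=float), rcond=None)
    return coeffs                                        # (3, 2) matrix

def apply_affine(coeffs, pts):
    """Map pixel coordinates through the fitted transform."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs
```

With more than three GCPs the fit is overdetermined, so matching errors average out, which is how sub-pixel registration accuracy becomes attainable.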
The Multi-angle Imaging SpectroRadiometer (MISR) is part of the payload of NASA's Terra spacecraft, launched in December 1999. The MISR instrument continuously acquires systematic, global, multi-angle imagery in reflected sunlight in order to support and improve studies of the Earth's ecology and climate. This paper focuses on the photogrammetric aspect of the data production and discusses the quality of the global mapping as evaluated during the first two years of the mission. Traditionally, remote sensing image data have been only radiometrically and spectrally corrected, as part of standard processing, prior to being distributed to investigators. In the case of the spaceborne MISR instrument, with its unique configuration of nine fixed pushbroom cameras, continuous and autonomous coregistration and geolocation of image data are required prior to the application of scientific retrieval algorithms. To address this problem, the MISR ground data processing system includes photogrammetric processing. Within the entire MISR production system, three segments can be singled out as photogrammetric in nature: 1) in-flight geometric calibration, 2) georectification, and 3) cloud height retrieval. The data obtained through in-flight geometric calibration significantly simplify the georectification part of the standard processing. Georectification provides fundamental input to scientific retrieval, including cloud-top height retrieval.
The measurement of the Modulation Transfer Function (MTF) to quantify the quality of an imaging system proves to be very important in the context of Earth observation satellites. In particular, this measurement is essential to carry out the focusing of the telescope, or to implement a deconvolution filter whose goal is to enhance the image contrast or to reduce the noise. Its knowledge also allows us to compare the characteristics of different known and unknown satellites. In this paper, we suggest a univariate MTF measurement method using non-specific views. First of all, the landscape has to be characterized in order to discriminate ground-structure information from MTF information. Once this separation is carried out, landscape structure information can be extracted, allowing a classification between very uniform scenes and more structured ones. Then the MTF, which is described by a bidimensional analytical physical model, can be assessed using an artificial neural network. The principle is to train the artificial neural network on the MTF of simulated or perfectly known images, and then to use it to assess the MTF of totally unknown images. The method can be shown to be robust even when noise is taken into account. As a result, maximum MTF assessment errors are less than 10%. This enables us to suggest further developments, including a general scheme for assessing image quality criteria.
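The training set of "perfectly known" images can be produced by filtering sharp images with a parametric MTF model in the Fourier domain. The isotropic Gaussian form below is an assumed stand-in for the paper's bidimensional physical model:

```python
import numpy as np

def gaussian_mtf(shape, sigma=0.15):
    """Illustrative bidimensional parametric MTF (an isotropic Gaussian
    in spatial frequency, an assumed form): equal to 1 at zero frequency
    and decaying with radial frequency."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma ** 2))

def apply_mtf(img, mtf):
    """Simulate an imaging system with a known MTF by multiplying the
    image spectrum by the MTF; such degraded images form the perfectly
    known training examples for the neural-network estimator."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))
```

Since the MTF is unity at zero frequency, the mean radiance of the scene is preserved while fine structure is attenuated.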
Data fusion based on multiresolution analysis requires the definition of a proper model establishing how the missing highpass information to be injected into the resampled multispectral (MS) bands is extracted from the panchromatic (P) band. Such a model can be global over the whole image or depend on the spatial context. The goal of the model is to make the fused bands as similar as possible to what the MS sensor would image if it had the same resolution as the broadband one. In this perspective, both radiometric and spectral distortions are jointly considered in the proposed model, which has been set up using simulated SPOT 5 data (XS + P) of an urban area including vegetation. A space-varying equalization of sensors is achieved by multiplying the highpass pixel detail extracted from the P image by the ratio between the pixel values in the expanded XS band and in the lowpass version of the P band. Radiometric distortion (RMSE between true and fused XS bands) is abated by almost 20 with respect to the case in which as many scalar cross-gain factors as bands are employed. Spectral distortion is measured as the absolute angle between a pixel vector in the reference and fused bands; it can be perceived as a change in color hues between the true and fused color-composite images. Thanks to the proposed injection model, the spectral angle of the fused product is identical to that measured between the true and resampled original data. Besides spectral distortions, spatial distortions, e.g., ringing artifacts and aliasing impairments, which are typical of critically-subsampled multiresolution fusion schemes, are also completely absent in this pyramid approach.
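The space-varying injection rule can be sketched per pixel as follows; a simple box filter stands in for the paper's multiresolution pyramid, and the window size is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_injection_fusion(xs_expanded, pan, size=5):
    """Pansharpening sketch of the space-varying injection model: the
    highpass detail of the panchromatic band P is modulated, pixel by
    pixel, by the ratio between the expanded XS band and the lowpass
    version of P. `size` (lowpass window) is an illustrative choice;
    the paper uses a multiresolution pyramid, not a box filter."""
    pan_low = uniform_filter(pan.astype(float), size=size)
    detail = pan - pan_low                           # highpass detail of P
    gain = xs_expanded / np.maximum(pan_low, 1e-6)   # space-varying cross-gain
    return xs_expanded + detail * gain
```

Because the injected detail is proportional to the local XS/P ratio, the fused pixel vector keeps the same direction as the resampled one, which is why the spectral angle is preserved.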
Hidden Markov Chain (HMC) models are widely used in various signal or image restoration problems. In such models, one considers that the hidden process X=(X1, ..., Xn) we look for is a Markov chain, and the distribution p(y/x) of the observed process Y=(Y1, ..., Yn), conditional on X, is given by p(y/x)=p(y1/x1)...p(yn/xn). The 'a posteriori' distribution p(x/y) of X given Y=y is then a Markov chain distribution, which makes possible the use of different Bayesian restoration methods. Furthermore, all parameters can be estimated by the general 'Expectation-Maximization' algorithm, which renders Bayesian restoration unsupervised. This paper is devoted to an extension of the HMC model to a 'Triplet Markov Chain' (TMC) model, in which a third auxiliary process U is introduced and the triplet (X, U, Y) is considered as a Markov chain. A more general model is thus obtained, in which X can still be restored from Y=y. Moreover, the model parameters can be estimated with Expectation-Maximization (EM) or Iterative Conditional Estimation (ICE), making the TMC-based restoration methods unsupervised. We present a short simulation study of image segmentation, where the bi-dimensional set of pixels is transformed into a mono-dimensional set via a Hilbert-Peano scan, which shows that using TMC can improve the results obtained with HMC.
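The Bayesian restoration enabled by the Markovianity of p(x/y) typically rests on the posterior marginals p(x_t/y), computable with the classical forward-backward recursion. A minimal sketch for a discrete HMC (not the TMC extension itself):

```python
import numpy as np

def posterior_marginals(pi, A, B, obs):
    """Scaled forward-backward recursion for a Hidden Markov Chain:
    returns p(x_t = i | y_1..y_n) for every t, the quantity used by
    Maximum Posterior Marginal restoration.
    pi: (k,) initial law; A: (k, k) transition matrix A[i, j] = p(j|i);
    B: (k, m) emission matrix p(y|x); obs: observed symbol indices."""
    n, k = len(obs), len(pi)
    alpha = np.zeros((n, k)); beta = np.zeros((n, k))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()                    # scaled forward pass
    for t in range(1, n):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):                # scaled backward pass
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```

In the image segmentation study, `obs` would be the pixel values read off along the Hilbert-Peano scan, so the 2-D field is restored with purely 1-D recursions.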
A generalized Gaussian model that can be used for speckle reduction and restoration of synthetic aperture radar (SAR) images is presented here. We have worked with 3-look simulated and real ERS-1 amplitude images. A MAP approximation of the a posteriori log-likelihood distribution is given, and the results of a local deterministic estimator are presented.
The efficiency of Markov models in the context of SAR image segmentation mainly relies on their spatial regularity constraint. However, a pixel may have a rather different visual aspect when it is located near a boundary than when it lies inside a large set of pixels of the same class. Under the classical hypothesis of Hidden Markov Chain (HMC) models, this fact cannot be taken into consideration. This is the very motivation for the recent Pairwise Markov Chain (PMC) model, which relies on the hypothesis that the pairwise process (X,Y) is Markovian and stationary, but not necessarily X. The main interest of the PMC model in SAR image segmentation is that it does not assume the speckle to be spatially uncorrelated. Hence, it is possible to take into account the difference between two successive pixels that belong to the same region or that overlap a boundary. Both PMC and HMC parameters are learnt from a variant of the Iterative Conditional Estimation method. This makes it possible to apply the Bayesian Maximum Posterior Marginal criterion for the restoration of X in an unsupervised manner. We compare the PMC model with the HMC one for the unsupervised segmentation of SAR images, for both Gaussian distributions and the Pearson system of distributions.
Hidden Markov fields (HMF) are widely used in image processing. In such models, the hidden random field of interest X=(Xs) is a Markov field, and the distribution p(y/x) of the observed random field Y=(Ys) conditional on X is given by the product of the p(ys/xs), with s in the set of pixels. The posterior distribution p(x/y) is then a Markov distribution, which allows different kinds of Bayesian processing. However, when dealing with the segmentation of images containing numerous classes with different textures, the simple form of the distribution p(y/x) above is insufficient and has to be replaced by a Markov field distribution. This poses problems, because taking p(y/x) Markovian implies that the posterior distribution p(x/y), whose Markovianity is needed to use Bayesian techniques, may no longer be a Markov distribution, and so different model approximations must be made to remedy this. This drawback disappears when considering directly the Markovianity of (X, Y): in these recent 'Pairwise Markov Fields' (PMF) models, both p(y/x) and p(x/y) are Markovian, the first one allowing us to model textures, and the second one allowing us to use Bayesian restoration without model approximations. In this paper we generalize the PMF to Triplet Markov Fields (TMF) by adding a third random field U=(Us) and considering the Markovianity of (X, U, Y). We show that in TMF X is still estimable from Y by Bayesian methods. Parameter estimation with Iterative Conditional Estimation (ICE) is specified, and we give some numerical results showing how the use of TMF can improve the classical HMF-based segmentation.
This paper describes the experience gained from the evaluation of selected automatic edge detection techniques applied to LANDSAT TM, SPOT HRV, IRS 1C and IKONOS images. Emphasis was given to the detection of man-made objects and linear features such as coastlines, roads and parcel boundaries, in combination with selected preprocessing and postprocessing operations. As preprocessing, Gaussian, adaptive and morphological operators were implemented and tested for image enhancement and smoothing. Edge extraction followed. First, the Canny edge detector was applied. Then a morphological nonlinear Laplacian operator was applied, and its zero-crossings yielded edge locations. Finally, an edge detector obtained by overlaying two thresholded images of the Prewitt gradient, preserving edges appearing in both images, was applied. Postprocessing followed, to eliminate noisy edges and restore edge connectivity through morphological operators. An analysis of the relative performance of the processing scheme indicated each detector's sensitivity to noise (features at certain undesired scales, shadows along road boundaries, irrelevant edges within parcel boundaries) and the set of specific parameters needed for proper enhancement and smoothing before edge extraction.
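The Prewitt-based overlay detector can be sketched as follows; the two smoothing scales and the threshold fraction are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import ndimage

def prewitt_overlay_edges(img, sigmas=(1.0, 2.0), frac=0.2):
    """Overlay detector sketch: the Prewitt gradient magnitude is
    computed after Gaussian smoothing at two scales, each magnitude
    image is thresholded (here at an assumed fraction of its maximum),
    and only edges present in both binary images are preserved.
    Morphological closing then restores connectivity, as in the
    postprocessing step."""
    img = np.asarray(img, dtype=float)
    masks = []
    for s in sigmas:
        sm = ndimage.gaussian_filter(img, sigma=s)
        mag = np.hypot(ndimage.prewitt(sm, axis=0),
                       ndimage.prewitt(sm, axis=1))
        masks.append(mag > frac * mag.max())
    edges = masks[0] & masks[1]
    return ndimage.binary_closing(edges)
```

Requiring an edge to survive at both scales suppresses responses at undesired fine scales, which is exactly the kind of noise the evaluation above highlights.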
Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is quite a rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. This consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, eventually, adding new vertices in between if a given accuracy is not yet satisfied. Vertices of the initial piecewise-linear skeletons are extracted using a multi-scale image relevance function. The relevance function is a local image operator that has local maxima at the centers of the objects of interest.
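The orthogonal regression step can be sketched with a total-least-squares line fit via SVD, which minimizes the mean squared orthogonal distance of the points to the fitted line (a generic sketch of the fitting criterion, not the full vertex-adjustment loop):

```python
import numpy as np

def orthogonal_fit(points):
    """Orthogonal regression line fit of the kind used to adjust
    skeleton vertices: the line minimizing the mean squared orthogonal
    distance of the points is obtained from the SVD of the centred
    coordinates."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                     # principal direction of the segment
    normal = vt[-1]                       # normal to the fitted line
    distances = np.abs((pts - centroid) @ normal)
    return centroid, direction, distances.mean()
```

In the refinement loop, the returned mean orthogonal distance would be compared against the accuracy threshold to decide whether a new vertex must be inserted between two existing ones.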
For many applications in data mining and knowledge discovery in databases, clustering methods are used for data reduction. As the amount of data increases, as in image information mining where one has to process gigabytes of data, many of the existing clustering algorithms cannot be applied because of their high computational complexity. To overcome this disadvantage, we developed an efficient clustering algorithm called dyadic k-means. The algorithm is a modified and enhanced version of traditional k-means. Whereas k-means has a computational complexity of O(nk) with n samples and k clusters, dyadic k-means has one of O(n log k). Our algorithm is particularly efficient for grouping very large data sets with a high number of clusters. In this article we present statistically based methods for the objective evaluation of clusters obtained by dyadic k-means. The main focus is on how well the clusters describe the data point distribution in a multi-dimensional feature space and how much information can be obtained from the clusters. Both the filling of the feature space with samples and the characterization of this configuration by the clusters produced with dyadic k-means will be considered. We use the well-established scatter matrices to measure the compactness and separability of clustered groups in the feature space. The probability of error, another indicator for the characterization of samples in the feature space by clusters, is also calculated for each point. This probability expresses the relationship of each point to its cluster and can therefore be considered a measure of cluster reliability. We test the evaluation methods on both a synthetic and a real-world data set.
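The scatter-matrix evaluation can be sketched directly: the within-class matrix Sw measures compactness, the between-class matrix Sb separability, and their sum equals the total scatter of the data:

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (Sw) and between-class (Sb) scatter matrices used
    to measure compactness and separability of a clustering.
    X: (n, d) samples; labels: (n,) cluster assignments."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # spread inside cluster c
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T          # spread of cluster means
    return Sw, Sb
```

A good clustering makes Sw small relative to Sb; scalar criteria such as trace(Sb)/trace(Sw) turn this comparison into a single figure of merit.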
The purpose of this paper is to provide a multifaceted approach to interactive analysis and modeling of time series, signals, and dynamical data sets. The models use statistical regression analysis and are built incrementally from a set of simple functional units. The analysis is supported by data visualization and on-line adaptive tutorials accessed on the World Wide Web. This work extends our results obtained for the Landsat-5 and Landsat-7 calibration data. Another objective is to educate the user about available mathematical models and to allow the user to build those models interactively through applets prepared by the authors on a Web server. These models may be constructed for the user's own data set. The approach is illustrated using the calibration data sets for the Landsat sensors; we also discuss an agent communication framework for regression models and calibration data. The same integrated approach can, however, be used for other data domains. This type of approach is consistent with other recent activities regarding the semantic web.
In the literature, the problem of biophysical parameter estimation has been faced through the use of predefined regression models or, more recently, of artificial neural networks. However, different estimation methods may provide different accuracies depending on the region of the input feature space to which the analyzed pattern belongs. In this paper, we propose a novel estimation approach that consists in defining a Multiple Estimator System (MES). The key idea of the MES is to capture the peculiarities of an ensemble of different estimators in order to improve the accuracy and robustness of the single estimators. The proposed MES can be implemented in two conceptually different ways: 1) by combining the estimates obtained by the different estimators; 2) by selecting the output of the best single estimator, identified according to an adaptive measure of accuracy applied to the input feature space. The MES was applied to the problem of estimating water quality parameters, with a particular focus on the measurement of chlorophyll concentration. In the experimental phase, we used a recent and promising regression approach based on Support Vector Machines (SVMs) to create a set of estimators characterized by different 'architectures' to be integrated in the ensemble. Experimental results pointed out the capability of the MES to increase both the accuracy and robustness of the system.
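The two MES variants can be sketched generically; the inverse-error weighting below is an assumed combination rule, and `errors` stands for whatever accuracy measure (e.g. a local validation RMSE) the system attaches to each estimator:

```python
import numpy as np

def combine_estimates(preds, errors):
    """Combination variant of the Multiple Estimator System: the
    estimates of several regressors are merged with weights inversely
    proportional to each estimator's error measure (an assumed rule)."""
    preds = np.asarray(preds, dtype=float)   # (n_estimators, n_samples)
    w = 1.0 / (np.asarray(errors, dtype=float) + 1e-12)
    w /= w.sum()
    return w @ preds                          # weighted average per sample

def select_estimate(preds, errors):
    """Selection variant: return the output of the single estimator
    with the best (lowest) error measure."""
    return np.asarray(preds, dtype=float)[np.argmin(errors)]
```

Making `errors` depend on the region of feature space containing the input pattern turns either rule into the adaptive scheme described above.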
In this paper, a technique for the automatic extraction of houses based on data from the airborne sensor HRSC-A is presented. Due to uncertainties within the given data sources (multispectral, 3D, and panchromatic information), a fuzzy approach is applied that is divided into two main processing parts and a third post-processing part. Within the first part, each information source is processed separately to achieve higher-level information products for the following fusion process. A multispectral classification with a fuzzy measure extracts the possibility of each pixel belonging to an urban class. Since the necessary type of 3D information is the actual height of objects above ground level, this information, including the possibilities for different height levels, is computed from the digital surface model. A watershed segmentation is used to identify homogeneous regions within the panchromatic band. The resulting segments represent the basic units for further analysis steps. Within the second part, height and spectral information of each segment are combined and improved by fuzzy rules. Uncertainties within the height information are reduced by spectral context knowledge, while many spectral ambiguities are solved using reliable height information. Finally, possible house segments are extracted based upon the improved class possibilities.
The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and the evolution of salt-marshes and tidal flats. Channel dimensions range from tens of meters wide and meters deep near the low water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving manual digitizing of aerial photographs. This paper describes a semi-automatic knowledge-based network extraction method that is being implemented to work with airborne scanning laser altimetry. The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channel ends along their direction to join with nearby channels, using the domain knowledge that flow paths should proceed downhill and that any network fragment should be joined to a nearby fragment so as to connect eventually to the open sea.
Thermal Infrared (TIR) techniques have some interesting capabilities that may assist in the detection of shallowly buried objects, in particular to help in the identification of landmine-contaminated areas. The working principle of the sensor is the measurement of the thermal contrast on the soil surface, caused by the disturbance of the thermal flow due to the presence of the buried object with respect to the surroundings. This paper presents some preliminary results for the detection of buried antipersonnel landmines (APLs) with a thermal infrared imaging system. We describe an algorithm for the detection of landmine candidates by exploiting features in the image associated with the observed thermal contrast. Different threshold levels are applied to select groups of pixels that correspond to hot formations in the image, which are the ones that could indicate a target position. A logical AND combination is then applied to the produced binary images and can deliver acceptable performance for landmine detection. However, the method cannot distinguish landmine candidates from background variations sharing similar spatial patterns. Since the performance of the method depends strongly on environmental conditions, a time-series measurement is potentially a more promising approach to the whole problem of thermal IR measurement of buried objects. The time series of the IR data set presented in this paper was collected from the test lanes of the JRC in Ispra, Italy, in the framework of the Multi-sensor Mine-signature (MsMs) measurement project.
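A time-series variant of the threshold-and-AND idea can be sketched as follows; thresholding each frame at a fraction of its own maximum is an assumed normalization, not the paper's exact rule:

```python
import numpy as np

def persistent_hot_spots(frames, level=0.7):
    """Threshold-and-AND sketch over a thermal time series: each frame
    is thresholded at `level` times its own maximum (an assumed
    normalization) and the binary maps are AND-combined, so only
    pixels that stay hot across the whole series remain candidates."""
    frames = np.asarray(frames, dtype=float)   # (n_frames, rows, cols)
    masks = frames > level * frames.max(axis=(1, 2), keepdims=True)
    return np.logical_and.reduce(masks, axis=0)
```

The AND over time rejects transient clutter that is hot in only a few frames, which is precisely the advantage a time-series measurement offers over a single snapshot.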
One distinctive feature of remote sensing problems is that a significant amount of data is available, from which the desired information must be extracted. Transform methods offer effective procedures to derive the most significant information for further processing or human interpretation, and to extract important features for pattern classification. In this paper a survey of the use of major transforms in remote sensing is presented. These transforms have significant effects on pattern recognition, as features derived from orthogonal or related transforms tend to be very effective for classification, and on data reduction and compression. After the introduction, we examine the empirical orthogonal function, the discrete Karhunen-Loeve transform and related transforms, the wavelet transform, and component analysis.
A new end-member analysis method based on convex cones has been developed. The method finds extreme points in a convex set. Unlike convex methods that rely on a simplex, the number of end-members is not restricted by the number of spectral channels. The algorithm simultaneously finds fractional abundance maps. The fractional abundances are the fractions of the total spectrally integrated radiance of a pixel that are contributed by the end-members. A physical model of the hyper-spectral or multi-spectral scene is obtained by combining subsets of the end-members into bundles of spectra for each scene material. The bundle spectra represent the spectral variability of the material in the scene induced by illumination, shadowing, weathering and other environmental effects. The method offers advantages in multi-spectral data sets where the limited number of channels impairs material un-mixing by standard techniques. The method can also be applied to compress hyper-spectral data. The fractional abundance matrices are sparse and offer an additional compression capability over standard matrix factorization techniques. A description of the method and applications to real and synthetic hyper-spectral and multi-spectral data sets will be presented.
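Once end-members are known, per-pixel fractional abundances can be recovered with nonnegative least squares; this is a generic linear-unmixing stand-in, not the convex-cone algorithm itself, which finds abundances and end-members simultaneously:

```python
import numpy as np
from scipy.optimize import nnls

def fractional_abundances(pixels, endmembers):
    """Linear unmixing sketch: for each pixel spectrum, solve a
    nonnegative least-squares problem against the end-member matrix E
    (columns are end-member spectra) for the fractional abundances."""
    E = np.asarray(endmembers, dtype=float)     # (bands, n_endmembers)
    pixels = np.asarray(pixels, dtype=float)    # (n_pixels, bands)
    return np.array([nnls(E, p)[0] for p in pixels])
```

Because most pixels contain only a few materials, the resulting abundance matrix is sparse, which is the property the paper exploits for compression.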
Image classification based on radiometric and spectral information is connected with histogram labelling; therefore the required computing time grows exponentially with increasing pixel depth (m) and the number of spectral bands (M). In fact, histogram labelling requires the estimation of a cost function summed over every state of the input image, for a total number of states N = 2^(mM). Various techniques have been exploited to overcome this difficulty, but no general and satisfactory solution has been pointed out yet. We developed an iterative fitting algorithm in which the image histogram is analysed in the neighbourhood of its peaks. Each peak is fitted independently of the others, using only local histogram data that feed a fast iterative procedure. Once the current peak has been processed, the input histogram is cleared of its contributions, and the residual histogram maximum is processed. The performance of the algorithm has been investigated by processing several TM images and photogrammetric images. The tests carried out have shown a good ability of the algorithm to accurately recognise different image classes, performing the entire classification in a very short time.
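One iteration of the peak-by-peak scheme can be sketched as a local fit followed by subtraction of the fitted contribution; the Gaussian peak shape and window size are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    """Assumed peak model for the local histogram fit."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_histogram_peak(hist, peak, half_width=10):
    """Fit a Gaussian to the histogram in a window around `peak`, then
    subtract its contribution so the next-highest residual peak can be
    processed, as in the iterative scheme above."""
    x = np.arange(len(hist), dtype=float)
    lo = max(0, peak - half_width)
    hi = min(len(hist), peak + half_width + 1)
    p0 = (hist[peak], float(peak), half_width / 3.0)   # initial guess
    params, _ = curve_fit(gaussian, x[lo:hi], hist[lo:hi], p0=p0)
    residual = np.maximum(hist - gaussian(x, *params), 0.0)
    return params, residual
```

Because each fit only touches a small window around the current maximum, the cost per class is independent of the 2^(mM) state count that makes exhaustive histogram labelling intractable.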
A novel data fusion approach to partially supervised classification problems is presented, which allows a specific land-cover class of interest to be mapped using only training samples belonging to that class. This represents a significant operational advantage in many application domains (e.g., forestry, urban monitoring) where end-users require information products for the monitoring of only one or a few land-cover classes of interest. The proposed technique overcomes one of the main methodological drawbacks of this type of problem: the lack of prior knowledge of the statistics of the unknown classes present in the scene under consideration. Experiments carried out on a multisource data set demonstrate the validity of the proposed technique.
Time-series of NASA/NOAA Pathfinder AVHRR Land (PAL) data have been analysed to extract parameters describing the seasonality of vegetation in Africa. Two methods have been developed to fit smooth curves to the time-series: the first is based on an adaptive Savitzky-Golay filtering technique, the second on non-linear least-squares fits of asymmetric Gaussian model functions. Both processing methods involve a preliminary determination of the number and timing of growing seasons using a least-squares fit of sinusoidal functions and a second-order polynomial. The sinusoidal fit is used to determine the type of seasonal pattern (uni-modal or bi-modal) and to obtain starting values for the non-linear Gaussian fits to the data. The processing incorporates qualitative information on cloudiness from the CLAVR dataset. The resulting smooth curves are used to define parameters describing the growing seasons. The method has been applied to PAL NDVI data, and imagery has been generated showing parameters such as beginnings and ends of seasons, seasonally integrated NDVI, seasonal amplitudes, etc. The results indicate that the two methods complement each other and may be suitable in different areas depending on the behaviour of the NDVI signal.
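A minimal, non-adaptive stand-in for the Savitzky-Golay step, using the classic 5-point quadratic kernel (the paper's filter adapts its window to the data; the quadratic test series below is synthetic, chosen because the kernel reproduces quadratics exactly):

```python
COEF = [-3, 12, 17, 12, -3]  # quadratic/cubic Savitzky-Golay kernel, window 5

def savgol5(series):
    """Smooth a time-series; the two samples at each end are left untouched."""
    out = list(series)
    for t in range(2, len(series) - 2):
        out[t] = sum(c * series[t + k - 2] for k, c in enumerate(COEF)) / 35.0
    return out

ndvi = [t * t for t in range(7)]   # a quadratic signal is preserved exactly
print(savgol5(ndvi)[2:5])          # -> [4.0, 9.0, 16.0]
```

The polynomial-preserving property is what makes the filter attractive for NDVI curves: peaks and inflections survive smoothing better than with a moving average.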
A wavelet-based approach to local fractal dimension estimation of SAR images of the sea surface is presented. Fractal analysis is considered as a tool for image texture characterization, which can play a fundamental role in automatically detecting oil slicks and possibly distinguishing them from natural surface films. A fractional Brownian motion (fBm) model is assumed for the clean sea surface. fBm processes have proved suitable for describing signals backscattered by many natural surfaces, particularly by the sea surface within a certain range of scales. By using the properties of the average power spectra of fBm processes, it is possible to estimate the fractal dimension, as demonstrated on synthetic fBm realizations. In this paper, a redundant wavelet representation is applied to estimate the local fractal dimension of the sea surface. This technique, which operates at the original image resolution, allows all discontinuities of the fractal sea surface to be detected and accurately localized. Experimental results on real SAR images show that, by using only textural features rather than the backscatter coefficient to calculate the fractal dimension, it is possible to detect oil slicks and man-made objects on the sea surface.
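A rough sketch of fractal-dimension estimation from the scaling of mean-squared increments (a variogram-style estimator standing in for the paper's wavelet estimator; the smooth test profile is synthetic):

```python
import math

def hurst_from_increments(series, s1=1, s2=4):
    """Estimate the Hurst exponent H from E[(x(t+s)-x(t))^2] ~ s^(2H)."""
    def msq(s):
        return sum((series[t + s] - series[t]) ** 2
                   for t in range(len(series) - s)) / (len(series) - s)
    return 0.5 * math.log(msq(s2) / msq(s1)) / math.log(s2 / s1)

# a smooth ramp behaves like H = 1, so its profile dimension is D = 2 - H = 1
profile = [0.5 * t for t in range(64)]
print(2 - hurst_from_increments(profile))  # ~ 1.0
```

For an fBm profile the fractal dimension is D = 2 - H (D = 3 - H for a surface); rougher textures such as clean sea clutter give larger D, while damped slick regions look smoother, which is the basis for detection.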
This paper presents a novel classification scheme for SAR images based on the perceptual classification of image patterns in the Discrete Hermite Transform (DHT) domain over a roughly hexagonal sampling lattice. The DHT analyzes a signal through a set of binomial filters that approximate the Gaussian derivatives, with the advantage that they can be computed efficiently. In order to obtain the DHT referred to a rotated coordinate system, the set of coefficients of a given order is mapped through a locally specified unitary transformation. This transformation is based on the generalized binomial functions, so the rotation algorithm is efficient too. The representation allows a perceptual classification, achieved by thresholding the approximation errors obtained under the hypotheses that the underlying pattern is a constant (0-D), an oriented structure (1-D) or a non-oriented structure (2-D). The threshold is based on the light-adaptation and contrast-masking properties of human vision.
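The binomial filters underlying the DHT are normalized rows of Pascal's triangle, which converge to Gaussian samples as the order grows — this is why Gaussian derivatives can be approximated with cheap integer arithmetic. A minimal sketch (the filter length is illustrative):

```python
def binomial_filter(n):
    """Normalized n-th row of Pascal's triangle; approximates a Gaussian."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    s = sum(row)  # equals 2**n
    return [v / s for v in row]

print(binomial_filter(4))  # -> [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

Differencing adjacent binomial filters yields the derivative-like kernels used by the transform.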
Canada's RADARSAT-2 (R2) Synthetic Aperture Radar (SAR) satellite will be equipped with an experimental Ground Moving Target Indication (GMTI) mode, which makes use of the 'Dual-Receive' capability of the R2 antenna to provide two apertures aligned in the along-track direction. The mode allows two SAR images to be taken under identical geometry of observation, but separated by a short time lag. One of the GMTI techniques currently being explored is based on SAR Along-Track Interferometry (SAR-ATI), which uses the magnitude-phase information of the interferogram to extract movers from stationary clutter. In this paper, an unconventional but fully automatic detection scheme, derived using a histogram approximation to the clutter joint Probability Density Function (PDF), is proposed. The new method permits the implementation of a Constant False Alarm Rate (CFAR) detector without the need to derive a theoretical joint PDF for the clutter interferogram. A false alarm reduction technique, based on 'selective' local density calculations, is also discussed and implemented, showing a striking reduction in the number of false alarms (up to 75%) over the original detector without significantly degrading its performance. The detector is shown to be robust in its ability to handle both simulated (R2) and real (airborne) data. A preliminary comparison with a conventional CFAR detector, derived using theoretical marginal PDFs of the interferogram's magnitude and phase, shows the performance superiority of the new detector.
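The histogram-driven CFAR idea — set the detection threshold from the empirical clutter distribution rather than a theoretical PDF — can be sketched in one dimension as follows (the paper works on the joint magnitude-phase histogram; the uniform clutter below is synthetic):

```python
def cfar_threshold(clutter_samples, pfa):
    """Threshold exceeded by roughly a `pfa` fraction of clutter samples."""
    ordered = sorted(clutter_samples)
    k = int((1.0 - pfa) * len(ordered))
    return ordered[min(k, len(ordered) - 1)]

clutter = [i / 100.0 for i in range(100)]  # uniform stand-in for clutter
thr = cfar_threshold(clutter, pfa=0.05)
print(thr)  # -> 0.95
detections = [x for x in clutter + [1.4] if x > thr]  # 1.4 is a "mover"
```

The attraction is exactly what the abstract states: no closed-form clutter model is needed, only enough training samples to populate the histogram.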
Current pulsed laser radar systems for ranging are based on time-of-flight techniques. Nowadays both first-pulse and last-pulse exploitation are used for different applications, e.g. urban planning and forestry surveying. Beyond this time-measurement technique, the complete signal form over time can be of interest, because it encodes the backscattering characteristic of the illuminated field. This characteristic can be used to estimate the aspect angle of a plane with a special surface property, or the surface property of a plane with a special aspect angle. In this paper a monostatic bi-directional experimental system with a fast digitizing receiver is described. The spatio-temporal beam propagation, the spatial reflectance of the surface, and the receiver properties are modeled. A time-dependent description of the received signal power is derived, and our special surface property is considered. The spatial distribution of the laser beam was measured and displayed as a beam profile. For a plane surface under various aspect angles, the transversal distributions of the beam were simulated and measured. For these angles the corresponding temporal beam distributions were measured and compared with their pulse widths. The pulse spread is used to estimate the aspect angle of the illuminated object, and the statistics for different angles were calculated. Different approaches for detecting a characteristic time value were compared and evaluated. Consideration of the signal form allows a more precise determination of the time-of-flight, and a 3-D visualization of equi-irradiance surfaces gives access to the spatio-temporal shape of the pulses.
This paper presents an application of Fourier Descriptors and a Neural Network to the recognition of archeological artifacts in Ground Penetrating Radar (GPR) images of a surveyed site. Multiple 2-D GPR images of a site were made available by the NASA-SSC center. The buried artifacts in these images appear as parabolas, which result from radar backscatter off the artifacts. The Fourier Descriptors of an image are applied as inputs to a feed-forward backpropagation Neural Network Classifier (NNC). The NNC was trained to distinguish parabola-like shapes from non-parabola shapes in the sub-surface images. The procedure consists of removing background noise using a suitable threshold filter, locating the separate shapes in the image using an N8(p) connectivity algorithm, calculating a short sequence of Fourier Descriptors (FDs) for each isolated shape, and finally classifying parabola/non-parabola using the Neural Network applied to the FDs. The results are images with recognized parabolas, which indicate the presence of buried artifacts. As a useful feature for archeologists, a 3-D visualization of the complete survey area is produced using C++ and the Visualization Tool Kit. The algorithms for removing the background noise, thresholding, calculating the Fourier Descriptors, and obtaining a classification using a Neural Network were developed in Matlab.
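A minimal sketch of Fourier descriptors for a closed contour: the boundary points are treated as complex numbers, coefficient magnitudes are kept for rotation/start-point invariance, and scaling by the first harmonic gives size invariance (the circular contour below is illustrative, not GPR data):

```python
import cmath
import math

def fourier_descriptors(points, n_desc=3):
    """Normalized |DFT| coefficients of a closed contour, harmonics 2..n_desc+1."""
    z = [complex(x, y) for x, y in points]
    n = len(z)
    coef = [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
    scale = abs(coef[1])  # first harmonic normalizes out size
    return [abs(coef[k]) / scale for k in range(2, 2 + n_desc)]

# eight points on a circle: all energy sits in the first harmonic,
# so the higher descriptors are essentially zero
pts = [(math.cos(2 * math.pi * t / 8), math.sin(2 * math.pi * t / 8))
       for t in range(8)]
print(max(fourier_descriptors(pts)))  # ~ 0
```

A parabola-like boundary would instead spread energy over several harmonics, giving a short feature vector the classifier can separate from other shapes.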
In this paper a new approach to clutter and target characterization is proposed. The method is based on the use of Markov chains for representing samples of both the clutter and the target. The mathematical representation of the clutter and the target is based on the transition matrix of an irreducible Markov chain. This kind of representation incorporates a full description of the underlying pdf as well as statistical correlation of any order. Among the useful and meaningful parameters of the transition matrix are its eigenvalues. In natural signals, only a small number of the transition matrix elements have significant values. This fact can be used to devise relatively simple Markov chain models for clutter representation. The target statistics can also be modeled by a Markov chain; in this case, however, the model may be simpler, since the target samples or pixels are highly correlated and their values are restricted to a smaller range than those of the clutter.
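The basic building block — estimating a transition matrix from a quantized sample sequence — can be sketched as follows (the state sequence is illustrative; the eigen-analysis the paper discusses is left out):

```python
def transition_matrix(states, n):
    """Row-stochastic transition matrix estimated from a state sequence."""
    counts = [[0.0] * n for _ in range(n)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    # normalize each row by its total transition count (guard empty rows)
    return [[c / max(1.0, sum(row)) for c in row] for row in counts]

print(transition_matrix([0, 1, 0, 1, 0, 1], 2))  # -> [[0.0, 1.0], [1.0, 0.0]]
```

Because natural clutter yields sparse count matrices, only the few significant entries need to be stored and analysed.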
Several powerful lossy compression methods have been developed for hyperspectral images. However, it is difficult to determine what quality is sufficient for reconstructed hyperspectral images. We measured the information loss from lossy compression with the Signal-to-Noise Ratio (SNR) and the Peak Signal-to-Noise Ratio (PSNR). To obtain more illustrative error measures, unsupervised K-means clustering combined with spectral matching methods was used. The spectral matching methods include the Euclidean distance, the Spectral Similarity Value (SSV) and the Spectral Angle Mapper (SAM). We used two AVIRIS radiance images, which were compressed with three different methods: the Self-Organizing Map (SOM), Principal Component Analysis (PCA) and a three-dimensional wavelet transform combined with lossless BWT/Huffman encoding. The two-dimensional JPEG2000 compression method was applied to the eigenimages produced by the PCA. It was found that clustering combined with spectral matching is a good method for assessing image quality for many applications. High classification accuracies were achieved even at very high compression ratios. The SAM and the SSV are much more vulnerable to the information loss caused by lossy compression than the Euclidean distance. The results suggest that lossy compression is possible in many real-world segmentation applications. The PCA transform combined with JPEG2000 was the best compression method according to all error metrics.
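The Spectral Angle Mapper used here as a quality measure is simply the angle between a pixel spectrum and a reference spectrum, which makes it insensitive to overall brightness but sensitive to shape distortion:

```python
import math

def sam(a, b):
    """Spectral angle (radians) between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

print(sam([1, 2, 3], [2, 4, 6]))  # scaled copies give an angle of ~0
```

This brightness invariance is why SAM (and the SSV, which mixes SAM with a Euclidean term) reacts more strongly than plain Euclidean distance when lossy compression deforms spectral shape.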
This paper proposes an improvement to an interband version of the linear prediction approach for lossless compression of hyperspectral images. The improvements consist of the handling of non-predictable bands and a variable sample-set size. Our improved method achieved an average compression ratio of 3.19 on 13 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images, compared to 3.08 for the basic method.
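A hedged sketch of interband linear prediction (a least-squares gain and offset per band pair, with the residuals shown uncoded; the paper's scheme additionally handles non-predictable bands and varies the sample set):

```python
def predict_band(prev, curr):
    """Predict band `curr` from band `prev`; return the residuals to encode."""
    n = len(prev)
    mp, mc = sum(prev) / n, sum(curr) / n
    cov = sum((p - mp) * (c - mc) for p, c in zip(prev, curr))
    var = sum((p - mp) ** 2 for p in prev)   # assumed non-zero here
    a = cov / var
    b = mc - a * mp
    return [c - (a * p + b) for p, c in zip(prev, curr)]

prev = [10, 20, 30, 40]
curr = [21, 41, 61, 81]          # exactly 2*prev + 1
print(predict_band(prev, curr))  # -> [0.0, 0.0, 0.0, 0.0]
```

When adjacent bands are highly correlated the residuals have far lower entropy than the raw band, which is where the lossless coding gain comes from; a "non-predictable" band is one for which this is not true and which is better coded directly.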
Dimensionality Reduction and Classification of Hyperspectral Images
Several well-known methods for lossy compression of still images are analyzed here to evaluate their performance on hyperspectral images. The lossy compression methods discussed are the JPEG standard and four approaches based on the Wavelet Transform: the Embedded coding of ZeroTree wavelet coefficients (EZT), the Set Partitioning in Hierarchical Trees (SPIHT), a Lattice Vector Quantizer (LVQ), and the new JPEG2K. Experiments are first performed on corpora of natural grayscale still images to provide a general framework for the performance of each method. Experiments are then performed on several hyperspectral images taken with CASI and AVIRIS sensors. They show that it is possible to employ the basic lossy compression methods for hyperspectral image coding. The wavelet-based approaches produce results consistently better than JPEG: JPEG cannot achieve compression ratios above 75:1, whereas with EZT, SPIHT and LVQ compression ratios of 250:1 or higher may be reached. JPEG2K also reaches higher compression ratios than JPEG, but with a PSNR quality lower than the three other techniques. At compression ratios around 8:1, the wavelet methods yield results 1.5 dB better than those of JPEG. These results help explain why the JPEG2K standard uses the WT instead of the DCT.
Methods for noise reduction in multicomponent spectral images are developed and discussed. Multicomponent spectral images can be corrupted by noise either on all channels or on some channels only. In the first case there are two possibilities: the noise affects all channels in the same way, or it is randomly distributed across the channels. We studied two methods for noise reduction directly on the multicomponent spectral image: the vector median filter and our new method, spectrum smoothing, which ignores neighbouring pixels and instead reduces noise one pixel at a time. The idea behind spectrum smoothing lies in the nature of a color spectrum: a color spectrum is naturally smooth and lacks the sharp peaks that a noisy spectrum exhibits. If only some of the channels are noisy, the problem is to find the noisy channels. We came to the conclusion that if a channel correlates poorly with its neighbouring channel, the channel can be considered noisy, and filtering is applied to that channel. Results from our new spectrum-smoothing filter were very promising for Gaussian noise compared to a 3-by-3 Gaussian filter and a 5-by-5 mean filter.
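The channel-screening rule — flag a channel whose image correlates poorly with its spectral neighbour — can be sketched as follows (the correlation threshold is an assumption, not from the paper; each "band" is flattened to a list of pixel values):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equally sized pixel-value lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def noisy_channels(bands, threshold=0.9):
    """Indices of channels that correlate poorly with the previous channel."""
    return [k for k in range(1, len(bands))
            if pearson(bands[k - 1], bands[k]) < threshold]

print(noisy_channels([[1, 2, 3, 4], [2, 4, 6, 8], [5, 1, 4, 2]]))  # -> [2]
```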
A method for the extraction of spectral and spatial scene statistics from hyperspectral data is discussed. The method is designed to work on atmospherically compensated data in any spectral region, although this paper reports on visible scene statistics derived from atmospherically compensated AVIRIS data. Our approach is based on a physical description in which the scene is composed of materials that in turn are described by a set of spectral endmembers. The spatial statistics of individual scene materials show more stationary behavior than the statistics of the whole scene. For this reason we have formulated our approach around statistics determined from the fractional abundance images obtained from the spectral un-mixing of the scene. These quantities are used to construct a high-spatial-resolution reflectance or emissivity/temperature surface using a fast autoregressive texture generation tool. The spectral complexity of the synthetic surfaces has been evaluated by inserting objects for detection and calculating ROC curves. Preliminary results indicate that synthetic scenes with realistic levels of spectral clutter can be generated using spectral and spatial statistics determined from endmember fractional abundance maps. This work is motivated by the need for realistic hyperspectral scene generation capabilities to test future hyperspectral sensor concepts.
The aim of this paper is to investigate the use of overcomplete bases for the representation of hyperspectral image data. The idea is to build an overcomplete basis starting from several orthogonal or non-orthogonal bases and to pick a set of vectors fitting the pixel spectra to the largest extent. A common technique for selecting the most representative elements of a signal is Matching Pursuit (MP). This technique is analogous to Mixed-Transform Analysis (MTA) and has been successfully used to represent speech and images. The main problems in using MTA for hyperspectral data analysis are: (1) the choice of bases that potentially convey the maximum of spectral information; (2) the calculation of projections in the non-orthogonal representation. A large variety of bases has been considered, including several types of wavelets with compact support. An iterative approach is used to find the coefficients of the linear combination of vectors so that the residual function has minimum energy. The computational cost is extremely high when a large data set is to be processed. To meet computational constraints, a reduced data set (RDS) is produced by applying the projection pursuit technique to each of the square blocks into which the input hyperspectral image is partitioned according to a spatial homogeneity criterion. MTA is then applied to the RDS to find a non-orthogonal frame capable of representing the data through waveforms selected to best match spectral features. Experimental results on the AVIRIS Moffett Field '97 hyperspectral data show that the joint use of different bases, including wavelet bases, may be preferable to a single orthogonal basis in terms of energy compaction, as well as of the significance of the resulting components.
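A minimal Matching Pursuit sketch over a tiny overcomplete dictionary (unit-norm atoms assumed; the dictionary is illustrative, not the wavelet bases used in the paper): each iteration picks the atom most correlated with the residual and subtracts its projection, so the residual energy decreases monotonically.

```python
def matching_pursuit(signal, atoms, n_iter):
    """Greedy decomposition of `signal` over unit-norm dictionary `atoms`."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        # atom with the largest |<residual, atom>|
        best = max(range(len(atoms)),
                   key=lambda j: abs(sum(r * a for r, a in zip(residual, atoms[j]))))
        c = sum(r * a for r, a in zip(residual, atoms[best]))
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
        picks.append((best, c))
    return picks, residual

atoms = [[1, 0], [0, 1], [0.6, 0.8]]   # overcomplete dictionary for R^2
picks, res = matching_pursuit([3, 4], atoms, n_iter=1)
# the third atom matches the signal exactly, so the residual is ~[0, 0]
```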
Feature extraction, implemented as a linear projection from a higher-dimensional space to a lower-dimensional subspace, is a very important issue in hyperspectral data analysis. The projection must be done in a manner that minimizes redundancy while maintaining the information content. In hyperspectral data analysis, a relevant objective of feature extraction is to reduce the dimensionality of the data while maintaining the capability of discriminating objects of interest from the cluttered background. This paper presents a comparative study of different unsupervised feature extraction mechanisms and shows their effects on unsupervised detection and classification. The mechanisms implemented and compared are an unsupervised SVD-based band subset selection mechanism, Projection Pursuit, and Principal Component Analysis. To validate the unsupervised methods, supervised mechanisms such as Discriminant Analysis and supervised band subset selection using the Bhattacharyya distance were implemented and their results compared with those of the unsupervised methods. Unsupervised band subset selection based on the SVD automatically chooses the most independent set of bands. The Projection Pursuit based feature extraction algorithm automatically searches for projections that optimize a projection index. The index we optimized measures the information divergence between the probability density function of the projected data and the Gaussian probability density function, producing a projection in which the probability density function of the whole data set is multi-modal rather than a Gaussian uni-modal distribution. This increases the separability of the unknown clusters in the lower-dimensional space. Finally, these methods were compared with the widely used Principal Component Analysis. The methods were tested using synthetic data as well as remotely sensed data obtained from AVIRIS and LANDSAT, and compared using unsupervised classification methods in a known ground-truth area.
An important aspect of hyperspectral pattern recognition is selecting a subset of bands to perform the classification. This is generally necessary because the statistical algorithms on which classification is based need probabilistic estimates to work. The great number of spectral bands in hyperspectral images means that there is not enough data to accurately perform these estimates. In typical hyperspectral pattern recognition, the band selection and classification stages are done separately. This paper presents research done with an iterative system that integrates the band selection and classification. The objective is to choose an optimal subgroup of bands by maximizing the distance between the centroids of the classified data. The results of the study show that: (1) the algorithm correctly chooses the best bands based on centroid separability with synthetic data, (2) the system converges, and (3) the percentage of samples classified correctly using the iterative system is greater than the percentage using all the bands.
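A much-simplified, single-pass stand-in for the selection criterion: rank bands by the squared distance between class centroids along each band (the paper's system iterates band selection and classification; the two-class data below are synthetic):

```python
def select_bands_by_centroid_gap(class_a, class_b, n_bands):
    """Return indices of the bands with the largest centroid separation."""
    dims = len(class_a[0])
    cen_a = [sum(s[d] for s in class_a) / len(class_a) for d in range(dims)]
    cen_b = [sum(s[d] for s in class_b) / len(class_b) for d in range(dims)]
    gaps = [(cen_a[d] - cen_b[d]) ** 2 for d in range(dims)]
    return sorted(range(dims), key=lambda d: -gaps[d])[:n_bands]

a = [[1.0, 5.0, 2.0], [1.2, 5.2, 2.1]]   # samples of class A (3 bands)
b = [[1.1, 1.0, 2.0], [0.9, 1.2, 2.1]]   # samples of class B
print(select_bands_by_centroid_gap(a, b, 1))  # -> [1]
```

In the iterative system described above, the classification produced with the selected bands would redefine the centroids, and the selection would then be repeated until convergence.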
The analysis of hyperspectral data, due to its high spectral resolution, requires dealing with the curse of dimensionality. Many feature selection/extraction techniques have been developed, which map the hyperdimensional feature space into a lower-dimensional space based on the optimization of a suitable criterion function. This paper studies the impact of several such techniques, and of the criterion chosen, on the accuracy of different supervised classifiers. The compared methods are 'Sequential Forward Selection' (SFS), 'Steepest Ascent' (SA), 'Fast Constrained Search' (FCS), 'Projection Pursuit' (PP) and 'Decision Boundary Feature Extraction' (DBFE), while the criterion functions considered are standard interclass distance measures. SFS is well known for its conceptual and computational simplicity. SA provides more effective subsets of selected features at the price of a higher computational cost. DBFE is an effective transformation technique, usually applied after a preliminary feature-space reduction through PP. The experimental comparison is performed on an AVIRIS hyperspectral data set characterized by 220 spectral bands and nine ground-cover classes. The computational time of each algorithm is also reported.
In this paper we study the use of classifier combination for improving the classification accuracy of AVIRIS data. Two types of combination ensembles are used as high-level classifiers: cascading and voting. As base-level classifiers, we use limited-depth decision trees and the nearest-neighbor classifier (k-NN). The final classification system uses a threshold parameter that allows the user to specify a trade-off between classification accuracy and the percentage of classified samples. Dimensionality reduction is carried out by using decision trees to select the most promising classification features, which are used to build the base-level classifiers. We also use classical statistical analysis to measure correlation between spectral bands. A set of post-processing rules may also be applied to generate large homogeneous regions from the pixmap generated by the classifier: false spots and 'unknown' samples may be re-classified depending on their neighborhood. Experiments show that the combined use of cascading small decision trees and a voting scheme with a k-NN classifier improves classification performance compared to a single classifier, while the 'unknown' class allows us to identify possible outliers present in the training set. The use of post-processing generates large regions which may be more useful for classification and interpretation.
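The voting stage with an abstention threshold can be sketched as follows (the threshold value and class labels are illustrative): a sample is labelled only if the winning class receives at least the required fraction of votes, otherwise it is sent to the 'unknown' class.

```python
def vote(predictions, threshold=0.5, unknown="unknown"):
    """Majority vote over base-classifier outputs, with abstention."""
    tally = {}
    for p in predictions:
        tally[p] = tally.get(p, 0) + 1
    winner = max(tally, key=tally.get)
    if tally[winner] / len(predictions) >= threshold:
        return winner
    return unknown

print(vote(["corn", "corn", "soy"]))    # -> corn
print(vote(["corn", "soy", "wheat"]))   # -> unknown
```

Raising the threshold trades classified-sample percentage for accuracy, which is exactly the user-controlled trade-off described above.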
The availability of only small samples of training data presents a problem when using statistical pattern recognition techniques. Recently new methods for pattern recognition, designed specifically for use with small training data samples, have begun to appear in the literature. These methods sample the training data many times to assemble a range of different classifiers. The classifiers produced may then be collected into an ensemble and, when presented with an unseen sample, use a voting scheme to determine class membership. A particular example of this ensemble classification technique, the random subspace method, is examined here and tested using both synthetic data having known properties, and with data from the AVIRIS hyperspectral-imaging sensor. The paper discusses the application of the method to problems that are not linearly separable; the selection of parameters for the method, and examines the performance envelope for different problems and parameterizations. Good results are produced for both datasets, even where the training samples are too small for conventional classification techniques to be used. Specifically, error rates of only twice those calculated for a large training sample may be achieved using training sets with as few as 20 examples per class, for a thirteen-class classification problem, using the 200-dimensional AVIRIS "Indian Pines" hyperspectral image.
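A minimal random subspace sketch: each base classifier is trained on a random subset of the features, and predictions are combined by majority vote. A trivial nearest-centroid learner stands in for the paper's base classifiers; the data and parameters are synthetic.

```python
import random

def train_subspace_ensemble(X, y, n_models, n_feats, rng):
    """Each model: (feature subset, per-class centroids on that subset)."""
    models = []
    for _ in range(n_models):
        feats = rng.sample(range(len(X[0])), n_feats)
        by_class = {}
        for xi, yi in zip(X, y):
            by_class.setdefault(yi, []).append([xi[f] for f in feats])
        cent = {c: [sum(col) / len(v) for col in zip(*v)]
                for c, v in by_class.items()}
        models.append((feats, cent))
    return models

def predict(models, x):
    votes = {}
    for feats, cent in models:
        sub = [x[f] for f in feats]
        c = min(cent, key=lambda k: sum((a - b) ** 2
                                        for a, b in zip(sub, cent[k])))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)

X = [[0, 0, 0], [0, 0, 1], [5, 5, 4], [5, 5, 5]]
y = [0, 0, 1, 1]
models = train_subspace_ensemble(X, y, n_models=5, n_feats=2,
                                 rng=random.Random(0))
print(predict(models, [0, 0, 0]))  # -> 0
```

The point of the method is that each base learner estimates statistics in a low-dimensional subspace, where even a tiny training set suffices; diversity across subspaces is then recovered by the vote.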
An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) use of the RIN-net for corresponding point candidate selection; 2) use of multi-resolution analysis with the Haar wavelet transform to improve selection accuracy and noise tolerance; 3) use of context information about corresponding point candidates to screen the selected candidates. Here, 'RIN-net' means a back-propagation-trained feed-forward three-layer artificial neural network that takes rotation invariants as input data. In our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Some experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than a conventional method.
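The key property exploited above is that certain moment features are invariant under image rotation. The paper uses pseudo-Zernike moments; as a simpler self-contained illustration of the same principle, the magnitude of a complex moment c_pq is rotation invariant, since rotating the image by theta only multiplies c_pq by exp(i(p-q)theta):

```python
import numpy as np

def complex_moment_mag(img, p, q):
    """|c_pq| over image-centered coordinates.  Rotation of the image
    changes only the phase of c_pq, so this magnitude is a rotation
    invariant (a stand-in here for pseudo-Zernike moments)."""
    n, m = img.shape
    y, x = np.mgrid[0:n, 0:m]
    z = (x - (m - 1) / 2) + 1j * (y - (n - 1) / 2)
    return abs(np.sum(img * z ** p * np.conj(z) ** q))
```

Feeding such invariants to the network means the RIN-net does not need to see every orientation of a pattern during training, which is consistent with the reported reduction in training patterns.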
A new method is proposed for selecting the appropriate training areas used in supervised texture classification. In the method, a genetic algorithm (GA) is employed to determine the appropriate location and size of each texture category's training area. The proposed method consists of the following procedures: 1) the number and kinds of classification categories are determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each candidate training area and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated; the fitness is the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the GA's selection operation, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) the procedure returns to step 4. Some experiments are conducted to evaluate the proposed method's ability to find appropriate training areas, using images from Brodatz's photo album and their rotated versions. The experimental results show that the proposed method can select appropriate training areas much faster than the conventional trial-and-error method. The proposed method has also been applied to supervised texture classification of airborne multispectral scanner images. The experimental results show that the proposed method can provide appropriate training areas for reasonable classification results.
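The GA loop of steps 3-8 (elite preservation, roulette selection, multi-point crossover, bit-flip mutation) can be sketched generically. This is not the paper's implementation: the CRMTC x SNSFS fitness is replaced by an arbitrary user-supplied function, the chromosome is a plain bit string rather than the center-coordinates-plus-size encoding, and two-point crossover stands in for "multi point".

```python
import random

def roulette(pop, fits):
    """Fitness-proportional parent selection (roulette strategy)."""
    total = sum(fits)
    if total == 0:
        return random.choice(pop)
    r, acc = random.uniform(0, total), 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def evolve(fitness, n_bits=16, pop_size=50, gens=60, pmut=0.02, seed=1):
    """Elite-preserving GA with two-point crossover and bit-flip mutation."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(c) for c in pop]
        elite = pop[fits.index(max(fits))]
        nxt = [elite[:]]                         # step 5: elite preservation
        while len(nxt) < pop_size:
            a, b = roulette(pop, fits), roulette(pop, fits)
            i, j = sorted(random.sample(range(n_bits), 2))
            child = a[:i] + b[i:j] + a[j:]       # step 6: crossover
            child = [bit ^ (random.random() < pmut) for bit in child]  # step 7
            nxt.append(child)
        pop = nxt                                # step 8: back to step 4
    return max(pop, key=fitness)
```

In the paper's setting the decoded chromosome would be evaluated by classifying with the candidate training areas; here any fitness over bit strings demonstrates the loop.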
In a wide set of digital images, the problem of detecting specific structures amounts to filtering among multiple, complex lines and secondary elements. The real problem is extracting relevant information from images while discarding uninteresting information before, during and after the segmentation process. In this work, we summarize the advantages and disadvantages of each approach, concluding that filtering should generally be applied as early as possible. In this sense, we present a method of filtering during segmentation, which combines the moving-window and seeded-region approaches. The main steps are: 1) the whole image is divided into windows whose size is related to that of the searched structures; 2) prior knowledge about the location of the searched elements is applied to reduce the number of windows; 3) the number of windows is further reduced using distribution and compactness conditions; 4) the pixel population of each work window is analyzed to fix a threshold; 5) the filtered window-threshold pairs are segmented using a simple two-populations criterion; 6) by analyzing the detected segments, the list of work window-threshold pairs is extended to include new windows. The most relevant result is the definition of a new border-based segmentation approach, which gives good results when searching for specific objects in complex images.
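Step 4, fixing one threshold per work window from its pixel population, can be illustrated with an Otsu-style two-populations criterion (an assumption — the abstract does not name the specific criterion used):

```python
import numpy as np

def two_population_threshold(window):
    """Pick the threshold maximizing between-class variance, i.e. the
    best split of the window's grey levels into two populations."""
    hist, edges = np.histogram(window, bins=64)
    probs = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = edges[1], -1.0
    for k in range(1, len(mids)):
        w0, w1 = probs[:k].sum(), probs[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (probs[:k] * mids[:k]).sum() / w0
        m1 = (probs[k:] * mids[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, edges[k]
    return best_t
```

Applying this only inside the surviving work windows, rather than globally, is what lets the method adapt the threshold to local populations while skipping uninteresting regions.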
With the recent arrival of high-resolution images acquired by the new Earth observation satellites (IKONOS, EROS, QUICKBIRD), it is natural to consider their possible photogrammetric exploitation. The high geometric (up to 0.6 m GSD) and radiometric (11-12 bit) resolution of such images leads us to consider them as possible substitutes for the classic aerial images used for cartographic purposes at the 1:5000/1:2000 scale. In such a context we cannot ignore the heavy influence of terrain altimetry on image georeferencing operations; orthocorrection must be carried out. This paper, far from solving the full orthoprojection problems related to the definition of camera position and attitude, demonstrates how a complex urban DDEM (Dense Digital Elevation Model), completed with building volume information, can improve the planimetric accuracy of the orthocorrected images. Proprietary software, developed by the authors, can automatically extract a building DEM from 3D cartography and integrate it with a simple terrain DEM. The results refer to an orthocorrection carried out with commercial software, and are certainly conditioned by the geometric model used by that software, which is outside our control. The purpose is simply to demonstrate the real improvement in planimetric positioning obtained using the DDEM.
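The reason building volumes matter can be quantified with the standard relief-displacement relation d = h tan(theta): a roof point of height h imaged at off-nadir angle theta is shifted planimetrically by d when orthocorrection uses a terrain-only DEM. A small illustrative computation (the 20 m / 20 degree values below are assumed, not from the paper):

```python
import math

def relief_displacement_px(height_m, off_nadir_deg, gsd_m):
    """Planimetric shift, in pixels, of an elevated point when its
    height is absent from the DEM used for orthocorrection."""
    return height_m * math.tan(math.radians(off_nadir_deg)) / gsd_m
```

For a 20 m building at 20 degrees off-nadir and 0.6 m GSD this is roughly a dozen pixels, far beyond the accuracy required at 1:2000 scale, which is exactly the error the DDEM removes.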
Image registration is an important operation in remote sensing applications that basically involves the identification of many control points in the images. As the manual identification of control points may be time-consuming and tedious, several automatic techniques have been developed. This paper describes a system for automatic registration and mosaicking of remote sensing images under development at the Division of Image Processing (National Institute for Space Research - INPE) and the Vision Lab (Electrical & Computer Engineering Department, UCSB). Three registration algorithms, which showed potential for multisensor or temporal image registration, have been implemented. The system is designed to accept different types of data and user-provided information, which speeds up the processing and helps avoid mismatched control points. Based on a statistical procedure used to characterize good and bad registrations, the user can stop the process or modify the parameters and continue. Extensive algorithm tests have been performed by registering optical, radar, multi-sensor and high-resolution images and video sequences. Furthermore, the system has been tested by remote sensing experts at INPE using full-scene Landsat, JERS-1, CBERS-1 and aerial images. An online demo system, which contains several examples that can be carried out using a web browser, is available.
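A common building block for the automatic control-point identification mentioned above is template matching by normalized cross-correlation (NCC); the abstract does not specify the three algorithms, so this is a generic sketch of the idea, not the INPE/UCSB implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches, in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_control_point(ref_patch, search_img):
    """Slide the reference patch over the search window; the NCC peak
    gives the candidate control-point location and its score."""
    ph, pw = ref_patch.shape
    sh, sw = search_img.shape
    best, pos = -2.0, (0, 0)
    for y in range(sh - ph + 1):
        for x in range(sw - pw + 1):
            s = ncc(ref_patch, search_img[y:y + ph, x:x + pw])
            if s > best:
                best, pos = s, (y, x)
    return pos, best
```

Low peak scores flag dubious matches, which is the kind of statistic a system can use to let the user reject mismatched control points before computing the final transformation.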
A clutter removal procedure for Infra-Red (IR) naval surveillance systems is presented. The proposed method is specifically designed to manage the maritime scenario and is not sensitive to the sharp transition between sea and sky across the horizon line. It is also effective for the removal of striping noise, which arises as a consequence of the non-uniform calibration of the detector array. The effectiveness of the clutter removal procedure is illustrated on a set of experimental IR data.
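The striping noise mentioned above comes from each detector row having a slightly different gain and offset. A simple moment-matching sketch of destriping (an illustrative baseline, not the paper's actual filter) equalizes each scan row's statistics to the global ones:

```python
import numpy as np

def destripe_rows(img):
    """Suppress row striping from non-uniform detector calibration by
    matching each row's mean and spread to the global image values."""
    out = img.astype(float).copy()
    g_mean, g_std = out.mean(), out.std()
    for r in range(out.shape[0]):
        row = out[r]
        s = row.std()
        out[r] = (row - row.mean()) * (g_std / s if s else 1.0) + g_mean
    return out
```

A real maritime procedure must additionally avoid smearing targets and must respect the sea/sky statistics split at the horizon, which is precisely where the proposed method improves on naive row equalization.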
We present a method for eliminating global intensity deformation in greyscale and color images of any size using contrast control. The method is based on manipulation of image resolution and uses a representation of the image in the form of a difference-of-low-pass (DoLP) pyramid. In the first step, a Gaussian pyramid representation of the input image is prepared through low-pass filtration and sampling of successive pyramid levels. In the second step, the DoLP pyramid is built; finally, all levels of the DoLP pyramid are expanded to the original image size and added with weights to reconstruct the image. Proper choice of the weights is crucial for efficient elimination of global intensity deformation and leads to contrast enhancement at certain levels of the pyramid. Color images and images of a size different from 2^N+1 x 2^N+1, where N = 2, 3, ..., require additional processing: they are converted to and from the hue-saturation-value (HSV) color space and geometrically transformed, which can be performed using two proposed methods. An algorithm and a library of computer routines written in the object-oriented programming language C++ have been developed. The proposed method is useful in the digital archiving of airborne, scanned, photo-copied and optical camera photographs degraded by ageing processes.
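The DoLP decompose-reweight-reconstruct cycle can be sketched in a simplified form. This sketch omits the down/up-sampling between pyramid levels (every level keeps the original size, i.e. it is closer to a Laplacian-style band decomposition), and the binomial kernel is an assumption; the principle, that down-weighting the coarsest residual suppresses slow global intensity deformation while band weights above 1 enhance contrast at that scale, is the same.

```python
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # binomial low-pass

def blur(img):
    """Separable low-pass filtration with edge padding."""
    p = np.pad(img, 2, mode='edge')
    tmp = sum(k * p[:, i:i + img.shape[1]] for i, k in enumerate(KERNEL))
    return sum(k * tmp[i:i + img.shape[0], :] for i, k in enumerate(KERNEL))

def dolp_reconstruct(img, levels, weights):
    """Build difference-of-low-pass bands and re-add them with weights;
    weights has one entry per band plus one for the coarse residual."""
    lowpass = [img.astype(float)]
    for _ in range(levels):
        lowpass.append(blur(lowpass[-1]))
    bands = [a - b for a, b in zip(lowpass[:-1], lowpass[1:])]
    out = weights[-1] * lowpass[-1]          # residual: global intensity
    for w, band in zip(weights, bands):
        out = out + w * band
    return out
```

With all weights equal to 1 the bands telescope and the input is reconstructed exactly; setting the residual weight below 1 flattens slow illumination drift, the stated goal of the method.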
Contextual classification methods, which require the extraction of complex spatial information over a range of scales, from fine details in local areas to large features that extend across the image, are necessary in many remote sensing image classification studies. This work presents a supervised adaptive object recognition model which integrates scale-space filtering techniques for feature extraction within a neural classification procedure based on a multilayer perceptron (MLP). The salient aspect of the model is the integration of the search for the most adequate filter parameters within the back-propagation learning task. The experimental evaluation of the method addresses object recognition in high-resolution remote sensing imagery. To investigate whether the strategy can be considered an alternative to conventional procedures, the results were compared with those obtained by a well-known contextual classification scheme.
At the Faculty of Physics, Moscow State University, new super-resolution methods for different physical measuring systems have been created, including special super-resolution methods for real multi-beam millimeter-wave systems. Until recently, the data obtained from a real one-beam radio-vision system were treated as though they had been obtained from a virtual multi-beam system. This approach proved very successful: the super-resolution problem was solved at low signal-to-noise ratios, with the solutions parallelized across all the virtual beams. The present paper is devoted to the super-resolution of data from a real, compact, six-beam radio-vision system. The experimental six-beam radio-vision device was built without mathematical modeling of its operation, and the results obtained at its output were poor. The problem was therefore to improve these results considerably, that is, to increase the resolution of each beam. We believe the experience obtained will be useful in constructing modern all-weather, multi-beam, real-time radio-vision systems. The methods of super-resolution for a real multi-beam microwave vision system are considered in the paper.
The paper proposes an adaptive classification procedure (ACP) that makes the classification of remotely sensed (RS) data more flexible and efficient than existing recognition methods. The ACP employs an improved scheme for forming the feature space and an adaptive decision rule that allows an optimal imagery classification method to be chosen during thematic processing. The paper considers the basic principles of the ACP design and the results of research into the efficiency of its classification methods. It also presents the results of applying the ACP to landscape-ecological mapping of the Lake Chani area (Omsk region, Russia) and the Pervomayskoe oil field (Tomsk region, Russia) using multispectral images from the Russian satellite RESURS-O1.
We present both semi-automated and automated methods for road extraction using IKONOS imagery. The automated method extracts straight-line, gridded road networks by inferring a local grid structure from initial information and then filling in missing pieces using hypothesization and verification. This can be followed by the semi-automated road tracker tool to approximate curvilinear roads and to fill in some of the remaining missing road structure. After a panchromatic texture analysis, our automated method incorporates an object-level processing phase which enables the algorithm to avoid problems arising from interference such as crosswalks and vehicles. It is limited, however, in that the logic is designed for reasoning concerning intersecting grid patterns of straight road segments. Many suburban areas are characterized by curving streets which may not be well-approximated using this automatic method. In these areas, missing content can be filled in using a semi-automated tool which tracks between user-supplied points. The semi-automated algorithm is based on measures derived from both the panchromatic and multispectral bands of IKONOS. We will discuss both of these algorithms in detail and how they fit into our overall solution strategy for road extraction. A presentation of current experimentation and test results will be followed by a discussion of advantages, shortcomings, and directions for future research and improvements.
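The semi-automated tracking between user-supplied points described above is, at its core, a minimum-cost path search over a per-pixel road-likelihood image. The following generic sketch uses Dijkstra's algorithm on a 4-connected grid; it is a stand-in for the authors' tracker, whose actual panchromatic/multispectral cost measure is not given in the abstract:

```python
import heapq

def track_road(cost, start, goal):
    """Minimum-cost 4-connected path between two user-supplied points
    over a cost image (low cost = road-like response)."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Because the path bends wherever the cost image is cheap, such a tracker naturally approximates the curvilinear suburban streets that defeat the grid-based automatic method.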