This PDF file contains the front matter associated with SPIE
Proceedings Volume 6700, including the Title Page, Copyright
information, Table of Contents, Introduction, and the
Conference Committee listing.
In hyperspectral imaging systems with a continuous-to-discrete (CD) model, the goal is to solve the matrix equation g =
Hθ + n for θ. Here g is a data vector obtained on pixels on a focal plane array (FPA), and n is the additive pixel noise
vector. The hyperspectral object cube f(x, y, λ) to be recovered is represented by θ, which is the vectorized set of
expansion coefficients of f with respect to a family of functions. The imaging operator is the system matrix H, whose
columns represent the projection of each expansion function onto the FPA. Hence an estimate of the object cube f(x,
y, λ) is reconstructed from these recovered expansion coefficients. Furthermore, H is equivalently a
calibration matrix and is amenable to an analytic description. Since both the number of expansion functions and the
number of pixels on an FPA are large, H becomes huge and very unwieldy to store. We describe a means by which we
can reduce the effective size of H by taking advantage of the analytic model of the imaging system and converting H
into a series of look-up tables. By this method we have been able to drastically reduce the storage requirements for H
from terabytes to sub-megabyte sizes. We provide an example of this technique in isoplanatic and polychromatic
calibration of a flash hyperspectral imaging system. These lookup tables are independent of both the choice of
expansion functions and the object-cube sampling.
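As an illustrative sketch of the lookup-table idea (not the authors' implementation; the array shapes and function names below are assumptions), an isoplanatic system matrix can be applied on the fly from a small table of per-wavelength FPA footprints instead of being stored column by column:

import numpy as np

# Minimal sketch: under an isoplanatic assumption, every column of H is a
# shifted copy of a per-wavelength footprint kernel, so H can be replaced by
# a small lookup table of kernels. Shapes and names here are illustrative.

def apply_H(theta, kernels, fpa_shape):
    """Compute g = H @ theta without materializing H.

    theta    : (n_lambda, ny, nx) expansion coefficients of the object cube
    kernels  : lookup table, kernels[l] is the FPA footprint of a unit
               expansion function in wavelength band l (small 2-D array)
    fpa_shape: (rows, cols) of the focal plane array
    """
    g = np.zeros(fpa_shape)
    for l, coeff_plane in enumerate(theta):
        k = kernels[l]
        kh, kw = k.shape
        for (y, x), c in np.ndenumerate(coeff_plane):
            if c == 0.0:
                continue
            # place the wavelength-dependent footprint at the FPA location
            # corresponding to object sample (y, x)
            g[y:y + kh, x:x + kw] += c * k
    return g

# Example: 3 bands, 16x16 object samples, 3x3 footprints, 18x18 FPA
kernels = [np.ones((3, 3)) / 9.0 for _ in range(3)]
theta = np.random.rand(3, 16, 16)
g = apply_H(theta, kernels, (18, 18))

Only the small kernel table has to be stored or calibrated; the full matrix H, whose columns would be these shifted footprints, is never materialized.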
Until now, space telescopes like Hubble have not required strong data compression. Images were captured on demand, and the telescopes' proximity to Earth provided sufficient downlink bandwidth. However, the next-generation space telescopes, such as GAIA (ESA) and the James Webb Space Telescope (JWST, ESA & NASA), will observe even wider sky fields at even higher resolution. Moreover, they will be dramatically farther from Earth than Hubble (1.5 million kilometers versus 600 kilometers). This implies a much lower downlink bandwidth and thus requires fast, on-board, strong data compression (ratios better than 1:200). To achieve the GAIA scientific objectives, a real-time "selectively lossless" compression is needed. With standard schemes this is simply not possible today, even without time constraints, because of the entropy limit. This paper explains why the GAIA Compression, which is based on Object-Based Compression (OBC), is efficient for stellar images. Since the baseline implementation did not meet all the ESA requirements (compression speed and ratio), we have also contributed to optimizing the GAIA Compression. Our contribution consists mainly in using (i) non-rectangular regions for large objects and (ii) inter-object differential predictive coding to improve the efficiency of the final lossless compression. We have tested our algorithms on the GAIA sky generator (GIBIS), which simulates flight-realistic conditions (CCD read noise, cosmic rays...). Without any loss of signal, we have obtained promising ratios of up to 1:270 for the worst-case sky.
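As a hedged toy sketch of the inter-object differential predictive coding idea (not the GAIA flight algorithm; the predictor choice, object extraction, and side-information handling are simplifying assumptions), each detected object can be coded as a residual with respect to a simple predictor before a generic lossless back-end:

import zlib
import numpy as np

# Toy sketch of inter-object differential predictive coding: each detected
# object is stored as the difference from a simple predictor (here the
# previous object's rounded mean) before a generic lossless back-end.
# Object detection happens upstream; side information (object lengths,
# positions) would also have to be transmitted in a real codec.

def encode_objects(objects):
    """objects: list of 1-D integer arrays of pixel values, one per detection."""
    residuals, predictor = [], 0
    for obj in objects:
        residuals.append(obj.astype(np.int32) - predictor)   # inter-object DPCM
        predictor = int(round(obj.mean()))                    # predictor update
    return zlib.compress(np.concatenate(residuals).tobytes(), 9)

objs = [np.random.poisson(100, size=50) for _ in range(20)]
raw = np.concatenate(objs).astype(np.int32).tobytes()
print(len(raw), "->", len(encode_objects(objs)))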
This paper presents a design technique for multi-channel filter banks for subband coding of audio signals. In sub-band
coding, the signal is first split into frequency bands using a bank of bandpass filters. The individual bandpass signals
are then decimated by a factor N and encoded for transmission. A filter bank is a collection of bandpass filters, all
processing the same input signal. The important parameters in sub-band coders are the number of frequency bands,
the frequency range of the system, and the sub-band coding technique. The total number of filters required is 2N. The
sub-band signals can be reconstructed perfectly with linear-phase FIR filters. The filter bank is designed to
overcome the effects of non-ideal transition-band and stop-band filtering. With real-world filters, the non-zero signal
energy in the transition and stop bands is reflected back into the passband during the interpolation process at the
receiver, causing aliasing. This aliasing is canceled in the filter bank during reconstruction of the signal. This paper deals
with the design of an 8-band filter bank and the coding of the subband signals at various bit rates using the DPCM
technique. A sampling rate of 44.1 kHz is used. The first two bands are coded at 8 bits/sample, the next three bands at
4 bits/sample, and the last three bands at 2 bits/sample. The lower-frequency bands are encoded at higher bit rates
because more energy is concentrated in the lower range. Simulation results in MATLAB show that a compression
ratio of 3.76:1 is achieved with good perceptual quality. Beyond this ratio the signal quality degrades noticeably,
which is not recommended. There is a trade-off between the compression ratio and the quality of the transmitted
signal.
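The quoted 3.76:1 ratio is consistent with this bit allocation if a 16-bit PCM source is assumed (the source bit depth is our assumption; it is not stated above): each block of 8 input samples yields one sample per subband after decimation.

# Worked check of the 3.76:1 figure, assuming 16-bit PCM input samples.
coded_bits = 2 * 8 + 3 * 4 + 3 * 2   # one sample per band: 34 bits per block
raw_bits = 8 * 16                    # 8 input samples at 16 bits each
print(raw_bits / coded_bits)         # -> 3.7647..., i.e. about 3.76:1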
We begin with a summary of the optimum fixed-type interpolation approximation that simultaneously minimizes the upper bounds of various measures of approximation error. The optimum interpolation functions used in this approximation are different from each other and have to cover the entire interval in the time domain to be approximated. Secondly, by applying the above approximation, we present the optimum running-type interpolation approximation for arbitrarily long but time-limited signals. The proposed interpolation functions are time-limited and can be realized by FIR filters. Hence, the approximation system can be realized by a time-invariant FIR
filter bank. We present a one-to-one correspondence between the approximation error in a small interval of the time domain and the approximation error in a limited but wide interval of the time domain, based on a Fredholm integral equation with a Pincherle-Goursat kernel. Finally, as a practical application of the optimum fixed-type
interpolation approximation, we present a discrete numerical solution of differential equations.
We introduce a multilevel PDE solver for equations whose solutions exhibit large gradients. Expanding on Ami Harten's
ideas, we construct an alternative to wavelet-based grid refinement, a multiresolution coarsening method capable of capturing
sharp gradients across different scales and thus improving PDE-based simulations by concentrating computational
resources in places where the solution varies sharply. Our scheme is akin to Finite Differences in that it computes derivatives
explicitly and then uses the derivative information to march the solution in time. However, we utilize meshfree
methods to compute derivatives and integrals in space-time to increase the robustness of our solver and tailor the basis
functions to the kd-tree structure provided by the multiresolution analysis.
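A minimal 1-D sketch of the Harten-style detail criterion behind such multiresolution coarsening (illustrative only; the interpolation order, tolerance, and variable names are assumptions) flags points where the fine-grid solution is not well predicted from the coarse grid:

import numpy as np

# Minimal 1-D sketch of Harten-style multiresolution flagging: details are the
# difference between fine-grid values and an interpolation from the coarse
# grid; points with large details are kept at fine resolution.

def flag_cells(f_fine, x_fine, eps=1e-3):
    x_coarse, f_coarse = x_fine[::2], f_fine[::2]      # coarsen by a factor 2
    f_pred = np.interp(x_fine, x_coarse, f_coarse)     # predict fine values
    details = np.abs(f_fine - f_pred)
    return details > eps                               # refine where large

x = np.linspace(0.0, 1.0, 257)
u = np.tanh(100.0 * (x - 0.5))      # solution with a sharp gradient
mask = flag_cells(u, x, eps=1e-4)
print(mask.sum(), "of", mask.size, "points flagged near the front")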
The fast development of multimedia technology during the last two decades has brought a different approach to the evaluation of image quality. In most cases, multimedia technology applications do not rely on the image fidelity criterion; instead, the human impression plays the main role. A model for perceptual assessment of image quality in multimedia technology is presented in this paper. The model exploits properties of the human visual system (HVS) while utilizing steerable
pyramidal decomposition. Image distortion features are based on the Jeffrey divergence (JD) as a metric between probability distributions of original and distorted image signal values in each subband of the steerable pyramid. The mean square error (MSE) is also computed. Data preprocessing using a mutual information (MI) approach is used to obtain a smaller set of objective distortion features describing the perceived image quality with reasonable precision. The impairment feature vector is processed by a radial basis function (RBF) artificial neural network (ANN) to allow simple adaptation of the
model with respect to the required mode of operation, fidelity-based or impression-based. The parameters of the ANN are adjusted using mean opinion scores (MOS) obtained from a group of assessors. The presented system mimics an assessment process with human subjects. Model performance is verified by comparing predicted quality with scores from human observers.
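Assuming the common definition of the Jeffrey divergence as the symmetrized Kullback-Leibler divergence, a minimal sketch of one such distortion feature, computed between histograms of a subband of the original and distorted images, could look as follows (bin count and smoothing constant are illustrative):

import numpy as np

# Sketch of one distortion feature: the Jeffrey divergence (symmetrized
# Kullback-Leibler divergence) between histograms of a pyramid subband of the
# original and the distorted image.

def jeffrey_divergence(a, b, bins=64, eps=1e-12):
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

orig = np.random.randn(10000)                  # stand-in for an original subband
dist = orig + 0.3 * np.random.randn(10000)     # distorted subband
print(jeffrey_divergence(orig, dist))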
Transform techniques generally are more robust than spatial techniques for watermark embedding. In this research,
neural networks and adaptive models are utilized to estimate watermarks in the presence of noise as well as other
common image processing attacks in the discrete cosine transform (DCT) and discrete wavelet transform (DWT)
domains. The proposed method can be used to semi-blindly determine the estimated watermark. In this paper, a
comparative study against a previous method, LMS correlation-based detection, is performed and demonstrates the efficacy
of the proposed adaptive neural network watermark embedding and detection scheme under different attacks. Finally,
the performance of the proposed scheme in the DCT domain is compared with that in the DWT domain.
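For orientation only, a generic additive DCT-domain embedder (not the authors' scheme; the coefficient positions and strength are arbitrary choices) shows the kind of block-DCT modification whose watermark an adaptive detector would later have to estimate:

import numpy as np
from scipy.fft import dctn, idctn

# Generic illustration: additive spread-spectrum watermarking of mid-frequency
# coefficients of 8x8 block DCTs. Positions and strength alpha are assumptions.

POSITIONS = [(2, 3), (3, 2), (3, 3), (2, 4)]   # mid-frequency coefficients

def embed(image, watermark_bits, alpha=4.0):
    img = image.astype(float).copy()
    k = 0
    for y in range(0, img.shape[0] - 7, 8):
        for x in range(0, img.shape[1] - 7, 8):
            block = dctn(img[y:y+8, x:x+8], norm='ortho')
            for pos in POSITIONS:
                bit = watermark_bits[k % len(watermark_bits)]
                block[pos] += alpha * (1.0 if bit else -1.0)
                k += 1
            img[y:y+8, x:x+8] = idctn(block, norm='ortho')
    return img

image = np.random.randint(0, 256, (64, 64))
wm = np.random.randint(0, 2, 128)
marked = embed(image, wm)
print(np.abs(marked - image).mean())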
When considering the multimedia production chain from content creation to end-user consumption,
watermarking provides a well-defined functionality: property-right identification and copy-maker tracking. However, its
place within this chain is not yet clearly established. The present paper describes an objective study aimed at establishing the
functional peculiarities of the case in which watermarking follows compression. First, some general limits concerning
transparency, robustness, and capacity in compressed-domain (MPEG-4 AVC) watermarking are identified. Then, these
results are discussed and compared to the uncompressed-domain watermarking case. The experiments were carried out
on a corpus of 5 video sequences, each of 35,000 frames (about 25 minutes), coded at 256 kbit/s.
This paper analyses a trade-off between convergence rate and distortion obtained through a multi-resolution training of a
Kohonen Competitive Neural Network. Empirical results show that a multi-resolution approach can improve the training
stage of several unsupervised pattern classification algorithms including K-means clustering, LBG vector quantization,
and competitive neural networks. While previous research concentrated on the convergence rate of on-line
unsupervised training, new results reported in this paper show that the multi-resolution approach can be used to
improve training quality (measured as a derivative of the rate-distortion function) at the expense of convergence speed.
The probability of achieving a desired point in the
quality/convergence-rate space of Kohonen Competitive Neural
Networks (KCNN) is evaluated using a detailed Monte Carlo set of experiments. It is shown that multi-resolution can
reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN. Alternatively,
the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data
as well as image data. Experimental results are reported and evaluated.
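A minimal sketch of the coarse-to-fine idea for a competitive (winner-take-all) network is shown below; the learning rates, sample sizes, and epoch counts are illustrative assumptions, not the paper's settings:

import numpy as np

# Minimal sketch of multi-resolution competitive (Kohonen) training: the
# codebook is first trained on a small sample of the data, then refined on
# the full data set.

def competitive_train(data, codebook, lr=0.05, epochs=5):
    cb = codebook.copy()
    for _ in range(epochs):
        for x in np.random.permutation(data):
            w = np.argmin(np.linalg.norm(cb - x, axis=1))  # winner take all
            cb[w] += lr * (x - cb[w])                      # move only the winner
    return cb

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(m, 0.3, (500, 2)) for m in (0.0, 2.0, 4.0)])
codebook = rng.uniform(data.min(), data.max(), (3, 2))

coarse = data[rng.choice(len(data), 150, replace=False)]   # low-resolution pass
codebook = competitive_train(coarse, codebook, lr=0.1)
codebook = competitive_train(data, codebook, lr=0.02)      # full-resolution pass
print(codebook)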
Lattice independence and strong lattice independence of a set of pattern vectors are fundamental mathematical
properties that lie at the core of pattern recognition applications based on lattice theory. Specifically, the development
of morphological associative memories robust to inputs corrupted by random noise is based on strongly
lattice independent sets, and real-world problems, such as autonomous endmember detection in hyperspectral
imagery, use auto-associative morphological memories as detectors of lattice independence. In this paper, we
present a unified mathematical framework that develops the relationship between different notions of lattice
independence currently used in the literature. Computational procedures are provided to test if a given set of
pattern vectors is lattice independent or strongly lattice independent; in addition, different techniques are fully
described that can be used to generate sets of vectors with the aforementioned lattice properties.
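As a hedged sketch of one such computational test (our reading of the standard max-type construction; the paper's exact procedures may differ), a vector x is lattice dependent on a set if it equals a maximum of additively shifted members of that set, and the optimal shifts can be computed directly:

import numpy as np

# Hedged sketch of a max-type lattice-dependence test: x is lattice dependent
# on {x^1,...,x^k} if x equals a max of additively shifted x^xi. Choosing
# a_xi = min_i(x_i - x^xi_i) gives the largest such combination below x, so
# equality with it decides dependence.

def lattice_dependent(x, others):
    a = np.array([np.min(x - v) for v in others])               # optimal shifts
    reconstruction = np.max(np.array(others) + a[:, None], axis=0)
    return np.allclose(reconstruction, x)

def lattice_independent(vectors):
    return not any(
        lattice_dependent(v, vectors[:i] + vectors[i+1:])
        for i, v in enumerate(vectors)
    )

v1, v2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 2.0, 1.0])
print(lattice_independent([v1, v2]))                              # True
print(lattice_independent([v1, v2, np.maximum(v1 - 1.0, v2)]))    # False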
K-means is a widely used objective-optimization clustering method. It is generally implemented along with the minimum
square error (MSE) minimization criterion. It has been shown empirically that the algorithm provides "good" MSE
results. Nevertheless, K-means has several deficiencies: first, it is sensitive to the seeding method and may only
converge to a local optimum. Second, the underlying optimization problem is known to be NP-hard; hence, validating the quality
of results may be intractable. Finally, the convergence rate of the algorithm depends on the seeding. Generally, a low
convergence rate is observed. This paper presents a multi-resolution K-means clustering method which applies the K-means
algorithm to a sequence of monotonically increasing-resolution samples of the given data. The cluster centers
obtained at a low-resolution stage are used as the initial
cluster centers for the next, higher-resolution
stage. The idea behind this method is that a good estimate of the initial location of the centers can be obtained through K-means
clustering of a sample of the input data. This can reduce the convergence time of K-means. Alternatively, the
algorithm can be used to obtain a better MSE in about the same time as the traditional K-means. The validity of the pyramid
K-means algorithm is tested using Monte Carlo simulations applied to synthetic data and to multi-spectral images, and
compared to traditional K-means. It is found that in the average case pyramid K-means improves the MSE by a factor of
four to six. This may require only 1.35 times as many iterations as the traditional K-means. Alternatively, it can reduce the
computation time by a factor of three to four with a slight improvement in the quality of clustering.
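A minimal sketch of the pyramid initialization idea follows (the sample sizes, iteration counts, and plain-numpy K-means below are illustrative assumptions):

import numpy as np

# Minimal sketch of pyramid (multi-resolution) K-means: centers found on a
# small sample of the data initialize K-means on the full data set.

def kmeans(data, centers, iters=20):
    c = centers.copy()
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for j in range(len(c)):
            if np.any(labels == j):
                c[j] = data[labels == j].mean(axis=0)
    return c

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.5, (2000, 2)) for m in (0.0, 3.0, 6.0)])

k = 3
init = data[rng.choice(len(data), k, replace=False)]
sample = data[rng.choice(len(data), 200, replace=False)]   # low-resolution pass
coarse_centers = kmeans(sample, init)
final_centers = kmeans(data, coarse_centers)               # high-resolution pass
print(final_centers)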
Evolutionary computation can increase the speed and accuracy of pattern recognition in multispectral images, for
example in automatic target tracking. Two methods are presented. The first treats the clustering process: it determines a cluster of pixels
around specified reference pixels so that the entire cluster is increasingly representative of the search object. An initial
population (of clusters) evolves into populations of new clusters, with each cluster having an assigned fitness score. This
population undergoes iterative mutation and selection. Mutation operators alter both the pixel cluster set cardinality and
composition. Several stopping criteria can be applied to terminate the evolution. An advantage of this evolutionary
cluster formulation is that the resulting cluster may have an arbitrary shape so that it most nearly fits the search pattern.
The second algorithm automates the selection of features (the
center-frequency and the bandwidth) for each population
member. For each pixel in the image and for each population member, the Mahalanobis distance to the reference set is
calculated and a decision is made as to whether or not the pixel belongs to a target. The numbers of correct and false decisions
define a receiver operating characteristic (ROC) curve, which is used to measure the fitness of a population member. Based on this fitness,
the algorithm decides which population members to use as parents for the next iteration.
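A sketch of the per-pixel decision and ROC-style fitness count described above, on synthetic data (the reference set, covariance handling, and thresholds are illustrative assumptions):

import numpy as np

# Sketch of the decision step: Mahalanobis distance of each pixel's feature
# vector to a reference (target) set, thresholded into target / non-target;
# correct and false detections over a threshold sweep trace an ROC-style curve.

rng = np.random.default_rng(0)
reference = rng.normal([2.0, 2.0], 0.4, (200, 2))      # target training pixels
mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

pixels = np.vstack([rng.normal([2.0, 2.0], 0.4, (100, 2)),   # true targets
                    rng.normal([0.0, 0.0], 0.4, (900, 2))])  # background
truth = np.array([1] * 100 + [0] * 900)

d = pixels - mu
dist = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))  # Mahalanobis distance

for thr in (1.0, 2.0, 3.0):
    detected = dist < thr
    tp = np.sum(detected & (truth == 1)) / 100
    fp = np.sum(detected & (truth == 0)) / 900
    print(f"threshold {thr}: detection rate {tp:.2f}, false-alarm rate {fp:.2f}")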
Real-time imaging of macromolecular interactions is a challenging issue in life science. Among the various techniques developed, the emerging SPR (Surface Plasmon Resonance) approach is one of the most promising: no molecular labeling is necessary to reveal molecular interactions, especially since the extension to microarrays allows multiple
interactions to be observed at the same time. Such real-time monitoring of various biomolecular interactions raises several challenges in terms of image segmentation of the spotted material and extraction and interpretation of the mean hybridization signal of each spot. This paper develops an automated approach for SPR image analysis tackling the above-mentioned issues. First, a
spatio-temporal anisotropic filtering removes the random noise of
large amplitude present in the SPR data. The pre-filtered signal supplies an image segmentation module in charge of the automated detection of the material deposited on the spots. Microarray spot segmentation is performed by combining advanced gray-level morphological operators and a priori knowledge of the spotting geometry. Non-uniform hybridization within a spot can thus be detected and the spot excluded from analysis. In the same manner, the presence of image artefacts (chip scratches, deposit leakage) can be flagged during the experiment. The mean signal over each valid spot is then extracted, and its temporal behavior provides the kinetic parameters characterizing the biological interaction (hybridization/dehybridization speed, end-point state). The preliminary
results obtained on test biomolecular interactions confirmed our expectations in sensitivity improvement with respect to
fluorescence-based techniques. A larger validation of the proposed approach in terms of maximum sensitivity allowed for biological interactions discrimination is currently in progress.
Diffuse lung diseases (DLD) include a heterogeneous group of
non-neoplastic diseases resulting from damage to the lung parenchyma by varying patterns of inflammation. The characterization and quantification of DLD severity using MDCT, mainly in interstitial lung diseases and emphysema, is an important issue in clinical research for the evaluation of new therapies. This paper develops a 3D automated approach for the detection and diagnosis of diffuse lung diseases such as fibrosis/honeycombing, ground glass, and emphysema. The proposed methodology combines multi-resolution 3D morphological filtering (exploiting the sup-constrained connection cost operator) and
graph-based classification for a full characterization of the parenchymal tissue. The morphological filtering performs a
multi-level segmentation of the low- and medium-attenuated lung regions as well as their classification with respect to a granularity criterion (multi-resolution analysis). The original intensity range of the CT data volume is thus reduced in the segmented data to a number of levels equal to the resolution depth used (generally ten levels). The specificity of such morphological filtering is to extract tissue patterns locally contrasting with their neighborhood and of size smaller than the resolution depth, while preserving their original shape. A multi-valued hierarchical graph describing the segmentation result is built up according to the resolution level and the adjacency of the different segmented components. The graph nodes are then enriched with the textural information carried by their associated components. A graph analysis-reorganization based on the node attributes delivers the final classification of the lung parenchyma into normal and ILD/emphysematous regions. It also makes it possible to discriminate between different types, or development stages, within the same class of diseases.
The next generation of Compton telescopes (such as MEGA or NCT) will detect impinging gamma rays by
measuring one or more Compton interactions, possibly electron tracks, and a final photo absorption. However,
the recovery of the original parameters of the photon, especially its energy and direction, is a challenging task,
since the measured data consist only of a set of energy and position measurements, whose ordering, i.e. the
path of the photon, is unknown. Thus the main tasks of the pattern recognition algorithm are to identify the
interaction sequence of the photon (i.e. which hit is the start point) and distinguish the pattern from background
signatures, especially incompletely absorbed events.
The most promising approach up to now is based on Bayesian statistics: The Compton interactions are
parameterized in a multi-dimensional data space, which contains the interaction information of the Compton
sequence as well as geometry information of the detector. For each data space cell, the probability that the
corresponding interaction sequence belongs to a correctly ordered, completely absorbed source photon can be
determined by Bayesian statistics and detailed simulations. This probability can then be used to distinguish
source photons from incompletely absorbed photons.
Simulations show that the Bayesian approach can improve the 68% event containment of the ARM distribution
by up to 40%, and results in a much better separation between "good" and "bad" events. In addition, sensitivity
improvements of up to a factor of 1.7 can be achieved.
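A minimal sketch of the Bayesian step (the two data-space features and the training labels below are synthetic stand-ins, not the real Compton data space):

import numpy as np

# Minimal sketch: simulated events populate a binned data space, and each cell
# stores the fraction of events that are correctly ordered and completely
# absorbed. A measured event is then classified by looking up its cell.

rng = np.random.default_rng(0)
features = rng.uniform(0.0, 1.0, (100000, 2))     # e.g. scatter angle, energy ratio
good = rng.random(100000) < 0.3 + 0.5 * features[:, 0]   # synthetic truth labels

bins = (20, 20)
h_good, edges = np.histogramdd(features[good], bins=bins, range=[(0, 1), (0, 1)])
h_all, _ = np.histogramdd(features, bins=bins, range=[(0, 1), (0, 1)])
p_good = np.divide(h_good, h_all, out=np.zeros_like(h_good), where=h_all > 0)

# Classify a measured event by looking up its cell probability
event = np.array([0.83, 0.41])
idx = tuple(np.searchsorted(e, v) - 1 for e, v in zip(edges, event))
print("P(correctly ordered, fully absorbed) =", p_good[idx])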
This paper introduces a new interactive mobile TV application related to parliament sessions. This application aims to
provide additional information to mobile TV users by automatically inserting, in real time, interactive content
(complementary information, the subject of the current session...) into the original TV program, using MPEG-4 streaming video
and extra real-time information (news, events, databases... from RSS streams, Internet links...). Here, we propose an
architecture based on plug-in multimedia analyzers to generate the contextual description of the media and on an
interactive scene generator to dynamically create the related interactive scenes. The description is implemented according to the
MPEG-7 standard.
The method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing
medium: parallel-hierarchical (PH) networks, investigated as a model of a neuron-like data-processing
scheme [1-5]. The approach has a number of advantages compared with other methods of forming neuron-like
media (for example, the known methods of constructing artificial neural networks). The main advantage of the
approach is the use of the multilevel, parallel interaction dynamics of information signals at different hierarchy levels of
the network, which makes it possible to exploit such known natural features of the organization of computation as the
topographic nature of mapping, the simultaneity (parallelism) of signal operation, the inlaid structure and rough hierarchy
of the cortex, and spatially and temporally correlated mechanisms of perception and training [5].
The moment-preserving thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values that MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal-processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the
representative values to the various signal-processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the
root-sum-square (RSS) of the two representative values of each signal block using a quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence, under the constraint of inaudibility relative to the human
psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power-scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks, including MP3 compression, re-sampling, jittering, and DA/AD conversion.
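A hedged sketch of the two building blocks described above, on synthetic data (the quantization step, block length, and exact embedding rule are illustrative assumptions and may differ from the paper's):

import numpy as np

# (1) The two moment-preserving representative values of a block, obtained as
# the roots of the quadratic whose coefficients solve the moment equations.
# (2) Embedding one bit per block by quantizing the root-sum-square (RSS) of
# those values and rescaling the block accordingly.

def representative_values(block):
    m1, m2, m3 = (np.mean(block ** k) for k in (1, 2, 3))
    # Solve [[1, m1], [m1, m2]] @ [c0, c1] = [-m2, -m3]; z0, z1 are the roots
    # of z**2 + c1*z + c0 = 0 (moment-preserving two-point representation).
    c0, c1 = np.linalg.solve(np.array([[1.0, m1], [m1, m2]]),
                             np.array([-m2, -m3]))
    z0, z1 = np.roots([1.0, c1, c0])
    return float(np.real(z0)), float(np.real(z1))

def embed_bit(block, bit, step=0.05):
    z0, z1 = representative_values(block)
    rss = np.sqrt(z0 ** 2 + z1 ** 2)
    # Quantization-index style rule: push the RSS onto an even- or odd-indexed
    # lattice point, then rescale the whole block by the same factor.
    target = (np.floor(rss / step) // 2 * 2 + (1 if bit else 0) + 0.5) * step
    return block * (target / rss)

block = np.random.randn(1024) * 0.2
marked = embed_bit(block, 1)
z0, z1 = representative_values(marked)
rss = np.sqrt(z0 ** 2 + z1 ** 2)
print(int(np.floor(rss / 0.05)) % 2)   # recovers the embedded bit (1)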