This PDF file contains the front matter associated with SPIE Proceedings Volume 6763, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Defined as the maximum amount of information that can be inserted into an original medium at a prescribed transparency and robustness, watermarking capacity has been a challenging research topic in recent years. The present paper overcomes several current limitations in this respect. As capacity strongly depends on the statistical behaviour of attacks, the first part of the paper is devoted to their in-depth investigation. An original statistical approach shows that several types of attacks (filtering, small rotation, StirMark) can be modelled by probability density functions. These new, accurate models are then taken as the starting point for the probability evaluation. The experimental study is based on watermarking methods that insert the mark into the hierarchy of coefficients of three wavelet transforms, namely the (2,2), (4,4) and (9,7). The video corpus consisted of 10 video sequences of about 25 minutes each, with heterogeneous content.
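Although the paper's attack models are more elaborate, the role such a probability density function plays in a capacity estimate can be illustrated with a simple Gaussian-channel sketch (a minimal illustration under assumed additive-noise attacks, not the authors' method): estimate the attack-induced noise variance on the wavelet coefficients and plug it into the Shannon capacity formula.

```python
import numpy as np

def attack_noise_variance(coeffs_orig, coeffs_attacked):
    """Model the attack as additive noise on the wavelet coefficients
    and estimate its variance from original/attacked pairs."""
    return np.var(coeffs_attacked - coeffs_orig)

def gaussian_capacity(embedding_power, noise_variance):
    """Shannon capacity (bits per coefficient) of an additive white
    Gaussian noise channel with the given signal and noise powers."""
    return 0.5 * np.log2(1.0 + embedding_power / noise_variance)

# Hypothetical example: coefficients before/after a filtering attack.
rng = np.random.default_rng(0)
c = rng.normal(0.0, 10.0, size=100_000)              # host coefficients
c_attacked = c + rng.normal(0.0, 2.0, size=c.size)   # simulated attack
sigma2 = attack_noise_variance(c, c_attacked)
print(gaussian_capacity(embedding_power=1.0, noise_variance=sigma2))
```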
This article gives a systematic overview of compression, smoothing and denoising techniques based on shrinkage of wavelet coefficients, and proposes an advanced technique for generating enhanced composite wavelet shrinkage strategies.
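The basic shrinkage pipeline that such composite strategies build on can be sketched in a few lines with PyWavelets (a generic soft-thresholding baseline with the universal threshold, not the composite strategies proposed in the paper):

```python
import numpy as np
import pywt

def wavelet_shrink(signal, wavelet="db4", level=4):
    """Denoise a 1-D signal by soft-thresholding its wavelet detail
    coefficients with the Donoho-Johnstone universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail band.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)

t = np.linspace(0, 1, 1024)
noisy = np.sin(8 * np.pi * t) + 0.3 * np.random.randn(t.size)
clean = wavelet_shrink(noisy)
```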
Nowadays, one of the most important issues in image transforms is taking into account the singularities of a signal organized in more than one dimension. The best example is the extension of the wavelet transform to two-dimensional signal analysis. The drawback of passing from one-dimensional to two-dimensional signal processing by simply using the separability of the wavelet transform is the over-representation of irregularities in the wavelet transform domain. To reduce this drawback, second-generation wavelet transforms try to take the geometrical aspects of the image into account in its analysis (examples include bandelets, curvelets, ridgelets and others).
Two-layer bandelets, or first-generation bandelets, are among the first wavelet transforms that use the geometric flow to enhance the efficiency of the process. The original proposition is mainly theoretical; here we propose a practical interpretation of this work in order to produce a new implementation of the transform.
A modification of the Haar wavelet method, in which the stepsize of the argument is variable, is proposed. To establish the efficiency of the method, three test problems for which the exact solution is known are considered. Computer simulations show a clear preference for the suggested method compared with the Haar wavelet method with a constant stepsize.
Expo-rational B-splines were introduced in 2002 and have by now been shown to exhibit certain 'super-properties' compared to ordinary polynomial B-splines. The Euler Beta-function B-splines, a polynomial version of the expo-rational B-splines, were introduced very recently and have been shown to share some of the 'super-properties' of the expo-rational B-splines. In this paper we discuss several of the ways in which these 'super-properties' can be used to enhance the theory of polynomial spline wavelets and multiwavelets.
In the present paper a procedure is suggested to inversely determine the elastic foundation constants of delaminated vibrating beams using wavelet packets and artificial neural network (ANN) identification. The modal displacement responses of the beam with and without foundation are used to calculate energy variation vectors with the aid of the wavelet packet transform. The ANN model is trained to establish the mapping relationship between the modal response and the elastic foundation parameters. The components of the energy variation vectors are used as the inputs of the ANN model, and its outputs are the constants of the elastic foundation. The Euler-Bernoulli beam model is used to calculate the displacement responses of the vibrating beam resting on the elastic foundation. The calculations take into account beam damage in the form of delamination.
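The energy-vector feature extraction described above can be sketched with PyWavelets (a generic illustration; the wavelet, decomposition depth and normalization are assumptions, not the authors' settings):

```python
import numpy as np
import pywt

def packet_energy_vector(signal, wavelet="db4", level=3):
    """Energy of each terminal node of a wavelet packet decomposition;
    vectors like this serve as compact inputs for an ANN."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(node.data ** 2) for node in nodes])

def energy_variation(resp_with_foundation, resp_without):
    """Relative energy variation between the two modal responses;
    the resulting vector would feed a regression ANN whose outputs
    are the elastic foundation constants."""
    e1 = packet_energy_vector(resp_with_foundation)
    e0 = packet_energy_vector(resp_without)
    return (e1 - e0) / (e0 + 1e-12)
```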
This research presents a different fault diagnostic approach using the Stationary Wavelet Transform (SWT) as an alternative to the Discrete Wavelet Transform (DWT). The aim is to find potential defects that exist in healthy motor bearings as manufacturing defects, as compared to the faulty case. The approach extracts the origin of the bearing damage that develops during the aging process, and in this manner the advantage of the SWT over the DWT is emphasized. Hence, it can be introduced as a new approach for condition monitoring studies in rotating machinery such as induction motors.
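A minimal sketch of the kind of SWT feature extraction such a study relies on (the wavelet, level and band-energy feature are illustrative assumptions):

```python
import numpy as np
import pywt

def swt_band_energies(vibration, wavelet="db8", level=4):
    """Decompose a vibration record with the undecimated (stationary)
    wavelet transform and return the energy of each detail band.
    Unlike the DWT, the SWT is shift-invariant, which helps when
    comparing transient bearing-fault signatures across records."""
    # pywt.swt requires the length to be a multiple of 2**level.
    n = (len(vibration) // 2 ** level) * 2 ** level
    coeffs = pywt.swt(vibration[:n], wavelet, level=level)
    return np.array([np.sum(cD ** 2) for (_, cD) in coeffs])
```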
Vibration measurement is of great interest in many fields (e.g. mechanical engineering). Laser Doppler Vibrometry (LDV) provides a non-contact method of measuring vibrations very exactly (due to zero mass loading) and greatly increases the investigation capabilities of experimental modal analysis. By applying a standard FFT-algorithm it is possible to analyze the frequency spectrum of the vibrating structure. To overcome many of the shortcomings of classical Fourier-based signal analysis, which are mainly the result of neglecting time resolution, the wavelet transform has been established as an important technique in time-frequency analysis combining high temporal resolution with good frequency resolution. By applying complex wavelets (e.g. the Morlet wavelet) amplitude and phase information can be extracted from the analyzed signal and time is kept as an additional parameter which allows measurement of signal coherence over time as well. The aim of this work is to achieve accurate and reliable quantitative measurements for the characterization of the vibration characteristics of different skis (even when measured under harsh industrial conditions). In this regard, coherence is very sensitive to fluctuations of linearity in phase, relatively less so to nonlinear fluctuations of amplitude and completely insensitive to linear fluctuations in amplitude. Thus, the application of wavelet coherence analysis can give additional detailed insight into the dynamics of vibrations. Experimental results of laser vibrometry measurements are presented and the wavelet-based coherence approach is discussed and compared to classical Fourier methods in order to show the advantages of the wavelet-based representation.
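The wavelet coherence computation outlined in this abstract can be prototyped as follows (a simplified illustration: the complex Morlet parameters and the ad-hoc moving-average smoother are assumptions, not the authors' processing chain):

```python
import numpy as np
import pywt

def _smooth(a, win=32):
    """Moving average along the time axis (works for complex arrays)."""
    k = np.ones(win) / win
    return np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), -1, a)

def wavelet_coherence(x, y, dt, scales, wavelet="cmor1.5-1.0"):
    """Magnitude-squared wavelet coherence from complex Morlet CWTs.
    Smoothing the cross- and auto-spectra is essential: without it
    the coherence is identically 1 at every time-scale point."""
    Wx, _ = pywt.cwt(x, scales, wavelet, sampling_period=dt)
    Wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=dt)
    Sxy = _smooth(Wx * np.conj(Wy))
    Sxx = _smooth(np.abs(Wx) ** 2)
    Syy = _smooth(np.abs(Wy) ** 2)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)
```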
In this paper we present a new method for joint denoising of the depth and luminance images produced by a time-of-flight camera. We assume that the sequence does not contain the outlier points which can be present in depth images. Our method first estimates the noise and signal covariance matrices and then performs vector denoising. The luminance image is segmented into similar contexts using the k-means algorithm, and these contexts are used for the calculation of the covariance matrices. The denoising results are compared with ground truth images obtained by averaging multiple frames of a still scene.
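A rough sketch of the context-based covariance estimation step (the cluster count is an illustrative assumption, and the linear MMSE shrinkage below stands in for the paper's vector denoiser):

```python
import numpy as np
from sklearn.cluster import KMeans

def context_covariances(depth, luminance, n_clusters=8):
    """Cluster pixels into luminance contexts, then estimate a
    per-cluster covariance of the (depth, luminance) vector."""
    labels = KMeans(n_clusters=n_clusters,
                    n_init=10).fit_predict(luminance.reshape(-1, 1))
    vecs = np.stack([depth.ravel(), luminance.ravel()], axis=1)
    covs = [np.cov(vecs[labels == k].T) for k in range(n_clusters)]
    return labels, covs

def wiener_shrink(vecs, signal_cov, noise_cov):
    """Linear MMSE (Wiener) estimate of zero-mean clean vectors under
    a Gaussian signal/noise model: x_hat = Cs (Cs + Cn)^-1 y."""
    G = signal_cov @ np.linalg.inv(signal_cov + noise_cov)
    return vecs @ G.T
```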
In this work, we address the problem of multichannel image retrieval in the compressed domain. A wavelet transform is applied to each component of the multispectral image, and the salient features are computed from the resulting wavelet subbands. To this end, two approaches are envisaged. In the first one, the wavelet coefficients of each component are considered separately, whereas in the second one they are processed jointly. More precisely, the contribution of this work lies in the fact that the features are extracted from the multivariate distribution of the wavelet coefficients, modelled by means of copulas. Experimental results indicate that the second approach gives the best performance in terms of precision and recall.
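As a hedged illustration of the joint approach, the dependence structure of corresponding subbands can be summarized with a Gaussian copula fitted by rank transformation (a simplification standing in for the copula models used in the paper):

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_correlation(subbands):
    """Fit the correlation matrix of a Gaussian copula to the wavelet
    coefficients of several image components.

    `subbands` is a list of same-shape detail subbands, one per
    spectral component. Each marginal is mapped to normal scores via
    its empirical ranks, so only the dependence structure remains."""
    cols = []
    for band in subbands:
        u = rankdata(band.ravel()) / (band.size + 1.0)  # empirical CDF
        cols.append(norm.ppf(u))                        # normal scores
    z = np.stack(cols, axis=1)
    return np.corrcoef(z, rowvar=False)  # copula parameter / feature
```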
As the dimensions of integrated circuit devices continue to shrink, the effect of the line edge roughness (LER) of resist patterns on device performance is becoming a serious problem, and the desire to measure LER more accurately is growing. We have previously proposed a method for LER measurement based on wavelet multiresolution analysis; to achieve higher accuracy, the wavelet filter must still be optimized to reveal line edges in a noisy scanning electron microscopy (SEM) signal profile. In this paper, we statistically estimate a wavelet that is matched to LER measurement from the SEM signal profile. To this end, we propose a novel mathematical and statistical model of the SEM signal profile. The mathematical model has been deduced from a variety of secondary-electron signal shapes calculated by Monte Carlo simulation. It is also necessary to consider statistical effects such as the properties of the atoms within the nanostructures and fluctuations of the exposure. We therefore formulate a model that takes into account four characteristics of critical dimension (CD) metrology in photolithography: the exponential distribution of the image intensity around the impinging electron beam, the spatial frequency distribution of the LER, the phase difference between the two sides of a line pattern, and shot noise in SEM images. The statistically matched wavelet is estimated from local signal profiles around the true edge positions. The resulting model SEM images show a high degree of similarity to real ones. We also compared the CD-measurement accuracy of the matched wavelet with that of the first-order derivative-of-Gaussian wavelet, which was selected as the most suitable one for lithography metrology in previous work. The CDs measured by the two wavelets were almost equal, since the matched wavelet has a relatively small noise-reduction effect.
Multiscale analysis has become indispensable in image processing and computer vision. Our work is motivated by the need to efficiently represent 3D shapes that exhibit a spherical topology. This note presents a wavelet-based model for shape denoising and data compression. The 3D shape signal is first encoded using biorthogonal spherical wavelet functions defined on a 3D triangulated mesh. We propose a Bayesian shrinkage model for this type of second-generation wavelets in order to eliminate wavelet coefficients that likely correspond to noise. In this way we are able to reduce dimension without losing significant information, by estimating a noiseless version of the shape.
Most wavelet transforms used in practice are based on integer sampling factors. Wavelet transforms based on rational sampling factors offer, in principle, the potential for time-scale signal representations with a finer frequency resolution. Previous work on rational wavelet transforms and filter banks includes filter design methods and frequency-domain implementations. We present several specific examples of Daubechies-type filters for a discrete orthonormal rational wavelet transform (FIR filters having a maximum number of vanishing moments) obtained using Gröbner bases. We also present the design of overcomplete rational wavelet transforms (tight frames) with FIR filters obtained using matrix spectral factorization.
In this paper we study the restoration of multicomponent images and, more particularly, the effects of taking into account the dependencies between the image components. The method used is an expectation-maximization algorithm that iteratively applies a deconvolution step and a denoising step. It exploits the Fourier transform's economical representation of noise for deconvolution, and the wavelet transform's economical representation of piecewise smooth images for denoising. The proposed restoration procedure performs wavelet shrinkage in a Bayesian denoising framework, applying multicomponent probability density models for the wavelet coefficients that fully account for the intercomponent correlations. In the experimental section, we compare our multicomponent procedures to their single-component counterpart. The results show that the methods using a multicomponent model, and especially the one using the Gaussian scale mixture model, perform better than the single-component procedure.
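The alternation of Fourier-domain deconvolution and wavelet-domain denoising can be sketched as follows (a single-component, non-Bayesian simplification in the spirit of the EM procedure above; the step size, threshold and wavelet are assumptions):

```python
import numpy as np
import pywt

def deconvolve_denoise(blurred, psf, n_iter=20, step=1.0, thr=0.02):
    """Gradient (Landweber) deconvolution steps in the Fourier domain
    alternated with wavelet soft thresholding, i.e. a basic sparse
    deconvolution loop. `psf` is assumed normalized and registered
    at the array origin."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    x = blurred.copy()
    for _ in range(n_iter):
        # Deconvolution: gradient step on ||y - h*x||^2 in Fourier.
        X = np.fft.fft2(x)
        X = X + step * np.conj(H) * (Y - H * X)
        x = np.real(np.fft.ifft2(X))
        # Denoising: soft-threshold the wavelet detail coefficients.
        coeffs = pywt.wavedec2(x, "db4", level=3)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode="soft") for d in lvl)
            for lvl in coeffs[1:]]
        x = pywt.waverec2(coeffs, "db4")
    return x
```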
In this paper, we study the denoising of multicomponent images. We present a framework of spatial wavelet-based denoising techniques based on Bayesian least-squares optimization, using prior models for the wavelet coefficients that account for the correlations between the image components. Within this framework, multicomponent prior models for the wavelet coefficients are required that (a) fully account for the interband correlations between the image components and (b) approximate the marginal distributions of the wavelet coefficients well. For this purpose, multicomponent heavy-tailed models are applied. We analyze three mixture priors: Gaussian scale mixture (GSM) models, Laplacian mixture models and Bernoulli-Gaussian mixture models. As an extension of the Bayesian framework, we propose a framework that also accounts for the correlation between the multicomponent image and an auxiliary noise-free image, in order to improve the SNR of the former; for this, a GSM prior model is applied. Experiments are conducted in the domain of remote sensing, in both simulated and real noisy conditions.
The recurrent presence of clouds and cloud shadows in aerial or remotely sensed images is an awkward problem that severely limits the capability to exploit these images routinely. Removing the cloud-contaminated portions of an image and then filling in the missing data is an important and cumbersome photo-editing task. The intent of this work is to propose a technique for the reconstruction of areas obscured by clouds in a remotely sensed image. To this end, a new efficient reconstruction technique for missing-data synthesis is presented. The technique is based on the Bandelet transform and multiscale geometrical grouping, and consists of two steps. In the first step, the curves of the geometric flow in the different zones of the image are determined using the Bandelet transform with multiscale grouping; this allows a better representation of the multiscale geometry of the image's structures. Once this geometry is well represented, the information inside the cloud-contaminated zone is synthesized by propagating the geometric flow curves into that zone. This second step is accomplished by minimizing a functional whose role is to reconstruct the missing or cloud-contaminated zone independently of the size and topology of the reconstruction (inpainting) domain, so that the flow lines are well tied inside the cloud-contaminated zone. The proposed technique is illustrated with examples on multispectral aerial images, and the results are compared with those obtained by other cloud-removal techniques.
The objective of this work was the implementation of a computational algorithm for the analysis of electrooculographic signals representing different types of saccadic and antisaccadic movements. A study of the nature of electrooculographic signals was carried out to determine their frequency composition. It showed that the signals studied consist mainly of low-frequency components; they also contain components spanning a wide range of high frequencies, but as a whole these make up a smaller part of the signals. Computational routines were developed to filter the signals, eliminating the noise they may contain and improving the quality of the original signal. In the implementation of these routines, two mathematical tools were used: the wavelet transform, and the Parks-McClellan (Remez) algorithm for low-pass filter design. The results showed that filtering with the wavelet transform was faster, but the filters designed with the Parks-McClellan algorithm did not appreciably distort the original signal.
Once the signals were filtered, characterization took place, consisting of the automatic detection of the regions corresponding to saccades. This detection used another application of the wavelet transform: locating abrupt changes in a signal. The parameters of each saccade were then determined: amplitude, duration, gain, peak velocity and latency. Some important relations between the saccade parameters of the electrooculographic signals were obtained, and others found by different authors were verified.
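Locating abrupt changes with the wavelet transform, as used here for saccade detection, can be sketched as follows (the wavelet, scale and threshold factor are illustrative assumptions):

```python
import numpy as np
import pywt

def detect_abrupt_changes(signal, wavelet="haar", k=4.0):
    """Flag samples whose fine-scale wavelet detail coefficients are
    unusually large; such bursts mark abrupt changes like saccade
    onsets in an EOG trace. The undecimated transform (swt) keeps
    the coefficients aligned with the original samples."""
    n = (len(signal) // 2) * 2            # swt needs an even length here
    (_, d), = pywt.swt(signal[:n], wavelet, level=1)
    sigma = np.median(np.abs(d)) / 0.6745  # robust noise estimate
    return np.nonzero(np.abs(d) > k * sigma)[0]
```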
Cardiovascular diseases, in particular Acute Myocardial Infarction (AMI), are the leading cause of death in industrialized countries. Measurements of indicators of the behavior of the autonomic nervous system, such as Heart Rate Variability (HRV) and QT Interval Dispersion (QTD), in the acute phase of AMI (the first 48 hours after the event) give a good estimation of the subsequent cardiac events that a person who has suffered an AMI may present.
This paper describes the implementation of the second version of Prognostic-AMI, a software tool that automates the calculation of such indicators. It uses the Discrete Wavelet Transform (DWT) to de-noise the signals and to detect the QRS complex and the T-wave from a conventional 12-lead electrocardiogram. The indicators are measured in both the time and the frequency domain. A pilot trial performed on a sample population of 76 patients shows that people who had cardiac complications in the acute phase of AMI have low values of the HRV and QTD indicators.
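A minimal sketch of DWT-based QRS detection of the kind such a tool performs (the wavelet, the retained scales and the thresholding rule are assumptions, not the tool's actual settings):

```python
import numpy as np
import pywt

def detect_r_peaks(ecg, fs, wavelet="db4", level=4, k=2.0):
    """Detect R-peaks by reconstructing only the mid-scale detail
    bands (where QRS energy is assumed to concentrate), squaring the
    result, and thresholding with a simple refractory period."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    kept = [np.zeros_like(coeffs[0])] + [
        c if i in (2, 3) else np.zeros_like(c)   # keep cD3 and cD2
        for i, c in enumerate(coeffs[1:], start=1)]
    qrs = pywt.waverec(kept, wavelet)[: len(ecg)] ** 2
    thr = k * qrs.mean()
    refractory = int(0.25 * fs)          # at most one beat per 250 ms
    peaks, last = [], -refractory
    for i in np.nonzero(qrs > thr)[0]:
        if i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)
```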
Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics,
automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain
problems in image registration and medical image visualization, in this note, we describe a simple gradient
descent methodology for computing the optimal L2 transport mapping which may be easily implemented using
a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A
numerical example is presented illustrating our ideas.
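In one dimension the optimal L2 transport map has a closed form, the composition of the target's quantile function with the source's CDF, which gives a compact illustration of the object the gradient-descent scheme computes in higher dimensions (an illustration only, not the authors' multiresolution method):

```python
import numpy as np

def ot_map_1d(src_samples, dst_samples, x):
    """Monge map T = F_dst^{-1} o F_src for the L2 cost in 1-D,
    estimated from samples via the empirical CDF and quantiles."""
    src = np.sort(src_samples)
    dst = np.sort(dst_samples)
    # Empirical CDF of the source evaluated at x ...
    u = np.searchsorted(src, x, side="right") / len(src)
    # ... pushed through the empirical quantile function of the target.
    return np.quantile(dst, np.clip(u, 0.0, 1.0))

rng = np.random.default_rng(1)
mu = rng.normal(0.0, 1.0, 5000)   # source density samples
nu = rng.normal(3.0, 0.5, 5000)   # target density samples
print(ot_map_1d(mu, nu, np.array([0.0])))  # ~3.0: the source median
                                           # maps to the target median
```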
TWiGS (two-dimensional wavelet transform with generalized cross-validation and soft thresholding) is a novel algorithm for denoising liquid chromatography-mass spectrometry (LC-MS) data for use in "shot-gun" proteomics. Proteomics, the study of all proteins in an organism, is an emerging field that has already proven successful for drug and disease discovery in humans. A number of constraints limit the effectiveness of LC-MS for shot-gun proteomics, where the chemical signals are typically weak and the data sets are computationally large. Most algorithms suffer greatly from researcher-driven bias, making their results irreproducible and unusable by other laboratories. We therefore introduce a new algorithm, TWiGS, that removes electrical (additive white) and chemical noise from LC-MS data sets. TWiGS is developed as a true two-dimensional algorithm, which operates in the time-frequency domain and minimizes the amount of researcher bias. It is based on the traditional discrete wavelet transform (DWT), which allows for fast and reproducible analysis. The separable two-dimensional DWT decomposition is paired with generalized cross-validation and soft thresholding. The choice of wavelet (Haar, Coiflet-6 or Daubechies-4) and the number of decomposition levels are determined from observed experimental results. On a synthetic LC-MS data model, TWiGS accurately retains the key characteristics of the peaks in both the time and the m/z domain, and can detect peaks in noise of the same intensity. TWiGS is applied to angiotensin I and II samples run on a liquid chromatography electrospray-ionization time-of-flight mass spectrometer (LC-ESI-TOF-MS) to demonstrate its utility for the detection of low-lying peaks obscured by noise.
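The core TWiGS ingredients, a separable 2-D DWT with per-subband soft thresholds chosen by generalized cross-validation, can be sketched as follows (a simplified sketch; the wavelet name, grid search and GCV variant are assumptions, not the published algorithm):

```python
import numpy as np
import pywt

def gcv_threshold(d, n_grid=50):
    """Pick a soft threshold by generalized cross-validation:
    GCV(t) = N * ||d - soft(d, t)||^2 / N0(t)^2, where N0 is the
    number of coefficients set exactly to zero."""
    N = d.size
    best_t, best_g = 0.0, np.inf
    for t in np.linspace(1e-9, np.abs(d).max(), n_grid):
        dt = pywt.threshold(d, t, mode="soft")
        n0 = np.count_nonzero(dt == 0)
        if n0 == 0:
            continue
        g = N * np.sum((d - dt) ** 2) / n0 ** 2
        if g < best_g:
            best_t, best_g = t, g
    return best_t

def twigs_like_denoise(lcms, wavelet="coif2", level=3):
    """Separable 2-D DWT plus per-subband GCV soft thresholding,
    applied to an LC-MS intensity matrix (time x m/z)."""
    coeffs = pywt.wavedec2(lcms, wavelet, level=level)
    out = [coeffs[0]]
    for lvl in coeffs[1:]:
        out.append(tuple(
            pywt.threshold(d, gcv_threshold(d), mode="soft") for d in lvl))
    return pywt.waverec2(out, wavelet)
```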
In this paper we provide an overview of an orthonormal (multi)wavelet-based method for the isometric immersion of smooth n-variate, m-dimensional vector fields onto fractal curves and surfaces. This method was proposed in an earlier publication by two of the authors, with the purpose of extending the applicability of emerging GPU programming to a rich diversity of multidimensional problems. Here we propose (in Section 3) several directions for upgrading the method, with respective new applications.
In precision agriculture, reducing herbicide applications requires an accurate detection of weed patches. To quantify weed infestations from images, it is necessary to identify the crop rows with a line detection algorithm and to discriminate weed from crop. Our laboratory has developed several methods for line detection based on the Hough transform, a double Hough transform, or Gabor filtering. The Hough transform is well adapted to images affected by perspective deformations, but its computational burden is heavy and on-line applications are hardly feasible. To alleviate this problem, we have used a Gabor filter to enhance the crop rows present in the image; however, while this method is robust for parallel crop rows (without perspective distortions), it requires deforming the image with an inverse projection matrix when an embedded camera is used. We propose to use a discrete dyadic wavelet transform to implement a filter in the scale/space domain. We can thus extract the vertical details contained in various parts of the image at different levels of resolution. Each retained vertical detail level enhances the crop rows in a specific part of the initial image, and the combination of these details enables us to discriminate crop from weeds with a simple logical operation, as sketched below. Thanks to the fast wavelet transform algorithm, this spatial method can easily be implemented for a real-time application, and it leads to better results than those obtained with Gabor filtering. For this method, the weed infestation rate is estimated and the performance is compared to that of other methods. A discussion concludes on the ability of the method to detect crop rows in agronomic images. Finally, we consider the ability of this spatial-only approach to classify weeds versus crop.
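The vertical-detail extraction and logical combination can be sketched with a 2-D DWT (the wavelet, depth, percentile threshold and AND-combination are illustrative assumptions):

```python
import numpy as np
import pywt

def crop_row_mask(image, wavelet="haar", level=3, q=90):
    """Enhance near-vertical structures (crop rows) by keeping only
    the vertical detail subbands of a 2-D DWT, then combine the
    per-level responses with a logical AND of simple thresholds."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    mask = np.ones(image.shape, dtype=bool)
    for cH, cV, cD in coeffs[1:]:
        mag = np.abs(cV)                 # cV responds to vertical edges
        # Nearest-neighbour upsample the subband back to image size.
        ry = image.shape[0] // mag.shape[0] + 1
        rx = image.shape[1] // mag.shape[1] + 1
        zoom = np.kron(mag, np.ones((ry, rx)))
        zoom = zoom[: image.shape[0], : image.shape[1]]
        mask &= zoom > np.percentile(mag, q)
    return mask
```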
This paper discusses the use of wavelet analysis for monitoring the technical state of civil and industrial structures. To validate the proposed technique, a finite element model of a reinforced concrete beam was built, which allows the dynamic characteristics of a healthy beam and of a beam containing a crack to be investigated under periodic and impulse loading. As a result, a relation was established between the geometric dimensions of a crack and (1) the natural frequencies of the first 4 beam mode shapes, and (2) the wavelet coefficients of the decomposition of the displacement velocity at a chosen control point of the beam under impulse loading.
One of the most desired capabilities for power suppliers is forecasting the purchase and sale of energy at a future time. This paper presents a study of load forecasting for power suppliers, comparing the application of wavelets, time series methods and neural networks for both short- and long-term forecasts, both of great importance for power suppliers when defining the future power consumption of a given region.
We present an overview of results obtained in the last 10-15 years in the field of constrained deterministic approximation
and constrained statistical estimation of non-parametric regression functions, cumulative distribution
functions and densities. The case of deterministic approximation follows from the case of statistical estimation
of non-parametric regression when the noise variance is zero. Many unpublished results are announced here for
the first time.
We study the effects of the use of near-degenerate elements in finite and boundary element (multigrid) methods,
and their analogues with wavelet (multiresolution) methods. In the context of these results, a brief comparison
between finite/boundary element methods and wavelet methods is made.
A new, simple and efficient scheme for edge detection is presented. Its first property is that it correctly localizes edges in sharp images. The ability of the method to strongly reduce noise is demonstrated, and its performance is also established in the case of Gaussian white noise. The multiresolution version of this method is introduced; it will be developed in future work.