KEYWORDS: Signal to noise ratio, Signal attenuation, Space operations, Receivers, Interference (communication), Compressed sensing, Signal processing, Matrices, Radar
Traditional compression involves sampling a signal at the Nyquist rate and then reducing the signal to its essential components via some transformation. By exploiting any sparsity inherent in the signal, compressed sensing attempts to reduce the necessary sampling rate by combining these two steps. Currently, compressive sampling operators are based on random draws of Bernoulli- or Gaussian-distributed random processes. While this ensures that the conditions necessary for noise-free signal reconstruction (incoherence and the restricted isometry property, RIP) are fulfilled, such operators can have poor signal-to-noise ratio (SNR) performance in their measurements, and SNR degradation can lead to poor reconstruction despite good incoherence and RIP. Due to the effects of incoherence-related signal loss, the SNR will degrade by a factor of M/K relative to the SNR of the fully sampled signal, where M is the dimensionality of the measurement operator and K is the dimensionality of the representation space.
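To make the two acquisition routes concrete, here is a minimal numerical sketch (the dimensions, the random orthonormal sparsifying basis, and the use of orthogonal matching pursuit for the reconstruction stage are illustrative assumptions, not details taken from the abstract):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, K, M = 256, 8, 64  # Nyquist-rate length, sparsity, compressive measurements (illustrative)

# A signal that is exactly K-sparse in some orthonormal basis Psi
# (a random basis stands in for whatever transform the application uses).
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))
coeffs = np.zeros(N)
coeffs[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ coeffs

# Traditional route: acquire all N Nyquist-rate samples, transform,
# then keep only the K essential components.
c = Psi.T @ x
kept = np.argsort(np.abs(c))[-K:]
x_trad = Psi[:, kept] @ c[kept]

# Compressive route: take M << N random Bernoulli measurements directly,
# combining sampling and compression in one step.
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ x

# Sparse reconstruction from the compressive measurements.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(Phi @ Psi, y)
x_cs = Psi @ omp.coef_

print("traditional route error:", np.linalg.norm(x - x_trad) / np.linalg.norm(x))
print("compressive route error:", np.linalg.norm(x - x_cs) / np.linalg.norm(x))
```

In this noise-free toy setting both routes recover the signal; the SNR issue raised above appears once the measurement operator also acts on noise, which the following model develops.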
We model an RF compressive receiver in which the sampling operator acts on noise as well as signal. The signal is modeled as a bandlimited pulse parameterized by a random complex amplitude and time of arrival; hence the received signal is random with a known prior distribution. This allows us to represent the signal via a Karhunen-Loeve expansion and thus investigate the SNR loss in terms of a random vector expressed in the deterministic KL basis. We are then able to show the SNR trade-off that exists between sampling operators based on random matrices and operators matched to the K-dimensional KL basis.
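A sketch of that comparison, under stated assumptions (a sinc pulse shape, particular prior ranges for amplitude and arrival time, and illustrative values of N, K, M, and the noise level, none of which are given in the abstract), builds the KL basis empirically and contrasts the measurement-domain SNR of a random Gaussian operator with an operator matched to the leading K KL modes:

```python
import numpy as np

rng = np.random.default_rng(1)
N, draws = 256, 4000
t = np.arange(N)

# Received-signal model: bandlimited pulse with random complex amplitude
# and random time of arrival (pulse shape and priors are assumptions).
def random_pulse():
    a = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    tau = rng.uniform(0.25 * N, 0.75 * N)
    return a * np.sinc((t - tau) / 8.0)

X = np.stack([random_pulse() for _ in range(draws)])

# Karhunen-Loeve basis: eigenvectors of the ensemble covariance,
# sorted by decreasing eigenvalue.
R = X.conj().T @ X / draws
_, V = np.linalg.eigh(R)
V = V[:, ::-1]

K, M, sigma = 32, 64, 0.05
Psi = V[:, :K]  # matched K-dimensional KL operator
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)

def measurement_snr(op, n_trials=500):
    """Ratio of average signal energy to average noise energy in the measurements."""
    sig = noise = 0.0
    for s in X[:n_trials]:
        n = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        sig += np.sum(np.abs(op @ s) ** 2)
        noise += np.sum(np.abs(op @ n) ** 2)
    return sig / noise

print("random Gaussian operator SNR:", measurement_snr(A))
print("matched KL operator SNR    :", measurement_snr(Psi.conj().T))
```

The matched operator rejects noise outside the K-dimensional KL subspace, while the random operator folds in noise from all N dimensions of the band; the precise degradation factor depends on how the receiver and noise bandwidth are modeled, which is the trade-off the abstract quantifies.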
Support Vector Machines (SVMs) have generated excitement and interest in the pattern recognition community due to their generalization performance and ability to operate in high-dimensional feature spaces. Although SVMs are trained without user-specified models, required hyperparameters such as the Gaussian kernel width are usually user-specified and/or experimentally derived. This effort presents an alternative approach to selecting the Gaussian kernel width via analysis of the distributional characteristics of the training data projected onto the 'trained' SVM (margin values). The efficacy of a particular kernel width can be determined visually from one-dimensional density-estimate plots of the training-data margin values. Projecting the data onto the SVM hyperplane allows one-dimensional analysis of the data from the viewpoint of the 'trained' SVM. The effect of kernel-parameter selection on the class-conditional margin distributions is demonstrated in this one-dimensional projection subspace, and a criterion for unsupervised optimization of the kernel width is discussed. Empirical results are given for two classification problems: the 'toy' checkerboard problem and a high-dimensional classification problem using simulated High-Resolution Radar (HRR) targets projected into a wavelet packet feature space.
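As a concrete illustration of this procedure, the sketch below is a minimal reimplementation using scikit-learn and a Gaussian kernel density estimate (the gamma grid, regularization constant C, and sample sizes are assumptions, not values from the paper). It trains RBF-kernel SVMs on the 'toy' checkerboard problem and plots the class-conditional densities of the training margin values for several kernel widths:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# 'Toy' checkerboard problem: labels alternate with the parity of the grid cell.
X = rng.uniform(0, 4, size=(800, 2))
y = ((np.floor(X[:, 0]) + np.floor(X[:, 1])) % 2).astype(int)

# One density plot of training margin values per candidate kernel width.
# In scikit-learn's RBF parameterization, gamma = 1 / (2 * width**2).
fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharex=True)
for ax, gamma in zip(axes, [0.1, 2.0, 50.0]):
    clf = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)
    m = clf.decision_function(X)  # signed margin values of the training data
    grid = np.linspace(m.min(), m.max(), 200)
    for label in (0, 1):
        ax.plot(grid, gaussian_kde(m[y == label])(grid), label=f"class {label}")
    ax.set_title(f"gamma = {gamma}")
    ax.legend()
plt.tight_layout()
plt.show()
```

Informally, kernel widths that are too large or too small produce overlapping or degenerate class-conditional margin densities, while a well-chosen width yields clearly separated ones; visually comparing such plots is the kind of analysis the abstract describes.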