An effective method for geolocation of a radar emitter is to intercept its signal at multiple platforms and share the data to allow measurement of the time-difference-of-arrival (TDOA) and the frequency-difference-of-arrival (FDOA). This requires effective data compression. For radar location we show that it is possible to exploit pulse-to-pulse redundancy. A compression method is developed that exploits the singular value decomposition (SVD) to compress the intercepted radar pulse train. This method consists of five steps: (i) pulse gating, (ii) pulse alignment, (iii) matrix formation, (iv) SVD-based rank reduction, and (v) encoding. Matrix formation places aligned pulses into rows to form a matrix that has rank close to one, and SVD truncation gives a low-rank approximate matrix. We show that (i) compression is maximized if the matrix is made to have two-thirds as many rows as columns and (ii) truncation to a rank-one matrix is feasible. We interpret this as extracting a prototype pulse trainlet. We express the maximum compression ratio in terms of the number of pulses and the number of samples per pulse, and we point out a particularly interesting and important characteristic: the compression ratio increases as the total number of signal samples increases. Theoretical and simulation results show that this approach provides a compression ratio up to about 30:1 in practical signal scenarios.
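A minimal numpy sketch of the matrix-formation and rank-reduction steps (iii) and (iv) is given below; the pulse count, pulse length, noise level, and test waveform are assumptions for illustration only, and the gating, alignment, and encoding stages are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: 60 aligned pulses, 400 complex samples each (illustrative only).
num_pulses, samples_per_pulse = 60, 400
t = np.arange(samples_per_pulse)
prototype = np.exp(1j * 2 * np.pi * 0.05 * t) * np.hanning(samples_per_pulse)

# Step (iii): stack the aligned pulses as rows of a matrix; pulse-to-pulse
# redundancy makes this matrix close to rank one (plus noise).
pulses = np.outer(np.ones(num_pulses), prototype)
pulses += 0.05 * (rng.standard_normal(pulses.shape) + 1j * rng.standard_normal(pulses.shape))

# Step (iv): SVD-based rank reduction; keep only the dominant singular triplet,
# i.e. a prototype pulse (right singular vector) and one weight per pulse.
U, s, Vh = np.linalg.svd(pulses, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vh[0, :])

# Naive element-count ratio for a rank-one approximation (ignores encoding details):
# one weight per pulse plus one prototype pulse instead of the full matrix.
stored = num_pulses + samples_per_pulse
original = num_pulses * samples_per_pulse
print("naive element-count ratio:", round(original / stored, 1))
print("relative reconstruction error:",
      round(np.linalg.norm(pulses - rank1) / np.linalg.norm(pulses), 4))
```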
The location of an emitter is estimated by intercepting its signal and sharing the data among several platforms to measure the time-difference-of-arrival (TDOA) and the frequency-difference-of-arrival (FDOA). Doing this in a timely fashion requires effective data compression. A common compression approach is to use a rate-distortion criterion where distortion is taken to be the mean-square error (MSE) between the original and compressed versions of the signal. However, in this paper we show that this MSE-only approach is inappropriate for TDOA/FDOA estimation and then define a more appropriate, non-MSE distortion measure. This measure is based on the fact that, in addition to the dependence on MSE, the TDOA accuracy also depends inversely on the signal's RMS (or Gabor) bandwidth and the FDOA accuracy also depends inversely on the signal's RMS (or Gabor) duration. We discuss how the wavelet transform is a natural choice to exploit this non-MSE criterion. These ideas are shown to be natural generalizations of our previously presented results showing how to determine the correct balance between quantization and decimation. We develop an MSE-based wavelet method and then incorporate the non-MSE error criterion. Simulations show the wavelet method provides significant compression ratios with negligible accuracy reduction. We also make comparisons to methods that do not exploit time-frequency structure and see that the wavelet methods far outperform them.
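Since the non-MSE criterion hinges on the RMS (Gabor) bandwidth and duration, the following hedged numpy sketch shows one way these two quantities can be computed from samples; the linear-FM test signal, sample rate, and windowing are assumptions, not definitions taken from the paper.

```python
import numpy as np

fs = 1e6                                                  # assumed sample rate
t = np.arange(4096) / fs
s = np.exp(1j * np.pi * 5e7 * t**2) * np.hanning(t.size)  # assumed LFM test signal

def rms_duration(sig, fs):
    """RMS (Gabor) duration: second moment of |s(t)|^2 about its time centroid."""
    tt = np.arange(sig.size) / fs
    p = np.abs(sig) ** 2
    p /= p.sum()
    t0 = np.sum(tt * p)
    return np.sqrt(np.sum((tt - t0) ** 2 * p))

def rms_bandwidth(sig, fs):
    """RMS (Gabor) bandwidth: second moment of |S(f)|^2 about its frequency centroid."""
    S = np.fft.fftshift(np.fft.fft(sig))
    f = np.fft.fftshift(np.fft.fftfreq(sig.size, 1 / fs))
    P = np.abs(S) ** 2
    P /= P.sum()
    f0 = np.sum(f * P)
    return np.sqrt(np.sum((f - f0) ** 2 * P))

# TDOA accuracy improves with RMS bandwidth and FDOA accuracy with RMS duration,
# so a compressor should preserve these quantities alongside the MSE.
print("RMS duration  [s] :", rms_duration(s, fs))
print("RMS bandwidth [Hz]:", rms_bandwidth(s, fs))
```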
This paper has a threefold goal. First, an error-resilience technique based on UEP-FEC, proposed by the authors in a previous work, is extended to incorporate a more efficient, adaptive intra-refresh mechanism based on an error-sensitivity metric representing the vulnerability of each coded block to channel errors. Second, the validity of the UEP technique for video transmission over a WCDMA channel is evaluated, thus extending the range of applicability of the technique, which was originally conceived for and validated on DECT slow-fading channels. Third, by elaborating on experimental evidence, we show again that a final judgement on the best-performing resilience strategy cannot be given without taking channel conditions, especially the Doppler spread value, into account.
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with that of EBLAST. Performance analysis criteria include time and space complexity and the quality of the decompressed image, the latter determined from rate-distortion data obtained on a database of realistic test images. The discussion also covers issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
A modified MPEG Advanced Audio Coding (AAC) scheme based on the Karhunen-Loeve transform (KLT) to remove inter-channel redundancy, called the MAACKL method, has been proposed in our previous work. However, a straightforward coding of the elements of the KLT matrix generates about 240 bits per matrix for typical 5-channel audio content. Such an overhead is too expensive and prevents MAACKL from updating the KLT dynamically over a short period of time. In this research, we study the decorrelation efficiency of the adaptive KLT as well as an efficient way to encode the elements of the KLT matrix via vector quantization. The effect of different quantization accuracies and adaptation periods is examined carefully. It is demonstrated that, with the smallest possible number of bits per matrix and a moderately long KLT adaptation time, the MAACKL algorithm can still deliver very good coding performance.
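A minimal sketch of the inter-channel KLT step is given below; the five-channel toy signal, frame length, and correlation model are assumptions, and the KLT-matrix vector quantization and AAC coding stages of MAACKL are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy input: 5 audio channels sharing a common component,
# so they are strongly correlated across channels.
frames, channels = 4096, 5
common = rng.standard_normal(frames)
x = np.stack([common + 0.3 * rng.standard_normal(frames) for _ in range(channels)], axis=1)

# Inter-channel KLT: eigenvectors of the channel covariance matrix.
cov = np.cov(x, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)           # eigenvalues in ascending order
klt = eigvec[:, ::-1]                          # reorder by decreasing variance

y = x @ klt                                    # decorrelated "eigen-channels"
print("channel variances after KLT:", np.var(y, axis=0).round(3))

# The decoder needs the (quantized) KLT matrix -- the roughly 240-bit overhead
# the paper targets with vector quantization -- to invert the transform:
x_rec = y @ klt.T
print("max reconstruction error:", np.abs(x - x_rec).max())
```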
This work focuses on estimating the information conveyed to a user by either multispectral or hyperspectral image data. The goal is to establish the extent to which an increase in spectral resolution can increase the amount of usable information. In fact, a tradeoff exists between spatial and spectral resolution, due to physical constraints on sensors imaging with a preset SNR. Lossless data compression is exploited to measure the useful information content. The bit rate achieved by the reversible compression process accounts both for the contribution of the observation noise (i.e., information regarded as statistical uncertainty, which is of no relevance to a user) and for the intrinsic information of the hypothetically noise-free data. An entropic model of the image source is defined and, once the standard deviation of the noise, assumed to be Gaussian and possibly nonwhite, has been preliminarily estimated, this model is inverted to yield an estimate of the information content of the noise-free source from the code rate. Results of both noise and information assessment are reported and discussed for synthetic noisy images, Landsat TM data, and AVIRIS data.
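As a rough, hedged illustration of the model-inversion idea: additive white Gaussian noise of standard deviation sigma, quantized with step Delta, contributes on the order of 0.5*log2(2*pi*e*sigma^2) - log2(Delta) bits per sample, which can be subtracted from the achieved lossless code rate to approximate the useful information. The entropic model used in the paper is more refined than this simple subtraction; the numbers below are invented for illustration.

```python
import numpy as np

def gaussian_noise_rate(sigma, step=1.0):
    """Approximate bits/sample contributed by Gaussian noise of standard deviation
    'sigma' when the data are quantized with step 'step'."""
    return 0.5 * np.log2(2 * np.pi * np.e * sigma**2) - np.log2(step)

# Invented example: a lossless coder reaches 6.1 bits/pixel on the noisy data and
# the noise standard deviation is estimated at 2.5 digital counts.
code_rate = 6.1
sigma_hat = 2.5
noise_rate = gaussian_noise_rate(sigma_hat)
print("estimated noise contribution [bpp] :", round(float(noise_rate), 2))
print("estimated useful information [bpp] :", round(float(code_rate - noise_rate), 2))
```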
The efficient transmission and storage of digital imagery increasingly requires compression to maintain effective channel bandwidth and device capacity. Unfortunately, in applications where high compression ratios are required, lossy compression transforms tend to produce a wide variety of artifacts in decompressed images. Image quality measures (IQMs) have been published that detect global changes in image configuration resulting from the compression or decompression process. Examples include statistical and correlation-based procedures related to mean-squared error, diffusion of energy from features of interest, and spectral analysis. Additional but sparsely reported research involves local IQMs that quantify feature distortion in terms of objective or subjective models. In this paper, a suite of spatial exemplars and evaluation procedures is introduced that can elicit and measure a wide range of spatial, statistical, or spectral distortions from an image compression transform T. By applying the test suite to the input of T, performance deficits can be highlighted in the transform's design phase rather than discovered under adverse conditions in field practice. In this study, performance analysis is concerned primarily with the effect of compression artifacts on automated target recognition (ATR) algorithm performance. For example, featural distortion can be measured using linear, curvilinear, polygonal, or elliptical features interspersed with various textures or noise-perturbed backgrounds or objects. These simulated target blobs may themselves be perturbed with various types or levels of noise, thereby facilitating measurement of statistical target-background interactions. By varying target-background contrast, resolution, noise level, and target shape, compression transforms can be stressed to isolate performance deficits. Similar techniques can be employed to test spectral, phase, and boundary distortions due to decompression. Illustrative examples are taken from ATR practice, with supporting analysis of the space, time, and computational error associated with the measures included in the test suite.
A crucial deficiency of lossy blockwise image compression is the generation of local artifacts such as ringing defects, obscuration of fine detail, and blocking effect (BE). To date, few published reports of image quality measures (IQMs) have addressed the detection of such errors in a realistic, efficient manner. Exceptions are feature-based IQMs, perceptual IQMs, error detection templates, and quantification of BE that supports its reduction in JPEG- and wavelet-compressed imagery. In this paper, we present an enhanced suite of IQMs that emphasize detection of local, feature-specific errors that corrupt the visual appearance or numerical integrity of decompressed digital imagery. By visual appearance we mean subjective error, in contrast with objectively quantified effects of compression on individual pixel values and their spatial interrelationships. Subjective error is of key importance in human viewing applications, for example, Internet video. Objective error is primarily of interest in object recognition applications such as automated target recognition (ATR), where implementational concerns involve the effect of compression or decompression algorithms on probability of detection (Pd) and rate of false alarms (Rfa). Analysis of the results presented herein emphasizes application-specific quantification of local compression errors. In particular, introduction of extraneous detail (e.g., ringing defects or BE) or obscuration of source detail (e.g., texture masking) adversely impacts both the subjective and objective error of a decompressed image. Blocking effect is primarily a visual problem, but can confound ATR filters when a target spans a block boundary. Introduction of point or cluster errors primarily degrades ATR filter performance, but can also produce noticeable degradation of fine detail for human visual evaluation of decompressed imagery. In practice, error and performance analysis is supported by examples of ATR imagery, including airborne and underwater mono- and multi-spectral images.
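The blocking effect mentioned above can be quantified in many ways; below is a minimal, hedged sketch of one common style of BE measure (mean absolute discontinuity across assumed 8x8 block boundaries, normalized by the discontinuity inside blocks). It is not the specific IQM suite of this paper.

```python
import numpy as np

def blocking_effect(img, block=8):
    """Ratio of pixel jumps across assumed block boundaries to jumps elsewhere;
    values well above 1 indicate visible blocking artifacts."""
    img = img.astype(float)
    dh = np.abs(np.diff(img, axis=1))            # horizontal neighbour differences
    dv = np.abs(np.diff(img, axis=0))            # vertical neighbour differences
    cols = np.arange(dh.shape[1])
    rows = np.arange(dv.shape[0])
    h_ratio = dh[:, (cols + 1) % block == 0].mean() / dh[:, (cols + 1) % block != 0].mean()
    v_ratio = dv[(rows + 1) % block == 0, :].mean() / dv[(rows + 1) % block != 0, :].mean()
    return 0.5 * (h_ratio + v_ratio)

# Assumed toy images: a smooth diagonal ramp, and the same ramp with 8x8 block offsets.
ramp = np.add.outer(np.linspace(0, 127, 256), np.linspace(0, 127, 256))
offset = 8.0 * ((np.arange(256) // 8) % 2)
blocky = ramp + offset[None, :] + offset[:, None]

print("smooth image BE:", round(blocking_effect(ramp), 2))
print("blocky image BE:", round(blocking_effect(blocky), 2))
```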
In this paper, we introduce new approaches to remove the boundary artifacts in reconstructed images caused by transforms using overlapping non-symmetrical cosine-IV bases in image compression. In the field of image compression, overlapping cosine-IV bases can reduce the block artifacts that occur in JPEG. These basis functions are longer than the block size and decay to zero at their boundaries. These cosine-IV bases have, however, one important disadvantage: they are not symmetric. Therefore, the symmetric periodic extension cannot be applied to sequences of finite length, and artifacts appear at low bit rates in image compression if only the periodic extension is used. With the aid of the folding operator, we derive the symmetric periodic extension for cosine-IV bases, and weighting functions are introduced. We point out that no artifacts appear at image boundaries if our weighting functions are used. In the second part of the paper, we present a new approach that avoids the extension of the image altogether, so that there is no overlap at the image boundaries. The efficiency of the proposed methods in image compression is studied, and we show that there are no artifacts at image boundaries in the reconstructed image if our methods are used.
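A tiny numpy check of the property the paper starts from: the orthonormal cosine-IV basis functions are not symmetric (or antisymmetric) about the block centre, which is why the usual symmetric periodic extension fails for them. The block size is an arbitrary assumption.

```python
import numpy as np

N = 8                                          # assumed block size
n = np.arange(N)
# Orthonormal DCT-IV basis: c_k[n] = sqrt(2/N) * cos(pi/N * (n + 1/2) * (k + 1/2))
basis = np.sqrt(2 / N) * np.cos(np.pi / N * np.outer(np.arange(N) + 0.5, n + 0.5))

print("orthonormal:", np.allclose(basis @ basis.T, np.eye(N)))
# Symmetry check: a (anti)symmetric basis function would equal (minus) its mirror image.
for k in range(3):
    mirrored = basis[k][::-1]
    sym = np.allclose(basis[k], mirrored) or np.allclose(basis[k], -mirrored)
    print(f"basis {k} symmetric or antisymmetric about block centre:", sym)
```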
The main objective of this paper is to present some tools to analyze a digital chaotic signal. We have proposed some of them previously, such as a new type of phase diagram in which binary signals are converted to hexadecimal. The main emphasis in this paper, however, is on an analysis of the chaotic signal based on the Lempel-Ziv method. We have previously applied this technique only to very short streams of data; here we extend it to long trains of data (longer than 2000 bits). The main characteristics of the chaotic signal are obtained with this method, making it possible to give numerical values that indicate the properties of the chaos.
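A compact sketch of a Lempel-Ziv complexity count applicable to trains longer than 2000 bits is shown below; the LZ78-style incremental parsing, the normalization, and the logistic-map bit generator are assumptions chosen for illustration, not necessarily the exact variant used in the paper.

```python
import numpy as np

def lz_phrase_count(bits):
    """Number of distinct phrases in an LZ78-style incremental parsing of a bit stream
    (a simple stand-in for the Lempel-Ziv complexity measure)."""
    phrases, word, count = set(), "", 0
    for b in bits:
        word += b
        if word not in phrases:
            phrases.add(word)
            count += 1
            word = ""
    return count + (1 if word else 0)

def normalized_complexity(bits):
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n     # close to 1 for a random-like stream

# Assumed chaotic source: bits obtained by thresholding the logistic map at 0.5.
x, chaotic_bits = 0.4, []
for _ in range(4000):                                 # a train longer than 2000 bits
    x = 4.0 * x * (1.0 - x)
    chaotic_bits.append("1" if x > 0.5 else "0")

periodic_bits = list("01" * 2000)
print("chaotic  stream complexity:", round(normalized_complexity(chaotic_bits), 3))
print("periodic stream complexity:", round(normalized_complexity(periodic_bits), 3))
```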
The first part of the article gives a brief examination of the state of the art in terms of methodologies, available hardware, and algorithms used in space applications. Particular emphasis is given to the lossless algorithms used and their characterization. In the second part, a more detailed analysis in terms of data entropy is presented. Finally, preliminary results on determining the compressibility of the file are presented.
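As a hedged illustration of the entropy analysis referred to, a zero-order byte-entropy estimate gives a quick first indication of a file's redundancy; files with inter-byte structure can compress well below this figure, and the paper's analysis is of course more detailed.

```python
import numpy as np

def byte_entropy(data: bytes) -> float:
    """Zero-order (i.i.d.) entropy of a byte stream, in bits per byte."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / len(data)
    return float(-(p * np.log2(p)).sum())

# Assumed toy inputs: a highly repetitive buffer versus pseudo-random bytes.
repetitive = b"ABCD" * 25_000
random_like = np.random.default_rng(0).integers(0, 256, 100_000, dtype=np.uint8).tobytes()
print("repetitive buffer :", round(byte_entropy(repetitive), 3), "bits/byte")
print("random-like buffer:", round(byte_entropy(random_like), 3), "bits/byte")
```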
Near-lossless compression, i.e., compression yielding a strictly bounded reconstruction error, is extended to preserve the radiometric resolution of data produced by coherent imaging systems, such as Synthetic Aperture Radar (SAR). First, a causal spatial DPCM based on fuzzy matching-pursuit (FMP) prediction is adjusted to yield relative-error-bounded compression by applying logarithmic quantization to the ratio of original to predicted pixel values. Then, a noncausal DPCM is achieved based on the Rational Laplacian Pyramid (RLP), recently introduced by the authors for despeckling. The baseband icon of the RLP is (causally) DPCM encoded, the intermediate layers are uniformly quantized, and the bottom layer is logarithmically quantized. As a consequence, the relative error, i.e., the pixel ratio of original to decoded image, can be strictly bounded around unity by the quantization step size of the bottom layer of the RLP. Experimental results reported on true SAR data from NASA/JPL AIRSAR show that virtually lossless images can be achieved with compression ratios larger than three.
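The relative-error bound comes from logarithmically quantizing the ratio of original to predicted values; a minimal sketch of that mechanism is shown below, with an arbitrary step size and toy amplitude data standing in for SAR pixels.

```python
import numpy as np

def log_quantize_ratio(x, x_pred, step):
    """Quantize the ratio x/x_pred on a log scale so that the reconstruction
    ratio x_rec/x stays within a factor of exp(step/2) of unity."""
    idx = np.round(np.log(x / x_pred) / step)
    x_rec = x_pred * np.exp(idx * step)
    return idx.astype(int), x_rec

rng = np.random.default_rng(0)
x = rng.gamma(shape=4.0, scale=50.0, size=10000)        # assumed speckled amplitudes
x_pred = x * np.exp(rng.normal(0.0, 0.2, x.size))       # assumed imperfect prediction

step = 0.1                                              # ratio bound: exp(0.05) ~ 1.051
idx, x_rec = log_quantize_ratio(x, x_pred, step)
ratio = x_rec / x
print("max relative deviation:", round(float(max(ratio.max(), 1 / ratio.min())), 4))
print("theoretical bound     :", round(float(np.exp(step / 2)), 4))
```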
Scientific images collected by a remote platform are preferably transmitted to the ground station in lossless compressed form. However, the data rate to be transmitted has been increasing rapidly due to developments in sensor technology, while the available downlink bandwidth has not increased accordingly, thus often making it necessary to resort to lossy compression. In particular, this paper deals with the evaluation of the new JPEG2000 standard for the transmission of scientific images to the ground station. We focus on a particular type of system, namely the packet telemetry channel standardized by the CCSDS (Consultative Committee for Space Data Systems), which can be used for the transmission of both application data (e.g., images) and telecommand and control data. A performance evaluation is presented, in terms of error resilience and lossy compression performance, comparing the results achieved by JPEG2000 and the classical JPEG algorithm. The simulation of the transmission aspects, including the complete channel model, is performed by means of the TOPSIM IV simulator of communication systems.
In the context of compression of high-resolution multispectral satellite image data consisting of radiances and top-of-the-atmosphere fluxes, it is vital that image calibration characteristics (luminance, radiance) be preserved within certain limits under lossy compression. Though existing compression schemes (SPIHT, JPEG2000, SQP) give good results as far as minimization of the global PSNR error is concerned, they fail to guarantee a maximum local error. With respect to this, we introduce a new image compression scheme that guarantees a MAXAD distortion, defined as the maximum absolute difference between original and reconstructed pixel values. In terms of the Lagrangian optimization problem, this translates into minimizing the rate for a given MAXAD distortion. Our approach thus uses the l-infinite distortion measure, applied to the lifting-scheme implementation of the 9-7 floating-point Cohen-Daubechies-Feauveau (CDF) filter. Scalar quantizers, optimal in the D-R sense, are derived for every subband by solving a global optimization problem that guarantees a user-defined MAXAD. The optimization problem has been defined and solved for the case of the 9-7 filter, and we show that our approach is valid and may be applied to any finite wavelet filter synthesized via lifting. The experimental assessment of our codec shows that our technique provides excellent results in applications such as remote sensing, in which reconstruction of image calibration characteristics within a tolerable local error (MAXAD) is perceived as crucial compared to obtaining an acceptable global error (PSNR), as is the case with existing quantizer design techniques.
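A short sketch contrasting the two error measures discussed, global PSNR versus the l-infinite MAXAD bound; the random test image and the uniform-quantization stand-in for a codec are assumptions, not the proposed wavelet coder.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak**2 / mse)

def maxad(a, b):
    """l-infinite distortion: maximum absolute pixel difference."""
    return np.abs(a.astype(float) - b.astype(float)).max()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256)).astype(float)     # assumed test image

# Stand-in "codecs": uniform quantization keeps MAXAD bounded by step/2, while a
# single large outlier barely changes the PSNR but ruins the MAXAD guarantee.
step = 8
quantized = np.round(img / step) * step
outlier = img.copy()
outlier[0, 0] += 120

for name, rec in [("uniform quantization", quantized), ("single outlier", outlier)]:
    print(f"{name:>20s}: PSNR = {psnr(img, rec):5.1f} dB, MAXAD = {maxad(img, rec):5.1f}")
```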
Huge amounts of data are generated thanks to the continuous improvement of remote sensing systems. Archiving this tremendous volume of data is a real challenge and requires lossless compression techniques. Furthermore, progressive coding constitutes a desirable feature for telebrowsing. To this purpose, a compact and pyramidal representation of the input image has to be generated. Separable multiresolution decompositions have already been proposed for multicomponent images, allowing each band to be decomposed separately. It seems more appropriate, however, to also exploit the spectral correlations. For hyperspectral images, the solution is to apply a 3D decomposition along the spatial and spectral dimensions. This approach is not appropriate for multispectral images because of the reduced number of spectral bands. In recent works, we have proposed a nonlinear subband decomposition scheme with perfect reconstruction which efficiently exploits both the spatial and the spectral redundancies contained in multispectral images. In this paper, the problem of coding the coefficients of the resulting subband decomposition is addressed. More precisely, we propose a vector extension of Shapiro's embedded zerotree wavelet coding (V-EZW), which achieves further savings in the bit stream. Simulations carried out on SPOT images demonstrate the superior performance of the overall compression scheme.
Traditional watermarking systems require the complete disclosure of the watermarking key in the watermark verification process. In most systems an attacker is able to remove the watermark completely once the key is known, thus subverting the intention of copyright protection. To cope with this problem, public-key watermarking schemes were proposed that allow asymmetric watermark detection. Whereas a public key is used to insert watermarks in digital objects, the marks can be verified with a private key. Knowledge of this private key does not allow piracy. We describe two public-key watermarking schemes which are similar in spirit to zero-knowledge proofs. The key idea of one system is to verify a watermark in a blinded version of the document, where the scrambling is determined by the private key. A probabilistic protocol is constructed that allows public watermark detection with probability of 1/2; by iteration, the verifier can get any degree of certainty that the watermark is present. The second system is based on watermark attacks, using controlled counterfeiting to conceal real watermark data safely amid data useless to an attacker.
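A one-line worked illustration of the iteration argument: if a document without the watermark can pass a single round only with probability 1/2, then the chance of fooling the verifier over k independent rounds is 2^-k.

```python
# Probability that a document WITHOUT the watermark passes k independent protocol
# rounds, assuming each round can be passed by chance with probability 1/2.
for k in (1, 10, 20, 40):
    print(f"rounds = {k:2d}: false-acceptance probability = {0.5**k:.2e}")
```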
Resolving ownership disputes and copyright infringement is difficult in the worldwide digital age. There is an increasing need to develop techniques that adequately protect the owner of digital data. Digital watermarking is a technique used to embed a known piece of digital data within another piece of digital data. The embedded piece of data acts as a fingerprint for the owner, allowing the protection of copyright, authentication of the data, and tracing of illegal copies. Digital watermarking techniques have matured, and the most successful techniques are modeled after data communications techniques. In this analogy, the image plays the role of the transmission medium (the atmosphere), and the watermark message is the signal communicated through that medium. Data communications techniques therefore provide a sound model for measuring and improving watermark robustness. The goal of this project is to compare and measure the effectiveness of forward error correction (FEC) when used with a spread-spectrum watermarking technique. This paper compares and contrasts Golay and convolutional error correction schemes. Most papers on digital watermarking mention or allude to the use of FEC, but none measure its effectiveness or offer a comparison of different techniques.
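Below is a hedged sketch of spread-spectrum embedding with correlation detection and an outer FEC; a rate-1/3 repetition code with majority voting stands in for the Golay and convolutional codes actually compared in the paper, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(host, bits, chips_per_bit, alpha, key):
    """Spread-spectrum embedding: each coded bit modulates a keyed +/-1 chip sequence."""
    pn = np.sign(np.random.default_rng(key).standard_normal(len(bits) * chips_per_bit))
    spread = np.repeat(2 * np.asarray(bits) - 1, chips_per_bit) * pn
    return host + alpha * spread, pn

def extract(received, n_bits, chips_per_bit, pn):
    """Correlation receiver: sign of the per-bit correlation with the chip sequence."""
    corr = (received * pn).reshape(n_bits, chips_per_bit).sum(axis=1)
    return (corr > 0).astype(int)

# Assumed setup: 24 payload bits protected by a rate-1/3 repetition code,
# used only as a stand-in for the Golay / convolutional codes the paper compares.
payload = rng.integers(0, 2, 24)
coded = np.repeat(payload, 3)

host = rng.standard_normal(coded.size * 64)                    # assumed host coefficients
marked, pn = embed(host, coded, chips_per_bit=64, alpha=0.5, key=1234)
attacked = marked + 1.0 * rng.standard_normal(marked.size)     # assumed noise attack

coded_hat = extract(attacked, coded.size, 64, pn)
decoded = (coded_hat.reshape(-1, 3).sum(axis=1) >= 2).astype(int)   # majority vote
print("coded-bit errors  :", int(np.sum(coded_hat != coded)))
print("payload-bit errors:", int(np.sum(decoded != payload)))
```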
Earth observation missions have recently attracted a growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products from non-authorized use. Such a need is crucial not least because the Internet and other public/private networks have become preferred means of data exchange. A crucial issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: (i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection, and (ii) analysis of the state of the art and performance evaluation of existing algorithms in terms of the requirements identified in the previous point.
This paper addresses the issue of robust data hiding in the presence of perceptual coding. Two common classes of data hiding schemes are considered: spread spectrum and quantization-based techniques. We identify analytically the advantages of both approaches under the lossy compression class of attacks. Based on our mathematical model, a novel hybrid data hiding algorithm which exploits the best of both worlds is presented. Theoretical and simulation results demonstrate the superior robustness of the resulting hybrid scheme.
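As background for the quantization-based half of the hybrid, here is a minimal sketch of scalar quantization index modulation (QIM) embedding and minimum-distance decoding; the step size, host statistics, and attack are assumptions, and the paper's hybrid combination is not reproduced.

```python
import numpy as np

def qim_embed(x, bits, delta):
    """Embed one bit per coefficient by quantizing onto one of two interleaved
    lattices: even multiples of delta carry bit 0, odd multiples carry bit 1."""
    bits = np.asarray(bits, dtype=float)
    return (2 * delta) * np.round((x - bits * delta) / (2 * delta)) + bits * delta

def qim_extract(y, delta):
    """Minimum-distance decoding: pick the lattice whose nearest point is closer."""
    d0 = np.abs(y - qim_embed(y, np.zeros_like(y), delta))
    d1 = np.abs(y - qim_embed(y, np.ones_like(y), delta))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
host = rng.normal(0, 10, 1000)                        # assumed host coefficients
bits = rng.integers(0, 2, 1000)
delta = 2.0                                           # assumed quantization step

marked = qim_embed(host, bits, delta)
attacked = marked + rng.normal(0, 0.3, marked.size)   # assumed mild noise attack
print("bit errors after attack:", int(np.sum(qim_extract(attacked, delta) != bits)))
```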
So far, only limited attention has been paid to the distinction between readable and detectable watermarking. A drawback of detectable watermarking is that the embedded code can convey only 1 bit of information. Actually, if one could look for all N possible watermarks, then the detection of one such watermark would convey log2(N) information bits. Unfortunately, such an approach is not computationally feasible, since the number of possible watermarks is usually very high. In this work we explore two alternative strategies to build a readable watermark by starting from a detectable one. According to the first strategy (position encoding), information bits are encoded in the position of M known pseudo-random sequences, embedded within the host document by relying on a generic detectable watermarking algorithm. In the second case (amplitude encoding), the pseudo-random sequence is split into a set of sub-sequences, and each sub-sequence is modulated by multiplying it by +1 or -1. To compare the two approaches, we focus on image watermarking achieved through a well-known, frequency-domain watermarking algorithm previously proposed by the authors. The two alternative strategies are compared both in terms of bit error probability and in terms of watermark presence assessment.
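A hedged sketch of the amplitude-encoding strategy on a generic set of host coefficients: the pseudo-random sequence is split into M sub-sequences, each multiplied by +1 or -1 according to one message bit, and each bit is read back from the sign of the corresponding correlation. The additive embedding rule and all parameters are assumptions; the paper applies the idea inside a specific frequency-domain watermarker.

```python
import numpy as np

rng = np.random.default_rng(0)

M, L = 16, 2048                               # assumed: 16 message bits, 2048 chips per bit
bits = rng.integers(0, 2, M)
pn = np.sign(rng.standard_normal((M, L)))     # pseudo-random sequence split into M parts

host = rng.normal(0, 5, M * L)                # assumed host coefficients
alpha = 0.5                                   # assumed embedding strength
watermark = ((2 * bits - 1)[:, None] * pn).ravel()   # +/-1 modulation of each sub-sequence
marked = host + alpha * watermark

# Readable detection: one correlation per sub-sequence, message bit = sign.
corr = (marked.reshape(M, L) * pn).sum(axis=1)
bits_hat = (corr > 0).astype(int)
print("decoded message matches:", bool(np.all(bits_hat == bits)))

# Watermark presence can still be assessed globally, e.g. by comparing the summed
# magnitude of the per-sub-sequence correlations with its expected value.
print("global statistic (about 1 when the mark is present):",
      round(float(np.abs(corr).sum() / (alpha * M * L)), 2))
```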
In this work, we extend arithmetic coding and present a data encryption scheme that achieves data compression and data security at the same time. This scheme is based on chaotic dynamics and makes use of the fact that the decoding process of an arithmetic coding scheme can be viewed as repeated application of a Bernoulli shift map. Data encryption is achieved by controlling the piecewise-linear maps with a secret key in three ways: (i) a perturbation method, (ii) a switching method, and (iii) a source-extension method. Experimental results show that the arithmetic codes obtained for a message are randomly distributed over the mapping domain [0,1) when different keys are used, without seriously deteriorating the compression ratio, and that the transition of the orbits in the domain [0,1) resembles chaotic dynamics.
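A small sketch of the observation the scheme builds on: for an i.i.d. binary source with symbol probability p, arithmetic decoding of a code value x in [0,1) amounts to iterating a piecewise-linear (generalized Bernoulli shift) map. The textbook infinite-precision coder below only makes that map explicit; the key-controlled perturbation, switching, and source-extension mechanisms are not shown.

```python
def arithmetic_encode(bits, p):
    """Infinite-precision arithmetic coding of an i.i.d. binary source
    (probability of symbol 0 is p); returns a code value in [0, 1)."""
    low, width = 0.0, 1.0
    for b in bits:
        if b == 0:
            width *= p
        else:
            low += width * p
            width *= (1 - p)
    return low + width / 2

def arithmetic_decode(x, p, n):
    """Decoding as iteration of a piecewise-linear (Bernoulli shift) map:
    emit 0 and map x -> x/p on [0, p); emit 1 and map x -> (x-p)/(1-p) on [p, 1)."""
    out = []
    for _ in range(n):
        if x < p:
            out.append(0)
            x = x / p
        else:
            out.append(1)
            x = (x - p) / (1 - p)
    return out

msg = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
p = 0.7
code = arithmetic_encode(msg, p)
print("code value :", code)
print("round trip :", arithmetic_decode(code, p, len(msg)) == msg)
```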
Watermarking or steganography technology provides a possible solution for digital multimedia copyright protection and pirate tracking. Most current data hiding schemes are based on spread-spectrum modulation: a small-valued watermark signal is embedded into the content signal in some watermark domain, and the information bits are extracted via correlation. Such schemes are applied in both escrow and oblivious cases. This paper reveals, through analysis and simulation, that in oblivious applications where the original signal is not available, the commonly used correlation detection is not optimal. Its maximum-likelihood detection is analyzed and a feasible suboptimal detector is derived. Its performance is explored and compared with the correlation detector. Subsequently, a linear embedding scheme is proposed and studied. Experiments with image data hiding demonstrate its effectiveness in applications.
Time-domain and frequency-domain spreading algorithms are presented in this paper. The detection algorithm, however, does not rely on the correlation method but is based on the patchwork method. The time-domain algorithm can survive translation and clipping attacks, while the frequency-domain algorithm is robust against compression attacks. Embedding and detection functions are presented, and simulation results show the effectiveness of the algorithms.
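A minimal sketch of patchwork-style embedding and detection on a generic sample vector (all parameters assumed): a key splits the samples into two subsets A and B, a small offset d is added to A and subtracted from B, and detection tests whether mean(A) - mean(B) is larger than chance would allow, with no reference-pattern correlation required.

```python
import numpy as np

rng = np.random.default_rng(0)
key_rng = np.random.default_rng(42)          # assumed shared secret key

samples = rng.normal(0, 20, 20000)           # assumed host samples (e.g. audio or pixels)
idx = key_rng.permutation(samples.size)
A, B = idx[:5000], idx[5000:10000]           # two keyed, disjoint patches

d = 1.0                                      # assumed embedding strength
marked = samples.copy()
marked[A] += d
marked[B] -= d

def patchwork_statistic(x, A, B):
    """Normalized difference of patch means; roughly N(0,1) when no mark is present."""
    diff = x[A].mean() - x[B].mean()
    se = np.sqrt(x[A].var(ddof=1) / A.size + x[B].var(ddof=1) / B.size)
    return diff / se

print("unmarked statistic:", round(float(patchwork_statistic(samples, A, B)), 2))
print("marked   statistic:", round(float(patchwork_statistic(marked, A, B)), 2))
```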
In this paper, a comparative analysis is presented of the characteristics of the pseudo-random sequence (PRS) and the normal random sequence (NRS), which are used as watermarks for copyright protection and authentication of digital media in conventional digital watermarking systems. From the analysis, it is found that, in the case of the NRS, detection of the watermark can be hampered by many cross-correlation peaks arising from other NRS codes, owing to the non-uniform distribution of ones and zeros in those codes. In the case of the PRS, by contrast, the generated codes have a uniform distribution, so very few cross-correlation peaks occur. The PRS is therefore found to be more robust than the NRS against possible attacks such as JPEG image compression and Gaussian noise. Finally, simulation results on the robustness of the PRS and NRS against attacks such as JPEG compression and Gaussian noise are also presented.
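A hedged numpy sketch of the kind of comparison described: a maximal-length LFSR sequence (one common choice of PRS; taps at stages 7 and 6 are assumed) has an almost flat periodic autocorrelation off the peak, whereas independently drawn random binary (NRS-like) codes show much larger spurious correlation peaks.

```python
import numpy as np

def lfsr_msequence(nbits, taps):
    """One period of a maximal-length sequence from a Fibonacci LFSR."""
    state = [1] * nbits                      # any nonzero seed
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

def circular_corr(a, b):
    """Periodic cross-correlation of two +/-1 sequences for all cyclic shifts."""
    return np.array([np.dot(a, np.roll(b, k)) for k in range(len(a))])

prs = 2 * lfsr_msequence(7, taps=[7, 6]) - 1          # length-127 m-sequence as PRS
rng = np.random.default_rng(0)
nrs_a = 2 * rng.integers(0, 2, 127) - 1               # "normal" random codes as NRS
nrs_b = 2 * rng.integers(0, 2, 127) - 1

acorr = circular_corr(prs, prs)
print("PRS autocorrelation : peak =", acorr[0], " max off-peak =", np.abs(acorr[1:]).max())
print("NRS autocorrelation : max off-peak =", np.abs(circular_corr(nrs_a, nrs_a)[1:]).max())
print("NRS cross-correlation (two codes): max =", np.abs(circular_corr(nrs_a, nrs_b)).max())
```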
In this paper, a new digital watermarking scheme based on a random casting method in the DCT domain is proposed. In conventional watermarking methods, the DCT-transformed watermark is cast into the high-frequency coefficients of the original cover image, and the watermark is embedded sequentially in the casting frequencies. This kind of watermarking scheme can be attacked easily by pirates and unlawful users, because the sequential embedding and the fixed casting domain may leave structured patterns. Such a method may also damage the stego-image and is not robust to image compression algorithms. Therefore, in this paper, a new robust digital watermarking scheme is proposed in which the frequency coefficients of the DCT-transformed original cover image that carry the watermark are selected at random. The random positions of the casting frequencies can be used as an additional secret key together with the watermark key. Experimental results show that the proposed method is more robust to possible attacks than the conventional methods.
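A minimal sketch of the random-casting idea on a global 2D DCT (using scipy; the image, payload size, strength, and non-blind detector are assumptions, and the paper's full embedding/extraction procedure, including DCT-transforming the watermark, is not reproduced): a secret seed drives the random choice of coefficient positions, which then acts as a second key alongside the watermark.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(float)     # assumed cover image

key = 20230517                                           # secret seed for casting positions
n_coeffs, alpha = 256, 4.0                               # assumed payload size and strength
watermark = np.sign(np.random.default_rng(key + 1).standard_normal(n_coeffs))

# Key-driven random selection of distinct mid/high-frequency DCT positions.
pos_rng = np.random.default_rng(key)
flat = pos_rng.choice(112 * 112, n_coeffs, replace=False)
rows, cols = 16 + flat // 112, 16 + flat % 112

C = dctn(img, norm="ortho")
C[rows, cols] += alpha * watermark                       # cast the mark at secret positions
stego = idctn(C, norm="ortho")

# Detection (non-blind here, for brevity): correlate the coefficient differences
# at the keyed positions with the watermark sequence.
D = dctn(stego, norm="ortho") - dctn(img, norm="ortho")
response = float(np.dot(D[rows, cols], watermark) / (alpha * n_coeffs))
print("normalized detector response (about 1 if present):", round(response, 3))
```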
In this paper, a wavelet-domain robust watermarking technique for still images is presented. Watermark message encoding is accomplished with iterative error correction codes of reasonable decoder complexity, followed by spreading of the codeword over the whole image. Unlike the traditional technique, the proposed method utilizes the statistical properties of local areas of the image in the DWT domain for watermark embedding and extraction. To minimize the perceptual degradation of the watermarked image, we propose an image compensation strategy (ICS) to make the watermark perceptually invisible. Experimental results demonstrate the robustness of the algorithm to many attacks, such as A/D and D/A processing, rescaling, and lossy compression.
In this work, a new watermarking scheme is proposed to authenticate an image. The scheme is capable of detecting the locations of any alterations. The basic idea is to uniquely represent a given image by a short string-sequence. This string-sequence is based on a correlation-coefficient statistic that exploits the relationship between adjacent image rows/columns. To generate this string-sequence, a given image is divided into small blocks, and each block generates a different string-sequence. The Elliptic Curve Digital Signature Algorithm (ECDSA) is employed to sign these string-sequences to form an image-dependent, block-based watermark. Experimental results showed that the proposed watermark is highly sensitive to any alteration and can identify the location of any modified block in a given image.
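A hedged sketch of the block statistic: the correlation coefficient between adjacent rows and columns inside each block, quantized to a short string. The block size, quantization, smooth test image, and the use of a plain hash in place of the ECDSA signing step are assumptions for illustration.

```python
import hashlib
import numpy as np

def block_signature_string(img, block=32):
    """Per-block string built from the mean correlation coefficient of adjacent
    rows and adjacent columns inside each block, quantized to two decimals."""
    h, w = img.shape
    parts = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            blk = img[r:r + block, c:c + block].astype(float)
            row_cc = np.mean([np.corrcoef(blk[i], blk[i + 1])[0, 1] for i in range(block - 1)])
            col_cc = np.mean([np.corrcoef(blk[:, j], blk[:, j + 1])[0, 1] for j in range(block - 1)])
            parts.append(f"{row_cc:+.2f}{col_cc:+.2f}")
    return "".join(parts)

rng = np.random.default_rng(0)
yy, xx = np.meshgrid(np.linspace(0, 3, 128), np.linspace(0, 3, 128), indexing="ij")
img = 100 * np.sin(yy + 0.5) * np.cos(xx) + rng.normal(0, 5, (128, 128))  # assumed smooth image

original_string = block_signature_string(img)
# In the paper the string-sequences are signed with ECDSA; a plain hash stands in here.
print("digest of original:", hashlib.sha256(original_string.encode()).hexdigest()[:16])

tampered = img.copy()
tampered[40:48, 40:48] = 0                                # localized alteration
print("digest of tampered:", hashlib.sha256(block_signature_string(tampered).encode()).hexdigest()[:16])
```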