In this paper we propose a lossless compression algorithm for hyperspectral images based on distributed source coding; the algorithm represents a significant improvement over our prior work on the same topic and was developed during a project funded by ESA-ESTEC. In particular, it achieves good compression performance with very low complexity; moreover, it features a very good degree of error resilience.
These features are obtained by taking inspiration from distributed source coding, and in particular by employing coset codes and CRC-based decoding. Since the CRC allows a block to be decoded using a reference different from the one used at the encoder, the scheme is inherently error resilient. In particular, if a block is lost, decoding using the closest collocated block in the second previous band succeeds about 70% of the time.
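A minimal sketch of the coset/CRC mechanism just described, assuming per-sample least-significant-bit cosets and unsigned 16-bit samples (the function names, the LSB coset choice, and the packing format are illustrative assumptions, not the exact codes of the algorithm):

```python
import struct
import zlib

def coset_encode(block, nbits):
    # Transmit only each sample's coset index (its nbits LSBs), plus a
    # CRC of the full-precision block for decoder-side verification.
    syndrome = [s & ((1 << nbits) - 1) for s in block]
    crc = zlib.crc32(struct.pack(f"<{len(block)}H", *block))
    return syndrome, crc

def coset_decode(syndrome, crc, reference, nbits):
    # Reconstruct each sample as the coset member closest to the reference
    # (e.g. the collocated block in the previous band); the CRC tells the
    # decoder whether that reference was good enough. Assumes reconstructed
    # values stay within the unsigned 16-bit range.
    step = 1 << nbits
    block = []
    for s, ref in zip(syndrome, reference):
        below = ref - ((ref - s) % step)   # nearest coset member <= ref
        block.append(below if ref - below <= below + step - ref else below + step)
    ok = zlib.crc32(struct.pack(f"<{len(block)}H", *block)) == crc
    return block, ok
```

If `ok` comes back False, e.g. because the usual reference block was lost, the decoder can simply call `coset_decode` again with a different reference, such as the collocated block in the second previous band; this is precisely the behavior that yields the error resilience quantified above.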
The goal of this work is to study the feasibility of a low-complexity encoder for lossless compression of hyperspectral images. Since on-board bandwidth and power resources are limited in remote sensing systems, we adopted the distributed source coding (DSC) paradigm as a starting point for moving computational complexity from the encoder to the decoder: the advantages of placing a simple encoder on the aerial platform far outweigh the increased cost of a more complex decoder at the ground station. Two lossless compression algorithms have been developed, the former performing a scalar encoding of the syndromes transmitted for each band of the hyperspectral image, the latter implementing a vector approach and yielding a slightly better compression ratio than the scalar one. No information about spatial correlation is taken into account, while spectral correlation is explicitly exploited only at the decoder side. Experimental results confirm the asymmetric distribution of computational complexity between encoder and decoder, with a strong increase in decoding times, although the recorded encoding times are still higher than those achieved by JPEG-LS. As to the compression ratio, our codecs perform very well compared to JPEG-LS and CALIC 2D, and worse than CALIC 3D, which also carries out inter-band decorrelation and therefore requires a rather long processing time.
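The scalar variant's division of labor can be sketched as follows, assuming a per-sample LSB syndrome and using the previously decoded band as the decoder's side information (a hypothetical NumPy formulation; in the actual codec the syndrome sizes are chosen by the algorithm, not fixed):

```python
import numpy as np

def encode_band(band, nbits):
    # The encoder is memoryless across bands: it never looks at other
    # bands or at spatial neighbors, keeping on-board complexity minimal.
    return band & ((1 << nbits) - 1)

def decode_band(syndrome, side_info, nbits):
    # Spectral correlation is exploited only here: side_info is the
    # previously decoded band, assumed within half a coset step of the
    # current one.
    step = 1 << nbits
    below = side_info - ((side_info - syndrome) % step)
    above = below + step
    return np.where(side_info - below <= above - side_info, below, above)
```

The asymmetry described in the abstract is visible even in this toy version: encoding is a single bit mask per sample, while all the reconstruction work happens at the decoder.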
A novel quantization-based data-hiding method, named Rational Dither Modulation (RDM), is presented. This method retains most of the simplicity of the Dither Modulation (DM) scheme, which is known to be vulnerable to fixed-gain attacks, but modifies DM in such a way that it becomes invariant to those attacks. The basic principle behind RDM is the use of an adaptive quantization step size at both the embedder and the decoder, which depends on previously watermarked samples. When the host signal is stationary, this causes the watermarked signal to be, under some mild conditions, asymptotically stationary. Mathematical tools new to data hiding are used to determine the stationary probability density function, which is then employed to analytically establish the performance of RDM in Gaussian channels. We also show that by properly increasing the memory of the system it is possible to asymptotically approach the performance of conventional DM, while still keeping invariance to fixed-gain attacks. Moreover, RDM is compared to improved spread-spectrum (ISS) methods, showing that much higher rates can be achieved with RDM for the same bit error probability. Our theoretical results are validated experimentally; the experiments also show a moderate resilience of RDM against slowly varying gain attacks. Perhaps the main advantage of RDM over other schemes designed to cope with fixed-gain attacks is its simplicity.
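A minimal sketch of the RDM principle, assuming a binary alphabet, the L2 mean of the last L watermarked samples as the gain function, and unmarked warm-up samples (all illustrative choices; `bits` must hold one bit per sample after the first L):

```python
import numpy as np

def rdm_embed(x, bits, L=20, delta=0.5):
    # The DM quantizer step at sample k is scaled by a gain computed from
    # the L previously *watermarked* samples, so a fixed-gain attack
    # rescales host and quantizer identically.
    y = np.array(x, dtype=float)  # first L samples are left unmarked
    for k in range(L, len(y)):
        g = np.sqrt(np.mean(y[k - L:k] ** 2))  # one admissible gain function
        step = delta * g
        d = 0.5 * step * bits[k - L]           # binary dither: 0 or step/2
        y[k] = np.round((y[k] - d) / step) * step + d
    return y

def rdm_detect(y, L=20, delta=0.5):
    # The decoder recomputes the same gain from the *received* samples and
    # picks the nearer of the two shifted quantizer lattices.
    bits = []
    for k in range(L, len(y)):
        g = np.sqrt(np.mean(y[k - L:k] ** 2))
        r = y[k] / (delta * g)
        bits.append(0 if abs(r - round(r)) <= abs(r - 0.5 - round(r - 0.5)) else 1)
    return bits
```

Scaling the whole received signal by a constant multiplies both y[k] and g by that constant, leaving r, and therefore the decoded bits, untouched; this is the invariance to fixed-gain attacks.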
KEYWORDS: Digital watermarking, Distortion, Gold, Monte Carlo methods, Data hiding, Error analysis, Forward error correction, Computer programming, Modulation, Computing systems
A new dirty paper coding technique for robust watermarking is presented, based on the properties of orthogonal codes. By relying on the simple structure of these codes, a simple yet powerful technique to embed a message within the host signal is developed. In addition, the equi-energetic nature of the coded sequences, together with the adoption of a correlation-based decoder, ensures that the watermark is robust against valumetric scaling. The performance of the dirty paper coding algorithm is further improved by replacing orthogonal codes with Gold sequences and by concatenating them with an outer turbo code. To this aim, the inner decoder is modified so as to produce a soft estimate of the embedded message and to enable an iterative multistage decoding strategy. Performance analysis is carried out by means of Monte Carlo simulations, proving the validity of the novel watermarking scheme. A comparison with dirty-trellis watermarking reveals the effectiveness of the new system, which, thanks to its very low computational burden, allows the adoption of extremely powerful channel coding strategies, hence ensuring very high robustness or, thanks to the optimum embedding procedure, low distortion.
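A toy illustration of the orthogonal-codebook idea with a correlation decoder, using Hadamard rows as the equi-energetic codebook (the sign-flip rule below is a simple informed-embedding stand-in, not the paper's optimum embedding procedure, and the host length must be a power of two for `hadamard`):

```python
import numpy as np
from scipy.linalg import hadamard

def embed(x, msg, alpha=1.0):
    # Rows of a Hadamard matrix form an orthogonal, equi-energetic codebook.
    c = hadamard(len(x))[msg].astype(float)
    # Informed choice: flip the codeword sign so it is already positively
    # correlated with the host, lowering the distortion needed for
    # reliable correlation decoding.
    if np.dot(x, c) < 0:
        c = -c
    return x + alpha * c

def decode(y):
    # Ranking |correlation| is invariant to a valumetric scaling of y.
    C = hadamard(len(y)).astype(float)
    return int(np.argmax(np.abs(C @ y)))
```

In the actual scheme the codebook is built from Gold sequences and concatenated with an outer turbo code, with the inner decoder producing soft estimates for iterative multistage decoding.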
KEYWORDS: Video, Networks, Video coding, Multimedia, Pollution control, Detection and tracking algorithms, Control systems, Performance modeling, Telecommunications, Data communications
The specification and management of Quality of Service (QoS) are important aspects of mobile telecommunication networks. There is growing interest, and there are still open issues, in providing users with a wide range of multimedia services, in particular video-telephony applications in a third-generation cellular environment.
This paper presents a performance assessment of combined voice-video transmission in a realistic UMTS scenario, in which the radio interface is simulated using the environment offered by OPNET (Optimized Network Engineering Tool). In such a context it is possible to take into account the effects of interference and mobility in a classical cellular scenario. Hence, we can evaluate the effectiveness of different solutions: power control and macro-diversity at the physical layer; coding and retransmission strategies at the link layer; and handover, resource allocation, and scheduling schemes at the network layer. Power control is designed to adapt the power transmitted by mobile terminals on the basis of the interference level, traffic and channel conditions, and the required packet loss probability.
Two main aspects are considered to test system performance: the ability to support given QoS requirements for both voice and video services, using accurate models for voice sources and real H.263 video sources, and the effectiveness of system resource utilization. Toward this end, simulations are performed for various parameter configurations and different environmental conditions, with particular attention devoted to the QoS of video transmission.
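The abstract does not detail the power control law; as a rough reference point for the mechanism described above, a WCDMA-style closed-loop inner loop can be sketched as below (the step size, power limits, and SIR-target interface are assumptions, not the paper's design):

```python
def inner_loop_power_control(p_dbm, sir_db, sir_target_db,
                             step_db=1.0, p_min=-50.0, p_max=24.0):
    # One up/down command per slot steers the mobile's transmit power
    # toward the SIR target; the target itself would be tuned by an
    # outer loop to meet the required packet loss probability.
    cmd = step_db if sir_db < sir_target_db else -step_db
    return min(p_max, max(p_min, p_dbm + cmd))
```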
This paper has a threefold goal. First, an error resilience technique based on UEP-FEC, proposed by the authors in a previous work, is extended to incorporate a more efficient, adaptive intra refresh mechanism based on an error-sensitivity metric representing the vulnerability of each coded block to channel errors. Second, the validity of the UEP technique for video transmission over a WCDMA channel is evaluated, thus extending the range of applicability of the technique, which was originally conceived for and validated on DECT slow-fading channels. Third, by elaborating on experimental evidence, we show once more that a final judgment on the best-performing resilience strategy cannot be given without taking channel conditions into account, especially the Doppler spread.
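The selection step of an adaptive intra refresh driven by such a metric can be sketched as follows (the per-block `activity` term stands in for the paper's error-sensitivity metric, whose exact definition is not reproduced here):

```python
def update_and_refresh(sensitivity, activity, budget):
    # Accumulate each macroblock's vulnerability since its last INTRA
    # refresh, then refresh the `budget` most vulnerable blocks and
    # reset their scores.
    for i, a in enumerate(activity):
        sensitivity[i] += a
    worst = sorted(range(len(sensitivity)),
                   key=sensitivity.__getitem__, reverse=True)[:budget]
    for i in worst:
        sensitivity[i] = 0.0
    return worst
```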
The objective of this work is to analyze and compare H.263 resilience techniques for H.223-based real-time video transmission over narrow-band slow-fading channels. These channel conditions, typical of pedestrian video communications, are very critical: they call for Forward Error Correction (FEC), since data retransmission is not feasible due to high network delay, yet the bursty nature of the channel reduces the effectiveness of FEC techniques. In this work, two different strategies for protecting H.263 video against channel errors are considered and compared. The strategies are tested over a slow-fading wireless channel, over which the H.263 video streams, organized and multiplexed by the H.223 Multiplex Protocol, are transmitted. Both the standard FEC techniques considered by the H.223 recommendation for equal error protection of the video stream and unequal error protection (UEP) through GOB synchronization are tested. The experimental results of this comparative analysis prove the superiority of the UEP technique for H.223-based video transmission.
Several techniques have been proposed to limit the effect of error propagation in video sequences coded at very low bit rates. The best performance is achieved by combined FEC and ARQ coding strategies. However, retransmission of corrupted data frames introduces additional delay, which may be critical for real-time bidirectional communications or when the round-trip delay of data frames is high. In such cases only a FEC strategy is feasible, but fully reliable protection of the whole H.263 stream would significantly increase the overall transmission bit rate. In this paper, an unequal error protection (UEP) FEC coding strategy is therefore proposed: it protects only the most important bits of an H.263 coded video in which GOBs are periodically INTRA refreshed. ARQ techniques are not considered, to avoid delays and to simplify the receiver structure. Experimental tests are carried out by simulating video transmission over a DECT channel in an indoor environment. The results, in terms of PSNR and overall bit rate, prove the effectiveness of the proposed UEP FEC coding.
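The gist of UEP is easy to state in code: split the stream into classes and spend FEC bits only on the critical class. In the sketch below a rate-1/3 repetition code stands in for the real channel codes, and the stream partition is assumed to be given:

```python
def uep_encode(important_bits, residual_bits, rep=3):
    # Only the critical bits (headers, motion vectors, INTRA-refreshed
    # GOBs) receive FEC; the residual texture data is sent uncoded.
    protected = [b for b in important_bits for _ in range(rep)]
    return protected, residual_bits

def fec_decode(protected, rep=3):
    # Majority vote over each group of rep repeated bits.
    return [int(sum(protected[i:i + rep]) * 2 > rep)
            for i in range(0, len(protected), rep)]
```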
In this work two methods are presented that aim at reducing the color degradation introduced by CRT monitors. In the first, a mathematical model of the CRT is developed and an approximation of its inverse is used to pre-process the images to be reproduced, in such a way that color distortion is minimized. The second method is similar to the first, the only difference being the use of a neural network to model the behavior of the CRT. In both strategies, training against a set of reference colors is needed to tune the parameters of the model. Experiments were carried out to evaluate the performance of the proposed methods. The analysis of the results shows the superior accuracy of the neural-based system, due to its capability of dealing with the non-linear, non-ideal behavior of commercial monitors. On the other hand, the mathematical description of the CRT offers more flexibility of use, which in some application scenarios makes it the more desirable solution.
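For the first method, a standard choice of CRT model is the gain-offset-gamma (GOG) characterization; a minimal per-channel sketch of fitting it to reference colors and pre-compensating an image might look as follows (the GOG form is a common assumption, not necessarily the exact model of the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def gog_response(v, gain, offset, gamma):
    # Gain-offset-gamma model of one CRT channel: normalized drive value
    # v in [0, 1] -> emitted (relative) luminance.
    return np.clip(gain * v + offset, 0.0, None) ** gamma

def fit_channel(drive, luminance):
    # Tune the model parameters against a set of measured reference colors.
    params, _ = curve_fit(gog_response, drive, luminance,
                          p0=[1.0, 0.0, 2.2], maxfev=5000)
    return params

def precompensate(target, gain, offset, gamma):
    # Approximate inverse used to pre-process the image: the drive value
    # that makes the modeled CRT reproduce the target luminance.
    v = (np.maximum(target, 0.0) ** (1.0 / gamma) - offset) / gain
    return np.clip(v, 0.0, 1.0)
```

The neural variant would replace `gog_response` with a small regression network trained on the same reference colors, which is what lets it absorb the non-ideal monitor behavior mentioned above.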
Reversible compression of color images is gaining ever-increasing attention from multimedia publishing industries for collections of works of art. In fact, the availability of high-resolution, high-quality multispectral scanners demands robust and efficient coding techniques capable of capturing inter-band redundancy without destroying the underlying intra-band correlation. Although DPCM schemes (e.g., lossless JPEG) are employed for reversible compression, their straightforward extension to true-color (e.g., RGB, XYZ) image data usually leads to a negligible coding gain, or even to a performance penalty, with respect to individual coding of each color component. Previous closest neighbor (PCN) prediction has recently been proposed for lossless compression of multispectral images, in order to take advantage of inter-band data correlation. Its basic idea, predicting the value of the current pixel in the current band on the basis of the best zero-order predictor at the collocated position in the previously coded band, has been applied here by extending the set of predictors to those adopted by lossless JPEG. On a variety of color images, one of which was acquired directly from a painting by the VASARI scanner at the Uffizi Gallery at very high resolution (20 pel/mm, 8 MSB for each of the XYZ color components), experimental results show that the method is suitable for inter-band decorrelation and outperforms lossless JPEG and, to a lesser extent, PCN.
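The extended PCN predictor can be sketched directly from this description, using the seven causal predictors of lossless JPEG (border handling is left to the caller, which must guarantee r, c >= 1; the function names are ours):

```python
import numpy as np

# The seven causal predictors of lossless JPEG, as functions of the
# west, north, and north-west neighbors.
PREDICTORS = [
    lambda w, n, nw: w,
    lambda w, n, nw: n,
    lambda w, n, nw: nw,
    lambda w, n, nw: w + n - nw,
    lambda w, n, nw: w + (n - nw) // 2,
    lambda w, n, nw: n + (w - nw) // 2,
    lambda w, n, nw: (w + n) // 2,
]

def pcn_predict(cur, prev, r, c):
    # Find the predictor that would have performed best at the collocated
    # position in the previously coded band...
    pw, pn, pnw = int(prev[r, c - 1]), int(prev[r - 1, c]), int(prev[r - 1, c - 1])
    actual = int(prev[r, c])
    best = min(range(len(PREDICTORS)),
               key=lambda i: abs(actual - PREDICTORS[i](pw, pn, pnw)))
    # ...and apply the same predictor in the current band; the residual
    # cur[r, c] - prediction is then entropy coded.
    cw, cn, cnw = int(cur[r, c - 1]), int(cur[r - 1, c]), int(cur[r - 1, c - 1])
    return PREDICTORS[best](cw, cn, cnw)
```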