We revisit the well-known watermark detection problem, also known as one-bit watermarking, in the presence of an oracle attack. In the absence of an adversary, the design of the detector generally relies on probabilistic formulations (e.g., the Neyman-Pearson lemma) or on ad hoc solutions. When an adversary tries to minimize the probability of correct detection, game-theoretic approaches are possible; however, they usually assume that the attacker cannot learn the secret parameters used in detection. This is no longer the case when the adversary launches an oracle-based attack, which turns out to be extremely effective. In this paper, we discuss how the detector can learn whether it is being subjected to such an attack and take proper measures. We present two approaches based on different attacker models. The first model is very general and makes minimal assumptions about the attacker's behavior. The second model is more specific, since it assumes that the oracle attack follows a well-defined path. In both cases, a few observations are sufficient for the watermark detector to understand whether an oracle attack is ongoing.
In this paper we propose a method for perspective distortion correction of rectangular documents. This scheme exploits the orthogonality of the document edges, allowing the aspect ratio of the original document to be recovered.
The results obtained after correcting the perspective of several document images captured with a mobile phone
are compared with those achieved by digitizing the same documents with several scanner models.
KEYWORDS: Digital watermarking, Signal detection, Distortion, Sensors, Error analysis, Statistical analysis, Interference (communication), Data hiding, Transmitters, Binary data
The problem of asymptotically optimum watermark detection and embedding has been addressed in a recent paper by Merhav and Sabbag, where the optimality criterion corresponds to the maximization of the false negative error exponent for a fixed false positive error exponent. In particular, Merhav and Sabbag derive the optimum detection rule under the assumption that the detector relies on the second-order statistics of the received signal (universal detection under limited resources); however, the optimum embedding strategy in the presence of attacks and a closed formula for the false negative error exponent are not available. In this paper we extend the analysis by Merhav and Sabbag by deriving the optimum embedding strategy under Gaussian attacks and the corresponding false negative error exponent. The improvements with respect to previously proposed embedders are shown by means of plots.
KEYWORDS: Distortion, Multimedia, Digital watermarking, Computer programming, Data hiding, Signal detection, Computer security, Signal processing, Quantization, Information security
This work deals with practical and theoretical issues raised by the information-theoretical framework for authentication with distortion constraints proposed by Martinian et al.1 The optimal schemes proposed by these authors rely on random codes which bear close resemblance to the dirty-paper random codes that show up in data hiding problems. On the one hand, this would suggest implementing practical authentication methods with lattice codes, but these are too easy to tamper with in authentication scenarios: lattice codes must be randomized in order to hide their structure. One particular multimedia authentication method based on randomizing the scalar lattice was recently proposed by Fei et al.2 We reexamine this method here in the light of the aforementioned information-theoretical study, and we extend it to general lattices, thus providing a more general performance analysis for lattice-based authentication. We also propose improvements to Fei et al.'s method based on the analysis by Martinian et al., and we discuss some weaknesses of these methods and their solutions.
KEYWORDS: Digital watermarking, Sensors, Distortion, Signal detection, Signal to noise ratio, Binary data, Detection and tracking algorithms, Algorithm development, Information security, Signal processing
From December 15, 2005 to June 15, 2006 the watermarking community was challenged to remove the watermark from three different 512×512 watermarked images while maximizing the Peak Signal-to-Noise Ratio (PSNR), measured by comparing the watermarked signals with their attacked counterparts. This challenge, which bore the inviting name of Break Our Watermarking System (BOWS)1 and was part of the activities of the European Network of Excellence ECRYPT, had as its main objective to enlarge the current knowledge of attacks on watermarking systems; in this sense, BOWS was not aimed at checking the vulnerability of the specific chosen watermarking scheme against attacks, but at inquiring into the different strategies attackers would follow to achieve their target. In this paper the main results obtained by the authors when attacking the BOWS system are presented. The strategies followed can be divided into two main approaches: blind sensitivity attacks and exhaustive search of the secret key.
KEYWORDS: Digital watermarking, Computer security, Information security, Data hiding, Distortion, Monte Carlo methods, Data analysis, Optical spheres, Quantization, Statistical analysis
In this paper, the security of lattice-quantization data hiding is considered from a cryptanalytic point of view. Security in this family of methods is implemented by means of a pseudorandom dither signal which randomizes the codebook, preventing unauthorized embedding and/or decoding. However, our theoretical analysis shows that the observation of several watermarked signals can provide an attacker with sufficient information to estimate the dither signal, and we quantify the information leakage in different scenarios. The practical algorithms proposed in this paper show that such information leakage may be successfully exploited with manageable complexity, providing accurate estimates of the dither from a small number of observations. The aim of this work is to highlight the security weaknesses of lattice data hiding schemes whose security relies only on secret dithering.
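As a toy illustration of the kind of leakage described above, consider scalar distortion-compensated dithered quantization with a single repeated scalar dither (all parameter values below are hypothetical, and this is a minimal sketch, not the paper's algorithms): every watermarked sample falls, modulo the quantization step, in a narrow interval around the dither, so a circular mean of the residues recovers it.

```python
import numpy as np

rng = np.random.default_rng(2)
delta, alpha = 1.0, 0.9        # quantizer step, compensation factor
d = 0.37                       # secret dither (hypothetical value)
x = 5.0 * rng.standard_normal(2000)

# Scalar DC-DM embedding with a dithered quantizer:
q = delta * np.round((x - d) / delta) + d
y = x + alpha * (q - x)

# Attack: residues modulo delta cluster around d, so a circular
# (angular) mean of the residues estimates the dither mod delta.
ang = 2 * np.pi * np.mod(y, delta) / delta
d_hat = np.mod(np.angle(np.mean(np.exp(1j * ang))) * delta / (2 * np.pi), delta)
```

A few thousand observations pin the scalar dither down to a small fraction of the quantization step; the vector-lattice case treated in the paper requires considerably more machinery but exploits the same leakage.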
KEYWORDS: Digital watermarking, Sensors, Binary data, Signal detection, Distortion, Detection and tracking algorithms, Information security, Beryllium, Iterative methods, Quantization
Until now, the sensitivity attack was considered a serious threat to the robustness and security of spread-spectrum-based schemes, since it provides a practical method for removing watermarks with minimal attacking distortion. Nevertheless, it had not been used to tamper with other watermarking algorithms, such as those that use side information. Furthermore, the sensitivity attack had never been used to obtain falsely watermarked contents, also known as forgeries. In this paper a new version of the sensitivity attack based on a general formulation is proposed; this method requires no knowledge of the detection function or of any other system parameter, only the binary output of the detector, making it suitable for attacking most known watermarking methods, both for tampering with watermarked signals and for obtaining forgeries. The soundness of this new approach is confirmed by empirical results.
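To make the blind setting concrete (only the binary detector output is available), the following sketch bisects the segment between a watermarked signal and any signal outside the detection region; the correlation detector, the secret key, and all parameter values are hypothetical stand-ins, not the formulation proposed in the paper.

```python
import numpy as np

def blind_sensitivity_attack(x_w, detector, x_out, tol=1e-6):
    """Bisect between x_w (detected) and x_out (not detected),
    querying only the binary detector output, until a point just
    outside the detection region is found."""
    lo, hi = 0.0, 1.0          # fraction of the way from x_w to x_out
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if detector((1 - mid) * x_w + mid * x_out):
            lo = mid           # still inside the detection region
        else:
            hi = mid           # already outside: tighten from above
    return (1 - hi) * x_w + hi * x_out

# Toy correlation detector with a (hypothetical) secret key.
rng = np.random.default_rng(0)
key = rng.standard_normal(256)
detect = lambda x: float(np.dot(x, key)) / len(key) > 0.1

host = rng.standard_normal(256)
x_w = host + 0.5 * key         # spread-spectrum embedding
attacked = blind_sensitivity_attack(x_w, detect, np.zeros(256))
```

The attacked signal is no longer detected, and its distance from the watermarked one is smaller than that of trivial erasure; a full sensitivity attack iterates over directions to drive the distortion toward its minimum.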
KEYWORDS: Digital watermarking, Signal detection, Quantization, Information security, Sensors, Distortion, Radon, Detection and tracking algorithms, Matrices, Interference (communication)
In this paper, a novel method for detection in quantization-based watermarking is introduced. The method works by quantizing a projection of the host signal onto a subspace of smaller dimensionality. A theoretical performance analysis under AWGN and fixed-gain attacks is carried out, showing great improvements over traditional spread-spectrum-based methods operating under the same conditions of embedding distortion and attacking noise. A security analysis for oracle-like attacks is also performed, proposing, for the first time in the literature, a sensitivity attack suited to quantization-based methods and showing a trade-off between security level and performance; even so, the new method offers significant improvements in security, once again, over spread-spectrum-based methods facing the same kind of attacks.
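The quantized-projection idea can be sketched minimally as follows (all dimensions, the step size, and the threshold are illustrative choices, not the paper's parameters): the host is projected onto a random low-dimensional subspace, the projection is quantized, and detection checks how close the projection of a test signal lies to the quantization lattice.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, delta = 512, 64, 1.0     # host dim, subspace dim, quantizer step
P, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal basis

def embed(x):
    """Quantize the k-dim projection of the host onto delta*Z^k,
    leaving the orthogonal complement of the subspace untouched."""
    y = P.T @ x
    return x + P @ (delta * np.round(y / delta) - y)

def detect(z, thr=0.04):
    """Declare the watermark present when the projection lies close
    to the lattice (small mean squared quantization residual)."""
    r = P.T @ z
    e = r - delta * np.round(r / delta)
    return float(np.mean(e**2)) < thr * delta**2

x = rng.standard_normal(n)
w = embed(x)
```

The watermarked signal survives mild AWGN (only the noise component inside the subspace inflates the residual), while an unwatermarked host yields a residual near the uniform value delta²/12 and is rejected.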
KEYWORDS: Distortion, Computer programming, Digital watermarking, Signal to noise ratio, Data hiding, Quantization, Forward error correction, Information security, Binary data, Multimedia
Structured codes are known to be necessary in practical implementations of capacity-approaching "dirty paper" schemes. In this paper we study the performance of a dirty paper technique recently proposed by Erez and ten Brink which, to the authors' knowledge, is applied to data hiding here for the first time, and we compare it with other existing approaches. Specifically, we compare it with conventional side-informed schemes previously used in data hiding, based on repetition and turbo coding. We show that a significant improvement can be achieved using Erez and ten Brink's proposal. We also study the considerations that must be taken into account when these codes are used in data hiding, mainly related to perceptual questions.
KEYWORDS: Radon, Darmstadtium, Digital watermarking, Data hiding, Electronic filtering, Digital signal processing, Matrices, Computer security, Information security, Optimal filtering
A game-theoretic approach is introduced to quantify possible information leaks in spread-spectrum data hiding schemes. Such leaks would give the attacker knowledge of the set partitions and/or the pseudorandom sequence, which in most existing methods are key-dependent. The bit error probability is used as the payoff of the game. Since a closed-form strategy is not available in the general case, several simplifications leading to near-optimal strategies are also discussed. Finally, experimental results supporting our analysis are presented.
KEYWORDS: Expectation maximization algorithms, Data hiding, Distortion, Digital watermarking, Computer programming, Forward error correction, Lead, Reliability, Quantization, Signal to noise ratio
Distortion-Compensated Dither Modulation (DC-DM), also known as the Scalar Costa Scheme (SCS), has been theoretically shown to be near-capacity-achieving thanks to its use of side information at the encoder. In practice, channel coding is needed in conjunction with this quantization-based scheme in order to approach the achievable rate limit. The most powerful coding methods use iterative decoding (turbo codes, LDPC), but they require knowledge of the channel model, which previous works on the subject have assumed to be known by the decoder. We investigate here the possibility of blind iterative decoding of DC-DM, using maximum likelihood estimation of the channel model within the decoding procedure. The unknown attack is assumed to be additive and i.i.d. Before each iterative decoding step, a new optimal estimate of the attack model is computed from the reliability information provided by the previous step; this new model is used for the next decoding stage, and the procedure is repeated until convergence. We show that the iterative Expectation-Maximization (EM) algorithm is well suited to the model estimation problem, as it can be conveniently intertwined with iterative decoding.
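The interplay between reliability information and model estimation can be sketched with a stripped-down example (binary antipodal symbols, a Gaussian attack, and flat priors standing in for real decoder extrinsics; all values are hypothetical): each EM iteration computes symbol posteriors under the current noise model and then re-estimates the noise variance under those posteriors.

```python
import numpy as np

rng = np.random.default_rng(3)
a, sigma_true, n = 1.0, 0.6, 5000
bits = rng.integers(0, 2, n)
r = a * (2 * bits - 1) + sigma_true * rng.standard_normal(n)

def em_noise_std(r, a, iters=50, sigma0=1.0):
    """EM estimate of the noise std for observations r = c + n with
    c in {-a, +a}. Flat symbol priors stand in for the extrinsic
    probabilities a turbo/LDPC decoder would supply."""
    s2 = sigma0**2
    for _ in range(iters):
        # E-step: posterior probability that the symbol is +a.
        lp = np.exp(-(r - a)**2 / (2 * s2))
        lm = np.exp(-(r + a)**2 / (2 * s2))
        p = lp / (lp + lm)
        # M-step: re-estimate the variance under the posteriors.
        s2 = np.mean(p * (r - a)**2 + (1 - p) * (r + a)**2)
    return float(np.sqrt(s2))

sigma_hat = em_noise_std(r, a)
```

In the full scheme the flat priors are replaced by the decoder's soft outputs at each iteration, so the channel estimate and the decoding sharpen each other until convergence.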