In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve checkout efficiency and the customer experience for retailers. Digimarc Barcode is a machine-readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track checkout efficiency because it relates closely to their profitability. Increasing IPM by a few percent could save retailers millions of dollars, giving them a strong incentive to add Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% when using Digimarc Barcode compared to a traditional barcode.
A method of watermarking print-ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors, so spot colors need to be supported by a Digimarc Barcode embedder. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC payload into an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in, as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.
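The Digimarc embedding algorithm itself is proprietary, but the general idea of adding an imperceptible, payload-keyed pattern across an image channel can be illustrated with a minimal additive spread-spectrum sketch. The function name, the tiling scheme, and the strength parameter below are illustrative assumptions, not Digimarc's method.

```python
import numpy as np

def embed_spread_spectrum(channel, payload_bits, key=42, strength=2.0, tile=128):
    """Tile a payload-keyed pseudorandom pattern over one image channel.

    channel      -- 2-D float array, e.g. a single ink separation scaled to 0..255
    payload_bits -- iterable of 0/1 bits used to seed the pattern (illustrative only)
    strength     -- embedding strength; lower is less perceptible but less robust
    """
    seed = key ^ int("".join(str(b) for b in payload_bits), 2)
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=(tile, tile))
    # Tile the pattern across the whole channel, mirroring the "whole package" coverage idea.
    reps = (channel.shape[0] // tile + 1, channel.shape[1] // tile + 1)
    full = np.tile(pattern, reps)[:channel.shape[0], :channel.shape[1]]
    return np.clip(channel + strength * full, 0, 255)
```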
The “Internet of Things” is an appealing concept aiming to assign digital identity to both physical and digital
everyday objects. One way of achieving this goal is to embed the identity in the object itself by using digital
watermarking. In the case of printed physical objects, such as consumer packages, this identity can be later read
from a digital image of the watermarked object taken by a camera. In many cases, the object might occupy only
a small portion of the image, and an attempt to read the watermark payload from the whole image can lead
to unnecessary processing. This paper proposes a statistical learning-based algorithm for localizing watermarked
physical objects in images taken by a digital camera. The algorithm is specifically designed for and tested on watermarked
consumer packages read by an off-the-shelf barcode imaging scanner. By employing simple noise-sensitive features
borrowed from blind image steganalysis and a linear classifier, we are able to estimate probabilities of watermark
presence in every part of the image significantly faster than running a watermark detector. These probabilities
are used to pinpoint areas that are recommended for further processing. We compare our adaptive approach with
a system designed to read watermarks from a set of fixed locations and achieve significant savings in processing
time while improving overall detector robustness.
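A minimal sketch of the block-scoring idea follows, assuming a pre-trained linear classifier (weights, bias) and using simple high-pass residual moments as a stand-in for the steganalysis features used in the paper.

```python
import numpy as np

def block_scores(gray, weights, bias, block=64):
    """Rank image blocks by estimated probability of watermark presence.

    gray    -- 2-D grayscale image as a float array
    weights -- learned linear-classifier weights over the block features (assumed given)
    bias    -- classifier bias term
    """
    # First-order horizontal/vertical residuals emphasize the noise-like watermark signal.
    rh = np.diff(gray, axis=1)
    rv = np.diff(gray, axis=0)
    scores = {}
    for i in range(0, gray.shape[0] - block + 1, block):
        for j in range(0, gray.shape[1] - block + 1, block):
            bh = rh[i:i + block, j:j + block - 1]
            bv = rv[i:i + block - 1, j:j + block]
            feats = np.array([bh.var(), bv.var(),
                              np.abs(bh).mean(), np.abs(bv).mean()])
            # Logistic output of the linear classifier approximates P(watermark | block).
            scores[(i, j)] = 1.0 / (1.0 + np.exp(-(feats @ weights + bias)))
    # Blocks with the highest scores are handed to the full watermark detector first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```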
Most steganographic schemes for real digital media embed messages by minimizing a suitably defined distortion
function. In practice, this is often realized by syndrome codes which offer near-optimal rate-distortion performance.
However, the distortion functions are designed heuristically and the resulting steganographic algorithms
are thus suboptimal. In this paper, we present a practical framework for optimizing the parameters of additive
distortion functions to minimize statistical detectability. We apply the framework to digital images in both spatial
and DCT domain by first defining a rich parametric model which assigns a cost of making a change at every
cover element based on its neighborhood. Then, we present a practical method for optimizing the parameters
with respect to a chosen detection metric and feature space. We show that the size of the margin between support
vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend
to be more secure w.r.t. blind steganalysis. The parameters obtained by the Nelder-Mead simplex-reflection
algorithm for spatial and DCT-domain images are presented and the new embedding methods are tested by blind
steganalyzers utilizing various feature sets. Experimental results show that as few as 80 images are sufficient for
obtaining good candidates for parameters of the cost model, which allows us to speed up the parameter search.
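The parameter search can be sketched as follows, assuming placeholder callables embed() and features() that stand in for the paper's cost-model embedder and chosen feature set; the SVM margin 2/||w|| serves as the fast detection metric and Nelder-Mead performs the derivative-free search.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import LinearSVC

def detectability(theta, covers, embed, features, payload=0.5):
    """Fast detection metric: margin of a soft-margin linear SVM on cover vs. stego features.

    theta    -- candidate parameters of the additive cost model
    covers   -- a small set of cover images (the paper reports ~80 are enough)
    embed    -- embed(cover, theta, payload) -> stego image (cost model + syndrome coding)
    features -- features(image) -> 1-D steganalysis feature vector
    embed and features are placeholders for the paper's components, not real APIs.
    """
    X = np.vstack([np.array([features(c) for c in covers]),
                   np.array([features(embed(c, theta, payload)) for c in covers])])
    y = np.array([0] * len(covers) + [1] * len(covers))
    clf = LinearSVC(C=1.0).fit(X, y)
    # Margin between support vectors, 2 / ||w||; per the abstract, parameter
    # choices minimizing this margin tend to be more secure.
    return 2.0 / np.linalg.norm(clf.coef_)

# Derivative-free Nelder-Mead simplex search over the cost-model parameters
# (theta0, covers, embed, and features must be supplied by the user):
# result = minimize(detectability, theta0, args=(covers, embed, features),
#                   method="Nelder-Mead")
```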
A sensor fingerprint is a unique noise-like pattern caused by slightly varying pixel dimensions and inhomogeneity of the
silicon wafer from which the sensor is made. The fingerprint can be used to prove that an image came from a specific
digital camera. The presence of a camera fingerprint in an image is usually established using a detector that evaluates
cross-correlation between the fingerprint and image noise. The complexity of the detector is thus proportional to the
number of pixels in the image. Although computing the detector statistic for a few-megapixel image takes several
seconds on a single-processor PC, the processing time becomes impractically large if a sizeable database of camera
fingerprints needs to be searched through. In this paper, we present a fast searching algorithm that utilizes special
"fingerprint digests" and sparse data structures to address several tasks that forensic analysts will find useful when
deploying camera identification from fingerprints in practice. In particular, we develop fast algorithms for finding if a
given fingerprint already resides in the database and for determining whether a given image was taken by a camera
whose fingerprint is in the database.
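The digest idea can be sketched as follows; the digest size k and the decision threshold are illustrative assumptions, not the calibrated values from the paper.

```python
import numpy as np

def make_digest(fingerprint, k=10000):
    """Keep only the k largest-magnitude fingerprint samples and their pixel indices.

    Correlation is then evaluated on this small, fixed subset of pixels instead
    of the full multi-megapixel fingerprint.
    """
    flat = fingerprint.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def digest_correlation(image_noise, digest):
    """Normalized correlation between an image noise residual and a fingerprint digest."""
    idx, values = digest
    x = image_noise.ravel()[idx]
    x = x - x.mean()
    v = values - values.mean()
    return float(x @ v / (np.linalg.norm(x) * np.linalg.norm(v) + 1e-12))

def search_database(image_noise, digests, threshold=0.05):
    """Return candidate cameras whose digest correlation exceeds a decision threshold.

    digests is assumed to be a dict camera_id -> digest.
    """
    return {cam: c for cam, digest in digests.items()
            if (c := digest_correlation(image_noise, digest)) > threshold}
```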
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome
coding and trellis-coded quantization, and contrast its performance with the corresponding
rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact
of making an embedding change at that element (single-letter distortion). The problem is to embed a given
payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of
matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past.
Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance
arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal
binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory
requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners,
we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive
experimental results for a large set of relative payloads and for different distortion profiles, including the wet
paper channel.
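For binary embedding changes, the bound referenced above can be written in the following standard form (notation assumed here): the optimal change probabilities follow a Gibbs distribution over the single-letter costs.

```latex
% Rate-distortion bound for binary embedding with single-letter costs \rho_i:
% the change probabilities \pi_i follow a Gibbs distribution, with \lambda
% chosen so that the total entropy equals the payload of m bits.
\pi_i = \frac{e^{-\lambda \rho_i}}{1 + e^{-\lambda \rho_i}}, \qquad
\sum_{i=1}^{n} H(\pi_i) = m, \qquad
D_{\min} = \sum_{i=1}^{n} \pi_i \, \rho_i,
\qquad H(x) = -x \log_2 x - (1 - x)\log_2(1 - x).
```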
It is a well-established result that the steganographic capacity of perfectly secure stegosystems grows linearly with
the number of cover elements; that is, secure steganography has a positive rate. In practice, however, neither the
Warden nor the Steganographer has perfect knowledge of the cover source and thus it is unlikely that perfectly
secure stegosystems for complex covers, such as digital media, will ever be constructed. This justifies study of
secure capacity of imperfect stegosystems. Recent theoretical results from batch steganography, supported by
experiments with blind steganalyzers, point to an emerging paradigm: whether steganography is performed in a
large batch of cover objects or a single large object, there is a wide range of practical situations in which secure
capacity rate is vanishing. In particular, the absolute size of the secure payload appears to grow only with the square
root of the cover size. In this paper, we study the square root law of steganographic capacity and give a formal
proof of this law for imperfect stegosystems, assuming that the cover source is a stationary Markov chain and
the embedding changes are mutually independent.
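Informally, and under the paper's assumptions (a stationary Markov-chain cover source and mutually independent embedding changes), the law can be summarized as follows.

```latex
% Square root law (informal): n cover elements, m_n embedding changes made
% mutually independently in an imperfect stegosystem with Markov-chain covers.
\lim_{n\to\infty} \frac{m_n}{\sqrt{n}} = \infty
  \;\Longrightarrow\; \text{asymptotically detectable with arbitrary accuracy},
\qquad
\lim_{n\to\infty} \frac{m_n}{\sqrt{n}} = 0
  \;\Longrightarrow\; \text{asymptotically undetectable}.
```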
This paper presents a large scale test of camera identification from sensor fingerprints. To overcome the problem of
acquiring a large number of cameras and taking the images, we utilized Flickr, an existing on-line image sharing site. In
our experiment, we tested over one million images spanning 6896 individual cameras covering 150 models. The
gathered data provides practical estimates of false acceptance and false rejection rates, giving us the opportunity to
compare the experimental data with theoretical estimates. We also test images against a database of fingerprints,
thus simulating the situation in which a forensic analyst wants to determine whether a given image was taken by one of the already
known cameras. The experimental results set a lower bound on the performance and reveal several interesting new facts
about camera fingerprints and their impact on error analysis in practice. We believe that this study will be a valuable
reference for forensic investigators in their effort to use this method in court.
In this paper, we propose a general framework and practical coding methods for constructing steganographic
schemes that minimize the statistical impact of embedding. By associating a cost of an embedding change with
every element of the cover, we first derive bounds on the minimum theoretically achievable embedding impact
and then propose a framework to achieve it in practice. The method is based on syndrome codes with low-density
generator matrices (LDGM). The problem of optimally encoding a message (e.g., with the smallest embedding
impact) requires a binary quantizer that performs near the rate-distortion bound. We implement this quantizer
using LDGM codes with a survey propagation message-passing algorithm. Since LDGM codes are guaranteed
to achieve the rate-distortion bound, the proposed methods are guaranteed to achieve the minimal embedding
impact (maximal embedding efficiency). We provide a detailed technical description of the method for practitioners
and demonstrate its performance on matrix embedding.
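The matrix-embedding baseline mentioned at the end can be illustrated with the classical [7,4] Hamming code, which hides 3 bits in 7 cover LSBs with at most one change; this is the reference scheme, not the LDGM/survey-propagation coder itself.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: columns are 1..7 in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def matrix_embed(cover_bits, message_bits):
    """Embed 3 message bits into 7 cover LSBs with at most one change.

    The syndrome H x of the modified block equals the message; if the cover block
    already has the right syndrome, nothing is changed.
    """
    x = np.array(cover_bits) % 2
    syndrome = (H @ x) % 2
    diff = (syndrome + np.array(message_bits)) % 2   # = syndrome XOR message
    if diff.any():
        # The columns of H enumerate 1..7, so `diff` read as a binary number
        # directly indexes the single coefficient that must be flipped.
        pos = int(diff[0]) * 4 + int(diff[1]) * 2 + int(diff[2]) - 1
        x[pos] ^= 1
    return x

def matrix_extract(stego_bits):
    """Recover the 3 message bits as the syndrome of the stego block."""
    return (H @ (np.array(stego_bits) % 2)) % 2
```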