Locally most-powerful detector for secret key estimation in spread spectrum image steganography
Author(s):
Shalin P. Trivedi;
Rajarathnam Chandramouli
We define sequential steganography as the class of embedding
algorithms that hide messages in consecutive (time, spatial, or
frequency domain) features of a host signal. This paper presents a
steganalysis method that estimates the secret key used in
sequential steganography. A theory is developed for detecting
abrupt jumps in the statistics of the stego signal during
steganalysis. Stationary and non-stationary host signals with low,
medium and high SNR embedding are considered. A locally most
powerful steganalysis detector for the low SNR case is also
derived. Several techniques to make the steganalysis algorithm
work for non-stationary digital image steganalysis are also
presented. Extensive experimental results are shown to illustrate
the strengths and weaknesses of the proposed steganalysis
algorithm.
Kernel Fisher discriminant for steganalysis of JPEG hiding methods
Author(s):
Jeremiah Joseph Harmsen;
William A. Pearlman
Kernel Fisher discriminants are used to detect the presence of JPEG-based hiding methods. The feature vector for the kernel discriminant is constructed from the quantized DCT coefficient indices. Using methods developed in kernel theory, a classifier is trained in a high-dimensional feature space which is capable of discriminating original from stego images. The algorithm is tested on the F5 hiding method.
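A minimal numpy sketch of a two-class kernel Fisher discriminant of the kind described (not the authors' implementation); the RBF kernel, the regularization, and the toy usage with DCT-index histograms are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    """RBF (Gaussian) kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kfd_train(X, y, gamma=0.05, reg=1e-3):
    """Kernel Fisher discriminant for labels y in {0, 1}.

    X holds one feature vector per image, e.g. a histogram of quantized
    DCT coefficient indices. Returns a projection function and a threshold."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    means, N = [], reg * np.eye(n)
    for c in (0, 1):
        idx = np.where(y == c)[0]
        Kc = K[:, idx]
        means.append(Kc.mean(axis=1))
        # Within-class scatter in the (implicit) high-dimensional feature space.
        H = np.eye(len(idx)) - np.full((len(idx), len(idx)), 1.0 / len(idx))
        N += Kc @ H @ Kc.T
    alpha = np.linalg.solve(N, means[1] - means[0])
    threshold = 0.5 * (alpha @ means[0] + alpha @ means[1])

    def project(Xnew):
        return rbf_kernel(Xnew, X, gamma) @ alpha  # compare against threshold

    return project, threshold

# Hypothetical usage: X_cover, X_stego are per-image DCT-index histograms.
# project, t = kfd_train(np.vstack([X_cover, X_stego]),
#                        np.r_[np.zeros(len(X_cover)), np.ones(len(X_stego))])
# is_stego = project(X_test) > t
```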
On estimation of secret message length in LSB steganography in spatial domain
Author(s):
Jessica Fridrich;
Miroslav Goljan
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using Least
Significant Bit (LSB) embedding at random pixel positions. We introduce the concept of a weighted stego image and
then formulate the problem of determining the unknown message length as a simple optimization problem. The
methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of
the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant
estimator accuracy analysis using statistical image models.
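As a rough illustration of the weighted stego image idea only (not the authors' estimator): the sketch below interpolates between the stego image and its LSB-flipped version, and solves the resulting one-dimensional optimization in closed form. The local-mean cover predictor, the scipy filter, and the factor of two relating flip rate to message length are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_message_length(stego):
    """Estimate the relative length of an LSB-replacement message.

    Assumes an 8-bit grayscale array. The weighted stego image
    x_lambda = (1 - lambda) * s + lambda * s_bar interpolates between the
    stego image s and its LSB-flipped version s_bar; the lambda minimising
    the distance to a crude cover estimate approximates the fraction of
    flipped pixels. Random embedding flips roughly half the visited pixels,
    so the relative message length is about 2 * lambda."""
    s = stego.astype(np.float64)
    s_bar = (stego ^ 1).astype(np.float64)     # flip every LSB
    cover_est = uniform_filter(s, size=3)      # simple local-mean cover predictor
    d = s - s_bar                              # +/-1 everywhere
    # Closed-form minimiser of || (1 - lam) * s + lam * s_bar - cover_est ||^2
    lam = np.sum(d * (s - cover_est)) / np.sum(d * d)
    return float(np.clip(2.0 * lam, 0.0, 1.0))
```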
Steganalysis using color wavelet statistics and one-class support vector machines
Author(s):
Siwei Lyu;
Hany Farid
Steganographic messages can be embedded into digital images in ways
that are imperceptible to the human eye. These messages, however,
alter the underlying statistics of an image. We previously built
statistical models using first- and higher-order wavelet statistics,
and employed a non-linear support vector machine (SVM) to detect
steganographic messages. In this paper we extend these results to
exploit color statistics, and show how a one-class SVM greatly
simplifies the training stage of the classifier.
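A minimal sketch of the one-class-SVM-on-wavelet-statistics pipeline; the pywt/scikit-learn calls, the db4 wavelet, the reduced statistics set (mean, variance, skewness, kurtosis per subband), and the nu parameter are assumptions, not the paper's exact feature set.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.svm import OneClassSVM

def wavelet_stats(img_rgb, wavelet="db4", levels=3):
    """First-order statistics of wavelet detail subbands, per color channel."""
    feats = []
    for c in range(3):
        coeffs = pywt.wavedec2(img_rgb[:, :, c].astype(np.float64),
                               wavelet, level=levels)
        for detail in coeffs[1:]:            # (cH, cV, cD) at each level
            for band in detail:
                v = band.ravel()
                feats += [v.mean(), v.var(), skew(v), kurtosis(v)]
    return np.array(feats)

# Hypothetical usage: train on cover images only; outliers are flagged as stego.
# X_cover = np.vstack([wavelet_stats(im) for im in cover_images])
# clf = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(X_cover)
# suspicious = clf.predict(wavelet_stats(test_image)[None, :]) == -1
```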
Steganalysis using modified pixel comparison and complexity measure
Author(s):
Sos S. Agaian;
Benjamin M. Rodriguez;
Glenn B. Dietrich
This article presents a new approach that focuses on the following problems: detection and localization of
stego-informative regions within clean and noisy digital images, and removal of hidden data while minimizing the
statistical differences between the stego image and the image after the stego information has been removed. The new
approach is based on a new pixel comparison and a new complexity measure. This measure identifies the informative and
stego-like regions of an image, with the objective of steganalysis through the retention of informative regions and the
discarding of stego-like areas. The areas that are harder to detect are scanned with an alternate method in an attempt
to find areas classified as good for embedding. This allows for a higher detection rate and a lower false-positive rate.
Experimental results are presented in the complete paper; the data gathered are tabulated for a set of 100+ digital
images. The images used in the analysis vary in size, format, and color. Several commonly employed tools (e.g., S-Tools,
SecurEngine, and wbStego3.51) were used to hide data in the digital images for analysis. The new method has shown
remarkable detection accuracy and localization of embedded information for LSB embedding. We have also shown that the
presented method works even in the presence of noise in the image. In addition, this method shows that an image can be
divided into ideal detection areas and ideal embedding areas. With this in mind, the image can be scanned with the ideal
detection methods to reduce both false positives and false negatives. This technique can be applied to data compression
and to hiding secret information, in both the time and transform domains. It is also independent of the order of color
vectors in the palette.
Performance evaluation of blind steganalysis classifiers
Author(s):
Mark T. Hogan;
Guenole C. M. Silvestre;
Neil J. Hurley
Steganalysis is the art of detecting and/or decoding secret messages embedded in multimedia content. The topic
has received considerable attention in recent years due to the malicious use of multimedia documents for covert
communication. Steganalysis algorithms can be classified as either blind or non-blind depending on whether or
not the method assumes knowledge of the embedding algorithm. In general, blind methods involve the extraction
of a feature vector that is sensitive to embedding and is subsequently used to train a classifier. This classifier can
then be used to determine the presence of a stego-object, subject to an acceptable probability of false alarm. In
this work, the performance of three classifiers, namely Fisher linear discriminant (FLD), neural network (NN)
and support vector machines (SVM), is compared using a recently proposed feature extraction technique. It
is shown that the NN and SVM classifiers exhibit similar performance exceeding that of the FLD. However,
steganographers may be able to circumvent such steganalysis algorithms by preserving the statistical transparency
of the feature vector at the embedding. This motivates the use of classification algorithms based on the entire
document. Such a strategy is applied using SVM classification for DCT, FFT and DWT representations of an
image. The performance is compared to that of the feature-extraction approach.
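A minimal sketch of how such a classifier comparison could be run with scikit-learn; the specific estimators, hyperparameters, and cross-validation protocol are assumptions, and the feature matrix X is whatever steganalysis feature extractor is under study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, folds=5):
    """Cross-validated accuracy of FLD, NN and SVM on steganalysis features.

    X: one feature vector per image, y: 0 = cover, 1 = stego."""
    classifiers = {
        "FLD": LinearDiscriminantAnalysis(),
        "NN":  MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
        "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    }
    return {name: cross_val_score(clf, X, y, cv=folds).mean()
            for name, clf in classifiers.items()}
```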
Searching for the stego-key
Author(s):
Jessica Fridrich;
Miroslav Goljan;
David Soukal
Steganalysis in the wide sense consists of first identifying suspicious objects and then performing further analysis during which
we try to identify the steganographic scheme used for embedding, recover the stego key, and finally extract the
hidden message. In this paper, we present a methodology for identifying the stego key in key-dependent
steganographic schemes. Previous approaches for stego key search were exhaustive searches looking for some
recognizable structure (e.g., header) in the extracted bit-stream. However, if the message is encrypted, the search
will become much more expensive because for each stego key, all possible encryption keys would have to be tested.
In this paper, we show that for a very wide range of steganographic schemes, the complexity of the stego key search
is determined only by the size of the stego key space and is independent of the encryption algorithm. The correct
stego key can be determined through an exhaustive stego key search by quantifying statistical properties of samples
along portions of the embedding path. The correct stego key is then identified by an outlier sample distribution.
Although the search methodology is applicable to virtually all steganographic schemes, in this paper we focus on
JPEG steganography. Search techniques for spatial steganographic techniques are treated in our upcoming paper.
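A simplified illustration of the search idea only (not the paper's statistic or embedding-path model): score every candidate key by a statistic computed along that key's pseudo-random path and flag the outlier. The chi-square statistic on pairs of values, the PRNG stand-in, and the MAD outlier rule are all assumptions.

```python
import numpy as np

def path_statistic(samples):
    """Chi-square pairs-of-values statistic along a candidate embedding path.

    Assumes 8-bit samples. Along the true path, LSB embedding tends to
    equalise the counts of values 2k and 2k+1, making this statistic an
    outlier compared with wrong keys."""
    hist = np.bincount(samples, minlength=256).astype(np.float64)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 0
    return np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])

def search_stego_key(pixels, key_space, path_len=10000):
    """Score every candidate stego key and return the strongest outlier."""
    flat = pixels.ravel()
    scores = []
    for key in key_space:
        rng = np.random.default_rng(key)   # stand-in for the embedder's PRNG
        path = rng.choice(flat.size, size=path_len, replace=False)
        scores.append(path_statistic(flat[path]))
    scores = np.asarray(scores)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    outlier = int(np.argmax(np.abs(scores - med) / mad))
    return key_space[outlier], scores
```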
Quantitative evaluation of pairs and RS steganalysis
Author(s):
Andrew David Ker
We give initial results from a new project which performs statistically accurate evaluation of the reliability
of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of
simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around
30,000 images we have measured the performance of these methods and suggest changes which lead to significant
improvements.
Particular results from the project presented here include notes on the distribution of the RS statistic,
the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously
compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss
improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a
substantial performance improvement, even to the extent of surpassing the RS statistic which was previously
thought superior for grayscale images.
We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential
pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.
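A simplified sketch of the RS group counts discussed above (not the full Fridrich et al. message-length estimator with its quadratic solution); the group size, the mask, and the flipping functions are the commonly used choices, stated here as assumptions.

```python
import numpy as np

def _smoothness(g):
    return np.sum(np.abs(np.diff(g)))

def _flip(g, mask, direction):
    g = g.copy()
    if direction == +1:                 # F1: 0<->1, 2<->3, ...
        g[mask] ^= 1
    else:                               # F-1: -1<->0, 1<->2, ...
        g[mask] = ((g[mask] + 1) ^ 1) - 1
    return g

def rs_counts(image, mask=(0, 1, 1, 0)):
    """Regular/singular group counts for masks M and -M (grayscale image)."""
    mask = np.array(mask, dtype=bool)
    n = mask.size
    flat = image.astype(np.int32).ravel()
    flat = flat[: flat.size - flat.size % n].reshape(-1, n)
    R = {+1: 0, -1: 0}
    S = {+1: 0, -1: 0}
    for g in flat:
        f0 = _smoothness(g)
        for direction in (+1, -1):
            f1 = _smoothness(_flip(g, mask, direction))
            if f1 > f0:
                R[direction] += 1
            elif f1 < f0:
                S[direction] += 1
    # For a typical cover, R[+1] ~ R[-1] and S[+1] ~ S[-1]; LSB embedding
    # pulls the two pairs apart, so the gap grows with the payload.
    return R, S
```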
Steganalysis with JPEG and GIF images
Author(s):
Rui Du;
Larry E. Guthrie;
Doug Buchy
Steganalysis methods for detecting secret messages embedded in JPEG and GIF images are presented in this paper. Usually, the DCT coefficients are modified when secret data is embedded into JPEG images, and the pixel indices are changed in GIF data hiding. Whether it is the DCT coefficients or the image pixels that are changed in data hiding, the introduced noise degrades the smoothness of the image. For JPEG images, the change of smoothness at the block boundaries is used to distinguish clean and stego images. For GIF images, the change of smoothness between neighboring pixels is used in steganalysis. For stego GIF images created by reordering colors in the palette, we discriminate stego from clean images by checking the palette for the existence of a pattern.
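A minimal sketch of a block-boundary smoothness (blockiness) measure of the kind described for JPEG images; the exact statistic and any decision threshold in the paper are not reproduced here.

```python
import numpy as np

def jpeg_blockiness(pixels, block=8):
    """Average absolute jump across 8x8 block boundaries minus the average
    jump inside blocks; embedding noise in DCT coefficients tends to raise
    the boundary term relative to the in-block term."""
    p = pixels.astype(np.float64)
    col_diff = np.abs(np.diff(p, axis=1))
    row_diff = np.abs(np.diff(p, axis=0))
    at_boundary_c = col_diff[:, block - 1::block].mean()
    at_boundary_r = row_diff[block - 1::block, :].mean()
    inside_c = np.delete(col_diff, np.s_[block - 1::block], axis=1).mean()
    inside_r = np.delete(row_diff, np.s_[block - 1::block], axis=0).mean()
    return 0.5 * (at_boundary_c + at_boundary_r) - 0.5 * (inside_c + inside_r)
```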
Fast audio watermarking: concepts and realizations
Author(s):
Michael Arnold;
Zongwei Huang
In this paper we present concepts and corresponding implementations to maximize the speed of audio watermarking
encoders in order to be applicable in different scenarios. To motivate the development and implementation of fast audio
watermarking encoders, application scenarios requiring high embedding speed are presented.
Different concepts with assumptions concerning the underlying watermarking algorithms are discussed. The paper
presents the necessary audio stream preparation and the corresponding implementation realizing the fast audio watermarking
methods. The quality of the watermarked audio tracks and the robustness of the embedded watermarks will be
verified by experimental tests. Enhancements of the fundamental principle concerning the distribution of audio tracks are
discussed.
An informed synchronization scheme for audio data hiding
Author(s):
Alejandro LoboGuerrero;
Patrick Bas;
Joel Lienard
This paper deals with the problem of synchronization in the particular case of audio data hiding. In this kind of application the goal is to increase the information of an audio data set by inserting an imperceptible message. An innovative synchronization scheme that uses informed coding theory is proposed. The goal is to combine two different techniques in a complementary way in order to obtain an enhanced synchronization system. To that end, the classical spread spectrum synchronization is analyzed and this classical scheme is improved by the use of side information. Informed coding theory is presented and revisited taking into account the problem of synchronization, to enable the selection of signal realizations called Feature Time Points (FTP) which are correlated with a code. Such considerations lead to the definition of informed synchronization. The proposed scheme and the definition of FTP are then presented, taking into account the robustness criterion. Finally, results and a comparison with classical spread spectrum synchronization schemes are presented.
Digital watermarking based on process of speech production
Author(s):
Toshiyuki Sakai;
Naohisa Komatsu
The speech production process can be divided into three parts, namely the glottal source, articulation, and
radiation. We propose a watermarking method for speech that manipulates the articulation in the
process of speech production. We apply our method to CS-ACELP (the G.729 standard), an ITU-T
approved recommendation that provides a low-bit-rate 8 kb/s speech coding algorithm with wireline quality. The
watermarked vocal tract model is expressed by codebooks made from LSP (Line Spectrum Pair) parameters. The
codebook vectors replace some of the extracted LSPs, and speech is synthesized using the replaced LSPs. We generate
two codebooks using a unique method to modify the LSPs of the spectrum envelope. Shortening the width
of the LSPs creates one watermarked codebook, and the second codebook is created by stretching the LSPs on
both sides of each formant. There are ten LSP dimensions in each voice frame of the CS-ACELP decoder. In the
detection process, the weighted Euclidean distance (WED) between the watermarked codebooks and the
extracted LSPs is calculated, and whether the watermark is embedded is judged using the calculated
WED. Evaluation tests on detection accuracy are discussed with simulation results.
Two-dimensional audio watermark for MPEG AAC audio
Author(s):
Ryuki Tachibana
Since digital music is often stored in a compressed file, it is desirable that an audio watermarking method in a content management system handles compressed files. Using an audio watermarking method that directly manipulates compressed files makes it unnecessary to decompress the files before embedding or detection, so more files can be processed per unit time. However, it is difficult to detect a watermark in a compressed file that has been compressed after the file was watermarked.
This paper proposes an MPEG Advanced Audio Coding (AAC) bitstream watermarking method using a two-dimensional pseudo-random array. Detection is done by correlating the absolute values of the recovered MDCT coefficients and the pseudo-random array. Since the embedding algorithm uses the same pseudo-random values for two adjacent overlapping frames and the detection algorithm selects the better frame in the two by comparing detected watermark strengths, it is possible to detect a watermark from a compressed file that was compressed after the watermark was embedded in the original uncompressed file. Though the watermark is not detected as clearly in this case, the watermark can still be detected even when the watermark was embedded in a compressed file and the file was then decompressed, trimmed, and compressed again.
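A minimal sketch of the correlation detector described (recovered MDCT magnitudes correlated with a two-dimensional pseudo-random array); the frame-pair selection step is omitted, and the PRNG, normalization, and threshold are assumptions.

```python
import numpy as np

def detect_2d_watermark(mdct_frames, key, strength_threshold=3.0):
    """Correlate |MDCT| of a block of frames with a 2-D pseudo-random array.

    mdct_frames: array of shape (n_frames, n_bins) of recovered MDCT
    coefficients; key seeds the same pseudo-random array used at embedding."""
    rng = np.random.default_rng(key)
    prn = rng.choice([-1.0, 1.0], size=mdct_frames.shape)   # 2-D +/-1 array
    mag = np.abs(mdct_frames).astype(np.float64)
    mag -= mag.mean()
    # Normalised so that, without a watermark, the score is roughly zero-mean
    # with standard deviation of order one.
    score = np.sum(mag * prn) / (np.linalg.norm(mag) + 1e-12)
    return score, score > strength_threshold
```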
Cepstral domain modification of audio signals for data embedding: preliminary results
Author(s):
Kaliappan Gopalan
A method of embedding data in an audio signal using cepstral domain modification is described. Based on successful
embedding in the spectral points of perceptually masked regions in each frame of speech, the technique was first
extended to embedding in the log spectral domain. This extension resulted in approximately 62 bits/s of embedding
with a bit error rate (BER) of less than 2 percent for clean cover speech (from the TIMIT database), and about 2.5
percent for noisy speech (from an air traffic controller database), when all frames - including silence and transitions
between voiced and unvoiced segments - were used. Bit error rate increased significantly when the log spectrum in the
vicinity of a formant was modified.
In the next procedure, embedding by altering the mean cepstral values of two ranges of indices was studied. Tests on
both a noisy utterance and a clean utterance indicated barely noticeable perceptual change in speech quality when lower
range of cepstral indices - corresponding to the vocal tract region - was modified in accordance with the data. With an
embedding capacity of approximately 62 bits/s - using one bit per each frame regardless of frame energy or type of
speech - initial results showed a BER of less than 1.5 percent for a payload capacity of 208 embedded bits using the
clean cover speech. A BER of less than 1.3 percent resulted for the noisy host with a capacity of 316 bits. When the
cepstrum was modified in the region of excitation, BER increased to over 10 percent. With quantization causing no
significant problem, the technique warrants further studies with different cepstral ranges and sizes. Pitch-synchronous
cepstrum modification, for example, may be more robust to attacks. In addition, cepstrum modification in regions of
speech that are perceptually masked - analogous to embedding in frequency masked regions - may yield imperceptible
stego audio with low BER.
ICA-based robust logo image watermarking
Author(s):
Thai Duy Hien;
Zensho Nakao;
Yen-Wei Chen
Digital watermarking is a technology proposed to address the issue of copyright protection for digital content. In this
paper, we have developed a new robust logo watermarking technique. Watermark embedding is performed in the wavelet
domain of the host image. The human visual system (HVS) is exploited by building a spatial mask based on a stochastic
model for content-adaptive digital watermarking. Independent component analysis (ICA) is introduced to extract the logo
watermark. Our simulation results suggest that ICA can be used to extract exactly the watermark that was hidden in the image
and show that our system is robust under various important types of attacks.
Collocated Dataglyphs for large-message storage and retrieval
Author(s):
Rakhi C. Motwani;
Jeff A. Breidenbach;
John R. Black
In contrast to the security and integrity of electronic files, printed documents are vulnerable to damage and
forgery due to their physical nature. Researchers at Palo Alto Research Center utilize DataGlyph technology
to render digital characteristics to printed documents, which provides them with the facility of tamper-proof
authentication and damage resistance. This DataGlyph document is known as GlyphSeal. Limited DataGlyph
carrying capacity per printed page restricted the application of this technology to a domain of graphically simple
and small-sized single-paged documents. In this paper the authors design a protocol motivated by techniques
from the networking domain and back-up strategies, which extends the GlyphSeal technology to larger-sized,
graphically complex, multi-page documents. This protocol provides fragmentation, sequencing and data loss
recovery. The Collocated DataGlyph Protocol renders large glyph messages onto multiple printed pages and
recovers the glyph data from rescanned versions of the multi-page documents, even when pages are missing,
reordered or damaged. The novelty of this protocol is the application of ideas from RAID to the domain of
DataGlyphs. The current revision of this protocol is capable of generating at most 255 pages if page recovery
is desired, and does not provide enough data density to store highly detailed images in a reasonable amount of
page space.
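A minimal illustration of the RAID-style idea mentioned above: a single XOR parity page allows any one missing or damaged page to be reconstructed from the surviving ones. The byte layout and the single-parity-page choice are assumptions, not the protocol's actual encoding.

```python
from functools import reduce

def add_parity_page(pages):
    """pages: list of equal-length byte strings, one per printed page.
    Returns the pages plus one XOR parity page (RAID-4/5 style)."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)
    return pages + [parity]

def recover_missing_page(pages_with_parity, missing_index):
    """Rebuild the single page lost to damage or a missing sheet:
    the XOR of all surviving pages (including parity) equals the lost page."""
    present = [p for i, p in enumerate(pages_with_parity) if i != missing_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), present)
```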
Machine vision applications of digital watermarking
Author(s):
John Stach
In the realm of digital watermarks applied to analog media, publications have mostly focused on applications such as document authentication, security, and links where synchronization is merely used to read the payload. In recent papers, we described issues associated with the use of inexpensive cameras to read digital watermarks [5], and we have discussed product development issues associated with the use of watermarks for several applications [3,4,6]. However, the applications presented in these papers have also focused on the detection and use of the watermark payload as the critical technology. In this paper, we will extend those ideas by examining a wider range of analog media such as objects and surfaces and by examining machine vision applications where the watermark synchronization method (i.e., synchronizing the watermark orientation so that a payload can be extracted) and the design characteristics of the watermark itself are as critical to the application as recovering the watermark payload. Some examples of machine vision applications that could benefit from digital watermarking technology are autonomous navigation, device and robotic control, assembly and parts handling, and inspection and calibration systems for nondestructive testing and analysis. In this paper, we will review some of these applications and show how combining synchronization and payload data can significantly enhance and broaden many machine vision applications.
Fingerprinting of music scores
Author(s):
Jonathan Irons;
Martin Schmucker
Publishers of sheet music are generally reluctant to distribute their content via the Internet. Although the advantages of online sheet music distribution are numerous, the potential risk of Intellectual Property Rights (IPR) infringement, e.g. illegal online distribution, stifles any propensity to innovate.
While active protection techniques only deter external risk factors, additional technology is necessary to adequately treat further risk factors. For several media types, including music scores, watermarking technology has been developed, which embeds information in data by suitable data modifications. Furthermore, fingerprinting or perceptual hashing methods have been developed and are being applied especially to audio. These methods allow the identification of content without prior modifications.
In this article we motivate the development of watermarking and fingerprinting technologies for sheet music. Starting from potential limitations of watermarking methods, we explain why fingerprinting methods are important for sheet music and address potential applications. Finally, we introduce a concept for fingerprinting of sheet music.
Watermarking and fingerprinting for electronic music delivery
Author(s):
Michiel van der Veen;
Aweke N. Lemma;
Ton Kalker
In recent years we have seen many initiatives to provide electronic music delivery (EMD) services. We observe that a key success factor in EMD is the transparency of the distribution service. We could compare it with the traditional music distribution via compact discs. By buying a CD, a user acquires a 'free' control of the content, i.e. he can copy it, he can play it multiple times etc. In the electronic equivalent, the usage and digital rights management rules should be transparent, and preferably comparable to the classical method of distributing contents.
It is the goal of this paper to describe a technology concept that facilitates, from a consumer perspective, a simple EMD service. Digital watermarking and fingerprinting are the two key technologies involved. The watermarking technology is used to convey the information that uniquely identifies a specific transaction, and the fingerprint technology is adopted for key management and security purposes. In this paper, we discuss how these two technologies are integrated in such a way that watermark security (i.e. the inability to maliciously alter the watermark) and distribution efficiency (i.e. the ability to serve multiple consumers with one distribution PC) are maximized.
Improvement to CDF grounded lattice codes
Author(s):
Brett A. Bradley
Lattice codes have been evaluated in the watermarking literature based on their behavior in the presence of additive
noise. In contrast with spread spectrum methods, the host image does not interfere with the watermark. Such evaluation
is appropriate to simulate the effects of operations like compression, which are effectively noise-like for lattice codes.
Lattice codes do not perform nearly as well when processing that fundamentally alters the characteristics of the host
image is applied. One type of modification that is particularly detrimental to lattice codes involves changing the
amplitude of the host. In a previous paper on the subject, we describe a modification to lattice codes that makes them
invariant to a large class of amplitude modifications; those that are order preserving. However, we have shown that in its
pure form the modification leads to problems with embedding distortion and noise immunity that are image dependent.
In the current work we discuss an improved method for handling the aforementioned problem. Specifically, the set of
quantization bins that is used for the lattice code is governed by a finite state machine. The finite state machine approach
to quantization bin assignment requires side information in order for the quantizers to be recovered exactly. Our paper
describes in detail two methods for recovery when such an approach is used.
Advanced audio watermarking benchmarking
Author(s):
Jana Dittman;
Martin Steinebach;
Andreas Lang;
Sascha Zmudizinski
Digital watermarking is envisaged as a potential technology for copyright protection and manipulation recognition. A key issue in the usage of robust watermarking is the evaluation of robustness and security. StirMark Benchmark has been extended to provide a benchmarking suite for audio watermarking in addition to existing still-image evaluation solutions. In particular we give an overview of recent advancements and current questions in robustness and transparency evaluation, in complexity and performance issues, and in security and capacity questions. Furthermore, we introduce benchmarking for content-fragile watermarking by summarizing design aspects and concluding with essential benchmarking requirements.
The Watermark Evaluation Testbed (WET)
Author(s):
Hyung Cook Kim;
Hakeem Ogunleye;
Oriol Guitart;
Edward J. Delp
While digital watermarking has received much attention within the academic community and private sector
in recent years, it is still a relatively young technology. As such there are few widely accepted benchmarks
that can be used to validate the performance claims asserted by members of the research community. This
lack of a universally adopted benchmark has hindered research and created confusion within the general public.
To facilitate the development of a universally adopted benchmark, we are developing at Purdue University a
web-based system that will allow users to evaluate the performance of watermarking techniques. This system
consists of reference software that includes both watermark embedders and watermark detectors, attack scenarios,
evaluation modules and a large image database. The ultimate goal of the current work is to develop a platform
that one can use to test the performance of watermarking methods and obtain fair, reproducible comparisons
of the results. We feel that this work will greatly stimulate new research in watermarking and data hiding by
allowing one to demonstrate how new techniques are moving forward the state of the art. We will refer to this
system as the Watermark Evaluation Testbed or WET.
Thread-based benchmarking deployment
Author(s):
Sebastien Lugan;
Benoit Macq
Information and knowledge are nowadays widely and easily distributed thanks to
electronic journals, news, mailing lists and forums. It is, however, more
difficult to deploy algorithmic and programming collaboration: the
heterogeneity of the programming languages and the operating systems used by
researchers constitutes a major obstacle to the design of a common testing
platform. Current solutions exist to support collaborative work. Generally,
these solutions impose specific programming languages and/or operating systems
on developers, and some other specific rules have to be respected. These heavy
constraints slow down the adoption of collaborative programming platforms.
The OpenWatermark project proposes a modern architecture for cooperative
programming exchanges that takes all these aspects into account. Developers
work with their favorite programming languages and operating systems as long as
the OpenWatermark platform supports them.
In this paper, we will present the OpenWatermark platform
(www.openwatermark.org) and its application to the benchmarking of image
watermarking algorithms.
Human perception of geometric distortions in images
Author(s):
Iwan Setyawan;
Reginald L. Lagendijk
We present in this paper the results of our study on the human perception of geometric distortions in images. The ultimate goal of this study is to devise an objective measurement scheme for geometric distortions in images, which should have a good correspondence to human perception of the distortions. The study is divided into two parts. The first part of the study is the design and implementation of a user-test to measure human perception of geometric distortions in images. The result of this test is then used as a basis to evaluate the performance of the second part of the study, namely the objective quality measurement scheme. Our experiment shows that our objective quality measurement has good correspondence to the result of the user test and performs much better than a PSNR measurement.
Rate-distortion analysis of steganography for conveying stereovision disparity maps
Author(s):
Toshiyuki Umeda;
Ana Barros Dias Torrão Batolomeu;
Filipe Andre Liborio Francob;
Damien Delannay;
Benoit M. M. Macq
Transmission of 3-D images in a way that is compliant with traditional 2-D representations can be done through the embedding of disparity maps within the 2-D signal. This approach enables the transmission of stereoscopic video sequences or images on traditional analogue TV channels (PAL or NTSC) or printed photographic images. The aim of this work is to study the achievable performance of such a technique. The embedding of disparity maps has to be seen as a global rate-distortion problem. The embedding capacity through steganography is determined by the transmission channel noise and by the bearable distortion on the watermarked image. The distortion of the 3-D image displayed as two stereo views depends on the rate allocated to the complementary information required to build those two views from one reference 2-D image. Results from work on the scalar Costa scheme are used to optimize the embedding of the disparity map compressed bit stream into the reference image. A method for computing the optimal trade-off between the disparity map distortion and the embedding distortion as a function of the channel impairments is proposed. The goal is to get a similar distortion on the left (the reference image) and the right (the disparity-compensated image) images. We show that in typical situations the embedding of 2 bits/pixel in the left image, while the disparity map is compressed at 1 bit per pixel, leads to a good trade-off. The disparity map is encoded with a strong error correcting code, including synchronisation bits.
Orthogonal dirty paper coding for informed data hiding
Author(s):
Andrea Abrardo;
Mauro Barni
A new dirty paper coding technique for robust watermarking is presented based on the properties of orthogonal codes. By relying on the simple structure of these codes, a simple yet powerful technique to embed a message within the host signal is developed. In addition, the equi-energetic nature of the coded sequence, together with the adoption of a correlation-based decoder, ensures
that the watermark is robust against valumetric scaling. The performance of the dirty paper coding algorithm is further improved by replacing the orthogonal codes with Gold sequences and by concatenating them with an outer turbo code. To this aim, the inner decoder is modified so as to produce a soft estimate of the embedded message and to make possible the adoption of an iterative multistage decoding strategy. Performance analysis is carried out by means of Monte Carlo simulations, proving the validity of the novel watermarking scheme. A comparison with dirty-trellis watermarking reveals the effectiveness of the new system, which, thanks to its very low computational burden, allows the adoption of extremely powerful channel coding strategies, hence ensuring a very high robustness or, thanks to the optimum embedding procedure, a low distortion.
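A minimal sketch of the equi-energetic-codeword-plus-correlation-decoder idea (a simplified additive variant for illustration, not the paper's informed embedding or its Gold-sequence/turbo extension); the Hadamard construction, alpha, and the block length are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def embed_orthogonal(host, message_index, alpha=1.0, n=64):
    """Add the selected equi-energetic orthogonal codeword to a host block of length n."""
    C = hadamard(n).astype(np.float64)   # rows are orthogonal +/-1 codewords
    return host + alpha * C[message_index], C

def decode_orthogonal(received, C):
    """Correlation decoder: multiplying the received block by a positive gain
    scales all correlations equally, so the argmax (the decoded message) is
    unchanged -- the valumetric-scaling robustness mentioned above."""
    scores = C @ received
    return int(np.argmax(scores))
```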
Capacity of data-hiding system subject to desynchronization
Author(s):
Stephane Pateux;
Gaetan Le Guelvouit;
Jonathan Delhumeau
Data hiding has been widely studied in recent years. Many applications are
targeted, such as copyright management and metadata embedding for rich-media
applications. In all these applications, it is crucial to estimate the
capacity of data hiding. Much work has therefore been devoted to studying
watermarking performance by considering data hiding as a kind of channel
communication. However, in all these studies, an assumption is made about
perfect knowledge of all attack parameters (either known in advance or later
estimated by modeling the attacks). In particular, a malicious attacker may
bias its attack so that parameter estimation is not perfect
(desynchronization in parameters). Furthermore, random geometrical attacks
on images such as those proposed by the StirMark benchmark (more generally,
desynchronization attacks) show that perfect synchronization may not be
achievable either. These last kinds of attacks are among the most effective
and lack theoretical modeling for capacity estimation. We therefore propose a
new model for taking the desynchronization phenomenon into account in data
hiding (coupled with degrading attacks, i.e. optimal SAWGN attacks). Further,
thanks to the use of game theory, we state bounds on the capacity that may be
obtained by data hiding systems when subject to desynchronization.
Adaptive quantization watermarking
Author(s):
Job C. Oostveen;
Ton Kalker;
Marius Staring
Although quantization index modulation (QIM) schemes are optimal from an information-theoretic capacity-maximization point of view, their robustness may be too restricted for widespread practical usage. Most papers assume that host signal samples are identically distributed according to a single source distribution and therefore do not need to consider local adaptivity. In practice, however, there may be several reasons for introducing locally varying watermark parameters.
In this paper, we study how the Scalar Costa Scheme (which we take as a representative member of the class of QIM schemes) can be adapted to achieve practical levels of robustness and imperceptibility. We do this by choosing the basic watermark parameters on the basis of a perceptual model. An important aspect is the robustness of the statistic on which the adaptation rule is based. The detector needs to be able to accurately re-estimate the value of the parameters as used by the embedder, even in the presence of strong channel noise. One way to achieve this is to base the adaptation rule on an aggregate of the pixel values in a neighborhood around the relevant pixel. We present an analysis of the robustness-locality trade-off, based on a model for the bit error probability.
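A minimal sketch of Scalar Costa Scheme embedding and minimum-distance detection with a locally varying quantization step; the local-mean adaptation rule, alpha, and the base step are illustrative assumptions, not the paper's perceptual model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def scs_embed(x, bits, delta, alpha=0.6, key_dither=0.0):
    """Scalar Costa Scheme: the per-sample step delta may vary locally."""
    d = key_dither + bits * (delta / 2.0)        # bit-dependent coset offset
    q = np.round((x - d) / delta) * delta + d    # nearest coset point
    return x + alpha * (q - x)

def scs_detect(y, delta, key_dither=0.0):
    """Decode each sample's bit by the nearest coset (suboptimal but simple)."""
    dist = []
    for b in (0, 1):
        d = key_dither + b * (delta / 2.0)
        q = np.round((y - d) / delta) * delta + d
        dist.append(np.abs(y - q))
    return (dist[1] < dist[0]).astype(np.uint8)

def local_delta(pixels, base_delta=4.0, size=5):
    """Adaptation rule in the spirit of the abstract: derive the step from an
    aggregate (here the local mean) over a neighbourhood, so the detector can
    re-estimate it from the received image even under channel noise."""
    m = uniform_filter(pixels.astype(np.float64), size=size)
    return base_delta * (0.5 + m / 255.0)
```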
Secure background watermarking based on video mosaicing
Author(s):
Gwenael Doerr;
Jean-Luc Dugelay
Digital watermarking was introduced during the last decade as a complementary technology to protect digital multimedia data. Watermarking digital video material has already been studied, but it is still usually regarded as watermarking a sequence of still images. However, it is well-known that such straightforward frame-by-frame approaches result in low performance in terms of security. In particular, basic intra-video collusion attacks can easily defeat basic embedding strategies. In this paper, an extension of the simple temporal frame averaging attack will be presented, which basically considers frame registration to enlarge the averaging temporal window size. With this attack in mind, video processing, especially video mosaicing, will be considered to produce a temporally coherent watermark. In other words, an embedding strategy will be proposed which ensures that all the projections of a given 3D point in a movie set carry the same watermark sample along a video scene. Finally, there will be a discussion regarding the impact of this novel embedding strategy on different relevant parameters in digital watermarking e.g. capacity, visibility, robustness and security.
Synchronization technique to detect MPEG video frames for watermark retrieval
Author(s):
Enrico Hauer;
Stefan Thiemert
A main problem of I-frame based watermarking schemes is their lack of robustness regarding re-encoding attacks on
MPEG material. After a normal post-processing modification the structure of the Groups of Pictures (GOPs) in the
modified video should be the same as in the original one. An attack, which has the goal to destroy the watermark, could
change this structure. The position of the marked intra coded frames will be shifted. Without detecting the correct frame
position an incorrect watermark message could be retrieved. Our conceptual paper proposes a possible solution. A
combined watermark, consisting of two watermark messages, is embedded into the video material to increase the
robustness of the watermark. The first part of the message is the synchronization information to locate the previously
marked frames. The second part contains the information watermark. Our approach is to design a template pattern based
on the synchronization information. With the pattern the original I-frame can be detected and the correct watermark
information can be retrieved. After the recovery of the attacked video material the watermark can be correctly retrieved.
We present the concept and the evaluation of the first test results.
Feature-based watermarking scheme for MPEG-I/II video authentication
Author(s):
Yuewei Dai;
Stefan Thiemert;
Martin Steinebach
This paper presents a new content-fragile watermarking algorithm for the detection and localization of malicious manipulations of MPEG-I/II videos. While being fragile to malicious manipulations, the watermarking scheme is robust against content-preserving manipulations like re-encoding processes. It is a bitstream watermarking method based on 8x8 DCT blocks. One of the main advantages of our scheme is the possibility of localizing positions within the video where modifications occurred. Another main advantage is the portability of the scheme to other multimedia documents based on the 8x8 DCT block domain, e.g. JPEG images. The framework of the watermarking scheme can be divided into three main parts: watermark construction, watermark embedding and watermark detection. We derive a Content Based Message (CBM) from the multimedia document, based on a partial energy relationship between two groups of DCT blocks. Embedding the CBM is based on the Differential Energy Watermarking (DEW) concept. In the detection process we compare the CBM and the retrieved watermark to detect and locate manipulations. Besides the algorithm we present experimental results to demonstrate the feasibility of the scheme. We discuss four experiments representing four typical kinds of malicious manipulations.
Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes
Author(s):
Wolfgang Funk
The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC).
We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark.
We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.
Video steganography based on bit-plane decomposition of wavelet transformed video
Author(s):
Hideki Noda;
Tomofumi Furuta;
Michiharu Niimi;
Eiji Kawaguchi
This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
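A minimal sketch of the bit-plane complexity measure at the heart of BPCS steganography; the 8x8 block size and the 0.3 complexity threshold are typical values, stated here as assumptions rather than the paper's exact settings.

```python
import numpy as np

def bitplane_complexity(block):
    """Border complexity of a binary bit-plane block: the number of 0-1
    transitions between 4-connected neighbours, divided by the maximum
    possible number of transitions."""
    block = block.astype(np.uint8)
    h, w = block.shape
    changes = (np.sum(block[:, 1:] != block[:, :-1]) +
               np.sum(block[1:, :] != block[:-1, :]))
    max_changes = h * (w - 1) + w * (h - 1)
    return changes / max_changes

def embeddable(block, threshold=0.3):
    """BPCS replaces noise-like (high-complexity) blocks with secret-data
    blocks (conjugated if the data block itself is too simple)."""
    return bitplane_complexity(block) >= threshold
```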
Lossless data embedding with file size preservation
Author(s):
Jessica Fridrich;
Miroslav Goljan;
Qing Chen;
Vivek Pathak
In lossless watermarking, it is possible to completely remove the embedding distortion from the watermarked image
and recover an exact copy of the original unwatermarked image. Lossless watermarks have found applications in fragile
authentication, integrity protection, and metadata embedding. This is especially important for medical and military
images. Frequently, lossless embedding disproportionately increases the file size for image formats that contain lossless
compression (RLE BMP, GIF, JPEG, PNG, etc.). This partially negates the advantage of embedding information as
opposed to appending it. In this paper, we introduce lossless watermarking techniques that preserve the file size. The
formats addressed are RLE-encoded bitmaps and sequentially encoded JPEG images. The lossless embedding for the
RLE BMP format is designed in such a manner as to guarantee that the message extraction and original image
reconstruction is insensitive to different RLE encoders, image palette reshuffling, as well as to removing or adding
duplicate palette colors. The performance of both methods is demonstrated on test images by showing the capacity,
distortion, and embedding rate. The proposed methods are the first examples of lossless embedding methods that
preserve the file size for image formats that use lossless compression.
Reversible compressed domain watermarking by exploiting code space inefficiency
Author(s):
Bijan G. Mobasseri;
Robert J. Berger II
Algorithms that perform data hiding directly in the compressed domain, without the need for partial decompression or transcoding, are highly desirable. We base this work on the recognition that only a limited portion of the possible codespace is actually used by any specific code. Therefore, if bits are chosen appropriately, watermarking them will place a codeword outside of the valid codespace. Variable length codes in compressed bitstreams, however, have virtually no redundancy to losslessly carry hidden data, and altered VLCs will likely remain valid. In this work, we examine the bitstream not as individual VLCs but as codeword pairs. Pairing codewords conceptually shrinks the percentage of available codespace that is actually being used. This idea has a number of key advantages: the watermark embedding is mathematically lossless, the file size is preserved, and the watermarked bitstream remains format-compliant. This algorithm is most appropriate for compression algorithms that are error-resilient. For example, the error concealment property of MPEG-4 or H.263+ can also counter bit “errors” caused by the watermarking while playing the video. The off-line portion of the algorithm needs to be run only once for a given VLC table, regardless of how many media employ the table. This allows the algorithm to be applied in real time, both in embedding and removal, due to its implementation in the compressed domain.
Reversible watermarking for images
Author(s):
Arno J. van Leest;
Michiel van der Veen;
Fons Bruekers
Reversible watermarking is a technique for embedding data in a digital host signal
in such a manner that the original host signal can be restored in a bit-exact
manner in the restoration process. In this paper, we present a general framework
for reversible watermarking in multi-media signals. A mapping function, which
is in general neither injective nor surjective, is used to map the input signal
to a perceptually equivalent output signal. The resulting unused sample values of
the output signal are used to encode additional (watermark) information and
restoration data.
At the 2003 SPIE conference, examples of this technique applied to digital audio
were presented. In this paper we concentrate on color and gray-scale images.
A particular challenge in this context is not only the optimization of rate-distortion,
but also the measure of perceptual quality (i.e. the distortion). In the literature,
distortion is often expressed in terms of PSNR, making comparison among different
techniques relatively straightforward. We show that our general framework for
reversible watermarking applies to digital images and that results can be presented
in terms of PSNR rate-distortions. However, the framework allows for more subtle
signal manipulations that are not easily expressed in terms of PSNR distortion.
These changes involve manipulations of contrast and/or saturation.
A high-capacity invertible data-hiding algorithm using a generalized reversible integer transform
Author(s):
John Stach;
Adnan M. Alattar
A high-capacity, data-hiding algorithm that lets the user restore the original host image after retrieving the hidden data is presented in this paper. The proposed algorithm can be used for watermarking valuable or sensitive images such as original art works or military and medical images. The proposed algorithm is based on a generalized, reversible, integer transform, which calculates the average and pair-wise differences between the elements of a vector extracted from the pixels of the image. The watermark is embedded into a set of carefully selected coefficients by replacing the least significant bit (LSB) of every selected coefficient by a watermark bit. Most of these coefficients are shifted left by one bit before replacing their LSBs. Several conditions are derived and used in selecting the appropriate coefficients to ensure that they remain identifiable after embedding. In addition, the selection of coefficients ensures that the embedding process does not cause any overflow or underflow when the inverse of the transform is computed. To ensure reversibility, the locations of the shifted coefficients and the original LSBs are embedded in the selected coefficients before embedding the desired payload. Simulation results of the algorithm and its performance are also presented and discussed in the paper.
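A minimal sketch of a reversible average-and-differences integer transform of the kind such algorithms build on (a simple unweighted variant for illustration, not necessarily the paper's exact generalized transform, and without the LSB-substitution and coefficient-selection steps).

```python
import numpy as np

def forward_transform(u):
    """u: integer vector of pixel values. Returns the integer average and the
    pairwise differences against u[0]; every step is integer-valued, so the
    transform is exactly invertible."""
    u = np.asarray(u, dtype=np.int64)
    v0 = int(np.floor(u.mean()))          # integer (floor) average
    d = u[1:] - u[0]                      # differences against the first element
    return v0, d

def inverse_transform(v0, d):
    """Exact inverse: since v0 = u0 + floor(sum(d) / n), recover u0 first."""
    d = np.asarray(d, dtype=np.int64)
    n = d.size + 1
    u0 = v0 - int(np.floor(d.sum() / n))
    return np.concatenate(([u0], d + u0))
```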
Reversible watermarking using two-way decodable codes
Author(s):
Bijan G. Mobasseri;
Domenick Cinalli
Traditional variable length codes (VLC) used in compressed media are brittle and suffer synchronization loss following bit errors. To counter this situation, resynchronizing VLCs (RVLC) have been proposed to help identify, limit and possibly reverse channel errors. In this work we observe that watermark bits are in effect forced bit errors and are amenable to the application of error-resilient techniques. We have developed a watermarking algorithm around a two-way decodable RVLC. The inherent error control property of the code is now exploited to implement reversible watermarking in the compressed domain. A new decoding algorithm is developed to reestablish synchronization that is lost as a result of watermarking. Resynchronization is achieved by disambiguating among many potential markers that are abundantly emulated in data. The algorithm is successfully applied to several MPEG-2 streams.
Integer DCT-based reversible watermarking for images using companding technique
Author(s):
Bian Yang;
Martin Schmucker;
Wolfgang Funk;
Christoph Busch;
Shenghe Sun
We present a high capacity reversible watermarking scheme using companding technique over integer DCT
coefficients of image blocks. This scheme takes advantage of integer DCT coefficients' Laplacian-shape-like
distribution, which permits low distortion between the watermarked image and the original one caused by the bit-shift
operations of the companding technique in the embedding process.
In our scheme, we choose AC coefficients in the integer DCT domain for the bit-shift operation, and therefore the
capacity and the quality of the watermarked image can be adjusted by selecting different numbers of coefficients of
different frequencies. To prevent overflows and underflows in the spatial domain caused by modification of the DCT
coefficients, we design a block discrimination structure to find suitable blocks that can be used for embedding without
overflow or underflow problems. We can also use this block discrimination structure to embed an overhead of location
information of all blocks suitable for embedding. With this scheme, watermark bits can be embedded in the saved LSBs
of coefficient blocks, and retrieved correctly during extraction, while the original image can be restored perfectly.
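A minimal sketch of a companding-style bit-shift embedding rule for a single integer AC coefficient; the threshold value is an assumption, and the block discrimination that prevents spatial-domain overflow and underflow is omitted.

```python
def embed_bit(coeff, bit, T=8):
    """Reversible companding embedding in one integer AC coefficient.
    Small coefficients (|c| < T) are expanded by a left bit-shift and carry one
    payload bit; larger ones are shifted outward so the two ranges stay disjoint."""
    if -T <= coeff < T:
        return 2 * coeff + bit
    return coeff + T if coeff >= T else coeff - T

def extract_bit(marked, T=8):
    """Inverse mapping: recover (original coefficient, embedded bit or None)."""
    if -2 * T <= marked < 2 * T:
        return marked >> 1, marked & 1      # floor shift / LSB, valid for negatives too
    if marked >= 2 * T:
        return marked - T, None
    return marked + T, None
```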
Towards fraud-proof ID documents using multiple data hiding technologies and biometrics
Author(s):
Justin Picard;
Claus Vielhauer D.D.S.;
Niels Thorwirth
Identity documents, such as ID cards, passports, and driver's licenses, contain textual information, a portrait of
the legitimate holder, and possibly some other biometric characteristics such as a fingerprint or handwritten
signature. As prices for digital imaging technologies fall, making them more widely available, we have seen an
exponential increase in the ease of forgery and in the number of counterfeiters who can effectively forge documents. Today,
with only limited knowledge of technology and a small amount of money, a counterfeiter can effortlessly replace a
photo or modify identity information on a legitimate document to the extent that it is very difficult to differentiate
from the original.
This paper proposes a virtually fraud-proof ID document based on a combination of three different data
hiding technologies: digital watermarking, 2-D bar codes, and Copy Detection Pattern, plus additional biometric
protection. As will be shown, this combination of data hiding technologies protects the document against any
forgery, in principle without any requirement for other security features. To prevent a genuine document from being
used by an illegitimate user, biometric information is also covertly stored in the ID document, to be used for
identification at the detector.
Visual communications with side information via distributed printing channels: extended multimedia and security perspectives
Author(s):
Sviatoslav V. Voloshynovskiy;
Oleksiy Koval;
Frederic Deguillaume;
Thierry Pun
In this paper we address visual communications via printing channels from an information-theoretic point of view as communications with side information. The solution to this problem addresses important aspects of multimedia data processing, security and management, since printed documents are still the most common form of visual information representation. Two practical approaches to side information communications for printed documents are analyzed in the paper. The first approach represents a layered joint source-channel coding for printed documents. This approach is based on a self-embedding concept where information is first encoded assuming a Wyner-Ziv set-up and then embedded into the original data using a Gel'fand-Pinsker construction and taking into account properties of printing channels.
The second approach is based on Wyner-Ziv and Berger-Flynn-Gray set-ups and assumes two separated communications channels where an appropriate distributed coding should be elaborated. The first printing channel is considered to be a direct visual channel for images ("analog" channel with degradations). The second "digital channel" with constrained capacity is considered to be an appropriate auxiliary channel. We demonstrate both theoretically and practically how one can benefit from this sort of "distributed paper communications".
Print protection using high-frequency fractal noise
Author(s):
Khaled Walid Mahmoud;
Jonathon M. Blackledge;
Sekharjit Datta;
James A. Flint
All digital images are band-limited to a degree that is determined by the spatial extent of the point spread function; the
bandwidth of the image being determined by the optical transfer function. In the printing industry, the limit is determined
by the resolution of the printed material. By band-limiting the digital image in such a way that the printed document
maintains its fidelity, it is possible to use the out-of-band frequency space to introduce low amplitude coded data that
remains hidden in the image. In this way, a covert signature can be embedded into an image to provide a digital
watermark, which is sensitive to reproduction. In this paper a high frequency fractal noise is used as a low amplitude
signal. A statistically robust solution to the authentication of printed material using high-frequency fractal noise is proposed here,
which is based on cross-entropy metrics to provide a statistical confidence test. The fractal watermark is based on
the application of self-affine fields, which is suitable for documents containing a high degree of texture. In principle, this new
approach will allow batch tracking to be performed using coded data that has been embedded into the high frequency
components of the image whose statistical characteristics are dependent on the printer/scanner technology. The details of
this method as well as experimental results are presented.
Signature-embedding in printed documents for security and forensic applications
Author(s):
Aravind K Mikkilineni;
Gazi N Ali;
Pei-Ju Chiang;
George T. C. Chiu;
Jan P. Allebach;
Edward J. Delp
Despite the increase in email and other forms of digital
communication, the use of printed documents continues to increase
every year. Many types of printed documents need to be "secure"
or traceable to the printer that was used to print them. Examples
of these include identity documents (e.g. passports) and documents
used to commit a crime.
Traditional protection methods such as special inks, security
threads, or holograms, can be cost prohibitive. The goals of our
work are to securely print and trace documents on low cost
consumer printers such as inkjet and electrophotographic (laser)
printers. We will accomplish this through the use of intrinsic and
extrinsic features obtained from modelling the printing process.
Specifically we show that the banding artifact in the EP print
process can be viewed as an intrinsic feature of the printer used
to identify both the model and make of the device. Methods for
measuring and extracting the banding signals from documents are
presented. The use of banding as an extrinsic feature is also
explored.
Universal image steganalysis using rate-distortion curves
Author(s):
Mehmet U. Celik;
Gaurav Sharma;
A. Murat Tekalp
The goal of image steganography is to embed information in a cover
image using modifications that are undetectable. In actual practice,
however, most techniques produce stego images that are perceptually
identical to the cover images but exhibit statistical irregularities
that distinguish them from cover images. Statistical steganalysis
exploits these irregularities in order to provide the best
discrimination between cover and stego images. In general, the process
utilizes a heuristically chosen feature set along with a classifier
trained on suitable data sets. In this paper, we propose an
alternative feature set for steganalysis based on rate-distortion
characteristics of images. Our features are based on two key
observations: i) data hiding methods typically increase the image
entropy in order to encode hidden messages; ii) data hiding methods
are limited to the set of small, imperceptible distortions. The proposed
feature set is used as the basis of a steganalysis algorithm and its
performance is investigated using different data hiding methods.
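As a purely illustrative sketch of how rate-distortion characteristics might be turned into a feature vector (an assumption about one possible realization, not the authors' feature set; the function name rd_curve_features and the choice of JPEG qualities are hypothetical):

```python
# Hypothetical sketch: sample a crude rate-distortion curve by re-compressing an
# image at several JPEG qualities and recording (bits-per-pixel, MSE) pairs.
import io
import numpy as np
from PIL import Image

def rd_curve_features(img, qualities=(90, 70, 50, 30)):
    """img: a PIL.Image in RGB mode; returns a flat vector of (rate, distortion) pairs."""
    ref = np.asarray(img, dtype=float)
    feats = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)          # re-compress at quality q
        rate = 8.0 * buf.getbuffer().nbytes / (ref.shape[0] * ref.shape[1])
        buf.seek(0)
        rec = np.asarray(Image.open(buf).convert("RGB"), dtype=float)
        feats.extend([rate, float(np.mean((ref - rec) ** 2))])
    return np.array(feats)                               # input to a trained classifier
```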
Steganalysis of block-structured stegotext
Author(s):
Ying Wang;
Pierre Moulin
Show Abstract
We study a detection-theoretic approach to steganalysis. The
relative entropy between covertext and stegotext determines the
steganalyzer's difficulty in discriminating them, which in turn
defines the detectability of the stegosystem. We consider the case
of Gaussian random covertexts and mean-squared embedding
constraint. We derive a lower bound on the relative entropy
between covertext and stegotext for block-based embedding
functions. This lower bound can be approached arbitrarily closely
using a spread-spectrum method and secret keys with large entropy.
The lower bound can also be attained using a stochastic
quantization index modulation (QIM) encoder, without need for
secret keys. In general, perfect undetectability can be achieved
for blockwise memoryless Gaussian covertexts. For general Gaussian
covertexts with memory, the relative entropy increases
approximately linearly with the number of blocks observed by the
steganalyzer. The error probabilities of the best steganalysis
methods decrease exponentially with the number of blocks.
Fast additive noise steganalysis
Author(s):
Jeremiah Joseph Harmsen;
Kevin D. Bowers;
William A. Pearlman
Show Abstract
This work reduces the computational requirements of the additive noise steganalysis presented by Harmsen and Pearlman. The additive noise model assumes that the stegoimage is created by adding a pseudo-noise to a coverimage. This addition predictably alters the joint histogram of the image. In color images it has been shown that this alteration can be detected using a three-dimensional Fast Fourier Transform (FFT) of the histogram. As the computation of this transform is typically very intensive, a method to reduce the required processing is desirable. By considering the histogram between pairs of channels in RGB images, three separate two-dimensional FFTs are used in place of the original three-dimensional FFT. This method is shown to offer computational savings of approximately two orders of magnitude while only slightly decreasing classification accuracy.
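A minimal sketch of the pairwise-histogram idea described above (not the authors' code; the bin count and channel pairing are illustrative assumptions):

```python
# Sketch: replace the 3-D FFT of the RGB joint histogram with three 2-D FFTs of
# pairwise channel histograms (R-G, G-B, R-B).
import numpy as np

def pairwise_histogram_spectra(img, bins=256):
    """img: uint8 array of shape (H, W, 3); returns the 2-D FFT magnitude of each
    pairwise joint histogram."""
    chans = [img[..., i].ravel() for i in range(3)]
    spectra = []
    for a, b in [(0, 1), (1, 2), (0, 2)]:
        hist, _, _ = np.histogram2d(chans[a], chans[b], bins=bins,
                                    range=[[0, 256], [0, 256]])
        # additive-noise embedding multiplies this spectrum by the noise's
        # characteristic function, which is what the classifier looks for
        spectra.append(np.abs(np.fft.fft2(hist)))
    return spectra
```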
Hiding correlation-based watermark templates using secret modulation
Author(s):
Jeroen Frederik Lichtenauer;
Iwan Setyawan;
Reginald L. Lagendijk
Show Abstract
A possible solution to the difficult problem of geometrical distortion of watermarked images in a blind watermarking
scenario is to use a template grid in the autocorrelation function. However, the important drawback of this method is
that the watermark itself can be estimated and subtracted, or the peaks in the Fourier magnitude spectrum can be
removed. A recently proposed solution is to modulate the watermark with a pattern derived from the image content and
a secret key. This effectively hides the watermark pattern, making malicious attacks much more difficult. However, the
algorithm to compute the modulation pattern is computationally intensive. We propose an efficient implementation,
using frequency domain filtering, to make this hiding method more practical. Furthermore, we evaluate the performance
of different kinds of modulation patterns. We present experimental results showing the influence of template hiding on
detection and payload extraction performance. The results also show that modulating the ACF based watermark
improves detection performance when the modulation signal can be retrieved sufficiently accurately. Modulation signals
with small average periods between zero crossings provide the most watermark detection improvement. Using these
signals, the detector can also make the most errors in retrieving the modulation signal until the detection performance
drops below the performance of the watermarking method without modulation.
LOT-based adaptive image watermarking
Author(s):
Yuxin Liu;
Bin Ni;
Xiaojun Feng;
Edward J. Delp III
Show Abstract
A robust, invisible watermarking scheme is proposed for digital images, where the watermark is embedded
using the block-based Lapped Orthogonal Transform (LOT). The embedding process follows a spread spectrum
watermarking approach. In contrast to the use of transforms such as DCT, our LOT watermarking scheme allows
larger watermark embedding energy while maintaining the same level of subjective invisibility. In particular, the
use of LOT reduces block artifacts caused by the insertion of the watermark in a block-by-block manner, hence
obtaining a better balance between invisibility and robustness. Moreover, we use a human visual system (HVS)
model to adaptively adjust the energy of the watermark during embedding. In our HVS model, each block is
categorized into one of four classes (texture, fine-texture, edge, and plain-area) by using a feature known as the
Texture Masking Energy (TME). Blocks with edges are also classified according to the edge direction. The block
classification is used to adjust the watermark embedding parameters for each block.
Error correction coding of nonlinear side-informed watermarking schemes
Author(s):
Kevin M. Whelan;
Guenole C.M. Silvestre;
Neil J. Hurley
Show Abstract
The application of error correction coding to side-informed watermarking utilizing polynomial detectors is investigated.
The overall system is viewed as a code concatenation in which the outer code is a powerful channel
code and the inner code is a low rate repetition code. For the inner code we adopt our previously proposed
side-informed embedding scheme in which the watermark direction is set to the gradient of the detection function
in order to reduce the effect of host signal interference. Turbo codes are employed as the outer code due
to their near capacity performance. The overall rate of the concatenation is kept constant while parameters
of the constituent codes are varied. For the inner code, the degree of non-linearity of the detector along with
repetition rate is varied. For a given embedding and attack strength, we determine empirically the best rate
combinations for constituent codes. The performance of the scheme is evaluated in terms of bit error rate when
subjected to various attacks such as additive/multiplicative noise and scaling by a constant factor. We compare
the performance of the proposed scheme to the Spread Transform Scalar Costa Scheme using the same rates
when subjected to the same attacks.
Spatial synchronization using watermark key structure
Author(s):
Eugene T. Lin;
Edward J. Delp III
Show Abstract
Recently, we proposed a method for constructing a template for efficient temporal synchronization in video watermarking. Our temporal synchronization method uses a state machine key generator for producing the watermark embedded in successive frames of video. A feature extractor allows the watermark key schedule to be content dependent, increasing the difficulty of copy and ownership attacks. It was shown that efficient synchronization can be achieved by adding temporal redundancy into the key schedule.
In this paper, we explore and extend the concepts of our temporal synchronization method to spatial synchronization. The key generator is used to construct the watermark embedded in non-overlapping blocks of the video, creating a tiled structure. The autocorrelation of the tiled watermark contains local maxima or peaks with a grid-like structure, where the distance between the peaks indicates the scale of the watermark and the orientation of the peaks indicates the watermark rotation. Experimental results are obtained using digital image watermarks. Scaling and rotation attacks are investigated.
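A minimal sketch of the autocorrelation-based spatial synchronization idea (assumed details, not the authors' detector; a practical implementation would suppress the zero-lag peak and use local-maximum detection):

```python
# Sketch: autocorrelation of a tiled-watermark estimate via the FFT; the grid of
# peaks encodes the tile spacing (scale) and its orientation (rotation).
import numpy as np

def autocorrelation(wm_estimate):
    """wm_estimate: 2-D float array, e.g. a high-pass filtered image."""
    x = wm_estimate - wm_estimate.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(x)) ** 2).real
    return np.fft.fftshift(acf)                  # put the zero-lag peak at the center

def strongest_peaks(acf, num_peaks=9):
    """Coordinates of the largest autocorrelation values; distances and angles
    between them estimate watermark scale and rotation (simplified peak picking)."""
    idx = np.argsort(acf.ravel())[::-1][:num_peaks]
    return np.column_stack(np.unravel_index(idx, acf.shape))
```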
Vector-quantization-based scheme for data embedding for images
Author(s):
Ning Liu;
Koduvayur P. Subbalakshmi
Show Abstract
Today, data hiding has become increasingly important in a variety of applications including security. Since
Costa's work in the context of communications, quantization-based schemes have been proposed as one
class of data hiding schemes. Most of these schemes are based on a uniform scalar quantizer, which is optimal
only if the host signal is uniformly distributed. In this paper, we propose pdf-matched embedding schemes,
which not only consider pdf-matched quantizers but also extend them to multiple dimensions. Specifically,
our contributions in this paper are: we propose a pdf-matched embedding (PME) scheme by generalizing the
probability distribution of the host image and then constructing a pdf-matched quantizer as the starting point.
We show experimentally that the proposed pdf-matched quantizer provides better trade-offs between distortion
caused by embedding, the robustness to attacks and the embedding capacity. We extend our algorithm to embed
a vector of bits in a host signal vector. We show by experiments that our scheme can be closer to the data
hiding capacity by embedding higher-dimensional bit vectors with higher-dimensional VQs. Two enhancements to our
method are proposed, vector flipping and distortion compensation (DC-PME), which serve
to further decrease the embedding distortion. For the 1-D case, the PME scheme shows a 1 dB improvement
over the QIM method in a robustness-distortion sense, while DC-PME is 1 dB better than DC-QIM and the 4-D
vector quantizer based PME scheme performs about 3 dB better than the 1-D PME.
Image watermarking based on scale-space representation
Author(s):
Jin S. Seo;
Chang D. Yoo
Show Abstract
This paper proposes a novel method for content-based watermarking based on feature points of an image. At
each feature point, the watermark is embedded after affine normalization according to the local characteristic scale
and orientation. The characteristic scale is the scale at which the normalized scale-space representation of an
image attains a maximum value, and the characteristic orientation is the angle of the principal axis of an image.
By binding watermarking with the local characteristics of an image, resilience against affine transformations can
be obtained. Experimental results show that the proposed method is robust against various image processing
steps including affine transformations, cropping, filtering and JPEG compression.
A wavelet watermarking algorithm based on a tree structure
Author(s):
Oriol Guitart Pla;
Eugene T. Lin;
Edward J. Delp III
Show Abstract
We describe a blind watermarking technique for digital images. Our technique constructs an image-dependent watermark in the discrete wavelet transform (DWT) domain and inserts the watermark in the most significant coefficients of the image. The watermarked coefficients are determined by using the hierarchical tree structure induced by the DWT, similar in concept to embedded zerotree wavelet (EZW) compression. If the watermarked image is attacked or manipulated such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronization.
Our technique also uses a visual adaptive scheme to insert the watermark to minimize watermark perceptibility. The visual adaptive scheme also takes advantage of the tree structure. Finally, a template is inserted into the watermark to provide robustness against geometric attacks. The template detection uses the cross-ratio of four collinear points.
Classification of watermarking schemes robust against loss of synchronization
Author(s):
Damien Delannay;
Benoit Macq
Show Abstract
The resistance of watermarking schemes against geometrical distortions has been the subject of much research over the last ten years. The ability of a communication scheme to cope with a loss of synchronization is indeed a very difficult issue. Still, the tolerance of human visual perception in the presence of such distortions is surprisingly high, and situations where loss of synchronization takes place are numerous. The aim of this paper is to present an extensive survey of existing works addressing this particular problem. Each of the proposed classes of techniques will be analyzed to show which forms and what severity of distortions it is able to survive. The possible security implications of the proposed techniques will also be studied. We will try to point out the strengths and weaknesses of each solution. Special attention will be given to implementation details such as the cropping operation which is subsequent to most geometrical distortions. Partial loss of content, change of the width-to-height ratio or modification of the image size have important consequences for some proposed schemes. We will also briefly discuss the difficulty of evaluating the severity of a geometrical distortion.
Quantifying security leaks in spread spectrum data hiding: a game-theoretic approach
Author(s):
Pedro Comesana;
Fernando Perez-Gonzalez
Show Abstract
A game-theoretic approach is introduced to quantify possible information leaks in spread-spectrum data hiding schemes. Those leaks imply that the attacker knows the set-partitions and/or the pseudorandom sequence, which in most of the existing methods are key-dependent. The bit error probability is used as payoff for the game. Since a closed-form strategy is not available in the general case, several simplifications leading to near-optimal strategies are also discussed. Finally, experimental results supporting our analysis are presented.
New attacks on SARI image authentication system
Author(s):
Jinhai Wu;
Bin Benjamin Zhu;
Shipeng Li;
Fuzong Lin
Show Abstract
The image authentication system SARI proposed by Lin and Chang passes JPEG compression and rejects other malicious manipulations. Some vulnerabilities of the system have been reported recently. In this paper, we propose two new attacks that can compromise the SARI system. The first attack is called a histogram attack which modifies DCT coefficients yet maintains the same relationship between any two DCT coefficients and the same mean values of DCT coefficients. Such a modified image can pass the SARI authentication system. The second attack is an oracle attack which uses an oracle to efficiently find the secret pairs used by SARI in its signature generation. A single image plus an oracle is needed to launch the oracle attack. Fixes to thwart the proposed attacks are also proposed in this paper.
Distortion compensated lookup-table embedding: joint security and robustness enhancement for quantization-based data hiding
Author(s):
Min Wu
Show Abstract
A data embedding mechanism used for authentication applications should be secure in order to prevent an adversary from forging the embedded data at will. Meanwhile, semi-fragility is often preferred so that content changes can be distinguished from non-content changes. In this paper, we focus on jointly enhancing the robustness and security of the embedding mechanism, which can be used as a building block for authentication. The paper presents analysis showing that embedding through a look-up table (LUT) of
non-trivial run that maps quantized multimedia features randomly to binary data offers a probability of detection error considerably smaller than that of the traditional quantization embedding. We quantify the security strength of LUT embedding and enhance its robustness through distortion compensation. We introduce a combined security and capacity measure and show that the proposed distortion compensated LUT embedding provides joint enhancement of security and robustness over the traditional quantization embedding.
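A minimal sketch of look-up-table embedding with a run constraint (an assumption about the general construction, not the paper's exact scheme; make_lut, lut_embed, and the step size are hypothetical names and parameters):

```python
# Sketch: embed one bit per feature by moving it to the nearest quantization
# level whose LUT entry equals the bit; the run constraint keeps that level close.
import numpy as np

def make_lut(num_levels, max_run=2, seed=None):
    rng = np.random.default_rng(seed)
    lut, run = [], 0
    for _ in range(num_levels):
        bit = int(rng.integers(0, 2))
        if lut and bit == lut[-1] and run >= max_run:
            bit = 1 - bit                        # enforce the maximum run length
        run = run + 1 if (lut and bit == lut[-1]) else 1
        lut.append(bit)
    return np.array(lut)

def lut_embed(feature, bit, lut, step):
    """Quantize `feature` and move it to the closest level carrying `bit`."""
    level = int(np.clip(round(feature / step), 0, len(lut) - 1))
    for d in range(len(lut)):
        for cand in (level - d, level + d):
            if 0 <= cand < len(lut) and lut[cand] == bit:
                return cand * step
    raise ValueError("LUT contains no level for this bit")
```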
Attacks on biometric systems: a case study in fingerprints
Author(s):
Umut Uludag;
Anil K. Jain
Show Abstract
In spite of numerous advantages of biometrics-based personal authentication systems over traditional security systems based on token or knowledge, they are vulnerable to attacks that can decrease their security considerably. In this paper, we analyze these attacks in the realm of a fingerprint biometric system. We propose an attack system that uses a hill climbing procedure to synthesize the target minutia templates and evaluate its feasibility with extensive experimental results conducted on a large fingerprint database. Several measures that can be utilized to decrease the probability of such attacks and their ramifications are also presented.
Biometric verification based on grip-pattern recognition
Author(s):
Raymond N.J. Veldhuis;
Asker M. Bazen;
Joost A. Kauffman;
Pieter Hartel
Show Abstract
This paper describes the design, implementation and evaluation of a user-verification system for a smart gun,
which is based on grip-pattern recognition. An existing pressure sensor consisting of an array of 44 × 44 piezoresistive
elements is used to measure the grip pattern. An interface has been developed to acquire pressure images
from the sensor. The values of the pixels in the pressure-pattern images are used as inputs for a verification
algorithm, which is currently implemented in software on a PC. The verification algorithm is based on a likelihood-ratio
classifier for Gaussian probability densities. First results indicate that it is feasible to use grip-pattern
recognition for biometric verification.
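A minimal sketch of a likelihood-ratio verifier for Gaussian-modelled grip patterns (assumptions: the flattened pressure image serves as the feature vector and covariances are diagonal; this is not the authors' implementation):

```python
# Sketch: accept the identity claim when the log-likelihood ratio between the
# claimed user's Gaussian model and a background model exceeds a threshold.
import numpy as np

def log_gaussian(x, mean, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def verify(pressure_image, user_model, background_model, threshold=0.0):
    """Each model is a (mean, variance) pair estimated from training images;
    pressure_image is the 44x44 sensor readout."""
    x = pressure_image.astype(float).ravel()
    llr = log_gaussian(x, *user_model) - log_gaussian(x, *background_model)
    return llr > threshold
```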
Security for biometric data
Author(s):
Claus Vielhauer;
Ton Kalker
Show Abstract
Biometric authentication, i.e. verifying the claimed identity of a person based on physiological characteristics or behavioral traits, has the potential to contribute to both privacy protection and user convenience. From a security point of view, biometric authentication offers the possibility to establish physical presence and unequivocal identification. However from a privacy point of view, the use of biometric authentication also introduces new problems and user concerns. Namely, when used for privacy-sensitive applications, biometric data are a highly valuable asset. When such data are available to unauthorized persons, these data can potentially be used for impersonation purposes, defeating the security aspects that are supposed to be associated with biometric authentication. In this paper, we will systematically unveil critical sections based on the two generic biometric flow models for enrolment and authentication respectively. Based on these critical sections for biometric data in authentication systems, we will point out measures to protect biometric systems. It will be shown that especially techniques using non-reversible representations are needed to disallow malicious use of template information and we will discuss a variant of the Linnartz-Tuyls model for securing biometric templates in detail.
Multimedia document authentication using on-line signatures as watermarks
Author(s):
Anoop M. Namboodiri;
Anil K. Jain
Show Abstract
Authentication of digital documents is an important concern as digital documents are replacing the traditional
paper-based documents for official and legal purposes. This is especially true in the case of documents that are
exchanged over the Internet, which could be accessed and modified by intruders. The most popular methods used
for authentication of digital documents are public key encryption-based authentication and digital watermarking.
Traditional watermarking techniques embed a pre-determined character string, such as the company logo, in a
document. We propose a fragile watermarking system, which uses an on-line signature of the author as the
watermark in a document. The embedding of a biometric characteristic such as signature in a document enables
us to verify the identity of the author using a set of reference signatures, in addition to ascertaining the document
integrity. The receiver of the document reconstructs the signature used to watermark the document, which is then
used to verify the author's claimed identity. The paper presents a signature encoding scheme, which facilitates
reconstruction by the receiver, while reducing the chances of collusion attacks.
Application of invisible image watermarks to previously halftoned images
Author(s):
Gordon W. Braudaway;
Frederick C. Mintzer
Show Abstract
The ability to detect the presence of invisible watermarks in a printed copy is generally useful to help establish
ownership and authenticity, and to establish the origin of an unauthorized disclosure. Heretofore watermarking methods
have been concerned with inserting a watermark into a digitized image of an entire composed page prior to its being
halftoned for printing. However, this may not be feasible if elements of the page being composed are available only as
blocks of text and sub-images, where each unmarked sub-image has previously been halftoned, perhaps even using
different halftoning methods. Earlier, we presented a highly robust invisible watermarking method having a payload of
one bit -- indicating the presence or absence of the watermark. Using that method, it will be shown that it is a
straightforward process to place one or more watermarks into a printed page composed of unmarked and previously
halftoned sub-images, and to successfully detect the inserted watermark from a scan of the printed page. The method
presented applies to a variety of invisible watermarking methods, not just the one used in the example.
Show-through watermarking of duplex printed documents
Author(s):
Gaurav Sharma;
Shen-ge Wang
Show Abstract
A technique for watermarking duplex printed pages is presented. The
technique produces visible watermark patterns like conventional
watermarks embedded in paper fabric. Watermark information is embedded
in halftones used to print images on either side. The watermark
pattern is imperceptible when images printed on either side are viewed
independently but becomes visible when the sheet of paper is held up
against a light. The technique employs clustered dot halftones and
embeds the watermark pattern as controlled local phase
variations. Illumination with a back-light superimposes the halftone
patterns on the two sides. Regions where the front and back-side
halftones are in phase agreement appear lighter in show-through
viewing, whereas regions over which the front and back side halftones
are in phase disagreement appear darker. The image printed on one side
has a controlled variation of the halftone phase and the one printed
on the other side provides a constant phase reference. The watermark
pattern is revealed when the sheet is viewed in "show-through mode"
superimposing the halftones on the two sides. Threshold arrays for the
halftone screens are designed to allow incorporation of a variety of
halftone patterns while minimizing artifacts in images printed using
these halftones.
Watermarking electronic text documents containing justified paragraphs and irregular line spacing
Author(s):
Adnan M. Alattar;
Osama Mohammed Alattar
Show Abstract
In this paper, we propose a new method for watermarking electronic text documents that contain justified paragraphs and irregular line spacing. The proposed method uses a spread-spectrum technique to combat the effects of irregular word or line spacing. It also uses a BCH (Bose-Chaudhuri-Hocquenghem) error coding technique to protect the payload from the noise resulting from the printing and scanning process. Watermark embedding in a justified paragraph is achieved by slightly increasing or decreasing the spaces between words according to the value of the corresponding watermark bit. Similarly, watermark embedding in a text document with variable line spacing is achieved by slightly increasing or decreasing the distance between any two adjacent lines according to the value of the watermark bit. Detecting the watermark is achieved by measuring the spaces between the words or the lines and correlating them with the spreading sequence. In this paper, we present an implementation of the proposed algorithm and discuss its simulation results.
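A minimal sketch of spread-spectrum embedding in word spaces (illustrative assumptions only: one payload bit per paragraph, a +-1 spreading sequence of sufficient length, and a fixed spacing perturbation delta; the BCH coding and line-spacing embedding are omitted):

```python
# Sketch: widen or narrow each inter-word gap according to the payload bit and a
# key-dependent spreading chip; detect by correlating measured gaps with the chips.
import numpy as np

def embed_bit(spaces, bit, pn, delta=0.5):
    """spaces: vector of inter-word gaps; pn: +-1 chips, len(pn) >= len(spaces)."""
    sign = 1 if bit else -1
    return spaces + sign * delta * pn[:len(spaces)]

def detect_bit(measured_spaces, pn):
    """Correlate mean-removed gap measurements with the spreading sequence."""
    diff = measured_spaces - measured_spaces.mean()
    return int(np.dot(diff, pn[:len(diff)]) > 0)

pn = np.sign(np.random.default_rng(7).standard_normal(256))   # key-dependent chips
```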
Efficient multimedia data encryption based on flexible QM coder
Author(s):
Dahua Xie;
C.-C. Jay Kuo
Show Abstract
Efficient multimedia encryption algorithms are fundamental to multimedia data security because of the massive data size and the need of real-time processing. In this research, we present an efficient encryption scheme, called the RCI-QM coder, which achieves the objective of encryption with a regular QM coder by applying different coding conventions to encode individual symbols according to a statistically random sequence. One main advantage of our scheme is the negligible computation cost associated with encryption. We also demonstrate with cryptanalysis that the security level of our scheme is high enough to thwart most common attacks. The proposed RCI-QM coder is easy to implement in both software and hardware, while providing backward compatibility to the standard QM coder.
Multilayer multicast key management with threshold cryptography
Author(s):
Scott D. Dexter;
Roman Belostotskiy;
Ahmet M. Eskicioglu
Show Abstract
The problem of distributing multimedia securely over the Internet is often viewed as an instance of secure multicast communication, in which multicast messages are protected by a group key shared among the group of clients. One important class of key management schemes makes use of a hierarchical key distribution tree. Constructing a hierarchical tree based on secret shares rather than keys yields a scheme that is both more flexible and provably secure. Both the key-based and share-based hierarchical key distribution tree techniques are designed for managing keys for a single data stream. Recent work shows how redundancies that arise when this scheme is extended to multi-stream (e.g. scalable video) applications may be exploited in the key-based system by viewing the set of clients as a “multi-group”.
In this paper, we present results from an adaptation of a multi-group key management scheme using threshold cryptography. We describe how the multi-group scheme is adapted to work with secret shares, and compare this scheme with a naïve multi-stream key-management solution by measuring performance across several critical parameters, including tree degree, multi-group size, and number of shares stored at each node.
Securing display of grayscale and multicolored images by use of visual cryptography
Author(s):
Hirotsugu Yamamoto;
Yoshio Hayasaki;
Nobuo Nishida
Show Abstract
Security has become an important issue as information technology has become increasingly pervasive in our everyday lives. Security risks arise with a display that shows decrypted information. In this paper, we propose a secure information display technique using visual cryptography. Its decryption requires no special computing devices and is implemented using only human vision: the proposed display appears as a random pattern to anyone who looks at it unless the person views the displayed image through a decoding mask. We have constructed code sets to represent grayscale and multicolored images. Each pixel in a secret image is expanded to a group of subpixels. The displayed image consists of black and white subpixels to encrypt a grayscale image. To encrypt a multicolor image, black, red, green, and blue subpixels compose the displayed image. The decoding mask consists of transparent and opaque subpixels. Every pixel is encrypted using a pair that is chosen at random. We have demonstrated the proposed secure display with an LCD panel and a transparency on which a decoding mask was printed. The secret image was visible for a viewer within the viewing zone, although viewers outside the viewing zone perceived it as a random dot pattern.
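A minimal sketch of the subpixel-expansion idea for the simplest binary case (the paper's grayscale and multicolor code sets are not reproduced here; the 2x2 patterns and function names are illustrative assumptions):

```python
# Sketch: each secret pixel expands to a 2x2 block on the display and on the
# decoding mask; complementary blocks overlay to all-dark, identical blocks stay
# half-bright, so the secret appears only when viewed through the mask.
import numpy as np

PATTERNS = [np.array([[0, 1], [1, 0]]), np.array([[1, 0], [0, 1]])]   # 1 = opaque

def encode(secret, seed=None):
    """secret: binary (H, W) array; returns (displayed_image, decoding_mask)."""
    rng = np.random.default_rng(seed)
    H, W = secret.shape
    disp = np.zeros((2 * H, 2 * W), dtype=int)
    mask = np.zeros_like(disp)
    for i in range(H):
        for j in range(W):
            k = int(rng.integers(0, 2))          # random pattern for the mask
            mask[2*i:2*i+2, 2*j:2*j+2] = PATTERNS[k]
            d = 1 - k if secret[i, j] else k     # complementary pattern for dark pixels
            disp[2*i:2*i+2, 2*j:2*j+2] = PATTERNS[d]
    return disp, mask
```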
A hybrid scheme for encryption and watermarking
Author(s):
Xiaowei Xu;
Scott D. Dexter;
Ahmet M. Eskicioglu
Show Abstract
Encryption and watermarking are complementary lines of defense in protecting multimedia content. Recent watermarking techniques have therefore been developed independent from encryption techniques. In this paper, we present a hybrid image protection scheme to establish a relation between the data encryption key and the watermark. Prepositioned secret sharing allows the reconstruction of different encryption keys by communicating different activating shares for the same prepositioned information. Each activating share is used by the receivers to generate a fresh content decryption key. In the proposed scheme, the activating share is used to carry copyright or usage rights data. The bit stream that represents this data is also embedded in the content as a visual watermark. When the encryption key needs to change, the data source generates a new activating share, and embeds the corresponding watermark into the multimedia stream. Before transmission, the composite stream is encrypted with the key constructed from the new activating share. Each receiver can decrypt the stream after reconstructing the same key, and extract the watermark from the image. Our presentation will include the application of the scheme to a test image, and a discussion on the data hiding capacity, watermark transparency, and robustness to common attacks.
Near-lossless image authentication transparent to near-lossless coding
Author(s):
Roberto Caldelli;
Giovanni Macaluso;
Franco Bartolini;
Mauro Barni
Show Abstract
In this paper attention is paid to the problem of remote-sensing image authentication by relying on a digital watermarking
approach. Both transmission and storage are usually cumbersome for remote-sensing imagery, and compression is
unavoidable to make them feasible. In the case of multi-spectral images used for classification purposes, excessively large valuemetric
changes can cause misclassification errors, so near-lossless compression is used to guarantee a given peak compression
error. Similarly, watermarking-based authentication should guarantee such a maximum peak error, and
the requirements to be satisfied are: to allow valuemetric authentication, to control the peak watermark embedding error,
and to tolerate near-lossless compression. To achieve these requirements, a methodology has been designed by integrating
into the standard JPEG-LS compression algorithm, by means of a stripe approach, a known authentication technique
derived from Fridrich. This procedure offers two advantages: firstly, the produced bit-stream is perfectly compliant
with the JPEG-LS standard; secondly, once the image has been decoded, it is always authenticated because the information
has been embedded in the reconstructed values. Near-lossless coding does not harm the authentication procedure, and robustness
against different attacks is preserved. The computational time of JPEG-LS coding/decoding is not significantly increased.
Collusion-resistant multimedia fingerprinting: a unified framework
Author(s):
Min Wu;
Wade Trappe;
Z. Jane Wang;
K.J. Ray Liu
Show Abstract
Digital fingerprints are unique labels inserted in different copies of the same content before distribution. Each digital fingerprint is assigned to an intended recipient, and can be used to trace the culprits who use their content for unintended purposes. Attacks mounted by multiple users, known as collusion attacks, provide a cost-effective method for attenuating the identifying fingerprint of each colluder, thus collusion poses a real challenge to protecting digital media data and enforcing usage policies. This paper examines a few major design methodologies for collusion-resistant fingerprinting of multimedia, and presents a unified framework that helps highlight the common issues and the uniqueness of different fingerprinting techniques.
Analysis and design of authentication watermarking
Author(s):
Chuhong Fei;
Deepa Kundur;
Raymond Kwong
Show Abstract
This paper focuses on the use of nested lattice codes for effective analysis and
design of semi-fragile watermarking schemes for content authentication
applications. We provide a design framework for digital watermarking which is semi-fragile to any form of acceptable distortions, random or deterministic, such that both objectives of robustness and fragility can be effectively controlled and
achieved. Robustness and fragility are characterized as two types of authentication errors. The encoder and decoder structures of semi-fragile schemes are derived and implemented using nested lattice codes to minimize these two types of errors. We then extend the framework to allow the legitimate and illegitimate distortions to be modelled as random noise. In addition, we investigate semi-fragile signature generation methods such that the signature is invariant to watermark embedding and legitimate distortion. A new approach, called MSB signature generation, is proposed which is shown to be more secure than the traditional dual subspace approach. Simulations of semi-fragile systems on real images are provided to demonstrate the effectiveness of nested lattice codes in achieving design objectives.
Analysis of a wavelet-based robust hash algorithm
Author(s):
Albert Meixner;
Andreas Uhl
Show Abstract
This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.
Geometric soft hash functions for 2D and 3D objects
Author(s):
Emmanuel Fernandes;
Jean-Francois Delaigle
Show Abstract
Hash functions play a fundamental role in cryptography, data integrity and digital signatures. They are a valuable tool for implementing
authentication protocols for electronic and multimedia content. In this paper we define and study soft hash functions used to obtain digital
fingerprints of 2D and 3D geometrical objects, based on the Radon Integral Geometric Transform. These geometric one-way functions satisfy the
classical compression, ease-of-computation and collision-resistance properties. The digests are invariant to certain geometric transformations
such as rotation and translation, and present a soft property: they are sensitive to the global geometrical shape but not to small
and local perturbations of the geometry. The main applications are: classical pattern recognition and watermarking detection (or watermarking
removal detection). Digital fingerprints of virtual CAD-based 3D models can be used for testing geometric integrity and origin authentication,
and can also be used to implement anti-counterfeiting techniques for real manufactured 3D pieces. The soft property guarantees the robustness
of the digest to the 3D scanning process, which introduces only small local errors.
Statistical amplitude scale estimation for quantization-based watermarking
Author(s):
Ivo D Shterev;
Reginald L. Lagendijk;
Richard Heusdens
Show Abstract
Quantization-based watermarking schemes are vulnerable to
amplitude scaling. Therefore the scaling factor has to be
accounted for either at the encoder, or at the decoder, prior to
watermark decoding. In this paper we derive the marginal
probability density model for the watermarked and attacked data,
when the attack channel consists of amplitude scaling followed by
additive noise. The encoder is Quantization Index Modulation with
Distortion Compensation. Based on this model we obtain two
estimation procedures for the scale parameter. The first approach
is based on Fourier Analysis of the probability density function. The estimation of the
scaling parameter relies on the structure of the received data.
The second approach that we obtain is the Maximum Likelihood
estimator of the scaling factor. We study the performance of the
estimation procedures theoretically and experimentally with real
audio signals, and compare them to other well known approaches for
amplitude scale estimation in the literature.
Blind iterative decoding of side-informed data hiding using the expectation-maximization algorithm
Author(s):
Felix Balado;
Fernando Perez-Gonzalez;
Pedro Comesana
Show Abstract
Distortion-Compensated Dither Modulation (DC-DM), also known as Scalar Costa Scheme (SCS), has been theoretically shown to be near-capacity achieving thanks to its use of side information at the encoder. In practice, channel coding is needed in conjunction with this quantization-based scheme in order to approach the achievable rate limit. The most powerful coding methods use iterative decoding (turbo codes, LDPC), but they require knowledge of the channel model. Previous works on the subject have assumed the latter to be known by the decoder. We investigate here the possibility of undertaking blind iterative decoding of DC-DM, using maximum likelihood estimation of the channel model within the decoding procedure. The unknown attack is assumed to be i.i.d. and additive. Before each iterative decoding step, a new optimal estimation of the attack model is made using the reliability information provided by the previous step. This new model is used for the next iterative decoding stage, and the procedure is repeated until convergence. We show that the iterative Expectation-Maximization algorithm is suitable for solving the problem posed by model estimation, as it can be conveniently intertwined with iterative decoding.
Wavelet domain watermarking using maximum-likelihood detection
Author(s):
Tek Ming Ng;
Hari Krishna Garg
Show Abstract
A digital watermark is an imperceptible mark placed on multimedia content for a variety of applications including copyright protection, fingerprinting, broadcast monitoring, etc. Traditionally, watermark detection algorithms are based on the correlation between the watermark and the media the watermark is embedded in. Although simple to use, correlation detection is only optimal when the watermark embedding process follows an additive rule and when the media is drawn from Gaussian distributions. More recent works on watermark detection are based on decision theory. In this paper, a maximum-likelihood (ML) detection scheme based on Bayes's decision theory is proposed for image watermarking in wavelet transform domain. The decision threshold is derived using the Neyman-Pearson criterion to minimize the missed detection probability subject to a given false alarm probability. The detection performance depends on choosing a probability distribution function (PDF) that can accurately model the distribution of the wavelet transform coefficients. The generalized Gaussian PDF is adopted here. Previously, the Gaussian PDF, which is a special case, has been considered for such detection scheme. Using extensive experimentation, the generalized Gaussian PDF is shown to be a better model.
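A minimal sketch of the likelihood-ratio test under a generalized Gaussian model for additive embedding (the shape and scale parameters and the additive model are assumptions for illustration; the paper's full Neyman-Pearson threshold derivation is not reproduced):

```python
# Sketch: log-likelihood ratio for an additive watermark w in wavelet coefficients
# y modelled by a generalized Gaussian density f(x) ~ exp(-(|x|/beta)^c).
import numpy as np

def llr_statistic(y, w, gamma, beta, c):
    """y: observed coefficients; w: candidate watermark; gamma: embedding strength."""
    return np.sum((np.abs(y) / beta) ** c - (np.abs(y - gamma * w) / beta) ** c)

def detect(y, w, gamma, beta, c, threshold):
    """Declare the watermark present when the LLR exceeds a threshold chosen
    (Neyman-Pearson) for a target false-alarm probability."""
    return llr_statistic(y, w, gamma, beta, c) > threshold
```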
Performance bounds on optimal watermark synchronizers
Author(s):
Vinicius Licks;
Fabricio Ourique;
Ramiro Jordan
Show Abstract
The inability of existing countermeasures to consistently cope against localized geometric attacks has precluded the widespread acceptance of image watermarking for commercial applications. The efficiency of these attacks against the so-called spread spectrum methods resides in their ability to affect the synchronization between the watermark reference and the extracted watermark at the detector end. In systems based on quantization schemes, geometric attacks have the effect of moving the watermark vector away from its actual quantization centroid, thus causing the watermark decoder to output wrong message symbols. In this paper, our goal is to gain a better understanding of the challenges imposed by the watermark synchronization problem in the context of localized geometric attacks. For that matter, we propose a model for the watermark synchronization problem based on maximum-likelihood (ML) estimation techniques. In that way, we derive theoretically optimal watermark synchronizer structures for either blind or non-blind schemes and based on the Cramer-Rao inequality we set lower bounds on the variance of these attack parameter estimates as a means to assess the accuracy of such synchronizers.
Malicious attacks on media authentication schemes based on invertible watermarks
Author(s):
Stefan Katzenbeisser;
Jana Dittmann
Show Abstract
The increasing availability and distribution of multimedia technology has made the
manipulation of digital images, videos or audio files easy. While this enables numerous
new applications, a certain loss of trust in digital media can be observed. In general,
there is no guarantee that a digital image "does not lie", i.e., that the image content
was not altered. To counteract this risk, fragile watermarks were proposed to protect the
integrity of digital multimedia objects. In high security applications, it is necessary to be
able to reconstruct the original object out of the watermarked version. This can be
achieved by the use of invertible watermarks. While traditional watermarking schemes
introduce some small non-invertible distortion in the digital content, invertible
watermarks can be completely removed from a watermarked work.
In the past, the security of proposed image authentication schemes based on invertible
watermarks was only analyzed using ad-hoc methods and neglected the possibility of malicious
attacks, which aim at engineering a fake mark so that the attacked object appears to
be genuine. In this paper, we characterize and analyze possible malicious attacks against
watermark-based image authentication systems and explore the theoretical limits of previous
constructions with respect to their security.
Attacking digital watermarks
Author(s):
Radu Sion;
Mikhail Atallah
Show Abstract
This paper discusses inherent vulnerabilities of digital watermarking
that affect its mainstream purpose of rights protection. We ask: how
resistant is watermarking to uninformed attacks?
There are a multitude of application scenarios for watermarking and,
with the advent of modern content distribution networks and
the associated rights assessment issues,
it has recently become a topic of increasing interest.
But how well is watermarking suited for this main
purpose of rights protection?
Existing watermarking techniques are vulnerable to attacks
threatening their overall viability. Most of these attacks
have the final goal of removing the watermarking information
while preserving the actual value of the watermarked Work.
In this paper we identify an inherent trade-off between two
important properties of watermarking algorithms: being "convincing enough"
in court while at the same time surviving a set of attacks,
for a broad class of watermarking algorithms. We show that there
exist inherent limitations in protecting rights over digital
Works. In the attempt to become as convincing as possible
(e.g. in a court of law, low rate of false positives),
watermarking applications become more fragile to attacks aimed
at removing the watermark while preserving the value of the Work.
They are thus necessarily characterized by a significant (e.g. in
some cases 35%+) non-zero probability of being successfully
attacked without any knowledge about their algorithmic details.
We quantify this vulnerability for a class of algorithms and show how a
minimizing "sweet spot" can be found. We then derive a set of
recommendations for watermarking algorithm design.
Evaluating the performance of ST-DM watermarking in nonadditive channels
Author(s):
Mauro Barni;
Attilio Borello;
Franco Bartolini;
Alessandro Piva
Show Abstract
In this paper, the performance of ST-DM watermarking in the presence of two categories of non-additive attacks, namely the
gain attack plus noise addition and the quantization attack, is evaluated. The work has been developed by assuming
that the host features are independent and identically distributed Gaussian random variables, and that a minimum distance
criterion is used to decode the embedded information. The theoretical bit error probabilities are derived in closed form,
thus permitting the impact of the considered attacks on the watermark to be evaluated at a theoretical level. The analysis is
validated by means of extensive Monte Carlo simulations. Moreover, the Monte Carlo simulations permitted us to abandon the
hypothesis of normally distributed host features, in favor of more realistic models based on a Laplacian or a Generalized
Gaussian pdf. The overall result of our analysis is that ST-DM exhibits excellent performance in all cases with the only
noticeable exception of the gain attack.
Characterization of geometric distortions attacks in robust watermarking
Author(s):
Xavier Desurmont;
Jean-Francois Delaigle;
Benoit Macq
Show Abstract
Robust image watermarking algorithms have been proposed as methods for discouraging illicit copying and distribution of copyrighted material. With robustness to pixel modifications in mind, many watermarking designers use techniques from the communications domain, such as spread spectrum, to embed hidden information, be it in the spatial or in the transform domain. There exist numerous attacks that degrade images through geometric distortions, designed to render watermarking algorithms ineffective. One solution to counter them is to add synchronization information. In this paper we present an analysis of this type of distortion and we propose a metric to estimate the distortion undergone by an image. This metric is content independent and invariant to global translation, rotation and scaling, which can be considered non-meaningful transformations. To demonstrate the relevance of this metric, we compare some of its results with the subjective degradation of the image produced by the Stirmark software.