Proceedings Volume 4675

Security and Watermarking of Multimedia Contents IV


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 29 April 2002
Contents: 13 Sessions, 71 Papers, 0 Presentations
Conference: Electronic Imaging 2002
Volume Number: 4675

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Special Session: Steganalysis
  • Audio Watermarking
  • Text and Printing
  • Speech and Music
  • Color Watermarking Techniques
  • Special Session: Protocols
  • Attacks
  • Transform Methods
  • Theoretical Models
  • Video Techniques
  • Embedding
  • Applications
  • Fragile Watermarking
Special Session: Steganalysis
Practical steganalysis of digital images: state of the art
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous-looking cover documents, such as digital images. Detection of steganography, estimation of message length, and extraction of the hidden message belong to the field of steganalysis. Steganalysis has recently received a great deal of attention from both law enforcement and the media. In this paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis: visual detection, detection based on first-order statistics (histogram analysis), dual-statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography, bit replacement or bit substitution, is inherently insecure, with safe capacities far smaller than previously thought.
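First-order (histogram-based) detection of LSB replacement can be illustrated with a pair-of-values chi-square statistic: full LSB embedding drives the histogram counts of each value pair (2k, 2k+1) toward their common mean. The following is a toy Python sketch of that idea only, not the paper's RS/dual-statistics method:

```python
import numpy as np

def chi_square_pov(pixels):
    """Pair-of-values chi-square statistic over an 8-bit image.

    Full LSB replacement equalizes the counts h[2k] and h[2k+1],
    so the statistic collapses for stego images."""
    hist = np.bincount(np.asarray(pixels).ravel(), minlength=256)
    stat = 0.0
    for k in range(128):
        expected = (hist[2 * k] + hist[2 * k + 1]) / 2.0
        if expected > 0:
            stat += (hist[2 * k] - expected) ** 2 / expected
            stat += (hist[2 * k + 1] - expected) ** 2 / expected
    return stat

def embed_lsb(pixels, bits):
    """Replace the least significant bits with message bits."""
    p = np.asarray(pixels).ravel().copy()
    p[: len(bits)] = (p[: len(bits)] & ~1) | bits
    return p
```

On a strongly quantized cover (e.g. only even gray levels, an extreme case used here for illustration) the cover statistic is large, while after full embedding it drops to chi-square noise levels.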
Mathematical approach to steganalysis
A mathematical approach to steganalysis is presented in this paper, with linear steganography as the main focus. A mathematically formal definition of steganalysis is given, followed by definitions for passive and active steganalysis. The steganalysis problem is formulated as blind system identification, and conditions for identifiability (successful steganalysis) are derived. A procedure to systematically exploit any available spatial and temporal diversity information for efficient steganalysis is also discussed. Experimental results are given for steganalysis of Gaussian-distributed, spread spectrum image steganography and watermarking. The proposed technique is observed to produce impressive results for a variety of performance measures. Based on the results, we conclude that the common belief that spread spectrum steganography/watermarking is secure owing to its low-strength, noise-like message carrier is no longer valid in the current context. This raises new questions regarding steganographic security that differ from the standard information-theoretic notion, and some answers are provided.
Communications approach to image steganography
Steganography is the art of communicating a message by embedding it into multimedia data. It is desired to maximize the amount of hidden information (embedding rate) while preserving security against detection by unauthorized parties. An appropriate information-theoretic model for steganography has been proposed by Cachin. A steganographic system is perfectly secure when the statistics of the cover data and the stego data are identical, which means that the relative entropy between the cover data and the stego data is zero. For image data, another constraint is that the stego data must look like a typical image. A tractable objective measure for this property is the (weighted) mean squared error between the cover image and the stego image (embedding distortion). Two different schemes are investigated. The first one is derived from a blind watermarking scheme. The second scheme is designed specifically for steganography such that perfect security is achieved, which means that the relative entropy between cover data and stego data tends to zero. In this case, a noiseless communication channel is assumed. Both schemes store the stego image in the popular JPEG format. The performance of the schemes is compared with respect to security, embedding distortion and embedding rate.
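Cachin's criterion can be checked empirically on histograms: estimate the relative entropy (Kullback-Leibler divergence) between cover and stego statistics; a perfectly secure system drives it to zero. A minimal sketch of that estimate:

```python
import numpy as np

def relative_entropy_bits(p_counts, q_counts, eps=1e-12):
    """Kullback-Leibler divergence D(P||Q) in bits, estimated from
    histogram counts of cover data (P) and stego data (Q).

    Zero divergence corresponds to Cachin's notion of perfect security."""
    p = np.asarray(p_counts, float)
    q = np.asarray(q_counts, float)
    p /= p.sum()
    q /= q.sum()
    mask = p > 0                       # 0 * log(0/q) contributes nothing
    return float(np.sum(p[mask] * np.log2(p[mask] / np.maximum(q[mask], eps))))
```

Identical cover and stego histograms give exactly zero; any statistical footprint of the embedding shows up as a positive divergence.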
Applied public-key steganography
Pierre Guillon, Teddy Furon, Pierre Duhamel
We consider the problem of hiding information in a steganographic framework, i.e. embedding a binary message within an apparently innocuous content, in order to establish a suspicion-free digital communication channel. The adversary is passive, as no intentional attack is foreseen; the only threat is that she discovers the presence of a hidden communication. The main goal of this article is to determine whether the Scalar Costa Scheme, a recently published embedding method exploiting side information at the encoder, is suitable for that framework. We justify its use by assessing its security level with respect to Cachin's criterion. We derive a public-key stego-system following the ideas of R. Anderson and F. Petitcolas. This technique is then applied to PCM audio contents, and experimental performances are detailed in terms of bit rate and Kullback-Leibler distance.
Defining security in steganographic systems
Intuitively, the security of a steganographic communication between two principals lies in the inability of an eavesdropper to distinguish cover-objects from stego-objects, that is, objects which contain secret messages. A system should already be considered insecure if an eavesdropper can merely suspect the presence of secret communication. Several definitions of steganographic security have been proposed in the literature, but they share three shortcomings. First, they all consider only perfectly secure steganographic systems, where even a computationally unbounded observer cannot detect the presence of a secret message exchange. Second, it might be difficult to construct practical secure schemes following these definitions. Third, they all require knowledge of the probability distribution of normal covers; although it might be possible in certain cases to compute this probability, it will in general be infeasible to obtain. In this paper, we propose a novel approach for defining security in steganographic systems. The definition relies on a probabilistic game between the attacker and a judge. Given the ability to observe the normal communication process and the steganographic system, the attacker has to decide whether a specific object (given to him by a judge) is in fact a plain cover or a stego-object. We discuss the applicability of this new definition and pose the open problem of constructing provably secure steganographic systems.
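The game-based definition above has a direct empirical analogue: measure how much better than coin-flipping an attacker's distinguisher does when the judge hands it either a cover or a stego object. A hypothetical simulation sketch (the sampler and distinguisher names are illustrative):

```python
import random

def attacker_advantage(sample_cover, sample_stego, distinguisher, trials=2000):
    """Empirical advantage |Pr[correct] - 1/2| in the cover-vs-stego game:
    the judge flips a fair coin, hands the attacker one object, and the
    attacker guesses which distribution it came from."""
    correct = 0
    for _ in range(trials):
        is_stego = random.random() < 0.5
        obj = sample_stego() if is_stego else sample_cover()
        correct += (distinguisher(obj) == is_stego)
    return abs(correct / trials - 0.5)
```

A scheme is insecure in this sense if some efficient distinguisher attains a non-negligible advantage; identical cover and stego distributions force every distinguisher's advantage toward zero.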
StegoWall: blind statistical detection of hidden data
Novel functional possibilities provided by recent data hiding technologies carry the danger of uncontrolled (unauthorized) and unlimited information exchange that might be used by people with hostile interests. The multimedia industry as well as the research community recognize the urgent need for network security and copyright protection, and the lack of adequate law for digital multimedia protection. This paper advocates the need for detecting hidden data in digital and analog media as well as in electronic transmissions, and for attempting to identify the underlying hidden data. Solving this problem calls for an architecture for blind stochastic hidden data detection that prevents unauthorized data exchange. The proposed architecture, called StegoWall, rests on thorough investigation, deep understanding, and prediction of likely trends in the development of advanced data hiding technologies. The basic idea of our approach is to exploit all available information about hidden data statistics and to perform detection within a stochastic framework. The StegoWall system targets four main applications: robust watermarking, secret communications, integrity control and tamper proofing, and internet/network security.
Adaptive steganography
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
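A common way to adapt embedding to image content (a sketch of the general idea, not the authors' analytical framework) is to confine message bits to textured regions, where LSB changes are statistically masked:

```python
import numpy as np

def high_activity_mask(img, block=8, threshold=25.0):
    """Select blocks whose local variance exceeds a threshold; only pixels
    in these busy regions would carry message bits.  Block size and
    threshold are illustrative parameters."""
    img = np.asarray(img, float)
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if img[i:i + block, j:j + block].var() > threshold:
                mask[i:i + block, j:j + block] = True
    return mask
```

Flat regions, where a steganalyzer would most easily spot LSB anomalies, are left untouched; the embedder and extractor recompute the same mask from the (unchanged) higher bit planes or from a shared key.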
Audio Watermarking
StirMark Benchmark: audio watermarking attacks based on lossy compression
StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms, such as MPEG-2 Audio Layer 3, Ogg, or VQF, on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, like spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results for different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, and (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields; as an example, we describe its importance for e-commerce applications with watermarking security.
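A generic lossy-compression simulation of the kind described can be approximated, without any psychoacoustic model, by discarding high-frequency spectral coefficients and coarsely quantizing the rest. A hypothetical sketch of such an attack module (parameters are illustrative, not StirMark Benchmark's):

```python
import numpy as np

def simulate_lossy_compression(signal, keep_fraction=0.3, step=0.02):
    """Crude codec stand-in: zero all but the lowest-frequency DFT bins
    and quantize the survivors -- no psychoacoustic model involved."""
    spec = np.fft.rfft(np.asarray(signal, float))
    cutoff = int(len(spec) * keep_fraction)
    spec[cutoff:] = 0.0                   # discard high-frequency content
    spec = np.round(spec / step) * step   # coarse coefficient quantization
    return np.fft.irfft(spec, n=len(signal))
```

Running a watermark detector on the output gives a quick robustness estimate before testing against real codecs such as MP3 or Ogg.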
Quality evaluation of watermarked audio tracks
Michael Arnold, Kai Schilz
This paper presents an exhaustive evaluation of the quality of an audio watermarking algorithm. The integration of the psychoacoustic model into the audio watermarking approach is demonstrated. The quality parameter relating the power of the watermark noise and the masking threshold is presented. The evaluation method is detailed and the quality of the watermarked audio tracks is evaluated with regard to different settings of the quality parameter used to adjust the power of the embedded watermarks. The subjective listener test compares the quality of the original audio track with the watermarked one. Different quality parameter settings were used in order to enable the adjustment between quality and maximum robustness according to the items to be watermarked and the target audience.
Nth-order audio watermarking
Neil J. Hurley, Guenole C.M. Silvestre
Second generation watermarking schemes differ from first generation symmetric schemes in that the detection process does not require the use of the same private key in both the embedder and the detector. An advantage of such schemes is that estimation of the watermark by an averaging attack is rendered impossible, so that the overall system is more secure. Almost all second generation schemes to date are also second order; that is, they are based on the computation of a quadratic form in the detector. Recently, Furon presented a unified description of second order schemes and showed that O(m^2) attacks are required to estimate the quadratic form, where m is the spreading period. This is a significant improvement over the O(m) attacks required to estimate the watermarking key in symmetric schemes. In this work, the authors propose an audio watermarking scheme which employs an n-th order detection process. The scheme is based on a generalized differential modulation scheme and provides increased security over second order schemes. The cost of this increased security is a loss of efficiency, so that the watermark must be spread over more content. The paper presents an efficiency and security analysis for the third- and fourth-order schemes.
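The flavor of a second-order (quadratic-form) detector can be illustrated with a toy scheme, not the authors' n-th order construction: embed a watermark that repeats with period m, and detect it through the signal's autocorrelation at lag m, a quadratic form in the received samples that needs no private carrier at the detector.

```python
import numpy as np

def embed_periodic(host, key_chip, strength=1.0):
    """Tile a +/-1 chip sequence across the host so the watermark
    repeats with period m = len(key_chip)."""
    m = len(key_chip)
    reps = int(np.ceil(len(host) / m))
    w = np.tile(key_chip, reps)[: len(host)]
    return host + strength * w

def quadratic_statistic(x, m):
    """Quadratic detection statistic: sample autocorrelation at lag m.
    Marked content pushes it toward strength**2; unmarked noise stays
    near zero."""
    return float(np.dot(x[:-m], x[m:])) / (len(x) - m)
```

Because the detector never correlates against the chip sequence itself, averaging many marked works does not reveal the key directly, which is the security property the abstract attributes to second generation schemes.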
Audio content authentication based on psycho-acoustic model
The goal of audio content authentication techniques is to separate malicious manipulations from authentic signal processing applications like compression, filtering, etc. The key difference between malicious operations and signal processing operations is that the latter tends to preserve the perceptual content of the underlying audio signal. Hence, in order to separate malicious operations from allowed operations, a content authentication procedure should be based on a model that approximates human perception of audio. In this paper, we propose an audio content authentication technique based on an invariant feature contained in two perceptually similar audio data, i.e. the masking curve. We also evaluate the performance of this technique by embedding a hash based on the masking curve into the audio signal using an existing transparent and robust data hiding technique. At the receiver, the same content-based hash is extracted from the audio and compared with the calculated hash bits. Correlation between calculated hash bits and extracted hash bits degrades gracefully with the perceived quality of received audio. This implies that the threshold for authentication can be adapted to the required level of perceptual quality at the receiver. Experimental results show that this content-based hash is able to differentiate allowed signal processing applications like MP3 compression from certain malicious operations, which modify the perceptual content of the audio.
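The idea of a perceptual hash derived from the masking curve can be sketched, hypothetically and not as the paper's exact feature, by thresholding per-band averages of the curve against their median and comparing bit agreement at the receiver:

```python
import numpy as np

def masking_curve_hash(curve, n_bits=16):
    """One hash bit per band: is the band's mean above the median?
    Perceptually similar audio yields similar masking curves and thus
    mostly matching hash bits; band count is an illustrative choice."""
    bands = np.array_split(np.asarray(curve, float), n_bits)
    energies = np.array([b.mean() for b in bands])
    return (energies > np.median(energies)).astype(int)

def hash_agreement(h1, h2):
    """Fraction of matching bits; thresholding this correlation gives
    the authentication verdict, and the threshold can be tuned to the
    perceptual quality required at the receiver."""
    return float(np.mean(np.asarray(h1) == np.asarray(h2)))
```

Content-preserving processing perturbs the masking curve only slightly, keeping agreement high, while manipulations that alter perceptual content flip many bits.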
Text and Printing
Digimarc MediaBridge: the birth of a consumer product from concept to commercial application
Burt Perry, Brian MacIntosh, David Cushman
This paper examines the issues encountered in the development and commercial deployment of a system based on digital watermarking technology. The paper provides an overview of the development of digital watermarking technology and the first applications to use the technology. It also looks at how we took the concept of digital watermarking as a communications channel within a digital environment and applied it to the physical print world to produce the Digimarc MediaBridge product. We describe the engineering tradeoffs that were made to balance competing requirements of watermark robustness, image quality, embedding process, detection speed and end user ease of use. Today, the Digimarc MediaBridge product links printed materials to auxiliary information about the content, via the Internet, to provide enhanced informational marketing, promotion, advertising and commerce opportunities.
Digital watermarking scheme for extremely high-resolution printing images
Yuji Honjo, Kiyoshi Tanaka
In this work, we propose a new digital watermarking scheme for extremely high-resolution printing images applicable to On-Demand Publishing (ODP) system. We designed our scheme by considering the following requirements: (i) high image quality, (ii) high security, and (iii) watermark immunity (robustness). In order to attain these requirements we employ the idea of Spread Spectrum (SS) watermarking technique in our scheme and modify it to be applicable to color (CMYK) binary printing images. Simulation results verified that we could embed a watermark spreading over the entire output image as a weak energy and still keep high image quality. Also the watermark could be robustly decoded by controlling some parameters even after some possible attacks by a third party.
HIT: a new approach for hiding multimedia information in text
Essam A. El-Kwae, Li Cheng
A new technique for hiding multimedia data in text, called the Hiding in Text (HIT) technique, is introduced. The HIT technique can transform any type of media represented by a long binary string into innocuous text that follows correct grammatical rules. This technique divides English words into types where each word can appear in any number of types. For each type, there is a dictionary, which maps words to binary codes. Marker types are special types whose words do not repeat in any other type. Each generated sentence must include at least one word from the marker type. In the hiding phase, a binary string is input to the HIT encoding algorithm, which then selects sentence templates at random. The output is a set of English sentences according to the selected templates and the dictionaries of types. In the retrieving phase, the HIT technique uses the position of the marker word to identify the template used to build each sentence. The proposed technique greatly improves the efficiency and the security features of previous solutions. Examples for hiding text and image information in a cover text are given to illustrate the HIT technique.
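The template-and-dictionary mechanism can be sketched with a miniature, hypothetical version of HIT (the dictionaries and template below are invented for illustration; the real system uses large dictionaries and many templates):

```python
# Toy dictionaries: each word type maps fixed-length bit strings to words.
# "verb" plays the role of the marker type here -- its words appear in no
# other type, so the decoder can identify the template from the verb.
TYPES = {
    "noun":   {"00": "cats", "01": "dogs", "10": "birds", "11": "fish"},
    "verb":   {"0": "like", "1": "chase"},
    "object": {"00": "mice", "01": "string", "10": "crumbs", "11": "shade"},
}
TEMPLATE = ["noun", "verb", "object"]   # 5 bits per generated sentence

def hit_encode(bits):
    """Turn a bit string into grammatical toy sentences (pads the tail
    with zeros to fill the last sentence)."""
    sentences, i = [], 0
    while i < len(bits):
        words = []
        for t in TEMPLATE:
            width = len(next(iter(TYPES[t])))          # bits per slot
            chunk = bits[i:i + width].ljust(width, "0")
            words.append(TYPES[t][chunk])
            i += width
        sentences.append(" ".join(words))
    return sentences

def hit_decode(sentences):
    """Invert the dictionaries; in the full scheme the marker word pins
    down which template built each sentence."""
    reverse = {t: {w: b for b, w in d.items()} for t, d in TYPES.items()}
    return "".join(reverse[t][w]
                   for s in sentences
                   for t, w in zip(TEMPLATE, s.split()))
```

Every emitted sentence is grammatical, and the hidden payload is recovered purely from word identities, never from formatting or whitespace.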
Practical off-line authentication
Heajoung Yoo, Seungchul Seo, Sangjin Lee, et al.
This paper presents our approach to a secure and practical authentication system for printed data. It is based on a new watermarking technique that is robust to printer/scanner (PS) distortions and transformations. PS distortions are random, so they interfere with the self-alignment of the scanned data of a printed image; therefore, most conventional watermarking methods are not robust to them. In our system, however, the watermark can be detected by using the correlation between the scanned version of a printed image and the original cover image. Retrieval of this watermark is fragile to copying and to printing more than twice. Documents in which watermarks are embedded can thus serve as off-line authentication documents: a user can apply for a document via the Internet, immediately receive it at home or at the office on a printer, and the printed document can later be verified as genuine or not using our system. We expect the proposed watermarking technology to have wide application in real life.
Sensitivity labels and invisible identification markings in human-readable output
Christoph Busch, Stephen D. Wolthusen
This paper presents a mechanism for embedding both immediately readable and steganographically hidden information in human-readable output, particularly in hard copy format. The mechanism is embedded within a domain inaccessible to unprivileged users in the operating system's Trusted Computing Base. A realization is presented which permits the embedding of such markings in arbitrary printing systems under the Microsoft Windows NT family of operating systems.
Speech and Music
Comparison of two speech content authentication approaches
Speech content authentication, which is also called speech content integrity or tamper detection, protects the integrity of speech contents instead of the bitstream itself. In this work, two major approaches for flexible speech authentication are presented and compared. The first scheme is based on content feature extraction that is integrated with CELP speech coders to minimize the total computational cost. Speech features relevant to semantic meaning are extracted, encrypted and attached as the header information. The second method embeds fragile watermarks into the speech signal in the frequency domain. If the speech signal is tampered, the secret watermark sequence is also modified. The receiver detects the fragile watermark from received data and compares it to the original secret sequence. These two approaches are compared in terms of computational complexity, false detection rate, and tolerance to mis-synchronization and content preserving operations. It is shown that each approach has its own merits and shortcomings.
Automatic music monitoring and boundary detection for broadcast using audio watermarking
Taiga Nakamura, Ryuki Tachibana, Seiji Kobayashi
An application of watermarking for automatic music monitoring of radio broadcasts is discussed. By embedding information into the music as a watermark before broadcasting it, it is possible to keep track of what music has been on the air at what time, and for how long. However, to effectively implement this application, the handling of content transitions is important, because the detection reliability deteriorates at the content boundaries. In this paper, a method of detecting content boundaries using overlapping detection windows is described. The most probable pattern of content transition is selected under the condition that detection results from multiple windows are available. The derived rules are represented using a finite state model, which is useful for detection in real time. Experimental results on FM radio broadcasts are also presented.
Evolution of music score watermarking algorithm
Christoph Busch, Paolo Nesi, Martin Schmucker, et al.
Content protection for multimedia data is widely recognized as necessary, especially for data types that are frequently distributed, sold, or shared using the Internet. In particular, the music industry dealing with audio files has realized the necessity of content protection, and distribution of music sheets will face the same problems. Digital watermarking techniques provide a certain level of protection for these music sheets, but classical raster-oriented watermarking algorithms for images suffer several drawbacks when directly applied to image representations of music sheets. Therefore, new solutions have been developed which take the content of the music sheets into account. In comparison to other media types, watermarking of music scores is a rather young art. This paper reviews the evolution of the early approaches and describes the current state of the art in the field.
Scheme of standard MIDI files steganography and its evaluation
Steganography aims to make communication invisible by hiding genuine information in innocent objects. We have proposed SMF steganography, which hides information in Standard MIDI Files (SMF). SMF is widely used as a standard storage format for Musical Instrument Digital Interface (MIDI) data, and most digital musical instruments and personal computers are equipped with MIDI. Our hiding method exploits a redundancy in the description of note events (note on/off) in an SMF: note events that are performed simultaneously remain valid SMF content regardless of the order in which they are described. Therefore, we can permute the order of such note events according to the embedded data without changing the sound. To clarify the potential of SMF steganography, we calculate the embeddable data size for over three hundred SMFs publicly available on the Internet. The embedding rate, i.e. the percentage of embeddable data size relative to the cover SMF size, is about 1% on average, and the best case reaches about 4%. We also clarify the influence of the Quantize function (which adjusts the timing of note events) on the embeddable data size.
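The note-event permutation idea fits in a few lines: a chord of n simultaneous note-on events can be written in any of n! orders, so its ordering can carry floor(log2 n!) message bits. A hypothetical toy sketch (enumerating permutations is fine for small chords; a real implementation would rank them arithmetically):

```python
import math
from itertools import permutations

def chord_capacity(n_notes):
    """Bits carried by the ordering of n simultaneous note events."""
    return int(math.log2(math.factorial(n_notes))) if n_notes > 1 else 0

def embed_in_ordering(chord, bits):
    """Reorder a chord so that its permutation index (relative to the
    sorted order) encodes the leading message bits."""
    notes = sorted(chord)
    cap = chord_capacity(len(notes))
    value = int(bits[:cap], 2)              # value < 2**cap <= n!
    return list(list(permutations(notes))[value]), cap

def extract_from_ordering(observed):
    """Recover the bits from the observed ordering of the chord."""
    notes = sorted(observed)
    cap = chord_capacity(len(notes))
    idx = list(permutations(notes)).index(tuple(observed))
    return format(idx, "b").zfill(cap)
```

The reordered chord contains exactly the same notes at the same time, so the rendered audio is unchanged, which is the core of the scheme's imperceptibility.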
Capacity improvement for a blind symbolic music score watermarking technique
Current music score watermarking techniques lack either robustness or capacity: while embedding data into staff lines guarantees a certain payload, modulation of the staff lines' thickness can be attacked using image processing techniques. Changing properties of musical symbols is more robust, but the number of musical symbols limits the payload. We therefore propose the following method: the distances between staff lines are used as a carrier signal for embedding information, and changes are applied with an image warping technique to reduce artifacts. Image warping can also be applied locally to music symbols to change their properties, e.g. their horizontal distance or their width, which improves quality. Furthermore, musical symbols do not have to be recognized; it is sufficient to cluster the symbols according to their properties and to treat symbols of the same cluster equally. The described method guarantees a minimum payload and increased robustness, while the image warping reduces visible distortion, yielding visually appealing music scores. Capacity can be further improved by applying the proposed methods to music symbols.
Color Watermarking Techniques
Color image watermarking using channel-state knowledge
Josep Vidal, Maribel Madueno, Elisa Sayrol
This work concentrates on the problem of watermark embedding and blind optimum detection in the full-frame DCT domain using channel-state knowledge concepts. Minimum-length sequences are used to embed the watermark information in the color components. Each chip of the sequence is inserted in a random-like fashion in those coefficients that ensure imperceptibility, robustness, and a very low probability of error in detection. As will be shown, the power of the watermark has to be distributed among the symbols not only for imperceptibility but also to improve the detection process. Furthermore, two solutions are proposed to improve robustness against cropping operations.
Adaptive color watermarking
Alastair M. Reed, Brett T. Hannigan
In digital watermarking, a major aim is to insert the maximum possible watermark signal while minimizing visibility. Many watermarking systems embed data in the luminance channel to ensure watermark survival through operations such as grayscale conversion. For these systems, one method of reducing visibility is for the luminance changes due to the watermark signal to be inserted into the colors least visible to the human visual system, while minimizing the changes in the image hue. In this paper, we develop a system that takes advantage of the low sensitivity of the human visual system to high frequency changes along the yellow-blue axis, to place most of the watermark in the yellow component of the image. We also describe how watermark detection can potentially be enhanced, by using a priori knowledge of this embedding system to intelligently examine possible watermarked images.
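The luminance arithmetic behind this approach is easy to sketch: modulating only the blue channel (the yellow-blue direction) changes the luminance by just 0.114 per unit under the Rec. 601 weights, so a grayscale-surviving signal can be carried with comparatively little visible impact at high frequencies. A minimal sketch; the weights are standard Rec. 601, but the embedding strength and channel choice are illustrative, not the product's actual values:

```python
import numpy as np

REC601 = np.array([0.299, 0.587, 0.114])    # R, G, B luminance weights

def embed_blue_yellow(rgb, w, strength=4.0):
    """Push the watermark into the blue channel, i.e. along the
    yellow-blue axis where the eye is least sensitive to high
    frequencies; only 0.114 of the change reaches the luminance."""
    marked = np.asarray(rgb, float).copy()
    marked[..., 2] += strength * np.asarray(w, float)
    return np.clip(marked, 0.0, 255.0)

def luminance(rgb):
    """Rec. 601 luminance, i.e. what survives grayscale conversion."""
    return np.asarray(rgb, float) @ REC601
```

A detector that works on luminance still sees the 0.114-scaled watermark after grayscale conversion, while the visible change sits mostly in the least-noticed color direction.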
Hiding-based compression for improved color image coding
Patrizio Campisi, Deepa Kundur, Dimitrios Hatzinakos, et al.
This paper considers the use of data hiding strategies for improved color image compression. Specifically, color information is piggybacked on the luminance component of the image in order to reduce the overall signal storage requirements. A practical wavelet-based data hiding scheme is proposed in which selected perceptually irrelevant luminance bands are replaced with perceptually salient chrominance components. Simulation results demonstrate the improvement in compression quality of the proposed scheme over SPIHT and JPEG at low bit rates. The novel technique also has the advantage that it can further reduce the storage requirements of algorithms such as SPIHT, which are optimized for grayscale image compression.
Special Session: Protocols
Watermarking protocols for authentication and ownership protection based on timestamps and holograms
Digital watermarking has become an accepted technology for enabling multimedia protection schemes. One problem here is the security of these schemes: without a suitable framework, watermarks can be replaced and manipulated. We discuss different protocols providing security against rightful-ownership attacks and other fraud attempts, and compare the characteristics of existing protocols for different media, like direct embedding or seed-based approaches, along with the required attributes of the watermarking technology, like robustness or payload. We introduce two new media-independent protocol schemes for rightful ownership authentication. With the first scheme, we ensure the security of digital watermarks used for ownership protection with a combination of two watermarks: a first watermark of the copyright holder and a second watermark from a Trusted Third Party (TTP). It is based on hologram embedding, and the watermark consists of, for example, a company logo. As an example, we use digital images and specify the properties of the embedded additional security information. We identify components necessary for the security protocol, like timestamps, PKI, and cryptographic algorithms. The second scheme is used for authentication and is designed for invertible watermarking applications which require high data integrity. We combine digital signature schemes and digital watermarking to provide publicly verifiable integrity; the original data can only be reproduced with a secret key. Both approaches provide solutions for copyright and authentication watermarking and are introduced for image data, but can easily be adopted for video and audio data as well.
Return of ambiguity attacks
The ambiguity attack, or invertibility attack, was described several years ago as a potential threat to digital watermarking systems. By manipulating the invertibility of watermark embedding, one could negate or subvert the meaning of a copyright mark. These attacks were easily prevented, however, with the appropriate application of one-way functions and cryptographic hashes in watermarking protocols. New research in watermarking, however, has caused the ambiguity attack to resurface as a threat, and this time it will not be as easily averted. Recent work in public-key watermarking creates scenarios in which one-way functions may be ineffective against this threat. Furthermore, there are also positive uses for ambiguity attacks, as components in watermarking protocols. This paper provides an overview of the past and possible future of these unusual attacks.
Securing symmetric watermarking schemes against protocol attacks
Stefan Katzenbeisser, Helmut Veith
With the advent of the web and the creation of electronic distribution channels for multimedia objects, there is an increased risk of copyright infringements. Content providers try to alleviate this problem by using copyright protection facilities that often involve watermarking schemes as primitives. Clearly, the intention of the content provider can be subverted if the watermarking scheme is susceptible to intentional attacks, especially to attacks on the robustness of watermarks. It was noted early during the development of watermarking algorithms that the intention of resolving the copyright situation might be subverted entirely without removing any watermark contained in multimedia objects. Indeed, so-called protocol attacks try to introduce some sort of ambiguity during the copyright resolution process. After providing formal definitions for some common protocol attacks, we discuss the possibility of constructing watermarking schemes that are provably secure against ambiguity and copy attacks. Although there were several previous attempts to secure watermarking schemes against protocol attacks, we provide for the first time a formal security proof of our scheme. The security of the construction is based on a cryptographic primitive, namely an unforgeable public-key signature scheme, that is used to constrain the watermarking bits to have a specific form.
Links between cryptography and information hiding
Caroline Fontaine, Frederic Raynal
Both cryptography and information hiding (steganography, watermarking and fingerprinting) deal with the protection of data. Both are quite old areas, but cryptography has been studied far more thoroughly. Our purpose here is to compare the two fields and to understand how each can improve the other. We mainly recall the important concepts of each area and then confront them.
Attacks
icon_mobile_dropdown
Channel model for watermarks subject to desynchronization attacks
One of the most important practical problems of blind digital watermarking is resistance against desynchronization attacks, one of which is the Stirmark random bending attack in the case of image watermarking. Recently, new blind digital watermarking schemes have been proposed which do not suffer from host-signal interference. One of these quantization-based watermarking schemes is the Scalar Costa Scheme (SCS). We present an attack channel for SCS which tries to model typical artefacts of local desynchronization. Within the given channel model, the maximum achievable watermark rate for imperfectly synchronized watermark detection is computed. We show that imperfect synchronization leads to inter-sample interference by other signal samples, independent of the considered watermark technology. We observe that the characteristics of the host signal play a major role in the performance of imperfectly synchronized watermark detection. Applying these results, we propose a resynchronization method based on a securely embedded pilot signal. The watermark receiver exploits the embedded pilot watermark signal to estimate the transformation of the sampling grid. This estimate is used to invert the desynchronization attack before applying standard SCS watermark detection. Experimental results for the achieved bit error rate of SCS watermark detection confirm the usefulness of the proposed resynchronization algorithm.
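As a rough illustration of the dithered-quantization embedding that SCS builds on, the sketch below embeds one bit per sample; the step size and Costa scale factor are arbitrary illustrative values, not parameters from the paper.

```python
DELTA = 8.0   # quantizer step size (assumed for illustration)
ALPHA = 0.7   # Costa scale factor (assumed for illustration)

def scs_embed(x, bit, key=0.0):
    # Quantize the host sample onto a sub-lattice selected by the bit
    # (and shifted by a key-dependent dither), then move only a fraction
    # ALPHA of the way toward the lattice point.
    dither = (bit / 2.0 + key) * DELTA
    q = DELTA * round((x - dither) / DELTA) + dither
    return x + ALPHA * (q - x)

def scs_detect(y, key=0.0):
    # Decode by finding which sub-lattice the received sample is closest to.
    best_bit, best_dist = 0, float("inf")
    for bit in (0, 1):
        dither = (bit / 2.0 + key) * DELTA
        q = DELTA * round((y - dither) / DELTA) + dither
        if abs(y - q) < best_dist:
            best_bit, best_dist = bit, abs(y - q)
    return best_bit
```

Because only a fraction ALPHA of the quantization error is removed, the residual offset stays within the decision region in the noiseless case, which is the host-interference-free property the abstract refers to.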
Attacks, applications, and evaluation of known watermarking algorithms with Checkmark
Peter Meerwald, Shelby Pereira
The Checkmark benchmarking tool was introduced to provide a framework for application-oriented evaluation of watermarking schemes. In this article we introduce new attacks and applications into the existing Checkmark framework. In addition to describing new attacks and applications, we also compare the performance of some well-known watermarking algorithms (proposed by Bruyndonckx, Cox, Fridrich, Dugad, Kim, Wang, Xia, Xie, Zhu and Pereira) with respect to the Checkmark benchmark. In particular, we consider the non-geometric application, which contains tests that do not change the geometry of the image. This attack constraint is artificial, yet important for research purposes, since a number of algorithms may be interesting but would score poorly with respect to specific applications simply because geometric compensation has not been incorporated. We note, however, that with the help of image registration, even research algorithms that do not have counter-measures against geometric distortion -- such as a template or reference watermark -- can be evaluated. In the first version of the Checkmark benchmarking program, application-oriented evaluation was introduced, along with many new attacks not already considered in the literature. A second goal of this paper is to introduce new attacks and new applications into the Checkmark framework. In particular, we introduce the following new applications: video frame watermarking, medical imaging and watermarking of logos. Video frame watermarking includes low compression attacks and distortions which warp the edges of the video as well as general projective transformations which may result from someone filming the screen at a cinema. With respect to medical imaging, only small distortions are considered, and furthermore it is essential that no distortions are present at embedding. Finally, for logos, we consider images of small sizes, and particularly compression, scaling, aspect ratio and other small distortions. The challenge of watermarking logos is essentially that of watermarking a small and typically simple image. With respect to new attacks, we consider subsampling followed by interpolation, and dithering and thresholding, both of which yield a binary image.
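A toy one-dimensional version of the subsampling-followed-by-interpolation attack might look like the following (purely illustrative; Checkmark itself operates on images):

```python
def subsample_interpolate(signal, factor=2):
    # Keep every 'factor'-th sample, then rebuild the signal by linear
    # interpolation; high-frequency watermark components are destroyed.
    sub = signal[::factor]
    out = []
    for i in range(len(sub) - 1):
        for k in range(factor):
            out.append(sub[i] + (sub[i + 1] - sub[i]) * k / factor)
    out.append(sub[-1])
    return out
```

The attack preserves the overall appearance of the signal while discarding exactly the part of the spectrum where many watermarks live.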
Design of template in the autocorrelation domain
This paper proposes a methodology for designing a spatial watermark that is robust to geometrical attacks. The proposed watermarking methodology is based on a self-registering watermark that tiles the watermark pattern over the entire image. Thus, the peaks in the autocorrelation domain reveal information about the geometrical transformations the image has undergone. However, due to the limited precision of the autocorrelation domain, the template search is not reliable enough. The proposed scheme is based on a novel methodology for designing a watermark that is robust to small geometrical attacks. The watermark pattern is designed such that, when synchronization is off by a small geometrical transformation, the transformation can be identified without any searching. This characteristic of the watermark leads to a reduction in the search space of the template and compensates for the limited precision of the autocorrelation domain when synchronization is off by a large amount. The proposed watermark is generated as a filtered white pattern, and for the watermark to be robust against geometrical transformation and lossy compression, the filter must be carefully designed. The watermark generated with the proposed filter design shows improved detection reliability.
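The self-registering idea is easy to see in one dimension: tiling a pattern makes the circular autocorrelation peak at every multiple of the tile length, and shifts of those peaks betray resampling. A minimal sketch (the pattern below is an arbitrary example):

```python
def circular_autocorr(x):
    # Brute-force circular autocorrelation; peaks appear at lags that
    # are multiples of the tiling period.
    n = len(x)
    return [sum(x[i] * x[(i + lag) % n] for i in range(n)) for lag in range(n)]

# Tile a short pseudo-random +/-1 pattern four times.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
tiled = pattern * 4
acf = circular_autocorr(tiled)
```

In a real system the autocorrelation is computed via the FFT and the peak grid is fitted in 2-D, but the principle is the same.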
Method for the estimation and recovering from general affine transforms in digital watermarking applications
An important problem constraining the practical exploitation of robust watermarking technologies is the low robustness of the existing algorithms against geometrical distortions such as rotation, scaling, cropping, translation, change of aspect ratio and shearing. All these attacks can be uniquely described by general affine transforms. In this work, we propose a robust estimation method that uses the a priori known regularity of a set of points. These points can be typically local maxima, or peaks, resulting either from the autocorrelation function (ACF) or from the magnitude spectrum (MS) generated by periodic patterns, which result in regularly aligned and equally spaced points. This structure is kept under any affine transform. The estimation of affine transform parameters is formulated as a robust penalized Maximum Likelihood (ML) problem. We propose an efficient approximation of this problem based on the Hough transform (HT) or Radon transform (RT), which are known to be very robust in detecting alignments, even when noise is introduced by misalignments of points, missing points, or extra points. The high efficiency of the method is demonstrated even when severe degradations have occurred, including JPEG compression with a quality factor of 50%, where other known algorithms fail. Results with the Stirmark benchmark confirm the high robustness of the proposed method.
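The robustness of Hough voting to outliers can be demonstrated on a toy point set. The sketch below is a generic Hough line detector, not the paper's penalized-ML formulation: each point votes for all (theta, rho) cells it could lie on, and aligned points concentrate their votes in one cell while stray points scatter theirs.

```python
import math

def hough_peak(points, angle_steps=180, rho_res=1.0):
    # Vote in discretized (theta, rho) space using the normal form
    # rho = x*cos(theta) + y*sin(theta); the strongest cell reveals the
    # dominant alignment even in the presence of outliers.
    votes = {}
    for x, y in points:
        for t in range(angle_steps):
            theta = math.pi * t / angle_steps
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    return max(votes, key=votes.get)
```

Four collinear points plus one outlier still produce a clear peak on the line through the collinear subset, which is the property the ACF/MS peak-grid fitting relies on.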
Robust image watermarking scheme resilient to desynchronization attacks
Changryoul Choi, Jechang Jeong
Desynchronization attacks have been a particularly serious threat to watermarking applications, and there has been a concerted effort to solve this problem. Martin Kutter first proposed using the watermark itself, through its periodicity, as a tool for estimating affine transform parameters; this is called the self-reference scheme, and the idea has led to many variations. The scheme is resistant to general geometric transform attacks, and the 'reference' is more difficult to destroy than templates. However, it has some drawbacks, such as relatively low capacity and high computational complexity, especially in calculating the autocorrelation function. In this paper, we propose a robust watermarking scheme particularly resilient to desynchronization attacks that embeds the watermark periodically, and we solve these two problems of the self-reference scheme. For the problem of low channel capacity, we propose using a finite field sequence to partition a given block, together with M-ary modulation. For the problem of high computational complexity, we introduce bit-wise computations in calculating the autocorrelation function. We also separate the estimation of the translation factor from the other affine transform factors, which results in a performance improvement.
Transform Methods
icon_mobile_dropdown
Adaptive watermarking using successive subband quantization and perceptual model based on multiwavelet transform
Ki Ryong Kwon, Ahmed H. Tewfik
This paper presents an adaptive digital image watermarking scheme that uses successive subband quantization (SSQ) and perceptual modeling. Our approach performs a multiwavelet transform to determine the optimal local image properties and the watermark embedding locations. The multiwavelet used in this paper is the DGHM multiwavelet with approximation order 2, chosen to reduce artifacts in the reconstructed image. A watermark is embedded into the perceptually significant coefficients (PSC) of the image in each subband. The PSCs in high-frequency subbands are selected by setting the threshold to one half of the largest coefficient in each subband. After the PSCs in each subband are selected, a perceptual model is combined with a stochastic approach based on the noise visibility function to produce the final watermark.
Digital watermarking in wavelet domain with predistortion for authenticity verification and localization
Dominque Albert Winne, Henry D. Knowles, David R. Bull, et al.
In this paper, we present a blind fragile authentication algorithm obtained by modifying a robust algorithm. The embedding process modifies the relative position of one wavelet coefficient from a vector of 3 coefficients. The distortion introduced by the watermarking system is reduced by a content-dependent quantization parameter, which refines the quantization step according to the magnitude of the coefficients in the vector. The smallest wavelet coefficients in the smooth areas of the image are pre-distorted to improve the performance and efficiency of the algorithm in these areas. This pre-distortion does not visually degrade the image, as the introduced high-frequency noise is evenly distributed over these areas. A dichotomous detector compares the extracted and embedded watermarks on a bit-by-bit basis. This results in a high detection resolution, which can deliver information about the shape of the modified object. Embedding the watermark with a larger redundancy increases the robustness of the system to additive white Gaussian noise attacks; a weighted estimation then extracts the embedded watermark. This technique is fully described in the paper. Experimental results of this system embedded in the wavelet domain illustrate the performance and effectiveness compared with other reported fragile watermarking methods.
Compression-compatible digital watermark algorithm for authenticity verification and localization
Dominque Albert Winne, Henry D. Knowles, David R. Bull, et al.
In this paper, a new fragile watermarking system for authenticity verification is presented. This technique can detect and locate minor changes in a marked image. The method can be implemented in any domain. The embedding procedure modifies the representative value of a selected vector of coefficients according to the embedded watermark bit value. The mapping of the bits to the representative values and the formation of the orthogonal vectors are secured using a symmetric key system. The detector is a standalone system that does not need any prior knowledge about the original image or the embedded watermark. A tolerance bandwidth can be set to a minimum to reduce the level of false negative detector responses. Assuming there is a consistent pattern of tampering, an optimization algorithm has been designed to further reduce the false negative probability. By embedding the watermark in the same domain as is used for compression, the system can allow compression as an undetectable content preserving operation if the amount of quantization is known in advance. Experimental results of this system embedded in the DCT and Wavelet domain illustrate the performance and effectiveness compared with other reported fragile watermarking methods.
Blockwise image watermarking system with selective data embedding in wavelet transform domain
Pao-Chi Chang, Ta-Te Lu, Li-Lin Lee
In this paper, we propose an image watermarking system that is highly robust against various attacks without perceivable image degradation. The cover image is first discrete wavelet transformed (DWT), and then the low and middle subbands are divided into wavelet blocks. A selective watermark embedding method is used in which a DWT block is chosen for watermark embedding only when its coefficients clearly indicate the block polarity. Instead of the original image, a key is used in the watermark extraction to indicate the locations where watermark bits are embedded. The key is generated by a Tri-state Exclusive OR (TXOR) operation on the randomized watermark and the randomized DWT coefficients of the original image. Finally, a deadzone evacuation procedure is performed to ensure an adequate noise margin. If a DWT coefficient is very close to the polarity threshold, e.g., the median, then it will be forced to shift to the positive or the negative end of the deadzone depending on its polarity. Simulation results show that the key method proposed herein achieves excellent performance for Checkmark non-geometric attacks, such as filtering, compression, and copy attacks. The proposed scheme is also robust for image cropping at different positions.
Image watermarking for copyright protection and data hiding via the Mojette transform
This paper describes a new image watermarking methodology suitable both for copyright protection and for data hiding. The two presented algorithms are based upon the mathematical morphology properties of the Mojette transform (denoted MT in the following). The main properties of the Mojette transform are briefly recalled, and the linked concept of a phantom, which depicts the null space of the operator, is presented. These phantoms are implemented in the spatial domain, giving the added watermarks. Two algorithms based on this type of mark are then presented: the first is devoted to the copyright embedding process, and the second describes the steganographic scheme. The corresponding extractions of either the mark or the hidden message are then described. Finally, results are given for both schemes, and the robustness of the first scheme against geometric attacks as well as the data hiding capacity of the second algorithm are discussed.
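For readers unfamiliar with the transform: a Mojette projection along a rational direction (p, q) simply sums pixels over the parallel lines -q*k + p*l = b (one common sign convention). The binning can be sketched in a few lines; the phantom (null-space) construction used for the marks is beyond a short snippet.

```python
def mojette_projection(image, p, q):
    # Sum image samples f(k, l) into bins indexed by b = -q*k + p*l,
    # i.e. discrete line sums along the rational direction (p, q).
    bins = {}
    for k, row in enumerate(image):
        for l, value in enumerate(row):
            b = -q * k + p * l
            bins[b] = bins.get(b, 0) + value
    return bins
```

A phantom is then any image whose projections along the chosen directions are all zero, which is why adding one to an image is invisible to those projections.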
Theoretical Models
icon_mobile_dropdown
Estimation of amplitude modifications before SCS watermark detection
New blind digital watermarking schemes that are optimized for additive white Gaussian noise (AWGN) attacks have been developed by several research groups within the last two years. Currently, the most efficient schemes, e.g., the scalar Costa scheme (SCS), involve scalar quantization of the host signal during watermark embedding and watermark reception. Reliable watermark reception for these schemes is vulnerable to amplitude modification of the attacked host signal. In this paper, a method for the estimation of possible amplitude modifications before SCS watermark detection is proposed. The estimation is based on a securely embedded SCS pilot watermark. We focus on linear amplitude modifications, but also investigate the extension to nonlinear amplitude modifications. Further, the superiority of our proposal over an estimation method based on a spread-spectrum pilot watermark is demonstrated.
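Conceptually, the linear case reduces to a least-squares fit of a scale factor against the known pilot. The sketch below shows only that fit; in the actual scheme the pilot is itself an SCS watermark and host interference must be accounted for.

```python
def estimate_gain(received_pilot, pilot):
    # Least-squares estimate of g in the model: received ~= g * pilot.
    num = sum(y * p for y, p in zip(received_pilot, pilot))
    den = sum(p * p for p in pilot)
    return num / den
```

Once g is estimated, the receiver rescales the signal by 1/g before running the standard SCS detector.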
Turbo-coded trellis-based constructions for data hiding
Jim C. Chou, Kannan Ramchandran, S. Sandeep Pradhan
It has recently been discovered that many current applications such as data hiding and watermarking can be posed as the problem of channel coding with side information. As a result there has been considerable interest in designing codes that attain the theoretical capacity of the problem. It was shown by Pradhan et al. that in order to achieve capacity, a powerful channel codebook that partitions into powerful source codebooks should be chosen. The data to be embedded indexes the source codebook partition. The constructions that exist in the literature, however, are typically based on powerful channel codebooks and weak source codebook partitions, and hence remain at a considerable gap to capacity. In this paper, we present several methods of construction that are based on a powerful channel codebook (i.e., turbo codes) and powerful source codebook partitions (i.e., trellis-coded quantization) to bridge the gap to capacity. For the Gaussian channel coding with side information (CCSI) problem at a transmission rate of 1 bit/channel use, our proposed approach comes within 2.72 dB of the information-theoretic capacity.
Optimum decoding and detection of a multiplicative amplitude-encoded watermark
Mauro Barni, Franco Bartolini, Alessia De Rosa, et al.
The aim of this paper is to present a novel approach to the decoding and the detection of multibit, multiplicative watermarks embedded in the frequency domain. The watermark payload is conveyed by amplitude modulating a pseudo-random sequence, thus resembling conventional DS spread spectrum techniques. As opposed to conventional communication systems, though, the watermark is embedded within the host DFT coefficients by using a multiplicative rule. The watermark decoding technique presented in the paper is optimum, in that it minimizes the bit error probability. The problem of watermark presence assessment, which is often underestimated by state-of-the-art research on multibit watermarking, is addressed too, and the optimum detection rule is derived according to the Neyman-Pearson criterion. Experimental results are shown both to demonstrate the validity of the theoretical analysis and to highlight the good performance of the proposed system.
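The multiplicative rule itself is simple to state. Below is a toy one-bit version with a naive magnitude-correlation decoder; the paper's contribution is precisely the optimum decoder, which this sketch does not reproduce.

```python
def embed_multiplicative(coeffs, bit, prn, gamma=0.1):
    # Multiplicative rule: y_i = x_i * (1 + gamma * b * w_i),
    # with b in {-1, +1} carrying the bit and w_i in {-1, +1} the PRN chips.
    b = 1 if bit else -1
    return [x * (1 + gamma * b * w) for x, w in zip(coeffs, prn)]

def decode_naive(received, prn):
    # Correlate coefficient magnitudes with the PRN sequence; suboptimal
    # for multiplicative marks, but enough to show the principle.
    corr = sum(abs(y) * w for y, w in zip(received, prn))
    return 1 if corr > 0 else 0
```

Because the mark scales each coefficient rather than adding to it, the optimum decision statistic depends on the host's amplitude distribution, which is why plain correlation is suboptimal here.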
Performance analysis of information hiding
Shuichi Shimizu
Information hiding in host data, or transparent digital watermarking, can be treated as an application of digital communications in which the hidden information is conveyed through a channel where the noise includes the host data and stems from other sources. The amount of information to be hidden is called the payload. At the detector, the hidden information (the watermark) should be retrieved with high confidence. We present a theoretical performance analysis of this information hiding problem in terms of payload, detection error rate, SNR, bandwidth of the watermarking channel, and channel coding for error correction. The detector is assumed to be a correlator, which is known to be optimal for Gaussian noise. However, our analysis does not require that the host data have a Gaussian distribution. Since our analysis does not depend on the synchronization between the watermark signal and the detector, or on the maximum watermark power as constrained by preserving fidelity, our result defines the theoretical performance limits. We present two decision rules designed to satisfy given false alarm and code word error rates, based on energy detection and SNR estimation. We then apply two watermarking schemes, one with constant strength and the other with adaptive strength, in order to determine the watermarking design parameters by examining how the SNR decreases under random and quantization noise.
Theoretical framework for watermark capacity and energy estimation
Ruizhen Liu, Tieniu Tan
In this paper, a theoretical framework for image watermark capacity estimation is derived. We present a general watermark embedding scheme that is based on unitary transforms and adopts the additive watermarking model. In this general framework, simple and elegant formulas for the relationship between robustness and imperceptibility are derived. Hence, a direct link between watermark capacity and image quality metrics is established for a given level of perceptual image distortion. The main work of this paper consists of three parts: (1) a general watermark embedding framework, based on two basic assumptions (unitary transform and additive watermark embedding model), which also covers the case of block unitary transforms (such as the 8-by-8 block DCT in JPEG and MPEG-1/2); (2) under the above two assumptions, two important relationships between watermark capacity or energy and image quality measures (one for PSNR and another for HVS-based WPSNR); (3) a proof that for block unitary transforms, the above estimated relationships also hold. These results are helpful for watermark capacity and energy analysis in JPEG- and MPEG-based watermarking schemes.
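The PSNR link rests on Parseval's relation: a unitary transform preserves energy, so watermark energy added in the transform domain equals the total squared error in the spatial domain. A small numeric sketch (8-bit peak value assumed):

```python
import math

def psnr_from_watermark_energy(energy, num_pixels, peak=255.0):
    # Unitary transform => transform-domain watermark energy equals
    # spatial-domain total squared error; MSE is that energy per pixel.
    mse = energy / num_pixels
    return 10.0 * math.log10(peak * peak / mse)
```

Quadrupling the embedded energy therefore costs about 6 dB of PSNR, regardless of which unitary transform is used.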
Watermark design for the Wiener attack and whitening filtered detection
Youngha Hwang, Kyung Ae Moon, Myung Joon Kim
A well-known power spectrum condition (PSC) was derived for resisting the Wiener attack. The Wiener attack estimates the embedded watermark from a watermarked signal and subtracts it to disable watermark detection. According to the PSC, the power spectral density of a watermark should be proportional to that of the original data, since such a watermark is difficult to estimate. However, the PSC considers correlation-based detection only and pays no attention to whitening filtering before detection. In the watermark detection problem, the host signal acts as noise and usually has a colored power spectrum; thus, applying a whitening filter can considerably improve watermark detection performance. When using the whitening filter, if the power spectrum of the watermark matches that of the original data, the gain of the whitening filter is one and there is little enhancement in detection performance. On the contrary, a watermark with a power spectral density different from that of the original data can be removed by the Wiener attack, making detection difficult. This paper therefore aims to design a watermark that improves detection performance by satisfying these apparently opposite conditions. In our experiments, a watermark optimized with the calculus of variations achieved this objective.
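The benefit of whitening is easy to demonstrate with a crude first-order difference filter, which approximately whitens strongly low-pass host signals. This is an illustrative stand-in for a properly derived whitening filter, not the paper's design.

```python
import math

def whiten(x):
    # First-order difference: suppresses the smooth (low-frequency) host
    # while largely preserving a high-frequency watermark.
    return [x[i] - x[i - 1] for i in range(1, len(x))]

def normalized_corr(signal, wm):
    # Whiten both the received signal and the reference watermark,
    # then compute the normalized correlation as the detection statistic.
    ws, ww = whiten(signal), whiten(wm)
    num = sum(a * b for a, b in zip(ws, ww))
    den = math.sqrt(sum(a * a for a in ws) * sum(b * b for b in ww))
    return num / den
```

On a smooth host with an alternating-sign watermark, the whitened statistic is close to 1 when the mark is present and close to 0 when it is absent, which is the detection gain the abstract refers to.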
Improved watermarking scheme by reference signal mingling
Various watermarking schemes have been developed in an attempt to address the piracy issue. One of the most important requisites for an effective watermarking scheme is robustness: a robust scheme means the embedded watermark can still be extracted successfully from attacked watermarked data. Regardless of the attacker's motivation, the attacked data must retain acceptable quality; that is, an attacker cannot remove the embedded watermark without penalty. In this paper, we propose improving robustness by mingling a reference signal during watermark embedding. Knowledge of the reference signal at the receiver enables better channel estimation and a lower probability of detection error. To show the performance improvement from mingling a reference watermark, we applied it to quantization-based image watermarking in the discrete cosine transform (DCT) domain, with the watermarked image attacked by JPEG compression. The simulation results show that mingling reference signals indeed achieves better performance: for the same quality of the attacked watermarked image, the probability of error with reference-signal mingling is lower than without it.
Video Techniques
icon_mobile_dropdown
Watermarking for automatic quality monitoring
Matthew J. Holliman, Minerva M. Yeung
In this paper, we propose a new application for digital watermarking, which is that of automatic quality assessment. We compare several schemes in the literature that could be applied to the problem, and describe the benefits of distortion-dependent watermarking for the application.
Computational analysis and system implications of video watermarking applications
Eric Debes, Matthew J. Holliman, William W. Macy, et al.
The aim of this paper is to analyze the computational requirements of video watermarking algorithms running on PC-based systems and to study their implications for the design of general-purpose processors and systems. Selected watermarking algorithms are analyzed from a computational point of view. Application examples are executed on a current general-purpose processor architecture to understand the computational requirements and to detect potential bottlenecks. In addition to this workload analysis, the potential exploitation of data-level parallelism through the use of SIMD instructions available on current architectures is evaluated. Thread-level parallelism schemes in current watermarking are also studied in order to understand the potential benefit of simultaneous multithreading processors and symmetric multiprocessor systems for such applications. Even if the study of the different watermarking algorithms is crucial to understanding the requirements of a system, it is not sufficient. Indeed, watermarking schemes are very often only one kernel in a complete application, and the interaction between the watermarking kernel and the rest of the application can strongly influence the computational and memory bandwidth requirements of the system. Therefore, the example of watermark detection in a video decoder is used to understand the additional system implications arising from the merging of video decoding and watermarking algorithms.
Synchronization-insensitive video watermarking using structured noise pattern
Iwan Setyawan, Geerd Kakes, Reginald L. Lagendijk
For most watermarking methods, preserving the synchronization between the watermark embedded in digital data (image, audio or video) and the watermark detector is critical to the success of the watermark detection process. Many digital watermarking attacks exploit this fact by disturbing the synchronization of the watermark and the watermark detector, and thus disabling proper watermark detection without having to actually remove the watermark from the data. Some techniques have been proposed in the literature to deal with this problem. Most of these techniques employ methods to reverse the distortion caused by the attack and then try to detect the watermark from the repaired data. In this paper, we propose a watermarking technique that is not sensitive to synchronization. This technique uses a structured noise pattern and embeds the watermark payload into the geometrical structure of the embedded pattern.
Novel approach to collusion-resistant video watermarking
Karen Su, Deepa Kundur, Dimitrios Hatzinakos
This work considers the problem of frame collusion in video watermarking, one that is particularly relevant for this media due to the large collection of frames whose temporal inter-relationships may be exploited to facilitate estimation of the mark. Two new components are introduced: A mathematical framework for the statistical analysis of linear collusion and development of potential counterattacks; and a novel video watermarking approach employing the proposed strategies for robustness to collusion as well as other frame-as-image distortions. Experimental results demonstrating the performance of the proposed techniques against two types of collusion attacks are presented.
Real-time video watermarking technique
Han Ho Lee, Jong Jin Chae, Jong Uk Choi
Most previous video watermarking algorithms cannot operate in real time. Our algorithm embeds the watermark directly in the spatial domain rather than the frequency domain, and it is robust against common video attacks. In the paper, for example, the watermark is inserted immediately into the output frames of a Digital Video (DV) camcorder. We select the Y component from the DV signal, and the watermark information is inserted into all of the Y frames. The watermarked video frames are then fed into an MPEG video encoder. We consider embedding information into high-quality video streams, such as DVD and HDTV. Our experimental results show high video quality even after compression. Robustness against compression is tested with MPEG-2 at 6 Mbit/s and a 720x480 frame size, and invisibility is verified by PSNR measurement. The results also show robustness against several video editing operations, such as cut-and-splice and cut-insert-splice, and video conversions such as letterboxing, pan & scan, and wide screen.
Video watermarking resistance to rotation, scaling, and translation
Xiamu Niu, Martin Schmucker, Christoph Busch
A video watermarking scheme with robustness against rotation, scaling and translation (RST) is proposed. The watermark information is embedded into pixels along the temporal axis within a Watermark Minimum Segment (WMS). Since the RST operations on every frame along the time axis of a video sequence are the same over a very short interval, the watermark information can be detected from the watermarked frames in each WMS even after RST. Experimental results show that the proposed technique is robust against RST, bending and shearing of frames, MPEG-2 lossy compression, color-space conversion, and frame dropping attacks.
Temporal synchronization in video watermarking
This paper examines the problems with temporal synchronization in video watermarking and describes a new method for efficient synchronization and resynchronization. In our method, efficient synchronization is achieved by designing temporal redundancy in the structure of the watermark. Temporal redundancy allows the watermark detector to establish and maintain synchronization without performing extensive search in the watermark key space. Our method does not use synchronization templates that may be subject to attack and increase the visibility of the watermark. Another advantage of our technique is that the watermark structure is video-dependent, which enhances security. The technique is implemented using a spatial domain watermark on uncompressed video and a finite state machine watermark key generator. The implementation illustrates the effectiveness of using temporal redundancy for synchronization, and is shown to be resilient to desynchronization attacks such as frame dropping, frame insertion, local frame transposition, and frame averaging. Our method for synchronization is independent of the embedding method and can be used with a wide class of watermark embedding and detection techniques, including those for other time-varying signals such as digital audio.
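One way to picture temporal redundancy is a small periodic state machine: because the key sequence repeats, a detector that loses lock can realign by trying only a handful of states rather than searching the whole key space. The toy generator below illustrates the idea only; it is not the paper's finite state machine construction.

```python
def fsm_keys(seed, n, states=8):
    # A tiny full-period state machine over 'states' states; each frame's
    # key is the current state, so the key sequence is periodic and a
    # detector can resynchronize by testing at most 'states' offsets.
    keys, state = [], seed % states
    for _ in range(n):
        keys.append(state)
        state = (5 * state + 3) % states
    return keys
```

After frame drops or insertions, the detector slides the known periodic key sequence over the received frames until detection statistics recover, which is far cheaper than an exhaustive key search.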
Video authentication with self-recovery
Digital video has become increasingly susceptible to spatio-temporal manipulations as a result of recent advances in video editing tools. In this paper, we propose a secure and flexible fragile digital video authentication watermark which also enables the self-recovery of video content after malicious manipulations. In the proposed block-based method, the watermark payload of a block is composed of two parts: authentication and recovery packets. The authentication packet is a digital signature with a special structure and carries the spatio-temporal position of the block. The digital signature guarantees the authenticity and integrity of the block as well as the recovery packet, whereas the localization information prevents possible cut & paste attacks. On the other hand, the recovery packet contains a highly compressed version of a spatio-temporally distant block. This information enables the recovery of the distant block, upon detection of tampering by its authentication packet. A spatio-temporal interleaving scheme and a simple multiple description coding mechanism increase the probability of self recovery by diffusing recovery information throughout the sequence. Finally, watermark payload is embedded by least significant bit modulation.
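The final embedding step, least-significant-bit modulation, is straightforward. The routine below is a generic LSB embedder/extractor, ignoring the paper's packet structure, interleaving, and signatures.

```python
def embed_lsb(pixels, payload_bits):
    # Overwrite each pixel's least significant bit with one payload bit.
    return [(p & ~1) | b for p, b in zip(pixels, payload_bits)]

def extract_lsb(pixels, n):
    # Read back the first n payload bits from the pixel LSBs.
    return [p & 1 for p in pixels[:n]]
```

Because only the LSB plane changes, each pixel moves by at most one gray level, keeping the authentication payload imperceptible.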
Digital watermarking for secure and adaptive teleconferencing
Jan Christop Vorbrueggen, Niels Thorwirth
The EC-sponsored project ANDROID aims to develop a management system for secure active networks. Active network means that the network's customers are allowed to execute code (Java-based so-called proxylets) on parts of the network infrastructure. Secure means that the network operator nonetheless retains full control over the network and its resources, and that proxylets use ANDROID-developed facilities to provide secure applications. Management is based on policies and allows autonomous, distributed decisions and actions to be taken. Proxylets interface with the system via policies; among the actions they can take are controlling the execution of other proxylets and redirecting network traffic. Secure teleconferencing is used as the application to demonstrate the approach's advantages. One way to control a teleconference's data streams is digital watermarking of the video, audio, and/or shared-whiteboard streams, providing an imperceptible and inseparable side channel that delivers information from originating or intermediate stations to downstream stations. Depending on the information carried by the watermark, these stations can take many different actions. Examples are forwarding decisions based on (possibly time-varying) security classifications at security boundaries, set-up and tear-down of virtual private networks, intelligent and adaptive transcoding, recorder or playback control (e.g., speaking off the record), copyright protection, and sender authentication.
Embedding
Method for hiding synchronization marks in scale- and rotation-resilient watermarking schemes
As concern about watermark robustness grows, different approaches based on the introduction of synchronization marks have been presented to survive geometrical attacks. Most of the proposed techniques rely either on the introduction of a dedicated template or on the detection of particular properties in the autocorrelation function (ACF) of the watermark itself in order to resist scaling and rotation transformations. The use of this side information at the detector makes an inversion of the transformation possible. However, because this side information is public, those techniques turn out to be very vulnerable to removal attacks. We propose an innovative method to hide the synchronization marks and therefore prevent malicious removal attacks (e.g., the template attack). The ability to detect the synchronization marks is conditioned on the knowledge of a secret key. The technique consists of modulating the synchronization pattern with an image-dependent secret binary mask before it is introduced into the image. The ability to recover this binary mask after rotation and scaling allows the detection of the synchronization marks even after transformation. Although mask recovery exhibits a considerable error rate, sufficient detection of the synchronization marks can be achieved. The mask, obtained from a signal-dependent partition, leads to a spectral spreading of the synchronization mark, making template attacks nearly impossible to perform.
Application of composite invisible image watermarks to simplify detection of a distinct watermark from a large set
Gordon W. Braudaway, Frederick C. Mintzer
Earlier, we presented a highly robust invisible watermarking technique for digitized images having a payload of one bit - indicating the presence or absence of the watermark. Other invisible watermarking techniques also possess this property. This family of techniques may be used to watermark a source image with distinct marks, perhaps to indicate the identity of the recipient, resulting in a set of many near-copies of the source image. Then, the problem of detecting a distinct watermark in an image from the set may imply attempting detection of all possible watermarks. In this paper we will present a technique using composite watermarks which reduces the number of attempts necessary for distinct watermark detection. If the number of images in the set is m to the power n, then the number of attempted detections is never more than m times n. Thus, for m=10 and n=3, a set of 1000 distinctly watermarked near-copies can be produced, but instead of 1000 attempted detections to ensure identification of a particular watermark, only thirty are required. The techniques used for constructing composite watermarks will be detailed and limitations of this approach will be discussed. Results of a successful detection of a distinct watermark from a large set will be presented.
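The combinatorics above can be sketched directly: with n layers of m candidate patterns each, a composite watermark identifies one of m**n recipients, and detection tests each layer independently. A minimal correlation-based sketch (all pattern shapes, sizes, and the noise model are illustrative, not the authors' implementation):

```python
# Sketch: identify one of m**n composite watermarks with at most m*n
# correlation tests by resolving each of the n layers independently
# against its m candidate patterns.
import numpy as np

rng = np.random.default_rng(0)
m, n, size = 10, 3, 4096

# One set of m candidate patterns per layer (illustrative random patterns).
layers = [rng.standard_normal((m, size)) for _ in range(n)]

def embed(rid):
    """Composite watermark = sum of one pattern per layer (base-m digits of rid)."""
    wm = np.zeros(size)
    for layer in layers:
        wm += layer[rid % m]
        rid //= m
    return wm

def detect(signal):
    """Per layer, pick the candidate with maximal correlation: m*n tests total."""
    rec_id, weight = 0, 1
    for layer in layers:
        digit = int(np.argmax(layer @ signal))
        rec_id += digit * weight
        weight *= m
    return rec_id

# Noisy channel: the margin of the self-correlation term is large enough
# that only m*n = 30 tests still recover the recipient identity.
watermarked = embed(537) + 0.5 * rng.standard_normal(size)
recovered = detect(watermarked)
```

The key point is that detection cost grows as m*n rather than m**n, exactly the reduction the abstract describes (30 tests for 1000 recipients).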
Rotation-, scaling-, and translation-robust image watermarking using Gabor kernels
In this paper, we propose an RST-robust watermarking algorithm which exploits the orientation feature of a host image by using 2D Gabor kernels. From the viewpoint of watermark detection, host images are usually regarded as noise. However, since geometric manipulations affect the watermark and the host image simultaneously, evaluating the host image can help characterize the distortion. To make the most of this property, we first hierarchically find the orientation of the host image with 2D Gabor kernels and insert a modified reference pattern, aligned to the estimated orientation, in a selected transform domain. Since the pattern is generated in a repetitive manner according to the orientation, in the detection step we can simply project the signal in the direction of the image orientation and average the projected values to obtain a 1-D average pattern. Finally, correlation of the 1-D projected average pattern with the watermark identifies periodic peaks. Experimental results against geometric attacks, including aspect-ratio changes and rotation, are analyzed.
Lossless data embedding for all image formats
Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over image quality and artifact visibility arise, such as for medical images (due to legal reasons), for military images, or for images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms: raw, uncompressed formats (BMP); lossy or transform-based formats (JPEG); and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors have coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
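One general principle behind such schemes, lossless compression of a redundant bit-plane to make room for the payload, can be sketched for the raw-format case. This is a hedged illustration, not the paper's algorithm: the original LSB plane is compressed and re-embedded alongside the message, so the exact original image is recoverable with no side channel (capacity exists only when the LSB plane is compressible).

```python
# Sketch of reversible LSB embedding for a raw grayscale image: compress
# the original LSB plane, then write (length, compressed plane, payload)
# into the LSBs. Extraction restores the image bit-exactly.
import zlib
import numpy as np

def embed(pixels, payload: bytes):
    lsb = (pixels & 1).astype(np.uint8)
    packed = zlib.compress(np.packbits(lsb).tobytes(), 9)
    blob = len(packed).to_bytes(4, "big") + packed + payload
    bits = np.unpackbits(np.frombuffer(blob, np.uint8))
    assert bits.size <= pixels.size, "LSB plane not compressible enough"
    out = pixels.copy()
    flat = out.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return out

def extract(stego, payload_len):
    bits = (stego.reshape(-1) & 1).astype(np.uint8)
    blob = np.packbits(bits).tobytes()
    plen = int.from_bytes(blob[:4], "big")
    packed = blob[4 : 4 + plen]
    payload = blob[4 + plen : 4 + plen + payload_len]
    lsb = np.unpackbits(np.frombuffer(zlib.decompress(packed), np.uint8))
    restored = (stego.reshape(-1) & 0xFE) | lsb[: stego.size]
    return restored.reshape(stego.shape), payload
```

The achievable capacity is exactly the redundancy of the LSB plane, which is why the abstract stresses that safe capacities depend on the image format and its statistical structure.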
Image-feature-based robust digital watermarking scheme
A novel robust digital image watermarking scheme which combines image feature extraction and image normalization is proposed. The goal is to resist both geometrical and signal-processing attacks. We adopt a feature extraction method called Mexican Hat wavelet scale interaction. The extracted feature points can survive a variety of attacks, such as common signal processing, JPEG compression, and geometric distortions, and can therefore be used as reference points for both watermark embedding and detection. Because the normalized image of a rotated image (object) is the same as the normalized version of the original image, the watermark detection task is much simplified when it is performed on the normalized image, without reference to the original image. However, because image normalization is sensitive to local image variation, we apply it separately to non-overlapping image disks, the center of each disk being an extracted feature point. Several copies of a 16-bit watermark sequence are embedded in the original image to improve robustness. Simulation results show that our scheme can survive low-quality JPEG compression, color reduction, sharpening, Gaussian filtering, median filtering, the printing-and-scanning process, row or column removal, shearing, rotation, scaling, local warping, cropping, and linear transformation.
Image watermarking in the Fourier domain based on global features of concentric ring areas
This paper presents a blind decoding watermarking scheme that takes advantage of two basic properties of the Fourier transform: The image information is transformed into frequency bands centered around the origin of the coordinate system and the image information is represented as phase and amplitude information, the latter being independent from shifts in the original image (i.e. pixel domain). These properties are exploited to embed a watermark that is inherently robust against shifts and rotation in the pixel domain and shows considerable robustness against cropping and downscaling as well. The amplitude part of the Fourier representation of the image is subdivided into sections. A pair of sections is used to embed one bit of watermark information with the bit value being represented by a predefined difference between the mean power values of the sections. The payload of the presented watermarking scheme strongly depends on the size of the image. Tests based on a light-weight implementation of the presented scheme were run with a watermark payload of 16 bits for images of 512 by 512 pixels in size.
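The section-pair idea can be sketched as a toy version (hedged: the radii, gain, and two-ring layout are illustrative, not the authors' implementation). One bit is encoded as the sign of the difference between the mean spectral power of two concentric rings of the FFT magnitude; the phase is left untouched, so the mark is invariant to spatial shifts:

```python
# Sketch: one bit per ring pair, encoded as the sign of the mean-power
# difference between two concentric rings of the Fourier amplitude.
import numpy as np

def ring_mask(shape, r1, r2):
    # Normalized radial frequency; symmetric under k -> -k, so scaling a
    # ring keeps the spectrum conjugate-symmetric (real reconstruction).
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.hypot(fy, fx)
    return (r >= r1) & (r < r2)

def embed_bit(img, bit, gain=1.5):
    F = np.fft.fft2(img)
    a = ring_mask(img.shape, 0.10, 0.15)
    b = ring_mask(img.shape, 0.15, 0.20)
    hi, lo = (a, b) if bit else (b, a)
    F[hi] *= gain          # raise one ring's power...
    F[lo] /= gain          # ...and lower its partner's
    return np.real(np.fft.ifft2(F))

def detect_bit(img):
    P = np.abs(np.fft.fft2(img)) ** 2
    a = ring_mask(img.shape, 0.10, 0.15)
    b = ring_mask(img.shape, 0.15, 0.20)
    return bool(P[a].mean() > P[b].mean())
```

Because only the amplitude spectrum carries the bit, a translation of the watermarked image (which changes only the phase) leaves the detector statistic unchanged, which is the shift-robustness property the abstract exploits.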
Image compression and watermarking using lattice vector quantization
Jean-Marie Moureaux, Ludovic Guillemot
In this paper, we propose a new algorithm which combines compression using Lattice Vector Quantization with watermarking. It is motivated by the fact that lossy compression tends to remove useless or non-visible information, while watermarking aims at inserting invisible information. We show that this combined approach improves robustness to attacks such as additional JPEG compression, median filtering, and white noise. Furthermore, it is of low complexity and offers a wide range of channel capacities according to the required robustness.
Applications
Use of web cameras for watermark detection
John Stach, Trenton J. Brundage, Brett T. Hannigan, et al.
Many articles covering novel techniques, theoretical studies, attacks, and analyses have been published recently in the field of digital watermarking. In the interest of expanding commercial markets and applications of watermarking, this paper is part of a series of papers from Digimarc on practical issues associated with commercial watermarking applications. In this paper we address several practical issues associated with the use of web cameras for watermark detection. In addition to the obvious issues of resolution and sensitivity, we explore issues related to the tradeoff between gain and integration time to improve sensitivity, and the effects of fixed-pattern noise, time-variant noise, and lens and Bayer-pattern distortions. Furthermore, the ability to control (or at least determine) camera characteristics, including white balance, interpolation, and gain, has proven critical to the successful application of watermark readers based on web cameras. These issues and tradeoffs are examined with respect to typical spatial-domain and transform-domain watermarking approaches.
Watermarking 2D-vector data for geographical information systems
Michael Voigt, Christoph Busch
This paper deals with the issue of watermarking 2D-vector data used in Geographical Information Systems (GIS). The watermark is embedded in the tolerance range of the coordinates, where one bit of the watermarking information is represented by one PN-sequence whose elements take the two values +tolerance and -tolerance. To robustly embed one bit of the watermarking information, the length of the PN-sequence would have to be much greater than the squared maximum coordinate value, leading to unacceptable sequence lengths for large coordinate values. To achieve a PN-sequence length suited to the size of the data domain, we do not consider the whole coordinate value but only those decimal digit positions where changes are significant yet do not violate the tolerance requirements. Because of this restriction to a smaller range of values, overflow and underflow have to be considered during the embedding process. In the retrieval process, we first extract this fraction of the coordinate value before correlating it with the PN-sequence. The proposed method is robust against attackers changing the coordinates within the tolerance range.
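A quantization-flavored reading of the digit-fraction idea can be sketched as follows (hedged: the paper embeds an additive PN-sequence into selected decimal digits, whereas this toy version writes each chip into the coordinate residue; the tolerance value, key handling, and residue mapping are all illustrative). No coordinate moves by more than the tolerance, and the blind detector correlates the extracted chips with the same keyed PN-sequence:

```python
# Sketch: one watermark bit spread over a keyed PN-sequence of chips, each
# chip written into a coordinate's residue modulo 2*TOL, so every
# displacement stays within the allowed tolerance.
import numpy as np

TOL = 0.05  # illustrative coordinate tolerance

def pn_sequence(key, n):
    return np.where(np.random.default_rng(key).random(n) < 0.5, -1.0, 1.0)

def embed_bit(coords, bit, key):
    pn = pn_sequence(key, coords.size)
    chips = pn if bit else -pn
    step = 2 * TOL
    # chip +1 -> residue 1.5*TOL, chip -1 -> residue 0.5*TOL
    target = np.where(chips > 0, 1.5 * TOL, 0.5 * TOL)
    # snap each coordinate to the nearest value with the target residue;
    # the move is bounded by step/2 = TOL
    return np.round((coords - target) / step) * step + target

def detect_bit(coords, key):
    pn = pn_sequence(key, coords.size)
    chips = np.where(coords % (2 * TOL) > TOL, 1.0, -1.0)
    return float(chips @ pn) > 0
```

Spreading one bit over many coordinates is what buys robustness against an attacker who re-perturbs coordinates within the tolerance range: flipping the correlation sign would require coordinated changes across most of the sequence.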
Biometric authentication for ID cards with hologram watermarks
Lucilla Croce Ferri, Astrid Mayerhoefer, Marcus Frank, et al.
We present an analysis of a new technique for the authentication of ID cardholders, based on the integration of a biometrics-based authentication system with digital watermarks. The chosen biometric authentication method is dynamic signature verification of the ID cardholder, while we use a specific integrity watermark technique we developed, called the Hologram Watermark, to embed and retrieve the off-line data on an ID card. We take advantage of the fact that two static biometric features (images of the user's face and signature) are already an integral part of ID cards for manual verification, and extend the stored biometric information by embedding on-line handwriting features of the signature as holographic watermarks in the overall image information of the ID card. Manipulation of any of the image information can be detected and will furthermore prevent biometric verification by the forger. The Hologram Watermark technique produces content-related data using computer-generated hologram coding techniques. These data are embedded with watermarking techniques into the personal data printed on an ID card. The content-related data in this specific application are the dynamic features of the cardholder's signature. The main goal of this paper is to analyze the suitability of dynamic signature verification in combination with the Hologram Watermark technique to facilitate automated user authentication based on information transparently embedded in ID cards.
Biometric watermarking based on face recognition
We describe a biometric watermarking procedure based on object recognition for accurate facial signature authentication. An adaptive metric learning algorithm incorporating watermark and facial signatures is introduced to separate an arbitrary pattern of unknown intruder classes from that of known true-user ones. The verification rule for multiple signatures is formulated to map a facial signature pattern in the overlapping classes to a separable disjoint one. The watermark signature, which is uniquely assigned to each face image, reduces the uncertainty of modeling missing facial signature patterns of the unknown intruder classes. The proposed adaptive metric learning algorithm improves the recognition error rate from 2.4% to 0.07% on the ORL database, which is better than previously reported results using the Karhunen-Loeve transform, convolutional networks, and hidden Markov models. Face recognition facilitates the generation and distribution of the watermark key. The watermarking approach focuses on using salient facial features to make watermark signatures robust to various attacks and transformations. A coarse-to-fine approach is presented to integrate pyramidal face detection, geometry analysis, and face segmentation for watermarking. We conclude with an assessment of the strengths and weaknesses of the chosen approach as well as possible improvements to the biometric watermarking system.
Improved key management for digital watermark monitoring
Volker Roth, Michael Arnold
In this article we propose content-based retrieval techniques as a means of improving the key management used for digital watermark monitoring. In particular, we show how keys for watermark monitoring can be used on a per-media-item basis (rather than one secret key for all works copyrighted by a single owner), while retaining a high probability of successful spotting.
Fragile Watermarking
New semifragile image authentication watermarking techniques using random bias and nonuniform quantization
Kurato Maeno, Qibin Sun, Shih-Fu Chang, et al.
Semi-fragile watermarking methods aim at detecting unacceptable image manipulations, while allowing acceptable manipulations such as lossy compression. In this paper, we propose new semi-fragile authentication watermarking techniques using random bias and non-uniform quantization, to improve the performance of the methods proposed by Lin and Chang. Specifically, the objective is to improve the performance tradeoff between the alteration detection sensitivity and the false detection rate.
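For context, the baseline that random bias and non-uniform quantization refine is plain uniform quantization embedding, which can be sketched as follows (illustrative step size; this is the generic technique, not the proposed method): a bit is written by snapping a coefficient to a quantization bin of matching parity, so any perturbation smaller than half a step (e.g., mild lossy compression) leaves the bit intact, while larger alterations flip it:

```python
# Sketch of uniform even/odd quantization embedding: the bit is the parity
# of the quantization bin, and the coefficient is moved to a bin centre to
# maximize the noise margin (step/2).
import numpy as np

STEP = 8.0  # illustrative quantization step

def embed(coeff, bit):
    q = np.floor(coeff / STEP)
    if int(q) % 2 != bit:
        # move to the nearer adjacent bin, which has the right parity
        q += 1 if coeff - q * STEP >= STEP / 2 else -1
    return (q + 0.5) * STEP

def detect(coeff):
    return int(np.floor(coeff / STEP)) % 2
```

The semi-fragile tradeoff is visible directly: a larger STEP tolerates stronger acceptable manipulations but lowers detection sensitivity to malicious ones, which is exactly the tradeoff the random-bias and non-uniform-quantization refinements target.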
Multivalued semifragile watermarking
In this paper we propose a new semi-fragile watermarking technique which embeds multivalued watermarks. Semi-fragile watermarking must assess the degree of tampering correctly; however, most previously proposed schemes, which use binary watermarks, declare an image to be tampered with even under light JPEG compression. The proposed scheme embeds multivalued watermarks into wavelet coefficients, which we call q-ary watermarking. In q-ary watermarking, we embed each bit of the q-ary watermark into wavelet coefficients residing in different wavelet levels, to reflect the characteristics of lossy compression. For that purpose, we embed the significant bits of the watermark into lower-frequency components and the less significant bits into higher-frequency components. Furthermore, the chosen wavelet coefficients should be spatially related; we apply the concept of the zero-tree, proposed in conjunction with wavelet-based image compression, to select such coefficients. Using q-ary watermarking, we can evaluate not only the positions or the number of tampered locations but also the degree of tampering. It is proven that q-ary watermarking can evaluate the degree of JPEG compression more accurately than the Kundur-Hatzinakos scheme over a wide range of JPEG compression ratios.
Wavelet-based reversible watermarking for authentication
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become urgent problems for content owners and distributors, and digital watermarking has provided a valuable solution. Based on the application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarks, reversible watermarks (also called lossless, invertible, or erasable watermarks) enable the recovery of the original, unwatermarked content after the watermarked content has been verified as authentic. Such reversibility is highly desired in sensitive imagery, such as military and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We examine the binary representation of each wavelet coefficient and embed an extra bit into each expandable coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and the original values of these coefficients are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
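The "expandable coefficient" idea is in the spirit of difference expansion, which can be sketched on a single integer Haar step over a pixel pair (a hedged miniature: the paper operates on a full integer wavelet transform and handles overflow with a compressed location map). The difference is doubled and the payload bit appended to its least significant position; both the bit and the original pair are exactly recoverable:

```python
# Sketch of one difference-expansion step on a pixel pair (a, b):
# forward integer Haar: avg = floor((a+b)/2), diff = a - b;
# inverse:             a = avg + floor((diff+1)/2), b = avg - floor(diff/2).
def embed_pair(a, b, bit):
    """Expand the difference by one bit; fully invertible for in-range pairs."""
    avg, diff = (a + b) // 2, a - b
    diff = 2 * diff + bit                 # append the payload bit
    return avg + (diff + 1) // 2, avg - diff // 2

def extract_pair(a, b):
    avg, diff = (a + b) // 2, a - b
    bit, diff = diff & 1, diff >> 1       # peel the bit, restore the difference
    return (avg + (diff + 1) // 2, avg - diff // 2), bit
```

A real scheme must check that expansion keeps values inside the valid pixel range (e.g., 0..255); coefficients failing that test are recorded in the location map the paper compresses with JBIG2.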
Security of fragile authentication watermarks with localization
In this paper, we study the security of fragile image authentication watermarks that can localize tampered areas. We start by comparing the goals, capabilities, and advantages of image authentication based on watermarking and cryptography. Then we point out some common security problems of current fragile authentication watermarks with localization and classify attacks on authentication watermarks into five categories. By investigating the attacks and vulnerabilities of current schemes, we propose a variation of the Wong scheme that is fast, simple, cryptographically secure, and resistant to all known attacks, including the Holliman-Memon attack. In the new scheme, a special symmetry structure in the logo is used to authenticate the block content, while the logo itself carries information about the block origin (block index, the image index or time stamp, author ID, etc.). Because the authentication of the content and its origin are separated, it is possible to easily identify swapped blocks between images and accurately detect cropped areas, while being able to accurately localize tampered pixels.
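The general recipe of block-wise authentication with origin binding can be sketched as follows (hedged: this uses a truncated HMAC rather than the Wong construction, and the block size, key, and image-ID handling are illustrative). Each block is verified independently, which gives localization; binding the block index and an image ID into the MAC is what defeats collage-style attacks such as Holliman-Memon:

```python
# Sketch: each 8x8 block's LSB plane carries a 64-bit truncated HMAC over
# the block's high 7 bits, the block index, and an image ID, so blocks
# cannot be swapped between positions or between images undetected.
import hmac, hashlib
import numpy as np

KEY, BLOCK = b"secret-key", 8  # illustrative key and block size

def _mac_bits(block, img_id, idx):
    msg = (block >> 1).tobytes() + img_id + idx.to_bytes(4, "big")
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()[: BLOCK * BLOCK // 8]
    return np.unpackbits(np.frombuffer(mac, np.uint8))

def _blocks(shape):
    h, w = shape
    return ((y, x) for y in range(0, h, BLOCK) for x in range(0, w, BLOCK))

def sign(img, img_id=b"img-0001"):
    out = img.copy()
    for idx, (y, x) in enumerate(_blocks(img.shape)):
        blk = out[y:y + BLOCK, x:x + BLOCK]
        bits = _mac_bits(blk, img_id, idx).reshape(BLOCK, BLOCK)
        blk[:] = (blk & 0xFE) | bits      # write MAC into the LSB plane
    return out

def verify(img, img_id=b"img-0001"):
    bad = []
    for idx, (y, x) in enumerate(_blocks(img.shape)):
        blk = img[y:y + BLOCK, x:x + BLOCK]
        bits = _mac_bits(blk, img_id, idx).reshape(BLOCK, BLOCK)
        if not np.array_equal(blk & 1, bits):
            bad.append(idx)               # localized tamper report
    return bad
```

Note how a forger holding many signed images gains nothing: a block only verifies at its own index under its own image ID, which is exactly the separation of content authentication and origin information the abstract describes.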
Fragile and robust watermarking by histogram specification
Dinu Coltuc, Philippe Bolon, Jean-Marc Chassery
This paper presents new results on regional image watermarking by exact histogram specification. The image is split into regions, and for each region a watermark is specified. Watermarks are selected such that the original image histogram is preserved. The main improvement of the proposed regional scheme is the marking of the entire image (all regions) with complementary watermarks, a procedure that considerably increases watermarking robustness. The region selection strategy is discussed so that direct identification of regions and bordering effects are eliminated. The robustness or fragility of the proposed scheme depends on the specified histograms. In a general setting, exact histogram specification allows only certain gray-level values for the pixels of each region. Fragile watermarking is obtained when the pixels of each region are allowed to take only certain discrete values; thus, using sparse histograms, one achieves not only image authentication but also, in case of any attack or malicious editing, the detection of the area where the image has been altered. Conversely, watermarking robust against many attacks is obtained when the pixels of each region are allowed to take values on compact intervals of gray levels.