Optimization techniques transform fragile watermarking
Digital watermarking refers to a set of techniques for embedding a signal (the watermark) inside a host object for the purpose of copyright protection, content authentication, or origin tracking.1 Watermarks may be robust or fragile. A robust watermark can still be detected even after the host object has been modified, whereas a fragile watermark is designed to reveal modifications to the host object. If the host object can be recovered from the watermarked object after embedding, the process is called reversible; otherwise it is irreversible.
The aim of watermarking is to embed a signal while making minimal alterations to the original digital object. This task is particularly demanding for irreversible watermarking, whereas for reversible processes (the host object being recoverable) the quality of the watermarked object matters less. For robust watermarking, it is preferable to embed a strong signal to resist removal attempts, which produces more distortion in the host. For fragile watermarking, we embed a signal sensitive to even minimal modifications of the watermarked object. In general, the more information a signal carries, the more sensitive it is, and embedding all of that information may require many pixel modifications. It is therefore preferable to use a method that produces the least distortion.
For both cases (fragile and robust), we may regard these contrasting requirements as an optimization problem, where an objective function measures how well the modified pixels satisfy the robustness or fragility requirements. Watermark embedding may take place in the spatial domain (i.e., in the pixels themselves) or in a set of coefficients in a transformed domain, such as the discrete Fourier transform (DFT), the Karhunen-Loève transform (KLT), or the discrete cosine transform (DCT). Most approaches compute the transformed domain, embed the watermark in the chosen coefficients, and then perform the inverse transformation to obtain the altered pixels. In general, the inverse transformation into the pixel domain produces real-valued results, which must then be rounded to integers or quantized. This rounding is problematic: it can remove all or part of the watermark, and it lowers image quality.
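To make the rounding problem concrete, here is a toy numpy sketch (not the authors' scheme; the block contents, the coefficient position, and the embedded value are arbitrary choices made for illustration): a value written into a DCT coefficient does not survive the round trip through integer pixels.

```python
import numpy as np

N = 8
# orthonormal DCT-II matrix: C = D @ X @ D.T is the 2D DCT of block X,
# and X = D.T @ C @ D is its exact inverse
n = np.arange(N)
D = np.sqrt(2 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
D[0, :] = np.sqrt(1 / N)

rng = np.random.default_rng(42)
block = rng.integers(0, 256, (N, N)).astype(float)

C = D @ block @ D.T          # forward DCT
C[3, 4] = 10.6               # embed the watermark in one coefficient (toy choice)
pixels = D.T @ C @ D         # inverse DCT: real-valued pixels
rounded = np.clip(np.round(pixels), 0, 255)  # storing as an image forces rounding

C2 = D @ rounded @ D.T       # re-read the coefficient from the stored image
drift = abs(C2[3, 4] - 10.6) # the embedded value no longer survives exactly
```

The drift shown here is exactly the error that can erase part of a fragile watermark after quantization.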
We have addressed this problem by designing an algorithm2 that embeds a watermark into KLT-domain coefficients without the need for an inverse transformation, thus avoiding the rounding problem and producing high-quality watermarked images.
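Because the KLT is a linear transform, a coefficient can be computed (and therefore controlled) directly from the pixels, with no inverse transform in the loop. The following is a minimal numpy illustration of that idea, not the published algorithm; the block size and the use of a sample covariance to estimate the basis are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# training set of flattened 4x4 blocks used to estimate the KLT basis
# (assumption: in practice the basis is derived from the images at hand)
blocks = rng.integers(0, 256, (100, 16)).astype(float)
mean = blocks.mean(axis=0)

# KLT basis = eigenvectors of the sample covariance, strongest component first
cov = np.cov((blocks - mean).T)
_, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
basis = eigvecs[:, ::-1]

# KLT coefficients of one block, obtained directly from its pixels
coeffs = basis.T @ (blocks[0] - mean)

# since the pixels -> coefficients map is linear, a search over small integer
# pixel changes can set a chosen coefficient to carry a watermark bit,
# so no inverse transform (and hence no rounding step) is ever needed
```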
We define the watermark embedding process as an optimization problem, where the objective function combines different requirements, such as the quality of the resulting image, the computational complexity, the correct insertion of the watermark, and its robustness or fragility.
We divide the image to be watermarked into blocks and apply the main algorithm cycle to each of these (see Figure 1). We realize the pixel modification by a genetic algorithm (GA),3 a computational paradigm that seeks out the best (local) optima according to a ‘fitness’ function. GAs encode possible solutions (in this case, alterations to the image pixels) into ‘chromosomes’ and adapt them in a way similar to how living organisms reproduce, measuring the fitness of the solutions found and reproducing the best ones. The result is that only the fittest individuals survive, converging toward a (locally) optimal solution. The fitness function we used takes into account the distortion introduced by embedding the watermark, the robustness to known attacks (in the case of robust watermarking), and the requirement that the transformed-domain coefficients carry the watermark after the image pixels have been altered.
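The GA cycle on one block can be sketched as follows. This is a simplified stand-in, not the published method: here a single watermark bit is encoded as the parity of the block's pixel sum rather than of a KLT coefficient, and the population sizes and penalty weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
bit = 1  # the watermark bit, encoded as the parity of a stand-in coefficient

def coeff(img):
    # stand-in for a transform coefficient (assumption: the real scheme
    # reads KLT coefficients instead of a simple pixel sum)
    return int(img.sum())

def fitness(candidate):
    # low distortion is rewarded; a wrong watermark bit is heavily penalized
    distortion = np.mean((candidate - block) ** 2)
    penalty = 0.0 if coeff(candidate) % 2 == bit else 100.0
    return -(distortion + penalty)

# chromosomes: candidate watermarked blocks (original plus +/-1 perturbations)
pop = [block + rng.integers(-1, 2, block.shape) for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]            # elitist selection
    pop = parents.copy()
    for _ in range(20):
        a, b = rng.choice(10, size=2, replace=False)
        mask = rng.integers(0, 2, block.shape).astype(bool)
        child = np.where(mask, parents[a], parents[b])  # uniform crossover
        i, j = rng.integers(0, 8, size=2)
        child[i, j] += rng.choice([-1.0, 1.0])          # point mutation
        pop.append(child)

best = max(pop, key=fitness)  # a parity-correct, low-distortion block
```

The penalty term makes any candidate that carries the bit outrank any that does not, so selection first satisfies the watermark constraint and then minimizes distortion.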
GAs are a valuable tool for image watermarking because they produce high-quality watermarked images even when we use approximate parameter tuning. Thus, researchers have used GAs for both robust and fragile image watermarking.
We applied GAs to fragile watermarking of grayscale and color images,4 producing high-quality results (more than 60 dB). All our results are averages over a set of more than 500 publicly available images. In the future, we plan to use this experimental methodology to assess the quality of the algorithm: many efforts limit their analysis to one or a few classical images, which are too few to provide a significant sample of the cases our algorithm will face during its life cycle.
Marco Botta holds a PhD in computer science and is associate professor in the Department of Computer Science.
Davide Cavagnino holds a PhD in computer science and is an assistant professor in the Department of Computer Science.
Singapore University of Technology and Design
Victor Pomponiu received a PhD in computer science from the University of Turin. He is currently a postdoctoral researcher.