- Front Matter: Volume 7880
- Keynote Presentation
- Security
- Forensics I
- Watermark I
- Steganography
- Watermark II
- Steganalysis I
- Content Identification I
- Forensics II
- Steganalysis II
- Content Identification II
- Miscellaneous
Front Matter: Volume 7880
This PDF file contains the front matter associated with SPIE Proceedings Volume 7880, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Keynote Presentation
Signal rich art: enabling the vision of ubiquitous computing
Bruce Davis
Advances in networking and mobile computing are converging with digital watermarking technology to realize the
vision of Ubiquitous Computing, wherein mobile devices can sense, understand, and interact with their environments.
Watermarking is the primary technology for embedding signals in the media, objects, and art constituting our everyday
surroundings, and so it is a key component in achieving Signal Rich Art: art that communicates its identity to context-aware
devices. However, significant obstacles to integrating watermarking and art remain, specifically questions of
incorporating watermarking into the process of creating art. This paper identifies numerous possibilities for research in
this arena.
Security
Comparison of three solutions to correct erroneous blocks to extract an image of a multiplicative homomorphic cryptosystem
Multiplicative homomorphic properties of a cryptosystem can be used in various applications requiring security, protection, and authentication, e.g., digital fingerprinting, electronic voting, and online betting. Secret sharing between two or more parties exploiting the multiplicative homomorphic property of RSA results in erroneous blocks when extracting the message. The generation of these erroneous blocks prevents the homomorphic properties of RSA from being used to their full extent. This paper provides three different approaches as solutions to the problem of erroneous blocks in an image: a mean value approach, a shortest distance approach, and an image preprocessing approach. It has been observed that the shortest distance approach yields good PSNR but is computationally expensive. The best approach, with high PSNR, is the image preprocessing approach applied before the sharing process, which produces no erroneous blocks in the extracted image, so no extra extraction techniques are required.
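As a quick illustration of the property the abstract relies on, the following toy sketch (not the authors' scheme; hypothetical small parameters) shows that multiplying two textbook RSA ciphertexts yields a ciphertext of the product of the plaintexts:

```python
# Toy illustration of the multiplicative homomorphic property of textbook RSA:
# E(m1) * E(m2) mod n == E(m1 * m2 mod n). Key sizes here are illustrative only.

def rsa_encrypt(m, e, n):
    return pow(m, e, n)

def rsa_decrypt(c, d, n):
    return pow(c, d, n)

p, q = 61, 53                  # tiny demo primes, never use such sizes in practice
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17
d = pow(e, -1, phi)            # modular inverse (Python 3.8+)

m1, m2 = 65, 42
c1, c2 = rsa_encrypt(m1, e, n), rsa_encrypt(m2, e, n)

# Multiplying ciphertexts corresponds to multiplying plaintexts modulo n.
c_prod = (c1 * c2) % n
assert rsa_decrypt(c_prod, d, n) == (m1 * m2) % n
print("homomorphic product decrypts to", rsa_decrypt(c_prod, d, n))
```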
Homomorphic encryption-based secure SIFT for privacy-preserving feature extraction
Privacy has received much attention but is still largely ignored in the multimedia community. In a cloud computing scenario, where the server is resource-abundant and capable of finishing the designated tasks, it is envisioned that secure media retrieval and search with privacy preservation will be seriously treated. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to address the problem of secure SIFT feature extraction and representation in the encrypted domain. Since all the operations in SIFT must be moved to the encrypted domain, we propose a homomorphic encryption-based secure SIFT method for privacy-preserving feature extraction and representation based on the Paillier cryptosystem. In particular, homomorphic comparison is a must for SIFT feature detection but is still a challenging issue for homomorphic encryption methods. To conquer this problem, we investigate a quantization-like secure comparison strategy in this paper. Experimental results demonstrate that the proposed homomorphic encryption-based SIFT performs comparably to the original SIFT on image benchmarks while additionally preserving privacy. We believe that this work is an important step toward privacy-preserving multimedia retrieval in environments where privacy is a major concern.
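For readers unfamiliar with the Paillier cryptosystem mentioned above, the following minimal sketch (toy parameters, not a secure or paper-accurate implementation) demonstrates its additive homomorphism, the property that makes encrypted-domain processing possible:

```python
# Minimal sketch of Paillier's additive homomorphism: multiplying ciphertexts
# adds the underlying plaintexts modulo n. Toy parameters for illustration only.
import math
import secrets

def lcm(a, b):
    return a * b // math.gcd(a, b)

p, q = 293, 433                # toy primes; real deployments use >= 1024-bit primes
n = p * q
n2 = n * n
g = n + 1                      # standard simplified choice of g
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # mu = L(g^lam mod n^2)^-1 mod n

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

m1, m2 = 1234, 4321
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt((c1 * c2) % n2) == (m1 + m2) % n   # homomorphic addition
```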
Forensics I
Determining approximate age of digital images using sensor defects
The goal of temporal forensics is to establish a temporal relationship between two or more pieces of evidence. In this paper, we focus on digital images and describe a method with which an analyst can estimate the acquisition time of an image, given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated in experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line, or when the header cannot be trusted. Reliable methods for establishing the temporal order between individual pieces of evidence can help reveal deception attempts by an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
Performance comparison of denoising filters for source camera identification
Source identification for digital content is one of the main branches of digital image forensics. It relies on the
extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently
characterizes the digital device which generated the content. Such noise is estimated as the difference between
the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance
comparison of different denoising filters for source identification purposes. In particular, results achieved with
a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously
employed in such a context.
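The PRNU extraction pipeline being compared can be summarized as follows; this sketch uses a plain Gaussian filter as a stand-in for the wavelet-based and 3D denoisers evaluated in the paper, and the function names are illustrative only:

```python
# Sketch of PRNU-based source camera identification, with a Gaussian filter as an
# assumed stand-in for the more sophisticated denoisers compared in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual W = I - F(I), where F is the chosen denoising filter."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def estimate_prnu(images):
    """Camera fingerprint as the average residual over many images."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Usage: decide the source by thresholding the correlation between the query
# residual and each candidate camera fingerprint.
# score = ncc(noise_residual(query_img), fingerprint)
```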
Identifying image forgeries using change points detection
In this paper, we show that methods detecting multiple change points in a discrete distribution of variables
can play an effective role in identifying image tampering. Methods analyzing change points deal with detecting
abrupt changes in the characteristics of signals. Methods dealing with detecting image tampering isolate a subset
of the given image that is significantly different from the rest. Clearly, both groups of methods have similar goals, and thus there might be an interesting synergy between these two research fields. Change points detection
algorithms can help in automatically detecting altered parts of digital images without any previous training or
complicated threshold settings.
Enhancing ROC performance of trustworthy camera source identification
Sensor pattern noise (SPN) extracted from digital images has been shown to be a unique fingerprint of a digital camera. However, the SPN can be heavily contaminated in the frequency domain by image detail from the scene (as noted in Li's work) and by non-unique artifacts of on-sensor signal transfer, sensor design, and color interpolation (as noted in the work of Chen et al.); the source camera identification performance based on SPN therefore needs to be improved, especially for small image blocks. Motivated by their work, and in order to lessen the effect of these contaminations, the unique SPN fingerprint of a specific camera is assumed to be white noise with a flat frequency spectrum; the SPN extracted from an image is therefore first whitened to have a flat frequency spectrum and then input to a mixed correlation detector. Source camera identification is the detection of the camera reference SPN in the SPN extracted from a single image. Compared with the correlation detection approach and Li's model-based approaches on 7 cameras and 1400 photos in total (200 per camera), the experimental results show that the proposed mixed correlation detection enhances the receiver operating characteristic (ROC) performance of source camera identification; in particular, it greatly raises the detection rate (true positive rate) in the trustworthy identification regime, i.e., at a low false positive rate. For example, the proposed mixed correlation detection raises the true positive rate from 78% to 93% at zero false positive rate on image blocks of 256x256 pixels cropped from the center of the 1400 photos. The proposed mixed correlation detection also has a large advantage in resisting JPEG compression with a low quality factor. Fridrich's group has proposed two reference SPN extraction methods: noise residue averaging and maximum likelihood estimation. They are compared in terms of ROC performance in combination with correlation detection and mixed correlation detection, respectively. It is observed that the combination of mixed correlation detection and the noise residue averaging reference SPN extraction method achieves the best performance. We also demonstrate an image management application of the proposed SPN detection method for a news agency. It shows that the detection method discriminates the positive samples from a large number of negative samples very well at an image block size of 512×512 pixels.
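A minimal sketch of the whitening step described above is given below; the exact mixed correlation detector is not specified in the abstract, so plain normalized correlation is used here as an assumed stand-in:

```python
# Sketch of spectrum whitening of an extracted SPN: keep the phase, flatten the
# magnitude, then correlate with the (equally whitened) camera reference SPN.
import numpy as np

def whiten(spn, eps=1e-8):
    spec = np.fft.fft2(spn)
    flat = spec / (np.abs(spec) + eps)      # unit magnitude, original phase
    return np.real(np.fft.ifft2(flat))

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Detection statistic for one image block (stand-in for the mixed detector):
# stat = normalized_correlation(whiten(extracted_spn), whiten(reference_spn))
```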
Watermark I
Feature point-based 3D mesh watermarking that withstands the cropping attack
State-of-the-art robust 3D watermarking schemes already withstand combinations of a wide variety of attacks (e.g., noise addition, simplification, smoothing). Nevertheless, existing 3D watermarking methods have practical limitations due to their extreme sensitivity to cropping. The Spread Transform Dither Modulation (STDM) method is an extension of Quantization Index Modulation (QIM). Besides the simplicity and the trade-off between high capacity and robustness provided by QIM methods, it is also resistant to re-quantization. This paper focuses on two state-of-the-art
techniques which offer different and complementary advantages, respectively QIM-based 3D watermarking and
feature point-based watermarking synchronization. The idea is to combine both in such a way that the new scheme
would benefit from the advantages of both techniques and compensate for their respective fragilities. The resulting
scheme does not make use of the original 3D model in detection but of some parameters as side-information. We show
that robustness against cropping and other common attacks is achieved provided that at least one feature point as well as
its corresponding local neighborhood is retrieved.
A perceptually driven hybrid additive-multiplicative watermarking technique in the wavelet domain
This paper presents a hybrid watermarking technique which mixes additive and multiplicative watermark embedding
with emphasis on its robustness versus the imperceptibility of the watermark. The embedding is performed
in six wavelet sub-bands using independently three embedding equations and two parameters to modulate the
embedding strength for multiplicative and additive embedding. The watermark strength is modulated independently in distinct image areas. Specifically, when multiplicative embedding is used, the visibility threshold is first reached near the image edges, whereas with an additive embedding technique the visibility threshold is first reached in the smooth areas. A subjective experiment has been used to determine the optimal watermark strength for three distinct embedding equations. Observers were asked to tune the watermark amplitude and to set the strength at the visibility threshold. The experimental results showed that using a hybrid watermarking technique significantly improves the robustness performance. This work is a preliminary study for the design of an optimal wavelet-domain Just Noticeable Difference (JND) mask.
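The two embedding rules the abstract contrasts can be sketched as follows; the array C stands for a wavelet sub-band, and the strength parameters, sub-band selection, and third embedding equation of the actual scheme are not reproduced:

```python
# Sketch of additive vs. multiplicative embedding on a sub-band of wavelet
# coefficients C with a bipolar watermark w in {-1, +1}. Values are illustrative.
import numpy as np

def embed_additive(C, w, alpha):
    # Additive rule: visibility is reached first in smooth areas.
    return C + alpha * w

def embed_multiplicative(C, w, beta):
    # Multiplicative rule: strength scales with |C|, so visibility is reached
    # first near edges, where coefficients are large.
    return C * (1.0 + beta * w)

rng = np.random.default_rng(0)
C = rng.normal(0, 10, size=(64, 64))           # stand-in sub-band
w = rng.choice([-1.0, 1.0], size=C.shape)
C_add = embed_additive(C, w, alpha=0.5)
C_mul = embed_multiplicative(C, w, beta=0.05)
```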
Assessment of camera phone distortion and implications for watermarking
The paper presents a watermark robustness model based on the mobile phone camera's spatial frequency response and
watermark embedding parameters such as density and strength. A new watermark robustness metric based on spatial
frequency response is defined. The robustness metric is computed by measuring the area under the spatial frequency
response for the range of frequencies covered by the watermark synchronization signal while excluding the interference
due to aliasing. By measuring the distortion introduced by a particular camera, the impact on watermark detection can be
understood and quantified without having to conduct large-scale experiments. This in turn can provide feedback on
adjusting the watermark embedding parameters and finding the right trade-off between watermark visibility and
robustness to distortion. In addition, new devices can be quickly qualified for their use in smart image applications. The
iPhone 3G, iPhone 3GS, and iPhone 4 camera phones are used as examples in this paper to verify the behavior of the
watermark robustness model.
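A sketch of the kind of robustness metric described above, assuming the spatial frequency response has already been measured (e.g., by a slanted-edge procedure), might look like this; band limits and variable names are illustrative:

```python
# Sketch of the proposed robustness metric: the area under the camera's spatial
# frequency response (SFR) over the band occupied by the watermark synchronization
# signal, excluding frequencies above the aliasing limit.
import numpy as np

def robustness_metric(freqs, sfr, band_lo, band_hi, nyquist):
    hi = min(band_hi, nyquist)                       # exclude aliasing interference
    mask = (freqs >= band_lo) & (freqs <= hi)
    return float(np.trapz(sfr[mask], freqs[mask]))   # area under the SFR curve

# freqs, sfr would come from an SFR measurement of the phone camera under test.
```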
A new metric for measuring the visual quality of video watermarks
In this paper we present an extension to the video watermarking scheme that we introduced in our previous
work as well as a new objective quality metric for video watermarks. As the amount of data that today's
video watermarks can embed into a single video frame is still too small for many practical applications, our watermarking scheme provides a method for splitting the watermark message and spreading it over the complete video. This way we were able to overcome the capacity limitations, but we also encountered a new kind of distortion that affects the visual quality of the video watermark, the so-called "flickering" effect. However, we found that the existing video quality metrics were unable to capture the "flickering" effect. The extension of our watermarking scheme presented in this paper reduces the "flickering" effect, and thus improves the visual quality of the video watermark, by using scene detection techniques. Furthermore, we introduce a new quality metric for measuring the "flickering" effect, which is based on the well-known SSIM metric for still images and which we call the "Double SSIM Difference". Finally, we present the results of our evaluation of the proposed extension of the watermark embedding process, carried out using the "Double SSIM Difference" metric.
Steganography
A curiosity regarding steganographic capacity of pathologically nonstationary sources
Square root laws state that the capacity of an imperfect stegosystem - where the embedding does not preserve
the cover distribution exactly - grows with the square root of cover size. Such laws have been demonstrated
empirically and proved mathematically for a variety of situations, but not for nonstationary covers. Our aim
here is to examine a highly simplified nonstationary source, which can have pathological and unpredictable
behaviour. Intuition suggests that, when the cover source distribution is not perfectly known in advance, it should
be impossible to distinguish covers and stego objects because the detector can never learn enough information
about the varying cover source. However, we show a strange phenomenon whereby it is possible to distinguish stego and cover objects as long as the cover source is stationary for two pixels at a time; the capacity then follows neither a square root law nor a linear law.
Design of adaptive steganographic schemes for digital images
Most steganographic schemes for real digital media embed messages by minimizing a suitably defined distortion
function. In practice, this is often realized by syndrome codes which offer near-optimal rate-distortion performance.
However, the distortion functions are designed heuristically and the resulting steganographic algorithms
are thus suboptimal. In this paper, we present a practical framework for optimizing the parameters of additive
distortion functions to minimize statistical detectability. We apply the framework to digital images in both spatial
and DCT domain by first defining a rich parametric model which assigns a cost of making a change at every
cover element based on its neighborhood. Then, we present a practical method for optimizing the parameters
with respect to a chosen detection metric and feature space. We show that the size of the margin between support
vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend
to be more secure w.r.t. blind steganalysis. The parameters obtained by the Nelder-Mead simplex-reflection
algorithm for spatial and DCT-domain images are presented and the new embedding methods are tested by blind
steganalyzers utilizing various feature sets. Experimental results show that as few as 80 images are sufficient for
obtaining good candidates for parameters of the cost model, which allows us to speed up the parameter search.
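The notion of an additive distortion function used above can be sketched as follows; the neighborhood cost model here is a simple placeholder, not the rich parametric model optimized in the paper:

```python
# Sketch of an additive distortion function: the cost of a stego image y relative
# to cover x is the sum of per-pixel costs rho_i over changed pixels. The local
# variance based cost is only an illustrative placeholder.
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_costs(x, theta=1.0):
    x = x.astype(np.float64)
    local_var = uniform_filter(x**2, 3) - uniform_filter(x, 3)**2
    # Cheap to change textured pixels, expensive to change smooth ones.
    return theta / (1.0 + np.sqrt(np.maximum(local_var, 0)))

def additive_distortion(x, y, rho):
    return float(np.sum(rho[x != y]))

# A syndrome code would then embed the payload while (near-)minimizing
# additive_distortion(x, y, pixel_costs(x, theta)) over admissible stego images y.
```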
Feature restoration and distortion metrics
Our work focuses on Feature Restoration (FR), a technique which may be used in conjunction with steganographic
schemes to reduce the likelihood of detection by a steganalyzer. This is done by selectively modifying the stego
image to reduce a given distortion metric to a chosen target feature vector. The technique is independent of the
exact steganographic algorithm used and can be applied with respect to any set of steganalytic features and any
distortion metric. The general FR problem is NP-complete and hence intractable, but randomized algorithms are
able to achieve good approximations. However, the choice of distortion metric is crucial: our results demonstrate
that, for a poorly chosen metric or target, reducing the distortion frequently leads to an increased likelihood of
detection. This has implications for other distortion-reduction schemes.
Watermark II
Lossless image data embedding in plain areas
This letter presents a lossless data hiding scheme for digital images which uses an edge detector to locate plain areas for embedding. The proposed method takes advantage of the well-known gradient adjacent prediction utilized in image coding. In the suggested scheme, prediction errors and edge values are first computed and then, excluding the edge pixels, prediction error values are slightly modified through shifting of the prediction errors to embed data. The aim of the proposed scheme is to decrease the number of modified pixels and improve transparency by keeping the edge pixel values of the image unchanged. The experimental results demonstrate that the proposed method is capable of hiding more secret data than known techniques at the same PSNR, thus proving that using an edge detector to locate plain areas for lossless data embedding can enhance performance in terms of data embedding rate versus the PSNR of the marked image with respect to the original image.
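A simplified sketch of reversible embedding by shifting prediction errors, in the spirit of the scheme above, is shown below; it uses a trivial left-neighbor predictor rather than the gradient adjacent prediction of the paper, and the edge mask is assumed to be reproducible at the decoder:

```python
# Simplified reversible embedding by prediction-error shifting: edge pixels are
# skipped, zero errors carry one bit, positive errors are shifted up by one.
import numpy as np

def embed_row(row, bits, edge_mask):
    """row: original pixel row; bits: iterable of 0/1; edge_mask: pixels to skip
    (assumed reproducible at the decoder, e.g. derived from unmodified values)."""
    out = row.astype(np.int32).copy()
    it = iter(bits)
    for j in range(1, len(row)):
        if edge_mask[j]:
            continue                          # keep edge pixels to preserve transparency
        pred = int(row[j - 1])                # simplistic predictor (GAP in the paper)
        e = int(row[j]) - pred
        if e == 0:
            b = next(it, None)
            if b is not None:
                out[j] = pred + b             # a zero error carries one payload bit
        elif e > 0:
            out[j] = int(row[j]) + 1          # shift positive errors to stay reversible
    return out
    # The decoder walks left to right, recomputes pred from already-restored pixels,
    # reads a bit where the marked error is 0 or 1, and undoes the +1 shift otherwise.
```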
Re-synchronizing audio watermarking after nonlinear time stretching
Digital audio watermarking today is robust to many common attacks, including lossy compression and digital-to-analogue
conversion. One remaining robustness and security challenge is time-stretching, an operation that speeds up or slows down playback while preserving the tone pitch. Although inaudible to an uninformed listener if smoothly applied, time-stretching can be confusing for a blind watermark detection algorithm. We introduce a non-blind approach
for reconstructing the original timing based on dynamic time warping. Our experiments show that the approach is
successful even if non-linear stretching was applied. Our solution can significantly increase the robustness and security
of every audio watermarking scheme that is dependent on precise timing conditions at detection time.
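Dynamic time warping, the core tool mentioned above, can be sketched as follows; feature extraction from the original and stretched audio is assumed to be given, and this is not the authors' exact implementation:

```python
# Classic dynamic time warping between two 1-D feature sequences (e.g. frame
# energies of the original and the time-stretched audio). The returned path is
# the non-linear time mapping used to re-synchronize the watermark detector.
import numpy as np

def dtw_path(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```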
Steganalysis I
On locating steganographic payload using residuals
Locating steganographic payload using Weighted Stego-image (WS) residuals has been proven successful provided
a large number of stego images are available. In this paper, we revisit this topic with two goals. First, we
argue that it is a promising approach to locate payload by showing that in the ideal scenario where the cover
images are available, the expected number of stego images needed to perfectly locate all load-carrying pixels is
the logarithm of the payload size. Second, we generalize cover estimation to a maximum likelihood decoding
problem and demonstrate that a second-order statistical cover model can be used to compute residuals to locate
payload embedded by both LSB replacement and LSB matching steganography.
Steganalysis using logistic regression
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly
used in steganalysis. LR offers more information than traditional SVM methods - it estimates class
probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for
multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification
accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and
LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the
state-of-the-art 686-dimensional SPAM feature set, in three image sets.
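A minimal sketch of such a comparison using scikit-learn is given below; the SPAM feature matrices are assumed to be precomputed, and kernelisation and multiclass extensions are omitted:

```python
# Sketch of the LR-vs-SVM comparison on SPAM-like features. X_cover and X_stego
# stand for precomputed 686-dimensional feature matrices (extraction not shown).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def compare(X_cover, X_stego):
    X = np.vstack([X_cover, X_stego])
    y = np.r_[np.zeros(len(X_cover)), np.ones(len(X_stego))]
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    svm = SVC(kernel="rbf").fit(Xtr, ytr)
    # LR additionally yields class probabilities, not just hard decisions.
    p_stego = lr.predict_proba(Xte)[:, 1]
    return lr.score(Xte, yte), svm.score(Xte, yte), p_stego
```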
Steganalysis in high dimensions: fusing classifiers built on random subspaces
By working with high-dimensional representations of covers, modern steganographic methods are capable of
preserving a large number of complex dependencies among individual cover elements and thus avoid detection
using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as
well. This brings two key problems - construction of good high-dimensional features and machine learning that
scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems
with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack
of robustness to cover source, and saturation of performance below its potential. To address these problems, collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much
more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first
puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual
cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The
final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is
its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on
the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the
usefulness of this approach over the current state of the art.
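The random-subspace ensemble idea can be sketched as follows, with Fisher linear discriminants as weak learners and majority-vote fusion; dimensions and learner counts are illustrative, not the paper's settings:

```python
# Sketch of an ensemble steganalyzer built on random subspaces of a
# high-dimensional prefeature vector, fused by majority vote.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class SubspaceEnsemble:
    def __init__(self, n_learners=51, d_sub=200, seed=0):
        self.n_learners, self.d_sub = n_learners, d_sub
        self.rng = np.random.default_rng(seed)
        self.models, self.subspaces = [], []

    def fit(self, X, y):
        d = X.shape[1]
        for _ in range(self.n_learners):
            idx = self.rng.choice(d, size=min(self.d_sub, d), replace=False)
            self.subspaces.append(idx)
            self.models.append(LinearDiscriminantAnalysis().fit(X[:, idx], y))
        return self

    def predict(self, X):
        votes = np.array([m.predict(X[:, idx])
                          for m, idx in zip(self.models, self.subspaces)])
        return (votes.mean(axis=0) > 0.5).astype(int)   # majority vote
```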
Content Identification I
Private content identification based on soft fingerprinting
In many problems, such as biometrics, multimedia search and retrieval, and recommendation systems requiring privacy-preserving similarity computations and identification, some binary features are stored in the public domain or outsourced to third parties, which might raise certain privacy concerns about the original data. To avoid this
privacy leak, privacy protection is used. In most cases, privacy protection is uniformly applied to all binary
features resulting in data degradation and corresponding loss of performance. To avoid this undesirable effect
we propose a new privacy amplification technique that is based on data hiding principles and benefits from side
information about bit reliability a.k.a. soft fingerprinting. In this paper, we investigate the identification-rate vs
privacy-leak trade-off. The analysis is performed for the case of a perfect match between side information shared
between the encoder and decoder as well as for the case of partial side information.
Geometrically robust perceptual fingerprinting: an asymmetric case
In this paper, the problem of multimedia object identification in channels with asymmetric desynchronizations is studied. First, we analyze the rates achievable in such protocols within a digital communication framework. Secondly, we investigate the impact of the fingerprint length on the error performance of these protocols, relaxing the capacity-achieving argument and formulating the identification problem as multi-class classification.
Trading-off performance and complexity in identification problem
In this paper, we consider an information-theoretic formulation of content identification under a search complexity constraint. The proposed framework is based on soft fingerprinting, i.e., joint consideration of the sign and magnitude of fingerprint coefficients. The fingerprint magnitude is analyzed in the scope of communications with side information, which results in a channel decomposition where all fingerprint bits are classified to be communicated via several channels with distinctive characteristics. We demonstrate that under certain conditions the channels with low identification capacity can be neglected without considerable rate loss. This is a basis for the analysis of fast identification techniques that trade off theoretical performance, in terms of achievable rate, against search complexity.
Forensics II
A context model for microphone forensics and its application in evaluations
In this paper we first design a suitable context model for microphone recordings, formalising and describing the involved signal processing pipeline and the corresponding influence factors. As a second contribution, we apply the context model to devise empirical investigations into: a) the identification of suitable classification algorithms for statistical pattern recognition based microphone forensics, evaluating 74 supervised classification techniques and 8 clusterers; b) the determination of suitable features for the pattern recognition (with very good results for second order derivative MFCC based features), showing that a reduction to the 20 best features has no negative influence on the classification accuracy but increases the processing speed by a factor of 30; c) the determination of the influence of changes in microphone orientation and mounting on the classification performance, showing that the former has no detectable influence, while the latter has a strong impact under certain circumstances; d) the performance achieved in using the statistical pattern recognition based microphone forensics approach for the detection of audio signal compositions.
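A sketch of the kind of feature pipeline referred to in point b) is shown below, using librosa as an assumed stand-in audio front end and a generic classifier; it is not the evaluation code of the paper:

```python
# Sketch of a microphone-forensics feature pipeline: MFCCs and their second-order
# derivatives per recording, summarized per file and fed to a supervised classifier.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mic_features(path, n_mfcc=20):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)        # second-order derivative MFCCs
    # Summarize frames by mean and variance so each file yields one feature vector.
    return np.concatenate([d2.mean(axis=1), d2.var(axis=1)])

# clf = RandomForestClassifier(n_estimators=200).fit(feature_matrix, mic_labels)
```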
Double H.264/AVC compression detection using quantized nonzero AC coefficients
Developments in video processing technology make it much easier to tamper with video. In some situations, such as in a lawsuit, it is necessary to prove that videos have not been tampered with. This contradiction poses challenges in ascertaining the integrity of digital videos. Most tampering occurs in the pixel domain. However, nowadays videos are usually stored in a compressed format, such as H.264/AVC, so an attacker has to decompress the original video bitstream and recompress it after tampering. As a result, by detecting double compression, we can authenticate the integrity of a digital video. In this paper, we propose an efficient method to detect whether or not a digital video has been double compressed. Specifically, we use the probability distribution of quantized nonzero AC coefficients as features to distinguish double compressed videos from singly compressed ones. If a smaller QP is used in the second compression, the original distribution law will be violated, which can be used as evidence of tampering.
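The feature described above can be sketched as follows, assuming the quantized AC coefficients have already been exported from an H.264/AVC decoder (the decoder hook itself is not shown):

```python
# Sketch of the feature: the empirical distribution of the absolute values of
# quantized nonzero AC coefficients collected from the transform blocks of a video.
import numpy as np

def nonzero_ac_histogram(ac_coeffs, max_val=20):
    vals = np.abs(np.asarray(ac_coeffs, dtype=np.int64))
    vals = vals[vals > 0]                       # keep nonzero coefficients only
    hist = np.bincount(np.minimum(vals, max_val), minlength=max_val + 1)[1:]
    return hist / max(hist.sum(), 1)            # normalized probability distribution

# Histograms from singly and doubly compressed training videos can then be fed to
# any standard classifier to decide whether a query video was recompressed.
```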
Forensic printer detection using intrinsic signatures
Several methods exist for printer identification from a printed document. We have developed a system that
performs printer identification using intrinsic signatures of the printers. Because an intrinsic signature is tied
directly to the electromechanical properties of the printer, it is difficult to forge or remove. In previous work we
have shown that intrinsic signatures are capable of solving the problem of printer classification on a restricted set
of printers. In this paper we extend our previous work to address the problem of forensic printer identification,
in which a document may or may not belong to a known set of printers. We propose to use a Euclidean distance
based metric in a reduced feature space. The reduced feature space is obtained by using sequential feature
selection and linear discriminant analysis.
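A sketch of open-set identification with a Euclidean distance in an LDA-reduced feature space follows; the intrinsic-signature features and the sequential feature selection step are assumed to be given:

```python
# Sketch of open-set printer identification: project intrinsic-signature features
# with LDA, attribute a document to the nearest printer centroid only if the
# Euclidean distance falls below a threshold, otherwise declare it unknown.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_printer_model(X, labels):
    labels = np.asarray(labels)
    lda = LinearDiscriminantAnalysis().fit(X, labels)
    Z = lda.transform(X)
    centroids = {c: Z[labels == c].mean(axis=0) for c in np.unique(labels)}
    return lda, centroids

def identify(x, lda, centroids, threshold):
    z = lda.transform(x.reshape(1, -1))[0]
    dists = {c: np.linalg.norm(z - mu) for c, mu in centroids.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= threshold else "unknown printer"
```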
Steganalysis II
Non-destructive forensic latent fingerprint acquisition with chromatic white light sensors
Non-destructive latent fingerprint acquisition is an emerging field of research, which, unlike traditional methods,
makes latent fingerprints available for additional verification or further analysis like tests for substance abuse or
age estimation. In this paper a series of tests is performed to investigate the overall suitability of a high resolution
off-the-shelf chromatic white light sensor for the contact-less and non-destructive latent fingerprint acquisition.
Our paper focuses on scanning previously determined regions with exemplary acquisition parameter settings.
3D height field and reflection data of five different latent fingerprints on six different types of surfaces (HDD
platter, brushed metal, painted car body (metallic and non-metallic finish), blued metal, veneered plywood) are
experimentally studied. Pre-processing is performed by removing low-frequency gradients. The quality of the
results is assessed subjectively; no automated feature extraction is performed. Additionally, the degradation
of the fingerprint during the acquisition period is observed. While the quality of the acquired data is highly
dependent on surface structure, the sensor is capable of detecting the fingerprint on all sample surfaces. On blued
metal the residual material is detected; however, the ridge line structure dissolves within minutes after fingerprint
placement.
Detecting messages of unknown length
This work focuses on the problem of developing a blind steganalyzer (a steganalyzer relying on a machine learning algorithm and steganalytic features) for detecting stego images with different payloads. This problem is highly relevant for practical forensic analysis since, in practice, knowledge about the steganographic channel is very limited and the length of the hidden message is generally unknown. This paper demonstrates that a discrepancy between the payload in training and testing/application images can significantly decrease the accuracy of steganalysis. Two fundamentally different approaches to mitigate this problem are then proposed. The first solution relies on a quantitative steganalyzer. The second solution transforms the one-sided hypothesis test (unknown message length) into a simple hypothesis test by assuming a probability distribution on the message length, which can then be handled efficiently by many machine-learning tools, e.g., Support Vector Machines. The experimental section of the paper (a) compares both solutions on steganalysis of the F5 algorithm with shrinkage removed by wet paper codes for JPEG images and of LSB matching for raw (uncompressed) images, (b) investigates the effect of the assumed distribution of the message length on the accuracy of the steganalyzer, and (c) shows how the accuracy of steganalysis depends on Eve's knowledge about the details of the steganographic channel.
A new paradigm for steganalysis via clustering
We propose a new paradigm for blind, universal, steganalysis in the case when multiple actors transmit multiple
objects, with guilty actors including some stego objects in their transmissions. The method is based on clustering
rather than classification, and it is the actors which are clustered rather than their individual transmitted objects.
This removes the need for training a classifier, and the danger of training model mismatch. It effectively judges
the behaviour of actors by assuming that most of them are innocent: after performing agglomerative hierarchical
clustering, the guilty actor(s) are clustered separately from the innocent majority. A case study shows that this
works in the case of JPEG images. Although it is less sensitive than steganalysis based on specifically-trained
classifiers, it requires no training, no knowledge of the embedding algorithm, and attacks the pooled steganalysis
problem.
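The clustering idea can be sketched as follows, assuming each actor has been summarized by one aggregated feature vector over their transmitted objects; linkage choice and cluster count are illustrative:

```python
# Sketch of actor-level steganalysis by agglomerative hierarchical clustering:
# assuming most actors are innocent, the small cluster that separates from the
# majority is flagged as suspicious.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def flag_guilty_actors(actor_features, n_clusters=2):
    """actor_features: (n_actors, d) array, one aggregated feature vector per actor."""
    Z = linkage(actor_features, method="average")          # agglomerative clustering
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    sizes = np.bincount(labels)[1:]                         # cluster sizes, labels 1..k
    guilty_label = int(np.argmin(sizes)) + 1                # minority cluster
    return np.where(labels == guilty_label)[0]
```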
Content Identification II
Collusion-secure patchwork embedding for transaction watermarking
Digital transaction watermarking today is a widely accepted mechanism to trace back copyright infringements.
Here, copies of a work are individualized by embedding user-specific watermark messages. One major threat to transaction watermarking is collusion attacks, in which multiple individualized copies of the work are compared and/or combined to attack the integrity or availability of the embedded watermark message. In this work, we show how Patchwork embedding can be adapted to provide maximum resistance against collusion attacks, at the cost of reduced payload but with improved robustness.
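For reference, classic Patchwork embedding and detection, on which the adaptation above builds, can be sketched as follows (the collusion-secure adaptation itself is not reproduced):

```python
# Classic Patchwork watermarking: brighten one keyed pixel set, darken a disjoint
# set, and detect by testing the difference of the two set means.
import numpy as np

def patchwork_embed(img, key, delta=2, n=5000):
    rng = np.random.default_rng(key)
    flat = img.astype(np.int32).ravel().copy()       # image must have >= 2*n pixels
    idx = rng.choice(flat.size, size=2 * n, replace=False)
    set_a, set_b = idx[:n], idx[n:]
    flat[set_a] = np.clip(flat[set_a] + delta, 0, 255)
    flat[set_b] = np.clip(flat[set_b] - delta, 0, 255)
    return flat.reshape(img.shape).astype(np.uint8)

def patchwork_detect(img, key, n=5000):
    rng = np.random.default_rng(key)
    flat = img.astype(np.float64).ravel()
    idx = rng.choice(flat.size, size=2 * n, replace=False)
    # Expected around 2*delta if the mark keyed by `key` is present, near 0 otherwise.
    return flat[idx[:n]].mean() - flat[idx[n:]].mean()
```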
Probabilistic fingerprinting codes used to detect traitor zero-bit watermark
This paper presents a traitor tracing method dedicated to video content distribution. It is based on a probabilistic traitor tracing code and an orthogonal zero-bit informed watermark. We use the well-known Tardos fingerprinting tracing function to reduce the search space of suspicious users. Their guilt is then confirmed by detecting the presence of a personal watermark embedded with a personal key. To avoid storing watermarking keys at the distributor side, we use part of the user's probabilistic fingerprinting sequence as the personal embedding key. This method ensures a lower global false alarm probability compared to the original probabilistic codes, since we combine the false alarm probability of the code with that of the watermarking scheme. However, the efficiency of this algorithm strongly depends on the number of colluders at the watermarking side. To increase the robustness, we present an additive correlation method based on successive watermarked images and then study its limitations under different collusion sizes.
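The probabilistic fingerprinting ingredient can be sketched as follows, using a symmetric Tardos-style score; the parameters and the coupling with the zero-bit watermark are illustrative assumptions, not the paper's exact construction:

```python
# Sketch of Tardos-style probabilistic fingerprinting: code generation and a
# symmetric accusation score used to shortlist suspicious users.
import numpy as np

def tardos_generate(n_users, m, c0, seed=0):
    rng = np.random.default_rng(seed)
    t = 1.0 / (300.0 * c0)                               # cutoff parameter
    r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=m)
    p = np.sin(r) ** 2                                   # per-position bias
    X = (rng.random((n_users, m)) < p).astype(np.int8)   # user codewords
    return X, p

def tardos_scores(y, X, p):
    """y: pirated sequence (0/1 per position); returns one accusation score per user."""
    g1 = np.sqrt((1 - p) / p)
    g0 = -np.sqrt(p / (1 - p))
    U = np.where(X == 1, g1, g0)                         # symmetric score table
    return U @ np.where(np.asarray(y) == 1, 1.0, -1.0)   # higher = more suspicious
```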
Rihamark: perceptual image hash benchmarking
We identify which hash function has the best characteristics for various applications. In some of those the
computation speed may be the most important, in others the ability to distinguish similar images, and sometimes
the robustness of the hash against attacks is the primary goal. We compare the hash functions and provide test
results. The block mean value based image hash function outperforms the other hash functions in terms of
speed. The discrete cosine transform (DCT) based image hash function is the slowest. Although the Marr-
Hildreth operator based image hash function is neither the fastest nor the most robust, it offers by far the
best discriminative abilities. Interestingly enough, the performance in terms of discriminative ability does not
depend on the content of the images. That is, no matter whether the visual appearance of the images compared
was very similar or not, the performance of the particular hash function did not change significantly. Different
image operations, like horizontal flipping, rotating or resizing, were used to test the robustness of the image hash
functions. An interesting result is that none of the tested image hash functions is robust against flipping an image
horizontally.
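For concreteness, the block mean value hash that performed best in terms of speed can be sketched as follows; the grid size and the use of Pillow for resizing are illustrative choices:

```python
# Block mean value image hash: resize, split into blocks, threshold each block mean
# against the median of all block means, compare hashes via normalized Hamming distance.
import numpy as np
from PIL import Image

def block_mean_hash(path, grid=16):
    img = Image.open(path).convert("L").resize((grid * 8, grid * 8))
    a = np.asarray(img, dtype=np.float64)
    blocks = a.reshape(grid, 8, grid, 8).mean(axis=(1, 3))   # per-block mean values
    return (blocks > np.median(blocks)).astype(np.uint8).ravel()

def hash_distance(h1, h2):
    return float(np.mean(h1 != h2))          # 0 = identical, ~0.5 = unrelated images
```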
Miscellaneous
Contrast-enhancing and deterministic tone mapping method in natural image hiding scheme using halftone images
This paper presents a novel tone mapping method for a natural image hiding scheme using halftone images, where a
natural image can be visually decoded by overlaying two natural halftone images. In this scheme, there is a tradeoff
between noise and contrast, and a generally applicable setting of tone mapping before halftoning is required. However, in
a conventional method, the coefficients for the tone mapping cannot be found automatically. It is difficult even for
experts to manipulate the tradeoff with the coefficients. To solve this problem, we propose a deterministic tone mapping
method that can intuitively control the tradeoff. To realize this, we introduce a noise metric which can be understood
intuitively instead of affine parameters used in the mapping system. To maximize the dynamic range at a given noise
level and to make tone mapping deterministic, we clarify the condition and introduce the general functions to obtain
coefficients from the geometric constraint of the tone mapping region. The proposed method enables any user to generate
images with the highest possible contrast at a given noise level, deterministically, from any natural images such as one's own photos, just by setting the noise level. Experimental results show the validity of the proposed noise metric and also reveal a generally applicable tradeoff point across various images that can be used as a guideline for setting the noise level. By using this tradeoff point, the average dynamic range is expanded by a factor of 1.4 compared to the noiseless case.
Toward the identification of DSLR lenses by chromatic aberration
While previous work on lens identification by chromatic aberration succeeded in distinguishing lenses of different models, the CA patterns obtained were not stable enough to support distinguishing different copies of the same lens. This paper discusses how to eliminate two major hurdles in the way of obtaining a stable lens CA pattern. The first hurdle was overcome by using a white noise pattern as the shooting target to supplant the conventional but misalignment-prone checkerboard pattern. The second hurdle was removed by the introduction of the lens focal distance, which had not received the attention it deserves. Consequently, we were able to obtain a CA pattern stable enough to distinguish different copies of the same lens. Finally, with a complete view of the lens CA pattern feature space, it is possible to perform lens identification against a large lens database.