Proceedings Volume 4551

Image Compression and Encryption Technologies

Jun Tian, Tieniu Tan, Liangpei Zhang
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 26 September 2001
Contents: 5 Sessions, 47 Papers, 0 Presentations
Conference: Multispectral Image Processing and Pattern Recognition 2001
Volume Number: 4551

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Image Compression
  • Image Compression and Watermark
  • Additional Paper
  • Image Retrieval
  • Poster Session
Image Compression
Coding and transmission of subband coded images on the Internet
Benjamin W. Wah, Xiao Su
Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
Image Compression and Watermark
Data compression of stereoscopic image pairs
Changman Xu, Zhaoyang Zhang
An emerging feature of multimedia and telepresence systems is stereo imagery. Stereo images provide an enhanced sense of presence and have been found operationally useful in tasks requiring remote manipulation or judgment of spatial relationships. A conventional stereo system with a single left-right pair needs twice the raw data of a monoscopic imaging system, so increasing attention has been given to image compression methods specialized to stereo pairs. In this paper, mixed-resolution coding, a psychophysically justified technique that exploits known facts about human stereo vision to code stereo pairs in a subjectively acceptable manner, is applied to stereo image compression. By combining mixed-resolution coding with SPT (subspace projection technique)-based disparity compensation, the left image is compressed by a wavelet-transform-based scheme independently of the right image. SPT-based disparity compensation is then performed at low resolution, so that the low-resolution right image can be predicted from the left image using the disparity relation. The low-resolution images are obtained by wavelet decomposition. At the decoder, the low-resolution right subimage is estimated from the low-resolution left subimage using the disparity; a full-size image is then obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Experimental results show that the proposed scheme achieves a PSNR gain of about 0.98 dB over a block-based disparity compensation coding scheme encoded at the same bit rate.
Image Compression
Fast fractal coding of multispectral remote sensing images
Lin Ni
Fractal image coding represents static image data with the parameters of dynamic iterative processes and can break through the theoretical limit of entropy coding; it has attracted wide interest from many researchers. In this paper, we apply fractal coding to multispectral remote sensing image compression and make some improvements to the quadtree-partition-based fractal coding method according to the properties of multispectral remote sensing images. In the improved method, the same partition scheme is assigned to the images in the different bands. In addition, the search space of the affine transform is reduced by exploiting the spectral correlation, which further improves both the compression ratio and the coding speed. Experimental results show that the proposed method clearly improves the performance of the quadtree-partition-based fractal coding algorithm and yields satisfactory results.
Image Compression and Watermark
New multicarrier code division multiple access based on multiwavelet packet transform
Junli Chen, Xinxing Yang, Licheng Jiao
A new multicarrier code division multiple access scheme based on the multiwavelet packet transform is proposed, which can suppress multiple-access interference effectively and remarkably improve system performance in environments with asynchronous transmission and multipath delay.
Additional Paper
Simple, fast codebook training algorithm by entropy sequence for vector quantization
Chao-yang Pang, Shaowen Yao, Zhang Qi, et al.
Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. We present a novel training algorithm for vector quantization in which the convergence of the entropy sequence of the region sequence is employed as the stopping condition. Compared with the well-known LBG algorithm, it is simple, fast and easy to comprehend and control. We test the performance of the algorithm on the typical test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, while its running time is at most one second of LBG's.
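As a rough illustration of the stopping rule described above, the sketch below runs an LBG-style training loop but terminates when the entropy of the region-occupancy distribution stops changing. The entropy definition, codebook size, tolerance and empty-region handling are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def train_codebook_entropy_stop(vectors, codebook_size=64, eps=1e-4, max_iter=100, seed=0):
    """LBG-style codebook training whose stopping test is the convergence of the
    entropy of the region-occupancy distribution, not of the distortion sequence."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    prev_entropy = None
    for _ in range(max_iter):
        # nearest-codeword assignment under the squared Euclidean distortion
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # entropy of the region sequence: H = -sum p_i * log2(p_i)
        counts = np.bincount(labels, minlength=codebook_size)
        p = counts[counts > 0] / len(vectors)
        entropy = float(-(p * np.log2(p)).sum())
        if prev_entropy is not None and abs(entropy - prev_entropy) < eps:
            break
        prev_entropy = entropy
        # centroid update; empty regions keep their previous codeword
        for k in range(codebook_size):
            if counts[k] > 0:
                codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook, labels
```

For image coding, `vectors` would typically hold the image's 4x4 blocks flattened into length-16 rows.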
Image Compression and Watermark
Applications of generalized computing in image compression coding
Min Yao, Jianhua Luo, Ming Wu
Data compression is one of the key techniques in image processing. This paper first discusses the current state of image compression coding and then introduces the concept of generalized computing. Finally, by introducing knowledge into data compression, it presents an intelligent image compression model (IICM) based on generalized computing.
Image Compression
Fractal image coding based on same-sized-block mapping
Yao Zhao, Baozong Yuan
In traditional fractal image coding schemes, domain blocks are constrained to be twice as large as range blocks in order to ensure the convergence of the iterative decoding stage. However, this constraint has limited the fractal encoder to exploit the self-similarity of the original image. In order to overcome the shortcoming, a novel scheme using same sized range and domain blocks is proposed in the letter. Experimental results show the remarkable improvement in compression ratio and image quality.
Efficient embedded subband coding algorithm for DCT image compression
Jun Chen, Chengke Wu
A low-memory embedded subband image coding algorithm based on the discrete cosine transform (DCT) with low complexity and high performance is presented in this paper. The zerotree-based coder SPIHT and the zeroblock-based coder SPECK have found successful application in wavelet image compression. Xiong et al.'s EZDCT scheme also employs the zerotree structure and implements a DCT-based embedded zerotree quantizer whose performance outperforms baseline JPEG and improved JPEG. We point out that the zerotree structure of the DCT is not efficient in an embedded DCT coder. Without relying on the zerotree structure, we propose a more efficient embedded subband codec based on zeroblock partitioning of the DCT image (ESDCT) with lower memory requirements and higher compression performance. A comparison reveals that the PSNR results of our ESDCT are 0.5-1.5 dB higher than those of EZDCT, and the compression performance is slightly superior to SPIHT in some cases.
Radar signal compression using wavelet transforms
Yan Zhou, Fei-peng Li, Yu Xu, et al.
Generally, the echo of a radar can be regarded as a combination of useful signal and noise. In this paper, an efficient wavelet-based non-linear signal compression scheme is presented, which takes full advantage of the two-dimensional correlation contained in the radar signal. The whole process begins with a 4-scale QMF decomposition, followed by thresholding, quantization and a zerotree scan, and finishes with arithmetic coding. Since radar signals are usually corrupted by a great amount of noise, a dedicated de-noising step is added to the compression. For typical radar signals, test results show that when compressed at a ratio of 200, the signal is still fairly good. Furthermore, a new criterion suitable for quality evaluation of radar signals, morphological fidelity, is proposed.
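The front end of such a scheme might look like the hedged sketch below: a multi-level 2-D wavelet decomposition, soft-thresholding of the detail subbands (serving as the de-noising step) and uniform quantization. The zerotree scan and arithmetic coder are omitted, and the `db4` filter, four levels, universal threshold and quantization step are stand-ins rather than the paper's QMF design.

```python
import numpy as np
import pywt  # PyWavelets

def threshold_and_quantize(signal2d, wavelet="db4", levels=4, thr=None, qstep=0.5):
    """Decompose, soft-threshold the detail subbands (noise suppression),
    and apply uniform scalar quantization to all coefficients."""
    coeffs = pywt.wavedec2(signal2d, wavelet, level=levels)
    cA, details = coeffs[0], coeffs[1:]
    if thr is None:
        # universal threshold estimated from the finest diagonal subband
        sigma = np.median(np.abs(details[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(signal2d.size))
    new_details = [tuple(pywt.threshold(d, thr, mode="soft") for d in band) for band in details]
    q = [np.round(cA / qstep)] + [tuple(np.round(d / qstep) for d in band) for band in new_details]
    return q, thr

def dequantize_and_reconstruct(q, wavelet="db4", qstep=0.5):
    """Invert the quantization and the wavelet decomposition."""
    coeffs = [q[0] * qstep] + [tuple(d * qstep for d in band) for band in q[1:]]
    return pywt.waverec2(coeffs, wavelet)
```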
Effect of channel coding on data embedding in images
Nidhal Abdulaziz, Khok Khee Pang
In this paper we present the effect of channel coding on the performance of data hiding in images. The bit error rate (BER) associated with the watermark decoding is obtained when convolutional and concatenated codes are employed. The approach is adapted to make retrieval possible for applications where the original host is not available to the receiver. Experimental results are shown for both algorithms.
Image Compression and Watermark
Simple look-up-table algorithms to lower the bit rate of AMBTC for image coding
Chun-He Liu, Zhe-Ming Lu, Sheng-He Sun
Block truncation coding (BTC) is an efficient lossy image compression technique that has the advantage of being easy to implement compared to other block-based compression techniques such as transform coding and vector quantization. The principle of the original BTC method is to preserve the block mean and the block standard deviation. Lema and Mitchell presented absolute moment BTC (AMBTC), which preserves the higher mean and the lower mean and minimizes the MSE among the BTC variants that use the mean value as the quantization threshold. However, the bit rate achieved by the original BTC algorithm or the AMBTC algorithm is 2 bits/pixel. In this paper, we introduce two simple look-up-table algorithms for coding the higher mean and the lower mean of AMBTC: one reduces the bit rate without any extra distortion, and the other reduces the bit rate further with a little extra distortion. The main idea of both algorithms is to encode the higher mean and the lower mean together as a mean pair. They can be combined with the prediction and interpolation techniques used to code the BTC bit plane to further reduce the total bit rate of AMBTC. We denote the two algorithms LUTBTC-1 and LUTBTC-2 and use them to encode 256-gray-level images, including remote-sensed images. Test results show that LUTBTC-1 has the same PSNR as AMBTC but a lower bit rate, while LUTBTC-2 has a little extra degradation in image quality but a lower bit rate than both AMBTC and LUTBTC-1. Both LUTBTC-1 and LUTBTC-2 have higher encoding quality than BTC-VQ (the algorithm that uses vector quantization (VQ) to code the mean pairs without using VQ to code the bit plane), and LUTBTC-2 also has a lower bit rate than BTC-VQ for ordinary images (though not for remote-sensed images).
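For reference, baseline AMBTC itself is simple enough to sketch; the paper's contribution, coding the (higher mean, lower mean) pair through a look-up table, would sit on top of the `(hi, lo)` outputs below. The block size and rounding are illustrative choices.

```python
import numpy as np

def ambtc_encode_block(block):
    """Baseline AMBTC for one block: threshold at the block mean and keep the
    higher mean, the lower mean and the bit plane (2 bits/pixel for 4x4 blocks
    before any mean-pair look-up-table coding)."""
    m = block.mean()
    bitplane = block >= m
    hi = block[bitplane].mean() if bitplane.any() else m
    lo = block[~bitplane].mean() if (~bitplane).any() else m
    return int(round(hi)), int(round(lo)), bitplane

def ambtc_decode_block(hi, lo, bitplane):
    """Reconstruct a block from its mean pair and bit plane."""
    return np.where(bitplane, hi, lo).astype(np.uint8)
```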
Image Compression
Wavelet-based image compression and content authentication
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content has become an urgent problem to content owners and distributors. Digital watermarking has provided a valid solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. Here, we will concentrate on fragile watermarking of digital images, which is for image content authentication. Our fragile watermarking method is heavily based on the new image compression standard JPEG 2000. We choose a compressed bit stream from JPEG 2000 as the hash of an image, and embed the hash back to the image. The exceptional compression performance of JPEG 2000 solves the tradeoff between small hash size and high hash confidence level. In the authentication stage, the embedded compressed bit stream will be extracted. Then it will be compared with the compressed bit stream of the image to be authenticated. The authentication decision comes from the comparison result. Besides content authentication, we will also show how to employ this watermarking method for hiding one image into another.
Image Compression and Watermark
3D scattered dataset compression based on Gaussian curvature
Zeyu Li, Dehua Li, Shiwei Tang, et al.
A compression method for 3D scattered datasets based on an octree is proposed in this paper. The Gaussian curvature of the surface is used as the criterion for compressing the dataset: points carrying more structural information about the shape of the surface are kept. At the same time, the 3D object surface is decomposed into a set of Local Surface Patches (LSPs) using the octree model, so that time-consuming processing can be avoided. Practical experiments show that this method can achieve a higher compression rate.
Image Compression
Fractal-coding-like lossless binary image compressing method
Tianxu Zhang, Xiaofeng Tong, Zhen C. Zuo, et al.
A new lossless binary image compression method is proposed, which regards a binary image as being constructed from a limited number of fractal elements that have undergone a series of operations such as contraction/dilation, embedding and jointing. Coding an image is therefore mainly a process of acquiring its specific fractal structure. The algorithm defines 16 basic elements of size 2x2, which can be dilated by powers of 2 or, when of the same type, placed side by side to make up a self-similar element set at different scales. This element set constitutes the codebook of the fractal-like coding. Prior to coding, a decorrelation operation is carried out on the image, and sliding matching is then performed between the image and the elements to find the best matching element that satisfies an appropriate matching merit. The error subimage that may result from incomplete matching is recorded, and dynamic segment designating coding is carried out on this error image, which has a sparse matrix form. Finally, arithmetic coding is performed on the resulting code character sequence. Tests on images of different complexities demonstrate that the new method is very efficient for encoding binary images.
Image Compression and Watermark
Novel coding method based on triangular fuzzy vector quantization for noisy image
Yi-Bing Li, Tao Jiang, Zhe Lou
In this paper a novel coding method based on fuzzy vector quantization is presented for images corrupted by Gaussian white noise. By suppressing the high-frequency components of the wavelet image, the noise is significantly removed; the image can then be coded with the fuzzy vector quantization technique. Simulation results show that this method not only achieves a high compression ratio but also removes noise dramatically.
Image Compression
Near-lossless image compression based on integer wavelet transforms
This paper discusses the constructing method of a general integer wavelet transform algorithm. Coupling such an algorithm with subblock differential pulse code modulation (DPCM), an encoding scheme for near-lossless image compression is obtained for remote sensing images. This method possesses the following features: (1) real-time processing is possible; (2) hardware implementation is easy; (3) the algorithm is of parallel structure; (4) only addition, subtraction and bit-shift are involved in the processing. Experiments illustrate that our algorithm is an effective encoding scheme to compress remote sensing images.
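A minimal member of this family is the integer Haar (S) transform implemented by lifting, shown below; it uses only integer addition, subtraction and a bit shift and is exactly invertible. The paper constructs more general integer wavelets, so treat this only as an illustration of the add/subtract/shift property (an even-length input is assumed).

```python
import numpy as np

def int_haar_forward(x):
    """One level of the integer Haar (S) transform via lifting: only integer
    addition, subtraction and an arithmetic right shift are used."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even            # detail (high-pass) band
    s = even + (d >> 1)       # approximation (low-pass) band; >> 1 is a floor halving
    return s, d

def int_haar_inverse(s, d):
    """Exactly invert the forward step: perfect reconstruction holds because
    the same (d >> 1) term is subtracted that was added."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```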
Image Compression and Watermark
Performance of a volume holographic wavelet packet compression image recognition system
Li Ding, Yingbai Yan, Guofan Jin
Volume holographic associative storage in a photorefractive crystal has some special properties such as multichannel operation, parallel processing, and real-time response. It can provide a suitable mechanism for developing an optical correlation system for image recognition. In this paper, a practical image recognition system based on such a mechanism is proposed and constructed. Wavelet packet theory is introduced into the system to reduce cross-talk while at the same time improving the parallelism and storage capability of the system. Through the wavelet packet bases, a set of eigen-images, which serve as the reference images for recognition in the associative correlation, is extracted from the training images. Since the wavelet packet transform can decompose information through different orthogonal bases at different depths, and different entropies can be used to evaluate the weight of each basis, the selection of the best analyzing bases, which correspond to the best eigen-images, can then be discussed and achieved. Furthermore, different kinds of wavelet packets and the number of training images also influence the selection. Basic theoretical analysis of these factors is presented, and experimental results are given for future research.
Image Retrieval
Improved coarseness-based image retrieval
Xinghua Sun, Jingyu Yang, Li Guo
Coarseness is the most fundamental textural feature and has been much investigated since early studies. This paper improves the previous coarseness algorithm on the selection of neighborhood sizes and the calculation of neighborhood average differences, and the improved coarseness algorithm is presented. Experiments show that the improved coarseness has higher texture discriminability and better rotation invariance, and that the image retrieval result based on the improved coarseness is superior to that based on the previous coarseness.
Image retrieval by using subpiece accumulative histogram
Liangpei Zhang, Xianwen Ke, Hong Shu
In content-based image retrieval algorithms, color has been widely used as the most important visual feature. Three key factors for image retrieval using color features are color representation, color abstraction, and the color similarity measure. In this paper, the authors develop a sub-block method based on color features to retrieve images. It is found that the retrieval accuracy of the sub-block histogram method is superior to that of the conventional single global histogram method, while its retrieval speed is slightly slower.
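A hedged sketch of the sub-block idea: the image is split into a fixed grid, one normalized histogram is computed per block, and histogram intersection compares the concatenated features. The paper works with color histograms; for brevity this example uses a single channel, and the grid size and bin count are arbitrary.

```python
import numpy as np

def subblock_histograms(image, grid=(4, 4), bins=16):
    """Concatenate per-block normalized histograms so that coarse spatial
    layout is kept, unlike a single global histogram."""
    h, w = image.shape[:2]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = image[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def histogram_intersection(f1, f2):
    """Similarity between two concatenated sub-block histogram features."""
    return np.minimum(f1, f2).sum() / max(f2.sum(), 1e-12)
```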
Image Compression and Watermark
Multispectral image watermarking based on KLT
Xinpeng Zhang, Kaiwen Zhang, Shuozhong Wang
This paper describes a triple-layered watermarking scheme employing KLT and two layers of discrete cosine transforms, with data shuffled pseudo-randomly prior to the second DCT. In this way, the watermark is imperceptibly and robustly embedded into a set of multispectral images. The watermark is extractable without original host images. Simulation experiments are carried out to study the anti-attack performance. It is shown that the proposed technique is applicable to color and remote sensed multispectral images.
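The KLT step can be illustrated as below: the inter-band covariance of a (bands, rows, cols) cube is eigen-decomposed, giving decorrelated eigen-bands in which the subsequent DCT-domain embedding would operate. This is a generic KLT sketch, not the paper's full triple-layered scheme.

```python
import numpy as np

def klt_bands(cube):
    """Karhunen-Loeve transform across the spectral axis of a
    (bands, rows, cols) multispectral cube."""
    b, r, c = cube.shape
    X = cube.reshape(b, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]          # inter-band covariance (b x b)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    V = eigvecs[:, np.argsort(eigvals)[::-1]]  # KLT basis, strongest band first
    klt = (V.T @ Xc).reshape(b, r, c)
    return klt, V, mean

def inverse_klt_bands(klt, V, mean):
    """Map the eigen-bands back to the original spectral bands."""
    b, r, c = klt.shape
    return (V @ klt.reshape(b, -1) + mean).reshape(b, r, c)
```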
Watermarking algorithm based on permutation and PDF417 coding
Zhen Ji, Weiwei Xiao, Jihong Zhang
This paper presents a new spatial-domain digital watermarking method that trades off between spatial-domain and frequency-domain approaches. The technique produces a watermarked image that closely retains the quality of the original host image while surviving various image processing operations such as lowpass/highpass filtering, lossy JPEG compression, and cropping. The watermarking algorithm takes full advantage of permutation and of a 2-D barcode, the PDF417 code. The watermark is embedded in the spatial domain of the permuted image, which improves resistance to image cropping. Much higher watermark robustness is obtained through the forward error correction (FEC) capability that is the main feature of PDF417 codes. Additional features of this technique are that the existence of the watermark is easy to determine and that the watermark verification procedure does not need the original host image.
Digital watermarking based on self-similarity in DWT of image
Hanqiang Cao, Guang-Xi Zhu, Yaoting Zhu, et al.
Digital watermarking has recently been proposed as a means of property-right protection for digital products. In this paper we analyze the self-similarities in the detail signals of the discrete wavelet transform of an image for the purpose of protecting its copyright. The signature embedded with this method is retrievable only by means of the protected information. Our studies show that the watermarked image retains good image quality, and that the watermark is difficult to detect and cannot be changed without the appropriate user cryptogram.
Novel secret-key watermarking system
Hua Zhong, Fang Liu, Licheng Jiao
Most available watermarking systems have only a secret key, which cannot be made public. But in some applications the watermark needs to be retrieved with public keys. How to generate public keys without weakening the security of the private key is a key problem. In this paper a secret-key watermarking system is designed in which a novel method of generating public keys is proposed. The embedded identifier (ID) can be reliably retrieved using the public keys without resorting to the original data. Because only part of the embedding information is used in the public keys, the above problem is successfully solved. Experimental results show the security and validity of the scheme.
Embedding strategy of image watermarking in wavelet transform domain
Zhengqing Zhang, Yu Long Mo
In this paper, we study a new strategy of embedding watermarks in the scale subband of the image wavelet transform domain. The coefficients in the scale subband are much larger than most coefficients in the other subbands, which means they have a higher perceptual capacity for embedding watermarks. We also use a nonlinear wavelet transform derived from the lifting scheme to obtain larger scale-subband coefficients. Experimental results show that the scheme is sufficiently robust and that the image watermarking algorithm using the nonlinear wavelet transform is more robust than algorithms using the usual wavelet transform.
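A simplified sketch of embedding in the scale (approximation) subband is given below, using an ordinary DWT from PyWavelets in place of the paper's nonlinear lifting transform; the additive rule, strength `alpha`, wavelet and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt

def embed_in_scale_subband(image, watermark_bits, alpha=8.0, wavelet="haar", level=2):
    """Additively embed a +/-1 watermark sequence into the largest coefficients
    of the scale subband, exploiting its higher perceptual capacity."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    cA = coeffs[0]
    flat = cA.ravel()
    # choose the strongest approximation coefficients as carriers
    idx = np.argsort(np.abs(flat))[::-1][:len(watermark_bits)]
    flat[idx] += alpha * (2 * np.asarray(watermark_bits) - 1)  # bits {0,1} -> {-1,+1}
    coeffs[0] = flat.reshape(cA.shape)
    return pywt.waverec2(coeffs, wavelet)
```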
Image Retrieval
IP-based storage of image information
Xianglin Fu, Changsheng Xie, Zhaobin Liu
With the fast growth of data in multispectral image processing, the traditional storage architecture has been challenged. It is currently being replaced by Storage Area Networks (SANs), which externalize storage devices from servers. A SAN is a separate network for storage, isolated from the messaging network and optimized for the movement of data between servers and storage devices. Nowadays most SANs use Fibre Channel to move data between servers and storage devices (FC-SAN), but drawbacks of FC-SAN, such as interoperability problems, a lack of skilled professionals and management tools, and high implementation cost, have hindered its development and application. In this paper, we introduce an IP-based Storage Area Network architecture, which has the good qualities of FC-SAN but overcomes its shortcomings. The principle is to use IP technology to move data between servers and storage devices, to build the SAN with IP-based rather than FC-based network devices, and to attach the SAN to the LAN (Local Area Network) through a switch with multiple access. In particular, these storage devices act as commercial NAS devices and PCs.
Trademark image retrieval
Li Guo, Jingyu Yang, Xinghua Sun
Local features are as important as global features for content-based trademark image retrieval. This paper gives a trademark image retrieval method based on the features of sub-images together with global image information. We extract the sub-images of each candidate, treat the whole image as one of its own sub-images, and then use the features of the sub-images for retrieval. We have tested our method on an image database containing 3000 binary trademark images, using the PVR component as the evaluation measure. Experiments show that, by using local information together with global information, the retrieval performance of our method is better than that of a method based only on global features, and the retrieval results fit human visual perception well.
Retrieval based on image content using DC-image
Qinghai Wang, Yu Long Mo
In this paper a method is presented for content-based retrieval in a video database using the DC-image. The DC-image, formed from the DC coefficients, is extracted from the DCT domain of the images in this video database model. The principle of the DC-image and three different methods of extracting it from an MPEG video stream are described. With the DC-image, temporal segmentation of compressed video is realized based on the histogram of the DC-image. Comparing the performance of the three extraction methods and analyzing the DC-images they produce, we propose a video database structure and feature extraction method. To create uniform indexing for video clips, a frame representation format is developed based on the normalized histogram of the DC-image.
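Because the DC coefficient of an 8x8 DCT block is proportional to the block mean, a DC-image can be sketched in the pixel domain as a block-average thumbnail, and a histogram distance between consecutive DC-images gives a shot-boundary cue. The sketch below is illustrative; it does not parse an MPEG stream as the paper's extraction methods do.

```python
import numpy as np

def dc_image(frame, block=8):
    """Form the DC-image of a gray frame: the DC coefficient of each 8x8 DCT
    block equals (up to a scale factor) the block mean, so in the pixel domain
    the DC-image is just the 8x8 block-average thumbnail."""
    h, w = frame.shape
    h8, w8 = h // block * block, w // block * block
    blocks = frame[:h8, :w8].reshape(h8 // block, block, w8 // block, block)
    return blocks.mean(axis=(1, 3))

def dc_histogram_distance(f1, f2, bins=32):
    """Shot-boundary cue: L1 distance between DC-image histograms of two frames."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256))
    return np.abs(h1 - h2).sum() / max(h1.sum(), 1)
```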
Region-based representations of image and motion estimation
Xiaoxiao Du, Xin Yang, Peng-Fei Shi
In this paper, an image representation method based on arbitrarily shaped regions is proposed, and motion estimation of the image sequence is carried out according to this representation. In order to avoid over-segmentation, the initial frame in the image sequence is smoothed while the edges of the image are preserved; the smoothing algorithm is a modified version of Alvarez's method. The smoothed frame is then segmented by the watershed method. According to the label image, the image is stored in the form of a region adjacency graph. To further address over-segmentation, merging criteria based on average region intensity, edge strength and region size are given. An affine transformation is used as the motion model for each region, and the nonlinear least-squares method is used for the optimization. Compared with a pixel-based method, the results show that the motion vectors produced by our algorithm are more consistent and the PSNR is improved.
Poster Session
Feature-point-based multiscale shape coding
Xuli Shi, Zhaoyang Zhang
In this paper, we propose a new fast and efficient shape coding method called the feature-point-based adaptive arithmetic encoding algorithm (FPAE). For video images, we regard a shape as a set of points parameterized by arc length and B-spline bases. The evolution of the curve at different resolution levels s in B-spline scale space is achieved by convolving the curve with a dilated B-spline kernel instead of a Gaussian kernel; compared with the Gaussian method, this allows a fast algorithm. By calculating the curvature, the feature points containing significant information about the shape contour can be found. For shape coding, however, this representation is not very efficient when the shape consists of arcs, so the modified representation also includes, for each arc, the feature point whose distance to the chord is largest. All of the feature points are encoded by adaptive arithmetic encoding. Experimental results show that our method reduces the coded bits by about 25% compared with the context-based arithmetic encoding (CAE) of the MPEG-4 VM, and the subjective quality of the reconstructed shape is better than that of CAE at the same Dn.
Efficient color image indexing and retrieval using color and spatial feature
Kui Cao, Yucai Feng, Yuan-Zhen Wang
Recently, some image retrieval systems have begun to move away from histogram techniques and to make use of segmentation to extract and index features. The representative color or region color descriptor is more compact and can easily be combined with spatial features. However, the spatial feature is less investigated and is usually confined to some simple geometric properties, such as centroid, area, etc. In this paper, we present a novel and efficient scheme for extracting, indexing, and retrieving color images. The scheme is built upon region-based image retrieval and on the observation that a small number of colors is usually enough to characterize the color information in an image. First, we propose a color clustering method to better capture the color properties of the image. Then, we present a new method to characterize the spatial feature using spatial histograms. The proposed method computes image features automatically from a given image, and they can be used to retrieve images. Experiments were conducted to establish the retrieval capabilities of our approach, with retrieval quality measured by the retrieval rate. The experimental results indicate that our approach performs very well and gives very good retrieval efficiency.
New algorithm of classified vector quantization based on wavelet transform for image coding
Luping Xu, Beilei Kou, Ya-e Zhao
A new algorithm of classified vector quantization based on the wavelet transform for image coding is presented. According to the similarity between the same orientation at different levels of the discrete wavelet transform domain, this paper proposes a new algorithm of multiresolution classified vector quantization. The algorithm exploits an efficient classifier to classify the vectors and uses the class information of the higher level to guide the vector classification of the lower level by taking advantage of this similarity. Simulation results show that the algorithm achieves a higher image compression ratio and pleasing reconstructed image quality in comparison with other algorithms.
Object-region-based color image retrieval
Xinghua Sun, Jingyu Yang, Li Guo
In this paper a color image retrieval algorithm based on object regions is presented. First, each component image in the HSV space is obtained, and then the binary edge image of each component is computed. According to the connectedness of the edge images, the object regions of the color image are extracted. During retrieval, the sub-image features corresponding to the object regions are used in the image similarity matching process in place of global image features. Experiments show that the color image retrieval algorithm based on object regions is superior to that based on the global image.
Improvement bi-block zero tree coding for video data compression
Jingwen Yan, Guiming Shen, Gang Lu
Wavelet-fractal coding of image sequence
Zhiming Zhang, Sile Yu
In this paper, a new image sequence compression scheme combining wavelets with fractals is presented. It adopts a coding structure similar to MPEG's, except that the DCT is replaced with a wavelet transform, and it uses fractal prediction for I-frames. This scheme is successful in compressing digital TV sequences.
Technique of quasi-lossless compression of multiple-spectrum remote sensing images based on image restoration
QingQuan Li, Qingwu Hu
Bit compression is performed to increase the compression ratio, correlation is removed based on a 4-neighboring-pixel decomposition, and an integer orthogonal wavelet transform is applied in combination with the contour features of the multispectral images. An image restoration based on the theory of the modulation transfer function (MTF) is applied to improve image quality. Tests on SPOT remote sensing images show that the compression ratio is over 8, the average fidelity reaches 0.99 and the peak signal-to-noise ratio (PSNR) is over 42 dB.
Error-control-based fractal coding algorithm
Ping Fu, Shigang Wang, Su Yan
In this paper, we propose a fractal image coding technique based on an error-control method. By analyzing the error gap between the collage error and the reconstruction error, we correct the selected domain blocks to reduce the collage error as well as the error gap. In this way, the reconstruction error is substantially reduced and the collage theory is corrected. In simulation experiments, we obtain an obvious performance improvement at a nearly identical compression ratio compared with classical fractal image coding.
Decorrelation of multispectral images for lossless compression
Rong Zhang, Zhengkai Liu, Nenghai Yu
Decorrelation is the most important step in lossless compression. There are both spatial and spectral redundancies in multispectral images. In this paper, we propose a new technique in which an integer wavelet transform is used to remove spatial redundancy and non-linear predictors are used to remove spectral redundancy. For computational reasons, the number of spectral predictors is discussed. These techniques result in higher compression ratios.
New fast license plate location method
Wei Li, Xinhan Huang, Min Wang, et al.
This paper presents a new fast license plate location method based on gray-scale images. According to the vertical edge features of the characters on Chinese license plates, it applies threshold iteration to locate the license plate against complicated backgrounds. The algorithm satisfies the requirements of a real-time system and has good robustness. The precision of segmentation is close to 98%.
Integer wavelet transform for weather radar data compression
Ning Ma, Wei Yan, Qingdong Wang
A novel compression algorithm for weather radar data based on integer wavelet transforms and zerotree coding is presented. Considering the dependencies of the radar data, we give an efficient pre-processing of the data first. Then an advanced Embedded Zerotree Wavelet encoding algorithm is presented which uses context-based adaptive arithmetic coding to improve its performance. Experimental results show that the compression ratios are improved significantly both for lossless and lossy data compression.
Dynamic segment designating lossless coding for sparse binary patterns
Xiaofeng Tong, Tianxu Zhang
A dynamic segment designating lossless coding (DSDC for short) method for sparse binary patterns is proposed in this paper. First, to reduce the correlation of the binary image, we carry out a particular differencing operation on the rows and columns of the white/black (0/1) binary image successively; the resulting image is also binary but is usually sparser than before, that is, the proportion of non-zero elements decreases. The rows of this simpler image are then joined head to tail into one long one-dimensional array. The array is scanned from left to right step by step and divided, according to a certain rule, into non-all-zero segments (containing both 1s and 0s) of fixed length and all-zero segments (containing only 0s) of varying length. In the algorithm, each segment is encoded as soon as it is split from the one-dimensional array, and distinct coding methods are applied to the two kinds of segments. For a non-all-zero segment, each non-zero element is encoded with its segment-relative address code and the zero elements are skipped. For an all-zero segment, its length, i.e. the number of contiguous zero elements it contains, is designated with a bin-tree code. Decoding is the reverse of coding: the bit stream is read out from the code file and each segment is reconstructed with the corresponding method; finally, if differencing was carried out during encoding, the inverse difference is applied to recover the original binary image. The whole process is lossless, and the encoding and decoding processes are nearly balanced in computational time and complexity. The eight binary images CCITT 1-8 have been tested with the methods described in this paper, and the experimental results demonstrate that the dynamic segment designating coding method is efficient for encoding sparse binary patterns.
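The segmentation step can be sketched as follows: the 1-D binary array is split into fixed-length non-all-zero segments, coded by the in-segment addresses of their 1s, and variable-length all-zero runs coded by their length. The segment length is an arbitrary choice here, and the paper's actual address and bin-tree codes are not reproduced.

```python
import numpy as np

def dsdc_segments(bits, seg_len=8):
    """Split a 1-D binary array into fixed-length non-all-zero segments
    (represented by the in-segment addresses of their 1s) and variable-length
    all-zero segments (represented only by their run length)."""
    bits = np.asarray(bits, dtype=np.uint8)
    i, out = 0, []
    while i < len(bits):
        window = bits[i:i + seg_len]
        if window.any():
            # non-all-zero segment: keep only the addresses of the 1s
            out.append(("nonzero", np.flatnonzero(window).tolist()))
            i += len(window)
        else:
            # extend the all-zero run as far as it goes
            j = i
            while j < len(bits) and bits[j] == 0:
                j += 1
            out.append(("zero_run", j - i))
            i = j
    return out
```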
Digital watermarking technique based on integer Haar transforms and visual properties
In this paper, a new method is proposed for hiding a watermark image based on the discrete integer Haar wavelet transform. The method utilizes the excellent properties of the discrete integer Haar wavelet transform and some characteristics of the human visual system (HVS). The watermark, treated as a grey-value image, is processed with the discrete integer Haar wavelet transform and is then decomposed and synthesized with the host image for hiding. The discrete integer Haar wavelet algorithm is simple and practical, its computational cost is small, its speed is fast, and it has a parallel structure. Experimental results show that the method successfully embeds and hides the watermark in the image and can improve the robustness of the watermarking.
Image coding algorithm using a new VQ distortion measure
Shouda Jiang, Qi Wang, Sheng-He Sun
As an efficient technique for data compression, vector quantization (VQ) has been successfully used in various applications involving VQ-based encoding and VQ-based recognition. The response time of encoding and recognition is a very important factor for real-time applications. The codeword search problem (i.e., the encoding problem) in VQ is to assign to the input vector the codeword whose distortion from the test vector is the smallest among all codewords. The encoding process is computationally intensive, which limits the applicability of VQ in practice. Many fast algorithms using the squared Euclidean distortion measure have been proposed to reduce the computational complexity of full-search encoding. The threshold decomposition technique is an important technique for stack filters. By decomposing a vector into binary vectors with the threshold decomposition technique of the stack filter, a new distortion measure based on the decomposed binary vectors can be derived. This distortion measure needs no multiplication operations, only some XOR operations and a counter, and it is suitable for VLSI implementation. Experiments were carried out to compare the proposed encoding algorithm with the conventional full-search encoding algorithm using the squared Euclidean distortion measure. The results show that the proposed algorithm is faster than the conventional full-search encoding algorithm; in particular, the encoding time is much shorter and the encoding structure much simpler if hardware is used to encode the image. The PSNR of the proposed algorithm is only slightly worse than that of the conventional algorithm, and the new encoding algorithm is also faster than the conventional full-search encoding algorithm in software.
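The idea behind such a measure can be illustrated with the standard threshold-decomposition identity: if each component is decomposed into binary planes, XOR-ing the planes of two vectors and counting the ones yields the city-block distance without any multiplication. The sketch below shows that identity; whether the paper's measure is exactly this one is not stated in the abstract, so take it as an assumption.

```python
import numpy as np

def threshold_decompose(vec, levels=256):
    """Stack-filter style threshold decomposition: a value v becomes a binary
    column with exactly v ones (one plane per threshold 1..levels-1)."""
    v = np.asarray(vec, dtype=np.int64)
    thresh = np.arange(1, levels)[:, None]
    return (v[None, :] >= thresh).astype(np.uint8)   # shape (levels-1, dim)

def xor_distortion(vec_a, vec_b, levels=256):
    """Multiplication-free distortion: XOR the decomposed binary planes and
    count the ones; this equals sum(|a_i - b_i|), the city-block distance."""
    A = threshold_decompose(vec_a, levels)
    B = threshold_decompose(vec_b, levels)
    return int(np.count_nonzero(A ^ B))
```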
Intra/inter-band coding in FCM-VQ of multispectral images
Yuttapong Rangsanseri, Uthai Sangthongpraow, Punya Thitimajshima
This paper presents a multispectral image compression method based on vector quantization technique that uses fuzzy c-means (FCM) algorithm to generate the codebook. There are two ways to form the vector set from the input image: the intraband vector forming where each vector was formed by dividing the input image into blocks, and the interband vector forming where each vector was formed by the gray values of all bands representing a pixel. In this research, a modified version of FCM was used to reduce the computational time. The experimental results comparing the effect of both vector forming methods are given.
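The two vector-forming options can be sketched directly: intraband forming tiles each band into small blocks, while interband forming takes each pixel's spectral signature across all bands. The block size and the (bands, rows, cols) layout are assumptions of the example.

```python
import numpy as np

def intraband_vectors(cube, block=4):
    """Intraband forming: every band is tiled into block x block patches and
    each patch becomes one training vector."""
    vecs = []
    for band in cube:                               # cube: (bands, rows, cols)
        h, w = band.shape
        h_, w_ = h // block * block, w // block * block
        tiles = band[:h_, :w_].reshape(h_ // block, block, w_ // block, block)
        vecs.append(tiles.transpose(0, 2, 1, 3).reshape(-1, block * block))
    return np.vstack(vecs)

def interband_vectors(cube):
    """Interband forming: each pixel position contributes one vector made of
    its gray values across all bands (its spectral signature)."""
    b, r, c = cube.shape
    return cube.reshape(b, r * c).T                 # shape (r*c, bands)
```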
Efficient wavelet-based deblocking algorithm for highly compressed images
Shuanhu Wu, Hong Yan, Zheng Tan
In this paper, a novel post-processing method is proposed in the wavelet domain for the suppression of blocking artifacts in compressed images. The novelty of the method is that soft-threshold values are obtained from the difference between the wavelet transform coefficients of the image blocks and those of the entire image, and the high-frequency wavelet coefficients in different subbands are thresholded using different values and strategies. The threshold value is made adaptive to different images and to the characteristics of the blocking artifacts. The new method is robust, fast and works remarkably well for different DCT-based compressed images at low bit rates. It is nonlinear, computationally efficient and spatially adaptive, and it retains sharp features in the images after removing artifacts. Experimental results show that the proposed method achieves significantly improved visual quality and also increases the PSNR of the output image. The algorithm can be used for real-time post-processing in DCT-based encoders and decoders.
Very low bit streaming image compression coder based on lift scheme
This paper presents a novel compression encoding/decoding method based on the lifting scheme for I-VOPs. In the traditional I-VOP coding algorithm, texture and shape are coded separately: macro-block DCT coding is adopted for the texture, and 16x16 BAB (Binary Alpha Block) coding is employed for the shape, so the texture and shape coding is not embedded. In the proposed lifting-scheme-based algorithm for I-VOPs, the texture and shape are coded at the same time and the bit stream is embedded.
Computer pseudocolor equi-density coding of gray image based on tri-primary colors of RGB and pixel self-transformations
Maoyong Cao, Nongliang Sun, Daoyin Yu, et al.
Pseudo-color coding of gray images is a typical processing step in medicine, engineering and the military. This paper proposes a new method of pseudo-color equi-density coding of gray images based on the RGB tri-primary colors and the pixel's self-transformations. In the method, the negative pixel f'(x,y) and the positive-negative superimposed pixel f''(x,y) are easily obtained from the original (positive) pixel f(x,y) at an arbitrary position (x,y). The three signals that drive the physical display device are then specified as f(x,y), f'(x,y) and f''(x,y), respectively, and are fed separately into the red, green and blue guns of the RGB color cube of the monitor. Theoretical analysis and experimental results are also given. The method has the advantages of being easy to realize and of high color sensitivity to gray levels.
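A hedged sketch of the mapping: the positive image drives one gun, its negative another, and a positive-negative superposition the third. The abstract does not define f''(x,y) precisely, so the superposition used below (255 - |f - f'|, which peaks at mid-gray) is purely an assumption for illustration.

```python
import numpy as np

def pseudocolor_equidensity(gray):
    """Drive the R, G, B guns with the positive image f(x,y), its negative
    f'(x,y), and an assumed positive-negative superposition f''(x,y)."""
    f = gray.astype(np.float64)
    f_neg = 255.0 - f                    # negative pixel f'(x,y)
    f_sup = 255.0 - np.abs(f - f_neg)    # assumed superimposed pixel f''(x,y)
    return np.stack([f, f_neg, f_sup], axis=-1).astype(np.uint8)
```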
Proposal for multispectral image compression methods
Aniati Murni, Sani Muhamad Isa, Febriliyan Samopa
This paper has proposed two image compression and decompression schemes for multispectral images. Two issues were considered in the proposed methods. The first issue is the possibility of applying the compression process directly to a set of multispectral images, where the standard JPEG should be applied to each individual image. Considering this issue, a compression and decompression method is proposed based on a hybrid of lower bit suppression and Karhunen-Loeve transform and named as KLT Hybrid. The second issue is the possibility of obtaining a general codebook for a bulky of typical data such as a set of hyperspectral images. Considering this issue, another compression and decompression method is proposed based on vector quantization (VQ) where the general codebook is obtained by a proposed fair-share amount method. Four performance indicators were used to evaluate the results. The indicators include compression ratio, root mean square error, maximum absolute error, and signal to noise ratio. The experimental results have shown good performance indication of both methods.