Electronic Imaging & Signal Processing

Compression and encryption: doing the most with the least

From OE Reports Number 198 - June 2000
1 June 2000, SPIE Newsroom. DOI: 10.1117/2.6200006.0001

Bandwidth is the high-technology equivalent of richness and thinness: you can never have enough of it. But today, nearly every user of the Internet has only a limited amount of bandwidth available for transmitting rapidly increasing quantities of text, data, and images. In response to the problem, research teams are developing a variety of approaches to fitting quarts of data into pint pots of bandwidth.

The work has the generic name of compression. It has the goal of increasing the efficiency of available bandwidth by sacrificing some of the information in text, data, or images prior to transmission, with the expectation that processing at the receiving end will restore most of that information in its original form.

A related technology, encryption, involves authentication instead of compression. Digital watermarking, in particular, aims to identify images or documents wherever they travel in cyberspace. That means retaining small packages of information in secure form, whatever happens to their surroundings during transmission across the Internet. Somewhat ironically, encrypted data must be able to survive the compression process, among other operations.

Original image. This is a standard test image that is difficult to compress because it contains large regions with texture (pants, scarf, tablecloth, chair) as well as large regions where the intensity varies smoothly (arms, floor, chair legs).
Compression: an infant technology

As a usable set of technologies, compression is still in its infancy. "Nowadays, very few images are actually compressed to a large extent," said Francois Meyer, an assistant professor of electrical engineering and radiology at the Univ. of Colorado at Boulder.

Mark Schmalz, a scientist at the Univ. of Florida, agreed with Meyer. "So far, the field has a lot of unfulfilled promise," he said. "The main problem is finding a consistent way to represent information in an image in such a way that it can be compactly encoded." However, he said, "this area of research has exciting promise."

An early starting point for compression was the graphics interchange format (GIF). This method squeezes down data and images without losing any information. It does so by using the same code, a digit or set of digits, to identify identical parts of an image or a set of data, such as adjacent segments of a plain wall or a field of grass viewed from a distance. The key to this identification is a code table that the decoder rebuilds from the compressed stream, which enables the decoding program to restore the image exactly after transmission. GIF does an effective job of compressing images that contain a lot of repetition, for example large areas of a single color. However, it is of limited use for images with a lot of fine detail.
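
To make the idea concrete, here is a minimal Python sketch of the kind of dictionary coding GIF relies on. It follows the spirit of the LZW scheme rather than the actual GIF file format: repeated sequences are replaced by short codes, and the decoder rebuilds the same table from the code stream, so nothing is lost.

    # Minimal LZW-style dictionary coder -- a sketch of the idea behind GIF,
    # not the actual GIF file format.

    def lzw_compress(data: str) -> list[int]:
        """Replace repeated substrings with codes from a growing dictionary."""
        table = {chr(i): i for i in range(256)}   # single characters seed the table
        next_code = 256
        current, output = "", []
        for ch in data:
            candidate = current + ch
            if candidate in table:
                current = candidate               # keep extending the match
            else:
                output.append(table[current])     # emit the code for the longest match
                table[candidate] = next_code      # remember the new sequence
                next_code += 1
                current = ch
        if current:
            output.append(table[current])
        return output

    def lzw_decompress(codes: list[int]) -> str:
        """Rebuild the dictionary on the fly -- no table has to be transmitted."""
        table = {i: chr(i) for i in range(256)}
        next_code = 256
        previous = table[codes[0]]
        result = [previous]
        for code in codes[1:]:
            entry = table[code] if code in table else previous + previous[0]
            result.append(entry)
            table[next_code] = previous + entry[0]
            next_code += 1
            previous = entry
        return "".join(result)

    flat_region = "blue sky " * 20                # highly repetitive, like a plain wall
    codes = lzw_compress(flat_region)
    assert lzw_decompress(codes) == flat_region   # lossless: the original comes back exactly
    print(f"{len(flat_region)} characters -> {len(codes)} codes")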

A more sophisticated technology, the Joint Photographic Experts Group (JPEG) format, goes further by actually discarding some parts of the original image. The JPEG algorithm first divides the image into small squares. A mathematical operation called the discrete cosine transform (DCT) then converts each square into a set of numbers, called coefficients, some large and some small, that together represent the image. Finally, the algorithm throws away the smaller coefficients, which represent the least important parts of the data. The result is a compressed image that must be reconstituted after transmission from less than complete information.
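
The pipeline can be sketched numerically. The example below is a simplified illustration rather than a standards-compliant JPEG coder: it builds an orthonormal DCT matrix with NumPy, transforms an 8 × 8 block, zeroes all but the largest coefficients, and inverts the transform to recover an approximation.

    import numpy as np

    N = 8

    def dct_matrix(n: int) -> np.ndarray:
        """Orthonormal DCT-II basis, the transform JPEG applies to each 8 x 8 block."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        mat = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
        mat[0, :] *= 1 / np.sqrt(2)
        return mat * np.sqrt(2 / n)

    D = dct_matrix(N)

    def compress_block(block: np.ndarray, keep: int) -> np.ndarray:
        """Keep only the 'keep' largest-magnitude DCT coefficients; zero the rest."""
        coeffs = D @ block @ D.T                  # 2-D DCT of the block
        threshold = np.sort(np.abs(coeffs), axis=None)[-keep]
        coeffs[np.abs(coeffs) < threshold] = 0.0  # discard the least important coefficients
        return coeffs

    def decompress_block(coeffs: np.ndarray) -> np.ndarray:
        """Inverse DCT reconstructs an approximation of the original block."""
        return D.T @ coeffs @ D

    rng = np.random.default_rng(0)
    block = np.outer(np.linspace(50, 200, N), np.ones(N)) + rng.normal(0, 5, (N, N))
    approx = decompress_block(compress_block(block, keep=10))
    print("max reconstruction error:", np.max(np.abs(block - approx)))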

For all its value, Meyer said, "JPEG uses rather old technology." He and other researchers are developing a new and improved version: wavelet-based compression.

Image compressed with the current JPEG standard. The compression ratio is 32 to 1, and the quality, measured as peak signal-to-noise ratio (PSNR), is 24 dB. JPEG is based on a DCT expansion of 8 × 8 blocks, and very annoying blocking artifacts are visible.
From seismology to transmission

Wavelets are mathematical functions that cover a finite interval of space and provide more flexibility than the traditional Fourier transforms. Researchers initially applied wavelet transforms to geological readings to identify the local frequencies and pitches of wavefronts in seismic waves. Mathematicians soon realized that they could apply the same technique to the compression of signals and images. The key advances in understanding occurred in 1992 and 1993, with the publication of papers by A.S. Lewis and G. Knowles, and by Jerome Shapiro. Between them, the two papers outlined the way in which wavelets could be used to create compression algorithms.

To perform compression prior to transmission, a wavelet transform is applied to disjoint partitions of the text, data, or image. The process starts by selecting groups of four pixels in parts of an image that are relatively smooth in texture, and substituting for them a single pixel that represents the average of the four. "The process blurs the image, and loses some information," Meyer said. "Each time you blur in one direction, you lose half the pixels. So, by doing it in two directions, you get down to one-quarter of the pixels. You can increase the blurring to create an extremely blurred version that needs relatively few pixels."
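
That averaging step is easy to sketch. The snippet below, which assumes an image whose dimensions are even, performs one level of Haar-style blurring: each 2 × 2 group of pixels collapses to its average, leaving one quarter of the pixels.

    import numpy as np

    def haar_average(image: np.ndarray) -> np.ndarray:
        """Replace each 2 x 2 group of pixels with its average: a blurred
        version with one quarter as many pixels, as Meyer describes."""
        return (image[0::2, 0::2] + image[0::2, 1::2] +
                image[1::2, 0::2] + image[1::2, 1::2]) / 4.0

    image = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for a smooth region
    coarse = haar_average(image)                        # 8 x 8 -> 4 x 4
    coarser = haar_average(coarse)                      # 4 x 4 -> 2 x 2: "extremely blurred"
    print(image.shape, "->", coarse.shape, "->", coarser.shape)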

Wavelet compression doesn't blur the whole picture. To reproduce images effectively, Meyer said, "it must retain the edges. These are any region where you have a strong difference in intensity as you move in any direction from one pixel to another. If you keep the extremely blurred version of the smooth areas along with a few of the edges, you can get a beautiful rendering of your original image."

Compression is only the first half of the procedure. After transmission, text, images, and data must undergo reconstruction, otherwise called decompression. "You start with the coarse image that has a small number of pixels," Meyer said. "Then, you keep multiplying the pixels by four, and hopefully put back the edges. Reconstruction is completely equivalent to compression, but much faster. You can get progressive renderings, with a handful of coefficients."
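
Extending the averaging sketch above, the following illustration uses a single-level Haar-style transform, far simpler than a production wavelet coder: it keeps the blurred image plus only the largest detail coefficients, the "edges," and then inverts the transform to reconstruct an approximation.

    import numpy as np

    def haar_forward(img):
        """One level of a 2-D Haar-style transform: a blurred average plus
        three bands of detail coefficients (large near edges, tiny in smooth areas)."""
        a, b = img[0::2, 0::2], img[0::2, 1::2]
        c, d = img[1::2, 0::2], img[1::2, 1::2]
        avg = (a + b + c + d) / 4.0
        details = [(a - b + c - d) / 4.0,
                   (a + b - c - d) / 4.0,
                   (a - b - c + d) / 4.0]
        return avg, details

    def haar_inverse(avg, details):
        """Exact inverse: the number of pixels is multiplied back by four."""
        d1, d2, d3 = details
        h, w = avg.shape
        img = np.empty((2 * h, 2 * w))
        img[0::2, 0::2] = avg + d1 + d2 + d3
        img[0::2, 1::2] = avg - d1 + d2 - d3
        img[1::2, 0::2] = avg + d1 - d2 - d3
        img[1::2, 1::2] = avg - d1 - d2 + d3
        return img

    def compress(img, keep_fraction=0.05):
        """Keep the coarse image plus only the largest detail coefficients (the edges)."""
        avg, details = haar_forward(img)
        magnitudes = np.abs(np.concatenate([d.ravel() for d in details]))
        cutoff = np.quantile(magnitudes, 1.0 - keep_fraction)
        details = [np.where(np.abs(d) >= cutoff, d, 0.0) for d in details]
        return avg, details

    def decompress(avg, details):
        """Reconstruction: put the retained edges back onto the blurred image."""
        return haar_inverse(avg, details)

    rng = np.random.default_rng(1)
    img = np.tile(np.linspace(0, 255, 32), (32, 1)) + rng.normal(0, 2, (32, 32))
    approx = decompress(*compress(img))
    print("mean absolute error:", round(float(np.mean(np.abs(img - approx))), 2))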

Wavelet compression has yet to make a major impact on the transmission of images. But researchers have no doubt that its time will come, probably very soon. International committees that develop standards for Internet traffic have decided to include wavelet compression in the new JPEG-2000 standard, which will soon be finalized. "There's great hope that this standard will make possible the exchange of images in high-quality compressed form," Meyer said. "Research is also under way on extending the concept to moving images. One part of MPEG-4, a standard for moving images, will use wavelets for the first time."

Image compressed with the FWP wavelet packet coder of Francois Meyer.1 Wavelet packets are relatives of wavelets that can be tailored to each image. In this case, the wavelet packet basis contains many functions, similar to sine or cosine functions, that are very well localized in frequency. These wavelet packets obviously match the texture on the scarf, the woman's legs, and the checker pattern on the tablecloth. Because the basis is well fitted to the image, the FWP coder has no difficulty preserving the oscillatory texture everywhere in the image. The compression ratio is again 32 to 1. The PSNR is now 29.12 dB.
Alternative approaches to compression

Wavelets may have the broadest promise among emerging compression technologies, but researchers are also developing alternative approaches. High-compression block encoding is one. "We mathematically subdivide an image into blocks of equal size, and apply the compression transform to each block in turn," Schmalz said. "You have a relatively small amount of data to deal with in each block. And at no time do you need to have the entire image in memory. In practice, this is useful for compression using fast, embedded processors with small memories." He said this technology has specialized uses. "Our research group has developed efficient high-compression transforms for military applications, such as target recognition. The technology might also be useful for low-resolution image displays such as Internet imaging."
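
The block-streaming idea can be sketched as follows; this is a generic illustration, not the group's actual transform. The encoder walks over fixed-size tiles one at a time, so only a single block needs attention while it is being transformed; in practice the tiles would arrive from a sensor or be read from disk rather than from an in-memory array.

    import numpy as np

    def iter_blocks(image: np.ndarray, block: int = 8):
        """Yield fixed-size tiles one at a time; the encoder only ever touches one tile.
        In a real embedded system the tiles would be streamed from a sensor or a file."""
        rows, cols = image.shape
        for r in range(0, rows - rows % block, block):
            for c in range(0, cols - cols % block, block):
                yield (r, c), image[r:r + block, c:c + block]

    def encode_block(tile: np.ndarray) -> np.ndarray:
        """Placeholder per-block transform -- stands in for whatever high-compression
        transform the real coder applies (DCT, wavelet, vector quantization, ...)."""
        return np.round(tile / 16).astype(np.uint8)   # crude quantization for illustration

    image = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
    encoded = {pos: encode_block(tile) for pos, tile in iter_blocks(image)}
    print(len(encoded), "blocks encoded independently")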

Other research groups are developing iterative methods of compressing data. Invented by Michael Barnsley, a Georgia Tech engineer and founder of Iterated Systems, Inc. (Atlanta, Georgia), iterative methods rely on a concept familiar to mathematicians who work with fractals: similarity among objects at different scales. The idea is to decompose any image into individual parts that resemble each other, although one of the parts may have to be rotated and scaled up or down to achieve the similarity. For example, a patch of sky with a piece of cloud, once rotated and enlarged, might closely resemble a stretch of lake in the same image that contains a flock of white birds. "The basic patterns that support these resemblances can be used to formulate a code book for the compression," Schmalz said.
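
A heavily stripped-down sketch of that search, ignoring rotations and overlapping blocks, pairs each small "range" block with the larger "domain" block that reproduces it best after downsampling and a least-squares brightness and contrast fit.

    import numpy as np

    def downsample(block):
        """Shrink a domain block by 2 x so it can be compared with a smaller range block."""
        return (block[0::2, 0::2] + block[0::2, 1::2] +
                block[1::2, 0::2] + block[1::2, 1::2]) / 4.0

    def best_match(range_block, domain_blocks):
        """Find the domain block (plus contrast s and brightness o) that best
        reproduces the range block -- the 'jigsaw puzzle' search Schmalz describes."""
        best = None
        r = range_block.ravel()
        for idx, dom in enumerate(domain_blocks):
            d = downsample(dom).ravel()
            A = np.column_stack([d, np.ones_like(d)])     # least-squares fit of r ~ s*d + o
            (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, idx, s, o)
        return best  # (error, domain index, contrast, brightness)

    rng = np.random.default_rng(3)
    image = rng.normal(128, 30, (32, 32))
    range_blocks = [image[r:r+4, c:c+4] for r in range(0, 32, 4) for c in range(0, 32, 4)]
    domain_blocks = [image[r:r+8, c:c+8] for r in range(0, 32, 8) for c in range(0, 32, 8)]
    code_book = [best_match(rb, domain_blocks) for rb in range_blocks]
    print(len(code_book), "range blocks mapped to domain blocks")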

The process isn't easy. "It's rather like putting a jigsaw puzzle together," Schmalz said. And so far it has proved difficult to carry out in real time. Nevertheless, Schmalz said, "if you choose the right size patches and you have enough redundancy in the image, you can get 50:1 or even up to 200:1 compression with visually acceptable decompression in selected cases."

Encryption: preserving information

Some information is too valuable to be lost in transmission; thus, research teams are developing forms of mathematical manipulation similar to those used to compress data. The process is called digital watermarking. Its objective is to format data in such a way that an identifying pattern remains with an image, text, or data.

"The challenge is not just to put a pattern into the image, but to design the patterns to be highly tolerant of image processing methods," Schmalz said. For example, a digital watermark that signifies ownership of an image can't be placed only in the corner of the image, because it can easily be cropped out. Instead, the watermark must be spread across the entire image. To avoiding spoiling the appearance of the image, the mark should also be invisible to all but the most sophisticated viewer.

Like advanced forms of compression technology, digital watermarking is still developing. It has notched up some successes. IBM, for example, uses its own version of watermarking to prevent unauthorized reproduction of digitized images of such valuable documents as collections from the Vatican and the papers of Reformation leader Martin Luther. Oregon firm Digimarc Corp. has developed patented technology that incorporates what it calls identity document marks into images transmitted over the web. But fresh challenges remain. "Some success has been obtained in maintaining digital watermarks in linear operations," Schmalz said. "But with nonlinear operations, the watermarks can in some cases be significantly altered."


1. Francois G. Meyer, Amir Z. Averbuch, and Jan-Olov Strömberg, "Fast Adaptive Wavelet Packet Image Compression," IEEE Transactions on Image Processing, Vol. 9, No. 4, May 2000.

Peter Gwynne

A former science editor of Newsweek, Peter Gwynne is a freelance science writer based in Marstons Mills, MA.