
Proceedings Paper

Compressive computation and Moore’s Law
Author(s): Mark S. Schmalz

Paper Abstract

The ongoing, dramatic increase in the speed and accessibility of digital computing has been driven since the early 1970s by the increasing capacity of integrated circuits, expressed as gates per unit chip area. The governing model, called Moore's Law, states that gate density doubles approximately every 18 months, and has held since the early 1970s. However, there exist physical limits on the size of integrated circuit features such as insulators, and these limits are expected to be reached in conventional circuit technology within the next 15 to 20 years. At that point, either (a) circuit density will cease to increase significantly, or (b) new technologies, if available, will be needed to sustain progress along the curve dictated by Moore's Law. Such technological innovations can be classified as (1) fundamental advances in circuit technology that support further growth in gate density according to Moore's Law, or (2) technologies that advance the state of the art in computing hardware or software so as to provide a constant offset to the graph of Moore's Law. Examples of the former include quantum computing; of the latter, more efficient chip layouts. In this paper, we show that another, recently developed technology can partially ameliorate the performance constraints imposed by physical limits on gate density, thereby providing a sustainable but constant offset of two to three orders of magnitude in Moore's performance model. Called compressive processing, this technique performs computation directly on compressed data (e.g., signals or imagery), without decompression, primarily for image and signal processing (ISP) applications. Given a source image a and a compression transform T, a compressed image c = T(a) is subjected to an operation O' that is an analogue over range(T) of an operation O on uncompressed imagery, where b = O(a), such that d = O'(c) = T(O(a)). By applying the inverse of T (denoted T⁻¹) to d, we obtain b.
If T does not have an inverse, then an approximation T* to the inverse can be applied, such that the decompressed image is given by T*(O'(T(a))) ≈ b, which is the customary case in lossy compression. Analysis emphasizes implementational concerns such as applicable compression transforms (T) and theory that relates compressive operations (O') to the corresponding image or signal processing operations (O). Discussion also includes error propagation in cascades of compressive operations. Performance results are given for various image processing operations.
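To make the T / O / O' relationship concrete, here is a minimal illustrative sketch (not taken from the paper) using run-length encoding as the compression transform T and a pointwise brightness shift as the operation O. All function names are assumptions for illustration. The compressive analogue O' applies the shift to one entry per run rather than one entry per sample, so the work done scales with the compressed size.

```python
def rle_encode(a):
    """T: compress a sample list into (value, run_length) pairs."""
    runs = []
    for v in a:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(c):
    """T^-1: expand (value, run_length) pairs back into samples."""
    out = []
    for v, n in c:
        out.extend([v] * n)
    return out

def brighten(a, k):
    """O: pointwise operation on uncompressed data (one step per sample)."""
    return [v + k for v in a]

def brighten_compressed(c, k):
    """O': the analogue of O over range(T) (one step per run)."""
    return [[v + k, n] for v, n in c]

# d = O'(c) = T(O(a)); decoding d recovers b = O(a) exactly.
a = [5, 5, 5, 9, 9, 2, 2, 2, 2]
c = rle_encode(a)                 # c = T(a)
d = brighten_compressed(c, 10)    # d = O'(c)
assert rle_decode(d) == brighten(a, 10)
```

Because RLE is lossless, T⁻¹ here is exact; for a lossy T, the approximate inverse T* described in the abstract would be used instead, and the recovery would hold only approximately.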

Paper Details

Date Published: 30 January 2003
PDF: 14 pages
Proc. SPIE 4793, Mathematics of Data/Image Coding, Compression, and Encryption V, with Applications, (30 January 2003); doi: 10.1117/12.452382
Author Affiliations:
Mark S. Schmalz, Univ. of Florida (United States)

Published in SPIE Proceedings Vol. 4793:
Mathematics of Data/Image Coding, Compression, and Encryption V, with Applications
Mark S. Schmalz, Editor(s)
