The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It has already been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements to a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques such as as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8×8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
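As a software reference model for the FPGA design described above, the eight-point transform can be sketched as a matrix-vector product. The sign convention below (entries (-1)^j C(i,j)) is one common definition of the Pascal transform matrix and is an assumption here, since the abstract does not spell it out; with this convention the matrix is its own inverse.

```python
import numpy as np
from math import comb

def pascal_matrix(n):
    """Lower-triangular Pascal transform matrix.

    Entries p[i, j] = (-1)**j * C(i, j) for j <= i. This sign
    convention is an assumption; with it, the matrix is involutory
    (its own inverse), so the same hardware computes the inverse.
    """
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            P[i, j] = (-1) ** j * comb(i, j)
    return P

P = pascal_matrix(8)
x = np.arange(8, dtype=float)   # an eight-point input vector
X = P @ x                       # one-dimensional Pascal transform
```

The binary-matrix factorization mentioned in the abstract replaces this dense product with a cascade of additions, which is what lets the FPGA design avoid multipliers.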
We present a new discrete transform, the Gould transform (DGT). The transform has many interesting mathematical properties. For example, the forward and inverse transform matrices are both lower triangular, with constant diagonals and sub-diagonals, and both can be factored into the product of binary matrices. The forward transform can be used to detect edges in digital images. If G is the forward transform matrix and y is the image, then the two-dimensional DGT, GyG^T, can be used directly to detect edges. One way to improve the edge detection technique is to use the "combination of forward and backward difference," G^T(Gy), to better identify the edges. For images that tend to have vertical and horizontal edges, we can further improve the technique by shifting rows (or columns) and then using the technique to detect edges, essentially applying the transform in the diagonal directions.
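The edge-detection use of GyG^T can be illustrated numerically. The abstract specifies only that G is lower triangular with constant diagonal and sub-diagonal; the sketch below uses a first-difference matrix (1 on the diagonal, -1 on the sub-diagonal) as a hypothetical stand-in for the actual DGT matrix, so the numbers show the mechanism rather than the exact DGT output.

```python
import numpy as np

def difference_matrix(n):
    """Lower-triangular matrix with constant diagonal (1) and constant
    sub-diagonal (-1): a first-difference operator standing in for the
    forward DGT matrix G (the true DGT entries are not given here)."""
    G = np.eye(n)
    G[np.arange(1, n), np.arange(n - 1)] = -1.0
    return G

# Synthetic test image: a bright square on a dark background.
y = np.zeros((8, 8))
y[2:6, 3:7] = 1.0

G = difference_matrix(8)
rows = G @ y           # responds at horizontal edges of the square
edges_2d = G @ y @ G.T # two-dimensional form G y G^T
```

With this stand-in, Gy is nonzero only where the image changes between rows, while GyG^T differences in both directions; the flat interior of the square maps to zero in either case.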
KEYWORDS: Image processing, Computer programming languages, Computer programming, Signal processing, Telecommunications, Control systems, Algorithm development, Parallel computing, Video processing, Video
The discrete cosine transform (DCT) is commonly used in signal processing, image processing, communication systems, and control systems. We use two methods based on the algorithms of Clenshaw and Forsyth to compute the recursive DCT in parallel. The symmetrical discrete cosine transform (SCT) is computed first, and it can then be used as an intermediate tool to compute other forms of the DCT. The advantage of the SCT is that both the forward SCT and its inverse can be computed by the same method and hardware implementation. Although Clenshaw's algorithm has the lower computational complexity, it is not necessarily the more accurate. The computational accuracy of these algorithms is discussed. In addition, the front-to-back forms of Clenshaw's and Forsyth's algorithms are implemented in aCe C, a parallel programming language.
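The core of the Clenshaw approach is the recurrence for evaluating cosine sums, which every DCT component is. The sketch below shows the recurrence on a generic cosine sum; the specific SCT kernel and parallel front-to-back formulation from the paper are not reproduced here.

```python
import math

def clenshaw_cos(a, theta):
    """Evaluate S = sum_{k=0}^{n} a[k]*cos(k*theta) with Clenshaw's
    recurrence, using cos(k*theta) = T_k(cos theta), where T_k is the
    Chebyshev polynomial satisfying T_{k+1} = 2x*T_k - T_{k-1}."""
    x = math.cos(theta)
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):  # k = n down to 1
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2

# One DCT-style component as a cosine sum (illustrative only; this is
# not a specific DCT variant from the paper).
x_sig = [1.0, 2.0, 3.0, 4.0]
N = len(x_sig)
theta = math.pi * 1 / N
X_1 = clenshaw_cos(x_sig, theta)
```

Each output component needs only one pass of the recurrence, replacing N explicit cosine evaluations per component with a single cos(theta); this is the complexity advantage, and the accumulated recurrence is also where the accuracy concerns discussed in the paper arise.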
KEYWORDS: Very large scale integration, Digital filtering, Computer architecture, Electrical engineering, Image compression, Video compression, Aluminum
New recursive VLSI architectures for arbitrary length discrete cosine transform (DCT) and inverse DCT (IDCT) are presented. Compared with previous methods, the proposed approach saves N-1 adders but requires one additional delay element for the DCT implementation, and saves (N/2) multipliers for the IDCT. The reduction in elements results from identifying common computations in the processing that generates the transform components.
A new block coding technique for the compression of bilevel text images is presented. The technique uses a combination of pre-processing the image to extract the edge information and block coding to compress the data. The key feature of the technique is its simplicity of implementation. The pre-processing consists of an image differencing operation to decorrelate the strings of black pixels. The decorrelation is followed by the lossless coding of each block of the image. The performance of the new image differencing method is examined based on both theoretical and experimental code length data. Both theoretical and simulation results show that by pre-processing the image, the number of nonzero pixels can be significantly reduced and a more efficient block code is realized.
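The decorrelation step can be sketched as follows. The exact differencing operator is not specified in the abstract; the version below XORs each pixel with its left neighbor, one plausible lossless choice for bilevel data, so that a run of black pixels collapses to its two end points.

```python
import numpy as np

def horizontal_difference(img):
    """XOR each pixel with its left neighbor (first column unchanged).
    A hypothetical reading of the paper's differencing step: runs of
    identical pixels collapse to their transition points, and the
    operation is losslessly invertible by a cumulative XOR."""
    d = img.copy()
    d[:, 1:] = img[:, 1:] ^ img[:, :-1]
    return d

# A single row with one run of eight black (1) pixels.
row = np.zeros((1, 16), dtype=np.uint8)
row[0, 3:11] = 1
d = horizontal_difference(row)
```

The eight nonzero pixels of the run become two nonzero transition pixels, which is the reduction that makes the subsequent block code more efficient; the original image is recovered exactly by a running XOR.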