Image compression is increasingly employed in applications such as medical imaging, to reduce data storage requirements, and Internet video transmission, to effectively increase channel bandwidth. Similarly, military applications such as automated target recognition (ATR) often employ compression to achieve storage and communication efficiencies, particularly to enhance the effective bandwidth of communication channels whose throughput suffers, for example, from overhead due to error correction/detection or encryption. In the majority of cases, lossy compression is employed because of the resultant low bit rates (high compression ratios). However, lossy compression produces artifacts in decompressed imagery that can confound ATR processes applied to such imagery, thereby reducing the probability of detection (Pd) and possibly increasing the rate or number of false alarms (Rfa or Nfa). In this paper, the authors' previous research in performance measurement of compression transforms is extended to include (a) benchmarking algorithms and software tools, (b) a suite of error exemplars designed to elicit compression transform behavior in an operationally relevant context, and (c) a posteriori analysis of performance data. The following transforms are applied to a suite of 64 error exemplars: Visual Pattern Image Coding (VPIC [1]), Vector Quantization with a fast codebook search algorithm (VQ [2,3]), JPEG and a preliminary implementation of JPEG 2000 [4,5], and EBLAST [6-8]. Compression ratios range from 2:1 to 200:1, and various noise levels and types are added to the error exemplars to produce a database of 7,680 synthetic test images. Several global and local (e.g., featural) distortion measures are applied to the decompressed test imagery to provide a basis for rate-distortion and rate-performance analysis as a function of noise and compression transform type.
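As a concrete illustration of the kind of global distortion measure applied to decompressed imagery, the sketch below computes mean squared error and PSNR between an original and a decompressed exemplar; it is a generic example under our own assumptions (the array and function names are hypothetical), not the benchmarking suite described in the paper.

```python
import numpy as np

def global_distortion(original, decompressed, peak=255.0):
    """Generic global distortion measures for 8-bit imagery:
    mean squared error (MSE) and peak signal-to-noise ratio (PSNR)."""
    err = original.astype(float) - decompressed.astype(float)
    mse = float(np.mean(err ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr

# Hypothetical usage over a decompressed exemplar:
# mse, psnr = global_distortion(exemplar, codec_roundtrip(exemplar, ratio=50))
```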
An arc or line segment in a Euclidean plane with the natural topology is selected. Its two endpoints are known, and we calculate its midpoint and equation. Two finite sequences of points from the arc are selected which exclude the midpoint and endpoints. We define two circles which intersect at the arc's midpoint and whose diameters are each half the length of the arc. A finite collection of circles tangent to the interior of the arc at points of the selected sequences is created. This collection of circles is contained in the interiors of the two circles taken previously. This is accomplished using the dichotomous properties of the linear lemma and the Jordan curve theorem. Next, a relation is defined to create the coefficients of a binary number using its base-two expansion.
It can also be shown that, by selecting a binary number, a continuum can be created to represent it. Again we take an arc in a plane with an origin and two circles which intersect at its midpoint and each contain one other endpoint of the arc. Circles are selected in the interiors of the two circles based on the coefficients of the base-two expansion of the binary number. These circles are each tangent to the interior of the arc and positioned by the dichotomous natures of the line containing the arc and the coefficients of the base-two expansion of the chosen binary number. The union of these objects in the plane creates a continuum, a subset of the harmonic cellular continuum, called the binary continuum.
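For reference, the base-two expansion that supplies the coefficients used in the construction above has the familiar form shown below (generic notation, not the paper's symbols):

```latex
% Base-two expansion of a number x in [0, 1): the coefficients a_k in {0, 1}
% are what the relation in the construction is defined to produce.
x = \sum_{k=1}^{\infty} a_k \, 2^{-k}, \qquad a_k \in \{0, 1\}
```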
Multimedia data may be transmitted or stored either according to the classical Shannon information theory or according to the newer Autosophy information theory. Autosophy algorithms combine very high "lossless" data and image compression with virtually unbreakable "codebook" encryption. Shannon's theory treats all data items as "quantities", which are converted into binary digits (bits) for transmission in meaningless bit streams; only "lossy" data compression is possible. A new "Autosophy" theory was developed by Klaus Holtz in 1974 to explain the functioning of natural self-assembling structures, such as chemical crystals or living trees. The same processes can also be used for growing self-assembling data structures, which grow like data crystals or data trees in electronic memories. This provides true mathematical learning algorithms, according to the new Autosophy information theory. Information, in essence, is only that which can be perceived and which is not already known by the receiver. The transmission bit rates depend on the data content only. Applications already include the V.42bis compression standard in modems, the GIF and TIFF formats for lossless image compression, and Autosophy Internet television. A new 64-bit data format could make all future communications compatible and solve the Internet's Quality of Service (QoS) problems.
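As a familiar point of reference for the "growing data tree" idea, the sketch below shows LZW-style dictionary growth, the mechanism family behind the V.42bis and GIF applications listed above. It is offered only as an analogy for a code structure that grows from the data itself; it is not the Autosophy algorithm.

```python
def lzw_compress(data: bytes):
    """LZW-style encoder: the dictionary (a prefix tree of byte strings)
    grows adaptively as the input is scanned, so codes reflect what the
    receiver has already 'learned' from earlier data."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = next_code   # the code tree grows here
            next_code += 1
            w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out

# print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```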
Images, 2D or 3D, are usually perceived or analyzed in their respective number of dimensions, either in the spatial domain or in the frequency domain. 3D images such as volumetric data sets are important in many scientific and biomedical fields. Extending a 2D image compression coder to 3D often requires special care. We propose a progressive lossy-to-lossless image coder that can be extended to multiple dimensions with minimal effort. Hilbert traversal enables us to transform a multi-dimensional signal into a 1D signal, so 2D and 3D images can be compressed in the same way; only the traversal routine needs to be modified for images of different dimensions. The locality and slow context change of the Hilbert traversal yield a highly compressible 1D signal. After an integer wavelet transform, the resulting wavelet coefficients are rearranged according to our new linearization algorithm, so that the most important information appears at the front of the data stream. Progressive image encoding/decoding, which is desired by many applications, is possible because of this linearization. The control data and wavelet coefficients are finally entropy coded to produce a compact data stream. Lossy and lossless image information is embedded in the same data stream.
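The Hilbert traversal itself is standard; the sketch below (a common bit-manipulation formulation, not the authors' code) maps a pixel coordinate on an n-by-n grid, with n a power of two, to its position along the Hilbert curve, which is all that is needed to read a 2D image out as a 1D signal.

```python
def _rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so each sub-curve has the correct orientation."""
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Index of pixel (x, y) along the Hilbert curve filling an n-by-n grid."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d

# Read a 2D image out in Hilbert order to obtain the 1D signal:
# order = sorted(range(n * n), key=lambda i: xy2d(n, i % n, i // n))
# signal_1d = [image[i // n][i % n] for i in order]
```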
A bit rate allocation (BRA) strategy is needed to optimally compress three-dimensional (3-D) data on a per-slice basis, treating it as a collection of two-dimensional (2-D) slices/components. This approach is compatible with the framework of JPEG2000 Part 2, which includes the option of pre-processing the slices with a decorrelation transform in the cross-component direction so that slices of transform coefficients are compressed. In this paper, we illustrate the impact of a recently developed inter-slice rate-distortion optimal bit-rate allocation approach that is applicable to this compression system. The approach exploits the MSE optimality of the JPEG2000 bit streams of all slices when each is produced in the quality progressive mode. Each bit stream can be used to produce a rate-distortion curve (RDC) for each slice that is MSE optimal at each bit rate of interest. The inter-slice allocation approach uses the RDCs of all slices to select an overall optimal set of bit rates for the slices via a constrained optimization procedure. The optimization is conceptually similar to the Post-Compression Rate-Distortion optimization used within JPEG2000 to allocate bit rates to codeblocks. Results are presented for two types of data sets: a 3-D computed tomography (CT) medical image, and a 3-D meteorological data set derived from a particular modeling program. For comparison purposes, compression results are also illustrated for the traditional log-variance approach and for a uniform allocation strategy. The approach is illustrated using the two decorrelation transforms (the Karhunen-Loève transform and the discrete wavelet transform) for which the inter-slice allocation scheme has the most impact.
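A minimal sketch of this kind of constrained allocation is given below, assuming each slice's RDC is available as sampled (rate, MSE) pairs; the Lagrangian bisection shown is a generic formulation, not necessarily the authors' exact procedure.

```python
import numpy as np

def allocate_slice_rates(rd_curves, rate_budget, iters=60):
    """Pick one (rate, MSE) operating point per slice so that the total rate
    stays within rate_budget, by bisecting on a Lagrange multiplier.
    rd_curves[i] = (rates_i, mses_i), sampled from slice i's RD curve."""
    lam_lo, lam_hi = 0.0, 1e9
    best = None
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        picks, total = [], 0.0
        for rates, mses in rd_curves:
            cost = np.asarray(mses) + lam * np.asarray(rates)
            k = int(np.argmin(cost))
            picks.append(k)
            total += rates[k]
        if total > rate_budget:
            lam_lo = lam               # over budget: penalize rate more
        else:
            lam_hi, best = lam, picks  # feasible: try spending closer to the budget
    return best                        # index of the chosen point for each slice
```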
Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirements. This paper overviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region growing. The implementation discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.
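For readers unfamiliar with region growing, the sketch below shows a minimal intensity-based variant (one seed, 4-connected neighbours, a fixed tolerance); it is illustrative only and not one of the specific segmentation schemes surveyed in the paper.

```python
from collections import deque
import numpy as np

def grow_region(image, seed, tol=10.0):
    """Grow a region from a seed pixel, absorbing 4-connected neighbours
    whose intensity lies within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = float(image[seed]), 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask
```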
This paper proposes a novel hierarchical coding algorithm based on All Phase IDCT (APIDCT) interpolation. The All Phase Digital Filter (APDF) is a new type of linear-phase filter. For a data vector of length N obtained by blocking a signal, there are N data blocks of different phase that include the same sampling point. By taking the mean of the N values as the filtering output, the APDF removes the ambiguity among the orthogonal-transform filtering values of those data vectors and thereby the blocking effect. Following this idea, the paper derives the formula for the two-dimensional APIDCT. It then compares the performance of several kinds of APDF and demonstrates that the APIDCT filter performs best for image interpolation. By combining the interpolation with multiple subsampling and adaptive arithmetic coding, a simple hierarchical image-coding algorithm is formed. The technique can be used for spatially scalable coding. Simulation results show that a compression ratio and restored image quality better than JPEG can be achieved with only three layers, and no blocking artifacts are observed even at high compression ratios.
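The sketch below illustrates the all-phase idea in 1-D under our own assumptions about the details (boundary padding and the DCT-domain weighting are ours, not the paper's): every sample belongs to N blocks of different phase, each block is DCT-filtered, and the N filtered values obtained for that sample are averaged.

```python
import numpy as np
from scipy.fftpack import dct, idct

def all_phase_dct_filter(signal, weights):
    """For each sample, filter all N phase-shifted blocks that contain it
    (DCT -> per-coefficient weighting -> IDCT) and average the N results."""
    signal = np.asarray(signal, dtype=float)
    weights = np.asarray(weights, dtype=float)
    N = len(weights)
    padded = np.pad(signal, (N - 1, N - 1), mode="edge")
    out = np.empty_like(signal)
    for i in range(len(signal)):
        vals = []
        for p in range(N):                  # p = position of the sample within the block
            start = i + (N - 1) - p         # block start inside the padded signal
            block = padded[start:start + N]
            filtered = idct(dct(block, norm="ortho") * weights, norm="ortho")
            vals.append(filtered[p])
        out[i] = np.mean(vals)
    return out

# Example: low-pass weighting that keeps only the first half of the DCT coefficients.
# y = all_phase_dct_filter(x, weights=np.r_[np.ones(4), np.zeros(4)])
```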
An ongoing problem in remote sensing is that imagery generally consumes considerable amounts of memory and transmission bandwidth, thus limiting the amount of data acquired. The use of high-quality image compression algorithms, such as the wavelet-based JPEG2000, has been proposed to reduce much of the memory and bandwidth overhead; however, these compression algorithms are often lossy, and the remote sensing community has been wary of implementing them for fear of degrading the data. We explore this issue for the JPEG2000 compression algorithm applied to Landsat-7 Enhanced Thematic Mapper Plus (ETM+) imagery. The work examines the effect that lossy compression can have on the retrieval of the normalized difference vegetation index (NDVI). We compute the NDVI from JPEG2000-compressed red and NIR Landsat-7 ETM+ images and compare it, pixel by pixel, with the values obtained from the uncompressed imagery. In addition, we examine the effects of compression on the NDVI product itself. We show that both the spatial distribution of NDVI and the overall NDVI pixel statistics in the image change very little after the images have been compressed and then reconstructed over a wide range of bit rates.
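The per-pixel comparison described above reduces to the standard NDVI formula; a minimal sketch (the array names are hypothetical) is:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index, computed per pixel:
    NDVI = (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical comparison of compressed vs. uncompressed retrievals:
# delta = ndvi(nir_decoded, red_decoded) - ndvi(nir_original, red_original)
# print(np.mean(np.abs(delta)), np.max(np.abs(delta)))
```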
This paper first discusses the need for data compression within sensor networks and argues that data compression is a fundamental tool for achieving trade-offs among three important sensor network parameters: energy efficiency, accuracy, and latency. It then discusses how to use Fisher information to design data compression algorithms that address the trade-offs inherent in accomplishing multiple estimation tasks within sensor networks. Results for specific examples demonstrate that such trade-offs can be made using optimization frameworks for the data compression algorithms.
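As a simple, self-contained illustration of Fisher information as a design criterion (not the paper's framework), the sketch below computes the information about the mean of a Gaussian measurement that survives scalar quantization, which is the kind of quantity a compression designer would trade against rate.

```python
import numpy as np
from scipy.stats import norm

def quantizer_fisher_info(thresholds, theta, sigma):
    """Fisher information about the mean theta of a N(theta, sigma^2)
    measurement after scalar quantization with the given thresholds.
    The unquantized information is 1 / sigma^2."""
    edges = np.concatenate(([-np.inf], np.sort(thresholds), [np.inf]))
    z = (edges - theta) / sigma
    p = np.diff(norm.cdf(z))              # cell probabilities
    dp = -np.diff(norm.pdf(z)) / sigma    # derivative of each cell probability w.r.t. theta
    return float(np.sum(dp ** 2 / np.maximum(p, 1e-300)))

# A 1-bit quantizer with its threshold at the true mean retains 2/pi of the information:
# print(quantizer_fisher_info([0.0], theta=0.0, sigma=1.0))  # ~0.6366
```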
Recent progress in shape theory, including the development of object/image equations for shape matching and shape space metrics (especially object/image metrics), is now being exploited to develop new algorithms for target recognition. This theory uses advanced mathematical techniques from algebraic and differential geometry to construct generalized shape spaces for various projection and sensor models, and then uses that construction to find natural metrics that express the distance (difference) between two configurations of object features, two configurations of image features, or an object and an image pair. Such metrics produce the most robust tests for target identification, at least as far as target geometry is concerned. Moreover, they also provide the basis for efficient hashing schemes to perform target identification quickly, and a rigorous foundation for error analysis in ATR.
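For orientation, one familiar (and much simpler) example of a metric between two configurations of feature points is the Procrustes disparity, which removes translation, scale and rotation before measuring the residual difference; it is shown below only as a point of reference, not as the object/image metric developed in this work.

```python
import numpy as np
from scipy.spatial import procrustes

# Two configurations of 2-D feature points (hypothetical example data).
config_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
config_b = np.array([[0.1, 0.0], [2.0, 0.1], [2.1, 2.0], [0.0, 1.9]])

# Disparity after optimal translation, scaling and rotation; 0 would mean
# the two configurations have identical shape.
_, _, disparity = procrustes(config_a, config_b)
print(disparity)
```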
Current investigations in the area of 3D video systems are based on depth map sequences, which are used to maintain compatibility with conventional 2D video systems. Instead of transmitting or storing one video sequence for each eye, a 2D video and the corresponding depth information are delivered. If the viewer owns a standard TV set, only the 2D video is displayed; with a stereoscopic display or shutter glasses, a second, virtual view is synthesized from the depth information. Because of limited storage and bandwidth, data compression has to be used. In this paper several approaches for compressing depth sequences, applicable to 3D video systems, are investigated and evaluated. For temporal prediction, established video compression standards perform block-based motion compensation and encode the prediction residuals of each block separately. Alternatively, motion compensation methods using control grid interpolation and coding algorithms based on the wavelet transform are applicable. Another possible approach is a mesh-based interpolation of the depth scene content, in which case the temporal prediction can be obtained by node tracking. We demonstrate the coding performance of these alternative methods for compressing depth map sequences in comparison with the video compression standards MPEG-2, MPEG-4 and H.264.
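To make the 2D-plus-depth idea concrete, the sketch below forward-warps a 2D frame by the disparity implied by its depth map to synthesize the second view; the 8-bit inverse-depth convention, the parameter names and the absence of hole filling are our simplifying assumptions, not part of the paper.

```python
import numpy as np

def synthesize_second_view(color, depth8, focal_px, baseline_m, z_near, z_far):
    """Shift each pixel horizontally by its disparity (focal * baseline / Z).
    depth8 is assumed to store inverse depth linearly in 8 bits between
    z_near and z_far; occlusion handling and hole filling are omitted."""
    h, w = depth8.shape
    inv_z = depth8.astype(float) / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    disparity = np.rint(focal_px * baseline_m * inv_z).astype(int)
    virtual = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            xs = x - disparity[y, x]
            if 0 <= xs < w:
                virtual[y, xs] = color[y, x]   # later pixels overwrite (no z-buffering)
    return virtual
```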
Steganography is a technique for embedding information in innocuous data such that only the innocuous data is visible. The wavelet transform lends itself to image steganography because it generates a large number of coefficients representing the information in the image. Altering a small set of these coefficients allows information (the payload) to be embedded into an image (the cover) without noticeably altering the original image. We propose a novel dual-wavelet steganographic technique, using transforms selected such that the transform of the cover image has low sparsity while the payload transform has high sparsity. Maximizing the sparsity of the payload transform reduces the amount of information that must be embedded in the cover, and minimizing the sparsity of the cover increases the number of locations that can be altered without significantly changing the image. Making this system effective on any given image pair requires a metric to indicate the best (maximum-sparsity) and worst (minimum-sparsity) wavelet transforms to use. This paper develops the first stage of this metric, which can predict, averaged across many wavelet families, which of two images will have the higher sparsity. A prototype implementation of the dual-wavelet system is also developed as a proof of concept.
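A crude sparsity score of the kind being compared here might look like the sketch below (the threshold choice and counting rule are our assumptions; the metric developed in the paper is more refined):

```python
import numpy as np
import pywt

def wavelet_sparsity(image, wavelet="haar", level=3, rel_thresh=1e-3):
    """Fraction of 2-D wavelet coefficients whose magnitude is below
    rel_thresh times the largest coefficient: a crude sparsity score."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    flat = np.concatenate([coeffs[0].ravel()] +
                          [band.ravel() for detail in coeffs[1:] for band in detail])
    return float(np.mean(np.abs(flat) < rel_thresh * np.abs(flat).max()))

# A good payload transform gives a high score (few significant coefficients);
# a good cover transform gives a low score (many places to hide them).
# print(wavelet_sparsity(payload_img), wavelet_sparsity(cover_img))
```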
We present the further development of a watermarking technique that embeds an authentication signal in an image. In this paper, we concentrate on the JPEG 2000 image format. The detection/extraction of this signal can then be used to decide whether the image has undergone any intentional malicious tampering; therefore, the watermark needs to be fragile to such tampering attacks. On the other hand, we need to make sure that the authentication is robust to changes resulting from the watermarking process itself, or from necessary operations such as image compression.
We address robustness against the watermarking process itself in two ways. First, we decompose the image into phase and magnitude values. A signature is then generated from the phase values; in particular, binary phase-only filters and their variants are utilized for this. This signature is subsequently hidden in the magnitude part by a bit-plane embedding technique. The disjoint operations of signature generation and signature embedding minimize the embedding artifacts of the authentication signal. Second, we use wavelet decomposition, whereby the signature can be generated from one subband and then embedded in other subbands or in the same subband.
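A minimal sketch of the signature-generation half, under our own simplifying assumptions (a full-frame FFT rather than any particular subband, and a plain sign-of-phase binarization in the spirit of a binary phase-only filter), is:

```python
import numpy as np

def phase_signature(image, step=8):
    """Binary signature derived from the Fourier phase: the sign of the phase,
    sampled on a coarse grid. Malicious edits change the phase structure and
    hence the signature, while the magnitude is left free to carry the
    embedded bits."""
    spectrum = np.fft.fftshift(np.fft.fft2(np.asarray(image, dtype=float)))
    phase = np.angle(spectrum)
    return (phase[::step, ::step] > 0).astype(np.uint8)
```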
Steganographic and watermarking information inserted into a color image file, regardless of the embedding algorithm, disturbs the relationships between neighboring pixels. A method for steganalysis is presented that uses the local binary pattern (LBP) texture operator to examine pixel texture patterns within neighborhoods across the color planes. Providing the outputs of this simple algorithm to an artificial neural network capable of supervised learning results in a surprisingly reliable predictor of steganographic content, even with relatively small amounts of embedded data. Other tools for identifying images with steganographic content have been developed by forming a neural network input vector composed of image statistics that respond to particular side effects of specific embedding algorithms. The neural network in our experiment is trained with general texture-related statistics from clean images and from images modified with only one embedding algorithm, yet it correctly discriminates clean images from images altered by any of several different watermarking and steganographic algorithms. The algorithms tested include various steganographic and watermarking programs, covering both spatial- and transform-domain hiding techniques. The interesting result is that clean color images can be reliably distinguished from steganographically altered images on the basis of texture alone, regardless of the embedding algorithm.
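A plausible feature extractor of this kind, shown only as an illustration of the approach (the exact statistics used in the experiments are not specified here), is a per-plane LBP histogram fed to a classifier:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature_vector(rgb_image, points=8, radius=1):
    """Uniform-LBP histogram per color plane, concatenated into one vector
    suitable as input to a supervised classifier (e.g., a small neural net)."""
    features = []
    for channel in range(3):
        lbp = local_binary_pattern(rgb_image[:, :, channel], points, radius, "uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        features.append(hist)
    return np.concatenate(features)
```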
Recently, several watermarking schemes have been proposed that embed a watermark into two halftone images such that the watermark can be extracted by overlaying the halftone images. The watermark images in these schemes are binary, and the pixels in the two halftone images are correlated or not depending on whether the corresponding pixel in the watermark is on or off. Consequently, the watermark is binary and does not contain detailed features. Furthermore, the extracted watermark contains residual patterns from the two images, which reduce the fidelity of the extracted watermark image. This paper proposes a watermarking algorithm that addresses these problems. In addition, the proposed scheme admits more general watermark extraction functions and allows the embedding of multiple watermark images.
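For context, a toy version of the overlay mechanism that these earlier schemes rely on is sketched below; random dithering stands in for error diffusion, and nothing here reflects the improved algorithm proposed in the paper.

```python
import numpy as np

def halftone_pair(gray, watermark, seed=0):
    """Where the binary watermark is 1, the two halftones use the same
    threshold noise (correlated pixels); where it is 0, independent noise.
    Overlaying the transparencies (logical AND of white) then reveals the
    watermark as a difference in average brightness."""
    rng = np.random.default_rng(seed)
    g = np.asarray(gray, dtype=float) / 255.0
    n1, n2 = rng.random(g.shape), rng.random(g.shape)
    h1 = g > n1
    h2 = np.where(watermark == 1, g > n1, g > n2)
    overlay = h1 & h2   # correlated regions stay brighter than uncorrelated ones
    return h1.astype(np.uint8), h2.astype(np.uint8), overlay.astype(np.uint8)
```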
Past research in the field of cryptography has not given much consideration to arithmetic coding as a feasible encryption technique, and studies have shown compression-oriented arithmetic coding to be largely unsuitable for encryption. Nevertheless, adaptive modelling, which builds a large model that is variable in structure and, as far as possible, a function of the entire text transmitted since the model was initialised, is a suitable candidate for combining encryption with compression. The focus of the work presented in this paper has been to incorporate recent results of chaos theory, proven to be cryptographically secure, into arithmetic coding, devising a convenient method to make the structure of the model unpredictable and variable in nature while retaining, as far as possible, its statistical harmony so that compression remains possible. A chaos-based adaptive arithmetic coding-encryption technique has been designed, developed and tested, and its implementation is discussed. For typical text files, the proposed encoder gives compression between 67.5% and 70.5%, with zero-order compression degraded by about 6% due to encryption, and is not susceptible to previously published attacks on arithmetic coding algorithms.
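As one illustration of how chaos can be injected into an adaptive model (our own example, not the coder described in the paper), a key-seeded logistic map can drive key-dependent choices such as the ordering of the symbol table:

```python
import numpy as np

def logistic_keystream(key, length, r=3.99):
    """Chaotic sequence x_{n+1} = r * x_n * (1 - x_n), seeded from a secret
    integer key; nearby keys diverge quickly, so the stream is hard to
    predict without the key."""
    x = (key % 10_000_019) / 10_000_019.0
    if x == 0.0:
        x = 0.123456789
    stream = np.empty(length)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = x
    return stream

# Example: a key-dependent permutation of a 256-symbol adaptive model's table.
# symbol_order = np.argsort(logistic_keystream(secret_key, 256))
```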