This PDF file contains the front matter for SPIE Proceedings Volume 7084, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Hyperspectral and Ultraspectral Data Compression I
The increasing number of airborne and satellite platforms that incorporate hyperspectral imaging spectrometers has created a pressing need for efficient storage, transmission, and data compression methodologies. In particular, hyperspectral data compression is expected to play a crucial role in many remote sensing applications. Many efforts have been devoted to designing and developing lossless and lossy algorithms for hyperspectral imagery. However, most available lossy compression approaches have largely overlooked the impact of mixed pixels and subpixel targets, which can be accurately modeled and uncovered by resorting to the wealth of spectral information provided by hyperspectral image data. In this paper, we
develop a simple lossy compression technique which relies on the concept of spectral unmixing, one of the most popular approaches to deal with mixed pixels and subpixel targets in hyperspectral analysis. The proposed method uses a two-stage approach in which the purest spectral signatures (also called endmembers) are first extracted from the input data, and then used to express mixed pixels as linear combinations of endmembers. Analytical and experimental results are presented in the context of a real application, using hyperspectral data collected by NASA's Jet Propulsion Laboratory over the World Trade Center area in New York City, right after the terrorist attacks of September 11th. These data are used in this work
to evaluate the impact of compression using different methods on spectral signature quality for accurate detection of hot spot fires. Two parallel implementations are developed for the proposed lossy compression algorithm: a multiprocessor implementation tested on Thunderhead, a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center, and a hardware implementation developed on a Xilinx Virtex-II FPGA device. Combined, these parts offer a thoughtful
perspective on the potential and emerging challenges of incorporating parallel data compression techniques into realistic hyperspectral imaging problems.
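
To make the unmixing-based representation concrete, the following minimal sketch (not the authors' implementation) expresses each pixel as a linear combination of previously extracted endmembers by unconstrained least squares and reconstructs the cube from the resulting abundance maps; the function and variable names are illustrative assumptions.

```python
import numpy as np

def unmix_and_reconstruct(cube, endmembers):
    """Lossy-compress a hyperspectral cube by linear spectral unmixing.

    cube:       (rows, cols, bands) array of radiance/reflectance values.
    endmembers: (n_endmembers, bands) matrix of pure spectral signatures.

    Returns the per-pixel abundance maps (the compressed representation)
    and the cube reconstructed from them.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).T            # (bands, n_pixels)
    E = endmembers.T                              # (bands, n_endmembers)

    # Unconstrained least-squares abundances; a real system would also
    # enforce non-negativity and sum-to-one constraints.
    abundances, *_ = np.linalg.lstsq(E, pixels, rcond=None)

    reconstructed = (E @ abundances).T.reshape(rows, cols, bands)
    return abundances.T.reshape(rows, cols, -1), reconstructed
```

Storing one abundance value per endmember instead of one value per spectral band is the source of the compression gain; a production coder would additionally quantize and entropy-code the abundances.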
We propose a novel method for lossless compression of ultraspectral sounder data. The method utilizes spectral linear prediction and the optimal ordering of the granules. The prediction coefficients applied to a granule are optimized on a different granule. The optimal ordering problem is solved using Edmonds' algorithm for optimum branching. The results show that the proposed method outperforms previous methods on publicly available NASA AIRS data.
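
As a rough illustration of the spectral linear prediction step (assumed variable names and a simple least-squares fit; the granule ordering via Edmonds' algorithm is not shown), coefficients can be fitted on one granule and then applied to another, leaving only the residual to be entropy coded:

```python
import numpy as np

def fit_spectral_predictor(granule, order=2):
    """Fit linear prediction coefficients across the spectral dimension.

    granule: (channels, footprints) array of sounder radiances.
    Each channel is predicted from the previous `order` channels.
    """
    X, y = [], []
    for c in range(order, granule.shape[0]):
        X.append(granule[c - order:c].T)      # predictor channels
        y.append(granule[c])                  # channel to predict
    coeffs, *_ = np.linalg.lstsq(np.vstack(X), np.concatenate(y), rcond=None)
    return coeffs

def prediction_residual(granule, coeffs, order=2):
    """Apply coefficients (possibly fitted on a *different* granule) and
    return the residual that would be entropy coded."""
    pred = np.zeros_like(granule, dtype=np.float64)
    for c in range(order, granule.shape[0]):
        pred[c] = coeffs @ granule[c - order:c]
    return granule - pred
```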
The size of images used in remote sensing scenarios has constantly increased in recent years. Remote sensing images are not only stored, but also processed and transmitted, raising the need for more resources and bandwidth. At the same time, hyperspectral remote sensing images have a large number of components with significant inter-component redundancy, which many image coding systems exploit to improve coding performance. The main approaches used to decorrelate the spectral dimension are the Karhunen-Loève Transform (KLT) and the Discrete Wavelet Transform (DWT).
This paper focuses on DWT decorrelators because they have lower computational complexity and because they provide interesting features such as component and resolution scalability and progressive transmission. The influence of the spectral transform is investigated, considering the DWT kernel applied and the number of decomposition levels.
In addition, a JPIP-compliant application, CADI, is introduced. It may be useful for testing new protocols, techniques, or coding systems without requiring significant changes to the application. CADI can run on most computer platforms and devices thanks to the use of Java and the availability of a light version suitable for devices with constrained resources.
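
A minimal numpy-only sketch of spectral decorrelation with a multilevel DWT is given below; it uses the Haar kernel purely for brevity, whereas the paper evaluates different DWT kernels and numbers of decomposition levels.

```python
import numpy as np

def haar_dwt_spectral(cube, levels=3):
    """One-dimensional multilevel Haar DWT along the spectral axis.

    cube: (bands, rows, cols) array; `bands` is assumed divisible by 2**levels.
    Returns the list of detail subbands plus the final approximation,
    i.e. the spectrally decorrelated representation a coder would quantize.
    """
    approx = cube.astype(np.float64)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # high-pass subband
        approx = (even + odd) / np.sqrt(2.0)          # low-pass subband
    return details, approx
```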
Hyperspectral and Ultraspectral Data Compression II
Given its bulky volume, ultraspectral sounder data may still suffer a few bit errors after channel coding. It is therefore beneficial to incorporate some mechanism for error containment in the source coding. The Tunstall code is a variable-to-fixed length code, which can reduce the error propagation encountered in fixed-to-variable length codes such as Huffman and arithmetic codes. The original Tunstall code uses an exhaustive parse tree in which every internal node branches on every symbol, which can result in precious codewords being assigned to less probable parse strings. Based on an infinitely extended parse tree, a modified Tunstall code is proposed that grows an optimal non-exhaustive parse tree by assigning complete codewords only to the top-probability nodes of the infinite tree. A comparison is made among the original exhaustive Tunstall code, our modified non-exhaustive Tunstall code, the CCSDS Rice code, and JPEG2000 in terms of compression ratio and percent error rate on ultraspectral sounder data.
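
The exhaustive construction that the modified code improves upon can be sketched as follows (illustrative Python, assuming a memoryless source with known symbol probabilities): the most probable parse string is repeatedly extended by every symbol until the codeword budget is exhausted.

```python
import heapq

def tunstall_dictionary(probs, codeword_bits):
    """Build the exhaustive Tunstall parse dictionary for a memoryless source.

    probs:         dict mapping each source symbol (a string) to its probability.
    codeword_bits: length of the fixed output codewords.

    Returns the list of parse strings; each gets one fixed-length codeword.
    The exhaustive rule always expands the most probable leaf with *every*
    symbol, which is what the modified (non-exhaustive) code avoids.
    """
    max_words = 2 ** codeword_bits
    symbols = list(probs)
    heap = [(-p, s) for s, p in probs.items()]   # max-heap via negated probs
    heapq.heapify(heap)
    n_leaves = len(heap)

    while n_leaves + len(symbols) - 1 <= max_words:
        neg_p, word = heapq.heappop(heap)        # most probable leaf
        for s in symbols:                        # exhaustive branching
            heapq.heappush(heap, (neg_p * probs[s], word + s))
        n_leaves += len(symbols) - 1

    return [word for _, word in heap]
```

For example, tunstall_dictionary({'a': 0.7, 'b': 0.2, 'c': 0.1}, 3) yields seven parse strings for 3-bit codewords; a non-exhaustive variant would extend only high-probability nodes, so fewer codewords are spent on improbable strings.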
To deal with the huge volume of data produced by hyperspectral sensors, the Canadian Space Agency (CSA) has developed two simple and fast algorithms for compressing hyperspectral data, namely Successive Approximation Multistage Vector Quantization (SAMVQ) and Hierarchical Self-Organizing Cluster Vector Quantization (HSOCVQ). The CSA intends to use these algorithms, which are capable of providing high compression rates, on board a proposed Canadian hyperspectral satellite. It has been shown that both SAMVQ and HSOCVQ are near-lossless compression algorithms, as their designs restrict compression errors to levels consistent with the intrinsic noise in the original hyperspectral data. Although both are more bit-error resistant than traditional compression algorithms, when the bit-error rate (BER) exceeds 10⁻⁶ the compression fidelity starts to drop noticeably. This paper explores the benefits of employing forward error correction on top of data compression, by SAMVQ or HSOCVQ, to deal with higher BERs. In particular, it is shown that by proper use of convolutional codes, the resilience of compressed hyperspectral data against bit errors can be improved by close to two orders of magnitude.
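
As an illustration of the forward-error-correction layer (not the specific codes evaluated in the paper), a rate-1/2 convolutional encoder with the common constraint-length-7 generators can be applied to the compressed bitstream; the tap convention below is an illustrative assumption.

```python
import numpy as np

def conv_encode(bits, g1=0o171, g2=0o133, k=7):
    """Rate-1/2 convolutional encoder (illustrative K=7 code).

    bits: iterable of 0/1 input bits (e.g. the compressed bitstream).
    Returns a numpy array with two coded bits per input bit.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # shift in newest bit
        out.append(bin(state & g1).count("1") & 1)    # parity of taps g1
        out.append(bin(state & g2).count("1") & 1)    # parity of taps g2
    return np.array(out, dtype=np.uint8)
```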
As part of NASA's New Millennium Program, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) is an advanced ultraspectral sounder with a 128x128 array of interferograms for the retrieval of such geophysical parameters as atmospheric temperature, moisture, and wind. With the massive data volumes that would be generated by future advanced satellite sensors such as GIFTS, chances are that even state-of-the-art channel coding (e.g., Turbo codes, LDPC) with low BER might not correct all errors. Due to the error-sensitive, ill-posed nature of the retrieval problem, lossless compression with error resilience is desired for ultraspectral sounder data downlink and rebroadcast. Previously, we proposed fast precomputed vector quantization (FPVQ) with arithmetic coding (AC), which can produce high compression gain for ground operation. In this paper we couple FPVQ with reversible variable-length coding (RVLC) to provide better resilience against satellite transmission errors remaining after channel decoding. The FPVQ-RVLC method is compared with the previous FPVQ-AC method for lossless compression of GIFTS data. The experiments show that the FPVQ-RVLC method is a significantly better tool for rebroadcast of massive ultraspectral sounder data.
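
The property that gives RVLCs their error resilience is that the codeword set is both prefix-free and suffix-free, so a corrupted stream can also be decoded backwards from the next synchronization point. A small check of this property is sketched below; the example codebook is illustrative, not the one used with FPVQ.

```python
def is_reversible_vlc(codewords):
    """Check that a set of binary codewords is both prefix-free and
    suffix-free, the defining property of a reversible VLC (RVLC).

    codewords: list of strings over {'0', '1'}.
    """
    def free(words):
        return not any(a != b and b.startswith(a) for a in words for b in words)

    return free(codewords) and free([w[::-1] for w in codewords])

# A small symmetric codebook: each word reads the same forwards and
# backwards, so prefix-freedom implies suffix-freedom.
print(is_reversible_vlc(["00", "11", "010", "101", "0110"]))   # True
```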
In this paper, an unsupervised change detection method for satellite images is proposed. The algorithm exploits the inherent multiscale data structure of the dual-tree complex wavelet transform (DT-CWT) to decompose each input image into six directional subbands at each scale; this representation facilitates better change detection. The difference between the DT-CWT coefficients of two satellite images taken at two different time instants is analyzed automatically by unsupervised selection of the decision threshold that minimizes the total error probability of change detection, under the assumption that the pixels in the difference image are independent of one another. The change maps produced in different subbands are merged using both inter- and intra-scale information. Furthermore, the proposed technique requires knowledge of the statistical distributions of the changed and unchanged subband coefficients of the two images. To perform an unsupervised estimation of the statistical terms that characterize these distributions, an iterative method based on the expectation-maximization (EM) algorithm is proposed. For the performance evaluation, the proposed algorithm is applied to both noise-free and noisy images, and the results show that the proposed method not only provides accurate detection of small changes but also demonstrates attractive robustness against noise interference.
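
A compact sketch of the unsupervised threshold selection is shown below: a two-component mixture (unchanged/changed) is fitted to the difference coefficients with EM, and the minimum-error decision point is located between the two means. Gaussian class-conditional densities and all names are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

def em_two_components(diff, iters=50):
    """Fit a two-component Gaussian mixture (unchanged / changed) to the
    difference coefficients with the EM algorithm."""
    x = diff.ravel().astype(np.float64)
    mu = np.array([x[x <= np.median(x)].mean(), x[x > np.median(x)].mean()])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])

    for _ in range(iters):
        # E-step: responsibilities of each component for each sample.
        pdf = (w / np.sqrt(2 * np.pi * var)
               * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances.
        nk = resp.sum(axis=0)
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

def decision_threshold(w, mu, var):
    """Threshold where the two weighted class densities are equal (the
    minimum-error decision point), found by a simple grid scan."""
    ts = np.linspace(mu.min(), mu.max(), 1000)
    dens = [w[k] / np.sqrt(2 * np.pi * var[k])
            * np.exp(-(ts - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
    return ts[np.argmin(np.abs(dens[0] - dens[1]))]
```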
In this paper, we propose a new unsupervised segmentation method for hyperspectral images using edge fusion. We first remove noisy spectral band images by examining the correlations between the spectral bands. Then, the Canny algorithm is applied to the retained images. This procedure produces a number of edge images. To combine these edge images, we compute an average edge image and then apply a thresholding operation to obtain a binary edge image. By applying dilation and region filling procedures to the binary edge image, we finally obtain a segmented image. Experimental results show that the proposed algorithm produced satisfactory segmentation results without requiring user input.
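
A hedged end-to-end sketch of the pipeline (band screening by inter-band correlation, per-band Canny, edge averaging, thresholding, dilation, and region filling) might look as follows; the thresholds, the Canny sigma, and the use of scikit-image/SciPy are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import canny

def edge_fusion_segmentation(cube, corr_threshold=0.9, edge_threshold=0.3):
    """Unsupervised segmentation of a hyperspectral cube by edge fusion.

    cube: (bands, rows, cols) array.
    1. Drop noisy bands whose correlation with the next band is low.
    2. Run Canny on each retained band.
    3. Average the edge maps, threshold, then dilate and fill regions.
    """
    flat = cube.reshape(cube.shape[0], -1)
    corr = [np.corrcoef(flat[i], flat[i + 1])[0, 1]
            for i in range(cube.shape[0] - 1)] + [1.0]
    kept = cube[np.array(corr) >= corr_threshold]

    edges = np.stack([canny(band, sigma=1.0) for band in kept])

    mean_edges = edges.mean(axis=0)            # average edge image
    binary_edges = mean_edges > edge_threshold # binary edge image
    dilated = ndimage.binary_dilation(binary_edges, iterations=1)
    return ndimage.binary_fill_holes(dilated)  # segmented image
```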
Due to limited spatial resolution, mixed pixels containing energy reflected from more than one type of ground object will be present, which often results in inefficiency in the quantitative analysis of remote sensing images. To address this problem, a fully constrained linear unmixing algorithm based on the Hopfield Neural Network (HNN) is proposed in this paper. The non-negativity constraint, which has no closed-form analytical solution, is enforced by the activation function of the neurons instead of a traditional numerical method. The sum-to-one constraint is embedded in the HNN by adopting the least-squares Linear Mixture Model (LMM) as the energy function. A Noise Energy Percentage (NEP) stopping criterion is also proposed for the HNN to improve its robustness to various noise levels. The proposed algorithm has been compared with the widely used Fully Constrained Least Squares (FCLS) algorithm and the Gradient Descent Maximum Entropy (GDME) algorithm on two sets of benchmark simulated data. The experimental results demonstrate that this novel approach decomposes mixed pixels more accurately regardless of how much the endmembers overlap. The HNN-based unmixing algorithm also shows satisfactory performance in the real-data experiments.
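
For readers unfamiliar with fully constrained unmixing, the sketch below shows what the two constraints mean on the same least-squares LMM objective, using a projected-gradient solver as a stand-in; it is not the HNN formulation of the paper, and all names and step sizes are assumptions.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex
    (non-negative entries summing to one)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def fully_constrained_unmix(pixel, endmembers, steps=500):
    """Fully constrained abundance estimate for one pixel by projected
    gradient descent on ||pixel - E a||^2 (a substitute for the HNN solver).

    pixel:      (bands,) spectrum.
    endmembers: (n_endmembers, bands) signature matrix.
    """
    E = endmembers.T                                   # (bands, p)
    lr = 1.0 / np.linalg.norm(E, 2) ** 2               # safe step size
    a = np.full(E.shape[1], 1.0 / E.shape[1])          # start at centre
    for _ in range(steps):
        grad = E.T @ (E @ a - pixel)
        a = project_to_simplex(a - lr * grad)          # enforce constraints
    return a
```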
Multispectral imaging is becoming an increasingly important tool for monitoring the Earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements of a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth science imager sensor data, what lossless compression ratios can be obtained, as well as the types of mathematics and approaches that can come close to the data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. That algorithm captures spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, it uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor, which significantly improves the compression results. The new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We also show results of the algorithm on NOAA AVHRR data and on data from SEVIRI. The algorithm is designed to be adaptable to a wide range of multispectral imagers and should facilitate distribution of data globally. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, and Walter Wolf.
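
The idea of a piecewise spatially varying predictor can be sketched as follows (the block size and the simple per-block gain/offset model are assumptions for illustration, not the exact predictor of the paper): one spectral band is predicted from another with a separate linear fit in each spatial block, and only the residual is passed on to the entropy coder.

```python
import numpy as np

def blockwise_band_prediction(reference, target, block=32):
    """Predict `target` from `reference` with one linear (gain, offset)
    predictor per spatial block, returning the prediction residual that
    would then be entropy coded.

    reference, target: 2-D arrays of the same shape (two spectral bands).
    """
    residual = np.empty_like(target, dtype=np.float64)
    rows, cols = target.shape
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            ref = reference[r:r + block, c:c + block].astype(np.float64)
            tgt = target[r:r + block, c:c + block].astype(np.float64)
            # Least-squares gain/offset for this block only.
            A = np.column_stack([ref.ravel(), np.ones(ref.size)])
            (gain, offset), *_ = np.linalg.lstsq(A, tgt.ravel(), rcond=None)
            residual[r:r + block, c:c + block] = tgt - (gain * ref + offset)
    return residual
```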
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves, and these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion for lossy compression, especially at high bit rates.
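
A minimal sketch of the "approximate each interferential curve and entropy-code the error" idea is shown below; the polynomial model is purely illustrative, since the paper constructs different approximating functions for the major- and minor-interference regions.

```python
import numpy as np

def fit_curve_and_residual(curve, degree=8):
    """Approximate one interferential curve with a low-order polynomial and
    return the fitted coefficients plus the approximation error that would
    be entropy coded.
    """
    x = np.linspace(-1.0, 1.0, curve.size)      # normalised abscissa
    coeffs = np.polyfit(x, curve, degree)
    approx = np.polyval(coeffs, x)
    return coeffs, curve - approx
```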
Future high-resolution instruments planned by CNES for space remote sensing missions will lead to higher bit rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement multiplies the data rate by 8 from SPOT4 to SPOT5 and by 28 to PLEIADES-HR. Lossy data compression with low-complexity algorithms is therefore needed, since the required compression ratios are ever higher. New image compression algorithms have been adopted to increase compression performance while complying with the image quality requirements of the community of users and experts. Thus, the DPCM algorithm used on board SPOT4 was replaced by a DCT-based compressor on board SPOT5. Recent compression algorithms, such as the PLEIADES-HR one, use a wavelet transform and a bit-plane encoder. But future compressors will have to be more powerful to reach higher compression ratios. New transforms are being studied by CNES to exceed the DWT, but a performance gap could also be obtained with selective compression. This article gives an overview of CNES past and present studies of on-board compression algorithms for high-resolution images.
A novel compression method for interferential multispectral images based on distributed source coding is proposed. The Slepian-Wolf and Wyner-Ziv theorems on source coding with side information are taken as the basic coding principles. In our system, a rate control solution is proposed to avoid the feedback channel used in many practical distributed source coding solutions. The residual statistics between corresponding coefficients in the original frame and the side-information frame are assumed to follow a Laplacian distribution, and the distribution parameter is estimated online at the encoder at the subband level. The experimental results show that our method significantly outperforms JPEG2000; especially at medium and low bit rates, the method obtains gains of about 5 dB over JPEG2000, and the subjective quality is clearly enhanced.
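
The online parameter estimation can be as simple as the maximum-likelihood estimate of the Laplacian scale per subband, sketched below with assumed names; the estimated scale then drives the rate allocated to each bitplane.

```python
import numpy as np

def laplacian_scale_per_subband(residual_subbands):
    """Maximum-likelihood estimate of the Laplacian scale parameter b for
    each subband of the residual between the original frame and the side
    information (a larger b means a noisier correlation channel, so more
    parity bits are needed).

    residual_subbands: dict mapping subband name -> residual coefficients.
    """
    return {name: float(np.mean(np.abs(res)))
            for name, res in residual_subbands.items()}
```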
Geographical information systems have now entered the era of Web services. In this paper, Web3D applications have been developed on our GridJet platform, which provides a more effective solution for sharing massive 3D geo-datasets in distributed environments. Web3D services enable web users to access 3D scenes, virtual geographical environments, and so on. However, Web3D services must be shared by thousands of users who are inherently distributed across different geographic locations. Large 3D geo-datasets need to be transferred to distributed clients via conventional HTTP, NFS, and FTP protocols, which often leads to long waits and frustration in distributed wide-area network environments. GridJet is used as the underlying engine between the Web3D application node and the geo-data server; it employs a wide range of technologies, including parallelized remote file access, is a WAN/Grid-optimized protocol, and provides "local-like" access to remote 3D geo-datasets. No change in the way the software is used is required, since the multi-streamed GridJet protocol remains fully compatible with existing IP infrastructures. Our recent progress includes a real-world test in which Web3D applications such as Google Earth running over the GridJet protocol beat those over the classic protocols by a factor of 2-7 when the transfer distance is over 10,000 km.
This study deals with the effects of lossy image compression on the visual analysis of remotely sensed images. The experiments consider two factors with interaction: the type of landscape and the degree of lossy compression. Three landscapes and two areas for each landscape (with different homogeneity) have been selected. For each of the six study areas, color 1:5000 orthoimages have been submitted to a JPEG2000 lossy compression algorithm at five different compression ratios. The image of every area and compression ratio has been submitted to on-screen photographic interpretation, generating 30 polygon layers. Maps obtained using compressed images with a high compression ratio present large structural differences with respect to maps obtained from the original images. On the other hand, the 20% compression yields values only slightly different from those of the original photographic interpretation, and these differences seem to be due to the subjectivity of the photographic interpretation. Therefore, this compression ratio seems to be the optimum, since it implies an important reduction of the image size without causing changes either in the topological variables of the generated vector layers or in the obtained thematic quality.
Multispectral, hyperspectral, and ultraspectral imagers and sounders are increasingly important for atmospheric science and weather forecasting. The recent advent of multispectral and hyperspectral sensors measuring radiances in the emissive IR is providing valuable new information. This is due to the presence of spectral channels (in some cases micro-channels) which are carefully positioned in and out of absorption lines of CO2, ozone, and water vapor. These spectral bands are used for measuring surface/cloud temperature, atmospheric temperature, cirrus cloud water vapor, cloud properties/ozone, cloud top altitude, etc.
The complexity of the spectral structure within which the emissive bands have been selected presents challenges for lossless data compression; these are qualitatively different from the challenges posed by the reflective bands. For a hyperspectral sounder such as AIRS, the large number of channels is the principal contributor to data size. We have shown that methods combining clustering and linear models in the spectral channels can be effective for lossless data compression. However, when the number of emissive channels is relatively small compared to the spatial resolution, as with the 17 emissive channels of MODIS, such techniques are not effective. In previous work the CCNY-NOAA compression group reported an algorithm which addresses this case by sequential prediction of the spatial image. While that algorithm demonstrated an improved compression ratio over pure JPEG2000 compression, it underperformed the optimal compression ratios estimated from entropy. In order to effectively exploit the redundant information in a progressive prediction scheme, we must determine a sequence of bands in which each band has sufficient mutual information with the next band, so that it predicts it well.
We provide a covariance- and mutual-information-based analysis of the pairwise dependence between the bands and compare this with the qualitative dependence suggested by a physical analysis. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, and Walter Wolf.
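
The pairwise dependence analysis can be illustrated with a standard histogram estimate of mutual information between two bands (the bin count and names are assumptions); bands would then be ordered so that each one is predicted from a band with which it shares high mutual information.

```python
import numpy as np

def mutual_information(band_a, band_b, bins=64):
    """Histogram estimate of the mutual information (in bits) between two
    co-registered spectral bands."""
    joint, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of band_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of band_b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```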
A new lossless compression method based on a prediction tree with error compensation for hyperspectral imagery is proposed in this paper. This method combines the techniques of prediction trees and adaptive band prediction. The proposed method differs from previous similar approaches in that the current band is predicted from multiple bands, and the error created by the prediction tree is compensated by a linear adaptive predictor to decorrelate spectral statistical redundancy. After decorrelating intraband and interband redundancy, SPIHT (Set Partitioning in Hierarchical Trees) wavelet coding is used to encode the residual image. The proposed method achieves a high compression ratio on NASA JPL AVIRIS data.
Motion estimation has been shifted from the encoder to the decoder in distributed video coding (DVC). In this paper, a simplified skip mode is introduced into the WZ (Wyner-Ziv) frame coding process. In the proposed scheme, a skip-mode decision process is first performed to determine whether to apply skip mode to blocks with low motion. With skip mode, the block is reconstructed from the side information at the decoder without any encoded bits. In addition, the non-skip-mode blocks are divided into two parts by motion activity; the bitplanes are extracted within each set and encoded independently. Simulations show the proposed scheme can achieve up to 54.29% bitrate savings without a visible PSNR sacrifice.
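
A minimal sketch of the skip-mode decision is given below; the block size, the threshold, and the use of the mean absolute difference against a reference frame (standing in for the decoder's side information, which the encoder does not actually have) are all illustrative assumptions.

```python
import numpy as np

def skip_mode_decision(frame, reference, block=16, mad_threshold=2.0):
    """Decide per block whether to use skip mode in WZ frame coding.

    A block whose mean absolute difference from the co-located reference
    block falls below the threshold is marked as skip: the decoder simply
    copies the side information and no bits are sent for it.
    """
    rows, cols = frame.shape
    skip_map = np.zeros((rows // block, cols // block), dtype=bool)
    for i, r in enumerate(range(0, rows - block + 1, block)):
        for j, c in enumerate(range(0, cols - block + 1, block)):
            mad = np.mean(np.abs(frame[r:r + block, c:c + block].astype(float)
                                 - reference[r:r + block, c:c + block]))
            skip_map[i, j] = mad < mad_threshold
    return skip_map
```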
An innovative VLSI architecture for the JPEG-LS compression algorithm is proposed, which implements real-time image compression in either near-lossless or lossless mode. The proposed architecture mainly consists of four parallel pipelines, in which four pixels from four consecutive lines can be processed simultaneously using a specific coding scan sequence, ensuring low complexity and real-time data processing. Our VLSI architecture is implemented on a Xilinx XC2VP30 FPGA. The experimental results show that our hardware system achieves the same image quality and compression rate as the standard JPEG-LS method, and its processing speed is four times that of the traditional method.
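
The per-pixel operation that the four pipelines parallelize is the standard JPEG-LS median edge detector (MED) predictor; a plain software sketch of it (the pipeline scheduling itself is not shown) is:

```python
import numpy as np

def med_residual(image):
    """JPEG-LS median edge detector (MED) predictor, vectorised with numpy.

    For each pixel x with left neighbour a, upper neighbour b, and
    upper-left neighbour c:
        pred = min(a, b)   if c >= max(a, b)
        pred = max(a, b)   if c <= min(a, b)
        pred = a + b - c   otherwise
    Returns the residual for all pixels except the first row and column.
    """
    img = image.astype(np.int32)
    x = img[1:, 1:]
    a, b, c = img[1:, :-1], img[:-1, 1:], img[:-1, :-1]
    pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
                    np.where(c <= np.minimum(a, b), np.maximum(a, b),
                             a + b - c))
    return x - pred
```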
Synthetic aperture radar (SAR) instruments on spacecraft are capable of producing huge quantities of data.
Onboard lossy data compression is commonly used to reduce the burden on the communication link. In this
paper an overview is given of various SAR data compression techniques, along with an assessment of how much
improvement is possible (and practical) and how to approach the problem of obtaining it.
This paper first introduces the characteristics of JPEG2000 and JPEG; a digital stereo aerial image pair is then compressed using both methods. Comparisons are provided in terms of subjective quality, PSNR, and the accuracy of digital terrain models (DTM) automatically derived from the stereo pair. An experimental analysis is provided at the end, and the results indicate that the JPEG2000 method performs better.
Based on an analysis of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences, and the relative shift between two images is detected by a block-matching algorithm at the encoder. The algorithm estimates the rate of each bitplane from the estimated side-information frame and then adopts an ROI coding algorithm in which a rate-distortion lifting procedure is carried out in the rate allocation stage. With this algorithm, the feedback channel (FBC) can be removed from the traditional scheme. The compression algorithm developed in the paper can obtain up to 3 dB of gain compared with JPEG2000, and it significantly reduces complexity and storage consumption compared with 3D-SPIHT at the cost of a slight degradation in PSNR.