In this paper we present formats and delivery methods for Large Volume Streaming Data (LVSD) systems. LVSD systems collect terabytes of data per mission with aggregate camera sizes in the 100 Mpixel to several Gpixel range at temporal rates of 2-60 Hz. We present options and recommendations for the different stages of LVSD data collection and delivery, including delivery of the raw (multi-camera) data, delivery of processed (stabilized mosaic) data, and delivery of user-defined region-of-interest windows. Many LVSD systems use JPEG 2000 for the compression of raw and processed data. We explore the use of the JPEG 2000 Interactive Protocol (JPIP) for interactive client/server delivery to thick clients (desktops and laptops) and of MPEG-2 and H.264 for delivery to handheld thin clients (tablets, cell phones). We also explore the use of 3D JPEG 2000 compression, defined in ISO 15444-2, for storage and delivery. The delivery of raw, processed, and region-of-interest data requires different metadata delivery techniques and metadata content. Beyond the format and delivery of data and metadata, we discuss the requirements for a client/server protocol that provides data discovery and retrieval. Finally, we look to the future as LVSD systems perform automated processing to produce "information" from the original data. This information may include tracks of moving targets, changes in the background, snapshots of targets, fusion of multiple sensors, and information about "events" that have occurred.
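As a rough sanity check on the data volumes quoted above, the sketch below estimates the uncompressed collection volume for an example sensor; the pixel count, bit depth, frame rate, and mission length are illustrative assumptions, not parameters of any specific LVSD system.

```python
# Back-of-the-envelope LVSD raw data volume, using assumed example
# parameters (1 Gpixel aggregate array, 12-bit samples, 2 Hz, 4-hour
# mission); none of these values come from a specific system.

def raw_volume_tb(pixels, bits_per_pixel, frame_rate_hz, mission_hours):
    """Uncompressed data volume in terabytes for one mission."""
    bytes_per_frame = pixels * bits_per_pixel / 8
    frames = frame_rate_hz * mission_hours * 3600
    return bytes_per_frame * frames / 1e12

print(raw_volume_tb(pixels=1e9, bits_per_pixel=12,
                    frame_rate_hz=2, mission_hours=4))  # ~43 TB before compression
```

Even at the low end of the stated frame rates, the uncompressed volume reaches tens of terabytes per mission, which is what motivates compressed storage and interactive region-of-interest delivery.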
In this paper we present a JPEG2000-enabled ISR dissemination system that provides an airborne compression server and a ground-based screener client. This system enables direct dissemination of airborne-collected imagery to users on the ground via existing portable communications. Utilizing the progressive nature of JPEG2000, the interactive capabilities of its associated JPIP streaming, and the on-the-fly mosaicing capability of the MIRAGE ground screener client application, ground-based users can interactively access large volumes of geo-referenced imagery from an airborne image collector. The system, called QUICKFIRE, is a recently developed prototype demonstrator. We present preliminary results from this effort.
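To make the on-the-fly mosaicing step concrete, the following minimal sketch maps a geo-referenced frame corner to a pixel offset in a common mosaic grid; the coordinate convention and the flat-earth degrees-to-meters scaling are illustrative assumptions, not the MIRAGE implementation.

```python
# Sketch of the kind of placement step an on-the-fly mosaic viewer performs:
# map a frame's geo-referenced upper-left corner to a pixel offset in a
# common mosaic grid. The simple lon/lat-to-meters scaling below is an
# assumption for illustration only.

def mosaic_offset(frame_ul_lon, frame_ul_lat, mosaic_ul_lon, mosaic_ul_lat,
                  meters_per_pixel, meters_per_deg=111_320.0):
    """Return (col, row) in mosaic pixels for the frame's upper-left corner."""
    dx_m = (frame_ul_lon - mosaic_ul_lon) * meters_per_deg
    dy_m = (mosaic_ul_lat - frame_ul_lat) * meters_per_deg  # rows grow southward
    return round(dx_m / meters_per_pixel), round(dy_m / meters_per_pixel)

# Example: place a frame 0.01 deg east and 0.005 deg south of the mosaic origin
print(mosaic_offset(-77.01, 38.895, -77.02, 38.900, meters_per_pixel=0.5))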
As the number of MSI/HSI data producers increases and the exploitation of this imagery matures, more users will request MSI/HSI data and products derived from it. This paper presents client-server architecture concepts for the storage, processing, and delivery of MSI/HSI data and derived products. A key component of this concept is the JPEG 2000 compression standard. JPEG 2000 is the first compression standard capable of preserving radiometric accuracy when compressing MSI/HSI data. JPEG 2000 enables client-server delivery of large data sets in which a client may select spatial and spectral regions of interest at a desired resolution and quality to facilitate rapid viewing of data. Using these attributes of JPEG 2000, we present concepts that facilitate thin-client server-side processing as well as traditional thick-client processing of MSI/HSI data.
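As an illustration of the spatial/spectral region-of-interest access described above, the sketch below assembles a JPIP view-window request using the standard ISO/IEC 15444-9 query fields (fsiz, roff, rsiz, comps, layers); the server URL and target file name are hypothetical.

```python
# Minimal sketch of a JPIP view-window request for a spatial/spectral
# region of interest. Server URL and image name are hypothetical; the
# query fields are the standard JPIP parameters for requested resolution,
# spatial window, spectral components, and quality layers.
from urllib.parse import urlencode

params = {
    "target": "scene_hsi.jp2",   # hypothetical MSI/HSI target on the server
    "fsiz":   "2048,2048",       # requested resolution (frame size)
    "roff":   "512,768",         # spatial region offset
    "rsiz":   "256,256",         # spatial region size
    "comps":  "30-45",           # spectral bands (components) of interest
    "layers": "4",               # number of quality layers to deliver
}
request_url = "http://server.example/jpip?" + urlencode(params)
print(request_url)
```

A thin client issues such requests and lets the server decode, process, and render; a thick client instead pulls the relevant compressed codestream increments and decodes them locally.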
This paper presents the constructs for a transformational paradigm within a standards-based architectural framework, which enables extremely quick and accurate visualization of large imagery sets directly from airborne intelligence and surveillance collection assets. The architecture we present handles the dissemination and “on-demand” visualization of JPEG2000 encoded geospatial imagery while providing dramatic improvements in reconnaissance and surveillance operations where low-latency access and time-critical visualization of targets are of substantial importance. This innovative framework, known as the “advanced wavelet architecture” (AWA), has been developed using open standards and nonproprietary formats, within the Commercial and Government Systems Division of Eastman Kodak Company. Numerous software and hardware applications have been developed as a result of the AWA research and development activities.
This paper describes a continuing study effort investigating the impact of hyperspectral compression on the utility of compressed and subsequently reconstructed data. The current study involved the application of new compression options in JPEG-2000 to hyperspectral data and the investigation of their effects on exploitation. Part II of the JPEG-2000 standard (ISO/IEC 15444-2) provides extensions to the baseline JPEG-2000 compression algorithms (ISO/IEC 15444-1) that allow for the compression of hyperspectral data. In this study, the Karhunen-Loeve Transform (KLT) was used for spectral decorrelation along with wavelet compression and scalar quantization to encode two HYDICE scenes at five different average bit rates (4.0, 2.0, 1.0, 0.5, 0.25 bits/pixel/band). Part II of the JPEG-2000 standard also introduces the notion of component collections, which may be used to spectrally segment (and spectrally permute) hyperspectral data. Component collections were used in conjunction with the KLT to reduce computational complexity and improve numeric stability. Two exploitation tasks, anomaly detection and material identification, were performed on these compressed and reconstructed data. We report the conventional root-mean-square-error (RMSE) and peak signal-to-noise ratio (PSNR) metrics. We also report the exploitation results to facilitate the determination of an acceptable bit rate for each exploitation task and the comparison among different compression algorithms. Comparisons are also made with previously reported results using an earlier version of JPEG-2000 to compress the HYDICE data.
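The following sketch illustrates the idea of applying the KLT within component collections: each contiguous group of bands is decorrelated independently, which keeps the covariance and eigenvector problems small. The cube layout, group boundaries, and use of numpy are illustrative assumptions and not the encoder used in the study.

```python
# Sketch of KLT spectral decorrelation applied per component collection,
# assuming a hyperspectral cube shaped (bands, rows, cols) and contiguous
# band groups; collection boundaries and data layout are illustrative.
import numpy as np

def klt_per_collection(cube, collections):
    """cube: (bands, rows, cols); collections: list of (start, stop) band ranges."""
    out = np.empty_like(cube, dtype=np.float64)
    for start, stop in collections:
        group = cube[start:stop].reshape(stop - start, -1).astype(np.float64)
        centered = group - group.mean(axis=1, keepdims=True)
        # Eigenvectors of the spectral covariance define the KLT basis
        # (eigh returns ascending eigenvalues; ordering is immaterial here).
        cov = centered @ centered.T / centered.shape[1]
        _, vecs = np.linalg.eigh(cov)
        out[start:stop] = (vecs.T @ centered).reshape(stop - start, *cube.shape[1:])
    return out

# Example: a 210-band HYDICE-like cube split into three collections so each
# covariance/eigen problem is 70x70 instead of 210x210.
cube = np.random.rand(210, 64, 64)
decorrelated = klt_per_collection(cube, [(0, 70), (70, 140), (140, 210)])
```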
JPEG2000 Part I provides a host of compression options and data ordering choices which enable powerful applications and create tremendous flexibility in the handling of still images. Part I, however, is restricted to handling multiple-component images (with the exception of three-component images) one component at a time. In general, Part I allows no exploitation of the inter-component correlation that may exist. Part II introduces a robust multiple component transform capability which is applied prior to the Part I spatial wavelet decomposition and compression. This paper describes some of the multiple component transform capabilities in JPEG2000 Part II, including prediction, traditional decorrelation, wavelet transformations, and reversible integer transformations.
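As a concrete instance of a reversible integer component transform, the sketch below implements the three-component reversible color transform from Part I; Part II generalizes this kind of reversible integer-to-integer construction to arbitrary numbers of components.

```python
# Illustrative reversible integer component transform: the three-component
# reversible color transform (RCT) from JPEG2000 Part I, shown only as a
# small, exactly invertible example of integer-to-integer decorrelation.

def rct_forward(r, g, b):
    y  = (r + 2 * g + b) // 4   # floor division keeps integer samples
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    g = y - (cb + cr) // 4
    r = cr + g
    b = cb + g
    return r, g, b

# Round-trip is exact for integer inputs.
assert rct_inverse(*rct_forward(120, 200, 37)) == (120, 200, 37)
```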
Current image compression standards do not provide for the efficient compression of imagery with more than three components. The advanced JPEG 2000 image compression standard will have provisions for multiple component imagery that enable decorrelation in the component direction. The JPEG 2000 standard has been defined in a flexible manner, which allows for the use of multiple transform techniques to take advantage of the correlation between components. These techniques allow the user to make the trade-off between complexity and compression efficiency. This paper compares the compression efficiency of three techniques within the JPEG 2000 standard against other standard compression techniques. The results show that the JPEG 2000 algorithm will significantly increase the compression efficiency of multiple-component imagery.
JPEG-2000 is the new image compression standard currently under development by ISO/IEC. Part I of this standard provides a “baseline” compression technology appropriate for grayscale and color imagery. Part II of the standard will provide extensions that allow for more advanced coding options, including the compression of multiple component imagery. Several different multiple component compression techniques are currently being investigated for inclusion in the JPEG-2000 standard. In this paper we apply some of these techniques to the compression of HYDICE data. Two decorrelation techniques, 3D wavelet and the Karhunen-Loeve Transform (KLT), were used along with two quantization techniques, scalar quantization and trellis-coded quantization (TCQ), to encode two HYDICE scenes at five different bit rates (4.0, 2.0, 1.0, 0.5, 0.25 bits/pixel/band). The chosen decorrelation and quantization techniques span the range from the simplest to the most complex multiple component compression systems being considered for inclusion in JPEG-2000. This paper reports root-mean-square-error (RMSE) and peak signal-to-noise ratio (PSNR) metrics for the compressed data. A companion paper [1] that follows reports on the effects of these compression techniques on exploitation of the HYDICE scenes.
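For reference, the RMSE and PSNR figures reported here follow the usual definitions; a minimal sketch is given below, where the peak value (4095 for 12-bit samples) is an assumption and should be set to the actual dynamic range of the data.

```python
# Minimal sketch of the distortion metrics, assuming the original and
# reconstructed cubes are numpy arrays; the 12-bit peak is an assumption.
import numpy as np

def rmse(original, reconstructed):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original, reconstructed, peak=4095.0):
    return 20.0 * np.log10(peak / rmse(original, reconstructed))
```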
This paper describes a second study effort investigating the impact of hyperspectral compression on the utility of compressed and subsequently reconstructed data. The overall objective is to assess and quantify the extent to which degradation introduced by compression affects the exploitation results of the compressed-reconstructed hyperspectral data. The goal of these studies is to provide a sound empirical basis for identifying the best performing compression algorithms and establishing compression ratios acceptable for various exploitation functions. Two nonliteral exploitation functions (i.e., anomaly detection and material identification) were performed on the original and compressed-reconstructed image data produced by two new hyperspectral compression algorithms (i.e., 3D Wavelets and Karhunen-Loeve Transform [KLT] Trellis-Coded Quantizer [TCQ] based JPEG-2000) at five compression ratios (i.e., 3:1, 6:1, 12:1, 24:1, and 48:1) on two scenes (a desert background and a forest background scene). The results showed that, in general, no appreciable degradation in anomaly detection performance occurred between the compressed-reconstructed and original hyperspectral data sets for either scene using the KLT-TCQ based JPEG-2000 algorithm over the compression ratios studied. Degradation was observed for the 3D Wavelets based JPEG-2000 algorithm at the 48:1 compression ratio. As for material identification, no appreciable degradation occurred between the compressed-reconstructed and original hyperspectral data sets for the desert scene using the KLT-TCQ algorithm over all the compression ratios studied. Some degradation was observed for the forest scene at higher compression ratios. Degradation was observed for the 3D Wavelets algorithm at compression ratios of 6:1 and higher for the desert scene and at compression ratios of 24:1 and higher for the forest scene. These results were compared with those obtained in the previous study using the Unmixing/Wavelets and KLT/Wavelets compression algorithms. The results of this study, as well as our previous study, continue to point to the use of compression algorithms and compression ratios empirically determined to be suitable for specific exploitation functions as a viable means of significantly alleviating transmission overload.
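The compression ratios above relate to the encoded bit rates in the companion study through the simple identity ratio = source bit depth / encoded bits-per-pixel-per-band; the 12-bit source depth in the sketch below is an assumption that is consistent with the listed pairs (for example, 12 / 0.25 = 48).

```python
# Hedged arithmetic linking compression ratios to encoded bit rates,
# assuming 12-bit source samples (an assumption consistent with the
# numbers quoted in these studies).
source_bits = 12
for bpp in (4.0, 2.0, 1.0, 0.5, 0.25):
    print(f"{bpp} bits/pixel/band -> {source_bits / bpp:.0f}:1")
# prints 3:1, 6:1, 12:1, 24:1, 48:1
```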
The Joint Photographic Experts Group (JPEG) within the International Organization for Standardization (ISO) is defining a new standard for still image compression, JPEG-2000. This paper describes the Wavelet Trellis Coded Quantization (WTCQ) algorithm submitted by SAIC and The University of Arizona to the JPEG-2000 standardization activity. WTCQ is the basis of the current Verification Model being used by JPEG participants to conduct algorithm experiments. The outcomes from these experiments will lead to the ultimate specification of the JPEG-2000 algorithm. Prior to describing WTCQ and its subsequent evolution into the initial JPEG-2000 VM, a brief overview of the objectives of JPEG-2000 and the process by which it is being developed is presented.