Recently, Han et al. developed a method for visually lossless compression using JPEG2000. In this method, visibility thresholds (VTs) are experimentally measured and used during quantization to ensure that the errors introduced by quantization remain below these thresholds. In this work, we extend the method of Han et al. to the visually lossy regime. We propose a framework in which a series of experiments is conducted to measure just-noticeable differences (JNDs) using the quantization distortion model introduced by Han et al. The resulting thresholds are incorporated into a JPEG2000 encoder to yield visually lossy, JPEG2000 Part 1 compliant codestreams.
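As a concrete illustration of threshold-driven quantization, the sketch below maps a per-subband visibility (or JND) threshold to a deadzone quantizer step size and applies JPEG2000-style quantization to one subband. The threshold value, synthesis gain, and coefficient statistics are hypothetical placeholders, not values from Han et al.'s experiments.

```python
# Minimal sketch (assumed values, not the authors' measured thresholds): choose a
# deadzone quantizer step so the peak image-domain error stays below a threshold.
import numpy as np

def step_from_threshold(vt, synthesis_gain):
    """Largest deadzone step whose peak image-domain error stays below vt.

    Worst case for a deadzone quantizer is a coefficient at the edge of the
    deadzone (error of one full step); the subband synthesis gain maps that
    coefficient error into the reconstructed image.
    """
    return vt / synthesis_gain

def quantize(coeffs, step):
    """JPEG2000-style deadzone quantization: q = sign(c) * floor(|c| / step)."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def dequantize(indices, step, r=0.5):
    """Mid-bin reconstruction; sign(0) = 0 keeps deadzone coefficients at zero."""
    return np.sign(indices) * (np.abs(indices) + r) * step

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=8.0, size=(64, 64))   # one hypothetical wavelet subband
vt, gain = 1.5, 0.9                              # placeholder threshold and synthesis gain
step = step_from_threshold(vt, gain)
rec = dequantize(quantize(coeffs, step), step)
print("peak coefficient error:", np.max(np.abs(coeffs - rec)), "<= step =", step)
```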
Conventional performance analysis of detection systems confounds the effects of the system architecture (sources, detectors, system geometry, etc.) with the effects of the detection algorithm. Previously, we introduced an information-theoretic approach to this problem by formulating a performance metric, based on Cauchy-Schwarz mutual information, that is analogous to the channel capacity concept from communications engineering. In this work, we discuss the application of this metric to study novel screening systems based on x-ray scatter or phase. Our results show how effective use of this metric can impact design decisions for x-ray scatter and phase systems.
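For reference, one standard form of the Cauchy-Schwarz quantities underlying such a metric (as used in information-theoretic learning) is sketched below; the exact normalization and estimator used in our prior work may differ.

```latex
% Cauchy-Schwarz divergence between densities p and q:
D_{\mathrm{CS}}(p \,\|\, q)
  = -\log \frac{\left(\int p(x)\,q(x)\,dx\right)^{2}}
               {\int p(x)^{2}\,dx \,\int q(x)^{2}\,dx}

% A Cauchy-Schwarz mutual information between object class X and measurement Y is
% the divergence between the joint density and the product of its marginals:
I_{\mathrm{CS}}(X;Y) = D_{\mathrm{CS}}\bigl(p_{XY} \,\|\, p_X\,p_Y\bigr)
```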
In this work we present an information-theoretic framework for a systematic study of checkpoint x-ray systems using photoabsorption measurements. Conventional performance analysis of threat detection systems confounds the effect of the system architecture choice with the performance of a threat detection algorithm. Our approach, by contrast, enables a direct comparison of the fundamental performance limits of disparate hardware architectures, independent of the choice of a specific detection algorithm. We compare photoabsorptive measurements from different system architectures to understand the effect of system geometry (angular views) and spectral resolution on the fundamental limits of system performance.
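The sketch below illustrates, under toy assumptions, the kind of algorithm-independent comparison described above: a Monte Carlo estimate of the mutual information between object class and photoabsorption measurement for two hypothetical geometries (2 vs. 8 angular views). It uses Shannon mutual information with class-conditional Gaussian measurement models as a stand-in metric; the signatures, noise level, and dimensions are invented for illustration.

```python
# Toy sketch (invented signatures and noise, Shannon MI as a stand-in metric):
# compare the task-relevant information in photoabsorption measurements from two
# hypothetical geometries via a Monte Carlo estimate of I(class; measurement).
import numpy as np

def mi_class_vs_measurement(means, sigma, n_per_class=10000, rng=None):
    """I(C; Y) in bits for equiprobable classes C with Y | C=c ~ N(means[c], sigma^2 I)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_classes, dim = means.shape
    cond_entropy = 0.0
    for c in range(n_classes):
        y = means[c] + sigma * rng.standard_normal((n_per_class, dim))
        # log-likelihoods log p(y | c') up to a shared constant; Bayes gives p(c' | y).
        log_lik = -0.5 * ((y[:, None, :] - means[None, :, :]) ** 2).sum(-1) / sigma**2
        log_post = log_lik - np.logaddexp.reduce(log_lik, axis=1, keepdims=True)
        cond_entropy += -(np.exp(log_post) * log_post).sum(1).mean() / np.log(2) / n_classes
    return np.log2(n_classes) - cond_entropy      # I(C;Y) = H(C) - H(C|Y)

rng = np.random.default_rng(1)
few_views = rng.normal(size=(2, 2))                           # 2 classes x 2 angular views
many_views = np.hstack([few_views, rng.normal(size=(2, 6))])  # same classes, 8 views
print("I(C;Y), 2 views:", mi_class_vs_measurement(few_views, sigma=1.0))
print("I(C;Y), 8 views:", mi_class_vs_measurement(many_views, sigma=1.0))
```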
Compressive imaging exploits the sparsity/compressibility of natural scenes to reduce the detector count and read-out bandwidth of a focal plane array by effectively implementing compression during the acquisition process. However, realizing the full potential of compressive imaging entails several practical challenges, such as measurement design, measurement quantization, rate allocation, non-idealities inherent in hardware implementation, scalable imager architecture, system calibration and tractable image formation algorithms. We describe an information-theoretic approach for compressive measurement design that incorporates available prior knowledge about natural scenes to yield more efficient projection designs than random projections. The compressive measurement quantization and rate-allocation problems are also considered, and simulation studies demonstrate the performance of random and information-optimal projection designs for quantized compressive measurements. Finally, we demonstrate the feasibility of optical compressive imaging with a scalable compressive imaging hardware implementation that addresses system calibration and real-time image formation challenges. The experimental results highlight the practical effectiveness of compressive imaging with system design constraints, non-ideal system components and realistic system calibration.
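The acquisition-plus-reconstruction chain described above can be illustrated with a toy simulation: random Gaussian projections of a sparse scene, uniform quantization of the measurements, and ISTA reconstruction to exploit sparsity. The sizes, quantizer step, and regularization weight are placeholders; the information-optimal projection design, rate allocation, and hardware calibration discussed in the abstract are not modeled here.

```python
# Toy sketch of quantized compressive acquisition y = Q(A x) and sparse reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                                    # scene dimension, number of measurements
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.normal(size=8)   # sparse scene (placeholder)

A = rng.standard_normal((m, n)) / np.sqrt(m)      # random projections (non-optimized)
step = 0.05                                       # uniform quantizer step (rate control knob)
y = step * np.round(A @ x / step)                 # quantized compressive measurements

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding: minimizes 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x_hat = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_step = x_hat - (A.T @ (A @ x_hat - y)) / L
        x_hat = np.sign(grad_step) * np.maximum(np.abs(grad_step) - lam / L, 0.0)
    return x_hat

x_hat = ista(A, y)
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```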