We propose Compressed Connected Components (CxCxC), a new fast algorithm for labeling connected components in binary images that makes use of compression. We break the given 3D image into non-overlapping 2x2x2 cubes of voxels (2x2 squares of pixels in 2D) and encode each cube's binary values as the bits of a single decimal integer.
We perform the connected component labeling on the resulting compressed data set, using a recursive labeling approach with smart masks on the encoded decimal values. The output is finally decompressed back to the original size by decimal-to-binary conversion of the cubes, retrieving the connected components in a lossless fashion. We demonstrate the efficacy of such encoding and labeling on large data sets (up to 1392 x 1040 in 2D and 512 x 512 x 336 in 3D). CxCxC reports speed gains of 4x in 2D and 12x in 3D, with memory savings of 75% in 2D and 88% in 3D, over the conventional connected components algorithm (recursive growing of component labels).
We also compare our method with those of VTK and ITK and find that we outperform both, with 3D speed gains of 3x and 6x, respectively. These features make CxCxC highly suitable for medical imaging and multimedia applications where the size of the data sets and the number of connected components can be very large.
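To make the encoding concrete, the following NumPy sketch packs each non-overlapping 2x2x2 cube into one integer and losslessly unpacks it again; the function names and bit ordering are illustrative choices, not the authors' implementation.

    import numpy as np

    def compress_cubes(vol):
        # Pack each non-overlapping 2x2x2 cube of a binary volume into a
        # single integer in [0, 255], one bit per voxel. Assumes all
        # dimensions are multiples of 2 (pad otherwise).
        z, y, x = vol.shape
        cubes = vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2)
        cubes = cubes.transpose(0, 2, 4, 1, 3, 5).reshape(-1, 8)
        weights = 1 << np.arange(8)  # bit weight of each voxel position
        return (cubes * weights).sum(axis=1).reshape(z // 2, y // 2, x // 2)

    def decompress_cubes(codes):
        # Lossless inverse: unpack every integer back into its 2x2x2 cube.
        z, y, x = codes.shape
        bits = (codes[..., None] >> np.arange(8)) & 1
        cubes = bits.reshape(z, y, x, 2, 2, 2).transpose(0, 3, 1, 4, 2, 5)
        return cubes.reshape(2 * z, 2 * y, 2 * x)

Round-tripping a binary volume through compress_cubes and decompress_cubes returns it unchanged, which is the lossless property the labeling stage relies on.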
Recent trends in medical image processing involve computationally intensive techniques on large data sets, especially for 3D applications such as segmentation, registration, and volume rendering. Multi-resolution image processing techniques have been used to speed up these methods. However, all well-known techniques currently used in multi-resolution medical image processing rely on Gaussian-based or other equivalent floating-point representations that are lossy and irreversible. In this paper, we study the use of Integer Wavelet Transforms (IWT) to address the issue of lossless representation and reversible reconstruction for such medical image processing applications while still retaining all the benefits that floating-point transforms offer, such as high speed and efficient memory usage. In particular, we consider three low-complexity reversible wavelet transforms, namely the lazy wavelet, the Haar or (1,1) wavelet, and the S+P transform, as against the Gaussian filter for multi-resolution speed-up of an automatic bone removal algorithm for abdominal CT Angiography. Perfect-reconstruction integer wavelet filters can perfectly recover the original data set at any step in the application. An additional advantage of the reversible wavelet representation is that it is suitable for lossless compression for purposes of storage, archiving, and fast retrieval. Given that even a slight loss of information in medical image processing can be detrimental to diagnostic accuracy, IWTs seem to be the ideal choice for multi-resolution based medical image segmentation algorithms. They could also be useful for other medical image processing methods.
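As an illustration of reversibility, here is a minimal lifting implementation of one level of the (1,1) integer Haar transform on a 1-D signal; this is the standard S-transform formulation, shown only as a sketch.

    import numpy as np

    def haar_int_forward(x):
        # One level of the (1,1) integer Haar transform via lifting.
        # x must have even length; returns (approximation s, detail d).
        a = x[0::2].astype(np.int64)
        b = x[1::2].astype(np.int64)
        d = b - a          # detail (difference)
        s = a + (d >> 1)   # approximation, with floor division
        return s, d

    def haar_int_inverse(s, d):
        # Exact inverse: recovers the original integer samples bit for bit.
        a = s - (d >> 1)
        b = d + a
        x = np.empty(2 * len(s), dtype=np.int64)
        x[0::2], x[1::2] = a, b
        return x

Because every lifting step is an integer operation with an exact inverse, haar_int_inverse(*haar_int_forward(x)) reproduces x exactly at any resolution level, which is precisely the property that makes IWTs safe for diagnostic data.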
Medical image fusion is increasingly enhancing diagnostic accuracy
by synergizing information from multiple images, obtained by the
same modality at different times or from complementary modalities
such as structural information from CT and functional information from PET. An
active, crucial research topic in fusion is validation of the registration (point-to-point correspondence) used. Phantoms and
other simulated studies are useful in the absence of, or as a preliminary to, definitive clinical tests. Software phantoms in particular have the added advantages of robustness, repeatability, and reproducibility. Our virtual-lung-phantom-based scheme can test
the accuracy of any registration algorithm and is flexible enough
for added levels of complexity (addition of blur/anti-alias, rotate/warp, and modality-associated noise) to help evaluate the
robustness of an image registration/fusion methodology. Such a
framework extends easily to different anatomies. Adding software-based fiducials both within and outside simulated anatomies proves more beneficial than experiments that use external fiducials on a patient, and would help the diagnosing clinician make a prudent choice of registration algorithm.
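As a sketch of how such a framework might operate, the following code stamps fiducials at known coordinates and applies a known rotation, blur, and noise to produce a test image whose ground-truth transform is available for scoring a registration algorithm; all names and parameter values are illustrative assumptions, not the authors' protocol.

    import numpy as np
    from scipy import ndimage

    def add_fiducials(phantom, coords, intensity=1000, radius=2):
        # Stamp small spherical fiducials at known voxel coordinates,
        # both inside and outside the simulated anatomy.
        out = phantom.copy()
        zz, yy, xx = np.indices(out.shape)
        for cz, cy, cx in coords:
            mask = (zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2 <= radius**2
            out[mask] = intensity
        return out

    def degrade(phantom, angle_deg=3.0, blur_sigma=1.0, noise_std=5.0, seed=0):
        # Apply a known rotation, blur, and additive noise so the
        # transform recovered by registration can be compared with
        # the ground truth used here.
        rng = np.random.default_rng(seed)
        moving = ndimage.rotate(phantom, angle_deg, axes=(1, 2), reshape=False)
        moving = ndimage.gaussian_filter(moving, sigma=blur_sigma)
        return moving + rng.normal(0.0, noise_std, moving.shape)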
Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering exploits the volumetric capabilities of CT to provide complete, interactive 3-D visualization. However, bone is an occluding structure and must be segmented out. The anatomical
complexity of the head makes the segmentation of bone and vessel a major challenge. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: “proximal”, “middle”, and “distal”. The “proximal” and “distal” sub-volumes show good spatial separation between bone and vessel (the carotid is referenced here). Bone and vessel appear contiguous in the “middle” partition, which remains the most challenging region for segmentation. A partition algorithm automatically identifies these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed from bone, image-entropy, and sinus profiles combined with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolutions, clinical sites, and pathologies) using ground truth identified
visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (about 0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow: fast, simple algorithms process the “proximal” and “distal” partitions, while complex methods are restricted to the “middle” partition. The partition-enabled segmentation has been successfully tested, and results are shown from multiple cases.
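For illustration, one of the cues the partition search can use is a per-slice image-entropy profile; the sketch below computes it, while the rule base that combines it with bone and sinus profiles is not shown.

    import numpy as np

    def slice_entropy_profile(volume, bins=256):
        # Shannon entropy (in bits) of each axial slice; the resulting
        # 1-D profile can feed a rule-based search for partition
        # locations. Illustrative only, not the paper's exact features.
        profile = []
        for s in volume:  # iterate over axial slices
            hist, _ = np.histogram(s, bins=bins)
            p = hist[hist > 0] / hist.sum()
            profile.append(float(-(p * np.log2(p)).sum()))
        return np.array(profile)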
All known methods of lossless or reversible data embedding suffer from two major disadvantages: 1) the embedded image suffers some distortion, however small, by the very process of embedding, and 2) a special parser (decoder) is required for the client to remove the embedded data and recover the original image losslessly. We propose a novel lossless data embedding method that circumvents both disadvantages. Zero-distortion lossless data embedding (ZeroD-LDE) achieves zero distortion of the embedded image for all viewing purposes while ensuring that clients without any specialized parser can still recover the original image losslessly, although without direct access to the embedded data. We exploit the fact that most images do not use all available gray levels, embedding data by selective lossless compression of run-lengths of zeros (or any compressible pattern). Contiguous runs of zeros are rewritten so that the leading zero becomes the maximum original intensity plus the run length, and the succeeding zeros are converted to the embedded data (plus the maximum original intensity), achieving extremely high embedding capacities. This way, the histograms of the host data and the embedded data do not overlap, and zero distortion is obtained by using the window-level setting of standard DICOM viewers. The embedded image is thus not only DICOM compatible but also visually zero-distortion, and requires no clinical validation.
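A minimal sketch of the run-length rewriting on a 1-D pixel stream follows; to keep the sketch unambiguous, payload symbols are offset by M + 1 rather than M, and capacity checks are omitted, so this illustrates the idea rather than the paper's exact bit layout.

    import numpy as np

    def zerod_lde_embed(pixels, payload):
        # M is the maximum intensity actually used by the host image.
        # Each run of L zeros becomes a marker (M + L) followed by
        # L - 1 payload symbols offset by M + 1, so the host and
        # embedded histograms never overlap.
        M = int(pixels.max())
        out, data, i = [], list(payload), 0
        while i < len(pixels):
            if pixels[i] == 0:
                j = i
                while j < len(pixels) and pixels[j] == 0:
                    j += 1
                run = j - i
                out.append(M + run)                       # run-length marker
                for _ in range(run - 1):                  # carry payload
                    out.append(M + 1 + (data.pop(0) if data else 0))
                i = j
            else:
                out.append(int(pixels[i]))
                i += 1
        return np.array(out), M

    def zerod_lde_extract(stream, M):
        # Inverse: restore the original pixels exactly and read payload.
        pixels, payload, i = [], [], 0
        while i < len(stream):
            v = int(stream[i])
            if v > M:                     # marker for a run of v - M zeros
                run = v - M
                pixels.extend([0] * run)
                payload.extend(int(d) - (M + 1) for d in stream[i + 1:i + run])
                i += run                  # marker plus run - 1 payload symbols
            else:
                pixels.append(v)
                i += 1
        return np.array(pixels), payload

Since all marker and payload symbols lie strictly above M, the host and embedded histograms never overlap, which is what the window-level property described above exploits.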
KEYWORDS: Image compression, Video, Wavelets, Digital video recorders, Image quality, JPEG2000, Image processing, Video coding, Video compression
The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose
Color-SPECK (CSPECK), a natural extension of SPECK to handle color
still images in the YUV 4:2:0 format. Extensions to other YUV
formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while the perceptual quality of its reconstructions is superior to that of SPIHT and JPEG2000. We then
propose a moving picture based coding system called Motion-SPECK
with CSPECK as the core algorithm in an intra-based setting.
Specifically, we demonstrate two modes of operation of
Motion-SPECK, namely the constant-rate mode where every frame is
coded at the same bit-rate and the constant-distortion mode, where
we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically
inherit all the desirable features of SPECK such as embeddedness,
low computational complexity, highly efficient performance, fast
decoding, and low dynamic memory requirements. The intended applications of Motion-SPECK are high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.
An important telemedicine application is the perusal, by radiologists at remote locations, of digital CT scans held on a central server within a healthcare enterprise, across a bandwidth-constrained network, for diagnostic purposes. A viewing station is generally expected to respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG 2000. We estimate the time taken under different network bandwidths to compare their relative merits. We further exploit the fact that most medical images are 12-16 bits, but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique and investigate JPEG 2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings are then processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
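The windowing step itself is simple; a minimal sketch of mapping a high-bit-depth image to 8 bits under a window setting is shown below, with center/width values that are purely illustrative.

    import numpy as np

    def apply_window(img, center=40.0, width=400.0):
        # Map a 12-16-bit image to 8 bits using a window/level setting;
        # the server compresses this windowed image as the first
        # progressive RoI pass and serves re-windowed RoIs on demand.
        lo = center - width / 2.0
        out = (img.astype(np.float32) - lo) / width
        return (np.clip(out, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)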
In this paper, we propose a block-based conditional entropy coding
scheme for medical image compression using the 2-D integer Haar
wavelet transform. The main motivation for pursuing conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum
block size to perform conditional entropy coding for various
modalities. We also propose that a similar scheme can be used to
obtain a sub-optimal scan order and an optimum block size for
other wavelets. The proposed approach is motivated by a desire to
perform better than JPEG2000 in terms of compression ratio. We outline a block-based conditional entropy coder that has the potential to outperform JPEG2000. Though we do not present a method that attains the first-order conditional entropy, a conditional adaptive arithmetic coder would come arbitrarily close to this theoretical bound. All the results in this paper are based on a medical image data set of various bit depths and modalities.
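The inequality driving the approach is easy to check empirically: with Y the symbol preceding X in the scan, H(X|Y) <= H(X). A small sketch follows (binning and scan order are illustrative choices):

    import numpy as np

    def entropies(seq, nsym=256):
        # First-order entropy H(X) and conditional entropy H(X|Y), with
        # Y the symbol preceding X in the scan order. Assumes symbols
        # are integers in [0, nsym).
        seq = np.asarray(seq, dtype=np.int64)
        p = np.bincount(seq, minlength=nsym) / len(seq)
        h = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        joint = np.zeros((nsym, nsym))
        np.add.at(joint, (seq[:-1], seq[1:]), 1.0)  # counts of (Y, X) pairs
        joint /= joint.sum()
        py = joint.sum(axis=1, keepdims=True)
        cond = joint / np.where(py > 0, py, 1.0)    # P(X | Y)
        mask = joint > 0
        h_cond = -np.sum(joint[mask] * np.log2(cond[mask]))
        return h, h_cond

A conditional adaptive arithmetic coder driven by P(X|Y) approaches h_cond, which is the gain over a memoryless coder bounded by h.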