This paper presents an algorithm for determining blood flow parameters in neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axes. This new approach aims to significantly shorten the time required to complete measurements using Nuclear Magnetic Resonance techniques. Based on a defined error function, the algorithm scans the solution space to find the function's minimum and thus determines a single plane characterized by minimal measurement error, which allows accurate measurement of blood flow in the four carotid arteries. The paper also describes a practical implementation of this method (as a module of a larger imaging-measurement system), including preliminary research results.
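The abstract does not specify the error function or the plane parameterization, but the scan of the solution space it describes can be sketched as an exhaustive search over candidate planes (here hypothetically parameterized by a unit normal in spherical angles plus a signed offset); `error_fn` stands in for the paper's undisclosed measurement-error model:

```python
import math

def scan_for_optimal_plane(error_fn, n_theta=36, n_phi=18, offsets=range(-10, 11)):
    """Exhaustively scan candidate measurement planes and return the
    parameters minimizing the supplied error function.

    A plane is parameterized by its unit normal (spherical angles
    theta, phi) and a signed offset along that normal; both the grid
    resolution and the error model are illustrative assumptions."""
    best, best_err = None, float("inf")
    for i in range(n_theta):
        theta = 2 * math.pi * i / n_theta
        for j in range(n_phi):
            phi = math.pi * j / n_phi
            normal = (math.sin(phi) * math.cos(theta),
                      math.sin(phi) * math.sin(theta),
                      math.cos(phi))
            for d in offsets:
                err = error_fn(normal, d)
                if err < best_err:
                    best_err, best = err, (normal, d)
    return best, best_err
```

In practice the error function would quantify how obliquely the candidate plane cuts each of the four carotid segments; the grid search simply keeps the plane with the smallest aggregate error.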
This paper considers the reversible transforms used in wavelet compression under the current JPEG2000 standard. The original data decomposition, in the form of an integer wavelet transform realized in a subband decomposition scheme, is optimized by designing and selecting the most effective transforms. The lifting scheme is used to construct new biorthogonal symmetric wavelets. The number and distribution of vanishing moments, subband coding gain, associated filter length, computational complexity, and number of lifting steps were the main criteria analyzed in optimizing the designed transforms. Drawing on the many compression-efficiency evaluations performed during the JPEG2000 standardization process, the best transforms selected there were compared with the designed ones to identify the wavelet bases most efficient for compression and their important features. Certain new transforms outperform all others in both phases of lossy-to-lossless compression (e.g., up to 0.5 dB of PSNR at 0.5 bpp compared with the state-of-the-art transforms of JPEG2000 compression, and up to 3 dB over the standard reversible 5/3 transform). Moreover, the lossy compression efficiency of the proposed reversible wavelets is in several cases comparable to that of the reference irreversible wavelets. The highest improvement over those reference PSNR values is close to 1.2 dB.
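For reference, the baseline the new transforms are measured against, the reversible LeGall 5/3 transform of JPEG2000, is itself a two-step lifting scheme (predict, then update) computed entirely in integer arithmetic, which is what makes lossless reconstruction possible. A minimal one-level 1-D sketch, using index clamping at the boundaries in place of full symmetric extension:

```python
def cdf53_forward(x):
    """One level of the reversible LeGall 5/3 integer wavelet transform
    via lifting, for an even-length list of integers.
    Returns (lowpass, highpass) integer subbands."""
    even, odd = x[0::2], x[1::2]
    # Predict step: each odd sample minus the floored mean of its
    # even neighbors gives the high-pass detail coefficient.
    d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
         for i in range(len(odd))]
    # Update step: even samples corrected by neighboring details
    # give the low-pass approximation coefficients.
    s = [even[i] + ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(even))]
    return s, d

def cdf53_inverse(s, d):
    """Exact integer inverse: undo update, undo predict, interleave."""
    even = [s[i] - ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
            for i in range(len(s))]
    odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
           for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is inverted by subtracting exactly what was added, the round trip is lossless despite the rounding inside each step; the paper's new transforms extend this same construction with additional or redesigned lifting steps.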
Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization, and progressive coding. Efficient coders with an embedded code form and rate-fixing abilities, such as Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels, and the quantization scheme in the SPIHT algorithm. Additionally, we present the results of selecting the best filter bank; the most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was observed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and progressive in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, although sometimes the most efficient solution must be found iteratively. The final results are competitive with the most efficient wavelet coders.
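The two SPIHT quantities the abstract says were tuned can be made concrete. In standard SPIHT the initial threshold is the largest power of two not exceeding the maximum coefficient magnitude, and a coefficient known to lie in an uncertainty interval is reconstructed at its midpoint; the sketch below exposes the midpoint placement as a tunable fraction, which is one plausible reading of "modifying the reconstruction levels" (the paper's actual modification is not specified here):

```python
import math

def initial_threshold(coeffs):
    """Standard SPIHT/EZW initial threshold:
    2 ** floor(log2(max |c|))."""
    cmax = max(abs(c) for c in coeffs)
    return 2 ** int(math.floor(math.log2(cmax)))

def reconstruction_value(low, high, frac=0.5):
    """Reconstruction level inside the uncertainty interval [low, high).
    frac = 0.5 is the classic midpoint; shifting it adapts the
    reconstruction to the coefficient distribution (illustrative)."""
    return low + frac * (high - low)
```

Shifting `frac` below 0.5 models the fact that wavelet coefficient magnitudes are concentrated toward the lower end of each interval, which is the kind of statistical adjustment such tuning exploits.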
Considerations on effective lossless coding of non-smooth images are presented in this paper. The focus is on selecting the best fast coding algorithms for a class of medical images rather than on introducing a completely new concept. As references we consider the highly efficient CALIC method, the new lossless standard JPEG-LS, and the BTPC algorithm. Different methods of image scanning and 1-D encoding are tested. Simple raster-scan data ordering followed by n-order arithmetic coding yields significant encoding efficiency for ultrasound images, considered representative of the non-smooth image class. Lower bit rates can be achieved by additional statistical modeling in the arithmetic coder, based on a 12th-order context quantized to a first-order context; the number of states in the conditional probability model is thereby reduced to overcome the context-dilution problem. Finally, improved compression efficiency for non-smooth images in comparison with the state-of-the-art CALIC algorithm is achieved: the average bit rate is reduced by over 30 percent. To compress smooth images, a linear prediction scheme is incorporated for thorough data redundancy reduction. The same model, based on a linear combination of adjacent pixels, is used in both the prediction and entropy encoding steps. For smooth images our method's performance is comparable to JPEG-LS and slightly worse than CALIC.
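The context-quantization idea, collapsing a wide causal context into a small number of conditioning states so that each state accumulates enough samples to estimate probabilities reliably, can be sketched as follows. The paper's actual 12th-order context and its quantizer are not specified in the abstract, so the bucketing rule below is a toy stand-in:

```python
from collections import defaultdict

class ContextQuantizedModel:
    """Sketch of context quantization for adaptive arithmetic coding:
    a wide causal context (a tuple of previous symbol values) is mapped
    to one of n_states conditioning states, keeping per-state counts
    well populated and so avoiding context dilution."""

    def __init__(self, alphabet_size, n_states):
        self.n_states = n_states
        # Laplace smoothing: every symbol starts with count 1 per state.
        self.counts = defaultdict(lambda: [1] * alphabet_size)

    def quantize(self, context):
        # Toy quantizer: bucket the context "energy" into n_states.
        return min(sum(abs(c) for c in context) * self.n_states // 256,
                   self.n_states - 1)

    def probability(self, context, symbol):
        c = self.counts[self.quantize(context)]
        return c[symbol] / sum(c)

    def update(self, context, symbol):
        self.counts[self.quantize(context)][symbol] += 1
```

An arithmetic coder driven by `probability`/`update` then conditions on the quantized state rather than the raw high-order context, trading a little context precision for far better count statistics.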
An efficient coding scheme for the image wavelet representation in a lossy compression scheme is presented. The spatial-frequency hierarchical structure of the quantized coefficients and their statistics are analyzed to reduce any redundancy. We applied a context-based linear magnitude predictor to fit the first-order conditional probability model used in arithmetic coding of significant coefficients to the local data characteristics and to eliminate spatial and inter-scale dependencies. Sign information is also encoded by inter- and intra-band prediction and entropy coding of the prediction errors. The main feature of our algorithm, however, concerns the way zerotree structures are encoded. An additional zerotree-root symbol is included in the magnitude data stream. Moreover, four neighboring zerotree roots with a significant parent node are included in an extended high-order context model of zerotrees. Such a significant parent is marked as a significant zerotree root, and information about the distribution of these roots is coded separately. The efficiency of the presented coding scheme was tested in a dyadic wavelet decomposition scheme with two quantization procedures: a simple scalar uniform quantizer and a more complex space-frequency quantizer with adaptive data thresholding. The final results appear promising and competitive with the most effective wavelet compression methods.
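The zerotree property underlying these symbols is the classic one: a coefficient is a zerotree root at a given threshold when it is insignificant and every descendant in its spatial-orientation tree is insignificant too. A minimal recursive check (the tree topology is supplied by a caller-provided `children` function, an assumption of this sketch):

```python
def is_zerotree_root(coeffs, node, threshold, children):
    """True if coeffs[node] and all its descendants in the
    spatial-orientation tree are insignificant at the threshold.
    coeffs maps node ids to coefficient values; children(node)
    yields the node's child ids in the next-finer subband."""
    if abs(coeffs[node]) >= threshold:
        return False
    return all(is_zerotree_root(coeffs, c, threshold, children)
               for c in children(node))
```

Encoding one zerotree-root symbol then stands in for the entire insignificant subtree, which is where the bit savings of zerotree coding come from.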
An efficient image compression technique, intended especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method's efficiency is similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best algorithms published in the literature across diverse classes of medical images.