We describe video coding algorithms and show coded results based on a three-dimensional spatio-temporal subband decomposition for digital videoconferencing at the ISDN rates of 64 and 128 kbps. The original video material typically consists of scenes showing one or two persons moving in front of a still background, and possibly displaying printed material. The original CIF video signal (240 X 360) temporally downsampled at 7.5 frames-per-second is first low-pass filtered spatially, then decomposed into seventeen spatio-temporal frequency bands--two temporal subbands followed by a cascade of spatial decompositions for each temporal subsequence.
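The two-stage structure described above (a temporal subband split followed by spatial decomposition of each temporal subsequence) can be sketched with simple 2-tap average/difference filters; the Haar-type kernels, normalization and single cascade level below are illustrative assumptions, not the paper's actual filter choices:

```python
import numpy as np

def temporal_split(f0, f1):
    """2-tap temporal analysis on a frame pair: low band = average of the
    frames, high band = half their difference (trivially invertible)."""
    low = (f0 + f1) / 2.0
    high = (f0 - f1) / 2.0
    return low, high      # f0 = low + high, f1 = low - high

def spatial_split(band):
    """One level of separable 2-D analysis -> LL, LH, HL, HH subbands."""
    def split_rows(x):
        return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0
    lo_r, hi_r = split_rows(band)
    ll, lh = split_rows(lo_r.T)
    hl, hh = split_rows(hi_r.T)
    return ll.T, lh.T, hl.T, hh.T
```

Cascading `spatial_split` on the subbands of both temporal subsequences yields the kind of spatio-temporal band tree the paper describes.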
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The advent of widespread mobile communications, together with the continuous development of image communication markets, has led to the idea of offering mobile image communications, particularly mobile videotelephony. Since very low bitrate video coding is still a largely unexplored subject, a large research effort is being devoted to the study of possible solutions. For the moment, two main approaches are foreseen, distinguished by whether or not they are compatible with the CCITT H.261 hybrid coding scheme. This paper proposes some experiments on the CCITT H.261-compatible solution in order to probe its limits, namely when typical mobile videotelephone sequences are used.
The state of the art in video coding at constant bit rates of up to about 1.5 Mbit/s is represented by the MPEG1 recommendation. Interest in low bitrate applications comes from mobile applications, from storage media limitations and from transmission over subscriber copper loops. In this paper, the ability of the MPEG1 standard to work at low bit rates, extrapolated beyond its design range, is studied. A statistical analysis at bit level of the information generated by a compatible MPEG1 scheme is performed. The aim of this study is to obtain statistics of the bit distribution in order to find out how the scheme spends its bits. This information gives essential knowledge for deciding in which modules a bit rate saving can be obtained. A study of reducing the motion vector search range of a compatible MPEG1 scheme is also presented. Simulations have been carried out using a simulator written in the C language. A video tape with the simulation results has been produced.
Three-dimensional (3-D) frequency coding is an alternative to the hybrid coding concepts used in today's standards. However, the lack of Motion Compensation (MC) techniques is a drawback of 3-D coders when compared to hybrid MC coding. This paper presents a 3-D Subband Coding (SBC) scheme with MC, based on a separable structure. Motion-compensated 2-tap Quadrature Mirror Filters (QMFs) are employed for subband decomposition along the temporal axis, and a parallel Time Domain Aliasing Cancellation (TDAC) filter bank is used for decomposition in the spatial domain. The temporal-axis analysis and synthesis operations are performed in a cascade structure. A special substitution technique, which occasionally places unfiltered values into the temporal lowpass band at any stage of the cascade, guarantees perfect-reconstruction synthesis in the case of a non-uniform Motion Vector Field (MVF) with full-pel MC accuracy. With sub-pel accuracy, though coding efficiency is higher, perfect reconstruction is no longer guaranteed. Lattice Vector Quantization (LVQ) has been employed to encode the subband samples. In contrast to hybrid coders, it is straightforward to perform quantization with spatio-temporal perceptual weighting. Coding results are presented which show that standard hybrid coding concepts are outperformed by up to 4 dB by the 3-D MC-SBC/VQ coder.
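The substitution idea (placing unfiltered values into the temporal lowpass band wherever the motion vector field leaves reference pixels unconnected) can be sketched for one full-pel stage. The 2-tap average/difference kernel below stands in for the paper's motion-compensated QMFs, and the handling of multiply-connected pixels is a simplifying assumption of this sketch:

```python
import numpy as np

def mc_haar_analysis(f0, f1, mvf):
    """One motion-compensated 2-tap temporal analysis stage (full-pel MC).
    mvf[y, x] = (dy, dx) maps pixel (y, x) of f1 onto its reference in f0.
    Reference pixels hit by no trajectory keep their unfiltered value in the
    lowpass band -- the substitution that preserves perfect reconstruction."""
    h, w = f1.shape
    low = f0.astype(float).copy()        # default: unfiltered substitution
    high = np.zeros((h, w))
    used = np.zeros((h, w), bool)        # reference pixels already connected
    for y in range(h):
        for x in range(w):
            dy, dx = mvf[y, x]
            ry = min(max(y + dy, 0), h - 1)
            rx = min(max(x + dx, 0), w - 1)
            if not used[ry, rx]:
                used[ry, rx] = True
                low[ry, rx] = (f0[ry, rx] + f1[y, x]) / 2.0
                high[y, x] = (f1[y, x] - f0[ry, rx]) / 2.0
            else:                        # multiply-connected: plain prediction error
                high[y, x] = f1[y, x] - f0[ry, rx]
    return low, high
```

With a zero motion field the stage reduces to a plain temporal Haar split.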
The HDTV studios of the future will require magnetic recorders for the acquisition, storage, editing and broadcasting of HDTV material. Inside these studios, the HDTV signal will probably be a digital signal. Due to the high bit rate of digital HDTV, recorders without bit-rate reduction will be mechanically complex and will have a limited playing time. The complexity of the recorder can be decreased, and the playing time increased, by reducing the bit rate of the HDTV signal before it is recorded on tape. This paper first discusses the constraints on bit-rate reduction for professional HDTV recording and then describes the results obtained with a bit-rate reduction method called motion-adaptive intraframe transform coding.
This paper presents a motion-compensated hybrid DCT scheme developed for scalable recording of HDTV. The scheme allows for a full exchangeability of cassettes between the SD and HD machines. The coding scheme yields high-quality HD pictures, although in some cases the quality of the SD pictures is not sufficient.
The widespread use of mobile telephones and the availability of efficient low bitrate video compression algorithms have recently sparked interest in mobile videotelephony. The usual mobile telephony environments, in-car or handheld, have characteristics that are challenging for video compression algorithms, originating from the vibration of the car and/or small movements of the operator's hand: these movements tend to increase the encoded information, which in turn leads to worse quality at fixed bitrates. This paper presents an extended H.261 codec with global motion compensation and motion vector smoothing which attains better subjective quality than basic H.261 codecs in the adverse environments described. A method is also proposed for global motion cancellation, instead of compensation. This method has the advantage of being compatible with H.261 while reducing the subjective quality degradation due to camera motion/vibration in mobile environments. Closely associated with global motion compensation and cancellation are the global motion detection and motion vector smoothing algorithms, also proposed in this paper.
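H.261 itself provides no global motion tools, so any such extension lives outside the standard syntax. The sketch below illustrates one plausible form of the idea (the global vector as the component-wise median of block vectors, cancellation as a full-pel shift of the incoming frame); it is not the authors' exact algorithm:

```python
import numpy as np

def global_motion(vectors):
    """Estimate a single global (camera) motion vector as the component-wise
    median of the block motion vectors -- robust to foreground outliers."""
    v = np.asarray(vectors, float)
    return np.median(v[:, 0]), np.median(v[:, 1])

def cancel_global_motion(frame, g):
    """Shift the incoming frame by the negated global vector (full-pel) so the
    background appears static to a standard H.261 encoder. np.roll wraps
    pixels around the frame edge; a real codec would pad instead."""
    dy, dx = int(round(-g[0])), int(round(-g[1]))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
```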
Motion estimation is a key element in motion-compensated image coding. Conventional motion estimation algorithms exploit only image intensity information to estimate the motion parameters in an image sequence. Motion estimation is usually performed by searching for the best match of intensity signals between two successive images, and the resulting motion vector fields may be very noisy and unsuitable for some applications such as interframe interpolation. Simple post-processing techniques such as vector smoothing and filtering of noisy vector fields are not very effective at improving the reliability of motion estimation. In this paper, a motion estimation algorithm with vector field smoothness constraints is proposed. Instead of searching for the best match of interframe intensity signals alone, the motion estimator seeks an optimum over both the match of interframe intensity signals and the similarity (or smoothness) of local motion vectors. Experiments indicate that the motion vector fields obtained by this method are much smoother and more homogeneous than those produced by conventional estimation algorithms. Simulations of motion-compensated interframe interpolation further show that the motion parameters estimated by the proposed algorithm are reliable and closely approximate the real physical motion in the scenes.
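A minimal form of such a smoothness-constrained search adds a penalty on a candidate vector's deviation from already-estimated neighboring vectors to the usual intensity-match cost. The SAD criterion and the weight `lam` below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def smooth_search(cur, ref, y, x, bs, rng, neighbors, lam=2.0):
    """Block search minimizing SAD + lam * deviation from neighboring vectors.
    `neighbors` is a list of already-estimated (dy, dx) vectors; lam trades
    intensity match against vector-field smoothness."""
    block = cur[y:y + bs, x:x + bs].astype(float)
    nb = np.mean(neighbors, axis=0) if neighbors else np.zeros(2)
    best, best_cost = (0, 0), np.inf
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
                continue
            sad = np.abs(block - ref[ry:ry + bs, rx:rx + bs]).sum()
            cost = sad + lam * (abs(dy - nb[0]) + abs(dx - nb[1]))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

With `lam = 0` this reduces to the conventional full search; increasing `lam` pulls each vector toward its neighbors, homogenizing the field.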
In video coding it is clearly worthwhile to have a more realistic motion field than can be obtained by the classical mean absolute difference (MAD) full search method. Applications can be found in very low bitrate coding, where the number of bits needed to code the motion field is significant compared to the total bitrate; in codecs that exploit features of the human visual system (classification for optimal bit allocation, use of motion masking, etc.); and in object-based coding, where the motion estimation algorithm interacts with the segmentation procedure. Our research starts from the well-known MAD full search procedure and aims to obtain reasonable results without adding too much complexity. The improvement is performed inside the algorithm, without any need for post-processing. After a more thorough description of these improvements, some results are compared and applications indicated.
In recent years, several algorithms for signal reconstruction from sign information have been developed. In this paper we propose an improved version of the previously presented 1D algorithm, with the following new characteristics: (1) restriction of the input data space to the set of energy-normalized bandpass functions, (2) preservation of the global energy content of the original signal, and (3) implicit determination of the scaling constant. Simulation results show that the restriction of the solution data space to the set of energy-normalized functions leads, in general, to a reconstruction that, even if it does not coincide with the original signal at every zero-crossing position, appears stable in the L2 sense. A complete coding scheme based on the JBIG standard for efficient encoding of the zero-crossings is presented. Encouraging results have been obtained at both the reconstruction and the coding level, showing the feasibility of the global scheme.
This paper addresses the problem of displacement field estimation and segmentation in image sequences. Starting from the Bayesian paradigm, we derive an objective function yielding the MAP estimate with respect to certain model assumptions. It can be interpreted as a measure of how well the estimates explain the image data, regularized by our prior assumptions on the estimates. The observation model we impose draws on experimental studies of the displaced frame difference and of uncovered regions, and involves some unknown parameters. The prior is modelled by a coupled Gibbs/Markov random field. Optimization is performed via deterministic relaxation in a multiscale pyramid, maintaining the structure of the algorithm at all pyramid levels. The unknown parameters of the observation model are estimated iteratively. The relaxation procedure tests only a small number of likely displacement-label candidates at each site. The relationship of the regularization weights across the pyramid is thoroughly investigated. Simulation results with complex natural scenes demonstrate the good performance of the algorithm.
There is no doubt that in the near future a large number of image processing techniques will be based on motion compensation, thus making the cascading of several 'motion compensated' devices in the same image chain very common. A reference scheme for the optimum use of motion compensation in future image communication networks is presented. Motion estimation is performed only once, at a very early stage of the processing chain; the motion information is then encoded, transmitted in a separate data channel and distributed to the cascaded motion-compensated processes. The distribution scenario must take into consideration the various transformations performed on the image signal since its origination, so that the distributed motion information is always consistent with the pictures to be processed. The problems of representing motion relative to a given source image signal and of adjusting it to new frame rate environments are especially addressed.
Bandwidth compression for high definition television (HDTV) relies heavily on 2-D motion parameter acquisition from image sequences. Wu et al. proposed a differential method for the estimation of 2-D motion parameters. In this paper we present a new iterative method to estimate the motion parameters. We use the squared intensity error between two successive frames within the region of interest as an energy function. The steepest gradient descent method is applied directly to this error function, and an iterative algorithm is established without neglecting any higher-order terms. Extensive simulation results show that the algorithm can be successfully applied to motion estimation from image sequences. Two typical tests are demonstrated in the paper. We expect to implement the algorithm on a neural network and realize simultaneous computation.
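A reduced version of this idea, estimating only a global 2-D translation rather than a full motion parameter set, can be written as steepest descent on the squared intensity error. The bilinear interpolation, step size and iteration count below are assumptions of this sketch:

```python
import numpy as np

def estimate_shift(f0, f1, steps=300, lr=0.2):
    """Estimate a global 2-D translation d = (dy, dx) between two frames by
    steepest descent on the squared intensity error
        E(d) = sum_p (f1(p) - f0(p - d))^2,
    using the full analytic gradient (no linearization of f0)."""
    def sample(img, y, x):
        # bilinear interpolation with edge clamping
        y = np.clip(y, 0, img.shape[0] - 1.001)
        x = np.clip(x, 0, img.shape[1] - 1.001)
        y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
        fy, fx = y - y0, x - x0
        return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1]
                + fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])
    gy, gx = np.gradient(f0)
    ys, xs = np.mgrid[0:f0.shape[0], 0:f0.shape[1]].astype(float)
    d = np.zeros(2)
    for _ in range(steps):
        e = f1 - sample(f0, ys - d[0], xs - d[1])          # residual image
        # dE/ddy = 2 * sum e * f0_y(p - d); move against the gradient
        d -= lr * 2 * np.array([(e * sample(gy, ys - d[0], xs - d[1])).sum(),
                                (e * sample(gx, ys - d[0], xs - d[1])).sum()])
    return d
```

In practice a Gauss-Newton step on the same error function converges in far fewer iterations; plain steepest descent is shown here because it is the method the abstract names.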
The objective of this paper is to present a new object-based image coding technique using morphological segmentation. These are the first results toward the final objective of proposing a completely new coding/decoding scheme for storage and transmission applications based on Mathematical Morphology. The paper presents a new object-based image coding algorithm that involves three main processing steps: segmentation, coding of contours and coding of the inside. The three fundamental coding steps of our approach work on a multiscale representation of the data. The coding of contours represents the shape and location of each region and is based on chain-code techniques. The coding of the inside consists of modeling the gray-level function of the image and filling each region with this model. Orthogonal polynomials are used for inside coding, and bit allocation techniques are developed such that efficient compression rates are obtained. Several computer-generated images are presented that show good visual results for a variety of compression ratios. The techniques can also be applied to image sequences. Current research is under way to propose new coding techniques for both the contour and the inside coding using Mathematical Morphology.
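The "coding of the inside" step, filling each region with a low-order model of its gray-level function, can be sketched with an ordinary least-squares polynomial fit over the region mask. The paper uses orthogonal polynomials, which improve conditioning and simplify bit allocation but span the same model space as the monomials assumed here:

```python
import numpy as np

def fit_region(img, mask, degree=2):
    """Approximate the gray levels inside an arbitrarily shaped region (mask)
    by a 2-D polynomial fitted in the least-squares sense."""
    ys, xs = np.nonzero(mask)
    # monomial basis 1, y, y^2, x, xy, x^2, ... up to total `degree`
    cols = [xs ** i * ys ** j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1).astype(float)
    coef, *_ = np.linalg.lstsq(A, img[ys, xs].astype(float), rcond=None)
    filled = np.zeros_like(img, dtype=float)
    filled[ys, xs] = A @ coef       # model value at every region pixel
    return filled
```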
A new method for image sequence coding involving a wavelet packet transform is proposed to compact the energy of the transformed signal while preserving acceptable reconstruction quality. Based on an interframe coding scheme, the sequence is split into groups of frames in which successive frames are similar. The best basis is generated for the first intra frame, the first inter-predicted frame and the first inter-interpolated frame of each group, and retained for the corresponding following frames to reduce computational complexity. Two different methods for on-line selection of the best basis are presented. The choice of the proposed technique is validated by experimental results, in which a codec based on a motion-compensated wavelet packet transform is presented.
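Best-basis selection of this kind compares the cost of keeping a node of the wavelet-packet tree against the total cost of its children, in the style of Coifman and Wickerhauser. The 1-D Haar kernel and the significant-coefficient-count cost below are illustrative assumptions, not the paper's on-line selection criteria:

```python
import numpy as np

def haar_step(s):
    """One orthonormal 1-D Haar analysis step (even length assumed)."""
    a, b = s[0::2], s[1::2]
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def cost(c, thr=0.5):
    """Basis-selection cost: number of significant coefficients."""
    return int(np.sum(np.abs(c) > thr))

def best_basis(s, max_level=3):
    """Split a node of the wavelet-packet tree only if its children together
    cost less than keeping the node itself; returns (leaf list, total cost)."""
    if max_level == 0 or len(s) < 2:
        return [s], cost(s)
    lo, hi = haar_step(s)
    blo, clo = best_basis(lo, max_level - 1)
    bhi, chi = best_basis(hi, max_level - 1)
    if clo + chi < cost(s):
        return blo + bhi, clo + chi
    return [s], cost(s)
```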
In MPEG-compatible image sequence encoding platforms, the discrete cosine transform is used both for encoding and for resolution scalability purposes. The paper describes techniques for introducing wavelet transforms in frequency-scalable applications. Using the fact that wavelets allow perfect reconstruction of low-resolution low-frequency images, the proposed approach permits the implementation of variable-resolution decoders from an already available nonscalable MPEG-encoded bitstream. A further improvement in quality for the low-resolution sequences may be obtained through conventional reconstruction techniques (e.g. drift correction) or by altering the wavelet filter banks according to error metrics computed at the encoder.
Data compression is quite relevant for the storage and transmission of museum data. This work reports on the performance of subband coding applied to pictures of paintings acquired with an HDTV camera. The performance of different analysis/synthesis filter banks and coding schemes is compared. The best results are obtained with Johnston's filters and a uniform threshold quantizer. The effectiveness of a scalable use of subband coding in this application is also reported.
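A uniform threshold quantizer of the kind mentioned above widens the zero bin into a dead zone of twice the step size, which suppresses low-energy subband noise. A minimal sketch (the exact bin placement and reconstruction levels are assumptions):

```python
import numpy as np

def utq(x, step):
    """Uniform threshold (dead-zone) quantizer: values within +/- step map to
    zero; outside the dead zone, indices are uniform and reconstruction is at
    the bin midpoint."""
    idx = np.sign(x) * np.floor(np.abs(x) / step)
    rec = np.sign(idx) * (np.abs(idx) + 0.5) * step
    return idx.astype(int), rec
```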
This paper describes a subband coding algorithm which was submitted to MPEG (Moving Picture Experts Group) for subjective testing during the Kurihama meeting in November '91. It presents investigations, most of which have been carried out in the framework of the collaboration between the European projects VADIS and COST 211. The main idea is to achieve a simple but robust coding scheme which is flexible enough to handle future requirements, such as TV and HDTV compatibility. The field-merged frames are coded in 16 subbands. Motion-compensated prediction of the frames reduces the temporal redundancy. The results obtained show that subband coding is a competitive alternative to other coding methods.
In this work, we present a technique which uses symmetry to reduce the redundancy in images. A symmetry-based image segmentation and coding scheme is described. A segmentation technique is analyzed and applied to natural images, efficiently partitioning them. In order to find symmetries in regions of any shape, the concept of axes of symmetry is generalized to skeletons of symmetry, dividing the regions into two symmetric subparts by means of the Medial Axis Transformation of the regions. Each subpart of a region is then linearly predicted with respect to the skeleton. An efficient coding strategy specifying the shape and the luminance of the regions is described. Results on natural images show that the described technique outperforms the more classical second-generation image coding methods in terms of visual quality.
A hierarchical data structure for the representation of document images in vector form is suggested, which allows storing in compact form all needed information about connected components, segments and feature points. The processing steps for obtaining this data structure are described. A fast one-pass algorithm for the transformation of a large thinned image into vector form is proposed. The defects which can exist in the vector representation are extracted, and an algorithm for their reduction is briefly described. Experimental results are also shown.
This paper presents a method for fast encoding of still images based on iterated function systems (IFSs). The major disadvantage of this coding approach, usually referred to as fractal coding, is the high computational effort of the encoding process compared to, e.g., the JPEG algorithm. This is mainly due to the costly 'full search' of the transform parameters within a fractal codebook. We therefore propose a hierarchical encoding scheme based on a two-level codebook search and a structural classification of its entries. In this way only a small subset of the codebook has to be considered, which increases encoding speed significantly. Refining the initial codebook and applying a second search further increases the reconstruction quality compared to the full search, at a fraction of its computational effort.
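The two ingredients, a structural classification of codebook entries and a least-squares fit of the contrast/brightness transform parameters, can be sketched as follows. Real fractal coders draw their codebook from downsampled domain blocks and also test isometries, which this simplified version omits; the quadrant-mean classification is one common choice, assumed here:

```python
import numpy as np

def classify(block):
    """Rank order of the four quadrant means -> one of 24 structural classes;
    only same-class codebook entries need to be searched."""
    h, w = block.shape
    q = [block[:h // 2, :w // 2].mean(), block[:h // 2, w // 2:].mean(),
         block[h // 2:, :w // 2].mean(), block[h // 2:, w // 2:].mean()]
    return tuple(np.argsort(q))

def best_match(range_block, codebook):
    """Search codebook entries of the matching class; for each, fit contrast s
    and brightness o by least squares and keep the lowest squared error.
    `codebook` is a list of (class, block) pairs of the same block size."""
    cls = classify(range_block)
    r = range_block.ravel().astype(float)
    best = None
    for idx, (c, dom) in enumerate(codebook):
        if c != cls:
            continue
        d = dom.ravel().astype(float)
        A = np.stack([d, np.ones_like(d)], axis=1)
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = float(np.sum((s * d + o - r) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best    # (error, codebook index, contrast, brightness) or None
```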
ISO/MPEG has specified an algorithm for the compression of video sequences at 1.5 Mbit/s. Meanwhile, additional features must be provided by the decoding system in order to build interactive applications. Support of special modes (slow, reverse, fast) and direct interfacing with graphics and video systems give added value to the overall system. In this paper, we describe a one-chip MPEG1 video decoder, DIVA (Decoder for Interactive Video Applications), dedicated to interactive applications. It implements the decoding of the MPEG core bitstream and is able to decode a compressed video bitstream at up to 5 Mbit/s. The architecture is organized around a single memory bank of DRAMs used in fast page mode to reach the performance required by the algorithm. After decoding, output pictures are resampled at CCIR 601 resolution and can be generated at 13.5 MHz for interlaced systems or at 25.175 MHz for (VGA) PC systems. An instruction set was developed to drive the chip, so that an application can load instructions into the decoder well in advance of their execution, thus relieving the CPU of the scheduling task. DIVA was designed with an ASIC approach in a 0.8 micron technology. The chip contains 600,000 transistors in a die size of 11 X 11 sq mm and dissipates 1.3 W at 27 MHz.
A multiprocessor architecture for compact realizations of video coding applications is presented. The current standards for video coding, e.g. H.261 and MPEG, are based on a hybrid coding scheme which allows parallelization at both the data level and the task level. Parallelization at the data level is performed by distributing image data among the processors; each processor works on locally stored image segments. Parallelization at the task level is realized inside the processors by functional modules adapted to classes of algorithms. The functionality of the modules and the number of their data paths are determined by efficiency calculations, resulting in a module for motion estimation and a block-level coprocessor for transform and quantization. Control and synchronization are accomplished by a programmable module, and a hierarchical control concept reduces the on-chip control overhead. A chip size of 70 mm2 is estimated for one processor in a 0.6 micrometer CMOS technology. With an operating frequency of 65 MHz, one chip will perform the computations for a full-CIF H.261 codec with a 30 Hz frame rate and motion estimation based on a +/-15 pel full-search block-matching algorithm.
This paper provides an overview of subband and wavelet theories, emphasizing their strong relationship. The practical merits of these decomposition techniques in signal processing are examined. The current status of this active research field is summarized, and the paper concludes with a discussion of potential extensions for future study.
Networks are an important component of a picture archiving and communication system (PACS). The development of PACS has partially been a function of the availability of network technology. On the other hand, the specific needs of medical image transfer have, although to a somewhat lesser extent, impacted changes in network technology. Most important is the development of a digital imaging and communications standard (DICOM) for medical image transfer which could lead to a more universal 'image transfer' standard to be used outside of the medical field. The need for large image data transfer also promoted the early use of new technologies such as asynchronous transfer mode (ATM) packet switched networks and other high speed networks. This paper will review the development of computer networks, the history of PACS and their mutual interaction.
The transmission of multimedia page sequences in real-time tele-education presentations via narrowband links results in unacceptable end-to-end delays. This paper proposes a time-shifting method, referred to as pretransfer, which transfers presentation data in the background during the session, without user involvement. Point-to-point and multipoint protocols are discussed. For multicast situations, an effective page-scheduling method is developed.
A comprehensive analysis of image communication requirements in hospitals led to the conclusion that current LANs do not offer the data throughput necessary to perform the basic functionality of Picture Archiving and Communication Systems (PACS), namely image collection and storage, under routine workload. Extended PACS functionalities like online consultation sessions and remote image processing will require even more transmission capacity from the networks. Because images carry the most comprehensive form of diagnostic information and have very high data volumes, e.g. 50 Mbit for an x-ray chest image, only lossless compression algorithms are acceptable. Compression will at most reduce the throughput requirements by a factor of 3 and brings additional complexity into the PACS. Thus image networking is a central issue for PACS.
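The throughput argument can be checked with back-of-the-envelope arithmetic. The 50 Mbit image size and the factor-of-3 lossless compression bound come from the text; the LAN rate and effective utilization below are illustrative assumptions.

```python
# Rough check of the throughput argument above; the 10 Mbit/s Ethernet rate
# and 40% effective utilization are illustrative assumptions, not figures
# from the paper.

image_mbit = 50.0          # x-ray chest image, as stated in the text
compression_factor = 3.0   # upper bound for lossless compression, per the text
lan_mbps = 10.0            # classic shared Ethernet (assumption)
utilization = 0.4          # effective share of a loaded LAN (assumption)

compressed_mbit = image_mbit / compression_factor
transfer_s = compressed_mbit / (lan_mbps * utilization)   # ~4.2 s per image
```

Even after the best-case lossless compression, a single chest image occupies such a shared LAN for several seconds, which is the core of the argument that image networking needs dedicated high-speed links.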
Facing the introduction of full-scale PACS, the accent is undoubtedly on data transport and management, which are mission critical. Not only will the techniques chosen, such as the network architecture (certainly broadband based) and communication protocols (ATM), influence the performance, reliability and maintenance of the system, but, perhaps to a much larger extent, so will an integrated view of PACS data management. The latter approach in our research team has been called NOSS, the Network Object Server System. The concept allows the storage of any kind of object that may emerge in the future; of particular interest are multimedia developments. Object distribution is governed by the intelligence of the NOSS, which incorporates the requirements of the applications. The second accent in the project is on the distributed multi-vendor environment, with a focus on integrability. Standardization in the field of interoperability will allow us to take advantage of services and applications like remote image processing, and to offer image distribution and management services. The current state of the project is a thorough analysis of the related standards and the work of new standards commissions, with the results reflected in a functional model. Feasibility and performance will be tested in a prototype running at the AZ-VUB.
In the EC project TELEMED (R1086), the feasibility of digital image communication for remote expert consultation in radiology is investigated. Three modes are considered: batch mode (normal case), interactive mode (emergency case), and mixed mode (teaching situation). To allow short transmission and response times for the voluminous data files, 2 Mbit/s networks and 384 kbit/s ISDN are employed to interconnect eight European university hospitals. A set of requirements was derived, and software has been developed for image and text communication, for image display and manipulation, and for synchronization of remote workstations. From the experience gained during experimentation regarding the appropriate design of networks, transmission rates and data throughput, user acceptance, integration, and standardization, recommendations for future routine application could be derived.
Picture archiving and communication systems (PACS) have to handle many moving- and still-image services with different resolution levels (HDTV-like, TV-like and lower resolutions), quality requirements (from acceptable visual quality to lossless) and different priorities. In this paper we present a new system for the integration of hierarchical image transmission in a broadband network based on the DQDB protocol (IEEE 802.6 standard). The system combines two new tools. First, we have designed a multiresolution/multiquality image coding scheme based on wavelet transforms; its main original feature is that it allows multiresolution lossless access to images, while a greater compression ratio is possible at the price of graceful degradation. Second, we have implemented an improved priority mechanism in the DQDB protocol, so that in case of network overload transmissions lose quality but are never interrupted. A simulation of the implementation of such a system for handling the communications inside a hospital is also presented.
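The idea of multiresolution *lossless* access can be illustrated with the integer S-transform, a Haar-like wavelet step whose low band is a half-resolution view and whose output reconstructs the original samples exactly. This is a minimal sketch of the principle only; the paper's actual wavelet scheme is not specified here.

```python
# Minimal sketch of lossless multiresolution access via the integer
# S-transform (a Haar-like wavelet step). The low band alone gives a
# half-resolution preview; both bands together reconstruct the signal
# exactly, i.e. losslessly. Illustrative only, not the paper's scheme.

def s_transform(x):
    """One 1-D decomposition level: returns (low, high) integer bands."""
    low  = [(a + b) // 2 for a, b in zip(x[0::2], x[1::2])]
    high = [a - b        for a, b in zip(x[0::2], x[1::2])]
    return low, high

def inverse_s_transform(low, high):
    """Exact reconstruction from the two bands."""
    x = []
    for l, h in zip(low, high):
        b = l - h // 2      # Python's // floors, matching the forward step
        a = b + h
        x += [a, b]
    return x

signal = [12, 14, 200, 198, 7, 9, 64, 64]
low, high = s_transform(signal)
assert inverse_s_transform(low, high) == signal   # lossless round trip
```

Cascading the transform on the low band yields the resolution hierarchy; discarding or coarsely quantizing the high bands is what trades lossless access for a greater compression ratio with graceful degradation.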
In the frame of the EC-sponsored TELEMED project, videoconferencing was evaluated as a tool for Remote Expert Consultation (REC) in radiology. The REC environment has been established as a pan-European demonstrator for advanced telecommunication applications in medicine. Five European university hospitals have been interconnected using broadband networks at speeds of 2 Mbit/s and 140 Mbit/s, and videoconferencing was evaluated in more than 120 sessions. In the experiments, videoconferencing proved to be a useful tool for teleconsultations, provided that a set of requirements identified as indispensable for medical videoconferencing, such as a remotely adjustable iris for the document camera, is fulfilled. The diagnostic reliability of images recorded with our equipment proved acceptable for digitally acquired images, while reliability is limited for conventional images with high demands on spatial and contrast resolution. Regional spin-off applications that have been established at some participating sites underline the potential of videoconferencing in health care.
A Global PACS is a national network which interconnects several PACS networks at medical and hospital complexes using a national backbone network. A Global PACS environment enables new and beneficial operations between radiologists and physicians located in different geographical locations. One operation allows radiologists to view the same image folder at both the Local and the Remote site so that a diagnosis can be performed. The paper describes the user interface, database management, and network communication software which has been developed in the Computer Engineering Research Laboratory and the Radiology Research Laboratory. Specifically, a design for a file management system in a distributed environment is presented. In the remote consultation and diagnosis operation, a set of images is requested from the database archive system and sent to the Local and Remote workstation sites on the Global PACS network. Viewing the same images, the radiologists use pointing overlay commands, or frames, to point out features on the images. Each workstation transfers these frames to the other workstation, so that an interactive diagnosis session takes place. In this phase we use fixed-size frames as well as variable-size frames that outline an object. The data packets for these frames traverse the national backbone in real time; we accomplish this by using TCP/IP protocol sockets for communication. The remote consultation and diagnosis operation has been tested in real time between the University Medical Center and the Bowman Gray School of Medicine at Wake Forest University, over the Internet. In this paper we show the feasibility of the operation in a Global PACS environment. Future improvements to the system will include real-time voice and interactive compressed video.
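The overlay-frame exchange can be sketched as a small fixed-size record that each workstation packs and sends over its TCP socket, so both sides draw the same rectangle on the shared image. The field layout, names, and type flag below are assumptions for illustration, not the paper's actual wire format.

```python
# Hypothetical sketch of a pointing-overlay frame record exchanged between
# Local and Remote workstations over TCP/IP sockets. Field layout and names
# are assumptions, not the actual Global PACS wire format.
import struct

FRAME_FMT = "!HHHHB"   # x, y, width, height, frame_type; network byte order

def pack_frame(x, y, w, h, variable=False):
    """Serialize one overlay frame; variable-size frames outline an object."""
    return struct.pack(FRAME_FMT, x, y, w, h, 1 if variable else 0)

def unpack_frame(data):
    """Decode a received frame record back into drawing parameters."""
    x, y, w, h, ftype = struct.unpack(FRAME_FMT, data)
    return {"x": x, "y": y, "w": w, "h": h, "variable": bool(ftype)}

# In a real session the bytes would travel over a connected TCP socket:
#   sock.sendall(pack_frame(120, 80, 64, 64))
msg = pack_frame(120, 80, 64, 64, variable=True)
frame = unpack_frame(msg)
```

Because TCP delivers the records reliably and in order, both workstations replay the same sequence of frames, which is what keeps the interactive session consistent across the backbone.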
The complexity of information representation in the medical domain is presented, and the use of standard application-oriented information architectures for multimedia medical information is discussed. An object-oriented approach and the use of a document metaphor as a means of creating a framework that can accommodate this complexity are introduced. Existing standards for document descriptions are examined and their suitability as an architectural foundation for such a framework is discussed. An ultrasound laboratory report is discussed in some detail as a case study for this architecture.
Cooperative working with hypermedia documents has applications in many areas where it is necessary for geographically dispersed people to jointly and interactively converse over a common information pool. Scenarios within the medical sphere include: remote and tele-consultation, remote diagnosis and wide area conferencing. In this paper we outline a cooperative working system (TeCo) for hypermedia documents in the medical sphere. With this background we examine the OSI/ODP concepts necessary when realizing such a system. In particular, we demonstrate the document structure facilities supported by the SGML/HyTime standard to express user needs in viewing and handling documents for specific user roles.
This paper describes the computation of the adapted normalization matrices as they were developed in the 'Structural Analysis and Coding of Images' study, in which we elaborated an adaptive compression method based on a parametrical segmentation of the images. After a training phase, a specific adapted normalization matrix is associated with each defined parametrical class, to be used in the Discrete Cosine Transform (DCT) domain in much the same way that the Joint Photographic Experts Group (JPEG) method performs its normalization. Obviously, the accuracy of the segmentation and of the matrix values are the critical points that determine the performance of this method.
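The mechanism can be sketched as follows: the segmentation assigns each block a class, and the class selects the normalization applied to that block's DCT coefficients, exactly as JPEG applies its single quantization table. The class labels and step sizes below are illustrative assumptions (a full 8x8 matrix is collapsed to one step per class to keep the sketch short).

```python
# Minimal sketch of class-adapted normalization: each parametrical class
# selects its own normalization step, applied to DCT coefficients the same
# way JPEG applies its quantization table entries. Class labels and step
# sizes are illustrative assumptions, not trained values from the study.

MATRICES = {                 # per-class "matrix" collapsed to one step size
    "texture": 16,           # coarser quantization for busy regions
    "smooth":  4,            # finer quantization preserves smooth gradients
}

def normalize_block(dct_block, class_label):
    """Quantize an 8x8 block of DCT coefficients with the class's step."""
    q = MATRICES[class_label]
    return [[round(c / q) for c in row] for row in dct_block]

def denormalize_block(levels, class_label):
    """Inverse normalization used at the decoder."""
    q = MATRICES[class_label]
    return [[v * q for v in row] for row in levels]

block = [[33.0] * 8 for _ in range(8)]
smooth_levels  = normalize_block(block, "smooth")    # fine step: 33/4
texture_levels = normalize_block(block, "texture")   # coarse step: 33/16
```

The decoder must know the class map, so the segmentation is transmitted as side information; its accuracy, together with the trained matrix values, determines the rate-distortion performance, as the abstract notes.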
JPEG is a very versatile coding and compression standard for single images, but medical images make higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes; the performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective waiting time and thus the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care because repeated lossy coding and decoding degrades image quality; the amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality; it will therefore be an embedded coding format in standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.
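The spatial predictors under investigation are the seven defined by JPEG's lossless mode, each predicting a sample from its causal neighbours; the encoder then entropy-codes the prediction residual, which is fully reversible. The selection values below follow the JPEG standard; the sample values are made up for illustration.

```python
# The seven JPEG lossless-mode spatial predictors, where a is the left
# neighbour, b the neighbour above, and c the upper-left neighbour.
# Selection values 1-7 follow the JPEG standard (ITU-T T.81).

def jpeg_predict(a, b, c, selection):
    """Predict a sample from its causal neighbours."""
    return {
        1: a,
        2: b,
        3: c,
        4: a + b - c,            # "plane" predictor
        5: a + (b - c) // 2,
        6: b + (a - c) // 2,
        7: (a + b) // 2,
    }[selection]

# The encoder transmits the residual x - prediction, which the decoder adds
# back to the same prediction: a lossless round trip. Sample values made up.
a, b, c, x = 100, 104, 98, 103
residual = x - jpeg_predict(a, b, c, 4)   # predictor 4 gives 100+104-98 = 106
```

Which predictor performs best depends on the local image structure, which is why the paper measures them against medical data rather than assuming one default.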
The main goal of this work is to transfer and transform the underlying information between a video camera acting as the picture source and the processor memory as quickly and as easily as possible. For this purpose a simple special-purpose camera was built for evaluation use; the work is considered a feasibility study. Communication into the processor's memory was accomplished by using transputer links. The basic idea was to generate clock signals according to the processor's operation and thus shift information directly into the processor memory without an intermediate buffer. Synchronization was performed on a pixel basis in order to input the original information precisely without oversampling. The result is faster transfer of, and access to, the visual information, with less data that still contains all the information available in the camera. The technical results are illustrated using the sample camera, which is based on LCA control circuitry and uses a transputer as the processing element.
The paper is concerned with theoretical and practical problems related to real-time acquisition and storage of standard PAL/SECAM images, as well as with developing fast software algorithms for digital image processing. The first section of the paper describes the block diagram of the PC plug-in conversion unit; the second covers practical image data compression and basic image processing algorithms.
This contribution describes a Videotex terminal with photographic capabilities based on a personal computer (PC). The system takes advantage of a recently available transport medium, the Integrated Services Digital Network (ISDN), which represents a significant improvement in the performance of communication systems, and is intended to offer a new and practical service while maintaining compatibility with Alphamosaic Videotex.