This paper discusses the issues involved in putting the media into a hypermedia system. The main argument of the paper is that, to date, media representation and underlying link strategies have been tied together too closely in the move from hypertext to hypermedia. We argue that it is necessary to separate the issues of media from link structure, and we present a model which solves some of the problems of genuine media integration in a hypermedia system. At the same time the model provides support for the creation of links between data of different media types in a conceptually meaningful way. The paper describes the design of Microcosm++, an object-oriented, extensible, service-based architecture for building consistent integrated hypermedia systems. It is based on the Microcosm hypermedia system which was developed at Southampton. The current implementation of Microcosm++ demonstrates the flexibility of object-based services for making hypermedia more viable in a working environment. The approach described reduces authoring effort significantly while at the same time increasing the integrity of the link structures and providing a unified model for media integration.
MEHIDA is a multimedia system offering hearing-impaired children an easy and attractive way to communicate with their hearing and deaf peers. It is a TOTAL COMMUNICATION method whose objective is the simultaneous acquisition of the various forms of communication available to the hearing impaired: gesture, speech, dactylology, formal signing, lip reading, reading and writing. Didactic activities and games are used to teach the different means of communication, and the approach gives the child the chance to practice the different types of communication. A character in the shape of a pear has been created to assist and guide the child. The pupil identifies with the character at all times, as it explains what the child is being asked to do during each activity. The MEHIDA learning process is divided into six stages: basic learning, prereading and prewriting, and the reading and writing of syllables, words, and simple and complex sentences. Each phase establishes a hierarchy of didactic objectives which express the skills and knowledge to be acquired by the child during the learning process (e.g., learning concepts of similarity), broken down into a series of lower-level operational objectives (e.g., selecting figures of the same shape, size and color).
The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library; one cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediations to existing contents, document-representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to extract structural information from the document contents automatically. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of a program aimed at developing, for image and sound documents, an experimental counterpart to the library's digitized-text reading workstation.
This paper describes an approach to an object model for multimedia interactions. Multimedia interactions differ from common interactions in that they require sophisticated analysis of input data and are usually time-based. They can be modeled as an extension of pure multimedia presentations, as we have done. The ideas and concepts behind multimedia interactions are the subject of research in several computer science domains such as visual computing, artificial intelligence, pattern recognition and image analysis. The proposed object model is a new approach that arranges the results from these different research domains in an object framework so that they can be used in a real-world multimedia toolkit. To the author's knowledge, no other such approach exists. By extending the existing object model for multimedia presentations of the multimedia user interface toolkit MME, it is possible to integrate interactions smoothly into our system. The structure of the paper is as follows: in the first part, the notion of multimedia interactions is introduced, and different types and examples of such interactions are given together with a glance at related work in this area. In the second part, the object model for multimedia interactions is presented; it is closely related to the object model of MME. The remaining parts of the paper briefly describe some interesting implementation issues and future developments for both the object model and the implementation.
Multimedia research has mainly focused on real-time data capturing and display combined with compression, storage and transmission of these data. However, a further problem concerns selecting and arranging, in real time, a possibly large amount of data from multiple media on the computer screen, together with the textual and graphical data of regular software. This problem is already known from complex software systems, such as CASE tools and hypertext, and will be aggravated further in multimedia systems. The aim of our work is to relieve the user of the burden of continuously selecting, placing and sizing windows and their contents, without introducing solutions limited to only a few applications. We present an experimental system which controls the computer screen contents and layouts, directed by user- and/or tool-provided information filtering and prioritization. To be application independent, the screen layout is based on general layout optimization algorithms adapted from VLSI layout, which are controlled by application-specific objective functions. In this paper, we discuss the problems of a comprehensible screen layout, including the stability of optical information over time, the information filtering, the layout algorithms and the adaptation of the objective function to a specific application. We give some examples of different standard applications with layout problems ranging from hierarchical graph layout to window layout. The results show that automatic, tool-independent display layout is possible in a real-time interactive environment.
The proposed scheme of pyramid broadcasting is a new way of providing Video-on-Demand service. In pyramid broadcasting, the most frequently requested movies are multiplexed on the broadcast network, resulting in a radical improvement in access time and efficient bandwidth utilization. This is achieved by using storage at the receiving end. As the available bandwidth increases, the improvement in access time is exponential, as opposed to the merely linear improvement obtained with conventional broadcasting: the larger the bandwidth of the network, the greater the access-time gain due to pyramid broadcasting. Conversely, as the access time requirement decreases, the bandwidth requirement of conventional broadcasting increases linearly while that of pyramid broadcasting increases only logarithmically. We provide analytical and experimental evaluations of pyramid broadcasting, based on its implementation on an Ethernet LAN, illustrating its advantages.
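To make the access-time claim concrete, the sketch below uses a simplified model of our own (a fixed playback rate b, each pyramid channel running at three times b, and segment lengths growing geometrically), not the paper's analysis; it compares the worst-case start-up delay of staggered conventional broadcasting with that of a pyramid-style scheme as the total broadcast bandwidth B grows.

```python
# Illustrative sketch under the assumptions stated above; D is the movie length
# in seconds, b the playback rate and B the total broadcast bandwidth in b/s.

def conventional_delay(D, B, b):
    # B is split into B/b channels replaying the whole movie at staggered
    # starts, so the worst-case wait is one stagger interval.
    return D / (B / b)

def pyramid_delay(D, B, b, channel_factor=3.0):
    # Each logical channel gets rate c = channel_factor * b.  With K = B // c
    # channels, segment i+1 may be up to c/b times longer than segment i and
    # still be prefetched while segment i plays.  The worst-case wait is one
    # broadcast cycle of the shortest (first) segment.
    c = channel_factor * b
    K = int(B // c)
    alpha = c / b
    d1 = D * (alpha - 1) / (alpha ** K - 1)   # duration of the first segment
    return d1 * b / c                         # time to loop segment 1 once

D, b = 100 * 60, 1.5e6                        # a 100-minute movie at 1.5 Mb/s
for B in (12e6, 24e6, 48e6):
    print(f"B={B / 1e6:4.0f} Mb/s  conventional={conventional_delay(D, B, b):6.1f} s"
          f"  pyramid={pyramid_delay(D, B, b):8.2f} s")
```

In this toy model, doubling the bandwidth halves the conventional delay but shrinks the pyramid delay by roughly an order of magnitude, which is the behavior the paper quantifies.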
In a video-on-demand (VOD) system, it is desirable to support a pause-resume function. The requirement that each viewer be able to independently pause the video at any instant and later resume viewing with little delay can cause difficulties in batching viewers for each showing. The conventional approach to supporting on-demand pause-resume provides one video access stream to disks for each video request. In this paper, we propose a more efficient mechanism to support the pause-resume feature using look-ahead scheduling with look-aside buffering. The idea is to use buffering to increase the number of concurrent viewers that can be supported. The concept of look-ahead scheduling is not to back up each viewer with real stream capacity so that the viewer can pause and resume at any time, but rather to back it up with a (look-ahead) stream that is currently being used for another showing that is close to completion. Before the look-ahead stream becomes available, the pause and resume features are supported by the original stream through (look-aside) buffering of the missed content. It is shown via simulations that the proposed scheme can provide a substantial improvement in throughput compared to the approach with no batching. Furthermore, for a given amount of buffer, the improvement in throughput grows more than linearly with the stream capacity of the server.
Packet-switching-based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in a packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of video conferencing. Some of the above problems can be solved by more advanced network architectures, as ATM has promised. This paper presents solutions that can be applied at the end-station terminals of the packet-switching networks that are massively deployed today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. The compression-domain buffer management unit is also responsible for concealing the effects of clock mismatch, lip-synchronization errors, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
This project is aimed at developing a cost-effective working environment for the transfusion medicine specialists of the American Red Cross (ARC). In this project we are developing a multimedia-based consultation environment that uses the Internet and teleconferencing to increase the quality of services and to replace the 800 telephone lines currently in use. Through the use of Internet/LAN/ISDN, physicians can share information and references while they discuss patient cases. A multimedia interface allows the physician to access data from the office and from home. This paper discusses the approach, the current status of the project, and future plans to extend the approach to other areas of medicine.
This paper describes the software architecture for the ATM network interface of a prototype multimedia terminal. The terminal is designed to support continuous media and data communications over a low-speed ATM network and is based on a UNIX workstation connected to a commercial ATM LAN. The interface architecture addresses complex multimedia call handling, quality of service (QoS) and resource allocation issues. The terminal will be used to investigate methods for handling degradation in QoS on defence ATM networks incorporating low-speed network links where QoS is time variant.
The concept of schedulable region, previously introduced for broadband networks with quality of service guarantees, is extended to multimedia devices such as audio/video processing and disk storage units. The resulting multimedia capacity region characterizes the amount of resources a physical device is able to provide under quality of service constraints. The modeling methodology supports a straightforward association of resources with logical objects and, thereby, the mapping of logical objects onto physical objects with quality of service guarantees. Examples showing the size and shape of the multimedia capacity region of various physical devices are given.
Support for real-time multimedia applications is becoming an essential requirement for future high-speed networks. Many of these real-time applications will require guaranteed quality of service (QoS), such as a bound on the maximum message delay and/or on the maximum message loss rate. This poses an exciting challenge to high-speed transport protocol design and implementation. In this paper, we study resource and admission control algorithms for real-time transport connections, and give a necessary and sufficient condition for the schedulability of n real-time transport connections at a destination host under a deadline scheduling policy for deterministic guarantees of QoS. We propose the deadline scheduling with interrupt points policy, under which we give a necessary and sufficient condition for the schedulability of n real-time transport connections for statistical guarantees of QoS. These necessary and sufficient conditions form the mathematical basis for QoS guarantees in transport communication services. Based on these conditions, we give connection admission control algorithms for deterministic and statistical guarantees of QoS, respectively. We also calculate the buffer space needed for each real-time transport connection. Our results could be applied to other fields as well, such as real-time operating systems.
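For orientation only, and not necessarily the exact form derived in the paper, the classical utilization test for preemptive earliest-deadline-first scheduling of n periodic streams is the kind of condition at issue here (notation assumed: C_i is the per-period processing demand of connection i and T_i its period):

```latex
% Classical EDF schedulability test for n periodic streams (illustrative only;
% the paper derives its own necessary and sufficient conditions).
\sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; 1
```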
The OS/2 Resource Reservation System supports bandwidth reservation in the LAN Server Ultimedia multimedia file server. This paper describes how audio/video streams are specified and managed by the resource reservation system. The problem of variable-bit-rate stream utilization is considered: it is shown that the peak rate of a variable-bit-rate stream can be reduced as the size of the destination buffer is increased beyond a single block. A buffered-peak-rate descriptor is presented that computes the peak rate of a variable-bit-rate stream for a particular client buffer size. The framework and interfaces of the resource reservation system are described.
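A minimal sketch of the idea behind such a descriptor, under our own simplifying assumption that a client buffer of k blocks lets the server track the stream averaged over k-block windows (the actual descriptor in the system may differ in detail):

```python
# Hypothetical illustration: the reservable "buffered peak rate" is the maximum
# windowed average rate rather than the single-block peak, so it falls as the
# client buffer (k blocks) grows.

def buffered_peak_rate(block_bits, block_seconds, k):
    # Maximum average rate over any window of k consecutive blocks.
    window_sums = [sum(block_bits[i:i + k])
                   for i in range(len(block_bits) - k + 1)]
    return max(window_sums) / (k * block_seconds)

# A toy VBR trace: mostly small blocks with occasional large bursts.
trace = [40_000, 42_000, 300_000, 45_000, 41_000, 280_000, 43_000, 44_000]
for k in (1, 2, 4):
    rate = buffered_peak_rate(trace, 0.033, k)
    print(f"client buffer = {k} block(s): peak rate {rate / 1e6:.2f} Mb/s")
```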
One of the challenges in the design of a distributed multimedia system is to devise suitable specification models for the various schemas at different levels of the system. Another important research issue is the integration and synchronization of heterogeneous multimedia objects. In this paper, we present our models for different multimedia schemas, together with transformation algorithms that transform high-level multimedia objects into schemas that can be used to support the presentation and communication of the multimedia objects.
It is now recognized that object-oriented techniques are well suited to the design and implementation of multimedia applications. Objects may be used to encapsulate the great variety of hardware devices used in such applications and to abstract over the details of low level interfaces. Furthermore, complex media processing algorithms, such as compression/decompression, may be encapsulated within objects making them easier to reuse across applications. Real-time synchronization is also an essential aspect of multimedia which arises from the inherently temporal properties of media such as audio and video. In this paper, we propose a set of programming abstractions and an approach to address real-time synchronization requirements in an object-oriented framework. In our approach, active objects encapsulate media processing activities. Real-time synchronization is maintained by reactive objects that control the execution of media processing objects. A key advantage of our approach is that it allows the separation of synchronization from the behavior of objects. Both objects and synchronization specifications may be reused in different contexts. In addition, the approach enables the specification of real-time synchronization in a high-level notation that has proven well suited to this task.
Inter-media synchronization methods developed until now have been based on syntactic timestamping of video frames and audio samples. These methods are not fully appropriate for the synchronization of multimedia objects which may have to be accessed individually by their contents, e.g. in content-based data retrieval. We propose a content-based multimedia synchronization scheme in which a media stream is viewed as a hierarchical composition of smaller objects which are logically structured based on their contents, and synchronization is achieved by deriving temporal relations among the logical units of a media object. Content-based synchronization offers several advantages, such as elimination of the need for time stamping, freedom from the limitations of jitter, synchronization of independently captured media objects in video editing, and compensation for inherent asynchronies in the capture times of video and audio.
Current multimedia group presentation environments, such as video conference systems, remote presentation and shared workspaces, are typically developed as self-contained applications, i.e. without much support from the operating system. Newly developed operating systems provide frameworks to support multimedia data processing, but these are still not sufficient for multimedia group presentations. To address this problem, we present a framework for designing an extended operating system called COSMOS (Collaborative Object Sharing for Multimedia Operating System). The framework provides concurrent and distributed multimedia data processing, fine-grained synchronization, real-time data flow management, and presentation, session and shared-object management for multimedia CSCW applications. The framework can be used either for designing customized multimedia servers for playback machines or for application development using its API.
The paper describes the results of using a time and synchronization toolkit in a medical imaging application. The design and architectural principles of the toolkit are summarized. We then present the cineloop synchronization situation and discuss the possible solutions. We conclude by describing how these solutions are implemented and what kind of support is required from the toolkit.
The transportation of compressed video data generally requires the network to adapt to large fluctuations in bandwidth requirements if the quality of the video is to remain constant. Techniques that use averaging to smooth video data, such as those found in video conferences, allow for some smoothing at the expense of delay. With video-on-demand systems on the horizon, smoothing techniques for prerecorded video data are necessary for the efficient use of network resources. Simply extending algorithms that smooth via averaging to video playback cannot smooth the burstiness in bandwidth requirements without using large amounts of buffering. In this paper, we introduce the notion of critical bandwidth allocation, which allows for the most effective use of buffering while allowing long durations between bandwidth increase requests. A comparison between critical bandwidth allocation algorithms and other smoothing algorithms is presented.
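As a small illustration of why whole-clip averaging is not enough for prerecorded video, the sketch below (our own simplification; the critical bandwidth allocation algorithm itself produces a piecewise-constant plan with few rate increases, not a single rate) computes the smallest constant rate that never starves the decoder and contrasts it with the clip's mean rate:

```python
# Illustrative sketch under the stated simplification.

def minimal_underflow_free_rate(frame_bits, frame_seconds):
    # Largest average rate over any prefix of the clip: sending at this constant
    # rate from time zero never lets the decoder run out of data (given enough
    # client buffer for the data sent ahead of time).
    cumulative, worst = 0, 0.0
    for t, bits in enumerate(frame_bits, start=1):
        cumulative += bits
        worst = max(worst, cumulative / (t * frame_seconds))
    return worst

frames = [90_000, 20_000, 20_000, 80_000, 25_000, 20_000, 20_000, 20_000]
rate = minimal_underflow_free_rate(frames, 1 / 30)
mean = sum(frames) / (len(frames) / 30)
print(f"underflow-free constant rate: {rate / 1e6:.2f} Mb/s; "
      f"whole-clip mean: {mean / 1e6:.2f} Mb/s")
```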
In this paper, we develop a transmission system, referred to as Dual Stream JPEG (DSJ), that improves the resiliency of JPEG video streams by unifying layered encoding with image scrambling techniques. Whereas layered encoding techniques naturally provide resiliency to isolated loss of low-priority image data, scrambling techniques make it possible to distribute burst losses throughout the image. In DSJ, this unification is instantiated by employing the discrete cosine transform (DCT) to convert each 8 x 8 pixel block of an image into the frequency domain, partitioning the resultant blocks into high-priority DC coefficients and low-priority AC coefficients, and then scrambling the transmission of encoded blocks of AC coefficients. Furthermore, DSJ defines an adaptation layer that implements efficient error detection and recovery by including the sequence number and offset (relative to the beginning of the cell payload) of blocks carried in each cell. We have evaluated the performance of DSJ through extensive simulations. We present and analyze our results.
We examine some of the implications of the recent introduction of a class of highly scalable video compression algorithms for network distribution. In particular, we investigate statistical multiplexing of highly scalable VBR traffic in high-speed networks characterized by low channel error rates. The superiority of multiple-priority shared queuing policies over the most commonly considered two-priority approaches is established through simulation studies. We also propose an optimal Earliest Due Date (EDD) scheduling approach, which has decided advantages over shared queuing when transmission delay and jitter are to be kept very small. This proposed approach involves dynamic modification of the EDD scheduling parameters.
In this paper, we statistically characterized four VBR-encoded video sequences, containing I/B/P frames, at the slice layer, with the goal of developing an accurate source model to better understand the bit-rate behavior of these sources. We presented the cells/slice distribution and showed that it is 'heavy tailed' and fits the Pareto distribution better than the Gamma. We showed that an 8-state Markov chain fits the cells/slice distribution well, reaching steady state after 37 to 80 transitions (2 to 5 frames). We also showed that the autocorrelation function is quasi-periodic, which is mostly due to the frame sequence pattern rather than spatial content. We discussed the impact of I/B/P sequences on multiplexing and dynamic bandwidth allocation and proposed a multiplexing method called Time Shifted Multiplexing (TSM), whereby the multiplexer attempts to overlap the I and P frames of one video stream with the B frames of another. This tends to reduce both the peak-to-mean ratio and the coefficient of variation of the multiplexed output stream. We showed that the coefficient of variation was reduced by half and bandwidth requirements were reduced by 41% using TSM.
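The following toy calculation (invented frame sizes and GOP pattern, not the measured traces) shows the effect TSM exploits: shifting one stream's GOP phase so that its I and P frames land on the other stream's B frames lowers the peak-to-mean ratio of the aggregate.

```python
# Illustrative only: a 12-frame GOP (IBBPBBPBBPBB) with representative relative
# frame sizes; two identical streams are multiplexed with and without a phase
# shift between their GOP patterns.
GOP = [100, 20, 20, 50, 20, 20, 50, 20, 20, 50, 20, 20]

def peak_to_mean(offset, periods=10):
    a = GOP * periods
    b = (GOP[offset:] + GOP[:offset]) * periods   # same pattern, phase-shifted
    aggregate = [x + y for x, y in zip(a, b)]
    return max(aggregate) / (sum(aggregate) / len(aggregate))

print("GOPs aligned      (offset 0):", round(peak_to_mean(0), 2))
print("GOPs time-shifted (offset 1):", round(peak_to_mean(1), 2))
```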
This paper discusses a local distribution system for interactive multimedia TV (IMTV) to the home. The network architecture considered here is fiber to the curb, and local distribution of the IMTV signals to and from the home is provided over telephone wiring and coaxial cable. The downstream IMTV channel, from the curb to the home, operates at a data rate of 51.84 Mb/s, and the upstream channel, from the home to the curb, operates at a data rate of 1.62 Mb/s.
Some of the most challenging multimedia applications have involved real-time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focusing on novel technologies, we present evaluation data relevant both to the classes of real-time multimedia applications we should develop and to their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore, our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally, we examine a different class of application, 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about the interactants.
Floor control allows users of networked multimedia applications to remotely share resources like cursors, data views, video and audio channels, or entire applications without access conflicts. Floors are mutually exclusive permissions, granted dynamically to collaborating users, mitigating race conditions and guaranteeing fair and deadlock-free resource access. Although floor control is an early concept within computer-supported cooperative work, no framework exists, and current floor control mechanisms are often limited to simple objects. While small-scale collaboration can be facilitated by social conventions, the importance of floors becomes evident for large-scale application sharing and teleconferencing orchestration. In this paper, the concept of a scalable session protocol is enhanced with floor control. Characteristics of collaborative environments are discussed, and session and floor control are distinguished. Both the system's and the users' requirements perspectives are discussed, including distributed storage policies, packet structure and user-interface design for floor presentation, manipulation, and triggering conditions for floor migration. Interaction stages between users, and scenarios of participant withdrawal, late joins, and establishment of subgroups, are elicited with respect to floor generation, bookkeeping, and passing. An API is proposed to standardize and integrate floor control among shared applications. Finally, a concise classification of existing systems with a notion of floor control is introduced.
We present the design and implementation of Collaborative Spray, or CSpray (pronounced 'sea spray'). CSpray is a CSCW (Computer Supported Cooperative Work) application geared towards supporting multiple users in a collaborative scientific visualization setting. Scientists can share data sets, graphics primitives and images, and create visualization products within a view-independent shared workspace. CSpray supports incremental updates to reduce network traffic, separates large data streams from smaller command streams with a two-level communication strategy, provides different service levels according to a client's resources, enforces permissions for different levels of sharing, distinguishes private from public resources, and provides multiple fair and intuitive floor control schemes for shared objects. Off-the-shelf multimedia tools such as nv and vat can be used concurrently. CSpray is based on the spray rendering visualization interaction technique to generate contours, surfaces, particles, and other graphics primitives from scientific data sets such as those found in oceanography and meteorology.
The paper reports on progress at Leeds University in building a Virtual Science Park (VSP) to enhance the University's ability to interact with industry and to grow its applied research and workplace learning activities. The VSP exploits advances in real-time collaborative computing and networking to provide an environment that meets the objectives of physically based science parks without the need for the organizations to relocate. It provides an integrated set of services (e.g. virtual consultancy, work-based learning) built around a structured, person-centered information model. This model supports the integration of tools for: (a) navigating around the information space; (b) browsing information stored within the VSP database; (c) communicating through a variety of person-to-person collaborative tools; and (d) working with the information stored in the VSP, including the relationships to other information that support the underlying model. The paper gives an overview of a generic virtual working system, based on X.500 directory services and the World-Wide Web, that can be used to support the Virtual Science Park. Finally, the paper discusses some of the research issues that need to be addressed to fully realize a Virtual Science Park.
The trend toward the integration, into a single packet-switching or cell-switching network, of services intended to satisfy the different needs of multiple types of traffic calls for the design of protocols for multimedia communication. An essential prerequisite for such a design is that a service model be chosen; in other words, the services to be offered by the network must be selected and specified in detail. This paper presents the service models proposed, or being developed, by two groups within the Internet community, by the ATM community, and by the Tenet Group; it also compares them, focusing on their common characteristics and their possible convergences, hence unveiling the outlines of the service models that the first integrated-services networks to be deployed will implement, provided the extensive experiments that are now needed prove successful.
This paper provides a data placement method based on rate staggering to store scalable video data in a disk-array-based video server. A scalable, or layered, video stream is one which is encoded in a manner that permits the extraction of lower-resolution subsets from the full-resolution video bit stream. It is desirable to support layered video streams in a video server since these can be used to serve a variety of clients with different decoding capabilities. When a layered video stream is stored on a disk array, the video data corresponding to different rates of the video clip are not required to reside on the same disk. In view of this, we propose and explore the approach of rate staggering, i.e., staggering video data across the disk array based on data rates. It is shown that the advantages of the proposed rate staggering method include: (1) minimizing the intermediate buffer space required at the server, (2) achieving better load balancing due to finer scheduling granularity, and (3) alleviating disk bandwidth fragmentation. These advantages enable a video server using the rate staggering method to provide feasible solutions for some video stream requests which could not be met otherwise. The system throughput can thus be increased.
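A minimal sketch of the placement idea, using a hypothetical layout of our own (round-robin striping with a per-layer starting offset) rather than the paper's exact scheme; the point is that a read of only the lower layers still spreads its accesses over different disks within each service round:

```python
# Hypothetical staggered layout for a layered stream on a small disk array.
NUM_DISKS = 4

def disk_of(layer, block):
    # Round-robin striping with a per-layer starting offset ("stagger"), so the
    # layers of the same block index land on different disks.
    return (block + layer) % NUM_DISKS

# One service round retrieves one block of each subscribed layer; with the
# stagger, the three layers of a block never compete for the same disk.
for block in range(4):
    disks = [disk_of(layer, block) for layer in range(3)]
    print(f"round {block}: layers 0-2 read from disks {disks}")
```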
Current approaches to continuous media (CM) file systems focus on scheduling requirements for sessions consisting of single video or audio streams. This paper examines the multimedia delivery problem from the perspective of hypermedia document servers. Such hypermedia documents can be characterized as a web of nodes, each node containing a set of time-dependent CM and discrete media objects. We first look at a hypothetical user's view of a hypermedia session. We then present two service models, the CM service model and the hypermedia service model, and compare them. We propose a new view of a session, the hypermedia session, suitable for scheduling the delivery of hypermedia documents, and present sample scheduling algorithms. This notion of a hypermedia session is examined from the standpoint of resource management at the orchestration layer of a distributed multimedia system.
This paper illustrates a step-by-step approach to designing an example multimedia system using our periodic pipelined design and evaluation framework. The periodic pipelined framework provides a systematic approach to designing scalable, distributed heterogeneous systems whose timing properties can be strictly controlled and analyzed. The approach exploits the natural pipelined execution pattern found in a large number of continuous-data applications executing over heterogeneous distributed resources. This paper illustrates how to model the example multimedia system using the framework and how to make design decisions such that the system's end-to-end timing requirements can be met. Specifically, we focus on how to assign pipeline stage rates to model the specific set of signal processing rates required in the example system. Finally, this paper provides timing analysis results and makes explicit the fundamental trade-offs in designing this example system.
In distributed multimedia applications, time-dependent data streams are conveyed and processed under real-time conditions. The timely and accurate activation of the stream handlers processing the data units of streams requires deriving their scheduling times from the temporal properties of the data streams and the amount of data processed in each activation. In this paper, we propose the activation set concept as a data access abstraction for consuming sets of data units from a group of time-dependent data streams that have synchronization relationships among each other. The concept supports the consumption of multiple periodic data streams of different rates as well as the integration of aperiodic streams. It is shown how the scheduling times of activations are derived from the streams' temporal properties and the requested amount of data. When a stream handler is activated, data units of the streams are selected for consumption according to synchronization relationships and tolerated skew bounds. The effects of the approach on delays are discussed in the context of a rate-monotonic scheduling algorithm. The concept provides a configurable interface to groups of time-dependent data streams which, for example, is required when mixing streams. It is widely applicable in multimedia system architectures that allow for the configuration of distributed multimedia applications based on interconnected stream handling components.
Historically, Multimedia Video-on-Demand (VOD) systems have considered stream indexing as an authoring activity, decomposing monolithic streams containing no explicit indexing information. This paper suggests a scheme for 'up-front,' capture-time indexing of digital video streams, whereby the indexing information logically becomes part of the stream. This approach takes advantage of the sequential, temporal nature of capture and the knowledge of the stream recorder to empower further manipulation and playout of the stream. We explore the impact of capture-time indices, and implement a sample format in a Segment Definition File (SDF). The Video Broadcast Authoring Tool (VBAT) is the focus of our paper. Taking a video stream and an SDF as input, VBAT provides a means for authors to create, delete, modify and annotate segments of that stream. VBAT also integrates existing technology such as World-Wide Web's HTTP links and the Motion Picture Parser application. Creation of various-format stills for browsing is supported. VBAT provides for post-processing of the SDF to various playout environments; we implement and describe a postprocessor for World-Wide Web browsing and playout. Finally, we discuss VBAT's position in an integrated digital video broadcast environment and areas of future work.
Browsing is important for multimedia content retrieval, editing, authoring and communications. Yet we still lack browsing tools which are user friendly and content-based, at least for video materials. In this paper, we present a set of video browsing tools which utilize video content information resulting from a parsing process. Video parsing algorithms are briefly discussed, and a detailed description of both sequential and time-space browsing tools is presented.
This paper describes a new technique for extracting a hierarchical decomposition of a complex video selection for browsing purposes. The technique combines visual and temporal information to capture the important relations within a scene and between scenes in a video, thus allowing the analysis of the underlying story structure with no a priori knowledge of the content. We define a general model of the hierarchical scene transition graph, and apply this model in an implementation for browsing. Video shots are first identified, and a collection of key frames is used to represent each video segment. These collections are then classified according to gross visual information. A platform is built on which the video is presented to the user as a directed graph, with each category of video shots represented by a node and each edge denoting a temporal relationship between categories. The analysis and processing of the video is carried out directly on the compressed videos. Preliminary tests show that the narrative structure of a video selection can be effectively captured using this technique.
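A toy sketch of the graph-building step (invented shot labels standing in for the key-frame clustering stage; this is not the paper's algorithm): nodes are visual categories of shots, and a directed edge records that a shot of one category is immediately followed in time by a shot of another.

```python
from collections import defaultdict

# Each shot is represented here only by the visual category its key frames were
# assigned to by a (hypothetical) similarity clustering step.
shot_labels = ["anchor", "field", "anchor", "field", "field", "interview", "anchor"]

edges = defaultdict(int)
for prev, nxt in zip(shot_labels, shot_labels[1:]):
    if prev != nxt:
        edges[(prev, nxt)] += 1          # temporal transition between categories

for (src, dst), count in edges.items():
    print(f"{src} -> {dst}  ({count} transition(s))")
```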
Historically, database systems have provided convenient methods for obtaining specific information from a large repository of data. All the information was readily available, and the mapping between the data and its semantics was straightforward. However, the increasing availability of multimedia data sources introduces new data forms for which the mapping between data and its semantics is not clear. In particular, non-alphanumeric data such as images, videos, graphs and charts contain large amounts of information that are difficult to quantify in a complete and concise fashion. Moreover, the semantic information contained in such data is often application specific. This paper addresses the problem of embedded semantic information. The basic idea is the integration of a data processing component into data modeling systems to allow additional (possibly application-dependent) information to be extracted. This paper proposes a new object-oriented data modeling approach, shows how such a design can be applied to a pictorial information system, and describes a prototype system with an example application.
The advent of high-speed networks has stimulated the development and deployment of many new distributed applications, such as multiparty video conferencing. At the same time, the networking community is rapidly moving towards new high-speed networking architectures that offer advanced features such as guaranteed bandwidth and connection performance guarantees. The performance of many applications would be improved significantly if the features offered by these new networks were utilized. While it is desirable to use the features of the new protocols provided by the emerging high-speed networks, these protocols have not yet reached the same degree of stability and maturity as existing protocols. As new networks with advanced features are deployed, schemes that take advantage of the advanced network capabilities are necessary to migrate existing applications to the new networking interfaces. In this paper, several application migration paths are outlined. The concept of a Bandwidth Server, which provides transparent application migration, is introduced; transparent migration means that an application need not be rewritten, recompiled, or relinked. We explore the design of a simple and efficient Bandwidth Server that allows TCP/IP applications, written using the well-known socket interface, to execute across a B-ISDN network.
Current efforts in the area of MPEG-I audio/video synchronization have been limited to single-audio, single-video applications. The MPEG-I specification includes provisions for the interleaving of up to 16 separate video streams with up to 32 distinct audio streams. This paper explores the possible uses of this capability as well as the design of a robust encoder and playback system. Perceived shortfalls within the specification are discussed, including the usefulness of time stamps and the lack of sequence start and end codes within the audio stream format. We also describe our implementation of a software-only MPEG-I encoder/player set and describe its performance under various configurations.
We use the smart multimedia object concept to design a Distributed Multimedia System (DMS). A key module in the system is the Object Exchange Manager (OEM). In this paper, we present the design and implementation of the OEM module and discuss in detail the interaction between the Object Exchange Manager and other modules in the DMS system. An example is given to show the application of the DMS system.
Cell loss probability is a critical performance criterion in satisfying the needs of ATM services. To enforce dynamic control of the finite buffer in an ATM switch, it is desirable that multiple loss priority classes be supported to accommodate the diverse connection-level as well as cell-level loss requirements of users. However, very little work has appeared in the literature on controlling cell loss probabilities with multiple priority classes. In this paper we conduct a thorough analysis of a generalized space priority control scheme, the Partial Buffer Sharing scheme, to manage the finite shared buffer system optimally. We develop an analytical queuing model to characterize the system accurately, and present efficient optimization procedures that are capable of finding the optimal loss thresholds to maximize the system's admissible load. We verify the optimization procedures, demonstrate the resource efficiency improvement, and evaluate the impact of given traffic conditions and cell loss criteria through numerous numerical examples. The study produces a feasible and attractive means to support different grades of loss probabilities at minimum hardware cost and also to provide better control of the delay introduced during cell buffering.
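To make the control scheme concrete, here is a minimal admission rule of the kind Partial Buffer Sharing uses (the buffer size and threshold below are invented for illustration; the paper's contribution is the queuing analysis and the procedure for choosing the thresholds optimally):

```python
# Minimal partial-buffer-sharing admission rule with one threshold.
BUFFER_SIZE = 50   # cells the shared buffer can hold (illustrative value)
THRESHOLD = 35     # low-priority cells are refused beyond this occupancy

def admit(occupancy, high_priority):
    if high_priority:
        return occupancy < BUFFER_SIZE   # stricter loss class may fill the buffer
    return occupancy < THRESHOLD         # tolerant loss class stops at the threshold

# At 40 queued cells, only the class with the stricter loss requirement gets in.
print(admit(40, high_priority=True), admit(40, high_priority=False))
```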
Bursty traffic whose peak rate is high relative to the link rate has been shown to be difficult to carry efficiently in ATM networks using currently proposed methods, and such traffic is likely to be generated by future multimedia systems. We propose a device known as a Resource Adjunct Processor (RAP) that allows bursty traffic to be carried efficiently and also overcomes the problem of cell loss multiplication. The RAP achieves this through intelligent buffering of traffic bursts. Simulation results are presented to demonstrate the efficiency gains that are possible.
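Since the abstract only outlines the RAP, the following Python sketch is a speculative illustration of burst-level buffering: a burst is queued only if the whole burst fits in the adjunct buffer, so either every cell of the burst is carried or the burst is rejected cleanly, rather than losing scattered cells from many bursts (the cell loss multiplication problem). The buffer size, drain rate, and traffic model are assumptions, not figures from the paper:

    import random

    def simulate_rap(buffer_cells, drain_per_slot, bursts):
        """Burst-level admission: a burst is queued only if it fits entirely."""
        occupancy, carried, rejected = 0, 0, 0
        for burst in bursts:                                 # burst = number of cells
            occupancy = max(0, occupancy - drain_per_slot)   # cells sent since last burst
            if occupancy + burst <= buffer_cells:
                occupancy += burst
                carried += burst
            else:
                rejected += burst                            # whole burst refused, not shredded
        return carried, rejected

    random.seed(1)
    bursts = [random.randint(10, 400) for _ in range(2000)]
    print(simulate_rap(buffer_cells=1000, drain_per_slot=150, bursts=bursts))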
In this paper, we investigate the architectural suitability of cable TV networks for supporting Interactive Video on Demand. We present the existing cable TV structure and comment on the expected future architecture toward which cable TV networks are rapidly evolving. Practical realization of Interactive Video on Demand is jeopardized by an unmanageable peak in subscribers' viewership patterns. We propose that multimedia servers of suitable capacities be installed at strategic locations in the cable TV network to function as temporary caches for multimedia information delivered from metropolitan repositories. We develop techniques for information caching that take into account network bandwidth and storage constraints, and that transform unmanageable peaks in viewers' demand patterns into manageable plateaus.
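The caching techniques themselves are developed in the paper; as a stand-in, here is a simple greedy heuristic in Python of the kind such a scheme might start from: before the evening peak, the neighborhood server caches the titles with the highest expected demand per gigabyte that fit its storage budget, so peak-hour requests for those titles are served locally rather than from the metropolitan repository. The catalog, popularity figures, and sizes below are invented:

    def greedy_cache(titles, storage_gb):
        """Pick titles by expected peak-hour demand per gigabyte until storage is full."""
        chosen, used = [], 0.0
        for name, demand, size in sorted(titles, key=lambda t: t[1] / t[2], reverse=True):
            if used + size <= storage_gb:
                chosen.append(name)
                used += size
        return chosen

    catalog = [                 # (title, expected peak requests, size in GB)
        ("movie-A", 120, 4.0),
        ("movie-B",  80, 3.5),
        ("movie-C",  25, 4.5),
        ("movie-D",  60, 2.0),
    ]
    print(greedy_cache(catalog, storage_gb=8.0))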
The World-Wide Web (WWW) has created a new paradigm for online information retrieval by providing immediate and ubiquitous access to digital information of any type from data repositories located throughout the world. The web's development enables not only effective access for the generic user, but also more efficient and timely information exchange among scientists and researchers. We have extended the capabilities of the web to improve the current paradigm for interacting with inline images and to allow multidimensional image datasets to be embedded, together with real-time interactive viewers, within WWW documents. Those datasets can then be accessed via our modified version of NCSA's Mosaic WWW browser. This paper provides a brief background on the World-Wide Web, an overview of the extensions necessary to support these new data types, and a description of an implementation of this approach in a WWW-compliant distributed visualization system.
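The modified Mosaic browser and its inline viewers are specific to the authors' system and are not available here. As a rough approximation of the content-type side of such an arrangement, the Python sketch below serves a multidimensional dataset under a dedicated MIME type so that a browser can dispatch it to an interactive viewer rather than treating it as an ordinary inline image; the MIME type name, the file volume.raw, and the port are invented:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DatasetHandler(BaseHTTPRequestHandler):
        """Serve a multidimensional image dataset under its own MIME type."""
        def do_GET(self):
            with open("volume.raw", "rb") as f:      # hypothetical 3-D dataset file
                body = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/x-image-dataset")  # invented type
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), DatasetHandler).serve_forever()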
An application-level technique for improving the transmission rate of large files is described in this paper. Such techniques are important in areas such as telemedicine, where near-real-time delivery of large files such as digital images is a goal: end users may include specialists whose time is scarce and expensive, and timely access to the data may be necessary for effective clinical treatment. Faster delivery is also an enabling technology for accessing remote medical archives. In conventional TCP/IP transmission, the data to be transmitted is sent down one logical communication channel. Our technique divides the data into segments; each segment is sent down its own channel, and the segments are reassembled into a copy of the original data at the receiving end. The technique has been implemented and tested in a client-server program using Berkeley Unix sockets, multiple independent processes for channel control, and interprocess communication techniques to guarantee the receipt and correct reassembly of the transmitted data. Performance measurements have been made on several hundred Internet transmissions (including Arizona-to-Maryland transmissions) of 5-megabyte cervical x-ray images. Transmission time as a function of the number of channels has been recorded, and a 3-fold improvement in transmission rate has been observed.
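The abstract describes the mechanism concretely enough to sketch: split the file into N segments, send each segment down its own TCP connection, and reassemble by segment index at the receiver. The Python sketch below uses threads and sockets rather than the authors' Berkeley-sockets, multi-process C implementation, purely to show the segmentation-and-reassembly idea; the 8-byte header format is an assumption. To try it, run receive_file in one process and send_file in another pointed at the same port:

    import socket, struct, threading

    def send_file(data, host, port, channels):
        """Split `data` into `channels` segments and send each on its own connection."""
        seg = (len(data) + channels - 1) // channels
        def send_one(idx):
            chunk = data[idx * seg:(idx + 1) * seg]
            with socket.create_connection((host, port)) as s:
                s.sendall(struct.pack(">II", idx, len(chunk)) + chunk)  # index + length header
        threads = [threading.Thread(target=send_one, args=(i,)) for i in range(channels)]
        for t in threads: t.start()
        for t in threads: t.join()

    def recv_exact(conn, n):
        """Read exactly n bytes from a connection."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed early")
            buf += chunk
        return buf

    def receive_file(port, channels):
        """Accept `channels` connections and reassemble the segments by index."""
        parts = [None] * channels
        with socket.create_server(("", port)) as srv:
            for _ in range(channels):
                conn, _ = srv.accept()
                with conn:
                    idx, length = struct.unpack(">II", recv_exact(conn, 8))
                    parts[idx] = recv_exact(conn, length)
        return b"".join(parts)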
An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text together form a compact multimedia presentation of the contents of the video program, which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used as input to a commercial text processing package to produce a printed version of the program.
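A worked toy example of the initial, timing-based text segmentation step the abstract describes: each closed-caption cue is assigned to the scene whose key-frame interval contains the cue's start time. The subsequent lexical refinement and capitalization recovery steps are not shown, and the scene times and captions below are made up for illustration:

    import bisect

    def segment_captions(scene_starts, captions):
        """Group (time, text) caption cues by the scene interval containing them."""
        segments = [[] for _ in scene_starts]
        for t, text in captions:
            idx = bisect.bisect_right(scene_starts, t) - 1   # scene whose start time <= t
            segments[max(idx, 0)].append(text)
        return [" ".join(seg) for seg in segments]

    scene_starts = [0.0, 42.5, 97.0]          # seconds of the selected key frames
    captions = [(3.1, "GOOD EVENING."), (44.0, "IN TONIGHT'S STORY..."),
                (99.8, "THANK YOU FOR WATCHING.")]
    for i, text in enumerate(segment_captions(scene_starts, captions)):
        print(f"scene {i}: {text}")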