This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 6818, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
An increasingly common feature of a Set Top Box (STB) is a Personal/Digital Video Recorder (PVR), which enables subscribers to record broadcast content to be viewed at a later time (time-shifting). Currently, subscribers have the limited choice of watching time-shifted shows either from their own PVRs or from a centralized VoD server that makes only the popular shows available for time-shifted viewing. Our CommunityPVR, a new system that forms a peer-to-peer network among the STBs and streams recorded content among peer STBs, makes less popular titles available to the niche audiences (the long-tail effect) of a community without incurring additional cost to service providers for servers, bandwidth, and storage. In this paper, we present an analytical model to investigate how much of the tail of the popularity curve can
be covered by CommunityPVR. Using TV shows ranked by Nielsen Media Research and VoD shows from China Telecom,
our model provides a framework to determine the number of copies of broadcast/VoD content recorded by a community
and the probability that CommunityPVR is able to deliver an on-demand stream of a given show over a DSL network. For
example, CommunityPVR can stream near-DVD-quality video of the top-ranked 5,000 shows with 100% probability to a community of 100K. Unlike a centralized VoD solution, CommunityPVR has the potential to deliver both popular and long
tail content on demand to a service provider's community in a cost-effective manner.
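As a rough illustration of the kind of calculation such a model involves, the following back-of-envelope sketch estimates the probability that at least one STB in a community holds a copy of a show of a given rank, assuming Zipf-distributed viewing and independent recording decisions; the parameter values and function names are illustrative assumptions, not the paper's calibrated model.

    # Back-of-envelope sketch (not the paper's calibrated model): estimate the
    # probability that at least one STB in a community holds a copy of show i,
    # assuming Zipf-distributed viewing and independent recording decisions.

    def zipf_weights(n_shows, alpha=1.0):
        weights = [1.0 / (rank ** alpha) for rank in range(1, n_shows + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    def copy_probability(rank, n_shows, community_size, record_prob=0.1, alpha=1.0):
        """P(at least one copy of the show at `rank` exists in the community)."""
        p_view = zipf_weights(n_shows, alpha)[rank - 1]      # chance one STB watched it
        p_record = p_view * record_prob                      # ...and chose to record it
        return 1.0 - (1.0 - p_record) ** community_size      # at least one copy among all STBs

    if __name__ == "__main__":
        for rank in (1, 100, 1000, 5000):
            p = copy_probability(rank, n_shows=10000, community_size=100_000)
            print(f"rank {rank:>5}: P(copy exists) = {p:.4f}")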
In this paper we examine the impact of the adopted playout policy on the performance of P2P live streaming systems. We
argue and demonstrate experimentally that (popular) playout policies which permit the divergence of the playout points
of different nodes can drastically degrade the performance of P2P live streaming. Consequently, we argue in favor of
keeping different playout points "near-in-time", even if this requires sacrificing (dropping) some late frames that could
otherwise be rendered (assuming no strict bidirectional interactivity requirements are in place). Such nearly synchronized
playout policies create "positive correlation" with respect to the available frames at different playout buffers. Therefore,
they increase the number of upstream relay nodes from which a node can pull frames and thus boost the playout quality of
both single-parent (tree) and multiple-parent (mesh) systems. In contrast, diverging playout points reduce the number of upstream parents that can offer a gapless relay of the stream. This is clearly undesirable and should be avoided, as it contradicts the fundamental philosophy of P2P systems, which is to supplement an original service point with as many additional ones provided by the users of the service themselves.
Centralised solutions for Video-on-Demand (VoD) services, which stream pre-recorded video content to multiple clients
who start watching at the moments of their own choosing, are not scalable because of the high bandwidth requirements of
the central video servers. Peer-to-peer (P2P) techniques, which let the clients distribute the video content among themselves,
can be used to alleviate this problem. However, such techniques may introduce the problem of free-riding, with some peers
in the P2P network not forwarding the video content to others if there is no incentive to do so. When the P2P network
contains too many free-riders, an increasing number of the well-behaving peers may not achieve high enough download
speeds to maintain an acceptable service. In this paper we propose Give-to-Get, a P2P VoD algorithm which discourages
free-riding by letting peers favour uploading to other peers who have proven to be good uploaders. As a consequence,
free-riders are only tolerated as long as there is spare capacity in the system. Our simulations show that even if 20% of
the peers are free-riders, Give-to-Get continues to provide good performance to the well-behaving peers. In particular, they
show that Give-to-Get performs very well for short videos, which dominate the current VoD traffic on the Internet.
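To make the incentive concrete, the sketch below ranks neighbours by how much data they have demonstrably forwarded onward and grants upload slots to the best forwarders first; the data structures and the ranking rule are illustrative assumptions rather than the exact Give-to-Get protocol.

    # Illustrative sketch of the Give-to-Get incentive idea: rank neighbours by how
    # much data they have demonstrably forwarded to others, and grant unchoke slots
    # to the best forwarders first. Names and the ranking rule are assumptions,
    # not the exact Give-to-Get protocol.

    from dataclasses import dataclass

    @dataclass
    class Neighbor:
        peer_id: str
        forwarded_bytes: int = 0   # data this neighbour uploaded onward (reported by its children)
        received_bytes: int = 0    # data we have already uploaded to this neighbour

    def choose_unchoked(neighbors, slots=4):
        # Favour proven uploaders: sort by forwarded volume, break ties by how
        # little we have already invested in them.
        ranked = sorted(neighbors, key=lambda n: (-n.forwarded_bytes, n.received_bytes))
        return [n.peer_id for n in ranked[:slots]]

    if __name__ == "__main__":
        peers = [Neighbor("a", 900), Neighbor("b", 0), Neighbor("c", 1500), Neighbor("d", 300)]
        print(choose_unchoked(peers, slots=2))   # free-rider "b" is only served if capacity is spare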
Web services such as YouTube, which allow the distribution of user-produced media, have recently become very
popular. YouTube-like services are different from existing traditional VoD services because the service provider
has only limited control over the creation of new content. We analyze how the content distribution in YouTube
is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The
analysis of the traffic shows that: (1) no strong correlation is observed between global and local popularity; (2)
neither time scale nor user population has an impact on the local popularity distribution; (3) video clips of local
interest have a high local popularity. Using our measurement data in trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures for the performance of a YouTube-like
VoD service. The results of these simulations show that client-based local caching, P2P-based distribution, and
proxy caching can reduce network traffic significantly and allow faster access to video clips.
In this study, we characterize user sessions of the popular multimedia Web 2.0 site, YouTube. We observe
YouTube user sessions by making measurements from an edge network perspective. Several characteristics of user
sessions are considered, including session duration, inter-transaction times, and the types of content transferred by
user sessions. We compare and contrast our results with "traditional" Web user sessions. We found that YouTube users transfer more data and have longer think times than users in traditional Web workloads. These differences have
implications for network capacity planning and design of next generation synthetic Web workloads.
A number of prior efforts analyzed the behavior of popular peer-to-peer (P2P) systems and proposed ways for maintaining
the overlays as well as methods for searching for content using these overlays. However, little was known about how successful users could be in locating the shared objects in these systems. There might be a mismatch between the way
content creators named objects and the way such objects were queried by the consumers. Our aim was to examine the
terms used in the queries and shared object names in the Gnutella file-sharing system. We analyzed the object names of
over 20 million objects collected from 40,000 peers as well as terms from over 230,000 queries. We observed that almost
half (44.4%) of the queries had no matching objects in the system regardless of the overlay or search mechanism used to
locate the objects. We also evaluated the query success rates against random peer groups of various sizes (200, 1K, 2K, 3K,
4K, 5K, 10K and 20K peers sampled from the full 40,000 peers). We showed that the success rates increased rapidly from
200 to 5,000 peers, but only exhibited modest improvements when increasing the number of peers beyond 5,000. Finally,
we observed a Zipf-like distribution for both query terms and object-name terms. However, the relative popularity of a term in the object names did not correlate with the term's popularity in the query workload. This observation affected the ability of hybrid P2P systems to guide searches by creating a synopsis of the peer object names: a synopsis built from the distribution of terms in the object names need not represent the terms relevant to the queries. Our results can be used to guide
the design of future P2P systems that are optimized for the observed object names and user query behavior.
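The success-rate experiment can be pictured with the following sketch, which samples random peer subsets of growing size and counts the fraction of queries for which at least one shared object name contains every query term; the matching rule and data layout are assumptions for illustration, not necessarily those used in the study.

    # Sketch of the success-rate experiment: for random peer subsets of growing
    # size, count the fraction of queries for which at least one shared object
    # name contains every query term. The matching rule is an assumption for
    # illustration, not necessarily the matching semantics used in the study.

    import random

    def query_matches(query, object_names):
        terms = query.lower().split()
        return any(all(t in name.lower() for t in terms) for name in object_names)

    def success_rate(queries, peers, sample_size, trials=5):
        rates = []
        for _ in range(trials):
            subset = random.sample(list(peers), min(sample_size, len(peers)))
            names = [name for p in subset for name in peers[p]]
            hits = sum(1 for q in queries if query_matches(q, names))
            rates.append(hits / len(queries))
        return sum(rates) / len(rates)

    if __name__ == "__main__":
        peers = {f"peer{i}": [f"artist{i % 7} track{i}.mp3"] for i in range(2000)}
        queries = ["artist3", "artist9 live", "track42"]
        for size in (200, 1000, 2000):
            print(size, round(success_rate(queries, peers, size), 2))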
Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the
network characteristics of online game servers are not well-understood, particularly for groups that wish to play together
on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game
servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by
twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on
the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability: latency and fairness. Analysis
finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports
and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the
low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play
together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and
server selection is particularly limited as the group size increases. These results hold across different game types and even
across different generations of games. The data should be useful for game developers and network researchers that seek
to improve game server selection, whether for single or multiple players.
We describe a networked video application where personalized avatars, controlled by a group of "hecklers", are
overlaid on top of a real-time encoded video stream of an Internet game for multicast consumption. Rather
than passively observing the streamed content individually, the interactivity of the controllable avatars, along
with heckling voice exchange, engenders a sense of community during group viewing. We first describe how
the system splits video into independent regions with and without avatars for processing in order to minimize
complexity. Observing that the region with avatars is more delay-sensitive due to their interactivity, we then
show that the regions can be logically packetized into separable sub-streams, and be transported and buffered
with different delay requirements, so that the interactivity of the avatars can be maximized. The utility of our
system extends beyond Internet game watching to general community streaming of live or pre-encoded video
with visual overlays.
In this paper we propose a joint resource allocation and scheduling algorithm for video decoding on a resource-constrained
system. By decomposing a multimedia task into decoding jobs using quality-driven priority classes, we
demonstrate, using queueing-theoretic analysis, that significant power savings can be achieved with only a small video quality degradation without requiring the encoder to adapt its transmitted bitstream. Based on this scheduling
algorithm, we propose an algorithm for maximizing the sum of video qualities in a multiple task environment, while
minimizing system energy consumption, without requiring tasks to reveal information about their performance to
the system or to other potentially exploitative applications. Importantly, we offer a method to optimize the
performance of multiple video decoding tasks on an energy-constrained system, while protecting private
information about the system and the applications.
Transmitting high-quality, real-time interactive video over lossy networks is challenging because network data loss can severely
degrade video quality. A promising feedback technique for low-latency video repair is Reference Picture Selection (RPS), whereby
the encoder selects one of several previous frames as a reference frame for predictive encoding of subsequent frames. RPS operates in
two different modes: an optimistic policy that uses negative acknowledgements (NACKs) and a more conservative policy that relies
upon positive acknowledgements (ACKs). The choice between RPS NACK and RPS ACK depends on network conditions, such as
round-trip time and loss probability, and on the video content, such as low or high motion. This paper derives two analytical models to
predict the quality of videos (using Peak Signal to Noise Ratio, PSNR) with RPS NACK and RPS ACK. These models are used to
study RPS performance under varied network conditions and with different video contents through a series of experiments. Analysis
shows that the best choice of ACK or NACK greatly depends upon the round-trip time and packet loss, and somewhat depends upon
the video content and Group of Pictures (GOP) size. In particular: 1) RPS ACK performs better than RPS NACK when round-trip
times are low; 2) RPS NACK performs better than RPS ACK when the loss rate is low, and RPS ACK performs better than RPS
NACK when the loss rate is high; 3) for a given round-trip time, the loss rate where RPS NACK performs worse than RPS ACK is
higher for low motion videos than it is for high motion videos; 4) videos with RPS NACK always perform no worse than videos
without repair for all GOP sizes; however, 5) below certain GOP sizes, videos without RPS outperform videos with RPS ACK. These
insights derived from our models can help determine appropriate choices for RPS NACK and RPS ACK under various scenarios.
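The following toy calculation is not the paper's analytical model; it only illustrates the loss-rate side of the tradeoff: under NACK a loss impairs roughly one round-trip time worth of frames, whereas under ACK every frame pays a small fixed penalty for predicting from an older acknowledged reference. The penalty constant is an arbitrary assumption.

    # Simplified illustration of the NACK/ACK tradeoff (NOT the paper's PSNR models):
    # with RPS NACK, a loss corrupts roughly one round-trip time worth of frames
    # before the repaired reference arrives; with RPS ACK, every frame pays a small
    # fixed efficiency penalty for predicting from an RTT-old acknowledged reference.
    # This toy only captures the loss-rate side of the tradeoff.

    def impaired_fraction_nack(loss_rate, rtt_ms, frame_rate=30):
        frames_per_rtt = rtt_ms / 1000.0 * frame_rate
        return min(1.0, loss_rate * frames_per_rtt)   # expected share of impaired frames

    def penalty_ack(rtt_ms, per_frame_penalty=0.02, frame_rate=30):
        frames_per_rtt = rtt_ms / 1000.0 * frame_rate
        return min(1.0, per_frame_penalty * frames_per_rtt)  # quality cost of a distant reference

    if __name__ == "__main__":
        for rtt in (30, 100, 300):
            for loss in (0.01, 0.05):
                nack = impaired_fraction_nack(loss, rtt)
                ack = penalty_ack(rtt)
                better = "ACK" if ack < nack else "NACK"
                print(f"rtt={rtt}ms loss={loss:.2f}: NACK cost {nack:.3f}, ACK cost {ack:.3f} -> prefer {better}")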
As a basic requirement of live peer-to-peer multimedia streaming sessions, the streaming playback rate needs to be strictly
enforced at each of the peers. In real-world peer-to-peer streaming sessions with very large scales, the number of streaming
servers for each session may not be easily increased, leading to a limited supply of bandwidth. To scale to a large number
of peers, one prefers to regulate the bandwidth usage on each of the overlay links in an optimal fashion, such that limited
supplies of bandwidth may be maximally utilized. In this paper, we propose a decentralized bandwidth allocation algorithm
that can be practically implemented in peer-to-peer streaming sessions. Given a mesh P2P topology, our algorithm
explicitly reorganizes the bandwidth of data transmission on each overlay link, such that the streaming bandwidth demand
is always guaranteed to be met at any peer in the session, without depending on any a priori knowledge of available peer
upload or overlay link bandwidth. Our algorithm is especially useful when there is little or no surplus bandwidth supply
from servers or other peers. It adapts well to time-varying availability of bandwidth, and guarantees bandwidth supply
for the existing peers during volatile peer dynamics. We demonstrate the effectiveness of our algorithm with in-depth
simulation studies.
Recently, Internet P2P/overlay streaming has gained increasing popularity. While plenty of research has focused on streaming performance, it is not yet well understood how to efficiently serve heterogeneous devices that, unlike desktop computers, are limited in display size, color depth, bandwidth capacity, CPU, and battery power. Although previous work [1] proposes to reuse intermediate information (metadata) produced during transcoding to facilitate runtime content adaptation for heterogeneous clients by reducing the total computing load, unbalanced resource contribution may prematurely exhaust the limited power of mobile devices, adversely affect the performance of participating nodes, and subsequently threaten the robustness of the whole system. In this work, we propose a Dynamic Bi-Overlay Rotation (DOOR) scheme, in which we further consider the resource consumption of participating nodes and design a dynamic rotation scheme that reacts to dynamic situations and balances across multiple types of resources on individual nodes. Based on the computing load
and transcoding quality parameters obtained through real transcoding sessions, we drive large scale simulations
to evaluate DOOR. The results show clear improvement of DOOR over earlier work.
We describe a practical auditing approach designed to encourage fairness in peer-to-peer streaming. Auditing
is employed to ensure that correct nodes are able to receive streams even in the presence of nodes that do not
upload enough data (opportunistic nodes), and scales well when compared to previous solutions that rely on
a tit-for-tat style of data exchange. Auditing involves two roles: local and global. Untrusted local auditors run on
all nodes in the system, and are responsible for collecting and maintaining accountable information regarding
data sent and received by each node. Meanwhile, one or more trusted global auditors periodically sample the
state of participating nodes, estimate whether the streaming quality is satisfactory, and decide whether any
actions are required. We demonstrate through simulation that our approach can successfully detect and react to
the presence of opportunistic nodes in streaming sessions. Furthermore, it incurs low network and computational
overheads, which remain fixed as the system scales.
Both research and practice have shown that BitTorrent-like (BT) P2P systems are scalable and efficient for
Internet content distribution. However, existing BT systems are mostly used for distributing non-copyrighted or
pirated digital objects on the Internet. They have not been leveraged to distribute the majority of legal media
objects because existing BT systems are incapable of copyright protection. On the other hand, existing Digital
Rights Management (DRM) techniques are mainly based on a client-server model, and cannot be directly applied
to peer-to-peer based BT systems.
To leverage the efficiency and the scalability of BT systems for Internet content distribution, we propose a
novel scheme to enable DRM in existing BT systems without demanding infrastructure changes. In our scheme,
each file piece is re-encrypted at runtime before a peer uploads it to any other peer. Thus, the decryption keys
are unique for each peer and each piece. In addition, any user can take part in the content distribution, while only legitimate users can access the plaintext of the distributed content. To evaluate the performance of our proposed scheme, we have conducted experiments on PlanetLab with an implemented prototype and compared it with the original BT system. The results show that, compared to BT systems without copyright protection, our proposed scheme introduces less than 10% system throughput degradation for the copyright protection it provides.
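A toy sketch of the per-peer, per-piece keying idea is shown below, using only standard-library primitives; the key-derivation construction and keystream are illustrative assumptions, not the paper's scheme, and are not meant as production-grade cryptography.

    # Toy sketch of per-peer, per-piece re-encryption (an assumption for illustration,
    # not the paper's exact scheme and not production-grade cryptography): each piece
    # gets a key derived from a master secret, the uploader/downloader pair, and the
    # piece index, so every (peer, piece) combination decrypts with a unique key.

    import hashlib
    import hmac

    def derive_key(master_secret: bytes, uploader: str, downloader: str, piece_index: int) -> bytes:
        info = f"{uploader}|{downloader}|{piece_index}".encode()
        return hmac.new(master_secret, info, hashlib.sha256).digest()

    def keystream(key: bytes, length: int) -> bytes:
        # Counter-mode keystream built from SHA-256 blocks (illustrative only).
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    def recrypt_piece(piece: bytes, key: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(piece, keystream(key, len(piece))))

    if __name__ == "__main__":
        master = b"license-server-secret"
        key = derive_key(master, "peerA", "peerB", 17)
        cipher = recrypt_piece(b"movie piece payload", key)
        print(recrypt_piece(cipher, key))   # XOR keystream is symmetric: recovers the plaintext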
This paper presents a delivery framework for streaming media with advertisements and an associated pricing
model. The delivery model combines the benefits of periodic broadcasting and stream merging. Advertising revenues are used to subsidize the price of the media content, and the price is determined based on the total ad viewing time. Moreover, this paper presents an efficient ad allocation scheme and three modified scheduling
policies that are well suited to the proposed delivery framework. Furthermore, we study the effectiveness of the
delivery framework and various scheduling policies through extensive simulation in terms of numerous metrics,
including customer defection probability, average number of ads viewed per client, price, arrival rate, profit, and
revenue.
Multimedia services are usually selected and composed for processing, analyzing and transporting multimedia
data over the Internet for end-users. The selection of these services is often performed based on their reputation,
which is usually computed from the feedback provided by the users. Such feedback suffers from many problems, including the low incentive to provide ratings and the bias towards positive or negative ratings. To overcome the dependency on user feedback, this paper presents a method that dynamically computes the reputation
of a multimedia service based on its association with other multimedia services in a composition task. The
degree of association between any two services is computed by utilizing the statistics of how often they have
been composed together, which is used in our method to show the evolution of reputation over a period of time.
The experimental results demonstrate the utility of the proposed method.
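A minimal sketch of such association-based reputation follows: it counts how often services co-occur in a log of composition tasks and normalizes a service's total association mass into a score; the weighting and names are assumptions, not the paper's exact method.

    # Minimal sketch (names and weighting are assumptions, not the paper's exact
    # method): derive a service's reputation from how often it is composed with
    # other services, using only a log of past composition tasks.

    from collections import Counter
    from itertools import combinations

    def association_counts(composition_log):
        pair_counts = Counter()
        for services in composition_log:
            for a, b in combinations(sorted(set(services)), 2):
                pair_counts[(a, b)] += 1
        return pair_counts

    def reputation(service, pair_counts):
        total = sum(c for pair, c in pair_counts.items() if service in pair)
        grand_total = sum(pair_counts.values())
        return total / grand_total if grand_total else 0.0

    if __name__ == "__main__":
        log = [["transcode", "caption", "stream"], ["transcode", "stream"], ["caption", "index"]]
        counts = association_counts(log)
        for s in ("transcode", "caption", "index"):
            print(s, round(reputation(s, counts), 3))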
The Mirrored Server (MS) architecture for network games uses multiple mirrored servers across multiple locations to
alleviate the bandwidth bottleneck and to reduce the client-to-server delay time. Response time in MS can be reduced by
optimally assigning clients to their mirrors. The goal of optimal client-to-mirror-assignment (CMA) is to achieve the
minimum average client-to-mirror delay considering player joins (CMA-J) and leaves (CMA-L), and mirrors with limited
capacity. The existing heuristic solution considers only CMA-J, and thus the average delay of the remaining players may
increase when one or more players leave. Furthermore, the solution ignores mirror capacity, which may overload mirrors. In
this paper we present a resource usage model for the MS architecture, and formally state the CMA problem. For both CMA-J
and CMA-L we propose a polynomial time optimal solution and a faster heuristic algorithm that obtains near optimal CMA.
Our simulations on randomly generated MS topologies show that our algorithms significantly reduce the average delay compared to the existing solution. We also compare the merits of the solutions in terms of their optimality and running-time efficiency.
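For illustration of the problem setting, the sketch below shows a greedy, capacity-aware baseline that assigns each joining client to the lowest-delay mirror with free capacity; it covers only the CMA-J case and is not the optimal or heuristic algorithm proposed in the paper.

    # Greedy, capacity-aware baseline for CMA-J (assigning a joining client to a
    # mirror): pick the lowest-delay mirror that still has free capacity. This only
    # illustrates the problem setting; it is not the paper's optimal or heuristic
    # algorithm, and it does not handle CMA-L (re-balancing after leaves).

    def assign_client(delays, capacities, load):
        """delays: {mirror: delay ms}; capacities: {mirror: max clients}; load: current counts."""
        candidates = [(d, m) for m, d in delays.items() if load.get(m, 0) < capacities[m]]
        if not candidates:
            raise RuntimeError("all mirrors are full")
        delay, mirror = min(candidates)
        load[mirror] = load.get(mirror, 0) + 1
        return mirror, delay

    if __name__ == "__main__":
        delays = {"NYC": 20, "LON": 85, "TOK": 140}
        capacities = {"NYC": 2, "LON": 2, "TOK": 2}
        load = {}
        for client in range(5):
            print(client, assign_client(delays, capacities, load))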
An emerging killer application for enterprise wireless LANs (WLANs) is voice over IP (VoIP) telephony, which
promises to greatly improve the reachability and mobility of enterprise telephony service at low cost. None
of the commercial IEEE 802.11 WLAN-based VoIP products can support more than ten G.729-quality voice
conversations over a single IEEE 802.11b channel on real-world WLANs, even though the physical transmission
rate is more than two orders of magnitude higher than an individual VoIP connection's bandwidth requirement.
There are two main reasons why these VoIP systems' effective throughput is significantly lower than expected:
VoIP's stringent latency requirement and substantial per-WLAN-packet overhead. Time-Division Multiple Access
(TDMA) is a well-known technique that provides per-connection QoS guarantees and improves radio channel utilization efficiency. This paper compares the effective throughput of IEEE 802.11, IEEE 802.11e
and a software-based TDMA (STDMA) protocol that is specifically designed to support WLAN-based VoIP applications,
on the same commodity IEEE 802.11 WLAN hardware. Empirical measurements from a VoIP over
WLAN testbed show that the numbers of G.729-quality voice conversations that IEEE 802.11, IEEE 802.11e
and STDMA can support over a single IEEE 802.11b channel are 18, 22 and 50, respectively.
Next generation mobile ad-hoc applications will revolve around users' need for sharing content/presence information
with co-located devices. However, keeping such information fresh requires frequent meta-data exchanges,
which could result in significant energy overheads. To address this issue, we propose distributed algorithms
for energy efficient dissemination of presence and content usage information between nodes in mobile ad-hoc
networks. First, we introduce a content dissemination protocol (called CPMP) for effectively distributing frequent
small meta-data updates between co-located devices using multicast. We then develop two distributed
algorithms that use the CPMP protocol to achieve "phase locked" wake up cycles for all the participating nodes
in the network. The first algorithm is designed for fully-connected networks and then extended in the second to
handle hidden terminals. The "phase locked" schedules are then exploited to adaptively transition the network
interface to a deep sleep state for energy savings. We have implemented a prototype system (called "Where-Fi")
on several Motorola Linux-based cell phone models. Our experimental results show that for all network topologies
our algorithms were able to achieve "phase locking" between nodes even in the presence of hidden terminals.
Moreover, we achieved battery lifetime extensions of as much as 28% for fully connected networks and about
20% for partially connected networks.
This paper presents an adaptive near-optimal scheduler for multimedia traffic for the 802.11e Enhanced Distributed Channel Access (EDCA) medium access control scheme. The scheduler exploits the ant colony optimization (ACO) metaheuristic to tackle the challenge of packet scheduling. ACO is a biologically inspired algorithm that is known to find near-optimal solutions for combinatorial optimization problems. Thus, we expect ACO scheduling to produce more efficient schedules than comparable deterministic scheduling approaches, at the expense of the computational overhead it introduces. We compare ACO scheduling with relevant deterministic scheduling approaches, in particular the MLLF scheduler, which is specifically designed for the needs of compressed multimedia applications. The purpose of the evaluation is twofold: it allows us to draw conclusions on the feasibility of ACO scheduling for multimedia traffic, and it serves as a benchmark to determine to what extent deterministic schedulers fall short of a near-optimal solution.
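As a sketch of how an ACO scheduler of this kind can be structured, the code below lets artificial ants build packet orderings biased by pheromone and deadline urgency and reinforces the best schedule found; the fitness function and parameter values are illustrative assumptions, not the scheduler evaluated here.

    # Minimal ant-colony sketch for ordering packets so as to minimize deadline
    # misses. Parameter values and the fitness function are illustrative
    # assumptions, not the scheduler evaluated in the paper.

    import random

    def misses(order, packets):
        t = 0
        missed = 0
        for i in order:
            t += packets[i]["tx_time"]
            missed += t > packets[i]["deadline"]
        return missed

    def aco_schedule(packets, ants=20, iters=50, evap=0.1, q=1.0):
        n = len(packets)
        pher = [[1.0] * n for _ in range(n)]          # pher[position][packet]
        best_order, best_cost = list(range(n)), misses(list(range(n)), packets)
        for _ in range(iters):
            for _ in range(ants):
                remaining, order = set(range(n)), []
                for pos in range(n):
                    cands = list(remaining)
                    # bias towards strong pheromone and urgent (early-deadline) packets
                    weights = [pher[pos][p] / (1 + packets[p]["deadline"]) for p in cands]
                    choice = random.choices(cands, weights)[0]
                    order.append(choice)
                    remaining.remove(choice)
                cost = misses(order, packets)
                if cost < best_cost:
                    best_order, best_cost = order, cost
            # evaporate, then reinforce the best schedule found so far
            pher = [[(1 - evap) * v for v in row] for row in pher]
            for pos, pkt in enumerate(best_order):
                pher[pos][pkt] += q / (1 + best_cost)
        return best_order, best_cost

    if __name__ == "__main__":
        random.seed(1)
        pkts = [{"tx_time": random.randint(1, 4), "deadline": random.randint(5, 30)} for _ in range(12)]
        print(aco_schedule(pkts))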
This work explored mechanisms to asynchronously distribute video objects to intranet users. The primary application
driver was to disseminate lecture videos created by the instructor as well as annotated videos from students. The storage
requirements made remote storage mechanisms as well as local infrastructure storage impractical. Hence, we investigated
the feasibility of distributing video contents from user devices. Based on the recent trend of devices going wireless, we
analyzed the viability of using laptop devices. We envision a variant of the RSS feed mechanism that searches for the lectures among currently available replicas. The effectiveness of this distribution mechanism depended on the total number of
voluntary replicas and availability patterns of wireless devices. Using extensive analysis of the observed node behavior,
we showed that though laptop users were online for shorter durations, their temporal consistency can provide reasonable
availability, especially at the times of the day when students were typically active.
This paper presents the VMedia multimedia virtualization framework for sharing media devices among multiple
virtual machines (VMs). The framework provides logical media devices, exported via a well defined, higher level,
multimedia access interface, to the applications and operating system running in a VM. By using semantically
meaningful information, rather than low-level raw data, within the VMedia framework, efficient virtualization
solutions can be created for physical devices shared by multiple VMs. Experimental results demonstrate that the
base cost of virtual device access via VMedia is small compared to native physical device access, and in addition,
that these costs scale well with an increasing number of guest VMs. Here, VMedia's MediaGraph abstraction is
a key contributor, since it also allows the framework to support dynamic restructuring, in order to adapt device
accesses to changing requirements. Finally, VMedia permits platforms to offer new and enhanced logical device
functionality at lower costs than those achievable with alternative solutions.
Modern consumer-grade 3D graphics cards boast computation and memory resources that can easily rival or even exceed those of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their
enormous computational power has attracted developers to port an increasing number of scientific computation
programs to these cards, including matrix computation, collision detection, cryptography, database sorting, etc.
As more and more applications run on 3D graphics cards, there is a need to allocate the computation and memory resources on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation, and evaluation of a Graphics Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of its
demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on
dynamically collected statistics, and controls each process's GPU command production rate through its CPU
scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal
GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of
application mixes.
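The scheduling idea can be sketched with a plain Deficit Round Robin loop over per-process queues of estimated command-group costs, as below; this only illustrates the principle behind GERM, not its actual implementation.

    # Sketch of Deficit Round Robin over per-process GPU command-group queues
    # (illustrative of the scheduling idea behind GERM, not its implementation):
    # each process gets a quantum of estimated GPU time per round; a command group
    # is dispatched only while the process's deficit counter can pay for it.

    from collections import deque

    def drr_dispatch(queues, quantum_ms=5.0, rounds=3):
        """queues: {process: deque of estimated command-group costs in ms}."""
        deficit = {p: 0.0 for p in queues}
        schedule = []
        for _ in range(rounds):
            for p, q in queues.items():
                if not q:
                    continue
                deficit[p] += quantum_ms
                while q and q[0] <= deficit[p]:
                    cost = q.popleft()
                    deficit[p] -= cost
                    schedule.append((p, cost))
        return schedule

    if __name__ == "__main__":
        queues = {
            "game": deque([4.0, 4.0, 4.0, 4.0]),        # heavy GPU user
            "video": deque([1.0, 1.0, 1.0, 1.0, 1.0]),  # light GPU user
        }
        for proc, cost in drr_dispatch(queues):
            print(proc, cost)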
Event detection from a video stream is becoming an important and challenging task in surveillance and sentient
systems. While computer vision has been extensively studied to solve different kinds of detection problems over
time, it is still a hard problem and even in a controlled environment only simple events can be detected with a
high degree of accuracy. Instead of struggling to improve event detection using image processing only, we bring
in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames and cannot be "seen" directly by image processing. In this work we demonstrate that time-sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas by using an appliance as an example (coffee pot level detection based on video data) to show that semantics can guide the re-calibration of the detection model.
This work exploits time sequence semantics to detect when re-calibration is required to automatically relearn
a new detection model for the newly evolved system state and to resume monitoring with a higher rate of
accuracy.