This paper describes the architecture of the cell switch router (CSR), an actual implementation of the label switching paradigm, and a CSR prototype system supporting standard ATM interfaces. The CSR can use either PVCs (permanent virtual connections) or SVCs (switched virtual connections) as the VCs for cut-through packet forwarding. The CSR contains both a cell switch fabric and an IP packet switch fabric to achieve high-throughput IP forwarding. IP packets are forwarded either through cut-through packet transmission, in which packets are relayed without IP packet reassembly or IP header processing, or through conventional hop-by-hop IP packet forwarding. The paper describes and proposes the mechanism for forwarding connectionless IP packet flows at the CSR. A CSR prototype system has been developed; it uses PVC and SVC connections to transfer IP packets. With the prototype system, we confirmed that the CSR can establish a cut-through packet transmission path between adjacent nodes with an acceptable establishment delay of less than a few hundred seconds. The SVC connections for cut-through packet transmission are established on demand using ATM Forum UNI 3.0 or UNI 3.1 signaling.
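The two forwarding paths can be sketched roughly as follows (a minimal illustration under assumed names, not the actual CSR implementation):

```python
# Sketch of the CSR's two forwarding modes (hypothetical names and table
# contents): a cell arriving on a VC with a cut-through mapping is relayed
# at the cell level; otherwise it is queued for IP-level forwarding.

cut_through_table = {            # incoming (port, VCI) -> outgoing (port, VCI)
    (1, 100): (3, 200),
}

def forward_cell(in_port, vci, cell, reassembly, ip_forward, send_cell):
    mapping = cut_through_table.get((in_port, vci))
    if mapping is not None:
        # Cut-through: relay the cell directly on the mapped VC, with no
        # IP packet reassembly and no IP header processing.
        out_port, out_vci = mapping
        send_cell(out_port, out_vci, cell)
    else:
        # Hop-by-hop: reassemble cells into an IP packet, then route it.
        packet = reassembly.add(in_port, vci, cell)
        if packet is not None:       # last cell of the packet has arrived
            ip_forward(packet)
```

The table itself would be populated by the flow-to-VC mapping mechanism the paper proposes; here it is simply a static dictionary.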
Tag switching is based on two key ideas: (1) a single forwarding algorithm based on label (tag) swapping, and (2) support for a wide range of forwarding granularities that can be associated with a single tag. The combination of these two ideas facilitates the development of a routing system that is functionally rich, scalable, capable of high forwarding performance, and able to evolve gracefully to address new and emerging requirements in a timely fashion.
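The forwarding side of the first idea reduces to a single exact-match lookup per packet, regardless of what the tag represents. A hedged sketch (table contents and names are hypothetical):

```python
# Minimal sketch of tag (label) swapping: one exact-match lookup replaces
# the longest-prefix match of conventional IP forwarding. Several forwarding
# granularities (host route, aggregate prefix, multicast tree) can all map
# to tags without changing this per-packet operation.

tag_table = {
    # incoming tag -> (outgoing interface, outgoing tag)
    17: ("if1", 42),
    18: ("if1", 42),   # many flows/granularities can share one outgoing tag
}

def forward(in_tag, packet):
    out_if, out_tag = tag_table[in_tag]   # single exact-match lookup
    return out_if, out_tag, packet        # tag is swapped, payload unchanged
```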
This paper describes a method for mapping IP flows to ATM switches. No signaling is necessary to set up a path through the ATM switches. Switch controllers run an IP routing protocol and execute IP forwarding. The IPSOFACTO component is responsible for mapping an IP flow to a switched path. Mechanisms for switching both unicast and multicast flows are described.
The purpose of this paper is to investigate how to design an integrated routing architecture for IP and ATM that meets the requirements of a large-scale Internet based on IP and ATM. Integration of IP and ATM at the routing level leads us to consider two separate aspects: using a common routing architecture for IP and ATM (layer integration) and supporting best-effort and QoS traffic in the same routing architecture (service integration). The first level of integration is, for obvious reasons, highly recommended. In contrast, we show that the second level of integration is not desirable, because best-effort and QoS traffic flows have contradictory routing requirements. To conduct this analysis, we feel that, given the inherent complexity of the problem, merely comparing the existing proposals would be too restrictive. Instead, we step back in the design process and identify the basic design options to be considered when designing a routing architecture. We identify three options: route updating vs. route pinning, hop-by-hop vs. explicit routing, and pre-computed vs. on-demand route computation. A fourth option is whether or not to integrate into the routing architecture the capability to compute shortcut paths, that is, paths that bypass layer 3 (L3) nodes and use only layer 2 (L2) devices. Using this framework, we conclude that best-effort traffic flows are well served by a combination of route updating, hop-by-hop routing and pre-computed routes, while QoS flow routing is best built on route pinning, explicit routing and on-demand route computation. We also observe that the capability to compute L2 shortcuts in an integrated L2/L3 routing architecture is an added value that simplifies the overall network design and optimizes the efficiency of the forwarding path.
Recent work on building fast IP routers has emphasized integrating ATM switching with IP routing. One critical issue is how to map IP routing information to ATM labels. VC merging allows many routes to be mapped to the same VC label, thereby providing a scalable mapping method that can support tens of thousands of edge routers. VC merging requires reassembly buffers so that cells belonging to different packets intended for the same destination do not interleave with each other. We investigate the impact of VC merging on the additional buffering required, both for the reassembly buffers themselves and for other buffers affected by the perturbation of the traffic process. The main result indicates that VC merging incurs minimal overhead compared to non-VC merging in terms of additional buffering. Moreover, the overhead decreases as utilization increases, or as the traffic becomes more bursty.
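The reassembly-buffer requirement can be illustrated with a small sketch (hypothetical names; the real mechanism operates on AAL5 frames and their end-of-packet indication):

```python
# Sketch of VC merging: cells of a packet are held per incoming VC until the
# end-of-packet cell arrives; only then is the whole packet emitted on the
# shared outgoing VC, so cells of packets from different sources never
# interleave on the merged VC.

from collections import defaultdict

class VcMerger:
    def __init__(self, send):
        self.pending = defaultdict(list)   # reassembly buffer per incoming VC
        self.send = send                   # emits one cell on the merged VC

    def cell_arrival(self, in_vc, cell, end_of_packet):
        self.pending[in_vc].append(cell)
        if end_of_packet:
            for c in self.pending.pop(in_vc):   # flush the packet atomically
                self.send(c)
```

The extra buffering the paper quantifies is exactly the occupancy of `pending` while packets are incomplete.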
Among the most important network elements in the Internet are the routers that relay IP packets. Because of the growth of the Internet, routers currently experience serious problems relaying traffic at a satisfactory speed. The idea of switching Internet traffic flows has recently been introduced, and a new technology called IP switching has emerged; several differing technological solutions have been suggested. In this paper we describe and compare two flow-based IP switching methods for deciding whether to switch Internet traffic flows onto separate ATM connections. Traffic measurements are made in two networks of varying size, and based on a specific three-stage flow analysis we suggest that the switching decision should be made as flexible as possible, owing to the expected diversity of traffic profiles in different parts of the network. In this way the optimal cluster of services can be switched and router resources can be optimally utilized. A simple model to determine the workload on an IP switch is introduced. Using this model we see that the workload of the flow setup component and the routing component can be optimized if flexible methods are used to determine which flows are to be separately switched.
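A flexible switching decision of the kind advocated here might look like the following (thresholds and names are hypothetical illustrations, not the paper's measured values):

```python
# Sketch of a tunable flow-classification rule for IP switching: a flow is
# moved onto its own ATM connection only once it has shown enough packets
# within a time window. Making min_packets and window configurable lets an
# operator match the rule to the local traffic profile, as the paper suggests.

def should_switch(packet_count, first_seen, now,
                  min_packets=10, window=2.0):
    """Decide whether a flow deserves a dedicated switched connection.

    packet_count : packets observed for this flow so far
    first_seen   : timestamp of the flow's first packet (seconds)
    now          : current timestamp (seconds)
    """
    return packet_count >= min_packets and (now - first_seen) <= window
```

A long-lived, high-volume flow passes the test and is switched; short or sparse flows stay on the hop-by-hop routed path, sparing the flow setup component.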
Key issues in the current development of the Internet are its ability to scale and to support new real-time or near real-time applications such as video and audio conferencing. Two factors affect these qualities: the ability to distinguish which connections should be switched, and effective control over network resources. ATM is a serious attempt to standardize global multiservice networks, and it seems well suited to the future Internet. ATM was originally meant to be an easy and efficient protocol, but it is now turning into 'yet another ISDN': more and more features are being added to ATM, overloading the network with management procedures. Therefore a new approach needs to be taken, one that keeps a strong reminder of 'what is necessary' in mind. This paper presents an alternative, simpler approach to ATM traffic management and offers some suggestions for mapping Internet applications onto this simplified ATM environment using an advanced IP switching concept.
This paper presents an integrated, server-based mechanism for the efficient support of the IP integrated services (IIS) model in ATM networks, namely the multicast integration server (MIS) architecture. Instead of viewing IP-ATM multicast address resolution and QoS support separately, the approach in this paper is to consider such issues in an integrated manner. The multicast integration server is capable of IP multicast to ATM NSAP address resolution using the easy multicast routing through ATM clouds (EARTH) protocol, as well as of QoS management using the resource reservation protocol (RSVP). With the use of EARTH, several ATM point-to-multipoint connections with different QoS parameters can be associated with a single IP multicast address. An RSVP server within the MIS is used to distribute RSVP messages inside the ATM cloud and to set the corresponding QoS state in the address resolution table of EARTH. In addition, this paper defines a quantized heterogeneity model which, together with the MIS, supports advanced IIS features such as QoS heterogeneity and dynamic QoS changes in IP-ATM networks.
This paper presents CONGRESS, a connection-oriented group-address resolution service, and its applications. CONGRESS is an efficient native ATM protocol for the resolution and management of multicast group addresses in an ATM WAN; it complements the native ATM multicast mechanisms. CONGRESS resolves multicast group addresses and maintains their membership for applications. It is not designed to handle the applications' data exchange; rather, applications can use the resolved addresses returned by CONGRESS to implement a many-to-many communication model. CONGRESS employs hierarchically organized servers in order to be scalable, and its hierarchy maps naturally onto the ATM private network-to-network interface peer group hierarchy. CONGRESS' communication overhead for the management of a single multicast group is linear in the size of the group. Apart from facilitating native ATM multicast applications, CONGRESS can be used to implement IP multicast 'cut-through' routing over ATM. The cut-through routing paradigm is conceived as one of the most promising techniques for enabling traditional IP-based communication with QoS. Unfortunately, for a variety of reasons, building scalable IP multicast cut-through protocols is non-trivial. We claim that a multicast address resolution and maintenance service like CONGRESS can greatly contribute to the development of scalable IP cut-through routing services. A conceptual cut-through routing solution, IP multicast service for non-broadcast access networking technology (IP-SENATE), built on top of CONGRESS and scalable to a large ATM cloud, is sketched.
We propose a new scheme for multicasting in a binary tree that combines packet self-replication and routing in a space-division ATM switch. We revisit Law and Leon-Garcia's approach to packet self-replication and routing, then propose a new scheme that uses only 2b address bits for a b-level binary tree. This method, when applied to a unique 3-dimensional ATM switch architecture, constitutes an optimal combination of packet self-replication and routing for multicasting in a continuously expanding self-routing space-division switch.
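The 2b-bit addressing idea can be illustrated as follows (the bit encoding shown is an assumption for illustration, not necessarily the paper's exact format): with one pair of bits per tree level, a node copies the packet into the left subtree, the right subtree, or both.

```python
# Illustrative sketch of self-replication/routing in a b-level binary tree
# using 2 bits per level (2b address bits total). At each level the pair
# (left, right) tells the node where to send the packet; setting both bits
# makes the node replicate it. Encoding is a hypothetical example.

def route(bits, level=0, path=""):
    """bits: list of (left, right) flags, one pair per tree level.
    Returns the set of leaf ports the packet is delivered (copied) to."""
    if level == len(bits):
        return {path}
    left, right = bits[level]
    ports = set()
    if left:                       # route (or copy) into the left subtree
        ports |= route(bits, level + 1, path + "0")
    if right:                      # both flags set -> packet self-replicates
        ports |= route(bits, level + 1, path + "1")
    return ports
```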
We present a cost-minimized multicast routing algorithm, referred to as the constrained multicast tree with virtual destinations (CMTVD), that can be used for heterogeneous applications in ATM networks. In routing multipoint information flows over ATM VP/VC networks, the algorithm generates a near-optimal multicast tree T[s, (M)] based on the delay requirements of the services, the link costs and path-overlapping effects, with the aim of saving resources while satisfying QoS. For delay-sensitive services, the cost-optimized route is the minimum cost Steiner tree (MCST) connecting all the destination nodes, virtual destination nodes and the source node at least cost, subject to the delay along each path being less than the maximum allowable end-to-end delay. For delay-insensitive services, the cost-optimized multicast route is the MCST connecting the whole multicast group at least cost, subject to the traffic load being balanced across the network. The CMTVD algorithm uses the virtual destination node concept to find the multicast route that maximizes the overlap of the paths between multiple destinations, and thus minimizes the number of links and switches used in the multicast communication. The cost performance of the CMTVD algorithm is evaluated through computer simulation on random graphs.
With the maturing of the G.723.1 and G.729 voice encoding algorithms, low-cost, good-performance compressed voice is becoming readily available. At the same time, the ITU has just completed the AAL 2 (ATM adaptation layer 2) specification, based on 'mini-packet' technology, which fits naturally with the transfer of packetized compressed voice over ATM. As a result, there is a great deal of interest in the marketplace in developing specifications based on AAL 2. This paper describes the current status of the AAL 2 standards in the ITU (International Telecommunication Union) and the activities in the ATM Forum on the use of AAL 2 in the landline trunking application.
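The mini-packet idea can be sketched as follows. This is a toy model: the real AAL 2 CPS packet has a 3-octet header (CID, LI, UUI and HEC fields) and each cell payload begins with a 1-octet start field; the 2-octet header here is a deliberate simplification.

```python
# Toy sketch of AAL 2 mini-packet multiplexing: short voice packets from
# several channels share ATM cell payloads, and a mini-packet may straddle
# a cell boundary, so low-rate compressed voice does not waste cell space
# the way one-voice-sample-per-cell AAL 1/AAL 5 framing would.

CELL_PAYLOAD = 47   # octets left per cell after the 1-octet start field

def pack(minipackets):
    """minipackets: list of (channel_id, voice_bytes). Returns filled cells."""
    stream = b""
    for cid, payload in minipackets:
        # Simplified 2-octet header: channel id + length (the real CPS
        # header is 3 octets: CID, LI, UUI, HEC).
        stream += bytes([cid, len(payload)]) + payload
    cells = [stream[i:i + CELL_PAYLOAD]
             for i in range(0, len(stream), CELL_PAYLOAD)]
    if cells and len(cells[-1]) < CELL_PAYLOAD:
        cells[-1] = cells[-1].ljust(CELL_PAYLOAD, b"\x00")   # pad last cell
    return cells
```

Two 10-octet voice packets from different channels fit comfortably in a single cell, which is the efficiency argument behind the trunking application.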
This paper presents information on the substantive changes contained in the ATM Forum LAN emulation over ATM version 2 specifications (LANE Version 2). With LANE Version 2, the client-server behavior (LUNI) and the server-server behavior (LNNI) are detailed in separate documents. The LUNI specification includes new features such as support for LLC-multiplexed data connections and support for quality of service, and it replaces the original LAN emulation over ATM specification. The LNNI specification details the interactions between the various LANE Version 2 service components. This paper highlights the LUNI changes and details the LNNI operational theory.
The ATM Forum's LAN emulation (LANE) specification has now been available for more than two years, and LANE products have matured sufficiently that enterprises have deployed production LAN emulation networks. However, the scalability of LANE is still frequently debated, as is its applicability in wide-area networks (WANs). This paper examines design features and supporting applications that can enhance the scalability and manageability of LAN emulation networks. The design features include techniques for distributing the LANE services to provide load balancing and robustness, mechanisms for managing broadcast and multicast traffic (a classical problem in large LANs), approaches for controlling signaling rates during power-up and failover situations, and extensions that exploit the distributed routing capabilities provided by the next hop resolution protocol (NHRP). Issues associated with WAN deployment of LAN emulation are also explored, including requirements for carrier-based service environments, where conservation of network resources, security capabilities, and monitoring applications are especially important. The paper concludes with an assessment of LANE's status and prospects as a technology for building large-scale networks.
This paper describes the next hop resolution protocol (NHRP) and its deployment over non-broadcast multi-access (NBMA) networks, with particular emphasis on NHRP's use with IP over ATM campus networks. NHRP allows a host or router to determine the 'best' next hop at the egress of an NBMA cloud toward a final destination. NHRP differs from traditional ARP-like mechanisms in that it returns information that may permit the creation of a cut-through connection across multiple IP subnets, bypassing the associated router hops. Bypassing intermediate routers should yield a net decrease in delay and delay variation, since fewer protocol layers of the intervening nodes would have to touch the data carried on those connections. Note that while this protocol was developed for use with NBMA subnetworks, it is possible, if not likely, that it will be applied to BMA subnetworks as well; however, this usage of NHRP is for further study. Further, while this paper emphasizes IP networks, NHRP is designed to be multi-protocol at the network layer.
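An NHRP-style resolution step can be sketched as follows (cache contents, prefix and ATM address are hypothetical examples): a query for a destination returns the NBMA address of the egress node, letting the source open a direct VC instead of using the routed path.

```python
# Sketch of NHRP-style resolution: map a destination IP address to the
# ATM (NBMA) address of the cloud's egress next hop. A hit lets the caller
# signal a cut-through VC; a miss falls back to hop-by-hop routing.

import ipaddress

nhrp_cache = {
    # destination prefix -> NBMA (ATM) address of the egress next hop
    "10.2.0.0/16": "47.0005.80ff.e100.0000.f21a.26d8.00",   # hypothetical
}

def resolve(dst_ip):
    """Return the egress ATM address for dst_ip, or None (use routed path)."""
    for prefix, atm_addr in nhrp_cache.items():
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix):
            return atm_addr
    return None
```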
The next hop resolution protocol (NHRP) is an IP/NBMA address resolution protocol being developed by the IETF to replace the RFC 1577 ATMARP. In this paper, we discuss Bellcore's experiences and lessons from the development of the first NHRP prototype publicly released to the IETF's IP over NBMA (ION) working group in late 1996. In the first half of the paper, we present the implementation details of the prototype software, the NHRP server and the NHRP client; in particular, the software configuration in our testbed and the data and control structures of the server and the client are explained. We then illustrate the address resolution and VC setup procedure initiated by a TCP telnet application in our testbed. In the second half, we discuss the interactions between NHRP and the traditional IP routing model. We demonstrate that an NHRP server that resides on an IP router and exchanges NHRP messages with other servers over IP's routed path, such as our server implementation, cannot use the IP routing engine as is, and therefore needs a separate interface to interact with it. We also show that the NHRP client creates a potential scalability problem in IP routing table size, since it cannot exploit the traditional 'default' routing philosophy in which partial routing information suffices for a host. Finally, we show that the NHRP client also needs a dynamic interaction with the IP routing protocol to create short-cuts.
Multiprotocol over ATM (MPOA) is a new protocol specified by the ATM Forum. MPOA provides a framework for effectively synthesizing bridging and routing with ATM in an environment of diverse protocols and network technologies. The primary goal of MPOA is the efficient transfer of inter-subnet unicast data in a LAN Emulation (LANE) environment. MPOA integrates LANE and the next hop resolution protocol (NHRP) to preserve the benefits of LAN Emulation, while allowing inter-subnet, internetwork layer protocol communication over ATM VCCs without requiring routers in the data path. It reduces latency and the internetwork layer forwarding load on backbone routers by enabling direct connectivity between ATM-attached edge devices (i.e., shortcuts). To establish these shortcuts, MPOA uses both routing and bridging information to locate the edge device closest to the addressed end station. By integrating LANE and NHRP, MPOA allows the physical separation of internetwork layer route calculation and forwarding, a technique known as virtual routing. This separation provides a number of key benefits including enhanced manageability and reduced complexity of internetwork layer capable edge devices. This paper provides an overview of MPOA that summarizes the goals, architecture, and key attributes of the protocol. In presenting this overview, the salient attributes of LANE and NHRP are described as well.
Multiprotocol over ATM (MPOA), specified by the ATM Forum, provides an architecture for the transfer of internetwork layer packets (layer 3 datagrams such as IP and IPX) over ATM subnets or across emulated LANs. MPOA provides shortcuts that bypass routers, avoiding router bottlenecks. It is a grand union of existing standards: LANE from the ATM Forum, NHRP from the IETF, and Q.2931 from the ITU. The intent of this paper is to clarify the data flows between pairs of source and destination hosts in an MPOA system. It includes scenarios for both intra- and inter-subnet flows between different pairs of MPOA end systems. Intra-subnet flows simply use LANE for address resolution and data transfer. Inter-subnet flows may use a default path for short-lived flows or a shortcut for long-lived flows. The default path uses LANE and router capabilities; the shortcut path uses LANE plus NHRP for ATM address resolution, with an ATM virtual circuit established before the data transfer. This allows efficient transfer of internetwork layer packets over ATM for real-time applications.
Internet traffic burstiness allows for statistical multiplexing gain in the available bandwidth of an ATM link, but exploiting it requires dynamic bandwidth assignment, as in the ABR (available bit rate) service. In this paper we evaluate the real advantages of ABR versus CBR for Internet service provisioning. We consider performance parameters such as connection setup delay and the active waiting time due to flow control, and we show that CBR schemes can be a good alternative for Internet service provisioning over ATM networks.
The ATM Forum has adopted rate-based congestion control for ABR (available bit rate) traffic. Much of the existing work evaluating ABR congestion control schemes has used a threshold on buffer queue length to indicate congestion. On the other hand, many ER (explicit rate) algorithms calculate their 'fair-share' values from the utilization level, on the assumption that ER switches can measure the current utilization level of ABR traffic. If the same mechanism -- measuring the utilization level -- were also used to indicate congestion, then the same switch could easily implement both binary and ER ABR control algorithms. Based on these observations, in this paper we study the effect of two different congestion indication methods: (1) buffer queue length (the most commonly used method) and (2) utilization level (the new method). We evaluate two binary ABR control schemes, EFCI (explicit forward congestion indication) and CI (congestion indication) with backward notification, under both congestion indication methods. We also evaluate and compare two ER algorithms: the ERICA (explicit rate indication for congestion avoidance) algorithm proposed by Jain and the CAPC-2 (congestion avoidance with proportional control - 2) algorithm proposed by Barnhart. Performance evaluations are carried out by computer simulation. We simulate two ABR switches connected by an OC-3 link, with each switch connecting five end systems. The distance between the two switches is 20 km for the LAN case and 1,000 km for the WAN case, following ATM Forum specifications. For each simulation run, we measure average queuing delay, maximum queue length, and network utilization; traces of ACR (allowed cell rate) and buffer queue length are also examined. We find that the new congestion indication method dramatically reduces the maximum queue length and average queuing delay, with only a slight decrease in utilization. Both ER schemes show smooth buffer occupancy and attain high utilization.
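The two congestion indication methods can be sketched side by side (thresholds are hypothetical illustrations, not the values used in the simulations):

```python
# Sketch of the two congestion indication methods compared above: the
# classic queue-length test versus the utilization-level test. An ER switch
# already measures ABR utilization over an interval to compute fair-share
# rates, so the second test reuses an existing measurement.

def congested_by_queue(queue_len, threshold=100):
    """Classic method: congestion when the ABR buffer exceeds a threshold."""
    return queue_len > threshold

def congested_by_utilization(abr_cells_seen, interval_cells, target=0.9):
    """New method: congestion when ABR traffic's share of link slots over
    a measurement interval exceeds a target utilization level."""
    return abr_cells_seen / interval_cells > target
```

Signaling congestion before the buffer actually fills is what lets the utilization-based test keep queues short at a small cost in utilization.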
Ensuring end-to-end bounded delay and a fair allocation of bandwidth to a backlogged session are no longer the only criteria by which a queue service scheme is judged good. With the evolution of packet-switched networks, more and more distributed and multimedia applications are being developed. These applications demand that the service offered to them be homogeneously distributed at all instants, in contrast to the back-to-back serving of packets in the WFQ scheme. There are two reasons for this demand for homogeneous service. (1) In feedback-based congestion control algorithms, sources constantly sample the network state using feedback from the receiver and modify their emission rate in accordance with the feedback message; a reliable feedback message is only possible if the packet service is homogeneous. (2) In multicast applications, where packet replication is performed at switches, replicated packets are likely to be served at different rates if the service at the different output ports is not homogeneous. This is undesirable, because the replication of packets to different multicast branches at a switch has to be carried out at a homogeneous speed for two important reasons: heterogeneous service rates of replicated multicast packets result in different feedback information from different destinations of the same multicast session, and thus lead to unstable and less efficient network control; and, in a switch architecture, the buffer requirement can be reduced if the replication and serving of multicast packets are done at a homogeneous rate. Thus, there is a need for a service discipline that not only serves applications at no less than their guaranteed rates but also assures homogeneous service to packets. Homogeneous service to an application may be translated precisely into maintaining good inter-packet spacing.
The EWFQ scheme is identical to the WFQ scheme except that a packet is stamped with a delayed value of the service start time of the packet in the corresponding GPS scheme; this delay accounts for the packet slots that might be occupied by a packet of a previously served session. The EWFQ scheme then serves packets in increasing order of their stamp values. It provides an end-to-end bounded-delay service to applications. For multicast sessions, the scheme ensures a homogeneous service rate to all replicated packets and thus permits the replicator to work at a rather constant speed. A session's packets are distributed more accurately at low cost; moreover, the EWFQ scheme is likely to perform fewer operations than other schemes (e.g., WF2Q) while still ensuring good inter-packet spacing.
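The stamping rule can be sketched in a simplified form. This is an illustration, not the paper's algorithm: virtual time is held constant for brevity, and the `delay` argument merely stands in for EWFQ's delayed start-time correction.

```python
# Sketch of start-time stamping in a WFQ-like scheduler: each packet is
# stamped with the (possibly delayed) start time of its service in the
# reference GPS system, and packets are served in increasing stamp order.

import heapq

class Scheduler:
    def __init__(self):
        self.vtime = 0.0          # virtual time of the reference GPS system
                                  # (held constant here for simplicity)
        self.finish = {}          # last finish stamp per session
        self.queue = []           # heap of (stamp, seq, packet)
        self.seq = 0              # tie-breaker for equal stamps

    def enqueue(self, session, length, weight, packet, delay=0.0):
        # GPS start time: packet starts when the session's previous packet
        # finishes, or now if the session was idle; `delay` stands in for
        # EWFQ's correction for slots held by previously served sessions.
        start = max(self.vtime, self.finish.get(session, 0.0)) + delay
        self.finish[session] = start + length / weight
        heapq.heappush(self.queue, (start, self.seq, packet))
        self.seq += 1

    def dequeue(self):
        _, _, packet = heapq.heappop(self.queue)
        return packet
```

Because consecutive packets of one session get spaced-out start stamps, a backlogged session's packets are interleaved with others rather than served back to back, which is the inter-packet-spacing property argued for above.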
The flow of papers proposing new schemes to cope with congestion in networks continues unabated. In particular, as the deployment of ATM networks advances, effective congestion control is required to ensure that these networks can provide the wide range of services they promise. This paper attempts to evaluate whether recently proposed algorithms are likely to be useful in practice, using performance simulation and modeling methods. However, performance is very sensitive to the flow control parameters, and identifying an appropriate set of parameters is difficult since it depends heavily on traffic conditions. The aim of this paper is to broaden the context within which ATM performance is considered and to outline ongoing work in performance evaluation of ATM networks. The paper presents a complete picture for evaluating the properties of congestion control mechanisms, including fairness, overhead, data loss, and network utilization. It is particularly aimed at estimating the effects of recent congestion control schemes for ATM networks.
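One standard quantitative handle on the fairness property listed above is Jain's fairness index; the paper does not necessarily use this exact measure, so the following is offered only as an illustration of how per-connection throughputs can be condensed into a single fairness score.

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Returns 1.0 when all connections get equal throughput and
    approaches 1/n when one connection takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sq) if sq else 0.0
```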
A discrete-time, finite-buffer-capacity queueing system for a generic multimedia shared medium is studied in this paper. In such an environment, the arrival process at the aggregate level (e.g., frame arrival rather than cell arrival) can be captured using the Markovian arrival process, and the service time can be represented by a phase-type service. The quality of service can be increased through priority assignment and the buffer allocation scheme. In this paper, we consider two types of priorities and three buffer allocation schemes: first, priority service with non-priority buffer allocation; second, priority service with fixed buffer allocation; and third, priority service with priority buffer allocation using push-out and a threshold. Our analysis yields the queue length distribution and the blocking probability. In addition, we show that the throughput can be obtained from a Markov renewal process described by two sub-stochastic matrices derived by partitioning the transition matrix. Through several numerical examples, we show the effect of the buffer allocation scheme on the throughput, and present some results for the average queue length, blocking probability, and throughput.
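The third scheme, priority buffer allocation with push-out and a threshold, admits a simple behavioral sketch. The class below is a hypothetical illustration of the push-out rule only (departures are omitted, and the names are ours), not the paper's analytical model.

```python
from collections import deque

class PushOutBuffer:
    """Shared buffer of given capacity. When the buffer is full, a
    high-priority ('H') arrival may push out a queued low-priority
    ('L') cell, but only once the queue has reached the threshold."""

    def __init__(self, capacity, threshold):
        self.capacity = capacity
        self.threshold = threshold
        self.queue = deque()   # items: 'H' or 'L'

    def arrive(self, prio):
        if len(self.queue) < self.capacity:
            self.queue.append(prio)
            return True
        # Buffer full: push out only for a high-priority arrival, past
        # the threshold, when a low-priority victim is present.
        if prio == 'H' and len(self.queue) >= self.threshold and 'L' in self.queue:
            self.queue.remove('L')   # push out one low-priority item
            self.queue.append('H')
            return True
        return False                 # arrival is blocked
```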
Since emerging networked applications require a variety of different communication services, the number of communication subsystems and approaches delivering flexible services has increased. Nevertheless, applications still have to be programmed directly on top of these communication subsystems. To provide an easy-to-use and intuitively programmable communication service for multimedia applications, an up-to-date application programming interface is needed. The approach developed here offers an object-oriented interface for setting up, accessing, and managing communication services. Moreover, these services may be flexible, allowing application programmers to specify communication requirements as a set of application-dependent quality-of-service (QoS) parameters. Service needs and communication demands are specified by, e.g., bandwidth requirements, delay bounds, or authentication requests. The developed and implemented application programming interface hides communication-relevant details from applications and provides a set of efficient and streamlined interface functions and operations.
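A minimal sketch of such an object-oriented, QoS-parameterized interface might look as follows. All class and method names here are hypothetical illustrations, not the actual interface developed in the paper.

```python
class QoS:
    """Application-dependent QoS parameters of the kind described:
    bandwidth, delay bound, authentication."""
    def __init__(self, bandwidth_kbps=None, max_delay_ms=None, authenticate=False):
        self.bandwidth_kbps = bandwidth_kbps
        self.max_delay_ms = max_delay_ms
        self.authenticate = authenticate

class CommService:
    """Object-oriented facade that hides the underlying communication
    subsystem; a real implementation would map QoS onto, e.g., an ATM
    signalling request. Here the mapping is stubbed out."""
    def __init__(self):
        self._connections = []

    def connect(self, peer, qos):
        conn = {"peer": peer, "qos": qos, "open": True}
        self._connections.append(conn)
        return conn

    def close(self, conn):
        conn["open"] = False
```

The point of the design is that the application states *what* it needs (the `QoS` object) and never touches subsystem-specific signalling.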
Several studies of the performance of TCP over the UBR service have shown that ATM switches with limited buffers respond to UBR congestion with low throughput and high unfairness. To achieve higher efficiency, additional mechanisms can be implemented to control cell dropping at switch buffers. One of the most popular drop strategies is early packet discard (EPD), which drops entire higher-level data units when the buffer queue reaches a certain threshold. Another mechanism is partial packet discard (PPD), which, if a cell is lost under congestion, discards all subsequent cells belonging to the same packet. The first simulation studies of the throughput behavior of TCP over ATM with EPD showed a performance improvement. How efficiently the switch buffers are used depends on the placement of the EPD threshold and on how cell dropping occurs at a given level of congestion. Two of the most relevant factors are the packet size distribution and the traffic distribution. Our main goal is to analyze the relation between efficiency, for different buffer dimensions, and the parameters responsible for the traffic volume, such as the packet size and the number of sources on the network. Furthermore, we investigate the optimal EPD threshold value. The performance with and without the enhancements is compared, and we try to establish this relation by interpreting the simulation results. This suggests an answer to the question of why and when early packet discard should be used.
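The EPD/PPD decision logic described above can be sketched per arriving cell. The sketch assumes AAL5-style framing where the last cell of a packet is flagged (`eom`), omits departures, and uses our own names; it is not the simulator used in the paper.

```python
class EpdBuffer:
    """Per-cell EPD/PPD drop logic at a switch output buffer."""

    def __init__(self, capacity, epd_threshold):
        self.capacity = capacity
        self.threshold = epd_threshold
        self.queue_len = 0
        self.dropping = set()   # VCs whose current packet is being discarded

    def cell_arrival(self, vc, first_cell, eom):
        """Returns True if the cell is accepted into the buffer."""
        drop = False
        if vc in self.dropping:
            drop = True                 # PPD: tail of an already damaged packet
        elif first_cell and self.queue_len >= self.threshold:
            drop = True                 # EPD: refuse the whole new packet
            self.dropping.add(vc)
        elif self.queue_len >= self.capacity:
            drop = True                 # forced loss -> PPD from here on
            self.dropping.add(vc)
        if eom:
            self.dropping.discard(vc)   # packet boundary resets the VC's state
        if not drop:
            self.queue_len += 1
        return not drop
```

Placing `epd_threshold` relative to `capacity` is exactly the tuning question the abstract raises.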
Data and telecommunications industries are using ATM in a number of applications and in several configurations, enabling companies to re-engineer important functions and distribute the workforce effectively as needed. In this paper, the authors define and offer solutions to the issues and concerns of telecom/datacom managers providing enhanced network access via ATM. We quantify several important traffic management implementation and testing issues within an ATM network. Guidelines are presented for meeting quality-of-service requirements, for mapping source traffic descriptors into different service classes, and for measuring various traffic management parameters. Abstract test suite development is discussed with respect to performance testing, and guidelines are presented on performance testing in network and application designs.
Simple integrated media access (SIMA) is a new service category for packet-based communication systems such as ATM or IP. SIMA offers a simple way to introduce easy charging and real-time support, and to increase network capacity. According to the SIMA concept, each customer defines only two parameters before connection establishment: a nominal bit rate (NBR) and a choice between the real-time and non-real-time classes. The NBR forms the basis of charging and defines how network capacity is divided among connections during overload. The ratio of the source's momentary bit rate to its NBR defines, for each individual cell, a priority level that is used to select the cells to be discarded under congestion. Simulation results are presented on cell loss as a function of a cell's priority level. SIMA can also be equipped with a priority feedback system that informs traffic sources about the condition of the network, and results on the performance of sources using this information are presented. This paper also presents some simulations of a SIMA network that uses packet discarding. The simulations show that SIMA is capable of fulfilling the requirements of future broadband Internet networks.
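The per-cell priority assignment could be sketched as follows. The logarithmic mapping below is one plausible choice, under which sending at the NBR yields a mid-range priority and each doubling of the rate lowers the priority by one level; it is not necessarily the exact SIMA formula.

```python
import math

def sima_priority(momentary_bit_rate, nbr, levels=8):
    """Illustrative SIMA-style priority mapping (assumed, not the
    published formula). Level 0 cells are discarded first under
    congestion; level levels-1 cells are discarded last."""
    ratio = momentary_bit_rate / nbr
    # Sending at the NBR gives the mid-range level; every doubling of
    # the rate relative to the NBR costs one level.
    level = math.floor(levels // 2 - math.log2(ratio))
    return max(0, min(levels - 1, int(level)))
```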
The implementation of different well-known scheduling schemes using a cell sequencer/scheduler circuit previously proposed by the authors is investigated. Two groups of scheduling schemes, namely priority-based and rate-based schemes, are considered. The first group includes static and dynamic priority schemes such as head-of-line priority and windowed priority schemes. The second group includes fair queueing, self-clocked fair queueing, and pacing mechanisms. The application of the sequencer in shaping and policing circuits in ATM networks is also addressed. A mechanism for scheduling real-time and non-real-time traffic using two different algorithms but a common sequencer is also presented, demonstrating the sequencer's ability to implement a combination of algorithms and functions in the same environment.
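The idea of one sequencer serving several tagging disciplines can be sketched with a software stand-in for the sort circuit. Here a heap plays the role of the hardware sequencer; the strict precedence of real-time cells over self-clocked fair queueing (SCFQ) cells is our illustrative choice, not the authors' exact design.

```python
import heapq

class Sequencer:
    """One sorted structure serving two tagging disciplines:
    static head-of-line priority for real-time cells, and SCFQ
    finish tags for non-real-time cells."""

    def __init__(self):
        self.heap = []
        self.seq = 0
        self.v = 0.0   # SCFQ virtual time = tag of cell last served

    def push_priority(self, cell, priority):
        # Real-time: smaller static priority value is served first;
        # group 0 outranks all SCFQ (group 1) cells.
        heapq.heappush(self.heap, ((0, priority), self.seq, cell))
        self.seq += 1

    def push_scfq(self, cell, length, weight):
        tag = self.v + length / weight   # SCFQ finish tag
        heapq.heappush(self.heap, ((1, tag), self.seq, cell))
        self.seq += 1

    def pop(self):
        (group, tag), _, cell = heapq.heappop(self.heap)
        if group == 1:
            self.v = tag   # self-clocking: advance virtual time
        return cell
```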
This paper reports the design of the inter-working unit (IWU) required when high-speed information processing machines (hosts) communicate with each other over a channel interface via a lower-speed wide area network (WAN). It derives analytically the relationship among the window size W; the N of ACK_N (the acknowledgement returned for every N frames); the ATM network utilization ratio RTP (relative throughput); the required segmentation buffer size Bmax; and the average queueing delay Tw. The research target is the maximum enhancement of inherent host performance (throughput), up to the limit of the network bandwidth, by means of new flow control. It focuses on the 1.0625 Gbps Fibre Channel (FC) interface mounted on supercomputers, workstations, etc., and a 622.08 Mbps ATM network. The paper proposes an ACK-based flow control consisting of pseudo ACKs, to fully utilize the network bandwidth independently of the network propagation delay, and ACK priority transmission, to prevent queuing in full-duplex communication. It also discusses ACK registration as a means of minimizing the required segmentation buffer size. At present, prototype IWUs are being developed, and the basic function of FC transmission through ATM has already been verified in laboratory tests.
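The window-size/throughput relationship motivating pseudo ACKs follows standard sliding-window reasoning: at most W frames can be in flight per round trip, which bounds throughput independently of the link rate. The function below is an illustrative bound of that kind, not the paper's exact RTP formula.

```python
def relative_throughput(window_frames, frame_bits, link_bps, rtt_s):
    """Window-limited relative throughput (standard sliding-window
    bound): W frames per round trip cannot exceed the link rate.
    Returns a value in (0, 1], where 1.0 means the link is saturated."""
    window_limited_bps = window_frames * frame_bits / rtt_s
    return min(1.0, window_limited_bps / link_bps)
```

When the propagation delay grows, this bound falls below 1.0 unless the effective window grows with it, which is exactly the gap the proposed pseudo ACKs are meant to close.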
This paper addresses a fundamental problem in resource management for flow-based hybrid switching systems. Such systems aim at efficiently transporting layer-3 connectionless IP traffic over layer-2 connection-oriented ATM switching fabrics. One idea behind flow-based hybrid switching is first to decompose individual IP packet streams into flows and then to classify them into short-lived and long-lived flows. While the short-lived flows are best forwarded in software through permanent virtual connections (PVCs), the long-lived flows are more effectively transmitted in hardware through switched virtual connections (SVCs) established on demand. Clearly, the flow classification mechanism has a great impact on the utilization of the system's resources. Unlike the traditional emphasis on resources such as link bandwidth and cell buffer size, our paper focuses on the resources directly associated with packet processing power, signaling capacity, and routing table size. Our study indicates that the presently available static flow classification methods have a critical shortcoming in balancing the utilization of the system's resources. We propose a novel approach to adaptive flow classification which can balance the utilization of system resources to match time-varying traffic characteristics. After formulating the proposed flow adaptation as a stochastic control problem, a heuristic algorithm is developed. A simulation study based on real traces shows the viability of the proposed flow adaptation for dynamic resource management in flow-based hybrid switching system design.
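A threshold-based classifier with a simple adaptation rule can be sketched as follows. The promotion threshold and the load-based adaptation heuristic below are hypothetical illustrations of the general idea, not the paper's algorithm.

```python
class FlowClassifier:
    """Classify flows as short-lived (forwarded via PVC) or long-lived
    (promoted to an SVC after `threshold` packets). adapt() nudges the
    threshold to match the SVC table load -- an assumed heuristic."""

    def __init__(self, threshold=10, svc_capacity=100):
        self.threshold = threshold
        self.svc_capacity = svc_capacity   # stands in for table/signaling limits
        self.counts = {}
        self.svc_flows = set()

    def on_packet(self, flow_id):
        self.counts[flow_id] = self.counts.get(flow_id, 0) + 1
        if (flow_id not in self.svc_flows
                and self.counts[flow_id] >= self.threshold
                and len(self.svc_flows) < self.svc_capacity):
            self.svc_flows.add(flow_id)    # long-lived: cut through via SVC
        return "SVC" if flow_id in self.svc_flows else "PVC"

    def adapt(self):
        # Raise the threshold as the SVC table fills (protects signaling
        # capacity); lower it when the table is underused (offloads the CPU).
        load = len(self.svc_flows) / self.svc_capacity
        if load > 0.9:
            self.threshold *= 2
        elif load < 0.5 and self.threshold > 1:
            self.threshold //= 2
```

A static classifier is this class without `adapt()`; the paper's point is that the fixed threshold cannot balance CPU, signaling, and table resources as traffic shifts.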
Residential broadband access networks using technologies such as ADSL and cable modems have enabled the provisioning of emerging Internet applications such as Internet telephony, video conferencing, and interactive games. These applications have specific end-to-end performance requirements of the network in order to perform acceptably. Currently the Internet is a best-effort network which does not provide differentiated levels of service. Many elements of an end-to-end network are already suitable for providing quality-of-service guarantees, such as ATM links. Nevertheless, only with the recent deployment of broadband access technologies and the introduction of Internet protocols such as RSVP does providing levels of service become feasible without the use of expensive links to the customer site. This paper examines several network implementation options for introducing levels of service using cable modem access. Limitations imposed by the applications on the network, as well as the contribution of the different network elements to level-of-service parameters such as end-to-end delay, throughput, and jitter, are examined. Concentration network architectures as well as proposed backbone configuration options for end-to-end level-of-service provisioning are presented. At the access network, the provisioning of levels of service using bandwidth control, both through packet throttling and through access network designs that provide excess bandwidth to customers, is presented. HFC protocol-dependent means of providing levels of service, including reservation- and ATM-based protocols, are examined as well.
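Bandwidth control through packet throttling is commonly realized with a token bucket; the sketch below is a generic illustration of that standard mechanism (parameter names are ours), not a specific scheme from the paper.

```python
class TokenBucket:
    """Token-bucket packet throttle: tokens accrue at `rate` per second
    up to `capacity` (which caps burst size); a packet is admitted only
    if enough tokens are available to cover its size."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, packet_size):
        # Refill tokens for the elapsed time, then try to spend them.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False   # packet is delayed or dropped by the throttle
```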
An initialization protocol is described for a TDMA-based QPSK burst-mode transport system for use on shared-medium networks such as cable TV networks. Cable TV networks employ the coaxial bus principle between an optical node and the client modem. Each cable modem connected to the coaxial bus needs to undergo a start-up procedure at activation to determine certain timing- and physical-layer-related settings. The mechanism described here performs modem identification, power ranging, and distance ranging. A unique distance ranging mechanism is employed that offers delay ranging based on distribution power measurement.
The application of the IEEE 802.11 access protocol for wireless local area networks (WLANs) to a wireless ATM environment is considered, with the aim of supporting ATM virtual connections. Two different schemes are considered for performing cell multiplexing in MAC frames. The proposed solutions are discussed and evaluated by means of simulation in a multiservice environment with data and video traffic. The effect of fading in the communication channel is also taken into account.
This paper describes a novel combined TDMA/FDMA 16-QAM receiver architecture developed for video-on-demand applications. A burst-operated rapid synchronization scheme is proposed which employs an efficient training preamble for overlapped operation of automatic gain control, carrier phase acquisition, and symbol timing alignment. All the dedicated synchronization algorithms are implemented digitally, using field programmable gate arrays (FPGAs), for a data rate of 10.8 Mbit/s. Several analytic relationships for control accuracy, acquisition time, and signal-to-noise ratio (S/N) are derived. Experimental results demonstrate that the proposed method significantly decreases the required preamble length to 23 symbols, with a dynamic range of 11 dB and a sensitivity of -56 dBm at a bit-error rate (BER) of 5 × 10^-9. The BER performance under frequency offset and input power variation is also investigated.
This paper describes an efficient and scalable solution for IP multicast over ATM clouds, dubbed EARTH (easy IP multicast routing through ATM clouds). Analysis of related work shows poor scalability even for small multicast groups, because classical IP over ATM is retained 'as is.' Two major principles are introduced in the paper: the multicast logical IP subnet (MLIS) and multiprotocol over MLIS (MOM). The MLIS concept is defined over ATM in parallel with the classical LISs. An MLIS dynamically includes all multicast-capable hosts and statically includes all egress Mrouters, with the EARTH server acting as the ATMARP server for the MLIS. The MLIS spans the whole ATM cloud and therefore enables efficient short-cuts. Like an IP class D address, which is not a 'true' IP address, an MLIS is not a 'true' LIS but an extension of the LIS concept. The need for MOM is motivated by the following: separation of IP control and IP data flows is inevitable over ATM, and the control flows of multiple protocols tend to concentrate around ARP data. For IP multicast these protocols are the interdomain multicast routing protocols (such as DVMRP), the subnet group management protocol, and the resource reservation protocols (such as RSVP). The EARTH server thus becomes a point of protocol attraction in the MLIS.
This paper presents the definition of a new transfer mode suited for all-optical packet switching. A preliminary format of the optical packet has been defined, consisting of a header at a fixed bit rate and a payload, switched transparently inside the network, able to carry any service at any bit rate. This optical packet is inserted in a time slot of fixed duration, regardless of the link speed, in order to simplify switch operation and guarantee network modularity. In this paper the issue of the optimal size of the time slot is addressed, with reference to access delay and traffic shaping.