Proceedings Volume 4526

Scalability and Traffic Control in IP Networks

Sonia Fahmy, Kihong Park
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 25 July 2001
Contents: 11 Sessions, 32 Papers, 0 Presentations
Conference: ITCom 2001: International Symposium on the Convergence of IT and Communications 2001
Volume Number: 4526

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions:
  • Scalable Network Services
  • Multicast Routing, Management, and Applications I
  • Multicast Routing, Management, and Applications II
  • Multimedia Wireless Networks
  • Scalable Routing and Traffic Engineering
  • Internet Topology I
  • Internet Topology II
  • Service Models, Pricing, and Accounting
  • Scalable Simulation
  • Service Differentiation in the Internet
  • Multicast Congestion Control
Scalable Network Services
Class provisioning using proportional delay differentiation
Constantinos Dovrolis, Parameswaran Ramanathan
We consider the problem of link provisioning in a differentiated services network that offers N classes of service. At the provisioning phase, the network manager configures the link to support the requirements of M distinct traffic types. Each traffic type is specified by an expected arrival rate and an average delay requirement. The objective of the provisioning phase is to jointly determine: the minimum link capacity needed to support the M given traffic types, the nominal class of service for each traffic type, and the appropriate resource allocation between classes. We propose such a class provisioning methodology. Our methodology is based on Proportional Delay Differentiation (PDD) scheduling. The major advantage of PDD scheduling is that it avoids the computation of an explicit bandwidth share for each class. Instead, PDD scheduling determines the order of packet transmissions in order to meet the N-1 ratios of the N target class delays. Having fixed the delay ratios with PDD, we then set the N class delays to their target values by adjusting a single knob, which is the link capacity. The methodology is illustrated with examples.
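The single-knob idea can be made concrete with a minimal sketch. The delay model below (an M/M/1-style aggregate delay plus a rate-weighted delay conservation assumption) is an assumption for illustration, not taken from the paper; the point is that once PDD fixes the delay ratios, provisioning reduces to a one-dimensional search over link capacity.

```python
# Hedged sketch: single-capacity-knob provisioning under PDD.
# Assumptions (not from the paper): aggregate delay W = 1/(C - R) as in M/M/1,
# and rate-weighted class delays averaging to W, with d_i proportional to the
# delay differentiation parameter delta_i (so d_i/d_j = delta_i/delta_j).

def class_delays(capacity, rates, deltas):
    """Per-class delays implied by PDD at a given link capacity."""
    R = sum(rates)                                # total offered load
    W = 1.0 / (capacity - R)                      # aggregate delay (assumed model)
    norm = sum(r / R * d for r, d in zip(rates, deltas))
    return [d * W / norm for d in deltas]

def min_capacity(rates, deltas, targets, hi=1e6, tol=1e-6):
    """Binary-search the smallest capacity meeting every class delay target."""
    lo = sum(rates) + tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        ok = all(d <= t for d, t in zip(class_delays(mid, rates, deltas), targets))
        hi, lo = (mid, lo) if ok else (hi, mid)
    return hi

# Two classes, delta ratio 1:4 (class 1 sees one quarter of class 2's delay).
print(min_capacity(rates=[30.0, 50.0], deltas=[1.0, 4.0], targets=[0.01, 0.05]))
```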
Economics of scalable network services
John Chung-I Chuang
This paper explores the economics of scalable network services by posing two simple questions. First, what is the difference between scale economics and scalability? Second, why and how should we scale network services for competition and cooperation? By answering these questions in the context of network services such as multicast, QoS and web caching, we gain some insight into the tradeoffs involved in the design of scalable network services.
Flexible transport services for emerging opportunities in Internet content delivery
John W. Byers
The emergence of new technologies for Internet content delivery including CDNs, end system multicast, peer-to-peer content exchange (e.g. Napster) and application-level anycast are fundamentally changing the relationships between consumers and producers of digital content. These new technologies enable delivery mechanisms which go far beyond point-to-point connections between a client and a single origin server. For example, CDNs redirect clients to a number of equivalent replica servers, while end system multicast blurs the distinction between client and server by requiring that every participant who wishes to access content serve content as well. In this position paper, we motivate and describe scalable strategies for addressing the challenges and opportunities that distributed content distribution mechanisms will place on reliable transport protocols and distributed scheduling algorithms. Central to our discussion are lightweight reliable transport services designed for maximum flexibility which can accommodate connection preemption, connection migration and connection aggregation built in part on fast forward error correcting (FFEC) codes.
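As a toy illustration of the forward error correction idea the position paper builds on (a hedged sketch, not the authors' codes): one XOR parity packet per block lets a receiver repair any single loss without waiting for a retransmission.

```python
# Hedged toy illustration of forward error correction: one XOR parity packet
# per block of k data packets recovers any single erasure. Real FEC codes,
# such as the fast codes referenced in the paper, tolerate many losses; this
# sketch only shows the principle.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block):
    """Append a parity packet equal to the XOR of all data packets."""
    return block + [reduce(xor_bytes, block)]

def decode(received):
    """Recover a block in which at most one packet is None (lost)."""
    lost = [i for i, p in enumerate(received) if p is None]
    if not lost:
        return received[:-1]
    received[lost[0]] = reduce(xor_bytes, (p for p in received if p is not None))
    return received[:-1]

packets = [b"aaaa", b"bbbb", b"cccc"]
sent = encode(packets)
sent[1] = None                      # simulate loss of the second packet
assert decode(sent) == packets
```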
Position paper on networks with aggregate quality of service
Stephen D. Patek, Jorg Liebeherr
As the Internet continues to expand into new application domains, there is growing demand for differentiated services that provide varying degrees of Quality of Service (QoS). Until recently, the approach for service differentiation in the Internet has focused on providing QoS guarantees to individual traffic flows. This per-flow model has not been widely embraced, largely due to the vast amount of state information that needs to be maintained in the network. As a consequence, the network community is currently redefining the notion of QoS for the Internet, leading to a new service model where guarantees are made to aggregate flows, rather than to individual flows. The notions of aggregate QoS which are being defined, e.g., by the Differentiated Services (DiffServ) working group in the IETF, are very different from traditional networking concepts and require novel algorithms for traffic control and provisioning. In our work in this area we seek to derive fundamental insights into the nature and delivery of QoS to traffic aggregates, i.e. aggregate QoS. A central feature of service models for aggregate QoS is that there is no requirement for users to specify the composition or the destination of traffic. This significantly complicates the provisioning of network resources and suggests that new, feedback-intensive traffic control algorithms must be developed to mitigate uncertainty about network loading. At present there is little or no theory available to guide the development of such algorithms. In this paper we lay out a research agenda for illuminating the fundamental issues associated with QoS for aggregate traffic. By providing a theoretical reference frame for aggregate QoS, one which provides tools and techniques for congestion control, capacity provisioning, and admission control, we complement ongoing standardization efforts for differentiated services in the Internet.
Multicast Routing, Management, and Applications I
Enabling end-user network monitoring via the multicast consolidated proxy monitor
Anshuman Kanwar, Kevin C. Almeroth, Supratik Bhattacharyya, et al.
The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to a) provide a view of the multicast network at varying levels of granularity, b) provide end users with a limited ability to query the multicast infrastructure in real time, and c) protect the infrastructure from an overwhelming amount of monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters, and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.
Efficient server replication and client redirection for multicast services
Zongming Fei, Mostafa H. Ammar, Ellen W. Zegura
Server replication and multicasting are well-established techniques for increasing the capacity of a networked service and improving client performance. In this paper, we consider the combination of these two techniques. Specifically, we investigate the problem of selecting amongst rate-adaptive multicast servers, which adjust their sending rate based on network conditions and/or feedback from clients. Effective server rate adaptation can lead to efficient utilization of network resources and performance improvement perceived by clients. In this study of adaptive multicast server selection, we explore some fundamental issues and investigate the implications of different selection strategies on the performance perceived by clients. We first formulate several optimization problems based on different performance measures. We prove that the general problem is NP-hard and then present a special case with an optimal polynomial-time solution. We then design several heuristics for the general problem. We investigate the performance of the heuristics through simulation and show that our heuristics can improve the performance perceived by clients over other strategies.
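A hedged toy version of one possible greedy heuristic for this assignment problem (illustrative only, not one of the paper's algorithms, and assuming the simplest adaptation model in which a server multicasts at the slowest assigned client's bottleneck bandwidth):

```python
# Hedged toy heuristic for assigning clients to rate-adaptive multicast
# servers. Assumption (not from the paper): a server multicasts at the
# smallest bottleneck bandwidth among its assigned clients, so adding a slow
# client drags down everyone on that server. The greedy rule assigns clients
# one by one to whichever server keeps total received rate highest.
# bw[c][s] = bottleneck bandwidth from server s to client c (hypothetical).

def total_rate(assignment, bw):
    rate = 0.0
    for s, clients in assignment.items():
        if clients:
            rate += len(clients) * min(bw[c][s] for c in clients)
    return rate

def greedy_assign(bw, n_servers):
    assignment = {s: [] for s in range(n_servers)}
    for c in sorted(bw, key=lambda c: -max(bw[c])):      # fastest clients first
        best = max(range(n_servers),
                   key=lambda s: total_rate({**assignment, s: assignment[s] + [c]}, bw))
        assignment[best].append(c)
    return assignment

bw = {0: [8.0, 2.0], 1: [7.0, 3.0], 2: [1.0, 6.0]}       # 3 clients, 2 servers
print(greedy_assign(bw, n_servers=2))                    # {0: [0, 1], 1: [2]}
```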
Multicast Routing, Management, and Applications II
RCache: design and analysis of scalable fault-tolerant multimedia stream caching schemes
Katherine Guo, Milind M. Buddhikot, Youngsu Chae, et al.
In the current Internet, web content is increasingly being cached closer to the end-user to reduce network and web server load and therefore improve performance and user perceived quality. Existing web caching systems typically cache entire web documents and attempt to keep them consistent with the origin server. This approach works well for text and images; for bandwidth intensive multimedia data such as audio and video, caching entire documents is not cost effective and does not scale. An alternative approach is to cache parts of the multimedia stream on different caches in the network and coordinate stream playback from these caches. In such a case, the collection of cooperating distributed caches acts as a single cache that is both scalable and fault-tolerant. This paper focuses on the design and the evaluation of novel data placement and replacement techniques for such distributed caches. Specifically, we propose two schemes that work together: (1) RCache, a family of easy-to-implement, fault-tolerant multimedia data layout schemes that incorporate novel intra- and inter-clip replication controls, (2) TwoD, a two-dimensional local data replacement scheme based on the concept of segment popularity used for data replacement ordering at each cache. Our schemes optimize storage space, start-up latency, server load, network bandwidth usage, and overhead from playback switch-overs. Our simulation results show that the RCache schemes provide a 4-9 times higher cache hit ratio than a comparable traditional web caching system with the same amount of storage space.
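A hedged sketch of a segment-popularity replacement ordering in the spirit of the two-dimensional idea above (not the paper's exact TwoD policy; the tie-breaking rule is an assumption): less popular clips are evicted first, and within a clip, later segments go before earlier ones.

```python
# Hedged sketch of segment-popularity-based replacement ordering (illustrative,
# not the paper's TwoD policy): evict segments of unpopular clips first and,
# within a clip, evict segments far from the clip start first, since later
# segments can be refetched while the earlier ones are playing.

def eviction_order(segments):
    """segments: list of (clip_popularity, segment_index_within_clip, name)."""
    return sorted(segments, key=lambda s: (s[0], -s[1]))

cache = [(0.9, 0, "newsA-seg0"), (0.9, 5, "newsA-seg5"),
         (0.1, 0, "docB-seg0"), (0.1, 3, "docB-seg3")]
print([name for _, _, name in eviction_order(cache)])
# ['docB-seg3', 'docB-seg0', 'newsA-seg5', 'newsA-seg0']
```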
Protocol design for scalable and reliable group rekeying
Xincheng Brian Zhang, Simon S. Lam, Dong Young Lee, et al.
We present the design and specification of a scalable and reliable protocol for group rekeying together with performance evaluation results. The protocol is based upon the use of key trees for secure groups and periodic batch rekeying. At the beginning of each rekey period, the key server sends a rekey message to all users consisting of encrypted new keys (encryptions, in short) carried in a sequence of packets. We present a simple strategy for identifying keys, encryptions, and users, and a key assignment algorithm which ensures that the encryptions needed by a user are in the same packet. Our protocol provides reliable delivery of new keys to all users eventually. It also attempts to deliver new keys to all users with a high probability by the end of the rekeying period. For each rekey message, the protocol runs in two steps: a multicast step followed by a unicast step. Proactive FEC multicast is used to control NACK implosion and reduce delivery latency. Our experiments show that a small FEC block size can be used to reduce encoding time at the server without increasing server bandwidth overhead. Early transition to unicast, after at most two multicast rounds, further reduces the worst-case delivery latency as well as user bandwidth requirement. The key server adaptively adjusts the proactivity factor based upon past feedback information; our experiments show that the number of NACKs after a multicast round can be effectively controlled around a target number. Throughout the protocol design, we strive to minimize processing and bandwidth requirements for both the key server and users.
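A hedged sketch of the key-tree mechanics behind such protocols (a generic logical key hierarchy over a complete binary tree, not the paper's key identification or assignment algorithm): when a user leaves, only the keys on its path to the root are replaced, each encrypted under the keys of the node's remaining children, so a departure costs O(log N) encryptions.

```python
# Hedged sketch of rekeying over a binary key tree (generic logical key
# hierarchy, not the paper's algorithm). Users sit at the leaves; internal
# nodes hold auxiliary keys. Node i's children are 2i+1 and 2i+2, and user j
# occupies leaf node j + (N-1) in a complete binary tree with N leaves.

def path_to_root(leaf_node):
    path, node = [], leaf_node
    while node > 0:
        node = (node - 1) // 2
        path.append(node)
    return path

def rekey_on_leave(user, n_users):
    """Return (rekeyed_node, encrypted_under_child_node) pairs for one leave."""
    departed = user + n_users - 1        # leaf index of the departing user
    encryptions = []
    for node in path_to_root(departed):
        for child in (2 * node + 1, 2 * node + 2):
            if child != departed:        # the departed user must not learn new keys
                # children that are themselves on the path are encrypted
                # under their freshly replaced keys
                encryptions.append((node, child))
    return encryptions

# 8 users, user 5 leaves: 2*log2(8) - 1 = 5 encryptions.
print(rekey_on_leave(5, 8))
```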
Reliable on-demand multicast routing with congestion control in wireless ad hoc networks
Ken Tang, Mario Gerla
In this paper, we address the congestion control multicast routing problem in wireless ad hoc networks through the medium access control (MAC) layer. We first introduce the Broadcast Medium Window (BMW) MAC protocol, which provides reliable delivery to broadcast packets at the MAC layer. We then extend the wireless On-Demand Multicast Routing Protocol (ODMRP) to facilitate congestion control in ad hoc networks using BMW. Through simulation, we show that ODMRP with congestion control adapts well to multicast sources that are aggressive in data transmissions.
Multimedia Wireless Networks
Prefetching protocol for streaming of prerecorded continuous media in wireless environments
Frank H. P. Fitzek, Martin Reisslein
Streaming of continuous media over wireless links is a notoriously difficult problem. This is due to the stringent Quality of Service requirements of continuous media and the unreliability of wireless links. We develop a streaming protocol for the real-time delivery of prerecorded continuous media from a central base station to multiple wireless clients within a wireless cell. Our protocol prefetches parts of the ongoing continuous media streams into prefetch buffers in the clients. Our protocol prefetches according to a Join-the-Shortest-Queue policy. By exploiting rate adaptation techniques of wireless data packet protocols, the Join-the-Shortest-Queue policy dynamically allocates more transmission capacity to streams with small prefetched reserves. Our protocol uses channel probing to handle the location-dependent, time-varying, and bursty errors of wireless links. We evaluate our prefetching protocol through extensive simulations with VBR MPEG encoded video traces. Our simulations indicate that for bursty VBR video with an average rate of 64 kbit/sec and typical wireless communication conditions, our prefetching protocol achieves client starvation probabilities on the order of 10^-4 and a bandwidth efficiency of 90% with prefetch buffers of 128 kBytes.
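A hedged sketch of the Join-the-Shortest-Queue rule in isolation (channel probing and rate adaptation from the protocol are omitted; the slot model and numbers are illustrative): each transmission slot goes to the client holding the smallest prefetched reserve.

```python
# Hedged sketch of Join-the-Shortest-Queue prefetching: each transmission slot
# goes to the client with the smallest prefetched reserve (seconds of video
# already buffered), steering capacity toward clients closest to starving.
# Channel probing and rate adaptation from the paper are not modeled here.

def jsq_schedule(reserves, playable_per_slot, slots):
    """Simulate `slots` slots; reserves[i] = seconds buffered at client i."""
    starved = [0] * len(reserves)
    for _ in range(slots):
        target = min(range(len(reserves)), key=lambda i: reserves[i])
        reserves[target] += playable_per_slot      # one slot's worth prefetched
        for i in range(len(reserves)):             # everyone plays out one second
            reserves[i] -= 1.0
            if reserves[i] < 0:
                starved[i] += 1
                reserves[i] = 0.0
    return starved

print(jsq_schedule(reserves=[4.0, 1.0, 2.5], playable_per_slot=3.2, slots=1000))
```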
Power control for wideband cellular wireless systems with link adaptation
Mingbo Xiao, Ness B. Shroff, Edwin K. P. Chong
Power control is an important radio resource management technique to enhance the spectral and power efficiency of wireless systems. However, most power control schemes proposed in the literature consider only narrowband cases, where users are accommodated by a single channel. Next-generation wireless systems aim to provide mobile users with multimedia applications and services, which often have heterogeneous QoS specifications and demand more bandwidth than voice. In this paper, we explore the power control issue for wideband cellular wireless systems with link adaptation. Each user arriving at the system is associated with a specific minimum rate requirement. By evaluating the availability of the channels, we decide which channels to use for this user and what SIR target (or equivalently what bit rate) to set on each chosen channel. The objective is to save power and avoid too many channels being occupied by a user while the minimum rate requirement is satisfied. Since this problem is a nonlinear integer programming problem with a large search space, we propose a heuristic scheme to solve it online. Once the SIR target is determined, each channel can independently adjust the power level using any distributed power control algorithm to achieve the target.
Framework for power and activity factor allocation in a multiclass CDMA system
Xinzhou Wu, Rayadurgam Srikant
We consider a multimedia CDMA uplink where there are multiple classes of users with different Quality-of-Service (QoS) requirements. Each user is modeled as an ON-OFF source, where in the ON state, the user transmits a fixed number of bits in each time slot and in the OFF state, the user is silent. The probability of being in the ON state, known as the activity factor, could be different for different users. Assuming a constant channel gain, we first characterize the set of transmit power levels, activity factors and number of users in each class that can be supported by a system with a given spreading gain under the constraint that each user's QoS requirement must be met. Using this characterization, we then present a utility function-based algorithm for choosing the activity factors of elastic users in the network.
Scalable Routing and Traffic Engineering
Understanding traffic dynamics at a backbone POP
Nina Taft, Supratik Bhattacharyya, Jorjeta Jetcheva, et al.
Spatial and temporal information about traffic dynamics is central to the design of effective traffic engineering practices for IP backbones. In this paper we study backbone traffic dynamics using data collected at a major POP on a tier-1 IP backbone. We develop a methodology that combines packet-level traces from access links in the POP and BGP routing information to build components of POP-to-POP traffic matrices. Our results show that there is wide disparity in the volume of traffic headed towards different egress POPs. At the same time, we find that current routing practices in the backbone tend to constrain traffic between ingress-egress POP pairs to a small number of paths. As a result, there is a wide variation in the utilization level of links in the backbone. Frequent capacity upgrades of the heavily used links are expensive; the need for such upgrades can be reduced by designing load balancing policies that will route more traffic over less utilized links. We identify traffic aggregates based on destination address prefixes and find that this set of criteria isolates a few aggregates that account for an overwhelmingly large portion of inter-POP traffic. We also demonstrate that these aggregates exhibit stability throughout the day on per-hour time scales, and thus they form a natural basis for splitting traffic over multiple paths in order to improve load balancing.
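A hedged sketch of the matrix-building step (the prefixes, POP names, and flows are hypothetical): each destination address is matched to its longest BGP prefix, the prefix identifies the egress POP, and bytes are accumulated into one row of the POP-to-POP matrix.

```python
# Hedged sketch of building one row of a POP-to-POP traffic matrix: map each
# flow's destination address to its longest matching BGP prefix, look up the
# egress POP associated with that prefix, and accumulate bytes. The prefixes,
# POP names and flows below are hypothetical.
import ipaddress
from collections import defaultdict

bgp_table = {                         # prefix -> egress POP (hypothetical)
    "10.0.0.0/8": "POP-SanJose",
    "10.1.0.0/16": "POP-NewYork",
    "192.0.2.0/24": "POP-Dallas",
}
prefixes = sorted((ipaddress.ip_network(p) for p in bgp_table),
                  key=lambda n: n.prefixlen, reverse=True)

def egress_pop(dst):
    addr = ipaddress.ip_address(dst)
    for net in prefixes:              # longest prefix match
        if addr in net:
            return bgp_table[str(net)]
    return "UNKNOWN"

flows = [("10.1.2.3", 1500), ("10.9.9.9", 400), ("192.0.2.7", 9000)]
row = defaultdict(int)                # ingress POP fixed by the trace location
for dst, nbytes in flows:
    row[egress_pop(dst)] += nbytes
print(dict(row))  # {'POP-NewYork': 1500, 'POP-SanJose': 400, 'POP-Dallas': 9000}
```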
Policy-aware algorithms for proxy placement in the Internet
Krishnanand Monahar Kamath, Harpal S. Bassali, Rajendraprasad B. Hosamani, et al.
The Internet has grown explosively over the past few years and has matured into an important commercial infrastructure. The explosive growth of traffic has contributed to degradation of user-perceived response times in today's Internet. Caching at proxy servers has emerged as an effective way of reducing the overall latency. The effectiveness of a proxy server is primarily determined by its locality. This locality is affected by factors such as the Internet topology and routing policies. In this paper, we present heuristic algorithms for placing proxies in the Internet by considering both Internet topology and routing policies. In particular, we make use of the logical topology inferred from Autonomous System (AS) relationships to derive the path between a proxy and a client. We present heuristic algorithms for placing proxies and evaluate these algorithms on the Internet logical topology over three years. To the best of our knowledge, this is the first work on placing proxy servers in the Internet that considers logical topology.
Internet Topology I
Internet topology: connectivity of IP graphs
Andre Broido, kc claffy
In this paper we introduce a framework for analyzing local properties of Internet connectivity. We compare BGP and probed topology data, finding that currently probed topology data yields much denser coverage of AS-level connectivity. We describe data acquisition and construction of several IP- level graphs derived from a collection of 220 M skitter traceroutes. We find that a graph consisting of IP nodes and links contains 90.5% of its 629 K nodes in the acyclic subgraph. In particular, 55% of the IP nodes are in trees. Full bidirectional connectivity is observed for a giant component containing 8.3% of IP nodes.
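One way to see how "nodes in trees" can be counted is the generic pruning below (a hedged sketch, not necessarily the authors' procedure): repeatedly stripping degree-one vertices leaves the cyclic core, and everything removed lies in trees hanging off it.

```python
# Hedged sketch: find the nodes of an undirected IP graph that lie in trees
# hanging off the core by repeatedly removing degree-1 vertices. This is a
# generic decomposition, not the paper's exact methodology.
from collections import defaultdict

def tree_nodes(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    pruned = set()
    frontier = [n for n in adj if len(adj[n]) <= 1]
    while frontier:
        n = frontier.pop()
        if n in pruned:
            continue
        pruned.add(n)
        for m in adj[n]:
            adj[m].discard(n)
            if len(adj[m]) == 1:
                frontier.append(m)
    return pruned

# A triangle core with a two-hop chain attached: the chain nodes lie in trees.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "e")]
print(tree_nodes(edges))   # {'d', 'e'} (set order may vary)
```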
Internet path inflation due to policy routing
Hongsuda Tangmunarunkit, Ramesh Govindan, Scott Shenker
In our previous work, we used a simplified model of routing policy in the Internet to study the impact of policy routing on Internet path-lengths. This prior work suffered from two shortcomings--it was based on a single snapshot of the Internet topology, and our simplified policy model could generate AS paths that violate peering relationships. In this paper, we address these two shortcomings by re-examining our results with respect to a more recent snapshot of the Internet, and improving the policy model to avoid peering violation. We find that our prior observations regarding the path inflation due to routing policy appear to hold both across time and with respect to a more sophisticated model of routing policy.
Internet Topology II
Inferring AS-level Internet topology from router-level path traces
Hyunseok Chang, Sugih Jamin, Walter Willinger
A number of recent studies characterize AS-level topology of the Internet by exploiting connectivity information contained in BGP routing tables. In this paper, we present an alternative method for discovering AS connectivity by inferring individual AS connections from the Internet's router-level topology. This methodology has several advantages over using BGP routing tables. First, it allows us to obtain AS-level connectivity information at a finer granularity (e.g., multiple connections between a pair of ASs); second, we can discover ASs aggregated in BGP routing tables; and third, we can identify AS border routers, which may allow us to further characterize inter-AS connections. Since border routers have, by definition, multiple interfaces, each with an address in a potentially different AS, a major challenge of our approach is to properly map border routers to their corresponding ASs. To this end, we present in this paper several mapping rules and heuristics for inferring the ASs of border routers and report on results showing the effectiveness and validity of these rules and heuristics.
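A hedged, much-simplified illustration of the interface-to-AS mapping step (hypothetical prefixes, AS numbers, and path; only the simplest origin-AS rule, not the paper's border-router heuristics): map each interface to the AS originating its longest matching prefix, then infer an AS link wherever consecutive routers on a traced path map to different ASs.

```python
# Hedged sketch: infer AS-level links from a router-level path by mapping each
# interface address to the AS that originates its longest matching prefix.
# This is only the simplest mapping rule; the paper's border-router heuristics
# go further. Prefixes, AS numbers and the path below are hypothetical.
import ipaddress

origin_as = {"10.0.0.0/8": 100, "10.2.0.0/16": 200, "172.16.0.0/12": 300}
nets = sorted(((ipaddress.ip_network(p), a) for p, a in origin_as.items()),
              key=lambda t: t[0].prefixlen, reverse=True)

def to_as(interface_ip):
    addr = ipaddress.ip_address(interface_ip)
    return next((a for net, a in nets if addr in net), None)

def as_links(router_path):
    ases = [to_as(ip) for ip in router_path]
    return {(a, b) for a, b in zip(ases, ases[1:]) if a and b and a != b}

path = ["10.0.1.1", "10.2.3.4", "172.16.5.6"]      # a traced router-level path
print(as_links(path))                              # {(100, 200), (200, 300)}
```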
Hierarchical structure of the logical Internet graph
Zihui Ge, Daniel R. Figueiredo, Sharad Jaiswal, et al.
The study of the Internet topology has recently received much attention from the research community, in particular the observation that the network graph has interesting properties, such as power laws, that might be explored in a myriad of ways. Most of the work in characterizing the Internet graph is based on the physical network graph, i.e., the connectivity graph. In this paper we investigate how logical relationships between nodes of the AS graph can be used to gain insight into its structure. We characterize the logical graph using various metrics and identify the presence of power laws in the number of customers that a provider has. Using these logical relationships we define a structural model of the AS graph. The model highlights the hierarchical nature of logical relationships and the preferential connection to larger providers. We also investigate the consistency of this model over time and observe interesting properties of the hierarchical structure.
Service Models, Pricing, and Accounting
Prioritized service system behavior
Huw Oliver
Internet technology is becoming the infrastructure of the future for any information that can be transmitted digitally, including voice, audio, video and data services of all kinds. The trend to integrate voice and data traffic observed in the Internet is expected to continue until the full integration of all media types is achieved. At the same time, it is obvious that the business model employed for current Internet usage is not sustainable for the creation of an infrastructure suitable to support a diverse and ever-increasing range of application services. Currently the Internet provides only a single class of best-effort service, and prices are mainly built on flat-fee, access-based schemes. We propose the use of pricing mechanisms for controlling demand for scarce resources, in order to improve the economic efficiency of the system. Standard results in economic theory suggest that increasing the value of the network services to the users is beneficial to both the users and the network operator, since the operator can charge them more and recover a larger share of their surplus. Using pricing mechanisms helps in that respect. When demand is high, prices are raised, which deters users with a low valuation for the service from using it. This leaves resources available for the users that value them more and are hence willing to pay more.
Internet pricing in light of the history of communications
Andrew M. Odlyzko
There are repeating patterns in the histories of communication technologies, including ordinary mail, the telegraph, the telephone, and the Internet. In particular, the typical story for each service is that quality rises, prices decrease, and usage increases to produce increased total revenues. At the same time, prices become simpler. The historical analogies of this paper suggest that the Internet will evolve in a similar way, towards simplicity. The schemes that aim to provide differentiated service levels and sophisticated pricing schemes are unlikely to be widely adopted.
Charging for differentiated Internet services
Burkhard Stiller, Jan Gerke, Hasan Hasan, et al.
Internet communications today play a major role in data transport within the local as well as the wide area. On one hand, since emerging networked applications require a variety of different communication services, the number of service classes offered by service providers has to be targeted at this demand. On the other hand, assuming that a set of different traffic classes is offered by service providers, the 'right' incentives have to be provided, ensuring that applications or their users select the most appropriate traffic class. Even though users are considered to be selective, they intend to choose the class which offers the best available Quality-of-Service (QoS), as long as the price to be paid does not exceed their willingness-to-pay. Therefore, charging for differentiated Internet services is important in a commercialized Internet. The current Internet does not cater for charging at all, since the technology required, the pricing models going beyond a flat-fee approach, and the appropriate efficiency in technical as well as economic terms are still missing on a global basis. This paper outlines the major problems for charging Internet services and briefly describes an important new solution for a pricing approach, which solves the Internet pricing feasibility problem. Based on these preconditions, an Internet Charging System (ICS) is introduced, focusing mainly on its leading characteristics and its generic approach, which can be instantiated for a given scenario.
Scalable Simulation
Towards scalable network emulation
Robert Simmonds, Brian W. Unger
A network emulator enables real hosts to interact via a virtual network. It combines a real-time network simulator with a mechanism to capture packets from and write packets to a real network. Packets generated by external hosts interact with synthetic traffic within the virtual network, providing a controlled environment for testing real Internet applications. This paper focuses on two aspects of network emulator scalability: the scalability of the virtual network run within the emulator, and the number of external hosts that can interact via the emulator concurrently. For the scalability of the virtual network, parallel discrete event simulation (PDES) techniques can be employed. Scaling the number of external hosts requires handling varying amounts of network I/O and mapping packets into the simulator efficiently. These issues are discussed in terms of work being done on the Internet Protocol Traffic and Network Emulator (IP-TNE) developed at the University of Calgary.
Hybrid discrete-continuous fluid-flow simulation
Benjamin Melamed, Shuo Pan, Yorai Wardi
The fluid-flow paradigm is an alternative approach to the traditional queueing paradigm, where workload movement in a network is described in terms of a fluid metaphor rather than the familiar customer metaphor. Stochastic fluid network models can considerably speed up the simulation of complex flow networks, such as high-speed packet-based telecommunications networks, where traditional queueing-based simulation routinely gives rise to enormous numbers of events. Furthermore, they are also amenable to fast, nonparametric, and unbiased estimation (unlike traditional queueing) of IPA (Infinitesimal Perturbation Analysis) gradients (derivatives of performance measures with respect to various parameters), which may be used to make decisions on network design, replenishment, management or control. This paper outlines the design of a fluid-oriented hybrid simulator admitting both packet flows and fluid flows. It focuses on the fluid processing design and on efficiency issues.
Overview of fluid-based quick simulation techniques for large packet-switched communication networks
Nenad Milidrag, George Kesidis, Mihail Devetsikiotis
We consider a large packet-switched communication network. Traffic in such networks is heavily aggregated, especially in the network core. Fluid traffic models have been used for this reason and because the individual packets are very small compared to the volume of aggregated traffic. Fluid models have also been considered for the network components themselves in order to explore the possibility of simulation speed-up. In the event-driven simulation of such a 'fluid' network by Kesidis et al., a 'ripple effect' was described to explain the substantial degradation in simulation speed-up as the network size grew, especially when work-conserving bandwidth schedulers were present. Thereafter, studies attempted to identify under what network dimensions and designs and under what traffic conditions the ripple effect is minimized. Hybrids of packet/fluid and event/time-driven simulation strategies were considered. This paper gives an overview of the fluid network modeling approach and surveys recent work on such hybrid approaches.
Service Differentiation in the Internet
Scalable resources management using shaped weighted round-robin scheduling
George Kesidis
A framework for resource management of MPLS label-switched routes (LSRs) over a diffserv Internet domain is described. A 'traffic management' problem is formulated in which the LSRs are not statically allocated an amount of bandwidth and buffers per node. A very limited amount of information about the traffic characteristics and QoS requirements of the traffic using the LSR is passed on to the diffserv nodes. The diffserv node can use this information and on-line queue measurements for dynamic resource (bandwidth and buffer) management. In this general context, a suitable 'shaped' weighted round-robin (SWRR) bandwidth scheduling mechanism is specified. Performance and implementation issues are explored for SWRR. Finally, how SWRR can approximate weighted fair queueing is discussed.
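A hedged sketch of the weighted round-robin principle underlying SWRR (plain WRR only; the shaping behavior described in the paper is not modeled): each class queue is visited in round-robin order and may send up to its weight's worth of packets per round.

```python
# Hedged sketch of plain weighted round-robin over per-class queues (the
# shaping behavior of SWRR described in the paper is not modeled here).
# In each round, class i may send up to weights[i] packets.
from collections import deque

def wrr(queues, weights, total):
    """Dequeue up to `total` packets; returns the transmission order."""
    sent = []
    while len(sent) < total and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q and len(sent) < total:
                    sent.append(q.popleft())
    return sent

queues = [deque(f"A{i}" for i in range(5)),      # class A, weight 2
          deque(f"B{i}" for i in range(5))]      # class B, weight 1
print(wrr(queues, weights=[2, 1], total=6))      # ['A0', 'A1', 'B0', 'A2', 'A3', 'B1']
```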
Provisioning power of optimal aggregate-flow scheduling
Huan Ren, Kihong Park
Providing scalable QoS-sensitive services to applications with varying degrees of elasticity using aggregate-flow scheduling is a challenging problem. Our previous work advanced a theoretical framework where an optimal differentiated services provisioning problem is formulated and solved to yield solutions for optimal per-hop behavior (PHB) and absolute QoS shaping using end-to-end control. In this paper, we build on our previous work and achieve the following innovations. First, we generalize the mean square error criterion employed in the previous work for measuring goodness of aggregate-flow scheduling by introducing a scaling function which is applied to the TOS field label value in the IP header. This allows the service provider to configure the PHB so as to export customized QoS differentiation (essential when shaping end-to-end absolute QoS over per-hop relative QoS) commensurate with the QoS profiles of its user base. Second, we put the theoretical framework and analysis "to the test" by presenting a comprehensive performance evaluation study of differentiated services as affected by optimal aggregate-flow per-hop control. In particular, we show that end-to-end QoS shaping power degrades gracefully as the number of service classes at routers in a multi-hop path is decreased, which admits heterogeneity in router configurations and incremental deployment. Collectively, our results demonstrate that relative QoS can be effectively harnessed to achieve absolute QoS in WAN environments if the PHB implements optimal aggregate-flow scheduling in the generalized framework.
Fundamental trade-offs in aggregate packet scheduling
Zhi-Li Zhang, Zhenhai Duan, Yiwei Thomas Hou
In this paper we investigate the fundamental trade-offs in aggregate packet scheduling for support of guaranteed delay service. In particular, we study the relationships between the worst-case edge-to-edge delay (i.e., the maximum delay experienced by any packet across a network domain), the (maximum) allowable link utilization level of a network, and the "sophistication/complexity" of the aggregate packet scheduling employed by a network. In our study, besides the simple FIFO packet scheduling algorithm, we consider two classes of aggregate packet scheduling algorithms: static earliest time first (SETF) and dynamic earliest time first (DETF). In both classes, additional control information is encoded in the packet header for scheduling purposes: in the SETF class, packets are stamped with their entry time at the network edge and are scheduled at a router in the order of their (network entry) time stamps; in the DETF class, the packet time stamps are modified at certain routers as packets traverse them. Through these two classes of aggregate packet scheduling, we show that, with additional time stamp control information encoded in the packet header for scheduling purposes, we can significantly increase the (maximum) allowable link utilization level of a network and at the same time reduce the worst-case edge-to-edge delay bound. These results illustrate the fundamental trade-offs in aggregate packet scheduling algorithms and shed light on their provisioning power in support of guaranteed delay service.
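A hedged sketch of SETF at a single router (illustrative only): packets carry their network-entry timestamp and are served in increasing order of that stamp, which a priority queue captures directly.

```python
# Hedged sketch of static earliest time first (SETF) at a single router:
# packets are stamped with their entry time at the network edge and served
# in increasing order of that stamp. FIFO would instead use local arrival
# order; DETF (not shown) re-stamps packets at certain routers.
import heapq

class SetfQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                      # tie-breaker for equal stamps

    def enqueue(self, entry_timestamp, packet):
        heapq.heappush(self._heap, (entry_timestamp, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = SetfQueue()
q.enqueue(12.7, "pkt-B")                   # arrived locally first...
q.enqueue(10.2, "pkt-A")                   # ...but entered the network earlier
print(q.dequeue(), q.dequeue())            # pkt-A pkt-B
```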
Avoiding congestion through dynamic load control
Vasil Hnatyshin, Adarshpal S. Sethi
The current best-effort approach to quality of service in the Internet can no longer satisfy a diverse variety of customer service requirements, which is why alternative strategies are needed. To solve this problem, a number of service differentiation models have been proposed. Unfortunately, these schemes often fail to provide proper service differentiation during periods of congestion. To deal with the issue of congestion, we introduce a new load control mechanism that eliminates congestion by dynamically adjusting traffic load at the network boundary based on feedback from the network core. We introduce four methods for calculating load distribution among the ingress routers and among different flows in each ingress router, and we evaluate these proposed methods through simulation.
Multicast Congestion Control
RACCOOM: a rate-based congestion control approach for multicasts
Yuan Gao, Jennifer Chao-Ju Hou, Sanjoy Paul
As continuous media multicast applications become widely deployed on the Internet, it becomes increasingly important to ensure these applications respond to network congestion in a TCP-friendly manner so as to coexist with TCP connections (which constitute the majority of Internet traffic). In this paper, we present a RAte-based Congestion COntrOl scheme for Multicasts, called RACCOOM, for continuous media multicast applications. In the absence of packet loss, a RACCOOM session keeps track of the congestion status of the on-tree path with the largest round trip time (called the target path) and adjusts its sending rate using a TCP Vegas-like method. RACCOOM is also equipped with mechanisms to deal with changes of the target path due to traffic change and member join/leave. Upon detection of packet loss anywhere in the multicast tree, RACCOOM responds by reducing its sending rate by half in a TCP Reno-like manner. The ACK aggregation method used in RACCOOM prevents ACK implosion and yet provides the sender with a simple but comprehensive view of congestion conditions in the multicast tree. To achieve TCP-friendliness, we have devised a simple method in RACCOOM to adjust the parameters of its rate adjustment method on-line. Alternatively, we can achieve (weighted) fairness among competing RACCOOM connections by tuning its parameters using results obtained from feedback control theory. We validate the design and demonstrate the features of RACCOOM in ns-2. The encouraging simulation results, coupled with the fact that all RACCOOM operations can be performed at end hosts, suggest that RACCOOM is a practical and effective congestion control solution for continuous media multicast applications.
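A hedged caricature of the rate adjustment described above (the backlog thresholds and step size are illustrative, not the paper's parameters): a Vegas-like backlog estimate on the target path nudges the rate up or down, and any reported loss halves it.

```python
# Hedged caricature of RACCOOM-style rate control: without loss, estimate the
# backlog queued on the target path (TCP Vegas style) and nudge the rate; on
# any reported loss in the tree, halve the rate (TCP Reno style). The
# thresholds alpha/beta and the step size are illustrative, not the paper's.

def adjust_rate(rate, base_rtt, current_rtt, loss_reported,
                alpha=1.0, beta=3.0, step=1.0):
    if loss_reported:
        return rate / 2.0                          # multiplicative decrease
    backlog = rate * (current_rtt - base_rtt)      # data estimated to be queued
    if backlog < alpha:
        return rate + step                         # target path looks underused
    if backlog > beta:
        return rate - step                         # queueing is building up
    return rate

rate = 10.0
for rtt, loss in [(0.10, False), (0.12, False), (0.20, False), (0.15, True)]:
    rate = adjust_rate(rate, base_rtt=0.10, current_rtt=rtt, loss_reported=loss)
    print(round(rate, 2))                          # 11.0, 12.0, 12.0, 6.0
```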
Large-scale reliable multicast of small messages
Vijay Shivshanker Gupta, Roy H. Campbell
Several applications could benefit from large-scale reliable multicast of small messages. For example, the problems of propagating invalidations for web-cache consistency, disseminating stock quotes to traders, and propagating information about web updates to search engines all require reliable multicast of small messages. While multicasting is itself an active area of research, and each of the problems mentioned has received attention in the literature, there are no guiding principles for facilitating multicast of small messages. First, we outline a global rendezvous architecture (GRA), an application-level architecture for large-scale reliable multicast of small messages. The main contributions of GRA are the global rendezvous point concept, the join protocol for clients, and the facility for trust across AS boundaries. Second, we describe QuickFlow, an architecture for web cache consistency that utilizes GRA, and show the benefits of QuickFlow compared to previous approaches. Third, we show how FreshFlow, an architecture proposed for search engine freshness, follows GRA. We also present new results for FreshFlow that demonstrate its scalability.
Bandwidth analysis of tree-based reliable multicast transport protocols
Christian Maihofer, Kurt Rothermel
In this paper, an analytical bandwidth evaluation of generic tree-based reliable multicast protocols is presented. Our analysis is based on a realistic system model, including data packet and control packet loss, asynchronous local clocks, and imperfect scope-limited local groups. Two of the four considered protocol classes use aggregated acknowledgments (AAKs). With AAKs they are able to provide reliability even in the case of node failures. All tree-based protocols provide good scalability for large receiver groups. For protocols with aggregated acknowledgments, the analysis shows only a small additional bandwidth overhead and therefore high throughput rates. Finally, we have analyzed the influence of the branching factor on a protocol's performance. Our results show that the optimal branching factor depends mainly on the probability of receiving messages from other local groups. If local groups are assigned to a separate multicast address, the optimal branching factor is 2. On the other hand, if TTL scoping is used as in RMTP or TMTP, and therefore the probability of receiving messages from other local groups is greater than zero, larger local groups provide better performance.
Rate-based reliable multicast congestion control
Melody Moh, Shuqin Zhang
This work address two important challenges in congestion control for reliable multicast: (1) Provably TCP-fairness and (2) Providing scalable feedback. We first present a simple, rate-based, flat-structured scheme, RMCC (Reliable Multicast Congestion Control), that does not rely on router assistance and is readily deployable at end hosts. It is enhanced from a generic scheme originally proposed by Whetten and Cohan. Detailed design of source and receiver mechanisms is described. Next we propose three scalable feedback schemes for RMCC over heterogeneous wired/wireless/mobile networks. Using sampling techniques, these feedback schemes adapt their feedback generation frequency according to network dynamics including congestion status, end host mobility, and packet losses. The RMCC algorithm and the three feedback methods are carefully evaluated through simulation experiments using various network configurations and traffic scenarios. We found that RMCC multicast flow co-exists well with TCP traffic and is TCP-fair. The three feedback schemes are effective in reducing acknowledgments, yet are able to promptly communicate significant network changes.