Proceedings Volume 4523

Internet Performance and Control of Network Systems II

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 24 July 2001
Contents: 9 Sessions, 35 Papers, 0 Presentations
Conference: ITCom 2001: International Symposium on the Convergence of IT and Communications 2001
Volume Number: 4523

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

Sessions:
  • MPLS
  • Differentiated Services
  • Control and Optimization of the Internet
  • Congestion Control and Scheduling
  • Queueing
  • Traffic Characterization
  • Internet QoS
  • Network Design
  • Posters - Tuesday
MPLS
Statistical-classification-based admission control
Timothy X. Brown
This paper introduces methods based on statistical classification that allow arbitrary measurement features to be incorporated into admission control decisions. Our results show that for high target packet loss rates, nearly any set of features can control the packet loss rate well, while for low target packet loss rates more features provide better control. The methods demonstrate relatively high accuracy in controlling the packet loss rate for both high and low target loss rates and for both memoryless and heavy-tailed traffic distributions. These results represent significant improvements on prior methods and suggest new directions for future research.
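As a rough illustration of how measured features can drive an admission decision (not the author's specific method), the following sketch trains a simple nearest-neighbour classifier on hypothetical historical observations of (measured features, loss-target-violated) and uses it to accept or reject a new flow; the feature choice and data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds measurement features observed while a
# candidate flow was admitted (mean aggregate rate, rate variance), and the label
# records whether the packet-loss target was subsequently violated.
features = rng.uniform(low=[0.2, 0.0], high=[1.0, 0.3], size=(500, 2))
violated = (features[:, 0] + 2.0 * features[:, 1] + rng.normal(0, 0.05, 500)) > 0.9

def admit(new_features, features, violated, k=15):
    """Admit the flow if most of the k most similar past situations kept the loss target."""
    dist = np.linalg.norm(features - new_features, axis=1)
    nearest = np.argsort(dist)[:k]
    return violated[nearest].mean() < 0.5

print(admit(np.array([0.5, 0.05]), features, violated))  # lightly loaded -> likely admit
print(admit(np.array([0.9, 0.25]), features, violated))  # heavily loaded -> likely reject
```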
Path selection and bandwidth allocation in MPLS networks: a nonlinear programming approach
J. E. Burns, Teunis J. Ott, Johan M. de Kock, et al.
Multi-protocol Label Switching extends the IPv4 destination-based routing protocols to provide new and scalable routing capabilities in connectionless networks using relatively simple packet forwarding mechanisms. MPLS networks carry traffic on virtual connections called label switched paths. This paper considers path selection and bandwidth allocation in MPLS networks in order to optimize the network quality of service. The optimization is based upon the minimization of a non-linear objective function which under light load simplifies to OSPF routing with link metrics equal to the link propagation delays. The behavior under heavy load depends on the choice of certain parameters: It can essentially be made to minimize maximal expected utilization, or to maximize minimal expected weighted slacks (both over all links). Under certain circumstances it can be made to minimize the probability that a link has an instantaneous offered load larger than its transmission capacity. We present a model of an MPLS network and an algorithm to find and capacitate optimal LSPs. The algorithm is an improvement of the well-known flow deviation non-linear programming method. The algorithm is applied to compute optimal LSPs for several test networks carrying a single traffic class.
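To make the flow-deviation idea concrete, here is a toy sketch (my own illustration, not the authors' improved algorithm) that splits one traffic demand across two parallel links so as to minimize a sum of M/M/1-style link delays; at each iteration all flow is "deviated" toward the link with the smaller marginal delay, with a diminishing step size.

```python
# Toy flow-deviation iteration: one demand of 6 units over two parallel links
# with capacities 10 and 8; the objective is the sum of f/(c - f) link delays.
caps = [10.0, 8.0]
demand = 6.0
flows = [3.0, 3.0]  # initial feasible split

def marginal_delay(f, c):
    # d/df of f/(c - f) = c/(c - f)^2
    return c / (c - f) ** 2

for it in range(1, 200):
    # Route the whole demand on the link with the smallest marginal delay...
    best = min(range(2), key=lambda i: marginal_delay(flows[i], caps[i]))
    target = [demand if i == best else 0.0 for i in range(2)]
    # ...then move only part of the way there (classic diminishing step size).
    step = 1.0 / (it + 1)
    flows = [(1 - step) * flows[i] + step * target[i] for i in range(2)]

print("approximately optimal split:", [round(f, 3) for f in flows])
```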
Heuristics for dimensioning large-scale MPLS networks
Carlos Miguel Borges, Amaro Fernandes de Sousa, Rui Jorge Morais Tomaz Valadas
MultiProtocol Label Switching (MPLS) technology allows the support of multiple services with different Quality of Service (QoS) requirements in classical IP networks. In an MPLS domain, packet flows belonging to a particular class are classified into the same Forwarding Equivalence Class (FEC). Based on different FECs, each service can be set up in the network through logical networks. Each logical network is a set of Label Switched Paths (LSPs), one for each service traffic trunk. The network-dimensioning problem is formulated as the determination of routes for all LSPs to achieve the least-cost physical network. To solve this problem, some widely known heuristics are used, and two enhancement algorithms are proposed that allow for significant gains when compared with the basic heuristics. The heuristics tested include a genetic algorithm, a greedy-based heuristic and a Lagrangean relaxation-based heuristic. The enhancements are proposed for application to the greedy-based heuristic and to the Lagrangean heuristic. The results show that the enhanced Lagrangean heuristic is the best overall technique for the case studies presented. This technique yields significant average gains when compared to the basic Lagrangean heuristic.
Monitoring information model for traffic engineering over MPLS VPNs
Omar Cherkaoui, Alain Sarrazin, Guy Francoeur, et al.
Monitoring is the part of Traffic Engineering (TE) that aims at optimizing the use of network resources; it both informs the provider and proves to the customer that the offered service respects the agreed SLAs. In this paper, we present a monitoring framework for MPLS-VPN services. We first briefly review MPLS, VPN, NBVPN and Constraint-Based Routing to provide background for the discussion of traffic engineering. We then discuss the general issues surrounding the design of an MPLS VPN from the point of view of TE and go on to present a management framework that extends the DEN information model specific to MPLS VPNs. In this framework, we add policy actions that react dynamically to abnormal results by changing the sampling frequency.
Differentiated Services
Not all bits have equal value: investigating users' network QoS requirements
Anna Bouch, M. Angela Sasse
The number of Internet users is expected to triple between 1998 and 2002 largely because of new applications (such as videoconferencing) and new services (such as e-commerce). This shift in usage imposes higher Quality of Service (QoS) requirements at different levels of granularity. It also means that the traditional Internet way of managing quality (best-effort) has to be replaced by a more service-oriented approach. The aim of this paper is to investigate end-users' cognitive and perceptive QoS requirements. We present empirical results on user QoS preferences and QoS graduations. Guidelines for translating these results into metrics that can be used to guide resource allocation mechanisms are discussed.
Impact of traffic handling on Internet capacity
Towela Nyirenda-Jere, Victor S. Frost, Nail Akar
This paper describes the impact of traffic handling mechanisms on network capacity for support of Quality of Service (QoS) in the Internet. The emergence of applications with diverse throughput, loss and delay requirements requires a network that is capable of supporting different levels of service as opposed to the single best-effort service that was the foundation of the Internet. As a result, the Integrated Services (Intserv) and Differentiated Services (Diffserv) models have been proposed. The Intserv model requires resource reservation on a per-flow basis. The Diffserv model requires no explicit reservation of bandwidth for individual flows and instead relies on a set of pre-defined service types to provide QoS to applications. Flows are grouped into aggregates having the same QoS requirements, and the aggregates are handled by the network as a single entity with no flow differentiation. We refer to this type of handling as semi-aggregate. The Best-Effort model does not perform any differentiation and handles all traffic as a single aggregate. Each of these traffic handling models can be used to meet service guarantees of different traffic types, the major difference being in the quantity of network resources that must be provided in each case. In this paper, we consider the issue of finding the cross-over point at which the three approaches of aggregate traffic management, semi-aggregate traffic management and per-flow traffic management become equivalent. Specifically, we determine the network capacity required to achieve equivalent levels of performance under these three traffic management approaches. We use maximum end-to-end delay as the QoS metric and obtain analytic expressions for network capacity based on deterministic network analysis. One key result of this work is that, on the basis of capacity requirements, there is no significant difference between semi-aggregate traffic handling and per-flow traffic handling. However, Best-Effort handling requires capacity that is several orders of magnitude greater than per-flow handling.
Traffic engineering for Internet applications
Ajit K. Jena, Adrian Popescu
The focus of the paper is on resource engineering for supporting Service Level Agreements (SLAs) in IP networks. SLAs at both the link level and the application level are considered. Using an object-oriented simulation model, a case study is presented for client-server interactions generated by mixed traffic conditions in a Frame Relay (FR) WAN. Performance issues of Short Range Dependence and Long Range Dependence traffic under different resource control regimes are compared. The results show that a major portion of the end-to-end delay comes from the queueing delay at the WAN ingress points, which is due to the significant bandwidth differences that may exist between LAN and WAN link layers. The results also highlight the role TCP window size and FR PVC control mechanisms play in the provision of delay performance for Internet services.
Measurement-based traffic classification in differentiated services
Marko Luoma, Mika Ilvesmaeki
The Internet is moving towards an era of Quality of Service (QoS) networking, largely through the deployment of the Differentiated Services (DiffServ) architecture. DiffServ offers low-overhead tools to implement class-based differentiation of traffic. How the differentiation is decided, however, is left open, to be settled between the service provider and the customer. In our view, most customers are not prepared to specify the quality or class their traffic should receive, which leaves room for the provider to offer a service that performs this classification on the customer's behalf. The service provider then faces three problems that must be solved concurrently: (1) deciding the proper forwarding class for an application's data stream; (2) separating application flows from the packet stream; and (3) constructing proper forwarding treatments. If successful, the operator gains direct control over resource utilization within the different classes and therefore over the service level provided to the customer. To support such a service, tools for analyzing network traffic and forming suitable traffic groups are required. We present algorithms and methodologies that differentiate traffic based on the activity and traffic characteristics of applications, determined from flow-level analysis of packet lengths and inter-sending times.
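As a small, hypothetical illustration of grouping flows by their measured characteristics (not the authors' algorithm), the sketch below clusters flows into two classes with plain k-means on two assumed features: mean packet length and mean packet inter-sending time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-flow features: (mean packet length in bytes, mean inter-sending time in s).
bulk = np.column_stack([rng.normal(1200, 100, 60), rng.normal(0.010, 0.003, 60)])
interactive = np.column_stack([rng.normal(80, 20, 60), rng.normal(0.200, 0.050, 60)])
flows = np.vstack([bulk, interactive])

def kmeans(x, k=2, iters=50):
    # Normalise the features so both dimensions count equally, then run plain k-means.
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    centers = z[rng.choice(len(z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([z[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(flows)
print("class sizes:", np.bincount(labels))  # should roughly recover the two groups
```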
Control and Optimization of the Internet
Accuracy of TCP performance models
Hans Peter Schwefel, Manfred Jobmann, Daniel Hoellisch, et al.
Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different `analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet-based extension of (II). Results for all three models are computed for a scenario of $N$ identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the detailed simulation model: the approach of Models (I) and (II) only indirectly considers queueing in bottleneck routers, and in certain scenarios those models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its predictions would also be insensitive to so-called long-range dependent properties in the traffic that are caused by heavy-tailed connection size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely that the goodput can be higher for small buffer sizes.
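For reference, a commonly cited form of the throughput approximation of Padhye et al. (Model I above) can be evaluated as in the sketch below; p is the loss-event probability, b the number of packets acknowledged per ACK, T0 the retransmission timeout and Wmax the maximum window. This is my transcription of the widely cited approximation with illustrative parameter values, not a fragment of the paper.

```python
from math import sqrt

def padhye_throughput(p, rtt, t0, b=2, w_max=65535 / 1460):
    """Approximate steady-state TCP throughput in packets/s (Padhye et al. style formula)."""
    if p <= 0:
        return w_max / rtt  # no loss: limited by the maximum window
    denom = rtt * sqrt(2 * b * p / 3) + t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return min(w_max / rtt, 1.0 / denom)

# Example: 2% loss events, 100 ms round-trip time, 1 s retransmission timeout.
print(round(padhye_throughput(p=0.02, rtt=0.1, t0=1.0), 1), "packets/s")
```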
Scalable low-overhead rate control algorithm for multirate multicast sessions
Koushik Kar, Saswati S. Sarkar, Leandros Tassiulas
In multirate multicasting, different users (receivers) within the same multicast group could receive service at different rates, depending on user requirements and network congestion level. Compared to unirate multicasting, this provides more flexibility to the user, and allows more efficient usage of network resources. In this paper, we address the rate control problem for multirate multicast sessions, with the objective of maximizing the total receiver utility. This aggregate utility maximization problem not only takes into account the heterogeneity in user requirements, but also provides a unified framework for diverse fairness objectives. We propose an algorithm for this problem and show, through analysis and simulation, that it converges to the optimal rates. In spite of the non-separability of the problem, the solution that we develop is completely decentralized, scalable and does not require the network to know the receiver utilities. The algorithm requires very simple computations both for the user and the network, and also has very low overhead of network congestion feedback. Moreover, the algorithm does not require the network links to maintain per-flow state, and is suitable for deployment in the current internet.
Flow control in networks with multiple paths
Weihua Wang, M. Palaniswami, Steven H. Low
We propose two flow control algorithms for networks with multiple paths between each source-destination pair. Both are distributed algorithms over the network to maximize aggregate source utility. Algorithm 1 is a first order Lagrangian method applied to a modified objective function that has the same optimal solution as the original objective function but has a better convergence property. Algorithm 2 is based on the idea that, at optimality, only paths with the minimum price carry positive flows, and naturally decomposes the overall decision into flow control (determines total transmission rate based on minimum path price) and routing (determines how to split the flow among available paths). Both algorithms can be implemented as simply a source-based mechanism in which no link algorithm nor feedback is needed. We present numerical examples to illustrate their behavior.
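The minimum-price decomposition described for Algorithm 2 can be pictured with a tiny dual-gradient sketch (assumptions mine: a log utility, two parallel single-link paths, and illustrative step sizes): links adjust their prices toward matching load to capacity, and the source chooses its total rate from U'(x) = minimum path price and sends everything along the cheapest path. The instantaneous routing is bang-bang, so the time-averaged flows are reported as an approximation of the optimal split.

```python
# One source, two parallel single-link paths with capacities 4 and 6 (units arbitrary).
caps = [4.0, 6.0]
prices = [1.0, 1.0]   # per-link (= per-path) prices
gamma = 0.02          # dual (price) step size
avg_flows = [0.0, 0.0]

for _ in range(20000):
    # Flow control: with U(x) = log(x), U'(x) = 1/x, so total rate x = 1 / (minimum path price).
    min_price = min(prices)
    rate = 1.0 / min_price
    # Routing: put all flow on (one of) the minimum-price path(s).
    cheapest = prices.index(min_price)
    flows = [rate if i == cheapest else 0.0 for i in range(2)]
    # Price update: each link raises its price when overloaded and lowers it when underused.
    prices = [max(1e-6, prices[i] + gamma * (flows[i] - caps[i])) for i in range(2)]
    # Running average of the bang-bang routing decisions approximates the optimal split.
    avg_flows = [0.999 * avg_flows[i] + 0.001 * flows[i] for i in range(2)]

print("time-averaged path flows:", [round(f, 2) for f in avg_flows])  # roughly [4, 6]
print("link prices:", [round(p, 3) for p in prices])                  # roughly equal, about 0.1
```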
Integration of fluid-based analytical model with packet-level simulation for analysis of computer networks
Tak Kin Yung, Jay Martin, Mineo Takai, et al.
Fluid flow analytical models have been shown to be able to capture the dynamics of TCP flows and can scale well to solving for networks with a large number of flows. However, accurate closed form solutions are not yet available for wireless networks. Traditional packet-level discrete event simulations provide accurate predictions of network behavior, but their solution time can increase significantly with the number of flows being simulated. Integration of fluid flow models with packet-level simulators appears to offer significant benefits. In this paper, we describe an approach to integrate fluid flow models into QualNet, a scalable packet-level simulator. We validate the mixed model with detailed packet-level simulations for the scenarios considered in this paper. The execution time of the mixed model is significantly impacted by the frequency with which the analytical model must be solved in response to changes in the data rate at the interface of the packet-level and analytical models. We present a time averaging approach to mitigate this impact and present the results of the resulting tradeoff between prediction accuracy and model execution time.
Congestion Control and Scheduling
Impact of polarized traffic on scheduling algorithms for high-speed optical switches
John Blanton, Hal Badt, Gerard Damm, et al.
The problem of maintaining high throughput of a slotted switch matrix while observing data transit time limits involves balancing two contradictory requirements. It is desired to transmit only full packets through the matrix whenever possible, even when traffic is unevenly distributed among the input queues. However, to prevent loss of data due to timeout it will be necessary to transmit some incomplete packets from queues that have light traffic. Our scheme for scheduling the switch matrix takes into account the conflicting requirements of data timeout and switch matrix efficiency. Using only elementary queue state information (data content and age), this scheme works by presenting ideal service requests to the central scheduler. The scheduler does not incorporate any priority scheme and can use any of a number of available scheduling algorithms that provide efficient matrix operation and fairness of service for the input data queues. Simulations of a switch system using our scheme demonstrate that polarized (unevenly distributed) traffic can be handled with a loss of only a few percent of the switch matrix capacity.
Visual traffic monitoring and evaluation
As computer networks and associated infrastructures become ever more important to the nation's commerce and communication, it is becoming exceedingly critical that these networks be managed effectively. Current techniques, which rely on manual or log based analysis, are too slow and ineffective to handle the explosive growth of network infrastructures. We have developed visualization techniques geared towards aiding the analysis of network based infrastructures such that network managers can quickly identify usage characteristics of the network and reallocate bandwidth or restructure portions of the network to better improve connectivity. In this fashion, bottlenecks can be quickly identified along with their cause so the issues can be remedied expeditiously. The techniques can also be used for long range infrastructure planning and network misuse detection.
Control theoretical analysis of a window-based flow control mechanism for TCP connections with different propagation delays
Hiroyuki Ohsaki, Keiichi Takagaki, Masayuki Murata
A feedback-based congestion control mechanism is essential to realize an efficient best-effort service in high-speed networks. A window-based flow control mechanism called TCP (Transmission Control Protocol), which is a sort of feedback-based congestion control mechanism, has been widely used in the current Internet. The recently proposed TCP Vegas is another version of the TCP mechanism, and can achieve better performance than the current TCP Reno. In our previous work, we analyzed the stability of a window-based flow control mechanism based on TCP Vegas in both homogeneous and heterogeneous networks. In this paper, using our analytic results, we investigate how the dynamics of the window-based flow control mechanism based on TCP Vegas are affected by differences in the propagation delays of TCP connections. We also investigate the effect of various system parameters on the transient performance of the window-based flow control mechanism.
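For readers unfamiliar with TCP Vegas, its window adjustment (on which the analyzed mechanism is based) compares the expected and actual sending rates once per round-trip time; the sketch below shows the standard rule with illustrative alpha/beta values and a hypothetical RTT trace, not the paper's analytical model.

```python
def vegas_update(cwnd, base_rtt, observed_rtt, alpha=1.0, beta=3.0):
    """One per-RTT TCP Vegas congestion-avoidance step (window in packets, RTTs in seconds)."""
    expected = cwnd / base_rtt             # rate if there were no queueing
    actual = cwnd / observed_rtt           # rate actually achieved this RTT
    diff = (expected - actual) * base_rtt  # estimated backlog in the network, in packets
    if diff < alpha:
        return cwnd + 1.0   # too little data in flight: grow linearly
    if diff > beta:
        return cwnd - 1.0   # too much queueing building up: back off
    return cwnd             # within the target band: hold the window

cwnd = 10.0
for rtt in [0.100, 0.100, 0.105, 0.120, 0.140, 0.160]:  # RTT inflating as queues build
    cwnd = vegas_update(cwnd, base_rtt=0.100, observed_rtt=rtt)
    print(round(cwnd, 1))
```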
Queueing
Analysis of combined voice/data/video operation in cable and DSL access networks: graceful degradation under overload
We develop exact models to analyze the performance of several types and grades of data, voice and video sessions over a cable- or DSL-based access network. Each session is characterized by a minimum guaranteed data-rate and a maximum allowed data-rate. Sessions would normally transmit at the maximum rate, but under congestion some or all sessions would see graceful rate degradation. For each class, the blocking probability and the average data-rate attained by a session are computed. In addition, a system-wide probability of rate degradation is also computed. A bufferless model with product-form structure and insensitivity to the session holding time distribution except through its mean (heavy-tailed distributions are allowed), and a buffered model with standard Markov chain structure are developed. The models are also generalized to allow rate degradation of real-time streaming traffic (e.g., switching from G.711 to G.728 encoding or turning on silence suppression) whenever the total bandwidth usage exceeds a certain threshold. Whenever a model is sensitive to the session holding time distribution, that sensitivity is studied through simulations.
Search process evaluation for a hierarchical menu system by Markov chains
Hideaki Takagi, Muneo Kitajima, Tetsuo Yamamoto, et al.
When computers are used to execute tasks, it is often necessary for the user to locate a target item in a menu or a list. For example, users of word processors and spreadsheet applications select appropriate commands in a hierarchical menu to display dialog boxes and edit file or table attributes. To locate the desired information on the World Wide Web, users select the most appropriate candidate out of those presented by a search engine, and proceed through a series of hyperlinks that appear to be related to the task. This paper applies a cognitive model of the user's item selection process to the task of target search in a hierarchical menu system that contains one or more of the following four operations: (1) item selection on the basis of similarity to the task, (2) consideration in various ways of the selection history when making the next selection, (3) backtracking when an appropriate item is not present among those selectable at a given point in time, and (4) abandoning the task unachieved. We model this selection process with Markov chains. We calculate the probability that task goals are achieved and the average number of selections to make until the task goals are achieved. Finally we use these results to propose a method of evaluating the structures of hierarchical menus and links on a website.
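The kinds of quantities computed in the paper can be illustrated with the standard absorbing-Markov-chain machinery: with Q the transition probabilities among transient (menu) states and R the transitions into the absorbing states "goal reached" and "task abandoned", the fundamental matrix N = (I - Q)^(-1) gives expected visit counts, N summed over columns gives the expected number of selections, and N·R gives the absorption probabilities. The numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

# Transient states: 0 = top menu, 1 = submenu A, 2 = submenu B.
# Absorbing states: goal reached, task abandoned.
Q = np.array([[0.00, 0.50, 0.30],    # from top menu: pick A, pick B
              [0.20, 0.00, 0.10],    # from A: backtrack to top, jump to B
              [0.25, 0.05, 0.00]])   # from B: backtrack to top, jump to A
R = np.array([[0.05, 0.15],          # from each transient state: P(goal), P(abandon)
              [0.60, 0.10],
              [0.50, 0.20]])

N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits to each state
expected_selections = N.sum(axis=1)  # expected number of selections before absorption
absorption = N @ R                   # probabilities of ending in goal / abandon

print("expected selections starting from the top menu:", round(expected_selections[0], 2))
print("P(goal | start at top menu):", round(absorption[0, 0], 3))
```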
Practical QoS design for packet/cell switching systems
Haruo Akimaru, Tohru Okuyama, Tomoaki Oida
Practical QoS design methods are presented for packet/cell switching in both connectionless and connection-oriented modes. Identification of bursty traffic by the CR-H2 (correlated second-order hyper-exponential) distribution, which is convenient for practical measurements and is equivalent to the two-phase MMPP (Markov modulated Poisson process), is proposed. Formulas including the third moment of the packet delay are presented, from which the mean and peak packet delays are calculated. A simplified QoS design for COPS (connection-oriented packet switching) is proposed for ATM (asynchronous transfer mode). Numerical examples and simulation results are shown to demonstrate the proposed methods.
Wireless cellular networks with Pareto-distributed call holding times
Ramon M. Rodriguez-Dagnino, Hideaki Takagi
Nowadays, there is a growing interest in providing Internet access to mobile users. For instance, NTT DoCoMo in Japan deploys a major mobile phone network that offers an Internet service, named 'i-mode', to more than 17 million subscribers. Internet traffic measurements show that session durations, i.e., Call Holding Times (CHT), have heavy-tailed probability distributions, which tells us that they depart significantly from the traffic statistics of traditional voice services. In this environment, it is particularly important for a network designer to know the number of handovers during a call in order to dimension the virtual circuits of a wireless cell appropriately. The handover traffic has a direct impact on the Quality of Service (QoS); e.g., the service disruption due to handover failure may significantly degrade the specified QoS of time-constrained services. In this paper, we first study the random behavior of the number of handovers during a call, where we assume that the CHT are Pareto distributed (a heavy-tailed distribution) and the Cell Residence Times (CRT) are exponentially distributed. Our approach is based on renewal theory arguments. We present closed-form formulae for the probability mass function (pmf) of the number of handovers during a Pareto distributed CHT, and obtain the probability of call completion as well as handover rates. Most of the formulae are expressed in terms of Whittaker's function. We compare the Pareto case with the cases of k-Erlang and hyperexponential distributions for the CHT.
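A quick Monte Carlo check of these quantities is easy to set up (a sketch under my own parameter assumptions, not the paper's closed-form analysis): draw Pareto call holding times and exponential cell residence times, count boundary crossings during each call, and estimate the handover pmf together with a simple call-completion probability given an assumed handover failure probability.

```python
import numpy as np

rng = np.random.default_rng(2)
n_calls = 200_000

# Pareto call holding times with shape a > 1 and unit mean: scale = (a - 1) / a.
a = 1.8
cht = (rng.pareto(a, n_calls) + 1.0) * (a - 1.0) / a   # heavy-tailed, mean 1

# Exponential cell residence times with mean 0.5: because the exponential is memoryless,
# the number of handovers given CHT = t is Poisson(t / 0.5).
mean_crt = 0.5
handovers = rng.poisson(cht / mean_crt)

pmf = np.bincount(handovers, minlength=6)[:6] / n_calls
print("P(k handovers), k=0..5:", np.round(pmf, 3))

# If each handover fails independently with probability pf, a call completes
# only if every one of its handovers succeeds.
pf = 0.02
print("P(call completes):", round(np.mean((1 - pf) ** handovers), 4))
```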
Traffic Characterization
Fitting World Wide Web request traces with the EM-algorithm
Rachid El Abdouni Khayari, Ramin Sadre, Boudewijn R. Haverkort
In recent years, several studies have shown that network traffic exhibits the property of self-similarity. Traditional (Poissonian) modelling approaches have been shown not to be able to describe this property and generally lead to the underestimation of interesting performance measures. Crovella and Bestavros have shown that network traffic that is due to World Wide Web transfers shows characteristics of self-similarity, and they argue that this can be explained by the heavy-tailedness of many of the involved distributions. Considering these facts, developing methods which are able to handle self-similarity and heavy-tailedness is of great importance for network capacity planning purposes. In this paper we discuss two methods to fit hyper-exponential distributions to data sets that exhibit heavy tails. One method is taken from the literature and shown to fall short. The other, a new method, is shown to perform well in a number of case studies.
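For concreteness, the core of an EM fit of a hyper-exponential (mixture-of-exponentials) distribution to heavy-tailed data looks like the sketch below; it is a generic EM implementation under my own initialization choices, not the specific algorithms evaluated in the paper.

```python
import numpy as np

def fit_hyperexponential(x, k=3, iters=200, seed=0):
    """EM fit of a k-phase hyper-exponential: mixture weights w[i] and rates lam[i]."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    lam = 1.0 / (np.mean(x) * rng.uniform(0.1, 10.0, k))  # spread out the initial rates
    for _ in range(iters):
        # E-step: responsibility of each phase for each observation.
        dens = w[None, :] * lam[None, :] * np.exp(-np.outer(x, lam))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and rates from the responsibilities.
        w = resp.mean(axis=0)
        lam = resp.sum(axis=0) / (resp * x[:, None]).sum(axis=0)
    return w, lam

# Example: fit to synthetic data drawn from a 2-phase hyper-exponential.
rng = np.random.default_rng(1)
data = np.where(rng.random(50_000) < 0.9,
                rng.exponential(1.0, 50_000), rng.exponential(50.0, 50_000))
w, lam = fit_hyperexponential(data, k=2)
print("weights:", np.round(w, 3), "phase means:", np.round(1.0 / lam, 2))
```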
Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence
Paulo Jorge Ferreira Salvador, Rui Jorge Morais Tomaz Valadas
This paper proposes a novel fitting procedure for Markov Modulated Poisson Processes (MMPPs), consisting of the superposition of N 2-MMPPs, that is capable of capturing the long-range characteristics of the traffic. The procedure matches both the autocovariance and marginal distribution functions of the rate process. We start by matching each 2-MMPP to a different component of the autocovariance function. We then map the parameters of the model with N individual 2-MMPPs (termed superposed MMPP) to the parameters of the equivalent MMPP with 2N states that results from the superposition of the N individual 2-MMPPs (termed generic MMPP). Finally, the parameters of the generic MMPP are fitted to the marginal distribution, subject to the constraints imposed by the autocovariance matching. Specifically, the matching of the distribution will be restricted by the fact that it may not be possible to decompose a generic MMPP back into individual 2-MMPPs. Overall, our procedure is motivated by the fact that direct relationships can be established between the autocovariance and the parameters of the superposed MMPP and between the marginal distribution and the parameters of the generic MMPP. We apply the fitting procedure to traffic traces exhibiting LRD including (i) IP traffic measured at our institution and (ii) IP traffic traces available in the Internet such as the well known, publicly available, Bellcore traces. The selected traces are representative of a wide range of services/protocols used in the Internet. We assess the fitting procedure by comparing the measured and fitted traces (traces generated from the fitted models) in terms of (i) Hurst parameter; (ii) degree of approximation between the autocovariance and marginal distribution curves; (iii) range of time scales where LRD is observed using a wavelet based estimator and (iv) packet loss ratio suffered in a single buffer for different values of the buffer capacity. Results are very clear in showing that MMPPs, when used in conjunction with the proposed fitting procedure, can be used to model efficiently Internet traffic in the relevant time scales, even when exhibiting LRD behavior.
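As background for the autocovariance-matching step, recall that for a superposition of independent 2-MMPPs the autocovariance of the modulating arrival-rate process is simply the sum of the per-component terms, each of which decays exponentially; this is what makes matching "each 2-MMPP to a different component of the autocovariance" possible. The sketch below evaluates this sum (my own restatement of standard 2-MMPP algebra, with hypothetical parameters), where each component is described by its two Poisson rates and the two modulating transition rates.

```python
import numpy as np

def rate_autocovariance(tau, components):
    """Autocovariance of the modulating rate process of a superposition of independent
    2-MMPPs; each component is (lambda1, lambda2, r12, r21)."""
    tau = np.asarray(tau, dtype=float)
    total = np.zeros_like(tau)
    for lam1, lam2, r12, r21 in components:
        p1 = r21 / (r12 + r21)   # stationary probability of state 1
        p2 = r12 / (r12 + r21)   # stationary probability of state 2
        total += p1 * p2 * (lam1 - lam2) ** 2 * np.exp(-(r12 + r21) * tau)
    return total

# Hypothetical components, each covering a different time scale (decay rates 10, 1 and 0.1).
components = [(200.0, 50.0, 6.0, 4.0), (120.0, 40.0, 0.6, 0.4), (90.0, 30.0, 0.06, 0.04)]
tau = np.logspace(-2, 2, 5)
print(np.round(rate_autocovariance(tau, components), 2))
```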
Capabilities of application-level traffic measurements to differentiate and classify Internet traffic
Mika Ilvesmaeki, Marko Luoma
The use of network-based traffic classification to differentiate aggregate traffic has been introduced with the development of new Internet service architectures, especially Differentiated Services. We present measurements and analysis of various packet and flow statistics to aid in classifying or differentiating traffic flows according to the nature of the application. Our study of traffic classification methods includes the background analysis of traffic traces to detect applications of varying nature by measuring packet inter-arrival times, packet lengths, flow inter-arrival times, and packet and flow shares of total traffic. The most promising results with a single statistic are achieved when classifying traffic based on packet inter-arrival patterns: the inter-arrival time distributions of packets seem to be able to divide the traffic into two distinguishable classes. The division into three or more classes, however, remains somewhat ambiguous and needs further research. The results also indicate that no single statistic is able to classify application flows with reasonable certainty, but that this might be achieved when several statistics and their analysis results are combined. A good way to improve the classification would be to increase its dimensionality. For instance, combining the classification results of packet IAT and packet length distributions would almost certainly lead to the detection of applications of a different nature.
Analyzing the relevant time scales in a network of queues
Antonio Manuel Duarte Nogueira, Rui Jorge Morais Tomaz Valadas
Network traffic processes can exhibit properties of self-similarity and long-range dependence, i.e., correlations over a wide range of time scales. However, as already shown by several authors for the case of a single queue, the second-order behavior at time scales beyond the so-called correlation horizon or critical time scale does not significantly affect network performance. In this work, we extend previous studies to the case of a network with two queuing stages, using discrete event simulation. Results show that the second stage causes a decrease in the correlation horizon, meaning that the range of time scales that need to be considered for accurate network performance evaluation is lower than predicted by a single-stage model. We also used simulation to evaluate the single queue model. In this case, the estimated correlation horizon values are compared with those predicted by a formula derived by Grossglauser and Bolot, which presumes approximating the input data by a traffic model that allows the autocorrelation function to be controlled independently of the first-order statistics. Results indicate that although the correlation horizon increases linearly with the buffer size in both methods, the simulation results predict a lower rate of increase.
Internet QoS
ABB: active bandwidth broker
Kason Wong, Eddie Law
In this paper, we discuss a novel design for policy-based management of the Internet. This design deploys the concept of active networking. As opposed to the traditional network design, an active network empowers network nodes with the ability to manipulate data and program code in packets, and to configure network properties according to the needs of different applications. Policy-based management can control network routers in order to realize end-to-end Quality of Service (QoS), such as differentiated and integrated services, across the Internet. At the moment, the Internet Engineering Task Force (IETF) has defined a framework for policy-based management. It employs a simple client/server model that uses the Common Open Policy Service (COPS) protocol to facilitate policy management and control. Our Active Bandwidth Broker (ABB) design is an active application. Our goals are to distribute the centralized workload of policy-based management over multiple active nodes in active networks, to introduce mobility of the bandwidth brokers, and to allow load sharing within policy-based management. This results in network-wide intelligent, highly available, and consistent QoS control that protects the performance of voice, video and Internet business applications while reducing costs for growing networks.
Satisfying customer bandwidth demand in IP data networks
Yaakov Kogan, Haluk Kosal, Gangaji Maguluri, et al.
We introduce the notion of customer bandwidth fulfillment in IP data networks and provide a quantitative characterization of the fulfillment using measurements of the router uplink (link connecting a router to the backbone) utilization. The threshold for the uplink utilization is calculated for a given probability of customer fulfillment based on the normal approximation. We use three different stochastic models to prove the normal approximation for the distribution of the uplink utilization. The convergence to the Gaussian diffusion process is proved in the framework of the nonstationary exponential Benes buffer model. In a special case of an alternating renewal process, we show that the fulfillment can be evaluated based on measurements of the mean uplink utilization. We also prove that the distribution for the number of busy links in a large generalized Engset model is asymptotically normal, which provides another justification of the normal approximation for the uplink utilization. We analyze 5-minute measurements of the uplink utilization and show that their empirical distribution is close to normal.
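To illustrate one way such a threshold can be derived from the normal approximation (my reading, with hypothetical numbers): if the 5-minute uplink utilization is approximately Gaussian with mean m and standard deviation s, the level not exceeded with probability p is m + s * inverse-normal-CDF(p).

```python
from statistics import NormalDist

def utilization_threshold(mean_util, std_util, fulfillment_prob):
    """Utilization level not exceeded with the given probability, under a normal approximation."""
    return mean_util + std_util * NormalDist().inv_cdf(fulfillment_prob)

# Hypothetical 5-minute measurements: mean 55% utilization, standard deviation 8%.
for p in (0.95, 0.99, 0.999):
    print(p, "->", round(utilization_threshold(0.55, 0.08, p), 3))
```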
Charging multidimensional QoS with the cumulus pricing scheme
Peter Reichl, Burkhard Stiller, Thomas Ziegler
The recently established Cumulus Pricing Scheme (CPS) has turned out to be a novel approach for efficiently charging differentiated Internet services, based on integrating different time-scales into one edge-pricing mechanism. Depending on an initial specification of expected resource requirements, customer and provider negotiate a contract fixing a flat-rate charge for QoS delivery. As soon as the scheme has started, the customer receives continuous coarse-grained feedback about her real resource consumption. To this end, over- or underutilization are expressed in terms of Cumulus Points (CPs), whose accumulation may indicate an imbalance between specified and actually monitored traffic and may eventually require the contract to be adapted accordingly. This paper extends the original CPS for services that are characterized not only by their bandwidth or volume requirements, but by general QoS parameters. Starting with a discussion of CPS for different one-dimensional QoS parameters, consequences for the basic CPS mechanism are investigated, covering especially the determination of relevant thresholds for CPs. These investigations deliver crucial input for the specification of multi-dimensional QoS vectors within the initial contract. Suitable metrics are introduced and applied in order to reduce the complexity of the contract as well as of the different monitoring methods. Finally, the implementation of the extended scheme within an Internet Charging System is discussed.
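The feedback loop described above can be pictured with a tiny sketch (tolerance band, point values and thresholds are hypothetical): each period the measured consumption is compared with the specified value, a Cumulus Point of +1, 0 or -1 is issued depending on whether it falls above, inside, or below the band, and a contract renegotiation is flagged when the running total drifts beyond a threshold.

```python
def cumulus_points(specified, measured, tolerance=0.10, renegotiate_at=3):
    """Issue per-period Cumulus Points and flag when the contract should be renegotiated."""
    total, history = 0, []
    for m in measured:
        if m > specified * (1 + tolerance):
            cp = +1          # over-utilization this period
        elif m < specified * (1 - tolerance):
            cp = -1          # under-utilization this period
        else:
            cp = 0           # within the agreed band
        total += cp
        history.append((m, cp, total, abs(total) >= renegotiate_at))
    return history

# Hypothetical per-period traffic against a specified 10 Mbit/s average:
for row in cumulus_points(10.0, [9.8, 11.5, 12.0, 11.8, 12.5]):
    print(row)
```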
Network Design
Self-healing ring design for optical access networks
Mohan Gawande, John G. Klincewicz
We propose solution techniques for the problem of determining how many access rings are required, which locations should be served by each ring, and to which hub each access ring should be connected. We take into account the cost per mile of the optical fiber used to form the Wavelength Division Multiplexing (WDM) rings, the cost associated with exchanging traffic, the capacity of the WDM rings, the set of demands for wavelengths and the distances between locations. These techniques are based on `joining' algorithms used in statistical clustering. Initially, each location is assumed to be served by its own WDM ring. Using a particular metric to determine candidate pairs of rings, an iterative procedure is used to combine locations served by two rings onto a single ring. We compare different metrics in order to evaluate their performance on a study network based on data from a major U.S. city.
Efficient server selection system for widely distributed multiserver networks
Hyun-pyo Lee, Sung-sik Park, Kyoon-Ha Lee
In order to provide improved Internet service quality, access speeds in subscriber networks and at the servers that act as Internet access points have been rapidly enhanced through traffic distribution and the installation of high-performance servers. Nevertheless, Internet access quality and content delivery speed have remained unsatisfactory. Simply adding nodes at the Internet access device cannot cope with growing network traffic, because the root cause lies in the middle-mile nodes between a CP (Content Provider) server and a user node. To address this problem, this paper proposes a new method that selects an effective server for a client by minimizing the number of nodes between server and client while keeping the load balanced among the servers, which are clustered by the clients' locations in a physically distributed multi-site environment. The proposed method uses an NSP (Network Status Prober) and a content server manager to obtain the status of each server and of the distributed network. A new architecture is presented for the server selection algorithm together with its implementation. The paper also discusses the parameters for selecting the best service-providing server for a client, and the approach is validated by experiments over the proposed architecture.
Performance comparison of reservation MAC protocols for broadband powerline communications networks
Halid Hrasnica, Abdelfatteh Haidine, Ralf Lehnert
We study the MAC layer of powerline communications (PLC) transmission systems applied to telecommunication access networks. PLC networks have to operate with limited signal power, which makes them more sensitive to disturbances from the electrical power supply grid and from the network environment. Well-known error handling mechanisms (e.g. FEC and ARQ) can be applied to PLC systems to cope with the transmission errors caused by these disturbances. However, the use of these mechanisms consumes part of the transmission capacity and therefore further decreases the already limited net data rate of PLC systems. Because of the limited bandwidth, PLC networks have to provide very good network utilization. Sufficient QoS is also required, which can be achieved by using efficient methods for sharing the network capacity - MAC protocols. Since impulsive noise strongly affects error-free transmission, this investigation also includes the modeling of several disturbance scenarios. We propose reservation MAC protocols for PLC access networks, because they are suitable for carrying hybrid traffic with variable data rates while ensuring high network utilization. The analysis of the basic reservation protocols shows that the ALOHA random-access protocol cannot deal with frequent transmission demands, but it is more robust against disturbances than the polling-based access protocol. The ALOHA protocol can be improved by piggybacking, which reduces the collision probability and accordingly shortens the access delay. The polling protocol is extended by inserting a contention component, giving a hybrid access method that shortens access delays when there is a small number of stations in the network. In general, the problems caused by frequent transmission requests remain in all investigated access methods; the ALOHA-based protocols show the worst behavior in this case.
Connection-level simulation-based capacity planning method
Zdenko Vrdoljak, Gordana Kovacevic, Mladen Kos
This paper describes a multiservice network capacity planning method based on a simulation-driven procedure. We acknowledge that existing capacity planning methods already allow capacity estimation, but we point out several drawbacks and inefficiencies of these methods. Our approach is an alternative that can obtain satisfactory results without the limitations imposed by analytic approaches. The procedure is based on running an event-driven simulation that 'drives' a routine adjusting the capacities of links (resources) so that the blocking probability of connections being established is less than a predefined tolerance. We analyze the procedure and present results demonstrating its precision and correctness.
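The adjust-until-blocking-is-below-tolerance idea can be sketched for a single link (a deliberately simplified illustration, not the authors' multiservice procedure): an event-style simulation estimates the connection blocking probability for a given capacity, and the capacity is increased until the estimate falls below the tolerance.

```python
import random

def simulate_blocking(capacity, arrival_rate, mean_holding, n_calls=100_000, seed=1):
    """Estimate the blocking probability of a loss link carrying at most 'capacity' connections."""
    rng = random.Random(seed)
    t, departures, blocked = 0.0, [], 0
    for _ in range(n_calls):
        t += rng.expovariate(arrival_rate)                 # next connection request
        departures = [d for d in departures if d > t]      # drop connections that have finished
        if len(departures) >= capacity:
            blocked += 1                                    # no free circuit: block the request
        else:
            departures.append(t + rng.expovariate(1.0 / mean_holding))
    return blocked / n_calls

tolerance, capacity = 0.01, 1
while simulate_blocking(capacity, arrival_rate=10.0, mean_holding=1.0) > tolerance:
    capacity += 1
print("capacity needed for blocking <", tolerance, ":", capacity)
```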
Posters - Tuesday
UPM: unified policy-based network management
Eddie Law, Achint Saxena
Besides providing network management for the Internet, it has become essential to offer different levels of Quality of Service (QoS) to users. Policy-based management provides control over network routers to achieve this goal. The Internet Engineering Task Force (IETF) has proposed a two-tier architecture whose implementation is based on the Common Open Policy Service (COPS) protocol and the Lightweight Directory Access Protocol (LDAP). However, there are several limitations to this design, such as scalability and cross-vendor hardware compatibility. To address these issues, we present a functionally enhanced multi-tier policy management architecture design in this paper. Several extensions are introduced, thereby adding flexibility and scalability. In particular, an intermediate entity between the policy server and the policy rule database, called the Policy Enforcement Agent (PEA), is introduced. By keeping internal data in a common format, using a standard protocol, and by interpreting and translating request and decision messages from multi-vendor hardware, this agent allows a dynamic Unified Information Model throughout the architecture. We have tailor-made this information system to save policy rules in the directory server and to allow execution of policy rules while new equipment is added dynamically at run-time.
Secure electronic commerce communication system based on CA
Deyun Chen, Junfeng Zhang, Shujun Pei
In this paper, we review the state of security in electronic commerce and analyze the operation and security of the SSL protocol. We then propose a secure electronic commerce communication system based on a CA (Certification Authority). The system provides security services such as encryption, integrity, peer authentication and non-repudiation for the application-layer communication software of browser clients and web servers. The system implements automatic key distribution and unified key management by setting up the CA in the network.
Fault management system for reliable ADSL services provisioning
Dong-Il Kim, Won-Kyu Hong, Mun-Jo Jong, et al.
The number of ADSL subscribers is increasing explosively every year. ATM over ADSL provides a new paradigm for Internet access service using existing copper cable, with an ATM network taking on the role of the access network. However, it is very difficult for a network service provider to manage a large-scale ATM access network uniformly so as to provide a stable Internet access service with the ATM over ADSL model. We logically divide the ATM access network into two domains from the perspective of fault management: the first domain is composed of DSLAMs and the second of pure ATM switches. This paper proposes a two-level fault management scheme for ATM over ADSL service provisioning in terms of TMN functional layering: fault management at the network management layer (NML) and fault management at the service management layer (SML). We also describe the experience gained from applying the proposed fault management schemes to the management of a real network.
Transmission of variable-length packets over an unreliable output line
Dieter Fiems, Herwig Bruneel
We consider queueing behavior for transmission of variable-length packets over a slotted unreliable transmission line. Transmission reliability is obtained using either the stop-and-wait or the go-back-N retransmission protocol. The typical bursty nature of errors on the transmission medium is captured by means of an N-state Markov modulated Bernoulli process. Results include the probability generating function of the packet delay, which allows us to obtain expressions for performance measures such as the mean packet delay and the delay variance. As a numerical example, we investigate the protocols under consideration in the case where errors are modeled by means of a Markovian on/off process.
Traffic measurement for dimensioning and control of IP networks
Traffic measurements are collected to gain information about the traffic carried on a network. In this paper, the focus is on those aspects pertaining to network dimensioning and control. Issues related to time scale, read-out period, and traffic classes are discussed. Different measurement types are classified, with each being specified as a meaningful combination of a measurement entity and a measurement basis. To avoid the shortcomings of flow-based measurements, it is proposed that path-based measurements be developed. The use of measurement-based admission control as a means of adaptive resource management is also explored.