Congestion Control Architectures, Overlay QoS
This paper investigates the properties of an edge-based flow control framework that could make it a viable data-plane building block for quality of service (QoS) architectures. We consider two broad categories of flow control: end-host-based (e.g., TCP) and network-based (e.g., Fair Queuing, CSFQ), which differ in the choice of nodes that cooperate and in where functionality is placed. We propose an edge-based closed-loop flow control (EC) framework in the network-based category in which only network nodes are expected to cooperate. The novelty of the EC framework lies in its entire placement of flow control functionality at edge routers; interior routers merely provide one or two isolated FIFO queues for the overall framework. The framework is divided into logical components, e.g., congestion estimation and congestion response. Component instances can be combined to form schemes that do not depend upon, but can accommodate, packet dropping or marking for congestion detection at core routers. We show that TCP Vegas is one possible scheme in the EC framework, and propose two new schemes, "Monaco" and "Riviera". A fluid-model analysis is used to develop key concepts, to provide a reference for the packet-system implementations, and to demonstrate stability, fairness and bounded queue behavior. Simulations illustrate scheme performance, robustness comparisons, potential solutions to pitfalls of existing mechanisms, and an example of service differentiation. Architectural mapping of the EC framework to DiffServ and commercial scenarios such as cross-ISP data VPNs is discussed.
In this work, we focus on the question of how to provide QoS for private networks in an inter-domain, multi-provider environment. Traditional QoS mechanisms like diff-serv and CSFQ require deployment at every potential bottleneck and also require coordination among ISPs. These complex coordination and deployment issues have inhibited large-scale deployment of QoS mechanisms. Our solution is to use a congestion control algorithm that pushes all congestion back to the edge (e.g., the access router), hence the name edge-to-edge control. QoS functions such as queueing, bandwidth sharing and packet-loss distribution are then handled at the edge, allowing the core to operate using largely stateless mechanisms.
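The edge-to-edge idea relies on detecting congestion from quantities observable at the edges alone. As a rough illustration (not the paper's actual algorithm; function names and thresholds here are invented), a Vegas-style estimator infers the traffic "accumulated" in interior queues from the gap between expected and actual rates:

```python
# Sketch of Vegas-style accumulation-based congestion estimation, in the
# spirit of edge-to-edge schemes. Names and thresholds are illustrative.

def estimate_accumulation(cwnd, base_rtt, current_rtt):
    """Estimate packets queued inside the network (the 'accumulation')."""
    expected_rate = cwnd / base_rtt    # rate if no queueing occurred
    actual_rate = cwnd / current_rtt   # observed rate
    return (expected_rate - actual_rate) * base_rtt

def adjust_rate(rate, accumulation, low=1.0, high=3.0, step=1.0):
    """Additive adjustment keeping the accumulation within [low, high]."""
    if accumulation < low:
        return rate + step   # under-utilized: probe for more bandwidth
    if accumulation > high:
        return rate - step   # queue building up: back off
    return rate
```

Keeping the accumulation within a small band is what yields bounded queues without requiring drops or marks in the core.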
Postscript | PDF | CCR Web Site | Powerpoint Presentation
We propose a new overlay architecture and algorithms, called "edge-to-edge traffic control," to augment best-effort congestion control in the Internet. The basic architecture works at the network layer and involves pushing congestion back from the interior of a network, distributing it across edge nodes where the smaller congestion problems can be handled with flexible, sophisticated and cheaper methods. The edge-to-edge virtual link thus created can be used as the basis of several applications, including controlling TCP and non-TCP flows, improving buffer management scalability, developing simple differentiated services, and isolating bandwidth-based denial-of-service attacks. The methods are flexible, combinable with other protocols (such as MPLS and diff-serv), require no standardization and can be quickly deployed.
This paper proposes an edge-to-edge overlay congestion control architecture for ISP networks to manage traffic aggregates. Motivated by scalability issues, the core rate-based scheme breaks up interior congestion and distributes it across network edges, leading to superior best-effort performance. Consolidation of bottlenecks at edges also enables the creation of purely edge-based better-best-effort services, such as the end-to-end lossless TCP service illustrated in the paper. Our overlay architecture is unique in that it:
(a) does not require any support from interior network nodes or end-systems (unlike ATM ABR, bit-based schemes or TCP modifications);
(b) operates in a lossless manner (unlike TCP) with bounded buffers;
(c) aims to complement, not substitute, existing end-to-end congestion control (e.g., TCP) to achieve scalability;
(d) enables new edge-based better-best-effort services over a best-effort or CoS-based network interior;
(e) is applicable to controlling any type of L4 traffic (TCP, non-TCP); and
(f) can be deployed on a variety of existing networks (IP, MPLS, frame relay, ATM or X.25).
We propose an explicit rate indication scheme for congestion avoidance in ATM networks. In this scheme, the network switches monitor the load on each link, determining a load factor, the available capacity, and the number of currently active virtual channels. This information is used to advise the sources about the rates at which they should transmit. The algorithm is designed to achieve efficiency, fairness, controlled queueing delays, and fast transient response. It is also robust to measurement errors caused by variations in ABR demand and capacity. We present performance analysis of the scheme using both analytical arguments and simulation results. The scheme is being implemented by several ATM switch manufacturers.
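The explicit-rate computation can be sketched as a single switch iteration in the spirit of ERICA-style schemes; this is a simplified view that omits the averaging, queue-delay control and boundary cases of the real algorithm, and the names are ours:

```python
def erica_explicit_rate(input_rate, target_capacity, num_active_vcs,
                        current_cell_rate):
    """One simplified iteration of an ERICA-style explicit-rate
    computation at a switch. Assumes input_rate > 0."""
    load_factor = input_rate / target_capacity        # overload if > 1
    fair_share = target_capacity / max(num_active_vcs, 1)
    vc_share = current_cell_rate / load_factor        # scale toward unit load
    # Advise the larger of the two, so under-utilized links ramp up
    # quickly while overloaded links converge toward fair shares.
    return max(fair_share, vc_share)
```

Feeding the advised rate back to sources drives the aggregate load factor toward 1 while guaranteeing each VC at least a fair share.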
Postscript (1.29 MB) | Gzip Compressed Postscript (200 KB) | PDF
TCP Related (Randomization, Rate Control, Modeling, TCP over ADSL):
Current implementations of TCP suffer from serious performance problems such as unfairness to flows with higher round-trip times (RTTs), synchronization and phase effects among flows, and correlated losses, leading to throughput degradation under a wide range of scenarios. In this paper we propose a solution to these issues by randomizing the packet sending times at TCP sources (called Randomized TCP). Specifically, we space successive packet transmissions with a time interval Delta = RTT(1+x)/cwnd, where x is a zero-mean random variable drawn from a uniform distribution. We find that the uniform distribution U[-1,1] is optimal with respect to metrics such as throughput, fairness, timeouts, losses and de-synchronization. We show that the scheme outperforms both paced TCP and TCP Reno in a large number of scenarios, and that it is fair to TCP Reno. We show through simple simulations that Randomized TCP reduces phase effects and synchronization even when multiplexed with TCP Reno. We also extend randomization to other window-based schemes, such as the binomial schemes, and show that fairness to TCP Reno increases dramatically.
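The randomized spacing rule is simple enough to state directly; this sketch assumes the U[-1,1] choice the abstract reports as optimal:

```python
import random

def randomized_spacing(rtt, cwnd, rng=random):
    """Inter-packet gap Delta = RTT*(1+x)/cwnd with x ~ U[-1, 1].
    The gap therefore ranges over [0, 2*RTT/cwnd] and averages
    RTT/cwnd, i.e. the flow's mean rate is unchanged."""
    x = rng.uniform(-1.0, 1.0)
    return rtt * (1.0 + x) / cwnd
```

Because the mean gap is still RTT/cwnd, randomization breaks up the phase effects without changing the source's average sending rate.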
Current TCP implementations employ Additive Increase Multiplicative Decrease (AIMD) as the congestion control mechanism. Recently, a new set of schemes called Binomial Congestion Control Schemes (BCCS) was proposed, a subset of which is TCP-compliant. In this paper we evaluate the performance of these TCP-compliant binomial schemes and show through simulations that AIMD performs better than the other BCCS policies in a wide range of networking environments. Specifically, we study the performance of these schemes with respect to throughput, fairness, losses, timeouts and self-similarity. We show that the superior performance of AIMD can be attributed to its more conservative attitude in the presence of losses, when it reduces its transmission rate much faster than the other schemes. This results in smaller congestion periods, thereby reducing losses and timeouts, which in turn increases throughput and decreases the degree of self-similarity of the traffic. We also evaluate the performance of TCP-compatible BCCS when they compete with TCP flows. We find that with a sufficiently large number of flows, BCCS competes fairly with TCP. However, with a smaller number of flows in the network, TCP flows get a smaller share of the bottleneck and suffer disproportionately higher losses and timeouts.
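The binomial family generalizes AIMD with two exponents k and l: increase w by alpha/w^k per loss-free RTT, decrease w by beta*w^l on loss. A minimal sketch (defaults give AIMD; schemes with k + l = 1 and l <= 1 are the TCP-compatible subset the abstract evaluates):

```python
def binomial_update(w, loss, k=0.0, l=1.0, alpha=1.0, beta=0.5):
    """One binomial congestion-window update.
    k=0, l=1 reduces to TCP's AIMD; k=0.5, l=0.5 is the SQRT scheme."""
    if loss:
        return max(w - beta * w**l, 1.0)   # multiplicative-style decrease
    return w + alpha / w**k                # sub-linear increase for k > 0
```

Under this parameterization, AIMD's l = 1 is the most aggressive decrease in the family, which matches the abstract's explanation of why AIMD sees shorter congestion periods.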
The self-similarity of network traffic has been established in a variety of environments, and it is well known that self-similar traffic can lead to larger queueing delays, higher drop rates and extended periods of congestion. In this paper we investigate the impact of the active and passive queue management policies used at routers on the self-similarity of TCP traffic. We also propose a modification to the RED algorithm, aimed at reducing the timeouts and exponential backoffs in TCP flows, and show that it can lead to significant reductions in traffic self-similarity under a wide range of network conditions, as compared to the currently implemented active and passive buffer management policies. We also show that though our techniques are aimed at TCP-related causes, they are effective in reducing the degree of self-similarity even when application- and user-level causes are also present, as long as TCP is used as the underlying transport protocol.
Continuing the process of improvements made to TCP through the addition of new algorithms in Tahoe and Reno, TCP SACK aims to make TCP robust in the presence of multiple losses from the same window. In this paper we present analytic models to estimate the latency and steady-state throughput of TCP Tahoe, Reno and SACK, and validate our models using both simulations and TCP traces collected from the Internet. In addition to being the first models for the latency of finite Tahoe and SACK flows, our model for the latency of TCP Reno gives a more accurate estimation of transfer times than existing models. Our models show that under the losses introduced by the drop-tail queues that dominate most routers in the Internet, current implementations of SACK fail to provide adequate protection against timeouts: a loss of roughly more than half the packets in a round leads to a timeout. We also show that with independent losses SACK performs better than Tahoe and Reno, while as losses become correlated, Tahoe can outperform both Reno and SACK.
Current models for TCP performance focus mainly on the steady-state throughput of infinite flows, making them inappropriate for the short TCP flows that dominate transfers over the Internet. In this paper, we present a model for the latency of finite TCP Reno flows with independent losses, for which no models currently exist. The proposed model can also estimate the steady-state throughput of long flows. We also present a sensitivity analysis and empirical models that investigate the effects of various factors, such as loss rates, window limitation, delayed acknowledgments and packet size, on TCP latency, and quantify and isolate their individual contributions. Our model captures the effect of timeouts and slow-start phases occurring anywhere during the transfer and uses a more accurate model for the slow-start phase, leading to a more accurate estimation of TCP latencies.
Most TCP connections in today's Internet transfer data on the order of only a few kilobytes. Such TCP transfers are very short and spend most of their time in the slow-start phase. Thus the underlying assumptions made by steady-state models cease to hold, making them unsuitable for modeling finite flows. In this paper, we propose an accurate model for estimating the transfer times of TCP flows of arbitrary size. Our model gives a more accurate estimation of transfer times than those predicted by [CSAn00], which extends the steady-state analysis of Padhye et al. [PFTK98] to model finite flows. The main features of our work are the modeling of timeouts and slow-start phases occurring anywhere during the transfer and a more accurate model for the evolution of the cwnd in the slow-start phase. Additionally, the proposed model can also estimate the steady-state throughput of TCP connections. The model is verified using web-based measurements of real-life TCP connections. We also introduce an empirical model that gives a better "feel" for TCP latency and the nature of its dependence on loss probabilities and window limitation. Finally, the paper investigates the effect of window limitation and packet size on TCP latency.
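For contrast with these finite-flow models, the infinite-flow steady-state approximation of Padhye et al. [PFTK98] that [CSAn00] builds on can be written down directly; the parameter names below are ours:

```python
from math import sqrt

def pftk_throughput(p, rtt, t0, b=2, mss=1):
    """Steady-state TCP Reno throughput approximation of Padhye et al.
    [PFTK98] (packets/s when mss=1). p: loss probability, rtt:
    round-trip time (s), t0: base retransmission timeout (s),
    b: packets acknowledged per ACK. Valid for 0 < p < 1."""
    denom = (rtt * sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return mss / denom
```

The finite-flow models above refine exactly the regimes where this formula breaks down: short transfers dominated by slow start rather than steady-state behavior.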
This paper presents TCP Rate Control, a new technique for transparently augmenting end-to-end TCP performance by controlling the sending rate of a TCP source. The "rate" of a TCP source is determined by its window size, the round-trip time and the rate of acknowledgments (acks). TCP rate control shapes these quantities by modifying the ack number and receiver window fields in acknowledgments and by modulating the acknowledgment rate. From a performance viewpoint, a key benefit of TCP rate control is avoiding the adverse effects of packet loss, such as reduced goodput due to retransmission and timeout delays, and unfairness or a large spread in per-user goodputs. Further, TCP rate control can positively affect performance even if the bottleneck is non-local and even if the end-host TCP implementations are non-conforming. We demonstrate these aspects through a comparative study with RED and TCP-ECN, and discuss deployment issues. The TCP rate control approach has been implemented and patented by Packeteer Inc.
Related Tech Report in Postscript | In HTML
We analyze TCP performance in asymmetric networks, where throughput depends significantly on the characteristics of the reverse direction and its packet loss. We introduce a new operational model called the "AMP model" which, combined with an understanding of TCP dynamics and buffer management techniques, explains the performance effects seen. We apply this model to guide design improvements in buffer management (ack regulation) and scheduling schemes, achieving performance improvements of an order of magnitude or more. The schemes have been tested through simulation and implementation, and the model-driven improvements observed are better than those proposed in earlier literature.
This contribution examines different methods of controlling the rate of TCP applications to achieve the goals of fairness, delay control and throughput. The key idea is to use an ATM ABR algorithm like ERICA+ to calculate rate feedback, and to use new techniques to control TCP sources with the feedback calculated. Specifically, we propose two rate-to-window translation schemes for explicit window feedback to TCP, and one variant of an acknowledgment bucket scheme (which controls TCP's rate by controlling the rate of acknowledgments). These techniques can be applied in ATM edge devices or in Internet routers. We explore the dynamics of TCP traffic under the different methods of rate control and show through simulations that our techniques can considerably improve the fairness, throughput and end-to-end delay properties of TCP applications.
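The acknowledgment-bucket idea can be sketched as a simple pacing element: ACKs are forwarded to the source no faster than a target ack rate, which indirectly paces the self-clocked TCP sender. This is an illustrative model with invented names, not the paper's implementation:

```python
class AckBucket:
    """Pace ACKs toward a TCP source at a target rate (ACKs/second),
    so the self-clocked sender transmits at roughly that rate."""

    def __init__(self, ack_rate):
        self.interval = 1.0 / ack_rate  # minimum spacing between ACKs
        self.next_release = 0.0

    def arrive(self, now):
        """An ACK arrives at time `now`; return the time at which it is
        forwarded upstream to the source."""
        release_at = max(now, self.next_release)
        self.next_release = release_at + self.interval
        return release_at
```

ACKs arriving in a burst are smeared out at the configured spacing, while ACKs arriving slower than the target rate pass through undelayed.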
Postscript (1960 KB) |
PKzip Compressed Postscript (255 KB) |
PDF |
gzip Compressed Postscript (255 KB) |
Text (without figures) (46 KB)
Slides in HTML (35 KB)
Expanded Tech Report:
Postscript
Packeteer, a company that makes TCP rate-control products
Multimedia Issues/Multicast Congestion Control:
In this paper we address the issue of efficient multimedia streaming by integrating an intelligent buffer management scheme with the congestion control scheme at the source. The scheme exploits the fact that most transmission losses actually occur at the source and not in the network; an intelligent transmission scheme can take advantage of this fact and control exactly what data is dropped in response to network congestion. The integrated model uses priority information from the encoder and network information from the congestion control scheme: it drops low-priority packets and sends the most important packets in the available bandwidth. Packets are dropped when the source transmission buffer length exceeds a minimum threshold. This scheme ensures that the transmitted media has the highest possible quality under the given network conditions for a given coding scheme. The paper also presents a randomized transmission scheme, as part of the integrated model, to reduce jitter and burst losses in multimedia transmissions.
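The buffer policy described above amounts to threshold-triggered, priority-aware dropping at the source; a minimal sketch with an invented data layout (packets as (priority, payload) pairs, higher priority meaning more important):

```python
def enqueue_with_priority_drop(buffer, packet, threshold):
    """Source-side transmission buffer sketch: when occupancy exceeds
    the threshold, drop the lowest-priority queued packet rather than
    the newest arrival, so encoder-marked important packets survive
    congestion. Mutates and returns `buffer`."""
    buffer.append(packet)
    if len(buffer) > threshold:
        lowest = min(range(len(buffer)), key=lambda i: buffer[i][0])
        buffer.pop(lowest)
    return buffer
```

Compared with tail-drop, the same number of packets is discarded, but the choice of which packets to lose is made with the encoder's priority information.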
There are two categories of multicast congestion control: multi-rate and single-rate. The former can adapt the transmission rate to heterogeneous receivers but requires layered transmission. The latter is simple to implement and deploy, and is thus suitable for at least some situations. In this article, a new single-rate scheme, ORMCC, is proposed. It requires very simple support from receivers, which distinguishes it from other schemes in this category, e.g., PGMCC [11], TFMCC [12] and MDP-CC [8]. Simulations in ns-2 [1] show that the scheme is TCP-friendly and does not suffer from the drop-to-zero problem.
We propose MCA, a rate-based end-to-end multicast congestion avoidance scheme. Congestion avoidance differs from congestion control in that our scheme detects and responds to network congestion without necessarily inducing packet loss. Our scheme is a single-rate scheme that operates end-to-end: it transmits at the rate allowed by the most congested receiver and does not expect packet marking or other support from intermediate bottlenecks. Congestion is detected autonomously at receivers using the concept of "accumulation" and the simple thresholding techniques proposed in our recent unicast work. Congestion feedback to the sender can take the form of single-bit congestion indications (CIs) or of a multi-bit output-rate measure. The feedback is sparse in the sense that at most one feedback is generated per measurement period (unlike the multiple loss indications generated during packet loss). The source implements two key blocks: a filtering block to discriminate between competing feedback from receivers, and a congestion response block that implements a rate-increase/decrease policy. The two feedback models lead to two schemes: bit-based and explicit rate-based. Simulation results show that both schemes avoid the drop-to-zero problem and are fair with unicast congestion avoidance schemes.
GZIP Postscript | Postscript | PDF
We propose an end-to-end single-rate source-based multicast congestion control scheme (LE-SBCC) for reliable or unreliable multicast transport protocols. It addresses key issues such as drop-to-zero issues, TCP friendliness and RTT estimation. The scheme consists of a cascaded set of filters and a rate adaptation policy module (AIMD or TFRC) which transform the multicast tree to appear like a unicast path for the purposes of congestion control. The scheme is not self-clocked but acts upon a stream of loss indications (LIs), which are filtered to get a stream of loss events (LEs) (at most one per RTT per receiver). This LE stream is further filtered to extract the maximum LEs from any one receiver, which results in at most one rate-reduction per RTT. A range of results (simulation and experimental) is presented and compared against the mathematical model of the scheme components.
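The first stage of the filter cascade (loss indications to loss events) can be sketched as follows; the later stages (selecting the maximum-LE receiver, the AIMD/TFRC rate-adaptation module) are omitted, and the names are ours:

```python
class LossEventFilter:
    """First filter stage of an LE-SBCC-style cascade: collapse raw
    loss indications (LIs) to at most one loss event (LE) per RTT
    per receiver. Later stages, not shown here, keep only the worst
    receiver's LEs so the tree looks like one unicast path."""

    def __init__(self, rtt):
        self.rtt = rtt
        self.last_le = {}   # receiver id -> time of its last loss event

    def loss_indication(self, receiver, now):
        """Return True if this LI becomes a loss event."""
        last = self.last_le.get(receiver, float('-inf'))
        if now - last < self.rtt:
            return False    # suppressed: already one LE this RTT
        self.last_le[receiver] = now
        return True
```

This per-receiver collapsing is what prevents a burst of losses from one congested subtree from triggering repeated rate reductions (the drop-to-zero problem).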
Postscript | GZIPPED Postscript
This paper presents a simple, generic source-based end-to-end multicast congestion control (GSC) algorithm for reliable multicast transport (RMT) protocols. The algorithm is completely implemented at the source and leverages the reverse control-information flow in RMT protocols such as PGM or RMTP; specifically, it does not introduce any new control traffic or new fields in RMT protocol headers. It addresses the drop-to-zero problem in limited cases by introducing a robust, adaptive time filter based upon RTT estimates collected by observing NAK traffic. This allows it to adapt quickly to changes in the congestion situation in any part of the tree. The algorithm is friendly to TCP in terms of competition for bandwidth shares. The scheme has minimal control-traffic requirements and weak RTT-estimation requirements, which allows a large deployment space including multi-sender multicast and combination with receiver-based schemes.
Postscript | Gzipped Postscript | PDF
Congestion control is a key requirement for reliable multicast transport (RMT) protocols. We have recently developed a purely source-based scheme (GSC) which requires no receiver/network support. This paper extends that work to handle high degrees of multiplexing and reasonably large receiver sets and demonstrates the TCP friendliness of the scheme.
Abstract (PS) | Summary (PS) | NGC site | Poster Slides (PPT)
Network Management:
Traffic engineering broadly relates to optimization of the operational performance of a network. This survey discusses techniques such as multi-path routing, traffic splitting, constraint-based routing and path protection that are used for traffic engineering in contemporary Internet Service Provider (ISP) networks. These techniques can be classified under two broad classes, connectionless and connection-oriented, which dominate the current debate on next-generation routing and traffic engineering in IP networks. The connectionless approach evolves current distance-vector and link-state algorithms, or influences routing metrics. The connection-oriented approach uses signaling, as in Multiprotocol Label Switching (MPLS). Connection-oriented techniques offer a convenient way to monitor, allocate, reroute and protect resources for given traffic on an explicit and flexible basis. The survey examines the core problems, discusses solutions in both the connectionless and signaled approaches, and points to topics for research and advanced development.
In this paper, we present an OSPF weight-optimization scheme using a general automatic network management framework, the on-line simulation framework proposed in [olsp]. We choose the packet drop rate in the network as the optimization metric, as it is a good indicator of congestion and also impacts the performance of the underlying applications. The packet drop rate is formulated in terms of link parameters, such as bandwidth and buffer space, and the parameters of the traffic demands; a GI/M/1/K queueing model is used to compute the packet drop probability on a given link. We have also developed a fast recursive random search algorithm to address the issues associated with network optimization problems. The search algorithm is compared to the local search heuristic of [Thorup] in terms of the number of function evaluations needed to obtain a "good" OSPF link-weight setting. Our results demonstrate that recursive random search takes 50-90% fewer function evaluations to find a "good" setting; the amount of improvement depends on the network topology, traffic conditions and optimization metric. We have simulated the proposed OSPF optimization scheme in ns [ns], and the results indicate improvements on the order of 30-60% in the total packet drop rate.
PS | PDF | PDF (shortened version)
Random Early Detection (RED) is an active queue management mechanism designed to provide better performance than traditional DropTail. However, its parameter setting has proved to be very sensitive to network scenarios and needs constant tuning to achieve good performance under varying network conditions. Because RED is not yet well enough understood for an analytical approach, this paper takes advantage of network simulation techniques and formulates the optimal configuration of RED as a black-box optimization problem. An optimization objective is designed to effectively reflect the tradeoff between utilization and queueing delay. Based on the proposed RED optimization scheme, a general automatic network management system, the on-line simulation system [1], is used for the on-line tuning of RED under changing network conditions. The proposed approach is empirically validated with simulations and real network experiments. The simulation results show that RED controlled by the on-line simulation system is able to stabilize around the expected equilibrium under varying conditions and maintain high utilization.
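The utilization/delay tradeoff can be encoded as a scalar objective handed to the black-box optimizer; the form and weighting below are purely illustrative, not the paper's actual objective:

```python
def red_objective(utilization, avg_queue_delay, max_delay, delay_weight=0.5):
    """Illustrative scalar objective for black-box RED tuning:
    reward link utilization (in [0, 1]) and penalize queueing delay
    normalized by a maximum tolerable delay. Higher is better."""
    return utilization - delay_weight * (avg_queue_delay / max_delay)
```

Each candidate RED parameter setting is scored by running a simulation and evaluating this objective on the measured utilization and delay.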
The performance of several network protocols can be significantly enhanced by tuning their parameters. The optimization of network protocol parameters can be modeled as a "black-box" optimization problem with an unknown, multi-modal and noisy objective function. In this paper, a recursive random search algorithm is proposed to address this type of optimization problem. The new algorithm takes advantage of the favorable statistical properties of random sampling and achieves high efficiency without imposing extra restrictions, such as differentiability, on the objective function. It is also robust to noise in the evaluation of the objective function, since it uses no traditional noise-susceptible local search techniques. The proposed algorithm is tested on classical benchmark functions and its performance is compared with a multi-start hill-climbing algorithm. The algorithm is also integrated with a new on-line simulation system that attempts to automate network management by tuning protocol parameters when network conditions change significantly. We present the application of this on-line simulation system to enhancing the performance of network protocols such as RED and OSPF.
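A recursive random search can be sketched as repeated uniform sampling in a box that is re-centered and shrunk around the best point found so far; the restart rules and shrink schedule of the actual algorithm are simplified away here, and the names are ours:

```python
import random

def recursive_random_search(f, bounds, samples=30, rounds=5,
                            shrink=0.5, rng=random):
    """Minimize f over a box given as [(lo, hi), ...]. Each round draws
    uniform samples, then shrinks the box around the best point found.
    No gradients needed, and random sampling tolerates a noisy f."""
    best_x, best_y = None, float('inf')
    for _ in range(rounds):
        for _ in range(samples):
            x = [rng.uniform(lo, hi) for lo, hi in bounds]
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        # Re-center a shrunken box on the best point, clipped to the
        # current bounds.
        bounds = [(max(lo, c - shrink * (hi - lo) / 2),
                   min(hi, c + shrink * (hi - lo) / 2))
                  for (lo, hi), c in zip(bounds, best_x)]
    return best_x, best_y
```

The early uniform phase exploits the fact that a modest number of random samples lands in a good region with high probability; the shrinking phase then refines without the noise sensitivity of hill climbing.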
PDF | PS | (EXTENDED TECH REPORT) PS | (EXTENDED TECH REPORT) PDF
Earlier version: ``An Adaptive Random Search Algorithm for Optimizing Network Protocol Parameters,'' Technical Report, 2001. PDF
The complex, dynamic nature of the Internet requires scalable, effective network management. This paper proposes a dynamic network management architecture, which performs the parameter tuning of the underlying network protocols based upon collaborative on-line simulation, thus achieving second-order control over the network. This paper presents the various components of the architecture, optimization techniques and a proof-of-concept implementation with the RED algorithm.
Pricing:
Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single domain of the differentiated-services (diff-serv) architecture of the Internet. We propose a new congestion-sensitive pricing framework, Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness criteria (e.g., max-min, proportional) in rate allocation by using pricing as a tool. Within the Distributed-DCC framework, we develop an Edge-to-Edge Pricing Scheme (EEP) and present simulation experiments on it.
One of the biggest obstacles to implementing congestion pricing is the pricing time-scale. Internet traffic is highly variable and hard to control without a mechanism that operates on very short time-scales, i.e., on the order of round-trip times (RTTs), whereas pricing naturally operates on much longer time-scales because of human involvement. So, in order to exert tight control on congestion through pricing, new implementation methods and architectures are needed. To solve this problem, we propose a novel approach, Pricing over Congestion Control (POCC). The essence of POCC is to overlay congestion pricing on top of an underlying congestion control scheme that enforces much tighter control than pricing can. This way, congestion in the interior network is controlled very tightly, while pricing is done at time-scales large enough to accommodate human involvement. We investigate the problems raised within such an overlay architecture and provide solutions to them. We particularly focus on diff-serv, and use edge-to-edge congestion control and edge-to-edge pricing techniques to illustrate the POCC ideas in simulation.
Several proposals have been made for congestion-sensitive pricing of the Internet. One key implementation obstacle for these dynamic pricing schemes is the need for frequent price updates, whereas the structure of wide-area networks does not allow frequent updates since round-trip times can be very large. Although more frequent price updates give a pricing scheme more control over congestion, networks can only support infrequent updates. An important issue, therefore, is to find the maximum value of the interval over which price updates occur (the pricing interval) such that the level of congestion control remains in a desired range. This paper presents our modeling and analysis of the length of pricing intervals. To represent the level of control over congestion, we use the correlation between prices and congestion measures. After developing approximate models for this correlation, we find and prove that it degrades at most in inverse proportion to an increase in the pricing interval. We also find that the correlation degrades with an increase in the mean or variance of the incoming traffic.
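The control metric here, the correlation between per-interval prices and a congestion measure (e.g. queue length), is the standard sample Pearson correlation:

```python
def correlation(prices, congestion):
    """Sample Pearson correlation between per-pricing-interval prices
    and a congestion measure. Values near 1 mean pricing tracks
    congestion closely; the paper studies how this degrades as the
    pricing interval grows. Assumes non-constant inputs."""
    n = len(prices)
    mp = sum(prices) / n
    mc = sum(congestion) / n
    cov = sum((p - mp) * (c - mc) for p, c in zip(prices, congestion))
    vp = sum((p - mp) ** 2 for p in prices)
    vc = sum((c - mc) ** 2 for c in congestion)
    return cov / (vp * vc) ** 0.5
```

Longer pricing intervals average the congestion signal over more RTTs, which is exactly what drives this correlation down.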
Dynamic pricing techniques attracted significant attention as tools to implement better resource sharing, and address the issue of congestion in the Internet. Among the pricing schemes, the "smart market" is a purely congestion-sensitive scheme, which makes it a basis for comparing new pricing schemes. However, there are significant implementation challenges in this scheme. We investigate these issues and come up with an implementation strategy in the differentiated-services architecture. This strategy involves certain changes to the scheme and limitations on the traffic types. This paper reports our implementation strategy, scheme changes, and simulation results illustrating the performance of the smart market scheme.
Internet pricing is receiving increased attention in industry and academia. In this paper, we report the results of comparing two Internet pricing models. Using simulation techniques, we evaluate the technical and economic efficiencies of the Smart Market model proposed by MacKie-Mason & Varian (1993) and compare it with Dynamic Capacity Contracting, a pricing scheme that we have developed. Dynamic Capacity Contracting is a congestion-sensitive pricing model implementable in the differentiated services architecture of the Internet. The central idea of congestion-sensitive pricing is that, based on congestion monitoring mechanisms, a network could raise prices and vary contract terms dynamically. Our results indicate that while the smart market model achieves a higher economic efficiency, it results in poor technical performance of the network. On the other hand, the dynamic contracting model achieves a better balance of economic and technical efficiencies. We discuss the implications of our work and identify future research directions.
Differentiated Services/TCP-Friendly Marker Work:
The goal of this paper is to examine the gains from partial upgrades to existing FIFO networks to support delay assurances. Specifically, we try to find the number of hops of FIFO multiplexing after which a latency target is violated. We first examine the effect of multiplexing two flows through successive FIFO schedulers and, for a simple scenario in which cross-traffic is assumed absent, derive a worst-case bound on the burstiness increase across n nodes. We use this result to obtain an effective service curve and a worst-case latency bound. We then examine the effect of having a priority scheduler at the entry of the network with FIFO nodes in the core, and provide a basis for determining the number of hops up to which a worst-case latency target is met.
In this paper we propose an edge-based quality of service (QoS) architecture aimed at site-to-site private networks over the Internet. We extend the traditional point-to-point service model to a point-to-set service model, assuming a finite, bounded set of destination sites. A point-to-set service allows a user to have a pool of premium tokens, which could be flexibly assigned to traffic going towards any destination within the set. The proposed point-to-set service provides statistical assurances and flexibility to users while allowing providers to obtain multiplexing gains. To realize the point-to-set service model, we introduce edge-based dynamic bandwidth tracking and provisioning schemes. The tracking algorithm predicts demand towards a given destination edge. This information is used to efficiently allocate bandwidth towards the destinations in the set. Simulation results are presented to demonstrate the merits of the proposed architecture in terms of cost savings to the customer and efficient resource utilization to the provider.
GZIP Postscript | PDF
This paper proposes the use of TCP-aware network-based packet marking, which, in conjunction with differential packet dropping, is a powerful way to scale the performance of buffer management for best-effort services on the Internet. In particular, we extend the notion of ``TCP-Friendly'' packet marking, proposed recently by us, and apply it to improve the performance of traditional best-effort services. The TCP-aware use of deterministic packet marking at the enterprise edge allows us to protect selected TCP packets from suffering loss, thus significantly reducing the total number of timeouts and avoiding the resulting service degradation. In particular, we protect TCP sessions with very small windows, disperse packet loss across a given window, and protect retransmitted packets from encountering loss. We show baseline results that illustrate that the performance gains are considerable (orders of magnitude) when compared to stateful packet dropping algorithms like FRED, even when TCP SACK implementations are employed. The scheme has been implemented in Linux 2.2.10, and all the results are based upon experimental data.
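The three protection rules named above can be sketched as a single marking decision. This is a simplified illustration, not the actual Linux 2.2.10 implementation; the small-window threshold and the half-window marking budget are hypothetical values. A "protected" packet is one the differential dropper will shed last.

```python
SMALL_WINDOW = 4          # assumed threshold, in segments

def protect(cwnd_segments, is_retransmission, protected_in_window, window_size):
    """Return True if this packet should be marked as protected (illustrative logic)."""
    if cwnd_segments <= SMALL_WINDOW:
        return True                      # tiny-window sessions stall on any loss
    if is_retransmission:
        return True                      # a lost retransmission forces a timeout
    # Disperse loss: spend at most half of each window on protection,
    # so drops that do occur are spread across the window.
    return protected_in_window < window_size // 2

print(protect(2, False, 0, 10))    # small window → True (protected)
print(protect(10, True, 9, 10))    # retransmitted packet → True (protected)
print(protect(10, False, 5, 10))   # protection budget spent → False
```

Each rule targets a distinct cause of TCP timeouts: small windows cannot recover via fast retransmit, lost retransmissions always time out, and clustered losses within one window defeat fast recovery.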
Postscript | Gzipped Postscript | PDF | Technical Report (PS+zip) | Technical Report (PDF+zip)
The differentiated services architecture allows the provision of ``better-than-best-effort'' services in the Internet. However, the performance of legacy TCP applications over differentiated services is still influenced by bursty packet loss behavior. This paper proposes the use of TCP-friendly building blocks in the diff-serv architecture, and in particular, a TCP-friendly traffic marker to enhance TCP performance over assured service. We present the marker design and resulting performance improvements in terms of improved predictability of service, reduced provisioning requirements, better fairness and fewer TCP timeouts. Though the marker improves assured service performance predictability compared to a simple token bucket marker, the performance is still dependent to some extent on TCP dynamics.
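For reference, the simple token-bucket marker that the TCP-friendly marker is compared against can be sketched as follows. This is a generic single-bucket marker (parameter names are illustrative): packets are marked in-profile while tokens remain and out-of-profile otherwise, with no awareness of TCP dynamics.

```python
class TokenBucketMarker:
    """Minimal token-bucket marker: IN while tokens last, OUT otherwise."""

    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0        # token fill rate, bytes/sec
        self.depth = bucket_bytes         # bucket depth, bytes
        self.tokens = bucket_bytes        # bucket starts full
        self.last = 0.0                   # time of previous packet

    def mark(self, now, pkt_bytes):
        """Return 'IN' if the packet fits the profile, else 'OUT'."""
        # Refill tokens for the elapsed interval, capped at bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return "IN"
        return "OUT"

m = TokenBucketMarker(rate_bps=8000, bucket_bytes=1500)   # 1000 B/s profile
print(m.mark(0.0, 1000))   # fits the initial bucket → IN
print(m.mark(0.1, 1000))   # only ~600 tokens remain → OUT
```

Because this marker reacts only to the token count, bursty TCP sources can see whole windows marked OUT at once, which is exactly the loss-clustering behavior the TCP-friendly marker is designed to avoid.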
Postscript | Gzipped Postscript | PDF | Presentation (ppt)
Related IETF Internet Draft:
draft-azeem-tcpfriendly-diffserv-00.txt :
In Text
This document proposes the use of one bit in the DS-byte to facilitate ECN-type control in the differentiated services architecture. Specifically, during periods of congestion along a flow's path (indicated using the one-bit mechanism), the out-of-profile packets of the flow (packets for which the "In profile" bit is cleared) can be either delayed or dropped by the ingress network edge routers. This scheme would allow interior routers to preserve their resources for processing in-profile packets during congestion and guard against certain types of denial-of-service attacks. The proposed mechanism could also be used to build differentiated services networks with lower average delay, and aid the implementation of congestion-based pricing schemes in such architectures. The mechanism interoperates well with the baseline differentiated services architecture and can also interoperate with ECN-TCP and future ECN-enabled transports/applications. Further, the ECN-type flow control can be deployed within a differentiated services network even if end-to-end ECN support is unavailable, allowing a quick migration path.
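The ingress edge behavior the draft describes reduces to a small decision rule. The sketch below is an illustration of that rule only (the function and flag names are hypothetical, and the actual draft defines bit positions, not code): in-profile packets always pass, while out-of-profile packets are shed at the edge once congestion feedback for the path is set.

```python
def ingress_action(in_profile, path_congested):
    """Decide what the ingress edge router does with a packet (illustrative)."""
    if in_profile:
        return "forward"            # in-profile traffic is always carried
    if path_congested:
        return "drop_or_delay"      # shed out-of-profile load at the edge
    return "forward"                # no congestion: out-of-profile may pass

print(ingress_action(in_profile=False, path_congested=True))   # drop_or_delay
print(ingress_action(in_profile=True, path_congested=True))    # forward
```

The point of acting at the ingress is that interior routers never spend buffer or processing on out-of-profile packets during congestion, which is also what blunts flooding-style denial-of-service traffic.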
Text (38KB)
Control-theoretic Work: | Related Papers | Back to Top |
An H-infinity based robust controller is designed for a rate-feedback flow-control problem in single-bottleneck data-communication networks. The controller is robust to uncertain time-varying multiple time-delays in different channels. The controller forces the queue length at the bottleneck node to the desired steady-state value asymptotically and also satisfies a weighted fairness condition. Lower bounds for stability margins for uncertainty in the time-delays and for the rate of change of the time-delays are derived. A number of simulations are included to demonstrate the time-domain performance of the controller. Trade offs between robustness and time-domain performance are also discussed.
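The plant behind this design can be illustrated with a toy delayed-feedback simulation: sources transmit at rates assigned by the bottleneck, but each rate command takes effect only after a feedback delay, during which the queue keeps integrating the mismatch between arrivals and capacity. The proportional controller below is a simple stand-in to show the setup; it is not the paper's H-infinity design, and all constants are illustrative.

```python
def simulate(capacity=10.0, q_target=50.0, delay=3, gain=0.1, steps=200):
    """Regulate a bottleneck queue to q_target under a feedback delay."""
    queue, rate = 0.0, capacity
    pending = [rate] * delay              # rate commands still in flight
    for _ in range(steps):
        arrival = pending.pop(0)          # a delayed rate takes effect now
        queue = max(0.0, queue + arrival - capacity)
        # Stand-in proportional law: send above capacity while below target.
        rate = max(0.0, capacity + gain * (q_target - queue))
        pending.append(rate)
    return queue

print(simulate())   # queue settles near the 50-packet target
```

With too large a gain or delay this simple loop oscillates, which is why the paper treats the time delays as uncertain and designs for robustness margins rather than tuning a fixed gain.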
In this paper, we investigate the use of capacity information in the H-infinity controller design for rate-based flow control in high-speed networks, for the case of a single bottleneck node and a single queue. In previous work in this line of research, it was assumed that the controller has access to queue information, and robust controllers were designed for queue management under time-varying time-delay uncertainties. Here we assume that, besides the queue information, the controller also has access to capacity information. On top of the existing robust controller, we use an additional controller term acting on the capacity information, and we investigate optimal ways to design such additional terms.
In this paper we consider the effect of time delay on the performance of the AIMD algorithm, specifically queue length and fairness, for both rate-based and window-based sources. The queue bounds can be used to allocate buffers at bottlenecks so that flows can go through without any loss. We modify the AIMD algorithm by making the additive rate-increase parameter proportional to the round-trip time and the decrease parameter proportional to a power of the round-trip time, which makes the algorithm fair even with heterogeneous round-trip times.
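The RTT-compensated AIMD idea can be sketched in one update rule. This is a simplified illustration of the increase side only: making the additive increment proportional to a flow's RTT means every flow gains the same rate per unit time regardless of how often it updates. The constants are illustrative, and the fixed multiplicative decrease below stands in for the paper's RTT-power decrease rule.

```python
def aimd_step(rate, rtt, congested, k=1.0, beta=0.5):
    """One AIMD update; k and beta are illustrative constants."""
    if congested:
        return rate * beta               # multiplicative decrease
    return rate + k * rtt                # additive increase proportional to RTT

# Two loss-free flows over one second: the short-RTT flow updates 10x as
# often but with a 10x smaller increment, so both gain the same total rate.
short = long = 10.0
for _ in range(10):
    short = aimd_step(short, rtt=0.1, congested=False)
long = aimd_step(long, rtt=1.0, congested=False)
print(short, long)   # both reach 11.0
```

Without this compensation, short-RTT flows ramp up faster in real time and starve long-RTT flows, which is the RTT-unfairness that the modified algorithm removes.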
Postscript | Slides (Postscript)
An H-infinity based robust controller is designed for a rate-feedback flow-control problem in single-bottleneck data-communication networks. The controller is robust to uncertain time-varying multiple time-delays in different channels. The controller forces the queue length at the bottleneck node to the desired steady-state value asymptotically and also satisfies a weighted fairness condition. Some simulations are also presented to demonstrate the time domain performance of the controller.
In a recent work by the authors, an H-infinity based flow controller was designed for explicit rate-based congestion control in high-speed networks. Time-delay uncertainties were taken into account when the controller was designed. This paper studies robustness margins, e.g., the largest allowable time delay, as functions of a weighting parameter used in the definition of the H-infinity optimization problem. Another issue discussed here is "non-fragile" (i.e., internally robust) implementation of the H-infinity controller. Time-domain performance is demonstrated via simulations.
Typical congestion control algorithms for high-speed networks include local flow controllers at the bottleneck nodes. In this paper, an H-infinity based controller is developed for rate feedback in a single-bottleneck network. The rates can be assigned to the sources only after a certain transmission delay. Controller design specifications for this time-delay system include "fairness" to multiple users, "usage" optimization, and minimization of transients in the queue length. Stability robustness against uncertainties in the time delays is also specified as a design goal. Through simple algebra, the problem is transformed into an H-infinity control problem for a plant with a time delay, and it is solved using an algorithm developed earlier for this class of problems.
Postscript (459496 bytes) | PKzip Compressed Postscript (97220 bytes) | PDF