Summary
This paper addresses the problems that arise as per-flow bandwidth and link latency increase, and proposes a solution.
As more high-bandwidth links and high-latency satellite links get added to the internet, TCP becomes increasingly inefficient, no matter what queueing scheme is used. The authors put forth a new protocol called Explicit Control Protocol (XCP), which they found to outperform TCP both in high-bandwidth, high-delay environments and in conventional ones. The authors also argue that XCP achieves fair bandwidth allocation, short queues, high utilization and near-zero packet drops. One major feature of XCP is that the efficiency and fairness controllers are decoupled, allowing each to be tuned for its own objective.
XCP does not use a binary feedback mechanism to signal congestion. Instead, it adds a feedback field to the packet header, which carries a value telling the sender how much to increase or decrease its congestion window, as determined by the bottleneck link along the path. XCP routers compute this feedback so as to optimize efficiency and max-min fairness.
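To make the sender side of this concrete, here is a minimal sketch (Python, my own illustration rather than anything from the paper's code) of how an XCP-style sender might apply the per-packet feedback echoed back in an acknowledgement. The field name h_feedback and the helper below are hypothetical; the only behavior assumed is "add the (possibly negative) feedback to the window, but never shrink below one segment."

```python
# Sketch of an XCP-style sender reacting to per-packet feedback.
# Assumes feedback is expressed in bytes of window change (positive or
# negative), as annotated by routers along the path; names are illustrative.

MSS = 1460  # maximum segment size in bytes (assumed)

def apply_xcp_feedback(cwnd_bytes: float, h_feedback_bytes: float) -> float:
    """Adjust the congestion window by the feedback carried back in the
    acknowledgement, never dropping below one segment."""
    return max(cwnd_bytes + h_feedback_bytes, MSS)

# Example: the bottleneck router asks this flow to slow down by 500 bytes.
cwnd = 20 * MSS
cwnd = apply_xcp_feedback(cwnd, -500.0)
```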
The authors run several simulations to test XCP's performance against TCP Reno, pairing TCP with the following queueing schemes: Random Early Discard (RED), Random Early Marking (REM), Adaptive Virtual Queue (AVQ), and Core Stateless Fair Queueing (CSFQ). Their simulations confirm that XCP achieves high utilization and fairness, keeps queues short, and has near-zero packet drops. Across all of the queueing schemes, XCP performed at least as well as TCP.
Criticism and Questions
I thought this paper was interesting. It's one of the few papers that tries to rebuild a protocol from scratch, which brings its own problems, mostly around the practicality of the protocol. I liked how the authors stated their assumptions upfront, yet also addressed the practicality of deploying a protocol with a new packet format and provided suggestions on how to do so. One thing I am still wondering about is what the impact would be on the end hosts that have to generate and interpret the XCP packet format.
Tuesday, September 8, 2009
Congestion Avoidance and Control
Summary
This paper provides some simple algorithms to deal with congestion avoidance and control, along with examples of what happens with and without them. Specifically, the authors put forth seven new algorithms, including RTT variance estimation, exponential retransmit timer backoff, slow-start, a more aggressive receiver ack policy, and dynamic window sizing on congestion. They identify the root cause of congestion as a violation of "conservation of packets", which can happen when (i) the connection fails to get to equilibrium, (ii) a sender injects a new packet before an old one has exited, or (iii) the connection cannot reach equilibrium because of resource limits along the path.
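The first two algorithms on that list, RTT variance estimation and exponential retransmit timer backoff, can be sketched as follows. This is an illustrative Python paraphrase, not code from the paper; the gains of 1/8 and 1/4 and the 4x deviation multiplier are the commonly cited values for this style of estimator, taken here as assumptions.

```python
# Sketch of an RTT-variance-based retransmit timer with exponential backoff,
# in the spirit of the paper's estimator; constants are conventional values,
# not necessarily the paper's exact ones.

ALPHA = 1 / 8   # gain for the smoothed RTT
BETA = 1 / 4    # gain for the mean deviation

class RetransmitTimer:
    def __init__(self, first_rtt: float):
        self.srtt = first_rtt          # smoothed round-trip time
        self.rttvar = first_rtt / 2    # mean deviation of the RTT
        self.rto = self.srtt + 4 * self.rttvar

    def on_rtt_sample(self, rtt: float) -> None:
        """Update the estimator with a new RTT measurement."""
        err = rtt - self.srtt
        self.srtt += ALPHA * err
        self.rttvar += BETA * (abs(err) - self.rttvar)
        self.rto = self.srtt + 4 * self.rttvar

    def on_retransmit(self) -> None:
        """Back off the timer exponentially after a retransmit."""
        self.rto *= 2
```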
To solve the first problem, they propose the slow-start algorithm, which increases the congestion window by one packet each time the sender gets an ack for new data. Once the first problem is solved, the second problem stems from an incorrect implementation of the retransmit timer. The authors say that failing to estimate the variation in RTT is the usual cause of an incorrect implementation, and they provide some suggestions on how to improve it. They also state that exponential backoff is the best scheme for backing off after a retransmit. Finally, for the third problem, they suggest treating a retransmit as a signal that the network is congested. Once the sender knows this, it should use the additive increase/multiplicative decrease algorithm to adjust its congestion window size.
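A compact way to see how slow-start and additive increase/multiplicative decrease fit together is the window-adjustment sketch below (Python, illustrative only). The halving factor, the one-segment-per-RTT additive increase, and the initial threshold are the conventional choices and should be read as assumptions rather than the paper's exact parameters.

```python
# Illustrative sender window logic combining slow-start with AIMD, in segments.
# The 0.5 decrease factor and +1 segment per RTT increase are conventional
# choices; treat them as assumptions here.

class CongestionWindow:
    def __init__(self):
        self.cwnd = 1.0        # congestion window, in segments
        self.ssthresh = 64.0   # slow-start threshold, in segments (assumed)

    def on_new_ack(self) -> None:
        """Called for each ack that acknowledges new data."""
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow-start: doubles roughly every RTT
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: +1 segment per RTT

    def on_timeout(self) -> None:
        """A retransmit timeout is taken as the congestion signal."""
        self.ssthresh = max(self.cwnd / 2.0, 2.0)  # multiplicative decrease
        self.cwnd = 1.0                            # restart with slow-start
```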
Finally, the authors describe their future work, which involves using gateways to ensure fair sharing of network capacity. They argue that the gateway should 'self-protect' against misbehaving hosts. They finish by mentioning work on detecting congestion early, since the earlier it is detected, the faster it can be fixed.
Criticisms & Questions
I liked that they talked about how to deal with uncooperative hosts and suggested gateways as a possible place to automatically moderate them. I would like to know more about this mechanism and, if it has been implemented, whether it works in the real world.
I also liked that they point out that congestion avoidance and slow-start are two different algorithms with two different objectives. I was a little confused about that myself and was glad to have it cleared up.