V. Jacobson and M. Karels, “Congestion Avoidance and Control,” Proceedings of ACM SIGCOMM, August 1988. [PDF]
Summary
This paper argues that many of the congestion problems observed circa 1988 (on connections between LBL and UC Berkeley, running 4.3 BSD) stemmed from flawed implementations of the transport protocol rather than from the protocol itself. The authors present examples of misbehaving implementations of the window-based transport protocol and propose several algorithms (e.g., RTT variation estimation, exponential retransmit timer backoff, slow start, a more aggressive receiver ACK policy, dynamic window sizing on congestion) for ensuring network stability, all grounded in the so-called packet conservation principle.
By packet conservation the authors mean that, for a connection in steady state, a new packet isn’t put into the network until an old packet leaves; i.e., the total number of packets in transit for a connection in equilibrium with a full window of data is constant. They observe that congestion can occur only if this principle is violated, which can happen in one of three ways:
- The connection never reaches equilibrium, or
- A sender injects a new packet before an old one has exited, or
- Equilibrium can’t be reached because of resource limits along the path.
Based on this observation, the authors put forward measures to prevent, or recover from, each of these three violations of the conservation principle.
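To make the principle concrete, here is a minimal, hypothetical Python sketch (the names are mine, not the paper’s): a sender in equilibrium transmits a new segment only when an ACK reports that an old one has left the network, so the count in flight stays pinned at the window size.

```python
# Hypothetical sketch of the packet conservation principle: in
# equilibrium a new segment enters the network only when an ACK
# signals that an old one has left, so the number of packets in
# flight stays constant at the window size.

class ConservativeSender:
    def __init__(self, window):
        self.window = window   # full window of data in transit
        self.in_flight = 0     # segments currently in the network

    def can_send(self):
        return self.in_flight < self.window

    def on_send(self):
        assert self.can_send()
        self.in_flight += 1

    def on_ack(self):
        self.in_flight -= 1    # an old packet has exited the network...
        self.on_send()         # ...which clocks a new one out
```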
Contributions
- Slow start: The Internet transport protocol is self-clocking, meaning it paces transmissions by the rate at which ACKs return. When a connection starts, or restarts after a loss, it cannot self-clock because no ACKs are flowing yet. To solve this problem, the paper introduces the concept of a congestion window and presents the slow-start algorithm, which ramps the window up smoothly to the connection’s capacity, resulting in increased throughput (a minimal sketch follows this list).
- Round-trip timing: Assuming the rest of the implementation is correct, the authors argue that a sender might still inject a new packet before an old one leaves if its estimate of the round-trip time is wrong. They therefore propose estimating the variation of the RTT and folding it into the retransmission timeout, with the aim of reducing spurious retransmissions (see the timer sketch after this list).
- Exponential retransmit timer backoff: The authors argue that when the same packet must be retransmitted repeatedly, only exponential backoff of the timer can avoid congestion in the general case (the timer sketch below includes this backoff).
- Congestion avoidance: The paper assumes that packet loss occurs overwhelmingly (with >99% probability) because of congestion rather than damage. Handling congestion then requires a feedback loop: the network signals congestion and the end hosts react. The authors arrive (not surprisingly) at a linear additive-increase/multiplicative-decrease (AIMD) policy, using packet loss itself as the congestion signal instead of explicit congestion-indication bits (the AIMD sketch below combines this with slow start).
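Here is a minimal sketch of slow start, counting the window in segments for simplicity; the class and method names are illustrative, not the paper’s 4.3 BSD implementation. Each ACK opens the congestion window by one segment, which doubles the window every round trip until the connection approaches its capacity.

```python
class SlowStartSender:
    """Illustrative slow-start sketch; window measured in segments."""

    def __init__(self):
        self.cwnd = 1   # congestion window: start with a single segment

    def on_ack(self):
        # One extra segment per ACK doubles the window each round trip:
        # 1, 2, 4, 8, ... segments.
        self.cwnd += 1

    def usable_window(self, receiver_window):
        # Never exceed the window the receiver advertises.
        return min(self.cwnd, receiver_window)
```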
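Next, a sketch of the retransmit timer in the spirit of the paper’s mean/deviation estimator, covering both the variance-aware timeout and the exponential backoff. The gains (1/8 for the average, 1/4 for the deviation) and the rto = srtt + 4·rttvar rule are the values commonly used in later TCP practice; the initial and maximum RTO values here are illustrative.

```python
class RetransmitTimer:
    """Mean/deviation RTT estimator with exponential timer backoff."""

    def __init__(self, initial_rto=3.0, max_rto=64.0):  # illustrative values
        self.srtt = None        # smoothed RTT estimate
        self.rttvar = None      # smoothed mean deviation of the RTT
        self.rto = initial_rto
        self.max_rto = max_rto

    def on_rtt_sample(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            err = rtt - self.srtt
            self.srtt += err / 8                         # gain 1/8
            self.rttvar += (abs(err) - self.rttvar) / 4  # gain 1/4
        # Deviation-based timeout avoids the spurious retransmissions
        # that a plain "2 * srtt" rule produces under variable load.
        self.rto = self.srtt + 4 * self.rttvar

    def on_timeout(self):
        # Exponential backoff: double the timer for each retransmission
        # of the same packet, up to a ceiling.
        self.rto = min(self.rto * 2, self.max_rto)
```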
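Finally, a sketch of the combined scheme: slow start below a threshold, additive increase of roughly one segment per round trip above it, and multiplicative decrease on loss. The increase/decrease constants are the paper’s; the loss reaction shown is the timeout case, where the window also collapses back to one segment, and the initial ssthresh is an illustrative value.

```python
class AIMDSender:
    """Slow start plus additive-increase/multiplicative-decrease;
    window and threshold measured in segments."""

    def __init__(self):
        self.cwnd = 1.0        # congestion window
        self.ssthresh = 64.0   # slow-start threshold (illustrative)

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0   # slow start: doubles the window per RTT
        else:
            # Congestion avoidance: about cwnd ACKs arrive per round
            # trip, so this adds roughly one segment per RTT.
            self.cwnd += 1.0 / self.cwnd

    def on_loss(self):
        # Loss is read as congestion: halve the operating point, then
        # slow-start back up to the new threshold.
        self.ssthresh = max(self.cwnd / 2, 2.0)
        self.cwnd = 1.0
```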
Comments
The results presented in the paper are oriented toward the wired Internet, which is understandable given when it was published. Their applicability to wireless and ad hoc networks is questionable, however; e.g., congestion is NOT the only source of packet loss in a wireless network.
One interesting prediction in this paper is that as the Internet grows bigger, higher-order control functions will be needed to avoid congestion. Recent congestion-avoidance algorithms (e.g., TCP CUBIC) do use cubic control functions!
Overall, this paper is well argued, with extensive footnote annotations that make it easier to follow what is going on. The authors also synthesize the whole solution by borrowing and combining results from different proposals into a holistic design, which is itself an exercise in the art of system building.