TCP Congestion Control

Taoufik#1, Kamrul#2, Rifat#3
#EECS Department, North South University
[email protected], [email protected], [email protected]

Abstract: Considerable interest has arisen in congestion control through traffic engineering. The goal of congestion control is to avoid congestion in network elements. A network element is congested if it is being offered more traffic than it can process. To detect such situations and to neutralize them, we should monitor traffic in the network. This report focuses on congestion control in the network and, in particular, explains congestion control in TCP.


How congestion occurs, and the different measures adopted to remove congestion, are highlighted here.
Keywords: Data, Data Traffic, Network Congestion, Congestion Control, Congestion Control Categories and Mechanisms, Congestion Control in TCP.

1. INTRODUCTION
As the days go by, the necessity of and involvement with networks are increasing at an exponential rate. With the increase in users, development in this field is necessary. Communication over the internet is nothing but the transfer of data from one user to another or, we can say, from one end system to another.

There are two basic methodologies used for data transfer: circuit switching and packet switching. Modern networks are basically a hybrid of the two. In a packet-switched network, the original data is divided into small packets and sent over the network to ensure maximum use of the network resources. These packets travel across the whole network with the help of the intermediate routers, that is, the network switches, and reach the destination end system through a complex procedure. In the intermediate routers, all the packets are first stored, queued and then forwarded to the outgoing link.

Here it very often happens that, due to various causes, not all the packets arriving at a router can be stored and forwarded to the client. As a result, congestion occurs in the network and delivery performance deteriorates: data loss occurs and the file is not delivered to the client completely. This is a big problem in the communication process and also wastes time and network resources. The main focus of our report is to figure out the causes of network congestion and its remedy.

2. DATA
Data is nothing but a set of information.

In computer science, data is anything in a form suitable for use with a computer. Data is often distinguished from programs: a program is a set of instructions that detail a task for the computer to perform, and in this sense data is everything that is not program code. However, in some cases a program can be data and data can be a program. In computers, data can be in any form: a music file, a text file, or anything else that is of use and of interest. We can process and modify data for our own convenience.

The main focus of computer networking is to exchange data from one end system to another. Exchanging data is a very complex and complicated procedure, and during it users face various complications, among which congestion control in TCP is our centre of concentration.

3. Data Traffic: The main focus of congestion control is data traffic. Therefore, before we move on to our main topic, let us get a very brief idea of data traffic. In the jargon of computer networking, data traffic is nothing but the amount and type of traffic on a particular network.

Monitoring data traffic is necessary to reduce difficulties like congestion, latency and packet loss in the network. This is basically a part of bandwidth management. It is necessary to measure the network traffic to determine the causes of network congestion and to attack those problems specifically. Data traffic control is basically the process, used by network administrators, of managing, prioritizing, controlling or reducing the network traffic, particularly internet bandwidth. Some common terms used for data traffic are: average data rate, peak data rate, maximum burst size and effective bandwidth. The data flow is categorized into the following traffic profiles:

Constant bit rate: A constant-bit-rate (CBR), or fixed-rate, traffic model has a data rate that does not change. Fig.: Constant Bit Rate.

Variable bit rate: In the variable-bit-rate (VBR) category, the rate of the data flow changes over time, with the changes smooth instead of sudden and sharp. Fig.: Variable Bit Rate.

Bursty: In the bursty data category, the data rate changes suddenly in a very short time. Fig.: Bursty.
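As an illustrative aside, the three profiles can be sketched as rate-over-time sequences in Python; all rates, tick counts and function names below are invented purely for demonstration, not taken from any real traffic model.

```python
import math

def constant_bit_rate(rate, ticks):
    """CBR: the data rate never changes."""
    return [rate for _ in range(ticks)]

def variable_bit_rate(base, amplitude, ticks):
    """VBR: the rate changes smoothly over time (here, along a sine wave)."""
    return [base + amplitude * math.sin(t / 2.0) for t in range(ticks)]

def bursty(idle_rate, burst_rate, burst_ticks, ticks):
    """Bursty: the rate jumps suddenly for a short interval."""
    return [burst_rate if t in burst_ticks else idle_rate
            for t in range(ticks)]

print(constant_bit_rate(100, 4))   # -> [100, 100, 100, 100]
print(bursty(10, 500, {2}, 4))     # -> [10, 10, 500, 10]
```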

4. Network Congestion: “Network congestion” is a common phenomenon in TCP (Transmission Control Protocol) packet-switched networks. The term generally refers to the scenario in which a network link or node is overcrowded and overwhelmed by the data sent to it, resulting in queuing delay, packet loss and the blocking of new connections. Undoubtedly, this deterioration of network service is a big obstacle to the continuous flow of information. As a result, it has attracted the attention of scientists and is still a matter of concern in the modern era.

Network congestion generally occurs when the load on the network exceeds the capacity the network can handle. Data networking and queuing theory state that “network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates”. Congestion in a network or internetwork occurs because routers and switches have queues (buffers) that hold the packets before and after processing. Typically, a router has an input queue and an output queue for each interface. A packet normally undergoes three steps before it departs.

These three steps are:
1. The packet is put at the end of the input queue while waiting to be checked.
2. The processing module of the router removes the packet from the input queue once it reaches the front of the queue and uses its routing table and the destination address to find the route.
3. The packet is put in the appropriate output queue and waits for its turn to be sent.
Fig.: Queues in a router.
Therefore, it is obvious and clear that congestion occurs if:
* the rate of packet arrival is higher than the packet processing rate (the input queues become longer and longer), or
* the packet departure rate is less than the packet processing rate (the output queues become longer and longer).

5. Congestion Control: The term ‘congestion control’ refers to the mechanisms and techniques used to control congestion in the network: techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. The goal of congestion control is to keep the load on the network below its capacity. A network element is congested if it is being offered more traffic than it can process.
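This condition can be sketched in a few lines of Python. The arrival and service rates below are invented purely for illustration; the sketch only shows the qualitative behaviour, not any real router's scheduling.

```python
def queue_length_over_time(arrival_rate, service_rate, steps):
    """Track an input queue's length, tick by tick, when packets
    arrive at arrival_rate and at most service_rate are processed."""
    queue = 0
    history = []
    for _ in range(steps):
        queue += arrival_rate               # packets arriving this tick
        queue -= min(queue, service_rate)   # packets processed this tick
        history.append(queue)
    return history

# Arrivals (12 pkts/tick) exceed processing capacity (10 pkts/tick),
# so the queue grows without bound -- the congestion condition above.
print(queue_length_over_time(arrival_rate=12, service_rate=10, steps=5))
# -> [2, 4, 6, 8, 10]
```

With arrivals below capacity, the same function shows the queue staying empty, which is why keeping load below capacity is the stated goal of congestion control.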

Congestion control is the control of resources: routers’ CPUs, the bandwidth of links, routers’ memory, etc. A congestion control system usually works in the following way. The system monitors various factors (e.g. a router’s CPU occupancy, link occupancy, the percentage of delivered packets, messaging delays, etc.). Based on this information, the system detects possible congestion. If the system has detected incipient congestion, it restricts traffic rates in some network elements and continues to monitor the state of the network.

This activity reduces traffic through these elements. When the elements have been unloaded, the system restores normal traffic rates in them. If we do not control our network, there is potential for serious trouble. When a network element becomes congested, it processes traffic very slowly and some packets are lost. Therefore, users do not receive the expected packets (or confirmation of delivery) within the time limit. Users begin to resubmit packets, and the new packets cause further congestion. Such a situation is called congestive collapse.

To detect such situations and to neutralize them, we should monitor traffic in the network, which in a single term is known as congestion control. Congestion control and quality of service are two issues so closely bound together that improving one means improving the other, and ignoring one usually means ignoring the other. Most techniques that prevent or eliminate congestion also improve the quality of service in a network. The objective of congestion control is to restrict the consumption and wastage of network resources, to make the internet a satisfactory service and to save the user’s valuable time.

6. Congestion Control Categories and Mechanism: The congestion control of a network generally refers to the prevention of network congestion or its removal. So, it is obvious that a congestion control system must have the opportunity to monitor the state of the network. Congestion control mechanisms can be categorized into the following two:
1. Open-loop (prevention)
2. Closed-loop (removal)
Figure: Congestion control categories.
Open loop: When congestion is controlled either at the source or at the destination, it is open-loop congestion control. In open-loop congestion control, policies are applied to prevent congestion before it happens. The open-loop flow control mechanism is characterized by having no feedback between the receiver and the transmitter. This simple means of control is widely used. The allocation of resources must be of a “prior reservation” or “hop-to-hop” type. Open-loop flow control has inherent problems with maximizing the utilization of network resources: resource allocation is made at connection setup using CAC (Connection Admission Control), and this allocation is made using information that is already “old news” during the lifetime of the connection.

Often there is an over-allocation of resources. Open-loop flow control is used by ATM in its CBR, VBR and UBR services. The policies related to open-loop congestion control are:
a) Retransmission policy: Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may increase congestion in the network; however, a good retransmission policy can prevent it. The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.

For example, the retransmission policy used by TCP (explained later) is designed to prevent or alleviate congestion.
b) Window policy: The type of window at the sender may also affect congestion. The Selective Repeat window is better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the timer for a packet times out, several packets may be resent, although some may have arrived safe and sound at the receiver. This duplication may make the congestion worse. The Selective Repeat window, on the other hand, tries to resend only the specific packets that have been lost or corrupted.
c) Acknowledgment policy: The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. Several approaches are used in this case. A receiver may send an acknowledgment only if it has a packet to be sent or a special timer expires. A receiver may decide to acknowledge only N packets at a time. We need to keep in mind that acknowledgments are also part of the load on a network; sending fewer acknowledgments imposes less load on the network.
d) Discarding policy: A good discarding policy at the routers may prevent congestion and at the same time not harm the integrity of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.
e) Admission policy: An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network.

A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
Closed loop: Closed-loop congestion control comprises mechanisms that try to alleviate congestion after it happens. The closed-loop flow control mechanism is characterized by the ability of the network to report pending network congestion back to the transmitter. This information is then used by the transmitter in various ways to adapt its activity to existing network conditions. Closed-loop flow control is used by ABR. Several mechanisms have been used by different protocols:
a) Backpressure: The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their own upstream nodes, and so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite direction of the data flow, to the source. The backpressure technique can be applied only to virtual-circuit networks, in which each node knows the upstream node from which a flow of data is coming.

The figure below shows the idea of backpressure.
Figure: Backpressure method for alleviating congestion.
Node III in the figure has more input data than it can handle. It drops some packets in its input buffer and informs node II to slow down. Node II, in turn, may become congested because it is slowing down its output flow of data. If node II is congested, it informs node I to slow down, which in turn may create congestion there. If so, node I informs the source of the data to slow down. This, in time, alleviates the congestion. Note that the pressure on node III is moved backward to the source to remove the congestion.
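As a rough sketch of this backward propagation (with hypothetical node names, and simplifying by assuming every upstream node along the circuit becomes congested and keeps propagating the signal):

```python
def propagate_backpressure(path, congested_node):
    """Walk upstream from the congested node toward the source,
    recording each node that is told to slow down.
    Simplifying assumption: every upstream node becomes congested
    itself and propagates the slow-down signal further."""
    idx = path.index(congested_node)
    # Slow-down requests travel opposite to the direction of data flow.
    return list(reversed(path[:idx + 1]))

# Data flows source -> I -> II -> III; node III is congested, so the
# pressure moves backward, node by node, all the way to the source.
print(propagate_backpressure(["source", "I", "II", "III"], "III"))
# -> ['III', 'II', 'I', 'source']
```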

Backpressure was, however, implemented in the first virtual-circuit network, X.25. The technique cannot be implemented in a datagram network, because in this type of network a node (router) does not have the slightest knowledge of the upstream router.
b) Choke packet: A choke packet is a packet sent by a node to the source to inform it of congestion. When a router in the internet is overwhelmed with IP datagrams, it may discard some of them, but it informs the source host using a source-quench ICMP message. The warning message goes directly to the source station; the intermediate routers do not take any action.

The following figure shows the idea of a choke packet.
Figure: Choke packet.
The difference between the backpressure and choke packet methods is notable. In backpressure, the warning is from one node to its upstream node, although the warning may eventually reach the source station. In the choke packet method, the warning goes from the router that has encountered congestion directly to the source station; the intermediate nodes through which the packet has travelled are not warned.
c) Implicit signalling: In implicit signalling, there is no communication between the congested node or nodes and the source.

The source guesses that there is congestion somewhere in the network from other symptoms. For example, when a source sends several packets and there is no acknowledgment for a while, one assumption is that the network is congested. The delay in receiving an acknowledgment is interpreted as congestion in the network, and the source should slow down. This type of signalling is basically used in TCP congestion control.
d) Explicit signalling: The node that experiences congestion can explicitly send a signal to the source or destination. The explicit signalling method, however, is different from the choke packet method.

In the choke packet method, a separate packet is used for this purpose; in the explicit signalling method, the signal is included in the packets that carry data. Explicit signalling in Frame Relay congestion control can occur in either the forward or the backward direction.
Backward signalling: A bit can be set in a packet moving in the direction opposite to the congestion. This bit can warn the source that there is congestion and that it needs to slow down to avoid the discarding of packets.
Forward signalling: A bit can be set in a packet moving in the direction of the congestion.

This bit can warn the destination that there is congestion. The receiver in this case can use policies, such as slowing down the acknowledgments, to alleviate the congestion.
7. Congestion Control in TCP: The Internet model has three protocols at the transport layer: UDP (User Datagram Protocol), TCP (Transmission Control Protocol) and SCTP (Stream Control Transmission Protocol). TCP provides a connection-oriented, reliable, byte-stream service. The term connection-oriented means that the two applications using TCP must establish a TCP connection with each other before they can exchange data.

TCP is a full-duplex protocol, meaning that each TCP connection supports a pair of byte streams, one flowing in each direction. TCP implements congestion-control mechanisms that prevent congestion or alleviate it in the network. TCP also includes a flow-control mechanism for each of these byte streams that allows the receiver to limit how much data the sender can transmit.
Congestion window: In TCP, the congestion window (cwnd) is one of the factors that determine the number of bytes that can be outstanding at any time.

Maintained by the sender, it is a means of stopping the link between two places from being overloaded with too much traffic. The sender's window size is determined by the available buffer space at the receiver (rwnd). The network is another entity that determines the size of the sender’s window: the size of this window is calculated by estimating how much congestion there is between the two places. The actual size of the window is the minimum of the two:
Actual window size = minimum (rwnd, cwnd)
When a connection is set up, the congestion window is set to the maximum segment size (MSS) allowed on that connection.
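The window rule above amounts to a one-line computation; the byte values in the example below are hypothetical, chosen only to show the network side (cwnd) being the binding limit.

```python
def actual_window_size(rwnd, cwnd):
    """The sender window is limited by both the receiver (rwnd) and
    the network (cwnd); the effective size is the minimum of the two."""
    return min(rwnd, cwnd)

# The receiver advertises 65,535 bytes, but the network only permits
# 8 segments of 1,460 bytes each: the sender is bound by cwnd.
print(actual_window_size(rwnd=65535, cwnd=8 * 1460))  # -> 11680
```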

Further change in the congestion window is dictated by an additive-increase/multiplicative-decrease (AIMD) approach.
Congestion policy: TCP’s general policy for handling congestion is based on three phases:
1) Slow start (exponential increase),
2) Congestion avoidance (additive increase) and
3) Congestion detection (multiplicative decrease).
In the slow-start phase, the sender starts with a very slow rate of transmission but increases the rate rapidly to reach a threshold. When the threshold is reached, the rate of increase is reduced to avoid congestion.

Finally, if congestion is detected, the sender goes back to the slow-start or congestion-avoidance phase, based on how the congestion was detected.
Slow start (exponential increase): One of the algorithms used in TCP congestion control is called slow start. This algorithm is based on the idea that the size of the congestion window (cwnd) starts with one maximum segment size (MSS). The MSS is determined during connection establishment by using an option of the same name. The size of the window increases by one MSS each time an acknowledgment is received.

As the name implies, the window starts slowly but grows exponentially. To show the idea, let us look at the figure below.
Figure: Slow start (exponential increase).
Three simplifications are used to make the discussion more understandable:
* We have used segment numbers instead of byte numbers (as though each segment contains only 1 byte).
* We have assumed that rwnd is much higher than cwnd, so that the sender window size always equals cwnd.
* We have assumed that each segment is acknowledged individually.
The sender starts with cwnd = 1 MSS. This means that the sender can send only one segment.

After receipt of the acknowledgment for segment 1, the size of the congestion window is increased by 1, which means that cwnd is now 2. Now two more segments can be sent. When each acknowledgment is received, the size of the window is increased by 1 MSS. When all seven segments are acknowledged, cwnd = 8. If we look at the size of cwnd in terms of rounds (acknowledgment of the whole window of segments), we find that the rate is exponential, as shown below:
Start ….. cwnd = 1
After round 1 ….. cwnd = 2^1 = 2
After round 2 ….. cwnd = 2^2 = 4
After round 3 ….. cwnd = 2^3 = 8
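Under the same three simplifications, this round-by-round growth can be sketched as a toy model (counting cwnd in MSS units, not bytes; this is not a real TCP implementation):

```python
def slow_start(rounds):
    """cwnd (in MSS units) doubles every round: each ACK adds one MSS,
    and a whole window of ACKs arrives per round."""
    cwnd = 1
    sizes = [cwnd]
    for _ in range(rounds):
        cwnd *= 2  # cwnd + one MSS per ACK in the window = 2 * cwnd
        sizes.append(cwnd)
    return sizes

print(slow_start(3))  # -> [1, 2, 4, 8]
```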

We need to mention that if there are delayed ACKs, the increase in the size of the window is less than a power of 2. Slow start cannot continue indefinitely; there must be a threshold to stop this phase. The sender keeps track of a variable named ssthresh (slow-start threshold). When the size of the window in bytes reaches this threshold, slow start stops and the next phase starts. In most implementations the value of ssthresh is 65,535 bytes. In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.
Congestion avoidance (additive increase):

If we start with the slow-start algorithm, the size of the congestion window increases exponentially. To avoid congestion before it happens, we must slow down this exponential growth. TCP defines another algorithm, called congestion avoidance, which undergoes an additive increase instead of an exponential one. When the size of the congestion window reaches the slow-start threshold, the slow-start phase stops and the additive phase begins. In this algorithm, each time the whole window of segments is acknowledged (one round), the size of the congestion window is increased by 1.
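A toy round-by-round model of this additive growth (again counting cwnd in MSS units, purely for illustration) might look like:

```python
def congestion_avoidance(start_cwnd, rounds):
    """cwnd grows by one MSS per round (per full window of ACKs),
    not per individual ACK as in slow start."""
    cwnd = start_cwnd
    sizes = [cwnd]
    for _ in range(rounds):
        cwnd += 1  # additive increase: +1 MSS per round
        sizes.append(cwnd)
    return sizes

print(congestion_avoidance(1, 3))  # -> [1, 2, 3, 4]
```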

Applying this algorithm to the same scenario as slow start, we see that the congestion avoidance algorithm usually starts when the size of the window is much greater than 1. The following figure depicts the idea.
Figure: Congestion avoidance (additive increase).
In this case, after the sender has received acknowledgments for a complete window of segments, the size of the window is increased by one segment. If we look at the size of cwnd in terms of rounds, we find that the rate is additive, as shown below:
Start ….. cwnd = 1
After round 1 ….. cwnd = 1 + 1 = 2
After round 2 ….. cwnd = 2 + 1 = 3
After round 3 ….. cwnd = 3 + 1 = 4
In the congestion avoidance algorithm, the size of the congestion window increases additively until congestion is detected.
Congestion detection (multiplicative decrease): If congestion occurs, the congestion window size must be decreased. The only way the sender can guess that congestion has occurred is by the need to retransmit a segment. However, retransmission can occur in one of two cases: when a timer times out or when three duplicate ACKs are received. In both cases, the size of the threshold is dropped to one-half, a multiplicative decrease.

Most TCP implementations have two reactions:
1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably been dropped in the network, and there is no news about the subsequent segments. In this case TCP reacts strongly:
a) It sets the value of the threshold to one-half of the current window size.
b) It sets cwnd to the size of one segment.
c) It starts the slow-start phase again.
2. If three duplicate ACKs are received, there is a weaker possibility of congestion; a segment may have been dropped, but some segments after it have probably arrived safely, since three ACKs were received.

This is called fast retransmission and fast recovery. In this case, TCP has a weaker reaction:
a) It sets the value of the threshold to one-half of the current window size.
b) It sets cwnd to the value of the threshold (some implementations add three segment sizes to the threshold).
c) It starts the congestion avoidance phase.
An implementation reacts to congestion detection in one of the following ways:
* If detection is by time-out, a new slow-start phase starts.
* If detection is by three duplicate ACKs, a new congestion avoidance phase starts.
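The two reactions can be summarized in a small sketch; a real TCP stack keeps far more state, and the function, event and phase names here are invented for illustration (cwnd counted in MSS units):

```python
def on_congestion_event(event, cwnd, mss=1):
    """Return (new_ssthresh, new_cwnd, next_phase) after a loss event,
    following the two reactions described above."""
    ssthresh = cwnd // 2  # multiplicative decrease: threshold halved
    if event == "timeout":
        # Strong reaction: restart slow start from one segment.
        return ssthresh, mss, "slow_start"
    if event == "three_dup_acks":
        # Weaker reaction (fast retransmission / fast recovery):
        # continue from the new threshold in congestion avoidance.
        return ssthresh, ssthresh, "congestion_avoidance"
    raise ValueError("unknown event: " + event)

print(on_congestion_event("timeout", cwnd=16))
# -> (8, 1, 'slow_start')
print(on_congestion_event("three_dup_acks", cwnd=16))
# -> (8, 8, 'congestion_avoidance')
```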

We summarize the congestion policy of TCP and the relationships between the three phases in the following figure.
Figure: TCP congestion policy summary.
8. Conclusion
From this report we have gained knowledge about congestion, congestion control and the congestion control policy of TCP. In modern times, complications like congestion cause irritating problems. In addition, congestion costs a lot of time and wastes a lot of network resources. Our report covered almost every part of congestion control except the algorithms themselves.
9. Acknowledgement
We would like to thank Behrouz A. Forouzan, whose book helped a lot.

Special thanks to Wikipedia. Cordial thanks to all other authors and researchers whose papers and presentations helped a lot.
10. References
1. Wikipedia, www.wikipedia.com
2. Behrouz A. Forouzan; Data Communications and Networking, 4th ed.
3. Maxim A. Kolosovskiy, Elena N. Kryuchkova; Network Congestion Control Using NetFlow.
4. Liu Pingping, Zhou Lianying; The Research of Adaptive Network Congestion Control Algorithm Based on AQM.
5. Felicia M. Holness; Congestion Control Mechanisms within MPLS Networks.
6. James F. Kurose, Keith W. Ross; Computer Networking: A Top-Down Approach, 4th ed.
