Quoted from: http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM
First, we focused on the case in which TCP connections alone use all of the bandwidth of the network. In particular, when the connections' network delays are the same, all TCP congestion window sizes change in a synchronized manner (i.e., TCP synchronization). In this case, the queue length of the bottleneck node's buffer evolves periodically, and can stay full or almost empty for relatively long durations.
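The synchronization effect can be illustrated with a minimal toy fluid model (my own sketch, not from the paper; the flow count, link capacity, and buffer size are arbitrary illustrative parameters): several TCP flows with identical delays share one drop-tail bottleneck, each adding one packet per tick (additive increase) and all halving their windows in the same tick when the buffer overflows (multiplicative decrease), since identical delays mean every flow observes the loss simultaneously.

```python
# Toy fluid model of TCP synchronization (illustrative sketch only).
# n_flows identical-RTT flows share a bottleneck of `capacity` packets
# per tick with a drop-tail buffer of `buffer_size` packets.

def simulate(n_flows=4, capacity=40, buffer_size=50, ticks=400):
    windows = [5.0] * n_flows        # congestion windows, in packets
    queue = 0.0                      # bottleneck queue length
    queue_trace = []
    for _ in range(ticks):
        arrivals = sum(windows)      # offered load this tick
        queue = max(0.0, queue + arrivals - capacity)
        if queue > buffer_size:      # overflow: every flow sees the loss
            queue = buffer_size
            windows = [w / 2 for w in windows]  # synchronized halving
        else:
            windows = [w + 1 for w in windows]  # additive increase
        queue_trace.append(queue)
    return queue_trace, windows

trace, final_windows = simulate()
# All windows remain identical throughout (synchronization), and the
# queue oscillates periodically between filling and draining.
```

Because every flow backs off and ramps up in lockstep, the aggregate load swings widely instead of averaging out, which is what produces the periodic queue behavior described above.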
Second, we treated the case in which UDP streams are added to the TCP synchronization scenario. Under TCP synchronization, UDP packet losses occur more frequently and in consecutive bursts. This is because TCP synchronization repeatedly and periodically keeps the node buffer full for relatively long durations. Even while the buffer is full, the UDP stream keeps transmitting at a constant rate, so its packets are dropped back to back. The UDP stream therefore suffers from the harmful effects of TCP synchronization.
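The same kind of toy fluid model can show why the UDP losses arrive in bursts (again my own sketch with arbitrary illustrative parameters, not the paper's simulation). Here the TCP flows react to a loss only after a feedback delay of one RTT, so once the drop-tail buffer fills it stays full for several ticks, and the constant-rate UDP stream loses packets on every one of those ticks:

```python
# Toy model (illustrative sketch only): synchronized TCP flows plus a
# constant-rate UDP stream at a drop-tail bottleneck. TCP halves its
# windows one RTT after a loss, so the buffer stays full in the
# meantime and UDP packets are dropped in consecutive bursts.

def simulate_with_udp(n_flows=4, udp_rate=5, capacity=40,
                      buffer_size=50, rtt_delay=4, ticks=400):
    windows = [5.0] * n_flows
    queue = 0.0
    backoff_at = None                # tick when the loss signal takes effect
    udp_drop_ticks = []              # ticks in which UDP packets were lost
    for t in range(ticks):
        arrivals = sum(windows) + udp_rate
        queue = max(0.0, queue + arrivals - capacity)
        overflow = queue > buffer_size
        if overflow:
            queue = buffer_size
            udp_drop_ticks.append(t)         # UDP keeps sending into a full buffer
            if backoff_at is None:
                backoff_at = t + rtt_delay   # feedback arrives one RTT later
        if backoff_at is not None and t >= backoff_at:
            windows = [w / 2 for w in windows]   # synchronized TCP back-off
            backoff_at = None
        elif not overflow:
            windows = [w + 1 for w in windows]   # additive increase
    return udp_drop_ticks

drops = simulate_with_udp()
# The drop ticks come in runs of consecutive values: each time the
# synchronized TCP flows fill the buffer, UDP loses several packets
# in a row before TCP backs off.
```

With no feedback delay the buffer would overflow for only an instant per cycle; the delay is what turns each overflow into a sustained full-buffer period, and hence each UDP loss into a loss burst.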