
How to Calculate TCP throughput for long distance WAN links

So you just lit up your new high-speed link between data centers, but you are unpleasantly surprised to see relatively slow file transfers across this high-speed, long-distance link. Bummer! Before you call Cisco TAC and start troubleshooting your network, do a quick calculation of what you should realistically expect in terms of TCP throughput from one host to another over this long-distance link. When using TCP to transfer data, the two most important factors are the TCP window size and the round-trip latency. If you know the TCP window size and the round-trip latency, you can calculate the maximum possible throughput of a data transfer between two hosts, regardless of how much bandwidth you have.

Formula to Calculate TCP throughput


TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

So let's work through a simple example. I have a 1 Gigabit Ethernet link from Chicago to New York with a round-trip latency of 30 milliseconds. If I try to transfer a large file from a server in Chicago to a server in New York using FTP, what is the best throughput I can expect?

First, let's convert the TCP window size from bytes to bits. In this case we are using the standard 64 KB TCP window size of a Windows machine:

64 KB = 65536 bytes
65536 * 8 = 524288 bits

Next, let's take the TCP window in bits and divide it by the round-trip latency of our link in seconds. If our latency is 30 milliseconds, we use 0.030 in our calculation:

524288 bits / 0.030 seconds = 17,476,266 bits per second, or roughly 17.5 Mbps maximum possible throughput
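The arithmetic above can be sketched in a few lines of Python (a quick illustration of the formula; the function name is mine, and zero packet loss is assumed):

```python
# Maximum TCP throughput for a single flow, assuming zero packet loss:
# at most one full window can be delivered per round trip.
def tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    return (window_bytes * 8) / rtt_seconds

bps = tcp_throughput_bps(65536, 0.030)  # 64 KB window, 30 ms RTT
print(f"{bps / 1e6:.1f} Mbps")  # ~17.5 Mbps
```

Plugging in a different window or latency immediately shows how sensitive throughput is to each factor.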

So, although I may have a 1 GE link between these data centers, I should not expect any more than about 17.5 Mbps when transferring a file between two servers, given this TCP window size and latency. What can you do to make it faster? Increase the TCP window size and/or reduce latency. To increase the TCP window size you can make manual adjustments on each individual server to negotiate a larger window size. This leads to the obvious question: what size TCP window should you use? We can reverse the calculation above to determine the optimal TCP window size.

Formula to calculate the optimal TCP window size:


Bandwidth-in-bits-per-second * Round-trip-latency-in-seconds = TCP-window-size-in-bits
TCP-window-size-in-bits / 8 = TCP-window-size-in-bytes

So in our example of a 1 GE link between Chicago and New York with 30 milliseconds round-trip latency, we would work the numbers like this:

1,000,000,000 bps * 0.030 seconds = 30,000,000 bits
30,000,000 bits / 8 = 3,750,000 bytes

Therefore, if we configured our servers for a 3750KB TCP window size, our FTP connection would be able to fill the pipe and achieve 1 Gbps throughput.
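The reverse calculation, the bandwidth-delay product, can be sketched the same way (again a hypothetical helper of mine, matching the worked numbers above):

```python
# Optimal TCP window = bandwidth-delay product, converted to bytes.
def optimal_window_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    return bandwidth_bps * rtt_seconds / 8

window = optimal_window_bytes(1_000_000_000, 0.030)  # 1 Gbps link, 30 ms RTT
print(f"{window:.0f} bytes")  # 3750000 bytes
```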

One downside to increasing the TCP window size on your servers is that it requires more memory for buffering on the server, because all outstanding unacknowledged data must be held in memory in case it needs to be retransmitted. Another potential pitfall is (ironically) performance where there is packet loss: any lost packet within a window requires that the entire window be retransmitted, unless the TCP/IP stack on the server employs a TCP enhancement called selective acknowledgments, which many stacks do not. Another option is to place a WAN accelerator at each end that uses a larger TCP window and other TCP optimizations, such as selective acknowledgments, just between the accelerators on each end of the link; this requires no special tuning or extra memory on the servers. The accelerators may also be able to employ Layer 7 application-specific optimizations to reduce the round trips required by the application.

Reduce latency? How is that possible? Unless you can figure out how to overcome the speed of light, there is nothing you can do to reduce the real latency between sites. One option is, again, placing a WAN accelerator at each end that locally acknowledges the TCP segments to the local server, thereby fooling the servers into seeing very low, LAN-like latency for the TCP data transfers. Because the local server sees very fast local acknowledgments, rather than waiting for the far-end server to acknowledge, there is no need to adjust the TCP window size on the servers.

In this example the ideal WAN accelerator would be the Cisco 7371 WAAS appliance, as it is rated for 1 GE of optimized throughput. WAAS stands for Wide Area Application Services. The two WAAS appliances, one on each end, would use TCP optimizations over the link such as large TCP windows and selective acknowledgments. Additionally, the WAAS appliances would remove redundant data from the TCP stream, resulting in potentially very high levels of compression. Each appliance remembers previously seen data; if the same chunk of data is seen again, that data is removed and replaced with a tiny 2-byte label. The remote WAAS appliance recognizes that tiny label and replaces it with the original data before sending the traffic to the local server. The result of all this optimization would be higher, LAN-like throughput between the servers in Chicago and New York without any special TCP tuning on the servers.

Formula to calculate Maximum Latency for a desired throughput


You might want to achieve 10 Gbps FTP throughput between two servers using standard 64 KB TCP window sizes. What is the maximum latency you can have between these two servers to achieve 10 Gbps?

TCP-window-size-in-bits / Desired-throughput-in-bits-per-second = Maximum RTT latency

524288 bits / 10,000,000,000 bits per second = 52.4 microseconds
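This third rearrangement also fits in a one-line helper (hypothetical name, same zero-loss assumption as above):

```python
# Maximum RTT that still sustains the desired throughput
# with a fixed window size (zero packet loss assumed).
def max_rtt_seconds(window_bytes: int, target_bps: float) -> float:
    return (window_bytes * 8) / target_bps

rtt = max_rtt_seconds(65536, 10_000_000_000)  # 64 KB window, 10 Gbps target
print(f"{rtt * 1e6:.1f} microseconds")  # 52.4 microseconds
```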

Comments
1.
SERGE SAYS:
JANUARY 12, 2009 AT 2:38 AM

Excellent material, Brad, thank you very much. The only thing that confuses me is this: "When using TCP to transfer data the two most important factors are the TCP window size and the round trip latency. If you know the TCP window size and the round trip latency you can calculate the maximum possible throughput of a data transfer between two hosts, regardless of how much bandwidth you have." Why don't we use one-way latency? Or, for the reverse calculation, when we use the BW*RTT formula, what do we receive as a result? The amount of data on the wire (in flight), so why do we need a double-sized buffer?
REPLY

2.

BRAD HEDLUND SAYS:


JANUARY 12, 2009 AT 7:38 PM

Serge, We use round trip latency because we need to account for the time it takes the TCP sender to receive an acknowledgment from the receiver. In our example here, we want the server in Chicago to continue sending data while the acknowledgment from the New York server is traveling back to Chicago. Make sense?
REPLY

3.

SERGE SAYS:
JANUARY 13, 2009 AT 10:05 AM

Well, yes, but let's assume that one-way latency is 15 ms. That means Chicago sent 1 MB of data and after 15 ms NY received it. Immediately it sends an ACK, which is received by Chicago after another 15 ms. That means we really have a throughput of 1 MB per 30 ms, but in fact half of the time the sender waits until the ACK comes. Is that really the way TCP works?
REPLY

4.

BRAD HEDLUND SAYS:


JANUARY 13, 2009 AT 6:22 PM

That is correct. Once the TCP window has been transmitted, the TCP sender will stop transmitting data until an acknowledgment is received. If we used one-way latency in our calculations, the WAN link would be idle 50% of the time while the sender waits for acknowledgments from the receiver.
REPLY

5.

TINGLI PAN SAYS:


JANUARY 14, 2009 AT 5:15 PM

I vaguely remember that in the TCP protocol the sender won't wait for the first acknowledgment before sending the next packet. It will send several packets continuously before getting the first acknowledgment, thus speeding up the transfer rate.
REPLY

6.

BRAD HEDLUND SAYS:


JANUARY 14, 2009 AT 7:08 PM

A fundamental principle of TCP is the congestion avoidance window, which represents the maximum amount of unacknowledged data. This window is precisely what we are using for the calculations in this article. Some variations and enhancements of TCP optimize the behavior of the congestion avoidance window, such as dynamically adjusting its size under varying conditions. However, the fundamental concept of managing a maximum amount of unacknowledged data never changes. Most standard Windows machines use a very standard TCP/IP stack without all the additional enhancements. You can read more about fundamental TCP behavior and its variations here: http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm
REPLY

7.

EMMANUEL COURREGES SAYS:


FEBRUARY 5, 2009 AT 8:34 AM

Tingli Pan: you are referring to acknowledging TCP segments, for which the size is usually around your MTU, so about 1400 bytes. These ACKs may be delayed and sent on every other segment, but that's still acknowledgment at the beginning of a 64 KB burst of segments, which is the window size. Hope that doesn't add to the confusion.
REPLY

8.

NED SAYS:
MARCH 5, 2009 AT 5:34 PM

Hi, I just want to confirm some behavior. If physical machines are connected to switches at 1000 Mb on either side of a WAN link which is only 100 Mb, would the servers throttle down to 100 Mbps, or would they continue sending at 1000 Mb with packets being dropped because the WAN link is only 100 Mb? Will the TCP window size affect the rate at which the machines send and receive packets? Please confirm.
REPLY

BRAD HEDLUND SAYS:


MARCH 5, 2009 AT 8:59 PM

Ned, two possible things could happen here, assuming a standard TCP/IP implementation on each server: 1) the latency of your 100 Mb WAN link is high enough and/or the TCP window is small enough that TCP windowing is never able to reach 100 Mb of throughput; 2) if the latency is very low and the window sizes are large enough, TCP will ramp up to 100 Mb, at which point congestion will occur and packets will begin to drop. When packet loss is detected, TCP will cut throughput in half and slowly ramp back up to 100 Mb again, and the cycle will repeat. This is called the sawtooth effect.
REPLY

9.

NED SAYS:
MARCH 7, 2009 AT 2:27 PM

Hi, thanks for replying. Can you please explain why the devices will only send at 100 Mb even though their connection is set to 1000 Mb? The window size, from what I read, is the number of bytes that can be sent before getting an acknowledgment; it is negotiated at session startup and keeps changing during the session. Correct? Another confusion: is there a calculation to check how many packets need to be transferred to get a throughput of 100 Mb or 1000 Mb, given that each packet carries 1460 (1500-20-20) bytes of data = 11680 bits? 1 Gb = 1,000,000,000 bits. Hence, if each packet is 11680 bits, then to send 1,000,000,000 bits it would take 1,000,000,000 / 11680 = 85616 packets of 1460 bytes each. Given that the packet payload can only be 1460 bytes and the window is usually 65KB = 512000 bits, it would take approximately 43 segments to fill the window. So it's not really possible to send packets at 1 Gb. Is this calculation correct? It looks like something is wrong. Thank you.
REPLY

BRAD HEDLUND SAYS:


MARCH 9, 2009 AT 10:06 AM

Ned, "Can u pls explain why the devices will only send at 100Mb even though their connection is set to a 1000Mb": TCP pays no attention to the physical LAN connection speed of the host (1000 Mb in your scenario). The only three factors that matter for TCP throughput are window size, latency, and packet loss. Of course, the underlying link speed sets the maximum possible throughput under ideal conditions. "Given that the size of packet can only be 1460 and the window is usually 65KB ... it would take approx 43 segments to fill the window size. So its not really possible to send packets at 1Gb": yes, it is possible. In your scenario, if the latency is low enough, the sender may have received the first ACK before all 43 segments had been sent, at which point the window is replenished by the number of bytes acknowledged in the ACK. This is called a sliding window, and it allows the sender to continuously send data at the rate of the link speed until there is packet loss.
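Brad's sliding-window point can be sanity-checked numerically (a hypothetical helper of my own, not from the thread): if serializing one full window onto the wire takes at least one RTT, the first ACK returns before the window empties and the sender never stalls.

```python
# True if the first ACK (arriving after one RTT) returns before the
# sender finishes transmitting the whole window onto the link.
def can_fill_link(window_bytes: int, rtt_seconds: float, link_bps: float) -> bool:
    serialize_time = window_bytes * 8 / link_bps
    return serialize_time >= rtt_seconds

# 64 KB window on 1 Gbps: fine at 0.2 ms LAN latency, stalls at 30 ms WAN latency.
print(can_fill_link(65536, 0.0002, 1_000_000_000))  # True
print(can_fill_link(65536, 0.030, 1_000_000_000))   # False
```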
REPLY

10.

LISA SAYS:
MARCH 20, 2009 AT 1:29 PM

Brad, can you clarify one of the equations for me? In the section on optimizing window size, you have "1,000,000,000 bps * 0.030 seconds = 30,000,000 bits / 8 = 3,750,000 Bytes Therefore if we configured our servers for a 3750KB TCP Window size". So you are taking 3,750,000 bytes and dividing by 1000 to get 3750 KB. My confusion relates to which side of the fence window size sits on: is it a data storage number (since its size affects memory allocation and storage amounts), or is it a network number (since its size affects the throughput of the link)? I assumed it landed on the data storage side of the fence, so I was wondering why you didn't divide by 1024 instead of 1000, where the window size would turn out to be 3662.11 KB. Since I'm not that familiar with networking, I want to make sure I know when to multiply/divide by 1000 vs. 1024.

BTW, thanks for the excellent, clear explanation. It's the type of thing that's really helpful for newbies like myself.
REPLY

11.

BRAD HEDLUND SAYS:


MARCH 22, 2009 AT 9:29 PM

Lisa, you are right. To be 100% accurate I should have divided by 1024, not 1000. Notice that I started the post by calculating how many bits were in a 64 KB window size by multiplying 64 * 1024. When talking about bytes, as we are with TCP window sizes, we should be using 1024 to represent a KB. With serial communications, such as in networking, when we are talking about bits it is normal to use 1000 as the number that represents a Kb (kilobit). A TCP window size is more of a storage number than a bit-rate number. Nice catch, you get extra credit points. Cheers, Brad
REPLY

12.

JEFF SAYS:
APRIL 12, 2009 AT 9:22 AM

Brad, our branch and the main office are also in Chicago and NY. I've done a lot of low-level WAN research using Wireshark. What I found is that without any optimization the acknowledgment sometimes comes back after 3 frames of 1460 bytes, sometimes after 6, 12, 23, or 45. The window size is always the same 64 KB. How can this be explained? I also know about the so-called windowing mechanism that requests an acknowledgment after 6 frames. How does this fit into the picture? Now, in one of our tests we used compression within the application. We achieved a 30x improvement in transfer speed. The throughput was 4 MB/sec, which exceeds your calculated limit. How can this be explained?
REPLY

BRAD HEDLUND SAYS:


MAY 3, 2009 AT 8:58 PM

Hi Jeff, "acknowledgement sometimes coming back after 3 frames of 1460 bytes size, sometimes after 6, 12, 23, and 45. The windows size is always the same 64KB. How this can be explained?": TCP is a very conservative protocol. This is likely the TCP Slow Start mechanism. New TCP flows will start slow and gradually ramp up until packet loss is detected, at which point the Slow Start cycle repeats. "We achieved 30 times improvements in the transfer speed. The throughput was 4 MB/sec, which exceeds your calculation limit. How this can be explained?":

The 4 MB/sec was likely the throughput as observed by the application, because of the compression. However, the actual load on the network should still fit within the TCP throughput calculations.
REPLY

13.

BRUCE H. SAYS:
APRIL 21, 2009 AT 4:02 PM

"TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput" Brad, the calculation sounds plausible. However, we have a large ISP network with thousands of routers, switches, Layer 1 transport devices, customer firewalls, etc. along one path. In our scenario, are we supposed to modify the TCP window size on all devices? What an effort! Should we standardize window size? Hmmm. The only thing we can truly control is latency, in which case you must look at each link to reduce latency as much as possible. Also, along the path, the packet may hit 100 Mbps, 1000 Mbps, 10 Gbps links or higher. The links may also be saturated or oversubscribed, increasing latency. It sounds like reducing latency by standardizing (for example) on all-Gbps links, low link utilization, fast CPUs at each hop, and a clean, error-free network is the best way to reduce latency and hence increase throughput. What is the best practical method to get the highest throughput in a very large ISP-type network?
REPLY

BRAD HEDLUND SAYS:


MAY 3, 2009 AT 8:42 PM

Bruce, Adjusting TCP window settings to improve throughput is only effective on the client and server machines, not the intermediate devices carrying the traffic. Adjusting any such TCP settings on your gear carrying the customer traffic will be a futile effort.
REPLY

14.

MARIO SAYS:
MAY 15, 2009 AT 7:33 AM

Hi Brad, first off, thank you for the well-written article and the replies posted above. It's been a very interesting read. Regarding your last comment on the TCP slow-start mechanism, is there a way to optimize this so TCP sends flows as quickly as possible from the start? I realise this is a mechanism of TCP, but I am interested to know if there is a calculation or setting available to optimise this. Thanks, Mario
REPLY

BRAD HEDLUND SAYS:


MAY 23, 2009 AT 10:45 PM

Mario, WAN optimization appliances such as Cisco WAAS employ several optimizations for TCP traffic over WAN links, one of which is bypassing the TCP Slow Start mechanism and employing large initial windows. The nice thing about using WAN optimization appliances like WAAS is that no TCP modifications are required on the client or server machines. Without WAN optimization appliances, you can try loading an HS-TCP (High Speed TCP) compliant TCP/IP stack on your client and server machines: http://www.faqs.org/rfcs/rfc3649.html Cheers, Brad
REPLY

15.

VACLAV MOLIK SAYS:


MAY 20, 2009 AT 1:44 PM

Hi Brad, your article is really helpful, thanks for it. Let me ask if you also have experience with IPSec? Consider an MPLS VPN with access circuits, mostly E1 or T1, where you encrypt all traffic between HQ and a specific branch using an IPSec tunnel, e.g. NY-Hong Kong or NY-Moscow. What would you advise for the best utilization of available capacity in this case? Thanks a lot, Vaclav.
REPLY

BRAD HEDLUND SAYS:


MAY 23, 2009 AT 11:04 PM

Vaclav, I have several customers with IPSec WAN environments that are using Cisco WAAS with great success. Some are using the Cisco ISR router with the WAAS network module, others are using a WAAS appliance deployed inline between the existing router and LAN switch. Cheers, Brad
REPLY

16.

ALAB SAYS:
JUNE 11, 2009 AT 1:38 PM

Hello, if one is using a low-bandwidth WAN link in which the bandwidth-delay product (call it B1) is significantly less than the TCP receive window, what is the effective throughput formula? Is it (1) TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput, or just (2) B1?
REPLY

BRAD HEDLUND SAYS:


JULY 14, 2009 AT 10:02 PM

In your case, if the bandwidth delay product is significantly less than the TCP window size then throughput is constrained by the speed of the link, not by TCP. Cheers, Brad
REPLY

17.

CHRIS STAND SAYS:


JULY 22, 2009 AT 10:25 PM

Go grab TCPOptimizer if you are running Win2k or XP; then you can turn on SACKs. Vista, Win2k3, and Win2k8 all provide this feature.
REPLY

18.

INVISIBLE SAYS:
JULY 26, 2009 AT 10:11 PM

Hmm, if I remember correctly there are TCP/IP implementations allowing sliding TCP window sizes. In other words, if either side started with a 64K TCP window, it does not mean the window size will remain the same all the time; it will increase and decrease depending on underlying conditions.
REPLY

BRAD HEDLUND SAYS:


JULY 28, 2009 AT 8:06 PM

invisible, you are correct. In fact, this is how the TCP slow start phase works: start out with a small window and gradually increase throughput by increasing the window size. When packet loss is detected, throughput is cut in half and ramps up again by adjusting window sizes. There is usually a set limit on how large the window will go, which is the max window size. You can't keep increasing the window size without limit, as larger window sizes require more memory on the host. Cheers, Brad
REPLY

19.

CP SAYS:
JULY 27, 2009 AT 5:40 PM

Brad, thanks for the write-up. My question is about using actual metrics for RTT, window size, and throughput in the BDP formula. A lot of people give examples of how the BDP formula yields the estimated throughput and the ideal window size, like in your example. The problem is that the examples they provide are vague and do not include real-world results. Here is my example and what I have done:

CLIENT-SERVER (http): Downloading a 6.33 MB file from the SERVER.
======================
WINDOW SIZE from BDP:
======================
First, what is the RTT? When I ping the server I get 32 ms. Second, during the file transfer (using Vista or XP) it tells me my data rate is around 270 KB/s. Converted to bits, that's around 2.1 Mbps, which is correct, because many bandwidth tests give me the same number. Perfect. Therefore, based on that info, my RWIN size should be: BW (270 KB/s) x RTT (0.032 s) = 8640 bytes. Well, when I ran Wireshark during the download session, the RWIN on my client receiving the download was 66,780 bytes. That's not 8640 bytes. Not that it matters, but the SERVER window size was 6816 KB, also not close to 8640 bytes. Let's try this a different way.
======================
THROUGHPUT from BDP:
======================
What's my throughput then? RWIN (66,780 bytes) / RTT (0.032 s) = 2,086,875 bytes/s (or 16.7 Mbps). Umm, my DSL is only up to 2 Mbps, so that is NOT my throughput, nor is it correct.
======================
LATENCY from BDP:
======================
And when I calculate the latency from the BDP formula: BW (270 KB) / RWIN (66,780 bytes) = 4 seconds (4043 ms) for the RTT. Not true either. I understand that the BDP is used to help engineers know what window size will fill a circuit (LAN or WAN) efficiently. Using the actual data shown above, I'm not getting results that prove the BDP works in the real world. Thus, there is something I am not understanding at all about relating it to actual/estimated throughput. Or I'm obtaining the wrong type of data for the window size (through Wireshark, from the SYN packet and the data sessions where the RWIN size decreases as data is received), the RTT (using ping), or the throughput (from the file transfer window in Vista or XP). Can you and others please help me fill in the gap on what is going on here?
FYI, it would also be super nice if there were a simple GUI app (installable on a client and server, of course) that would show the RTT, throughput, and window size for the connection between them. Heck, maybe also packet loss, etc. The numbers would be variable, but it would be a useful tool for troubleshooting and understanding performance conditions. I don't think there is anything simplified like that out there. Thank you very much! cp
REPLY

BRAD HEDLUND SAYS:


JULY 28, 2009 AT 7:46 PM

CP, your math is correct, and it proves my statement from a few comments above: "if the bandwidth delay product is significantly less than the TCP window size then throughput is constrained by the speed of the link, not by TCP." The speed of your link is 2.1 Mbps. You could have the largest window size in the world and the lowest possible latency, and running the BDP formulas as you have done will give you a very high max theoretical throughput, but if your link speed is only 2.1 Mbps, throughput cannot go higher than 2.1 Mbps, obviously. So what your math has told you is that your TCP window size needs no adjustment to improve your throughput. The only adjustments that will make your file transfer go faster than 2.1 Mbps are getting a faster link, or compressing the TCP data at each end to provide the illusion of a faster link to the servers and clients (Cisco WAAS). Cheers, Brad
REPLY

20.

DUKE SAYS:
AUGUST 18, 2009 AT 8:43 AM

Dear Brad, thanks for your article. It really helps me a lot in improving network performance. But there is still something I can't figure out in an experiment; I hope you can help me out. Thanks in advance. Experiment environment: A downloads from B. The bandwidth from A to B is 50 Mbps and the latency is 22 ms. A and B are both Linux systems, kernel version 2.6.18. BDP = 50000 * 22 / 8 = 137.5 KBytes. I tuned the maximum value of tcp_rmem on the client and tcp_wmem on the server to the BDP (137500) and got about 1.2 MBps downloading speed. From the formula throughput = window_size / RTT, I should theoretically get about 6.25 MBps (137.5 K / 22 ms = 6.25 MBps). I don't know why I only get 1.2 MBps. But when I set the max value to 1375000 and 26214000 on client and server, the speed comes to 2.3 MBps and 4.42 MBps respectively. So I guess there is something I am not taking into consideration when calculating throughput. Thanks and regards, Duke
REPLY

BRAD HEDLUND SAYS:


AUGUST 19, 2009 AT 7:43 PM

Duke, there are other factors; for example, packet loss has a detrimental effect on TCP throughput. The formulas discussed in this article calculate max *theoretical* throughput and assume a best-case scenario of zero packet loss. Most well-engineered networks have very low packet loss, in the 0.01% range, but some links may have higher packet loss for a variety of reasons (too much congestion, lack of QoS, rate-limiting, etc.). Your WAN bandwidth of 50 Mbps is interesting; that's an odd number that doesn't match a typical physical circuit speed. Why 50 Mbps? Is it perhaps a physical link much faster than 50 Mbps (such as 100 Mbps or 1 GE) with something rate-limiting the throughput to 50 Mbps? Cheers, Brad
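Brad's point that packet loss caps TCP throughput can be put in rough numbers with the well-known Mathis approximation; this model is my addition, not part of the original article or thread, and it is only an estimate for loss-limited flows:

```python
import math

# Mathis et al. loss-limited TCP throughput estimate:
# rate ~ (MSS / RTT) * (C / sqrt(p)), where p is the loss rate and C = sqrt(3/2).
def mathis_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    c = math.sqrt(3 / 2)
    return (mss_bytes * 8 / rtt_seconds) * (c / math.sqrt(loss_rate))

# 1460-byte MSS, 30 ms RTT, 0.01% packet loss:
rate = mathis_throughput_bps(1460, 0.030, 0.0001)
print(f"{rate / 1e6:.1f} Mbps")  # ~47.7 Mbps
```

Even at 0.01% loss, the model caps a 30 ms flow well below gigabit speeds, which illustrates why loss matters as much as window size and latency.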
REPLY

21.

DUKE SAYS:

AUGUST 21, 2009 AT 1:37 AM

Thanks Brad. Actually, 50 Mbps is the SDH physical link bandwidth. There are no QoS or rate-limiting policies applied on either side of the routers.
REPLY

22.

DUKE SAYS:
AUGUST 22, 2009 AT 5:51 AM

Hi Brad, if we set the window size to the BDP, we get the max theoretical throughput. The max theoretical throughput should be the bandwidth / 8 (in bytes per second). Am I correct? If we put the same client and server on another link with the same bandwidth but higher latency, we should get the same max theoretical throughput (bandwidth / 8) because the bandwidth is the same, right? Is the reason I get much smaller throughput at higher latency that higher latency comes with more packet loss, or something else? Thanks, Duke
REPLY

23.

DEEPAK VYAS SAYS:


SEPTEMBER 12, 2009 AT 12:15 AM

Can we change the TCP window size on a router to get better throughput?
REPLY

BRAD HEDLUND SAYS:


SEPTEMBER 15, 2009 AT 2:18 PM

Deepak, the TCP session state is between the client and server machines; the routers just pass the TCP packets between client and server untouched. Therefore, changing any TCP-related settings on your routers will have no effect.
REPLY

24.

CT SAYS:
SEPTEMBER 15, 2009 AT 3:36 PM

I found this blog while researching how long a 1.7 GB file transfer should take from HQ (45M) to a remote site (2xT1) over MPLS GRE. My file transfer test indicates that I am getting about 2 Mbps. How much should I allow for the overhead of GRE and T1 bundling? I already have the tcp-mss size on the GRE tunnel interface set to 1436, per one of the Cisco articles I read: 1500 - 40 (TCP/IP header) - 24 (GRE header). When I remove this command from the tunnel interface, the performance is even worse. So, are you saying that changing the MSS helps, but changing the TCP window size on a router will not? Any suggestions on how I can optimize my setup further so that I use the full 3M? Or what data do I give to our ISP to prove that the traffic is available to be sent but the pipe is not carrying it? Thanks in advance.
REPLY

25.

PETE SAYS:
SEPTEMBER 17, 2009 AT 1:44 AM

Hi, I'm wondering about measuring the RTT of a link with ping. What should the correct packet size be in the ping when measuring the RTT? Regards, Pete
REPLY

BRAD HEDLUND SAYS:


SEPTEMBER 29, 2009 AT 3:16 PM

Pete, a very good question. To get the most precise, worst-case measurement for your calculations, you would want to know how much latency the TCP stream will see when the pipe is full. The most accurate way to do this is to saturate the link with a UDP stream of MTU-sized packets and measure how long it takes for the packets to get from point A to point B. Then do the same measurement from point B to point A, and add the two measurements together for the worst-case round-trip latency (full pipe, large packets). Cheers, Brad
REPLY

26.

PRABHU SAYS:
OCTOBER 13, 2009 AT 4:03 AM

Hi Brad, I am new to Wireshark and Linux TCP tuning, and I used the BIC TCP algorithm in an experiment. I downloaded a 100 MB file and captured the packets using the Wireshark analyzer. From this I need to calculate: 1. link delay, 2. bandwidth, 3. slow start phase, 4. window size. Then I need to tune the TCP parameters like rmem and wmem, download the file again, and measure these things. I am pretty confused about this concept. Can you please explain what happens when we tune the TCP parameters, and what happens to the window size?
REPLY

27.

MICHAEL SAYS:
OCTOBER 19, 2009 AT 9:44 AM

Hi Brad, thank you for your post. We have two 1 Gb Ethernet links to a site, going through two different circuits. To check the links, currently we just run an FTP process with a very large file. Doing this, however, we found that we can only get a maximum of 30% of the fast link's throughput on the slower link. We use the same notebook to do this test. Any suggestion on how to do further troubleshooting? Thank you in advance for your inputs. Michael
REPLY

28.

KK SAYS:
OCTOBER 21, 2009 AT 1:04 AM

Hi Brad, thank you for a well-written article and the subsequent follow-up discussion. Very informative. I have the following situation and am wondering if you could comment. I have a 1.5 Mb long-distance link between HQ and a branch. The average RTT on the link is around 760 ms. When I tried a 100 MB file transfer (while the link was idle), I got ~330 Kbps throughput, which means I'm probably using 20% of the maximum (theoretically) possible throughput, right? Now the question: if I had 3 more such sessions in parallel (each transferring a 100 MB file), would I still get ~330 Kbps throughput per session (as opposed to overall link throughput)? I believe it would be 330/4 = 82.5 Kbps; I'd think the overall link throughput stays the same at 330 Kbps irrespective of the number of sessions or the amount of data being transferred. Is this correct? In case I want to increase the throughput on the link (say, to cut the data transfer time in half), which option would help me do that: double the bandwidth of the existing link from 1.5 Mbps to 3 Mbps, or take an additional 1.5 Mbps link and load-balance the traffic? Though both options look the same (the cumulative bandwidth will be 3 Mbps), will TCP behavior differ between these two situations? Please clarify. Thanks much in advance. Regards, KK
REPLY

29.

BRETT SAYS:
NOVEMBER 16, 2009 AT 12:55 PM

Brad, how does the ACK delay timer figure into your equation if the latency is greater than 70 ms between the two communication partners? Brett
REPLY

BRAD HEDLUND SAYS:


NOVEMBER 18, 2009 AT 9:56 PM

Brett, good question. If the Nagle algorithm is in use (RFC 896) along with the Delayed ACK algorithm (RFC 2581), then our calculations here are basically useless, as the communication partners will only allow one packet outstanding on the wire. If Nagle is disabled with the TCP_NODELAY socket option, then the Delayed ACK timer does not need to be factored into our equation, because as soon as the receiver has 2 outstanding ACKs it will begin sending ACKs that acknowledge multiple packets. With Nagle disabled on the sender (thus sending multiple packets without waiting for an ACK), the receiver will have 2 outstanding ACKs almost immediately after receiving the first few packets. Therefore, in my opinion, as long as the Nagle algorithm is disabled for the session, Delayed ACK should have minimal impact on the throughput calculations discussed here. In fact, the Delayed ACK timer is a *good thing*, as it reduces the traffic consumed just for sending ACKs. Cheers, Brad

30.

SAJID MAHMOOD SAYS:


NOVEMBER 17, 2009 AT 4:59 PM

Hi, I want to calculate link load using SNMP. Can you help me with the formula to calculate link load?

31.

JAMES SAYS:
NOVEMBER 25, 2009 AT 11:04 PM

Hi, Need your help. Here is the scenario: a centralized antivirus server is scheduled for a version upgrade. There are about 1000 client connections to that antivirus server over various WAN links; the fastest is about 2Mbps, and the slowest connections come from a 64Kbps VSAT link. The full upgrade totals about 70 MB, which means the server will have to distribute 70 MB of updated files to all clients. Pinging the furthest clients gives intermittent replies between 100ms-600ms, meaning the worst case is 600ms. My fear is that once I have performed the upgrade on the server, it will cause a massive jam on the WAN links, as the files are too large to distribute even to one client. Question: 1) What would be the ideal throughput to transmit 70 MB of files over the WAN links mentioned above, and how is this calculated? Appreciate your feedback. Best Regards, James

BRAD HEDLUND SAYS:


NOVEMBER 27, 2009 AT 8:11 PM

James, Per your Question #1 and the scenario, I think it would be best to steer your thought process in a more productive direction. Here's why: I'm afraid trying to figure out the ideal bandwidth for transferring 70MB files is not going to get you any closer to solving your problem. Let's imagine you worked the numbers and figured out that a 100Mb link to every site would be great. A) that won't be cheap, and B) you still have a latency problem, so much latency that a high BW link to your sites may still deliver poor results, which is the whole point of this post. Allocating more bandwidth is just a matter of money (oftentimes lots of it). On the other hand, no amount of money can reduce latency, as we are dealing with the laws of physics here, namely the speed of light.

With that in mind, it would be best to address the problem at hand with a solution designed to work with your current BW and latency. First and foremost would be implementing a WAN optimization and content caching solution, such as Cisco WAAS. The solution to your problem is handled mostly by virus update files needing to be downloaded only once to each site. This single download would pre-position the update files on the local Cisco WAAS appliance at that site. The many client machines would then download their update files from their local Cisco WAAS appliance at LAN-like speeds. The solution is very transparent in that client machines would believe they are downloading the files from the central antivirus server as they always have in the past, but in reality they are getting the update files locally from the Cisco WAAS appliance, so reconfiguration of the client machines may not be necessary. The Cisco WAAS appliances can also provide impressive optimizations for all of the other traffic traversing your slow WAN links, improving application performance and responsiveness for any other TCP based applications, not just the antivirus updates. Cheers, Brad

32.

SHIVLU JAIN SAYS:


NOVEMBER 30, 2009 AT 5:37 AM

Brad, the article is awesome. Let's assume we have a latency of 100ms and I want to transfer 1 Gb of data over a 10Mb pipe; how long will the transfer take?

33.

PETER SAYS:
DECEMBER 3, 2009 AT 6:27 AM

Then for 802.11n, what is the maximum TCP window size that can be used, for up to 300Mbps?

34.

MICHAEL SAYS:
DECEMBER 10, 2009 AT 6:38 AM

Please, I need clarification on this: assuming a sliding window size (SWS) of 10 packets is used on a 50kbps communication link, it is observed that when 100-byte packets are transmitted, the throughput is close to the maximum (50kbps), but when 80-byte packets are used, the throughput drops considerably. Why? And what should be done to transmit smaller packets efficiently? Thanks

35.

MUNIR K SAYS:
DECEMBER 11, 2009 AT 10:21 AM

Dear Brad, Thanks for this wonderful information, which has in fact attracted a lot of attention. We are actually facing the same issue on our DC-DR replication link. The scenario is as follows:

We have a 10 Mbps replication link between DC and DR which is used for host-to-host replication (Host A to Host B). The traffic flowing on the link is pure TCP. Of late we have seen the link choking at peak hours, hampering business activities, and the application team has been asking for an immediate upgrade from 10 Mbps to 45 Mbps in view of the data growth trend. It may be worth mentioning that the RTT on the link is 40ms. Since, as per your document, we may not be able to reach this throughput, and the application team does not want to change the TCP window, which is the default 64KB, kindly suggest what options we are left with for getting the desired throughput. In case we are left with only the option of using WAAS or another WAN Optimization Controller, I would like to ask if 3745 routers support WAAS controller cards? Looking forward to your expert comments. Munir K.

BRAD HEDLUND SAYS:


DECEMBER 13, 2009 AT 7:21 PM

Munir, How many hosts in the DC are using the 10Mbps link for replication? In other words, how many TCP flows are using the link?

36.

MUNIR K SAYS:
DECEMBER 14, 2009 AT 7:02 AM

Hi Brad, It is host-to-host replication, and in the DC there is only one host, which replicates to a host in DR. The catch is that every minute around 3 files of 100MB each get created on the DC host, which need immediate replication to the DR host.

BRAD HEDLUND SAYS:


DECEMBER 18, 2009 AT 11:15 PM

Munir, In your case, I would recommend that you look at deploying a Cisco WAAS appliance at each end of the link. The Cisco 3745 router does not support WAAS modules, but I think the performance profile of an appliance, such as the Cisco WAVE-674, would be better for you anyway. You can simply place the WAAS appliance inline between your 3745 and LAN switch. http://www.cisco.com/go/waas

37.

ANDY SAYS:
DECEMBER 16, 2009 AT 11:07 AM

Hi Brad,

Thanks a lot for your excellent post here. I am facing a very critical issue and need your help (all are welcome to help me). I am in China and my company has a WAN link to the US (4 bundled T1s), and then we have an IPSEC tunnel to our customer's network through the Internet. We use MRDP to access the customer's VMs and some applications. One application is based on Flash and it's extremely slow as we are working over MRDP. The utilization on our WAN link is not above 30%. Latency is 300ms between the client and server. Is changing the window size going to help us? Most of the desktops in my company are Vista, and I read we can tune TCP by setting netsh int tcp set global autotuninglevel=experimental so that the window size can be up to 16MB. After setting this, we don't see any performance difference. I am really stuck on this issue; kindly suggest a suitable solution. I want to know how we can change the window size - is it by creating a new registry value? These are the parameters on my Windows XP machine: MSS: 1440, MTU: 1480, TCP Window: 17280 (multiple of MSS), RWIN Scaling: 2 bits (2^2=4), Unscaled RWIN: 4320, Recommended RWINs: 63360, 126720, 253440, 506880, 1013760, BDP limit (200ms): 691kbps (86KBytes/s), BDP limit (500ms): 276kbps (35KBytes/s). Please help. Andy

38.

MUNIR K SAYS:
DECEMBER 21, 2009 AT 8:11 AM

Hi Brad, Thanks a lot for this. I have already lined up a POC with Cisco. I will surely share the results with you on this forum. Regards, Munir K.

39.

ANDY SAYS:
DECEMBER 21, 2009 AT 11:17 AM

Hi Brad, This is a continuation of my previous post. Throughput is RWIN / latency in seconds. Say I have a 2Mbps link and, per the throughput calculation, I get a throughput of 1Mbps. Can I get a transfer rate of 1Mbps for all file transfers - FTP traffic, HTTP traffic, MS-DS traffic, MRDP traffic, etc.? How is this decided? Please help, I am desperately looking for an answer.

40.

DHEERAJ SAYS:
DECEMBER 30, 2009 AT 5:16 AM

Dear Brad, I am doing research on multipath routing. How can we prove that the throughput of multipath routing is better than that of single path routing? Thanks and regards

41.

TQ SAYS:
JANUARY 11, 2010 AT 9:44 AM

Brad Hedlund, I just wanted to thank you for your effort in publishing this article. Regards

42.

JORGE LUIS OBREGON SAYS:


JANUARY 11, 2010 AT 2:24 PM

Hi Brad: Could you help me with a model that calculates throughput including loss? I need a formula to estimate throughput with loss on the same topology. Please help me if you can.

BRAD HEDLUND SAYS:


JANUARY 12, 2010 AT 8:11 PM

Jorge, I found this formula in my files:

Hope this helps. Cheers, Brad
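The formula image Brad posted did not survive in this copy of the page. One widely cited closed-form approximation for TCP throughput under packet loss is the Mathis et al. formula; the sketch below assumes that formula, and it is not necessarily the exact one Brad posted:

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_seconds, loss_rate):
    """Approximate loss-limited TCP throughput in bits per second.

    Mathis et al.: rate <= (MSS / RTT) * (C / sqrt(p)), with the
    constant C taken as 1 here. A rough upper bound, not an exact model.
    """
    return (mss_bytes * 8 / rtt_seconds) / sqrt(loss_rate)

# Example: 1460-byte MSS, 30 ms RTT, 0.1% packet loss -> roughly 12 Mbps
print(round(mathis_throughput(1460, 0.030, 0.001) / 1e6, 1), "Mbps")
```

Note how even a small loss rate caps throughput well below the window-limited figures discussed in the article.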



43.

SLIMER SAYS:
JANUARY 26, 2010 AT 3:07 PM

Brad, Thanks for the very interesting explanation here! I have a question and hope you can help. We want to perform a data consolidation for some of our applications; we already know the number of servers and we will be able to get the volume of traffic. However, we are not sure how much WAN bandwidth is needed at the data center where we will do the consolidation. Let's say I have the following parameters: - 120GB of aggregated traffic over the LAN (this is the server/client traffic) - WAN latency is 75ms (RTD 150ms). Objective: determine the WAN bandwidth to be used. Thanks, Slimer

44.

VINCE SAYS:
JUNE 21, 2010 AT 3:15 AM

How about UDP throughput - do you use the same formula?



BRAD HEDLUND SAYS:


JUNE 21, 2010 AT 11:19 AM

No, since UDP does not require round-trip acknowledgements, these formulas do not apply to UDP.

45.

ASHWIN SAYS:
JUNE 30, 2010 AT 10:56 PM

Hello Brad, 1. As I understand it, Windows systems on the LAN use a TCP window size of 17K+ bytes. In your example, each server has a default setting of 17K+ bytes. Now, while replicating data between the two servers in remote sites, will the TCP window size be changed by the routers to 65K? Or will it remain 17K+ through the entire path? Assume we are not using any bandwidth optimizers. 2. Assume there are two servers at the source site replicating to two servers at the remote site via a common router pair. Each replication context will then constitute a separate stream, and hence we will get double the bandwidth over the WAN link [assuming the WAN link is a fat pipe]. Is my understanding correct? Ashwin

BRAD HEDLUND SAYS:


JULY 3, 2010 AT 7:18 AM

Ashwin, 1) The routers operate at Layer 3 of the OSI model and therefore pass IP packets while paying no attention to the upper layer TCP information. A standard router never changes window sizes, etc. TCP windowing is managed by the end hosts participating in the TCP exchange. 2) Correct. The TCP throughput calculations discussed in this article are for the purposes of calculating the throughput potential of an individual TCP session. If you have multiple TCP sessions, each session adds its own bandwidth load to the link. If the link does not have enough bandwidth to carry the potential bandwidth of all the TCP sessions, congestion will occur, packets will get dropped, TCP will detect that and dynamically cut the window size in half, then slowly increase the window size until packet loss happens again, and the cycle repeats. The result of this behavior is all TCP flows evenly and fairly balancing throughput on the link.
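Brad's second point can be sketched numerically (an illustrative addition; this assumes idealized per-flow fairness, and real TCP dynamics are messier):

```python
def per_flow_throughput(window_bytes, rtt_s, link_bps, num_flows):
    """Rough per-flow ceiling: the window-limited rate, bounded by an
    equal share of the link when multiple flows contend (idealized)."""
    window_limited = window_bytes * 8 / rtt_s
    fair_share = link_bps / num_flows
    return min(window_limited, fair_share)

# 64 KB window, 30 ms RTT, 1 Gbps link shared by 4 replication flows:
# each flow is still window-limited near 17.5 Mbps, well under its
# 250 Mbps fair share, so adding flows raises total link utilization.
print(round(per_flow_throughput(65536, 0.030, 1e9, 4)))
```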

46.

MINA SAYS:
JULY 6, 2010 AT 5:05 AM

Dear Brad, thanks a lot for your great effort. My question relates to a CP question, which was: ===================== THROUGHPUT from BDP: ====================== What's my throughput then? RWIN (66,780 bytes) / RTT (.032) = 2,086,875 bytes/sec (or 16.7Mbps). Umm, my DSL is only up to 2Mbps, so that is NOT my throughput. So my questions are: 1- Does the server really send at 16M, with me dropping the rest because my physical speed is 2M (meaning 14M is dropped)? 2- And if my speed were 24M, does that mean my max speed will be 16M? I hope I have worded my questions correctly. Thanks again for your effort, Mina

MINA SAYS:
JULY 10, 2010 AT 6:00 PM

any update please??



MINA SAYS:
JULY 14, 2010 AT 4:10 AM

Dear Brad .. any Update?



BRAD HEDLUND SAYS:


JULY 15, 2010 AT 7:49 PM

Mina, The throughput calculations will tell you the maximum possible throughput. If your DSL line is much slower than what your calculation says, well, of course your actual throughput will be limited to your link speed. If your link speed is much faster than the result of your calculations, you will not transmit any faster than what your calculation states, because that is the maximum possible throughput. This is the entire point this article tries to make. Cheers, Brad
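Brad's answer reduces to taking the lower of the two limits; a quick sketch with Mina's numbers (an illustrative addition to the thread):

```python
def max_tcp_throughput(window_bytes, rtt_s, link_bps):
    """Effective ceiling: the lower of the window-limited rate
    (window / RTT) and the physical link speed."""
    return min(window_bytes * 8 / rtt_s, link_bps)

# 66,780-byte RWIN, 32 ms RTT, 2 Mbps DSL: link-limited to 2.0 Mbps.
print(max_tcp_throughput(66780, 0.032, 2e6) / 1e6, "Mbps")

# Same window on a 24 Mbps line: window-limited to about 16.7 Mbps.
print(round(max_tcp_throughput(66780, 0.032, 24e6) / 1e6, 1), "Mbps")
```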

47.

BRADY SAYS:
JULY 7, 2010 AT 9:47 PM

My only comment would be: be careful when looking at TCP window sizes and assuming one size fits all. Stating that all applications will operate with the same throughput is not necessarily accurate. Technologies in the middle can and do have an impact on the overall TCP window size. For instance, if you were to look at a legacy implementation of site-to-site VPN across a 6500 using a VPN IPSEC module, you would most certainly find TCP window/MTU manipulation on the router that will impact Windows servers and clients. This will also, without a doubt, be application dependent; don't assume that all applications operate under the same conditions with window sizes, etc. Your browser may operate at a certain throughput and MS Outlook at another throughput level. If you are using MPLS to connect your sites, don't always assume that all the carrier gear is set up the same way to handle a particular MTU size from end to end. For instance, Carrier X may have legacy devices that do in fact impact the MTU size, and therefore the TCP stack is impacted indirectly. Granted, I am not stating that IP MTU and TCP window size are the same animal, but what you will find is that there is a very close relationship, and the MTU will in fact impact TCP applications. Most of the Cisco VPN documentation will refer to this as MSS size, but a closer look will reveal it's window sizing. I would just caution that you need to look at the underlying technologies that connect the locations and see if they could potentially impact TCP window size, throughput, etc. I would highly suggest you get a software package that tests throughput end to end on server/client before suggesting results, as over time you will see TCP window sizes and throughput degrade on networks. That's my two cents. Regards,

48.

MINA SAYS:
SEPTEMBER 1, 2010 AT 12:12 PM

Hello Brad, I'd like to thank you a lot for this nice forum. My questions are: 1- When TCP detects packet loss, will it begin the slow start phase from the very beginning (1, 2, 4, 8, ...)? Or will it begin from the last window size before the drop? 2- Is there any way to overcome packet loss due to congestion?

I know my questions may be silly, but this really confuses me. Thanks, Mina



MINA SAYS:
SEPTEMBER 2, 2010 AT 9:30 AM

dear Brad , any Update please?



SEAN SAYS:
NOVEMBER 1, 2010 AT 2:47 PM

Mina, It will depend on the operating system. If the stack is based on Tahoe, you start right back at 1 and do slow start over again. If the stack is Reno, you cut the window by 50% and slow start from there. If you drop again, you go to 50% of the new window, and so on. There are a whole slew of other algorithms for TCP; the above are the most common, though. Sean
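Sean's Tahoe/Reno distinction, reduced to a toy function (a simplified illustration added here; real stacks track ssthresh and recovery state in more detail):

```python
def on_loss(cwnd, variant):
    """Sketch of the loss reactions described above.
    Returns (new_cwnd, new_ssthresh) in segments."""
    ssthresh = max(cwnd // 2, 2)
    if variant == "tahoe":
        return 1, ssthresh          # restart slow start from one segment
    if variant == "reno":
        return ssthresh, ssthresh   # resume from half the old window
    raise ValueError(variant)

print(on_loss(32, "tahoe"))  # cwnd collapses to 1 segment
print(on_loss(32, "reno"))   # cwnd halves to 16 segments
```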

49.

MINA SAYS:
SEPTEMBER 5, 2010 AT 7:21 AM

Dear Brad , any update ?



50.

JIM SAYS:
NOVEMBER 5, 2010 AT 10:48 PM

Brad: I worked through the following calculation and was wondering if it makes sense. It is based on summing the durations required to place bits on the wire, includes latency, then calculates an effective throughput based on total time and data moved. The problem is that it basically negates the calculation for optimal TCP window size. I would appreciate it if you could find a moment to review and respond. Thanks, Jim

Scenario 1 (default window):
- TCP Window: 65,536 Bytes
- 1-way latency: 0.015000 seconds
- Theoretical maximum (WindowSize/RTT): 17,476,267 bits per second
- All interface speeds: 100,000,000 bits per second
- Time to put 1 window (8 x 65,536 bits) on the wire: 0.005243 seconds
- Plus 1-way latency for the last bit in the window to reach the client: 0.015000 seconds
- Time to put the ACK (100 bytes) on the wire: 0.000008 seconds
- Plus 1-way latency for the last ACK bit to reach the server: 0.015000 seconds
- Total time for 1 window plus 1 ACK: 0.035251 seconds
- Better max speed calculation: 14,873,047 bits per second

Scenario 2 (optimal window):
- Best window size (link speed x RTT): 3,000,000 bits = 375,000 Bytes
- 1-way latency: 0.015000 seconds
- Theoretical maximum (WindowSize/RTT): 100,000,000 bits per second
- All interface speeds: 100,000,000 bits per second
- Time to put 1 window (8 x 375,000 bits) on the wire: 0.030000 seconds
- Plus 1-way latency for the last bit in the window to reach the client: 0.015000 seconds
- Time to put the ACK (100 bytes) on the wire: 0.000008 seconds
- Plus 1-way latency for the last ACK bit to reach the server: 0.015000 seconds
- Total time for 1 window plus 1 ACK: 0.060008 seconds
- Better max speed calculation: 49,993,334 bits per second
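Jim's accounting can be reproduced in a short script (my reading of his method, added for illustration; it agrees with his 14,873,047 and 49,993,334 bits-per-second figures to within rounding):

```python
def effective_throughput(window_bytes, link_bps, one_way_s, ack_bytes=100):
    """Jim's method: serialize one window onto the wire, add one-way
    latency for the last bit, serialize the ACK, add one-way latency
    for the ACK to return, then divide bits moved by total time."""
    serialize_window = window_bytes * 8 / link_bps
    serialize_ack = ack_bytes * 8 / link_bps
    total = serialize_window + one_way_s + serialize_ack + one_way_s
    return window_bytes * 8 / total

# 64 KB window, 100 Mbps interfaces, 15 ms one-way latency
print(round(effective_throughput(65536, 100e6, 0.015)))

# Jim's "best" 375,000-byte window on the same path
print(round(effective_throughput(375000, 100e6, 0.015)))
```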

51.

ABID SAYS:
DECEMBER 26, 2010 AT 10:12 AM

Hi Brad, Thanks for the excellent material you have posted. We have a 45Mbps WAN link between point A and point B. We use ODG (Oracle Data Guard) to transfer archive files between points A & B. We have put WAAS devices at both points but are still only getting 26Mbps of utilization. My network vendor/specialist asked me to increase the number of sessions ODG makes; we increased it from 4 to 9. However, the problem persists. What I understand is: 1: The throughput is dependent on latency and the TCP window (the WAAS vendor says he has tuned the device for the max TCP window). 2: With WAAS devices in place, even one session should have shown utilization of 45Mbps. Please let us know if we are missing anything. Thanks, Abid

52.

SOULHACKER SAYS:
APRIL 13, 2011 AT 12:53 AM

I just wonder where the latency figure comes from if the TCP MSS is considered, because latency rises as the TCP MSS grows.

53.

ROLF WIKLUND SAYS:


APRIL 14, 2011 AT 5:17 AM

Hi Brad. It looks like the window size is more important than the MTU? Do you have any calculations on the impact of MTU size?

I am thinking mostly of how to solve throughput issues in a DCI (40km). Thanks, Rolf

54.

KNUCKLES SAYS:
JULY 4, 2011 AT 4:56 PM

Hi, I'm expecting about 1.5 Mbps on a link that I have. UDP works fine; however, TCP has yielded results close to 0.022 Mbps (essentially nothing!). Would the above tweaks be done on both ends of the network (both PCs)? And also, should a TcpWindowSize value be added to the interface registry key where the network interface details exist? (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, and Tcpip\Parameters\Interfaces) Thanks!

55.

JOHN TKACZEWSKI SAYS:


AUGUST 3, 2011 AT 8:21 PM

It is worth mentioning that besides WAAS, there are a number of other commercial software vendors that provide accelerated file transfer software. At FileCatalyst, we use a UDP-based protocol to send data at the maximum available link speed. Unlike other UDP-based file transfer protocols, we use an efficient algorithm to keep track of lost packets, and we re-transmit only the missing data. With far fewer acknowledgments than any TCP-based protocol, the file transfer speed is not affected by latency, and the speed loss is linear with the packet loss (which is impossible with large window sizes). We also use our own built-in congestion control that is immune to latency and takes into account the average latency of the link before slowing down. We have an online calculator on our web site that compares TCP with our UDP protocol. http://www.filecatalyst.com/web_demos/comparison_tool.html I recognize that this is a plug for a commercial product; however, this article explains exactly the problem that we have been trying to fix for the last 5 years.

56.

UDAY SAYS:
SEPTEMBER 21, 2011 AT 12:38 PM

How do I find the throughput, end-to-end delay, and delivery ratio for protocols using an MCBR application, as it is a single host application? What properties have to be set for that in the nodes, subnet, and MCBR, and in the file statistics of the scenario, in the QualNet 5.0 environment?

57.

GEORGIE SAYS:
NOVEMBER 30, 2011 AT 7:45 AM

unless your TCP/IP stack on the server employs a TCP enhancement called selective acknowledgements, which most do not

In my experience any recent Linux 2.4/2.6 kernel and any modern Window$ system have SACK enabled by default in the kernel.

58.

GURU SAYS:
FEBRUARY 3, 2012 AT 7:39 AM

Nice post.. referred it in my post http://www.consultguru.me/post/2012/02/02/SQL-Server-Replication-Project--2-(Network).aspx Guru



59.

NIMA0102 SAYS:
MARCH 5, 2012 AT 5:36 AM

Thanks a lot for your article. Is there any formula for non-TCP packets, such as GRE or UDP? Thanks in advance

60.

DARREN R. STARR SAYS:


MAY 23, 2012 AT 3:54 PM

The TCP header contains a 16-bit field to identify the window size. So far as I know, TCP does not support any standard extensions that allow for an extended TCP window size, though there may be some option field I'm unaware of. Can you please point me to an RFC which covers this enhancement? On the other hand, TCP is terrible for file transfer, and a UDP protocol is generally better suited, as TCP does not lend itself well to punching holes in PAT firewalls. Typically a better approach is a UDP protocol based on RTP/RTCP with SIP negotiation assisted by STUN, ICE or TURN. Then bandwidth can be throttled by standard routers inspecting the rate. Using RTCP video conferencing extensions, dropped packets can be requested by performing an RTCP request for retransmission of drops by referencing a slice number. In addition, the jitter buffer architecture will provide the majority of the same services we expect from TCP. In short, payloading file transfer data as video data is probably more efficient than TCP; the overhead would be an 8-byte UDP header + 12-byte RTP header, as opposed to the larger overhead involved with TCP + a word-aligned TCP option. In addition, tracking packets and performing retransmits would provide a stable and efficient means of continuing aborted sessions. Even better, since the checksum is optional in IPv4 UDP, it would be more efficient processor-wise to perform authentication on larger chunks and then binary search smaller blocks when failures occur. One could even use generic FEC to compensate for inevitable packet loss. No new protocol needed, just recycle an oldie but goodie. Just my two cents... That link, please?

SIMON LEINEN SAYS:


MAY 26, 2012 AT 12:13 PM

> Can you please point me to an RFC which covers this enhancement? RFC 1323, from 1992. All of these extensions, including the Window Scaling option, are widely implemented by now, so multi-megabyte TCP windows are very much feasible these days.

BRAD HEDLUND SAYS:


MAY 26, 2012 AT 3:42 PM

Thanks Simon

SIMON LEINEN SAYS:


MAY 29, 2012 AT 3:39 AM

Oops, I forgot to plug a wiki topic on this (RFC 1323 etc.): http://kb.pert.geant.net/PERTKB/LargeTcpWindows I think it contains some useful background information.

MIN SAYS:
JULY 17, 2012 AT 11:12 AM

We have a 100Mbps circuit and latency is 140ms. Download speed is about 3Mbps, which is correct according to this formula, but upload is about 9Mbps. I am not sure why it is so high. Can anyone tell me what the possible reason could be? Thanks

61.

JOE SAYS:
SEPTEMBER 14, 2012 AT 3:21 PM

"unless your TCP/IP stack on the server employs a TCP enhancement called selective acknowledgements, which most do not" - When was this article written? 2008? I can't remember the last time I looked at a packet capture where SACK wasn't permitted.

62.

RAUSHAN SAYS:
OCTOBER 9, 2012 AT 6:53 AM

Actually, they have given the interval between packets as 0.005, which means that 200 packets (the inverse of 0.005) could be sent per second. They have calculated a data rate of 0.01Mb. If it is a UDP packet, its size is 552 bytes, and if it is a TCP packet, its default size is 1000 bytes. Can you tell me how they calculated the bandwidth as 0.01Mb using the interval of 0.005 for UDP?

63.

RBNETENGR SAYS:
MARCH 21, 2013 AT 4:53 PM

In reading through the comments, I wanted to add a few notes: 1) When you use ping to measure round trip delay, keep in mind that your standard Windows OS will use a 32-byte payload. This will not give you an accurate measure of round trip latency, as your FTP file transfer will normally use full-payload packets (1500 bytes). It's best to ping with a full payload to get an accurate measure. Also, use the -f option to not fragment the packet, in order to find the minimum MTU along the path. If you don't do this, you may find that there is a place along the path where every full-size packet gets fragmented, adding to the processing delay at the receiving end. 2) Windows 7 and Server 2008 R2, among others, claim to adjust the RWIN size dynamically, based on the RTT of TCP ACKs. Whether it works properly in all cases is another story. It can be disabled by editing the registry. 3) The most effective way to optimize for an LFN (Long Fat Network) is to use one of the WAN optimizer appliances available from Cisco (WAAS), Riverbed (Steelhead), etc. Yes, it does make the link cost higher, but you'll be able to use all of the bandwidth that you're already paying for. Cisco recently announced a new ISR-AX router, which contains a WAAS processor, in order to optimize WAN links. Of course, in order for it to work, you'll need another WAAS at the other end. -rb

Trackbacks
1.
BLOG | JIM80.NET WHY 100MBPS DOES NOT MEAN 100MBPS TRANSFER RATES says:
JANUARY 22, 2010 AT 10:11 AM

[...] http://www.bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-longdistance-links for a more in-depth discussion on [...]



2.

O3B AND GOOGLE TAG TEAM: RURAL AFRICA'S REDEMPTION? TOM MAKAU

says:

APRIL 26, 2011 AT 2:42 AM

[...] reach in a long time. To see how low latency is related to higher throughput see this tutorial here. The O3B idea is a great one and deserves all the support it can get so as to make it a [...]

3.

NBN FIBRE IN SMALL TOWNS

says:

AUGUST 19, 2011 AT 9:19 PM

[...] thus the further increase of latency. Alright then, Comrade, I'll humour you for a moment. Using this article as a base I will be doing some calculations. Next I will use Tranquillity as my server, which is [...]

4.

AFRICA WILL IGNORE SATELLITE COMMUNICATIONS AT ITS OWN PERIL TOM MAKAU says:

MAY 25, 2012 AT 2:54 AM

[...] The O3B will utilize satellites that are closer to the earth hence the term LEO which stands for Low Earth Orbit. The fact that these satellites are closer means that latency on the links will be much lower (at 200ms) compared to traditional satellite capacity that gives about (600ms). This will enable higher throughput at lower latencies. To read more on the relationship between latency and throughput, read this tutorial here [...]
