
Bandwidth or Throughput? What Are They? What Should You Look For?

Bandwidth and Throughput! There is a lot of confusion about these terms: what they mean, and which one you should care about.

Let’s try some definitions first, and then go into the details.

From the electrical perspective, Bandwidth is the range of frequencies a cable can carry without significant signal degradation. In the networking field, this is usually confused with the Max Data Rate that a link can support, whether physical (e.g. a cable) or wireless (e.g. WiFi).

In reality, Bandwidth and Max Data Rate are not the same thing. The Max Data Rate that a link can support depends on the Bandwidth, but it also depends on the carrier modulation supported by the link (please take it as it is, this is not the place to start a digression on telecommunications theory, but if you really want to know, drop me a comment and I will post an article just for that).
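Just to make the relationship concrete (without opening that telecommunications digression), here is a tiny back-of-the-envelope sketch based on the classic Nyquist formula for a noiseless channel: max data rate = 2 × bandwidth × log2(number of signal levels). The 20 MHz channel and 16-level modulation below are made-up values, purely for illustration.

```python
import math

def nyquist_max_data_rate(bandwidth_hz: float, signal_levels: int) -> float:
    """Nyquist limit for a noiseless channel: 2 * B * log2(M) bits per second."""
    return 2 * bandwidth_hz * math.log2(signal_levels)

# Made-up example: a 20 MHz channel with a 16-level modulation
print(nyquist_max_data_rate(20e6, 16) / 1e6, "Mb/s")  # 160.0 Mb/s
```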

Also, the difference between the bandwidth and the max data rate depends on the protocol being used and, if more than one protocol is stacked together (see my post on the OSI model), you get a different max data rate for each protocol layer. The higher the protocol layer, the lower the max data rate.

Bandwidth is measured in Hz (hertz) and its multiples (kHz, MHz, GHz).

Data Rate is measured as the number of bits sent in one second, or b/s, and its multiples (kb/s, Mb/s, Gb/s). Note that I mentioned bits, not bytes. Carriers sell or rent their data links using b/s as the measurement unit.

Since a byte is made up of 8 bits, 1 byte/s = 8 b/s. “byte/s” is usually abbreviated as B/s.

So, when a carrier sells you a data link capable of transporting 10 Mb/s, in reality that link can transport only 10/8 = 1.25 MB/s. Keep that in mind when you do your calculations to figure out how much you need to buy.
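Here is a minimal sketch of that conversion, using the 10 Mb/s example above:

```python
def carrier_rate_to_bytes_per_second(rate_mbps: float) -> float:
    """Convert an advertised rate in Mb/s to MB/s (1 byte = 8 bits)."""
    return rate_mbps / 8

print(carrier_rate_to_bytes_per_second(10), "MB/s")  # 1.25 MB/s
```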

Also, the carrier will most probably tell you that it is selling or renting you a data link with a bandwidth of 10 Mb/s. But remember: that is not really bandwidth, it is the maximum data rate.

The real data rate that the link will be able to carry will be definitely less than that “bandwidth” they are selling you. In fact, that real data rate is what is called throughput. In these correct terms, what they sell you as bandwidth is actually the maximum throughput that the link can support.

The real throughput will always be less than the max throughput for several reasons. Among others:

  • latency (delay in transmitting/receiving a data packet)
  • computational capability of the network elements between the sender and the receiver of the data packets. The throughput will never be greater than the smallest throughput encountered among the network elements in the path.
  • number of users sharing the same link, which causes contention in the usage of the link itself
  • type of protocol, because each protocol uses different header sizes to envelop the packets it is sending/receiving, and the bytes in those headers add to the bytes of actual data being transmitted/received

Let’s dig a little deeper into these four bullets, and remember that there are several more reasons why the real throughput is smaller than the max throughput.

In order to do so, let’s compare a data link with a pipe transporting water.

The max throughput, which many improperly call bandwidth, is the amount of water that the pipe can transport, given its physical size, the diameter of the pipe in particular.

You can easily figure out on your own that the max amount of water the pipe can carry is not necessarily what the pipe will carry at any given time. Think of the case where you are providing water through a valve that can regulate the amount of water through it. If the valve is closed, there will be no water in the pipe. If the valve is half open, there will be an amount of water flowing in the pipe, and the rest of the pipe will be filled with air.

That said, let’s see what latency causes in a data link, compared with the pipe and the water. Most protocols, when sending a data packet, will wait for a response from the other end that confirms the reception of the packet before sending the next one. Since it takes a certain amount of time for the sent packet to reach the far end and for the response to come back (this is the latency, or delay), the data link will not be used to transmit data for that whole interval after a packet is sent. This is the equivalent of opening the water valve for a short moment and then closing it. Then we wait for the water to reach the other end of the pipe, and then we wait some more time for somebody at that end to tell us the water has arrived. Only then do we open the valve for another short moment. You can see that the pipe remains empty most of the time, and that is what happens with a data link because of latency.
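To put a rough number on this, here is a sketch that assumes the simplest possible behaviour: the sender transmits one packet, then sits idle for a full round trip waiting for the confirmation before sending the next. The packet size, link rate and round-trip time below are made-up illustrative values.

```python
def stop_and_wait_throughput(packet_bytes: int, link_rate_bps: float, rtt_s: float) -> float:
    """Approximate throughput (b/s) when each packet must be confirmed
    before the next one is sent: useful bits / (send time + round trip)."""
    packet_bits = packet_bytes * 8
    send_time = packet_bits / link_rate_bps
    return packet_bits / (send_time + rtt_s)

# Made-up values: 1500-byte packets on a 100 Mb/s link with a 20 ms round trip
rate = stop_and_wait_throughput(1500, 100e6, 0.020)
print(f"{rate / 1e6:.2f} Mb/s")  # about 0.60 Mb/s, far below the 100 Mb/s maximum
```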

The computational capability of the network elements can be compared to the size of the pipe along the way. If the pipe becomes narrower in certain parts of the path, the whole pipe will be limited in the amount of water it can transport at any given time by the smallest of all the pipe diameters along the way.

Number of users sharing the data link: the more users, the less throughput per user. Think of the number of users as the number of valves feeding water into the pipe. With two users, if one pushes an amount of water equal to half of what the pipe can transport, the other user will at most be able to push the other half into the pipe, because at that point the pipe is completely full between the two users. Now think about what happens if one user opens his valve almost completely and therefore uses most of the capacity of the pipe: even if the other user opens his valve by the same amount, just because he did it a little later than the first user, he won’t be able to push the amount of water he wants.
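Here is a minimal sketch of that contention effect, assuming the simplest possible model where the link capacity is split evenly among the active users (real schedulers and protocols behave in more complicated ways):

```python
def fair_share_per_user(link_capacity_mbps: float, active_users: int) -> float:
    """Idealized even split of the link capacity among the active users."""
    return link_capacity_mbps / active_users

for users in (1, 2, 5, 20):
    print(f"{users:>2} users -> {fair_share_per_user(100, users):6.1f} Mb/s each")
```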

For the effect due to the type of protocol, think of it this way: each protocol introduces an overhead, extra bytes that carry information the protocol needs in order to work. So you have your data, and you have to send it in a packet. The number of bytes you send is equal to the number of bytes of your data plus the number of bytes of overhead introduced by the protocol. On top of that, you normally use several protocols stacked on each other, and each protocol sees the packet built by its predecessor in the stack as pure data and therefore adds its own overhead. You may end up in a situation where most of the bytes you send belong to the protocols involved and only a few are the actual data you wanted to send. So, although you are filling the pipe with all this data, only a fraction of it is what you meant to send in the first place.
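To see how the stacked headers eat into the useful data, here is an illustrative calculation using common header sizes (Ethernet header plus frame check sequence, IPv4 and TCP without options); the payload sizes are just examples:

```python
# Illustrative header sizes in bytes (no options, no VLAN tags, etc.)
ETHERNET_OVERHEAD = 14 + 4   # header + frame check sequence
IPV4_HEADER = 20
TCP_HEADER = 20

def payload_efficiency(payload_bytes: int) -> float:
    """Fraction of the bytes on the wire that is actual application data."""
    frame_bytes = payload_bytes + TCP_HEADER + IPV4_HEADER + ETHERNET_OVERHEAD
    return payload_bytes / frame_bytes

print(f"{payload_efficiency(1460):.1%}")  # about 96% for a full-size frame
print(f"{payload_efficiency(100):.1%}")   # about 63% for a small packet
```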

Each of the above problems has its own solution to minimize the impact on the real throughput. However, keep in mind that there is only so much you can do, and there is no way you can address all the possible causes of throughput reduction. So, in the end, the final throughput you obtain on your data link will never be anywhere close to the max throughput. I can actually tell you that, in most cases, you are very lucky if you reach 30% of the max throughput, unless you are using the link in some special way (but that is a whole different story).

All in all, when you design your network, keep in mind that in order to use as much throughput as you can, you have to make sure that the links you are using are oversized by at least a factor of 3 with respect to what you need. In addition to that, you have to make sure that all the network elements (routers, DNS servers, and so forth) are built to withstand a throughput higher than the one you need.

For example, let’s say you have 100 Mb/s links and a router that can connect to 100 Mb/s links, and you need a throughput across the network of at least 30 Mb/s. If that router connects to 10 of those links, and each link needs to transfer at least 30 Mb/s of data, then your router has to sustain a total throughput of at least 10 x 30 = 300 Mb/s, even though it is connected to 100 Mb/s links. If your router is not capable of that, you will never be able to use your links to the best of their capabilities.

Now, think about what kind of throughput a wireless router or access point needs to sustain if it has to support just the 802.11g protocol, which has a max throughput of 54 Mb/s, and the number of users is 20. The throughput of this network element should be 54 x 20 = 1080 Mb/s ≈ 1 Gb/s!
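Here is a minimal sketch of the sizing arithmetic used in the last two examples (the link counts and per-link rates are the ones above):

```python
def required_aggregate_throughput(links: int, per_link_mbps: float) -> float:
    """Total throughput a network element must sustain to serve every link at the desired rate."""
    return links * per_link_mbps

print(required_aggregate_throughput(10, 30), "Mb/s")  # 300 Mb/s for the wired router example
print(required_aggregate_throughput(20, 54), "Mb/s")  # 1080 Mb/s for the 802.11g access point
```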

Interesting, isn’t it?

Maybe now you understand why your wireless network is not capable of supporting the number of users that attach to it, and why everything slows down to a crawl when several users are connected, while everything is perfectly fine when only one or two users are on the WiFi connection.
