By Jean-Pierre Matte, Director of Fibre Deployment
When it comes to describing how fast an Internet connection is, bandwidth and data transfer rates, in megabits per second (Mbps) and gigabits per second (Gbps), are the most commonly used measurements. However, these metrics don’t tell the whole story about how fast your Internet will be. Your Internet connection is essentially your connection to your Internet Service Provider’s (ISP) network, and there are several things about an ISP’s network that can affect your perceived bandwidth.
A little refresher on how the Internet works…
The Internet is a jumble of independent networks operated by thousands of companies. These network providers interconnect, or “peer,” with each other in common peering hubs. So, the information we transmit travels over these different networks, across a wide variety of media (wireless, coax, fibre, etc.), to get from one point to another. Since there are so many disparate parties and technologies involved in connecting your computer to Medium, there are some rules, or protocols, that data transmission follows when moving through the network.
The Transmission Control Protocol (TCP) is responsible for ensuring that information arrives at its destination complete and in the right order. TCP accomplishes this by having both devices check in with each other: the sending device announces what it’s about to send, and the receiving device verifies and sends confirmation that the data arrived as expected before more is sent. This ensures that information is being sent and received correctly and that networks aren’t overwhelmed, but all that checking in takes time. The bad news is that every round of these confirmations has to cross the network, so TCP does its “traffic cop” thing constantly to keep our data moving safely.
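One way to see why those round trips matter: TCP can only keep a limited “window” of unacknowledged data in flight at once, so the round-trip time puts a hard ceiling on throughput no matter how fast the line is. A back-of-the-envelope sketch in Python (the 64 KiB window is a classic default, used here purely for illustration):

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput: at most one window of
    unacknowledged data can be in flight per round trip."""
    rtt_s = rtt_ms / 1000.0
    bits_per_second = (window_bytes * 8) / rtt_s
    return bits_per_second / 1_000_000

# A 64 KiB window over a 100 ms round trip caps out around 5 Mbps,
# regardless of the advertised line rate.
print(max_tcp_throughput_mbps(65_536, 100))  # ~5.24 Mbps
```

The takeaway: halving the round-trip time doubles the ceiling, which is why the latency discussion below matters even on a “gigabit” plan.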
What is latency?
The simplest way to define latency is the amount of time it takes for data to travel from one point to another in a network. It is driven by the physical distance the data has to cover, plus the number and location of obstacles (such as routers) between the source and destination. If our data packet were travelling on a road network, latency would be like the delays caused by traffic jams or road construction. If the network has alternate routes available, or if we have multiple paths to transmit information, latency can be reduced.
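As a rough model, light in fibre covers about 200 km per millisecond, and every router along the way adds its own processing and queueing delay on top. A toy sketch of that model (all numbers illustrative, not measurements of any real network):

```python
FIBRE_KM_PER_MS = 200.0  # light in fibre travels ~200 km per millisecond

def one_way_latency_ms(route_km: float, hop_delays_ms: list) -> float:
    """Minimum one-way latency: propagation delay over the physical
    route, plus whatever each router along the way adds."""
    return route_km / FIBRE_KM_PER_MS + sum(hop_delays_ms)

# A 500 km route through four routers, each adding ~1 ms of
# processing and queueing, has a latency floor of about 6.5 ms.
print(one_way_latency_ms(500, [1, 1, 1, 1]))  # 6.5
```

Nothing can push latency below the propagation term, which is why the length of the route your ISP chooses matters so much.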
Similar to a car’s journey, our data packet doesn’t travel in a straight line. It follows the pathways created by your ISP. Each ISP varies in how it peers its network with other private networks, ISPs, institutional networks, and cloud providers. So, your data packet could take a different road to get to where it’s going, depending on your ISP’s network peering. If TCP is akin to an extra-vigilant traffic cop, a poorly peered network would have you travel from Toronto to Scarborough by way of New Jersey. Case in point: large international budget ISPs often bring their traffic back to major hubs in the US, which has a significant effect on your packet delivery speed.
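The cost of that detour is easy to ballpark from propagation delay alone. A small sketch, with the route distances picked purely for illustration (the hairpinned path’s length is a guess, not a measured route):

```python
FIBRE_KM_PER_MS = 200.0  # light in fibre travels ~200 km per millisecond

def round_trip_ms(route_km: float) -> float:
    # Propagation only; real round trips add router and queueing
    # delay on top of this floor.
    return 2 * route_km / FIBRE_KM_PER_MS

direct = round_trip_ms(25)    # Toronto -> Scarborough, peered locally
detour = round_trip_ms(1600)  # same endpoints, hairpinned via a US hub
print(direct, detour)  # 0.25 ms vs 16.0 ms, before any router delay
```

A 60-fold difference in the latency floor, from routing alone; and since every TCP acknowledgment pays that round trip, the detour slows the whole conversation, not just the first packet.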
Bridging the latency gap by connecting locally.
Content and cloud SaaS providers are quickly building out access to their servers all around the world, making it possible for ISPs to peer with them closer to home. By having the best and closest peering, the overall experience is improved since there’s less physical distance to cross before your data reaches its destination.
Local peering brings content closer to the user. It’s important to our mission to ensure that our network remains locally relevant and as locally connected as possible. This way we become the local anchor point that connects our communities to the world.
Questions to ask your ISP if you’re concerned about latency:
- Are you connected locally to all the public Internet exchanges?
- Are you peered locally with all the ISPs in your region?
- Are you peered locally with CDNs in your region and beyond?
- Are you peered locally with cloud providers in your region and beyond?
- Who are your global transit providers and why did you choose them?
- Is your peering in multiple places in the region, so as not to put all your eggs in the same basket (or building)?