Today’s networks have a lot of ground to cover. The resources companies depend on—cloud-based applications, remote offices, and centralized data centers, to name a few—can be located a significant distance from the business. Networks need to tie all these pieces of the enterprise together, transmitting data speedily and seamlessly. But there is a complication: latency.
Latency can be thought of as, simply, delay. It is the time required for a packet of data to travel from one point to another, say, from the server at a cloud provider's facility to a desktop PC back at the office. All networks experience some measure of latency. For one thing, there are the laws of physics to contend with; namely, the speed of light. The exact speed varies with the transmission medium: in fiber, light travels at roughly two-thirds of its speed in a vacuum, so a signal takes about 8.2 microseconds per mile, or roughly 0.82 milliseconds per 100 miles. But the basic rule is simple: the farther apart two points are, the greater the latency.
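To make the distance rule concrete, here is a minimal sketch in Python. It is a worked example, not taken from any particular product, and it uses only the approximate fiber speed cited above; real-world latency adds routing, queuing, and processing delay on top of this floor.

```python
# Best-case one-way delay from distance alone. Light in fiber travels
# at roughly two-thirds of its vacuum speed, about 124,000 miles per
# second; everything else a network does adds delay on top of this.
FIBER_SPEED_MILES_PER_SEC = 124_000

def one_way_ms(miles: float) -> float:
    """Propagation delay in milliseconds for a given distance in miles."""
    return miles / FIBER_SPEED_MILES_PER_SEC * 1_000

# Example: a cloud data center about 1,000 miles from the office adds
# ~8 ms each way (~16 ms round trip) before any other delay is counted.
print(f"{one_way_ms(1_000):.1f} ms one way")
```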
Other factors can increase delay further, sometimes significantly. There may be network congestion en route, routers may be overloaded, or the data may simply be taking a longer-than-necessary path. When the public Internet is used for all, or even part, of the transmission, these impediments can be particularly problematic. That's because the data isn't traveling over a single network run by one provider, but over multiple segments managed by different entities. As a result, data isn't prioritized: mission-critical packets receive no preference over less vital ones. It also means the most efficient route is not always used; indeed, it's not uncommon for data heading from Denver to Seattle to be routed through New York.
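A rough calculation shows what that detour costs. The sketch below reuses the same approximate fiber speed as the previous example; the mileages are assumed great-circle figures for illustration, and congestion and per-hop processing would only widen the gap.

```python
# Comparing a direct route with an indirect one, propagation delay only.
# Distances are approximate great-circle mileages, assumed for illustration.
FIBER_SPEED_MILES_PER_SEC = 124_000  # light in fiber, ~2/3 vacuum speed

def one_way_ms(miles: float) -> float:
    """Best-case one-way propagation delay in milliseconds."""
    return miles / FIBER_SPEED_MILES_PER_SEC * 1_000

print(f"Denver -> Seattle, direct (~1,000 mi):     ~{one_way_ms(1_000):.0f} ms")
print(f"Denver -> New York -> Seattle (~4,000 mi): ~{one_way_ms(4_000):.0f} ms")
```

Even before a single congested router enters the picture, the roundabout path roughly quadruples the delay.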
When an e-commerce web site is slow to load, potential customers may do their shopping elsewhere. When cloud-based applications respond sluggishly, employee productivity suffers. Videoconferencing can prove a frustrating experience when latency is high, with frozen images and crackling sound hindering its effectiveness. What's more, some applications, such as off-site backup and mirroring at a remote data center, will work only in a low-latency environment.
The good news is that latency can be managed and kept to a minimum. When a single provider controls the entire network, delays are dramatically reduced. Connecting key business locations (data centers, cloud providers, backup sites, and remote offices alike) with a dedicated Ethernet network, instead of a patchwork of Internet and low-capacity T1 circuits, ensures that traffic reaches its destination quickly, reliably, and securely. It allows latency-sensitive traffic, such as voice and video, to be prioritized, reduces contention, and improves performance. For the business, that means the IT engine can fire on all cylinders and operations can move full speed ahead.
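As one illustration of how that prioritization surfaces at the application level, here is a minimal sketch, assuming a Linux host, of marking voice packets with a DSCP value so that a QoS-aware network can queue them ahead of bulk traffic. The destination address and port are hypothetical placeholders.

```python
# Mark outbound UDP packets with DSCP "Expedited Forwarding" (EF, 46),
# the value conventionally used for voice. Routers configured to honor
# DSCP can then service these packets ahead of lower-priority traffic.
import socket

DSCP_EF = 46  # Expedited Forwarding

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the 6-bit DSCP value in its upper bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"voice payload", ("198.51.100.10", 5004))  # placeholder endpoint
```

Marking only matters where the network honors it: on the public Internet, DSCP values are commonly ignored or rewritten in transit, which is one more reason a single-provider network keeps latency-sensitive traffic on track.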