A Guide to TDM to Business Ethernet Network Migration


Ethernet technology, which is based on packet switching, has become a primary enabler for service integration at the edge of the network. Businesses subscribing to T1 services are rapidly migrating to Ethernet services. The migration from legacy TDM networks to Ethernet services offers organizations advantages including standardized services, scalability, reliability, and easier service management.

The transition brings challenges, however, including changes in how the network is operated and configured; moving from legacy TDM services such as T1 and T3 to Ethernet services has created some unintended consequences. In this paper, we discuss the differences between TDM and Ethernet services, compatible settings between customer and network equipment, traffic shaping by customers, and considerations for optimizing application performance over Ethernet services.

TDM and Ethernet Services

TDM is a multiplexing technology designed to interleave digital signals in time. Time slots are pre-assigned to source data streams and remain allocated even when a stream has no data to send. A T1 system is a synchronous transport system based on TDM, originally designed to multiplex 24 digital voice channels at 64 Kbps per channel, plus 8 Kbps for framing, for a line rate of 1.544 Mbps. TDM's static allocation means that some time slots may go unoccupied, resulting in inefficient use of bandwidth. For example, some of the digital voice channels in a T1 may be empty because the users of those channels are inactive for a period of time.
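
As a quick sanity check of that arithmetic, the short Python sketch below reproduces the 1.544 Mbps T1 line rate from the channel counts and rates given above.

```python
# T1 line-rate arithmetic from the description above:
# 24 voice channels x 64 Kbps each, plus 8 Kbps of framing overhead.
VOICE_CHANNELS = 24
CHANNEL_RATE_KBPS = 64
FRAMING_KBPS = 8

payload_kbps = VOICE_CHANNELS * CHANNEL_RATE_KBPS   # 1536 Kbps
line_rate_kbps = payload_kbps + FRAMING_KBPS         # 1544 Kbps

print(f"T1 payload rate: {payload_kbps} Kbps")
print(f"T1 line rate:    {line_rate_kbps} Kbps ({line_rate_kbps / 1000} Mbps)")
```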

TDM services typically have the same service bandwidth as the associated port speed. For example, all bits on a T1 line are used for IP/Ethernet traffic encapsulated in T1 frames. The bandwidth increments of the T1 hierarchy are coarse; as bandwidth requirements grow, customers must jump from an NxT1 circuit to a T3/OC-1 circuit, which is equivalent to 28xT1 with a bandwidth of roughly 45 Mbps. The cost of private line services like T3 and OC-1 connections is significantly higher than that of Ethernet services with equivalent bandwidth.

Ethernet services are based on packet switching, where source data streams are packetized and statistically multiplexed. Statistical multiplexing allocates capacity dynamically based on source demand and available bandwidth, resulting in more efficient use of network bandwidth. The bandwidth range available for Ethernet services is also a better match for a customer's incremental needs: port speeds are 10, 100, 1000, or 10000 Mbps, while the Committed Information Rate (CIR) of the Ethernet Virtual Connection (EVC) can range from 1 Mbps to 10 Gbps.
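
To make the contrast concrete, the following sketch compares the utilization of fixed TDM slots with statistically multiplexed sharing; the slot size and per-source demand figures are invented purely for illustration.

```python
# Illustrative comparison of fixed TDM slots vs. statistical multiplexing.
# All demand and slot values below are made up for the example.

def tdm_utilization(demands_mbps, slot_mbps):
    """Each source gets one fixed slot whether or not it has data to send."""
    capacity = len(demands_mbps) * slot_mbps
    carried = sum(min(d, slot_mbps) for d in demands_mbps)
    return carried / capacity

def statmux_utilization(demands_mbps, link_mbps):
    """Capacity is shared dynamically; idle sources free bandwidth for busy ones."""
    carried = min(sum(demands_mbps), link_mbps)
    return carried / link_mbps

demands = [0.0, 0.2, 1.5, 0.0, 3.0, 0.1]   # six bursty sources (Mbps)
print(f"TDM (six fixed 1 Mbps slots): {tdm_utilization(demands, 1.0):.0%} utilized")
print(f"Stat mux (6 Mbps shared):     {statmux_utilization(demands, 6.0):.0%} utilized")
```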

Compatible Network Equipment Settings

Network performance can be impacted when the communications equipment on the network is configured with incompatible parameters. For example, a number of network interface settings can stand in the way of a successful Ethernet implementation. These include the data rate, full-duplex communications, and the maximum transmission unit (MTU) size settings on the customer Ethernet equipment. If any of these parameters is configured incorrectly, the result can be poor performance or no communications at all.

To make all of this work, the Ethernet equipment on both ends of the connection needs to communicate the details of its settings so that they match. For this reason, modern networking equipment incorporates a technique called "Auto-Negotiation," in which the network interface cards at each end of a connection agree on the communications parameters and set themselves accordingly.
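
As one way to verify these settings on a Linux-based customer device, the sketch below reads the negotiated speed, duplex mode, and MTU that the kernel exposes under /sys/class/net/ and compares them with expected values; the interface name and expected values are placeholders and should be replaced with the parameters agreed with the service provider.

```python
from pathlib import Path

# Hypothetical interface name and expected settings for this illustration;
# substitute the values agreed with the service provider.
IFACE = "eth0"
EXPECTED = {"speed": "1000", "duplex": "full", "mtu": "1500"}

def read_setting(iface: str, name: str) -> str:
    """Read a per-interface attribute the Linux kernel exposes via sysfs."""
    return Path(f"/sys/class/net/{iface}/{name}").read_text().strip()

for name, expected in EXPECTED.items():
    try:
        actual = read_setting(IFACE, name)
    except OSError as err:   # e.g. the link is down or the attribute is absent
        print(f"{name}: could not read ({err})")
        continue
    status = "OK" if actual == expected else f"MISMATCH (expected {expected})"
    print(f"{name}: {actual} -> {status}")
```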

In addition, the traffic direction must be set correctly. To communicate efficiently and make the best use of the network transport (e.g., fiber, hybrid fiber coaxial, or copper), modern networking equipment operates in full-duplex mode, meaning it can transmit and receive at the same time. Regarding the Ethernet MTU, setting the MTU smaller than needed can result in poor bandwidth utilization: the frame overhead (the part of each frame that does not carry payload) remains constant, so more of the line rate is consumed by overhead for the same payload data rate.
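
The impact of the MTU choice on efficiency can be estimated directly: with a fixed per-frame overhead, smaller frames spend a larger share of the line rate on headers. The sketch below assumes roughly 38 bytes of per-frame Ethernet overhead (header, FCS, preamble, and inter-frame gap) purely for illustration.

```python
# Efficiency = payload / (payload + per-frame overhead).
# Assumes 38 bytes of per-frame overhead: 14 B header + 4 B FCS
# + 8 B preamble/SFD + 12 B inter-frame gap. Payload sizes are illustrative.
FRAME_OVERHEAD_BYTES = 38

def line_efficiency(payload_bytes: int) -> float:
    return payload_bytes / (payload_bytes + FRAME_OVERHEAD_BYTES)

for payload in (256, 576, 1500, 9000):
    print(f"{payload:>5}-byte payloads: {line_efficiency(payload):.1%} of the line rate carries payload")
```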

Traffic Policing

Based upon a customer's subscribed bandwidth, a service provider enforces a bandwidth profile that monitors and limits bandwidth consumption by subscribers. The bandwidth profile enforces the long-term average guaranteed bandwidth and the excess bandwidth allowed by the service. A service provider provides performance commitments only up to the CIR of the service's EVC.

Equipment is configured with a "bandwidth" value, and the goal is to ensure that the rate at which packets and frames are sent upstream stays below this configured value. If the traffic rate is higher than the configured bandwidth, the modem explicitly drops the excess frames/packets.
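
Providers commonly implement such a bandwidth profile with a token-bucket policer. The simplified single-rate sketch below illustrates the idea; the rate and burst values are illustrative and do not represent any particular provider's configuration.

```python
import time

class TokenBucketPolicer:
    """Simplified single-rate policer: frames that exceed the configured
    rate and burst allowance are dropped rather than queued."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate_bytes_per_s = rate_bps / 8
        self.burst_bytes = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def allow(self, frame_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst_bytes,
                          self.tokens + (now - self.last) * self.rate_bytes_per_s)
        self.last = now
        if frame_bytes <= self.tokens:
            self.tokens -= frame_bytes
            return True   # frame conforms, forward it
        return False      # out of profile, drop it

# Example: a 10 Mbps profile with a 64 kB burst allowance (illustrative values).
policer = TokenBucketPolicer(rate_bps=10_000_000, burst_bytes=64_000)
print(policer.allow(1500))   # True while tokens remain
```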

Traffic Shaping

Traffic shaping is a mechanism a customer can use to conform to a CIR. Customers should shape their traffic to adhere to their subscribed CIR rates. A mismatch between the port speed and the CIR requires traffic shaping before sending traffic upstream; otherwise, some of the customer's frames could be lost to traffic policing.

Traffic shaping also reduces the likelihood of congestion on the Ethernet connection. For both On-Net HFC and fiber access, sending bursts of data traffic above the subscribed bandwidth should be avoided.
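
Where a policer drops out-of-profile frames, a shaper delays them. The following minimal sketch paces transmissions so the upstream rate stays within the subscribed CIR; the CIR and frame-size values are illustrative.

```python
import time

class TokenBucketShaper:
    """Minimal shaper: instead of dropping out-of-profile frames, wait until
    enough credit has accumulated, then send. Assumes no frame is larger
    than burst_bytes."""

    def __init__(self, cir_bps: float, burst_bytes: int = 1500):
        self.rate_bytes_per_s = cir_bps / 8
        self.burst_bytes = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def send(self, frame_bytes: int) -> None:
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst_bytes,
                              self.tokens + (now - self.last) * self.rate_bytes_per_s)
            self.last = now
            if frame_bytes <= self.tokens:
                self.tokens -= frame_bytes
                return   # frame is paced out within the CIR
            # Sleep just long enough for the missing tokens to accumulate.
            time.sleep((frame_bytes - self.tokens) / self.rate_bytes_per_s)

# Example: pace 1500-byte frames to a 10 Mbps CIR (illustrative value).
shaper = TokenBucketShaper(cir_bps=10_000_000)
shaper.send(1500)
```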

Customer Applications

An additional layer of traffic shaping may also take place at the application layer. Depending on the application, there may be configuration options that reduce the amount of data sent at once over the Ethernet connection to the destination endpoint. Bursty, delay-sensitive applications such as VoIP, Citrix virtualization, and cash register systems may have specific requirements for keeping latency and frame loss below certain thresholds. Some types of traffic, such as VoIP, use small packets and communicate over the User Datagram Protocol (UDP). UDP has a smaller header and less overhead; its packets contain up to 218 bytes, depending on the type of compression. These smaller packets occupy smaller Ethernet frames and are better suited to voice communications that require lower latency.
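
As a rough illustration of how a VoIP packet reaches that size, the arithmetic below assumes uncompressed G.711 audio with 20 ms of voice per packet; other codecs and packetization intervals produce smaller packets.

```python
# Rough size budget for one G.711 voice packet carrying 20 ms of audio,
# assumed here only to illustrate why VoIP frames stay small.
payload = 160            # 20 ms of 64 Kbps G.711 audio
rtp, udp, ipv4 = 12, 8, 20
eth_header_fcs = 18      # Ethernet header (14 B) + FCS (4 B)

frame_bytes = payload + rtp + udp + ipv4 + eth_header_fcs
print(f"VoIP frame size: {frame_bytes} bytes")   # 218 bytes
```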

Ethernet has no inherent bit-error correction capability in its layer 2 protocol. In customer networks, higher-layer protocols and applications can retransmit frames/packets over the connection when bit errors or packet loss occur. Packet loss worsens when customer traffic is not shaped properly and must be discarded by the service provider, which introduces longer delay and higher jitter from the customer's point of view. Higher-layer protocols such as the Transmission Control Protocol (TCP) should be tuned for the applications by setting an appropriate window size based on the round-trip delay of the Ethernet connection.
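
For the TCP tuning mentioned above, the window should cover at least the bandwidth-delay product of the connection. The sketch below computes that minimum window for a few illustrative round-trip times on an assumed 100 Mbps EVC.

```python
# TCP throughput is limited to window_size / round_trip_time, so the window
# should be at least the bandwidth-delay product. Values are illustrative.
def required_window_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    return bandwidth_bps * rtt_seconds / 8

for rtt_ms in (10, 40, 80):
    window = required_window_bytes(100_000_000, rtt_ms / 1000)   # 100 Mbps EVC
    print(f"RTT {rtt_ms:>3} ms -> window of at least {window / 1024:.0f} KiB")
```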

Conclusion

Ethernet networking is a simpler, more efficient, and more cost-effective alternative to a TDM network. Making a successful transition to Ethernet services, however, requires configuring on-premises equipment to match the service provider's network parameters in order to optimize the performance of network connections.

