Backbone Performance: Testing the Impact of Cato Cloud’s Optimized Routing on Latency


It’s no secret that the Internet has a love-hate relationship with performance. Quick and responsive one day, slow and sluggish the next, Internet connections are anything but predictable. Which raises the question: how can an SD-WAN perform well if it’s based on the public Internet?

The key is replacing the Internet core with a managed network. Simply taking a more direct path across the middle mile helps reduce latency. However, latency can be reduced even further by looking at the network more holistically, as we recently saw when analyzing Cato Cloud performance. As it turns out, the straightest line across an IP network is often not the fastest path.

Latency is a middle-mile issue

A recent study showed once again that latency in an Internet connection is a matter of the middle mile, not the last mile. The testing, conducted by SD-WAN Experts, compared latency across public Internet connections, isolating last-mile from middle-mile performance, with latency across a private backbone, namely Amazon’s AWS network.

The results showed that although the last mile proved to be more erratic than the middle mile, the impact on the overall connection was negligible. “What we found was that by swapping out the Internet core for a managed middle mile makes an enormous difference,” writes Steve Garson, president of SD-WAN Experts. “The latency and variation between our AWS workloads were significantly better across Amazon’s network than the public Internet.”

The reasons for the problems in the Internet middle mile are well known. Routers are built for fast traffic processing and are therefore stateless. Control-plane intelligence is limited, as there’s little communication between the control and data planes. As such, routing decisions account for neither application requirements nor the current levels of packet loss, latency, or congestion on each route. Even shortest-path selection gets abused: service providers’ commercial relationships often work against the end user’s interest in best-path selection. In short, the Internet moves traffic forward based on what’s best for the providers, not the users or their applications.

Cato Cloud fixes the middle mile

Cato replaces the Internet middle mile with a private network, the Cato Cloud network. Cato PoPs construct an overlay across SLA-backed IP transit services from multiple tier-1 providers. With SLA-backed IP transit, Cato can route traffic globally on a single provider and avoid the loss and congestion issues associated with the traffic handoffs that occur at Internet exchanges. Cato further improves the connection by monitoring the real-time conditions across its providers, selecting the optimum path across Cato Cloud for every packet.
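Cato doesn’t publish the details of its selection algorithm, but the core idea — continuously probe each candidate path and steer traffic onto the one with the lowest measured latency — can be sketched roughly as follows. All names and the smoothing window here are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch of latency-based path selection, NOT Cato's actual
# implementation: keep a rolling window of recent RTT probes per candidate
# path and pick the path with the lowest smoothed RTT.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class PathStats:
    name: str
    # Rolling window of the last 10 RTT probe results, in milliseconds.
    rtts_ms: deque = field(default_factory=lambda: deque(maxlen=10))

    def record(self, rtt_ms: float) -> None:
        self.rtts_ms.append(rtt_ms)

    def smoothed_rtt(self) -> float:
        # A path with no measurements yet is treated as unusable.
        if not self.rtts_ms:
            return float("inf")
        return sum(self.rtts_ms) / len(self.rtts_ms)


def best_path(paths: list[PathStats]) -> PathStats:
    # Choose the candidate with the lowest average of recent RTT probes.
    return min(paths, key=lambda p: p.smoothed_rtt())


direct = PathStats("virginia-singapore")
via_dallas = PathStats("virginia-dallas-singapore")
for rtt in (227, 228, 226):
    direct.record(rtt)
for rtt in (216, 217, 215):
    via_dallas.record(rtt)

print(best_path([direct, via_dallas]).name)  # virginia-dallas-singapore
```

Using a rolling window rather than a single probe keeps the selection stable against one-off spikes, which matters when the decision is re-evaluated continuously.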

The optimum path is not always the most direct one, though. Case in point: a recent example between two Cato PoPs, one in Virginia and the other in Singapore. The Cato software evaluated the round-trip time (RTT) across the direct path between Virginia and Singapore but identified a better, indirect route via Dallas.

Virginia-Singapore PoP path

Cato Cloud’s direct path showed an RTT of 227 milliseconds, about 5% less latency than the typical RTT (240 ms) for Internet connections between Singapore and Ashburn, Virginia.

Virginia-Dallas-Singapore PoP path

Routing through Dallas, though, showed a lower RTT of 216 ms, shaving 10% off Internet RTTs and providing latency comparable to what you might expect from MPLS services, at a fraction of the cost.

Round-trip times
We calculated round-trip times, measuring latency from Virginia to Singapore (1) and Singapore to Virginia (2) for both optimized and direct paths (3)
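The percentage figures quoted above follow directly from the measured RTTs; a quick sanity check of the arithmetic:

```python
# RTT figures as stated in the post, in milliseconds.
internet_rtt_ms = 240    # typical Internet RTT, Singapore-Ashburn
direct_rtt_ms = 227      # Cato Cloud direct path
via_dallas_rtt_ms = 216  # Cato Cloud path routed through Dallas


def savings_pct(baseline_ms: float, measured_ms: float) -> float:
    """Latency savings of `measured_ms` relative to `baseline_ms`, in percent."""
    return 100 * (baseline_ms - measured_ms) / baseline_ms


print(round(savings_pct(internet_rtt_ms, direct_rtt_ms)))      # 5
print(round(savings_pct(internet_rtt_ms, via_dallas_rtt_ms)))  # 10
```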

The latency impact 

A ten percent savings is particularly significant as organizations look at real-time application delivery. Voice and remote desktop are sensitive to the kind of latencies seen on connections between Asia Pacific and North America, where latency is already at the edge of impacting the user experience. As Phil Edholm recently explained, we naturally wait 250 to 300 milliseconds before speaking again in a voice conversation. A ten percent savings in latency can make the difference between an intelligible call and an unintelligible one.

For too long, organizations had to choose between the cheap public Internet, with its unpredictable global connectivity, and an expensive but solid global MPLS connection. Independent backbones, like Cato Cloud, offer a way out of that trap. By selecting the optimum path across affordable IP backbones, whether direct or through another city, Cato Cloud can give companies MPLS-like performance at Internet-like prices.