A recent conversation with a WAN engineer got me thinking about how network optimization techniques have changed over the years. Optimization has always been about overcoming latency, jitter, packet loss, and bandwidth limitations. However, in recent years bandwidth has become much less of an issue for most enterprises. Lower dollar-per-bit bandwidth costs and apps that incorporate data deduplication and compression are big drivers of this shift.
Meanwhile, edge computing is growing in popularity, and the real WAN optimization challenges enterprises face now relate to reducing round-trip time (RTT), packet loss, and jitter to ensure high Quality of Experience (QoE) for services like Unified Communications as a Service (UCaaS). At a high level, this means overcoming latency across the middle mile and addressing jitter and packet loss in the last mile.
Traditional WAN optimization tools do little to help address these challenges, as they’re simply designed to reduce bandwidth consumption. Fortunately, Cato Cloud offers enterprises a suite of network optimization tools that can.
But how do these network optimization techniques work and what can they do for your WAN? We’ll answer those questions here.
Middle-Mile Network Optimization Techniques: Reducing Latency
In the past, MPLS provided enterprises with low-latency, albeit expensive, connectivity between sites. As such, sites were often connected by the minimal amount of necessary capacity. WAN optimization appliances emerged to solve that problem, providing the means to extract the maximum usage out of available MPLS capacity.
However, the shift to a cloud-first, mobile-centric enterprise undermined the value of WAN optimization appliances. With more assets in the cloud, branch offices were required to send traffic back to the secure Internet gateway in the datacenter. The so-called trombone effect meant that latencies across the MPLS network to the cloud were often worse than accessing the same cloud assets directly over inexpensive DSL lines.
WAN optimization appliances couldn’t fix that trombone problem. Furthermore, their ability to extract value out of every bit of capacity became less relevant when, at Internet prices, offices could afford 20x more capacity than they had with MPLS.
Finally, the form factor itself, a physical appliance, was increasingly incompatible with a world where users worked out of the office and data lived in the cloud, two places where installing an appliance was difficult if not impossible.
Appliance-based SD-WAN and Internet-based VPNs provided an alternative to MPLS, but with tradeoffs. Because of the unpredictability of the public Internet, they couldn’t reliably provide the same low-latency performance as MPLS. They also faced the same form factor problem.
Cato Cloud solves these problems by providing a “best of both worlds” approach to WAN optimization. The converged nature of Cato’s Secure Access Service Edge (SASE) model makes cloud connectivity and mobile support possible without inefficient backhauling. Further, Cato provides a global private backbone with a 99.999% uptime SLA that delivers performance that meets or exceeds MPLS for most use cases.
This backbone consists of 50+ Points of Presence (PoPs) interconnected by multiple, Tier-1 providers. Traffic is optimally routed across these providers to ensure low-latency WAN connectivity across the globe. End-to-end route optimization and self-healing are built into the underlying cloud-native network to deliver high-performance connectivity in the middle mile. Additionally, Cato’s cloud-native network stack leverages network optimization techniques and tools like TCP proxies and advanced congestion management algorithms to improve WAN throughput.
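To make the routing idea concrete, here’s a minimal sketch of latency-based path selection across a backbone: model the PoPs as a weighted graph of measured latencies and pick the lowest-latency route with a shortest-path search. The PoP names and latency figures below are invented for illustration; this is not Cato’s actual topology or algorithm.

```python
import heapq

def lowest_latency_path(graph, src, dst):
    """Dijkstra's algorithm over a graph of PoPs weighted by measured latency (ms)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        latency, node, path = heapq.heappop(queue)
        if node == dst:
            return latency, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, ms in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (latency + ms, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical measured latencies between PoPs (ms)
pops = {
    "London":    {"New York": 72, "Frankfurt": 14},
    "Frankfurt": {"New York": 85, "Singapore": 160},
    "New York":  {"Singapore": 230},
    "Singapore": {},
}

latency, path = lowest_latency_path(pops, "London", "Singapore")
print(latency, path)  # 174 ['London', 'Frankfurt', 'Singapore']
```

A real backbone would re-run a calculation like this continuously as per-provider measurements change, which is what lets routing “self-heal” around a degraded carrier.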
Just how effective is Cato Cloud at optimizing the middle mile? Stuart Gall, Infrastructure Architect at Paysafe, can speak to that: “During our testing, we found latency from Cambridge to Montreal to be 45% less with Cato Cloud than with the public Internet, making Cato performance comparable to MPLS”. You can read more about how Paysafe replaced MPLS and Internet VPN with Cato here.
Last Mile Network Optimization Techniques: Compensating for Packet Loss and Jitter
While latency is primarily a middle-mile problem, link availability, packet loss, and jitter are common WAN performance challenges in the last mile. Cato Cloud enables WANs to mitigate these last mile problems using several network optimization techniques, including:
Packet Loss Mitigation
By breaking the connection into segments, Cato reduces the time to detect and recover lost packets. Where connections are too unstable, Cato duplicates packets across active/active connections for some or all applications.
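A rough sketch of the packet-duplication idea: send every packet on both links, so a packet is only lost end to end if it is dropped on all links at once. The per-link loss sets below are deterministic stand-ins for random last-mile loss, not Cato’s implementation.

```python
def send_duplicated(packets, link_losses):
    """Duplicate every packet across all links. link_losses[i] is the set of
    sequence numbers dropped on link i (a stand-in for random loss)."""
    received = set()
    for losses in link_losses:
        received |= {seq for seq in packets if seq not in losses}
    return received

packets = set(range(10))
link_a_losses = {2, 5, 7}   # illustrative drops on link A
link_b_losses = {3, 5}      # illustrative drops on link B
received = send_duplicated(packets, [link_a_losses, link_b_losses])
lost = packets - received
print(sorted(lost))  # [5] -- only a packet dropped on BOTH links is lost
```

The tradeoff is bandwidth: duplication doubles the traffic, which is why it makes sense selectively, for loss-sensitive applications on unstable links.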
Active/active link usage
Cato’s SD-WAN connects and manages multiple Internet links, routing traffic on both links in parallel. Using active-active, customers can aggregate capacity for production use instead of having idle backup links.
If packet loss jumps on one link, Cato automatically detects the change and switches traffic to the alternate link. When packet loss rates improve to meet predefined thresholds, traffic automatically returns to the primary link.
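This failover behavior can be sketched as a simple hysteresis rule: fail over when loss on the primary exceeds one threshold, and return only once loss falls below a lower one, so traffic doesn’t flap between links. The threshold values here are illustrative, not Cato’s actual settings.

```python
def choose_link(current, loss, failover_pct=2.0, restore_pct=0.5):
    """Fail over when primary loss exceeds failover_pct; switch back only
    once loss drops below restore_pct (hysteresis prevents flapping)."""
    primary_loss = loss["primary"]
    if current == "primary" and primary_loss > failover_pct:
        return "alternate"
    if current == "alternate" and primary_loss < restore_pct:
        return "primary"
    return current

history = []
link = "primary"
for loss_pct in (4.8, 1.2, 0.3):   # loss spikes, partially recovers, recovers
    link = choose_link(link, {"primary": loss_pct})
    history.append(link)
print(history)  # ['alternate', 'alternate', 'primary']
```

Note the middle step: at 1.2% loss the primary is below the failover threshold but above the restore threshold, so traffic stays on the alternate link rather than bouncing back prematurely.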
TCP Proxy with Advanced Congestion Control
Each Cato PoP acts as a TCP proxy server, “tricking” TCP clients and servers into “thinking” their destinations are closer than they really are, allowing them to set larger TCP windows. In addition, an advanced version of TCP congestion control allows endpoints connected to the Cato Cloud to send and receive more data and make better use of the available bandwidth. This increases total throughput and reduces the time needed to recover from errors.
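The benefit of terminating TCP at each PoP follows from the bandwidth-delay relationship: with a fixed window, per-connection throughput tops out at roughly window size divided by RTT, so splitting one long path into shorter proxied segments lets each segment run at a higher rate. A back-of-the-envelope sketch with illustrative numbers:

```python
def tcp_throughput_mbps(window_bytes, rtt_ms):
    """Rough per-connection ceiling: throughput ≈ window / RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

window = 64 * 1024                              # 64 KB receive window
end_to_end = tcp_throughput_mbps(window, 200)   # one 200 ms connection
per_segment = tcp_throughput_mbps(window, 50)   # proxied 50 ms segment
print(round(end_to_end, 1), round(per_segment, 1))  # 2.6 10.5
```

With the same 64 KB window, a single 200 ms connection caps out around 2.6 Mbps, while each 50 ms proxied segment can sustain roughly 10.5 Mbps, and short segments also mean retransmissions after loss happen over a much shorter round trip.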
Dynamic Path Selection and Policy-Based Routing (PBR)
Cato classifies and dynamically allocates traffic in real-time to the appropriate link based on predefined application policies and real-time link quality metrics.
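A minimal sketch of the policy-based selection idea: compare each link’s real-time metrics against the application’s policy thresholds and route to the best compliant link. The link names, metrics, and thresholds below are invented for illustration, not Cato’s policy model.

```python
def pick_link(app, links, policies):
    """Route the app to the lowest-latency link that meets its policy."""
    policy = policies[app]
    candidates = [
        name for name, m in links.items()
        if m["loss_pct"] <= policy["max_loss_pct"]
        and m["latency_ms"] <= policy["max_latency_ms"]
    ]
    if not candidates:
        return None
    # Among policy-compliant links, prefer the lowest latency
    return min(candidates, key=lambda name: links[name]["latency_ms"])

links = {
    "fiber": {"latency_ms": 18, "loss_pct": 0.1},
    "lte":   {"latency_ms": 45, "loss_pct": 0.8},
}
policies = {"voice": {"max_latency_ms": 50, "max_loss_pct": 1.0}}

first = pick_link("voice", links, policies)    # fiber: compliant, lowest latency
links["fiber"]["loss_pct"] = 3.0               # fiber loss spikes above policy
second = pick_link("voice", links, policies)   # voice shifts to lte
print(first, second)  # fiber lte
```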
Just how effective are these features in the real world? RingCentral testing has shown Cato Cloud can deliver high-quality voice connectivity across Internet links with up to 15% packet loss.
Cloud Network Optimization Techniques: Optimal Egress & Shared Datacenter Footprint
With so many workloads residing in the cloud, low-latency connectivity to cloud service providers has become a major part of network optimization for the modern enterprise. Often, this entails purchasing expensive premium connections like AWS Direct Connect or Azure ExpressRoute.
With Cato, premium connectivity is built into Cato Cloud. Cato PoPs are often in the same physical datacenters as the entrance points to cloud datacenter services, such as AWS and Azure. The latency from Cato to the cloud datacenter is often a matter of just hopping across the local network. Latency to the designated PoP is minimized by Cato’s intelligent routing. Further, by using advanced congestion management algorithms and TCP proxies, Cato optimizes throughput for bandwidth-intensive operations such as large file transfers.
But how much of a difference can Cato actually make? Cato’s cloud acceleration can improve end-to-end throughput to cloud services by 20x or more.
Cato Cloud Modernizes WAN Optimization
As we’ve seen, Cato Cloud’s multi-segment WAN optimization approach enables enterprises to address the challenges facing network engineers today. By taking a holistic approach to optimization, enterprises can improve QoE for cloud, mobile, and on-premises regardless of WAN size.
To see the benefits of Cato Cloud in action, hear how Cato improves voice quality by checking out our SD-WAN & UCaaS: Better Together webinar or try this SD-WAN Demo. If you have questions about how best to optimize your WAN, contact us today.