Why a Backbone Is More Than Just a Bunch of PoPs
Since SASE’s introduction, many networking and security vendors have rushed to capitalize on the market by partnering with other providers to include cloud backbones as part of their SASE offerings. But SASE isn’t just a bunch of features in appliances managed from the cloud. It’s about building a true cloud service, one that delivers optimal, secure access to your sites, mobile users, and cloud resources regardless of their location. Achieving that lofty goal requires far more than simply partnering with a global backbone provider. Here’s why.
Simple PoPs Have Shortcomings
Every vendor claiming a SASE solution touts a worldwide deployment of Points of Presence (PoPs). But you must consider how that network is actually architected.
Most vendors claiming a SASE solution host their PoPs in a datacenter provided by Amazon (AWS), Google (GCP), or Microsoft (Azure). The PoP is just a connection point – a gateway, of sorts – where the external world (i.e., your sites) connects to the hosting provider. It is not where data is managed or secured. Those functions take place in a separate compute location or datacenter. Thus, when your traffic reaches a PoP, the PoP sends the traffic across the provider's backbone to that separate compute location, and this additional hop adds latency. What's more, while a SASE vendor may claim to have 100+ PoPs, it may only have 20 or 25 compute locations in the world, creating a funnel effect for traffic. This architecture is inherently inefficient and adds latency to every traffic flow.
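To make that extra hop concrete, here is a minimal back-of-the-envelope sketch in Python. The hop latencies are hypothetical placeholders, not measurements from any vendor or region; the only point is that detouring traffic through a separate compute location adds a leg that in-PoP processing avoids.

```python
# Hypothetical one-way latencies (ms), for illustration only; real values vary by region.
HOP_MS = {
    "site_to_pop": 15,            # branch office to the nearest PoP
    "pop_to_compute": 20,         # extra leg when inspection runs in a separate compute location
    "compute_to_destination": 30, # on to the SaaS app, datacenter, or Internet destination
}

def latency_with_separate_compute() -> int:
    """PoP acts only as a gateway: traffic detours to a remote compute location first."""
    return HOP_MS["site_to_pop"] + HOP_MS["pop_to_compute"] + HOP_MS["compute_to_destination"]

def latency_with_compute_in_pop() -> int:
    """Inspection happens inside the PoP itself, so the detour disappears."""
    return HOP_MS["site_to_pop"] + HOP_MS["compute_to_destination"]

if __name__ == "__main__":
    print(f"Separate compute location: {latency_with_separate_compute()} ms")
    print(f"Compute inside the PoP:    {latency_with_compute_in_pop()} ms")
```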
Of course, that compute location isn't the final destination for your traffic. The traffic is typically bound for a SaaS application – perhaps one that isn't hosted in the same region – or for your own datacenter or another branch office (i.e., site-to-site traffic), or for the general Internet. Now you must consider: how do we maintain a reliable network between these points? How do we manage inbound quality of service? How do we ensure good performance? Can we prioritize our key applications, use cases, and workloads across the network? SD-WAN will not do these things for you.
The fact is, many SASE providers use the public Internet as the backbone network between their PoPs. Traffic placed on this backbone simply follows the Internet's best-effort default path. There is no predictability and no SLA for performance, and little or no control over the packets traveling across that Internet backbone. If you can't control the performance of this traffic, you lose control over your applications and thus over the user experience.
“Best Effort” Isn’t Always the Best Way
I have talked with numerous enterprises that replaced MPLS circuits with SD-WAN. Initially, users and management are often happy: IT reduced circuit costs and gained some application steering capabilities at the local level, meaning traffic goes to the SASE provider's PoPs to be forwarded to SaaS or IaaS applications or the Internet. Things work fine, until they don't.
When performance problems arise, customers don't know who to call, because the WAN is moving traffic over the "best effort" Internet. That path is not under the SASE provider's control, so there are no guarantees for performance or quality of service. The network works fine 70% or 80% of the time, but the rest of the time there is packet loss, jitter, and high latency. There's no way to know where the issue is or how to resolve it. As a result, critical applications like voice can really suffer in this model.
Cato’s PoPs Are on a Global Private Backbone
Cato also has a global network of PoPs – more than 60 at this writing – but this network has a different, far more efficient architecture. Cato's PoPs run multitenant software in our own datacenters, not in a Google, Microsoft, or Amazon cloud, and they perform the network and security functions in those very same datacenters. So unlike hosted SASE solutions, Cato doesn't have to send your traffic to a separate compute location to manage and secure data, which eliminates that extra latency.
Moreover, there is a 1:1 ratio of PoPs to compute locations, because each PoP and its compute location are one and the same datacenter. This matters because Gartner says the density and scope of coverage will be critical to the success of SASE.
Now let’s talk about the network backbone connecting the PoPs. Instead of using the general Internet to connect them, Cato has a global private backbone. It consists of the global, geographically distributed, SLA-backed PoPs interconnected by multiple tier-1 carriers. The IP transit services on these carriers are backed by “five 9s” availability and minimal packet loss guarantees. As such, the Cato Cloud network has predictable and consistent latency and packet loss metrics, unlike the public Internet.
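As a quick aside on what "five 9s" means in practice, the snippet below converts an availability percentage into maximum allowed downtime per year. The arithmetic is standard; the figures are illustrative and not taken from any specific carrier SLA.

```python
def downtime_per_year(availability_pct: float) -> float:
    """Return the maximum downtime in minutes per year implied by an availability percentage."""
    minutes_per_year = 365 * 24 * 60
    return minutes_per_year * (1 - availability_pct / 100)

if __name__ == "__main__":
    for pct in (99.9, 99.99, 99.999):
        print(f"{pct}% availability -> {downtime_per_year(pct):.1f} minutes of downtime per year")
```

At 99.999% availability that works out to roughly five minutes of downtime per year, versus more than eight hours per year at 99.9%.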
Cato’s cloud-native software provides global routing optimization, self-healing capabilities, WAN optimization for maximum end-to-end throughput, and full encryption for all traffic traversing the network.
Cato’s global PoPs are connected in a full-mesh topology to provide optimal global routing. The Cato software calculates multiple routes for each packet to identify the shortest path across the mesh. Direct routing to the destination is often the right choice, but in some cases traversing an intermediary PoP (or two) is the better route.
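The snippet below is a toy illustration of that idea, not Cato's actual routing code: given a small mesh of PoPs with hypothetical per-link latencies, a standard shortest-path search sometimes prefers a relay through an intermediary PoP over the congested direct link.

```python
import heapq

# Hypothetical one-way latencies (ms) between three PoPs in a small mesh; illustrative only.
MESH = {
    ("A", "B"): 90, ("B", "A"): 90,   # congested direct link
    ("A", "C"): 30, ("C", "A"): 30,
    ("C", "B"): 40, ("B", "C"): 40,
}

def shortest_path(src: str, dst: str) -> tuple[float, list[str]]:
    """Dijkstra over the latency-weighted mesh; returns (total latency, path of PoPs)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for (a, b), latency in MESH.items():
            if a == node and b not in seen:
                heapq.heappush(queue, (cost + latency, b, path + [b]))
    return float("inf"), []

if __name__ == "__main__":
    cost, path = shortest_path("A", "B")
    print(f"Best route A -> B: {' -> '.join(path)} at {cost} ms "
          f"(direct link is {MESH[('A', 'B')]} ms)")
```

In this made-up mesh the relay A -> C -> B totals 70 ms and beats the 90 ms direct link, which is exactly the case where routing via an intermediary PoP pays off.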
Cato Uses Multiple Cloud Optimization Techniques
Cato natively supports cloud datacenter (IaaS) and cloud application (SaaS) resources without additional configuration, complexity, or point solutions. Specific optimizations include:
- Shared Internet Exchange Points (IXPs), where Cato PoPs are colocated in datacenters directly connected to the IXPs of the leading IaaS providers, such as Amazon AWS, Microsoft Azure, and Google Cloud Platform.
- Optimized Cloud Provider (IaaS) Access, in which Cato places PoPs on the AWS and Azure infrastructure.
- Optimized Public Cloud Application (SaaS) Access, whereby SaaS traffic sent to the Cato Cloud will route over the Cato backbone, exiting at the PoP nearest to the SaaS application.
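As a rough illustration of that last point, the sketch below picks an egress PoP by minimizing a hypothetical PoP-to-SaaS latency table. The PoP names and latency values are made up for the example; this is not Cato's actual placement logic, only the general idea of exiting the backbone as close to the application as possible.

```python
# Hypothetical latency (ms) from candidate egress PoPs to a given SaaS application's region;
# illustrative values only, not measurements of any real deployment.
POP_TO_SAAS_MS = {
    "frankfurt": 12,
    "ashburn": 85,
    "singapore": 180,
}

def pick_egress_pop(latencies: dict[str, int]) -> str:
    """Choose the PoP closest (lowest latency) to the SaaS application as the backbone exit point."""
    return min(latencies, key=latencies.get)

if __name__ == "__main__":
    pop = pick_egress_pop(POP_TO_SAAS_MS)
    print(f"Traffic stays on the backbone and exits at the '{pop}' PoP, "
          f"{POP_TO_SAAS_MS[pop]} ms from the SaaS application")
```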
Cato Has Strong Security at the PoPs
Another differentiator for Cato is the full stack of security solutions embedded in every PoP. Security is applied to all traffic at the PoP before it continues to its final destination, whether that's another branch, a SaaS application, a cloud platform, or the Internet.
The enterprise-grade security includes an application-aware next-generation firewall-as-a-Service (FWaaS), secure web gateway with URL filtering (SWG), standard and next-generation anti-malware (NGAV), and a managed IPS-as-a-Service (IPS). Cato can further secure your network with a comprehensive Managed Threat Detection and Response (MDR) service to detect compromised endpoints. Zero Trust Network Access (ZTNA) is also part of the integrated security offering.
Cato’s PoPs Deliver Extra Value
At Cato, we have a lot of engagements with enterprise organizations that deployed SD-WAN plus an SWG with PoPs. Most are not satisfied with the results. They complain about the lack of full visibility, the lack of full control over WAN and Internet traffic, and the lack of unification. Part of my job as a sales engineering manager is to reassure them that the solution they really want has existed for five years now and is deployed by thousands of customers. Talk to us about how Cato's network of PoPs offers quite a lot of value beyond simply connecting edge locations into the WAN.