June 07, 2020 · 5 min read

The Path of a Packet in Cato’s SASE Architecture

Dave Greenfield



The business environment is in a state of continuous change. So, too, are the supporting technologies that enable a business to rapidly shift priorities to adapt to new market conditions and customer trends. In particular, the emergence of cloud computing and user mobility have increased business agility, allowing rapid response to new opportunities.

The network of old needs to change to accommodate the phenomenal growth of cloud and mobility. It’s impractical to centralize a network around an on-premises datacenter when data and applications increasingly are in the cloud and users are wherever they need to be—on the road, at home, at a customer site, in a branch office.

Incorporating the Internet into the enterprise network reduces costs and lets companies connect resources anywhere, but security is paramount. Security must be an inherent part of the network, which is why Gartner expects networking and security to converge. They’ve dubbed this converged architecture SASE, or secure access service edge. SASE moves security out of the legacy datacenter and closer to where users, data and applications reside today. In this way, security comes to the traffic, rather than the traffic going to security.


MPLS, SD-WAN and SASE

Just what does it all mean in terms of how a data packet flows through this converged architecture to get from Point A to Point B? Let’s break it down to the various network stages to discuss how Cato applies security services and various optimizations along the way.

The Last Mile: Just Enough Smarts to Bring Packets to the Cato PoP

Start with the traffic being sent from a user in an office across the “last mile,” or what some might call the “first mile.” (Cato connects remote users and cloud resources as well, but we’ll focus on site connectivity in this example.) The user’s traffic is sent to Cato’s SD-WAN edge device, the Cato Socket, sitting on the local network.

The Cato Socket provides just enough intelligence to get the packet to the Cato point of presence (PoP), which is where the real magic happens. The Cato Socket addresses issues that can impact delivering the packet across the last mile to the nearest Cato PoP.

The Socket classifies and dynamically routes traffic based on application type and real-time link quality (packet loss, latency, utilization). Robust application-prioritization capabilities allow enterprises to align last-mile usage with business needs by prioritizing and allocating bandwidth by application. Latency-sensitive applications, such as voice, can be prioritized over other applications, such as email. Enterprises also can prioritize bandwidth usage within applications using Cato’s identity-aware routing capabilities. In this way, for example, sales VoIP traffic can be prioritized above generic voice traffic. And Cato overcomes ISP packet loss and congestion in the last mile by sending duplicate packets over multiple links.
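The last-mile decision described above can be sketched in a few lines. This is a hypothetical illustration, not Cato's actual Socket logic: the link names, metrics, priority table, and duplication threshold are all invented for the example.

```python
# Hypothetical sketch of app-aware last-mile link selection: latency-sensitive
# apps get the lowest-latency link, bulk apps the least-utilized one, and
# packet duplication kicks in when the chosen link shows high loss.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    loss_pct: float      # measured packet loss, percent
    latency_ms: float    # measured round-trip latency
    utilization: float   # 0.0 to 1.0

# Higher number = higher priority (values invented for illustration).
APP_PRIORITY = {"voip": 3, "email": 1, "default": 2}

def pick_link(links, app, duplicate_threshold_pct=1.0):
    """Return (primary_link, duplicate?) for a flow of the given app type."""
    usable = [l for l in links if l.utilization < 0.95]
    if APP_PRIORITY.get(app, 2) >= 3:
        # Latency-sensitive: minimize latency.
        best = min(usable, key=lambda l: l.latency_ms)
    else:
        # Bulk traffic: spread load onto the least-utilized link.
        best = min(usable, key=lambda l: l.utilization)
    # Send duplicate packets over a second link when loss is high.
    duplicate = best.loss_pct > duplicate_threshold_pct and len(usable) > 1
    return best, duplicate
```

A voice flow would land on the low-latency link even if it is busier, while an email flow would take the emptier link and, if that link is lossy, be duplicated across a second one.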

The Middle Mile: Improving the Network While Protecting Users

When the packet arrives at the Cato PoP, it’s decrypted and then Cato applies its suite of network and security optimizations on the packet. Cato independently optimizes the middle mile. Every one of our 50+ PoPs is interconnected with the others in a full mesh by multiple tier-1 carriers with SLAs on loss and latency.

When traffic is to be sent from one PoP to another, Cato software calculates multiple routes for each packet to identify the shortest path across the mesh. Cato also continuously measures latency and packet loss of the tier-1 carriers connecting the PoPs. Traffic is placed on the best path available and routed across that provider’s network end-to-end. Direct routing to the destination is often the right choice, but in some cases traversing an intermediary PoP or two is the more expedient route.
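The path computation over the mesh can be illustrated with a standard shortest-path search. This is a simplified sketch, not Cato's actual algorithm: the PoP names, the measured latencies and loss rates, and the weight that blends them into a single edge cost are all invented assumptions.

```python
# Illustrative Dijkstra over a full mesh of PoPs, where each edge cost blends
# the measured latency and loss of the carrier connecting that PoP pair.
import heapq

def edge_cost(latency_ms, loss_pct, loss_weight=50.0):
    # Loss is penalized heavily relative to raw latency (weight is invented).
    return latency_ms + loss_weight * loss_pct

def best_path(mesh, src, dst):
    """mesh[a][b] = (latency_ms, loss_pct). Returns the lowest-cost PoP path."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, (lat, loss) in mesh[node].items():
            nd = d + edge_cost(lat, loss)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1]
```

With invented measurements where the direct long-haul link is lossy, the search picks a two-hop route through an intermediary PoP, matching the "expedient route" behavior described above.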

Routing across a global private backbone end-to-end also reduces packet loss that often occurs at the handoff between carriers. Next, each Cato PoP acts as a TCP proxy to maximize the transmission rate of clients, increasing total throughput dramatically. Our customers frequently report 10x-30x improvement in file download speeds.
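A back-of-the-envelope calculation shows why a split-TCP proxy at the PoP helps. The widely used Mathis steady-state bound says TCP throughput is at most (MSS/RTT) x (C/sqrt(loss)); when a proxy splits one long lossy path into a short lossy last mile plus a long clean backbone leg, each TCP connection sees a much better RTT/loss combination. The path numbers below are invented for illustration only.

```python
# Illustrative math, not a measurement of Cato's network: compare one
# end-to-end TCP connection with a proxied (split) connection, using the
# Mathis et al. throughput bound: rate <= (MSS/RTT) * (C / sqrt(loss)).
from math import sqrt

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss, c=1.22):
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8 / 1e6) / rtt_s * (c / sqrt(loss))

# One TCP connection over a 200 ms Internet path with 1% loss...
e2e = mathis_throughput_mbps(1460, 200, 0.01)

# ...versus the bottleneck segment when a PoP proxy splits it into a 20 ms
# lossy last mile and a 180 ms backbone leg with 0.01% loss.
last_mile = mathis_throughput_mbps(1460, 20, 0.01)
backbone = mathis_throughput_mbps(1460, 180, 0.0001)
split = min(last_mile, backbone)  # the slower segment bounds the transfer
```

Under these assumed numbers the split connection is bounded by the last-mile segment, whose throughput ceiling is 10x the end-to-end connection's, which is in the same ballpark as the improvements mentioned above.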

In addition to network improvements, Cato also provides a fully managed suite of enterprise-grade and agile network security capabilities directly built into the Cato Global Private Backbone. Current services include a next-gen firewall/VPN, Secure Web Gateway, Advanced Threat Prevention, Cloud and Mobile Access Protection, and a Managed Threat Detection and Response (MDR) service.

Unlike other SASE vendors that treat network and security deep packet inspection as serial activities, Cato inspects each packet once for both network optimization and security, providing a real boost to performance.

Cato uses a single DPI engine for both network routing and security decisions. The packet is decrypted and all security policy enforcements and network optimizations are done in parallel. The security policy enforcement refers to the security capabilities of Cato: NGFW to permit/prevent communication with a location/user/IP address; URL filtering to permit/prevent communication with Internet resources; anti-malware (advanced and regular) inspection; and network-based threat prevention. This allows for maximum efficiency of packet processing.
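The single-pass idea can be sketched as follows: the packet is decrypted and parsed once into a shared context, and every policy engine consults that one context instead of re-parsing the stream. This is a conceptual illustration, not Cato's implementation; the engine names, verdict values, and block rules are invented, and the engines run serially here for clarity where the text describes parallel enforcement.

```python
# Hypothetical single-pass inspection pipeline: one parse, many engines,
# all reading the same PacketContext. All rules below are invented examples.
from dataclasses import dataclass, field

@dataclass
class PacketContext:
    src_ip: str
    dst_ip: str
    url: str = ""
    payload: bytes = b""
    verdicts: dict = field(default_factory=dict)

def firewall(ctx):
    # NGFW rule: example blocked destination (invented).
    ctx.verdicts["firewall"] = "block" if ctx.dst_ip == "10.0.0.99" else "allow"

def url_filter(ctx):
    blocked = {"malware.example.com"}
    ctx.verdicts["url"] = "block" if ctx.url in blocked else "allow"

def anti_malware(ctx):
    # Toy signature match on the already-parsed payload.
    ctx.verdicts["av"] = "block" if b"EICAR" in ctx.payload else "allow"

ENGINES = [firewall, url_filter, anti_malware]

def inspect(ctx):
    """One decryption/parse; every engine sees the same context."""
    for engine in ENGINES:
        engine(ctx)
    return all(v == "allow" for v in ctx.verdicts.values())
```

The efficiency gain in this model comes from the shared context: adding another engine adds a policy check, not another full decrypt-and-parse of the packet.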

The Last, Last Mile: Reaching from Cato to Destination

Packets are directed across the Cato private backbone to the PoP closest to the destination. The packet egresses from the PoP and is sent to the destination. For cloud applications, we set egress points on our global network to get internet traffic for specific apps to exit at the Cato PoP closest to the customer application instance (like Office 365).
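Egress selection for a cloud app reduces to picking the PoP whose exit sits closest to that app's instance. The sketch below is a deliberately minimal illustration; the PoP names, apps, and latency figures are invented, not Cato's actual measurements.

```python
# Hypothetical app-aware egress selection: exit the backbone at the PoP with
# the lowest measured latency to the app's nearest instance.

# Measured latency (ms) from each PoP to a given app's instance (invented).
APP_LATENCY = {
    "office365":  {"dublin": 4,  "ashburn": 18, "singapore": 90},
    "salesforce": {"dublin": 25, "ashburn": 6,  "singapore": 70},
}

def egress_pop(app):
    """Return the PoP whose Internet exit lands closest to the app."""
    candidates = APP_LATENCY[app]
    return min(candidates, key=candidates.get)
```

In this model, traffic for each app rides the private backbone as far as possible and only touches the public Internet for the short final hop from the chosen egress PoP.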

For cloud datacenters, the Cato PoPs are collocated in datacenters directly connected to the same Internet Exchange Points (IXPs) as the leading IaaS providers, such as Amazon AWS and Microsoft Azure. This means we drop the traffic right in the cloud provider’s datacenter, in the same way premium connections (like AWS Direct Connect and Azure ExpressRoute) work. Those connections are no longer needed when using Cato.

[Diagram: multi-cloud and hybrid cloud integration]

Summary

Enterprises today need a network with the capabilities and flexibility to meet their business challenges. By adding security into the network stack, as Cato’s SASE architecture does, the network can be more efficient in helping the enterprise achieve its business goals.

[Cato Diagram]

With Cato’s SASE platform, branches send data along encrypted tunnels across Internet last miles to the nearest PoP. Cato’s one-pass architecture applies all security inspections and network optimizations in parallel to the packet. The packet is then sent across Cato’s optimized middle mile to the datacenter.


Dave Greenfield

Dave Greenfield is a veteran of the IT industry. He’s spent more than 20 years as an award-winning journalist and independent technology consultant. Today, he serves as a secure networking evangelist for Cato Networks.
