Cato overcomes the technical shortcomings undermining Amazon WorkSpaces deployments

August 11, 2020

Interest in Amazon WorkSpaces, a managed, secure Desktop-as-a-Service solution, continues to grow as enterprises look for ways to reduce costs, simplify their infrastructure, and support remote workers. Companies can use Amazon WorkSpaces to provision Windows or Linux desktops in minutes and quickly scale to meet workers’ needs around the world. The service has been a boon to companies during the pandemic as millions of workers were told to work from home with very little notice or time to set up a proper home office. With WorkSpaces, “the office” can be in the cloud. However, Amazon’s regional hosting requirements can cause application performance issues. Here are the networking and security issues to consider and how the cloud acceleration and control techniques of Cato’s SASE platform address them.

Eenie, Meenie, Miney, Mo: Pick Your Amazon Region Carefully

Amazon WorkSpaces is available in 13 AWS regions across the globe. A region is a physical location where Amazon clusters its datacenters and hosts applications such as WorkSpaces. When a customer first sets up WorkSpaces, it chooses the regional datacenter that will host the application and data resources. Amazon allows only a single regional location, regardless of how dispersed the customer’s users are.

So, for example, an organization that is headquartered in Atlanta in the United States might choose Amazon’s US East region to host the resources for the entire enterprise. This may be just fine for those employees in or close to the Atlanta office, but it doesn’t work so well for the company’s workers located in Europe, Asia-Pacific, or Latin America. In the case of a hosted application like WorkSpaces, location – and specifically the distance from the host datacenter – matters very much.

In fact, in Amazon’s own FAQs about WorkSpaces, the company advises, “If you are located more than 2,000 miles from the regions where Amazon WorkSpaces is currently available, you can still use the service, but your experience may be less responsive.” For global organizations, that can be a problem.

Public Internet Routing: A Buzzkill for Productivity

Legions of workers who are now home-based are using their public Internet connections to access their Amazon WorkSpaces. This definitely has some ramifications for latency, performance, and ultimately, the user experience.

Take, for example, that Atlanta-based company, which has an application development team in Bangalore, India. Most of the team members work from home. Each developer has access to a personal WorkSpace on Amazon’s network. A worker receives client software from Amazon that establishes the connection to the WorkSpace. The worker opens a laptop and clicks on the icon to open the WorkSpace application.

There are two problems from a networking perspective, though. The first is latency. The worker’s packets must traverse the entire path to the Amazon US East datacenter over the public Internet, so the distance they travel is quite long. The natural latency of that distance is only exacerbated by TCP inefficiencies as well as public Internet routing.

The TCP handshake would take an extraordinarily long time. When the worker in India sends traffic to the datacenter, the receiving end sends back an acknowledgement that the data arrived as expected. A roundtrip of that send/acknowledge exchange can take hundreds of milliseconds, and in the meantime the circuit is tied up waiting for the response. It’s an incredibly inefficient use of resources.
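To put rough numbers on this, here is a back-of-the-envelope sketch. The 220 ms round-trip time is an assumption for a public-Internet path between Bangalore and the AWS US East region, not a measured figure:

```python
# Rough illustration of how round-trip time (RTT) dominates connection
# setup on a long-haul TCP session. RTT value is an assumption.

RTT_MS = 220.0  # assumed round trip: Bangalore -> US East -> Bangalore

# TCP's three-way handshake costs one full RTT before any data flows;
# a TLS 1.2 handshake on top of it adds roughly two more RTTs.
tcp_handshake_ms = RTT_MS
tls12_handshake_ms = 2 * RTT_MS
total_setup_ms = tcp_handshake_ms + tls12_handshake_ms

print(f"Setup time before the first byte of data: {total_setup_ms:.0f} ms")
```

Over half a second elapses before a single byte of desktop traffic moves, and the same RTT then throttles every send/acknowledge cycle for the life of the session.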

Even if the user is connected to an SD-WAN solution, if it doesn’t have a private global backbone and simply uses the public Internet for transport, there’s really no way to reduce Internet latency between India and the US East datacenter in Northern Virginia.

The second problem is packet loss. Packet drops at congested exchange points and in the last mile can be pretty significant. With each packet drop, more time is needed to retransmit the packet. The bottom line: the combination of long distance and high packet loss results in latency and retransmits, which in turn beget poor application performance.
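The interaction between loss and distance can be estimated with the well-known Mathis model, which bounds steady-state TCP throughput by MSS / (RTT × √p). The MSS, RTT, and loss values below are illustrative assumptions:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate TCP throughput ceiling (Mathis model):
    throughput <= (MSS / RTT) * (1 / sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = (mss_bytes / rtt_s) * (1.0 / math.sqrt(loss_rate))
    return bytes_per_s * 8 / 1e6  # convert to megabits per second

# Assumed values: 1460-byte MSS, 220 ms RTT, 1% loss on the public path.
lossy = mathis_throughput_mbps(1460, 220, 0.01)
# Same path with 0.01% loss, as on a managed backbone.
clean = mathis_throughput_mbps(1460, 220, 0.0001)
print(f"1% loss: {lossy:.2f} Mbps  vs  0.01% loss: {clean:.2f} Mbps")
```

With these assumed numbers, cutting the loss rate a hundredfold raises the throughput ceiling tenfold, which is why loss in the middle mile hurts a long-haul session far more than raw bandwidth suggests.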

Cato Resolves Latency and Other Issues for Amazon Workspaces

With Cato’s cloud datacenter integration, Amazon WorkSpaces and any cloud resource become native participants of the enterprise network. More specifically, Cato takes several steps to improve the user experience with Amazon WorkSpaces.

Let’s take those same workers in India trying to access an application on Amazon US East. When users open their laptops and click the icon to access WorkSpaces, they connect to a Cato Client (if working remotely) or a Cato Socket (if working in a branch office), which then routes the traffic to the nearest Cato PoP. With more than 50 PoPs worldwide, the traffic will travel only a short distance until reaching the Cato network.

If there are multiple communication links at the user’s location, as is common for branch offices, the remote desktop traffic can be duplicated to be carried over both links at the same time. In this way, if loss rates are high in the last mile, or there’s some other issue, the data is replicated over the second link for reliability.
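The idea can be sketched in a few lines. This is a simplified model of duplicate-and-deduplicate (link names, loss patterns, and the sequence-number scheme are illustrative, not Cato internals):

```python
# Sketch of last-mile packet duplication: each packet is sent over two
# links; the receiver keeps one copy per sequence number. Loss is simulated.

def deliver(packets, lost_seqs):
    """Simulate one link: drop packets whose sequence number is in lost_seqs."""
    return [p for p in packets if p[0] not in lost_seqs]

def dedupe(*streams):
    """Merge the duplicated streams, keeping one copy per sequence number."""
    seen = {}
    for stream in streams:
        for seq, payload in stream:
            seen.setdefault(seq, payload)
    return [(s, seen[s]) for s in sorted(seen)]

packets = [(i, f"chunk-{i}") for i in range(5)]
link_a = deliver(packets, lost_seqs={1, 3})  # lossy primary link
link_b = deliver(packets, lost_seqs={3, 4})  # lossy secondary link
merged = dedupe(link_a, link_b)
print([seq for seq, _ in merged])
```

Only a packet lost on both links at once (sequence 3 here) goes missing, so the effective loss rate is roughly the product of the two links’ individual loss rates.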

When the TCP traffic gets to the Cato PoP, it will terminate there at Cato’s TCP proxy. That means a handshake can be sent back to the user to confirm receipt of the data packets at the PoP, which then frees the circuit for other uses.
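The benefit of terminating TCP close to the user can be quantified with a toy comparison (both RTT figures are assumptions for illustration, not measurements of Cato’s network):

```python
# Why split TCP at a nearby PoP helps: the client's acknowledgement and
# flow-control loop runs against the PoP's RTT, not the end-to-end RTT.

END_TO_END_RTT_MS = 220.0  # assumed direct path: Bangalore <-> US East
LAST_MILE_RTT_MS = 10.0    # assumed path: Bangalore user <-> nearest PoP

# With a TCP proxy at the PoP, the client is ACKed in ~10 ms instead of
# ~220 ms, so its send window refills far faster.
speedup = END_TO_END_RTT_MS / LAST_MILE_RTT_MS
print(f"Client-side ACK loop is ~{speedup:.0f}x tighter via the PoP")
```

The long-haul leg still exists, of course, but it runs between PoPs on the backbone, where loss and congestion are controlled, rather than inside the client’s feedback loop.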

From the Cato PoP, the data packets travel the middle mile from India to a Cato PoP in the U.S. over the SLA-backed Cato global, private backbone. There’s no congestion on the backbone and no packet loss. The Cato network is also continuously optimized to send traffic over the fastest path to the destination PoP. Cato PoPs are co-located in the same physical datacenters as Amazon’s regions, which puts the data packets in very close proximity to Amazon, typically within two milliseconds.
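Choosing the fastest path across a meshed backbone is, at its core, a shortest-path computation over measured link latencies. Here is a minimal sketch; the PoP names and latency figures are invented for illustration and do not describe Cato’s actual topology or routing algorithm:

```python
import heapq

# Toy backbone: symmetric links with measured latencies in milliseconds.
LINKS = {
    "mumbai":    {"frankfurt": 110, "singapore": 35},
    "frankfurt": {"mumbai": 110, "ashburn": 90},
    "singapore": {"mumbai": 35, "ashburn": 210},
    "ashburn":   {"frankfurt": 90, "singapore": 210},
}

def fastest_path(src, dst):
    """Dijkstra's algorithm over link latencies; returns (path, total ms)."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, ms in LINKS[node].items():
            nd = d + ms
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

print(fastest_path("mumbai", "ashburn"))
```

In this toy topology the route through Frankfurt (200 ms) beats the geographically shorter hop through Singapore (245 ms), illustrating why latency-based selection can differ from distance-based routing.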

Cato Improves Data Security, Too

There are a few ways to enforce security in this scenario. First, we restrict access to and from Cato. On the Cato side, the customer can create a policy that assigns a permanent IP address explicitly to traffic going from Cato to Amazon WorkSpaces. Then on the AWS side, the customer can restrict access into WorkSpaces to only traffic coming from that specific IP address.
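The AWS-side check amounts to source-IP allowlisting. The sketch below shows the logic with Python’s standard ipaddress module; the address used is from the RFC 5737 documentation range, standing in for the fixed Cato egress IP:

```python
import ipaddress

# The fixed egress IP assigned to Cato-to-WorkSpaces traffic.
# 203.0.113.10 is a documentation address, not a real Cato egress IP.
ALLOWED_SOURCES = [ipaddress.ip_network("203.0.113.10/32")]

def is_allowed(source_ip: str) -> bool:
    """Admit only traffic whose source falls inside an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(is_allowed("203.0.113.10"))  # traffic from the Cato egress IP
print(is_allowed("198.51.100.7"))  # traffic from anywhere else
```

In practice this rule lives in the AWS configuration (WorkSpaces supports IP-based access control), but the effect is the same: anything not arriving from Cato’s fixed address never reaches the desktops.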

As for traffic going from WorkSpaces back to the end user, traffic is sent back to Cato Cloud where it’s run through the Cato security stack. Currently, the stack includes a next-generation firewall-as-a-service (FWaaS), a secure web gateway (SWG) with URL filtering, standard and next-generation anti-malware (NGAV), a managed IPS-as-a-service (IPS), and a comprehensive Managed Threat Detection and Response (MDR) service to detect compromised endpoints.

Amazon WorkSpaces and Cato Are a Match Made in the Cloud

Amazon WorkSpaces can make workers more productive with a virtual desktop in the cloud that they can access from anywhere. Cato helps customers overcome the technical shortcomings undermining WorkSpaces deployments with network optimization and security capabilities that aren’t available from other SD-WAN solutions.

Jerry Young

Jerry serves as the Sales Engineer for the Midwest team at Cato Networks, supporting and designing customer solutions. Prior to Cato, he was the Principal Solution Architect for an MSP/CLEC, designing a wide variety of network and security solutions. He has over 20 years of experience in networking and security and has focused the last six years specifically on the SD-WAN/SASE marketplace, helping develop managed service offerings around the majority of the market leaders.