Cato Networks Response to UK’s NCSC Guidance On Tightening Cyber Control Due to the Situation in Ukraine

Last week the United Kingdom’s National Cyber Security Centre (NCSC) urged UK organizations “to strengthen their cyber resilience in response to the situation in Ukraine”, and today it followed that warning up with a call for “organisations in the UK to bolster their online defences” by adopting a set of “Actions to take when the cyber threat is heightened.” Similar statements have been issued by other authorities, such as Germany’s Federal Office for Information Security (BSI) and CISA in the US.
As a global provider of the converged network and security solution known as SASE (Secure Access Service Edge), Cato Networks has a rapidly expanding portfolio of customers, not just here in the UK but in many other regions around the world that are also exposed to the current situation. Here are some suggestions for Cato customers who wish to enhance their security posture in accordance with the NCSC’s advice.
Step 1 - Lock administrative access down.
Cato’s true single-pane-of-glass management console makes it easy for organisations to understand and control exactly who can make changes to their Cato SASE environment. Customers can use the built-in Events Discovery (effectively, your own SIEM running inside Cato) to filter for admin users who haven’t recently logged on, and then disable them. Admin user MFA should be enabled across the board, and any administrators who don’t make changes (such as auditors) should be given viewer-only accounts.
This is also a good opportunity to review API keys and revoke any which are no longer required.
Step 2 - Review SDP user account usage.
Now is also the right time to review SDP users: disable or delete stale user accounts, ensure that directory synchronisation and SCIM groups are appropriately configured, and filter all manually created SDP users for unexpected third-party users. Customers should also check that any user-specific configuration settings which override global policy are there for good reason and do not expose the organisation to increased risk.
[boxlink link="https://www.catonetworks.com/resources/inside-cato-networks-advanced-security-services/?utm_source=blog&utm_medium=top_cta&utm_campaign=inside_cato_advanced_security_services"] Inside Cato Networks Advanced Security Services | Find Out [/boxlink]
Step 3 - Tighten access controls.
Cato provides a wide range of access control features, including Device Authentication, Device Posture (currently EA), MFA, SSO, operating system blocking and Always-On connectivity policy. Customers should take full advantage of the tight access control Cato makes possible by implementing as many of these features as possible and minimising user-based exceptions to the global policy.
Step 4 - Implement strong firewalling.
As true Next-Generation Firewalls which are both identity-aware and application-aware, Cato’s WAN Firewall and Internet Firewall give our customers fine-grained control over all network traffic across the WAN and to the Internet from all Cato sites and mobile users. The seamless integration of a Secure Web Gateway with the firewalls further increases the degree of control that can be applied to Internet traffic.
Both firewalls should be enabled with a final “block all” rule. Customers should also inspect the remaining rules for suitability, and engage Cato Professional Services to assist with a comprehensive firewall review.
Step 5 - Start logging everything.
One of the main benefits of cloud-based security solutions is that unlike on-premise appliances which are constrained by hardware, the elasticity built into the cloud allows for seamless real-time scaling up of features such as logging. Cato customers can take advantage of our cloud-native elasticity to enable flow-level logging for all traffic across their environment, and then use the built-in SIEM and analytics dashboards to derive real intelligence and perform forensics on real-time and historic data.
Step 6 - Enable TLS inspection.
Another feature made possible by the cloud is ubiquitous TLS inspection, regardless of source location or destination. Cato SASE automatically detects TLS traffic on non-standard ports and can be controlled by fine-grained policies to avoid disrupting traffic to known good destinations and to comply with local regulations regarding decryption of sensitive traffic.
Step 7 - Enable Enhanced Threat Protection (IPS, Anti Malware, NG Anti Malware).
Even organisations which are not directly in the line of state-sponsored fire are exposed to the usual risk of compromise by ransomware gangs and other economically motivated actors. Cato’s Enhanced Threat Protection services – IPS, Anti Malware and Next-Gen Anti Malware – are specifically designed to complement the base-level firewalls and Secure Web Gateway by inspecting the traffic which is allowed through for suspicious and malicious content. Customers who don’t currently have these features should ask their account management team to enable an immediate trial. Customers who do have these features should ensure that TLS inspection is enabled and engage Cato Professional Services to confirm that the features are properly configured and tuned for maximum efficacy.
Step 8 – 24x7 Detection and Response.
During a recent interview regarding a high-profile hack which occurred on his watch, a CISO stated that “no time is a good time, but these things never come during the middle of the day, during the work week.” Customers without a 24x7 incident response capability should carefully consider their options for detecting and responding to threats outside of normal working hours. Cato’s Managed Detection and Response (MDR) service can help customers who are unable to stand up their own 24x7 capability.
The NCSC article referred to above includes many other suggestions which are automatically covered by Cato, such as device patching, log retention and configuration backup. The main task for organisations who already have Cato is to make the best use of what they’ve already got; they no longer need to worry about gaps in their security posture, because Cato has those covered out of the box. If you’re not a Cato customer and you’d like to find out more about our solution, or you’re an existing customer who wants to find out more about the additional products and services we provide, let’s talk.
VoIP, DiffServ, and QoS: Don’t Be Held Captive by Old School Networking

We frequently talk to organizations who are enthusiastically searching for alternatives to their old and tired MPLS and IPsec networks. They’re ready to realize the benefits of a new SASE infrastructure but remain constrained by their old beliefs about network engineering.
Last year, for example, we spoke to an organization that wanted to replace its legacy IPsec network with something that would provide a better level of service for voice traffic. It’s not unusual for people to approach us with this sort of request, after all Cato provides extensive Unified Communications (UC) and UCaaS optimization, but this time there was a twist: the customer insisted the solution preserve Differentiated Services Code Point (DSCP) bits across the middle-mile.
Back to Networking School
For those of us who got our engineering degrees when hairs were a bit darker and Corona only meant something around the sun, Differentiated Services (or DiffServ for short) emerged in the late ’90s as an early form of network-based quality of service (QoS). It redefined six bits of the old ToS byte in the IPv4 header as a DSCP value proclaiming the packet’s relative importance and providing suggestions for how to handle it.
End-to-end QoS with DSCP requires customers to configure their senders and access switches to recognise different types of traffic and mark packets with the correct DSCP values. They then need to configure all intermediate network equipment with the correct queuing and congestion control commands to achieve the desired effect.
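To make the sender-side half of that work concrete, here is a minimal Python sketch (illustrative only, not Cato code) of how an application might mark its voice packets as Expedited Forwarding (DSCP 46), the class conventionally used for VoIP:

```python
import socket

# DSCP Expedited Forwarding (EF), the per-hop behaviour conventionally
# used for voice traffic.
DSCP_EF = 46

# The DSCP value occupies the upper six bits of the old IPv4 ToS byte,
# so it is shifted left by two before being written to the header.
TOS_EF = DSCP_EF << 2  # 0xB8 / 184

def mark_voice_socket(sock: socket.socket) -> None:
    """Ask the OS to stamp every packet this socket sends with DSCP EF."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mark_voice_socket(s)
    print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
    s.close()
```

Marking is the easy part; the marks only matter if every queue along the path is configured to honour them, which is exactly the operational burden described above.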
It’s a lot of time spent driving multiple CLIs to produce an outcome that is highly resistant to contemporary concepts of application classification, identity awareness, flexibility and visibility. It’s not hard to see why DSCP struggled to gain real-world acceptance outside the IP telephony space.
By passing DSCP bits across the middle mile, the organization would be able to preserve QoS, ensuring voice quality. By the time the company came to Cato, it had already rejected several solutions that claimed, in theory, to pass DSCP without interference. In practice, those solutions zeroed out the DSCP field somewhere between sender and destination, leading to a noticeable (and negative) impact on voice call quality.
Cato Brings a More Effective, Simpler Approach to QoS
Cato can support end-to-end DiffServ, but we prefer a more modern and much simpler way. Instead of playing with DiffServ bits, our customers define bandwidth classes with numerical priorities and limits. Each bandwidth rule details the priority level and congestion behaviour – no limit, limit only when the line is congested, or always limit – together with relative or absolute values for rate limiting of upstream and downstream traffic.
[caption id="attachment_12789" align="aligncenter" width="1496"] Figure 1 With Cato, companies first define bandwidth classes detailing the priority level (P45), congestion behaviour (Limited only when the line is congested), and rate limiting information (20% Upload and Download).[/caption]
Customers then allocate traffic to those bandwidth classes in their network rules. That’s it. There are no bits to set or devices to configure. Once mapped, voice is prioritized end to end. We have many, many companies taking this approach. Even UCaaS leaders, like RingCentral, have adopted Cato’s SASE platform.
And to make configuration even easier, Cato provisions each account with a starting set of bandwidth classes and network rules based on most common customer usage, such as prioritize voice and video over file transfers and web browsing or prioritize WAN over Internet.
[caption id="attachment_12787" align="aligncenter" width="1459"] Figure 2: Cato’s Network Rules massively simplify the job of prioritizing flows. Rules are presented in a prioritized list. Each rule describes the characteristics of the traffic flow, identifying What (VoIP Video), Source and Destination (Any to Any), the bandwidth class (P10) , and other details such as routing and optimization.[/caption]
Although Cato’s approach to QoS supersedes and obsoletes DSCP, we still support its use. We can always use DSCP as a selector for allocating traffic to a particular bandwidth class. Customers who mark their VoIP traffic with DiffServ will see those traffic classes mapped to the proper bandwidth class. We can also maintain DSCP codepoints across the middle mile by disabling some of our more advanced network optimization features. After we proved this to the organization with network captures and a short trial, they went ahead with the Cato purchase.
Great Voice Quality Without DiffServ
Now here’s the rub. Recently I revisited this customer’s configuration when another organization also asked us about DSCP. The irony? For all their insistence on preserving DSCP codepoints, the customer was not taking advantage of this capability. I could see plenty of DSCP markers entering their Cato tunnels at the source, but very little DSCP leaving the Cato tunnels at the destination.
At the same time, the customer raised no support tickets, and a quick check of their analytics screen showed huge volumes of file transfers and software updates, happily sharing the links with their voice calls.
In short, despite not preserving DSCP across the middle mile, voice quality was fine. Why? Because it is being cared for by Cato QoS - not DSCP. They’d moved from the old way to the new way of thinking without even realizing it.
Don’t Be Constrained by Old-School Thinking
Cato Cloud does far more than just prioritizing packets and sending them to a static next-hop IP. Our software steers network flows via the best-performing links at that moment, accounting for factors such as packet loss, latency, and jitter. Cato’s approach is what true QoS should be – a tight coupling of application performance requirements with network performance SLAs.
The cloud changed our notions around servers and storage. SASE clouds do the same for networking. Organizations seeking a better alternative to their legacy MPLS/IPsec networks need to let go of “old-school” approaches that were self-evident when all we had to do was connect offices over a private network.
Today, with enterprises needing to connect users and resources everywhere, we need to expand how we think about networking and security. We need to understand the problem – preserving VoIP quality -- but remain open to new solutions. Only then will we truly benefit from this new shift embodied by the world of SASE.
To learn more about how Cato improves VoIP, UC, and UCaaS check out this case study with RingCentral.
The Most Important Patch You’ll Never Have to Deploy

Applying patches to software in networking devices is so common that most enterprises have a structured procedure on how to do it. The procedure details things like how to monitor for the availability of necessary patches, how often to apply fixes to devices, how to test patches before applying them, and when to apply the new software to minimize possible disruption.
Patching has become so common that we just assume that’s the way it has to be. “Patch Tuesday” has us expecting fixes to problems every month. In reality, patching is an artifact of the way all appliances are built. If we eliminate the appliance architecture, we can eliminate the overhead and risk of patches.
VPN Vulnerabilities Jeopardize Remote Access
Of course, some patches are more important than others. Anything pertaining to security should be considered a priority in order to shut down the vulnerability as soon as possible.
Last year CERT issued a warning about security vulnerabilities in various VPN devices that were storing session cookies improperly. One vendor after another issued a report of this and other problems they found in their own products:
April 2, 2019: Fortinet reports critical vulnerability in their remote access VPN
April 24, 2019: Pulse Secure reports multiple vulnerabilities found in their remote access VPN
December 17, 2019: Citrix reports vulnerabilities in several of their products
Since then, there has been no shortage of reports of weaponization and use of these vulnerabilities by state actors:
October 2, 2019: Vulnerabilities exploited in VPN products used worldwide
January 12, 2020: Over 25,000 Citrix (Netscaler) Endpoints Vulnerable to CVE-2019-19781
February 16, 2020: Iranian hackers have been hacking VPN servers to plant backdoors in companies around the world
February 2020: Fox Kitten Campaign -- Widespread Iranian Espionage-Offensive Campaign (opens a PDF report)
Available Patches Might Not Get Applied
Admirably, the vendors all responded quickly to create patches and put them out for the public to apply. Their assumption was that users of the gear would acquire the patches and apply them right away to secure the remote access appliances. However, that’s not always the case.
Many enterprises have change control processes that add time to the patching schedule. Maybe they want to test the patch in a lab first, or wait until the next scheduled patch day. Taking a VPN offline – even for a short time – in 2020 is problematic, as so many people are now working from home. VPNs have gone from being ancillary devices to being business-critical as the entire staff must use remote access for a while.
Existing devices aren’t the only ones affected by vulnerabilities. Sometimes new devices just unpacked from the box have been shipped with a vulnerability or two, and the customer must patch the software to make it more secure. The challenge for many network managers is that patching isn’t the first thing – or even the tenth thing – to do when deploying new hardware. That VPN could easily be deployed and up and running for a while before anyone thinks to see if it needs a patch.
The alternative is to set up the device in a staging area and tend to the configuration before it’s ever placed into service. Many organizations don’t have the time or facilities for staging new equipment like that.
Whatever the reason for not immediately applying a software patch, there’s a window of opportunity for attackers who can strike while the vulnerability is still there. Cato security researcher Avidan Avraham recently wrote about the pervasiveness of cyberattacks and how all businesses are becoming targets when they are connected to the Internet. It’s more critical than ever to shut those windows of opportunity before any harm can be done.
Cato Relieves Customers of the Patching Process
From time to time, Cato also has to push patches out. The difference is, we don’t expect the customer to deploy the patch—our engineers do it.
As a SASE platform, most of Cato’s technology resides in the cloud, so there’s less for customers to take care of themselves. Although we do have an on-premise appliance called a Cato Socket, it arrives at the customer location completely hardened. It’s much more difficult for an external actor to detect the device, let alone compromise it. As soon as the Cato Socket is plugged in, it automatically downloads and applies any patches it may need.
Thus, we do the patching on behalf of our customers, reducing their administrative overhead to stay on top of patches.
Patch Tuesday for Cato customers? Nope, and not any other day of the week.
Cato overcomes the technical shortcomings undermining Amazon WorkSpaces deployments

Interest in Amazon WorkSpaces, a managed, secure Desktop-as-a-Service solution, continues to grow as enterprises look for ways to reduce costs, simplify their infrastructure, and support remote workers. Companies can use Amazon WorkSpaces to provision Windows or Linux desktops in minutes and quickly scale to meet workers’ needs around the world. The service has been a boon to companies during the pandemic as millions of workers were told to work from home with very little notice or time to set up a proper home office. With WorkSpaces, “the office” can be in the cloud. However, Amazon’s regional hosting requirements can cause application performance issues. Here are the networking and security issues to consider and how the cloud acceleration and control techniques of Cato’s SASE platform address them.
Eenie, Meenie, Miney, Mo: Pick Your Amazon Region Carefully
Amazon WorkSpaces is available in 13 AWS regions across the globe. A region is a physical location where Amazon clusters its datacenters and hosts applications such as WorkSpaces. When a customer first sets up WorkSpaces, it chooses which regional datacenter will host the application and data resources. Amazon only allows the choice of a single regional location, regardless of how dispersed the customer’s users are.
So, for example, an organization that is headquartered in Atlanta in the United States might choose Amazon’s US East region to host the resources for the entire enterprise. This may be just fine for those employees in or close to the Atlanta office, but it doesn’t work so well for the company’s workers located in Europe, Asia-Pacific, or Latin America. In the case of a hosted application like WorkSpaces, location – and specifically the distance from the host datacenter – matters very much.
In fact, in Amazon’s own FAQs about WorkSpaces, the company advises, “If you are located more than 2,000 miles from the regions where Amazon WorkSpaces is currently available, you can still use the service, but your experience may be less responsive.” For global organizations, that can be a problem.
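A quick back-of-the-envelope calculation (our own illustrative figures, not Amazon’s) shows why distance alone sets a latency floor. Light in optical fibre travels at roughly two thirds the speed of light in a vacuum, and real Internet routes are considerably longer than straight-line paths:

```python
# Lower bound on round-trip time imposed by physics alone.
# The 1.5x route-inflation factor is an assumed, illustrative figure.
KM_PER_MILE = 1.609
FIBRE_SPEED_KM_S = 200_000  # approx. speed of light in optical fibre

def min_rtt_ms(miles: float, route_inflation: float = 1.5) -> float:
    """Minimum round-trip time in ms for a path of the given length."""
    km = miles * KM_PER_MILE * route_inflation
    return 2 * km / FIBRE_SPEED_KM_S * 1000

# Amazon's 2,000-mile threshold already implies tens of milliseconds
# of RTT before any queuing, routing detours, or TCP effects.
print(round(min_rtt_ms(2000), 1))  # ~48.3 ms
```

Real-world round trips over the public Internet are typically far worse than this floor, which is where the problems in the next section come in.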
Public Internet Routing: A Buzzkill for Productivity
Legions of workers who are now home-based are using their public Internet connections to access their Amazon WorkSpaces. This definitely has some ramifications for latency, performance, and ultimately, the user experience.
Take, for example, that Atlanta-based company who has an application development team in Bangalore, India. Most of the team members work from home. Each developer has access to a personal WorkSpace on Amazon’s network. A worker receives client software from Amazon that establishes the connection to the WorkSpace. The worker opens a laptop and clicks on the icon to open the WorkSpace application.
There are two problems from a networking perspective, though. If the worker’s packets need to traverse the entire path to the Amazon US East datacenter over the public Internet, the distance they travel will be quite long. The natural latency of the great distance is only exacerbated by TCP inefficiencies as well as public Internet routing.
The TCP handshake would take an extraordinarily long time. When the worker in India sends their traffic to the datacenter, TCP will send an acknowledgement that the traffic arrived as expected. A roundtrip of that send/acknowledge action can take hundreds of milliseconds. In the meantime, the circuit is tied up waiting for the response. It’s an incredibly inefficient use of resources.
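The arithmetic behind that inefficiency is simple: with a fixed receive window, TCP can keep at most one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by RTT, no matter how fat the pipe is. A sketch with illustrative numbers:

```python
# Window-limited TCP throughput: at most one receive window of data can be
# in flight per round trip, so throughput <= window / RTT.
def window_limited_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# A classic 64 KB window over a 250 ms India-to-US-East round trip
# caps out around 2 Mbps, regardless of the link's capacity.
print(round(window_limited_mbps(65_535, 250), 2))  # ~2.1 Mbps
```

Terminating TCP closer to the user, as described later, shrinks the RTT in this formula and lifts the cap.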
Even if the user is connected to an SD-WAN solution, if it doesn’t have a private global backbone and simply uses the public Internet for transport, there’s really no way to reduce Internet latency between India and the US East datacenter in Northern Virginia.
The second problem is packet loss. Packet drops at congested exchange points and in the last mile can be pretty significant. With each packet drop, more time is needed to retransmit the packet. The bottom line: the combination of long distance and high packet loss results in latency and retransmits, which in turn beget poor application performance.
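The interaction of loss and distance can be sketched with the well-known Mathis model, which approximates steady-state throughput of classic loss-based TCP as MSS/RTT scaled by 1/sqrt(loss). The numbers below are illustrative only; modern congestion-control algorithms behave somewhat differently:

```python
import math

# Mathis model: throughput ~= (MSS / RTT) * (C / sqrt(p)),
# with C ~= 1.22 for standard Reno-style TCP and p the loss probability.
def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    bytes_per_sec = (mss_bytes / (rtt_ms / 1000)) * (1.22 / math.sqrt(loss))
    return bytes_per_sec * 8 / 1_000_000

# 1460-byte MSS, 250 ms RTT, 1% packet loss: well under 1 Mbps.
print(round(mathis_throughput_mbps(1460, 250, 0.01), 2))  # ~0.57 Mbps
```

Note that throughput degrades with the square root of loss and linearly with RTT, so long, lossy paths are punished on both axes at once.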
Cato Resolves Latency and Other Issues for Amazon Workspaces
With Cato’s cloud datacenter integration, Amazon WorkSpaces and any cloud resource become native participants in the enterprise network. More specifically, Cato takes several steps to improve the user experience with Amazon WorkSpaces.
Let’s take those same workers in India trying to access an application on Amazon US East. When users open their laptops and click the icon to access WorkSpaces, they connect to a Cato Client (if working remotely) or a Cato Socket (if working in a branch office), which then routes the traffic to the nearest Cato PoP. With more than 50 PoPs worldwide, the traffic will travel only a short distance until reaching the Cato network.
If there are multiple communication links at the user’s location, as is common for branch offices, the remote desktop traffic can be duplicated to be carried over both links at the same time. In this way, if loss rates are high in the last mile, or there’s some other issue, the data is replicated over the second link for reliability.
When the TCP traffic gets to the Cato PoP, it will terminate there at Cato’s TCP proxy. That means a handshake can be sent back to the user to confirm receipt of the data packets at the PoP, which then frees the circuit for other uses.
From the Cato PoP, the data packets travel the middle mile from India to a Cato PoP in the U.S. over the SLA-backed Cato global, private backbone. There’s no congestion on the backbone and no packet loss. The Cato network is also continuously optimized to send traffic over the fastest path to the destination PoP. Cato PoPs are co-located in the same physical datacenters as Amazon’s regions, which puts the data packets in very close proximity to Amazon, certainly within two milliseconds.
Cato Improves Data Security, Too
There are a few ways to enforce security in this scenario. First, we restrict access to and from Cato. On the Cato side, the customer can create a policy that assigns a permanent IP address explicitly to traffic going from Cato to Amazon WorkSpaces. Then on the AWS side, the customer can restrict access into WorkSpaces to only traffic coming from that specific IP address.
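On the AWS side, one way to express that restriction is a WorkSpaces IP access control group, which admits only listed CIDR ranges. A hedged sketch using boto3 follows; the egress address and directory ID below are placeholders, and the AWS calls are shown in comments rather than executed:

```python
import ipaddress

# Placeholder: the static egress IP your SASE provider assigns for
# WorkSpaces-bound traffic (hypothetical address for illustration).
CATO_EGRESS_IP = "203.0.113.10"

# WorkSpaces IP access control groups take a list of CIDR rules;
# a /32 admits exactly one source address.
user_rules = [{
    "ipRule": f"{ipaddress.ip_address(CATO_EGRESS_IP)}/32",
    "ruleDesc": "Allow only the Cato egress IP",
}]

# With boto3 (not executed here), the group would be created and attached
# to the WorkSpaces directory:
#   ws = boto3.client("workspaces")
#   group = ws.create_ip_group(GroupName="cato-only", UserRules=user_rules)
#   ws.associate_ip_groups(DirectoryId="d-xxxx", GroupIds=[group["GroupId"]])

print(user_rules[0]["ipRule"])  # 203.0.113.10/32
```

Once the group is associated, WorkSpaces sessions originating from any other source address are refused before authentication is even attempted.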
As for traffic going from WorkSpaces back to the end user, traffic is sent back to Cato Cloud where it’s run through the Cato security stack. Currently, the stack includes next-generation firewall-as-a-service (FWaaS), secure web gateway with URL filtering (SWG), standard and next-generation anti-malware (NGAV), a managed IPS-as-a-Service (IPS), and a comprehensive Managed Threat Detection and Response (MDR) service to detect compromised endpoints.
Amazon WorkSpaces and Cato Are a Match Made in the Cloud
Amazon WorkSpaces can make workers more productive with a virtual desktop in the cloud that they can access from anywhere. Cato helps customers overcome the technical shortcomings undermining WorkSpaces deployments with network optimization and security capabilities that aren’t available from other SD-WAN solutions.