Are You Protecting Your Most Valuable Asset with a Data Loss Prevention (DLP)?

The Information Revolution and The Growing Importance of Data

We have all heard about the information revolution, but what does it actually mean and how profound is it? An interesting way to understand this is by looking at how it has impacted modern enterprises. A company's assets can be divided into two types: tangible and intangible. Simply put, tangible assets are those with a physical form (or which represent something physical). Intangible assets are those which do not exist in the conventional sense, such as a company's intellectual property.

Research by Ocean Tomo[1] covering the leading 500 companies in the US (S&P 500) shows that in 1975, intangible assets accounted for 13% of their total value. By 2015 it had grown to 84%, and by 2020 it reached 90%.

Figure 1: The value of intangible assets

Simply put, 90% of the value of a modern-era company comes from what it knows, and only 10% from what it has. Looking at how these numbers have shifted over the last 45 years, we can see how information has become the single most valuable asset of the modern enterprise. Most enterprises, however, do not have the necessary means to effectively protect their data. Let's take a look at why this is, what protecting enterprise data means, and how to choose the right solution for your enterprise.

Protecting Your Company's Data With DLP

Information has critical value to an enterprise. It is, however, quite difficult to protect, especially considering that a great part of it typically resides in the cloud. There are numerous tools aimed at restricting access to enterprise assets, but the most efficient solution for protecting the movement of information to and from enterprise assets is Data Loss Prevention (DLP). While DLP solutions have been around for 15 years, their adoption has been limited, and mostly to high-end enterprises.
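At its core, a DLP engine inspects content in motion for patterns that identify sensitive data before it leaves the organization. As a rough, purely illustrative sketch (not Cato's or any vendor's implementation — the function names and the credit-card example are hypothetical), a minimal outbound check might pair a regex with a Luhn checksum to cut down false positives:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to weed out random digit runs."""
    total, alt = 0, False
    for d in reversed(digits):
        n = int(d)
        if alt:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        alt = not alt
    return total % 10 == 0

# Candidate card numbers: 13-16 digits, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan_outbound(text: str) -> list[str]:
    """Return card-number-like strings found in an outbound payload."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A real DLP engine layers many such detectors (documents, source code, PII dictionaries, fingerprints) behind policy, but the inspect-classify-enforce loop is the same idea.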
The complexity, prohibitive costs, and expertise required to obtain and effectively manage DLP solutions have left them beyond the reach of most enterprises. The increasing value of information, the growing adoption of cloud computing, and the continued rise in cybercrime are driving enterprises to the realization that they need to do a better job of protecting their data. The need for DLP is clear and imminent, and market interest is rising: Gartner saw a 32% rise in DLP inquiries in 2020 vs the previous year[2]. But how can enterprises overcome the current adoption barriers and enable DLP protection for their assets? Let us start by looking at the types of DLP solutions and their respective advantages and shortcomings.

[boxlink link=""] Protect Your Sensitive Data and Ensure Regulatory Compliance with Cato's DLP | Whitepaper [/boxlink]

DLP isn't one thing

Gartner recognizes three types of DLP solutions[2]:

- Enterprise DLP (EDLP)
- Integrated DLP (IDLP)
- Cloud Service Provider Native DLP (CSP-Native DLP)

These solutions all have their pros and cons, and decision-makers need to decide which solution attributes are most important for their use cases, and which can be compromised on. Let us take a deeper look into each one.

Enterprise DLP (EDLP) - An enterprise-level solution which covers all relevant traffic flows and is implemented as a stand-alone product. EDLPs require adding (yet) another solution to an organization's security toolbox. This typically requires an expansive project plan and additional expertise, adding complexity and cost to the project. While an EDLP offers a single console and policy management interface for the entire network, it is typically separate from the consoles of the other network security tools (FW, IPS, AM, SWG, etc.). EDLPs will typically add another hop in the security service chain, and thus add latency and impact performance.
Figure 2: Enterprise DLP

Integrated DLP (IDLP) - DLP functionality that is added on top of a pre-existing security product such as a Secure Web Gateway (SWG). IDLPs simplify the deployment process and are regarded as a quick win to get DLP up and running quickly and at reduced cost. IDLPs, however, are limited to the traffic and use cases the base product is intended for. Piggybacking on an SWG, for example, will cover only Internet-bound traffic and may not inspect IaaS traffic. Gaining wider coverage will require adding DLP to additional security products, which leads to fragmented consoles and policy management.

Figure 3: Integrated DLP

CSP-Native DLP - A cloud-based DLP which is deployed in, or provided by, a cloud service provider (CSP). This type of solution is also simple to adopt, as it is delivered as Software as a Service (SaaS) and doesn't require deployment. It is, however, limited to the traffic sent to or from the specific CSP providing it. As most enterprises using cloud platforms are adopting a multi-cloud strategy, getting complete coverage will require using DLP services from several CSPs. Also, this type of solution will typically not cover all SaaS applications and is usually limited to sanctioned applications only.

Figure 4: CSP-Native DLP

Choosing The Right DLP For Your Enterprise

EDLPs typically offer better coverage and enhanced protection; however, the complexity and cost concerns drive security leaders to shy away and look for simpler and cheaper options. An IDLP offers this, but its limited coverage and disjointed consoles and policy management impact its effectiveness and level of protection. CSP-Native DLPs are also simpler to onboard but are cumbersome for multi-cloud deployments and do not cover the critical use case of unsanctioned applications (AKA shadow IT). All the above DLP types come with compromises.
Ideally, we would want a solution that is easy to deploy and manage, has complete coverage and optimal protection, does not impact performance, and covers unsanctioned applications.

The rise of SASE DLP

A true Secure Access Service Edge (SASE), or its Security Service Edge (SSE) subset, offers the best of all worlds. Cato's SASE Cloud, for example, covers all fundamental SASE requirements:

- All edges - Cato SASE Cloud covers all enterprise users, on-prem or remote, and all applications and services: on-prem, IaaS, and SaaS. This means that Cato's SASE-based DLP has complete coverage of all traffic and all use cases.
- Single-pass processing - Cato SASE Cloud utilizes Cato's proprietary Single Pass Cloud Engine (SPACE), a modular software stack that executes the networking and network security services in parallel. This enables a shared context, enhancing overall protection, and minimizes latency. Adding DLP to a Cato deployment is done with the flip of a switch and requires no additional deployment.
- Cloud-native - Cato DLP is delivered fully from the cloud and offers all the benefits of a cloud-native solution, including unlimited scalability and inherent high availability. Since it is part of the Cato SASE Cloud, it is completely CSP-agnostic and supports all leading cloud service providers, making it a true multi-cloud solution.
- Converged - Cato SASE runs and manages all services as a single solution, enabling configuration, management, and visibility from a single-pane-of-glass management console.

Figure 5: SASE/SSE DLP

The pros and cons of the different DLP solution types:

Figure 6: DLP types, pros and cons

The DLP that's in your reach

A true SASE solution enables enterprises to adopt a DLP that benefits from all the advantages mentioned above, and more. The reduced complexity and cost lower the traditional barrier of adoption, enabling enterprises of all sizes and levels of expertise to better protect their data.
It also eliminates the dilemma of what to compromise on when looking to adopt DLP within your environment. A SASE DLP requires no compromises. Protecting your enterprise's most valuable asset is just a flip of a switch away. To learn more about Cato DLP, read our DLP whitepaper.

[1] Harvard Business Review
[2] DLP market guide 2021 - Gartner

ZTNA Alone is Not Enough to Secure the Enterprise Network

ZTNA is a Good Start for Security

Zero trust has become the new buzzword in cybersecurity, and for good reason. Traditional, perimeter-focused security models leave the organization vulnerable to attack and are ill-suited to the modern distributed enterprise. Zero trust, which shrinks the “perimeter” down to a single asset, provides better security and access management for corporate IT resources regardless of their deployment location.

In many cases, zero trust network access (ZTNA) is an organization’s first step on its zero trust journey. ZTNA replaces virtual private networks (VPNs), which provide a legitimate user with unrestricted access to the enterprise network. In contrast, ZTNA makes case-by-case access determinations based on access controls. If a user has legitimate access to a particular resource, then they are given access to that resource for the duration of the current session. However, accessing any other resource, or accessing the same resource as part of a new session, requires re-verification of the user’s access. The shift from unrestricted access to case-by-case access on a limited basis is an important first step towards implementing an effective zero trust security strategy.

Adopting ZTNA Alone Is Not Enough

The purpose of ZTNA is to prevent illegitimate access to an organization’s IT resources. If a legitimate user account attempts to access a resource for which it lacks the proper permissions, then that access request is denied. This model assumes that all threats originate from outside the organization or from users attempting to access resources for which they are not authorized. However, several scenarios exist in which limiting access to authorized accounts does not prevent attacks.
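The case-by-case model described above can be sketched as a broker that evaluates every request against policy, rather than opening the whole network after a single login. This is a minimal illustration only — the policy table, field names, and resources are hypothetical, not any product's API:

```python
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    resource: str
    verified: bool = False  # identity must be re-verified per session

# Hypothetical policy: which users may reach which specific resources.
POLICY = {
    "alice": {"crm", "wiki"},
    "bob": {"wiki"},
}

def authorize(session: Session) -> bool:
    """Grant access only if identity was just verified AND policy allows
    this specific user/resource pair -- never the network as a whole."""
    return session.verified and session.resource in POLICY.get(session.user, set())
```

Contrast this with a VPN, where the equivalent check is a single credential test that, once passed, places the user inside the network with no per-resource decision at all.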
[boxlink link=""] Secure zero trust access to any user in minutes | ZTNA Demo [/boxlink]

Compromised or Malicious Accounts

ZTNA limits access to corporate resources to accounts that have a legitimate need for that access. However, an account with legitimate access can be abused to perform an attack. One of the most common cyberattacks is credential stuffing, in which an attacker tries to use an individual’s compromised credentials from one account to log into another. If successful, the attacker controls an account with legitimate access whose requests may be accepted by a ZTNA solution. In that case, the attacker can use the compromised account to steal sensitive data, plant malware, or perform other malicious actions.

Additionally, not all threats originate from outside of the organization. An employee could cause a data breach either through negligence or intentionally. For example, 29% of employees admit to taking company data with them when leaving a job. Legitimate users could also accidentally deploy malware on the corporate network. In 2021, 80% of ransomware was self-installed, meaning that the user opened or executed a malicious file that installed the malware. If this occurred on the corporate network, it would happen within the context of a legitimate user account.

Infected Devices

Users access corporate resources via computers or mobile devices. While a ZTNA solution may be configured to look for a combination of a user account and a known device, that device may not be trustworthy. Devices infected with malware may attempt to take advantage of a user’s account and assigned permissions to gain access to the corporate network or other resources. If malware is installed on a user’s device, it may spread to the corporate network via legitimate accounts. ZTNA’s access control policies alone are not enough to protect against infected devices.
Solutions also need to include device posture monitoring to provide more information about the risk posed by a particular device. Common device posture monitoring features include identifying the security tools running on the device, the current patch level, and compliance with corporate security policies. Ideally, a ZTNA solution should provide the ability to tune device posture requirements based on the requested resources and to incorporate other valuable information, such as the device OS and location.

ZTNA Should Be Deployed as Part of SASE

ZTNA is an invaluable tool for providing secure remote access to corporate resources. Its integrated access controls and case-by-case grants of access offer far greater security than a VPN. However, as mentioned above, ZTNA alone is not enough to implement zero trust security or to effectively secure an organization’s network and resources against attack. An attacker with access to a legitimate account - via compromised credentials or an infected device - may be granted access to corporate IT assets.

Effective zero trust security requires pairing ZTNA’s access control with security solutions capable of identifying and preventing abuse of a legitimate user account. Next-generation firewalls (NGFWs), intrusion prevention systems (IPS), cloud access security brokers, and other solutions can help to address the threats that ZTNA misses. These capabilities can be deployed as standalone solutions, but this often results in a tradeoff between performance and security. Deploying perimeter-based defenses requires routing traffic through that perimeter, which adds unacceptable latency. On the other hand, most organizations lack the resources to deploy a full security stack at all of their on-prem and cloud-based service locations. Secure Access Service Edge (SASE) provides enterprise-grade security without sacrificing network performance.
By integrating a full network security stack into a single solution, SASE enables optimized performance by ensuring that expensive operations - such as decrypting a traffic stream for analysis - are performed only once. Its integrated network optimization capabilities and cloud-based deployment ensure high network performance and reliability, especially when backed by Cato’s network of dedicated backbone links between PoPs.

ZTNA as a standalone solution doesn’t meet corporate network security goals or business requirements. Deploying ZTNA as part of a SASE solution is the right choice for organizations looking to effectively implement zero trust.
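As a closing illustration, the device posture checks discussed earlier can be folded into the access decision alongside identity. This is a hypothetical sketch — the posture fields and the strictness rule are invented for illustration, not a real policy schema:

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    av_running: bool   # endpoint protection active
    patched: bool      # OS at the required patch level
    compliant: bool    # meets corporate security policy

def access_allowed(user_authorized: bool, posture: DevicePosture,
                   sensitive: bool) -> bool:
    """Combine identity with device posture; sensitive resources
    demand a stricter posture (hypothetical policy)."""
    if not user_authorized:
        return False
    baseline = posture.av_running and posture.compliant
    # Only fully patched devices may touch sensitive resources.
    return baseline and (posture.patched if sensitive else True)
```

The point is that the device's state becomes a first-class input to every access decision, which is exactly what identity-only checks miss on infected endpoints.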

Don’t Ruin ZTNA by Planning for the Past

Zero trust network access (ZTNA) is an integral part of an enterprise security strategy as companies move to adopt zero trust security principles and adapt to more distributed IT environments. Legacy solutions such as virtual private networks (VPNs) are ill-suited to the distributed enterprise and do not provide the granular access controls necessary to protect an organization against modern cyber threats.

However, not all ZTNA solutions are created equal. In some cases, ZTNA solutions are designed for legacy environments where employees and corporate resources are located on the corporate LAN. Deploying the wrong ZTNA solution can result in tradeoffs between network performance and security.

Where ZTNA Can Go Wrong

Three of the primary ways in which ZTNA goes wrong are self-hosted solutions, web-only solutions, and solutions offering only agent-based deployments.

Self-Hosted Solutions

Some ZTNA solutions are designed to be self-hosted or self-managed. An organization can deploy a virtual ZTNA solution on-prem or in the cloud and configure it to manage access to the corporate resources hosted at that location. These self-hosted ZTNA solutions are designed around a perimeter-focused security model that no longer matches the reality of an organization’s IT assets.

Self-hosted ZTNA is best suited to protecting locations where the virtual appliance can be deployed, such as an on-prem data center or an Infrastructure as a Service (IaaS) environment. However, many organizations use a variety of cloud services, including Software as a Service (SaaS) offerings where a self-hosted ZTNA solution cannot be easily deployed. This, combined with the fact that expanding self-hosted ZTNA to support new sites requires additional solutions or inefficient routing, means that these solutions are less usable and offer lower network performance than a cloud-native solution.
Web-Only Solutions

Often, security programs focus too much on the more visible aspects of an organization’s IT infrastructure. Web security focuses on websites and web apps instead of APIs, and ZTNA is targeted toward the protection of enterprise web apps. However, companies commonly use various non-web applications as well. For example, companies may want to provide access to corporate databases, remote access protocols such as SSH and RDP, virtual desktop infrastructure (VDI), and other applications that do not run over HTTP(S). A ZTNA solution needs to support all apps used by an enterprise, including the ability to manage access to both web and non-web enterprise applications.

[boxlink link=""] Secure zero trust access to any user in minutes | ZTNA Demo [/boxlink]

Only Agent-Based Deployment

Some ZTNA solutions are implemented using agents deployed on each user’s endpoint. These agents interact with a self-hosted or cloud-based broker that allows or denies access to corporate resources based on role-based access controls. By using agents, a ZTNA solution can provide a more frictionless experience to users.

While an agent-based deployment has its benefits, it may not be a fit for all devices. The shift to remote and hybrid work has driven the expanded adoption of bring-your-own-device (BYOD) policies and the use of mobile devices for work. These devices may not be able to support ZTNA agents, making it more difficult to manage users’ access from them.

Support for agent-based deployments can be a significant asset for a ZTNA solution. However, implementing ZTNA only via endpoint agents can leave some devices unable to access corporate resources or force workarounds that degrade performance or security.

Choose the Right ZTNA Solution

ZTNA provides a superior alternative to VPNs for secure remote access.
However, the success of an organization’s ZTNA deployment depends on selecting and deploying the right solution. Some key requirements to look for when evaluating ZTNA solutions include:

- Globally Distributed Service: ZTNA solutions that are self-managed and can only be deployed in certain locations create tradeoffs between the performance and security of corporate applications. A ZTNA solution should be responsive everywhere, so that employees can easily access corporate resources hosted anywhere - something only a globally distributed, cloud-hosted ZTNA solution can achieve.
- Wide Protocol Support: Many of the most visible applications used by companies are web-based (webmail, cloud-based data storage, etc.). However, other critical applications may use different protocols yet have the same need for strong, integrated access management. A ZTNA solution should offer support for a wide range of network protocols, not just HTTP(S).
- Agentless Option: Agent-based ZTNA solutions can help to achieve better-performing and more secure remote access management; however, they are not suitable for all devices and use cases. A ZTNA solution should offer both agent-based and agentless options for access management.

Cato SASE Cloud offers ZTNA as part of an integrated Secure Access Service Edge (SASE) solution. By moving access control and other security and network optimization functions to the cloud, Cato SASE Cloud ensures that ZTNA services are accessible from anywhere and support a range of protocols. And with both agent-based and agentless options, Cato SASE Cloud ensures that all users can efficiently access corporate resources.

Planning for the Distributed Enterprise of the Future

In the past, most of an organization’s employees and IT resources were located on the enterprise LAN. As a result, enterprise security models focused on defending the perimeter of the corporate network against external threats. However, the face of the modern enterprise is changing rapidly. Both users and IT resources are moving off of the corporate LAN, creating new employee and service network edges.

Distributed Employee Edges

The most visible sign of the evolution of the modern enterprise is the growing acceptance of remote work. Employees working remotely is nothing new, even for organizations without formal telework programs. Business travel, corporate smartphones, and other factors have long led to corporate data and resources being accessed from outside the enterprise network, often without proper support or security.

The pandemic normalized remote work as businesses found that their employees could work effectively from basically anywhere. In fact, many businesses found that remote work increased productivity and decreased overhead. As a result, many businesses plan to support at least hybrid work indefinitely, and telework has become a common incentive for hiring and retaining employees.

Scattered Service Locations

While the rapid growth and distribution of employee edges can be attributed to the pandemic, service edges have been expanding for years. The emergence of cloud-based data storage and application hosting has transformed how many organizations do business. The cloud provides numerous benefits, but one of its major selling points is the wide range of service options that organizations can take advantage of. Companies can move enterprise data to a cloud data center, outsource infrastructure management to a third-party provider for hosted applications, or take advantage of Software as a Service (SaaS) applications that are developed and hosted by their cloud service provider.
Nearly all organizations use at least some cloud services, even if only cloud-based email and data storage (G Suite, Microsoft 365, etc.). However, many companies have not completely given up their on-prem infrastructure, hosting some data and applications locally to meet business needs or regulatory compliance requirements. This mix of on-prem and cloud-based infrastructure complicates the corporate WAN. Both on-site and remote workers need high-performance, reliable, and secure access to corporate data and applications, regardless of where the user and application are located.

[boxlink link=""] How Three Enterprises Delivered Remote Access Everywhere | EBOOK [/boxlink]

Legacy Infrastructure Doesn’t Meet Modern Needs

Many organizations’ security models were designed for the era when employees and corporate IT assets were centralized on the corporate LAN. By deploying security solutions at the perimeter of the corporate network, organizations attempt to detect inbound threats and outbound data exfiltration before they pose a threat to the organization. The perimeter-focused security model has many shortcomings, but one of the most significant is that it is designed for an IT infrastructure that no longer exists. With the expansion of telework and cloud computing, a growing percentage of an organization’s IT assets are now located outside the protected perimeter of the corporate LAN.

A major challenge companies face when adapting to the growing distribution of their IT assets is that many of the tools they are trying to use were designed for the same outdated model. For example, virtual private networks (VPNs) were designed to provide point-to-point secure connectivity, such as between a remote worker and the enterprise network. This design doesn’t work when employees need secure access to resources hosted in various places (on-prem, cloud, etc.).
Trying to support a distributed workforce with legacy solutions creates significant challenges for an organization. VPNs’ design and lack of built-in security and access control result in companies routing all traffic through the corporate network for inspection, increasing latency and degrading performance. It also creates challenges for IT personnel, who need to deploy and maintain complex and inflexible VPN-based corporate WANs.

ZTNA Enables Usable, Scalable Security

As companies’ workforces and infrastructure become more distributed, attempting to make the corporate WAN work with legacy solutions is not a sustainable long-term plan. A switch away from perimeter-focused technologies like VPNs is essential to the performance, reliability, and security of the enterprise network. Zero trust network access (ZTNA) offers a superior alternative to VPNs, better suited to the needs of the distributed enterprise of the future. Some advantages of a cloud-based ZTNA deployment include:

- Global Accessibility: ZTNA can be hosted in the cloud, making it globally accessible. Once access decisions are made, traffic can be routed directly to its destination without a detour through the corporate network.
- Granular Access Controls: VPNs are designed to provide legitimate users with unrestricted access to corporate resources. ZTNA provides access to a specific resource on a case-by-case basis, enabling more granular access management and better enforcement of least privilege.
- Centralized Management: A VPN-based WAN for an organization with multiple sites and cloud-based infrastructure requires many independent links between sites. ZTNA does not require these independent tunnels and can be centrally monitored and managed, simplifying network and security configuration and management.
- Private Backbone: Cato’s ZTNA uses a private backbone to route traffic between sites, improving the performance and reliability of network traffic beyond what is possible over the public Internet.

Solutions like VPNs are designed for an IT architecture that no longer exists and never will again. As companies adopt cloud computing and remote work, they need infrastructure and security solutions designed for a distributed IT architecture. By deploying ZTNA with Cato, companies can improve network performance and security while simplifying management.

Overcoming ZTNA Deployment Challenges with the Right Solution

Zero trust network access (ZTNA) is a superior remote access solution compared to virtual private networks (VPNs) and other legacy tools. However, many organizations are still relying on insecure and poorly performing solutions rather than making the switch to ZTNA.

Why You Might Not Be Using ZTNA (But Should Be)

Often, companies have seemingly legitimate reasons for not adopting ZTNA. Below, we take a closer look at some of the most common concerns:

“A VPN is Good Enough”

One of the simplest reasons why an organization may not want to upgrade its VPN to ZTNA is that it has always used a VPN and it has worked so far. If remote users can connect to the resources that they need, then it may be difficult to make a compelling case for a switch.

However, even if an organization’s VPN infrastructure is performing well, there is still security to consider. A VPN is designed to provide a remote user with unrestricted, private access to the corporate network. This means that VPNs lack application-level access controls and integrated security. For this reason, cybercriminals commonly target VPNs, because a single set of compromised credentials can provide all of the access needed for a data breach, ransomware infection, or other attacks.

In contrast, ZTNA provides access on a case-by-case basis, decided by user- and application-level access controls. If an attacker compromises a user’s account, then their access, and the damage that they can do, is limited by that user’s permissions.

“ZTNA is Hard to Deploy”

Deploying a new security solution can be a headache for an organization’s security team. They need to integrate it into the existing architecture, design a deployment process that limits business disruption, and perform ongoing configuration and testing to ensure that the solution works as designed.
When an organization has a working VPN solution, the overhead associated with switching to ZTNA may not seem worth the effort. While installing ZTNA as a standalone solution may be complex, deploying it as part of a Secure Access Service Edge (SASE) solution can streamline the process. With a managed SASE solution, deployment only requires pointing infrastructure to the nearest SASE point of presence (PoP) and implementing the required access controls.

[boxlink link=""] Secure zero trust access to any user in minutes | ZTNA Demo [/boxlink]

“VPNs are Required for Compliance”

Most companies are subject to various data protection and industry regulations. Often, these regulations mandate that an organization have certain security controls in place and may recommend particular solutions. For secure remote access, VPNs are commonly on the list of acceptable solutions due to their age and widespread adoption.

However, regulations are changing rapidly, and the limitations of VPNs are well known. As regulators start looking for and mandating a zero-trust approach to security, solutions like VPNs, which were not designed for zero trust, will be phased out of regulatory guidance. While regulations still allow VPNs, many also either explicitly recommend ZTNA or allow alternative solutions that implement the required security controls.

ZTNA provides all of the same functionality as VPNs but also offers integrated access control. When deployed as part of a SASE solution, ZTNA is an even better fit for regulatory requirements due to its integration with other required security controls and its adoption of the least-privilege methodology commonly required by regulatory frameworks such as UK NCSC Cyber Essentials and NIST. For organizations looking to achieve and maintain compliance with applicable regulations, making the move to ZTNA sooner rather than later will decrease the cost and effort of doing so.
“We’ve Already Invested in Our VPN Infrastructure”

VPNs have been around for a while, so many organizations have existing VPN deployments. When the pandemic drove a move to remote work, the need to deploy remote work solutions as quickly as possible led many organizations to expand their existing VPN infrastructure rather than investigate alternatives. As a result, many organizations have invested in a solution that, to a certain degree, meets their remote work needs.

These sunk costs can make ripping out and upgrading VPN infrastructure an unattractive proposition. However, the difference in functionality between a VPN and a ZTNA solution can far outweigh these costs. ZTNA provides integrated access management, which can reduce the cost of a data breach and simplify an organization’s regulatory compliance strategy. A ZTNA solution that successfully prevents a data breach by blocking unauthorized access to sensitive data may have just paid for itself.

“Our Security Team is Already Overwhelmed with Our Existing Solutions”

Many organizations’ security teams are struggling to keep up. The cybersecurity skills gap means that companies are having trouble finding and retaining the skilled personnel they need, while a sprawling array of security solutions creates overwhelming volumes of alerts and the need to configure, monitor, and manage various standalone solutions. As a result, the thought of deploying, configuring, and learning to use yet another solution may seem less than appealing.

Yet one of the main advantages of ZTNA is that it simplifies security monitoring and management, especially when deployed as part of a SASE solution. By integrating multiple security functions into a single network-level solution, SASE eliminates redundant solutions and enables security monitoring and management to be performed from a single console.
By reducing the number of dashboards and alerts that analysts need to handle, SASE reduces the burden on security teams, enabling them to better keep up with an accelerating threat landscape and expanding corporate IT infrastructure. ZTNA is the Future of Remote Access Many organizations have solutions that - on paper - provide the features and functionality that they need to support a remote workforce and provide secure access to corporate applications. However, legacy solutions like VPNs lack critical access controls and security features, leaving an organization vulnerable to attack. As the zero trust security model continues to gain momentum and is incorporated into regulations, organizations will need solutions that meet their security needs and regulatory requirements. ZTNA meets these needs, especially as part of a SASE solution.
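The core contrast drawn above, a VPN's single all-or-nothing gate versus ZTNA's per-request, least-privilege decisions, can be illustrated with a minimal sketch. The users, resources, and policy table below are hypothetical and stand in for whatever identity provider and policy engine a real deployment would use:

```python
# Minimal sketch: VPN-style vs ZTNA-style access decisions.
# All names and policy entries are hypothetical illustrations.

def vpn_access(user_authenticated: bool) -> bool:
    # A VPN makes one decision at connect time; once inside,
    # the user can reach any resource on the network.
    return user_authenticated

# ZTNA evaluates every request against a least-privilege policy:
# each (user, resource) pair must be explicitly allowed.
POLICY = {
    ("alice", "crm"): {"read", "write"},
    ("alice", "payroll"): {"read"},
    ("bob", "crm"): {"read"},
}

def ztna_access(user: str, resource: str, action: str) -> bool:
    # Default deny: anything not explicitly granted is refused.
    return action in POLICY.get((user, resource), set())

print(vpn_access(True))                          # True - one gate for everything
print(ztna_access("alice", "payroll", "read"))   # True - explicitly granted
print(ztna_access("bob", "payroll", "read"))     # False - never granted
```

The default-deny lookup is the essential difference: a compromised account is confined to the resources its policy entries name, rather than the whole network.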

Why Moving to ZTNA Provides Benefits for Both MSPs and Their Customers

The pandemic underscored the importance of secure remote access for organizations. Even beyond the events of these past years, remote work has been normalized and... Read ›
Why Moving to ZTNA Provides Benefits for Both MSPs and Their Customers The pandemic underscored the importance of secure remote access for organizations. Even beyond the events of these past years, remote work has been normalized and has become an incentive and negotiating point for many prospective hires. However, many organizations are still reliant on legacy remote access solutions, such as virtual private networks (VPNs), that are not designed for the modern, distributed enterprise. Upgrading to zero trust network access (ZTNA) provides numerous benefits to these organizations. Main Benefits of ZTNA for MSPs For Managed Service Providers (MSPs) offering remote access services, making the move to ZTNA can significantly help their customers. However, it is not just the customer who benefits. An MSP that makes ZTNA part of its service offering can reap significant benefits, especially if it is deployed as part of a Secure Access Service Edge (SASE) offering. Let’s take a closer look at some of the key benefits: Tighter Security Controls VPNs are designed solely to provide a secure network link between two points. VPNs have no built-in traffic inspection capabilities and provide users with unrestricted access to corporate resources. By using VPNs for secure remote access, organizations expose themselves to various cyber threats. Cyber threat actors commonly target VPNs with credential stuffing attacks, hoping to take advantage of compromised credentials to gain full access to the enterprise network. VPNs are also prone to vulnerabilities that attackers can exploit to bypass access controls or eavesdrop on network traffic. ZTNA provides the same remote access capabilities as VPNs but does so on a case-by-case basis that allows effective implementation of least privilege access controls. By deploying ZTNA to customer environments, MSPs can reduce the occurrence and impact of security incidents. 
This results in an improved customer experience and reduced costs of recovery. [boxlink link=""] Poor VPN Scalability Hurts Productivity and Security | EBOOK [/boxlink] Improved Visibility and Control VPNs provide unrestricted access to the corporate network, providing a user experience similar to working from the office. Since VPNs don’t care about the eventual destination of network traffic, they don’t collect this information. This leaves an organization or its MSP with limited visibility into how a VPN is being used. ZTNA provides access to corporate resources on a case-by-case basis. These access decisions are made based on the account requesting access, the resource requested, and the level of access requested. Based on this data and an organization’s access controls, access is permitted or denied. ZTNA performs more in-depth traffic inspection and access management as part of its job, and these audit logs can provide invaluable visibility to an MSP. With the ability to see which accounts are remotely accessing various resources, an MSP can more easily investigate potential security incidents, identify configuration errors and other issues, and strategically allocate resources based on actual usage of IT infrastructure and assets. Improved Customer Satisfaction During the pandemic, the performance and scalability limitations of VPNs were laid bare for all to see. Many organizations needed to rapidly shift from mostly on-site to remote work within a matter of weeks. To do so, they often deployed or expanded VPN infrastructure to support a workforce much larger than existing solutions were designed to handle. However, VPNs scale poorly and can create significant performance issues. Remote access deployments built on VPNs overwhelmed existing network infrastructure, created significant latency, and offered poor support for the mobile devices that remote workers are increasingly using to do their jobs. 
During the pandemic, employees experienced significant network latency as traffic to cloud-based applications and data storage was backhauled through on-prem data centers by VPN appliances. As a result, employees commonly sought workarounds - such as downloading sensitive data to devices for easier access or using unapproved services - in order to do their jobs. ZTNA solutions provide optimized performance and better security by moving away from the perimeter-focused security model of VPNs. As corporate infrastructure and resources move to the cloud, employees need high-performance access to SaaS solutions, and routing traffic to these solutions via the corporate network makes no sense. ZTNA makes it possible to perform access management in the cloud and improve the user experience. For MSPs, improving the end user experience also improves the experience of their customers, who otherwise must field employees’ complaints about performance and latency issues. Additionally, ZTNA enables an MSP to eliminate inefficient routing, which creates unnecessary load on their infrastructure and can make it more difficult to meet customer expectations for network performance. Value-Added Functionality VPNs provide bare-bones network connectivity for remote users. If an organization wants additional access control or to secure the traffic flowing over the VPN connection, this requires additional standalone solutions. By making the move from VPNs to ZTNA for secure remote access, an MSP can expand the services offered to their customers with minimal additional overhead. ZTNA offers access management, so an MSP can offer it as a service as well. The data generated by ZTNA can be processed and displayed on dashboards for customers looking for additional insight into their network usage or security. MSPs can provide ongoing support services for the management and maintenance of ZTNA solutions. SASE Supercharges ZTNA Making the move to ZTNA for their secure remote access offerings makes logical sense for MSPs. 
ZTNA provides more functionality, better performance and security, and simpler management and maintenance than VPN-based infrastructure. However, the benefits of ZTNA can be dramatically expanded by deploying it as part of a SASE solution. SASE is deployed as a network of cloud-based points of presence (PoPs) with dedicated, high-performance network links between them. Each SASE PoP integrates ZTNA with other security and network optimization features, providing high-performance and reliable connectivity and enterprise-grade security for the corporate WAN. Making the move to ZTNA streamlines and optimizes an MSP’s remote access services offering. Deploying it with SASE does the same for an MSP’s entire network and security services portfolio.
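The visibility advantage described earlier, that ZTNA records every access decision rather than just a tunnel connection, can be sketched with a small example. The log entries and field names below are hypothetical; a real ZTNA or SASE service would expose this data through its own logging and reporting interfaces:

```python
from collections import Counter

# Hypothetical ZTNA audit-log entries: each access decision is recorded
# with the account, the resource requested, and the outcome - unlike a
# VPN, which typically logs only that a tunnel was established.
audit_log = [
    {"user": "alice",   "resource": "erp",        "allowed": True},
    {"user": "bob",     "resource": "erp",        "allowed": True},
    {"user": "mallory", "resource": "finance-db", "allowed": False},
    {"user": "alice",   "resource": "finance-db", "allowed": True},
]

# Per-resource usage: helps an MSP allocate capacity based on
# actual consumption of IT infrastructure and assets.
usage = Counter(e["resource"] for e in audit_log if e["allowed"])

# Denied attempts: a natural starting point for investigating
# potential security incidents or configuration errors.
denied = [(e["user"], e["resource"]) for e in audit_log if not e["allowed"]]

print(usage.most_common())  # resources ranked by successful accesses
print(denied)               # who was refused, and where
```

Even this trivial aggregation yields two things a VPN log cannot: a usage profile per resource and a list of refused access attempts worth investigating.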

Can You Really Trust Zero Trust Network Access?

Why Yes The global economy’s shift to hybrid work models is challenging enterprises to securely connect their work-from-anywhere employees. Supporting these highly distributed, dynamic, and... Read ›
Can You Really Trust Zero Trust Network Access? Why Yes The global economy's shift to hybrid work models is challenging enterprises to securely connect their work-from-anywhere employees. Supporting these highly distributed, dynamic, and diverse networks requires enterprises to be more flexible and accommodating, which results in remote access becoming an ever-expanding attack surface. A crucial step in reducing this risk is transitioning from legacy VPNs, with their inherently risky castle-and-moat approach, to Zero Trust Network Access (ZTNA). The latter implements a much more restrictive access control mechanism, which allows users to connect to applications strictly on a need-to-access basis. Why Not ZTNA solutions, however, rely mostly on user authentication, and when this becomes compromised, a perpetrator still has the capability to wreak havoc in the enterprise network and its connected assets. User account takeovers are quite common and are achieved by way of social engineering (e.g. phishing) and other techniques. Security experts agree that enterprise security teams should work under the assumption that user accounts have been, or at some point will be, compromised. What’s Next Recognizing this risk, and as part of our continuous quest to provide our customers with better security, Cato has released the Client Connectivity Policy (CCP) feature. CCP acts as an additional layer of security when connecting remote employees to the enterprise network. It adds user- or group-level validations based on the client platform (operating system), location, and Device Posture information (fig. 1). Clients are granted access only after fully satisfying the defined connectivity policies. [caption id="attachment_24508" align="alignnone" width="1200"] Figure 1[/caption] [boxlink link=""] The Hybrid Workforce: Planning for the New Working Reality | Download eBook [/boxlink] It is no longer enough to pass ZTNA authentication in order to access the enterprise network. 
The additional security layer added by Cato's CCP significantly reduces ZTNA-related attack vectors, even for compromised accounts, and strengthens the enterprise's overall security posture. While Device Posture itself is commonly used as part of ZTNA, Cato's CCP is unique in that Device Posture is just one source of information used to make access decisions (fig. 2). CCP also enables numerous Device Posture checks to be defined and selectively implemented for different users and groups. This provides security teams with a high degree of flexibility when defining connectivity policies. For example, highly stringent requirements can be applied to users with access to highly sensitive enterprise assets (the "crown jewels"), and more relaxed requirements to users with limited access and lower risk potential. [caption id="attachment_24510" align="alignnone" width="1200"] Figure 2[/caption] The Bottom Line In the evolving threat landscape of remote access, Zero Trust is just too trusting. Cato's Client Connectivity Policy takes ZTNA an extra step by adding a security layer capable of blocking access from unauthorized clients, even when the user account has been compromised. By using several independent evaluation criteria, and highly flexible Device Posture profiles, Cato's CCP keeps your enterprise's security posture one step ahead of your next attack.
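The layered checks described above can be sketched in the spirit of CCP: even a fully authenticated user is blocked unless the client also satisfies its group's connectivity profile. The profile names, fields, and rules below are entirely hypothetical illustrations; the actual feature is configured in Cato's management console, not in code:

```python
# Illustrative sketch of layered connectivity checks, loosely modeled
# on the idea behind Cato's Client Connectivity Policy (CCP).
# Profiles, fields, and rules here are made-up examples.

POSTURE_PROFILES = {
    # Stringent profile for users with access to the "crown jewels"
    "strict": {
        "platforms": {"Windows", "macOS"},
        "countries": {"US", "UK"},
        "require_disk_encryption": True,
    },
    # Relaxed profile for lower-risk users with limited access
    "relaxed": {
        "platforms": {"Windows", "macOS", "iOS", "Android"},
        "countries": None,  # any location allowed
        "require_disk_encryption": False,
    },
}

def may_connect(profile_name: str, platform: str,
                country: str, disk_encrypted: bool) -> bool:
    """Authentication alone is not enough: the client must also
    satisfy every check in the group's connectivity profile."""
    p = POSTURE_PROFILES[profile_name]
    if platform not in p["platforms"]:
        return False
    if p["countries"] is not None and country not in p["countries"]:
        return False
    if p["require_disk_encryption"] and not disk_encrypted:
        return False
    return True

# A compromised high-privilege account on an unmanaged Android
# device is still blocked, while a low-risk user connects freely:
print(may_connect("strict", "Android", "US", False))   # False
print(may_connect("relaxed", "Android", "BR", False))  # True
```

The point of the sketch is the independence of the criteria: platform, location, and posture are each evaluated on their own, so stolen credentials alone never satisfy the policy.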

Here’s Why You Don’t Have a CASB Yet

There’s An App for That What used to be a catchphrase in the world of smartphones, “There’s an app for that”, has become a reality... Read ›
Here’s Why You Don’t Have a CASB Yet There's An App for That What used to be a catchphrase in the world of smartphones, "There's an app for that", has become a reality for enterprise applications as well. Cloud-based Software as a Service (SaaS) applications are available to cater to nearly every aspect of an organization's needs. Whichever task an enterprise is looking to accomplish - there's a SaaS for that. On the flip side, the pervasiveness of SaaS has enabled employees to adopt and consume SaaS applications on their own, without IT's involvement, knowledge, or consent. While CASB solutions, which help enterprises cope with shadow IT, have been around for quite a while, their adoption has been relatively limited and mostly on the larger end of the enterprise spectrum. With the shift to the cloud embraced by enterprises of all sizes, and the need for a solution such as CASB so evident, the question remains: why are we not seeing greater adoption? Here are a few common objections: It's too complex to deploy and run Deploying a stand-alone CASB solution is no small feat. It requires extensive planning and mapping of all endpoints and network entities from which information needs to be collected. It also requires continuous deployment and updating of PAC files and network collector agents. There is also typically a need to modify the network topology to allow cloud-bound data to pass through the CASB service. It adds network latency CASB processing in inline deployment mode can add significant network latency. When there is a need to decrypt traffic in order to apply granular access rules, there is additional latency due to encryption/decryption processing. Adding a CASB service to your traffic flow often means adding another network hop, introducing still more latency. 
It requires domain expertise While operating a CASB solution isn't rocket science, it still requires a fair amount of knowledge and experience to implement correctly and ensure cloud assets and users are protected effectively. Many IT teams lack the resources and expertise to manage a CASB and simply pass on it. I don't see the need for CASB This objection comes up more often than one might think. In many cases it is raised by IT managers who believe their SaaS usage is minimal and in check. To understand the full extent of an organization's SaaS usage, there is a need for a CASB shadow IT report. But the report is available only after the CASB has been deployed. This deadlock often keeps enterprises from seeing the value and importance of CASB. [boxlink link=""] Cato CASB overview | Read the eBook [/boxlink] Keep It Simple CASB While CASB is undoubtedly an essential component for any modern digital enterprise, the abovementioned concerns are causing many IT leaders to keep their CASB aspirations at bay. But what if we could make all of these hurdles go away? What if there was a CASB solution that takes away the deployment complexity, eliminates network latency, reduces the required expertise, and gives any IT manager complete insight into their SaaS usage without any investment of time, effort, or cost? This may sound too good to be true, but when we deploy CASB as part of a true SASE cloud service, we are able to achieve this and bring CASB within reach of any enterprise. But what does "true SASE" mean? And how does it help deliver on these promises? True SASE means all edges. A true SASE solution processes all enterprise network traffic, including both on-site and remote users. Since all traffic passes through the SASE service, there is no need to plan or deploy PAC files, agents, or collectors of any kind. All the information the CASB needs is already available. True SASE means single pass processing. 
A true cloud-native SASE service executes networking and security services in parallel as opposed to sequential service chaining. This means no additional latency is added by enabling the CASB service. True SASE means unified management. A true SASE solution enables management and visibility of all networking and security services in a single-pane-of-glass management console. As all network edges, users, and applications are already defined in the system, adding CASB involves minimal additional configuration and ensures simple and fast ramp-up. True SASE means convergence. A true SASE solution fully converges CASB as part of the SASE software stack and gives it complete visibility into all cloud-bound traffic, without the need for any additional deployment or configuration. This enables any enterprise employing SASE to try CASB and view a full shadow IT report instantly, without effort, cost, or commitment. Cato's SASE cloud is a true SASE service. It covers all edges, it is implemented as a Single Pass Cloud Engine (SPACE), uses a unified, single-pane-of-glass management system for all services, and fully converges all networking and security services into a single software stack. As a result, today Cato offers a "CASB Zero" solution that requires zero planning and zero deployment, adds zero latency, requires zero domain expertise, and enables a zero-friction and zero-commitment PoC to any Cato SASE customer. Cato CASB - zero reason not to give it a try.
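The claim that a shadow IT report comes "for free" once all traffic traverses the SASE cloud can be sketched as a simple classification of flows the service has already seen. The flow records, app names, and sanctioned-app list below are invented for illustration; a real service would draw on its own traffic logs and SaaS application catalog:

```python
from collections import Counter

# Sketch: because all edges already send traffic through the SASE
# cloud, a shadow IT report reduces to classifying observed flows.
# These flow records and the sanctioned-app list are hypothetical.

SANCTIONED = {"office365.com", "salesforce.com"}

flows = [
    {"user": "alice", "app": "office365.com"},
    {"user": "bob",   "app": "dropbox.com"},
    {"user": "carol", "app": "dropbox.com"},
    {"user": "bob",   "app": "wetransfer.com"},
]

# Unsanctioned SaaS applications, ranked by how often they are used -
# the core of a shadow IT report, with no collectors or PAC files.
shadow = Counter(f["app"] for f in flows if f["app"] not in SANCTIONED)

print(shadow.most_common())
```

No new data collection is needed for this report; the aggregation runs over information the service captures as a side effect of carrying the traffic, which is the crux of the "zero deployment" argument.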

Does Your Backbone Have Your Back?

Private backbone services are all the rage these days. Google’s recent announcement of the GCP Network Connectivity Center (NCC) joins other similar services such as... Read ›
Does Your Backbone Have Your Back? Private backbone services are all the rage these days. Google’s recent announcement of the GCP Network Connectivity Center (NCC) joins other similar services such as Amazon’s AWS Transit Gateway and Microsoft’s Azure Virtual WAN. Private backbones enable high-quality connections that don't rely on the public Internet. There are no performance guarantees on the public internet, which means connections often suffer from high latency, jitter, and packet loss. The greater the connection’s distance, the greater the performance degradation we will typically experience. A private backbone overcomes this and ensures traffic runs fast and smooth between any two locations within the provider’s network. So, should you use a private backbone? Absolutely. Why travel single-lane, congested and traffic-light ridden roads when you can take a multi-lane, obstacle-free highway? This is a no-brainer. And with all major public cloud providers now offering private backbones, there’s got to be one that’s right for you, right? Let's take a deeper look at an enterprise’s connectivity needs and what private backbones have to offer. [boxlink link=""] SASE vs SD-WAN - What’s Beyond Security | Download eBook [/boxlink] What are we trying to solve? Enterprises have been relying on MPLS circuits to ensure connection quality between their offices and datacenters. Looking beyond the obvious drawbacks of MPLS, namely high cost and operational rigidity, MPLS lines provide little to no value for applications which have been migrated to the cloud. Enterprises must therefore find an alternative way to optimize traffic to cloud datacenters. This is where private backbones prove their value and enable highway-like connectivity to cloud deployments. But just like any other highway, the benefit they provide depends on how close they run to where you are travelling from and where you need to get to. 
Since public cloud providers deploy private lines between their datacenter locations, they primarily benefit enterprise endpoints that are located near those datacenters. Let's break this down into a couple of questions which can help us better understand what it means. Where are you coming from? The point of origin for enterprise connectivity is anywhere employees connect from. This used to be the office for the most part but has increasingly shifted to employees' homes. This means the dispersion of connecting endpoints has significantly grown and requires a highway system which has much greater and more granular coverage in order to be effective. In the world of private backbones this translates into PoP coverage. The more locations the network has PoPs at, the closer it will likely reach your origin points. AWS Transit Gateway has 16 PoPs1, Azure Virtual WAN has 392, and GCP NCC has 103. Are all your users connecting from locations close to these PoPs? If not, there's a good chance you need a private backbone with better coverage. Where are you headed? One of the main reasons to use a private backbone is to enable high quality connectivity to public cloud datacenters. As mentioned above, public cloud offerings provide connectivity to these locations exactly, which makes them seem like an obvious choice. An important point to keep in mind, though, is that since private backbone services don't interconnect, you can only use one. Most enterprises, however, don't use just one public cloud. In fact, out of all enterprises using public services, 92 percent have a multi-cloud strategy4 with an average of 2.6 public cloud services each. Choosing a public cloud's private backbone offering will be a good fit for that specific cloud's datacenter locations, but what about the other 1.6 public clouds the typical enterprise needs to reach? Who is going to guarantee connectivity performance to their datacenters? 
It is certainly not in the interest of AWS to guarantee your network's performance when accessing Azure, GCP or any other cloud provider. If anything, their interest is exactly the opposite. They want to lock you into their own service. Only a cloud-agnostic private backbone service will have an interest and the ability to guarantee connectivity performance to all public cloud datacenters. It will also reduce your dependency and risk of lock-in to any single public cloud vendor. They may serve, but do they protect? Network performance plays an important part in ensuring enterprise applications are delivered smoothly, but we must also ensure the enterprise network is protected. This is where security services such as Next-Generation Firewalls (NGFW), Intrusion Prevention Systems (IPS), Next-Generation Anti-Malware (NGAM), Secure Web Gateway (SWG), Software Defined Perimeter (SDP)/Zero Trust Network Access (ZTNA) for remote user connectivity, and others come into play. The question is where do they fit in the traffic flow? One option is to deploy them at the enterprise endpoint locations, which means traffic will first pass through them before entering, or after exiting, the private backbone. However, deploying all the above security solutions at each and every endpoint location can become quite expensive, not to mention complex to manage. Deploying them at a subset of locations (for example only at datacenters) means branch office and remote user traffic bound for a cloud-based service will need to backhaul through these locations (Fig. 1). This adds latency and misses the point of having a private backbone connection for direct access to the cloud. [caption id="attachment_18668" align="alignnone" width="624"] Figure 1. Security deployed at endpoint[/caption] A second option is to use a cloud-based security service (Fig. 2). This means adding a hop along the way in order to pass through where the service is delivered from. 
So instead of using the private backbone for a single, direct end-to-end connection, we need to break the journey into two separate legs, user to cloud security service and cloud security service to enterprise service. This, again, defeats the purpose of having a private backbone. [caption id="attachment_18988" align="alignnone" width="588"] Figure 2. Security delivered from a cloud service[/caption] A third option is to use a private backbone which has all the security services deployed at each of its PoPs. This means that wherever you connect from, your traffic will be protected by a full security stack at the PoP nearest to you. From there, the private backbone will carry your traffic directly to its destination, wherever it may be. This will enable full security for all endpoints with no backhauling or additional stops along the way. The question is where do you find a solution that converges a private backbone with a full security stack in a single service? Gartner has defined such an architecture and named it the Secure Access Service Edge (SASE). SASE: Private backbone done right Cato is the world's first SASE platform, which converges networking, security, remote access, and a global private backbone into a unified cloud-native service. All PoPs run Cato's full security stack and provide complete protection to all endpoint traffic (Fig. 3). [caption id="attachment_18719" align="alignnone" width="575"] Figure 3. Security embedded into backbone PoPs[/caption] This architecture, in which the security services are embedded within each network node, enables enterprises to harness the full performance potential of their global private backbone. There is no need to backhaul or add hops on the way to the service destination. Additionally, Cato's SASE Cloud is comprised of more than 65 PoPs, providing superior global coverage wherever your users connect from. Why Choose Cato's SASE Cloud? 
Cato’s SASE Cloud has superior global distribution for optimal backbone performance. It has a full security stack embedded into every PoP, which eliminates the need for backhauling or adding hops to your traffic flows. Cato’s SASE Cloud is a truly converged solution which simplifies your network topology and offers single-pane-of-glass management for backbone, security, networking, and remote access in a unified console. It is also IaaS vendor agnostic, ensuring your applications' performance when delivered from any public cloud platform, and helps you avoid vendor lock-in. Cato SASE Cloud. The backbone that serves and protects.

How Consolidated Security Became the New Best-of-Breed

The IT Manager’s Dilemma  IT professionals are constantly making decisions regarding which security solutions they should purchase to protect their organizations. One of the most common dilemmas they face is whether to go with a... Read ›
How Consolidated Security Became the New Best-of-Breed The IT Manager’s Dilemma  IT professionals are constantly making decisions regarding which security solutions they should purchase to protect their organizations. One of the most common dilemmas they face is whether to go with a consolidated, “Swiss army knife,” solution or choose a number of stand-alone, best-of-breed, products. A consolidated solution has clear advantages such as simpler integration, no interoperability risk, less expertise required to manage multiple siloed solutions and, usually, a lower TCO. However, there has always been the notion of a tradeoff between the benefits offered by a consolidated solution, and the superior security provided by best-of-breed products. Simply put, to gain the benefits of consolidation you have to lose some degree of security. Is Best-of-Breed Really the Best? In a survey conducted by Dimensional Research1 among global security leaders, 69% of the respondents agreed that vendor consolidation would lead to better security. In a recent Gartner survey2, comparing a vendor consolidation strategy versus a best-of-breed approach to security solutions procurement, 41% of respondents stated that the primary benefit they’ve seen from consolidating their security solutions was an improvement in their organizational risk posture. It’s not that the other 59% didn’t view it as a benefit; they just didn’t see it as the primary one. This might seem counterintuitive. Best-of-breed literally means getting the best product for each security category. Logically this should lead to the best overall security posture. So how have we arrived at a reality where a growing number of IT leaders believe that a consolidated whole is greater than the sum of its best-of-breed counterparts? What Makes Consolidation More Secure? There are several reasons why a consolidated solution leads to a better security posture: When More Becomes Less. 
Deploying a large number of stand-alone solutions requires the IT team to have the necessary expertise to manage, capacity plan, monitor, and carry out software updates and security patching for all those different products. The greater the number of products, the thinner the IT team is spread, increasing the risk of a device, service, or security policy misconfiguration and, with it, greater vulnerability. According to a survey conducted by IDC5, misconfiguration is the number one cause of cloud security breaches. More products mean more complexity and more room for error, ultimately leading to a less robust security posture. Big Targets Are Easier to Hit. The greater the number of products, the greater the diversity of operating systems, OS versions, drivers, and third-party software. This directly translates into a greater accumulative attack surface and more opportunities for bad actors to breach your network and assets. As far as security is concerned, the more the scarier. One for All. Separate stand-alone products typically require separate management systems, which can lead to security gaps caused by duplicate, and sometimes inconsistent, configurations of the different security engines. Single-pane-of-glass management promotes coherence and better visibility, which contributes to improved protection. All for One. A solution consisting of stand-alone products is typically stitched together via service chaining, in which the security engines process traffic one after the other. A truly consolidated solution leverages a single-pass architecture, in which security engines process traffic in parallel with a unified, single context. This facilitates one fully informed decision instead of a series of half-blind ones, and greatly enhances security coverage. Simple is the New Black. When evaluating a solution, it is easy to get excited about all the bells and whistles included in a best-of-breed product. 
It is, however, important to have an objective and pragmatic understanding of what you actually need. A recent report published by Pendo.io3 reveals that 80% of software features are either never or rarely used. When a thin-spread IT team is at the helm of a large number of disjointed products, an unnecessary configuration option can quickly turn into a liability. In a recent report4, Gartner wrote “After decades of focusing on network performance and features, future network innovation will target operational simplicity, automation, reliability and flexible business models”. Keeping things simple helps keep them secure. Word of Warning: Single vendor doesn’t always mean consolidated There is an important distinction we should make between a truly consolidated solution and a single-vendor solution composed of separate products, often obtained through a merger or acquisition. Although the latter are sold by a single vendor, they are still, more often than not, separate solutions with separate management. As such, they do not reap the above benefits. What’s Next? The ever-evolving cyber-threat landscape is creating a continuous need to adopt new security solutions in order to keep our networks and IT assets protected. Each organization has a tipping point at which the number of products they bring on board becomes too complex to handle and begins to hinder their security posture. As a growing number of IT leaders come to realize this, the demand for simple, coherent, consolidated solutions will continue to grow and become their de facto go-to security strategy. No longer are security and consolidation on opposite sides of the trade-off scales. They are, in fact, growing increasingly synonymous. 1 Why Cyber Security Consolidation Matters – Dimensional Research, published by Check Point. 2 Security Vendor Consolidation Trends - Should You Pursue a Consolidation Strategy? 
– Gartner 3 The 2019 Feature Adoption Report – Pendo.io 4 Strategic Roadmap for Networking 2019 – Gartner 5 IDC Security Survey of 300 CISOs – IDC 2020