Lipstick on a Pig: When a Single-Pane-of-Glass Hides a Bad SASE Architecture

The Secure Access Service Edge (SASE) is a unique innovation. It doesn't focus on new cutting-edge features such as addressing emerging threats or improving application performance. Rather, it focuses on making networking and security infrastructure easier to deploy, maintain, manage, and adapt to changing business and technical requirements.

This new paradigm threatens legacy point-solution incumbents. It portrays the world they created for their customers as costly and complex, pressuring customer budgets, skills, and people. Gartner tackled this market trend in its research note, "Predicts 2022: Consolidated Security Platforms Are the Future." Writes Gartner: "The requirement to address changing needs and new attacks prompts SRM (security and risk management) leaders to introduce new tools, leaving enterprises with a complex, fragmented environment with many stand-alone products and high operational costs."

In fact, customers want to break the trend of increasing operational complexity. Writes Gartner: "SRM leaders tell Gartner that they want to increase their efficiency by consolidating point products into broader, integrated security platforms that operate as a service." This is the fundamental promise of SASE.

However, SASE is extremely difficult for vendors that start from a pile of point solutions built for on-premises deployment. What such vendors need to do is re-architect these point solutions into a single, converged platform delivered as a cloud service. What they can afford to do is hide the pile behind a single pane of glass. Writes Gartner: "Simply consolidating existing approaches cannot address the challenges at hand. Convergence of security systems must produce efficiencies that are greater than the sum of their individual components."
[boxlink link="https://www.catonetworks.com/resources/5-questions-to-ask-your-sase-provider/?utm_source=blog&utm_medium=top_cta&utm_campaign=5_questions_for_sase_provider"] 5 Questions to Ask Your SASE Provider | eBook [/boxlink]

How can you achieve efficiency that is greater than the sum of the SASE parts? The answer: core capabilities should be built once and then leveraged to address multiple functional requirements.

Consider traffic processing, for example. Traffic processing engines are at the core of many networking and security products, including routers, SD-WAN devices, next-generation firewalls, secure web gateways, IPS, CASB/DLP, and ZTNA products. Each such engine uses a separate rule engine, policies, and context attributes to achieve its desired outcomes. Their deployment varies based on the subset of traffic they need to inspect and the source of that traffic, including endpoints, locations, networks, and applications.

A true SASE architecture is "single pass." This means the same traffic processing engine can address multiple technical and business requirements (threat prevention, data protection, network acceleration). To do that, it must be able to extract the relevant context needed to enforce the applicable policies across all these domains. It needs a rule engine that is expandable to create rich rulesets that use context attributes to express elaborate business policies. And it needs to feed a common repository of analytics and events that is accessible via a single management console.

Simply put, the underlying architecture drives the benefits of SASE bottom-up -- not a pretty UI managing a convoluted architecture top-down. If you have an aggregation of separate point products, everything becomes more complex -- deployment, maintenance, integration, resiliency, and scaling -- because each product brings its unique set of requirements, processes, and skills to an already very busy IT organization.
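To make the "single pass" idea concrete, here is a minimal, purely illustrative sketch (not Cato's actual implementation; all names and rule formats are invented). Context is extracted from a flow once, and each policy domain then evaluates its rules against that same shared context instead of re-parsing the traffic:

```python
# Hypothetical sketch of single-pass processing: context is extracted once
# per flow, then every policy domain (threat prevention, data protection,
# QoS) evaluates rules against that shared context.

from dataclasses import dataclass, field

@dataclass
class FlowContext:
    src_user: str
    app: str
    category: str
    attributes: dict = field(default_factory=dict)

def extract_context(raw_flow: dict) -> FlowContext:
    # A real engine would do deep packet inspection, identity lookup,
    # and application classification here. This sketch just copies fields.
    return FlowContext(
        src_user=raw_flow["user"],
        app=raw_flow["app"],
        category=raw_flow.get("category", "unknown"),
    )

# Each domain contributes rules over the SAME context -- no re-parsing.
def threat_policy(ctx: FlowContext) -> str:
    return "block" if ctx.category == "malware" else "allow"

def data_policy(ctx: FlowContext) -> str:
    return "inspect-dlp" if ctx.app == "file-sharing" else "pass"

def qos_policy(ctx: FlowContext) -> str:
    return "priority" if ctx.app == "voip" else "best-effort"

def single_pass(raw_flow: dict) -> dict:
    ctx = extract_context(raw_flow)  # context extraction happens once
    return {
        "threat": threat_policy(ctx),
        "data": data_policy(ctx),
        "qos": qos_policy(ctx),
    }

verdicts = single_pass({"user": "alice", "app": "voip", "category": "rtc"})
```

The point of the sketch is the shape of the pipeline: one extraction step feeding many rulesets, with all verdicts landing in one place, rather than one full parse-and-inspect cycle per product.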
This is why Cato is the world's first and most mature SASE platform. It isn't just that we cover the key functional requirements of SD-WAN, Secure Web Gateway (SWG), Firewall-as-a-Service (FWaaS), Cloud Access Security Broker (CASB), and Zero Trust Network Access (ZTNA). Rather, it is that we built the only true SASE architecture to deliver these capabilities as a single global cloud service, with the simplicity, automation, scalability, and resiliency that truly enable IT to support the business with whatever comes next.

Does WAN transformation make sense when MPLS is cheap?

WAN transformation with SD-WAN and SASE is a strategic project for many organizations. One of the common drivers for this project is cost savings, specifically the reduction of MPLS connectivity costs. But what happens when the cost of MPLS is low? This is the case in many developing nations, where the cost of high-quality internet is similar to the cost of MPLS, so migrating from MPLS to an Internet-based last mile doesn't create significant cost savings. Should these customers stay with MPLS?

While every organization is different, MPLS generally imposes architectural limitations on enterprise WANs that could impact other strategic initiatives. These include cloud migration of legacy applications, the increased use of SaaS applications, remote access and work from home at scale, and aligning new capacity, availability, and quality requirements with available budgets. In short, moving away from MPLS prepares the business for the radical changes in the way applications are deployed and how users access them.

Legacy MPLS WAN Architecture: Plain Old Hub and Spokes

MPLS WAN was designed decades ago to connect branch locations to a physical datacenter as a telco-managed network. This is a hard-wired architecture that assumes most (or all) traffic needs to reach the physical datacenter where business applications reside. Internet traffic, which was a negligible part of the overall enterprise traffic, would backhaul into the datacenter and securely exit through a centralized firewall to the internet.

[boxlink link="https://www.catonetworks.com/resources/what-telcos-wont-tell-you-about-mpls?utm_source=blog&utm_medium=top_cta&utm_campaign=windstream_partnership_news"] What Others Won’t Tell You About MPLS | EBOOK [/boxlink]

Cloud migration shifts the Hub

Legacy MPLS design is becoming irrelevant for most enterprises. The hub and spokes are changing. For example, Office 365.
This SaaS application has dramatically shifted traffic away from on-premises Exchange and SharePoint in the physical datacenter, and from offline copies of Office documents on users' machines, to a cloud application. The physical datacenter is eliminated as the primary provider of messaging and content, diverting all traffic to the Office 365 cloud platform and continuously creating real-time interaction between users' endpoints and content stores in the cloud. Looking at the underlying infrastructure, the datacenter firewalls and datacenter internet links now carry the entire organization's Office 365 traffic, becoming a critical bottleneck and a regional single point of failure for the business.

Work from home shifts the Spokes

Imagine now that we suddenly moved to a work-from-home hybrid model. The MPLS links sit idle in the deserted branches, and all datacenter inbound and outbound traffic comes through the same firewalls and Internet links, likely creating scalability, performance, and capacity challenges. In this example, centralizing all remote access in a single location, or a few locations globally, isn't aligned with the need to provide secure and scalable access to the entire organization, in the office and at home.

Internet links offer better alignment with business requirements than MPLS

While MPLS and high-quality direct internet access prices are getting closer, MPLS offers limited choice in aligning customer capacity, quality, and availability requirements with the underlay budget. Let's look at availability first. While MPLS comes with a contractually committed time to fix, even the most dedicated telco can't always repair infrastructure damage within a couple of hours over a weekend. It may make sense to use a wired line and a wireless connection, managed by an edge SD-WAN device, to create physical infrastructure redundancy.

Capacity and quality pose a challenge as well. Traffic mix is evolving in many locations.
For example, a retail store may want to run video traffic for display boards, which requires much larger pipes. Service levels for those video streams, however, are different from those of Point-of-Sale machines. It could make sense to run the mission-critical traffic on MPLS or high-quality internet links and the best-effort video traffic on low-cost broadband links, all managed by edge SD-WAN. Furthermore, if the video streams reside in the cloud, running them over MPLS will concentrate massive traffic at the datacenter firewall and Internet link chokepoint. It would make more sense to use direct internet access at the branch, connect directly to the cloud application, and stream the traffic to the branch. This requires adding a cloud-based security layer that is built to support distributed enterprises.

The Way Forward: MPLS is Destined to be Replaced by SASE

Even if you don't see immediate cost savings, shifting your thinking from MPLS-based network design to an internet- and cloud-first mindset is a necessity. Beyond the underlying network access, a SASE platform that combines SD-WAN, cloud-based security, and zero-trust network access will prepare your organization for the inevitable shift in how users' access to applications is delivered by IT, in a way that is optimized, secure, scalable, and agile. At Cato, we refer to it as making your organization Ready for Whatever's Next.
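The retail example above boils down to policy-based path selection: pin mission-critical traffic to the high-quality link, send best-effort traffic over cheap broadband, and fail over automatically when a link goes down. A toy sketch, with entirely hypothetical link and application names, might look like this:

```python
# Illustrative sketch of SD-WAN policy-based link selection. Mission-critical
# traffic (Point-of-Sale) prefers the high-quality MPLS link; best-effort
# traffic (signage video) prefers cheap broadband. If the preferred link is
# down, the next preference is used. All names are hypothetical.

LINKS = {"mpls": {"up": True}, "broadband": {"up": True}}

POLICY = [
    # (application class, ordered link preference)
    ("pos", ["mpls", "broadband"]),    # Point-of-Sale: quality first
    ("video", ["broadband", "mpls"]),  # display-board video: cheap pipe first
]

def select_link(app_class: str) -> str:
    for match, preferences in POLICY:
        if app_class == match:
            for link in preferences:
                if LINKS[link]["up"]:  # skip links that are down
                    return link
    return "broadband"  # default: unknown traffic goes best-effort

print(select_link("pos"))        # "mpls" while the MPLS link is up
LINKS["mpls"]["up"] = False
print(select_link("pos"))        # fails over to "broadband"
```

A real SD-WAN device would also factor in measured loss, latency, and jitter per link, but the ordered-preference-plus-health-check shape is the core of the idea.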

Windstream Enterprise partners with Cato Networks to Deliver Cloud-native SASE to organizations in North America

We are proud and excited to announce our partnership with Windstream Enterprise (WE), a leading Managed Service Provider (MSP) delivering voice and communication services across North America. WE will offer Cato's proven and mature SASE platform to enterprises of all sizes.

Cato offers WE a unique business and technical competitive advantage. By leveraging Cato's SASE platform, WE can rapidly deploy a wide range of networking and security capabilities across locations, users, and applications to enable customers' digital transformation journeys. Unlike SASE solutions composed of point products, Cato's converged platform enables WE to get to market faster with a feature-rich SASE solution and meet unprecedented customer demand.

Agility and velocity are critical for both partners and customers today. Businesses expand geographically, grow through M&A, instantly adapt to new ways of work, and must protect themselves against the evolving threat landscape. These ever-changing requirements call for dynamic, scalable, resilient, and ubiquitous network and security infrastructure that can be ready for whatever comes next.

[boxlink link="https://www.catonetworks.com/news/windstream-enterprise-delivers-sase-solution-with-cato-networks/?utm_source=blog&utm_medium=top_cta&utm_campaign=windstream_partnership_news"] Windstream Enterprise Delivers North America’s First Comprehensive Managed SASE Solution with Cato Networks | News [/boxlink]

This is the promise of SASE that Cato Networks has been perfecting for the past seven years. No other SASE offering in the market can deliver on that promise with the same simplicity, velocity, and agility as Cato.
Here are some of the benefits that WE and our mutual customers will experience with Cato SASE:

- Rapidly evolving networking and security capabilities: Cato's cloud-native software stack includes SD-WAN, Firewall as a Service (FWaaS), Secure Web Gateway (SWG), Advanced Threat Prevention with IPS and Next-Gen Anti-Malware, Cloud Access Security Broker (CASB), and Zero Trust Network Access (ZTNA). Cato experts ensure these capabilities rapidly evolve and adapt to new business requirements and security threats without any involvement from our partners and customers.

- Instant-on for locations and users: WE can connect enterprise customers' locations and users to Cato quickly through zero-touch provisioning and let the Cato SASE Cloud handle the rest (route optimization, quality of service, traffic acceleration, and security inspection).

- Elastic capacity, available anywhere: The Cato SASE Cloud can handle traffic flows of up to 3 Gbps per location, in North America and globally, through a dense footprint of Points of Presence (PoPs). No capacity planning is needed.

- Fully automated self-healing: Cato's cloud-native SASE is architected with automated, intelligent resiliency from the customer edge to the cloud service PoPs. High availability by design ensures service continuity without human intervention and without complex HA planning and orchestration.

- True single pane of glass: Because Cato is a fully converged platform, it was built with a single management application for all configuration, reporting, analytics, and troubleshooting across all functions. Additionally, customers gain access to the WE Connect Portal for easy configuration, analysis, and automation of their fully cloud-native SASE framework.

Through our partnership, powerful SASE managed services become easier and more efficient to deliver.
Cato and WE are ready to usher customers into a new era where advanced managed services meet a cloud-native software platform to create a customer experience and deliver customer value like never before.    

New Gartner Report Explores The Portfolio or Platform Question for SASE Solutions

Understanding SASE is tricky because it has no "new cool feature." Rather, SASE is an architectural shift that fundamentally changes how common networking and security capabilities are delivered to users, locations, and applications globally. It is, primarily, a promise of a simple, agile, and holistic way of delivering secure and optimized access to everyone, everywhere, and on any device.

When Gartner introduced SASE in its 2019 report, The Future of Network Security Is in the Cloud, the analyst firm highlighted the convergence of network and network security services as the main architectural attribute of SASE. According to Gartner, "This market converges network (for example, software-defined WAN [SD-WAN]) and network security services (such as SWG, CASB and firewall as a service [FWaaS]). We refer to it as the secure access service edge and it is primarily delivered as a cloud-based service." Cobbling together multiple products wasn't a converged approach from either a technology or a management perspective.

Many vendors got the message and started to create single-vendor solutions. Some developed missing components, such as adding SD-WAN capability to a firewall appliance. Others acquired pieces such as SD-WAN, CASB, or Remote Browser Isolation (RBI) to add to existing solutions. According to the Gartner® Market Opportunity Map: Secure Access Service Edge, Worldwide1 report, by 2023 no fewer than 10 vendors will offer a one-stop-shop SASE solution.

Cato is a big proponent of "convergence" as a key requirement for fulfilling the SASE promise. The direction of many SASE vendors is toward a "one-stop shop." Does "convergence" equal "one-stop shop," and should you care?
[boxlink link="https://www.catonetworks.com/resources/what-to-expect-when-youre-expectingsase/?utm_source=blog&utm_medium=top_cta&utm_campaign=expecting_sase"] The Total Economic Impact™ of Cato's SASE Cloud | Read Report [/boxlink]

SASE: Platform ("convergence") does not mean Portfolio (owned by a "one-stop shop")

That question was addressed in a recent Gartner research paper titled "Predicts 2022: Consolidated Security Platforms Are the Future."2 There, Gartner makes a key distinction between Portfolio and Platform security companies. According to Gartner:

"Vendors are taking two clear approaches to consolidation:

Platform Approach
- Leverage interdependencies and commonalities among adjacent systems
- Integrating consoles for common functions
- Support for organizational business objectives at least as effectively as best-of-breed
- Integration and operational simplicity mean security objectives are also met.

Portfolio Approach
- Leveraged set of unintegrated or lightly integrated products in a buying package
- Multiple consoles with little to no integration and synergy
- Legacy approach in a vendor wrapper
- Will not fulfill any of the promised advantages of consolidation.

Differentiating between these approaches is key to the efficiency of the suite, and vendor marketing will always say they are a platform. As you evaluate products, you must look at how integrated the consoles are for the management and monitoring of the consolidated platform. Also, assess how security elements (such as data definitions, malware engines) and more can be reused without being redefined, or can apply across multiple areas seamlessly. Multiple consoles and multiple definitions are warnings that this is a portfolio approach that should be carefully evaluated."

SASE Platforms Require Cloud-based Delivery

Convergence of networking and security is, however, just one step towards fulfilling the SASE promise.
Cloud-based delivery is the key ingredient for achieving the operational and security benefits of SASE. According to Gartner:

"As the platforms shift to the cloud for management, analysis and even delivery, the ability to leverage the shared responsibility model for security brings enormous benefits to the consumer. However, this extends the risk surface to the vendor and requires further due diligence in third-party vendor management. The benefits include:

- Lack of physical technical debt; there is no hardware to amortize before shifting vendors or technology. The end-customer's data center footprint is reduced or eliminated for key technologies.
- Operational tasks (e.g., patching, upgrades, performance scaling and maintenance) are performed by the cloud provider. The system is maintained and monitored around the clock, and the staffing of the provider supplements that of the end customer.
- Controls are placed close to the hybrid modern workforce and to the distributed modern data; the path is not forced through an arbitrary, customer-owned location for filtering.
- Despite being large targets, cloud-native security vendors have the scale and focus to secure, manage, and monitor their infrastructure better than most individual organizations."

In the report, Gartner analysts Neil MacDonald and Charlie Winckless predict that "[B]y 2025, 80% of enterprises will have adopted a strategy to unify web, cloud services and private application access from a single vendor's SSE [secure service edge] platform." One of the key findings that led to this strategic planning assumption is: "Single-vendor solutions provide significant operational efficiency and security efficacy, compared with best-of-breed, including reduced agent bloat, tighter integration, fewer consoles to use, and fewer locations where data must be decrypted, inspected, and recrypted."

The report further states: "The shift to remote work and the adoption of public cloud services was well underway already, but it has been further accelerated by COVID-19. SSE allows the organization to support anywhere, anytime workers using a cloud-centric approach for the enforcement of security policy. SSE offers immediate opportunities to reduce complexity, costs and the number of vendors."

Cato: The SASE Platform powered by a Global Backbone

How does Cato measure up to this vision of the future? Cato was built from the ground up as a cloud-native service, built on one global backbone, to deliver one security stack, managed from a single console, and enforcing one comprehensive networking and security policy on all users, locations, and applications—and it's all available today from this single vendor. We welcome you to test drive the simple, agile, and holistic Cato SASE Cloud. We promise an eye-opening experience.

Learn more:
- Security Service Edge (SSE): It's SASE without the "A" (blog post)
- How to Secure Remote Access (blog post)
- The Future of Security: Do All Roads Lead to SASE? (webinar)
- 8 Ways SASE Answers Your Future IT & Security Needs (eBook)

1 Gartner, "Market Opportunity Map: Secure Access Service Edge, Worldwide," Joe Skorupa, Nat Smith, and Evan Zeng, July 16, 2021.
2 Gartner, "Predicts 2022: Consolidated Security Platforms Are the Future," Charlie Winckless, Joerg Fritsch, Peter Firstbrook, Neil MacDonald, and Brian Lowans, December 1, 2021.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

How to Secure Remote Access

Hundreds of millions of people worldwide were directed to work remotely in 2020 in response to pandemic lockdowns. Even as such restrictions begin to ease in some countries and employees slowly return to their offices, remote work continues to be a popular workstyle for many people. Last June, Gartner surveyed more than 100 company leaders and learned that 82% of respondents intend to permit remote working at least some of the time as employees return to the workplace. In a similar pattern, of 669 CEOs surveyed by PwC, 78% say that remote work is a long-term prospect. For the foreseeable future, organizations must plan how to manage a more complex, hybrid workforce, as well as the technologies that enable productivity while working remotely.

The Importance of Secure Remote Access

Allowing employees to work remotely introduces new risks and vulnerabilities to the organization. For example, people working at home or in other places outside the office may use unmanaged personal devices with a suspect security posture. They may use unsecured public Internet connections that are vulnerable to eavesdropping and man-in-the-middle attacks. Even managed devices over secured connections are no guarantee of a secure session, as an attacker could use stolen credentials to impersonate a legitimate user. Therefore, secure remote access should be a crucial element of any cybersecurity strategy.

[boxlink link="https://www.catonetworks.com/resources/the-hybrid-workforce-planning-for-the-new-working-reality/?utm_source=blog&utm_medium=top_cta&utm_campaign=hybrid_workforce"] The Hybrid Workforce: Planning for the New Working Reality | Download eBook [/boxlink]

How to Secure Remote Access: Best Practices

Secure remote access requires more than simply deploying a good technology solution. It also demands a well-designed and well-observed company security policy, plus processes to prevent unauthorized access to your network and its assets.
Here are the fundamental best practices for increasing the security of your remote access capabilities.

1. Formalize Company Security Policies

Every organization needs information security directives that are formalized in a written policy document and visibly supported by senior management. Such a policy must be aligned with business requirements and with the relevant laws and regulations the company must observe. The tenets of the policy will be codified into the operation of the security technologies the organization uses.

2. Choose Secure Software

Businesses must choose enterprise-grade software that is engineered to be secure from the outset. Even small businesses should not rely on software developed for a consumer market that is less sensitive to the risk of cyber-attacks.

3. Encrypt Data, Not Just the Tunnel

Most remote access solutions create an encrypted point-to-point tunnel to carry the communications payload. This is good, but not good enough. The data payload itself must also be encrypted for strong security.

4. Use Strong Passwords and Multi-Factor Authentication

Strong passwords are needed for both the security device and the user endpoints. Cyber-attacks often happen because an organization never changed the default password of a security device, or because the new password was too weak to be effective. Likewise, end users often choose weak passwords that are easy to crack. It's imperative to use strong passwords and MFA from end to end in the remote access solution.

5. Restrict Access Only to Necessary Resources

The principle of least privilege must be applied to remote access. If a person doesn't have a legitimate business need to access a resource or asset, they should not be able to get to it.

6. Continuously Inspect Traffic for Threats

The communication tunnel of remote access can be compromised, even after someone has logged into the network.
There should be a mechanism that continuously looks for anomalous behavior and actual threats. Should a threat be detected, auto-remediation should kick in to isolate or terminate the connection.

Additional Considerations for Secure Remote Access

Though these needs aren't specific to security, any remote access solution should be cost-effective, easy to deploy and manage, and easy for people to use, and it should offer good performance. Users will find a workaround to any solution that is slow or hard to use.

Enterprise Solutions for Secure Remote Access

There are three primary enterprise-grade solutions that businesses use today for secure remote access: Virtual Private Network (VPN), Zero Trust Network Access (ZTNA), and Secure Access Service Edge (SASE). Let's look at the pros and cons of each.

1. Virtual Private Network (VPN)

Since the mid-1990s, VPNs have been the most common and well-known form of secure remote access. However, enterprise VPNs were primarily designed to provide access for a small percentage of the workforce for short durations, not for large swaths of employees needing all-day connectivity to the network.

VPNs provide point-to-point connectivity. Each secure connection between two points requires its own VPN link for routing traffic over an existing path. For people working from home, this path is going to be the public Internet. The VPN software creates a virtual private tunnel over which the user's traffic goes from Point A (e.g., the home office or a remote work location) to Point B (usually a terminating appliance in a corporate datacenter or in the cloud). Each terminating appliance has a finite capacity for simultaneous users; thus, companies with many remote workers may need multiple appliances. VPN visibility is also limited when companies deploy multiple disparate appliances.

Security is still a considerable concern when VPNs are used.
While the tunnel itself is encrypted, the traffic traveling within that tunnel typically is not. Nor is it inspected for malware or other threats. To maintain security, the traffic must be routed through a security stack at its terminus on the network. In addition to inefficient routing and increased network latency, this can mean having to purchase, deploy, monitor, and maintain security stacks at multiple sites to decentralize the security load. Simply put, providing security for VPN traffic is expensive and complex to manage.

Another issue with VPNs is that they provide overly broad access to the entire network, without the option of controlling granular user access to specific resources. There is no scrutiny of the security posture of the connecting device, which could allow malware to enter the network. What's more, stolen VPN credentials have been implicated in several high-profile data breaches. By using legitimate credentials and connecting through a VPN, attackers were able to infiltrate and move freely through targeted company networks.

2. Zero Trust Network Access (ZTNA)

An up-and-coming VPN alternative is Zero Trust Network Access, sometimes called a software-defined perimeter (SDP). ZTNA uses granular application-level access policies set to default-deny for all users and devices. A user connects to and authenticates against a Zero Trust controller, which implements the appropriate security policy and checks device attributes. Once the user and device meet the specified requirements, access is granted to specific applications and network resources based on the user's identity. The user's and device's status are continuously verified to maintain access. This approach enables tighter overall network security as well as micro-segmentation that can limit lateral movement if a breach occurs.

ZTNA is designed for today's business.
People work everywhere — not only in offices — and applications and data are increasingly moving to the cloud. Access solutions need to reflect those changes. With ZTNA, application access can dynamically adjust based on user identity, location, device type, and more. What's more, ZTNA solutions provide seamless and secure connectivity to private applications without placing users on the network or exposing apps to the internet.

ZTNA addresses the need for secure network and application access, but it doesn't perform important security functions such as checking for malware, detecting and remediating cyber threats, protecting web-surfing devices from infection, and enforcing company policies on all network traffic. These additional functions are important offerings of another secure remote access solution known as Secure Access Service Edge.

3. Secure Access Service Edge (SASE)

SASE converges ZTNA, next-generation firewall (NGFW), and other security services along with network services such as SD-WAN, WAN optimization, and bandwidth aggregation into a cloud-native platform. Enterprises that leverage a SASE networking architecture receive the security benefits of ZTNA plus a full suite of converged network and security solutions that is both simple to manage and highly scalable. It is the optimal enterprise VPN alternative, and the Cato SASE solution provides it all in a cloud-native platform.

Cato's SASE solution enables remote users, through a client or clientless browser access, to access all business applications via a secure and optimized connection. The Cato Cloud, a global cloud-native service, can scale to accommodate any number of users without dedicated VPN infrastructure. Remote workers (mobile users too!) connect to the nearest Cato PoP – there are more than 60 PoPs worldwide – and their traffic is optimally routed across the Cato global private backbone to on-premises or cloud applications.
Cato’s Security as a Service stack protects remote users against threats and enforces application access control. The Cato SASE platform provides optimized and highly secure remote access management for all remote workers. For more information on how to support your remote workforce, get the free Cato ebook Work From Anywhere for Everyone.        

The Branch of One: Designing Your Network for the WFH Era

For decades, the campus, branch, office, and store formed the business center of the organization. Working from anywhere is challenging that paradigm. Is the home becoming a branch of one, and what does the future hold for the traditional branch, the work home for the many?

Network architects are used to building networking and security capabilities into and around physical locations: branches and datacenters. This location-centric design comes at significant cost and complexity. It requires premium connectivity options such as MPLS, high availability and traffic shaping (QoS) through SD-WAN appliances, and securing Internet traffic with datacenter backhauling, edge firewalls, and security as a service. However, network dynamics have changed with the emergence of cloud computing, public cloud applications, and the mobile workforce. Users and applications migrated away from corporate locations, making infrastructure investments in these locations less applicable and requiring new designs and new capabilities. The recent pandemic accelerated this migration, creating a hybrid work model that requires a fluid transition between the home and the office based on public health constraints.

In their research paper, "2021 Roadmap for SASE Convergence," Gartner analysts Neil Macdonald, Nat Smith, Lawrence Orans, and Joe Skorupa highlight the paradigm shift from an IT architecture focused on the place of work to one that focuses on the person doing the work and the work that needs to be done. In the simplest terms, Gartner views the user as a branch of one, and the branch as merely a collection of users. But, catchy phrases aside, how do you make the transition from a branch-centric to a user-centric architecture?
This is where SASE comes in. It is the SASE architecture that enables this transition, as it is built upon four core principles: convergence, cloud-native, globally distributed, and support for all edges. Let's examine how they enable the migration from branch-centric to user-centric design:

Convergence: To deliver secure and optimized access to users, we need a wide range of networking and security capabilities available to support them, including routing, traffic shaping, resilient connectivity, strong access controls, threat prevention, and data protection. Traditionally these were delivered via multiple products that were difficult to deploy and maintain. Convergence reduces the number of moving parts to, ideally, a single tight package that processes all end-user traffic, regardless of location, according to corporate policies and across all the required capabilities.

Cloud-native: A converged solution can be architected to operate in multiple places. It can reside in the branch or the datacenter, and it can live inside a cloud service. Being cloud-native, that is, "built as a cloud service," places the converged set of capabilities in the middle of all traffic regardless of source or destination. This isn't true for edge appliances that deliver converged capabilities at a given physical location. While this is a workable solution, it cements the location-centric design pitfalls that require traffic to specifically reach a certain location, adding unnecessary latency, instead of delivering the required capabilities as close as possible to the location of the user.

Global: A cloud-native architecture is easier to distribute globally to achieve resiliency and scalability. It places the converged set of capabilities near users (in and out of the office). Cloud service density, that is, the number of Points of Presence (PoPs) comprising the service, determines the latency users will experience accessing their applications.
Using a global cloud service built on PoPs has extensible reach that can address emerging business requirements such as geographical expansion and M&A. The alternative is much more costly and complex, involving setting up new co-locations, standardizing the networking and security stack, and figuring out global connectivity options. All this work is unnecessary when using a SASE cloud service provider.

All edges: By taking a cloud-first approach and allowing locations, users, clouds, and applications to "plug" into the cloud service, optimal and secure access can be delivered to all users, regardless of where they work from, and to any application, regardless of where it is deployed. An architecture that supports all edges is driven by identity and enforces the same policy on all traffic associated with specific users or groups. This is a significant departure from the 5-tuple network design, and it is needed to abstract the user from the location of work to support a hybrid work model.

Gartner couldn't have predicted COVID-19, but the SASE architecture it outlined enables the agility and resiliency that are essential for businesses today. Yes, it involves a re-architecture of how we do networking and security and requires that we rethink many old axioms and even the way we build and organize our networking and security teams. The payback, however, is an infrastructure platform that can truly support the digital business, and whatever comes next.
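The contrast between a 5-tuple rule and an identity-driven policy can be made concrete. The sketch below uses invented rule fields purely for illustration; it shows why a rule keyed to IP addresses breaks the moment a user leaves the branch, while a rule keyed to identity follows the user anywhere:

```python
# Hypothetical sketch contrasting a classic 5-tuple firewall rule with an
# identity-driven rule of the kind a user-centric architecture enforces.

# 5-tuple rule: matches on source/destination IP, ports, and protocol.
# It only grants access while the user sits behind the branch subnet it names.
five_tuple_rule = {
    "src": "10.1.2.0/24",   # branch LAN
    "dst": "10.9.0.5/32",   # ERP server
    "dst_port": 443,
    "protocol": "tcp",
    "action": "allow",
}

# Identity-driven rule: keyed to who the user is and which application they
# want, so the same policy applies at home, in the office, or on the road.
identity_rule = {
    "subject": "group:finance",
    "application": "erp",
    "action": "allow",
}


def match_identity_rule(rule: dict, user_groups: set, application: str) -> bool:
    # The rule matches on identity and application, never on where the
    # user happens to be connecting from.
    group = rule["subject"].split(":", 1)[1]
    return group in user_groups and rule["application"] == application


# A finance user matches whether connecting from 10.1.2.7 or hotel Wi-Fi.
print(match_identity_rule(identity_rule, {"finance"}, "erp"))  # True
```

The 5-tuple rule, by contrast, would have to be rewritten (or duplicated per location) every time the user's network changes, which is exactly the location-centric pitfall the article describes.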

Remote Access Solutions Have Evolved in Stages During the Pandemic: Ten Criteria for a Long-term Solution

When pandemic lockdowns kicked in country by country and hundreds of millions of people were suddenly told to work from home, the world's largest experiment in remote work got underway. Companies have gone through several stages of coping with this massive work from home (WFH) undertaking. From utter chaos at the start of WFH, to a more measured approach, and now to seeking a long-term solution for a remote workforce that may take many months to return to the office—if ever. It turns out that the solution enterprises say they want is found in what SASE already delivers. See the list of enterprise requirements below.

Remote Access, Pre-Pandemic

In the pre-pandemic days, enterprises were very disciplined in their approach to remote access and WFH. VPNs were the primary method of connectivity for road warriors and people who occasionally worked from home or away from the office. It was standard procedure for an enterprise to build VPN capacity for a small percentage of its workforce. That capacity could be shared as workers connected for just a short time to check email or access files. If personally owned devices were allowed on the network at all – i.e., BYOD – the enterprise typically enforced security policies on those devices. Strong policies dictated which devices were permitted and what they could access. Then COVID lockdowns totally upended this disciplined approach to remote access to enterprise resources.

The Early Stage of the Pandemic: Workers Go Home

In February and March of 2020, hundreds of millions of people around the world were told to stay away from their office workplaces. This happened practically overnight, with little time for IT departments to prepare for the massive onslaught of people suddenly working from home and needing continuous remote access to continue with business.
Companies first tried to cope with the existing VPN infrastructure, but that was soon overwhelmed. Workers were encouraged to "just connect any way you can." The carefully scripted remote access and BYOD policies were abandoned in favor of giving people access to the WAN to stay productive. Workers have relied, in many cases, on personally owned laptops and tablets and their consumer-grade home Internet connections. With spouses and students also at home, Internet bandwidth is constrained, resulting in unreliable service and frozen Zoom calls.

As time went by, IT departments scrambled to add new VPN appliances, licenses, and capacity for remote access. In fact, 72% of respondents to the Cato 2021 Networking Survey say their organizations invested in their existing infrastructure to scale VPN resources as a response to remote working. Nevertheless, this is still a short-term solution to the WFH dilemma, as VPNs are an inefficient technology for giving remote access to a very large number of workers — for many reasons. Performance issues abound as traffic is backhauled to a datacenter before going out to cloud-based applications. Security is a concern as well. First of all, enterprises have built a full stack of security solutions in their central and branch offices. This security doesn't extend into workers' homes, which increases risks to the organization. What's more, VPNs provide overly broad access to the entire network without the option of controlling granular user access to specific resources. Finally, the IT department has no visibility into what is happening over these appliances. The user experience suffers when problems occur, and no one knows the root cause.

The Next Stage: Looking for a Long-term Solution for Remote Access

Enterprises have realized that WFH is a workstyle that is here to stay for quite some time.
Seeing the shortcomings of attempting to get by with VPNs, they are now looking for a long-term solution that more closely approximates what workers experience when in their actual offices. As we talk to these organizations, here's what they tell us they need and want for the long haul. A remote access solution that:

- Is quick to deploy, with zero-touch provisioning of end users and their devices, and no additional equipment to purchase or deploy.
- Is scalable to support tens of thousands of employees at once.
- Does not require backhauling remote employees' traffic to a central datacenter.
- Includes a full security stack for every worker, regardless of their work location – in office, at home, on the road – without backhauling traffic to a centralized location.
- Restores discipline over security policies that can be adjusted for various use cases and devices.
- Delivers good network performance that is not subject to problems in the last mile (to the home).
- Provides clear visibility into what is happening on the network.
- Is available from anywhere.
- Has "always on" connectivity for continuous access to the enterprise security stack.
- Ultimately delivers the same experience people get when they are in their offices.
- Provides centralized control of the solution.

A solution that meets all the above criteria is available today from Cato Networks. The free Cato eBook Work From Anywhere for Everyone explains how enterprises can easily deploy a secure, scalable WFH solution based on our cloud-native SASE architecture. Even as employees slowly return to their offices in the coming year, enterprises will still need to offer remote access on a large scale for some time to come. Now is the time to deploy a solution that can keep employees fully productive no matter where they choose to work.

SASE – The Strategic Difference Is in the Middle

SASE (Secure Access Service Edge) is the new, shiny toy of networking and security providers. Defined in 2019 by Gartner, SASE is a new, converged, cloud-native, elastic, and global architecture that will dominate the way enterprises deliver a broad set of networking and security capabilities. Since then, SASE messaging has been adopted by most vendors in the market for an obvious reason: SASE disrupts the legacy IT architecture of edge appliances and multi-vendor point solutions. Vendors that built their business around the distribution, upgrades, and replacement of physical boxes face obsolescence. The same is true for service providers that profited from managing that inherent complexity.

Why was this change of architecture necessary? The complexity of IT infrastructure is increasing exponentially. The ability to control a multi-vendor infrastructure depends on resources, skills, and budgets that can't grow at the rate needed to securely connect the business anytime, anywhere, and on any device. Case in point is the need to support the sudden migration of the entire workforce from branches and offices to work from home. A complex, fragmented, and appliance-centric infrastructure simply can't accommodate this shift – it was never built to support work from anywhere, anytime, and on any device.

We saw a glimpse of that problem over the past decade with the requirement to secure access to cloud resources by mobile users. If all your traffic is secured at your datacenter and branches, how do you inspect mobile-to-cloud traffic? One option was to force that traffic through the company datacenter so it could be protected by the main firewalls. This solution impacts performance and the user experience and is often rejected by the end users. The answer was cloud-based security, which addressed the latency problem but further fragmented and complicated the IT security stack by introducing yet another point solution.
SASE is the new architecture for connecting and securing the digital business, built to be fast, adaptable, and resilient. How does SASE achieve that? By placing the vast majority of enterprise networking and security capabilities in the cloud. The cloud sits in the "middle" of the enterprise – it is an ideal place to scale, expand, and evolve the security and networking capabilities needed by all enterprise resources: people, locations, devices, and applications. By being in the "middle," SASE holistically inspects, optimizes, and protects all traffic from all sources to all destinations.

The "middle" is a scary place for product vendors. It is a cloud service that requires a new set of operational capabilities and know-how to deliver. Amazon Web Services (AWS) compute is fundamentally different from the product that is a Dell server. AWS makes the virtual server you use available, redundant, scalable, and connected with the click of a button. It is by no means someone else's computer. SASE requires vendors to become like AWS. Some will never get there. Some will try to acquire their way into it. Some will prioritize current cloud capabilities over similar appliance-delivered ones. And this process will have to go through a sales and support channel that is even more challenged by the SASE transition. This is going to be messy.

When you look at the SASE field and want to separate true from fake SASE providers, look for the "middle." Ask yourself: Has the SASE provider's cloud service been field tested to deliver the global reach, scalability, and degree of functional convergence needed by enterprises? Does the SASE service provide holistic visibility? The service should offer a single view showing all enterprise traffic flows, regardless of whether they're across the Internet or the WAN, between sites, remote users, or cloud resources. What security and networking capabilities can be applied to that traffic?
Is the service limited to access restrictions, or can it also optimize and accelerate traffic? What degree of centralized management control does the service provide? Is there a single pane of glass where you can set or change all capabilities relating to networking, security, remote access, and the cloud, or must the service provider get involved at some point? If the answers are opaque, you are looking at a SASE wannabe. And unlike other solutions where features can be added to a roadmap, SASE requires the creation of a totally new architecture. To begin to know a true SASE – look at the middle.

WAN Overlay and Underlay Projects: Better Together?

Anyone who is considering SD-WAN for their WAN transformation project must be a bit anxious about the transition of last-mile access to the Internet. Instead of MPLS from a single telco, a whole slew of ISPs provide the Internet underlay in various geographies (Cato created specific content and best practices to help guide customers on this topic). Customers are motivated to migrate away from MPLS due to high cost per bit, long deployment of last-mile connectivity to new sites, slow response to network changes, and lack of innovation in a network and security stack built on third-party products.

SD-WAN projects introduce a mix of Internet underlays to augment or replace MPLS. Local ISPs provide the Internet-based underlays, and customers work with them directly to optimize service, quality, and costs, especially for international locations. At the same time, working with multiple underlay providers is more operationally complex than working with a single telco. We have seen customers respond to this challenge in two ways when launching their SD-WAN project RFPs: combine the underlay and overlay parts of the project into a single RFP, or separate the underlay and overlay into two RFPs.

Combining the Overlay (SD-WAN) and Underlay (Internet Transport) into a Single RFP

In this model, the customer wants a "telco experience" from the new SD-WAN deployment by getting the underlay and the overlay from a single provider. This approach makes sense at first glance: keep one service provider responsible for the procurement, deployment, and management of the network. But keeping things "similar" to the old operating model perpetuates the service quality and operational challenges of the previous network.
Since the only providers capable of delivering both the underlay and the overlay from one source are telcos, bolting a shiny new technology onto the telco service will result in the same sub-par service speed, quality, costs, and innovation. There is a nuance to this story: in some cases, the customer is willing to let the service provider introduce a last-mile broker that will procure and deploy the last mile. I would consider that similar to the approach discussed in the next paragraph.

Separate the Overlay (SD-WAN) and Underlay (Internet Transport) RFPs

In this model, the customer separates the underlay project (last-mile access) from the overlay project (SD-WAN or SASE). The underlay RFP may not be needed if IT already has contractual relationships with global ISPs acting as a backup to the MPLS network. If last-mile provisioning is needed, a bid between brokers, agents, telcos, and other access providers will reduce the cost of the last mile while working with specialists. Last-mile deployment isn't trivial, especially at scale, so working with domain experts makes a lot of sense. The customer can launch the overlay RFP separately (under the assumption that the right mix of underlays is made available) and look into the full range of vendors, technologies, and services that can address SD-WAN, security, remote access, and global cloud connectivity. The expertise and capabilities needed to optimize the overlay are vastly different from those needed for the underlay.

One of our global manufacturing customers did just that. They leveraged the buying power of a group of "sister" companies to create a big underlay project, sent the RFP to last-mile specialists, and got the best price. Then, they turned to choose the best overlay solution independent of the last mile. As the leading network architect told me: "If we issued a single RFP for the project, we would only get responses from legacy telcos. The last mile would determine the winner before we even started.
By separating these RFPs, we could get the most advanced and innovative vendors to bid for the strategic part of our WAN transformation."

The Cato Take

Cato SASE creates a global network overlay that is agnostic to the underlay. As a cloud-based software platform, Cato does not directly procure and deploy last-mile services. Instead, Cato partners with MSPs and last-mile aggregators to help our customers with the procurement of last-mile services. If you want to maximize the impact of your WAN transformation project, put yourself in a position to consider the full range of options. Don't let old WAN designs and business models hold you back.

The Spinoff Network Challenge: Cloning or Rethinking?

In our business, we see a common theme of large enterprises spinning off divisions or business units (BUs). The BUs consist of thousands of employees and numerous locations and applications that require a solid networking and security infrastructure. The CIO of the BU has basically two options: clone the parent infrastructure, or forge her own path and design a new infrastructure from the ground up. What should she do?

Cloning the network: the safe option?

Cloning the parent infrastructure seems like the obvious choice. It has been used for a long time, it generally works, and the IT staff of the BU are familiar with it. However, the current state has its own challenges. If you had to build a new network today, would you choose MPLS as your platform? Many organizations are replacing costly and rigid MPLS networks with SD-WAN to support cloud migration and reduce costs. The same is true for security. Current security architectures are appliance-centric, while the forward-looking security architecture model is cloud-based. Ultimately, cloning the parent network may be the wrong move for a BU that is setting itself up for the future.

Rethinking the network: better TCO and ROI of the new infrastructure

It is rare for CIOs of large enterprises to have the opportunity to start from a clean slate. The spinoff represents such an opportunity. It is time to look 5 or 10 years out and assess the direction of the business and the underlying technology needed to support it. What do we know about the future of the business? We know it is going hybrid in all directions. On the user side, hybrid work is the new normal. Users need to seamlessly transition between office and home and continue to have secure and optimized access everywhere. Applications and data are moving to the cloud, but IT will have to support distributed physical and cloud datacenters, as well as public cloud apps, for a very long time.
Growth will be global, so the business and technology fabric must easily expand to wherever the business needs to go. The ability to adapt to changes, new requirements, new growth, mergers and acquisitions, and unforeseen events like a global pandemic dictates the need for a very agile networking and security infrastructure. Can the current parent infrastructure deliver all these capabilities? Most likely, the answer is no.

The Future of Networking and Security is SASE

If rethinking is what you decided to do, a new framework can come in handy. The Secure Access Service Edge (SASE), a category defined by Gartner in 2019, represents the blueprint of the networking and security architecture of the future. SASE takes into account all the emerging requirements we discussed above: working from anywhere, using applications hosted everywhere, with fully optimized and secure access. SASE is built around a cloud service that is deployed globally and can scale to address a wide range of requirements and use cases for all types of "edges": physical locations, users, devices, cloud resources, and applications. These use cases include improving network capacity, resiliency, and performance; reducing network cost; eliminating security appliance sprawl; optimizing global connectivity; securing remote access; and accelerating access to public cloud applications. While there are many ways to address these use cases via point solutions, SASE's promise is an infrastructure that is flexible, agile, and simple. The convergence of networking and security into a single, coherent cloud-based platform is easy to manage, can adapt to business and technology changes quickly, and is more affordable than a stack of point solutions. Before you clone, rethink.

The Hybrid Workforce: Planning for The New Working Reality Post COVID-19

It may be difficult to remember, but not so long ago we used to work mainly from an office. The unprecedented global pandemic that took the world by storm changed our personal and professional life patterns. We moved to working from home, then returned to the office, and back home, with the ebbs and flows of the pandemic. This work model is here to stay, reflecting a transition into a "Hybrid Workforce."

The transition into a Hybrid Workforce caught many IT teams by surprise. Most organizations were not prepared for prolonged work from home by the vast majority of the workforce. The infrastructure needed to support these remote users, virtual private network (VPN) solutions, was built for the brave few and not for the masses. During the first wave of COVID-19, IT had to throw money and hardware at the problem, stacking up legacy VPN servers all over the world to catch up with demand. This is a key lesson learned from the pandemic: enterprises must support work from anywhere, all the time, by everyone. Now is the time to think more strategically about the Hybrid Workforce and the key requirements to enable it.

Seamless transition between home and office

Today, networking and security infrastructure investments in MPLS, SD-WAN, and NGFW/UTM are focused on the office. These investments do not extend to employees' homes, which means that working from home doesn't have the resiliency, security, and optimization of working from the office. The more "remote" the user is, the more difficult it is to ensure a proper work environment.

Our take: Look for cloud-first architectures, such as Zero Trust Network Access (ZTNA) and the Secure Access Service Edge (SASE), to deliver most networking and security capabilities from the cloud. By decoupling these capabilities from physical appliances hosted in a fixed location and moving them to the cloud, they become available to users everywhere.
This is an opportunity to converge the infrastructure used for office and remote users into a single, coherent platform that enforces a unified policy on all users regardless of their locations.

Scalable and globally distributed remote access

The current appliance-centric VPN infrastructure requires significant investment to scale (for capacity) and distribute globally (near users in all geographical regions). Beyond the initial deployment, ongoing maintenance overloads busy IT teams.

Our take: Look for remote access delivered as a scalable, global cloud service that is proven to serve a significant user base and the applications they require. Consuming remote access as a service will free up IT resources from sizing, deploying, and maintaining the remote access infrastructure required to support a Hybrid Workforce.

Optimization and security for all traffic

Today's remote access infrastructure provides just that: remote access. IT relies on the integration of multiple technologies to optimize and secure remote access traffic. As discussed above, most are not available when working from home.

Our take: Look for remote access solutions that incorporate optimization and protection for all types of traffic, including wide area network (WAN), Internet, and cloud traffic. This is particularly important on the remote-user-to-cloud path, where legacy technology has no visibility or control. By embedding WAN optimization, cloud acceleration, and threat prevention into the remote access platform itself, all traffic, regardless of source and destination, is optimized and protected.

Conclusion

Even if your enterprise IT survived the initial transition to working from home, now is the time to think about creating a networking and security architecture that can natively support the Hybrid Workforce. A global, elastic, and agile infrastructure is key to weathering the next crisis, or whatever comes next.

Why Cato will beat legacy techs in the SASE race

In a recent article, a Fortinet executive said: "It's impossible for a company like a Cato to build all these things out. It's just incredibly hard for a small company." Here is my take.

It is true that Cato's vision is one of the biggest undertakings in IT infrastructure over the past two decades. We set out to create a completely new way of delivering and consuming networking and security services from the cloud. To do that, we built a full stack of routing, optimization, deep packet inspection, and traffic processing engines. We built, from scratch, all these capabilities as an elastic cloud service running on 58 global Points of Presence (PoPs), processing multi-gig traffic streams for hundreds of customers, thousands of locations, and hundreds of thousands of remote users. And we did it in less than 5 years. Gartner says: "While the term originated in 2019, the architecture has been deployed by early adopters as early as 2017." There was only one SASE company in 2017: Cato Networks. Cato is the inspiration for SASE, and the most mature SASE platform in the market today.

Company size comes with age; company DNA is determined at birth. Fortinet is 20 years old; Palo Alto Networks is 15; Check Point is 27; and Cisco is 36. When you think about their appliance roots as well as the companies they acquired over the years, it becomes clear that there is a huge amount of redundancy and waste. Imagine buying another appliance company when you have your own appliances. All the effort that went into creating the appliance, the operating system, the management, the performance optimization – everything that isn't the actual value-added functionality – all that effort is wasted. And then you must integrate it all. The same is true when you think about new product releases: How much net new functionality is broadly used? Many new features are needed by only a few large customers. Huge efforts go into patching bugs.
And, with appliances, everything takes forever – a typical release cycle of new appliance software can take a year, which then generates a wave of bug fixes that slows innovation to a crawl.

Cato leverages a “build once, use for everything” platform. When we built a multi-gig packet processing engine, we could immediately deploy it for routing, decryption, intrusion prevention, anti-malware, and URL filtering. This engine looks at every packet and implements single-pass processing of that packet for multiple networking and security objectives. We don’t have multiple appliances or code bases; we have a single code base on a single CPU that processes the stream coming from any source: branch, user, cloud. Cato doesn’t have to develop, re-develop, acquire, rationalize, integrate, package, and deliver multiple products that will never work as “one.” If Cato wants to process the stream for new additional capabilities such as data security, the effort will be about 10% of what a new company in data security would need to invest to deliver that functionality. This is because all the engines, infrastructure, and operational know-how are already built and tested.

We also have the benefit of hindsight. If 80% of the functionality that is built into products is never broadly adopted, we can work with our customers to deliver the exact capabilities they need, when they need them. After all, SASE isn’t about totally new capabilities, but the delivery of existing capabilities via the cloud. Using an agile DevOps process, we can build these capabilities at high velocity, deploy them into the cloud, and immediately get feedback on how they are used and how they should evolve. No appliance company can match that. If you have the right architecture, building these incremental capabilities simply isn’t the “impossible challenge” an appliance-centric company would make you believe it is.
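The single-pass idea described above can be illustrated with a minimal sketch. This is not Cato's actual engine; the engine names and verdict format are invented for the example. The point is structural: the packet is parsed once, and every inspection function consumes that same parsed object, instead of each appliance re-parsing the stream.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes
    verdicts: list = field(default_factory=list)

# Hypothetical engines for illustration; each reads the same parsed packet.
def route(pkt):
    pkt.verdicts.append(("route", "pop-1"))

def ips(pkt):
    pkt.verdicts.append(("ips", b"exploit" not in pkt.payload))

def anti_malware(pkt):
    pkt.verdicts.append(("anti_malware", b"malware" not in pkt.payload))

def url_filter(pkt):
    pkt.verdicts.append(("url_filter", not pkt.dst.endswith(".blocked")))

ENGINES = [route, ips, anti_malware, url_filter]

def single_pass(pkt):
    """One pass over the packet; every engine shares the parse work."""
    for engine in ENGINES:
        engine(pkt)
    # Packet is allowed only if every boolean security verdict passed.
    return all(v for _, v in pkt.verdicts if isinstance(v, bool))
```

Adding a new capability in this model means appending one more function to `ENGINES`, which is the intuition behind the "about 10% of the effort" claim.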
In fact, the appliance baggage and heaps of dated technologies from acquisitions prevent these large companies from delivering a SASE platform in time, if ever. Stay tuned, as Cato continues to roll out new SASE capabilities at cloud speed, making them available at the click of a button.

Alternatives to VPN for Remote Access

Work from anywhere has recently become a hot topic. The coronavirus outbreak has forced many organizations to move some or all of their employees to work from home. In some cases, work from home was a way to reduce possible exposure; in others, it was mandated by health authorities to prevent the spread of the disease across communities. This unforeseen set of events caught many organizations off guard. Historically, only a subset of the workforce required remote access, including executives, field sales, field service, and other knowledge workers. Now, enterprises need to maintain business continuity by enabling the entire workforce to work remotely.

The most common enterprise remote access technology is Virtual Private Networking (VPN). How does it work? A VPN client is installed on the users’ devices – laptops, smartphones, tablets – to connect over the Internet to a server in the headquarters. Once connected to the server, users gain access to the corporate network and from there to the applications they need for their work. The obvious choice for enterprises to address the work-from-anywhere requirement was to extend their VPN technology to all users. However, VPNs were built to enable short-duration connectivity for a small subset of the users – for example, a salesperson looking to update the CRM system at the end of the day on the road. VPNs may not be the right choice to support continuous remote access for all employees.

VPN Alternatives for Business are Here

VPN technology has many shortcomings. The most relevant ones for large-scale remote access deployments are scalability, availability, and performance. VPN was never meant to scale to continuously connect an entire organization to critical applications. Under a broad work-from-anywhere scenario, VPN servers will come under extreme load that will impact response time and user productivity.
To avert this problem, additional VPN servers, or VPN concentrators, would have to be deployed in different geographical regions. Next, each component in the VPN architecture has to be configured for high availability. This increases cost and complexity. The project itself is non-trivial and may take a while to deploy, especially in affected regions. Finally, VPN uses the unpredictable public Internet, which isn’t optimized for global access. This is in contrast to the benefits of premium connectivity, such as MPLS or SD-WAN, available in corporate offices.

VPN Alternatives for Remote Access

In mid-2019, Gartner introduced a new cloud-native architectural framework to deliver secure global connectivity to all locations and users. It was named the Secure Access Service Edge (or SASE). Because SASE is built as the core network and security infrastructure of the business, and not just as a remote access solution, it offers unprecedented levels of scalability, availability, and performance to all enterprise resources.

What makes SASE an ideal alternative to VPN technology? In short, SASE offers the scalable access, optimized connectivity, and integrated threat prevention needed to support continuous large-scale remote access. First, the SASE service seamlessly scales to support any number of end users globally. There is no need to set up regional hubs or VPN concentrators. The SASE service is built on top of dozens of globally distributed Points of Presence (PoPs) to deliver a wide range of security and networking services, including remote access, close to all locations and users. Second, availability is inherently designed into the SASE service. Each resource – a location, a user, or a cloud – establishes a tunnel to the nearest SASE PoP. Each PoP is built from multiple redundant compute nodes for local resiliency, and multiple regional PoPs dynamically back up one another.
The SASE tunnel management system automatically seeks an available PoP to deliver continuous service, so the customer doesn’t have to worry about high-availability design and redundancy planning. Third, SASE PoPs are interconnected with a private backbone and closely peer with cloud providers to ensure optimal routing from each edge to each application. This is in contrast with the use of the public Internet to connect users to the corporate network. Lastly, since all traffic passes through a full network security stack built into the SASE service, multi-factor authentication, full access control, and threat prevention are applied. Because the SASE service is globally distributed, SASE avoids the trombone effect associated with forcing traffic to specific security choke points on the network. All processing is done within the PoP closest to the users while enforcing all corporate network and security policies.

Related content: Read our blog on moving beyond remote access VPNs.

Enterprise VPN Alternatives Ready for your Business

If you are looking to quickly deploy a modern VPN alternative in your business, consider a SASE service. Cato was designed from the ground up as a SASE service that is now used by hundreds of organizations to support thousands of locations and tens of thousands of mobile users. Cato is built to provide the scalability, availability, performance, and security you need for everyone at every location. Furthermore, Cato’s cloud-native and software-centric architecture enables you to connect your cloud and on-premises datacenters to Cato in a matter of minutes and offers self-service client provisioning for your employees on any device. If you want to learn more about the ways Cato can support your remote access requirements, please contact us here.
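The "tunnel to the nearest available PoP, with automatic failover" behavior described above can be sketched in a few lines. The PoP names, latencies, and health flags below are invented for illustration; a real tunnel manager would measure these continuously rather than read them from a static table.

```python
# Illustrative nearest-available-PoP selection with automatic failover.
POPS = [
    {"name": "frankfurt", "latency_ms": 12, "up": False},  # assume down for maintenance
    {"name": "amsterdam", "latency_ms": 19, "up": True},
    {"name": "london",    "latency_ms": 27, "up": True},
]

def pick_pop(pops):
    """Tunnel to the lowest-latency PoP that is currently reachable."""
    available = [p for p in pops if p["up"]]
    if not available:
        raise RuntimeError("no PoP reachable")
    return min(available, key=lambda p: p["latency_ms"])["name"]
```

With Frankfurt down, the edge transparently lands on Amsterdam; when Frankfurt recovers, the same logic brings the tunnel back to the lowest-latency PoP, which is the sense in which high-availability design moves from the customer to the service.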

Network Security and Direct Internet Access: The Foundation of MPLS WAN Transformation

In a recent webinar we conducted at Cato, we asked the audience a poll question: “What is the primary driver for your SD-WAN project?” We were a bit surprised to find out that “secure, direct Internet access” was the top driver. We expected other drivers, such as MPLS cost reduction, eliminating bandwidth constraints, or optimizing cloud access, to be at the top of the list. Why is security such a big deal with SD-WAN? Because SD-WAN is a “code name” for “MPLS WAN transformation project.” MPLS WANs were never designed with security, and specifically threat protection, as a core feature. SD-WAN is forcing network architects to rethink their network security design in the post-MPLS era.

The Internet Challenge for MPLS and Hybrid WAN

Traditionally, MPLS was considered a private network. While MPLS traffic did cross a network service provider’s network to the corporate datacenter, traffic didn’t go over the public Internet. This had two ramifications: many enterprises did not encrypt their MPLS connections, nor did they inspect MPLS traffic for threats. If Internet access was required, branch traffic was backhauled to a central Internet exit point where a full network security stack inspected the traffic for malicious software, phishing sites, and other threats. This network security stack was not distributed to the branches, but rather placed in a datacenter or a regional hub. For quite a while, backhauling Internet traffic over MPLS made sense, as most applications resided in the datacenter sitting on the MPLS network. But, as Internet traffic grew with the adoption of cloud services, direct Internet access at the branch became a priority for obvious reasons: Internet traffic could be offloaded from expensive MPLS to allow more bandwidth for datacenter applications, reducing the pressure for costly upgrades.
And, backhaul elimination reduced the latency added in access to the Internet, the so-called “trombone effect.” However, introducing Internet links as a transport alongside MPLS, what’s called “a hybrid WAN,” broke the closed MPLS architecture that relied on a single managed network for all traffic. This exposed a big hole in the legacy MPLS design: security. If users can access cloud applications from the branch, they are exposed to Internet-borne threats. Simply put, to realize the benefits of direct Internet access at the branch, network security had to come down to the branch itself.

Secure, Direct Internet Access: SD-WAN’s Self-Made Headache?

Basic SD-WAN solutions address an enterprise’s routing needs. They can dynamically route datacenter and Internet-bound traffic to different transports. They can manage failover scenarios caused by blackouts and brownouts, and they can hand off the traffic to a security solution like a branch firewall or UTM. But is the cure worse than the disease? MPLS WANs avoided the appliance sprawl challenge because all Internet traffic was backhauled to one secure exit point. Managing numerous new appliances isn’t a recipe for simplicity and cost reduction. It’s a recipe for a massive headache. Cloud-based security solutions help avoid appliance sprawl but add new management consoles to administer, and more costs. In a nutshell, for your SD-WAN project to realize its full potential, complete, simple, and affordable network security is needed everywhere. A solution that doesn’t burden limited IT resources could be the difference between instant ROI for SD-WAN and much ado about, almost, nothing. This is why security is the top driver for what looks like a networking project.

Cato Cloud: Security Built Into SD-WAN

All SD-WAN players are late to catch up, and have hastily put together a marketing security story, mostly through partnerships with network security vendors.
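The blackout/brownout failover behavior mentioned above can be sketched as a simple policy: prefer links that meet loss and latency thresholds (a brownout is a link that is up but degraded), and fall back to any surviving link in a blackout. All link names, measurements, and thresholds here are illustrative assumptions, not measurements from any product.

```python
# Sketch of brownout-aware link selection for a hybrid WAN edge.
LINKS = [
    {"name": "mpls",     "loss_pct": 0.1, "latency_ms": 40, "up": True},
    {"name": "internet", "loss_pct": 2.5, "latency_ms": 25, "up": True},  # brownout: lossy
    {"name": "lte",      "loss_pct": 0.5, "latency_ms": 60, "up": True},
]

def select_link(links, max_loss=1.0, max_latency=80):
    """Pick the lowest-latency link that meets quality thresholds."""
    healthy = [l for l in links
               if l["up"] and l["loss_pct"] <= max_loss and l["latency_ms"] <= max_latency]
    if not healthy:
        # Blackout of all healthy paths: fail over to anything still up.
        healthy = [l for l in links if l["up"]]
    return min(healthy, key=lambda l: l["latency_ms"])["name"]
```

Here the Internet link is fastest but too lossy, so traffic stays on MPLS; this is the routing half of SD-WAN, and the article's point is that it says nothing about inspecting the traffic that now exits directly at the branch.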
Cato has always viewed networking and security as two sides of the same coin. This is why Cato is the only SD-WAN provider that built cloud-based network security directly into its global SD-WAN architecture. And our dedicated research staff is rolling out threat hunting and protection capabilities that require no grunt work from our customers, and that are typically accessible only to very large enterprises with ample staff. Cato’s approach of converging networking and security is simple, powerful, and affordable.

SD-WAN and Security: The Architecture is All that Matters

For the past two years, Cato Networks has led a revolution in enterprise networking: the convergence of software-defined wide area networks (SD-WAN) and network security delivered as a single cloud service. For decades, networking and security evolved as silos, creating separate point products in each category. Convergence is the antithesis of bundling point solutions. It means the architectural integration of discrete components, by design, into a single, vertically integrated solution. Cato was the first company that decided to tackle the convergence of networking and security. We built our cloud service to address the connectivity and security needs of the modern enterprise from the ground up. Cato Cloud delivers affordable and optimized global SD-WAN with a built-in multi-layer network security stack for all locations, cloud resources, and mobile users.

Security was never a strength of SD-WAN companies and legacy telcos. SD-WAN isn’t built to improve security, but rather to address the rigidity, capacity constraints, and high costs of MPLS. Security in the context of SD-WAN was needed only to encrypt the SD-WAN overlay tunnels over the Internet. This narrow security focus provided no protection against Internet-borne threats such as phishing, malicious websites, and weaponized e-mail attachments. When network security could no longer be ignored, SD-WAN companies partnered with network security vendors to create a “bundle” of non-integrated products the customer had to buy, deploy, and maintain. In essence, what IT did before SD-WAN, namely deploy networking and security in silos, was reintroduced as a “partner offering.” Early announcements from Velocloud and, more recently, from Aryaka tell the same story. Cato’s founders decided to go beyond marketing and “bundles” and literally break the networking and security silos. They had the vision and the track record.
Our CEO, Shlomo Kramer, created the first commercial firewall as the co-founder of Check Point Software, created the first Web Application Firewall at Imperva, and was a founding investor in Palo Alto Networks, which built the first next-generation firewall. You can read his 25-year perspective on the evolution of network security that led to the formation of Cato Networks and its unique architecture. Our CTO, Gur Shatz, created one of the leading cloud networks at Incapsula, specifically designed for DDoS protection. Shlomo and Gur brought to Cato the industry, product, and market perspective to disrupt the networking and security product categories. What is the value of converged networking and security? Why did Cato decide to do it in the cloud instead of creating yet another appliance? Below are the key design principles of Cato and how they create value for enterprises versus SD-WAN point solutions and security bundles.

Software and cloud must form the core of the network

We live in a world of appliances — routers, SD-WAN, WAN optimization, and next-generation firewalls, to name a few. Each appliance has its own internal code, detailed configuration, capacity specification, and failover/failback setup. It creates a lot of work in sizing, deployment, configuration, patching, upgrading, and retiring. All this work, times the number of appliances, just to keep the lights on. The appliance is one of the main reasons our networking and security architecture is so complex. In order to break the cost and complexity paradigm of enterprise networking, Cato uses software and cloud services that are inherently elastic, redundant, and scalable. Cato removes the appliance form factor as the key building block of the network — all routing, optimization, and security is delivered as pure software running on commodity servers in a distributed cloud network. No appliances, no grunt work for the customers, and no costly managed services.
This is a fundamental architectural decision that stands in contrast to the rest of the SD-WAN field.

Full network security everywhere

Because Cato has converged its networking and security stack, it is available in all of our PoPs around the world. This eliminates the need for customers to create regional hubs or deploy dedicated solutions to optimize and secure cloud resources. Cato’s security stack currently features a next-generation firewall with application control, URL filtering, anti-malware, and IPS as a service. Cato inspects all traffic to stop malicious command-and-control (C&C) communications, cross-site threat propagation, drive-by downloads, and phishing attacks. A team of dedicated experts analyzes vulnerabilities, applying unique detection algorithms that leverage our broad visibility into all network traffic and security events. Cato fits well into a defense-in-depth model that applies protection at different stages of the attack lifecycle, including Internet/WAN, LAN, and endpoint. There is no need to cram multiple vendors into the same layer. Gartner recommends enterprises don’t mix multiple firewall brands (“One Brand of Firewall Is a Best Practice for Most Enterprises” [subscription required]), and most IT organizations standardize on one network security stack.

All traffic has to be controlled end-to-end

The separation of WAN and Internet traffic is driven by the legacy MPLS-centric design. Aryaka, for example, is focused on optimizing WAN traffic and recently bolted on security for “non-critical” Internet traffic. The optimized traffic isn’t secured, and the Internet traffic isn’t optimized. But what if you want to optimize access to a cloud service like Box.com? In that case, security isn’t applied, and the customer can be compromised by a malicious file in a Box.com directory. Cato holistically optimizes and secures all traffic, because Cato sends all traffic from the edge to be secured and optimized in the cloud network.
This is the difference between secure-by-design and secure-by-duct-tape.

Full control of the service code

Cato owns its security services code and does not resell third-party solutions. This has several key implications for enterprises:

Reduced exposure: In case of a vulnerability or a bug, Cato can resolve the issue in a matter of hours. With legacy SD-WAN, the customer must wait for a third-party fix to be provided.

Accelerated roadmap: Cato can rapidly evolve its code base, driven by customer feature requests, and make all enhancements immediately available to all customers through the cloud service. Cato can also see how customers use a feature, and enhance the areas that have the most value to customers.

Lower costs: Cato doesn’t have to pay licensing fees to third-party solution providers, passing these cost efficiencies on to customers. Bundled offerings require paying all parties that participate in the bundle.

Seamless scaling in the cloud

Scaling is one of the biggest challenges of security technologies. Deep packet inspection for threat protection, coupled with inline SSL decryption, places a significant load on edge devices. In contrast, networking devices don’t have the same scaling issues, because network-related packet processing is much lighter. Scaling is one of the reasons why networking and security remained largely independent over the years. Cato has addressed this issue by moving the security processing and global routing to the cloud. The only edge function is managing last-mile optimization. In this way, customers are freed from capacity planning, sizing, upgrading, and repairing of edge devices just because traffic volume or traffic mix has changed and the edge security device can’t keep up.

Single management across all services

Cato provides a single pane of glass to manage all aspects of the network and the security: analytics, policy configuration, incident review, and troubleshooting.
Bundling multiple products means customers must use multiple management interfaces, increasing the potential for misconfigurations and poor security posture.

Self-service and co-managed network management model

To compensate for the complexity of the bundles, carriers often provide a managed service that blocks enterprises from making even the smallest changes to the network. The infamous “ticketing systems” and “read-only portals” mean that every request takes a long time to complete. Because Cato is converged and focused on simplicity, our management application supports self-service or co-managed models. Enterprises can choose to manage their own network policies while Cato maintains the underlying infrastructure, or be supported by Cato’s partners for 24x7 monitoring.

The bottom line

SD-WAN companies and legacy telcos were forced to consider security as part of their SD-WAN offerings. However, bolting security onto SD-WAN means that they are unable to use WAN modernization and transformation projects to streamline network security. Cato Networks has architecturally solved the challenge of optimizing and securing enterprise WAN and Internet traffic for branches, datacenters, cloud resources, and mobile users — globally. Enterprises can dramatically cut the costs and complexity of running their network and security infrastructure by using Cato’s converged cloud platform. This is the future of SD-WAN, and it is available. Today.

The Cato “Why”: Make IT easy

Why do we do what we do? In a 2009 iconic TED talk, Simon Sinek explained that most people know what they do, some know how they do it, but it is why they do it that actually matters. What is the belief, the cause, the passion that drives them? It is “the why” that separates great leaders and businesses from mediocre ones. As we come close to our third birthday, and as one of the first employees, thinking about “Why Cato?” also has a personal meaning, because “the why” question applies to every one of us in everything we do and keeps us engaged and excited.

Why does Cato exist? Cato was built to make the IT team’s job easier. Cato believes that chaos and complexity in the IT department are growing exponentially. IT teams are in constant firefighting mode. Even a job well done, the hard way, leaves critical work undone. Responding to and protecting the business is becoming more difficult despite the clear evidence that both are essential to surviving in today’s digital economy. And yet, no vendor takes on the task of addressing this systemic problem. As an industry, we are focused on the opposite: adding more layers, more products, and more complexity, leaving our customers to clean up the mess.

Why did I join Cato? Because I believe what Cato believes. I have been in the industry for more than 30 years, half of them on the customer side of IT. I worked for many companies that tackled different parts of the cost-complexity-value equation. Some produced powerful products that were difficult to deploy and maintain. Other companies cut certain costs through automation in one discipline, but expanded the scope IT had to manage overall. Nothing was easy. There is a point where you must cut through the “Gordian Knot” — but very few dare to attempt that. Cato dares. Cato has a transformative vision of IT that will take your breath away.
Breaking down decades-old silos, converging disparate IT disciplines, and applying innovation everywhere to eradicate complexity and costs. We don’t play nice — if a point solution or even a whole category can be eliminated or collapsed to make IT easy, it is gone, gradually and seamlessly. When customers sit through our basic solution presentation, they really have only one question: “Are you for real?” No one asks for ROI or TCO analysis because, as Simon Sinek noted, “It just feels right.”

When we started Cato, we knew what we wanted to achieve and the extent of our vision. Some skeptics called it “ambitious.” We didn’t know how many customers, partners, and employees would join us in this journey to make IT easy. We built a world-class engineering team that redefines what is possible with cloud, software, global networking, and human ingenuity. We formed a partner community that is challenging the industry’s known evils that brought us here. And we signed up hundreds of customers — small, medium, and large — that said: “enough is enough.” We are entering 2018 growing as fast as possible. We can now answer the question: “Yes, we are real. Help is on the way.” Stay tuned.

Why NFV is Long on Hype, Short on Value

Network Function Virtualization (NFV) is an emerging platform to deliver network and security functions as a managed service. Network service providers (NSPs) have been piloting, and in some cases offering, NFV solutions to enterprises. At the core of NFV is the notion that network functions, such as SD-WAN, firewalling, and secure web access, can be delivered as virtual appliances and run either on premises (vCPE) or at the carrier core datacenters (hosted). NFV is a huge undertaking for NSPs, involving many moving parts that are partly outside their control. The ramification is that both the NSP and the enterprise will realize only minimal cost and operational benefits. Despite the hype, NFV may not be worth deploying.

The Four Architectural Challenges of NFV

NFV includes two key elements: an orchestration framework that manages the deployment of specific network functions for each customer into the desired deployment model, vCPE or hosted; and the Virtual Network Functions (VNFs), which are third-party virtual appliances deployed into the vCPE. This architecture has several key challenges.

1. VNF resource utilization is heavily dependent on customer context. For example, the volume of encrypted traffic traversing a firewall VNF can dramatically increase its resource consumption, because traffic must be decrypted to enable the deep packet inspection required for IPS or anti-malware protection.

2. Running VNFs at the carrier datacenters requires a scalable and elastic underlying infrastructure. As the load on VNFs increases, extra resources need to be allocated dynamically. Otherwise, carriers risk impacting the other VNFs sharing the host. To avoid this problem, carriers have a few choices: They can underutilize their hardware, which defeats a major benefit of virtualization. They can try to migrate the VNF in real time to a different host, but moving an instance with live traffic is a complex and risky process.
They can associate every VNF with a specific hardware platform, such as a blade with dedicated CPU cores, memory, and disk capacity. With this approach, cross-VNF impact is reduced, but once again the main benefit of virtualization, maximizing usage and reducing the cost of hardware infrastructure, is lost. This in turn impacts the enterprise, whether through increased price or reduced service quality.

3. Running VNFs on a physical CPE creates a risk of cross-VNF processing and memory degradation of the underlying appliance. Some VNFs, such as routers and SD-WAN, consume relatively few resources. Others, such as URL filtering, anti-malware, or IPS, are very sensitive to the traffic mix and will require more (or fewer) resources as traffic changes. Sizing CPEs is not a trivial matter, and forced upgrades will become routine.

4. NFV management is per vendor, leaving it complex and fragmented. While the NSPs can “launch” a VNF, the VNF software and configuration live inside a “black box.” The NSPs have limited ability to control third-party VNFs. VNF vendors maintain their own management consoles and APIs, making centralized management cumbersome for both customers and service providers. There are several reasons to be skeptical that a multi-vendor orchestration and management standard will materialize. For VNF vendors, customer retention relies on the stickiness and vendor-specific knowledge of their management consoles; a unified multi-vendor orchestration and management platform runs counter to their interest. For NSPs, proprietary per-VNF management undermines their managed services capabilities: it will be difficult to offer a managed service for every VNF if each VNF requires its own proprietary management.

Beyond NFV: Network Function Cloudification

Despite the industry hype, NFV will largely look like the managed or hosted firewalls of the past, with some incremental benefits from using virtual instead of physical appliances.
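The sizing problem behind challenges 1-3 can be made concrete with a back-of-envelope model. The cost factor below is an illustrative assumption, not a vendor benchmark: the point is only that if decrypting TLS for deep packet inspection multiplies per-gigabit CPU cost, the same appliance's effective capacity shrinks as the encrypted share of traffic grows, which is exactly what makes static CPE sizing fragile.

```python
# Back-of-envelope VNF capacity model (ratios are illustrative assumptions).
def effective_capacity_gbps(raw_gbps, encrypted_share, tls_cost_factor=3.0):
    """Throughput left after paying the decryption tax on encrypted traffic.

    raw_gbps        -- nominal capacity on fully cleartext traffic
    encrypted_share -- fraction of traffic (0..1) that must be decrypted
    tls_cost_factor -- assumed CPU multiplier for decrypt-and-inspect
    """
    cpu_per_gbps = (1 - encrypted_share) + encrypted_share * tls_cost_factor
    return raw_gbps / cpu_per_gbps
```

Under these assumptions, a box sized for 10 Gbps of cleartext traffic handles only 5 Gbps once half the traffic is encrypted; a traffic-mix shift, not a traffic-volume growth, forces the upgrade.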
Customers will end up paying for all the appliance licenses they use, and they will still need to size their environment so they don’t over- or under-budget for their planned traffic growth. From a managed service perspective, offering to support every single VNF vendor’s proprietary management is an operational nightmare and a costly endeavor. Ultimately, if NFV does not allow NSPs to reduce their infrastructure, management, and licensing costs, customers will not improve their total cost of ownership (TCO), and adoption will be slow.

Network Function Cloudification (NFCL) breaks the appliance-centric architecture of NFV. Unlike VNFs, Network Cloud Functions (NCFs) are natively built for cloud delivery. Instead of separate “black box” VNF appliances, functions are decomposed into discrete services, the NCFs. These may be any network function, such as SD-WAN, firewalls, IPS/IDS, secure web gateways, and routers. Regardless, they can then be deployed anywhere, scaled vertically by adding commodity servers, or horizontally through additional PoPs. Wherever PoPs are deployed, the full range of NCFs is available, and load is continuously and dynamically distributed inside and across PoPs. Unlike VNFs, no specific NCF instance is “earmarked” for a specific customer, creating incredible agility and adaptability to dynamic workloads. NCFs are configurable for each customer, either self-service or as a managed service, through a single cloud-based console.

NCFs promise simplification, speed, and cost reduction. In some cases, these benefits come at the cost of reduced vendor choice. It’s for the enterprise to decide if the benefits of NCFs are greater than the cost, complexity, and skills needed to sustain NFV-based or on-premises networking and security infrastructure.
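The "no instance is earmarked for a customer" contrast can be sketched as a scheduling rule. The PoP names, node names, and the spill threshold below are invented for the example; the structural point is that because any node can run any NCF, a flow prefers the least-loaded local node and spills to another PoP when local capacity is hot, instead of queuing on one customer's dedicated VNF.

```python
# Sketch of dynamic NCF load distribution inside and across PoPs.
POPS = {
    "frankfurt": [{"node": "fra-1", "load": 0.85}, {"node": "fra-2", "load": 0.30}],
    "london":    [{"node": "lon-1", "load": 0.95}],
}

def assign_node(pop, pops=POPS, spill_threshold=0.80):
    """Prefer the least-loaded local node; spill to another PoP if all are hot."""
    local = min(pops[pop], key=lambda n: n["load"])
    if local["load"] < spill_threshold:
        return local["node"]
    # Every local node is hot: pick the least-loaded node anywhere else.
    remote = [n for name, nodes in pops.items() if name != pop for n in nodes]
    return min(remote, key=lambda n: n["load"])["node"]
```

A flow arriving at London, whose only node is saturated, is served from Frankfurt's spare capacity; in the VNF model that capacity would be stranded in someone else's earmarked instance.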

Firewall as a Service Comes of Age

In a 2016 Hype Cycle for Infrastructure Protection report, Gartner Analyst Jeremy D’Hoinne initiated the emerging category of Firewall as a Service (FWaaS). FWaaS is a cloud-based architecture that eliminates the need to deploy firewalls on-premises, mainly in remote branches, in order to provide site-to-site connectivity and secure Internet access. Cato Networks is a pioneer...
Firewall as a Service Comes of Age

In a 2016 Hype Cycle for Infrastructure Protection report, Gartner analyst Jeremy D'Hoinne initiated the emerging category of Firewall as a Service (FWaaS). FWaaS is a cloud-based architecture that eliminates the need to deploy firewalls on-premises, mainly in remote branches, in order to provide site-to-site connectivity and secure Internet access.

Cato Networks is a pioneer of a new architecture that provides FWaaS as part of a broader WAN transformation platform. The Cato Cloud converges WAN optimization, SD-WAN, a global SLA-backed backbone, and network security (FWaaS) into a single cloud service. The convergence of the security and networking domains accelerates the hard and soft benefits organizations can extract from their WAN transformation through gradual deployment. These include: MPLS cost reduction through augmentation and replacement, improved global latency and network performance, a reduced branch appliance footprint, and the extension of the WAN to cloud datacenters and mobile users.

Palo Alto Networks has recently announced a new FWaaS offering: GlobalProtect Cloud Service. This is the first time an established firewall vendor that built its business on selling appliances is offering its core platform as a cloud service. It is a significant validation of the evolution network security must and will take. As Palo Alto notes in its product announcement, a FWaaS solution alleviates the cost, complexity, and risk associated with deploying and maintaining appliances.

Details on the underlying architecture of the new offering are scarce. Simply sticking appliances into a "cloud" isn't sufficient to deliver a FWaaS that is affordable and scalable. Using appliances in the cloud shifts the burden from the customer to the cloud provider, and the customer will ultimately have to pay the price for that overhead.
Furthermore, the single-tenant design of network security appliances makes it difficult to support a large number of tenants in a scalable way.

This is why Cato chose to develop its converged cloud service from scratch. We do not use third-party appliances in our service: no firewalls, no routers, and no WAN optimizers. We have built a completely new software stack that is designed for the cloud: multi-tenant, globally distributed, and with redundancy and scalability built in by design. As noted earlier, we view Firewall as a Service as a pillar of a broader platform that simplifies and streamlines IT by eliminating multiple point solutions and service providers. Palo Alto Networks uses a firewall in the cloud, and customers must still procure reliable global WAN connectivity. Ultimately, the primary use case for Palo Alto's new service is a secure web gateway, not a full-blown replacement of edge firewalls where both WAN security and connectivity are required.

Overall, Cato is thrilled to see the industry following the path we blazed toward the cloudification of both networking and security functions. The race to maximize customer value delivery is on.
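To illustrate why multi-tenancy matters, here is a toy sketch of a shared policy engine evaluating per-tenant rule sets in a single process. A single-tenant appliance would need a full instance per customer instead. The rule format and tenant names are invented for illustration and are not Cato's actual design.

```python
# Toy multi-tenant policy engine: one shared evaluator, per-tenant rules.
# Tenant names and the (action, app-pattern) rule format are hypothetical.

POLICIES = {
    "tenant-a": [("deny", "ftp"), ("allow", "*")],
    "tenant-b": [("allow", "*")],
}

def evaluate(tenant, app):
    """Return the first matching action in the tenant's rule set."""
    for action, pattern in POLICIES[tenant]:
        if pattern == "*" or pattern == app:
            return action
    return "deny"  # default-deny if no rule matches

print(evaluate("tenant-a", "ftp"))   # deny
print(evaluate("tenant-a", "http"))  # allow
print(evaluate("tenant-b", "ftp"))   # allow
```

The point of the sketch is that onboarding a new tenant is a data change (a new entry in the policy table), not a new appliance to rack, license, and patch.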

A Guide to WAN Architecture & Design

We, at Cato Networks, are excited to sponsor the 2017 Guide to WAN Architecture & Design. The wide area network (WAN) is a critical and fundamental resource for any business. As we will discuss in this guide, the WAN is evolving, so the architecture must evolve, as well. The new architecture should address the future...
A Guide to WAN Architecture & Design

We, at Cato Networks, are excited to sponsor the 2017 Guide to WAN Architecture & Design. The wide area network (WAN) is a critical and fundamental resource for any business. As we will discuss in this guide, the WAN is evolving, so the architecture must evolve as well. The new architecture should address the future needs of businesses and support a new set of requirements, such as:

- Maintaining high security standards over the WAN
- More capacity for lower costs
- Application prioritization
- Cloud access

The WAN is the heart of any enterprise, as it connects all business resources. Building an elastic and scalable WAN, with the ability to control and secure every aspect of it, is what differentiates a traditional and cumbersome WAN from a fast and agile one, and that difference can heavily impact the company's business and ability to grow. The report points out important takeaways for companies before rolling out new WAN architectures.

Reduce cost while boosting capacity

Many companies depend on expensive and limited MPLS-based WANs for remote branch connectivity. Traditionally, the primary destination of business traffic was the company datacenter, so backhauling traffic over high-quality MPLS links was essential for consistency and availability. But today, with more and more business traffic going to cloud applications, backhauling internet traffic from remote offices to the datacenter makes no sense, and the high costs can't be justified. The evolution of the internet and the dramatic improvement in capacity and availability allow organizations to use internet links as a key WAN channel. Offloading traffic, especially internet-bound traffic, from expensive MPLS links to the internet (or, in some scenarios, eliminating MPLS completely by using dual internet links and/or wireless backup) allows companies to gain more capacity at a lower cost.
Increase WAN security

As we noted earlier, traditional WAN architecture backhauled Internet traffic to a central breakout. Using a firewall in the datacenter was simpler to manage and produced good visibility. However, with the shape of business traffic constantly changing, backhauling increases the latency of cloud-based applications and negatively impacts the end-user experience. A better approach is a new WAN architecture that enables direct internet access from all branch offices and secures it locally.

Prioritize critical application traffic such as voice and video

Every company has mission-critical applications the business relies on. The quality of the WAN links (availability, utilization, latency, packet loss, jitter) heavily impacts the performance of those applications. Companies should deploy technologies that can classify and dynamically allocate traffic in real time, based on business policies and link quality, to ensure application performance. Demanding applications can be directed to the higher-quality links, while less sensitive applications can utilize the lower-quality links.

Provide access to cloud services

Moving business applications to cloud services reduces operational costs and provisioning time. Many companies have already started to move large parts of their business applications to the cloud, so the question and challenge is: how will they secure and monitor all these cloud services? Relying on point solutions complicates the network, doesn't scale, and can cause technical and security issues. A better alternative, and a good practice for companies, is to look for technologies that unify the security tools, the management, and the events for their environments, both on-premises and in the cloud.

Cato Networks provides a unique alternative to the traditional WAN. It converges SD-WAN and adds security, cloud, and mobile integration. To find out more about Cato's SD-WAN offering, please read our response to the NeedToChange RFP.
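The "classify and dynamically allocate traffic" recommendation above can be sketched in a few lines of Python. This is a simplified illustration with invented link metrics and policy thresholds, not a description of any specific SD-WAN product.

```python
# Sketch of policy-plus-quality link selection for a dual-link branch
# (MPLS + internet). All metrics and thresholds are invented examples.

LINKS = {
    "mpls":     {"latency_ms": 40, "loss_pct": 0.1, "jitter_ms": 2},
    "internet": {"latency_ms": 70, "loss_pct": 0.8, "jitter_ms": 9},
}

# Business policy: what each application class can tolerate.
POLICY = {
    "voice":  {"max_latency_ms": 150, "max_loss_pct": 1.0, "max_jitter_ms": 5},
    "backup": {"max_latency_ms": 500, "max_loss_pct": 3.0, "max_jitter_ms": 50},
}

def pick_link(app):
    """Prefer the cheaper internet link; fall back to MPLS when the
    internet link's measured quality violates the app's policy."""
    req = POLICY[app]
    q = LINKS["internet"]
    ok = (q["latency_ms"] <= req["max_latency_ms"]
          and q["loss_pct"] <= req["max_loss_pct"]
          and q["jitter_ms"] <= req["max_jitter_ms"])
    return "internet" if ok else "mpls"

print(pick_link("voice"))   # mpls (internet jitter 9ms exceeds 5ms limit)
print(pick_link("backup"))  # internet
```

A real implementation would re-measure link quality continuously and re-evaluate per flow, which is what lets demanding applications ride the better link while bulk traffic uses the cheaper one.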

Cato Takes Finalist in RSA Innovation Sandbox

We were honored to be nominated as a finalist for 2017 RSA Innovation Sandbox Contest at last week’s show.  The nomination recognized our groundbreaking work in rethinking networking and security. Shlomo presented the Cato value proposition to the judging panel and you can see it yourself here. As anyone who’s been involved in networking or...
Cato Takes Finalist in RSA Innovation Sandbox

We were honored to be nominated as a finalist for the 2017 RSA Innovation Sandbox Contest at last week's show. The nomination recognized our groundbreaking work in rethinking networking and security. Shlomo presented the Cato value proposition to the judging panel, and you can see it yourself here.

As anyone who's been involved in networking or security knows, the classical network perimeter has long since disappeared. Shlomo elaborated on this point in his presentation. "25 years ago, when I founded Check Point Software, there was a clear perimeter," said Shlomo. "Today, that perimeter is basically gone. Network security appliances and MPLS links were designed for the old network and not today's scale."

As such, enterprises lack the agility to spin up new offices quickly or respond to zero-day threats. Our data and resources remain siloed behind different tools, depriving us of the holistic view that could automate and transform the enterprise.

The Cato Cloud transforms networking and security with a single, secure network connecting the entire enterprise: users in offices and mobile users on the road, applications and data in private datacenters or in the cloud. One environment governed by one set of security and networking policies. Listen to Shlomo for the summary of the Cato proposition. If you're ready for even more information, you can dive in here.

Critical Capabilities for a Successful SD-WAN Deployment

Last month, analyst Jim Metzler and I joined together on a webinar to discuss the current state of the WAN. Jim shared research from his recent study into the current drivers and inhibitors for WAN transformation and the deployment of SD-WAN. I dove into how Cato addresses those challenges, including showing our new SD-WAN offering....
Critical Capabilities for a Successful SD-WAN Deployment

Last month, analyst Jim Metzler and I joined together on a webinar to discuss the current state of the WAN. Jim shared research from his recent study into the current drivers and inhibitors for WAN transformation and the deployment of SD-WAN. I dove into how Cato addresses those challenges, including showing our new SD-WAN offering. You can see the webinar for yourself by registering for access to the recording here.

Traditional WANs were never designed to handle the dissolving perimeter. Gone are the days when users and data resided solely on corporate premises. The cloud and mobility are the new norm, and the WAN needs the agility, security, and cost structures to adapt to these changes.

Jim's research showed in part (check out the webinar for full details) how customer concerns around MPLS and those around the Internet are the direct inverse of one another. For MPLS services, customers were most concerned about cost, uptime, and latency. Lead time to implement new circuits and security were of lesser concern. With Internet services, security is of greatest concern.

We found similar results when polling respondents about the most important drivers for improving the WAN. Respondents indicated that connectivity costs were the most important driver. No surprise, I suppose, as MPLS costs can be more than 5x the cost of Internet bandwidth. With such a high disparity in bandwidth costs, backhauling Internet traffic makes little sense. More companies are looking to avoid the backhaul and use direct Internet access links to put cloud and Internet-bound traffic directly onto the Internet. This is particularly important as business applications move to the cloud (e.g., Office 365, Salesforce, Box). What was more interesting to me was that security was the second most important driver for WAN improvement, markedly different from Jim's finding with MPLS services.
To be honest, I was surprised by the results at first, but they make sense as we think about the Internet as tomorrow's WAN. MPLS has a reputation for being a secure service because of traffic separation: a user in one customer organization cannot ping, traceroute, or otherwise discover (at least at the IP layer) resources on another customer's network. But beyond traffic separation, MPLS services provide none of the other security components needed to protect the enterprise. There is no native encryption with MPLS services; data is sent in the clear. There is no protection against malware or APTs. There is also no segmentation to prevent users in one remote office from accessing the rest of the organization's network.

Companies traditionally accepted MPLS limitations, probably because of costs, but also because attacks came from "out there" on the Internet. Instead of WAN security, they built a closed environment, protecting the WAN from the Internet with a perimeter firewall. But today's threat landscape has changed, making it risky at best to ignore WAN security. Insider threats are more common than ever. Attackers can get past firewalls and, without segmentation, will spread from an obscure field office across the entire enterprise. The opportunity for infiltration only grows as business applications shift to the cloud and mobility becomes the norm. Companies can no longer assume that perimeter firewalls will secure WANs. Businesses now look to secure data anywhere it travels, which is why I think security plays such a big role for so many respondents in improving the WAN. To overcome the traditional problems of the WAN -- high costs, long provisioning times, and more -- with the Internet or any other transport, security is an absolute requirement. And that's exactly why so many companies look into Cato.
With networking and security integrated, the Cato Cloud allows organizations to leverage the benefits of the Internet without the security problems. I walked through how that's done and demoed our new SD-WAN offering in detail in the webinar. Check it out for yourself and let me know what you think.

The “Innovation” in RSAC Innovation Sandbox

We are honored to be named a finalist in the 2017 RSA Innovation Sandbox (ISB) contest. 87 companies applied and 10 were selected. Last year's RSA Conference was marked by an “explosion” of security vendors (over 550), and this year will likely see an even larger crowd. Cybersecurity is one area in IT that is always...
The “Innovation” in RSAC Innovation Sandbox

We are honored to be named a finalist in the 2017 RSA Innovation Sandbox (ISB) contest. 87 companies applied and 10 were selected. Last year's RSA Conference was marked by an "explosion" of security vendors (over 550), and this year will likely see an even larger crowd. Cybersecurity is one area in IT that is always evolving in step with the threat landscape.

Security innovation has two attributes: it is focused and sustained. It is focused on the "next" threat, the "additional" protection pillar, and better "management" of security tools. It also has to be sustained: as organizations deploy new tools and new capabilities, they all have to play nicely and incrementally add new capabilities on top of existing layers of security infrastructure. As these capabilities come from a large number of standalone providers, inserting them into the network requires careful planning to make sure nothing breaks.

This sustained and focused innovation creates, alongside the incremental protection, a systemic problem. As point solutions pile up to address new business requirements and new threats, complexity grows. Every solution has to be deployed, configured, sized, and maintained, with IT teams hard-pressed to keep up with what they own. As we deploy cutting-edge solutions to deter hackers, firewalls remain unpatched. Capabilities needed to address current threats are turned off to ensure network performance isn't impacted, because network security boxes are under-sized and new budget is needed to refresh them. Complexity can and does overwhelm even the most competent and hardworking security engineers, introducing vulnerabilities and protection gaps. These problems are more severe for midsize companies that are acutely short on staff. But, over time, they will impact virtually all enterprises.
There is always more work that needs to be done in risk assessment and mitigation than people available to do it, especially as, our recent survey shows, the number of security tools continues to expand.

What we will bring to the RSA ISB contest is a solution to this systemic problem. We created an all-new architecture that converges the networking and security stack into a single, global, self-maintaining, and scalable cloud service. We can deliver the capabilities enterprises need today, and the capabilities that will be needed in the future, without placing any load on the IT team. We do the care and feeding, the scaling, and the work of making all of these capabilities available everywhere. Cato's shared network and security infrastructure enables enterprises of all sizes to access capabilities that none of them could afford individually. With Cato, the attack surface is reduced, and the impact of cost constraints and increased complexity becomes muted.

It is a different kind of innovation. It is broad, rather than focused. It is disruptive, rather than sustained. But we need to act on this problem before it gets completely out of hand. Cato has the vision, team, and determination to address this problem, and we have dozens of production customers to prove it can, in fact, be addressed.

Come see Shlomo Kramer present at the RSA Innovation Sandbox on Monday, February 13, 2017. We would be happy to meet you at the contest demo zone.

Cato Named Finalist to the 2017 RSA Innovation Sandbox Contest

We’re gearing up for the RSA Conference in San Francisco next month, but not just to attend the show. Cato has been named as one of 10 finalists for the prestigious Innovation Sandbox Contest that’s run annually at the RSA Conference in San Francisco. Innovation Sandbox recognizes innovative companies with ground-breaking technologies and at Cato, we...
Cato Named Finalist to the 2017 RSA Innovation Sandbox Contest

We're gearing up for the RSA Conference in San Francisco next month, but not just to attend the show. Cato has been named one of 10 finalists for the prestigious Innovation Sandbox Contest that's run annually at the RSA Conference in San Francisco. Innovation Sandbox recognizes innovative companies with ground-breaking technologies, and at Cato, we sure do know a bit about ground-breaking innovation.

Before Shlomo and Gur cofounded Cato, Shlomo cofounded Check Point Software, creator of the first commercial firewall. Back then, life was much simpler for networking and security professionals. Their mission was clear: secure the resources inside the firewall from the wily and dangerous world outside of the firewall.

The classical network perimeter has long vanished. Our applications and data live outside the safe confines of our datacenters, in the cloud. Our users are as apt to work from Starbucks as from the office. There is no longer an "inside" and an "outside." There is just the network.

The perimeter is gone, but many have yet to evolve their thinking about networks and security. Network and security appliances are still sold; legacy WANs are still deployed. The enterprise continues to purchase separate equipment and software for managing and securing mobile and fixed users. As a result, enterprises lack the agility to spin up new offices quickly or respond to zero-day threats. Our data and resources are still siloed behind different disciplines and applications, depriving us of the holistic view that could automate and transform the enterprise.

Cato revolutionizes the way organizations provide networking and security. At Cato, we allow you to rebuild the network perimeter, but it's a perimeter for today's business. Cato connects all enterprise resources, datacenters, branches, cloud infrastructure, and mobile users, into a single network in the cloud.
We then secure all traffic with built-in, cloud-based security services, such as next-generation firewall, URL filtering, and anti-malware. So now you can enforce a single policy across all traffic. With the Cato Cloud, the network becomes simpler again. The Cato Cloud replaces your secure web gateways, MPLS backbones, SD-WAN and WAN optimization appliances, Cloud Access Security Brokers (CASBs), and on-premises firewalls, UTM, and VPN appliances. As a result, Cato allows companies to:

- Reduce MPLS costs
- Eliminate branch security appliances
- Provide secure internet access everywhere
- Securely connect mobile users and cloud infrastructures into the network

On Monday, February 13, 2017, Shlomo will present the radical transformation that's possible with Cato to the Innovation Sandbox Contest panel of judges. But it won't be the first time Shlomo has had a company before the panel. Aside from co-founding Check Point, Shlomo was also the CEO of Imperva, the innovator of the Web Application Firewall and a past winner of the Innovation Sandbox contest.

Check Point. Imperva. Cato Networks. Yeah, you can say ground-breaking is in our roots.

Blender Case Study: FinTech Drops the Box

  With plans to add more remote branches in the New Year, Blender decided it was time to shed management and maintenance of firewall appliances and move to centralized network provisioning and security.   Background Eliminating borders for both lenders and borrowers worldwide is at the heart of Blender’s peer-to-peer lending platform. Founded three years...
Blender Case Study: FinTech Drops the Box

With plans to add more remote branches in the New Year, Blender decided it was time to shed management and maintenance of firewall appliances and move to centralized network provisioning and security.

Background

Eliminating borders for both lenders and borrowers worldwide is at the heart of Blender's peer-to-peer lending platform. Founded three years ago, Blender's service gives borrowers and lenders a simple and easy alternative to traditional bank lending that offers more attractive rates to both parties. Currently servicing more than 10,000 clients, the company has offices in Israel, Italy, and Lithuania, with plans to expand to two new territories in 2017. To be competitive, the organization must also run especially lean, both with its network architecture and its IT staff.

Challenge

When Blender originally started out of its headquarters in Israel, it installed a firewall appliance from one of the top-tier providers at its perimeter. Chief Technical Officer (CTO) Boaz Aviv found it complex to manage, upgrade, and patch. "Owning these boxes is expensive and they need constant management. Even if the time required to manage your firewall is just 10 hours a month, that's still 10 hours you've lost," explains Aviv.

Blender depended on an IT integrator for installation and support of the firewall appliance. When the company experienced a system failure over the weekend, the IT integrator was not available to support them. This resulted in long downtime and impacted the business. "We are a global operation and we keep it very lean and mean," explains Aviv. "In order to do this you need to minimize hassles that don't directly relate to your business. So it's very important to optimize the resources, time, and people needed to manage your network and security.
That's why I've always preferred the simplicity offered by cloud solutions like Cato."

When the time came to expand to the new offices in Italy and Lithuania, the team at Blender stopped to reevaluate how their office network security footprint would impact cost and capacity going forward. Without dedicating personnel to support remote appliances with upgrades and patches, Blender would be dependent on costly third-party assistance with unreliable coverage. Also, as a financial technology organization, Blender continuously seeks to upgrade to better security services. "Although we are a young company, we never compromise on security," says Aviv. As a cloud-centric business that is subject to regulations and stores most of its data in SaaS applications and an IaaS datacenter, Blender specifically needs to secure access to its critical data.

Download the complete case study to see how Blender achieved its firewall elimination goal.

Survey Report: 700 Networking, Security & IT Pros Share Top Challenges and What’s Driving Investments in 2017

41 percent see FWaaS as the most promising infrastructure protection technology; 50 percent plan to eliminate hardware in 2017   In the latest survey report from the Cato research team, 700 networking, security and IT executives share their biggest risks, challenges, and planned investments related to network connectivity and security. Top risks and challenges reported...
Survey Report: 700 Networking, Security & IT Pros Share Top Challenges and What's Driving Investments in 2017

41 percent see FWaaS as the most promising infrastructure protection technology; 50 percent plan to eliminate hardware in 2017

In the latest survey report from the Cato research team, 700 networking, security, and IT executives share their biggest risks, challenges, and planned investments related to network connectivity and security. Top risks and challenges reported include:

- 50 percent of respondents cite network security as their top security risk
- 44 percent of organizations with more than 1,000 employees cite MPLS cost as their biggest challenge; 37 percent of organizations with fewer than 1,000 employees reported the same
- 49 percent are paying a premium to buy and manage security appliances and software

To learn more, download the full survey report: Top Networking and Security Challenges In the Enterprise; Planned Network Investments in 2017.

Firewall Elimination: Universal Mental Health Services Case Study

With 13 locations and 900 employees, Universal Mental Health Services made the inevitable decision to eliminate their branch firewalls with Cato Networks. Background Universal Mental Health Services (UMHS) is dedicated to helping individuals and families affected by mental illness, developmental disabilities and substance abuse in achieving their full potential to live, work and grow in the...
Firewall Elimination: Universal Mental Health Services Case Study

With 13 locations and 900 employees, Universal Mental Health Services made the inevitable decision to eliminate their branch firewalls with Cato Networks.

Background

Universal Mental Health Services (UMHS) is dedicated to helping individuals and families affected by mental illness, developmental disabilities, and substance abuse achieve their full potential to live, work, and grow in the community. The organization is a comprehensive community human service organization based in North Carolina that strives to provide integrated, quality services to its clients. UMHS is nationally accredited by the Commission on Accreditation of Rehabilitation Facilities (CARF) International.

Challenge

The UMHS network was originally designed to have all 12 branches connected via MPLS, backhauling to a primary datacenter with one central firewall. However, after the process began, UMHS realized that MPLS was too expensive to deploy in all locations. Additionally, some locations were outside of the MPLS provider's service area. This forced the organization to connect 5 branches via SonicWALL firewalls with site-to-site VPNs. The result was a mesh of two network architectures that was more complex to run and manage.

Running this environment proved to be challenging, especially due to the burden of updating the hardware and maintaining firewall software. It was labor intensive, and updates didn't always go smoothly. "Specifically, I remember updating the firmware on some devices that caused us to lose connectivity. This created a disruption in our record keeping, as our branches send key reports directly to our headquarters. Employees generally scan records using copiers, and those records are then stored directly into the appropriate folder at corporate. Additionally, because we deal with sensitive issues like abuse and drug use, employees need free access to internet resources.
Policy management was difficult because SonicWALL does not offer agile options to balance the blocking of banned websites while still providing access to necessary information."

Download the complete case study to see how UMHS achieved their firewall elimination goal.

Consistent vs. Best Effort: Building a Predictable Enterprise Network

For decades, one of the primary distinctions between MPLS and internet-based connectivity was guaranteed latency. Why is this guarantee so important, and why do you need a carrier-provided MPLS service to get it? Latency is the time it takes for a packet to travel between two locations. The absolute minimum is the time it would take...
Consistent vs. Best Effort: Building a Predictable Enterprise Network

For decades, one of the primary distinctions between MPLS and internet-based connectivity was guaranteed latency. Why is this guarantee so important, and why do you need a carrier-provided MPLS service to get it? Latency is the time it takes for a packet to travel between two locations. The absolute minimum is the time it would take light to cross the distance. The practical limit is higher (roughly 1ms of round-trip time for every 60 miles or so). Two elements push latency above that floor: packet loss and traffic routing.

Packet Loss

Packet loss often occurs in the handoff between carriers. In MPLS networks, handoffs are eliminated or minimized, reducing packet loss. Internet routes can go through many carriers, increasing the likelihood of packet loss. Packet loss requires retransmission of the lost packets and has a dramatic impact on overall response time. Advanced networks apply Forward Error Correction (FEC) to predict and correct packet loss without the need for retransmission.

Traffic Routing

IP routing is much harder to control. It is determined by a service provider based on a complex set of business and technical constraints. Within an MPLS network, the provider controls the route between any two sites, end-to-end, and can provide an SLA for the associated latency. Public Internet routing crosses multiple service providers, which are uncoordinated by design, so no SLA can be provided.

A case in point

Cato Research Labs recently ran latency tests between three IP addresses in Dubai and an IP address in Mumbai. The three Dubai addresses are served by three separate ISPs. The round-trip time (RTT) was 37ms, 92ms, and 216ms, respectively. In this case, the choice of local ISP could drastically impact the customer experience. What determined this outcome was the internet routes used by each ISP.
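The FEC idea mentioned under Packet Loss can be demonstrated with a toy XOR-parity scheme: for every block of data packets, one parity packet is sent, and any single lost packet in the block can be rebuilt from the survivors without retransmission. Production FEC codes are far more sophisticated; this only shows the principle.

```python
# Toy XOR-parity FEC: send one parity packet per block of data packets.
# If any single packet in the block is lost, the receiver rebuilds it
# from the survivors instead of waiting for a retransmission.
from functools import reduce

def xor_bytes(packets):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

block = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_bytes(block)            # transmitted alongside the block

# Receiver side: suppose the second packet was lost in transit.
received = [block[0], block[2], parity]
recovered = xor_bytes(received)
print(recovered == b"pkt2")          # True: loss repaired, no retransmit
```

The trade-off is bandwidth overhead (the parity packet) in exchange for avoiding the round trip a retransmission would cost, which is exactly why FEC helps response time on lossy paths.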
While public internet routes have no SLAs, customers can do a "spot check" and measure pretty good latency between two locations over the internet. However, the latency observed during the spot check is not guaranteed. In fact, latency can vary greatly - what we call jitter - from day to day and month to month, depending on the providers involved and various conditions on the internet.

Cato provides guaranteed, SLA-backed latency. Our cloud network relies on tier-1 carrier links and has self-optimization capabilities to dynamically identify optimal routes and eliminate packet loss. Regardless of how we compare with the various connectivity alternatives at any given point in time, the customer is assured that latency will not exceed our SLA. This is especially critical for long-haul routes across America, Europe, and Asia, but also for latency-sensitive applications like voice and video. At Cato, we strive for consistency of network latency at an affordable cost, rather than making customers pay the "MPLS premium" for the absolute lowest latency figures, or relying on a best-effort, inconsistent transport like the public internet.

Cato Networks provides an affordable, low-jitter, low-latency enterprise network. To learn more about the trade-offs for your next-generation WAN, download our "MPLS, SD-WAN, Internet, and Cloud Networks" ebook here.
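The rule of thumb and the measurements above can be tied together in a short sketch: the physical RTT floor implied by roughly 1ms per 60 miles, and jitter computed as the spread of RTT samples. The sample values here are invented for illustration.

```python
# Sketch: physical RTT floor from the ~1ms-per-60-miles rule of thumb,
# and jitter as the spread of measured RTT samples. Values are invented.
import statistics

def min_rtt_ms(path_miles):
    """Approximate best-case round-trip time over a fiber path."""
    return path_miles / 60.0

samples_ms = [37, 41, 92, 38, 216, 39]     # hypothetical per-probe RTTs
avg_ms = statistics.mean(samples_ms)
jitter_ms = statistics.pstdev(samples_ms)  # one simple jitter measure

print(round(min_rtt_ms(1200), 1))  # 20.0 ms floor for a ~1200-mile path
print(round(avg_ms), round(jitter_ms))
```

Comparing a measured RTT against the floor for the same path is a quick way to spot inefficient internet routing, and a large spread across samples is the inconsistency (jitter) the post describes.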

Yahoo Password Leak: Your Enterprise Data is at Risk

The media is chock-full of reports on a huge Yahoo password leak: 500 million account passwords were compromised nearly 2 years ago. The list of previously hacked services includes Dropbox, LinkedIn, Experian, Anthem, the Office of Personnel Management and many more. A 2-year-old password hack may seem minor to IT security professionals. After all, these passwords are used for consumer services, and you typically change your password from time to time. Well, not so fast. There are two challenges with consumer security awareness (or the lack thereof): static passwords and password reuse. First, most services do not require a password change because the process can be a pain, especially when a user is prevented from reusing an old password. Newer techniques, like using a phone to sign in, alter the way most consumers are used to signing in, creating even more confusion and friction. Second, and even more critical, with the explosion of passwords across services, users tend to use the same password for both consumer and business services. Static passwords and password reuse create a real threat to enterprises. Associating a user with the company they work for isn't that difficult. This link exists in social media accounts and even in the mail inbox of a hacked service. Figuring out the email convention of most businesses is a matter of minor research or simple trial and error. Once a business email is identified, the enterprise is at risk of spear-phishing and data breach. Through correspondence found in a consumer mailbox, it is possible to craft targeted phishing emails to colleagues based on shared past experiences. And, with the increased use of cloud-based email services for business (e.g., Gmail) and the migration of mission-critical applications to the cloud (Office 365, Salesforce, Box and many more), the combination of a business email and a reused password can lead to a breach.
Protect Your Business from Data Breaches Driven by Hacked Passwords

Enterprises should take precautions against account takeover and data breach from compromised passwords:

Use multi-factor authentication on all business web services. This ensures that a login from a new device gets approved through a second factor (e.g., the employee's phone), preventing account takeover with a reused, phished or otherwise stolen password. Note that hacked services often enable multi-factor authentication on their own service only after a hack is discovered.

Restrict access to enterprise cloud services. Many cloud services allow organizations to restrict access to specific IP addresses. This works well for fixed locations, but not for mobile users whose IP addresses change often. A cloud network solution can ensure that all access to business cloud services, from all users and locations, flows through specific IPs.

Protect against phishing and malware sites. If a user does get a phishing email, a URL filtering solution can help stop them from reaching a risky site. Some organizations prevent access to unclassified sites or new sites with an unknown reputation as a way to decrease exposure.

Educate your employees. Employees must be trained on the risk of emails from suspicious sources and how to look for signs of bad links and attachments before clicking on them.

The Way Forward: Password Elimination

The likelihood of continued password leaks is very high. We should gradually move towards eliminating passwords altogether. Some services now use one-time passwords for every login. Others use the user's phone to authorize sign-in. An even stronger process is to require device registration for every new device, specifically binding the device to the account. Whatever the method, the days of "the password" are numbered.
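One-time passwords, mentioned above as a step toward password elimination, are typically generated with the TOTP algorithm standardized in RFC 6238. The sketch below implements the core of it (HMAC-SHA1, 30-second time steps) for illustration only; a production deployment should use a vetted authentication library rather than hand-rolled code.

```python
# Minimal TOTP sketch per RFC 6238 (HMAC-SHA1, 30-second time steps).
# For illustration only; use a vetted library in production.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits: int = 6) -> str:
    counter = int((time.time() if t is None else t) // 30)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: with this secret, at t=59s the code is 287082.
print(totp(b"12345678901234567890", t=59))  # -> 287082
```

Because the code changes every 30 seconds, a password phished or leaked today is useless to an attacker tomorrow, which is exactly the property static passwords lack.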

Customer Case Study: Cloud Migration Drives Global WAN Overhaul

Background

J., an information technology manager, works for one of the world's leading manufacturers and marketers of consumer goods. The company has more than 30 manufacturing plants in the Middle East, Europe and the U.S., with offices across the globe. He has more than 20 years of experience in network security and information management, and specializes in enterprise infrastructure, security and project management. His professional certifications include Certified Information Security Manager (CISM), Certified Information Systems Security Professional (CISSP), and Certified Information Systems Auditor (CISA).

Challenge

The WAN for J.'s company is based on full-mesh VPN tunnels over the internet between commercial firewalls. All enterprise locations were backhauling traffic over the internet to a datacenter that hosts an internal SAP instance. The company was moving to SAP Hana Enterprise Cloud (HEC) in Germany, which required the backhauling approach to be re-engineered. Connecting to the SAP HEC instance was enabled using two IPsec tunnels, so a full-mesh configuration was only possible by deploying a new firewall cluster in the SAP HEC datacenter. The company faced substantial costs and risks to support this configuration:

Buying, configuring and deploying a high-end firewall cluster in a SAP HEC datacenter (an uncommon scenario)

Providing 24x7, in-country support and maintenance of the new firewalls, given their role as a global gateway to the critical SAP instance

This scenario created an unacceptable risk to the company's operations due to the introduction of a new, unmanaged network security element. "The current WAN architecture could not handle the SAP migration and we needed a solution that was affordable, didn't require a lot of internal resources, and could be operational in two weeks in order to keep the project on track," he says.
Solution

The SAP project team was searching for a solution. After a visit to Cato's website, the team met with Cato and was won over by the solution's architecture, gradual deployment process, network configuration flexibility and 24/7 customer support. As with any new vendor, he expected that there would be problems, especially since Cato was given such a short window to deploy. Cato proposed a phased approach:

Establish IPsec tunnels from each of the company's firewalls to the Cato Cloud

Connect the SAP HEC instance to the Cato Cloud, without the need for a new firewall cluster

Connect other cloud datacenters (AWS and Azure) to the Cato Cloud

The company's WAN was reestablished in the Cato Cloud, enabling point-to-point connectivity without the need for a full site-to-site mesh, and delivering the benefits of Cato's low-latency backbone. The team was very professional and the job was completed on schedule.

The customer is particularly happy with Cato's customer service and support, both during and after the project. There were minor configuration issues at the start of the project, but the support team was very responsive and solved them in record time. It is because of this level of attention and service that he and the IT team have complete confidence in building a long-term relationship with Cato. The customer points out that "Cato delivered on what it promised us at the start of the project. We are running a mission-critical, global enterprise network on Cato. It just works."

Plans

The IT team is seeing a substantial upside to the Cato deployment: "We are maintaining 30 firewalls in our remote locations, primarily as connectivity devices, but also for internet security. We can eliminate these firewalls using Cato Sockets while maintaining centralized policy and security capabilities. This option gives us substantial cost savings in hardware refreshes and software licenses.
We have already initiated a replacement of the first four firewalls. We expect to finish this process in the next 12 months." Additionally, the customer is considering using Cato for mobile VPN access and IoT initiatives, noting that "Cato enables us to connect all parts of our business into a common networking and security platform. This is a great relief compared with the mix of technologies and solutions we had to use before. I can see why many enterprises will find Cato's platform compelling for making their infrastructure more cost-effective and easier to manage."

"Technology executives within established organizations are often afraid to make bold moves, like replacing the network architecture they've relied upon for 20 years. We moved our WAN to Cato because my organization's strategic ERP application was moved to the cloud, and our legacy WAN was too rigid to support that move and meet the project timeline. With Cato we were able to address the immediate business need, on time and under budget, and we now have a platform to further optimize our networking and security infrastructure."

Firewall as a Service and your biggest network security challenge

We recently held a webinar focused on educating network professionals about Firewall as a Service (FWaaS). At the beginning of this webinar, we asked the audience, "What is your biggest challenge running distributed network security today?" Attendees overwhelmingly noted "monitoring and handling of security events" as the top answer, followed by "ongoing support," and finally "capacity and security capabilities of my appliance" and "cost of appliances." All of these challenges cause headaches for network pros on a regular basis, and dependency on appliances and the slow evolution of the network security market have only made things worse. It's this dynamic that inspired us to tackle the complex issues causing these challenges, in order to make network security simple again.

The Challenge of Appliance-Based Network Security

Network security was simpler in the past: there was a clear perimeter (or perimeters, for multi-site companies) with networks, users, and applications firmly inside organizations. To securely connect users at any company location to business applications, or to secure traffic over the internet, we could either connect remote sites with an appliance at the perimeter of each site, or backhaul all the traffic to a datacenter. Today, there are global organizations with tens of locations, roaming users with business and personal devices, and business applications both inside the local datacenter and in the Cloud. This creates three main challenges when relying on appliance-based network security:

Appliance sprawl: Companies choosing appliance-based network security face the complexity involved in planning, configuring and maintaining multiple appliances at each site. Appliances have a fundamental limitation, as they require continuous updates, upgrades, and patch management.
In addition, appliances can't scale with business needs: if at some point the business consumes more traffic or wants to add functionality, the appliance must support it, or an upgrade is forced. In many cases, companies compromise on security due to budget limitations and the capacity constraints of the appliances they have. Last, using appliances requires a business to depend heavily on the vendor for support.

Direct internet access: Today, most employee traffic is internet traffic, either for business use (e.g. Office 365, Dropbox or Salesforce) or personal browsing. Two approaches are commonly used to provide secure internet access to employees:

Exit to the internet at each site and secure it locally. This method contributes to appliance sprawl, with its complexity and manageability overhead.

Backhaul traffic through the company datacenter and exit to the internet from a central location. This approach is mostly chosen by companies that don't want to compromise on security and also need WAN access. Backhauling means routing the traffic through the datacenter, where a big firewall secures it before it exits to the internet. This can be achieved in two ways:

Establishing a VPN connection from each location using an appliance. In addition to the appliance sprawl problems, this option requires the management of a complex VPN policy and configuration. VPNs usually result in a bad user experience, as traffic routed over the public internet can suffer from high latency and packet drops.

Using an MPLS network and routing the traffic over a reliable network. The problem with this option is that MPLS is an expensive network that was not designed to carry heavy traffic (it was designed for mission-critical applications), and internet traffic consumes a lot of the expensive MPLS bandwidth. MPLS also has other challenges, like the complexity of deployment and provisioning.
Neither VPN nor MPLS is an effective way to exit to the internet, as both cause the trombone effect.

Mobile and cloud access: Connecting mobile users to local and cloud applications while maintaining security and visibility is challenging with appliances. For WAN access, most companies use VPN tunneling from a device directly to the perimeter firewall. The challenge here is that users experience high latency when the datacenter is located geographically far away, because the traffic gets routed all over the public internet. Another challenge is that in many cases business resources are not located in one place, which requires the complex management of split tunnels. Additionally, users tend to access business applications from personal devices and via private networks, without any control or visibility for the organization. To deal with this, many companies use an additional solution for SaaS security and visibility called a cloud access security broker (CASB), which means managing yet another security solution.

Gartner's Perspective on Firewall as a Service

Last month Gartner released its Hype Cycle for Infrastructure Protection. In this report, Gartner mentioned FWaaS for the first time and defined it as a firewall delivered as a cloud-based service that allows customers to partially or fully move security inspection to a cloud infrastructure. According to the report, FWaaS is simple, flexible and more secure, and it results in faster deployment and easier maintenance. A FWaaS must also provide consistently good latency across all enterprise points of presence, and it should provide SD-WAN functionality for resilience. Additionally, the benefits of the Cloud include centralized management and unique security features based on its full visibility. Gartner's advice to customers is to trust the Cloud for both security and performance.
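The trombone effect mentioned above is easy to quantify: internet-bound traffic from a branch is hauled to the central datacenter firewall and only then sent to its destination, so the detour is paid on every round trip. A minimal sketch; the city names and one-way latency figures below are illustrative assumptions, not measurements.

```python
# The "trombone effect" in numbers. One-way latencies are illustrative.

def direct_rtt_ms(branch_to_site: float) -> float:
    """RTT when the branch exits to the internet locally."""
    return 2 * branch_to_site

def backhauled_rtt_ms(branch_to_dc: float, dc_to_site: float) -> float:
    """RTT when traffic trombones through the central datacenter both ways."""
    return 2 * (branch_to_dc + dc_to_site)

# A Singapore branch reaching a SaaS app hosted nearby (5 ms one way)
# directly, vs. backhauling through a London datacenter (170 ms one way
# to the datacenter, then 165 ms out to the app):
print(direct_rtt_ms(5))             # -> 10
print(backhauled_rtt_ms(170, 165))  # -> 670
```

The backhauled path is dozens of times slower for exactly the same application, which is why backhauling internet traffic is increasingly untenable.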
What to Look for in FWaaS

Traffic inspection and remote access: inspection of both WAN and internet traffic, with full stateful inspection, SSL inspection and threat prevention capabilities. In addition, a FWaaS should allow remote connections from all locations and mobile users.

Segmentation: a full understanding of internal networks (IPs, VLANs, NAT and routing decisions).

User and application awareness: the ability to set and enforce security policies based on user identity, location and machine when accessing applications or URLs.

Flexibility and scalability: on top of all the basic firewall capabilities, a FWaaS should offer all the benefits of the Cloud. That means rapid deployment, seamless upgrades, elasticity and the elimination of all the challenges involved in managing appliances.

There are a few popular alternatives to the FWaaS model, but they lack fundamental requirements. For example, using a secure web gateway (SWG) in the Cloud lacks visibility and security for WAN traffic, can't take an active role in network segmentation, and doesn't offer a VPN connection back to company resources. Additionally, options like appliance stacking (racking and stacking appliances in the cloud, or using virtual editions of firewalls) just move the problem to the Cloud. Upgrades and maintenance remain a huge challenge for such solutions.

Firewall as a Service brings organizations a unique opportunity to simplify network security. To learn more about how Cato Networks deals with these challenges and how Cato's Firewall as a Service works, please watch our recorded webinar. Read more about the firewall as a service market.
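The "user and application awareness" requirement above boils down to evaluating one ordered rule set against the identity and application of each flow, not just its IPs and ports. The sketch below illustrates the idea under stated assumptions: the rule fields, group names and first-match semantics are hypothetical, not any vendor's actual policy model.

```python
# Sketch of a single logical, user- and application-aware policy.
# Rule fields and names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Rule:
    users: Optional[Set[str]]  # None = any user/group
    apps: Optional[Set[str]]   # None = any application
    action: str                # "allow" or "block"

def evaluate(rules, user: str, app: str) -> str:
    for r in rules:  # first matching rule wins
        if (r.users is None or user in r.users) and \
           (r.apps is None or app in r.apps):
            return r.action
    return "block"   # default deny

policy = [
    Rule(users={"finance"}, apps={"salesforce"}, action="allow"),
    Rule(users=None, apps={"tor"}, action="block"),
    Rule(users=None, apps=None, action="allow"),
]
print(evaluate(policy, "finance", "salesforce"))  # -> allow
```

The point of the FWaaS model is that this one rule set is enforced everywhere, rather than being copied, and inevitably drifting, across dozens of appliances.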

New World, New Thinking: Why “The Box” Has Got To Go

We are living in an agile world. On-demand, self-service and "just in time" have become standard for the applications and services we use, when and where we want to use them. The Cloud possesses the functionality to create a truly agile enterprise computing platform. This is the main thesis of Tom Nolle's recent blog, titled "Following Google's Lead Could Launch the "Real Cloud" and NFV Too." The main point that Nolle makes is that for the Cloud to serve as a platform for agile service delivery, enterprises and service providers must drop their "box" mindset. As Nolle points out, "... if we shed the boundaries of devices to become virtual, shouldn't we think about the structure of services in a deeper way than assuming we succeed by just substituting networks of instances for networks of boxes?" This is a key question in the evolution of cloud services for telecommunication companies, internet service providers (ISPs), cloud service providers (CSPs) and managed security service providers (MSSPs). Historically, service providers either managed or hosted various parts of a customer's infrastructure in their facilities. This created the "perception of cloud" (a shift from a capital expense to an operational expense model) while using the same underlying technology. "The box" remained, and service providers had to buy, configure, update, upgrade, patch and maintain it. They were not truly leveraging the power of the Cloud, so the cost of services remained high and agility stayed low. As Graham Cluley points out, the Cloud was simply "someone else's computer." Enter Network Function Virtualization (NFV). Service providers are pushing internal projects to provide various network and security functions as cloud services. NFV infrastructure involves a management and orchestration layer that determines which services should be activated for a customer, and VNFs (virtual network functions) that represent the services themselves.
In the context of firewalls, for example, these are virtual appliances from companies like Fortinet and Cisco. "The box" remains: it still needs to be managed as a single instance, configured, upgraded and patched. The capacity going through the appliance has to be sized, and the load on the underlying infrastructure that runs it can be very volatile. The NFV "cloud" was nothing more than a network of boxes. Nolle makes the point that in order to really use the Cloud's full potential, a new application has to be built to leverage its agility, flexibility, and elasticity. It is simply not possible to take legacy applications (aka boxes) and expect them to become cloud-aware. Nolle suggests five principles for making cloud-aware applications. While Nolle's "application" refers to any business or infrastructure capability, I will use his principles to discuss what I believe is needed to deliver cloud-based network security as a service (NSaaS).

"You have to have a lot of places near the edge to host processes, rather than hosting in a small number (one, in the limiting case) of centralized complexes. Google's map showing its process hosting sites looks like a map showing the major global population centers."

Cloud data centers are core elements of NSaaS. They can be a blend of virtual and physical data centers, but the nature of NSaaS requires physical infrastructure with very thin virtualization to maximize throughput. How many "places" are needed? Gartner uses a latency of 25ms or less from a business user or location as a rule of thumb. Next-generation CDNs (like Imperva Incapsula) are demonstrating that a CDN can leverage the expansion of the internet backbone and the emergence of internet exchanges to deliver a global network with under 50 global locations. Regardless, the edge of the cloud must get close to the end user.

"You have to build applications explicitly to support this sort of process-to-need migration.
It's surprising how many application architectures today harken back to the days of the mainframe computer and even (gasp!) punched cards. Google has been evolving software development to create more inherent application/component agility."

Migration is one way to address process-to-need. Another way is process-everywhere. NSaaS makes a network security stack available everywhere (i.e., close to the edge), but still maintains one-ness: a single, logical instance of NSaaS serves a physically distributed environment.

"Process centers have to be networked so well that latency among them is minimal. The real service of the network of the future is process hosting, and it will look a lot more like data center interconnect (DCI) than what we think of today."

NSaaS PoPs are interconnected by tier-1 carriers with SLA-backed latency. The low-latency backbone moves network traffic, not workloads (because they run everywhere), along with the control information that keeps the NSaaS components supporting each customer context-aware.

"The "service network" has to be entirely virtual, and entirely buffered from the physical network. You don't partition address spaces as much as provide truly independent networks that can use whatever address space they like. But some process elements have to be members of multiple address spaces, and address-to-process assignment has to be intrinsically capable of load-balancing."

NSaaS is multi-tenant by design, and each tenant has its own virtual network that is totally independent of the underlying physical implementation. The physical network of NSaaS PoPs communicates over encrypted tunnels using multiple carriers. The PoPs handle traffic routing, optimization, resiliency, and security.
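Two of the ideas above, "places near the edge" and the ~25ms rule of thumb, can be sketched as a simple PoP-selection step: route each user to the PoP with the lowest measured latency and check it against the budget. The PoP names and latency probes below are illustrative assumptions, not a description of any real deployment.

```python
# Sketch: pick the nearest PoP by measured latency and check it
# against the ~25 ms rule of thumb. Data is illustrative.

def pick_pop(latency_ms: dict, budget_ms: float = 25.0):
    """Return (nearest PoP, whether it meets the latency budget)."""
    pop = min(latency_ms, key=latency_ms.get)
    return pop, latency_ms[pop] <= budget_ms

# Latency probes from one user's location to three hypothetical PoPs:
probes = {"frankfurt": 18.0, "london": 22.5, "ashburn": 95.0}
print(pick_pop(probes))  # -> ('frankfurt', True)
```

If no PoP meets the budget, that is the signal to add another "place near the edge", which is why the PoP map tends to track the map of population centers.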
"If service or "network" functions are to be optimal, they need to be built using a "design pattern" or set of development rules and APIs so that they're consistent in their structure and relate to the service ecosystem in a common way. Andromeda defines this kind of structure too, harnessing not only hosted functions but in-line packet processors with function agility."

NSaaS has a built-in management layer that keeps all the different PoPs in sync. The physical entry point of a packet is immaterial, because the packet is always processed in the virtual context of the network it belongs to and the policy that governs that network.

NFV is moving slowly. In the past, we attributed this to potential conflicts between the players in the ecosystem. Nolle says, "the reason we're not doing what's needed is often said to be delays in the standards, but that's not true in my view... We're focused on IaaS as a business cloud service, and for NFV on virtual versions of the same old boxes connected in the same old way. As I said, piece work in either area won't build the cloud of the future, carrier cloud or any cloud (bold is mine)." The bottom line is that architecture, not perception, matters. The network of the future, and its capabilities, must truly live in the cloud. It must align with the "one-ness" view of a cloud service: available anytime, everywhere, seamlessly updated and scaled on demand. This is our vision and the architecture we have built at Cato Networks. To learn more, get our white paper, Network Security is Simple Again, here.

Firewall as a Service – Beyond the Next Generation Firewall

Next Generation Firewalls and UTMs have been the cornerstone of network security for the past 20 years. Yet deploying appliances at every remote office creates multiple challenges for organizations: the capital needed to buy, upgrade and retire hardware solutions, and the people required to configure, patch and manage them. IT teams are also seeing increasing pressure to allow Direct Internet Access at branch locations, which further drives the need to deploy distributed security solutions. Even when running smoothly, firewalls and UTMs still do not protect the mobile users and Cloud infrastructure that are now an integral part of the business. FWaaS, recently recognized by Gartner as a high-impact emerging technology in Infrastructure Protection, presents a new opportunity to reduce cost and complexity and deliver better overall security for the business.

In our upcoming webinar, we will review:

The challenges IT networking, security and Ops teams face with distributed network security stacks and Direct Internet Access

How Firewall as a Service can address these challenges, and what the required capabilities are

How Cato Networks protects enterprises in the cloud, simplifies network security and eliminates the appliance footprint in remote locations

Book your spot at http://go.catonetworks.com/Firewall-as-a-service-beyond-the-next-generation-firewall

Firewall-as-a-Service debuts on the Gartner Hype Cycle for Infrastructure Protection

In new research published by Gartner on July 6th, analyst Jeremy D'Hoinne introduced a new technology segment: Firewall-as-a-Service (FWaaS). As the name suggests, the segment is focused on the migration of on-premise firewalls to the Cloud. Obviously, this market segment is at an early adoption stage, but the analysis suggests the impact on enterprises will be high, since enterprises have already realized the benefits of improved visibility, flexibility and centralized policy management. Gartner also calls out two key considerations for evaluating solutions: network latency and security capabilities. Cato Networks is one of the top vendors driving this new and exciting market forward. FWaaS promises simplification and cost reduction by eliminating on-premise appliances. We agree. However, we view the opportunity more broadly, through the convergence of Software-defined Wide Area Networking (SD-WAN) and network security in the Cloud. The Cloud doesn't merely transform the firewall "form factor"; it also enables a whole new connectivity and security architecture. Cato delivers an integrated networking and security stack that extends the boundaries of the legacy firewall-protected perimeter to connect mobile users, cloud infrastructure, and physical locations into a new, optimized and secure perimeter in the Cloud.
By using Cato, enterprises will be able to:

Eliminate distributed networking and security appliances, and the cost and complexity of buying, upgrading and patching them

Reduce MPLS connectivity costs by dynamically offloading internet and WAN traffic to affordable and resilient internet links

Directly access the internet everywhere, without deploying a dedicated on-premise network security stack

Leverage an affordable, low-latency, global WAN between enterprise locations

Enforce a unified policy across remote locations, mobile users, and physical and cloud infrastructure, without using multiple point solutions

Strengthen their security posture with an agile network security platform that can scale to support any business need and rapidly adapt to emerging threats

Gradually migrate existing enterprise networks, for a growing subset of locations, users and use cases

Read more about the firewall as a service market. To learn more about Cato Networks, visit https://www.catonetworks.com

Network Security-as-a-Service: beyond the Next Generation Firewall

About 10 years ago, a small startup, Palo Alto Networks, innovated the Next Generation Firewall (NGFW). Existing enterprise firewalls relied on the use of specific ports to apply application security rules. By application, I don't mean "salesforce.com". Rather, it is the mostly irrelevant distinction between application protocols such as HTTP, FTP, SSH and the like. Palo Alto created "application awareness": the ability to detect application-specific streams, regardless of port. This was a critical innovation at a time when vast amounts of traffic were moving to the Internet (using ports 80/443) and the ability to apply controls at the port level was insufficient. Merely 5 years later, the enterprise infrastructure landscape evolved again with the increased usage of public Cloud applications (SaaS). The "application-aware" next generation firewall was blind to users accessing unauthorized applications (known as "Shadow IT") and couldn't enforce granular access control on authorized apps. Furthermore, mobile users directly accessed Cloud applications without going through the firewall at all. A new class of network security products was created: the Cloud Access Security Broker (CASB). Many CASB flavors placed themselves in the Cloud to address the limitations of appliance-based firewalls. This was a natural architectural decision; however, it deepened the fragmentation of enterprise network security controls.

Cloud and mobility cannot be addressed with the current firewall appliance form factor. You simply can't control a virtual, fluid and dynamic business network with rigid, location-bound security controls. For a while, we could get away with appliance sprawl and the integration of multiple point solutions.
We are getting to a point where the care and feeding of a network security infrastructure, with equipment upgrades, software updates and patching, and distributed multi-vendor management, is becoming a huge challenge for many businesses.

What is the way forward? We need to align network security with the new shape of the business using Network Security as a Service (NSaaS). When we think about putting network security in the Cloud, we start with the firewall. Firewalls are complex entities: they play multiple roles in networking, policy enforcement and security. For example, firewalls are commonly used to establish secure site-to-site tunnels between enterprise locations to form the wide area network (WAN). At the same time, they enforce access control policy between these locations. And they can also detect access to malicious URLs when users access the Internet.

What do we need to place a firewall in the Cloud?

Traffic tunneling: a firewall must be able to see the traffic it needs to control, so we need a practical way to get network traffic to the Cloud. This makes sense for traffic that crosses boundaries (such as inter-location and Internet-bound traffic) and can be done in multiple ways, including IPSEC and GRE tunnels, a single-function tunneling device, or client software. Regardless of method, the tunnel enforces no policies and has no business logic, and is therefore not subject to the capacity constraints of firewall appliances.

Wire-speed traffic inspection: next, we need to be able to inspect traffic at wire speed and enforce policies. Various innovations allow us to use software and commodity hardware to perform deep packet inspection while minimizing latency. The use of shared Cloud infrastructure enables us to quickly scale and accommodate increased load.
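The tunneling step can be illustrated with a minimal sketch (Python; the subnet and PoP address are hypothetical). The tunnel endpoint holds no policy at all: it only decides whether a destination is local or boundary-crossing, and forwards the latter to the Cloud PoP where inspection happens:

```python
import ipaddress

LAN = ipaddress.ip_network("10.1.0.0/16")  # hypothetical local subnet
POP = "198.51.100.7"                       # hypothetical cloud PoP address

def next_hop(dst_ip: str) -> str:
    """Tunnel-device logic: no policy, no business logic.
    Local traffic stays on the LAN; everything that crosses a
    boundary (inter-location or Internet-bound) is sent through
    the tunnel to the cloud PoP for inspection there."""
    if ipaddress.ip_address(dst_ip) in LAN:
        return "local"
    return f"tunnel:{POP}"
```

Because the device makes no policy decisions, it stays simple and cheap; the heavy lifting happens in the Cloud.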
Once you get the traffic to the Cloud and can inspect it, the benefits of network security as a service are substantial:

No capacity constraints: we can scale the computing power needed to process traffic without being limited by appliance capacity restrictions and equipment upgrades, a problem that is especially acute with UTM devices. A Cloud-based firewall literally removes the need to size security appliances, a dreaded exercise for every IT manager.

No software maintenance and vulnerability patching: the solution provider is responsible for updating, patching and enhancing the network security software, which frees up the customer’s resources. For the solution provider, it is also easier to understand product usage, translate it into new features and seamlessly deploy them.

Easier management with one logical policy: today, we need to maintain a group of rules for each appliance. Anyone who has maintained a firewall knows that templating rules for consistency is always subject to deviations. With NSaaS we create one logical rule set that defines access control across enterprise resources. We can avoid contradictory rules, such as a rule that enables access from site A to site B in firewall A but blocks that access in firewall B.

New security capabilities, same platform: since we have visibility into all WAN and Internet traffic, we can rapidly roll out additional capabilities that were previously packaged into standalone products and required complex deployment projects. For example, we can inspect traffic for phishing attacks, inbound threats, anomalous activity by insiders, sensitive data leakage, command-and-control communications and more. All it takes is a deployment of new software capabilities into the NSaaS platform.

Better threat visibility and platform adaptability: NSaaS is multi-tenant by design. By inspecting traffic across multiple networks, it is possible to detect threats earlier and quickly adapt the Cloud service to protect all customers.
Users are no longer dependent on the resources available to upgrade appliance software for better security.

Network Security as a Service promises to transform the network security landscape in three key ways:

Reduce capital expense on security equipment and point solutions that can be folded into a single network security platform delivered as a Cloud service.

Reduce operational expense by offloading resource-intensive activities such as deployment, upgrades, maintenance and distributed management from the IT staff.

Improve security with an always up-to-date, hardened network security platform.

To learn more about Cato Networks’ flavor of NSaaS, the Cato Cloud, visit https://www.catonetworks.com/cato-cloud/
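The “one logical policy” idea can be made concrete with a minimal sketch (Python; the sites, rules and first-match semantics are all hypothetical). Because every enforcement point evaluates the same ordered rule list, a flow from site A to site B gets exactly one answer, so the contradictory-rule scenario cannot arise by construction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str     # source site, or "*" wildcard
    dst: str     # destination site, or "*"
    action: str  # "allow" or "block"

# One logical rule set shared by every enforcement point.
# First match wins, so any (src, dst) pair has exactly one answer
# no matter which firewall instance evaluates it.
POLICY = [
    Rule("site-A", "site-B", "allow"),
    Rule("*", "site-B", "block"),   # everyone else is blocked from B
    Rule("*", "*", "allow"),
]

def decide(src: str, dst: str) -> str:
    """Return the action of the first matching rule (default deny)."""
    for rule in POLICY:
        if rule.src in (src, "*") and rule.dst in (dst, "*"):
            return rule.action
    return "block"
```

With per-appliance rule sets, the same question could be answered differently by different boxes; with one logical policy, it cannot.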

SAP HANA Migration: Turning your WAN Inside Out

For decades, SAP ERP has been at the core of numerous enterprises across multiple verticals. SAP software runs manufacturing, logistics, sales, supply chain and other critical functions, which means availability, performance and scalability are all essential. Yet maintaining business-critical application infrastructure is not a simple task. To reduce the integration and maintenance effort of enterprise SAP instances, SAP created a pre-packaged offering: instead of deploying the full SAP stack (software, application servers, databases) on on-premise, customer-owned hardware, it is now possible to access a pre-packaged SAP instance in the SAP HANA Enterprise Cloud (HEC).

[Figure 1: Backhauling all locations to an on-premise global SAP instance in the datacenter]

While the benefits of HEC are obvious, migrating SAP from the data center into a Cloud-based hosted instance isn’t trivial. Take, for instance, a global enterprise that runs SAP in a major datacenter in Spain (Figure 1). It is backhauling traffic to that data center from manufacturing and distribution facilities throughout the world. A decision has been made to move from the on-premise SAP instance to an HEC instance hosted in Germany, a totally different geography. HEC offers two primary connectivity options: dual IPSEC tunnels into a private IP address, or placing the instance on the public Internet. In this case, the enterprise wanted to avoid exposing the instance to the Internet, so a firewall had to be used to establish the connectivity to HEC. The WAN design must now address two key challenges: first, integrate the HEC instance into the corporate WAN using the IPSEC tunnels; second, create optimal, low-latency routing from every location to HEC.

What is the best way to connect HEC to the WAN?
Backhaul all sites, connect only datacenter firewalls to HEC: to keep it simple, the legacy backhaul to the datacenter is kept, but another leg is added from the datacenter to HEC, adding latency. Depending on the distance, Internet routing, congestion and jitter between the datacenter and HEC, the user experience could be impacted.

[Figure 2: SAP HANA migration is incompatible with the legacy WAN and adds a leg to the backhaul]

A full site-to-site mesh using firewalls: ideally, traffic from each site would go directly to HEC instead of the datacenter. However, a firewall-based site-to-site mesh is not possible here, as there are only two IPSEC tunnels to connect to. An intermediate firewall has to be deployed in Germany and connected to HEC. However, the enterprise has no datacenter or footprint in Germany, and the new firewall becomes a mission-critical element for the enterprise, as it is now the global chokepoint for accessing HEC.

A redundant, full site-to-site mesh using a Cloud network: in this scenario, all sites and HEC connect to a Cloud-based network using IPSEC tunnels, and a full mesh is achieved in the Cloud. The backhaul and the single chokepoint are eliminated, as the Cloud network provides built-in resiliency and optimal, low-latency routing and secure connectivity between all sites and HEC. In addition, mobile users can connect to the Cloud network from anywhere and directly access HEC without the added latency of going through the datacenter firewalls.

[Figure 3: Cloud Network creates a global mesh, enables anywhere access to SAP HEC]

If you are an SAP customer considering migrating your business to SAP HANA Enterprise Cloud, drop us a note. We’d love to tell you more about Cato Networks and our solutions.
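The routing trade-off in the scenario above can be made concrete with a back-of-the-envelope sketch (Python; all latency figures are invented for illustration). Backhauling stacks the datacenter-to-HEC leg on top of an already long site-to-datacenter leg, while a Cloud network routes each site to HEC through its nearest PoP:

```python
# Hypothetical one-way latencies in milliseconds (illustrative only).
LATENCY_MS = {
    ("tokyo", "madrid-dc"): 250,  # backhaul: remote site to Spanish datacenter
    ("madrid-dc", "hec-de"): 35,  # extra leg: datacenter to HEC in Germany
    ("tokyo", "pop"): 15,         # direct: site to its nearest cloud PoP
    ("pop", "hec-de"): 130,       # optimized middle mile from PoP to HEC
}

def path_latency(*hops):
    """Sum the latency of each hop along a path."""
    return sum(LATENCY_MS[hop] for hop in hops)

# Option 1: keep the legacy backhaul and add a leg to HEC.
backhaul = path_latency(("tokyo", "madrid-dc"), ("madrid-dc", "hec-de"))
# Option 3: route through the cloud network's nearest PoP.
via_cloud = path_latency(("tokyo", "pop"), ("pop", "hec-de"))
```

With these illustrative numbers the backhauled path pays for both legs, while the Cloud-mesh path depends only on the site-to-PoP and PoP-to-HEC segments.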

Network World names Cato Networks to its exclusive list of “hot security startups to watch”

“Kramer’s track record and the growing popularity of cloud-based security services gives Cato a seat in a hot market. The company serves up traditional security platforms – next-generation firewalling, URL filtering, application control, and VPN access – in its cloud. Its willingness to license its technology to other service providers opens up a potentially large and steady revenue stream.” – Tim Greene, Network World

It’s an exciting time at Cato Networks. We came out of stealth mode just a few months ago, introduced our channel partner program and are transforming the way the market approaches network security. Being named to Network World’s list of “hot security startups to watch” provides a big boost of confidence at this critical time for the company, and shows that the market is getting ready to replace perimeter firewalls with network security as a service (NSaaS).

Cato has a simple vision: a network with built-in security, delivered as a cloud service at an affordable price. The sun is setting on network security appliances: they simply carry too much cost and complexity for most enterprises to keep up with. You’ll soon have a better way. We hope that, like Network World and Tim Greene, you will keep an eye on us.

See Network World’s full list of “hot security startups to watch” here. Interested in trying the service? Click here.

The Convergence Of WAN, CDN And Cloud Security

In a recent note, industry analyst and blogger Ernie Regalado from Bizety reviews key trends in the convergence of CDN, WAN and Cloud security. The next generation WAN will integrate these domains into a unified architecture. By putting the WAN on a Cloud-based CDN infrastructure, it is possible to control Internet routing and reduce latency across locations (see our post, “This is Why the Internet is Broken”). And by embedding security directly into the network fabric, we can reduce the appliance footprint across locations and leverage Cloud agility to rapidly adapt networking and security capabilities. Click here to read the article online, or click here to download the PDF.

MPLS, SD-WAN and Network Security

TechTarget recently published an interesting article on the security implications of deploying SD-WAN, built around two customer case studies. In both cases, the customers wanted to extend an MPLS-only WAN into a hybrid WAN combining MPLS and Internet connectivity. There are several interesting anecdotes from the financial services customers (Scott Smith and “D.V.”) and a system integrator, Tim Coats from Trace3, interviewed for the article, that I would like to highlight.

Is MPLS secure?

MPLS security rests on the fact that it is a private network versus the “public Internet”. The private nature of MPLS allowed organizations to forgo encrypting MPLS traffic, a big benefit in terms of encryption key management and required CPE (customer premise equipment) capabilities. As D.V. puts it: “although the public Internet always carries some risk, the reality is that MPLS is also a shared medium. The irony of an MPLS circuit is that the security is VLANs—that’s all it is. You have your traffic marked and put into a special VLAN, so it’s running over the same pipe as everyone else’s MPLS circuit”.

Does SD-WAN improve on MPLS security?

For the customers, SD-WAN needs to be as secure as MPLS to be a viable extension. The immediate concern is encrypting the Internet tunnel of the SD-WAN solution. This is a no-brainer: MPLS networks are often not encrypted, and SD-WAN requires organizations to think about encryption, something they may not have done before. However, neither SD-WAN nor MPLS is a security solution. “It’s not a physical layer of security. There’s no special inspection that a firewall might throw in, or an IDS or IPS. None of that is present in an SD-WAN solution, but none of that’s really present in an MPLS solution unless you choose to put it in.” Beyond its core objective of offloading traffic from expensive MPLS links, SD-WAN doesn’t typically include Internet access security.
This means that while SD-WAN solutions do slow the growth in MPLS spending by using the Internet for backhaul, they do nothing to enable direct Internet access at the branch without adding third-party security solutions.

Do SD-WAN solutions go far enough in solving customers’ WAN challenges?

SD-WAN solutions abstract the physical topology of the network using a set of encrypted overlay tunnels. SD-WAN management helps with encryption key distribution and management for remote locations; this can be a big advantage, as you don’t need to set up point-to-point encryption yourself. But does this address all WAN challenges? Tim Coats says he is concerned with the point-solution nature of SD-WAN. Coats would like to see SD-WAN vendors go one step further in simplifying how hybrid networks are secured by taking much of the manual labor and guesswork out of service chaining. And then there are the new, emerging WAN elements. “Everyone is trying to solve this one little piece, and no one’s looking at the whole picture. And the whole picture is I have users who are everywhere, and my services are distributed on different platforms. I need one place I can pull it all together,” he says.

Summary

SD-WAN is primarily a networking technology: it aims to address the spiraling cost of MPLS by weaving into the WAN a cheaper, Internet-based alternative. Is security just an afterthought in the world of SD-WAN? It shouldn’t be. “Oh, God, yes,” D.V. says. “Security is networking. I object to the whole idea that security is separate.” We couldn’t agree more. We view the integration of networking and security as a critical component of the future WAN. By security, we don’t mean just encrypting the transport layer, which is merely an enabling capability required to route traffic over the Internet. We see an opportunity to embed a full network security stack into the WAN, and extend it to Cloud infrastructure and the mobile workforce.
This approach can dramatically cut the capital and operational expense of networking and security, while delivering a powerful defense for the enterprise. Learn more about SD-WAN vs. MPLS and the current and emerging options for architecting a secure WAN by watching our recorded webinar: MPLS, SD-WAN, and Cloud Network: The Path to a Better, Secure, and More Affordable WAN.
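The dynamic traffic allocation at the heart of SD-WAN can be sketched in a few lines (Python; the link metrics and application classes are hypothetical). Latency-sensitive applications are steered to the link with the best measured quality, while everything else is offloaded to the cheaper Internet link:

```python
# Hypothetical measured link quality: latency (ms) and packet loss (%).
LINKS = {
    "mpls":     {"latency_ms": 20, "loss_pct": 0.1},
    "internet": {"latency_ms": 45, "loss_pct": 1.5},
}

# Applications that cannot tolerate high latency or loss.
LATENCY_SENSITIVE = {"voip", "video", "virtual-desktop"}

def pick_link(app: str) -> str:
    """Steer latency-sensitive apps to the best-quality link;
    offload everything else to the cheaper Internet link."""
    if app in LATENCY_SENSITIVE:
        return min(LINKS, key=lambda name: (LINKS[name]["loss_pct"],
                                            LINKS[name]["latency_ms"]))
    return "internet"
```

Note that nothing in this logic inspects the traffic for threats, which is exactly the gap the article describes: path selection is a networking decision, not a security control.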

Cloud Security Spotlight Report

We are excited to sponsor the 2016 Cloud Security Survey. Cloud adoption, with its opportunities and challenges, is one of the most interesting business and IT topics of this decade. Organizations are increasingly looking to leverage the Cloud’s scalability, elasticity and resource offload to streamline their operations and improve user experience. Yet, as the survey concludes, security concerns top the list of barriers to cloud adoption, followed by legal/regulatory compliance and data loss and leakage risks.

Organizations accustomed to running and controlling on-premise infrastructure feel they are expanding their attack surface. They are primarily concerned with unauthorized access through misuse of employee credentials and improper access controls. After all, if your data is in the cloud and all you need is an ID and password, a simple security snafu can lead to a data breach. Yet the appeal of the Cloud is driving organizations to adjust their security posture by training their teams, partnering with a managed security services provider, and deploying additional security software to protect data and applications in the cloud. Such capabilities include protecting data at rest and in transit, and securing access control with measures like two-factor authentication and restricting access to specific corporate networks.

Cato Networks sees the Cloud as an enabler of better security that is also cost-effective. Enterprises using our platform benefit from:

A reduced attack surface with a tightly secured Cloud-based network backbone

Expert security and threat research based on deep analysis of network activity

Tight access control to resources, both on-premise and in the Cloud, and

The ability to enforce corporate security policies across all users and data.

Click here to download the full report. Contact us if you want to learn more about Cato’s Cloud-based, enterprise-grade secure network.

Three Ways Network Complexity Fuels the IT Security Workforce Shortage

The workforce shortage in the IT security field is real and shows no immediate signs of improvement. Recent research by global IT and cybersecurity organization ISACA highlights just how big the problem is. Of the 461 cybersecurity managers and practitioners surveyed globally, 60% said that fewer than half of their candidates were qualified upon hiring. Additionally, 54% responded that it took three months or more to fill IT security posts, and one in 10 positions are never filled. The inability to fill these open positions with qualified personnel can leave an organization vulnerable to a range of internal and external security threats, such as phishing, denial of service, hacking, ransomware and online identity theft. But what is causing this apparent shortfall in qualified staff, and how can the issue be overcome? The lack of knowledgeable and experienced professionals who can handle IT security is being driven by three factors: too many point solutions, increased network architecture complexity, and too many conflicting priorities.

-1- Too many point solutions

The increasing sophistication and frequency of threats have led to an explosion in the number of point solutions installed on business systems. In turn, the knowledge and number of staff required to manage all of these solutions have grown to handle the workload. What began as a few supplemental security appliances along the network’s perimeter has become an appliance straightjacket, severely constraining an already burdened IT team.

-2- Increased network architecture complexity

As businesses look to grow or become more efficient by moving data and applications to the Cloud, their network architecture grows in complexity. The modern corporate IT infrastructure is often a mix of cloud and on-premise solutions, accessed through a range of platforms and devices, accommodating remote offices, mobile, and on-site workers.
This has led to an increase in the attack surface, which requires the deployment of new security capabilities and tactics. According to recent research from ESG, 46 percent of organizations say they have a “problematic shortage” of cybersecurity skills in 2016, with 33 percent citing “cloud security specialists” as their biggest deficiency.

-3- Too many conflicting priorities

This growth in network complexity leaves IT security teams juggling too many conflicting priorities. They spend more time running the infrastructure itself and meeting compliance mandates than thinking about the threat landscape, evolving attack vectors, and how to properly adapt to them.

Simplicity is the solution

There is no magic wand that will suddenly produce a wealth of qualified candidates who can deal with the rise in workload and the complexity of managing it. Instead, organizations should focus on simplifying how their network security is provided in order to improve the level of protection. But how? The very same forces responsible for the complexity of networks – cloud, virtualization, and software – can be leveraged in a new way to actually simplify an organization’s IT network. By re-establishing the enterprise network perimeter within the Cloud and securing it, it is possible to move away from the appliance-based, point-solution approach that is prevalent today. Organizations can reduce the workload on critical IT resources with a simpler topology, better visibility, and fewer policies and configurations to maintain. At the same time, the attack surface shrinks because there are fewer moving parts to manage. By connecting to a managed, Cloud-based network, an organization can significantly reduce its dependency on hardware and point solutions. Such a network is also easier for service providers to manage: assets are continually monitored, maintained and updated by fully qualified IT security experts based on the latest cyber intelligence.
By solving the complexity issue, an organization can let its staff focus on core strategic IT security initiatives, such as cybersecurity training for employees, and spend less time on network management and maintenance. The result is a reduced requirement for newly qualified staff to fill in the gaps – simple.

Is MPLS a must-have component in your enterprise network architecture?

MPLS cost reduction is the target of the emerging SD-WAN market, which is bustling with solutions looking to take the corporate wide area network to a whole new level. The core value proposition of SD-WAN is the use of a standard, low-cost Internet link to augment an expensive, managed, low-latency and guaranteed-capacity MPLS link. By offloading traffic from the MPLS link, costly capacity upgrades can be delayed. SD-WAN also promises to reduce the management complexity of this hybrid WAN, which naturally increases with the need to mix and match connection types and dynamically allocate application traffic.

MPLS: designed for the pre-Cloud era

MPLS links are often used within mid-to-large organizations to carry latency-sensitive traffic such as VoIP, video, and other critical enterprise applications. Carriers charge a premium, often a significant one, for MPLS links. Beyond SLA-backed availability, latency and capacity, MPLS provides coordinated prioritization of application traffic from the customer premise to the carrier and on to the ultimate destination. Yet MPLS, both as a service offering and as a protocol, is limited in many ways:

MPLS requires end-to-end physical control of the connection: to achieve its QoS objectives, dedicated infrastructure has to be deployed to all connected locations and coordinated through the carrier. This results in long provisioning cycles for new locations, or for existing locations that require capacity upgrades.

MPLS is a single-carrier solution: connecting global branch locations across carriers to achieve end-to-end MPLS is a challenging task.

MPLS isn’t encrypted by default: MPLS relies on the carrier’s private network for security because the data doesn’t flow over the public Internet. Generally speaking, no third party can be assumed to be 100% safe these days, so encryption should always be used for data in motion.
MPLS is designed to connect organizational locations to the carrier, not to the Internet: in the 1990s and early 2000s this made sense, but not any longer. Backhauling all Internet traffic through the carrier MPLS network makes little sense.

Is MPLS a required component of your WAN strategy?

Obviously, many organizations don’t use MPLS in their WANs at all. A very common alternative architecture is to use firewalls to establish Internet-based VPN tunnels between enterprise locations. This typically works in scenarios where MPLS is not available or not affordable, or where the vast majority of applications are not latency-sensitive. But if you are using MPLS today, can you switch? For many years, the answer has been “no”. However, we observe several key trends that are putting pressure on MPLS as a required connectivity solution.

Massive increase in Internet capacity and availability

Internet bandwidth availability and capacity have increased dramatically over the past decade as prices plummeted. This expansion occurred both at the last mile and the middle mile (long-haul transit). Last-mile Internet links can be packaged in an offering with SLA-backed commitments similar to MPLS (i.e., a symmetrical fiber connection), but without the same architectural restrictions and benefits, such as QoS. However, Internet links also offer a wider range of latency, availability and capacity options and price points. In many scenarios, the “best effort” bandwidth of asymmetrical Internet links provides a compelling, cost-effective option for WAN connectivity. In fact, it is the actual performance of these so-called “low quality” connections that is the basis for augmenting MPLS and moving away from using MPLS for all traffic. We all experience them in our homes, where consumer-grade Internet successfully serves massive amounts of latency-sensitive traffic, from Skype Internet telephony to Netflix video streaming.
The emergence of new global Internet backbones

A new type of carrier has emerged: the global intelligent backbone provider. Starting from a clean slate, these providers built a global private network using a relatively small number of Points of Presence (PoPs). The PoPs are connected via multiple redundant tier-1 Internet providers to deliver SLA-backed long-haul connectivity at an affordable price. These backbones eliminate the need to stitch together MPLS carriers to provide global WAN connectivity across regions.

The Cloud-based, software-defined WAN

The use of agile, flexible Cloud-based software to optimize WAN connectivity end-to-end creates new opportunities to rebuild the WAN on flexible Internet-based connectivity. Some of the new capabilities include:

Last-mile link aggregation: aggregating multiple inexpensive Internet links, including ADSL, cable and LTE connections, to maximize bandwidth and availability.

Internet and WAN traffic optimization: applying sophisticated algorithms to optimize traffic and reduce packet loss in both the last mile and the middle mile.

Efficient traffic routing across the middle mile: over a software-defined backbone that is not subject to Internet routing. As we described in the “This is Why the Internet is Broken” blog post, routing over the Internet has a limited sense of route quality (in terms of latency and packet loss) and is heavily influenced by commercial interests.

Integrating Cloud and mobility into the WAN: extending last-mile WAN connectivity to both Cloud resources and the mobile workforce expands the value of the WAN beyond the narrow scope covered by MPLS.

Integrating security into the network fabric: Internet access is forcing enterprises to extend security into the branch and eliminate backhaul. By integrating network security into the WAN, the complexity and cost of the branch office footprint are reduced.

Summary

MPLS has faithfully served businesses for the past 15 years.
It is a custom-built connectivity solution that was optimized for a time of scarce capacity and fragile connectivity, to address mission-critical applications. The rapid expansion in capacity, availability and quality of Internet-based connectivity, coupled with innovation in software, Cloud, and global routing, is establishing the Internet as a viable alternative to MPLS. If you want to learn more about how Cato Networks can help unleash the potential of your legacy WAN, whether MPLS-based or Internet-based, drop us a note.
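The "efficient traffic routing across the middle mile" capability boils down to routing on measured path quality rather than BGP policy. A minimal sketch (Python; the PoP names and latency figures are invented) picks the lowest-latency path across a backbone's PoPs with Dijkstra's algorithm:

```python
import heapq

# Hypothetical measured PoP-to-PoP latencies in milliseconds. A
# software-defined backbone can route on these measurements, unlike
# BGP, which is largely blind to path quality.
EDGES = {
    "nyc": {"lon": 70, "fra": 90},
    "lon": {"nyc": 70, "fra": 12, "sin": 160},
    "fra": {"nyc": 90, "lon": 12, "sin": 150},
    "sin": {"lon": 160, "fra": 150},
}

def best_path(src, dst):
    """Dijkstra over measured latency; returns (total_ms, hops)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in EDGES[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return float("inf"), []
```

Because the edge weights come from continuous measurement rather than commercial peering agreements, the chosen path reflects actual latency and can change as conditions do.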

Software-defined Infrastructure: The Convergence of Networking, Security and Cloud-based Software

“Software-defined” is one of the hottest buzzwords around, but what it means in practical terms is vague at best. The notion of “software-defined” touches on two key drivers of IT infrastructure innovation: speed and cost. Like any other service provider, IT needs to move at the speed of its customers (the business) and adapt to emerging requirements including Cloud access, mobile connectivity, data security and more. It also needs to cut the cost of services by reducing the cost of the infrastructure it owns and maintains. The reality is that hardware appliances with embedded software (the most common implementation of networking and security solutions) are too slow to evolve and too expensive to run. In the past, this was a necessary evil: networking equipment was purpose-built using custom hardware to keep up with the increase in traffic speeds. It was slow to evolve, but it was unavoidable.

Enter software-defined networking (SDN). Originally, the concept of SDN emerged as a way to unbundle a hardware networking device (like a router) into a software-based control plane and a hardware-based data plane. Under this model, the control plane provided the brain of the system, while the data plane moved the data along the path determined by the control plane. This architecture enabled the control plane to evolve quickly and independently of the hardware layer responsible for packet forwarding. SDN was also vendor-neutral (with the introduction of the OpenFlow standard), but key vendors like Cisco and VMware deviated from the standard (probably to maintain competitive customer lock-in for their solutions). While SDN is an important concept, it is moving slowly through the datacenter due to the complexity of the environment and the co-opetition between vendors that provide the virtual network functions (VNFs).
Where SDN has traction is within the discipline of SD-WAN. SD-WAN is a narrower implementation of SDN concepts. SD-WAN uses a software-based control plane to drive on-premise edge devices that dynamically allocate Wide Area Network (WAN) traffic between MPLS and Internet links. Virtual desktops and Voice over IP (VoIP) are two applications that are latency-sensitive and must use a low-latency link such as MPLS, while regular web browsing will work fine over an Internet link. SD-WAN is effective because it is “self-contained” (i.e. it does not require standards and cross-vendor cooperation) and addresses a narrow IT problem. SD-WAN is just a first step. We now have an opportunity to create something truly new and exciting: software-defined infrastructure - the integration of software-defined networking and software-defined network security. Let’s start with the network. Imagine a fully integrated control plane AND data plane, all in software - a full SDN. Is this even possible without custom hardware? Apparently, standard servers with optimized, yet standard, Intel hardware and a DPDK-enabled software stack can handle multi-gigabit network workloads. Moreover, it is also possible to develop totally new data plane protocols that take into account the way the Internet works in 2015 and not the way it was built in the 80s (e.g. BGP). Software makes custom hardware for routing obsolete - we can now implement and rapidly evolve new protocols, optimizations, and other enhancements without being subject to the painfully slow hardware development cycle. What if we could build an SDN security layer directly into the network? This layer would protect the network traffic as it flows through the SDN stack without being packaged into separate hardware appliances with specialized acceleration and encryption capabilities. The core networking and network security layers of the IT infrastructure have remained separate for more than 20 years. 
There seems to be a justification for this separation. Security needed to move faster due to changes in the threat landscape, while networking remained stable (some say, stagnant) and subject mostly to capacity-driven enhancements. Networking and security needed to be separate because they needed to evolve at a different pace. With software-defined networking and security, these layers can evolve rapidly, and in tandem. IT can achieve unprecedented speed in deploying new secure networking capabilities to address a wide range of business requirements. What about cost? By placing software-defined infrastructure in the Cloud, we can achieve a zero-Capex model that lets enterprises leverage a fully integrated networking and security solution. Instead of routers, MPLS links, WAN optimization solutions and network security appliances, enterprises can collapse a full set of capabilities into a fully integrated SDN and security stack in the Cloud. No need to buy, deploy, upgrade, maintain and manage individual point solutions across the entire business. Take a peek at the future: software-defined and Cloud-based networking and security infrastructure, available from Cato Networks.
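The control plane/data plane split at the heart of SDN can be illustrated with a minimal sketch. This is illustrative Python, not any vendor's or OpenFlow's actual API: the `ControlPlane` and `DataPlane` classes, route prefixes and next-hop names are all hypothetical.

```python
# Minimal sketch of the SDN idea: a software control plane computes
# forwarding decisions, and a separate data plane merely applies them.

class ControlPlane:
    """The 'brain': decides where traffic should go and programs the data plane."""
    def __init__(self):
        self.routes = {}  # destination prefix -> next hop

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[prefix] = next_hop

    def program(self, data_plane: "DataPlane") -> None:
        # Push the computed forwarding table down to the data plane.
        data_plane.flow_table = dict(self.routes)

class DataPlane:
    """The 'muscle': forwards packets using only the installed flow table."""
    def __init__(self):
        self.flow_table = {}

    def forward(self, dest_prefix: str) -> str:
        # No local intelligence: unknown destinations are simply dropped.
        return self.flow_table.get(dest_prefix, "drop")

ctrl = ControlPlane()
dp = DataPlane()
ctrl.add_route("10.0.0.0/8", "core-router-1")
ctrl.program(dp)
print(dp.forward("10.0.0.0/8"))  # core-router-1
```

The point of the separation is visible even in this toy: all policy lives in `ControlPlane`, so it can evolve (or move to the Cloud) without touching the packet-forwarding code.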

SD-WAN does Backhauling: Aren’t we trying to get rid of that Trombone?

SD-WAN does Backhauling: Aren’t we trying to get rid of that Trombone? We have written in the past about the trombone effect, or the implications of traffic backhauling on network security and the user experience. Backhauling is the networking team’s way of solving a security problem: providing secure Internet access for all locations. Backhauling moves the traffic to a datacenter where firewalls are deployed and secure Internet access is available. The obvious benefit is that there is no need to deploy a network security stack at each location - that would be the approach of the security team. Which approach is better? In a survey conducted by Spiceworks, respondents were asked what approach they take to secure Internet access at remote locations. The survey shows organizations are evenly split between backhauling and a local security stack. Historically, the performance and cost hit associated with backhauling Internet traffic was limited because the use of Internet-based applications was limited. Backhauling offered better cost/performance than distributed security appliances. Backhauling also allowed for a single-domain answer to secure direct Internet access in branch offices. The networking team solved this problem with its own resources, without requiring collaboration with the security team (which managed the data center firewalls that covered ALL Internet traffic for the organization). A recent survey published by Webtorials presents two key findings on WAN usage: 58% of respondents see public Cloud and Internet traffic increasing both Internet and MPLS usage, and 50% of respondents backhaul most of their Internet traffic over MPLS. The IT landscape is shifting. Backhauling is going to be severely challenged due to the massive increase in Cloud and Internet traffic. This increase will overload expensive WAN links (such as MPLS) and make the negative user experience impact, the so-called Trombone Effect, more pronounced. 
In the midst of this massive transition to the Cloud, which drives the need for secure Internet access, SD-WAN technology has entered the scene. SD-WAN uses an “edge optimization” approach to offload application traffic that doesn’t require low-latency MPLS links onto a “parallel” Internet link. Using the edge for network optimization cannot compensate for the unpredictable latency of routing traffic over an Internet connection (what we call the “middle mile”). As a result, customers always have to use MPLS links for their latency-sensitive applications. One of the main drivers of SD-WAN is backhauling optimization: offloading Internet traffic from the MPLS link to an Internet link (as long as the Internet link behaves well). SD-WAN, in that context, is optimizing a network security approach that is not compatible with the increase in Cloud and Internet traffic that drives the need for direct, secure Internet access - everywhere. Could there be other major benefits to deploying SD-WAN? The Webtorials survey ranked the top 3 potential benefits of the technology as: “Increase Flexibility”, “Simplify Operations” and “Deploy new functions more quickly”. The authors noted that these are all soft benefits, with “reduced opex” showing up as a fourth benefit (likely a long-term benefit, as MPLS cost will remain fixed for the foreseeable future). The risk for IT organizations is that they will double down on the wrong architecture. With the tradition of “silo decisions”, the goal of direct, secure Internet access everywhere could remain out of reach if networking teams continue to look for a networking way to solve a security problem. We should think about this problem from our stated goal - backwards. We want:
- Direct, secure Internet access
- With no backhauling
- And no local security stack
A new set of solutions is emerging to provide these capabilities. At a minimum, they can reduce backhauling, solve the trombone effect and secure Internet access. 
More broadly, they can offer an affordable alternative to MPLS links. What will your Next Generation WAN look like? Will it evolve to accommodate the needs of the future or optimize the dated design of the past?
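The Trombone Effect described above is, at bottom, simple latency arithmetic: traffic hairpins through the datacenter instead of going straight to the Internet. A back-of-the-envelope sketch, where all round-trip times are hypothetical numbers chosen only for illustration:

```python
# Back-of-the-envelope illustration of the trombone effect.
# All round-trip times (ms) below are hypothetical, for illustration only.

def backhaul_rtt(branch_to_dc_ms: float, dc_to_site_ms: float) -> float:
    # Hairpin path: branch -> datacenter (security stack) -> Internet site.
    return branch_to_dc_ms + dc_to_site_ms

def direct_rtt(branch_to_site_ms: float) -> float:
    # Direct breakout: branch -> Internet site.
    return branch_to_site_ms

trombone = backhaul_rtt(40.0, 30.0)  # 70 ms via the datacenter
direct = direct_rtt(35.0)            # 35 ms straight to the site
penalty = trombone - direct
print(f"Backhaul adds {penalty:.0f} ms per round trip")  # Backhaul adds 35 ms per round trip
```

The penalty compounds for chatty applications that make many round trips per transaction, which is why the effect grows more painful as Internet traffic volume increases.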

Cloud Services are Eating the World

Cloud Services are Eating the World The Cloud revolution is impacting the technology sector. You can clearly see it in the business results of companies like HP and IBM. For sure, legacy technology providers are embracing the Cloud. They are transforming their businesses from building and running on-premise infrastructures to delivering Cloud-based services. The harsh reality is that this is a destructive transformation. For every dollar that exits legacy environments, only a fraction comes back through Cloud services. This is the great promise of the Cloud – maximizing economies of scale, efficient resource utilization and smart sharing of scarce capabilities. It is just the latest phase of the destructive force that technology applies to all parts of our economy. Traditionally, technology vendors used “headcount and operational savings” as part of the justification for purchasing new technologies - a politically correct reference to needing fewer people, offices and the support systems around them. This force has now arrived in full force at the final frontier: the technology vendors themselves. Early indicators were abundant: Salesforce.com displaced Siebel Systems, reducing the need for costly and customized implementations, and Amazon AWS is increasingly displacing physical servers, reducing the need for processors, cabinets, cabling, power and cooling. Marc Andreessen argued in his 2011 Wall Street Journal article that “software is eating the world”. In my view, this observation is now obsolete. Today, Cloud services are eating the world. Cloud services encapsulate and commoditize the entire technology stack (software, hardware, platforms and professional services). This model is so impactful and irresistible that even capturing only a part of the value is a big win. This is how Cloud services now include Platforms (PaaS) (e.g. Google, Microsoft, Salesforce.com) and Infrastructure (IaaS) (e.g. Amazon AWS, Microsoft Azure and IBM Softlayer). 
Why is the Cloud services model so successful and so disruptive? Customers’ focus is increasingly shifting toward simplification of complex business and IT infrastructure, because complexity is both a technical and a business risk. We actually had a simpler world in the past: the vertically integrated IT world of the 80s (where one provider, like IBM, delivered a total solution for all IT needs). Things got a bit out of hand in the 90s, when the IT landscape shifted to a horizontal integration of best-of-breed components. The marketplace, where every component vendor (compute, storage, software, services) competed for business at every customer, spurred innovation and drove down component prices. Complexity was the less desirable side effect, because customers had to integrate and then run these heterogeneous environments. We are now seeing the pendulum swing again. Cloud services offer a vertically integrated solution to multiple business problems. Choice is reduced in the sense that customers can’t dictate the compute, storage or software the Cloud service will use, but complexity and cost are eliminated en masse. Ultimately, the proof is in the pudding, and if the business value is delivered in a consistent fashion with the right 3rd party validation for quality of service, the details don’t really matter. The era of the Cloud requires a new type of company that is agile and lean, like the Cloud itself. Very few companies have the courage or the will to cannibalize their legacy businesses and embrace a new reality where there is simply less money and fewer resources available to get things done. When you build a startup company for the Cloud era, you must design it for the Cloud economic model. You invest in R&D to build a great platform, in self-service/low-friction service delivery models and in customer success to keep your users happy. You do more with less because your customers are forced to do the same. 
Network security has yet to be extensively impacted by the Cloud. Security technology is considered sensitive by large enterprises, limiting the sharing of threat data. Regulations also place constraints around customer data handling in the Cloud. These forces may slow down the adoption of Cloud technologies but will ultimately give way to the immense value they offer to businesses. Security will uniquely benefit from vertical integration with distinct domains, such as networking. Such integration will provide unparalleled visibility into enterprise network activity and will enable deeper insight and early detection of advanced threats. We envision a world where network security is less of a burden on staff and budgets while the quality of service and the customer experience are dramatically improved. This is not a shot across the bow of network security incumbents. It is a recognition that the transformative power of the Cloud will ultimately reach every business in the world, and IT security vendors, like all other IT vendors, will have to make a choice – embrace it or wither.

Whistling in the Dark: how secure is your midsize enterprise business?

Whistling in the Dark: how secure is your midsize enterprise business? It is high noon. The one (and only) security analyst for a midsize business needs to prepare for a PCI compliance audit. Meanwhile, a phishing email baits an accounts payable clerk at a regional office into accessing a malicious site, and his workstation is infected with a financial Trojan. At closing that day, $500,000 from the corporate bank account has gone missing - on its way to an offshore account. It turns out the office UTM appliance was last updated several months ago due to a configuration error. Alerts were issued, but there was simply no time and no resources to notice them and take action. This problem is the tip of the iceberg of the midmarket enterprise security challenge. It is a conflict of business needs, resources, budget and skills. Solving this conflict requires an examination of the delivery model of security, and the productivity it enables. We all want certainty. And certainty (real or perceived) is achieved by having whatever it is that drives the desired outcome under our direct control - i.e. “on premise”. In real life, though, we rely on many things that are outside our control. Power, water and Internet connectivity are all examples of critical capabilities we outsource. If we run out of power, we use generators or wait until the problem is fixed, depending on the level of business continuity our organization requires. Security is considered a critical business capability. Traditionally, an “on premise solution” was the way to go, along with the “on premise resources” that were necessary to maintain it. In an era of increased competition and razor-thin margins, IT is under pressure to streamline operations. And as streamlining goes, expecting an overloaded resource to pay attention to the most mundane operational details is unrealistic. It may feel safe for a while, with no evidence to the contrary, until you run out of luck. 
The “ownership” model for security must change – and it will change. The largest enterprises, the ones that can still throw resources at a sprawling on-premise infrastructure, will be the last ones to adopt the new model. The smaller organizations will have to make the leap sooner. The new model should be based on shared infrastructure and resources, in the exact same way utility companies built their shared power infrastructure, generation, distribution and control, so we get very high service levels at an affordable cost. Here are some of the key elements of a new model that can address some of the shortcomings above.
Shared Security Infrastructure
Sharing security infrastructure across organizations (for example, through elastic Cloud services) ensures security capabilities and configuration are always up to date. This is a simple concept that eliminates the need for each organization to maintain every component of the distributed network security architecture that is prevalent today. The business assets we protect are naturally distributed - this is how we run the business - but the supporting infrastructure should support the business structure, not impair it.
Shared Intelligence
We are at an inherent disadvantage against the hackers. They have the darknet and underground forums where they share tools, tactics and resources. Organizations are very restrictive in sharing security data and often don’t have the resources or the time to facilitate it. Restrictive sharing makes little sense these days. There may have been a time when proprietary capability in IT security represented a competitive advantage. However, given the current nature of the threat landscape, no one is safe. What we truly need is pooling of our resources and data, and getting the right set of skills applied to analyze them. In essence, our shared security infrastructure should adapt its defenses using the accumulated insight from all the traffic and security events across all organizations. 
This should create a formidable barrier against our adversaries that will drastically raise the cost of attack.
Shared Skills
Finally, it all comes down to skilled personnel. There is a chronic shortage of experienced staff members, which is hitting the midmarket organization exceptionally hard. It is difficult to compete against larger enterprises in scope of work and compensation, even without considering the scarcity of relevant skills. A possible way to address the skills gap is sharing them. This is not a new model; Managed Security Service Providers (MSSPs) have been offering it for a while. But even MSSPs need the right platform to scale effectively, so a managed service can be delivered in a cost-efficient manner instead of just migrating the load from one place to another. Shared infrastructure, intelligence and skills. These are the three pillars that will make enterprise-grade security possible and affordable for the midmarket enterprise.

Complexity is the Real Vulnerability

Complexity is the Real Vulnerability Security is a unique IT discipline. It overlays and supports all other disciplines: compute, networks, storage, apps, data. As IT evolves so does IT security, often with a considerable lag. The introduction of personal computing gave rise to endpoint protection suites and AV capabilities. Networks drove the introduction of the firewall. Applications spawned multiple security disciplines, from two-factor authentication to secure app development, vulnerability scanning and web application firewalls. Databases introduced encryption and activity monitoring - and to manage all these capabilities we now have Security Information and Event Management (SIEM) platforms. Security thought leadership attempts to provide best practices for IT security, including defense in depth, secure development life cycle, penetration testing, separation of duties and more. These fail to address security’s need to move at business speed. When a new capability appears, with a big promise of huge returns through cost savings, employee productivity and business velocity – security teams are expected to respond, quickly. Yet existing technologies, built for past challenges, are often inflexible and unable to adapt. But, unlike other disciplines, IT security technologies tend to stay in place while layer upon layer of new defenses are built over antiquated ones to address the new requirements. This “hodgepodge” situation is not only a burden on IT staff but also creates real exposure for the business. A great example of this problem is the dissolving perimeter. Over the past few years, IT security has been helplessly watching the enterprise network perimeter, an essential pillar of network security, being torn to shreds. Branch offices, users, applications and data that were once contained within a well-defined network perimeter are now spread across the globe and in the Cloud, requiring any-to-any access – anytime and anywhere. 
How did the security industry respond? Point solutions popped up, aiming to patch and stretch the network perimeter so new data access paths could be secured. Cloud-based single sign-on extended traditional enterprise single sign-on to public Cloud applications. Mobile device management extended PC-centric endpoint management systems. So, past attempts to create and enforce universal policies fell apart as IT security was yet again looking at multiple policies supporting multiple products. The increased complexity of network security is hitting us at a particularly bad period, when attack velocity and sophistication are at an all-time high. This has two key implications. First, IT security teams are juggling too many balls: attempting to manage what they own while responding to new and emerging threats. This means they are spending more time running the infrastructure itself than thinking about the threat landscape and how to adapt to it. Second, complexity expands our attack surface. Hackers target unpatched software vulnerabilities, outdated defenses and product misconfigurations to breach enterprise networks. The more tools we deploy to counter this tidal wave of threats, the bigger the opportunity to identify weak links and slip through the cracks. At the end of the day, our tools are only as effective as the people who run them and set the security policies – and these dedicated people are simply asked to do too much with too few resources. How can we tighten our defenses and make our business a hard target? We have to make our network security simpler and more agile. Simplifying network security is a real challenge because our assets are spread all over the place. Network security vendors are constantly looking for ways to improve agility. 
Yet, keeping appliances everywhere, in both virtual and physical form, still requires a concerted effort to make sure software is up to date, patches are applied and the right configuration is in place – for every location and every solution. With all these challenges, simplicity is a strategic goal for all enterprises. We should strive for a reduced workload on our critical IT resources, fewer policies and configurations to maintain to reduce the attack surface, faster automated adaptability to seamlessly keep up with new threats – and more cycles to focus on business-specific security issues. Cato Networks believes we can make our networks simpler, more agile and better secured. It will take a bold move - rethinking network security from the ground up. We should look for answers within the same forces that gave rise to the complexity that now dominates our networks: Cloud, Virtualization and Software. But instead of using them to replicate what we already know into a different form factor, we have to break the mold. If we can realign our network security with the new shape of our business, now powered by boundless Cloud and Mobile technologies, we have the opportunity of making network security simple - again. Cato Networks is ushering network security into a new era. If you want to learn more about our beta program, drop us a note.

Lipstick on a Pig?: Hybrid WAN, SD-WAN and the Death of MPLS

Lipstick on a Pig?: Hybrid WAN, SD-WAN and the Death of MPLS Networking is an enterprise IT discipline where being conservative is often the way to go. After all, without the network, today’s technology-powered businesses are dead in the water. The network doesn’t have to be totally down, though, to disrupt the business. Slow or unpredictable application response time can cripple point of sale, customer service, manufacturing – essentially every part of the business. Being conservative, however, can cost the business a lot of money that could be better spent elsewhere. MPLS is a 20-year-old enterprise networking technology. It rose as a response to the business need for reliable and predictable network performance across the wide area network (WAN). For example, remote office employees needed access to latency-sensitive enterprise applications like ERP, CRM and Virtual Desktops that were hosted in the company’s data center. The alternative to MPLS, if you could think of it this way, was to jump into the Internet abyss with Internet-based connections (IPVPN). Unmanaged Internet-based global routing, which I will refer to as the “middle mile”, is a convoluted mess of communication service providers, links and routers. It provides no guarantee that your packet will arrive on time, if at all. Guaranteed service levels come at a price, with MPLS spend representing a big part of the IT networking budget. But even before the cost of using carrier-provided MPLS, organizations have to procure and deploy it. To establish MPLS paths between sites and regions, multiple carriers may need to be selected, and contracts and service level agreements negotiated to optimize cost and performance. Then, network equipment has to be installed and configured at every location. In some cases, physical cabling has to be deployed too. 
As we discussed, Cloud apps and mobile access have disrupted the enterprise network and increased the pressure on MPLS links – now carrying a large volume of Internet traffic. In addition, distributed IoT environments will generate large volumes of data that need to be centralized and analyzed. Internet applications, however, are less sensitive to latency. So an unmanaged Internet connection may be sufficient, with MPLS being expensive overkill. Using the Internet for the enterprise network is really tempting. Business Internet connectivity has improved dramatically over the past decade while costs have plummeted. Enterprises can access massive amounts of bandwidth for a fraction of the cost of MPLS. Yet, they still can’t get service level guarantees for the “middle mile”. Essentially, unmanaged Internet routing has remained the convoluted mess it once was. Enter the Hybrid WAN. The Hybrid WAN concept suggests that enterprises should split their network traffic in each location into Internet-bound and enterprise-bound streams. Internet traffic should be sent to the Internet near the point of origination, while legacy on-premise application traffic should still be carried over MPLS links to ensure service levels. When done right, such an architecture can reduce the load on MPLS links by using them only for “relevant” traffic. The Internet/MPLS split became the target of companies that belong to a new category: Software-Defined WAN (SD-WAN). SD-WAN players attempt to maximize the use of Internet-based connections (IPVPN) from the remote office to the datacenter. They do it by measuring link performance and deciding if the IPVPN link works “fast enough” to support a given application, or if the alternative MPLS link should be used. For some applications, IPVPN links will never be used. The SD-WAN approach, in our view, is shortsighted. It assumes a split is essential because the “middle mile” challenge is unresolved. 
We claim that there is little reason for most midmarket enterprises to use MPLS and that Internet-based connectivity is the way to go. How can that be? The world of networking and security is transforming. Price commoditization, abundant global capacity, advances in computing platforms, Cloud software and network architectures - together these open up amazing new opportunities. Using cheap last-mile capacity and an intelligent Internet-based global network backbone, it is now possible to crack the “middle mile” challenge and control the performance of the entire route. If you want to learn more about SD-WAN vs. MPLS, and how we can help you achieve a great connectivity experience at an affordable price, while keeping your network, remote offices, mobile users and Cloud applications securely connected – drop us a note or join our Beta.
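The SD-WAN edge decision described in this post - measure the Internet link and use it when it performs "fast enough" for the application, otherwise fall back to MPLS - can be sketched as follows. The thresholds, application classes and link names are hypothetical, chosen for illustration; they are not any vendor's actual policy.

```python
# Minimal sketch of per-application SD-WAN link selection.
# All budgets and measurements below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class LinkStats:
    latency_ms: float
    loss_pct: float

# Per-application tolerance: latency-sensitive apps (VoIP, virtual desktops)
# get tight budgets; bulk web traffic gets loose ones.
APP_BUDGETS = {
    "voip": LinkStats(latency_ms=50.0, loss_pct=0.5),
    "web":  LinkStats(latency_ms=300.0, loss_pct=2.0),
}

def pick_link(app: str, internet: LinkStats) -> str:
    """Use the Internet link only if it currently meets the app's budget."""
    budget = APP_BUDGETS[app]
    ok = (internet.latency_ms <= budget.latency_ms
          and internet.loss_pct <= budget.loss_pct)
    return "internet" if ok else "mpls"  # fall back to the managed link

measured = LinkStats(latency_ms=120.0, loss_pct=0.8)  # current Internet probe
print(pick_link("web", measured))   # internet
print(pick_link("voip", measured))  # mpls
```

Note what the sketch makes plain: because the "middle mile" is unpredictable, latency-sensitive traffic keeps falling back to MPLS, which is exactly why this post argues the split optimizes the wrong architecture.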

Better Keep It Open or Closed?

Better Keep It Open or Closed? Here is a nice debate we can have until the cows come home. The battle for security supremacy has been raging for years between “open” and “closed” approaches to software development. Can we name a winner? First, let’s define the terminology. A software-based ecosystem has 3 main characteristics: how the software is developed, maintained and distributed. An ecosystem’s openness is determined by the way it treats these aspects. Windows, Android and iOS are good examples of different approaches. Microsoft Windows source code is private and so is bug fixing. It is considered a closed system by these two tests. Yet, 3rd party software development and distribution is wide open. Anyone can develop a Windows application, put it on any web site, and have users download it. If that sounds to you like a malware delivery system – you are probably right. Google’s Android has a different mix of attributes. Its source code is available, but the code is centrally maintained by Google. Android applications, like Windows applications, can be downloaded from numerous Internet-based marketplaces which are not centrally controlled. Apple iOS is the most closed system of all. Apple exerts full control over the entire software-based ecosystem for iOS. iOS source code is closed, it is maintained by Apple, and each 3rd party application must be vetted by Apple and delivered from the one and only Apple App Store. The exception to that rule is jailbroken phones (phones that have had their security controls removed by users and are exposed to a wide variety of threats). Historically, open source advocates claimed that open source approaches to software enable crowdsourcing of bug-fixing power, making the code more scrutinized on one hand and faster to fix on the other. The claim was that even a large company like Microsoft can run out of developers to secure its code and fix bugs. 
In the Windows/Linux battle, this claim was never tested – most attacks are on endpoints, and Linux market share on endpoints has been negligible. Recently, a more plausible comparison has become possible: Android vs. iOS. It is a matter of fact that the vast majority of mobile malware targets the Android system. How does the openness of Android impact its security? Let’s walk through 3 relevant aspects: APIs and Permissions, Application Distribution, and Attack Surface. First, Application Programming Interfaces (APIs) and Permissions. Each API is designed to provide a service to an application running on the platform. At the same time, it also offers a possible point of attack. Hackers and malware often tap into system APIs which were intended “to do good”. The fewer APIs you expose, the less vulnerable you are. Another aspect is permissions. If a user is asked to grant an application certain permissions, they may not consider why the application needs those permissions. Excessive permissions are a key source of risk for users. Android provides an extensive set of APIs and permissions that applications can leverage. Apple is notorious for restricting the number of APIs it offers. It also cleverly asks the user to authorize access to sensitive data when access occurs, and not when the app is installed. This reduces security risk. Second, Application Distribution. Google allows third-party marketplaces to distribute apps, while Apple allows distribution only through its App Store. Simply put, the more scrutiny is applied to apps before distribution, the fewer attacks users are exposed to. The open marketplace approach means that Google can’t enforce its standards of security on the Android apps ecosystem, resulting in an explosion of malware targeting Android. The App Store’s success in stopping malware is well publicized. Most iOS attacks resort to installing developer editions of apps that bypass the App Store. 
However, this method raises quite a few red flags and often requires user consent to actually work. Even when attacks do go through, the mobile OS application sandbox architecture limits the damage.

Third, attack surface. Windows was designed to maximize interoperability, so any application can access any other application, given the right permissions. This means OS and application processes present a substantial attack surface. Mobile OSes addressed that risk with application sandboxing: each application is fundamentally restricted from accessing any other application on the device or the underlying OS. This makes the attack surface much narrower and restricts the ability of malware to take over the entire device, or even to get out of the scope of the compromised application’s runtime environment. Generally speaking, both Android and iOS are doing a good job here. But the availability of Android source code may expand the attack surface, because it allows hackers to more easily examine the way the operating system works and craft attacks against vulnerabilities they are able to identify.

Open systems, and especially Linux-based servers, have transformed the enterprise. But their impact was mostly felt in reducing the cost of doing business, not in achieving better security. While no system is bulletproof, closed systems that rely on tightly controlled code development, maintenance, and application distribution models seem less vulnerable to attacks.

In the enterprise, distribution exposure isn’t just about app stores. It is about the supply chain of hardware and software, partners and resellers, and ultimately customer sites. There are ample opportunities to capture end products, reverse engineer them, and design an attack against identified vulnerabilities. Cloud-based platforms are closed in nature.
They can reduce the exposure of enterprises by reducing the overall points of attack: there are fewer APIs, permissions, and supply chain touch points, and less overall access to the platform. A Cloud provider can also build a formidable defense-in-depth architecture to secure its environment in a way that a single enterprise will be hard pressed to match, because the cost and effort are shared across many organizations. Contrary to the view that the move to the Cloud increases risks for enterprise data, the closed-system approach of Cloud providers is more likely to boost security. And the huge business implications of a breach for the provider will make the investment in security a matter of utmost urgency. It may be a coincidence, but all major breaches of the last few years occurred in enterprise environments. Despite holding business-critical data, Cloud providers have done a good job protecting it. They may just prove to be the solution to the bulk of enterprises’ security woes.

Where Do I Plug It? The Dissolving Perimeter and the Insertion Dilemma

Not every topic in networking and security is “sexy”. We all want to discuss the business value of our solutions, but we are often less keen to discuss deployment technicalities (this is mostly true for marketing folks like me). However, because the enterprise IT environment is undergoing a major transformation driven by Cloud and mobility,...
Where Do I Plug It? The Dissolving Perimeter and the Insertion Dilemma

Not every topic in networking and security is “sexy”. We all want to discuss the business value of our solutions, but we are often less keen to discuss deployment technicalities (this is mostly true for marketing folks like me). However, because the enterprise IT environment is undergoing a major transformation driven by Cloud and mobility, some of our core assumptions about enterprise architecture and best practices should be reevaluated.

Historically, the enterprise network was physically bound to specific locations like the corporate headquarters, a branch office, or the datacenter. When deploying a security solution, it was naturally placed at the entry or exit point of the network. This was the way firewalls, intrusion prevention systems, email security gateways, data loss prevention, and other security systems were implemented.

There are two big forces pressuring this approach to network security: the use of public Cloud applications and the mobile workforce. The common theme here is that organizations now have an increasingly large number of assets that are no longer bound to a specific enterprise location – the so-called “dissolving perimeter” challenge.

How did enterprises deal with this issue? An early approach was to use VPN connections into the enterprise. A user would authenticate to a VPN server (often part of the firewall) and then be allowed to access an internal resource like a file share or a mail server. Effectively, by bringing the users into the corporate network, they were subject to the security controls (such as email security or DLP). But the users could still access the Internet-at-large without going through the network security stack. As a result, they were more likely to be infected by malware, because they were only protected by the endpoint anti-virus. As challenging as this problem had been, it has gotten bigger.
Many enterprises now use Cloud applications to store sensitive data. Unlike with internal applications, enterprises had no way to control access to that data (beyond the application’s internal controls). On top of the inherent challenge of securing the data, mobile users and BYOD initiatives allow direct access to Cloud apps and enterprise data with limited ability to govern that access. As migration to the Cloud accelerated and the importance of VPN started to fade, a new product category was born: the Cloud Access Security Broker (CASB). CASB had to address the complexity of controlling access to enterprise data from any user, location, or device, both managed and unmanaged.

Suddenly, deployment became an issue. How do you control ALL access to Cloud-based enterprise data? At this juncture, there are multiple deployment and integration scenarios for CASB, each with its own pros and cons. A forward proxy requires endpoint configuration to intercept and apply security to Cloud access requests. A reverse proxy gets access requests redirected from the Cloud application, so it can apply security even for unmanaged devices. And Cloud application APIs can be used to implement some, but not all, of the required security functions, depending on the specific Cloud application. No wonder Gartner publishes papers on deployment considerations for CASB and advises enterprises that they may need to use all three methods, or pragmatically settle on the approach that best meets their security requirements.

The shift to an agile enterprise, driven by Cloud and mobility, is pressuring our decades-old network architecture. Vendors and customers alike are fighting for a line of sight: the right place to “insert” security controls to achieve maximum impact with minimum disruption. The fundamental requirement is: ensure security controls can enforce enterprise security policy on every user, device, or location, and whatever application or data they need to access.
Without it, we will never be able to reap the productivity and cost-savings gains this new shift is creating. What organizations have done to date is patch their networks with ad-hoc solutions, band-aids, so they can stretch to accommodate these new requirements. The cracks are showing. The time to rethink the network architecture and its security controls is near. If you want to help us redefine the new secure network – join our team. If you are looking to simplify network and security management – join our beta.
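The three CASB insertion modes described above (forward proxy, reverse proxy, and API) differ in where traffic is intercepted, but each ultimately evaluates the same kind of access policy. Here is a minimal, hypothetical sketch of such a policy check; the app domains, actions, and rules are invented for illustration and do not represent any real CASB product:

```python
# Hypothetical cloud-app access policy, keyed by application domain.
# For unmanaged devices, only the listed actions are permitted.
CLOUD_APP_POLICY = {
    "files.example-saas.com": {"view"},           # e.g., no downloads to unmanaged devices
    "crm.example-saas.com": {"view", "edit"},
}

def evaluate_access(app_domain: str, action: str, device_managed: bool) -> bool:
    """Return True if the broker should allow the request through."""
    if device_managed:
        return True                 # managed devices get full access in this sketch
    allowed = CLOUD_APP_POLICY.get(app_domain)
    if allowed is None:
        return False                # unknown cloud app: block by default
    return action in allowed

# A forward proxy would call this on egress traffic from configured endpoints;
# a reverse proxy would call it on requests redirected by the cloud app itself;
# an API-based deployment would apply similar logic retroactively via app APIs.
```

The point of the sketch is that the hard part is not the policy logic but getting into the traffic path in the first place, which is exactly the insertion dilemma.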

The Horrors of Ransomware and the Mid-market Enterprise

Mid-market enterprises do not generate big headlines as far as data breaches go. After all, why would a nation state or an organized cybercrime group take the time and effort to target an organization with a limited customer base and few commercially-valuable assets? They can’t really use them for cyber warfare or monetize in the...
The Horrors of Ransomware and the Mid-market Enterprise

Mid-market enterprises do not generate big headlines as far as data breaches go. After all, why would a nation state or an organized cybercrime group take the time and effort to target an organization with a limited customer base and few commercially valuable assets? They can't really use them for cyber warfare or monetize them on the black market.

At dinner the other day, I sat next to a friend who owns a law firm. He told me his firm was the victim of a ransomware attack. A paralegal opened a phishing email attachment, and her anti-virus-protected PC's disk was maliciously encrypted by the CryptoWall malware. The firm had limited backups, and the advice he got was not to pay the ransom. Apparently, the private/public key system used by the malware had "bugs", which meant he could end up with useless files even if he paid. He gave up the data and made a decision to move to Office 365 in the Cloud.

Mid-market enterprises may think they can hide in the crowd and that their anonymity will protect them, unlike the likes of Target, Anthem, or Sony. They are wrong. Unlike an APT, which is a custom attack executed by experts with specific objectives, ransomware is a generic, massively scalable attack. It shares a very similar concept with the Zeus financial trojan: it generically infects as many users as possible through malicious email messages or compromised websites, and then runs highly automated, generic crime logic that encrypts their data with no "manual intervention" required. Mid-market enterprises with limited resources and weak anti-virus protection are a particularly good target: they have just enough assets worth paying a ransom for.

There are multiple opportunities to stop ransomware: detect malicious attachments before they are opened, warn users about malicious websites before they navigate to them, or detect malicious files in a sandbox before they are downloaded.
And if you do get infected, you have another shot. The ransomware has to connect to its C2 (Command and Control) server to get the encryption key pair generated and the public key delivered to the machine. If you can detect that outbound request and stop it, the encryption may never happen.

What is common to all of these capabilities? Many of them are considered "large enterprise" capabilities. They are too difficult for a mid-market enterprise to acquire, install, configure, and maintain. The team at Cato Networks understands these gaps, and we are working to address them with our Secure Network as a Service solution. We are still in stealth mode, but if you run network security for a mid-market enterprise and want to learn more about our upcoming beta, drop us a note, or read our related blog post 'How to Stop NotPetya.'
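The C2-callback interception mentioned above can be sketched in a few lines. This is a toy illustration, not production detection logic: the "known C2" domains are invented placeholders standing in for a real threat-intelligence feed, and the entropy heuristic is just one common way to flag algorithmically generated domain names:

```python
import math
from collections import Counter

# Placeholder for a real, continuously updated threat-intelligence feed.
KNOWN_C2_DOMAINS = {"evil-updates.example", "cryptokeys.example"}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random-looking labels score higher."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def should_block(domain: str) -> bool:
    """Block outbound lookups to known C2 hosts or suspiciously random names."""
    if domain in KNOWN_C2_DOMAINS:
        return True
    label = domain.split(".")[0]
    # Domain-generation algorithms tend to produce long, high-entropy labels.
    return len(label) > 12 and shannon_entropy(label) > 3.5
```

If the outbound key-exchange request never reaches the C2 server, the public key is never delivered, and in many ransomware families the encryption simply cannot start.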

User Experience as a Service or a Tale of Three Giants

The late 70s were the glory days of Apple. The Apple II had set the standard for a new personal computing era. Not for long. With the emergence of Microsoft’s MS-DOS 1.0 and the IBM PC, two diametrically opposed product design and go-to-market strategies collided. Microsoft’s strategy was to build “The Alliance”. It had partnered...
User Experience as a Service or a Tale of Three Giants

The late 70s were the glory days of Apple. The Apple II had set the standard for a new personal computing era. Not for long. With the emergence of Microsoft’s MS-DOS 1.0 and the IBM PC, two diametrically opposed product design and go-to-market strategies collided. Microsoft’s strategy was to build “The Alliance”: it partnered with Intel and low-cost Asian “PC clone” makers to offer a mass-market personal computing platform. Apple’s approach was “All-in-One”: a single, vertically integrated solution that included both the hardware and the software.

The result of this strategic battle is now part of the history of Silicon Valley. The Alliance quickly captured a massive market share by offering a “good enough” personal computing product. The market share grab and the extreme “openness” of the platform captivated the world and brought computing into many schools and homes. The Microsoft/Intel/PC-clone combination became the go-to platform for application developers and users. Apple’s first-mover advantage didn’t help, and it became a niche company in the personal computing market, serving the education and creative design verticals.

Fast forward to 2010, and the same battle is being fought again. The Alliance now includes Google’s Android operating system and dozens of Asian mobile handset makers. In a repeat of the 80s, it too has captured the vast majority of the market. And Apple, as if it had learned nothing, is sticking to its All-in-One, vertically integrated product strategy. Will history repeat itself, with The Alliance defeating the All-in-One by commoditizing the smartphone the way it commoditized the personal computer?

As I write this blog, the iPhone is rewriting history. Eight years after its launch, the iPhone is resisting the inevitable commoditization of any new technology. Apple commands the vast majority of profit share, and developer mind share, in the smartphone market.
The loyalty of its customers, and their willingness to pay a premium for the iPhone despite cheaper alternatives, has defied logic and common wisdom. How could that be? I believe the answer is that we now live in the “age of user experience”. As we described in a previous blog, the Cloud plays a key role in the age of user experience by encapsulating “products” so customers can experience the “value” rather than the products that go into creating it.

Apple is a User-Experience-as-a-Service company that ties together hardware (the iPhone, iPad, Apple Watch), software (iOS), and services (iTunes, App Store, Apple Pay) into a unified and optimized user experience. The value of that experience has remained constant over the years, despite the rapid commoditization of the different components that go into making it.

Google and its partners, however, made a conscious decision to compromise the user experience, driven by the need to support a large matrix of platforms without optimizing for any specific one. This was the 80s strategy: a rapid market share grab with a good-enough product. While this approach did lead to market share gains, Apple retained its profit share lead, as many customers refused to accept a “good enough” experience and embraced a “premium” one – even at a higher cost. Google’s attempt to respond to this preference was to build a path to a vertically integrated solution with the purchase of Motorola Mobility. This move ultimately failed because Google had become a prisoner of its own ecosystem, risking The Alliance with a decision to directly compete with its partners.

The demand for a superior user experience has impacted all areas of technology. The so-called “Consumerization of IT” simply suggests that user experience matters everywhere – in both our personal and work lives.
We are witnessing a “melt up” of products, software, hardware, and all the duct tape holding them together into an experience that is benchmarked against the bar Apple has set with its products. As an industry, we will be held accountable for delivering a great experience, not just a good-enough product. If you want to create a new user experience for IT security and the business it serves – join our team. If you want to experience what lies beyond “good enough” – join our beta.

The Software Revolution’s Next Stop: The Enterprise Network

We are living through a software revolution. The flexible and agile nature of software makes it easier to conceive, build, test and deploy new products. It is also easier to iterate through revisions, continuously incorporating market feedback and adapting to changing requirements. By its nature, hardware is less agile and adaptive which slows down the...
The Software Revolution’s Next Stop: The Enterprise Network

We are living through a software revolution. The flexible and agile nature of software makes it easier to conceive, build, test, and deploy new products. It is also easier to iterate through revisions, continuously incorporating market feedback and adapting to changing requirements. By its nature, hardware is less agile and adaptive, which slows down the process of evolving products to meet market needs. A simple example is the annual refresh cycle of the iPhone compared with the more frequent introduction of enhancements to iOS.

Software and hardware have been with us since the dawn of computing, and both evolved in tandem. So where is the revolution? In my view, it is in the decoupling of software and hardware. When you couple hardware and software, you enslave the flexible and agile software to the rigid hardware platform. Think of an operating system and a server. When you couple the two together, a hardware failure kills the whole instance, and a software failure makes the hardware useless until a new software image is rebuilt. In both cases, the ability to adapt is constrained.

This problem was addressed by virtualization and the hypervisor. By decoupling the hardware and the software through the hypervisor, it became possible to quickly move virtual operating system images (basically Windows or Linux instances and the applications that run on them) across physical servers in case of a failure. And if the server software failed, the hardware could still run other virtual server instances. Virtualization was the driving force behind the Cloud transformation, because it allowed the elasticity and resource sharing that were core requirements of Infrastructure-as-a-Service (IaaS) businesses like Amazon Web Services. Because virtualization of the compute space created so much impact, we are now seeing virtualization being extended everywhere.
At the most basic level, network and security appliance vendors are packaging their solutions into virtual appliances. The architecture and management requirements remain the same; only the form factor changes. The customer is responsible for providing the underlying hardware, and licenses often control how much “capacity” the appliance can provide.

The situation is more complex when we deal with custom hardware and software. In that scenario, special rework is needed to decouple the software from the hardware. Standards like SDN and NFV are creating a framework of APIs and specifications that allows the decoupling of layers of software currently embedded in physical products. SDN extracts the control plane and abstracts the data plane, which is still delivered by networking hardware. It is now possible to deploy a “network brain” to make end-to-end routing decisions while directing SDN-compliant networking gear on packet forwarding. NFV takes that approach further by allowing the data handling functions themselves to be decoupled from the hardware. In an NFV world, functions like routing, application delivery, and security are delivered as a collection of software services and are linked together via an orchestration layer.

SDN and NFV are driving the software revolution in networking. The proposed open standards reduce vendor lock-in and upfront investment, as compatible virtualized functions can be swapped out by enterprises and service providers based on capabilities or pricing. The increased customer flexibility is at odds with legacy equipment vendors that make their living selling tightly integrated appliances. Obviously, everybody is playing along nicely; no one wants to be blamed for fighting the common good of lower prices and better service. If we had to guess, progress on the SDN, and especially the NFV, front will be slower than expected.
Enterprises will most likely find that orchestrating offerings from multiple competing vendors, which have little incentive to move away from their traditional business models, is going to be cumbersome. This doesn't mean businesses, especially small and medium-sized ones, will not be able to achieve the benefits of agile software applied to their network security and core networking infrastructure. Cato Networks is taking advantage of the progress in software, virtualization, and the Cloud to deliver a streamlined and secure enterprise network – as a service. If you want to work on fast-tracking tomorrow’s vision of a better enterprise network – join our team. If you feel your traditional networking and security vendors want to lock you in and you need a "get out of jail" card – join our beta.
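The NFV idea described above, independent functions linked by an orchestration layer, can be illustrated with a toy service chain. This is a conceptual sketch only: the "packets" are plain dictionaries, and the firewall and NAT rules are invented placeholders, not a real NFV implementation:

```python
from typing import Callable, Dict, List

Packet = Dict[str, object]            # toy packet: {"src": ..., "dst_port": ...}
VNF = Callable[[Packet], Packet]      # a virtual network function transforms a packet

def firewall(pkt: Packet) -> Packet:
    """Hypothetical rule: drop telnet traffic (port 23)."""
    if pkt.get("dst_port") == 23:
        raise ValueError("blocked by firewall")
    return pkt

def nat(pkt: Packet) -> Packet:
    """Rewrite the source address to a public-facing one."""
    return dict(pkt, src="203.0.113.10")

def chain(vnfs: List[VNF], pkt: Packet) -> Packet:
    """The orchestration layer: link independent software functions into one pipeline."""
    for fn in vnfs:
        pkt = fn(pkt)
    return pkt
```

Because each function is just software behind a common interface, one vendor's firewall could in principle be swapped for another's without touching the rest of the chain, which is exactly the flexibility that threatens tightly integrated appliances.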

Simplicity, Courtesy of the Cloud

Simplicity is the holy grail of the technology products of our time. “Can’t it just work?” is the prayer of end users everywhere. Simplicity is also at the epicenter of the Cloud revolution. The days of complex and risky enterprise software implementations are now fading from our memories. Pioneered in the area of business applications,...
Simplicity, Courtesy of the Cloud

Simplicity is the holy grail of the technology products of our time. “Can’t it just work?” is the prayer of end users everywhere. Simplicity is also at the epicenter of the Cloud revolution. The days of complex and risky enterprise software implementations are now fading from our memories.

The trend was pioneered in the area of business applications, where a small startup, salesforce.com, challenged enterprise software giant Siebel Systems and its alliance with system integrators and their army of consultants. Salesforce.com’s primary message was “no software” – a promise of “business value” without the “technology hassle”. At first, only businesses with few salespeople adopted this new platform; setting up a “real” customer relationship management system was simply beyond their capabilities. Over time, enterprises with large sales teams and mission-critical customer data placed their trust in salesforce.com. Siebel was acquired by Oracle for $6B, and salesforce.com has recently entertained a $50B takeover offer. Simplicity had won.

Many technology companies have followed the path blazed by the early Cloud leaders. Every realm of enterprise IT, from business applications to infrastructure, now sports a cloudy overcast. I had the privilege of working at Trusteer, an IBM company, which pioneered Cloud-based financial fraud prevention. The Cloud enabled fraud prevention at a speed, agility, and effectiveness that were unimaginable just a few years prior. The customers experienced only the “value”, not the “product”. Simplicity had won, again.

Closer to the world of IT infrastructure, we are witnessing an arms race between first-mover Amazon Web Services and challengers Google, Microsoft, and IBM to dominate the data center of the future.
Cloud-enabling the full technology stack (compute, storage, network) is on its way, as software virtualization devours proprietary hardware/software platforms and spits them out as Commercial Off-The-Shelf (COTS) hardware running agile software. This all-new, software-centric stack is placed on an elastic Cloud platform where it can rapidly evolve and transform to meet emerging business needs. The IT industry as a whole is forced to think Simplicity. Legacy contracts to run complex networks with a hodgepodge of products “owned” by locked-in customers are crumbling in the face of a swift change in the IT landscape.

What could Simplicity, courtesy of the Cloud, look like for IT security? I see five impact areas: plumbing, management, software, intelligence, and expertise.

Network security plumbing is complex and mission critical. For the most part it sits “in line” and can seriously disrupt the business if it fails or maxes out. Fault tolerance, disaster recovery, and high availability are just some of the considerations. The Cloud encapsulates the plumbing, and the underlying platform scales elastically as more security capabilities are delivered and more users need to be secured. This is one of the key challenges with the current appliance-centric approach, where the customer “owns the product” – what we dubbed the “appliance straightjacket”.

Managing this physical infrastructure introduces another point of failure. Network topology must be understood, and policies created to match it. This is a weak link that leads to misconfigured and outdated rules that could result in a disruption of service. Organizational changes, like M&A, introduce new equipment and the need to merge incompatible systems. With plumbing hidden and independent of any specific physical location, the Cloud isn’t subject to organizational boundaries. Policies can be fewer, and service standardization can be achieved faster and more easily than product standardization.

Security software must be uniquely adaptive.
Rapid shifts in attack vectors require security capabilities to evolve or die. One of the hallmarks of Software-as-a-Service (SaaS) is rapid adaptability. It simply can’t be matched by solutions that bind software and hardware together in multiple locations, where the customer owns the product and the responsibility to keep it up to date. Outdated software makes networks vulnerable, but even a dashboard full of bright-red vulnerability scan results still requires an overworked admin to take action (sometimes many times over) to keep a security solution up to date.

Intelligence is the other side of the adaptability coin. Intelligence provides the insight to adapt security solutions to defend against emerging threats. When buried deep inside customer networks, this information has little value. Shared across multiple organizations in the Cloud, threat intelligence access is simplified, so it can be quickly analyzed to detect new attack patterns and techniques. Yes, some are concerned about data privacy, but measures can be taken to anonymize the data. Without sharing threat intelligence, we are crippling our own defense as nation-state and other actors increase the speed and sophistication of their attacks.

The Cloud also creates an opportunity to share expertise. Security vendors and service providers can apply teams of experienced experts to analyze threat intelligence and create countermeasures. It is virtually impossible, even for the largest organizations, to match that capacity, which can be shared across hundreds or thousands of organizations. Shared expertise brings to bear the largest amount of skill at the highest utilization and lowest possible cost.

The Cloud enables enterprise IT to rethink, and ultimately recreate, a network security architecture that is simple, powerful, and can effectively provide secure business productivity for organizations of all sizes. Cato Networks will lead this Cloud-driven transformation.
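The anonymization measure mentioned above can take a very simple form: share the threat indicator itself (the attacker's domain or file hash) while stripping any fields that identify the reporting organization. The field names below are hypothetical, chosen only to illustrate the idea:

```python
# Fields safe to share across organizations: the indicator, not the victim.
SHARED_FIELDS = {"indicator", "indicator_type", "first_seen"}

def sanitize_event(event: dict) -> dict:
    """Keep the threat indicator; drop fields identifying the reporting org."""
    return {k: v for k, v in event.items() if k in SHARED_FIELDS}
```

Because the indicator is shared unmodified, it can still be correlated across many customers, which is what gives pooled Cloud telemetry its value, while internal hostnames, IPs, and usernames never leave the organization.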
If you want to build the next big thing in network security - join our team. Or, if you feel your enterprise network security architecture needs a new vision – join our Beta.

The Appliance Straightjacket

Let’s admit it: we want to love our appliances. Not the washing machines and the dryers, but the technology workhorses that dominate the IT landscape. They are cool to look at with their modern industrial designs, bright colors, and cool branding. They are even more attractive inside a rack stacked up with their brethren: lights...
The Appliance Straightjacket

Let’s admit it: we want to love our appliances. Not the washing machines and the dryers, but the technology workhorses that dominate the IT landscape. They are cool to look at with their modern industrial designs, bright colors, and cool branding. They are even more attractive inside a rack, stacked up with their brethren: lights blinking, fans humming, busy going through billions of bits looking for the sharp needle in the haystack.

Sometimes, though, the music ends. A power supply fails and you have to go deal with a replacement. A software update crashes the internal operating system. As years go by, even these loyal workhorses need to be put out to pasture, and we accept a bright-colored replacement bundled with an EOL (that’s End of Life) notice.

Even when things are looking good, our appliances may not be able to handle what we need. A DDoS attack chokes them. New business drives growth, and capacity becomes constrained (inconveniently outside the budget cycle). New cool features overload them when activated in conjunction with old cool features we take for granted. So we go on a spending spree like drunken sailors, because “you only live once” and “today’s budget is tomorrow’s cut”. And, as the hangover sets in, all this spare capacity just sits there, idle within our networks.

We love variety, so we have many appliances. Many kinds. Each with its own policy that needs to be managed and kept consistent. We keep on staff just the right number of experts for the proper number of appliances and rely on them to watch over them like day-old babies. Then you have turnover, and a new geological layer of rules, settings, and scripts is born. Before long, no one knows what these rules mean or what it would mean to change them. But no worry, we have vendors for that too. We are so concerned with stability that we require human intervention before every update.
This means we waste precious time before our appliances adapt to current threats. And as we diligently lock them in data centers, away from the vendors, we ensure the vendors will be slow to figure out what is going wrong, let alone fix it.

But ultimately, the biggest challenge is positioning. Not the vendors’ clever marketing messages, but their precious appliances in our networks. You see, they are supposed to be “in front of”, “at the perimeter of”, or at “the edge of” the network. But we have a mobile workforce, Bring-Your-Own-Device (BYOD) programs, Cloud apps, small branch offices we can’t afford to protect, and third parties like partners, contractors, and agents. You can’t just get “in front” of all of that. And if you think virtual appliances will save you – think again. The severe challenges of capacity, manageability, adaptability, and positioning apply equally to them.

The appliance model is broken, and Cato Networks is working hard to help businesses break out of the appliance straightjacket. If you want to help network security break free of old paradigms and launch into a new era – join our team. Or, if you have suffered enough running networks chock-full of appliances – join our beta.

The Sound of the Trombone

I love Trombones… in marching bands. Some trombones, however, generate a totally different sound: sighs of angst across networking teams around the world. The “Trombone Effect” occurs in a network architecture that forces a distributed organization to use a single secure exit point to the Internet. Simply put, network traffic from remote locations and mobile...
The Sound of the Trombone

I love trombones... in marching bands. Some trombones, however, generate a totally different sound: sighs of angst across networking teams around the world. The "Trombone Effect" occurs in a network architecture that forces a distributed organization to use a single secure exit point to the Internet. Simply put, network traffic from remote locations and mobile users is backhauled to the corporate datacenter, where it exits to the Internet through the corporate security appliance stack. Network responses then flow back through the same stack and travel from the data center to the remote user. This twisted path, resembling the bent pipes of a trombone, has a negative impact on latency and therefore on the user experience.

Why does this compromise exist? If you are located in a remote office, your organization may not be able to afford a stack of security appliances (firewall, web filter, etc.) in your office. Affordability is not just a matter of money. Even UTM appliances have policies that need to be managed, and if the appliance fails or requires maintenance – someone has to take care of it at that remote location. Mobile users are left unprotected because they are not "behind" the corporate network security stack.

The most recent answer to the Trombone Effect is the use of "regional hubs". These "mini" data centers host the security stack and shorten the distance between the remote location and a secure exit point to the Internet. While this approach reduces the end-user performance impact, the fundamental issue of managing multiple instances of the security stack remains. Cato Networks will solve this problem as part of the core design of our network security platform. If you are a victim of the Trombone Effect or traffic backhaul, drop us a line or join our beta program.
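The latency cost of the trombone path is easy to quantify with back-of-the-envelope arithmetic: every backhauled round trip traverses the branch-to-datacenter leg and the datacenter-to-destination leg, both twice. The one-way latency numbers below are hypothetical, chosen to resemble a far-flung branch backhauling through a distant datacenter:

```python
def trombone_rtt_ms(branch_to_dc_ms: float, dc_to_site_ms: float) -> float:
    """Round-trip time when branch traffic is backhauled through the datacenter.

    Both legs (branch->DC and DC->destination) are traversed in each direction.
    """
    return 2 * (branch_to_dc_ms + dc_to_site_ms)

def direct_rtt_ms(branch_to_site_ms: float) -> float:
    """Round-trip time if the branch could exit to the Internet locally."""
    return 2 * branch_to_site_ms

# Hypothetical example: a Singapore branch, a London datacenter, and a
# destination site hosted near the branch (one-way latencies in ms).
backhauled = trombone_rtt_ms(branch_to_dc_ms=180, dc_to_site_ms=175)  # 710 ms
direct = direct_rtt_ms(branch_to_site_ms=5)                           # 10 ms
added_latency = backhauled - direct                                   # 700 ms per round trip
```

Seven hundred extra milliseconds per round trip, multiplied across the dozens of round trips a typical web page requires, is exactly the "sigh of angst" the trombone produces.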