The Latest Cyber Attacks Demonstrate the Need to Rethink Cybersecurity

Cyberattacks are on the rise, and more enterprises fall victim to attacks every day. Take, for example, the recent high-profile attacks on Gedia, a German automotive parts manufacturer, and Travelex, a foreign currency exchange enterprise. Both businesses experienced disruption and claimed the attacks came from a known criminal group, the same group behind a series of attacks on companies using sophisticated file-encrypting malware known as Sodinokibi or REvil. The criminal group also threatened to publish sensitive data from the car parts supplier on the internet unless a ransom was paid.
Simply put, businesses have become attractive targets for digital extortion and are now on the radar of organized crime groups looking to make a quick buck off the misery they can create. Both attacks demonstrate how vulnerable today’s businesses are when they are connected to the public internet without adequate protection.
It is speculated that the attack on Travelex became possible because the company had failed to patch vulnerable VPN servers. That is important to note, especially since NIST’s National Vulnerability Database has published over 100 new CVEs (Common Vulnerabilities and Exposures) for VPNs since January 2019, indicating that many unpatched VPN servers may be in use today. Even more troubling, the root cause of the Gedia attack has yet to be identified, which means the security flaw may still exist.
Speculation aside, the root cause of most cyber-attacks can be traced back to unpatched systems, phishing, malicious code, or some other weak link in the security stack, such as compromised credentials. Knowing the root cause is only one part of the cybersecurity puzzle. The real question is: what can be done to prevent such attacks?
The Cato Approach
Here at Cato Networks, we have developed a solution to the security problem of unpatched VPN servers. Remote users are just another “edge”, along with branches, datacenters, and cloud resources, all connected and secured by the Cato global network. As the first implementation of Gartner’s SASE (Secure Access Service Edge) architecture, the Cato infrastructure is kept up to date, and customers do not have to worry about applying patches or installing server-side security software; we take care of all of that. We also address the shortcomings of VPNs: our client software replaces a traditional VPN and uses a Software-Defined Perimeter (SDP) to allow only authorized users to access the private backbone.
Once connected to Cato, users are protected from attack as Cato inspects all network traffic, attachments, and files. Since most ransomware enters a network after a phishing attack or by a user downloading software from an embedded link, our platform would detect the malicious code and the associated SMB traffic and prevent the lateral movement of the malicious code. We can also detect traffic flows that attempt to contact the external addresses used by malicious software and block that traffic as well.
Ransomware and other malicious code can only impact an organization if it has a way to get on the network. Our platform eliminates the ability of malicious code to enter the network, thus defeating ransomware and other threats. Our SASE platform identifies known malicious code while it is in transit and blocks it. And because our platform is a cloud service, it is constantly updated with the latest cybersecurity information; we take care of everything on the backend, including any patching or other updates.
In addition, Cato protects against the spread of ransomware and other types of malware introduced into a host by means outside of the secured Cato network. For example, users may introduce malware into their systems by connecting across a public Wi-Fi network (without using the Cato mobile access solution) or by inserting an infected thumb drive.
Regardless of how a host becomes infected, Cato detects and blocks the spread of malware by spotting anomalous traffic patterns. Cato monitors normal user behavior, such as the number of servers commonly accessed, typical traffic load, regular file extensions, and more. Once an anomaly occurs, Cato can either notify the customer or block various types of traffic from the abnormal host.
Cato also offers an additional layer of protection: since our server-side software is not available to the public (unlike VPN server software), hackers do not have access to the code to create exploits for the system. And because we handle everything on the backend, administrators no longer have to worry about maintaining firewalls, setting up secure branch access, or deploying secure web gateways; all of those elements are part of Cato’s service offering, further reducing administrative overhead.
By moving to an SD-WAN built with SASE, enterprises can make most of their cybersecurity problems disappear. For more information, check out our advanced threat protection and get additional information on our next generation firewall.
Cato Develops Groundbreaking Method for Automatic Application Identification

New applications are identified faster and more efficiently by using data science and Cato’s data warehouse
Identifying applications has become a crucial part of network operations. Quickly and reliably identifying unknown applications is essential to everything from enforcing QoS rules and setting application policies to preventing malicious communications.
However, legacy approaches to application classification have become too ineffective or too expensive. In the past, SD-WAN appliances and firewalls identified applications by largely relying on transport-layer information, such as the port number. This approach, though, is no longer sufficient as applications today employ multiple port numbers, run over their own protocols, or both.
As a result, accurately classifying applications has required reconstruction of application flows. Indeed, next-generation firewalls have become application-aware, identifying applications by their protocol structure or other application-layer headers to permit or deny unwanted traffic.
Reconstructing application flows, though, is a processor-intensive process that does not scale. Many vendors have resorted to manual classification, a labor-intensive process involving the hiring of many engineers. It’s costly, lengthy, and limited in accuracy. Ultimately that impacts product costs, the customer experience, or both.
Cato Uses Data Science to Automatically Classify New Applications
Cato has developed a new approach for automatically identifying the toughest types of applications to classify – new apps running over their own protocols. We do this by running machine learning algorithms against our data warehouse of flows. It’s a repository built from the billions of traffic flows crossing the Cato private backbone every day.
We’re able to use that repository to classify and label applications based on thousands of datapoints derived from flow characteristics. Those application labels or AppIDs are fed into our management system. With them, customers can categorize what was once uncategorized traffic, giving them deeper visibility into their network usage, and, in the process, create more accurate network rules for managing traffic flows.
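Cato’s actual pipeline and feature set are not public, so as an illustration only, here is a minimal sketch of the general idea: summarize each flow as a small feature vector and assign it the label of the nearest known application signature, falling back to “uncategorized” when nothing is close. All names, features, and values below are hypothetical.

```python
import math

# Hypothetical labeled centroids: each AppID is summarized by a vector of
# flow characteristics (avg packet size in bytes, avg inter-packet gap in ms,
# upstream/downstream byte ratio). Names and values are illustrative only.
APP_CENTROIDS = {
    "voip-app": (180.0, 20.0, 1.0),
    "video-stream": (1350.0, 5.0, 0.05),
    "bulk-upload": (1400.0, 2.0, 15.0),
}

def classify_flow(features, max_distance=500.0):
    """Assign the nearest known AppID, or 'uncategorized' if nothing is close."""
    best_app, best_dist = "uncategorized", max_distance
    for app, centroid in APP_CENTROIDS.items():
        dist = math.dist(features, centroid)
        if dist < best_dist:
            best_app, best_dist = app, dist
    return best_app

print(classify_flow((200.0, 18.0, 0.9)))     # near the voip-app centroid
print(classify_flow((9000.0, 900.0, 50.0)))  # nothing close: uncategorized
```

In a real system the centroids would come from machine learning over billions of flows rather than being hand-written, but the labeling step works the same way.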
To learn more about our approach and the data science behind it, click here to read the paper.
For insight into our security services click here or here to learn about Cato Managed Threat Detection and Response service.
How to Identify Malicious Bots on your Network in 5 Steps

It’s no secret that malicious bots play a crucial role in the security breaches of enterprise networks. Bots are often used by malware to propagate across the enterprise network. But identifying and removing malicious bots is complicated by the fact that many routine processes in an operating environment, such as software updaters, are also bots.
Until recently there hasn’t been an effective way for security teams to distinguish between such “bad” bots and “good” bots. Open source feeds and community rules purporting to identify bots are of little help; they contain far too many false positives. In the end, security analysts wind up fighting alert fatigue from analyzing and chasing down all of the irrelevant security alerts triggered by good bots.
At Cato, we faced a similar problem in protecting our customers’ networks. To solve the problem, we developed a new approach. It’s a multi-dimensional methodology implemented in our security as a service that identifies 72% more malicious incidents than would have been possible using open source feeds or community rules alone.
Best of all, you can implement a similar strategy on your network. Your tools will be the stock-in-trade of any network engineer: access to your network, a way to capture traffic, such as a tap sensor, and enough disk space to store a week’s worth of packets. Here’s how to analyze those packet captures to better protect your network.
The Five Vectors for Identifying Malicious Bot Traffic
As I said, we use a multi-dimensional approach. Although no one variable can accurately identify malicious bots, the aggregate insight from evaluating multiple vectors will pinpoint them. The idea is to gradually narrow the field from sessions generated by people to those sessions likely to indicate a risk to your network.
Our process was simple:
Separate bots from people
Distinguish between browsers and other clients
Distinguish between bots within browsers
Analyze the payload
Determine a target’s risk
Let’s dive into each of those steps.
Separate Bots from People by Measuring Communication Frequency
Bots of all types tend to communicate continuously with their targets. This happens because bots need to receive commands, send keep-alive signals, or exfiltrate data. A first step, then, in distinguishing between bots and humans is to identify the machines repeatedly communicating with a target.
And that’s what you want to find out: the hosts that communicate with many targets periodically and continuously. In our experience, a week’s worth of traffic is sufficient to determine the nature of client-target communications. Statistically, the more uniform these communications, the greater the chance that they are generated by a bot (see Figure 1).
Figure 1: This frequency graph shows bot communication in mid-May of this year. Notice the completely uniform distribution of communications, a strong indicator of bot traffic.
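One simple way to quantify that uniformity is the coefficient of variation of the gaps between a host’s sessions to a target: values near zero mean metronome-like timing. The sketch below uses made-up timestamps and a threshold chosen purely for illustration.

```python
from statistics import mean, pstdev

def interval_uniformity(timestamps):
    """Coefficient of variation of inter-session gaps: near 0 means the host
    contacts the target on a rigid schedule, which is bot-like."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

# Hypothetical session start times (in seconds) from a week of captures.
bot_like = [t * 600 for t in range(20)]            # exactly every 10 minutes
human_like = [0, 45, 300, 310, 2000, 2100, 9000]   # bursty, irregular

print(interval_uniformity(bot_like) < 0.1)    # True: highly uniform
print(interval_uniformity(human_like) < 0.1)  # False: human-like burstiness
```

Run this per client-target pair over your week of captures; the pairs with the lowest scores are your bot candidates for the next steps.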
Distinguish Between Browsers and Other Clients
Simply knowing a bot exists on a machine won’t help very much; as we said, most machines generate some bot traffic. You then need to look at the type of client communicating on the network. Typically, “good” bots run within browsers, while “bad” bots operate outside of the browser.
Operating systems have different types of clients and libraries generating traffic. For example, “Chrome,” ”WinInet,” and “Java Runtime Environment” are all different client types. At first, client traffic may look the same, but there are some ways to distinguish between clients and enrich our context.
Start by looking at application-layer headers. Since most firewall configurations allow HTTP and TLS to any address, many bots use these protocols to communicate with their targets. You can identify bots operating outside of browsers by identifying groups of client-configured HTTP and TLS features.
Every HTTP session has a set of request headers defining the request and how the server should handle it. These headers, their order, and their values are set when the HTTP request is composed (see Figure 2). Similarly, TLS session attributes, such as cipher suites, the extensions list, ALPN (Application-Layer Protocol Negotiation), and elliptic curves, are established in the initial TLS packet, the unencrypted “client hello.” Clustering the different sequences of HTTP and TLS attributes will likely reveal different bots.
Doing so will, for example, let you spot TLS traffic with unusual cipher suites. That is a good indicator the traffic is being generated outside of the browser, a very non-human-like approach and hence a strong sign of bot traffic.
Figure 2: An example sequence of HTTP request headers (separated by commas) generated by a cryptographic library on Windows. Changes to the sequence, keys, and values of the headers can help you classify bots.
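As a rough sketch of this technique, you can fingerprint each session by the ordered sequence of header names it sent and then look for rare fingerprints. The header sequences below are illustrative, not real captures.

```python
from collections import Counter

# Hypothetical sessions: each is the ordered tuple of HTTP request header
# names the client sent. Browsers tend to share one stable sequence; a bot
# using a raw HTTP library stands out with its own.
sessions = [
    ("Host", "Connection", "User-Agent", "Accept", "Referer"),
    ("Host", "Connection", "User-Agent", "Accept", "Referer"),
    ("Host", "Connection", "User-Agent", "Accept", "Referer"),
    ("User-Agent", "Host", "Accept"),  # minimal, reordered: non-browser client
]

fingerprints = Counter(sessions)
for seq, count in fingerprints.most_common():
    print(count, seq)

# Fingerprints seen only once are worth a closer look.
rare = [seq for seq, count in fingerprints.items() if count == 1]
print(rare)
```

The same grouping works for TLS client-hello attributes (cipher suites, extension order, ALPN); only the tuple contents change.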
Distinguish Between Bots within Browsers
Another method for identifying malicious bots is to look at specific information contained in HTTP headers. Internet browsers usually present a clear, standard set of headers. In a normal browsing session, clicking on a link within a browser generates a “referrer” header that is included in the next request for that URL. Bot traffic will usually lack a “referrer” header or, worse, carry a forged one. Traffic that looks identical in every flow likely indicates a malicious bot.
Figure 3: An example of referrer header usage within the headers of a browsing session.
User-agent is the best-known string representing the program initiating a request. Various sources, such as fingerbank.org, match user-agent values with known program versions, and this information can help identify abnormal bots. For example, most recent browsers use the “Mozilla/5.0” token in the user-agent field. A lower Mozilla version, or the complete absence of a user-agent, indicates an abnormal client: no trustworthy browser will create traffic without a user-agent value.
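These header checks are easy to automate. The sketch below applies the two heuristics just described, an outdated or missing user-agent and a missing referrer, to a dict of request headers. The header names follow standard HTTP (the wire spelling is “Referer”), but the function itself is our own illustration, not a standard tool.

```python
def browser_anomalies(headers):
    """Flag header traits that no well-behaved browser session should show.
    `headers` is a dict of HTTP request headers from one captured request."""
    flags = []
    ua = headers.get("User-Agent", "")
    if not ua:
        flags.append("missing user-agent")
    elif ua.startswith("Mozilla/") and not ua.startswith("Mozilla/5.0"):
        flags.append("outdated Mozilla token")
    # HTTP spells the referrer header "Referer" on the wire.
    if "Referer" not in headers:
        flags.append("no referrer")
    return flags

print(browser_anomalies({"User-Agent": "Mozilla/4.0", "Host": "example.com"}))
# Both the outdated Mozilla token and the missing referrer are bot indicators.
```

In practice you would run this over every request in a flow; flows where every request trips the same flags are the strongest candidates.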
Analyze the Payload
That said, we don’t want to limit our search for bots to the HTTP and TLS protocols. Consider, for example, IRC, whose bots have long played a part in malicious botnet activity. We have also observed known malware samples using unknown, proprietary protocols over well-known ports; such traffic can be flagged using application identification.
In addition, the traffic direction (inbound or outbound) carries significant value here. Devices connected directly to the internet are constantly exposed to scanning operations, so inbound scanning traffic should be attributed to external scanners. Outbound scanning activity, on the other hand, indicates a device infected with a scanning bot. That can be harmful to the targets being scanned and puts the organization’s IP address reputation at risk. The graph below shows spikes in traffic flows within a short timeframe; such spikes can indicate scanning-bot activity and can be detected with a flows-per-second calculation.
Figure 4: An example of a high-frequency outbound scanning operation.
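A flows-per-second calculation like the one just mentioned can be sketched in a few lines: bucket outbound flow start times by second and flag buckets above a threshold. The timestamps and threshold below are illustrative.

```python
from collections import Counter

def scan_spikes(flow_times, threshold=50):
    """Count new outbound flows per one-second bucket; buckets above the
    threshold suggest a scanning bot. Threshold is illustrative only."""
    per_second = Counter(int(t) for t in flow_times)
    return sorted(sec for sec, n in per_second.items() if n > threshold)

# Hypothetical capture: light background traffic, then a burst of 200 new
# flows packed into second 30, the signature of a port or host sweep.
times = [i * 0.5 for i in range(60)] + [30 + i / 200 for i in range(200)]
print(scan_spikes(times))  # [30]
```

Run this separately for inbound and outbound flows; only the outbound spikes point at an infected host on your side.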
Target Analysis: Know Your Destinations
Until now we’ve looked for bot indicators in the frequency of client-server communications and in the type of clients. Now, let’s pull in another dimension — the destination or target. To determine malicious targets, consider two factors: Target Reputation and Target Popularity.
Target Reputation calculates the likelihood of a domain being malicious based on the experience gathered from many flows. Reputation is determined either by third-party services or through self-calculation by noting whenever users report a target as malicious.
All too often, though, simple sources for determining a target’s reputation, such as URL reputation feeds, are insufficient on their own. Millions of new domains are registered every month. With so many new domains, domain reputation mechanisms lack sufficient context to categorize them properly, delivering a high rate of false positives.
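Target popularity, by contrast, can be computed entirely from your own capture: count how many distinct clients talk to each target. The flow records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical flow log: (client, target) pairs seen over a week of captures.
flows = [
    ("host-1", "sharepoint.example.com"),
    ("host-2", "sharepoint.example.com"),
    ("host-3", "sharepoint.example.com"),
    ("host-1", "disorderstatus[.]ru"),
    ("host-1", "disorderstatus[.]ru"),
]

clients_per_target = defaultdict(set)
for client, target in flows:
    clients_per_target[target].add(client)

# Popularity = number of distinct clients contacting a target. A target
# reached by a single machine in the whole organization is suspicious.
for target, clients in sorted(clients_per_target.items()):
    print(target, len(clients))
```

A sanctioned SaaS app will show dozens or hundreds of distinct clients; a command-and-control domain typically shows one or two.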
Putting It All Together
Putting all of what we learned together, sessions identified as:
Created by a machine rather than a human
Generated outside of the browser or are browser traffic with anomalous metadata
Communicating with low popularity targets, particularly those that are uncategorized or marked as malicious
will likely be suspicious. Your legitimate and good bots should not be communicating with low-popularity targets.
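One illustrative way to fold those signals into a single suspicion score is sketched below (the payload step is omitted since it is protocol-specific). The field names and equal weighting are our own choices, not a prescribed formula.

```python
def bot_suspicion(session):
    """Combine the per-step signals into a rough 0-4 score. Each `session`
    field holds the result of the corresponding step described above."""
    score = 0
    if session["uniform_frequency"]:   # step 1: machine-like timing
        score += 1
    if session["outside_browser"]:     # step 2: non-browser client fingerprint
        score += 1
    if session["header_anomalies"]:    # step 3: bad referrer/user-agent
        score += 1
    if session["target_popularity"] <= 1 or session["target_reputation"] == "malicious":
        score += 1                     # step 5: risky destination
    return score

suspect = {"uniform_frequency": True, "outside_browser": True,
           "header_anomalies": ["no referrer"], "target_popularity": 1,
           "target_reputation": "unknown"}
print(bot_suspicion(suspect))  # 4: hits every signal
```

Sessions scoring 3 or 4 are the ones worth an analyst’s time; a single signal on its own, as noted above, proves little.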
Practice: Under the Network Hood of Andromeda Malware
You can use a combination of these methods to discover various types of threats on your network. Let’s look at one example: detecting the Andromeda bot. Andromeda is a very common downloader for other types of malware. We identify Andromeda by analyzing data with four of the five approaches we’ve discussed.
We noticed communication with “disorderstatus[.]ru”, a domain identified as malicious by several reputation services. Various sources categorize the site as a “known infection source” and part of “bot networks.” However, as noted, that alone is insufficient: it doesn’t indicate whether the specific host is infected by Andromeda, since a user could simply have browsed to the site. What’s more, as noted, a newer malicious URL may still be categorized as “unknown” or “not malicious.”
Out of ten thousand users, only one user’s machine communicates with this target, which is very unusual and gives the target a low popularity score.
Over one week, we saw continuous traffic between the client and the target for three days. This repetitive communication is another indicator of a bot.
Figure 5: Client-target communication between the user and disorderstatus[.]ru. Frequency is shown over three days in one-hour buckets.
The requesting user-agent is “Mozilla/4.0”, a token no modern browser uses, indicating the client is probably a bot.
Figure 6: The HTTP headers from the traffic we captured with disorderstatus[.]ru. Notice there is no ‘referrer’ header in any of these requests, and the User-Agent value is set to Mozilla/4.0. Both are indicators of an Andromeda session.
Bot detection over IP networks is not an easy task, but it’s becoming a fundamental part of network security practice and malware hunting specifically. By combining the five techniques we’ve presented here, you can detect malicious bots more efficiently.
Click here to learn more about our security services and here for the Cato Managed Threat Detection and Response service.