The DGA Algorithm Used by Dealply and Bujo Campaigns

During a recent malware hunt, the Cato research team identified some unique attributes of DGA algorithms that can help security teams automatically spot malware on their network.
The “Shimmy” DGA
DGAs (Domain Generator Algorithms) are used by attackers to generate a large number of – you guessed it – domains often used for C&C servers. Spotting DGAs can be difficult without a clear, searchable pattern.
Cato researchers began by collecting traffic metadata from malicious Chrome extensions to their C&C services. Cato maintains a data warehouse built from the metadata of all traffic flows crossing its global private backbone. We analyze those flows for suspicious traffic to hunt threats on a daily basis.
The researchers were able to identify the same traffic patterns and network behavior in traffic originating from 80 different malicious Chrome extensions, identified as belonging to the Bujo, Dealply, and ManageX families of malicious extensions. By examining the C&C domains, researchers observed an algorithm used to create the malicious domains. In many cases, DGAs appear as random characters. In some cases, the domains contain numbers, and in other cases the domains are very long, making them look suspicious.
Here are a few examples of the C&C domains (full domain list at the end of this post):
The most obvious trait the domains have in common is that they all use the ".com" TLD (Top-Level Domain). Also, all the prefixes are five to eight letters long.
The domains share other traits as well. Each begins with a consonant and then alternates consonants and vowels, so every domain follows the pattern consonant + vowel + consonant + vowel + consonant, and so on. For example, removing the TLD from jurokotu.com leaves "jurokotu", which breaks down as j-u-r-o-k-o-t-u: consonant, vowel, consonant, vowel throughout.
From the domains we collected, we could see that the adversaries used the vowels: o, u and a, and consonants: q, m, s, p, r, j, k, l, w, b, c, n, d, f, t, h, and g. Clearly, an algorithm has been used to create these domains and the intention was to make them look as close to real words as possible.
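The observed pattern can be expressed as a simple heuristic. The letter sets and length bounds below are taken from the domains we collected; the function name is ours, and this is a minimal illustrative sketch rather than production detection logic:

```python
# Vowels and consonants observed in the campaign's domains (from the analysis above).
VOWELS = set("oua")
CONSONANTS = set("qmsprjklwbcndfthg")

def looks_like_shimmy_dga(domain: str) -> bool:
    """Heuristic check for the 'Shimmy' DGA pattern: a .com domain whose
    5-8 letter prefix starts with a consonant and strictly alternates
    consonant/vowel, using only the letters seen in the campaign."""
    name, _sep, tld = domain.lower().rpartition(".")
    if tld != "com" or not (5 <= len(name) <= 8):
        return False
    for i, ch in enumerate(name):
        expected = CONSONANTS if i % 2 == 0 else VOWELS
        if ch not in expected:
            return False
    return True

print(looks_like_shimmy_dga("jurokotu.com"))  # True
print(looks_like_shimmy_dga("google.com"))    # False
```

A real-world filter would also check domain age and registrar, since short alternating names do occur in legitimate domains.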
[boxlink link="https://www.catonetworks.com/resources/8-ways-sase-answers-your-current-and-future-it-security-needs/?utm_source=blog&utm_medium=top_cta&utm_campaign=8_ways_sase_answers_needs_ebook"] 8 Ways SASE Answers Your Current and Future Security & IT Needs [eBook] [/boxlink]
“Shimmy” DGA infrastructure
A few additional notable findings are related to the same common infrastructure used by all the C&C domains.
All domains are registered through the same registrar, Gal Communication (CommuniGal) Ltd. (GalComm), which has previously been associated with the registration of malicious domains.
The domains are also classified as 'uncategorized' by classification engines, another sign that these domains are being used by malware. Trying to access the domains via a browser will get you either a landing page or HTTP ERROR 403 (Forbidden). However, we believe there are server-side controls that allow access from the malicious extensions based on specific HTTP headers.
All domains resolve to IP addresses belonging to Amazon AWS, part of AS16509. The domains do not share the same IP, and from time to time the IP for a particular domain appears to change dynamically, as can be seen in this example:
Given all this evidence, it is clear that these campaigns leverage AWS infrastructure and operate at significant scale. We identified many connection points among the 80 C&C domains, along with their DGA and shared infrastructure. By analyzing network traffic, these findings can be used to identify C&C communication and infected machines. Security teams can now use these insights to identify traffic from malicious Chrome extensions.
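The dynamic IP rotation noted above can be turned into a simple log-analysis heuristic. This is an illustrative sketch, not Cato's actual tooling; legitimate CDN-hosted domains also rotate IPs, so this is a signal to combine with others (such as the DGA pattern), not a verdict on its own:

```python
from collections import defaultdict

def flag_rotating_domains(observations, min_distinct_ips=2):
    """Given (timestamp, domain, resolved_ip) tuples from DNS logs,
    return the set of domains that resolved to multiple IPs over the
    window -- a possible sign of dynamically rotated C&C infrastructure."""
    ips_per_domain = defaultdict(set)
    for _ts, domain, ip in observations:
        ips_per_domain[domain].add(ip)
    return {d for d, ips in ips_per_domain.items() if len(ips) >= min_distinct_ips}

# Hypothetical DNS log entries for illustration:
logs = [
    (1, "jurokotu.com", "52.1.2.3"),
    (2, "jurokotu.com", "52.9.8.7"),   # same domain, different IP later
    (3, "example.com", "93.184.216.34"),
]
print(flag_rotating_domains(logs))  # {'jurokotu.com'}
```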
3 Principles for Effective Business Continuity Planning

Business continuity planning (BCP) is all about being ready for the unexpected. While BCP is a company-wide effort, IT plays an especially important role in maintaining business operations, with the task of ensuring redundancy measures and backup for data centers in case of an outage.
With enterprises migrating to the cloud and adopting a work-from-anywhere model, BCP today must also include continual access to cloud applications and support for remote users. Yet, the traditional network architecture (MPLS connectivity, VPN servers, etc.) wasn’t built with cloud services and remote users in mind. This inevitably introduces new challenges when planning for business continuity today, not to mention the global pandemic in the background.
Three Measures for BCP Readiness
In order to guarantee continued operations to all edges and locations, at all times – even during a data center or branch outage – IT needs to make sure the answer to all three questions below is YES.
Can you provide access to data and applications according to corporate security policies during an outage?
Are applications and data repositories as accessible and responsive during an outage as during normal operations?
Can you continue to support users and troubleshoot problems effectively during an outage?
If you can’t answer YES to all the above, then it looks like your current network infrastructure is inadequate to ensure business continuity when it comes to secure data access, optimized user experience, and effective visibility and management.
[boxlink link="https://www.catonetworks.com/resources/business-continuity-planning-in-the-cloud-and-mobile-era-are-you-prepared/?utm_source=blog&utm_medium=top_cta&utm_campaign=business_continuity+"] Business Continuity Planning in the Cloud and Mobile Era
| Get eBook [/boxlink]
The Challenges of Legacy Networks
Secure Data Access
When a data center is down, branches connect to a secondary data center until the primary one is restored. But does that guarantee business operations continue as usual? Although data replication may have operated within requisite RTO/RPO, users may be blocked from the secondary data center, requiring IT to update security policies across the legacy infrastructure in order to enable secure access.
When a branch office is down, users work remotely, connecting back via the Internet to the VPN in the data center. Yet VPN wasn't designed to support an entire remote workforce simultaneously, forcing IT to add VPN servers to handle the surge of remote users, who also generate more Internet traffic, creating the need for a bandwidth upgrade. If a company runs branch firewalls with VPN access, the challenges become even more significant, as IT must plan for duplicating these capabilities as well.
Optimized User Experience
When a data center is down, users can access applications from the secondary data center. But, if the performance of these applications relies on WAN optimization devices, IT will need to invest further in WAN optimization at the secondary data center, otherwise data transfer will slow down to a crawl. The same is true for cloud connections. If a premium cloud connection is used, these capabilities must also be replicated at the secondary data center.
When a branch office is down, remote access via VPN is often complicated and time-consuming for users. When accessing cloud applications, traffic must be backhauled to the data center for inspection, adding delay and further undermining user experience. The WAN optimization devices required for accelerating branch-datacenter connections are no longer available, further crippling file transfers and application performance. In addition, IT needs to configure new QoS policies for remote users.
Effective Visibility and Management
When a data center is down, users continue working from branch offices, and user management should remain the same. This requires IT to replicate management tools to the secondary data center in order to maintain user support, troubleshooting, and application management.
When a branch office is down, IT needs user management and traffic monitoring tools that can support remote users. Such tools must be integrated with existing office tools to avoid fragmenting visibility by maintaining separate views of remote and office users.
BCP Requires a New Architecture
Legacy enterprise networks are composed of point solutions with numerous components – different kinds of network services and cloud connections, optimization devices, VPN servers, firewalls, and other security tools – all of which can fail.
BCP needs to consider each of these components; capabilities must be replicated to secondary data centers and upgraded to accommodate additional loads during an outage. With so much functionality concentrated in on-site appliances, effective BCP becomes a near-impossible task, not to mention the additional time and money required to ensure business continuity in a legacy network environment.
SASE: The Architecture for Effective BCP
SASE provides the adequate underlying infrastructure for BCP in today’s digital environment. With SASE, a single, global network connects and secures all sites, cloud resources, and remote users. There are no separate backbones for site connectivity, dedicated cloud connections for optimized cloud access, or additional VPN servers for remote access.
As such, there’s no need to replicate these capabilities for BCP. The SASE network is a full mesh, where the loss of a site can’t impact the connectivity of other locations. Access is restricted by a suite of security services running in cloud-native software built into the PoPs that comprise the SASE cloud. With optimization and self-healing built into the SASE service, enterprises receive a global infrastructure designed for effective BCP.
Unstuck in the Middle: WAN Latency, Packet Loss, and the Wide, Wide World of Internet WAN

One of the big selling points of SD-WAN tools is their ability to use the Internet to deliver private-WAN levels of performance and reliability. Give each site connections to two or three Internet providers and you can run even demanding, performance-sensitive applications with confidence. Hence the volume of MPLS being retired in the wake of SD-WAN deployments. (See Figure 1.)
[caption id="attachment_9257" align="alignnone" width="424"] Figure 1: Plans for MPLS in the SD-WAN[/caption]
The typical use case here, though, is the one where the Internet can also do best: networks contained within a specific country. In such a network, inter-carrier connectivity will be optimal, paths will be many, and overall reliability quite high. Packet loss will be low, latency low, and though still variable, the variability will tend to be across a relatively narrow range.
Global Distance = Latency, Loss, and Variability
Narrow relative to what? In this case, narrow when compared to the range of latency variation across global networks. Base latency inevitably increases with distance, but the speed of light does not tell the whole story. The longer the distances involved, the greater the number of optical/electronic conversions, bumping up latency even further and steadily increasing cumulative error rates. And the more carrier interconnects a path crosses, the worse things get: even more packets lost, more errors, and another place where variability in latency creeps in.
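The distance contribution alone sets a hard floor on latency. Light in fiber travels at roughly two-thirds of c, about 200 km per millisecond, so a back-of-envelope lower bound is easy to compute. A sketch, assuming an idealized straight fiber path with no conversions, queuing, or interconnect delays:

```python
# Light in fiber travels at roughly 2/3 of c: about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(path_km: float) -> float:
    """Lower bound on round-trip time (ms) over a fiber path of the given
    one-way length. Real paths add conversion, queuing, and interconnect
    delays on top of this floor."""
    return 2 * path_km / FIBER_KM_PER_MS

print(min_rtt_ms(10000))  # 100.0 -- a trans-global path starts at ~100 ms RTT
print(min_rtt_ms(500))    # 5.0  -- in-country paths stay in single digits
```

This is why an in-country SD-WAN can feel snappy while a globe-spanning one struggles: the floor itself is an order of magnitude higher before any Internet unpredictability is added.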
A truly global Internet-based WAN will face innumerable challenges to delivering consistent high-speed performance thanks to all this complexity. In such a use case, the unpredictability of variation in latency as well as the greater range for the variation is likely to make the user experiences unpleasantly unreliable, especially for demanding and performance-sensitive applications.
Global Fix: Optimized Middle Miles
To fix the problem without simply reverting to a private WAN, one can seek to limit the use of public networks to the role they best fill: the ingress and egress, connecting a site to the world. But instead of having the Internet be the only path available to packets, you can also have a controlled, optimized, and consistent middle-mile network. Sites connect over the Internet to a point of presence (POP) that is “close” for Internet values of the term—just a few milliseconds away, basically, without too many hops. The POPs are interconnected with private links that bypass the complexity and unpredictability of the global Internet to deliver consistent and reliable performance across the bulk of the distance. Of course, they still also have the Internet available as backup connectivity! Given such a middle-mile optimization, even a globe-spanning SD-WAN can expect to deliver solid performance comparable to—but still at a lower cost than—a traditional private MPLS WAN.
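The "close PoP" selection described above can be sketched as picking the PoP with the lowest measured RTT from the site. The PoP names below are hypothetical, and a real service would also weigh load, capacity, and path quality, not RTT alone:

```python
def nearest_pop(rtt_by_pop: dict) -> str:
    """Pick the PoP with the lowest measured RTT (in ms) from the site.
    rtt_by_pop maps PoP name -> most recent RTT measurement."""
    return min(rtt_by_pop, key=rtt_by_pop.get)

# Hypothetical RTT probes from a branch site to three candidate PoPs:
measured = {"pop-frankfurt": 4.2, "pop-london": 9.8, "pop-paris": 6.1}
print(nearest_pop(measured))  # pop-frankfurt
```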
Fight Edge Vendor Sprawl

One potential pain point in an SD-WAN deployment is vendor sprawl at the WAN edge: the continual addition of vendors to the portfolio IT has to manage to keep the edge functional and healthy.
This sprawl comes in two forms: appliance vendors, and last-mile connectivity vendors.
As noted in an earlier post, most folks deploying an SD-WAN want to flatten that unsightly pile of appliances in the branch network closet. In fact, according to Nemertes’ research:
78% want SD-WAN to replace some or all branch routers
77% want to replace some or all branch firewalls
82% want to replace some or all branch WAN optimizers
In so doing, they not only slough off the unsightly pounds of excess equipment but also free themselves from the technical and business relationships that come with those appliances, in favor of the one associated with their SD-WAN solution.
The same issue crops up in a less familiar place with SD-WAN: last mile connectivity.
But we thought having options for the last mile was good!
It can be, for sure. One of the cool things about SD-WAN, after all, is that it allows a company to use the least expensive connectivity available at each location. Given that this is generally the basis of the business case that gets SD-WAN in the door, an enterprise with a large WAN can easily wind up with dozens or scores of providers, representing a mix of regional, local, and hyper-local cost optimizations. In some places, having the same provider for everybody may be most cost-effective; in others, a different provider per metro area, or even per location in a metro, might save the most money. And money—especially hard dollars flowing out the door—talks.
But just as with the branch equipment stack, there is real cost, both financial and operational, to managing each last-mile provider relationship. The business-to-business relationship of contracts and bills requires time and effort to maintain; contracts need to be vetted, bills reviewed, disputes registered and settled. So too does the IT-to-service provider technical relationship take work: teams need to know how to reach each other, ticket systems need to be understood, disaster recovery plans tested, and so on.
The technical support part of the relationship can be especially trying in a hyper-aggressive cost reduction environment. A focus on extreme cost-cutting may lead IT to embrace consumer-grade connectivity in some locations, even when business-grade connectivity is readily available. IT will then have to deal with consumer-grade technical support hours and efforts as well, which in the long term can eat up in soft dollars much of the potential hard dollar savings.
Grappling with sprawl
SD-WAN users have to either sharply limit the size of their provider pool, or make it someone else’s problem to handle the sprawl. Our data show that more than half of SD-WAN users want someone else to handle the last mile vendor relationship.
[caption id="attachment_9024" align="alignnone" width="625"] How SD-WAN users deal with last-mile sprawl[/caption]
When others manage the last mile, we see dramatic decreases in site downtime, both in the duration of incidents and in total downtime for the year. If the same vendor also manages the SD-WAN itself, IT minimizes the potential for confusion and finger-pointing should problems arise, without losing the benefits of cost reduction and last-mile diversity.
Beyond the Thin Branch: Move Network Functions to Cloud, Says Leading Analyst

Retailers, financial services firms, and other kinds of companies want to become more agile in their branch strategies: be able to light up, move, and shut down branches quickly and easily. One sticking point has always been the branch network stack: deploying, configuring, managing, and retrieving the router, firewall, WAN optimizer, etc., as branches come and go. And everyone struggles with deploying new functionality at all their branches quickly: multi-year phased deployments are not unusual in larger networks.
Network as a Service (NaaS) has arisen as a solution: use a router in the branch to connect to a provider point of presence, usually over the Internet, with the rest of the WAN’s functionality delivered there.
In-net SD-WAN is an extension of the NaaS idea: SD-WAN—centralized, policy-based management of the WAN delivering the key functions of WAN virtualization and application-aware security, routing, and prioritization—delivered in the provider’s cloud across a curated middle mile.
In-net SD-WAN allows maximum service delivery with minimum customer premises equipment (CPE) because most functionality is delivered in the service provider cloud, anywhere from edge to core. We've discussed the benefits of this kind of simplification to the stack. It offers a host of other benefits as well, based on the ability to dedicate resources to SD-WAN work as needed, and to perform that work wherever it is most effective and economical. Some jobs are best handled in carrier points of presence (their network edge), such as packet replication or dropping, or traffic compression. Others may be best executed in public clouds or the provider's core, such as traffic and security analytics and intelligent route management.
Cloud Stack Benefits the Enterprise: Freedom and Agility
People want a lot out of their SD-WAN solution: routing, firewalling, and WAN optimization, for example. (Please see figure 1.)
[caption id="attachment_6278" align="aligncenter" width="939"] Figure 1: Many Roles for SD-WAN[/caption]
Enterprises working with in-net SD-WAN are more free to use resource-intensive functions without feeling the limits of the hardware at each site. They are free to try new functions more frequently and to deploy them more broadly without needing to deploy additional or upgraded hardware. These facts can allow a much more exact fit between services needed and services used since there is no up-front investment needed to gain the ability to test an added function.
Enterprises are also able to deploy more rapidly. On trying new functions at select sites and deciding to proceed with broader deployment, IT can snap-deploy to the rest. On lighting up a new site, all standard services—as well as any needed uniquely at that site—can come online immediately, anywhere.
Cloud Stack Benefits: Security and Evolution
The provider, working with a software-defined service cloud, can spin up new service offerings in a fraction of the time required when functions depend on specialized hardware. The rapid evolution of services, as well as the addition of new ones, makes it easier for an enterprise to keep current and to get the benefits of great new ideas.
And, using elastic cloud resources for WAN security functions decreases the load on networks, and on data center security appliances. Packets that get dropped in the provider cloud for security reasons don’t consume any more branch or data center link capacity, or firewall capacity, or threaten enterprise resources. This reduces risk for the enterprise overall.
Getting to Zero
A true zero-footprint stack is not possible, of course. Functions like link load balancing and encryption have to happen on premises so there always has to be some CPE (however generic a white box it may be) on site. But the less that box has to do, and the more the heavy lifting can be handled elsewhere, the more the enterprise can take advantage of all these benefits in an in-net SD-WAN.
WAN on a Software Timeline

WANs are slow. Not in terms of data rates and such, but in terms of change. In most enterprises, the WAN changes more slowly than just about any other part of the infrastructure. People like to set up routers and such and then touch them as infrequently as possible—and that goes both for enterprises and for their WAN service providers as well.
One great benefit of SDN, SD-WAN, and the virtualization of network functions (whether via virtual appliances or actual NFV) is that they can put innovation in the network on a software timeline rather than a hardware timeline. IT agility is a major concern for any organization caught up in a digital transformation effort. Nemertes’ most recent cloud research study found a growing number of organizations, currently 37%, define success in terms of agility. So, the speed-up is sorely needed. This applies whether the enterprise is running that network itself, or the network belongs to a service provider.
When the network requires specialized and dedicated hardware to provide core functionality, making significant changes in that function can take months or years. Months, if the new generation of hardware is ready and “all” you have to do is roll it out (and you roll it out aggressively, at that). Years, if you have to wait for the new functionality to be added to custom silicon or wedged into a firmware or OS update. If you are an enterprise waiting to implement some new customer-facing functionality, or a new IOT initiative, waiting that long can see an opportunity pass. If you are a service provider, having to wait that long means you cannot offer new or improved services to your customers frequently, quickly, or easily—you have to invest a lot, between planning for and rolling out infrastructure to support your new service, before you can see how well customers take up the service.
When the network is running on commodity hardware such as x86 servers and whitebox switches, you can fundamentally change your network’s capabilities by deploying new software. Fundamental change can happen in weeks, even days. Proof of concept deployments take less time, and it is easier to upshift them into full deployments or downshift them to deal with problems. Rolling a change back becomes a matter of reverting to the previous state.
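"Reverting to the previous state" can be as simple as a versioned configuration store: deploying appends a new version, rolling back pops it. This is a toy sketch of the idea, not any particular product's mechanism:

```python
class ConfigStore:
    """Minimal versioned configuration store: deploying appends a new
    version; rolling back reverts to the previous one in a single step."""
    def __init__(self, initial):
        self._versions = [initial]

    def deploy(self, new_config):
        self._versions.append(new_config)

    def rollback(self):
        # Keep at least the initial version so there is always a valid config.
        if len(self._versions) > 1:
            self._versions.pop()
        return self.current

    @property
    def current(self):
        return self._versions[-1]

store = ConfigStore({"feature_x": False})
store.deploy({"feature_x": True})  # roll out the new capability
store.rollback()                   # problem found: revert in one step
print(store.current)  # {'feature_x': False}
```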
Whether enterprise or service provider, this shift lowers the barrier to innovation in the network, dramatically. It becomes possible to try more innovations and offer more new services with a much lower investment of staff time and effort, a lot less money up front, and with a much shorter planning horizon. The organization can work more to current needs and spend less time trying to predict demand five years in advance in order to justify a major capital equipment rollout.
For enterprises, shifting to a software-based networking paradigm (incorporating some or all of SDN, SD-WAN, virtual appliances and/or NFV) allows in-house application developers unprecedented opportunities to affect or respond to network behavior. For service providers, it means being able to more quickly meet changing customer needs, address new opportunities, and use new technologies as they become available.
Of course, making a shift as fundamental and far-reaching as moving from legacy to SDN, from specialized hardware to white boxes, is not trivial. But it should be on the roadmap for every organization, for all the parts of the network they run on their own; it should be a selection criterion for their network service providers as well.
SD-WAN: Unstacking the Branch for WAN Simplicity

Managing a big pile of network gear at every branch location is a hassle. No surprise, then, that Nemertes' 2018-19 WAN Economics and Technologies research study is showing huge interest in collapsing the branch stack among those deploying SD-WAN:
78% want to replace some or all branch routers with SD-WAN solutions
77% want to replace some or all branch firewalls
82% want to replace some or all branch WAN optimizers
Collapsing the WAN stack can have capital benefits, though they are not guaranteed unless one moves to an all-opex model, whether box-based Do-It-Yourself (DIY) or an in-net/managed solution (where a service provider manages the solution and delivers some or all SD-WAN functionality in its network instead of at the endpoints). After all, the more you want a single box to do, the beefier it has to be, and one expensive box can wind up costing more than three relatively cheap ones. More compelling in the long run are the operational benefits of collapsing the stack: a smaller vendor and product pool, easier staff skills maintenance, and simpler management processes.
IT sees benefits from reducing the number of products and vendors it has to manage through each device layer's lifecycle. Fewer vendors means fewer management contracts to juggle. It means fewer sales teams to try to turn into productive partners. And it means fewer technical support teams to learn how to work with, then relearn again and again through vendor restructurings, acquisitions, product divestitures, or simply support staff turnover. Having a single relationship, whether with a box vendor or a service provider, brings these costs down as far as possible and simplifies relationship management as much as possible.
Fewer solutions typically also means reducing the number of technical skill sets needed to keep the WAN humming. There is that special, though not uncommon, case where solutions converge but management interfaces don’t, resulting in little or no savings or improvement of this sort. But, when converged solutions come with converged management tools and a consistent, unified interface, life gets better for WAN engineers. When a team only has to know one or two management interfaces instead of five or six, it is easier for everyone to master them, and so to provide effective cross-coverage. Loss or absence of a team member no longer carries the risk of a vital skill set going missing.
Most importantly, though, IT should be able to look forward to simplifying operations. When the same solution can replace the router, firewall, and WAN optimizer, change management gets easier, and the need to do network functional regression testing decreases. IT no longer has to worry that making a configuration change on one platform will have unpredictable effects on other boxes in the stack. The need to make sure one change won’t trigger cascading failures in other systems is part of what drives so many organizations to avoid changing anything on the WAN, whenever possible.
A side effect of that lowered barrier to change on the WAN should be improved security. We have seen far too many networks in which branch router operating systems are left unpatched for very long stretches of time, IT being unwilling to risk breaking them in order to push out patches, even security patches.
Although it can be argued that the SD-WAN appliance becomes too much of a single point of failure when it takes over the whole stack, it is worth remembering that when three devices are stacked up in the traffic path, failure in any of them can kill the WAN. A lone single point of failure is better than three at the same place, and it is easier to engineer high availability for a single solution than for several.
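The point about stacked devices can be made concrete with basic availability arithmetic: a serial chain in the traffic path is up only when every device is up, so availabilities multiply. A sketch, assuming independent failures:

```python
def path_availability(device_availabilities):
    """Availability of a serial chain of devices in the traffic path.
    The path is up only if every device is up, so (assuming independent
    failures) the individual availabilities multiply."""
    avail = 1.0
    for a in device_availabilities:
        avail *= a
    return avail

# Three stacked 99%-available boxes vs. one consolidated 99%-available box:
print(round(path_availability([0.99, 0.99, 0.99]), 4))  # 0.9703
print(path_availability([0.99]))                         # 0.99
```

Three 99% boxes in series yield roughly 97% path availability, about 2.6 days of expected downtime per year versus under 4 days of total outage budget for a single box, which is why consolidating the stack (and then engineering redundancy for the one remaining device) tends to come out ahead.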
And, of course, if the endpoint is mainly a means of connecting the branch to an in-net solution, redundancy at the endpoint is even easier (and redundancy in the provider cloud should be table stakes as a selection criterion). Whether IT is doing the engineering itself or relying on the engineering of a service provider, that’s a win no matter what.