New World, New Thinking: Why "The Box" Has Got To Go
We are living in an agile world. On-demand, self-service and "just in time" have become standard for the applications and services we use, when and where we want to use them. The Cloud possesses the functionality to create a truly agile enterprise computing platform. This is the main thesis of Tom Nolle's recent blog, titled "Following Google's Lead Could Launch the 'Real Cloud' and NFV Too." Nolle's main point is that for the Cloud to serve as a platform for agile service delivery, enterprises and service providers must drop their "box" mindset.
As Nolle points out, "… if we shed the boundaries of devices to become virtual, shouldn't we think about the structure of services in a deeper way than assuming we succeed by just substituting networks of instances for networks of boxes?"
This is a key question in the evolution of cloud services for telecommunication companies, internet service providers (ISPs), cloud service providers (CSPs) and managed security service providers (MSSPs). Historically, service providers either managed or hosted various parts of a customer infrastructure in their facilities. This created the "perception of cloud" – a shift from a capital-expense to an operational-expense model, but using the same underlying technology. "The box" remained, and service providers had to buy, configure, update, upgrade, patch and maintain it. They were not truly leveraging the power of the Cloud, so the cost of services remained high and agility stayed low. As Graham Cluley points out, the Cloud was simply "someone else's computer."
Enter Network Function Virtualization (NFV). Service providers are pushing internal projects to deliver various network and security functions as cloud services. NFV infrastructure involves a management and orchestration layer that determines which services should be activated for a customer, and virtual network functions (VNFs) that represent the services themselves. In the context of firewalls, for example, these are virtual appliances from companies like Fortinet and Cisco. "The box" remains, and it still needs to be managed as a single instance, configured, upgraded and patched. Capacity through each appliance must be sized in advance, and the load on the underlying infrastructure that runs them can be highly volatile. The NFV "cloud" was nothing more than a network of boxes.
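To make the "network of boxes" point concrete, here is a minimal, hypothetical sketch of the NFV model described above: an orchestration layer activates VNFs per customer, yet each VNF is still a discrete box-like instance with its own capacity sizing and patch level. All names and fields are illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str           # e.g. a virtual firewall appliance
    capacity_mbps: int  # must be sized up front, just like a physical box
    version: str        # each instance is patched/upgraded individually

@dataclass
class Customer:
    name: str
    vnfs: list[VNF] = field(default_factory=list)

class Orchestrator:
    """Minimal MANO-style layer: activates service instances per customer."""
    def __init__(self):
        self.customers: dict[str, Customer] = {}

    def activate(self, customer: str, service: str, capacity_mbps: int) -> VNF:
        vnf = VNF(service, capacity_mbps, version="1.0")
        self.customers.setdefault(customer, Customer(customer)).vnfs.append(vnf)
        return vnf

orch = Orchestrator()
orch.activate("acme", "virtual-firewall", capacity_mbps=500)
orch.activate("acme", "virtual-router", capacity_mbps=1000)
# Two VNFs activated "in the cloud" – but each remains a separately
# managed, separately sized instance: a virtual box.
print(len(orch.customers["acme"].vnfs))
```

Note that nothing in this model eliminates per-instance management; it only relocates the boxes, which is exactly the criticism above.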
Nolle makes the point that in order to really use the Cloud's full potential, a new application has to be built to leverage its agility, flexibility and elasticity. It was simply not possible to take legacy applications (aka boxes) and expect them to become cloud-aware. Nolle suggests five principles for making cloud-aware applications. While Nolle's "application" refers to any business or infrastructure capability, I will use his principles to discuss what I believe is needed to deliver cloud-based network security as a service (NSaaS).
"You have to have a lot of places near the edge to host processes, rather than hosting in a small number (one, in the limiting case) of centralized complexes. Google's map showing its process hosting sites looks like a map showing the major global population centers."
Cloud data centers are core elements of NSaaS. They can be a blend of virtual and physical data centers, but the nature of NSaaS requires physical infrastructure with very thin virtualization to maximize throughput. How many "places" are needed? As a rule of thumb, Gartner uses a latency of 25ms or less from a business user or location. Next-generation CDNs (like Imperva Incapsula) are demonstrating that a CDN can leverage the expansion of the internet backbone and the emergence of internet exchanges to deliver a global network with under 50 global locations. Regardless, the edge of the cloud must get close to the end user.
"You have to build applications explicitly to support this sort of process-to-need migration. It's surprising how many application architectures today harken back to the days of the mainframe computer and even (gasp!) punched cards. Google has been evolving software development to create more inherent application/component agility."
Migration is one way to address process-to-need. Another way is process-everywhere. NSaaS makes a network security stack available everywhere (i.e., close to the edge), but still maintains one-ness (i.e., a single, logical instance of NSaaS serves a physically distributed environment).
"Process centers have to be networked so well that latency among them is minimal. The real service of the network of the future is process hosting, and it will look a lot more like data center interconnect (DCI) than what we think of today."
NSaaS PoPs are interconnected by tier-1 carriers with SLA-backed latency. The low-latency backbone moves network traffic, not workloads (because workloads run everywhere), along with the control information that keeps the NSaaS components serving each customer in sync and context-aware.
"The 'service network' has to be entirely virtual, and entirely buffered from the physical network. You don't partition address spaces as much as provide truly independent networks that can use whatever address space they like. But some process elements have to be members of multiple address spaces, and address-to-process assignment has to be intrinsically capable of load-balancing."
NSaaS is multi-tenant by design, and each tenant has its own virtual network that is totally independent of the underlying physical implementation. The physical network of NSaaS PoPs communicates over encrypted tunnels using multiple carriers. The PoPs handle traffic routing, optimization, resiliency, and security.
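The decoupling of tenant address spaces from the physical network can be sketched in a few lines. In this hypothetical model (tenant names and ranges are invented), two tenants reuse the same private address range without conflict, because every address is evaluated inside its tenant's virtual context rather than globally.

```python
import ipaddress

class TenantNetwork:
    """A tenant's virtual network, independent of the physical PoP layer."""
    def __init__(self, tenant: str, cidr: str):
        self.tenant = tenant
        # Tenants may use whatever address space they like, even overlapping
        # ranges, since the space is never shared across virtual networks.
        self.address_space = ipaddress.ip_network(cidr)

    def owns(self, addr: str) -> bool:
        return ipaddress.ip_address(addr) in self.address_space

# Two tenants deliberately pick the same RFC 1918 range:
acme = TenantNetwork("acme", "10.0.0.0/8")
globex = TenantNetwork("globex", "10.0.0.0/8")

# The same address is valid in both virtual networks, with no collision,
# because ownership is always resolved per tenant.
print(acme.owns("10.1.2.3"), globex.owns("10.1.2.3"))  # True True
```

The physical PoPs and encrypted tunnels carry the traffic, but nothing in the tenant's addressing depends on them – which is the "entirely buffered from the physical network" property Nolle describes.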
"If service or 'network' functions are to be optimal, they need to be built using a 'design pattern' or set of development rules and APIs so that they're consistent in their structure and relate to the service ecosystem in a common way. Andromeda defines this kind of structure too, harnessing not only hosted functions but in-line packet processors with function agility."
NSaaS has a built-in management layer that keeps all the different PoPs in sync. The physical entry point of a packet is immaterial, because it is always processed in the virtual context of the network it belongs to and the policy that governs that network.
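The "entry point is immaterial" claim can be shown with a toy example. In this sketch (the tenants, ports and policies are all hypothetical), the policy decision is keyed only by the tenant's virtual context, so the same packet gets the same verdict no matter which PoP it enters through.

```python
# Per-tenant policies, kept in sync across all PoPs by the management layer.
POLICIES = {
    "acme":   {"allow_ports": {443, 80}},
    "globex": {"allow_ports": {443}},
}

def process_packet(entry_pop: str, tenant: str, dst_port: int) -> str:
    # Note: entry_pop is deliberately unused in the decision. The packet is
    # processed in the virtual context of the network it belongs to.
    policy = POLICIES[tenant]
    return "allow" if dst_port in policy["allow_ports"] else "drop"

# The same tenant traffic through two different PoPs yields identical verdicts:
print(process_packet("frankfurt", "acme", 80),
      process_packet("ashburn", "acme", 80))  # allow allow
```

A real implementation would, of course, consult distributed state rather than a local dict; the point of the sketch is only that policy is a function of the virtual network, not of the physical entry point.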
NFV is moving slowly. In the past, we attributed this to potential conflicts between the players in the ecosystem. Nolle says, "the reason we're not doing what's needed is often said to be delays in the standards, but that's not true in my view… We're focused on IaaS as a business cloud service, and for NFV on virtual versions of the same old boxes connected in the same old way. As I said, piece work in either area won't build the cloud of the future, carrier cloud or any cloud" (emphasis mine).
The bottom line is that architecture, not perception, matters. The network of the future, and its capabilities, must truly live in the cloud. It must align with the "one-ness" view of a cloud service: available anytime, everywhere, seamlessly updated and scaled on demand. This is our vision and the architecture we have built at Cato Networks. To learn more, get our white paper, Network Security is Simple Again, here.