New World, New Thinking: Why “The Box” Has Got To Go


We are living in an agile world. On-demand, self-service and “just in time” have become standard for the applications and services we use, when and where we want to use them. The Cloud has the potential to become a truly agile enterprise computing platform. This is the main thesis in Tom Nolle’s recent blog, titled “Following Google’s Lead Could Launch the ‘Real Cloud’ and NFV Too.” The main point that Nolle makes is that for the Cloud to serve as a platform for agile service delivery, enterprises and service providers must drop their “box” mindset.

As Nolle points out, “… if we shed the boundaries of devices to become virtual, shouldn’t we think about the structure of services in a deeper way than assuming we succeed by just substituting networks of instances for networks of boxes?”

This is a key question in the evolution of cloud services for telecommunication companies, internet service providers (ISPs), cloud service providers (CSPs) and managed security service providers (MSSPs). Historically, service providers either managed or hosted various parts of a customer infrastructure in their facilities. It created the “perception of cloud” – a shift from capital expense to an operational expense model, but using the same underlying technology. “The box” remained, and service providers had to buy, configure, update, upgrade, patch and maintain it. They were not truly leveraging the power of the Cloud, so the cost of services remained high and agility stayed low. As Graham Cluley points out, the Cloud was simply “someone else’s computer.”

Enter Network Function Virtualization (NFV). Service providers are pushing internal projects to provide various network and security functions as cloud services. NFV infrastructure involves a management and orchestration layer that determines which services should be activated for a customer, and VNFs (virtual network functions) that represent the services themselves. In the context of firewalls, for example, these are virtual appliances from companies like Fortinet and Cisco. “The box” remains: it still needs to be managed as a single instance, and configured, upgraded and patched. The capacity flowing through each appliance has to be sized in advance, and the load on the underlying infrastructure that runs these appliances can be highly volatile. The NFV “cloud” was nothing more than a network of boxes.
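To make the "network of boxes" problem concrete, here is a minimal sketch (the names and structure are illustrative, not any vendor's actual MANO API) of the classic NFV model: an orchestration layer activates a dedicated VNF instance per customer, and every one of those instances is still a separate box to size and patch.

```python
# Illustrative sketch of per-customer VNF "boxes" under an orchestration layer.
from dataclasses import dataclass, field

@dataclass
class VnfInstance:
    """A per-customer virtual appliance, e.g. a virtual firewall."""
    customer: str
    function: str          # "firewall", "wan-opt", ...
    capacity_mbps: int     # must be sized up front
    patch_level: str = "1.0"

@dataclass
class Orchestrator:
    """The management/orchestration (MANO) layer: activates VNFs per customer."""
    instances: list = field(default_factory=list)

    def activate(self, customer: str, function: str, capacity_mbps: int) -> VnfInstance:
        vnf = VnfInstance(customer, function, capacity_mbps)
        self.instances.append(vnf)
        return vnf

    def pending_patches(self, latest: str) -> list:
        # Every instance is still an individual box that needs patching.
        return [v for v in self.instances if v.patch_level != latest]

mano = Orchestrator()
mano.activate("acme", "firewall", capacity_mbps=500)
mano.activate("globex", "firewall", capacity_mbps=200)
print(len(mano.pending_patches("2.0")))  # 2 -- both "boxes" need individual patching
```

The virtualization changed where the appliance runs, but not the per-instance operational burden the paragraph describes.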

Nolle makes the point that in order to really use the Cloud’s full potential, a new application has to be built to leverage its agility, flexibility, and elasticity. It is simply not possible to take legacy applications (a.k.a. boxes) and expect them to become cloud-aware. Nolle suggests five principles for making cloud-aware applications. While Nolle’s “application” refers to any business or infrastructure capability, I will use his principles to discuss what I believe is needed to deliver cloud-based network security as a service (NSaaS).


“You have to have a lot of places near the edge to host processes, rather than hosting in a small number (one, in the limiting case) of centralized complexes. Google’s map showing its process hosting sites looks like a map showing the major global population centers.”

Cloud data centers are core elements of NSaaS. They can be a blend of virtual and physical data centers, but the nature of NSaaS requires physical infrastructure with very thin virtualization to maximize throughput. How many “places” are needed? As a rule of thumb, Gartner uses a latency of 25ms or less from a business user or location. Next-generation CDNs (like Imperva Incapsula) are demonstrating that a CDN can leverage the expansion of the internet backbone and the emergence of internet exchanges to deliver a global network with under 50 locations. Regardless, the edge of the cloud must get close to the end user.


“You have to build applications explicitly to support this sort of process-to-need migration. It’s surprising how many application architectures today harken back to the days of the mainframe computer and even (gasp!) punched cards.  Google has been evolving software development to create more inherent application/component agility.”

Migration is one way to address process-to-need. Another is process-everywhere. NSaaS makes a full network security stack available everywhere (i.e., close to the edge) while maintaining one-ness: a single, logical instance of NSaaS serves a physically distributed environment.
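The process-everywhere idea can be sketched in a few lines (a simplified illustration, not Cato's actual design): one logical tenant policy is replicated to every PoP, so the decision is identical no matter where traffic enters, and nothing needs to migrate toward the need.

```python
# Sketch of "one-ness": a single logical policy, enforced at every PoP.
class Pop:
    """A point of presence running the full security stack."""
    def __init__(self, name):
        self.name = name
        self.policies = {}          # tenant -> policy dict

    def sync(self, tenant, policy):
        self.policies[tenant] = policy

    def allow(self, tenant, dest_port):
        return dest_port in self.policies[tenant]["allowed_ports"]

pops = [Pop("nyc"), Pop("ldn"), Pop("sgp")]
policy = {"allowed_ports": {443, 53}}

for pop in pops:                    # one logical policy, pushed everywhere
    pop.sync("acme", policy)

# The verdict is the same regardless of which PoP the traffic enters.
print(all(pop.allow("acme", 443) for pop in pops))   # True
print(any(pop.allow("acme", 23) for pop in pops))    # False
```

The tenant sees one service; the physical distribution across PoPs is invisible to the policy.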


“Process centers have to be networked so well that latency among them is minimal. The real service of the network of the future is process hosting, and it will look a lot more like data center interconnect (DCI) than what we think of today.”

NSaaS PoPs are interconnected by tier-1 carriers with SLA-backed latency. The low-latency backbone moves network traffic, not workloads (because workloads already run everywhere), along with the control information that keeps the NSaaS components supporting each customer in sync, making the service context-aware.


“The “service network” has to be entirely virtual, and entirely buffered from the physical network. You don’t partition address spaces as much as provide truly independent networks that can use whatever address space they like.  But some process elements have to be members of multiple address spaces, and address-to-process assignment has to be intrinsically capable of load-balancing.”

NSaaS is multi-tenant by design, and each tenant has its own virtual network that is totally independent of the underlying physical implementation. The physical network of NSaaS PoPs communicates over encrypted tunnels using multiple carriers. The PoPs handle traffic routing, optimization, resiliency, and security.


“If service or “network” functions are to be optimal, they need to be built using a “design pattern” or set of development rules and APIs so that they’re consistent in their structure and relate to the service ecosystem in a common way. Andromeda defines this kind of structure too, harnessing not only hosted functions but in-line packet processors with function agility.”

NSaaS has a built-in management layer that keeps all the different PoPs in sync. The physical entry point of a packet is immaterial, because it is always processed in the virtual context of the network it belongs to and the policy that governs that network.

NFV is moving slowly. In the past, we attributed this to potential conflicts between the players in the ecosystem. Nolle says, “the reason we’re not doing what’s needed is often said to be delays in the standards, but that’s not true in my view… We’re focused on IaaS as a business cloud service, and for NFV on virtual versions of the same old boxes connected in the same old way. As I said, piece work in either area won’t build the cloud of the future, carrier cloud or any cloud (bold is mine).”

The bottom line is that architecture, not perception, matters. The network of the future, and its capabilities, must truly live in the cloud. It must align with the “one-ness” view of a cloud service: available anytime, everywhere, seamlessly updated and scaled on demand. This is our vision and the architecture we have built at Cato Networks. To learn more, get our white paper, Network Security is Simple Again.
