By Throop Wilder
Thu, June 04, 2009 — Network World — Virtualization of the data center is provoking fundamental questions about the proper place for network security services. Will they simply disappear into the One True Cloud, dutifully following applications as they vMotion about the computing ether? Will they remain as a set of modular appliances physically separate from the server computing blob? Should everything be virtualized because it can be?
Based on experience with large enterprises, we recommend that the network security infrastructure remain physically separate from the virtualized server/app blob. This separation lets you maintain strong “trust boundaries” and high performance/low latency without sacrificing the flexibility and adaptability of virtualized application infrastructures.
Large data centers have primarily safeguarded their servers with a well-protected perimeter and minimal internal protections. As zonal protection schemes were introduced to mitigate the unfettered spread of worms and intrusions, the natural boundaries became the divisions between the classic three tiers of Web infrastructures: Web, application and data layers.
More recently, enterprises have further segmented these trust zones by service, business unit and other political criteria, yet this infrastructure does not lend itself to change. As three-tier architectures become vastly more flexible due to virtualization projects, the security infrastructure must develop its own flexibility so it is never the bottleneck. Furthermore, it must also maintain the real-time guarantees of high throughput, high connections-per-second rates and low latency.
The good news is that this is precisely what forward-thinking architects and operations teams are designing and building right now. Best of all, some of these teams are discovering that, for once, security and performance optimization appear to benefit from the same strategy. Here’s how.
There are two core principles in new security architecture designs. The first principle is to virtualize within the three layers, not across them, which forces inter-zone traffic to pass through physically separate security equipment.
The second principle is to use equipment that consolidates multiple security services that can be invoked in any combination depending on the type of boundary crossing, while maintaining performance, latency and connection rates. You can refer to this separate layer of security resources as the “second cloud.”
The concept of virtualizing within layers (such as Web, application and database layers) vs. across layers can be depicted as follows. For example, Web servers and application servers are considered to pose different levels of risk.
In Figure 1, VMs of different risk levels are on the same servers and boundary transitions between zones happen entirely inside one or more servers. In Figure 2, all Web VMs run on one physical set of servers while the application VMs run on a separate set. Boundary transitions in this model happen outside of each group of servers.
From a purely operational point of view, the mixed-layer scenario in Figure 1 lends itself to major human error. In this case, a network team hands a set of virtual LANs (VLAN) corresponding to both zones to the server team, which must correctly connect the right VLAN to the right VM. One error, and a VM ends up on the wrong VLAN and a trust boundary has been breached. With the architecture in Figure 2, the only VLAN that needs handing off to the Web layer is the one appropriate for that layer.
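The mis-wired-VLAN error described above can be made mechanically checkable. The sketch below is illustrative only — the zone names, VLAN IDs and VM names are invented, not from the article — but it shows how a simple audit catches a VM attached to the wrong zone's VLAN:

```python
# Hypothetical audit for the Figure 1 (mixed-layer) scenario, where the
# server team receives VLANs for both zones and must wire each VM by hand.
# All names and IDs below are invented for illustration.

ZONE_VLANS = {
    "web": 110,   # assumed VLAN for the Web trust zone
    "app": 120,   # assumed VLAN for the application trust zone
}

vm_assignments = [
    {"vm": "web-01", "zone": "web", "vlan": 110},
    {"vm": "app-01", "zone": "app", "vlan": 110},  # mis-wired: app VM on the Web VLAN
]

def misplaced_vms(assignments, zone_vlans):
    """Return the VMs whose attached VLAN does not match their zone's VLAN."""
    return [a["vm"] for a in assignments if zone_vlans[a["zone"]] != a["vlan"]]

print(misplaced_vms(vm_assignments, ZONE_VLANS))  # ['app-01']
```

In the Figure 2 model no such audit is needed: the server team for the Web layer only ever receives the Web VLAN, so this class of error cannot occur in the first place.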
From an architectural point of view, inter-VM traffic in Figure 1 also requires the addition of security VMs to manage zone boundary transitions. Every server that runs a mix of services will require the addition of the same set of security VMs, leading to VM sprawl. The separate layers depicted in Figure 2 allow for consolidated security equipment to be placed between the layers only once. In addition, this architecture preserves separation of duties between application teams and network security teams, allowing each to perform their respective duties with fewer dependencies on the other.
Finally, a fundamental advantage of the model in Figure 2 is that there is no potential “Heisenberg effect” in which the addition of security VMs impinges on the processing capacity of the servers. The result is vastly improved security performance.
The architectures depicted in Figures 1 and 2 are simplifications that don’t represent the real complexity of multiple service boundaries. Figure 3 more closely depicts the cross-boundary problem in a real world environment in which the type of information being accessed dictates different security policies.
In Figure 3, three application services may each represent different risk classes and therefore require a different combination of security services depending on which boundaries are traversed. For example, one could imagine a VM in Service 3 (Contracts) requiring access to a VM in Service 1, which stores personally identifiable information (PII) – and is therefore governed by PCI requirements.
The security for this cross-boundary transition would require multiple services (such as firewall, intrusion prevention and application firewall) chained together in a particular order. As another example, a Web server accessing the knowledgebase (KB) in Service 2 might need only lightweight firewall access control. The ability to perform per-transition servicing is critical if the security cloud's flexibility is to match that of the application cloud. Figure 4 illustrates this concept more specifically.
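One way to picture per-transition servicing is as a lookup from a boundary crossing to an ordered chain of security services. The Python sketch below is a conceptual illustration, not any real product's API; the zone and service names are invented, loosely echoing the Contracts/PII and Web/KB examples above:

```python
# Illustrative mapping from a boundary crossing (source zone, destination
# zone) to an ordered chain of security services. Names are invented.

SERVICE_CHAINS = {
    # PII-governed transition: full chain, applied in order
    ("contracts", "pii"): ["firewall", "ips", "app_firewall"],
    # Lightweight access-control-only transition to the knowledgebase
    ("web", "kb"): ["firewall"],
}

def chain_for(src_zone, dst_zone):
    """Select the ordered service chain for a boundary crossing.

    An unknown transition returns None, meaning deny — a conservative
    default for a device that enforces trust boundaries.
    """
    return SERVICE_CHAINS.get((src_zone, dst_zone))

print(chain_for("contracts", "pii"))  # ['firewall', 'ips', 'app_firewall']
print(chain_for("web", "kb"))         # ['firewall']
```

The consolidated security layer described next is, in effect, a high-performance realization of this lookup: one platform that can invoke any combination of services per transition.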
This architecture relies on the emergence of a new generation of high performance security equipment (depicted as the green oval) which is able to consolidate and deliver multiple security services from a common platform that enables service selection decisions. Sound familiar? It is the same value proposition delivered by application-services virtualization, but applied to security services.
Most importantly, this equipment can deliver the correct sequence or “chain” of services with the performance and latency guarantees so critical to the overall end-to-end user experience, while preserving a much simpler architecture and retaining the trust boundaries required for a secure infrastructure.
As engineers begin to experiment with various network security architectures, they will find that the tier/zone-based implementation is a desirable place to start. Not only does it yield excellent performance and flexibility, but it works within the confines of slower-changing organizational/political boundaries. It also lets server infrastructures morph along multiple axes without compromising the highest standards of security.
Wilder is vice president of corporate strategy at Crossbeam Systems.