Much has been said about Network Virtualization over the last year. The topic attracted quite a lot of attention after Nicira came out of stealth mode and launched its products last week. So what is Network Virtualization, and what can be done with this new technology? This blog is my attempt to capture the current challenges in the data center (DC) and the promises this new technology offers.
So what are the market needs?
Essentially, what we all want is a simple networking topology where any workload can be placed anywhere without needing to change the network attributes. That is not a difficult ask, is it? Let's see how it can be achieved. We all know that in computer networking, two nodes can talk to each other either via Layer 2 or Layer 3.
Layer 2 is the simplest topology: all the networking devices share the same network segment and communicate with each other directly using MAC addresses. But the problem with L2 is scalability. However, there have been many innovations lately from the networking giants (read FabricPath and QFabric) to flatten the network and improve L2 scalability. The bottom line is that L2 is simple and easy to implement. It is convenient for distributed application servers to talk to one another over L2. Lastly, and most importantly, our goal of placing any workload anywhere can easily be accomplished with L2 without changing the network attributes.
Layer 3, on the other hand, is scalable due to its hierarchical addressing scheme. Route aggregation provides scalability, ECMP provides efficient network utilization, and shortest-path-first routing provides efficiency. However, one significant disadvantage of the L3 architecture is that a workload can't be moved across subnets without changing its network addresses. That statement is not absolutely correct, because LISP tries to address this, but the network has to support LISP natively.
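To make the route-aggregation point concrete, here is a tiny sketch using Python's standard `ipaddress` module (the subnet values are made up for illustration): because L3 addressing is hierarchical, contiguous subnets behind the same router can be advertised as a single summary route instead of many individual ones.

```python
import ipaddress

# Four contiguous /26 subnets that all sit behind the same router.
subnets = [
    ipaddress.ip_network("10.1.0.0/26"),
    ipaddress.ip_network("10.1.0.64/26"),
    ipaddress.ip_network("10.1.0.128/26"),
    ipaddress.ip_network("10.1.0.192/26"),
]

# collapse_addresses merges contiguous prefixes into the shortest
# covering set -- here, a single /24 route advertisement replaces four.
aggregated = list(ipaddress.collapse_addresses(subnets))
print(aggregated)  # [IPv4Network('10.1.0.0/24')]
```

This is exactly why L3 scales: core routers only need to carry the summary, not every leaf subnet.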
So what do we want?
We want L3 scalability but L2 simplicity and flexibility. Fundamentally, what we are looking for is an abstraction layer on top of the underlying L3 networking infrastructure. The abstraction layer would enable users to create Layer 2 segments across Layer 3 boundaries. In a nutshell, this is what network virtualization is.
So how do we achieve that?
Well, primarily what we are doing is tunneling: tunnel L2 frames in L3 packets at the edge. That way, the existing network infrastructure sees the outer L3 packets and routes them accordingly. Fundamentally, we are creating an overlay. VXLAN and NVGRE are some of the standards being used to create the overlay at the edge. But what about performance? Can we really scale these overlay technologies for an MSDC (massively scalable data center) environment, which means scaling beyond 20K+ ports? For instance, VXLAN needs multicast at cloud scale, and NVGRE creates an L3 mesh with inefficient link utilization. Well, I think we will find a way to optimize these technologies to operate at cloud scale.
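To show how lightweight the encapsulation itself is, here is a minimal sketch of the VXLAN part in Python. It only builds the 8-byte VXLAN header defined in the VXLAN spec and prepends it to an inner Ethernet frame; in a real deployment this payload would then be wrapped in an outer UDP/IP header (UDP port 4789) that the L3 fabric routes normally. The function name and the placeholder frame are my own for illustration.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout: 1 byte of flags (the 0x08 'I' bit marks a valid VNI),
    3 reserved bytes, a 24-bit VXLAN Network Identifier, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # "!B3x" packs the flags byte plus 3 zero pad bytes; shifting the
    # VNI left by 8 places it in the top 24 bits of the final word.
    header = struct.pack("!B3x", 0x08) + struct.pack("!I", vni << 8)
    return header + inner_frame

frame = b"\x00" * 14  # placeholder for a real Ethernet frame
packet = vxlan_encapsulate(frame, vni=5000)
assert len(packet) == 8 + len(frame)
```

The point is that the underlay never inspects the inner frame; it just routes the outer packet, which is why the overlay can be deployed without touching the existing L3 gear.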
So our ultimate goal of placing any workload anywhere can be achieved with these overlay technologies. And since we are creating these overlays at the edge, many cool things are possible. For example, we can have many virtual segments with the same IP address ranges. This enables enterprise or SP customers to provide any IP addresses to any applications/customers, irrespective of what is already in use in their network. The other cool use case would be IPv6 endpoints communicating with one another over IPv4 infrastructure. I believe Network Virtualization is here to stay, but only time will tell how fast companies adopt this new technology.
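As a closing aside, the overlapping-address-range trick above boils down to one idea: at the virtualization edge, lookups are keyed by the virtual segment ID as well as the inner IP. Here is a toy sketch (the table entries and endpoint names are invented for illustration) showing two tenants reusing the exact same address with no conflict.

```python
import ipaddress

# Hypothetical forwarding table at a virtualization edge device.
# Keys are (VNI, inner IP): the segment ID namespaces the address
# space, so 10.0.0.5 can exist independently in both tenants.
forwarding_table = {
    (5001, ipaddress.ip_address("10.0.0.5")): "vtep-a.example",  # tenant A
    (5002, ipaddress.ip_address("10.0.0.5")): "vtep-b.example",  # tenant B
}

def lookup(vni: int, inner_ip: str) -> str:
    """Resolve which tunnel endpoint hosts this (segment, address) pair."""
    return forwarding_table[(vni, ipaddress.ip_address(inner_ip))]

print(lookup(5001, "10.0.0.5"))  # tenant A's endpoint
print(lookup(5002, "10.0.0.5"))  # same IP, different tenant
```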