Network Virtualization – Nexus 1000v

Network virtualization can be grouped into two methods:

1) One to many

2) Many to one

“One to many” means segmenting one physical network into multiple logical segments; “many to one”, on the other hand, means consolidating multiple physical devices into one logical entity. By definition they seem to be opposites, but in the world of network virtualization they fall under the same umbrella.

 

Logical Segmentation: One to Many

With one-to-many network virtualization, a single physical network is logically segmented into multiple virtual networks. Each virtual network could correspond to a user group or a specific security function.

End-to-end path isolation requires the virtualization of networking devices and their interconnecting links. VLANs have traditionally been used for this, with the hosts of one user group mapped to a single VLAN. To extend the path across multiple switches at Layer 2, VLAN tagging (802.1Q) carries the VLAN information between switches. These VLAN trunks transport multiple VLANs over a single Ethernet interface.
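
As a rough, hedged sketch (interface names and VLAN numbers are illustrative, chosen to match the diagram below), the 802.1Q trunk between two switches might look like this on an IOS-style switch:

    vlan 101
     name User-Group-A
    vlan 201
     name User-Group-B
    !
    interface GigabitEthernet0/1
     description Trunk toward the neighbouring switch
     switchport trunk encapsulation dot1q   ! needed on platforms that also support ISL
     switchport mode trunk
     switchport trunk allowed vlan 101,201

Frames leaving this interface carry an 802.1Q tag identifying their VLAN, so both user groups can share the same physical wire while remaining logically separate.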

The diagram below displays two independent VLANs, VLAN 201 and VLAN 101. Both VLANs share one physical wire to provide Layer 2 reachability between hosts connected to Switch B and Switch A via Switch C, yet VLAN 201 and VLAN 101 remain separate entities.

VLAN Trunking

 

VLANs are sufficient for small Layer 2 segments. Today’s networks, however, are likely to have a mix of Layer 2 and Layer 3 routed networks. In this case, Layer 2 VLANs by themselves are not sufficient because the isolation must be extended across Layer 3 devices. This can be achieved with a technology called Virtual Routing and Forwarding (VRF). A VRF instance logically carves a Layer 3 device into a number of isolated, independent Layer 3 devices. VRFs configured on the same device cannot communicate with each other directly.
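
As a minimal sketch, assuming an IOS-style router and made-up VRF names and addresses, the logical carve-up might look like this:

    ip vrf RED
    ip vrf BLUE
    !
    interface GigabitEthernet0/0
     ip vrf forwarding RED
     ip address 10.1.1.1 255.255.255.0
    !
    interface GigabitEthernet0/1
     ip vrf forwarding BLUE
     ip address 10.1.1.1 255.255.255.0

Each interface now belongs to exactly one routing instance; the overlapping 10.1.1.1 addresses are legal precisely because the two VRFs maintain completely separate routing tables.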

The diagram below displays one physical Layer 3 router with three VRFs: VRF Yellow, VRF Red, and VRF Blue. There is complete separation between these VRFs; without explicit configuration, routes in one VRF cannot be leaked to another.

Virtual Routing and Forwarding (VRF)

 

The virtualization of the interconnecting links depends on how the virtual routers are connected to each other. If they are physically (directly) connected, you can use a technology known as VRF-lite to separate traffic and 802.1Q tagging to label the data plane. This is known as hop-by-hop virtualization. It can run into scalability issues as the number of devices grows, so this type of design is typically used to connect VRFs back to back, i.e. across no more than two devices.
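
To give a feel for hop-by-hop virtualization, here is a hedged sketch of one side of the link between two directly connected VRF-lite routers, carrying each VRF on its own 802.1Q subinterface (VLAN numbers, VRF names, and addressing are assumptions):

    interface GigabitEthernet0/2
     no ip address
    !
    interface GigabitEthernet0/2.101
     encapsulation dot1Q 101
     ip vrf forwarding RED
     ip address 192.168.101.1 255.255.255.252
    !
    interface GigabitEthernet0/2.201
     encapsulation dot1Q 201
     ip vrf forwarding BLUE
     ip address 192.168.201.1 255.255.255.252

Every VRF needs its own subinterface and routing adjacency on every hop, which is exactly why this design becomes hard to scale beyond a pair of back-to-back devices.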

When the virtual routers are connected over multiple hops through an IP cloud, you can use either Generic Routing Encapsulation (GRE) or Multiprotocol Label Switching (MPLS) virtual private networks.

GRE is probably the simpler of the Layer 3 methods and can work over any IP core. GRE encapsulates the original packet and transports it across a network that remains unaware of the payload. The core only sees the outer GRE/IP header, essentially virtualizing the network path.

The GRE encapsulation adds 24 bytes of overhead (the outer IP header plus the GRE header), so the forwarding router may have to break a datagram into fragments to keep packets within the outgoing interface MTU. To avoid this fragmentation, configure the MTU, MSS, and Path MTU Discovery parameters correctly on the outgoing links and intermediate routers. GRE tunnels are typically static: you only configure the tunnel endpoints, and as long as those endpoints are reachable the tunnel stays up. More recent designs, however, also allow dynamic GRE tunnel establishment.
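
A hedged sketch of a point-to-point GRE tunnel with the MTU and MSS adjusted for the 24-byte overhead (addresses and values are illustrative):

    interface Tunnel0
     ip address 172.16.0.1 255.255.255.252
     ip mtu 1476                ! 1500-byte interface MTU minus 24 bytes of GRE/outer-IP overhead
     ip tcp adjust-mss 1436     ! tunnel MTU minus 40 bytes of TCP/IP headers
     tunnel source GigabitEthernet0/0
     tunnel destination 203.0.113.2

As long as 203.0.113.2 is reachable through the IP core, the tunnel stays up, and the core only ever routes on the outer header.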

MPLS/VPN, on the other hand, is a different beast. It requires signaling to distribute labels and build an end-to-end Label Switched Path (LSP); label distribution can be done with BGP+label, LDP, or RSVP. Unlike GRE tunnels, MPLS VPNs do not have to manage multiple point-to-point tunnels to provide a full mesh of connectivity. They are any-to-any by nature, and the labels attached to packets provide the traffic separation.
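
For contrast, a rough sketch of the per-VRF pieces on an MPLS/VPN PE router; the route distinguisher, route targets, and neighbor addresses are placeholders:

    ip vrf RED
     rd 65000:101
     route-target export 65000:101
     route-target import 65000:101
    !
    interface GigabitEthernet0/3
     description Link toward the MPLS core
     mpls ip                    ! enable LDP label distribution on this link
    !
    router bgp 65000
     neighbor 10.255.255.2 remote-as 65000
     neighbor 10.255.255.2 update-source Loopback0
     address-family vpnv4
      neighbor 10.255.255.2 activate
      neighbor 10.255.255.2 send-community extended

MP-BGP carries the VPN routes and labels between PE routers, so any-to-any connectivity falls out of the routing design rather than from a mesh of manually built tunnels.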

 

Network Consolidation: Many to One

Many-to-one network consolidation refers to the grouping of two or more physical devices into one logical device. Examples of this technology include the Virtual Switching System (VSS), stackable switches, and the Nexus virtual Port Channel (vPC). Combining many physical devices into one logical entity allows Spanning Tree Protocol (STP) to view the group as a single switch, so all ports can remain active; by default, STP would block the redundant path.
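
As an illustration of many-to-one consolidation, here is a hedged NX-OS vPC sketch in which two physical Nexus switches present a single logical port channel to a downstream device, so STP sees one path and no port has to be blocked (domain ID, addresses, and interface numbers are assumptions):

    feature vpc
    feature lacp
    !
    vpc domain 10
      peer-keepalive destination 10.0.0.2 source 10.0.0.1
    !
    interface port-channel10
      switchport mode trunk
      vpc peer-link              ! link between the two vPC peer switches
    !
    interface port-channel20
      switchport mode trunk
      vpc 20                     ! the same vPC number is configured on both peers

The downstream switch or server simply sees one port channel, even though its member links land on two separate physical chassis.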

Software Defined Networking (SDN) takes this concept even further; it abstracts the entire network into a single virtual switch. On traditional routers, the control plane and the data plane live on the same device, but with SDN they are decoupled: the control plane moves to a policy-driven controller while the data plane remains local on the OpenFlow-enabled switch.

 

Virtualization-Aware Networks

Server and network virtualization introduced the challenge that multiple VMs must share a single physical network port, such as a network interface controller (NIC). The questions then arise: how do I link multiple VMs to the same uplink, and how do I provide path separation? Today’s networks need to virtualize the physical port and allow policies to be configured per virtual port.

Server Virtualization

 

NIC-per-VM design

One way to do this is a NIC-per-VM design, where each VM is assigned its own physical NIC and the NIC is not shared with any other VM. The hypervisor, aka the virtualization layer, is bypassed and the VM accesses the I/O device directly. This is known as VMDirectPath. This direct path, or pass-through, can improve performance for hosts that use high-speed I/O devices such as 10 Gigabit Ethernet. The higher performance is offset by a loss of flexibility and the inability to move VMs: no vMotion.

 

Virtual-NIC-per-VM in Cisco UCS (Adapter FEX)

Another way is to create multiple logical NICs on the same physical NIC, such as Virtual-NIC-per-VM in Cisco UCS (Adapter FEX). These logical NICs are assigned directly to VMs, and traffic is marked with a vNIC-specific tag in hardware (VN-Tag, the basis for IEEE 802.1BR). Because the VN-Tag tagging is implemented in the server NIC, the physical NIC in the server can be cloned into multiple virtual NICs. This technology provides faster switching and lets you apply a rich set of management features to both local and remote traffic.

 

Software Virtual Switch

The third option is to implement a virtual software switch in the hypervisor. VMware introduced a distributed virtual switch for its vSphere (ESXi) hypervisor called the vSphere Distributed Switch (VDS). Initially, VMware offered a host-local L2 software switch, but this was soon superseded because it lacked a distributed architecture.

Data physically moves between the servers through the external network, but the control plane abstracts this movement so that it looks like one large distributed switch spanning multiple servers. This approach gives a single management and configuration point, similar to the concept of stackable switches: one control plane with many physical data-forwarding paths. The data does not move through a parent partition; instead, each VM logically connects directly to the network through its local vNICs.

 

Nexus 1000v

The VDS introduced by VMware lacked advanced networking features, which led Cisco to introduce the Nexus 1000V software-based switch.

The Nexus 1000v is a multi-cloud, multi-hypervisor and multi-services distributed virtual switch. Its function is to enable communication between VMs.

 

Nexus 1000v

VEM and VSM

The Nexus 1000v has two basic components:

1) The Virtual Supervisor Module ( VSM )

2) The Virtual Ethernet Module ( VEM ).

In comparison to a physical switch, you can view the VSM as the supervisor, handling the control plane functions so that the data plane can forward efficiently, and the VEMs as the line cards that do all the packet forwarding.

The VEM is the software component that runs within the hypervisor kernel and it handles all VM traffic, including inter-VM frames and Ethernet traffic between a VM and external resources.

The VSM runs its own NX-OS code and provides both the control and management planes, integrating with a virtualization manager such as VMware vCenter. You can deploy two VSMs for redundancy; both modules remain constantly synchronized with unicast VSM-to-VSM heartbeats to provide stateful failover in the event of an active VSM failure.
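
The VSM is also where port profiles are defined; once published, they appear in vCenter as port groups that VMs attach to. A minimal hedged sketch (the profile name and VLAN are assumptions):

    port-profile type vethernet WEB-TIER
      vmware port-group
      switchport mode access
      switchport access vlan 101
      no shutdown
      state enabled

Any VM connected to the WEB-TIER port group inherits this policy on whichever VEM it runs, and the policy follows the VM when it moves.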

 

The two available communication options for VSM to VEM are:

Layer 2 control mode: The VSM control interface shares the same VLAN with the VEM

Layer 3 control mode: The VEM and the VSM are in different IP subnets.

The VSM also uses heartbeat messages to detect loss of connectivity between the VSM and the VEM. However, the VEM does not depend on connectivity to the VSM to perform its data plane functions and will continue to forward packets in the event of VSM failure.

 

With Layer 3 control mode, the heartbeat messages are encapsulated in a GRE envelope.
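
A hedged sketch of what Layer 3 control mode configuration might look like (the domain ID, VLAN, and interface names are assumptions): the SVS domain on the VSM is set to L3 mode, and the port profile used by the ESXi VMkernel interface is marked as carrying control traffic.

    ! On the VSM
    svs-domain
      domain id 100
      svs mode L3 interface mgmt0
    !
    ! Port profile applied to the VMkernel interface that reaches the VSM
    port-profile type vethernet L3-CONTROL
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 10
      no shutdown
      state enabled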

 

VSM Best Practices

1) L2 control is recommended for new installations.

2) Use MAC pinning instead of LACP (see the sketch after this list).

3) Packet, Control and Management in the same VLAN.

4) Do not use VLAN 1 for Control and Packet.

5) Use 2 x VSM for redundancy.
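
To illustrate point 2, here is a hedged sketch of an uplink port profile that uses MAC pinning instead of LACP (names and VLANs are assumptions); each vEthernet interface is pinned to one physical uplink, so no port-channel negotiation with the upstream switches is required:

    port-profile type ethernet UPLINK-MAC-PIN
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10,101,201
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled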

 

The maximum supported latency between the VSM and VEM is 10 milliseconds. Therefore, if you have a high-quality DCI link, a VSM can potentially be placed outside the data center and still control its VEMs.

 

Nexus 1000v InterCloud  – the world of many clouds

A key element of the Nexus 1000v is its use case for hybrid cloud deployments and its ability to place workloads in private and public environments via a single pane of glass. Nexus 1000v InterCloud aims to address the main challenges of hybrid cloud deployments, such as security concerns and the lack of control and visibility within the public cloud.

Nexus 1000v InterCloud works together with the Cisco Prime Network Services Controller to create a secure Layer 2 extension between the private data center and the public cloud. This L2 extension is based on the Datagram Transport Layer Security (DTLS) protocol and allows you to securely transfer VMs and network services over a public IP backbone. DTLS is a derivative of the SSL/TLS protocol that provides communications privacy for datagram protocols, so all data in motion is cryptographically isolated and encrypted.

Hybrid Cloud

 

 

The important point is that the enterprise, not the public cloud provider, owns the keys and performs the key management functions.

 

Nexus 1000v Hybrid Cloud Components 

Cisco Prime Network Services Controller (PNSC) for InterCloud**: a VM that provides a single pane of glass to manage all InterCloud functions.
InterCloud VSM: manages port profiles for VMs in the InterCloud infrastructure.
InterCloud Extender: provides secure connectivity to the InterCloud Switch in the provider cloud; installed in the private data center.
InterCloud Switch: a virtual machine in the provider data center with secure connectivity to the InterCloud Extender in the enterprise cloud and to the virtual machines in the provider cloud.
Cloud Virtual Machines: VMs in the public cloud running workloads.

** Cisco Prime Network Services Controller will soon be replaced by InterCloud Director.

 

Prerequisites

Port 80 (HTTP): access from the PNSC for AWS calls and for communicating with InterCloud VMs in the provider cloud.
Port 443 (HTTPS): access from the PNSC for AWS calls and for communicating with InterCloud VMs in the provider cloud.
Port 22 (SSH): from the PNSC to InterCloud VMs in the provider cloud.
UDP 6644: DTLS data tunnel.
TCP 6644: DTLS control tunnel.

 

VXLAN  – Virtual Extensible LAN

The requirement for applications on demand has led to an increase in the number of VLANs that cloud providers need. The traditional 12-bit VLAN identifier, which provides roughly 4,000 VLANs, proved to be a limiting factor in multi-tier, multi-tenant environments, and engineers started to run out of isolation options. This led to the introduction of the 24-bit VXLAN identifier, which offers 16 million logical networks and allows segments to cross Layer 3 boundaries. Because VXLAN is a MAC-in-UDP encapsulation, switches can hash on the UDP header and distribute packets efficiently across the links of a port channel.

VXLAN

 

VXLAN works much like a Layer 2 bridge (flood and learn): the VEM does all the heavy lifting, learning the source MAC addresses of the VMs and the VXLAN (VTEP) IP addresses of the hosts, and encapsulating traffic according to the port profile the VM belongs to. Broadcast, multicast, and unknown unicast traffic are sent as multicast, while known unicast traffic is encapsulated and sent directly to the destination host's VXLAN IP, i.e. the destination VEM. Enhanced VXLAN adds VXLAN MAC distribution and ARP termination, making it more optimal.
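
On the Nexus 1000v, a VXLAN segment is represented as a bridge domain. A hedged sketch, assuming multicast mode and made-up names, segment ID, and group address:

    feature segmentation
    !
    bridge-domain TENANT-RED
      segment id 5000            ! 24-bit VXLAN network identifier
      group 239.1.1.1            ! multicast group used for flood traffic
    !
    port-profile type vethernet VM-RED
      vmware port-group
      switchport mode access
      switchport access bridge-domain TENANT-RED
      no shutdown
      state enabled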

 

VXLAN Mode Packet Functions

How each packet type is handled in each VXLAN mode:

Broadcast / multicast: multicast encapsulation in VXLAN multicast mode; replication plus unicast encapsulation in Enhanced VXLAN (unicast mode), Enhanced VXLAN with MAC distribution, and Enhanced VXLAN with ARP termination.
Unknown unicast: multicast encapsulation in multicast mode; replication plus unicast encapsulation in Enhanced VXLAN; dropped when MAC distribution or ARP termination is enabled.
Known unicast: unicast encapsulation in all modes.
ARP: multicast encapsulation in multicast mode; replication plus unicast encapsulation in Enhanced VXLAN and with MAC distribution; answered locally by the VEM (ARP reply) with ARP termination.

 

vPath – Service Chaining

Intelligent Policy-based traffic steering through multiple network services.

vPath allows you to intelligently steer VM traffic through virtualized service nodes. It intercepts the initial traffic and redirects it to the service node. Once the service node performs its policy function, the result is cached, and the local virtual switch treats subsequent packets of the flow accordingly. vPath lets you tie services together so that VM traffic is pushed through each service as required. Previously, tying services together in a data center meant stitching VLANs together, which proved limiting in both design and scale.

vPath

 

vPath 3.0 has been submitted to the IETF for standardization. It allows service chaining with both vPath and non-vPath network services, enables vPath service chaining across multiple physical devices, and supports multiple hypervisors.

 

Licence Options 

Nexus 1000v Essential Edition (free): full Layer 2 feature set; security and QoS policies; VXLAN virtual overlays; vPath-enabled virtual services; full monitoring and management capabilities.

Nexus 1000v Advanced Edition ($695 per CPU, MSRP): all features of the Essential Edition, plus the VSG firewall, VXLAN Gateway, Cisco TrustSec SGA, and a platform for other Cisco data center extensions in the future.

 

Nexus 1000v features and benefits

Switching: L2 switching, 802.1Q tagging, VLANs, rate limiting (TX), VXLAN, IGMP snooping, QoS marking (CoS and DSCP), class-based WFQ.
Security: policy mobility, private VLANs with local PVLAN enforcement, access control lists, port security, Cisco TrustSec support, Dynamic ARP Inspection, IP Source Guard, DHCP snooping.
Network services: Virtual Services Datapath (vPath) support for traffic steering and fast-path offload, leveraged by the Virtual Security Gateway (VSG), vWAAS, and ASA 1000V.
Provisioning: port profiles, integration with vCenter, vCloud Director, SCVMM* and BMC CLM, optimized NIC teaming with virtual port channel – host mode.
Visibility: VM migration tracking, vCenter plugin, NetFlow v9 with NDE, CDP v2, VM-level interface statistics, vTracker, SPAN and ERSPAN (policy based).
Management: vCenter VM provisioning, vCenter plugin, Cisco LMS, DCNM, Cisco CLI, RADIUS, TACACS+, syslog, SNMP (v1, v2, v3), hitless upgrade, software installer.

 

Advantages and Disadvantages of the Nexus 1000V

Advantages:

1) The Essential Edition is free, and you can upgrade to the Advanced Edition when the need arises.

2) Easy and quick to deploy.

3) Offers many rich network features that are not available on other distributed software switches.

4) Hypervisor agnostic.

5) Hybrid cloud functionality.

Disadvantages:

1) VEM and VSM internal communication is very sensitive to latency; due to its chatty nature, it may not be a good fit for inter-DC deployments.

2) The VSM-to-VEM and VSM (active)-to-VSM (standby) heartbeat time of 6 seconds makes it sensitive to network failures and congestion.

3) The VEM's dependency on the VSM reduces resiliency.

4) The VSM is required for vSphere HA, FT, and vMotion to work.

 

 

 

 

 
