Cisco Switch Virtualization: Nexus 1000v

In today’s digital landscape, virtualization has become integral to modern data centers. With the increasing demand for agility, flexibility, and scalability, organizations are turning to virtual networking solutions to meet their evolving needs. One such solution is the Nexus 1000v, a virtual network switch offering comprehensive features and functionalities. In this blog post, we will delve into the world of the Nexus 1000v, exploring its key features, benefits, and use cases.

The Nexus 1000v is a distributed virtual switch that operates at the hypervisor level, providing advanced networking capabilities for virtual machines (VMs). It is designed to integrate seamlessly with VMware vSphere, offering enhanced network visibility, control, and security.

 

  • Virtual routing and forwarding

Virtual routing and forwarding forms the basis of this stack. Network virtualization comes in two primary forms: 1) one-to-many and 2) many-to-one. The one-to-many method segments one physical network into multiple logical segments; conversely, the many-to-one method consolidates multiple physical devices into one logical entity. By definition they appear to be opposites, yet both fall under the same umbrella of network virtualization.

 

Before you proceed, you may find the following posts helpful:

  1. Container Based Virtualization
  2. Virtual Switch
  3. What is VXLAN
  4. Redundant Links
  5. WAN Virtualization
  6. What Is FabricPath

 

Cisco Switch Virtualization.

Key Nexus 1000v Discussion Points:


  • Introduction to the Nexus 1000v and what is involved.
  • Highlighting the details of Cisco switch virtualization and logical separation.
  • Technical details on the additional overhead introduced by virtualization.
  • Scenario: network virtualization.
  • A final note on software virtual switch designs.

 

  • A key point: Video on network virtualization

In this video, we address the basics of network virtualization: the underlay and overlay design. Essentially, an overlay places Layer 2 or Layer 3 over a Layer 3 core, and the Layer 3 core is known as the underlay.

This removes many drawbacks and scaling issues of traditional Layer 2 connectivity, which uses VLANs. The multi-tenant nature of overlays is designed to avoid these L2 challenges, allowing you to build networks at a much larger scale.

 

Technology Brief : VXLAN - Introducing Overlay Networking

 

Back to basics with network virtualization

Before we get stuck into Cisco virtualization, let us address some basics. For example, if multiple virtual endpoints share a physical network but belong to different customers, the communication between those virtual endpoints needs to be isolated. In other words, the network is a resource, too, and network virtualization is the technology that enables sharing of a common physical network infrastructure.

Virtualization uses software to simulate traditional hardware platforms and create virtual software-based systems. For example, virtualization allows specialists to construct a single virtual network or partition a physical network into multiple virtual networks.

 

Cisco Switch Virtualization

Logical segmentation: One-to-many

With one-to-many network virtualization in the Cisco switch virtualization design, a single physical network is logically segmented into multiple virtual networks. For example, each virtual network could correspond to a user group or a specific security function.

End-to-end path isolation requires the virtualization of networking devices and their interconnecting links. VLANs have been traditionally used, and hosts from one user group are mapped to a single VLAN. To extend the path across multiple switches at Layer 2, VLAN tagging (802.1Q) can carry VLAN information between switches. These VLAN trunks were created to transport multiple VLANs over a single Ethernet interface.

The diagram below displays two independent VLANs, VLAN 201 and VLAN 101. These VLANs can share one physical wire to provide L2 reachability between hosts connected to Switch B and Switch A via Switch C, yet VLANs 201 and 101 remain separate entities.

Nexus1000v: The operation

 

VLANs are sufficient for small Layer 2 segments. However, today's networks will likely have a mix of Layer 2 and Layer 3 routed networks. In this case, Layer 2 VLANs alone are insufficient because you must extend the isolation over a Layer 3 device. This can be achieved with Virtual Routing and Forwarding ( VRF ), the next step in Cisco switch virtualization. A virtual routing and forwarding instance logically carves a Layer 3 device into several isolated, independent L3 devices. VRF instances configured on the same device cannot communicate with each other directly.

The diagram below displays one physical Layer 3 router with three VRFs, VRF Yellow, VRF Red, and VRF Blue. There is complete separation between these virtual routing and forwarding instances; without explicit configuration, routes in one virtual routing and forwarding cannot be leaked to another virtual routing and forwarding instance.

Diagram: Virtual routing and forwarding.
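Conceptually, each VRF is nothing more than a separate routing table keyed by a VRF name. The short Python sketch below is an illustrative model only (the VRF names, prefixes, and interfaces are invented), showing why a lookup in one VRF can never return a route that lives in another:

```python
# Illustrative model of VRF isolation: one independent routing table per VRF.
# VRF names, prefixes, and interfaces are invented for the example.
from ipaddress import ip_address, ip_network

vrfs = {
    "RED":    {ip_network("10.1.0.0/16"): "Eth1/1"},
    "BLUE":   {ip_network("10.1.0.0/16"): "Eth1/2"},      # overlapping prefixes are fine across VRFs
    "YELLOW": {ip_network("192.168.0.0/24"): "Eth1/3"},
}

def lookup(vrf, destination):
    """Longest-prefix match performed only inside the given VRF's table."""
    table = vrfs[vrf]
    candidates = [prefix for prefix in table if ip_address(destination) in prefix]
    if not candidates:
        return None                        # no route: the packet is dropped, never leaked
    best = max(candidates, key=lambda prefix: prefix.prefixlen)
    return table[best]

print(lookup("RED", "10.1.2.3"))      # Eth1/1
print(lookup("YELLOW", "10.1.2.3"))   # None -- RED's route is invisible in VRF YELLOW
```

Route leaking between VRFs is then an explicit action, importing a prefix from one table into another, which mirrors the explicit configuration mentioned above.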

The virtualization of the interconnecting links depends on how the virtual routers are connected. If they are physically ( directly ) connected, you could use a technology known as VRF-lite to separate traffic and 802.1Q to label the data plane. This is known as hop-by-hop virtualization. However, it’s possible to run into scalability issues when the number of devices grows. This design is typically used when you connect virtual routing and forwarding back to back, i.e., no more than two devices.

When the virtual routers are connected over multiple hops through an IP cloud, you can use generic routing encapsulation ( GRE ) or Multiprotocol Label Switching ( MPLS ) virtual private networks.

GRE is probably the simpler of the Layer 3 methods, and it can work over any IP core. GRE can encapsulate the contents and transport them over a network with the network unaware of the packet contents. Instead, the core will see the GRE header, virtualizing the network path.

 

 

 

Cisco Switch Virtualization: The additional overhead

When designing Cisco switch virtualization, you need to consider the additional overhead. GRE adds 24 bytes of overhead ( a 20-byte outer IP header plus a 4-byte GRE header ), so the forwarding router may have to break a datagram into two fragments so that no packet exceeds the outgoing interface MTU. To avoid fragmentation, configure the MTU, MSS, and Path MTU parameters correctly on the outgoing and intermediate routers.
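The numbers work out as follows; a minimal sketch assuming a standard 1500-byte egress MTU, a plain GRE header with no optional key, sequence, or checksum fields, and an IPv4/TCP passenger packet:

```python
# Rough MTU/MSS arithmetic for a basic GRE tunnel (assumed 1500-byte egress MTU,
# no GRE key/sequence/checksum options).
EGRESS_MTU   = 1500   # physical interface MTU (assumption)
OUTER_IP_HDR = 20     # outer IPv4 header added by the tunnel
GRE_HDR      = 4      # base GRE header
INNER_IP_HDR = 20     # passenger IPv4 header
TCP_HDR      = 20     # TCP header without options

tunnel_mtu = EGRESS_MTU - OUTER_IP_HDR - GRE_HDR        # 1476: largest passenger packet
tcp_mss    = tunnel_mtu - INNER_IP_HDR - TCP_HDR        # 1436: a safe MSS clamp value

print(f"tunnel IP MTU       : {tunnel_mtu}")
print(f"TCP MSS clamp value : {tcp_mss}")
```

These are the familiar 1476/1436 values often quoted for plain GRE; if the tunnel carries extra headers, subtract those as well.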

GRE tunnels are typically static. You only need to configure the tunnel endpoints, and the tunnel will stay up as long as you have reachability to those endpoints. However, recent designs also allow dynamic GRE tunnel establishment.

MPLS/VPN, on the other hand, is a different beast. It requires signaling to distribute labels and build an end-to-end Label Switched Path ( LSP ). The label distribution can be done with BGP+label, LDP, or RSVP. Unlike GRE tunnels, MPLS VPNs do not have to manage multiple point-to-point tunnels to provide a full mesh of connectivity; the labels on the packets provide the connectivity and the traffic separation.

 

Cisco switch virtualization: Many-to-one

Many-to-one network consolidation refers to grouping two or more physical devices into one logical device. Examples of this Cisco switch virtualization technology include the Virtual Switching System ( VSS ), stackable switches, and Nexus vPC. Combining many physical devices into one logical entity allows STP to view the group as a single switch, so all ports can remain active; by default, STP would block the redundant path.

Software-Defined Networking takes this concept further; it abstracts the entire network into a single virtual switch. On traditional routers, the control and data planes sit on the same device, whereas with SDN they are decoupled. The control plane now runs on a policy-driven controller, and the data plane remains local on the OpenFlow-enabled switch.

 

Network Virtualization

Server and network virtualization presented the challenge of multiple VMs sharing a single network physical port, such as a network interface controller ( NIC ). The question then arises, how do I link multiple VMs to the same uplink? How do I provide path separation? Today’s networks need to virtualize the physical port and allow the configuration of policies per port.


 

NIC-per-VM design

One way to do this is a NIC-per-VM design, where each VM is assigned a single physical NIC and the NIC is not shared with any other VM. The hypervisor, aka the virtualization layer, is bypassed, and the VM accesses the I/O device directly. This is known as VMDirectPath. This direct path, or pass-through, can improve performance for hosts that utilize high-speed I/O devices, such as 10 Gigabit Ethernet. However, the loss of flexibility and of the ability to move VMs offsets the performance benefits.

 

Virtual-NIC-per-VM in Cisco UCS (Adapter FEX)

Another option is to create multiple logical NICs on the same physical NIC, such as Virtual-NIC-per-VM in Cisco UCS (Adapter FEX). These logical NICs are assigned directly to VMs, and traffic is marked with a vNIC-specific tag in hardware (VN-Tag). The VN-Tag tagging is implemented in the server NICs, so you can clone the physical NIC in the server into multiple virtual NICs. This technology provides faster switching and enables you to apply a rich set of management features to local and remote traffic.

 

Software Virtual Switch

The third option is to implement a software virtual switch in the hypervisor. For example, VMware introduced virtual switching for its vSphere ( ESXi ) hypervisor with the vSphere Distributed Switch ( VDS ). Initially, VMware introduced a local L2 software switch, which was soon superseded because it lacked a distributed architecture.

Data physically moves between the servers through the external network, but the control plane abstracts this movement so that it looks like one large distributed switch spanning multiple servers. This approach has a single management and configuration point, similar to stackable switches: one control plane with many physical data forwarding paths. The data does not move through a parent partition; instead, each VM connects logically to the network interface through its local vNICs.

 

Network virtualization and Nexus 1000v ( Nexus 1000 )

The VDS introduced by VMware lacked many advanced networking features, which led Cisco to introduce the Nexus 1000V software-based switch. The Nexus 1000v is a multi-cloud, multi-hypervisor, and multi-services distributed virtual switch. Its function is to enable communication between VMs.

Nexus1000v: Virtual Distributed Switch.

 

Nexus 1000 components: VEM and VSM

The Nexus 1000v has two essential components:

  1. The Virtual Supervisor Module ( VSM )
  2. The Virtual Ethernet Module ( VEM ).

Compared to a physical switch, you could view the VSM as the supervisor, setting up the control plane functions for the data plane to forward efficiently, and the VEM as the physical line cards that do all the forwarding of packets. The VEM is the software component that runs within the hypervisor kernel. It handles all VM traffic, including inter-VM frames and Ethernet traffic between a VM and external resources.

The VSM runs its own NX-OS code and handles the control and management planes, integrating with a cloud manager such as VMware vCenter. You can have two VSMs for redundancy. Both modules remain constantly synchronized with unicast VSM-to-VSM heartbeats to provide stateful failover in the event of an active VSM failure.

 

The two available communication options for VSM to VEM are:

  1. Layer 2 control mode: The VSM control interface shares the same VLAN with the VEM.
  2. Layer 3 control mode: The VEM and the VSM are in different IP subnets.

The VSM also uses heartbeat messages to detect loss of connectivity between the VSM and the VEM. However, the VEM does not depend on connectivity to the VSM to perform its data plane functions and will continue forwarding packets if VSM fails.

 

With Layer 3 control mode, the heartbeat messages are encapsulated in a GRE envelope.
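The headless behaviour described above can be sketched in a few lines of Python. This is a conceptual model only, and the six-second loss threshold is an assumption borrowed from the heartbeat figure quoted in the advantages and disadvantages table later in this post:

```python
# Conceptual model of VEM "headless" behaviour: the data plane keeps forwarding with
# the last programming pushed by the VSM even when heartbeats stop arriving.
# The 6-second loss threshold is an assumption taken from the heartbeat figure below.
import time

HEARTBEAT_TIMEOUT = 6.0      # seconds without a heartbeat before the VSM is declared lost

class Vem:
    def __init__(self):
        self.port_profiles = {}              # last configuration pushed by the VSM
        self.last_heartbeat = time.time()

    def on_heartbeat(self, profiles):
        """Control-plane updates only arrive while the VSM is reachable."""
        self.last_heartbeat = time.time()
        self.port_profiles = profiles

    def vsm_reachable(self):
        return time.time() - self.last_heartbeat < HEARTBEAT_TIMEOUT

    def forward(self, frame, port):
        # Forwarding consults only cached state, never the VSM per packet, so a lost
        # VSM freezes configuration changes rather than stopping traffic.
        policy = self.port_profiles.get(port, "default")
        return f"forwarded {frame} on {port} using profile {policy}"
```

The point of the sketch is that forwarding uses only cached state, so losing the VSM stops configuration changes, not traffic.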

 

Nexus 1000 and VSM best practices

  • Layer 3 control mode is recommended for new installations.
  • Use MAC pinning instead of LACP.
  • Packet, Control, and Management in the same VLAN.
  • Do not use VLAN 1 for Control and Packet.
  • Use 2 x VSM for redundancy. 

The maximum latency between the VSM and VEM is ten milliseconds. Therefore, if you have a high-quality DCI link, a VSM can be placed outside the data center and still control the VEM.

 

Nexus 1000v InterCloud – Cisco switch virtualization

A vital element of the Nexus 1000 is its use case for hybrid cloud deployments and its ability to place workloads in private and public environments via a single pane of glass. In addition, the Nexus 1000v interCloud addresses the main challenges with hybrid cloud deployments, such as security concerns and control/visibility challenges within the public cloud.

The Nexus 1000 interCloud works with Cisco Prime Service Controller to create a secure L2 extension between the private data center and the public cloud.

This L2 extension is based on the Datagram Transport Layer Security ( DTLS ) protocol and allows you to securely transfer VMs and network services over a public IP backbone. DTLS is derived from the TLS/SSL protocol and provides communications privacy for datagram protocols, so all data in motion is cryptographically isolated and encrypted.

Nexus 1000 and Hybrid Cloud.

 

Nexus 1000v Hybrid Cloud Components 

  • Cisco Prime Network Service Controller for InterCloud: a VM that provides a single pane of glass to manage all InterCloud functions.
  • InterCloud VSM: manages port profiles for VMs in the InterCloud infrastructure.
  • InterCloud Extender: provides secure connectivity to the InterCloud Switch in the provider cloud; installed in the private data center.
  • InterCloud Switch: a virtual machine in the provider data center with secure connectivity to the InterCloud Extender in the enterprise cloud and to the virtual machines in the provider cloud.
  • Cloud Virtual Machines: VMs in the public cloud running workloads.

 

Prerequisites

  • Port 80 (HTTP): access from the PNSC for AWS calls and communication with InterCloud VMs in the provider cloud.
  • Port 443 (HTTPS): access from the PNSC for AWS calls and communication with InterCloud VMs in the provider cloud.
  • Port 22 (SSH): from the PNSC to InterCloud VMs in the provider cloud.
  • UDP 6644: DTLS data tunnel.
  • TCP 6644: DTLS control tunnel.

 

VXLAN – Virtual Extensible LAN

The requirement for applications on demand has led to an increased number of required VLANs for cloud providers. The standard 12-bit identifier, which provided 4000 VLANs, proved to be a limiting factor in multi-tier, multi-tenant environments, and engineers started to run out of isolation options.

VXLAN introduces a 24-bit identifier, offering 16 million logical networks, and allows segments to cross Layer 3 boundaries. Because the MAC-in-UDP encapsulation exposes UDP headers, switches can hash on them and distribute packets efficiently across a port channel.

VXLAN operations

 

VXLAN works like a Layer 2 bridge ( flood and learn ); the VEM does all the heavy lifting, learning each VM source MAC and host VXLAN IP and encapsulating the traffic according to the port profile to which the VM belongs. Broadcast, multicast, and unknown unicast traffic are sent as multicast.

Unicast traffic, on the other hand, is encapsulated and shipped directly to the destination host VXLAN IP, i.e., the destination VEM. Enhanced VXLAN adds VXLAN MAC distribution and ARP termination, making forwarding more optimal.

 

  • A key point: Video on VXLAN and Dynamic MAC learning

The following video discusses dynamic MAC learning. Initially, Ethernet ran over a thick coax cable: a single cable was used to connect all workstations. It was later replaced by twisted-pair cabling, unshielded (UTP) and shielded (STP), through the 1990s and 2000s. On Ethernet networks, each host has a unique MAC address for identification.

 

Technology Brief : VXLAN - Dynamic MAC Learning

 

 

VXLAN Mode Packet Functions

  • Broadcast/Multicast: multicast encapsulation in multicast mode; replication plus unicast encapsulation in enhanced VXLAN (unicast mode), MAC distribution, and ARP termination modes.
  • Unknown Unicast: multicast encapsulation in multicast mode; replication plus unicast encapsulation in enhanced VXLAN; dropped in MAC distribution and ARP termination modes.
  • Known Unicast: unicast encapsulation in all modes.
  • ARP: multicast encapsulation in multicast mode; replication plus unicast encapsulation in enhanced VXLAN and MAC distribution modes; answered locally by the VEM (ARP reply) with ARP termination.

 

vPath – Service chaining

Intelligent Policy-based traffic steering through multiple network services.

vPath allows you to intelligently steer VM traffic to virtualized service devices. It intercepts and redirects the initial traffic to the service node. Once the service node performs its policy function, the result is cached, and the local virtual switch treats subsequent packets of that flow accordingly. In addition, it enables you to chain services together and push VM traffic through each service as required. Previously, if you wanted to tie services together in a data center, you needed to stitch VLANs together, which was limited by design and scale.

Nexus and service chaining.
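The fast-path idea behind vPath can be sketched as a simple flow cache. This is an illustrative model only; the flow key, policy, and verdicts below are invented for the example rather than taken from the actual vPath encapsulation:

```python
# Illustrative flow-cache steering, the core idea behind vPath fast-path off-load.
# The flow key, policy, and verdicts are invented for the sketch.
flow_cache = {}

def service_node_inspect(flow):
    """Stand-in for the virtual service node (e.g. a virtual firewall) policy decision."""
    _, _, _, dst_port = flow
    return "permit" if dst_port != 23 else "deny"     # e.g. block telnet in this toy policy

def handle_packet(src_ip, dst_ip, proto, dst_port):
    flow = (src_ip, dst_ip, proto, dst_port)
    if flow not in flow_cache:
        # First packet of the flow: steer it to the service node and cache the decision.
        flow_cache[flow] = service_node_inspect(flow)
    # Subsequent packets are handled locally by the virtual switch using the cached verdict.
    return "switch locally" if flow_cache[flow] == "permit" else "drop"

print(handle_packet("10.0.0.1", "10.0.0.2", "tcp", 443))  # inspected by the service node, then cached
print(handle_packet("10.0.0.1", "10.0.0.2", "tcp", 443))  # answered from the flow cache
```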

 

vPath 3.0 has been submitted to the IETF for standardization, allowing service chaining between vPath and non-vPath network services. It enables vPath service chaining across multiple physical devices and supports multiple hypervisors.

 

License Options 

Nexus 1000 Essential Edition (free):

  • Full Layer 2 feature set
  • Security and QoS policies
  • VXLAN virtual overlays
  • vPath-enabled virtual services
  • Full monitoring and management capabilities

Nexus 1000 Advanced Edition ($695 per CPU MSRP):

  • All features of the Essential Edition
  • VSG firewall
  • VXLAN gateway
  • TrustSec SGA
  • A platform for other Cisco DC extensions in the future

 

Nexus 1000 features and benefits

  • Switching: L2 switching, 802.1Q tagging, VLANs, rate limiting (TX), VXLAN, IGMP snooping, QoS marking (CoS and DSCP), class-based WFQ.
  • Security: policy mobility, private VLANs with local PVLAN enforcement, access control lists, port security, Cisco TrustSec support, Dynamic ARP Inspection, IP Source Guard, DHCP snooping.
  • Network services: Virtual Services Datapath (vPath) support for traffic steering and fast-path off-load, leveraged by the Virtual Security Gateway (VSG), vWAAS, and ASA1000V.
  • Provisioning: port profiles, integration with vCenter, vCD, SCVMM*, and BMC CLM, and optimized NIC teaming with virtual port channel host mode.
  • Visibility: VM migration tracking, vCenter plugin, NetFlow v9 with NDE, CDP v2, VM-level interface statistics, vTracker, and policy-based SPAN and ERSPAN.
  • Management: vCenter VM provisioning, vCenter plugin, Cisco LMS, DCNM, Cisco CLI, RADIUS, TACACS+, syslog, SNMP (v1, v2, v3), hitless upgrade, and a software installer.

 

Advantages and disadvantages of the Nexus 1000

Advantages:

  • The Essential edition is free, and you can upgrade to the Advanced edition when needed.
  • Easy and quick to deploy.
  • Offers many rich network features unavailable on other distributed software switches.
  • Hypervisor agnostic.
  • Hybrid cloud functionality.

Disadvantages:

  • VEM-to-VSM internal communication is very sensitive to latency; due to its chatty nature, it may not suit inter-DC deployments.
  • The VSM-to-VEM and active-to-standby VSM heartbeat time of 6 seconds makes it sensitive to network failures and congestion.
  • VEM over-dependency on the VSM reduces resiliency.
  • The VSM is required for vSphere HA, FT, and vMotion to work.

 

Closing Points on Cisco Nexus 1000v

Key Features and Functionalities:

Virtual Ethernet Module (VEM):

The Nexus 1000v employs the concept of the Virtual Ethernet Module (VEM), which runs as a module inside the hypervisor. This allows for efficient and direct communication between VMs, bypassing the traditional reliance on the hypervisor networking stack.

Virtual Supervisor Module (VSM):

The Virtual Supervisor Module (VSM) serves as the control plane for the Nexus 1000v, providing centralized management and configuration. It enables network administrators to define policies, manage virtual ports, and monitor network traffic.

Policy-Based Virtual Network Management:

With the Nexus 1000v, administrators can define policies to manage virtual networks. These policies ensure consistent network configurations across multiple hosts, simplifying network management and reducing the risk of misconfigurations.

Advanced Security and Monitoring Capabilities:

The Nexus 1000v offers granular security controls, including access control lists (ACLs), port security, and dynamic host configuration protocol (DHCP) snooping. Additionally, it provides comprehensive visibility into network traffic, enabling administrators to monitor and troubleshoot network issues effectively.

Benefits of the Nexus 1000v:

Enhanced Network Performance:

By offloading network processing to the VEM, the Nexus 1000v minimizes the impact on the hypervisor, resulting in improved network performance and reduced latency.

Increased Scalability:

The distributed architecture of the Nexus 1000v allows for seamless scalability, ensuring that organizations can meet the growing demands of their virtualized environments.

Simplified Network Management:

With its policy-based approach, the Nexus 1000v simplifies network management tasks, enabling administrators to provision and manage virtual networks more efficiently.

Use Cases:

Data Centers:

The Nexus 1000v is particularly beneficial in data center environments where virtualization is prevalent. It provides a robust and scalable networking solution, ensuring optimal performance and security for virtualized workloads.

Cloud Service Providers:

Cloud service providers can leverage the Nexus 1000v to enhance their network virtualization capabilities, offering customers more flexibility and control over their virtual networks.

The Nexus 1000v is a powerful virtual network switch that provides advanced networking capabilities for virtualized environments. Its rich set of features, policy-based management approach, and seamless integration with VMware vSphere allows organizations to achieve enhanced network performance, scalability, and management efficiency. As virtualization continues to shape the future of data centers, the Nexus 1000v remains a valuable tool for optimizing virtual network infrastructures.

 

What is VXLAN

In the rapidly evolving networking world, virtualization has become critical for businesses seeking to optimize their IT infrastructure. One key technology that has emerged is VXLAN (Virtual Extensible LAN), which enables the creation of virtual networks independent of physical network infrastructure. In this blog post, we will delve into the concept of VXLAN, its benefits, and its role in network virtualization.

VXLAN is an encapsulation protocol designed to extend Layer 2 (Ethernet) networks over Layer 3 (IP) networks. It provides a scalable and flexible solution for creating virtualized networks, enabling seamless communication between virtual machines (VMs) and physical servers across different data centers or geographic regions.

VXLAN is a technology that creates virtual networks within an existing physical network: a Layer 2 overlay network runs on top of the existing Layer 3 network. VXLAN utilizes UDP as the transport protocol, providing an efficient and flexible way to create a virtual network.


Highlights: What is VXLAN

Segmentation: Security and policy control

VXLAN provides several advantages over traditional Layer 2 network technologies. It enables the creation of enormous virtual networks with thousands of endpoints, allowing multi-tenant segmentation for security and policy enforcement. It also takes advantage of existing Layer 3 routing protocols, allowing for efficient routing between virtual networks, and it is hardware agnostic, meaning it can be used with any hardware.

VXLAN offerings

VXLAN has been widely adopted and is now used in many large enterprise networks for virtualization and cloud computing. It provides:

  • A secure and efficient way to create virtual networks.
  • Multi-tenant segmentation.
  • Efficient routing between virtual networks.
  • Hardware-agnostic operation.

With its widespread adoption, VXLAN has become an essential technology for network virtualization.

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. Data Center Topologies
  2. Segment Routing
  3. What is OpenFlow
  4. Overlay Virtual Networks
  5. Layer 3 Data Center


Key What is VXLAN Discussion Points:


  • Introduction to VXLAN and what is involved.

  • Highlighting the details of VXLAN vs VLAN.

  • Technical details on the VXLAN Spanning Tree.

  • Scenario: Why introduce VXLAN? VXLAN benefits.

  • A final note on the VXLAN enhancements.

Back to Basics: The Need For VXLAN

Traditional layer two networks have issues because of the following reasons:

  • Spanning tree: restricts links.
  • Limited number of VLANs: restricts scalability.
  • Large MAC address tables: restrict scalability and mobility.

Spanning-tree avoids loops by blocking redundant links. By blocking connections, we create a loop-free topology and pay for links we can’t use. Although we could switch to a layer three network, some technologies require layer two networking.

VLAN IDs are 12 bits long, so we can create 4094 VLANs (0 and 4095 are reserved). Data centers can struggle with only 4094 available VLANs. Suppose a service provider has 500 customers: with 4094 available VLANs, each customer can have only eight.

The Role of Server Virtualization

Server virtualization has exponentially increased the number of MAC addresses our switches must learn. Before server virtualization, there was only one MAC address per switch port. With server virtualization, we can run many virtual machines (VMs) or containers on a single physical server. Virtual NICs and virtual MAC addresses are assigned to each virtual machine, so one switch port must learn many MAC addresses.

There could be 24 or 48 physical servers connected to a Top of Rack (ToR) switch in a data center, and many racks in the room, so each switch must store the MAC addresses of all VMs that communicate. Networks with server virtualization therefore require much larger MAC address tables, as the quick calculation below shows.
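A back-of-the-envelope sizing, where the server, VM, and rack counts are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope MAC table growth caused by server virtualization.
# All input figures are assumptions for illustration only.
servers_per_tor = 48     # physical servers per Top of Rack switch
vms_per_server  = 20     # VMs (each with at least one vNIC/MAC) per server
racks           = 40     # racks in the data center

macs_per_rack = servers_per_tor * vms_per_server     # 960 MACs learned behind one ToR
total_macs    = macs_per_rack * racks                # 38,400 MACs a ToR may have to hold

print(f"MACs behind one ToR   : {macs_per_rack}")
print(f"MACs across the fabric: {total_macs}")
```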

Lab Guide: VXLAN

In the following lab, I created a Layer 2 overlay with VXLAN over a Layer 3 core. A bridge domain VNI of 6001 must match on both sides of the overlay tunnel. What is a VNI? The VLAN ID field in an Ethernet frame has only 12 bits, so VLANs cannot meet the isolation requirements of data center networks; the VNI was introduced specifically to solve this problem.

Note: The VNI

A VNI is a user identifier similar to a VLAN ID. A VNI identifies a tenant. VMs with different VNIs cannot communicate at Layer 2. During VXLAN packet encapsulation, a 24-bit VNI is added to a VXLAN packet, enabling VXLAN to isolate many tenants.

You will notice in the screenshot below that I can ping from desktop 0 to desktop 1 even though the IP addresses are not in the routing table of the core devices, simulating a Layer 2 overlay. Consider VXLAN to be the overlay and the routed Layer 3 core to be the underlay.

Diagram: VXLAN Overlay

In the following screenshot, notice that the VNI has been changed. The VNI needs to be changed in two places in the configuration, as illustrated below. Once changed, the peers go down; however, the NVE interface remains up, and the VXLAN Layer 2 overlay is no longer operational.

Diagram: Changing the VNI

How does VXLAN work?

VXLAN uses tunneling to encapsulate Layer 2 Ethernet frames within IP packets. Each VXLAN network is identified by a unique 24-bit segment ID, the VXLAN Network Identifier (VNI). The VTEP serving the source VM encapsulates the original Ethernet frame with a VXLAN header that includes the VNI. The encapsulated packet is then sent over the physical IP network to the destination VTEP, which decapsulates it to retrieve the original Ethernet frame.
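The 8-byte VXLAN header itself is simple enough to build by hand. The sketch below packs the flags byte and a 24-bit VNI with Python's struct module, borrowing VNI 6001 from the lab above; the outer Ethernet, IP, and UDP layers are omitted:

```python
# Build the 8-byte VXLAN header: flags byte (I-bit set), reserved fields, 24-bit VNI.
import struct

VXLAN_UDP_PORT = 4789    # IANA-assigned destination port (the lab above deliberately overrides it to 1024)

def vxlan_header(vni):
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                                   # 'I' flag: the VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(6001)
print(hdr.hex())   # 0800000000177100 -> VNI 0x001771 == 6001
print(len(hdr))    # 8 bytes of VXLAN header
```

Adding the outer Ethernet (14 bytes), IP (20 bytes), and UDP (8 bytes) headers to these 8 bytes gives the roughly 50 bytes of encapsulation overhead mentioned in the benefits table later in this post.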

Analysis:

Notice below that a ping is running from desktop 0 to desktop 1; the IP addresses assigned to these hosts are 10.0.0.1 and 10.0.0.2. First, notice that the ping succeeds, and when I do a packet capture on link Gi1 connected to Leaf A, we see the encapsulation of the ICMP echo request and reply.

Everything is encapsulated into UDP port 1024. In my configurations of Leaf A and Leaf B, I explicitly set the VXLAN port to 1024.

VXLAN unicast mode

Benefits of VXLAN:

– Scalability: VXLAN allows creating up to 16 million logical networks, providing the scalability required for large-scale virtualized environments.

– Network Segmentation: By leveraging VXLAN, organizations can segment their networks into virtual segments, enhancing security and isolating traffic between applications or user groups.

– Flexibility and Mobility: VXLAN enables the movement of VMs across physical servers and data centers without the need to reconfigure network settings. This flexibility is crucial for workload mobility in dynamic environments.

– Interoperability: VXLAN is an industry-standard protocol supported by various networking vendors, ensuring compatibility across different network devices and platforms.

VXLAN Benefits

  • Scalability
  • Network Segmentation
  • Flexibility and Mobility
  • Interoperability

VXLAN Use Cases

  • Data Center Interconnect (DCI)
  • Multi-Tenant Environments
  • Network Virtualization
  • Hybrid Cloud Connectivity

Use Cases for VXLAN:

– Data Center Interconnect (DCI): VXLAN allows organizations to interconnect multiple data centers, enabling seamless workload migration, disaster recovery, and workload balancing across different locations.

– Multi-Tenant Environments: VXLAN enables service providers to offer virtualized network services to multiple tenants in a secure and isolated manner. This is particularly useful in cloud computing environments.

– Network Virtualization: VXLAN plays a crucial role in network virtualization, allowing organizations to create virtual networks independent of the underlying physical infrastructure. This enables greater flexibility and agility in managing network resources.

Back to Basics: VXLAN and Network Virtualization

VXLAN is a form of network virtualization. Network virtualization cuts a single physical network into many virtual networks, often called network overlays. Virtualizing a resource allows it to be shared by multiple users. Virtualization provides the illusion that each user has his or her own resources; in the case of virtual networks, each user is under the illusion that there are no other users of the network. To preserve the illusion, virtual networks are separated from one another, and packets cannot leak from one virtual network to another.

Diagram: Network Virtualization. Source: Parallels

VXLAN Loop Detection and Prevention

So, before we dive into the benefits of VXLAN, let us address the basics of loop detection and prevention, a significant driver for using network overlays such as VXLAN. The challenge is that data frames can circulate indefinitely when loops occur, disrupting network stability and degrading performance.

In addition, loops introduce broadcast radiation, increasing CPU and network bandwidth utilization, which results in a degradation of user application access experience. Finally, in multi-site networks, a loop can span multiple data centers, causing disruptions that are difficult to pinpoint. A lot of this can be solved with overlay networking.

Video: Overlay Networking and VXLAN

In the following video, we will discuss the basics of overlay networking. Essentially, an overlay places Layer 2 or Layer 3 over a Layer 3 core; the Layer 3 core is known as the underlay. This removes many drawbacks and scaling issues of traditional Layer 2 connectivity, which uses VLANs.

The multi-tenant nature of overlays is designed to avoid these L2 challenges, allowing you to build networks at a much larger scale. We have Layer 2 and Layer 3 overlays. Layer 2 overlays emulate a Layer 2 network and map Layer 2 frames into an IP underlay.

If you are emulating a Layer 2 network, you must emulate the Layer 2 flooding behavior. This is the bread and butter of how Layer 2 networks work, and that doesn’t change just because you decide to create a Layer 2 overlay.

Technology Brief : VXLAN – Introducing Overlay Networking

VXLAN vs VLAN

However, first-generation Layer-2 Ethernet networks could not natively detect or mitigate looped topologies, while modern Layer-2 overlays implicitly build loop-free topologies. Therefore, overlays do not need loop detection and mitigation as long as no first-gen Layer-2 network is attached. Essentially, there is no need for a VXLAN spanning tree.

So, one of the differences between VXLAN and VLAN is that a VLAN has a 12-bit VID while VXLAN has a 24-bit network identifier, allowing you to create up to 16 million segments. VXLAN offers tremendous scale with stable, loop-free networking and is a foundation technology in Cisco ACI.
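The difference in scale is plain arithmetic, spelled out below:

```python
# VLAN vs VXLAN identifier space, spelled out.
usable_vlans = 2**12 - 2      # 4094 (VLAN 0 and 4095 are reserved)
vxlan_vnis   = 2**24          # 16,777,216 possible VNIs

print(usable_vlans)           # 4094
print(vxlan_vnis)             # 16777216
```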

Diagram: Loop prevention. Source: Cisco

VXLAN and Data Center Interconnect

VXLAN has revolutionized data center interconnect by providing a scalable, flexible, and efficient solution for extending Layer 2 networks. Its ability to enable network segmentation, multi-tenancy support, and seamless mobility makes it a valuable technology for modern businesses.

However, careful planning, consideration of network infrastructure, and security measures are essential for successful implementation. By harnessing the power of VXLAN, organizations can achieve a more agile, scalable, and interconnected data center environment.

Considerations for Implementing VXLAN:

1. Underlying Network Infrastructure: Before implementing VXLAN, it is essential to assess the underlying network infrastructure. Network devices must support VXLAN encapsulation and decapsulation and have sufficient bandwidth to handle the increased traffic.

2. Network Overhead: While VXLAN provides numerous benefits, it does introduce additional network overhead due to encapsulation and decapsulation processes. It is crucial to consider the impact on network performance and plan accordingly.

3. Security: As VXLAN extends Layer 2 networks over Layer 3 infrastructure, it is essential to implement appropriate security measures. This includes encrypting VXLAN traffic, deploying access control policies, and monitoring network traffic for anomalies.

VXLAN vs VLAN: The VXLAN Benefits Drive Adoption

Introduced by Cisco and VMware and now heavily used in open networking, VXLAN stands for Virtual eXtensible Local Area Network and is perhaps the most popular overlay technology for IP-based SDN data centers. It is also used extensively in Cisco ACI networks.

VXLAN was explicitly designed for Layer 2 over Layer 3 tunneling; its early competitors, NVGRE and STT, are fading away, and VXLAN is becoming the industry standard. VXLAN brings many advantages, especially in loop prevention, as there is no need for a VXLAN spanning tree.

VXLAN Benefits: Scale and loop-free networks.

Today, with overlays such as VXLAN, the dependency on loop prevention protocols is almost eliminated. However, even though virtualized overlay networks such as VXLAN are loop-free, a failsafe loop detection and mitigation method is still desirable because loops can be introduced by topologies connected to the overlay network.

Loop prevention traditionally started with Spanning Tree Protocols (STP) to counteract the loop problem in first-generation Layer 2 Ethernet networks. Over time, other approaches evolved by moving networks from “looped topologies” to “loop-free topologies.”

While LAG and MLAG were used, other approaches for building loop-free topologies arose using ECMP at the MAC or IP layers. For example, FabricPath or TRILL is a MAC layer ECMP approach that emerged in the last decade. More recently, network virtualization overlays that build loop-free topologies on top of IP layer ECMP became state-of-the-art.

What is VXLAN and the components involved?

VXLAN vs VLAN: Why Introduce VXLAN?

  1. STP issues and scalability constraints: STP is undesirable on a large scale and lacks a proper load-balancing mechanism. A solution was needed to leverage the ECMP capabilities of an IP network while offering extended VLANs across an IP core, i.e., virtual segments across the network core. There is no VXLAN spanning tree.
  2. Multi-tenancy: Layer 2 networks are capped at 4000 VLANs, restricting multi-tenancy design—a big difference in the VXLAN vs VLAN debates.
  3. ToR table scalability: Every ToR switch may need to support several virtual servers, and each virtual server requires several NICs and MAC addresses. This pushes the limits on the table sizes for the ToR switch. In addition, after the ToR tables become full, Layer 2 traffic will be treated as unknown unicast traffic, which will be flooded across the network, causing instability to a previously stable core.
Diagram: STP Blocking. Source: Cisco Press free chapter.

VXLAN use cases

  • Use case 1: Multi-tenant IaaS clouds that need a large number of segments.
  • Use case 2: Linking virtual to physical servers via a software or hardware VXLAN-to-VLAN gateway.
  • Use case 3: HA clusters across failure domains/availability zones.
  • Use case 4: Fabrics with equidistant endpoints, over which VXLAN works well.
  • Use case 5: VXLAN-encapsulated VLAN traffic across availability zones, which must be rate-limited to prevent broadcast storm propagation across multiple availability zones.

What is VXLAN? The operations

When discussing VXLAN vs VLAN: VXLAN employs a MAC-over-IP/UDP overlay scheme and extends the traditional boundary of roughly 4000 VLANs. The 12-bit VLAN identifier in traditional VLANs capped scalability within the SDN data center and proved cumbersome if you wanted a VLAN-per-application-segment model. VXLAN extends the 12-bit identifier to 24 bits and allows for 16 million logical segments, with each segment potentially offering another 4,000 VLANs.

While tunneling does provide Layer 2 adjacency between these logical endpoints, with the ability to move VMs across boundaries, the main driver for its introduction was to overcome the challenge of having only 4000 VLANs.

Typically, an application stack has multiple segments; between each segment you will have firewalling and load-balancing services, and each segment requires a different VLAN. The Layer 2 VLAN segment also transfers non-routable heartbeats or state information that cannot cross an L3 boundary. If you are a cloud provider, you will soon reach the 4000-VLAN limit.

Multiple segments are required per application stack.

The control plane

The control plane is very similar to that of a classic flood-and-learn Layer 2 switched network. If a switch receives a packet destined for an unknown address, it forwards the packet to an IP address that floods the packet to all the other switches.

This IP address is, in turn, mapped to a multicast group across the network. VXLAN does not explicitly have a control plane of its own and requires IP multicast running in the core for traffic forwarding and host discovery.
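A hedged sketch of what flood-and-learn looks like from a single VTEP's point of view; the MAC addresses, VTEP IP, and multicast group below are made up for the example:

```python
# Flood-and-learn from a single VTEP's point of view: remember which remote VTEP
# each inner source MAC arrived from. Addresses and the group are invented.
MCAST_GROUP = "239.0.0.10"    # per-segment flood group
mac_to_vtep = {}              # learned MAC-to-remote-VTEP mappings

def receive_encapsulated(inner_src_mac, outer_src_vtep):
    """Data-plane learning: the outer source IP tells us where the MAC lives."""
    mac_to_vtep[inner_src_mac] = outer_src_vtep

def send(inner_dst_mac):
    """Known unicast goes straight to the owning VTEP; unknowns are flooded."""
    vtep = mac_to_vtep.get(inner_dst_mac)
    return f"unicast to {vtep}" if vtep else f"flood to {MCAST_GROUP}"

print(send("00:50:56:aa:bb:cc"))                        # flood to 239.0.0.10
receive_encapsulated("00:50:56:aa:bb:cc", "10.10.10.2") # the reply teaches us the mapping
print(send("00:50:56:aa:bb:cc"))                        # unicast to 10.10.10.2
```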

Video: VXLAN operations

VXLAN is all about discovering the destination VTEP; the big decision is how you discover the destination VTEP IP address. The destination VTEP IP address needs to be mapped to the destination end host MAC address, and the mechanism used to do this affects the scalability and functionality of the VXLAN domain. We need some control plane elements.

The control plane element of VXLAN can be deployed as a flood and learn mechanism, which is not an absolute control plane, or you can have an actual control plane (that does not flood and learn) or even use an orchestration tool for VTEP to IP mapping. Many vendors implement this differently.

Technology Brief : VXLAN – VXLAN Operations

Best practices for enabling IP Multicast in the core

  1. Use Bidirectional PIM or PIM Sparse Mode.
  2. Deploy redundant Rendezvous Points (RPs).
  3. Use shared trees to reduce the amount of IP multicast state.
  4. Always check the IP multicast table sizes on core and ToR switches.
  5. A single IP multicast address for multiple VXLAN segments is acceptable.

The requirement for IP multicast in the core made VXLAN undesirable from an operation point of view. For example, creating the tunnel endpoints is simple, but introducing a protocol like IP multicast to a core just for the tunnel control plane was considered undesirable. As a result, some of the more recent versions of VXLAN support IP unicast.

VXLAN uses a MAC over IP/UDP solution to eliminate the need for a spanning tree. There is no VXLAN spanning tree. This enables the core to be IP and not run a spanning tree. Many people ask why VXLAN uses UDP. The reason is that the UDP port numbers cause VXLAN to inherit Layer 3 ECMP features. The entropy that enables load balancing across multiple paths is embedded into the UDP source port of the overlay header.
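The entropy mechanism can be sketched in a few lines: hash the inner flow's 5-tuple and fold the result into the ephemeral source-port range, so that different flows land on different core paths. The hash function and port range here are assumptions for illustration, not a mandated algorithm:

```python
# Sketch: derive the outer UDP source port from the inner flow's 5-tuple so the
# IP core gets per-flow entropy for ECMP hashing. Hash and port range are assumptions.
import zlib

def outer_udp_source_port(src_ip, dst_ip, proto, sport, dport):
    flow = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return 49152 + (zlib.crc32(flow) % 16384)      # fold the hash into 49152-65535

print(outer_udp_source_port("10.0.0.1", "10.0.0.2", "tcp", 33000, 443))
print(outer_udp_source_port("10.0.0.1", "10.0.0.2", "tcp", 33001, 443))  # different flow, likely a different path
```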

Lab Guide: Multicast VXLAN

In this lab guide, we are going to look at VXLAN multicast mode. Multicast mode requires both unicast and multicast connectivity between sites. Similar to the previous lab, this configuration uses OSPF to provide unicast connectivity, and we now add bidirectional Protocol Independent Multicast (PIM) to provide multicast connectivity.

This does not mean you can skip a multicast-enabled core; you still need multicast enabled on the core.

So we are not tunneling multicast over an IPv4 core without having multicast enabled on that core. I have multicast on all Layer 3 interfaces, and the mroute table is populated on all Layer 3 routers. The command show ip mroute confirms that the multicast traffic is being tunneled, and show nve vni shows multicast group 239.0.0.10 with a state of UP.

Diagram: Multicast VXLAN

VXLAN benefits and stability

The underlying network and its control plane impact the stability of VXLAN and the applications running within it. For example, if the underlying IP network cannot converge quickly enough, VXLAN packets may be dropped, and an application cache timeout may be triggered.

The rate of change in the underlying network has a significant impact on the stability of the tunnels, yet the rate of change of the tunnels does not affect the underlying control plane. This is similar to how the strength of an MPLS / VPN overlay is affected by the core’s IGP.

VXLAN benefits and drawbacks

  • Runs over IP transport; however, it has no control plane of its own.
  • Offers a large number of logical endpoints; however, it needs IP multicast***.
  • Reduced flooding scope; however, there is no IGMP snooping ( yet ).
  • Eliminates STP; however, there is no PVLAN support.
  • Easily integrated over an existing core; however, it requires jumbo frames in the core ( roughly 50 bytes of encapsulation overhead ).
  • Minimal host-to-network integration; however, there are no built-in security features**.
  • Not a DCI solution ( no ARP reduction, no first-hop gateway localization, no inbound traffic steering, i.e., LISP ).

** VXLAN has no built-in security features. Anyone who gains access to the core network can insert traffic into segments. The VXLAN transport network must be secure, as no existing Firewall or Intrusion Prevention System (IPS) equipment has visibility into the VXLAN traffic.

*** Recent versions have Unicast VXLAN. Nexus 1000V release 4.2(1)SV2(2.1)

Updated: VXLAN enhancements

MAC distribution mode is an enhancement to VXLAN that prevents unknown unicast flooding. It eliminates the process of data plane MAC address learning. Traditionally, this was done by flooding to locate an unknown end host, but it has now been replaced with a control plane solution.

During VM startup, the VSM ( control plane ) collects the list of MAC addresses and distributes the MAC-to-VTEP mappings to all VEMs participating in a VXLAN segment. This technique makes VXLAN more optimal by unicasting more intelligently, similar to Nicira and VMware NVP.

ARP termination works by giving the VSM controller all the ARP and MAC information, which it distributes to the VEMs so they can proxy and respond locally to ARP requests without sending a broadcast. Because roughly 90% of broadcast traffic consists of ARP requests ( ARP replies are unicast ), this significantly reduces broadcast traffic on the network.
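A minimal sketch of the ARP-termination idea, assuming the local module already holds an IP-to-MAC table pushed down by the controller; the entries are invented for the example:

```python
# ARP termination sketch: the local module answers ARP requests from a table
# distributed by the controller instead of flooding them. Entries are invented.
arp_table = {
    "10.0.0.2": "00:50:56:aa:bb:02",
    "10.0.0.3": "00:50:56:aa:bb:03",
}

def handle_arp_request(target_ip):
    mac = arp_table.get(target_ip)
    if mac:
        return f"proxy ARP reply: {target_ip} is-at {mac}"    # no broadcast leaves the host
    return "fall back to flooding (or drop, depending on the VXLAN mode)"

print(handle_arp_request("10.0.0.2"))
print(handle_arp_request("10.0.0.9"))
```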

Video: The VXLAN Phases

In the following video, we will discuss the VXLAN phases. VXLAN went through several steps to get the remote VTEP IP information. It started with a flood-and-learn process and finally used a proper control plane – EVPN.

Technology Brief : VXLAN – VXLAN Phases

Final Notes: VXLAN

In recent years, the rapid growth of cloud computing and the increasing demand for scalable and flexible networks have led to the development of various technologies to address these needs. One such technology is VXLAN (Virtual Extensible LAN), an overlay network protocol that has gained significant popularity in networking. In this blog post, we will delve into the intricacies of VXLAN, exploring its key features, benefits, and use cases.

What is VXLAN?

VXLAN is a network overlay technology that enables the creation of virtualized Layer 2 networks over existing Layer 3 infrastructure. It was developed to address the limitations of traditional VLANs, which could not scale beyond a few thousand networks due to the limited number of VLAN IDs available. VXLAN solves this problem using a 24-bit VXLAN Network Identifier (VNI), allowing for an impressive 16 million unique network segments.

Key Features of VXLAN:

1. Scalability: As mentioned earlier, VXLAN’s use of a 24-bit VNI allows for a significantly larger number of network segments than traditional VLANs. This scalability makes VXLAN an ideal solution for large-scale virtualized environments.

2. Network Segmentation: VXLAN enables the creation of logical network segments, allowing for network isolation and improved security. By encapsulating Layer 2 Ethernet frames within Layer 3 UDP packets, VXLAN provides a flexible and scalable approach to network segmentation.

3. Multicast Support: VXLAN leverages IP multicast to efficiently distribute broadcast, unknown unicast, and multicast (BUM) traffic across the network. This feature reduces network congestion and improves overall performance.

4. Mobility: VXLAN supports seamless movement of virtual machines (VMs) across physical hosts and data centers. By decoupling the VMs from the underlying physical network, VXLAN enables mobility without requiring any changes to the network infrastructure.

Benefits of VXLAN:

1. Enhanced Network Flexibility: VXLAN enables the creation of virtualized networks decoupled from the underlying physical infrastructure. This flexibility allows for easier network provisioning, scaling, and reconfiguration, making it an ideal choice for cloud environments.

2. Improved Scalability: With its larger network segment capacity, VXLAN offers improved scalability compared to traditional VLANs. This scalability is crucial in modern data centers and cloud environments where virtual machines and network segments are continuously growing.

3. Simplified Network Management: VXLAN simplifies network management tasks by abstracting the network infrastructure. Network administrators can define and manage virtual networks independently of the underlying physical infrastructure, streamlining network operations and reducing complexity.

Use Cases for VXLAN:

1. Data Center Interconnect: VXLAN is widely used for interconnecting geographically dispersed data centers. By extending Layer 2 network connectivity over Layer 3 infrastructure, VXLAN facilitates seamless VM mobility, disaster recovery, and workload balancing across data centers.

2. Multi-tenancy in Cloud Environments: VXLAN allows cloud service providers to create isolated network segments for different tenants, enhancing security and providing dedicated network resources. This feature is vital in multi-tenant cloud environments where data privacy and network isolation are critical.

3. Network Virtualization: VXLAN plays a crucial role in network virtualization, enabling the creation of virtual networks that are independent of the underlying physical infrastructure. This virtualization simplifies network management, enhances flexibility, and enables efficient resource utilization.

Conclusion: VXLAN has emerged as a powerful network virtualization technology with many use cases. VXLAN provides the flexibility, scalability, and efficiency required in modern networking environments, from data center virtualization to multi-tenancy, hybrid cloud connectivity, and disaster recovery. As organizations continue to embrace cloud computing and virtualization, VXLAN will undoubtedly play a pivotal role in shaping the future of networking.