Cisco Switch Virtualization: Nexus 1000v

In today’s digital landscape, virtualization has become integral to modern data centers. With the increasing demand for agility, flexibility, and scalability, organizations are turning to virtual networking solutions to meet their evolving needs. One such solution is the Nexus 1000v, a virtual network switch offering comprehensive features and functionalities. In this blog post, we will delve into the world of the Nexus 1000v, exploring its key features, benefits, and use cases.

The Nexus 1000v is a distributed virtual switch that operates at the hypervisor level, providing advanced networking capabilities for virtual machines (VMs). It is designed to integrate seamlessly with VMware vSphere, offering enhanced network visibility, control, and security.

 

  • Virtual routing and forwarding

Virtual routing and forwarding forms the basis of this stack. Network virtualization comes in two primary flavors: 1) one to many and 2) many to one. The "one-to-many" method segments one physical network into multiple logical segments. Conversely, the "many-to-one" method consolidates multiple physical devices into one logical entity. By definition, they seem to be opposites, but both fall under the same umbrella of network virtualization.

 

Before you proceed, you may find the following posts helpful:

  1. Container Based Virtualization
  2. Virtual Switch
  3. What is VXLAN
  4. Redundant Links
  5. WAN Virtualization
  6. What Is FabricPath

 

Cisco Switch Virtualization.

Key Nexus 1000v Discussion Points:


  • Introduction to Nexus1000v and what is involved.

  • Highlighting the details of Cisco switch virtualization and logical separation.

  • Technical details on the additional overhead from virtualization.

  • Scenario: Network virtualization.

  • A final note on software virtual switch designs.

 

  • A key point: Network virtualization overlay and underlay

Let us address the basics of network virtualization. We have an underlay and an overlay design. Essentially, an overlay places Layer 2 or Layer 3 connectivity over a Layer 3 core; the Layer 3 core is known as the underlay.

This removes many of the drawbacks and scaling issues of traditional Layer 2 connectivity, which relies on VLANs. The multi-tenant nature of overlays is designed to get away from these L2 challenges, allowing you to build networks at a much larger scale.

 


 

Back to basics with network virtualization

Before we get stuck in Cisco virtualization, let us address some basics. For example, if you’re going to have multiple virtual endpoints share a physical network, but different virtual endpoints belong to different customers, the communication between these virtual endpoints also needs to be isolated. In other words, the network is a resource, too, and network virtualization is the technology that enables sharing of a standard physical network infrastructure.

Virtualization uses software to simulate traditional hardware platforms and create virtual software-based systems. For example, virtualization allows specialists to construct a single virtual network or partition a physical network into multiple virtual networks.

 

Cisco Switch Virtualization

Logical segmentation: One to many

With one-to-many network virtualization in the Cisco switch virtualization design, a single physical network is logically segmented into multiple virtual networks. For example, each virtual network could correspond to a user group or a specific security function.

End-to-end path isolation requires the virtualization of networking devices and their interconnecting links. VLANs have been traditionally used, and hosts from one user group are mapped to a single VLAN. To extend the path across multiple switches at Layer 2, VLAN tagging (802.1Q) can carry VLAN information between switches. These VLAN trunks were created to transport multiple VLANs over a single Ethernet interface.

The diagram below displays two independent VLANs, VLAN 201 and VLAN 101. These VLANs can share one physical wire to provide L2 reachability between hosts connected to Switch B and Switch A via Switch C, yet VLANs 201 and 101 remain separate entities.

Diagram: Nexus 1000v operation.
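
To make this concrete, here is a minimal IOS-style sketch of the trunk configuration on Switch C. The interface names are hypothetical, and some details (such as whether the trunk encapsulation command is required) vary by platform and software version.

  ! VLANs 101 and 201 share one physical wire but remain isolated
  vlan 101
  vlan 201
  !
  interface GigabitEthernet0/1
   description Trunk toward Switch A
   switchport trunk encapsulation dot1q
   switchport mode trunk
   switchport trunk allowed vlan 101,201
  !
  interface GigabitEthernet0/2
   description Trunk toward Switch B
   switchport trunk encapsulation dot1q
   switchport mode trunk
   switchport trunk allowed vlan 101,201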

 

VLANs are sufficient for small Layer 2 segments. However, today's networks will likely have a mix of Layer 2 and Layer 3 routed networks. In this case, Layer 2 VLANs alone are insufficient because you must extend the Layer 2 isolation over a Layer 3 device. This can be achieved with Virtual Routing and Forwarding ( VRF ), the next step in Cisco switch virtualization. A virtual routing and forwarding instance logically carves a Layer 3 device into several isolated, independent L3 devices. VRFs configured locally on the same device cannot communicate with each other directly.

The diagram below displays one physical Layer 3 router with three VRFs, VRF Yellow, VRF Red, and VRF Blue. There is complete separation between these virtual routing and forwarding instances; without explicit configuration, routes in one virtual routing and forwarding cannot be leaked to another virtual routing and forwarding instance.

Diagram: Virtual routing and forwarding instances.

The virtualization of the interconnecting links depends on how the virtual routers are connected. If they are physically ( directly ) connected, you could use a technology known as VRF-lite to separate traffic and 802.1Q to label the data plane. This is known as hop-by-hop virtualization. However, it’s possible to run into scalability issues when the number of devices grows. This design is typically used when you connect virtual routing and forwarding back to back, i.e., no more than two devices.
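
As a rough sketch of this hop-by-hop approach, the following hypothetical IOS-style configuration carves one physical interface into two 802.1Q sub-interfaces, each mapped to its own VRF. VRF names, VLAN tags, and addresses are illustrative only.

  ip vrf RED
  ip vrf BLUE
  !
  interface GigabitEthernet0/0.201
   encapsulation dot1Q 201
   ip vrf forwarding RED
   ip address 10.1.1.1 255.255.255.252
  !
  interface GigabitEthernet0/0.101
   encapsulation dot1Q 101
   ip vrf forwarding BLUE
   ip address 10.1.2.1 255.255.255.252

The same VRFs and tags must be repeated on every device along the path, which is why this design struggles to scale beyond a couple of hops.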

When the virtual routers are connected over multiple hops through an IP cloud, you can use generic routing encapsulation ( GRE ) or Multiprotocol Label Switching ( MPLS ) virtual private networks.

GRE is probably the simpler of the Layer 3 methods, and it can work over any IP core. GRE can encapsulate the contents and transport them over a network with the network unaware of the packet contents. Instead, the core will see the GRE header, virtualizing the network path.

 

 

 

Cisco Switch Virtualization: The additional overhead

When designing Cisco switch virtualization, you need to consider the additional overhead. The GRE header adds 24 bytes of overhead, so a packet close to the outgoing interface MTU may need to be fragmented by the forwarding router. To avoid fragmentation issues, correctly configure the MTU, MSS, and Path MTU Discovery parameters on the outgoing and intermediate routers.
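
A minimal, hypothetical IOS-style tunnel configuration illustrates the point; the conservative MTU and MSS values simply leave room for the 24 bytes of GRE overhead, and the addresses and interface names are placeholders.

  interface Tunnel0
   ip address 172.16.0.1 255.255.255.252
   ip mtu 1400
   ip tcp adjust-mss 1360
   tunnel source GigabitEthernet0/0
   tunnel destination 192.0.2.2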

GRE tunnels are typically static: you only need to configure the tunnel endpoints, and the tunnel will stay up as long as you have reachability to those endpoints. However, recent designs also allow dynamic GRE tunnel establishment.

MPLS/VPN, on the other hand, is a different beast. It requires signaling to distribute labels and build an end-to-end Label Switched Path ( LSP ). The label distribution can be done with BGP+label, LDP, and RSVP. Unlike GRE tunnels, MPLS VPNs do not have to manage multiple point-to-point tunnels to provide a full mesh of connectivity. Instead, they are used for connectivity, and packets’ labels provide traffic separation.

 

Cisco switch virtualization: Many to one

Many-to-one network consolidation refers to grouping two or more physical devices into one. Examples of this Cisco switch virtualization technology include the Virtual Switching System ( VSS ), stackable switches, and Nexus vPC. Combining many physical devices into one logical entity allows STP to view the logical group as a single switch, allowing all ports to be active; by default, STP would block the redundant path.

Software-Defined Networking takes this concept further; it completely abstracts the entire network into a single virtual switch. On traditional routers, the control and data planes sit on the same device, whereas with SDN the control and data planes are decoupled. The control plane now lives on a policy-driven controller, and the data plane remains local on the OpenFlow-enabled switch.

 

Network Virtualization

Server and network virtualization presented the challenge of multiple VMs sharing a single network physical port, such as a network interface controller ( NIC ). The question then arises, how do I link multiple VMs to the same uplink? How do I provide path separation? Today’s networks need to virtualize the physical port and allow the configuration of policies per port.


 

NIC-per-VM design

One way to do this is a NIC-per-VM design, where each VM is assigned a single physical NIC and the NIC is not shared with any other VM. The hypervisor, aka the virtualization layer, is bypassed, and the VM accesses the I/O device directly. This is known as VMDirectPath. This direct path, or pass-through, can improve performance for hosts that utilize high-speed I/O devices, such as 10 Gigabit Ethernet. However, the lack of flexibility and the inability to move VMs offset the higher performance benefits.

 

Virtual-NIC-per-VM in Cisco UCS (Adapter FEX)

Another way is to create multiple logical NICs on the same physical NIC, such as the virtual-NIC-per-VM approach in Cisco UCS (Adapter FEX). These logical NICs are assigned directly to VMs, and traffic gets marked with a vNIC-specific tag in hardware (VN-Tag, the basis of IEEE 802.1BR). The VN-Tag tagging is implemented in the server NICs, so you can clone the physical NIC in the server into multiple virtual NICs. This technology provides faster switching and lets you apply a rich set of management features to local and remote traffic.

 

Software Virtual Switch

The third option is to implement a virtual software switch in the hypervisor. For example, VMware introduced virtual switching for its vSphere ( ESXi ) hypervisor called the vSphere Distributed Switch ( VDS ). Initially, it offered a local L2 software switch, which was soon superseded due to its lack of a distributed architecture.

Data physically moves between the servers through the external network, but the control plane abstracts this movement to look like one large distributed switch spanning multiple servers. This approach has a single management and configuration point, similar to stackable switches – one control plane with many physical data forwarding paths. The data does not move through a parent partition but logically connects directly to the network interface through local vNICs associated with each VM.

 

Network virtualization and Nexus 1000v ( Nexus 1000 )

The VDS introduced by VMware lacked many advanced networking features, which led Cisco to introduce the Nexus 1000V software-based switch. The Nexus 1000v is a multi-cloud, multi-hypervisor, and multi-services distributed virtual switch. Its function is to enable communication between VMs.

Diagram: Nexus 1000v as a virtual distributed switch.

 

Nexus 1000 components: VEM and VSM

The Nexus 1000v has two essential components:

  1. The Virtual Supervisor Module ( VSM )
  2. The Virtual Ethernet Module ( VEM ).

Compared to a physical switch, you could view the VSM as the supervisor, setting up the control plane functions for the data plane to forward efficiently, and the VEM as the physical line cards that do all the forwarding of packets. The VEM is the software component that runs within the hypervisor kernel. It handles all VM traffic, including inter-VM frames and Ethernet traffic between a VM and external resources.

The VSM runs its own NX-OS code and handles the control and management planes, integrating with a cloud manager such as VMware vCenter. You can have two VSMs for redundancy. Both modules remain constantly synchronized with unicast VSM-to-VSM heartbeats to provide stateful failover in the event of an active VSM failure.

 

The two available communication options for VSM to VEM are:

  1. Layer 2 control mode: The VSM control interface shares the same VLAN with the VEM.
  2. Layer 3 control mode: The VEM and the VSM are in different IP subnets.

The VSM also uses heartbeat messages to detect loss of connectivity between the VSM and the VEM. However, the VEM does not depend on connectivity to the VSM to perform its data plane functions and will continue forwarding packets if VSM fails.

 

With Layer 3 control mode, the heartbeat messages are encapsulated in a GRE envelope.

 

Nexus 1000 and VSM best practices

  • L2 control is recommended for new installations.
  • Use MAC pinning instead of LACP.
  • Packet, Control, and Management in the same VLAN.
  • Do not use VLAN 1 for Control and Packet.
  • Use 2 x VSM for redundancy. 

The maximum latency between the VSM and VEM is ten milliseconds. Therefore, if you have a high-quality DCI link, a VSM can be placed outside the data center and still control the VEM.

 

Nexus 1000v InterCloud – Cisco switch virtualization

A vital element of the Nexus 1000 is its use case for hybrid cloud deployments and its ability to place workloads in private and public environments via a single pane of glass. In addition, the Nexus 1000v interCloud addresses the main challenges with hybrid cloud deployments, such as security concerns and control/visibility challenges within the public cloud.

The Nexus 1000 interCloud works with Cisco Prime Service Controller to create a secure L2 extension between the private data center and the public cloud.

This L2 extension is based on the Datagram Transport Layer Security ( DTLS ) protocol and allows you to securely transfer VMs and network services over a public IP backbone. DTLS is derived from the TLS/SSL protocol and provides communications privacy for datagram protocols, so all data in motion is cryptographically isolated and encrypted.

Diagram: Nexus 1000 and hybrid cloud.

 

Nexus 1000v Hybrid Cloud Components 

  • Cisco Prime Network Services Controller (PNSC) for InterCloud: a VM that provides a single pane of glass to manage all InterCloud functions.
  • InterCloud VSM: manages port profiles for VMs in the InterCloud infrastructure.
  • InterCloud Extender: provides secure connectivity to the InterCloud Switch in the provider cloud; installed in the private data center.
  • InterCloud Switch: a virtual machine in the provider data center with secure connectivity to the InterCloud Extender in the enterprise cloud and to the virtual machines in the provider cloud.
  • Cloud Virtual Machines: VMs in the public cloud running workloads.

 

Prerequisites

  • Port 80 (HTTP): access from PNSC for AWS calls and communication with InterCloud VMs in the provider cloud.
  • Port 443 (HTTPS): access from PNSC for AWS calls and communication with InterCloud VMs in the provider cloud.
  • Port 22 (SSH): from PNSC to InterCloud VMs in the provider cloud.
  • UDP 6644: DTLS data tunnel.
  • TCP 6644: DTLS control tunnel.

 

VXLAN – Virtual Extensible LAN

The requirement for applications on demand has led to an increased number of required VLANs for cloud providers. The standard 12-bit VLAN identifier, which provides roughly 4000 usable VLANs, proved to be a limiting factor in multi-tier, multi-tenant environments, and engineers started to run out of isolation options.

VXLAN introduces a 24-bit identifier, offering 16 million logical networks, and allows segments to cross Layer 3 boundaries. The MAC-in-UDP encapsulation lets switches hash on the UDP header and distribute packets efficiently across a port channel.

Diagram: VXLAN operations.

 

VXLAN works like a Layer 2 bridge ( flood and learn ); the VEM does all the heavy lifting, learning the VM source MACs and host VXLAN (VTEP) IPs and encapsulating traffic according to the port profile to which the VM belongs. Broadcast, multicast, and unknown unicast traffic are sent as multicast.

At the same time, unicast traffic is encapsulated and shipped directly to the destination host's VXLAN IP, aka the destination VEM. Enhanced VXLAN adds VXLAN MAC distribution and ARP termination, making it more optimal.

 

  • A key point: Dynamic MAC learning

A quick note on dynamic MAC learning: Ethernet initially used thick coax cable – a single cable connected all workstations. It was later replaced by twisted pair cabling – unshielded (UTP) and shielded (STP) – through the late 1990s and 2000s. On Ethernet networks, each host has a unique MAC address for identification, and switches learn which port each MAC address sits behind.

 


 

 

VXLAN Mode Packet Functions

  • Broadcast/Multicast – VXLAN (multicast mode): multicast encapsulation; Enhanced VXLAN (unicast mode): replication plus unicast encap; Enhanced VXLAN MAC distribution: replication plus unicast encap; Enhanced VXLAN ARP termination: replication plus unicast encap.
  • Unknown unicast – VXLAN (multicast mode): multicast encapsulation; Enhanced VXLAN (unicast mode): replication plus unicast encap; Enhanced VXLAN MAC distribution: drop; Enhanced VXLAN ARP termination: drop.
  • Known unicast – VXLAN (multicast mode): unicast encapsulation; Enhanced VXLAN (unicast mode): unicast encap; Enhanced VXLAN MAC distribution: unicast encap; Enhanced VXLAN ARP termination: unicast encap.
  • ARP – VXLAN (multicast mode): multicast encapsulation; Enhanced VXLAN (unicast mode): replication plus unicast encap; Enhanced VXLAN MAC distribution: replication plus unicast encap; Enhanced VXLAN ARP termination: VEM ARP reply.

 

vPath – Service chaining

Intelligent Policy-based traffic steering through multiple network services.

vPath allows you to intelligently traffic steer VM traffic to virtualized devices. It intercepts and redirects the initial traffic to the service node. Once the service node performs its policy function, the result is cached, and the local virtual switch treats the subsequent packets accordingly. In addition, it enables you to tie services together to push the VM through each service as required. Previously, if you wanted to tie services together in a data center, you needed to stitch the VLANs together, which was limited by design and scale.

Diagram: Nexus and service chaining.

 

vPath 3.0 has been submitted to the IETF for standardization, allowing service chaining with both vPath and non-vPath network services. It enables vPath service chaining across multiple physical devices and supports multiple hypervisors.

 

License Options 

Nexus 1000 Essential Edition (free):

  • Full Layer 2 feature set
  • Security and QoS policies
  • VXLAN virtual overlays
  • vPath-enabled virtual services
  • Full monitoring and management capabilities

Nexus 1000 Advanced Edition ($695 per CPU MSRP):

  • All features of the Essential Edition
  • VSG firewall
  • VXLAN Gateway
  • TrustSec SGA
  • A platform for other Cisco DC extensions in the future

 

Nexus 1000 features and benefits

  • Switching: L2 switching, 802.1Q tagging, VLANs, rate limiting (TX), VXLAN, IGMP snooping, QoS marking (CoS and DSCP), class-based WFQ.
  • Security: policy mobility, private VLANs with local PVLAN enforcement, access control lists, port security, Cisco TrustSec support, dynamic ARP inspection, IP Source Guard, DHCP snooping.
  • Network services: Virtual Services Datapath (vPath) support for traffic steering and fast-path offload, leveraged by the Virtual Security Gateway (VSG), vWAAS, and ASA1000V.
  • Provisioning: port profiles, integration with vCenter (vC), vCloud Director (vCD), SCVMM*, and BMC CLM, optimized NIC teaming with virtual port channel – host mode.
  • Visibility: VM migration tracking, vCenter plugin, NetFlow v9 with NDE, CDP v2, VM-level interface statistics, vTracker, SPAN and ERSPAN (policy-based).
  • Management: Virtual Center VM provisioning, vCenter plugin, Cisco LMS, DCNM, Cisco CLI, RADIUS, TACACS+, syslog, SNMP (v1, 2, 3), hitless upgrade, software installer.

 

Advantages and disadvantages of the Nexus 1000

Advantages:

  • The standard (Essential) edition is free, and you can upgrade to the enhanced (Advanced) edition when needed.
  • Easy and quick to deploy.
  • Offers many rich network features unavailable on other distributed software switches.
  • Hypervisor agnostic.
  • Hybrid cloud functionality.

Disadvantages:

  • VEM-VSM internal communication is very sensitive to latency; due to its chatty nature, it may not be a good fit for inter-DC deployments.
  • The VSM (active) – VSM (standby) heartbeat time of 6 seconds makes it sensitive to network failures and congestion.
  • VEM over-dependency on the VSM reduces resiliency.
  • The VSM is required for vSphere HA, FT, and vMotion to work.

 

Closing Points on Cisco Nexus 1000v

Key Features and Functionalities:

Virtual Ethernet Module (VEM):

The Nexus 1000v employs the concept of the Virtual Ethernet Module (VEM), which runs as a module inside the hypervisor. This allows for efficient and direct communication between VMs, bypassing the traditional reliance on the hypervisor networking stack.

Virtual Supervisor Module (VSM):

The Virtual Supervisor Module (VSM) serves as the control plane for the Nexus 1000v, providing centralized management and configuration. It enables network administrators to define policies, manage virtual ports, and monitor network traffic.

Policy-Based Virtual Network Management:

With the Nexus 1000v, administrators can define policies to manage virtual networks. These policies ensure consistent network configurations across multiple hosts, simplifying network management and reducing the risk of misconfigurations.

Advanced Security and Monitoring Capabilities:

The Nexus 1000v offers granular security controls, including access control lists (ACLs), port security, and dynamic host configuration protocol (DHCP) snooping. Additionally, it provides comprehensive visibility into network traffic, enabling administrators to monitor and troubleshoot network issues effectively.

Benefits of the Nexus 1000v:

Enhanced Network Performance:

By offloading network processing to the VEM, the Nexus 1000v minimizes the impact on the hypervisor, resulting in improved network performance and reduced latency.

Increased Scalability:

The distributed architecture of the Nexus 1000v allows for seamless scalability, ensuring that organizations can meet the growing demands of their virtualized environments.

Simplified Network Management:

With its policy-based approach, the Nexus 1000v simplifies network management tasks, enabling administrators to provision and manage virtual networks more efficiently.

Use Cases:

Data Centers:

The Nexus 1000v is particularly beneficial in data center environments where virtualization is prevalent. It provides a robust and scalable networking solution, ensuring optimal performance and security for virtualized workloads.

Cloud Service Providers:

Cloud service providers can leverage the Nexus 1000v to enhance their network virtualization capabilities, offering customers more flexibility and control over their virtual networks.

The Nexus 1000v is a powerful virtual network switch that provides advanced networking capabilities for virtualized environments. Its rich set of features, policy-based management approach, and seamless integration with VMware vSphere allows organizations to achieve enhanced network performance, scalability, and management efficiency. As virtualization continues to shape the future of data centers, the Nexus 1000v remains a valuable tool for optimizing virtual network infrastructures.

 

Redundant Links with Virtual PortChannels

In the world of networking, efficiency and reliability are paramount. As data centers expand and organizations strive for seamless connectivity, Virtual PortChannels (vPCs) have emerged as a powerful solution. This blog post aims to demystify vPCs, providing a comprehensive understanding of their benefits, functionality, and implementation considerations.

Virtual PortChannels, also known as vPCs, are a technology designed to enhance network scalability and resiliency. By combining multiple physical links into a single logical interface, vPCs allow for increased bandwidth and redundancy, ensuring uninterrupted connectivity and load balancing across network switches.


Highlights: Virtual Port Channels

Example: Cisco ACI

These technologies are extensively used in Cisco ACI. A virtual port channel (vPC) allows links physically connected to two different ACI leaf nodes to appear as a single port channel to a third device (i.e., a network switch, server, or any other networking device that supports link aggregation technology). Firstly, let us start with the basics.

Spanning Tree Challenges

Traditional spanning tree challenges network designers because it blocks redundant links. The drawbacks of STP ( Spanning Tree Protocol ) prove extremely expensive in data centers where you have multiple redundant links for mission-critical applications, essentially wasting 50% of your capacity.

You can use a port channel to scale bandwidth, as the bundled links appear as one to higher-level protocols, resulting in all ports forwarding or blocking for a particular VLAN. Aim to design all links in a data center as EtherChannels, as this will optimize your bandwidth and reliability.

EtherChannel Technology

Network administrators connect multiple physical Ethernet links between devices to achieve more bandwidth and redundancy. The Spanning Tree Protocol blocks these links, so we need EtherChannel Technology. EtherChannel technology combines several physical links between switches into one logical connection to provide high-speed links and redundancy without being blocked by the Spanning Tree Protocol.

Related: For pre-information, you may find the following posts helpful:

  1. Data Center Fabric
  2. Optimal Layer 3 Forwarding
  3. Data Center Failure
  4. Active Active Data Center Design
  5. Network Overlays
  6. Dead Peer Detection

Virtual Portchannels

Key Redundant Links Discussion Points:


  • Introduction to redundant links and what is involved.

  • Highlighting the details of vPC vs. port channel.

  • Technical details on Link aggregation.

  • Scenario: LACP negotiation.

  • Details on load balancing functions and virtual portchannels.

  • Final note on fabric extenders.

Back to Basics: Redundant links 

STP operates on one blunt principle: break loops in the network. The issue with this approach is that only one active path is allowed from one device to another, regardless of how many connections exist in the network. In addition, no efficient dynamic mechanism exists for using all the available bandwidth with STP loop management.

Port Channel Technology

So, to overcome these challenges, enhancements to Layer 2 Ethernet networks were made in the shape of port channel and virtual port channel (vPC) technologies. Port Channel technology permits multiple links between two participating devices to forward traffic using a load-balancing algorithm while managing the loop problem by bundling the links as one logical link.

vPC Technology

Then we have the vPC technology. The vPC technology permits multiple devices to create a port channel. In vPC, a pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel–attached devices; the two devices that serve as the logical port-channel endpoint are still two different and separate devices.

High Availability: Link and Device.

You need to identify the level of high availability you want to achieve in enterprise branch offices. Then, you can meet your high availability requirements with the appropriate device level and link redundancy.

Link-level redundancy requires two links running as active/active or active/backup so that traffic forwarding can recover if one link fails. Therefore, any failure on an access link should not result in a loss of connectivity. To qualify, a branch office must have at least two upstream links, either to a private network or the Internet.

Device-level redundancy is another level of high availability, ensuring that a backup device can take over if a device fails. Device and link redundancy are typically coupled: when a device fails, its attached links fail with it. As a result of this strategy, a single device failure should not cause a loss of connectivity between branch offices and data centers.

High Availability and Designs

High-availability designs combine link and device redundancy between branches and data centers to ensure business-critical connectivity. Each data center is dual-homed, so if one fails, traffic can be redirected to the backup data center in the event of a complete failure.

Rerouting traffic within 30 seconds should be possible whenever a failure (link, device, or data center) occurs. Packets can be lost during this period, but as long as user applications can withstand these failover times, sessions are maintained. Established sessions should not be dropped in a branch office with redundant devices if the failed device was forwarding traffic.

Diagram: Redundant links with EtherChannel. Source: jmcritobal.

vPC vs Port Channel

Servers can be attached to the access switches with port channels, the uplinks from the access layer can be link-aggregated, and the core links can also be bundled. Most switches can support 8 ports in a bundle, and Nexus platforms can support up to 16-32 ports.

It is good practice to create each port channel with ports from different line cards in each redundant switch. This prevents the failure of a single line card from affecting the entire channel, providing redundancy at both the logical and physical layers.

Link Aggregation and Port Channels

Link aggregation ( EtherChannel and IEEE 802.3ad ) was developed to address that limitation when two switches are connected through multiple uplinks. However, it did not address the challenges of deploying link aggregation in data center environments with triangular topologies, or where you want the links to terminate on different switches.

Traditional LAG ( link aggregation ) has limitations because its standard only allows aggregated links to terminate on a single switch. Technologies such as vPC Virtual Port Channel and Virtual Switching System (VSS) have been implemented to overcome this limitation.

Key Points: Port Channels

In summary, a port channel aggregates multiple physical interfaces into one logical interface. On some platforms, you can bundle up to 32 individual redundant links. The port channel load balances traffic across the member links and remains operational as long as at least one physical interface within it is up. Finally, before we move on to vPC vs. port channel, note that you can create either Layer 2 or Layer 3 port channels; however, as you might expect, you cannot combine Layer 2 and Layer 3 interfaces in the same port channel.
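
The following NX-OS-style sketch shows one Layer 2 and one Layer 3 port channel. Interface numbers and addressing are hypothetical, and LACP must be enabled for the active mode used here.

  feature lacp
  !
  ! Layer 2 port channel - trunked toward the access layer
  interface port-channel10
   switchport
   switchport mode trunk
  interface Ethernet1/1-2
   channel-group 10 mode active
  !
  ! Layer 3 (routed) port channel - members must also be routed ports
  interface port-channel20
   no switchport
   ip address 10.10.10.1/30
  interface Ethernet1/3-4
   no switchport
   channel-group 20 mode active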

  • Port-channel load balancing

Using a hashing function, frames are distributed between the physical interfaces that make up the port channel. Depending on the method used for load balancing, this hash will differ. Based on the hash result, the physical port to be used for transmission is determined.

A hashing operation can be performed on MAC or IP addresses, based on the source address, destination address, or both (some methods also use the port number). Depending on the switch model and software version, the default load-balancing method can be Layer 2, 3, or 4 and applies globally to all port channels. Here are a few methods for balancing EtherChannels; a brief configuration sketch follows the list:

  • src-ip : Source IP address
  • dst-ip : Destination IP address
  • src-dst-ip : Source and destination IP address
  • src-mac : Source MAC address
  • dst-mac : Destination MAC address
  • src-dst-mac : Source and destination MAC address
  • src-port : Source port number
  • dst-port : Destination port number
  • src-dst-port : Source and destination port number
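
The method is set globally (or per module on some Nexus platforms). A hypothetical sketch follows; the exact keywords differ between Catalyst IOS and NX-OS and between software versions, so verify with the relevant show command.

  ! Catalyst IOS - verify with "show etherchannel load-balance"
  port-channel load-balance src-dst-ip

  ! Nexus NX-OS - verify with "show port-channel load-balance"
  port-channel load-balance src-dst ip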

Starting the Debate: vPC vs Port Channel

vPC (Virtual Port Channel), or Multi-chassis EtherChannel (MEC), is a feature on Cisco Nexus switches that lets you configure a port channel across two redundant switches. The virtual port channel (vPC) is configured using interfaces from both switches, giving redundancy at both the link and the switch layer and forming a triangular design. With a standard port channel, we must terminate all the links on the same switch; in that case, there is no channel spanning two redundant switches.

Virtual PortChannels (vPCs), links between two Cisco switches, appear to a third downstream device as coming from one device and as part of a single PortChannel. A third device can be a switch, a server, or other networking devices that support IEEE 802.3ad PortChannels. Both standard port channels and Virtual PortChannels (vPC) can use the link Aggregation Control Protocol ( LACP ).

LACP negotiation and redundant switches

As part of the IEEE 802.3ad standard, the Link Aggregation Control Protocol ( LACP ) was created to negotiate the channel, and it is recommended to use this feature when building a bundle. LACP modes can be either active or passive. Active mode means the switch actively negotiates the channel, whereas passive means the port does not initiate an LACP negotiation.

You can form channels between active and passive ports or two active ports, but not between two passive ports. If the correct modes are not configured on each side of the channel, the port channel will not negotiate and will remain down.
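
A quick hypothetical IOS-style illustration of the mode rules described above; at least one side must actively initiate LACP for the bundle to form.

  ! Side A - initiates LACP negotiation
  interface range GigabitEthernet0/1 - 2
   channel-group 1 mode active
  !
  ! Side B - responds to LACP but never initiates
  ! (active/passive and active/active work; passive/passive does not)
  interface range GigabitEthernet0/1 - 2
   channel-group 1 mode passive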

The following diagram depicts the logical and physical views of a vPC virtual port channel. This is not specific to Cisco vPC but applies to all aggregation technologies. We have several physical redundant links; in our case, four physical links appear as one logical link.

Diagram: vPC virtual port channels.

In either case, LACP can be used as the control plane to negotiate the channel. You may ask: is LACP mandatory for vPC? No. We can use mode "on" and bring up the port channel without any negotiation or checks, effectively turning off LACP or any other control protocol. However, is LACP recommended for vPC? As with a normal port channel, it is always advised to use a control protocol for the vPC/port channel, as LACP adds a lot of intelligence in the background. The main difference between vPC and a standard port channel is that a vPC can terminate on two separate switches, creating a triangular design.

Building triangles for better redundancy

The quandary of the inability to build triangles with link aggregation can be mitigated by deploying either the Nexus technology known as virtual Port Channels (vPCs) or the Catalyst technology known as the Virtual Switching System ( VSS ). VSS and vPC allow the termination of a LAG on two separate switches, resulting in a triangular design. In addition, they enable the grouping of two physical redundant switches into a single logical switch from the perspective of any downstream device ( switch or server ).

Load Balancing Functions

A hash function is performed when a Layer 2 frame is forwarded to a PortChannel to determine which physical link the frame is sent over. The load-balancing methods available on Nexus switches are granular and include the following:

Nexus switch load-balancing methods:

  • Destination IP address
  • Destination MAC address
  • Destination TCP and UDP port number
  • Source and destination IP address
  • Source and destination MAC address
  • Source and destination TCP and UDP port numbers
  • Source IP address
  • Source MAC address
  • Source TCP and UDP port number

Redundant Links: Detect polarized links

Monitoring the traffic distribution over each physical link is essential to detect polarized links. The polarization effect occurs when some links attract more traffic than others, resulting in heavy utilization of some redundant links and low utilization of others. Therefore, before choosing the load-balancing method, analyze the traffic flows from source to destination and determine whether the flows are few and large or many and evenly spread. For example, I would not use the source IP address load-balancing method for traffic from a firewall performing Network Address Translation ( NAT ) to a single device.

Routing Protocols

Keep in mind, for routing convergence, that routing protocols see the channel as one link. So if you have eight 10GE ports in one bundle with an OSPF cost of 10 and a failure takes out a member of that channel, OSPF will still mark that link with the same metric. Routing protocols do not dynamically change metrics due to a member link failure.
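
A small NX-OS-style sketch of the point above (the process ID, area, addressing, and cost are illustrative): the OSPF cost is attached to the port-channel interface, not to the number of live members.

  feature ospf
  router ospf 1
  !
  interface port-channel10
   no switchport
   ip address 10.0.0.1/31
   ip router ospf 1 area 0.0.0.0
   ! the cost stays at 10 even if individual member links fail
   ip ospf cost 10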

A note on routing and routing convergence

Before routing comes into play, we have Layer 2 switches that create the Ethernet segment. All endpoints physically connect to a Layer 2 switch, and if you are on a single LAN with one large VLAN, you are ready to go: switches work out of the box, making decisions based on Layer 2 MAC addresses. These MAC addresses are already assigned to the NIC cards on your hosts, so you don't need to do anything.


Virtual PortChannels Benefits

vPC and VSS offer the following benefits:

  • Redundant links with Virtual PortChannels.
  • Improved convergence on a link or device failure.
  • Eliminates the need for STP to block redundant paths.
  • Independent control planes (not with VSS).
  • Increased bandwidth by combining all redundant links into one from the perspective of STP.

What is vPC?

MEC (Multi-Chassis EtherChannel) is a feature on Cisco Nexus switches that allows you to configure a port channel across multiple switches (i.e., the vPC peers). vPC is similar to the Virtual Switching System (VSS) on the Catalyst 6500. However, VSS merges the two switches into one logical switch, so a single control plane handles both management and configuration, whereas vPC allows each switch to be managed and configured independently. It is important to remember that the two vPC peers are managed independently, so you must create and permit your VLANs on both Nexus switches.

Comparing vPC and VSS

vPC and VSS are similar technologies, but the Nexus vPC feature keeps dual control planes. It offers In-Service Software Upgrade ( ISSU ), which allows upgrading one of the two switches without causing any service interruption. Because the control plane runs independently on each vPC peer, the failure of one peer does not affect the virtual switch.

With VSS, the active peer going down brings down the entire system because of the lack of dual control planes. It is worth noting that vPC falls back to STP, and the reliance on STP can only be entirely circumvented if you use Cisco FabricPath or TRILL. VSS is available on the Catalyst platforms, while vPC is solely a Nexus technology.

vPC Terminology:

  • vPC Peer – a vPC switch, one of a pair.
  • vPC member port – one of the ports that form a vPC.
  • vPC – the combined port channel between the vPC peers and the downstream device.
  • vPC peer-link – link used to synchronize state between vPC peer devices, must be 10GbE. The vPC-related control plane communications occur over this link, and any Ethernet frames transported receive special treatment to avoid loops in the vPC member ports.
  • vPC peer keepalive link – the keepalive link between the vPC peer devices. It is recommended to use the mgmt0 interface and the management VRF; if the mgmt interface is unavailable, a dedicated routed interface in its own VRF can be used instead.
  • vPC VLAN – one of the VLANs carried over the vPC peer-link and used to communicate via the vPC with a peer device.
  • non-vPC VLAN – an STP VLAN that is not carried over the peer link.
  • CFS – Cisco Fabric Service Protocol, used for state synchronization and configuration validation between vPC peer devices.

Within a vPC domain, each peer is assigned a primary or secondary role; by default, the switch with the lowest MAC address becomes the primary peer. The domain identifies the pair of redundant switches and generates a shared MAC address that can be used as the logical switch bridge ID in STP communication.
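
Below is a minimal, hypothetical NX-OS sketch of one vPC peer showing where these pieces fit; the domain ID, role priority, keepalive addresses, and interface numbers are illustrative, and the second peer would mirror this configuration.

  feature vpc
  feature lacp
  !
  vpc domain 10
   ! the lower role priority is preferred as the primary peer
   role priority 100
   peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf management
  !
  ! vPC peer-link
  interface port-channel1
   switchport mode trunk
   vpc peer-link
  interface Ethernet1/1-2
   channel-group 1 mode active
  !
  ! vPC toward a downstream switch or server
  interface port-channel20
   switchport mode trunk
   vpc 20
  interface Ethernet1/10
   channel-group 20 mode active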

Virtual PortChannels Best Practices

Below are the best practices to consider for implementation:

  1. Manually define which vPC switch is primary and secondary. The lower the role priority, the more preferred the switch, and the preferred switch acts as the primary.
  2. Form Layer 2 port channels using different 10GE modules on the Nexus switch for the vPC peer-link with ports in dedicated mode.
  3. Form Layer 2 port channels using different 10GE modules on the Nexus switch for the vPC peer keepalive link ( non-default VRF ).
  4. Enable Bridge Assurance ( BA ) on the vPC peer-link interface ( default ).
  5. Enable UDLD aggression on the vPC peer-link interface.
  6. On the primary vPC switch, configure the STP root bridge for a VLAN, the active HSRP router, and the PIM DR router. Likewise, on the secondary vPC switch, configure the secondary STP and the standby HSRP router. The Layer 2 and Layer 3 topologies should match.

Introducing Fabric Extenders

If you want to add even more redundancy with vPC, use it with a Fabric Extender. Fabric Extenders act as a remote line card to a parent switch and can be used with vPC in three forms. The first is known as host vPC and is a vPC southbound from the FEX to the server; the second is a vPC northbound from the FEX to the parent switch, sometimes called a Fabric vPC; and the third is both a southbound and northbound vPC from the FEX which is known as Enhanced vPC.

Diagram: Introducing Fabric Extenders.

Virtual PortChannels, the single connection

  • Datacenter interconnect

Because a vPC appears as a single connection from the perspective of higher-level protocols, e.g., STP or OSPF, it can be used as a Layer 2 extension for a DCI ( data center interconnect ) over short distances and dark fiber or protected DWDM only. vPC best practices still apply, and it is recommended that you use a different vPC domain at each site and carry Layer 3 communication between vPC peers over dedicated routed redundant links.

  • OTV or VPLS

If you connect more than two data centers with a full mesh topology, opting for Overlay Transport Virtualization (OTV) or VPLS ( Ethernet-based point-to-multipoint Layer 2 VPN ) as the DCI mechanism would be best. vPC can work with two or more data centers, but you must design the topology as a hub and spoke.

Any spoke-to-spoke communication must flow through the hub. Whether you connect two data centers back to back or two or more in a hub-and-spoke design, the Layer 2 boundary and STP isolation can be achieved with bridge protocol data unit ( BPDU ) filtering on the DCI links. BPDU filtering stops BPDUs from being transmitted on a link, essentially turning off STP on the DCI links. If you have Cisco ACI, you can extend further with the multi-pod or multi-site designs.

Diagram: vPC as a data center interconnect.
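
A minimal, hypothetical NX-OS-style example of containing STP at the DCI boundary follows; the interface is illustrative, and BPDU filtering should only be applied where the design itself guarantees a loop-free topology.

  interface port-channel30
   description DCI trunk to remote data center
   switchport mode trunk
   spanning-tree bpdufilter enable
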
  • A key point: Loop prevention

vPC has a built-in loop-prevention mechanism: never forward a frame received over the peer link out of a vPC member port. Under normal operation, a vPC peer switch should never learn MAC addresses over the peer link, which is used mainly for flooding, multicast, broadcast, and control-plane traffic.

This is because the LAG is terminated on two peer switches, and you don't want to send traffic received from a downstream device back down to that same device, resulting in a loop. However, this rule does not apply to:

  1. Non-vPC interfaces ( orphan ports ), and
  2. vPC member ports that are only active on the receiving peer.

Note: An orphan port is a port connecting a downstream device to only one vPC peer.

Redundant Switches: vPC peer link usage

As mentioned, the vPC peer link should not be used for end-host reachability under normal operations. However, if all member ports of a vPC fail on one peer, the peer link will forward frames to the remaining member ports of the vPC on the other peer. This explains why Cisco recommends using the same 10G links for the peer link.

The peer keepalive link is also mandatory and is used as a heartbeat mechanism, transporting UDP datagrams between the peers. This avoids a dual-active/split-brain scenario where both peers are active simultaneously. If no heartbeat is received after a configurable timeout, the secondary vPC peer assumes the primary role, and all of its member ports remain active.

There is, however, undesirable behavior if you have an orphan port connected to only one peer. For example, with a vPC peer link failure, the orphan ports remain active in the secondary peer, even though they are now isolated from the rest of the network. In this case, it is recommended to configure a non-vPC trunk between peer switches.

Benefits of Virtual PortChannels:

1. Enhanced Network Performance: vPCs distribute traffic across multiple physical links, increasing overall bandwidth and reducing congestion. This improves network performance, especially in environments with high data transfer requirements.

2. Improved Redundancy and High Availability: By utilizing vPCs, organizations can eliminate single points of failure and achieve network resiliency. If a link or switch fails, traffic can seamlessly failover to the remaining active links, ensuring uninterrupted connectivity.

3. Simplified Network Management: vPCs simplify network management by treating multiple physical links as a single logical entity. This unified approach allows for more straightforward configuration, troubleshooting, and maintenance, reducing operational complexity and potential errors.

4. Scalability: As organizations grow and their network requirements evolve, vPCs offer a scalable solution. Additional switches and links can be seamlessly added to the vPC domain, expanding network capacity without disrupting ongoing operations.

Implementing Virtual PortChannels:

Implementing vPCs requires careful planning and consideration. Here are some key factors to keep in mind:

1. Compatible Network Equipment: Ensure the network switches and devices used support vPC technology. Consult the manufacturer’s documentation to verify compatibility and recommended configurations.

2. vPC Peer Link: Establishing a peer link between the vPC-enabled switches is crucial for synchronization and control plane communication. This link should have sufficient bandwidth and redundancy to support the vPC domain.

3. vPC Member Ports: Determine which physical links will be part of the vPC domain and configure them as vPC member ports. These ports should be connected to separate switches to ensure redundancy and minimize single points of failure.

4. vPC Keepalive Link: To monitor the health of vPC peers, a dedicated keepalive link is required. This link should be separate from the vPC peer link and have sufficient bandwidth to exchange keepalive messages.

Conclusion: Virtual PortChannels have revolutionized network connectivity, offering increased performance, redundancy, and scalability. By aggregating multiple physical links into a single logical interface, vPCs simplify network management and ensure uninterrupted connectivity.

While implementing vPCs requires careful planning and consideration, their benefits make them a valuable addition to any modern data center or network infrastructure. Remember, understanding the fundamentals and best practices of vPCs is essential for successfully implementing this technology and maximizing its benefits.
