
Virtual Switching and ToR Integration

In a VMware virtualized environment, a single host typically runs multiple virtual machines (VMs) on the VMkernel hypervisor. The physical host does not have enough network cards to allocate a physical NIC to every VM. There are exceptions, such as Cisco VM-FEX, but generally speaking we have more virtual machines than physical network cards. To let VMs communicate out of an uplink, or even with each other internally, we need a network of some kind to support these flows. By implementing a Layer 2 switch within the ESXi host, traffic between VMs in the same VLAN is switched locally. Traffic crossing VLAN boundaries is passed to a security or routing device northbound of the switch. Possibilities for micro-segmentation and per-VM NIC firewalls exist, but we will deal with them in a later article. Essentially, the virtual switch aggregates the traffic of multiple VMs across a set of uplinks and delivers frames between VMs based on Media Access Control (MAC) addresses.
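
To make the frame-delivery behaviour concrete, below is a minimal sketch of the learning-switch logic just described, assuming a simple MAC-to-port mapping; all names are illustrative and nothing here is VMware internal code.

```python
# Minimal learning switch: map source MACs to ports, flood unknowns.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)      # e.g. VM-facing ports plus uplinks
        self.mac_table = {}          # MAC address -> port last seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is delivered to."""
        self.mac_table[src_mac] = in_port         # learn the source MAC
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}      # known unicast: one port
        return self.ports - {in_port}             # unknown/broadcast: flood

vswitch = LearningSwitch(["vm-a", "vm-b", "uplink1"])
print(vswitch.receive("vm-a", "00:50:56:aa:00:01", "ff:ff:ff:ff:ff:ff"))
# -> {'vm-b', 'uplink1'}: flooded everywhere except the ingress port
```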

There are three distinct virtual switches in a VMware environment: a) the standalone virtual switch, b) the distributed virtual switch, and c) third-party distributed switches, such as the Cisco Nexus 1000v. These virtual switches have ports, and the hypervisor presents what looks like a NIC to every VM. The VMs are isolated and believe they have a real Ethernet adapter. Even if you change the physical cards in the server, the VM does not care, as it never sees the physical hardware.

VMware

The diagram displays a virtualized environment with two sets of VMs, blue and red, attached to corresponding Port Groups. Port Groups are nothing special, simply management groupings based on configuration templates. VMs in different Port Groups that share the same VLAN can communicate freely. The virtual NIC itself is a software construct emulated by the hypervisor.
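
As a rough illustration of Port Groups being plain configuration templates, here is a hedged sketch using the open-source pyVmomi SDK to add a Port Group with a VLAN ID to a host's standard vSwitch; the vCenter hostname, credentials, and object names are placeholders, not values from this article.

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect to vCenter (placeholder host and credentials).
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]                       # pick the first ESXi host

# A Port Group is just a named template: a VLAN plus policy settings.
spec = vim.host.PortGroup.Specification()
spec.name = "red-portgroup"               # illustrative name
spec.vlanId = 10                          # VLAN carried by this Port Group
spec.vswitchName = "vSwitch0"             # standard switch on this host
spec.policy = vim.host.NetworkPolicy()
host.configManager.networkSystem.AddPortGroup(portgrp=spec)
```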

Virtual Switches

The standalone vSwitch trades advanced features for simplicity and performance. It is not a feature-rich virtual switch; it supports standard VLAN tagging and a control plane consisting of CDP. Each ESXi host has an independent switch with its own data plane and control plane, and every switch is a separate management entity. The distributed vSwitch (vDS) is purely a management construct that minimizes the configuration burden of the standalone switch. It is essentially a template you configure in vCenter and apply to individual hosts, letting you view the entire network infrastructure as one object in vCenter. The port and network statistics assigned to a VM move with the VM. Under the vDS template, each ESXi host still has its own control and data plane with unique MAC and forwarding tables; a local host proxy switch performs packet forwarding and runs the control plane protocols. One major vDS drawback is that if vCenter goes down, you cannot change anything on the local hosts. As a best design practice, most engineers use the standard standalone switch for management traffic and the vDS for VM traffic on the same host. Each virtual switch (vSwitch and vDS) must have its own set of uplinks, and you need at least two uplinks per switch for redundancy, so you already need four uplinks, usually operating at 10Gbps.
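
The split between the vDS as a vCenter-owned template and the independent per-host data planes can be modelled roughly as follows; this is illustrative Python only, not the VMware API.

```python
# One shared template, many independent proxy switches.
class DistributedSwitchTemplate:
    def __init__(self, name, port_groups):
        self.name = name
        self.port_groups = port_groups    # edited once, in vCenter

class HostProxySwitch:
    def __init__(self, host, template):
        self.host = host
        self.template = template          # configuration comes from vCenter
        self.mac_table = {}               # forwarding state stays per host

vds = DistributedSwitchTemplate("dvSwitch1", ["blue-pg", "red-pg"])
proxies = [HostProxySwitch(h, vds) for h in ["esxi-01", "esxi-02", "esxi-03"]]
# If vCenter is down, the proxies keep forwarding with their local state,
# but the shared template cannot be modified -- the drawback noted above.
```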

  • VMware-based software switches don't implement standard 802.1D bridging or run Spanning Tree Protocol (STP). Instead, they use special tricks to prevent forwarding loops, such as a Reverse Path Forwarding (RPF)-style check on the source MAC address, sketched below.
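
A toy version of that source-MAC check, under the assumption that the host knows which MAC addresses belong to its own VMs: a frame arriving on an uplink with a local VM's source MAC must be a reflected copy of the host's own traffic, so it is dropped.

```python
LOCAL_VM_MACS = {"00:50:56:aa:00:01", "00:50:56:aa:00:02"}  # this host's VMs
UPLINKS = {"uplink1", "uplink2"}

def accept_frame(in_port, src_mac):
    # Drop frames that come back in from the physical network carrying
    # one of our own VM MACs as the source: they can only be loops/echoes.
    if in_port in UPLINKS and src_mac in LOCAL_VM_MACS:
        return False
    return True

print(accept_frame("uplink1", "00:50:56:aa:00:01"))  # False: dropped
print(accept_frame("uplink1", "00:50:56:bb:00:09"))  # True: remote MAC
```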

Third-party switches may also be plugged in, and the Nexus 1000v is the most popular. It operates with a control plane, known as the Virtual Supervisor Module (VSM), and distributed data plane objects, known as Virtual Ethernet Modules (VEM). Cisco initially ran all control plane protocols on the VSM, including LACP and IGMP snooping. This severely inhibited scalability, so control plane protocols are now distributed locally to the VEMs. It is a feature-rich software switch and supports VXLAN. You may also use ACLs with the TCP established keyword, which is not available in VMware's switches. Some of these products are free and others require an Enterprise Plus license. If you want a free, feature-rich, standards-based switching product, use Open vSwitch, which is licensed under Apache 2.0.

Nexus 1000v

Open vSwitch is similar to the VMware virtual switch and the Cisco Nexus 1000v, and like them it is standards-based. It operates as a soft switch running within the hypervisor or as the control stack for switching silicon. For example, you can flash your device with OpenWrt and install the Open vSwitch package from the OpenWrt repository. The figure below displays the ports on an Open vSwitch; as you can see, a number of bridges are present. These bridges are used to forward packets between hosts. For a free switch, it has a great feature set, including VXLAN, STT, Layer 4 hashing, OpenFlow, etc.

Open vSwitch
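
If you want to reproduce a similar setup, Open vSwitch is driven through its standard ovs-vsctl CLI; the short wrapper below is a sketch (it requires a host with Open vSwitch installed, and the bridge, port, and remote VTEP values are examples, not taken from the output above).

```python
import subprocess

def ovs(*args):
    # Thin wrapper around the ovs-vsctl command-line tool.
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("add-br", "br0")                           # create a bridge
ovs("add-port", "br0", "vnet0", "tag=10")      # attach a VM port on VLAN 10
ovs("add-port", "br0", "vx1", "--",            # VXLAN tunnel to another host
    "set", "interface", "vx1", "type=vxlan",
    "options:remote_ip=192.0.2.2")
subprocess.run(["ovs-vsctl", "show"])          # list bridges and their ports
```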

Integration with ToR switches

Challenges occur when VMs need to move, resulting in large VLAN sprawl: every VLAN configured on every uplink effectively creates one big switch. What can be done to reduce this requirement? The best case would be a solution that synchronizes the virtual and physical worlds, so any change in the virtual world is automatically provisioned in the physical world. Ideally, the list of VLANs configured on a server-facing port would adjust dynamically as VMs move around the network. For example, if VM-A moves from location-A to location-B, we want its VLAN removed from the port at location-A and added at location-B. Automatic VLAN synchronization reduces broadcast flooding toward servers, lowering the CPU utilization on each physical node.
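
The desired behaviour boils down to deriving each server-facing port's VLAN list from the current VM placement, so a VM move automatically prunes the VLAN from the old port and adds it to the new one. A minimal sketch of that computation, with illustrative names only:

```python
def vlans_per_port(vm_placement):
    """vm_placement: {vm: (switch_port, vlan)} -> {switch_port: set(vlans)}"""
    ports = {}
    for port, vlan in vm_placement.values():
        ports.setdefault(port, set()).add(vlan)
    return ports

placement = {"vm-a": ("tor1:eth1", 10), "vm-b": ("tor1:eth1", 20)}
print(vlans_per_port(placement))     # {'tor1:eth1': {10, 20}}

placement["vm-a"] = ("tor2:eth5", 10)          # VM-A moves to another host
print(vlans_per_port(placement))     # VLAN 10 pruned from tor1, added on tor2
```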

Arista, Force10, and Brocade have VMware networking solutions on their ToR switches. Arista's solution is called VM Tracer; it is natively integrated with EOS and works across their entire family of data center switches. VM Tracer gives you better visibility into and control over VMs. It works by sending and receiving CDP or LLDP packets to extract VM information, including the VLAN numbering per server port. When a VM moves, it can remove the old VLAN from the old port and add it to the new port. Juniper and NEC use their network management systems to keep track of VMs and update the list of VLANs accordingly. Cisco utilizes VM-FEX and a newer feature called VM Tracker, available on NX-OS. Cisco's VM Tracker interacts with the vCenter SOAP API; it works with vCenter to identify each VM's VLAN requirements and to track VM movement from one ESXi host to another. It relies on Cisco Discovery Protocol (CDP) information and currently does not support Link Layer Discovery Protocol (LLDP).
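
To give a feel for the kind of vCenter query such tracking features depend on, here is a hedged pyVmomi sketch (hostname and credentials are placeholders) that lists each VM, the ESXi host it currently runs on, and the networks it attaches to:

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    nets = [n.name for n in vm.network]   # Port Groups the VM attaches to
    print(vm.runtime.host.name, vm.name, nets)
```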

Edge Virtual Bridging (EVB)

There is an IEEE standard way to solve this, called EVB, and it is supported by Juniper and HP ToR switches. It is not currently supported by VMware virtual switches; to implement EVB in a VMware virtualized environment, you need to replace the VMware virtual switch with an HP or Juniper one. EVB uses either VLAN or Q-in-Q tagging between the hypervisor and the physical switch. It introduces the concept of a Virtual Station Interface (VSI) along with the VSI Discovery and Configuration Protocol (VDP). The protocol runs between the virtual switch in the hypervisor and the adjacent physical switch, enabling the hypervisor to request information from the physical switch (for example, upon a VM move). EVB follows two paths: a) 802.1Qbg and b) 802.1Qbh. 802.1Qbg is also referred to as VEPA (Virtual Ethernet Port Aggregator), and 802.1Qbh is also known as VN-Tag (which Cisco products support). Both run in parallel and attempt to provide consistent control over VMs.
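
A highly simplified model of the VDP idea, not the real protocol encoding: on VM arrival the hypervisor associates the VM's VSI with the adjacent switch port, and the switch provisions or prunes VLANs in response.

```python
class TorSwitch:
    def __init__(self):
        self.port_vlans = {}      # port -> set of provisioned VLANs

    def vdp_associate(self, port, vlan):
        self.port_vlans.setdefault(port, set()).add(vlan)

    def vdp_deassociate(self, port, vlan):
        self.port_vlans.get(port, set()).discard(vlan)

tor = TorSwitch()
tor.vdp_associate("eth1", 10)      # VM on VLAN 10 appears behind eth1
tor.vdp_deassociate("eth1", 10)    # VM moves away; the switch prunes VLAN 10
print(tor.port_vlans)              # {'eth1': set()}
```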

Limit Core Flooding

If you want to reduce flooding in the network core, you need a protocol between the switches that allows them to exchange information about which VLANs are in use. Cisco uses VTP; design VTP deployments with care. There is also a standard Layer 2 messaging protocol called Multiple VLAN Registration Protocol (MVRP); unfortunately, it is not implemented by many vendors. It automates the creation and deactivation of VLANs by giving switches the ability to register and de-register VLAN identifiers. Unlike VTP, it does not use a client-server model. MVRP advertises VLAN information over 802.1Q trunks to connected switches that have MVRP enabled on the same interface; the neighboring switch receives the MVRP information and builds a dynamic VLAN table. MVRP is supported on Juniper Networks MX Series routers and EX Series switches.
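
A toy model of that registration behaviour, illustrative only and not the 802.1ak wire protocol: when a switch registers a VLAN, it advertises the registration to its MVRP-enabled neighbours, which build their VLAN tables dynamically.

```python
class MvrpSwitch:
    def __init__(self, name):
        self.name = name
        self.vlans = set()        # dynamically built VLAN table
        self.neighbours = []      # MVRP-enabled trunk neighbours

    def register(self, vlan):
        if vlan not in self.vlans:        # guard against re-advertising
            self.vlans.add(vlan)
            for n in self.neighbours:     # propagate over the trunk
                n.register(vlan)

core, edge = MvrpSwitch("core1"), MvrpSwitch("edge1")
edge.neighbours.append(core)
edge.register(10)        # edge learns VLAN 10 from an access port
print(core.vlans)        # {10}: the core registered it via MVRP
```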
