Open vSwitch (OVS) Basics

There are many barriers to network innovation, which make it difficult for outsiders to drive features and innovate. Until recently, networking technologies were largely proprietary and controlled by a small number of vendors, and the lack of available tools limited network virtualization and network resource abstraction. There are now many initiatives challenging this space, and the Open vSwitch (OVS) project, now hosted by the Linux Foundation, is one of them. It complements the work of the Open Networking Foundation (ONF), a non-profit organization that promotes the adoption of software-defined networking through open standards such as OpenFlow. Since its release, OVS has grown in popularity and is now the de facto open-source cloud networking switch. It is changing the network landscape by moving the network edge to the hypervisor: the hypervisor is the new edge of the network. This helps resolve the problem of network separation; cloud users can now be assigned VMs with elastic network configurations.

Open vSwitch originates from academic research: a project known as Ethane (SIGCOMM 2007). Ethane created a simple flow-based switch with a central controller. The central controller has end-to-end visibility, allowing policies to be applied in one place while affecting many data-plane devices; central controllers make orchestrating the network much easier. The Ethane work led to the introduction of the OpenFlow protocol (SIGCOMM CCR 2008) and then to the first Open vSwitch (OVS) release in early 2009.

 

Open vSwitch (OVS)

OVS is a multilayer virtual switch implemented in software. It uses virtual network bridges and flow rules to forward packets between hosts, behaving like a physical switch, only virtualized. Namespaces and instance tap interfaces connect to what are known as OVS bridge ports. Similar to a traditional switch, OVS maintains information about connected devices, such as their MAC addresses. It is an enhancement to the monolithic Linux bridge plugin and adds overlay networking (GRE and VXLAN), providing multi-tenancy in cloud environments. It can also be integrated with hardware and serve as the control plane for switching silicon.
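
To make this concrete, here is a minimal sketch of building such a topology with ovs-vsctl; the bridge name br0, port name tap0, tunnel name vxlan0, and remote IP 192.0.2.10 are hypothetical placeholders, not values from the text.

    # Create a bridge and attach an instance tap interface to it
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 tap0

    # Add a VXLAN overlay port towards a remote hypervisor (placeholder IP)
    ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=192.0.2.10

    # Verify the resulting configuration
    ovs-vsctl show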

Programming flow rules works differently in OVS than in the standard Linux bridge. The OVS plugin does not simply use VLANs to tag traffic. Instead, it programs flow rules on the virtual switches that dictate how traffic should be manipulated before it is forwarded to the exit interface. Flow rules essentially determine how inbound and outbound traffic should be treated.
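
As a hedged illustration (the bridge br0 and the port numbers are assumptions, not from the text), flow rules of this kind can be programmed with ovs-ofctl:

    # Retag traffic arriving on port 1 with VLAN 10, then send it out port 2
    ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=mod_vlan_vid:10,output:2"

    # Low-priority catch-all that drops anything unmatched
    ovs-ofctl add-flow br0 "priority=0,actions=drop"

    # Inspect the installed rules
    ovs-ofctl dump-flows br0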

OVS has two fail modes: a) standalone and b) secure. Standalone is the default mode, in which OVS simply acts as a learning switch. Secure mode is different in the sense that it relies on the controller element to insert flow rules, so it has a dependency on the controller.
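
The fail mode is configured per bridge; a minimal sketch, again assuming a bridge named br0:

    # Show the current fail mode (empty output means the default, standalone)
    ovs-vsctl get-fail-mode br0

    # Make forwarding depend on the controller
    ovs-vsctl set-fail-mode br0 secure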

 

Open vSwitch Flow Forwarding

Kernel mode, known as the "fast path", is where the switching is done. If you relate this to the hardware components of a physical device, kernel mode maps to the ASIC. User mode is known as the "slow path". When a flow arrives that the kernel doesn't know about, user mode is engaged; once the flow is active, user mode should not be invoked again. So you may take a performance hit on the first packet of a flow.

The first packet in a flow goes to the userspace ovs-vswitchd process, and subsequent packets hit cached entries in the kernel. When a packet is received by the kernel module, the cache is inspected to determine if there is a matching flow entry. If a corresponding flow entry is found in the cache, the associated action is carried out on the packet; this could be forwarding the packet or modifying its headers. If no cache entry is found, the packet is passed to the userspace ovs-vswitchd process, which decides how to handle it and installs a cache entry, so subsequent packets are processed in the kernel without userspace interaction. The processing speed of OVS is now faster than that of the original Linux bridge, and it has good support for megaflows and multithreading.
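
This split can be observed directly; a hedged sketch of the relevant commands (output varies by system):

    # Dump the flow entries currently cached in the kernel datapath (fast path)
    ovs-dpctl dump-flows

    # The same view through ovs-vswitchd, including megaflow wildcarding
    ovs-appctl dpctl/dump-flows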

 

Diagram: OVS packet path (fast path and slow path)

 

OVS Component Architecture

There are a number of CLI tools to interface with the various components: ovs-vsctl manages state in the ovsdb-server, ovs-appctl sends commands to the running ovs-vswitchd, ovs-dpctl configures the kernel datapath module, and ovs-ofctl works with the OpenFlow protocol. The following diagram is a reference component architecture.

 

Diagram: OVS component architecture
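
A sample invocation of each tool, with br0 as a hypothetical bridge:

    ovs-vsctl show              # ovsdb-server: dump the switch configuration
    ovs-appctl vlog/list        # ovs-vswitchd: list its logging subsystems
    ovs-dpctl show              # kernel module: show datapaths and their ports
    ovs-ofctl dump-flows br0    # OpenFlow: dump the flow table of br0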

You may have an off-host component such as the controller. It communicates with, and acts as a manager of, a set of OVS components in a cluster. The controller has a global view and manages across all the components. An example controller is OpenDaylight. OpenDaylight promotes the adoption of SDN and serves as a platform for Network Functions Virtualization (NFV); NFV virtualizes network services instead of running them on function-specific physical hardware. The controller has northbound interfaces that expose the network to applications and southbound interfaces that talk to the OVS components.
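
Attaching a bridge to such a controller is a single command; a sketch, where 203.0.113.5 stands in as a placeholder for the controller's address:

    # Point br0 at an OpenFlow controller, e.g. an OpenDaylight instance
    ovs-vsctl set-controller br0 tcp:203.0.113.5:6653

    # Check the controller connection state
    ovs-vsctl list controller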

 

Ryu provides a framework for SDN controllers and allows you to develop your own. It is written in Python and supports OpenFlow, NETCONF, and OF-Config.
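
As a quick example, Ryu ships with sample applications that can be run directly once it is installed (assuming a Python environment with pip):

    pip install ryu
    # Run the bundled OpenFlow 1.3 learning-switch application
    ryu-manager ryu.app.simple_switch_13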

 

There are many interfaces used to communicate across and between components. The database has a management protocol known as OVSDB (RFC 7047). OVS has a local database server on every physical host, which maintains the configuration of the virtual switches. Netlink is used to communicate between user and kernel modes, and between different user-space processes; it is used between ovs-vswitchd and openvswitch.ko and is designed to transfer miscellaneous networking information. OpenFlow can also be used to talk to and program the OVS.

The ovsdb-server interfaces with an external controller (if used) and with ovs-vswitchd. Its purpose is to store configuration information for the switches, and its state is persistent. The main CLI tool for it is ovs-vsctl.

The ovs-vswitchd daemon interfaces with an external controller, with the kernel via Netlink, and with the ovsdb-server. Its purpose is to manage multiple bridges, and it is involved in the data path. It is a core system component of OVS, and two CLI tools, ovs-ofctl and ovs-appctl, are used to interface with it.
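
A hedged sketch of inspecting the database layer directly:

    # Dump the entire local OVSDB over its Unix socket
    ovsdb-client dump

    # Query the same data through ovs-vsctl, e.g. every bridge record
    ovs-vsctl list bridge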

 

Linux Containers and Networking

OVS can make use of Linux and Docker containers. Containers provide a layer of isolation that reduces the amount of human coordination needed, and they make it easy to build out example scenarios. Starting a container takes milliseconds, compared to minutes for a virtual machine, and deploying container images is much faster because less data needs to travel across the fabric. Elastic applications with frequent state changes and dynamic resource allocation can be built more efficiently with containers. Both Linux and Docker containers represent a fundamental shift in how we consume and manage applications.

Libvirt is one tool used to work with containers; it is a virtualization management library for Linux. Linux containers rely on process isolation in the kernel, so instead of running a full-blown VM you can run a container that shares the host's kernel yet remains isolated. Each container has its own view of networking and processes. Containers isolate instances without the overhead of a VM: a lightweight way of doing things on a host that builds on mechanisms already in the kernel.
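
For Docker specifically, the OVS source tree ships a helper script, ovs-docker, for wiring a container into a bridge; a hedged sketch, where br-int, web1, the image, and the address are placeholders:

    # Start a container (name and image are placeholders)
    docker run -d --name web1 nginx

    # Attach an eth1 interface in the container to the br-int OVS bridge
    ovs-docker add-port br-int eth1 web1 --ipaddress=172.16.1.2/24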

 

Source versus package install

There are two paths for installation: a) from source code and b) from packages for your Linux distribution. The source-code install is primarily used by developers and is useful if you are trying to build an extension or focusing on hardware integration. Before accessing the repo, install any build dependencies, such as git, autoconf, and libtool; then pull the code from GitHub with the clone command: git clone https://github.com/openvswitch/ovs. Running from source code is considerably more difficult than installing through your distribution, where all the dependencies are resolved for you.
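
A hedged sketch of both paths; the package name shown is the Debian/Ubuntu one and varies by distribution:

    # Path a) build from source
    git clone https://github.com/openvswitch/ovs
    cd ovs
    ./boot.sh && ./configure && make
    sudo make install

    # Path b) package install (Debian/Ubuntu)
    sudo apt-get install openvswitch-switch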
