Software Defined Networking and OpenFlow


Changes inside the data center are driving networks and network services towards becoming more flexible, virtualization-aware and API-driven. One major trend affecting the future of networking is software-defined networking ( SDN ). The software-defined architecture aims to abstract the entire network into a single logical switch.

SDN is an evolving technology and is defined by the Open Networking Foundation ( ONF ) as:

Software Defined Networking is the physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices.

The main driving body behind SDN is the Open Networking Foundation ( ONF ). Founded in 2011, the ONF is a non-profit organisation that aims to provide an alternative to proprietary solutions that limit flexibility and create vendor lock-in. The formation of the ONF allowed its members to run proofs of concept on heterogeneous networking devices without requiring vendors to expose the internal code of their software. This creates a path for an open-source approach to networking and policy-based controllers.

 

Drivers for SDN

  • Application-driven routing: users can control network paths; a way to enhance link utilization.
  • An open solution for VM mobility, without relying on VLANs.
  • A means to traffic engineer without MPLS.
  • A solution to build very large Layer 2 networks.
  • A way to scale firewalls and load balancers.
  • A way to configure an entire network as a whole, as opposed to as individual entities.
  • A way to build your own encryption solution; off-the-box encryption.
  • A way to distribute policies from a central controller.
  • Customized flow forwarding, based on a variety of bit patterns.
  • A global view of the network and its state; end-to-end visibility.
  • A solution to use commodity switches in the network; massive cost savings.

 

The following table lists the Software Defined Networking ( SDN ) benefits and the problems encountered with the existing control plane architecture:

SDN Benefits | Problems with the existing approach
Faster software deployment. | Large-scale provisioning and orchestration.
Programmable network elements. | Limited traffic engineering ( MPLS TE is cumbersome ).
Faster provisioning. | Synchronized distribution of policies.
Centralized intelligence with centralized controllers. | Routing of large elephant flows.
Decisions based on end-to-end visibility. | QoS and load-based forwarding models.
Granular control of flows. | Ability to scale with VLANs.
Decreases the dependence on network appliances like load balancers. |

 

Regardless of the hype and benefits of SDN, neither OpenFlow nor other SDN technologies address the real problem: the lack of a session layer in the TCP/IP protocol stack. The client's application ( Layer 7 ) connects to the IP address ( Layer 3 ) of the server, and if you want persistent sessions, the server's IP address must remain reachable. Session persistence, and the ability to connect to multiple Layer 3 addresses to reach the same device, is the job of the OSI session layer. The session layer provides the services for opening, closing and managing a session between end-user applications. It allows information from different sources to be properly combined and synchronized. The problem is that the TCP/IP reference model does not include a session layer, and there is none in the TCP/IP protocol stack. SDN does not solve this; it just gives you different tools to implement today's kludges.
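A minimal sketch of why this matters, using only the Python standard library ( the server address is hypothetical ): a TCP session is identified by the 4-tuple of client IP, client port, server IP and server port, so the application is pinned to one Layer 3 address for the life of the session.

import socket

# A TCP session is identified by the 4-tuple
# (client IP, client port, server IP, server port): the application
# is pinned to one Layer 3 address for the life of the session.
SERVER = ("192.0.2.10", 443)  # hypothetical server address

try:
    conn = socket.create_connection(SERVER, timeout=3)
    conn.close()
except OSError:
    # If the server's IP changes or becomes unreachable ( e.g. a VM
    # moves ), the session breaks. There is no session-layer identifier
    # to migrate; the only recovery is a brand-new connection -- one of
    # the kludges ( DNS updates, VIPs, load balancers ) mentioned above.
    pass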

 

TCP/IP Ref Model


 

Control and Data Plane

Traditional networking devices have a control and a forwarding plane, depicted in the diagram below. The control plane is responsible for setting up the necessary protocols and controls so the data plane can forward packets, resulting in end-to-end connectivity. Both roles are performed on a single device: the fast packet forwarding ( data path ) and the high-level routing decisions ( control path ) occur on the same box.

 

Router Planes of Operation


 

Control Plane

In routing, the control plane is the part of the router architecture responsible for drawing the network map. When control planes are mentioned, we usually think only of routing protocols, such as OSPF or BGP. But in reality, control plane protocols perform numerous other functions, including:

  • Connectivity management ( BFD, CFM )
  • Interface state management ( PPP, LACP )
  • Service provisioning ( RSVP for IntServ or MPLS TE )
  • Topology and reachability information exchange ( IP routing protocols, IS-IS in TRILL/SPB )
  • Adjacent device discovery via HELLO mechanisms
  • ICMP

 

Control plane protocols run over data plane interfaces to ensure “shared fate” – if the packet forwarding fails, the control plane protocol fails as well.

The majority of control plane protocols ( BGP, OSPF, BFD ) are not data-driven: a BGP or BFD packet is never sent as a direct response to a data packet. There is a question mark over the validity of ICMP as a control plane protocol. The debate is whether it should be classed in the category of the control or the data plane. Some ICMP packets are sent as replies to other ICMP packets, and others are triggered by data plane packets, i.e., data-driven. My view is that ICMP is a control plane protocol that is triggered by data plane activity. After all, the "C" in ICMP does stand for "Control".

 

Data Plane

In routing, the data plane is the part of the router architecture that decides what to do with a packet received on an inbound interface. It is primarily focused on forwarding packets, but it also includes the following functions:

  • ACL logging
  • NetFlow accounting
  • NAT session creation
  • NAT table maintenance

 

Data forwarding is usually performed in dedicated hardware, while the additional functions ( ACL logging, NetFlow accounting ) usually happen on the device CPU, a process commonly known as "punting".

 

Software Defined Networking changes the control and data plane architecture

The concept of SDN takes these two planes and separates them, i.e., the control plane and forwarding plane are decoupled from each other. This allows the networking devices in the forwarding path to focus solely on packet forwarding. A separate controller ( orchestration system ) in an out-of-band network is used to set up the policies and controls, so the forwarding plane has the correct information to forward packets efficiently. The network control plane is moved to a centralized controller on a server, instead of residing on the same box that carries out the forwarding. Moving the intelligence ( control plane ) off the data plane network devices onto a controller enables companies to use low-cost, commodity hardware in the forwarding path.

A centralized computation and management plane makes more sense than a centralized control plane.

The controller maintains a view of the entire network and communicates via OpenFlow ( or, in some cases, BGP ) with the different types of OpenFlow-enabled network boxes. The data path portion still resides on the switch, while the high-level decisions are moved to the separate controller. The data path presents a clean flow-table abstraction: each flow-table entry contains a set of packet fields to match and the actions to apply ( drop, redirect, send-out-port ). When an OpenFlow switch receives a packet it has never seen before and has no matching flow entry, it sends the packet to the controller for processing. The controller then decides what to do with the packet.
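To make the flow-table abstraction concrete, here is a minimal Python sketch. The field names and actions are illustrative only, not a real OpenFlow schema: the switch walks its table, applies the first matching entry, and punts table misses to the controller.

# Minimal sketch of the flow-table abstraction described above.
# Field names and actions are hypothetical, not a real OpenFlow schema.
FLOW_TABLE = [
    # (match fields, action)
    ({"ip_dst": "10.0.0.1"}, ("output", 2)),   # send to port 2
    ({"tcp_dst": 23},        ("drop", None)),  # drop telnet
]

def handle_packet(pkt: dict):
    for match, action in FLOW_TABLE:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action
    # Table miss: punt the packet to the controller, which decides
    # what to do and may install a new flow entry for this flow.
    return ("send_to_controller", None)

print(handle_packet({"ip_dst": "10.0.0.1"}))  # ('output', 2)
print(handle_packet({"ip_dst": "10.9.9.9"}))  # ('send_to_controller', None)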

SDN Planes separation


 

Applications can then be developed on top of the controller to perform functions such as security scrubbing, load balancing, traffic engineering or customised packet forwarding. The centralized view of the network simplifies problems that were harder to overcome with traditional control plane protocols. A single controller could potentially manage all OpenFlow-enabled switches, and instead of individually configuring each switch, the controller can push policies down to multiple switches at the same time: a compelling example of many-to-one virtualization.

With SDN, the operator uses the centralized controller to choose the correct forwarding information on a per-flow basis. This allows better load balancing and traffic separation on the data plane. There is no need to enforce traffic separation based on VLANs, as the controller holds a set of policies and rules that only allow traffic from one "VLAN" to be forwarded to other devices within that same "VLAN".
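A hedged sketch of that idea, with hypothetical port-to-tenant assignments: the controller holds the segmentation policy and only installs forwarding entries between ports in the same logical group, so no VLAN tag ever needs to appear on the wire.

# Hypothetical port-to-tenant policy held on the controller.
TENANT_OF_PORT = {1: "blue", 2: "blue", 3: "red", 4: "red"}

def may_install_flow(in_port: int, out_port: int) -> bool:
    """Install a forwarding entry only within one logical segment."""
    return TENANT_OF_PORT.get(in_port) == TENANT_OF_PORT.get(out_port)

assert may_install_flow(1, 2)        # blue -> blue: entry installed
assert not may_install_flow(1, 3)    # blue -> red: no entry, no path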

With the advent of VXLAN, which allows up to 16 million logical entities, the benefits of SDN should not be purely associated with overcoming VLAN scaling issues. VXLAN already does a good job of this.

It does make sense to deploy a centralized control plane in smaller, independent islands, and in my view this should be at the edge of the network, for roles such as security and policy enforcement. Using OpenFlow on one or more isolated devices is easy to implement and scale, and it also reduces the impact of a controller failure. If a controller fails and its sole job is to implement packet filters when a new user connects to the network, the only effect is that the new user cannot connect. If the controller is responsible for core changes, a failure may have far more interesting results. A new user being unable to connect is bad, but it is not as bad as losing your entire fabric.

SDN islands at the Edge


What is OpenFlow?

A traditional networking device runs all the control and data plane functions. The control plane, usually implemented on a central CPU or the supervisor module, downloads the forwarding instructions into the data plane structures. Every vendor needs a communications protocol to bind the two planes together so that the forwarding instructions can be downloaded; you need a protocol between control and data plane elements in any distributed architecture. For traditional vendor devices, the protocol that binds this communication path is not open source, and every vendor uses its own proprietary protocol ( Cisco uses IPC – InterProcess Communication ). OpenFlow tries to define a standard protocol between the control plane and the associated data plane.

When you think of OpenFlow, you should relate it to the communication protocol between a traditional supervisor and its linecards. OpenFlow is just a low-level tool.

Essentially, OpenFlow is a control plane ( controller ) to data plane ( OpenFlow-enabled device ) protocol that allows the control plane to modify forwarding entries in the data plane.
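As an illustration, here is a minimal controller app written against Ryu, one open-source OpenFlow controller framework ( the match fields and output port are hypothetical ). On the features handshake it pushes a single forwarding entry into the switch, which is exactly the control-to-data-plane operation described above.

# Minimal sketch using the open-source Ryu controller framework
# (OpenFlow 1.3). The match fields and output port are hypothetical.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowPusher(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match IPv4 traffic to 10.0.0.1 and forward it out port 2.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="10.0.0.1")
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # The FlowMod message is the controller modifying a forwarding
        # entry in the switch's data plane.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))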

 

OpenFlow Packet


 

Proactive versus Reactive Flow Set-up

OpenFlow has two types of flow setup: proactive and reactive.

With proactive setup, the controller populates the flow tables ahead of time, similar to typical routing. By pre-defining all of your flows and actions ahead of time in the switches' flow tables, the packet-in event never occurs. The result is that all packets are forwarded at line rate.

With reactive setup, the network device reacts to traffic, consults the OpenFlow controller and creates a rule in the flow table based on the controller's instruction. The problem with this approach is that it can incur many CPU hits.
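A hedged Ryu sketch of both models in one app ( addresses and ports hypothetical ): the proactive entry is installed at connect time so matching packets never leave the data plane, while the lowest-priority table-miss entry implements the reactive path by punting unknown packets to the controller.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import (CONFIG_DISPATCHER, MAIN_DISPATCHER,
                                    set_ev_cls)
from ryu.ofproto import ofproto_v1_3

class ProactiveAndReactive(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Proactive: pre-install a specific flow so matching traffic is
        # forwarded at line rate and the packet-in event never occurs.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="10.0.0.1")
        self._add_flow(dp, 10, match, [parser.OFPActionOutput(2)])
        # Reactive fallback: a lowest-priority table-miss entry that
        # punts unknown packets to the controller (the CPU-heavy path).
        self._add_flow(dp, 0, parser.OFPMatch(),
                       [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                               ofp.OFPCML_NO_BUFFER)])

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        # Reactive: decide what to do with the punted packet and,
        # typically, install a flow entry so the next packet of this
        # flow stays in the data plane.
        self.logger.info("table miss: %d bytes punted", ev.msg.total_len)

    @staticmethod
    def _add_flow(dp, priority, match, actions):
        ofp, parser = dp.ofproto, dp.ofproto_parser
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                      match=match, instructions=inst))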

 

Hop-by-Hop versus Path-based Forwarding


 

The following table outlines the key points for each type of flow setup:

Proactive flow setup | Reactive flow setup
Works well when the controller is emulating BGP or OSPF. | Used when no one can predict when and where a new MAC address will appear.
The controller must first discover the entire topology. | Punts unknown packets to the controller. Many CPU hits.
Discovers endpoints ( MAC addresses, IP addresses and IP subnets ). | Computes forwarding paths on demand. Not an off-the-box computation.
Computes optimal forwarding off the box. | Installs flow entries based on actual traffic.
Downloads flow entries to the data plane switches. | Has many scalability concerns, such as the packet punting rate.
No data plane controller involvement, with the exceptions of ARP and MAC learning. Line-rate performance. | Not a recommended setup.

 

Hop-by-Hop versus Path-based Forwarding

The following table illustrates the key points for the two types of forwarding methods used by OpenFlow: hop-by-hop forwarding and path-based forwarding.

 

Hop-by-Hop Forwarding | Path-based Forwarding
Similar to traditional IP forwarding. | Similar to MPLS.
Installs identical flows on each switch on the data path. | Maps flows to paths on ingress switches and assigns user traffic to paths at the edge node.
Scalability concerns relating to flow updates after a change in topology. | Computes paths across the network and installs end-to-end path forwarding entries.
Significant overhead in large-scale networks. | Works better than hop-by-hop forwarding in large-scale networks.
FIB update challenges. Convergence time. | Core switches don't have to support the same granular functionality as the edge switches.
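A hedged sketch of the path-based idea, again using Ryu's OpenFlow 1.3 parser ( the path label, ports and addresses are all hypothetical ): the ingress switch classifies the flow and pushes a VLAN tag used purely as a path ID, core switches match only on the tag and so never need per-flow entries, and the egress switch strips the tag. Each (match, actions) pair would be installed with an OFPFlowMod as in the earlier sketch.

# Hedged sketch of path-based forwarding (OpenFlow 1.3 via Ryu).
# VLAN ID 100 is used purely as a hypothetical path label.
PATH_ID = 100
VID = 0x1000 | PATH_ID  # OFPVID_PRESENT bit | VLAN ID

def ingress_flow(parser):
    # Ingress: classify the flow and map it onto a path by tagging it.
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="10.0.0.1")
    actions = [parser.OFPActionPushVlan(0x8100),
               parser.OFPActionSetField(vlan_vid=VID),
               parser.OFPActionOutput(2)]
    return match, actions

def core_flow(parser):
    # Core: match only the path label -- no per-flow state needed.
    match = parser.OFPMatch(vlan_vid=VID)
    actions = [parser.OFPActionOutput(3)]
    return match, actions

def egress_flow(parser):
    # Egress: strip the label before handing traffic to the host.
    match = parser.OFPMatch(vlan_vid=VID)
    actions = [parser.OFPActionPopVlan(), parser.OFPActionOutput(1)]
    return match, actions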

 

Threat Analysis – Securing SDN

Obviously, any controller is a lucrative target for attack. Anyone who figures out you are running a controller-based network will try to attack the controller and its control plane. The first thing an attacker may try is to intercept the controller-to-switch communication and replace it with their own commands, essentially attacking the control plane with whatever means they like. An attacker may also try to send a malformed packet or some other type of unknown packet to the controller ( a fuzzing attack ), exploiting bugs in the controller and causing it to crash. Fuzzing attacks can be carried out with application scanning software such as Burp Suite, which manipulates data in particular ways in an attempt to break the application.

The best way to tighten security is to encrypt switch-to-controller communications with SSL/TLS, using self-signed certificates to authenticate the switch and the controller. You should also minimize interaction with the data plane, with the exceptions of ARP and MAC learning. To prevent denial-of-service attacks on the controller, you can use Control Plane Policing ( CoPP ) on ingress so you don't overload the switch and the controller. Currently, NEC is the only vendor implementing CoPP.
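A minimal sketch of mutually authenticated switch-to-controller transport using Python's standard ssl module ( the file paths are hypothetical, and a real deployment would use the controller's built-in TLS support ): the controller presents its own certificate and demands one from the switch, which is the self-signed-certificate model described above.

import socket
import ssl

# Hypothetical certificate paths; in practice these come from your PKI.
# Self-signed certificates work if each side trusts the other's CA file.
CERT, KEY, PEER_CA = "controller.crt", "controller.key", "switch-ca.crt"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile=CERT, keyfile=KEY)   # authenticate controller
ctx.verify_mode = ssl.CERT_REQUIRED               # demand a switch cert
ctx.load_verify_locations(cafile=PEER_CA)         # trust anchor for switches

# Listen on the conventional OpenFlow TLS port and wrap connections.
with socket.create_server(("0.0.0.0", 6653)) as srv:
    conn, _addr = srv.accept()
    with ctx.wrap_socket(conn, server_side=True) as tls:
        print("authenticated switch:", tls.getpeercert()["subject"])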

Securing SDN


 

The hybrid deployment model is useful from a security perspective. You can assign certain ports or VLANs to OpenFlow and other ports or VLANs to traditional forwarding, and then use the traditionally forwarded network to communicate with the OpenFlow controller.

 

Software Defined Networking or Traditional Routing Protocols?

The move to a Software Defined Networking architecture has its clear advantages. It is agile and can react quickly to business needs, such as new product development. There are, however, circumstances where the existing control plane architecture can meet company requirements. The following table displays the advantages and disadvantages of the existing routing protocol control architecture:

 

Advantages | Disadvantages
+ Reliable and well known. | - Non-standard forwarding models. Destination-only, not load-aware, metrics.**
+ Proven, with 20-plus years of field experience. | - Loosely coupled.
+ Deterministic and predictable. | - Lacks end-to-end transactional consistency and visibility.
+ Self-healing: traffic can reroute around a failed node or link. | - Limited topology discovery and extraction. Basic neighbor and topology tables.
+ Autonomous. | - Lacks the ability to change existing control plane protocol behavior.
+ Scalable. | - Lacks the ability to introduce new control plane protocols.
+ Plenty of learning and reading materials. |

 

** Basic EIGRP is a partial exception. The IETF originally proposed an Energy Aware Control Plane, but this was later dropped.

 

SDN use cases:

  • Edge security policy enforcement at the network edge. Authenticate users or VMs and deploy per-user ACLs before connecting a user to the network.
  • Custom routing and online TE. The ability to route on a variety of business metrics, aka "routing for dollars", allowing you to override the default routing behavior.
  • Custom traffic processing. For analytics and encryption.
  • Programmable SPAN ports. Use OpenFlow entries to mirror selected traffic to a SPAN port.
  • DoS traffic blackholing and distributed DoS prevention. Block DoS traffic as close to the source as possible, with more selective traffic targeting than the original RTBH approach.** The traffic blocking is implemented in OpenFlow switches: higher performance with significantly lower costs.
  • Traffic redirection and service insertion. Redirect a subset of traffic to network appliances, installing redirection flow entries wherever needed.
  • Network monitoring. The controller is the authoritative source of information on network topology and forwarding paths.
  • Scale-out load balancing. Punt new flows to the OpenFlow controller and install per-session entries throughout the network.
  • IPS scale-out. OpenFlow is used to distribute the load to multiple IDS appliances.

 

** Remote-Triggered Black Hole: RTBH refers to the installation of a host route to a bogus IP address ( the RTBH address ) pointing to a null interface on all routers. BGP is used to advertise the host routes of the attacked hosts to other BGP peers, with the next hop pointing to the RTBH address. This is mostly automated in ISP environments.
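For comparison, a hedged Ryu sketch of the OpenFlow variant mentioned above ( the victim address is hypothetical ): instead of a BGP-advertised null route, the controller installs a high-priority drop entry, and because OpenFlow can match on more than the destination, the filter can be as selective as needed.

# Hedged sketch: drop traffic to an attacked host (hypothetical address).
# In OpenFlow 1.3, a flow entry whose instructions produce no output
# action discards matching packets, implementing the "black hole".
def install_blackhole(dp, victim_ip="192.0.2.50"):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=victim_ip)
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                  match=match, instructions=[]))
    # Unlike destination-based RTBH, the match could also include
    # source prefixes or L4 ports for more selective targeting.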

 

SDN Deployment Models

Guidelines:

  1. Start with small deployments away from the mission-critical production path, i.e., the core. Ideally, start with device or service provisioning systems.
  2. Start at the edge and slowly integrate with the core. Minimize the risk and the blast radius. Start with packet filters at the edge and tasks that can be easily automated ( VLANs ).
  3. Integrate the new technology with the existing network.
  4. Gradually increase scale and gain trust. Experience is key.
  5. Keep the controller in a protected out-of-band network, with SSL connectivity to the switches.

 

There are four different models for OpenFlow deployment, and the following sections list the key points of each.

 

Native OpenFlow  

  • Commonly used for greenfield deployments.
  • The controller performs all the intelligent functions.
  • The forwarding plane switches have little intelligence and solely perform packet forwarding.
  • The white box switches need IP connectivity to the controller for the OpenFlow control sessions. This should be done over an out-of-band network. If you are forced to use an in-band network for this communication path, use an isolated VLAN with STP.
  • Fast convergence techniques, such as BFD, may be difficult to use with a central controller.
  • Many people hold the view that this approach does not work for the regular company. Companies implementing native OpenFlow, such as Google, have the time and resources needed to reinvent all the wheels involved in implementing a new control plane protocol ( OpenFlow ).

 

Native OpenFlow with Extensions

  • Some control plane functions are handed off from the centralized controller to the forwarding plane switches. For example, the OpenFlow-enabled switches could perform load balancing across multiple links without the decision first being made by the controller. You could also run STP, LACP or ARP locally on the switch without interaction with the controller. This approach is useful if you lose connectivity to the controller: if the low-level switches are performing certain controller functions, packet forwarding will continue in the event of a failure.
  • The local switches should support the specific OpenFlow extensions that let them carry out functions on behalf of the controller.

 

Hybrid ( Ships in the night )

  • This approach is used where OpenFlow runs in parallel with the production network.
  • The same network box is controlled by the existing on-box control plane as well as an off-box control plane ( OpenFlow ).
  • Good for pilot deployments, as the switches still run traditional control plane protocols.
  • The OpenFlow controller manages only certain VLANs or ports on the network.
  • The big challenge is to determine and ensure conflict-free sharing of forwarding plane resources across multiple control planes.

 

Integrated OpenFlow

  • OpenFlow classifiers and forwarding entries are integrated with the existing control plane. For example, Juniper's OpenFlow model follows this mode of operation, where OpenFlow static routes can be redistributed into the other routing protocols.
  • No need for a new control plane.
  • No need to replace all forwarding hardware.
  • The most practical approach, as long as the vendor supports it.

 
