OpenFlow Series Part A

This blog post is based on Ivan Pepelnjak’s recent deep-dive webinar on OpenFlow.

OpenFlow started in the labs as an academic experiment. The researchers wanted to test concepts about new protocols and forwarding mechanisms on real hardware, but could not do so with the architectures of the time. The difficulty of changing forwarding entries in physical switches limited the project to emulations, and emulations are a type of simulation that cannot mimic an entire production network. The requirement to test on actual hardware (not emulations) led to the separation of the device planes (control, data and management plane) and the introduction of the OpenFlow protocol.

OpenFlow has gone through a number of versions, namely version 1.0 to 1.4. OpenFlow 1.0 was the initial version published by the Open Networking Foundation (ONF). Most vendors initially implemented version 1.0, which has many restrictions and scalability concerns. Version 1.0 was limited, and it shows the ONF rushed to market without a complete product. Not many vendors implemented versions 1.1 or 1.2; most waited for version 1.3, which added per-flow meters and Provider Backbone Bridging (PBB) support. Everyone thinks that OpenFlow is “open”, but it is controlled by a closed group of around 150 member organisations that form the ONF. The ONF specifies all the OpenFlow standards, and the work is hidden until it is published as a standard.

 

 

Separation of Planes

Every device has three independent planes: the data plane, the control plane and the management plane. The data plane is responsible for switching Protocol Data Units (PDUs) from incoming ports to destination ports, using a forwarding table. A Layer 2 forwarding table could be a list of MAC addresses and outgoing ports; a Layer 3 forwarding table contains IP prefixes with next hops and outgoing ports. The data plane is not responsible for creating the entries it forwards with. Someone else has the job of “filling in” the data plane, and that is the function of the control plane.
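
To make the two table types concrete, here is a minimal sketch in plain Python. The addresses, port names and table structures are purely illustrative, not how any real switch stores its forwarding state.

```python
from ipaddress import ip_address, ip_network

# Layer 2 forwarding table: MAC address -> outgoing port (illustrative values)
l2_table = {
    "00:11:22:33:44:55": "eth1",
    "66:77:88:99:aa:bb": "eth2",
}

# Layer 3 forwarding table: IP prefix -> (next hop, outgoing port)
l3_table = {
    ip_network("10.0.0.0/24"): ("10.0.0.254", "eth1"),
    ip_network("0.0.0.0/0"):   ("192.0.2.1",  "eth3"),   # default route
}

def l3_lookup(dst_ip):
    """Longest-prefix match over the Layer 3 table."""
    candidates = [p for p in l3_table if ip_address(dst_ip) in p]
    if not candidates:
        return None
    best = max(candidates, key=lambda p: p.prefixlen)
    return l3_table[best]

print(l2_table["00:11:22:33:44:55"])   # -> eth1
print(l3_lookup("10.0.0.42"))          # -> ('10.0.0.254', 'eth1')
```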

The control plane is responsible for giving the data plane the information it needs to forward. It is considered the “intelligence” of the network, as it makes the decisions about PDU forwarding. Control plane protocols are not limited to routing protocols; there is more to it than BGP and OSPF. Practically every protocol that runs between adjacent devices is a control plane protocol. Line-card protocols such as BFD, STP and LACP are some examples. These protocols do not directly interface with the forwarding table; for example, BFD detects failures, but it doesn’t by itself remove anything from the forwarding table. It informs other, higher-level control plane protocols of the problem and leaves them to change the forwarding behaviour. Protocols like OSPF and IS-IS, on the other hand, directly influence the forwarding table. Finally, the management plane provides operational access and monitoring. It permits you, or “something else”, to configure the device.

 

The Idea of OpenFlow

OpenFlow is not something revolutionary new; similar ideas have been around for the last 20 years. RFC 1925 by R. Callon presents what is known as “The Twelve Networking Truths”. Truth (11) states that “every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.” Solutions to old problems are easily dressed up in new clothes, but the problems stay the same. Scalability of a central control plane will always pose challenges; we had this in the SDH/SONET and Frame Relay days. NEC and Big Switch Networks tried to centralise everything, but eventually moved as much as they could back to the local devices or limited the dynamic nature and number of supported protocols. For example, NEC permits only static port channels, and if you opt for Layer 3 forwarding, the only control protocol they support is ARP. Juniper implemented a very scalable model with MP-BGP, retaining low-level control plane protocols on the local switches. Putting all the periodic, easy jobs on the local switch makes the whole architecture more scalable. Cisco DFA and ACI use the same architecture. It is also hard to do fast feedback loops and fast convergence with a centralised control plane, and OpenFlow and other centralised control-plane-centric SDN architectures do not solve this.

The idea of OpenFlow is very simple: let’s decouple the control and management planes from the switching hardware. Split the three-plane model so that the dumb hardware is in one box and the brains are in another. The intelligence then needs a mechanism to push the forwarding entries, which could be MAC addresses, ACLs, IP prefixes or NAT rules, down to the local switches. The protocol used to push these entries is OpenFlow.

OpenFlow is viewed as both a protocol with a number of instruction sets and an architecture, but it is nothing more than a forwarding-table (TCAM) download protocol. It cannot change switch hardware functionality. If something is supported in OpenFlow but not in the hardware, OpenFlow cannot do anything for you. Just because a given OpenFlow version allows a specific match, it doesn’t mean that the hardware can match on those fields. Juniper, for example, uses OpenFlow 1.3, which permits IPv6 handling, but their hardware does not support matching on IPv6 addresses.
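
The decoupling can be pictured with a minimal, purely illustrative Python sketch: all the intelligence sits in a controller object, and the switch does nothing except store the entries it is handed. The class, method and field names are made up for this example; a real controller would perform the download with OpenFlow flow-mod messages.

```python
class Switch:
    """Dumb forwarding hardware: it only holds what the controller pushes down."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []                  # populated exclusively by the controller

    def install(self, entry):
        self.flow_table.append(entry)         # the role an OpenFlow flow-mod plays

class Controller:
    """All the intelligence lives here; it decides what every switch forwards."""
    def push_mac_entry(self, switch, mac, out_port):
        switch.install({"match": {"eth_dst": mac}, "actions": [("output", out_port)]})

sw1 = Switch("sw1")
Controller().push_mac_entry(sw1, "00:11:22:33:44:55", 2)
print(sw1.flow_table)
```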

 

Flow Table and OpenFlow Forwarding Model

The flow table in a switch is not the same as the Forwarding Information Base (FIB). The FIB is a simple set of instructions supporting destination-based switching. The OpenFlow flow table is slightly more complicated: it is a sequential set of entries that can match on multiple fields, and it supports flow-based switching.
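
The difference in the data structures can be sketched as follows, with invented field names rather than real switch structures: the FIB is keyed on the destination alone, while a flow table entry can match on any combination of fields, so two packets to the same destination can be handled differently.

```python
# FIB: keyed on the destination alone -- every packet towards 10.0.0.0/24 is
# treated the same way, regardless of source or application.
fib = {
    "10.0.0.0/24": {"next_hop": "10.1.1.1", "out_port": 1},
}

# Flow table: each entry can match several fields (flow-based switching), so
# HTTP traffic to 10.0.0.5 can go one way and everything else another way.
flow_table = [
    {"match": {"ipv4_dst": "10.0.0.5", "tcp_dst": 80}, "actions": [("output", 2)]},
    {"match": {"ipv4_dst": "10.0.0.5"},                "actions": [("output", 3)]},
]
```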

 

OpenFlow 1.0 – Single Table Model

The initial OpenFlow forwarding model was simple. It was modelled on low-cost switches that use a TCAM, and most low-cost switches have only one table that serves all the forwarding needs of that switch. As a result, the OpenFlow 1.0 forwarding model was one simple table. A packet is received on an interface, metadata (like the incoming interface) and other header fields are extracted from it, and those fields are then matched against the OpenFlow table. Every entry in the table has a priority field, and the match with the highest priority is the winner. Each entry in the table should have a different priority; the highest-priority match determines the forwarding behaviour.


If you are doing simple MAC address forwarding, i.e. building a simple bridge, all entries are already distinct. There is no overlap, so you don’t need to use the priority field.

Once there is a match, the action associated with the entry is carried out. The action could be to send the packet out of an interface, drop it, or send it to the controller. The default behaviour of OpenFlow 1.0 was to send any unmatched packet to the controller. This type of punting was later removed because it exposes the controller to DoS attacks: an attacker could figure out what is not in the table and send packets for those destinations, completely overloading the controller. The original OpenFlow specification could only send a packet to an interface or to the controller, but it soon became clear that packets may also need to be modified, for example changing the TTL, setting a field, or pushing and popping tags. Version 1.0 was broken in this respect; you need to do more than one action on a specific packet, so multiple actions must be associated with each flow entry. This was addressed in subsequent OpenFlow versions.
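
A rough sketch of this single-table model in plain Python, with made-up field names, shows the highest-priority match winning and the table-miss behaviour described above. It is a conceptual model only, not the actual OpenFlow match or action structures.

```python
flow_table = [
    # (priority, match fields, list of actions)
    (200, {"in_port": 1, "eth_dst": "00:11:22:33:44:55"}, [("output", 2)]),
    (100, {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"},   [("output", 3)]),
    (10,  {"eth_type": 0x0806},                           [("controller", None)]),
]

def lookup(packet):
    """Return the actions of the highest-priority matching entry."""
    matches = [
        (prio, actions)
        for prio, match, actions in flow_table
        if all(packet.get(field) == value for field, value in match.items())
    ]
    if not matches:
        # Table miss: the OpenFlow 1.0 default was to punt the packet to the
        # controller, which is exactly the DoS exposure mentioned above.
        return [("controller", None)]
    return max(matches, key=lambda m: m[0])[1]

pkt = {"in_port": 1, "eth_dst": "00:11:22:33:44:55", "eth_type": 0x0800}
print(lookup(pkt))   # -> [('output', 2)]
```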

 

OpenFlow Ports

OpenFlow has the concept of ports. Ports on an OpenFlow switch serve the same input/output purposes as on any other switch, and from the perspective of ingress and egress traffic flows it is no different from any other switch: the ingress port for one flow might be the output port for another flow. OpenFlow defines a number of standard ports: physical, logical and reserved. Physical ports correspond directly to the hardware. Logical ports do not directly correspond to hardware; MPLS LSPs, tunnel and null interfaces are examples. Reserved ports are used for internal packet processing and for OpenFlow hybrid switch deployments, and they are either required or optional. Required ports include ALL, CONTROLLER, TABLE, IN_PORT and ANY, while the optional ports include LOCAL, NORMAL and FLOOD. A short sketch after the list below illustrates how these reserved ports are interpreted.

 

PORT ALL: flood to all ports (except the ingress port).

PORT CONTROLLER: punt packets to the controller.

PORT LOCAL: forward to local switch control plane.

PORT NORMAL: forward to local switch regular data plane.

PORT FLOOD: local switch flooding mechanism.
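
The following minimal Python sketch shows one way to interpret an output action whose target is a reserved port. The constants and return values are illustrative; they are not the numeric port values or the processing pipeline defined in the OpenFlow specification.

```python
ALL, CONTROLLER, LOCAL, NORMAL, FLOOD = "ALL", "CONTROLLER", "LOCAL", "NORMAL", "FLOOD"

def apply_output(port, packet, physical_ports):
    """Interpret an output action whose target may be a reserved port."""
    if port == ALL:
        return [("tx", p) for p in physical_ports if p != packet["in_port"]]
    if port == CONTROLLER:
        return [("packet-in to controller", packet)]
    if port == LOCAL:
        return [("local switch control plane", packet)]
    if port == NORMAL:
        return [("traditional L2/L3 pipeline", packet)]
    if port == FLOOD:
        return [("switch-native flooding", packet)]
    return [("tx", port)]                      # a plain physical port number

print(apply_output(ALL, {"in_port": 1}, [1, 2, 3]))   # -> [('tx', 2), ('tx', 3)]
```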

 

OpenFlow is simply a TCAM download protocol. If you want to create a tunnel between two endpoints, OpenFlow cannot do this for you; it does not create interfaces. There are other protocols for that job, such as OF-CONFIG, which uses NETCONF and a YANG-based data model. The OpenFlow protocol runs between the controller and the switch, while OF-CONFIG runs between a configuration point and the switch. A port can be added, changed, or removed in the switch configuration with OF-CONFIG, not with OpenFlow.

 


Port state changes do not automatically change the flow entries that use those ports. For example, if a port goes down, a flow entry will still point to that interface and subsequent packets are dropped. All port changes must first be communicated to the controller, which then changes the necessary forwarding by downloading new instructions to the switches with OpenFlow. There is a variety of OpenFlow messages used for switch-to-controller and controller-to-switch communication; we will address these in the OpenFlow Series 2 post.
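
Here is a minimal sketch of the controller-side reaction described above, using invented structures: when the switch reports a port down, the controller either repoints or removes the affected entries. A real controller would do this with OpenFlow flow-mod messages rather than by editing a Python list.

```python
# Controller's copy of the entries it has downloaded to one switch (illustrative).
flow_table = [
    {"match": {"ipv4_dst": "10.0.0.5"}, "out_port": 2},
    {"match": {"ipv4_dst": "10.0.1.9"}, "out_port": 3},
]

backup_port = {2: 4}    # alternative paths precomputed by the controller

def on_port_down(dead_port):
    """Controller-side reaction to a 'port down' status message from the switch."""
    for entry in list(flow_table):             # iterate over a copy while editing
        if entry["out_port"] != dead_port:
            continue
        if dead_port in backup_port:
            entry["out_port"] = backup_port[dead_port]   # re-download with a new output port
        else:
            flow_table.remove(entry)                     # or delete the flow entirely

on_port_down(2)
print(flow_table)   # the first entry now points at port 4
```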

 

OpenFlow Classifiers

OpenFlow is modelled like a TCAM, supporting a number of matching mechanisms, and you can use any combination of packet header fields to match a packet. It is possible to match on MAC addresses (OF 1.0) and MAC addresses with wildcards (OF 1.1), VLAN and MPLS tags (OF 1.1), PBB headers (OF 1.3), IPv4 addresses with wildcards, ARP fields and DSCP bits. Not many vendors implement ARP field matching due to hardware restrictions; a lot of current hardware does not let you look deep into ARP fields. IPv6 addresses (OF 1.2), IPv6 extension headers (OF 1.3), the Layer 4 protocol, and TCP and UDP port numbers can also be matched.

 

Software matching is useless, and hardware matching is limited by what the switch hardware supports.

 

Once a packet is matched by a specific flow entry, you can specify a number of actions. Options include output to a particular type of port: the NORMAL port for traditional processing, the LOCAL port for the local control plane, or the CONTROLLER port to send the packet to the controller. You may also set the output queue ID and push or pop VLAN, MPLS or PBB tags. You can even rewrite headers, which means OpenFlow can be used to implement NAT; be careful with this, as you only have a limited number of flow entries on the switches. Some actions might be performed in software, which is too slow.
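
As an illustration of header-rewrite actions, here is a minimal Python sketch of a NAT-style flow entry and a tiny action interpreter. The entry layout, field names and addresses are invented for the example; they are not the OpenFlow set-field action encoding.

```python
# A flow entry whose actions rewrite headers before output -- the (ab)use of
# OpenFlow for NAT mentioned above. Addresses and ports are illustrative.
nat_entry = {
    "match":   {"ipv4_src": "192.168.1.10", "tcp_src": 5555},
    "actions": [
        ("set_field", "ipv4_src", "203.0.113.1"),   # rewrite the source address
        ("set_field", "tcp_src",  40001),           # rewrite the source port
        ("output", 1),
    ],
}

def apply_actions(packet, actions):
    """Apply rewrite and output actions to a packet represented as a dict."""
    for action in actions:
        if action[0] == "set_field":
            _, field, value = action
            packet[field] = value
        elif action[0] == "output":
            packet["egress_port"] = action[1]
    return packet

pkt = {"ipv4_src": "192.168.1.10", "tcp_src": 5555, "ipv4_dst": "198.51.100.7"}
print(apply_actions(pkt, nat_entry["actions"]))
```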

 

OpenFlow Groups

Finally, there is an interesting mechanism where a packet can be processed through what is known as a group. OpenFlow 1.1 and later versions added groups to the forwarding model, enhancing the earlier model. Forwarding mechanisms like ECMP load balancing cannot be implemented with OpenFlow 1.0, so the ONF introduced output groups. A group is a set of buckets, and a bucket is a set of actions. An action could be output to a port, set a VLAN tag, or push/pop an MPLS tag. Groups can contain a number of buckets: bucket 1 could send to port 1, while bucket 2 sends to port 2 but also adds a tag. This adds granularity to OpenFlow forwarding and enables additional forwarding methods. For example, sending to all buckets in a group can be used for selective multicast, and sending to one bucket in a group can be used for load balancing across a LAG or ECMP paths. Additional information on OpenFlow is available at ipspace.net; a separate post covers BGP-based SDN.
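
To make the group/bucket model concrete, here is a minimal Python sketch of “all” and “select” group processing. The bucket contents and the CRC32 hash are illustrative choices, not anything the specification or a particular switch mandates; the point is that “all” replicates across every bucket while “select” pins a flow to a single bucket.

```python
import zlib

# A group is a set of buckets; a bucket is a set of actions (illustrative).
group = {
    "type": "select",
    "buckets": [
        [("output", 1)],
        [("push_vlan", 100), ("output", 2)],
        [("output", 3)],
    ],
}

def process_group(packet, group):
    if group["type"] == "all":
        # Replicate to every bucket -- e.g. selective multicast.
        return list(group["buckets"])
    if group["type"] == "select":
        # Hash the flow so the same flow always lands in the same bucket --
        # e.g. load balancing across a LAG or ECMP paths.
        key = f'{packet["ipv4_src"]}-{packet["ipv4_dst"]}'.encode()
        idx = zlib.crc32(key) % len(group["buckets"])
        return [group["buckets"][idx]]
    return []

pkt = {"ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.2"}
print(process_group(pkt, group))
```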

 

 

 
