Application-aware networking – Plexxi Networks

Mobility and dynamic bandwidth provisioning force us to rethink how we design networks. Traditional data centre designs take a hierarchical approach, with bandwidth aggregation points at several layers. We build networks of a fixed size and adapt the application to the network. Data centres are designed with a fixed oversubscription ratio, for example 2:1 or 4:1. But what if you want better oversubscription ratios for individual sets of applications? Equal-cost multipathing (ECMP) and link aggregation (LAG) improve performance, but effectively you still have a fairly static network. Non-blocking and low-oversubscription designs are costly and leave many sections of the network underused during periods of low utilization.
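To put numbers on the oversubscription idea, here is a minimal arithmetic sketch; the port counts and speeds are illustrative assumptions, not taken from any particular design:

```python
# Oversubscription ratio = server-facing (downlink) bandwidth / uplink bandwidth.
# Port counts and speeds below are illustrative assumptions.

downlink_gbps = 48 * 10  # 48 access ports at 10 Gbps = 480 Gbps
uplink_gbps = 6 * 40     # 6 uplink ports at 40 Gbps  = 240 Gbps

ratio = downlink_gbps / uplink_gbps
print(f"Oversubscription ratio: {ratio:.0f}:1")  # -> 2:1
```

With a fixed physical design this ratio is baked in for every application; the rest of the article is about letting applications ask for something better.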


Plexxi Affinity Model

Plexxi's solution comprises a central controller, Ethernet switches, and an optical backplane. The switches are physically connected in a ring topology, and dense wavelength division multiplexing (DWDM) operates on top of it for topology flexibility. At a basic level, applications are grouped into affinity groups, and these groups are tied together with expressions. Different bandwidth and path characteristics are attached to individual affinity groups, and traffic topology optimizations are downloaded by the controller to the local nodes.

Plexxi aims to reverse the traditional design process and let the application dictate what kind of network it wants. Application-aware networking is the idea that application visibility, combined with network dynamism, creates an environment where the network can react to application mobility and changing bandwidth requirements. Networks should be designed around conversations, yet in practice they are usually designed around reachability. A conversational view measures network resources differently, in terms such as application SLA and end-to-end performance; the focus is not just uptime. We need a mechanism to describe applications in an abstract way and design the network around their conversations. The Plexxi affinity model takes a high-level abstraction of what you want and lets the controller influence the network while taking care of the low-level details. Affinity is a policy language that dictates exactly how you want the network to behave for a specific affinity group. Specific bandwidth and conversation priorities are assigned to affinity groups, giving them priority over normal traffic.
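Plexxi's affinity language itself is proprietary, but a minimal Python sketch of the abstraction it describes (endpoint groups joined by expressions that carry bandwidth and path requirements) might look like this; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AffinityGroup:
    """A set of application endpoints, identified here by MAC address."""
    name: str
    macs: set[str] = field(default_factory=set)

@dataclass
class AffinityExpression:
    """A link between two groups carrying the desired network behaviour."""
    src: AffinityGroup
    dst: AffinityGroup
    min_bandwidth_gbps: float  # bandwidth the conversation should receive
    max_hops: int              # hop-count bound for latency-sensitive traffic

sales = AffinityGroup("Sales", {"00:1b:21:aa:01:01", "00:1b:21:aa:01:02"})
analytics = AffinityGroup("Analytics", {"00:1b:21:bb:02:01"})

# "Sales" conversations to "Analytics" want 10 Gbps within two hops.
expr = AffinityExpression(sales, analytics, min_bandwidth_gbps=10, max_hops=2)
```

Note that nothing here mentions links, VLANs, or switch ports; the point of the abstraction is that the controller owns the translation to low-level state.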


The Plexxi data plane comprises a photonic layer and a packet-forwarding engine. The controller can dynamically orchestrate the network based on predefined application groupings.


Plexxi Components

Plexxi provides Ethernet hardware switches with an optical fabric backplane. The optical backplane allows better scaling, lower end-to-end latency, and dynamic control of paths. DWDM optics and cross-connect technology are built into the switch, enabling a wired ring topology. The physical cabling is ring-based, but on top of that, DWDM technology logically connects switches in a flexible way. Plexxi chose DWDM as the transport for its flexibility: DWDM enables topology changes based on where the light waves terminate. For direct switch-to-switch connectivity, a lambda can passively pass through intermediate switches, effectively permitting any-to-any or partial-mesh connectivity models.
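To make the pass-through idea concrete, here is a small toy model (not Plexxi's implementation) of a ring in which a lambda between two switches transits the intermediate nodes purely optically; the switch names are invented:

```python
def lambda_transit_nodes(ring, src, dst):
    """Switches a clockwise lambda passes through between src and dst,
    endpoints excluded. Transit here is passive and optical only."""
    i, j = ring.index(src), ring.index(dst)
    nodes = []
    k = (i + 1) % len(ring)
    while k != j:
        nodes.append(ring[k])
        k = (k + 1) % len(ring)
    return nodes

ring = ["sw1", "sw2", "sw3", "sw4", "sw5", "sw6"]

# A lambda from sw1 to sw4 passes sw2 and sw3 without being switched there,
# so at the packet layer sw1 and sw4 look directly connected.
print(lambda_transit_nodes(ring, "sw1", "sw4"))  # -> ['sw2', 'sw3']
```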




The light frequency acts as a channel for communication: just as Wi-Fi carries Ethernet over a wireless channel, a lambda carries Ethernet over a light frequency. Plexxi implements a physical ring with a flexible DWDM topology on top.



Ring-Based Design

A ring-based design gives you the ability to build large, flat, unequal-cost multipath designs. As traffic traverses the ring, it does not always travel hop-by-hop; it may skip devices via a direct lambda interconnect. This creates a large number of interconnect paths between switches, forcing Plexxi to revisit the way packets are forwarded. Traditional Layer 2 and Layer 3 forwarding always takes the shortest path between two points, resulting in ECMP-style load sharing. Plexxi switches have a large number of possible paths to choose from, and the typical ECMP forwarding model would not let you extract all the bandwidth from those links.

At the packet level, the controller discovers the topology and runs a set of algorithms. It determines all the possible paths and sets policy for optimal forwarding on the local switches. The controller solves the paths algorithmically; potentially, there could be hundreds of different topologies available through the fabric. As a result, the Plexxi solution allows massive unequal-cost load balancing between endpoints. At the optical level, Plexxi can stitch optical paths from one point to another. If you require high bandwidth between two switches, you can dedicate bandwidth and tailor the topology based on lambdas. Forwarding policies are then pushed down to the fabric to optimize and isolate traffic between those points.
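A rough sketch of that forwarding model, assuming a small four-switch ring with one express lambda: enumerate every loop-free path between two endpoints and spread traffic across all of them, rather than keeping only the equal-cost shortest paths as ECMP would. This is a toy, not Plexxi's actual algorithm:

```python
def all_simple_paths(adj, src, dst, path=None):
    """Depth-first enumeration of every loop-free path from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in adj[src]:
        if nxt not in path:
            paths.extend(all_simple_paths(adj, nxt, dst, path))
    return paths

# A four-switch ring, plus one DWDM "express" lambda from sw1 to sw3.
adj = {
    "sw1": ["sw2", "sw4", "sw3"],  # sw3 reachable directly via the lambda
    "sw2": ["sw1", "sw3"],
    "sw3": ["sw2", "sw4", "sw1"],
    "sw4": ["sw3", "sw1"],
}

paths = all_simple_paths(adj, "sw1", "sw3")

# Unequal-cost load sharing: weight each path by the inverse of its hop
# count instead of discarding everything longer than the shortest path.
weights = [1 / (len(p) - 1) for p in paths]
total = sum(weights)
for p, w in zip(paths, weights):
    print(" -> ".join(p), f"share={w / total:.2f}")
# sw1 -> sw2 -> sw3 share=0.25
# sw1 -> sw4 -> sw3 share=0.25
# sw1 -> sw3 share=0.50
```

ECMP would have used only the one-hop lambda path; the unequal-cost model keeps the two-hop ring paths in play as extra bandwidth.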


Affinity Networking & Grouping

Affinity fabrics classify two types of traffic: affinity (priority) and normal. Affinity traffic is explicitly controlled by TCAM entries specified for network path control. It follows hand-crafted traffic paths created by the controller; the TCAM rules are pre-engineered and traffic forwarding is based on that manual engineering. For normal traffic, the controller picks a large set of non-ECMP paths and makes them available to the endpoints.
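A toy illustration of the two classes (the rule format is hypothetical, not Plexxi's): affinity traffic hits pre-installed, high-priority TCAM-style entries carrying an explicit path, and anything that misses falls back to the normal path set:

```python
# Hypothetical TCAM-style lookup. Affinity entries are exact, pre-engineered
# matches; a miss falls through to the controller's "normal" path set.
affinity_tcam = {
    # (src MAC, dst MAC) -> explicit, controller-crafted path
    ("00:1b:21:aa:01:01", "00:1b:21:bb:02:01"): ["sw1", "sw3"],  # via lambda
}

normal_paths = [["sw1", "sw2", "sw3"], ["sw1", "sw4", "sw3"]]

def forward(src_mac, dst_mac):
    path = affinity_tcam.get((src_mac, dst_mac))
    if path is not None:
        return "affinity", path  # priority, manually engineered path
    # Normal traffic: any member of the non-ECMP path set; hash() stands in
    # for whatever per-flow hashing the hardware would really perform.
    return "normal", normal_paths[hash((src_mac, dst_mac)) % len(normal_paths)]

print(forward("00:1b:21:aa:01:01", "00:1b:21:bb:02:01"))  # affinity path
print(forward("00:1b:21:cc:03:01", "00:1b:21:dd:04:01"))  # normal path
```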

Applications must be able to tell the network what they want from it; this is the essence of affinity networking. At a high level, affinity groups are no more than applications grouped by MAC address. Once affinity groups are defined, you can create a line, or expression, between them. Expressions dictate network sensitivity: for example, affinity group "Sales" may require a certain bandwidth, or a hop-count bound for decreased latency. All this input is fed to the controller. The controller builds the topology based on the MAC address endpoints and affinity groups. It runs a set of algorithms and determines the best possible paths for these applications. Once decided, it downloads the topology information to the local switches.
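Continuing the earlier sketches, the controller's selection step can be caricatured as filtering candidate paths against an expression's constraints before the survivors are pushed to the switches; the figures are invented:

```python
def feasible_paths(paths, path_bandwidth_gbps, max_hops, min_bandwidth_gbps):
    """Keep candidate paths that satisfy an expression's hop-count and
    bandwidth requirements (a toy model of the controller's selection)."""
    return [
        p for p in paths
        if len(p) - 1 <= max_hops
        and path_bandwidth_gbps[tuple(p)] >= min_bandwidth_gbps
    ]

candidates = [["sw1", "sw3"], ["sw1", "sw2", "sw3"], ["sw1", "sw4", "sw3"]]
bandwidth = {("sw1", "sw3"): 40,
             ("sw1", "sw2", "sw3"): 10,
             ("sw1", "sw4", "sw3"): 10}

# An expression asking for 20 Gbps within two hops: only the direct
# lambda path qualifies.
print(feasible_paths(candidates, bandwidth, max_hops=2, min_bandwidth_gbps=20))
# -> [['sw1', 'sw3']]
```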

The controller is not in the data path, and it holds no network state. If the controller fails, the switches can operate by themselves. By default, when Plexxi switches are connected together, they operate like traditional switches, in the sense that they learn MAC addresses, flood, and pass packets. The controller optimizes the topology based on the affinities and provides each local switch with an optimized view of the topology.

The controller-to-switch communication needs something more than OpenFlow, which is essentially a TCAM-download protocol: they communicate more than flows. They communicate the entire topology to the local switches, which is why they do not use OpenFlow; the interface they use is proprietary. For the northbound interface, there is a REST API with a GUI. The API allows external companies to tie their provisioning systems to the controller and communicate with the switches. Additional information on Plexxi is available at IPspace.net.
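For a sense of what driving the northbound interface might look like, here is a sketch of a REST call; the endpoint URL, port, and payload fields are invented for illustration, since the real API schema is only available from the vendor:

```python
# Hypothetical northbound REST call to create an affinity between two groups.
# The endpoint path and JSON fields are assumptions, not the documented API.
import json
import urllib.request

payload = {
    "name": "Sales-to-Analytics",
    "groups": ["Sales", "Analytics"],
    "minBandwidthGbps": 10,
    "maxHops": 2,
}

req = urllib.request.Request(
    "https://controller.example.com/api/affinities",  # hypothetical URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```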

