OpenFlow Protocol

The world of networking has witnessed remarkable advancements, and one of the key contributors to this progress is the OpenFlow protocol. In this blog post, we will dive into the depths of OpenFlow, exploring its principles, benefits, and impact on the networking landscape.

OpenFlow, at its core, is a communication protocol that enables the separation of network control and forwarding plane. By doing so, it empowers network administrators with a centralized control plane and facilitates programmable network management. This flexibility opens up a realm of possibilities for network optimization, innovation, and enhanced performance.

To grasp the essence of OpenFlow, it's essential to familiarize ourselves with its key principles. Firstly, OpenFlow operates on a flow-based model, classifying traffic as flows rather than forwarding solely on traditional IP and MAC destination addresses. Secondly, it enables the dynamic modification of network behavior through a centralized controller, providing unprecedented control and adaptability. Lastly, OpenFlow fosters interoperability by ensuring compatibility across networking devices from different vendors.

The adoption of OpenFlow brings forth a plethora of benefits. Firstly, it allows network administrators to exert fine-grained control over traffic flows, enabling them to optimize network performance and reduce latency. Secondly, OpenFlow facilitates network virtualization, empowering organizations to create isolated virtual networks without the need for physical infrastructure. Additionally, OpenFlow promotes innovation and fosters rapid experimentation by providing an open platform for application developers to create and deploy new network services.

OpenFlow and SDN are often mentioned in the same breath, as they go hand in hand. SDN is a paradigm that leverages OpenFlow to enable programmability, flexibility, and automation in network management. By decoupling the control and data planes, SDN architectures simplify network management, facilitate network orchestration, and enable the creation of dynamic, agile networks.

Conclusion: The OpenFlow protocol has revolutionized the networking landscape, empowering organizations with unprecedented control, flexibility, and performance optimization. As we sail into the future, OpenFlow will continue to be a key enabler of innovation, driving the evolution of networking technologies and shaping the digital world we inhabit.

Highlights: OpenFlow Protocol

Software-defined networking is on the rise.

Few people have shaped the changes in the network industry as much as Martin Casado, a general partner at Andreessen Horowitz. Previously, Casado worked at VMware as a fellow, senior vice president, and general manager. In addition to his direct contributions (such as OpenFlow and Nicira), he has opened large network incumbents’ eyes to the need to change network operations, agility, and manageability.

OpenFlow: A New Era

The Software-Defined Networking movement began with OpenFlow, whether for good or bad. During his Ph.D. at Stanford University, Casado worked on OpenFlow under Nick McKeown. The OpenFlow protocol makes it possible to decouple a network device’s control plane from its data plane. A network device’s control plane is its brain, while its data plane is the hardware that forwards packets, the application-specific integrated circuits (ASICs).

Alternatively, OpenFlow can run in a hybrid mode: it can be deployed on a specific port or virtual local area network (VLAN) alongside the regular packet-forwarding pipeline. When a packet does not match any entry in the OpenFlow table, it is handed to the regular forwarding pipeline; in this sense, OpenFlow forwarding in hybrid mode behaves much like policy-based routing (PBR).

SDN and OpenFlow

OpenFlow provides granularity in determining forwarding paths because its tables can match on many packet header fields, not just destination addresses. PBR offers a similar kind of granularity: it lets network administrators choose the next routing hop based on non-traditional attributes, such as the packet's source address. However, it took network vendors quite some time to offer comparable performance for PBR-forwarded traffic, and the end result was still very vendor-specific.

OpenFlow allowed us to make traffic-forwarding decisions at the same granularity, but without the vendor bias. Thus, network infrastructure capabilities could be enhanced without waiting for the next hardware version.

The role of OpenFlow

So, what is OpenFlow? OpenFlow is an open-source communications protocol that enables remote control of network devices from a centralized controller. It is most commonly used in software-defined networking (SDN) architectures, allowing networks to be configured and managed from a single point. OpenFlow enables network administrators to programmatically control traffic flow across their networks, allowing them to respond quickly to changing traffic patterns and optimize network performance.

The creation of Virtual Networks

OpenFlow also helps create virtual networks on top of existing physical networks, allowing for more efficient and flexible network management. As a result, OpenFlow is an essential tool for building and managing modern, agile networks that can quickly and easily adapt to changing network conditions. The following post discusses OpenFlow, the operation of the OpenFlow protocol, and the OpenFlow hardware components you can use to build an OpenFlow network.

What is OpenFlow

You may find the following helpful post for pre-information:

  1. SDN Adoption Report
  2. Network Traffic Engineering
  3. BGP SDN
  4. NFV Use Cases
  5. Virtual Switch 
  6. HP SDN Controller




Key OpenFlow Discussion Points:


  • Introduction to OpenFlow and what is involved.

  • Highlighting the separation of planes.

  • Discussing the data plane, control plane, and management plane.

  • Technical details of the OpenFlow protocol and its operation.

  • The role of the different OpenFlow hardware components.

Back to basics with OpenFlow

OpenFlow history

OpenFlow started in the labs as an academic experiment. Researchers wanted to test concepts about new protocols and forwarding mechanisms on real hardware but could not do so with the architectures of the day. The difficulty of changing forwarding entries in physical switches limited the project to emulations. Emulations are a type of simulation and can’t mimic an entire production network. The requirement to test on actual hardware (not emulations) led to separating the device planes (control, data, and management) and to the introduction of the OpenFlow protocol and new OpenFlow hardware components.

Highlighting SDN

Standard network technologies, consisting of switches, routers, and firewalls, have existed since the inception of networking. The data unit at each layer is called something different, yet forwarding has stayed essentially the same since inception: frames and packets are forwarded and routed using a similar approach, resulting in limited efficiency and a high maintenance cost.

Consequently, there was a need to evolve the techniques and improve the operations of networks, which led to the inception of SDN. SDN, often considered a revolutionary new idea in networking, pledges to dramatically simplify network control and management and enable innovation through network programmability.

Diagram: Software-Defined Networking (SDN). Source: Opennetworking.
  • A key point: The OpenFlow Protocol versions

OpenFlow has gone through several versions, namely versions 1.0 to 1.4. OpenFlow 1.0 was the initial version released under the Open Networking Foundation (ONF). Most vendors initially implemented version 1.0, which has many restrictions and scalability concerns. Version 1.0 was limited and showed that the ONF had rushed to market without a complete product.

Not many vendors implemented versions 1.1 or 1.2; most waited for version 1.3, which added per-flow meters and Provider Backbone Bridging (PBB) support. Everyone thinks OpenFlow is “open,” but it is controlled by a closed group of around 150 member organizations that form the ONF. The ONF specifies all the OpenFlow standards, and the work is hidden until it is published as a standard.

Key Features and Benefits:

1. Centralized Control: OpenFlow Protocol provides a centralized control mechanism by decoupling the control plane from the devices. This allows network administrators to manage and configure network resources dynamically, making adapting to changing network demands easier.

2. Programmability: OpenFlow Protocol enables network administrators to program the behavior of network devices according to specific requirements. This programmability allows for greater flexibility and customization, making implementing new network policies and services easier.

3. Network Virtualization: OpenFlow Protocol supports virtualization, allowing multiple virtual networks to coexist on a single physical infrastructure. This capability enhances resource utilization and enables the creation of isolated network environments, enhancing security and scalability.

4. Traffic Engineering: With OpenFlow Protocol, network administrators have fine-grained control over traffic flows. This enables efficient load balancing, prioritization, and congestion management, ensuring optimal performance and resource utilization.

Use Cases:

1. Software-Defined Networking (SDN): OpenFlow Protocol is a key component of SDN, a network architecture separating the control and data planes. SDN leverages OpenFlow Protocol to provide a more flexible, programmable, and scalable network infrastructure.

2. Data Center Networking: OpenFlow Protocol offers significant advantages in data center environments, where dynamic workload placement and resource allocation are crucial. Using OpenFlow-enabled switches and controllers, data center operators can achieve greater agility, scalability, and efficiency.

3. Campus and Enterprise Networks: OpenFlow Protocol can simplify network management in large campus and enterprise environments. It allows network administrators to centrally manage and control network policies, improving network security, traffic engineering, and troubleshooting capabilities.

 

OpenFlow Protocol: Separation of Planes

An OpenFlow network has several hardware components that enable three independent planes: the data plane, the control plane, and the management plane.

The data plane

The data plane switches Protocol Data Units (PDU) from incoming ports to destination ports. It uses a forwarding table. For example, a Layer 2 forwarding table could list MAC addresses and outgoing ports. A Layer 3 forwarding table contains IP prefixes with next hops and outgoing ports. The data plane is not responsible for creating the controls necessary to forward. Instead, someone else has the job of “filling in” the data plane, which is the control plane’s function.
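To make the two tables concrete, here is a minimal, purely illustrative Python sketch of a Layer 2 MAC table and a Layer 3 longest-prefix-match lookup. The addresses, port names, and table contents are invented, and real devices hold these structures in hardware (ASIC/TCAM) rather than in dictionaries.

```python
import ipaddress

# Layer 2 forwarding table: MAC address -> outgoing port (entries are made up).
l2_table = {
    "aa:bb:cc:00:00:01": "eth1",
    "aa:bb:cc:00:00:02": "eth2",
}

# Layer 3 forwarding table: prefix -> (next hop, outgoing port), including a default route.
l3_table = [
    ("10.1.0.0/16", ("10.0.0.2", "eth3")),
    ("0.0.0.0/0",   ("10.0.0.1", "eth4")),
]

def l3_lookup(dst_ip):
    """Longest-prefix match over the Layer 3 forwarding table."""
    candidates = [(ipaddress.ip_network(prefix), nh)
                  for prefix, nh in l3_table
                  if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix)]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(l2_table.get("aa:bb:cc:00:00:01"))  # -> eth1
print(l3_lookup("10.1.2.3"))              # -> ('10.0.0.2', 'eth3')
```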

The control plane

The control plane is responsible for giving the data plane the information it needs to forward traffic. It is considered the “intelligence” of the network because it makes the decisions about PDU forwarding. Control plane protocols are not limited to routing protocols; there is more to the control plane than BGP and OSPF. Almost every protocol that runs between adjacent devices is a control plane protocol. Examples include line-card protocols such as BFD, STP, and LACP.

These protocols do not directly interface with the forwarding table; for example, BFD detects failures but doesn’t remove anything from the forwarding table. Instead, it informs other, higher-level control plane protocols of the problem and leaves them to change the forwarding behavior. Protocols like OSPF and IS-IS, on the other hand, directly influence the forwarding table.

The management plane

Finally, the management plane provides operational access and monitoring. It permits you, or “something else,” to configure the device.

Diagram: Switch functions. Source: Bradhedlund.

 

The Idea of OpenFlow Protocol

OpenFlow is not a revolutionary new idea; similar ideas have been around for the last 20 years. RFC 1925 by R. Callon presents what is known as “The Twelve Networking Truths.” Section 2.11 states, “Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.” Solutions to old problems are easily dressed up in new clothes, but the problems stay the same.

  • Example: Use case of vendors and OpenFlow hardware.

The scalability of a central control plane will always pose challenges. We saw this in the SDH/SONET and Frame Relay days. NEC and Big Switch Networks tried to centralize as much as possible, and both eventually ended up moving functions back to the local devices or limiting the dynamic nature and number of supported protocols.

For example, NEC permits only static port channels, and if you opt for Layer 3 IP forwarding, the only control protocol they support is ARP. Juniper implemented a scalable model with MP-BGP, retaining low-level control plane protocols on local switches. Putting all the routine easy jobs on the local switch makes the architecture more scalable.

Cisco DFA or ACI uses the same architecture. It is also hard to achieve fast feedback loops and fast convergence with a centralized control plane, and OpenFlow and centralized control-plane-centric SDN architectures do not solve this. You may, however, see OpenFlow combined with Open vSwitch. Consider the Open vSwitch, also known as OVS, with the OVS bridge acting as the virtual switch in the data path.

  • A key point: Open vSwitch

The Open vSwitch is an open-source, multi-layered switch that allows hypervisors to virtualize the networking layer. It is designed to enable network automation through programmatic extension. The OVS bridge supports standard management interfaces and protocols such as NetFlow, LACP, and 802.1ag. This caters to many virtual machines running on one or more physical nodes. The virtual machines connect to virtual ports on virtual bridges (inside the virtualized networking layer).
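As a rough illustration of how an OVS bridge is typically driven, the following Python sketch wraps the standard ovs-vsctl command-line tool. The bridge name, port name, and controller address are hypothetical, and the commands assume Open vSwitch is installed and that the script runs with sufficient privileges.

```python
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command and raise if it fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("add-br", "br0")                                # create a virtual bridge
ovs("add-port", "br0", "vnet0")                     # attach a VM's virtual port
ovs("set-controller", "br0", "tcp:192.0.2.1:6633")  # point the bridge at an OpenFlow controller
```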

Diagram: OVS Bridge. Source: Open vSwitch.
  • A key point: OpenFlow and Open Networking

Today, with many prominent vendors shipping what are often described as monolithic, closed, mainframe-like networking devices, we need an open interface to the forwarding plane. A protocol such as OpenFlow provides exactly that: it allows direct access to, and manipulation of, the forwarding plane of the network device. We can now move network control out of proprietary network switches and into open-source, locally managed control software, driving a new world of open networking.

OpenFlow protocol and decoupling

The idea of OpenFlow is straightforward: decouple the control and management planes from the switching hardware. Split the three-plane model so that the dumb hardware sits in one box and the brains sit in another. The intelligence then needs a mechanism to push the forwarding entries, which could be MAC addresses, ACLs, IP prefixes, or NAT rules, down to the local switches. The protocol used to push these entries is OpenFlow.

OpenFlow is often viewed as a protocol with several instruction sets and an architecture, but it is nothing more than a forwarding-table (TCAM) download protocol. It cannot change the switch hardware functionality. If something is supported in OpenFlow but not in the hardware, OpenFlow cannot do anything for you.

Just because OpenFlow versions allow a specific matching doesn’t mean that the hardware can match on those fields. For example, Juniper uses OpenFlow 1.3, permitting IPv6 handling, but their hardware does not permit matching on IPv6 addresses. 

Flow table and OpenFlow forwarding model

The flow table in a switch is not the same as the Forwarding Information Base (FIB). The FIB is a simple set of instructions supporting destination-based switching. The OpenFlow table is slightly more complicated and represents a sequential set of instructions matching multiple fields. It supports flow-based switching.

  • OpenFlow 1.0 – Single Table Model: OpenFlow hardware with low-cost switches 

The initial OpenFlow forwarding model was simple. They based the model of OpenFlow on low-cost switches that use TCAM. Most low-cost switches have only one table that serves all of the forwarding needs of that switch. As a result, the model for the OpenFlow forwarding model was a straightforward table. First, a packet is received on an interface; we extract metadata (like incoming interface) and other header fields from the packet.

Then, the fields in the packet are matched against the OpenFlow table. Every entry in the table has a priority field, and the matching entry with the highest priority wins. Every line in the table should have a different priority; the highest-priority match determines the forwarding behavior.
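The following toy Python model illustrates that matching process: each entry carries a priority and a set of field matches, and the highest-priority entry whose fields all agree with the packet wins. The field names, addresses, and actions are invented for illustration and are not OpenFlow wire-format structures.

```python
# A toy flow table: not real OpenFlow state, just the priority-matching idea.
flow_table = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "action": "output:2"},
    {"priority": 100, "match": {"ip_dst": "10.0.0.5"},                "action": "output:3"},
    {"priority": 0,   "match": {},                                     "action": "controller"},
]

def lookup(packet):
    """Return the action of the highest-priority entry whose match fields
    all agree with the packet's header fields."""
    hits = [entry for entry in flow_table
            if all(packet.get(field) == value
                   for field, value in entry["match"].items())]
    return max(hits, key=lambda entry: entry["priority"])["action"]

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # -> output:2
print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 22}))  # -> output:3
print(lookup({"ip_dst": "192.0.2.1"}))                # -> controller (table miss)
```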

Diagram: OpenFlow protocol.
  • If you have simple MAC address forwarding, i.e., you are building a simple bridge, all entries are already distinct. There is no overlap, so you don’t need to use the priority field.

Once there is a match, the packet’s action is carried out. The action could be to send the packet to an interface, drop it, or send it to the controller. The default behavior of OpenFlow 1.0 was to send any unmatched packets to the controller. This default punting was later removed because it exposed the controller to DoS attacks: an attacker could figure out what was not in the table and send packets to those destinations, completely overloading the controller.

The original OpenFlow specification could only send a packet out an interface or to the controller. It soon became clear that a switch may also need to modify the packet, such as changing the TTL, setting a field, or pushing and popping tags, and that more than one action must be applied to a specific packet. In other words, multiple actions must be associated with each flow entry; version 1.0 was broken in this respect, and the problem was addressed in subsequent OpenFlow versions.
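As a concrete sketch of how a controller installs such entries, the snippet below uses the open-source Ryu controller framework (one of several OpenFlow controllers, not discussed in the post itself) with OpenFlow 1.3. It installs a priority-0 table-miss entry that punts unmatched packets to the controller, and a higher-priority entry that applies multiple actions (push a VLAN tag, set the VLAN ID, output to a port) to one flow. The addresses, VLAN ID, ports, and priorities are illustrative.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Table-miss entry: empty match, lowest priority, punt to the controller.
        self.add_flow(dp, 0, parser.OFPMatch(),
                      [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)])

        # One flow entry with multiple actions: tag the packet, then forward it.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="10.0.0.5")
        actions = [parser.OFPActionPushVlan(0x8100),
                   parser.OFPActionSetField(vlan_vid=(0x1000 | 100)),  # OFPVID_PRESENT | VID 100
                   parser.OFPActionOutput(2)]
        self.add_flow(dp, 100, match, actions)

    def add_flow(self, dp, priority, match, actions):
        """Wrap the actions in an apply-actions instruction and send a flow-mod."""
        ofp, parser = dp.ofproto, dp.ofproto_parser
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                      match=match, instructions=inst))
```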

OpenFlow ports

OpenFlow has the concept of ports. Ports on an OpenFlow switch serve the same input/output purposes as any switch. From the perspective of ingress and egress traffic flows, it is no different from any other switch. For example, the ingress port for one flow might be the output port for another flow.

OpenFlow defines several standard ports, such as physical, logical, and reserved. Physical ports correspond directly to the hardware. Logical ports do not directly correspond to hardware, such as MPLS LSP, tunnel, and null interfaces.

Reserved ports are used for internal packet processing and OpenFlow hybrid switch deployments. Reserved ports are either required or optional. Required ports include ALL, CONTROLLER, TABLE, IN_PORT, and ANY. In comparison, the Optional ports include LOCAL, NORMAL, and FLOOD.

OpenFlow Ports

  • PORT ALL: flooding to all ports.

  • PORT CONTROLLER: punt packets to the controller.

  • PORT LOCAL: forward to the local switch control plane.

  • PORT NORMAL: forward to local switch regular data plane.

  • PORT FLOOD: local switch flooding mechanism.

  • TCAM download protocol

OpenFlow is simply a TCAM download protocol. If you want to create a tunnel between two endpoints, OpenFlow cannot do that for you; it does not create interfaces. Other protocols, such as OF-CONFIG, handle this job. OF-CONFIG uses NETCONF and a YANG-based data model. The OpenFlow protocol runs between the controller and the switch, while OF-CONFIG runs between a configuration point and the switch. A port can be added, changed, or removed in the switch configuration with OF-CONFIG, not with OpenFlow.

Port state changes do not automatically modify flow entries. For example, if a port goes down, a flow entry will still point to that interface, and subsequent packets are dropped. Port changes must first be communicated to the controller, which then adjusts the necessary forwarding by downloading new instructions to the switches with OpenFlow.

A variety of OpenFlow messages are used for switch-to-controller and controller-to-switch communication. We will address these in the OpenFlow Series 2 post.

OpenFlow Classifiers

OpenFlow is modeled like a TCAM and supports several matching mechanisms. You can use almost any combination of packet header fields to match a packet: MAC addresses (OF 1.0), MAC addresses with wildcards (OF 1.1), VLAN and MPLS tags (OF 1.1), PBB headers (OF 1.3), IPv4 addresses with wildcards, ARP fields, and DSCP bits. Not many vendors implement ARP field matching due to hardware restrictions; much current hardware does not let you look deep into ARP fields. IPv6 addresses (OF 1.2), IPv6 extension headers (OF 1.3), the Layer 4 protocol, and TCP and UDP port numbers are also supported.
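As a rough sketch of what such a multi-field classifier looks like in practice, here is an OpenFlow 1.3 match built with the Ryu parser used in the earlier controller sketch, where parser is the datapath's ofproto_parser. The field values are illustrative, and whether the hardware can actually match on all of these fields is switch-dependent.

```python
# Match IPv4 TCP traffic to port 443 from a wildcarded source prefix,
# arriving on port 1 and marked with DSCP 46. Values are examples only.
match = parser.OFPMatch(
    in_port=1,                                 # packet metadata: ingress port
    eth_type=0x0800,                           # IPv4
    ipv4_src=("10.0.1.0", "255.255.255.0"),    # source address with wildcard mask
    ip_proto=6,                                # TCP
    tcp_dst=443,                               # destination port
    ip_dscp=46)                                # DSCP bits
```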

Once a specific flow entry matches a packet, you can specify a number of actions. Options include output to a port type: the NORMAL port for traditional processing, the LOCAL port for the local control plane, or the CONTROLLER port for sending the packet to the controller.

In addition, you may set the output queue ID, or push/pop VLAN, MPLS, or PBB tags. You can even rewrite headers, which means OpenFlow can be used to implement NAT. Be careful with this, however, as you only have a limited number of flow entries on the switches. Finally, some actions may be implemented in software, which can be too slow.
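Continuing the same Ryu/OpenFlow 1.3 sketch, an action list can combine a header rewrite (a NAT-style destination change), an output-queue selection, and the output port. As before, parser is the datapath's ofproto_parser, and the values are invented.

```python
actions = [
    parser.OFPActionSetField(ipv4_dst="192.0.2.10"),  # header rewrite (NAT-style)
    parser.OFPActionSetQueue(1),                      # place the packet in output queue 1
    parser.OFPActionOutput(4),                        # send it out port 4
]
```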

OpenFlow Groups

Finally, there is an interesting mechanism whereby a packet can be processed through what is known as a GROUP. From OpenFlow 1.1 onward, the forwarding model includes the functionality of groups, which enhance the earlier model. Enhanced forwarding mechanisms like ECMP load balancing cannot be achieved in OpenFlow 1.0; for this reason, the ONF introduced output groups.

A Group is a set of buckets, and a Bucket is a set of actions. For example, an action could be output to port, set VLAN tag, or push/pop MPLS tag. Groups can contain several buckets; bucket 1 could send to port 1, and bucket 2 to port 2 but also add a tag.

This adds granularity to OpenFlow forwarding and enables additional forwarding methods. For example, sending to all buckets in a group could be used for selective multicast, or sending to one bucket in a group could be used for load balancing across LAG or ECMP.
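The sketch below, using the same Ryu/OpenFlow 1.3 conventions as the earlier examples (dp, ofp, and parser refer to the datapath and its protocol and parser modules), creates a SELECT group with two buckets so the switch load-balances matching flows across two ports, as one would for a LAG or ECMP; an ALL group would instead copy the packet to every bucket, as in the selective-multicast case. The group ID, weights, and ports are illustrative.

```python
# Two buckets: bucket 1 sends out port 1; bucket 2 tags the packet and sends out port 2.
buckets = [
    parser.OFPBucket(weight=50, watch_port=ofp.OFPP_ANY, watch_group=ofp.OFPG_ANY,
                     actions=[parser.OFPActionOutput(1)]),
    parser.OFPBucket(weight=50, watch_port=ofp.OFPP_ANY, watch_group=ofp.OFPG_ANY,
                     actions=[parser.OFPActionPushVlan(0x8100),
                              parser.OFPActionSetField(vlan_vid=(0x1000 | 200)),
                              parser.OFPActionOutput(2)]),
]

# Install the SELECT group; the switch picks one bucket per flow (load balancing).
dp.send_msg(parser.OFPGroupMod(dp, ofp.OFPGC_ADD, ofp.OFPGT_SELECT,
                               group_id=1, buckets=buckets))

# Flow entries then point at the group instead of a single output port.
actions = [parser.OFPActionGroup(group_id=1)]
```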

OpenFlow Protocol represents a significant advancement in network management, offering a more flexible, programmable, and efficient approach. Decoupling the control and forwarding plane enables centralized control, programmability, and network virtualization. With its wide range of benefits and use cases, OpenFlow Protocol is poised to revolutionize network management and pave the way for future innovations in networking technologies. As organizations face ever-increasing network challenges, embracing OpenFlow Protocol can provide a competitive edge and drive efficiency in network operations.

 

Summary: OpenFlow Protocol

The world of networking has witnessed a remarkable revolution with the introduction of the OpenFlow protocol. In this summary, we will delve into its intricacies and explore how it has transformed the way networks are managed and controlled.

Understanding OpenFlow

At its core, OpenFlow is a communication protocol that enables the separation of control and data planes in network devices. Unlike traditional networking approaches, where switches and routers make forwarding decisions, OpenFlow centralizes the control plane, allowing for dynamic and programmable network management.

The Key Components of OpenFlow

To grasp the power of OpenFlow, it is essential to understand its key components. These include the OpenFlow switch, which processes packets based on instructions from a centralized controller, and the controller, which has a comprehensive view of the network topology and makes intelligent decisions on forwarding packets.

Benefits and Advantages

OpenFlow brings forth a myriad of benefits to networking environments. Firstly, it promotes flexibility and programmability, allowing network administrators to tailor the behavior of their networks to meet specific requirements. Furthermore, it simplifies network management, as policies and configurations can be implemented centrally. Additionally, OpenFlow enables network virtualization, enhancing scalability and resource utilization.

Real-World Applications

The adoption of OpenFlow has paved the way for innovative networking solutions. Software-defined networking (SDN) is one application where OpenFlow is critical. SDN allows for dynamic network control and management, making implementing complex policies and responding to changing network conditions easier.

In conclusion, the OpenFlow protocol has brought a paradigm shift in the networking world. Its ability to separate control and data planes, along with its flexibility and programmability, has transformed how networks are designed, managed, and controlled. As technology continues to evolve, OpenFlow and its applications, such as SDN, will undoubtedly play a pivotal role in shaping the future of networking.

OpenFlow and SDN Adoption

In the ever-evolving world of networking, new technologies and approaches continue to reshape the landscape. One such technology that has gained significant attention is OpenFlow, which forms the backbone of Software-Defined Networking (SDN). In this blog post, we will delve into the concept of OpenFlow and explore its growing adoption in the networking industry.

OpenFlow can be best described as a protocol that enables the separation of the control plane and the data plane in a network. Traditionally, network devices handled both the control and data forwarding aspects, leading to limited flexibility and scalability. With OpenFlow, the control plane is centralized in a controller, allowing for dynamic network management and programmability.

Benefits of OpenFlow: The adoption of OpenFlow brings forth a multitude of benefits. Firstly, it offers network administrators unprecedented control and visibility into the network, empowering them to efficiently manage traffic flows and implement changes on the fly. Additionally, OpenFlow promotes network programmability, enabling the development of innovative applications and services that can harness the full potential of the network infrastructure.

OpenFlow in Action: Numerous organizations and industries have recognized the potential of OpenFlow and have embraced it in their networks. For instance, data centers have leveraged OpenFlow to create virtual networks with enhanced security and improved resource allocation. Internet Service Providers (ISPs) have also adopted OpenFlow to optimize traffic routing and enhance network performance.

Challenges and Considerations: While OpenFlow holds great promise, it is not without its challenges. One of the primary concerns is ensuring interoperability across different vendors and devices, as OpenFlow relies on a standard set of protocols and features. Additionally, network security and policy enforcement must be carefully addressed to prevent unauthorized access and protect sensitive data.

In conclusion, OpenFlow and SDN adoption are revolutionizing the networking industry, offering unprecedented control, programmability, and scalability. As organizations continue to recognize the benefits of OpenFlow, we can expect to see further advancements and innovations in the realm of network management and infrastructure.

Highlights: OpenFlow and SDN Adoption

Understanding OpenFlow

At its core, OpenFlow is a communications protocol that enables the separation of the control plane and the data plane in networking devices. It allows for the programmability and centralized control of network switches and routers. With OpenFlow, network administrators can dynamically manage traffic, define routing paths, and apply policies, all through a centralized controller.

Unveiling Software-Defined Networking (SDN)

SDN takes the concept of OpenFlow further by providing a framework for network management and configuration. It abstracts the underlying network infrastructure and allows for programmability and automation through a software-based controller. SDN architectures offer flexibility, scalability, and agility, making adapting to evolving network demands easier.

Benefits of OpenFlow and SDN

The combination of OpenFlow and SDN brings numerous benefits to network operators, administrators, and end-users. Firstly, it simplifies network management by providing a centralized view and control of the entire network. This simplification leads to enhanced network visibility, easier troubleshooting, and faster deployment of new services. Additionally, OpenFlow and SDN enable network virtualization, allowing for the creation of logical networks decoupled from the physical infrastructure.

Use Cases and Real-World Applications

OpenFlow and SDN are used extensively across various domains and industries. From data centers and cloud computing environments to enterprise networks and even telecommunications, the versatility of OpenFlow and SDN is undeniable. They enable dynamic traffic engineering, efficient load balancing, and improved network security. Furthermore, SDN has paved the way for network function virtualization (NFV), allowing the deployment of network services as software applications rather than dedicated hardware.

The Application Layer

As its name suggests, this layer includes network applications. Examples of these applications include communication applications, such as VoIP prioritization, and security applications, such as firewalls. Also included in this layer are utilities and network services.

Switches and routers traditionally handled these applications. SDN simplifies their management by offloading them. In addition, companies can save a lot of money by stripping down the hardware.

The Control Layer

Switches and routers are now controlled by a centralized control plane, which allows the network to be programmed. As an open, standards-based protocol, OpenFlow has become the industry standard, despite Cisco maintaining its own OpenFlow variant.

The Infrastructure Layer

This layer includes the switches and routers that move the data. Traffic is moved according to flow tables. SDN leaves this layer essentially unchanged, since routers and switches still move packets; the main difference is the centralization of traffic-flow rules. The intelligence of vendor devices is not stripped away, however: the API provides centralized SDN control while allowing large network providers to protect their intellectual property. Meanwhile, the cost of generic packet-forwarding devices is much lower than that of traditional networking equipment.

SDN and OpenFlow

A Programmable Network

Developers have made it possible for network administrators to create “slices” that allow generic networking hardware to support multiple configurations by adding a virtualization layer between the control system and the hardware layer. It resembles how a hypervisor can run a virtual machine (VM) on a single server. Using SDN, an administrator can create different rules and applications for various groups of users.

Because most applications are not installed on the devices themselves, SDN enables the network to appear as one big switch/router. There could be three devices on the network or 30,000; to the centralized applications, they all look the same. (Some applications are simply nodes on the network.) Therefore, upgrades, changes, additions, and configurations are much more accessible.

The role of OpenFlow

Firstly, the basis of the SDN adoption report is the OpenFlow protocol, an existing technology derived from academic labs. Its origins can be traced back to 2006 when Martin Casado, part of the “Clean Slate” program, developed Ethane. They were trying to figure out ways to manage the network states via a centrally managed global policy.

Networks are dynamic and non-symmetrical, which poses challenges in keeping track of their state in order to enforce programmability. The Clean Slate program has since ended, but it produced several follow-up efforts, including OpenFlow and SDN.

SDN and OpenFlow are not revolutionary new ideas. Similar ideas have been around before, and previous projects tried to solve the same problems OpenFlow is trying to solve today. Aside from the centralized-viewpoint use case, whatever you can do with OpenFlow today is possible with Policy-Based Routing (PBR) and ACLs. The problem is that these tools are clumsy and do not scale well.

What is OpenFlow

You may find the following useful for pre-information:

  1. Virtual Overlay Network
  2. SDN Router
  3. What is OpenFlow
  4. BGP SDN
  5. SDN BGP
  6. Hyperscale Networking
  7. SDN Data Center




Key SDN Adoption Discussion Points:


  • Introduction to SDN OpenFlow and what is involved.

  • Highlighting the SDN architecture.

  • Critical points on the virtual switching fabric.

  • Technical details on the use of OSPF.

  • Technical details for programming the forwarding paths.

  • Final comments on SDN OpenFlow.

Back to basics with SDN

What is OpenFlow?

OpenFlow is an open standard that enables the separation of the control plane and the data plane in network devices. It allows network administrators to centrally control and manage the behavior of network switches and routers, resulting in increased network programmability, flexibility, and scalability. OpenFlow provides a standardized protocol that facilitates communication between the control and data planes, enabling the network to be programmed and controlled through software.

Understanding SDN Adoption:

SDN is a paradigm shift in network architecture that leverages OpenFlow and other technologies to virtualize and abstract network resources. With SDN, the control plane is decoupled from the underlying physical infrastructure, allowing network administrators to configure and manage networks dynamically through a centralized controller. This centralized control simplifies network operations, enhances automation, and creates innovative network services.

The use of APIs

Besides the network abstraction, the SDN architecture will deliver a set of APIs that streamline the implementation of standard network services. These network services include routing, security, access control, and traffic engineering. Consequently, we can achieve exceptional programmability, automation, and network control, enabling us to build highly scalable and flexible networks that readily adapt to changing business needs. Then, we have OpenFlow and the SDN story. OpenFlow is the first standard interface explicitly designed for SDN, providing high-performance and granular traffic control across multiple networking devices.

Benefits of OpenFlow and SDN Adoption:

The adoption of OpenFlow and SDN comes with numerous benefits for organizations of all sizes:

1. Enhanced Network Programmability: OpenFlow and SDN enable network administrators to program and control networks through software, making implementing new network services and policies easier.

2. Increased Flexibility and Scalability: SDN allows for dynamic network reconfiguration and resource allocation, ensuring networks can adapt to changing requirements and scale efficiently.

3. Centralized Network Management: With SDN, network administrators can manage and configure multiple network devices from a centralized controller, simplifying network operations and reducing the complexity of managing traditional networks.

4. Improved Network Security: SDN facilitates the implementation of granular security policies, enabling network administrators to quickly detect and respond to security threats, enhancing overall network security.

Challenges and Considerations:

While OpenFlow and SDN offer significant advantages, their adoption comes with a few challenges that organizations need to address:

1. Compatibility: Not all network devices and vendors fully support OpenFlow and SDN, requiring organizations to consider device compatibility carefully before implementation.

2. Skillset and Training: SDN introduces new concepts and requires network administrators to acquire skills and knowledge to deploy and manage SDN-based networks effectively.

3. Transition from Legacy Infrastructure: Migrating from traditional networking solutions to SDN-based architectures requires careful planning and a phased approach to minimize disruptions and ensure a smooth transition.

Starting Points for SDN Adoption

SDN Architectures and OpenFlow

SDN architectures and OpenFlow offer several advantages. You can influence traffic forwarding behavior at a more granular flow level. A holistic view instead of a partial view of distributed devices simplifies the network. Traffic engineering with SDN becomes easier to implement when you have a centralized view; this is how Google implemented SDN. Google has two network backbones: an Internet-facing backbone and a data center backbone. 

They noticed that the cost/bit was not decreasing as the network grew. It was doing the opposite. Their solution was to implement a centralized controller and manage the WAN as a fabric, not as a collection of individual nodes.

SDN adoption report: Virtual switching fabric

SDN architectures allow networks to move from loosely coupled systems to a virtual switching fabric. One extensive flat virtualized network that appears and can be managed as a single switch has many operational advantages. The switch fabric consists of multiple physical nodes but behaves like one big switch. For example, a port on any underlying switch fabric nodes or virtual switch appears as a port to the single switching fabric.

The entire data plane becomes an abstraction. By employing this architecture, we manage the data plane as a whole entity instead of a set of loosely coupled connected devices. If we study existing networks, the control and data planes are distributed to the same locations. No central point controls individual nodes, resulting in complex cross-network interactions.


Open Shortest Path First (OSPF)

Open Shortest Path First (OSPF) calculates the shortest path tree from each node to every other node. Each OSPF neighbor must establish an adjacency, then build and synchronize the link-state database (LSDB). The complexity can be reduced by designing OSPF areas with ABRs, but at the cost of some precision in route information. Now imagine that, instead of individual nodes doing this, every node reports and synchronizes its LSDB to a central controller running an OSPF SDN application.

The controller can perform the Shortest Path First (SPF) calculation and directly update each node’s forwarding information base (FIB). The network now becomes programmable. While it does bring advantages, the laws of physics have not changed.
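To make the controller-side SPF idea concrete, here is a small, self-contained Python sketch of a Dijkstra shortest-path-first computation over a toy topology; a controller would run something like this per node and push the resulting next hops into each node's FIB. The topology, costs, and node names are invented, and a real implementation would work from the synchronized LSDB rather than a hand-written dictionary.

```python
import heapq

def spf(graph, source):
    """Dijkstra SPF from one node.
    graph: {node: {neighbour: link_cost}}.
    Returns {destination: next_hop} suitable for programming the source node's FIB."""
    dist = {source: 0}
    next_hop = {}
    pq = [(0, source, None)]  # (cost, node, first hop used to reach it)
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        if hop is not None:
            next_hop[node] = hop
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                first_hop = neighbour if node == source else hop
                heapq.heappush(pq, (new_cost, neighbour, first_hop))
    return next_hop

# Toy three-node topology with one slow direct link between A and C.
topology = {"A": {"B": 1, "C": 5},
            "B": {"A": 1, "C": 1},
            "C": {"A": 5, "B": 1}}
print(spf(topology, "A"))  # -> {'B': 'B', 'C': 'B'}: reach C via B, not the direct link
```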

OpenFlow does not decrease latency or let you push more bits through a link. It does, however, let you better manage and control your network. It removes the box-by-box mentality and introduces automation and programmability.


Do you think OpenFlow will be derailed?

SDN OpenFlow has come up against some market adoption barriers, such as silicon challenges and numerous vendor-specific extensions. In addition, the lack of conformance tests has led to some inconsistencies. Whether it gets derailed depends on how you define it. To explain what OpenFlow is, you need to know what it is not: it is not a controller or a forwarding switch, but the communication between the two.

It has a distinct place in the SDN architecture and does not run anywhere except between the control plane (the controller) and the data plane, such as an OVS bridge acting as the switch infrastructure. OpenFlow is also not alone in this space; other technologies provide control-to-data-plane communication, such as BGP, the Open vSwitch Database Management Protocol (OVSDB), NETCONF, and the Extensible Messaging and Presence Protocol (XMPP).

Juniper’s OpenContrail uses XMPP.


It is evolving, and emerging technologies are sometimes slow to be adopted. For example, in the early days of Novell networks, there were four frame types. Likewise, OpenFlow is changing and adapting as time progresses. For example, the original version of OpenFlow did not have multiple flow tables; now, versions 1.3 and 1.4 support multiple tables with various actions and many additional features.

Will it be used to program forwarding paths instead of BGP? 

Probably not, but it will augment BGP and other traditional technologies. It is not strictly a yes-or-no answer, as SDN adoption falls into two buckets: one with OpenFlow and one without. Take IPv6 adoption as the IPv4 “replacement”: there was a “D-day” of IPv4 address exhaustion, but IPv4 is still widely used, and “transition” mechanisms such as 6to4 and NAT64 are still widely deployed. It is the same with SDN and OpenFlow.

There will be ways to make traditional networks communicate with SDN and OpenFlow. BGP was invented as an exterior gateway protocol, yet people also run it internally within their networks, and BGP is now used as an SDN control plane as well. It will be the case that you have controllers that provide automation and a holistic view but speak BGP or OSPF to program the forwarding devices. SDN migrations will come incrementally, similar to what we see with IPv4 and IPv6.

The lack of clarity in the controller space has limited OpenFlow’s progress. However, the controller market is consolidating now, which gives users a clear path forward. This emergence is a good thing and will move OpenFlow forward. Maintaining SDN applications on different controllers is a dead end, but now that OpenDaylight is emerging, we have controller unity.

A market with numerous open-source controllers would make SDN application development difficult. There will always be business drivers for proprietary controllers serving particular niches and corner-case problems the open community has not invested in. Even today, specialized UNIX platforms exist alongside open Linux; the same pattern of adoption will be evident for OpenFlow controllers.

The Future of OpenFlow and SDN:

The adoption of OpenFlow and SDN has gained significant momentum in recent years, and the future looks promising for these technologies. With the increasing demand for flexible, scalable, and programmable networks, OpenFlow and SDN are vital in deploying 5G networks, Internet of Things (IoT) applications, and network virtualization.

OpenFlow and SDN adoption revolutionizes network infrastructure, offering increased programmability, flexibility, and centralized management. While challenges exist, the benefits of OpenFlow and SDN far outweigh the drawbacks. As organizations continue to embrace digital transformation, OpenFlow and SDN will continue to shape the future of networking, enabling agile, scalable, and secure networks that can adapt to the evolving needs of modern businesses.

Summary: OpenFlow and SDN Adoption

In today’s rapidly evolving technological landscape, Software-Defined Networking (SDN) and OpenFlow have emerged as game-changing innovations revolutionizing the world of networking. This blog post delves into the intricacies of SDN and OpenFlow, exploring their capabilities, benefits, and their potential to reshape the future of networking.

Understanding SDN

SDN, short for Software-Defined Networking, is a paradigm that separates the control plane from the data plane, enabling centralized network management. Unlike traditional networking approaches, SDN decouples network control, making it programmable and agile. It empowers network administrators with unprecedented flexibility and control over their infrastructure. 

Unveiling OpenFlow

At the core of SDN lies OpenFlow, a protocol that enables communication between the control and data planes. OpenFlow facilitates the flow of network packets, allowing administrators to define and manage network traffic dynamically. By providing a standardized interface, it promotes interoperability between different vendors’ networking equipment, fostering innovation and cost-effectiveness. 

Benefits of SDN and OpenFlow

Enhanced Network Flexibility and Scalability: SDN and OpenFlow enable network administrators to adjust network resources dynamically, optimize traffic flow, and respond to changing demands. This flexibility and scalability empower organizations to adapt swiftly to evolving network requirements, ensuring efficient resource utilization. 

Simplified Network Management: With SDN and OpenFlow, network administrators can centrally manage and orchestrate network devices, eliminating the need for manual configurations on individual devices. This centralized control simplifies network management, reduces human errors, and accelerates the deployment of new services. 

Improved Network Security: SDN’s centralized control allows for better security management. Administrators gain granular control over network access, threat detection, and response by implementing security policies and protocols at the controller level. This enhanced security posture helps safeguard critical assets and data. 

Data Center Networking: SDN and OpenFlow find extensive applications in data centers, where virtualization and cloud computing demand dynamic resource allocation and efficient traffic management. By abstracting network control, SDN facilitates seamless scalability, load balancing, and efficient utilization of data center resources.  

Campus and Enterprise Networks: In campus and enterprise networks, SDN and OpenFlow enable administrators to manage and optimize network traffic, prioritize critical applications, and quickly respond to changing user demands. These technologies also facilitate network slicing, allowing organizations to create virtual networks tailored to specific requirements. 

In conclusion, SDN and OpenFlow represent a paradigm shift in networking, offering immense potential for increased efficiency, scalability, and security. As organizations continue to embrace digital transformation, these technologies will play a pivotal role in shaping the future of networking. By decoupling network control and leveraging the power of programmability, SDN and OpenFlow empower administrators to build agile, intelligent, and future-ready networks.

HP SDN Controller

In today's fast-paced digital world, efficient network management is crucial for organizations to stay competitive. Traditional network infrastructures often struggle to keep up with the increasing demands of modern applications and services. Enter the HP SDN Controller, a revolutionary solution transforming how networks are managed. In this blog post, we will delve into the world of the HP SDN Controller, exploring its features, benefits, and how it is reshaping the future of network management.

The HP SDN Controller is a software-defined networking (SDN) solution designed to simplify and automate network management. By decoupling the network control plane from the underlying infrastructure, the SDN Controller empowers organizations to manage and control their networks centrally, making it easier to deploy, scale, and adapt to changing business needs.

HP SDN, short for Software-Defined Networking, is a cutting-edge approach to network architecture that separates the control plane from the data plane, allowing for more flexible and programmable network management. By decoupling these two components, HP SDN enables administrators to centrally control and manage network resources, resulting in enhanced agility, scalability, and efficiency.

HP SDN boasts a range of powerful features that set it apart from traditional networking solutions. One of its key features is the OpenFlow protocol, which enables seamless communication between the control and data planes. This facilitates dynamic network configuration, traffic engineering, and the implementation of advanced network services.

Another notable feature of HP SDN is its centralized management platform, which provides a single pane of glass for network administrators to monitor and control various network devices. This simplifies network troubleshooting, reduces the likelihood of human errors, and ensures better resource utilization.

The adoption of HP SDN brings forth a multitude of benefits for organizations across various industries. Firstly, it enhances network agility by allowing administrators to rapidly provision and configure network resources in response to changing demands. This agility enables businesses to adapt quickly to evolving market needs and stay ahead of the competition.

Secondly, HP SDN facilitates network scalability by providing a more efficient and flexible approach to network provisioning. With the ability to dynamically allocate resources, organizations can easily scale their networks to accommodate growing workloads, without the need for costly hardware upgrades.

Furthermore, HP SDN improves network security by providing granular control over network access and traffic flow. Administrators can implement robust security policies and segment their networks to isolate critical assets, mitigating the risk of unauthorized access and potential security breaches.

The real-world applications of HP SDN are vast and diverse. From data centers and campus networks to telecommunications and cloud service providers, HP SDN is revolutionizing network management across industries. Its ability to optimize network performance, improve resource utilization, and simplify network operations makes it an invaluable tool for organizations of all sizes.

In conclusion, HP SDN is a game-changing technology that empowers businesses to unlock the full potential of their network infrastructure. By providing centralized control, enhanced agility, scalability, and improved security, HP SDN paves the way for a more efficient and future-ready network ecosystem. Embracing the power of HP SDN is not just a choice; it is a strategic move towards network excellence.

Highlights: HP SDN

HP Software-defined Networking (SDN) automates networks from data centers to campuses and branches. HP SDN ecosystem expands SDN innovation by delivering resources to develop and create a market for SDN applications. Benefits of HP’s SDN ecosystem include:

  • An open standards-based network that is simple and programmable
  • Adapt your network dynamically to business needs
  • Deployment of applications is automated and rapid

HP’s new Software-Defined Networking (SDN) architecture, combined with innovative network virtualization technologies, will enable multi-tenant cloud environments that were not possible before with legacy data center networks. HP offers cloud networking through an open-source application, the HP Virtual Cloud Networking SDN Application, and integration with other solutions, such as VMware. OpenFlow and Virtual Extensible LAN (VXLAN) are the underlay and overlay technologies that HP uses as part of its core strategy based on open standards.

OpenFlow: OpenFlow is a standard communication protocol defined by the Open Networking Foundation (ONF). The protocol provides access and communication between the control and infrastructure layers of a Software-defined Network (SDN).

  • Simplifies network management and programming of network devices
  • Allows for dynamic traffic flow change
  • Enables the network to be more responsive to business needs

VXLAN (Virtual Extensible LAN): VXLAN is an encapsulation protocol that overlays virtual networks on top of existing physical networks.

  • Ensures your data center is software-defined
  • Scalable multi-tenancy
  • Because VXLAN runs on standard switching hardware, it provides investment protection.

What is SDN, and what is its role?

On the other hand, SDN is an open architecture proposed by the Open Networking Foundation (ONF) to address current networking challenges. It facilitates configuration automation and, even better, full network programming. Compared to the conventional distributed network architecture, which bundles software and hardware into closed and vertically integrated network devices, SDN architecture elevates the level of abstraction by separating the network data plane and control plane.

By doing so, network devices become simple forwarding switches; all the control logic is centralized in software controllers, enabling the development of specialized applications and the deployment of new services.


SDN Benefits

It is believed that such aspects of SDN simplify and improve network management by allowing innovation, customizing behaviors, and controlling the network according to high-level policies expressed as centralized programs. In this way, the complexity of low-level network details is bypassed, and the fundamental architectural problems are overcome. Through SDN’s southbound interface abstraction, SDN can also easily handle the underlying infrastructure’s heterogeneity.

Application SDN

This post discusses the HP SDN Controller and HP’s approach to OpenFlow, which is built on the OpenFlow protocol. Together, these enable an exciting approach to SDN applications. Today, provisioning network services for an application takes too long; as a result, the network lacks agility, and making changes is still a manual process.

Usually, when an application is rolled out, you must reconfigure every device through a command-line interface (CLI). This type of manual configuration cannot accommodate today’s application requirements. Furthermore, static rollout frameworks prohibit dynamic changes to the network, blocking the full potential that applications can bring to the business.


Remove Rigidity

Software-defined networking (SDN) aims to take the rigidity out of networks and give you the visibility to make real-time changes and responses. The HP SDN Application Suite changes how the network responds to business needs by programming the network differently. The following post discusses the HP SDN Controller and how it works with HP OpenFlow, where HP uses the best parts of OpenFlow and combines them with traditional routing and switching. I will also provide examples of SDN applications, such as Network Protector and Network Optimizer.

Before you proceed, you may find the following helpful post for pre-information:

  1. SDN Traffic Optimizations
  2. What Is OpenFlow
  3. BGP SDN 
  4. What Does SDN Mean
  5. SDN Adoption Report
  6. WAN SDN 
  7. Hyperscale Networking



SDN Controller

Key HP SDN Controller Discussion Points:


  • Introduction to HP SDN Controller and what is involved.

  • Highlighting HP OpenFlow and the components involved.

  • Critical points on the SDN VAN controller.

  • Technical details on Application SDN: Network Protector.

  • Technical details on Application SDN: Network Optimizer

Back to basics with SDN

Software-defined networking (SDN) is the decoupling of network control from the networking devices that forward the traffic. The network control functionality, also known as the control plane, is decoupled from the data forwarding functionality (the data plane). Furthermore, this split-out control is programmable through a set of exposed APIs. The migration of control logic, which used to be tightly integrated into networking devices, into logically centralized controllers enables the underlying networking infrastructure to be abstracted from an application’s point of view.

Key Features of HP SDN Controller:

Centralized Management: The SDN Controller provides a centralized platform for managing and configuring network devices, eliminating the need for manual configurations on individual switches or routers. This streamlined approach improves efficiency and reduces the risk of human errors.

Programmable Network: With the HP SDN Controller, network administrators can program and control the behavior of the network through open APIs. This programmability allows for greater flexibility and customization, enabling organizations to tailor their network infrastructure to meet specific requirements, such as optimizing performance, enhancing security, or enabling new services.

Network Virtualization: Virtualizing the network infrastructure allows organizations to create multiple virtual networks on a shared physical infrastructure. The SDN Controller enables network virtualization, providing isolation and segmentation of traffic, improving network scalability, and simplifying network management.

Traffic Engineering and Performance Optimization: HP SDN Controller enables dynamic traffic engineering, allowing administrators to intelligently route traffic based on real-time conditions. This capability improves network performance, reduces congestion, and enhances user experience.

Benefits of HP SDN Controller:

Improved Network Agility: The SDN Controller enables organizations to respond quickly to changing business needs, allowing for a more agile and flexible network infrastructure. It simplifies the deployment of new applications and services, reduces time-to-market, and enhances the organization’s ability to innovate.

Enhanced Security: The SDN Controller’s centralized control and programmability allow organizations to implement security policies and access control measures more effectively. It enables granular control and visibility, empowering administrators to monitor and secure the network infrastructure against potential threats.

Cost Savings: By automating network management tasks and optimizing resource allocation, the HP SDN Controller helps organizations reduce operational costs. It eliminates the need for manual configurations on individual devices, reduces human errors, and improves overall network efficiency.

Scalability and Flexibility: The SDN Controller allows organizations to scale their network infrastructure as their business grows. It supports integrating new devices, services, and technologies without disrupting the existing network, ensuring flexibility and future-proofing the infrastructure.

Real-World Applications of HP SDN Controller:

Data Centers: HP SDN Controller facilitates the management and orchestration of network resources in data centers, enabling organizations to allocate resources efficiently, optimize workload distribution, and enhance overall performance.

Campus Networks: By centralizing network management, the SDN Controller simplifies the configuration and deployment of services across campus networks. It allows for seamless integration of wired and wireless networks, improves scalability, and enhances user experience.

Service Providers: HP SDN Controller empowers providers to deliver agile and scalable customer services. It enables the creation of virtualized network functions and improves service provisioning, reducing time-to-market and enhancing service quality.

HP SDN

Hewlett Packard (HP) has taken a different approach to SDN. They do not want to reinvent every wheel and roll out a blanket greenfield OpenFlow solution. Routing has worked for 40 years, so we cannot expect a revolutionary change to routing; it’s simply not there. Consider how complicated distributed systems are. Replacing all Layer 2 and Layer 3 protocols with OpenFlow is nearly impossible.

Layer 2 switches learn MAC addresses automatically, building a table that can selectively forward packets. So, why is there a need to replace how switches learn at Layer 2? The Layer 2 learning mechanism works fine; there is no real driver to replace it. There are potential drivers for Spanning Tree Protocol (STP) replacement, as STP can be dangerous, but there is no reason to replace the Layer 2 learning mechanism. So, why attempt this with OpenFlow?

HP OpenFlow

OpenFlow comes with its challenges. It derives from Stanford and is very academic. It’s hard to use and deploy in its pure form. HP adds to it and makes it more usable, tuning the implementation to match today’s network requirements by using parts of OpenFlow; consider this HP OpenFlow plus traditional routing. Pure OpenFlow is not a general-purpose replacement, but certain narrow niche cases exist where it can be used. Campus networks are one of those niches, and HP is marketing its product set for this niche.

Their HP SDN controller product set targets the network edge and leaves the core to do what it does best. This allows an easy migration path by starting at the edge and moving gradually to the core ( if needed ). This type of migration path keeps the potential blast radius to a minimum. An initial migration strategy that starts at the edge with SDN islands sounds appealing.

Diagram: HP SDN Controller.

HP SDN: The SDN VAN controller

HP removed the North-South bottleneck communication. They are not sending anything to the controller. Any packets that miss an OpenFlow rule hit what is known as the last rule and are sent with standard packet processing via traditional methods.

The last rule, “Forward match all – forward normal,” reverts to the regular forwarding plane, and the network does what it’s always done. If no OpenFlow match exists, packets are forwarded via traditional means. HP uses a conventional distributed control plane so the design can scale.

Consider a controller that has to learn the topology and compute the best path through it: controller-based “routing” is almost certainly more complex than distributed routing protocols. The HP SDN design does not do this; it combines the best of OpenFlow and routing. OpenFlow rules take precedence over most control plane elements.

However, most Layer 2-control plane protocols are left to traditional methods. As a general rule, you keep time-critical things such as Link Aggregation Control Protocol (LACP) and Bidirectional Forwarding Detection (BFD) with conventional methods, and other controls that are not as time-critical can be done with OpenFlow.

  • HP OpenFlow: HP uses OpenFlow to glean information from the forwarding plane, not to modify it.

 

The controller can work in several modes. The first is the hybrid mode, which forwards with OpenFlow rules; if no OpenFlow rule is matched, it falls back to standard processing. The second mode is Discovery. This is where the local SDN switches send copies of ARP and DHCP packets to the controller. By analyzing this information, the controller knows where all the hosts are and can build a network topology map. A centralized view of the network topology is a significant benefit of SDN.
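
The “forward normal” fallback can be made concrete with a short controller sketch. The example below is a minimal illustration using the open-source Ryu framework, not HP’s VAN controller: it installs a lowest-priority table-miss entry whose action is OFPP_NORMAL, so any packet that matches no OpenFlow rule is handed back to the switch’s traditional forwarding pipeline.

```python
# Illustrative sketch using the open-source Ryu framework (not HP's controller).
# Installs a priority-0 "match all, forward NORMAL" rule so unmatched packets
# fall back to the switch's traditional L2/L3 forwarding pipeline.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class HybridFallback(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_table_miss(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()                             # match everything
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]   # hybrid: normal pipeline
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```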

They also use BDDP, which is similar to LLDP. It works across a broadcast domain and is not just link-level, enabling it to traverse OpenFlow-enabled switches. The controller does not directly influence forwarding; it learns the topology by listening to endpoint discovery information. The controller now contains a topology view, but there is no intercepting or redirecting of traffic. Instead, it provides endpoint visibility across the network.

HP has started to integrate its SDN controller with Microsoft Active Directory. This gives the controller a different layer of visibility, not just IP and Subnet-based. It now gives you a higher-level language to control your network. It is making decisions based on users and groups, not subnets.

Application SDN: Network Protector  

Malware and spyware cause many issues, and the HP Network Protector product can help with these challenges. It enables real-time assessment and security across all SDN devices. The SDN application pushes down one rule: UDP 53 redirects to the controller. It intercepts UDP 53 and can push down ACL rules to block certain types of traffic.

They extract DNS traffic at the network’s edge and pass it to the controller. Application features rank the reputation of an external site and determine how likely you are to get something nasty if you go to that site. An additional hit-count capability lets the network admin track who requests what. For example, if a host makes 3,000 DNS requests per second, it is considered an infected host and quarantined by sending down additional OpenFlow rules.
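
A minimal sketch of the quarantine logic described above, assuming the 3,000-requests-per-second threshold quoted in the text; the counters and the install_block_rule helper are illustrative assumptions, not HP’s implementation.

```python
# Hypothetical sketch of the Network Protector quarantine logic.
# The controller receives redirected UDP/53 packets, counts DNS requests
# per host, and quarantines any host exceeding the per-second threshold.
import time
from collections import defaultdict

DNS_THRESHOLD_PER_SEC = 3000          # threshold quoted in the text
counters = defaultdict(list)          # host IP -> recent request timestamps


def on_dns_packet(src_ip, install_block_rule):
    """Called for each UDP/53 packet redirected to the controller."""
    now = time.time()
    window = counters[src_ip] = [t for t in counters[src_ip] if now - t < 1.0]
    window.append(now)
    if len(window) > DNS_THRESHOLD_PER_SEC:
        # Push additional OpenFlow rules to quarantine the infected host.
        install_block_rule(src_ip)
```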

Diagram: Application SDN

Application SDN and Network visualizer  

An SDN application for network admins assists in troubleshooting by showing where traffic is and where it is going. The network admin can select traffic, make copies, and send it to a location. This is similar to tapping, except it is quicker and easier to roll out. Now, your network traffic is viewable on any port and switch. This app lets you look at traffic on the wire straight away.

As it is now integrated with Active Directory, when a user calls and says he has a network problem, you can extract his traffic by user ID and debug it remotely.

All you need is the user ID; in 30 seconds, you can see his packets. This is a level of visibility previously unavailable. HP gives you a level of network traffic detail that was not possible in the past. You could also grab ingress OSPF for analysis, something you could not do before. You can mirror LSAs and recreate the entire topology, and you only need access to one switch in the OSPF area.

Application SDN and Network optimizer  

This SDN application is used for Microsoft Lync and Skype for Business. It provides automated provisioning of network policy and quality of service to endpoints. Microsoft created a diagnostics API for Lync, called the SDN API. This diagnostics API sends information about the calls: username, IP, and port number on both sides – ingress and egress.

It can reach the ingress switch on each side and remark the Differentiated Services Code Point (DSCP) for the ingress flows. This is how SDN applications should work: the application requests service from the network, and the network responds. Previously, we worked at Layer 4 with ACLs and QoS, not with the Layer 7 application. Now, with HP Network Optimizer, the application can notify the network, and the network can respond.
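
Conceptually, the controller turns each call record from the diagnostics API into an ingress flow rule that remarks DSCP. The sketch below is a hypothetical illustration; the field names in the call record and the push_flow helper are assumptions, and the real SDN API payload differs.

```python
# Hypothetical sketch: turn a Lync/Skype call record into ingress flow rules
# that remark DSCP for the media streams on both sides of the call.
EF_DSCP = 46  # Expedited Forwarding, a common choice for voice traffic


def remark_call(call, push_flow):
    """call: dict with per-side 5-tuple details (illustrative field names)."""
    for side in ("caller", "callee"):
        push_flow(
            switch=call[side]["ingress_switch"],          # ingress ToR for this side
            match={
                "ipv4_src": call[side]["ip"],
                "udp_src": call[side]["port"],
                "ip_proto": 17,                           # UDP media stream
            },
            actions=[{"set_field": {"ip_dscp": EF_DSCP}}, # remark DSCP at the edge
                     {"output": "NORMAL"}],               # then forward normally
        )
```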

 The HP SDN suite is about adding value to the network’s edge. You allow the dynamic value of SDN only as far as the customer’s risk appetite permits. Keeping the dynamic SDN at the edge while keeping the core static is a significant value of SDN and an excellent migration strategy. The SDN concept brings information that would otherwise sit outside the network into the network.

Summary: HP SDN

The need for flexible and scalable networks has become paramount in today’s rapidly evolving technological landscape. Enter HP SDN (Software-Defined Networking), a revolutionary approach that transforms traditional networking by separating the control and data planes. In this blog post, we delved into the world of HP SDN, exploring its key concepts, benefits, and real-world applications.

Understanding HP SDN

HP SDN is a groundbreaking paradigm shift in networking architecture. It enables network administrators to manage and control network behavior centrally using software applications. SDN empowers organizations to build dynamic, programmable, and agile networks by decoupling the control plane from the underlying hardware.

Key Components of HP SDN

To comprehend the workings of HP SDN, it is essential to grasp its key components. The SDN Controller acts as the brain of the network, orchestrating and directing network traffic. It communicates with SDN switches that forward packets based on the instructions received from the controller. OpenFlow, a vital protocol, facilitates communication between the controller and switches, ensuring seamless interoperability.

Benefits of HP SDN

The benefits of HP SDN are manifold. Firstly, it offers enhanced network agility, allowing administrators to adapt to changing business requirements swiftly. Secondly, SDN simplifies network management, reducing operational complexities. Moreover, by centralizing control, SDN enables intelligent traffic engineering, improving network performance and efficiency. Lastly, HP SDN promotes innovation and accelerates the deployment of new services, driving business growth.

Real-World Applications

HP SDN has found its applications across various industries. SDN enables dynamic resource allocation in data centers, optimizes workload distribution, and improves overall efficiency. SDN facilitates secure and scalable network connectivity for students and faculty in the education sector. Furthermore, SDN plays a crucial role in service provider networks, enabling the rapid provisioning of new services and enhancing service quality.

Conclusion:

In conclusion, HP SDN represents a paradigm shift in networking, revolutionizing how we design, manage, and operate networks. Its ability to centralize control, enhance agility, and drive innovation makes it a game-changer in the industry. As organizations strive for flexible and scalable networks, HP SDN emerges as a powerful solution that paves the way for future networks.

Hyperscale Networking

In today's digital age, where data is generated at an unprecedented rate, traditional networking infrastructures are struggling to keep up with the demand. Enter hyperscale networking, a revolutionary paradigm transforming how we build and manage networks. In this blog post, we will explore the concept of hyperscale networking, its benefits, and its impact on various industries.

Hyperscale networking refers to quickly and seamlessly scaling network infrastructure to accommodate massive amounts of data, traffic, and users. It is a distributed architecture that leverages cloud-based technologies and software-defined networking (SDN) principles to achieve unprecedented scalability, agility, and efficiency.

Hyperscale networking is a revolutionary approach to networking that enables organizations to scale their networks rapidly and efficiently. Unlike traditional networking architectures, hyperscale networking is designed to handle massive amounts of data and traffic with ease. It leverages advanced technologies and software-defined principles to create flexible, agile, and highly scalable networks.

One of the key benefits of hyperscale networking is its ability to handle exponential data growth. With the rise of cloud computing, big data, and the Internet of Things (IoT), businesses are generating and processing enormous volumes of data. Hyperscale networking allows organizations to scale their networks seamlessly to accommodate this data explosion.

Another advantage of hyperscale networking is its cost-effectiveness. By leveraging commodity hardware and open-source software, organizations can build and manage their networks at a fraction of the cost compared to traditional networking solutions. This scalability and cost-efficiency make hyperscale networking an attractive option for businesses of all sizes.

Hyperscale networking has had a profound impact on modern businesses across various industries. It has enabled cloud providers to deliver scalable and reliable services to millions of users worldwide. Additionally, hyperscale networking has empowered enterprises to embrace digital transformation initiatives by providing the infrastructure needed to support modern applications and workloads.

Moreover, hyperscale networking has played a crucial role in enabling the deployment of emerging technologies such as artificial intelligence (AI), machine learning (ML), and edge computing. These technologies heavily rely on robust and flexible networking architectures to deliver real-time insights and drive innovation.

In conclusion, hyperscale networking is a game-changer in the world of networking. Its ability to handle massive data volumes, cost-effectiveness, and impact on modern businesses make it a compelling choice for organizations looking to scale their networks and embrace digital transformation. As the demand for data continues to grow, hyperscale networking will undoubtedly play a pivotal role in shaping the future of connectivity.

Highlights: Hyperscale networking

The term hyperscale describes the ability of a system or technology architecture to scale as more resources are demanded. Organizations can use hyperscale computing to meet growing data demands without additional cooling, power, or space for large, distributed computing networks. 

The hyperscale infrastructure includes scalable cloud computing systems connected by many servers. Servers can be increased or decreased to meet a network’s capacity and performance requirements. 

As the modern enterprise needs to support big data and cloud computing, hyperscale infrastructure is critical to building strong, scalable, distributed infrastructure systems. It combines compute, storage, and virtualization layers in a single solution. Hyperscale infrastructure is typically associated with large cloud computing and data center providers.

How Hyperscale Works

Unlike conventional computing systems, hyperscale computing ditches high-grade constructs. Instead, it utilizes stripped-down designs that maximize hardware effectiveness, which is more cost-effective and allows for greater software investment.

In hyperscale computing, servers are networked horizontally so they can be added or removed quickly and easily depending on capacity requirements. A load balancer manages this process by monitoring the amount of data to be processed, handling requests, and distributing work according to the available resources. The load balancer determines whether to add additional servers by continuously comparing server workload against the volume of data to be processed.
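
As a rough illustration of that decision loop, the sketch below compares average pool utilization against scale-out and scale-in thresholds; the threshold values and helper functions are assumptions, not any vendor’s algorithm.

```python
# Illustrative autoscaling decision loop for a hyperscale server pool.
SCALE_OUT_AT = 0.80   # add capacity above 80% average utilization (assumption)
SCALE_IN_AT = 0.30    # remove capacity below 30% average utilization (assumption)


def rebalance(pool, add_server, remove_server):
    """pool: list of per-server utilization ratios reported to the load balancer."""
    avg = sum(pool) / len(pool)
    if avg > SCALE_OUT_AT:
        add_server()       # horizontally add a server to the pool
    elif avg < SCALE_IN_AT and len(pool) > 1:
        remove_server()    # shrink the pool when demand drops
```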

Hyperscale benefits

In addition to making it easier for organizations to find data, hyperscale limits excessive data copying, simplifies data backups, and imposes security controls and policies. Scaling and flexibility can be achieved cost-effectively, and organizations can maximize their hardware resources.

Using Hyperscale, organizations can fully utilize their existing hardware by fully integrating computing and networking. Cloud computing and big data analytics allow businesses to run their projects more quickly and easily.

SDN data plane

The SDN data plane consists of a distributed set of forwarding network elements (mainly switches) responsible for forwarding packets. SDN uses software-based control through an open, vendor-neutral southbound interface, and the control plane and data plane are separated.

Several well-known candidate protocols for the southbound interface exist, including OpenFlow (McKeown et al. 2008; Costa et al. 2021) and ForCES (Forwarding and Control Element Separation). In both cases, the control and forwarding planes are divided into network elements, and the communication between them is standardized. In terms of network architecture design, these solutions differ in many ways.

SDN control plane

In SDN architectures, the control plane is the backbone that communicates between network applications and devices through a centralized software controller. The SDN controllers deliver relevant information to SDN applications by translating applications’ requirements into underlying elements of the data plane.

In SDN, the control layer, called the network operating system (NOS), abstracts network data from the application layer. Policies can be specified while implementation details are hidden.

Typically, the control plane is logically centralized yet implemented as a physically distributed system for scalability and reliability. East-west APIs are needed to enable communication and network information exchange across SDN controllers.

SDN application plane

An SDN application plane consists of control programs that implement logic and strategies for network control. An open northbound API connects this higher-level plane to the control plane. The SDN controller translates the network requirements of SDN applications into southbound-specific commands and forwarding rules that dictate how the SDN devices should behave. SDN applications that run on top of existing controller platforms include routing, traffic engineering (TE), firewalls, and load balancing.

Example: Big Switch

Over the last five years, data center innovation has come from companies such as Google, Facebook, Amazon, and Microsoft. These companies are referred to as hyperscale players. The vision of Big Switch is to take the hyperscale concepts developed by these companies and bring them to smaller data centers around the world in the form of hyperscale networking, enabling a hyperscale architecture.

What is OpenFlow

Before you proceed, you may find the following posts helpful for pre-information:

  1. Virtual Data Center Design
  2. ACI Networks
  3. Application Delivery Architecture
  4. ACI Cisco
  5. Data Center Design Guide



Hyperscale Networking

Key Hyperscale Architecture Discussion Points:


  • Introduction to hyperscale architecture and what is involved.

  • Highlighting the challenges of a standard chassis design.

  • Critical points on bare metal switches.

  • Technical details on the core and pod designs.

  • SDN controller architecture and distributed routing.

Back to basic with OpenFlow

With OpenFlow, the switching device has no control plane; the controller interacts directly with the FIB. OpenFlow provides a packet format, and a protocol to carry those packets, that describes forwarding table entries in the FIB. In OpenFlow documentation, the FIB is referred to as the flow table, which contains information about each flow the switch needs to know about.
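
A toy model of a flow table makes the idea concrete: each entry carries match fields, a priority, and an action list, and a lookup returns the actions of the highest-priority matching entry. The entries below are illustrative, not any vendor’s format.

```python
# Toy model of an OpenFlow flow table: the highest-priority matching entry wins.
flow_table = [
    {"priority": 100, "match": {"eth_dst": "00:11:22:33:44:55"}, "actions": ["output:3"]},
    {"priority": 100, "match": {"eth_dst": "00:aa:bb:cc:dd:ee"}, "actions": ["output:7"]},
    {"priority": 0,   "match": {},                               "actions": ["NORMAL"]},
]


def lookup(packet):
    """packet: dict of header fields; returns the actions of the best match."""
    for entry in sorted(flow_table, key=lambda e: e["priority"], reverse=True):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]


print(lookup({"eth_dst": "00:11:22:33:44:55"}))   # -> ['output:3']
print(lookup({"eth_dst": "ff:ff:ff:ff:ff:ff"}))   # -> ['NORMAL'] (table-miss)
```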

Critical Benefits of Hyperscale Networking:

1. Scalability: Hyperscale networking allows organizations to scale their networks effortlessly as demand grows. With traditional networking, scaling often involves costly hardware upgrades and complex configurations. In contrast, hyperscale networks can scale horizontally by adding more commodity hardware, resulting in significantly lower costs and simplified network management.

2. Agility: In the fast-paced digital landscape, businesses must adapt quickly to changing requirements. Hyperscale networking enables organizations to deploy and provision network resources on demand, reducing time-to-market for new services and applications. This agility empowers businesses to respond rapidly to customer demands and gain a competitive edge.

3. Enhanced Performance: Hyperscale networks are designed to handle massive data and traffic efficiently. By distributing workloads across multiple nodes, these networks can deliver superior performance, low latency, and high throughput. This translates into a seamless user experience and improved productivity for businesses.

4. Cost Efficiency: Traditional networking often involves significant upfront investments in proprietary hardware and complex infrastructure. Hyperscale networking leverages off-the-shelf hardware and cloud-based technologies, resulting in cost savings and reduced operational expenses. Moreover, the ability to scale horizontally eliminates the need for expensive equipment upgrades.

Hyperscale Networking in Various Industries:

1. Cloud Computing: Hyperscale networking is the backbone of cloud computing platforms. It enables cloud service providers to deliver scalable and reliable services to millions of users worldwide. By leveraging hyperscale architectures, these providers can efficiently manage massive workloads and deliver high-performance cloud services.

2. Internet of Things (IoT): The proliferation of IoT devices generates enormous amounts of data that must be processed and analyzed in real-time. Hyperscale networking provides the infrastructure to handle the massive data influx from IoT devices, ensuring seamless connectivity, efficient data processing, and rapid insights.

3. E-commerce: The e-commerce industry heavily relies on hyperscale networking to handle the ever-increasing number of online transactions, user interactions, and inventory management. With hyperscale networks, e-commerce platforms can ensure fast and secure transactions, reliable inventory management, and personalized user experiences.

Hyperscale Architecture

Hyperscale networking consists of three elements. The first is bare metal and open switch hardware. Bare metal switches are sold without software and make up 10% of all ports shipped. The second hyperscale aspect is Software-Defined Networking (SDN). In the SDN vision, one device acts as a controller, managing the physical and virtual infrastructure.

The third element is the actual data architecture—Big Switch leverages what’s known as the Core-and-Pod design. Core-and-Pod differs from the traditional core, aggregation, and edge model, allowing incredible scale and automation when deploying applications.

Diagram: Hyperscale Networking

Standard Chassis Design vs. SDN Design

Standard chassis-based switches have supervisors, line cards, and fabric backplanes. In addition, a proprietary protocol runs between the blades for control. Big Switch has all of these components but names them differently. Under the covers, the supervisor module acts like an SDN controller, programming the line cards and fabric backplane.

Instead of supervisors, they have a controller, and the internal chassis proprietary protocol is OpenFlow. The leaf switches are treated like line cards, and the spine switches are like the fabric backplane. In addition, they offer an OpenFlow-integrated architecture.

Diagram: Hyperscale architecture

Traditional data center topologies operate on a hierarchical tree architecture. Big Switch follows a newer networking architecture called leaf-spine, which overcomes the shortcomings of conventional tree architectures. To map the leaf and spine to traditional data center terminology, the leaf is the access switch, and the spine is the core switch.

In addition, the leaf and spine operate on the concept that every leaf has equidistant endpoints. Designs with equidistant endpoints make POD placement and service insertion easier than hierarchical tree architecture.

The Big Switch hyperscale architecture has multiple connection points, similar to an Equal Cost Multipath (ECMP) fabric and Multi-Chassis Link Aggregation (MLAG), enabling Layer 2 and Layer 3 multipathing. This type of connectivity allows network partition problems to occur without a global effect. You still lose the spine switch’s capacity but have not lost connectivity. The controller controls all this and has a central view.

  • Losing a leaf switch in a leaf and spine architecture is not a big deal as long as you have configured multiple paths.

Bare metal switches

The first hyperscale design principle utilizes bare metal switches. Bare metal switches are Ethernet switches sold without software. Disaggregating the hardware from the switch software allows you to build your switch software stack. It is cheaper in terms of CAPEX and will enable you to tune the operating system to your needs better. It gives you the ability to tailor the operations to specific requirements.

Core and pod design

Traditional core-agg-edge is a monolithic design that cannot evolve. Hyperscale companies are now designing to a core-and-pod design, allowing operations to improve faster. Data centers are usually made up of two core components. One is the core with the Layer 3 routes for ingress and egress routing. Then, you have a POD, a self-contained unit connected to the core.

Intra-communication between PODs is done via the core. A POD is a certified design of servers, storage, and network grouped into standard services. Each POD contains an atomic networking, computing, and storage unit attached directly to the core via Layer 2 or 3. Due to a POD’s fixed configuration, automation is simple and stable.

Hyperscale Networking and Big Switch Products

Big Tap and Big Cloud Fabric are two product streams from Big Switch. Both use a fabric architecture built on white box switches with a centralized controller and a POD design. Big Cloud Fabric’s hyperscale architecture is designed as the network for a POD.

Each Big Cloud Fabric instance is a pair of redundant SDN controllers and a leaf/spine topology that forms the network for your POD. Switches are zero-touch, so they are stateless: turn them on, and they boot and download the switch image and configuration. The controller auto-discovers all of the links and troubleshoots any physical problems.


SDN controller architecture

There are generic architectural challenges in SDN controller-based networks. The first crucial question is: where is the functionality split between the controller and the network devices? In OpenFlow, it’s clear that the split is between the control plane and the data plane. The split affects the outcomes of various events, such as a controller bug, controller failure, or network partition, and it determines the size of the failure domain.

You might have an SDN controller cluster, but every single controller is still a single point of failure. The controller cluster protects you from hardware failures but not from software failures. If someone misconfigures or corrupts the controller database, you lose the controller regardless of how many controllers are in a cluster.

Every controller is a single fat-finger domain. Due to the complexity of clusters and clustering protocols, you could introduce failures through poor design. Every distributed system is complex, and it is even more challenging if it has to work with real-time data.


SDN controller – Availability Zones

The optimum design is to build controllers per availability zone. If one controller fails, you lose that side of the fabric but still have another fabric. To use this concept, you must have applications that can run in multiple availability zones. Availability zones are great, but applications must be adequately designed to use them. Availability zones usually relate to a single failure domain.

How do you deal with failures, and what failure rate is acceptable? The failure rate acceptance level drives the redundancy in your network. Full redundancy is a great design goal as it reduces the probability of total network failure. But full redundancy will never give you 100% availability. Network partitions still happen with fully redundant networks.

Be careful of split-brain scenarios, where one controller looks after one partition and another looks after the other partition. Big Switch overcomes this with a distributed control plane: the forwarding elements keep forwarding, so a network partition can happen without taking the fabric down.

Hyperscale Architecture: Big Switch distributed routing.

For routing, they have a concept known as a tenant router. With the tenant router, you can say that two broadcast domains can talk to each other via policy points. A tenant router is a logical router physically distributed throughout the entire network. Every switch has a local copy of the tenant router’s routing table. The routing state is spread everywhere, and traffic does not need to cross a specific Layer 3 point to get from one Layer 2 segment to another.

As all the leaf switches have a distributed copy of the database, all routing takes the most optimal path. When two broadcast domains are on the same leaf switch, traffic does not have to hairpin to a physical Layer 3 point.

You can map the application directly to the tenant router, which acts like a VRF, with VRF packet forwarding in hardware. This is known as micro-segmentation. With this, you can put a set of applications or VMs in a tenant, demarcate the network by tenant, and have a per-tenant policy.

Hyperscale networking revolutionizes how we build and manage networks in the digital era. Its ability to scale effortlessly, provide agility, enhance performance, and reduce costs makes it a game-changer in various industries. As data volumes grow, organizations must embrace hyperscale networking to stay competitive, deliver exceptional user experiences, and drive innovation in a rapidly evolving digital landscape.

Summary: Hyperscale networking

In today’s digital age, where data is generated at an unprecedented rate, the need for efficient and scalable networking solutions has become paramount. This is where hyperscale networking steps in, offering a revolutionary approach to connectivity. In this blog post, we delved into the world of hyperscale networking, exploring its key features, benefits, and impact on the ever-evolving landscape of technology.

Understanding Hyperscale Networking

Hyperscale networking refers to the ability to scale network infrastructure dynamically and rapidly to meet the demands of large-scale applications and services. It involves using robust hardware and software solutions to handle massive amounts of data and traffic without compromising performance or reliability. By leveraging high-speed interconnections, virtualization technologies, and advanced routing algorithms, hyperscale networks provide a solid foundation for modern digital ecosystems.

Key Features and Benefits

One of the defining features of hyperscale networking is its ability to scale horizontally, allowing organizations to expand their network infrastructure seamlessly. With hyperscale architectures, businesses can easily add or remove resources as needed, ensuring optimal performance and cost-efficiency. Additionally, hyperscale networks offer improved resiliency and fault tolerance through redundant components and automated failover mechanisms. This ensures minimal downtime and uninterrupted service delivery, which is critical for mission-critical applications.

The Impact on Cloud Computing

Hyperscale networking has had a profound impact on the world of cloud computing. With hyperscale networks, cloud service providers can deliver scalable and elastic infrastructure to their customers, enabling them to provision resources based on demand rapidly. This scalability and flexibility have transformed how businesses operate, allowing them to focus on innovation and growth without worrying about infrastructure limitations. Furthermore, hyperscale networking has paved the way for the rise of edge computing, bringing computing resources closer to end users and reducing latency.

Challenges and Considerations

While hyperscale networking offers numerous advantages, it is not without its challenges. Managing and securing vast amounts of data flowing through hyperscale networks requires robust monitoring and security measures. Organizations must also consider the cost implications of scaling their network infrastructure, as hyperscale solutions can be resource-intensive. Moreover, ensuring compatibility and seamless integration with existing systems and applications can pose challenges during the transition to hyperscale networking.

Conclusion:

Hyperscale networking is revolutionizing the way we connect, scale, and operate in the digital world. Its ability to seamlessly scale, improve resiliency, and drive innovation has made it a game-changer for businesses across various industries. As technology continues to advance and data demands grow, hyperscale networking will play an increasingly vital role in enabling the next generation of digital transformation.

Optimal Layer 3 Forwarding

Layer 3 forwarding is crucial in ensuring efficient and seamless network data transmission. Optimal Layer 3 forwarding, in particular, is an essential aspect of network architecture that enables the efficient routing of data packets across networks. In this blog post, we will explore the significance of optimal Layer 3 forwarding and its impact on network performance and reliability.

Layer 3 forwarding directs network traffic based on its network layer (IP) address. It operates at the network layer of the OSI model, making it responsible for routing data packets across different networks. Layer 3 forwarding involves analyzing the destination IP address of incoming packets and selecting the most appropriate path for their delivery.
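
Selecting “the most appropriate path” at Layer 3 ultimately comes down to a longest-prefix-match lookup on the destination IP address. A minimal sketch using Python’s standard ipaddress module follows; the route table contents are made up for illustration.

```python
# Minimal longest-prefix-match (LPM) lookup, the core of Layer 3 forwarding.
import ipaddress

route_table = {                      # prefix -> next hop (illustrative values)
    "0.0.0.0/0":   "192.0.2.1",      # default route
    "10.0.0.0/8":  "10.255.0.1",
    "10.1.2.0/24": "10.1.2.254",
}


def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    candidates = [ipaddress.ip_network(p) for p in route_table
                  if dst in ipaddress.ip_network(p)]
    best = max(candidates, key=lambda n: n.prefixlen)   # longest prefix wins
    return route_table[str(best)]


print(next_hop("10.1.2.7"))   # -> 10.1.2.254 (the /24 beats the /8 and the default)
```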

Enhanced Network Performance: Optimal layer 3 forwarding optimizes routing decisions, resulting in faster and more efficient data transmission. It eliminates unnecessary hops and minimizes packet loss, leading to improved network performance and reduced latency.

Scalability: With the exponential growth of network traffic, scalability becomes crucial. Optimal layer 3 forwarding enables networks to handle increasing traffic demands by efficiently distributing packets across multiple paths. This scalability ensures that networks can accommodate growing data loads without compromising on performance.

Load Balancing: Layer 3 forwarding allows for intelligent load balancing by distributing traffic evenly across available network paths. This ensures that no single path becomes overwhelmed with traffic, preventing bottlenecks and optimizing resource utilization.

Implementing Optimal Layer 3 Forwarding

Hardware and Software Considerations: Implementing optimal layer 3 forwarding requires suitable network hardware and software support. It is essential to choose routers and switches that are capable of handling the increased forwarding demands and provide advanced routing protocols.

Configuring Routing Protocols: To achieve optimal layer 3 forwarding, configuring robust routing protocols is crucial. Protocols such as OSPF (Open Shortest Path First) and BGP (Border Gateway Protocol) play a significant role in determining the best path for packet forwarding. Fine-tuning these protocols based on network requirements can greatly enhance overall network performance.

Real-World Use Cases

Data Centers: In data center environments, optimal layer 3 forwarding is essential for seamless communication between servers and networks. It enables efficient load balancing, fault tolerance, and traffic engineering, ensuring high availability and reliable data transfer.

Wide Area Networks (WAN): For organizations with geographically dispersed locations, WANs are the backbone of their communication infrastructure. Optimal layer 3 forwarding in WANs ensures efficient routing of traffic across different locations, minimizing latency and maximizing throughput.

Highlights: Optimal Layer 3 Forwarding

Implementing Optimal Layer 3 Forwarding

a) Choosing the Right Routing Protocol: An appropriate routing protocol, such as OSPF, EIGRP, and BGP, is crucial for implementing optimal layer three forwarding.

b) Network Segmentation: Breaking down large networks into smaller subnets enhances routing efficiency and reduces network complexity.

c) Traffic Engineering: Network operators can further optimize layer three forwarding by leveraging traffic engineering techniques, such as MPLS or segment routing.

What is Routing?

Routing is like a network’s GPS. It involves directing data packets from their source to their destination across multiple networks. Think of it as the process of determining the best possible path for data to travel. Routers, the key devices responsible for routing, use various algorithms and protocols to make intelligent decisions about where to send data packets next.

The Role of Switching

While routing deals with data flow between networks, switching comes into play within a single network. Switches serve as the traffic managers within a local area network (LAN). They connect devices, such as computers, printers, and servers, allowing them to communicate with one another. Switches receive incoming data packets and use MAC addresses to determine which device the data should be forwarded to. This efficient and direct communication within a network makes switching so critical.
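
The MAC-learning behavior described above can be sketched in a few lines: the switch records the source MAC against the ingress port, forwards frames to known destinations, and floods frames to unknown ones. This is a toy illustration, not a vendor implementation.

```python
# Toy Layer 2 learning switch: learn source MACs, forward known destinations,
# flood unknown ones out every other port.
mac_table = {}          # MAC address -> port


def handle_frame(src_mac, dst_mac, in_port, ports):
    mac_table[src_mac] = in_port                 # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]              # forward out the learned port
    return [p for p in ports if p != in_port]    # unknown destination: flood
```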

Diagram: STP port states

The Role of Segmentation

Segmentation is dividing a network into smaller, isolated segments or subnets. Each subnet operates independently, with its own set of rules and configurations. This division allows for better control and management of network traffic, leading to improved performance and security.

Enhanced Performance: Segmenting the network minimizes congestion and bottlenecks, resulting in faster and more reliable data transmission. This ensures that critical applications and services operate smoothly without being affected by other network activities.

 

The Role of Optimal Layer 3 Forwarding:

Optimal Layer 3 forwarding ensures that data packets are transmitted through the most efficient path, improving network performance. It minimizes packet loss, latency, and jitter, enhancing user experience. By selecting the best path, optimal Layer 3 forwarding also enables load balancing, distributing the traffic evenly across multiple links, thus preventing congestion.

Diagram: EIGRP LFA

 

One key challenge in network performance is identifying and resolving bottlenecks. These bottlenecks can occur due to congested network links, outdated hardware, or inefficient routing protocols. Organizations can optimize bandwidth utilization by conducting thorough network assessments and employing intelligent traffic management techniques, ensuring smooth data flow and reduced latency.

GRE and Next Hops

Generic Routing Encapsulation (GRE) is a tunneling protocol that enables the encapsulation of various network protocols within IP packets. It provides a flexible and scalable solution for deploying virtual private networks (VPNs) and connecting disparate networks over an existing IP infrastructure. By encapsulating multiple protocol types, GRE allows for seamless communication between networks, regardless of their underlying technologies. Notice the next hop below is the tunnel interface.

Diagram: GRE configuration

Implementing Quality of Service (QoS) Policies

Implementing quality of service (QoS) policies is essential to prioritizing critical applications and ensuring optimal user experience. QoS allows network administrators to allocate network resources based on application requirements, guaranteeing a minimum level of service for high-priority applications. Organizations can prevent congestion, reduce latency, and deliver consistent performance by classifying and prioritizing traffic flows.

Leveraging Load Balancing Techniques

  • Load Balancing: Distributing traffic across multiple paths optimizes resource utilization and prevents bottlenecks.

Load balancing is crucial in distributing network traffic across multiple servers or links, optimizing resource utilization, and preventing overload. Organizations can achieve better network performance, fault tolerance, and enhanced scalability by implementing intelligent load-balancing algorithms. Load balancing techniques, such as round-robin, least connections, or weighted distribution, ensure efficient utilization of network resources.

Example: EIGRP configuration

EIGRP is an advanced distance-vector routing protocol developed by Cisco Systems. It is known for its fast convergence, efficient bandwidth use, and support for IPv4 and IPv6 networks. Unlike traditional distance-vector protocols, EIGRP utilizes a more sophisticated Diffusing Update Algorithm (DUAL) to determine the best path to a destination. This enables networks to adapt quickly to changes and ensures optimal routing efficiency.

EIGRP load balancing enables routers to distribute traffic among multiple paths, maximizing the utilization of available resources. It is achieved through the equal-cost multipath (ECMP) mechanism, which allows for the simultaneous use of multiple routes with equal metrics. By leveraging ECMP, EIGRP load balancing enhances network reliability, minimizes congestion, and improves overall performance.

Diagram: EIGRP configuration

Enhanced Network Scalability

Improved Scalability: Optimal layer 3 forwarding allows networks to scale seamlessly, accommodating growing traffic demands while maintaining high performance.

Network scalability refers to a network’s capacity to grow and adapt to changing requirements without sacrificing performance or efficiency. It involves handling increased traffic, data volume, and user demands. Traditional networks often struggle with scalability, leading to bottlenecks, slow connections, and compromised user experiences.

Enhanced network scalability brings numerous benefits to businesses and organizations. Firstly, it allows for seamless expansion as the network can accommodate additional devices, users, and data traffic without performance degradation. This scalability fosters growth and supports future business needs. Secondly, enhanced scalability improves reliability and uptime. With redundant systems and load-balancing capabilities, network failures can be minimized, ensuring uninterrupted operations. Lastly, scalability enables efficient resource allocation, optimizing network resources and reducing costs.

Technologies Driving Enhanced Network Scalability

Several technologies play a crucial role in achieving enhanced network scalability. Software-defined networking (SDN) allows centralized network management, flexibility, and simplified configuration. Network function virtualization (NFV) decouples network functions from physical devices, enabling greater scalability and agility. Cloud computing and virtualization also contribute to enhanced scalability by providing on-demand resources and elastic capacity.

Challenges and Considerations

Security: Layer 3 forwarding introduces potential security risks involving routing packets across different networks. Implementing robust security measures, such as access control lists (ACLs) and firewall policies, is essential to protect against unauthorized access and network attacks.

Diagram: Stateful inspection firewall

Network congestion: In complex network environments, layer 3 forwarding can lead to congestion if not correctly managed. Network administrators must carefully monitor and analyze traffic patterns to proactively address congestion issues and optimize routing decisions.

Example: Arista with Large Layer-3 Multipath

Arista EOS supports hardware for Leaf ( ToR ), Spine, and Spline data center design layers. Its wide product range supports significant layer-3 multipath ( 16 – 64-way ECMP ) with excellent optimal Layer 3-forwarding technologies. Unfortunately, multi-protocol Label Switching ( MPLS ) is limited to static MPLS labels, which could become an operational nightmare. Currently, no Fibre Channel over Ethernet ( FCoE ) support exists.

Arista supports massive Layer 2 multipath with Multichassis Link Aggregation ( MLAG ). Validated designs with Arista Core 7508 switches ( offering 768 10GE ports ) and Arista Leaf 7050S-64 switches support over 1980 x 10GE server ports with 1:2.75 oversubscription. That’s a lot of 10GE ports. Do you think layer 2 domains should be designed to that scale?

Related: Before you proceed, you may find the following helpful:

  1. Scaling Load Balancers
  2. Virtual Switch
  3. Data Center Network Design
  4. Layer-3 Data Center
  5. What Is OpenFlow

 



Optimal Layer 3 Forwarding

Key Optimal Layer 3 Forwarding Discussion Points:


  • Introduction to optimal layer 3 forwarding and what is involved.

  • Highlighting the details of using deep buffers.

  • Critical points on the use case of Arista and virtual ARP.

  • Technical details on load balancing enhancements and LACP fallback.

  • Technical details on Direct Server Return and detecting server failures.

Back to Basics: Router operation and IP forwarding

Every IP host in a network is configured with its IP address and mask and the IP address of the default gateway. If the host wants to send traffic to a destination address that does not belong to a subnet to which the host is directly attached, it passes the packet to the default gateway, which is a Layer 3 router.

The Role of The Default Gateway 

A standard misconception is how the address of the default gateway is used. People mistakenly believe that when a packet is sent to the Layer 3 default router, the sending host sets the destination address in the IP packet as the default gateway router address. However, if this were the case, the router would consider the packet addressed to itself and not forward it any further. So why configure the default gateway’s IP address?

First, the host uses the Address Resolution Protocol (ARP) to find the specified router’s Media Access Control (MAC) address. Then, having acquired the router’s MAC address, the host sends the packets directly to it as data-link unicast frames.
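
The host’s first-hop decision can be sketched as follows. The destination IP in the packet never changes; what changes is which IP the host ARPs for when building the Ethernet frame (the destination itself if on-link, otherwise the default gateway). The subnet, gateway address, and arp_resolve helper below are illustrative assumptions.

```python
# Sketch of a host's first-hop decision. The IP destination stays the final
# destination; only the ARP target (and therefore the frame's destination MAC)
# changes when the default gateway is used.
import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("192.168.1.0/24")   # host's subnet (example)
DEFAULT_GW = ipaddress.ip_address("192.168.1.1")        # default gateway (example)


def frame_for(dst_ip, arp_resolve):
    dst = ipaddress.ip_address(dst_ip)
    arp_target = dst if dst in LOCAL_SUBNET else DEFAULT_GW
    return {
        "eth_dst": arp_resolve(arp_target),  # MAC of the neighbor or of the gateway
        "ip_dst": str(dst),                  # always the real final destination
    }
```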

Benefits of Optimal Layer 3 Forwarding:

1. Enhanced Scalability: Optimal Layer 3 forwarding allows networks to scale effectively by efficiently handling a growing number of connected devices and increasing traffic volumes. It enables seamless expansion without compromising network performance.

2. Improved Network Resilience: Optimized Layer 3 forwarding enhances network resilience by selecting the most efficient path for data packets. It enables networks to quickly adapt to network topology or link failure changes, rerouting traffic to ensure uninterrupted connectivity.

3. Better Resource Utilization: Optimal Layer 3 forwarding optimizes resource utilization by distributing traffic across multiple links. This enables efficient utilization of available network capacity, reducing the risk of bottlenecks and maximizing the network’s throughput.

4. Enhanced Security: Optimal Layer 3 forwarding contributes to network security by ensuring traffic is directed through secure paths. It also enables the implementation of firewall policies and access control lists, protecting the network from unauthorized access and potential security threats.

Implementing Optimal Layer 3 Forwarding:

To achieve optimal Layer 3 forwarding, various technologies and protocols are utilized, such as:

1. Routing Protocols: Dynamic routing protocols, such as OSPF (Open Shortest Path First) and BGP (Border Gateway Protocol), enable networks to exchange routing information automatically and determine the best path for data packets.

2. Quality of Service (QoS): QoS mechanisms prioritize network traffic, ensuring that critical applications receive the necessary bandwidth and reducing the impact of congestion.

3. Network Monitoring and Analysis: Continuous network monitoring and analysis tools provide real-time visibility into network performance, enabling administrators to identify and resolve potential issues promptly.

Arista deep buffers: Why are they important?

A vital switch table you need to be concerned with for large Layer 3 networks is the Address Resolution Protocol ( ARP ) table. When ARP tables become full and packets arrive for a destination ( next hop ) that isn’t cached, the network will experience flooding and suffer performance problems.

Arista Spine switches have deep buffers, ideal for bursty- and latency-sensitive environments. They are also perfect when you have little knowledge of the application traffic matrix, as they can handle most types efficiently.

Finally, deep buffers are most useful in spine layers, as traffic concentration occurs there. If you are concerned that ToR switches do not have enough buffers, physically connect servers to chassis-based switches in the Core / Spine layer.

Knowledge Check: Cisco PfR

Understanding Cisco PfR

Cisco PfR, also known as Cisco Performance Routing, is an intelligent network optimization technology that dynamically manages traffic flows to ensure optimal performance. It combines sophisticated algorithms, real-time monitoring, and path selection to intelligently route network traffic, leveraging multiple paths and network resources.

Diagram: Performance-based routing

The Benefits of Cisco PfR

Enhanced Network Resilience and Redundancy

By continuously monitoring network conditions and dynamically adapting to changes, Cisco PfR ensures network resilience. It automatically reroutes traffic when network congestion, link failures, or other performance issues occur, minimizing downtime and disruptions.

Improved Application Performance

With Cisco PfR, network traffic is intelligently distributed across multiple paths based on application requirements. This optimization ensures critical applications receive the necessary bandwidth and low latency, enhancing the overall user experience.

Cost-Efficient Bandwidth Utilization

By intelligently distributing traffic across available network resources, Cisco PfR optimizes bandwidth utilization. It can dynamically choose the path with the lowest cost or least congestion, resulting in significant cost savings for organizations.

Optimal layer 3 forwarding  

Every data center has some mix of Layer 2 bridging and Layer 3 forwarding. The design selected depends on the Layer 2 / Layer 3 boundaries. Data centers that use MAC-over-IP usually have Layer 3 boundaries on the ToR switch, while fully virtualized data centers require large Layer 2 domains ( for VM mobility ), with VLANs spanning Core or Spine layers.

Either of these designs can result in suboptimal traffic flow. Layer 2 forwarding in ToR switches and layer 3 forwarding in Core may result in servers in different VLANs connected to the same ToR switches being hairpinned to the closest Layer 3 switch.

Solutions that offer optimal Layer 3 forwarding in the data center have been available. These include stacking ToR switches, architectures that present the whole fabric as a single Layer 3 element ( Juniper QFabric ), and controller-based architectures ( NEC’s ProgrammableFlow ). While these solutions may suffice for some business requirements, they don’t provide optimal Layer 3 forwarding across the whole data center while using sets of independent devices.

Arista Virtual ARP does this. All ToR switches share the same IP and MAC with a common VLAN. Configuration involves the same first-hop gateway IP address on a VLAN for all ToR switches and mapping the MAC address to the configured shared IP address. The design ensures optimal Layer 3 forwarding between two ToR endpoints and optimal inbound traffic forwarding.

Diagram: Optimal VARP Deployment

Load balancing enhancements

Arista 7150 is an ultra-low latency 10GE switch ( 350 – 380 ns ). It offers load-balancing enhancements other than the standard 5-tuple mechanism. Arista supports new load-balancing profiles. Load-balancing profiles allow you to decide what bit and byte of the packet you want to use as the hash for the load-balancing mechanism—offering more scope and granularity than the traditional 5-tuple mechanism. 
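
The standard 5-tuple mechanism that these profiles extend can be sketched as a hash over the flow’s identifying fields, taken modulo the number of equal-cost links. The hash function below is purely illustrative; real switches use hardware hash polynomials and seed values.

```python
# Illustrative 5-tuple ECMP hashing: all packets of one flow hash to the same
# link, so per-flow packet ordering is preserved while flows spread across paths.
import hashlib


def pick_link(src_ip, dst_ip, proto, src_port, dst_port, links):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(links)
    return links[index]


print(pick_link("10.0.0.1", "10.0.1.9", 6, 49152, 443,
                ["uplink1", "uplink2", "uplink3", "uplink4"]))
```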

LACP fallback

With traditional Link Aggregation ( LAG ), LAG is enabled after receiving the first LACP packet. This is because the physical interfaces are not operational and are down / down before receiving LACP packets. This is viable and perfectly OK unless you need auto-provisioning. What does LACP fallback mean?

If you don’t receive an LACP packet and LACP fallback is configured, one of the links will still become active and will be UP / UP. Continue using the Bridge Protocol Data Unit ( BPDU ) guard on those ports, as you don’t want a switch to bridge between two ports and create a forwarding loop.

 

Direct server return

The 7050 series supports Direct Server Return. The load balancer in the forwarding path does not do NAT. Implementation includes configuring the VIP on the load balancer’s outside interface and on the internal servers’ loopback interfaces. It is essential not to configure the same IP address on server LAN interfaces, as ARP replies will clash. The load balancer sends the packet unmodified to the server, and the server sends it straight to the client.

Standard Direct Server Return requires Layer 2 adjacency between the load balancer and the servers because the load balancer forwards to the server’s MAC address. A variant called Direct Server Return IP-in-IP uses IP encapsulation instead and only requires Layer 3 connectivity between the load balancer and the servers.

Arista 7050 IP-in-IP Tunnel supports essential load balancing, so one can save the cost of not buying an external load-balancing device. However, it’s a scaled-down model, and you don’t get the advanced features you might have with Citrix or F5 load balancers.

Link flap detection

Networks have a variety of link flaps. Networks can experience fast and regular flapping; sometimes, you get irregular flapping. Arista has a generic mechanism to detect flaps so you can create flap profiles that offer more granularity to flap management. Flap profiles can be configured on individual interfaces or globally. It is possible to have multiple profiles on one interface.

Detecting failed servers

The problem is that with scale-out applications, you need to detect server failures. When no load balancer appliance exists, this has to be done with application-level keepalives or, even worse, Transmission Control Protocol ( TCP ) timeouts. A TCP timeout could take minutes. Arista uses Rapid Indication of Link Loss ( RAIL ) to improve performance. RAIL improves the convergence time of TCP-based scale-out applications.

OpenFlow support

Arista can match 750 complete entries or 1,500 Layer 2 match entries, which would be destination MAC addresses. It can’t match IPv6, ARP codes, or fields inside ARP packets, which are part of OpenFlow 1.0. This limited support enables only VLAN or Layer 3 forwarding. If matching on Layer 3 forwarding, match either the source or destination IP address and rewrite the Layer 2 destination address to the next hop.

Arista offers a VLAN bind mode, configuring a certain set of VLANs to belong to OpenFlow and another set of VLANs to belong to standard Layer 3 forwarding. This OpenFlow implementation is known as “ships in the night.”

Arista also supports a monitor mode. Monitor mode is regular forwarding with OpenFlow on top of it. Instead of letting the OpenFlow controller program forwarding entries, the entries are programmed by traditional means via Layer 2 or Layer 3 routing protocol mechanisms. OpenFlow processing is used in parallel with conventional routing; OpenFlow then copies packets to SPAN ports, offering granular monitoring capabilities.

DirectFlow

DirectFlow example: suppose you want all traffic from source A to destination A to go through the standard path, but any HTTP traffic to go via a firewall for inspection. You set the output interface to X for port 80 traffic, add a similar entry for the return path, and now only port-80 traffic goes to the firewall.

DirectFlow offers the same functionality as OpenFlow but without the central controller piece. Forwarding entries can be configured through the CLI or a REST API, and DirectFlow is typically used for Traffic Engineering ( TE ) or symmetrical ECMP. It is easy to implement because you don't need a controller; you just use the API available in EOS to configure the flows, as sketched below.
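As a rough illustration, the sketch below pushes a DirectFlow-style entry to a switch using the EOS eAPI (JSON-RPC over HTTPS). The endpoint and runCmds structure follow standard eAPI behaviour, but the hostname, credentials, and the DirectFlow match/action commands themselves are illustrative assumptions; verify the exact syntax against the EOS documentation for your release.

```python
# Hedged sketch: pushing a DirectFlow entry to an Arista switch via eAPI.
# The /command-api endpoint and "runCmds" method are standard eAPI behaviour;
# the flow-definition CLI below is illustrative only.
import requests

URL = "https://switch1.example.com/command-api"   # hypothetical switch

directflow_config = [
    "enable",
    "configure",
    "directflow",
    "no shutdown",
    "flow redirect-http",                 # illustrative flow name and matches
    "match ethertype ip",
    "match destination port 80",
    "action output interface Ethernet10",
]

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": directflow_config, "format": "json"},
    "id": 1,
}

resp = requests.post(URL, json=payload, auth=("admin", "admin"), verify=False)  # lab only
resp.raise_for_status()
print(resp.json())
```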

Optimal Layer 3 Forwarding: Final Points

Optimal Layer 3 forwarding is a critical network architecture component that significantly impacts network performance, scalability, and reliability. Efficiently routing data packets through the best paths enhances network resilience, resource utilization, and security.

Implementing optimal Layer 3 forwarding through routing protocols, QoS mechanisms, and network monitoring ensures a robust and efficient network infrastructure. Embracing this technology allows organizations to deliver seamless connectivity and a superior user experience in today’s increasingly interconnected world.

Summary: Optimal Layer 3 Forwarding

In today’s rapidly evolving networking world, achieving efficient, high-performance routing is paramount. Layer 3 forwarding is crucial in this process, enabling seamless communication between different networks. This blog post delved into optimal layer 3 forwarding, exploring its significance, benefits, and implementation strategies.

Understanding Layer 3 Forwarding

Layer 3 forwarding, also known as IP forwarding, is the process of forwarding network packets at the network layer of the OSI model. It involves making intelligent routing decisions based on IP addresses, enabling data to travel across different networks efficiently. By understanding the fundamentals of layer 3 forwarding, we can unlock its full potential.

The Significance of Optimal Layer 3 Forwarding

Optimal layer 3 forwarding is crucial in modern networking architectures. It ensures packets are forwarded through the most efficient path, minimizing latency and maximizing throughput. With exponential data traffic growth, optimizing layer 3 forwarding becomes essential to support demanding applications and services.

Strategies for Achieving Optimal Layer 3 Forwarding

There are several strategies and techniques that network administrators can employ to achieve optimal layer 3 forwarding. These include:

1. Load Balancing: Distributing traffic across multiple paths to prevent congestion and utilize available network resources efficiently.

2. Quality of Service (QoS): Implementing QoS mechanisms to prioritize certain types of traffic, ensuring critical applications receive the necessary bandwidth and low latency.

3. Route Optimization: Utilizing advanced routing protocols and algorithms to select the most efficient paths based on real-time network conditions.

4. Network Monitoring and Analysis: Deploying monitoring tools to gain insights into network performance, identify bottlenecks, and make informed decisions for optimal forwarding.

Benefits of Optimal Layer 3 Forwarding

By implementing optimal layer 3 forwarding techniques, network administrators can unlock a range of benefits, including:

– Enhanced network performance and reduced latency, leading to improved user experience.

– Increased scalability and capacity to handle growing network demands.

– Improved utilization of network resources, resulting in cost savings.

– Better resiliency and fault tolerance, ensuring uninterrupted network connectivity.

Conclusion:

Optimal layer 3 forwarding holds the key to unlocking modern networking’s true potential. Organizations can stay at the forefront of network performance and deliver seamless connectivity to their users by understanding its significance, implementing effective strategies, and reaping its benefits.

What is OpenFlow

What is OpenFlow

What is OpenFlow?

In today's rapidly evolving digital landscape, network management and data flow control have become critical for businesses of all sizes. OpenFlow is one technology that has gained significant attention and is transforming how networks are managed. In this blog post, we will delve into the concept of OpenFlow, its advantages, and its implications for network control.

OpenFlow is an open-standard communications protocol that separates the control and data planes in a network architecture. It allows network administrators to have direct control over the behavior of network devices, such as switches and routers, by utilizing a centralized controller.

Traditional network architectures follow a closed model, where network devices make independent decisions on forwarding packets. On the other hand, OpenFlow introduces a centralized control plane that provides a global view of the network and allows administrators to define network policies and rules from a centralized location.

OpenFlow operates by establishing a secure channel between the centralized controller and the network switches. The controller is responsible for managing the flow tables within the switches, defining how traffic should be forwarded based on predefined rules and policies. This separation of control and data planes allows for dynamic network management and facilitates the implementation of innovative network protocols.

One of the key advantages of OpenFlow is its ability to simplify network management. By centralizing control, administrators can easily configure and manage the entire network from a single point of control. This reduces complexity and enhances the scalability of network infrastructure. Additionally, OpenFlow enables network programmability, allowing for the development of custom networking applications and services tailored to specific requirements.

OpenFlow plays a crucial role in network virtualization, as it allows for the creation and management of virtual networks on top of physical infrastructure. By abstracting the underlying network, OpenFlow empowers organizations to optimize resource utilization, improve security, and enhance network performance. It opens doors to dynamic provisioning, isolation, and efficient utilization of network resources.

Highlights: What is OpenFlow?

How does OpenFlow work?

OpenFlow allows network controllers to determine the path of network packets across a network of switches. The controllers are distinct from the switches. This separation of control from forwarding allows traffic management that is more sophisticated than access control lists (ACLs) and routing protocols can provide. The OpenFlow protocol also allows switches from different vendors, often with proprietary interfaces and scripting languages, to be managed remotely. OpenFlow's inventors consider it an enabler of software-defined networking (SDN).

With OpenFlow, Layer 3 switches can add, modify, and remove packet-matching rules and actions remotely. Routing decisions can be made periodically or ad hoc by the controller and translated into rules and actions with a configurable lifespan, which are then deployed to the switch's flow table, where packets are forwarded at wire speed for the duration of the rule. Packets the switch cannot match are sent to the controller, which can modify existing flow table rules or deploy new ones to prevent a structural flow of traffic between switch and controller. It may even forward the traffic itself, provided the switch has been instructed to forward whole packets rather than just their headers.

OpenFlow runs over Transmission Control Protocol (TCP), typically secured with Transport Layer Security (TLS). Controllers listen on TCP port 6653 for switches that want to set up a connection; earlier versions of OpenFlow unofficially used port 6633. The protocol is used mainly between switches and controllers.
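To make the flow-table model just described concrete, here is a minimal Python sketch (not any real switch implementation): the highest-priority matching entry wins, its actions are applied, and a miss results in a packet-in to the controller. All field names and values are illustrative.

```python
# Toy model of an OpenFlow-style flow table: priority-ordered match/action
# entries with a configurable lifespan, and a table miss punted to the controller.
from dataclasses import dataclass


@dataclass
class FlowEntry:
    priority: int
    match: dict            # e.g. {"tcp_dst": 80}
    actions: list          # e.g. ["output:2"] or ["drop"]
    idle_timeout: int = 0  # the configurable lifespan mentioned above


class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, entry):
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet):
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["packet-in to controller"]  # table miss


table = FlowTable()
table.add(FlowEntry(priority=100, match={"tcp_dst": 80}, actions=["output:2"], idle_timeout=30))
table.add(FlowEntry(priority=200, match={"eth_src": "de:ad:be:ef:00:01"}, actions=["drop"]))

print(table.lookup({"tcp_dst": 80}))   # ['output:2']
print(table.lookup({"udp_dst": 53}))   # ['packet-in to controller']
```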

Introducing SDN

Recent changes and requirements have driven networks and network services to become more flexible, virtualization-aware, and API-driven. One major trend affecting the future of networking is software-defined networking ( SDN ). The software-defined architecture aims to abstract the entire network into a single logical switch.

Software-defined networking (SDN) is an evolving technology defined by the Open Networking Foundation ( ONF ). It involves the physical separation of the network control plane from the forwarding plane, where a control plane controls several devices. This differs significantly from traditional IP forwarding that you may have used in the past.

The activities around OpenFlow

Even though OpenFlow has received a lot of industry attention, programmable networks and control planes (control logic) decoupled from data planes have been around for many years. To enhance the openness, extensibility, and programmability of ATM, Internet, and mobile networks, the Open Signaling (OPENSIG) working group held workshops starting in 1995. Based on these ideas, a working group within the Internet Engineering Task Force (IETF) developed the General Switch Management Protocol (GSMP) to control label switches. The group officially ended in June 2002, and GSMPv3 was published.


Data and control plane

Therefore, SDN separates the data and control planes. The main driving body behind software-defined networking (SDN) is the Open Networking Foundation ( ONF ). Launched in 2011, the ONF is a non-profit organization that aims to provide an alternative to proprietary solutions that limit flexibility and create vendor lock-in.

The formation of the ONF allowed its members to run proofs of concept on heterogeneous networking devices without requiring vendors to expose the internal code of their software. This creates a path for an open-source approach to networking and policy-based controllers.

Building blocks: SDN Environment 

The fundamental building blocks of an SDN deployment are the controller, the SDN switch (for example, an OpenFlow switch), and the interfaces the controller uses to communicate with forwarding devices: generally a southbound interface (OpenFlow) and a northbound interface (the network application interface). In an SDN, switches function as basic forwarding hardware, accessible via an open interface, with the control logic and algorithms offloaded to controllers. OpenFlow switches come in two varieties: hybrid (OpenFlow-enabled) and pure (OpenFlow-only).

Pure OpenFlow switches rely entirely on a controller for forwarding decisions and have no legacy features or onboard control. Hybrid switches support OpenFlow in addition to traditional operation and protocols, and today they are the most common type of commercial switch. In either case, a flow table performs packet lookup and forwarding in an OpenFlow switch.

You may find the following useful for pre-information:

  1. OpenFlow Protocol
  2. Network Traffic Engineering
  3. What is VXLAN
  4. SDN Adoption Report
  5. Virtual Device Context

Identify the Benefits of OpenFlow

Key What is OpenFlow Discussion Points:


  • Introduction to what is OpenFlow and what is involved with the protocol.

  • Highlighting the details and benefits of OpenFlow.

  • Technical details on the lack of session layers in the TCP/IP model.

  • Scenario: Control and data plane separation with SDN. 

  • A final note on proactive vs reactive flow setup.

Back to basics. What is OpenFlow?

What is OpenFlow?

OpenFlow was the first protocol of the Software-Defined Networking (SDN) trend, and it allows a network device's control plane to be decoupled from its data plane. In the most straightforward terms, the control plane can be thought of as the brains of a network device. The data plane, on the other hand, can be considered the hardware or application-specific integrated circuits (ASICs) that perform packet forwarding.

Numerous devices also support running OpenFlow in a hybrid mode, meaning OpenFlow can be deployed on a given port, virtual local area network (VLAN), or even within a regular packet-forwarding pipeline such that if there is not a match in the OpenFlow table, then the existing forwarding tables (MAC, Routing, etc.) are used, making it more analogous to Policy Based Routing (PBR).

Diagram: What is OpenFlow? Source: Cable Solutions.

What is SDN?

Traditional network technologies have existed since the inception of networking, despite various modifications to the underlying architecture and devices (switches, routers, and firewalls). Frames and packets have been forwarded and routed using broadly the same approach, resulting in limited efficiency and high maintenance costs. Consequently, the architecture and operation of networks needed to evolve, and SDN was the result.

By enabling network programmability, SDN promises to simplify network control and management and allow innovation in computer networking. Network engineers configure policies to respond to various network events and application scenarios. They can achieve the desired results by manually converting high-level policies into low-level configuration commands.

Often, minimal tools are available to accomplish these very complex tasks. Controlling network performance and tuning network management are challenging and error-prone tasks.

A traditional network architecture consists of a control plane, a data plane, and a management plane, with the control and data planes merged inside a single device ("inside the box"). To overcome these limitations, programmable networks have emerged.

How OpenFlow Works:

At the core of OpenFlow is the concept of a flow table, which resides in each OpenFlow-enabled switch. The flow table contains match-action rules defining how incoming packets should be processed and forwarded. The centralized controller determines these rules and communicates with the switches using the OpenFlow protocol.

When a packet arrives at an OpenFlow-enabled switch, it is first matched against the rules in the flow table. If a match is found, the corresponding action is executed, including forwarding the packet, dropping it, or sending it to the controller for further processing. This decoupling of the control and data planes allows for flexible and programmable network management.
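If it helps to see this in running code, below is a minimal, hedged sketch of a reactive controller built with the open-source Ryu framework (an assumption; the post does not prescribe a controller). It installs a table-miss entry so unmatched packets generate packet-in events, then programs a flow in response to each packet-in. The port-forwarding policy is purely illustrative. Such an app would typically be launched with `ryu-manager minimal_controller.py`.

```python
# Hedged sketch of an OpenFlow 1.3 controller app using Ryu (pip install ryu).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss entry: send unmatched packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0, match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = ev.msg.match['in_port']
        # Illustrative policy: whatever arrives on port 1 gets a flow towards port 2.
        out_port = 2 if in_port == 1 else 1
        match = parser.OFPMatch(in_port=in_port)
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10, match=match,
                                      instructions=inst, idle_timeout=30))
```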

What is OpenFlow SDN?

The main goal of SDN is to separate the control and data planes and transfer network intelligence and state to the control plane. These concepts have been exploited by technologies like Routing Control Platform (RCP), Secure Architecture for Network Enterprise (SANE), and, more recently, Ethane.

In addition, there is often a connection between SDN and OpenFlow. The Open Networking Foundation (ONF) is responsible for advancing SDN and standardizing OpenFlow, whose latest version is 1.5.0.

  • An SDN deployment starts with these building blocks.

An SDN deployment is built from the SDN switch (for example, an OpenFlow switch), the SDN controller, and the interfaces the controller uses to communicate with forwarding devices: a southbound interface (OpenFlow) and a northbound interface (the network application interface).

As the control logic and algorithms are offloaded to a controller, switches in SDNs may be represented as basic forwarding hardware. Switches that support OpenFlow come in two varieties: pure (OpenFlow-only) and hybrid (OpenFlow-enabled).

Pure OpenFlow switches do not have legacy features or onboard control for forwarding decisions. A hybrid switch can operate with both traditional protocols and OpenFlow. Hybrid switches make up the majority of commercial switches available today. A flow table performs packet lookup and forwarding in an OpenFlow switch.

OpenFlow reference switch

The OpenFlow protocol and interface allow OpenFlow switches to be accessed as essential forwarding elements. A flow-based SDN architecture like OpenFlow simplifies switching hardware. Still, it may require additional forwarding tables, buffer space, and statistical counters that are difficult to implement in traditional switches with integrated circuits tailored to specific applications.

There are two types of switches in an OpenFlow network: hybrid (OpenFlow-enabled) and pure (OpenFlow-only). Hybrid switches support OpenFlow alongside traditional L2/L3 protocols, whereas pure OpenFlow switches rely entirely on a controller for forwarding decisions and have no legacy features or onboard control.

Hybrid switches make up the majority of switches currently on the market. OpenFlow switches are controlled over an open interface (a TLS session over TCP), and this link must remain active and secure. OpenFlow is a messaging protocol that defines communication between OpenFlow switches and controllers and can be viewed as an implementation of SDN-based controller-switch interactions.

Diagram: OpenFlow switch. Source: Cable Solutions.

Identify the Benefits of OpenFlow

  • Application-driven routing: users can control the network paths.

  • A way to enhance link utilization.

  • An open solution for VM mobility with no reliance on VLANs.

  • A means to traffic engineer without MPLS.

  • A solution to build very large Layer 2 networks.

  • A way to scale firewalls and load balancers.

  • A way to configure an entire network as a whole, as opposed to individual entities.

  • A way to build your own off-the-box encryption solution.

  • A way to distribute policies from a central controller.

  • Customized flow forwarding based on a variety of bit patterns.

  • A solution to get a global, end-to-end view of the network and its state.

  • A solution to use commodity switches in the network, with massive cost savings.

The following lists the Software-Defined Networking ( SDN ) benefits alongside the problems encountered with the existing control plane architecture:

Identify the benefits of OpenFlow and SDN:

  • Faster software deployment.
  • Programmable network elements.
  • Faster provisioning.
  • Centralized intelligence with centralized controllers.
  • Decisions based on end-to-end visibility.
  • Granular control of flows.
  • Decreased dependence on network appliances such as load balancers.

Problems with the existing approach:

  • Large-scale provisioning and orchestration.
  • Limited traffic engineering ( MPLS TE is cumbersome ).
  • Synchronized distribution of policies.
  • Routing of large elephant flows.
  • QoS and load-based forwarding models.
  • Ability to scale with VLANs.

  • A key point: The lack of a session layer in the TCP/IP stack.

Regardless of the hype and benefits of SDN, neither OpenFlow nor other SDN technologies address the real problem: the lack of a session layer in the TCP/IP protocol stack. The issue is that the client's application ( Layer 7 ) connects to the server's IP address ( Layer 3 ), so if you want persistent sessions, the server's IP address must remain reachable.

This session’s persistence and the ability to connect to multiple Layer 3 addresses to reach the same device is the job of the OSI session layer. The session layer provides the services for opening, closing, and managing a session between end-user applications. In addition, it allows information from different sources to be correctly combined and synchronized.

The problem is that the TCP/IP reference model does not include a session layer, and there is none in the TCP/IP protocol stack. SDN does not solve this; it gives you different tools to implement today's kludges.

Diagram: What is OpenFlow? The lack of a session layer.

Control and data plane

When we identify the benefits of OpenFlow, let us first examine traditional networking operations. Traditional networking devices have a control and forwarding plane, depicted in the diagram below. The control plane is responsible for setting up the necessary protocols and controls so the data plane can forward packets, resulting in end-to-end connectivity. These roles are shared on a single device, and the fast packet forwarding ( data path ) and the high-level routing decisions ( control path ) occur on the same device.

Diagram: SDN separates the data and control planes.

Control plane

The control plane is part of the router architecture and is responsible for drawing the network map in routing. When we mention control planes, you usually think about routing protocols, such as OSPF or BGP. But in reality, the control plane protocols perform numerous other functions, including:

Connectivity management ( BFD, CFM )

Interface state management ( PPP, LACP )

Service provisioning ( RSVP for IntServ or MPLS TE )

Topology and reachability information exchange ( IP routing protocols, IS-IS in TRILL/SPB )

Adjacent device discovery via HELLO mechanism

ICMP

Control plane protocols run over data plane interfaces to ensure “shared fate” – if the packet forwarding fails, the control plane protocol fails as well.

Most control plane protocols ( BGP, OSPF, BFD ) are not data-driven. A BGP or BFD packet is never sent as a direct response to a data packet. There is a question mark over the validity of ICMP as a control plane protocol. The debate is whether it should be classed in the control or data plane category.

Some ICMP packets are sent as replies to other ICMP packets, and others are triggered by data plane packets, i.e., data-driven. My view is that ICMP is a control plane protocol that is triggered by data plane activity. After all, the “C” in ICMP does stand for “Control.”

Data plane

The data path is part of the routing architecture that decides what to do when a packet is received on its inbound interface. It is primarily focused on forwarding packets but also includes the following functions:

ACL logging

 Netflow accounting

NAT session creation

NAT table maintenance

The data forwarding is usually performed in dedicated hardware, while the additional functions ( ACL logging, Netflow accounting ) usually happen on the device CPU, commonly known as “punting.” The data plane for an OpenFlow-enabled network can take a few forms.

However, the most common, even in commercial offerings, is Open vSwitch, often referred to as OVS. Open vSwitch is an open-source implementation of a distributed virtual multilayer switch. It provides a switching stack for virtualization environments while supporting multiple protocols and standards.
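As a small illustration of how flow entries reach an OVS data plane, the sketch below shells out to the standard ovs-ofctl utility; the bridge name, port numbers, and HTTP-redirect policy are assumptions for the example, and Open vSwitch is assumed to be installed with a bridge already created.

```python
# Hedged sketch: programming OVS flow entries from Python via ovs-ofctl.
import subprocess

BRIDGE = "br0"  # hypothetical bridge name

flows = [
    # Send TCP/80 traffic arriving on port 1 out of port 3 (e.g. towards a firewall).
    "priority=100,tcp,in_port=1,tp_dst=80,actions=output:3",
    # Everything else follows the normal L2/L3 pipeline.
    "priority=10,actions=normal",
]

for flow in flows:
    subprocess.run(["ovs-ofctl", "add-flow", BRIDGE, flow], check=True)

# Inspect the installed entries.
print(subprocess.run(["ovs-ofctl", "dump-flows", BRIDGE],
                     capture_output=True, text=True, check=True).stdout)
```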

Identify the benefits of OpenFlow

Software-defined networking changes the control and data plane architecture.

The concept of SDN separates these two planes, i.e., the control and forwarding planes are decoupled. This allows the networking devices in the forwarding path to focus solely on packet forwarding. An out-of-band network uses a separate controller ( orchestration system ) to set up the policies and controls. Hence, the forwarding plane has the correct information to forward packets efficiently.

In addition, it allows the network control plane to be moved to a centralized controller on a server instead of residing on the same box carrying out the forwarding. Moving the intelligence ( control plane ) of the data plane network devices to a controller enables companies to use low-cost, commodity hardware in the forwarding path. A significant benefit is that SDN separates the data and control plane, enabling new use cases.

A centralized computation and management plane makes more sense than a centralized control plane.

The controller maintains a view of the entire network and communicates via OpenFlow ( or, in some cases, BGP with BGP-based SDN ) with the different types of OpenFlow-enabled network devices. The data path portion remains on the switch, such as the OVS bridge, while the high-level decisions are moved to a separate controller. The data path presents a clean flow table abstraction, and each flow table entry contains a set of packet fields to match and the resulting actions ( drop, redirect, send-out-port ).

When an OpenFlow switch receives a packet it has never seen before and doesn’t have a matching flow entry, it sends the packet to the controller for processing. The controller then decides what to do with the packet.

Applications could then be developed on top of this controller, performing security scrubbing, load balancing, traffic engineering, or customized packet forwarding. The centralized view of the network simplifies problems that were harder to overcome with traditional control plane protocols.

A single controller could potentially manage all OpenFlow-enabled switches. Instead of individually configuring each switch, the controller can push down policies to multiple switches simultaneously—a compelling example of many-to-one virtualization.

Now that SDN separates the data and control plane, the operator uses the centralized controller to choose the correct forwarding information per-flow basis. This allows better load balancing and traffic separation on the data plane. In addition, there is no need to enforce traffic separation based on VLANs, as the controller would have a set of policies and rules that would only allow traffic from one “VLAN” to be forwarded to other devices within that same “VLAN.”

The advent of VXLAN

With the advent of VXLAN, which allows up to 16 million logical entities, the benefits of SDN should not be purely associated with overcoming VLAN scaling issues; VXLAN already does an excellent job of that. It does make sense to deploy a centralized control plane in smaller independent islands; in my view, it should be at the edge of the network for security and policy enforcement roles. Using OpenFlow on one or more edge devices is easy to implement and scale.

It also decreases the impact of controller failure. If a controller fails and its sole job is implementing packet filters when a new user connects to the network, the only effect is that the new user cannot connect. If the controller is responsible for core changes, a failure can have far more interesting results. New users not being able to connect is bad, but losing your entire fabric is much worse.

Diagram: Loop prevention. Source: Cisco.

What Is OpenFlow? Identify the Benefits of OpenFlow

A traditional networking device runs all the control and data plane functions. The control plane, usually implemented in the central CPU or the supervisor module, downloads the forwarding instructions into the data plane structures. Every vendor needs communications protocols to bind the two planes together and download those forwarding instructions.

Therefore, all distributed architectures need a protocol between control and data plane elements. The protocol that binds this communication path in traditional vendor devices is not open source, and every vendor uses its own proprietary protocol (Cisco uses IPC, InterProcess Communication).

OpenFlow tries to define a standard protocol between the control plane and the associated data plane. When you think of OpenFlow, relate it to the communication protocol between a traditional supervisor and its line cards. OpenFlow is just a low-level tool.

OpenFlow is a control plane ( controller ) to data plane ( OpenFlow enabled device ) protocol that allows the control plane to modify forwarding entries in the data plane. It enables SDN to separate the data and control planes.


Proactive versus reactive flow setup

OpenFlow operations have two types of flow setups: Proactive and Reactive.

With proactive setup, the controller populates the flow tables ahead of time, similar to typical routing. Because the flows and actions are pre-defined in the switch's flow tables, the packet-in event never occurs, and all packets are forwarded at line rate. With reactive setup, the network devices react to traffic, consult the OpenFlow controller, and create a rule in the flow table based on its instruction. The problem with this approach is that it can cause many CPU hits.


The following outlines the critical points for each type of flow setup:

Proactive flow setup:

  • Works well when the controller is emulating BGP or OSPF.
  • The controller must first discover the entire topology.
  • Discovers endpoints ( MAC addresses, IP addresses, and IP subnets ).
  • Computes optimal forwarding off the box.
  • Downloads flow entries to the data plane switches.
  • No data plane controller involvement, with the exceptions of ARP and MAC learning. Line-rate performance.

Reactive flow setup:

  • Used when no one can predict when and where a new MAC address will appear.
  • Punts unknown packets to the controller. Many CPU hits.
  • Computes forwarding paths on demand, not off the box.
  • Installs flow entries based on actual traffic.
  • Has many scalability concerns, such as the packet punting rate.
  • Not a recommended setup.

Hop-by-hop versus path-based forwarding

The following outlines the key points for the two forwarding methods used by OpenFlow: hop-by-hop forwarding and path-based forwarding.

Hop-by-hop forwarding:

  • Similar to traditional IP forwarding.
  • Installs identical flows on each switch on the data path.
  • Scalability concerns relating to flow updates after a change in topology.
  • Significant overhead in large-scale networks.
  • FIB update challenges and convergence time.

Path-based forwarding:

  • Similar to MPLS.
  • Maps flows to paths on ingress switches and assigns user traffic to paths at the edge node.
  • Computes paths across the network and installs end-to-end path-forwarding entries.
  • Works better than hop-by-hop forwarding in large-scale networks.
  • Core switches don't have to support the same granular functionality as edge switches.

Identify the benefits of OpenFlow with security.

Obviously, with any controller-based design, the controller is a lucrative target for attack. Anyone who knows you are using a controller-based network will try to attack the controller and its control plane. An attacker may attempt to intercept the controller-to-switch communication and replace it with its own commands, essentially attacking the control plane with whatever means they like.

An attacker may also try to insert a malformed packet or some other type of unknown packet into the controller ( fuzzing attack ), exploiting bugs in the controller and causing the controller to crash. 

Fuzzing attacks can be carried out with application scanning software such as Burp Suite. It attempts to manipulate data in a particular way, breaking the application.

The best way to tighten security is to encrypt switch-to-controller communications with SSL and self-signed certificates to authenticate the switch and controller. It would also be best to minimize interaction with the data plane, except for ARP and MAC learning.
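As a stdlib-only sketch of that idea, the snippet below shows a controller-side TLS listener on the OpenFlow port that also requires the switch to present a certificate (mutual authentication). Certificate file names are illustrative, the certificates themselves would be generated out of band, and a real controller would obviously do far more than accept one connection.

```python
# Minimal sketch of a TLS-protected controller listener on the OpenFlow port.
import socket
import ssl

CERT = "controller-cert.pem"   # hypothetical self-signed controller certificate
KEY = "controller-key.pem"
SWITCH_CA = "switch-ca.pem"    # CA (or self-signed cert) used to verify switches

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT, keyfile=KEY)
context.verify_mode = ssl.CERT_REQUIRED          # the switch must authenticate too
context.load_verify_locations(cafile=SWITCH_CA)

with socket.create_server(("0.0.0.0", 6653)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()       # blocks until a switch connects
        print(f"TLS session established with switch at {addr}")
        conn.close()
```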

To prevent denial-of-service attacks on the controller, you can use Control Plane Policing ( CoPP ) on Ingress to avoid overloading the switch and the controller. Currently, NEC is the only vendor implementing CoPP.


The Hybrid deployment model is helpful from a security perspective. For example, you can group specific ports or VLANs to OpenFlow and other ports or VLANs to traditional forwarding, then use traditional forwarding to communicate with the OpenFlow controller.

Identify the Benefits of OpenFlow

Software-defined networking or traditional routing protocols?

The move to a Software-Defined Networking architecture has clear advantages. It’s agile and can react quickly to business needs, such as new product development. For businesses to achieve success, they must have software that continues to evolve.

Otherwise, your customers and staff may lose interest in your product and service. The following table displays the advantages and disadvantages of the existing routing protocol control architecture.

Advantages of the existing approach:

  • Reliable and well known.
  • Proven, with 20-plus years of field experience.
  • Deterministic and predictable.
  • Self-healing: traffic can reroute around a failed node or link.
  • Autonomous.
  • Scalable.
  • Plenty of learning and reading materials.

Disadvantages of the existing approach:

  • Non-standard forwarding models. Destination-only and not load-aware metrics.**
  • Loosely coupled.
  • Lacks end-to-end transactional consistency and visibility.
  • Limited topology discovery and extraction. Basic neighbor and topology tables.
  • Lacks the ability to change existing control plane protocol behavior.
  • Lacks the ability to introduce new control plane protocols.

** Basic EIGRP is a partial exception; the IETF originally proposed an Energy-Aware Control Plane but later removed it.

Software-Defined Networking: Use Cases

  • Edge security policy enforcement at the network edge: authenticate users or VMs and deploy per-user ACLs before connecting a user to the network.

  • Custom routing and online TE: the ability to route on a variety of business metrics, aka routing for dollars, allowing you to override the default routing behavior.

  • Custom traffic processing: for analytics and encryption.

  • Programmable SPAN ports: use OpenFlow entries to mirror selected traffic to the SPAN port.

  • DoS traffic blackholing and distributed DoS prevention: block DoS traffic as close to the source as possible, with more selective traffic targeting than the original RTBH approach**. The traffic blocking is implemented in OpenFlow switches, giving higher performance at significantly lower cost.

  • Traffic redirection and service insertion: redirect a subset of traffic to network appliances and install redirection flow entries wherever needed.

  • Network monitoring: the controller is the authoritative source of information on network topology and forwarding paths.

  • Scale-out load balancing: punt new flows to the OpenFlow controller and install per-session entries throughout the network.

  • IPS scale-out: OpenFlow is used to distribute the load to multiple IDS appliances.

**Remote-Triggered Black Hole: RTBH refers to installing a host route to a bogus IP address ( the RTBH address ) pointing to Null interfaces on all routers. BGP is used to advertise the host routes of the attacked hosts to other BGP peers, with the next hop pointing to the RTBH address, and it is mainly automated in ISP environments.

SDN deployment models

Guidelines:

  1. Start with small deployments away from the mission-critical production path, i.e., the Core. Ideally, start with device or service provisioning systems.
  2. Start at the Edge and slowly integrate with the Core. Minimize the risk and blast radius. Start with packet filters at the Edge and tasks that can be easily automated ( VLANs ).
  3. Integrate new technology with the existing network.
  4. Gradually increase scale and gain trust. Experience is key.
  5. Have the controller in a protected out-of-band network with SSL connectivity to the switches.

There are four different models for OpenFlow deployment, and the following sections list the key points of each model.

Native OpenFlow 

  • They are commonly used for Greenfield deployments.
  • The controller performs all the intelligent functions.
  • The forwarding plane switches have little intelligence and solely perform packet forwarding.
  • The white box switches need IP connectivity to the controller for the OpenFlow control sessions. This should be done over an out-of-band network; if you are forced to use an in-band network for this communication path, use an isolated VLAN with STP.
  • Fast convergence techniques such as BFD may be challenging to use with a central controller.
  • Many people believe that this approach does not work for a regular company. Companies implementing native OpenFlow, such as Google, have the time and resources to reinvent the wheel when implementing a new control-plane protocol ( OpenFlow ).

Native OpenFlow with Extensions

  • Some control plane functions are offloaded from the centralized controller to the forwarding plane switches. For example, the OpenFlow-enabled switches could load balance across multiple links without a prior decision from the controller. You could also run STP, LACP, or ARP locally on the switch without interaction with the controller. This approach is helpful if you lose connectivity to the controller: if the switches perform certain controller functions locally, packet forwarding continues in the event of a failure.
  • The local switches should support the specific OpenFlow extensions that let them perform functions on the controller’s behalf.

Hybrid ( Ships in the night )

  • This approach is used where OpenFlow runs in parallel with the production network.
  • The same network box is controlled by existing on-box and off-box control planes ( OpenFlow).
  • Suitable for pilot deployment models as switches still run traditional control plane protocols.
  • The Openflow controller manages only specific VLANs or ports on the network.
  • The big challenge is determining and investigating the conflict-free sharing of forwarding plane resources across multiple control planes.

Integrated OpenFlow

  • OpenFlow classifiers and forwarding entries are integrated with the existing control plane. For example, Juniper’s OpenFlow model follows this mode of operation where OpenFlow static routes can be redistributed into the other routing protocols.
  • No need for a new control plane.
  • No need to replace all forwarding hardware
  • It is the most practical approach as long as the vendor supports it.

Closing Points on OpenFlow

Advantages of OpenFlow:

OpenFlow brings several critical advantages to network management and control:

1. Flexibility and Programmability: With OpenFlow, network administrators can dynamically reconfigure the behavior of network devices, allowing for greater adaptability to changing network requirements.

2. Centralized Control: By centralizing control in a single controller, network administrators gain a holistic view of the network, simplifying management and troubleshooting processes.

3. Innovation and Experimentation: OpenFlow enables researchers and developers to experiment with new network protocols and applications, fostering innovation in the networking industry.

4. Scalability: OpenFlow’s centralized control architecture provides the scalability needed to manage large-scale networks efficiently.

Implications for Network Control:

OpenFlow has significant implications for network control, paving the way for new possibilities in network management:

1. Software-Defined Networking (SDN): OpenFlow is a critical component of the broader concept of SDN, which aims to decouple network control from the underlying hardware, providing a more flexible and programmable infrastructure.

2. Network Virtualization: OpenFlow facilitates network virtualization, allowing multiple virtual networks to coexist on a single physical infrastructure.

3. Traffic Engineering: By controlling the flow of packets at a granular level, OpenFlow enables advanced traffic engineering techniques, optimizing network performance and resource utilization.

Conclusion:

OpenFlow represents a paradigm shift in network control, offering a more flexible, scalable, and programmable approach to managing networks. By separating the control and data planes, OpenFlow empowers network administrators to have fine-grained control over network behavior, improving efficiency, innovation, and adaptability. As the networking industry continues to evolve, OpenFlow and its related technologies will undoubtedly play a crucial role in shaping the future of network management.

Summary: What is OpenFlow?

In the rapidly evolving world of networking, OpenFlow has emerged as a game-changer. This revolutionary technology has transformed the way networks are managed, offering unprecedented flexibility, control, and efficiency. In this blog post, we will delve into the depths of OpenFlow, exploring its definition, key features, and benefits.

What is OpenFlow?

OpenFlow can be best described as an open standard communications protocol that enables the separation of the control plane and the data plane in network devices. It allows centralized control over a network’s forwarding elements, making it possible to program and manage network traffic dynamically. By decoupling the intelligence of the network from the underlying hardware, OpenFlow provides a flexible and programmable infrastructure for network administrators.

Key Features of OpenFlow

a) Centralized Control: One of the core features of OpenFlow is its ability to centralize network control, allowing administrators to define and implement policies from a single point of control. This centralized control improves network visibility and simplifies management tasks.

b) Programmability: OpenFlow’s programmability empowers network administrators to define how network traffic should be handled based on their specific requirements. Through the use of flow tables and match-action rules, administrators can dynamically control the behavior of network switches and routers.

c) Software-Defined Networking (SDN) Integration: OpenFlow plays a crucial role in the broader concept of Software-Defined Networking. It provides a standardized interface for SDN controllers to communicate with network devices, enabling dynamic and automated network provisioning.

Benefits of OpenFlow

a) Enhanced Network Flexibility: With OpenFlow, network administrators can easily adapt and customize their networks to suit evolving business needs. The ability to modify network behavior on the fly allows for efficient resource allocation and improved network performance.

b) Simplified Network Management: By centralizing network control, OpenFlow simplifies the management of complex network architectures. Policies and configurations can be applied uniformly across the network, reducing administrative overhead and minimizing the chances of configuration errors.

c) Innovation and Experimentation: OpenFlow fosters innovation by providing a platform for the development and deployment of new network protocols and applications. Researchers and developers can experiment with novel networking concepts, paving the way for future advancements in the field.

Conclusion:

OpenFlow has ushered in a new era of network management, offering unparalleled flexibility and control. Its ability to separate the control plane from the data plane, coupled with centralized control and programmability, has opened up endless possibilities in network architecture design. As organizations strive for more agile and efficient networks, embracing OpenFlow and its associated technologies will undoubtedly be a wise choice.


Cisco Switch Virtualization Nexus 1000v

Cisco Switch Virtualization Nexus 1000v

Virtualization has become integral to modern data centers in today's digital landscape. With the increasing demand for agility, flexibility, and scalability, organizations are turning to virtual networking solutions to meet their evolving needs. One such solution is the Nexus 1000v, a virtual network switch offering comprehensive features and functionalities. In this blog post, we will delve into the world of the Nexus 1000v, exploring its key features, benefits, and use cases.

The Nexus 1000v is a distributed virtual switch that operates at the hypervisor level, providing advanced networking capabilities for virtual machines (VMs). It is designed to integrate seamlessly with VMware vSphere, offering enhanced network visibility, control, and security.

Cisco Switch Virtualization is a revolutionary concept that allows network administrators to create multiple virtual switches on a single physical switch. By abstracting the network functions from the hardware, it provides enhanced flexibility, scalability, and efficiency. With Cisco Switch Virtualization, businesses can maximize resource utilization and simplify network management.

At the forefront of Cisco's Switch Virtualization portfolio is the Nexus 1000v. This powerful platform brings the benefits of virtualization to the data center, enabling seamless integration between virtual and physical networks. By extending Cisco's renowned networking capabilities into the virtual environment, Nexus 1000v empowers organizations to achieve consistent policy enforcement, enhanced security, and simplified operations.

The Nexus 1000v boasts a wide range of features that make it a compelling choice for network administrators. From advanced network segmentation and traffic isolation to granular policy control and deep visibility, this platform has it all. By leveraging the power of Cisco's Virtual Network Services (VNS), organizations can optimize their network infrastructure, streamline operations, and deliver superior performance.

Deploying Cisco Switch Virtualization, specifically the Nexus 1000v, requires careful planning and consideration. Organizations must evaluate their network requirements, ensure compatibility with existing infrastructure, and adhere to best practices. From designing a scalable architecture to implementing proper security measures, attention to detail is crucial to achieve a successful deployment.

To truly understand the impact of Cisco Switch Virtualization, it's essential to explore real-world use cases and success stories. From large enterprises to service providers, organizations across various industries have leveraged the power of Nexus 1000v to transform their networks. This section will highlight a few compelling examples, showcasing the versatility and value that Cisco Switch Virtualization brings to the table.

Highlights: Cisco Switch Virtualization Nexus 1000v

Hypervisor and vSphere Introduction

A hypervisor, also known as a virtual machine manager, allows multiple operating systems to run on a single hardware host. Each guest operating system uses the host's processor, memory, and other resources. The hypervisor controls these resources and allocates what each operating system needs, running the guest operating systems, or virtual machines, on top of it.

Designed specifically for integration with VMware vSphere environments, the Cisco Nexus 1000V Series Switch runs Cisco NX-OS software and operates within the VMware ESX hypervisor, delivering enterprise-class performance and scalability. With the Cisco Nexus 1000V Series, you can take advantage of Cisco VN-Link server virtualization technology to provide:

• Policy-based virtual machine (VM) connectivity

• Mobile VM security

• Network policy

• Nondisruptive operational model for your server virtualization and networking teams

Virtual servers can be configured with the same network configuration, security policy, diagnostic tools, and operational models as physical servers. The Cisco Nexus 1000V Series is also compatible with VMware vSphere, vCenter, ESX, and ESXi.

A brief overview of the Nexus 1000V system

There are two primary components of the Cisco Nexus 1000V Series switch:

VEM (Virtual Ethernet Module): Executes inside hypervisors

VSM (External Virtual Supervisor Module): Manages VEMs

The Nexus 1000v implements the generic concept of a Cisco Distributed Virtual Switch (DVS). The Cisco Nexus 1000V Virtual Ethernet Module (VEM) executes inside VMware ESX or ESXi and uses the VMware vNetwork Distributed Switch (vDS) application programming interface (API). Integration with VMware vMotion and the Distributed Resource Scheduler (DRS) allows advanced networking capabilities to be provided to virtual machines. The VEM performs Layer 2 switching and advanced networking functions based on configuration information from the VSM.


Virtual routing and forwarding

Virtual routing and forwarding form the basis of this stack. Network virtualization comes with two primary methods: 1) one-to-many and 2) many-to-one. The "one-to-many" network virtualization method segments one physical network into multiple logical segments. Conversely, the "many-to-one" method consolidates numerous physical devices into one logical entity. By definition, they seem to be opposites, but they fall under the same umbrella of network virtualization.

Before you proceed, you may find the following posts helpful:

  1. Container Based Virtualization
  2. Virtual Switch
  3. What is VXLAN
  4. Redundant Links
  5. WAN Virtualization
  6. What Is FabricPath

Cisco Switch Virtualization.

Key Nexus 1000v Discussion Points:


  • Introduction to Nexus1000v and what is involved.

  • Highlighting the details on Cisco switch virtualization. Logical separation. 

  • Technical details on the additional overhead from virtualization.

  • Scenario: Network virtualization.

  • A final note on software virtual switch designs.

Back to basics with network virtualization

Before we get stuck into Cisco virtualization, let us address some basics. Multiple virtual endpoints may share a physical network, yet different virtual endpoints can belong to different customers, so the communication between these endpoints needs to be isolated. In other words, the network is a resource too, and network virtualization is the technology that enables the sharing of a common physical network infrastructure.

Virtualization uses software to simulate traditional hardware platforms and create virtual software-based systems. For example, virtualization allows specialists to construct a single virtual network or partition a physical network into multiple virtual networks.

Cisco Switch Virtualization: Logical segmentation: One-to-many

We have one-to-many network virtualization for the Cisco switch virtualization design; a single physical network is logically segmented into multiple virtual networks. For example, each virtual network could correspond to a user group or a specific security function.

End-to-end path isolation requires the virtualization of networking devices and their interconnecting links. VLANs have been traditionally used, and hosts from one user group are mapped to a single VLAN. To extend the path across multiple switches at Layer 2, VLAN tagging (802.1Q) can carry VLAN information between switches. These VLAN trunks were created to transport multiple VLANs over a single Ethernet interface.

The diagram below displays two independent VLANs, VLAN201 and VLAN101. These VLANs can share one physical wire to provide L2 reachability between hosts connected to Switch B and Switch A via Switch C, but they remain separate entities.

Diagram: Nexus 1000v operation.

VLANs are sufficient for small Layer 2 segments. However, today's networks will likely have a mix of Layer 2 and Layer 3 routed networks. In this case, Layer 2 VLANs alone are insufficient because you must extend the isolation over a Layer 3 device. This can be achieved using Virtual Routing and Forwarding ( VRF ), the next step in Cisco switch virtualization. A VRF instance logically carves a Layer 3 device into several isolated, independent L3 devices. Locally configured VRFs cannot communicate with each other directly.

The diagram below displays one physical Layer 3 router with three VRFs: VRF Yellow, VRF Red, and VRF Blue. These virtual routing and forwarding instances are completely separated; without explicit configuration, routes in one virtual routing and forwarding instance cannot be leaked to another.

Diagram: Virtual routing and forwarding.

The virtualization of the interconnecting links depends on how the virtual routers are connected. If they are physically ( directly ) connected, you could use a technology known as VRF-lite to separate traffic and 802.1Q to label the data plane. This is known as hop-by-hop virtualization. However, it’s possible to run into scalability issues when the number of devices grows. This design is typically used when you connect virtual routing and forwarding back to back, i.e., no more than two devices.
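For illustration, the sketch below pushes a minimal VRF-lite plus 802.1Q subinterface configuration to a Cisco IOS router with the netmiko library. Device details, VLAN IDs, and addressing are assumptions; the commands follow classic IOS VRF-lite syntax but should be checked against your platform and software release.

```python
# Hedged sketch: VRF-lite with an 802.1Q subinterface on a Cisco IOS router,
# pushed via netmiko (pip install netmiko). All specifics are illustrative.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "router1.example.com",   # hypothetical device
    "username": "admin",
    "password": "admin",
}

vrf_lite_config = [
    "ip vrf RED",                        # define the VRF (classic IOS syntax)
    "rd 65000:101",
    "interface GigabitEthernet0/1.101",  # 802.1Q subinterface labels the data plane
    "encapsulation dot1q 101",
    "ip vrf forwarding RED",
    "ip address 10.1.101.1 255.255.255.0",
]

with ConnectHandler(**device) as conn:
    print(conn.send_config_set(vrf_lite_config))
```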

When the virtual routers are connected over multiple hops through an IP cloud, you can use generic routing encapsulation ( GRE ) or Multiprotocol Label Switching ( MPLS ) virtual private networks.

GRE is probably the simpler of the Layer 3 methods, and it can work over any IP core. GRE can encapsulate the contents and transport them over a network with the network unaware of the packet contents. Instead, the core will see the GRE header, virtualizing the network path.

Cisco Switch Virtualization: The additional overhead

When designing Cisco switch virtualization, you need to consider the additional overhead. The GRE header adds a further 24 bytes of overhead, so the forwarding router may have to break a datagram into two fragments to keep each packet within the outgoing interface MTU. To avoid this fragmentation, configure the MTU, MSS, and Path MTU parameters correctly on the outgoing and intermediate routers.
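A quick worked example of that arithmetic, assuming a standard 1500-byte Ethernet MTU and the 24 bytes of GRE encapsulation mentioned above (20-byte outer IP header plus 4-byte GRE header):

```python
# Worked example of GRE overhead: tunnel MTU and a sensible TCP MSS clamp.
ETHERNET_MTU = 1500
GRE_OVERHEAD = 24          # outer IPv4 (20) + GRE (4)
IP_TCP_HEADERS = 40        # inner IPv4 (20) + TCP (20), no options

tunnel_mtu = ETHERNET_MTU - GRE_OVERHEAD      # 1476: largest packet that avoids fragmentation
tcp_mss = tunnel_mtu - IP_TCP_HEADERS         # 1436: MSS value to clamp TCP through the tunnel

print(f"GRE tunnel IP MTU: {tunnel_mtu}, suggested TCP MSS: {tcp_mss}")
```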

The GRE standard is typically static. You only need to configure tunnel endpoints, and the tunnel will be up as long as you can reach those endpoints. However, recent designs can establish a dynamic GRE tunnel.

GRE over IPsec

MPLS/VPN, on the other hand, is a different beast. It requires signaling to distribute labels and build an end-to-end Label Switched Path ( LSP ). The label distribution can be done with BGP+label, LDP, and RSVP. Unlike GRE tunnels, MPLS VPNs do not have to manage multiple point-to-point tunnels to provide a full mesh of connectivity. Instead, they are used for connectivity, and packets’ labels provide traffic separation.

Cisco switch virtualization: Many to one

Many-to-one network consolidation refers to grouping two or more physical devices into one. Examples of this Cisco switch virtualization technology include Virtual Switching System ( VSS ), stackable switches, and Nexus vPC. Combining many physical devices into one logical entity allows STP to view the group as a single switch, so all ports can remain active; by default, STP would block the redundant path.

Software-defined networking takes this concept further; it completely abstracts the entire network into a single virtual switch. On traditional routers, the control and data planes reside on the same device, yet with SDN they are decoupled. The control plane is now on a policy-driven controller, and the data plane remains local on the OpenFlow-enabled switch.

Network Virtualization

Server and network virtualization presented the challenge of multiple VMs sharing a single network physical port, such as a network interface controller ( NIC ). The question then arises: how do I link multiple VMs to the same uplink? How do I provide path separation? Today’s networks need to virtualize the physical port and allow the configuration of policies per port.


NIC-per-VM design

One way to do this is to have a NIC-per-VM design where each VM is assigned a single physical NIC, and the NIC is not shared with any other VM. The hypervisor, aka virtualization layer, would be bypassed, and the VM would access the I/O device directly. This is known as VMDirectPath. This direct path or pass-through can improve performance for hosts that utilize high-speed I/O devices, such as 10 Gigabit Ethernet. However, the lack of flexibility and the ability to move VMs offset higher performance benefits.  

Virtual-NIC-per-VM in Cisco UCS (Adapter FEX)

Another way to do this is to create multiple logical NICs on the same physical NIC, such as Virtual-NIC-per-VM in Cisco UCS (Adapter FEX). These logical NICs are assigned directly to VMs, and traffic gets marked with a vNIC-specific tag on the hardware (VN-Tag/802.1ah). The actual VN-Tag tagging is implemented in the server NICs so that you can clone the physical NIC in the server to multiple virtual NICs. This technology provides faster switching and enables you to apply a rich set of management features to local and remote traffic.

Software Virtual Switch

The third option is to implement a virtual software switch in the hypervisor. For example, VMware introduced virtual switching compatibility with its vSphere ( ESXi ) hypervisor, called vSphere Distributed Switch ( VDS ). Initially, they introduced a local L2 software switch, which was soon phased out due to a lack of distributed architecture.

Data physically moves between the servers through the external network, but the control plane abstracts this movement to look like one large distributed switch spanning multiple servers. This approach has a single management and configuration point, similar to stackable switches – one control plane with many physical data forwarding paths. The data does not move through a parent partition but logically connects directly to the network interface through local vNICs associated with each VM.

Network virtualization and Nexus 1000v ( Nexus 1000 )

The VDS introduced by VMware lacked any good networking features, which led Cisco to introduce the Nexus 1000V software-based switch. The Nexus 1000v is a multi-cloud, multi-hypervisor, and multi-services distributed virtual switch. Its function is to enable communication between VMs.

Diagram: Nexus 1000v virtual distributed switch.

Nexus 1000 components: VEM and VSM

The Nexus 1000v has two essential components:

  1. The Virtual Supervisor Module ( VSM )
  2. The Virtual Ethernet Module ( VEM ).

Compared to a physical switch, the VSM could be viewed as the supervisor, setting up the control plane functions for the data plane to forward efficiently, and the VEM as the physical line cards that do all the packet forwarding. The VEM is the software component that runs within the hypervisor kernel. It handles all VM traffic, including inter-VM frames and Ethernet traffic between a VM and external resources.

The VSM runs its own NX-OS code and handles the control and management planes, integrating with a cloud manager such as VMware vCenter. You can have two VSMs for redundancy. Both modules remain constantly synchronized with unicast VSM-to-VSM heartbeats to provide stateful failover in the event of an active VSM failure.

The two available communication options for VSM to VEM are:

  1. Layer 2 control mode: The VSM control interface shares the same VLAN with the VEM.
  2. Layer 3 control mode: The VEM and the VSM are in different IP subnets.

The VSM also uses heartbeat messages to detect a loss of connectivity between it and the VEM. However, the VEM does not depend on connectivity to the VSM to perform its data plane functions and will continue forwarding packets if the VSM fails.

 

With Layer 3 control mode, the heartbeat messages are encapsulated in a GRE envelope.
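The actual heartbeat format is internal to NX-OS, so the sketch below (using Scapy) only illustrates the behavior described above: a keepalive carried in a GRE envelope between a VSM and a VEM in a different IP subnet, plus a simple timeout check that shows why the VEM can keep forwarding "headless." Addresses, timers, and the payload are illustrative assumptions, not the real protocol.

```python
# Illustrative only: the real VSM-VEM heartbeat format is internal to NX-OS.
# This sketch mirrors the description above: a periodic keepalive carried in a
# GRE envelope when VSM and VEM sit in different subnets, plus a timeout check
# on the receiving side. Addresses and timers are made up.
import time
from scapy.all import IP, GRE, Raw

VSM_IP, VEM_IP = "192.0.2.10", "198.51.100.20"
HEARTBEAT_INTERVAL = 2.0      # seconds (illustrative)
MISSED_BEFORE_DOWN = 3        # declare the peer lost after three missed beats

def build_heartbeat(seq: int) -> IP:
    """GRE-encapsulated keepalive from VSM to VEM (payload format is invented)."""
    return IP(src=VSM_IP, dst=VEM_IP) / GRE() / Raw(load=f"hb:{seq}".encode())

def peer_is_down(last_seen: float, now: float) -> bool:
    return (now - last_seen) > HEARTBEAT_INTERVAL * MISSED_BEFORE_DOWN

pkt = build_heartbeat(seq=1)
pkt.show()

# Even if heartbeats stop, the VEM keeps forwarding with its last known
# configuration; only configuration changes require the VSM to be reachable.
print("headless?", peer_is_down(last_seen=time.time() - 10, now=time.time()))
```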

 

Nexus 1000 and VSM best practices

  • L2 control is recommended for new installations.
  • Use MAC pinning instead of LACP.
  • Packet, Control, and Management in the same VLAN.
  • Do not use VLAN 1 for Control and Packet.
  • Use 2 x VSM for redundancy. 

The maximum supported latency between the VSM and VEM is 10 milliseconds. Therefore, with a high-quality DCI link, a VSM can be placed outside the data center and still control the VEMs.

Nexus 1000v InterCloud – Cisco switch virtualization

A vital element of the Nexus 1000v is its use in hybrid cloud deployments and its ability to place workloads in private and public environments via a single pane of glass. In addition, the Nexus 1000v InterCloud addresses the main challenges with hybrid cloud deployments, such as security concerns and control/visibility challenges within the public cloud.

The Nexus 1000v InterCloud works with the Cisco Prime Network Services Controller to create a secure L2 extension between the private data center and the public cloud.

This L2 extension is based on the Datagram Transport Layer Security ( DTLS ) protocol and allows you to securely transfer VMs and network services over a public IP backbone. DTLS is derived from the TLS/SSL protocol and provides communications privacy for datagram protocols, so all data in motion is cryptographically isolated and encrypted.

Diagram: Nexus 1000v InterCloud in a hybrid cloud deployment.

 

Nexus 1000v Hybrid Cloud Components 

  • Cisco Prime Network Services Controller for InterCloud: A VM that provides a single pane of glass to manage all InterCloud functions.
  • InterCloud VSM: Manages port profiles for VMs in the InterCloud infrastructure.
  • InterCloud Extender: Provides secure connectivity to the InterCloud Switch in the provider cloud; installed in the private data center.
  • InterCloud Switch: A virtual machine in the provider data center with secure connectivity to the InterCloud Extender in the enterprise cloud and to the virtual machines in the provider cloud.
  • Cloud Virtual Machines: VMs in the public cloud running workloads.

Prerequisites

  • Port 80: HTTP access from PNSC for AWS calls and for communicating with InterCloud VMs in the provider cloud.
  • Port 443: HTTPS access from PNSC for AWS calls and for communicating with InterCloud VMs in the provider cloud.
  • Port 22: SSH from PNSC to InterCloud VMs in the provider cloud.
  • UDP 6644: DTLS data tunnel.
  • TCP 6644: DTLS control tunnel.

VXLAN – Virtual Extensible LAN

The requirement for applications on demand has led to an increased number of required VLANs for cloud providers. The standard 12-bit identifier, which provides 4096 VLANs, proved to be a limiting factor in multi-tier, multi-tenant environments, and engineers started to run out of isolation options.

VXLAN addresses this with a 24-bit identifier ( the VNI ), offering roughly 16 million logical networks, and segments can now cross Layer 3 boundaries. Because it is a MAC-in-UDP encapsulation, upstream switches can hash on the UDP header and efficiently distribute packets across the links of a port channel.
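A minimal Scapy sketch helps show what MAC-in-UDP encapsulation looks like on the wire: the original Layer 2 frame is wrapped in a VXLAN header carrying the 24-bit VNI, a UDP header on destination port 4789, and an outer IP header between the two VTEPs (the VEMs in this case). Addresses and the VNI value are illustrative.

```python
# A minimal Scapy sketch of MAC-in-UDP (VXLAN) encapsulation. The 24-bit VNI
# gives roughly 16 million segments versus 4096 for a 12-bit VLAN ID, and the
# outer UDP header gives upstream switches something to hash on for
# port-channel load balancing. Addresses and VNI are illustrative.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

print(f"VLAN IDs: {2**12}, VXLAN VNIs: {2**24}")   # 4096 vs 16777216

inner = Ether(src="00:50:56:aa:00:01", dst="00:50:56:aa:00:02") / \
        IP(src="10.10.10.1", dst="10.10.10.2")     # original VM-to-VM frame

vxlan_frame = (
    Ether()                                        # outer frame between VTEPs/VEMs
    / IP(src="172.16.1.10", dst="172.16.2.10")     # source and destination VTEP IPs
    / UDP(sport=49152, dport=4789)                 # 4789 = IANA VXLAN port; sport is
                                                   # normally a hash of the inner headers
    / VXLAN(vni=50010)                             # 24-bit segment ID (Instance flag set by default)
    / inner                                        # original L2 frame, carried untouched
)

vxlan_frame.show()
```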

Diagram: VXLAN operations.

VXLAN works like a Layer 2 bridge ( flood and learn ); the VEM does all the heavy lifting: it learns the VM source MACs and their host VXLAN IPs and encapsulates traffic according to the port profile to which the VM belongs. Broadcast, multicast, and unknown unicast traffic are sent as multicast.

Known unicast traffic, on the other hand, is encapsulated and shipped directly to the destination host's VXLAN IP, aka the destination VEM. Enhanced VXLAN adds VXLAN MAC distribution and ARP termination, making it more efficient.
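The flood-and-learn behavior can be summarized in a few lines of Python; this is a conceptual sketch rather than the VEM implementation, and the multicast group address is an illustrative assumption.

```python
# Conceptual sketch of VXLAN "flood and learn" as described above.
# Inner source MAC -> remote VTEP IP is learned from decapsulated traffic;
# known unicast goes straight to the owning VTEP, while broadcast/multicast/
# unknown unicast is flooded to the segment's multicast delivery group.
MCAST_GROUP = "239.1.1.1"          # illustrative delivery group for this VNI
BROADCAST = "ff:ff:ff:ff:ff:ff"

mac_table = {}                     # inner MAC -> remote VTEP (VEM) IP

def learn(inner_src_mac, outer_src_vtep):
    """Called whenever a VXLAN packet is decapsulated."""
    mac_table[inner_src_mac] = outer_src_vtep

def next_hop(inner_dst_mac):
    """Where to send the encapsulated copy of an outgoing frame."""
    if inner_dst_mac == BROADCAST or inner_dst_mac not in mac_table:
        return MCAST_GROUP                  # flood: BUM traffic rides multicast
    return mac_table[inner_dst_mac]         # known unicast: direct to the VTEP

learn("00:50:56:aa:00:02", "172.16.2.10")
print(next_hop("00:50:56:aa:00:02"))        # 172.16.2.10 (unicast to remote VEM)
print(next_hop("00:50:56:aa:00:99"))        # 239.1.1.1   (unknown unicast -> flood)
```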

VXLAN Mode Packet Functions

| Packet type | VXLAN (multicast mode) | Enhanced VXLAN (unicast mode) | Enhanced VXLAN (MAC distribution) | Enhanced VXLAN (ARP termination) |
|---|---|---|---|---|
| Broadcast / Multicast | Multicast encapsulation | Replication plus unicast encapsulation | Replication plus unicast encapsulation | Replication plus unicast encapsulation |
| Unknown unicast | Multicast encapsulation | Replication plus unicast encapsulation | Drop | Drop |
| Known unicast | Unicast encapsulation | Unicast encapsulation | Unicast encapsulation | Unicast encapsulation |
| ARP | Multicast encapsulation | Replication plus unicast encapsulation | Replication plus unicast encapsulation | VEM ARP reply |

vPath – Service chaining

Intelligent Policy-based traffic steering through multiple network services.

vPath allows you to intelligently steer VM traffic through virtualized service devices. It intercepts and redirects the initial traffic of a flow to the service node. Once the service node performs its policy function, the result is cached, and the local virtual switch treats subsequent packets accordingly. In addition, it enables you to chain services together and push a VM's traffic through each service as required. Previously, if you wanted to tie services together in a data center, you had to stitch VLANs together, which was limiting in both design and scale.
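Conceptually, the fast path is just a flow cache. The sketch below is a simplified Python illustration of that idea (not Cisco's implementation): the first packet of a flow is punted to a stand-in service node, and every later packet of the same flow reuses the cached verdict locally.

```python
# Conceptual sketch of the vPath fast path described above (not Cisco's code).
# The first packet of a flow is redirected to the service node (for example, a
# virtual firewall); the verdict is cached, and the virtual switch applies it
# locally to every subsequent packet of that flow.
flow_cache = {}    # (src_ip, dst_ip, proto, sport, dport) -> verdict

def service_node_inspect(flow):
    """Stand-in for the service node's policy decision."""
    return "deny" if flow[4] == 23 else "permit"   # e.g., block telnet

def forward(flow):
    if flow not in flow_cache:
        # Slow path: steer the first packet to the service node, cache the verdict.
        flow_cache[flow] = service_node_inspect(flow)
    # Fast path: later packets are handled by the local VEM using the cache.
    return flow_cache[flow]

f = ("10.10.10.1", "10.10.10.2", "tcp", 51000, 80)
print(forward(f), forward(f))   # first call consults the service node, the rest hit the cache
```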

Diagram: Nexus 1000v vPath service chaining.

vPath 3.0 has been submitted to the IETF for standardization, allowing service chaining across vPath and non-vPath network services. It enables vPath service chaining across multiple physical devices and supports multiple hypervisors.

License Options 

| Nexus 1000v Essential Edition | Nexus 1000v Advanced Edition |
|---|---|
| Full Layer 2 feature set | All features of the Essential Edition |
| Security and QoS policies | VSG firewall |
| VXLAN virtual overlays | VXLAN gateway |
| vPath-enabled virtual services | TrustSec SGA |
| Full monitoring and management capabilities | A platform for future Cisco DC extensions |
| Free | $695 per CPU (MSRP) |

Nexus 1000 features and benefits

  • Switching: L2 switching, 802.1Q tagging, VLANs, rate limiting (TX), VXLAN, IGMP snooping, QoS marking (CoS and DSCP), class-based WFQ.
  • Security: policy mobility, private VLANs with local PVLAN enforcement, access control lists, port security, Cisco TrustSec support, dynamic ARP inspection, IP Source Guard, DHCP snooping.
  • Network services: Virtual Services Datapath (vPath) support for traffic steering and fast-path offload, leveraged by Virtual Security Gateway (VSG), vWAAS, and ASA 1000V.
  • Provisioning: port profiles, integration with vCenter (vC), vCloud Director (vCD), SCVMM, and BMC CLM, optimized NIC teaming with Virtual Port Channel – Host Mode.
  • Visibility: VM migration tracking, vCenter plugin, NetFlow v9 with NDE, CDP v2, VM-level interface statistics, vTracker, SPAN and ERSPAN (policy-based).
  • Management: vCenter VM provisioning, vCenter plugin, Cisco LMS, DCNM, Cisco CLI, RADIUS, TACACS+, syslog, SNMP (v1, v2, v3), hitless upgrade, software installer.

Advantages and disadvantages of the Nexus 1000

Advantages:

  • The Essential edition is free, and you can upgrade to the Advanced edition when needed.
  • Easy and quick to deploy.
  • Offers many rich network features unavailable on other distributed software switches.
  • Hypervisor agnostic.
  • Hybrid cloud functionality.

Disadvantages:

  • VEM-to-VSM internal communication is very sensitive to latency; its chatty nature can make it a poor fit for inter-DC deployments.
  • The VSM–VEM and VSM (active)–VSM (standby) heartbeat interval of 6 seconds makes it sensitive to network failures and congestion.
  • VEM over-dependency on the VSM reduces resiliency.
  • The VSM is required for vSphere HA, FT, and vMotion to work.

Closing Points on Cisco Nexus 1000v

Key Features and Functionalities:

Virtual Ethernet Module (VEM):

The Nexus 1000v employs the Virtual Ethernet Module (VEM), which runs as a module inside the hypervisor kernel. It takes the place of the hypervisor's native virtual switch, allowing efficient, direct switching of VM traffic within the kernel.

Virtual Supervisor Module (VSM):

The Virtual Supervisor Module (VSM) serves as the control plane for the Nexus 1000v, providing centralized management and configuration. It enables network administrators to define policies, manage virtual ports, and monitor network traffic.

Policy-Based Virtual Network Management:

With the Nexus 1000v, administrators can define policies to manage virtual networks. These policies ensure consistent network configurations across multiple hosts, simplifying network management and reducing the risk of misconfigurations.
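As a rough illustration of what "policy follows the VM" means, here is a hedged Python sketch with invented names (not the NX-OS or vCenter API): a port profile defined once and referenced by a vNIC keeps the same VLAN, ACL, and QoS settings even when the VM moves to another host.

```python
# Conceptual sketch of the port-profile idea; class and attribute names are
# invented for illustration. The profile is defined once, and every vNIC that
# references it inherits the same VLAN, security, and QoS policy regardless of
# which host the VM currently runs on.
from dataclasses import dataclass

@dataclass(frozen=True)
class PortProfile:
    name: str
    vlan: int
    acl: str
    qos_class: str

@dataclass
class VnicPort:
    vm: str
    host: str
    profile: PortProfile

    def migrate(self, new_host: str) -> "VnicPort":
        # The profile object is unchanged: same policy, new host.
        return VnicPort(self.vm, new_host, self.profile)

web = PortProfile(name="web-tier", vlan=10, acl="permit-web-only", qos_class="gold")
port = VnicPort(vm="web-01", host="esx-01", profile=web)
moved = port.migrate("esx-02")
print(moved.host, moved.profile.name, moved.profile.vlan)   # esx-02 web-tier 10
```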

Advanced Security and Monitoring Capabilities:

The Nexus 1000v offers granular security controls, including access control lists (ACLs), port security, and Dynamic Host Configuration Protocol (DHCP) snooping. Additionally, it provides comprehensive visibility into network traffic, enabling administrators to monitor and troubleshoot network issues effectively.

Benefits of the Nexus 1000v:

Enhanced Network Performance:

By offloading network processing to the VEM, the Nexus 1000v minimizes the impact on the hypervisor, resulting in improved network performance and reduced latency.

Increased Scalability:

The distributed architecture of the Nexus 1000v allows for seamless scalability, ensuring that organizations can meet the growing demands of their virtualized environments.

Simplified Network Management:

With its policy-based approach, the Nexus 1000v simplifies network management tasks, enabling administrators to provision and manage virtual networks more efficiently.

Use Cases:

Data Centers:

The Nexus 1000v is particularly beneficial in data center environments where virtualization is prevalent. It provides a robust and scalable networking solution, ensuring optimal performance and security for virtualized workloads.

Cloud Service Providers:

Cloud service providers can leverage the Nexus 1000v to enhance their network virtualization capabilities, offering customers more flexibility and control over their virtual networks.

The Nexus 1000v is a powerful virtual network switch that provides advanced networking capabilities for virtualized environments. Its rich features, policy-based management approach, and seamless integration with VMware vSphere allow organizations to achieve enhanced network performance, scalability, and management efficiency. As virtualization continues to shape the future of data centers, the Nexus 1000v remains a valuable tool for optimizing virtual network infrastructures.

 

Summary: Cisco Switch Virtualization Nexus 1000v

Welcome to our blog post, where we dive into the world of Cisco switch virtualization, focusing specifically on the Nexus 1000v. In this article, we will unravel the complexities surrounding switch virtualization, explore the key features of the Nexus 1000v, and understand its significance in modern networking environments.

Understanding Switch Virtualization

Switch virtualization is a technique that allows for creating multiple virtual switches on a single physical switch, enabling greater flexibility and efficiency in network management. Organizations can consolidate their infrastructure, reduce costs, and streamline network operations by virtualizing switches.

Introducing the Nexus 1000v

The Cisco Nexus 1000v is a powerful switch virtualization solution that extends the functionality of VMware environments. Unlike traditional virtual switches, it provides a more comprehensive set of features and advanced network control. It seamlessly integrates with VMware vSphere, offering enhanced visibility, security, and policy management.

Key Features of the Nexus 1000v

– Distributed Virtual Switch: The Nexus 1000v operates as a distributed virtual switch, distributing network intelligence across all hosts in the virtualized environment. This ensures consistent policies, simplified troubleshooting, and improved performance.

– Virtual Port Profiles: With virtual port profiles, administrators can define consistent network policies for virtual machines, irrespective of their physical location. This simplifies network provisioning and reduces the chances of misconfigurations.

- Network Analysis Module (NAM): The Nexus 1000v integrates with NAM, a robust monitoring and analysis tool that provides deep visibility into virtual network traffic. This enables administrators to quickly identify and resolve network issues, ensuring optimal performance.

Deployment Considerations

When planning to deploy the Nexus 1000v, it is essential to consider factors such as network architecture, compatibility with existing infrastructure, and scalability requirements. It is advisable to consult with Cisco experts or certified partners to ensure a smooth and successful implementation.

Conclusion:

In conclusion, the Cisco Nexus 1000v is a game-changer in switch virtualization. Its advanced features, seamless integration with VMware environments, and extensive network control make it an ideal choice for organizations seeking to optimize their network infrastructure. By understanding the fundamentals of switch virtualization and exploring Nexus 1000v’s capabilities, network administrators can unlock a world of possibilities in network management and performance.