Open Networking

In today's digital age, where connectivity is the lifeline of businesses and individuals alike, open networking has emerged as a transformative approach. This blog post delves into the concept of open networking, its benefits, and its potential to revolutionize the way we connect and communicate.

Open networking refers to a networking model that promotes interoperability, flexibility, and innovation. Unlike traditional closed networks that rely on proprietary systems, open networking embraces open standards, open source software, and open APIs. This approach enables organizations to break free from vendor lock-in, customize their network infrastructure, and foster collaborative development.

Enhanced Agility and Scalability: Open networking empowers businesses to adapt swiftly to changing requirements. By decoupling hardware and software layers, organizations gain the flexibility to scale their networks seamlessly and introduce new services efficiently. This agility is crucial in today's dynamic business landscape.

Cost-Effectiveness: With open networking, businesses can leverage commodity hardware and software-defined solutions, reducing capital expenditures. Moreover, the use of open source software eliminates costly licensing fees, making it an economically viable option for organizations of all sizes.

Interoperability and Vendor Neutrality: Open networking promotes interoperability between different vendors' products, fostering a vendor-neutral environment. This not only frees organizations from vendor lock-in but also encourages healthy competition, driving innovation and ensuring the best solutions for their specific needs.

Data Centers and Cloud Networks: Open networking has found significant applications in data centers and cloud networks. By embracing open standards and software-defined architectures, organizations can create agile and scalable infrastructure, enabling efficient management of virtual resources and enhancing overall performance.

Campus Networks and Enterprise Connectivity: In the realm of campus networks, open networking allows organizations to tailor their network infrastructure to meet specific demands. Through open APIs and programmability, businesses can integrate various systems and applications, enhancing connectivity, security, and productivity.

Telecommunications and Service Providers: Telecommunications and service providers can leverage open networking to deliver innovative services and improve customer experiences. By adopting open source solutions and virtualization, they can enhance network efficiency, reduce costs, and introduce new revenue streams with ease.

Open networking presents a transformative paradigm shift, empowering organizations to unleash the full potential of connectivity. By embracing open standards, flexibility, and collaboration, businesses can achieve enhanced agility, cost-effectiveness, and interoperability. Whether in data centers, campus networks, or telecommunications, open networking opens doors to innovation and empowers organizations to shape their network infrastructure according to their unique needs.

Highlights: Open Networking

**Fostering Innovation**

a) Open Networking refers to a network where networking hardware devices are separated from software code. Enterprises can flexibly choose equipment, software, and networking operating systems (OS) by using open standards and bare-metal hardware. An open network provides flexibility, agility, and programmability.

b) Additionally, open networking effectively separates hardware from software. This approach enhances component compatibility, interoperability, and expandability. In this way, enterprises gain greater flexibility, which facilitates their development.

c) Open networking relies on open standards, which allow for seamless integration between different hardware and software components, regardless of the vendor. This approach not only reduces dependency on single-source suppliers but also encourages a competitive market, fostering innovation and driving down costs.

d) Furthermore, open networking solutions are often built on open-source software, which benefits from the collective expertise of a global community of developers and engineers.

At present, Open Networking is enabled by: 

  • A. Open Source Software 
  • B. Open Network Devices 
  • C. Open Compute Hardware 
  • D. Software Defined Networks 
  • E. Network Function Virtualisation 
  • F. Cloud Computing 
  • G. Automation 
  • H. Agile Methods & Processes 

Defining Open Networking

Open Networking, as defined here, is much broader than other definitions, but it is the only definition that does not create more solution silos or bend the outcome to a buzzword or competing technology. A holistic, inclusive definition of open networking produces the best results.

As a result of these technologies, specialized, proprietary, hardware-based components are being replaced by simpler, more generic hardware, while increasingly critical functions migrate into software.

Open Networking in Practice:

Open Networking is already making its mark across various industries. Cloud service providers, for example, rely heavily on Open Networking principles to build scalable and flexible data center networks. Telecom operators also embrace Open Networking to deploy virtualized network functions, enabling them to offer services more efficiently and adapt to changing customer demands.

**Role of SDN and NFV**

Moreover, adopting software-defined networking (SDN) and network function virtualization (NFV) further accelerates the realization of the benefits of open networking. SDN separates the control plane from the data plane, providing centralized network management and programmability. NFV virtualizes network functions, allowing for dynamic provisioning and scalability. 

A. Use Cases and Real-World Examples: 

Data Centers and Cloud Computing: Open networking has gained significant traction in data centers and cloud computing environments. By leveraging open networking principles, organizations can build scalable and flexible data center networks that seamlessly integrate with cloud platforms, enabling efficient data management and resource allocation.

**Separate Control from Data Plane**

Software-Defined Networking (SDN): SDN is an example of open networking principles. By separating the control plane from the data plane, SDN enables centralized network management, automation, and programmability. This approach empowers network administrators to dynamically configure and optimize network resources, improving performance and reducing operational overhead.

B. Key Open Networking Projects:

Open Network Operating System (ONOS): ONOS is a collaborative project that focuses on creating an open-source, carrier-grade SDN (Software-Defined Networking) operating system. It provides a scalable platform for building network applications and services, facilitating innovation and interoperability.

OpenDaylight (ODL): ODL is a modular, extensible, open-source SDN controller platform. It aims to accelerate SDN adoption by providing developers and network operators with a common platform to build and deploy network applications.

FRRouting (FRR): FRR is an open-source IP routing protocol suite that supports various routing protocols, including OSPF, BGP, and IS-IS. It offers a flexible and scalable routing solution, enabling network operators to optimize their routing infrastructure.
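To make FRR's role concrete, below is a minimal sketch of a leaf switch's BGP configuration for a routed leaf-spine fabric, using FRR's BGP unnumbered style of peering. The interface names, ASN, and router ID are hypothetical placeholders, not taken from any particular deployment.

```bash
# Hypothetical example: the ASN, router ID, and swp1/swp2 uplinks are placeholders.
sudo tee /etc/frr/frr.conf > /dev/null <<'EOF'
router bgp 65101
 bgp router-id 10.0.0.11
 ! BGP unnumbered peering over the uplinks towards the spines
 neighbor swp1 interface remote-as external
 neighbor swp2 interface remote-as external
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
EOF

sudo systemctl restart frr            # load the new configuration
sudo vtysh -c 'show ip bgp summary'   # verify the spine sessions come up
```

Because both uplinks peer externally over equal-cost paths, this same pattern also delivers the ECMP behavior that routed leaf-and-spine designs rely on.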

The Role of Transformation

Infrastructure: Embrace Transformation:

To undertake an effective SDN data center transformation strategy, we must accept that demands on data center networks come from internal end-users, external customers, and considerable changes in the application architecture. All of these factors put pressure on traditional data center architecture.

Dealing effectively with these demands requires the network domain to become more dynamic, potentially introducing Open Networking and Open Networking solutions. For this to occur, we must embrace digital transformation and the changes it will bring to our infrastructure. Unfortunately, keeping current methods is holding back this transition.

Modern Network Infrastructure:

In modern network infrastructures, as has been the case on the server side for many years, customers demand supply chain diversification regarding hardware and silicon vendors. This diversification reduces the Total Cost of Ownership because businesses can drive better cost savings. In addition, replacing the hardware underneath can be seamless because the software above is standard across both vendors.

Leaf and Spine Architecture:

Further, as architectures streamline and leaf-spine designs extend from the data center to the backbone and the edge, a common software architecture across all these environments brings operational simplicity. This aligns perfectly with the broader trend of IT/OT convergence.

Working with Open Source Software

Linux Networking

One remarkable aspect of Linux networking is the abundance of powerful tools available for network configuration. From the traditional ifconfig and route commands to the more recent ip command, this section will introduce various tools and their functionalities.
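For instance, the ip command consolidates most of what ifconfig and route used to do. A few representative commands are shown below; the interface names and addresses are placeholders.

```bash
ip link show                                 # list interfaces (replaces ifconfig -a)
ip addr add 192.0.2.10/24 dev eth0           # assign an address (replaces ifconfig)
ip route add 198.51.100.0/24 via 192.0.2.1   # add a static route (replaces route add)
ip -s link show dev eth0                     # per-interface traffic counters
```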

Virtual Switching: Open vSwitch

What is Open vSwitch?

Open vSwitch is a multilayer virtual switch that enables network automation and management in virtualized environments. It bridges virtual machines (VMs) and the physical network, allowing seamless communication and control over network traffic. With its extensible architecture and robust feature set, Open vSwitch offers a flexible and scalable networking solution.

Open vSwitch offers many features, making it a popular choice among network administrators and developers. Some of its key capabilities are listed below, with a short configuration sketch after the list:

1. Virtual Network Switching: Open vSwitch can create and manage virtual switches, ports, and bridges, enabling complex network topologies within virtualized environments.

2. Flow Control: With Open vSwitch, you can define and control network traffic flow using flow rules. This enables advanced traffic management, filtering, and QoS (Quality of Service) capabilities.

3. Integration with SDN Controllers: Open vSwitch seamlessly integrates with various Software-Defined Networking (SDN) controllers, providing centralized management and control of network resources.
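The sketch below exercises all three capabilities using the standard ovs-vsctl and ovs-ofctl utilities; the bridge, port, and controller addresses are hypothetical examples.

```bash
# Virtual switching: create a bridge and attach a NIC and a VM's tap port.
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth1
sudo ovs-vsctl add-port br0 vnet0

# Flow control: an OpenFlow rule that drops telnet traffic on the bridge.
sudo ovs-ofctl add-flow br0 "priority=100,tcp,tp_dst=23,actions=drop"

# SDN integration: point the switch at an external controller.
sudo ovs-vsctl set-controller br0 tcp:192.0.2.100:6653
sudo ovs-vsctl show   # verify bridges, ports, and controller settings
```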

Containers & Docker Networking

Docker networking revolves around containers, networks, and endpoints. Containers are isolated environments that run applications, while networks act as virtual channels for communication. Endpoints, on the other hand, are unique identifiers attached to containers within a network. Understanding these fundamental concepts is crucial for grasping Docker network connectivity.

Docker Networking Fundamentals

Docker networking operates on a virtual network that allows containers to communicate securely. Docker creates a bridge network called “docker0” by default and assigns each container a unique IP address. This isolation ensures that containers can run independently without interfering with each other.

The default bridge network in Docker is an internal network that connects containers running on the same host. Containers within this network can communicate with each other using IP addresses. However, containers on different hosts cannot directly communicate over the bridge network.
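As a minimal sketch (the network and container names are examples), a user-defined bridge shows this in practice; unlike docker0, user-defined bridges also provide built-in, name-based service discovery.

```bash
docker network create --driver bridge app-net    # user-defined bridge network
docker run -d --name web --network app-net busybox sleep 3600
docker run -d --name db  --network app-net busybox sleep 3600

docker exec web ping -c 1 db      # containers resolve each other by name
docker network inspect app-net    # subnet, gateway, and attached endpoints
```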

Orchestrator: Understanding Docker Swarm

Docker Swarm, a native clustering and orchestration tool for Docker, allows the management of a cluster of Docker nodes as a single virtual system. It provides high availability, scalability, and ease of use for deploying and managing containerized applications. With its intuitive user interface and powerful command-line interface, Docker Swarm simplifies managing container clusters.
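A hedged sketch of bootstrapping a Swarm and deploying a replicated service over a multi-host overlay network follows; the advertise address, names, and replica count are illustrative.

```bash
docker swarm init --advertise-addr 192.0.2.10    # first node becomes a manager
# Additional nodes join with the token that the init command prints:
#   docker swarm join --token <token> 192.0.2.10:2377

docker network create -d overlay app-overlay     # spans all Swarm nodes
docker service create --name web --replicas 3 \
  --network app-overlay -p 8080:80 nginx
docker service ls                                # check service and replica state
```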

Related: For pre-information, you may find the following posts helpful:

  1. OpenFlow Protocol
  2. Software-defined Perimeter Solutions
  3. Network Configuration Automation
  4. SASE Definition
  5. Network Overlays
  6. Overlay Virtual Networking

Open Networking Solutions

Open Networking: The Solutions

Now, let’s look at the evolution of data centers to see how we can achieve this modern infrastructure. To evolve and keep up with current times, treat technology and your infrastructure as practical tools that can drive the entire organization to become digital. The network components will play a key role, but the digital transformation process is an enterprise-wide initiative focusing on fabric-wide automation and software-defined networking.

A. Lacking fabric-wide automation:

One central pain point I have seen throughout networking is manual, box-by-box work in environments that lack fabric-wide automation. In addition, it’s common to deploy applications by combining multiple services that run on a distributed set of resources. As a result, configuration and maintenance are much more complex than in the past. You have two options to implement all of this.

Undertaking Manual or Automated Approach

First, you can connect these services by manually spinning up the servers, installing the necessary packages, and SSHing to each one. Alternatively, you can move toward open networking solutions with automation, particularly Ansible automation with Ansible Engine or Ansible Tower with automation mesh. As an automation best practice, use Ansible variables for flexible playbooks that can easily be shared and reused across different environments.

B. Fabric-wide automation and SDN:

In a software-defined environment, deploying a VRF or a technology such as an anycast gateway becomes a dynamic, fabric-wide command. We now have fabric-wide automation and can deploy with one touch instead of numerous box-by-box configurations.

We are moving from box-by-box configuration to atomic programming of the distributed fabric as a single entity. This allows us to carry out deployments quickly, from one configuration point, and without human error.

C. Configuration management:

Manipulating configuration files by hand is tedious, error-prone, and time-consuming. Equally, performing pattern matching to make changes to existing files is risky. The manual approach will result in configuration drift, where some servers will drift from the desired state. 

Configuration Drift: Configuration drift is caused by inconsistent configuration items across devices, usually due to manual changes and updates and not following the automation path. Ansible architecture can maintain the desired state across various managed assets.

Storing Managed Assets: Managed assets, which can range from distributed firewalls to Linux hosts, are stored in an inventory file, which can be static or dynamic. Dynamic inventories are best suited for a cloud environment where you want to gather host information dynamically. Ansible is all about maintaining the desired state for your domain.
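As a sketch of a static inventory (the group names, hostnames, and addresses below are entirely hypothetical), you might describe the managed assets and verify reachability like this:

```bash
cat > inventory.ini <<'EOF'
[leaf_switches]
leaf01 ansible_host=192.0.2.11
leaf02 ansible_host=192.0.2.12

[linux_hosts]
app01 ansible_host=192.0.2.21
EOF

# Verify Ansible can reach the Linux hosts (the ping module needs Python
# on the target, so network devices use connection plugins instead).
ansible -i inventory.ini linux_hosts -m ping
```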

Challenge: The issue of Silos

To date, the networking industry has been controlled by a few vendors. We have dealt with proprietary silos in the data center, campus/enterprise, and service provider environments. The major vendors will continue to provide a vertically integrated lock-in solution for most customers. They will not allow independent, third-party network operating system software to run on their silicon.

Required: Modular & Open

Typically, these silos were able to solve the problems of their time. Modern infrastructure needs to be modular, open, and straightforward. Vendors need to allow independent, third-party network operating systems to run on their silicon to break away from vertically integrated lock-in solutions. With the announcement of Cisco Silicon One, Cisco has started this shift for the broader industry.

The Rise of Open Networking Solutions

New data center requirements have emerged; therefore, the network infrastructure must break the silos and transform to meet these trending requirements. One can view the network transformation as moving from a static and conservative mindset that results in cost overrun and inefficiencies to a dynamic routed environment that is simple, scalable, secure, and can reach the far edge. For effective network transformation, we need several stages. 

**Routed Data Center Design**

Firstly, transition to a routed data center design with a streamlined leaf-spine architecture and a standard operating system across cloud, Edge, and 5G networks. A viable approach would be to do all this with open standards, without proprietary mechanisms. Then, we need good visibility.

**Networking and Visibility**

As part of the transformation, the network is no longer considered a black box that needs to be available and provide connectivity to services. Instead, the network is a source of deep visibility that can aid a large set of use cases: network performance, monitoring, security, and capacity planning, to name a few. However, visibility is often overlooked with an over-focus on connectivity and not looking at the network as a valuable source of information.

**Monitoring at a Flow level**

For efficient network management, we must provide deep visibility into applications at a flow level, on any port and device type. To get something comparable today, you would deploy a separate, redundant monitoring network consisting of probes, packet brokers, and tools that process packets for metadata.

**Packet Brokers: Traditional Tooling**

Traditional network monitoring tools like packet brokers require life cycle management. A more viable solution would integrate network visibility into the fabric and would not need many components. This would enable us to do more with the data and aid in agility for ongoing network operations.

Note: Observability: Detecting the unknown

There will always be some requirement for application optimization or a security breach, where visibility can help you quickly resolve these issues. Monitoring is used to detect known problems and is only valid with pre-defined dashboards that show a problem you have seen before, such as capacity reaching its limit.

On the other hand, we have the practice of Observability, which can detect unknown situations and is used to get to the root cause of any problem, known or unknown.

Example Visibility Technology: sFlow

What is sFlow?

sFlow is a network monitoring technology that allows for real-time, granular network traffic analysis. By sampling packets at high speeds, sFlow provides a comprehensive view of network behavior, capturing key data such as source and destination addresses, port numbers, and traffic volumes. This invaluable information serves as the foundation for network optimization and security.
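To make this concrete, the sketch below shows one common way to enable sFlow sampling on an Open vSwitch bridge; the collector address, sampling rate, and interface names are placeholder values.

```bash
# Sample 1-in-64 packets on br0 and export them to a remote sFlow collector.
sudo ovs-vsctl -- --id=@s create sflow agent=eth0 \
    target="192.0.2.50:6343" header=128 sampling=64 polling=10 \
    -- set bridge br0 sflow=@s

sudo ovs-vsctl list sflow   # confirm the sFlow configuration was applied
```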

Evolution of the Data Center

**Several Important Design Phases**

We are transitioning, and the data center has undergone several design phases. Initially, we started with layer 2 silos, suitable for the north-to-south traffic flows. However, layer 2 designs hindered east-west communication traffic flows of modern applications and restricted agility, which led to a push to break network boundaries.

**Layer 3 Routing & Overlay Networking**

Hence, designs moved to routing at the top of the rack (ToR), with overlays between the ToRs to carry inter-application communication. This is the most efficient approach and can be accomplished in several ways.

The demand for leaf-and-spine Clos architectures started in the data center and spread to other environments. A Clos network is a type of non-blocking, multistage switching architecture.

This network design extends from the central/backend data center to the micro data centers at the edge. Various parts of the edge network, PoPs, central offices, and the packet core have all been transformed into leaf-and-spine Clos designs.

The network overlay

Building a complete network overlay to increase agility is common to all software-defined technologies. An overlay is a solution abstracted from the underlying physical infrastructure: the customer applications or services are separated and disaggregated from the network infrastructure. Think of it as a sandbox or private network for each application on an existing network.

Example: Overlay Networking with VXLAN

The network overlay is most often created with VXLAN. Cisco ACI, for example, uses VXLAN for the overlay, while the underlay is a combination of BGP and IS-IS. The overlay abstracts a lot of complexity, and Layer 2 and 3 traffic separation is done with a VXLAN network identifier (VNI).

The VXLAN overlay

VXLAN uses a 24-bit network segment ID, called a VXLAN network identifier (VNI), for identification. This is much larger than the 12 bits used for traditional VLAN identification. The VNI is essentially a VLAN ID, but 24 bits supports roughly 16 million (2^24) VXLAN segments.

Challenge: Traditional VLANs

This is considerably more than the 4094 usable segments that the traditional 12-bit VLAN ID supports. Not only does this provide more segments, but it also enables better network isolation, with many small VXLAN segments instead of one large VLAN domain.

Required: Better Isolation and Scalability

The VXLAN network has become the de facto overlay protocol and brings many advantages to network architecture regarding flexibility, isolation, and scalability. VXLAN effectively implements an Ethernet segment that virtualizes a thick Ethernet cable.

Use Case: **VXLAN Flood and Learn**

Flood and learn is a crucial mechanism within VXLAN that enables dynamic discovery of VXLAN tunnel endpoints and the hosts behind them. When a VXLAN switch receives a frame for an unknown destination MAC address, it floods the packet to all of its VXLAN tunnels. The receiving tunnel endpoints then examine the packet, learn the source MAC address, and update their forwarding tables accordingly.
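On a plain Linux host, the classic flood-and-learn behavior can be sketched with a multicast-based VXLAN interface: unknown-destination (BUM) traffic is flooded to the multicast group, and source MACs are learned from the frames that come back. The VNI, group address, and interface names below are illustrative.

```bash
# VNI 100, flood-and-learn via multicast group 239.1.1.1 over eth0.
sudo ip link add vxlan100 type vxlan id 100 dstport 4789 \
    group 239.1.1.1 dev eth0 ttl 5
sudo ip link set vxlan100 up

# Bridge the VNI to local workloads.
sudo ip link add br100 type bridge
sudo ip link set vxlan100 master br100
sudo ip link set br100 up

bridge fdb show dev vxlan100   # MAC addresses learned from received frames
```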

Traditional policy deployment

Traditionally, deploying an application to the network involves propagating the policy throughout the entire infrastructure. Why? Because the network acts as an underlay, and segmentation rules configured on the underlay are needed to separate different applications and services.

This creates a rigid architecture that cannot react quickly and adapt to changes, therefore lacking agility. The applications and the physical network are tightly coupled. Now, we can have a policy in the overlay network with proper segmentation per customer.

1. Virtual Networking & ToR switches

Virtual networks, such as those built with VXLAN, originate from servers or ToR switches. Either way, the underlying network transports the traffic and doesn’t need to be configured to accommodate the customer application. Everything, including the policy, is done in the overlay network, which is most efficient when done in a fully distributed manner.

2. Flexibility of Overlay Networking

Now, application and service deployment occurs without touching the physical infrastructure. For example, if you need to have Layer 2 or Layer 3 paths across the data center network, you don’t need to tweak a VLAN or change routing protocols.  Instead, you add a VXLAN overlay network. This approach removes the tight coupling between the application and network, creating increased agility and simplicity in deploying applications and services.

**Key Point: Extending from the data center**

Edge computing creates a fundamental disruption among the business infrastructure teams. We no longer have the framework where IT only looks at the backend software, such as Office365, and OT looks at the routing and switching product-centric elements. There is convergence.

Therefore, you need many open APIs. The edge computing paradigm brings processing closer to the end devices, reducing latency and improving the end-user experience. To support this model, you need a network that can work with it. Having different siloed solutions does not work.

3. Required: Common software architecture

So the data center design went from the Layer 2 silo to the leaf-and-spine architecture with routing to the ToR. However, there is another missing piece. We need a common software architecture for switching and routing across all domains and location types to reduce operating costs. The problem remains that even on one site, there can be several different operating systems.

I have experienced the operational challenge of having many Cisco operating systems on one site through recent consultancy engagements. For example, I had IOS XR on the service provider product lines, IOS XE for the enterprise, and NX-OS in the data center, all on a single site.

4. Challenge: The traditional integrated vendor

Traditionally, networking products were a combination of hardware and software that had to be purchased as an integrated solution. Conversely, open networking disaggregates hardware from software, allowing IT to mix and match at will.

With Open Networking, we are not reinventing how packets are forwarded or how routers communicate. With Open Networking solutions, you are never alone and never tied to a single vendor. The value of software-defined networking and Open Networking is doing as much as possible in software so you don’t depend on a new generation of hardware to deliver new features. If you want a new feature, it is quickly implemented in software without swapping the hardware or upgrading line cards.

5. Required: Move intelligence to software.

You want to move as much intelligence as possible into software, thus removing the intelligence from the physical layer. You don’t want to build in hardware features; you want to use the software to provide the new features. This is a critical philosophy and is the essence of Open Networking. Software becomes the central point of intelligence, not the hardware; this intelligence is delivered fabric-wide.

As we have seen with the rise of SASE, customers gain more agility as they can move from generation to generation of services without hardware dependency and without the operational costs of constantly swapping out the hardware.

**SDN Network Design Options**

We have both controller-based and controllerless options. With a controllerless solution, setup is faster, agility increases, and we gain robustness against single points of failure, particularly around the out-of-band management network that would otherwise connect all the controllers.

SDN Controllerless & Controller architecture:

A controllerless architecture is more self-healing; anything in the overlay network is also part of the control plane resilience. An SDN controller or controller cluster may add complexity and impede resiliency. Since the network depends on them for operation, they become a single point of failure and can impact network performance. The intelligence kept in a controller can be a point of attack.

There are workarounds where the data plane can continue forwarding without an SDN controller, but you should always avoid a single point of failure or complex quorum mechanisms in a controller-based architecture.

We have two main types of automation to consider: day 0 and days 1-2. First and foremost, day 0 automation simplifies and reduces human error when building the infrastructure. Days 1-2 touch the customer more. This may include installing services quickly, e.g., VRF configuration, and building automation into the fabric.

A. Day 0 automation

As I said, day 0 automation builds the basic infrastructure, such as routing protocols and connection information. These stages need to be carried out before installing VLANs or services. Typical tools that software-defined networking uses are Ansible or your internal applications to orchestrate the building of the network.

Fabric Automation Tools

These are known as fabric automation tools. Once such a tool discovers the switches, the devices are connected in a prescribed way, and the fabric network is built without human intervention. This simplifies traditional automation and is particularly helpful in day 0 environments.

  • Configuration Management: Ansible is a configuration management tool that can help alleviate manual challenges. Ansible replaces the need for an operator to tune configuration files manually and does an excellent job in application deployment and orchestrating multi-deployment scenarios.  
  • Pre-deployed infrastructure: Ansible does not deploy the infrastructure; you could use other solutions, like Terraform, that are best suited for this. Terraform is an infrastructure-as-code tool. Ansible is often described as a configuration management tool and is typically mentioned along the same lines as Puppet, Chef, and Salt. However, there is a considerable difference in how they operate.

The most notable difference is the installation of agents: Ansible is agentless and therefore relatively easy to install. The Ansible architecture can be used in large environments with Ansible Tower, using execution environments and automation mesh. I have recently worked with automation mesh, a powerful overlay feature that enables automation closer to the network’s edge.

Ansible ensures that the managed asset’s current state meets the desired state. It is all about state management, and it does this with playbooks: YAML-based configuration management scripts that ensure the desired state is met.
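A minimal sketch of such a playbook follows; the group, package, and service names are hypothetical. Because it describes a desired state rather than a sequence of changes, re-running it is safe and only acts when the actual state has drifted.

```bash
cat > ntp.yml <<'EOF'
---
- name: Ensure NTP is installed and running
  hosts: linux_hosts
  become: true
  tasks:
    - name: Install chrony
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Ensure the time service is running and enabled
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
EOF

ansible-playbook -i inventory.ini ntp.yml   # idempotent: safe to re-run
```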

B. Day 1-2 automation

With day 1-2 automation, SDN does two things.

Firstly, installing or provisioning services automatically across the fabric is possible. With one command, human error is eliminated. The fabric synchronizes the policies across the entire network. It automates and disperses the provisioning operations across all devices. This level of automation is not classical, as this strategy is built into the SDN infrastructure. 

Secondly, it integrates network operations and services with virtualization infrastructure managers such as OpenStack, vCenter, OpenDaylight, or, at an advanced level, OpenShift networking SDN. How does the network adapt to the instantiation of new workloads via these systems? The network admin should not even be in the loop if, for example, a new virtual machine (VM) is created.

A signal that a VM with a specific configuration has been created should be propagated to all fabric elements. When the virtualization infrastructure managers provision a new service, you shouldn’t need to touch the network. This represents the ultimate agility, as the manual network touch points are removed.

Summary: Open Networking

Networking is vital in bringing people and ideas together in today’s interconnected world. Traditional closed networks have their limitations, but with the emergence of open networking, a new era of connectivity and collaboration has dawned. This blog post explored the concept of open networking, its benefits, and its impact on various industries and communities.

What is Open Networking?

Open networking uses open standards, open-source software, and open APIs to build and manage networks. Unlike closed networks that rely on proprietary systems and protocols, open networking promotes interoperability, flexibility, and innovation. It allows organizations to customize and optimize their networks based on their unique requirements.

Benefits of Open Networking

Enhanced Scalability and Agility: Open networking enables organizations to scale their networks more efficiently and adapt to changing needs. Decoupling hardware and software makes adding or removing network components easier, making the network more agile and responsive.

Cost Savings: With open networking, organizations can choose hardware and software components from multiple vendors, promoting competition and reducing costs. This eliminates vendor lock-in and allows organizations to use cost-effective solutions without compromising performance or reliability.

Innovation and Collaboration: Open networking fosters innovation by encouraging collaboration among vendors, developers, and users. Developers can create new applications and services that leverage the network infrastructure with open APIs and open-source software. This leads to a vibrant ecosystem of solutions that continually push the boundaries of what networks can achieve.

Open Networking in Various Industries

Telecommunications: Open networking has revolutionized the telecommunications industry. Telecom operators can now build and manage their networks using standard hardware and open-source software, reducing costs and enabling faster service deployments. It has also paved the way for the adoption of virtualization technologies like Network Functions Virtualization (NFV) and Software-Defined Networking (SDN).

Data Centers: Open networking has gained significant traction in the world of data centers. Data center operators can achieve greater agility and scalability using open standards and software-defined networking. Open networking also allows for better integration with cloud platforms and the ability to automate network provisioning and management.

Enterprise Networks: Enterprises are increasingly embracing open networking to gain more control over their networks and reduce costs. Open networking solutions offer greater flexibility regarding hardware and software choices, enabling enterprises to tailor their networks to meet specific business needs. It also facilitates seamless integration with cloud services and enhances network security.

Open networking has emerged as a powerful force in today’s digital landscape. Its ability to promote interoperability, scalability, and innovation makes it a game-changer in various industries. Whether revolutionizing telecommunications, transforming data centers, or empowering enterprises, open networking connects the world in ways we never thought possible.

Cisco ACI Components

In today's rapidly evolving technological landscape, organizations are constantly seeking innovative solutions to streamline their network infrastructure. Enter Cisco ACI Networks, a game-changing technology that promises to redefine networking as we know it. In this blog post, we will explore the key features and benefits of Cisco ACI Networks, shedding light on how it is transforming the way businesses design, deploy, and manage their network infrastructure.

Cisco ACI, short for Application Centric Infrastructure, is an advanced networking solution that brings together physical and virtual environments under a single, unified policy framework. By providing a holistic approach to network provisioning, automation, and orchestration, Cisco ACI Networks enable organizations to achieve unprecedented levels of agility, efficiency, and scalability.

Simplified Network Management: Cisco ACI Networks simplify network management by abstracting the underlying complexity of the infrastructure. With a centralized policy model, administrators can define and enforce network policies consistently across the entire network fabric, regardless of the underlying hardware or hypervisor.

Enhanced Security: Security is a top concern for any organization, and Cisco ACI Networks address this challenge head-on. By leveraging microsegmentation and integration with leading security platforms, ACI Networks provide granular control and visibility into network traffic, helping organizations mitigate potential threats and adhere to compliance requirements.

Scalability and Flexibility: The dynamic nature of modern business demands a network infrastructure that can scale effortlessly and adapt to changing requirements. Cisco ACI Networks offer unparalleled scalability and flexibility, allowing businesses to seamlessly expand their network footprint, add new services, and deploy applications with ease.

Data Center Virtualization: Cisco ACI Networks have revolutionized data center virtualization by providing a unified fabric that spans physical and virtual environments. This enables organizations to achieve greater operational efficiency, optimize resource utilization, and simplify the deployment of virtualized workloads.

Multi-Cloud Connectivity: In the era of hybrid and multi-cloud environments, connecting and managing disparate cloud services can be a daunting task. Cisco ACI Networks facilitate seamless connectivity between on-premises data centers and various public and private clouds, ensuring consistent network policies and secure communication across the entire infrastructure.

Cisco ACI Networks offer a paradigm shift in network infrastructure, empowering organizations to build agile, secure, and scalable networks tailored to their specific needs. With its comprehensive feature set, simplified management, and seamless integration with virtual and cloud environments, Cisco ACI Networks are poised to shape the future of networking. Embrace this transformative technology, and unlock a world of possibilities for your organization.

Highlights: Cisco ACI Components

The ACI Fabric

Cisco ACI is a software-defined networking (SDN) solution that integrates with software and hardware. With the ACI, we can create software policies and use hardware for forwarding, an efficient and highly scalable approach offering better performance. The hardware for ACI is based on the Cisco Nexus 9000 platform product line. The APIC centralized policy controller drives the software, which stores all configuration and statistical data.

–The Cisco Nexus Family–

To build the ACI underlay, you must exclusively use the Nexus 9000 family of switches. You can choose from modular Nexus 9500 switches or fixed 1U to 2U Nexus 9300 models. Specific models and line cards are dedicated to the spine function in ACI fabric; others can be used as leaves, and some can be used for both purposes. You can combine various leaf switches inside one fabric without any limitations.

a) Cisco ACI Fabric: Cisco ACI’s foundation lies in its fabric, which forms the backbone of the entire infrastructure. The ACI fabric comprises leaf switches, spine switches, and the application policy infrastructure controller (APIC). Each component ensures a scalable, agile, and resilient network.

b) Leaf Switches: Leaf switches serve as the access points for endpoints within the ACI fabric. They provide connectivity to servers, storage devices, and other network devices. With their high port density and advanced features, such as virtual port channels (vPCs) and fabric extenders (FEX), leaf switches enable efficient and flexible network designs.

c) Spine Switches: Spine switches serve as the core of the ACI fabric, providing high-bandwidth connectivity between the leaf switches. They use a non-blocking, multipath forwarding mechanism to ensure optimal traffic flow and eliminate bottlenecks. With their modular design and support for advanced protocols like Ethernet VPN (EVPN), spine switches offer scalability and resiliency.

d) Application Policy Infrastructure Controller (APIC): At the heart of Cisco ACI is the APIC, a centralized management and policy control plane. The APIC acts as a single control point, simplifying network operations and enabling policy-based automation. It provides a comprehensive view of the entire fabric, allowing administrators to define and enforce policies across the network (a short REST API sketch follows this list).

e) Integration with Virtualization and Cloud Environments: Cisco ACI seamlessly integrates with virtualization platforms such as VMware vSphere and Microsoft Hyper-V and cloud environments like Amazon Web Services (AWS) and Microsoft Azure. This integration enables consistent policy enforcement and visibility across physical, virtual, and cloud infrastructures, enhancing agility and simplifying operations.
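Because the APIC exposes the entire object model through a REST API, even simple queries illustrate the single-point-of-control idea. The sketch below assumes a reachable APIC at a placeholder address with example credentials; fvTenant and fvCEp are the object classes for tenants and learned client endpoints.

```bash
# Authenticate against the APIC and store the session cookie.
curl -sk -X POST https://apic.example.com/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}' \
  -c cookie.txt

# Read fabric-wide state from the object model.
curl -sk -b cookie.txt https://apic.example.com/api/class/fvTenant.json
curl -sk -b cookie.txt https://apic.example.com/api/class/fvCEp.json
```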

–ACI Architecture: Spine and Leaf–

To be used as ACI spines or leaves, Nexus 9000 switches must be equipped with powerful Cisco CloudScale ASICs manufactured using 16-nm technology. The following figure shows the Cisco ACI based on the Nexus 9000 series. Cisco Nexus 9300 and 9500 platform switches support Cisco ACI. As a result, organizations can use them as spines or leaves to utilize an automated, policy-based systems management approach fully. 

Diagram: Cisco ACI Components. Source: Cisco.

**Hardware-based Underlay**

Server virtualization helped by decoupling workloads from the hardware, making the compute platform more scalable and agile. However, the server is not the main interconnection point for network traffic; the network is. So, we need to look at how we can virtualize the network infrastructure to gain an agility similar to what server virtualization delivered.

**Mapping Network Endpoints**

This is carried out with software-defined networking and overlays that could map network endpoints and be spun up and down as needed without human intervention. In addition, the SDN architecture includes an SDN controller and an SDN network that enables an entirely new data center topology.

**Specialized Forwarding Chips**

In ACI, hardware-based underlay switching offers a significant advantage over software-only solutions due to specialized forwarding chips. Furthermore, thanks to Cisco’s ASIC development, ACI brings many advanced features, including security policy enforcement, microsegmentation, dynamic policy-based redirect (inserting external L4-L7 service devices into the data path), or detailed flow analytics—besides the vast performance and flexibility.

Related: For pre-information, you may find the following helpful:

  1. Data Center Security 
  2. VMware NSX

Cisco ACI Components

Introduction to Leaf and Spine

The Cisco SDN ACI works with a Clos architecture, a fully meshed ACI network based on a spine-leaf design. Every Leaf is physically connected to every Spine, enabling traffic forwarding through non-blocking links. Physically, the leaf switches form a leaf layer attached to the spines in a full bipartite graph: each Leaf is connected to each Spine, and each Spine is connected to each Leaf.

The ACI uses a horizontally elongated Leaf and Spine architecture with one hop to every host in a fully meshed ACI fabric, offering the throughput and convergence needed for today’s applications.

The ACI fabric: Does Not Aggregate Traffic

A key point in the spine-and-leaf design is the fabric concept, which is like a stretched network. One of the core ideas around a fabric is that it does not aggregate traffic. This, along with the non-blocking architecture, increases data center performance. With the spine-leaf topology, we are spreading the fabric across multiple devices.

Required: Increased Bandwidth Available

The result of the fabric is that each edge device has the total bandwidth of the fabric available to every other edge device. This is one big difference from traditional data center designs, where we aggregate traffic by either stacking multiple streams onto a single link or carrying the streams serially.

Challenge: Oversubscription

With the traditional 3-tier design, we aggregate everything at the core, leading to oversubscription ratios that degrade performance. With the ACI Leaf and Spine design, we spread the load across all devices with equidistant endpoints, allowing us to carry the streams parallel.

Required: Routed Multipathing

Then, we have horizontal scaling with load balancing. Load balancing in this topology uses multipathing to achieve the desired bandwidth between the nodes. Even though this forwarding paradigm can be based on Layer 2 forwarding (bridging) or Layer 3 forwarding (routing), the ACI leverages a routed approach to the Leaf and Spine design, and we have Equal-Cost Multi-Path (ECMP) for both Layer 2 and Layer 3 traffic.

**Overlay and Underlay Design**

Mapping Traffic:

So you may be asking how we can have Layer 3 routed core and pass Layer 2 traffic. This is done using the overlay, which can map different traffic types to other overlays. So, we can have Layer 2 traffic mapped to an overlay over a routed core.

L3 active-active links: ACI links between the Leaf and the Spine switches are L3 active-active links. Therefore, we can intelligently load balance and traffic steer to avoid issues. We don’t need to rely on STP to block links or involve STP in fixing the topology.

Challenge: IP – Identity & Location

When networks were first developed, there was no such thing as an application moving from one place to another while it was in use. So, the original architects of IP, the communication protocol used between computers, used the IP address to indicate both the identity of a device connected to the network and its location on the network. Today, in the modern data center, we need to be able to communicate with an application or application tier, no matter where it is.

Required: Overlay Encapsulation

One day, it may be in location A and the next in location B, but its identity, which we communicate with, is the same on both days. An overlay is when we encapsulate an application’s original message with the location to which it needs to be delivered before sending it through the network. Once it arrives at its final destination, we unwrap it and deliver the original message as desired.

The identities of the devices (applications) communicating are in the original message, and the locations are in the encapsulation, thus separating the place from the identity. This wrapping and unwrapping is done on a per-packet basis and, therefore, must be done quickly and efficiently.

**Overlay and Underlay Components**

The Cisco SDN ACI has an overlay and underlay concept, which forms a virtual overlay solution. The role of the underlay is to glue together devices so the overlay can work and be built on top. So, the overlay, which is VXLAN, runs on top of the underlay, which is IS-IS. In the ACI, the IS-IS protocol provides the routing for the overlay, which is why we can provide ECMP from the Leaf to the Spine nodes. The routed underlay provides an ECMP network where all leaves can access Spine and have the same cost links. 

Diagram: Overlay. Source: Cisco.

Underlay & Overlay Interaction

Example: 

Let’s take a simple example to illustrate how this is done. Imagine that application App-A wants to send a packet to App-B. App-A is located on a server attached to switch S1, and App-B is initially on switch S2. When App-A creates the message, it will put App-B as the destination and send it to the network; when the message is received at the edge of the network, whether a virtual edge in a hypervisor or a physical edge in a switch, the network will look up the location of App-B in a “mapping” database and see that it is attached to switch S2.

It will then put the address of S2 outside of the original message. So, we now have a new message addressed to switch S2. The network will forward this new message to S2 using traditional networking mechanisms. Note that the location of S2 is very static, i.e., it does not move, so using traditional mechanisms works just fine.

Upon receiving the new message, S2 will remove the outer address and thus recover the original message. Since App-B is directly connected to S2, it can easily forward the message to App-B. App-A never had to know where App-B was located, nor did the network’s core. Only the edge of the network, specifically the mapping database, had to know the location of App-B. The rest of the network only had to see the location of switch S2, which does not change.

Let’s now assume App-B moves to a new location switch S3. Now, when App-A sends a message to App-B, it does the same thing it did before, i.e., it addresses the message to App-B and gives the packet to the network. The network then looks up the location of App-B and finds that it is now attached to switch S3. So, it puts S3’s address on the message and forwards it accordingly. At S3, the message is received, the outer address is removed, and the original message is delivered as desired.

App-A did not track App-B’s movement at all. App-B’s address identified it, while the switch’s address, S2 or S3, identified its location. App-A can communicate freely with App-B no matter where it is located, allowing the system administrator to place App-B anywhere and move it as desired, thus achieving the flexibility needed in the data center.

Multicast Distribution Tree (MDT)

We have a multicast distribution tree (MDT) on top that is used to forward multi-destination traffic without creating loops. The multicast distribution tree is dynamically built to send flood traffic for specific protocols, again without creating loops in the overlay network. The tunnels created for the endpoints to communicate have tunnel endpoints, known as VTEPs. The VTEP addresses are assigned to each Leaf switch from a pool that you specify in the ACI startup and discovery process.

Normalize the transports

VXLAN tunnels normalize the transport in the ACI network: traffic between endpoints is delivered through the VXLAN tunnel over any transport network, regardless of the device connecting to the fabric.

So, using VXLAN in the overlay enables any network, and you don’t need to configure anything special on the endpoints for this to happen. The endpoints that connect to the ACI fabric do not need special software or hardware. The endpoints send regular packets to the leaf nodes they are connected to directly or indirectly. As endpoints come online, they send traffic to reach a destination.

Bridge Domains and VRF

Therefore, the Cisco SDN ACI under the hood will automatically start to build the VXLAN overlay network for you. The VXLAN network is based on the Bridge Domain (BD), or VRF ACI constructs deployed to the leaf switches. The Bridge Domain is for Layer 2, and the VRF is for Layer 3. So, as devices come online and send traffic to each other, the overlay will grow in reachability in the Bridge Domain or the VRF. 

Direct host routing for endpoints

Routing within each tenant VRF is based on host routing for endpoints directly connected to the Cisco ACI fabric. For IPv4, the host routing is based on /32 routes, giving the ACI a very accurate picture of the endpoints. Therefore, we have exact routing in the ACI. In conjunction, we have a COOP database that runs on the Spines and offers a remarkably optimized fabric that knows where all the endpoints are located.

To facilitate this, every node in the fabric has a TEP address, and we have different types of TEPs depending on the device’s role. The Spine and the Leaf will have TEP addresses but will differ from each other.

Diagram: COOP database.

The VTEP and PTEP

The Leaf nodes are the virtual tunnel endpoints (VTEPs), also known as the physical tunnel endpoints (PTEPs) in ACI. These PTEP addresses represent the “WHERE” in the ACI fabric where an endpoint lives. Cisco ACI uses a dedicated VRF and a subinterface of the uplinks from the Leaf to the Spines as the infrastructure to carry VXLAN traffic. In Cisco ACI terminology, the transport infrastructure for VXLAN traffic is known as Overlay-1, which is part of the tenant “infra.”

**The Spine TEP**

The Spines also have a PTEP and an additional proxy TEP, which are used for forwarding lookups into the mapping database. The Spines have a global view of where everything is, which is held in the COOP database synchronized across all Spine nodes. All of this is done automatically for you.

**Anycast IP Addressing**

For this to work, the Spines have an Anycast IP address known as the Proxy TEP. The Leaf can use this address if they do not know where an endpoint is, so they ask the Spine for any unknown endpoints, and then the Spine checks the COOP database. This brings many benefits to the ACI solution, especially for traffic optimizations and reducing flooded traffic in the ACI. Now, we have an optimized fabric for better performance.

The ACI optimizations

**Mouse and elephant flow**

This provides better performance for load balancing different flows. For example, in most data centers, we have latency-sensitive flows, known as mouse flows, and long-lived bandwidth-intensive flows, known as elephant flows. 

The ACI load-balances traffic more precisely, using algorithms that optimize mouse and elephant flows and distribute traffic based on flowlets: flowlet load balancing. Within the Leaf-and-Spine fabric, latency is low and consistent from port to port.

The maximum latency of a packet from one port to another is the same regardless of the network size, so you can scale the network without degrading performance. Scaling is often done on a POD-by-POD basis: for more extensive networks, each POD would be its own Leaf and Spine network.

**ARP optimizations: Anycast gateways**

The ACI comes by default with a lot of traffic optimizations. Firstly, instead of broadcasting ARP requests across the network, which can hamper performance, the Leaf can assume that the Spine knows where the destination is (and it does, via the COOP database), so there is no need to broadcast to everyone to find a destination.

If the Spine knows where the endpoint is, it will forward the traffic to the correct Leaf. If not, it will drop the traffic.

**Fabric anycast addressing**

This again adds performance benefits to the ACI solution, as the table sizes on the Leaf switches can be kept smaller than they would be if every Leaf needed to know all destinations, including those it never communicates with. On the Leaf, we have an Anycast address, too.

These fabric Anycast addresses are available for Layer 3 interfaces. On the Leaf ToR, we can establish an SVI that uses the same MAC address on every ToR; therefore, when an endpoint needs to route to a ToR, it doesn’t matter which ToR you use. The Anycast Address is spread across all ToR leaf switches. 

**Pervasive gateway**

Now we have predictable latency to the first hop, and you will use the local route VRF table within that ToR instead of traversing the fabric to a different ToR. This is the Pervasive Gateway feature that is used on all Leaf switches. The Cisco ACI has many advanced networking features, but the pervasive gateway is my favorite. It does take away all the configuration mess we had in the past.

ACI Cisco: Integrations

  • Routing Control Platform

Then along came Cisco SDN ACI, which operates differently from the traditional data center, with an application-centric infrastructure. The Cisco application-centric infrastructure achieves resource elasticity with automation, through standard policies for data center operations, and with consistent policy management across multiple on-premises and cloud instances.

  • Extending & Integrating the fabric

What makes the Cisco ACI interesting is its several vital integrations: not just extending the data center with Multi-Pod and Multi-Site, but also integrations with, for example, AlgoSec, Cisco AppDynamics, and SD-WAN. AlgoSec enables secure application delivery and policy across hybrid network estates, AppDynamics lives in the world of distributed-systems Observability, and SD-WAN enables per-application path performance over virtual WANs.

Cisco Multi-Pod Design

Cisco ACI Multi-Pod is part of the “Single APIC Cluster / Single Domain” family of solutions, as a single APIC cluster is deployed to manage all the interconnected ACI networks. These separate ACI networks are named “pods,” and each looks like a regular two-tier spine-leaf topology. The same APIC cluster can manage several pods, and to increase the resiliency of the solution, the various controller nodes that make up the cluster can be deployed across different pods.

Diagram: Cisco ACI Multi-Pod. Source: Cisco.

ACI Cisco and AlgoSec

With AlgoSec integrated with the Cisco ACI, we can now provide automated security policy change management for multi-vendor devices and risk and compliance analysis. The AlgoSec Security Management Solution for Cisco ACI extends ACI’s policy-driven automation to secure various endpoints connected to the Cisco SDN ACI fabric.

These integrations simplify network security policy management across on-premises firewalls, SDNs, and cloud environments. They also provide visibility into ACI’s security posture, even across multi-cloud environments.

ACI Cisco and AppDynamics 

Then, with AppDynamics, we head into Observability and controllability. Now, we can correlate application health and the network for optimal performance, deep monitoring, and fast root-cause analysis across complex distributed systems in which vast numbers of business transactions need to be tracked.

This will give your teams complete visibility of your entire technology stack, from your database servers to cloud-native and hybrid environments. In addition, AppDynamics works with agents that monitor application behavior in several ways. We will examine the types of agents and how they work later in this post.

ACI Cisco and SD-WAN 

SD-WAN brings a software-defined approach to the WAN. It enables a virtual WAN architecture that leverages transport services such as MPLS, LTE, and broadband internet. So, SD-WAN is not a new technology; its benefits are well known, including improving application performance, increasing agility, and, in some cases, reducing costs.

The Cisco ACI and SD-WAN integration makes active-active data center design less risky than in the past. The following figure gives a high-level overview of the Cisco ACI and SD-WAN integration. For generic pre-information on SD-WAN, go here: SD-WAN Tutorial

Diagram: Cisco ACI and SD-WAN integration

The Cisco SDN ACI and SD-WAN integration helps ensure an excellent application experience by defining application Service-Level Agreement (SLA) parameters. Cisco ACI Release 4.1(1i) added support for WAN SLA policies. This feature enables admins to apply pre-configured policies that specify the packet loss, jitter, and latency levels for tenant traffic over the WAN.

When you apply a WAN SLA policy to tenant traffic, the Cisco APIC sends the pre-configured policies to a vManage controller. The vManage controller, configured as an external device manager that provides SD-WAN capability, chooses the best WAN link that meets the loss, jitter, and latency parameters specified in the SLA policy.
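As a rough illustration of that selection logic (not the actual vManage implementation), the following Python sketch models the SLA thresholds and filters the WAN links whose measured loss, jitter, and latency all meet the policy. The link names and numbers are invented.

```python
# Hypothetical WAN SLA policy and link selection (illustrative only).
sla_policy = {"loss_pct": 1.0, "jitter_ms": 30, "latency_ms": 150}

wan_links = [
    {"name": "mpls-1",      "loss_pct": 0.1, "jitter_ms": 5,  "latency_ms": 40},
    {"name": "broadband-1", "loss_pct": 2.5, "jitter_ms": 45, "latency_ms": 90},
]

def links_meeting_sla(links, policy):
    """Return the links whose measured metrics are within every SLA threshold."""
    return [l for l in links
            if l["loss_pct"] <= policy["loss_pct"]
            and l["jitter_ms"] <= policy["jitter_ms"]
            and l["latency_ms"] <= policy["latency_ms"]]

print([l["name"] for l in links_meeting_sla(wan_links, sla_policy)])  # ['mpls-1']
```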

OpenShift and Cisco SDN ACI

OpenShift Container Platform (formerly known as OpenShift Enterprise), or OCP, is Red Hat’s offering for an on-premises private platform as a service (PaaS). OpenShift is based on the Origin open-source project and is a distribution of Kubernetes, the de facto standard for container orchestration. The foundation of OpenShift’s SDN is Kubernetes networking; it therefore shares much of the same networking technology, along with some enhancements such as the OpenShift route construct.

Other data center integrations

Cisco SDN ACI has another integration with Cisco DNA Center/ISE that maps user identities consistently to endpoints and apps across the network, from campus to the data center. Cisco Software-Defined Access (SD-Access) provides policy-based automation from the edge to the data center and the cloud.

Cisco SD-Access provides automated end-to-end segmentation to separate user, device, and application traffic without redesigning the network. This integration will enable customers to use standard policies across Cisco SD-Access and Cisco ACI, simplifying customer policy management using Cisco technology in different operational domains.

OpenShift and Cisco ACI

OpenShift does this with an SDN layer that enhances Kubernetes networking to create a virtual network across all the nodes, built on Open vSwitch (OVS). For OpenShift SDN, this pod network is established and maintained by the OpenShift SDN plugin, which configures an overlay network using a virtual switch called the OVS bridge; this forms an OVS network programmed with several OVS flow rules. OVS is a popular open-source solution for virtual switching.

OpenShift SDN plugin

We mentioned that you can tailor the virtual network topology to suit your networking requirements, which is determined by the OpenShift SDN plugin and the SDN mode you select. With the default OpenShift SDN, several modes are available (such as ovs-subnet, ovs-multitenant, and ovs-networkpolicy). The mode you choose governs how connectivity between applications is managed and how external access is provided. Some modes are more fine-grained than others; the Cisco ACI plugin offers the most granular.

Integrating ACI and OpenShift platform

The Cisco ACI CNI plugin for the OpenShift Container Platform provides a single, programmable network infrastructure, enterprise-grade security, and flexible micro-segmentation possibilities. The APIC can provide all networking needs for the workloads in the cluster. Kubernetes workloads become fabric endpoints, like Virtual Machines or Bare Metal endpoints.

Cisco ACI CNI Plugin

The Cisco ACI CNI plugin extends the ACI fabric capabilities to OpenShift clusters to provide IP Address Management, networking, load balancing, and security functions for OpenShift workloads. In addition, the Cisco ACI CNI plugin connects all OpenShift Pods to the integrated VXLAN overlay provided by Cisco ACI.

Cisco SDN ACI and AppDynamics

AppDynamics overview

So, an application requires multiple steps or services to work. These services may include logging in, searching, and adding something to a shopping cart. Such services invoke various applications, web services, third-party APIs, and databases; these interactions are known as business transactions.

The user’s critical path

A business transaction is the essential user interaction with the system and is the customer’s critical path. Therefore, business transactions are the things you care about: if they start to degrade, so does your system. You need ways to discover your business transactions and determine whether there are any deviations from baselines. This should also be automated, as learning baselines and business transactions in deep systems is nearly impossible with a manual approach.

So, how do you discover all these business transactions?

AppDynamics automatically discovers business transactions and builds an application topology map of how the traffic flows. The topology map reveals usage patterns and hidden flows, a perfect feature for an observability platform.

AppDynamic topology

AppDynamics will automatically discover the topology for all of your application components. It can then build a performance baseline by capturing metrics and traffic patterns. This allows you to highlight issues when services and components are slower than usual.

AppDynamics uses agents to collect all the information it needs. An agent monitors and records the calls made to a service, starting from the entry point and following execution along its path through the call stack.

Types of Agents for Infrastructure Visibility

If agents are installed on all the critical parts, you can get information about each specific instance, which helps you build a global picture. So we have an Application Agent, a Network Agent, and Machine Agents for server visibility and hardware/OS.

  • App Agent: This agent monitors apps and app servers. Example metrics include slow transactions, stalled transactions, response times, wait times, block times, and errors.
  • Network Agent: This agent monitors the network packets, TCP connections, and TCP sockets. Example metrics include performance-impact events, packet loss and retransmissions, RTT for data transfers, TCP window size, and connection setup/teardown.
  • Machine Agent (Server Visibility): This agent monitors the number of processes, services, caching, swapping, paging, and queuing. Example metrics include hardware/software interrupts, virtual memory/swapping, process faults, and CPU/disk/memory utilization by process.
  • Machine Agent (Hardware/OS): This agent monitors disks, volumes, partitions, memory, and CPU. Example metrics include CPU busy time, memory utilization, and page file usage.

Automatic establishment of the baseline

A baseline is an essential, critical step in your monitoring strategy. Doing this manually is hard, if not impossible, with complex applications; it is much better done automatically. You need to establish the baseline automatically and be alerted to deviations from it.

This will help you pinpoint the issue faster and resolve it before users are affected. Platforms such as AppDynamics can help you here: malicious activity shows up as deviations from the security baseline, and performance issues as deviations from the network baseline.
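The idea of a baseline with automated deviation alerts can be reduced to a simple statistical check. The sketch below, assuming a plain list of historical response-time samples, flags any new sample more than three standard deviations from the learned mean; real platforms such as AppDynamics use far more sophisticated models.

```python
import statistics

# Learned baseline from historical response times (ms); sample data is invented.
history = [102, 98, 110, 105, 99, 101, 97, 104]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def deviates_from_baseline(sample_ms: float, sigmas: float = 3.0) -> bool:
    """Flag a sample that falls outside mean +/- sigmas * stdev."""
    return abs(sample_ms - mean) > sigmas * stdev

print(deviates_from_baseline(103))  # False: within the baseline
print(deviates_from_baseline(450))  # True: alert on the deviation
```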

Summary: Cisco ACI Components

In the ever-evolving world of networking, organizations are constantly seeking ways to enhance their infrastructure’s performance, security, and scalability. Cisco ACI (Application Centric Infrastructure) presents a cutting-edge solution to these challenges. By unifying physical and virtual environments and leveraging network automation, Cisco ACI revolutionizes how networks are built and managed.

Understanding Cisco ACI Architecture

At the core of Cisco ACI lies a robust architecture that enables seamless integration between applications and the underlying network infrastructure. The architecture comprises three key components:

1. Application Policy Infrastructure Controller (APIC):

The APIC serves as the centralized management and policy engine of Cisco ACI. It provides a single point of control for configuring and managing the entire network fabric. Through its intuitive graphical user interface (GUI), administrators can define policies, allocate resources, and monitor network performance.

2. Nexus Switches:

Cisco Nexus switches form the backbone of the ACI fabric. These high-performance switches deliver ultra-low latency and high throughput, ensuring optimal data transfer between applications and the network. Nexus switches provide the necessary connectivity and intelligence to enable the automation and programmability features of Cisco ACI.

3. Application Network Profiles:

Application Network Profiles (ANPs) are a fundamental aspect of Cisco ACI. ANPs define the policies and characteristics required for specific applications or application groups. By encapsulating network, security, and quality of service (QoS) policies within ANPs, administrators can streamline the deployment and management of applications.

The Power of Network Automation

One of the most compelling aspects of Cisco ACI is its ability to automate network provisioning, configuration, and monitoring. Through the APIC’s powerful automation capabilities, network administrators can eliminate manual tasks, reduce human errors, and accelerate the deployment of applications. With Cisco ACI, organizations can achieve greater agility and operational efficiency, enabling them to rapidly adapt to evolving business needs.

Security and Microsegmentation with Cisco ACI

Security is a paramount concern for every organization. Cisco ACI addresses this by providing robust security features and microsegmentation capabilities. With microsegmentation, administrators can create granular security policies at the application level, effectively isolating workloads and preventing lateral movement of threats. Cisco ACI also integrates with leading security solutions, enabling seamless network enforcement and threat intelligence sharing.

Conclusion

Cisco ACI is a game-changer in the realm of network automation and infrastructure management. Its innovative architecture, coupled with powerful automation capabilities, empowers organizations to build agile, secure, and scalable networks. By leveraging Cisco ACI’s components, businesses can unlock new levels of efficiency, flexibility, and performance, ultimately driving growth and success in today’s digital landscape.


Intent-Based Networking

Intent Based Networking

In today's rapidly advancing technological landscape, the demand for efficient and intelligent networking solutions continues to rise. Intent-Based Networking (IBN) has emerged as a transformative approach that simplifies network management, enhances security, and enables businesses to align their network operations with their overall objectives.

Intent-Based Networking represents a paradigm shift in the way networks are designed, deployed, and managed. At its core, IBN leverages automation, artificial intelligence (AI), and machine learning (ML) to interpret high-level business policies and translate them into automated network configurations. By abstracting network complexity, IBN empowers organizations with greater control, visibility, and agility.

1. Policy Definition: IBN relies on a declarative approach, where network administrators define policies based on business intent rather than dealing with low-level configurations. This simplifies the process of managing networks and reduces human errors.

2. Real-Time Analytics: By continuously gathering and analyzing network data, IBN platforms provide actionable insights that enable proactive network optimization, troubleshooting, and security threat detection. This real-time visibility empowers IT teams to make informed decisions and respond swiftly to network events.

3. Automation and Orchestration: IBN leverages automation to dynamically adjust network configurations based on intent. It automates routine tasks, such as device provisioning, policy enforcement, and network provisioning, freeing up IT resources for more strategic initiatives.

1. Enhanced Network Security: IBN's ability to enforce policies consistently across the network enhances security by minimizing vulnerabilities and ensuring compliance. It enables organizations to swiftly identify and respond to security threats, reducing the risk of data breaches.

2. Improved Network Efficiency: IBN's automation capabilities streamline network operations, reducing manual errors and optimizing performance. Through dynamic network provisioning and configuration, organizations can adapt to changing business needs, ensuring efficient resource utilization.

3. Simplified Network Management: The abstraction of network complexity and the use of high-level policies simplify network management tasks. This reduces the learning curve for IT professionals and accelerates the deployment of new network services.

Intent-Based Networking represents a major leap forward in network management, offering organizations unprecedented levels of control, agility, and security. By embracing the power of automation, AI, and intent-driven policies, businesses can unlock the full potential of their networks and position themselves for future success.

Highlights: Intent Based Networking

Understanding Intent-Based Networking

–  Intent-Based Networking is a paradigm shift in network management focusing on translating high-level business objectives into automated network configurations. By leveraging artificial intelligence and machine learning, IBN aims to streamline network operations, enhance security, and improve overall network performance. It truly empowers network administrators to align the network’s behavior with business intent.

– IBN offers many benefits that make it a game-changer in the networking realm. Firstly, it enables faster network provisioning and troubleshooting, reducing human error and minimizing downtime. Secondly, IBN enhances network security through real-time monitoring and automated threat response. Additionally, IBN provides valuable insights and analytics, enabling better decision-making and optimized resource allocation.

– Implementing IBN requires careful planning and execution. It integrates various components, such as a centralized controller, network devices with programmable interfaces, and AI-powered analytics engines. Furthermore, organizations must assess their network infrastructure and determine the automation and intelligence needed. Collaborating with experienced vendors and leveraging their expertise can facilitate implementation.

Defining: Intent-Based Networking

Intent-Based Networking can only be understood if we first understand what intent is. Purpose is a synonym, which makes intent easier to define. Intentions or purposes vary from person to person, department to department, and organization to organization: one organization’s purpose might be to provide best-in-class software to schools, another’s to provide the best phones available. The purpose of a business process can be to fulfill the described task as efficiently as possible, and a person can, of course, have multiple intentions or purposes. Generally, intent or purpose describes a goal to be achieved.

Example: Cisco DNA

As a network infrastructure built on Cisco DNA, IBN describes how to manage, operate, and enable a digital business using the network. An intent is translated into a network configuration that fulfills it; this is accomplished by defining the intent as a set of (repeatable) steps. Cisco DNA approaches networks using all aspects of IBN (design principles, concepts, etc.).

**Challenge: The Lack of Agility**

Intent-based networking is not just hype; we already see many intent-driven networks in the many SD-WAN overlay roll-outs. It is a necessary development, and from a technology standpoint, it has arrived. However, cultural acceptance will take a little longer. Organizations are looking to modernize their business processes and their networks.

Yet, traditional vertically integrated, monolithic networking solutions prevent the network from achieving agility. This is why we need intent-based networking systems. So, what is intent-based networking? It is a model where an end user describes what the network should do, and the system automatically configures the policy, using declarative statements instead of imperative ones.

**Converts the What into How**

You are telling the network what you want to accomplish, not precisely what to do and how to do it. For example, tell me what you want, not how to do it, all of which gets translated behind the scenes. Essentially, intent-based networking is a piece of open networking software that takes the “what” and converts it into the “how.” The system generates the resulting configuration for design and device implementation.

The system is provided with algorithms that translate business intent into network configurations. Humans cannot match the speed of algorithms, and this is key. The system is aware of the network state and can ingest real-time network status from multiple sources in a transport- and protocol-agnostic way.
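To make the “what” versus “how” concrete, here is a deliberately simplified Python sketch of intent translation. The intent structure and the generated ACL lines are invented for illustration and do not correspond to any vendor’s implementation.

```python
# Hypothetical intent: a declarative "what", with no device commands in it.
intent = {"app": "web-shop", "segment": "pci",
          "allow": ["10.1.1.0/24 -> 10.2.2.0/24:443"]}

def translate(intent: dict) -> list[str]:
    """Translate declarative business intent into imperative, device-level lines."""
    lines = []
    for rule in intent["allow"]:
        src, dst = rule.split(" -> ")
        dst_net, port = dst.split(":")
        lines.append(f"permit tcp {src} {dst_net} eq {port}"
                     f"  ! {intent['app']}/{intent['segment']}")
    return lines

for line in translate(intent):
    print(line)
```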

**The Desired State**

It adds the final piece of the puzzle by validating in real time that the intent is being met. The system continuously compares the actual to the desired state of the running network. If the desired state is unmet, corrective actions such as modifying a QoS policy or applying an access control list (ACL) can occur. This allows for a closer alignment between the network infrastructure and business initiatives and maintains the network’s correctness.
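The continuous compare-and-correct cycle can be sketched as a toy reconciliation loop. This assumes desired and actual state are simple dictionaries; a real system would ingest telemetry and drive device APIs instead.

```python
# Toy closed-loop assurance: compare desired vs. actual state, then correct.
desired = {"vlan10": "present", "acl_web": "applied", "qos_voice": "applied"}
actual  = {"vlan10": "present", "acl_web": "missing", "qos_voice": "applied"}

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the corrective actions needed to reach the desired state."""
    actions = []
    for item, want in desired.items():
        have = actual.get(item)
        if have != want:
            actions.append(f"remediate {item}: {have} -> {want}")
    return actions

print(reconcile(desired, actual))  # ['remediate acl_web: missing -> applied']
```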

Related: Before you proceed, you may find the following posts helpful.

  1. Network Configuration Automation
  2. Distributed Systems Observability
  3. Reliability in Distributed Systems
  4. Container Networking
  5. Overlay Virtual Networking

Intent Based Networking

To understand intent-based networking, we first need to pin down what an intent is. An intent is a brief description of a purpose and a concrete, predetermined set of steps that must be executed to (successfully) achieve it. This principle can also be applied to the operation of a network infrastructure: the intent and its steps precisely describe what needs to be done on the network to accomplish a specific task.

For example, suppose an application is migrating to the cloud. In this case, the intent’s steps may include the following: take the existing access policy for that application from the data center policy, transform it into an application policy for internet access, deploy the policy on all perimeter firewalls, and change the routing for that application to the cloud.

Critical Principles of Intent-Based Networking:

1. Translation: Intent-based networking automatically translates high-level business objectives into network policies and configurations. By understanding the desired intent, the network infrastructure can autonomously make the necessary adjustments to align with the organization’s goals.

2. Automation: Automation is a fundamental aspect of IBN. By automating routine network tasks, such as provisioning, configuration, and troubleshooting, network administrators can focus on strategic activities that add value to the organization. Automation also reduces the risk of human error, leading to improved network reliability and security.

3. Assurance: IBN provides continuous monitoring and verification of network behavior against the intended state. By constantly comparing the network’s current state with the desired intent, IBN can promptly identify and mitigate any configuration drift or anomalies. This proactive approach enhances network visibility, performance, and compliance.

Benefits of Intent-Based Networking:

1. Simplified Network Management: With IBN, network administrators can easily manage complex networks. By abstracting the complexity of individual devices and focusing on business intent, IBN simplifies network operations, reducing the need for manual configuration and troubleshooting.

2. Enhanced Agility and Scalability: IBN enables organizations to respond quickly to changing business requirements and effortlessly scale their networks. By automating network provisioning and configuration, IBN supports rapid deployment and seamless integration of new services and devices.

3. Improved Network Security: Security is a top concern for modern networks. IBN offers enhanced security by continuously monitoring network behavior and enforcing security policies. This proactive approach reduces the risk of security breaches and enables faster threat detection and response.

4. Optimized Performance: IBN leverages real-time analytics and machine learning to optimize network performance. By dynamically adjusting network configurations based on traffic patterns and user behavior, IBN ensures optimal performance and user experience.

Intent-Based Networking Solution: Cisco SD-Access

The Cisco SD-Access digital network evolution transforms traditional campus LANs into intent-driven, programmable networks. Campus Fabric and DNA Center are Cisco SD-Access’s two main components. The Cisco DNA Center automates and assures the creation and monitoring of the Cisco Campus Fabric.

Cisco Campus Fabric Architecture: In Cisco SD-Access, fabric roles and terminology differ from those in traditional three-tier hierarchical networks. To create a logical topology, Cisco SD-Access implements fabric technology using overlay networks running on a physical (underlay) network.

Underlay networks are traditional physical networks that connect LAN devices such as routers and switches. Their primary function is to provide IP connectivity for traffic to travel from one point to another. Due to the IP-based underlay, any interior gateway protocol (IGP) can be utilized.

Overlay and Underlay Networking

Fabrics are overlay networks. In the IT world, Internet Protocol Security (IPsec), Generic Routing Encapsulation (GRE), Dynamic Multipoint Virtual Private Networks (DMVPN), Multiprotocol Label Switching (MPLS), Location Identifier Separation Protocol (LISP), and others are commonly used with overlay networks to connect devices virtually. An overlay network is a logical topology that connects devices over a topology-independent physical underlay network.

Example Overlay Technology: GRE Point to Point
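
As a quick illustration of the heading above, the following Python sketch generates one side of a point-to-point GRE tunnel in IOS-like syntax. All addresses and the interface number are made up.

```python
# Generate an IOS-style point-to-point GRE tunnel config (illustrative values).
def gre_tunnel_config(tunnel_id: int, tunnel_ip: str, src: str, dst: str) -> str:
    return "\n".join([
        f"interface Tunnel{tunnel_id}",
        f" ip address {tunnel_ip} 255.255.255.252",
        f" tunnel source {src}",        # underlay endpoint on this router
        f" tunnel destination {dst}",   # underlay endpoint on the far router
        " tunnel mode gre ip",
    ])

# One side of the tunnel; swap src/dst to build the other side.
print(gre_tunnel_config(0, "172.16.0.1", "192.0.2.1", "198.51.100.1"))
```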

Data & Control Plane

Forwarding and control planes are separated in overlay networks, resulting in a flexible, programmable, and scalable network; the separation also simplifies the underlay. The control plane becomes the network’s brain, enabling faster forwarding and optimizing packet handling and network reliability. Cisco SD-Access supports building the fabric on an existing network, which serves as the underlay for the centralized controller.

Underlay networks can be automated with Cisco DNA Center, which is helpful for new implementations or infrastructure growth since it eliminates the hassle of setting up the underlay. For differentiation, segmentation, and mobility, overlay networks often carry alternate forwarding attributes in an additional header.

Networking Complexity

Networks continue to get more complex as traffic demands increase. While software-defined networking (SDN) can abstract the underlying complexities, we must consider how we orchestrate the policy and intent across multi-vendor, multi-domain elements.

To overcome complexity, you have to abstract. We have been doing this with tunneling for decades. However, different abstractions are used at the business and infrastructure resource levels.

At a business level, you need to be flexible, as rules will change, and this must be approached differently from how the operating system models resources. We must make new architecture decisions here, as it’s not just about configuration management and orchestration: none of those tools can look at the network state, which is exactly what we need to do.

For this, we need network intelligence. Networks today are built and managed manually, without algorithmic validation. That manual process will not be viable in the future. Let’s face it: humans make mistakes.

There are many reasons for network outages, ranging from software bugs and hardware or power failures to security breaches, and not all of these stem from a lack of network security. Human error is still the number one cause. Manual configuration inhibits us; intent-based networking removes this inhibition.

The traditional approach to networking

In the traditional network model, there is a gap between the architect’s intent and what is achieved, not just in device configuration but also in runtime behavior. Until now, there has been no way to validate the original intent or to verify it continuously.

Once you have achieved this level of assurance, you can focus on business needs and not be constrained by managing a legacy network. For example, Netflix moved its control plane to the cloud and now focuses all its time on its customer base.

We have gone halfway: billions of dollars have been spent on compute, storage, and applications, but the network still lags. The architecture and protocols have become more complex, yet the management tools have not kept pace. Fortunately, this is now beginning to change.

Software-defined networking; Slow Deployments

SDN shows great promise to liberate networking, but deployments have been slow, confined primarily to large cloud-scale organizations with ample resources and dollars. What can the rest of the industry do without that level of business maturity? Intent-based networking is a natural successor to SDN, as many intent-based vendors have borrowed the same principles and common architectures.

These systems are built on the divide between the application and the network infrastructure. However, SDN operates at the network architecture level, where the control plane instructs the data-plane forwarding nodes. Intent-based systems work higher up, at the application level, to offer true brownfield network automation.

SDN and SD-WAN have made considerable leaps in network programmability, but intent-based networking is a further leap toward zero-touch, self-healing networks. For additional information on SD-WAN, including the challenges with existing WANs, such as the lack of agility with BGP ( what is BGP protocol in networking ), and the core features of SD-WAN, check out this SD-WAN tutorial.

Intent-Based Networking Use Case

The wide-area network (WAN) edge consists of several network infrastructure devices, including Layer 3 routers, SD-WAN appliances such as Viptela SD-WAN, and WAN optimization controllers. These devices could send diagnostic information for the intent-based system to ingest. The system can ingest from multiple sources, including a monitoring system and network telemetry.

As a result, the system can track application performance over various links. Suppose there is a performance-related problem: the policies are unmet, and application performance degrades.

In that case, the system can take action, such as rerouting the traffic over a less congested link, or it can simply notify a network team member. The intent-based system does not have to take corrective action itself, similar to how IDS/IPS is deployed: these devices can take disciplinary action if necessary, but many operators use IDS/IPS only to alert.
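The alert-versus-act choice can be captured in a few lines. In this hypothetical sketch, a mode flag decides whether an SLA violation triggers an automatic reroute or only a notification, mirroring the IDS/IPS analogy above; everything here is invented for illustration.

```python
# Hypothetical SLA watcher: act automatically or only alert, like IPS vs. IDS.
def handle_sla_violation(app: str, link: str, mode: str = "alert") -> str:
    """Return the corrective or advisory action for a degraded application."""
    if mode == "act":
        return f"rerouting {app} away from congested link {link}"
    return f"NOTIFY network team: {app} degraded on {link}"

print(handle_sla_violation("voip", "mpls-1"))              # alert only
print(handle_sla_violation("voip", "mpls-1", mode="act"))  # automatic reroute
```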

Looking deeper into intent-based networking systems

The intent-based architecture combines machine learning (ML), cognitive computing, and deep analytics, providing enhanced levels of automation and programmability through an easy-to-use GUI. Combining these technologies allows you to move from a reactive to a proactive system.

ML, a subfield of artificial intelligence (AI), allows intent-based systems to analyze and learn from data automatically, without explicit programming. It enables systems to understand and predict the data for autonomous behavior. Intent-based networking represents a radically new approach to network architecture and takes networking to the next level of intelligence.

It is not a technology that will be accepted overnight. Adoption will be slow because, to some, a fully automated network sounds daunting: you are placing your faith in software to run what, for many organizations, is the business itself, the network.

However, deploying intent-based networking systems offers a new way to build and operate networks, which increases agility, availability, and security compared to traditional networking.

Intent-based networking (IBN) is transforming the way networks are managed. By shifting the focus from device-centric configurations to intent-driven outcomes, IBN simplifies network management, enhances agility and scalability, improves security, and optimizes network performance. As organizations strive to meet the demands of the digital age, embracing this innovative approach can pave the way for a more efficient and intelligent network infrastructure.

Summary: Intent Based Networking

In today’s rapidly evolving digital landscape, traditional networking approaches often struggle to keep pace with the dynamic needs of modern organizations. This is where intent-based networking (IBN) steps in, revolutionizing how networks are designed, managed, and optimized. By leveraging automation, artificial intelligence, and machine learning, IBN empowers businesses to align their network infrastructure with their intent, enhancing efficiency, agility, and security.

Understanding Intent-Based Networking

Intent-based networking goes beyond traditional methods by enabling businesses to articulate their desired outcomes to the network rather than manually configuring every network device. This approach allows network administrators to focus on strategic decision-making and policy creation while the underlying network infrastructure dynamically adapts to fulfill the intent.

Key Components of Intent-Based Networking

1. Policy Definition: Intent-based networking relies on clear policies that define the network’s intended behavior. These policies can be based on business objectives, security requirements, or application-specific needs. By translating high-level business intent into actionable policies, IBN streamlines network management.

2. Automation and Orchestration: Automation lies at the heart of IBN. Network automation tools automate routine tasks like configuration, provisioning, and troubleshooting, freeing valuable time for IT teams to focus on critical initiatives. Orchestration ensures seamless coordination and integration between various network elements.

3. Artificial Intelligence and Machine Learning: IBN leverages AI and ML technologies to continuously monitor, analyze, and optimize network performance. These intelligent systems can detect anomalies, predict potential issues, and self-heal network problems in real-time, enhancing network reliability and uptime.

Benefits of Intent-Based Networking

1. Enhanced Network Agility: IBN enables organizations to quickly adapt to changing business requirements and market dynamics. By abstracting the complexity of network configurations, businesses can scale their networks, deploy new services, and implement changes with ease and speed.

2. Improved Security and Compliance: Intent-based networking incorporates security policies directly into network design and management. By automating security measures and continuously monitoring network behavior, IBN helps identify and respond to threats promptly, reducing the risk of data breaches and ensuring compliance with industry regulations.

3. Optimal Resource Utilization: IBN helps organizations optimize resource allocation across the network through AI-driven insights and analytics. By dynamically adjusting network resources based on real-time demands, businesses can ensure optimal performance, minimize latency, and reduce operational costs.

Conclusion:

Intent-based networking represents a paradigm shift in network management, offering a holistic approach to meet the ever-evolving demands of modern businesses. By aligning network behavior with business intent, automating configuration and management tasks, and leveraging AI-driven insights, IBN empowers organizations to unlock new levels of agility, security, and efficiency in their network infrastructure.


Network Configuration Automation

Network Configuration Automation

In today's fast-paced digital landscape, efficient network configuration automation has become a cornerstone for organizations striving to enhance their operational productivity. Automating network configuration processes not only saves time and effort but also minimizes human error and ensures consistent network performance. In this blog post, we will explore the key benefits and considerations of network configuration automation, along with best practices to implement it effectively.

Network configuration automation refers to the practice of automating the deployment, management, and monitoring of network devices and related configurations. It streamlines the repetitive and time-consuming tasks involved in configuring network devices, such as routers, switches, and firewalls. By utilizing automation tools and frameworks, organizations can achieve greater agility, scalability, and accuracy in their network infrastructure.

Automating network configuration brings numerous advantages to organizations. Firstly, it significantly reduces the risk of human errors that can lead to network downtime or security vulnerabilities. Automation ensures consistency across network devices, eliminating configuration discrepancies.

Secondly, it enhances operational efficiency by reducing manual efforts and standardizing configuration processes, allowing IT teams to focus on more strategic initiatives. Lastly, network configuration automation facilitates faster troubleshooting and enables rapid changes to adapt to dynamic network requirements.

Comprehensive Network Inventory: Begin by creating a detailed inventory of network devices, including their models, firmware versions, and current configurations. This inventory will serve as a foundation for automation workflows.

Define Configuration Standards: Establish clear and standardized configuration templates that align with industry best practices. These templates should include essential parameters, such as IP addresses, routing protocols, and security policies.

Utilize Automation Tools: Choose a robust automation tool or framework that suits your organization's requirements. Evaluate features like device compatibility, scalability, and ease of integration with existing network management systems.

Test and Validate: Before deploying automated configurations in a production environment, thoroughly test and validate them in a controlled lab or staging environment. This step helps identify potential issues or conflicts.

While network configuration automation offers substantial benefits, it is essential to consider potential challenges. Organizations must ensure proper security measures are in place to protect automation tools and the integrity of network configurations. Additionally, regular monitoring and auditing of automated processes are crucial to detect any anomalies or unauthorized changes.

Network configuration automation serves as a catalyst for operational efficiency and reliability in modern network infrastructures. By embracing automation tools, defining robust processes, and adhering to best practices, organizations can streamline their network configuration workflows, reduce errors, and improve overall network performance.

With the right approach, network configuration automation becomes a strategic enabler for organizations seeking to stay competitive in today's digital landscape.

Highlights: Network Configuration Automation

**The Rise of Automation in Networking**

Automation in networking refers to the use of software and technologies to automate the configuration, management, testing, deployment, and operation of network devices. This approach is gaining traction as it addresses some of the most pressing challenges in networking today, such as the need for speed, accuracy, and scalability. By automating routine tasks, network administrators can focus on more strategic initiatives, resulting in more efficient and effective networks.

**Benefits of Networking Automation**

One of the most significant benefits of automation in networking is its ability to enhance efficiency. Automated systems can perform repetitive tasks quickly and accurately, reducing the likelihood of human error. This increased precision leads to improved network performance and reliability. Additionally, automation allows for faster deployment of network changes and updates, enabling organizations to respond swiftly to evolving business needs and technological advancements.

**Overcoming Challenges in Automation**

While the advantages of networking automation are clear, the journey to full automation is not without obstacles. One of the primary challenges is the complexity of integrating automation tools with existing network infrastructures. Organizations must also address potential security issues, as automated systems can be vulnerable to cyber threats if not properly managed. Moreover, there is a need for skilled personnel who can oversee and maintain these automated systems, highlighting the importance of ongoing training and development.

**The Future of Networking: What Lies Ahead?**

As automation continues to gain ground, the future of networking looks promising. Emerging technologies such as artificial intelligence and machine learning are expected to further enhance automation capabilities, leading to more intelligent and adaptive networks. These advancements will enable networks to self-optimize and self-heal, minimizing downtime and maximizing performance. As a result, businesses can expect to see increased efficiency, reduced operational costs, and improved customer satisfaction.

Introducing Network Configuration Automation

1: – In today’s fast-paced digital landscape, efficient network configuration management is crucial for businesses to thrive. Manual network configuration tasks can be time-consuming, error-prone, and hinder productivity. This is where network configuration automation comes into play, revolutionizing the way networks are managed and maintained.

2: – Network configuration automation refers to the use of tools and scripts to automatically configure network devices such as routers, switches, and firewalls. This process eliminates the need for manual configuration, which can be time-consuming and prone to human error. By automating repetitive tasks, organizations can ensure consistency and accuracy across their network infrastructure.

3: – Automating network configuration processes offers a plethora of advantages. Firstly, it significantly reduces human errors that often occur during manual configurations. With automation, consistency and accuracy are ensured, leading to enhanced network reliability. Additionally, network configuration automation saves time and resources, allowing IT teams to focus on more strategic tasks rather than mundane and repetitive configurations.

4: – Implementing network configuration automation involves selecting the right tools and technologies for your organization. Popular options include Ansible, Puppet, and Chef, each offering unique features and capabilities. It’s important to assess your specific needs and environment before choosing a solution. Once implemented, automation scripts can be created and customized to meet your organization’s unique requirements, ensuring seamless integration into existing workflows.

Note: – **Application Changes**

Applications are deployed very differently today than they were 10-15 years ago; so much has changed with the app. The problem is that the network has not kept pace with these developments: providing network policies and the corresponding configurations is not tightly associated with the application.

They are usually loosely coupled and reactive. For example, analyzing firewall rules and providing a network assessment is nearly impossible with old security devices, which drives the need for network configuration automation and the ability to automate network configuration.

Right Tools & Strategies 

– To successfully implement network configuration automation, businesses need to adopt the right tools and strategies. Robust automation platforms, such as Ansible or Puppet, can streamline the process by providing intuitive interfaces and extensive libraries of pre-built configurations. IT teams should also establish clear configuration standards and templates to ensure consistency across the network.

– While network configuration automation offers numerous benefits, it’s not without its challenges. One common obstacle is the initial investment required to implement automation tools and train IT staff. However, the long-term cost savings and increased efficiency outweigh the initial costs. Additionally, businesses must carefully plan and test automation workflows to avoid unintended consequences or disruptions to network operations.

Deterministic outcomes

An enterprise organization’s change review involves examining upcoming network changes, their impact on external systems, and their rollback plans. When humans use the CLI to make changes, typing the wrong command can have catastrophic results. Think about a team of three, four, five, or fifty engineers: changes can be made in many different ways depending on the engineer. And even using a GUI or a CLI does not eliminate or reduce the chance of errors during change control.

The executive team has a better chance of achieving deterministic outcomes when proven, tested network automation makes the changes. This increases the chance of a successful project the first time around, with more predictable behavior than manual changes, whether a new VLAN is being added or a new customer is being onboarded with multiple network changes.

Furthermore, deterministic outcomes lower operating expenses (OpEx), as network changes require less manual labor, resulting in more efficient network operations (e.g., automating time-consuming tasks such as updating a network device’s operating system). Network engineers can then focus on more strategic projects and process improvements.

Device Provisioning

An easy and fast way to get started with network automation is to automate the creation and pushing of device configuration files. Two steps are involved in this process: creating the configuration file and pushing it to the device.

To automate configuration files (or configuration data in general), the input parameters (configuration parameters) must first be decoupled from the vendor-proprietary syntax (CLI). Separate files will be created for configuration templates, VLANs, domains, interfaces, routing, etc.
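As a minimal sketch of that decoupling, the following uses Python with Jinja2 (the templating engine used alongside Ansible later in this post): the vendor syntax lives in a template, and the input parameters live in plain data. The template and values are invented examples.

```python
from jinja2 import Template  # pip install jinja2

# Vendor-proprietary syntax lives in the template; parameters live in plain data.
template = Template(
    "hostname {{ hostname }}\n"
    "{% for vlan in vlans %}"
    "vlan {{ vlan.id }}\n name {{ vlan.name }}\n"
    "{% endfor %}"
)

params = {"hostname": "leaf-101",
          "vlans": [{"id": 10, "name": "web"}, {"id": 20, "name": "db"}]}

print(template.render(**params))
```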

Data Collection and Enrichment

Through SNMP, monitoring tools typically poll management information bases (MIBs) for data, and the data returned may be more or less than you actually need. What should be done when polling interface statistics? What if you only need interface resets, not CRC errors, jumbo frames, or output errors?

The command show interface may return every counter, but what if you only need interface resets? Moreover, what if you want to see interface resets correlated with Cisco Discovery Protocol (CDP) or LLDP neighbors, now rather than later? In this context, what role does network automation play?

Network automation gives you more control, so you can customize what you collect, when you collect it, how it is formatted, and how it is used afterward. Automating the process lets you get the most out of your data.
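Customized collection can be as simple as selecting only the fields you need from whatever a poller returns. The sketch below assumes the counters have already been parsed into a dictionary; the field names and values are invented.

```python
# Assume a poller/parser already produced per-interface counters (invented data).
raw_counters = {
    "Ethernet1": {"resets": 3, "crc_errors": 0, "jumbo": 12, "output_errors": 1,
                  "lldp_neighbor": "spine-1"},
    "Ethernet2": {"resets": 0, "crc_errors": 5, "jumbo": 0, "output_errors": 0,
                  "lldp_neighbor": "spine-2"},
}

# Keep only interface resets, correlated with the LLDP neighbor on that port.
resets_by_neighbor = {
    intf: {"resets": c["resets"], "neighbor": c["lldp_neighbor"]}
    for intf, c in raw_counters.items() if c["resets"] > 0
}
print(resets_by_neighbor)  # {'Ethernet1': {'resets': 3, 'neighbor': 'spine-1'}}
```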

Migrations

Migrating from one platform to another is never easy; the platforms may come from the same vendor or from different vendors. Using various forms of automation, you can create configuration templates for network devices and operating systems, and a vendor may even provide a script or tool that helps with migrations. It would then be possible to generate a configuration file for every vendor from a defined, standard data set (a common data model).

If you use vendor-proprietary extensions, you must account for them as well. It is fantastic that such a tool can be built independently rather than by a vendor: a vendor must account for all of a device’s features, while an organization only needs a limited number of them. Vendors are concerned with their own equipment, not with making it easier for you, the network operator, to manage multivendor environments.
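A common data model driving per-vendor output can be sketched as follows. The two rendering functions below are stand-ins for real vendor templates, and the syntax they emit is simplified.

```python
# One common data model, rendered into simplified per-vendor syntax.
interface = {"name": "uplink1", "ip": "10.0.0.1", "prefixlen": 30}

def render_vendor_a(intf: dict) -> str:
    # IOS-like syntax (simplified for illustration)
    return f"interface {intf['name']}\n ip address {intf['ip']}/{intf['prefixlen']}"

def render_vendor_b(intf: dict) -> str:
    # Junos-like set syntax (simplified for illustration)
    return (f"set interfaces {intf['name']} unit 0 family inet "
            f"address {intf['ip']}/{intf['prefixlen']}")

print(render_vendor_a(interface))
print(render_vendor_b(interface))
```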

Configuration Management

In configuration management, devices are deployed, pushed, and managed according to their desired configuration state. This covers everything from interface descriptions to the configuration of ToR switches, firewalls, load balancers, and the advanced security infrastructure needed to deploy three-tier applications.

As you can see from the read-only forms of automation, you don’t have to start by pushing configurations. Still, pushing them may be worthwhile if you spend countless hours making the same change across many routers or switches.

Before you proceed, you may find the following articles of interest:

  1. Open Networking
  2. A10 Networks
  3. Brownfield Network Automation

Network Configuration Automation

One of the easiest and quickest ways to get started with network automation is to automate the creation of the device configuration files used for initial device provisioning and push them to network devices. You can also get a lot of information with automation. For example, network devices have enormous static and ephemeral data buried inside, and using open-source tools or building your own gets you access to this data.

Examples of this type of data include entries in the BGP table, OSPF adjacencies, active neighbors, interface statistics, specific counters and resets, and even counters from application-specific integrated circuits (ASICs) themselves on newer platforms.

Guide with Ansible Core

In the following guide, we have Ansible installed and a managed host already prepared. The managed host needs SSH enabled and a user with admin privileges. Ansible finds managed hosts by looking at the inventory file, which is also a great place to pass variables that remove site-specific information; these are set under the host vars section below.

Remember that Ansible requires Python; below, we are running Python 3 and Jinja version 3.0.3, which is used for templating. You can run tasks against Ansible-managed hosts with playbooks and ad hoc commands. Below, I’m using an ad hoc command, which calls the command module by default, and testing connectivity with a ping.

Diagram: Ansible Configuration

**Benefits of Network Configuration Automation**

1. Time and Resource Efficiency: Organizations can free up their IT staff to focus on more strategic initiatives by automating repetitive and time-consuming network configuration tasks. This results in increased productivity and efficiency across the organization.

2. Enhanced Accuracy and Consistency: Manual configuration processes are prone to human error, leading to misconfigurations and network downtime. Network configuration automation eliminates these risks by ensuring consistency and accuracy in network configurations, reducing the chances of costly errors.

3. Rapid Network Deployment: Network administrators can quickly deploy network configurations across multiple devices simultaneously with automation tools. This accelerates network deployment and enables organizations to respond faster to changing business needs.

4. Improved Security and Compliance: Network configuration automation enhances security by enforcing standardized configurations and ensuring compliance with industry regulations. Automated security protocols can be applied consistently across the network, reducing vulnerabilities and enhancing overall network protection.

5. Simplified Network Management: Automation tools provide a centralized platform for managing network configurations, making it easier to monitor, troubleshoot, and maintain network devices. This simplifies network management and reduces the complexity associated with manual configuration processes.

**Implementing Network Configuration Automation**

To implement network configuration automation, organizations need to consider the following steps:

1. Assess Network Requirements: Understand the specific network requirements, including device types, network protocols, and security policies.

2. Select an Automation Tool: Evaluate different automation tools available on the market and choose the one that best suits the organization’s needs and network infrastructure.

3. Create Configuration Templates: Develop standardized configuration templates that can be easily applied to network devices. These templates should include best practices, security policies, and network-specific configurations.

4. Test and Validate: Before deploying automated configurations, thoroughly test and validate them in a controlled environment to ensure their effectiveness and compatibility with the existing network infrastructure.

5. Monitor and Maintain: Regularly monitor and maintain the automated network configurations to identify and resolve any issues or security vulnerabilities that may arise.

The Need to Automate Network Configuration

There are always hundreds, if not thousands, of outdated rules hanging around even though the application service is no longer required. Another example is unused VLANs left configured on access ports, posing a security risk. The problem lies in the process: how we change and provision the network is not tied to the application, and it is not automated. Inconsistent configurations tend to grow because human interaction is required to tidy things up, and people move on and change roles.

You cannot guarantee that the engineer creating a firewall rule will be the one deleting it once the corresponding application is decommissioned or changed. And without a rigorous change-control process, deprecated configurations will sit idle on active nodes.

A key point: The use of Ansible variables in an Ansible architecture.

For configuration management, you could opt for Red Hat Ansible. The Ansible architecture consists of modules that run tasks on the target hosts listed in the inventory. Various plugins are available for additional context, and Ansible variables enable flexible playbook development. Ansible Core is the CLI-based version of the automation tooling, and Ansible Tower is the platform.

For enterprise-wide automation, a platform-based approach to the Ansible architecture is recommended. Using a platform approach with Ansible variables creates a very flexible automation journey: you can have one playbook, with Ansible variables removing any site-specific information, running against several different inventories that map to your environments, such as Dev, Staging, and Production.

Network Automation

The network is critical for business continuity, which creates real uptime pressure: operational uptime is directly tied to the success of the business. The result is a manual culture, which manifests as slow, careful, hands-on change. That manual culture of network provisioning and operation is the actual bottleneck.

Virtualization – Beginning the change

Virtualization vendors are changing the manual approach. Consider basic MAC address learning on a traditional switch: the source MAC address of an incoming Ethernet frame is examined, and if it is already known, nothing needs to be done; if not, the switch adds that MAC to its table and notes the port the frame entered. The switch thus maintains a MAC-to-port mapping, continually adding and removing MAC addresses via timers and flushing.

The virtual switch

The virtual switch operates differently. Whenever a VM spins up and a vNIC attaches to the virtual switch, the hypervisor programs everything the virtual switch needs to forward that traffic. There is no MAC learning. When you spin down the VM, the hypervisor does not need to wait for a timer.

It knows the source is no longer there and immediately removes that state. Less state in a network is a good thing. The critical point is that provisioning the application/virtual machine is tightly coupled with provisioning the network resources, which leaves less “garbage collection” to do.
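The contrast between learned and programmed state can be shown in a few lines of toy Python: the traditional switch discovers entries and relies on aging, while the hypervisor-style switch adds and removes them deterministically as VMs attach and detach. This is purely conceptual, not how any real vSwitch is implemented.

```python
# Toy contrast: learned (aged-out) state vs. explicitly programmed state.
class LearningSwitch:
    def __init__(self):
        self.table = {}            # mac -> port, maintained by timers/flushing

    def frame_in(self, src_mac: str, port: int):
        self.table.setdefault(src_mac, port)   # learn unknown source addresses

class ProgrammedSwitch:
    def __init__(self):
        self.table = {}            # mac -> port, no learning at all

    def vm_up(self, mac: str, port: int):
        self.table[mac] = port     # hypervisor programs the entry on VM attach

    def vm_down(self, mac: str):
        self.table.pop(mac, None)  # state removed immediately, no timers
```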

Box mentality  

When the contents of the HLD/LLD are complete and you move to the configuration stage, the implementation-specific details are done per box. The commands are defined on individual boxes and are vendor-specific. This works functionally, and it’s how the Internet was built, but it lacks agility and proper configuration management. The many repetitive tasks of a box mentality destroy your ability to scale.

Businesses care primarily about agility and continuity, but you cannot have either with manual provisioning. You must look at your network as a system, not as individual boxes. When you look at applications and how they scale, the current per-box implementation method cannot keep pace with the apps. The solution is to move to network configuration automation and automatic interaction.


Network Configuration Automation and Automate Network Configuration

We must move out of a manual approach and into an automated system. Focus initially on low-hanging fruit and easy wins. What takes engineers the longest to do? Do VLAN and Subnet allocation sheets ring a bell? We should size according to demand and not care about the type of VLAN or the Internal subnet allocation. Microsoft Azure cloud is a perfect example.

They do not care about the type of private address they assign to internal systems. They automate the IP allocation and gateway assignment so you can communicate locally. Designing optimum networks to last and scale is not good enough anymore. The network must evolve and be programmed to keep up with app placement. The configuration approach needs to change, and we should move to proper configuration management and automation.
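Automating subnet and gateway assignment is straightforward with Python’s standard ipaddress module. The sketch below carves /24s out of a made-up internal /16 and derives a gateway for each; this shows the general idea, not how Azure does it internally.

```python
import ipaddress

# Carve per-tenant /24s out of an internal /16 (addresses are made up).
pool = ipaddress.ip_network("10.20.0.0/16").subnets(new_prefix=24)

def allocate():
    """Hand out the next free subnet plus a conventional first-host gateway."""
    subnet = next(pool)
    gateway = next(subnet.hosts())   # first usable address as the gateway
    return subnet, gateway

for tenant in ("app-a", "app-b"):
    subnet, gw = allocate()
    print(f"{tenant}: subnet={subnet} gateway={gw}")
```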

Ansible is a widespread tool of choice. As previously mentioned, we have Ansible Tower as a platform, and for CLI-based devices, we have Ansible Core, which supports variable substitution with Ansible variables. 

SDN: A companion to network automation?

One benefit of Software-Defined Networking (SDN) is that it lets you view your network holistically, from a central viewpoint. Network configuration automation is not SDN, and SDN is not network automation; they work side by side and complement each other. SDN allows you to abstract, keeping the detail away from those who do not need to see it.

The application owners do not care about VLANs, and application designers should not care about local IP allocation if they have designed the application correctly. Centralization is also a goal of SDN, but centralization with SDN is different from control-plane centralization: a central SDN controller should not fully control the control plane.

SDN companies have learned this and now allow network nodes to handle some or all control-plane operations.

Programming network: Automate network configuration

You don’t need to be a programmer, but you should start thinking like one. Learning to program will leave you better equipped to deal with change. Programming the network is a lateral step from what you are doing now: it offers an environment to run code and ways to test that code before you roll it out.

The raw CLI is the most dangerous approach to device configuration; you can even lock yourself out of a device. Programming adds a safety net, but it’s more of a mental shift. Stop jumping to the CLI and THINK FIRST. Break the task down and create workflows. Workflows are then mapped to an automation platform.

A key point: TCL and EXPECT

TCL (Tool Command Language) is a scripting language created in 1988 at UC Berkeley. It was designed to glue together shell scripts and Unix commands. EXPECT is a TCL extension written by Don Libes that automates Telnet, SSH, and serial sessions to perform repetitive tasks.

EXPECT’s main drawbacks are that it is insecure and synchronous only. Login credentials appear in plaintext in the EXPECT script, and there is no way to encrypt that data in the code. It also operates strictly sequentially: you send a command and wait for the output before sending the next. It is a send-and-wait methodology, not send, send, send and then receive.
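The pattern looks roughly like the following sketch, written with Python’s pexpect library (a reimplementation of EXPECT). The host, prompt, and credentials are placeholders; note how the password sits in plaintext and how every step blocks until the expected output arrives.

```python
import pexpect

# Hypothetical device and credentials; the password sits in plaintext
child = pexpect.spawn("ssh admin@192.0.2.1")
child.expect("assword:")          # block until the password prompt appears
child.sendline("secret")          # send one line...
child.expect("#")                 # ...then wait again: send-and-wait, every time
child.sendline("show version")
child.expect("#")
print(child.before.decode())      # output captured between the two prompts
child.sendline("exit")
```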

A key point: SNMP has failed | NETCONF begins

SNMP is used for fault handling, monitoring equipment, and retrieving performance data, but very few use it to set configurations. There is usually no 1:1 translation between a CLI configuration operation and an SNMP “SET,” and that correlation is hard to build. As a result, SNMP never took hold for device configuration and management.

CLI scripting was the primary approach to automating network configuration changes before NETCONF. Unfortunately, CLI scripting has several limitations: no transaction management, no structured error handling, and the ever-changing structure and syntax of commands, which make scripts fragile and costly to maintain. Layering ever-smarter scripts on top does not fix the underlying fragility.

People make mistakes; it’s the nature of the beast. Human error plays a significant role in network outages, and an engineer who is not logging in and typing at the CLI is less likely to make a costly mistake. Human interaction with the network is a major cause of outages.

NETCONF & Tail-F

NETCONF (Network Configuration Protocol) uses XML-based data encoding for configuration and protocol messages. It offers secure transport and is asynchronous, so it is not sequential like TCL and EXPECT; this makes NETCONF more efficient. It also allows the separation of configuration from non-configuration (operational) data.

SNMP makes backup and restore complex, as you cannot tell which objects are needed for a restore. And because of SNMP’s binary nature, it is difficult to compare configurations from one device to another. NETCONF is much better at both.
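As a minimal sketch of retrieving structured configuration, the example below uses the Python ncclient library to pull the running datastore from a NETCONF-enabled device; the host and credentials are placeholders. Because the result is structured XML rather than screen-scraped text, comparing two devices becomes a straightforward diff.

```python
from ncclient import manager

# Hypothetical device details; NETCONF usually listens on port 830
with manager.connect(host="192.0.2.1", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    running = m.get_config(source="running")  # structured XML, not screen text
    print(running)
```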

A final note: Transaction-based approach

NETCONF offers a transaction-based approach. A transaction is a set of configuration changes, not a sequence. SNMP-based configuration requires everything to be applied in the correct sequence/order; with a transaction, you hand over the whole change set, and the device figures out how to roll it out.

What matters is that operators can write service-level applications that activate service-level changes without making the application aware of the entire sequence of changes that must complete before the network can serve application requests and responses. Check out an interesting company called Tail-F (now part of Cisco), which offers a family of NETCONF-enabled products.
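Here is a minimal sketch of the transactional workflow with ncclient, assuming the device supports the candidate datastore: stage the whole change set, then commit it as one unit. The device details and the interface XML are illustrative.

```python
from ncclient import manager

# Illustrative change expressed against the ietf-interfaces YANG model
EDIT = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/1</name>
      <description>Uplink to core</description>
    </interface>
  </interfaces>
</config>"""

with manager.connect(host="192.0.2.1", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    m.edit_config(target="candidate", config=EDIT)  # stage the change set
    m.commit()                                      # apply it as one transaction
```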

Closing Points on Network Configuration Automation

Automation in networking refers to the use of software and tools to execute network management tasks with minimal human intervention. This includes configuration, management, testing, deployment, and operation of physical and virtual devices within a network. The primary goal of automation is to improve efficiency, reduce human error, and enable network administrators to focus on strategic tasks rather than mundane, repetitive ones.

The integration of automation into networking brings numerous advantages. Firstly, it significantly enhances operational efficiency by automating routine tasks, allowing network teams to allocate their time and resources more effectively. Secondly, it reduces the risk of human errors, which are often the cause of network outages and security breaches. Thirdly, automation supports scalability, enabling networks to grow and adapt without the need for extensive manual configuration. Lastly, it provides real-time insights and analytics, empowering businesses to make data-driven decisions.

Despite its benefits, the implementation of automation in networking is not without challenges. One of the primary obstacles is the complexity of integrating automation tools with existing network infrastructure. Organizations may also face resistance from IT staff who are wary of changing established processes. Additionally, there is a need for ongoing training and upskilling to ensure that teams can effectively manage and optimize automated systems. Addressing these challenges requires careful planning, investment, and a commitment to change management.

As technology continues to evolve, the future of networking automation looks promising. Advances in artificial intelligence and machine learning are expected to further enhance automation capabilities, leading to smarter, more adaptive networks. The rise of the Internet of Things (IoT) and 5G technologies will also drive demand for automated solutions to manage the increased complexity and volume of network traffic. As businesses continue to embrace digital transformation, automation will become an indispensable component of modern networking strategies.

Summary: Network Configuration Automation

Network configuration is crucial in ensuring seamless connectivity and efficient data flow in today’s fast-paced technological landscape. However, the manual configuration of networks can be time-consuming, prone to errors, and hinder scalability. This is where network configuration automation comes into play, revolutionizing how networks are managed and optimized. In this blog post, we explored the benefits, implementation techniques, and best practices of network configuration automation.

Understanding Network Configuration Automation

Network configuration automation involves leveraging software and tools to automate the configuration and management of network devices. By reducing human intervention, it eliminates manual errors, enhances agility, and enables effective network management at scale.

Benefits of Network Configuration Automation

Automating network configuration brings a plethora of advantages to organizations. Firstly, it significantly reduces human errors, ensuring accurate and consistent device configurations. Secondly, it enhances efficiency by saving time and effort spent on manual configurations. Additionally, automation allows for faster deployment of network changes, improving overall network agility and responsiveness.

Implementation Techniques for Network Configuration Automation

Implementing network configuration automation requires a structured approach. It involves:

1. Inventory and Device Discovery: Create an inventory of network devices and establish a comprehensive understanding of the existing network infrastructure.

2. Configuration Templates and Version Control: Develop standardized configuration templates that can be easily applied to multiple devices. Implement version control mechanisms to track and manage configuration changes effectively (see the diff sketch after this list).

3. Orchestration and Automation Tools: Leverage network automation tools that provide scripting, scheduling, and deployment automation features. Tools like Ansible, Chef, and Puppet offer potent capabilities to streamline network configuration.
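As a small illustration of item 2, here is a minimal sketch, assuming two saved configuration snapshots with hypothetical filenames, that uses Python’s difflib to spot drift between versions:

```python
import difflib

# Hypothetical config snapshots pulled from version control
before = open("router1_v1.cfg").read().splitlines()
after = open("router1_v2.cfg").read().splitlines()

# A unified diff shows exactly what changed between the two versions
for line in difflib.unified_diff(before, after, "v1", "v2", lineterm=""):
    print(line)
```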

Best Practices for Network Configuration Automation

To ensure a successful implementation of network configuration automation, consider the following best practices:

1. Test and Verify: Before deployment, thoroughly test and verify automation scripts and templates to ensure they align with the desired network configuration and functionality.

2. Security and Compliance: Incorporate security measures and compliance standards into automation processes to protect network assets and ensure adherence to industry regulations.

3. Documentation and Change Management: Maintain detailed documentation of configurations and changes made through automation. Implement a change management process to track modifications and facilitate troubleshooting.

Conclusion

Network configuration automation is a game-changer in network management. By embracing automation, organizations can reduce errors, enhance efficiency, and improve overall network performance. Whether deploying standardized configurations or orchestrating complex network changes, automation streamlines processes, allowing IT teams to focus on strategic initiatives and innovation.

MPLS

Segment Routing – Introduction

Segment Routing

In today's interconnected world, where data traffic is growing exponentially, network operators face numerous challenges regarding scalability, flexibility, and efficiency. To address these concerns, segment routing has emerged as a powerful networking paradigm that offers a simplified and programmable approach to traffic engineering. In this blog post, we will explore the concept of segment routing, its benefits, and its applications in modern networks.

Segment routing is a forwarding paradigm that leverages source routing principles to steer packets along a predetermined path through a network. Instead of relying on complex routing protocols and their associated overhead, segment routing enables the network to be programmed with predetermined instructions, known as segments, to define the path packets should traverse. These segments can represent various network resources, such as links, nodes, or services, and are encoded in the packet's header.

Enhanced Network Scalability: Segment routing enables network operators to scale their networks effortlessly. By leveraging existing routing mechanisms and avoiding the need for extensive protocol exchanges, segment routing simplifies network operations, reduces overhead, and enhances scalability.

Traffic Engineering and Optimization: With segment routing, network operators gain unparalleled control over traffic engineering. By specifying explicit paths for packets, they can optimize network utilization, avoid congestion, and prioritize critical applications, ensuring a seamless user experience.

Fast and Efficient Network Restoration: Segment routing's inherent flexibility allows for rapid network restoration in the event of failures. By dynamically rerouting traffic along precomputed alternate paths, segment routing minimizes downtime and enhances network resilience.

Highlights: Segment Routing

What is Segment Routing?

Segment Routing is a forwarding paradigm that leverages the concept of source routing. It allows network operators to define a path for network traffic by encoding instructions into the packet header itself. This eliminates the need for complex routing protocols, simplifying network operations and enhancing scalability.

Traditional routing protocols often struggle with complexity and scalability, leading to increased operational costs and reduced network performance. Segment Routing addresses these challenges by simplifying the data path and enhancing traffic engineering capabilities, making it an attractive option for Internet Service Providers (ISPs) and enterprises alike.

**Source-based routing technique**

Note: Source-Based Routing

Segment routing is a source-based routing technique that enables a source node to define the path that a packet will take through the network. This is achieved by encoding a list of instructions, or “segments,” into the packet header.

Each segment represents a specific instruction, such as forwarding the packet to a particular node or applying a specific service. By doing so, segment routing eliminates the need for complex protocols like MPLS and reduces the reliance on network-wide signaling, leading to a more efficient and manageable network.

1- ) At its core, Segment Routing is a source-based routing technique that enables packets to follow a specified path through the network. Unlike conventional routing methods that rely on complex signaling protocols and state information, SR uses a list of segments embedded within packets to dictate the desired path.

2- ) These segments represent various waypoints or instructions that guide packets from source to destination, allowing for more efficient and predictable routing.

3- ) Segment Routing is inherently compatible with both MPLS (Multiprotocol Label Switching) and IPv6 networks, making it a versatile choice for diverse network environments. By leveraging existing infrastructure, SR minimizes the need for significant hardware upgrades, facilitating a smoother transition for network operators.

Key Points:

Scalability and Flexibility: Segment Routing provides enhanced scalability by allowing network operators to define paths dynamically based on network conditions. It enables traffic engineering, load balancing, and fast rerouting, ensuring efficient resource utilization and optimal network performance.

Simplified Operations: By leveraging source routing, Segment Routing simplifies network operations. It eliminates the need for maintaining and configuring multiple protocols, reducing complexity and operational overhead. This results in improved network reliability and faster deployment of new services.

**Implementing Segment Routing in Your Network**

Transitioning to Segment Routing involves several key steps. First, network operators need to assess their existing infrastructure to determine compatibility with SR. This may involve updating software, configuring routers, and integrating SR capabilities into the network.

Once the groundwork is laid, operators can begin defining segments and policies to guide packet flows. This process involves careful planning to ensure optimal performance and alignment with business objectives. By leveraging automation tools and intelligent analytics, operators can further streamline the implementation process and continuously monitor network performance.

Applications of Segment Routing

Traffic Engineering: Segment Routing enables intelligent traffic engineering by allowing operators to specify explicit paths for traffic flows. This empowers network operators to optimize network utilization, minimize congestion, and provide quality-of-service guarantees.

Network Slicing: Segment Routing facilitates network slicing, a technique that enables the creation of multiple virtual networks on a shared physical infrastructure. By assigning unique segments to each slice, Segment Routing ensures isolation, scalability, and efficient resource allocation.

5G Networks: Segment Routing plays a crucial role in the evolution of 5G networks. It enables network slicing, network function virtualization, and efficient traffic engineering, providing the necessary foundation for the deployment of advanced 5G services and applications.

Complexity and Scale of MPLS Networks

– The complexity and scale of MPLS networks have grown over the past several years. Segment routing simplifies the propagation of labels through a large network by reducing overhead communication and control-plane traffic.

– By ensuring accurate and timely control-plane communication, traffic can reach its destination properly and efficiently. As a second-order effect, this reduces the likelihood of user error, order of operation errors, and control-plane sync issues in the network.

– Segment Routing’s architecture allows software to define traffic flows proactively rather than reactively responding to network issues. With these optimizations, Segment Routing has become a hot topic for service providers and enterprises alike, and many are migrating from classic MPLS to Segment Routing.

**The Architecture of MPLS**

At its core, MPLS operates by assigning labels to data packets, allowing routers to make forwarding decisions based on these labels rather than the traditional, more cumbersome IP addresses. This label-based system not only streamlines the routing process but also significantly enhances the speed and efficiency of data transmission.

The architecture of MPLS involves several key components, including Label Edge Routers (LERs) and Label Switch Routers (LSRs), which work together to ensure seamless data flow across the network. Understanding the roles of these components is crucial for grasping how MPLS networks maintain their robustness and scalability.

**Challenges in Implementing MPLS**

Despite its numerous advantages, deploying MPLS networks comes with its own set of challenges. The initial setup and configuration can be complex, requiring specialized knowledge and skills. Furthermore, maintaining and troubleshooting MPLS networks demand continuous monitoring and management to ensure optimal performance.

Security is another concern, as the complexity of MPLS can sometimes obscure potential vulnerabilities. Organizations must implement robust security measures to protect their MPLS networks from threats and ensure the integrity of their data.

Example Technology: MPLS Forwarding & LDP

### What is MPLS Forwarding?

MPLS forwarding is a method of data transport that simplifies and accelerates the routing process. Unlike traditional IP routing, which uses destination IP addresses to forward packets, MPLS uses labels. Each packet gets a label, which is a short identifier that routers use to determine the packet’s next hop. This approach reduces the complexity of routing tables and increases the speed at which data is processed. MPLS is particularly beneficial for building scalable, high-performance networks, making it a preferred choice for service providers and large enterprises.

### The Role of LDP in MPLS Networks

The Label Distribution Protocol (LDP) is integral to the operation of MPLS networks, as it manages the distribution of labels between routers. LDP establishes label-switched paths (LSPs), which are the routes that data packets follow through an MPLS network. By automating the process of label distribution, LDP ensures efficient and consistent communication between routers, thus enhancing network reliability and performance. Understanding LDP is crucial for network engineers looking to implement or manage MPLS systems.

### Advantages of Using MPLS and LDP

The combination of MPLS and LDP offers numerous advantages. Firstly, MPLS can handle multiple types of traffic, such as IP packets, ATM, and Ethernet frames, making it versatile and adaptable. The use of labels instead of IP addresses results in faster data forwarding, reducing latency and improving network speed. Additionally, MPLS supports Quality of Service (QoS), allowing for prioritization of critical data, which is essential for applications like VoIP and video conferencing. LDP enhances these benefits by providing a robust framework for label management, ensuring network stability and efficiency.

Segment Routing –  A Forwarding Paradigm

Segment routing is a forwarding paradigm that allows network operators to define packet paths by specifying a series of segments. These segments represent instructions that guide the packet’s journey through the network. Network engineers can optimize traffic flow, improve network scalability, and provide advanced services by leveraging segment routing.

**The Mechanics of Segment Routing**

At its core, Segment Routing leverages the source-routing principle. Instead of relying on each router to make forwarding decisions, SR allows the source node to encode the path a packet should take through the network. This path is defined by an ordered list of segments, which can be topological or service-based. By eliminating the need for complex signaling protocols, SR reduces overhead and simplifies the network infrastructure.

Recap: Segment routing leverages the concept of source routing, where the sender of a packet determines the complete path it will take through the network. Routers can effortlessly steer packets along a predefined path by assigning a unique identifier called a segment to each network hop, avoiding complex routing protocols and reducing network overhead.

Key Concepts – Components

To fully grasp segment routing, it’s essential to familiarize yourself with its key concepts and components. One fundamental element is the Segment Identifier (SID), which represents a specific network node or function. SIDs are used to construct explicit paths and enable traffic engineering. Another essential concept is the label stack, which allows routers to stack multiple SIDs together to form a forwarding path. Understanding these concepts is crucial for effectively implementing segment routing in network architectures.
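To make the SID list and header concrete, here is a minimal sketch that builds an SRv6 packet with Scapy; all addresses are documentation-range placeholders. Per RFC 8754, the segment list is encoded in reverse order, with Segments Left pointing at the currently active segment.

```python
from scapy.layers.inet import UDP
from scapy.layers.inet6 import IPv6, IPv6ExtHdrSegmentRouting

# RFC 8754 encodes the segment list in reverse: addresses[0] is the final
# segment, and segleft points at the currently active one.
segments = ["2001:db8:0:3::1",   # final segment (egress)
            "2001:db8:0:2::1",   # second waypoint
            "2001:db8:0:1::1"]   # first waypoint (active)

pkt = (IPv6(src="2001:db8::1", dst=segments[2])   # dst = active segment
       / IPv6ExtHdrSegmentRouting(addresses=segments, segleft=2)
       / UDP(dport=5000))
pkt.show()
```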

**The Building Blocks**

  • Segment IDs: Segment IDs are fundamental elements in segment routing. They uniquely identify a specific segment within the network. Depending on the network infrastructure, these IDs can be represented by various formats, such as IPv6 addresses or MPLS labels.
  • Segment Routing Headers: Segment routing headers contain the segment instructions for packets. They are added to the packet’s header, indicating the sequence of segments to traverse. These headers provide the necessary information for routers to make forwarding decisions based on the defined segments.

**Traffic Engineering with Segment Routing**

  • Traffic Steering: Segment routing enables precise traffic steering capabilities, allowing network operators to direct packets along specific paths based on their requirements. This fine-grained control enhances network efficiency and enables better utilization of network resources.
  • Fast Reroute: Fast Reroute (FRR) is a crucial feature of segment routing that enhances network resiliency. By leveraging backup paths and pre-calculated segments, segment routing enables rapid traffic rerouting in case of link failures or congestion. This ensures minimal disruption and improved quality of service for critical applications.

**Integration with Network Services**

  • Service Chaining: Segment routing seamlessly integrates with network services, enabling efficient service chaining. Service chaining directs traffic through a series of service functions, such as firewalls or load balancers, in a predefined order. With segment routing, this process becomes more streamlined and flexible.
  • Network slicing: Network slicing leverages the capabilities of segment routing to create virtualized networks within a shared physical infrastructure. It enables the provisioning of isolated network slices tailored to specific requirements, guaranteeing performance, security, and resource isolation for different applications or tenants.

Understanding MPLS

MPLS, a versatile protocol, has been a stalwart in the networking industry for decades. It enables efficient packet forwarding by leveraging labels attached to packets for routing decisions. MPLS provides benefits such as traffic engineering, Quality of Service (QoS) control, and Virtual Private Network (VPN) support. Understanding the fundamental concepts of label switching and label distribution protocols is critical to grasping MPLS.

The Rise of Segment Routing

Segment Routing, on the other hand, is a relatively newer paradigm that simplifies network architectures and enhances flexibility. It leverages the concept of source routing, where the source node explicitly defines the path that packets should traverse through the network. By incorporating this approach, segment routing eliminates the need to maintain per-flow state information in network nodes, leading to scalability improvements and more accessible network management.

Key Differences and Synergies

While MPLS and Segment Routing have unique characteristics, they can also complement each other in various scenarios. Understanding the differences and synergies between these technologies is crucial for network architects and operators. MPLS offers a wide range of capabilities, including Traffic Engineering (MPLS-TE) and VPN services, while Segment Routing simplifies network operations and offers inherent traffic engineering capabilities.

MPLS and BGP-free Core

So, what is segment routing? Before discussing a segment routing solution and the details of segment routing vs. MPLS, let us recap how MPLS works and the protocols used. MPLS environments have both control and data plane elements.

A BGP-free core runs BGP only at the network edges, in a full-mesh or route-reflection design. BGP passes the customer routes, an Interior Gateway Protocol (IGP) passes the loopbacks, and the Label Distribution Protocol (LDP) assigns labels to those loopbacks.

Example BGP Technology: BGP Route Reflection

**The Challenge of Scalability in BGP**

In large networks, maintaining a full mesh of BGP peering relationships becomes impractical. As the number of routers increases, the number of required connections grows exponentially. This complexity can lead to increased configuration errors, higher resource consumption, and slower convergence times, making network management a daunting task.

**Enter BGP Route Reflection**

BGP Route Reflection is a clever solution to the scalability challenges of traditional BGP. By designating certain routers as route reflectors, network administrators can significantly reduce the number of BGP sessions required. These reflectors act as intermediaries, receiving routes from clients and redistributing them to other clients without the need for a full mesh.

**Benefits of BGP Route Reflection**

Implementing BGP Route Reflection offers numerous advantages. It simplifies network topology, reduces configuration complexity, and lowers the resource demands on routers. Additionally, it enhances network stability by minimizing the risk of configuration errors and improving convergence times during network changes or failures.

Labels and BGP next hops

LDP or RSVP establishes MPLS label-switched paths (LSPs) throughout the network domain. Labels are assigned to the BGP next hops on every router, while the IGP in the core provides reachability for the remote PE BGP next hops.

As you can see, several control-plane elements interact to provide complete end-to-end reachability. Unfortunately, the control plane operates hop by hop, creating network state and the potential for synchronization problems between LDP and the IGP.

Related: Before you proceed, you may find the following posts helpful:

  1. Observability vs Monitoring
  2. Network Traffic Engineering
  3. What Is VXLAN
  4. Technology Insight for Microsegmentation
  5. WAN SDN

Segment Routing

Keep Complexity to Edges

In 2002, the IETF published RFC 3439, “Some Internet Architectural Guidelines and Philosophy.” It states: “In short, the complexity of the Internet belongs at the edges, and the IP layer of the Internet should remain as simple as possible.” Applying this concept to traditional MPLS-based networks means bringing additional network intelligence and enhanced decision-making to the network edges. Segment Routing is a way to bring that intelligence, and Software-Defined Networking (SDN) concepts, to MPLS-based architectures.

MPLS-based architectures:

MPLS, or Multiprotocol Label Switching, is a versatile networking technology that enables the efficient forwarding of data packets. Unlike traditional IP routing, MPLS utilizes labels to direct traffic along predetermined paths, known as Label Switched Paths (LSPs). This label-based approach offers enhanced speed, flexibility, and traffic engineering capabilities, making it a popular choice for modern network infrastructures.

Components of MPLS-Based Architectures

It is crucial to understand the workings of MPLS-based architectures’ key components. These include:

1. Label Edge Routers (LERs): LERs assign labels to incoming packets and forward them into the MPLS network.

2. Label Switch Routers (LSRs): LSRs form the core of the MPLS network, efficiently switching labeled packets along the predetermined LSPs.

3. Label Distribution Protocol (LDP): LDP facilitates the exchange of label information between routers, ensuring the proper establishment of LSPs.

Guide on a BGP-free core.

Here, we have a typical pre-MPLS setup. The main point is that the P node runs only OSPF; it knows nothing about the CE routes or any other BGP routes. BGP instead runs across a point-to-point GRE tunnel between the CE nodes.

When we run a traceroute from CE1 to CE2, the packets traverse the GRE tunnel, and no P node interfaces are in the trace. The main goal here is to free up resources in the core, which is the starting point of MPLS networking. In the lab guide below, we will upgrade this to MPLS.

Diagram: Overlay networking with a BGP-free core.

Source Packet Routing

Segment routing is a development of the Source Packet Routing in Networking (SPRING) working group of the IETF. The fundamental idea is the same as Service Function Chaining (SFC), but rather than assuming the processes along the path will manage the service chain, Segment Routing lets the routing control plane handle the flow path through the network.

Segment routing (SR) is a source-based routing technique that streamlines traffic engineering across network domains. It removes network state information from transit routers and nodes and puts the path state information into packet headers at an ingress node.

segment routing solution
Diagram: Issues with MPLS and the need for a segment routing solution.

MPLS Traffic Engineering

MPLS TE is an extension of MPLS, a protocol for efficiently routing data packets across networks. It provides a mechanism for network operators to control and manipulate traffic flow, allowing them to allocate network resources effectively. MPLS TE utilizes traffic engineering to optimize network paths and allocate bandwidth based on specific requirements.

It allows network operators to set up explicit paths for traffic, ensuring that critical applications receive the necessary resources and are not affected by congestion or network failures. MPLS TE achieves this by establishing Label Switched Paths (LSPs) that bypass potential bottlenecks and follow pre-determined routes, resulting in a more efficient and predictable network.

Guide on MPLS TE

In this lab, we will examine MPLS TE with ISIS configuration. Our MPLS core network comprises PE1, P1, P2, P3, and PE2 routers. The CE1 and CE2 routers use regular IP routing. All routers are configured to use IS-IS L2. 

There are four main items we have to configure (a scripted sketch follows the diagram below):

  • Enable MPLS TE support:
    • Globally
    • Interfaces
  • Configure IS-IS to support MPLS TE.
  • Configure RSVP.
  • Configure a tunnel interface.
MPLS TE
Diagram: MPLS TE
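A scripted sketch of those four items, pushed with the Python netmiko library; the device details, bandwidth, and tunnel destination are lab placeholders, and your IS-IS process and interface names will differ.

```python
from netmiko import ConnectHandler

# Hypothetical lab device
device = {"device_type": "cisco_ios", "host": "192.0.2.10",
          "username": "admin", "password": "secret"}

te_config = [
    "mpls traffic-eng tunnels",                       # 1. enable TE globally
    "interface GigabitEthernet0/1",
    " mpls traffic-eng tunnels",                      #    ...and per interface
    " ip rsvp bandwidth 10000",                       # 3. RSVP reservable bandwidth (kbps)
    "router isis",
    " metric-style wide",                             # 2. IS-IS prerequisites for TE
    " mpls traffic-eng router-id Loopback0",
    " mpls traffic-eng level-2",
    "interface Tunnel1",                              # 4. the TE tunnel itself
    " ip unnumbered Loopback0",
    " tunnel mode mpls traffic-eng",
    " tunnel destination 10.0.0.2",
    " tunnel mpls traffic-eng path-option 1 dynamic",
]

with ConnectHandler(**device) as conn:
    print(conn.send_config_set(te_config))
```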

**Synchronization Problems**

Packet loss can occur in two scenarios when the actions of the IGP and LDP are not synchronized. Firstly, when an IGP adjacency comes up, the router begins forwarding packets over the new adjacency before the LDP label exchange completes between the peers on that link.

Secondly, when an LDP session terminates, the IGP may keep forwarding traffic over that link even though the label bindings are gone. This is resolved with network kludges that turn on auto-synchronization between the IGP and LDP, and additional configuration is needed to get these two control planes operating safely together.

Solution – Segment Routing

Segment Routing is a new architecture built with SDN in mind. Separating data from the control plane is all about network simplification. SDN is a great concept; we must integrate it into today’s networks. The SDN concept of simplification is a driver for introducing Segment Routing.

Segment routing vs MPLS

Segment routing uses the basics of MPLS but with fewer protocols, less protocol interaction, and less state. It applies to the MPLS architecture with no change to the forwarding plane; existing devices that switch on labels may only need a software upgrade. The concept is based on source routing: the source chooses the path through the network, steering a packet through an ordered list of instructions called segments.

Like MPLS, Segment Routing is based on label switching, but without LDP or RSVP. Labels are called segments, and we still have push, swap, and pop actions. You do not keep state in the middle of the network; the state travels in the packet instead. In the packet header, you place a list of segments. A segment is an instruction: if you want to go to C, use A-B-C.

  • With Segment Routing, per-flow state is maintained only at the ingress node of the domain.

It is all about taking a flow, mapping it to a segment, and putting that segment on an explicit path. It keeps the resilience properties (fast reroute) but simplifies the approach with fewer protocols. As a result, it provides enhanced packet-forwarding behavior while minimizing the need to maintain network state.
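To visualize “the state is in the packet,” the sketch below uses Scapy to push a label stack at the ingress; the label values are illustrative, loosely following the conventional SR global block for node segments.

```python
from scapy.contrib.mpls import MPLS
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether

# The ingress encodes the whole path; transit nodes only pop and forward.
pkt = (Ether()
       / MPLS(label=16004, s=0, ttl=64)   # node segment: shortest path to node 4
       / MPLS(label=24045, s=0, ttl=64)   # adjacency segment: force link 4 -> 5
       / MPLS(label=16007, s=1, ttl=64)   # node segment to egress; bottom of stack
       / IP(src="192.0.2.1", dst="198.51.100.1")
       / UDP(dport=5000))
pkt.show()
```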

Guide on MPLS forwarding.

The previous lab guide can easily be upgraded to MPLS. We removed the GRE tunnel and the iBGP neighbors. MPLS is enabled with the mpls ip command on all interfaces on the P node and the PE node interfaces facing the P node. Now, we have MPLS forwarding based on labels while maintaining a BGP-free core. Notice how the two CEs can ping each other, and there is no route for 5.5.5.5 in the P node.

MPLS forwarding
Diagram: MPLS forwarding

Two types of initial segments are defined

Node and Adjacency

1- Node label: A node label is globally unique to each node. For example, if a node “Dest” is assigned label 65, any ingress traffic carrying label 65 goes straight to Dest, by default over the best path.

2- Adjacency label: a locally significant label that steers packets onto a specific adjacency. It forces packets through a particular link, offering more specific path forwarding than a node label.

segment routing vs mpls
Diagram: Segment routing vs MPLS and the use of labels.

Segment routing: A new business model

Segment Routing addresses current issues and brings a new business model. It aims to address the pain points of existing MPLS/IP networks in terms of simplicity, scale, and ease of operation. Preparing the network with an SDN approach allows application integration directly on top of it.

Segment Routing lets you take certain traffic types and make a routing decision based on that traffic class. It permits traffic you consider important, such as video or voice, to take a different path from best-effort traffic.

Traffic paths can be programmed end-to-end for a specific class of customer. It moves away from the best-path model by looking at the whole network and deciding at the source. It is very similar to MPLS, but the labels are used differently.

SDN controller & network intelligence

Controller-based networks fit perfectly with this technology; it is a centralized, controller-driven methodology. The SDN controller gathers network telemetry, makes decisions based on predefined policy, and pushes information to the nodes that implement data-path forwarding. Network intelligence such as link utilization, path response time, packet drops, latency, and jitter is extracted from the network and analyzed by the controller.

The intelligence now sits at the edges. The packet takes a path based on the network telemetry information extracted by the controller. The result is that the ingress node can push a label stack to the destination to take a specific path.

Your chosen path at the network’s edge is based on telemetry information.

Recap: Applications of Segment Routing:

1. Traffic Engineering and Load Balancing: Segment routing enables network operators to dynamically steer traffic along specific paths to optimize network resource utilization. This capability is handy in scenarios where certain links or nodes experience congestion, enabling network operators to balance the load and efficiently utilize available resources.

2. Service Chaining: Segment routing allows for the seamless insertion of network services, such as firewalls, load balancers, or WAN optimization appliances, into the packet’s path. By specifying the desired service segments, network operators can ensure traffic flows through the necessary services while maintaining optimal performance and security.

3. Network Slicing: With the advent of 5G and the proliferation of Internet of Things (IoT) devices, segment routing can enable efficient network slicing. Network slicing allows for virtualized networks, each tailored to the specific requirements of different applications or user groups. Segment routing provides the flexibility to define and manage these virtualized networks, ensuring efficient resource allocation and isolation.

Segment Routing: Closing Points

Segment routing offers a promising solution to the challenges faced by modern network operators. Segment routing enables efficient and optimized utilization of network resources by providing simplified network operations, enhanced scalability, and traffic engineering flexibility.

With its applications ranging from traffic engineering to service chaining and network slicing, segment routing is poised to play a crucial role in the evolution of modern networks. As the demand for more flexible and efficient networks grows, segment routing emerges as a powerful tool for network operators to meet these demands and deliver a seamless and reliable user experience.

Summary: Segment Routing

Segment Routing, also known as SR, is a cutting-edge technology that has revolutionized network routing in recent years. This innovative approach offers numerous benefits, including enhanced scalability, simplified network management, and efficient traffic engineering. This blog post delved into Segment Routing and explored its key features and advantages.

Understanding Segment Routing

Segment Routing is a flexible and scalable routing paradigm that leverages source routing techniques. It allows network operators to define a predetermined packet path by encoding it in the packet header. This eliminates the need for complex routing protocols and enables simplified network operations.

Key Features of Segment Routing

Traffic Engineering:

Segment Routing provides granular control over traffic paths, allowing network operators to steer traffic along specific paths based on various parameters. This enables efficient utilization of network resources and optimized traffic flows.

Fast Rerouting:

One notable advantage of Segment Routing is its ability to quickly reroute traffic in case of link or node failures. With the predefined paths encoded in the packet headers, the network can dynamically reroute traffic without relying on time-consuming protocol convergence.

Network Scalability:

Segment Routing offers excellent scalability by leveraging a hierarchical addressing structure. It allows network operators to segment the network into smaller domains, simplifying management and reducing the overhead associated with traditional routing protocols.

Use Cases and Benefits

Service Provider Networks:

Segment Routing is particularly beneficial for service provider networks. It enables efficient traffic engineering, seamless service provisioning, and simplified network operations, leading to improved quality of service and reduced operational costs.

Data Center Networks:

In data center environments, Segment Routing offers enhanced flexibility and scalability. It enables optimal traffic steering, efficient workload balancing, and simplified network automation, making it an ideal choice for modern data centers.

Conclusion:

In conclusion, Segment Routing is a powerful and flexible technology that brings numerous benefits to modern networks. Its ability to provide granular control over traffic paths, fast rerouting, and network scalability makes it an attractive choice for network operators. As Segment Routing continues to evolve and gain wider adoption, we can expect to see even more innovative use cases and benefits in the future.

network overlays

WAN Virtualization

WAN Virtualization

In today's fast-paced digital world, seamless connectivity is the key to success for businesses of all sizes. WAN (Wide Area Network) virtualization has emerged as a game-changing technology, revolutionizing the way organizations connect their geographically dispersed branches and remote employees. In this blog post, we will explore the concept of WAN virtualization, its benefits, implementation considerations, and its potential impact on businesses.

WAN virtualization is a technology that abstracts the physical network infrastructure, allowing multiple logical networks to operate independently over a shared physical infrastructure. It enables organizations to combine various types of connectivity, such as MPLS, broadband, and cellular, into a single virtual network. By doing so, WAN virtualization enhances network performance, scalability, and flexibility.

Increased Flexibility and Scalability: WAN virtualization allows businesses to scale their network resources on-demand, facilitating seamless expansion or contraction based on their requirements. It provides flexibility to dynamically allocate bandwidth, prioritize critical applications, and adapt to changing network conditions.

Improved Performance and Reliability: By leveraging intelligent traffic management techniques and load balancing algorithms, WAN virtualization optimizes network performance. It intelligently routes traffic across multiple network paths, avoiding congestion and reducing latency. Additionally, it enables automatic failover and redundancy, ensuring high network availability.

Simplified Network Management: Traditional WAN architectures often involve complex configurations and manual provisioning. WAN virtualization simplifies network management by centralizing control and automating tasks. Administrators can easily set policies, monitor network performance, and make changes from a single management interface, saving time and reducing human errors.

Multi-Site Connectivity: For organizations with multiple remote sites, WAN virtualization offers a cost-effective solution. It enables seamless connectivity between sites, allowing efficient data transfer, collaboration, and resource sharing. With centralized management, network administrators can ensure consistent policies and security across all sites.

Cloud Connectivity:

As more businesses adopt cloud-based applications and services, WAN virtualization becomes an essential component. It provides reliable and secure connectivity between on-premises infrastructure and public or private cloud environments. By prioritizing critical cloud traffic and optimizing routing, WAN virtualization ensures optimal performance for cloud-based applications.

Highlights: WAN Virtualization

### The Basics of WAN

A WAN is a telecommunications network that extends over a large geographical area. It is designed to connect devices and networks across long distances, using various communication links such as leased lines, satellite links, or the internet. The primary purpose of a WAN is to facilitate the sharing of resources and information across locations, making it a vital component of modern business infrastructure. WANs can be either private, connecting specific networks of an organization, or public, utilizing the internet for broader connectivity.

### The Role of Virtualization in WAN

Virtualization has revolutionized the way WANs operate, offering enhanced flexibility, efficiency, and scalability. By decoupling network functions from physical hardware, virtualization allows for the creation of virtual networks that can be easily managed and adjusted to meet organizational needs. This approach reduces the dependency on physical infrastructure, leading to cost savings and improved resource utilization. Virtualized WANs can dynamically allocate bandwidth, prioritize traffic, and ensure optimal performance, making them an attractive solution for businesses seeking agility and resilience.

Separating: Control and Data Plane:

1: – WAN virtualization can be defined as the abstraction of physical network resources into virtual entities, allowing for more flexible and efficient network management. By separating the control plane from the data plane, WAN virtualization enables the centralized management and orchestration of network resources, regardless of their physical locations. This simplifies network administration and paves the way for enhanced scalability and agility.

2: – WAN virtualization optimizes network performance by intelligently routing traffic and dynamically adjusting network resources based on real-time conditions. This ensures that critical applications receive the necessary bandwidth and quality of service, resulting in improved user experience and productivity.

3: – By leveraging WAN virtualization, organizations can reduce their reliance on expensive dedicated circuits and hardware appliances. Instead, they can leverage existing network infrastructure and utilize cost-effective internet connections without compromising security or performance. This significantly lowers operational costs and capital expenditures.

4: – Traditional WAN architectures often struggle to meet modern businesses’ evolving needs. WAN virtualization solves this challenge by providing a scalable and flexible network infrastructure. With virtual overlays, organizations can rapidly deploy and scale their network resources as needed, empowering them to adapt quickly to changing business requirements.

**Implementing WAN Virtualization**

Successful implementation of WAN virtualization requires careful planning and execution. Start by assessing your current network infrastructure and identifying areas for improvement. Choose a virtualization solution that aligns with your organization’s specific needs and budget. Consider leveraging software-defined WAN (SD-WAN) technologies to simplify the deployment process and enhance overall network performance.

There are several popular techniques for implementing WAN virtualization, each with its unique characteristics and use cases. Let’s explore a few of them:

a. MPLS (Multi-Protocol Label Switching): MPLS is a widely used technique that leverages labels to direct network traffic efficiently. It provides reliable and secure connectivity, making it suitable for businesses requiring stringent service level agreements (SLAs).

b. SD-WAN (Software-Defined Wide Area Network): SD-WAN is a revolutionary technology that abstracts and centralizes the network control plane in software. It offers dynamic path selection, traffic prioritization, and simplified network management, making it ideal for organizations with multiple branch locations.

c. VPLS (Virtual Private LAN Service): VPLS extends the functionality of Ethernet-based LANs over a wide area network. It creates a virtual bridge between geographically dispersed sites, enabling seamless communication as if they were part of the same local network.

Example Technology: MPLS & LDP

**The Mechanics of MPLS: How It Works**

MPLS operates by assigning labels to data packets at the network’s entry point—an MPLS-enabled router. These labels determine the path the packet will take through the network, enabling quick and efficient routing. Each router along the path uses the label to make forwarding decisions, eliminating the need for complex table lookups. This not only accelerates data transmission but also allows network administrators to predefine optimal paths for different types of traffic, enhancing network performance and reliability.

**Exploring LDP: The Glue of MPLS Systems**

The Label Distribution Protocol (LDP) is crucial for the functioning of MPLS networks. LDP is responsible for the distribution of labels between routers, ensuring that each understands how to handle the labeled packets appropriately. When routers communicate using LDP, they exchange label information, which helps in building a label-switched path (LSP). This process involves the negotiation of label values and the establishment of the end-to-end path that data packets will traverse, making LDP the unsung hero that ensures seamless and effective MPLS operation.

**Benefits of MPLS and LDP in Modern Networks**

MPLS and LDP together offer a range of benefits that make them indispensable in contemporary networking. They provide a scalable solution that supports a wide array of services, including VPNs, traffic engineering, and quality of service (QoS). This versatility makes it easier for network operators to manage and optimize traffic, leading to improved bandwidth utilization and reduced latency. Additionally, MPLS networks are inherently more secure, as the label-switching mechanism makes it difficult for unauthorized users to intercept or tamper with data.

Overcoming Potential Challenges

While WAN virtualization offers numerous benefits, it also presents certain challenges. Security is a top concern, as virtualized networks can introduce new vulnerabilities. It’s essential to implement robust security measures, such as encryption and access controls, to protect your virtualized WAN. Additionally, ensure your IT team is adequately trained to manage and monitor the virtual network environment effectively.

**Section 1: The Complexity of Network Integration**

One of the primary challenges in WAN virtualization is integrating new virtualized solutions with existing network infrastructures. This task often involves dealing with legacy systems that may not easily adapt to virtualized environments. Organizations need to ensure compatibility and seamless operation across all network components. To address this complexity, businesses can employ network abstraction techniques and use software-defined networking (SDN) tools that offer greater control and flexibility, allowing for a smoother integration process.

**Section 2: Security Concerns in Virtualized Environments**

Security remains a critical concern in any network architecture, and virtualization adds another layer of complexity. Virtual environments can introduce vulnerabilities if not properly managed. The key to overcoming these security challenges lies in implementing robust security protocols and practices. Utilizing encryption, firewalls, and regular security audits can help safeguard the network. Additionally, leveraging network segmentation and zero-trust models can significantly enhance the security of virtualized WANs.

**Section 3: Managing Performance and Reliability**

Ensuring consistent performance and reliability in a virtualized WAN is another significant challenge. Virtualization can sometimes lead to latency and bandwidth issues, affecting the overall user experience. To mitigate these issues, organizations should focus on traffic optimization techniques and quality of service (QoS) management. Implementing dynamic path selection and traffic prioritization can ensure that mission-critical applications receive the necessary bandwidth and performance, maintaining high levels of reliability across the network.

**Section 4: Cost Implications and ROI**

While WAN virtualization can lead to cost savings in the long run, the initial investment and transition can be costly. Organizations must carefully consider the cost implications and potential return on investment (ROI) when adopting virtualized solutions. Conducting thorough cost-benefit analyses and pilot testing can provide valuable insights into the financial viability of virtualization projects. By aligning virtualization strategies with business goals, companies can maximize ROI and achieve sustainable growth.

WAN Virtualisation & SD-WAN Cloud Hub

SD-WAN Cloud Hub is a cutting-edge networking solution that combines the power of software-defined wide area networking (SD-WAN) with the scalability and reliability of cloud services. It acts as a centralized hub, enabling organizations to connect their branch offices, data centers, and cloud resources in a secure and efficient manner. By leveraging SD-WAN Cloud Hub, businesses can simplify their network architecture, improve application performance, and reduce costs.

Google Cloud needs no introduction. With its robust infrastructure, comprehensive suite of services, and global reach, it has become a preferred choice for businesses across industries. From compute and storage to AI and analytics, Google Cloud offers a wide range of solutions that empower organizations to innovate and scale. By integrating SD-WAN Cloud Hub with Google Cloud, businesses can unlock unparalleled benefits and take their network connectivity to new heights.

Understanding SD-WAN

SD-WAN is a cutting-edge networking technology that utilizes software-defined principles to manage and optimize network connections intelligently. Unlike traditional WAN, which relies on costly and inflexible hardware, SD-WAN leverages software-based solutions to streamline network management, improve performance, and enhance security.

Key Benefits of SD-WAN

a) Enhanced Performance: SD-WAN intelligently routes traffic across multiple network paths, ensuring optimal performance and reduced latency. This results in faster data transfers and improved user experience.

b) Cost Efficiency: With SD-WAN, businesses can leverage affordable broadband connections rather than relying solely on expensive MPLS (Multiprotocol Label Switching) links. This not only reduces costs but also enhances network resilience.

c) Simplified Management: SD-WAN centralizes network management through a user-friendly interface, allowing IT teams to easily configure, monitor, and troubleshoot network connections. This simplification saves time and resources, enabling IT professionals to focus on strategic initiatives.

SD-WAN incorporates robust security measures to protect network traffic and sensitive data. It employs encryption protocols, firewall capabilities, and traffic segmentation techniques to safeguard against unauthorized access and potential cyber threats. These advanced security features give businesses peace of mind and ensure data integrity.

WAN Virtualization with Network Connectivity Center

**Understanding Google Network Connectivity Center**

Google Network Connectivity Center (NCC) is a cloud-based service designed to simplify and centralize network management. By leveraging Google’s extensive global infrastructure, NCC provides organizations with a unified platform to manage their network connectivity across various environments, including on-premises data centers, multi-cloud setups, and hybrid environments.

**Key Features and Benefits**

1. **Centralized Network Management**: NCC offers a single pane of glass for network administrators to monitor and manage connectivity across different environments. This centralized approach reduces the complexity associated with managing multiple network endpoints and enhances operational efficiency.

2. **Enhanced Security**: With NCC, organizations can implement robust security measures across their network. The service supports advanced encryption protocols and integrates seamlessly with Google’s security tools, ensuring that data remains secure as it moves between different environments.

3. **Scalability and Flexibility**: One of the standout features of NCC is its ability to scale with your organization’s needs. Whether you’re expanding your data center operations or integrating new cloud services, NCC provides the flexibility to adapt quickly and efficiently.

**Optimizing Data Center Operations**

Data centers are the backbone of modern digital infrastructure, and optimizing their operations is crucial for any organization. NCC facilitates this by offering tools that enhance data center connectivity and performance. For instance, with NCC, you can easily set up and manage VPNs, interconnect data centers across different regions, and ensure high availability and redundancy.

**Seamless Integration with Other Google Services**

NCC isn’t just a standalone service; it integrates seamlessly with other Google Cloud services such as Cloud Interconnect, Cloud VPN, and Google Cloud Armor. This integration allows organizations to build comprehensive network solutions that leverage the best of Google’s cloud offerings. Whether it’s enhancing security, improving performance, or ensuring compliance, NCC works in tandem with other services to deliver a cohesive and powerful network management solution.

Understanding Network Tiers

Google Cloud offers two distinct Network Tiers: Premium Tier and Standard Tier. Each tier is designed to cater to specific use cases and requirements. The Premium Tier provides users with unparalleled performance, low latency, and high availability. On the other hand, the Standard Tier offers a more cost-effective solution without compromising on reliability.

The Premium Tier, powered by Google’s global fiber network, ensures lightning-fast connectivity and optimal performance for critical workloads. With its vast network of points of presence (PoPs), it minimizes latency and enables seamless data transfers across regions. By leveraging the Premium Tier, businesses can ensure superior user experiences and support demanding applications that require real-time data processing.

While the Premium Tier delivers exceptional performance, the Standard Tier presents an attractive option for cost-conscious organizations. By utilizing Google Cloud’s extensive network peering relationships, the Standard Tier offers reliable connectivity at a reduced cost. It is an ideal choice for workloads that are less latency-sensitive or require moderate bandwidth.

What is VPC Networking?

VPC networking refers to the virtual network environment that allows you to securely connect your resources running in the cloud. It provides isolation, control, and flexibility, enabling you to define custom network configurations to suit your specific needs. In Google Cloud, VPC networking is a fundamental building block for your cloud infrastructure.

Google Cloud VPC networking offers a range of powerful features that enhance your network management capabilities. These include subnetting, firewall rules, route tables, VPN connectivity, and load balancing. Let’s explore each of these features in more detail:

Subnetting: With VPC subnetting, you can divide your IP address range into smaller subnets, allowing for better resource allocation and network segmentation.

Firewall Rules: Google Cloud VPC networking provides robust firewall rules that enable you to control inbound and outbound traffic, ensuring enhanced security for your applications and data.

Route Tables: Route tables in VPC networking allow you to define the routing logic for your network traffic, ensuring efficient communication between different subnets and external networks.

VPN Connectivity: Google Cloud supports VPN connectivity, allowing you to establish secure connections between your on-premises network and your cloud resources, creating a hybrid infrastructure.

Load Balancing: VPC networking offers load balancing capabilities, distributing incoming traffic across multiple instances, increasing availability and scalability of your applications.

Example: DMVPN (Dynamic Multipoint VPN)

Separating control from the data plane

DMVPN is a Cisco-developed solution that combines the benefits of multipoint GRE tunnels, IPsec encryption, and dynamic routing protocols to create a flexible and efficient virtual private network. It simplifies network architecture, reduces operational costs, and enhances scalability. With DMVPN, organizations can connect remote sites, branch offices, and mobile users seamlessly, creating a cohesive network infrastructure.

The underlay infrastructure forms the foundation of DMVPN. It refers to the physical network that connects the different sites or locations. This could be an existing Wide Area Network (WAN) infrastructure, such as MPLS, or the public Internet. The underlay provides the transport for the overlay network, enabling the secure transmission of data packets between sites.

The overlay network is the virtual network created on top of the underlay infrastructure. It is responsible for establishing the secure tunnels and routing between the connected sites. DMVPN uses multipoint GRE tunnels to allow dynamic and direct communication between sites, eliminating the need for a hub-and-spoke topology. IPsec encryption ensures the confidentiality and integrity of data transmitted over the overlay network.
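To make the overlay/underlay split concrete, here is a minimal sketch of a DMVPN hub tunnel in Cisco IOS syntax. All addresses, IDs, and interface names are assumptions for illustration, not values from any specific deployment:

```
! DMVPN hub: one mGRE tunnel terminates all spokes (illustrative addressing)
interface Tunnel0
 ip address 192.168.100.1 255.255.255.0   ! overlay subnet shared by hub and spokes
 ip nhrp map multicast dynamic            ! replicate multicast to registered spokes
 ip nhrp network-id 1                     ! NHRP domain identifier
 tunnel source GigabitEthernet0/0         ! underlay-facing interface (the WAN)
 tunnel mode gre multipoint               ! mGRE: no fixed tunnel destination
 tunnel key 1                             ! optional key that must match the spokes
```

Note that the overlay addressing (192.168.100.0/24 here) is entirely independent of whatever the underlay transport uses, which is the point of the design.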

Example WAN Technology: Tunneling IPv6 over IPv4

IPv6 tunneling is a technique that allows the transmission of IPv6 packets over an IPv4 network infrastructure. It enables communication between IPv6 networks by encapsulating IPv6 packets within IPv4 packets. By doing so, organizations can utilize existing IPv4 infrastructure while transitioning to IPv6. Before delving into its various implementations, understanding the basics of IPv6 tunneling is crucial.

Types of IPv6 Tunneling

There are several types of IPv6 tunneling techniques, each with its advantages and considerations. Let’s explore a few popular types:

Manual Tunneling: Manual tunneling is a simple method of configuring tunnel endpoints. It requires manually configuring tunnel interfaces on each participating device. While it provides flexibility and control, this approach can be time-consuming and prone to human error (see the sketch after this list).

Automatic Tunneling: Automatic tunneling, also known as 6to4 tunneling, allows for the automatic creation of tunnels without manual configuration. It utilizes the 6to4 addressing scheme, where IPv6 packets are encapsulated within IPv4 packets using protocol 41. While convenient, automatic tunneling may encounter issues with address translation and compatibility.

Teredo Tunneling: Teredo tunneling is another automatic technique that enables IPv6 connectivity for hosts behind IPv4 Network Address Translation (NAT) devices. It uses UDP encapsulation to carry IPv6 packets over IPv4 networks. Though widely supported, Teredo tunneling may suffer from performance limitations due to its reliance on UDP.
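As referenced above, manual tunneling is the simplest case to show in configuration. The following Cisco IOS-style sketch encapsulates IPv6 inside IPv4 (IP protocol 41) between two hand-configured endpoints; all addresses are invented documentation values, and the far endpoint mirrors this configuration:

```
! Manual IPv6-over-IPv4 tunnel endpoint (the remote side mirrors this)
interface Tunnel1
 ipv6 address 2001:DB8:1::1/64      ! IPv6 address used inside the tunnel
 tunnel source 203.0.113.1          ! local IPv4 endpoint
 tunnel destination 198.51.100.2    ! remote IPv4 endpoint, configured by hand
 tunnel mode ipv6ip                 ! IPv6-in-IPv4 encapsulation, protocol 41
```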

WAN Virtualization Technologies

Understanding VRFs

VRFs, in simple terms, allow the creation of multiple virtual routing tables within a single physical router or switch. Each VRF operates as an independent routing instance with its routing table, interfaces, and forwarding decisions. This powerful concept allows for logical separation of network traffic, enabling enhanced security, scalability, and efficiency.

One of VRFs’ primary advantages is network segmentation. By creating separate VRF instances, organizations can effectively isolate different parts of their network, ensuring traffic from one VRF cannot directly communicate with another. This segmentation enhances network security and provides granular control over network resources.

Furthermore, VRFs enable efficient use of network resources. By utilizing VRFs, organizations can optimize their routing decisions, ensuring that traffic is forwarded through the most appropriate path based on the specific requirements of each VRF. This dynamic routing capability leads to improved network performance and better resource utilization.
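Before looking at use cases, here is a minimal Cisco IOS-style sketch of the idea, with hypothetical VRF names and addressing. Note how the same subnet can exist in both VRFs without conflict, because each VRF has its own routing table:

```
! Two VRFs on one router: traffic in CUST-A cannot reach CUST-B directly
ip vrf CUST-A
 rd 65000:1                         ! route distinguisher keeps routes unique
ip vrf CUST-B
 rd 65000:2
!
interface GigabitEthernet0/1
 ip vrf forwarding CUST-A           ! bind this interface to CUST-A's table
 ip address 10.1.1.1 255.255.255.0
interface GigabitEthernet0/2
 ip vrf forwarding CUST-B
 ip address 10.1.1.1 255.255.255.0  ! overlapping addresses are fine across VRFs
```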

Use Cases for VRFs

VRFs are widely used in various networking scenarios. One common use case is in service provider networks, where VRFs separate customer traffic, allowing multiple customers to share a single physical infrastructure while maintaining isolation. This approach brings cost savings and scalability benefits.

Another use case for VRFs is in enterprise networks with strict security requirements. By leveraging VRFs, organizations can segregate sensitive data traffic from the rest of the network, reducing the risk of unauthorized access and potential data breaches.

Example WAN technology: Cisco PfR

Cisco PfR is an intelligent routing solution that utilizes real-time performance metrics to make dynamic routing decisions. By continuously monitoring network conditions, such as latency, jitter, and packet loss, PfR can intelligently reroute traffic to optimize performance. Unlike traditional static routing protocols, PfR adapts to network changes on the fly, ensuring optimal utilization of available resources.

Key Features of Cisco PfR

a. Performance Monitoring: PfR continuously collects performance data from various sources, including routers, probes, and end-user devices. This data provides valuable insights into network behavior and helps identify areas of improvement.

b. Intelligent Traffic Engineering: With its advanced algorithms, Cisco PfR can dynamically select the best path for traffic based on predefined policies and performance metrics. This enables efficient utilization of available network resources and minimizes congestion.

c. Application Visibility and Control: PfR offers deep visibility into application-level performance, allowing network administrators to prioritize critical applications and allocate resources accordingly. This ensures optimal performance for business-critical applications and improves overall user experience.


Example WAN Technology: Network Overlay

Virtual network overlays serve as a layer of abstraction, enabling the creation of multiple virtual networks on top of a physical network infrastructure. By encapsulating network traffic within virtual tunnels, overlays provide isolation, scalability, and flexibility, empowering organizations to manage their networks efficiently.

Underneath the surface, virtual network overlays rely on encapsulation protocols such as Virtual Extensible LAN (VXLAN) or Generic Routing Encapsulation (GRE). These protocols enable the creation of virtual tunnels, allowing network packets to traverse the physical infrastructure while remaining isolated within their respective virtual networks.

**What is GRE?**

At its core, Generic Routing Encapsulation is a tunneling protocol that allows the encapsulation of different network layer protocols within IP packets. It acts as an envelope, carrying packets from one network to another across an intermediate network. GRE provides a flexible and scalable solution for connecting disparate networks, facilitating seamless communication.

GRE encapsulates the original packet, often called the payload, within a new IP packet. This encapsulated packet is then sent to the destination network, where it is decapsulated to retrieve the original payload. By adding an IP header, GRE enables the transportation of various protocols across different network infrastructures, including IPv4, IPv6, IPX, and MPLS.
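As a brief illustration, a point-to-point GRE tunnel in Cisco IOS syntax looks roughly like the sketch below; the interface names and addresses are made-up example values:

```
! Point-to-point GRE tunnel: payload is carried inside a new IP header
interface Tunnel0
 ip address 172.16.1.1 255.255.255.252  ! tunnel-internal addressing
 tunnel source 203.0.113.1              ! outer IPv4 header source
 tunnel destination 198.51.100.2        ! outer IPv4 header destination
 tunnel mode gre ip                     ! default GRE encapsulation
```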

**Introducing IPSec Services**

IPSec, short for Internet Protocol Security, is a suite of protocols that provides security services at the IP network layer. It offers data integrity, confidentiality, and authentication features, ensuring that data transmitted over IP networks remains protected from unauthorized access and tampering. IPSec operates in two modes: Transport Mode and Tunnel Mode.

**Combining GRE & IPSec**

By combining GRE and IPSec, organizations can create secure and private communication channels over public networks. GRE provides the tunneling mechanism, while IPSec adds an extra layer of security by encrypting and authenticating the encapsulated packets. This combination allows for the secure transmission of sensitive data, remote access to private networks, and the establishment of virtual private networks (VPNs).

The combination of GRE and IPSec offers several advantages. First, it enables the creation of secure VPNs, allowing remote users to connect securely to private networks over public infrastructure. Second, it protects against eavesdropping and data tampering, ensuring the confidentiality and integrity of transmitted data. Lastly, GRE and IPSec are vendor-neutral protocols widely supported by various network equipment, making them accessible and compatible.


What is MPLS?

MPLS, short for Multi-Protocol Label Switching, is a versatile and scalable protocol used in modern networks. At its core, MPLS assigns labels to network packets, allowing for efficient and flexible routing. These labels help streamline traffic flow, leading to improved performance and reliability. To understand how MPLS works, we need to explore its key components.

The basic building block is the Label Switched Path (LSP), a predetermined path that packets follow. Labels are attached to packets at the ingress router, guiding them along the LSP until they reach their destination. This label-based forwarding mechanism enables MPLS to offer traffic engineering capabilities and support various network services.

Understanding Label Distribution Protocols

Label distribution protocols, such as LDP (Label Distribution Protocol), are fundamental to modern networking technologies. They are designed to establish and maintain label-switched paths (LSPs) in a network. LDP operates by distributing labels, which are used to identify and forward network traffic efficiently. By leveraging labels, LDP enhances network scalability and enables faster packet forwarding.

One key advantage of label distribution protocols is their role in multiprotocol label switching (MPLS). MPLS allows for efficient forwarding of different types of network traffic, including IP, Ethernet, and ATM. This versatility makes label distribution protocols highly adaptable and suitable for diverse network environments. Additionally, LDP minimizes network congestion, improves Quality of Service (QoS), and promotes effective resource utilization.

What is MPLS LDP?

MPLS LDP, or Label Distribution Protocol, is a key component of Multiprotocol Label Switching (MPLS) technology. It facilitates the establishment of label-switched paths (LSPs) through the network, enabling efficient forwarding of data packets. MPLS LDP uses labels to direct network traffic along predetermined paths, eliminating the need for complex routing table lookups.

One of MPLS LDP’s primary advantages is its ability to enhance network performance. By utilizing labels, MPLS LDP reduces the time and resources required for packet forwarding, resulting in faster data transmission and reduced network congestion. Additionally, MPLS LDP allows for traffic engineering, enabling network administrators to prioritize certain types of traffic and allocate bandwidth accordingly.
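Enabling MPLS with LDP on a Cisco router is brief. The sketch below is illustrative; the interface names are assumptions:

```
! Enable MPLS label switching with LDP on core-facing interfaces
mpls label protocol ldp               ! use LDP (rather than the legacy TDP)
mpls ldp router-id Loopback0 force    ! stable router ID for LDP sessions
!
interface GigabitEthernet0/0
 mpls ip                              ! start label switching and LDP on this link
```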

Understanding MPLS VPNs

MPLS VPNs, or Multiprotocol Label Switching Virtual Private Networks, are a network architecture that allows multiple sites or branches of an organization to communicate securely over a shared service provider network. Unlike traditional VPNs, MPLS VPNs utilize labels to efficiently route and prioritize data packets, ensuring optimal performance and security. By encapsulating data within labels, MPLS VPNs enable seamless communication between different sites while maintaining privacy and segregation.

Understanding VPLS

VPLS, short for Virtual Private LAN Service, is a technology that enables the creation of a virtual LAN (Local Area Network) over a shared or public network infrastructure. It allows geographically dispersed sites to connect as if they are part of the same LAN, regardless of their physical distance. This technology uses MPLS (Multiprotocol Label Switching) to transport Ethernet frames across the network efficiently.

Key Features and Benefits

Scalability and Flexibility: VPLS offers scalability, allowing businesses to easily expand their network as their requirements grow. It allows adding or removing sites without disrupting the overall network, making it an ideal choice for organizations with dynamic needs.

Seamless Connectivity: By extending the LAN across different locations, VPLS provides a seamless and transparent network experience. Employees can access shared resources, such as files and applications, as if they were all in the same office, promoting collaboration and productivity across geographically dispersed teams.

Enhanced Security: VPLS ensures a high level of security by isolating each customer’s traffic within their own virtual LAN. The data is encapsulated and encrypted, protecting it from unauthorized access. This makes VPLS a reliable solution for organizations that handle sensitive information and must comply with strict security regulations.

**Advanced WAN Designs**

DMVPN Phase 2 Spoke to Spoke Tunnels

A dynamic spoke-to-spoke tunnel is created by learning the required mapping information through NHRP resolution. How does a spoke know to perform such a task? Spoke-to-spoke tunnels were first introduced in DMVPN Phase 2 as an enhancement to Phase 1. Phase 2 handed responsibility for NHRP resolution requests to each spoke individually: a spoke initiated an NHRP resolution request when it determined that a packet needed a spoke-to-spoke tunnel. Cisco Express Forwarding (CEF) would assist the spoke in making this decision based on information contained in its routing table.
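As a rough sketch of what this looks like in configuration, a Phase 2 spoke runs mGRE itself, so it can build tunnels directly to other spokes once NHRP resolves their underlay addresses. The addressing below is assumed for illustration:

```
! DMVPN Phase 2 spoke: mGRE lets the spoke tunnel directly to other spokes
interface Tunnel0
 ip address 192.168.100.2 255.255.255.0
 ip nhrp map 192.168.100.11 203.0.113.11   ! static overlay-to-underlay map for the hub
 ip nhrp map multicast 203.0.113.11        ! send multicast (routing protocol) to the hub
 ip nhrp network-id 1
 ip nhrp nhs 192.168.100.11                ! register with the hub as next-hop server
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint                ! Phase 2: spokes also run mGRE
```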

Exploring Single Hub Dual Cloud Architecture

– Single Hub Dual Cloud is a specific deployment model within the DMVPN framework that provides enhanced redundancy and improved performance. This architecture connects a single hub device to two separate cloud service providers, creating two independent VPN clouds. This setup offers numerous advantages, including increased availability, load balancing, and optimized traffic routing.

– One key benefit of the Single Hub Dual Cloud approach is improved network resiliency. With two independent clouds, businesses can ensure uninterrupted connectivity even if one cloud or service provider experiences issues. This redundancy minimizes downtime and helps maintain business continuity. This architecture’s load-balancing capabilities also enable efficient traffic distribution, reducing congestion and enhancing overall network performance.

– Implementing DMVPN Single Hub Dual Cloud requires careful planning and configuration. Organizations must assess their needs, evaluate suitable cloud service providers, and design a robust network architecture. Working with experienced network engineers and leveraging automation tools can streamline deployment and ensure successful implementation.

WAN Services

Network Address Translation:

In simple terms, NAT is a technique for modifying IP addresses while packets traverse from one network to another. It bridges private local networks and the public Internet, allowing multiple devices to share a single public IP address. By translating IP addresses, NAT enables private networks to communicate with external networks without exposing their internal structure.

Types of Network Address Translation

There are several types of NAT, each serving a specific purpose. Let’s explore a few common ones:

Static NAT: Static NAT, also known as one-to-one NAT, maps a private IP address to a public IP address. It is often used when specific devices on a network require direct access to the internet. With static NAT, inbound and outbound traffic can be routed seamlessly.

Dynamic NAT: On the other hand, Dynamic NAT allows a pool of public IP addresses to be shared among several devices within a private network. As devices connect to the internet, they are assigned an available public IP address from the pool. Dynamic NAT facilitates efficient utilization of public IP addresses while maintaining network security.

Port Address Translation (PAT): PAT, also called NAT Overload, is an extension of dynamic NAT. Rather than assigning a unique public IP address to each device, PAT assigns a unique port number to each connection. PAT allows multiple devices to share a single public IP address by keeping track of port numbers. This technique is widely used in home networks and small businesses.
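To make PAT concrete, here is a minimal Cisco IOS-style sketch; the interfaces and addressing are example assumptions:

```
! PAT (NAT overload): many inside hosts share one public address by port
interface GigabitEthernet0/0
 ip address 203.0.113.1 255.255.255.0
 ip nat outside                     ! public-facing side
interface GigabitEthernet0/1
 ip address 192.168.1.1 255.255.255.0
 ip nat inside                      ! private LAN side
!
access-list 1 permit 192.168.1.0 0.0.0.255   ! which inside hosts get translated
ip nat inside source list 1 interface GigabitEthernet0/0 overload
```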

NAT plays a crucial role in enhancing network security. By hiding devices’ internal IP addresses, it acts as a barrier against potential attacks from the Internet. External threats find it harder to identify and target individual devices within a private network. NAT acts as a shield, providing additional security to the network infrastructure.

PBR At the WAN Edge

Understanding Policy-Based Routing

Policy-based Routing (PBR) allows network administrators to control the path of network traffic based on specific policies or criteria. Unlike traditional routing protocols, PBR offers a more granular and flexible approach to directing network traffic, enabling fine-grained control over routing decisions.

PBR offers many features and functionalities that empower network administrators to optimize network traffic flow. Some key aspects include:

1. Traffic Classification: PBR allows the classification of network traffic based on various attributes such as source IP, destination IP, protocol, port numbers, or even specific packet attributes. This flexibility enables administrators to create customized policies tailored to their network requirements.

2. Routing Decision Control: With PBR, administrators can define specific routing decisions for classified traffic. Traffic matching certain criteria can be directed towards a specific next-hop or exit interface, bypassing the regular routing table.

3. Load Balancing and Traffic Engineering: PBR can distribute traffic across multiple paths, leveraging load balancing techniques. By intelligently distributing traffic, administrators can optimize resource utilization and enhance network performance.
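The sketch below ties the first two capabilities together in Cisco IOS syntax: an ACL classifies the traffic, and a route map overrides the normal routing decision for matches. The subnet, next hop, and names are illustrative assumptions:

```
! PBR: send HTTPS traffic from one subnet via a specific next hop
access-list 101 permit tcp 192.168.10.0 0.0.0.255 any eq 443
!
route-map PBR-WEB permit 10
 match ip address 101                  ! classify traffic by ACL
 set ip next-hop 203.0.113.254         ! bypass the routing table for matches
!
interface GigabitEthernet0/1
 ip policy route-map PBR-WEB           ! apply to traffic entering this interface
```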

Performance at the WAN Edge

Understanding TCP MSS

TCP MSS refers to the maximum amount of data encapsulated in a single TCP segment. It determines the payload size within each TCP packet, excluding the TCP/IP headers. By limiting the MSS, TCP ensures that data is transmitted in manageable chunks, preventing fragmentation and improving overall network performance.

Several factors influence the determination of TCP MSS. One crucial aspect is the underlying network’s Maximum Transmission Unit (MTU). The MTU represents the largest packet size transmitted over a network without fragmentation. TCP MSS is typically set to match the MTU to avoid packet fragmentation and subsequent retransmissions.

By appropriately configuring TCP MSS, network administrators can optimize network performance. Matching the TCP MSS to the MTU size reduces the chances of packet fragmentation, which can lead to delays and retransmissions. Moreover, a properly sized TCP MSS can prevent unnecessary overhead and improve bandwidth utilization.

Adjusting the TCP MSS to suit specific network requirements is possible. Network administrators can configure the TCP MSS value on routers, firewalls, and end devices. This flexibility allows for fine-tuning network performance based on the specific characteristics and constraints of the network infrastructure.
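On a Cisco router, for example, the adjustment is a single interface command. The values below are common starting points for GRE/IPsec tunnels but should be derived from your actual path MTU:

```
! Clamp TCP MSS on a tunnel so payload plus headers fits the path MTU
interface Tunnel0
 ip mtu 1400                ! GRE/IPsec overhead reduces the usable MTU
 ip tcp adjust-mss 1360     ! MSS = IP MTU minus 40 bytes of IP and TCP headers
```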

WAN – The desired benefits

Businesses often want to replace or augment premium bandwidth services and switch from active/standby to active/active WAN transport models to reduce costs. The challenge, however, is that augmentation can increase operational complexity. To create a consistent operational model and simplify IT, businesses must avoid this added complexity.

Maintaining remote site uptime for business continuity goes beyond simply preventing blackouts. Latency, jitter, and loss can degrade critical applications to the point where they are effectively inoperable, even though the link itself stays up. The term "brownout" refers to these situations. Businesses today are focused on providing a consistent, high-quality application experience.

Ensuring connectivity

To ensure connectivity and make changes, there is a shift toward retaking control. This control extends beyond routing or quality of service to include application experience and availability. Many businesses are still unfamiliar with managing the Internet edge at remote sites. With this support, Software as a Service (SaaS) and productivity applications can be rolled out more effectively.

Better access to Infrastructure as a Service (IaaS) is also necessary. Offloading guest traffic at branches with direct Internet connectivity is also possible, and many businesses are interested in doing so. Offloading this traffic locally is more efficient than routing it through a centralized data center, which wastes WAN bandwidth.

The shift to application-centric architecture

Business requirements are changing rapidly, and today's networks cannot cope. Hardware-centric networks are traditionally more expensive and have fixed capacity. In addition, the box-by-box configuration approach, siloed management tools, and lack of automated provisioning make them more challenging to support.

They are inflexible, static, expensive, and difficult to maintain due to conflicting policies between domains and different configurations between services. As a result, security vulnerabilities and misconfigurations are more likely to occur. An application- or service-centric architecture focusing on simplicity and user experience should replace a connectivity-centric architecture.

Understanding Virtualization

Virtualization is a technology that allows the creation of virtual versions of various IT resources, such as servers, networks, and storage devices. These virtual resources operate independently from physical hardware, enabling multiple operating systems and applications to run simultaneously on a single physical machine. Virtualization opens possibilities by breaking the traditional one-to-one relationship between hardware and software. Now, virtualization has moved to the WAN.

WAN Virtualization and SD-WAN

Organizations constantly seek innovative solutions in modern networking to enhance their network infrastructure and optimize connectivity. One such solution that has gained significant attention is WAN virtualization. In this blog post, we will delve into the concept of WAN virtualization, its benefits, and how it revolutionizes how businesses connect and communicate.

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that enables organizations to abstract their wide area network (WAN) connections from the underlying physical infrastructure. It leverages software-defined networking (SDN) principles to decouple network control and data forwarding, providing a more flexible, scalable, and efficient network solution.

VPN and SDN Components

WAN virtualization is an essential technology in the modern business world. It creates virtualized versions of wide area networks (WANs), networks spanning a wide geographic area. The virtualized WANs can then be used to manage and secure a company's data, applications, and services.

Regarding implementation, WAN virtualization requires using a virtual private network (VPN), a secure private network accessible only by authorized personnel. This ensures that only those with proper credentials can access the data. WAN virtualization also requires software-defined networking (SDN) to manage the network and its components.

Related: Before you proceed, you may find the following posts helpful:

  1. SD WAN Overlay
  2. Generic Routing Encapsulation
  3. WAN Monitoring
  4. SD WAN Security 
  5. Container Based Virtualization
  6. SD WAN and Nuage Networks

WAN Virtualization

Knowledge Check: Application-Aware Routing (AAR)

Understanding Application-Aware Routing (AAR)

Application-aware routing is a sophisticated networking technique that goes beyond traditional packet-based routing. It considers the unique requirements of different applications, such as video streaming, cloud-based services, or real-time communication, and optimizes the network path accordingly. It ensures smooth and efficient data transmission by prioritizing and steering traffic based on application characteristics.

Benefits of Application-Aware Routing

1- Enhanced Performance: Application-aware routing significantly improves overall performance by dynamically allocating network resources to applications with high bandwidth or low latency requirements. This translates into faster downloads, seamless video streaming, and reduced response times for critical applications.

2- Increased Reliability: Traditional routing methods treat all traffic equally, often resulting in congestion and potential bottlenecks. Application-aware routing intelligently distributes network traffic, avoiding congested paths and ensuring a reliable and consistent user experience. In the event of network failure or congestion, it can dynamically reroute traffic to alternative paths, minimizing downtime and disruptions.

Implementation Strategies

1- Deep Packet Inspection: A key component of Application-Aware Routing is deep packet inspection (DPI), which analyzes the content of network packets to identify specific applications. DPI enables routers and switches to make informed decisions about handling each packet based on its application, ensuring optimal routing and resource allocation.

2- Quality of Service (QoS) Configuration: Implementing QoS parameters alongside Application Aware Routing allows network administrators to allocate bandwidth, prioritize specific applications over others, and enforce policies to ensure the best possible user experience. QoS configurations can be customized based on organizational needs and application requirements.

Future Possibilities

As the digital landscape continues to evolve, the potential for Application-Aware Routing is boundless. With emerging technologies like the Internet of Things (IoT) and 5G networks, the ability to intelligently route traffic based on specific application needs will become even more critical. Application-aware routing has the potential to optimize resource utilization, enhance security, and support the seamless integration of diverse applications and services.

WAN Challenges

Deploying and managing the Wide Area Network (WAN) has become more challenging. Engineers face several design challenges, such as decentralizing traffic flows, inefficient WAN link utilization, slow routing protocol convergence, and application performance issues with active-active WAN edge designs. Active-active WAN designs that spray and pray over multiple active links present both technical and business challenges.

To do this efficiently, you have to understand application flows. There may also be performance problems: packets may arrive at the other end out of order because each link propagates at a different speed. The remote end then has to reassemble the packets in order, which adds jitter and delay. Both high jitter and delay are bad for network performance. To recap WAN virtualization, including the drivers for SD-WAN, you may follow this SD WAN tutorial.

Diagram: What is WAN virtualization? (Source: LinkedIn)

Knowledge Check: Control and Data Plane

Understanding the Control Plane

The control plane can be likened to a network’s brain. It is responsible for making high-level decisions and managing network-wide operations. From routing protocols to network management systems, the control plane ensures data is directed along the most optimal paths. By analyzing network topology, the control plane determines the best routes to reach a destination and establishes the necessary rules for data transmission.

Unveiling the Data Plane

In contrast to the control plane, the data plane focuses on the actual movement of data packets within the network. It can be thought of as the hands and feet executing the control plane’s instructions. The data plane handles packet forwarding, traffic classification, and Quality of Service (QoS) enforcement tasks. It ensures that data packets are correctly encapsulated, forwarded to their intended destinations, and delivered with the necessary priority and reliability.

Use Cases and Deployment Scenarios

Distributed Enterprises:

For organizations with multiple branch locations, WAN virtualization offers a cost-effective solution for connecting remote sites to the central network. It allows for secure and efficient data transfer between branches, enabling seamless collaboration and resource sharing.

Cloud Connectivity:

WAN virtualization is ideal for enterprises adopting cloud-based services. It provides a secure and optimized connection to public and private cloud environments, ensuring reliable access to critical applications and data hosted in the cloud.

Disaster Recovery and Business Continuity:

WAN virtualization plays a vital role in disaster recovery strategies. Organizations can ensure business continuity during a natural disaster or system failure by replicating data and applications across geographically dispersed sites.

Challenges and Considerations:

Implementing WAN virtualization requires careful planning and consideration. Factors such as network security, bandwidth requirements, and compatibility with existing infrastructure need to be evaluated. It is essential to choose a solution that aligns with the specific needs and goals of the organization.

SD-WAN vs. DMVPN

DMVPN (Dynamic Multipoint Virtual Private Network) and SD-WAN (Software-Defined Wide Area Network) are two popular solutions for improving connectivity between distributed branch offices. DMVPN is a Cisco-specific solution, while SD-WAN is a software-based solution that can be used with routers from many vendors. Both provide several advantages, but there are some differences between them.

DMVPN is a secure, cost-effective, and scalable network solution that combines underlying technologies and DMVPN phases (for example, the traditional DMVPN Phase 1) to connect multiple sites. It allows the customer to use existing infrastructure and provides easy deployment and management. This solution is an excellent choice for businesses with many branch offices because it allows for secure communication and the ability to deploy new sites quickly.


SD-WAN is a software-based solution that is gaining popularity in the enterprise market. It provides improved application performance, security, and network reliability. SD-WAN is an excellent choice for businesses that require high-performance applications across multiple sites. It provides an easy-to-use centralized management console that allows companies to deploy new sites and manage the network quickly.

Diagram: Example with DMVPN. (Source: Cisco)

Guide: DMVPN operating over the WAN

The following shows DMVPN operating over the WAN. The SP node represents the WAN network. R11 is the hub, and R2 and R3 are the spokes. Several protocols make the DMVPN network over the WAN possible. We have GRE; in this case, the tunnel destination is specified, making this a point-to-point GRE tunnel instead of an mGRE tunnel.

Then we have NHRP, which is used to create the overlay-to-underlay mapping. Because this is a nonbroadcast network, we cannot use ARP, so we manually configure the next-hop server on the spokes with the command: ip nhrp nhs 192.168.100.11
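Putting the guide's pieces together, a spoke such as R2 might be configured roughly as follows. The overlay addresses match the text; the underlay addresses and interface names are assumptions for illustration:

```
! Spoke R2: point-to-point GRE to hub R11, as described in the guide
interface Tunnel0
 ip address 192.168.100.2 255.255.255.0
 ip nhrp map 192.168.100.11 10.0.0.11   ! overlay-to-underlay mapping for the hub
 ip nhrp network-id 100
 ip nhrp nhs 192.168.100.11             ! the next-hop server command from the text
 tunnel source GigabitEthernet0/0
 tunnel destination 10.0.0.11           ! point-to-point GRE, not mGRE
```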

Diagram: DMVPN configuration.

Shift from network-centric to business intent.

The core of WAN virtualization involves shifting focus from a network-centric model to a business intent-based WAN network. So, instead of designing the WAN for the network, we can create the WAN for the application. This way, the WAN architecture can simplify application deployment and management.

First, however, the mindset must shift from a network topology focus to an application services topology. A new style of application consumes vast amounts of bandwidth and is very susceptible to variations in bandwidth quality. Jitter, loss, and delay impact most applications, which makes it essential to improve the WAN environment they run over.

Diagram: WAN virtualization.

The spray-and-pray method over two links increases raw bandwidth but decreases "goodput." It also affects firewalls, as they will see asymmetric routes. When you want an active-active model, you need application session awareness and a design that eliminates asymmetric routing. You need to be able to slice the WAN properly so application flows can work efficiently over either link.

What is WAN Virtualization: Decentralizing Traffic

Decentralizing traffic from the data center to the branch requires more bandwidth at the network's edges. As a result, we see many high-bandwidth applications running on remote sites. This is what businesses are now trying to accomplish. Traditional branch sites usually rely on hub sites for most services and do not host bandwidth-intensive applications. Today, remote locations require extra bandwidth, and that bandwidth does not get cheaper year over year.

Inefficient WAN utilization

Redundant WAN links usually require a dynamic routing protocol for traffic engineering and failover. Routing protocols require complex tuning to load balance traffic between border devices. Border Gateway Protocol (BGP) is the primary protocol for connecting sites to external networks.

It relies on path attributes to choose the best path based on availability and distance. Although these attributes allow granular policy control, they do not cover aspects relating to path performance, such as Round Trip Time (RTT), delay, and jitter.

Furthermore, BGP does not always choose the “best” path, which may have different meanings for customers. For example, customer A might consider the path via provider A as the best due to the price of links. Default routing does not take this into account. Packet-level routing protocols are not designed to handle the complexities of running over multiple transport-agnostic links. Therefore, a solution that eliminates the need for packet-level routing protocols must arise.
Diagram: BGP path attributes. (Source: Cisco)
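As a hedged example of the kind of policy BGP does support, the following sketch prefers provider A purely on commercial grounds by raising local preference; the AS numbers and neighbor addresses are invented for illustration. Notice that nothing in it can react to RTT, delay, or jitter:

```
! Prefer routes learned from provider A (AS 65001) for outbound traffic
route-map PREFER-A permit 10
 set local-preference 200        ! higher local-preference wins; default is 100
!
router bgp 65000
 neighbor 203.0.113.1 remote-as 65001
 neighbor 203.0.113.1 route-map PREFER-A in   ! policy based on price, not performance
 neighbor 198.51.100.1 remote-as 65002
```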

Routing protocol convergence

WAN designs can also be active-standby, which requires routing protocol convergence in the event of primary link failure. However, routing convergence is slow; to speed it up, additional features such as Bidirectional Forwarding Detection (BFD) are implemented, which may stress the network's control plane. Although mechanisms exist to speed up convergence and failure detection, there are still several convergence steps, such as:

- Detect
- Describe
- Switch
- Find
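BFD is one of those detection mechanisms. A minimal Cisco IOS-style sketch, with assumed timers and interface names, looks like this:

```
! BFD gives sub-second failure detection that the routing protocol subscribes to
interface GigabitEthernet0/0
 bfd interval 300 min_rx 300 multiplier 3   ! roughly 900 ms detection time
!
router ospf 1
 bfd all-interfaces                         ! tear down adjacencies on BFD failure
```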

Branch office security

With traditional network solutions, branches connect back to the data center, which typically provides Internet access. However, the application world has evolved, and branches directly consume applications such as Office 365 in the cloud. This drives a need for branches to access these services over the Internet without going to the data center for Internet access or security scrubbing.

Extending the security diameter into the branches should be possible without requiring onsite firewalls / IPS and other security paradigm changes. A solution must exist that allows you to extend your security domain to the branch sites without costly security appliances at each branch—essentially, building a dynamic security fabric.

WAN Virtualization

The solution to all these problems is SD-WAN (software-defined WAN). SD-WAN is a transport-independent, software-based overlay networking deployment. It uses software and cloud-based technologies to simplify the delivery of WAN services to branch offices. Similar to Software-Defined Networking (SDN), SD-WAN works by abstraction: it abstracts network hardware into a control plane with multiple data planes that make up one large WAN fabric.

SD-WAN in a nutshell 

When we consider the Wide Area Network (WAN) environment at a basic level, we connect data centers to several branch offices to deliver packets between those sites, supporting the transport of application transactions and services. The SD-WAN platform allows you to pull Internet connectivity into those sites, becoming part of one large transport-independent WAN fabric.

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE) and chooses the best path based on performance.

There are many forms of Internet connectivity (cable, DSL, broadband, and Ethernet). They are quick to deploy at a fraction of the cost of private MPLS circuits. SD-WAN provides the benefit of using all these links and monitoring which applications are best for them.

Application performance is continuously monitored across all eligible paths: direct Internet, Internet VPN, and private WAN. This creates an active-active network and eliminates the need to use and maintain traditional routing protocols for active-standby setups, with no reliance on the active-standby model and its associated problems.

Diagram: WAN virtualization. (Source: Juniper)

SD-WAN simplifies WAN management

SD-WAN simplifies managing a wide area network by providing a centralized platform for managing and monitoring traffic across the network. This helps reduce the complexity of managing multiple networks, eliminating the need for manual configuration of each site. Instead, all of the sites are configured from a single management console.

SD-WAN also provides advanced security features such as encryption and firewalling, which can be configured to ensure that only authorized traffic is allowed access to the network. Additionally, SD-WAN can optimize network performance by automatically routing traffic over the most efficient paths.


SD-WAN Packet Steering

SD-WAN packet steering is a technology that efficiently routes packets across a wide area network (WAN). It is based on the concept of steering packets so that they can be delivered more quickly and reliably than traditional routing protocols. Packet steering is crucial to SD-WAN technology, allowing organizations to maximize their WAN connections.

SD-WAN packet steering works by analyzing packets sent across the WAN and looking for patterns or trends. Based on these patterns, the SD-WAN can dynamically route the packets to deliver them more quickly and reliably. This can be done in various ways, such as considering latency and packet loss or ensuring the packets are routed over the most reliable connections.

Spraying packets down both links can result in 20% drops or packet reordering. SD-WAN utilizes the links better, avoiding reordering and improving "goodput." It also increases your buying power: you can buy lower-bandwidth links and run them more efficiently. Over-provisioning becomes unnecessary because the existing WAN bandwidth is used better.


A Final Note: WAN virtualization

Server virtualization and automation are prevalent in the data center, but WANs are stalling in this space. The WAN is the last bastion of the hardware-centric model, with all of its complexity. Just as hypervisors transformed data centers, SD-WAN aims to change how WAN networks are built and managed. When server virtualization and the hypervisor came along, we no longer had to worry about the underlying hardware; a virtual machine (VM) could be provisioned and run as an application. Today's WAN environment still requires you to manage the details of carrier infrastructure, routing protocols, and encryption.

  • SD-WAN pulls all WAN resources together and slices up the WAN to match the applications on them.

The Role of WAN Virtualization in Digital Transformation:

In today’s digital era, where cloud-based applications and remote workforces are becoming the norm, WAN virtualization is critical in enabling digital transformation. It empowers organizations to embrace new technologies, such as cloud computing and unified communications, by providing secure and reliable connectivity to distributed resources.

Summary: WAN Virtualization

In our ever-connected world, seamless network connectivity is necessary for businesses of all sizes. However, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern data transmission and application performance. This is where the concept of WAN virtualization comes into play, promising to revolutionize network connectivity like never before.

Understanding WAN Virtualization

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that abstracts the physical infrastructure of traditional WANs and allows for centralized control, management, and optimization of network resources. By decoupling the control plane from the underlying hardware, WAN virtualization enables organizations to dynamically allocate bandwidth, prioritize critical applications, and ensure optimal performance across geographically dispersed locations.

The Benefits of WAN Virtualization

Enhanced Flexibility and Scalability: With WAN virtualization, organizations can effortlessly scale their network infrastructure to accommodate growing business needs. The virtualized nature of the WAN allows for easy addition or removal of network resources, enabling businesses to adapt to changing requirements without costly hardware upgrades.

Improved Application Performance: WAN virtualization empowers businesses to optimize application performance by intelligently routing network traffic based on application type, quality of service requirements, and network conditions. By dynamically selecting the most efficient path for data transmission, WAN virtualization minimizes latency, improves response times, and enhances overall user experience.

Cost Savings and Efficiency: By leveraging WAN virtualization, organizations can reduce their reliance on expensive Multiprotocol Label Switching (MPLS) connections and embrace more cost-effective broadband links. The ability to intelligently distribute traffic across diverse network paths enhances network redundancy and maximizes bandwidth utilization, providing significant cost savings and improved efficiency.

Implementation Considerations

Network Security: When adopting WAN virtualization, it is crucial to implement robust security measures to protect sensitive data and ensure network integrity. Encryption protocols, threat detection systems, and secure access controls should be implemented to safeguard against potential security breaches.

Quality of Service (QoS): Organizations should prioritize critical applications and allocate appropriate bandwidth resources through Quality of Service (QoS) policies to ensure optimal application performance. By adequately configuring QoS settings, businesses can guarantee mission-critical applications receive the necessary network resources, minimizing latency and providing a seamless user experience.

Real-World Use Cases

Global Enterprise Networks

Large multinational corporations with a widespread presence can significantly benefit from WAN virtualization. These organizations can achieve consistent performance across geographically dispersed locations by centralizing network management and leveraging intelligent traffic routing, improving collaboration and productivity.

Branch Office Connectivity

WAN virtualization simplifies connectivity and network management for businesses with multiple branch offices. It enables organizations to establish secure and efficient connections between headquarters and remote locations, ensuring seamless access to critical resources and applications.

In conclusion, WAN virtualization represents a paradigm shift in network connectivity, offering enhanced flexibility, improved application performance, and cost savings for businesses. By embracing this transformative technology, organizations can unlock the true potential of their networks, enabling them to thrive in the digital age.

WAN Design Requirements

WAN SDN

WAN SDN

In today's fast-paced digital world, organizations constantly seek ways to optimize their network infrastructure for improved performance, scalability, and cost efficiency. One emerging technology that has gained significant traction is WAN Software-Defined Networking (SDN). By decoupling the control and data planes, WAN SDN provides organizations unprecedented flexibility, agility, and control over their wide area networks (WANs). In this blog post, we will delve into the world of WAN SDN, exploring its key benefits, implementation considerations, and real-world use cases.

WAN SDN is a network architecture that allows organizations to manage and control their wide area networks using software centrally. Traditionally, WANs have been complex and time-consuming to configure, often requiring manual network provisioning and management intervention. However, with WAN SDN, network administrators can automate these tasks through a centralized controller, simplifying network operations and reducing human errors.

Enhanced Agility: WAN SDN empowers network administrators with the ability to quickly adapt to changing business needs. With programmable policies and dynamic control, organizations can easily adjust network configurations, prioritize traffic, and implement changes without the need for manual reconfiguration of individual devices.

Improved Scalability: Traditional wide area networks often face scalability challenges due to the complex nature of managing numerous remote sites. WAN SDN addresses this issue by providing centralized control, allowing for streamlined network expansion, and efficient resource allocation.

Optimal Resource Utilization: WAN SDN enables organizations to maximize their network resources by intelligently routing traffic and dynamically allocating bandwidth based on real-time demands. This ensures that critical applications receive the necessary resources while minimizing wastage.

Multi-site Enterprises: WAN SDN is particularly beneficial for organizations with multiple branch locations. It allows for simplified network management across geographically dispersed sites, enabling efficient resource allocation, centralized security policies, and rapid deployment of new services.

Cloud Connectivity: WAN SDN plays a crucial role in connecting enterprise networks with cloud service providers. It offers seamless integration, secure connections, and dynamic bandwidth allocation, ensuring optimal performance and reliability for cloud-based applications.

Service Providers: WAN SDN can revolutionize how service providers deliver network services to their customers. It enables the creation of virtual private networks (VPNs) on-demand, facilitates network slicing for different tenants, and provides granular control and visibility for service-level agreements (SLAs).

WAN SDN represents a paradigm shift in wide area network management. Its ability to centralize control, enhance agility, and optimize resource utilization make it a game-changer for modern networking infrastructures. As organizations continue to embrace digital transformation and demand more from their networks, WAN SDN will undoubtedly play a pivotal role in shaping the future of networking.

Highlights: WAN SDN

Discussing WAN SDN

1. Traditional WANs have long been plagued by various limitations, such as complexity, lack of agility, and high operational costs. These legacy networks typically rely on manual configurations and proprietary hardware, making them inflexible and time-consuming. SDN brings a paradigm shift to WANs by decoupling the network control plane from the underlying infrastructure. With centralized control and programmability, SDN enables network administrators to manage and orchestrate their WANs through a single interface, simplifying network operations and promoting agility.

2. At its core, WAN SDN separates the control plane from the data plane, allowing network administrators to manage network traffic dynamically and programmatically. This separation leads to more efficient network management, reducing the complexity associated with traditional network infrastructures. With WAN SDN, businesses can optimize traffic flow, enhance security, and reduce operational costs by leveraging centralized control and automation.

3. One of the key advantages of SDN in WANs is its inherent flexibility and scalability. With SDN, network administrators can dynamically allocate bandwidth, reroute traffic, and prioritize applications based on real-time needs. This level of granular control allows organizations to optimize their network resources efficiently and adapt to changing demands.

4. SDN brings enhanced security features to WANs through centralized policy enforcement and monitoring. By abstracting network control, SDN allows for consistent security policies across the entire network, minimizing vulnerabilities and ensuring better threat detection and mitigation. Additionally, SDN enables rapid network recovery and failover mechanisms, enhancing overall resilience.

**Key Benefits of WAN SDN**

1. **Scalability and Flexibility**: WAN SDN enables networks to adapt quickly to changing demands without the need for significant hardware investments. This flexibility is crucial for organizations looking to scale their operations efficiently.

2. **Improved Network Performance**: By optimizing traffic routing and prioritizing critical applications, WAN SDN ensures that networks operate at peak performance levels. This capability is particularly beneficial for businesses with high bandwidth demands.

3. **Enhanced Security**: WAN SDN allows for the implementation of robust security measures, including automated threat detection and response. This proactive approach to security helps protect sensitive data and maintain compliance with industry regulations.

**Application Challenges**

Compared to a network-centric model, business intent-based WAN networks have great potential. By using a WAN architecture, applications can be deployed and managed more efficiently. However, application services topologies must replace network topologies. Supporting new and existing applications on the WAN is a common challenge for network operations staff. Applications such as these consume large amounts of bandwidth and are extremely sensitive to variations in bandwidth quality. Improving the WAN environment for these applications is more critical due to jitter, loss, and delay.

**WAN SLA**

In addition, cloud-based applications such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) are increasing bandwidth demands on the WAN. As cloud applications require increasing bandwidth, provisioning new applications and services is becoming increasingly complex and expensive. In today's business environment, WAN routing and network SLAs are controlled by MPLS L3VPN service providers. As a result, enterprises are less able to adapt to new delivery methods, such as cloud-based and SaaS-based applications.

These applications could take months to implement in service providers’ environments. These changes can also be expensive for some service providers, and some may not be made at all. There is no way to instantiate VPNs independent of underlying transport since service providers control the WAN core. Implementing differentiated service levels for different applications becomes challenging, if not impossible.

WAN SDN Technology: DMVPN

DMVPN is a Cisco-developed solution that enables the creation of virtual private networks over public or private networks. Unlike traditional VPNs that require point-to-point connections, DMVPN utilizes a hub-and-spoke architecture, allowing for dynamic and scalable network deployments. DMVPN simplifies network management and reduces administrative overhead by leveraging multipoint GRE tunnels.

– Multipoint GRE Tunnels: At the core of DMVPN lies the concept of multipoint GRE tunnels. These tunnels create a virtual network, connecting multiple sites while encapsulating packets in GRE headers. This enables efficient traffic routing between sites, reducing the complexity and overhead associated with traditional point-to-point VPNs.

– Next-Hop Resolution Protocol (NHRP): NHRP plays a crucial role in DMVPN by dynamically mapping tunnel IP addresses to physical addresses. It allows for the efficient resolution of next-hop information, eliminating the need for static routes. NHRP also enables on-demand tunnel establishment, improving scalability and reducing administrative overhead.

– IPsec Encryption: DMVPN utilizes IPsec encryption to ensure secure communication over the VPN. IPsec provides confidentiality, integrity, and authentication of data, making it ideal for protecting sensitive information transmitted over the network. With DMVPN, IPsec is applied dynamically on a per-tunnel basis, enhancing flexibility and scalability.

DMVPN over IPSec

Understanding DMVPN & IPSec

IPsec, a widely adopted security protocol, is integral to DMVPN deployments. It provides the cryptographic framework necessary for securing data transmitted over the network. By leveraging IPsec, DMVPN ensures the transmitted information’s confidentiality, integrity, and authenticity, protecting sensitive data from unauthorized access and tampering.

Firstly, the dynamic mesh topology eliminates the need for complex hub-and-spoke configurations, simplifying network management and reducing administrative overhead. Additionally, DMVPN’s scalability enables seamless integration of new sites and facilitates rapid expansion without compromising performance. Furthermore, the inherent flexibility ensures optimal routing, load balancing, and efficient bandwidth utilization.

Example WAN Techniques: 

Understanding Virtual Routing and Forwarding

VRF is a technology that enables the creation of multiple virtual routing tables within a single physical router. Each VRF instance acts as an independent router with its own routing table, interfaces, and forwarding decisions. This separation allows different networks or customers to coexist on the same physical infrastructure while maintaining complete isolation.

One critical advantage of VRF is its ability to provide network segmentation. By dividing a physical router into multiple VRF instances, organizations can isolate their networks, ensuring that traffic from one VRF does not leak into another. This enhances security and provides a robust framework for multi-tenancy scenarios.
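As a rough illustration of that isolation, the Python sketch below models two VRFs on one device, each with its own routing table. The VRF names, prefixes, and interfaces are made up; note that the same prefix can exist in both tables without conflict.

```python
# Toy VRF model: one physical router, multiple isolated routing tables.
vrfs = {
    "customer-a": {"10.0.0.0/24": "GigabitEthernet0/0"},
    "customer-b": {"10.0.0.0/24": "GigabitEthernet0/1"},  # same prefix, no clash
}

def lookup(vrf, prefix):
    """A lookup only ever consults the table of its own VRF."""
    return vrfs[vrf].get(prefix)

print(lookup("customer-a", "10.0.0.0/24"))  # GigabitEthernet0/0
print(lookup("customer-b", "10.0.0.0/24"))  # GigabitEthernet0/1
```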

Use Cases for VRF

VRF finds application in various scenarios, including:

1. Service Providers: VRF allows providers to offer their customers virtual private network (VPN) services. Each customer can have their own VRF, ensuring their traffic remains separate and secure.

2. Enterprise Networks: VRF can segregate different organizational departments, creating independent virtual networks.

3. Internet of Things (IoT): With the proliferation of IoT devices, VRF can create separate routing domains for different IoT deployments, improving scalability and security.

Understanding Policy-Based Routing

Policy-based Routing, at its core, involves manipulating routing decisions based on predefined policies. Unlike traditional routing protocols that rely solely on destination addresses, PBR considers additional factors such as source IP, ports, protocols, and even time of day. By implementing PBR, network administrators gain flexibility in directing traffic flows to specific paths based on specified conditions.

The adoption of Policy Based Routing brings forth a multitude of benefits. Firstly, it enables efficient utilization of network resources by allowing administrators to prioritize or allocate bandwidth for specific applications or user groups. Additionally, PBR enhances security by allowing traffic redirection to dedicated firewalls or intrusion detection systems. Furthermore, PBR facilitates load balancing and traffic engineering, ensuring optimal performance across the network.

Implementing Policy-Based Routing

To implement PBR, network administrators follow a series of steps. First, define the traffic classification criteria by specifying match conditions. Second, create route maps that outline the actions for matched traffic; these actions may include altering the next-hop address, setting specific Quality of Service (QoS) parameters, or redirecting traffic to a different interface. Finally, apply the route maps to the appropriate interfaces or traffic flows.
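The sketch below captures the spirit of those three steps in plain Python: an ordered list of match rules (the classification), a next hop per rule (the route-map action), and a default for unmatched traffic. The policies, addresses, and ports are invented for illustration.

```python
# Toy policy-based routing: classify on source/port, then pick a next hop.
policies = [
    # (match function, next hop)
    (lambda pkt: pkt["src"].startswith("10.1."), "172.16.0.1"),  # dept A -> link 1
    (lambda pkt: pkt["dst_port"] == 443, "172.16.0.2"),          # HTTPS -> link 2
]
DEFAULT_NEXT_HOP = "172.16.0.254"  # fall back to the normal routing table

def route(pkt):
    for match, next_hop in policies:
        if match(pkt):
            return next_hop
    return DEFAULT_NEXT_HOP

print(route({"src": "10.1.5.9", "dst_port": 80}))   # 172.16.0.1
print(route({"src": "10.2.0.4", "dst_port": 443}))  # 172.16.0.2
```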

Example SD WAN Product: Cisco Meraki

**Seamless Cloud Management**

One of the standout features of Cisco Meraki is its seamless cloud management. Unlike traditional network systems, Meraki’s cloud-based platform allows IT administrators to manage their entire network from a single, intuitive dashboard. This centralization not only simplifies network management but also provides real-time visibility and control over all connected devices. With automatic updates and zero-touch provisioning, businesses can ensure their network is always up-to-date and secure without the need for extensive manual intervention.

**Cutting-Edge Security Features**

Security is at the core of Cisco Meraki’s suite of products. With cyber threats becoming more sophisticated, Meraki offers a multi-layered security approach to protect sensitive data. Features such as Advanced Malware Protection (AMP), Intrusion Prevention System (IPS), and secure VPNs ensure that the network is safeguarded against intrusions and malware. Additionally, Meraki’s security appliances are designed to detect and mitigate threats in real-time, providing businesses with peace of mind knowing their data is secure.

**Scalability and Flexibility**

As businesses grow, so do their networking needs. Cisco Meraki’s scalable solutions are designed to grow with your organization. Whether you are expanding your office space, adding new branches, or integrating more IoT devices, Meraki’s flexible infrastructure can easily adapt to these changes. The platform supports a wide range of devices, from access points and switches to security cameras and mobile device management, making it a comprehensive solution for various networking requirements.

**Enhanced User Experience**

Beyond security and management, Cisco Meraki enhances the user experience by ensuring reliable and high-performance network connectivity. Features such as intelligent traffic shaping, load balancing, and seamless roaming between access points ensure that users enjoy consistent and fast internet access. Furthermore, Meraki’s analytics tools provide insights into network usage and performance, allowing businesses to optimize their network for better efficiency and user satisfaction.

Performance at the WAN Edge

Understanding Performance-Based Routing

Performance-based routing is a dynamic approach to network traffic management that prioritizes route selection based on real-time performance metrics. Instead of relying on traditional static routing protocols, performance-based routing algorithms assess the current conditions of network paths, such as latency, packet loss, and available bandwidth, to make informed routing decisions. By dynamically adapting to changing network conditions, performance-based routing aims to optimize traffic flow and enhance overall network performance.

The adoption of performance-based routing brings forth a multitude of benefits for businesses.

1- Firstly, it enhances network reliability by automatically rerouting traffic away from congested or underperforming paths, minimizing the chances of bottlenecks and service disruptions.

2- Secondly, it optimizes application performance by intelligently selecting the best path based on real-time network conditions, thus reducing latency and improving the end-user experience.

3- Additionally, performance-based routing allows for efficient utilization of available network resources, maximizing bandwidth utilization and cost-effectiveness.

Implementation Details:

Implementing performance-based routing requires a thoughtful approach. Firstly, businesses must invest in monitoring tools that provide real-time insights into network performance metrics. These tools can range from simple latency monitoring to more advanced solutions that analyze packet loss and bandwidth availability.

Once the necessary monitoring infrastructure is in place, configuring performance-based routing algorithms within network devices becomes the next step. This involves setting up rules and policies that dictate how traffic should be routed based on specific performance metrics.

Lastly, regular monitoring and fine-tuning of performance-based routing configurations are essential to ensure optimal network performance.
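A minimal sketch of the decision logic, assuming a monitoring system already feeds in per-path latency, loss, and free bandwidth (all figures below are invented):

```python
# Toy performance-based routing: choose the path whose live metrics
# satisfy an application's requirements, then prefer the most headroom.
paths = {
    "mpls":     {"latency_ms": 40, "loss_pct": 0.1, "free_mbps": 20},
    "internet": {"latency_ms": 25, "loss_pct": 1.5, "free_mbps": 200},
}

def best_path(max_latency_ms, max_loss_pct):
    candidates = {
        name: m for name, m in paths.items()
        if m["latency_ms"] <= max_latency_ms and m["loss_pct"] <= max_loss_pct
    }
    if not candidates:
        return None
    # Among compliant paths, prefer the one with the most free bandwidth.
    return max(candidates, key=lambda n: candidates[n]["free_mbps"])

print(best_path(max_latency_ms=50, max_loss_pct=0.5))  # mpls: loss rules out internet
print(best_path(max_latency_ms=50, max_loss_pct=2.0))  # internet: more headroom
```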

WAN Performance Parameters

TCP Performance Parameters

TCP (Transmission Control Protocol) is the backbone of modern Internet communication, ensuring reliable data transmission across networks. Behind the scenes, TCP performance is influenced by several key parameters that can significantly impact network efficiency.

TCP performance parameters govern how TCP behaves in various network conditions. These parameters can be fine-tuned to adapt TCP’s behavior to specific network characteristics, such as latency, bandwidth, and congestion. By adjusting these parameters, network administrators and system engineers can optimize TCP performance for better throughput, reduced latency, and improved overall network efficiency.

Congestion Control Algorithms: Congestion control algorithms are crucial in TCP performance. They monitor network conditions, detect congestion, and adjust TCP’s sending rate accordingly. Popular algorithms like Reno, Cubic, and BBR implement different strategies to handle congestion, balancing fairness and efficiency. Understanding these algorithms and their impact on TCP behavior is essential for maintaining a stable and responsive network.
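On Linux, the congestion control algorithm can even be selected per socket. A minimal sketch, assuming a Linux kernel with the chosen algorithm available (see net.ipv4.tcp_available_congestion_control) and Python 3.6 or later:

```python
import socket

# Select the congestion control algorithm for one TCP socket (Linux only).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")

# Read it back to confirm (returns a NUL-padded byte string).
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
s.close()
```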

Window Size and Bandwidth-Delay Product: The window size parameter, often called the congestion window, determines the amount of data that can be sent before an acknowledgment is received. The window size should be set according to the bandwidth-delay product (BDP), calculated by multiplying the available bandwidth by the round-trip time (RTT). Matching the window size to the BDP ensures optimal data transfer and prevents under- or overutilization of network resources.
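A quick worked example, assuming a 100 Mbit/s link with a 60 ms RTT:

```python
# BDP = bandwidth x RTT, converted from bits to bytes.
bandwidth_bps = 100e6   # 100 Mbit/s (assumed link speed)
rtt_s = 0.060           # 60 ms round-trip time (assumed)

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"BDP = {bdp_bytes / 1024:.0f} KiB")  # ~732 KiB
```

A window much smaller than this value cannot keep the pipe full; a much larger one invites queueing and loss.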

Maximum Segment Size (MSS): The Maximum Segment Size is another TCP performance parameter defining the maximum amount of data encapsulated within a single TCP segment. By carefully configuring the MSS, it is possible to reduce packet fragmentation, enhance data transmission efficiency, and mitigate issues related to network overhead.

Selective Acknowledgment (SACK): Selective Acknowledgment is a TCP extension that allows the receiver to acknowledge out-of-order segments and provide more precise information about the received data. Enabling SACK can improve TCP performance by reducing retransmissions and enhancing the overall reliability of data transmission.

Understanding TCP MSS

TCP MSS refers to the maximum amount of data encapsulated within a single TCP segment. It represents the largest data payload that can be transmitted without fragmentation. By limiting the segment size, TCP aims to prevent excessive overhead and ensure efficient data transmission across networks.

Several factors influence the determination of TCP MSS. One crucial aspect is the underlying network infrastructure’s Maximum Transmission Unit (MTU). The MTU represents the maximum packet size that can be transmitted over the network without fragmentation. TCP MSS must be set to a value equal to or lower than the MTU to avoid fragmentation and subsequent performance degradation.
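The arithmetic is simple: for plain IPv4 over Ethernet, the MSS is the MTU minus the IPv4 and TCP headers, and tunnel encapsulations shrink it further. The 24 bytes of GRE overhead below (a 4-byte GRE header plus a new 20-byte outer IPv4 header) is one common assumption:

```python
mtu = 1500           # standard Ethernet MTU
ipv4_header = 20     # no IP options
tcp_header = 20      # no TCP options

print(mtu - ipv4_header - tcp_header)        # 1460: classic Ethernet MSS
print(mtu - 24 - ipv4_header - tcp_header)   # 1436: with 24 bytes of GRE overhead
```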

Path MTU Discovery (PMTUD) is a mechanism TCP employs to dynamically determine the optimal MSS value for a given network path. By exchanging ICMP messages with routers along the path, TCP can ascertain the MTU and adjust the MSS accordingly. PMTUD helps prevent packet fragmentation and ensures efficient data transmission across network segments.

The TCP MSS value directly affects network performance. A smaller MSS can increase overhead due to more segments and headers, potentially reducing overall throughput. On the other hand, a larger MSS can increase the risk of fragmentation and subsequent retransmissions, impacting latency and overall network efficiency. Striking the right balance is crucial for optimal performance.

Example WAN Technology: DMVPN Phase 3

Understanding DMVPN Phase 3

DMVPN Phase 3 builds upon the foundation of its predecessors, bringing forth even more advanced features. This section will provide an overview of DMVPN Phase 3, highlighting its main enhancements, such as increased scalability, simplified configuration, and enhanced security protocols.

One of the standout features of DMVPN Phase 3 is its scalability. This section will explain how DMVPN Phase 3 allows organizations to effortlessly add new sites to the network without complex manual configurations. By leveraging multipoint GRE tunnels, DMVPN Phase 3 offers a dynamic and flexible solution that can easily accommodate growing networks.

Example WAN Technology: FlexVPN Site-to-Site Smart Defaults

Understanding FlexVPN Site-to-Site Smart Defaults

FlexVPN Site-to-Site Smart Defaults is a powerful feature that simplifies the site-to-site VPN configuration and deployment process. By providing pre-defined templates and configurations, it eliminates the need for manual configuration, reducing the chance of misconfigurations or human error. This feature ensures a secure and reliable VPN connection between sites, enabling organizations to establish a robust network infrastructure.

FlexVPN Site-to-Site Smart Defaults offers several key features and benefits that contribute to improved network security. Firstly, it provides secure cryptographic algorithms that protect data transmission, ensuring the confidentiality and integrity of sensitive information. Additionally, it supports various authentication methods, such as digital certificates and pre-shared keys, further enhancing the overall security of the VPN connection. The feature also allows for easy scalability, enabling organizations to expand their network infrastructure without compromising security.

Example WAN Technology: FlexVPN IKEv2 Routing

Understanding FlexVPN

FlexVPN, short for Flexible VPN, is a versatile framework offering various VPN solutions. It provides a secure and scalable approach to establishing Virtual Private Networks (VPNs) over various network infrastructures. With its flexibility, it allows for seamless integration and interoperability across different platforms and devices.

IKEv2, or Internet Key Exchange version 2, is a secure and efficient protocol for establishing and managing VPN connections. It boasts numerous advantages, including its robust security features, ability to handle network disruptions, and support for rapid reconnection. IKEv2 is highly regarded for its ability to maintain stable and uninterrupted VPN connections, making it an ideal choice for FlexVPN.

a. Enhanced Security: FlexVPN IKEv2 Routing offers advanced encryption algorithms and authentication methods, ensuring the confidentiality and integrity of data transmitted over the VPN.

b. Scalability: With its flexible architecture, FlexVPN IKEv2 Routing effortlessly scales to accommodate growing network demands, making it suitable for small- to large-scale deployments.

c. Dynamic Routing: One of FlexVPN IKEv2 Routing’s standout features is its support for dynamic routing protocols, such as OSPF and EIGRP. This enables efficient and dynamic routing of traffic within the VPN network.

d. Seamless Failover: FlexVPN IKEv2 Routing provides automatic failover capabilities, ensuring uninterrupted connectivity even during network disruptions or hardware failures.

Understanding MPLS (Multi-Protocol Label Switching)

MPLS serves as the foundation for MPLS VPNs. It is a versatile and efficient routing technique that uses labels to forward data packets through a network. By assigning labels to packets, MPLS routers can make fast forwarding decisions based on the labels, avoiding complex and time-consuming lookups in routing tables. This results in improved network performance and scalability.

Understanding MPLS LDP

MPLS LDP is a crucial component in establishing label-switched paths within MPLS networks. MPLS LDP facilitates efficient packet forwarding and routing by enabling the distribution of labels and creating forwarding equivalency classes. Let’s take a closer look at how MPLS LDP operates.

One of the fundamental aspects of MPLS LDP is label distribution. Through signaling protocols, MPLS LDP ensures that labels are assigned and distributed across network nodes. This enables routers to make forwarding decisions based on labels, resulting in streamlined and efficient data transmission.

In MPLS LDP, labels serve as the building blocks of label-switched paths. These paths allow routers to forward packets based on labels rather than traditional IP routing. Additionally, MPLS LDP employs forwarding equivalency classes (FECs) to group packets with similar characteristics, further enhancing network performance.
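The following toy Python model shows the essence of label switching: after the ingress router classifies a packet into an FEC and pushes a label, every subsequent hop forwards on the incoming label alone, never re-examining the IP header. The router names, labels, and interfaces are invented.

```python
# Toy label forwarding table (LFIB-like), one entry per router.
lfib = {
    # router: {in_label: (action, out_label, out_interface)}
    "PE1": {None: ("push", 100, "to-P1")},   # ingress: classify once, push label
    "P1":  {100:  ("swap", 200, "to-PE2")},  # core: swap label and forward
    "PE2": {200:  ("pop",  None, "to-CE")},  # egress: pop label and deliver
}

def forward(router, label):
    action, out_label, out_if = lfib[router][label]
    return out_label, out_if

label = None
for hop in ["PE1", "P1", "PE2"]:
    label, out_if = forward(hop, label)
    print(f"{hop}: label={label} via {out_if}")
```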

MPLS Virtual Private Networks (VPNs) Explained

VPNs provide secure communication over public networks by creating a private tunnel through which data can travel. They employ encryption and tunneling protocols to protect data from eavesdropping and unauthorized access. MPLS VPNs utilize this VPN concept to establish secure connections between geographically dispersed sites or remote users.

MPLS VPN Components

Customer Edge (CE) Router: The CE router acts as the entry and exit point for the customer network. It connects to the provider network and exchanges routing information with the provider edge. The CE router forwards ordinary IP packets; label imposition happens at the provider edge.

Provider Edge (PE) Router: The PE router sits at the edge of the service provider’s network and connects to the CE routers. It acts as a bridge between the customer and provider networks and handles the MPLS label switching. The PE router assigns labels to incoming packets and forwards them based on the labels’ instructions.

Provider (P) Router: P routers form the backbone of the service provider’s network. They forward MPLS packets based on the labels without inspecting the packet’s content, ensuring efficient data transmission within the provider’s network.

Virtual Routing and Forwarding (VRF) Tables: VRF tables maintain separate routing instances within a single PE router. Each VRF table represents a unique VPN and keeps the customer’s routing information isolated from other VPNs. VRF tables enable the PE router to handle multiple VPNs concurrently, providing secure and independent communication channels.

Use Case – DMVPN Single Hub, Dual Cloud

Single Hub, Dual Cloud is a specific configuration within the DMVPN architecture. In this setup, a central hub acts as the primary connection point for branch offices, while two separate DMVPN overlays ("clouds"), each typically running over its own transport or provider, supply redundancy and load balancing. This configuration offers several advantages, including improved availability, increased bandwidth, and enhanced failover capabilities.

1. Enhanced Redundancy: By leveraging two transports, organizations can achieve high availability and minimize downtime. If one provider experiences an issue or outage, traffic can seamlessly be redirected over the alternate cloud, ensuring uninterrupted connectivity.

2. Load Balancing: Distributing network traffic across two clouds allows for better resource utilization and improved performance. Organizations can optimize their bandwidth usage and mitigate potential bottlenecks.

3. Scalability: Single Hub, Dual Cloud DMVPN allows organizations to easily scale their network infrastructure by adding more branch offices or transports as needed. This flexibility ensures that the network can adapt to changing business requirements.

4. Cost Efficiency: Utilizing multiple providers can lead to cost savings through competitive pricing and the ability to negotiate better service level agreements (SLAs). Organizations can choose the most cost-effective options while maintaining the desired level of performance and reliability.

The role of SDN

With software-defined networking (SDN), network configurations can be dynamic and programmatically optimized, making network management and monitoring more like cloud computing than traditional network administration. By dissociating the forwarding of network packets (the data plane) from routing decisions (the control plane), SDN centralizes network intelligence in a single network component and improves on the static architecture of traditional networks.

Controllers make up the control plane of an SDN network and contain all of the network's intelligence; they are considered the brains of the network. Centralization, however, introduces its own challenges around security, scalability, and elasticity.

Since OpenFlow's emergence in 2011, SDN has commonly been associated with remote communication with network plane elements to determine the path of network packets across network switches. The term is also used for proprietary network virtualization platforms, such as Cisco Systems' Open Network Environment and Nicira's network virtualization platform.

SD-WAN: SDN applied to the wide area network (WAN)

SD-WAN, short for Software-Defined Wide Area Networking, is a transformative approach to network connectivity. Unlike traditional WAN, which relies on hardware-based infrastructure, SD-WAN utilizes software and cloud-based technologies to connect networks over large geographic areas securely. By separating the control plane from the data plane, SD-WAN provides centralized management and enhanced flexibility, enabling businesses to optimize their network performance.

Transport Independence: Hybrid WAN

The hybrid WAN concept was born out of this need. A hybrid WAN provides an alternative path that applications can take across the WAN environment: businesses acquire non-MPLS circuits and add them alongside their existing links. Enterprises control these circuits, including routing and application performance. VPN tunnels are typically created over the top of these circuits to provide secure transport over any link. 4G/LTE, L2VPN, and commodity broadband Internet are all examples of these types of links.

As a result, transport independence is achieved. In this way, any transport type can be used under the VPN, and deterministic routing and application performance can be achieved. These commodity links can transmit some applications rather than the traditionally controlled L3VPN MPLS links provided by service providers.

SDN and APIs

WAN SDN is a modern approach to network management that uses a centralized control model to manage, configure, and monitor large and complex networks. It allows network administrators to use software to configure, monitor, and manage network elements from a single, centralized system. This enables the network to be managed more efficiently and cost-effectively than traditional networks.

SDN uses an application programming interface (API) to abstract the underlying physical network infrastructure, allowing for more agile network control and easier management. It also enables network administrators to rapidly configure and deploy services from a centralized location. This enables network administrators to respond quickly to changes in traffic patterns or network conditions, allowing for more efficient use of resources.

Scalability and Automation

SDN also allows for improved scalability and automation. Network administrators can quickly scale up or down the network by leveraging automated scripts depending on its current needs. Automation also enables the network to be maintained more rapidly and efficiently, saving time and resources.

Before you proceed, you may find the following posts helpful:

  1. WAN Virtualization
  2. Software Defined Perimeter Solutions
  3. What is OpenFlow
  4. SD WAN Tutorial
  5. What Does SDN Mean
  6. Data Center Site Selection

WAN SDN

A Deterministic Solution

Technology typically starts as a highly engineered, expensive, deterministic solution. As the marketplace evolves and competition rises, the need for a non-deterministic, inexpensive solution comes into play. We see this throughout history. Mainframes were, and are, expensive; with the arrival of microprocessor-based personal computers, the client/server model was born. Static RAM ( SRAM ) was replaced with cheaper Dynamic RAM ( DRAM ). These patterns consistently apply across all areas of technology.

Eventually, deterministic and costly technology is replaced with intelligent technology that uses redundancy and optimization techniques. This process is now appearing in the Wide Area Network (WAN), where the routing space is changing with the incorporation of Software Defined Networking (SDN) and the Border Gateway Protocol (BGP). By combining these two technologies, companies can now perform intelligent routing, aka SD-WAN path selection, with an SD-WAN overlay.

**SD-WAN Path Selection**

SD-WAN path selection is essential to a Software-Defined Wide Area Network (SD-WAN) architecture. SD-WAN path selection selects the most optimal network path for a given application or user. This process is automated and based on user-defined criteria, such as latency, jitter, cost, availability, and security. As a result, SD-WAN can ensure that applications and users experience the best possible performance by making intelligent decisions on which network path to use.

When selecting the best path for a given application or user, SD-WAN looks at the quality of the connection and the available bandwidth. It then looks at the cost associated with each path. Cost can be a significant factor when selecting a path, especially for large enterprises or organizations with multiple sites.

SD-WAN can also prioritize certain types of traffic over others. This is done by assigning different weights or priorities for various kinds of traffic. For example, an organization may prioritize voice traffic over other types of traffic. This ensures that voice traffic has the best possible chance of completing its journey without interruption.
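The sketch below illustrates this weighting idea: each transport is scored against per-application weights for latency, jitter, and cost, and the lowest score wins. All numbers and names are invented; real SD-WAN products implement far richer policy engines.

```python
# Toy SD-WAN path selection with per-application weights (lower score wins).
transports = {
    "mpls":     {"latency_ms": 35, "jitter_ms": 2,  "cost": 10},
    "internet": {"latency_ms": 20, "jitter_ms": 12, "cost": 1},
}

app_weights = {
    "voice": {"latency_ms": 1.0, "jitter_ms": 5.0, "cost": 0.1},  # jitter-sensitive
    "bulk":  {"latency_ms": 0.1, "jitter_ms": 0.1, "cost": 5.0},  # cost-sensitive
}

def pick(app):
    w = app_weights[app]
    def score(t):
        return sum(w[k] * transports[t][k] for k in w)
    return min(transports, key=score)

print(pick("voice"))  # mpls: low jitter outweighs its higher cost
print(pick("bulk"))   # internet: cheap wins for cost-weighted traffic
```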

Diagram: SD-WAN traffic steering. Source: Cisco.

Critical Considerations for Implementation:

Network Security:

When adopting WAN SDN, organizations must consider the potential security risks associated with software-defined networks. Robust security measures, including authentication, encryption, and access controls, should be implemented to protect against unauthorized access and potential vulnerabilities.

Staff Training and Expertise:

Implementing WAN SDN requires skilled network administrators proficient in configuring and managing the software-defined network infrastructure. Organizations must train and upskill their IT teams to ensure successful implementation and ongoing management.

Real-World Use Cases:

Multi-Site Connectivity:

WAN SDN enables organizations with multiple geographically dispersed locations to connect their sites seamlessly. Administrators can prioritize traffic, optimize bandwidth utilization, and ensure consistent network performance across all locations by centrally controlling the network.

Cloud Connectivity:

With the increasing adoption of cloud services, WAN SDN allows organizations to connect their data centers to public and private clouds securely and efficiently. This facilitates smooth data transfers, supports workload mobility, and enhances cloud performance.

Disaster Recovery:

WAN SDN simplifies disaster recovery planning by allowing organizations to reroute network traffic dynamically during a network failure. This ensures business continuity and minimizes downtime, as the network can automatically adapt to changing conditions and reroute traffic through alternative paths.

The Rise of WAN SDN

Business and cloud services are crucial elements of business operations, yet the transport network that carries them is best-effort, fragile, and offers no guarantee of acceptable delay. More and more services are being brought to the Internet, while the Internet itself is managed inefficiently and cheaply.

Every Autonomous System (AS) acts independently, and there is a price war between transit providers, leading to poor quality of transit services. Operating over this flawed network, customers must find ways to guarantee applications receive the expected level of quality.

Border Gateway Protocol (BGP), the Internet's glue, has several path-selection flaws. The main drawback is its routing paradigm: default path selection is based on Autonomous System (AS) path length, preferring the route with the shortest AS_PATH. This misses the shape of the network; BGP does not care whether propagation delay, packet loss, or link congestion exists along a path. The result can be the selection of long paths and the use of paths experiencing packet loss.
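A small illustration of the problem, with invented AS numbers and RTT figures: BGP's default comparison sees only path length, while a performance-aware overlay can consult measured delay.

```python
# Why shortest AS_PATH can be the slower choice.
routes = [
    {"as_path": [64500, 64510],        "measured_rtt_ms": 180},  # short but congested
    {"as_path": [64500, 64520, 64530], "measured_rtt_ms": 40},   # longer but clean
]

bgp_choice = min(routes, key=lambda r: len(r["as_path"]))
sdn_choice = min(routes, key=lambda r: r["measured_rtt_ms"])

print("BGP picks:", bgp_choice["as_path"])   # [64500, 64510] despite 180 ms
print("SDN picks:", sdn_choice["as_path"])   # the performance-aware override
```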

Example: WAN SDN with Border6 

Border6 is a French company that started in 2012. It offers non-stop internet and an integrated WAN SDN solution, influencing BGP to perform optimum routing. It’s not a replacement for BGP but a complementary tool to enhance routing decisions. For example, it automates changes in routing in cases of link congestion/blackouts.

“The agile way of improving BGP paths by the Border 6 tool improves network stability” Brandon Wade, iCastCenter Owner.

As the Internet became more popular, customers wanted to add additional intelligence to routing. Additionally, businesses require SDN traffic optimizations, as many run their entire service offerings on top of it.

What is non-stop internet?

Border6 offers an integrated WAN SDN solution that adds intelligence to outbound BGP routing. A common approach when designing SDN for real-world networks is to prefer solutions that incorporate existing, field-tested mechanisms (such as BGP) rather than reinvent every wheel. The Border6 approach of influencing BGP with SDN is therefore a welcome and less risky alternative to a greenfield implementation. Microsoft and Viptela similarly use SDN to control BGP behavior.

Border6 uses BGP to guide what might be reachable. Based on various performance metrics, they measure how well paths perform. They use BGP to learn the structure of the Internet and then run their algorithms to determine what is essential for individual customers. Every customer has different needs to reach different subnets. Some prefer costs; others prefer performance.

They elect several interesting "best"-performing prefixes, and the most critical prefixes are selected. Next, they find probing locations and measure the source with automatic probes to determine the best path. Combined, these tools enhance the behavior of BGP. Their mechanism can detect whether an ISP has hardware or software problems, drops packets, or reroutes packets around the world.

Thousands of tests per minute

The solution finds the best path by executing thousands of tests per minute and feeding the results into path selection for packet delivery. Outputs from live probing of path delay and packet loss inform BGP which path to route traffic over. The "best path" is different for each customer and depends on the routing policy the customer wants to apply: some prefer paths without packet loss; others want the cheapest cost or paths under 100 ms. It comes down to customer requirements and the applications they serve.

**BGP – Unrelated to Performance**

Traditionally, BGP makes its decisions based on data unrelated to performance. Border6 tries to steer your packets' path across the Internet by choosing the fastest or cheapest link, depending on your requirements.

They take BGP data from service providers as a baseline. On top of that broad connectivity picture, they layer their own measurements (lowest latency, packets lost, and so on) and adjust the BGP data to account for these measures, ultimately achieving near-optimal packet forwarding. They first look at NetFlow or sFlow data to determine what is essential, using their tool to collect and aggregate the data. From this data, they know which destinations are critical to that customer.

BGP for outbound | Locator/ID Separation Protocol (LISP) for inbound

Border6's products relate to outbound traffic optimization; influencing inbound traffic with BGP is hard. Most ASes behave selfishly and optimize traffic in their own interest. Border6 is trying to provide tools that help ASes optimize inbound flows by integrating their product set with the Locator/ID Separation Protocol (LISP). The diagram below displays generic LISP components; it is not necessarily related to Border6's LISP design.

LISP decouples the address space so you can optimize inbound traffic flows. Many LISP use cases are seen with active-active data centers and VM mobility. It decouples the "who" and the "where," so end-host addressing no longer correlates with the actual host location. The drawback is that LISP requires endpoints that can build LISP tunnels.

Currently, they are trying to provide a solution using LISP as a signaling protocol between Border6 devices. They are also working on statistical analysis of received data to mitigate distributed denial-of-service (DDoS) events, with more DDoS algorithms coming in future releases.

Closing Points: On WAN SDN

At its core, WAN SDN separates the control plane from the data plane, facilitating centralized network management. This separation allows for dynamic adjustments to network configurations, providing businesses with the agility to respond to changing conditions and demands. By leveraging software to control network resources, organizations can achieve significant improvements in performance and cost-effectiveness.

One of the primary advantages of WAN SDN is its ability to optimize network traffic and improve bandwidth utilization. By intelligently routing data, WAN SDN minimizes latency and enhances the overall user experience. Additionally, it simplifies network management by providing a single, centralized platform to control and configure network policies, reducing the complexity and time required for network maintenance.

Summary: WAN SDN

In today’s digital age, where connectivity and speed are paramount, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern businesses. However, a revolutionary solution that promises to transform how we think about and utilize WANs has emerged. Enter Software-Defined Networking (SDN), a paradigm-shifting approach that brings unprecedented flexibility, efficiency, and control to WAN infrastructure.

Understanding SDN

At its core, SDN is a network architecture that separates the control plane from the data plane. By decoupling network control and forwarding functions, SDN enables centralized management and programmability of the entire network, regardless of its geographical spread. Traditional WANs relied on complex and static configurations, but SDN introduced a level of agility and simplicity that was previously unimaginable.

Benefits of SDN for WANs

Enhanced Flexibility

SDN empowers network administrators to dynamically configure and customize WANs based on specific requirements. With a software-based control plane, they can quickly implement changes, allocate bandwidth, and optimize traffic routing, all in real time. This flexibility allows businesses to adapt swiftly to evolving needs and drive innovation.

Improved Efficiency

By leveraging SDN, WANs can achieve higher levels of efficiency through centralized management and automation. Network policies can be defined and enforced holistically, reducing manual configuration efforts and minimizing human errors. Additionally, SDN enables the intelligent allocation of network resources, optimizing bandwidth utilization and enhancing overall network performance.

Enhanced Security

Security threats are a constant concern in any network infrastructure. SDN brings a new layer of security to WANs by providing granular control over traffic flows and implementing sophisticated security policies. With SDN, network administrators can easily monitor, detect, and mitigate potential threats, ensuring data integrity and protecting against unauthorized access.

Use Cases and Implementation Examples

Dynamic Multi-site Connectivity

SDN enables seamless connectivity between multiple sites, allowing businesses to establish secure and scalable networks. With SDN, organizations can dynamically create and manage virtual private networks (VPNs) across geographically dispersed locations, simplifying network expansion and enabling agile resource allocation.

Cloud Integration and Hybrid WANs

Integrating SDN with cloud services unlocks a whole new level of scalability and flexibility for WANs. By combining SDN with cloud-based infrastructure, organizations can easily extend their networks to the cloud, access resources on demand, and leverage the benefits of hybrid WAN architectures.

Conclusion:

With its ability to enhance flexibility, improve efficiency, and bolster security, SDN is ushering in a new era for Wide-Area Networks (WANs). By embracing the power of software-defined networking, businesses can overcome the limitations of traditional WANs and build robust, agile, and future-proof network infrastructures. It’s time to embrace the SDN revolution and unlock the full potential of your WAN.

Viptela Software Defined WAN (SD-WAN)

Why can’t enterprise networks scale like the Internet? What if you could virtualize the entire network?

Wide Area Network (WAN) connectivity models follow a hybrid approach, and companies may have multiple types – MPLS and the Internet. For example, branch A has remote access over the Internet, while branch B employs private MPLS connectivity. Internet and MPLS have distinct connectivity models, and different types of overlay exist for the Internet and MPLS-based networks.

The challenge is to combine these overlays automatically and provide a transport-agnostic overlay network. The data consumption model in enterprises is shifting: around 70% of data is now Internet-bound, and it is expensive to trombone that traffic through defined DMZ points. Customers are looking for topological flexibility, causing a shift in security parameters. This forces us to rethink WAN solutions for tomorrow's networks and leads toward Viptela SD-WAN.


Before you proceed, you may find the following helpful:

  1. SD WAN Tutorial
  2. SD WAN Overlay
  3. SD WAN Security 
  4. WAN Virtualization
  5. SD-WAN Segmentation


Solution: Viptela SD WAN

Viptela created a new overlay network called the Secure Extensible Network (SEN) to address these challenges. For the first time, encryption is built into the solution, combining security and routing in one product. It enables you to span environments, anywhere-to-anywhere, in a secure deployment. This type of architecture is not possible with today's traditional networking methods.

Founded in 2012, Viptela is a Virtual Private Network (VPN) company utilizing concepts of Software Defined Networking (SDN) to transform end-to-end network infrastructure. Based in San Jose, they are developing an SDN Wide Area Network (WAN) product offering any-to-any connectivity with features such as application-aware routing, service chaining, virtual Demilitarized Zone (DMZ), and weighted Equal Cost Multipath (ECMP) operating on different transports.

The key benefit of Viptela is its any-to-any connectivity offering, the kind of connectivity previously found only in Multiprotocol Label Switching (MPLS) networks. Viptela works purely on the connectivity model, not on security frameworks, although it can influence traffic paths to and from security services.


Ubiquitous data plane

MPLS was attractive because it had a single control plane and a ubiquitous data plane: as long as you are in the MPLS network, connecting to anyone is possible, provided you have the correct Route Distinguisher (RD) and Route Target (RT) configurations. So why not take this model to the wide area network and invent a technology that creates a similar model, offering ubiquitous connectivity regardless of transport type ( Internet, MPLS )?


Why Viptela SDN WAN?

Businesses today want different types of connectivity models. When you map a service to business logic, the network/service topology is already laid out; it's defined, and services have to follow it. Viptela changes this concept by altering the data and control plane connectivity model, using SDN to create an SDN WAN technology.

SDN is all about taking intensive network algorithms out of the hardware. In traditional networks, this logic lived in individual hardware devices, with control plane points sitting in the data path. As a result, control points may become congested (for example, when the OSPF maximum neighbor count is reached): capacity is lost on the control plane even though the data plane still has headroom. SDN moves the intensive computation to off-the-shelf servers. MPLS networks attempted the same concept with Route Reflector (RR) designs.

Route reflectors were moved off the data plane to compute best-path algorithms; they can be positioned anywhere in the network and do not have to sit in the data path. With a controller-based SDN approach, you are not embedding the control plane in the network: the controller is off the path. Now you can scale out, and SDN frameworks centrally provision and push policy down to the data plane.

Viptela can take any circuit and provide the ubiquitous connectivity MPLS provided, but now, it’s based on a policy with a central controller. Remote sites can have random transport methods. One leg could be the Internet, and the other could be MPLS. As long as there is an IP path between endpoint A and the controller, Viptela can provide the ubiquitous data plane.


Viptela SD WAN and Secure Extensible Network (SEN)

Managed overlay network

If you look at the existing WAN, it has two parts: routing and security. Routing connects sites, and security secures transmission. The current model has too many network security and policy configuration points. SEN allows you to centralize control plane security and routing, resulting in data path fluidity; the controller takes care of routing and security decisions.

It passes the relevant information between endpoints. Endpoints can pop up anywhere in the network; all they have to do is set up a control channel to the central controller. This approach does not build excessive control channels, as the control channel runs between the controller and each endpoint, not from endpoint to endpoint. The data plane can then flow based on the policy defined at the center of the network.


Viptela SD WAN: Deployment considerations

Separate data plane nodes are deployed at the customer site and integrated into the existing infrastructure at Layer 2 or Layer 3, so you can deploy incrementally, starting with one node and scaling to thousands. It is so scalable because it is based on routed technology. The model allows you to deploy, for example, a guest network first and integrate it further into your network over time. Internally, Viptela uses the Border Gateway Protocol (BGP). On the data plane, it uses standard IPsec between endpoints, and it works across Network Address Translation (NAT), meaning IPsec over UDP.

When attackers gain access to a network, it is easy for them to establish a beachhead and hop from one segment to another. Viptela enables per-segment encryption, so even if they get into one segment, they cannot jump to another. Key management at a global scale has always been a challenge; Viptela solves this with a proprietary distributed key manager based on a priority system. Currently, their key management solution is not open to the industry.


SDN controller

You have a controller and VPN termination points, i.e., data plane points. The controller is the central management piece that assigns policy; the data plane modules are shipped to customer sites. The controller allows you to dictate different topologies for individual endpoint segments, similar to how you influence routing tables with RTs in MPLS.

The control plane is at the controller.


Data plane module

Data plane modules are located at the customer site and connect to the internal side of the network, which could be a PE hand-off. The data plane module must sit in the data plane path at the customer site. On the internal side, it discovers the routing protocols and participates in prefix learning; at Layer 2, it discovers the VLANs. The module can either be the default gateway or form router neighbor relationships. On the WAN side, the data plane module registers its uplink IP address with the WAN controller/orchestration system. The controller then builds encrypted tunnels between the data plane endpoints; encrypted control channels are only needed when you build over untrusted third parties.

If a problem occurs with controller connectivity, the on-site module can stop being the default gateway and fall back to ordinary Layer 3 forwarding with existing protocols, backing off from being the primary router for off-net traffic. It is like creating a VRF for each business, with a default route per VRF and a single peering point to the controller, plus Policy-Based Routing (PBR) per VRF for data plane activity. The PBR is based on information coming from the controller, and each control segment can have a separate policy (for example, modifying the next hop). From a configuration point of view, you only need an IP address on the data plane module and the remote controller IP; the controller pushes down the rest.


Viptela SD WAN: Use Cases

For example, you have a branch office with three distinct segments, and you want each endpoint to have its own independent topology. The topology should be service-driven; the service should not have to follow a pre-defined topology. Each business should dictate how it wants to connect to the network; the network team should not declare a fixed topology that everyone must obey.

From a carrier's perspective, they can expand their MPLS network into areas where they have no physical presence, bringing customers over this secure overlay to the closest PoP with MPLS peering. If an MPLS provider has customers in region X and wants to connect a customer in region Y, it can use Viptela, with the various data plane endpoints passing through a security framework before entering the MPLS network.

Viptela also lets you steer traffic based on the SLA requirements of the application, aka Application-Aware Routing. For example, if two sites have dual connectivity to MPLS and the Internet, the data plane modules located at the customer sites can steer traffic over either transport based on end-to-end latency or drops. They do this by maintaining real-time loss, latency, and jitter characteristics and applying policies from the centralized controller. As a result, critical traffic is always steered to the most reliable link. This architecture can scale to 1,000 nodes in a full-mesh topology.


What is OpenFlow?

In today's rapidly evolving digital landscape, network management and data flow control have become critical for businesses of all sizes. OpenFlow is one technology that has gained significant attention and is transforming how networks are managed. In this blog post, we will delve into the concept of OpenFlow, its advantages, and its implications for network control.

OpenFlow is an open-standard communications protocol that separates the control and data planes in a network architecture. It allows network administrators to have direct control over the behavior of network devices, such as switches and routers, by utilizing a centralized controller.

Traditional network architectures follow a closed model, where network devices make independent decisions on forwarding packets. On the other hand, OpenFlow introduces a centralized control plane that provides a global view of the network and allows administrators to define network policies and rules from a centralized location.

OpenFlow operates by establishing a secure channel between the centralized controller and the network switches. The controller is responsible for managing the flow tables within the switches, defining how traffic should be forwarded based on predefined rules and policies. This separation of control and data planes allows for dynamic network management and facilitates the implementation of innovative network protocols.

One of the key advantages of OpenFlow is its ability to simplify network management. By centralizing control, administrators can easily configure and manage the entire network from a single point of control. This reduces complexity and enhances the scalability of network infrastructure. Additionally, OpenFlow enables network programmability, allowing for the development of custom networking applications and services tailored to specific requirements.

OpenFlow plays a crucial role in network virtualization, as it allows for the creation and management of virtual networks on top of physical infrastructure. By abstracting the underlying network, OpenFlow empowers organizations to optimize resource utilization, improve security, and enhance network performance. It opens doors to dynamic provisioning, isolation, and efficient utilization of network resources.

Highlights: What is OpenFlow?

How does OpenFlow work?

1: OpenFlow allows network controllers to determine the path of network packets across a network of switches. The controllers are distinct from the switches, and this separation of control and forwarding allows traffic management that is more sophisticated than access control lists (ACLs) and routing protocols can provide.

2: The OpenFlow protocol allows switches from different vendors, often with proprietary interfaces and scripting languages, to be managed remotely. Its inventors consider OpenFlow an enabler of software-defined networking (SDN).

3: With OpenFlow, a controller can remotely add, modify, and remove packet-matching rules and actions on Layer 3 switches. Routing decisions can thus be made periodically or ad hoc by the controller and translated into rules and actions with a configurable lifespan, which are then deployed to the switch's flow table, where packets are forwarded at wire speed for the duration of the rule.

The Role of OpenFlow Controllers

If the switch cannot match packets against its flow table, it can send them to the controller. The controller can then modify existing flow table rules or deploy new ones to handle that traffic flow going forward. It may even forward the traffic itself, if the switch has been instructed to send full packets rather than just their headers.

OpenFlow runs over Transport Layer Security (TLS) on top of the Transmission Control Protocol (TCP). Controllers listen on TCP port 6653, and switches initiate connections to them; earlier versions of OpenFlow unofficially used port 6633. The protocol is mainly used between switches and controllers.

### Origins of OpenFlow

The inception of OpenFlow can be traced back to the early 2000s when researchers at Stanford University sought to create a more versatile and programmable network architecture. Traditional networking relied heavily on static and proprietary hardware configurations, which limited innovation and adaptability. OpenFlow emerged as a solution to these challenges, offering a standardized protocol that decouples the network control plane from the data plane. This separation allows for centralized control and dynamic adjustment of network traffic, fostering innovation and agility in network design.

### How OpenFlow Works

At its core, the OpenFlow protocol facilitates communication between network devices and a centralized SDN controller. It does this by using a series of flow tables within network switches, which are programmed by the controller. These flow tables dictate how packets should be handled, whether they are forwarded to a destination, dropped, or modified in some way. By leveraging OpenFlow, network administrators can deploy updates, optimize performance, and troubleshoot issues with unprecedented speed and precision, all from a single point of control.

Introducing SDN

Recent changes and requirements have driven networks and network services to become more flexible, virtualization-aware, and API-driven. One major trend affecting the future of networking is software-defined networking ( SDN ). The software-defined architecture aims to abstract the entire network into a single logical switch.

Software-defined networking (SDN) is an evolving technology defined by the Open Networking Foundation ( ONF ). It involves the physical separation of the network control plane from the forwarding plane, where a control plane controls several devices. This differs significantly from traditional IP forwarding that you may have used in the past.

The Core Concepts of SDN

At its heart, Software-Defined Networking decouples the network control plane from the data plane, allowing network administrators to manage network services through abstraction of lower-level functionality. This separation enables centralized network control, which simplifies the management of complex networks. The control plane makes decisions about where traffic is sent, while the data plane forwards traffic to the selected destination. This approach allows for a more flexible, adaptable network infrastructure.

The activities around OpenFlow

Even though OpenFlow has received a lot of industry attention, programmable networks and control planes (control logic) decoupled from data planes have been around for many years. To enhance the openness, extensibility, and programmability of ATM, Internet, and mobile networks, the Open Signaling (OPENSIG) working group held workshops starting in 1995. Based on these ideas, a working group within the Internet Engineering Task Force (IETF) developed the General Switch Management Protocol (GSMP) to control label switches. The group officially ended in June 2002, and GSMPv3 was published.


Data and control plane

SDN, therefore, separates the data and control planes. The main driving body behind software-defined networking (SDN) is the Open Networking Foundation ( ONF ), a non-profit organization launched in 2011 to provide an alternative to proprietary solutions that limit flexibility and create vendor lock-in.

The formation of the ONF allowed its members to run proofs of concept on heterogeneous networking devices without requiring vendors to expose the internal code of their software. This created a path toward an open-source approach to networking and policy-based controllers.

Knowledge Check: Data & Control Plane

### Data Plane: The Highway for Your Data

The data plane, often referred to as the forwarding plane, is responsible for the actual movement of packets of data from source to destination. Imagine it as a network’s highway, where data travels at high speed. This component operates at the speed of light, handling massive amounts of data with minimal delay. It’s designed to process packets quickly, ensuring that the information arrives where it needs to be without interruption.

### Control Plane: The Brain Behind the Operation

While the data plane acts as the highway, the control plane is the brain that orchestrates the flow of traffic. It makes decisions about routing, managing the network topology, and controlling the data plane’s operations. The control plane uses protocols to determine the best paths for data and to update routing tables as needed. It’s responsible for ensuring that the network operates efficiently, adapting to changes, and maintaining optimal performance.

### Interplay Between Data and Control Planes

The synergy between the data and control planes is what enables modern networks to function effectively. The control plane provides the intelligence and decision-making necessary to guide the data plane. This interaction ensures that data packets take the best possible paths, reducing latency and maximizing throughput. As networks evolve, the lines between these planes may blur, but their distinct roles remain pivotal.

### Real-World Applications and Innovations

The concepts of data and control planes are not just theoretical—they have practical applications in technologies such as Software-Defined Networking (SDN) and Network Function Virtualization (NFV). These innovations allow for greater flexibility and scalability in managing network resources, offering businesses the agility needed to adapt to changing demands.

Building blocks: SDN Environment 

The fundamental building blocks of an SDN deployment are the controller, the SDN switch (for example, an OpenFlow switch), and the controller's interfaces for communicating with forwarding devices and applications: generally the southbound interface (OpenFlow) and the northbound interface (the network application interface).

In an SDN, switches function as basic forwarding hardware, accessible via an open interface, with the control logic and algorithms offloaded to controllers. Hybrid (OpenFlow-enabled) and pure (OpenFlow-only) OpenFlow switches are available.

OpenFlow switches rely entirely on a controller for forwarding decisions, without legacy features or onboard control. Hybrid switches support OpenFlow as well, in addition to traditional operation and protocols. Today, hybrid switches are the most common type of commercial switch. A flow table performs packet lookup and forwarding in an OpenFlow switch.

### The Role of the OpenFlow Controller

The OpenFlow controller is the brain of the SDN, orchestrating the flow of data across the network. It communicates with the network devices using the OpenFlow protocol to dictate how packets should be handled. The controller’s primary function is to make decisions on the path data packets should take, ensuring optimal network performance and resource utilization. This centralization of control allows for dynamic network configuration, paving the way for innovative applications and services.

### OpenFlow Switches: The Workhorses of the Network

While the controller is the brain, OpenFlow switches are the workhorses, executing the instructions they receive. These switches operate at the data plane, where they forward packets based on the rules set by the controller. Each switch maintains a flow table that matches incoming packets to particular actions, such as forwarding or dropping the packet. This separation of control and data planes is what sets SDN apart from traditional networking, offering unparalleled flexibility and control.

### Flow Tables: The Heart of OpenFlow Switches

Flow tables are the core component of OpenFlow switches, dictating how packets are handled. Each entry in a flow table consists of match fields, counters, and a set of instructions. Match fields identify the packets that should be affected by the rule, counters track the number of packets and bytes that match the entry, and instructions define the actions to be taken. This modular approach allows for precise traffic management and is essential for implementing advanced network policies.
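To make the flow-entry structure concrete, here is a minimal Python sketch of a flow table entry and its match logic. It is illustrative only: the field names and the dictionary-based match are simplifications for exposition, not the actual OpenFlow wire format.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """Illustrative OpenFlow-style flow entry (not the real wire format)."""
    priority: int          # higher-priority entries are consulted first
    match: dict            # match fields, e.g. {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}
    instructions: list     # actions, e.g. ["output:2"] or ["drop"]
    packet_count: int = 0  # counters updated on every match
    byte_count: int = 0

    def matches(self, packet: dict) -> bool:
        # A packet matches when every match field equals the packet's field;
        # fields absent from `match` are effectively wildcarded.
        return all(packet.get(k) == v for k, v in self.match.items())
```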

OpenFlow Table & Routing Tables 

### What is an OpenFlow Flow Table?

OpenFlow is a protocol that allows network controllers to interact with the forwarding plane of network devices like switches and routers. At the heart of OpenFlow is the flow table. This table contains a set of flow entries, each specifying actions to take on packets that match a particular pattern. Unlike traditional routing tables, which rely on predefined paths and protocols, OpenFlow flow tables provide flexibility and programmability, allowing dynamic changes in how packets are handled based on real-time network conditions.

### Understanding Routing Tables

Routing tables are a staple of traditional networking, used by routers to determine the best path for forwarding packets to their final destinations. These tables consist of a list of network destinations, with associated metrics that help in selecting the most efficient route. Routing protocols such as OSPF, BGP, and RIP are employed to maintain and update these tables, ensuring that data flows smoothly across the interconnected web of networks. While reliable, routing tables are less flexible compared to OpenFlow flow tables, as changes in network traffic patterns require updates to routing protocols and configurations.

Example of a Routing Table 

Diagram: RIP configuration.

### Key Differences Between OpenFlow Flow Table and Routing Table

The primary distinction between OpenFlow flow tables and routing tables lies in their approach to network management. OpenFlow flow tables are dynamic and programmable, allowing for real-time adjustments and fine-grained control over traffic flows. This makes them ideal for environments where network agility and customization are paramount. Conversely, routing tables offer a more static and predictable method of packet forwarding, which can be beneficial in stable networks where consistency and reliability are prioritized.

### Use Cases: When to Use Each

OpenFlow flow tables are particularly advantageous in software-defined networking (SDN) environments, where network administrators need to quickly adapt to changing conditions and optimize traffic flows. They are well-suited for data centers, virtualized networks, and scenarios requiring high levels of automation and scalability. On the other hand, traditional routing tables are best used in established networks with predictable traffic patterns, such as those found in enterprise or service provider settings, where reliability and stability are key.

You may find the following useful for pre-information:

  1. OpenFlow Protocol
  2. Network Traffic Engineering
  3. What is VXLAN
  4. SDN Adoption Report
  5. Virtual Device Context

What is OpenFlow?

OpenFlow was the first protocol of the Software-Defined Networking (SDN) trend and the best-known protocol for decoupling a network device's control plane from its data plane. In the most straightforward terms, the control plane can be thought of as the brains of a network device, while the data plane is the hardware or application-specific integrated circuits (ASICs) that perform packet forwarding.

Numerous devices also support running OpenFlow in hybrid mode. OpenFlow can be deployed on a given port, virtual local area network (VLAN), or within a regular packet-forwarding pipeline; if there is no match in the OpenFlow table, the existing forwarding tables (MAC, routing, and so on) are used, making it more analogous to Policy-Based Routing (PBR).

Diagram: What is OpenFlow? The source is cable solutions.

What is SDN?

Despite various modifications to the underlying architecture and devices (such as switches, routers, and firewalls), traditional network technologies have existed since the inception of networking. Using a similar approach, frames and packets have been forwarded and routed in a limited manner, resulting in low efficiency and high maintenance costs. Consequently, the architecture and operation of networks need to evolve, resulting in SDN.

By enabling network programmability, SDN promises to simplify network control and management and allow innovation in computer networking. Network engineers configure policies to respond to various network events and application scenarios. They can achieve the desired results by manually converting high-level policies into low-level configuration commands.

Often, minimal tools are available to accomplish these very complex tasks. Controlling network performance and tuning network management are challenging and error-prone tasks.

A modern network architecture consists of a control plane, a data plane, and a management plane; in traditional devices, the control and data planes are merged inside a single box. To overcome these limitations, programmable networks have emerged.

How OpenFlow Works:

At the core of OpenFlow is the concept of a flow table, which resides in each OpenFlow-enabled switch. The flow table contains match-action rules defining how incoming packets should be processed and forwarded. The centralized controller determines these rules and communicates using the OpenFlow protocol with the switches.

When a packet arrives at an OpenFlow-enabled switch, it is first matched against the rules in the flow table. If a match is found, the corresponding action is executed, including forwarding the packet, dropping it, or sending it to the controller for further processing. This decoupling of the control and data planes allows for flexible and programmable network management.
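The match-then-act pipeline described above can be sketched in a few lines of Python. This is a conceptual model, assuming dictionary-based packets and flow entries; a real switch performs this lookup in hardware (often TCAM) and speaks the OpenFlow wire protocol to the controller.

```python
def process_packet(flow_table: list, packet: dict) -> list:
    """Return the actions for a packet, emulating an OpenFlow table lookup.

    flow_table entries are dicts such as:
        {"priority": 10, "match": {"eth_dst": "aa:bb:cc:dd:ee:ff"},
         "actions": ["output:2"]}
    """
    # Consult entries from highest to lowest priority, as an OpenFlow switch does.
    for entry in sorted(flow_table, key=lambda e: e["priority"], reverse=True):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]      # forward, drop, modify, ...
    return ["packet-in"]                 # table miss: punt to the controller
```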

What is OpenFlow SDN?

The main goal of SDN is to separate the control and data planes and transfer network intelligence and state to the control plane. These concepts have been exploited by technologies like Routing Control Platform (RCP), Secure Architecture for Network Enterprise (SANE), and, more recently, Ethane.

In addition, there is often a connection between SDN and OpenFlow. The Open Networking Foundation (ONF) is responsible for advancing SDN and standardizing OpenFlow, whose most recent major version is 1.5.

An SDN deployment starts with these building blocks.

An SDN deployment consists of the SDN switch (for example, an OpenFlow switch), the SDN controller, and the interfaces the controller uses to communicate with forwarding devices. Two basic building blocks anchor the deployment: a southbound interface (OpenFlow) and a northbound interface (the network application interface).

As the control logic and algorithms are offloaded to a controller, switches in SDNs may be represented as basic forwarding hardware. Switches that support OpenFlow come in two varieties: pure (OpenFlow-only) and hybrid (OpenFlow-enabled).

Pure OpenFlow switches do not have legacy features or onboard control for forwarding decisions. A hybrid switch can operate with both traditional protocols and OpenFlow. Hybrid switches make up the majority of commercial switches available today. In an OpenFlow switch, a flow table performs packet lookup and forwarding.

OpenFlow reference switch

The OpenFlow protocol and interface allow OpenFlow switches to be accessed as basic forwarding elements. A flow-based SDN architecture like OpenFlow simplifies the switching hardware, but it may require additional forwarding tables, buffer space, and statistical counters that are difficult to implement in traditional switches built on application-specific integrated circuits.

There are two types of switches in an OpenFlow network: hybrid (OpenFlow-enabled) and pure (OpenFlow-only). Hybrid switches support OpenFlow alongside traditional protocols (L2/L3). Pure OpenFlow switches rely entirely on a controller for forwarding decisions and have no legacy features or onboard control.

Hybrid switches make up the majority of switches currently available on the market. OpenFlow switches are controlled over an open interface (through a TCP-based TLS session), and this control link must remain active and secure. OpenFlow is a messaging protocol that defines communication between OpenFlow switches and controllers and can be viewed as an implementation of SDN-based controller-switch interactions.

Diagram: OpenFlow switch. The source is cable solution.

Identify the Benefits of OpenFlow

  • Application-driven routing: users can control the network paths.
  • A way to enhance link utilization.
  • An open solution for VM mobility, with no reliance on VLANs.
  • A means to traffic-engineer without MPLS.
  • A solution for building very large Layer 2 networks.
  • A way to scale firewalls and load balancers.
  • A way to configure an entire network as a whole, as opposed to individual entities.
  • A way to build your own off-the-box encryption solution.
  • A way to distribute policies from a central controller.
  • Customized flow forwarding, based on a variety of bit patterns.
  • A solution for getting a global view of the network and its state: end-to-end visibility.
  • A solution for using commodity switches in the network: massive cost savings.

The following table lists the Software-Defined Networking (SDN) benefits alongside the problems encountered with the existing control plane architecture:

| Identify the benefits of OpenFlow and SDN | Problems with the existing approach |
|---|---|
| Faster software deployment. | Large-scale provisioning and orchestration. |
| Programmable network elements. | Limited traffic engineering (MPLS TE is cumbersome). |
| Faster provisioning. | Synchronizing distributed policies. |
| Centralized intelligence with centralized controllers. | Routing of large elephant flows. |
| Decisions based on end-to-end visibility. | QoS and load-based forwarding models. |
| Granular control of flows. | Inability to scale with VLANs. |
| Decreased dependence on network appliances such as load balancers. | |

**A key point: The lack of a session layer in the TCP/IP stack**

Regardless of the hype and benefits of SDN, neither OpenFlow nor other SDN technologies address the real problems of the lack of a session layer in the TCP/IP protocol stack. The problem is that the client’s application ( Layer 7 ) connects to the server’s IP address ( Layer 3 ), and if you want to have persistent sessions, the server’s IP address must remain reachable. 

Maintaining session persistence and the ability to connect to multiple Layer 3 addresses to reach the same device are the job of the OSI session layer. The session layer provides the services for opening, closing, and managing a session between end-user applications. In addition, it allows information from different sources to be correctly combined and synchronized.

The problem is the TCP/IP reference module does not consider a session layer, and there is none in the TCP/IP protocol stack. SDN does not solve this; it gives you different tools to implement today’s kludges.

Diagram: What is OpenFlow? The lack of a session layer.

Control and data plane

When we identify the benefits of OpenFlow, let us first examine traditional networking operations. Traditional networking devices have a control and forwarding plane, depicted in the diagram below. The control plane is responsible for setting up the necessary protocols and controls so the data plane can forward packets, resulting in end-to-end connectivity. These roles are shared on a single device, and the fast packet forwarding ( data path ) and the high-level routing decisions ( control path ) occur on the same device.

What is OpenFlow? SDN separates the data and control planes

**Control plane**

The control plane is part of the router architecture and is responsible for drawing the network map in routing. When we mention control planes, you usually think about routing protocols, such as OSPF or BGP. But in reality, the control plane protocols perform numerous other functions, including:

  • Connectivity management (BFD, CFM)
  • Interface state management (PPP, LACP)
  • Service provisioning (RSVP for IntServ or MPLS TE)
  • Topology and reachability information exchange (IP routing protocols, IS-IS in TRILL/SPB)
  • Adjacent device discovery via HELLO mechanisms
  • ICMP

Control plane protocols run over data plane interfaces to ensure “shared fate” – if the packet forwarding fails, the control plane protocol fails as well.

Most control plane protocols ( BGP, OSPF, BFD ) are not data-driven. A BGP or BFD packet is never sent as a direct response to a data packet. There is a question mark over the validity of ICMP as a control plane protocol. The debate is whether it should be classed in the control or data plane category.

Some ICMP packets are sent as replies to other ICMP packets, and others are triggered by data plane packets, i.e., data-driven. My view is that ICMP is a control plane protocol triggered by data plane activity. After all, the "C" in ICMP does stand for "Control."

**Data plane**

The data path is part of the routing architecture that decides what to do when a packet is received on its inbound interface. It is primarily focused on forwarding packets but also includes the following functions:

  • ACL logging
  • NetFlow accounting
  • NAT session creation
  • NAT table maintenance

The data forwarding is usually performed in dedicated hardware, while the additional functions ( ACL logging, Netflow accounting ) typically happen on the device CPU, commonly known as “punting.” The data plane for an OpenFlow-enabled network can take a few forms.

However, the most common, even in the commercial offering, is the Open vSwitch, often called the OVS. The Open vSwitch is an open-source implementation of a distributed virtual multilayer switch. It enables a switching stack for virtualization environments while supporting multiple protocols and standards.

Software-defined networking changes the control and data plane architecture.

The concept of SDN separates these two planes, i.e., the control and forwarding planes are decoupled. This allows the networking devices in the forwarding path to focus solely on packet forwarding. An out-of-band network uses a separate controller ( orchestration system ) to set up the policies and controls. Hence, the forwarding plane has the correct information to forward packets efficiently.

In addition, it allows the network control plane to be moved to a centralized controller on a server instead of residing on the same box carrying out the forwarding. Moving the intelligence ( control plane ) of the data plane network devices to a controller enables companies to use low-cost, commodity hardware in the forwarding path. A significant benefit is that SDN separates the data and control plane, enabling new use cases.

A centralized computation and management plane makes more sense than a centralized control plane.

The controller maintains a view of the entire network and communicates via OpenFlow (or, in some cases, BGP with BGP SDN) with the different types of OpenFlow-enabled network boxes. The data path portion remains on the switch, such as the OVS bridge, while the high-level decisions are moved to a separate controller. The data path presents a clean flow table abstraction, and each flow table entry contains a set of packet fields to match, resulting in specific actions (drop, redirect, send-out-port).

When an OpenFlow switch receives a packet it has never seen before and doesn’t have a matching flow entry, it sends the packet to the controller for processing. The controller then decides what to do with the packet.

Applications could then be developed on top of this controller, performing security scrubbing, load balancing, traffic engineering, or customized packet forwarding. The centralized view of the network simplifies problems that were harder to overcome with traditional control plane protocols.

A single controller could potentially manage all OpenFlow-enabled switches. Instead of individually configuring each switch, the controller can push down policies to multiple switches simultaneously—a compelling example of many-to-one virtualization.

Now that SDN separates the data and control plane, the operator uses the centralized controller to choose the correct forwarding information per-flow basis. This allows better load balancing and traffic separation on the data plane. In addition, there is no need to enforce traffic separation based on VLANs, as the controller would have a set of policies and rules that would only allow traffic from one “VLAN” to be forwarded to other devices within that same “VLAN.”

The advent of VXLAN

With the advent of VXLAN, which allows up to 16 million logical entities, the benefits of SDN should not be purely associated with overcoming VLAN scaling issues; VXLAN already does an excellent job of that. It does make sense to deploy a centralized control plane in smaller independent islands; in my view, it should sit at the edge of the network for security and policy enforcement roles. Using OpenFlow on one or more edge devices is easy to implement and scale.

It also decreases the impact of controller failure. If a controller fails and its sole job is implementing packet filters when a new user connects to the network, the only effect is that the new user cannot connect. If the controller is responsible for core changes, a failure could have far more interesting results. New users not being able to connect is bad, but it is not as bad as losing your entire fabric.

Diagram: Loop prevention. Source is Cisco.

A traditional networking device runs all the control and data plane functions. The control plane, usually implemented in the central CPU or the supervisor module, downloads the forwarding instructions into the data plane structures. Every vendor needs communications protocols to bind the two planes together to download forward instructions. 

Therefore, all distributed architectures need a protocol between the control and data plane elements. In traditional vendor devices, the protocol binding this communication path is not open source, and every vendor uses its own proprietary protocol (Cisco, for example, uses IPC, or InterProcess Communication).

Openflow tries to define a standard protocol between the control plane and the associated data plane. When you think of Openflow, you should relate it to the communication protocol between the traditional supervisors and the line cards. OpenFlow is just a low-level tool.

OpenFlow is a control plane ( controller ) to data plane ( OpenFlow enabled device ) protocol that allows the control plane to modify forwarding entries in the data plane. It enables SDN to separate the data and control planes.


Proactive versus reactive flow setup

OpenFlow operations have two types of flow setups: Proactive and Reactive.

With proactive flow setup, the controller populates the flow tables ahead of time, similar to typical routing. By pre-defining flows and actions in the switch's flow tables, the packet-in event never occurs, and all packets are forwarded at line rate. With reactive flow setup, the network devices react to traffic, consult the OpenFlow controller, and create a rule in the flow table based on the instruction. The problem with this approach is that it can incur many CPU hits.
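The difference can be sketched as two controller-side routines. The controller object and its install_flow and compute_path methods are hypothetical stand-ins for a real controller API, used here only to contrast the two models.

```python
# Proactive: flows are pushed before any traffic arrives, so no packet-in occurs.
def proactive_setup(controller, switches, routes):
    for switch in switches:
        for dst_prefix, out_port in routes[switch]:
            controller.install_flow(switch,
                                    match={"ipv4_dst": dst_prefix},
                                    actions=[f"output:{out_port}"])

# Reactive: a flow is installed only when the first packet of a flow is punted,
# costing one controller round-trip (a CPU hit) per new flow.
def on_packet_in(controller, switch, packet):
    out_port = controller.compute_path(switch, packet["ipv4_dst"])
    controller.install_flow(switch,
                            match={"ipv4_dst": packet["ipv4_dst"]},
                            actions=[f"output:{out_port}"])
```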


The following table outlines the critical points for each type of flow setup:

| Proactive flow setup | Reactive flow setup |
|---|---|
| Works well when the controller is emulating BGP or OSPF. | Used when no one can predict when and where a new MAC address will appear. |
| The controller must first discover the entire topology. | Punts unknown packets to the controller, causing many CPU hits. |
| Discovers endpoints (MAC addresses, IP addresses, and IP subnets). | Computes forwarding paths on demand, not off-box. |
| Computes optimal forwarding off-box. | Installs flow entries based on actual traffic. |
| Downloads flow entries to the data plane switches. | Has many scalability concerns, such as the packet punting rate. |
| No data plane controller involvement, with the exceptions of ARP and MAC learning. Line-rate performance. | Not a recommended setup. |

Hop-by-hop versus path-based forwarding

The following table illustrates the key points of the two forwarding methods used with OpenFlow, hop-by-hop forwarding and path-based forwarding (a code sketch contrasting the two follows the table):

| Hop-by-hop forwarding | Path-based forwarding |
|---|---|
| Similar to traditional IP forwarding. | Similar to MPLS. |
| Installs identical flows on each switch on the data path. | Maps flows to paths on ingress switches and assigns user traffic to paths at the edge node. |
| Scalability concerns relating to flow updates after a change in topology. | Computes paths across the network and installs end-to-end path-forwarding entries. |
| Significant overhead in large-scale networks. | Works better than hop-by-hop forwarding in large-scale networks. |
| FIB update challenges and convergence time. | Core switches don't have to support the same granular functionality as edge switches. |
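Here is a hedged sketch of the two installation strategies, again using a hypothetical controller.install_flow API: hop-by-hop programs a per-flow entry on every switch along the path, while path-based classifies once at the ingress and lets the core match only on a path label, much like an MPLS LSP.

```python
def install_hop_by_hop(controller, path, match):
    # path is a list of (switch, out_port) hops; every switch gets the
    # same granular per-flow entry, which is costly to update at scale.
    for switch, out_port in path:
        controller.install_flow(switch, match=match,
                                actions=[f"output:{out_port}"])

def install_path_based(controller, path, match, path_label):
    # Classify the flow once at the ingress edge and push a path label.
    ingress_switch, ingress_port = path[0]
    controller.install_flow(ingress_switch, match=match,
                            actions=[f"push_label:{path_label}",
                                     f"output:{ingress_port}"])
    # Core switches only match on the label, so they need far fewer
    # and far less granular entries.
    for switch, out_port in path[1:]:
        controller.install_flow(switch, match={"label": path_label},
                                actions=[f"output:{out_port}"])
```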

With any controller-based network, the controller is a lucrative target for attack. Anyone who knows you are using a controller-based network will try to attack the controller and its control plane. An attacker may attempt to intercept the controller-to-switch communication and inject their own commands, essentially attacking the control plane with whatever means they like.

An attacker may also try to insert a malformed packet or some other type of unknown packet into the controller (a fuzzing attack), exploiting bugs in the controller and causing it to crash. 

Fuzzing attacks can be carried out with application scanning software such as Burp Suite. It attempts to manipulate data in a particular way, breaking the application.

The best way to tighten security is to encrypt switch-to-controller communications with SSL/TLS and to use certificates (even self-signed ones) to authenticate the switch and controller. It is also best to minimize the controller's interaction with the data plane, except for ARP and MAC learning.
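As a rough illustration of the switch side of such a secured control channel, the Python sketch below wraps a TCP session to the controller in TLS with mutual certificate authentication. The hostname and certificate file names are placeholders; 6653 is the IANA-assigned OpenFlow port.

```python
import socket
import ssl

# Placeholder names: the controller address and certificate paths are examples only.
CONTROLLER = "controller.example.net"
PORT = 6653  # IANA-assigned OpenFlow port

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("controller-ca.pem")            # trust anchor
context.load_cert_chain("switch-cert.pem", "switch-key.pem")  # switch identity (mutual TLS)

with socket.create_connection((CONTROLLER, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=CONTROLLER) as tls_sock:
        # All OpenFlow messages now travel over the authenticated TLS channel;
        # these 8 bytes are a minimal OpenFlow 1.3 HELLO header (version, type,
        # length, transaction id).
        tls_sock.sendall(b"\x04\x00\x00\x08\x00\x00\x00\x01")
```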

To prevent denial-of-service attacks on the controller, you can apply Control Plane Policing (CoPP) on ingress to avoid overloading the switch and the controller. At the time of writing, NEC was the only vendor implementing CoPP in its OpenFlow products.


The Hybrid deployment model is helpful from a security perspective. For example, you can group specific ports or VLANs to OpenFlow and other ports or VLANs to traditional forwarding, then use traditional forwarding to communicate with the OpenFlow controller.

Software-defined networking or traditional routing protocols?

The move to a Software-Defined Networking architecture has clear advantages. It’s agile and can react quickly to business needs, such as new product development. For businesses to succeed, they must have software that continues evolving.

Otherwise, your customers and staff may lose interest in your product and service. The following table displays the advantages and disadvantages of the existing routing protocol control architecture.

| Advantages | Disadvantages |
|---|---|
| Reliable and well known. | Non-standard forwarding models: destination-only and not load-aware metrics.** |
| Proven, with 20-plus years of field experience. | Loosely coupled. |
| Deterministic and predictable. | Lacks end-to-end transactional consistency and visibility. |
| Self-healing: traffic can reroute around a failed node or link. | Limited topology discovery and extraction; basic neighbor and topology tables. |
| Autonomous. | Lacks the ability to change existing control plane protocol behavior. |
| Scalable. | Lacks the ability to introduce new control plane protocols. |
| Plenty of learning and reading materials. | |

** Except basic EIGRP. The IETF originally proposed an Energy-Aware Control Plane, but later removed it.

Software-Defined Networking: Use Cases

| Use case | Description |
|---|---|
| Edge security policy enforcement at the network edge. | Authenticate users or VMs and deploy per-user ACLs before connecting a user to the network. |
| Custom routing and online TE. | The ability to route on a variety of business metrics, aka routing for dollars, allowing you to override the default routing behavior. |
| Custom traffic processing. | For analytics and encryption. |
| Programmable SPAN ports. | Use OpenFlow entries to mirror selected traffic to the SPAN port. |
| DoS traffic blackholing and distributed DoS prevention. | Block DoS traffic as close to the source as possible, with more selective traffic targeting than the original RTBH approach.** The traffic blocking is implemented in OpenFlow switches, giving higher performance at significantly lower cost. |
| Traffic redirection and service insertion. | Redirect a subset of traffic to network appliances, installing redirection flow entries wherever needed. |
| Network monitoring. | The controller is the authoritative source of information on network topology and forwarding paths. |
| Scale-out load balancing. | Punt new flows to the OpenFlow controller and install per-session entries throughout the network. |
| IPS scale-out. | OpenFlow is used to distribute the load to multiple IDS appliances. |

**Remote-Triggered Black Hole: RTBH refers to installing a host route to a bogus IP address (the RTBH address) pointing to a Null interface on all routers. BGP is used to advertise the host routes of the attacked hosts to other BGP peers, with the next hop pointing to the RTBH address. This is mainly automated in ISP environments.

SDN deployment models

Guidelines:

  1. Start with small deployments away from the mission-critical production path, i.e., the Core. Ideally, start with device or service provisioning systems.
  2. Start at the Edge and slowly integrate with the Core. Minimize the risk and blast radius. Start with packet filters at the Edge and tasks that can be easily automated ( VLANs ).
  3. Integrate new technology with the existing network.
  4. Gradually increase scale and gain trust. Experience is key.
  5. Have the controller in a protected out-of-band network with SSL connectivity to the switches.

There are four different models for OpenFlow deployment; the following sections list the key points of each model.

Native OpenFlow 

  • Commonly used for greenfield deployments.
  • The controller performs all the intelligent functions.
  • The forwarding plane switches have little intelligence and solely perform packet forwarding.
  • The white box switches need IP connectivity to the controller for the OpenFlow control sessions. Ideally, use an out-of-band network for this; if you are forced to use an in-band network for the communication path, use an isolated VLAN with STP.
  • Fast convergence techniques such as BFD may be challenging to use with a central controller.
  • Many people believe this approach does not work for a regular company. Companies implementing native OpenFlow, such as Google, have the time and resources to reinvent the wheel when implementing a new control-plane protocol (OpenFlow).

Native OpenFlow with Extensions

  • Some control plane functions are offloaded from the centralized controller to the forwarding plane switches. For example, the OpenFlow-enabled switches could load balance across multiple links without a prior decision from the controller. You could also run STP, LACP, or ARP locally on the switch without interaction with the controller. This approach is helpful if you lose connectivity to the controller: if the low-level switches perform certain controller functions, packet forwarding continues in the event of a failure.
  • The local switches must support the specific OpenFlow extensions that let them perform functions on the controller's behalf.

Hybrid ( Ships in the night )

  • This approach is used where OpenFlow runs in parallel with the production network.
  • The same network box is controlled by the existing on-box control plane and an off-box control plane (OpenFlow).
  • Suitable for pilot deployment models as switches still run traditional control plane protocols.
  • The Openflow controller manages only specific VLANs or ports on the network.
  • The big challenge is determining and investigating the conflict-free sharing of forwarding plane resources across multiple control planes.

Integrated OpenFlow

  • OpenFlow classifiers and forwarding entries are integrated with the existing control plane. For example, Juniper’s OpenFlow model follows this mode of operation where OpenFlow static routes can be redistributed into the other routing protocols.
  • No need for a new control plane.
  • No need to replace all forwarding hardware.
  • It is the most practical approach as long as the vendor supports it.

Closing Points on OpenFlow

OpenFlow is a communication protocol that provides access to the forwarding plane of a network switch or router over the network. It was initially developed at Stanford University and has since been embraced by the Open Networking Foundation (ONF) as a core component of SDN. OpenFlow allows network administrators to program the control plane, enabling them to direct how packets are forwarded through the network. This decoupling of the control and data planes is what empowers SDN to offer more dynamic and flexible network management.

The architecture of OpenFlow is quite straightforward yet powerful. It consists of three main components: the controller, the switch, and the protocol itself. The controller is the brains of the operation, managing network traffic by sending instructions to OpenFlow-enabled switches. These switches, in turn, execute the instructions received, altering the flow of network data accordingly. This setup allows for centralized network control, offering unprecedented levels of automation and agility.

**Advantages of OpenFlow**

OpenFlow brings several critical advantages to network management and control:

1. Flexibility and Programmability: With OpenFlow, network administrators can dynamically reconfigure the behavior of network devices, allowing for greater adaptability to changing network requirements.

2. Centralized Control: By centralizing control in a single controller, network administrators gain a holistic view of the network, simplifying management and troubleshooting processes.

3. Innovation and Experimentation: OpenFlow enables researchers and developers to experiment with new network protocols and applications, fostering innovation in the networking industry.

4. Scalability: OpenFlow’s centralized control architecture provides the scalability needed to manage large-scale networks efficiently.

**Implications for Network Control**

OpenFlow has significant implications for network control, paving the way for new possibilities in network management:

1. Software-Defined Networking (SDN): OpenFlow is a critical component of the broader concept of SDN, which aims to decouple network control from the underlying hardware, providing a more flexible and programmable infrastructure.

2. Network Virtualization: OpenFlow facilitates network virtualization, allowing multiple virtual networks to coexist on a single physical infrastructure.

3. Traffic Engineering: By controlling the flow of packets at a granular level, OpenFlow enables advanced traffic engineering techniques, optimizing network performance and resource utilization.

OpenFlow represents a paradigm shift in network control, offering a more flexible, scalable, and programmable approach to managing networks. By separating the control and data planes, OpenFlow empowers network administrators to have fine-grained control over network behavior, improving efficiency, innovation, and adaptability. As the networking industry continues to evolve, OpenFlow and its related technologies will undoubtedly play a crucial role in shaping the future of network management.

Summary: What is OpenFlow?

In the rapidly evolving world of networking, OpenFlow has emerged as a game-changer. This revolutionary technology has transformed the way networks are managed, offering unprecedented flexibility, control, and efficiency. In this blog post, we will delve into the depths of OpenFlow, exploring its definition, key features, and benefits.

What is OpenFlow?

OpenFlow can be best described as an open standard communications protocol that enables the separation of the control plane and the data plane in network devices. It allows centralized control over a network’s forwarding elements, making it possible to program and manage network traffic dynamically. By decoupling the intelligence of the network from the underlying hardware, OpenFlow provides a flexible and programmable infrastructure for network administrators.

Key Features of OpenFlow

a) Centralized Control: One of the core features of OpenFlow is its ability to centralize network control, allowing administrators to define and implement policies from a single point of control. This centralized control improves network visibility and simplifies management tasks.

b) Programmability: OpenFlow’s programmability empowers network administrators to define how network traffic should be handled based on their specific requirements. Through the use of flow tables and match-action rules, administrators can dynamically control the behavior of network switches and routers.

c) Software-Defined Networking (SDN) Integration: OpenFlow plays a crucial role in the broader concept of Software-Defined Networking. It provides a standardized interface for SDN controllers to communicate with network devices, enabling dynamic and automated network provisioning.

Benefits of OpenFlow

a) Enhanced Network Flexibility: With OpenFlow, network administrators can easily adapt and customize their networks to suit evolving business needs. The ability to modify network behavior on the fly allows for efficient resource allocation and improved network performance.

b) Simplified Network Management: By centralizing network control, OpenFlow simplifies the management of complex network architectures. Policies and configurations can be applied uniformly across the network, reducing administrative overhead and minimizing the chances of configuration errors.

c) Innovation and Experimentation: OpenFlow fosters innovation by providing a platform for the development and deployment of new network protocols and applications. Researchers and developers can experiment with novel networking concepts, paving the way for future advancements in the field.

Conclusion

OpenFlow has ushered in a new era of network management, offering unparalleled flexibility and control. Its ability to separate the control plane from the data plane, coupled with centralized control and programmability, has opened up endless possibilities in network architecture design. As organizations strive for more agile and efficient networks, embracing OpenFlow and its associated technologies will undoubtedly be a wise choice.

Advances of IP Routing and Cloud

With the introduction of and hype around Software-Defined Networking (SDN) and cloud computing, one would assume there has been little or no work on advances in IP routing. You could say that the cloud has clouded the mind of the market. Regardless of the hype around this transformation, routing is still very much alive and remains a vital part of how the Internet operates. Packets still need to get to their destinations.

Advances in IP Routing

The Internet Engineering Task Force (IETF) develops and promotes voluntary internet standards, particularly those that comprise the Internet Protocol Suite (TCP/IP). The IETF shapes what comes next, and this is where all the routing takes place. It focuses on anything between the physical layer and the application layer. It doesn’t focus on the application itself, but on the technologies used to transport it, for example, HTTP.

In the IETF, no one is in charge, anyone can contribute, and everyone can benefit. As you can see from the chart below, routing (RTG) has over 250 active drafts and is the most popular area within the IETF.

Diagram: IETF Work Distribution.

The routing area is responsible for ensuring the continuous operation of the Internet routing system by maintaining the scalability and stability characteristics of the existing routing protocols, as well as developing new protocols, extensions, and bug fixes in a timely manner.

The following list shows the subgroups of the RTG area:

  • Bidirectional Forwarding Detection (BFD)
  • Open Shortest Path First IGP (OSPF)
  • Forwarding and Control Element Separation (forces)
  • Routing Over Low power and Lossy networks (roll)
  • Interface to the Routing System (i2rs)
  • Routing Area Working Group (rtgwg)
  • Inter-Domain Routing (IDR)
  • Secure Inter-Domain Routing (SIDR)
  • IS-IS for IP Internets (isis)
  • Source Packet Routing in Networking (spring)
  • Keying and Authentication for Routing Protocols (karp)
  • Mobile Ad-hoc Networks (manet)

The chart below displays the number of drafts per subgroup of the routing area. There has been a big increase in the "roll" subgroup, which is now second only to BGP. "Roll" stands for "Routing Over Low power and Lossy networks" and is driven by the Internet of Everything and machine-to-machine communication.

Diagram: RTG Ongoing Work.

OSPF Enhancements

OSPF is a link-state protocol that uses a common database to determine the shortest path to any destination.

Two main areas of interest in the Open Shortest Path First IGP (OSPF) subgroup are OSPFv3 LSA extendibility and Remote Loop-Free Alternates (LFAs). One benefit IS-IS has over OSPF is the ease with which it introduces new features through Type-Length-Value (TLV) structures and sub-TLVs. The IETF draft draft-ietf-ospf-ospfv3-lsa-extend extends the LSA format by allowing the optional inclusion of TLVs, making OSPF more flexible and extensible. For example, OSPFv3 uses a new TLV to support intra-area Traffic Engineering (TE), whereas OSPFv2 uses an opaque LSA.
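To illustrate why TLVs make a protocol easy to extend, here is a generic Type-Length-Value encoder and decoder in Python. The 2-byte type and length fields are a common convention; this is a generic sketch, not the exact OSPFv3 LSA layout (which, among other things, pads TLVs for alignment).

```python
import struct

def pack_tlv(tlv_type: int, value: bytes) -> bytes:
    # 2-byte type, 2-byte length, then the value (network byte order).
    return struct.pack("!HH", tlv_type, len(value)) + value

def unpack_tlvs(buffer: bytes):
    # Walk back-to-back TLVs; a receiver can simply skip unknown types,
    # which is what makes TLV-encoded protocols easy to extend.
    offset = 0
    while offset + 4 <= len(buffer):
        tlv_type, length = struct.unpack_from("!HH", buffer, offset)
        yield tlv_type, buffer[offset + 4:offset + 4 + length]
        offset += 4 + length

blob = pack_tlv(1, b"\x0a\x01\x01\x00") + pack_tlv(99, b"future-extension")
for tlv_type, value in unpack_tlvs(blob):
    print(tlv_type, value)   # unknown type 99 is carried without breaking parsing
```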

 

Diagram: TLV for OSPFv3.

 

Another shortcoming of OSPF is that it does not install a backup route in the routing table by default. Having a pre-installed backup path greatly improves convergence time: with pre-calculated backup routes already in the routing table, the router does not need to go through the LSA propagation and SPF calculation steps of the convergence process.

Loop-free alternates (LFA)

Loop-Free Alternates (LFA), known as Feasible Successors in EIGRP, are local router decisions to pre-install a backup path. In the diagram below:

  • Router A has a primary (A-C) and a secondary (A-B-C) path to 10.1.1.0/24.
  • Link state allows Router A to know the entire topology.
  • Router A should know that Router B offers an alternative path; Router B is a Loop-Free Alternate for destination 10.1.1.0/24.

Diagram: OSPF LFA.

This is done without any tunneling, and the backup route must already exist for the RIB to use it. If the second path doesn't exist in the first place, the OSPF process cannot install a Loop-Free Alternate; the LFA process does not create backup routes, it only pre-installs ones that already exist. An LFA is simply an alternative loop-free route calculated at a router in the network.
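The LFA check itself is a simple inequality (RFC 5286): a neighbor N is a loop-free alternate for destination D, protecting router S, when N's own shortest path to D does not lead back through S. Below is a small Python sketch using the topology from the diagram, assuming all link costs are 1.

```python
def is_loop_free_alternate(dist, neighbor, protecting_router, destination):
    # RFC 5286 loop-free condition:
    #   D(N, D) < D(N, S) + D(S, D)
    # i.e. the neighbor's shortest path to the destination must not
    # pass back through the router it is protecting.
    return (dist[(neighbor, destination)]
            < dist[(neighbor, protecting_router)]
            + dist[(protecting_router, destination)])

# Distances for the diagram's topology, assuming every link cost is 1.
dist = {
    ("B", "10.1.1.0/24"): 1,   # B reaches the prefix directly via C
    ("B", "A"): 1,
    ("A", "10.1.1.0/24"): 1,
}
print(is_loop_free_alternate(dist, "B", "A", "10.1.1.0/24"))  # True: B is an LFA
```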

A drawback of LFA is that it cannot work in all topologies, most notably ring topologies. The answer is to tunnel the traffic past the point where it would loop, effectively turning the ring topology into a mesh topology. For example, the diagram below shows that we must tunnel traffic from A to C. The tunnel type doesn't matter; it could be a GRE tunnel, an MPLS tunnel, an IP-in-IP tunnel, or just about any other encapsulation.
In this network:

  • Router A's best path is through E.
  • Router C's best path is through D.
  • Router A must forward traffic directly to C to prevent packets from looping back.

Diagram: Remote LFA.

In the following example, we look at Remote LFA, which leverages an MPLS network and the Label Distribution Protocol (LDP) for label distribution. If you use Traffic Engineering (TE), the mechanism is called "TE Fast ReRoute," not "Remote LFA." There is also a hybrid model combining Remote LFA and TE Fast ReRoute, used only when the above cannot work efficiently due to a complex topology or a corner-case scenario.

Remote LFAs extend the LFA space to "tunneled neighbors":

  • Router A runs a constrained SPF and finds that C is a valid LFA.
  • Since C is not directly connected, Router A must tunnel to C:
    a) Router A uses LDP to set up an MPLS path to C.
    b) Router A installs this alternate path as an LFA in the CEF table.
  • If the A-E link fails:
    a) Router A begins forwarding traffic along the LDP path.

Total convergence time is usually around 10 ms.

Remote LFA has some topology constraints. For example, remote LFAs cannot be calculated across a flooding domain boundary, i.e., an ABR in OSPF or an L1/L2 boundary in IS-IS. However, they work in about 80% of all possible topologies and 90% of production topologies.

BGP Enhancements

BGP is a scalable distance vector protocol that runs on top of TCP. It uses a path-vector algorithm to determine the best path to install in the IP routing table and use for IP forwarding.

Recap of BGP route-reflector advertisement rules:

  • An RR client can send routes to an RR client.
  • An RR client can send routes to a non-RR client.
  • A non-RR client cannot send routes to a non-RR client.

One drawback to the default BGP behavior is that it only advertises the best route. When a BGP Route Reflector receives multiple paths to the same destination, it will advertise only one of those routes to its neighbors.

This can limit visibility in the network and affect the best-path selection used for hot-potato routing, where you want traffic to leave your AS as quickly as possible. In addition, not all exit paths from an AS are advertised to all peers, effectively hiding (not advertising) some paths out of the network.

The diagram below displays the default BGP behavior: the RR receives two routes, from PE2 and PE3, for destination Z; due to the BGP best-path mechanism, it advertises only one of those paths to PE1. 

Diagram: Route Reflector – Default.

In certain designs, you could advertise the destination CE prefix with different Route Distinguishers (RDs), creating two instances of the same destination prefix. This would allow the RR to send two paths to PE1.

Diverse BGP path distribution

Another new feature is diverse BGP path distribution, where you create a shadow BGP session to the RR. It is easy to deploy, and the diverse iBGP session announces the second-best path. Shadow BGP sessions are especially useful in virtualized deployments, where you can create another BGP session to a VM acting as a route reflector. The VM can then be scaled out in a virtualized environment, creating numerous BGP sessions and allowing the advertisement of multiple paths for each destination prefix.

Diagram: Route Reflector – Shadow Sessions.

BGP Add-path 

A special extension to BGP known as “Add Paths” allows BGP speakers to propagate and accept multiple paths for the same prefix. The BGP Add-Path feature will signal diverse paths, so you don’t need to create shadow BGP sessions. There is a requirement that all Add-Path receiver BGP routers must support the Add-Path capability.

There are two flavors of the Add-Path capability: Add-n-path and Add-all-path. With "Add-n-path," the route reflector adds two or three paths, depending on the IOS version. With "Add-all-path," the route reflector performs the primary best-path computation (only on the first path) and then sends all paths to the BR/PE. This is useful for large ECMP load balancing, where you need multiple existing paths for hot-potato routing.
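The behavioral difference can be sketched as two selection functions. The two-key sort below is a drastic simplification of the real BGP best-path algorithm, used only to show that Add-n-path advertises the top n paths instead of a single winner.

```python
def best_path_only(paths: list) -> list:
    # Classic route-reflector behavior: advertise a single best path.
    return sorted(paths, key=lambda p: (p["as_path_len"], p["igp_metric"]))[:1]

def add_n_paths(paths: list, n: int = 2) -> list:
    # Add-n-path behavior: advertise the n best paths so that PE routers
    # keep diverse exits for ECMP and fast failover.
    return sorted(paths, key=lambda p: (p["as_path_len"], p["igp_metric"]))[:n]

paths = [
    {"next_hop": "PE2", "as_path_len": 2, "igp_metric": 10},
    {"next_hop": "PE3", "as_path_len": 2, "igp_metric": 20},
]
print([p["next_hop"] for p in best_path_only(paths)])  # ['PE2']
print([p["next_hop"] for p in add_n_paths(paths)])     # ['PE2', 'PE3']
```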

Diagram: BGP Add-Path.

Source packet routing

Another interesting draft the IETF is working on is Source Packet Routing (spring). Source packet routing is the ability of a node to specify the forwarding path. As a packet arrives in the network, the edge device looks at the application, determines what it needs, and prescribes the packet's path through the network. Segment routing leverages the MPLS data plane, i.e., the push, swap, and pop label operations, without needing LDP or RSVP-TE. This avoids holding millions of labels in the LDP database or TE LSPs in the network.

Diagram: Application Controls – Network Delivers.

The complexity and state are now isolated at the network's edges, and the middle nodes only swap labels. The source routing is based on the notion of a 32-bit segment that can represent any instruction, such as a service, a context, or an IGP-based forwarding construct. The result is an ordered chain of topological and service instructions, where the ingress node pushes the segment list onto the packet.
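Here is a toy model of that behavior: the ingress pushes the ordered segment list, and transit nodes consume only their own segment while keeping no per-flow state. The segment values are arbitrary integers chosen for illustration, not allocated SIDs.

```python
def ingress_push(packet: dict, segment_list: list) -> dict:
    # The ingress edge node pushes the ordered list of segments
    # (topological and service instructions) onto the packet.
    packet["segments"] = list(segment_list)
    return packet

def transit_process(packet: dict, my_sid: int) -> dict:
    # A transit node that owns the active segment pops it (the "NEXT"
    # operation); otherwise it forwards the packet unchanged. Either way
    # it keeps no per-flow state, which is the point of source routing.
    if packet["segments"] and packet["segments"][0] == my_sid:
        packet["segments"].pop(0)
    return packet

pkt = ingress_push({"payload": b"..."}, [16001, 16005, 16009])
pkt = transit_process(pkt, 16001)
print(pkt["segments"])  # [16005, 16009]: the rest of the source route remains
```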

 

Prefix Hijacking in BGP

BGP hijacking revolves around locating an ISP that is not filtering advertisements or whose misconfiguration makes it susceptible to a man-in-the-middle attack. Once such an ISP is located, an attacker can advertise any prefix they want, diverting some or all traffic from the real source towards the attacker.

In February 2008, a large portion of YouTube’s address space was redirected to Pakistan when the Pakistan Telecommunication Authority ( PTA ) decided to restrict access to YouTube.com inside the country but accidentally blackholed the route in the global BGP table.

These events and others have led the Secure Inter-Domain Routing (SIDR) working group to address the following two vulnerabilities in BGP:

  • Is the AS authorized to originate an IP prefix?
  • Is the AS path represented in the route the same as the path through which the NLRI traveled?

This lockdown of BGP has three solution components:

| Component | Key points |
|---|---|
| RPKI infrastructure | An offline repository of verifiable secure objects based on public-key cryptography. Follows the resource (IPv4/v6 + ASN) allocation hierarchy to provide "right of use." |
| BGP secure origin AS | Validates only the origin AS of a BGP UPDATE. Solves the most frequent incidents. No changes to BGP and no router hardware impact. Standardization is almost finished, with running code. |
| BGP path validation | The BGPSEC proposal, under development at the IETF. Requires forward-signing the AS-PATH attribute. Requires changes to BGP and possibly to routers. |

The roll-out and implementation should be gradual, creating islands of trust worldwide. These islands of trust will eventually interconnect, making BGP more secure.

The table below displays the RPKI deployment state:

| RIR | Total | Valid | Invalid | Unknown | Accuracy | RPKI Adoption Rate |
|---|---|---|---|---|---|---|
| AFRINIC | 100% | 0.44% | 0.42% | 99.14% | 51.49% | 0.86% |
| APNIC | 100% | 0.22% | 0.24% | 99.5% | 48.32% | 0.46% |
| ARIN | 100% | 0.4% | 0.14% | 99.46% | 74.65% | 0.54% |
| LACNIC | 100% | 17.84% | 2.01% | 80.15% | 89.87% | 19.85% |
| RIPE NCC | 100% | 6.7% | 0.59% | 92.71% | 91.92% | 7.29% |

Cloud Enhancements – The Intercloud

Today's clouds have moved well beyond the initial hype, and applications are now offered as on-demand services (anything-as-a-service [XaaS]). These services deliver significant cost savings, and the cloud transition is shaping up to be as powerful as the previous one: the Internet. The Intercloud and the Internet of Things are the two big clouds of the future.

Currently, the cloud market is driven by two spaces – the public cloud ( off-premise ) and the private cloud (on-premise). The intercloud takes the concept of cloud much further and attempts to connect multiple public clouds. A single application that could integrate services and workloads from ten or more clouds would create opportunities and potentially alter the cloud market landscape significantly. Hence, it is important to know and understand the need for cloud migration and its related problems.

We are already beginning to see signs of this in the current market. Various applications, such as Spotify and Google Maps, authenticate unregistered users with their Facebook credentials. In another use case, a cloud IaaS provider could divert incoming workload to another provider if it lacks the resources to serve the incoming requests, essentially cloud-bursting from provider to provider. It would also make economic sense to move workloads and services between cloud providers based on cooling costs (follow the moon), or to dynamically move workloads between providers so they sit closest to the active user base (follow the sun).

The following diagram displays a dynamic workload migration between two cloud companies.

Diagram: Intercloud.

  • A: Cloud 1 finds Cloud 2 (naming, presence).
  • B: Cloud 1 trusts Cloud 2 (certificates, TrustSec).
  • C: Clouds 1 and 2 negotiate (security, policy).
  • D: Cloud 1 sets up Cloud 2 (placement, deployment).
  • E: Cloud 1 sends the VM to Cloud 2, where it runs (addressing, configuration).

The Intercloud concept was difficult to achieve with previous versions of vSphere because of the latency restrictions vMotion needs to operate efficiently. vSphere 6 can now tolerate 100 ms of RTT.

InterCloud is still a conceptual framework, and the following questions must be addressed before it can be moved from concept to production.

1) Intercloud security

2) Intercloud SLA management

3) Interoperability across cloud providers.

Cisco’s One Platform Kit (onePK)

The One Platform Kit is Cisco's answer to Software-Defined Networking. It aims to bring simplicity and agility to a programmable network. It is a set of APIs, driven by programming languages such as C and Java, that are used to program the network. We currently have ways to program the network with EEM applets but lack an automation tool that can program multiple devices simultaneously. It's the same with Performance Routing (PfR): PfR can program and traffic-engineer the network by remotely changing metrics, but the decisions are still local and not controller-based.

Traffic engineering

One useful element of Cisco's One Platform Kit is its ability to perform "off-box" traffic engineering, i.e., the computation is made outside the local routing device. It allows you to create route paths throughout the network without relying on default routing protocol behavior. For example, cost is the default metric for route selection in OSPF, and this cannot be changed, which makes routing decisions very static. Cisco's One Platform Kit (onePK), by contrast, allows you to calculate routes using different variables you set, giving you complete path control.

Cisco Switch Virtualization Nexus 1000v

Virtualization has become integral to modern data centers in today's digital landscape. With the increasing demand for agility, flexibility, and scalability, organizations are turning to virtual networking solutions to meet their evolving needs. One such solution is the Nexus 1000v, a virtual network switch offering comprehensive features and functionalities. In this blog post, we will delve into the world of the Nexus 1000v, exploring its key features, benefits, and use cases.

The Nexus 1000v is a distributed virtual switch that operates at the hypervisor level, providing advanced networking capabilities for virtual machines (VMs). It is designed to integrate seamlessly with VMware vSphere, offering enhanced network visibility, control, and security.

Cisco Switch Virtualization is a revolutionary concept that allows network administrators to create multiple virtual switches on a single physical switch. By abstracting the network functions from the hardware, it provides enhanced flexibility, scalability, and efficiency. With Cisco Switch Virtualization, businesses can maximize resource utilization and simplify network management.

At the forefront of Cisco's Switch Virtualization portfolio is the Nexus 1000v. This powerful platform brings the benefits of virtualization to the data center, enabling seamless integration between virtual and physical networks. By extending Cisco's renowned networking capabilities into the virtual environment, Nexus 1000v empowers organizations to achieve consistent policy enforcement, enhanced security, and simplified operations.

The Nexus 1000v boasts a wide range of features that make it a compelling choice for network administrators. From advanced network segmentation and traffic isolation to granular policy control and deep visibility, this platform has it all. By leveraging the power of Cisco's Virtual Network Services (VNS), organizations can optimize their network infrastructure, streamline operations, and deliver superior performance.

Deploying Cisco Switch Virtualization, specifically the Nexus 1000v, requires careful planning and consideration. Organizations must evaluate their network requirements, ensure compatibility with existing infrastructure, and adhere to best practices. From designing a scalable architecture to implementing proper security measures, attention to detail is crucial to achieve a successful deployment.

To truly understand the impact of Cisco Switch Virtualization, it's essential to explore real-world use cases and success stories. From large enterprises to service providers, organizations across various industries have leveraged the power of Nexus 1000v to transform their networks. This section will highlight a few compelling examples, showcasing the versatility and value that Cisco Switch Virtualization brings to the table.

Highlights: Cisco Switch Virtualization Nexus 1000v

Hypervisor and vSphere Introduction

A hypervisor, also known as a virtual machine manager, allows multiple operating systems to run on a single hardware host. Each guest operating system uses the host's processor, memory, and other resources, while the hypervisor controls these resources and allocates what each operating system needs. The guest operating systems, or virtual machines, run on top of the hypervisor.

Designed specifically for integration with VMware vSphere environments, the Cisco Nexus 1000V Series Switch runs Cisco NX-OS software and delivers enterprise-class performance and scalability. The Nexus 1000V runs within the VMware ESX hypervisor. With the Cisco Nexus 1000V Series, you can take advantage of Cisco VN-Link server virtualization technology to gain:

  • Policy-based virtual machine (VM) connectivity
  • Mobile VM security
  • Network policy
  • A nondisruptive operational model for your server virtualization and networking teams

As with physical servers, virtual servers can be configured with the same network configuration, security policy, diagnostic tools, and operational models as physical servers. The Cisco Nexus 1000V Series is also compatible with VMware vSphere, vCenter, ESX, and ESXi.

A brief overview of the Nexus 1000V system

There are two primary components of the Cisco Nexus 1000V Series switch:

  • VEM (Virtual Ethernet Module): Executes inside hypervisors
  • VSM (External Virtual Supervisor Module): Manages VEMs

The Nexus 1000v implements the generic concept of a Cisco Distributed Virtual Switch (DVS). The Cisco Nexus 1000V Virtual Ethernet Module (VEM) executes within VMware ESX or ESXi, and its application programming interface (API) is the VMware vNetwork Distributed Switch (vDS) API.

By integrating the API with VMware VMotion and Distributed Resource Scheduler (DRS), advanced networking capabilities can be provided to virtual machines. In the VEM, Layer 2 switching and advanced networking functions are performed based on configuration information from the VSM:

Diagram: Nexus switch virtualization.

**Virtual routing and forwarding**

Virtual routing and forwarding forms the basis of this stack. Network virtualization comes in two primary forms: 1) one-to-many and 2) many-to-one. The "one-to-many" method segments one physical network into multiple logical segments, whereas the "many-to-one" method consolidates multiple physical devices into one logical entity. By definition, they seem to be opposites, yet both fall under the umbrella of network virtualization.

Before you proceed, you may find the following posts helpful:

  1. Container Based Virtualization
  2. Virtual Switch
  3. What is VXLAN
  4. Redundant Links
  5. WAN Virtualization
  6. What Is FabricPath

Network virtualization

Before we get stuck into Cisco virtualization, let us address some basics. Multiple virtual endpoints can share the same physical network even though they belong to different customers, so the communication between these endpoints must be isolated. In other words, the network is a resource, too, and network virtualization is the technology that enables the sharing of a common physical network infrastructure.

Virtualization uses software to simulate traditional hardware platforms and create virtual software-based systems. For example, virtualization allows specialists to construct a single virtual network or partition a physical network into multiple virtual networks.

Cisco Switch Virtualization: Logical segmentation: One-to-many

With one-to-many network virtualization for the Cisco switch virtualization design, a single physical network is logically segmented into multiple virtual networks. For example, each virtual network could correspond to a user group or a specific security function.

End-to-end path isolation requires the virtualization of networking devices and their interconnecting links. VLANs have been traditionally used, and hosts from one user group are mapped to a single VLAN. To extend the path across multiple switches at Layer 2, VLAN tagging (802.1Q) can carry VLAN information between switches. These VLAN trunks were created to transport multiple VLANs over a single Ethernet interface.
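To make the tagging concrete, here is a minimal sketch using the Scapy packet library (an assumption: Scapy is installed, e.g. via pip install scapy; the MAC and IP addresses are invented for illustration). It builds two frames for the same trunk that differ only in their 802.1Q VLAN ID, which is all a receiving switch needs to keep the user groups separate.

```python
# Two frames sharing one physical trunk, separated only by the 802.1Q tag
from scapy.all import Ether, Dot1Q, IP

# Host in VLAN 101 sending across the trunk between Switch A and Switch B
vlan101_frame = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / \
    Dot1Q(vlan=101) / IP(src="10.1.1.10", dst="10.1.1.20")

# Host in VLAN 201 on the same physical wire
vlan201_frame = Ether(src="00:00:00:cc:cc:cc", dst="00:00:00:dd:dd:dd") / \
    Dot1Q(vlan=201) / IP(src="10.2.1.10", dst="10.2.1.20")

# The receiving switch demultiplexes purely on the 12-bit VLAN ID in the tag
for frame in (vlan101_frame, vlan201_frame):
    print(frame[Dot1Q].vlan, frame.summary())
```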

The diagram below displays two independent VLANs, VLAN201 and VLAN101. These VLANs can share one physical wire to provide L2 reachability between hosts connected to Switch B and Switch A via Switch C, but they remain separate entities.

Nexus1000v: The operation

VLANs are sufficient for small Layer 2 segments. However, today’s networks will likely have a mix of Layer 2 and Layer 3 routed networks. In this case, Layer 2 VLANs alone are insufficient because you must extend the Layer 2 isolation over a Layer 3 device. This can be achieved with Virtual Routing and Forwarding ( VRF ), the next step in Cisco switch virtualization. A VRF instance logically carves a Layer 3 device into several isolated, independent L3 devices. VRFs configured on the same device cannot communicate directly without explicit configuration.
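The isolation property is easy to model. The following Python sketch (a toy model, not NX-OS or any Cisco API; all names are hypothetical) shows one router holding several independent routing tables, where a lookup never crosses from one VRF to another:

```python
# Toy model of VRF isolation: one router, several independent routing tables
import ipaddress

class Router:
    def __init__(self):
        self.vrfs = {}  # vrf name -> list of (prefix, next_hop)

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, []).append(
            (ipaddress.ip_network(prefix), next_hop))

    def lookup(self, vrf, dest):
        # Longest-prefix match restricted to the given VRF's table only
        addr = ipaddress.ip_address(dest)
        candidates = [(p, nh) for p, nh in self.vrfs.get(vrf, [])
                      if addr in p]
        if not candidates:
            return None  # no leaking into other VRFs
        return max(candidates, key=lambda c: c[0].prefixlen)[1]

r = Router()
r.add_route("red", "10.0.0.0/24", "192.168.1.1")
r.add_route("blue", "10.0.0.0/24", "172.16.1.1")  # same prefix, different VRF
print(r.lookup("red", "10.0.0.5"))     # 192.168.1.1
print(r.lookup("blue", "10.0.0.5"))    # 172.16.1.1
print(r.lookup("yellow", "10.0.0.5"))  # None: routes never leak between VRFs
```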

The diagram below displays one physical Layer 3 router with three VRFs: VRF Yellow, VRF Red, and VRF Blue. These virtual routing and forwarding instances are completely separated; without explicit configuration, routes in one virtual routing and forwarding instance cannot be leaked to another.

Virtual routing and forwarding

The virtualization of the interconnecting links depends on how the virtual routers are connected. If they are physically ( directly ) connected, you could use a technology known as VRF-lite to separate traffic and 802.1Q to label the data plane. This is known as hop-by-hop virtualization.

However, it’s possible to run into scalability issues when the number of devices grows. This design is typically used when you connect virtual routing and forwarding back to back, i.e., no more than two devices.

When the virtual routers are connected over multiple hops through an IP cloud, you can use generic routing encapsulation ( GRE ) or Multiprotocol Label Switching ( MPLS ) virtual private networks.

GRE is probably the simpler of the Layer 3 methods, and it can work over any IP core. GRE can encapsulate the contents and transport them over a network with the network unaware of the packet contents. Instead, the core will see the GRE header, virtualizing the network path.

Cisco Switch Virtualization: The additional overhead

When designing Cisco switch virtualization, you need to consider the additional overhead. GRE adds 24 bytes of overhead (a 20-byte outer IPv4 header plus a 4-byte GRE header), so the encapsulating router may have to fragment datagrams that would otherwise exceed the outgoing interface MTU. To avoid fragmentation, configure the MTU, MSS, and Path MTU discovery parameters correctly on the tunnel endpoints and intermediate routers.
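You can sanity-check the 24-byte figure and its knock-on effect on the TCP MSS with a few lines of Python. The sketch below uses Scapy (assumed installed; addresses are invented) to measure the added outer IPv4 and GRE headers, then derives the usual tunnel MTU and MSS values for a 1500-byte egress interface:

```python
# Verifying GRE overhead: 20-byte outer IPv4 header + 4-byte GRE header
from scapy.all import IP, GRE, ICMP, raw

inner = IP(src="10.1.1.1", dst="10.2.2.2") / ICMP()
outer = IP(src="192.0.2.1", dst="198.51.100.1") / GRE() / inner

overhead = len(raw(outer)) - len(raw(inner))
print(overhead)  # 24

# Practical consequence for a 1500-byte egress MTU:
tunnel_mtu = 1500 - overhead     # 1476-byte tunnel IP MTU
tcp_mss = tunnel_mtu - 20 - 20   # minus inner IP and TCP headers -> 1436
print(tunnel_mtu, tcp_mss)
```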

GRE tunnels are typically static: you only need to configure the tunnel endpoints, and the tunnel stays up as long as those endpoints are reachable. However, more recent designs can establish GRE tunnels dynamically.

GRE over IPsec

MPLS/VPN, on the other hand, is a different beast. It requires signaling to distribute labels and build an end-to-end Label Switched Path ( LSP ). Label distribution can be done with BGP+label, LDP, or RSVP. Unlike GRE tunnels, MPLS VPNs do not have to manage multiple point-to-point tunnels to provide a full mesh of connectivity; a single labeled core provides the connectivity, and the labels carried on packets provide the traffic separation.
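For contrast with GRE, here is a hedged Scapy sketch (using the scapy.contrib.mpls module that ships with Scapy; the labels and addresses are invented for illustration) of a labeled VPN packet: an outer transport label steers the packet along the LSP, while a bottom-of-stack VPN label separates customer traffic at the egress PE.

```python
# Sketch of an MPLS VPN label stack: transport label + VPN label
from scapy.all import Ether, IP
from scapy.contrib.mpls import MPLS

# Outer label 100 steers the packet along the LSP; inner label 200 identifies
# the customer VPN at the egress PE. s=1 marks the bottom of the stack.
pkt = Ether() / \
      MPLS(label=100, s=0, ttl=64) / \
      MPLS(label=200, s=1, ttl=64) / \
      IP(src="10.0.0.1", dst="10.0.0.2")

pkt.show()
```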

Cisco switch virtualization: Many-to-one

Many-to-one network consolidation refers to grouping two or more physical devices into one logical device. Examples of this Cisco switch virtualization technology include the Virtual Switching System ( VSS ), stackable switches, and Nexus vPC. Combining many physical devices into one logical entity lets STP view the group as a single switch, allowing all ports to be active; by default, STP would block the redundant path.

Software-defined networking takes this concept further: it abstracts the entire network into a single virtual switch. On traditional routers, the control and data planes live on the same device, yet with SDN they are decoupled. The control plane moves to a policy-driven controller, while the data plane remains local on the OpenFlow-enabled switch.

Network Virtualization

Server and network virtualization presented the challenge of multiple VMs sharing a single network physical port, such as a network interface controller ( NIC ). The question then arises: how do I link multiple VMs to the same uplink? How do I provide path separation? Today’s networks need to virtualize the physical port and allow the configuration of policies per port.


NIC-per-VM design

One way to do this is to have a NIC-per-VM design where each VM is assigned a single physical NIC, and the NIC is not shared with any other VM. The hypervisor, aka virtualization layer, would be bypassed, and the VM would access the I/O device directly. This is known as VMDirectPath.

This direct path, or pass-through, can improve performance for hosts that utilize high-speed I/O devices, such as 10 Gigabit Ethernet. However, the loss of flexibility, including the ability to move VMs between hosts, offsets the higher performance benefits.

Virtual-NIC-per-VM in Cisco UCS (Adapter FEX)

Another way to do this is to create multiple logical NICs on the same physical NIC, such as Virtual-NIC-per-VM in Cisco UCS (Adapter FEX). These logical NICs are assigned directly to VMs, and traffic gets marked with a vNIC-specific VN-Tag in hardware (the tagging scheme that formed the basis of IEEE 802.1BR).

The actual VN-Tag tagging is implemented in the server NICs so that you can clone the physical NIC in the server to multiple virtual NICs. This technology provides faster switching and enables you to apply a rich set of management features to local and remote traffic.

Software Virtual Switch

The third option is to implement a virtual software switch in the hypervisor. For example, VMware introduced virtual switching compatibility with its vSphere ( ESXi ) hypervisor, called vSphere Distributed Switch ( VDS ). Initially, they introduced a local L2 software switch, which was soon phased out due to a lack of distributed architecture.

Data physically moves between the servers through the external network, but the control plane abstracts this movement to look like one large distributed switch spanning multiple servers. This approach has a single management and configuration point, similar to stackable switches – one control plane with many physical data forwarding paths.
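The “single control plane, many forwarding paths” idea can be sketched in a few lines of Python. This is a toy model, not the actual vDS or Nexus 1000V API, and all names are hypothetical: port profiles live in one central definition, and each host agent applies them, so a VM receives identical policy no matter which host it lands on.

```python
# Toy model of a distributed virtual switch: central policy, per-host agents
port_profiles = {
    "web": {"vlan": 101, "security": "web-acl", "qos": "silver"},
    "db":  {"vlan": 201, "security": "db-acl",  "qos": "gold"},
}

class HostAgent:
    """Stands in for the per-hypervisor forwarding component."""
    def __init__(self, name):
        self.name = name
        self.ports = {}  # vm name -> applied profile

    def attach_vm(self, vm, profile_name):
        # Every host applies the identical, centrally defined profile
        self.ports[vm] = port_profiles[profile_name]

hosts = [HostAgent(f"esx-{i}") for i in range(3)]
hosts[0].attach_vm("vm-web-1", "web")
hosts[2].attach_vm("vm-web-2", "web")  # same policy on a different host
print(hosts[0].ports["vm-web-1"] == hosts[2].ports["vm-web-2"])  # True
```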

The data does not move through a parent partition but logically connects directly to the network interface through local vNICs associated with each VM.

Network virtualization and Nexus 1000v ( Nexus 1000 )

The VDS introduced by VMware lacked advanced networking features, which led Cisco to introduce the Nexus 1000V software-based switch. The Nexus 1000v is a multi-cloud, multi-hypervisor, and multi-services distributed virtual switch. Its function is to enable communication between VMs.

Nexus1000v: Virtual Distributed Switch.

**Nexus 1000 components: VEM and VSM**

The Nexus 1000v has two essential components:

  1. The Virtual Supervisor Module ( VSM )
  2. The Virtual Ethernet Module ( VEM ).

Compared to a physical switch, the VSM could be viewed as the supervisor, setting up the control plane functions for the data plane to forward efficiently, and the VEM as the physical line cards that do all the packet forwarding. The VEM is the software component that runs within the hypervisor kernel. It handles all VM traffic, including inter-VM frames and Ethernet traffic between a VM and external resources.

The VSM runs its own NX-OS code and handles the control and management planes, integrating into a cloud manager such as VMware vCenter. You can have two VSMs for redundancy. Both modules remain constantly synchronized with unicast VSM-to-VSM heartbeats to provide stateful failover in the event of an active VSM failure.

The two available communication options for VSM to VEM are:

  1. Layer 2 control mode: The VSM control interface shares the same VLAN with the VEM.
  2. Layer 3 control mode: The VEM and the VSM are in different IP subnets.

The VSM also uses heartbeat messages to detect a loss of connectivity between it and the VEM. However, the VEM does not depend on connectivity to the VSM to perform its data plane functions and will continue forwarding packets if the VSM fails.

With Layer 3 control mode, the heartbeat messages are encapsulated in a GRE envelope.
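The heartbeat behavior is worth illustrating because it explains the failure semantics: losing the VSM degrades management, not forwarding. Below is a toy Python sketch (not NX-OS code; the 6-second miss threshold is borrowed from the heartbeat timing discussed later in this post) of a forwarding module that flags supervisor loss but keeps forwarding from its last-known configuration:

```python
# Toy heartbeat-based failure detection between supervisor and forwarder
import time

HEARTBEAT_TIMEOUT = 6.0  # seconds without a heartbeat before declaring loss

class ForwardingModule:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.supervisor_reachable = True

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()
        self.supervisor_reachable = True

    def tick(self):
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.supervisor_reachable = False  # control plane lost...

    def forward(self, frame):
        # ...but the data plane keeps working from its last-known config
        return f"forwarded {frame} (supervisor up: {self.supervisor_reachable})"

vem = ForwardingModule()
vem.tick()
print(vem.forward("frame-1"))
```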

Nexus 1000 and VSM best practices

  • L2 control is recommended for new installations.
  • Use MAC pinning instead of LACP.
  • Packet, Control, and Management in the same VLAN.
  • Do not use VLAN 1 for Control and Packet.
  • Use 2 x VSM for redundancy. 

The max latency between VSM and VEM is ten milliseconds. Therefore, a VSM can be placed outside the data center if you have a high-quality DCI link, and the VEM can still be controlled.

Nexus 1000v InterCloud – Cisco switch virtualization

A vital element of the Nexus 1000v is its use case for hybrid cloud deployments and its ability to place workloads in private and public environments via a single pane of glass. In addition, Nexus 1000v InterCloud addresses the main challenges with hybrid cloud deployments, such as security concerns and control/visibility challenges within the public cloud.

The Nexus 1000v InterCloud works with the Cisco Prime Network Services Controller (PNSC) to create a secure L2 extension between the private data center and the public cloud.

This L2 extension is based on the Datagram Transport Layer Security ( DTLS ) protocol and allows you to securely transfer VMs and network services over a public IP backbone. DTLS is derived from the TLS protocol and provides communications privacy for datagram protocols, so all data in motion is cryptographically isolated and encrypted.

Nexus 1000 and Hybrid Cloud.

 

Nexus 1000v Hybrid Cloud Components 

| Component | Description |
|---|---|
| Cisco Prime Network Services Controller (PNSC) for InterCloud | A VM that provides a single pane of glass to manage all functions of InterCloud |
| InterCloud VSM | Manages port profiles for VMs in the InterCloud infrastructure |
| InterCloud Extender | Provides secure connectivity to the InterCloud Switch in the provider cloud; installed in the private data center |
| InterCloud Switch | A VM in the provider data center with secure connectivity to the InterCloud Extender in the enterprise cloud and to the VMs in the provider cloud |
| Cloud Virtual Machines | VMs in the public cloud running workloads |

Prerequisites

| Port | Purpose |
|---|---|
| TCP 80 (HTTP) | Access from PNSC for AWS calls and communication with InterCloud VMs in the provider cloud |
| TCP 443 (HTTPS) | Access from PNSC for AWS calls and communication with InterCloud VMs in the provider cloud |
| TCP 22 (SSH) | From PNSC to InterCloud VMs in the provider cloud |
| UDP 6644 | DTLS data tunnel |
| TCP 6644 | DTLS control tunnel |

VXLAN – Virtual Extensible LAN

The requirement for applications on demand has led to an increased number of required VLANs for cloud providers. The standard 12-bit identifier, which provides only about 4000 usable VLANs, proved to be a limiting factor in multi-tier, multi-tenant environments, and engineers started to run out of isolation options.

VXLAN addresses this with a 24-bit identifier, offering 16 million logical networks, and it allows segments to cross Layer 3 boundaries. Its MAC-in-UDP encapsulation also lets switches hash the UDP header and efficiently distribute packets across the links of a port channel.
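The arithmetic behind the jump is simple and worth seeing once:

```python
# Address-space arithmetic behind the move from VLAN IDs to VXLAN VNIs
vlan_id_bits = 12
vxlan_vni_bits = 24
print(2 ** vlan_id_bits)    # 4096 VLAN IDs (4094 usable in practice)
print(2 ** vxlan_vni_bits)  # 16,777,216 VXLAN segments
```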

VXLAN operations

VXLAN works like a Layer 2 bridge ( flood and learn ): the VEM does all the heavy lifting, learning the VM source MACs and host VXLAN (VTEP) IPs, and encapsulating traffic according to the port profile to which the VM belongs. Broadcast, multicast, and unknown unicast traffic are sent as multicast.

At the same time, unicast traffic is encapsulated and shipped directly to the destination host’s VXLAN IP, aka the destination VEM. Enhanced VXLAN adds VXLAN MAC distribution and ARP termination, making forwarding more optimal.
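The MAC-in-UDP encapsulation is easy to see with Scapy (assumed installed; the VTEP addresses and VNI are invented for illustration). The original Ethernet frame is carried intact inside UDP port 4789 between VTEP IPs, so the Layer 3 core only ever routes the outer headers:

```python
# Sketch of VXLAN MAC-in-UDP encapsulation between two VTEPs
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original L2 frame between two VMs on the same logical segment
inner_frame = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / \
    IP(src="10.1.1.10", dst="10.1.1.20")

# Outer headers between the source and destination VTEPs (e.g., two VEMs);
# the core routes on these and never inspects the inner frame
vxlan_pkt = IP(src="192.0.2.1", dst="198.51.100.1") / \
    UDP(sport=49152, dport=4789) / \
    VXLAN(vni=5000) / inner_frame

vxlan_pkt.show()
```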

VXLAN Mode Packet Functions

| Packet | VXLAN (multicast mode) | Enhanced VXLAN (unicast mode) | Enhanced VXLAN MAC Distribution | Enhanced VXLAN ARP Termination |
|---|---|---|---|---|
| Broadcast/Multicast | Multicast encapsulation | Replication plus unicast encap | Replication plus unicast encap | Replication plus unicast encap |
| Unknown unicast | Multicast encapsulation | Replication plus unicast encap | Drop | Drop |
| Known unicast | Unicast encapsulation | Unicast encap | Unicast encap | Unicast encap |
| ARP | Multicast encapsulation | Replication plus unicast encap | Replication plus unicast encap | VEM ARP reply |

vPath – Service chaining

Intelligent Policy-based traffic steering through multiple network services.

vPath allows you to intelligently traffic steer VM traffic to virtualized devices. It intercepts and redirects the initial traffic to the service node. Once the service node performs its policy function, the result is cached, and the local virtual switch treats the subsequent packets accordingly. In addition, it enables you to tie services together to push the VM through each service as required. Previously, if you wanted to tie services together in a data center, you needed to stitch the VLANs together, which was limited by design and scale.
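A toy flow-cache model (plain Python, not Cisco vPath itself; the policy function is a made-up stand-in for a virtual firewall) captures the mechanism: the first packet of a flow is punted to the service node, the verdict is cached, and subsequent packets of that flow take the fast path in the local switch.

```python
# Toy model of flow-cached service steering (vPath-style fast path)
flow_cache = {}  # 5-tuple -> cached verdict ("permit" / "deny")

def service_node_inspect(flow):
    # Stand-in for a firewall policy decision made on the first packet
    return "deny" if flow[3] == 23 else "permit"  # e.g., block telnet

def vswitch_forward(flow, packet):
    if flow not in flow_cache:
        flow_cache[flow] = service_node_inspect(flow)  # slow path, once
    verdict = flow_cache[flow]                          # fast path after that
    return f"{verdict}: {packet}"

flow = ("10.1.1.10", "10.1.1.20", 40000, 80, "tcp")
print(vswitch_forward(flow, "SYN"))  # consults the service node
print(vswitch_forward(flow, "ACK"))  # served from the flow cache
```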

Nexus and service chaining

vPath 3.0 has been submitted to the IETF for standardization, allowing service chaining between vPath and non-vPath network services. It enables vPath service chaining across multiple physical devices and supports multiple hypervisors.

License Options 

| Nexus 1000 Essential Edition | Nexus 1000 Advanced Edition |
|---|---|
| Full Layer 2 feature set | All features of the Essential Edition |
| Security and QoS policies | VSG firewall |
| VXLAN virtual overlays | VXLAN Gateway |
| vPath-enabled virtual services | TrustSec SGA |
| Full monitoring and management capabilities | A platform for other Cisco DC extensions in the future |
| Free | $695 per CPU (MSRP) |

Nexus 1000 features and benefits

| Category | Features |
|---|---|
| Switching | L2 switching, 802.1Q tagging, VLAN, rate limiting (TX), VXLAN, IGMP snooping, QoS marking (CoS and DSCP), class-based WFQ |
| Security | Policy mobility, private VLANs with local PVLAN enforcement, access control lists, port security, Cisco TrustSec support, dynamic ARP inspection, IP Source Guard, DHCP snooping |
| Network services | Virtual Services Datapath (vPath) support for traffic steering and fast-path offload, leveraged by Virtual Security Gateway (VSG), vWAAS, and ASA1000V |
| Provisioning | Port profiles; integration with vC, vCD, SCVMM*, and BMC CLM; optimized NIC teaming with Virtual Port Channel – Host Mode |
| Visibility | VM migration tracking, VC plugin, NetFlow v9 with NDE, CDP v2, VM-level interface statistics, vTracker, SPAN and ERSPAN (policy-based) |
| Management | Virtual Center VM provisioning, vCenter plugin, Cisco LMS, DCNM, Cisco CLI, RADIUS, TACACS+, Syslog, SNMP (v1, v2, v3), hitless upgrade, SW installer |

Advantages and disadvantages of the Nexus 1000

| Advantages | Disadvantages |
|---|---|
| The Standard edition is free; you can upgrade to the enhanced version when needed | VEM and VSM internal communication is very sensitive to latency; its chatty nature may not suit inter-DC deployments |
| Easy and quick to deploy | The VSM–VEM and VSM (active)–VSM (standby) heartbeat time of 6 seconds makes it sensitive to network failures and congestion |
| Offers many rich network features unavailable on other distributed software switches | VEM over-dependency on the VSM reduces resiliency |
| Hypervisor agnostic | The VSM is required for vSphere HA, FT, and vMotion to work |
| Hybrid cloud functionality | |

**Closing Points on Cisco Nexus 1000v**

Virtual Ethernet Module (VEM):

The Nexus 1000v employs the Virtual Ethernet Module (VEM), which runs as a module inside the hypervisor. This allows for efficient and direct communication between VMs, bypassing the traditional reliance on the hypervisor networking stack.

Virtual Supervisor Module (VSM):

The Virtual Supervisor Module (VSM) serves as the control plane for the Nexus 1000v, providing centralized management and configuration. It enables network administrators to define policies, manage virtual ports, and monitor network traffic.

Policy-Based Virtual Network Management:

With the Nexus 1000v, administrators can define policies to manage virtual networks. These policies ensure consistent network configurations across multiple hosts, simplifying network management and reducing the risk of misconfigurations.

Advanced Security and Monitoring Capabilities:

The Nexus 1000v offers granular security controls, including access control lists (ACLs), port security, and dynamic host configuration protocol (DHCP) snooping. Additionally, it provides comprehensive visibility into network traffic, enabling administrators to monitor and troubleshoot network issues effectively.

Benefits of the Nexus 1000v:

Enhanced Network Performance:

By offloading network processing to the VEM, the Nexus 1000v minimizes the impact on the hypervisor, resulting in improved network performance and reduced latency.

Increased Scalability:

The distributed architecture of the Nexus 1000v allows for seamless scalability, ensuring that organizations can meet the growing demands of their virtualized environments.

Simplified Network Management:

With its policy-based approach, the Nexus 1000v simplifies network management tasks, enabling administrators to provision and manage virtual networks more efficiently.

Use Cases:

Data Centers:

The Nexus 1000v is particularly beneficial in data center environments where virtualization is prevalent. It provides a robust and scalable networking solution, ensuring optimal performance and security for virtualized workloads.

Cloud Service Providers:

Cloud service providers can leverage the Nexus 1000v to enhance their network virtualization capabilities, offering customers more flexibility and control over their virtual networks.

The Nexus 1000v is a powerful virtual network switch that provides advanced networking capabilities for virtualized environments. Its rich features, policy-based management approach, and seamless integration with VMware vSphere allow organizations to achieve enhanced network performance, scalability, and management efficiency. As virtualization continues to shape the future of data centers, the Nexus 1000v remains a valuable tool for optimizing virtual network infrastructures.

Summary: Cisco Switch Virtualization Nexus 1000v

Welcome to our blog post, where we dive into the world of Cisco Switch Virtualization, explicitly focusing on the Nexus 1000v. In this article, we will unravel the complexities surrounding switch virtualization, explore the key features of the Nexus 1000v, and understand its significance in modern networking environments.

Understanding Switch Virtualization

Switch virtualization is a technique that allows for creating multiple virtual switches on a single physical switch, enabling greater flexibility and efficiency in network management. Organizations can consolidate their infrastructure, reduce costs, and streamline network operations by virtualizing switches.

Introducing the Nexus 1000v

The Cisco Nexus 1000v is a powerful switch virtualization solution that extends the functionality of VMware environments. Unlike traditional virtual switches, it provides a more comprehensive set of features and advanced network control. It seamlessly integrates with VMware vSphere, offering enhanced visibility, security, and policy management.

Key Features of the Nexus 1000v

– Distributed Virtual Switch: The Nexus 1000v operates as a distributed virtual switch, distributing network intelligence across all hosts in the virtualized environment. This ensures consistent policies, simplified troubleshooting, and improved performance.

– Virtual Port Profiles: With virtual port profiles, administrators can define consistent network policies for virtual machines, irrespective of their physical location. This simplifies network provisioning and reduces the chances of misconfigurations.

– Network Analysis Module (NAM): The Nexus 1000v incorporates NAM, a robust monitoring and analysis tool that provides deep visibility into virtual network traffic. This enables administrators to quickly identify and resolve network issues, ensuring optimal performance.

Deployment Considerations

When planning to deploy the Nexus 1000v, it is essential to consider factors such as network architecture, compatibility with existing infrastructure, and scalability requirements. It is advisable to consult with Cisco experts or certified partners to ensure a smooth and successful implementation.

Conclusion:

In conclusion, the Cisco Nexus 1000v is a game-changer in switch virtualization. Its advanced features, seamless integration with VMware environments, and extensive network control make it an ideal choice for organizations seeking to optimize their network infrastructure. By understanding the fundamentals of switch virtualization and exploring Nexus 1000v’s capabilities, network administrators can unlock a world of possibilities in network management and performance.