SD WAN Overlay

In today's digital age, businesses rely on seamless and secure network connectivity to support their operations. Traditional Wide Area Network (WAN) architectures often struggle to meet the demands of modern companies due to their limited bandwidth, high costs, and lack of flexibility. A revolutionary SD-WAN (Software-Defined Wide Area Network) overlay has emerged to address these challenges, offering businesses a more efficient and agile network solution. This blog post will delve into SD-WAN overlay, exploring its benefits, implementation, and potential to transform how businesses connect.

SD-WAN employs the concepts of overlay networking. Overlay networking is a virtual network architecture that allows for the creation of multiple logical networks on top of an existing physical network infrastructure. It involves the encapsulation of network traffic within packets, enabling data to traverse across different networks regardless of their physical locations. This abstraction layer provides immense flexibility and agility, making overlay networking an attractive option for organizations of all sizes.

Scalability: One of the key advantages of overlay networking is its ability to scale effortlessly. By decoupling the logical network from the underlying physical infrastructure, organizations can rapidly deploy and expand their networks without disruption. This scalability is particularly crucial in cloud environments or scenarios where network requirements change frequently.

Security and Isolation: Overlay networks provide enhanced security by isolating different logical networks from each other. This isolation ensures that data traffic remains segregated and prevents unauthorized access to sensitive information. Additionally, overlay networks can implement advanced security measures such as encryption and access control, further fortifying network security.

Highlights: SD WAN Overlay

The Role of SD-WAN Overlays

SD-WAN overlay is a network architecture that enhances traditional WAN infrastructure by leveraging software-defined networking (SDN) principles. Unlike conventional WAN, where network management is done manually and requires substantial hardware investments, SD-WAN overlay centralizes network control and management through software. This enables businesses to simplify network operations and reduce costs by utilizing commodity internet connections alongside existing MPLS networks. 

SD-WAN, or Software-Defined Wide Area Network, is a technology that simplifies the management and operation of a wide area network. It abstracts the underlying network infrastructure and provides a centralized control plane for configuring and managing network services. SD-WAN overlay takes this concept further by introducing an additional virtualization layer, enabling the creation of multiple logical networks on top of the physical network infrastructure.

Overlay Types

  • Tunnel-Based Overlays

  • Segment-Based Overlays

  • Policy-Based Overlays

  • Hybrid Overlays

  • Cloud-Enabled Overlays

  • Internet-Based SD-WAN Overlay

  • MPLS-Based SD-WAN Overlay

  • Hybrid SD-WAN Overlay

So, what exactly is an SD-WAN overlay?

In simple terms, it is a virtual layer added to the existing network infrastructure. These network overlays connect different locations, such as branch offices, data centers, and the cloud, by creating a secure and reliable network.

1. Tunnel-Based Overlays:

One of the most common types of SD-WAN overlays is tunnel-based overlays. This approach encapsulates network traffic within a virtual tunnel, allowing it to traverse multiple networks securely. Tunnel-based overlays are typically implemented using IPsec or GRE (Generic Routing Encapsulation) protocols. They offer enhanced security through encryption and provide a reliable connection between the SD-WAN edge devices.

GRE over IPsec
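To make the encapsulation idea concrete, here is a minimal sketch using the Scapy packet library: an inner IP packet is wrapped in a GRE header and carried between two tunnel endpoints. The IPsec encryption step is omitted, and all addresses are hypothetical; this illustrates the general concept, not any vendor's implementation.

```python
# A minimal sketch (using Scapy) of how a tunnel-based overlay encapsulates
# traffic: an inner IP packet is wrapped in GRE and carried between two
# tunnel endpoints. In practice, the GRE packet would also be protected by
# IPsec; that encryption step is omitted here. All addresses are hypothetical.
from scapy.all import IP, GRE, ICMP

# Original packet between two private sites in the overlay
inner = IP(src="10.10.1.5", dst="10.20.1.9") / ICMP()

# Outer header between the SD-WAN edge devices (the underlay only sees this)
gre_tunnel_packet = IP(src="203.0.113.1", dst="198.51.100.1") / GRE() / inner

gre_tunnel_packet.show()   # inspect the encapsulation layers
```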

2. Segment-Based Overlays:

Segment-based overlays are designed to segment the network traffic based on specific criteria such as application type, user group, or location. This allows organizations to prioritize critical applications and allocate network resources accordingly. By segmenting the traffic, SD-WAN can optimize the performance of each application and ensure a consistent user experience. Segment-based overlays are particularly beneficial for businesses with diverse network requirements.

3. Policy-Based Overlays:

Policy-based overlays enable organizations to define rules and policies that govern the behavior of the SD-WAN network. These overlays use intelligent routing algorithms to dynamically select the most optimal path for network traffic based on predefined policies. By leveraging policy-based overlays, businesses can ensure efficient utilization of network resources, minimize latency, and improve overall network performance.
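The policy logic can be pictured as an ordered rule table that matches traffic attributes and states the intended forwarding behavior. The sketch below is a simplified, hypothetical illustration in Python; real SD-WAN platforms express these rules through controller policy, not application code.

```python
# Hypothetical sketch of a policy-based overlay: an ordered rule table
# where each rule matches traffic attributes and states the intended
# forwarding behavior. Rule names and values are invented for illustration.

policies = [
    {"match": {"app": "voice"},            "path": "mpls",     "fallback": "internet"},
    {"match": {"app": "saas", "dscp": 46}, "path": "internet", "fallback": "lte"},
    {"match": {},                          "path": "internet", "fallback": None},  # default rule
]

def apply_policy(flow: dict) -> dict:
    """Return the first policy whose match criteria are all satisfied by the flow."""
    for rule in policies:
        if all(flow.get(key) == value for key, value in rule["match"].items()):
            return rule
    return policies[-1]

flow = {"app": "voice", "src": "10.1.1.10", "dst": "172.16.0.5"}
print(apply_policy(flow)["path"])   # -> 'mpls'
```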

4. Hybrid Overlays:

Hybrid overlays combine the benefits of both public and private networks. This overlay allows organizations to utilize multiple network connections, including MPLS, broadband, and LTE, to create a robust and resilient network infrastructure. Hybrid overlays intelligently route traffic through the most suitable connection based on application requirements, network availability, and cost. By leveraging mixed overlays, businesses can achieve high availability, cost-effectiveness, and improved application performance.

5. Cloud-Enabled Overlays:

As more businesses adopt cloud-based applications and services, seamless connectivity to cloud environments becomes crucial. Cloud-enabled overlays provide direct and secure connectivity between the SD-WAN network and cloud service providers. These overlays ensure optimized performance for cloud applications by minimizing latency and providing efficient data transfer. Cloud-enabled overlays simplify the management and deployment of SD-WAN in multi-cloud environments, making them an ideal choice for businesses embracing cloud technologies.

Related: For additional pre-information, you may find the following helpful:

  1. Transport SDN
  2. SD WAN Diagram 
  3. Overlay Virtual Networking



SD-WAN Overlay

Key SD WAN Overlay Discussion Points:


  • WAN transformation.

  • The issues with traditional networking.

  • Introduction to Virtual WANs.

  • SD-WAN and SDN discussion.

  • SD-WAN overlay core features.

  • Drivers for SD-WAN.

Back to Basics: SD-WAN Overlay

Overlay Networking

Overlay networking is an approach to computer networking that involves building a layer of virtual networks on top of an existing physical network. This approach improves the underlying infrastructure’s scalability, performance, and security. It also allows for creating virtual networks that span multiple physical networks, allowing for greater flexibility in traffic routes.

At the core of overlay networking is the concept of virtualization. This involves separating the physical infrastructure from the virtual networks, allowing greater control over allocating resources. This separation also allows the creation of virtual network segments that span multiple physical networks. This provides an efficient way to route traffic, as well as the ability to provide additional security and privacy measures.

The diagram below displays a VXLAN overlay. Here, VXLAN creates the tunnel that allows Layer 2 extension across a Layer 3 core.

Overlay networking
Diagram: Overlay Networking with VXLAN
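For a concrete view of what the VXLAN tunnel adds, the following sketch uses the Scapy packet library to build a VXLAN-encapsulated frame. The addresses and VNI are hypothetical; the point is simply that the underlay only ever sees the outer IP/UDP headers.

```python
# A minimal sketch (using Scapy) of how a VXLAN overlay encapsulates a
# Layer 2 frame inside UDP/IP so it can cross a Layer 3 core.
# Addresses and the VNI below are hypothetical.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Original Layer 2 frame from a host in the overlay network
inner_frame = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / \
              IP(src="10.1.1.10", dst="10.1.1.20")

# Outer headers added by the VXLAN tunnel endpoint (VTEP);
# the Layer 3 underlay only sees the outer IP/UDP headers.
vxlan_packet = (
    IP(src="192.0.2.1", dst="192.0.2.2") /   # underlay (VTEP-to-VTEP) addresses
    UDP(sport=49152, dport=4789) /           # 4789 is the standard VXLAN port
    VXLAN(vni=5001) /                        # VNI identifies the logical segment
    inner_frame
)

vxlan_packet.show()   # display the full encapsulation stack
```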

Underlay network

A network underlay is a physical infrastructure that provides the foundation for a network overlay, a logical abstraction of the underlying physical network. The network underlay provides the physical transport of data between nodes, while the overlay provides logical connectivity.

The network underlay can comprise various technologies, such as Ethernet, Wi-Fi, cellular, satellite, and fiber optics. It is the foundation of a network overlay and essential for its proper functioning. It provides data transport and physical connections between nodes. It also provides the physical elements that make up the infrastructure, such as routers, switches, and firewalls.

Overlay networking
Diagram: Overlay networking. Source Researchgate.

SD-WAN with SDWAN overlay.

SD-WAN leverages a transport-independent fabric technology that is used to connect remote locations. This is achieved by using overlay technology. The SDWAN overlay works by tunneling traffic over any transport between destinations within the WAN environment.

This gives genuine flexibility to route applications across any portion of the network, regardless of the circuit or transport type. This is the definition of transport independence. Having a fabric SDWAN overlay network means that every remote site, regardless of physical or logical separation, is always a single hop away from any other. DMVPN, for example, is built on the same transport-agnostic design.

DMVPN configuration
Diagram: DMVPN Configuration.

SD-WAN overlays offer several advantages over traditional WANs, including improved scalability, reduced complexity, and better control over traffic flows. They also provide better security, as each site is protected by its dedicated security protocols. Additionally, SD-WAN overlays can improve application performance and reliability and reduce latency.

We need more bandwidth.

Modern businesses demand more bandwidth than ever to connect their data, applications, and services. As a result, we have many things to consider with the WAN, such as regulations, security, visibility, branch and data center sites, remote workers, internet access, cloud, and traffic prioritization. These factors drive the need for SD-WAN.

The concepts and design principles of creating a wide area network (WAN) to provide resilient and optimal transit between endpoints have continuously evolved. However, the driver behind building a better WAN is to support applications that demand performance and resiliency.

Key SD WAN Features

  • Full-stack observability

  • Not all traffic is treated equally

  • Combining all transports

  • Intelligent traffic steering

  • Controller-based policy

Lab Guide: PfR Operations

In the following guide, I will address PfR. PfR is all about optimizing traffic and originated from OER. OER was a good step forward, but it was not enough; it performed "prefix-based route optimization," and optimization per prefix is not good enough today. Nowadays, it's all about "application-based optimization".

Performance routing (PfR) is similar to OER but can optimize our routing based on application requirements. OER and PfR are technically 95% identical, but Cisco rebranded OER as PfR.

In the diagram below, we have the following:

  • H1 is a traffic generator that sends traffic to the ISP router loopback interfaces.
  • MC, BR1, and BR2 run iBGP.
  • MC is our master controller.
  • BR1 and BR2 are border routers.
  • Between AS 1 and AS 2 we run eBGP.

Performance based routing

Note:

First, we will look at the MC device and the default routing. We see two entries for the 10.0.0.0/8 network; iBGP uses BR1 as the exit point. 

Once PfR is configured, we can check the settings on the MC and the border routers.

Performance based routing

Analysis:

Cisco PfR, or Cisco Performance Routing, is an advanced technology designed to optimize network traffic flows. Unlike traditional routing protocols, PfR considers various factors such as network conditions, link capacities, and application requirements to select the most efficient path for data packets dynamically. This intelligent routing approach ensures enhanced performance and optimal resource utilization.

Key Features of Cisco PfR

1. Intelligent Path Selection: Cisco PfR analyzes real-time network data to determine the best path for traffic flows, considering factors like latency, delay, and link availability. It dynamically adapts to changing network conditions, ensuring optimal performance.

2. Application-Aware Routing: PfR goes beyond traditional routing protocols by considering the specific requirements of applications running on the network. It can prioritize critical applications, allocate bandwidth resources accordingly, and optimize performance for different types of traffic.

Cisco PfR

Benefits of Cisco PfR

1. Improved Network Performance: PfR can dynamically adapt to network conditions, optimizing traffic flows, reducing latency, and enhancing overall network performance. This results in improved user experience and increased productivity.

2. Efficient Utilization of Network Resources: Cisco PfR intelligently distributes traffic across available network links, optimizing resource utilization. Leveraging multiple paths balances the load and prevents congestion, leading to better bandwidth utilization.

3. Enhanced Application Performance: PfR’s application-aware routing ensures that critical applications receive the required bandwidth and quality of service. This prioritization improves application performance, minimizing delays and ensuring a smooth user experience.

4. Simplified Network Management: PfR provides detailed visibility into network performance, allowing administrators to identify and troubleshoot issues more effectively. Its centralized management interface simplifies configuration and monitoring, making network management less complex.

Implementation Considerations

Certain factors must be considered before implementing Cisco PfR. Evaluate the network infrastructure, identify critical applications, and determine the desired performance goals. Proper planning and configuration are essential to maximizing the benefits of PfR.

Knowledge Check: Application-Aware Routing (AAR) with Cisco SD-WAN

If you have multiple connections, such as an MPLS and an Internet connection, both may be actively used (depending on the OMP best-path selection). However, this is not always ideal. Your MPLS connection may support QoS while your Internet connection is best effort, so a business application that requires QoS should use the MPLS link, while web traffic should only use the Internet connection.

How can MPLS performance be improved if it degrades? Temporarily switching to an Internet connection could improve the end-user experience.

Multi-connections to the Internet are another example. A fiber optic network, cable, DSL, or 4G network might be available. You should be able to select the best connection every time.

With Application-Aware Routing (AAR), we can determine which applications should use which WAN connection, and we can failover based on packet loss, jitter, and delay. AAR tracks network statistics from Cisco SD-WAN data plane tunnels to determine the optimal traffic path.
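The decision logic behind this kind of SLA-driven path choice can be sketched in a few lines. The thresholds, tunnel names, and measurements below are hypothetical; in a real Cisco SD-WAN deployment this is expressed through AAR policy and probe measurements on the controller, not code.

```python
# A simplified, hypothetical sketch of SLA-driven path selection,
# loosely modeled on how Application-Aware Routing evaluates tunnels.

SLA = {"loss_pct": 2.0, "latency_ms": 150, "jitter_ms": 30}   # per-application SLA class

# Latest measurements per data-plane tunnel (hypothetical values)
tunnels = {
    "mpls":     {"loss_pct": 0.1, "latency_ms": 40,  "jitter_ms": 5},
    "internet": {"loss_pct": 3.5, "latency_ms": 90,  "jitter_ms": 12},
    "lte":      {"loss_pct": 1.0, "latency_ms": 120, "jitter_ms": 25},
}

def meets_sla(stats, sla):
    """A tunnel qualifies only if every metric is within the SLA class."""
    return all(stats[metric] <= limit for metric, limit in sla.items())

def select_path(tunnels, sla, preferred=("mpls", "internet", "lte")):
    """Pick the first preferred tunnel that meets the SLA; otherwise
    fall back to the tunnel with the lowest packet loss."""
    for name in preferred:
        if meets_sla(tunnels[name], sla):
            return name
    return min(tunnels, key=lambda name: tunnels[name]["loss_pct"])

print(select_path(tunnels, SLA))   # -> 'mpls' with the sample numbers above
```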

Knowledge Check: NBAR

NBAR, short for Network-Based Application Recognition, is a technology that allows network devices to identify and classify network protocols and applications traversing the network. Unlike traditional network traffic analysis methods that rely on port numbers alone, NBAR utilizes deep packet inspection to identify applications based on their unique signatures and traffic patterns. This granular level of visibility enables network administrators to gain valuable insights into the type of traffic flowing through their networks.

Application Recognition

NBAR finds extensive use in various scenarios. From a network performance perspective, it assists in traffic shaping and bandwidth management, ensuring optimal resource allocation. Moreover, NBAR plays a vital role in Quality of Service (QoS) implementations, facilitating the prioritization of mission-critical applications. Additionally, NBAR’s application recognition capabilities are essential in network troubleshooting, as they help pinpoint the source of congestion and performance issues.
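As a rough illustration of the idea (not Cisco's actual NBAR engine), the toy sketch below classifies traffic by inspecting payload content rather than port numbers alone. The byte signatures are simplified examples.

```python
# A toy illustration of signature-based application recognition.
# NBAR itself uses a large, maintained protocol pack of signatures;
# the patterns below are simplified examples for illustration only.

SIGNATURES = {
    "http": [b"GET ", b"POST ", b"HTTP/1.1"],
    "tls":  [b"\x16\x03\x01", b"\x16\x03\x03"],   # TLS handshake record headers
}

def classify(payload: bytes) -> str:
    """Return the first application whose signature appears in the payload."""
    for app, patterns in SIGNATURES.items():
        if any(pattern in payload for pattern in patterns):
            return app
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com"))  # -> 'http'
print(classify(b"\x16\x03\x03\x00\x2f"))                           # -> 'tls'
```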

SD WAN Overlay: Implementation Considerations

Network Assessment: A thorough network assessment is crucial before implementing the SD-WAN overlay. This includes evaluating existing network infrastructure, bandwidth requirements, application performance, and security protocols. A comprehensive assessment helps identify potential bottlenecks and ensures a smooth transition to the new technology.

Vendor Selection: Choosing the right SD-WAN overlay vendor is vital for a successful implementation. Factors to consider include scalability, security features, ease of management, and compatibility with existing network infrastructure. Evaluating multiple vendors and seeking recommendations from industry experts can help make an informed decision.

Key Considerations for Implementation

Before implementing an SD-WAN overlay, assessing your organization’s specific requirements and goals is essential. Consider network architecture, security needs, scalability, and integration with existing systems. Conduct a thorough evaluation to determine your business’s most suitable SD-WAN solution.

Overcoming Implementation Challenges

Implementing an SD-WAN overlay may present challenges. Common hurdles include network compatibility, data migration, and seamless integration with existing infrastructure. Identify potential roadblocks early on and work closely with your SD-WAN provider to develop a comprehensive implementation plan.

Best Practices for Successful Deployment

To ensure a smooth and successful SD-WAN overlay implementation, follow these best practices:

a. Conduct a pilot phase: Test the solution in a controlled environment to identify and address potential issues before full-scale deployment.

b. Prioritize security: Implement robust security measures to protect your network and data. Consider features like encryption, firewalls, and intrusion prevention systems.

c. Optimize for performance: Leverage SD-WAN overlay’s advanced traffic management capabilities to optimize application performance and prioritize critical traffic.

Monitoring and Maintenance

Once the SD-WAN overlay is implemented, continuous monitoring and maintenance are crucial. Regularly assess network performance, address any bottlenecks, and apply updates as necessary. Implement proactive monitoring tools to detect and resolve issues before they impact operations.

WAN Innovation

The WAN is the entry point between inside the perimeter and outside. An outage in the WAN has a large blast radius, affecting many applications and other branch site connectivity. Yet the WAN has had little innovation until now with the advent of both SD-WAN and SASE.  SASE is a combination of both network and security functions.

SASE Network

If you look at the history of the WAN, you will see that there have been several stages in WAN virtualization. Most WAN transformation projects went from basic hub-and-spoke topologies based on services such as leased lines to fully meshed MPLS-based WAN services. Cost was the main driver for this evolution, not agility.

wide area network
Diagram: Wide Area Network: WAN Technologies.

Issues with the Traditional Network

To understand SD-WAN, we must first discuss some “problems” with traditional WAN connections. There are two types of WAN connections: private and public. Here are two options to compare:

  • Cost: MPLS connections are much more expensive than regular Internet connections.

  • Time to deploy: Private WAN connections take longer than regular Internet connections.

  • SLAs: Service providers offer service-level agreements (SLAs) for private WAN connections but not for regular Internet connections. Several Internet providers offer SLAs for "business" class connections, but these are much more expensive than regular (consumer) connections.

  • Packet loss: Private WAN connections like MPLS experience far less packet loss than Internet connections.

  • Quality of service: Internet connections do not offer end-to-end quality of service. Outgoing traffic can be prioritized, but that is all; the Internet itself is like the Wild West. Private WAN connections often support end-to-end quality of service.

As the world of I.T. becomes dispersed, the network and security perimeters dissolve and become less predictable. Before, it was easy to know what was internal and external, but now we live in a world of micro-perimeters with a considerable change in the focal point.

The perimeter is now the identity of the user and device – not the fixed point at an H.Q. site. As a result, applications require a WAN to support distributed environments, flexible network points, and a change in the perimeter design.

Suboptimal traffic flow

The optimal route will be the fastest or most efficient and, therefore, preferred to transfer data. Sub-optimal routes will be slower and, hence, not the selected route. Centralized-only designs resulted in suboptimal traffic flow and increased latency, which will degrade application performance.

A key point to note is that traditional networks focus on centralized points in the network that all applications, network, and security services must adhere to. These network points are fixed and cannot be changed.

Network point intelligence

However, the network should be evolved to have network points positioned where it makes the most sense for the application and user. Not based on, let’s say, a previously validated design for a different application era. For example, many branch sites do not have local Internet breakouts.

So, for this reason, we backhauled internet-bound traffic to secure, centralized internet portals at the H.Q. site. As a result, we sacrificed the performance of Internet and cloud applications. Designs that place the H.Q. site at the center of connectivity requirements inhibit the dynamic access requirements for digital business.

Hub and spoke drawbacks.

Simple spoke-type networks are sub-optimal because you always have to go to the center point of the hub and then out to the machine you need rather than being able to go directly to whichever node you need. As a result, the hub becomes a bottleneck in the network as all data must go through it. With a more scattered network using multiple hubs and switches, a less congested and more optimal route could be found between machines.

Knowledge Check: DMVPN as an overlay technology

DMVPN, an acronym for Dynamic Multipoint Virtual Private Network, is a Cisco proprietary solution that provides a scalable and flexible approach to creating virtual private networks over the Internet. Unlike traditional VPNs requiring point-to-point connections, DMVPN utilizes a hub-and-spoke architecture, allowing multiple remote sites to communicate securely.

How DMVPN Works

a) Phase 1: Establishing a mGRE (Multipoint GRE) Tunnel: DMVPN begins by creating a multipoint GRE tunnel, allowing spoke routers to connect to the hub router. This phase sets the foundation for secure communication.

b) Phase 2: Dynamic Routing Protocol Integration: Once the mGRE tunnel is established, a dynamic routing protocol, such as EIGRP or OSPF, propagates routing information. This allows spoke routers to learn about other remote networks dynamically.

c) Phase 3: IPSec Encryption: To ensure secure communication over the internet, IPSec encryption is applied to the DMVPN tunnels. This encryption provides confidentiality, integrity, and authentication, safeguarding data transmitted between sites.

DMVPN Phase 3
Diagram: DMVPN Phase 3 configuration

A key point on MPLS agility

Multiprotocol Label Switching, or MPLS, is a networking technology that forwards traffic based on short path "labels" rather than network addresses, typically over private wide area networks. As a protocol-independent solution, MPLS assigns labels to each data packet, and those labels control the path the packet follows. As a result, MPLS significantly improves forwarding speed, but it has some drawbacks.

MPLS VPN
Diagram: MPLS VPN

MPLS topologies, once they are provisioned, are challenging to modify. Community tagging and matching provide some degree of flexibility and are commonly used, meaning the customers set BGP communities on prefixes for specific applications. The SP matches these communities and sets traffic engineering parameters like the MED and Local Preference. However, the network topology essentially remains fixed.

digital transformation
Diagram: Networking: The cause of digital transformation.

Connecting remote sites to cloud offerings, such as SaaS or IaaS, is far more efficient over the public Internet. However, there are many drawbacks to backhauling traffic to a central data center when it is not required, and it is more efficient to go direct. SD-WAN technologies share similar technologies to DMVPN phases, allowing your branch sites to go directly to cloud-based applications without backhauling to the central H.Q.

Introducing the SD-WAN Overlay

A software-defined wide area network is a wide area network that uses software-defined network technology, such as communicating over the Internet using SDWAN overlay tunnels that are encrypted when destined for internal organization locations. SD-WAN is software-defined networking for the wide area network.

SD-WAN decouples (separates) the WAN infrastructure, whether physical or virtual, from its control plane mechanism and allows applications or application groups to be placed into virtual WAN overlays.

Types of SD-WAN and the SD-WAN overlay: The virtual WANs 

The separation allows us to bring many enhancements and improvements to a WAN that has had very little innovation in the past compared to the rest of the infrastructure, such as server and storage modules. With server virtualization, several virtual machines create application isolation on a physical server.

For example, applications placed in separate VMs operate in isolation from each other, even though the VMs are installed on the same physical host.

Consider SD-WAN to operate with similar principles. Each application or group can operate independently when traversing the WAN to endpoints in the cloud or other remote sites. These applications are placed into a virtual SDWAN overlay.

Cisco SD WAN Overlay
Diagram: Cisco SD-WAN overlay. Source Network Academy

SD-WAN overlay and SDN combined

  • The Fabric

The word fabric comes from the fact that there are many paths from one server to another, which eases load balancing and traffic distribution. SDN aims to centralize the control logic that distributes flows over all the fabric paths. This is the role of the SDN controller. The SDN controller can also control several fabrics simultaneously, managing intra- and inter-datacenter flows.

  • SD-WAN overlay includes SDN

SD-WAN is used to control and manage a company's multiple WANs. There are different types of WAN transport: Internet, MPLS, LTE, DSL, fiber, wired networks, circuit links, and so on. SD-WAN uses SDN technology to control the entire environment. As with SDN, the data plane and control plane are separated. A centralized controller is added to manage flows, routing and switching policies, packet priorities, network policies, and more. SD-WAN technology is based on overlays: logical tunnels between nodes that abstract the underlying networks.

  • Centralized logic

In a traditional network, each device's transport functions and control logic reside on the device itself. This is why any configuration or change must be done box by box, carried out manually or, at best, with an Ansible script. SD-WAN brings Software-Defined Networking (SDN) concepts to the enterprise branch WAN.

Software-defined networking (SDN) is an architecture, whereas SD-WAN is a technology that can be purchased and built on SDN’s foundational concepts. SD-WAN’s centralized logic stems from SDN. SDN separates the control from the data plane and uses a central controller to make intelligent decisions, similar to the design that most SD-WAN vendors operate.

  • A holistic view

The controller has a holistic view. Same with the SD-WAN overlay. The controller supports central policy management, enabling network-wide policy definitions and traffic visibility. The SD-WAN edge devices perform the data plane. The data plane is where the simple forwarding occurs, and the control plane, which is separate from the data plane, sets up all the controls for the data plane to forward.

Like SDN, the SD-WAN overlay abstracts network hardware into a control plane with multiple data planes to make up one large WAN fabric. As the control layer is abstracted and decoupled above the physicals and running in software, services can be virtualized and delivered from a central location to any point on the network.

sd-wan technology
Diagram: SD-WAN technology: The old WAN vs the new WAN.

Types of SD WAN and SD-WAN Overlay Features

Enterprises that employ SD-WAN solutions for their network architecture will simplify the complexity of their WAN. Enterprises should look at the SD-WAN options available in various deployment options, ranging from thin devices with most of the functionality in the cloud to thicker devices at the branch location performing most of the work. Whichever SD-WAN vendor you choose will have similar features.

Today’s WAN environment requires us to manage many elements: numerous physical components that include both network and security devices, complex routing protocols and configurations, complex high-availability designs, and various path optimizations and encryption techniques. 

Gaining the SD-WAN benefits

Employing the features discussed below will allow you to gain the benefits of SD-WAN: its higher capacity bandwidth, centralized management, network visibility, and multiple connection types. In addition, SD-WAN technology allows organizations to use connection types that are cheaper than MPLS.

virtual private network
Diagram: SD-WAN features: Virtual Private Network (VPN).

Types of SD WAN: Combining the transports

At its core, SD-WAN shapes and steers application traffic across multiple WAN means of transport. Building on the concept of link bonding to combine numerous means of transport and transport types, the SD-WAN overlay improves the idea by moving the functionality up the stack—first, SD-WAN aggregates last-mile services, representing them as a single pipe to the application.

SD-WAN allows you to combine all transport links into one big pipe. SD-WAN is transport agnostic. As it works by abstraction, it does not care what transport links you have. Maybe you have MPLS, private Internet, or LTE. It can combine all these or use them separately.

Types of SD WAN: Central location

From a central location, SD-WAN pulls all of these WAN resources together, creating one large WAN fabric that allows administrators to slice up the WAN to match the application requirements that sit on top. Different applications traverse the WAN, so we need the WAN to react differently.

For example, if you're running a call center, Voice traffic needs low delay, low latency, and high availability. You may wish to steer this traffic over a path with a strong service-level agreement.

SD WAN traffic steering
Diagram: SD-WAN traffic steering. Source Cisco.

Types of SD WAN: Traffic steering

Traffic steering may also be required: for example, moving voice traffic to another path if the first path is experiencing high latency. If it is not possible to steer traffic automatically to a better-performing link, a series of path remediation techniques can be run to try to improve performance. File transfer differs from real-time Voice: it can tolerate more delay but needs more bandwidth.

Here, you may want to use a combination of WAN transports (such as consumer broadband and LTE) to achieve higher aggregate bandwidth. This also allows you to automatically steer traffic over different WAN transports when there is degradation on one link. With the SD-WAN overlay, we must start thinking about paths, not links.

SD-WAN overlay makes intelligent decisions

At its core, SD-WAN enables real-time application traffic steering over any link, such as broadband, LTE, and MPLS, assigning pre-defined policies based on business intent. Steering policies support many application types, making intelligent decisions about how WAN links are utilized and which paths are taken.

computer networking
Diagram: Computer networking: Overlay technology.

Types of SD WAN: Steering traffic

The concept of an underlay and overlay are not new, and SD-WAN borrows these designs. First, the underlay is the physical or virtual world, such as the physical infrastructure. Then, we have the overlay, where all the intelligence can be set. The SDWAN overlay represents the virtual WANs that hold your different applications.

A virtual WAN overlay enables us to steer traffic and combine all bandwidths. Similar to how applications are mapped to V.M. in the server world, with SD-WAN, each application is mapped to its own virtual SD-WAN overlay. Each virtual SDWAN overlay can have its own SD WAN security policies, topologies, and performance requirements.

SD-WAN overlay path monitoring

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE ) and then chooses the best path based on real-time conditions and the business policy. In summary, the underlay network is the physical or virtual infrastructure above which the overlay network is built. An SDWAN overlay network is a virtual network built on top of an underlying Network infrastructure/Network layer (the underlay).

Types of SD-WAN: Controller-based policy

An additional layer of information is needed to make more intelligent decisions about how and where to forward application traffic. This is the controller-based policy approach that SD-WAN offers, incorporating a holistic view.

A central controller can now make decisions based on global information, not solely on a path-by-path basis with traditional routing protocols.  Getting all the routing information and compiling it into the controller to make a decision is much more efficient than making local decisions that only see a limited part of the network.

The SD-WAN Controller provides physical or virtual device management for all SD-WAN Edges associated with the controller. This includes but is not limited to, configuration and activation, IP address management, and pushing down policies onto SD-WAN Edges located at the branch sites.

SD-WAN Overlay Case Study

I recently consulted for a private enterprise. Like many enterprises, they have many applications, both legacy and new. No one had a clear picture of the traffic flows and applications running over the WAN. Visibility was at an all-time low. For the network design, the H.Q. has MPLS and Direct Internet access.

There is nothing new here; this design has been in place for the last decade. All traffic is backhauled to the HQ/MPLS headend for security screening. The security stack, which will include firewalls, IDS/IPS, and anti-malware, was located in the H.Q. The remote sites have high latency and limited connectivity options.

 

types of sd wan
Diagram: WAN transformation: Network design.

More importantly, they are transitioning their ERP system to the cloud. As apps move to the cloud, they want to avoid fixed WAN, a big driver for a flexible SD-WAN solution. They also have remote branches. These branches are hindered by high latency and poorly managed IT infrastructure.

But they don’t want an I.T. representative at each site location. They have heard that SD-WAN has a centralized logic and can view the entire network from one central location. These remote sites must receive large files from the H.Q.; the branch sites’ transport links are only single-customer broadband links.

The cost of remote sites

Some remote sites rely on LTE, and the bills are getting larger. The company wants to reduce costs with dedicated Internet access or business broadband. They have heard that SD-WAN can combine different transports and apply path remediation to degraded transports for better performance. So, they decided to roll out SD-WAN. From this new architecture, they gained several benefits.

SD-WAN Visibility

When your business-critical applications operate over different provider networks, it gets harder to troubleshoot and find the root cause of problems. So, visibility is critical to business. SD-WAN allows you to see network performance data in real-time and is essential for determining where packet loss, latency, and jitter are occurring so you can resolve the problem quickly.

You also need to be able to see who or what is consuming bandwidth so you can spot intermittent problems. For all these reasons, SD-WAN visibility needs to go beyond network performance metrics and provide greater insight into the delivery chains that run from applications to users.

Understand your baselines

Visibility is needed to complete the network baseline before the SD-WAN is deployed. This enables the organization to understand existing capabilities, the norm, what applications are running, the number of sites connected, what service providers used, and whether they’re meeting their SLAs.

Visibility is critical to obtaining a complete picture so teams understand how to optimize the business infrastructure. SD-WAN gives you an intelligent edge, so you can see all the traffic and act on it immediately.

First, look at the visibility of the various flows, the links used, and any issues on those links. Then, if necessary, you can tweak the bonding policy to optimize the traffic flow. Before the rollout of SD-WAN, there was no visibility into the types of traffic or how much bandwidth each application consumed, and only limited knowledge of WAN performance.

SD-WAN offers higher visibility

With SD-WAN, they have the visibility to control and classify traffic on layer seven values, such as the URL being requested and the domain being accessed, along with the standard port and protocol.

All applications are not equal; some run better on different links. If an application is not performing correctly, you can route it to a different circuit. With the SD-WAN orchestrator, you have complete visibility across all locations, all links, and into the other traffic across all circuits. 

SD-WAN High Availability

The goal of any high-availability solution is to ensure that all network services are resilient to failure. Such a solution aims to provide continuous access to network resources by addressing the potential causes of downtime through functionality, design, and best practices.

The previous high-availability design was active and passive with manual failover. It was hard to maintain, and there was a lot of unused bandwidth. Now, they have more efficient use of resources and are no longer tied to the bandwidth of the first circuit.

There is a better granular application failover mechanism. You can also select which apps are prioritized if a link fails or when a certain congestion ratio is hit. For example, you have LTE as a backup, which can be very expensive. So applications marked high priority are steered over the backup link, but guest WIFI traffic isn’t.  

Flexible topology

Before, they had a hub-and-spoke MPLS design for all applications. They wanted a full-mesh architecture for some applications while keeping the existing hub-and-spoke for others. However, the service provider couldn't accommodate the level of granularity they wanted.

With SD-WAN, they can choose topologies better suited to each application type. As a result, the network design is now more flexible and matches the application, rather than the application having to match a network design that doesn't suit it.

SD-WAN topology
Diagram: SD-WAN Topologies.

Going Deeper on the SD-WAN Overlay Components

SD-WAN combines transports, SDWAN overlay, and underlay

Look at it this way. With an SD-WAN topology, there are different levels of networking. There is an underlay network, the physical infrastructure, and an SDWAN overlay network. The physical infrastructure is the router, switches, and WAN transports; the overlay network is the virtual WAN overlays.

The SDWAN overlay presents a different network to the application. For example, voice traffic sees only the voice overlay. The logical virtual pipe the overlay creates, and which the application sees, differs from the underlay.

An SDWAN overlay network is a virtual or logical network created on top of an existing physical network. A classic example of overlay networking is the early Internet, which was built as an overlay on top of the circuit-switched telephone network. An overlay network is any virtual layer on top of physical network infrastructure.

Consider an SDWAN overlay as a flexible tag.

This may be as simple as a virtual local area network (VLAN), but it typically refers to more complex virtual layers created by an SDN or an SD-WAN. Think of an SDWAN overlay as a tag, so building the overlays is not expensive or time-consuming. In addition, you don't need to buy physical equipment for each overlay, as the overlay is virtualized in software.

Similar to software-defined networking (SDN), the critical part is that SD-WAN works by abstraction. All the complexities are abstracted into application overlays. For example, application type A can use this SDWAN overlay, and application type B can use that SDWAN overlay. 

I.P. and port number, orchestrations, and end-to-end

Recent application requirements drive a new type of WAN that more accurately supports today's environment with an additional layer of policy management. The world has moved away from relying on I.P. addresses and port numbers to identify applications and make the correct forwarding decision.

Types of SD WAN

The market for branch office wide-area network functionality is shifting from dedicated routing, security, and WAN optimization appliances to feature-rich SD-WAN. As a result, WAN edge infrastructure now incorporates a widening set of network functions, including secure routers, firewalls, SD-WAN, WAN path control, and WAN optimization, along with traditional routing functionality. Therefore, consider the following approach to deploying SD-WAN.

SD WAN Overlay Approach: Key Features

  • Application-orientated WAN

  • Holistic visibility and decisions

  • Central logic

  • Independent topologies

  • Application mapping

1. Application-based approach

With SD-WAN, we are shifting from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business requirements and decides how to optimize the application with the correct forwarding behavior. This new way of forwarding would be problematic when using traditional WAN architectures.

Making business logic decisions with I.P. and port number information is challenging. Standard routing is the most common way to forward application traffic today, but it only assesses part of the picture when making its forwarding decision. 

These devices have routing tables to perform forwarding. Still, with this model, they operate and decide on their little island, losing the holistic view required for accurate end-to-end decision-making.  

2. SD-WAN: Holistic decision

The WAN must start to make decisions holistically. The WAN should not be viewed as a single module in the network design. Instead, it must incorporate several elements it has not incorporated to capture the correct per-application forwarding behavior. The ideal WAN should be automatable to form a comprehensive end-to-end solution centrally orchestrated from a single pane of glass.

Managed and orchestrated centrally, this new WAN fabric is transport agnostic. It offers application-aware routing, regional-specific routing topologies, encryption on all transports regardless of link type, and high availability with automatic failover. All of these will be discussed shortly and are the essence of SD-WAN.  

3. SD-WAN and central logic        

Besides the virtual SD-WAN overlay, another key SD-WAN concept is centralized logic. Upon examining a standard router, local routing tables are computed from an algorithm to forward a packet to a given destination.

It receives routes from its peers or neighbors but computes paths locally and makes local routing decisions. The critical point to note is that everything is calculated locally. SD-WAN functions on a different paradigm.

Rather than using distributed logic, it utilizes centralized logic. This allows you to view the entire network holistically and with a distributed forwarding plane that makes real-time decisions based on better metrics than before.

This paradigm enables SD-WAN to see how the flows behave along the path. This is because they are taking the fragmented control approach and centralizing it while benefiting from a distributed system. 

The SD-WAN controller, which acts as the brain, can set different applications to run over different paths based on business requirements and performance SLAs, not on a fixed topology. So, for example, if one path does not have acceptable packet loss and latency is high, we can move to another path dynamically.

4. Independent topologies

SD-WAN has different levels of networking and brings the concepts of SDN into the Wide Area Network. Similar to SDN, we have an underlay and an overlay network with SD-WAN. The WAN infrastructure, either physical or virtual, is the underlay, and the SDWAN overlay is in software on top of the underlay where the applications are mapped.

This decoupling or separation of functions allows different application or group overlays. Previously, the application had to work with a fixed and pre-built network infrastructure. With SD-WAN, the application can choose the type of topology it wants, such as a full mesh or hub and spoke. The topologies with SD-WAN are much more flexible.

A key point: SD-WAN abstracts the underlay

With SD-WAN, the virtual WAN overlays are abstracted from the physical device’s underlay. Therefore, the virtual WAN overlays can take on topologies independent of each other without being pinned to the configuration of the underlay network. SD-WAN changes how you map application requirements to the network, allowing for the creation of independent topologies per application.

For example, mission-critical applications may use expensive leased lines, while lower-priority applications can use inexpensive best-effort Internet links. This can all change on the fly if specific performance metrics are unmet.

Previously, the application had to match and “fit” into the network with the legacy WAN, but with an SD-WAN, the application now controls the network topology. Multiple independent topologies per application are a crucial driver for SD-WAN.

types of sd wan
Diagram: SD-WAN Link Bonding.

5. The SD-WAN overlay

SD-WAN optimizes traffic over multiple available connections. It dynamically steers traffic to the best available link. If the available links show any transmission issues, it will immediately shift traffic to a better path, or apply remediation to a link if, for example, you only have a single link. SD-WAN delivers application flows from a source to a destination based on the configured policy and the best available network path. A core concept of SD-WAN is the overlay.

SD-WAN solutions provide the software abstraction to create the SD-WAN overlay and decouple network software services from the underlying physical infrastructure. Multiple virtual overlays may be defined to abstract the underlying physical transport services, each supporting a different quality of service, preferred transport, and high availability characteristics.

6. Application mapping

Application mapping also allows you to steer traffic over different WAN transports. This steering is automatic and can be implemented when specific performance metrics are unmet. For example, if Internet transport has a 15% packet loss, the policy can be set to steer all or some of the application traffic over to better-performing MPLS transport.

Applications are mapped to different overlays based on business intent, not infrastructure details like IP addresses. In practice, it is common to have three or four overlays. For example, you may have platinum, gold, and bronze SDWAN overlays, and then map the applications to these overlays.

The applications will have different networking requirements, and overlays allow you to slice and dice your network if you have multiple application types. 
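A simple way to picture application mapping is as a lookup from application to overlay, where each overlay carries its own policy. The overlay names, applications, and policy values in the sketch below are hypothetical.

```python
# A hypothetical sketch of mapping applications to SD-WAN overlays.
# Overlay names, applications, and policy values are invented for illustration.

overlays = {
    "platinum": {"preferred_transport": "mpls",     "max_latency_ms": 50,  "encrypted": True},
    "gold":     {"preferred_transport": "internet", "max_latency_ms": 150, "encrypted": True},
    "bronze":   {"preferred_transport": "internet", "max_latency_ms": 400, "encrypted": True},
}

# Business intent: which application belongs to which overlay
app_to_overlay = {
    "voice":        "platinum",
    "erp":          "gold",
    "email":        "gold",
    "web-browsing": "bronze",
    "guest-wifi":   "bronze",
}

def policy_for(app: str) -> dict:
    """Return the overlay policy an application's traffic should follow."""
    overlay = app_to_overlay.get(app, "bronze")   # default to best effort
    return {"overlay": overlay, **overlays[overlay]}

print(policy_for("voice"))   # -> platinum overlay: MPLS preferred, tight latency bound
```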

SDWAN Overlay
Diagram: Technology design: SDWAN overlay application mapping.

SD-WAN & WAN metrics

SD-WAN captures metrics that go far beyond the standard WAN measurements. The traditional way is to measure packet loss, latency, and jitter to determine path quality. These measurements are insufficient on their own, and routing protocols make their forwarding decisions only at layer 3 of the OSI model.

As we know, layer 3 of the OSI model lacks intelligence and misses the overall user experience. Rather than relying on bits, bytes, jitter, and latency alone, we must start to look at application transactions.

SD-WAN incorporates better metrics beyond those a standard WAN edge router considers. These metrics may include application response time, network transfer, and service response time. Some SD-WAN solutions monitor each flow’s RTT, sliding windows, and ACK delays, not just the I.P. or TCP. This creates a more accurate view of the application’s performance.
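As a small illustration of flow-level measurement, the sketch below smooths per-flow RTT samples with an exponentially weighted moving average, the same basic technique TCP uses for its smoothed RTT estimate. The samples and weighting factor are illustrative.

```python
# Hypothetical sketch: smoothing per-flow RTT samples with an
# exponentially weighted moving average (EWMA), as TCP does for SRTT.

ALPHA = 0.125   # weight given to each new sample (TCP's classic value)

def smoothed_rtt(samples_ms, alpha=ALPHA):
    """Fold a sequence of RTT samples into one smoothed estimate."""
    srtt = samples_ms[0]
    for sample in samples_ms[1:]:
        srtt = (1 - alpha) * srtt + alpha * sample
    return srtt

# Example: a flow whose RTT briefly spikes; the estimate reacts gradually
print(round(smoothed_rtt([40, 42, 41, 180, 43, 44]), 1))
```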

SD-WAN Features and Benefits

      • Leverage all available connectivity types.

All SD-WAN vendors can balance traffic across all transports regardless of transport type, either per flow or per packet. This ensures that redundant links no longer sit idle. SD-WAN creates an active-active network and eliminates the need to use and maintain traditional routing protocols for active-standby setups. A simplified sketch of per-flow balancing follows below.
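For per-flow balancing, a common building block is hashing a flow's 5-tuple so that all packets of one flow stay on one transport while different flows spread across all links. The sketch below is a simplified, hypothetical illustration of that idea.

```python
# Hypothetical sketch of per-flow load balancing across multiple transports.
# Hashing the 5-tuple keeps every packet of a flow on the same link,
# which avoids reordering while still using all transports actively.
import hashlib

TRANSPORTS = ["mpls", "broadband", "lte"]

def pick_transport(src_ip, dst_ip, src_port, dst_port, proto):
    """Deterministically map a flow's 5-tuple to one of the transports."""
    key = f"{src_ip}:{dst_ip}:{src_port}:{dst_port}:{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return TRANSPORTS[digest % len(TRANSPORTS)]

print(pick_transport("10.1.1.10", "172.16.0.5", 51515, 443, "tcp"))
```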

      • App-aware routing capabilities 

As we know, application visibility is critical to forward efficiently over either transport. Still, we also need to go one step further and examine deep inside the application and understand what sub-applications exist, such as determining Facebook chat over regular Facebook. This allows you to balance loads across the WAN based on sub-applications. 

      • Regional-specific routing topologies

Common topologies include hub-and-spoke, full mesh, and Internet PoP topologies. Each organization will have different requirements for choosing a topology. For example, Voice should use a full-mesh design, while data may require a hub and spoke connecting to a central data center.

As we are moving heavily into cloud applications, local internet access/internet breakout is a better strategic option than backhauling traffic to a central site when it doesn’t need to. SD-WAN abstracts the details of WAN, enabling application-independent topologies. Each application can have its topology, and this can be dynamically changed. All of this is managed by an SD-WAN control plane.

      • Centralized device management & policy administration 

With the controller-based approach that SD-WAN takes, you are not embedding the control plane in the network. This allows you to centrally provision devices and push policies and instructions down to the data plane from a central location. This simplifies management and increases scale. The manual box-by-box approach to policy enforcement is not the way forward.

The ability to tie everything to a template and automate enables rapid branch deployments, security updates, and other policy changes. It’s much better to manage it all in one central place with the ability to dynamically push out what’s needed, such as updates and other configuration changes. 

      • High availability with automatic failovers 

You cannot apply a single viewpoint to high availability. Many components are involved in creating a high-availability plan, such as device-, link-, and site-level requirements; these should be addressed in an end-to-end solution. In addition, traditional WANs require additional telemetry information to detect failures and brown-out events.

      • Encryption on all transports, irrespective of link type 

Regardless of link type, MPLS, LTE, or the Internet, we need the capacity to encrypt all those paths without the excess baggage and complications that IPsec brings. Encryption should happen automatically, and the complexity of IPsec should be abstracted.

Summary: SD WAN Overlay

In today’s digital landscape, businesses increasingly rely on cloud-based applications, remote workforces, and data-driven operations. As a result, the demand for a more flexible, scalable, and secure network infrastructure has never been greater. This is where SD-WAN overlay comes into play, revolutionizing how organizations connect and operate.

SD-WAN overlay is a network architecture that allows organizations to abstract and virtualize their wide area networks, decoupling them from the underlying physical infrastructure. It utilizes software-defined networking (SDN) principles to create an overlay network that runs on top of the existing WAN infrastructure, enabling centralized management, control, and optimization of network traffic.

Key benefits of SD-WAN overlay 

1. Enhanced Performance and Reliability:

SD-WAN overlay leverages multiple network paths to distribute traffic intelligently, ensuring optimal performance and reliability. By dynamically routing traffic based on real-time conditions, businesses can overcome network congestion, reduce latency, and maximize application performance. This capability is particularly crucial for organizations with distributed branch offices or remote workers, as it enables seamless connectivity and productivity.

2. Cost Efficiency and Scalability:

Traditional WAN architectures can be expensive to implement and maintain, especially when organizations need to expand their network footprint. SD-WAN overlay offers a cost-effective alternative by utilizing existing infrastructure and incorporating affordable broadband connections. With centralized management and simplified configuration, scaling the network becomes a breeze, allowing businesses to adapt quickly to changing demands without breaking the bank.

3. Improved Security and Compliance:

In an era of increasing cybersecurity threats, protecting sensitive data and ensuring regulatory compliance are paramount. SD-WAN overlay incorporates advanced security features to safeguard network traffic, including encryption, authentication, and threat detection. Businesses can effectively mitigate risks, maintain data integrity, and comply with industry regulations by segmenting network traffic and applying granular security policies.

4. Streamlined Network Management:

Managing a complex network infrastructure can be a daunting task. SD-WAN overlay simplifies network management with centralized control and visibility, enabling administrators to monitor and manage the entire network from a single pane of glass. This level of control allows for faster troubleshooting, policy enforcement, and network optimization, resulting in improved operational efficiency and reduced downtime.

5. Agility and Flexibility:

In today’s fast-paced business environment, agility is critical to staying competitive. SD-WAN overlay empowers organizations to adapt rapidly to changing business needs by providing the flexibility to integrate new technologies and services seamlessly. Whether adding new branch locations, integrating cloud applications, or adopting emerging technologies like IoT, SD-WAN overlay offers businesses the agility to stay ahead of the curve.

Implementation of SD-WAN Overlay:

Implementing SD-WAN overlay requires careful planning and consideration. The following steps outline a typical implementation process:

1. Assess Network Requirements: Evaluate existing network infrastructure, bandwidth requirements, and application performance needs to determine the most suitable SD-WAN overlay solution.

2. Design and Architecture: Create a network design incorporating SD-WAN overlay while considering factors such as branch office connectivity, data center integration, and security requirements.

3. Vendor Selection: Choose a reliable and reputable SD-WAN overlay vendor based on their technology, features, support, and scalability.

4. Deployment and Configuration: Install the required hardware or virtual appliances and configure the SD-WAN overlay solution according to the network design. This includes setting up policies, traffic routing, and security parameters.

5. Testing and Optimization: Thoroughly test the SD-WAN overlay solution, ensuring its compatibility with existing applications and network infrastructure. Optimize the solution based on performance metrics and user feedback.

Conclusion: SD-WAN overlay is a game-changer for businesses seeking to optimize their network infrastructure. By enhancing performance, reducing costs, improving security, streamlining management, and enabling agility, SD-WAN overlay unlocks the true potential of connectivity. Embracing this technology allows organizations to embrace digital transformation, drive innovation, and gain a competitive edge in the digital era. In an ever-evolving business landscape, SD-WAN overlay is the key to unlocking new growth opportunities and future-proofing your network infrastructure.

SD-WAN topology

SD WAN | SD WAN Tutorial

In today's digital age, businesses increasingly rely on technology for seamless communication and efficient operations. One technology that has gained significant traction is Software-Defined Wide Area Networking (SD-WAN). This blog post will provide a comprehensive tutorial on SD-WAN, explaining its key features, benefits, and implementation aspects.

SD-WAN stands for Software-Defined Wide Area Networking. It is a revolutionary approach to network connectivity that enables organizations to simplify their network infrastructure and enhance performance. Unlike traditional Wide Area Networks (WANs), SD-WAN leverages software-defined networking principles to abstract network control from hardware devices.


Highlights: SD WAN Tutorial

The Role of Abstraction

Firstly, this SD-WAN tutorial will address how SD-WAN introduces a level of abstraction into the WAN, creating virtual WANs: WAN virtualization. Imagine each virtual WAN carrying a single application over the WAN, end to end, rather than residing in one location such as a server. Each individual WAN runs to the cloud or to an enterprise location over secure, isolated paths with different policies and topologies. Wide Area Network (WAN) virtualization is an emerging technology that is revolutionizing how networks are designed and managed.

Decoupling the Infrastructure

It allows for decoupling the physical network infrastructure from the logical network, enabling the same physical infrastructure to be used for multiple logical networks. WAN virtualization enables organizations to utilize a single physical infrastructure to create multiple virtual networks, each with unique characteristics. WAN virtualization is a core requirement enabling SD-WAN.

Highlighting SD-WAN

This SD-WAN tutorial will address the SD-WAN vendors' approach to an underlay and an overlay, including the SD-WAN requirements. The underlay consists of the physical or virtual infrastructure; the overlay network, the SD-WAN overlay, is where the applications are mapped. SD-WAN solutions are designed to provide secure, reliable, and high-performance connectivity across multiple locations and networks. With SD-WAN, organizations can manage their network configurations, policies, and security infrastructure centrally.

In addition, SD-WAN solutions can be deployed over any type of existing WAN infrastructure, such as MPLS, Frame Relay, and more. SD-WAN offers enhanced security features like encryption, authentication, and access control. This ensures that data is secure and confidential and that only authorized users can access the network.

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. SD WAN Security 
  2. WAN Monitoring
  3. Zero Trust SASE
  4. Forwarding Routing Protocols



SD-WAN Tutorial

Key SD WAN Tutorial Discussion Points:


  • WAN transformation.

  • SD WAN requirements.

  • Challenges with the WAN.

  • Old methods of routing protocols.

  • SD-WAN overlay core features.

  • Challenges with BGP.

 

Back to basics: SD-WAN Tutorial

SD-WAN requirements with performance per overlay

Because each application sits in its own isolated WAN overlay, we can assign different mechanisms to each overlay, independently of the others. Different performance metrics and topologies can be set per overlay, and all of this can be done regardless of the underlying transport. The critical point is that each of these virtual WANs is entirely independent.

SD-WAN solutions offer several benefits, such as greater flexibility in routing, improved scalability, and enhanced security. Additionally, SD-WAN solutions can help organizations reduce cyber-attack risks while providing end-to-end visibility into application performance and network traffic.

Key SD-WAN Benefits

  • Improved performance

  • Not all traffic treated equally (application-aware policies)

  • Zero-trust security protection

  • Reduced WAN complexity

  • Central policy management

Key Features of SD-WAN

Centralized Control and Visibility:

SD-WAN provides a centralized management console, allowing network administrators complete control over their network infrastructure. This enables them to monitor and manage network traffic, prioritize critical applications, and allocate bandwidth resources effectively.

Dynamic Path Selection:

SD-WAN intelligently selects the optimal path for data transmission based on real-time network conditions. By dynamically routing traffic through the most efficient path, SD-WAN improves network performance, minimizes latency, and ensures a seamless user experience.
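
As a rough illustration of how such a decision could be made, the following Python sketch scores each candidate path by a weighted combination of measured latency, jitter, and loss and then picks the lowest score. The weights and the sample measurements are assumptions for illustration; real SD-WAN products use their own probing and selection logic.

def path_score(metrics, weights=(1.0, 2.0, 50.0)):
    """Lower is better: weighted latency (ms), jitter (ms) and loss (%)."""
    w_lat, w_jit, w_loss = weights
    return (w_lat * metrics["latency_ms"]
            + w_jit * metrics["jitter_ms"]
            + w_loss * metrics["loss_pct"])

def select_path(paths):
    """Pick the path with the best (lowest) score from live measurements."""
    return min(paths, key=lambda name: path_score(paths[name]))

# Hypothetical real-time measurements for three underlay transports.
measurements = {
    "mpls":  {"latency_ms": 40, "jitter_ms": 2,  "loss_pct": 0.0},
    "inet1": {"latency_ms": 25, "jitter_ms": 8,  "loss_pct": 0.5},
    "lte":   {"latency_ms": 70, "jitter_ms": 20, "loss_pct": 1.5},
}

print(select_path(measurements))   # -> "mpls" with these example numbers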

Security and Encryption:

SD-WAN solutions incorporate robust security measures to protect data transmission across the network. Encryption protocols, firewalls, and intrusion detection systems are implemented to safeguard sensitive information and mitigate potential security threats.

Benefits of SD-WAN

Enhanced Network Performance:

SD-WAN significantly improves network performance by leveraging multiple connections and routing traffic dynamically. It optimizes bandwidth utilization, reduces latency, and ensures consistent application performance, even in geographically dispersed locations.

Cost Savings:

By leveraging affordable broadband internet connections, SD-WAN eliminates the need for expensive dedicated MPLS connections. This reduces network costs and enables organizations to scale their network infrastructure without breaking the bank.

Simplified Network Management:

SD-WAN simplifies network management through centralized control and automation. This eliminates manual configuration and reduces the complexity of managing a traditional WAN infrastructure. As a result, organizations can streamline their IT operations and allocate resources more efficiently.

 

Implementing SD-WAN

Assessing Network Requirements:

Before implementing SD-WAN, organizations must assess their network requirements, such as bandwidth, application performance, and security requirements. This will help select the right SD-WAN solution that aligns with their business objectives.

Vendor Selection:

Organizations should evaluate different SD-WAN vendors based on their offerings, scalability, security features, and customer support. Choosing a vendor that can meet current requirements and accommodate future growth is crucial.

Deployment and Configuration:

Once the vendor is selected, the implementation involves deploying SD-WAN appliances or virtual instances across the network nodes. Configuration consists of defining policies, prioritizing applications, and establishing security measures.

SD-WAN Tutorial and SD-WAN Requirements:

SD-WAN Is Not New

Before we get into the details of this SD-WAN tutorial, the critical point is that the concepts of SD-WAN are not new and share ideas with the DMVPN phases.  We have had encryption, path control, and overlay networking for some time.

However, the main benefit of SD-WAN is that it acts as an enabler to wrap these technologies together and present them to enterprises as a new integrated offering. We have WAN edge devices that forward traffic to other edge devices across a WAN via centralized control. This enables you to configure application-based policy forwarding and security rules across performance-graded WAN paths.

Policy based routing
Diagram: Policy-based routing. Source Paloalto.

The SD-WAN Control and Data Plane

SD-WAN separates the control plane from the data plane functions. Central control plane components make the intelligent decisions and push them down to the data plane SD-WAN Edge routers, instructing those devices where to steer traffic.

The brains of the SD-WAN network are the control plane components, which have a fully holistic, end-to-end view. This contrasts with a traditional network, where the control plane functions are resident on each device. The data plane is where the simple forwarding occurs; the control plane, which is separate, sets up all the rules the data plane uses to forward.

Video: DMVPN Phases

Under the hood, SD-WAN shares some of the technologies used by DMVPN. In this technical demo, we will start with the first network topology, with a Hub and Spoke design, and recap DMVPN Phase 1. This was the starting point of the DMVPN design phases. However, today, you will probably see DMVPN phase 3, which allows for spoke-to-spoke tunnels, which may be better suited if you don’t need a true hub and spoke. In this demo, there will also be a bit of troubleshooting.


 

SD WAN tutorial: Removing intensive algorithms

BGP-based networks

SDN is about taking intensive network algorithms out of WAN edge router hardware and placing them in a central controller. Previously, in traditional networks, these algorithms ran on individual hardware devices, with control plane points sitting in the data path. BGP-based networks attempted the same concept with Route-Reflector (RR) designs.

They moved route reflectors (RR) off the data plane, and these RRs were then used to compute the best-path algorithms. Route reflectors can be positioned anywhere in the network and do not have to sit on the data path.

BGP Route Reflection
Diagram: BGP Route Reflection

With the controller-based approach that SD-WAN takes, you are not embedding the control plane in the network. This allows you to centrally provision policy and push instructions down to the data plane from a single location, which simplifies management and increases scale.

SD-WAN can centralize control plane security and routing, resulting in data path fluidity. The data plane forwards traffic based on the policy set by a controller that sits outside the data plane. The SD-WAN control plane handles routing and security decisions and passes the relevant information to the edge routers.
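
A toy way to picture this separation is a controller object that holds policy centrally and pushes a forwarding table down to edge devices that only forward. The class and method names below are invented for illustration and do not correspond to any vendor's API.

class Controller:
    """Central control plane: holds policy and computes forwarding decisions."""
    def __init__(self):
        self.policy = {}          # app -> ordered list of allowed transports
        self.edges = []

    def register(self, edge):
        self.edges.append(edge)

    def set_policy(self, app, paths):
        self.policy[app] = paths
        self.push()

    def push(self):
        # Push the full decision table to every edge (data plane) device.
        for edge in self.edges:
            edge.install(dict(self.policy))

class Edge:
    """Data plane: only forwards according to the table it was given."""
    def __init__(self, name):
        self.name, self.table = name, {}

    def install(self, table):
        self.table = table

    def forward(self, app):
        paths = self.table.get(app) or self.table.get("*", [])
        return f"{self.name}: {app} -> {paths[0] if paths else 'drop'}"

ctrl = Controller()
branch = Edge("branch-1")
ctrl.register(branch)
ctrl.set_policy("voice", ["mpls", "inet1"])
ctrl.set_policy("*", ["inet1"])
print(branch.forward("voice"))   # branch-1: voice -> mpls
print(branch.forward("web"))     # branch-1: web -> inet1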

SD WAN tutorial
Diagram: SD-WAN: SD WAN tutorial.

SD WAN Tutorial: Challenges With the WAN 

The traditional WAN comes with a lot of challenges. It creates a siloed management effect where different WAN links try to connect everything. Traditional WANs require extensive planning and provisioning logistics, and trying to add a branch or remote location can be costly: additional hardware purchases are required for each site.

wide area network
Diagram: Wide Area Network (WAN): WAN network and the challenges.

Challenge: Visibility

Visibility plays a vital role in day-to-day monitoring, and alerting is crucial to understanding the ongoing operational impact of the WAN. In addition, visibility enables critical performance levels to be monitored as deployments are scaled out. This helps with proactive alerting, troubleshooting, and policy optimization. Unfortunately, the traditional WAN is known for its lack of visibility.

Challenge: Service Level Agreement (SLA)

A service level agreement (SLA) is a legally binding contract between the service provider and one or more clients that lays down the specific terms and agreements governing the duration of the service engagement. For example, a traditional WAN architecture may consist of private MPLS links with Internet or LTE links as backup.

The SLAs within the MPLS service provider environment are usually broken down into bronze, silver, and gold categories. However, these SLA tiers fit only some geographies and are rarely fine-tuned per location or per customer requirement. As a result, these SLAs are very rigid.

Challenge: Static and lacking agility

The WAN's capacity, reliability, analytics, and security should be available on demand, yet the WAN infrastructure is very static. New sites and bandwidth upgrades require considerable processing time, and this static nature prohibits agility. For today's applications and the agility the business requires, the WAN is not flexible enough, and nothing can be changed on the fly to meet business requirements. Network topologies can be depicted either physically or logically; common topologies include star, ring, partial mesh, and full mesh.

Fixed topologies

In a physical world, these topologies are fixed and cannot be automatically changed. And the logical topologies can also be hindered by physical footprints. The traditional model of operation forces applications to fit into a specific network topology already built and designed. We see this a lot with MPLS/VPNs. The application needs to fit into a predefined topology. This can be changed with configurations such as adding and removing Route Targets, but this requires administrator intervention.

Route Targets (RT)
Diagram: Complications with Route Targets. Source Cisco.

SD WAN tutorial: The old methods of routing protocols

Routing Protocols

With any SD-WAN tutorial, we must address the inconsistencies of traditional routing protocols. Routing protocols make forwarding decisions based on destination addresses, and these decisions are made on a hop-by-hop basis. The paths an application can take are therefore limited by loop-prevention rules: a routing protocol will not take a path that could potentially result in a forwarding loop. Although this avoids routing loops, it limits the number of paths the application traffic can use.

The traditional WAN also struggles to enable micro-segmentation. Micro-segmentation enhances network security by restricting a hacker's lateral movement in the event of a breach. As a result, it has become increasingly widely deployed by enterprises over the last few years. It provides firms with improved control over east-west traffic and helps keep applications running in cloud or data center environments more secure.

Routing support is often inconsistent. Many traditional WAN vendors support both LAN-side and WAN-side dynamic routing and virtual routing and forwarding (VRF), some only on the WAN side. Others support only static routing, and some vendors have no routing support at all.

Video: Routing Convergence

In this video, we will address routing convergence, also known as convergence routing. We start with Layer 2 switches, which build the Ethernet LAN: all endpoints physically connect to a Layer 2 switch. If you are on a single LAN with one large VLAN, this setup works out of the box, as switches make forwarding decisions based on Layer 2 MAC addresses.

These MAC addresses are already assigned to the NICs on your hosts, so you don't need to do anything. You could statically configure a switch to say that a given MAC address is reachable on a given port, but it is better to let the switch learn this dynamically when the hosts connected to it start communicating. Send a ping between two hosts and the switch will do all the MAC learning on its own.
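
To make the dynamic-learning idea concrete, here is a tiny Python sketch of the table a switch builds as frames arrive: source MACs are learned against the ingress port, and frames to unknown destinations are flooded. This is a simplification for illustration, not how any particular switch is implemented.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                 # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC is reachable via the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: known destination goes out one port, unknown is flooded.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(ports=[1, 2, 3])
print(sw.receive(1, "aa:aa", "bb:bb"))   # flood to [2, 3]; bb:bb not yet learned
print(sw.receive(2, "bb:bb", "aa:aa"))   # [1]; aa:aa was learned from port 1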


SD-WAN Tutorial: Challenges with BGP

The issue with BGP: Border Gateway Protocol (BGP) attributes

Border Gateway Protocol (BGP) is a gateway protocol that enables the internet to exchange routing information between autonomous systems (AS). As networks interact with each other, they need a way to communicate. This is accomplished through peering. BGP makes peering possible. Without it, networks would not be able to send and receive information from each other. However, it comes with some challenges.

A redundant WAN design requires a routing protocol, either dynamic or static, for practical traffic engineering and failover. This can be done in several ways. For example, for the Border Gateway Protocol (BGP), we can set BGP attributes such as the MED and Local Preference or the administrative distance on static routes. However, routing protocols require complex tuning to load balance between border edge devices.

Although these attributes allow granular policy control, they do not cover aspects of path performance, such as Round Trip Time (RTT), delay, and jitter. In addition, complex routing has always been a problem for the WAN. As a result, it is tricky to configure Quality of Service (QoS) policies on a per-link basis and to design WAN solutions that incorporate multiple failure scenarios.

Issues with BGP: Lack of performance awareness

Due to the lack of performance awareness, BGP may not choose the best-performing path. Therefore, we must ask ourselves whether BGP can route on the best path versus the shortest path.
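
The sketch below mimics, in a very simplified form, a few of BGP's best-path tie-breakers (higher local preference, then shorter AS-path, then lower MED). The measured RTT carried alongside each route plays no part in the decision, which is exactly the problem being described. This is an illustration of the idea, not a full implementation of the BGP decision process.

def bgp_best_path(routes):
    """Simplified BGP tie-breaking: local-pref, AS-path length, MED.
    Note that rtt_ms is carried in the data but never consulted."""
    return min(
        routes,
        key=lambda r: (-r["local_pref"], len(r["as_path"]), r["med"]),
    )

routes = [
    {"nexthop": "isp-a", "local_pref": 100, "as_path": [65001, 65010],
     "med": 0, "rtt_ms": 180},                      # short AS-path, poor RTT
    {"nexthop": "isp-b", "local_pref": 100, "as_path": [65002, 65020, 65010],
     "med": 0, "rtt_ms": 35},                       # longer AS-path, great RTT
]

print(bgp_best_path(routes)["nexthop"])   # -> "isp-a", despite the worse RTT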

bgp protocol
Diagram: SD WAN tutorial and BGP protocol. BGP protocol example.

Issues with BGP: The shortest path is not always the best path

The shortest path is not necessarily the best path. Initially, we didn’t have real-time voice and video traffic, which is highly sensitive to latency and jitter. We also assumed that all links were equal. This is not the case today, where we have a mix-and-match of connections, such as slow LTE and fast MPLS. Therefore, the shortest path is no longer effective.

However, there are solutions on the market to enhance BGP, offering performance-based solutions for BGP-based networks. These could, for example, send out ICMP requests to monitor the network, then, based on the response, modify the BGP attributes such as AS prepending to influence the traffic flow. All this is done in an attempt to make BGP more performance-based. 

BGP is not performance-aware

However, we still cannot avoid the fact that BGP is not aware of capacity or performance. The common BGP attributes used for path selection are AS-Path length and the multi-exit discriminator (MED). Unfortunately, these attributes do not correlate with the network's or the application's performance.

Video: BGP in the Data Center

In this whiteboard session, we will address the basics of BGP. A network exists specifically to serve the connectivity requirements of applications, and these applications are to serve business needs. So, these applications must run on stable networks, and stable networks are built from stable routing protocols. Routing Protocols are a set of predefined rules used by the routers that interconnect your network to maintain the communication between the source and the destination. These routing protocols help to find the routes between two nodes on the computer network.


Issues with BGP: AS-Path that misses critical performance metrics

When BGP receives multiple paths to the same destination with default configurations, it runs the best-path algorithm to decide the best path to install in the IP routing table. Generally, this selection is based on AS-Path length, the number of autonomous systems the route traverses. However, AS-Path length is not an efficient measure of end-to-end transit.

It misses the overall shape of the network, which can result in longer paths being selected or paths that experience packet loss. Also, BGP changes paths only in reaction to changes in policy or in the set of available routes.

BGP protocol explained
Diagram: SD WAN tutorial and BGP protocol explained—the issues.

Issues with BGP: BGP and Active-Active deployments

Configuring BGP at the WAN edge requires the applications to fit into a previously defined network topology. We need something else for applications. BGP is hard to configure and manage when you want active-active or bandwidth aggregation. What options do you have when you want to dynamically steer sessions over multiple links?

Blackout detection only

BGP was not designed to address WAN transport brownouts caused by packet loss. Even with blackouts, i.e., complete link failures, application recovery could take tens of seconds or even minutes to become fully operational. Nowadays, we see more brownouts than blackouts, yet BGP's original design detects blackouts only.

Brownouts can last anywhere from 10 ms to 10 seconds, so it is crucial to detect the failure in sub-second time and re-route to a better path. To provide resiliency, WAN edge protocols must be combined with additional mechanisms, such as IP SLA and enhanced object tracking. Unfortunately, these add to configuration complexity.

IP SLA Configuration
Diagram: Example IP SLA configuration. Source SlidePlayer.
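
The idea behind IP SLA-style probing can be sketched as follows: send frequent probes, keep a short sliding window of results, and declare a brownout when loss or average latency crosses a threshold within that window. The window size and thresholds below are made-up numbers for illustration only.

from collections import deque

class BrownoutDetector:
    """Declare a path degraded from a sliding window of probe results."""
    def __init__(self, window=10, max_loss_pct=20.0, max_latency_ms=150.0):
        self.samples = deque(maxlen=window)   # each sample: latency in ms, or None if lost
        self.max_loss_pct = max_loss_pct
        self.max_latency_ms = max_latency_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def degraded(self):
        if not self.samples:
            return False
        lost = sum(1 for s in self.samples if s is None)
        loss_pct = 100.0 * lost / len(self.samples)
        answered = [s for s in self.samples if s is not None]
        avg_latency = sum(answered) / len(answered) if answered else float("inf")
        return loss_pct > self.max_loss_pct or avg_latency > self.max_latency_ms

det = BrownoutDetector()
for sample in [20, 22, None, 25, None, None, 30, 24, None, 26]:   # 40% loss
    det.record(sample)
print(det.degraded())   # True -> steer traffic to another overlay path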

SD WAN Tutorial: Major Environmental Changes

The hybrid WAN, typically consisting of Internet and MPLS, was introduced to save costs and resilience. However, we have had three emerging factors – new application requirements, increased Internet use, and the adoption of public cloud services that have put traditional designs under pressure.

We also have a lot of complexity at the branch. Many branch sites now include various appliances such as firewalls, intrusion prevention, Internet Protocol (IP) VPN concentrators, WAN path controllers, and WAN optimization controllers.

All these point solutions must be maintained and operated and provide the proper visibility that can be easily digested. Visibility is critical for the WAN. So, how do you obtain visibility into application performance across a hybrid WAN and ensure that applications receive appropriate prioritization and are forwarded over a proper path?

The era of client-server  

The design for the WAN and branch sites was conceived in the client-server era. At that time, the WAN design satisfied the applications' needs: applications and data resided behind the central firewall in the on-premises data center. Today, we are in a different space, with hybrid IT and multi-cloud designs making applications and data distributed. Data is now omnipresent. The type of WAN and branch that originated in the client-server era was not designed with cloud applications in mind.

Hub and spoke designs.

The “hub and spoke” model was designed for client/server environments where almost all of an organization’s data and applications resided in the data center (i.e., the hub location) and were accessed by workers in branch locations (i.e., the spokes).  Internet traffic would enter the enterprise through a single ingress/egress point, typically into the data center, which would then pass through the hub and to the users in branch offices.

The birth of the cloud resulted in a significant shift in how we consume applications, traffic types, and network topology. There was a big push to the cloud, and almost everything was offered as a SaaS. In addition, the cloud era changed the traffic patterns as the traffic goes directly to the cloud from the branch site and doesn’t need to be backhauled to the on-premise data center.

network design
Diagram: Hub and Spoke: Network design.

Challenges with hub and spoke design.

The hub-and-spoke model is now outdated. Because the model is centralized, day-to-day operations can be relatively inflexible, and changes at the hub, even to a single route, may have unexpected consequences throughout the network. It may be difficult or even impossible to handle occasional periods of high demand between two spokes.

Cloud acceleration means the best point of access is not always the central location. Why would branch sites direct all internet-bound traffic to the central HQ, causing traffic tromboning and adding latency, when it can go directly to the cloud? The hub-and-spoke design is no longer an efficient topology for cloud-based applications.

Active/Active and Active/Passive

Historically, WANs were built "active-passive": a branch can be connected using two or more links, but only the primary link is active and passing traffic. In this scenario, the backup connection becomes active only if the primary connection fails. While this might seem sensible, it leaves paid-for capacity idle and is inefficient.

The interest in active-active has always been there, but it was challenging to configure and expensive to implement. In addition, active/active designs with traditional routing protocols are hard to design, inflexible, and a nightmare to troubleshoot.

Convergence and application performance problems can also arise from active-active WAN edge designs. Packets sent down both links can arrive at the other end out of order because each link propagates at a different speed, and the remote end has to reorder them, adding jitter and delay. Both high jitter and high delay are bad for network performance.

The issues arising from active-active are often known as spray and pray. It increases bandwidth but decreases goodput. Spraying packets down both links can result in 20% drops or packet reordering. There will also be firewall issues as they may see asymmetric routes.

TCP out of order packets
Diagram: TCP out-of-order packets. Source F5.

SD-WAN tutorial and SD WAN requirements and active-active paths.

For an active-active design, you need application and session awareness and a design that eliminates asymmetric routing. In addition, you need to slice up the WAN so application flows can work efficiently over either link. SD-WAN does this. WAN designs can also be active-standby, which requires routing protocol convergence in the event of a primary link failure.

Unfortunately, routing protocols are known to converge slowly. The emergence of SD-WAN technologies with multi-path capabilities combined with the ubiquity of broadband has made active-active highly attractive and something any business can deploy and manage quickly and easily.

An SD-WAN solution enables the creation of virtual overlays that bond multiple underlay links. Virtual overlays allow enterprises to classify and categorize applications based on their unique service-level requirements and provide fast failover should an underlay link experience congestion, a brownout, or an outage.

With traditional routing, regardless of the mechanism used to speed up convergence and failure detection, several convergence steps must be carried out: a) detecting the topology change, b) notifying the rest of the network about the change, c) calculating the new best path, and d) switching to the new best path. Traditional WAN protocols route down one path and, by default, have no awareness of what is happening at the application level. For this reason, there have been many attempts to enhance the WAN's behavior.

Example Convergence Time with OSPF
Diagram: Example Convergence Time with OSPF. Source INE.

A key point for this SD-WAN tutorial: the issues with MPLS

multiprotocol label switching
Diagram: Multiprotocol label switching (MPLS).

MPLS has some great features but is only suitable for some application profiles. As a result, it can introduce more points of failure than traditional internet transport. Its architecture is predefined and, in some cases, inflexible. For example, some Service Providers (SP) might only offer hub and spoke topologies, and others only offer a full mesh.  Any changes to these predefined architectures will require manual intervention unless you have a very flexible MPLS service provider that allows you to do cool stuff with Route Targets.

MPLS forwarding
Diagram: MPLS forwarding

SD-WAN Tutorial and Scenario: Old and rigid MPLS

I designed a headquarters site for a large enterprise during a recent consultancy. MPLS topologies, once provisioned, are challenging to change; they are similar to the brick foundation of a house. Once the foundation is laid, you cannot change the original structure without starting over. In its simplest form, an MPLS network has Provider Edge (PE) and Provider (P) routers. The P router configuration does not change based on customer requirements, but the PE router configuration does.

Route Targets

We have several technologies, such as Route Targets, to control which routes are imported into and exported from PE routers. A PE router passes routes whose route targets match its configured values, and this creates the customer topologies, such as hub and spoke or full mesh. In addition, the Wide Area Network (WAN) I worked on was fully outsourced. As a result, any request would require service provider intervention with additional design and provisioning activities.

For example, mapping application subnets to a new or existing RT might involve a fresh high-level design approval, plus additional configuration templates to be applied by the provisioning teams. It was a lot of work for such a small task. Unfortunately, it puts the brakes on agility and pushes lead times through the roof.

BGP community tagging

While there are ways to overcome this with BGP community tagging and matching, which provides some flexibility, we must recognize that it remains a fixed, predefined configuration. As a result, all subsequent design changes may still require service provider intervention.

SD WAN Requirements

sd wan requirements
Diagram: SD-WAN: The drivers for SD-WAN.

In the following sections of this SD-WAN tutorial, we will address the SD-WAN drivers, which range from the need for flexible topologies to bandwidth-intensive applications.

SD-WAN tutorial and SD WAN requirements: Flexible topologies

For example, using DPI, we can have Voice over IP traffic go over MPLS. Here, the SD-WAN looks at the Real-time Transport Protocol (RTP) and the Session Initiation Protocol (SIP). Less critical applications can go over the Internet, so MPLS is used only for specific apps.

As a result, best-effort traffic is pinned to the Internet, and only critical apps get an SLA and take the MPLS path. Now we have better utilization of the transports, and circuits never need to sit dormant. With SD-WAN, we use the bandwidth that is available and ensure an optimized experience.

The SD-WAN's value is that the solution tracks network and path conditions in real time, revealing performance issues as they happen, and then dynamically redirects data traffic to the next available path.

Then, when the network recovers to its normal state, the SD-WAN solution can redirect traffic back to its original path. Therefore, the effects of network degradation, which come in the form of brownouts and soft failures, can be minimized.
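
A minimal sketch of that behavior: classify a flow into an application class, pin critical classes to the SLA path while it is healthy, fail over when monitoring marks the path degraded, and fail back when it recovers. The application names, class mapping, and path labels below are illustrative assumptions.

APP_CLASS = {"sip": "voice", "rtp": "voice", "o365": "saas", "web": "best-effort"}

PREFERRED = {"voice": ["mpls", "inet"], "saas": ["inet", "mpls"],
             "best-effort": ["inet"]}

def steer(app, path_health):
    """Return the first healthy preferred path for this application class."""
    app_class = APP_CLASS.get(app, "best-effort")
    for path in PREFERRED[app_class]:
        if path_health.get(path, False):
            return path
    return "drop"                       # no healthy path available

health = {"mpls": True, "inet": True}
print(steer("rtp", health))             # voice -> mpls while MPLS is healthy

health["mpls"] = False                  # brownout detected on MPLS
print(steer("rtp", health))             # voice fails over to inet

health["mpls"] = True                   # MPLS recovers
print(steer("rtp", health))             # voice returns to its original path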

VPN Segmentation
Diagram: VPN Segmentation. Source Cisco.

SD-WAN tutorial and SD WAN requirements: Encryption key rotation

Data security has never been a more important consideration than it is today. Therefore, businesses and other organizations must take robust measures to keep data and information safely under lock and key. Encryption keys must be rotated regularly (the standard interval is every 90 days) to reduce the risk of compromised data security.

However, regular VPN-based encryption key rotation can be complicated and disruptive, often requiring downtime. SD-WAN can offer automatic key rotation, allowing network administrators to pre-program rotations without manual intervention or system downtime.
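
As a sketch of the automation idea only (not any vendor's mechanism), the Python snippet below generates a fresh key on a fixed schedule and keeps the previous key briefly so that in-flight traffic can re-key without downtime. The 90-day interval and one-hour overlap are illustrative values.

import os
from datetime import datetime, timedelta, timezone

ROTATION_INTERVAL = timedelta(days=90)   # the commonly cited rotation interval
OVERLAP = timedelta(hours=1)             # keep the old key valid while peers re-key

def _now():
    return datetime.now(timezone.utc)

class KeyManager:
    def __init__(self):
        self.current = os.urandom(32)     # 256-bit data-plane key
        self.previous = None
        self.rotated_at = _now()

    def maybe_rotate(self, now=None):
        now = now or _now()
        if now - self.rotated_at >= ROTATION_INTERVAL:
            self.previous, self.current = self.current, os.urandom(32)
            self.rotated_at = now
            return True
        return False

    def valid_keys(self, now=None):
        now = now or _now()
        keys = [self.current]
        if self.previous is not None and now - self.rotated_at < OVERLAP:
            keys.append(self.previous)    # traffic still using the old key is accepted
        return keys

km = KeyManager()
later = km.rotated_at + ROTATION_INTERVAL
print(km.maybe_rotate(later))            # True: a new key is generated on schedule
print(len(km.valid_keys(later)))         # 2: old and new keys overlap briefly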

SD-WAN tutorial and SD WAN requirements: Push to the cloud 

Another critical feature of SD-WAN technology is cloud breakout. This lets you connect branch office users to cloud-hosted applications directly and securely, eliminating the inefficiencies of backhauling cloud-destined traffic through the data center. Given the ever-growing importance of SaaS and IaaS services, efficient and reliable access to the cloud is crucial for many businesses and other organizations. By simplifying how branch traffic is routed, SD-WAN makes setting up breakouts quicker and easier.

  • The changing perimeter location

Users are no longer positioned in one location with corporate-owned static devices. Instead, they are dispersed, and the additional latency of connecting back to central locations degrades application performance. Optimizations can be made to applications and network devices, but the only real solution is to shorten the path by moving to cloud-based applications. There is a huge push, and rapid movement, toward cloud-based applications; most organizations are now moving away from on-premises, in-house hosting to cloud-based management.

The ready-made global footprint enables the use of SaaS-based platforms, which negates the drawbacks of dispersed users tromboning to a central data center to access applications. Logically positioned cloud platforms are closer to the mobile user. In addition, hosting these applications in the cloud is far more efficient than serving them from a central data center over the public Internet.


SD-WAN tutorial and SD WAN requirements: Decentralization of traffic

A lot of traffic is now decentralized from the central data center to remote branch sites. Many branches do not run high bandwidth-intensive applications. These types of branch sites are known as light edges. Despite the traffic change, the traditional branch sites rely on hub sites for most security and network services.

The branch sites should connect to the cloud applications directly over the Internet without tromboning traffic to data centers for Internet access or security services. An option should exist to extend the security perimeter into the branch sites without requiring expensive onsite firewalls and IPS/IDS. SD-WAN builds a dynamic security fabric without the appliance sprawl of multiple security devices and vendors.

  • The ability to service chain traffic 

There is also service chaining. Service chaining through SD-WAN allows organizations to reroute their data traffic through one or multiple services, including intrusion detection and prevention devices or cloud-based security services, and thereby lets firms declutter their branch office networks.

They can, after all, automate how particular types of traffic flows are handled and assemble connected network services into a single chain.
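
A toy sketch of what a service-chain definition could look like: an ordered list of services that matching traffic passes through before normal forwarding. The segment names and services below are invented for illustration.

# Hypothetical service chain: guest Wi-Fi traffic is steered through two
# inspection services before it is allowed out to the internet.
SERVICE_CHAINS = {
    "guest-wifi": ["cloud-swg", "ips"],      # ordered list of services
    "default":    [],                         # most traffic bypasses the chain
}

def apply_chain(flow):
    chain = SERVICE_CHAINS.get(flow["segment"], SERVICE_CHAINS["default"])
    hops = chain + ["wan-edge"]               # last hop is normal forwarding
    return " -> ".join(hops)

print(apply_chain({"segment": "guest-wifi", "app": "web"}))
# cloud-swg -> ips -> wan-edge
print(apply_chain({"segment": "corporate", "app": "web"}))
# wan-edge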

SD-WAN tutorial and SD WAN requirements: Bandwidth-intensive applications 

Exponential growth in demand for high-bandwidth applications, such as multimedia in cellular networks, has triggered the need for new technologies capable of providing high-bandwidth, reliable links in wireless environments. The biggest user of internet bandwidth is video streaming, which accounts for more than half of total global traffic. The Cartesian study confirms historical trends: consumer usage remains highly asymmetric, with video streaming the most popular.

  • Richer and hungry applications

Richer applications, multimedia traffic, and growth in the cloud application consumption model drive the need for additional bandwidth. Unfortunately, the congestion leads to packet drops, ultimately degrading application performance and user experience.

SD-WAN offers flexible bandwidth allocation so that you don’t have to go through the hassle of manually allocating bandwidth for specific applications. Instead, SD-WAN allows you to classify applications and specify a particular service level requirement. This way, you can ensure your set-up is better equipped to run smoothly, minimizing the risk of glitchy and delayed performance on an audio conference call.

SD-WAN tutorial and SD WAN requirements: Organic growth 

We also have organic business growth, a big driver for additional bandwidth requirements. The challenge is that existing network infrastructures are static and struggle to respond to this growth in a reasonable period. The last mile of MPLS puts a lock on you, destroying agility. Circuit lead times impede the organization's productivity and create an overall lag.

SD-WAN tutorial and SD WAN requirements: Costs 

A WAN solution should be simple. To serve the new era of applications, we need to increase the link capacity by buying more bandwidth. However, life is more complex. The WAN is an expensive part of the network, and employing link oversubscription to reduce the congestion is too costly.

Bandwidth comes at a high cost to cater to new application requirements not met by the existing TDM-based MPLS architectures. At the same time, feature-rich MPLS comes at a high price for relatively low bandwidth. You are going to need more bandwidth to beat latency.

On the more traditional side, MPLS and private ethernet lines (EPLs) can range in cost from $700 to $10,000 per month, depending on bandwidth size and distance of the link itself. Some enterprises must also account for redundancies at each site as uptime for higher-priority sites comes into play. Cost becomes exponential when you have a large number of sites to deploy.

SD-WAN tutorial and SD-WAN requirements: Limitations of protocols 

We already mentioned some problems with routing protocols, but leaving IPsec at its defaults also raises challenges. The IPsec architecture is point-to-point, not site-to-site, so it does not natively support redundant uplinks. Complex configurations and potentially additional protocols are required when sites have multiple uplinks to multiple providers.

Left at its defaults, IPsec is not abstracted, and one session cannot be spread over multiple uplinks. This causes challenges with transport failover and path selection. Secure tunnels should be set up and torn down immediately, and new sites should be incorporated into the secure overlay without much delay or manual intervention.

SD-WAN tutorial and SD WAN requirements: Internet of Things (IoT)

As millions of IoT devices come online, how do we further segment and secure this traffic without complicating the network design? There will be many dumb IoT devices that will require communication with the IoT platform in a remote location. Therefore, will there be increased signaling traffic over the WAN? 

Security and bandwidth consumption are vital issues concerning the introduction of IP-enabled objects. Although encryption is a great way to prevent hackers from accessing data, it is also one of the leading IoT security challenges.

Many IoT devices lack the storage and processing capabilities found on a traditional computer. The result is increased attacks, where hackers can more easily manipulate the algorithms designed for protection. Weak credentials and login details also leave nearly all IoT devices vulnerable to password hacking and brute force. Any company that uses factory-default credentials on its devices places its business, assets, customers, and valuable information at risk of a brute-force attack.

SD-WAN tutorial and SD WAN requirements: Visibility

A common service provider challenge is the lack of visibility into customer traffic. The lack of granular detail on traffic profiles leads to expensive over-provisioning of bandwidth and link resilience. In addition, upgrades at both the packet and optical layers often require complete traffic visibility to be justified.

Many networks out there are left running at half capacity just in case there is an unexpected spike in traffic. As a result, money that should be spent on innovation is spent on underutilized links. This underutilization and over-provisioning is due to a lack of visibility.

Summary: SD WAN Tutorial

SD-WAN, or Software-Defined Wide Area Networks, has emerged as a game-changing technology in the realm of networking. This tutorial delved into SD-WAN fundamentals, its benefits, and how it revolutionizes traditional WAN infrastructures.

Section 2: Understanding SD-WAN

SD-WAN is an innovative approach to networking that simplifies the management and operation of a wide area network. It utilizes software-defined principles to abstract the underlying network infrastructure and provide centralized control, visibility, and policy-based management.

Section 3: Key Features and Benefits

One of the critical features of SD-WAN is its ability to optimize network performance by intelligently routing traffic over multiple paths, including MPLS, broadband, and LTE. This enables organizations to leverage cost-effective internet connections without compromising performance or reliability. Additionally, SD-WAN offers enhanced security measures, such as encrypted tunneling and integrated firewall capabilities.

Section 4: Deployment and Implementation

Implementing SD-WAN requires careful planning and consideration. This section will explore the different deployment models, including on-premises, cloud-based, and hybrid approaches. We will discuss the necessary steps in deploying SD-WAN, from initial assessment and design to configuration and ongoing management.

Section 5: Use Cases and Real-World Examples

SD-WAN has gained traction across various industries due to its versatility and cost-saving potential. This section will showcase notable use cases, such as retail, healthcare, and remote office connectivity, highlighting the benefits and outcomes of SD-WAN implementation. Real-world examples will provide practical insights into the transformative capabilities of SD-WAN.

Section 6: Future Trends and Considerations

As technology continues to evolve, staying updated on the latest trends and considerations in the SD-WAN landscape is crucial. This section will explore emerging concepts, such as AI-driven SD-WAN and integrating SD-WAN with edge computing and IoT technologies. Understanding these trends will help organizations stay ahead in the ever-evolving networking realm.

Conclusion:

In conclusion, SD-WAN represents a paradigm shift in how wide area networks are designed and managed. Its ability to optimize performance, ensure security, and reduce costs has made it an attractive solution for organizations of all sizes. By understanding the fundamentals, exploring deployment options, and staying informed about the latest trends, businesses can leverage SD-WAN to unlock new possibilities and drive digital transformation.


Matt Conran | Network World

Hello, I have created a Network World column and will be releasing a few blogs per month. Kindly visit the following link to view my full profile and recent blogs – Matt Conran Network World.

The list of individual blogs can be found here:

“In this day and age, demands on networks are coming from a variety of sources, internal end-users, external customers and via changes in the application architecture. Such demands put pressure on traditional architectures.

To deal effectively with these demands requires the network domain to become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point that networks suffer from is the necessity to dispense with manual working, which lacks fabric wide automation. This must be addressed if organizations are to implement new products and services ahead of the competition.

So, to evolve, to be in line with the current times and use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.”

“There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then Barracuda highlighted SASE in a recent PR update and Zscaler also discussed it in their earnings call. Most recently, Cato Networks announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.

Today, the enterprises have upgraded their portfolio and as a consequence, the ramifications of the network also need to be enhanced. What we are witnessing is cloud, mobility, and edge, which has resulted in increased pressure on the legacy network and security architecture. Enterprises are transitioning all users, applications, and data located on-premise, to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.”

“Microsoft has introduced a new virtual WAN as a competitive differentiator and is getting enough tracking that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of this technology. So I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research to discuss. The following is a summary of our discussion.

But before we proceed, let’s gain some understanding of the cloud connectivity.

Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, let’s say, if you were an enterprise, you would connect to what’s known as a cloud service provider (CSP). However, over the last 10 years, many providers like Equinix have started to offer carrier-neutral collocations. Now, there is the opportunity to meet a variety of cloud companies in a carrier-neutral colocation. On the other hand, there are certain limitations as well as cloud connectivity.”

“Actions speak louder than words. Reliable actions build lasting trust in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in the house, correct? Now, what if that wall is dismantled? You might start to feel your security is under threat. Anyone could have easy access to your house.

In the same way, with traditional security products: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust can even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on SDP (software-defined perimeter).

Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources. It is only after sufficient trust has been established and the necessary controls are passed, that the access is granted, providing a path to the requested resource. The access path to the resource is designed differently, depending on whether it’s a client or service-initiated software-defined perimeter solution.”

“Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. Now there is a significant need to design and integrate devices from multiple vendors and employ new technologies, such as virtualization and cloud services.

Therefore, every network is a unique snowflake. You will never come across two identical networks. The products offered by the vendors act as the building blocks for engineers to design solutions that work for them. If we all had a simple and predictable network, this would not be a problem. But there are no global references to follow and designs vary from organization to organization. These lead to network variation even while offering similar services.

It is estimated that over 60% of users consider their I.T environment to be more complex than it was 2 years ago. We can only assume that network complexity is going to increase in the future.”

“We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from the management’s perspective, is now redundant. Today, the users and data are omnipresent.

The customer’s expectations have up-surged because of this evolution. There is now an increased expectation of high-quality service and a decrease in customer’s patience. In the past, one could patiently wait 10 hours to download the content. But this is certainly not the scenario at the present time. Nowadays we have high expectations and high-performance requirements but on the other hand, there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, buffer bloat and a list of other performance-related problems that I wrote about on Network Insight. [Disclaimer: the author is employed by Network Insight.]

Also, the internet is growing at an accelerated rate. By the year 2020, the internet is expected to reach 1.5 Gigabyte of traffic per day per person. In the coming times, the world of the Internet of Things (IoT) driven by objects will far supersede these data figures as well. For example, a connected airplane will generate around 5 Terabytes of data per day. This spiraling level of volume requires a new approach to data management and forces us to re-think how we delivery applications.”

“Deploying zero trust software-defined perimeter (SDP) architecture is not about completely replacing virtual private network (VPN) technologies and firewalls. By and large, the firewall demarcation points that mark the inside and outside are not going away anytime soon. The VPN concentrator will also have its position for the foreseeable future.

A rip and replace is a very aggressive deployment approach regardless of the age of technology. And while SDP is new, it should be approached with care when choosing the correct vendor. An SDP adoption should be a slow migration process as opposed to the once off rip and replace.

As I wrote about on Network Insight [Disclaimer: the author is employed by Network Insight], while SDP is a disruptive technology, after discussing with numerous SDP vendors, I have discovered that the current SDP landscape tends to be based on specific use cases and projects, as opposed to a technology that has to be implemented globally. To start with, you should be able to implement SDP for only certain user segments.”

“Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.

More often than not the fixed perimeter consists of a number of network and security appliances, thereby creating a service chained stack, resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.

The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it’s just a matter of time. Someone with enough skill will eventually get through.”

“In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud-presence across on-premise data centers and remote site locations.

The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.

A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both; networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity, there are many more related aspects.”

“As the cloud service providers and search engines started with the structuring process of their business, they quickly ran into the problems of managing the networking equipment. Ultimately, after a few rounds of getting the network vendors to understand their problems, these hyperscale network operators revolted.

Primarily, what the operators were looking for was a level of control in managing their network which the network vendors couldn’t offer. The revolution burned the path that introduced open networking, and network disaggregation to the work of networking. Let us first learn about disaggregation followed by open networking.”

“I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.

The first wave signifies that networking is moving to the software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially supported in the SD-WAN market rush.

Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.”

“BGP (Border Gateway Protocol) is considered the glue of the internet. If we view through the lens of farsightedness, however, there’s a question that still remains unanswered for the future. Will BGP have the ability to route on the best path versus the shortest path?

There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as, sending out pings to monitor the network and then modifying the BGP attributes, such as the AS prepending to make BGP do the performance-based routing (PBR). However, this falls short in a number of ways.

The problem with BGP is that it’s not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance.”

“The transformation to the digital age has introduced significant changes to the cloud and data center environments. This has compelled the organizations to innovate more quickly than ever before. This, however, brings with it both – the advantages and disadvantages.

The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the digital age, then ultimately bad actors will become a hazard. Therefore, the organizations must move to a zero-trust environment: default deny, with least privilege access. In today’s evolving digital world this is the primary key to success.

Ideally, a comprehensive solution must provide protection across all platforms including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time, therefore we need to equip our architecture with zero-trust.”

“With the introduction of cloud, BYOD, IoT, and virtual offices scattered around the globe, the traditional architectures not only hold us back in terms of productivity but also create security flaws that leave gaps for compromise.

The network and security architectures that are commonly deployed today are not fit for today’s digital world. They were designed for another time, a time of the past. This could sound daunting…and it indeed is.”

“The Internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), but the current communication model is still focused on the “where.”

The Internet has evolved to be dominated by content distribution and retrieval. As a matter of fact, networking protocols still focus on the connection between hosts that surfaces many challenges.

The most obvious solution is to replace the “where” with the “what” and this is what Named Data Networking (NDN) proposes. NDN uses named content as opposed to host identifiers as its abstraction.”

“Today, connectivity to the Internet is easy; you simply get an Ethernet driver and hook up the TCP/IP protocol stack. Then dissimilar network types in remote locations can communicate with each other. However, before the introduction of the TCP/IP model, networks were manually connected but with the TCP/IP stack, the networks can connect themselves up, nice and easy. This eventually caused the Internet to explode, followed by the World Wide Web.

So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node by using a point-to-point communication channel with IP addresses as identifiers for the source and destination. Ideally, a network ships the data bits. You can either name the locations to ship the bits to or name the bits themselves. Today’s TCP/IP protocol architecture picked the first option. Let’s discuss the section option later in the article.

It essentially follows the communication model used by circuit-switched telephone networks. We replaced phone numbers with IP addresses and circuit switching with packet switching and datagram delivery. But the point-to-point, location-based model stayed the same. That made sense in the old days, but not today, as the view of the world has changed considerably. Computing and communication technologies have advanced rapidly.”

“Technology is always evolving. However, in recent times, two significant changes have emerged in the world of networking. Firstly, networking is moving to software that can run on commodity off-the-shelf hardware. Secondly, we are witnessing the introduction and use of many open-source technologies, removing the barrier to entry for new product innovation and rapid market access.

Networking is the last bastion within IT to adopt open source. Consequently, the networking industry has been hit hard by a slow pace of innovation and high costs. Every other element of IT has seen radical technology and cost-model changes over the past 10 years. However, IP networking has not changed much since the mid-’90s.

When I became aware of these trends, I decided to sit down with Sorell Slaymaker to analyze the evolution and determine how it will influence the market in the coming years.”

“Ideally, meeting the business objectives of speed, agility, and cost containment boils down to two architectural approaches: the legacy telco versus the cloud-based provider.

Today, the wide area network (WAN) is a vital enterprise resource. Its uptime, often targeting availability of 99.999%, is essential to maintain the productivity of employees and partners and also for maintaining the business’s competitive edge.

Historically, enterprises had two options for WAN management models — do it yourself (DIY) and a managed network service (MNS). Under the DIY model, the IT networking and security teams build the WAN by integrating multiple components including MPLS service providers, internet service providers (ISPs), edge routers, WAN optimizer, and firewalls.

Those teams are responsible for keeping that infrastructure current and optimized. They configure and adjust the network for changes, troubleshoot outages, and ensure that the network is secure. Since this is not a trivial task, many organizations have switched to an MNS, outsourcing the buildout, configuration, and ongoing management, often to a regional telco.”

“To undergo the transition from legacy to cloud-native application environments you need to employ zero trust.

Enterprises operating in the traditional monolithic environment may have strict organizational structures. As a result, the requirement for security may restrain them from transitioning to a hybrid or cloud-native application deployment model.

In spite of the obvious difficulties, the majority of enterprises want to take advantage of cloud-native capabilities. Today, most entities are considering or evaluating cloud-native to enhance their customer’s experience. In some cases, it is the ability to draw richer customer market analytics or to provide operational excellence.

Cloud-native is a key strategic agenda that allows customers to take advantage of many new capabilities and frameworks. It enables organizations to build and evolve going forward to gain an edge over their competitors.”

“Domain name system (DNS) over transport layer security (TLS) adds an extra layer of encryption, but in what way does it impact your IP network traffic? The additional layer of encryption means that controlling what is happening over the network is likely to become challenging.

Most noticeably, it will prevent ISPs and enterprises from monitoring the user’s site activity, and it will also have negative implications for both wide area network (WAN) optimization and SD-WAN vendors.

During a recent call with Sorell Slaymaker, we rolled back in time and discussed how we got here, to a world that will soon be fully encrypted. We started with SSL 1.0, the original secure version of HTTPS as opposed to non-secure HTTP. It had many security vulnerabilities, and the protocol has since evolved through successive SSL and TLS versions up to TLS 1.2.”

“Delivering global SD-WAN is very different from delivering local networks. Local networks offer complete control to the end-to-end design, enabling low-latency and predictable connections. There might still be blackouts and brownouts but you’re in control and can troubleshoot accordingly with appropriate visibility.

With global SD-WANs, though, managing the middle-mile/backbone performance and managing the last-mile are, well shall we say, more challenging. Most SD-WAN vendors don’t have control over these two segments, which affects application performance and service agility.

In particular, an issue that SD-WAN appliance vendors often overlook is the management of the last-mile. With multiprotocol label switching (MPLS), the provider assumes the responsibility, but this is no longer the case with SD-WAN. Getting the last-mile right is challenging for many global SD-WANs.”

“Today’s threat landscape consists of skilled, organized, and well-funded bad actors. They have many goals, including exfiltrating sensitive data for political or economic motives. To combat these threats, the cybersecurity market must expand at an even greater rate.

IT leaders must evolve their security framework if they want to stay ahead of cyber threats. The evolution in security we are witnessing tilts towards the Zero-Trust model and the software-defined perimeter (SDP), also called a “Black Cloud”. The principle of its design is based on the need-to-know model.

The Zero-Trust model says that anyone attempting to access a resource must first be authenticated and authorized. Users cannot connect to unauthorized resources, since those resources are invisible, left in the dark. For additional protection, the Zero-Trust model can be combined with machine learning (ML) to discover risky user behavior, and it can also be applied to conditional access.”

“There are three types of applications: applications that manage the business, applications that run the business, and miscellaneous apps.

A security breach or performance-related issue in an application that runs the business would undoubtedly impact top-line revenue. For example, an issue in a hotel booking system would directly affect top-line revenue, as opposed to an outage in Office 365.

It is a general assumption that cloud deployments will suffer from business-impacting performance issues due to the network. The objective is to have applications within 25ms (one-way) of the users who use them. However, too many network architectures backhaul traffic from the private network across to the public internetwork.”

“Back in the early 2000s, I was the sole network engineer at a startup. By day, my role included managing four floors and 22 European locations packed with different vendors and servers across three companies. In the evenings, I administered the largest enterprise streaming network in Europe with a group of highly skilled staff.

Since we were an early startup, combined roles were the norm. I’m sure that most of you who joined as young engineers in such situations can understand how I felt back then. However, it was a good experience, so I battled through it. To keep my evenings stress-free and without any IT calls, I had to design in as much high availability (HA) as I possibly could. After all, all the interesting technological learning was in the second part of my day, working with content delivery mechanisms and complex routing. All of this came back to me when I read a recent post on Cato Networks’ self-healing SD-WAN for global enterprise networks.

Cato is enriching the self-healing capabilities of Cato Cloud. Rather than the enterprise having the skill and knowledge to think about every type of failure in an HA design, the Cato Cloud now heals itself end-to-end, ensuring service continuity.”

While computing, storage, and programming have changed dramatically and become simpler and cheaper over the last 20 years, IP networking has not. IP networking is still stuck in the mid-1990s.

Realistically, when I look at ways to upgrade or improve a network, the approach falls into two separate buckets: one is tactical, the other strategic. For example, when I look at IPv6, I see it as a tactical move. There aren’t many business value-adds.

In fact, there are drawbacks, such as additional overhead, minimal interworking QoS between IPv4 and IPv6, zero application awareness, and a continued lack of security. I do not intend to say that one should not upgrade to IPv6; it does give you more IP addresses (if you need them) and better multicast capabilities, but it’s a tactical move.

It was about 20 years ago when I plugged my first Ethernet cable into a switch. It was for our new chief executive officer. Little did she know that she was about to share her traffic with most others on the first floor. As a network engineer at the time, I had five floors to look after.

Having a few virtual LANs (VLANs) per floor was a common design practice in those traditional days. Essentially, a couple of broadcast domains per floor were deemed OK. With the VLAN-based approach, we gave access to different people on the same subnet. Even though people worked at different levels, if they were in the same subnet, they were all treated the same.

The web application firewall (WAF) issue didn’t seem like a big deal to me until I actually started to dig deeper into the ongoing discussion in this field. It generally seems that vendors are trying to convince customers, and themselves, that everything is going smoothly and that there is no problem. In reality, however, customers don’t buy it anymore, and the WAF industry is under major pressure because it keeps failing customers from a quality perspective.

Red flags have also been raised over the use of runtime application self-protection (RASP) technology. There is now a trend to embed the mitigation/defense side into the application and compile it within the code. Runtime application self-protection is considered a shortcut to securing software, and it is compounded by performance problems. It seems to be a desperate attempt to replace WAFs, as no one really likes to mix a “security appliance” into the application code, which is exactly what the RASP vendors are currently offering their customers. Nevertheless, some vendors are adopting RASP technology.

“John Kindervag, a former analyst from Forrester Research, was the first to introduce the Zero-Trust model back in 2010. The focus then was more on the application layer. However, once I heard that Sorell Slaymaker from Techvision Research was pushing the topic at the network level, I couldn’t resist giving him a call to discuss the generals on Zero Trust Networking (ZTN). During the conversation, he shone a light on numerous known and unknown facts about Zero Trust Networking that could prove useful to anyone.

The traditional world of networking started with static domains. The classical network model divided clients and users into two groups: trusted and untrusted. The trusted are those inside the internal network; the untrusted are external to the network, which could be either mobile users or partner networks. To recast the untrusted as trusted, one would typically use a virtual private network (VPN) to access the internal network.”

“Over the last few years, I have been spread across so many technologies that I have forgotten where my roots began in the world of the data center. Therefore, I decided to delve deeper into what’s prevalent and headed straight to Ivan Pepelnjak’s Ethernet VPN (EVPN) webinar hosted by Dinesh Dutt. I knew of the distinguished Dinesh since he was the chief scientist at Cumulus Networks, and for me, he is a leader in this field. Before reading his book on EVPN, I decided to give Dinesh a call to exchange views about the beginnings of EVPN. We talked about the practicalities and limitations of the data center. Here is an excerpt from our discussion.”

“If you still live in a world of the script-driven approach for both service provider and enterprise networks, you are going to reach limits. There is only so far you can go alone. It creates a gap that lacks modeling and database at a higher layer. Production-grade service provider and enterprise networks require a production-grade automation framework.

In today’s environment, the network infrastructure acts as the core centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially. In the present day, it largely influences the critical business functions of both service provider and enterprise environments.”

“At the present time, there is a remarkable trend toward application modularization that splits the large, hard-to-change monolith into a focused, cloud-native microservices architecture. The monolith keeps much of its state in memory and replicates it between instances, which makes it hard to split and scale. Scaling up can be expensive, and scaling out requires replicating the state and the entire application, rather than only the parts that need to be replicated.

Microservices, in comparison, separate the logic from the state. That separation enables the application to be broken apart into a number of smaller, more manageable units, making them easier to scale. A microservices environment therefore consists of multiple services communicating with each other. All the communication between services is initiated and carried out with network calls, and services are exposed via application programming interfaces (APIs). Each service has its own purpose and serves a unique business value.”

“When I stepped into the field of networking, everything was static, and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls, internal and external to the wide area network (WAN). Such a layout was good enough in those days.

I remember the time when connected devices were corporate-owned. Everything was hard-wired, and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time-consuming but also error-prone.

There was a complete lack of visibility and global policy throughout the network, and every morning I relied on the Multi Router Traffic Grapher (MRTG) to manually inspect traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life”. Have you ever heard of the 20-year-old PC that no one knows the location of, but that still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, making perimeter-level firewalling alone insufficient.”

“Recently, I was reading a blog post by Ivan Pepelnjak on intent-based networking. He notes that the definition of intent is “a usually clearly formulated or planned intention” and that the word “intention” is defined as “what one intends to do or bring about.” I started to ponder his point that the definition is confusing, as there are many variations.

To guide my understanding, I decided to delve deeper into the building blocks of intent-based networking, which led me to a variety of closed-loop automation solutions. After extensive research, my view is that closed-loop automation is a prerequisite for intent-based networking. Keeping in mind current requirements, it’s a solution that businesses can deploy today.

Now that I have examined different vendors, I would recommend taking a bird’s-eye view to make sure the solution overcomes today’s business and technical challenges. The outputs should drive a future-proof solution.”

“What keeps me awake at night is the thought of artificial intelligence lying in wait in the hands of bad actors. Artificial intelligence combined with the power of IoT-based attacks will create an environment ripe for mayhem. It is easy to write about, but it is hard for security professionals to combat. AI-driven attacks have more force, severity, and impact, and can change the face of a network and application in seconds.

When I think of the capabilities artificial intelligence has in the world of cybersecurity I know that unless we prepare well we will be like Bambi walking in the woods. The time is now to prepare for the unknown. Security professionals must examine the classical defense mechanisms in place to determine if they can withstand an attack based on artificial intelligence.”

“When I began my journey in 2015 with SD-WAN, the implementation requirements were different to what they are today. Initially, I deployed pilot sites for internal reachability. This was not a design flaw, but a solution requirement set by the options available to SD-WAN at that time. The initial requirement when designing SD-WAN was to replace multiprotocol label switching (MPLS) and connect the internal resources together.

Our projects gained the benefits of SD-WAN deployments. It certainly added value, but there were compelling constraints. In particular, we were limited to internal resources and users, yet our architecture consisted of remote partners and mobile workers. The real challenge for SD-WAN vendors is not solely to satisfy internal reachability. The wide area network (WAN) must support a range of different entities that require network access from multiple locations.”

“Applications have become a key driver of revenue, rather than their previous role as merely a tool to support the business process. What acts as the heart for all applications is the network providing the connection points. Due to the new, critical importance of the application layer, IT professionals are looking for ways to improve the architecture of their network.

A new era of campus network design is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm.

SD-Access is an example of an intent-based network within the campus. It is broken down into three major elements:

  1. Control-Plane based on Locator/ID separation protocol (LISP),
  2. Data-Plane based on Virtual Extensible LAN (VXLAN) and
  3. Policy-Plane based on Cisco TrustSec.”
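As a rough illustration of the data-plane element, the sketch below builds a VXLAN-encapsulated frame in Scapy. This is only a hedged example: the addresses and the VNI of 5010 are made up, and Scapy must be installed for it to run. The inner Ethernet frame is wrapped in UDP port 4789 with a VXLAN header carrying the segment identifier.

```python
# A minimal sketch of VXLAN encapsulation, the data-plane idea behind SD-Access.
# Requires Scapy (pip install scapy); all addresses and the VNI are illustrative.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original (inner) frame from the access network.
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / IP(
    src="10.1.1.10", dst="10.1.2.20"
)

# The overlay encapsulation added by the ingress edge node (VTEP).
outer = (
    Ether()
    / IP(src="192.0.2.1", dst="192.0.2.2")  # underlay VTEP addresses
    / UDP(sport=49152, dport=4789)          # 4789 is the IANA-assigned VXLAN port
    / VXLAN(vni=5010)                       # the VNI identifies the virtual segment
    / inner
)

outer.show()  # print the encapsulated packet layer by layer
```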

“When it comes to technology, nothing is static, everything is evolving. Either we keep inventing mechanisms that dig out new security holes, or we are forced to implement existing kludges to cover up the inadequacies in security on which our web applications depend.

The assault on the changing digital landscape with all its new requirements has created a black hole that needs attention. The shift in technology, while creating opportunities, has a bias to create security threats. Unfortunately, with the passage of time, these trends will continue to escalate, putting web application security at center stage.

Business relies on web applications. Loss of service to business-focused web applications not only affects the brand but also results in financial loss. The web application acts as the front door to valuable assets. If you don’t efficiently lock the door or at least know when it has been opened, valuable revenue-generating web applications are left compromised.”

“When I started my journey in the technology sector back in the early 2000s, the world of networking consisted of simple structures. I remember configuring several standard branch sites that would connect to a central headquarters. There were only a handful of remote workers, usually just a few high-ranking officials.

As the dependence on networking increased, so did the complexity of network designs. The standard single site became dual-based with redundant connectivity to different providers, advanced failover techniques, and high-availability designs became the norm. The number of remote workers increased, and eventually, security holes began to open in my network design.

Unfortunately, the advances in network connectivity were not in conjunction with appropriate advances in security, forcing everyone back to the drawing board. Without adequate security, the network connectivity that is left to defaults is completely insecure and is unable to validate the source or secure individual packets. If you can’t trust the network, you have to somehow secure it. We secured connections over unsecured mediums, which led to the implementation of IPSec-based VPNs along with all their complex baggage.”

“Over the years, we have embraced new technologies to find improved ways to build systems. As a result, today’s infrastructures have undergone significant evolution. To keep pace with the arrival of new technologies, legacy is often combined with the new, but they do not always mesh well. Such a fusion between the ultra-modern and the conventional has created drag in the overall solution, spawning tension between past and future in how things are secured.

The multi-tenant shared infrastructure of the cloud, container technologies like Docker and Kubernetes, and new architectures like microservices and serverless, while technically remarkable, increase complexity. Complexity is the number one enemy of security. Therefore, to align effectively with the adoption of these technologies, a new approach to security is required that does not depend on shifting infrastructure as the control point.”

“Throughout my early years as a consultant, when asynchronous transfer mode (ATM) was the rage and multiprotocol label switching (MPLS) was still at the outset, I handled numerous roles as a network architect alongside various carriers. During that period, I experienced first-hand problems that the new technologies posed to them.

The lack of true end-to-end automation made our daily tasks run into the night. Bespoke network designs, owing to a shortfall of appropriate documentation, resulted in a situation where one person knew it all. The provisioning teams never fully understood the design. The copy-and-paste implementation approach was error-prone, leaving teams blindfolded when something went wrong.

Designs were stitched together with so much variation that troubleshooting was limited to a personalized approach. That previous experience surfaced in my mind when I heard about carriers delivering SD-WAN services. I started to question whether they could have made the changes needed to provide such an agile service.”

SD-WAN Static Network-Based

SD-WAN Static Network-Based

In today's fast-paced digital world, businesses are constantly seeking innovative solutions to enhance network connectivity and efficiency. One such groundbreaking technology that has emerged is SD-WAN Static Network-Based. This blog post delves into the concept of SD-WAN Static Network-Based, its benefits, and its potential impact on the future of networking.

SD-WAN, or Software-Defined Wide Area Networking, is a transformative approach that simplifies the management and operation of a wide area network. By separating the network hardware from its control mechanism, SD-WAN enables businesses to utilize various underlying transport technologies, including broadband internet, to securely connect users to applications.

Dynamic vs. Static Network-Based SD-WAN: While dynamic SD-WAN provides agility and flexibility by dynamically selecting the best path for each application, static network-based SD-WAN takes a different approach. It leverages predetermined paths and preconfigured policies to ensure reliable and predictable performance. This approach can be particularly beneficial for businesses with specific compliance or performance requirements.

Enhanced Security: With static network-based SD-WAN, businesses can implement advanced security measures across their network infrastructure, protecting critical data and minimizing vulnerabilities.

Improved Performance: By utilizing predetermined paths and policies, SD-WAN static network-based ensures consistent application performance, reducing latency and packet loss, and enhancing user experience.

Cost Optimization: Static network-based SD-WAN enables businesses to make the most of cost-effective transport options, such as broadband internet, while still maintaining a high level of performance and reliability.

SD-WAN static network-based has found applications across various industries. From finance to healthcare, businesses are leveraging this technology to streamline operations, enhance productivity, and improve customer experience.

While SD-WAN static network-based offers numerous benefits, it's crucial to consider potential challenges. These may include network complexity during implementation, ensuring compatibility with existing infrastructure, and the need for adequate network monitoring and management.

Conclusion: SD-WAN static network-based represents a significant shift in the networking landscape. Its ability to provide secure, reliable, and optimized connectivity opens up new possibilities for businesses across industries. As organizations continue to embrace digital transformation, SD-WAN static network-based stands as a powerful tool to revolutionize network connectivity and drive future innovation.

Highlights: SD-WAN Static Network-Based

Change in Landscape

We are now in full swing of a new era of application communication, with more businesses looking for an SD-WAN Static Network-Based solution suitable for them. Digital communication now formulates our culture and drives organizations to a new world, improving productivity around the globe.

The dramatic changes in application consumption introduce new paradigms while reforming how we approach networking. Networking around the Wide Area Network (WAN) must change as the perimeter dissolves.

Recent Application Requirements

Recent application requirements drive a new type of SD WAN Overlay that more accurately supports today’s environment with an additional layer of policy management. It’s not just about IP addresses and port numbers anymore; it’s the entire suite of business logic that drives the correct forwarding behavior. The WAN must start to make decisions holistically.

It is not just a single module in the network design and must touch all elements to capture the correct per-application forwarding behavior. It should be automatable and rope all components together to form a comprehensive end-to-end solution orchestrated from a single pane of glass. Before we delve into the drivers for an SD-WAN Static Network-Based solution, you may want to recap SD-WAN with this SD WAN tutorial.

Before you proceed, you may find the following posts helpful:

  1. SD WAN Security 
  2. SD WAN SASE

 



SD-WAN Static Network-Based.

Key SD-WAN Static Network-Based Discussion points:


  • Introduction to an application-orientated WAN.

  • The drivers for SD-WAN.

  • Limitations of some protocols.

  • Discussion of WAN challenges.

  • List the SD-WAN core features.

 

Back to Basics With SD-WAN Static Network-Based

The traditional network

The networks we depend on for business are sensitive to many factors that can result in a slow and unreliable experience. We can experience latency, which refers either to the one-way time between a data packet being sent and received, or to the round-trip time: the time it takes for the packet to be sent and a reply to come back.

We can also experience jitter, the variance in the time delay between data packets in the network, which is basically a “disruption” in the sending and receiving of packets. We have fixed-bandwidth networks that can experience congestion. For example, with five people sharing the same Internet link, each could experience a stable and swift network. Add another 20 or 30 people onto the same link, and the experience will be markedly different.
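As a quick illustration of these terms, here is a small, self-contained Python sketch that computes per-packet one-way delay, a rough RTT estimate, and jitter from hypothetical timestamps (all millisecond values are made up).

```python
# Hypothetical send/receive timestamps in milliseconds, for illustration only.
send_times = [0.0, 20.0, 40.0, 60.0, 80.0]
recv_times = [35.0, 58.0, 74.0, 101.0, 118.0]

# One-way latency per packet.
one_way_delays = [r - s for s, r in zip(send_times, recv_times)]

# A rough RTT estimate, assuming a symmetric path.
rtt_estimate = 2 * sum(one_way_delays) / len(one_way_delays)

# Jitter: the variation in delay between consecutive packets
# (RFC 3550 uses a smoothed version of this same idea).
jitter_samples = [abs(one_way_delays[i] - one_way_delays[i - 1])
                  for i in range(1, len(one_way_delays))]
mean_jitter = sum(jitter_samples) / len(jitter_samples)

print("per-packet latency (ms):", one_way_delays)
print(f"estimated RTT (ms): {rtt_estimate:.1f}")
print(f"mean jitter (ms): {mean_jitter:.1f}")
```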

Benefits of SD-WAN Static Network-Based:

1. Enhanced Security:

With SD-WAN Static Network-Based, organizations gain greater control over their network security. By configuring static routes, administrators can ensure that traffic flows through secure paths, minimizing the risk of data breaches. Additionally, static routing reduces the exposure to potential threats by limiting the number of entry points into the network.
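To make the static-routing idea concrete, here is a toy sketch of a static route table with longest-prefix matching, built only on Python's standard ipaddress module. The prefixes and next-hop names are illustrative, not a vendor configuration.

```python
import ipaddress

# Illustrative static routes: prefix -> next hop.
STATIC_ROUTES = {
    "10.0.0.0/8":   "mpls-hub",        # corporate traffic stays on MPLS
    "10.50.0.0/16": "dc-firewall",     # more specific: data-center subnet
    "0.0.0.0/0":    "internet-edge",   # default route for everything else
}

def lookup(dst_ip: str) -> str:
    """Return the next hop of the longest matching prefix, as a real FIB would."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(p), nh)
               for p, nh in STATIC_ROUTES.items()
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.50.1.4"))  # dc-firewall
print(lookup("10.1.2.3"))   # mpls-hub
print(lookup("8.8.8.8"))    # internet-edge
```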

2. Improved Performance:

Static routing enables organizations to optimize network performance by establishing the most efficient paths for data transmission. Administrators can minimize latency and packet loss by carefully designing the network architecture, resulting in faster and more reliable application delivery. This is particularly crucial for organizations that rely on real-time applications such as video conferencing or VoIP.

3. Simplified Network Management:

One of the key advantages of SD-WAN Static Network-Based is its simplicity in network management. With static routing, administrators have complete visibility and control over the network infrastructure. They can easily configure, monitor, and troubleshoot the network, reducing the complexity associated with dynamic routing protocols. This simplification allows IT teams to focus on strategic initiatives rather than spending excessive time on network maintenance.

4. Cost Savings:

SD-WAN Static Network-Based can lead to significant cost savings for organizations. By leveraging existing network infrastructure and optimizing traffic flow, businesses can reduce the need for expensive bandwidth upgrades. Additionally, static routing eliminates the need for complex routing protocols, which can be costly to implement and maintain. These cost savings make SD-WAN Static Network-Based an attractive option for organizations seeking to maximize network efficiency while minimizing expenses.

SD-WAN Static Network-Based: Application-Orientated WAN

Push to the cloud.  

When geographically dispersed users connect back to central locations, their consumption triggers additional latency, degrading the application’s performance. No one can get away from latency unless we find ways to change the speed of light. One way is to shorten the link by moving to cloud-based applications.

The push to the cloud is inevitable. Most businesses are now moving away from on-premises, in-house hosting to cloud-based management, and the benefits of doing so are manifold.

The ready-made global footprint enables the usage of SaaS-based platforms that negate the drawbacks of dispersed users tromboning to a central data center.

Logically positioned cloud platforms are closer to the mobile user. It’s increasingly far more efficient from the technical and business standpoint to host these applications in the cloud, which makes them available over the public Internet.

Bandwidth intensive applications

Richer applications, multimedia traffic, and growth in the cloud application consumption model drive the need for additional bandwidth. Unfortunately, we can only fit so much into a single link. The resulting congestion leads to packet drops, ultimately degrading application performance and user experience. In addition, most applications ride on TCP, yet TCP was not designed with performance in mind.

Organic growth

Organic business growth is a significant driver for additional bandwidth requirements. The challenge is that existing network infrastructures are static and unable to respond to this growth in a reasonable period adequately. The last mile of MPLS locks you in and kills agility. Circuit lead times impede the organization’s productivity and create an overall lag.

Costs

A WAN virtualization solution should be simple. To serve the new era of applications, we need to increase the link capacity by buying more bandwidth. However, nothing is as easy as it may seem. The WAN is one of the network’s most expensive parts, and employing link oversubscription to reduce congestion is too expensive.

Furthermore, bandwidth comes at a cost, and the existing TDM-based MPLS architectures cannot accommodate application demands. 

Traditional MPLS comes with a lot of benefits and is feature-rich. No one doubts this fact. MPLS will never die. However, it comes at a high cost for relatively low bandwidth. Unfortunately, MPLS’s price and capabilities are not a perfect couple.

  • Hybrid connectivity

Since one size does not fit the entire world, similar applications will have different forwarding preferences. Application flows are dynamic and change depending on user consumption. Furthermore, the MPLS, LTE, and Internet links often complement each other, since they support different application types.

For example, Storage and Big data replication traffic are forwarded through the MPLS links, while cheaper Internet connectivity is used for standard Web-based applications.

Limitations of protocols

When left to its defaults, IPsec is challenged by hybrid connectivity. IPSec architecture is point-to-point, not site-to-site. As a result, it doesn’t natively support redundant uplinks. Complex configurations are required when sites have multiple uplinks to multiple providers.

By default, IPsec is not abstracted; one session cannot be used over multiple uplinks, causing additional issues with transport failover and path selection. It’s a Swiss Army knife of features, and much of IPsec’s complexity should be abstracted. Secure tunnels should be brought up and torn down immediately, and new sites should be incorporated into a secure overlay without much delay or manual intervention.

Internet of Things (IoT)

Security and bandwidth consumption are key issues when introducing IP-enabled objects and IoT access technologies. IoT is all about data and will bring a shedload of additional overhead for networks to carry. As millions of IoT devices come online, how efficiently do we segment traffic without complicating the network design further? Complex networks are hard to troubleshoot, and simplicity is the mother of all architectural success. Furthermore, much IoT processing requires communication with remote IoT platforms. How do we account for the increased signaling traffic over the WAN? The introduction of billions of IoT devices leaves many unanswered questions.

Branch NFV

There has been strong interest in infrastructure consolidation by deploying Network Function Virtualization (NFV) at the branch. Enabling on-demand services and chaining application flows are key drivers for NFV. However, traditional service chaining is static, since it is bound to a particular network topology. Moreover, it is typically built through manual configuration and is prone to human error.

 Challenges to existing WAN

Traditional WAN architectures consist of private MPLS links complemented with Internet links as a backup. Standard templates in most Service Provider environments are usually broken down into Bronze, Silver, and Gold SLAs. 

However, these types of SLA do not fit all geographies and often need to be fine-tuned per location. Capacity, reliability, analytics, and security are all central parts of the WAN that should be available on demand. Traditional infrastructure is very static; bandwidth upgrades and service changes require considerable processing time and lock up agility for new sites.

It’s not agile enough, and nothing can be performed on the fly to meet growing business needs. In addition, the cost per bit for the private connection is high, which is problematic for bandwidth-intensive applications, especially when upgrades are too costly to afford.

 

  • A distributed world of dissolving perimeters

Perimeters are dissolving, and the world is becoming distributed. Applications require a WAN that supports distributed environments along with flexible network points. Centralized-only designs result in suboptimal traffic engineering and increased latency. Increased latency disrupts application performance, and only certain types of content can be put into a Content Delivery Network (CDN); a CDN cannot be used for everything.

Traditional WANs are operationally complex; people likely perform different network and security functions. For example, you may have a DMVPN, Security, and Networking specialist. Some wear all hats, but they are few and far between. Different hats have different ideas, and agreeing on a minor network change could take ages.

The World of SD-WAN Static Network-Based

SD-WAN replaces traditional WAN routers and is agnostic to the underlying transport technology. You can use various link types, such as MPLS, LTE, and broadband, all combined. Based on policies generated by the business, SD-WAN enables load sharing across different WAN connections that more efficiently supports today’s application environment.

It pulls policy and intelligence out of the network and places them into an end-to-end solution orchestrated by a single pane of glass. SD-WAN is not just about tunnels. It consists of components that work together to simplify network operations while meeting all bandwidth and resilience requirements.

Centralized points in the network are no longer adequate; we need network points positioned where they make the most sense for the application and user. It is illogical to backhaul traffic to a central data center and is far more efficient to connect remote sites to a SaaS or IaaS model over the public Internet. The majority of enterprises prefer to connect remote sites directly to cloud services. So why not let them do this in the best possible way?

A new style of WAN and SD-WAN

We require a new style of WAN and a shift from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business application and decides how to optimize it with the correct forwarding behavior. This new style of forwarding is problematic with traditional WAN architecture.

Making business logic decisions with only IP and port number information is challenging. Standard routing is packet by packet and can only see part of the picture. Routers have routing tables and perform forwarding, but each essentially operates on its own little island, losing the holistic view required for accurate end-to-end decision-making. An additional layer of information is needed.

A controller-based approach offers the necessary holistic view. We can now make decisions based on global information, not solely on a path-by-path basis. Getting all the routing information and compiling it into a dashboard to make a decision is much more efficient than making local decisions that only see parts of the network. 

From a customer’s viewpoint, what would the perfect WAN look like if you could roll back the clock and start again?   

SD-WAN Static Network-Based Components

SD-WAN key features:

  • App-Aware Routing Capabilities

Not only do we need application visibility to forward efficiently over either transport, but we also need the ability to examine deep inside the application and look at the sub-applications. For example, we can distinguish Facebook chat from regular Facebook browsing. This removes the application’s mystery and allows you to balance loads based on sub-applications. It’s like using a scalpel to configure the network instead of a sledgehammer.
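A hedged sketch of what sub-application classification might look like: observed hostnames (for example, from TLS SNI or HTTP Host headers) are mapped to policy classes that can then be steered independently. The hostname patterns and class names below are illustrative, not a vendor's real signature set.

```python
# Illustrative sub-application classification by hostname suffix.
SUB_APP_RULES = [
    ("edge-chat.facebook.com", "facebook-chat"),  # hypothetical chat endpoint
    (".facebook.com",          "facebook-general"),
    (".office.com",            "o365"),
]

def classify(hostname: str) -> str:
    """Return the first matching sub-application class, or 'default'."""
    for pattern, sub_app in SUB_APP_RULES:
        if hostname == pattern or hostname.endswith(pattern):
            return sub_app
    return "default"

# Chat and general browsing can now be load-balanced or steered separately.
print(classify("edge-chat.facebook.com"))  # facebook-chat
print(classify("www.facebook.com"))        # facebook-general
print(classify("example.org"))             # default
```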

  • Ease Of Integration With Existing Infrastructure

The risk of introducing new technologies may come with a disruptive implementation strategy. Loss of service damages more than the engineer’s reputation. It hits all areas of the business. The ability to seamlessly insert new sites into existing designs is a vital criterion. With any network change, a critical evaluation is to know how to balance risk with innovation while still meeting objectives.

How aligned is marketing content with what’s happening in reality? It’s easy for marketing materials to claim a solution can be inserted at Layer 2 or 3; actually doing it is an entirely different ball game. SD-WAN carries a certain amount of due diligence. One way to read between the noise is to examine who has real-life deployments with proven proofs of concept (POCs) and validated designs. A proven POC will help guide your transition in a step-by-step manner.

  • Regional Specific Routing Topologies

Every company has different requirements for hub-and-spoke, full-mesh, and Internet PoP topologies. For example, voice should follow a full-mesh design, while data requires hub and spoke connecting to a central data center. Nearest service availability is the key to improved performance, as there is little we can do about the latency gods except move services closer together.

  • Centralized Device Management & Policy Administration

The manual box-by-box approach to policy enforcement is not the way forward. It’s similar to stepping back into the Stone Age to request a catered flight. The ability to tie everything to a template and automate enables rapid branch deployments, security updates, and configuration changes. The optimal solutions have everything in one place and can dynamically push out upgrades.

  • High Available With Automatic Failovers

You cannot apply a singular viewpoint to high availability. An end-to-end solution should address high-availability requirements at the device, link, and site level. WAN paths can fail or degrade quickly, and detecting failures and brownout events requires additional telemetry information.

  • Encryption On All Transports

Irrespective of link type, whether MPLS, LTE, or the Internet, we need the capacity to encrypt on all those paths without the excess baggage of IPsec. Encryption should happen automatically, and the complexity of IPsec should be abstracted.

Summary: SD-WAN Static Network-Based

The networking world has witnessed a significant transformation in recent years, and one technology that has been making waves is SD-WAN (Software-Defined Wide Area Network). In this blog post, we will delve into the details of SD-WAN, its benefits, and how it is revolutionizing the network landscape.

Understanding SD-WAN

SD-WAN is a technology that abstracts networking hardware and enables centralized control and management of wide-area networks (WANs). Unlike traditional WANs, which rely on static and complex configurations, SD-WAN introduces agility and flexibility by separating the control plane from the data plane.

The Benefits of SD-WAN

SD-WAN offers numerous advantages that are transforming the way businesses approach networking. Firstly, it enhances network performance by leveraging multiple transport technologies such as MPLS, broadband, and LTE, providing intelligent traffic routing and optimization. Secondly, it brings cost savings by utilizing cost-effective internet connections and reducing reliance on expensive private circuits. Additionally, SD-WAN simplifies network management through centralized policies and automation, improving overall operational efficiency.

Security Considerations with SD-WAN

While SD-WAN offers immense benefits, it is crucial to address security concerns. With SD-WAN adoption, organizations must ensure robust security measures are in place. Encryption, authentication, and secure connectivity between branch offices and data centers are paramount. Implementing next-generation firewalls, intrusion detection systems, and regular security audits are essential to mitigate risks.

Case Studies: Real-World SD-WAN Success Stories

To truly grasp the impact of SD-WAN, let’s explore a few real-world success stories. Company X, a multinational organization, improved network performance and reduced costs by implementing SD-WAN across its global offices. Similarly, Company Y streamlined its network management, achieving better scalability and agility in response to changing business needs.

Conclusion:

The rise of SD-WAN has revolutionized the network landscape, empowering organizations with enhanced performance, cost savings, and simplified management. However, it is crucial to prioritize security measures to protect sensitive data and maintain network integrity. As businesses continue to embrace digital transformation, SD-WAN emerges as a powerful tool to drive network efficiency and enable seamless connectivity.


VIPTELA SD-WAN – WAN Segmentation

 

 

VIPTELA SD-WAN

Problem Statement

WAN edge networks sit too far from business logic and are built and designed with limited application and business flexibility. Applications, on the other hand, sit closer to business logic. It’s time for networking to bridge this gap using policies and business-logic-aware principles. For additional information on WAN challenges, proceed to this SD WAN tutorial.

 

Traditional Segmentation

Network segmentation separates a portion of the network from the rest. Segmentation can be physical or logical. Physical segmentation involves complete isolation at the device and link level. Some organizations require the physical division of individual business units for security, political, or other reasons. At a basic level, logical segmentation begins with VLAN boundaries at Layer 2. A VLAN consists of a group of devices that communicate as if they were connected to the same wire. As VLANs are logical rather than physical connections, they are flexible and can span multiple devices.

While VLANs provide logical separation at Layer 2, virtual routing and forwarding (VRF) instances provide separation at Layer 3. A Layer 2 VLAN can now map to a Layer 3 VRF instance. However, every VRF has a separate control plane, and configuration is completed on a hop-by-hop basis. Individual VRFs, each with its own control plane and its own routing protocol neighbor relationships, hamper router performance.

Diagram: Viptela VRF separation.
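To make the idea of Layer 3 separation concrete, here is a toy Python model, purely illustrative, in which each VRF is simply an independent routing table, so the same prefix can exist for two tenants without clashing.

```python
import ipaddress

# Toy VRFs: each is an independent routing table (prefix -> next hop).
vrfs = {
    "RED":  {"10.1.0.0/16": "red-ce-router"},
    "BLUE": {"10.1.0.0/16": "blue-ce-router"},  # same prefix, different tenant
}

def lookup(vrf: str, dst_ip: str) -> str:
    """Resolve a destination within the context of a single VRF."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, next_hop in vrfs[vrf].items():
        if addr in ipaddress.ip_network(prefix):
            return next_hop
    return "drop"

# The same destination resolves differently depending on the VRF context.
print(lookup("RED", "10.1.5.9"))   # red-ce-router
print(lookup("BLUE", "10.1.5.9"))  # blue-ce-router
```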

MPLS/VPNs overcome hop-by-hop configurations to allow segmentation. This enables physically divided business units to be logically divided without the need for individual hop-by-hop VPN configurations throughout the network. Instead, only the PE edge routers carry VPN information. This supports a variety of topologies, such as hub and spoke, partial mesh, or any-to-any connectivity. While MPLS/VPNs have their benefits, they also introduce a unique set of challenges.

MPLS Challenges

MPLS topologies, once provisioned, are difficult to modify. This is due to the fact that all the impacted PE routers have to be re-provisioned with each new policy change. In this way, MPLS topologies are similar to the brick foundation of a house. Once the foundation is laid, it’s hard to make changes to the original structure without starting over.

Most modifications to VPN site topologies must go through an additional design phase. If a Wide Area Network (WAN) is outsourced to a carrier, it requires service provider intervention with additional design and provisioning activities. A simple task such as mapping application subnets to a new or existing route target (RT) may involve onsite consultants, a new high-level design, and configuration templates that have to be applied by provisioning teams. Service provider provisioning and design activities are not free and usually have long lead times.

Some flexible Edge MPLS designs do exist. For example, community tagging and matching. During the design phase, the customer and service provider agree on predefined communities. Once these are set by the customer (attached to a prefix) they are matched by the provider to perform a certain type of traffic engineering (TE).

While community tagging and matching do provide some degree of flexibility and are commonly used, it remains a fixed, predefined configuration. Any subsequent design changes may still require service provider intervention.

Applications Must Fit A Certain Topology

The model forces applications to fit into a network topology that is already built and designed. It lacks the flexibility for the network to keep up with changing business needs. What’s needed is a way to map application requirements to the network. Applications are exploding and each has a variety of operational and performance requirements, which should be met in isolation.

Viptela SD-WAN & Topologies Per Application

By moving from hardware and diverse control planes to software and unified control planes at the WAN edge, SD-WAN evolves the fixed network approach. It abstracts the details of the WAN, allowing topologies that are independent of the physical network. SD-WAN provides segmentation between traffic sets and is a good way to create on-demand topologies, essentially multiple topologies per application or group of applications. Each application can have its own topology and dynamically changing policies, which are managed by a central controller.

The application controls the network design.

Diagram: SD-WAN application flows.

In SD-WAN, the central controller is hosted and managed by the customer, not a service provider. This enables the WAN to be segmented for each application at the customer’s discretion. For example, PCI traffic can be transported using an overlay specifically designed for compliance via Provider A, while ATM traffic can travel over the same provider network using an overlay specifically designed for ATM. In addition, each overlay can have different performance characteristics for path failover, so that if the network does not meet a certain round-trip time (RTT) metric, traffic can reroute over another path. The customer has complete control over what application goes where and has the power to change policies on the fly.

The SDN controller, which acts as the brain, can set different applications to run over different paths based on business requirements and performance SLAs, not on a fixed topology. Each topology is segmented from the others and applications can share or have independent on-demand topologies. SD-WAN dramatically accelerates the time it takes to adapt the network to changing business needs.
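The sketch below is a minimal, hypothetical model of this controller behavior: each application carries an SLA and an ordered list of preferred transports, and the controller steers it to the first path whose measured metrics satisfy the SLA. All metrics, thresholds, and names are made up for illustration.

```python
# Hypothetical path telemetry reported to the controller.
paths = {
    "mpls":     {"rtt_ms": 40.0,  "loss_pct": 0.1},
    "internet": {"rtt_ms": 95.0,  "loss_pct": 0.8},
    "lte":      {"rtt_ms": 140.0, "loss_pct": 2.5},
}

# Per-application SLAs and preferred transports, set by business policy.
app_slas = {
    "voice": {"max_rtt_ms": 150.0, "max_loss_pct": 1.0, "preferred": ["mpls", "internet"]},
    "pci":   {"max_rtt_ms": 100.0, "max_loss_pct": 0.5, "preferred": ["mpls"]},
    "web":   {"max_rtt_ms": 300.0, "max_loss_pct": 3.0, "preferred": ["internet", "lte", "mpls"]},
}

def select_path(app: str) -> str:
    """Return the first preferred path whose metrics satisfy the app's SLA."""
    sla = app_slas[app]
    for path in sla["preferred"]:
        m = paths[path]
        if m["rtt_ms"] <= sla["max_rtt_ms"] and m["loss_pct"] <= sla["max_loss_pct"]:
            return path
    return "no-compliant-path"  # fail closed rather than violate the SLA

for app in app_slas:
    print(app, "->", select_path(app))
```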

SD-WAN Topologies

Network topologies can be depicted either physically or logically. Common topologies, such as star, ring, and full or partial mesh, can be categorized as centralized or decentralized. In the physical world, these topologies are fixed and cannot be changed automatically. Logical topologies may also be hindered by physical device footprints.

In contrast, SD-WAN fully supports the coexistence of multiple application topologies regardless of existing physical footprints. For example, Lync message and video subscriptions may require different path topologies with separate SLAs. Messages may travel over low-cost links while video requires lower latency transports.

SD-WAN can flexibly cater to the needs of any type of application. In retail environments, store-specific applications may require a hub-and-spoke topology for authentication or security requirements. Surveillance systems may require a full-mesh topology. Guest Wi-Fi may require local Internet access, compared with normal user traffic that is scrubbed via a hub site. This per-application topology gives designers better control over the network. Viptela SD-WAN endpoints support multiple logical segments (regardless of the existing physical network), each of which can use a unique topology (full mesh, hub and spoke, ring) and be managed via its own policy.

Viptela SD-WAN: Predictable Application Performance

Obtaining predictable services is achieved by understanding the per-application requirements and routing appropriately.

Traditional MPLS WANs offer limited fabric visibility. For example, providers allow enterprises to perform traceroutes and pings, but bits-per-second is a primitive and unreliable method for measuring end-to-end application performance. Instead, WANs should be monitored at multiple layers, not just at the packet layer.

If a service provider is multiple autonomous systems (AS) away and experiencing performance problems, these cannot be detected and addressed using traditional distance-vector methods. This makes it impossible to route around problems and detect transitory oscillations. If errors exist on a transit path, a way must exist to penalize that path.

Currently, there’s no way to detect when a remote network on the Internet is experiencing brownouts. Since the routing protocol is still operating, the best path does not change as neighbors might still be up. Routing should exist at the transport and application layer, and monitor both application flows and transactions. SD-WAN provides this function and delivers visibility at the device, transport, and application layer for insight into how the network is performing at any given time. This makes it possible to react to transitory failures before they can impact users.

“This post is sponsored by Viptela, an SD-WAN company. All thoughts and opinions expressed are the authors”

 


Tech News – Gartner, IoT Security, Plexxi Switch

On 27th of July, Gartner published its annual Hype Cycle for Networking and Communications. It assesses network technologies that are most relevant and scores them on benefits rating and maturity levels. Key areas that are at “Peak” levels are SD-WAN, WAN Optimizations, SDN Applications, Cloud Managed Networks and White-Box switching.

Markets are looking for ways to cut costs and benefit from ROI. All the areas mentioned above are at their “Peak” and moving fast to the next phase, “Sliding into the Trough”. SD-WAN offers the ability to combine and secure diverse links, drastically reducing WAN costs. Viptela (an SD-WAN company) recently signed a Fortune 200 global retailer that is currently deploying the technology to thousands of stores. I’m interested to see what SD-WAN brings in the future and how all these POCs pan out. One thing to keep in mind is that every SD-WAN startup has some kind of magic sauce for overlay deployment, creating perfect vendor lock-in.

White-box switching saves costs by using ‘generic,’ off-the-shelf switching and routing hardware. It’s a fundamental use case for SDN controllers. The intelligence is moved to the controller, and white-box functionality is limited to forwarding data packets, without making key policy decisions. It is, however, important not to move 100% of the decision-making to the controller and to keep some time-sensitive control-plane protocols (LACP or BFD) local to the switch.

 

Internet of Things (IoT) – Another Security Breach

When you start connecting computing infrastructure to the physical world, security vulnerabilities take on a whole new meaning.

Hackers have remotely killed a Jeep on the highway in St. Louis. According to Wired, the hackers remotely toyed with the air conditioning, radio, and windshield wipers. They also caused the accelerator to stop working. These exploits will probably continue, and part of the problem is that there are a lot of actors out there with malicious intent. It has always been a cat-and-mouse game with security, as hackers come up with ingenious ways to execute exploits. This explains why so much money goes into security, and the IoT will definitely drive more money into the security realm. Security prevention mechanisms will mitigate these attacks, but can they prevent all of them?

Hackers always find a way to move one step ahead of prevention mechanisms. New product development should start with secure code and architectures. Product teams should not prioritize release dates over security. Hackers connecting to real things, like cars, that can kill people is worrying. The story goes on and on, as the creators of technologies do not think the same way as a hacker. Inventors of products, and the people who attempt to secure those products, are not half as inventive as hackers. Hackers get up every morning and think about exploits; they live for it and are extremely creative. The mindset of product innovation must change before IoT is safe to connect to the physical world.

 

Plexxi – Generation 2 Switch

Plexxi is an SDN startup that recently came out with a new Plexxi  Generation 2 Switch. They are probably the first SDN company to have a working solution for the data centre. Their approach to networking allows you to achieve the full potential of your data and applications by moving beyond static networks.

They have a unique approach that uses a wave division multiplexing on their fabric. They change the nature of physical connectivity between switches. They use unique hardware with merchant silicon inside that allows you to change the architecture of your network at any time without causing data loss. Their affinity SDN controller can rejig the network based on changing bandwidth requirements.

With traditional networking, we provision as much bandwidth as possible but generally don’t use all of it. Plexxi challenges the existing approach and dynamically controls the network bandwidth between switches. Their Generation 2 switch can determine what bandwidth is being used on the network and re-architect the network to provide as much bandwidth as needed. The network architecture changes based on the flows in the data path, so the network adapts on demand to dynamically changing traffic. For example, if two switches are communicating at 1 Gbps but a spike occurs and they now need 40 Gbps, the Plexxi affinity SDN controller can detect this and automatically allocate more bandwidth to that link.
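To make the idea concrete, here is a minimal Python sketch of threshold-based bandwidth reallocation. It is purely illustrative and is not Plexxi controller code; the link names, spare-pool size, and threshold are assumptions.

# Toy sketch: threshold-based bandwidth reallocation (illustrative only,
# not Plexxi controller code). Links draw extra capacity from a shared
# pool when their measured utilization crosses a threshold.

POOL_GBPS = 80          # spare fabric capacity (assumed)
THRESHOLD = 0.8         # reallocate when a link is more than 80% utilized

links = {               # link name -> capacity and measured usage in Gbps
    "leaf1-leaf2": {"capacity": 1, "used": 0.95},
    "leaf1-leaf3": {"capacity": 10, "used": 2.0},
}

def rebalance(links, pool):
    for name, link in links.items():
        utilization = link["used"] / link["capacity"]
        if utilization > THRESHOLD and pool > 0:
            extra = min(pool, link["capacity"] * 3)   # grow a hot link up to 4x
            link["capacity"] += extra
            pool -= extra
            print(f"{name}: {utilization:.0%} utilized, "
                  f"capacity raised to {link['capacity']} Gbps")
    return pool

remaining_pool = rebalance(links, POOL_GBPS)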

 


Azure ExpressRoute

Azure ExpressRoute

In today's ever-evolving digital landscape, businesses are increasingly relying on cloud services for their infrastructure and data needs. Azure ExpressRoute, a dedicated network connection provided by Microsoft, offers a reliable and secure solution for organizations seeking direct access to Azure services. In this blog post, we will dive into the world of Azure ExpressRoute, exploring its benefits, implementation, and use cases.

Azure ExpressRoute is a private connection that allows businesses to establish a dedicated link between their on-premises network and Microsoft Azure. Unlike a regular internet connection, ExpressRoute offers higher security, lower latency, and increased reliability. By bypassing the public internet, organizations can experience enhanced performance and better control over their data.

Enhanced Performance: With ExpressRoute, businesses can achieve lower latency and higher bandwidth, resulting in faster and more responsive access to Azure services. This is especially critical for applications that require real-time data processing or heavy workloads.

Improved Security: ExpressRoute ensures a private and secure connection to Azure, reducing the risk of data breaches and unauthorized access. By leveraging private connections, businesses can maintain a higher level of control over their data and maintain compliance with industry regulations.

Hybrid Cloud Integration: Azure ExpressRoute enables seamless integration between on-premises infrastructure and Azure services. This allows organizations to extend their existing network resources to the cloud, creating a hybrid environment that offers flexibility and scalability.

Provider Selection: Businesses can choose from a range of ExpressRoute providers, including major telecommunications companies and internet service providers. It is essential to evaluate factors such as coverage, pricing, and support when selecting a provider that aligns with specific requirements.

Connection Types: Azure ExpressRoute offers two connection types - Layer 2 (Ethernet) and Layer 3 (IPVPN). Layer 2 provides a flexible and scalable solution, while Layer 3 offers more control over routing and traffic management. Understanding the differences between these connection types is crucial for successful implementation.

Global Enterprises: Large organizations with geographically dispersed offices can leverage Azure ExpressRoute to establish a private, high-speed connection to Azure services. This ensures consistent performance and secure data transmission across multiple locations.

Data-Intensive Applications: Industries dealing with massive data volumes, such as finance, healthcare, and research, can benefit from ExpressRoute's dedicated bandwidth. By bypassing the public internet, these organizations can achieve faster data transfers and real-time analytics.

Compliance and Security Requirements: Businesses operating in highly regulated industries, such as banking or government sectors, can utilize Azure ExpressRoute to meet stringent compliance requirements. The private connection ensures data privacy, integrity, and adherence to industry-specific regulations.

Conclusion: Azure ExpressRoute opens up a world of possibilities for businesses seeking a secure, high-performance connection to the cloud. By leveraging dedicated network links, organizations can unlock the full potential of Azure services while maintaining control over their data and ensuring compliance. Whether it's enhancing performance, improving security, or enabling hybrid cloud integration, ExpressRoute proves to be a valuable asset in today's digital landscape.

Highlights: Azure ExpressRoute

Azure Networking

Using Azure Networking, you can connect your on-premises data center to the cloud using fully managed and scalable networking services. Azure networking services allow you to build a secure virtual network infrastructure, manage your applications’ network traffic, and protect them from DDoS attacks. In addition to enabling secure remote access to internal resources within your organization, Azure network resources can also be used to monitor and secure your network connectivity globally.

With Azure, complex network architectures can be supported with robust, fully managed, and dynamic network infrastructure. A hybrid network solution combines on-premises and cloud infrastructure to create public access to network services and secure application networks.

Azure Virtual Network

Azure Virtual Network, the foundation of Azure networking, provides a secure and isolated environment for your resources. It lets you define your IP address space, create subnets, and establish connectivity to your on-premises network. With Azure Virtual Network, you have complete control over network traffic flow, security policies, and routing.

Azure Load Balancer

In a world where high availability and scalability are paramount, Azure Load Balancer comes to the rescue. This powerful tool distributes incoming network traffic across multiple VM instances, ensuring optimal resource utilization and fault tolerance. Whether it’s TCP or UDP traffic or public or private load balancing, Azure Load Balancer has you covered.

Azure Virtual WAN

Azure Virtual WAN simplifies network connectivity and management for organizations with geographically dispersed branches. By leveraging Microsoft’s global network infrastructure, Virtual WAN provides secure and optimized connectivity between branches and Azure resources. It seamlessly integrates with Azure Virtual Network and offers features like VPN and ExpressRoute connectivity.

Azure Firewall

Network security is a top priority, and Azure Firewall rises to the challenge. Acting as a highly available, cloud-native firewall-as-a-service, Azure Firewall provides centralized network security management. It offers application and network-layer filtering, threat intelligence integration, and outbound connectivity control. With Azure Firewall, you can safeguard your network and applications with ease.

Azure Virtual Network

Azure Virtual Networks (Azure VNets) are essential in building networks within the Azure infrastructure. Azure networking is fundamental to managing and securely connecting to other external networks (public and on-premises) over the Internet.

Azure VNet goes beyond traditional on-premises networks. In addition to isolation, high availability, and scalability, it helps secure your Azure resources by allowing you to administer, filter, or route traffic based on your preferences.

Peering between Azure VNets

Peering between Azure Virtual Networks (VNets) allows you to connect several virtual networks. Microsoft’s infrastructure and a secure private network connect the VMs in the peer virtual networks. Resources can be shared and connected directly between the two networks in a peering network.

Azure supports two types of peering: regional VNet peering, which connects virtual networks within the same Azure region, and global VNet peering, which connects virtual networks across Azure regions.

A virtual wide area network powered by Azure

Azure Virtual WAN is a managed networking service that offers networking, security, and routing features. It is made possible by the Azure global network. Various VPN connectivity options are available, including site-to-site VPNs and ExpressRoutes.

For those who prefer working from home or other remote locations, virtual WANs assist in connecting to the Internet and other Azure resources, including networking and remote user connectivity. Using Azure Virtual WAN, existing infrastructure or data centers can be moved from on-premises to Microsoft Azure.

ExpressRoute

Internet Challenges

One of the primary culprits behind sluggish internet performance is the occurrence of bottlenecks. These bottlenecks can happen at various points along the internet infrastructure, from local networks to internet service providers (ISPs) and even at the server end. Limited bandwidth can also impact internet speed, especially during peak usage hours when networks become congested. Understanding these bottlenecks and bandwidth limitations is crucial in addressing internet performance issues.

The Role of Latency

While speed is essential, it’s not the only factor contributing to a smooth online experience. Latency, often measured in milliseconds, is critical in determining how quickly data travels between its source and destination. High latency can result in noticeable delays, particularly in activities that require real-time interaction, such as online gaming or video conferencing. Various factors, including distance, network congestion, and routing inefficiencies, can contribute to latency issues.

ExpressRoute Azure

Using Azure ExpressRoute, you can extend on-premises networks into Microsoft’s cloud infrastructure over a private connection. This networking service allows you to connect your on-premises networks to Azure. You can connect your on-premises network with Azure using an IP VPN network with Layer 3 connectivity, enabling you to connect Azure to your own WAN or data center on-premises.

There is no internet traffic with Azure ExpressRoute since the connection is private. Compared to public networks, ExpressRoute connections are faster, more reliable, more available, and more secure.

a) Enhanced Security: ExpressRoute provides a private connection, making it an ideal choice for organizations dealing with sensitive data. By avoiding the public internet, companies can significantly reduce the risk of unauthorized access and potential security breaches.

b) High Performance: ExpressRoute allows businesses to achieve faster data transfers and lower latency than standard internet connections. This is particularly beneficial for applications that require real-time data processing, such as video streaming, IoT solutions, and financial transactions.

c) Reliable and Consistent Connectivity: Azure ExpressRoute offers uptime Service Level Agreements (SLAs) and guarantees a more stable connection than internet-based connections. This ensures critical workloads and applications remain accessible and functional even during peak usage.

Use Cases for Azure ExpressRoute

a) Hybrid Cloud Environments: ExpressRoute enables organizations to extend their on-premises infrastructure to the Azure cloud seamlessly. This facilitates a hybrid cloud setup, where companies can leverage Azure’s scalability and flexibility while retaining certain workloads or sensitive data within their own data centers.

b) Big Data and Analytics: ExpressRoute provides a reliable and high-bandwidth connection to Azure’s data services for businesses that heavily rely on big data analytics. This enables faster and more efficient data transfers, allowing organizations to extract real-time actionable insights.

c) Disaster Recovery and Business Continuity: ExpressRoute can be instrumental in establishing a robust disaster recovery strategy. By replicating critical data and applications to Azure, businesses can ensure seamless failover during unforeseen events, minimizing downtime and maintaining business continuity.

You may find the following posts helpful for pre-information:

  1. Load Balancer Scaling
  2. IDS IPS Azure
  3. Low Latency Network Design
  4. Data Center Performance
  5. Baseline Engineering
  6. WAN SDN 
  7. Technology Insight for Microsegmentation
  8. SDP VPN



Azure Express Route.

Key Azure ExpressRoute Discussion Points:


  • Introduction to Azure ExpressRoute and what is involved.

  • Highlighting the issues around Internet performance.

  • Critical points on the Azure solution and how it can be implemented.

  • Technical details on Microsoft ExpressRoute and redundancy.

  • Technical details on VNets and TINA tunnels.

Back to basics with Azure VPN gateway

Left to its defaults, when you deploy an Azure VPN gateway, two gateway instances are configured in an active-standby arrangement. The standby instance delivers partial redundancy but is not highly available, as it might take a few minutes for the second instance to come online and reconnect to the VPN destination.

For this lower level of redundancy, you can choose whether the VPN is regionally redundant or zone-redundant. If you utilize a Basic public IP address, the VPN you configure can only be regionally redundant. If you require a zone-redundant configuration, use a Standard public IP address with the VPN gateway.

Benefits of Azure ExpressRoute:

1. Enhanced Performance: ExpressRoute offers predictable network performance with low latency and high bandwidth, allowing organizations to meet their demanding application requirements. By bypassing the public internet, organizations can reduce network congestion and improve overall application performance.

2. Improved Security: ExpressRoute provides a private connection, ensuring data remains within the organization’s network perimeter. This eliminates the risks associated with transmitting data over the public internet, such as data breaches and unauthorized access. Furthermore, ExpressRoute supports network isolation, enabling organizations to control their data strictly.

3. Reliability and Availability: Azure ExpressRoute offers a Service Level Agreement (SLA) that guarantees a high level of availability, with uptime percentages ranging from 99.9% to 99.99% (see the downtime sketch after this list). This ensures organizations can rely on a stable connection to Azure services, minimizing downtime and maximizing productivity.

4. Cost Optimization: ExpressRoute helps organizations optimize costs by reducing data transfer costs and providing more predictable pricing models. With dedicated connectivity, businesses can avoid unpredictable network costs associated with public internet connections.
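To put those SLA percentages in perspective, the short Python sketch below converts an availability figure into the downtime it still allows. This is plain arithmetic for illustration, not an Azure pricing or SLA tool.

# Convert an availability SLA into the downtime budget it allows.
MINUTES_PER_YEAR = 365 * 24 * 60
MINUTES_PER_MONTH = MINUTES_PER_YEAR / 12

for sla in (99.9, 99.95, 99.99):
    unavailable_fraction = 1 - sla / 100
    print(f"{sla}% uptime allows ~{MINUTES_PER_YEAR * unavailable_fraction:.0f} "
          f"minutes/year (~{MINUTES_PER_MONTH * unavailable_fraction:.1f} "
          f"minutes/month) of downtime")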

Use Cases of Azure ExpressRoute:

1. Hybrid Cloud Connectivity: Organizations with a hybrid cloud infrastructure, combining on-premises resources with cloud services, can use ExpressRoute to establish a seamless and secure connection between their environments. This enables seamless data transfer, application migration, and hybrid identity management.

2. Data-Intensive Workloads: ExpressRoute is particularly beneficial for organizations dealing with large volumes of data or running data-intensive workloads. By leveraging the high-bandwidth connection, organizations can transfer data quickly and efficiently, ensuring optimal performance for analytics, machine learning, and other data-driven processes.

3. Compliance and Data Sovereignty: Industries with strict compliance requirements, such as finance, healthcare, and government sectors, can benefit from ExpressRoute’s ability to keep data within their network perimeter. This ensures compliance with data protection regulations and facilitates data sovereignty, addressing data privacy and residency concerns.


Azure Express Route and Encryption

Azure ExpressRoute does not offer built-in encryption. For this reason, you should investigate Barracuda’s cloud security product sets. They offer secure transmission and automatic path failover via redundant, secure tunnels to complete an end-to-end cloud solution. Other 3rd-party security products are available in Azure but are not as mature as Barracuda’s product set.

Internet Performance

Connecting to Azure public cloud over the Internet may be cheap, but it has its drawbacks with security, uptime, latency, packet loss, and jitter. The latency, jitter, and packet loss associated with the Internet often cause the performance of an application to degrade. This is primarily a concern if you support hybrid applications requiring real-time backend on-premise communications.

Transport network performance directly impacts application performance. Businesses are now facing new challenges when accessing applications in the cloud over the Internet. Delayed round-trip time (RTT) is a big concern. TCP spends a few RTTs to establish the TCP session—two RTTs before you get the first data byte.

Client-side cookies may also add delays if they are large enough that they cannot fit in the first data packet. Having a transport network that offers a good RTT is essential for application performance. You need the ability to transport packets as quickly as possible and support the concept that “every packet counts.”

  • The Internet does not provide this or offer any guaranteed Service Level Agreement (SLA) for individual traffic classes.
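As a rough illustration of why RTT dominates, the sketch below assumes roughly two round trips before the first data byte arrives (TCP handshake plus the request) and compares an assumed 5 ms private-circuit RTT with an assumed 60 ms internet RTT. All figures are illustrative assumptions, not measurements.

# Rough time-to-first-byte estimate: round trips before data, plus server time.
# The RTT values and server think time are illustrative assumptions.

RTTS_BEFORE_FIRST_BYTE = 2   # TCP handshake plus the request, roughly

def time_to_first_byte_ms(rtt_ms, server_think_ms=10):
    return RTTS_BEFORE_FIRST_BYTE * rtt_ms + server_think_ms

for label, rtt in (("Private circuit (ExpressRoute-style)", 5),
                   ("Typical internet path", 60)):
    print(f"{label}: ~{time_to_first_byte_ms(rtt)} ms to first byte")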

The Azure solution – Azure ExpressRoute & Telecity cloud-IX

With Microsoft Azure ExpressRoute, you get your private connection to Azure with a guaranteed SLA. It’s like a natural extension to your data center, offering lower latency, higher throughput, and better reliability than the Internet. You can now build applications spanning on-premise infrastructures and Azure Cloud without compromising performance. It bypasses the Internet and lets you connect your on-premise data center to your cloud data center via 3rd-party MPLS networks.

There are two ways to establish your private connection to Azure with ExpressRoute: through an Exchange Provider or through a Network Service Provider. The Exchange Provider model suits you if you want to co-locate equipment in the provider’s facility. Companies like Telecity offer a “bridging product” enabling direct connectivity from your WAN to Azure via their MPLS network. Even though Telecity is an exchange provider, its network offering behaves like a network service provider. The bridging product is called Cloud-IX, and this connectivity makes Azure Cloud look like just another terrestrial data center.

Azure ExpressRoute
Diagram: Azure ExpressRoute.

Cloud-IX is a neutral cloud ecosystem. It allows enterprises to establish private connections to cloud service providers, not just Azure. The Telecity Cloud-IX network already has redundant NNI peering to Microsoft data centers, so you only set up your peering to Cloud-IX, via BGP or static routes; you don’t peer directly with Azure. Telecity and Cloud-IX take care of transport security and redundancy. Cloud-IX is likely an MPLS network that uses route targets (RTs) and route distinguishers (RDs) to separate and distinguish customer traffic.

Azure ExpressRoute Redundancy

The introduction of VNets

Layer-3 overlays called VNets (cloud boundaries/subnets) can now be associated with up to four ExpressRoute circuits. This offers a proper active-active data center design, enabling path diversity and the ability to build resilient connectivity. This is great for designers, as it means we can build true geo-resilience into ExpressRoute designs by creating two ExpressRoute “dedicated circuits” and associating each virtual network with both.

This ensures full end-to-end resilience is built into the Azure ExpressRoute configuration, including the removal of all geographic SPOFs. ExpressRoute connections are created between the Exchange Service Provider or Network Service Provider and the Microsoft cloud. The connectivity between the customer’s on-premise locations and the service provider is provisioned independently of ExpressRoute; Microsoft only peers with service providers.

Azure Express Route
Diagram: Azure Express Route redundancy with VNets.

Barracuda NG firewall & Azure Express Route

Barracuda NG Firewall adds protection to Microsoft ExpressRoute. The NG is installed at both ends of the connection and offers traffic access control, security features, low latency, and automatic path failover with Barracuda’s proprietary transport protocol, TINA. Traffic access control: from the IP layer up to the application layer, the NG firewall gives you complete visibility into traffic flows in and out of ExpressRoute.

With visibility, you get better control of the traffic. In addition, the NG firewall allows you to log what your servers are doing outbound. This is useful if a server in Azure gets hacked: you want to know what the attacker is doing outbound from it, and analytics let you contain it or at least log it. When you are attacked, you need to know what traffic the attacker generates and whether they are pivoting to other servers.

There have been security concerns about the number of administrative domains an ExpressRoute overlay crosses. You should implement additional security measures, because the underlying physical routers are shared with other customers. The NG encrypts traffic end-to-end between both endpoints. This encryption can be customized to your requirements; for example, the transport may be TCP, UDP, or hybrid, and you have complete control over the keys and algorithms.

  • Preserve low latency

Preserve low latency for applications that require a high quality of service. The NG can apply quality of service based on ports and applications, offering better service to high-priority business applications. It also optimizes traffic by automatically sending bulk traffic over the Internet and keeping critical traffic on the low-latency path.

Automatic transport link failover with TINA: upon MPLS link failure, the NG can automatically switch to an internet-based transport and continue to pass traffic to the Azure gateway. It automatically creates a secure tunnel over the Internet without any packet drops, offering a graceful failover to internet VPN. This allows multiple links to run active-active, making the WAN edge behave much like an SD-WAN with a transport-agnostic failover approach.

TINA is SSL-based, not IPsec, and runs over TCP, UDP, or ESP. Because Azure only supports TCP and UDP, TINA can run across the Microsoft fabric using those transports.

Summary: Azure ExpressRoute

In today’s rapidly evolving digital landscape, businesses seek ways to enhance cloud connectivity for seamless data transfer and improved security. One such solution is Azure ExpressRoute, a private and dedicated network connection to Microsoft Azure. In this blog post, we delved into the various benefits of Azure ExpressRoute and how it can revolutionize your cloud experience.

Understanding Azure ExpressRoute

Azure ExpressRoute is a service that allows organizations to establish a private and dedicated connection to Azure, bypassing the public internet. This direct pathway ensures a more reliable, secure, and low-latency data and application transfer connection.

Enhanced Security and Data Privacy

With Azure ExpressRoute, organizations can significantly enhance security by keeping their data off the public internet. Establishing a private connection safeguards sensitive information from potential threats, ensuring data privacy and compliance with industry regulations.

Improved Performance and Reliability

The dedicated nature of Azure ExpressRoute ensures a high-performance connection with consistent network latency and minimal packet loss. By bypassing the public internet, organizations can achieve faster data transfer speeds, reduced latency, and enhanced user experience.

Hybrid Cloud Enablement

Azure ExpressRoute enables seamless integration between on-premises infrastructure and the Azure cloud environment. This makes it an ideal solution for organizations adopting a hybrid cloud strategy, allowing them to leverage the benefits of both environments without compromising on security or performance.

Flexible Network Architecture

Azure ExpressRoute offers flexibility in network architecture, allowing organizations to choose from multiple connectivity options. Whether establishing a direct connection from their data center or utilizing a colocation facility, organizations can design a network setup that best suits their requirements.

Conclusion:

Azure ExpressRoute provides businesses with a direct and dedicated pathway to the cloud, offering enhanced security, improved performance, and flexibility in network architecture. By leveraging Azure ExpressRoute, organizations can unlock the full potential of their cloud infrastructure and accelerate their digital transformation journey.


WAN Virtualization

WAN Virtualization

In today's fast-paced digital world, seamless connectivity is the key to success for businesses of all sizes. WAN (Wide Area Network) virtualization has emerged as a game-changing technology, revolutionizing the way organizations connect their geographically dispersed branches and remote employees. In this blog post, we will explore the concept of WAN virtualization, its benefits, implementation considerations, and its potential impact on businesses.

WAN virtualization is a technology that abstracts the physical network infrastructure, allowing multiple logical networks to operate independently over a shared physical infrastructure. It enables organizations to combine various types of connectivity, such as MPLS, broadband, and cellular, into a single virtual network. By doing so, WAN virtualization enhances network performance, scalability, and flexibility.

Increased Flexibility and Scalability: WAN virtualization allows businesses to scale their network resources on-demand, facilitating seamless expansion or contraction based on their requirements. It provides flexibility to dynamically allocate bandwidth, prioritize critical applications, and adapt to changing network conditions.

Improved Performance and Reliability: By leveraging intelligent traffic management techniques and load-balancing algorithms, WAN virtualization optimizes network performance. It intelligently routes traffic across multiple network paths, avoiding congestion and reducing latency. Additionally, it enables automatic failover and redundancy, ensuring high network availability.

Simplified Network Management: Traditional WAN architectures often involve complex configurations and manual provisioning. WAN virtualization simplifies network management by centralizing control and automating tasks. Administrators can easily set policies, monitor network performance, and make changes from a single management interface, saving time and reducing human errors.

Multi-Site Connectivity: For organizations with multiple remote sites, WAN virtualization offers a cost-effective solution. It enables seamless connectivity between sites, allowing efficient data transfer, collaboration, and resource sharing. With centralized management, network administrators can ensure consistent policies and security across all sites.

Cloud Connectivity: As more businesses adopt cloud-based applications and services, WAN virtualization becomes an essential component. It provides reliable and secure connectivity between on-premises infrastructure and public or private cloud environments. By prioritizing critical cloud traffic and optimizing routing, WAN virtualization ensures optimal performance for cloud-based applications.

Highlights: WAN Virtualization

WAN virtualization can be defined as the abstraction of physical network resources into virtual entities, allowing for more flexible and efficient network management. By separating the control plane from the data plane, WAN virtualization enables the centralized management and orchestration of network resources, regardless of their physical locations. This not only simplifies network administration but also paves the way for enhanced scalability and agility.

WAN virtualization optimizes network performance by intelligently routing traffic and dynamically adjusting network resources based on real-time conditions. This ensures that critical applications receive the necessary bandwidth and quality of service, resulting in improved user experience and productivity.

Organizations can reduce their reliance on expensive dedicated circuits and hardware appliances by leveraging WAN virtualization. Instead, they can leverage existing network infrastructure and utilize cost-effective internet connections without compromising security or performance. This significantly lowers operational costs and capital expenditures.

Traditional WAN architectures often struggle to keep up with modern businesses’ evolving needs. WAN virtualization solves this challenge by providing a scalable and flexible network infrastructure. With virtual overlays, organizations can rapidly deploy and scale their network resources as needed, empowering them to adapt quickly to changing business requirements.

The desired benefits

Businesses often want to replace or augment premium bandwidth services and switch from active/standby to active/active WAN transport models to reduce their costs. The challenge, however, is that augmentation can increase operational complexity, and creating a consistent operational model and simplifying IT requires businesses to avoid that complexity. The importance of maintaining remote-site uptime for business continuity goes beyond simply preventing blackouts. Latency, jitter, and loss can affect critical applications and render them inoperable, leaving the applications effectively unavailable; the term “brownout” refers to these situations. Businesses today are focused on providing a consistent, high-quality application experience.

Ensuring connectivity

To ensure connectivity and make changes, there is a shift toward retaking control of the WAN. This extends beyond routing or quality of service to include application experience and availability. The Internet edge at remote sites is still unfamiliar territory for many businesses. With that control in place, Software as a Service (SaaS) and productivity applications can be rolled out more effectively, and access to Infrastructure as a Service (IaaS) improves. Many businesses are also interested in offloading guest traffic at branches that have direct Internet connectivity, because offloading this traffic locally is more efficient than hauling it through a centralized data center and wasting WAN bandwidth.

The shift to application-centric architecture

Business requirements are changing rapidly, and today’s networks cannot cope. Hardware-centric networks are traditionally more expensive and have fixed capacity. In addition, the box-by-box configuration approach, siloed management tools, and lack of automated provisioning make them harder to support. They are inflexible, static, expensive, and difficult to maintain, with conflicting policies between domains and differing configurations between services. As a result, security vulnerabilities and misconfigurations are more likely to occur. A connectivity-centric architecture should give way to an application- or service-centric architecture focused on simplicity and user experience.

Understanding Virtualization

Virtualization is a technology that allows the creation of virtual versions of various IT resources, such as servers, networks, and storage devices. These virtual resources operate independently from physical hardware, enabling multiple operating systems and applications to run simultaneously on a single physical machine. Virtualization opens up a world of possibilities by breaking the traditional one-to-one relationship between hardware and software. Now, virtualization has moved to the WAN.

WAN Virtualization and SD-WAN

Organizations constantly seek innovative solutions in modern networking to enhance their network infrastructure and optimize connectivity. One such solution that has gained significant attention is WAN virtualization. In this blog post, we will delve into the concept of WAN virtualization, its benefits, and how it revolutionizes how businesses connect and communicate.

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that enables organizations to abstract their wide area network (WAN) connections from the underlying physical infrastructure. It leverages software-defined networking (SDN) principles to decouple network control and data forwarding, providing a more flexible, scalable, and efficient network solution.

SD-WAN Highlights

SD-WAN, in essence, is a virtualized approach to wide-area networking. It leverages software-defined networking principles to simplify managing and operating a wide area network, connecting geographically dispersed locations. Unlike traditional networks, SD-WAN offers centralized control, automated traffic management, and enhanced security.


VPN and SDN Components

So, what is WAN virtualization? WAN virtualization is an essential technology in the modern business world. It creates virtualized versions of wide area networks (WANs) – networks spanning a wide geographic area. The virtualized WANs can then manage and secure a company’s data, applications, and services.

Regarding implementation, WAN virtualization requires using a virtual private network (VPN), a secure private network accessible only by authorized personnel. This ensures that only those with proper credentials can access the data. WAN virtualization also requires software-defined networking (SDN) to manage the network and its components.

Related: Before you proceed, you may find the following posts helpful:

  1. SD WAN Overlay
  2. Generic Routing Encapsulation
  3. WAN Monitoring
  4. SD WAN Security 
  5. Container Based Virtualization
  6. SD WAN and Nuage Networks



WAN Virtualization

Key WAN Virtualization Discussion Points:


  • Introduction to WAN Virtualization and what is involved.

  • Highlighting the issues around internet traffic left to its defaults.

  • Critical points on WAN utilization problems.

  • Technical details on routing protocol convergence.

  • Technical details on SD WAN Overlay and how this changes the WAN.

Back to Basics: WAN virtualization.

WAN Challenges

Deploying and managing the Wide Area Network (WAN) has become more challenging. Engineers face several design challenges, such as traffic-flow decentralization, inefficient WAN link utilization, routing protocol convergence, and application performance issues with active-active WAN edge designs. Active-active WAN designs that spray and pray over multiple active links present technical and business challenges.

To do this efficiently, you have to understand application flows. There may also be performance problems: packets can arrive out of order at the far end because each link propagates at a different speed, and the remote end has to reassemble them and put the stream back together, causing jitter and delay. Both high jitter and high delay are bad for network performance. To recap WAN virtualization, including the drivers for SD-WAN, you may follow this SD-WAN tutorial.

What is WAN Virtualization
Diagram: What is WAN virtualization? Source Linkedin.

Knowledge Check: Cisco PfR

Cisco PfR is an intelligent routing solution that dynamically optimizes traffic flow within a network. Unlike traditional routing protocols, PfR makes real-time decisions based on network conditions, application requirements, and business policies. By monitoring various metrics such as delay, packet loss, and link utilization, PfR intelligently determines the best path for traffic.
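The decision logic can be illustrated with a simple weighted score over measured delay, loss, and utilization. This is not Cisco PfR's actual algorithm; the metrics, weights, and path names below are assumptions chosen only to show the idea.

# Illustrative best-path selection over measured metrics (not PfR's real
# algorithm). Lower score is better; the weights are arbitrary assumptions.

paths = {
    "MPLS":     {"delay_ms": 30, "loss_pct": 0.1, "utilization": 0.60},
    "Internet": {"delay_ms": 55, "loss_pct": 0.8, "utilization": 0.35},
}

WEIGHTS = {"delay_ms": 1.0, "loss_pct": 50.0, "utilization": 20.0}

def score(metrics):
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

best = min(paths, key=lambda name: score(paths[name]))
for name, metrics in paths.items():
    print(f"{name}: score {score(metrics):.1f}")
print(f"Preferred path: {best}")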

Key Features and Functionalities

PfR offers many features and functionalities that significantly enhance network performance. Some notable features include:

1. Intelligent Path Control: PfR selects the optimal traffic path based on performance metrics, ensuring efficient utilization of network resources.

2. Application-Aware Routing: PfR considers the specific requirements of different applications and dynamically adjusts routing to provide the best user experience.

3. Load Balancing: By distributing traffic across multiple paths, PfR improves network efficiency and avoids bottlenecks.


Knowledge Check: Control and Data Plane

Understanding the Control Plane

The control plane can be likened to a network’s brain. It is responsible for making high-level decisions and managing network-wide operations. From routing protocols to network management systems, the control plane ensures data is directed along the most optimal paths. By analyzing network topology, the control plane determines the best routes to reach a destination and establishes the necessary rules for data transmission.

Unveiling the Data Plane

In contrast to the control plane, the data plane focuses on the actual movement of data packets within the network. It can be thought of as the hands and feet executing the control plane’s instructions. The data plane handles packet forwarding, traffic classification, and Quality of Service (QoS) enforcement tasks. It ensures that data packets are correctly encapsulated, forwarded to their intended destinations, and delivered with the necessary priority and reliability.
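A minimal sketch of that separation, under simplified assumptions (static prefixes, no longest-prefix logic): one function plays the control plane and computes the forwarding table, while another plays the data plane and only looks packets up in it.

# Minimal illustration of control-plane vs. data-plane separation.
# The control plane computes the forwarding table; the data plane only
# performs lookups and forwards packets. Prefixes and interface names
# are illustrative assumptions.
import ipaddress

def control_plane_compute_routes():
    # In a real network this would run routing protocols or SDN logic.
    return {
        ipaddress.ip_network("10.1.0.0/16"): "wan-mpls",
        ipaddress.ip_network("10.2.0.0/16"): "wan-internet",
    }

def data_plane_forward(fib, dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    for prefix, interface in fib.items():   # simple first match, no LPM
        if addr in prefix:
            return interface
    return "drop"

fib = control_plane_compute_routes()
print(data_plane_forward(fib, "10.2.44.7"))   # -> wan-internet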

Use Cases and Deployment Scenarios

Distributed Enterprises

For organizations with multiple branch locations, WAN virtualization offers a cost-effective solution for connecting remote sites to the central network. It allows for secure and efficient data transfer between branches, enabling seamless collaboration and resource sharing.

Cloud Connectivity

WAN virtualization is ideal for enterprises adopting cloud-based services. It provides a secure and optimized connection to public and private cloud environments, ensuring reliable access to critical applications and data hosted in the cloud.

Disaster Recovery and Business Continuity

WAN virtualization plays a vital role in disaster recovery strategies. Organizations can ensure business continuity during a natural disaster or system failure by replicating data and applications across geographically dispersed sites.

Challenges and Considerations

Implementing WAN virtualization requires careful planning and consideration. Factors such as network security, bandwidth requirements, and compatibility with existing infrastructure need to be evaluated. It is essential to choose a solution that aligns with the specific needs and goals of the organization.

SD-WAN vs. DMVPN

Two popular WAN solutions are DMVPN and SD-WAN.

DMVPN (Dynamic Multipoint Virtual Private Network) and SD-WAN (Software-Defined Wide Area Network) are popular solutions to improve connectivity between distributed branch offices. DMVPN is a Cisco-specific solution, and SD-WAN is a software-based solution that can be used with any router. Both solutions provide several advantages, but there are some differences between them.

DMVPN is a secure, cost-effective, and scalable network solution that combines several underlying technologies and the DMVPN phases (for example, the traditional DMVPN Phase 1) to connect multiple sites. It allows the customer to use existing infrastructure and provides easy deployment and management. This solution is an excellent choice for businesses with many branch offices because it allows for secure communication and the ability to deploy new sites quickly.

DMVPN and WAN Virtualization

SD-WAN is a software-based solution that is gaining popularity in the enterprise market. It provides improved application performance, security, and network reliability. SD-WAN is an excellent choice for businesses that require high-performance applications across multiple sites. It provides an easy-to-use centralized management console that allows companies to deploy new sites and manage the network quickly.

Dynamic Multipoint VPN
Diagram: Example with DMVPN. Source is Cisco

1st Lab Guide: DMVPN operating over the WAN

The following shows DMVPN operating over the WAN. The SP node represents the WAN network. R11 is the hub, and R2 and R3 are the spokes. Several protocols make the DMVPN network over the WAN possible. First, GRE: in this case, the tunnel destination is specified, so the spokes use a point-to-point GRE tunnel instead of an mGRE tunnel.

Then we have NHRP, which is used to create the address mappings, because this is a non-broadcast network and we cannot use ARP. So we need to configure the next-hop server manually on the spokes with the command: ip nhrp nhs 192.168.100.11

DMVPN configuration
Diagram: DMVPN Configuration.
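Conceptually, NHRP gives the hub (the next-hop server) a lookup table that maps tunnel (overlay) addresses to NBMA (underlay) addresses. The toy Python sketch below mimics that registration and resolution flow; the hub address matches the lab above, the spoke and NBMA addresses are assumptions, and this is a conceptual model rather than real NHRP code.

# Toy model of NHRP registration and resolution on a DMVPN hub (illustrative).
# Spokes register their tunnel-IP -> NBMA-IP mapping with the hub (NHS);
# other spokes can then resolve a tunnel IP instead of relying on broadcast ARP.

nhrp_cache = {}   # tunnel IP -> NBMA (underlay) IP, held by the hub

def nhrp_register(tunnel_ip, nbma_ip):
    nhrp_cache[tunnel_ip] = nbma_ip

def nhrp_resolve(tunnel_ip):
    return nhrp_cache.get(tunnel_ip, "unknown - forward via the hub")

# Spokes register with the hub (192.168.100.11 in the lab); the spoke tunnel
# and NBMA addresses below are assumed for illustration.
nhrp_register("192.168.100.2", "203.0.113.2")
nhrp_register("192.168.100.3", "203.0.113.3")

print(nhrp_resolve("192.168.100.3"))   # -> 203.0.113.3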

Shift from network-centric to business intent.

The core of WAN virtualization involves shifting focus from a network-centric model to a business-intent-based WAN. Instead of designing the WAN for the network, we can design the WAN for the application. This way, the WAN architecture can simplify application deployment and management.

First, however, the mindset must shift from a network-topology focus to an application-services topology. A new style of application consumes vast bandwidth and is very susceptible to variations in bandwidth quality. Jitter, loss, and delay impact most applications, which makes it essential to improve the WAN environment for them.

wan virtualization
Diagram: WAN virtualization.

The spray-and-pray method over two links increases bandwidth but decreases “goodput.” It also affects firewalls, as they will see asymmetric routes. When you want an active-active model, you need application session awareness and a design that eliminates asymmetric routing. You need to be able to slice the WAN properly so application flows can work efficiently over either link.

What is WAN Virtualization: Decentralizing Traffic

Decentralizing traffic from the data center to the branch requires more bandwidth at the network’s edges. As a result, we see many high-bandwidth applications running at remote sites, and this is what businesses are now trying to accommodate. Traditional branch sites usually rely on hub sites for most services and do not host bandwidth-intensive applications. Today, remote locations require extra bandwidth, and that bandwidth does not get cheaper year after year.

Inefficient WAN utilization

Redundant WAN links usually require a dynamic routing protocol for traffic engineering and failover. Routing protocols require complex tuning to load balance traffic between border devices. Border Gateway Protocol (BGP) is the primary protocol for connecting sites to external networks.

It relies on path attributes to choose the best path based on availability and distance. Although these attributes allow granular policy control, they do not cover aspects relating to path performance, such as Round Trip Time (RTT), delay, and jitter.

Furthermore, BGP does not always choose the “best” path, because “best” may mean different things to different customers. For example, customer A might consider the path via provider A the best because of the price of the links, and default routing does not take this into account. Packet-level routing protocols are not designed to handle the complexities of running over multiple transport-agnostic links, so a solution that eliminates the need for packet-level routing protocols must arise.
BGP Path Attributes
Diagram: BGP Path Attributes. Source: Cisco.
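As a heavily abbreviated illustration of how these attributes drive path choice, the sketch below compares two candidate routes on local preference, AS-path length, and MED only. The real BGP best-path algorithm has many more steps, and the route values shown are assumptions.

# Abbreviated BGP best-path comparison (illustration only; the real decision
# process has many more tie-breakers). Higher local preference wins first,
# then shorter AS path, then lower MED.

routes = [
    {"via": "provider-A", "local_pref": 200, "as_path_len": 3, "med": 50},
    {"via": "provider-B", "local_pref": 100, "as_path_len": 2, "med": 10},
]

def preference_key(route):
    # Sort so the most preferred route comes first.
    return (-route["local_pref"], route["as_path_len"], route["med"])

best = sorted(routes, key=preference_key)[0]
print(f"Best path via {best['via']}")   # provider-A: local preference wins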

Routing protocol convergence

WAN designs can also be active-standby, which requires routing protocol convergence in the event of a primary link failure. However, routing convergence is slow, and to speed it up, additional features such as Bidirectional Forwarding Detection (BFD) are implemented, which may stress the network’s control plane. Although mechanisms exist to speed up convergence and failure detection, there are still several convergence steps:

  • Detect the failure.

  • Describe (propagate) the change to the rest of the network.

  • Switch traffic away from the failed path.

  • Find and install the new best route.

Branch office security

With traditional network solutions, branches connect back to the data center, which typically provides Internet access. However, the application world has evolved, and branches directly consume applications such as Office 365 in the cloud. This drives a need for branches to access these services over the Internet without going to the data center for Internet access or security scrubbing.

Extending the security perimeter into the branches should be possible without requiring onsite firewalls/IPS or other security paradigm changes. A solution must exist that allows you to extend your security domain to the branch sites without costly security appliances at each branch—essentially, building a dynamic security fabric.

WAN Virtualization

The solution to all these problems is SD-WAN (software-defined WAN). SD-WAN is a transport-independent, software-based overlay networking deployment. It uses software and cloud-based technologies to simplify the delivery of WAN services to branch offices. Similar to Software-Defined Networking (SDN), SD-WAN works by abstraction. It abstracts network hardware into a control plane with multiple data planes to make up one large WAN fabric.

 SD-WAN in a nutshell 

When we consider the Wide Area Network (WAN) environment at a basic level, we connect data centers to several branch offices to deliver packets between those sites, supporting the transport of application transactions and services. The SD-WAN platform allows you to pull Internet connectivity into those sites, becoming part of one large transport-independent WAN fabric.

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE ) and chooses the best path based on performance.

There are many forms of Internet connectivity (cable, DSL, broadband, and Ethernet). They are quick to deploy at a fraction of the cost of private MPLS circuits. SD-WAN provides the benefit of using all of these links and monitoring which link is best for each application.

Application performance is continuously monitored across all eligible paths-direct internet, internet VPN, and private WAN. It creates an active-active network and eliminates the need to use and maintain traditional routing protocols for active-standby setups—no reliance on the active-standby model and associated problems.
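A minimal sketch of that eligibility idea, assuming per-path probe results and application SLA thresholds (all values are illustrative): any path whose measured latency and loss stay within the SLA remains usable, and traffic simply avoids the rest.

# Sketch of SLA-based path eligibility (illustrative thresholds and metrics).
# Paths whose probes violate the SLA are excluded; traffic uses the rest.

SLA = {"max_latency_ms": 150, "max_loss_pct": 2.0}

probe_results = {
    "direct-internet": {"latency_ms": 40, "loss_pct": 0.2},
    "internet-vpn":    {"latency_ms": 70, "loss_pct": 3.5},   # violates loss SLA
    "private-wan":     {"latency_ms": 25, "loss_pct": 0.0},
}

def eligible_paths(probes, sla):
    return [name for name, m in probes.items()
            if m["latency_ms"] <= sla["max_latency_ms"]
            and m["loss_pct"] <= sla["max_loss_pct"]]

print(eligible_paths(probe_results, SLA))   # internet-vpn is excluded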

WAN virtualization
Diagram: WAN virtualization. Source is Juniper

SD-WAN simplifies WAN management

SD-WAN simplifies managing a wide area network by providing a centralized platform for managing and monitoring traffic across the network. This helps reduce the complexity of managing multiple networks, eliminating the need for manual configuration of each site. Instead, all of the sites are configured from a single management console.

SD-WAN also provides advanced security features such as encryption and firewalling, which can be configured to ensure that only authorized traffic is allowed access to the network. Additionally, SD-WAN can optimize network performance by automatically routing traffic over the most efficient paths.


SD-WAN Packet Steering

SD-WAN packet steering is a technology that efficiently routes packets across a wide area network (WAN). It is based on the concept of steering packets so that they can be delivered more quickly and reliably than traditional routing protocols. Packet steering is crucial to SD-WAN technology, allowing organizations to maximize their WAN connections.

SD-WAN packet steering works by analyzing packets sent across the WAN and looking for patterns or trends. Based on these patterns, the SD-WAN can dynamically route the packets to deliver them more quickly and reliably. This can be done in various ways, such as considering latency and packet loss or ensuring the packets are routed over the most reliable connections.

Spraying packets down both links can result in 20% drops or packet reordering. SD-WAN uses the links better, with no reordering and better “goodput.” SD-WAN increases your buying power: you can buy lower-bandwidth links and run them more efficiently. Over-provisioning is unnecessary because you are using the existing WAN bandwidth better.
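One common way to get better goodput without reordering is to pin each flow to a single path by hashing its 5-tuple, while different flows still spread across all links. The sketch below shows the idea; the path names and tuple fields are assumptions rather than any specific vendor's implementation.

# Per-flow path pinning: hash the 5-tuple so every packet of a flow takes the
# same path, avoiding the reordering caused by per-packet spraying.
import hashlib

PATHS = ["mpls", "broadband", "lte"]

def pick_path(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{proto}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return PATHS[digest[0] % len(PATHS)]

# Every packet of this flow maps to the same path.
print(pick_path("10.1.1.5", "52.10.20.30", "tcp", 51000, 443))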

Knowledge Check: Application-Aware Routing (AAR)

Understanding Application-Aware Routing (AAR)

Application-Aware Routing is a sophisticated networking technique that goes beyond traditional packet-based routing. It considers the unique requirements of different applications, such as video streaming, cloud-based services, or real-time communication, and optimizes the network path accordingly. By prioritizing and steering traffic based on application characteristics, it ensures smooth and efficient data transmission.

Benefits of Application-Aware Routing

Enhanced Performance: Application-aware routing significantly improves overall performance by dynamically allocating network resources to applications with high-bandwidth or low-latency requirements. This translates into faster downloads, seamless video streaming, and reduced response times for critical applications.

Increased Reliability: Traditional routing methods treat all traffic equally, often resulting in congestion and potential bottlenecks. Application-Aware Routing intelligently distributes network traffic, avoiding congested paths and ensuring a reliable and consistent user experience. In the event of network failure or congestion, it can dynamically reroute traffic to alternative paths, minimizing downtime and disruptions.

Implementation Strategies

Deep Packet Inspection: A key component of Application-Aware Routing is deep packet inspection (DPI), which analyzes the content of network packets to identify specific applications. DPI enables routers and switches to make informed decisions about handling each packet based on its application, ensuring optimal routing and resource allocation.

Quality of Service (QoS) Configuration: Implementing QoS parameters alongside Application-Aware Routing allows network administrators to allocate bandwidth, prioritize specific applications over others, and enforce policies to ensure the best possible user experience. QoS configurations can be customized based on organizational needs and application requirements.
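Combining the two, a controller could classify traffic (crudely by destination port here, as a stand-in for real DPI) and map each application class to a priority and an ordered set of eligible paths. The classes, ports, and policy values below are illustrative assumptions only.

# Illustrative application-aware policy: classify traffic (by port, as a crude
# stand-in for DPI) and map each class to a QoS priority and preferred paths.

APP_CLASSES = {443: "saas", 3389: "remote-desktop", 5060: "voice"}

POLICY = {
    "voice":          {"priority": "high",   "paths": ["mpls"]},
    "remote-desktop": {"priority": "medium", "paths": ["mpls", "broadband"]},
    "saas":           {"priority": "medium", "paths": ["broadband", "mpls"]},
    "default":        {"priority": "low",    "paths": ["broadband", "lte"]},
}

def classify_and_route(dst_port):
    app = APP_CLASSES.get(dst_port, "default")
    return app, POLICY[app]

print(classify_and_route(5060))   # voice traffic gets the high-priority MPLS path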

Future Possibilities

As the digital landscape continues to evolve, the potential for Application-Aware Routing is boundless. With emerging technologies like the Internet of Things (IoT) and 5G networks, the ability to intelligently route traffic based on specific application needs will become even more critical. Application-Aware Routing has the potential to optimize resource utilization, enhance security, and support the seamless integration of diverse applications and services.

Benefits of WAN Virtualization:

1. Enhanced Network Performance: WAN virtualization allows organizations to optimize network performance by intelligently routing traffic across multiple WAN links. Organizations can achieve improved application performance and reduced latency by dynamically selecting the most efficient path based on real-time network conditions.

2. Cost Savings: Traditional WAN solutions often require expensive dedicated circuits for each branch office. With WAN virtualization, organizations can leverage cost-effective internet connections, such as broadband or LTE, while ensuring secure and reliable connectivity. This flexibility in choosing connectivity options can significantly reduce operational costs.

3. Simplified Network Management: WAN virtualization provides centralized management and control of the entire network infrastructure. This simplifies network provisioning, configuration, and monitoring, reducing traditional WAN deployments’ complexity and administrative overhead.

4. Increased Scalability: WAN virtualization offers the scalability to accommodate evolving network requirements as organizations grow and expand their operations. It allows for the seamless integration of new branch offices and additional bandwidth without significant infrastructure changes.

5. Enhanced Security: With the rise in cybersecurity threats, network security is paramount. WAN virtualization enables organizations to implement robust security measures, such as encryption and firewall policies, across the entire network. This helps protect sensitive data and ensures compliance with industry regulations.

  • A final note on WAN virtualization

Server virtualization and automation in the data center are prevalent, but WANs are stalling in this space. The WAN is the last bastion of the hardware-centric model and its complexity. Just as hypervisors transformed data centers, SD-WAN aims to change how WAN networks are built and managed. When server virtualization and the hypervisor came along, we no longer had to worry about the underlying hardware; a virtual machine (VM) could simply be provisioned and run as an application. Today’s WAN environment still requires you to manage the details of carrier infrastructure, routing protocols, and encryption.

  • SD-WAN pulls all WAN resources together and slices up the WAN to match the applications on them.

The Role of WAN Virtualization in Digital Transformation:

In today’s digital era, where cloud-based applications and remote workforces are becoming the norm, WAN virtualization is critical in enabling digital transformation. It empowers organizations to embrace new technologies, such as cloud computing and unified communications, by providing secure and reliable connectivity to distributed resources.

Summary: WAN Virtualization

In our ever-connected world, seamless network connectivity is necessary for businesses of all sizes. However, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern data transmission and application performance. This is where the concept of WAN virtualization comes into play, promising to revolutionize network connectivity like never before.

Understanding WAN Virtualization

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that abstracts the physical infrastructure of traditional WANs and allows for centralized control, management, and optimization of network resources. By decoupling the control plane from the underlying hardware, WAN virtualization enables organizations to dynamically allocate bandwidth, prioritize critical applications, and ensure optimal performance across geographically dispersed locations.

The Benefits of WAN Virtualization

Enhanced Flexibility and Scalability

With WAN virtualization, organizations can effortlessly scale their network infrastructure to accommodate growing business needs. The virtualized nature of the WAN allows for easy addition or removal of network resources, enabling businesses to adapt to changing requirements without costly hardware upgrades.

Improved Application Performance

WAN virtualization empowers businesses to optimize application performance by intelligently routing network traffic based on application type, quality of service requirements, and network conditions. By dynamically selecting the most efficient path for data transmission, WAN virtualization minimizes latency, improves response times, and enhances overall user experience.

Cost Savings and Efficiency

By leveraging WAN virtualization, organizations can reduce their reliance on expensive Multiprotocol Label Switching (MPLS) connections and embrace more cost-effective broadband links. The ability to intelligently distribute traffic across diverse network paths enhances network redundancy and maximizes bandwidth utilization, providing significant cost savings and improved efficiency.

Implementation Considerations

Network Security

When adopting WAN virtualization, it is crucial to implement robust security measures to protect sensitive data and ensure network integrity. Encryption protocols, threat detection systems, and secure access controls should be implemented to safeguard against potential security breaches.

Quality of Service (QoS)

Organizations should prioritize critical applications and allocate appropriate bandwidth resources through Quality of Service (QoS) policies to ensure optimal application performance. By adequately configuring QoS settings, businesses can guarantee mission-critical applications receive the necessary network resources, minimizing latency and providing a seamless user experience.

Real-World Use Cases

Global Enterprise Networks

Large multinational corporations with a widespread presence can significantly benefit from WAN virtualization. These organizations can achieve consistent performance across geographically dispersed locations by centralizing network management and leveraging intelligent traffic routing, improving collaboration and productivity.

Branch Office Connectivity

WAN virtualization simplifies connectivity and network management for businesses with multiple branch offices. It enables organizations to establish secure and efficient connections between headquarters and remote locations, ensuring seamless access to critical resources and applications.

Conclusion

In conclusion, WAN virtualization represents a paradigm shift in network connectivity, offering enhanced flexibility, improved application performance, and cost savings for businesses. By embracing this transformative technology, organizations can unlock the true potential of their networks, enabling them to thrive in the digital age.

WAN Design Requirements

WAN SDN

In today's fast-paced digital world, organizations constantly seek ways to optimize their network infrastructure for improved performance, scalability, and cost efficiency. One emerging technology that has gained significant traction is WAN Software-Defined Networking (SDN). By decoupling the control and data planes, WAN SDN provides organizations unprecedented flexibility, agility, and control over their wide area networks (WANs). In this blog post, we will delve into the world of WAN SDN, exploring its key benefits, implementation considerations, and real-world use cases.

WAN SDN is a network architecture that allows organizations to centrally manage and control their wide area networks using software. Traditionally, WANs have been complex and time-consuming to configure, often requiring manual network provisioning and management intervention. However, with WAN SDN, network administrators can automate these tasks through a centralized controller, simplifying network operations and reducing human errors.

1. Enhanced Agility: WAN SDN empowers network administrators with the ability to quickly adapt to changing business needs. With programmable policies and dynamic control, organizations can easily adjust network configurations, prioritize traffic, and implement changes without the need for manual reconfiguration of individual devices.

2. Improved Scalability: Traditional wide area networks often face scalability challenges due to the complex nature of managing numerous remote sites. WAN SDN addresses this issue by providing centralized control, allowing for streamlined network expansion, and efficient resource allocation.

3. Optimal Resource Utilization: WAN SDN enables organizations to maximize their network resources by intelligently routing traffic and dynamically allocating bandwidth based on real-time demands. This ensures that critical applications receive the necessary resources while minimizing wastage.

1. Multi-site Enterprises: WAN SDN is particularly beneficial for organizations with multiple branch locations. It allows for simplified network management across geographically dispersed sites, enabling efficient resource allocation, centralized security policies, and rapid deployment of new services.

2. Cloud Connectivity: WAN SDN plays a crucial role in connecting enterprise networks with cloud service providers. It offers seamless integration, secure connections, and dynamic bandwidth allocation, ensuring optimal performance and reliability for cloud-based applications.

3. Service Providers: WAN SDN can revolutionize how service providers deliver network services to their customers. It enables the creation of virtual private networks (VPNs) on-demand, facilitates network slicing for different tenants, and provides granular control and visibility for service-level agreements (SLAs).

In conclusion, WAN SDN represents a paradigm shift in wide area network management. Its ability to centralize control, enhance agility, and optimize resource utilization makes it a game-changer for modern networking infrastructures. As organizations continue to embrace digital transformation and demand more from their networks, WAN SDN will undoubtedly play a pivotal role in shaping the future of networking.

Highlights: WAN SDN

With software-defined networking (SDN), network configurations can be dynamic and programmatically optimized, improving network performance and monitoring in a manner more akin to cloud computing than to traditional network management. By disassociating the forwarding of network packets (data plane) from routing (control plane), SDN centralizes network intelligence in a single network component, improving on the static architecture of traditional networks. Controllers make up the control plane of an SDN network and contain all of the network’s intelligence; they are considered the brains of the network. Centralization, however, introduces its own challenges around security, scalability, and elasticity.

Since OpenFlow’s emergence in 2011, SDN has commonly been associated with remote communication with network-plane elements to determine the path of network packets across network switches. The term is also used for proprietary network virtualization platforms, such as Cisco Systems’ Open Network Environment and Nicira’s network virtualization platform.

SD-WAN technology applies these principles to wide area networks (WANs).

Application Challenges

Compared to a network-centric model, business intent-based WAN networks have great potential. With this WAN architecture, applications can be deployed and managed more efficiently; application service topologies must replace network topologies. Supporting new and existing applications on the WAN is a common challenge for network operations staff. Many of these applications consume large amounts of bandwidth and are extremely sensitive to variations in link quality, so jitter, loss, and delay make improving the WAN environment for them all the more critical.

WAN SLA

In addition, cloud-based applications such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) are increasing bandwidth demands on the WAN. As cloud applications require ever more bandwidth, provisioning new applications and services becomes increasingly complex and expensive. In today’s business environment, WAN routing and network SLAs are controlled by MPLS L3VPN service providers. As a result, the WAN is less able to adapt to new delivery methods, such as cloud-based and SaaS-based applications.

These applications could take months to implement in service providers’ environments. These changes can also be expensive for some service providers, and some may not be made at all. There is no way to instantiate VPNs independent of underlying transport since service providers control the WAN core. Implementing differentiated service levels for different applications becomes challenging, if not impossible.

Transport Independence: Hybrid WAN

The hybrid WAN concept was born out of this need. A hybrid WAN gives applications an alternative path across the WAN environment: businesses acquire non-MPLS circuits and add them to their WANs. Business enterprises can control these circuits, including routing and application performance. VPN tunnels are typically created over the top of these circuits to provide secure transport over any link. 4G/LTE, commodity broadband Internet, and L2VPN are all examples of these types of links.

As a result, transport independence is achieved. In this way, any transport type can be used under the VPN, and deterministic routing and application performance can be achieved. These commodity links can be used to transmit some applications rather than the traditionally controlled L3VPN MPLS links provided by service providers.

DMVPN configuration
Diagram: DMVPN Configuration

SDN and APIs

WAN SDN is a modern approach to network management that uses a centralized control model to manage, configure, and monitor large and complex networks. It allows network administrators to use software to configure, monitor, and manage network elements from a single, centralized system. This enables the network to be managed more efficiently and cost-effectively than traditional networks.

SDN uses an application programming interface (API) to abstract the underlying physical network infrastructure, allowing for more agile network control and easier management. It also enables network administrators to configure and deploy services from a centralized location rapidly. This enables network administrators to respond quickly to changes in traffic patterns or network conditions, allowing for more efficient use of resources.

  • Scalability and Automation

SDN also allows for improved scalability and automation. Network administrators can quickly scale up or down the network by leveraging automated scripts depending on its current needs. Automation also enables the network to be maintained more rapidly and efficiently, saving time and resources.
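
As a rough illustration of this kind of automation, the sketch below pushes a per-site policy to a centralized controller over a REST API. The controller address, endpoint path, and payload schema are hypothetical; a real SDN or SD-WAN controller defines its own API.

# Minimal sketch of pushing configuration to a centralized SDN controller
# via REST. The controller URL, endpoint path, and payload schema are
# hypothetical placeholders, not a real vendor API.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.com"   # hypothetical address

def push_policy(site: str, policy: dict) -> int:
    """POST a per-site policy to the controller and return the HTTP status."""
    body = json.dumps({"site": site, "policy": policy}).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/api/v1/policies",             # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    status = push_policy("branch-42", {"priority-app": "voice", "backup-path": "internet"})
    print("Controller responded with HTTP", status)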

Before you proceed, you may find the following posts helpful:

  1. WAN Virtualization
  2. Software Defined Perimeter Solutions
  3. What is OpenFlow
  4. SD WAN Tutorial
  5. What Does SDN Mean
  6. Data Center Site Selection



SDN Internet

Key WAN SDN Discussion Points:


  • Introduction to WAN SDN and what is involved.

  • Highlighting the challenges of a traditional WAN design.

  • Critical points on the rise of WAN SDN.

  • Technical details on Internet measurements.

  • The LISP protocol.

Back to Basics with WAN SDN

A Deterministic Solution

Technology typically starts as a highly engineered, expensive, deterministic solution. As the marketplace evolves and competition rises, the need for a non-deterministic, inexpensive solution comes into play. We see this throughout history. First, mainframes were/are expensive, and with the arrival of the microprocessor-based personal computer, the client/server model was born. Static RAM (SRAM) technology was replaced with cheaper Dynamic RAM (DRAM). These patterns consistently apply to all areas of technology.

Finally, deterministic and costly technology is replaced with intelligent technology using redundancy and optimization techniques. This process is now appearing in Wide Area Networks (WANs). We are witnessing changes to the routing space with the incorporation of Software Defined Networking (SDN) and BGP (Border Gateway Protocol). By combining these two technologies, companies can now perform intelligent routing, aka SD-WAN path selection, with an SD WAN Overlay.

  • A key point: SD-WAN Path Selection

SD-WAN path selection is essential to a Software-Defined Wide Area Network (SD-WAN) architecture. SD-WAN path selection selects the most optimal network path for a given application or user. This process is automated and based on user-defined criteria, such as latency, jitter, cost, availability, and security. As a result, SD-WAN can ensure that applications and users experience the best possible performance by making intelligent decisions on which network path to use.

When selecting the best path for a given application or user, SD-WAN looks at the quality of the connection and the available bandwidth. It then looks at the cost associated with each path. Cost can be a significant factor when selecting a path, especially for large enterprises or organizations with multiple sites.

SD-WAN can also prioritize certain types of traffic over others. This is done by assigning different weights or priorities for various kinds of traffic. For example, an organization may prioritize voice traffic over other types of traffic. This ensures that voice traffic has the best possible chance of completing its journey without interruption.
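
A minimal sketch of this decision logic, assuming illustrative path measurements and weights (not any vendor's defaults): each candidate path is scored on latency, jitter, loss, and cost, and the lowest score wins.

# Minimal sketch of SD-WAN path selection: score each candidate path on
# user-defined criteria (latency, jitter, loss, cost) and pick the best.
# The measurements and weights below are illustrative, not vendor defaults.
paths = {
    "mpls":     {"latency_ms": 35, "jitter_ms": 2,  "loss_pct": 0.0, "cost": 10},
    "internet": {"latency_ms": 55, "jitter_ms": 8,  "loss_pct": 0.2, "cost": 2},
    "lte":      {"latency_ms": 70, "jitter_ms": 15, "loss_pct": 0.5, "cost": 6},
}

# Lower is better for every metric, so the score is a weighted sum.
weights = {"latency_ms": 1.0, "jitter_ms": 2.0, "loss_pct": 50.0, "cost": 3.0}

def score(metrics: dict) -> float:
    return sum(weights[k] * metrics[k] for k in weights)

best = min(paths, key=lambda name: score(paths[name]))
print("Selected path:", best)   # 'mpls' with these example numbers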

SD WAN traffic steering
Diagram: SD WAN traffic steering. Source Cisco.
  • Back to basics with DMVPN

Wide Area Network (WAN) DMVPN (Dynamic Multipoint Virtual Private Network) is a type of Virtual Private Network (VPN) that uses an underlying public network, such as the Internet, to transport data between remote sites. It provides a secure, encrypted connection between two or more private networks, allowing them to communicate over the public network without establishing a dedicated physical connection.

Critical Benefits of WAN SDN:

Enhanced Network Flexibility:

WAN SDN enables organizations to dynamically adapt their network infrastructure to meet changing business requirements. Network administrators can quickly respond to network demands through programmable policies and automated provisioning, ensuring optimal performance and resource allocation.

Improved Network Agility:

By separating the control and data planes, WAN SDN allows for faster decision-making and network reconfiguration. This agility enables organizations to rapidly deploy new services, adjust network traffic flows, and optimize bandwidth utilization, ultimately enhancing overall network performance.

Cost Efficiency:

WAN SDN eliminates manual configuration and reduces the complexity associated with traditional network management approaches. This streamlined network management saves costs through reduced operational expenses, improved resource utilization, and increased network efficiency.

Critical Considerations for Implementation:

Network Security:

When adopting WAN SDN, organizations must consider the potential security risks associated with software-defined networks. Robust security measures, including authentication, encryption, and access controls, should be implemented to protect against unauthorized access and potential vulnerabilities.

Staff Training and Expertise:

Implementing WAN SDN requires skilled network administrators proficient in configuring and managing the software-defined network infrastructure. Organizations must train and upskill their IT teams to ensure successful implementation and ongoing management.

Real-World Use Cases:

Multi-Site Connectivity:

WAN SDN enables organizations with multiple geographically dispersed locations to connect their sites seamlessly. Administrators can prioritize traffic, optimize bandwidth utilization, and ensure consistent network performance across all locations by centrally controlling the network.

Cloud Connectivity:

With the increasing adoption of cloud services, WAN SDN allows organizations to connect their data centers to public and private clouds securely and efficiently. This facilitates smooth data transfers, supports workload mobility, and enhances cloud performance.

Disaster Recovery:

WAN SDN simplifies disaster recovery planning by allowing organizations to reroute network traffic dynamically during a network failure. This ensures business continuity and minimizes downtime, as the network can automatically adapt to changing conditions and reroute traffic through alternative paths.

The Rise of WAN SDN

Business and cloud services are crucial elements of business operations, yet the transport network that carries them is best-effort, weak, and offers no guarantee of acceptable delay. More services are being moved to the Internet, yet the Internet is managed inefficiently and cheaply.

Every Autonomous System (AS) acts independently, and there is a price war between transit providers, leading to poor quality of transit services. Operating over this flawed network, customers must find ways to guarantee applications receive the expected level of quality.

Border Gateway Protocol (BGP), the Internet’s glue, has several path-selection flaws. The main drawback of BGP is the routing paradigm of its path-selection process: by default, BGP prefers the path with the shortest AS_PATH (Autonomous System path). This process misses the shape of the network; it does not care whether propagation delay, packet loss, or link congestion exists. The result is longer paths and traffic placed on paths that may be experiencing packet loss.
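
The contrast can be shown in a few lines. In the sketch below, with made-up routes to the same prefix, shortest-AS_PATH selection picks the route with higher latency and loss, while a performance-aware comparison would not.

# Minimal sketch of why shortest-AS_PATH selection can ignore performance.
# Two example routes to the same prefix: BGP prefers the shorter AS_PATH
# even though the longer path has lower latency and no loss. All values
# are invented for illustration.
routes = [
    {"as_path": [64500, 64511],         "latency_ms": 120, "loss_pct": 1.5},
    {"as_path": [64500, 64502, 64512],  "latency_ms": 40,  "loss_pct": 0.0},
]

bgp_choice = min(routes, key=lambda r: len(r["as_path"]))
performance_choice = min(routes, key=lambda r: (r["loss_pct"], r["latency_ms"]))

print("BGP picks AS_PATH", bgp_choice["as_path"], "->", bgp_choice["latency_ms"], "ms")
print("Performance-aware pick:", performance_choice["as_path"], "->", performance_choice["latency_ms"], "ms")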

Example: WAN SDN with Border6 

Border6 is a French company that started in 2012. It offers non-stop internet and an integrated WAN SDN solution, influencing BGP to perform optimum routing. It’s not a replacement for BGP but a complementary tool to enhance routing decisions. For example, it automates changes in routing in cases of link congestion/blackouts.

“The agile way of improving BGP paths by the Border6 tool improves network stability,” says Brandon Wade, iCastCenter Owner.

As the Internet became more popular, customers wanted to add additional intelligence to routing. Additionally, businesses require SDN traffic optimizations, as many run their entire service offerings on top of it.

What is non-stop internet?

Border6 offers an integrated WAN SDN solution with BGP that adds intelligence to outbound routing. A common approach when designing SDN for real-world networks is to prefer solutions that incorporate existing, field-tested mechanisms (such as BGP) rather than reinvent every wheel. Therefore, Border6’s approach of influencing BGP with SDN is a welcome and less risky alternative to a greenfield implementation. Microsoft and Viptela, for example, also use SDN to control BGP behavior.

Border6 uses BGP as a guide to what might be reachable. Based on various performance metrics, they measure how well paths perform. They use BGP to learn the structure of the Internet and then run their algorithms to determine what is essential for individual customers. Every customer has different needs to reach different subnets: some prefer cost; others prefer performance.

They select the most critical prefixes along with several interesting “best”-performing ones. Next, they find probing locations and measure them with automatic probes to determine the best path. Combined, these tools enhance the behavior of BGP. Their mechanism can detect whether an ISP has hardware or software problems, is dropping packets, or is rerouting packets around the world.

Thousands of tests per minute

The solution executes thousands of tests per minute and uses the results to determine the best paths for packet delivery. Outputs from live probing of path delay and packet loss inform BGP which path should carry traffic. The “best path” is different for each customer and depends on the routing policy the customer wants to apply. Some customers prefer paths without packet loss; others want the cheapest cost or paths under 100 ms. It comes down to customer requirements and the applications they serve.
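
A minimal sketch of how the same probe results can yield different “best paths” under different customer policies; the transit names, prices, and measurements are invented for illustration.

# Minimal sketch of applying different customer policies to the same probe
# results: one customer wants loss-free paths, another anything under
# 100 ms, a third simply the cheapest transit. Probe data is made up.
probes = [
    {"transit": "provider-a", "latency_ms": 85,  "loss_pct": 0.0, "price": 9},
    {"transit": "provider-b", "latency_ms": 45,  "loss_pct": 0.3, "price": 5},
    {"transit": "provider-c", "latency_ms": 140, "loss_pct": 0.0, "price": 2},
]

def pick(policy: str) -> dict:
    if policy == "loss-free":
        candidates = [p for p in probes if p["loss_pct"] == 0.0]
        return min(candidates, key=lambda p: p["latency_ms"])
    if policy == "under-100ms":
        candidates = [p for p in probes if p["latency_ms"] < 100]
        return min(candidates, key=lambda p: p["price"])
    return min(probes, key=lambda p: p["price"])   # cheapest

for policy in ("loss-free", "under-100ms", "cheapest"):
    print(policy, "->", pick(policy)["transit"])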

BGP – Unrelated to Performance

Traditionally, BGP makes its decisions based on data unrelated to performance. Border6 tries to correlate your packets’ path through the Internet by choosing the fastest or cheapest link, depending on your requirements.

They take the BGP data from service providers and use it as a baseline. On top of that broad connectivity picture, they overlay their own measurements – lowest latency, packet loss, and so on – and adjust the BGP data to take these other measures into account, eventually achieving optimal packet forwarding. They first look at NetFlow or sFlow data to determine what is essential and use their tool to collect and aggregate the data. From this data, they know which destinations are critical to that customer.

BGP for outbound | Locator/ID Separation Protocol (LISP) for inbound

Border6 products relate to outbound traffic optimization. It can be hard to influence inbound traffic optimization with BGP. Most ASes behave selfishly and optimize traffic in their own interest. Border6 is trying to provide tools that help ASes optimize inbound flows by integrating its product set with the Locator/ID Separation Protocol (LISP). The diagram below displays generic LISP components; it is not necessarily related to Border6’s LISP design.

LISP decouples the address space so you can optimize inbound traffic flows. Many LISP use cases are seen with active-active data centers and VM mobility. It decouples the “who” and the “where,” so end-host addressing does not have to correlate with the actual host location. The drawback is that LISP requires endpoints that can build LISP tunnels.
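
A tiny sketch of that decoupling, using documentation/example addresses: the endpoint identifier (EID, the “who”) stays constant while the routing locator (RLOC, the “where”) changes when the host moves.

# Minimal sketch of the LISP idea: the endpoint identifier (EID) stays fixed
# while the routing locator (RLOC) can change as a host or VM moves.
# Addresses below are documentation/example values, not a real deployment.
mapping_system = {
    "10.1.1.10": "203.0.113.1",   # EID -> RLOC at data center A
}

def lookup(eid: str) -> str:
    """Resolve an EID to its current RLOC via the mapping system."""
    return mapping_system[eid]

print("Traffic for 10.1.1.10 is tunneled to", lookup("10.1.1.10"))

# A VM move only updates the mapping; the EID (and sessions bound to it)
# stays the same.
mapping_system["10.1.1.10"] = "198.51.100.7"   # RLOC at data center B
print("After the move it is tunneled to", lookup("10.1.1.10"))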

Currently, they are trying to provide a solution using LISP as a signaling protocol between Border6 devices. They are also working on statistical analysis of received data to mitigate potential distributed denial-of-service (DDoS) events. More DDoS algorithms are coming in future releases.

Conclusion:

WAN SDN is revolutionizing how organizations manage and control their wide area networks. WAN SDN enables organizations to optimize their network infrastructure to meet evolving business needs by providing enhanced flexibility, agility, and cost efficiency.

However, successful implementation requires careful consideration of network security, staff training, and expertise. With real-world use cases ranging from multi-site connectivity to disaster recovery, WAN SDN holds immense potential for organizations seeking to transform their network connectivity and unlock new opportunities in the digital era.

Summary: WAN SDN

In today’s digital age, where connectivity and speed are paramount, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern businesses. However, a revolutionary solution that promises to transform how we think about and utilize WANs has emerged. Enter Software-Defined Networking (SDN), a paradigm-shifting approach that brings unprecedented flexibility, efficiency, and control to WAN infrastructure.

Understanding SDN

At its core, SDN is a network architecture that separates the control plane from the data plane. By decoupling network control and forwarding functions, SDN enables centralized management and programmability of the entire network, regardless of its geographical spread. Traditional WANs relied on complex and static configurations, but SDN introduced a level of agility and simplicity that was previously unimaginable.

Benefits of SDN for WANs

Enhanced Flexibility

SDN empowers network administrators to dynamically configure and customize WANs based on specific requirements. With a software-based control plane, they can quickly implement changes, allocate bandwidth, and optimize traffic routing, all in real time. This flexibility allows businesses to adapt swiftly to evolving needs and drive innovation.

Improved Efficiency

By leveraging SDN, WANs can achieve higher levels of efficiency through centralized management and automation. Network policies can be defined and enforced holistically, reducing manual configuration efforts and minimizing human errors. Additionally, SDN enables the intelligent allocation of network resources, optimizing bandwidth utilization and enhancing overall network performance.

Enhanced Security

Security threats are a constant concern in any network infrastructure. SDN brings a new layer of security to WANs by providing granular control over traffic flows and implementing sophisticated security policies. With SDN, network administrators can easily monitor, detect, and mitigate potential threats, ensuring data integrity and protecting against unauthorized access.

Use Cases and Implementation Examples

Dynamic Multi-site Connectivity

SDN enables seamless connectivity between multiple sites, allowing businesses to establish secure and scalable networks. With SDN, organizations can dynamically create and manage virtual private networks (VPNs) across geographically dispersed locations, simplifying network expansion and enabling agile resource allocation.

Cloud Integration and Hybrid WANs

Integrating SDN with cloud services unlocks a whole new level of scalability and flexibility for WANs. By combining SDN with cloud-based infrastructure, organizations can easily extend their networks to the cloud, access resources on demand, and leverage the benefits of hybrid WAN architectures.

Conclusion:

With its ability to enhance flexibility, improve efficiency, and bolster security, SDN is ushering in a new era for Wide-Area Networks (WANs). By embracing the power of software-defined networking, businesses can overcome the limitations of traditional WANs and build robust, agile, and future-proof network infrastructures. It’s time to embrace the SDN revolution and unlock the full potential of your WAN.


Viptela Software Defined WAN (SD-WAN)

 


Viptela SD WAN

Why can’t enterprise networks scale like the Internet? What if you could virtualize the entire network?

Wide Area Network (WAN) connectivity models follow a hybrid approach, and companies may have multiple types – MPLS and the Internet. For example, branch A has remote access over the Internet, while branch B employs private MPLS connectivity. Internet and MPLS have distinct connectivity models, and different types of overlay exist for the Internet and MPLS-based networks.

The challenge is to combine these overlays automatically and provide a transport-agnostic overlay network. The data consumption model in enterprises is shifting: around 70% of data is now Internet-bound, and it is expensive to trombone that traffic through defined DMZ points. Customers are looking for topological flexibility, which is causing a shift in security parameters. Topological flexibility forces us to rethink WAN solutions for tomorrow’s networks and leads towards Viptela SD-WAN.

 

Before you proceed, you may find the following helpful:

  1. SD WAN Tutorial
  2. SD WAN Overlay
  3. SD WAN Security 
  4. WAN Virtualization
  5. SD-WAN Segmentation

 

Solution: Viptela SD WAN

Viptela created a new overlay network called the Secure Extensible Network (SEN) to address these challenges. For the first time, encryption is built into the solution, and security and routing are combined into one offering. SEN enables you to span environments, anywhere-to-anywhere, in a secure deployment. This type of architecture is not possible with today’s traditional networking methods.

Founded in 2012, Viptela is a Virtual Private Network (VPN) company utilizing concepts of Software Defined Networking (SDN) to transform end-to-end network infrastructure. Based in San Jose, they are developing an SDN Wide Area Network (WAN) product offering any-to-any connectivity with features such as application-aware routing, service chaining, virtual Demilitarized Zone (DMZ), and weighted Equal Cost Multipath (ECMP) operating on different transports.

The key benefit of Viptela is its any-to-any connectivity offering. This kind of connectivity was previously found only in Multiprotocol Label Switching (MPLS) networks. Viptela works purely on the connectivity model, not on security frameworks; it can, however, influence traffic paths to and from security services.


 

Ubiquitous data plane

MPLS was attractive because it had a single control plane and a ubiquitous data plane. As long as you are in the MPLS network, a connection to anyone is possible, provided you have the correct Route Distinguisher (RD) and Route Target (RT) configurations. So why not take this model to the wide area network and invent a technology that creates a similar model, offering ubiquitous connectivity regardless of transport type (Internet, MPLS)?

 

Why Viptela SDN WAN?

Businesses today want different types of connectivity models. When you map a service to business logic, the network/service topology is already laid out; it is defined, and services have to follow this topology. Viptela changes this concept by altering the data and control plane connectivity model, using SDN to create an SDN WAN technology.

SDN is all about taking intensive network algorithms out of the hardware. Previously, in traditional networks, this computation lived in individual hardware devices acting as control plane points in the data path. As a result, those control points can become congested (for example, when OSPF reaches its maximum number of neighbors). Customers lose capacity on the control plane front but not on the data plane. SDN moves the intensive computation to off-the-shelf servers. MPLS networks attempt to use the same concept with Route Reflector (RR) designs.

They started by moving route reflectors off the data plane to compute the best-path algorithms. Route reflectors can be positioned anywhere in the network and do not have to sit on the data path. With a controller-based SDN approach, you are not embedding the control plane in the network; the controller is off the path. Now you can scale out, and SDN frameworks centrally provision and push policy down to the data plane.

Viptela can take any circuit and provide the ubiquitous connectivity MPLS provided, but now, it’s based on a policy with a central controller. Remote sites can have random transport methods. One leg could be the Internet, and the other could be MPLS. As long as there is an IP path between endpoint A and the controller, Viptela can provide the ubiquitous data plane.

 

Viptela SD WAN and Secure Extensible Network (SEN)

Managed overlay network

If you look at the existing WAN, it is two-part: routing and security. Routing connects sites, and security secures transmission. We have too many network security and policy configuration points in the current model. SEN allows you to centralize control plane security and routing, resulting in data path fluidity. The controller takes care of routing and security decisions.

It passes the relevant information between endpoints. Endpoints can pop up anywhere in the network; all they have to do is set up a control channel to the central controller. This approach does not build excessive control channels, as the control channel runs between the controller and the endpoints, not from endpoint to endpoint. The data plane can flow based on the policy in the center of the network.


 

Viptela SD WAN: Deployment considerations

Deployment of separate data plane nodes at the customer site is integrated into the existing infrastructure at Layer 2 or 3, so you can deploy incrementally, starting with one node and scaling to thousands. It is scalable because it is based on routed technology. The model allows you to deploy, for example, a guest network and then integrate it further into your network over time. Internally, they use the Border Gateway Protocol (BGP). On the data plane, they use standard IPsec between endpoints. It also works over Network Address Translation (NAT), meaning IPsec over UDP.

When an attacker gains access to your network, it is easy for them to establish a beachhead and hop from one segment to another. Viptela enables per-segment encryption, so even if attackers get into one segment, they cannot jump to another. Key management on a global scale has always been a challenge; Viptela solves this with a proprietary distributed key manager based on a priority system. Currently, their key management solution is not open to the industry.

 

SDN controller

You have a controller and VPN termination points, i.e., data plane points. The controller is the central management piece that assigns policy. Data plane points are modules shipped to customer sites. The controller allows you to dictate different topologies for individual endpoint segments, similar to how you influence routing tables with RTs in MPLS.

The control plane is at the controller.

 

Data plane module

Data plane modules are located at the customer site. The data plane module, which could be a PE hand-off, connects to the internal side of the network and must sit in the data plane path at the customer site. On the internal side, it discovers the routing protocols and participates in prefix learning; at Layer 2, it discovers the VLANs. The module can either be the default gateway or form router neighbor relationships. On the WAN side, the data plane module registers its uplink IP address with the WAN controller/orchestration system. The controller builds encrypted tunnels between the data plane endpoints. The encrypted control channels are only needed when you build over untrusted third parties.

If controller connectivity is lost, the on-site module can stop acting as the default gateway and simply participate in Layer 3 forwarding with the existing protocols; it backs off from being the primary router for off-net traffic. Conceptually, it is like creating a VRF for each business and a default route for each VRF, with a single peering point to the controller and Policy-Based Routing (PBR) per VRF for data plane activity. The PBR is based on information coming from the controller, and each segment can have a separate policy (for example, modifying the next hop), as the sketch below illustrates. From a configuration point of view, you only need an IP address on the data plane module and the remote controller’s IP; the controller pushes down the rest.
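
A minimal sketch of that per-segment policy idea, with invented segment names and next hops: each segment (VRF) carries its own topology and next hop, and a controller-style update touches only the segment it targets.

# Minimal sketch of per-segment (per-VRF) policy pushed from a central
# controller: each segment gets its own topology and next hop, similar to
# PBR per VRF. Segment names and next hops are invented for illustration.
segment_policy = {
    "corporate": {"topology": "hub-and-spoke", "next_hop": "10.0.0.1"},
    "guest":     {"topology": "internet-only", "next_hop": "192.0.2.254"},
    "pci":       {"topology": "full-mesh",     "next_hop": "10.0.0.2"},
}

def apply_policy(segment: str, update: dict) -> None:
    """Controller-style update: modify one segment without touching others."""
    segment_policy[segment].update(update)

apply_policy("guest", {"next_hop": "192.0.2.100"})   # e.g., steer guests elsewhere
for name, policy in segment_policy.items():
    print(f"{name}: {policy['topology']} via {policy['next_hop']}")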

 

  • Viptela SD WAN: Use cases

For example, you have a branch office with three distinct segments, and you want each segment to have its own independent topology. The topology should be service-driven; the service should not have to follow a pre-defined topology. Each business unit should dictate how it wants to connect, rather than the network team declaring a fixed topology that everyone must obey.

From a carrier’s perspective, Viptela lets them expand their MPLS network into areas where they have no physical presence, bringing customers over this secure overlay to the closest PoP where they have an MPLS peering. MPLS providers can thus extend their footprint to regions where they do not have service: if a provider has customers in region X and wants to reach a customer in region Y, it can use Viptela. Ideally, those data plane endpoints would pass through a security framework before entering the MPLS network.

Viptela allows you to steer traffic based on the SLA requirements of the application, aka Application-Aware Routing. For example, if you have two sites with dual connectivity to MPLS and the Internet, the data plane modules located at the customer sites can steer traffic over either the MPLS or Internet transport based on end-to-end latency or drops. They do this by maintaining real-time loss, latency, and jitter characteristics and then applying the policies defined on the centralized controller. As a result, critical traffic is always steered to the most reliable link. This architecture can scale to 1,000 nodes in a full-mesh topology.

 
