WAN Monitoring

In today's digital landscape, the demand for seamless and reliable network connectivity is paramount. This is where Software-Defined Wide Area Networking (SD-WAN) comes into play. SD-WAN offers enhanced agility, cost savings, and improved application performance. However, to truly leverage the benefits of SD-WAN, effective monitoring is crucial. In this blog post, we will explore the importance of SD-WAN monitoring and how it empowers businesses to conquer the digital highway.

SD-WAN monitoring involves the continuous observation and analysis of network traffic, performance metrics, and security aspects within an SD-WAN infrastructure. It provides real-time insights into network behavior, enabling proactive troubleshooting, performance optimization, and security management.

WAN monitoring refers to the practice of actively monitoring and managing a wide area network to ensure its smooth operation. It involves collecting data about network traffic, bandwidth utilization, latency, packet loss, and other key performance indicators. By continuously monitoring the network, administrators can identify potential issues, troubleshoot problems, and optimize performance. Key benefits include:

a. Proactive Network Management: WAN monitoring enables proactive identification and resolution of network issues before they impact users. By receiving real-time alerts and notifications, administrators can take immediate action to mitigate disruptions and minimize downtime.

b. Enhanced Performance: With WAN monitoring, administrators gain granular visibility into network performance metrics. They can identify bandwidth bottlenecks, optimize routing, and allocate resources efficiently, resulting in improved network performance and user experience.

c. Security and Compliance: WAN monitoring helps detect and prevent security breaches by monitoring traffic patterns and identifying anomalies. It enables the identification of potential threats, such as unauthorized access attempts or data exfiltration. Additionally, it aids in maintaining compliance with industry regulations by monitoring network activity and generating audit logs.

a. Scalability: When selecting a WAN monitoring solution, it is important to consider its scalability. Ensure that the solution can handle the size and complexity of your network infrastructure, accommodating future growth and network expansions.

b. Real-time Monitoring: Look for a solution that provides real-time monitoring capabilities, allowing you to detect issues as they occur. Real-time data and alerts enable prompt troubleshooting and minimize the impact on network performance.

c. Comprehensive Reporting: A robust WAN monitoring solution should offer detailed reports and analytics. These reports provide valuable insights into network performance trends, usage patterns, and potential areas for improvement.

Highlights: WAN Monitoring

Cloud-based Services

Cloud-based services, such as SaaS applications, are becoming increasingly popular among enterprises, increasing reliance on the Internet to deliver WAN traffic. Because many critical applications and services are no longer internal, traditional MPLS services make suboptimal use of expensive backhaul WAN bandwidth. Consequently, enterprises are migrating to hybrid WANs and SD-WAN technologies that combine traditional MPLS circuits with direct Internet access (DIA).

Over the last several years, a thriving SD-WAN vendor and managed SD-WAN provider market has met this need. As enterprises refresh their branch office routers, SD-WAN solutions, and associated network monitoring capabilities are expected to become nearly ubiquitous.

Key Points:

a) Choosing the Right Monitoring Tools: Selecting robust network monitoring tools is crucial for effective WAN monitoring. These tools should provide real-time insights, customizable dashboards, and comprehensive reporting capabilities to track network performance and identify potential issues.

b) Setting Up Performance Baselines: Establishing performance baselines helps organizations identify deviations from normal network behavior. IT teams can quickly identify anomalies and take corrective actions by defining acceptable thresholds for critical metrics, such as latency or packet loss.

c) Implementing Proactive Alerts: Configuring proactive alerts ensures that IT teams are promptly notified of performance issues or abnormalities. These alerts can be set up for specific metrics, such as bandwidth utilization exceeding a certain threshold, allowing IT teams to investigate and resolve issues before they impact users.
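The baseline-and-alert pattern described in points b) and c) can be sketched in a few lines of Python. The sample latencies, the three-sigma rule, and the threshold multiplier below are illustrative assumptions, not part of any particular monitoring product:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Derive a performance baseline (mean and standard deviation)
    from historical metric samples, e.g. latency in milliseconds."""
    return mean(samples), stdev(samples)

def check_alert(value, baseline_mean, baseline_std, k=3.0):
    """Flag a measurement that deviates more than k standard
    deviations from the baseline (a common anomaly heuristic)."""
    return abs(value - baseline_mean) > k * baseline_std

# Hypothetical historical latency samples in ms
history = [20.1, 19.8, 21.0, 20.5, 19.9, 20.3]
m, s = build_baseline(history)
print(check_alert(20.4, m, s))  # within baseline -> False
print(check_alert(35.0, m, s))  # large deviation -> True
```

In practice, the baseline would be rebuilt periodically (for example, per hour of day) so that normal daily traffic cycles are not flagged as anomalies.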

WAN Monitoring Metrics

**Overcoming Suboptimal Performance**

With a growing distributed workforce, enterprises are increasingly leveraging cloud-based applications. As evolving business needs have dramatically expanded to include software as a service (SaaS) and the cloud, enterprises are moving to wide area networks (WANs) that are software-defined, Internet-centric, and architected for optimal interconnection with cloud and external services to combat rising transport costs and suboptimal application performance. 

**Gaining Valuable WAN Insights**

Monitoring a WAN’s performance involves tracking various metrics that provide valuable insights into its health and efficiency. These metrics include latency, packet loss, jitter, bandwidth utilization, and availability. Below, we explore these metrics and their importance in maintaining a robust network infrastructure.

1. Latency: Latency refers to the time it takes data to travel from the source to the destination. Even minor delays in data transmission can significantly impact application performance, especially for real-time applications like video conferencing or VoIP. We’ll discuss how measuring latency helps identify potential network congestion points and optimize data routing for reduced latency.

2. Packet Loss: Packet loss occurs when data packets fail to reach their intended destination. This can lead to retransmissions, increased latency, and degraded application performance. By monitoring packet loss rates, network administrators can pinpoint underlying issues, such as network congestion or hardware problems, and take proactive measures to mitigate packet loss.

3. Jitter: Jitter refers to the variation in delay between data packets arriving at their destination. High jitter can lead to inconsistent performance, particularly for voice and video applications. We’ll explore how monitoring jitter helps identify network instability and implement quality of service (QoS) mechanisms to ensure smooth data delivery.

4. Bandwidth Utilization: Effective bandwidth utilization is crucial for maintaining optimal network performance. Monitoring bandwidth usage patterns helps identify peak usage times, bandwidth-hungry applications, and potential network bottlenecks. We’ll discuss the significance of bandwidth monitoring and how it enables network administrators to allocate resources efficiently and plan for future scalability.
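As a rough sketch, the first three metrics above can be derived from a list of probe round-trip times. The sample values and the mean-absolute-difference definition of jitter are illustrative assumptions; real tools often use the smoothed estimator from RFC 3550 instead:

```python
def summarize_rtts(rtts_ms, sent):
    """Summarize probe results: rtts_ms holds the RTTs of probes
    that returned; 'sent' is the total number of probes sent."""
    received = len(rtts_ms)
    latency = sum(rtts_ms) / received
    # Jitter as the mean absolute difference between consecutive RTTs
    jitter = sum(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])) / (received - 1)
    loss_pct = 100.0 * (sent - received) / sent
    return latency, jitter, loss_pct

# Four of five hypothetical probes returned
lat, jit, loss = summarize_rtts([30.0, 32.0, 30.0, 32.0], sent=5)
print(lat, jit, loss)  # 31.0 2.0 20.0
```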

Example Product: Cisco ThousandEyes

### Introduction to Cisco ThousandEyes

In today’s hyper-connected world, maintaining a reliable, high-performance Wide Area Network (WAN) is crucial for businesses of all sizes. Enter Cisco ThousandEyes, a robust network intelligence platform designed to provide unparalleled visibility into your WAN performance. From detecting outages to diagnosing complex network issues, ThousandEyes is a game-changer in the realm of WAN monitoring.

### Why WAN Monitoring Matters

WAN monitoring is essential for ensuring that your network operates smoothly and efficiently. With the increasing reliance on cloud services, SaaS applications, and remote work environments, any disruption in WAN can result in significant downtime and lost productivity. Cisco ThousandEyes offers a comprehensive solution by continuously monitoring the health of your WAN, identifying potential issues before they escalate, and providing actionable insights to resolve them promptly.

### Key Features of Cisco ThousandEyes

1. **Synthetic Monitoring**: Simulate user interactions to proactively identify potential issues.

2. **Real-Time Data Collection**: Gather real-time metrics on latency, packet loss, and jitter.

3. **Path Visualization**: Visualize the entire network path from end-user to server, identifying bottlenecks.

4. **Alerts and Reporting**: Set up custom alerts and generate detailed reports for proactive management.

5. **Global Agent Coverage**: Deploy agents globally to monitor network performance from various locations.

WAN Monitoring Tools

WAN monitoring tools are software applications or platforms that enable network administrators to monitor, analyze, and troubleshoot their wide area networks.

These tools collect data from various network devices and endpoints, providing valuable insights into network performance, bandwidth utilization, application performance, and security threats. Organizations can proactively address issues and optimize their WAN infrastructure by comprehensively understanding their network’s health and performance.

WAN monitoring tools offer a wide range of features to empower network administrators. These include real-time monitoring and alerts, bandwidth utilization analysis, application performance monitoring, network mapping and visualization, traffic flow analysis, and security monitoring.

With these capabilities, organizations can identify bottlenecks, detect network anomalies, optimize resource allocation, ensure Quality of Service (QoS), and mitigate security risks. Furthermore, many tools provide historical data analysis and reporting, enabling administrators to track network performance trends and make data-driven decisions.


IP SLAs ICMP Echo Operation

IP SLAs ICMP Echo Operations, also known as Internet Protocol Service Level Agreements Internet Control Message Protocol Echo Operations, is a feature Cisco devices provide. It allows network administrators to measure network performance by sending ICMP echo requests (ping) between devices, enabling them to gather valuable data about network latency, packet loss, and jitter.

Network administrators can proactively monitor network performance, identify potential bottlenecks, and troubleshoot connectivity issues using IP SLAs ICMP Echo Operations. The key benefits of this feature include:

1. Performance Monitoring: IP SLAs ICMP Echo Operations provides real-time monitoring capabilities, allowing administrators to track network performance metrics such as latency and packet loss.

2. Troubleshooting: With IP SLAs ICMP Echo Operations, administrators can pinpoint network issues and determine whether network devices, configuration, or external factors cause them.

3. SLA Compliance: Organizations relying on Service Level Agreements (SLAs) can leverage IP SLAs ICMP Echo Operations to ensure compliance with performance targets and quickly identify deviations.
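On Cisco IOS devices, an ICMP echo operation of this kind is configured along these lines. This is a hedged sketch: the target address, source interface, and thresholds are hypothetical, and exact syntax varies by IOS release:

```
! Hypothetical sketch: probe 10.1.1.1 every 60 seconds
ip sla 10
 icmp-echo 10.1.1.1 source-interface GigabitEthernet0/0
 frequency 60
ip sla schedule 10 life forever start-time now
! React with an SNMP trap when round-trip time exceeds 200 ms
ip sla reaction-configuration 10 react rtt threshold-value 200 150 threshold-type immediate action-type trapOnly
```

The collected statistics can then be viewed with `show ip sla statistics` or exported to a monitoring system via SNMP.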

Understanding Traceroute

Traceroute, also known as tracert in Windows, is a network diagnostic tool that traces the path packets take from your device to a destination. It provides valuable insights into the various hops or intermediate devices that data encounters. By sending a series of specially crafted packets, traceroute measures the time it takes for each hop to respond, enabling us to visualize the network path.

  • Time-to-Live (TTL) field in IP packets

Behind the scenes, traceroute utilizes the Time-to-Live (TTL) field in IP packets to gather information about the hops. It starts by sending packets with a TTL of 1, which ensures they are discarded by the first hop encountered. The hop then sends back an ICMP Time Exceeded message, indicating its presence. Traceroute then repeats this process, gradually incrementing the TTL until it reaches the destination and receives an ICMP Echo Reply.

  • Packet’s round-trip time (RTT).

As the traceroute progresses through each hop, it collects IP addresses and measures each packet’s round-trip time (RTT). These valuable pieces of information allow us to map the network path. By correlating IP addresses with geographical locations, we can visualize the journey of our data on a global scale.

  • Capture Network Issues

Traceroute is not only a fascinating tool for exploration but also a powerful troubleshooting aid. We can identify potential bottlenecks, network congestion, or even faulty devices by analyzing the RTT values and the number of hops. This makes traceroute an invaluable resource for network administrators and tech enthusiasts alike.
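The hop-by-hop output traceroute produces can be post-processed programmatically. The sketch below parses one simplified, hypothetical output format into (hop, address, RTT) tuples; real traceroute output varies by platform and typically shows three RTTs per hop:

```python
import re

# Hypothetical, simplified traceroute-style output
SAMPLE = """\
 1  192.168.1.1  1.2 ms
 2  10.0.0.1  8.7 ms
 3  203.0.113.5  23.4 ms
"""

def parse_traceroute(text):
    """Parse simplified traceroute lines into (hop, address, rtt_ms)
    tuples, skipping lines that do not match the expected shape."""
    hops = []
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(\S+)\s+([\d.]+) ms", line)
        if m:
            hops.append((int(m.group(1)), m.group(2), float(m.group(3))))
    return hops

hops = parse_traceroute(SAMPLE)
print(hops[-1])  # (3, '203.0.113.5', 23.4)
```

A sudden RTT jump between two consecutive hops in such parsed data is a common first clue when hunting for a congested link.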

Understanding ICMP Basics

ICMP, often called the “heart and soul” of network troubleshooting, is an integral part of the Internet Protocol Suite. It operates at the network layer and is responsible for vital functions such as error reporting, network diagnostics, and route change notifications. By understanding the basics of ICMP, we can gain insights into how it contributes to efficient network communication.

ICMP Message Types

ICMP encompasses a wide range of message types that serve different purposes. From ICMP Echo Request (ping) to Destination Unreachable, Time Exceeded, and Redirect messages, each type serves a unique role in network diagnostics and troubleshooting. Exploring these message types and their significance will shed light on the underlying mechanisms of network communication.

**Round-trip time, packet loss, and network congestion**

Network administrators and operators heavily rely on ICMP to monitor network performance. Key metrics such as round-trip time, packet loss, and network congestion can be measured using ICMP tools and techniques. This section will delve into how ICMP aids in network performance monitoring and the benefits it brings to maintaining optimal network operations.

Use Case: At the WAN Edge

**Performance-Based Routing**

Performance-based or dynamic routing is a method of intelligently directing network traffic based on real-time performance metrics. Unlike traditional static routing, which relies on predetermined paths, performance-based routing adapts dynamically to network conditions. By continuously monitoring factors such as latency, packet loss, and bandwidth availability, performance-based routing ensures that data takes the most optimal path to reach its destination.

Key Points:

A. Enhanced Network Reliability: By constantly evaluating network performance, performance-based routing can quickly react to failures or congestion, automatically rerouting traffic to alternate paths. This proactive approach minimizes downtime and improves overall network reliability.

B. Improved Application Performance: Performance-based routing can prioritize traffic based on specific application requirements. Critical applications, such as video conferencing or real-time data transfer, can be allocated more bandwidth and given higher priority, ensuring optimal performance and user experience.

C. Efficient Resource Utilization: Performance-based routing optimizes resource utilization across multiple network paths by intelligently distributing network traffic. This results in improved bandwidth utilization, reduced congestion, and a more efficient use of available resources.

D. Performance Metrics and Monitoring: Organizations must deploy network monitoring tools to collect real-time performance metrics to implement performance-based routing. These metrics serve as the foundation for decision-making algorithms that determine the best path for network traffic.

E. Dynamic Path Selection Algorithms: Implementing performance-based routing requires intelligent algorithms capable of analyzing performance metrics and selecting the most optimal path for each data packet. These algorithms consider latency, packet loss, and available bandwidth to make informed routing decisions.

F. Network Infrastructure Considerations: Organizations must ensure their network infrastructure can support the increased complexity before implementing performance-based routing. This may involve upgrading network devices, establishing redundancy, and configuring routing protocols to accommodate dynamic path selection.
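Points D and E above can be illustrated with a toy path-selection function. The weights and path metrics below are invented for illustration and are not any vendor's actual algorithm:

```python
def path_score(latency_ms, loss_pct, jitter_ms,
               w_latency=1.0, w_loss=50.0, w_jitter=2.0):
    """Lower is better: a weighted cost combining latency, loss,
    and jitter. The weights are illustrative assumptions."""
    return w_latency * latency_ms + w_loss * loss_pct + w_jitter * jitter_ms

def best_path(paths):
    """paths: {name: (latency_ms, loss_pct, jitter_ms)}.
    Return the name of the path with the lowest cost."""
    return min(paths, key=lambda p: path_score(*paths[p]))

# Hypothetical measurements for two transports
paths = {
    "mpls":     (40.0, 0.0, 1.0),   # steadier but slower
    "internet": (25.0, 0.5, 4.0),   # faster but lossier
}
print(best_path(paths))  # -> mpls
```

Note how the heavy loss weight makes the lossier Internet path lose despite its lower latency; tuning such weights per application class is exactly what point B (application prioritization) describes.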

Ensuring High Availability and Performance

– Network downtime and performance issues can significantly impact business operations, causing financial losses and damaging reputation. Network monitoring allows organizations to proactively monitor and manage network infrastructure, ensuring high availability and optimal performance.

– Network administrators can identify and address issues promptly by tracking key performance indicators, such as response time and uptime, minimizing downtime, and maximizing productivity.

– As businesses grow and evolve, their network requirements change. Network monitoring provides valuable insights into network capacity utilization, helping organizations plan for future growth and scalability.

– By monitoring network traffic patterns and usage trends, IT teams can identify potential capacity bottlenecks, plan network upgrades, and optimize resource allocation. This proactive approach enables businesses to scale their networks effectively, avoiding performance issues associated with inadequate capacity.

Monitoring TCP

TCP (Transmission Control Protocol) is a fundamental component of Internet communication, ensuring reliable data transmission. Behind the scenes, TCP performance parameters are crucial in optimizing network performance.

TCP Performance Parameters:

TCP performance parameters are configuration settings that govern the behavior of TCP connections. These parameters determine various aspects of the transmission process, including congestion control, window size, timeouts, and more. Network administrators can balance reliability, throughput, and latency by adjusting these parameters.

Congestion Window (CWND): CWND represents the amount of unacknowledged data a sender may have in flight before awaiting an acknowledgment. Adjusting CWND affects how much data is sent at once, impacting throughput and congestion control.

Maximum Segment Size (MSS): MSS refers to the maximum amount of data transmitted in a single TCP segment. Optimizing MSS can help reduce overhead and improve overall efficiency.

Window Scaling: Window scaling allows for adjusting the TCP window size beyond its traditional limit of 64KB. Enabling window scaling can enhance throughput, especially in high-bandwidth networks.
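The effect of window scaling is a simple left shift of the advertised 16-bit window, as this sketch shows (the scale factor is negotiated once during the handshake, per RFC 7323):

```python
def effective_window(window_field, scale_factor):
    """RFC 7323 window scaling: the advertised 16-bit window field
    is left-shifted by the negotiated scale factor."""
    assert 0 <= scale_factor <= 14  # maximum scale per RFC 7323
    return window_field << scale_factor

# A full 64 KB window field with scale factor 7 advertises ~8 MB
print(effective_window(65535, 7))  # 8388480 bytes
```

This is why window scaling matters on high bandwidth-delay-product WAN links: without it, throughput per connection is capped at roughly 64 KB per round trip.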

Note: To fine-tune TCP performance parameters, network administrators must carefully analyze their network’s requirements and characteristics. Here are some best practices for optimizing TCP performance:

Analyze Network Conditions: Understanding the network environment, including bandwidth, latency, and packet loss, is crucial for selecting appropriate performance parameters.

Conduct Experiments: It’s essential to test different parameter configurations in a controlled environment to determine their impact on network performance. Tools like Wireshark can help monitor and analyze TCP traffic.

Monitor and Adjust: Network conditions are dynamic, so monitoring TCP performance and adjusting parameters accordingly is vital for maintaining optimal performance.

What is TCP MSS?

TCP MSS refers to the maximum amount of data transmitted in a single TCP segment. It represents the payload size within the segment, excluding the TCP header. The MSS value is negotiated during the TCP handshake process, allowing both ends of the connection to agree upon an optimal segment size.

**Amount of data in each segment**

Efficiently managing TCP MSS is crucial for various reasons. Firstly, it impacts network performance by directly influencing the amount of data sent in each segment. Controlling MSS can help mitigate packet fragmentation and reassembly issues, reducing the overall network overhead. Optimizing TCP MSS can also enhance throughput and minimize latency, improving application performance.

**Crucial Factors to consider**

Several factors come into play when determining the appropriate TCP MSS value. Network infrastructure, such as routers and firewalls, may impose limitations on the MSS. Path MTU (Maximum Transmission Unit) discovery also affects TCP MSS, as it determines the maximum packet size that can be transmitted without fragmentation. Understanding these factors is vital for configuring TCP MSS appropriately.
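For a quick sanity check, the usual IPv4 MSS arithmetic (path MTU minus IP and TCP headers, assuming no options on either header) looks like this:

```python
IPV4_HEADER = 20   # bytes, assuming no IP options
TCP_HEADER = 20    # bytes, assuming no TCP options

def mss_for_mtu(mtu):
    """Typical IPv4 MSS: path MTU minus IP and TCP headers."""
    return mtu - IPV4_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # 1460 - standard Ethernet
print(mss_for_mtu(1400))  # 1360 - e.g. clamped for tunnel overhead
```

This is the calculation behind MSS clamping on WAN edge devices: when an overlay tunnel adds encapsulation overhead, the MSS is reduced so segments still fit the path MTU without fragmentation.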

Gaining WAN Visibility

Example Technology: NetFlow

Implementing NetFlow provides numerous advantages for network administrators. Firstly, it enables comprehensive traffic monitoring, helping identify and troubleshoot performance issues, bottlenecks, or abnormal behavior. Secondly, NetFlow offers valuable insights into network security, allowing the detection of potential threats, such as DDoS attacks or unauthorized access attempts. Additionally, NetFlow facilitates capacity planning by providing detailed traffic statistics, which helps optimize network resources and infrastructure.

Implementing NetFlow

Implementing NetFlow requires both hardware and software components. Network devices like routers and switches need to support NetFlow functionality. Configuring NetFlow on these devices involves defining flow record formats, setting sampling rates, and specifying collector destinations. In terms of software, organizations can choose from various NetFlow collectors and analyzers that process and visualize the collected data. These tools offer powerful reporting capabilities and advanced features for network traffic analysis.
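Once flow records are collected, much of the analysis is simple aggregation. This sketch computes "top talkers" from simplified (source, destination, bytes) tuples; a real NetFlow v5 record carries many more fields (ports, protocol, timestamps, interface indexes):

```python
from collections import Counter

# Hypothetical, simplified flow records: (src, dst, bytes)
flows = [
    ("10.0.0.5", "198.51.100.9", 1_200_000),
    ("10.0.0.7", "198.51.100.9",   300_000),
    ("10.0.0.5", "203.0.113.4",    900_000),
]

def top_talkers(flows, n=2):
    """Aggregate bytes per source address and return the n
    heaviest senders, a classic NetFlow analysis."""
    totals = Counter()
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(n)

print(top_talkers(flows))  # [('10.0.0.5', 2100000), ('10.0.0.7', 300000)]
```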

NetFlow use cases

NetFlow finds application in various scenarios across different industries. NetFlow data is instrumental in detecting and investigating security incidents, enabling prompt response and mitigation in cybersecurity. Network administrators leverage NetFlow to optimize bandwidth allocation, ensuring efficient usage and fair distribution. Moreover, NetFlow analysis plays a vital role in compliance monitoring, aiding organizations in meeting regulatory requirements and maintaining data integrity.


Ethernet Switched Port Analyzer:

SPAN, also known as port mirroring, is a feature that enables the network switch to copy traffic from one or more source ports and send it to a destination port. This destination port is typically connected to a packet analyzer or network monitoring tool. By monitoring network traffic in real time, administrators gain valuable insights into network performance, security, and troubleshooting.

Proactive Monitoring:

The implementation of SPAN offers several advantages to network administrators. Firstly, it allows for proactive monitoring, enabling timely identification and resolution of potential network issues. Secondly, SPAN facilitates network troubleshooting by capturing and analyzing traffic patterns, helping to pinpoint the root cause of problems. Additionally, SPAN can be used for security purposes, such as detecting and preventing unauthorized access or malicious activities within the network.

Understanding sFlow:

sFlow is a technology that enables real-time network monitoring by sampling packets at wire speed. It offers a scalable and efficient way to collect comprehensive data about network performance, traffic patterns, and potential security threats. By leveraging the power of sFlow, network administrators gain valuable insights that help optimize network performance and troubleshoot issues proactively.

Implementing sFlow on Cisco NX-OS brings several key advantages. Firstly, it provides granular visibility into network traffic, allowing administrators to identify bandwidth-hungry applications, detect anomalies, and ensure optimal resource allocation. Secondly, sFlow enables real-time network performance monitoring, enabling rapid troubleshooting and minimizing downtime. Additionally, sFlow helps in capacity planning, allowing organizations to scale their networks effectively.
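Because sFlow samples 1-in-N packets rather than observing every one, observed byte counts must be scaled back up to estimate true volume. A minimal sketch, with an assumed sampling rate of 512:

```python
def estimate_total_bytes(sampled_bytes, sampling_rate):
    """With 1-in-N packet sampling, scale observed bytes by N to
    estimate the true traffic volume (unbiased on average)."""
    return sampled_bytes * sampling_rate

# 1-in-512 sampling observed 2 MB of sampled packet bytes
print(estimate_total_bytes(2_000_000, 512))  # 1024000000 (~1 GB)
```

The estimate carries statistical error that shrinks as more samples accumulate, which is why sFlow is well suited to trend monitoring rather than exact per-flow accounting.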

Use Case: Cisco Performance Routing

Understanding Cisco Pfr

Cisco Pfr, also known as Optimized Edge Routing (OER), is an advanced routing technology that automatically selects the best path for network traffic based on real-time performance metrics. It goes beyond traditional routing protocols by considering link latency, jitter, packet loss, and available bandwidth. By dynamically adapting to changing network conditions, Cisco Pfr ensures that traffic is routed through the most optimal path, improving application performance and reducing congestion.

Enhanced Network Performance: Cisco Pfr optimizes traffic flow by intelligently selecting the most efficient path, reducing latency, and improving overall network performance. This leads to enhanced end-user experience and increased productivity.

Resilience and Redundancy: Cisco Pfr ensures high network availability by dynamically adapting to network changes. It automatically reroutes traffic in case of link failures, minimizing downtime and providing seamless connectivity.

Improved Application Performance: By intelligently routing traffic based on application-specific requirements, Cisco Pfr prioritizes critical applications and optimizes their performance. This ensures smooth and reliable application delivery, even in bandwidth-constrained environments.

SD-WAN Monitoring: The Components

WAN Challenges:

So, within your data center topology, the old approach to the WAN did not scale very well. First, there are cost, complexity, and long installation times. The network is built on expensive proprietary equipment that is difficult to manage, and transport costs are high while lacking agility.

1. Configuration Complexity:

Not to mention the complexity of segmentation, with complex BGP configurations and tagging mechanisms used to control traffic over the WAN. There are also limitations in how routing protocols forward traffic. It is not that the protocols need a severe redesign; the WAN simply needs a different kind of solution.

2. Distributed Control Plane:

There was also a distributed control plane where every node had to be considered and managed. And if you had multi-vendor equipment at the WAN edge, different teams could have managed this in other locations. 

An upgrade could take 8 to 12 weeks. With the legacy network, all the change control sits with the service provider, which I have found to be a major challenge.

3. Architectural Challenges:

There was also a significant architectural change, with a continuous flow of applications moving to the cloud. Therefore, routing via the primary data center where the security stack was located was no longer as important. Instead, in a cloud-first world, it was much better to route the application directly to the cloud.

WAN Modernization

The initial use case of SD-WAN and other routing control platforms was to increase the use of Internet-based links and reduce the high costs of MPLS. However, when you start deploying SD-WAN, the benefits become apparent immediately. With dual Internet links you can achieve five-nines availability, and MPLS at the WAN edge of the network becomes something you can move away from, especially for remote branches.

Required: Transport Independence 

There was also the need for transport independence and to avoid the long lead times associated with deploying a new MPLS circuit. With SD-WAN, you create SD-WAN overlay tunnels over the top of whatever ISP and mix and match as you see fit.

Required: Constant Performance

With SD-WAN, we now have an SD-WAN controller in a central location. This brings with it a lot of consistency in security and performance. In addition, we have a consistent policy pushed through the network regardless of network locations.

SD-WAN monitoring and performance-based application delivery

SD-WAN is also application-focused; we now have performance-based application delivery and routing. This type of design was possible with traditional WANs but was challenging and complex to manage daily. SD-WAN makes better use of capital and delivers better business outcomes: we can use the less expensive connection without dropping any packets, and there is no longer any value in keeping a link idle as a backup. With SD-WAN, you can find several virtual paths and route around all failures.

**The ability to route intelligently**

Now, applications can be routed intelligently, and using performance as a key driver can make WAN monitoring more complete. It’s not just about making a decision based on up or down. Now we have the concept of brownouts, maybe high latency or high jitter. That circuit is not down, but the application will route around the issue with intelligent WAN segmentation.

  • Stage 1: Application Visibility

For SD-WAN to make the correct provisioning and routing decisions, visibility into application performance is required. Therefore, SD-WAN enforces the right QoS policy based on how an application is tagged. To determine what prioritization they need within QoS policies, you need monitoring tools to deliver insights on various parameters, such as application response times, network saturation, and bandwidth usage. You control the overlay.

  • Stage 2: Underlay Visibility

You must also consider underlay visibility. I have found a gap in visibility between the tunnels riding over the network and the underlying transport network. SD-WAN visibility leans heavily on the virtual overlay. For WAN underlay monitoring, remember that the underlay is a hardware-dependent physical network responsible for delivering packets. The underlay network can be the Internet, MPLS, satellite, Ethernet, broadband, or any transport mode. A service provider controls the underlay.

  • Stage 3: Security Visibility

Finally, and more importantly, security visibility. Here, we need to cover the underlay and overlay of the SD-WAN network, considering devices, domains, IPs, users, and connections throughout the network. Often, malicious traffic can hide in encrypted packets and appear like normal traffic—for example, crypto mining. The traditional deep packet inspection (DPI) engines have proven to fall short here.

We must look at deep packet dynamics (DPD) and encrypted traffic analysis (ETA). Combined with artificial intelligence (AI), it can fingerprint the metadata of the packet and use behavioral heuristics to see through encrypted traffic for threats without the negative aspects of decryption.

Google's SD-WAN Cloud Hub

SD-WAN Cloud Hub is a cutting-edge networking technology that combines the power of software-defined wide area networking (SD-WAN) and cloud computing. It revolutionizes the way organizations connect and manage their network infrastructure. By leveraging the cloud as a central hub, SD-WAN Cloud Hub enables seamless connectivity between various branch locations, data centers, and cloud environments.

Enhanced performance & reliability

One of the key advantages of SD-WAN Cloud Hub is its ability to enhance network performance and reliability. By intelligently routing traffic through the most optimal path, it minimizes latency, packet loss, and jitter. This ensures smooth and uninterrupted access to critical applications and services.

Centralized visibility & control

Additionally, SD-WAN Cloud Hub offers centralized visibility and control, allowing IT teams to streamline network management and troubleshoot issues effectively.

Troubleshoot brownouts

Detecting brownouts

Traditional monitoring solutions focus on device health and cannot detect complex network service issues like brownouts. Therefore, it is critical to evaluate solutions that are easy to deploy and that simulate end-user behavior from suitable locations for the relevant network services.

Required Active Monitoring

Most of the reported causes of brownouts require active monitoring to detect. The top reasons brownouts occur can only be seen with active monitoring: congestion, buffer-full drops, missing or misconfigured QoS, problematic in-line devices, external network issues, and poor planning or design of Wi-Fi.

Challenge: Troubleshooting Brownouts

Troubleshooting a brownout is difficult, especially when it comes to understanding geo policy and tunnel performance. Which applications and users are affected, and how do you tie that back to the SD-WAN tunnels? Brownouts differ from blackouts: the link stays up, but application performance is degraded.

SD-WAN Monitoring and Visibility

So, while there are clear advantages to introducing SD-WAN, managers and engineers must consider how they will operationalize this new technology. Designing and installing it is one aspect, but how will SD-WAN be monitored and maintained? Where do visibility and security fit into the picture?

While most SD-WAN solutions provide native network and application performance visibility, this isn’t enough. I would recommend supplementing native SD-WAN visibility with third-party monitoring tools. SD-WAN vendors are not monitoring or observability experts; it is like a networking vendor jumping into the security space.

Encrypted traffic and DPI

Traditionally, we look for anomalies in unencrypted traffic, where you can inspect the payload using deep packet inspection (DPI). Nowadays, threats go beyond simple UDP scanning: bad actors hide in encrypted traffic and can mask their activity among normal flows. This leaves some DPI vendors ineffective because they can’t see the payloads. Without appropriate visibility, the appliance will generate many false-positive alerts.

**Deep packet inspection technology**

Deep packet inspection technology has been around for decades. It utilizes traffic mirroring to analyze the payload of each packet passing through a mirrored sensor or core device, which is the traditional approach to network detection and response (NDR). However, DPI was not built to analyze encrypted traffic, and this limitation creates a security gap: most modern cyberattacks, including ransomware, lateral movement, and Advanced Persistent Threats (APTs), heavily utilize encryption in their attack routines.

**Legacy Visibility Solution**

So, the legacy visibility solutions only work for unencrypted or clear text protocols such as HTTP. In addition, DPI requires a decryption proxy, or middlebox, to be deployed for encrypted traffic. Middleboxes can be costly, introduce performance bottlenecks, and create additional security concerns.

**Legacy: Unencrypted Traffic**

Previously, security practitioners would apply DPI techniques to unencrypted HTTP traffic to identify critical session details such as browser user agent, presence of a network cookie, or parameters of an HTTP POST. However, as web traffic moves from HTTP to encrypted HTTPS, network defenders are losing visibility into those details.

Good visibility and security posture

To be more effective, especially in the world of SD-WAN, we need to leverage the network monitoring infrastructure for both security and application performance monitoring. However, this comes with challenges: collecting and storing standard telemetry, and gaining visibility into encrypted traffic.

Network teams spend a lot of time on security incidents, and sometimes the security team has to look after network issues, so the two teams should work together. For example, packet analysis should be leveraged by both teams, and flow and other telemetry data need to be analyzed by both as well.

The role of a common platform:

It helps when network and security teams can work off a common platform and standard telemetry. A network monitoring system can plug into your SD-WAN controller to help operationalize your SD-WAN environment. Many application performance problems arise from security issues, so you need to know your applications and be able to examine encrypted traffic without decrypting it.

Network performance monitoring and diagnostics:

We have Flow, SNMP, and API data for network performance monitoring and diagnostics, and encrypted traffic analysis with machine learning (ML) for threat and risk identification on the security side. This will help you reduce complexity and increase efficiency. With technologies such as secure access service edge (SASE) and SD-WAN emerging, the network and security teams are under pressure to respond better.

Merging of network and security:

The market is moving towards the merging of network and security teams. We see this with cloud, SD-WAN, and SASE. With the cloud, for example, we have a lot of security built into the fabric: with a VPC, we have security group policies built into the fabric. With SD-WAN, we have end-to-end segmentation, commonly based on an overlay technology, which can also be terminated on a virtual private cloud (VPC). SASE is a combination of all of these.

Enhanced Detection:

We need to improve monitoring, investigation capabilities, and detection. This is where zero trust architecture and technologies such as single packet authorization can help you improve monitoring and enhance detection alongside detection and response solutions.

In addition, we must look at network logging and encrypted traffic analysis to improve investigation capabilities. For data sources, we have traditionally looked at packets and logs, but we also have SNMP, NetFlow, and APIs. A lot of telemetry that was initially viewed as performance monitoring data can now be used for security and cybersecurity use cases.

**The need for a baseline**

You need to understand and baseline the current network for a smooth SD-WAN rollout. Also, when it comes to policy, it is no longer just a primary link and a backup design; now we have intelligent application profiling.

Everything is based on performance parameters such as loss, latency, and jitter. So, before you start any of this, you must have good visibility and observability. You need to understand your network and get a baseline for policy creation, and getting the proper visibility is the first step in planning the SD-WAN rollout process.
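A baseline can be as simple as percentile statistics per site and per metric. The sketch below assumes you already export latency and loss samples; the numbers and the 95th-percentile choice are purely illustrative starting points for policy thresholds.

```python
import statistics

def baseline(samples, pct=95):
    """Compute a per-site baseline from historical samples.

    `samples` maps a metric name to a list of measurements.
    """
    out = {}
    for metric, values in samples.items():
        ordered = sorted(values)
        idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
        out[metric] = {
            "median": statistics.median(ordered),
            f"p{pct}": ordered[idx],   # simple nearest-rank percentile
        }
    return out

site_a = {
    "latency_ms": [20, 22, 21, 80, 23, 21, 22, 25, 24, 23],
    "loss_pct":   [0.0, 0.1, 0.0, 2.5, 0.0, 0.1, 0.0, 0.0, 0.2, 0.1],
}
print(baseline(site_a))
```

Comparing the median against the high percentile already reveals the occasional spikes (the 80 ms sample above) that an intelligent application policy would need to route around.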

Network monitoring platform

Traditional networks expose SNMP and Flow data across a lot of multi-vendor equipment. You need to monitor and understand how applications are used across the environment, and not everyone uses the same vendor for everything. For this, you need a network monitoring platform that can easily scale, perform baselining and complete reporting, and take in all multi-vendor networks. To deploy SD-WAN, you need a network monitoring platform that can collect multiple types of telemetry, support multiple vendors, and scale.

Variety of telemetry

Consuming packets, decoding them into IPFIX, and bringing in API-based data are all critical, so you need to be able to consume all of these data types. Visibility is key when you are rolling out SD-WAN. You first need a baseline to see what is expected. This will tell you whether SD-WAN will make a difference and what type of difference it will make at each site. With SD-WAN, you can deploy application-aware policies that are site-specific or region-specific, but you first need a baseline to tell you what policies you need at each site.

QoS visibility

With a network monitoring platform, you can get visibility into QoS. This can be done by using advanced flow technologies to see the markings. For example, in the case of VoIP, the traffic should be marked as expedited forwarding (EF). Visibility into queueing and shaping is also critical. You might assume that user phones automatically mark the traffic as EF.

Still, a misconfiguration at one of the switches in the data path could be remarking it to best effort. Once you have all this data, you must collect and store it. The monitoring platform must scale, especially for global customers, and collect information for large environments. Flow can be challenging: what if you have 100,000 flow records per second?
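A simple check against flow records can catch exactly this kind of remarking. The field names below are illustrative stand-ins for what a NetFlow/IPFIX exporter would provide (the DSCP value is exported per flow), not a specific vendor schema.

```python
EF = 46           # DSCP value for expedited forwarding
BEST_EFFORT = 0   # DSCP value for best effort

def find_remarked_voip(flows):
    """Flag VoIP flows whose EF marking was lost somewhere in the data path.

    `flows` is a list of dicts with illustrative fields (src, dst, app, dscp).
    """
    suspect = []
    for f in flows:
        if f["app"] == "voip" and f["dscp"] != EF:
            suspect.append((f["src"], f["dst"], f["dscp"]))
    return suspect

flows = [
    {"src": "10.1.1.5", "dst": "10.2.2.9", "app": "voip", "dscp": EF},
    {"src": "10.1.1.6", "dst": "10.2.2.9", "app": "voip", "dscp": BEST_EFFORT},
]
print(find_remarked_voip(flows))  # the second flow was remarked to best effort
```

Running a check like this per hop (or per export point) narrows down which switch in the path is doing the remarking.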

WAN capacity planning

When you have a baseline, you need to understand WAN capacity planning for each service provider. This will allow you to re-evaluate your service provider needs and, in the long run, save costs. In addition, WAN capacity planning can let you know when each site is reaching its limit.

WAN capacity planning is not just about reports. Now, we are looking extensively at the data to draw value. Here, we can see the introduction of artificial intelligence for IT operations (AIOps) and machine learning to help predict WAN capacity and future problems. This will give you a long-term prediction when deciding on WAN bandwidth and service provider needs.
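As a toy version of that prediction, a least-squares trend over monthly peak utilization can project when a circuit crosses a planning threshold. A real AIOps platform would use far richer models and seasonality handling; the 80% threshold and the history below are just examples.

```python
def months_until_threshold(utilization_pct, threshold=80.0):
    """Fit a least-squares line to monthly peak utilization (%) and
    naively project when the link crosses `threshold`.

    Returns months from now, or None if the trend is flat or declining.
    """
    n = len(utilization_pct)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(utilization_pct) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, utilization_pct)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no projected exhaustion
    month = (threshold - intercept) / slope
    return max(0.0, month - (n - 1))  # convert to "months from now"

# Six months of peak utilization on one provider circuit
history = [40, 45, 52, 55, 61, 66]
print(f"~{months_until_threshold(history):.1f} months to 80% utilization")
```

Even this naive projection turns raw utilization reports into an actionable lead time for renegotiating bandwidth with a service provider.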

Advice: Get to know your sites and POC.

You also need to know your sites. A network monitoring platform will allow you to look at sites and understand bandwidth usage across your service providers, enabling you to identify your critical sites. You will want a cross-section of varying sites, including sites on satellite or LTE connections, especially in retail. So, look for varying sites, and learn about the problematic sites where users have application problems; these are good candidates for a proof of concept.

Advice: Decide on Proof of Concept

Your network performance management software will give you visibility into which sites to include in your proof of concept. The platform will tell you which sites are critical and which are problematic in terms of performance, making for a good mix in a proof of concept. When you get the appropriate sites in the mix, you will immediately see the return on investment (ROI) for SD-WAN: uptime will increase, and you will see this right away. But for this to be evident, you first need a baseline.

Identify your applications: Everything is port 80

So, we have latency, jitter, and loss. Heavy loss is apparent when it happens. However, for specific applications, 1-5% packet loss may not trigger a failover yet can still negatively affect the application. Also, many don’t know what applications are running. What about people connecting to the VPN with no split tunnel and then streaming movies? We have IPs and ports to identify the applications running on your network, but everything is port 80 now. So, you need to be able to consume different types of telemetry from the network to understand your applications fully.

The issues with deep packet inspection

So, what about homegrown applications that a DPI engine might not know about? Many DPI vendors have trouble identifying these. This is why you need a network monitoring platform that can categorize and identify applications based on parameters DPI can’t. A DPI engine can classify many applications, but it can’t do everything. A network monitoring platform can create a custom application definition based on, say, an IP address, port number, URL, or URI.
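A minimal sketch of such a rule-based custom-application definition might look like this; all rule names, IPs, ports, and URLs are hypothetical.

```python
def classify(flow, custom_apps):
    """Classify a flow against custom application definitions that a DPI
    engine would not know about.

    Each rule matches on any combination of destination IP, port, and
    URL prefix; a rule matches only if all of its specified fields match.
    """
    for name, rule in custom_apps.items():
        if "dst_ip" in rule and flow.get("dst_ip") != rule["dst_ip"]:
            continue
        if "port" in rule and flow.get("port") != rule["port"]:
            continue
        if "url_prefix" in rule and not flow.get("url", "").startswith(rule["url_prefix"]):
            continue
        return name
    return "unknown"

custom_apps = {
    # Hypothetical homegrown ERP app served from a known internal VIP
    "homegrown-erp": {"dst_ip": "10.9.0.20", "port": 8443},
    "intranet": {"url_prefix": "https://intranet.example.com/"},
}
print(classify({"dst_ip": "10.9.0.20", "port": 8443}, custom_apps))  # homegrown-erp
```

Once flows carry a meaningful application label, site-specific SD-WAN policies can be written against the application rather than against a port number.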

Requirements: Network monitoring platform

Know application routing

The network monitoring platform needs to know the application policy and routing, and it needs to flag threshold events, since applications are routed based on intelligent policy. Once the policy is understood, you must see how the application is routed across the overlay. With SD-WAN, we can have a topology per segment, based on a VRF or service VPN: full mesh, or regions with hub and spoke. Per-segment topology verification is also needed to confirm that things are running correctly, to understand the application policy and what the traffic looks like, and to be able to verify brownouts.

SD-WAN multi-vendor

Due to mergers or acquisitions, you may have an environment with multiple SD-WAN vendors, and each vendor has its own secret sauce. The network monitoring platform needs to bridge the gap and monitor both sides. There may even be different business units. So, how do you leverage common infrastructure to achieve this? First, leverage telemetry for monitoring and analysis. This matters because if you invest in packet analysis, it should be leveraged by both the security and network teams, reducing tool sprawl.

Overcome common telemetry challenges

Consuming standard telemetry does come with challenges, and every type of telemetry has its own. First, big data: there is a lot of volume in terms of storage size, and you must plan for the speed and location of packet analysis. Next, we have the collection and performance side of things. How do we collect all of this data? From a flow perspective, you can get flow records from many different devices, so how do you collect from all the edge devices and bring the records into a central location?
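The central-collection step ultimately reduces to merging per-edge records into one view. This sketch uses plain tuples in place of real NetFlow/IPFIX decoding so the aggregation logic stays visible; site names and addresses are illustrative.

```python
from collections import defaultdict

def aggregate(records):
    """Merge per-edge flow records into one central view.

    Each record is (site, src, dst, bytes). In production these would
    arrive over NetFlow/IPFIX from every edge device.
    """
    per_site = defaultdict(int)
    per_conversation = defaultdict(int)
    for site, src, dst, nbytes in records:
        per_site[site] += nbytes
        per_conversation[(src, dst)] += nbytes
    return dict(per_site), dict(per_conversation)

records = [
    ("branch-nyc", "10.1.0.5", "172.16.0.9", 1200),
    ("branch-nyc", "10.1.0.5", "172.16.0.9", 800),
    ("branch-lon", "10.2.0.7", "172.16.0.9", 500),
]
sites, convos = aggregate(records)
print(sites)  # per-site byte totals
```

At 100,000 flow records per second, this same logic has to run over a streaming pipeline with tiered storage, which is exactly the scaling problem the platform must solve.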

Finally, we have cost and complexity challenges. You may have different products for different solutions: an NPM for network performance monitoring, an NDR, and packet capture tools, all working on the same telemetry. Organizations often start with packet capture and then move to an NPM or NDR solution.

A final note on encrypted traffic

**SD-WAN encryption**

With SD-WAN, everything is encrypted across public transport. Most SD-WAN vendors can meter traffic on the LAN side before it enters the SD-WAN tunnels, but many applications are encrypted end to end. You even need to identify keystrokes within encrypted sessions. How can you get visibility into fully encrypted traffic? Before long, virtually all traffic will be encrypted. Here, we can use a network monitoring platform to identify and analyze threats within encrypted traffic.

**Deep packet dynamics**

So, you should be able to track and classify with what’s known as deep packet dynamics (DPD), which can include, for example, byte distributions, packet sequences, timing, jitter, RTT, and interflow statistics. We can then feed this into machine learning to identify applications and any anomalies in encrypted traffic. This can identify threats in encrypted traffic without decrypting it.
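A toy version of that anomaly step: score observed DPD features (here, mean packet gaps for a TLS service) against a learned baseline with a z-score. A production system would use a trained ML model over many features, but the no-decryption principle is the same; all numbers are illustrative.

```python
import statistics

def anomaly_scores(baseline_values, observed):
    """Score observed deep-packet-dynamics features against a baseline
    using a simple z-score (distance from the mean in standard deviations).
    """
    mean = statistics.mean(baseline_values)
    stdev = statistics.pstdev(baseline_values) or 1e-9  # avoid divide-by-zero
    return [(x, abs(x - mean) / stdev) for x in observed]

# Baseline mean packet gaps (ms) for a given TLS service, then two new flows
baseline_gaps = [20.0, 21.0, 19.5, 20.5, 20.0]
for value, score in anomaly_scores(baseline_gaps, [20.2, 250.0]):
    flag = "ANOMALY" if score > 3 else "ok"
    print(f"gap={value}ms z={score:.1f} {flag}")
```

The second flow's timing is wildly off-baseline, the kind of signal that can expose beaconing or exfiltration inside an encrypted session without touching the payload.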

**Improving Visibility**

Deep packet dynamics improves encrypted traffic visibility while remaining scalable, adding no latency, and not violating privacy. This gives us a malware detection method and a cryptographic assessment of secured network sessions that do not rely on decryption.

This can be done without having the keys or decrypting the traffic. Managing session keys for decryption is complex and computationally costly, and it is often incomplete: vendors frequently support session key forwarding only on Windows or Linux, not on macOS, never mind the world of IoT.

**Encrypted traffic analytics**

Cisco’s Encrypted Traffic Analytics (ETA) uses the software Stealthwatch to compare the metadata of benign and malicious network packets to identify malicious traffic, even if it’s encrypted. This provides insight into threats in encrypted traffic without decryption. In addition, recent work on Cisco’s TLS fingerprinting can provide fine-grained details about the enterprise network’s applications, operating systems, and processes.

The issue with packet analysis is that everything is encrypted, especially with TLS 1.3; the traffic monitored at the WAN edge is encrypted. How do you collect all of this, and how do you store it? How do you analyze encrypted traffic? Decrypting traffic can create an exploit and a potential attack surface, and you don’t want to decrypt everything anyway.

Summary: WAN Monitoring

In today’s digital landscape, businesses heavily rely on their networks to ensure seamless connectivity and efficient data transfer. As organizations increasingly adopt Software-Defined Wide Area Networking (SD-WAN) solutions, the need for robust monitoring becomes paramount. This blog post delved into SD-WAN monitoring, its significance, and how it empowers businesses to optimize their network performance.

Understanding SD-WAN

SD-WAN, short for Software-Defined Wide Area Networking, revolutionizes traditional networking by leveraging software-defined techniques to simplify management, enhance agility, and streamline connectivity across geographically dispersed locations. By abstracting network control from the underlying hardware, SD-WAN enables organizations to optimize bandwidth utilization, reduce costs, and improve application performance.

The Role of Monitoring in SD-WAN

Effective monitoring plays a pivotal role in ensuring the smooth operation of SD-WAN deployments. It provides real-time visibility into network performance, application traffic, and security threats. Monitoring tools enable IT teams to proactively identify bottlenecks, latency issues, or network disruptions, allowing them to swiftly address these challenges and maintain optimal network performance.

Key Benefits of SD-WAN Monitoring

Enhanced Network Performance: SD-WAN monitoring empowers organizations to monitor and analyze network traffic, identify performance bottlenecks, and optimize bandwidth allocation. This leads to improved application performance and enhanced end-user experience.

Increased Security: With SD-WAN monitoring, IT teams can monitor network traffic for potential security threats, detect anomalies, and quickly respond to attacks or breaches. Monitoring helps ensure compliance with security policies and provides valuable insights for maintaining a robust security posture.

Proactive Issue Resolution: Real-time monitoring allows IT teams to proactively identify and resolve issues before they escalate. By leveraging comprehensive visibility into network performance and traffic patterns, organizations can minimize downtime, optimize resource allocation, and ensure business continuity.

Best Practices for SD-WAN Monitoring

Choosing the Right Monitoring Solution: Select a monitoring solution that aligns with your organization’s specific needs, supports SD-WAN protocols, and provides comprehensive visibility into network traffic and performance metrics.

Monitoring Key Performance Indicators (KPIs): Define relevant KPIs such as latency, packet loss, jitter, and bandwidth utilization to track network performance effectively. Regularly monitor these KPIs to identify trends, anomalies, and areas for improvement.

Integration with Network Management Systems: Integrate SD-WAN monitoring tools with existing network management systems and IT infrastructure to streamline operations, centralize monitoring, and enable a holistic network view.

Conclusion:

SD-WAN monitoring is a critical component of successful SD-WAN deployments. By providing real-time visibility, enhanced network performance, increased security, and proactive issue resolution, monitoring tools empower organizations to maximize the benefits of SD-WAN technology. As businesses continue to embrace SD-WAN solutions, investing in robust monitoring capabilities will be essential to ensuring optimal network performance and driving digital transformation.

network overlays

WAN Virtualization

WAN Virtualization

In today's fast-paced digital world, seamless connectivity is the key to success for businesses of all sizes. WAN (Wide Area Network) virtualization has emerged as a game-changing technology, revolutionizing the way organizations connect their geographically dispersed branches and remote employees. In this blog post, we will explore the concept of WAN virtualization, its benefits, implementation considerations, and its potential impact on businesses.

WAN virtualization is a technology that abstracts the physical network infrastructure, allowing multiple logical networks to operate independently over a shared physical infrastructure. It enables organizations to combine various types of connectivity, such as MPLS, broadband, and cellular, into a single virtual network. By doing so, WAN virtualization enhances network performance, scalability, and flexibility.

Increased Flexibility and Scalability: WAN virtualization allows businesses to scale their network resources on-demand, facilitating seamless expansion or contraction based on their requirements. It provides flexibility to dynamically allocate bandwidth, prioritize critical applications, and adapt to changing network conditions.

Improved Performance and Reliability: By leveraging intelligent traffic management techniques and load balancing algorithms, WAN virtualization optimizes network performance. It intelligently routes traffic across multiple network paths, avoiding congestion and reducing latency. Additionally, it enables automatic failover and redundancy, ensuring high network availability.

Simplified Network Management: Traditional WAN architectures often involve complex configurations and manual provisioning. WAN virtualization simplifies network management by centralizing control and automating tasks. Administrators can easily set policies, monitor network performance, and make changes from a single management interface, saving time and reducing human errors.

Multi-Site Connectivity: For organizations with multiple remote sites, WAN virtualization offers a cost-effective solution. It enables seamless connectivity between sites, allowing efficient data transfer, collaboration, and resource sharing. With centralized management, network administrators can ensure consistent policies and security across all sites.

Cloud Connectivity:

As more businesses adopt cloud-based applications and services, WAN virtualization becomes an essential component. It provides reliable and secure connectivity between on-premises infrastructure and public or private cloud environments. By prioritizing critical cloud traffic and optimizing routing, WAN virtualization ensures optimal performance for cloud-based applications.

Highlights: WAN Virtualization


Separating the Control and Data Plane:

WAN virtualization can be defined as the abstraction of physical network resources into virtual entities, allowing for more flexible and efficient network management. By separating the control plane from the data plane, WAN virtualization enables the centralized management and orchestration of network resources, regardless of their physical locations. This simplifies network administration and paves the way for enhanced scalability and agility.

WAN virtualization optimizes network performance by intelligently routing traffic and dynamically adjusting network resources based on real-time conditions. This ensures that critical applications receive the necessary bandwidth and quality of service, resulting in improved user experience and productivity.

By leveraging WAN virtualization, organizations can reduce their reliance on expensive dedicated circuits and hardware appliances. Instead, they can leverage existing network infrastructure and utilize cost-effective internet connections without compromising security or performance. This significantly lowers operational costs and capital expenditures.

Traditional WAN architectures often struggle to meet modern businesses’ evolving needs. WAN virtualization solves this challenge by providing a scalable and flexible network infrastructure. With virtual overlays, organizations can rapidly deploy and scale their network resources as needed, empowering them to adapt quickly to changing business requirements.

Enhanced Performance and Reliability: WAN virtualization leverages intelligent traffic routing algorithms to dynamically select the optimal path for data transmission. This ensures that critical applications receive the necessary bandwidth and low latency, resulting in improved performance and end-user experience. Moreover, by seamlessly integrating various network links, WAN virtualization enhances network resilience and minimizes downtime.
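The routing decision itself can be sketched as an SLA filter plus a tie-breaker. The paths, metrics, and thresholds below are illustrative; a real SD-WAN edge re-evaluates this continuously, per flow or per packet.

```python
def best_path(paths, sla):
    """Pick the transmission path that meets an application's SLA.

    `paths` maps a path name to measured loss/latency/jitter; `sla` holds
    the application's maximum tolerated values for the same metrics.
    """
    eligible = [
        (name, m) for name, m in paths.items()
        if m["loss_pct"] <= sla["loss_pct"]
        and m["latency_ms"] <= sla["latency_ms"]
        and m["jitter_ms"] <= sla["jitter_ms"]
    ]
    if not eligible:
        return None  # no path meets the SLA; fall back or alert
    # Among eligible paths, prefer the lowest latency
    return min(eligible, key=lambda kv: kv[1]["latency_ms"])[0]

paths = {
    "mpls":      {"loss_pct": 0.1, "latency_ms": 40, "jitter_ms": 2},
    "broadband": {"loss_pct": 0.5, "latency_ms": 25, "jitter_ms": 8},
    "lte":       {"loss_pct": 2.0, "latency_ms": 70, "jitter_ms": 20},
}
voice_sla = {"loss_pct": 1.0, "latency_ms": 150, "jitter_ms": 10}
print(best_path(paths, voice_sla))  # broadband: eligible and lowest latency
```

Note that the cheap broadband link wins here despite MPLS being available, which is exactly how WAN virtualization extracts more value from mixed transports.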

Cost-Effective Solution: Traditional WAN architectures often come with substantial costs, particularly when it comes to deploying and managing multiple dedicated connections. WAN virtualization leverages cost-effective broadband links, reducing the reliance on expensive MPLS circuits. This not only lowers operational expenses but also enables organizations to scale their network infrastructure without breaking the bank.

Centralized Control and Visibility: WAN virtualization platforms provide centralized management and control, allowing administrators to monitor and configure the entire network from a single interface. This simplifies network operations, reduces complexity, and enhances troubleshooting capabilities. Additionally, administrators gain deep visibility into network performance and can make informed decisions to optimize the network.

Streamlined Deployment and Scalability: With WAN virtualization, deploying new sites or expanding the network becomes a streamlined process. Virtual networks can be quickly provisioned, and new sites can be seamlessly integrated into the existing infrastructure. This agility ensures that businesses can rapidly adapt to changing requirements and scale their network as needed.

Google Cloud Data Centers

**Understanding Google Network Connectivity Center**

Google Network Connectivity Center (NCC) is a cloud-based service designed to simplify and centralize network management. By leveraging Google’s extensive global infrastructure, NCC provides organizations with a unified platform to manage their network connectivity across various environments, including on-premises data centers, multi-cloud setups, and hybrid environments.

**Key Features and Benefits**

1. **Centralized Network Management**: NCC offers a single pane of glass for network administrators to monitor and manage connectivity across different environments. This centralized approach reduces the complexity associated with managing multiple network endpoints and enhances operational efficiency.

2. **Enhanced Security**: With NCC, organizations can implement robust security measures across their network. The service supports advanced encryption protocols and integrates seamlessly with Google’s security tools, ensuring that data remains secure as it moves between different environments.

3. **Scalability and Flexibility**: One of the standout features of NCC is its ability to scale with your organization’s needs. Whether you’re expanding your data center operations or integrating new cloud services, NCC provides the flexibility to adapt quickly and efficiently.

**Optimizing Data Center Operations**

Data centers are the backbone of modern digital infrastructure, and optimizing their operations is crucial for any organization. NCC facilitates this by offering tools that enhance data center connectivity and performance. For instance, with NCC, you can easily set up and manage VPNs, interconnect data centers across different regions, and ensure high availability and redundancy.

**Seamless Integration with Other Google Services**

NCC isn’t just a standalone service; it integrates seamlessly with other Google Cloud services such as Cloud Interconnect, Cloud VPN, and Google Cloud Armor. This integration allows organizations to build comprehensive network solutions that leverage the best of Google’s cloud offerings. Whether it’s enhancing security, improving performance, or ensuring compliance, NCC works in tandem with other services to deliver a cohesive and powerful network management solution.

Understanding Network Tiers

Google Cloud offers two distinct Network Tiers: Premium Tier and Standard Tier. Each tier is designed to cater to specific use cases and requirements. The Premium Tier provides users with unparalleled performance, low latency, and high availability. On the other hand, the Standard Tier offers a more cost-effective solution without compromising on reliability.

The Premium Tier, powered by Google’s global fiber network, ensures lightning-fast connectivity and optimal performance for critical workloads. With its vast network of points of presence (PoPs), it minimizes latency and enables seamless data transfers across regions. By leveraging the Premium Tier, businesses can ensure superior user experiences and support demanding applications that require real-time data processing.

While the Premium Tier delivers exceptional performance, the Standard Tier presents an attractive option for cost-conscious organizations. By utilizing Google Cloud’s extensive network peering relationships, the Standard Tier offers reliable connectivity at a reduced cost. It is an ideal choice for workloads that are less latency-sensitive or require moderate bandwidth.

What is VPC Networking?

VPC networking refers to the virtual network environment that allows you to securely connect your resources running in the cloud. It provides isolation, control, and flexibility, enabling you to define custom network configurations to suit your specific needs. In Google Cloud, VPC networking is a fundamental building block for your cloud infrastructure.

Google Cloud VPC networking offers a range of powerful features that enhance your network management capabilities. These include subnetting, firewall rules, route tables, VPN connectivity, and load balancing. Let’s explore each of these features in more detail:

Subnetting: With VPC subnetting, you can divide your IP address range into smaller subnets, allowing for better resource allocation and network segmentation.

Firewall Rules: Google Cloud VPC networking provides robust firewall rules that enable you to control inbound and outbound traffic, ensuring enhanced security for your applications and data.

Route Tables: Route tables in VPC networking allow you to define the routing logic for your network traffic, ensuring efficient communication between different subnets and external networks.

VPN Connectivity: Google Cloud supports VPN connectivity, allowing you to establish secure connections between your on-premises network and your cloud resources, creating a hybrid infrastructure.

Load Balancing: VPC networking offers load balancing capabilities, distributing incoming traffic across multiple instances, increasing availability and scalability of your applications.

Understanding HA VPN

Before diving into the configuration process, let’s first understand HA VPN. HA VPN is a highly available and scalable VPN solution provided by Google Cloud. It allows you to establish secure connections between your on-premises network and your Google Cloud Virtual Private Cloud (VPC) network.

To start configuring HA VPN, you must fulfill a few prerequisites. First, a dedicated VPC network should be set up in Google Cloud. Next, ensure a compatible VPN gateway device on your on-premises network. Additionally, you’ll need to gather relevant information such as IP ranges, shared secrets, and routing details.

Understanding Load Balancers

Load balancers act as a traffic cop, directing client requests to a set of backend servers that can handle the load. In Google Cloud, there are two types of load balancers: Network Load Balancer (NLB) and HTTP(S) Load Balancer (HLB). NLB operates at the transport layer, while HLB operates at the application layer, providing advanced features like SSL termination and content-based routing.

To set up an NLB in Google Cloud, you need to define forwarding rules, target pools, and health checks. Forwarding rules determine how traffic is distributed, while target pools group the backend instances. Health checks ensure that traffic is only directed to healthy instances, minimizing downtime and maximizing availability.

HLB offers a more feature-rich load balancing solution, ideal for web applications. It supports features like SSL offloading, URL mapping, and session affinity. To configure an HLB, you need to define frontend and backend services, as well as backend buckets or instance groups. Frontend services handle incoming requests, while backend services define the pool of resources serving the traffic.

Understanding SD-WAN Cloud Hub

SD-WAN Cloud Hub is a cutting-edge networking solution that combines the power of software-defined wide area networking (SD-WAN) with the scalability and reliability of cloud services. It acts as a centralized hub, enabling organizations to connect their branch offices, data centers, and cloud resources in a secure and efficient manner. By leveraging SD-WAN Cloud Hub, businesses can simplify their network architecture, improve application performance, and reduce costs.

Google Cloud needs no introduction. With its robust infrastructure, comprehensive suite of services, and global reach, it has become a preferred choice for businesses across industries. From compute and storage to AI and analytics, Google Cloud offers a wide range of solutions that empower organizations to innovate and scale. By integrating SD-WAN Cloud Hub with Google Cloud, businesses can unlock unparalleled benefits and take their network connectivity to new heights.

Example: DMVPN (Dynamic Multipoint VPN)

Separating control from the data plane

DMVPN is a Cisco-developed solution that combines the benefits of multipoint GRE tunnels, IPsec encryption, and dynamic routing protocols to create a flexible and efficient virtual private network. It simplifies network architecture, reduces operational costs, and enhances scalability. With DMVPN, organizations can connect remote sites, branch offices, and mobile users seamlessly, creating a cohesive network infrastructure.

The underlay infrastructure forms the foundation of DMVPN. It refers to the physical network that connects the different sites or locations. This could be an existing Wide Area Network (WAN) infrastructure, such as MPLS, or the public Internet. The underlay provides the transport for the overlay network, enabling the secure transmission of data packets between sites.

The overlay network is the virtual network created on top of the underlay infrastructure. It is responsible for establishing the secure tunnels and routing between the connected sites. DMVPN uses multipoint GRE tunnels to allow dynamic, direct communication between sites, so data traffic is not forced through a rigid hub-and-spoke topology. IPsec encryption ensures the confidentiality and integrity of data transmitted over the overlay network.

Example WAN Technology: Tunneling IPv6 over IPv4

IPv6 tunneling is a technique that allows the transmission of IPv6 packets over an IPv4 network infrastructure. It enables communication between IPv6 networks by encapsulating IPv6 packets within IPv4 packets. By doing so, organizations can utilize existing IPv4 infrastructure while transitioning to IPv6. Before delving into its various implementations, understanding the basics of IPv6 tunneling is crucial.

Types of IPv6 Tunneling

There are several types of IPv6 tunneling techniques, each with its advantages and considerations. Let’s explore a few popular types:

Manual Tunneling: Manual tunneling is a simple method in which tunnel endpoints are statically defined. It requires manually configuring tunnel interfaces on each participating device. While it provides flexibility and control, this approach can be time-consuming and prone to human error.

Automatic Tunneling: Automatic tunneling, also known as 6to4 tunneling, allows for the automatic creation of tunnels without manual configuration. It utilizes the 6to4 addressing scheme, where IPv6 packets are encapsulated within IPv4 packets using protocol 41. While convenient, automatic tunneling may encounter issues with address translation and compatibility.

Teredo Tunneling: Teredo tunneling is another automatic technique that enables IPv6 connectivity for hosts behind IPv4 Network Address Translation (NAT) devices. It uses UDP encapsulation to carry IPv6 packets over IPv4 networks. Though widely supported, Teredo tunneling may suffer from performance limitations due to its reliance on UDP.
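
To make the 6to4 scheme concrete, here is a minimal Python sketch (the function name is my own, for illustration) that derives the 2002::/48 prefix a site would use from its public IPv4 address:

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 prefix: 2002::/16 followed by the 32-bit IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Place 0x2002 in the top 16 bits and the IPv4 address in the next 32 bits
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

The actual IPv6-in-IPv4 encapsulation (IP protocol 41) is then handled by the tunnel interface; the derived /48 gives the site its routable IPv6 space.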

Understanding VRFs

VRFs, in simple terms, allow the creation of multiple virtual routing tables within a single physical router or switch. Each VRF operates as an independent routing instance with its own routing table, interfaces, and forwarding decisions. This powerful concept allows for logical separation of network traffic, enabling enhanced security, scalability, and efficiency.

One of VRFs’ primary advantages is network segmentation. By creating separate VRF instances, organizations can effectively isolate different parts of their network, ensuring traffic from one VRF cannot directly communicate with another. This segmentation enhances network security and provides granular control over network resources.

Furthermore, VRFs enable efficient use of network resources. By utilizing VRFs, organizations can optimize their routing decisions, ensuring that traffic is forwarded through the most appropriate path based on the specific requirements of each VRF. This dynamic routing capability leads to improved network performance and better resource utilization.
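
As a rough illustration of the isolation VRFs provide, the following Python sketch (a toy model, not vendor code) keeps one routing table per VRF, so overlapping customer address space resolves independently:

```python
import ipaddress

class Router:
    def __init__(self):
        self.vrfs = {}  # VRF name -> list of (prefix, next_hop)

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, []).append((ipaddress.ip_network(prefix), next_hop))

    def lookup(self, vrf, dest):
        """Longest-prefix match, but only within the given VRF's table."""
        addr = ipaddress.ip_address(dest)
        candidates = [(p, nh) for p, nh in self.vrfs.get(vrf, []) if addr in p]
        if not candidates:
            return None
        return max(candidates, key=lambda c: c[0].prefixlen)[1]

r = Router()
r.add_route("customer-a", "10.0.0.0/8", "pe1")
r.add_route("customer-b", "10.0.0.0/8", "pe2")  # same prefix, isolated per VRF
print(r.lookup("customer-a", "10.1.2.3"))  # pe1
print(r.lookup("customer-b", "10.1.2.3"))  # pe2
```

Both customers use 10.0.0.0/8, yet each lookup stays inside its own VRF, which is exactly the segmentation property described above.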

Use Cases for VRFs

VRFs are widely used in various networking scenarios. One common use case is in service provider networks, where VRFs separate customer traffic, allowing multiple customers to share a single physical infrastructure while maintaining isolation. This approach brings cost savings and scalability benefits.

Another use case for VRFs is in enterprise networks with strict security requirements. By leveraging VRFs, organizations can segregate sensitive data traffic from the rest of the network, reducing the risk of unauthorized access and potential data breaches.

Example WAN technology: Cisco PfR

Cisco PfR is an intelligent routing solution that utilizes real-time performance metrics to make dynamic routing decisions. By continuously monitoring network conditions, such as latency, jitter, and packet loss, PfR can intelligently reroute traffic to optimize performance. Unlike traditional static routing protocols, PfR adapts to network changes on the fly, ensuring optimal utilization of available resources.

Key Features of Cisco PfR

a. Performance Monitoring: PfR continuously collects performance data from various sources, including routers, probes, and end-user devices. This data provides valuable insights into network behavior and helps identify areas of improvement.

b. Intelligent Traffic Engineering: With its advanced algorithms, Cisco PfR can dynamically select the best path for traffic based on predefined policies and performance metrics. This enables efficient utilization of available network resources and minimizes congestion.

c. Application Visibility and Control: PfR offers deep visibility into application-level performance, allowing network administrators to prioritize critical applications and allocate resources accordingly. This ensures optimal performance for business-critical applications and improves overall user experience.
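
Conceptually, the path selection PfR performs can be sketched as a scoring function over measured metrics. The Python below is a simplified illustration with made-up weights, not Cisco’s actual algorithm:

```python
# Score candidate WAN paths on measured latency, jitter, and loss; lowest wins.
def pick_path(paths, weights=(1.0, 1.0, 50.0)):
    """paths: dict name -> (latency_ms, jitter_ms, loss_pct)."""
    w_lat, w_jit, w_loss = weights
    score = lambda m: w_lat * m[0] + w_jit * m[1] + w_loss * m[2]
    return min(paths, key=lambda name: score(paths[name]))

paths = {
    "mpls":     (20, 2, 0.0),   # low latency, clean link
    "internet": (35, 8, 0.5),   # cheaper but lossier
}
print(pick_path(paths))  # mpls
```

Changing the weights models policy: a bulk-transfer class might weigh loss lightly, while a voice class would penalize jitter heavily.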


Example WAN Technology: Network Overlay

Virtual network overlays serve as a layer of abstraction, enabling the creation of multiple virtual networks on top of a physical network infrastructure. By encapsulating network traffic within virtual tunnels, overlays provide isolation, scalability, and flexibility, empowering organizations to manage their networks efficiently.

Underneath the surface, virtual network overlays rely on encapsulation protocols such as Virtual Extensible LAN (VXLAN) or Generic Routing Encapsulation (GRE). These protocols enable the creation of virtual tunnels, allowing network packets to traverse the physical infrastructure while remaining isolated within their respective virtual networks.

What is GRE?

At its core, Generic Routing Encapsulation is a tunneling protocol that allows the encapsulation of different network layer protocols within IP packets. It acts as an envelope, carrying packets from one network to another across an intermediate network. GRE provides a flexible and scalable solution for connecting disparate networks, facilitating seamless communication.

GRE encapsulates the original packet, often called the payload, within a new IP packet. This encapsulated packet is then sent to the destination network, where it is decapsulated to retrieve the original payload. By adding an IP header, GRE enables the transportation of various protocols across different network infrastructures, including IPv4, IPv6, IPX, and MPLS.
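
The encapsulate/decapsulate cycle can be sketched in a few lines of Python. This builds only the minimal 4-byte GRE header (no checksum, key, or sequence fields), with the protocol type field identifying the payload:

```python
import struct

def gre_encapsulate(payload: bytes, proto: int = 0x0800) -> bytes:
    """Prepend a minimal GRE header: 2 bytes flags/version, 2 bytes protocol type.
    proto is the EtherType of the payload, e.g. 0x0800 for IPv4, 0x86DD for IPv6."""
    flags_version = 0x0000  # all optional fields absent, GRE version 0
    return struct.pack("!HH", flags_version, proto) + payload

def gre_decapsulate(packet: bytes):
    """Strip the GRE header, returning the payload protocol and the payload."""
    _, proto = struct.unpack("!HH", packet[:4])
    return proto, packet[4:]

inner = b"\x45\x00..."  # stand-in for an original IPv4 packet
frame = gre_encapsulate(inner)
proto, payload = gre_decapsulate(frame)
print(hex(proto), payload == inner)  # 0x800 True
```

In a real deployment the GRE frame is itself wrapped in an outer IP header (protocol 47) addressed between the tunnel endpoints; that outer layer is omitted here for brevity.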

Introducing IPSec

IPSec, short for Internet Protocol Security, is a suite of protocols that provides security services at the IP network layer. It offers data integrity, confidentiality, and authentication features, ensuring that data transmitted over IP networks remains protected from unauthorized access and tampering. IPSec operates in two modes: Transport Mode and Tunnel Mode.

Combining GRE and IPSec

By combining GRE and IPSec, organizations can create secure and private communication channels over public networks. GRE provides the tunneling mechanism, while IPSec adds an extra layer of security by encrypting and authenticating the encapsulated packets. This combination allows for the secure transmission of sensitive data, remote access to private networks, and the establishment of virtual private networks (VPNs).

The combination of GRE and IPSec offers several advantages. First, it enables the creation of secure VPNs, allowing remote users to connect securely to private networks over public infrastructure. Second, it protects against eavesdropping and data tampering, ensuring the confidentiality and integrity of transmitted data. Lastly, GRE and IPSec are vendor-neutral protocols widely supported by various network equipment, making them accessible and compatible.


Types of WAN Virtualization Techniques

There are several popular techniques for implementing WAN virtualization, each with its unique characteristics and use cases. Let’s explore a few of them:

a. MPLS (Multi-Protocol Label Switching): MPLS is a widely used technique that leverages labels to direct network traffic efficiently. It provides reliable and secure connectivity, making it suitable for businesses requiring stringent service level agreements (SLAs).

b. SD-WAN (Software-Defined Wide Area Network): SD-WAN is a revolutionary technology that abstracts and centralizes the network control plane in software. It offers dynamic path selection, traffic prioritization, and simplified network management, making it ideal for organizations with multiple branch locations.

c. VPLS (Virtual Private LAN Service): VPLS extends the functionality of Ethernet-based LANs over a wide area network. It creates a virtual bridge between geographically dispersed sites, enabling seamless communication as if they were part of the same local network.

What is MPLS?

MPLS, short for Multi-Protocol Label Switching, is a versatile and scalable protocol used in modern networks. At its core, MPLS assigns labels to network packets, allowing for efficient and flexible routing. These labels help streamline traffic flow, leading to improved performance and reliability. To understand how MPLS works, we need to explore its key components.

The basic building block is the Label Switched Path (LSP), a predetermined path that packets follow. Labels are attached to packets at the ingress router, guiding them along the LSP until they reach their destination. This label-based forwarding mechanism enables MPLS to offer traffic engineering capabilities and support various network services.
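
Label-based forwarding along an LSP can be illustrated with a toy label table per router: each hop looks up the incoming label, swaps it, and forwards, without ever consulting the IP header. A minimal Python sketch with hypothetical router names:

```python
# Each router's label forwarding table: incoming label -> (outgoing label, next hop).
lfib = {
    "ingress": {None: (100, "p1")},      # ingress pushes the initial label
    "p1":      {100: (200, "p2")},       # core routers swap labels
    "p2":      {200: (None, "egress")},  # penultimate hop pops the label
}

def forward(router, label):
    out_label, next_hop = lfib[router][label]
    return next_hop, out_label

hops, router, label = [], "ingress", None
while router in lfib:               # follow the LSP until the label-free egress
    router, label = forward(router, label)
    hops.append(router)
print(hops)  # ['p1', 'p2', 'egress']
```

Only the ingress router classifies the packet; every subsequent hop does a constant-time label lookup, which is the efficiency argument for MPLS.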

Understanding Label Distributed Protocols

The Label Distribution Protocol (LDP) is fundamental to modern networking. It is designed to establish and maintain label-switched paths (LSPs) in a network. LDP operates by distributing labels, which are used to identify and forward network traffic efficiently. By leveraging labels, LDP enhances network scalability and enables faster packet forwarding.

One key advantage of LDP is that it underpins Multiprotocol Label Switching (MPLS). MPLS allows for efficient routing of different types of network traffic, including IP, Ethernet, and ATM. This versatility makes LDP-based networks highly adaptable and suitable for diverse environments. Additionally, LDP minimizes network congestion, improves Quality of Service (QoS), and promotes effective resource utilization.

What is MPLS LDP?

MPLS LDP, or Label Distribution Protocol, is a key component of Multiprotocol Label Switching (MPLS) technology. It facilitates the establishment of label-switched paths (LSPs) through the network, enabling efficient forwarding of data packets. MPLS LDP uses labels to direct network traffic along predetermined paths, eliminating the need for complex routing table lookups.

One of MPLS LDP’s primary advantages is its ability to enhance network performance. By utilizing labels, MPLS LDP reduces the time and resources required for packet forwarding, resulting in faster data transmission and reduced network congestion. Additionally, MPLS LDP allows for traffic engineering, enabling network administrators to prioritize certain types of traffic and allocate bandwidth accordingly.

Understanding MPLS VPNs

MPLS VPNs, or Multiprotocol Label Switching Virtual Private Networks, are a network technology that allows multiple sites or branches of an organization to communicate securely over a shared service provider network. Unlike traditional VPNs, MPLS VPNs utilize labels to efficiently route and prioritize data packets, ensuring optimal performance and security. By encapsulating data within labels, MPLS VPNs enable seamless communication between different sites while maintaining privacy and segregation.

Understanding SD-WAN

SD-WAN is a cutting-edge networking technology that utilizes software-defined principles to manage and optimize network connections intelligently. Unlike traditional WAN, which relies on costly and inflexible hardware, SD-WAN leverages software-based solutions to streamline network management, improve performance, and enhance security.

Key Benefits of SD-WAN

a) Enhanced Performance: SD-WAN intelligently routes traffic across multiple network paths, ensuring optimal performance and reduced latency. This results in faster data transfers and improved user experience.

b) Cost Efficiency: With SD-WAN, businesses can leverage affordable broadband connections rather than relying solely on expensive MPLS (Multiprotocol Label Switching) links. This not only reduces costs but also enhances network resilience.

c) Simplified Management: SD-WAN centralizes network management through a user-friendly interface, allowing IT teams to easily configure, monitor, and troubleshoot network connections. This simplification saves time and resources, enabling IT professionals to focus on strategic initiatives.

SD-WAN incorporates robust security measures to protect network traffic and sensitive data. It employs encryption protocols, firewall capabilities, and traffic segmentation techniques to safeguard against unauthorized access and potential cyber threats. These advanced security features give businesses peace of mind and ensure data integrity.

Understanding VPLS

VPLS, short for Virtual Private LAN Service, is a technology that enables the creation of a virtual LAN (Local Area Network) over a shared or public network infrastructure. It allows geographically dispersed sites to connect as if they are part of the same LAN, regardless of their physical distance. This technology uses MPLS (Multiprotocol Label Switching) to transport Ethernet frames across the network efficiently.

Key Features and Benefits

Scalability and Flexibility: VPLS offers scalability, allowing businesses to easily expand their network as their requirements grow. It allows adding or removing sites without disrupting the overall network, making it an ideal choice for organizations with dynamic needs.

Seamless Connectivity: By extending the LAN across different locations, VPLS provides a seamless and transparent network experience. Employees can access shared resources, such as files and applications, as if they were all in the same office, promoting collaboration and productivity across geographically dispersed teams.

Enhanced Security: VPLS ensures a high level of security by isolating each customer’s traffic within its own virtual LAN. The data is encapsulated as it crosses the provider network, and encryption can be layered on top, protecting it from unauthorized access. This makes VPLS a reliable solution for organizations that handle sensitive information and must comply with strict security regulations.

Advanced WAN Designs

DMVPN Phase 2 Spoke to Spoke Tunnels

A dynamic spoke-to-spoke tunnel is created by learning the required mapping information through NHRP resolution. How does a spoke know how to perform such a task? Spoke-to-spoke tunnels were first introduced in DMVPN Phase 2 as an enhancement to Phase 1. Phase 2 handed responsibility for NHRP resolution requests to each spoke individually: a spoke initiated an NHRP resolution request when it determined a packet needed a spoke-to-spoke tunnel. Cisco Express Forwarding (CEF) would assist the spoke in making this decision based on information contained in its routing table.
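
The NHRP resolution step can be sketched as a cache lookup that falls back to querying the hub (the next-hop server). The Python below is a toy model with example addresses, not router code:

```python
# NHRP cache: tunnel (overlay) IP -> NBMA (underlay) IP. The hub is mapped statically.
nhrp_cache = {"192.168.100.11": "203.0.113.11"}

def resolve(tunnel_ip, nhs_registrations):
    """Return the NBMA address for a tunnel IP, querying the NHS on a cache miss."""
    if tunnel_ip not in nhrp_cache:
        # Miss: send an NHRP resolution request; cache the hub's answer for reuse
        nhrp_cache[tunnel_ip] = nhs_registrations[tunnel_ip]
    return nhrp_cache[tunnel_ip]

# The hub (NHS) learned every spoke's NBMA address when the spokes registered:
nhs = {"192.168.100.2": "198.51.100.2", "192.168.100.3": "198.51.100.3"}
print(resolve("192.168.100.3", nhs))  # 198.51.100.3
```

Once the mapping is cached, the spoke builds a direct spoke-to-spoke tunnel to that NBMA address instead of relaying data traffic through the hub.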

Exploring Single Hub Dual Cloud Architecture

Single Hub Dual Cloud is a specific deployment model within the DMVPN framework that provides enhanced redundancy and improved performance. This architecture connects a single hub device to two separate cloud service providers, creating two independent VPN clouds. This setup offers numerous advantages, including increased availability, load balancing, and optimized traffic routing.

One key benefit of the Single Hub Dual Cloud approach is improved network resiliency. With two independent clouds, businesses can ensure uninterrupted connectivity even if one cloud or service provider experiences issues. This redundancy minimizes downtime and helps maintain business continuity. This architecture’s load-balancing capabilities also enable efficient traffic distribution, reducing congestion and enhancing overall network performance.

Implementing DMVPN Single Hub Dual Cloud requires careful planning and configuration. Organizations must assess their needs, evaluate suitable cloud service providers, and design a robust network architecture. Working with experienced network engineers and leveraging automation tools can streamline deployment and ensure successful implementation.

WAN Services

Network Address Translation:

In simple terms, NAT is a technique for modifying IP addresses while packets traverse from one network to another. It bridges private local networks and the public Internet, allowing multiple devices to share a single public IP address. By translating IP addresses, NAT enables private networks to communicate with external networks without exposing their internal structure.

Types of Network Address Translation

There are several types of NAT, each serving a specific purpose. Let’s explore a few common ones:

Static NAT: Static NAT, also known as one-to-one NAT, maps a private IP address to a public IP address. It is often used when specific devices on a network require direct access to the internet. With static NAT, inbound and outbound traffic can be routed seamlessly.

Dynamic NAT: On the other hand, Dynamic NAT allows a pool of public IP addresses to be shared among several devices within a private network. As devices connect to the internet, they are assigned an available public IP address from the pool. Dynamic NAT facilitates efficient utilization of public IP addresses while maintaining network security.

Port Address Translation (PAT): PAT, also called NAT Overload, is an extension of dynamic NAT. Rather than assigning a unique public IP address to each device, PAT assigns a unique port number to each connection. PAT allows multiple devices to share a single public IP address by keeping track of port numbers. This technique is widely used in home networks and small businesses.
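
A PAT translation table can be sketched in a few lines of Python. The addresses and port range below are illustrative; the point is that many inside (IP, port) pairs map to one public IP, distinguished only by the translated source port:

```python
import itertools

PUBLIC_IP = "203.0.113.1"
port_alloc = itertools.count(40000)      # next free translated port
nat_table = {}                           # (inside_ip, inside_port) -> public port

def translate_out(inside_ip, inside_port):
    """Translate an outbound flow, reusing an existing mapping if present."""
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next(port_alloc)
    return PUBLIC_IP, nat_table[key]

print(translate_out("10.0.0.5", 51000))  # ('203.0.113.1', 40000)
print(translate_out("10.0.0.6", 51000))  # ('203.0.113.1', 40001) new host, new port
print(translate_out("10.0.0.5", 51000))  # ('203.0.113.1', 40000) mapping reused
```

Return traffic is matched by the translated port and mapped back through the same table, which is why unsolicited inbound connections fail by default behind PAT.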

NAT plays a crucial role in enhancing network security. By hiding devices’ internal IP addresses, it acts as a barrier against potential attacks from the Internet. External threats find it harder to identify and target individual devices within a private network. NAT acts as a shield, providing additional security to the network infrastructure.

Understanding Policy-Based Routing

Policy-based Routing (PBR) allows network administrators to control the path of network traffic based on specific policies or criteria. Unlike traditional routing protocols, PBR offers a more granular and flexible approach to directing network traffic, enabling fine-grained control over routing decisions.

PBR offers many features and functionalities that empower network administrators to optimize network traffic flow. Some key aspects include:

1. Traffic Classification: PBR allows the classification of network traffic based on various attributes such as source IP, destination IP, protocol, port numbers, or even specific packet attributes. This flexibility enables administrators to create customized policies tailored to their network requirements.

2. Routing Decision Control: With PBR, administrators can define specific routing decisions for classified traffic. Traffic matching certain criteria can be directed towards a specific next-hop or exit interface, bypassing the regular routing table.

3. Load Balancing and Traffic Engineering: PBR can distribute traffic across multiple paths, leveraging load balancing techniques. By intelligently distributing traffic, administrators can optimize resource utilization and enhance network performance.
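
The classification-plus-next-hop logic of PBR can be sketched as an ordered policy list with a fall-through to the normal routing table. A toy Python model (the policy names and exits are invented for illustration):

```python
def pbr_next_hop(flow, policies, default="rt-default"):
    """Return the next hop for the first policy whose attributes all match,
    or fall through to the normal routing-table decision."""
    for match, next_hop in policies:
        if all(flow.get(k) == v for k, v in match.items()):
            return next_hop
    return default

policies = [
    ({"proto": "tcp", "dport": 443}, "isp-a"),   # HTTPS traffic out ISP A
    ({"src_net": "10.9.0.0/16"}, "isp-b"),       # guest segment out ISP B
]
print(pbr_next_hop({"proto": "tcp", "dport": 443}, policies))  # isp-a
print(pbr_next_hop({"proto": "udp", "dport": 53}, policies))   # rt-default
```

As in real PBR, policy order matters: the first matching entry wins, and unmatched traffic follows the ordinary routing table.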

Understanding TCP MSS

TCP MSS refers to the maximum amount of data encapsulated in a single TCP segment. It determines the payload size within each TCP packet, excluding the TCP/IP headers. By limiting the MSS, TCP ensures that data is transmitted in manageable chunks, preventing fragmentation and improving overall network performance.

Several factors influence the determination of TCP MSS. One crucial aspect is the underlying network’s Maximum Transmission Unit (MTU). The MTU represents the largest packet size that can be transmitted over a network without fragmentation. TCP MSS is typically derived from the MTU by subtracting the IP and TCP header sizes, so that a full segment fits in one packet and fragmentation with its subsequent retransmissions is avoided.

By appropriately configuring TCP MSS, network administrators can optimize network performance. Sizing the TCP MSS to fit within the MTU reduces the chances of packet fragmentation, which can lead to delays and retransmissions. Moreover, a properly sized TCP MSS can prevent unnecessary overhead and improve bandwidth utilization.

Adjusting the TCP MSS to suit specific network requirements is possible. Network administrators can configure the TCP MSS value on routers, firewalls, and end devices. This flexibility allows for fine-tuning network performance based on the specific characteristics and constraints of the network infrastructure.
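
The arithmetic is simple enough to sketch: the MSS is the MTU minus the 20-byte IPv4 header and the 20-byte TCP header, with any tunnel overhead subtracted first. A small Python illustration:

```python
IP_HDR, TCP_HDR = 20, 20  # byte sizes without options

def tcp_mss(mtu: int, tunnel_overhead: int = 0) -> int:
    """MSS that fits in one packet of the given MTU, minus tunnel overhead."""
    return mtu - tunnel_overhead - IP_HDR - TCP_HDR

print(tcp_mss(1500))                      # 1460 on plain Ethernet
print(tcp_mss(1500, tunnel_overhead=24))  # 1436 behind a basic GRE tunnel
```

The 24-byte figure assumes a minimal GRE header (4 bytes) plus the outer IPv4 header (20 bytes); IPsec or IPv6 headers add different amounts, which is why MSS clamping is often configured per tunnel interface.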

The desired benefits

Businesses often want to replace or augment premium bandwidth services and switch from active/standby to active/active WAN transport models to reduce costs. The challenge, however, is that augmentation can increase operational complexity. Creating a consistent operational model and simplifying IT requires businesses to avoid this complexity. Maintaining remote site uptime for business continuity goes beyond simply preventing blackouts: latency, jitter, and loss can degrade critical applications to the point where they are effectively unavailable, even though the links themselves are up. The term “brownout” refers to these situations. Businesses today are focused on providing a consistent, high-quality application experience.

Ensuring connectivity

To ensure connectivity and make changes quickly, there is a shift toward retaking control of the WAN. That control extends beyond routing or quality of service to include application experience and availability. At remote sites, the Internet edge is still unfamiliar territory for many businesses. With this control in place, Software as a Service (SaaS) and productivity applications can be rolled out more effectively, and access to Infrastructure as a Service (IaaS) improves. Many businesses also want to offload guest traffic at branches with direct Internet connectivity, because handling this traffic locally is more efficient than routing it through a centralized data center, which wastes WAN bandwidth.

The shift to application-centric architecture

Business requirements are changing rapidly, and today’s networks cannot cope. Hardware-centric networks are traditionally more expensive and have fixed capacity. In addition, the box-by-box configuration approach, siloed management tools, and lack of automated provisioning make them more challenging to support. They are inflexible, static, expensive, and difficult to maintain due to conflicting policies between domains and different configurations between services. As a result, security vulnerabilities and misconfigurations are more likely to occur. A connectivity-centric architecture should be replaced by an application- or service-centric architecture focused on simplicity and user experience.

Understanding Virtualization

Virtualization is a technology that allows the creation of virtual versions of various IT resources, such as servers, networks, and storage devices. These virtual resources operate independently from physical hardware, enabling multiple operating systems and applications to run simultaneously on a single physical machine. Virtualization opens possibilities by breaking the traditional one-to-one relationship between hardware and software. Now, virtualization has moved to the WAN.

WAN Virtualization and SD-WAN

Organizations constantly seek innovative solutions in modern networking to enhance their network infrastructure and optimize connectivity. One such solution that has gained significant attention is WAN virtualization. In this blog post, we will delve into the concept of WAN virtualization, its benefits, and how it revolutionizes how businesses connect and communicate.

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that enables organizations to abstract their wide area network (WAN) connections from the underlying physical infrastructure. It leverages software-defined networking (SDN) principles to decouple network control and data forwarding, providing a more flexible, scalable, and efficient network solution.

VPN and SDN Components

WAN virtualization is an essential technology in the modern business world. It creates virtualized versions of wide area networks (WANs) – networks spanning a wide geographic area. The virtualized WANs can then manage and secure a company’s data, applications, and services.

Regarding implementation, WAN virtualization requires using a virtual private network (VPN), a secure private network accessible only by authorized personnel. This ensures that only those with proper credentials can access the data. WAN virtualization also requires software-defined networking (SDN) to manage the network and its components.

Related: Before you proceed, you may find the following posts helpful:

  1. SD WAN Overlay
  2. Generic Routing Encapsulation
  3. WAN Monitoring
  4. SD WAN Security 
  5. Container Based Virtualization
  6. SD WAN and Nuage Networks

WAN Virtualization

WAN Challenges

Deploying and managing the Wide Area Network (WAN) has become more challenging. Engineers face several design challenges, such as decentralizing traffic flows, inefficient WAN link utilization, routing protocol convergence, and application performance issues with active-active WAN edge designs. Active-active WAN designs that spray and pray over multiple active links present technical and business challenges.

To do this efficiently, you have to understand application flows. There may also be performance problems: when packets reach the other end, they may arrive out of order because each link propagates at a different speed. The remote end then has to reassemble and reorder them, causing jitter and delay, both of which are bad for network performance. To recap WAN virtualization, including the drivers for SD-WAN, you may follow this SD WAN tutorial.

What is WAN Virtualization
Diagram: What is WAN virtualization? Source: LinkedIn.

Knowledge Check: Cisco PfR

Cisco PfR is an intelligent routing solution that dynamically optimizes traffic flow within a network. Unlike traditional routing protocols, PfR makes real-time decisions based on network conditions, application requirements, and business policies. By monitoring various metrics such as delay, packet loss, and link utilization, PfR intelligently determines the best path for traffic.

Key Features and Functionalities

PfR offers many features and functionalities that significantly enhance network performance. Some notable features include:

1. Intelligent Path Control: PfR selects the optimal traffic path based on performance metrics, ensuring efficient utilization of network resources.

2. Application-Aware Routing: PfR considers the specific requirements of different applications and dynamically adjusts routing to provide the best user experience.

3. Load Balancing: By distributing traffic across multiple paths, PfR improves network efficiency and avoids bottlenecks.


Knowledge Check: Control and Data Plane

Understanding the Control Plane

The control plane can be likened to a network’s brain. It is responsible for making high-level decisions and managing network-wide operations. From routing protocols to network management systems, the control plane ensures data is directed along the most optimal paths. By analyzing network topology, the control plane determines the best routes to reach a destination and establishes the necessary rules for data transmission.

Unveiling the Data Plane

In contrast to the control plane, the data plane focuses on the actual movement of data packets within the network. It can be thought of as the hands and feet executing the control plane’s instructions. The data plane handles packet forwarding, traffic classification, and Quality of Service (QoS) enforcement tasks. It ensures that data packets are correctly encapsulated, forwarded to their intended destinations, and delivered with the necessary priority and reliability.
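
The division of labor can be sketched in Python: a slow-path function builds the forwarding table from topology knowledge (the control plane), and a fast-path function only does table lookups per packet (the data plane). A toy model with invented router names:

```python
def control_plane(topology, source):
    """Toy 'routing protocol': compute a next hop per reachable destination.
    Here every directly connected neighbor is its own next hop."""
    return {dst: dst for dst in topology.get(source, [])}

def data_plane(fib, packet):
    """Per-packet forwarding: a pure table lookup, no topology knowledge."""
    return fib.get(packet["dst"], "drop")

topology = {"r1": ["r2", "r3"]}
fib = control_plane(topology, "r1")   # slow path: runs when topology changes
print(data_plane(fib, {"dst": "r2"})) # r2   (fast path, executed per packet)
print(data_plane(fib, {"dst": "r9"})) # drop
```

The separation is what SDN generalizes: the control-plane computation can move off the box entirely, while the data plane keeps forwarding against the installed table.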

Use Cases and Deployment Scenarios

Distributed Enterprises

For organizations with multiple branch locations, WAN virtualization offers a cost-effective solution for connecting remote sites to the central network. It allows for secure and efficient data transfer between branches, enabling seamless collaboration and resource sharing.

Cloud Connectivity

WAN virtualization is ideal for enterprises adopting cloud-based services. It provides a secure and optimized connection to public and private cloud environments, ensuring reliable access to critical applications and data hosted in the cloud.

Disaster Recovery and Business Continuity

WAN virtualization plays a vital role in disaster recovery strategies. Organizations can ensure business continuity during a natural disaster or system failure by replicating data and applications across geographically dispersed sites.

Challenges and Considerations

Implementing WAN virtualization requires careful planning and consideration. Factors such as network security, bandwidth requirements, and compatibility with existing infrastructure need to be evaluated. It is essential to choose a solution that aligns with the specific needs and goals of the organization.

SD-WAN vs. DMVPN

Two popular WAN solutions are DMVPN and SD-WAN.

DMVPN (Dynamic Multipoint Virtual Private Network) and SD-WAN (Software-Defined Wide Area Network) are popular solutions to improve connectivity between distributed branch offices. DMVPN is a Cisco-specific solution, and SD-WAN is a software-based solution that can be used with any router. Both solutions provide several advantages, but there are some differences between them.

DMVPN is a secure, cost-effective, and scalable network solution that combines underlying technologies and DMVPN phases (for example, the traditional DMVPN Phase 1) to connect multiple sites. It allows the customer to use existing infrastructure and provides easy deployment and management. This solution is an excellent choice for businesses with many branch offices because it allows for secure communication and the ability to deploy new sites quickly.

SD-WAN and WAN Virtualization

SD-WAN is a software-based solution that is gaining popularity in the enterprise market. It provides improved application performance, security, and network reliability. SD-WAN is an excellent choice for businesses that require high-performance applications across multiple sites. It provides an easy-to-use centralized management console that allows companies to quickly deploy new sites and manage the network.

Dynamic Multipoint VPN
Diagram: Example with DMVPN. Source is Cisco

Guide: DMVPN operating over the WAN

The following shows DMVPN operating over the WAN. The SP node represents the WAN network. Then we have R11 as the hub and R2 and R3 as the spokes. Several protocols make the DMVPN network over the WAN possible. First, we have GRE; in this case, the tunnel destination is specified explicitly, making each tunnel a point-to-point GRE tunnel instead of an mGRE tunnel.

Then we have NHRP, which is used to create address mappings, because this is a nonbroadcast network and we cannot use ARP. So, we must manually configure the next-hop server on the spokes with the command: ip nhrp nhs 192.168.100.11
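For context, a minimal sketch of a spoke-side tunnel configuration for a Phase 1 topology like the one above. The interface names and the hub's addresses (tunnel address 192.168.100.11, WAN/NBMA address 172.16.0.11) are assumptions for illustration, not taken from the diagram:

```
! Hypothetical spoke (e.g., R2) configuration; addresses are illustrative.
interface Tunnel0
 ip address 192.168.100.2 255.255.255.0
 ip nhrp network-id 1
 ! Register with the hub as the next-hop server (NHS):
 ip nhrp nhs 192.168.100.11
 ! Static NHRP mapping: hub tunnel address -> hub NBMA (WAN) address:
 ip nhrp map 192.168.100.11 172.16.0.11
 tunnel source GigabitEthernet0/0
 ! Point-to-point GRE (Phase 1): destination is fixed, not mGRE.
 tunnel destination 172.16.0.11
```

On the hub, the tunnel would instead use `tunnel mode gre multipoint` so one interface can terminate all spokes.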

DMVPN configuration
Diagram: DMVPN Configuration.

Shift from network-centric to business intent.

The core of WAN virtualization involves shifting focus from a network-centric model to a business intent-based WAN network. So, instead of designing the WAN for the network, we can create the WAN for the application. This way, the WAN architecture can simplify application deployment and management.

First, however, the mindset must shift from a network-topology focus to an application-services topology. Modern applications consume vast bandwidth and are highly sensitive to variations in bandwidth quality. Jitter, loss, and delay impact most applications, which makes it essential to improve the WAN environment for them.

wan virtualization
Diagram: WAN virtualization.

The spray-and-pray method over two links increases raw bandwidth but decreases “goodput.” It also affects firewalls, which will see asymmetric routes. An active-active model requires application session awareness and a design that eliminates asymmetric routing. You need to be able to slice the WAN properly so application flows can work efficiently over either link.

What is WAN Virtualization: Decentralizing Traffic

Decentralizing traffic from the data center to the branch requires more bandwidth at the network’s edges, and this is what businesses are now trying to accomplish: many high-bandwidth applications running at remote sites. Traditional branch sites usually rely on hub sites for most services and do not host bandwidth-intensive applications. Today, remote locations require extra bandwidth, which does not get cheaper year over year.

Inefficient WAN utilization

Redundant WAN links usually require a dynamic routing protocol for traffic engineering and failover. Routing protocols require complex tuning to load balance traffic between border devices. Border Gateway Protocol (BGP) is the primary protocol for connecting sites to external networks.

It relies on path attributes to choose the best path based on availability and distance. Although these attributes allow granular policy control, they do not cover aspects relating to path performance, such as Round Trip Time (RTT), delay, and jitter.

Port 179
Furthermore, BGP does not always choose the “best” path, and “best” may mean different things to different customers. For example, customer A might consider the path via provider A the best due to the price of the links; default routing does not take this into account. Packet-level routing protocols are not designed to handle the complexities of running over multiple transport-agnostic links. Therefore, a solution is needed that eliminates the reliance on packet-level routing protocols.
BGP Path Attributes
Diagram: BGP Path Attributes Source is Cisco.
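To make the gap concrete, here is a small Python sketch contrasting BGP-style attribute-based selection with the performance-aware selection SD-WAN adds. All names, attribute values, and metrics are invented for illustration; this is not how any particular router implements best-path:

```python
# Hypothetical paths: BGP sees attributes, but not live RTT/jitter.
paths = [
    {"via": "provider_a", "local_pref": 100, "as_path_len": 3, "rtt_ms": 180, "jitter_ms": 25},
    {"via": "provider_b", "local_pref": 100, "as_path_len": 4, "rtt_ms": 40, "jitter_ms": 2},
]

def bgp_best(paths):
    # Simplified BGP tie-break: higher local-pref wins, then shorter AS path.
    return max(paths, key=lambda p: (p["local_pref"], -p["as_path_len"]))

def performance_best(paths):
    # A performance-aware policy prefers low RTT and low jitter.
    return min(paths, key=lambda p: p["rtt_ms"] + 2 * p["jitter_ms"])

print(bgp_best(paths)["via"])          # provider_a: shorter AS path, despite 180 ms RTT
print(performance_best(paths)["via"])  # provider_b: far better RTT and jitter
```

The point is the selection criterion, not the scoring formula: BGP's attributes simply carry no information about path quality.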

Routing protocol convergence

WAN designs can also be active-standby, which requires routing protocol convergence in the event of a primary link failure. However, routing convergence is slow; to speed it up, additional features such as Bidirectional Forwarding Detection (BFD) are implemented, which may stress the network’s control plane. Although mechanisms exist to speed up convergence and failure detection, there are still several convergence steps:

  • Detect: recognize that a link or neighbor has failed.

  • Describe: propagate the topology change to the rest of the network.

  • Find: compute an alternate path to the destination.

  • Switch: install the new path and move traffic onto it.
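As a rough illustration of why the detection step dominates and how BFD shortens it, consider this sketch. The 10 s/40 s figures reflect commonly cited OSPF hello/dead defaults; the BFD timers are example values, not vendor defaults:

```python
# Failure detection time for hello-style protocols: a neighbor is declared
# down after `multiplier` consecutive intervals pass with no reply.

def detection_time_ms(tx_interval_ms, multiplier):
    return tx_interval_ms * multiplier

# Routing-protocol hello timers (e.g., OSPF's common 10 s hello / 40 s dead):
print(detection_time_ms(10_000, 4))  # 40000 ms before convergence even begins

# BFD probing every 300 ms with a detection multiplier of 3:
print(detection_time_ms(300, 3))     # 900 ms detection
```

Only after detection do the describe, find, and switch steps run, so cutting detection from tens of seconds to under a second changes failover behavior dramatically.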

 Branch office security

With traditional network solutions, branches connect back to the data center, which typically provides Internet access. However, the application world has evolved, and branches directly consume applications such as Office 365 in the cloud. This drives a need for branches to access these services over the Internet without going to the data center for Internet access or security scrubbing.

Extending the security diameter into the branches should be possible without requiring onsite firewalls / IPS and other security paradigm changes. A solution must exist that allows you to extend your security domain to the branch sites without costly security appliances at each branch—essentially, building a dynamic security fabric.

WAN Virtualization

The solution to all these problems is SD-WAN (software-defined WAN). SD-WAN is a transport-independent, software-based overlay networking deployment. It uses software and cloud-based technologies to simplify the delivery of WAN services to branch offices. Similar to Software-Defined Networking (SDN), SD-WAN works by abstraction: it abstracts network hardware into a control plane with multiple data planes that make up one large WAN fabric.

SD-WAN in a nutshell

When we consider the Wide Area Network (WAN) environment at a basic level, we connect data centers to several branch offices to deliver packets between those sites, supporting the transport of application transactions and services. The SD-WAN platform allows you to pull Internet connectivity into those sites, becoming part of one large transport-independent WAN fabric.

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE) and chooses the best path based on performance.

There are many forms of Internet connectivity (cable, DSL, broadband, and Ethernet), and they are quick to deploy at a fraction of the cost of private MPLS circuits. SD-WAN provides the benefit of using all these links while monitoring which links are best suited to which applications.

Application performance is continuously monitored across all eligible paths: direct internet, internet VPN, and private WAN. This creates an active-active network and eliminates the need to use and maintain traditional routing protocols for active-standby setups, removing the reliance on the active-standby model and its associated problems.
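A minimal Python sketch of this per-application, active-active path selection, assuming each path is continuously probed for latency, loss, and jitter. The path names, metric values, and SLA thresholds are all hypothetical:

```python
# Measured path metrics (illustrative values, as if from active probes):
paths = {
    "mpls":     {"latency_ms": 30, "loss_pct": 0.1, "jitter_ms": 2},
    "internet": {"latency_ms": 55, "loss_pct": 0.5, "jitter_ms": 8},
    "lte":      {"latency_ms": 90, "loss_pct": 1.5, "jitter_ms": 20},
}

# Per-application SLA: voice is strict, bulk transfer is tolerant.
slas = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30},
    "bulk":  {"latency_ms": 500, "loss_pct": 5.0, "jitter_ms": 100},
}

def eligible(app, path):
    # A path is eligible if every measured metric is within the app's SLA.
    sla, metrics = slas[app], paths[path]
    return all(metrics[k] <= sla[k] for k in sla)

def best_path(app):
    # Among SLA-compliant paths, prefer the lowest latency.
    candidates = [p for p in paths if eligible(app, p)]
    return min(candidates, key=lambda p: paths[p]["latency_ms"]) if candidates else None

print(best_path("voice"))  # mpls (lte is excluded: 1.5% loss breaks the voice SLA)
print(best_path("bulk"))   # mpls
```

In a real deployment the metrics dictionary would be refreshed continuously by probes, and selection would be re-evaluated per flow rather than once.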

WAN virtualization
Diagram: WAN virtualization. Source is Juniper

SD-WAN simplifies WAN management

SD-WAN simplifies managing a wide area network by providing a centralized platform for managing and monitoring traffic across the network. This helps reduce the complexity of managing multiple networks, eliminating the need for manual configuration of each site. Instead, all of the sites are configured from a single management console.

SD-WAN also provides advanced security features such as encryption and firewalling, which can be configured to ensure that only authorized traffic is allowed access to the network. Additionally, SD-WAN can optimize network performance by automatically routing traffic over the most efficient paths.


SD-WAN Packet Steering

SD-WAN packet steering is a technology that efficiently routes packets across a wide area network (WAN). It is based on the concept of steering packets so that they can be delivered more quickly and reliably than traditional routing protocols. Packet steering is crucial to SD-WAN technology, allowing organizations to maximize their WAN connections.

SD-WAN packet steering works by analyzing packets sent across the WAN and looking for patterns or trends. Based on these patterns, the SD-WAN can dynamically route the packets to deliver them more quickly and reliably. This can be done in various ways, such as considering latency and packet loss or ensuring the packets are routed over the most reliable connections.

Spraying packets down both links can result in drops of around 20% and packet reordering. SD-WAN utilizes the links more effectively, avoiding reordering and delivering better “goodput.” It also increases your buying power: you can buy lower-bandwidth links and run them more efficiently. Over-provisioning is unnecessary because the existing WAN bandwidth is used better.
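One common way to get active-active use of both links without reordering any single flow is to pin each flow to a link by hashing its 5-tuple. This is a generic technique sketch, not any vendor's steering algorithm; the link names are hypothetical:

```python
import hashlib

links = ["mpls", "internet"]

def link_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    # Hash the flow's 5-tuple so every packet of a flow picks the same link.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

# Every packet of the same flow maps to the same link, so no reordering:
a = link_for_flow("10.0.0.1", "192.0.2.5", 51000, 443, "tcp")
b = link_for_flow("10.0.0.1", "192.0.2.5", 51000, 443, "tcp")
print(a == b)  # True
```

Flow pinning trades perfect per-packet balance for in-order delivery; SD-WAN platforms typically combine it with the performance monitoring described above so a flow can be re-pinned if its current link degrades.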

Knowledge Check: Application-Aware Routing (AAR)

Understanding Application-Aware Routing (AAR)

Application-aware routing is a sophisticated networking technique that goes beyond traditional packet-based routing. It considers the unique requirements of different applications, such as video streaming, cloud-based services, or real-time communication, and optimizes the network path accordingly. It ensures smooth and efficient data transmission by prioritizing and steering traffic based on application characteristics.

Benefits of Application-Aware Routing

Enhanced Performance: Application-aware routing significantly improves overall performance by dynamically allocating network resources to applications with high bandwidth or low latency requirements. This translates into faster downloads, seamless video streaming, and reduced response times for critical applications.

Increased Reliability: Traditional routing methods treat all traffic equally, often resulting in congestion and potential bottlenecks. Application-aware routing intelligently distributes network traffic, avoiding congested paths and ensuring a reliable and consistent user experience. In the event of network failure or congestion, it can dynamically reroute traffic to alternative paths, minimizing downtime and disruptions.

Implementation Strategies

Deep Packet Inspection: A key component of Application-Aware Routing is deep packet inspection (DPI), which analyzes the content of network packets to identify specific applications. DPI enables routers and switches to make informed decisions about handling each packet based on its application, ensuring optimal routing and resource allocation.

Quality of Service (QoS) Configuration: Implementing QoS parameters alongside application-aware routing allows network administrators to allocate bandwidth, prioritize specific applications over others, and enforce policies to ensure the best possible user experience. QoS configurations can be customized based on organizational needs and application requirements.
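To tie DPI and QoS together, here is a deliberately simplified classifier that maps a flow to an application class and then to a DSCP marking. Real DPI inspects payloads (TLS SNI, HTTP headers, protocol fingerprints); classifying by destination port alone, as below, is a stand-in for illustration, and the port-to-app table is hypothetical:

```python
# Hypothetical port-to-application table (real DPI goes far beyond ports):
APP_BY_PORT = {443: "web", 3478: "voice", 5060: "voice", 22: "management"}

# DSCP values per class: 46 = EF (Expedited Forwarding), 16 = CS2, 0 = best-effort.
DSCP_BY_APP = {"voice": 46, "web": 0, "management": 16}

def classify(dst_port):
    # Unknown ports fall back to the best-effort "web" class.
    return APP_BY_PORT.get(dst_port, "web")

def dscp_for(dst_port):
    return DSCP_BY_APP[classify(dst_port)]

print(dscp_for(5060))  # 46 (SIP classified as voice -> EF)
print(dscp_for(8080))  # 0  (unknown port -> best-effort)
```

Once packets carry a DSCP marking, downstream queuing and the path-selection logic can both key off the same application class.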

Future Possibilities

As the digital landscape continues to evolve, the potential for Application-Aware Routing is boundless. With emerging technologies like the Internet of Things (IoT) and 5G networks, the ability to intelligently route traffic based on specific application needs will become even more critical. Application-aware routing has the potential to optimize resource utilization, enhance security, and support the seamless integration of diverse applications and services.

Benefits of WAN Virtualization:

1. Enhanced Network Performance: WAN virtualization allows organizations to optimize network performance by intelligently routing traffic across multiple WAN links. By dynamically selecting the most efficient path based on real-time network conditions, organizations can achieve improved application performance and reduced latency.

2. Cost Savings: Traditional WAN solutions often require expensive dedicated circuits for each branch office. With WAN virtualization, organizations can leverage cost-effective internet connections, such as broadband or LTE, while ensuring secure and reliable connectivity. This flexibility in choosing connectivity options can significantly reduce operational costs.

3. Simplified Network Management: WAN virtualization provides centralized management and control of the entire network infrastructure. This simplifies network provisioning, configuration, and monitoring, reducing the complexity and administrative overhead of traditional WAN deployments.

4. Increased Scalability: WAN virtualization offers the scalability to accommodate evolving network requirements as organizations grow and expand their operations. It allows for the seamless integration of new branch offices and additional bandwidth without significant infrastructure changes.

5. Enhanced Security: With the rise in cybersecurity threats, network security is paramount. WAN virtualization enables organizations to implement robust security measures, such as encryption and firewall policies, across the entire network. This helps protect sensitive data and ensures compliance with industry regulations.

A final note on WAN virtualization

Server virtualization and automation are prevalent in the data center, but WANs are stalling in this space. The WAN remains the last bastion of the hardware-centric model, with all its complexity. Just as hypervisors transformed data centers, SD-WAN aims to change how WAN networks are built and managed. When server virtualization and the hypervisor came along, we no longer had to worry about the underlying hardware; a virtual machine (VM) could simply be provisioned and run as an application. Today’s WAN environment, by contrast, still requires you to manage the details of carrier infrastructure, routing protocols, and encryption.

  • SD-WAN pulls all WAN resources together and slices up the WAN to match the applications on them.

The Role of WAN Virtualization in Digital Transformation:

In today’s digital era, where cloud-based applications and remote workforces are becoming the norm, WAN virtualization is critical in enabling digital transformation. It empowers organizations to embrace new technologies, such as cloud computing and unified communications, by providing secure and reliable connectivity to distributed resources.

Summary: WAN Virtualization

In our ever-connected world, seamless network connectivity is necessary for businesses of all sizes. However, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern data transmission and application performance. This is where the concept of WAN virtualization comes into play, promising to revolutionize network connectivity like never before.

Understanding WAN Virtualization

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that abstracts the physical infrastructure of traditional WANs and allows for centralized control, management, and optimization of network resources. By decoupling the control plane from the underlying hardware, WAN virtualization enables organizations to dynamically allocate bandwidth, prioritize critical applications, and ensure optimal performance across geographically dispersed locations.

The Benefits of WAN Virtualization

Enhanced Flexibility and Scalability

With WAN virtualization, organizations can effortlessly scale their network infrastructure to accommodate growing business needs. The virtualized nature of the WAN allows for easy addition or removal of network resources, enabling businesses to adapt to changing requirements without costly hardware upgrades.

Improved Application Performance

WAN virtualization empowers businesses to optimize application performance by intelligently routing network traffic based on application type, quality of service requirements, and network conditions. By dynamically selecting the most efficient path for data transmission, WAN virtualization minimizes latency, improves response times, and enhances overall user experience.

Cost Savings and Efficiency

By leveraging WAN virtualization, organizations can reduce their reliance on expensive Multiprotocol Label Switching (MPLS) connections and embrace more cost-effective broadband links. The ability to intelligently distribute traffic across diverse network paths enhances network redundancy and maximizes bandwidth utilization, providing significant cost savings and improved efficiency.

Implementation Considerations

Network Security

When adopting WAN virtualization, it is crucial to implement robust security measures to protect sensitive data and ensure network integrity. Encryption protocols, threat detection systems, and secure access controls should be implemented to safeguard against potential security breaches.

Quality of Service (QoS)

Organizations should prioritize critical applications and allocate appropriate bandwidth resources through Quality of Service (QoS) policies to ensure optimal application performance. By adequately configuring QoS settings, businesses can guarantee mission-critical applications receive the necessary network resources, minimizing latency and providing a seamless user experience.
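One classic mechanism behind such per-class bandwidth allocation is the token bucket: a class may transmit only while it has tokens, which refill at its allocated rate. This is a toy sketch of the concept with made-up rates, not a specific product's QoS implementation:

```python
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes         # start full

    def refill(self, seconds):
        # Tokens accrue at the allocated rate, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def allow(self, packet_bytes):
        # Transmit only if enough tokens remain; otherwise the packet is
        # over the class's allocation (to be queued or dropped by policy).
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# A class allocated 8 kbit/s (1000 bytes/s) with a 1500-byte burst:
bucket = TokenBucket(rate_bps=8000, burst_bytes=1500)
print(bucket.allow(1500))  # True: the burst fits
print(bucket.allow(100))   # False: bucket is now empty
bucket.refill(1.0)         # one second later, 1000 bytes of tokens accrue
print(bucket.allow(1000))  # True
```

Assigning each application class its own bucket is one simple way to guarantee that mission-critical traffic keeps its allocated share under load.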

Real-World Use Cases

Global Enterprise Networks

Large multinational corporations with a widespread presence can significantly benefit from WAN virtualization. These organizations can achieve consistent performance across geographically dispersed locations by centralizing network management and leveraging intelligent traffic routing, improving collaboration and productivity.

Branch Office Connectivity

WAN virtualization simplifies connectivity and network management for businesses with multiple branch offices. It enables organizations to establish secure and efficient connections between headquarters and remote locations, ensuring seamless access to critical resources and applications.

Conclusion

In conclusion, WAN virtualization represents a paradigm shift in network connectivity, offering enhanced flexibility, improved application performance, and cost savings for businesses. By embracing this transformative technology, organizations can unlock the true potential of their networks, enabling them to thrive in the digital age.