SD WAN Overlay


In today's digital age, businesses rely on seamless and secure network connectivity to support their operations. Traditional Wide Area Network (WAN) architectures often struggle to meet the demands of modern companies due to their limited bandwidth, high costs, and lack of flexibility. A revolutionary SD-WAN (Software-Defined Wide Area Network) overlay has emerged to address these challenges, offering businesses a more efficient and agile network solution. This blog post will delve into SD-WAN overlay, exploring its benefits, implementation, and potential to transform how businesses connect.

SD-WAN employs the concepts of overlay networking. Overlay networking is a virtual network architecture that allows for the creation of multiple logical networks on top of an existing physical network infrastructure. It involves the encapsulation of network traffic within packets, enabling data to traverse across different networks regardless of their physical locations. This abstraction layer provides immense flexibility and agility, making overlay networking an attractive option for organizations of all sizes.

- Scalability: One of the key advantages of overlay networking is its ability to scale effortlessly. By decoupling the logical network from the underlying physical infrastructure, organizations can rapidly deploy and expand their networks without disruption. This scalability is particularly crucial in cloud environments or scenarios where network requirements change frequently.

- Security and Isolation: Overlay networks provide enhanced security by isolating different logical networks from each other. This isolation ensures that data traffic remains segregated and prevents unauthorized access to sensitive information. Additionally, overlay networks can implement advanced security measures such as encryption and access control, further fortifying network security.


Understanding Overlay Networking

Overlay networking is a revolutionary approach to network design that enables the creation of virtual networks on top of existing physical networks. By abstracting the underlying infrastructure, overlay networks provide a flexible and scalable solution to meet the dynamic demands of modern applications and services. Whether in cloud environments, data centers, or even across geographically dispersed locations, overlay networking opens up a world of possibilities.

Overlay networks act as a virtual layer on the physical network infrastructure. They enable the creation of logical connections independent of the physical network topology. In the context of SD-WAN, overlay networks facilitate the seamless integration of multiple network connections, including MPLS, broadband, and LTE, into a unified and efficient network.

The advantages of overlay networking are manifold:

– First, it allows for seamless network segmentation, enabling different applications or user groups to operate in isolation while sharing the same physical infrastructure. This enhances security and simplifies network management.

– Second, overlay networks facilitate the deployment of advanced network services such as load balancing, firewalling, and encryption without complex changes to the underlying network infrastructure. This level of abstraction empowers organizations to adapt and respond rapidly to evolving business needs.

**So, what exactly is an SD-WAN overlay?**

In simple terms, it is a virtual layer added to the existing network infrastructure. These network overlays connect different locations, such as branch offices, data centers, and the cloud, by creating a secure and reliable network.

1. Tunnel-Based Overlays:

One of the most common types of SD-WAN overlays is tunnel-based overlays. This approach encapsulates network traffic within a virtual tunnel, allowing it to traverse multiple networks securely. Tunnel-based overlays are typically implemented using IPsec or GRE (Generic Routing Encapsulation) protocols. They offer enhanced security through encryption and provide a reliable connection between the SD-WAN edge devices.

Example Technology: IPSec and GRE

Organizations can achieve enhanced network security and improved connectivity by combining GRE and IPSec. The integration allows for creating secure tunnels between networks, ensuring that data transmitted between them remains protected from potential threats. This combination also enables the establishment of Virtual Private Networks (VPNs), enabling secure remote access and seamless connectivity for geographically dispersed teams.

By encrypting GRE tunnels, IPsec provides secure VPN tunnels. This approach offers many benefits, including support for dynamic IGP routing protocols, non-IP protocols, and IP multicast. Furthermore, the headend IPsec termination points can support QoS policies and deterministic routing metrics.

There is built-in redundancy due to the pre-established primary and backup GRE over IPsec tunnels. Static IP addresses are required for the headend site, but dynamic IP addresses are permitted for the remote sites. To differentiate primary tunnels from backup tunnels, routing metrics can be modified slightly to favor one or the other.
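To make the encapsulation idea concrete, here is a minimal, stdlib-only Python sketch of the basic RFC 2784 GRE header: a flags/version word of zero followed by the passenger protocol type. The payload bytes and helper names are illustrative only; in a real deployment, GRE is performed by the router or kernel, with IPsec encrypting the resulting tunnel.

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType for an IPv4 passenger packet

def gre_encapsulate(payload: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    """Prepend a basic 4-byte GRE header (RFC 2784): flags/version = 0, then protocol type."""
    return struct.pack("!HH", 0, proto) + payload

def gre_decapsulate(packet: bytes) -> bytes:
    """Strip the GRE header; if the checksum bit is set, four extra header bytes follow."""
    flags_version, _proto = struct.unpack("!HH", packet[:4])
    if flags_version & 0x8000:  # C (checksum present) bit
        return packet[8:]
    return packet[4:]

inner = b"original IP packet bytes"  # placeholder passenger packet
tunneled = gre_encapsulate(inner)
assert gre_decapsulate(tunneled) == inner
```

The same principle applies when IPsec protects the tunnel: the GRE packet above becomes the payload of an IPsec-encrypted packet, so routing protocols and multicast ride inside GRE while IPsec provides confidentiality.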

Diagram: GRE over IPsec.

2. Segment-Based Overlays:

Segment-based overlays are designed to segment the network traffic based on specific criteria such as application type, user group, or location. This allows organizations to prioritize critical applications and allocate network resources accordingly. By segmenting the traffic, SD-WAN can optimize the performance of each application and ensure a consistent user experience. Segment-based overlays are particularly beneficial for businesses with diverse network requirements.
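The segmentation described above is essentially a first-match classification of flows against criteria such as application type or user group. The following Python sketch illustrates the idea; the segment names, match fields, and priorities are made-up examples, not any vendor's schema:

```python
# Hypothetical segment table: match criteria -> segment name and priority.
SEGMENTS = [
    {"match": {"app": "voip"},                    "segment": "realtime",    "priority": 1},
    {"match": {"app": "erp", "group": "finance"}, "segment": "business",    "priority": 2},
    {"match": {},                                 "segment": "best-effort", "priority": 3},  # default
]

def classify(flow: dict) -> dict:
    """Return the first segment whose match criteria are all satisfied by the flow."""
    for rule in SEGMENTS:
        if all(flow.get(key) == value for key, value in rule["match"].items()):
            return {"segment": rule["segment"], "priority": rule["priority"]}
    raise ValueError("no matching segment")

assert classify({"app": "voip", "group": "sales"})["segment"] == "realtime"
assert classify({"app": "web"})["segment"] == "best-effort"
```

Once a flow is tagged with a segment, the SD-WAN edge can allocate queueing priority or path preference per segment rather than per packet.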

3. Policy-Based Overlays:

Policy-based overlays enable organizations to define rules and policies that govern the behavior of the SD-WAN network. These overlays use intelligent routing algorithms to dynamically select the most optimal path for network traffic based on predefined policies. By leveraging policy-based overlays, businesses can ensure efficient utilization of network resources, minimize latency, and improve overall network performance.
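A minimal sketch of this policy-driven path selection is shown below. The per-application thresholds and link metrics are illustrative assumptions; a real SD-WAN controller would refresh the metrics continuously from probes:

```python
# Illustrative per-application policies (thresholds are assumptions, not vendor defaults).
POLICIES = {
    "voice": {"max_latency_ms": 50,  "max_loss_pct": 1.0},
    "bulk":  {"max_latency_ms": 500, "max_loss_pct": 5.0},
}

def select_path(app: str, paths: dict) -> str:
    """Pick the lowest-latency path that satisfies the application's policy."""
    policy = POLICIES[app]
    eligible = [
        (metrics["latency_ms"], name)
        for name, metrics in paths.items()
        if metrics["latency_ms"] <= policy["max_latency_ms"]
        and metrics["loss_pct"] <= policy["max_loss_pct"]
    ]
    if not eligible:  # nothing meets policy: fall back to best effort
        return min(paths, key=lambda name: paths[name]["latency_ms"])
    return min(eligible)[1]

paths = {
    "mpls":      {"latency_ms": 30, "loss_pct": 0.1},
    "broadband": {"latency_ms": 80, "loss_pct": 0.5},
}
assert select_path("voice", paths) == "mpls"   # broadband exceeds the voice latency budget
```

The key point is that the decision is made against application-level policy, not destination prefixes alone.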

4. Hybrid Overlays:

Hybrid overlays combine the benefits of both public and private networks. This overlay allows organizations to utilize multiple network connections, including MPLS, broadband, and LTE, to create a robust and resilient network infrastructure. Hybrid overlays intelligently route traffic through the most suitable connection based on application requirements, network availability, and cost. Businesses can achieve high availability, cost-effectiveness, and improved application performance by leveraging hybrid overlays.

5. Cloud-Enabled Overlays:

As more businesses adopt cloud-based applications and services, seamless connectivity to cloud environments becomes crucial. Cloud-enabled overlays provide direct and secure connectivity between the SD-WAN network and cloud service providers. These overlays ensure optimized performance for cloud applications by minimizing latency and providing efficient data transfer. Cloud-enabled overlays simplify the management and deployment of SD-WAN in multi-cloud environments, making them an ideal choice for businesses embracing cloud technologies.

**Challenge: The traditional network**

The networks we depend on for business are sensitive to many factors that can result in a slow and unreliable experience. One of these is latency, the time between a data packet being sent and received; a related measure is round-trip time (RTT), the time it takes for a packet to be sent and a reply to come back.

We can also experience jitter, the variance in the delay between data packets, which disrupts the steady arrival of packets. And we have fixed-bandwidth networks that can experience congestion. For example, with five people sharing the same Internet link, each could experience a stable and swift network. Add another 20 or 30 people onto the same link, and the experience will be markedly different.
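Jitter can be quantified simply as the average variation between consecutive inter-packet gaps. The sketch below uses made-up arrival timestamps for packets sent at a steady 20 ms interval (RFC 3550 defines a smoothed variant of the same idea):

```python
def mean_jitter(arrival_times_ms: list) -> float:
    """Jitter as the mean absolute variation between consecutive inter-packet gaps."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    variations = [abs(b - a) for a, b in zip(gaps, gaps[1:])]
    return sum(variations) / len(variations)

# Packets sent every 20 ms but arriving unevenly:
arrivals = [0, 20, 45, 60, 85]        # gaps: 20, 25, 15, 25
assert mean_jitter(arrivals) == 25 / 3  # variations: 5, 10, 10 -> mean ~8.33 ms
```

A perfectly paced stream would score zero; voice and video quality degrade noticeably as this figure climbs.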

Google Cloud & SD WAN

SD WAN Cloud Hub

Google Cloud is renowned for its robust infrastructure, scalability, and advanced services. By integrating SD-WAN with Google Cloud, businesses can tap into this powerful ecosystem. They gain access to Google’s global network, enabling optimized routing and lower latency. Additionally, the scalability and flexibility of Google Cloud allow organizations to adapt to changing network demands seamlessly.

Key Advantages:

When SD-WAN Cloud Hub is integrated with Google Cloud, it unlocks a host of advantages. Firstly, it enables organizations to seamlessly connect their branch offices, data centers, and cloud resources, providing a unified network fabric. This integration optimizes traffic flow and enhances application performance, ensuring a consistent user experience.

Key Features and Benefits:

Intelligent Traffic Steering: SD-WAN Cloud Hub with Google Cloud allows organizations to intelligently steer traffic based on application requirements, network conditions, and security policies. This dynamic traffic routing ensures optimal performance and minimizes latency.

Simplified Network Management: The centralized management platform of SD-WAN Cloud Hub simplifies network configuration, monitoring, and troubleshooting. Integration with Google Cloud provides a unified view of the entire network infrastructure, streamlining operations and reducing complexity.

Enhanced Security: SD-WAN Cloud Hub leverages Google Cloud’s security features, such as Cloud Armor and Cloud Identity-Aware Proxy, to protect network traffic and ensure secure communication between branches, data centers, and the cloud.

VPN and Overlay technologies

  • Performance-Based Routing & DMVPN 

Performance-based routing is a dynamic routing technique beyond traditional static routing protocols. It leverages real-time data and network monitoring to make intelligent routing decisions based on latency, bandwidth availability, and network congestion. By constantly analyzing network conditions, performance-based routing algorithms can adapt and reroute traffic to the most efficient paths, ensuring speedy and reliable data transmission.
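One common way to implement such a decision is to collapse the live measurements into a single cost per path and re-evaluate as metrics change. The weights and figures below are illustrative assumptions, not values from any product:

```python
# Weighted path cost: lower is better. Negative weight rewards spare bandwidth.
WEIGHTS = {"latency_ms": 1.0, "congestion_pct": 2.0, "bandwidth_mbps": -0.1}

def path_score(metrics: dict) -> float:
    """Combine real-time metrics into a single cost; re-run as measurements update."""
    return sum(WEIGHTS[key] * metrics[key] for key in WEIGHTS)

def best_path(paths: dict) -> str:
    return min(paths, key=lambda name: path_score(paths[name]))

paths = {
    "mpls":      {"latency_ms": 30, "congestion_pct": 10, "bandwidth_mbps": 100},
    "broadband": {"latency_ms": 45, "congestion_pct": 5,  "bandwidth_mbps": 500},
}
# mpls: 30 + 20 - 10 = 40; broadband: 45 + 10 - 50 = 5
assert best_path(paths) == "broadband"
```

Note how the higher-latency link still wins here because congestion and spare capacity also feed the score, which is exactly what distinguishes performance-based routing from static metrics.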

  • DMVPN Phase 3

DMVPN Phase 3 is the latest iteration of the DMVPN technology, designed to address the limitations of its predecessors. It introduces significant improvements in scalability, routing protocol support, and encryption capabilities. By utilizing multipoint GRE tunnels, DMVPN Phase 3 enables efficient, dynamic, and secure communication between multiple sites.

One of DMVPN Phase 3’s key advantages is its scalability. The introduction of the NHRP (Next Hop Resolution Protocol) redirect allows for the dynamic and efficient allocation of network resources, making it an ideal solution for large-scale deployments. Additionally, DMVPN Phase 3 supports a wide range of routing protocols, including OSPF, EIGRP, and BGP, providing network administrators with flexibility and ease of integration.

Multipoint GRE (mGRE) is an underlying technology that plays a crucial role in DMVPN’s functionality. It enables the establishment of multiple tunnels over a single GRE interface, maximizing network resource utilization. By encapsulating packets in GRE headers, mGRE facilitates traffic routing between remote sites, creating a secure and efficient communication path.
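At its heart, NHRP is a directory the hub keeps on behalf of the spokes: each spoke registers its tunnel address against its real public (NBMA) address, and other spokes query that directory to build direct spoke-to-spoke tunnels. The toy Python registry below illustrates the mapping; the class, method names, and addresses are invented for illustration:

```python
class HubRegistry:
    """Toy NHRP-like registry held at the DMVPN hub."""

    def __init__(self):
        self.nhrp = {}  # tunnel IP -> public (NBMA) address

    def register(self, tunnel_ip: str, nbma_addr: str) -> None:
        """A spoke registers its mapping at startup (NHRP Registration)."""
        self.nhrp[tunnel_ip] = nbma_addr

    def resolve(self, tunnel_ip: str) -> str:
        """A spoke asks where another spoke really lives (NHRP Resolution),
        then builds a direct spoke-to-spoke tunnel instead of relaying via the hub."""
        return self.nhrp[tunnel_ip]

hub = HubRegistry()
hub.register("10.0.0.2", "203.0.113.10")  # spoke A
hub.register("10.0.0.3", "198.51.100.7")  # spoke B
assert hub.resolve("10.0.0.3") == "198.51.100.7"
```

Phase 3's NHRP redirect builds on exactly this lookup: the hub tells a spoke mid-flow that a shorter path exists, triggering the resolution.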

Redundancy can be achieved by configuring spokes to terminate on multiple headends at one or more hub locations. Through IPsec tunnel protection, cryptographic attributes are typically mapped to the tunnel initiated by the remote peer.

FlexVPN Site-to-Site Smart Defaults

FlexVPN Site-to-Site Smart Defaults is a feature designed to simplify and streamline the configuration process of site-to-site VPN connections. It provides a set of predefined default values for various parameters, eliminating the need for complex manual configurations. Network administrators can save time and effort by leveraging these smart defaults while ensuring robust security.

One key advantage of FlexVPN Site-to-Site Smart Defaults is its ease of use. Network administrators can quickly deploy secure site-to-site VPN connections without extensive knowledge of complex VPN configurations. The predefined defaults ensure that essential security features are automatically enabled, minimizing the risk of misconfigurations and vulnerabilities.

FlexVPN IKEv2 Routing

FlexVPN IKEv2 routing is a robust routing protocol that combines the flexibility of FlexVPN with the security of IKEv2. It allows for dynamic routing between different sites and simplifies the management of complex network infrastructures. By utilizing the power of cryptographic security and advanced routing techniques, FlexVPN IKEv2 routing ensures secure and efficient communication across networks.

FlexVPN IKEv2 routing offers numerous benefits, making it a preferred choice for network administrators. Firstly, it provides enhanced scalability, allowing networks to grow and adapt to changing requirements quickly. Additionally, the protocol ensures end-to-end security by encapsulating data within secure tunnels, protecting it from unauthorized access. Moreover, FlexVPN IKEv2 routing supports multipoint connectivity, enabling seamless communication between multiple sites.

**Transport Fabric Technology**

SD-WAN leverages transport-independent fabric technology to connect remote locations. This is achieved by using overlay technology. The SDWAN overlay works by tunneling traffic over any transport between destinations within the WAN environment.

This gives genuine flexibility to route applications across any portion of the network regardless of the circuit or transport type. This is the definition of transport independence. Having a fabric SD-WAN overlay network means that every remote site, regardless of physical or logical separation, is always a single hop away from another. DMVPN, for example, works on the same transport-agnostic design.

SD-WAN vs Traditional WAN

SD-WAN overlays offer several advantages over traditional WANs, including improved scalability, reduced complexity, and better control over traffic flows. They also provide better security, as each site is protected by its dedicated security protocols. Additionally, SD-WAN overlays can improve application performance and reliability and reduce latency.

Key Point: SD-WAN abstracts the underlay

With SD-WAN, the virtual WAN overlays are abstracted from the physical device’s underlay. Therefore, the virtual WAN overlays can take on topologies independent of each other without being pinned to the configuration of the underlay network. SD-WAN changes how you map application requirements to the network, allowing for the creation of independent topologies per application.

For example, mission-critical applications may use expensive leased lines, while lower-priority applications can use inexpensive best-effort Internet links. This can all change on the fly if specific performance metrics are unmet.

Previously, the application had to match and “fit” into the network with the legacy WAN, but with an SD-WAN, the application now controls the network topology. Multiple independent topologies per application are a crucial driver for SD-WAN.
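Independent topologies per application can be pictured as different edge sets built over the same underlay sites. In the sketch below, the voice overlay is a full mesh (branch-to-branch calls go direct) while a backup overlay is hub-and-spoke (backups only ever talk to HQ); the site names and overlay choices are illustrative:

```python
from itertools import combinations

SITES = ["hq", "branch1", "branch2", "branch3"]

def full_mesh(sites: list) -> set:
    """Every site pair gets a direct virtual tunnel."""
    return {frozenset(pair) for pair in combinations(sites, 2)}

def hub_and_spoke(sites: list, hub: str = "hq") -> set:
    """Tunnels exist only between the hub and each spoke."""
    return {frozenset((hub, site)) for site in sites if site != hub}

# Two overlays with independent topologies over the same underlay sites:
overlays = {
    "voice":  full_mesh(SITES),
    "backup": hub_and_spoke(SITES),
}

assert frozenset(("branch1", "branch2")) in overlays["voice"]       # direct branch-to-branch
assert frozenset(("branch1", "branch2")) not in overlays["backup"]  # must transit the hub
```

Because the overlays are just software state, switching an application from one topology to another requires no change to the underlay.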

Example Technology: PTP GRE

Point-to-point GRE (Generic Routing Encapsulation) is a protocol that encapsulates and transports various network-layer protocols over an IP network. By providing a virtual point-to-point link, it enables secure and efficient communication between remote networks. Point-to-point GRE offers a flexible and scalable solution for organizations seeking to establish secure connections over public or private networks.

SD-WAN combines transports, SDWAN overlay, and underlay

Look at it this way. With an SD-WAN topology, there are different levels of networking: an underlay network, which is the physical infrastructure, and an SD-WAN overlay network. The physical infrastructure comprises the routers, switches, and WAN transports; the overlay network is the set of virtual WAN overlays.

The SD-WAN overlay presents a different network to each application. For example, the voice application sees only the voice overlay. The logical virtual pipe that the overlay creates, and that the application sees, differs from the underlay.

An SD-WAN overlay network is a virtual or logical network created on top of an existing physical network. The Internet itself began as an overlay, built on top of the circuit-switched telephone network. An overlay network is any virtual layer on top of physical network infrastructure.

  • Consider an SDWAN overlay as a flexible tag.

This may be as simple as a virtual local area network (VLAN) but typically refers to more complex virtual layers created by an SDN or an SD-WAN. Think of an SD-WAN overlay as a tag, so building the overlays is not expensive or time-consuming. In addition, you don’t need to buy physical equipment for each overlay, as the overlay is virtualized in software.

Similar to software-defined networking (SDN), the critical part is that SD-WAN works by abstraction. All the complexities are abstracted into application overlays. For example, application type A can use this SDWAN overlay, and application type B can use that SDWAN overlay. 

  • IP addresses, port numbers, orchestration, and end-to-end policy

Recent application requirements are driving a new type of WAN that more accurately supports today’s environment with an additional layer of policy management. The world has moved away from using IP addresses and port numbers to identify applications and make forwarding decisions.

Example Product: Cisco Meraki

**Section 1: Simplified Network Management**

One of the standout features of the Cisco Meraki platform is its user-friendly interface. Gone are the days of complex configurations and cumbersome setups. With Meraki, IT administrators can manage their entire network from a single, intuitive dashboard. This centralized management capability allows for real-time monitoring, troubleshooting, and updates, all from the cloud. The simplicity of the platform means that even those with limited technical expertise can effectively manage and optimize their network.

**Section 2: Robust Security Features**

In today’s digital landscape, security is paramount. Cisco Meraki understands this and has built comprehensive security features into its platform. From advanced malware protection to intrusion prevention and content filtering, Meraki offers a multi-layered approach to cybersecurity. The platform also includes built-in security analytics, providing IT teams with valuable insights to proactively address potential threats. This level of security ensures that your network remains protected against both internal and external vulnerabilities.

**Section 3: Scalability and Flexibility**

Another significant advantage of the Cisco Meraki platform is its scalability. As your business grows, so too can your network. Meraki’s cloud-based nature allows for seamless integration of new devices and locations without the need for extensive hardware upgrades. This flexibility makes it an ideal solution for businesses of all sizes, from small startups to large multinational corporations. The platform’s ability to adapt to changing needs ensures that it can grow alongside your business.

**Section 4: Comprehensive Support and Training**

Cisco Meraki doesn’t just provide a platform; it offers a complete ecosystem of support and training. From comprehensive documentation and online tutorials to live webinars and a dedicated support team, Meraki ensures that you have all the resources you need to make the most of its platform. This commitment to customer success means that you’re never alone in your network management journey.

Challenges to Existing WAN

Traditional WAN architectures consist of private MPLS links complemented with Internet links as a backup. Standard templates in most Service Provider environments are usually broken down into Bronze, Silver, and Gold SLAs. 

However, these types of SLA do not fit all geographies and often should be fine-tuned per location. Capacity, reliability, analytics, and security are all central parts of the WAN that should be available on demand. Traditional infrastructure is very static, and bandwidth upgrades and service changes require considerable lead time, limiting agility for new sites.

It’s not agile enough, and nothing can be performed on the fly to meet growing business needs. In addition, the cost per bit for a private connection is high, which is problematic for bandwidth-intensive applications, especially when upgrades are too costly to afford.

  • A distributed world of dissolving perimeters

Perimeters are dissolving, and the world is becoming distributed. Applications require a WAN to support distributed environments along with flexible network points. Centralized-only designs result in suboptimal traffic engineering and increased latency. Increased latency disrupts the application performance, and only a particular type of content can be put into a Content Delivery Network (CDN). CDN cannot be used for everything.

Traditional WANs are operationally complex; people likely perform different network and security functions. For example, you may have a DMVPN, Security, and Networking specialist. Some wear all hats, but they are few and far between. Different hats have different ideas, and agreeing on a minor network change could take ages.

  • The World of SD-WAN

SD-WAN replaces traditional WAN routers that are agnostic to the underlying transport technology. You can use various link types, such as MPLS, LTE, and broadband. All combined. Based on policies generated by the business, SD-WAN enables load sharing across different WAN connections that more efficiently support today’s application environment.

It pulls policy and intelligence out of the network and places them into an end-to-end solution orchestrated by a single pane of glass. SD-WAN is not just about tunnels. It consists of components that work together to simplify network operations while meeting all bandwidth and resilience requirements.

Centralized network points are no longer adequate; we need network points positioned where they make the most sense for the application and user. Backhauling traffic to a central data center is illogical, and connecting remote sites to a SaaS or IaaS model over the public Internet is far more efficient. The majority of enterprises prefer to connect remote sites directly to cloud services. So why not let them do this in the best possible way?

A new style of WAN and SD-WAN

We require a new WAN style and a shift from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business application and decides how to optimize it with the correct forwarding behavior. This new style of forwarding is problematic with traditional WAN architecture.

Making business logic decisions with only IP and port number information is challenging. Standard routing works packet by packet and sees only part of the picture. Routers have routing tables and perform forwarding, but each essentially operates on its own little island, losing the holistic view required for accurate end-to-end decision-making. An additional layer of information is needed.

A controller-based approach offers the necessary holistic view. We can now make decisions based on global information, not solely on a path-by-path basis. Getting all the routing information and compiling it into a dashboard to make a decision is much more efficient than making local decisions that only see parts of the network. 
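The controller's advantage can be shown with a small shortest-path computation over a global latency map, something no single router sees on its own. The graph, site names, and latency figures below are made up for illustration:

```python
import heapq

def shortest_path(graph: dict, src: str, dst: str):
    """Dijkstra over the controller's global latency map; returns (total_latency, path)."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, latency in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + latency, neighbor, path + [neighbor]))
    raise ValueError("unreachable")

# Controller's global view: per-link latency in ms.
graph = {
    "branch":   {"hub": 40, "regional": 15},
    "regional": {"cloud": 10, "hub": 20},
    "hub":      {"cloud": 25},
}
assert shortest_path(graph, "branch", "cloud") == (25, ["branch", "regional", "cloud"])
```

A branch router making a purely local choice might still backhaul via the hub at 65 ms; the controller, seeing all links at once, steers traffic through the 25 ms path.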

From a customer’s viewpoint, what would the perfect WAN look like if you could roll back the clock and start again?   

Related: For additional pre-information, you may find the following helpful:

  1. Transport SDN
  2. SD WAN Diagram 
  3. Overlay Virtual Networking


Introducing the SD-WAN Overlay

SD-WAN decouples (separates) the WAN infrastructure, whether physical or virtual, from its control plane mechanism and allows applications or application groups to be placed into virtual WAN overlays. The separation will enable us to bring many enhancements and improvements to a WAN that has had little innovation in the past compared to the rest of the infrastructure, such as server and storage modules.

With server virtualization, several virtual machines create application isolation on a physical server. Applications placed in VMs operate in isolation, yet the VMs are installed on the same physical hosts.

Consider SD-WAN to operate with similar principles. Each application or group can operate independently when traversing the WAN to endpoints in the cloud or other remote sites. These applications are placed into a virtual SDWAN overlay.

Overlay Networking

Overlay networking is an approach to computer networking that involves building a layer of virtual networks on top of an existing physical network. This approach improves the underlying infrastructure’s scalability, performance, and security. It also allows for the creation of virtual networks that span multiple physical networks, allowing for greater flexibility in traffic routes.

**Virtualization**

At the core of overlay networking is the concept of virtualization. This involves separating the physical infrastructure from the virtual networks, allowing greater control over allocating resources. This separation also allows the creation of virtual network segments that span multiple physical networks. This provides an efficient way to route traffic and the ability to provide additional security and privacy measures.

**Underlay network**

A network underlay is a physical infrastructure that provides the foundation for a network overlay, a logical abstraction of the underlying physical network. The network underlay provides the physical transport of data between nodes, while the overlay provides logical connectivity.

The network underlay can comprise various technologies, such as Ethernet, Wi-Fi, cellular, satellite, and fiber optics. It is the foundation of a network overlay and essential for its proper functioning. It provides data transport and physical connections between nodes. It also provides the physical elements that make up the infrastructure, such as routers, switches, and firewalls.

Example: DMVPN over IPSec

Understanding DMVPN

DMVPN is a dynamic VPN technology that simplifies establishing secure connections between multiple sites. Unlike traditional VPNs, which require point-to-point tunnels, DMVPN uses a hub-and-spoke architecture, allowing any-to-any connectivity. This flexibility enables organizations to quickly scale their networks and accommodate dynamic changes in their infrastructure.

On the other hand, IPsec provides a robust framework for securing IP communications. It offers encryption, authentication, and integrity mechanisms, ensuring that data transmitted over the network remains confidential and protected against unauthorized access. IPsec is a widely adopted standard that is compatible with various network devices and software, making it an ideal choice for securing DMVPN connections.

The combination of DMVPN and IPsec brings numerous benefits to organizations. Firstly, DMVPN’s dynamic nature allows for easy scalability and improved network resiliency. New sites can be added seamlessly without the need for manual configuration changes. Additionally, DMVPN over IPsec provides strong encryption, ensuring the confidentiality of sensitive data. Moreover, DMVPN’s any-to-any connectivity optimizes network traffic flow, enhancing performance and reducing latency.

 

Overlay networking
Diagram: Overlay networking. Source Researchgate.

Key Challenges: Driving Overlay Networking & SD-WAN

**Challenge: We need more bandwidth**

Modern businesses demand more bandwidth than ever to connect their data, applications, and services. As a result, we have many things to consider with the WAN, such as regulations, security, visibility, branch and data center sites, remote workers, internet access, cloud, and traffic prioritization. These demands are driving the need for SD-WAN.

The concepts and design principles of creating a wide area network (WAN) to provide resilient and optimal transit between endpoints have continuously evolved. However, the driver behind building a better WAN is to support applications that demand performance and resiliency.

**Challenge: Suboptimal traffic flow**

The optimal route will be the fastest or most efficient and, therefore, preferred for transferring data. Sub-optimal routes will be slower and, hence, not selected. Centralized-only designs result in suboptimal traffic flow and increased latency, which degrades application performance.

A key point to note is that traditional networks focus on centralized points in the network that all applications, network, and security services must adhere to. These network points are fixed and cannot be changed.

**Challenge: Network point intelligence**

However, the network should evolve to have network points positioned where it makes the most sense for the application and user, not based on a previously validated design for a different application era. For example, many branch sites do not have local Internet breakouts.

So, for this reason, internet-bound traffic was backhauled to secure, centralized internet portals at the HQ site. As a result, we sacrificed the performance of Internet and cloud applications. Designs that place the HQ site at the center of connectivity requirements inhibit the dynamic access requirements of digital business.

**Challenge: Hub and spoke drawbacks**

Simple spoke-type networks are sub-optimal because you always have to go to the center point of the hub and then out to the machine you need rather than being able to go directly to whichever node you need. As a result, the hub becomes a bottleneck in the network as all data must go through it. With a more scattered network using multiple hubs and switches, a less congested and more optimal route could be found between machines.

Cisco SD WAN Overlay
Diagram: Cisco SD-WAN overlay. Source Network Academy

The Fabric:

The word fabric comes from the fact that there are many paths from one server to another, which eases load balancing and traffic distribution. SDN aims to centralize the control logic that distributes flows over all the fabric paths. An SDN controller can also manage several fabrics simultaneously, handling intra- and inter-datacenter flows.

SD-WAN is used to control and manage a company’s multiple WANs, which may use different transport types: Internet, MPLS, LTE, DSL, fiber, and so on. SD-WAN uses SDN technology to control the entire environment. As in SDN, the data plane and control plane are separated, and a centralized controller is added to manage flows, routing and switching policies, packet priority, network policies, and more. SD-WAN technology is based on overlays, with overlay nodes abstracting the underlying networks.

Centralized logic:

In a traditional network, the transport functions and control logic are resident on each device. This is why any configuration or change must be done box by box, carried out manually or, at best, with an Ansible script. SD-WAN brings Software-Defined Networking (SDN) concepts to the enterprise branch WAN.

Software-defined networking (SDN) is an architecture, whereas SD-WAN is a technology that can be purchased and built on SDN’s foundational concepts. SD-WAN’s centralized logic stems from SDN. SDN separates the control from the data plane and uses a central controller to make intelligent decisions, similar to the design that most SD-WAN vendors operate.

A holistic view:

The controller and the SD-WAN overlay have a holistic view. The controller supports central policy management, enabling network-wide policy definitions and traffic visibility. The SD-WAN edge devices perform the data plane. The data plane is where simple forwarding occurs, and the control plane, which is separate from the data plane, sets up all the controls for the data plane to forward.

Like SDN, the SD-WAN overlay abstracts network hardware into a control plane with multiple data planes that make up one large WAN fabric. Because the control layer is decoupled from the physical infrastructure and runs in software, services can be virtualized and delivered from a central location to any point on the network.

SD-WAN Overlay Features

SD-WAN Overlay Feature 1: Combining the transports:

At its core, SD-WAN shapes and steers application traffic across multiple WAN transports. Building on the concept of link bonding, which combines numerous transports and transport types, the SD-WAN overlay moves the functionality up the stack: it aggregates last-mile services and represents them as a single pipe to the application. SD-WAN lets you combine all transport links into one big pipe. SD-WAN is transport agnostic: because it works by abstraction, it does not care what transport links you have. Whether you have MPLS, private Internet, or LTE, it can combine them all or use them separately.

SD-WAN Overlay Feature 2: Central location:

From a central location, SD-WAN pulls all of these WAN resources together, creating one large WAN fabric that allows administrators to slice up the WAN to match the application requirements that sit on top. Different applications traverse the WAN, so we need the WAN to react differently. For example, if you run a call center, voice traffic needs low delay, low jitter, and high availability, so you may want it to use a path with an excellent service-level agreement.

Diagram: SD-WAN traffic steering. Source: Cisco.

SD-WAN Overlay Feature 3: Traffic steering:

Traffic steering may also be required: move voice traffic to another path if, for example, the first path is experiencing high latency. If traffic cannot be steered automatically to a better-performing link, a series of path remediation techniques can be run to improve performance. File transfer differs from real-time voice: it can tolerate more delay but needs more bandwidth. Here, you may want to use a combination of WAN transports (such as customer broadband and LTE) to achieve higher aggregate bandwidth.

This also allows you to automatically steer traffic over different WAN transports when there is degradation on one link. With the SD-WAN overlay, we must start thinking about paths, not links.
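The steering logic described above can be sketched in a few lines. This is a hypothetical illustration, not a vendor implementation: the application classes, SLA numbers, and path statistics are all made up for the example.

```python
# Per-class SLA targets: max latency (ms), max loss (%), min bandwidth (Mbps).
# These values are illustrative assumptions.
SLA = {
    "voice": {"max_latency": 150, "max_loss": 1.0, "min_bw": 1},
    "file_transfer": {"max_latency": 500, "max_loss": 5.0, "min_bw": 50},
}

def pick_path(app_class, paths):
    """Return the path meeting the class SLA, preferring lowest latency.
    If none qualifies, fall back to the least-lossy path (remediation case)."""
    sla = SLA[app_class]
    eligible = [p for p in paths
                if p["latency"] <= sla["max_latency"]
                and p["loss"] <= sla["max_loss"]
                and p["bw"] >= sla["min_bw"]]
    if eligible:
        return min(eligible, key=lambda p: p["latency"])["name"]
    return min(paths, key=lambda p: p["loss"])["name"]

# Hypothetical real-time path measurements on three transports.
paths = [
    {"name": "mpls", "latency": 40, "loss": 0.1, "bw": 10},
    {"name": "broadband", "latency": 90, "loss": 2.0, "bw": 100},
    {"name": "lte", "latency": 120, "loss": 1.5, "bw": 30},
]
print(pick_path("voice", paths))          # mpls meets the voice SLA
print(pick_path("file_transfer", paths))  # broadband has the needed bandwidth
```

Note how the same set of paths yields different answers per application class: the voice flow lands on the low-latency MPLS link, while the bulk transfer lands on the high-bandwidth broadband link.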

SD-WAN Overlay Feature 4: Intelligent decisions:

At its core, SD-WAN enables real-time application traffic steering over any link, such as broadband, LTE, and MPLS, assigning pre-defined policies based on business intent. Steering policies support many application types, making intelligent decisions about how WAN links are utilized and which paths are taken.

The concepts of an underlay and overlay are not new, and SD-WAN borrows these designs. The underlay is the physical or virtual infrastructure. The overlay sits on top of it and is where all the intelligence can be set. The SD-WAN overlay represents the virtual WANs that carry your different applications.

A virtual WAN overlay enables us to steer traffic and combine all bandwidths. Similar to how applications are mapped to VMs in the server world, with SD-WAN each application is mapped to its own virtual SD-WAN overlay. Each virtual SD-WAN overlay can have its own SD-WAN security policies, topologies, and performance requirements.

SD-WAN Overlay Feature 5: Application-Aware Routing Capabilities

Not only do we need application visibility to forward efficiently over either transport, but we also need the ability to examine deep inside the application and look at the sub-applications. For example, we can distinguish Facebook chat from regular Facebook browsing. This removes the application’s mystery and allows you to balance loads based on sub-applications. It’s like configuring the network with a scalpel instead of a sledgehammer.

SD-WAN Overlay Feature 6: Ease Of Integration With Existing Infrastructure

The risk of introducing new technologies may come with a disruptive implementation strategy. Loss of service damages more than the engineer’s reputation. It hits all areas of the business. The ability to seamlessly insert new sites into existing designs is a vital criterion. With any network change, a critical evaluation is to know how to balance risk with innovation while still meeting objectives.

How aligned is marketing content with reality? It’s easy for marketing materials to claim a solution works at Layer 2 or 3; actually doing it is an entirely different ball game. SD-WAN requires a certain amount of due diligence. One way to cut through the noise is to examine who has real-life deployments with proven proof of concept (POC) and validated designs. A proven POC will help you guide your transition in a step-by-step manner.

SD-WAN Overlay Feature 7: Regional-Specific Routing Topologies

Every company has different requirements for hub-and-spoke, full-mesh, and Internet PoP topologies. For example, voice may suit a full-mesh design, while data requires a hub-and-spoke design connecting to a central data center. Nearest service availability is the key to improved performance, as there is little we can do about the latency gods except move services closer together.

SD-WAN Overlay Feature 8: Device Management & Policy Administration

The manual box-by-box approach to policy enforcement is not the way forward; it’s a step back into the Stone Age. The ability to tie everything to a template and automate enables rapid branch deployments, security updates, and configuration changes. The optimal solutions keep everything in one place and can dynamically push out upgrades.

SD-WAN Overlay Feature 9: Highly Available With Automatic Failovers

You cannot apply a singular viewpoint to high availability. An end-to-end solution should address the high availability requirements of the device, link, and site level. WANs can fail quickly, but this requires additional telemetry information to detect failures and brownout events. 
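The telemetry mentioned above can be sketched as a simple brownout detector: an exponentially weighted moving average (EWMA) of probe loss with separate trip and clear thresholds (hysteresis), so a single bad probe does not flap the path. The smoothing factor and thresholds here are illustrative assumptions, not values from any product.

```python
class PathMonitor:
    """Detect brownouts on one WAN path from periodic probe results."""

    def __init__(self, alpha=0.3, trip=0.10, clear=0.02):
        self.alpha, self.trip, self.clear = alpha, trip, clear
        self.loss_ewma = 0.0   # smoothed loss rate, 0.0..1.0
        self.degraded = False

    def record_probe(self, lost: bool) -> bool:
        """Feed one probe result; return current degraded state."""
        sample = 1.0 if lost else 0.0
        self.loss_ewma = self.alpha * sample + (1 - self.alpha) * self.loss_ewma
        if not self.degraded and self.loss_ewma >= self.trip:
            self.degraded = True    # brownout detected: steer traffic away
        elif self.degraded and self.loss_ewma <= self.clear:
            self.degraded = False   # loss has drained off: path usable again
        return self.degraded

mon = PathMonitor()
for lost in [False, False, True, True, False, False]:
    state = mon.record_probe(lost)  # two consecutive losses trip the monitor
```

The hysteresis gap between `trip` and `clear` is the important design choice: it prevents the path from oscillating in and out of service while the loss average hovers near a single threshold.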

SD-WAN Overlay Feature 10: Encryption On All Transports

Irrespective of link type, whether MPLS, LTE, or the Internet, we need the capacity to encrypt on all those paths without the excess baggage of IPsec. Encryption should happen automatically, and the complexity of IPsec should be abstracted.

**Application-Orientated WAN**

Push to the cloud:  

When geographically dispersed users connect back to central locations, their consumption triggers additional latency, degrading the application’s performance. No one can get away from latency unless we find ways to change the speed of light. One way is to shorten the link by moving to cloud-based applications.

The push to the cloud is inevitable. Most businesses are now moving away from on-premises, in-house hosting to cloud-based management, and the benefits of doing so are manifold.

The ready-made global footprint enables the usage of SaaS-based platforms that negate the drawbacks of dispersed users tromboning to a central data center.

Logically positioned cloud platforms are closer to the mobile user. It’s increasingly far more efficient from the technical and business standpoint to host these applications in the cloud, which makes them available over the public Internet.

Bandwidth intensive applications:

Richer applications, multimedia traffic, and growth in the cloud application consumption model drive the need for additional bandwidth. Unfortunately, we can only fit so much into a single link. The resulting congestion leads to packet drops, ultimately degrading application performance and user experience. In addition, most applications ride on TCP, yet TCP was not designed with performance in mind.

Organic growth:

Organic business growth is a significant driver of additional bandwidth requirements. The challenge is that existing network infrastructures are static and cannot respond adequately to this growth in a reasonable period. The last mile of MPLS locks you in and kills agility. Circuit lead times impede the organization’s productivity and create an overall lag.

Costs:

A WAN virtualization solution should be simple. To serve the new era of applications, we need to increase the link capacity by buying more bandwidth. However, nothing is as easy as it may seem. The WAN is one of the network’s most expensive parts, and employing link oversubscription to reduce congestion is too costly.

Furthermore, bandwidth comes at a cost, and the existing TDM-based MPLS architectures cannot accommodate application demands. 

Traditional MPLS is feature-rich and offers many benefits. No one doubts this fact. MPLS will never die. However, it comes at a high cost for relatively low bandwidth. Unfortunately, MPLS’s price and capabilities are not a perfect couple.

Hybrid connectivity:

Since no single transport suits every workload, similar applications will have different forwarding preferences. Application flows are dynamic and change depending on user consumption. Furthermore, MPLS, LTE, and Internet links often complement each other since they support different application types.

For example, Storage and Big data replication traffic are forwarded through the MPLS links, while cheaper Internet connectivity is used for standard Web-based applications.

Limitations of protocols:

When left to its defaults, IPsec is challenged by hybrid connectivity. The IPsec architecture is point-to-point, not site-to-site. As a result, it doesn’t natively support redundant uplinks, and complex configurations are required when sites have multiple uplinks to multiple providers.

By default, IPsec is not abstracted; one session cannot be used over multiple uplinks, causing additional issues with transport failover and path selection. IPsec is a Swiss Army knife of features, and much of its complexity should be abstracted. Secure tunnels should be set up and torn down immediately, and new sites should be incorporated into a secure overlay without much delay or manual intervention.

Internet of Things (IoT):

Security and bandwidth consumption are key issues when introducing IP-enabled objects and IoT access technologies. IoT is all about Data and will bring a shed load of additional overheads for networks to consume. As millions of IoT devices come online, how efficiently do we segment traffic without complicating the network design further? Complex networks are hard to troubleshoot, and simplicity is the mother of all architectural success. Furthermore, much IoT processing requires communication to remote IoT platforms. How do we account for the increased signaling traffic over the WAN? The introduction of billions of IoT devices leaves many unanswered questions.

Branch NFV:

There has been strong interest in infrastructure consolidation by deploying Network Function Virtualization (NFV) at the branch. Enabling on-demand service and chaining application flows are key drivers for NFV. However, traditional service chaining is static since it is bound to a particular network topology. Moreover, it is typically built through manual configuration and is prone to human error.

SD-WAN overlay path monitoring:

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE) and then chooses the best path based on real-time conditions and the business policy. In summary, the underlay network is the physical or virtual infrastructure on which the overlay network is built. An SD-WAN overlay network is a virtual network built on top of that underlying network infrastructure (the underlay).

Controller-based policy:

An additional layer of information is needed to make more intelligent decisions about how and where to forward application traffic. SD-WAN offers a controller-based policy approach that incorporates a holistic view.

A central controller can now make decisions based on global information, not solely on a path-by-path basis with traditional routing protocols.  Getting all the routing information and compiling it into the controller to make a decision is much more efficient than making local decisions that only see a limited part of the network.

The SD-WAN Controller provides physical or virtual device management for all SD-WAN Edges associated with the controller. This includes, but is not limited to, configuration and activation, IP address management, and pushing down policies onto SD-WAN Edges located at the branch sites.
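The controller's template-driven policy push can be sketched as follows. This is an illustrative model only: the site names, policy fields, and merge behavior are assumptions, not any vendor's API.

```python
# A single global policy template, defined once on the controller.
POLICY_TEMPLATE = {
    "overlay": "corp",
    "qos": {"voice": "priority", "bulk": "best-effort"},
    "encryption": "auto",
}

def render_site_policy(site, template=POLICY_TEMPLATE, overrides=None):
    """Merge the global template with per-site overrides for one edge device."""
    policy = {**template, "site": site}
    if overrides:
        policy.update(overrides)
    return policy

def push_policies(sites, overrides_by_site=None):
    """Return the per-edge policies the controller would push to each branch."""
    overrides_by_site = overrides_by_site or {}
    return {s: render_site_policy(s, overrides=overrides_by_site.get(s))
            for s in sites}

# One template, many branches; branch-2 gets a stricter encryption override.
policies = push_policies(["branch-1", "branch-2"],
                         {"branch-2": {"encryption": "ipsec-strict"}})
```

The point of the sketch is the shape of the workflow: policy is authored once, rendered per site, and pushed outward, instead of being configured box by box on each edge.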

SD-WAN Overlay Case Study:

Personal Note: I recently consulted for a private enterprise. Like many enterprises, they have many applications, both legacy and new. No one knew which applications were running over the WAN; visibility was low. For the network design, the H.Q. has MPLS and Direct Internet access. Nothing is new here; this design has been in place for the last decade. All traffic is backhauled to the HQ/MPLS headend for security screening. The security stack, including firewalls, IDS/IPS, and anti-malware, was in the H.Q. The remote sites have high latency and limited connectivity options.

More importantly, they are transitioning their ERP system to the cloud. As apps move to the cloud, they want to avoid fixed WAN, a big driver for a flexible SD-WAN solution. They also have remote branches, which are hindered by high latency and poorly managed IT infrastructure. But they don’t want an I.T. representative at each site location. They have heard that SD-WAN has a centralized logic and can view the entire network from one central location. These remote sites must receive large files from the H.Q.; the branch sites’ transport links are only single-customer broadband links.

Some remote sites have LTE, and the bills are getting more significant. The company wants to reduce costs with dedicated Internet access or customer/business broadband. They have heard that you can combine different transports with SD-WAN and have several path remediations on degraded transports for better performance. So, they decided to roll out SD-WAN. From this new architecture, they gained several benefits.

SD-WAN Visibility

When your business-critical applications operate over different provider networks, troubleshooting and finding the root cause of problems becomes more challenging. So, visibility is critical to business. SD-WAN allows you to see network performance data in real-time and is essential for determining where packet loss, latency, and jitter are occurring so you can resolve the problem quickly.

You also need to be able to see who or what is consuming bandwidth so you can spot intermittent problems. For all these reasons, SD-WAN visibility needs to go beyond network performance metrics and provide greater insight into the delivery chains that run from applications to users.

  • Understand your baselines:

Visibility is needed to complete the network baseline before the SD-WAN is deployed. This enables the organization to understand existing capabilities: the norm, what applications are running, the number of sites connected, which service providers are used, and whether those providers are meeting their SLAs. Visibility is critical to obtaining a complete picture, so teams understand how to optimize the business infrastructure. SD-WAN gives you an intelligent edge, so you can see all the traffic and act on it immediately.

First, look at the visibility of the various flows, the links used, and any issues on those links. Then, if necessary, you can tweak the bonding policy to optimize the traffic flow. Before the rollout of SD-WAN, there was no visibility into the types of traffic or how much bandwidth different apps used, and they had limited knowledge of WAN performance.

  • SD-WAN offers higher visibility:

With SD-WAN, they have the visibility to control and classify traffic on Layer 7 values, such as which URL you are using and which domain you are trying to hit, along with the standard port and protocol. All applications are not equal; some run better on different links. If an application is not performing correctly, you can route it to a different circuit. With the SD-WAN orchestrator, you have complete visibility across all locations, all links, and all traffic across all circuits.

  • SD-WAN High Availability:

Any high-availability solution aims to ensure that all network services are resilient to failure. It aims to provide continuous access to network resources by addressing the potential causes of downtime through functionality, design, and best practices. The previous high-availability design was active and passive with manual failover. It was hard to maintain, and there was a lot of unused bandwidth. Now, they use resources more efficiently and are no longer tied to the bandwidth of the first circuit.

There is a better granular application failover mechanism. You can also select which apps are prioritized if a link fails or when a certain congestion ratio is hit. For example, you have LTE as a backup, which can be expensive. So applications marked high priority are steered over the backup link, but guest WIFI traffic isn’t.  
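The priority-aware failover described above (high-priority apps move to the expensive LTE backup, guest Wi-Fi does not) can be sketched like this. The application names and priority labels are hypothetical.

```python
# Illustrative app-to-priority mapping; in practice this comes from policy.
APP_PRIORITY = {"erp": "high", "voip": "high", "guest_wifi": "low"}

def assign_link(app, primary_up: bool) -> str:
    """Pick a link for an application given the primary link's state."""
    if primary_up:
        return "primary"
    # Primary down: spend costly LTE bandwidth only on high-priority apps.
    if APP_PRIORITY.get(app) == "high":
        return "lte_backup"
    return "blocked"   # low-priority traffic waits for the primary to recover

print(assign_link("erp", primary_up=False))         # steered to LTE
print(assign_link("guest_wifi", primary_up=False))  # kept off the backup
```

The design choice worth noting is that failover is per application, not per link: the same outage produces different outcomes for ERP and guest Wi-Fi.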

  • Flexible topology:

Before, they had a hub-and-spoke MPLS design for all applications. They wanted a complete mesh architecture for some applications, kept the existing hub, and spoke for others. However, the service provider couldn’t accommodate the level of granularity that they wanted.

With SD-WAN, they can choose topologies better suited to the application type. As a result, the network design is now more flexible and matches the application, rather than the application having to match a network design that doesn’t suit it.

Types of SD-WAN

The market for branch office wide-area network functionality is shifting from dedicated routing, security, and WAN optimization appliances to feature-rich SD-WAN. As a result, WAN edge infrastructure now incorporates a widening set of network functions, including secure routers, firewalls, SD-WAN, WAN path control, and WAN optimization, along with traditional routing functionality. Therefore, consider the following approach to deploying SD-WAN.

1. Application-based approach

With SD-WAN, we are shifting from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business requirements and decides how to optimize the application with the correct forwarding behavior. This new way of forwarding would be problematic when using traditional WAN architectures.

Making business logic decisions with I.P. and port number information is challenging. Standard routing is the most common way to forward application traffic today, but it only assesses part of the picture when making its forwarding decision. 

These devices have routing tables to perform forwarding. Still, with this model, they operate and decide on their little island, losing the holistic view required for accurate end-to-end decision-making.  

2. SD-WAN: Holistic decision

The WAN must start making decisions holistically. It should not be viewed as a single module in the network design. Instead, it must incorporate several elements it has not integrated to capture the correct per-application forwarding behavior. The ideal WAN should be automatable to form a comprehensive end-to-end solution centrally orchestrated from a single pane of glass.

Managed and orchestrated centrally, this new WAN fabric is transport agnostic. It offers application-aware routing, regional-specific routing topologies, encryption on all transports regardless of link type, and high availability with automatic failover. All of these will be discussed shortly and are the essence of SD-WAN.  

3. SD-WAN and central logic        

Besides the virtual SD-WAN overlay, another key SD-WAN concept is centralized logic. On a standard router, local routing tables are computed by an algorithm to forward a packet to a given destination.

It receives routes from its peers or neighbors but computes paths locally and makes local routing decisions. The critical point to note is that everything is calculated locally. SD-WAN functions on a different paradigm.

Rather than using distributed logic, it utilizes centralized logic. This allows you to view the entire network holistically and with a distributed forwarding plane that makes real-time decisions based on better metrics than before.

This paradigm enables SD-WAN to see how flows behave along the path. It takes the fragmented control approach and centralizes it while still benefiting from a distributed forwarding plane.

The SD-WAN controller, which acts as the brain, can set different applications to run over different paths based on business requirements and performance SLAs, not on a fixed topology. So, for example, if one path does not have acceptable packet loss and latency is high, we can move to another path dynamically.

4. Independent topologies

SD-WAN has different levels of networking and brings the concepts of SDN into the Wide Area Network. Similar to SDN, we have an underlay and an overlay network with SD-WAN. The WAN infrastructure, either physical or virtual, is the underlay, and the SDWAN overlay is in software on top of the underlay where the applications are mapped.

This decoupling or separation of functions allows different application or group overlays. Previously, the application had to work with a fixed and pre-built network infrastructure. With SD-WAN, the application can choose the type of topology it wants, such as a full mesh or hub and spoke. The topologies with SD-WAN are much more flexible.

5. The SD-WAN overlay

SD-WAN optimizes traffic over multiple available connections. It dynamically steers traffic to the best available link. If the available links show transmission issues, it will immediately move traffic to a better path, or apply remediation to a link if, for example, you have only a single link. SD-WAN delivers application flows from a source to a destination based on the configured policy and the best available network path. A core concept of SD-WAN is the overlay.

SD-WAN solutions provide the software abstraction to create the SD-WAN overlay and decouple network software services from the underlying physical infrastructure. Multiple virtual overlays may be defined to abstract the underlying physical transport services, each supporting a different quality of service, preferred transport, and high availability characteristics.

6. Application mapping

Application mapping also allows you to steer traffic over different WAN transports. This steering is automatic and can be implemented when specific performance metrics are unmet. For example, if Internet transport has a 15% packet loss, the policy can be set to steer all or some of the application traffic over to better-performing MPLS transport.

Applications are mapped to different overlays based on business intent, not infrastructure details like IP addresses. It is common to have around four overlays; for example, you may have gold, platinum, and bronze SD-WAN overlays and map applications to them.

The applications will have different networking requirements, and overlays allow you to slice and dice your network if you have multiple application types. 
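The tiered mapping above can be sketched as a small lookup. The gold/platinum/bronze tiers follow the text; the per-overlay attributes (topology, preferred transport) and application names are illustrative assumptions.

```python
# Each overlay tier carries its own topology and transport preference.
OVERLAYS = {
    "platinum": {"topology": "full-mesh", "transport": "mpls"},
    "gold":     {"topology": "hub-and-spoke", "transport": "broadband"},
    "bronze":   {"topology": "hub-and-spoke", "transport": "any"},
}

# Business-intent mapping: the application, not the IP address, picks the tier.
APP_TO_OVERLAY = {"voice": "platinum", "erp": "gold", "web": "bronze"}

def overlay_for(app: str) -> dict:
    """Return the overlay (and its policies) an application is mapped to."""
    tier = APP_TO_OVERLAY.get(app, "bronze")   # unknown apps default to bronze
    return {"tier": tier, **OVERLAYS[tier]}

print(overlay_for("voice"))  # platinum: full-mesh over MPLS
```

Slicing the network this way means adding a new application is a one-line mapping change, not a topology change.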

SD-WAN & WAN metrics

SD-WAN captures metrics that go far beyond the standard WAN measurements. The traditional method measures packet loss, latency, and jitter to determine path quality. These measurements are insufficient, and the routing protocols that use them make forwarding decisions only at Layer 3 of the OSI model.

As we know, Layer 3 of the OSI model lacks intelligence and misses the overall user experience. We must start looking at application transactions rather than relying on bits, bytes, jitter, and latency.

SD-WAN incorporates better metrics beyond those a standard WAN edge router considers. These metrics may include application response time, network transfer time, and service response time. Some SD-WAN solutions monitor each flow’s RTT, sliding windows, and ACK delays, not just IP or TCP headers. This creates a more accurate view of the application’s performance.
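One way to picture combining network-level and application-level metrics is a composite path score. The weights and normalization budgets below are purely illustrative assumptions, not a vendor formula; the point is that an application-level term (response time) influences the choice alongside loss, latency, and jitter.

```python
def path_score(latency_ms, loss_pct, jitter_ms, app_response_ms):
    """Composite quality score; lower is better.
    Each term is normalized against a rough, assumed budget."""
    return (0.25 * latency_ms / 150          # vs a 150 ms latency budget
            + 0.25 * loss_pct / 2.0          # vs a 2% loss budget
            + 0.20 * jitter_ms / 30          # vs a 30 ms jitter budget
            + 0.30 * app_response_ms / 500)  # vs a 500 ms app response budget

# Compare two paths using both network and application measurements.
best = min(
    [("mpls", path_score(40, 0.1, 5, 200)),
     ("broadband", path_score(90, 1.5, 20, 450))],
    key=lambda t: t[1],
)
print(best[0])  # the path with the lowest composite score
```

A router looking only at Layer 3 metrics could not produce this ranking, because the application response term is invisible to it.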

Overlay Use Case: DMVPN Dual Cloud

Exploring Single Hub Dual Cloud Configuration

The Single Hub, Dual Cloud configuration is a DMVPN setup in which a central hub terminates two (or more) DMVPN clouds, that is, separate tunnel overlays, typically one per underlying transport or service provider. This configuration offers several advantages, such as increased redundancy, improved performance, and enhanced security.

By running each DMVPN cloud over a different provider, the Single Hub, Dual Cloud configuration ensures redundancy if one provider experiences an outage. This redundancy enhances network availability and minimizes the risk of downtime, providing a robust and reliable network infrastructure.

With the Single Hub, Dual Cloud configuration, network traffic can be load-balanced across the two clouds. This load balancing distributes the workload evenly, optimizing network performance and preventing bottlenecks. It allows for efficient utilization of network resources, resulting in an enhanced user experience and improved application performance.
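A common way to balance flows across two overlays without reordering packets is per-flow hashing: hash the 5-tuple so each flow is pinned to one cloud, while the flow population spreads across both. This sketch is purely illustrative; the cloud names and hashing choice are assumptions, not a specific DMVPN feature.

```python
import hashlib

CLOUDS = ["dmvpn-cloud-1", "dmvpn-cloud-2"]

def cloud_for_flow(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically map a flow's 5-tuple to one of the two clouds."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return CLOUDS[digest[0] % len(CLOUDS)]

# The same flow always maps to the same cloud, so packets stay in order.
a = cloud_for_flow("10.0.0.5", "192.0.2.9", 50123, 443)
b = cloud_for_flow("10.0.0.5", "192.0.2.9", 50123, 443)
```

Per-flow (rather than per-packet) distribution is the usual trade-off here: it sacrifices perfectly even utilization to avoid out-of-order delivery within a single TCP session.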

Summary: SD WAN Overlay

In today’s digital landscape, businesses increasingly rely on cloud-based applications, remote workforces, and data-driven operations. As a result, the demand for a more flexible, scalable, and secure network infrastructure has never been greater. This is where SD-WAN overlay comes into play, revolutionizing how organizations connect and operate.

SD-WAN overlay is a network architecture that allows organizations to abstract and virtualize their wide area networks, decoupling them from the underlying physical infrastructure. It utilizes software-defined networking (SDN) principles to create an overlay network that runs on top of the existing WAN infrastructure, enabling centralized management, control, and optimization of network traffic.

Key benefits of SD-WAN overlay 

1. Enhanced Performance and Reliability:

SD-WAN overlay leverages multiple network paths to distribute traffic intelligently, ensuring optimal performance and reliability. By dynamically routing traffic based on real-time conditions, businesses can overcome network congestion, reduce latency, and maximize application performance. This capability is particularly crucial for organizations with distributed branch offices or remote workers, as it enables seamless connectivity and productivity.

2. Cost Efficiency and Scalability:

Traditional WAN architectures can be expensive to implement and maintain, especially when organizations need to expand their network footprint. SD-WAN overlay offers a cost-effective alternative by utilizing existing infrastructure and incorporating affordable broadband connections. With centralized management and simplified configuration, scaling the network becomes a breeze, allowing businesses to adapt quickly to changing demands without breaking the bank.

3. Improved Security and Compliance:

In an era of increasing cybersecurity threats, protecting sensitive data and ensuring regulatory compliance are paramount. SD-WAN overlay incorporates advanced security features to safeguard network traffic, including encryption, authentication, and threat detection. Businesses can effectively mitigate risks, maintain data integrity, and comply with industry regulations by segmenting network traffic and applying granular security policies.

4. Streamlined Network Management:

Managing a complex network infrastructure can be a daunting task. SD-WAN overlay simplifies network management with centralized control and visibility, enabling administrators to monitor and manage the entire network from a single pane of glass. This level of control allows for faster troubleshooting, policy enforcement, and network optimization, resulting in improved operational efficiency and reduced downtime.

5. Agility and Flexibility:

In today’s fast-paced business environment, agility is critical to staying competitive. SD-WAN overlay empowers organizations to adapt rapidly to changing business needs by providing the flexibility to integrate new technologies and services seamlessly. Whether adding new branch locations, integrating cloud applications, or adopting emerging technologies like IoT, SD-WAN overlay offers businesses the agility to stay ahead of the curve.

Implementation of SD-WAN Overlay:

Implementing SD-WAN overlay requires careful planning and consideration. The following steps outline a typical implementation process:

1. Assess Network Requirements: Evaluate existing network infrastructure, bandwidth requirements, and application performance needs to determine the most suitable SD-WAN overlay solution.

2. Design and Architecture: Create a network design incorporating SD-WAN overlay while considering factors such as branch office connectivity, data center integration, and security requirements.

3. Vendor Selection: Choose a reliable and reputable SD-WAN overlay vendor based on their technology, features, support, and scalability.

4. Deployment and Configuration: Install the required hardware or virtual appliances and configure the SD-WAN overlay solution according to the network design. This includes setting up policies, traffic routing, and security parameters.

5. Testing and Optimization: Thoroughly test the SD-WAN overlay solution, ensuring its compatibility with existing applications and network infrastructure. Optimize the solution based on performance metrics and user feedback.

Conclusion: SD-WAN overlay is a game-changer for businesses seeking to optimize their network infrastructure. By enhancing performance, reducing costs, improving security, streamlining management, and enabling agility, SD-WAN overlay unlocks the true potential of connectivity. Embracing this technology allows organizations to embrace digital transformation, drive innovation, and gain a competitive edge in the digital era. In an ever-evolving business landscape, SD-WAN overlay is the key to unlocking new growth opportunities and future-proofing your network infrastructure.

SD WAN Tutorial

In the ever-evolving landscape of networking technology, SD-WAN has emerged as a powerful solution that revolutionizes the way businesses connect and operate. This blog post delves into the world of SD-WAN, exploring its key features, benefits, and the impact it has on modern networks.

SD-WAN, which stands for Software-Defined Wide Area Networking, is a technology that simplifies the management and operation of a wide area network. By separating the network hardware from its control mechanism, SD-WAN enables businesses to have more flexibility and control over their network infrastructure. Unlike traditional WAN setups, SD-WAN utilizes software to intelligently route traffic across multiple connection types, optimizing performance and enhancing security.

One of the fundamental features of SD-WAN is its ability to provide centralized network management. This means that network administrators can easily configure and monitor the entire network from a single interface, streamlining operations and reducing complexity. Additionally, SD-WAN offers dynamic path selection, allowing traffic to be routed based on real-time conditions, such as latency, congestion, and link availability. This dynamic routing capability ensures optimal performance and resilience.

Another significant benefit of SD-WAN is its ability to support multiple connection types, including MPLS, broadband, and cellular networks. This enhances network reliability and scalability, as businesses can leverage multiple connections to avoid single points of failure and accommodate growing bandwidth demands. Moreover, SD-WAN solutions often incorporate advanced security features, such as encryption and segmentation, ensuring data integrity and protecting against cyber threats.

SD-WAN has had a profound impact on modern networks, empowering businesses with greater agility and cost-efficiency. With the rise of cloud computing and the increasing adoption of SaaS applications, traditional network architectures were often unable to provide the necessary performance and reliability. SD-WAN addresses these challenges by enabling direct and secure access to cloud resources, eliminating the need for backhauling traffic to a central data center.

Furthermore, SD-WAN enhances network visibility and control, allowing businesses to prioritize critical applications, apply Quality of Service (QoS) policies, and optimize bandwidth utilization. This level of granular control ensures that essential business applications receive the required resources, enhancing user experience and productivity. Additionally, SD-WAN simplifies network deployments, making it easier for organizations to expand their networks, open new branches, and integrate acquisitions seamlessly.

SD-WAN represents a significant evolution in networking technology, offering businesses a comprehensive solution to modern connectivity challenges. With its centralized management, dynamic path selection, and support for multiple connection types, SD-WAN empowers organizations to build robust, secure, and agile networks. As businesses continue to embrace digital transformation, SD-WAN is poised to play a pivotal role in shaping the future of networking.

Highlights: SD WAN Tutorial

Network Abstraction

In an era where digital transformation is no longer a luxury but a necessity, businesses are constantly looking for ways to optimize their network infrastructure. Enter Software-Defined Wide Area Networking (SD-WAN), a revolutionary technology that is transforming how organizations approach WAN architecture. SD-WAN is not just a buzzword; it is a robust solution designed to simplify the management and operation of a WAN by decoupling the networking hardware from its control mechanism.

### The Basics of SD-WAN Technology

At its core, SD-WAN is a virtualized WAN architecture that allows enterprises to leverage any combination of transport services, including MPLS, LTE, and broadband internet services, to securely connect users to applications.

Unlike traditional WANs, which require proprietary hardware and complex configurations, SD-WAN uses a centralized control function to direct traffic across the WAN, increasing application performance and delivering a high-quality user experience. This separation of the data plane from the control plane is what makes SD-WAN a game-changer in the world of networking.

### WAN Virtualization: The Heart of SD-WAN

WAN virtualization is a critical component of SD-WAN. It abstracts the underlying network infrastructure, creating a virtual overlay that provides seamless connectivity across different network types. This virtualization enables businesses to manage network traffic more effectively, prioritize critical applications, and ensure reliable performance irrespective of the physical network.

With WAN virtualization, businesses can rapidly deploy new applications and services, respond to changes in network conditions, and optimize bandwidth usage without costly hardware upgrades.
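To make the overlay idea concrete, here is a minimal, purely illustrative Python sketch of VXLAN-style encapsulation: each packet is tagged with a 24-bit Virtual Network Identifier (VNI) so that several logical networks can share one physical underlay while remaining isolated. The header layout follows the VXLAN convention, but this is a teaching model, not a vendor implementation.

```python
import struct

def encapsulate(vni: int, payload: bytes) -> bytes:
    # 8-byte VXLAN-style header: flags (1 byte), reserved (3 bytes),
    # 24-bit VNI (3 bytes), reserved (1 byte), followed by the payload.
    return struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0) + payload

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    _flags, _r1, vni, _r2 = struct.unpack("!B3s3sB", frame[:8])
    return int.from_bytes(vni, "big"), frame[8:]

# Two logical networks over the same wire: the VNI keeps them separate.
frame_a = encapsulate(10, b"tenant-A data")
frame_b = encapsulate(20, b"tenant-B data")
print(decapsulate(frame_a))  # (10, b'tenant-A data')
print(decapsulate(frame_b))  # (20, b'tenant-B data')
```

The point is the decoupling: the underlay only ever sees opaque encapsulated frames, while the VNI defines which logical network each packet belongs to.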

### Benefits of Adopting SD-WAN

The benefits of adopting SD-WAN are manifold. Firstly, it offers cost savings by reducing the need for expensive MPLS circuits and allowing the use of more cost-effective broadband connections. Secondly, it provides enhanced security through integrated encryption and advanced threat protection. Additionally, SD-WAN simplifies network management by providing a centralized dashboard that offers visibility into network traffic and performance. This simplification leads to improved agility, allowing IT teams to deploy and manage applications with greater speed and efficiency.

### The Future of Networking with SD-WAN

As businesses continue to embrace cloud services and remote work becomes increasingly prevalent, the demand for flexible, scalable, and secure networking solutions will only grow. SD-WAN is well-positioned to meet these demands, offering a future-proof solution that can adapt to the ever-changing landscape of enterprise networking. With its ability to integrate with cloud platforms and support IoT deployments, SD-WAN is paving the way for the next generation of network connectivity.

Picture This: A Personal Note

Now imagine each of these virtual WANs carrying a single application over the WAN, end-to-end rather than sitting in one location such as a server. Each individual WAN runs to the cloud or an enterprise location over secure, isolated paths with different policies and topologies. Wide Area Network (WAN) virtualization is an emerging technology that is revolutionizing how networks are designed and managed.

Note: WAN virtualization allows for decoupling the physical network infrastructure from the logical network, enabling the same physical infrastructure for multiple logical networks. It allows organizations to utilize a single physical infrastructure to create multiple virtual networks, each with unique characteristics. WAN virtualization is a core requirement enabling SD-WAN.

SD WAN Overlay & Underlay Design 

This SD-WAN tutorial will address the SD-WAN vendor’s approach to an underlay and an overlay, including the SD-WAN requirements. The underlay consists of the physical or virtual transport infrastructure; the overlay is the virtual SD-WAN network to which the applications are mapped.

SD-WAN solutions are designed to provide secure, reliable, and high-performance connectivity across multiple locations and networks. Organizations can manage their network configurations, policies, and security infrastructure with SD-WAN.

In addition, SD-WAN solutions can be deployed over any type of existing WAN infrastructure, such as MPLS, Frame Relay, and more. SD-WAN offers enhanced security features like encryption, authentication, and access control. This ensures that data is secure and confidential and that only authorized users can access the network.

Example Technology: GRE Overlay with IPsec

Diagram: GRE overlay combined with IPsec.
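As a sketch of what GRE contributes to this design, the snippet below builds the basic 4-byte GRE header from RFC 2784 (no checksum, key, or sequence options) around an inner packet. In a GRE-over-IPsec deployment, the resulting GRE packet would then be encrypted by IPsec (typically ESP) before crossing the underlay; the encryption step is not modeled here.

```python
import struct

ETHERTYPE_IPV4 = 0x0800  # protocol type of the encapsulated payload

def gre_encap(inner_packet: bytes, proto: int = ETHERTYPE_IPV4) -> bytes:
    # Basic GRE header: flags/version word of zero, then protocol type.
    return struct.pack("!HH", 0x0000, proto) + inner_packet

def gre_decap(packet: bytes) -> tuple[int, bytes]:
    _, proto = struct.unpack("!HH", packet[:4])
    return proto, packet[4:]

wrapped = gre_encap(b"original IPv4 packet")
proto, inner = gre_decap(wrapped)
print(hex(proto), inner)  # 0x800 b'original IPv4 packet'
```

GRE supplies the multiprotocol tunnel; IPsec supplies confidentiality and integrity. That division of labor is why the two are so often paired.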

Google SD WAN Cloud Hub

The integration of SD-WAN with Google Cloud takes connectivity to new heights. By deploying an SD-WAN Cloud Hub, businesses can seamlessly connect their branch networks to the cloud, leveraging the power of Google Cloud’s infrastructure.

This enables organizations to optimize network performance, reduce latency, and enhance overall user experience. The centralized management capabilities of SD-WAN further simplify network operations, allowing businesses to efficiently control traffic routing, prioritize critical applications, and ensure maximum uptime.

Seamless Integration

One of the standout aspects of SD-WAN Cloud Hub is its seamless integration with Google Cloud. Organizations can extend their on-premises network to Google Cloud, enabling them to leverage Google Cloud’s extensive services and resources. This integration empowers businesses to adopt a hybrid cloud strategy, seamlessly connecting their on-premises infrastructure with the scalability and flexibility of Google Cloud.

SD-WAN Enables

A) Performance-Based Routing

Performance-based routing is a dynamic routing technique that selects the best path for data transmission based on real-time performance metrics. Unlike traditional routing protocols that solely consider static factors like hop count, performance-based routing considers parameters such as latency, packet loss, and bandwidth availability.

- Enhanced Network Performance: Performance-based routing minimizes latency and packet loss by dynamically selecting the optimal path, improving overall network performance. This leads to faster data transfer speeds and better user experiences for applications and services.

- Efficient Bandwidth Utilization: Performance-based routing intelligently allocates network resources by diverting traffic to less congested routes. This ensures that available bandwidth is utilized optimally, preventing bottlenecks and congestion in the network.

- Redundancy and Failover: Another advantage of performance-based routing is its ability to provide built-in redundancy and failover mechanisms. By constantly monitoring performance metrics, it can automatically reroute traffic when a network link or node fails, ensuring uninterrupted connectivity.
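The selection logic described above can be sketched in a few lines. The weights, thresholds, and link metrics below are purely illustrative assumptions, not values from any product; the point is that the decision uses live metrics (latency, loss, headroom) rather than static hop counts.

```python
def score(link: dict) -> float:
    # Lower is better: penalize latency and loss, reward free capacity.
    # Weights are arbitrary illustrative choices.
    return link["latency_ms"] * 1.0 + link["loss_pct"] * 50.0 - link["free_mbps"] * 0.1

def best_path(links: list[dict], max_latency_ms: float = 150.0) -> dict:
    # Prefer links meeting the latency SLA; fall back to all links if none do.
    eligible = [l for l in links if l["latency_ms"] <= max_latency_ms] or links
    return min(eligible, key=score)

links = [
    {"name": "mpls",      "latency_ms": 20, "loss_pct": 0.0, "free_mbps": 10},
    {"name": "broadband", "latency_ms": 35, "loss_pct": 0.5, "free_mbps": 400},
    {"name": "lte",       "latency_ms": 60, "loss_pct": 2.0, "free_mbps": 50},
]
print(best_path(links)["name"])  # mpls
```

In a real SD-WAN device, these metrics come from continuous path probing, and the evaluation repeats constantly so flows can be steered away from a degrading link.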

B) Understanding DMVPN Phase 3

DMVPN Phase 3 is an advanced networking solution that provides scalable and efficient connectivity for organizations with distributed networks. Unlike its predecessors, DMVPN Phase 3 introduces the concept of Spokes connecting directly with each other, eliminating the need for traffic to traverse through the Hub. This dynamic spoke-to-spoke tunneling architecture enhances network performance and reduces latency, making it an ideal choice for modern network infrastructures.

DMVPN Phase 3 offers many advantages for organizations seeking streamlined network connectivity. First, it provides enhanced scalability, allowing for easy addition or removal of spokes without impacting the overall network. Additionally, direct spoke-to-spoke communication reduces the dependency on the Hub, resulting in improved network resiliency and reduced bandwidth consumption. Moreover, DMVPN Phase 3 supports dynamic routing protocols, enabling efficient and automated network management.

C) Securing DMVPN with IPSec

DMVPN is a Cisco proprietary solution that simplifies the deployment of VPN networks, offering scalability, flexibility, and ease of management. It utilizes multipoint GRE tunnels to establish secure connections between multiple sites, creating a virtual mesh network. This architecture eliminates the need for point-to-point tunnels between every site, reducing overhead and enhancing scalability.

IPsec, short for Internet Protocol Security, is a widely used protocol suite that provides secure communication over IP networks. With features like authentication, encryption, and data integrity, IPsec ensures confidentiality and integrity of data transmitted over the network. When combined with DMVPN, IPsec adds an additional layer of security to the virtual network, safeguarding sensitive information from unauthorized access.

DMVPN over IPsec offers numerous advantages for organizations. Firstly, it enables dynamic and on-demand connectivity, adding new sites seamlessly without manual configuration. This scalability reduces administrative overhead and streamlines network expansion. Secondly, DMVPN over IPsec provides enhanced security, ensuring that data remains confidential and protected from potential threats. Lastly, it improves network performance by leveraging multipoint connectivity, optimizing bandwidth usage, and reducing latency.

Example Product: Cisco Meraki

### Centralized Management

One of the standout features of the Cisco Meraki platform is its centralized management system. Gone are the days of configuring each device individually. With Meraki, all your network devices can be managed from a single, intuitive dashboard. This not only simplifies the administration process but also ensures that your network remains consistent and secure. The centralized dashboard provides real-time monitoring, configuration, and troubleshooting capabilities, allowing IT administrators to manage their entire network with ease and efficiency.

### Robust Security Features

Security is a top priority for any network, and Cisco Meraki excels in this area. The platform offers a comprehensive suite of security features designed to protect your network from a wide range of threats. Built-in firewall, intrusion detection, and prevention systems work seamlessly to safeguard your data. Additionally, Meraki’s advanced malware protection and content filtering ensure that harmful content is kept at bay. With automatic firmware updates, your network is always protected against the latest vulnerabilities, giving you peace of mind.

### Unparalleled Scalability

As your business grows, so does your network. Cisco Meraki is designed to scale effortlessly with your organization. Whether you are managing a small business or a large enterprise, Meraki’s cloud-based architecture allows you to add new devices and locations without the need for complex configurations or costly hardware investments. The platform supports a wide range of devices, including switches, routers, access points, and security cameras, all of which can be easily integrated into your existing network.

### Seamless Integration and Automation

Integration and automation are key to streamlining network management, and Cisco Meraki shines in these areas. The platform supports API integrations, allowing you to connect with third-party applications and services. This opens up a world of possibilities for automating routine tasks, such as device provisioning, network monitoring, and reporting. By leveraging these capabilities, businesses can reduce manual workload, minimize errors, and improve overall operational efficiency.

### Enhanced User Experience

User experience is at the heart of the Cisco Meraki platform. The user-friendly dashboard is designed with simplicity in mind, making it accessible even to those with limited technical expertise. Detailed analytics and reporting tools provide valuable insights into network performance, helping administrators make informed decisions. Additionally, the platform’s mobile app allows for on-the-go management, ensuring that your network is always within reach.

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. SD WAN Security 
  2. WAN Monitoring
  3. Zero Trust SASE
  4. Forwarding Routing Protocols

SD WAN Tutorial

Transition: The era of client-server

The design for the WAN and branch sites was conceived in the client-server era. At that time, the WAN design satisfied the applications’ needs: applications and data resided behind the central firewall in the on-premises data center. Today, we are in a different space, with hybrid IT and multi-cloud designs making applications and data distributed. Data is now omnipresent. The type of WAN and branch originating in the client-server era was not designed for cloud applications.

Hub and spoke designs:

The “hub and spoke” model was designed for client/server environments where almost all of an organization’s data and applications resided in the data center (i.e., the hub location) and were accessed by workers in branch locations (i.e., the spokes).  Internet traffic would enter the enterprise through a single ingress/egress point, typically into the data center, which would then pass through the hub and to the users in branch offices.

Push to the Cloud:

The birth of the cloud resulted in a significant shift in how we consume applications, traffic types, and network topology. There was a big push to the cloud, and almost everything was offered as a SaaS. In addition, the cloud era changed traffic patterns, as traffic goes directly to the cloud from the branch site and doesn’t need to be backhauled to the on-premise data center.

**Challenge: Hub and spoke design**

The hub and spoke model needs to be updated. Because the model is centralized, day-to-day operations may be relatively inflexible, and changes at the hub, even in a single route, may have unexpected consequences throughout the network. It may be difficult or even impossible to handle occasional periods of high demand between two spokes.

The result of the cloud acceleration is that the best access point is no longer always the central location. Why would branch sites direct all internet-bound traffic to the central HQ, causing traffic tromboning and adding latency, when it can go directly to the cloud? The hub-and-spoke design was never an efficient topology for cloud-based applications.

**Active/Active and Active/Passive**

Historically, WANs were built “active-passive,” where a branch is connected using two or more links but only the primary link is active and passing traffic. In this scenario, the backup connection only becomes active if the primary connection fails. While this might seem sensible, it is inefficient: the capacity of the backup link sits idle.

Interest in active-active routing protocols has always existed, but it was challenging to configure and expensive to implement. In addition, active/active designs with traditional routing protocols are complex to design, inflexible, and a nightmare to troubleshoot.

Convergence & Application Performance:

Convergence and application performance problems can arise from active-active WAN edge designs. For example, active-active packets that reach the other end could be out-of-order packets due to each link propagating at different speeds. Also, the remote end has to reassemble, resulting in additional jitter and delay. Both high jitter and delay are bad for network performance.

Packet reordering:

The issues arising from active-active are often known as “spray and pray.” It increases bandwidth but decreases goodput. Spraying packets down both links can result in 20% drops or packet reordering. There will also be firewall issues, as firewalls may see asymmetric routes.
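The reordering effect is easy to demonstrate. The simulation below (with illustrative, made-up delay values) sprays a packet stream alternately across two links with different propagation delays and shows the sequence the receiver actually observes.

```python
link_delay_ms = {"mpls": 20, "broadband": 45}  # illustrative delays

def spray(num_packets: int) -> list[int]:
    """Alternate packets across both links; return receiver-observed order."""
    arrivals = []
    for seq in range(num_packets):
        link = "mpls" if seq % 2 == 0 else "broadband"
        send_time = seq * 5  # one packet sent every 5 ms
        arrivals.append((send_time + link_delay_ms[link], seq))
    # Sort by arrival time: this is the order the far end sees.
    return [seq for _, seq in sorted(arrivals)]

print(spray(6))  # [0, 2, 4, 1, 3, 5] -- packets arrive out of order
```

Every packet sent on the slower link lands behind several later packets from the faster link, which is precisely why the remote end must buffer and reassemble, adding jitter and delay.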

Diagram: TCP out-of-order packets. Source: F5.

Key SD-WAN Requirements 

1: SD-WAN requirement and active-active paths

For an active-active design, the solution must be aware of the application session and be designed to eliminate asymmetric routing. In addition, you need to slice up the WAN so application flows can work efficiently over either link. SD-WAN does this. WAN designs can also be active-standby, which requires routing protocol convergence in the event of primary link failure.

Routing Protocol Convergence

Unfortunately, routing protocols are known to converge slowly. The emergence of SD-WAN technologies with multi-path capabilities combined with the ubiquity of broadband has made active-active highly attractive and something any business can deploy and manage quickly and easily.

SD-WAN solution enables the creation of virtual overlays that bond multiple underlay links. Virtual overlays would allow enterprises to classify and categorize applications based on their unique service level requirements and provide fast failover should an underlay link experience congestion, a brownout, or an outage.

Example: BFD improving convergence

With traditional routing, regardless of the mechanism used to speed up convergence and failure detection, several convergence steps need to be carried out:

a) Detecting the topology change,

b) Notifying the rest of the network about the change,

c) Calculating the new best path, and

d) Switching to the new best path.
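Step (a) usually dominates with default timers, which is where BFD helps. The arithmetic below compares typical dead-timer math for OSPF defaults versus BFD-style sub-second hellos (the intervals are common defaults, not mandated values).

```python
def detection_time(hello_interval_s: float, multiplier: int) -> float:
    # A neighbor is declared down after `multiplier` consecutive
    # hellos are missed, so detection takes interval * multiplier.
    return hello_interval_s * multiplier

ospf_default = detection_time(10.0, 4)  # 10 s hellos, 40 s dead interval
bfd_fast = detection_time(0.05, 3)      # 50 ms hellos, 3x multiplier
print(f"OSPF: {ospf_default}s, BFD: {bfd_fast * 1000:.0f}ms")
```

Cutting detection from tens of seconds to a few hundred milliseconds is why BFD (and SD-WAN path probing built on the same principle) makes active-active designs practical.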

Traditional WAN protocols route down one path and, by default, have no awareness of what’s happening at the application level. For this reason, there have been many attempts to enhance the WAN’s behavior.

Diagram: Example convergence time with OSPF. Source: INE.

2: SD-WAN requirements: Flexible topologies

For example, using deep packet inspection (DPI), we can have Voice over IP traffic go over MPLS. Here, the SD-WAN will look for the Real-time Transport Protocol (RTP) and Session Initiation Protocol (SIP). Less critical applications can go over the Internet, with MPLS reserved for specific apps.

As a result, best-effort traffic is pinned to the Internet, and only critical apps get an SLA and go on the MPLS path. Now we better utilize the transports, and circuits never sit dormant. With SD-WAN, we use all the bandwidth available to ensure an optimized experience.
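A policy of this kind can be sketched as a simple classification-and-mapping table. The application names, ports, and policy entries below are illustrative assumptions standing in for a real DPI engine, not any vendor’s schema.

```python
POLICY = {
    "voip": "mpls",          # RTP/SIP: needs the SLA path
    "saas": "internet",      # direct-to-cloud breakout
    "best-effort": "internet",
}

def classify(flow: dict) -> str:
    # A toy stand-in for DPI: real engines inspect payloads, not just ports.
    if flow.get("dst_port") in (5060, 5061):  # SIP signalling ports
        return "voip"
    if flow.get("dst_port") == 443 and flow.get("sni", "").endswith("office.com"):
        return "saas"
    return "best-effort"

def select_transport(flow: dict) -> str:
    return POLICY[classify(flow)]

print(select_transport({"dst_port": 5060}))                          # mpls
print(select_transport({"dst_port": 443, "sni": "www.office.com"}))  # internet
```

The key idea is the indirection: applications map to service classes, and classes map to transports, so changing the policy never requires touching per-site routing.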

The SD-WAN’s value is that the solution tracks network and path conditions in real time, revealing performance issues as they occur. Then, it dynamically redirects data traffic to the following available path.

Then, when the network recovers to its normal state, the SD-WAN solution can redirect traffic back to its original path. Therefore, the effects of network degradation, such as brownouts and soft failures, can be minimized.

Diagram: VPN segmentation. Source: Cisco.

3: SD-WAN requirements: Encryption key rotation

Data security has never been a more important consideration than it is today. Therefore, businesses and other organizations must take robust measures to keep data and information safely under lock and key. Encryption keys must be rotated regularly (the standard interval is every 90 days) to reduce the risk of compromised data security.

However, regular VPN-based encryption key rotation can be complicated and disruptive, often requiring downtime. SD-WAN can offer automatic key rotation, allowing network administrators to pre-program rotations without manual intervention or system downtime.
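The rotation decision itself is simple to model. The sketch below shows a key manager that automatically replaces the data-plane key once it reaches the 90-day standard interval mentioned above; a real SD-WAN controller additionally coordinates the rekey across the whole fabric with no downtime, which this toy model does not attempt.

```python
from datetime import datetime, timedelta
import secrets

ROTATION_INTERVAL = timedelta(days=90)  # the standard interval noted above

class KeyManager:
    def __init__(self, now: datetime):
        self._rotate(now)

    def _rotate(self, now: datetime) -> None:
        self.key = secrets.token_bytes(32)  # fresh 256-bit key
        self.created = now

    def current_key(self, now: datetime) -> bytes:
        # Rotate transparently once the key ages out; callers never
        # need to schedule the change themselves.
        if now - self.created >= ROTATION_INTERVAL:
            self._rotate(now)
        return self.key

km = KeyManager(datetime(2024, 1, 1))
k1 = km.current_key(datetime(2024, 2, 1))   # within 90 days: same key
k2 = km.current_key(datetime(2024, 4, 15))  # past 90 days: new key
print(k1 == k2)  # False
```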

4: SD-WAN requirements: Push to the cloud 

Another critical feature of SD-WAN technology is cloud breakout. This lets you connect branch office users to cloud-hosted applications directly and securely, eliminating the inefficiencies of backhauling cloud-destined traffic through the data center. Given the ever-growing importance of SaaS and IaaS services, efficient and reliable access to the cloud is crucial for many businesses and other organizations. By simplifying how branch traffic is routed, SD-WAN makes setting up breakouts quicker and easier.

**The changing perimeter location**

Users are no longer positioned in one location with corporate-owned static devices. Instead, they are dispersed; additional latency degrades application performance when connecting to central areas. Applications and network devices can be optimized, but the only solution is to shorten the link by moving to cloud-based applications. There is a huge push and a rapid flux for cloud-based applications. Most are now moving away from on-premise in-house hosting to cloud-based management.

**SaaS-based Applications**

The ready-made global footprint enables the usage of SaaS-based platforms that negate the drawbacks of dispersed users tromboning to a central data center to access applications. Logically positioned cloud platforms are closer to the mobile user. In addition, hosting these applications on the cloud is far more efficient than making them available over the public Internet.

5: SD-WAN requirements: Decentralization of traffic

A lot of traffic is now decentralized from the central data center to remote branch sites. Many branches do not run high bandwidth-intensive applications. These types of branch sites are known as light edges. Despite the traffic change, the traditional branch sites rely on hub sites for most security and network services.

The branch sites should connect to the cloud applications directly over the Internet without tromboning traffic to data centers for Internet access or security services. An option should exist to extend the security perimeter into the branch sites without requiring expensive onsite firewalls and IPS/IDS. SD-WAN builds a dynamic security fabric without the appliance sprawl of multiple security devices and vendors.

**The ability to service chain traffic** 

SD-WAN also enables service chaining, which allows organizations to reroute their data traffic through one or more services, including intrusion detection and prevention devices or cloud-based security services, thereby decluttering their branch office networks.

Organizations can automate the handling of particular types of traffic flows and assemble connected network services into a single chain.

6: SD-WAN requirements: Bandwidth-intensive applications 

Exponential growth in demand for high-bandwidth applications such as multimedia in cellular networks has triggered the need to develop new technologies capable of providing the required high-bandwidth, reliable links in wireless environments. Video streaming is the biggest user of internet bandwidth—more than half of total global traffic. The Cartesian study confirms historical trends reflecting consumer usage that remains highly asymmetric, as video streaming remains the most popular.

**Richer and hungry applications**

Richer applications, multimedia traffic, and growth in the cloud application consumption model drive the need for additional bandwidth. Unfortunately, the congestion leads to packet drops, ultimately degrading application performance and user experience.

SD-WAN offers flexible bandwidth allocation, so you don’t manually allocate bandwidth for specific applications. Instead, SD-WAN allows you to classify applications and specify a particular service level requirement. This way, you can ensure your set-up is better equipped to run smoothly, minimizing the risk of glitchy and delayed performance on an audio conference call.

7: SD-WAN requirements: Organic growth 

We also have organic business growth, a big driver of additional bandwidth requirements. The challenge is that existing network infrastructures are static and cannot respond adequately to this growth in a reasonable period. The last mile of MPLS locks you in, destroying agility. Circuit lead times impede the organization’s productivity and create an overall lag.

A WAN solution should be simple: to serve the new era of applications, we just increase link capacity by buying more bandwidth. In practice, life is more complex. The WAN is an expensive part of the network, and employing link oversubscription to reduce congestion is too costly.

Bandwidth is expensive to cater to new application requirements not met by the existing TDM-based MPLS architectures. At the same time, feature-rich MPLS is expensive for relatively low bandwidth. You will need more bandwidth to beat latency.

On the more traditional side, MPLS and private ethernet lines (EPLs) can range in cost from $700 to $10,000 per month, depending on bandwidth size and distance of the link itself. Some enterprises must also account for redundancies at each site as uptime for higher-priority sites comes into play. Cost becomes exponential when you have a large number of sites to deploy.

8: SD-WAN requirements: Limitations of protocols 

We already mentioned some problems with routing protocols, but leaving IPsec to default raises challenges. IPSec architecture is point-to-point, not site-to-site. Therefore, it does not natively support redundant uplinks. Complex configurations and potentially additional protocols are required when sites have multiple uplinks to multiple providers. 

Left to its defaults, IPsec is not abstracted, and one session cannot be sent over multiple uplinks. This causes challenges with transport failover and path selection. Secure tunnels should be brought up and torn down immediately, and new sites should be incorporated into a secure overlay without much delay or manual intervention.
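One common way SD-WAN solutions sidestep this limitation is flow-aware tunnel selection: hash the 5-tuple so each session sticks to a single uplink (avoiding reordering), and deterministically remap flows to a surviving tunnel on failure. The sketch below illustrates the idea only; tunnel names and addresses are made up, and real implementations track tunnel liveness with probes.

```python
import hashlib

def pick_tunnel(five_tuple: tuple, tunnels_up: dict) -> str:
    # Hash the flow identity onto the sorted list of live tunnels, so the
    # same flow always lands on the same tunnel while it stays up.
    alive = sorted(t for t, up in tunnels_up.items() if up)
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return alive[int.from_bytes(digest[:4], "big") % len(alive)]

flow = ("10.1.1.5", "172.16.0.9", 6, 443, 51812)  # src, dst, proto, dport, sport
tunnels = {"ipsec-over-mpls": True, "ipsec-over-broadband": True}

primary = pick_tunnel(flow, tunnels)
tunnels[primary] = False           # the primary transport fails
backup = pick_tunnel(flow, tunnels)
print(primary, "->", backup)       # session remapped to the surviving tunnel
```

Because the mapping depends only on the flow identity and the set of live tunnels, failover requires no per-flow state synchronization between sites.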

9: SD-WAN requirements: Internet of Things (IoT) 

As millions of IoT devices come online, how do we further segment and secure this traffic without complicating the network design? Many dumb IoT devices will require communication with the IoT platform in a remote location. Therefore, will there be increased signaling traffic over the WAN? 

Security and bandwidth consumption are vital issues concerning the introduction of IP-enabled objects. Although encryption is a great way to prevent hackers from accessing data, it is also one of the leading IoT security challenges.

Most IoT devices lack the storage and processing capabilities found on a traditional computer. The result is increased attacks, where hackers can easily manipulate the algorithms designed for protection. Also, weak credentials and login details leave nearly all IoT devices vulnerable to password hacking and brute force. Any company that uses factory-default credentials on its devices places its business, assets, customers, and valuable information at risk of a brute-force attack.

10: SD-WAN requirements: Visibility

Many service provider challenges stem from a lack of visibility into customer traffic. The lack of granular detail on traffic profiles leads to expensive over-provisioning of bandwidth and link resilience. In addition, upgrades at both the packet and optical layers often require complete traffic visibility for justification.

In case of an unexpected traffic spike, many networks are left at half capacity. As a result, much money is spent on link underutilization that should be spent on innovation. This combination of underutilization and oversubscription is due to a lack of visibility.

**SD-WAN Use Case**

DMVPN: Exploring Single Hub Dual Cloud Architecture

Single Hub Dual Cloud architecture takes the traditional DMVPN setup to the next level. Instead of relying on a single cloud (service provider) for connectivity, this configuration utilizes two separate clouds, providing redundancy and improved performance. The hub device is the central point of contact for all remote sites, ensuring seamless communication between them.

1. Enhanced Redundancy: The Single Hub Dual Cloud configuration offers built-in redundancy with two separate clouds. If one cloud experiences downtime or connectivity issues, the network seamlessly switches to the alternate cloud, ensuring uninterrupted communication.

2. Improved Performance: By distributing the load across two clouds, Single Hub Dual Cloud architecture can handle higher traffic volumes efficiently. This leads to improved network performance and reduced latency for end-users.

3. Scalability: This architecture allows for easy scalability as new sites can be seamlessly added to the network without disrupting the existing infrastructure. The hub device manages the routing and connectivity, simplifying network management and reducing administrative overhead.

Summary: SD WAN Tutorial

SD-WAN, or Software-Defined Wide Area Networks, has emerged as a game-changing technology in the realm of networking. This tutorial delved into SD-WAN fundamentals, its benefits, and how it revolutionizes traditional WAN infrastructures.

Understanding SD-WAN

SD-WAN is an innovative approach to networking that simplifies the management and operation of a wide area network. It utilizes software-defined principles to abstract the underlying network infrastructure and provide centralized control, visibility, and policy-based management.

Key Features and Benefits

One of the critical features of SD-WAN is its ability to optimize network performance by intelligently routing traffic over multiple paths, including MPLS, broadband, and LTE. This enables organizations to leverage cost-effective internet connections without compromising performance or reliability. Additionally, SD-WAN offers enhanced security measures, such as encrypted tunneling and integrated firewall capabilities.

Deployment and Implementation

Implementing SD-WAN requires careful planning and consideration. This section will explore the different deployment models, including on-premises, cloud-based, and hybrid approaches. We will discuss the necessary steps in deploying SD-WAN, from initial assessment and design to configuration and ongoing management.

Use Cases and Real-World Examples

SD-WAN has gained traction across various industries due to its versatility and cost-saving potential. This section will showcase notable use cases, such as retail, healthcare, and remote office connectivity, highlighting the benefits and outcomes of SD-WAN implementation. Real-world examples will provide practical insights into the transformative capabilities of SD-WAN.

Future Trends and Considerations

As technology continues to evolve, staying updated on the latest trends and considerations in the SD-WAN landscape is crucial. This section will explore emerging concepts, such as AI-driven SD-WAN and integrating SD-WAN with edge computing and IoT technologies. Understanding these trends will help organizations stay ahead in the ever-evolving networking realm.

Conclusion:

In conclusion, SD-WAN represents a paradigm shift in how wide area networks are designed and managed. Its ability to optimize performance, ensure security, and reduce costs has made it an attractive solution for organizations of all sizes. By understanding the fundamentals, exploring deployment options, and staying informed about the latest trends, businesses can leverage SD-WAN to unlock new possibilities and drive digital transformation.


Matt Conran | Network World

Hello, I have created a Network World column and will be releasing a few blogs per month. Kindly visit the following link to view my full profile and recent blogs – Matt Conran Network World.

The list of individual blogs can be found here:

“In this day and age, demands on networks are coming from a variety of sources, internal end-users, external customers and via changes in the application architecture. Such demands put pressure on traditional architectures.

To deal effectively with these demands, the network domain must become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point is that networks still depend on manual operations and lack fabric-wide automation. This must be addressed if organizations are to implement new products and services ahead of the competition.

So, to evolve, to be in line with the current times and use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.”

“There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then Barracuda highlighted SASE in a recent PR update and Zscaler also discussed it in their earnings call. Most recently, Cato Networks announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.

Today, enterprises have upgraded their portfolios and, as a consequence, the network must also be enhanced. What we are witnessing is cloud, mobility, and edge, which have resulted in increased pressure on the legacy network and security architecture. Enterprises are transitioning all users, applications, and data located on-premise, to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.”

“Microsoft has introduced a new virtual WAN as a competitive differentiator and is getting enough traction that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of this technology. So I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research, to discuss. The following is a summary of our discussion.

But before we proceed, let’s gain some understanding of cloud connectivity.

Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, if you were an enterprise, you would connect to what’s known as a cloud service provider (CSP). However, over the last 10 years, many providers like Equinix have started to offer carrier-neutral colocations. Now, there is the opportunity to meet a variety of cloud companies in a carrier-neutral colocation. On the other hand, cloud connectivity also has certain limitations.”

“Actions speak louder than words. Reliable actions build lasting trust in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in the house, correct? Now, what if that wall is dismantled? You might start to feel your security is under threat. Anyone could have easy access to your house.

In the same way, with traditional security products: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust can even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on SDP (software-defined perimeter).

Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources. It is only after sufficient trust has been established and the necessary controls are passed, that the access is granted, providing a path to the requested resource. The access path to the resource is designed differently, depending on whether it’s a client or service-initiated software-defined perimeter solution.”

“Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. Now there is a significant need to design and integrate devices from multiple vendors and employ new technologies, such as virtualization and cloud services.

Therefore, every network is a unique snowflake. You will never come across two identical networks. The products offered by the vendors act as the building blocks for engineers to design solutions that work for them. If we all had a simple and predictable network, this would not be a problem. But there are no global references to follow, and designs vary from organization to organization. This leads to network variation even while offering similar services.

It is estimated that over 60% of users consider their IT environment to be more complex than it was two years ago. We can only assume that network complexity is going to increase in the future.”

We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from the management’s perspective, is now redundant. Today, the users and data are omnipresent.

Customers’ expectations have surged because of this evolution. There is now an increased expectation of high-quality service and a decrease in customers’ patience. In the past, one could patiently wait 10 hours to download content. But this is certainly not the scenario at the present time. Nowadays we have high expectations and high-performance requirements, but on the other hand, there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, bufferbloat and a list of other performance-related problems that I wrote about on Network Insight. [Disclaimer: the author is employed by Network Insight.]

Also, the internet is growing at an accelerated rate. By the year 2020, the internet is expected to reach 1.5 gigabytes of traffic per day per person. In the coming times, the world of the Internet of Things (IoT), driven by objects, will far supersede these data figures. For example, a connected airplane will generate around 5 terabytes of data per day. This spiraling level of volume requires a new approach to data management and forces us to re-think how we deliver applications.”

“Deploying zero trust software-defined perimeter (SDP) architecture is not about completely replacing virtual private network (VPN) technologies and firewalls. By and large, the firewall demarcation points that mark the inside and outside are not going away anytime soon. The VPN concentrator will also have its position for the foreseeable future.

A rip and replace is a very aggressive deployment approach regardless of the age of the technology. And while SDP is new, it should be approached with care when choosing the correct vendor. An SDP adoption should be a slow migration process as opposed to a one-off rip and replace.

As I wrote about on Network Insight [Disclaimer: the author is employed by Network Insight], while SDP is a disruptive technology, after discussing with numerous SDP vendors, I have discovered that the current SDP landscape tends to be based on specific use cases and projects, as opposed to a technology that has to be implemented globally. To start with, you should be able to implement SDP for only certain user segments.”

“Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.

More often than not, the fixed perimeter consists of a number of network and security appliances, thereby creating a service-chained stack and resulting in appliance sprawl. Typically, the appliances that a user may need to pass through to get to the internal LAN vary. But generally, the stack would consist of global load balancers, an external firewall, a DDoS appliance, a VPN concentrator, an internal firewall and eventually the LAN segments.

The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it’s just a matter of time. Someone with enough skill will eventually get through.”

“In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud-presence across on-premise data centers and remote site locations.

The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.

A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more related aspects.”

“As the cloud service providers and search engines started with the structuring process of their business, they quickly ran into the problems of managing the networking equipment. Ultimately, after a few rounds of getting the network vendors to understand their problems, these hyperscale network operators revolted.

Primarily, what the operators were looking for was a level of control in managing their network which the network vendors couldn’t offer. The revolution burned the path that introduced open networking and network disaggregation to the world of networking. Let us first learn about disaggregation, followed by open networking.”

“I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.

The first wave signifies that networking is moving to software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially evident in the SD-WAN market rush.

Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.”

“BGP (Border Gateway Protocol) is considered the glue of the internet. If we view through the lens of farsightedness, however, there’s a question that still remains unanswered for the future. Will BGP have the ability to route on the best path versus the shortest path?

There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as sending out pings to monitor the network and then modifying BGP attributes, such as AS-path prepending, to make BGP do performance-based routing (PBR). However, this falls short in a number of ways.

The problem with BGP is that it’s not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance.”
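The gap described above can be shown with a toy comparison. This is an illustrative sketch only (the AS numbers and latency figures are invented): BGP prefers the shortest AS path, which need not be the best-performing path.

```python
# Toy comparison: BGP's shortest-AS-path choice versus a latency-aware
# choice. All routes and measurements are hypothetical.

routes = [
    {"via": "ISP-A", "as_path": ["65001", "65010"],          "latency_ms": 120},
    {"via": "ISP-B", "as_path": ["65002", "65020", "65030"], "latency_ms": 40},
]

# BGP (all else equal) prefers the shortest AS path.
bgp_choice = min(routes, key=lambda r: len(r["as_path"]))

# A performance-aware selector would prefer the measured best path.
perf_choice = min(routes, key=lambda r: r["latency_ms"])

# BGP picks ISP-A (2 hops) even though ISP-B is 80 ms faster.
```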

“The transformation to the digital age has introduced significant changes to the cloud and data center environments. This has compelled organizations to innovate more quickly than ever before. This, however, brings with it both advantages and disadvantages.

The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the digital age, then ultimately bad actors will become a hazard. Therefore, the organizations must move to a zero-trust environment: default deny, with least privilege access. In today’s evolving digital world this is the primary key to success.

Ideally, a comprehensive solution must provide protection across all platforms including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time, therefore we need to equip our architecture with zero-trust.”

“With the introduction of cloud, BYOD, IoT, and virtual offices scattered around the globe, the traditional architectures not only hold us back in terms of productivity but also create security flaws that leave gaps for compromise.

The network and security architectures that are commonly deployed today are not fit for today’s digital world. They were designed for another time, a time of the past. This could sound daunting…and it indeed is.”

“The Internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), but the current communication model is still focused on the “where.”

The Internet has evolved to be dominated by content distribution and retrieval. As a matter of fact, networking protocols still focus on the connection between hosts, which surfaces many challenges.

The most obvious solution is to replace the “where” with the “what” and this is what Named Data Networking (NDN) proposes. NDN uses named content as opposed to host identifiers as its abstraction.”
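NDN's core abstraction of routing by content name can be sketched with a longest-prefix match over names. This is a minimal, hypothetical illustration (the content store and names are invented), not an implementation of any NDN software.

```python
# Minimal sketch of serving "what" rather than "where": look up content
# by the longest stored prefix of its name. Names are hypothetical.

content_store = {
    "/videos/lectures/ndn": b"lecture bytes",
    "/videos": b"index page",
}

def lookup(store, name):
    """Return content for the longest stored prefix of `name`, else None."""
    parts = name.strip("/").split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/" + "/".join(parts[:i])
        if prefix in store:
            return store[prefix]
    return None

# A request for a segment matches the stored lecture by name prefix,
# regardless of which host holds the bytes.
data = lookup(content_store, "/videos/lectures/ndn/segment1")
```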

“Today, connectivity to the Internet is easy; you simply get an Ethernet driver and hook up the TCP/IP protocol stack. Then dissimilar network types in remote locations can communicate with each other. However, before the introduction of the TCP/IP model, networks were manually connected but with the TCP/IP stack, the networks can connect themselves up, nice and easy. This eventually caused the Internet to explode, followed by the World Wide Web.

So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node by using a point-to-point communication channel with IP addresses as identifiers for the source and destination. Ideally, a network ships the data bits. You can either name the locations to ship the bits to or name the bits themselves. Today’s TCP/IP protocol architecture picked the first option. Let’s discuss the second option later in the article.

It essentially follows the communication model used by the circuit-switched telephone networks. We migrated from phone numbers to IP addresses and replaced circuit-switching with packet-switching and datagram delivery. But the point-to-point, location-based model stayed the same. This made sense in the old days, but not today, as the view of the world has changed considerably. Computing and communication technologies have advanced rapidly.”

“Technology is always evolving. However, in recent times, two significant changes have emerged in the world of networking. Firstly, networking is moving to software that can run on commodity off-the-shelf hardware. Secondly, we are witnessing the introduction and use of many open source technologies, removing the barriers to entry for new product innovation and rapid market access.

Networking is the last bastion within IT to adopt open source. Consequently, the networking industry has suffered in terms of slow innovation and high costs. Every other element of IT has seen radical technology and cost model changes over the past 10 years. However, IP networking has not changed much since the mid-’90s.

When I became aware of these trends, I decided to sit with Sorell Slaymaker to analyze the evolution and determine how it will inspire the market in the coming years.”

“Ideally, meeting the business objectives of speed, agility, and cost containment boils down to two architectural approaches: the legacy telco versus the cloud-based provider.

Today, the wide area network (WAN) is a vital enterprise resource. Its uptime, often targeting availability of 99.999%, is essential to maintain the productivity of employees and partners and also for maintaining the business’s competitive edge.

Historically, enterprises had two options for WAN management models — do it yourself (DIY) and a managed network service (MNS). Under the DIY model, the IT networking and security teams build the WAN by integrating multiple components including MPLS service providers, internet service providers (ISPs), edge routers, WAN optimizer, and firewalls.

These teams are responsible for keeping that infrastructure current and optimized. They configure and adjust the network for changes, troubleshoot outages and ensure that the network is secure. Since this is not a trivial task, many organizations have switched to an MNS. The enterprises outsource the buildout, configuration and ongoing management, often to a regional telco.”

“To undergo the transition from legacy to cloud-native application environments you need to employ zero trust.

Enterprises operating in the traditional monolithic environment may have strict organizational structures. As a result, the requirement for security may restrain them from transitioning to a hybrid or cloud-native application deployment model.

In spite of the obvious difficulties, the majority of enterprises want to take advantage of cloud-native capabilities. Today, most entities are considering or evaluating cloud-native to enhance their customers’ experience. In some cases, it is the ability to draw richer customer market analytics or to provide operational excellence.

Cloud-native is a key strategic agenda that allows customers to take advantage of many new capabilities and frameworks. It enables organizations to build and evolve going forward to gain an edge over their competitors.”

“Domain name system (DNS) over transport layer security (TLS) adds an extra layer of encryption, but in what way does it impact your IP network traffic? The additional layer of encryption means that controlling what’s happening over the network is likely to become challenging.

Most noticeably, it will prevent ISPs and enterprises from monitoring the user’s site activity and will also have negative implications for both wide area network (WAN) optimization and SD-WAN vendors.

During a recent call with Sorell Slaymaker, we rolled back in time and discussed how we got here, to a world that will soon be fully encrypted. We started with SSL 1.0, which was the original version of HTTPS as opposed to the non-secure HTTP. It had many security vulnerabilities. Consequently, the protocol then evolved from SSL 1.1 to TLS 1.2.”
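DNS over TLS (RFC 7858) does not change the DNS message itself: it carries the same message used for DNS over TCP, with its 2-byte length prefix, inside a TLS session to port 853. The sketch below builds such a frame with only the standard library; no network I/O is performed, and the transaction ID is an arbitrary example value.

```python
# Build a DNS query and the length-prefixed frame that DNS over TCP/TLS
# (RFC 7858) carries. Only the wire format is shown; nothing is sent.
import struct

def build_query(qname, txn_id=0x1234):
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack("!HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

msg = build_query("example.com")
frame = struct.pack("!H", len(msg)) + msg  # the 2-byte TCP/TLS length prefix
```

Over DoT, `frame` would simply be written into a TLS socket connected to a resolver's port 853, which is why middleboxes see only ciphertext.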

“Delivering global SD-WAN is very different from delivering local networks. Local networks offer complete control to the end-to-end design, enabling low-latency and predictable connections. There might still be blackouts and brownouts but you’re in control and can troubleshoot accordingly with appropriate visibility.

With global SD-WANs, though, managing the middle-mile/backbone performance and managing the last-mile are, well shall we say, more challenging. Most SD-WAN vendors don’t have control over these two segments, which affects application performance and service agility.

In particular, an issue that SD-WAN appliance vendors often overlook is the management of the last-mile. With multiprotocol label switching (MPLS), the provider assumes the responsibility, but this is no longer the case with SD-WAN. Getting the last-mile right is challenging for many global SD-WANs.”

“Today’s threat landscape consists of skilled, organized and well-funded bad actors. They have many goals including exfiltrating sensitive data for political or economic motives. To combat these multiple threats, the cybersecurity market is required to expand at an even greater rate.

IT leaders must evolve their security framework if they want to stay ahead of cyber threats. The evolution in security we are witnessing tilts towards the Zero-Trust model and the software-defined perimeter (SDP), also called a “Black Cloud”. The principle of its design is based on the need-to-know model.

The Zero-Trust model says that anyone attempting to access a resource must be authenticated and authorized first. Users cannot connect to anything, since unauthorized resources are invisible, left in the dark. For additional protection, the Zero-Trust model can be combined with machine learning (ML) to discover risky user behavior. It can also be applied for conditional access.”
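The default-deny rule in the excerpt above can be sketched as a tiny policy check. This is a hypothetical illustration (the users, resources, and entries are invented): access is granted only when a request passes authentication and an explicit least-privilege authorization entry.

```python
# Hypothetical zero-trust policy check: default deny, allow only after
# authentication AND an explicit authorization entry.

allowed = {("alice", "crm"), ("bob", "wiki")}  # least-privilege entries
authenticated = {"alice", "bob"}               # identities proven so far

def access(user, resource):
    if user not in authenticated:        # must authenticate first
        return "deny"
    if (user, resource) not in allowed:  # then be explicitly authorized
        return "deny"
    return "allow"                       # nothing is implicitly trusted
```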

“There are three types of applications: applications that manage the business, applications that run the business and miscellaneous apps.

A security breach or performance related issue for an application that runs the business would undoubtedly impact the top-line revenue. For example, an issue in a hotel booking system would directly affect the top-line revenue as opposed to an outage in Office 365.

It is a general assumption that cloud deployments would suffer from business-impacting performance issues due to the network. The objective is to have applications within 25 ms (one-way) of the users who use them. However, too many network architectures backhaul the traffic, forcing it to traverse from a private network to the public internet.”
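The 25 ms objective mentioned above makes the cost of backhauling easy to quantify. A back-of-the-envelope sketch, with all latency figures invented for illustration:

```python
# Hypothetical one-way latencies, in milliseconds.
direct_ms  = 18   # branch office -> nearby cloud region, direct breakout
to_hub_ms  = 30   # branch office -> distant data-center hub
hub_out_ms = 22   # hub -> cloud region

backhaul_ms = to_hub_ms + hub_out_ms  # traffic hairpinned through the hub

meets_target_direct   = direct_ms <= 25
meets_target_backhaul = backhaul_ms <= 25  # 52 ms: the objective is missed
```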

“Back in the early 2000s, I was the sole network engineer at a startup. By day, my role included managing four floors and 22 European locations packed with different vendors and servers between three companies. In the evenings, I administered the largest enterprise streaming network in Europe with a group of highly skilled staff.

Since we were an early startup, combined roles were the norm. I’m sure that most of you who joined as young engineers in such situations can understand how I felt back then. However, it was a good experience, so I battled through it. To keep my evenings stress-free and without any IT calls, I had to design in as much high availability (HA) as I possibly could. After all, all the interesting technological learning was in the second part of my day, working with content delivery mechanisms and complex routing. All of which came back to me when I read a recent post on Cato Networks’ self-healing SD-WAN for global enterprise networks.

Cato is enriching the self-healing capabilities of Cato Cloud. Rather than the enterprise having the skill and knowledge to think about every type of failure in an HA design, the Cato Cloud now heals itself end-to-end, ensuring service continuity.”

While computing, storage, and programming have changed dramatically and become simpler and cheaper over the last 20 years, IP networking has not. IP networking is still stuck in the mid-1990s.

Realistically, when I look at ways to upgrade or improve a network, the approach falls into two separate buckets. One is the tactical move and the other is strategic. For example, when I look at IPv6, I see this as a tactical move. There aren’t many business value-adds.

In fact, there are downsides, such as additional overhead and minimal internetworking QoS between IPv4 and IPv6, with zero application awareness and a continued lack of security. I do not intend to say that one should not upgrade to IPv6; it does give you more IP addresses (if you need them) and better multicast capabilities, but it’s a tactical move.

It was about 20 years ago when I plugged my first Ethernet cable into a switch. It was for our new chief executive officer. Little did she know that she was about to share her traffic with most others on the first floor. As a network engineer at that time, I had five floors to look after.

Having a few virtual LANs (VLANs) per floor was a common design practice in those traditional days. Essentially, a couple of broadcast domains per floor were deemed OK. With the VLAN-based approach, we used to give access to different people on the same subnet. Even though people worked at different levels, if they were in the same subnet, they were all treated the same.

The web application firewall (WAF) issue didn’t seem like a big deal to me until I started to dig deeper into the ongoing discussion in this field. It generally seems that vendors are trying to convince customers, and themselves, that everything is going smoothly and that there is no problem. In reality, however, customers don’t buy it anymore, and the WAF industry is under major pressure as it constantly fails from the customer quality perspective.

There have also been red flags raised about the use of runtime application self-protection (RASP) technology. There is now a trend to embed the mitigation/defense logic into the application and compile it within the code. Runtime application self-protection is considered a shortcut to securing software that is also compounded by performance problems. It seems to be a desperate attempt to replace WAFs, as no one really likes to mix a “security appliance” into the application code, which is exactly what the RASP vendors are currently offering to their customers. Nevertheless, some vendors are adopting the RASP technology.

“John Kindervag, a former analyst from Forrester Research, was the first to introduce the Zero-Trust model back in 2010. The focus then was more on the application layer. However, once I heard that Sorell Slaymaker from Techvision Research was pushing the topic at the network level, I couldn’t resist giving him a call to discuss the generals on Zero Trust Networking (ZTN). During the conversation, he shone a light on numerous known and unknown facts about Zero Trust Networking that could prove useful to anyone.

The traditional world of networking started with static domains. The classical network model divided clients and users into two groups – trusted and untrusted. The trusted are those inside the internal network, the untrusted are external to the network, which could be either mobile users or partner networks. To recast the untrusted to become trusted, one would typically use a virtual private network (VPN) to access the internal network.”

“Over the last few years, I have been sprawled in so many technologies that I have forgotten where my roots began in the world of the data center. Therefore, I decided to delve deeper into what’s prevalent and headed straight to Ivan Pepelnjak’s Ethernet VPN (EVPN) webinar hosted by Dinesh Dutt. I knew of the distinguished Dinesh since he was the chief scientist at Cumulus Networks, and for me, he is a leader in this field. Before reading his book on EVPN, I decided to give Dinesh a call to exchange our views about the beginning of EVPN. We talked about the practicalities and limitations of the data center. Here is an excerpt from our discussion.”

“If you still live in a world of the script-driven approach for both service provider and enterprise networks, you are going to reach limits. There is only so far you can go alone. The approach creates a gap: it lacks modeling and a database at a higher layer. Production-grade service provider and enterprise networks require a production-grade automation framework.

In today’s environment, the network infrastructure acts as the core centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially. In the present day, it largely influences the critical business functions for both the service provider and enterprise environments.”

“At the present time, there is a remarkable trend for application modularization that splits the large, hard-to-change monolith into a focused, cloud-native microservices architecture. The monolith keeps much of its state in memory and replicates it between instances, which makes it hard to split and scale. Scaling up can be expensive, and scaling out requires replicating the state and the entire application, rather than only the parts that need to scale.

Microservices, in comparison, separate the logic from the state. This separation enables the application to be broken apart into a number of smaller, more manageable units, making them easier to scale. Therefore, a microservices environment consists of multiple services communicating with each other. All the communication between services is initiated and carried out with network calls, and services are exposed via application programming interfaces (APIs). Each service comes with its own purpose that serves a unique business value.”
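The logic/state separation described above can be sketched in a few lines. This is a hypothetical illustration (the handler and store are invented, with a dict standing in for a shared database or cache): because the handler holds no state of its own, any number of identical instances can serve requests and scale out independently.

```python
# Sketch of stateless service logic with state kept in an external store.
# A dict stands in for a shared database/cache.

state_store = {}  # shared state, external to the service instances

def handle_order(store, customer, item):
    """Stateless handler: any instance can serve any request."""
    store.setdefault(customer, []).append(item)
    return {"customer": customer, "items": list(store[customer])}

# Two calls, conceptually served by two different instances of the same
# service, share state only through the store.
r1 = handle_order(state_store, "acme", "widget")
r2 = handle_order(state_store, "acme", "gadget")
```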

“When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls; internal and external to the wide area network (WAN). Such layout was good enough in those days.

I remember the time when connected devices were corporate-owned. Everything was hard-wired and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time consuming but also error-prone.

There was a complete lack of visibility and global policy throughout the network, and every morning I relied on the Multi Router Traffic Grapher (MRTG) to manually inspect traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life”. Have you ever heard of the 20-year-old PC that no one knows the location of, but that still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, making perimeter-level firewalling alone insufficient.”
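The morning MRTG ritual described above amounts to baseline-deviation checking, which is easy to automate. A minimal sketch, with made-up traffic figures and a simple three-sigma threshold as the assumed rule:

```python
# Flag samples deviating from the baseline by more than three standard
# deviations. All traffic figures are hypothetical.
from statistics import mean, stdev

baseline_mbps = [40, 42, 38, 41, 39, 43, 40, 41]  # historical samples
today_mbps = [41, 44, 120, 40]                     # this morning's samples

mu, sigma = mean(baseline_mbps), stdev(baseline_mbps)
spikes = [s for s in today_mbps if abs(s - mu) > 3 * sigma]  # [120]
```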

“Recently, I was reading a blog post by Ivan Pepelnjak on intent-based networking. He notes that the definition of intent is “a usually clearly formulated or planned intention” and that the word “intention” is defined as “what one intends to do or bring about.” I started to ponder his submission that the definition is confusing, as there are many variations.

To guide my understanding, I decided to delve deeper into the building blocks of intent-based networking, which led me to a variety of closed-loop automation solutions. After extensive research, my view is that closed-loop automation is a prerequisite for intent-based networking. Keeping in mind current requirements, it’s a solution that businesses can deploy today.

Now that I have examined different vendors, I would recommend taking a bird’s-eye view to make sure the solution overcomes today’s business and technical challenges. The outputs should drive a future-proof solution.”

“What keeps me awake at night is the thought of artificial intelligence lying in wait in the hands of bad actors. Artificial intelligence combined with the powers of IoT-based attacks will create an environment tapped for mayhem. It is easy to write about, but it is hard for security professionals to combat. AI has more force, severity, and fatality, which can change the face of a network and application in seconds.

When I think of the capabilities artificial intelligence has in the world of cybersecurity I know that unless we prepare well we will be like Bambi walking in the woods. The time is now to prepare for the unknown. Security professionals must examine the classical defense mechanisms in place to determine if they can withstand an attack based on artificial intelligence.”

“When I began my journey in 2015 with SD-WAN, the implementation requirements were different from what they are today. Initially, I deployed pilot sites for internal reachability. This was not a design flaw, but a solution requirement set by the options available to SD-WAN at that time. The initial requirement when designing SD-WAN was to replace multiprotocol label switching (MPLS) and connect the internal resources together.

Our projects gained the benefits of SD-WAN deployments. It certainly added value, but there were compelling constraints. In particular, we were limited to internal resources and users, yet our architecture consisted of remote partners and mobile workers. The real challenge for SD-WAN vendors is not solely to satisfy internal reachability. The wide area network (WAN) must support a range of different entities that require network access from multiple locations.”

“Applications have become a key driver of revenue, rather than their previous role as merely a tool to support the business process. What acts as the heart for all applications is the network providing the connection points. Due to the new, critical importance of the application layer, IT professionals are looking for ways to improve the architecture of their network.

A new era of campus network design is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm.

SD-Access is an example of an intent-based network within the campus. It is broken down into three major elements:

  1. Control-Plane based on Locator/ID separation protocol (LISP),
  2. Data-Plane based on Virtual Extensible LAN (VXLAN) and
  3. Policy-Plane based on Cisco TrustSec.”

“When it comes to technology, nothing is static, everything is evolving. Either we keep inventing mechanisms that dig out new security holes, or we are forced to implement existing kludges to cover up the inadequacies in security on which our web applications depend.

The assault on the changing digital landscape with all its new requirements has created a black hole that needs attention. The shift in technology, while creating opportunities, has a bias to create security threats. Unfortunately, with the passage of time, these trends will continue to escalate, putting web application security at center stage.

Business relies on web applications. Loss of service to business-focused web applications not only affects the brand but also results in financial loss. The web application acts as the front door to valuable assets. If you don’t efficiently lock the door or at least know when it has been opened, valuable revenue-generating web applications are left compromised.”

“When I started my journey in the technology sector back in the early 2000s, the world of networking consisted of simple structures. I remember configuring several standard branch sites that would connect to a central headquarters. There were only a handful of remote workers, usually just a few high-ranking officials.

As the dependence on networking increased, so did the complexity of network designs. The standard single site became dual-based with redundant connectivity to different providers, advanced failover techniques, and high-availability designs became the norm. The number of remote workers increased, and eventually, security holes began to open in my network design.

Unfortunately, the advances in network connectivity were not in conjunction with appropriate advances in security, forcing everyone back to the drawing board. Without adequate security, the network connectivity that is left to defaults is completely insecure and is unable to validate the source or secure individual packets. If you can’t trust the network, you have to somehow secure it. We secured connections over unsecured mediums, which led to the implementation of IPSec-based VPNs along with all their complex baggage.”

“Over the years, we have embraced new technologies to find improved ways to build systems. As a result, today’s infrastructures have undergone significant evolution. To keep pace with the arrival of new technologies, legacy is often combined with the new, but they do not always mesh well. Such a fusion between ultra-modern and conventional has created drag in the overall solution, thereby spawning tension between past and future in how things are secured.

The multi-tenant shared infrastructure of the cloud, container technologies like Docker and Kubernetes, and new architectures like microservices and serverless, while technically remarkable, increase complexity. Complexity is the number one enemy of security. Therefore, to be effectively aligned with the adoption of these technologies, a new approach to security is required that does not depend on shifting infrastructure as the control point.”

“Throughout my early years as a consultant, when asynchronous transfer mode (ATM) was the rage and multiprotocol label switching (MPLS) was still at the outset, I handled numerous roles as a network architect alongside various carriers. During that period, I experienced first-hand problems that the new technologies posed to them.

The lack of true end-to-end automation made our daily tasks run into the night. Bespoke network designs, coupled with a shortfall of appropriate documentation, resulted in a situation where one person knew it all. The provisioning teams never fully understood the design. The copy-and-paste implementation approach was error-prone, leaving teams blindfolded when something went wrong.

Designs were stitched together with so much variation that troubleshooting was limited to a personalized approach. That previous experience came to mind when I heard about carriers delivering SD-WAN services. I started to question whether they could have made the changes needed to provide such an agile service.”


VIPTELA SD-WAN – WAN Segmentation

VIPTELA SD-WAN

Problem Statement

WAN edge networks sit too far from business logic and are built and designed with limited application and business flexibility. Applications, on the other hand, sit closer to business logic. It’s time for networking to bridge this gap using policies and business-logic-aware principles. For additional information on WAN challenges, proceed to this SD-WAN tutorial.

 

Traditional Segmentation

Network segmentation is defined as a portion of the network that is separated from the rest. Segmentation can be physical or logical. Physical segmentation involves complete isolation at a device and link level. Some organizations require the physical division of individual business units for security, political, or other reasons. At a basic level, logical segmentation begins with VLAN boundaries at Layer 2. A VLAN consists of a group of devices that communicate as if they were connected to the same wire. As VLANs are logical rather than physical connections, they are flexible and can span multiple devices.

While VLANs provide logical separation at Layer 2, virtual routing and forwarding (VRF) instances provide separation at Layer 3. A Layer 2 VLAN can then map to a Layer 3 VRF instance. However, every VRF has a separate control plane, and configuration is completed on a hop-by-hop basis. Individual VRFs with separate control planes form individual routing protocol neighbor relationships, hampering router performance.
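The VLAN-to-VRF relationship can be sketched as a simple data model. This is an illustrative Python sketch (the VLAN IDs, VRF names, prefixes, and next hops are hypothetical), showing how each Layer 2 VLAN maps into a Layer 3 VRF that holds its own isolated routing table:

```python
# Hypothetical mapping of Layer 2 VLANs to Layer 3 VRFs.
vlan_to_vrf = {
    10: "CORP",    # corporate users
    20: "GUEST",   # guest Wi-Fi
    30: "PCI",     # payment traffic
}

# Each VRF keeps its own routing table (prefix -> next hop),
# fully isolated from the others.
vrf_tables = {
    "CORP":  {"10.1.0.0/16": "10.255.0.1"},
    "GUEST": {"0.0.0.0/0":   "192.0.2.1"},
    "PCI":   {"10.9.0.0/16": "10.255.0.9"},
}

def lookup(vlan, prefix):
    """Resolve a route only within the VRF bound to this VLAN."""
    vrf = vlan_to_vrf[vlan]
    return vrf_tables[vrf].get(prefix)
```

Because each VRF keeps its own table, a prefix that resolves in one VRF is invisible in another, which is the essence of Layer 3 segmentation.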

VRF Separation

MPLS/VPNs overcome hop-by-hop configurations to allow segmentation. This enables physically divided business units to be logically divided without the need for individual hop-by-hop VPN configurations throughout the network. Instead, only the PE edge routers carry VPN information. This supports a variety of topologies such as hub and spoke, partial mesh, or any to any connectivity. While MPLS/VPNs have their benefits, they also introduce a unique set of challenges.

MPLS Challenges

MPLS topologies, once provisioned, are difficult to modify, because all the impacted PE routers must be re-provisioned with each policy change. In this way, MPLS topologies are similar to the brick foundation of a house. Once the foundation is laid, it’s hard to make changes to the original structure without starting over.

Most modifications to VPN site topologies must go through an additional design phase. If a Wide Area Network (WAN) is outsourced to a carrier, it would require service provider intervention with additional design and provisioning activities. A simple task such as mapping application subnets to new or existing route targets (RTs) may involve onsite consultants, a new high-level design, and configuration templates that would have to be applied by provisioning teams. Service provider provisioning and design activities are not free and usually have long lead times.

Some flexible edge MPLS designs do exist, for example, community tagging and matching. During the design phase, the customer and service provider agree on predefined communities. Once these are set by the customer (attached to a prefix), they are matched by the provider to perform a certain type of traffic engineering (TE).

While community tagging and matching do provide some degree of flexibility and are commonly used, it remains a fixed, predefined configuration. Any subsequent design changes may still require service provider intervention.
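The agreement described above amounts to a fixed lookup from community value to action. The sketch below illustrates the idea; the community values and TE actions are hypothetical, not any real provider's policy:

```python
# Hypothetical communities agreed between customer and provider,
# each mapped to a predefined traffic-engineering action.
TE_POLICY = {
    "65000:100": "prefer-path-A",
    "65000:200": "prefer-path-B",
    "65000:666": "blackhole",
}

def apply_te(prefix_communities):
    """Return the TE action for the first recognised community
    attached to a prefix; otherwise fall back to default routing."""
    for community in prefix_communities:
        if community in TE_POLICY:
            return TE_POLICY[community]
    return "default-routing"
```

Note that the inflexibility the text describes is visible here: any new action requires changing `TE_POLICY` on the provider side, i.e. a new design and provisioning cycle.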

Applications Must Fit A Certain Topology

The model forces applications to fit into a network topology that is already built and designed. It lacks the flexibility for the network to keep up with changing business needs. What’s needed is a way to map application requirements to the network. Applications are exploding in number, and each has a variety of operational and performance requirements, which should be met in isolation.

Viptela SD-WAN & Topologies Per Application

By moving from hardware and diverse control planes to software and a unified control plane at the WAN edge, SD-WAN evolves the fixed network approach. It abstracts the details of the WAN, allowing topologies to be built independently of the physical network. SD-WAN provides segmentation between traffic sets and can create on-demand topologies, essentially multiple topologies per application or group of applications. Each application can have its own topology and dynamically changing policies, which are managed by a central controller.
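As a rough illustration of the controller model, per-application policies might be held centrally and changed on the fly, rather than re-provisioned hop by hop. The application names, topologies, and SLA values here are hypothetical:

```python
# Hypothetical central policy store: one logical topology
# and SLA per application, managed in one place.
app_policies = {
    "pci":          {"topology": "hub-and-spoke", "max_rtt_ms": 150},
    "voice":        {"topology": "full-mesh",     "max_rtt_ms": 50},
    "surveillance": {"topology": "full-mesh",     "max_rtt_ms": 200},
}

def update_policy(app, topology=None, max_rtt_ms=None):
    """Change an application's topology or SLA on the fly,
    without touching any other application's policy."""
    policy = app_policies[app]
    if topology is not None:
        policy["topology"] = topology
    if max_rtt_ms is not None:
        policy["max_rtt_ms"] = max_rtt_ms
    return policy
```

The point of the sketch is the contrast with MPLS: a policy change is one central update, not a re-provisioning of every impacted PE router.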

The application controls the network design.

SD-WAN – Application flows

In SD-WAN, the central controller is hosted and managed by the customer, not a service provider. This enables the WAN to be segmented for each application at the customer’s discretion. For example, PCI traffic can be transported using an overlay specifically designed for compliance via Provider A. Meanwhile, ATM traffic can travel over the same provider network but using an overlay specifically designed for ATM. In addition, each overlay can have different performance characteristics for path failover, so that if the network does not meet a certain round-trip time (RTT) metric, traffic can reroute over another path. The customer has complete control over what application goes where and has the power to change policies on the fly.

The SDN controller, which acts as the brain, can set different applications to run over different paths based on business requirements and performance SLAs, not on a fixed topology. Each topology is segmented from the others and applications can share or have independent on-demand topologies. SD-WAN dramatically accelerates the time it takes to adapt the network to changing business needs.

SD-WAN Topologies

Network topologies can be depicted either physically or logically. Common topologies such as star, partial mesh, full mesh, and ring are categorized under a centralized or decentralized function. In the physical world, these topologies are fixed and cannot be automatically changed. Logical topologies may also be hindered by physical device footprints.

In contrast, SD-WAN fully supports the coexistence of multiple application topologies regardless of existing physical footprints. For example, Lync message and video subscriptions may require different path topologies with separate SLAs. Messages may travel over low-cost links while video requires lower latency transports.

SD-WAN can flexibly cater to the needs of any type of application. In retail environments, store-specific applications may require a hub-and-spoke topology for authentication or security requirements. Surveillance systems may require a full mesh topology. Guest Wi-Fi may require local Internet access, compared to normal user traffic that is scrubbed via a hub site. This per-application topology gives designers better control over the network. Viptela SD-WAN endpoints support multiple logical segments (regardless of the existing physical network), each of which can use a unique topology (full mesh, hub and spoke, ring) and be managed via its own policy.
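The per-application topology idea can be made concrete by computing which site-to-site flows each topology permits. This is a toy sketch, not any vendor's implementation; the site names are hypothetical:

```python
from itertools import permutations

def allowed_flows(topology, sites, hub=None):
    """Return the set of (src, dst) site pairs a topology permits.

    full-mesh: every site talks to every other site directly.
    hub-and-spoke: spokes only talk to/from the hub."""
    if topology == "full-mesh":
        return set(permutations(sites, 2))
    if topology == "hub-and-spoke":
        spokes = [s for s in sites if s != hub]
        return {(s, hub) for s in spokes} | {(hub, s) for s in spokes}
    raise ValueError("unknown topology: %s" % topology)
```

Under this model, a retail authentication application (hub-and-spoke) never permits direct store-to-store flows, while a surveillance application (full mesh) over the very same sites does, which is exactly the segment-per-application behaviour described above.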

Viptela SD-WAN: Predictable Application Performance

Obtaining predictable services is achieved by understanding the per-application requirements and routing appropriately.

Traditional MPLS WANs offer limited fabric visibility. For example, providers allow enterprises to perform traceroutes and pings, but bits-per-second is a primitive and unreliable metric for measuring end-to-end application performance. Instead, WANs should be monitored at multiple layers, not just at the packet layer.

If a service provider multiple autonomous systems (AS) away is experiencing performance problems, these cannot be detected and addressed using traditional distance-vector methods. This makes it impossible to route around problems or to detect transitory oscillations. If errors exist on a transit path, a way must exist to penalize those paths.

Currently, there’s no way to detect when a remote network on the Internet is experiencing brownouts. Since the routing protocol is still operating, the best path does not change, as neighbors might still be up. Routing should exist at the transport and application layer and monitor both application flows and transactions. SD-WAN provides this function and delivers visibility at the device, transport, and application layers for insight into how the network is performing at any given time. This makes it possible to react to transitory failures before they can impact users.
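A minimal sketch of this idea, assuming hypothetical SLA thresholds: paths are chosen from probe measurements (RTT and loss) rather than routing-protocol liveness, so a brownout on the routed "best" path still triggers a reroute even though the neighbor is up:

```python
# Illustrative path selection from probe measurements.
# Threshold values are hypothetical defaults.
def best_path(paths, max_rtt_ms=100, max_loss_pct=1.0):
    """Pick the path meeting the SLA; if every path is degraded
    (a brownout), fall back to the least-bad path by RTT."""
    healthy = [p for p in paths
               if p["rtt_ms"] <= max_rtt_ms and p["loss_pct"] <= max_loss_pct]
    candidates = healthy or paths
    return min(candidates, key=lambda p: p["rtt_ms"])["name"]
```

Note the contrast with a routing protocol: here a path with the lowest RTT can still be rejected because its loss rate violates the SLA, something a liveness-based best-path decision cannot express.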

“This post is sponsored by Viptela, an SD-WAN company. All thoughts and opinions expressed are the author’s.”

 


Tech News – Gartner, IoT Security, Plexxi Switch

On the 27th of July, Gartner published its annual Hype Cycle for Networking and Communications. It assesses the network technologies that are most relevant and scores them on benefit ratings and maturity levels. Key areas at the “Peak” level are SD-WAN, WAN Optimization, SDN Applications, Cloud-Managed Networks, and White-Box Switching.

Markets are looking for ways to cut costs and benefit from ROI. All the areas mentioned above are at their “Peak” and moving fast to the next phase, “Sliding into the Trough.” SD-WAN offers the ability to combine and secure diverse links, drastically reducing WAN costs. Viptela (an SD-WAN company) recently signed a Fortune 200 global retailer that is currently deploying the technology to thousands of stores. I’m interested to see what SD-WAN brings in the future and how all these POCs pan out. One thing to keep in mind is that every SD-WAN startup has some kind of magic sauce for overlay deployment – creating perfect vendor lock-in.

White-box switching saves costs by using ‘generic,’ off-the-shelf switching and routing hardware. It’s a fundamental use case for SDN controllers. The intelligence is moved to the controller, and white-box functionality is limited to forwarding data packets without making key policy decisions. It is, however, important not to move 100% of the decision-making to the controller and to keep some time-sensitive control-plane protocols (LACP or BFD) local to the switch.

 

Internet of Things (IoT) – Another Security Breach

When you start connecting computing infrastructure to the physical world, security vulnerabilities take on a whole new meaning.

Hackers have remotely killed a Jeep on the highway in St. Louis. According to Wired, the hackers remotely toyed with the air conditioning, radio, and windshield wipers. They also caused the accelerator to stop working. These exploits will probably continue, and part of the problem is that there are a lot of malicious actors out there. It has always been a cat-and-mouse game with security, as hackers come up with ingenious ways to execute exploits. This explains why so much money goes into security, and the IoT will certainly drive more money into the security realm. Security prevention mechanisms will mitigate these attacks, but can they prevent all of them?

Hackers always find a way to stay one step ahead of prevention mechanisms. New product development should start with secure code and architectures. Product teams should not prioritize release dates over security. Hackers connecting to real things, like cars, that can kill people is worrying. The story goes on and on, as the creators of technologies do not think the same way as a hacker. Inventors of products, and the people who attempt to secure those products, are not half as inventive as hackers. Hackers get up every morning and think about exploits; they live for it and are extremely creative. The mindset of product innovation must change before the IoT is safe to connect to the physical world.

 

Plexxi – Generation 2 Switch

Plexxi is an SDN startup that recently came out with its new Plexxi Generation 2 switch. They are probably the first SDN company to have a working solution for the data centre. Their approach to networking allows you to achieve the full potential of your data and applications by moving beyond static networks.

They have a unique approach that uses wave division multiplexing on their fabric, changing the nature of physical connectivity between switches. They use unique hardware with merchant silicon inside that allows you to change the architecture of your network at any time without causing data loss. Their affinity SDN controller can rejig the network based on changing bandwidth requirements.

With traditional networking, we create as much bandwidth as possible but generally don’t use all of it. Plexxi challenges the existing approach and dynamically controls the networking bandwidth between switches. Their Generation 2 switch can determine what bandwidth is being used on the network and re-architect the network to use as much bandwidth as needed. The network architecture changes based on the flows in the data path. Now you have a network that changes on demand to meet dynamically changing traffic flows. For example, if two switches communicating at 1 Gbps suddenly need 40 Gbps, the Plexxi affinity SDN controller can detect this and automatically allocate more bandwidth to that link.

 


Azure ExpressRoute

In today's ever-evolving digital landscape, businesses are increasingly relying on cloud services for their infrastructure and data needs. Azure ExpressRoute, a dedicated network connection provided by Microsoft, offers a reliable and secure solution for organizations seeking direct access to Azure services. In this blog post, we will dive into the world of Azure ExpressRoute, exploring its benefits, implementation, and use cases.

Azure ExpressRoute is a private connection that allows businesses to establish a dedicated link between their on-premises network and Microsoft Azure. Unlike a regular internet connection, ExpressRoute offers higher security, lower latency, and increased reliability. By bypassing the public internet, organizations can experience enhanced performance and better control over their data.

Enhanced Performance: With ExpressRoute, businesses can achieve lower latency and higher bandwidth, resulting in faster and more responsive access to Azure services. This is especially critical for applications that require real-time data processing or heavy workloads.

Improved Security: ExpressRoute ensures a private and secure connection to Azure, reducing the risk of data breaches and unauthorized access. By leveraging private connections, businesses can maintain a higher level of control over their data and maintain compliance with industry regulations.

Hybrid Cloud Integration: Azure ExpressRoute enables seamless integration between on-premises infrastructure and Azure services. This allows organizations to extend their existing network resources to the cloud, creating a hybrid environment that offers flexibility and scalability.

Provider Selection: Businesses can choose from a range of ExpressRoute providers, including major telecommunications companies and internet service providers. It is essential to evaluate factors such as coverage, pricing, and support when selecting a provider that aligns with specific requirements.

Connection Types: Azure ExpressRoute offers two connection types - Layer 2 (Ethernet) and Layer 3 (IPVPN). Layer 2 provides a flexible and scalable solution, while Layer 3 offers more control over routing and traffic management. Understanding the differences between these connection types is crucial for successful implementation.

Global Enterprises: Large organizations with geographically dispersed offices can leverage Azure ExpressRoute to establish a private, high-speed connection to Azure services. This ensures consistent performance and secure data transmission across multiple locations.

Data-Intensive Applications: Industries dealing with massive data volumes, such as finance, healthcare, and research, can benefit from ExpressRoute's dedicated bandwidth. By bypassing the public internet, these organizations can achieve faster data transfers and real-time analytics.

Compliance and Security Requirements: Businesses operating in highly regulated industries, such as banking or government sectors, can utilize Azure ExpressRoute to meet stringent compliance requirements. The private connection ensures data privacy, integrity, and adherence to industry-specific regulations.

Azure ExpressRoute opens up a world of possibilities for businesses seeking a secure, high-performance connection to the cloud. By leveraging dedicated network links, organizations can unlock the full potential of Azure services while maintaining control over their data and ensuring compliance. Whether it's enhancing performance, improving security, or enabling hybrid cloud integration, ExpressRoute proves to be a valuable asset in today's digital landscape.

Highlights: Azure ExpressRoute

What is Azure ExpressRoute?

Azure ExpressRoute provides a private and dedicated connection between on-premises infrastructure and the Azure cloud. Unlike a regular internet connection, ExpressRoute offers a more reliable and consistent performance by bypassing public networks. It allows businesses to extend their network into the Azure cloud and access various services with enhanced speed and reduced latency.

a) Enhanced Network Performance: By bypassing the public internet, ExpressRoute offers low-latency connections, ensuring faster data transfers and improved application performance. This is particularly beneficial for bandwidth-intensive workloads and real-time applications.

b) Improved Security: With ExpressRoute, data exchanges between on-premises infrastructure and Azure occur over a private connection, reducing the exposure to potential security threats. This added layer of security is crucial for organizations dealing with sensitive and confidential data.

c) Scalability and Flexibility: ExpressRoute enables businesses to scale their network connectivity as their needs grow. Whether it’s expanding to new regions or increasing bandwidth capacity, ExpressRoute provides the necessary flexibility to accommodate changing requirements.

**Setting up Azure ExpressRoute**

a) Connectivity Models: Azure ExpressRoute supports two connectivity models – Network Service Provider (NSP) and Exchange Provider (IXP). NSP connectivity involves partnering with a network service provider to establish the connection, while IXP connectivity allows direct peering with Azure at an internet exchange point.

b) Prerequisites and Configuration: Before setting up ExpressRoute, organizations need to meet certain prerequisites, such as having an Azure subscription and establishing peering relationships. Configuration involves defining routing settings, setting up virtual networks, and configuring the ExpressRoute circuit.

**Use Cases for Azure ExpressRoute**

a) Hybrid Cloud Environments: ExpressRoute is ideal for organizations that operate in a hybrid cloud environment, integrating on-premises infrastructure with Azure. It enables seamless data transfers and allows businesses to leverage the benefits of both private and public cloud environments.

b) Big Data and Analytics: For data-intensive workloads, such as big data analytics, ExpressRoute provides a high-bandwidth and low-latency connection to Azure, ensuring efficient data processing and analysis.

c) Disaster Recovery and Business Continuity: ExpressRoute plays a crucial role in disaster recovery scenarios by providing a reliable and dedicated connection to Azure. Organizations can replicate their critical data and applications to Azure, ensuring business continuity in case of unforeseen events.

Common Azure Cloud Components

  • Azure Networking

Using Azure Networking, you can connect your on-premises data center to the cloud using fully managed and scalable networking services. Azure networking services allow you to build a secure virtual network infrastructure, manage your applications’ network traffic, and protect them from DDoS attacks. In addition to enabling secure remote access to internal resources within your organization, Azure network resources can also be used to monitor and secure your network connectivity globally.

With Azure, complex network architectures can be supported with robust, fully managed, and dynamic network infrastructure. A hybrid network solution combines on-premises and cloud infrastructure to create public access to network services and secure application networks.

  • Azure Virtual Network

Azure Virtual Network, the foundation of Azure networking, provides a secure and isolated environment for your resources. It lets you define your IP address space, create subnets, and establish connectivity to your on-premises network. With Azure Virtual Network, you have complete control over network traffic flow, security policies, and routing.

  • Azure Load Balancer

In a world where high availability and scalability are paramount, Azure Load Balancer comes to the rescue. This powerful tool distributes incoming network traffic across multiple VM instances, ensuring optimal resource utilization and fault tolerance. Whether it’s TCP or UDP traffic or public or private load balancing, Azure Load Balancer has you covered.

  • Azure Virtual WAN

Azure Virtual WAN simplifies network connectivity and management for organizations with geographically dispersed branches. By leveraging Microsoft’s global network infrastructure, Virtual WAN provides secure and optimized connectivity between branches and Azure resources. It seamlessly integrates with Azure Virtual Network and offers features like VPN and ExpressRoute connectivity.

  • Azure Firewall

Network security is a top priority, and Azure Firewall rises to the challenge. Acting as a highly available, cloud-native firewall-as-a-service, Azure Firewall provides centralized network security management. It offers application and network-layer filtering, threat intelligence integration, and outbound connectivity control. With Azure Firewall, you can safeguard your network and applications with ease.

  • Azure Virtual Network

Azure Virtual Networks (Azure VNets) are essential in building networks within the Azure infrastructure. Azure networking is fundamental to managing and securely connecting to other external networks (public and on-premises) over the Internet.

Azure VNet goes beyond traditional on-premises networks. In addition to isolation, high availability, and scalability, It helps secure your Azure resources by allowing you to administer, filter, or route traffic based on your preferences.

  • Peering between Azure VNets

Peering between Azure Virtual Networks (VNets) allows you to connect several virtual networks. Microsoft’s infrastructure and a secure private network connect the VMs in the peer virtual networks. Resources can be shared and connected directly between the two networks in a peering network.

Azure supports both regional VNet peering, which connects virtual networks within the same Azure region, and global VNet peering, which connects virtual networks across Azure regions.
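One practical constraint worth noting: two VNets can only be peered if their address spaces do not overlap. A quick way to check that before attempting a peering, using Python's standard ipaddress module (the prefixes in the example are hypothetical):

```python
import ipaddress

def can_peer(vnet_a_prefixes, vnet_b_prefixes):
    """VNet peering requires the two address spaces not to overlap;
    return True only if no prefix pair overlaps."""
    for a in vnet_a_prefixes:
        for b in vnet_b_prefixes:
            if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                return False
    return True
```

For example, a 10.1.0.0/16 VNet can be peered with a 10.2.0.0/16 VNet, but not with a VNet whose address space is 10.0.0.0/8, since the latter contains the former.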

A virtual wide area network powered by Azure

Azure Virtual WAN is a managed networking service that offers networking, security, and routing features. It is made possible by the Azure global network. Various VPN connectivity options are available, including site-to-site VPNs and ExpressRoutes.

For those who prefer working from home or other remote locations, virtual WANs assist in connecting to the Internet and other Azure resources, including networking and remote user connectivity. Using Azure Virtual WAN, existing infrastructure or data centers can be moved from on-premises to Microsoft Azure.

ExpressRoute

Internet Challenges

One of the primary culprits behind sluggish internet performance is the occurrence of bottlenecks. These bottlenecks can happen at various points along the internet infrastructure, from local networks to internet service providers (ISPs) and even at the server end. Limited bandwidth can also impact internet speed, especially during peak usage hours when networks become congested. Understanding these bottlenecks and bandwidth limitations is crucial in addressing internet performance issues.

The Role of Latency

While speed is essential, it’s not the only factor contributing to a smooth online experience. Latency, often measured in milliseconds, is critical in determining how quickly data travels between its source and destination. High latency can result in noticeable delays, particularly in activities that require real-time interaction, such as online gaming or video conferencing. Various factors, including distance, network congestion, and routing inefficiencies, can contribute to latency issues.

ExpressRoute Azure

Using Azure ExpressRoute, you can extend on-premises networks into Microsoft’s cloud infrastructure over a private connection. This networking service allows you to connect your on-premises networks to Azure. You can connect your on-premises network with Azure using an IP VPN network with Layer 3 connectivity, enabling you to connect Azure to your own WAN or data center on-premises.

There is no internet traffic with Azure ExpressRoute since the connection is private. Compared to public networks, ExpressRoute connections are faster, more reliable, more available, and more secure.

a) Enhanced Security: ExpressRoute provides a private connection, making it an ideal choice for organizations dealing with sensitive data. By avoiding the public internet, companies can significantly reduce the risk of unauthorized access and potential security breaches.

b) High Performance: ExpressRoute allows businesses to achieve faster data transfers and lower latency than standard internet connections. This is particularly beneficial for applications that require real-time data processing, such as video streaming, IoT solutions, and financial transactions.

c) Reliable and Consistent Connectivity: Azure ExpressRoute offers uptime Service Level Agreements (SLAs) and guarantees a more stable connection than internet-based connections. This ensures critical workloads and applications remain accessible and functional even during peak usage.

**Use Cases for Azure ExpressRoute**

a) Hybrid Cloud Environments: ExpressRoute enables organizations to extend their on-premises infrastructure to the Azure cloud seamlessly. This facilitates a hybrid cloud setup, where companies can leverage Azure’s scalability and flexibility while retaining certain workloads or sensitive data within their own data centers.

b) Big Data and Analytics: ExpressRoute provides a reliable and high-bandwidth connection to Azure’s data services for businesses that heavily rely on big data analytics. This enables faster and more efficient data transfers, allowing organizations to extract real-time actionable insights.

c) Disaster Recovery and Business Continuity: ExpressRoute can be instrumental in establishing a robust disaster recovery strategy. By replicating critical data and applications to Azure, businesses can ensure seamless failover during unforeseen events, minimizing downtime and maintaining business continuity.

You may find the following posts helpful for pre-information:

  1. Load Balancer Scaling
  2. IDS IPS Azure
  3. Low Latency Network Design
  4. Data Center Performance
  5. Baseline Engineering
  6. WAN SDN 
  7. Technology Insight for Microsegmentation
  8. SDP VPN

Azure ExpressRoute

Leave it at its defaults. When you deploy an Azure VPN gateway, two gateway instances are configured in an active-standby arrangement. The standby instance delivers partial redundancy but not high availability, as it can take a few minutes for the second instance to come online and reconnect to the VPN destination.

For this lower level of redundancy, you can choose whether the VPN is regionally redundant or zone-redundant. If you utilize a Basic public IP address, the VPN you configure can only be regionally redundant. If you require a zone-redundant configuration, use a Standard public IP address with the VPN gateway.

The following table lists ExpressRoute locations:

Azure ExpressRoute

Azure Express Route and Encryption

Azure ExpressRoute does not offer built-in encryption. For this reason, you should investigate Barracuda’s cloud security product sets. They offer secure transmission and automatic path failover via redundant, secure tunnels to complete an end-to-end cloud solution. Other 3rd-party security products are available in Azure but are not as mature as Barracuda’s product set.

Internet Performance

Connecting to Azure public cloud over the Internet may be cheap, but it has its drawbacks with security, uptime, latency, packet loss, and jitter. The latency, jitter, and packet loss associated with the Internet often cause the performance of an application to degrade. This is primarily a concern if you support hybrid applications requiring real-time backend on-premise communications.

Transport network performance directly impacts application performance. Businesses are now facing new challenges when accessing applications in the cloud over the Internet. Delayed round-trip time (RTT) is a big concern. TCP spends a few RTTs to establish the TCP session—two RTTs before you get the first data byte.

Client-side cookies may also add delays if they are large enough that the request cannot fit in the first data packet. A transport network offering good RTT is essential for application performance. You need the ability to transport packets as quickly as possible and support the idea that “every packet counts.”
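The per-connection RTT cost can be put into rough numbers. The sketch below assumes one RTT for the TCP handshake and one for the request/response, plus two more for a classic TLS 1.2 handshake; the 5 ms and 80 ms paths are example values, not measurements:

```python
# Rough time-to-first-byte model: each new connection pays the TCP
# handshake (1 RTT) before the request can even be sent, plus 1 RTT for
# the request/response itself. A classic TLS 1.2 handshake adds roughly
# 2 more RTTs (TLS 1.3 needs only 1).

def time_to_first_byte_ms(rtt_ms: float, tls: bool = False) -> float:
    rtts = 2  # TCP handshake + request/response
    if tls:
        rtts += 2  # classic TLS 1.2 handshake
    return rtts * rtt_ms

# The same transaction over a 5 ms private circuit vs. an 80 ms internet path:
print(time_to_first_byte_ms(5, tls=True))   # 20 ms
print(time_to_first_byte_ms(80, tls=True))  # 320 ms
```

The absolute RTT is multiplied by the protocol’s round-trip count, which is why shaving latency off the transport pays back several times over per connection.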

  • The Internet does not provide this or offer any guaranteed Service Level Agreement (SLA) for individual traffic classes.

The Azure solution – Azure ExpressRoute & Telecity cloud-IX

With Microsoft Azure ExpressRoute, you get your private connection to Azure with a guaranteed SLA. It’s like a natural extension to your data center, offering lower latency, higher throughput, and better reliability than the Internet. You can now build applications spanning on-premise infrastructures and Azure Cloud without compromising performance. It bypasses the Internet and lets you connect your on-premise data center to your cloud data center via 3rd-party MPLS networks.

There are two ways to establish your private connection to Azure with ExpressRoute: through an Exchange Provider or a Network Service Provider. Choose the Exchange Provider method if you want to co-locate equipment. Companies like Telecity offer a “bridging product” enabling direct connectivity from your WAN to Azure via their MPLS network. Even though Telecity is an exchange provider, its network offering functions like that of a network service provider. The bridging product is called Cloud-IX, and its connectivity makes Azure Cloud look like another terrestrial data center.

Azure ExpressRoute
Diagram: Azure ExpressRoute.

Cloud-IX is a neutral cloud ecosystem. It allows enterprises to establish private connections to cloud service providers, not just Azure. The Telecity Cloud-IX network already has redundant NNI peering to Microsoft data centers, enabling you to set up your peering connections to Cloud-IX via BGP or static routes only. You don’t peer directly with Azure; Telecity and Cloud-IX take care of transport security and redundancy. Cloud-IX is likely an MPLS network that uses route targets (RT) and route distinguishers (RD) to separate and distinguish customer traffic.
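The RD-based separation can be illustrated with a toy lookup table. The RD values, prefixes, and PE router names below are invented for illustration; the point is how prepending an RD keeps two customers’ overlapping address space distinct in the provider’s routing table:

```python
# Illustration of how MPLS VPNs keep overlapping customer prefixes
# distinct: the route distinguisher (RD) is prepended to each IPv4
# prefix, so two customers can both use 10.0.0.0/24 without colliding.
# RD values, prefixes, and PE names here are made up for the example.

vpn_table = {}

def advertise(rd: str, prefix: str, next_hop: str):
    # A VPNv4 route is RD + prefix, making it globally unique
    vpn_table[(rd, prefix)] = next_hop

advertise("64512:100", "10.0.0.0/24", "PE1")   # customer A
advertise("64512:200", "10.0.0.0/24", "PE2")   # customer B, same prefix

# Identical prefixes, but separate routes because the RDs differ:
print(vpn_table[("64512:100", "10.0.0.0/24")])  # PE1
print(vpn_table[("64512:200", "10.0.0.0/24")])  # PE2
```

Route targets then control which of these routes each customer VRF is allowed to import, completing the isolation.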

Azure ExpressRoute Redundancy

The introduction of VNets

Layer-3 overlays called VNets (cloud boundaries/subnets) can now be associated with multiple ExpressRoute circuits. This offers a proper active-active data center design, enabling path diversity and the ability to build resilient connectivity. This is great for designers, as it means we can build true geo-resilience into ExpressRoute designs by creating two ExpressRoute “dedicated circuits” and associating each virtual network with both.

This ensures full end-to-end resilience built into the Azure ExpressRoute configuration, including removing all geographic SPOFs. ExpressRoute connections are created between the Exchange Service Provider or Network Service Provider and the Microsoft cloud. The connectivity between customers’ on-premises locations and the service provider is provisioned independently of ExpressRoute. Microsoft only peers with service providers.

Azure Express Route
Diagram: Azure Express Route redundancy with VNets.

Barracuda NG firewall & Azure Express Route

Barracuda NG Firewall adds protection to Microsoft ExpressRoute. The NG is installed at both ends of the connection and offers traffic access controls, security features, low latency, and automatic path failover with Barracuda’s proprietary transport protocol, TINA. Traffic Access Control: From the IP to the Application layer, the NG firewall gives you complete visibility into traffic flows in and out of ExpressRoute.

With visibility, you get better control of the traffic. In addition, the NG firewall allows you to log what servers are doing outbound. This may be interesting to know if a server gets hacked in Azure. You would like to know what the attacker is doing outbound to it. Analytics will let you contain it or log it. When you get attacked, you need to know what traffic the attacker generates and if they are pivoting to other servers.

There have been security concerns about the number of administrative domains an ExpressRoute connection traverses. You should implement your own security measures, since the provider’s physical routers are shared logically with other customers. The NG encrypts traffic end-to-end between both endpoints. This encryption can be customized to your requirements; for example, the transport may be TCP, UDP, or a hybrid, and you retain complete control over the keys and algorithms.

  • Preserve low latency

Preserve low latency for applications that require a high quality of service. The NG can apply QoS based on ports and applications, giving high-priority business applications better service. It also optimizes traffic by automatically sending bulk traffic over the Internet and keeping critical traffic on the low-latency path.

Automatic transport link failover with TINA: upon MPLS link failure, the NG can automatically switch to internet-based transport and continue passing traffic to the Azure gateway. It creates a secure tunnel over the Internet without dropping packets, offering a graceful failover to an Internet VPN. Multiple links can be active-active, giving the WAN edge an SD-WAN-style, transport-agnostic failover approach.
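The transport-agnostic failover behavior can be sketched as a health-driven path choice. The path names, priorities, and health flags below are hypothetical; a real product makes this decision from live tunnel probes rather than static flags:

```python
# Sketch of transport-agnostic failover: prefer the MPLS path while it
# is healthy, and fall over to the internet VPN tunnel the moment it is
# not. Path names, priorities, and health flags are invented examples.

paths = [
    {"name": "mpls", "healthy": True, "priority": 1},
    {"name": "internet-vpn", "healthy": True, "priority": 2},
]

def active_path(paths):
    """Return the highest-priority healthy transport, or None if all are down."""
    candidates = [p for p in paths if p["healthy"]]
    return min(candidates, key=lambda p: p["priority"])["name"] if candidates else None

print(active_path(paths))        # mpls
paths[0]["healthy"] = False      # MPLS circuit fails
print(active_path(paths))        # internet-vpn
```

Because both transports carry the same encrypted tunnel, the failover changes only the underlay path, not the security posture of the traffic.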

TINA is SSL-based rather than IPsec and runs over TCP, UDP, or ESP. Because Azure only supports TCP and UDP, TINA can run across the Microsoft fabric.

Closing Points on Azure ExpressRoute

Azure ExpressRoute is a service that enables you to create private connections between Microsoft Azure data centers and infrastructure on your premises or in a co-location environment. Unlike a typical internet connection, ExpressRoute provides a more reliable, faster, and secure network experience. This is achieved through dedicated connectivity, which can bypass the public internet, thereby reducing latency and improving overall performance.

Azure ExpressRoute offers numerous advantages for businesses looking to optimize their cloud strategy:

1. **Enhanced Security**: By establishing a private connection, ExpressRoute minimizes the risk of data breaches that can occur over the public internet. This is particularly beneficial for industries with stringent regulatory requirements.

2. **Improved Performance**: With dedicated bandwidth, businesses can experience consistent network performance, making it ideal for data-heavy applications like analytics and real-time operations.

3. **Cost-Effective**: Although there are associated costs with implementing ExpressRoute, the potential savings in terms of downtime reduction and operational efficiency can outweigh these expenses.

4. **Scalability**: As your business grows, ExpressRoute can easily scale to accommodate increased data transfer demands without impacting performance.

Azure ExpressRoute can be particularly beneficial in several scenarios:

– **Financial Services**: Institutions that require secure and compliant connections for sensitive transactions can benefit from the enhanced security of ExpressRoute.

– **Healthcare**: Medical facilities that need to transfer large amounts of data, such as medical imaging, can do so efficiently with ExpressRoute’s high-performance connectivity.

– **Manufacturing**: Companies that rely on real-time data from IoT devices can ensure minimal latency and high reliability with ExpressRoute connections.

Implementing Azure ExpressRoute involves several steps, starting with the selection of a connectivity provider from Microsoft’s list of ExpressRoute partners. Once connected, businesses can configure their network to integrate with their existing infrastructure. Microsoft provides extensive documentation and support to aid in the setup process, ensuring a smooth transition to this powerful service.

Summary: Azure ExpressRoute

In today’s rapidly evolving digital landscape, businesses seek ways to enhance cloud connectivity for seamless data transfer and improved security. One such solution is Azure ExpressRoute, a private and dedicated network connection to Microsoft Azure. In this blog post, we delved into the various benefits of Azure ExpressRoute and how it can revolutionize your cloud experience.

Understanding Azure ExpressRoute

Azure ExpressRoute is a service that allows organizations to establish a private and dedicated connection to Azure, bypassing the public internet. This direct pathway ensures a more reliable, secure, and low-latency data and application transfer connection.

Enhanced Security and Data Privacy

With Azure ExpressRoute, organizations can significantly enhance security by keeping their data off the public internet. Establishing a private connection safeguards sensitive information from potential threats, ensuring data privacy and compliance with industry regulations.

Improved Performance and Reliability

The dedicated nature of Azure ExpressRoute ensures a high-performance connection with consistent network latency and minimal packet loss. By bypassing the public internet, organizations can achieve faster data transfer speeds, reduced latency, and enhanced user experience.

Hybrid Cloud Enablement

Azure ExpressRoute enables seamless integration between on-premises infrastructure and the Azure cloud environment. This makes it an ideal solution for organizations adopting a hybrid cloud strategy, allowing them to leverage the benefits of both environments without compromising on security or performance.

Flexible Network Architecture

Azure ExpressRoute offers flexibility in network architecture, allowing organizations to choose from multiple connectivity options. Whether establishing a direct connection from their data center or utilizing a colocation facility, organizations can design a network setup that best suits their requirements.

Conclusion:

Azure ExpressRoute provides businesses with a direct and dedicated pathway to the cloud, offering enhanced security, improved performance, and flexibility in network architecture. By leveraging Azure ExpressRoute, organizations can unlock the full potential of their cloud infrastructure and accelerate their digital transformation journey.

WAN Virtualization

In today's fast-paced digital world, seamless connectivity is the key to success for businesses of all sizes. WAN (Wide Area Network) virtualization has emerged as a game-changing technology, revolutionizing the way organizations connect their geographically dispersed branches and remote employees. In this blog post, we will explore the concept of WAN virtualization, its benefits, implementation considerations, and its potential impact on businesses.

WAN virtualization is a technology that abstracts the physical network infrastructure, allowing multiple logical networks to operate independently over a shared physical infrastructure. It enables organizations to combine various types of connectivity, such as MPLS, broadband, and cellular, into a single virtual network. By doing so, WAN virtualization enhances network performance, scalability, and flexibility.

Increased Flexibility and Scalability: WAN virtualization allows businesses to scale their network resources on-demand, facilitating seamless expansion or contraction based on their requirements. It provides flexibility to dynamically allocate bandwidth, prioritize critical applications, and adapt to changing network conditions.

Improved Performance and Reliability: By leveraging intelligent traffic management techniques and load balancing algorithms, WAN virtualization optimizes network performance. It intelligently routes traffic across multiple network paths, avoiding congestion and reducing latency. Additionally, it enables automatic failover and redundancy, ensuring high network availability.

Simplified Network Management: Traditional WAN architectures often involve complex configurations and manual provisioning. WAN virtualization simplifies network management by centralizing control and automating tasks. Administrators can easily set policies, monitor network performance, and make changes from a single management interface, saving time and reducing human errors.

Multi-Site Connectivity: For organizations with multiple remote sites, WAN virtualization offers a cost-effective solution. It enables seamless connectivity between sites, allowing efficient data transfer, collaboration, and resource sharing. With centralized management, network administrators can ensure consistent policies and security across all sites.

Cloud Connectivity:

As more businesses adopt cloud-based applications and services, WAN virtualization becomes an essential component. It provides reliable and secure connectivity between on-premises infrastructure and public or private cloud environments. By prioritizing critical cloud traffic and optimizing routing, WAN virtualization ensures optimal performance for cloud-based applications.

Highlights: WAN Virtualization

### The Basics of WAN

A WAN is a telecommunications network that extends over a large geographical area. It is designed to connect devices and networks across long distances, using various communication links such as leased lines, satellite links, or the internet. The primary purpose of a WAN is to facilitate the sharing of resources and information across locations, making it a vital component of modern business infrastructure. WANs can be either private, connecting specific networks of an organization, or public, utilizing the internet for broader connectivity.

### The Role of Virtualization in WAN

Virtualization has revolutionized the way WANs operate, offering enhanced flexibility, efficiency, and scalability. By decoupling network functions from physical hardware, virtualization allows for the creation of virtual networks that can be easily managed and adjusted to meet organizational needs. This approach reduces the dependency on physical infrastructure, leading to cost savings and improved resource utilization. Virtualized WANs can dynamically allocate bandwidth, prioritize traffic, and ensure optimal performance, making them an attractive solution for businesses seeking agility and resilience.

Separating the Control and Data Plane:

1. WAN virtualization can be defined as the abstraction of physical network resources into virtual entities, allowing for more flexible and efficient network management. By separating the control plane from the data plane, WAN virtualization enables the centralized management and orchestration of network resources, regardless of their physical locations. This simplifies network administration and paves the way for enhanced scalability and agility.

2. WAN virtualization optimizes network performance by intelligently routing traffic and dynamically adjusting network resources based on real-time conditions. This ensures that critical applications receive the necessary bandwidth and quality of service, resulting in improved user experience and productivity.

3. By leveraging WAN virtualization, organizations can reduce their reliance on expensive dedicated circuits and hardware appliances. Instead, they can leverage existing network infrastructure and utilize cost-effective internet connections without compromising security or performance. This significantly lowers operational costs and capital expenditures.

4. Traditional WAN architectures often struggle to meet modern businesses’ evolving needs. WAN virtualization solves this challenge by providing a scalable and flexible network infrastructure. With virtual overlays, organizations can rapidly deploy and scale their network resources as needed, empowering them to adapt quickly to changing business requirements.
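The control/data-plane split described above can be sketched in a few lines. The class names, application labels, and path names below are hypothetical; the point is that policy is computed centrally once, then applied locally by each edge without a per-packet round trip to the controller:

```python
# Sketch of control/data-plane separation: a central controller computes
# forwarding policy, and every edge device applies it locally.
# Application names and path labels are invented for illustration.

class Controller:
    def __init__(self):
        self.policy = {}

    def set_policy(self, app: str, path: str):
        self.policy[app] = path            # centralized intent

    def push_to(self, edge: "Edge"):
        edge.forwarding_table = dict(self.policy)

class Edge:
    def __init__(self):
        self.forwarding_table = {}

    def forward(self, app: str) -> str:
        # Data plane: a purely local lookup, no controller involved
        return self.forwarding_table.get(app, "default-internet")

controller = Controller()
controller.set_policy("voip", "mpls-low-latency")
controller.set_policy("backup", "broadband")

branch = Edge()
controller.push_to(branch)
print(branch.forward("voip"))    # mpls-low-latency
print(branch.forward("email"))   # default-internet
```

A policy change at the controller propagates to every branch in one push, which is what makes centralized orchestration scale.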

**Implementing WAN Virtualization**

Successful implementation of WAN virtualization requires careful planning and execution. Start by assessing your current network infrastructure and identifying areas for improvement. Choose a virtualization solution that aligns with your organization’s specific needs and budget. Consider leveraging software-defined WAN (SD-WAN) technologies to simplify the deployment process and enhance overall network performance.

There are several popular techniques for implementing WAN virtualization, each with its unique characteristics and use cases. Let’s explore a few of them:

a. MPLS (Multi-Protocol Label Switching): MPLS is a widely used technique that leverages labels to direct network traffic efficiently. It provides reliable and secure connectivity, making it suitable for businesses requiring stringent service level agreements (SLAs).

b. SD-WAN (Software-Defined Wide Area Network): SD-WAN is a revolutionary technology that abstracts and centralizes the network control plane in software. It offers dynamic path selection, traffic prioritization, and simplified network management, making it ideal for organizations with multiple branch locations.

c. VPLS (Virtual Private LAN Service): VPLS extends the functionality of Ethernet-based LANs over a wide area network. It creates a virtual bridge between geographically dispersed sites, enabling seamless communication as if they were part of the same local network.

Example Technology: MPLS & LDP

**The Mechanics of MPLS: How It Works**

MPLS operates by assigning labels to data packets at the network’s entry point—an MPLS-enabled router. These labels determine the path the packet will take through the network, enabling quick and efficient routing. Each router along the path uses the label to make forwarding decisions, eliminating the need for complex table lookups. This not only accelerates data transmission but also allows network administrators to predefine optimal paths for different types of traffic, enhancing network performance and reliability.
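The label-swap forwarding described above can be simulated with a toy label forwarding table (LFIB). The label values and router names are invented for illustration; the point is that each hop looks up only the incoming label, never the destination IP:

```python
# Toy label-switched path: each router looks up only the incoming label,
# swaps it for the outgoing label, and forwards the packet. No IP table
# lookup happens in transit. Labels and router names are invented.

# Per-router label forwarding tables: in_label -> (out_label, next_hop)
lfib = {
    "PE1": {None: (100, "P1")},    # ingress PE pushes the first label
    "P1":  {100: (200, "P2")},     # transit routers swap labels
    "P2":  {200: (None, "PE2")},   # penultimate hop pops the label
}

def forward(label, router, path=()):
    """Follow the label-switched path, returning the routers traversed."""
    path = path + (router,)
    if router not in lfib:
        return path                 # egress reached, packet delivered
    out_label, next_hop = lfib[router][label]
    return forward(out_label, next_hop, path)

print(forward(None, "PE1"))  # ('PE1', 'P1', 'P2', 'PE2')
```

Because the path is fixed by the label tables, operators can pre-engineer where each traffic class flows, which is the basis of MPLS traffic engineering.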

**Exploring LDP: The Glue of MPLS Systems**

The Label Distribution Protocol (LDP) is crucial for the functioning of MPLS networks. LDP is responsible for the distribution of labels between routers, ensuring that each understands how to handle the labeled packets appropriately. When routers communicate using LDP, they exchange label information, which helps in building a label-switched path (LSP). This process involves the negotiation of label values and the establishment of the end-to-end path that data packets will traverse, making LDP the unsung hero that ensures seamless and effective MPLS operation.

**Benefits of MPLS and LDP in Modern Networks**

MPLS and LDP together offer a range of benefits that make them indispensable in contemporary networking. They provide a scalable solution that supports a wide array of services, including VPNs, traffic engineering, and quality of service (QoS). This versatility makes it easier for network operators to manage and optimize traffic, leading to improved bandwidth utilization and reduced latency. Additionally, MPLS networks are inherently more secure, as the label-switching mechanism makes it difficult for unauthorized users to intercept or tamper with data.

Overcoming Potential Challenges

While WAN virtualization offers numerous benefits, it also presents certain challenges. Security is a top concern, as virtualized networks can introduce new vulnerabilities. It’s essential to implement robust security measures, such as encryption and access controls, to protect your virtualized WAN. Additionally, ensure your IT team is adequately trained to manage and monitor the virtual network environment effectively.

**Section 1: The Complexity of Network Integration**

One of the primary challenges in WAN virtualization is integrating new virtualized solutions with existing network infrastructures. This task often involves dealing with legacy systems that may not easily adapt to virtualized environments. Organizations need to ensure compatibility and seamless operation across all network components. To address this complexity, businesses can employ network abstraction techniques and use software-defined networking (SDN) tools that offer greater control and flexibility, allowing for a smoother integration process.

**Section 2: Security Concerns in Virtualized Environments**

Security remains a critical concern in any network architecture, and virtualization adds another layer of complexity. Virtual environments can introduce vulnerabilities if not properly managed. The key to overcoming these security challenges lies in implementing robust security protocols and practices. Utilizing encryption, firewalls, and regular security audits can help safeguard the network. Additionally, leveraging network segmentation and zero-trust models can significantly enhance the security of virtualized WANs.

**Section 3: Managing Performance and Reliability**

Ensuring consistent performance and reliability in a virtualized WAN is another significant challenge. Virtualization can sometimes lead to latency and bandwidth issues, affecting the overall user experience. To mitigate these issues, organizations should focus on traffic optimization techniques and quality of service (QoS) management. Implementing dynamic path selection and traffic prioritization can ensure that mission-critical applications receive the necessary bandwidth and performance, maintaining high levels of reliability across the network.

**Section 4: Cost Implications and ROI**

While WAN virtualization can lead to cost savings in the long run, the initial investment and transition can be costly. Organizations must carefully consider the cost implications and potential return on investment (ROI) when adopting virtualized solutions. Conducting thorough cost-benefit analyses and pilot testing can provide valuable insights into the financial viability of virtualization projects. By aligning virtualization strategies with business goals, companies can maximize ROI and achieve sustainable growth.

WAN Virtualisation & SD-WAN Cloud Hub

SD-WAN Cloud Hub is a cutting-edge networking solution that combines the power of software-defined wide area networking (SD-WAN) with the scalability and reliability of cloud services. It acts as a centralized hub, enabling organizations to connect their branch offices, data centers, and cloud resources in a secure and efficient manner. By leveraging SD-WAN Cloud Hub, businesses can simplify their network architecture, improve application performance, and reduce costs.

Google Cloud needs no introduction. With its robust infrastructure, comprehensive suite of services, and global reach, it has become a preferred choice for businesses across industries. From compute and storage to AI and analytics, Google Cloud offers a wide range of solutions that empower organizations to innovate and scale. By integrating SD-WAN Cloud Hub with Google Cloud, businesses can unlock unparalleled benefits and take their network connectivity to new heights.

Understanding SD-WAN

SD-WAN is a cutting-edge networking technology that utilizes software-defined principles to manage and optimize network connections intelligently. Unlike traditional WAN, which relies on costly and inflexible hardware, SD-WAN leverages software-based solutions to streamline network management, improve performance, and enhance security.

Key Benefits of SD-WAN

a) Enhanced Performance: SD-WAN intelligently routes traffic across multiple network paths, ensuring optimal performance and reduced latency. This results in faster data transfers and improved user experience.

b) Cost Efficiency: With SD-WAN, businesses can leverage affordable broadband connections rather than relying solely on expensive MPLS (Multiprotocol Label Switching) links. This not only reduces costs but also enhances network resilience.

c) Simplified Management: SD-WAN centralizes network management through a user-friendly interface, allowing IT teams to easily configure, monitor, and troubleshoot network connections. This simplification saves time and resources, enabling IT professionals to focus on strategic initiatives.

SD-WAN incorporates robust security measures to protect network traffic and sensitive data. It employs encryption protocols, firewall capabilities, and traffic segmentation techniques to safeguard against unauthorized access and potential cyber threats. These advanced security features give businesses peace of mind and ensure data integrity.

WAN Virtualization with Network Connectivity Center

**Understanding Google Network Connectivity Center**

Google Network Connectivity Center (NCC) is a cloud-based service designed to simplify and centralize network management. By leveraging Google’s extensive global infrastructure, NCC provides organizations with a unified platform to manage their network connectivity across various environments, including on-premises data centers, multi-cloud setups, and hybrid environments.

**Key Features and Benefits**

1. **Centralized Network Management**: NCC offers a single pane of glass for network administrators to monitor and manage connectivity across different environments. This centralized approach reduces the complexity associated with managing multiple network endpoints and enhances operational efficiency.

2. **Enhanced Security**: With NCC, organizations can implement robust security measures across their network. The service supports advanced encryption protocols and integrates seamlessly with Google’s security tools, ensuring that data remains secure as it moves between different environments.

3. **Scalability and Flexibility**: One of the standout features of NCC is its ability to scale with your organization’s needs. Whether you’re expanding your data center operations or integrating new cloud services, NCC provides the flexibility to adapt quickly and efficiently.

**Optimizing Data Center Operations**

Data centers are the backbone of modern digital infrastructure, and optimizing their operations is crucial for any organization. NCC facilitates this by offering tools that enhance data center connectivity and performance. For instance, with NCC, you can easily set up and manage VPNs, interconnect data centers across different regions, and ensure high availability and redundancy.

**Seamless Integration with Other Google Services**

NCC isn’t just a standalone service; it integrates seamlessly with other Google Cloud services such as Cloud Interconnect, Cloud VPN, and Google Cloud Armor. This integration allows organizations to build comprehensive network solutions that leverage the best of Google’s cloud offerings. Whether it’s enhancing security, improving performance, or ensuring compliance, NCC works in tandem with other services to deliver a cohesive and powerful network management solution.

Understanding Network Tiers

Google Cloud offers two distinct Network Tiers: Premium Tier and Standard Tier. Each tier is designed to cater to specific use cases and requirements. The Premium Tier provides users with unparalleled performance, low latency, and high availability. On the other hand, the Standard Tier offers a more cost-effective solution without compromising on reliability.

The Premium Tier, powered by Google’s global fiber network, ensures lightning-fast connectivity and optimal performance for critical workloads. With its vast network of points of presence (PoPs), it minimizes latency and enables seamless data transfers across regions. By leveraging the Premium Tier, businesses can ensure superior user experiences and support demanding applications that require real-time data processing.

While the Premium Tier delivers exceptional performance, the Standard Tier presents an attractive option for cost-conscious organizations. By utilizing Google Cloud’s extensive network peering relationships, the Standard Tier offers reliable connectivity at a reduced cost. It is an ideal choice for workloads that are less latency-sensitive or require moderate bandwidth.

What is VPC Networking?

VPC networking refers to the virtual network environment that allows you to securely connect your resources running in the cloud. It provides isolation, control, and flexibility, enabling you to define custom network configurations to suit your specific needs. In Google Cloud, VPC networking is a fundamental building block for your cloud infrastructure.

Google Cloud VPC networking offers a range of powerful features that enhance your network management capabilities. These include subnetting, firewall rules, route tables, VPN connectivity, and load balancing. Let’s explore each of these features in more detail:

Subnetting: With VPC subnetting, you can divide your IP address range into smaller subnets, allowing for better resource allocation and network segmentation.

Firewall Rules: Google Cloud VPC networking provides robust firewall rules that enable you to control inbound and outbound traffic, ensuring enhanced security for your applications and data.

Route Tables: Route tables in VPC networking allow you to define the routing logic for your network traffic, ensuring efficient communication between different subnets and external networks.

VPN Connectivity: Google Cloud supports VPN connectivity, allowing you to establish secure connections between your on-premises network and your cloud resources, creating a hybrid infrastructure.

Load Balancing: VPC networking offers load balancing capabilities, distributing incoming traffic across multiple instances, increasing availability and scalability of your applications.
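As a quick illustration of the subnetting feature above, Python's standard `ipaddress` module can carve a VPC-style address range into smaller subnets. The CIDR ranges here are illustrative, not tied to any particular Google Cloud project:

```python
import ipaddress

# Carve a VPC-style /16 range into /20 subnets for different workloads.
# The CIDR values are illustrative.
vpc_range = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_range.subnets(new_prefix=20))

for subnet in subnets[:3]:
    print(subnet, "->", subnet.num_addresses, "addresses")
```

A /16 yields sixteen /20 subnets of 4,096 addresses each, which maps naturally onto per-region or per-workload segmentation.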

Example: DMVPN ( Dynamic Multipoint VPN)

Separating control from the data plane

DMVPN is a Cisco-developed solution that combines the benefits of multipoint GRE tunnels, IPsec encryption, and dynamic routing protocols to create a flexible and efficient virtual private network. It simplifies network architecture, reduces operational costs, and enhances scalability. With DMVPN, organizations can connect remote sites, branch offices, and mobile users seamlessly, creating a cohesive network infrastructure.

The underlay infrastructure forms the foundation of DMVPN. It refers to the physical network that connects the different sites or locations. This could be an existing Wide Area Network (WAN) infrastructure, such as MPLS, or the public Internet. The underlay provides the transport for the overlay network, enabling the secure transmission of data packets between sites.

The overlay network is the virtual network created on top of the underlay infrastructure. It is responsible for establishing the secure tunnels and routing between the connected sites. DMVPN uses multipoint GRE tunnels to allow dynamic and direct communication between sites, eliminating the need for a hub-and-spoke topology. IPsec encryption ensures the confidentiality and integrity of data transmitted over the overlay network.

Example WAN Technology: Tunneling IPv6 over IPV4

IPv6 tunneling is a technique that allows the transmission of IPv6 packets over an IPv4 network infrastructure. It enables communication between IPv6 networks by encapsulating IPv6 packets within IPv4 packets. By doing so, organizations can utilize existing IPv4 infrastructure while transitioning to IPv6. Before delving into its various implementations, understanding the basics of IPv6 tunneling is crucial.

Types of IPv6 Tunneling

There are several types of IPv6 tunneling techniques, each with its advantages and considerations. Let’s explore a few popular types:

Manual Tunneling: Manual tunneling is a straightforward method in which tunnel endpoints are configured by hand. It requires manually configuring tunnel interfaces on each participating device. While it provides flexibility and control, this approach can be time-consuming and prone to human error.

Automatic Tunneling: Automatic tunneling, also known as 6to4 tunneling, allows for the automatic creation of tunnels without manual configuration. It utilizes the 6to4 addressing scheme, where IPv6 packets are encapsulated within IPv4 packets using protocol 41. While convenient, automatic tunneling may encounter issues with address translation and compatibility.

Teredo Tunneling: Teredo tunneling is another automatic technique that enables IPv6 connectivity for hosts behind IPv4 Network Address Translation (NAT) devices. It uses UDP encapsulation to carry IPv6 packets over IPv4 networks. Though widely supported, Teredo tunneling may suffer from performance limitations due to its reliance on UDP.
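To make the 6to4 addressing scheme concrete, the sketch below derives the 2002::/16-based /48 prefix that 6to4 (RFC 3056) assigns to a public IPv4 address. It is a self-contained illustration of the address mapping, not a tunnel implementation:

```python
import ipaddress

def derive_6to4_prefix(ipv4_addr):
    """Derive the 2002::/16-based 6to4 /48 prefix for a public IPv4 address
    (RFC 3056): the 32-bit IPv4 address immediately follows the 2002: prefix."""
    v4 = ipaddress.IPv4Address(ipv4_addr)
    prefix_int = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

print(derive_6to4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Every public IPv4 address thus implies a unique /48 of IPv6 space, which is what lets 6to4 create tunnels without manual coordination.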

WAN Virtualization Technologies

Understanding VRFs

VRFs, in simple terms, allow the creation of multiple virtual routing tables within a single physical router or switch. Each VRF operates as an independent routing instance with its own routing table, interfaces, and forwarding decisions. This powerful concept allows for logical separation of network traffic, enabling enhanced security, scalability, and efficiency.

One of VRFs’ primary advantages is network segmentation. By creating separate VRF instances, organizations can effectively isolate different parts of their network, ensuring traffic from one VRF cannot directly communicate with another. This segmentation enhances network security and provides granular control over network resources.

Furthermore, VRFs enable efficient use of network resources. By utilizing VRFs, organizations can optimize their routing decisions, ensuring that traffic is forwarded through the most appropriate path based on the specific requirements of each VRF. This dynamic routing capability leads to improved network performance and better resource utilization.
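A minimal sketch of the isolation VRFs provide: each VRF keeps its own routing table, so the same prefix can exist in two VRFs and resolve to different interfaces. The VRF names, prefixes, and interface names below are purely illustrative:

```python
import ipaddress

# Two VRFs hold the *same* prefix but forward out different interfaces,
# demonstrating per-VRF isolation. All names and prefixes are illustrative.
vrfs = {
    "customer-a": {ipaddress.ip_network("10.0.0.0/24"): "GigabitEthernet0/1"},
    "customer-b": {ipaddress.ip_network("10.0.0.0/24"): "GigabitEthernet0/2"},
}

def lookup(vrf, dest):
    """Longest-prefix match confined to a single VRF's table."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in vrfs[vrf] if addr in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)
    return vrfs[vrf][best]

print(lookup("customer-a", "10.0.0.5"))  # GigabitEthernet0/1
print(lookup("customer-b", "10.0.0.5"))  # GigabitEthernet0/2
```

Because the lookup never leaves its own VRF table, overlapping customer address space causes no conflict, which is exactly why service providers rely on VRFs.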

Use Cases for VRFs

VRFs are widely used in various networking scenarios. One common use case is in service provider networks, where VRFs separate customer traffic, allowing multiple customers to share a single physical infrastructure while maintaining isolation. This approach brings cost savings and scalability benefits.

Another use case for VRFs is in enterprise networks with strict security requirements. By leveraging VRFs, organizations can segregate sensitive data traffic from the rest of the network, reducing the risk of unauthorized access and potential data breaches.

Example WAN technology: Cisco PfR

Cisco PfR is an intelligent routing solution that utilizes real-time performance metrics to make dynamic routing decisions. By continuously monitoring network conditions, such as latency, jitter, and packet loss, PfR can intelligently reroute traffic to optimize performance. Unlike traditional static routing protocols, PfR adapts to network changes on the fly, ensuring optimal utilization of available resources.

Key Features of Cisco PfR

a. Performance Monitoring: PfR continuously collects performance data from various sources, including routers, probes, and end-user devices. This data provides valuable insights into network behavior and helps identify areas of improvement.

b. Intelligent Traffic Engineering: With its advanced algorithms, Cisco PfR can dynamically select the best path for traffic based on predefined policies and performance metrics. This enables efficient utilization of available network resources and minimizes congestion.

c. Application Visibility and Control: PfR offers deep visibility into application-level performance, allowing network administrators to prioritize critical applications and allocate resources accordingly. This ensures optimal performance for business-critical applications and improves overall user experience.

Performance-based routing

DMVPN and WAN Virtualization

Example WAN Technology: Network Overlay

Virtual network overlays serve as a layer of abstraction, enabling the creation of multiple virtual networks on top of a physical network infrastructure. By encapsulating network traffic within virtual tunnels, overlays provide isolation, scalability, and flexibility, empowering organizations to manage their networks efficiently.

Underneath the surface, virtual network overlays rely on encapsulation protocols such as Virtual Extensible LAN (VXLAN) or Generic Routing Encapsulation (GRE). These protocols enable the creation of virtual tunnels, allowing network packets to traverse the physical infrastructure while remaining isolated within their respective virtual networks.
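As a concrete taste of overlay encapsulation, the sketch below builds the minimal 8-byte VXLAN header defined in RFC 7348 (I flag set, 24-bit VNI). It illustrates the header layout only, not a working tunnel endpoint:

```python
import struct

def vxlan_header(vni):
    """Build the minimal 8-byte VXLAN header from RFC 7348: the I flag set
    to mark the 24-bit VNI as valid, all other bits reserved (zero)."""
    flags_word = 0x08 << 24       # I bit in the first byte
    return struct.pack("!II", flags_word, vni << 8)

header = vxlan_header(5000)      # VNI 5000 identifies one virtual network
print(header.hex())              # 0800000000138800
```

The 24-bit VNI gives roughly 16 million distinct virtual networks over one physical underlay, which is the scalability property overlays are prized for.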

**What is GRE?**

At its core, Generic Routing Encapsulation is a tunneling protocol that allows the encapsulation of different network layer protocols within IP packets. It acts as an envelope, carrying packets from one network to another across an intermediate network. GRE provides a flexible and scalable solution for connecting disparate networks, facilitating seamless communication.

GRE encapsulates the original packet, often called the payload, within a new IP packet. This encapsulated packet is then sent to the destination network, where it is decapsulated to retrieve the original payload. By adding an IP header, GRE enables the transportation of various protocols across different network infrastructures, including IPv4, IPv6, IPX, and MPLS.
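The encapsulation step can be sketched in a few lines: a basic RFC 2784 GRE header is just four bytes (flags/version plus the payload's protocol type) prepended to the original packet. The payload bytes here are a stand-in, not a real IP packet:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType identifying the encapsulated payload

def gre_encapsulate(payload, proto=GRE_PROTO_IPV4):
    """Prepend a minimal 4-byte GRE header (RFC 2784): no checksum, key,
    or sequence number; version 0."""
    flags_and_version = 0x0000
    return struct.pack("!HH", flags_and_version, proto) + payload

def gre_decapsulate(packet):
    """Strip the 4-byte header; return (protocol type, original payload)."""
    _, proto = struct.unpack("!HH", packet[:4])
    return proto, packet[4:]

inner = b"\x45\x00\x00\x14"  # stand-in for the start of an IPv4 packet
proto, recovered = gre_decapsulate(gre_encapsulate(inner))
assert proto == GRE_PROTO_IPV4 and recovered == inner
```

The protocol-type field is what lets GRE carry IPv4, IPv6, or MPLS payloads interchangeably: the decapsulating router reads it to know how to hand off the inner packet.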

**Introducing IPSec Services**

IPSec, short for Internet Protocol Security, is a suite of protocols that provides security services at the IP network layer. It offers data integrity, confidentiality, and authentication features, ensuring that data transmitted over IP networks remains protected from unauthorized access and tampering. IPSec operates in two modes: Transport Mode and Tunnel Mode.

**Combining GRE & IPSec**

By combining GRE and IPSec, organizations can create secure and private communication channels over public networks. GRE provides the tunneling mechanism, while IPSec adds an extra layer of security by encrypting and authenticating the encapsulated packets. This combination allows for the secure transmission of sensitive data, remote access to private networks, and the establishment of virtual private networks (VPNs).

The combination of GRE and IPSec offers several advantages. First, it enables the creation of secure VPNs, allowing remote users to connect securely to private networks over public infrastructure. Second, it protects against eavesdropping and data tampering, ensuring the confidentiality and integrity of transmitted data. Lastly, GRE and IPSec are vendor-neutral protocols widely supported by various network equipment, making them accessible and compatible.

Diagram: GRE with IPsec.

What is MPLS?

MPLS, short for Multi-Protocol Label Switching, is a versatile and scalable protocol used in modern networks. At its core, MPLS assigns labels to network packets, allowing for efficient and flexible routing. These labels help streamline traffic flow, leading to improved performance and reliability. To understand how MPLS works, we need to explore its key components.

The basic building block is the Label Switched Path (LSP), a predetermined path that packets follow. Labels are attached to packets at the ingress router, guiding them along the LSP until they reach their destination. This label-based forwarding mechanism enables MPLS to offer traffic engineering capabilities and support various network services.
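The label-swapping behavior of an LSP can be sketched as a chain of per-router lookup tables: the ingress pushes a label for the destination prefix, and each transit router swaps (or pops) it. Router names, prefixes, and label values below are illustrative:

```python
# Toy label-switched path (LSP). Labels and names are illustrative.
label_tables = {
    "ingress": {"10.1.1.0/24": ("R2", 100)},  # FEC -> (next hop, pushed label)
    "R2":      {100: ("R3", 200)},            # in-label -> (next hop, out-label)
    "R3":      {200: ("egress", None)},       # None = pop the label at egress
}

def forward(prefix):
    """Follow the LSP for a prefix; return the sequence of (hop, label)."""
    next_hop, label = label_tables["ingress"][prefix]  # push at ingress
    path = [(next_hop, label)]
    while label is not None:
        next_hop, label = label_tables[next_hop][label]  # swap or pop
        path.append((next_hop, label))
    return path

print(forward("10.1.1.0/24"))  # [('R2', 100), ('R3', 200), ('egress', None)]
```

Note that transit routers only ever look at the label, never the destination IP; that fixed-size lookup is what historically made label switching fast and what still enables MPLS traffic engineering.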

Understanding Label Distribution Protocols

Label distribution protocols are fundamental to modern networking. They establish and maintain label-switched paths (LSPs) by distributing the labels used to identify and forward network traffic efficiently. By leveraging labels, LDP enhances network scalability and enables faster packet forwarding.

One key advantage of label distribution protocols is their support for Multiprotocol Label Switching (MPLS). MPLS allows efficient routing of different types of network traffic, including IP, Ethernet, and ATM. This versatility makes label distribution protocols highly adaptable and suitable for diverse network environments. Additionally, LDP minimizes network congestion, improves Quality of Service (QoS), and promotes effective resource utilization.

What is MPLS LDP?

MPLS LDP, or Label Distribution Protocol, is a key component of Multiprotocol Label Switching (MPLS) technology. It facilitates the establishment of label-switched paths (LSPs) through the network, enabling efficient forwarding of data packets. MPLS LDP uses labels to direct network traffic along predetermined paths, eliminating the need for complex routing table lookups.

One of MPLS LDP’s primary advantages is its ability to enhance network performance. By utilizing labels, MPLS LDP reduces the time and resources required for packet forwarding, resulting in faster data transmission and reduced network congestion. Additionally, MPLS LDP allows for traffic engineering, enabling network administrators to prioritize certain types of traffic and allocate bandwidth accordingly.

Understanding MPLS VPNs

MPLS VPNs, or Multiprotocol Label Switching Virtual Private Networks, provide a network infrastructure that allows multiple sites or branches of an organization to communicate securely over a shared service provider network. Unlike traditional VPNs, MPLS VPNs use labels to route and prioritize data packets efficiently, ensuring optimal performance and security. By encapsulating data within labeled paths, MPLS VPNs enable seamless communication between different sites while maintaining privacy and segregation.

Understanding VPLS

VPLS, short for Virtual Private LAN Service, is a technology that enables the creation of a virtual LAN (Local Area Network) over a shared or public network infrastructure. It allows geographically dispersed sites to connect as if they are part of the same LAN, regardless of their physical distance. This technology uses MPLS (Multiprotocol Label Switching) to transport Ethernet frames across the network efficiently.

Key Features and Benefits

Scalability and Flexibility: VPLS offers scalability, allowing businesses to easily expand their network as their requirements grow. It allows adding or removing sites without disrupting the overall network, making it an ideal choice for organizations with dynamic needs.

Seamless Connectivity: By extending the LAN across different locations, VPLS provides a seamless and transparent network experience. Employees can access shared resources, such as files and applications, as if they were all in the same office, promoting collaboration and productivity across geographically dispersed teams.

Enhanced Security: VPLS provides a strong degree of separation by isolating each customer’s traffic within its own virtual LAN. Traffic is encapsulated and kept segregated from other customers, and encryption can be layered on top for additional protection. This makes VPLS a reliable option for organizations that handle sensitive information and must comply with strict security regulations.

**Advanced WAN Designs**

DMVPN Phase 2 Spoke to Spoke Tunnels

Learning the mapping information required through NHRP resolution creates a dynamic spoke-to-spoke tunnel. How does a spoke know how to perform such a task? Spoke-to-spoke tunnels were first introduced in DMVPN Phase 2 as an enhancement to Phase 1. Phase 2 handed responsibility for NHRP resolution requests to each spoke individually: a spoke initiates an NHRP resolution request when it determines that a packet needs a spoke-to-spoke tunnel. Cisco Express Forwarding (CEF) assists the spoke in making this decision based on information in its routing table.
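A toy model of that Phase 2 behavior: the spoke asks the hub (the next-hop server) to resolve a peer's tunnel address to its real NBMA address, caches the answer, and can then tunnel directly. All addresses below are illustrative:

```python
# The hub (next-hop server) holds tunnel-to-NBMA registrations; a spoke
# resolves a peer on demand and caches the answer for direct tunneling.
# Addresses are illustrative.
nhs_registrations = {
    "192.168.100.2": "198.51.100.2",  # spoke R2: tunnel IP -> public NBMA IP
    "192.168.100.3": "198.51.100.3",  # spoke R3
}

spoke_nhrp_cache = {}

def resolve(peer_tunnel_ip):
    """Spoke-initiated NHRP resolution, answered by the hub's registrations."""
    if peer_tunnel_ip not in spoke_nhrp_cache:
        spoke_nhrp_cache[peer_tunnel_ip] = nhs_registrations[peer_tunnel_ip]
    return spoke_nhrp_cache[peer_tunnel_ip]

# R2 resolves R3's NBMA address and can now build a spoke-to-spoke tunnel.
print(resolve("192.168.100.3"))  # 198.51.100.3
```

Once cached, subsequent traffic between the two spokes bypasses the hub entirely, which is the whole point of Phase 2.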

Exploring Single Hub Dual Cloud Architecture

– Single Hub Dual Cloud is a specific deployment model within the DMVPN framework that provides enhanced redundancy and improved performance. This architecture connects a single hub device to two separate cloud service providers, creating two independent VPN clouds. This setup offers numerous advantages, including increased availability, load balancing, and optimized traffic routing.

– One key benefit of the Single Hub Dual Cloud approach is improved network resiliency. With two independent clouds, businesses can ensure uninterrupted connectivity even if one cloud or service provider experiences issues. This redundancy minimizes downtime and helps maintain business continuity. This architecture’s load-balancing capabilities also enable efficient traffic distribution, reducing congestion and enhancing overall network performance.

– Implementing DMVPN Single Hub Dual Cloud requires careful planning and configuration. Organizations must assess their needs, evaluate suitable cloud service providers, and design a robust network architecture. Working with experienced network engineers and leveraging automation tools can streamline deployment and ensure successful implementation.

WAN Services

Network Address Translation:

In simple terms, NAT is a technique for modifying IP addresses while packets traverse from one network to another. It bridges private local networks and the public Internet, allowing multiple devices to share a single public IP address. By translating IP addresses, NAT enables private networks to communicate with external networks without exposing their internal structure.

Types of Network Address Translation

There are several types of NAT, each serving a specific purpose. Let’s explore a few common ones:

Static NAT: Static NAT, also known as one-to-one NAT, maps a private IP address to a public IP address. It is often used when specific devices on a network require direct access to the internet. With static NAT, inbound and outbound traffic can be routed seamlessly.

Dynamic NAT: On the other hand, Dynamic NAT allows a pool of public IP addresses to be shared among several devices within a private network. As devices connect to the internet, they are assigned an available public IP address from the pool. Dynamic NAT facilitates efficient utilization of public IP addresses while maintaining network security.

Port Address Translation (PAT): PAT, also called NAT Overload, is an extension of dynamic NAT. Rather than assigning a unique public IP address to each device, PAT assigns a unique port number to each connection. PAT allows multiple devices to share a single public IP address by keeping track of port numbers. This technique is widely used in home networks and small businesses.
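The PAT behavior described above can be sketched as a small translation table keyed on the inside address and port: many inside hosts share one public IP, and the translated source port keeps their connections distinct. The addresses and port range are illustrative:

```python
import itertools

# Many inside hosts share one public IP; the translated source port keeps
# their connections distinct. All addresses/ports are illustrative.
PUBLIC_IP = "203.0.113.10"
_ports = itertools.count(40000)
nat_table = {}  # (inside_ip, inside_port) -> translated public port

def translate_outbound(inside_ip, inside_port):
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next(_ports)  # allocate the next free public port
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.11", 51000))  # ('203.0.113.10', 40001)
```

A real NAT device would also age entries out and translate return traffic by reverse lookup, but the port-multiplexing idea is the same.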

NAT plays a crucial role in enhancing network security. By hiding devices’ internal IP addresses, it acts as a barrier against potential attacks from the Internet. External threats find it harder to identify and target individual devices within a private network. NAT acts as a shield, providing additional security to the network infrastructure.

PBR At the WAN Edge

Understanding Policy-Based Routing

Policy-based Routing (PBR) allows network administrators to control the path of network traffic based on specific policies or criteria. Unlike traditional routing protocols, PBR offers a more granular and flexible approach to directing network traffic, enabling fine-grained control over routing decisions.

PBR offers many features and functionalities that empower network administrators to optimize network traffic flow. Some key aspects include:

1. Traffic Classification: PBR allows the classification of network traffic based on various attributes such as source IP, destination IP, protocol, port numbers, or even specific packet attributes. This flexibility enables administrators to create customized policies tailored to their network requirements.

2. Routing Decision Control: With PBR, administrators can define specific routing decisions for classified traffic. Traffic matching certain criteria can be directed towards a specific next-hop or exit interface, bypassing the regular routing table.

3. Load Balancing and Traffic Engineering: PBR can distribute traffic across multiple paths, leveraging load balancing techniques. By intelligently distributing traffic, administrators can optimize resource utilization and enhance network performance.
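The classification-and-override logic above can be sketched as an ordered policy list evaluated before the normal routing table: the first matching policy overrides the next hop, and unmatched flows fall through. Match criteria and next-hop names are illustrative:

```python
# Ordered policy list evaluated before the normal routing table; the first
# matching policy overrides the next hop. Criteria/names are illustrative.
policies = [
    (lambda f: f["dst_port"] == 443, "isp-a"),             # HTTPS via ISP A
    (lambda f: f["src_ip"].startswith("10.20."), "mpls"),  # branch via MPLS
]
DEFAULT_NEXT_HOP = "isp-b"  # fall through to the regular routing table

def route(flow):
    for match, next_hop in policies:
        if match(flow):
            return next_hop
    return DEFAULT_NEXT_HOP

print(route({"src_ip": "10.1.1.5", "dst_port": 443}))  # isp-a
print(route({"src_ip": "10.20.3.7", "dst_port": 22}))  # mpls
print(route({"src_ip": "10.1.1.5", "dst_port": 22}))   # isp-b
```

This first-match, ordered evaluation mirrors how route maps are processed on most routers that implement PBR.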

Performance at the WAN Edge

Understanding TCP MSS

TCP MSS refers to the maximum amount of data encapsulated in a single TCP segment. It determines the payload size within each TCP packet, excluding the TCP/IP headers. By limiting the MSS, TCP ensures that data is transmitted in manageable chunks, preventing fragmentation and improving overall network performance.

Several factors influence the determination of TCP MSS. One crucial aspect is the underlying network’s Maximum Transmission Unit (MTU), the largest packet that can be transmitted over a link without fragmentation. TCP MSS is typically derived from the MTU (the MTU minus the IP and TCP headers, usually 40 bytes in total) to avoid packet fragmentation and subsequent retransmissions.

By appropriately configuring TCP MSS, network administrators can optimize network performance. Sizing the TCP MSS to fit within the MTU reduces the chances of packet fragmentation, which can lead to delays and retransmissions. Moreover, a properly sized TCP MSS can prevent unnecessary overhead and improve bandwidth utilization.

Adjusting the TCP MSS to suit specific network requirements is possible. Network administrators can configure the TCP MSS value on routers, firewalls, and end devices. This flexibility allows for fine-tuning network performance based on the specific characteristics and constraints of the network infrastructure.
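The MTU-to-MSS relationship reduces to simple arithmetic: subtract the IP and TCP headers (20 bytes each, without options) and any tunnel overhead from the MTU. The GRE overhead figure below is an assumption for a plain GRE tunnel (outer IP header plus 4-byte GRE header):

```python
IP_HEADER = 20   # bytes, IPv4 header without options
TCP_HEADER = 20  # bytes, TCP header without options

def mss_for_mtu(mtu, extra_overhead=0):
    """MSS that avoids fragmentation for a given MTU. extra_overhead covers
    tunnel headers, e.g. roughly 24 bytes for plain GRE (outer IP + GRE)."""
    return mtu - IP_HEADER - TCP_HEADER - extra_overhead

print(mss_for_mtu(1500))                     # 1460 on plain Ethernet
print(mss_for_mtu(1500, extra_overhead=24))  # 1436 across a GRE tunnel
```

This is why WAN-edge routers commonly clamp MSS on tunnel interfaces: the encapsulation overhead silently shrinks the usable payload.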

WAN – The desired benefits

Businesses often want to replace or augment premium bandwidth services and switch from active/standby to active/active WAN transport models to reduce costs. The challenge, however, is that augmentation can increase operational complexity. To create a consistent operational model and simplify IT, businesses must keep this complexity in check.

The importance of maintaining remote-site uptime for business continuity goes beyond preventing blackouts. Latency, jitter, and loss can degrade critical applications to the point where they are effectively unavailable even though the link is up. The term “brownout” refers to these situations. Businesses today are focused on providing a consistent, high-quality application experience.

Ensuring connectivity

To ensure connectivity and make changes quickly, there is a shift toward retaking control of the WAN. That control extends beyond routing or quality of service to include application experience and availability. Many businesses are still unfamiliar with operating an Internet edge at remote sites, yet this is precisely what allows Software as a Service (SaaS) and productivity applications to be rolled out more effectively.

Better access to Infrastructure as a Service (IaaS) is also necessary, and many businesses are interested in offloading guest traffic at branches with direct Internet connectivity. Offloading this traffic locally is more efficient than routing it through a centralized data center, which needlessly consumes WAN bandwidth.

The shift to application-centric architecture

Business requirements are changing rapidly, and today’s networks cannot cope. Hardware-centric networks are traditionally more expensive and have fixed capacity. In addition, the box-by-box configuration approach, siloed management tools, and lack of automated provisioning make them more challenging to support.

They are inflexible, static, expensive, and difficult to maintain due to conflicting policies between domains and different configurations between services. As a result, security vulnerabilities and misconfigurations are more likely to occur. An application- or service-centric architecture focusing on simplicity and user experience should replace a connectivity-centric architecture.

Understanding Virtualization

Virtualization is a technology that allows the creation of virtual versions of various IT resources, such as servers, networks, and storage devices. These virtual resources operate independently from physical hardware, enabling multiple operating systems and applications to run simultaneously on a single physical machine. Virtualization opens possibilities by breaking the traditional one-to-one relationship between hardware and software. Now, virtualization has moved to the WAN.

WAN Virtualization and SD-WAN

Organizations constantly seek innovative solutions in modern networking to enhance their network infrastructure and optimize connectivity. One such solution that has gained significant attention is WAN virtualization. In this blog post, we will delve into the concept of WAN virtualization, its benefits, and how it revolutionizes how businesses connect and communicate.

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that enables organizations to abstract their wide area network (WAN) connections from the underlying physical infrastructure. It leverages software-defined networking (SDN) principles to decouple network control and data forwarding, providing a more flexible, scalable, and efficient network solution.

VPN and SDN Components

WAN virtualization is an essential technology in the modern business world. It creates virtualized versions of wide area networks (WANs) – networks spanning a wide geographic area. The virtualized WANs can then manage and secure a company’s data, applications, and services.

Regarding implementation, WAN virtualization typically relies on a virtual private network (VPN), so that only authorized users with the proper credentials can access the data. It also relies on software-defined networking (SDN) to manage the network and its components.

Related: Before you proceed, you may find the following posts helpful:

  1. SD WAN Overlay
  2. Generic Routing Encapsulation
  3. WAN Monitoring
  4. SD WAN Security 
  5. Container Based Virtualization
  6. SD WAN and Nuage Networks

WAN Virtualization

Knowledge Check: Application-Aware Routing (AAR)

Understanding Application-Aware Routing (AAR)

Application-aware routing is a sophisticated networking technique that goes beyond traditional packet-based routing. It considers the unique requirements of different applications, such as video streaming, cloud-based services, or real-time communication, and optimizes the network path accordingly. It ensures smooth and efficient data transmission by prioritizing and steering traffic based on application characteristics.

Benefits of Application-Aware Routing

1- Enhanced Performance: Application-aware routing significantly improves overall performance by dynamically allocating network resources to applications with high bandwidth or low latency requirements. This translates into faster downloads, seamless video streaming, and reduced response times for critical applications.

2- Increased Reliability: Traditional routing methods treat all traffic equally, often resulting in congestion and potential bottlenecks. Application-aware routing intelligently distributes network traffic, avoiding congested paths and ensuring a reliable and consistent user experience. In the event of network failure or congestion, it can dynamically reroute traffic to alternative paths, minimizing downtime and disruptions.

Implementation Strategies

1- Deep Packet Inspection: A key component of Application-Aware Routing is deep packet inspection (DPI), which analyzes the content of network packets to identify specific applications. DPI enables routers and switches to make informed decisions about handling each packet based on its application, ensuring optimal routing and resource allocation.

2- Quality of Service (QoS) Configuration: Implementing QoS parameters alongside Application Aware Routing allows network administrators to allocate bandwidth, prioritize specific applications over others, and enforce policies to ensure the best possible user experience. QoS configurations can be customized based on organizational needs and application requirements.
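A simplified sketch of the path-selection step in application-aware routing: compare each path's measured metrics against the application class's SLA thresholds and steer the class onto the lowest-latency compliant path. The paths, metrics, and thresholds are illustrative, not drawn from any vendor's defaults:

```python
# Sketch of application-aware routing: measure each path, then steer an
# application class onto the lowest-latency path that meets its SLA.
# Path metrics and SLA thresholds are illustrative.
paths = {
    "mpls":     {"latency_ms": 30, "loss_pct": 0.1},
    "internet": {"latency_ms": 80, "loss_pct": 0.8},
}

sla_classes = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0},  # strict real-time class
    "bulk":  {"latency_ms": 300, "loss_pct": 5.0},  # tolerant transfer class
}

def select_path(app_class):
    sla = sla_classes[app_class]
    compliant = [name for name, m in paths.items()
                 if m["latency_ms"] <= sla["latency_ms"]
                 and m["loss_pct"] <= sla["loss_pct"]]
    # Prefer the lowest-latency compliant path; fall back to best effort.
    pool = compliant or list(paths)
    return min(pool, key=lambda name: paths[name]["latency_ms"])

print(select_path("voice"))  # mpls
```

In a real deployment the metrics would come from continuous probes rather than static values, and a path failing its SLA would trigger automatic rerouting.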

Future Possibilities

As the digital landscape continues to evolve, the potential for Application-Aware Routing is boundless. With emerging technologies like the Internet of Things (IoT) and 5G networks, the ability to intelligently route traffic based on specific application needs will become even more critical. Application-aware routing has the potential to optimize resource utilization, enhance security, and support the seamless integration of diverse applications and services.

WAN Challenges

Deploying and managing the Wide Area Network (WAN) has become more challenging. Engineers face several design challenges, such as decentralizing traffic flows, inefficient WAN link utilization, routing protocol convergence, and application performance issues with active-active WAN edge designs. Active-active WAN designs that spray and pray over multiple active links present technical and business challenges.

To do this efficiently, you have to understand application flows. There may also be performance problems: because each link propagates at a different speed, packets can arrive out of order at the far end and must be reassembled, causing jitter and delay. Both high jitter and delay are bad for network performance. To recap on WAN virtualization, including the drivers for SD-WAN, you may follow this SD WAN tutorial.

What is WAN Virtualization
Diagram: What is WAN virtualization? Source: LinkedIn.

Knowledge Check: Control and Data Plane

Understanding the Control Plane

The control plane can be likened to a network’s brain. It is responsible for making high-level decisions and managing network-wide operations. From routing protocols to network management systems, the control plane ensures data is directed along the most optimal paths. By analyzing network topology, the control plane determines the best routes to reach a destination and establishes the necessary rules for data transmission.

Unveiling the Data Plane

In contrast to the control plane, the data plane focuses on the actual movement of data packets within the network. It can be thought of as the hands and feet executing the control plane’s instructions. The data plane handles packet forwarding, traffic classification, and Quality of Service (QoS) enforcement tasks. It ensures that data packets are correctly encapsulated, forwarded to their intended destinations, and delivered with the necessary priority and reliability.

Use Cases and Deployment Scenarios

Distributed Enterprises:

For organizations with multiple branch locations, WAN virtualization offers a cost-effective solution for connecting remote sites to the central network. It allows for secure and efficient data transfer between branches, enabling seamless collaboration and resource sharing.

Cloud Connectivity:

WAN virtualization is ideal for enterprises adopting cloud-based services. It provides a secure and optimized connection to public and private cloud environments, ensuring reliable access to critical applications and data hosted in the cloud.

Disaster Recovery and Business Continuity:

WAN virtualization plays a vital role in disaster recovery strategies. Organizations can ensure business continuity during a natural disaster or system failure by replicating data and applications across geographically dispersed sites.

Challenges and Considerations:

Implementing WAN virtualization requires careful planning and consideration. Factors such as network security, bandwidth requirements, and compatibility with existing infrastructure need to be evaluated. It is essential to choose a solution that aligns with the specific needs and goals of the organization.

SD-WAN vs. DMVPN

Two popular WAN solutions are DMVPN and SD-WAN.

DMVPN (Dynamic Multipoint Virtual Private Network) and SD-WAN (Software-Defined Wide Area Network) are popular solutions to improve connectivity between distributed branch offices. DMVPN is a Cisco-specific solution, and SD-WAN is a software-based solution that can be used with any router. Both solutions provide several advantages, but there are some differences between them.

DMVPN is a secure, cost-effective, and scalable network solution that combines underlying technologies and DMVPN phases (for example, the traditional DMVPN Phase 1) to connect multiple sites. It allows customers to use existing infrastructure and provides easy deployment and management. This solution is an excellent choice for businesses with many branch offices because it enables secure communication and rapid deployment of new sites.

DMVPN and WAN Virtualization

SD-WAN is a software-based solution that is gaining popularity in the enterprise market. It provides improved application performance, security, and network reliability. SD-WAN is an excellent choice for businesses that require high-performance applications across multiple sites. It provides an easy-to-use centralized management console that allows companies to deploy new sites and manage the network quickly.

Diagram: Example with DMVPN. Source is Cisco.

Guide: DMVPN operating over the WAN

The following shows DMVPN operating over the WAN. The SP node represents the WAN network, with R11 as the hub and R2 and R3 as the spokes. Several protocols make the DMVPN network over the WAN possible. First, there is GRE; in this case, the spoke tunnels are configured as point-to-point GRE tunnels with an explicit tunnel destination, rather than as mGRE tunnels.

Then we have NHRP, which is used to create the address mappings; since this is a non-broadcast network, we cannot use ARP. So we need to manually configure the next-hop server on the spokes with the command: ip nhrp nhs 192.168.100.11
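Assuming the tunnel addressing above (the hub's WAN-facing NBMA address is invented for illustration), a minimal spoke-side sketch in IOS-style configuration might look like this:

```
! Hypothetical spoke configuration (phase 1 style: point-to-point GRE).
! 172.16.1.11 is an illustrative hub WAN (NBMA) address.
interface Tunnel0
 ip address 192.168.100.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 192.168.100.11
 ip nhrp map 192.168.100.11 172.16.1.11
 tunnel source GigabitEthernet0/0
 tunnel destination 172.16.1.11
```

The `ip nhrp map` line provides the static tunnel-to-NBMA mapping that NHRP needs on a non-broadcast network, and the explicit `tunnel destination` is what makes this a point-to-point GRE tunnel rather than mGRE.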

Diagram: DMVPN Configuration.

Shift from network-centric to business intent.

The core of WAN virtualization involves shifting focus from a network-centric model to a business intent-based WAN network. So, instead of designing the WAN for the network, we can create the WAN for the application. This way, the WAN architecture can simplify application deployment and management.

First, however, the mindset must shift from a network topology focus to an application services topology. Modern applications consume vast bandwidth and are highly sensitive to variations in bandwidth quality. Jitter, loss, and delay affect most applications, making it essential to improve the WAN environment for them.

Diagram: WAN virtualization.

The spray-and-pray method over two links increases bandwidth but decreases “goodput.” It also affects firewalls, as they will see asymmetric routes. An active-active model requires application session awareness and a design that eliminates asymmetric routing: the WAN must be sliced properly so application flows can work efficiently over either link.

What is WAN Virtualization: Decentralizing Traffic

Decentralizing traffic from the data center to the branch requires more bandwidth at the network’s edges. As a result, many high-bandwidth applications now run at remote sites, which is precisely what businesses are trying to accomplish. Traditional branch sites usually rely on hub sites for most services and do not host bandwidth-intensive applications. Today, remote locations require extra bandwidth, which does not get cheaper year over year.

Inefficient WAN utilization

Redundant WAN links usually require a dynamic routing protocol for traffic engineering and failover. Routing protocols require complex tuning to load balance traffic between border devices. Border Gateway Protocol (BGP) is the primary protocol for connecting sites to external networks.

It relies on path attributes to choose the best path based on availability and distance. Although these attributes allow granular policy control, they do not cover aspects relating to path performance, such as Round Trip Time (RTT), delay, and jitter.

Port 179
Furthermore, BGP does not always choose the “best” path, as “best” may mean different things to different customers. For example, customer A might consider the path via provider A the best due to the price of the links. Default routing does not take this into account. Packet-level routing protocols are not designed to handle the complexities of running over multiple transport-agnostic links. Therefore, a solution must arise that eliminates the need for packet-level routing protocols.
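To illustrate the gap described above, here is a small sketch (not a real BGP implementation; all attribute values and measurements are invented) showing how attribute-based best-path selection can disagree with performance-based selection:

```python
# Sketch: BGP compares attributes such as local preference and AS-path
# length, but never RTT, jitter, or loss. An SD-WAN-style policy that
# measures path performance can reach a different conclusion.

paths = [
    {"via": "provider_A", "as_path_len": 2, "local_pref": 200, "rtt_ms": 180, "loss_pct": 2.0},
    {"via": "provider_B", "as_path_len": 3, "local_pref": 100, "rtt_ms": 35,  "loss_pct": 0.1},
]

def bgp_best_path(paths):
    # Highest local preference wins first, then the shortest AS path.
    return max(paths, key=lambda p: (p["local_pref"], -p["as_path_len"]))

def performance_best_path(paths):
    # Performance-based pick: lowest combined latency/loss score.
    return min(paths, key=lambda p: p["rtt_ms"] + p["loss_pct"] * 100)

print(bgp_best_path(paths)["via"])          # provider_A
print(performance_best_path(paths)["via"])  # provider_B
```

With these numbers, BGP prefers provider A on policy alone, even though provider B is the far better-performing path.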
Diagram: BGP Path Attributes. Source is Cisco.

Routing protocol convergence

WAN designs can also be active-standby, which requires routing protocol convergence in the event of a primary link failure. However, routing convergence is slow; to speed it up, additional features such as Bidirectional Forwarding Detection (BFD) are implemented, which may stress the network’s control plane. Although mechanisms exist to speed up convergence and failure detection, there are still several convergence steps:

Routing Convergence

- Detect: recognize that the link or neighbor has failed
- Describe: propagate the failure information through the routing domain
- Find: calculate an alternate path
- Switch: move traffic onto the new path

Branch office security

With traditional network solutions, branches connect back to the data center, which typically provides Internet access. However, the application world has evolved, and branches directly consume applications such as Office 365 in the cloud. This drives a need for branches to access these services over the Internet without going to the data center for Internet access or security scrubbing.

Extending the security perimeter into the branches should be possible without requiring onsite firewalls/IPS and other security paradigm changes. A solution must exist that allows you to extend your security domain to the branch sites without costly security appliances at each branch: essentially, building a dynamic security fabric.

WAN Virtualization

The solution to all these problems is SD-WAN ( software-defined WAN ). SD-WAN is a transport-independent overlay software-based networking deployment. It uses software and cloud-based technologies to simplify the delivery of WAN services to branch offices. Similar to Software Defined Networking (SDN), SD-WAN works by abstraction. It abstracts network hardware into a control plane with multiple data planes to make up one large WAN fabric.

SD-WAN in a nutshell 

When we consider the Wide Area Network (WAN) environment at a basic level, we connect data centers to several branch offices to deliver packets between those sites, supporting the transport of application transactions and services. The SD-WAN platform allows you to pull Internet connectivity into those sites, becoming part of one large transport-independent WAN fabric.

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE) and chooses the best path based on performance.

There are many forms of Internet connectivity (cable, DSL, broadband, and Ethernet). They are quick to deploy at a fraction of the cost of private MPLS circuits. SD-WAN provides the benefit of using all these links and monitoring which applications are best for them.

Application performance is continuously monitored across all eligible paths-direct internet, internet VPN, and private WAN. It creates an active-active network and eliminates the need to use and maintain traditional routing protocols for active-standby setups—no reliance on the active-standby model and associated problems.
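The selection logic described above can be sketched as a per-application SLA check across monitored links. The thresholds and measurements below are invented for illustration; a real SD-WAN controller would feed this from continuous path probes:

```python
# Sketch: pick a path per application by filtering links against that
# application's SLA, then preferring the lowest RTT among survivors.

links = {
    "mpls":     {"rtt_ms": 40, "jitter_ms": 2,  "loss_pct": 0.0},
    "internet": {"rtt_ms": 25, "jitter_ms": 8,  "loss_pct": 0.5},
    "lte":      {"rtt_ms": 60, "jitter_ms": 20, "loss_pct": 1.5},
}

# Per-application SLA: maximum tolerated RTT, jitter, and loss.
app_sla = {
    "voice": {"rtt_ms": 150, "jitter_ms": 5,  "loss_pct": 1.0},
    "bulk":  {"rtt_ms": 500, "jitter_ms": 50, "loss_pct": 2.0},
}

def eligible_paths(app):
    sla = app_sla[app]
    return [name for name, m in links.items()
            if all(m[k] <= sla[k] for k in sla)]

def best_path(app):
    # Fall back to all links if none meet the SLA.
    candidates = eligible_paths(app) or list(links)
    return min(candidates, key=lambda n: links[n]["rtt_ms"])

print(best_path("voice"))  # mpls: internet and lte fail the 5 ms jitter SLA
print(best_path("bulk"))   # internet: every link qualifies, lowest RTT wins
```

Because every application is evaluated against all eligible paths, traffic naturally spreads across links in an active-active fashion rather than idling a standby circuit.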

Diagram: WAN virtualization. Source is Juniper.

SD-WAN simplifies WAN management

SD-WAN simplifies managing a wide area network by providing a centralized platform for managing and monitoring traffic across the network. This helps reduce the complexity of managing multiple networks, eliminating the need for manual configuration of each site. Instead, all of the sites are configured from a single management console.

SD-WAN also provides advanced security features such as encryption and firewalling, which can be configured to ensure that only authorized traffic is allowed access to the network. Additionally, SD-WAN can optimize network performance by automatically routing traffic over the most efficient paths.


SD-WAN Packet Steering

SD-WAN packet steering is a technology that efficiently routes packets across a wide area network (WAN). It is based on the concept of steering packets so that they can be delivered more quickly and reliably than traditional routing protocols. Packet steering is crucial to SD-WAN technology, allowing organizations to maximize their WAN connections.

SD-WAN packet steering works by analyzing packets sent across the WAN and looking for patterns or trends. Based on these patterns, the SD-WAN can dynamically route the packets to deliver them more quickly and reliably. This can be done in various ways, such as considering latency and packet loss or ensuring the packets are routed over the most reliable connections.

Spraying packets down both links can result in 20% drops or packet reordering. SD-WAN utilizes packets better: no reordering and better “goodput.” SD-WAN also increases your buying power, letting you buy lower-bandwidth links and run them more efficiently. Over-provisioning is unnecessary because the existing WAN bandwidth is used better.


A Final Note: WAN virtualization

Server virtualization and automation in the data center are prevalent, but WANs are stalling in this space. It is the last bastion of hardware models that has complexity. Like hypervisors have transformed data centers, SD-WAN aims to change how WAN networks are built and managed. When server virtualization and hypervisor came along, we did not have to worry about the underlying hardware. Instead, a virtual machine (VM) can be provided and run as an application. Today’s WAN environment requires you to manage details of carrier infrastructure, routing protocols, and encryption. 

  • SD-WAN pulls all WAN resources together and slices up the WAN to match the applications on them.

The Role of WAN Virtualization in Digital Transformation:

In today’s digital era, where cloud-based applications and remote workforces are becoming the norm, WAN virtualization is critical in enabling digital transformation. It empowers organizations to embrace new technologies, such as cloud computing and unified communications, by providing secure and reliable connectivity to distributed resources.

Summary: WAN Virtualization

In our ever-connected world, seamless network connectivity is necessary for businesses of all sizes. However, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern data transmission and application performance. This is where the concept of WAN virtualization comes into play, promising to revolutionize network connectivity like never before.

Understanding WAN Virtualization

WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that abstracts the physical infrastructure of traditional WANs and allows for centralized control, management, and optimization of network resources. By decoupling the control plane from the underlying hardware, WAN virtualization enables organizations to dynamically allocate bandwidth, prioritize critical applications, and ensure optimal performance across geographically dispersed locations.

The Benefits of WAN Virtualization

Enhanced Flexibility and Scalability: With WAN virtualization, organizations can effortlessly scale their network infrastructure to accommodate growing business needs. The virtualized nature of the WAN allows for easy addition or removal of network resources, enabling businesses to adapt to changing requirements without costly hardware upgrades.

Improved Application Performance: WAN virtualization empowers businesses to optimize application performance by intelligently routing network traffic based on application type, quality of service requirements, and network conditions. By dynamically selecting the most efficient path for data transmission, WAN virtualization minimizes latency, improves response times, and enhances overall user experience.

Cost Savings and Efficiency: By leveraging WAN virtualization, organizations can reduce their reliance on expensive Multiprotocol Label Switching (MPLS) connections and embrace more cost-effective broadband links. The ability to intelligently distribute traffic across diverse network paths enhances network redundancy and maximizes bandwidth utilization, providing significant cost savings and improved efficiency.

Implementation Considerations

Network Security: When adopting WAN virtualization, it is crucial to implement robust security measures to protect sensitive data and ensure network integrity. Encryption protocols, threat detection systems, and secure access controls should be implemented to safeguard against potential security breaches.

Quality of Service (QoS): Organizations should prioritize critical applications and allocate appropriate bandwidth resources through Quality of Service (QoS) policies to ensure optimal application performance. By adequately configuring QoS settings, businesses can guarantee mission-critical applications receive the necessary network resources, minimizing latency and providing a seamless user experience.
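A QoS policy of the kind described above is essentially a table mapping application classes to markings and guaranteed bandwidth shares. The class names, DSCP values, and percentages below are illustrative, not a recommended design:

```python
# Sketch of a QoS policy table: each application class gets a DSCP
# marking and a minimum share of link bandwidth.

qos_policy = {
    "voice":       {"dscp": "EF",   "min_bw_pct": 20},
    "video":       {"dscp": "AF41", "min_bw_pct": 30},
    "business":    {"dscp": "AF31", "min_bw_pct": 30},
    "best_effort": {"dscp": "BE",   "min_bw_pct": 20},
}

# Sanity check: the guarantees must not oversubscribe the link.
assert sum(p["min_bw_pct"] for p in qos_policy.values()) <= 100

def guaranteed_kbps(app_class, link_kbps):
    """Bandwidth guaranteed to a class on a link of the given capacity."""
    return int(link_kbps * qos_policy[app_class]["min_bw_pct"] / 100)

print(guaranteed_kbps("voice", 10_000))  # 2000 kbps reserved for voice
```

On a router, the same intent would be expressed in the platform's QoS configuration; the point here is that the guarantees are explicit and must sum to no more than the link capacity.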

Real-World Use Cases

Global Enterprise Networks

Large multinational corporations with a widespread presence can significantly benefit from WAN virtualization. These organizations can achieve consistent performance across geographically dispersed locations by centralizing network management and leveraging intelligent traffic routing, improving collaboration and productivity.

Branch Office Connectivity

WAN virtualization simplifies connectivity and network management for businesses with multiple branch offices. It enables organizations to establish secure and efficient connections between headquarters and remote locations, ensuring seamless access to critical resources and applications.

In conclusion, WAN virtualization represents a paradigm shift in network connectivity, offering enhanced flexibility, improved application performance, and cost savings for businesses. By embracing this transformative technology, organizations can unlock the true potential of their networks, enabling them to thrive in the digital age.

WAN Design Requirements

WAN SDN

In today's fast-paced digital world, organizations constantly seek ways to optimize their network infrastructure for improved performance, scalability, and cost efficiency. One emerging technology that has gained significant traction is WAN Software-Defined Networking (SDN). By decoupling the control and data planes, WAN SDN provides organizations unprecedented flexibility, agility, and control over their wide area networks (WANs). In this blog post, we will delve into the world of WAN SDN, exploring its key benefits, implementation considerations, and real-world use cases.

WAN SDN is a network architecture that allows organizations to manage and control their wide area networks using software centrally. Traditionally, WANs have been complex and time-consuming to configure, often requiring manual network provisioning and management intervention. However, with WAN SDN, network administrators can automate these tasks through a centralized controller, simplifying network operations and reducing human errors.

Enhanced Agility: WAN SDN empowers network administrators with the ability to quickly adapt to changing business needs. With programmable policies and dynamic control, organizations can easily adjust network configurations, prioritize traffic, and implement changes without the need for manual reconfiguration of individual devices.

Improved Scalability: Traditional wide area networks often face scalability challenges due to the complex nature of managing numerous remote sites. WAN SDN addresses this issue by providing centralized control, allowing for streamlined network expansion, and efficient resource allocation.

Optimal Resource Utilization: WAN SDN enables organizations to maximize their network resources by intelligently routing traffic and dynamically allocating bandwidth based on real-time demands. This ensures that critical applications receive the necessary resources while minimizing wastage.

Multi-site Enterprises: WAN SDN is particularly beneficial for organizations with multiple branch locations. It allows for simplified network management across geographically dispersed sites, enabling efficient resource allocation, centralized security policies, and rapid deployment of new services.

Cloud Connectivity: WAN SDN plays a crucial role in connecting enterprise networks with cloud service providers. It offers seamless integration, secure connections, and dynamic bandwidth allocation, ensuring optimal performance and reliability for cloud-based applications.

Service Providers: WAN SDN can revolutionize how service providers deliver network services to their customers. It enables the creation of virtual private networks (VPNs) on-demand, facilitates network slicing for different tenants, and provides granular control and visibility for service-level agreements (SLAs).

WAN SDN represents a paradigm shift in wide area network management. Its ability to centralize control, enhance agility, and optimize resource utilization make it a game-changer for modern networking infrastructures. As organizations continue to embrace digital transformation and demand more from their networks, WAN SDN will undoubtedly play a pivotal role in shaping the future of networking.

Highlights: WAN SDN

Discussing WAN SDN

1. Traditional WANs have long been plagued by various limitations, such as complexity, lack of agility, and high operational costs. These legacy networks typically rely on manual configurations and proprietary hardware, making them inflexible and time-consuming. SDN brings a paradigm shift to WANs by decoupling the network control plane from the underlying infrastructure. With centralized control and programmability, SDN enables network administrators to manage and orchestrate their WANs through a single interface, simplifying network operations and promoting agility.

2. At its core, WAN SDN separates the control plane from the data plane, allowing network administrators to manage network traffic dynamically and programmatically. This separation leads to more efficient network management, reducing the complexity associated with traditional network infrastructures. With WAN SDN, businesses can optimize traffic flow, enhance security, and reduce operational costs by leveraging centralized control and automation.

3. One of the key advantages of SDN in WANs is its inherent flexibility and scalability. With SDN, network administrators can dynamically allocate bandwidth, reroute traffic, and prioritize applications based on real-time needs. This level of granular control allows organizations to optimize their network resources efficiently and adapt to changing demands.

4. SDN brings enhanced security features to WANs through centralized policy enforcement and monitoring. By abstracting network control, SDN allows for consistent security policies across the entire network, minimizing vulnerabilities and ensuring better threat detection and mitigation. Additionally, SDN enables rapid network recovery and failover mechanisms, enhancing overall resilience.

**Key Benefits of WAN SDN**

1. **Scalability and Flexibility**: WAN SDN enables networks to adapt quickly to changing demands without the need for significant hardware investments. This flexibility is crucial for organizations looking to scale their operations efficiently.

2. **Improved Network Performance**: By optimizing traffic routing and prioritizing critical applications, WAN SDN ensures that networks operate at peak performance levels. This capability is particularly beneficial for businesses with high bandwidth demands.

3. **Enhanced Security**: WAN SDN allows for the implementation of robust security measures, including automated threat detection and response. This proactive approach to security helps protect sensitive data and maintain compliance with industry regulations.

**Application Challenges**

Compared to a network-centric model, business intent-based WAN networks have great potential. With such a WAN architecture, applications can be deployed and managed more efficiently. However, application services topologies must replace network topologies. Supporting new and existing applications on the WAN is a common challenge for network operations staff. These applications consume large amounts of bandwidth and are extremely sensitive to variations in bandwidth quality. Jitter, loss, and delay make improving the WAN environment for these applications all the more critical.

**WAN SLA**

In addition, cloud-based applications such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) are increasing bandwidth demands on the WAN. As cloud applications require increasing bandwidth, provisioning new applications and services is becoming increasingly complex and expensive. In today’s business environment, WAN routing and network SLAs are controlled by MPLS L3VPN service providers. As a result, they are less able to adapt to new delivery methods, such as cloud-based and SaaS-based applications.

These applications could take months to implement in service providers’ environments. These changes can also be expensive for some service providers, and some may not be made at all. There is no way to instantiate VPNs independent of underlying transport since service providers control the WAN core. Implementing differentiated service levels for different applications becomes challenging, if not impossible.

WAN SDN Technology: DMVPN

DMVPN is a Cisco-developed solution that enables the creation of virtual private networks over public or private networks. Unlike traditional VPNs that require point-to-point connections, DMVPN utilizes a hub-and-spoke architecture, allowing for dynamic and scalable network deployments. DMVPN simplifies network management and reduces administrative overhead by leveraging multipoint GRE tunnels.

– Multipoint GRE Tunnels: At the core of DMVPN lies the concept of multipoint GRE tunnels. These tunnels create a virtual network, connecting multiple sites while encapsulating packets in GRE headers. This enables efficient traffic routing between sites, reducing the complexity and overhead associated with traditional point-to-point VPNs.

– Next-Hop Resolution Protocol (NHRP): NHRP plays a crucial role in DMVPN by dynamically mapping tunnel IP addresses to physical addresses. It allows for the efficient resolution of next-hop information, eliminating the need for static routes. NHRP also enables on-demand tunnel establishment, improving scalability and reducing administrative overhead.

– IPsec Encryption: DMVPN utilizes IPsec encryption to ensure secure communication over the VPN. IPsec provides confidentiality, integrity, and authentication of data, making it ideal for protecting sensitive information transmitted over the network. With DMVPN, IPsec is applied dynamically on a per-tunnel basis, enhancing flexibility and scalability.

DMVPN over IPSec

Understanding DMVPN & IPSec

IPsec, a widely adopted security protocol, is integral to DMVPN deployments. It provides the cryptographic framework necessary for securing data transmitted over the network. By leveraging IPsec, DMVPN ensures the transmitted information’s confidentiality, integrity, and authenticity, protecting sensitive data from unauthorized access and tampering.

DMVPN offers several advantages. Firstly, the dynamic mesh topology eliminates the need for complex hub-and-spoke configurations, simplifying network management and reducing administrative overhead. Additionally, DMVPN’s scalability enables seamless integration of new sites and facilitates rapid expansion without compromising performance. Furthermore, the inherent flexibility ensures optimal routing, load balancing, and efficient bandwidth utilization.

Example WAN Techniques: 

Understanding Virtual Routing and Forwarding

VRF is a technology that enables the creation of multiple virtual routing tables within a single physical router. Each VRF instance acts as an independent router with its routing table, interfaces, and forwarding decisions. This separation allows different networks or customers to coexist on the same physical infrastructure while maintaining complete isolation.

One critical advantage of VRF is its ability to provide network segmentation. By dividing a physical router into multiple VRF instances, organizations can isolate their networks, ensuring that traffic from one VRF does not leak into another. This enhances security and provides a robust framework for multi-tenancy scenarios.
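The isolation property described above can be illustrated with a toy model: each VRF instance keeps its own routing table, so identical prefixes can coexist on one router without conflict. This is a conceptual sketch, not how a real forwarding plane is implemented:

```python
# Toy model of VRF separation: one physical router, many independent
# routing tables. A lookup in one VRF never sees another VRF's routes.

class Router:
    def __init__(self):
        self.vrfs = {}  # vrf name -> {prefix: next_hop}

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, {})[prefix] = next_hop

    def lookup(self, vrf, prefix):
        return self.vrfs.get(vrf, {}).get(prefix)

r = Router()
# Both customers use the same overlapping prefix -- fine, tables are isolated.
r.add_route("customer_a", "10.0.0.0/24", "192.0.2.1")
r.add_route("customer_b", "10.0.0.0/24", "198.51.100.1")

print(r.lookup("customer_a", "10.0.0.0/24"))  # 192.0.2.1
print(r.lookup("customer_b", "10.0.0.0/24"))  # 198.51.100.1
```

The overlapping 10.0.0.0/24 prefix is exactly the multi-tenancy case VRF exists to solve: each customer's traffic is forwarded by its own table.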

Use Cases for VRF

VRF finds application in various scenarios, including:

1. Service Providers: VRF allows providers to offer their customers virtual private network (VPN) services. Each customer can have their own VRF, ensuring their traffic remains separate and secure.

2. Enterprise Networks: VRF can segregate different organizational departments, creating independent virtual networks.

3. Internet of Things (IoT): With the proliferation of IoT devices, VRF can create separate routing domains for different IoT deployments, improving scalability and security.

Understanding Policy-Based Routing

Policy-based Routing, at its core, involves manipulating routing decisions based on predefined policies. Unlike traditional routing protocols that rely solely on destination addresses, PBR considers additional factors such as source IP, ports, protocols, and even time of day. By implementing PBR, network administrators gain flexibility in directing traffic flows to specific paths based on specified conditions.

The adoption of Policy Based Routing brings forth a multitude of benefits. Firstly, it enables efficient utilization of network resources by allowing administrators to prioritize or allocate bandwidth for specific applications or user groups. Additionally, PBR enhances security by allowing traffic redirection to dedicated firewalls or intrusion detection systems. Furthermore, PBR facilitates load balancing and traffic engineering, ensuring optimal performance across the network.

Implementing Policy-Based Routing

To implement PBR, network administrators must follow a series of steps. Firstly, the traffic classification criteria are defined by specifying the match criteria based on desired conditions. Secondly, create route maps that outline the actions for matched traffic. These actions may include altering the next-hop address, setting specific Quality of Service (QoS) parameters, or redirecting traffic to a different interface. Lastly, the route maps should be applied to the appropriate interfaces or specific traffic flows.
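The three steps above can be sketched in hypothetical IOS-style configuration (the ACL number, subnet, next hop, and interface are all illustrative):

```
! Step 1: classify -- match HTTPS traffic from one subnet.
access-list 101 permit tcp 10.1.1.0 0.0.0.255 any eq 443
!
! Step 2: route-map -- steer matched traffic to a specific next hop.
route-map PBR-WEB permit 10
 match ip address 101
 set ip next-hop 203.0.113.1
!
! Step 3: apply the route map to the ingress interface.
interface GigabitEthernet0/1
 ip policy route-map PBR-WEB
```

Traffic not matched by the route map simply falls through to the normal destination-based routing table, which is the usual PBR fail-safe.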

Example SD WAN Product: Cisco Meraki

**Seamless Cloud Management**

One of the standout features of Cisco Meraki is its seamless cloud management. Unlike traditional network systems, Meraki’s cloud-based platform allows IT administrators to manage their entire network from a single, intuitive dashboard. This centralization not only simplifies network management but also provides real-time visibility and control over all connected devices. With automatic updates and zero-touch provisioning, businesses can ensure their network is always up-to-date and secure without the need for extensive manual intervention.

**Cutting-Edge Security Features**

Security is at the core of Cisco Meraki’s suite of products. With cyber threats becoming more sophisticated, Meraki offers a multi-layered security approach to protect sensitive data. Features such as Advanced Malware Protection (AMP), Intrusion Prevention System (IPS), and secure VPNs ensure that the network is safeguarded against intrusions and malware. Additionally, Meraki’s security appliances are designed to detect and mitigate threats in real-time, providing businesses with peace of mind knowing their data is secure.

**Scalability and Flexibility**

As businesses grow, so do their networking needs. Cisco Meraki’s scalable solutions are designed to grow with your organization. Whether you are expanding your office space, adding new branches, or integrating more IoT devices, Meraki’s flexible infrastructure can easily adapt to these changes. The platform supports a wide range of devices, from access points and switches to security cameras and mobile device management, making it a comprehensive solution for various networking requirements.

**Enhanced User Experience**

Beyond security and management, Cisco Meraki enhances the user experience by ensuring reliable and high-performance network connectivity. Features such as intelligent traffic shaping, load balancing, and seamless roaming between access points ensure that users enjoy consistent and fast internet access. Furthermore, Meraki’s analytics tools provide insights into network usage and performance, allowing businesses to optimize their network for better efficiency and user satisfaction.

Performance at the WAN Edge

Understanding Performance-Based Routing

Performance-based routing is a dynamic approach to network traffic management that prioritizes route selection based on real-time performance metrics. Instead of relying on traditional static routing protocols, performance-based routing algorithms assess the current conditions of network paths, such as latency, packet loss, and available bandwidth, to make informed routing decisions. By dynamically adapting to changing network conditions, performance-based routing aims to optimize traffic flow and enhance overall network performance.

The adoption of performance-based routing brings forth a multitude of benefits for businesses.

1. Firstly, it enhances network reliability by automatically rerouting traffic away from congested or underperforming paths, minimizing the chances of bottlenecks and service disruptions.

2. Secondly, it optimizes application performance by intelligently selecting the best path based on real-time network conditions, thus reducing latency and improving the end-user experience.

3. Additionally, performance-based routing allows for efficient utilization of available network resources, maximizing bandwidth utilization and cost-effectiveness.

Implementation Details:

Implementing performance-based routing requires a thoughtful approach. Firstly, businesses must invest in monitoring tools that provide real-time insights into network performance metrics. These tools can range from simple latency monitoring to more advanced solutions that analyze packet loss and bandwidth availability.

Once the necessary monitoring infrastructure is in place, configuring performance-based routing algorithms within network devices becomes the next step. This involves setting up rules and policies that dictate how traffic should be routed based on specific performance metrics.

Lastly, regular monitoring and fine-tuning performance-based routing configurations are essential to ensure optimal network performance.
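One practical detail behind the "fine-tuning" step above is hysteresis: a path should only be switched when the alternative is clearly better, or small measurement noise causes constant flapping. The margin and latency figures in this sketch are invented:

```python
# Sketch: performance-based reroute decision with a hysteresis margin.
# The current path is kept unless an alternative beats it decisively.

SWITCH_MARGIN_MS = 20  # alternative must beat the current path by this much

def choose_path(current, measurements):
    """measurements: {path_name: latency_ms}; returns the path to use."""
    best = min(measurements, key=measurements.get)
    if best == current:
        return current
    # Only switch when the improvement exceeds the margin.
    if measurements[current] - measurements[best] > SWITCH_MARGIN_MS:
        return best
    return current

print(choose_path("mpls", {"mpls": 50, "internet": 45}))   # mpls (within margin)
print(choose_path("mpls", {"mpls": 120, "internet": 40}))  # internet
```

A 5 ms improvement is not worth a reroute, so the first call stays on MPLS; an 80 ms degradation clears the margin and triggers the switch.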

WAN Performance Parameters

TCP Performance Parameters

TCP (Transmission Control Protocol) is the backbone of modern Internet communication, ensuring reliable data transmission across networks. Behind the scenes, TCP performance is influenced by several key parameters that can significantly impact network efficiency.

TCP performance parameters govern how TCP behaves in various network conditions. These parameters can be fine-tuned to adapt TCP’s behavior to specific network characteristics, such as latency, bandwidth, and congestion. By adjusting these parameters, network administrators and system engineers can optimize TCP performance for better throughput, reduced latency, and improved overall network efficiency.

Congestion Control Algorithms: Congestion control algorithms are crucial in TCP performance. They monitor network conditions, detect congestion, and adjust TCP’s sending rate accordingly. Popular algorithms like Reno, Cubic, and BBR implement different strategies to handle congestion, balancing fairness and efficiency. Understanding these algorithms and their impact on TCP behavior is essential for maintaining a stable and responsive network.

Window Size and Bandwidth Delay Product: The window size parameter, often referred to as the congestion window, determines the amount of data that can be sent before receiving an acknowledgment. The window size should be set according to the bandwidth-delay product, calculated by multiplying the available bandwidth by the round-trip time (RTT). Matching the window size to the bandwidth-delay product ensures optimal data transfer and prevents underutilization or overutilization of network resources.
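The bandwidth-delay product calculation is simple enough to show directly. The 100 Mbit/s and 40 ms figures below are just example values:

```python
# Worked example: window size (bytes) = bandwidth (bytes/s) * RTT (s).
# This is the amount of data that must be "in flight" to keep the pipe full.

def bdp_bytes(bandwidth_bps, rtt_ms):
    return int(bandwidth_bps / 8 * rtt_ms / 1000)

# A 100 Mbit/s path with 40 ms RTT needs roughly a 500 KB window.
print(bdp_bytes(100_000_000, 40))  # 500000
```

A window smaller than this leaves the link idle between acknowledgments; a much larger one only adds queuing delay.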

Maximum Segment Size (MSS): The Maximum Segment Size is another TCP performance parameter defining the maximum amount of data encapsulated within a single TCP segment. By carefully configuring the MSS, it is possible to reduce packet fragmentation, enhance data transmission efficiency, and mitigate issues related to network overhead.

Selective Acknowledgment (SACK): Selective Acknowledgment is a TCP extension that allows the receiver to acknowledge out-of-order segments and provide more precise information about the received data. Enabling SACK can improve TCP performance by reducing retransmissions and enhancing the overall reliability of data transmission.

Understanding TCP MSS

TCP MSS refers to the maximum amount of data encapsulated within a single TCP segment. It represents the most significant data payload that can be transmitted without fragmentation. By limiting the segment size, TCP aims to prevent excessive overhead and ensure efficient data transmission across networks.

Several factors influence the determination of TCP MSS. One crucial aspect is the underlying network infrastructure’s Maximum Transmission Unit (MTU). The MTU represents the maximum packet size that can be transmitted over the network without fragmentation. TCP MSS must be set to a value equal to or lower than the MTU to avoid fragmentation and subsequent performance degradation.

Path MTU Discovery (PMTUD) is a mechanism TCP employs to dynamically determine the optimal MSS value for a given network path. By exchanging ICMP messages with routers along the path, TCP can ascertain the MTU and adjust the MSS accordingly. PMTUD helps prevent packet fragmentation and ensures efficient data transmission across network segments.

The TCP MSS value directly affects network performance. A smaller MSS can increase overhead due to more segments and headers, potentially reducing overall throughput. On the other hand, a larger MSS can increase the risk of fragmentation and subsequent retransmissions, impacting latency and overall network efficiency. Striking the right balance is crucial for optimal performance.
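The MTU-to-MSS relationship described above follows a well-known formula: on IPv4 without options, MSS = MTU minus the 20-byte IP header minus the 20-byte TCP header. A short sketch:

```python
# Deriving TCP MSS from the path MTU (IPv4, headers without options).
IP_HEADER = 20
TCP_HEADER = 20

def mss_for_mtu(mtu: int) -> int:
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # 1460 for standard Ethernet
print(mss_for_mtu(1400))  # 1360, e.g. when tunnel overhead shrinks the MTU
```

This is why overlay and VPN designs often clamp MSS: tunnel encapsulation reduces the effective MTU, and the MSS must shrink with it to avoid fragmentation.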

Example WAN Technology: DMVPN Phase 3

Understanding DMVPN Phase 3

DMVPN Phase 3 builds upon the foundation of its predecessors, bringing forth even more advanced features. This section will provide an overview of DMVPN Phase 3, highlighting its main enhancements, such as increased scalability, simplified configuration, and enhanced security protocols.

One of the standout features of DMVPN Phase 3 is its scalability. This section will explain how DMVPN Phase 3 allows organizations to effortlessly add new sites to the network without complex manual configurations. By leveraging multipoint GRE tunnels, DMVPN Phase 3 offers a dynamic and flexible solution that can easily accommodate growing networks.

Example WAN Technology: FlexVPN Site-to-Site Smart Defaults

Understanding FlexVPN Site-to-Site Smart Defaults

FlexVPN Site-to-Site Smart Defaults is a powerful feature that simplifies the site-to-site VPN configuration and deployment process. By providing pre-defined templates and configurations, it eliminates the need for manual configuration, reducing the chance of misconfigurations or human error. This feature ensures a secure and reliable VPN connection between sites, enabling organizations to establish a robust network infrastructure.

FlexVPN Site-to-Site Smart Defaults offers several key features and benefits that contribute to improved network security. Firstly, it provides secure cryptographic algorithms that protect data transmission, ensuring the confidentiality and integrity of sensitive information. Additionally, it supports various authentication methods, such as digital certificates and pre-shared keys, further enhancing the overall security of the VPN connection. The feature also allows for easy scalability, enabling organizations to expand their network infrastructure without compromising security.

Example WAN Technology: FlexVPN IKEv2 Routing

Understanding FlexVPN

FlexVPN, short for Flexible VPN, is a versatile framework offering various VPN solutions. It provides a secure and scalable approach to establishing Virtual Private Networks (VPNs) over various network infrastructures. With its flexibility, it allows for seamless integration and interoperability across different platforms and devices.

IKEv2, or Internet Key Exchange version 2, is a secure and efficient protocol for establishing and managing VPN connections. It boasts numerous advantages, including its robust security features, ability to handle network disruptions, and support for rapid reconnection. IKEv2 is highly regarded for its ability to maintain stable and uninterrupted VPN connections, making it an ideal choice for FlexVPN.

a. Enhanced Security: FlexVPN IKEv2 Routing offers advanced encryption algorithms and authentication methods, ensuring the confidentiality and integrity of data transmitted over the VPN.

b. Scalability: With its flexible architecture, FlexVPN IKEv2 Routing effortlessly scales to accommodate growing network demands, making it suitable for small- to large-scale deployments.

c. Dynamic Routing: One of FlexVPN IKEv2 Routing’s standout features is its support for dynamic routing protocols, such as OSPF and EIGRP. This enables efficient and dynamic routing of traffic within the VPN network.

d. Seamless Failover: FlexVPN IKEv2 Routing provides automatic failover capabilities, ensuring uninterrupted connectivity even during network disruptions or hardware failures.

Understanding MPLS (Multi-Protocol Label Switching)

MPLS serves as the foundation for MPLS VPNs. It is a versatile and efficient routing technique that uses labels to forward data packets through a network. By assigning labels to packets, MPLS routers can make fast-forwarding decisions based on the labels, reducing the need for complex and time-consuming lookups in routing tables. This results in improved network performance and scalability.

Understanding MPLS LDP

MPLS LDP is a crucial component in establishing label-switched paths within MPLS networks. MPLS LDP facilitates efficient packet forwarding and routing by enabling the distribution of labels and creating forwarding equivalency classes. Let’s take a closer look at how MPLS LDP operates.

One of the fundamental aspects of MPLS LDP is label distribution. Through signaling protocols, MPLS LDP ensures that labels are assigned and distributed across network nodes. This enables routers to make forwarding decisions based on labels, resulting in streamlined and efficient data transmission.

In MPLS LDP, labels serve as the building blocks of label-switched paths. These paths allow routers to forward packets based on labels rather than traditional IP routing. Additionally, MPLS LDP employs forwarding equivalency classes (FECs) to group packets with similar characteristics, further enhancing network performance.

MPLS Virtual Private Networks (VPNs) Explained

VPNs provide secure communication over public networks by creating a private tunnel through which data can travel. They employ encryption and tunneling protocols to protect data from eavesdropping and unauthorized access. MPLS VPNs utilize this VPN concept to establish secure connections between geographically dispersed sites or remote users.

MPLS VPN Components

Customer Edge (CE) Router: The CE router acts as the entry and exit point for customer networks. It connects to the provider network and exchanges routing information. It encapsulates customer data into MPLS packets and forwards them to the provider network.

Provider Edge (PE) Router: The PE router sits at the edge of the service provider’s network and connects to the CE routers. It acts as a bridge between the customer and provider networks and handles the MPLS label switching. The PE router assigns labels to incoming packets and forwards them based on the labels’ instructions.

Provider (P) Router: P routers form the backbone of the service provider’s network. They forward MPLS packets based on the labels without inspecting the packet’s content, ensuring efficient data transmission within the provider’s network.

Virtual Routing and Forwarding (VRF) Tables: VRF tables maintain separate routing instances within a single PE router. Each VRF table represents a unique VPN and keeps the customer’s routing information isolated from other VPNs. VRF tables enable the PE router to handle multiple VPNs concurrently, providing secure and independent communication channels.
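The VRF isolation idea can be modeled as separate routing tables inside one PE router, so overlapping customer prefixes never collide. This is a purely illustrative sketch (the dictionary layout and label strings are invented for the example):

```python
# A PE router modeled as a set of per-VRF routing tables. Two customers
# can use the same prefix because lookups never cross VRF boundaries.
pe_router = {
    "vrf_customer_a": {"10.0.0.0/24": "label 100"},
    "vrf_customer_b": {"10.0.0.0/24": "label 200"},  # same prefix, no clash
}

def lookup(vrf: str, prefix: str) -> str:
    return pe_router[vrf][prefix]

print(lookup("vrf_customer_a", "10.0.0.0/24"))  # label 100
print(lookup("vrf_customer_b", "10.0.0.0/24"))  # label 200
```

The key property is that the VRF name is part of every lookup: a route learned in one VPN is invisible to every other VPN on the same physical router.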

Use Case – DMVPN Single Hub, Dual Cloud

Single Hub, Dual Cloud is a specific configuration within the DMVPN architecture. In this setup, a central hub device acts as the primary connection point for branch offices while utilizing two separate cloud providers for redundancy and load balancing. This configuration offers several advantages, including improved availability, increased bandwidth, and enhanced failover capabilities.

1. Enhanced Redundancy: By leveraging two cloud providers, organizations can achieve high availability and minimize downtime. If one cloud provider experiences an issue or outage, the traffic can seamlessly be redirected to the alternate provider, ensuring uninterrupted connectivity.

2. Load Balancing: Distributing network traffic across two cloud providers allows for better resource utilization and improved performance. Organizations can optimize their bandwidth usage and mitigate potential bottlenecks.

3. Scalability: Single Hub, Dual Cloud DMVPN allows organizations to easily scale their network infrastructure by adding more branch offices or cloud providers as needed. This flexibility ensures that the network can adapt to changing business requirements.

4. Cost Efficiency: Utilizing multiple cloud providers can lead to cost savings through competitive pricing and the ability to negotiate better service level agreements (SLAs). Organizations can choose the most cost-effective options while maintaining the desired level of performance and reliability.

The role of SDN

With software-defined networking (SDN), network configurations can be dynamic and programmatically optimized, improving network performance and monitoring more like cloud computing than traditional network management. By disassociating the forwarding of network packets from routing (control plane), SDN can be used to centralize network intelligence within a single network component by improving the static architecture of traditional networks.

Controllers make up the control plane of an SDN network, which contains all of the network’s intelligence; they are considered the brains of the network. Security, scalability, and elasticity are among the challenges this centralization introduces.

Since OpenFlow’s emergence in 2011, SDN has commonly been associated with remote communication with network plane elements to determine the path of network packets across network switches. Proprietary network virtualization platforms, such as Cisco Systems’ Open Network Environment and Nicira’s platform, also use the term.

The SD-WAN technology is used in wide area networks (WANs)

SD-WAN, short for Software-Defined Wide Area Networking, is a transformative approach to network connectivity. Unlike traditional WAN, which relies on hardware-based infrastructure, SD-WAN utilizes software and cloud-based technologies to connect networks over large geographic areas securely. By separating the control plane from the data plane, SD-WAN provides centralized management and enhanced flexibility, enabling businesses to optimize their network performance.

Transport Independence: Hybrid WAN

The hybrid WAN concept was born out of this need. Hybrid WAN provides an alternative path that applications can take across a WAN environment: businesses acquire non-MPLS circuits and add them to their WANs. Business enterprises can control these circuits, including routing and application performance. VPN tunnels are typically created over the top of these circuits to provide secure transport over any link. 4G/LTE, commodity broadband Internet, and L2VPN are all examples of these types of links.

As a result, transport independence is achieved. In this way, any transport type can be used under the VPN, and deterministic routing and application performance can be achieved. These commodity links can transmit some applications rather than the traditionally controlled L3VPN MPLS links provided by service providers.

SDN and APIs

WAN SDN is a modern approach to network management that uses a centralized control model to manage, configure, and monitor large and complex networks. It allows network administrators to use software to configure, monitor, and manage network elements from a single, centralized system. This enables the network to be managed more efficiently and cost-effectively than traditional networks.

SDN uses an application programming interface (API) to abstract the underlying physical network infrastructure, allowing for more agile network control and easier management. It also enables network administrators to rapidly configure and deploy services from a centralized location. This enables network administrators to respond quickly to changes in traffic patterns or network conditions, allowing for more efficient use of resources.

Scalability and Automation

SDN also allows for improved scalability and automation. Network administrators can quickly scale up or down the network by leveraging automated scripts depending on its current needs. Automation also enables the network to be maintained more rapidly and efficiently, saving time and resources.

Before you proceed, you may find the following posts helpful:

  1. WAN Virtualization
  2. Software Defined Perimeter Solutions
  3. What is OpenFlow
  4. SD WAN Tutorial
  5. What Does SDN Mean
  6. Data Center Site Selection

WAN SDN

A Deterministic Solution

Technology typically starts as a highly engineered, expensive, deterministic solution. As the marketplace evolves and competition rises, the need for a non-deterministic, inexpensive solution comes into play. We see this throughout history. First, mainframes were expensive, and with the arrival of the microprocessor personal computer, the client/server model was born. Static RAM (SRAM) technology was replaced with cheaper Dynamic RAM (DRAM). These patterns consistently apply to all areas of technology.

Finally, deterministic and costly technology is replaced with intelligent technology using redundancy and optimization techniques. This process is now appearing in Wide Area Networks (WAN). We are witnessing changes to the routing space with the incorporation of Software Defined Networking (SDN) and BGP (Border Gateway Protocol). By combining these two technologies, companies can now perform intelligent routing, aka SD-WAN path selection, with an SD-WAN overlay.

**SD-WAN Path Selection**

SD-WAN path selection, an essential part of a Software-Defined Wide Area Network (SD-WAN) architecture, chooses the optimal network path for a given application or user. This process is automated and based on user-defined criteria, such as latency, jitter, cost, availability, and security. As a result, SD-WAN can ensure that applications and users experience the best possible performance by making intelligent decisions on which network path to use.

When selecting the best path for a given application or user, SD-WAN looks at the quality of the connection and the available bandwidth. It then looks at the cost associated with each path. Cost can be a significant factor when selecting a path, especially for large enterprises or organizations with multiple sites.

SD-WAN can also prioritize certain types of traffic over others. This is done by assigning different weights or priorities for various kinds of traffic. For example, an organization may prioritize voice traffic over other types of traffic. This ensures that voice traffic has the best possible chance of completing its journey without interruption.
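The weighting idea above can be sketched as a scoring function: each traffic class carries its own weights over latency, loss, and cost, and the path with the lowest weighted score wins. All numbers and weights here are invented for illustration:

```python
# Weighted SD-WAN path scoring. Lower is better for every metric,
# so the lowest total score wins. Values are illustrative only.

def score(path, weights):
    return sum(weights[k] * path[k] for k in weights)

paths = {
    "mpls":     {"latency_ms": 30, "loss_pct": 0.1, "cost": 10},
    "internet": {"latency_ms": 55, "loss_pct": 1.0, "cost": 1},
}

# Voice weights delay and loss heavily; bulk transfer weights cost.
voice = {"latency_ms": 1.0, "loss_pct": 50.0, "cost": 0.1}
bulk  = {"latency_ms": 0.1, "loss_pct": 1.0,  "cost": 5.0}

def best(weights):
    return min(paths, key=lambda name: score(paths[name], weights))

print(best(voice))  # mpls: loss and latency dominate
print(best(bulk))   # internet: cost dominates
```

The same mechanism gives voice its priority: its heavy loss and latency weights steer it onto the clean path even when that path is more expensive.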

Diagram: SD WAN traffic steering. Source: Cisco.

Critical Considerations for Implementation:

Network Security:

When adopting WAN SDN, organizations must consider the potential security risks associated with software-defined networks. Robust security measures, including authentication, encryption, and access controls, should be implemented to protect against unauthorized access and potential vulnerabilities.

Staff Training and Expertise:

Implementing WAN SDN requires skilled network administrators proficient in configuring and managing the software-defined network infrastructure. Organizations must train and upskill their IT teams to ensure successful implementation and ongoing management.

Real-World Use Cases:

Multi-Site Connectivity:

WAN SDN enables organizations with multiple geographically dispersed locations to connect their sites seamlessly. Administrators can prioritize traffic, optimize bandwidth utilization, and ensure consistent network performance across all locations by centrally controlling the network.

Cloud Connectivity:

With the increasing adoption of cloud services, WAN SDN allows organizations to connect their data centers to public and private clouds securely and efficiently. This facilitates smooth data transfers, supports workload mobility, and enhances cloud performance.

Disaster Recovery:

WAN SDN simplifies disaster recovery planning by allowing organizations to reroute network traffic dynamically during a network failure. This ensures business continuity and minimizes downtime, as the network can automatically adapt to changing conditions and reroute traffic through alternative paths.

The Rise of WAN SDN

Business and cloud services are crucial elements of business operations, yet the transport network that carries them is best-effort, fragile, and offers no guarantee of acceptable delay. More services are being moved to the Internet, yet the Internet is managed inefficiently and cheaply.

Every Autonomous System (AS) acts independently, and there is a price war between transit providers, leading to poor quality of transit services. Operating over this flawed network, customers must find ways to guarantee applications receive the expected level of quality.

Border Gateway Protocol (BGP), the Internet’s glue, has several path-selection flaws. The main drawback of BGP is the routing paradigm behind its path-selection process. BGP’s default path selection is based on Autonomous System (AS) path length: prefer the path with the shortest AS_PATH. This misses the shape of the network; BGP does not care whether propagation delay, packet loss, or link congestion exists. The result is that BGP can select long paths and use paths that are experiencing packet loss.
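The flaw is easy to demonstrate with invented numbers: the route with the shortest AS_PATH can be the worst-performing one, which is exactly the gap a performance-aware overlay fills.

```python
# Why shortest-AS_PATH selection can pick a worse route.
# All data below is invented for illustration.
routes = [
    {"via": "transit_a", "as_path_len": 2, "loss_pct": 3.0, "rtt_ms": 180},
    {"via": "transit_b", "as_path_len": 4, "loss_pct": 0.0, "rtt_ms": 40},
]

bgp_choice = min(routes, key=lambda r: r["as_path_len"])
perf_choice = min(routes, key=lambda r: (r["loss_pct"], r["rtt_ms"]))

print(bgp_choice["via"])   # transit_a: shorter AS_PATH, but lossy and slow
print(perf_choice["via"])  # transit_b: what a performance-aware overlay picks
```

Tools like the one described next inject exactly this kind of measured performance data back into BGP’s decision.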

Example: WAN SDN with Border6 

Border6 is a French company that started in 2012. It offers non-stop internet and an integrated WAN SDN solution, influencing BGP to perform optimum routing. It’s not a replacement for BGP but a complementary tool to enhance routing decisions. For example, it automates changes in routing in cases of link congestion/blackouts.

“The agile way of improving BGP paths by the Border 6 tool improves network stability,” says Brandon Wade, iCastCenter Owner.

As the Internet became more popular, customers wanted to add additional intelligence to routing. Additionally, businesses require SDN traffic optimizations, as many run their entire service offerings on top of it.

What is non-stop internet?

Border6 offers an integrated WAN SDN solution with BGP that adds intelligence to outbound routing. A common approach when designing SDN for real-world networks is to prefer solutions that incorporate existing, field-tested mechanisms (BGP) rather than reinvent every wheel. The Border6 approach of influencing BGP with SDN is therefore a welcome and less risky alternative to a greenfield implementation. In addition, Microsoft and Viptela use SDN solutions to control BGP behavior.

Border6 uses BGP to guide what might be reachable. Based on various performance metrics, they measure how well paths perform. They use BGP to learn the structure of the Internet and then run their algorithms to determine what is essential for individual customers. Every customer has different needs to reach different subnets. Some prefer costs; others prefer performance.

They elect several interesting “best”-performing prefixes, and the most critical prefixes are selected. Next, they find probing locations and measure from the source with automatic probes to determine the best path. Combined, these tools enhance the behavior of BGP. Their mechanism can detect whether an ISP has hardware/software problems, is dropping packets, or is rerouting packets around the world.

Thousands of tests per minute

The solution finds the best path by executing thousands of tests per minute, with the results identifying the best paths for packet delivery. Outputs from live probing of path delays and packet loss inform BGP which path should carry traffic. The “best path” is different for each customer and depends on the routing policy the customer wants to apply. Some customers prefer paths without packet loss; others want low costs or paths under 100 ms. It comes down to customer requirements and the applications they serve.

**BGP – Unrelated to Performance**

Traditionally, BGP makes its decisions based on data unrelated to performance. Border6 tries to correlate your packets’ path across the Internet by choosing the fastest or cheapest link, depending on your requirements.

They take BGP data from service providers as a baseline. On top of that broad connectivity picture, they layer their own measurements (lowest latency, packets lost, and so on) and adjust the BGP data to account for these measures, ultimately achieving optimum packet forwarding. They first look at NetFlow or sFlow data to determine what is essential, using their tool to collect and aggregate the data. From this data, they know which destinations are critical to that customer.

BGP for outbound | Locator/ID Separation Protocol (LISP) for inbound

Border6 products relate to outbound traffic optimization. It can be hard to influence inbound traffic optimization with BGP: most ASes behave selfishly and optimize traffic in their own interest. Border6 is trying to provide tools that help ASes optimize inbound flows by integrating their product set with the Locator/ID Separation Protocol (LISP). The diagram below displays generic LISP components; it is not necessarily related to Border6’s LISP design.

LISP decouples the address space so you can optimize inbound traffic flows. Many LISP use cases are seen with active-active data centers and VM mobility. It decouples the “who” from the “where,” so end-host addressing does not have to correlate with the actual host location. The drawback is that LISP requires endpoints that can build LISP tunnels.

Currently, they are trying to provide a solution using LISP as a signaling protocol between Border6 devices. They are also working on statistical analysis of the received data to mitigate potential distributed denial-of-service (DDoS) events. More DDoS algorithms are coming in future releases.

Closing Points: On WAN SDN

At its core, WAN SDN separates the control plane from the data plane, facilitating centralized network management. This separation allows for dynamic adjustments to network configurations, providing businesses with the agility to respond to changing conditions and demands. By leveraging software to control network resources, organizations can achieve significant improvements in performance and cost-effectiveness.

One of the primary advantages of WAN SDN is its ability to optimize network traffic and improve bandwidth utilization. By intelligently routing data, WAN SDN minimizes latency and enhances the overall user experience. Additionally, it simplifies network management by providing a single, centralized platform to control and configure network policies, reducing the complexity and time required for network maintenance.

Summary: WAN SDN

In today’s digital age, where connectivity and speed are paramount, traditional Wide Area Networks (WANs) often fall short of meeting the demands of modern businesses. However, a revolutionary solution that promises to transform how we think about and utilize WANs has emerged. Enter Software-Defined Networking (SDN), a paradigm-shifting approach that brings unprecedented flexibility, efficiency, and control to WAN infrastructure.

Understanding SDN

At its core, SDN is a network architecture that separates the control plane from the data plane. By decoupling network control and forwarding functions, SDN enables centralized management and programmability of the entire network, regardless of its geographical spread. Traditional WANs relied on complex and static configurations, but SDN introduced a level of agility and simplicity that was previously unimaginable.

Benefits of SDN for WANs

Enhanced Flexibility

SDN empowers network administrators to dynamically configure and customize WANs based on specific requirements. With a software-based control plane, they can quickly implement changes, allocate bandwidth, and optimize traffic routing, all in real time. This flexibility allows businesses to adapt swiftly to evolving needs and drive innovation.

Improved Efficiency

By leveraging SDN, WANs can achieve higher levels of efficiency through centralized management and automation. Network policies can be defined and enforced holistically, reducing manual configuration efforts and minimizing human errors. Additionally, SDN enables the intelligent allocation of network resources, optimizing bandwidth utilization and enhancing overall network performance.

Enhanced Security

Security threats are a constant concern in any network infrastructure. SDN brings a new layer of security to WANs by providing granular control over traffic flows and implementing sophisticated security policies. With SDN, network administrators can easily monitor, detect, and mitigate potential threats, ensuring data integrity and protecting against unauthorized access.

Use Cases and Implementation Examples

Dynamic Multi-site Connectivity

SDN enables seamless connectivity between multiple sites, allowing businesses to establish secure and scalable networks. With SDN, organizations can dynamically create and manage virtual private networks (VPNs) across geographically dispersed locations, simplifying network expansion and enabling agile resource allocation.

Cloud Integration and Hybrid WANs

Integrating SDN with cloud services unlocks a whole new level of scalability and flexibility for WANs. By combining SDN with cloud-based infrastructure, organizations can easily extend their networks to the cloud, access resources on demand, and leverage the benefits of hybrid WAN architectures.

Conclusion:

With its ability to enhance flexibility, improve efficiency, and bolster security, SDN is ushering in a new era for Wide-Area Networks (WANs). By embracing the power of software-defined networking, businesses can overcome the limitations of traditional WANs and build robust, agile, and future-proof network infrastructures. It’s time to embrace the SDN revolution and unlock the full potential of your WAN.


Viptela Software Defined WAN (SD-WAN)

 


Why can’t enterprise networks scale like the Internet? What if you could virtualize the entire network?

Wide Area Network (WAN) connectivity models follow a hybrid approach, and companies may have multiple types – MPLS and the Internet. For example, branch A has remote access over the Internet, while branch B employs private MPLS connectivity. Internet and MPLS have distinct connectivity models, and different types of overlay exist for the Internet and MPLS-based networks.

The challenge is to combine these overlays automatically and provide a transport-agnostic overlay network. The data consumption model in enterprises is shifting: around 70% of data is now Internet-bound, and it is expensive to trombone traffic through defined DMZ points. Customers are looking for topological flexibility, causing a shift in security parameters. Topological flexibility forces us to rethink WAN solutions for tomorrow’s networks and leads towards Viptela SD-WAN.

 

Before you proceed, you may find the following helpful:

  1. SD WAN Tutorial
  2. SD WAN Overlay
  3. SD WAN Security 
  4. WAN Virtualization
  5. SD-WAN Segmentation

 

Solution: Viptela SD WAN

Viptela created a new overlay network called Secure Extensible Network (SEN) to address these challenges. For the first time, encryption is built into the solution: security and routing are combined into one, enabling you to span environments, anywhere to anywhere, in a secure deployment. This type of architecture is not possible with today’s traditional networking methods.

Founded in 2012, Viptela is a Virtual Private Network (VPN) company utilizing concepts of Software Defined Networking (SDN) to transform end-to-end network infrastructure. Based in San Jose, they are developing an SDN Wide Area Network (WAN) product offering any-to-any connectivity with features such as application-aware routing, service chaining, virtual Demilitarized Zone (DMZ), and weighted Equal Cost Multipath (ECMP) operating on different transports.

The key benefit of Viptela is its any-to-any connectivity offering, previously found only in Multiprotocol Label Switching (MPLS) networks. They work purely on the connectivity model, not on security frameworks. They can, however, influence traffic paths to and from security services.


 

Ubiquitous data plane

MPLS was attractive because it had a single control plane and a ubiquitous data plane. As long as you are in the MPLS network, connection to anyone is possible, provided you have the correct Route Distinguisher (RD) and Route Target (RT) configurations. But why not take this model to the wide area network and invent a technology that creates a similar model, offering ubiquitous connectivity regardless of transport type (Internet, MPLS)?

 

Why Viptela SDN WAN?

The business today wants different types of connectivity models. When you map a service to business logic, the network/service topology is already laid out; it’s defined, and services have to follow this topology. Viptela is changing this concept by altering the data and control plane connectivity model, using SDN to create an SDN WAN technology.

SDN is all about taking intensive network algorithms out of the hardware. In traditional networks, these ran on individual hardware devices acting as control plane points in the data path. As a result, control points may become congested (for example, when the OSPF maximum neighbor count is reached). Customers lose capacity on the control plane front but not on the data plane. SDN moves the intensive computation to off-the-shelf servers. MPLS networks attempt to use the same concepts with Route Reflector (RR) designs.

They started to move route reflectors off the data plane to compute the best-path algorithms. Route reflectors can be positioned anywhere in the network and do not have to sit on the data path. With a controller-based SDN approach, you are not embedding the control plane in the network; the controller is off the path. Now you can scale out, and SDN frameworks centrally provision and push policy down to the data plane.

Viptela can take any circuit and provide the ubiquitous connectivity MPLS provided, but now, it’s based on a policy with a central controller. Remote sites can have random transport methods. One leg could be the Internet, and the other could be MPLS. As long as there is an IP path between endpoint A and the controller, Viptela can provide the ubiquitous data plane.

 

Viptela SD WAN and Secure Extensible Network (SEN)

Managed overlay network

If you look at the existing WAN, it has two parts: routing and security. Routing connects sites, and security secures transmission. The current model has too many network security and policy configuration points. SEN allows you to centralize control plane security and routing, resulting in data path fluidity. The controller takes care of routing and security decisions.

It passes the relevant information between endpoints. Endpoints can pop up anywhere in the network; all they have to do is set up a control channel to the central controller. This approach does not build excessive control channels, because each control channel runs between the controller and an endpoint, not from endpoint to endpoint. The data plane can flow based on the policy set in the center of the network.

Viptela SD WAN

 

Viptela SD WAN: Deployment considerations

Deployment of separate data plane nodes at the customer site integrates into the existing infrastructure at Layer 2 or 3, so you can deploy incrementally, starting with one node and growing to thousands. The model is highly scalable because it is based on routed technology. It allows you to deploy, for example, a guest network and then integrate it further into your network over time. Internally, Viptela uses Border Gateway Protocol (BGP). On the data plane, it uses standard IPsec between endpoints. It also works over Network Address Translation (NAT), meaning IPsec over UDP.

When attackers gain access to your network, it is easy for them to establish a beachhead and hop from one segment to another. Viptela enables per-segment encryption, so even if they compromise one segment, they cannot jump to another. Key management at a global scale has always been a challenge. Viptela solves this with a proprietary distributed key manager based on a priority system. Currently, their key management solution is not open to the industry.
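The isolation property described above can be sketched generically: if each segment derives an independent key, compromising one segment's key reveals nothing about another's. The snippet below uses a plain HMAC-based derivation for illustration only; Viptela's actual key manager is proprietary, and the secret and segment names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret between a pair of sites (illustrative only).
master_secret = b"site-pairwise-secret"

def segment_key(segment_id: str) -> bytes:
    """Derive an independent per-segment key by domain-separating
    the derivation with the segment name."""
    return hmac.new(master_secret, segment_id.encode(), hashlib.sha256).digest()

guest_key = segment_key("vrf-guest")
corp_key = segment_key("vrf-corp")

# The keys are independent: knowing one does not yield the other.
print(guest_key != corp_key)
```

Because the derivation is one-way, an attacker who recovers the guest segment's key still cannot compute the corporate segment's key, which mirrors the "no hopping between segments" guarantee the text describes.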

 

SDN controller

You have a controller and VPN termination points, i.e., data plane points. The controller is the central management piece that assigns policy. Data plane points are modules shipped to customer sites. The controller allows you to dictate different topologies for individual endpoint segments, similar to how you influence routing tables with RTs in MPLS.

The control plane is at the controller.
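To make the per-segment topology idea concrete, here is a minimal sketch of a controller computing which tunnels each segment is allowed. The function names, site names, and topology labels are illustrative assumptions, not Viptela's actual API.

```python
from itertools import combinations

def allowed_tunnels(sites, topology, hub=None):
    """Return the set of site pairs a segment may build tunnels between,
    according to its topology policy."""
    if topology == "full-mesh":
        # Every pair of sites may tunnel directly to each other.
        return set(combinations(sorted(sites), 2))
    if topology == "hub-and-spoke":
        # Spokes only tunnel to the hub, never to each other.
        return {tuple(sorted((hub, s))) for s in sites if s != hub}
    raise ValueError(f"unknown topology: {topology}")

# Two segments across the same three sites, each with its own topology.
sites = ["branch1", "branch2", "dc"]
guest = allowed_tunnels(sites, "hub-and-spoke", hub="dc")
corp = allowed_tunnels(sites, "full-mesh")

print(sorted(guest))  # guest spokes reach only the data center
print(sorted(corp))   # corporate segment is fully meshed
```

The same physical sites end up with two different logical topologies, which is exactly the RT-like influence over per-segment reachability that the text describes.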

 

Data plane module

Data plane modules are located at the customer site. The data plane module, which could take a PE hand-off, connects to the internal side of the network and must sit in the data plane path at the customer site. On the internal side, it discovers the routing protocols and participates in prefix learning; at Layer 2, it discovers the VLANs. The module can either be the default gateway or form router neighbor relationships. On the WAN side, the data plane module registers its uplink IP address with the WAN controller/orchestration system. The controller then builds encrypted tunnels between the data plane endpoints. Encrypted control channels are only needed when you build over untrusted third parties.
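The registration flow above can be sketched in a few lines: each module reports its uplink IP to the controller, and the controller instructs endpoints to build tunnels to one another. The class and addresses below are illustrative placeholders, not the actual Viptela control protocol.

```python
class Controller:
    """Toy model of a WAN controller tracking registered endpoints
    and the encrypted tunnels it asks them to establish."""

    def __init__(self):
        self.endpoints = {}  # site name -> registered uplink IP
        self.tunnels = []    # (ip_a, ip_b) tunnel pairs to bring up

    def register(self, site, uplink_ip):
        # On registration, pair the new endpoint with every existing one.
        for other_ip in self.endpoints.values():
            self.tunnels.append((uplink_ip, other_ip))
        self.endpoints[site] = uplink_ip

ctrl = Controller()
ctrl.register("branch1", "203.0.113.10")  # Internet uplink
ctrl.register("branch2", "198.51.100.7")  # MPLS uplink
ctrl.register("dc", "192.0.2.1")

print(ctrl.tunnels)  # three tunnels: a full mesh over mixed transports
```

Note that the transport behind each uplink is irrelevant to the controller; as long as each endpoint has an IP path to it, the data plane mesh can be built, which is the "any circuit" property emphasized earlier.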

If a problem occurs with controller connectivity, the on-site module can stop being the default gateway while continuing to participate in Layer 3 forwarding for existing protocols; it backs off from being the primary router for off-net traffic. It is like creating a VRF for each business unit, with a default route per VRF, a single peering point to the controller, and Policy-Based Routing (PBR) for each VRF's data plane activity. The PBR is based on information coming from the controller, and each segment can have a separate policy (for example, modifying the next hop). From a configuration point of view, you only need an IP address on the data plane module and the remote controller's IP; the controller pushes down the rest.
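The per-VRF PBR behavior, including the fallback when the controller is unreachable, can be sketched as a simple next-hop lookup. The VRF names and next hops below are hypothetical.

```python
# Policy pushed down by the controller: per-VRF next-hop overrides.
policy = {
    "vrf-guest": {"next_hop": "dc-firewall"},
    "vrf-corp": {"next_hop": "mpls-pe"},
}

# Locally configured default routes, one per VRF.
default_routes = {"vrf-guest": "internet-gw", "vrf-corp": "internet-gw"}

def next_hop(vrf, controller_reachable=True):
    """Use the controller's per-segment policy when available;
    otherwise back off to the local default route so existing
    protocols keep forwarding."""
    if controller_reachable and vrf in policy:
        return policy[vrf]["next_hop"]
    return default_routes[vrf]

print(next_hop("vrf-guest"))                              # policy next hop
print(next_hop("vrf-guest", controller_reachable=False))  # local fallback
```

The key design point is that losing the controller degrades the site to ordinary routed forwarding rather than breaking it, matching the "backs off from being the primary router" behavior described above.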

 

Viptela SD WAN: Use cases

For example, you have a branch office with three distinct segments, and you want each endpoint to have its own independent topology. The topology should be service-driven; the service should not have to follow an existing, predefined topology. Each business unit should define how it wants to connect to the network; the network team should not dictate a topology that everyone must obey.

From a carrier's perspective, they can expand their MPLS network into areas where they have no physical presence, bringing customers over this secure overlay to the closest POP with an MPLS peering. MPLS providers can thus extend their footprint to regions where they do not offer service: if a provider has customers in region X and wants to reach a customer in region Y, it can use Viptela, passing those data plane endpoints through a security framework before they enter the MPLS network.

Viptela also lets you steer traffic based on the SLA requirements of the application, aka Application-Aware Routing. For example, if two sites have dual connectivity to MPLS and the Internet, the data plane modules at the customer sites can steer traffic over either transport based on end-to-end latency or drops. They do this by maintaining real-time loss, latency, and jitter characteristics and applying policies from the centralized controller. As a result, critical traffic is always steered to the most reliable link. This architecture can scale to 1,000 nodes in a full-mesh topology.
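The steering decision can be sketched as a comparison of each transport's measured characteristics against the application's SLA. The metric values, SLA thresholds, and transport names below are illustrative assumptions, not measured data.

```python
# Real-time path metrics as maintained by the data plane modules
# (values are illustrative).
transports = {
    "mpls": {"loss_pct": 0.1, "latency_ms": 40, "jitter_ms": 3},
    "internet": {"loss_pct": 1.5, "latency_ms": 120, "jitter_ms": 15},
}

def pick_transport(sla):
    """Prefer transports meeting the SLA; among those, choose the
    lowest latency. If none comply, degrade gracefully to the best
    available link rather than dropping traffic."""
    compliant = [
        name for name, m in transports.items()
        if m["loss_pct"] <= sla["max_loss_pct"]
        and m["latency_ms"] <= sla["max_latency_ms"]
    ]
    candidates = compliant or list(transports)
    return min(candidates, key=lambda n: transports[n]["latency_ms"])

# A voice-like SLA: low loss and bounded latency.
voice_sla = {"max_loss_pct": 0.5, "max_latency_ms": 150}
print(pick_transport(voice_sla))  # only the MPLS leg complies here
```

With these sample metrics, the Internet leg violates the loss threshold, so voice traffic is steered onto MPLS; if the Internet path later recovers, the same policy would allow it again, since the decision is re-evaluated against live measurements.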

 
