Application Delivery Networks

In today's fast-paced digital world, businesses rely heavily on applications to deliver services and engage customers. However, as the demand for seamless connectivity and optimal performance increases, so does the need for a robust and efficient infrastructure. This is where Application Delivery Networks (ADNs) come into play. In this blog post, we will explore the importance of ADNs and how they enhance the delivery of applications.

An Application Delivery Network is a suite of technologies designed to optimize applications' performance, security, and availability across the internet. It bridges users and applications, ensuring data flows smoothly, securely, and efficiently.

ADNs, also known as content delivery networks (CDNs), are a distributed network of servers strategically placed across various geographical locations. Their primary purpose is to efficiently deliver web content, applications, and other digital assets to end-users. By leveraging a network of servers closer to the user, ADNs significantly reduce latency, improve website performance, and ensure faster content delivery.

Accelerated Website Performance: ADNs employ various optimization techniques such as caching, compression, and content prioritization to enhance website performance. By reducing page load times, businesses can improve user engagement, increase conversions, and boost overall customer satisfaction.

Global Scalability: With an ADN in place, businesses can effortlessly scale their online presence globally. ADNs have servers strategically located worldwide, allowing organizations to effectively serve content to users regardless of their geographic location. This scalability ensures consistent and reliable performance, irrespective of the user's proximity to the server.

Security Protection: ADNs act as a protective shield against cyber threats, including Distributed Denial of Service (DDoS) attacks. These networks are equipped with advanced security measures, including real-time threat monitoring, traffic filtering, and intelligent routing, ensuring a secure and uninterrupted online experience for users.

E-commerce Websites: ADNs play a crucial role in optimizing e-commerce websites by delivering product images, videos, and other content quickly. This leads to improved user experience, increased conversion rates, and ultimately, higher revenue.

Media Streaming Platforms: Streaming platforms heavily rely on ADNs to deliver high-quality video content without buffering or interruptions. By distributing the content across multiple servers, ADNs ensure smooth playback and an enjoyable streaming experience for users worldwide.

In conclusion, implementing an Application Delivery Network is a game-changer for businesses aiming to enhance digital experiences. With accelerated website performance, global scalability, and enhanced security, ADNs pave the way for improved user satisfaction, increased engagement, and ultimately, business success in the digital realm.

Highlights: Application Delivery Networks

 

Content Delivery Networks

Using server farms and Web proxies can help build large sites and improve Web performance, but they are insufficient for viral sites serving global content. A different approach is needed for these sites.

Content Delivery Networks (CDNs) turn the concept of traditional Web caching on its head. Rather than having the client search for a copy in a nearby cache, the provider places a copy of the requested page on a set of nodes at different locations and instructs the client to use a nearby node as its server.

The diagram shows the path data takes when a CDN distributes it; the path forms a tree. In this example, the origin server distributes a copy of the content to nodes in Sydney, Boston, and Amsterdam (dashed lines). Each client then fetches pages from the nearest node in the CDN (solid lines). As a result, both Sydney clients fetch the Sydney copy of the page; neither fetches it from the European origin server.
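As a rough illustration of the "use the closest copy" idea, here is a minimal Python sketch that picks the node with the lowest measured connection time. The node hostnames and the probe are hypothetical placeholders, not any real CDN's API; in practice, CDNs usually steer clients toward a nearby node with DNS or anycast rather than client-side probing.

```python
import socket
import time

# Hypothetical CDN nodes (placeholder hostnames, not real endpoints).
CDN_NODES = ["sydney.cdn.example", "boston.cdn.example", "amsterdam.cdn.example"]

def probe_rtt(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Time to open a TCP connection, used as a crude round-trip estimate."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable nodes are never selected

def nearest_node(nodes):
    """Return the node with the lowest connection time."""
    return min(nodes, key=probe_rtt)

if __name__ == "__main__":
    print("Fetching from:", nearest_node(CDN_NODES))
```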

CDN Tree Structure

There are three advantages to using a tree structure. When the distribution among CDN nodes becomes the bottleneck, more levels in the tree can be used to scale up the content distribution to as many clients as needed. The tree structure is efficient regardless of how many clients there are.

The origin server is not overloaded, because it communicates with the many clients through the CDN tree rather than responding to each page request on its own. Fetching pages from a nearby server is also more efficient than fetching from a distant one: TCP slow-start ramps up more quickly because of the shorter round-trip time, and the shorter network path passes through less congestion on the Internet.

Additionally, the network is placed under the least possible load. If the CDN nodes are well placed, traffic for a particular page should pass through each node only once, which matters because someone ultimately pays for network bandwidth.

Varying Applications

The application landscape has exploded compared to 15 years ago, when load balancers first arrived. Load balancers ( ADC ) must now serve content such as blogs, content sharing, wikis, shared calendars, and social media. A plethora of “chattier” protocols exist with different requirements, and every application places additional demands on the network for the functions it provides.

Each application has different expectations for its service levels. Slow networks and high server load mean you cannot efficiently run applications and web-based services. Data is slow to load, and productivity slips. Application delivery controllers ( ADC ) or load balancers can detect and adapt to changing network conditions for public, private, and hybrid cloud traffic patterns.


Back to basics with load balancing

Load balancing is the practice of distributing network traffic across a group of endpoints, and it is one answer to hardware and software performance limits. When you face scaling user demand and are maxing out performance, you have two options: scale up or scale out. Scaling up, or vertical scaling, has physical computational limits and can be costly. Scaling out (i.e., horizontal scaling) lets you distribute the computational load across as many systems as necessary to handle the workload. When scaling out, a load balancer helps spread that workload.
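As a minimal illustration of how a load balancer spreads work across a scaled-out pool, here is a short Python sketch of round-robin distribution. The backend names are hypothetical placeholders and the "request" is just a label; a real ADC also terminates connections, checks health, and applies policy.

```python
from itertools import cycle

# Hypothetical backend pool created by scaling out.
BACKENDS = ["app-server-1", "app-server-2", "app-server-3"]

def round_robin(backends):
    """Yield backends in rotation so each receives an equal share of requests."""
    return cycle(backends)

if __name__ == "__main__":
    chooser = round_robin(BACKENDS)
    for request_id in range(6):
        backend = next(chooser)
        print(f"request {request_id} -> {backend}")
```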

The Benefits of Application Delivery Networks:

1. Enhanced Network Connectivity: ADNs utilize techniques like load balancing, caching, and content optimization to deliver applications faster and improve the user experience. By distributing traffic intelligently, ADNs ensure that no single server is overwhelmed, leading to faster response times and reduced latency.

2. Scalability: ADNs enable businesses to scale their applications effortlessly. With the ability to add or remove servers dynamically, ADNs accommodate high traffic demands without compromising performance. This scalability ensures businesses can handle sudden spikes in user activity or seasonal fluctuations without disruption.

3. Security: In an era of increasing cyber threats, ADNs provide robust security features to protect applications and data from unauthorized access, DDoS attacks, and other vulnerabilities. ADNs employ advanced security mechanisms such as SSL encryption, web application firewalls, and intrusion detection systems to safeguard critical assets.

4. Global Load Balancing: As businesses expand across different geographical regions, ADNs offer global load balancing capabilities. By strategically distributing traffic across multiple data centers, ADNs ensure that users are seamlessly connected to the nearest server, reducing latency and optimizing performance.

5. Improved Availability: ADNs employ techniques like health monitoring and failover mechanisms to ensure high availability of applications. In the event of a server failure, ADNs automatically redirect requests to healthy servers, minimizing downtime and improving overall reliability, as sketched below.
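The following Python sketch shows the health-monitoring and failover idea in its simplest form, assuming a hypothetical is_healthy() probe: unhealthy backends are skipped and requests are redirected to the remaining healthy ones. It illustrates only the principle, not how any particular ADC implements it.

```python
import random

BACKENDS = ["app-server-1", "app-server-2", "app-server-3"]  # hypothetical pool

def is_healthy(backend: str) -> bool:
    """Stand-in health probe; a real ADC would use TCP/HTTP checks with timeouts."""
    return random.random() > 0.2  # simulate occasional failures

def pick_backend(backends):
    """Return any healthy backend, failing over past unhealthy ones."""
    healthy = [b for b in backends if is_healthy(b)]
    if not healthy:
        raise RuntimeError("no healthy backends available")
    return random.choice(healthy)

if __name__ == "__main__":
    for request_id in range(5):
        print(f"request {request_id} -> {pick_backend(BACKENDS)}")
```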

Lab on GLBP

GLBP (Gateway Load Balancing Protocol) is a Cisco-proprietary first-hop redundancy protocol designed to provide load balancing and redundancy for IP traffic across multiple routers or gateways. It builds on the idea behind the commonly used Hot Standby Router Protocol (HSRP) and is primarily used in enterprise networks to distribute traffic across multiple paths, ensuring optimal utilization of network resources. Notice below that when we change the GLBP priority, the role of the device changes.

Gateway Load Balancing Protocol
Diagram: Gateway Load Balancing Protocol (GLBP)
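To make the priority behavior concrete, here is a small Python sketch that mimics how GLBP elects its Active Virtual Gateway (AVG): the router with the highest priority wins, with the highest IP address as the tie-breaker. The router names and addresses are assumptions; this models only the election rule described above, not the protocol itself.

```python
import ipaddress

# Hypothetical GLBP group members: (name, priority, interface IP).
ROUTERS = [
    ("R1", 100, "10.0.0.1"),
    ("R2", 100, "10.0.0.2"),
    ("R3", 100, "10.0.0.3"),
]

def elect_avg(routers):
    """Highest priority wins; ties are broken by the highest IP address."""
    return max(routers, key=lambda r: (r[1], int(ipaddress.ip_address(r[2]))))

print("AVG:", elect_avg(ROUTERS)[0])  # R3 wins the tie on IP address

# Raising R1's priority changes the role, as in the lab output above.
ROUTERS[0] = ("R1", 110, "10.0.0.1")
print("AVG after priority change:", elect_avg(ROUTERS)[0])  # now R1
```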

Application Delivery Network and Network Control Points

Application delivery controllers act as network control points that protect our networks and improve the application service levels delivered across them. Challenges in securing the data center range from preventing denial-of-service attacks at the network layer to protecting the applications themselves.

Also, how do you connect data centers to link on-premise to remote cloud services and support traffic bursts between both locations? When you look at the needs of the data center, the network is the control point, and nothing is more important than this control point. ADC allows you to insert control points and enforce policies at different points of the network.

Application delivery network
Diagram: Application delivery network.

Case Study: Citrix Netscaler

The company was founded in 1998 and launched its first product in 2000: a simple Transmission Control Protocol (TCP) proxy. All it did was sit behind a load balancer, proxy TCP connections at Layer 4, and offload them from backend servers. As the web grew, the load placed on backend servers by an ever-increasing number of TCP connections became a scalability issue, so Netscaler wrote its own performance-oriented custom TCP stack.

Netscaler has a fast PCI architecture with no interrupts, and its code is written with the x86 architecture in mind. x86 pairs fast processors with slower dynamic random-access memory (DRAM), so the processor is meant to work out of its local cache, but that model does not match how network traffic flows. Netscaler’s code therefore processes one packet while permitting entry to the next, which gives it excellent latency statistics.

Application delivery network and TriScale technology

TriScale technology changes how the Application Delivery Controller (ADC) is provisioned and managed. It brings cloud agility to data centers. TriScale allows networks to scale up, out, and in via consolidation.

Scale-out: Clustering

For high availability (HA), Netscaler offers only active/standby and clustering; they oppose active/active. Active/active deployments are not truly active: most are set up by serving one application through one load balancer and another application through a second. That does not give you any more capacity, and you cannot oversubscribe, because if one fails, the other node has to take over and service the additional load from the failed load balancer.

Netscaler skipped this and went straight to clustering: up to 32 nodes can be clustered, effectively creating a cloud of Netscalers. All nodes are active, sharing state and configuration, so if one of your ADCs goes down, the others pick up transparently. Every node knows all session information, and that information is shared.

 Stateless vs. stateful

Netscaler offers dynamic failover for long-lived protocols, like Structured Query Language (SQL) sessions and other streaming protocols. This differs from when you are load-balancing Hypertext Transfer Protocol (HTTP). HTTP is a generic and stateless application-level protocol. No information is kept across requests; applications must remember the per-user state.

Every HTTP request is a valid standalone request per the protocol. No one knows or cares much if an HTTP request is lost; clients simply try again, so high availability is generally not an issue for web traffic. With HTTP/2, the idea of sustaining the connection during failover means that the session never gets torn down and restarted.

HTTP ( stateless ) lives on top of TCP ( stateful ). Transmission Control Protocol (TCP) is stateful in the sense that it maintains state in the form of the TCP window size (how much data an endpoint can receive) and packet ordering (confirming receipt of packets). TCP endpoints must remember the state of the other side. Stateless protocols can be built on top of stateful protocols, and stateful protocols can be built on top of stateless ones.

Applications built on top of HTTP aren’t necessarily stateless; they implement state over HTTP. For example, a client is first authenticated before any data transfer, which is common for websites that require users to visit a login page before posting a message. A minimal sketch of this pattern follows.
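The sketch below shows one common way applications layer state onto stateless HTTP: the server issues a session cookie at login and looks up per-user state on each subsequent request. It uses only the Python standard library; the /login route and in-memory session store are illustrative assumptions, not a production design.

```python
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

SESSIONS = {}  # session_id -> per-user state (in memory; illustrative only)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/login":
            # Authenticate (omitted) and remember the user across requests.
            session_id = str(uuid.uuid4())
            SESSIONS[session_id] = {"user": "alice", "requests": 0}
            self.send_response(200)
            self.send_header("Set-Cookie", f"session={session_id}; HttpOnly")
            self.end_headers()
            self.wfile.write(b"logged in\n")
            return

        # Each request is standalone at the HTTP level; the cookie restores state.
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        session = SESSIONS.get(cookie["session"].value) if "session" in cookie else None
        if session is None:
            self.send_response(401)
            self.end_headers()
            self.wfile.write(b"please log in first\n")
            return
        session["requests"] += 1
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"hello {session['user']}, request #{session['requests']}\n".encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```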

Scale-in: Multi-tenancy

In the enterprise, you can have network overlays ( such as VXLAN) that allow the virtualization of segments. We need network services like firewalls and load balancers to do the same thing. Netscaler offers a scale-in service that allows a single platform to become multiple. It’s not a software partition; it’s a hardware partition.

100% of CPU, crypto, and network resources are isolated, which allows individual instances to be managed without affecting the others. If one load-balancing instance experiences a big traffic spike, it does not affect the other instances on that device.

Every application or application owner can have a dedicated ADC. This approach lets you meet the application requirement without worrying about contention or overrun from other application configurations. In addition, it enables you to run several independent Netscaler instances on a single 2RU appliance. Every application owner looks like they have a dedicated ADC, and from the network view, each application is running on its appliance.

Behind the scenes, Netscaler consolidates all of this onto a single device: the SDX product is essentially the MPX hardware platform hosting multiple VPX instances. When you spin up a VPX on the SDX, you allocate isolated resources to it, such as CPU and disk space.

Scale-up: pay as you grow

Scale-up is a software license key upgrade that increases performance and offers customers more flexibility. For example, if you buy an MPX, you are not locked into specific performance metrics of that box. With a license upgrade, you can double its throughput, packets per second, connections per second, and Secure Sockets Layer (SSL) transactions per second.

Netscaler and Software-defined networking (SDN)

When we usually discuss SDN, we focus on Layer 2 and 3 networks and what it takes to separate the control and data plane. The majority of SDN discussions are Layer 2 and Layer 3-centric conversations. However, Layer 4 to Layer 7 solutions need to be integrated into the SDN network. Netscaler is developing centralized control capabilities for integrating Layer 4 to Layer 7 solutions into SDN networks.

So, how can SDN directly benefit the application at Layer 7? As applications grow in importance, there must be ways to auto-deploy them. Storage and compute have been automated for some time now, but the network is more complex, so virtualizing it is harder. This is where SDN comes into play: it abstracts much of the complexity of managing networks.

In an era where applications are businesses’ backbone, ensuring optimal performance, scalability, and security is crucial. Application Delivery Networks play a vital role in achieving these objectives. From enhancing performance and scalability to providing robust security and global load balancing, ADNs offer a comprehensive solution for businesses seeking to optimize application delivery. By leveraging ADNs, companies can deliver seamless experiences to their users, gain a competitive edge, and drive growth in today’s digital landscape.

 

Summary: Application Delivery Networks

In today’s digital age, where speed and efficiency are paramount, businesses and users constantly seek ways to optimize their online experiences. One technology that has emerged as a game-changer in this realm is Application Delivery Networks (ADNs). ADNs are revolutionizing web performance by improving speed, reliability, and security. In this blog post, we explored the critical aspects of ADNs and their role in enhancing web performance.

Understanding Application Delivery Networks

ADNs, also known as Content Delivery Networks (CDNs), are a distributed network of servers strategically placed worldwide. These servers act as intermediaries between users and web servers, optimizing web content delivery. By caching and storing website data closer to the end-user, ADNs reduce latency and minimize the distance data packets travel.

Accelerating Web Performance

One of the primary benefits of ADNs is their ability to accelerate web performance. With traditional web hosting, a user’s request goes directly to the origin server, which may be geographically distant. This can result in slower loading times. ADNs, on the other hand, leverage their distributed server infrastructure to serve content from the server nearest to the user. By minimizing the distance and network hops, ADNs significantly improve page load times, providing a seamless browsing experience.

Enhancing Reliability and Scalability

Another advantage of ADNs is their ability to enhance reliability and scalability. ADNs employ load balancing, failover mechanisms, and intelligent routing to ensure the high availability of web content. In case of server failures or traffic spikes, ADNs automatically redirect requests to alternate servers, preventing downtime and maintaining a smooth user experience.

Strengthening Security

In addition to performance benefits, ADNs offer robust security features. ADNs can protect websites from DDoS attacks, malicious bots, and other security threats by acting as a shield between users and origin servers. With their widespread server infrastructure and advanced security measures, ADNs provide an additional layer of defense, ensuring the integrity and availability of web content.

Conclusion:

Application Delivery Networks have emerged as a vital component in optimizing web performance. By leveraging their distributed server infrastructure, ADNs enhance speed, reliability, and security, making the browsing experience faster and more secure for users worldwide. As businesses strive to deliver exceptional online experiences, investing in ADNs has become essential.

Network Overlays

In the world of networking, there is a hidden gem that has been revolutionizing the way we connect and communicate. Network overlays, the mystical layer that enhances our networks, are here to unlock new possibilities and transform the way we experience connectivity. In this blog post, we will delve into the enchanting world of network overlays, exploring their benefits, functionality, and potential applications.

Network overlays, at their core, are virtual networks created on top of physical networks. They act as an additional layer, abstracting the underlying infrastructure and providing a flexible and scalable network environment. By decoupling the logical and physical aspects of networking, overlays enable simplified management, efficient resource utilization, and dynamic adaptation to changing requirements.

One of the key elements that make network overlays so powerful is their ability to encapsulate and transport network traffic. By encapsulating data packets within packets of a different protocol, overlays create virtual tunnels that can traverse different networks, regardless of their underlying infrastructure. This magic enables seamless communication between geographically dispersed devices and networks, bringing about a new level of connectivity.

The versatility of network overlays opens up a world of possibilities. From enhancing security through encrypted tunnels to enabling network virtualization and multi-tenancy, overlays empower organizations to build complex and dynamic network architectures. They facilitate the deployment of services, applications, and virtual machines across different environments, allowing for efficient resource utilization and improved scalability.

Network overlays have found their place in various domains. In data centers, overlays enable the creation of virtual networks for different tenants, isolating their traffic and providing enhanced security. In cloud computing, overlays play a crucial role in enabling seamless communication between different cloud providers and environments. Additionally, overlays have been leveraged in Software-Defined Networking (SDN) to enable network programmability and agility.

Highlights: Network Overlays

Overlay networks are computer networks layered on top of other networks (logical rather than physical). They differ from the traditional OSI layered network model and almost always assume that the underlay network is an IP network. Overlay technologies include VXLAN, Layer 2 and Layer 3 BGP VPNs, and IP-over-IP tunnels such as GRE or IPsec. Overlay networks such as SD-WAN use IP-over-IP technologies.

The overlay network (SDN overlay) allows multiple network layers to be run on top of each other, adding new applications and improving security. Multiple secure overlays can be created using software over existing networking hardware infrastructure by making virtual connections between two endpoints. Endpoints can be physical locations, such as network ports, or logical locations, such as software addresses, in the cloud.

Software tags, labels, and/or encryption create a virtual tunnel between two network endpoints. If encryption is used, end users must be authenticated to use the connection. The technology can be thought of like a phone system: an identification tag or number is used to locate a device in the network, creating virtual connections.

Networking approach based on overlays

Different overlay networking approaches are often debated in the SDN community. Some software-only solutions may not be able to integrate at the chip level, depending on the technology. The layering of software and processing in overlay networking has been criticized for creating performance overhead. Network overlays are controlled by SDN controllers using the OpenFlow protocol, which requires specific software code or “agents” to be installed.

Change in Traffic Patterns

Thanks to the paradigm shift toward cloud computing, a host of physical servers and I/O devices, even in remote geographic locations, can host multiple virtual servers that share the same logical network. In contrast to the traditional north-south direction of data traffic within data centers, virtualization has driven a significant increase in east-west traffic, that is, communication between servers and applications within a data center.

In corporate networks and on the Internet, much of the data required by the end user involves more complex processing before it is delivered. A web server reaching a database through an app server is a typical example of this east-west traffic and of the need for such preprocessing.

The birth of network overlays

Network virtualization overlays have become the de facto solution for addressing the problems just described in relation to data center expansion. Overlays allow existing network technologies to be abstracted, extending the capabilities of classic networks.

Networking has been using overlays for quite some time. Overlays were developed because of the disadvantages of conventional networks. An overlay is a tunnel that runs on top of a physical network infrastructure, as its name implies.

MPLS overlay
Diagram: MPLS Overlay

Following the MPLS- and GRE-based encapsulations of the 1990s, other tunneling technologies, such as IPsec, 6in4, and L2TPv3, also gained popularity. For example, 6in4 tunnels were used to carry payloads over a transport network that could not support the payload type. These tunnels were used for security, to simplify routing lookups, or to carry payloads over unsupported transport networks.

Network Overlays and Virtual Networks

Network overlays have emerged as a powerful solution to address the challenges posed by the increasing complexity of modern networks. This blog post will explore network overlays, their benefits, and how they improve connectivity and scalability in today’s digital landscape.

Network overlays are virtual networks that run on physical networks, providing an additional abstraction layer. They allow organizations to create logical networks independent of the underlying physical infrastructure. This decoupling enables flexibility, scalability, and simplified management of complex network architectures.

Overlay networking
Diagram: Overlay Networking with VXLAN

Virtual Network Services

Network overlays refer to virtualizing network services and infrastructure over existing physical networks. By decoupling the network control plane from the underlying hardware, network overlays provide a layer of abstraction that simplifies network management while offering enhanced flexibility and scalability. This approach allows organizations to create virtual networks tailored to their specific needs without the constraints imposed by physical infrastructure limitations.

Creating an overlay tunnel

A network overlay is an architecture that creates a virtualized network on top of an existing physical network. It allows multiple virtual networks to run independently and securely on the same physical infrastructure. Network overlays are a great way to create a more secure and flexible network environment without investing in new infrastructure.

Network overlays can be used for various applications, such as creating virtual LANs (VLANs), virtual private networks (VPNs), and multicast networks. For example, DMVPN (Dynamic Multipoint VPN), with its several phases, is a secure technology that allows multiple sites to be connected efficiently and securely.

DMVPN and WAN Virtualization

In addition, they can segment traffic and provide secure communication between two or more networks. As a result, network overlays allow for more efficient resource use and provide better performance, scalability, and security.

Enhanced Connectivity:

Network overlays improve connectivity by enabling seamless communication between different network domains. By abstracting the underlying physical infrastructure, overlays facilitate the creation of virtual network segments that can span geographical locations, data centers, and cloud environments. This enhanced connectivity promotes better collaboration, data sharing, and application access within and across organizations.


Overlay Tunnel

Key Network Overlays Discussion Points:


  • Introduction to network overlays and what is involved.

  • Highlighting the details of control plane interaction.

  • Technical details on the encapsulation overhead.

  • Scenario: Security in the tunnel overlay.

  • A final note on STP and layer 2 attacks.

Back to Basics With Network Overlays

Supporting distributed application

There has been a significant paradigm shift in data center networks. This evolution has driven network overlays known as tunnel overlay, bringing several new requirements to data center designs. Distributed applications are transforming traffic profiles, and there is a rapid rise in intra-DC traffic ( East-West ).

We designers face several challenges to support this type of scale. First, we must implement network virtualization with the overlay tunnel for large cloud deployment.

Suppose a customer requires a logical segment per application, and each application requires load balancing or firewall services between segments. In that case, an all-physical network using traditional VLANs is not feasible. The limit of roughly 4094 VLANs and the requirement for stretched Layer 2 subnets have pushed designers to virtualize workloads over an underlying network.

1st Lab guide on network overlay with VXLAN

The following guide shows a Layer 2 overlay across a routed Layer 3 core. Spine A and Spine B run OSPF with each other and with the leaf layer. VXLAN is the overlay protocol that maps a bridge domain to a VNI, extending Layer 2 across the routed core. Notice in the show command output that the encapsulation is set to VXLAN.

VXLAN overlay
Diagram: VXLAN Overlay

Concepts of network Virtualization

Network virtualization is the practice of cutting a single physical network into multiple virtual networks. Virtualizing a resource allows it to be shared by multiple users. Numerous kinds of virtual networks have sprung up over the decades to satisfy different needs.

A primary distinction between these different types is their model for providing network connectivity. Networks can provide connectivity via bridging (L2) or routing (L3). Thus, virtual networks can be either virtual L2 networks or virtual L3 networks.

Virtual networks started with the Virtual Local Area Network (VLAN). The VLAN was invented to lessen unnecessary chatter in a Layer 2 network by isolating applications from their noisy neighbors. It was later pushed into the world of security as well.

Then came Virtual Routing and Forwarding (VRF): the virtual L3 network was invented along with the L3 Virtual Private Network (L3VPN) to solve the problem of interconnecting an enterprise’s geographically disparate networks over a public network.

Main Network Overlay Components

  • Network overlays are virtual networks that run on physical networks.

  • Network overlays improve connectivity by enabling seamless communication between different network domains.

  • The limitations of 4000 VLANS and the requirement for stretched Layer 2 subnets have pushed designers to virtualize workloads over an underlying network.

  • Virtual networks can be either virtual L2 networks or virtual L3 networks.

Benefits of Network Overlays:

1. Simplified Network Management: With network overlays, organizations can manage their networks centrally, using software-defined networking (SDN) controllers. This centralized approach eliminates the need for manual configuration and reduces the complexity associated with traditional network management.

2. Enhanced Scalability: Network overlays enable businesses to scale their networks easily by provisioning virtual networks on demand. This flexibility allows rapid deployment of new services and applications without physical network reconfiguration.

3. Improved Security: Network overlays provide an additional layer of security by encapsulating traffic within virtual tunnels. This isolation helps prevent unauthorized access and reduces the risk of potential security breaches, especially in multi-tenant environments.

4. Interoperability: Network overlays can be deployed across heterogeneous environments, enabling seamless connectivity between different network types, such as private and public clouds. This interoperability makes it possible to extend the network across multiple locations and integrate various cloud services effortlessly.

  • Scalability and Elasticity

One of the critical advantages of network overlays is their ability to scale and adapt to changing network requirements. Overlays can dynamically allocate resources based on demand, allowing network administrators to deploy new services or expand existing ones rapidly. This elasticity enables organizations to meet the evolving needs of their users and applications without the constraints imposed by the underlying physical infrastructure.

  • Simplified Network Management

Network overlays simplify network management by providing a centralized control plane. This control plane abstracts the complexity of the underlying physical infrastructure, allowing administrators to configure and manage the virtual networks through a single interface. This simplification reduces operational overhead, minimizes human errors, and enhances network security.

  • VXLAN vs VLAN

One of the first notable differences between VXLAN and VLAN is scalability. The 24-bit VXLAN Network Identifier (VNI) enables up to roughly 16 million isolated networks, overcoming the limitation of the 12-bit VLAN ID, which allows a maximum of only 4094. The sketch below packs a VXLAN header to show where the 24-bit VNI sits.
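This is a minimal sketch, assuming the standard 8-byte VXLAN header layout from RFC 7348 (an 8-bit flags field with the I bit set, reserved bits, a 24-bit VNI, and a final reserved byte). It only packs and unpacks the header to illustrate the ID space; it is not a full encapsulation implementation.

```python
import struct

MAX_VNIS = 2 ** 24          # 16,777,216 possible VXLAN segments
MAX_VLANS = 2 ** 12 - 2     # 4094 usable VLAN IDs

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit set), reserved, 24-bit VNI, reserved."""
    if not 0 <= vni < MAX_VNIS:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # I flag set, low 24 bits reserved
    vni_and_reserved = vni << 8       # 24-bit VNI followed by a reserved byte
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

def unpack_vni(header: bytes) -> int:
    _, vni_and_reserved = struct.unpack("!II", header)
    return vni_and_reserved >> 8

if __name__ == "__main__":
    header = pack_vxlan_header(10010)
    print(f"VXLAN header: {header.hex()}  VNI: {unpack_vni(header)}")
    print(f"VXLAN segments: {MAX_VNIS:,} vs usable VLANs: {MAX_VLANS:,}")
```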

Tunnel Overlay
Multiple segments per application and the need for a tunnel overlay.

What are the drawbacks of network overlays, and how do they affect network stability?

Use Cases of Network Overlays:

1. Data Center Networking: Network overlays are commonly used in data center environments to simplify network management, enhance scalability, and improve workload mobility. By abstracting the physical network infrastructure, data center operators can create isolated virtual networks for different applications or tenants, ensuring optimal resource utilization and agility.

2. Cloud Networking: Network overlays enable connectivity and security in cloud environments. Whether public, private, or hybrid clouds, network overlays allow organizations to extend their networks seamlessly, ensuring consistent connectivity and security policies across diverse cloud environments.

3. Multi-site Connectivity: For organizations with multiple locations, network overlays provide a cost-effective solution for connecting geographically dispersed sites. By leveraging virtual networks, businesses can establish secure and reliable communication channels between sites, regardless of the underlying physical infrastructure.

Overlay Technologies:

1. Virtual Private Networks (VPNs):

Virtual Private Networks create secure, encrypted connections over public networks, enabling remote access and secure communication between geographically dispersed locations. VPNs are commonly used to provide secure connectivity for remote workers or to establish site-to-site connections between different branches of an organization.

2. Software-Defined Networking (SDN):

Software-defined networking is a network architecture that separates the control plane from the data plane, enabling centralized network management and programmability. SDN overlays can leverage network virtualization techniques to create logical networks independent of the underlying physical infrastructure.

3. Network Function Virtualization (NFV):

Network Function Virtualization abstracts network services, such as firewalls, load balancers, and routers, from dedicated hardware appliances and runs them as virtual instances. NFV overlays allow organizations to deploy dynamically and scale network services, reducing costs and improving operational efficiency.

Control Plane Interaction

Tunneled network overlays

Virtualization adds a level of complexity to the network. Consider the example of a standard tunnel. We are essentially virtualizing workloads over an underlying network. From a control plane perspective, there must be more than one control plane.

This results in two views of the network’s forwarding and reachability information: the view from the tunnel endpoints and the view from the underlying network. The overlay control plane, which may be static or dynamic, provides reachability through the virtual topology, while the underlying control plane provides reachability to the tunnel endpoints themselves.

overlay tunnel
The overlay tunnel and potential consequences.

Router A has two paths to reach 192.0.2.0/24. Already, we have the complexity of influencing and managing which traffic should and shouldn’t go down the tunnel. Modifying metrics for specific destinations will influence path selection, but this brings additional configuration complexity and manual policy management.

An incorrectly configured interaction between the two control planes may cause a routing loop or suboptimal routing through the tunnel interfaces. The “routers in the middle” and the “routers at the tunnel edges” have different views of the network, which increases complexity.

  • A key point: Not an independent topology

These two control planes may seem to act independently, but they are not independent topologies. The control plane of the virtual topology relies heavily on the control plane of the underlying network. These control planes should not be allowed to interplay freely, as both can react differently to inevitable failures. The timing of the convergence process and how quickly each control plane reacts may be the same or different.

The underlying network could converge faster or slower than the overlay control plane, affecting application performance. Best practice is to design the overlay control plane to detect and react to failures either clearly faster or clearly slower than the underlying control plane, rather than letting the two race each other.

Encapsulation overhead

Every VXLAN packet originating from an end host and sent toward the IP core is stamped with a VXLAN encapsulation, adding roughly 50 bytes per packet from the source to the destination server. If the core cannot accommodate the larger MTU, or Path MTU discovery is broken, packets may have to be fragmented. The VXLAN header must also be encapsulated and de-encapsulated on the virtual switch, which consumes compute cycles. Both are problematic for network performance; the sketch after the diagram adds up where those 50 bytes come from.

vxlan overhead
VXLAN overhead.
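A minimal sketch of the arithmetic, assuming the usual VXLAN encapsulation stack (outer Ethernet, outer IPv4, outer UDP, and the VXLAN header) and a standard 1500-byte underlay MTU:

```python
# Per-packet VXLAN encapsulation overhead (outer headers added to every frame).
OUTER_ETHERNET = 14   # outer MAC header (18 with an outer 802.1Q tag)
OUTER_IPV4 = 20       # outer IPv4 header
OUTER_UDP = 8         # outer UDP header (destination port 4789)
VXLAN_HEADER = 8      # flags + 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(f"VXLAN overhead: {overhead} bytes per packet")  # 50 bytes

# With a 1500-byte underlay MTU, the outer IP/UDP/VXLAN headers (36 bytes)
# leave room for an inner Ethernet frame of at most 1464 bytes, so either
# the inner MTU must shrink or the underlay MTU must grow to avoid fragmentation.
UNDERLAY_MTU = 1500
max_inner_frame = UNDERLAY_MTU - (OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER)
print(f"Largest inner Ethernet frame without fragmentation: {max_inner_frame} bytes")
```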

Security in a tunnel overlay

There are several security concerns with tunnels and network overlays. The most notable is that they hide path information: a tunnel can take one path one day and a different path the next, and the change may go unnoticed by the network administrator. Traditional routing is hop by hop; every router decides where the traffic should be routed.

However, independent hop-by-hop decisions are not signaled and are not known by the tunnel endpoints. As a result, an attacker can direct the tunnel traffic via an unintended path where the rerouted traffic can be monitored and snooped.

VXLAN security

Tunneled traffic hides from any policies or security checkpoints. Many firewalls have HTTP port 80 open to support web browsing. This can allow an attacker to tunnel traffic in an HTTP envelope, bypassing all the security checks. There are also several security implications if you are tunneling with GRE.

First, GRE performs no encryption or authentication on any part of the data’s journey. The optional 32-bit tunnel key used to identify individual traffic flows is restricted to 2^32 possible values and can be brute-forced.

Finally, GRE’s sequence number, used to provide in-order delivery, is weakly implemented. These shortcomings have opened the door to several MTU-based and GRE packet-injection attacks.

 STP and Layer 2 attacks

VXLAN extends Layer 2 domains across Layer 3 boundaries, resulting in larger, flatter Layer 2 networks. From an intrusion standpoint, the attack surface grows considerably as we connect remote, previously disjointed endpoints, compared with traditional VLANs, where the Layer 2 broadcast domain was much smaller.

You are open to various STP attacks if you run STP over VXLAN. Tools such as BSD brconfig and Linux bridge-utils can inject STP frames into a Layer 2 network and can be used to insert a rogue root bridge to modify the traffic path.

Does a VXLAN tunnel overlay have built-in security?

The VXLAN standard has no built-in security, so if your core becomes compromised, so does all your VXLAN-tunneled traffic. Schemes such as 802.1X should be deployed for admission control of VTEPs (tunnel endpoints); 802.1X at the edges provides a defense so that rogue endpoints cannot inject traffic into the VXLAN cloud. The VXLAN payload can also be encrypted with IPsec.

Closing Points: Understanding Network Overlays

At its core, a network overlay is a virtual network created using software-defined networking (SDN) technologies. It enables the creation of logical network segments independent of the physical infrastructure. By decoupling the network’s control plane from its data plane, overlays provide flexibility, scalability, and agility for network architectures.

Benefits of Network Overlays

Enhanced Security and Isolation

Network overlays provide strong isolation between virtual networks, ensuring that traffic remains separate and secure. This isolation helps protect sensitive data and prevents unauthorized access, making overlays an ideal solution for multi-tenant environments.

Simplified Network Management

With network overlays, administrators can manage and control the network centrally, regardless of the underlying physical infrastructure. This centralized management simplifies network provisioning, configuration, and troubleshooting, improving operational efficiency.

Overlay Technologies

Virtual Extensible LAN (VXLAN)

VXLAN is a widely adopted overlay technology that extends Layer 2 networks over an existing Layer 3 infrastructure. It uses encapsulation techniques to provide scalability and flexibility, allowing for the seamless expansion of network segments.

Generic Routing Encapsulation (GRE)

GRE is another popular overlay protocol that enables the creation of private point-to-point or multipoint tunnels over an IP network. It provides a simple and reliable way to connect geographically dispersed networks and facilitates the transit of diverse protocols.

Use Cases for Network Overlays

Data Center Virtualization

Network overlays play a crucial role in data center virtualization, allowing the creation of virtual networks that can span multiple physical servers. This enables efficient resource utilization, workload mobility, and simplified network management.

Hybrid Cloud Connectivity

By leveraging network overlays, organizations can establish secure and seamless connections between on-premises infrastructure and public cloud environments. This enables hybrid cloud architectures, providing the flexibility to leverage the benefits of both worlds.

In conclusion, network overlays have emerged as a powerful tool in modern networking, enabling flexibility, security, and simplified management. With their ability to abstract away the complexities of physical infrastructure, overlays pave the way for innovative network architectures. As technology continues to evolve, network overlays will undoubtedly play a vital role in shaping the future of networking. 

Summary: Network Overlays

Network overlays have revolutionized the way we connect and communicate in the digital realm. In this blog post, we will explore the fascinating world of network overlays, their purpose, benefits, and how they function. So, fasten your seatbelts as we embark on this exciting journey!

What are Network Overlays?

Network overlays are virtual networks that are built on top of an existing physical network infrastructure. They provide an additional layer of abstraction, allowing for enhanced flexibility, scalability, and security. By decoupling the logical network from the physical infrastructure, network overlays enable organizations to optimize their network resources and streamline operations.

Benefits of Network Overlays

Improved Scalability:

Network overlays allow for seamless scaling of network resources without disrupting the underlying infrastructure. This means that as your network demands grow, you can easily add or remove virtual network components without affecting the overall network performance.

Enhanced Security:

With network overlays, organizations can implement advanced security measures to protect their data and applications. By creating isolated virtual networks, sensitive information can be shielded from unauthorized access, reducing the risk of potential security breaches.

Simplified Network Management:

Network overlays provide a centralized management interface, allowing administrators to control and monitor the entire network from a single point of control. This simplifies network management tasks, improves troubleshooting capabilities, and enhances overall operational efficiency.

How Network Overlays Work

Overlay Protocols:

Network overlays utilize various overlay protocols such as VXLAN (Virtual Extensible LAN), NVGRE (Network Virtualization using Generic Routing Encapsulation), and GRE (Generic Routing Encapsulation) to encapsulate and transmit data packets across the physical network.

Control Plane and Data Plane Separation:

Network overlays separate the control plane from the data plane. The control plane handles the creation, configuration, and management of virtual networks, while the data plane deals with the actual forwarding of data packets.

Use Cases of Network Overlays

Multi-Tenancy Environments:

Network overlays are highly beneficial in multi-tenant environments, where multiple organizations or users share the same physical network infrastructure. By creating isolated virtual networks, each tenant can have their own dedicated resources while maintaining logical separation.

Data Center Interconnectivity:

Network overlays enable seamless connectivity between geographically dispersed data centers. By extending virtual networks across different locations, organizations can achieve efficient workload migration, disaster recovery, and improved application performance.

Hybrid Cloud Deployments:

Network overlays play a crucial role in hybrid cloud environments, where organizations combine public cloud services with on-premises infrastructure. They provide a unified network fabric that connects the different cloud environments, ensuring smooth data flow and consistent network policies.

Conclusion:

In conclusion, network overlays have revolutionized the networking landscape by providing virtualization and abstraction layers on top of physical networks. Their benefits, including improved scalability, enhanced security, and simplified management, make them an essential component in modern network architectures. As technology continues to evolve, network overlays will undoubtedly play a vital role in shaping the future of networking.