SDN Traffic Optimizations

In today’s digital age, where data consumption is exponentially increasing, efficient network traffic management has become more critical than ever. Traditional networking approaches often struggle to meet the demand, leading to congestion, latency, and poor performance. However, with the advent of Software-Defined Networking (SDN), network administrators now have powerful tools to optimize network traffic and improve overall network efficiency.

SDN is an innovative networking approach that separates the control and data planes. By centralizing network control in a software-based controller, SDN enables administrators to have a holistic view of the network and implement traffic optimizations seamlessly. This centralized control allows for dynamic traffic routing, load balancing, and prioritization, improving network performance.

 

Highlights: SDN Traffic Optimizations

  • Challenges to Multihoming

Multihoming to different transit providers has become an essential service component at the Internet edge. Multihoming allows you to satisfy several high-level requirements, including redundancy, which can be applied at the site or device/link level and protects against a single point of failure.

There are several ways to route and manage traffic in and out of multi-homed sites. Some rely on static routing, while others rely on the routing policy capabilities of the inter-domain routing protocol, Border Gateway Protocol (BGP).

 

You may find the following helpful post for pre-information:

  1. WAN SDN 
  2. TCP IP Optimization
  3. BGP SDN
  4. What is BGP Protocol in Networking
  5. Network Traffic Engineering

 



Inbound Traffic Optimization

Key SDN Traffic Optimizations Discussion Points:


  • Introduction to SDN Traffic Optimizations and what is involved.

  • Highlighting the challenges with BGP inbound traffic engineering.

  • Critical points on how this can be solved.

  • Technical details of a use case based on the LISP protocol.

  • A final point on WAN SDN.

 

Back to basics with BGP

Border Gateway Protocol (BGP) is the routing protocol used to exchange routing information across the Internet. BGP is considered the glue of the Internet and is the only protocol designed to deal with a network of the Internet’s size. BGP is sometimes called a path-vector protocol: a variant of distance vector that tracks the full AS path a route has traversed.

BGP does not look at something as simple as hop count or link cost, but neither does it keep track of the complete topology of the entire network. Instead, BGP learns reachability through neighbor-peer relationships that must be explicitly configured.

 

  • A key point: Lab guide on BGP

Here we have a sample BGP network consisting of two nodes, BGP Peer 1 and BGP Peer 2. We are running iBGP between these peers, which is done by configuring both with the same AS number, in our case AS 1. The command show ip bgp summary is used to determine the status of a BGP neighbor. Remember that BGP runs over TCP port 179 and is a path vector protocol.

 

Diagram: Port 179 with BGP peerings.

 

BGP Inbound Traffic Engineering

BGP is great for reducing network complexity and increasing scale at the edge, but it has shortcomings concerning path selection. BGP is scalable and robust, yet routing decisions based purely on BGP attributes are flawed. These shortcomings drive the requirement for a new approach: SDN traffic optimizations and triangular routing with the LISP control plane.

For BGP inbound traffic engineering, the protocol first validates path attributes. It then selects the best path by checking, in order: highest local preference, shortest AS path, lowest ORIGIN attribute, lowest MED, eBGP routes over iBGP routes, and the lowest IGP metric to the NEXT_HOP. Although these attributes allow granular policy control, they say nothing about path performance. So, how can you add intelligence to BGP?
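
To make the order of operations concrete, the decision process can be sketched as an ordered comparison. Below is a minimal Python sketch under simplified assumptions; a real BGP implementation evaluates additional steps (such as weight and local origination) and compares MED only between paths received from the same neighboring AS:

```python
from dataclasses import dataclass, field

@dataclass
class BgpPath:
    local_pref: int = 100                         # higher wins
    as_path: list = field(default_factory=list)   # shorter wins
    origin: int = 0                               # IGP(0) < EGP(1) < incomplete(2)
    med: int = 0                                  # lower wins
    ebgp: bool = True                             # eBGP preferred over iBGP
    igp_metric: int = 0                           # lower metric to NEXT_HOP wins

def best_path(paths):
    # The tuple key mirrors the ordered tie-breaks above; min() returns the
    # path that wins the earliest differing comparison.
    return min(paths, key=lambda p: (-p.local_pref, len(p.as_path),
                                     p.origin, p.med, not p.ebgp, p.igp_metric))

primary = BgpPath(local_pref=200, as_path=[64512, 64513])
backup = BgpPath(local_pref=100, as_path=[64514])
assert best_path([primary, backup]) is primary  # local-pref beats a shorter AS path
```

Note how nothing in the comparison consults latency, loss, or utilization: that is precisely the gap performance-aware SDN tooling fills.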

SDN Traffic Optimizations

 

Traffic Engineering with SDN:

SDN enables administrators to implement advanced traffic engineering techniques to optimize network traffic. By leveraging real-time network analytics and traffic monitoring, SDN controllers can intelligently route traffic based on various parameters such as bandwidth, latency requirements, and network congestion. This dynamic traffic engineering ensures network resources are efficiently utilized, reducing bottlenecks and improving overall network performance.

Quality of Service (QoS) Optimization:

One of the key benefits of SDN is its ability to prioritize certain types of network traffic over others. With SDN, administrators can implement Quality of Service (QoS) policies to ensure critical applications and services receive the necessary bandwidth and low latency they require. By prioritizing traffic based on predefined rules, SDN can guarantee a consistent user experience for essential services while preventing network congestion caused by non-critical traffic.
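
At the host level, prioritization ultimately depends on packets carrying markings that the network’s QoS policy can act on. Here is a minimal Python sketch that marks a socket’s traffic with the Expedited Forwarding DSCP; the address and port are placeholders:

```python
import socket

EF_TOS = 0xB8  # DSCP 46 (Expedited Forwarding) shifted into the IP ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Mark the flow so switches and routers honoring DSCP queue it ahead of
# best-effort traffic; the queuing policy itself lives in the network.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
sock.connect(("192.0.2.10", 5060))  # placeholder endpoint (TEST-NET-1 address)
```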

Scalability and Flexibility:

Traditional networking architectures often struggle to scale efficiently, leading to performance degradation as network demand increases. SDN offers inherent scalability by decoupling network control from the underlying hardware. With SDN, administrators can quickly scale network resources and adapt to changing traffic patterns by dynamically provisioning resources and adjusting traffic flow without requiring manual configuration changes.

Network Virtualization:

SDN provides the foundation for network virtualization, allowing administrators to create virtual networks independent of the underlying physical infrastructure. This virtualization enables the efficient allocation of network resources, isolation of traffic, and simplified network management. By leveraging network virtualization, organizations can optimize their network traffic by creating logical networks that meet specific requirements, such as separating traffic for different departments or applications.

 

SDN Traffic Optimizations and Border 6

Border6’s goal is simple: to develop an innovative routing optimization platform. Their toolset (NSI probe and NSI server) is not a replacement for BGP but a complementary tool. BGP is still required at network edges. The NSI products integrate with the border-routing process to complement the BGP decision process.

Integrating NSI into BGP adds intelligence to the BGP routing process and overcomes the issues described above with BGP inbound traffic engineering, allowing engineers to automate, control, and monitor routing policies. For example, the Routing Decision Engine ( RDE ) looks at the cost of transits, taking into account the monthly subscription cost and the cost of traffic bursts.

 

Inbound traffic optimization

NSI probing and analysis measure latency and packet loss. The best-performing path is then compared to the path originally selected by the BGP process, letting you compare paths in terms of performance. If BGP has not chosen the best-performing path, NSI automates the traffic engineering and pushes outbound traffic via the better path.
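
Conceptually, the probe-and-compare loop reduces to three steps: measure each transit, score the measurements, and flag when the BGP-selected exit is not the best performer. The Python sketch below illustrates the comparison; the transit names, numbers, and scoring function are illustrative, not Border6’s proprietary logic:

```python
# Illustrative probe results per transit: (average latency ms, packet loss %).
measurements = {
    "transit-a": (42.0, 0.5),
    "transit-b": (35.0, 0.1),
    "transit-c": (55.0, 2.5),
}

def score(latency_ms, loss_pct):
    # Penalize loss heavily: a lossy path is rarely the best-performing path.
    return latency_ms + loss_pct * 100

bgp_choice = "transit-a"  # what the plain BGP decision process selected
best = min(measurements, key=lambda t: score(*measurements[t]))

if best != bgp_choice:
    print(f"Override: steer outbound traffic via {best} instead of {bgp_choice}")
```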

The NSI probe communicates with the BGP edge routers and sends aggregated data back to the NSI Server. The server then analyzes the data and triggers an action for the NSI probe. You can have multiple NSI probes for various data center topologies at each location.


Optimizing inbound traffic flow

Enforcing outbound routing policy is straightforward. Inbound routing differs, as you rely on upstream third parties to take action. You can, however, influence inbound flows with AS-PATH prepending, community tagging, and auto-shutdown of defective links. Locator/ID Separation Protocol (LISP) provides more granularity for inbound traffic engineering, as it separates the address spaces.

Border6 supports LISP version 1.1 and can respond to external sites with the preferred path to reach a given network, based on NSI measurements. Border6 is collaborating with the French Research Agency ( ANR ) on a design that integrates NSI with LISP for inbound traffic optimization. This is an ongoing project and depends on the broader scope of global LISP adoption. As Mateusz Viste states, “LISP is not going to rule the Internet tomorrow, nor the day after that.”

 

Border 6 LISP process

The NSI device registers itself with a Map-Server. A LISP Map-Server is a LISP infrastructure device that advertises the host prefixes registered to it. The registration process involves sending the Map-Server the customer’s prefix. When other LISP participants need to send a packet to the customer’s prefix, they query the Map-Server for its location, and the Map-Server relays the query to the NSI device.

NSI identifies who is asking (the remote prefix) and responds with the correct RLOC. RLOCs identify the location of the prefix, and the RLOC selection is based on which transit gateway Border6 prefers. This requires LISP tunnels on every customer edge router, making it possible for external entities to send LISP-tunneled packets. Until LISP becomes widely available, Border6 continues with other working practices to optimize inbound traffic flows: AS-PATH prepending, community tagging, and auto-shutdown of defective links.

 

Other Inbound Optimizations

Standard AS-PATH prepending is a well-known BGP path engineering method. BGP selects paths with the shortest AS path, so prepending multiple copies of your AS to a prefix announced to each of your transits will affect inbound traffic flow. Community tagging is a work-in-progress project due this year; essentially, custom-defined communities are added to selected prefixes.

Transit providers can match these communities and re-announce the prefixes selectively, effectively traffic-engineering the inbound flow. Auto-shutdown of defective links works as follows: when NSI detects a failure on one of your transits, it can shut down the BGP session (via SSH access on your router), preventing announcements of your prefix via particular links.
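
Prepending works because remote networks, absent an overriding local policy, prefer the shortest AS path. The toy Python sketch below shows how a remote AS would compare two announcements of the same prefix; the AS numbers are illustrative values from the private range:

```python
# The same prefix announced to two transits; we prepend on transit B
# to make that entry point less attractive to the rest of the Internet.
announcements = {
    "via-transit-a": [64512],                 # natural announcement
    "via-transit-b": [64512, 64512, 64512],   # prepended twice more
}

# A remote AS with no overriding local preference picks the shortest AS path.
preferred = min(announcements, key=lambda a: len(announcements[a]))
print(preferred)  # -> via-transit-a, so inbound traffic favors transit A
```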

 

NSI route limiter

RAM and CPU are critical router resources and should always be protected. Routers at the edge may need to accept large portions of the BGP table, or even the entire table, consuming many router resources. The global IPv4 routing table has surpassed the 500,000-route benchmark, and we are quickly reaching the hard forwarding capacity limits of many popular routers. NSI has a nice feature known as the route limiter.

It is used for routers that cannot accept large BGP tables due to memory or other constraints. NSI can feed low-end customer edge routers only the routes that NSI selects to match the destinations where you send traffic. This frees up RAM and CPU for additional control and data plane tasks. It also lets you use cheaper Layer 3 switches, such as Cumulus- or Brocade-based platforms, making your WAN edge and BGP platform a proper BGP SDN-powered solution.
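
The route limiter idea can be sketched as a simple filter: from the full table, keep only routes covering the destinations that actually carry your traffic, plus a default. A Python illustration with made-up prefixes and next hops; the real NSI selection logic is Border6’s own:

```python
# Full-table fragment: prefix -> next hop (a real table has 500k+ entries).
full_table = {
    "203.0.113.0/24": "transit-a",
    "198.51.100.0/24": "transit-b",
    "192.0.2.0/24": "transit-a",
}

# Top destinations by observed traffic volume (e.g., from NetFlow/sFlow).
top_talkers = {"203.0.113.0/24", "192.0.2.0/24"}

# Feed the low-end edge router only what it needs, plus a default route.
limited = {prefix: nh for prefix, nh in full_table.items() if prefix in top_talkers}
limited["0.0.0.0/0"] = "transit-a"
print(limited)
```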

Software-Defined Networking (SDN) has revolutionized network traffic optimization by giving administrators unprecedented control and flexibility. With its centralized control, dynamic traffic engineering capabilities, and ability to prioritize critical traffic, SDN enables organizations to improve network performance, reduce congestion, and enhance the overall user experience. As the demand for data continues to grow, SDN will play a crucial role in ensuring efficient network traffic management in the digital era.

 

WAN SDN

In today’s fast-paced digital world, organizations constantly seek ways to optimize their network infrastructure for improved performance, scalability, and cost efficiency. One emerging technology that has gained significant traction is WAN Software-Defined Networking (SDN). By decoupling the control and data planes, WAN SDN provides organizations unprecedented flexibility, agility, and control over their wide area networks (WANs). In this blog post, we will delve into the world of WAN SDN, exploring its key benefits, implementation considerations, and real-world use cases.

WAN SDN is a network architecture that allows organizations to manage and control their wide area networks using software centrally. Traditionally, WANs have been complex and time-consuming to configure, often requiring manual network provisioning and management intervention. However, with WAN SDN, network administrators can automate these tasks through a centralized controller, simplifying network operations and reducing human errors.

 

Highlights: WAN SDN

  • SDN and APIs

WAN SDN is a modern approach to network management that uses a centralized control model to manage, configure, and monitor large and complex networks. It allows network administrators to use software to configure, monitor, and manage network elements from a single, centralized system. This enables the network to be managed more efficiently and cost-effectively than traditional networks.

SDN uses an application programming interface (API) to abstract the underlying physical network infrastructure, allowing for more agile network control and easier management. It also enables administrators to rapidly configure and deploy services from a centralized location and to respond quickly to changes in traffic patterns or network conditions, allowing for more efficient use of resources.
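
In practice, this usually means a northbound REST API on the controller. The Python sketch below pushes a flow policy to such an API; the URL, token, and JSON schema are hypothetical placeholders rather than any specific controller’s interface:

```python
import json
import urllib.request

# Hypothetical northbound endpoint and payload; real controllers
# (OpenDaylight, ONOS, vendor SD-WAN platforms) each define their own schema.
policy = {
    "match": {"dst_prefix": "10.20.0.0/16"},
    "action": {"out_port": 2, "priority": 100},
}

req = urllib.request.Request(
    "https://controller.example.com/api/flows",  # placeholder URL
    data=json.dumps(policy).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},  # placeholder credential
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```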

  • Scalability and Automation

SDN also allows for improved scalability and automation. Network administrators can quickly scale the network up or down by leveraging automated scripts, depending on current needs. Automation also enables the network to be maintained more quickly and efficiently, saving time and resources.

 

Before you proceed, you may find the following posts helpful:

  1. WAN Virtualization
  2. Software Defined Perimeter Solutions
  3. What is OpenFlow
  4. SD WAN Tutorial
  5. What Does SDN Mean
  6. Data Center Site Selection

 



SDN Internet

Key WAN SDN Discussion Points:


  • Introduction to WAN SDN and what is involved.

  • Highlighting the challenges of a traditional WAN design.

  • Critical points on the rise of WAN SDN.

  • Technical details on Internet measurements.

  • The LISP protocol.

 

Back to Basics with WAN SDN

A Deterministic Solution

Technology typically starts as a highly engineered, expensive, deterministic solution. As the marketplace evolves and competition rises, the need for a non-deterministic, inexpensive solution comes into play. We see this throughout history. Mainframes were, and are, expensive, and with the arrival of the microprocessor, the personal computer and the client/server model were born. Static RAM ( SRAM ) technology was replaced with cheaper Dynamic RAM ( DRAM ). These patterns apply consistently across all areas of technology.

Finally, deterministic and costly technology is replaced with intelligent technology using redundancy and optimization techniques. This process is now appearing in Wide Area Networks (WANs). We are witnessing changes to the routing space with the incorporation of Software-Defined Networking (SDN) and the Border Gateway Protocol (BGP). By combining these two technologies, companies can now perform intelligent routing, aka SD-WAN path selection, with an SD-WAN overlay.

 

  • A key point: SD-WAN Path Selection

SD-WAN path selection is essential to a Software-Defined Wide Area Network (SD-WAN) architecture. SD-WAN path selection selects the most optimal network path for a given application or user. This process is automated and based on user-defined criteria, such as latency, jitter, cost, availability, and security. As a result, SD-WAN can ensure that applications and users experience the best possible performance by making intelligent decisions on which network path to use.

When selecting the best path for a given application or user, SD-WAN looks at the quality of the connection and the available bandwidth. It then looks at the cost associated with each path. Cost can be a significant factor when selecting a path, especially for large enterprises or organizations with multiple sites.

SD-WAN can also prioritize certain types of traffic over others. This is done by assigning different weights or priorities for different kinds of traffic. For example, an organization may prioritize voice traffic over other types of traffic. This ensures that voice traffic has the best possible chance of completing its journey without interruption.
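
This selection logic can be captured as a weighted scoring function. The sketch below uses illustrative weights and metrics and prefers the path with the lowest composite cost; a real SD-WAN product layers per-application policies and thresholds on top of this:

```python
# Candidate WAN paths with measured or contracted characteristics.
paths = {
    "mpls":     {"latency_ms": 30, "jitter_ms": 2,  "loss_pct": 0.0, "cost": 9},
    "internet": {"latency_ms": 55, "jitter_ms": 12, "loss_pct": 0.5, "cost": 2},
    "lte":      {"latency_ms": 70, "jitter_ms": 25, "loss_pct": 1.0, "cost": 5},
}

# Voice cares about latency, jitter, and loss far more than monetary cost.
voice_weights = {"latency_ms": 1.0, "jitter_ms": 2.0, "loss_pct": 50.0, "cost": 0.1}

def composite(path, weights):
    return sum(weights[metric] * path[metric] for metric in weights)

best = min(paths, key=lambda name: composite(paths[name], voice_weights))
print(best)  # -> mpls for voice, under these illustrative numbers
```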

Diagram: SD-WAN traffic steering. Source: Cisco.

 

 

  • Back to basics with DMVPN

Wide Area Network (WAN) DMVPN (Dynamic Multipoint Virtual Private Network) is a type of Virtual Private Network (VPN) that uses an underlying public network, such as the Internet, to transport data between remote sites. It provides a secure, encrypted connection between two or more private networks, allowing them to communicate over the public network without establishing a dedicated physical connection.

 

Critical Benefits of WAN SDN:

Enhanced Network Flexibility:

WAN SDN enables organizations to adapt their network infrastructure to meet changing business requirements dynamically. Network administrators can quickly respond to network demands through programmable policies and automated provisioning, ensuring optimal performance and resource allocation.

Improved Network Agility:

By separating the control and data planes, WAN SDN allows for faster decision-making and network reconfiguration. This agility enables organizations to rapidly deploy new services, adjust network traffic flows, and optimize bandwidth utilization, ultimately enhancing overall network performance.

Cost Efficiency:

WAN SDN eliminates manual configuration and reduces the complexity associated with traditional network management approaches. This streamlined network management saves cost through reduced operational expenses, improved resource utilization, and increased network efficiency.

Critical Considerations for Implementation:

Network Security:

When adopting WAN SDN, organizations must consider the potential security risks associated with software-defined networks. Robust security measures, including authentication, encryption, and access controls, should be implemented to protect against unauthorized access and potential vulnerabilities.

Staff Training and Expertise:

Implementing WAN SDN requires skilled network administrators proficient in configuring and managing the software-defined network infrastructure. Organizations must train and upskill their IT teams to ensure successful implementation and ongoing management.

Real-World Use Cases:

Multi-Site Connectivity:

WAN SDN enables organizations with multiple geographically dispersed locations to connect their sites seamlessly. Administrators can prioritize traffic, optimize bandwidth utilization, and ensure consistent network performance across all locations by centrally controlling the network.

Cloud Connectivity:

With the increasing adoption of cloud services, WAN SDN allows organizations to connect their data centers to public and private clouds securely and efficiently. This facilitates smooth data transfers, supports workload mobility, and enhances cloud performance.

Disaster Recovery:

WAN SDN simplifies disaster recovery planning by allowing organizations to dynamically reroute network traffic during a network failure. This ensures business continuity and minimizes downtime, as the network can automatically adapt to changing conditions and reroute traffic through alternative paths.

 

The Rise of WAN SDN

Business and cloud services are crucial elements of business operations, yet the transport network used for these services is best-effort and offers no guarantee of acceptable delay. More services are being brought to the Internet, yet the Internet is managed inefficiently and cheaply.

Every Autonomous System (AS) acts independently, and there is a price war between transit providers, leading to poor quality of transit services. Operating over this flawed network, customers must find ways to guarantee applications receive the expected level of quality.

Border Gateway Protocol (BGP), the Internet’s glue, has several path selection flaws. The main drawback of BGP is the routing paradigm behind its path-selection process. BGP default path selection is based on Autonomous System (AS) path length: prefer the path with the shortest AS_PATH. The current selection process misses the shape of the network and does not care whether propagation delay, packet loss, or link congestion exists. The result is long paths and the use of paths potentially experiencing packet loss.

 

WAN SDN with Border6 

Border6 is a French company that started in 2012. It offers a Non-Stop Internet, an integrated WAN SDN solution influencing BGP to perform optimum routing. It’s not a replacement for BGP but a complementary tool to enhance routing decisions. For example, it automates changes in routing in cases of link congestion/blackouts.

“The agile way of improving BGP paths by the Border6 tool improves network stability,” says Brandon Wade, iCastCenter owner.

Customers wanted to bring additional intelligence to routing as the Internet became more popular. Additionally, businesses require SDN traffic optimizations as many run their entire service offerings on top of it.

 

What is non-stop internet?

Border6 offers an integrated WAN SDN solution that adds intelligence to outbound BGP routing. A common approach when designing SDN for real-world networks is to prefer solutions that incorporate existing, field-tested mechanisms (BGP) rather than reinvent every wheel ever invented. The Border6 approach of influencing BGP with SDN is therefore a welcome and less risky approach than a greenfield implementation. Microsoft and Viptela also use SDN to control the behavior of BGP.

Border6 takes BGP as a sort of guide to what might be reachable. Based on various performance metrics, they measure how well paths perform. They use BGP to learn the structure of the Internet and then run their algorithms to learn what is essential for individual customers. Every customer has different needs to reach different subnets: some prefer cost; others prefer performance.

They elect several interesting “best”-performing prefixes and select the most critical ones. Next, they find probing locations and measure from the source with automatic probes to determine the best path. All these tools combined enhance the behavior of BGP. Their mechanism can detect whether an ISP has hardware/software problems, drops packets, or is rerouting packets around the world.

 

Thousands of tests per minute

The solution finds the best path by executing thousands of tests per minute, with the results identifying the best paths for packet delivery. Outputs from live probing of path delays and packet loss inform BGP which path to route traffic over. The “best path” is different for each customer and depends on the routing policy the customer wants to take. Some customers prefer paths without packet loss; others want low costs or paths under 100 ms. It comes down to customer requirements and the applications they serve.
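
Expressed as code, each customer policy is a filter plus a tie-break over the probe results. A hedged Python sketch with illustrative numbers:

```python
probes = {  # path -> (latency ms, packet loss %, cost per Mbps)
    "path-1": (80, 0.0, 4.0),
    "path-2": (95, 0.0, 1.5),
    "path-3": (60, 1.2, 1.0),
}

def pick(require_lossless=False, max_latency=None, prefer="latency"):
    # Keep only the paths that satisfy the customer's hard constraints.
    candidates = {
        p: vals for p, vals in probes.items()
        if (not require_lossless or vals[1] == 0.0)
        and (max_latency is None or vals[0] <= max_latency)
    }
    index = {"latency": 0, "cost": 2}[prefer]
    return min(candidates, key=lambda p: candidates[p][index])

print(pick(require_lossless=True, prefer="cost"))  # cheapest loss-free path
print(pick(max_latency=100, prefer="latency"))     # fastest path under 100 ms
```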

 

BGP – Unrelated to Performance

Traditionally, BGP makes its decisions based on data unrelated to performance. Border6 tries to correlate your packets’ path across the Internet by choosing the fastest or cheapest link, depending on requirements.

They take the BGP data that service providers send them as a baseline. On top of that broad connectivity picture, they layer their own measurements (lowest latency, packet loss, and so on) and adjust the BGP data to consider these other measures, ultimately performing optimal packet forwarding. They first look at NetFlow or sFlow data to determine what is essential, using their tool to collect and aggregate the data. From this data, they know which destinations are critical to that customer.

 

BGP for outbound | Locator/ID Separation Protocol (LISP) for inbound

Border6 products relate to outbound traffic optimization; it can be hard to influence inbound traffic optimization with BGP. Most ASes behave selfishly and optimize traffic in their own interest. Border6 provides tools that help ASes optimize inbound flows by integrating their product set with the Locator/ID Separation Protocol (LISP). The diagram below displays generic LISP components; it is not necessarily related to the Border6 LISP design.

LISP decouples the address space so you can optimize inbound traffic flows. Many LISP use cases are seen with active-active data centers and VM mobility. It decouples the “who” and the “where,” which allows end-host addressing not to correlate with the actual host location. The drawback is that LISP requires endpoints that can build LISP tunnels.

Currently, they are trying to provide a solution using LISP as a signaling protocol between Border6 devices. They are also working on statistical analysis of received data to mitigate distributed denial-of-service (DDoS) events. More DDoS algorithms are coming in future releases.

 

Conclusion:

WAN SDN is revolutionizing how organizations manage and control their wide area networks. WAN SDN enables organizations to optimize their network infrastructure to meet evolving business needs by providing enhanced flexibility, agility, and cost efficiency.

However, successful implementation requires careful consideration of network security, staff training, and expertise. With real-world use cases ranging from multi-site connectivity to disaster recovery, WAN SDN holds immense potential for organizations seeking to transform their network connectivity and unlock new opportunities in the digital era.

 

Low Latency Network Design

In today's fast-paced digital world, where every millisecond counts, the demand for high-performance networks with low latency has never been greater. Whether it's streaming high-definition content, online gaming, or real-time financial transactions, a network's ability to minimize delay is crucial.

In this blog post, we will delve into the fascinating realm of low latency network design and explore the key strategies and considerations that make it possible.

Latency, often referred to as "network delay," is the time it takes for data to travel from its source to its destination. It includes various factors such as transmission delay, processing delay, and queuing delay. Before we dive into the design aspects, it's important to have a solid understanding of latency and its impact on network performance.
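
As a back-of-the-envelope example, one-way latency can be decomposed into serialization, propagation, and per-hop processing/queuing delay. A short Python calculation with illustrative numbers:

```python
# Serialization: time to clock a frame onto the wire.
frame_bits = 1500 * 8
link_rate_bps = 10e9                        # 10 Gbps link
serialization = frame_bits / link_rate_bps  # 1.2 microseconds

# Propagation: distance divided by signal speed in fiber (~2e8 m/s).
distance_m = 100_000                        # 100 km between sites
propagation = distance_m / 2e8              # 500 microseconds

# Processing and queuing are device- and load-dependent; assume an
# illustrative 5 microseconds per switch across 4 hops.
processing = 4 * 5e-6

total = serialization + propagation + processing
print(f"{total * 1e6:.1f} microseconds one-way")  # ~521.2 microseconds
```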


Highlights: Low Latency Network Design

 

A New Operational Model

We are now all moving in the direction of the cloud. The requirement is for large data centers that are elastic and scalable. As a result of these changes, influenced by innovations and methodology in the server/application world, the network industry is experiencing a new operational model. Provisioning must be quick, and designers look to automate network configuration more systematically and in a less error-prone, programmatic way. It is challenging to meet these new requirements with traditional data center designs.

Changing Traffic Flow

Traffic flow has changed, and we now have a lot of east-to-west traffic. Existing data center designs focus on north-to-south flows. East-to-west traffic requires changing the architecture from an aggregation-based model to a massive multipathing model. Referred to as Clos networks, leaf-and-spine designs allow for building huge networks with reasonably sized equipment, enabling low latency network design.

 



Latency In Networking.

Key Low Latency Network Design Discussion Points:


  • Introduction to low latency network design and what is involved.

  • Highlighting the details of the different data center latency requirements.

  • Critical points on latency in networking.

  • Technical details on oversubscription.

  • Technical details on deep packet buffers.

 

Related: Before you proceed, you may find the following post helpful:

  1. Baseline Engineering
  2. Dropped Packet Test
  3. SDN Data Center
  4. Azure ExpressRoute
  5. Zero Trust SASE
  6. Service Level Objectives (slos)

 

Data center designs are evaluated on forwarding and control plane features, including:

  • Network and storage integration
  • Bridging without STP
  • Multipathing for Layer 2 and Layer 3
  • Integration with server virtualization
  • Low latency
  • Good MAC, ARP, and L3 table sizes
  • Optimal Layer 3 forwarding
  • Deep packet buffers
  • Path isolation

 

Back to Basics: Network testing.

Network Testing

A stable network results from careful design and testing. Although many vendors perform exhaustive systems testing and provide the results via third-party testing reports, they cannot reproduce every customer’s environment. To validate your data center design, you must conduct your own tests.

Effective testing is the best indicator of production readiness. On the other hand, ineffective testing may lead to a false sense of confidence, causing downtime. Therefore, you should adopt a structured approach to testing as the best way to discover and fix the defects in the least amount of time at the lowest possible cost.

 

Lab Guide: RSVP.

In this example, we will have a look at RSVP. Resource reservation signals the network and requests the specific bandwidth and delay required for a flow. When the reservation is successful, each network component (primarily routers) reserves the necessary bandwidth and delay.

  1. First, enable RSVP on all interfaces: ip rsvp bandwidth 128 64
  2. Then, configure R1 to act as an RSVP host so that it sends an RSVP PATH message.
  3. Finally, configure R4 to respond to this reservation.

 

Diagram: Resource Reservation

 

What is low latency?

Low latency is the ability of a computing system or network to respond with minimal delay. Actual low latency metrics vary according to the use case. So, what is a low-latency network? A low-latency network is designed and optimized to reduce latency as much as possible. However, a low-latency network cannot improve latency caused by factors outside the network.

We also have to consider jitter, where latency deviates unpredictably from the average: low at one moment and high at the next. For some applications, this unpredictability is more problematic than high latency itself. There is also ultra-low latency, measured in nanoseconds, whereas low latency is measured in milliseconds. Ultra-low latency therefore delivers a response much faster, with fewer delays, than low latency.
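
A quick way to observe both latency and jitter from a host is to time repeated TCP handshakes and look at the spread. A minimal Python sketch; the target host is a placeholder, and the result reflects connection setup time rather than a full application transaction:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port=443):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=2):
        pass  # handshake completed; close the connection immediately
    return (time.perf_counter() - start) * 1000

samples = [tcp_rtt_ms("example.com") for _ in range(10)]
print(f"avg {statistics.mean(samples):.1f} ms, "
      f"jitter (stdev) {statistics.stdev(samples):.1f} ms")
```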

 

Importance of Low Latency Network Design:

1. Improved User Experience: Low latency networks ensure seamless and uninterrupted communication, enabling users to access and transmit data more efficiently. This is particularly crucial in latency-sensitive applications where any delay can be detrimental.

2. Competitive Advantage: In today’s competitive business landscape, organizations that deliver faster and more responsive services gain a significant edge. Low latency networks enable companies to provide real-time services, enhancing customer satisfaction and loyalty.

3. Support for Emerging Technologies: Low latency networks form the backbone for emerging technologies such as the Internet of Things (IoT), autonomous vehicles, augmented reality (AR), and virtual reality (VR). These technologies require rapid data exchange and response times, which can only be achieved through low-latency network design.

 

Data Center Latency Requirements

  • Latency requirements

Intra-data center traffic flows concern us more, in terms of latency, than outbound traffic flows. High latency between servers degrades performance and limits how much traffic you can send between two endpoints. Low latency allows you to use as much bandwidth as possible.

A low-latency network design known as ultra-low latency ( ULL ) data center design is the race to zero: the goal is to design as fast as possible, with the lowest end-to-end latency. Latency on an IP/Ethernet switched network can be as low as 50 ns.

Diagram: Low Latency Network Design

 

High-frequency trading ( HFT ) environments push this trend, where providing information from stock markets with minimal delay is imperative. HFT environments are different from most DC designs and don’t support virtualization. Port counts are low, and servers are designed in small domains.

This is conceptually similar to how Layer 2 domains should be designed: as small Layer 2 network pockets. Applications are grouped to match optimum traffic patterns, where many-to-one conversations are reduced. This reduces the need for buffering, increasing network performance. CX-1 twinax cables are preferred over the more popular optical fiber.

 

Oversubscription

The optimum low-latency network design should consider and predict the possibility of congestion at critical network points. An example of unacceptable oversubscription would be a ToR switch with 20 Gbps of traffic from servers but only a 10 Gbps uplink; this results in packet drops and poor application performance.

Diagram: Data center network design and oversubscription
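
The arithmetic behind the ratio is worth making explicit. A short Python check, assuming twenty 1 Gbps server links feeding the 10 Gbps uplink described above:

```python
servers = 20                 # attached servers
server_link_gbps = 1         # each server at 1 Gbps -> 20 Gbps offered load
uplink_gbps = 10             # single 10 Gbps uplink

offered = servers * server_link_gbps
ratio = offered / uplink_gbps
print(f"oversubscription {ratio:.0f}:1")  # 2:1 -- drops under sustained load
```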

 

Previous data center designs were based on the 3-tier aggregation model ( developed by Cisco ). Now, we are going for 2-tier models. The main design point for this model is the number of ports on the core: more ports on the core result in larger networks. Similar design questions would be: a) how much routing will I implement, b) how much bridging will I implement, and c) where do I insert my network services modules?

We are now designing networks with many tiers: Clos networks. The concept comes from voice networks; Charles Clos described the design in 1953 for crossbar-based voice switches. Clos designs give optimum any-to-any connectivity and require low-latency, non-blocking components. Multipath technologies deliver a linear increase in oversubscription with each device failure, which is better than architectures that degrade drastically during failures.

 

Lossless transport

Data Center Bridging ( DCB ) offers standards for flow control and queuing. Even if your data center does not use iSCSI ( the Internet Small Computer System Interface ), TCP elephant flows benefit from lossless transport, improving data center performance. Research has shown that most TCP flows are below 100 Mbps.

The remaining small percentage are elephant flows, which consume 80% of all traffic inside the data center. Due to their size and how TCP operates, when elephant flows experience packet drops, they slow down, affecting network performance.

 

Distributed resource scheduling

VM mobility underpins VMware’s distributed resource scheduling: load from busy hypervisors is automatically spread to other, underutilized hosts. Other use cases arise in cloud environments where the DC requires dynamic workload placement and you don’t know in advance where the VM will be.

If you want to retain sessions, keep them in the same subnet. Layer 3 vMotion is too slow, as routing protocol convergence always takes a few seconds. In theory, you could tune timers for fast routing protocol convergence, but in practice, Interior Gateway Protocols ( IGP ) give you eventual consistency.

 

VM mobility

Data centers require bridging at Layer 2 to retain IP addresses for VM mobility. The TCP stack currently has no separation between “who” and “where” you are; the IP address represents both functions. A future implementation with the Locator/ID Separation Protocol ( LISP ) divides these two roles, but bridging for VM mobility is required until LISP is fully implemented.

 

Spanning Tree Protocol ( STP )

Spanning Tree blocks redundant links, reducing usable bandwidth by 50%, while massive multipathing technologies allow you to scale without losing half the link bandwidth. Data centers want to move VMs without disrupting traffic flow. VMware has vMotion; Microsoft Hyper-V has Live Migration.

 

Network convergence

A Layer 3 network requires many events to complete before it reaches a fully converged state. In Layer 2, when the first broadcast is sent, every switch knows precisely where that host has moved; there is no mechanism in Layer 3 to do something similar. Layer 2 networks, however, result in a large broadcast domain.

You may also experience large sub-optimal flows, as the Layer 3 next hop stays the same when you move the VM. Optimal Layer 3 forwarding is what Juniper does with QFabric: every Layer 3 switch has the same IP address, so they can all serve as the next hop, resulting in optimal traffic flow.

 

Deep packet buffers 

We have more DC traffic and elephant flows from distributed databases, and traffic is now very bursty. We also have a lot of microburst traffic. The bursts are so short that they don’t register as high link utilization but are big enough to overflow packet buffers and cause drops. With TCP, this behavior triggers TCP slow start, and slow start with elephant flows is problematic for networks.

 

Key Considerations for Low Latency Network Design:

1. Network Infrastructure: To achieve low latency, network designers must optimize the infrastructure by reducing bottlenecks, eliminating single points of failure, and ensuring sufficient bandwidth capacity.

2. Proximity: Locating servers and data centers closer to end-users can significantly reduce latency. Data can travel faster by minimizing the physical distance, resulting in lower latency.

3. Traffic Prioritization: Prioritizing latency-sensitive traffic within the network can help ensure that critical data packets are given higher priority, reducing the overall latency.

4. Quality of Service (QoS): Implementing QoS mechanisms allows network administrators to allocate resources based on application requirements. By prioritizing latency-sensitive applications, low latency can be maintained.

5. Optimization Techniques: Various optimization techniques, such as caching, compression, and load balancing, can further reduce latency by minimizing the volume of data transmitted and distributing the workload efficiently.

Summary: Low Latency Network Design

In today’s fast-paced digital world, where every millisecond counts, the importance of low-latency network design cannot be overstated. Whether it’s online gaming, high-frequency trading, or real-time video streaming, minimizing latency has become crucial in delivering seamless user experiences. This blog post explored the fundamentals of low-latency network design and its impact on various industries.

Section 1: Understanding Latency

In the context of networking, latency refers to the time it takes for data to travel from its source to its destination. It is often measured in milliseconds (ms) and can be influenced by various factors such as distance, network congestion, and processing delays. By reducing latency, businesses can improve the responsiveness of their applications, enhance user satisfaction, and gain a competitive edge.

Section 2: The Benefits of Low Latency

Low latency networks offer numerous advantages across different sectors. In the financial industry, where split-second decisions can make or break fortunes, low latency enables high-frequency trading firms to execute trades with minimal delays, maximizing their profitability. Similarly, in online gaming, low latency ensures smooth gameplay and minimizes the dreaded lag that can frustrate gamers. Additionally, industries like telecommunication and live video streaming heavily rely on low-latency networks to deliver real-time communication and immersive experiences.

Section 3: Strategies for Low Latency Network Design

Designing a low-latency network requires careful planning and implementation. Here are some key strategies that can help achieve optimal latency:

Subsection: Network Optimization

By optimizing network infrastructure, including routers, switches, and cables, organizations can minimize data transmission delays. This involves utilizing high-speed, low-latency equipment and implementing efficient routing protocols to ensure data takes the most direct and fastest path.

Subsection: Data Compression and Caching

Reducing the size of data packets through compression techniques can significantly reduce latency. Additionally, implementing caching mechanisms allows frequently accessed data to be stored closer to the end-users, reducing the round-trip time and improving overall latency.

Subsection: Content Delivery Networks (CDNs)

Leveraging CDNs can greatly enhance latency, especially for global businesses. By distributing content across geographically dispersed servers, CDNs bring data closer to end-users, reducing the distance and time it takes to retrieve information.

Conclusion:

Low-latency network design has become a vital aspect of modern technology in a world driven by real-time interactions and instant gratification. By understanding the impact of latency, harnessing the benefits of low latency, and implementing effective strategies, businesses can unlock opportunities and deliver exceptional user experiences. Embracing low latency is not just a trend but a necessity for staying ahead in the digital age.

LISP Protocol and VM Mobility

The networking world is constantly evolving, with new technologies emerging to meet the demands of an increasingly connected world. One such technology that has gained significant attention is the LISP protocol. In this blog post, we will delve into the intricacies of the LISP protocol, exploring its purpose, benefits, and how it bridges the gap in modern networking and its use case with VM mobility.

  • What is LISP?

LISP, which stands for Locator/ID Separation Protocol, is a network protocol that separates the identity of a device from its location. Unlike traditional IP addressing schemes, which rely on a tightly coupled relationship between the IP address and the device’s physical location, LISP separates these two aspects, allowing for more flexibility and scalability in network design.

  • How Does LISP Work

The Locator/ID Separation Protocol ( LISP ) provides a set of functions that allow Endpoint Identifiers ( EID ) to be mapped to an RLOC address space. The mapping between these two endpoints offers the separation of IP addresses into two numbering schemes ( similar to the “who” and the “where” analogy ), offering many traffic engineering and IP mobility benefits for the geographic dispersion of data centers, which is beneficial for VM mobility.

  • LISP Components

The LISP protocol operates by creating a mapping system that separates the device’s identifier, the Endpoint Identifier (EID), from its location, the Routing Locator (RLOC). This separation is achieved using a distributed database called the LISP Mapping System (LMS), which maintains the mapping between EIDs and RLOCs. When a packet is sent to a destination EID, it is encapsulated and routed based on the RLOC, allowing for efficient and scalable communication.

 

Before you proceed, you may find the following posts helpful:

  1. LISP Hybrid Cloud 
  2. LISP Control Plane
  3. Triangular Routing
  4. Active Active Data Center Design
  5. Application Aware Networking

 

VM Mobility

Key LISP Protocol Discussion Points:


  • Introduction to the LISP Protocol and what is involved.

  • Highlighting the details of the LISP traffic flow.

  • Technical details on LAN extension considerations. 

  • LISP Extended Subnet and Across Subnet.

 

  • A key point: Video on LISP configuration.

In this video, we will have a look at LISP configuration. This can be considered the first step before you get into the more advanced features of LISP and VM mobility. From its inception, the LISP protocol has been an open standard protocol that interoperates across various platforms and is incrementally deployable on top of any transport.

LISP’s flexibility has led to its application in every part of today’s modern network, from the data center to the enterprise WAN to the enterprise campus to the service provider edge and the core. The following will help you understand a LISP hybrid cloud implementation.

 

Video: Hands-on Video Series - Enterprise Networking | LISP Configuration Intro

 

Back to basics with the Virtual Machine (VM).

Virtualization

Virtualization can be applied to subsystems such as disks or to a whole machine. A virtual machine ( VM ) is implemented by adding a software layer to an actual machine to sustain the desired virtual machine’s architecture. In general, a virtual machine can circumvent real compatibility and hardware resource limitations to enable a higher degree of software portability and flexibility.

In the dynamic world of modern computing, the ability to seamlessly move virtual machines ( VMs ) between different physical hosts has become a critical aspect of managing resources and ensuring optimal performance. This blog post explores VM mobility and its significance in today’s rapidly evolving computing landscape.

VM mobility refers to transferring a virtual machine from one physical host to another without disrupting operation. This capability is made possible through virtualization technologies such as hypervisors, which enable the abstraction of hardware resources and allow multiple VMs to coexist on a single physical machine.

LISP and VM Mobility

The Locator/Identifier Separation Protocol (LISP) is an innovative networking architecture that decouples the identity (Identifier) of a device or VM from its location (Locator). By separating the two, LISP provides a scalable and flexible solution for VM mobility.

How LISP Enhances VM Mobility:

1. Improved Scalability:

LISP introduces a level of indirection by assigning Endpoint Identifiers (EIDs) to VMs. These EIDs act as unique identifiers, allowing VMs to retain their identity even when they are moved to different locations. This enables enterprises to scale their VM deployments without worrying about the limitations imposed by the underlying network infrastructure.

2. Seamless VM Mobility:

LISP simplifies moving VMs by abstracting the location information using Routing Locators (RLOCs). When a VM is migrated, LISP updates the mapping between the EID and RLOC, allowing the VM to maintain uninterrupted connectivity. This eliminates the need for complex network reconfigurations, reducing downtime and improving overall agility.

3. Load Balancing and Disaster Recovery:

LISP enables efficient load balancing and disaster recovery strategies by providing the ability to distribute VMs across multiple physical hosts or data centers. With LISP, VMs can be dynamically moved to optimize resource utilization or to ensure business continuity in the event of a failure. This improves application performance and enhances the overall resilience of the IT infrastructure.

4. Interoperability and Flexibility:

LISP is designed to be interoperable with existing network infrastructure, allowing organizations to gradually adopt the protocol without disrupting their current operations. It integrates seamlessly with IPv4 and IPv6 networks, making it a future-proof solution for VM mobility.

 

Basic LISP Traffic flow

A device ( S1 ) initiates a connection and wants to communicate with another, external device ( D1 ) located in a remote network. S1 creates a packet with the EID of S1 as the source IP address and the EID of D1 as the destination IP address. As the packet flows to the network’s edge on its way to D1, it is met by an Ingress Tunnel Router ( ITR ).

The ITR maps the destination EID to a destination RLOC and then encapsulates the original packet with an additional header with the source IP address of the ITR RLOC and the destination IP address of the RLOC of an Egress Tunnel Router ( ETR ). The ETR is located on the remote site next to the destination device D1.


The magic is how these mappings are defined, especially regarding VM mobility. There is no routing convergence, and any changes to the mapping systems are unknown to the source and destination hosts. We are offering complete transparency.
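
The mapping lookup and encapsulation step can be sketched in a few lines of Python. The prefixes and RLOCs below are illustrative documentation addresses, and a real deployment uses the LISP map-request/map-reply exchange rather than a local dictionary:

```python
import ipaddress

# Simplified mapping database: EID prefix -> RLOC of the site's ETR.
mapping_system = {
    ipaddress.ip_network("10.1.0.0/16"): "198.51.100.1",  # site A ETR
    ipaddress.ip_network("10.2.0.0/16"): "203.0.113.9",   # site B ETR
}

def itr_encapsulate(src_eid, dst_eid, itr_rloc="192.0.2.2"):
    dst = ipaddress.ip_address(dst_eid)
    for prefix, etr_rloc in mapping_system.items():
        if dst in prefix:
            # Outer header: ITR RLOC -> ETR RLOC; inner header: EID -> EID.
            return {"outer": (itr_rloc, etr_rloc), "inner": (src_eid, dst_eid)}
    raise LookupError("no EID-to-RLOC mapping")

print(itr_encapsulate("10.1.0.5", "10.2.3.4"))
```

When the destination host moves, only its EID-to-RLOC entry changes; the source keeps addressing the same EID, which is exactly the transparency described above.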

 

LISP Terminology

LISP namespaces:

  • End-point Identifier ( EID ) Addresses: The EID is allocated to an end host from an EID-prefix block. The EID identifies the endpoint; it does not indicate where the host is located. The remote host obtains a destination EID the same way it obtains a normal destination address today, for example, through DNS or SIP. The procedure a host uses to send IP packets does not change. EIDs are not globally routable.

  • Routing Locator ( RLOC ) Addresses: The RLOC is an address or group of prefixes that map to an Egress Tunnel Router ( ETR ). Reachability within the RLOC space is achieved by traditional routing methods. The RLOC address must be routable.

 

LISP site devices:

  • Ingress Tunnel Router ( ITR ): An ITR is a device that sits in a LISP site and receives packets from internal hosts, encapsulating them in turn to remote LISP sites. To determine where to send the packet, the ITR performs an EID-to-RLOC mapping lookup. The ITR should be the first-hop or default router within a site for the source hosts.

  • Egress Tunnel Router ( ETR ): An ETR is a device that receives LISP-encapsulated IP packets from the Internet, decapsulates them, and forwards them to local EIDs at the site. An ETR only accepts an IP packet whose destination address in the “outer” IP header is one of its own configured RLOCs. The ETR should be the last-hop router directly connected to the destination.

 

LISP infrastructure devices:

  • Map-Server ( MS ): The Map-Server contains the EID-to-RLOC mappings, and the ETRs register their EIDs with it. The Map-Server advertises these, usually as an aggregate, into the LISP mapping system.

  • Map-Resolver ( MR ): When resolving EID-to-RLOC mappings, ITRs send LISP Map-Requests to Map-Resolvers. The Map-Resolver address is typically an Anycast address, which improves mapping lookup performance by choosing the Map-Resolver topologically closest to the requesting ITR.

  • Proxy ITR ( PITR ): Provides connectivity to non-LISP sites. It acts like an ITR but does so on behalf of non-LISP sites.

  • Proxy ETR ( PETR ): Acts like an ETR but does so on behalf of LISP sites that want to communicate with destinations at non-LISP sites.

 

VM Mobility

LISP Host Mobility

LISP VM Mobility ( LISP Host Mobility ) functionality allows any IP address ( end host ) to move from its subnet to either a) a completely different subnet, known as “across subnets,” or b) an extension of its subnet in a different location, known as an “extended subnet,” while keeping its original IP address.

When the end host carries its own Layer 3 address to the remote site, and the prefix is the same as the remote site, it is known as an “extended subnet.” Extended subnet mode requires a Layer 2 LAN extension. On the other hand, when the end hosts carry a different network prefix to the remote site, it is known as “across subnets.” When this is the case, a Layer 2 extension is not needed between sites.

 

LAN extension considerations

LISP does not remove the need for a LAN extension if a VM wants to perform a “hot” migration between two dispersed sites. The LAN extension is deployed to stretch a VLAN/IP subnet between separate locations. LISP complements LAN extensions with efficient move detection methods and ingress traffic engineering.

LISP works with all LAN extensions – whether back-to-back vPC and VSS over dark fiber, VPLS, Overlay Transport Virtualization ( OTV ), or Ethernet over MPLS/IP. LAN extension best practices should still be applied to the data center edges. These include but are not limited to – End-to-end Loop Prevention and STP isolation.

A LISP site with a LAN extension extends a single site across two physical data center sites. This is because the extended subnet functionality of LISP makes two DC sites a single LISP site. On the other hand, when LISP is deployed without a LAN extension, a single LISP site is not extended between two data centers, and we end up having separate LISP sites.

 

LISP extended subnet

Diagram: VM mobility with the LISP protocol and extended subnets

 

The LAN extension technology must filter Hot Standby Router Protocol ( HSRP ) HELLO messages across the two data centers to avoid asymmetric traffic handling. This creates an active-active HSRP setup. HSRP localization optimizes egress traffic flows. LISP optimizes ingress traffic flows.

The default gateway and virtual MAC address must remain consistent in both data centers. This is because the moved VM will continue to send to the same gateway MAC address. This is accomplished by configuring the same HSRP gateway IP address and group in both data centers. When an active-active HSRP domain is used, re-ARP is not needed during mobility events.

The LAN extension technology must have multicast enabled to support the proper operation of LISP. Once a dynamic EID is detected, the xTR sends a map-notify message to a multicast group address so that all other xTRs learn of the move. The multicast messages are delivered leveraging the LAN extension.

 

LISP across subnet 

Diagram: VM mobility with the LISP protocol across subnets

 

LISP across subnets requires the mobile VM to access the same gateway IP address, even if they move across subnets. This will prevent egress traffic triangulation back to the original data center. This can be achieved by manually setting the vMAC address associated with the HSRP group to be consistent across sites.

Proxy ARP must be configured under local and remote SVIs to correctly handle new ARP requests generated by the migrated workload. With this deployment, there is no need for a LAN extension to stretch VLANs/IP subnets between sites, which is why it addresses “cold” migration scenarios, such as Disaster Recovery ( DR ), cloud bursting, and workload mobility according to demand.

 

Benefits of LISP:

1. Scalability: By separating the identifier from the location, LISP provides a scalable solution for network design. It allows for hierarchical addressing, reducing the size of the global routing table and enabling efficient routing across large networks.

2. Mobility: LISP’s separation of identity and location mainly benefits mobile devices. As devices move between networks, their EIDs remain constant while the RLOCs are updated dynamically. This enables seamless mobility without disrupting ongoing connections.

3. Multihoming: LISP allows a device to have multiple RLOCs, enabling multihoming capabilities without complex network configurations. This ensures redundancy, load balancing, and improved network reliability.

4. Security: LISP provides enhanced security features such as cryptographic authentication and integrity checks, ensuring the integrity and authenticity of the mapping information. This helps in mitigating potential attacks, such as IP spoofing.

Applications of LISP:

1. Data Center Interconnection: LISP can interconnect geographically dispersed data centers, providing efficient and scalable communication between different locations.

2. Internet of Things (IoT): With the exponential growth of IoT devices, LISP offers an efficient solution for managing these devices’ addressing and communication needs, ensuring seamless connectivity in large-scale deployments.

3. Content Delivery Networks (CDNs): LISP can optimize content delivery by allowing CDNs to cache content closer to end-users, reducing latency and improving overall performance.

Conclusion:

The LISP protocol is a revolutionary technology that addresses the challenges of scalability, mobility, multi-homing, and security in modern networking. Its separation of identity and location opens up new possibilities for efficient and flexible network design. With its numerous benefits and versatile applications, LISP is poised to play a pivotal role in shaping the future of networking.

 

 

 

Internet Locator

In today’s digitally connected world, the ability to locate and navigate through various online platforms has become an essential skill. With the advent of the Internet Locator, individuals and businesses can now effortlessly explore the vast online landscape. In this blog post, we will delve into the concept of the Internet Locator, its significance, and how it has revolutionized the way we navigate the digital realm.

 

  • Routing table growth

There has been exponential growth in Internet usage, and the scalability of today’s Internet routing system is now a concern. With more people surfing the web than ever, the underlying technology must be able to cope with demand.

Whereas in the past, getting an internet connection via an internet locator service could be expensive, nowadays, thanks to bundles that include telephone connections and streaming services, connecting to the web has never been more affordable. It is also important to note that routing table growth is a significant driver of the need to reexamine internet connectivity.

 

  • Limitations in technology

This growth has run up against the limitations and constraints of router technology and the current Internet addressing architecture. If we look at the core protocols that make up the Internet, we have not seen any significant change in over a decade.

There have been radical changes to the physical-layer mechanisms that underlie the Internet, but only a small number of tweaks to BGP and its transport protocol, TCP. Mechanisms such as MPLS were introduced to work around IP limitations within the ISP. Still, Layers 3 and 4 have seen no substantial change in over a decade.

 

Before you proceed, you may find the following posts helpful:

  1. Container Based Virtualization
  2. Observability vs Monitoring
  3. Data Center Design Guide
  4. LISP Protocol
  5. What Is BGP Protocol In Networking

 

Internet Locator

Key Internet Locator Discussion Points:


  • Introduction to Internet Locator and what is involved.

  • Highlighting the details of the default-free zone.

  • Technical details on the LISP protocol and how this may help.

  • Scenario: BGP in the DFZ.

  • A final note on security. 

 

  • A key point: Video on LISP.

The following video introduces the LISP protocol, the different LISP components, triangular routing, and how these components interact with the LISP control plane. The LISP overlay network helps organizations provide connectivity to devices and workloads wherever they move, enabling open and highly scalable networks with exceptional flexibility and agility.

 

Tech Brief Video Series - Enterprise Networking | LISP Components & DEMO

 

Back to basics with the Internet

The Internet is often represented as a cloud. However, this picture is misleading, as there are few direct connections across the Internet. The Internet is instead a partially distributed network: decentralized, with many centers or nodes and direct or indirect links. More generally, networks can be classified as centralized, decentralized, or distributed.

The Internet is a conglomeration of autonomous systems that represent organizations’ administrative authority and routing policies. Autonomous systems are made up of Layer 3 routers that run Interior Gateway Protocols (IGPs), such as Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS), within their borders and interconnect via an Exterior Gateway Protocol (EGP). The current de facto standard EGP on the Internet is Border Gateway Protocol Version 4 (BGP-4), originally defined in RFC 1771 and now specified in RFC 4271.

 

  • A key point: Lab guide on BGP

In the following, we see a simple BGP design. BGP operates over TCP, more specifically, TCP port 179. BGP peers are created and can be iBGP or eBGP. In the screenshots below, we have an iBGP design. Remember that BGP is a path vector protocol: it makes routing decisions based on network policies and path attributes such as AS path, next hop, and origin, rather than on a simple hop count.

Port 179
Diagram: Port 179 with BGP peerings.

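For reference, a minimal IOS-style configuration matching an iBGP design like the one above might look as follows; the AS number matches the lab, but the neighbor and network addresses are illustrative.

  router bgp 1
   ! iBGP: the neighbor shares our AS number (AS 1)
   neighbor 10.0.0.2 remote-as 1
   ! Advertise a local prefix into BGP
   network 192.168.1.0 mask 255.255.255.0
  !
  ! Verify the session state with: show ip bgp summary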

 

Internet Locator: Default Free Zone ( DFZ )

The first large-scale packet-switching network was ARPAnet, the modern Internet’s predecessor. It used a simple protocol called the Network Control Program ( NCP ), which combined addressing and transport into a single protocol. Many applications were built on top of NCP, and it was very successful. However, it lacked flexibility. As a result, in the design of the Internet Protocol Suite, reliability was separated from addressing and packet transfer, with TCP being separated from IP.

On January 1, 1983, ARPAnet officially retired NCP and moved to a more flexible and powerful protocol suite, TCP/IP. The transition from NCP to TCP/IP was known as “flag day,” and it could be done quickly because only around 400 nodes had to be reconfigured.

Today, a similar flag day is impossible due to the sheer size and scale of the Internet backbone. Any change to the Internet is driven by necessity, and change across such a vast network is usually slow. For example, inserting an additional header into the protocol would impact IP fragmentation processing and congestion-control mechanisms. Changing the semantics of IP addressing is also problematic, because the IP address is used as an identifier by higher-level protocols and is encoded in applications.

 

Default Free Zone
Diagram: Default Free Zone. The source is TypePad.

 

The driving forces of the DFZ

Many factors are driving the growth of the Default Free Zone ( DFZ ). These mainly include multi-homing, traffic engineering, and policy routing. The Internet Architecture Board ( IAB ) met on October 18-19th, 2006, and their key finding was that they needed to devise a scalable routing and addressing system. Such an addressing system must meet the current challenges of multi-homing and traffic engineering requirements.

 

Internet Locator: Locator/ID Separation Protocol ( LISP )

There has been some progress with the Locator/ID separation protocol ( LISP ) development. LISP is a routing architecture that redesigns the current addressing architecture. Traditional addressing architecture uses a single name, the IP address, to express two functions of a device.

The first function is its identity, i.e., who, and the second function is its location, i.e., where. LISP separates IP addresses into two namespaces: Endpoint Identifiers ( EIDs ), non-routable addresses assigned to hosts, and Routing Locators ( RLOCs), routable addresses assigned to routers that make up the global routing system.

internet locator
Diagram: Internet locator with LISP.

 

Separating these functions offers numerous benefits within a single protocol, one of which attempts to address the scalability of the Default Free Zone. In addition, LISP is a network-based implementation with most of the deployment at the network edges. As a result, LISP integrates well into the current network infrastructure and requires no changes to the end host stack.

 

  • A key point: Lab guide on LISP.

In the following guide, we will look at a LISP network. The LISP protocol components include the following:

  • Map Registration and Map Notify.
  • Map Request and Map-Reply.
  • LISP Protocol Data Path.
  • Proxy ETR.
  • Proxy ITR.

LISP uses two namespaces in place of a single IP address:

  1. Endpoint identifiers (EIDs), assigned to end hosts.
  2. Routing locators (RLOCs), assigned to devices (primarily routers) that make up the global routing system.

Splitting the EID and RLOC functions yields several advantages, including improved routing-system scalability, multihoming efficiency, and ingress traffic engineering. With the command show lisp site summary, we can see that site 1 consists of R1 and site 2 consists of R2. Each site advertises its own EID prefix. On R1, the tunnel router, we see the routing locator address 10.0.1.2. The RLOCs ( routing locators ) are interfaces on the tunnel routers.

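To make the lab concrete, here is a hedged IOS-style xTR sketch consistent with the components listed above; the EID prefix uses R1’s RLOC 10.0.1.2 from the lab, while the map-server/map-resolver address and key are hypothetical.

  router lisp
   ! Local EID prefix registered with RLOC 10.0.1.2
   database-mapping 192.168.1.0/24 10.0.1.2 priority 1 weight 100
   ! Control plane: where to resolve and register mappings
   ipv4 itr map-resolver 10.0.100.100
   ipv4 etr map-server 10.0.100.100 key LISP-KEY
   ! Enable the ingress and egress tunnel router functions
   ipv4 itr
   ipv4 etr
  !
  ! On the map-server, verify registrations with: show lisp site summary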

 

Border Gateway Protocol (BGP) role in the DFZ

Border Gateway Protocol, or BGP, is an exterior gateway protocol that allows different autonomous systems (AS) to exchange routing information. It is designed to enable efficient communication between different networks, facilitating data exchange and traffic across the internet.

 

Exchanging NLRI

BGP is the protocol used to exchange NLRI ( Network Layer Reachability Information ) between devices on the Internet and is the most critical piece of Internet architecture. It interconnects autonomous systems on the Internet and holds the entire network together. Routes are exchanged between BGP speakers with UPDATE messages. The global BGP routing table ( RIB ) now stands at over 520,000 routes.

Although some of this growth is organic, a large proportion is driven by prefix de-aggregation. Prefix de-aggregation leads to increased BGP UPDATE messages injected into the DFZ. UPDATE messages require protocol activity between routing nodes, which requires additional processing to maintain the state for the longer prefixes.

Excess churn exposes the network’s core to the dynamic nature of the edges. This detrimentally impacts routing convergence, since UPDATEs must be recomputed and downloaded from the RIB to the FIB. As a result, it is commonly said that the Internet is never fully converged.
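One edge-side mitigation for de-aggregation churn is to advertise a covering aggregate and suppress the more specific prefixes. A hedged IOS-style sketch, with an illustrative AS number and prefix:

  router bgp 65001
   ! Advertise only the /24 aggregate; suppress the de-aggregated specifics
   aggregate-address 203.0.113.0 255.255.255.0 summary-only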

 

  • A key point: Video on BGP operating in the data center

In this whiteboard session, we will address the basics of BGP. A network exists to serve the connectivity requirements of applications, and those applications serve business needs. These applications must therefore run on stable networks, and stable networks are built from stable routing protocols.

 

BGP in the Data Center

 

Security in the DFZ

Security is probably the most significant Internet problem; no magic bullet exists. Instead, an arms race is underway as techniques used by attackers and defenders co-evolve. This is because the Internet was designed to move packets from A to B as fast as possible, irrespective of whether B wants any of those packets.

In 1997, a misconfigured AS7007 router flooded the entire Internet with /24 BGP routes. As a result, routing was globally disrupted for more than 1 hour as the more specific prefixes took precedence over the aggregated routes. In addition, more specific routes advertised from AS7007 to AS1239 attracted traffic from all over the Internet into AS1239, saturating its links and causing router crashes.

There are automatic measures to combat prefix hijacking, but they are not widely used or compulsory. The essence of BGP design allows you to advertise whatever NLRI you want, and it’s up to the connecting service provider to have the appropriate filtering in place.
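The filtering referred to above is typically a per-neighbor prefix list that permits only the customer’s assigned address space. A hedged IOS-style sketch with illustrative addresses and AS numbers:

  ! Accept only the customer's assigned prefix; everything else is denied
  ip prefix-list CUSTOMER-IN seq 5 permit 198.51.100.0/24
  !
  router bgp 65001
   neighbor 192.0.2.1 remote-as 65002
   neighbor 192.0.2.1 prefix-list CUSTOMER-IN in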

 

Drawbacks to BGP

BGP’s main security drawback is that it does not hide policy information and, by default, does not validate the source of routing information. However, as BGPv4 runs over TCP, it is not as insecure as many think. A remote intrusion into BGP would require guessing the correct TCP sequence numbers to insert data, and most TCP/IP stacks have hard-to-predict sequence numbers. The standard way to compromise BGP routing is to insert a rogue router, which must be explicitly configured as a neighbor in the target’s BGP configuration.
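Consistent with the TCP point above, common session-hardening steps include MD5 authentication and TTL security. A hedged IOS-style sketch; the addresses, AS numbers, and password are placeholders:

  router bgp 65001
   neighbor 192.0.2.1 remote-as 65002
   ! MD5 authentication on the underlying TCP session
   neighbor 192.0.2.1 password S3cureKey
   ! Accept the session only from a directly connected peer
   neighbor 192.0.2.1 ttl-security hops 1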

 

Significance of BGP:

1. Inter-Domain Routing: BGP is primarily used for inter-domain routing, enabling different networks to communicate and exchange traffic across the internet. It ensures that data packets reach their intended destinations efficiently, regardless of the AS they belong to.

2. Internet Service Provider (ISP) Connectivity: BGP is crucial for ISPs as it allows them to connect their networks with other ISPs. This connectivity enables end-users to access various online services, websites, and content hosted on different networks, regardless of geographical location.

3. Redundancy and Load Balancing: BGP’s dynamic routing capabilities enable network administrators to create redundant paths and distribute traffic across multiple links. This redundancy enhances network resilience and ensures uninterrupted connectivity even during link failures.

4. Internet Traffic Engineering: BGP plays a vital role in internet traffic engineering, allowing organizations to optimize the flow of traffic within their networks. By manipulating BGP attributes and policies, network administrators can influence the path selection process and direct traffic through preferred routes.
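As a small example of the attribute manipulation mentioned in point 4, an inbound route-map can raise LOCAL_PREF so that routes learned from a preferred peer win best-path selection. This is a hedged IOS-style sketch with illustrative values:

  route-map PREFER-PRIMARY permit 10
   ! 200 beats the default local preference of 100
   set local-preference 200
  !
  router bgp 65001
   neighbor 192.0.2.1 remote-as 65002
   neighbor 192.0.2.1 route-map PREFER-PRIMARY in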

 
