
LISP Hybrid Cloud Use Case


In the world of networking, the ability to efficiently manage and scale networks is of paramount importance. This is where LISP networking comes into play. LISP, which stands for Locator/ID Separation Protocol, is a powerful networking technology that offers numerous benefits to network administrators and operators. In this blog post, we will explore the world of LISP networking and its key features and advantages.

LISP networking is a revolutionary approach to IP addressing and routing that separates the identity of a device (ID) from its location (locator). Traditional IP addressing relies on combining these two aspects, making it challenging to scale networks and manage mobility. LISP overcomes these limitations by decoupling the device's identity and location, enabling more flexible and scalable network architectures.

LISP, at its core, is a routing architecture that separates location and identity information for IP addresses. By doing so, it enables scalable and efficient routing across networks. LISP hybrid cloud leverages this architecture to seamlessly integrate multiple cloud environments, including public, private, and on-premises clouds.

Enhanced Scalability: LISP hybrid cloud allows organizations to scale their cloud infrastructure effortlessly. By abstracting location information from IP addresses, it enables efficient traffic routing across cloud environments, ensuring optimal utilization of resources.

Improved Security and Privacy: With LISP hybrid cloud, organizations can establish secure and private connections between different cloud environments. This ensures that sensitive data remains protected while being seamlessly accessed across clouds, bolstering data security and compliance.

Simplified Network Management: By centralizing network policies and control, LISP hybrid cloud simplifies network management for organizations. It provides a unified view of the entire cloud infrastructure, enabling efficient monitoring, troubleshooting, and policy enforcement.

Seamless Data Migration: LISP hybrid cloud enables seamless migration of data between different clouds, eliminating the complexities associated with traditional data migration methods. It allows organizations to transfer large volumes of data quickly and efficiently, minimizing downtime and disruption.

Hybrid Application Deployment: Organizations can leverage LISP hybrid cloud to deploy applications across multiple cloud environments. This enables a flexible and scalable infrastructure, where applications can utilize resources from different clouds based on specific requirements, optimizing performance and cost-efficiency.

Conclusion: The LISP hybrid cloud use case presents a compelling solution for organizations seeking to enhance their cloud infrastructure. With its scalability, security, and simplified network management benefits, LISP hybrid cloud opens up a world of possibilities for seamless integration and optimization of multiple cloud environments. Embracing LISP hybrid cloud can drive efficiency, flexibility, and agility, empowering organizations to stay ahead in today's dynamic digital landscape.

Highlights: LISP Hybrid Cloud Use Case

Understanding LISP

LISP, short for Locator/ID Separation Protocol, is a routing architecture that separates the endpoint identifier (ID) from its location (locator). By doing so, LISP enables efficient routing, scalability, and mobility in IP networks. This protocol has been widely adopted in modern networking to address the challenges posed by the growth of the Internet and the limitations of traditional IP addressing.

Hybrid cloud architecture combines the best of both worlds by integrating public and private cloud environments. It allows organizations to leverage the scalability and cost-effectiveness of public clouds while maintaining control over sensitive data and critical applications in private clouds. This flexible approach provides businesses with the agility to scale their resources up or down based on demand, ensuring optimal performance and cost-efficiency.

The Synergy of LISP and Hybrid Cloud

When LISP and hybrid cloud architecture merge, the result is a powerful combination that offers numerous advantages. LISP’s ability to separate the ID from the locator enables seamless mobility and efficient routing across hybrid cloud environments. By leveraging LISP, organizations can achieve enhanced scalability, simplified network management, and improved performance across their distributed infrastructure.

Highlighting real-world examples of LISP hybrid cloud use cases can shed light on its practical applications. From multinational corporations with geographically dispersed offices to service providers managing cloud-based services, LISP hybrid cloud use cases have demonstrated significant improvements in network performance, reduced latency, simplified network management, and increased overall agility.

Use Case: Hybrid Cloud

The hybrid cloud connects the public cloud provider to the private enterprise cloud. It consists of two or more distinct infrastructures in dispersed locations that remain unique entities, bound together logically via a network to enable data and application portability. LISP networking can deliver this hybrid cloud connectivity while avoiding the drawbacks of a stretched VLAN. How do you support intra-subnet traffic patterns between two dispersed cloud locations? A stretched VLAN spanning both locations invites instability, with the risk of broadcast storms and Layer 2 loops.


End-to-end connectivity

Enterprises want the ability to seamlessly insert their application right into the heart of the cloud provider without changing any parameters. Customers want to do this without changing the VM’s IP addresses and MAC addresses. This requires the VLAN to be stretched end-to-end. Unfortunately, IP routing cannot support VLAN extension, which puts pressure on the data center interconnect ( DCI ) link to enable extended VLANs. In reality, and from experience, this is not a good solution.

LISP Architecture on Cisco Platforms

Various Cisco platforms support LISP; the platforms are mainly characterized by the operating system software they run. LISP is supported by Cisco's IOS/IOS-XE, IOS-XR, and NX-OS operating systems. LISP offers several distinctive features and functions, including xTR/MS/MR, IGP Assist, and ESM/ASM Multi-hop. Not every platform supports every function or feature, so verify that a platform supports the key features you need before implementing it.

IOS-XR and NX-OS have distributed architectures, unlike Cisco IOS/IOS-XE. On IOS/IOS-XE platforms, the RIB and Cisco Express Forwarding (CEF) provide the forwarding architecture for LISP, working with the LISP control process.

Before you proceed, you may find the following helpful:

  1. LISP Protocol
  2. LISP Hybrid Cloud Implementation
  3. Network Stretch
  4. LISP Control Plane
  5. Internet of Things Access Technologies
 


Back to basics with a LISP network

The LISP network comprises a mapping system with a global database of RLOC-EID mapping entries. The mapping system is the control plane of the LISP network decoupled from the data plane. The mapping system is address-family agnostic; the EID can be an IPv4 address mapped to an RLOC IPv6 address and vice versa. Or the EID may be a Virtual Extensible LAN (VXLAN) Layer 2 virtual network identifier (L2VNI) mapped to a VXLAN tunnel endpoint (VTEP) address working as an RLOC IP address.

How Does LISP Networking Work?

At its core, LISP networking introduces a new level of indirection between a device's IP address and its location. LISP relies on two key components: the xTR (a tunnel router that combines the ingress tunnel router, ITR, and egress tunnel router, ETR, roles) and the mapping system. The xTR is responsible for encapsulating and forwarding traffic between different LISP sites, while the mapping system stores the mappings between a device's identity and its current location.
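The lookup the mapping system performs can be sketched as a longest-prefix match from EID to RLOC. This is only a toy model, not a real LISP implementation; all prefixes and RLOC addresses below are invented for illustration.

```python
from typing import Optional
import ipaddress

# Toy EID-to-RLOC mapping system. In real LISP this database is
# distributed and queried via map-requests; here it is a plain dict.
MAPPING_SYSTEM = {
    ipaddress.ip_network("10.1.0.0/16"): "203.0.113.1",   # xTR at site A
    ipaddress.ip_network("10.2.0.0/16"): "198.51.100.7",  # xTR at site B
}

def lookup_rloc(eid: str) -> Optional[str]:
    """Longest-prefix match of an endpoint identifier against the mapping system."""
    addr = ipaddress.ip_address(eid)
    matches = [net for net in MAPPING_SYSTEM if addr in net]
    if not matches:
        return None  # in real LISP, a negative map-reply would come back
    best = max(matches, key=lambda net: net.prefixlen)
    return MAPPING_SYSTEM[best]
```

Because identity and location are looked up rather than fused into one address, moving an endpoint only changes the dictionary entry, never the EID itself.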

Benefits of LISP Networking:

Scalability: LISP provides a scalable solution for managing large networks by separating the device’s identity from its location. This allows for efficient routing and reduces the amount of routing table information that needs to be stored and exchanged.

Mobility: LISP networking offers seamless mobility support, enabling devices to change locations without disrupting ongoing communications. This is particularly beneficial in scenarios where mobile devices are constantly moving, such as IoT deployments or mobile networks.

Traffic Engineering: LISP allows network administrators to optimize traffic flow by manipulating the mappings between device IDs and locators. This provides greater control over network traffic and enables efficient load balancing and congestion management.

Security: LISP supports secure communications through the use of cryptographic techniques. It provides authentication and integrity verification mechanisms, ensuring the confidentiality and integrity of data transmitted over the network.
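The traffic engineering point above comes from the locator set carried in a map-reply: each RLOC has a priority (lower is preferred) and, among RLOCs tied on priority, a weight that sets the traffic share. A hedged sketch of that selection logic, with an invented locator set:

```python
# Sketch of LISP locator-set selection per the priority/weight semantics
# described in the LISP specification. The RLOC addresses are made up.
def select_rlocs(locator_set):
    """locator_set: list of (rloc, priority, weight). Returns {rloc: share}."""
    best = min(priority for _, priority, _ in locator_set)
    usable = [(rloc, weight) for rloc, priority, weight in locator_set
              if priority == best]
    total = sum(weight for _, weight in usable)
    return {rloc: weight / total for rloc, weight in usable}

locators = [
    ("203.0.113.1", 1, 75),    # preferred RLOCs: same priority,
    ("203.0.113.2", 1, 25),    # traffic split 75/25 by weight
    ("198.51.100.7", 2, 100),  # backup RLOC: less-preferred priority
]
shares = select_rlocs(locators)
```

By editing only these priorities and weights in the mapping system, an operator steers ingress traffic without touching the endpoints.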

Use Cases for LISP Networking:

Data Centers: LISP can significantly simplify the management of large-scale data center networks by providing efficient traffic engineering and seamless mobility support for virtual machines.

Internet Service Providers (ISPs): LISP can help ISPs improve their network scalability and handle the increasing demand for IP addresses. It enables ISPs to optimize their routing tables and efficiently manage address space.

IoT Deployments: LISP's mobility support and scalability make it an ideal choice for IoT deployments. It efficiently manages large numbers of devices and enables seamless connectivity as devices move across different networks.

LISP Networking and Stretched VLAN

Locator/ID Separation Protocol ( LISP ) can extend subnets without stretching a VLAN, which is the basis of the LISP hybrid cloud. A subnet extension with LISP is far more appealing than a Layer 2 LAN extension. The LISP-enabled hybrid cloud solution allows intra-subnet communication regardless of where the server is. This means you can have two servers in different locations, one in the public cloud and the other in the enterprise domain, and both servers can communicate as if they were on the same subnet.

LISP acts as an overlay technology

LISP operates as an overlay technology: it encapsulates the source packet with UDP and an outer header consisting of the source and destination RLOCs ( the RLOCs that the EIDs map to ). The result is that you can address the servers in the cloud according to your own addressing scheme. There is no need to match your addressing scheme to the cloud provider's addressing scheme.
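That encapsulation can be pictured structurally: the original EID-to-EID packet is carried unmodified inside an outer header addressed RLOC-to-RLOC, with LISP's registered UDP data port 4341. This is a simplified model of the layering, not a wire-accurate packet format, and all addresses are invented:

```python
# Structural sketch of LISP data-plane encapsulation. The inner packet
# (identity space) is untouched; the outer header (locator space) is
# what routes the packet between sites.
LISP_DATA_PORT = 4341  # registered UDP port for LISP-encapsulated data

def encapsulate(inner_packet, src_rloc, dst_rloc):
    return {
        "outer_ip": {"src": src_rloc, "dst": dst_rloc},  # locator space
        "udp": {"dst_port": LISP_DATA_PORT},
        "payload": inner_packet,  # original EID-to-EID packet, unchanged
    }

packet = {"src_eid": "10.1.1.10", "dst_eid": "10.1.1.20", "data": b"hello"}
encapped = encapsulate(packet, "203.0.113.1", "198.51.100.7")
```

Only the outer addresses are drawn from the transport network's space, which is why the servers keep your enterprise addressing scheme wherever they run.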

LISP on the Cloud Services Router ( CSR ) 1000V ( virtual router ) provides a Layer-3-based approach to a hybrid cloud. It allows you to stretch subnets from the enterprise to the public cloud without needing a Layer 2 LAN extension.

LISP networking and hybrid cloud

LISP networking deployment key points:

  1. LISP can be deployed with the CSR 1000V in the cloud and either a CSR 1000V or an ASR 1000 in the enterprise domain.
  2. The enterprise CSR must have at least two interfaces: one Layer 3 routed interface to the core, and one Layer 2 interface to support VLAN connectivity for the servers that require mobility.
  3. The enterprise CSR does not need to be the default gateway; its interaction with the local infrastructure ( via the Layer 2 interface ) is based on Proxy-ARP. As a result, ARP packets must be allowed on the underlying networks.
  4. The cloud CSR is also deployed with at least two interfaces: one facing the Internet or MPLS network, and one facing the local infrastructure via either VLANs or Virtual Extensible LAN ( VXLAN ).
  5. The CSR offers machine-level high availability and supports VMware high-availability features such as dynamic resource scheduling ( DRS ), vMotion, and NIC load balancing and teaming.
Hybrid cloud and CSR 1000V
  1. LISP is a network-based solution and is independent of the hypervisor. You can have different hypervisors in the Enterprise and the public cloud. No changes to virtual servers or hosts. It’s completely transparent.
  2. The PxTR ( also used to forward to non-LISP sites ) is deployed in the enterprise cloud, and the xTR is deployed in the public cloud.
  3. The CSR 1000V deployed in the public cloud is secured by an IPsec tunnel; the LISP tunnel should therefore be encrypted using IPsec tunnel mode, which is preferred because it supports NAT.
  4. Each CSR must have one unique outside IP address, used to form the IPsec tunnel between the two endpoints.
  5. Dynamic or static routing must be enabled over the IPsec tunnel to announce the RLOC IP addresses used by the LISP mapping system.
  6. The map-resolver ( MR ) and map server ( MS ) can be enabled on the xTR in the Enterprise or the xTR in the cloud.
  7. Traffic symmetry is still required when you have stateful devices in the path.

 LISP stretched subnets

The two modes of LISP operation are LISP Across Subnet Mode and LISP Extended Subnet Mode. Neither of these modes is used in the LISP-enabled CSR hybrid cloud deployment scenario. The mode of operation utilized is called the LISP stretched subnet model ( SSM ): the same subnet is used on both sides of the network, and mobility is performed between these two segments of the same subnet. This may look identical to LISP Extended Subnet Mode, but Extended Subnet Mode requires a LAN extension such as OTV between sites, whereas SSM does not use one.
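The SSM idea can be illustrated with a toy model, assuming invented addresses: one subnet exists at both sites, and per-host EID mappings record which site's RLOC each server currently sits behind.

```python
import ipaddress

# Toy illustration of the stretched subnet model: the subnet
# 10.1.1.0/24 exists at both sites, with no LAN extension between them.
# All addresses are invented for illustration.
SUBNET = ipaddress.ip_network("10.1.1.0/24")

host_mappings = {
    "10.1.1.10": "203.0.113.1",   # server in the enterprise data center
    "10.1.1.20": "198.51.100.7",  # server in the public cloud
}

def same_subnet(eid_a, eid_b):
    """Intra-subnet peers, even though they resolve to different RLOCs."""
    return (ipaddress.ip_address(eid_a) in SUBNET
            and ipaddress.ip_address(eid_b) in SUBNET)
```

Two servers on the same /24 can therefore communicate as intra-subnet neighbors while living behind different RLOCs in different clouds.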

LISP stretched subnets

Summary: LISP Hybrid Cloud Use Case

In the rapidly evolving world of cloud computing, businesses constantly seek innovative solutions to optimize their operations. One such groundbreaking approach is the utilization of LISP (Locator/ID Separation Protocol) in hybrid cloud environments. In this blog post, we explored the fascinating use case of LISP Hybrid Cloud and delved into its benefits, implementation, and potential for revolutionizing the industry.

Understanding LISP Hybrid Cloud

LISP Hybrid Cloud combines the best of two worlds: the scalability and flexibility of public cloud services with the security and control of private cloud infrastructure. By separating the location and identity of network devices, LISP allows for seamless communication between public and private clouds. This breakthrough technology enables businesses to leverage the advantages of both environments and optimize their cloud strategies.

Benefits of LISP Hybrid Cloud

Enhanced Scalability: LISP Hybrid Cloud offers unparalleled scalability by allowing businesses to scale their operations across public and private clouds seamlessly. This ensures that organizations can meet evolving demands without compromising performance or security.

Improved Flexibility: With LISP Hybrid Cloud, businesses can choose the most suitable cloud resources. They can leverage the vast capabilities of public clouds for non-sensitive workloads while keeping critical data and applications secure within their private cloud infrastructure.

Enhanced Security: LISP Hybrid Cloud provides enhanced security by leveraging the inherent advantages of private clouds. Critical data and applications can remain within the organization’s secure network, minimizing the risk of unauthorized access or data breaches.

Implementation of LISP Hybrid Cloud

Implementing LISP Hybrid Cloud involves several key steps. First, organizations must evaluate their cloud requirements and determine the optimal balance between public and private cloud resources. Next, they must deploy the necessary LISP infrastructure, including LISP routers and mapping servers. Finally, businesses must establish secure communication channels between their public and private cloud environments, ensuring seamless data transfer and interconnectivity.

Conclusion:

In conclusion, LISP Hybrid Cloud represents a revolutionary approach to cloud computing. By harnessing the power of LISP, businesses can unlock the potential of hybrid cloud environments, enabling enhanced scalability, improved flexibility, and heightened security. As the cloud landscape continues to evolve, LISP Hybrid Cloud is poised to play a pivotal role in shaping the future of cloud computing.

Dynamic Workload Scaling


In today’s fast-paced digital landscape, businesses strive to deliver high-quality services while minimizing costs and maximizing efficiency. To achieve this, organizations are increasingly adopting dynamic workload scaling techniques. This blog post will explore the concept of dynamic workload scaling, its benefits, and how it can help businesses optimize their operations.

  • Adjustment of resources

Dynamic workload scaling refers to the automated adjustment of computing resources to match the changing demands of a workload. This technique allows organizations to scale their infrastructure up or down in real time based on the workload requirements. By dynamically allocating resources, businesses can ensure that their systems operate optimally, regardless of varying workloads.

  • Defined Thresholds

Dynamic workload scaling is all about monitoring and distributing traffic at user-defined thresholds. Data centers are under pressure to support the ability to burst new transactions to available Virtual Machines ( VM ). In some cases, the VMs used to handle the additional load will be geographically dispersed, with both data centers connected by a Data Center Interconnect ( DCI ) link. The ability to migrate workloads within an enterprise hybrid cloud or in a hybrid cloud solution between enterprise and service provider is critical for business continuity for planned and unplanned outages.

 

Before you proceed, you may find the following post helpful:

  1. Network Security Components
  2. Virtual Data Center Design
  3. How To Scale Load Balancer
  4. Distributed Systems Observability
  5. Active Active Data Center Design
  6. Cisco Secure Firewall

 

Dynamic Workloads

Key Dynamic Workload Scaling Discussion Points:


  • Introduction to Dynamic Workload Scaling and what is involved.

  • Highlighting the details of dynamic workloads and how they can be implemented.

  • Critical points on how Cisco approaches Dynamic Workload Scaling.

  • A final note on design considerations.

 

Back to basics with OTV.

Overlay Transport Virtualization (OTV) is an IP-based technology that provides a Layer 2 extension between data centers. OTV is transport agnostic, meaning the transport infrastructure between data centers can be dark fiber, MPLS, an IP-routed WAN, ATM, Frame Relay, and so on.

The sole prerequisite is that the data centers must have IP reachability between them. OTV permits multipoint services for Layer 2 extension and separated Layer 2 domains between data centers, maintaining an IP-based interconnection’s fault-isolation, resiliency, and load-balancing benefits.

Unlike traditional Layer 2 extension technologies, OTV introduces the concept of Layer 2 MAC routing, in which a control-plane protocol advertises the reachability of Layer 2 MAC addresses. As a result, MAC routing has enormous advantages over traditional Layer 2 extension technologies, which rely on data-plane learning and flood Layer 2 traffic across the transport infrastructure.

 

Cisco and Dynamic Workloads

A new technology introduced by Cisco, called Dynamic Workload Scaling ( DWS ), satisfies the requirement of dynamically bursting workloads based on user-defined thresholds to available resource pools ( VMs ). It is tightly integrated with Cisco Application Control Engine ( ACE ) and Cisco’s Dynamic MAC-in-IP encapsulation technology known as Overlay Transport Virtualization ( OTV ), enabling resource distribution across Data Center sites. OTV provides the LAN extension method that keeps the virtual machine’s state as it passes locations, and ACE delivers the load-balancing functionality.

 

Dynamic workload and dynamic workload scaling

 

Dynamic workload scaling: How does it work?  

  • DWS monitors the VM capacity for an application and expands that application to another resource pool during periods of peak usage, which makes it a good fit for distributed applications among geographically dispersed data centers.
  • DWS uses the ACE and OTV technologies to build a MAC table. It monitors the local MAC entries and those learned via the OTV link to determine whether a MAC entry is considered "Local" or "Remote."
  • The ACE monitors the utilization of the "local" VMs. From these values, the ACE can compute the average load of the local data center.
  • DWS uses two APIs: one to poll server load information from VMware vCenter, and another to poll OTV information from the Nexus 7000.
  • During normal load conditions, when the data center is experiencing low utilization, the ACE load-balances incoming traffic to the local VMs.
  • However, when the data center experiences high utilization and crosses the predefined thresholds, the ACE will add the "remote" VMs to its load-balancing mechanism.
Workload scaling and its operations
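The burst decision described above can be sketched as a minimal threshold check. The 80% threshold and VM names below are invented; real DWS derives its inputs from vCenter polling and OTV MAC-table state:

```python
# Minimal sketch of the DWS burst decision: load-balance across local
# VMs until the average local utilization crosses a user-defined
# threshold, then admit the "remote" VMs (reachable over the OTV/DCI
# link) into the rotation. Threshold and VM names are invented.
def build_server_pool(local_vms, remote_vms, threshold=0.8):
    """local_vms / remote_vms: dicts mapping VM name -> utilization (0.0-1.0)."""
    avg_local = sum(local_vms.values()) / len(local_vms)
    pool = list(local_vms)
    if avg_local > threshold:        # burst condition crossed
        pool += list(remote_vms)     # remote VMs join the LB mechanism
    return pool
```

Under low load only the local VMs are returned; once the local average exceeds the threshold, the remote pool is appended to the rotation.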

 

Dynamic workload scaling: Design considerations

During congestion, the ACE adds the "remote" VM to its load-balancing algorithm. A remote VM placed in the secondary data center can add load to the DCI, essentially hairpinning traffic for some time, because ingress traffic for the "remote" VM continues to flow via the primary data center. DWS should be used with the Locator/ID Separation Protocol ( LISP ) to enable automatic move detection and optimal ingress path selection.

 

Benefits of Dynamic Workload Scaling:

1. Improved Efficiency:

Dynamic workload scaling enables businesses to allocate resources precisely as needed, eliminating the inefficiencies associated with over-provisioning or under-utilization. Organizations can optimize resource utilization and reduce operational costs by automatically scaling resources up during periods of high demand and scaling them down during periods of low demand.

2. Enhanced Performance:

With dynamic workload scaling, businesses can effectively handle sudden spikes in workload without compromising performance. Organizations can maintain consistent service levels and ensure smooth operations during peak times by automatically provisioning additional resources when required. This leads to improved customer satisfaction and retention.

3. Cost Optimization:

Traditional static infrastructure requires businesses to provision resources based on anticipated peak workloads, often leading to over-provisioning and unnecessary costs. Dynamic workload scaling allows organizations to provision resources on demand, resulting in cost savings by paying only for the resources utilized. Additionally, by scaling down resources during periods of low demand, businesses can further reduce operational expenses.

4. Scalability and Flexibility:

Dynamic workload scaling allows businesses to scale their operations as needed. Whether expanding to accommodate business growth or handling seasonal fluctuations, organizations can easily adjust their resources to match the workload demands. This scalability and flexibility enable businesses to respond quickly to changing market conditions and stay competitive.

Dynamic workload scaling has emerged as a crucial technique for optimizing efficiency and performance in today’s digital landscape. By dynamically allocating computing resources based on workload requirements, businesses can improve efficiency, enhance performance, optimize costs, and achieve scalability. Implementing robust monitoring systems, automation, and leveraging cloud computing services are critical steps toward successful dynamic workload scaling. Organizations can stay agile and competitive and deliver exceptional customer service by adopting this approach.

Key Features of Cisco Dynamic Workload Scaling:

Intelligent Automation:

Cisco’s dynamic workload scaling solutions leverage intelligent automation capabilities to monitor real-time workload demands. By analyzing historical data and utilizing machine learning algorithms, Cisco’s automation tools can accurately predict future workload requirements and proactively scale resources accordingly.

Application-Aware Scaling:

Cisco’s dynamic workload scaling solutions are designed to understand the unique requirements of different applications. By utilizing application-aware scaling, Cisco can allocate resources based on the specific needs of each workload, ensuring optimal performance and minimizing resource wastage.

Seamless Integration:

Cisco’s dynamic workload scaling solutions seamlessly integrate with existing IT infrastructures, allowing businesses to leverage their current investments. This ensures a smooth transition to dynamic workload scaling without extensive infrastructure overhauls.

Conclusion:

In today’s dynamic business environment, efficiently managing and scaling workloads is critical for organizational success. Cisco’s dynamic workload scaling solutions provide businesses with the flexibility, performance optimization, and cost savings necessary to thrive in an ever-changing landscape. By leveraging intelligent automation, application-aware scaling, and seamless integration, Cisco empowers organizations to adapt and scale their workloads effortlessly. Embrace Cisco’s dynamic workload scaling and unlock the full potential of your business operations.

 



LISP Control and Data Plane

The networking landscape has undergone significant transformations over the years, with the need for efficient and scalable routing protocols becoming increasingly crucial. In this blog post, we will delve into the world of LISP (Locator/ID Separation Protocol) and explore its control plane, shedding light on its advantages to modern networks.

LISP, developed by the Internet Engineering Task Force (IETF), is a protocol that separates the location and identity of network devices. It provides a scalable solution for routing by decoupling the IP address (identity) from a device's physical location (locator). The control plane of LISP plays a vital role in managing and distributing the mapping information required for efficient and effective routing.

Separating identity from location offers many benefits. A single address field that both identifies a device and determines where it is topologically located is not an optimal approach and presents many challenges for host mobility.

- Understanding the Control Plane: The control plane in LISP is responsible for managing the mappings between endpoint identifiers (EIDs) and routing locators (RLOCs). It enables efficient and scalable routing by separating the identity of a device from its location. By leveraging the distributed mapping system, control plane operations ensure seamless communication across networks.

- Unraveling the Data Plane: The data plane is where the actual packet forwarding occurs in LISP. It relies on encapsulation and decapsulation techniques to encapsulate the original IP packet within a LISP header. The encapsulated packet is then routed through the network based on the EID-to-RLOC mapping obtained from the control plane. The data plane plays a vital role in maintaining network efficiency and enabling seamless mobility.
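The interplay between the two planes can be modeled as a map-cache in the data plane that consults the control plane only on a miss. This is a simplified model; real map-requests and map-replies are control messages with TTLs and far more state, and the addresses here are invented:

```python
# Simplified model of the control/data-plane split: forwarding consults
# a local map-cache; a miss triggers a (simulated) map-request to the
# mapping system, and the map-reply is cached. No TTLs or timeouts.
class Xtr:
    def __init__(self, mapping_system):
        self.mapping_system = mapping_system  # control-plane database
        self.map_cache = {}                   # data-plane cache
        self.map_requests = 0

    def forward(self, dst_eid):
        if dst_eid not in self.map_cache:                 # cache miss
            self.map_requests += 1                        # -> map-request
            self.map_cache[dst_eid] = self.mapping_system[dst_eid]
        return self.map_cache[dst_eid]                    # encapsulate to RLOC

xtr = Xtr({"10.2.0.5": "198.51.100.7"})
first = xtr.forward("10.2.0.5")   # miss: control plane consulted
second = xtr.forward("10.2.0.5")  # hit: served from the map-cache
```

The cache is what keeps the control-plane load low: once a mapping is learned, subsequent packets to that EID are encapsulated without another lookup.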

The LISP control and data plane offer several advantages for modern networks. Firstly, it enhances scalability by reducing the size of routing tables and simplifying network architecture. Secondly, LISP provides improved mobility support, allowing devices to move without changing their IP addresses. This feature is particularly beneficial for mobile networks and IoT deployments. Lastly, the control and data plane separation enables more efficient traffic engineering and network optimization.

Implementing LISP control and data plane requires a combination of software and hardware components. Several vendors offer LISP-enabled routers and switches, making it easier to adopt this protocol in existing network infrastructures. Additionally, various open-source software implementations are available, allowing network administrators to experiment and deploy LISP in a flexible manner.

Highlights: LISP Control and Data Plane

The LISP Protocol

The LISP protocol offers an architecture that provides seamless ingress traffic engineering and move detection without any DNS changes or agents on the host; an active-active data center design is one place LISP can be used. A vital concept of the LISP protocol is that end hosts operate as they always have: the IP addresses hosts use for tracking sockets and connections, and for sending and receiving packets, do not change.

LISP Routing

LISP attempts to establish communication among endpoint devices. Endpoints in IP networks are called IP hosts, and these hosts are typically not LISP-enabled, so each endpoint originates packets with a single IPv4 or IPv6 header to another endpoint. Many endpoints exist, including servers (physical or virtual), workstations, tablets, smartphones, printers, IP phones, and telepresence devices. EIDs are LISP addresses assigned to endpoints.

EID – Globally Unique

The EID must be globally unique when communicating on the Internet, just like IP addresses. To be reachable from the public IP space, private addresses must be translated to global addresses through network address translation (NAT). Like any other routing database on the Internet, the global LISP mapping database cannot be populated with private addresses. In contrast, the global LISP mapping database can identify entries as members of different virtual private networks (VPNs).


BGP/MPLS Internet Protocol (IP) VPN network routers have separate virtual routing and forwarding (VRF) tables for each VPN; in the same vein, LISP can be used to create private networks and to have an Internet router with separate routing tables (VRFs) for internet routes and private addresses. In many cases, private EID addresses do not have to be routable over the public Internet when using a dedicated private LISP mapping database. With LISP, private deployments may use the public Internet as an underlay to create VPNs, leveraging the public Internet for transport.

Before you proceed, you may find the following useful for pre-information:

  1. Observability vs Monitoring
  2. VM Mobility 
  3. What Is VXLAN
  4. LISP Hybrid Cloud
  5. Segment Routing
  6. Remote Browser Isolation

LISP Control and Data Plane

LISP: An IP overlay solution

LISP is an IP overlay solution that keeps the same semantics for IPv4 and IPv6 packet headers but operates two separate namespaces: one to specify the location and the other to determine the identity. A LISP packet has an inner IP header, which, like the headers of traditional IP packets, is for communicating endpoint to endpoint.

That is, the inner header carries a particular source and destination EID. The outer IP header then provides the location to which the endpoint attaches, and the outer header addresses are also IP addresses: the RLOCs.

Therefore, if an endpoint changes location, its IP address remains unchanged; it is the outer header that consistently gets the packet to the endpoint's current location. The endpoint identifier (EID) address is mapped to a router that the endpoint sits behind, known in LISP terminology as the routing locator (RLOC).

Benefits of LISP Control Plane:

1. Scalability: LISP’s control plane offers scalability advantages by reducing the size of the routing tables. With LISP, the mapping system maintains only the necessary information, allowing for efficient routing in large networks.

2. Mobility: The control plane of LISP enables seamless mobility as devices move across different locations. By separating the identity and locator, LISP ensures that devices maintain connectivity even when their physical location changes, reducing disruptions and enhancing network flexibility.

3. Traffic Engineering: LISP’s control plane allows for intelligent traffic engineering, enabling network operators to optimize traffic flow based on specific requirements. By leveraging the mapping information, routing decisions can be made dynamically, leading to efficient utilization of network resources.

4. Security: The LISP control plane offers enhanced security features. By separating the identity and locator, LISP helps protect the privacy of devices, making it harder for attackers to track or target specific devices. Additionally, LISP supports authentication mechanisms, ensuring the integrity and authenticity of the mapping information.

Implementing LISP Control Plane:

Several components are required to implement the LISP control plane, including the mapping system, the encapsulation mechanism, and the LISP routers. The mapping system is responsible for storing and distributing the mapping information, while the encapsulation mechanism ensures the separation of identity and locator. LISP routers play a crucial role in forwarding traffic based on the mapping information received from the control plane.
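The relationship between these three components can be sketched as follows. This is a minimal, hypothetical model (the `MapServer` and `LispRouter` classes and their methods are invented for illustration, not a real implementation): the mapping system stores EID-to-RLOC entries, and a LISP router consults it on a cache miss before encapsulating.

```python
# Minimal sketch of the control-plane components named above:
# a mapping system, a LISP router with a map cache, and an
# encapsulation step that separates identity from locator.
class MapServer:
    def __init__(self):
        self.db = {}                     # EID prefix -> list of RLOCs

    def register(self, eid, rlocs):      # Map-Register from an ETR
        self.db[eid] = rlocs

    def resolve(self, eid):              # Map-Request / Map-Reply
        return self.db.get(eid)

class LispRouter:
    def __init__(self, map_server):
        self.map_server = map_server
        self.map_cache = {}

    def forward(self, eid, payload):
        if eid not in self.map_cache:    # cache miss triggers control plane
            self.map_cache[eid] = self.map_server.resolve(eid)
        rlocs = self.map_cache[eid]
        # Encapsulate: prepend the locator, keep the identity intact
        return {"outer_dst": rlocs[0], "inner_dst": eid, "data": payload}

ms = MapServer()
ms.register("10.2.0.0/16", ["198.51.100.1"])
itr = LispRouter(ms)
frame = itr.forward("10.2.0.0/16", b"hello")
```

Note how the router only queries the mapping system once; subsequent packets to the same EID are forwarded from the local map cache.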

Real-World Use Cases:

LISP control plane has found applications in various real-world scenarios, including:

1. Data Centers: LISP helps optimize traffic flow within data centers, facilitating efficient load balancing and reducing latency.

2. Internet Service Providers (ISPs): The LISP control plane enables ISPs to enhance their routing infrastructure, improving scalability and mobility support for their customers.

3. Internet of Things (IoT): As the number of connected devices continues to grow, the LISP control plane offers a scalable solution for managing the routing of IoT devices, ensuring seamless connectivity even as devices move.

Control Plane vs Data Plane

The LISP data plane

LISP protocol and the data plane functions.
  1. Client C1 is located in a remote LISP-enabled site and wants to open a TCP connection with D1, a server deployed in a LISP-enabled Data Center. C1 queries DNS for the IP address of D1, and an A/AAAA record is returned. The address returned is the destination Endpoint Identifier (EID), and it is non-routable. EIDs are IP addresses assigned to hosts. Client C1 realizes this is not an address on its local subnet and steers the traffic to its default gateway, a LISP-enabled device. This triggers the LISP control-plane activity.
  2. The LISP control plane is triggered only if the lookup produces no results or if the only available match is a default route. This means that a Map-Request (from ITR to the mapping system) is sent only when the destination is not found.
  3. The ITR receives its EID-to-RLOC mapping from the mapping system and updates its local map cache, which previously did not contain the mapping. The local map cache can be used for future communications between these endpoints.
  4. The destination EID may be mapped to several RLOCs (Routing Locators), which identify the Egress Tunnel Routers (ETRs) at the remote Data Center site. Each entry has associated priorities and weights for load balancing, which influence inbound traffic towards the RLOC address space. The specific RLOC is selected per flow, based on a 5-tuple hash of the original client's IP packet.
  5. Once the control-plane exchange is complete, the ITR performs LISP encapsulation on the original packets and forwards the LISP-encapsulated packet to one (or more, if load balancing is used) of the RLOCs of the Data Center ETRs. RLOC prefixes are routable addresses. The destination ETR receives the packet, decapsulates it, and sends it towards the destination EID.
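The per-flow RLOC selection in step 4 can be sketched as follows. This is an illustrative simplification, not the exact RFC behavior (the function name and data layout are invented): among the RLOCs sharing the best (lowest) priority, a stable hash of the flow's 5-tuple picks one entry in proportion to its weight, so a given flow always lands on the same RLOC.

```python
# Sketch of per-flow RLOC selection: priority filters the candidate
# set, weight shares traffic among equals, and a 5-tuple hash pins
# each flow to one RLOC. Illustrative only.
import hashlib

def select_rloc(rlocs, five_tuple):
    """rlocs: list of (address, priority, weight); lower priority wins."""
    best = min(r[1] for r in rlocs)
    candidates = [r for r in rlocs if r[1] == best]
    total = sum(r[2] for r in candidates)
    # Stable hash of the flow's 5-tuple keeps a flow on one RLOC
    h = int(hashlib.sha256(repr(five_tuple).encode()).hexdigest(), 16)
    point = h % total
    for addr, _prio, weight in candidates:
        if point < weight:
            return addr
        point -= weight

rlocs = [("198.51.100.1", 1, 50), ("198.51.100.2", 1, 50), ("203.0.113.9", 2, 100)]
flow = ("10.1.1.1", "10.2.2.2", 6, 49152, 80)   # src, dst, proto, sport, dport
print(select_rloc(rlocs, flow))   # one of the priority-1 RLOCs, stable per flow
```

Using a cryptographic hash of the 5-tuple (rather than Python's randomized `hash()`) keeps the choice deterministic across runs, mirroring how real forwarding hardware keeps a flow on a single path.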

LISP control plane

  1. The destination ETRs register their non-routable EIDs with the Map-Server using a Map-Register message. This is done every 60 seconds. If the ITR does not have a local entry for the remote EID-to-RLOC mapping, it sends a Map-Request message to the Map-Resolver. Map-Requests should be rate-limited to prevent denial-of-service attacks.
  2. The Map-Resolver then forwards the request to the authoritative Map-Server. The Map-Resolver and Map-Server can be the same device, and the Map-Resolver could also be an anycast address.
  3. The Map-Server then forwards the request to the last registered ETR. The ETR compares the destination of the Map-Request to its configured EID-to-RLOC database. A match triggers the ETR to reply directly to the ITR with a Map-Reply containing the requested mapping information. Map-Replies are sent using the underlying routing system topology. If there is no match, the Map-Request is dropped.
  4. When the ITR receives the Map-Reply containing the mapping information, it updates its local EID-to-RLOC map cache. All subsequent flows are forwarded without involving the mapping system.
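The four-step exchange above can be condensed into a small sketch. All class and method names here are invented for illustration (this is a toy model, not a protocol implementation): the ETR registers with the mapping system, the ITR's cache miss triggers a Map-Request that is forwarded to the ETR, and the ETR answers the ITR directly.

```python
# Condensed sketch of the Map-Register / Map-Request / Map-Reply flow.
class MapSystem:            # Map-Resolver and Map-Server combined
    def __init__(self):
        self.registrations = {}          # EID -> registering ETR

    def map_register(self, eid, etr):    # step 1: refreshed every 60 seconds
        self.registrations[eid] = etr

    def map_request(self, eid, itr):     # steps 2-3: forwarded to the ETR
        etr = self.registrations.get(eid)
        if etr is None:
            return                       # no match: request is dropped
        etr.answer(eid, itr)

class Etr:
    def __init__(self, rloc):
        self.rloc = rloc

    def answer(self, eid, itr):          # step 3: Map-Reply direct to ITR
        itr.map_cache[eid] = self.rloc

class Itr:
    def __init__(self, mapping):
        self.mapping = mapping
        self.map_cache = {}              # step 4: populated by the Map-Reply

    def lookup(self, eid):
        if eid not in self.map_cache:    # cache miss triggers steps 1-3
            self.mapping.map_request(eid, self)
        return self.map_cache.get(eid)

ms = MapSystem()
ms.map_register("10.2.0.0/16", Etr("198.51.100.1"))
itr = Itr(ms)
assert itr.lookup("10.2.0.0/16") == "198.51.100.1"   # resolved via the exchange
assert itr.lookup("192.0.2.0/24") is None            # unregistered: dropped
```

Note that the reply bypasses the mapping system on the way back, mirroring the point in step 3 that the ETR answers the ITR directly.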

Summary: LISP Control and Data Plane

LISP, which stands for Locator/Identifier Separation Protocol, is a networking architecture that separates the device’s identity (identifier) from its location (locator). This innovative approach benefits network scalability, mobility, and security. In this blog post, we will dive into the details of the LISP control and data plane and explore how they work together to provide efficient and flexible networking solutions.

Understanding the LISP Control Plane

The control plane in LISP is responsible for managing the mapping between the device’s identifier and locator. It handles the registration process, where a device registers its identifier and locator information with a Map-Server. The control plane also maintains the mapping database, which stores the current mappings. This section will delve into the workings of the LISP control plane and discuss its essential components and protocols.

Exploring the LISP Data Plane

While the control plane handles the mapping information, the data plane in LISP is responsible for the actual forwarding of traffic. It ensures that packets are efficiently routed to their intended destination by leveraging the mappings provided by the control plane. This section will explore the LISP data plane, including its encapsulation mechanisms and how it facilitates seamless communication across different networks.

Benefits of the LISP Control and Data Plane Integration

The true power of LISP lies in the seamless integration of its control and data planes. By separating the identity and location, LISP enables improved scalability and mobility. This section will discuss the advantages of this integration, such as simplified network management, enhanced load balancing, and efficient traffic engineering.

Conclusion:

In conclusion, the LISP control and data plane form a harmonious duo that revolutionizes networking architectures. The control plane efficiently manages the mapping between the identifier and locator, while the data plane ensures optimal packet forwarding. Their integration brings numerous benefits, paving the way for scalable, mobile, and secure networks. Whether you’re an aspiring network engineer or a seasoned professional, understanding the intricacies of the LISP control and data plane is crucial in today’s rapidly evolving networking landscape.