Cisco DMVPN is based on the virtual private network (VPN), which provides private connectivity over a public network such as the Internet. DMVPN takes the VPN concept further by allowing multiple VPNs to be deployed over a shared infrastructure in a manageable and scalable way. This shared infrastructure, or “DMVPN network,” enables each VPN site to connect to the others without needing expensive dedicated connections or complex configurations.
DMVPN Explained: DMVPN creates a virtual network built on the existing infrastructure. This virtual network consists of “tunnels” between various endpoints, such as corporate networks, branch offices, or remote users, and allows secure communication between these endpoints regardless of their geographic location. Because these tunnels run on top of an underlying transport network (the underlay), DMVPN is an overlay solution.
- VPN-based security solutions
VPN-based security solutions are increasingly popular and have proven to be an effective, secure technology for protecting sensitive data traversing insecure transport mediums, such as the Internet.
Traditional IPsec-based site-to-site, hub-to-spoke VPN deployment models do not scale well and are adequate only for small- and medium-sized networks. As demand for IPsec-based VPNs grows, organizations with large-scale enterprise networks require scalable, dynamic IPsec solutions that interconnect sites across the Internet with reduced latency while optimizing network performance and bandwidth utilization.
Dynamic Multipoint VPN (DMVPN) technology is used for scaling IPsec VPN networks by offering a large-scale IPsec VPN deployment model that allows the network to expand and realize its full potential. In addition, DMVPN offers scalability that enables zero-touch deployment models.
So, for sites to learn about each other and create dynamic VPNs, the DMVPN solution consists of a combination of existing technologies. Therefore, efficiently designing and implementing a Cisco DMVPN network requires thoroughly understanding these components, their interactions, and how they all come together to create a DMVPN network.
These technologies may seem complex, and this post aims to break them down into simple terms. First, we mentioned that DMVPN has different components, which are the building blocks of a DMVPN network. These include Generic Routing Encapsulation (GRE), the Next Hop Resolution Protocol (NHRP), and IPsec.
The Dynamic Multipoint VPN (DMVPN) feature allows users to better scale large and small IP Security (IPsec) Virtual Private Networks (VPNs) by combining generic routing encapsulation (GRE) tunnels, IPsec encryption, and Next Hop Resolution Protocol (NHRP).
Each of these components needs a base configuration for DMVPN to work. Once the base configuration is in place, we have a variety of show and debug commands to troubleshoot a DMVPN network to ensure smooth operations.
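On Cisco IOS, a handful of show and debug commands cover most of this troubleshooting. A minimal toolkit looks like the following (exact output formats vary by IOS release):

```
! Verify tunnel and NHRP state
show dmvpn                 ! per-peer NBMA/tunnel addresses and session state
show ip nhrp               ! NHRP cache: static and dynamically learned mappings
show ip nhrp nhs detail    ! next-hop server reachability from a spoke
! Verify the crypto layer
show crypto isakmp sa      ! IKE security associations
show crypto ipsec sa       ! IPsec SAs and encrypt/decrypt counters
! Troubleshoot (use cautiously on production hubs)
debug dmvpn all all
```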
Key DMVPN components include:
● Multipoint GRE (mGRE) tunnel interface: Allows a single GRE interface to support multiple IPsec tunnels, simplifying the size and complexity of the configuration.
● Dynamic discovery of IPsec tunnel endpoints and crypto profiles: Eliminates the need to configure static crypto maps defining every pair of IPsec peers, further simplifying the configuration.
● NHRP: Allows spokes to be deployed with dynamically assigned public IP addresses (i.e., behind an ISP’s router). The hub maintains an NHRP database of the public interface addresses of each spoke. Each spoke registers its real address when it boots; when it needs to build direct tunnels with other spokes, it queries the NHRP database for the real addresses of the destination spokes.
Back to basics: DMVPN Explained.
A Cisco DMVPN network consists of many virtual networks. Such a virtual network is called an overlay network because it depends on an underlying transport called the underlay network, which forwards the traffic flowing through the overlay. With a protocol analyzer, you can observe overlay traffic crossing the underlay; left to its defaults, however, the underlay network has no real visibility into the overlay network.
We will have routers at the company’s sites that act as the endpoints of the tunnels forming the overlay network; this could be a WAN edge router or a Cisco ASA configured for DMVPN. The underlay, which is likely out of your control, consists of an array of service provider equipment such as routers, switches, firewalls, and load balancers.
Creating an overlay network
The overlay network does not magically appear. To create an overlay network, we need a tunneling technique, and many tunneling technologies can serve this purpose. Generic Routing Encapsulation (GRE) is the most widely used for external connectivity (VXLAN is more common inside the data center), and GRE is the technology DMVPN adopts. A GRE tunnel can carry various protocols over an IP-based network. It works by inserting an IP and GRE header on top of the original protocol packet, creating a new GRE/IP packet.
The resulting GRE/IP packet uses a source/destination pair routable over the underlying infrastructure. The GRE/IP header is the outer header, and the original protocol header is the inner header.
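In IOS terms, this encapsulation is configured on a tunnel interface. A minimal point-to-point GRE sketch follows; all addresses are hypothetical (203.0.113.x for the underlay, 10.0.0.x for the overlay):

```
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0   ! inner (overlay) address
 tunnel source GigabitEthernet0/0    ! outer source: routable underlay interface
 tunnel destination 203.0.113.2      ! outer destination: the remote peer's public IP
```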
- A key point: Is GRE over IPsec a tunneling protocol?
GRE is a tunneling protocol that can transport multicast, broadcast, and non-IP packets such as IPX. IPsec is an encryption protocol that can transport only unicast packets, not multicast or broadcast. Hence, we wrap the traffic in GRE first and then encrypt it with IPsec; this is called GRE over IPsec.
Once the virtual tunnel is fully functional, the routers need a way to direct traffic through their tunnels. Dynamic routing protocols such as EIGRP and OSPF are excellent choices for this. By simply enabling a dynamic routing protocol on the tunnel and LAN interfaces on the router, the routing table is populated appropriately:
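For example, enabling EIGRP across the tunnel and LAN could be as simple as the following sketch (the AS number and subnets are hypothetical):

```
router eigrp 100
 network 10.0.0.0 0.0.0.255      ! the tunnel (overlay) subnet
 network 192.168.1.0 0.0.0.255   ! the local LAN subnet
 no auto-summary
```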
GRE tunnels are, by nature, point-to-point tunnels. Therefore, configuring a GRE tunnel with multiple destinations by default is impossible. A point-to-point GRE tunnel is an excellent solution if the goal is only connecting two sites and you don’t plan to scale. However, as sites are added, additional point-to-point tunnels are required to connect them, which is complex to manage.
An alternative to configuring multiple point-to-point GRE tunnels is to use multipoint GRE tunnels to provide the desired connectivity. Multipoint GRE (mGRE) tunnels are similar in construction to point-to-point GRE tunnels except for the tunnel destination command: no static destination is declared, and the tunnel mode gre multipoint command is issued instead.
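The difference is visible directly in the interface configuration. A minimal mGRE sketch (hypothetical addressing) simply omits the destination:

```
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint      ! no "tunnel destination": one interface, many peers
```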
How does one remote site know what destination to set for the GRE/IP packet created by the tunnel interface? The easy answer is that, on its own, it can’t. The site can only learn the destination address with the help of an additional protocol. This brings us to the next component used to create a DMVPN network: the Next Hop Resolution Protocol (NHRP).
Essentially, mGRE features a single GRE interface on each router with the possibility of multiple destinations. This interface supports multiple IPsec tunnels and reduces the overall scope of the DMVPN configuration. However, if two branch routers need to tunnel traffic, neither mGRE nor point-to-point GRE knows which IP addresses to use. The Next Hop Resolution Protocol (NHRP) solves this issue.
Next Hop Resolution Protocol (NHRP)
The Next Hop Resolution Protocol (NHRP) is a networking protocol designed to facilitate efficient and reliable communication between nodes on a network. It provides a way for one node to discover the physical (NBMA) address that corresponds to another node’s logical (tunnel) address.
The primary role of NHRP is to allow a node to resolve the address of another node that it cannot discover through normal broadcast-based mechanisms. This is done by querying an NHRP server, which maintains a mapping of the nodes on the network. When a node sends a request to the NHRP server, the server returns the real address of the destination node.
NHRP was originally designed to allow routers connected to non-broadcast multiple-access (NBMA) networks to discover the proper next-hop mappings to communicate. It is specified in RFC 2332. NBMA networks faced a similar issue as mGRE tunnels.
NHRP allows spokes to be deployed with dynamically assigned IP addresses and reached through the central DMVPN hub. One branch router relies on this protocol to find the public IP address of a second branch router. NHRP uses a server-client model, in which one router functions as the NHRP server while the other routers are NHRP clients. In the multipoint GRE/DMVPN topology, the hub router is the NHRP server, and all the other routers are the spokes.
Each client registers with the server and reports its public IP address, which the server tracks in its cache. Then, through a process that involves registration and resolution requests from the client routers and resolution replies from the server router, traffic is enabled between various routers in the DMVPN.
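This registration process maps directly onto the NHRP commands on the tunnel interfaces. A minimal hub-and-spoke sketch follows, with hypothetical addresses (203.0.113.1 is the hub's public NBMA address):

```
! Hub (NHRP server)
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp network-id 1              ! must match across the DMVPN cloud
 ip nhrp map multicast dynamic     ! replicate multicast to registered spokes
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
!
! Spoke (NHRP client)
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1                 ! register with the hub (next hop server)
 ip nhrp map 10.0.0.1 203.0.113.1     ! static map: hub tunnel IP -> hub NBMA IP
 ip nhrp map multicast 203.0.113.1    ! send multicast (routing protocol) to the hub
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```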
An IPsec tunnel is a secure connection between two or more devices over an untrusted network using a set of cryptographic security protocols. The most common type of IPsec tunnel is the site-to-site tunnel, which connects two sites or networks. It allows two remote sites to communicate securely and exchange traffic between them. Another type of IPsec tunnel is the remote-access tunnel, which allows a remote user to connect to the corporate network securely.
Several parameters, such as authentication method, encryption algorithm, and tunnel mode, must be configured when setting up an IPsec tunnel. Additional security protocols such as Internet Key Exchange (IKE) can also be used for further authentication and encryption, depending on the organization’s needs.
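In IOS, these parameters come together in an IKE policy, a transform set, and an IPsec profile applied to the tunnel. A minimal sketch follows; the key name and algorithm choices are illustrative, not recommendations:

```
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key MY-PSK address 0.0.0.0 0.0.0.0   ! wildcard PSK: lab use only
!
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
 mode transport                   ! transport mode is typical for GRE over IPsec
!
crypto ipsec profile DMVPN-PROF
 set transform-set DMVPN-TS
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN-PROF
```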
IPsec Tunnel Endpoint Discovery
Tunnel Endpoint Discovery (TED) allows routers to automatically discover IPsec endpoints, eliminating the need to configure static crypto maps between individual IPsec tunnel endpoints. In addition, TED allows endpoints or peers to dynamically and proactively initiate the negotiation of IPsec tunnels to discover unknown peers. These remote peers do not need TED configured to be discovered by inbound TED probes: VPN devices that receive TED probes on interfaces not configured for TED can still negotiate a dynamically initiated tunnel using TED.
Routing protocols enable the DMVPN to find routes between different endpoints efficiently and effectively. Therefore, to build a scalable and stable DMVPN, choosing the right routing protocol is essential. One option is to use Open Shortest Path First (OSPF) as the interior routing protocol. However, OSPF is best suited for small-scale DMVPN deployments.
The Enhanced Interior Gateway Routing Protocol (EIGRP) or Border Gateway Protocol (BGP) is more suitable for large-scale implementations. EIGRP is not restricted by the topology limitations of a link-state protocol and is easier to deploy and scale in a DMVPN topology. BGP can scale to many peers and routes, and it puts less strain on the routers compared to other routing protocols.
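For EIGRP in particular, the hub's tunnel interface usually needs two adjustments so that routes learned from one spoke can be advertised to the others. A Phase 2-style sketch (hypothetical AS number):

```
! Hub tunnel interface adjustments for EIGRP over DMVPN
interface Tunnel0
 no ip split-horizon eigrp 100   ! let the hub re-advertise spoke routes to other spokes
 no ip next-hop-self eigrp 100   ! preserve spoke next hops for spoke-to-spoke (Phase 2)
```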
DMVPN Deployment Scenarios:
Cisco DMVPN can be deployed in two ways:
- Hub-and-spoke deployment model
- Spoke-to-spoke deployment model
Hub-and-spoke deployment model: In this traditional topology, remote sites, which are the spokes, are aggregated into a headend VPN device. The headend VPN location would be at the corporate headquarters, known as the hub.
Traffic from any remote site to other remote sites would need to pass through the headend device. Cisco DMVPN supports dynamic routing, QoS, and IP Multicast while significantly reducing the configuration effort.
Spoke-to-spoke deployment model: Cisco DMVPN allows the creation of a full-mesh VPN, in which traditional hub-and-spoke connectivity is supplemented by dynamically created IPsec tunnels directly between the spokes.
With direct spoke-to-spoke tunnels, traffic between remote sites does not need to traverse the hub; this eliminates additional delays and conserves WAN bandwidth while improving performance.
Spoke-to-spoke capability is supported in a single-hub or multi-hub environment. Multihub deployments provide increased spoke-to-spoke resiliency and redundancy.
The word phase is almost always connected to DMVPN design discussions. DMVPN phase refers to the version of DMVPN implemented in a DMVPN design. As mentioned above, we can have two deployment models, each of which can be mapped to a DMVPN Phase.
Cisco DMVPN was rolled out in stages: as the solution became more widely adopted, successive phases addressed performance issues and added improved features. There are three main phases of DMVPN:
- Phase 1 – Hub-and-spoke
- Phase 2 – Spoke-initiated spoke-to-spoke tunnels
- Phase 3 – Hub-initiated spoke-to-spoke tunnels
The differences between the DMVPN phases relate to routing efficiency and the ability to create spoke-to-spoke tunnels. We started with DMVPN Phase 1, which supported only hub-to-spoke tunnels. This did not scale well because direct spoke-to-spoke communication was impossible: the spokes could communicate with one another, but all traffic was required to traverse the hub.
Then came DMVPN Phase 2, which supported spoke-to-spoke communication with dynamic tunnels. These tunnels were brought up by initially passing traffic via the hub. Later, Cisco developed DMVPN Phase 3, which optimized how spoke-to-spoke communication happens and how the tunnels are built.
Dynamic multipoint virtual private networks began simply as what is best described as hub-and-spoke topologies. The primary tool to create these VPNs combines Multipoint Generic Routing Encapsulation (mGRE) connections employed on the hub and traditional Point-to-Point (P2P) GRE tunnels on the spoke devices.
In this initial deployment methodology, known as a Phase 1 DMVPN, the spokes can only join the hub and communicate with one another through the hub. This phase does not use spoke-to-spoke tunnels. Instead, the spokes are configured for point-to-point GRE to the hub and register their logical IP with the non-broadcast multi-access (NBMA) address on the next hop server (NHS) hub.
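A Phase 1 spoke therefore keeps a classic point-to-point GRE configuration; only the hub uses mGRE. A sketch with hypothetical addresses:

```
! Phase 1 spoke: point-to-point GRE toward the hub only
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1                 ! register with the hub's NHRP server
 ip nhrp map 10.0.0.1 203.0.113.1     ! hub tunnel IP -> hub public (NBMA) IP
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.1       ! static destination: spokes reach only the hub
```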
It is essential to keep in mind that there is a total of three phases, and each one can influence the following:
- Spoke-to-spoke traffic patterns
- Routing protocol design
Summary of DMVPN Phases
Phase 1—Hub-to-Spoke Designs: Phase 1 was the first design introduced for hub-to-spoke implementation, where spoke-to-spoke traffic would traverse the hub. Phase 1 also introduced daisy chaining of identical hubs for scaling the network, providing a Server Load Balancing (SLB) capability that spreads the load across multiple hub CPUs.
Phase 2—Spoke-to-Spoke Designs: Phase 2 design introduced the ability for dynamic spoke-to-spoke tunnels without traffic going through the hub, intersite communication bypassing the hub, thereby providing greater scalability and better traffic control. In Phase 2 network design, each DMVPN network is independent of other DMVPN networks, causing spoke-to-spoke traffic from different regions to traverse the regional hubs without going through the central hub.
Phase 3—Hierarchical (Tree-Based) Designs: Phase 3 extended Phase 2 design with the capability to establish dynamic and direct spoke-to-spoke tunnels from different DMVPN networks across multiple regions. In Phase 3, all regional DMVPN networks are bound to form a single hierarchical (tree-based) DMVPN network, including the central hubs. As a result, spoke-to-spoke traffic from different regions can establish direct tunnels with each other, thereby bypassing both the regional and main hubs.
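On IOS, the Phase 3 behavior is enabled with two NHRP commands: the hub signals a better path with redirects, and the spokes install the resulting shortcut routes. A minimal sketch:

```
! Hub: tell spokes when their traffic is hairpinning through the hub
interface Tunnel0
 ip nhrp redirect
!
! Spoke: act on redirects by installing shortcut (spoke-to-spoke) paths
interface Tunnel0
 ip nhrp shortcut
```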
Which deployment model can you use? The 80:20 traffic rule can be used to determine which model to use:
- If 80 percent or more of the traffic from the spokes is directed into the hub network itself, deploy the hub-and-spoke model.
- Consider the spoke-to-spoke model if more than 20 percent of the traffic is meant for other spokes.
The hub-and-spoke model is usually preferred for networks with a high volume of IP Multicast traffic.
Medium-sized and large-scale site-to-site VPN deployments require support for advanced IP network services such as:
● IP Multicast: Required for efficient and scalable one-to-many (i.e., Internet broadcast) and many-to-many (i.e., conferencing) communications, and commonly needed by voice, video, and specific data applications
● Dynamic routing protocols: Typically required in all but the smallest deployments or wherever static routing is not manageable or optimal
● QoS: Mandatory to ensure performance and quality of voice, video, and real-time data applications
- A key point: DMVPN Explained
Traditionally, supporting these services required tunneling IPsec inside protocols such as Generic Routing Encapsulation (GRE), which introduced an overlay network that was complex to set up and manage and limited the solution’s scalability. Indeed, traditional IPsec supports only IP unicast, making applications that involve one-to-many and many-to-many communications inefficient to deploy.
Cisco DMVPN combines GRE tunneling and IPsec encryption with Next-Hop Resolution Protocol (NHRP) routing to meet these requirements while reducing the administrative burden.
How DMVPN Works
DMVPN builds a dynamic tunnel overlay network.
• Initially, each spoke establishes a permanent IPsec tunnel to the hub. (At this stage, spokes do not establish tunnels with other spokes within the network.) The hub address should be static and known by all of the spokes.
• Each spoke registers its actual address as a client to the NHRP server on the hub. The NHRP server maintains an NHRP database of the public interface addresses for each spoke.
• When a spoke requires that packets be sent to a destination (private) subnet on another spoke, it queries the NHRP server for the real (outside) addresses of the other spoke’s destination to build direct tunnels.
• The NHRP server looks up the corresponding destination spoke in its NHRP database and replies with the real address of the target router. NHRP relieves the dynamic routing protocol of having to discover the route to the target spoke. (Dynamic routing adjacencies are established only from spoke to hub.)
• After the originating spoke learns the peer address of the target spoke, it initiates a dynamic IPsec tunnel to the target spoke.
• Integrating the multipoint GRE (mGRE) interface, NHRP, and IPsec establishes a direct dynamic spoke-to-spoke tunnel over the DMVPN network.
The spoke-to-spoke tunnels are established on demand whenever traffic is sent between the spokes. After that, packets can bypass the hub and use the spoke-to-spoke tunnel directly.
Feature Design of Dynamic Multipoint VPN
The Dynamic Multipoint VPN (DMVPN) feature combines GRE tunnels, IPsec encryption, and NHRP to provide users ease of configuration via crypto profiles, which remove the requirement to define static crypto maps, and dynamic discovery of tunnel endpoints.
This feature relies on the following two Cisco-enhanced standard technologies:
- NHRP–A client and server protocol where the hub is the server, and the spokes are the clients. The hub maintains an NHRP database of the public interface addresses of each spoke. Each spoke registers its real address when it boots and queries the NHRP database for real addresses of the destination spokes to build direct tunnels.
- mGRE Tunnel Interface –Allows a single GRE interface to support multiple IPsec tunnels and simplifies the size and complexity of the configuration.
- Each spoke has a permanent IPsec tunnel to the hub, not to the other spokes within the network. Each spoke registers as a client of the NHRP server.
- When a spoke needs to send a packet to a destination (private) subnet on another spoke, it queries the NHRP server for the real (outside) address of the destination (target) spoke.
- After the originating spoke “learns” the peer address of the target spoke, it can initiate a dynamic IPsec tunnel to the target spoke.
- The spoke-to-spoke tunnel is built over the multipoint GRE interface.
- The spoke-to-spoke links are established on demand whenever there is traffic between the spokes. After that, packets can bypass the hub and use the spoke-to-spoke tunnel.
Cisco DMVPN Solution Architecture
DMVPN allows IPsec VPN networks to better scale hub-to-spoke and spoke-to-spoke designs, optimizing performance and reducing communication latency between sites.
DMVPN offers a wide range of benefits, including the following:
• The capability to build dynamic hub-to-spoke and spoke-to-spoke IPsec tunnels
• Optimized network performance
• Reduced latency for real-time applications
• Reduced router configuration on the hub that provides the capability to dynamically add multiple spoke tunnels without touching the hub configuration
• Automatic triggering of IPsec encryption by the GRE tunnel source and destination, minimizing packet loss during tunnel setup
• Support for spoke routers with dynamic physical interface IP addresses (for example, DSL and cable connections)
• The capability to establish dynamic and direct spoke-to-spoke IPsec tunnels for communication between sites without having the traffic go through the hub; that is, intersite communication bypassing the hub
• Support for dynamic routing protocols running over the DMVPN tunnels
• Support for multicast traffic from hub to spokes
• Support for VPN Routing and Forwarding (VRF) integration extended in multiprotocol label switching (MPLS) networks
• Self-healing capability maximizing VPN tunnel uptime by rerouting around network link failures
• Load-balancing capability offering increased performance by transparently terminating VPN connections to multiple headend VPN devices
Network availability over a secure channel is critical when designing scalable IPsec VPN solutions as networks become geographically distributed. The DMVPN solution architecture is by far the most effective and scalable solution available.