Design Guide | DMVPN Phases


 

Dynamic Multipoint Virtual Private Network (DMVPN) is a dynamic form of virtual private network (VPN) that builds a mesh of VPN tunnels without pre-configuring every tunnel endpoint, i.e., the spokes. Spoke tunnels are established on demand, based on traffic patterns, without repeated configuration on hubs or spokes.

In its simplest form, DMVPN is a point-to-multipoint Layer 3 overlay VPN enabling a logical hub-and-spoke topology that supports direct spoke-to-spoke communication, depending on which DMVPN design (Phase 1, Phase 2, or Phase 3) is selected. The phase selection greatly affects routing protocol configuration and how it works over the logical topology. From a routing point of view, the parallels between Frame Relay routing and DMVPN routing protocol configuration are evident.

 

DMVPN allows the creation of full-mesh GRE or IPsec tunnels from a simple configuration template. From a provisioning point of view, DMVPN is simple.

 

DMVPN is a combination of four technologies:

mGRE: Conceptually, GRE tunnels behave like point-to-point serial links. An mGRE interface behaves like a LAN, so many neighbors are reachable over the same interface. The "m" in mGRE stands for multipoint.

Next Hop Resolution Protocol (NHRP) with a Next Hop Server (NHS): LAN environments use the Address Resolution Protocol (ARP) to determine a neighbor's MAC address (Inverse ARP for Frame Relay). In mGRE, the role of ARP is taken over by NHRP. NHRP binds the logical IP address on the tunnel to the physical IP address used on the outgoing link (the tunnel source).

The resolution process determines, when you want to form a tunnel to destination X, which physical address tunnel X resolves to. DMVPN binds IP to IP, as opposed to ARP, which binds a destination IP to a destination MAC address.

IPsec tunnel protection: DMVPN is a routing technique not directly related to encryption. IPsec is optional and is used primarily over public networks. Potential designs also exist combining DMVPN with GETVPN, which allows grouping tunnels under a single Security Association (SA).

Routing: Designers implement DMVPN without IPsec over MPLS-based networks to improve convergence, since DMVPN acts independently of the service provider's routing policy. Sites only need IP connectivity to each other to form the DMVPN network: if you can ping the tunnel endpoints and route IP between the sites, DMVPN works. The end customer, not the service provider, decides routing policy, which offers more flexibility than plain MPLS-connected sites, where the service provider determines the routing protocol policies.

 

DMVPN Messages

 

Map IP to IP: this means "if you want to reach my private (tunnel) address, you need to GRE-encapsulate the packet and send it towards my public address". These mappings are built by the spoke registration process.
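
A minimal sketch of the spoke side of this registration, assuming example values not taken from the text (10.0.0.0/24 as the tunnel subnet, 192.0.2.1 as the hub's public address):

interface Tunnel0
 ip address 10.0.0.11 255.255.255.0   ! private (tunnel) address
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1                 ! register with the hub (NHS)
 ip nhrp map 10.0.0.1 192.0.2.1       ! hub tunnel IP mapped to hub public IP
 tunnel source GigabitEthernet0/0     ! public-facing interface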

 

Three design models called Phases

The DMVPN phase selected influences spoke-to-spoke traffic patterns, supported routing designs, and scalability.

Phase 1: All traffic flows through the hub. The hub is used for the control plane of the network and is also in the data-plane path.

Phase 2: Allows spoke-to-spoke tunnels. Spoke-to-spoke communication does not need the hub in the actual data plane; spoke-to-spoke tunnels come up on demand, triggered by spoke traffic. Routing protocol design limitations exist. The hub is used for the control plane but, unlike Phase 1, is not necessarily in the data plane.

Phase 3: Improves the scalability of Phase 2. Any routing protocol with any setup can be used; NHRP redirects and shortcuts take care of the spoke-to-spoke traffic flows.

 

DMVPN Phase 1

 


 

Phase 1 consists of mGRE on the hub and point-to-point GRE tunnels on the spokes.

The hub can reach any spoke over the tunnel interface, but spokes can only go through the hub; there is no direct spoke-to-spoke traffic. A spoke only needs to reach the hub, so a host route to the hub is sufficient, which makes Phase 1 perfect for a default-route design from the hub. It can be designed around any routing protocol, as long as you set the next hop to the hub device.
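
A minimal Phase 1 sketch, reusing the assumed addressing above; the key difference is the tunnel mode on each side:

! Hub: multipoint GRE
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map multicast dynamic        ! replicate multicast to registered spokes
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
!
! Spoke: plain point-to-point GRE towards the hub
interface Tunnel0
 ip address 10.0.0.11 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1
 ip nhrp map 10.0.0.1 192.0.2.1
 ip nhrp map multicast 192.0.2.1      ! routing-protocol multicast goes to the hub only
 tunnel source GigabitEthernet0/0
 tunnel destination 192.0.2.1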

Multicast (the routing protocol control plane) is exchanged between hub and spoke, not spoke-to-spoke.

On the spoke, to help in environments where Path MTU Discovery is broken, adjust the TCP MSS. It must be 40 bytes lower than the IP MTU: ip mtu 1400 and ip tcp adjust-mss 1360. The router rewrites the maximum segment size option in TCP SYN packets, so even if Path MTU Discovery does not work, at least TCP sessions are unaffected.
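
On the spoke tunnel interface, using the values from the text:

interface Tunnel0
 ip mtu 1400              ! leave room for GRE/IPsec overhead
 ip tcp adjust-mss 1360   ! MSS = IP MTU minus 40 bytes of IP and TCP headers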

 

Tunnel Keys

Tunnel keys are optional for hubs with a single tunnel interface. They are used for parallel tunnels, usually in conjunction with VRF-lite designs. With two tunnels between hub and spoke, the hub cannot determine from the destination or source IP address which tunnel an incoming packet belongs to. Tunnel keys identify the tunnels and help map incoming GRE packets to the correct one of multiple tunnel interfaces.
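
A sketch of two parallel tunnels sharing one source, disambiguated by keys (interface numbers and key values are assumed):

interface Tunnel1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1              ! carried in the GRE header of every packet
!
interface Tunnel2
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 2              ! maps incoming GRE packets to this interface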

 

GRE Tunnel Keys

 

Tunnel keys on the 6500 and 7600: the hardware is incapable of using tunnel keys, as it cannot look that deep into the packet. All incoming traffic is switched by the CPU, so performance drops by a factor of 100. To overcome this, use a different tunnel source for each parallel tunnel.

If you have a static configuration and the network is stable, you can use an NHRP hold-time and registration timeout measured in hours, rather than the 60-second default.

In carrier Ethernet and cable networks, the spoke IP is assigned by DHCP and can change regularly; the same applies in xDSL environments, where the PPPoE session can be cleared and the spoke gets a new IP address. Non-unique NHRP registration works efficiently here.
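
All three knobs sit on the tunnel interface; the timer values below are illustrative, not recommendations:

interface Tunnel0
 ip nhrp holdtime 7200                ! advertise a long hold-time on stable networks
 ip nhrp registration timeout 3600    ! re-register less frequently
 ip nhrp registration no-unique       ! allow re-registration after a DHCP/PPPoE address change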

 

Routing Protocol

Routing for Phase 1 is simple. Summarization and default routing at the hub are allowed. The next hop on the spokes is always set by the hub; the hub is always the next hop.

Spokes always need to communicate through the hub first, so it makes no sense to send them full routing information. Simply send them a default route.

Be careful with recursive routing: sometimes a spoke advertises its physical (tunnel source) address over the tunnel, so the hub attempts to send the DMVPN packets to that spoke via the tunnel itself, resulting in tunnel flaps.

 

DMVPN Phase 1 OSPF Routing

The recommended design uses a different routing protocol over DMVPN, but you can extend the OSPF domain by adding the DMVPN network to a separate OSPF area. One big area is possible, but with a large number of spokes, try to minimize the topology information the spokes have to process.

In a redundant set-up, a spoke runs two tunnels to redundant hubs, i.e., Tunnel 1 to Hub 1 and Tunnel 2 to Hub 2. Design both tunnel interfaces into the same non-backbone area. Putting them in separate areas makes the spoke an Area Border Router (ABR), and every OSPF ABR must have a link to Area 0, resulting in complex OSPF virtual-link configuration and additional unnecessary Shortest Path First (SPF) runs.

Make sure the SPF algorithm does not consume too many spoke resources. If the spokes are high-end routers with strong CPUs, SPF runs on the spokes are not a concern; usually, though, they are low-end routers, and maintaining efficient resource levels is critical. Consider designing the DMVPN area as a stub or totally stubby area. This prevents changes (for example, prefix additions) in the non-DMVPN part of the network from causing full or partial SPF runs on the spokes.

 

Low-end spoke routers can handle around 50 routers in a single OSPF area.

 

Configure the OSPF point-to-multipoint network type: mandatory on the hub and recommended on the spokes. Spokes have GRE tunnels, which by default use the OSPF point-to-point network type, and the timers need to match for the OSPF adjacency to come up.
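
A sketch combining the network type with the stub-area advice above; the process and area numbers are assumed:

interface Tunnel0
 ip ospf network point-to-multipoint   ! mandatory on the hub, recommended on spokes
 ip ospf hello-interval 10             ! timers must match on both ends
!
router ospf 1
 area 1 stub no-summary                ! totally stubby DMVPN area (spokes use "area 1 stub")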

OSPF is hierarchical by design and does not scale well here. OSPF over DMVPN is fine as long as you have a low number of spoke sites, i.e., below 100.

 

DMVPN Phase 1 EIGRP Routing

On the hub, disable split horizon and perform summarization. Deploy EIGRP leak maps for redundant remote sites: with two routers connecting to the DMVPN, leak maps specify which information (routes) may leak through the summary to each redundant spoke.

Deploy spokes as EIGRP stub routers. Without stub routing, whenever a change occurs (e.g., a prefix is lost), the hub queries all spokes for path information.

It is also important to specify the interface bandwidth, as EIGRP paces its updates based on it and the tunnel default is very low.
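
A sketch of those three pieces, assuming EIGRP AS 1 and illustrative summary and bandwidth values:

! Hub
interface Tunnel0
 no ip split-horizon eigrp 1                   ! re-advertise spoke routes out the tunnel
 ip summary-address eigrp 1 0.0.0.0 0.0.0.0    ! send the spokes a summary default only
 bandwidth 10000                               ! reflect the real tunnel capacity
!
! Spoke
router eigrp 1
 eigrp stub connected summary                  ! never queried for lost prefixes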

 

DMVPN Phase 1 BGP Routing

EBGP is recommended. The hub must set next-hop-self towards all BGP neighbors. To save resources and configuration steps, you can use policy templates. Try to minimize routing updates to the spokes by filtering BGP updates or advertising only a default route to the spoke devices.

Recent IOS releases support dynamic BGP neighbors. Configure the range on the hub with the command bgp listen range 192.168.0.0/24 peer-group spokes. Inbound BGP sessions are accepted if the source IP address falls within the specified 192.168.0.0/24 range.
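
A minimal sketch around that command; the AS numbers are assumed, and next-hop-self and the default route follow the guidance above:

router bgp 65000
 bgp listen range 192.168.0.0/24 peer-group spokes
 neighbor spokes peer-group
 neighbor spokes remote-as 65001        ! EBGP towards the spokes
 neighbor spokes next-hop-self
 neighbor spokes default-originate      ! advertise only a default to the spokes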

 

DMVPN Phase 1 Summary

 

DMVPN Phase 2

 


 

Phase 2 uses mGRE on both the hub and the spokes, permitting on-demand spoke-to-spoke tunnels. Phase 2 requires no changes on the hub router; just change the tunnel mode on the spokes to multipoint GRE with tunnel mode gre multipoint. Tunnel keys are mandatory when multiple tunnels share the same source interface.
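
On the spoke, only the tunnel mode changes relative to Phase 1 (the key value is assumed):

interface Tunnel0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint    ! replaces "tunnel destination <hub>"
 tunnel key 1                  ! mandatory if tunnels share the same source interface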

Multicast traffic still flows only between the hub and the spokes, but data traffic can now flow from spoke to spoke.

 

DMVPN Phase 2 Packet Flow

- For the initial packet flow, even though the routing table displays the other spoke as the next hop, all packets are sent to the hub router; no shortcut is established yet.
- The spoke sends an NHRP resolution request to the hub, asking for the physical IP address of the other spoke.
- The reply is received and stored in the dynamic NHRP cache on the spoke router.
- The spoke now attempts to set up IKE and IPsec sessions directly to the other spoke.
- Once IKE and IPsec become operational, the NHRP entry becomes operational too, and the CEF table is modified so the spokes send traffic directly to each other (the show commands below can verify each stage).
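
Each stage can be observed with standard IOS show commands (output varies by platform, so none is reproduced here):

show dmvpn                ! per-peer tunnel state: NHRP registration, IKE/IPsec up
show ip nhrp              ! static and dynamic NHRP cache entries
show crypto isakmp sa     ! IKE sessions, including new spoke-to-spoke ones
show ip cef               ! confirms the rewritten next hop once the shortcut is in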

 

The process is unidirectional; reverse traffic from the other spoke triggers the same mechanism. The spokes do not end up with two unidirectional IPsec sessions, though, only one.

There are more routing protocol restrictions with Phase 2 than with Phase 1 or Phase 3. For example, summarization and default routing are NOT allowed at the hub, and the next hop on spoke routes must be preserved by the hub. Spokes need specific routes to each other's networks.

 

DMVPN Phase 2 OSPF Routing

It is recommended to use the OSPF broadcast network type. Make sure the hub is the Designated Router (DR); it will be a disaster if a spoke becomes DR. For that reason, set the spoke OSPF priority to zero.
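
A sketch of the DR control; priority 255 on the hub is a conventional choice, not a value from the text:

! Hub
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 255     ! hub always wins the DR election
!
! Spoke
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 0       ! spoke can never become DR/BDR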

OSPF multicast packets are delivered to the hub only. Due to the configured static or dynamic NHRP multicast maps, OSPF neighbor relationships form only between the hub and the spokes.

Spoke routers need all routes from all other spokes, so default routing towards the hub is impossible.

 

DMVPN Phase 2 EIGRP Routing

No changes on the spokes. On the hub, add no ip next-hop-self eigrp so the originating spoke is preserved as the next hop, and disable EIGRP split horizon on the hub routers to propagate updates between spokes.

Do not use summarization: if summarization is configured, the specific routes never arrive at the other spokes, resulting in spoke-to-spoke traffic going through the hub.
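
The two hub-side interface commands, assuming EIGRP AS 1:

interface Tunnel0
 no ip split-horizon eigrp 1    ! propagate spoke routes to the other spokes
 no ip next-hop-self eigrp 1    ! preserve the originating spoke as the next hop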

 

DMVPN Phase 2 BGP Routing

Remove next-hop-self on the hub routers so the originating spoke is preserved as the BGP next hop.

 

Split Default Routing

Split default routing may be used if you require default routing towards the hub: for example, a central firewall design where all traffic must pass through the hub before proceeding to the Internet. The problem is that Phase 2 allows spoke-to-spoke traffic, so even though the data-plane default route points to the hub, the tunnel endpoints themselves need a default route pointing to the Internet.

This requires two routing perspectives: one for the GRE and IPsec packets and another for the data traversing the enterprise WAN. It is possible to configure Policy-Based Routing (PBR), but only as a temporary measure; PBR can run into bugs and is difficult to troubleshoot. Split routing with VRFs is much cleaner: the routing table of each VRF can contain its own default route, and routing in one VRF does not affect routing in another.
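
A front-door VRF sketch; the VRF name, addressing, and interfaces are assumed for illustration:

vrf definition INTERNET
 address-family ipv4
!
interface GigabitEthernet0/0
 vrf forwarding INTERNET                ! ISP-facing link lives in the VRF
 ip address dhcp
!
interface Tunnel0
 ip address 10.0.0.11 255.255.255.0     ! tunnel itself stays in the global table
 tunnel source GigabitEthernet0/0
 tunnel vrf INTERNET                    ! GRE/IPsec endpoint lookup uses the VRF
!
ip route vrf INTERNET 0.0.0.0 0.0.0.0 192.0.2.254   ! default for tunnel packets
ip route 0.0.0.0 0.0.0.0 10.0.0.1                   ! data-plane default via the hub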

 


 

Multi-Homed Remote Site

To complicate matters, a multi-homed spoke needs two default routes (0.0.0.0/0), one for each DMVPN hub network. Now we have two default routes in the same INTERNET VRF, and we need a mechanism that tells us which one to use for which DMVPN cloud.

 

Redundant Sites

 

Even if the tunnel for mGRE-B is sourced from the ISP-B link, the routing table could send the traffic out via ISP-A. ISP-A may perform Unicast Reverse Path Forwarding (uRPF) checks to prevent address spoofing, resulting in packet drops.

The problem is that the selection of the outgoing link (ISP-A) depends on Cisco Express Forwarding (CEF) hashing, which you cannot influence. The outgoing packet has to use the correct outgoing link based on the source, not the destination, IP address. The solution is tunnel route-via, effectively policy routing for GRE (a sketch follows). To make this work with IPsec, install two VRFs, one for each ISP.
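
A sketch of route-via on a dual-ISP spoke (interface assignments are assumed):

interface Tunnel1
 tunnel source GigabitEthernet0/1                ! ISP-A facing
 tunnel route-via GigabitEthernet0/1 mandatory   ! drop rather than use the wrong link
!
interface Tunnel2
 tunnel source GigabitEthernet0/2                ! ISP-B facing
 tunnel route-via GigabitEthernet0/2 mandatory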

 

DMVPN Phase 3

 


 

Phase 3 consists of mGRE on the hub and mGRE tunnels on the spokes, allowing on-demand spoke-to-spoke tunnels. The difference here is that when the hub receives an NHRP request, it can send a redirect to the remote spoke to tell it to update its routing table.

This allows spoke-to-spoke communication even with default routing. Even though the routing table points to the hub, the traffic flows directly between spokes. There are no limits on routing; we still get spoke-to-spoke traffic flow even when default routes are used.

This is a traffic-driven redirect: the hub notices a spoke is sending it data destined for another spoke and sends a redirect back to the spoke, saying "use this other spoke". The redirect informs the sender of a better path; the spoke installs this shortcut and initiates IPsec with the other spoke. Use ip nhrp redirect on hub routers and ip nhrp shortcut on spoke routers, as in the sketch below.
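
The Phase 3 additions on top of a Phase 2-style mGRE configuration:

! Hub
interface Tunnel0
 ip nhrp redirect    ! signal a better path when traffic hairpins through the hub
!
! Spoke
interface Tunnel0
 ip nhrp shortcut    ! install the shortcut learned from the redirect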

There are no restrictions on the routing protocol or on which routes the spokes receive. Summarization and default routing are allowed, and the next hop is always the hub.

 
