In our last article we looked at IPv6-over-IPv4 DMVPN configuration, where IPv4 transport was used to tunnel IPv6 traffic. In this blog post I would like to show you how to deploy a pure IPv6 DMVPN network and, even more importantly, how to run an IPv6 Routing Protocol on top of that IPv6 transport in the Cloud.
Since IPv6 will now also be used as the underlying transport, the overall configuration of the DMVPN devices will be a little different from the previous example; also note that our topology was slightly modified:
The key thing here is that the NBMA addresses are no longer IPv4; basically, IPv6 is used everywhere, which means that the mappings on the Spokes will always refer to IPv6 information.
Let’s start our configuration. We will first configure the Hub (R3), then the Spokes (R2 and R4), and finally enable routing on the Overlay network. Since IPsec is optional, we will not be using it in this example (note that to protect IPv6 packets, IKEv2 would have to be used, not IKEv1).
R3 (Hub) configuration. Again, everything is IPv6, including the tunnel mode. Don’t forget that link-local addresses must always be hard-coded on a given Cloud, on every device:
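The tunnel setup might look along these lines – a sketch only, since interface names, addresses, and the NHRP network-id are assumptions that depend on the actual topology:

```
! R3 (Hub) - addressing and interface names assumed
interface Tunnel0
 ipv6 address 2001:DB8:100::3/64
 ipv6 address FE80::3 link-local
 ipv6 nhrp map multicast dynamic   ! learn Spoke multicast mappings dynamically
 ipv6 nhrp network-id 1
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint ipv6   ! mGRE over IPv6 transport

! R2 (Spoke) - R4 is analogous
interface Tunnel0
 ipv6 address 2001:DB8:100::2/64
 ipv6 address FE80::2 link-local
 ipv6 nhrp map 2001:DB8:100::3/128 2001:DB8:FFFF::3   ! overlay-to-NBMA mapping, both v6
 ipv6 nhrp map multicast 2001:DB8:FFFF::3
 ipv6 nhrp nhs 2001:DB8:100::3
 ipv6 nhrp network-id 1
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint ipv6
```

Note how both halves of the Spoke’s static mapping are IPv6 addresses now, exactly as described above.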
Let’s now quickly verify our configuration. First, look at the transport mechanism of the Tunnel:
Now the NHRP mappings learned by the Hub after the Spokes joined the Cloud:
And we see that we have connectivity within the Cloud:
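For reference, the verification steps above correspond to commands along these lines (device names and addresses are assumptions; outputs omitted):

```
R3# show dmvpn             ! tunnel state and registered peers
R3# show ipv6 nhrp         ! NHRP registrations from the Spokes
R2# ping 2001:DB8:100::4   ! Spoke-to-Spoke reachability inside the Cloud
```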
OK, so at this point we don’t yet know about the private networks connected to our devices. We have to advertise this information somehow, and the way we do this is by enabling a Routing Protocol over the Cloud. This is how we extend our regular Control Plane with the VPN information needed to prepare the Data Plane for DMVPN transit traffic.
What I would like to show you specifically is how to use EIGRPv6 and OSPFv3, and how to adjust each protocol’s configuration to match the DMVPN Phase configured in the network. Before we start, make sure that everyone can route IPv6 packets:
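At a minimum that means enabling IPv6 routing globally on every router (the `ipv6 cef` line is an assumption – it is on by default in most recent images):

```
ipv6 unicast-routing
ipv6 cef
```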
DMVPN Phase II
We are going to start with EIGRP. I am going to use a new configuration framework for this protocol, known as EIGRP Named Mode. In short, it allows you to configure a single process for multiple Address Families, and it supports a few new features that cannot be enabled in the Classic Mode:
The good thing about Named Mode is that all protocol-related configuration is done under the process; you no longer have to move back and forth between interface-level and process-level configuration modes (Next-Hop and Split Horizon here). Since the process activates on all interfaces of the device by default, you have to say “shutdown” under every interface you don’t want to run a given Address Family on. In our case we only want to run EIGRPv6 on the Tunnel interfaces (and on the private subnets, to advertise them), so I shut it down everywhere else. Another major thing to remember is that the Router-ID is by default taken from the highest IPv4 address on the router (Loopbacks preferred) – if no interface runs IPv4, you have to hard-code it statically. A similar configuration goes on the Spokes:
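A sketch of what the Named Mode configuration described above could look like on the Hub (process name, AS number, and Router-ID are assumptions):

```
router eigrp DMVPN
 address-family ipv6 unicast autonomous-system 100
  eigrp router-id 3.3.3.3      ! mandatory here: no IPv4 interface to borrow it from
  af-interface default
   shutdown                    ! do not run EIGRPv6 anywhere ...
  exit-af-interface
  af-interface Tunnel0
   no shutdown                 ! ... except the Cloud and the private subnets
   no next-hop-self            ! Phase II: preserve the originating Spoke as Next-Hop
   no split-horizon            ! let the Hub reflect Spoke routes to other Spokes
  exit-af-interface
  af-interface Loopback10
   no shutdown
  exit-af-interface
```

The Spokes need neither the `no next-hop-self` nor the `no split-horizon` lines, only the `af-interface` shutdowns and the Router-ID.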
The end result is that we should now be able to reach the other prefixes and build a direct Spoke-to-Spoke tunnel between R2 and R4. Unfortunately, the particular version of IOS I am running (15.3) apparently has some issues with NHRP, and the Spokes always resolve the other Spokes’ Next-Hop address to the NBMA address of the Hub. This is either an IOS bug, or Cisco is simply pushing towards Phase III deployments, deeming Phase II a legacy solution:
So in this particular code the traffic still flows through the Hub, and that is obviously not what we want. But before we switch to Phase III, let’s also take a look at the OSPFv3 configuration you would normally use to support Phase II:
In the case of OSPF we must use the Broadcast network type to keep the original IP address as the Next-Hop. Since there is going to be a DR/BDR election on that network, make sure that the Hub is the DR (otherwise routes will not get propagated to all Spokes). This translates to setting the OSPF priority to 0 on the Spokes. The process was also enabled on the Loopback10 interfaces because I am using them to emulate our private sites.
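A minimal sketch of those OSPFv3 Phase II settings (process ID and interface names are assumptions):

```
! R3 (Hub)
interface Tunnel0
 ipv6 ospf network broadcast   ! keep the original Next-Hop on the NBMA segment
 ipv6 ospf 1 area 0
interface Loopback10
 ipv6 ospf 1 area 0

! R2/R4 (Spokes)
interface Tunnel0
 ipv6 ospf network broadcast
 ipv6 ospf priority 0          ! never become DR/BDR - the Hub must win
 ipv6 ospf 1 area 0
interface Loopback10
 ipv6 ospf 1 area 0
```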
Unfortunately, we will have the same problem in the Data Plane as with EIGRP – again, this is probably a bug or a new “feature”:
DMVPN Phase III
What’s going to change here? First and foremost, we will have to enable NHRP Redirects on the Hub and Shortcuts on the Spokes:
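With an IPv6 Cloud these are the `ipv6 nhrp` variants of the familiar commands (sketch; interface names assumed):

```
! R3 (Hub)
interface Tunnel0
 ipv6 nhrp redirect   ! tell Spokes there is a better path

! R2/R4 (Spokes)
interface Tunnel0
 ipv6 nhrp shortcut   ! install the resolved Spoke-to-Spoke path
```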
Now the protocol-related changes (I am modifying the configs shown for Phase II). For EIGRP we only need to allow the Next-Hop to be changed so that it always points to the Hub:
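On the Hub this simply means reverting to the default Next-Hop behavior under the af-interface (sketch; process name and AS number as assumed earlier):

```
router eigrp DMVPN
 address-family ipv6 unicast autonomous-system 100
  af-interface Tunnel0
   next-hop-self       ! Phase III: Hub advertises itself as the Next-Hop
   no split-horizon    ! still needed so Spoke routes reach other Spokes
  exit-af-interface
```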
At this point NHRP Redirection should have already taken place – after R2 receives a Resolution Reply for 2192:1:4::4/128, an NHRP cache entry is added. With the Shortcut feature enabled, NHRP intercepts every data packet in the output feature path. It checks whether there is an NHRP cache entry for the destination of the data packet and, if so, replaces the current output adjacency with the one present in the NHRP cache. The data packet is therefore switched out using the new adjacency provided by NHRP. We can confirm that with another trace:
Also note that newer IOS versions will actually tell you about NHRP-learned information :
The CEF FIB table also gets updated automatically:
Finally, the DMVPN Phase III configuration for OSPFv3 requires a single change on the Tunnels:
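Since Phase III no longer needs to preserve the Spoke Next-Hops, the DR-based network type can be dropped; point-to-multipoint is a common choice here (sketch, applied on every Tunnel interface):

```
interface Tunnel0
 ipv6 ospf network point-to-multipoint   ! all traffic initially routes via the Hub;
                                         ! NHRP Shortcuts then cut Spoke-to-Spoke paths
```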
After you send some data packets, you should be able to see the Shortcuts installed :