Configure a Highly-Available IPSec VPN tunnel on IOS


It is possible to configure a highly available IPSec VPN tunnel on IOS so that SA information is replicated between the routers. This ensures that a failover will be transparent to users and will not require adjustments or reconfiguration of any remote peers.

There are two protocols used to deploy this feature: HSRP and Stateful Switchover (SSO). HSRP is one of the First Hop Redundancy Protocols that provide network redundancy for IP networks, ensuring that user traffic immediately and transparently recovers from failures in network edge devices. The protocol monitors the interfaces so that if either interface goes down, the whole router is deemed to be down and ownership of the IKE and IPSec SAs is passed to the standby router (which then transitions to the HSRP active state). SSO allows the active and standby routers to share IKE and IPSec state information, so both routers have enough information to become the active router at any time.

Before we take a look at the configuration, let's have a few words about our topology. The internal network (VLAN 146 below) configuration is outside the scope of this post, but it would normally be configured with a separate HSRP instance, tracking not only internal but also external interfaces. The goal is to make sure that traffic leaving the VPN enters the active router (SSO/HSRP-active). So things like a default route pointing to the internal VIP, or using RRI, are something you would definitely want to look at.
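As a rough sketch only (the internal HSRP group number and VIP below are hypothetical, not from this topology), the internal side could pair a second HSRP group that tracks the outside interface with RRI on the crypto map entry:

interface GigabitEthernet0/0
 standby 1 ip 6.6.146.1
 standby 1 preempt
 standby 1 track GigabitEthernet0/1 30

crypto map MAP2 10 ipsec-isakmp
 reverse-route

With reverse-route, the router injects a static route for the remote proxy subnet (6.6.2.0/24 here), which can then be redistributed into the internal network so return traffic always follows the SSO-active unit.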

[Image: piotr-IPSec-Stateful]

Our focus will be the "outside" part, i.e. where the tunnel is terminated. The session will land on a virtual address (6.6.156.100) that is associated with our HSRP instance. The configuration of R2 is going to be like a regular L2L tunnel – R10 and R11 are where HSRP and SSO will be deployed. Let's first look at the R2 config:

crypto isakmp policy 10
 encr aes
 authentication pre-share
 group 2

crypto isakmp key cisco address 6.6.156.100
crypto isakmp keepalive 10 3 periodic

crypto ipsec transform-set SET2 esp-aes esp-sha-hmac 
crypto ipsec security-association replay window-size 1024

ip access-list extended HA_VPN
 permit ip 6.6.2.0 0.0.0.255 6.6.146.0 0.0.0.255

crypto map MAP2 10 ipsec-isakmp 
 set peer 6.6.156.100
 set transform-set SET2 
 set pfs group2
 match address HA_VPN

int g0/0
 crypto map MAP2

Again, it is pretty much a regular IKEv1 L2L configuration where we define our Phase I and II policies, authentication credentials, and encryption domain, and we use a crypto map to bind these together and associate them with an interface. Two additional things were done: enabling DPD (legacy keepalives are not supported by this feature) and expanding the anti-replay window. DPD is used to detect liveness of the remote peer, while the anti-replay window was increased to avoid potential problems related to how SSO replicates sequence number updates to the standby SA. By default, this happens every X-number of packets, and this "X" is then explicitly set to a minimal value on R10 and R11 (1000 packets). Also, note that we are using Pre-Shared Keys for authentication – that's another limitation of Stateful IPSec Failover.

All right, what do we have to configure on R10 and R11? The same regular settings plus HSRP and SSO :

R10 :
crypto isakmp policy 10
 encr aes
 authentication pre-share
 group 2

crypto isakmp key cisco address 6.6.25.2       

crypto isakmp keepalive 10 3 periodic

ip access-list extended HA_VPN
 permit ip 6.6.146.0 0.0.0.255 6.6.2.0 0.0.0.255

crypto ipsec transform-set SET2 esp-aes esp-sha-hmac 

crypto map MAP2 10 ipsec-isakmp 
 set peer 6.6.25.2
 set transform-set SET2 
 set pfs group2
 match address HA_VPN

crypto ipsec security-association replay window-size 1024

Now let's look at that "extra" configuration. Here's how you can tune anti-replay updates:

crypto map MAP2 redundancy replay-interval in 1000 out 1000

Next, we need to tell the router what addresses and ports will be used by SSO to replicate the state information (timeout settings shown are based on the default values). A similar configuration will be done on R11, but with the addresses and ports reversed (local will be R11, remote R10):

ipc zone default
 association 1
  no shutdown
  protocol sctp
   local-port 5000
    local-ip 6.6.146.10
    retransmit-timeout 300 10000
    path-retransmit 5
    assoc-retransmit 5
   remote-port 5000
    remote-ip 6.6.146.11

Now we should build a tracking object. Instead of looking only at G0/1, we will be looking at two interfaces (inside and outside), to ensure that failover takes place no matter which of the two interfaces fails (remember, our Active Router must be active for both networks to avoid traffic black-holing).

track 1 interface GigabitEthernet0/0 line-protocol
track 2 interface GigabitEthernet0/1 line-protocol

track 3 list boolean and
 object 1
 object 2

It is important to keep the HSRP priorities the same on both routers. This is needed because the SSO standby device always reboots to sync its state with the active box. If you left one router with a higher priority and this device failed, the other router would reboot itself again once the previously active box came back up and reclaimed the active role.

interface GigabitEthernet0/1
 ip address 6.6.156.10 255.255.255.0
 standby 2 ip 6.6.156.100
 standby 2 preempt
 standby 2 priority 100
 standby 2 name HSRP
 standby 2 track 3 decrement 30
 crypto map MAP2 redundancy HSRP stateful

Note that the crypto map was applied with the "redundancy stateful" option. Finally, we need to activate inter-device SSO communication:

redundancy inter-device
 scheme standby HSRP

A very similar configuration is done on R11. The only changes from the R10 config are to SSO (as explained earlier):

R11 :
crypto isakmp policy 10
 encr aes
 authentication pre-share
 group 2

crypto isakmp key cisco address 6.6.25.2       
crypto isakmp keepalive 10 3 periodic

ip access-list extended HA_VPN
 permit ip 6.6.146.0 0.0.0.255 6.6.2.0 0.0.0.255

crypto ipsec transform-set SET2 esp-aes esp-sha-hmac 

crypto map MAP2 10 ipsec-isakmp 
 set peer 6.6.25.2
 set transform-set SET2 
 set pfs group2
 match address HA_VPN

crypto ipsec security-association replay window-size 1024
crypto map MAP2 redundancy replay-interval in 1000 out 1000

ipc zone default
 association 1
  no shutdown
  protocol sctp
   local-port 5000
    local-ip 6.6.146.11
    retransmit-timeout 300 10000
    path-retransmit 5
    assoc-retransmit 5
   remote-port 5000
    remote-ip 6.6.146.10

track 1 interface GigabitEthernet0/0 line-protocol
track 2 interface GigabitEthernet0/1 line-protocol
track 3 list boolean and
 object 1
 object 2

interface GigabitEthernet0/1
 ip address 6.6.156.11 255.255.255.0
 standby 2 ip 6.6.156.100
 standby 2 preempt
 standby 2 priority 100
 standby 2 name HSRP
 standby 2 track 3 decrement 30
 crypto map MAP2 redundancy HSRP stateful

redundancy inter-device
 scheme standby HSRP

NOTE: If you are using 15.2(3)T to test this (as we have on our routers), it is definitely advisable to disable the hardware VPN module on both R10 and R11 due to a bug:

no crypto engine onboard 0

Once you have configured the devices, the HSRP standby unit will reload to synchronize SAs with the active unit.

Time to verify our configuration:

R11#sh crypto engine brief 
        crypto engine name:  Virtual Private Network (VPN) Module
        crypto engine type:  hardware
                     State:  Disabled
                  Location:  onboard 0
              Product Name:  Onboard-VPN
                HW Version:  1.0
               Compression:  Yes
                       DES:  Yes
                     3 DES:  Yes
                   AES CBC:  Yes (128,192,256)
                  AES CNTR:  No
     Maximum buffer length:  0000
          Maximum DH index:  0000
          Maximum SA index:  0000
        Maximum Flow index:  3200
      Maximum RSA key size:  0000


        crypto engine name:  Cisco VPN Software Implementation
        crypto engine type:  software
             serial number:  12D23801
       crypto engine state:  installed
     crypto engine in slot:  N/A


R10#sh standby brief
                     P indicates configured to preempt.
                     |
Interface   Grp  Pri P State   Active          Standby         Virtual IP
Gi0/1       2    100 P Active  local         6.6.156.11    6.6.156.100

R10#sh crypto ha
IKE VIP: 6.6.156.100
  stamp: C0 7F F5 AE 96 7E 06 E3 FC 92 F3 60 92 51 49 88 
IPSec VIP: 6.6.156.100

R10#sh redundancy states 
       my state = 13 -ACTIVE
     peer state = 8  -STANDBY HOT 
           Mode = Duplex
        Unit ID = 0

     Maintenance Mode = Disabled
    Manual Swact = enabled
 Communications = Up

   client count = 13
 client_notification_TMR = 60000 milliseconds
           RF debug mask = 0x0  

R11#sh redundancy states 
       my state = 8  -STANDBY HOT 
     peer state = 13 -ACTIVE 
           Mode = Duplex
        Unit ID = 0

     Maintenance Mode = Disabled
    Manual Swact = cannot be initiated from this the standby unit
Communications = Up

   client count = 14
 client_notification_TMR = 60000 milliseconds
           RF debug mask = 0x0

R10#sh redundancy inter-device 
Redundancy inter-device state: RF_INTERDEV_STATE_ACT
  Scheme: Standby
      Groupname: HSRP Group State: Active
  Peer present: RF_INTERDEV_PEER_COMM
  Security: Not configured

R11#sh redundancy inter-device 
Redundancy inter-device state: RF_INTERDEV_STATE_STDBY
  Scheme: Standby
      Groupname: HSRP Group State: Standby
  Peer present: RF_INTERDEV_PEER_COMM
  Security: Not configured

R2#ping 6.6.146.4 source g0/0 rep 5    

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 6.6.146.4, timeout is 2 seconds:
Packet sent with a source address of 6.6.2.2 
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 28/28/28 ms

R10#sh crypto session detail
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection     
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation     
X - IKE Extended Authentication, F - IKE Fragmentation

Interface: GigabitEthernet0/1
Uptime: 01:02:50
Session status: UP-ACTIVE     
Peer: 6.6.25.2 port 500 fvrf: (none) ivrf: (none)
      Phase1_id: 6.6.25.2
      Desc: (none)
  IKEv1 SA: local 6.6.156.100/500 remote 6.6.25.2/500 Active 
          Capabilities:D connid:1002 lifetime:22:57:09
  IPSEC FLOW: permit ip 6.6.146.0/255.255.255.0 6.6.2.0/255.255.255.0 
        Active SAs: 2, origin: crypto map
        Inbound:  #pkts dec'ed 4 drop 0 life (KB/Sec) 4355669/3236
        Outbound: #pkts enc'ed 4 drop 0 life (KB/Sec) 4355669/3236

Let's now start sending traffic from R2 and disable G0/0 on R10 (this causes it to reboot, since it loses the SSO-active status) and see what happens:

R2#ping 6.6.146.4 source g0/0 rep 50000

Type escape sequence to abort. Sending 50000, 100-byte 
ICMP Echos to 6.6.146.4, timeout is 2 seconds: 
Packet sent with a source address of 6.6.2.2

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!U.
...!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!

It looks like about 5 echoes were lost, but then we have connectivity again.

R11#
*Dec 11 21:44:26.123: %HSRP-5-STATECHANGE: 
GigabitEthernet0/1 Grp 2 state Standby -> Active
*Dec 11 21:44:26.127: %CRYPTO-5-IPSEC_SA_HA_STATUS: 
IPSec sa's if any, for vip  6.6.156.100 will change 
from STANDBY to ACTIVE

R11#sh crypto session detail
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection     
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation     
X - IKE Extended Authentication, F - IKE Fragmentation

Interface: GigabitEthernet0/1
Session status: UP-ACTIVE     
Peer: 6.6.25.2 port 500 fvrf: (none) ivrf: (none)
      Desc: (none)
      Phase1_id: (none)
  IKEv1 SA: local 6.6.156.100/500 remote 6.6.25.2/500 Active 
          Capabilities:D connid:1001 lifetime:23:49:39
  IPSEC FLOW: permit ip 6.6.146.0/255.255.255.0 6.6.2.0/255.255.255.0 
        Active SAs: 2, origin: crypto map
        Inbound:  #pkts dec'ed 138 drop 0 life (KB/Sec) 3742576/3083
        Outbound: #pkts enc'ed 135 drop 0 life (KB/Sec) 4203376/3083

Finally, I want to mention that this feature is very buggy, especially in the IOS code that we run on our devices; disabling the hardware VPN module does not appear to solve all of the problems (by the way, don't try this in production). You may see that SA synchronization works only until the first failure occurs; after it, the SA data does not appear to be mirrored to the standby device, even though the SSO states are shown correctly.


Understanding WAN Quality of Service


The time has come, CCIE Collaboration hopefuls, to focus my blog on Quality of Service (QoS). I know, it’s everyone’s favorite subject, right? Well, you don’t have to like it; you just have to know it!

I would specifically like to focus on WAN QoS policies as they are going to be an essential piece of the lab blueprint to understand. Typically, the goal on a WAN interface is to queue traffic in such a way as to prioritize certain types of traffic over other types of traffic. Voice traffic will usually be placed in some type of expedited or prioritized queue while other types of traffic (video, signaling, web, etc.) will use other queues to provide minimum bandwidth guarantees. Policies such as this will utilize the Modular QoS Command Line Interface (MQC) for implementation.

To begin, let’s use our three-site topology (HQ, SB, and SC) to provide a backdrop for this example. The HQ site (R1) has a Frame Relay connection to both the SB (R2) and SC (R3) sites through the same physical Serial interface, which has a total of 1.544 Mbps of bandwidth available. Assume that both R2 and R3 have connections to R1 using Frame Relay over their respective Serial interfaces, utilizing 768 kbps of bandwidth for each Permanent Virtual Circuit (PVC).

[Image: WAN-QoS-01]

For the link between HQ and SB, we should provide priority queuing for 5 voice calls (DSCP EF) using the G.729 codec, a minimum bandwidth of 150 kbps for signaling (DSCP CS3), and 400 kbps of bandwidth for video traffic (DSCP AF41). The link between HQ and SC should provide priority queuing for 3 voice calls using the G.711 codec and minimum bandwidth guarantees of 150 kbps and 300 kbps for signaling (DSCP CS3) and video (DSCP AF41) traffic, respectively.

Before we implement this QoS policy in IOS, we should first determine the amount of bandwidth to be used by the “voice” class of traffic in each policy. For the link between HQ and SB, we must account for 5 total calls at G.729 while the link between HQ and SC requires bandwidth reservations for 3 G.711 calls. To determine the bandwidth, we must use three different formulas in our calculations.

  • Total Packet Size (bytes) = Layer 2 Overhead + Layer 3 Overhead + Codec Payload Size
  • Packets Per Second (packets) = Codec Bitrate/(Codec Payload Size x 8)
  • Bandwidth (bits per second) = Total Packet Size x Packets Per Second x 8

First, let’s focus on the link between HQ and SB to solve the equation for Total Packet Size. For the Layer 2 overhead value, since Frame Relay has been implemented, a value of 7 bytes should be used. Next, for the Layer 3 overhead value, we must take into account the total size of the IP (20 bytes), UDP (8 bytes), and RTP (12 bytes) headers (40 bytes). If Compressed RTP (cRTP) is used, the Layer 3 overhead can shrink all the way down to a value of 2 bytes (4 bytes with CRC). Next, the codec payload size must be determined. Since the G.729 codec is used for the HQ and SB link, this value should be 20 bytes (assuming the default 20 ms codec sample interval). Unfortunately, all of the above values cannot be determined from a show command or any other easily accessible mechanism, so they must be committed to memory. The formula for Total Packet Size should now be a solvable equation.

  • Total Packet Size = 7 + 40 + 20
  • Total Packet Size = 67 bytes

Next, to solve the Packets Per Second equation, we must use the G.729 codec bitrate of 8000 bps. We then multiply the codec payload size, which was used in the previous equation, by 8 in order to convert the value from bytes to bits.

  • Packets Per Second = 8000/(20 x 8)
  • Packets Per Second = 50 packets

Finally, the last equation should be used to calculate the bandwidth. It is necessary to solve the previous two equations first since each of the calculated values will be used in this equation. The Total Packet Size was determined to be 67 bytes while Packets Per Second is 50. Lastly, we must multiply by 8 to convert the value to bits.

  • Bandwidth = 67 x 50 x 8
  • Bandwidth = 26800 bps or 26.8 kbps

Since the number of calls allowed between HQ and SB should be 5, we should multiply 26.8 by 5 in order to arrive at the final bandwidth value that should be configured within the QoS policy (134 kbps).

Using the same formulas, let’s now tackle the calculations for the HQ to SC link. Remember, this link uses a different codec (G.711) than the previously calculated link between HQ and SB. With that in mind, the codec payload size (160 bytes) and codec bit rate (64000 bps) must be modified in the equations.

  • Total Packet Size = 7 + 40 + 160
  • Total Packet Size = 207 bytes
  • Packets Per Second = 64000/(160 x 8)
  • Packets Per Second = 50 packets
  • Bandwidth = 207 x 50 x 8
  • Bandwidth = 82800 bps or 82.8 kbps

Since the final bandwidth value per call is 82.8 kbps, we must multiply that value by 3 to get the proper queue bandwidth for the voice class on the link between HQ and SC (249 kbps).

To learn more about bandwidth calculations, visit the following link (cisco.com).

Now that the bandwidth for the “voice” class has been calculated, the QoS policy should be implemented within each router. The first step is to classify the type of traffic that should be manipulated. In this case, we were given the exact classifications that should be used for each type of traffic in the above. To configure this within IOS, we must use the class-map command on each router.

R1, R2, and R3

[Image: WAN-QoS-02]
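The screenshot isn't reproduced here, but given the DSCP classifications above, the class-maps would look roughly like this (the VOICE name is an assumption, chosen to match the SIGNALING/VIDEO naming used later):

class-map match-all VOICE
 match dscp ef
class-map match-all SIGNALING
 match dscp cs3
class-map match-all VIDEO
 match dscp af41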

Once the traffic has been classified appropriately on each router, the policy-map command must be used to implement the policy dictated by the previously mentioned requirements. On the R1 router, we must create two policies, one for SB, and one for SC. For both policy-maps, the priority command should be used to ensure that voice packets are placed into the low latency queue (LLQ) and treated with the highest priority. The bandwidth command, used in the “SIGNALING” and “VIDEO” classes, provides the minimum bandwidth guarantee for the class. Each value was entered as required by the provided scenario.

R1

[Image: WAN-QoS-03]
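Based on the calculated values, a sketch of what R1's two policy-maps would contain (the HQ-TO-SC name is an assumption mirroring HQ-TO-SB; priority and bandwidth values are in kbps):

policy-map HQ-TO-SB
 class VOICE
  priority 134
 class SIGNALING
  bandwidth 150
 class VIDEO
  bandwidth 400
policy-map HQ-TO-SC
 class VOICE
  priority 249
 class SIGNALING
  bandwidth 150
 class VIDEO
  bandwidth 300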

The QoS policy should be configured on R2 and R3 as well.

R2

[Image: WAN-QoS-04]

R3

[Image: WAN-QoS-05]

Now that the QoS policy has been configured, we must assign it to the WAN interface on each router. This can be a little bit tricky to configure, based on where we try to assign the policy. For example, let’s try to assign the “HQ-TO-SB” policy-map to the sub-interface pointing towards R2 using the service-policy output command.

R1

[Image: WAN-QoS-06]

As you can see in the above, Class-Based Weighted Fair Queuing (CBWFQ) is not supported on sub-interfaces, which creates a problem for us. Where can we apply the policy? Let’s try creating a nested policy instead, using another policy-map. Within the “PARENT” policy, under “class-default”, assign the recently created “HQ-TO-SB” policy. Then try to assign the “PARENT” policy to the sub-interface.

R1

[Image: WAN-QoS-07]

As shown above, this does not work either, stating, “Cannot attach queuing-based child policy to a non-queuing based class”. Basically, the router is having a problem determining how much bandwidth is available on the physical interface for the policy to use since we are attempting to apply it to a sub-interface (virtual). However, if the policy were applied directly to a physical interface (not a sub-interface), would we still have this problem? With that in mind, let’s attempt to attach the policy to the main physical interface.

R1

[Image: WAN-QoS-08]

Once again, we get the same issue. Since the “HQ-TO-SB” policy is assigned to the “PARENT” policy, there is no way for the router to determine if there is enough bandwidth available to support the configured policy-map (since the “PARENT” policy does not define the interface bandwidth). To test the theory, apply the “HQ-TO-SB” policy to the main physical interface on R1 and observe the results.

R1

[Image: WAN-QoS-09]

Success! The policy has been applied to the Serial 0/1/0 interface. This is because the router is now able to determine the maximum amount of bandwidth available to the policy. However, there is still a problem. The policy intended only to serve the link between HQ and SB is now applied to the main interface, meaning that it is now active for the link to both SB and SC! We need to find a way around this problem to properly assign the policy. There are actually two ways in which it can be done. The first is to continue assigning the policy directly to the physical interface using a nested policy approach in combination with the Frame Relay DLCI. The second is to assign the policy directly to the Frame Relay DLCI by way of a Frame Relay map-class.

Let’s examine the first possibility. On the R1 router, we have to somehow differentiate between the traffic destined for R2 and the traffic destined for R3. The easiest way to do this is to use the DLCI already assigned to the sub-interface. On R1, DLCI 102 and 103 connect to R2 and R3, respectively. We must create a new class-map for each to define this on the router.

R1

[Image: WAN-QoS-10]
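The screenshot isn't reproduced here; in MQC, matching on a DLCI would look roughly like this:

class-map match-all DLCI-102
 match fr-dlci 102
class-map match-all DLCI-103
 match fr-dlci 103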

Once the class-map for each DLCI is configured, we must now create a new "parent" policy for HQ to apply to the physical interface. Within the new policy, the original "HQ-TO-SB" policy should be applied under the new "DLCI-102" class.

R1

[Image: WAN-QoS-11]
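A sketch of the nested policy as first attempted (before the fix that follows):

policy-map SB-PARENT
 class DLCI-102
  service-policy HQ-TO-SB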

Now, let’s try to assign the “SB-PARENT” policy-map to the main physical interface.

R1

[Image: WAN-QoS-12]

Once again, we are faced with the same issue. Unfortunately, no clues are given as to what the problem is. However, think back to the problem of bandwidth accounting. Since the policy-map is nested, the “HQ-TO-SB” LLQ policy doesn’t have any way to determine the bandwidth available, so the policy application fails. In order to “force” the router to recognize the proper bandwidth value for the PVC to SB, we must use shaping under the “SB-PARENT” policy with the shape average command. This will allow the LLQ policy to reference the bandwidth amount provided by the shaping mechanism as the maximum bandwidth value.

R1

[Image: WAN-QoS-13]
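With the shaper added, the parent policy and its interface application would look roughly like so (768 kbps matching the PVC speed given earlier):

policy-map SB-PARENT
 class DLCI-102
  shape average 768000
  service-policy HQ-TO-SB

interface Serial0/1/0
 service-policy output SB-PARENT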

Finally, we have a successful policy application! The show policy-map interface command can now be issued to show the statistics of the policy on the applied interface.

R1

[Image: WAN-QoS-14]

Remember, there is still another way to accomplish the assignment of the “HQ-TO-SB” policy. We can create a Frame Relay map-class and attach a shaping-based PARENT policy containing the original LLQ policy. From there, the map-class can be assigned directly to the DLCI under the sub-interface.

R1

[Image: WAN-QoS-15]
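A sketch of this second approach, with assumed policy, map-class, and sub-interface names (the shaping parent here matches only class-default, since the map-class is already tied to DLCI 102):

policy-map SHAPE-SB
 class class-default
  shape average 768000
  service-policy HQ-TO-SB

map-class frame-relay MC-SB
 service-policy output SHAPE-SB

interface Serial0/1/0.102 point-to-point
 frame-relay interface-dlci 102
  class MC-SB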

Use any method of policy application that makes the most sense to you, but remember to always pay attention to the requirements of the question. The task requirements may disallow a specific type of application.

Create and assign the remainder of the policies to R1, R2, and R3 to complete the requirements. Remember, on both R2 and R3, we can apply the LLQ policy directly to the physical interface since there are no other sub-interfaces defined to other locations.

R1 (SC Policy Application)

[Image: WAN-QoS-16]

R2 (HQ Policy Application)

[Image: WAN-QoS-17]

R3 (HQ Policy Application)

[Image: WAN-QoS-18]

I hope this has been helpful to all that are studying for the CCIE Collaboration lab exam. Please keep your eye out for many updates to come for both our workbooks and videos. Also, if you're ever feeling like you need an extra push to get ready for the lab, are hitting roadblocks in your preparation, or just need some direction on how to tackle the CCIE Collaboration Lab, give us a call and speak with an iPexpert Training Adviser about attending one of my bootcamps.

Thanks again for reading and good luck in your preparation!

Download the final configuration text here.

Wireless Configuration Method Speed Test Shootout :: Part 3


This is the third and final article in a series focusing on seeing which configuration methods are fastest or slowest in the CCIE wireless lab.  The idea is to test each method under a variety of likely configuration scenarios that you would experience in the real lab and see how things stack up.

Check out the supporting Speed Test video playlist on our YouTube channel.

This article focuses on autonomous APs.  I set up 3 different scenarios, as listed below:

  • Configuring WDS using local RADIUS and registering 2 APs
  • Configuring two SSIDs with their associated VLANs
  • Configuring a few settings under the radios

If you want to watch the actual configurations, you can check out the companion video to this article over in our YouTube channel.  It shows how I arrived at the configuration speeds and the methods that I used.  You may be able to pick up a few tips or tricks for faster configurations by watching how I do things.

WDS

For this test, I had to configure local RADIUS with a network device and user account, then configure AAP1 as the WDS with the associated authentication methods. Finally, I registered both AAP1 and AAP2 to the WDS service.
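For flavor, here is roughly the kind of CLI this scenario involves on the WDS AP (the addresses, names, and keys below are illustrative, not the exact lab values):

aaa new-model
radius-server local
 nas 10.10.10.1 key cisco
 user wds-user password cisco

radius-server host 10.10.10.1 auth-port 1812 acct-port 1813 key cisco
aaa group server radius WDS-RAD
 server 10.10.10.1 auth-port 1812 acct-port 1813
aaa authentication login WDS-AUTH group WDS-RAD

wlccp authentication-server infrastructure WDS-AUTH
wlccp wds priority 255 interface BVI1

Registering an AP to the WDS is then a one-liner on both AAP1 and AAP2:

wlccp ap username wds-user password cisco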

CLI= 2:45
GUI= 6:06

The CLI was more than twice as fast.  I did stumble a few times trying to type the commands, which is normal for me when trying for speed.  Also, sometimes GUI load times are faster/slower.  But this is a good representation to show the drastic speed difference between our two methods.

SSIDs

I started with an AP with no VLANs configured and pretended the RADIUS server configurations were already taken care of.  So I had to define 3 VLANs (1 management and 2 for the SSIDs), and then define the SSIDs and encryption.  The SSIDs were enabled on both radios.
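Roughly the shape of the CLI for one of the two SSIDs (the VLAN number, SSID name, and key are illustrative; the same pattern repeats for the second SSID and on the second radio):

dot11 ssid SSID-A
 vlan 21
 authentication open
 authentication key-management wpa version 2
 wpa-psk ascii cisco123
 guest-mode

interface Dot11Radio0
 encryption vlan 21 mode ciphers aes-ccm
 ssid SSID-A

interface Dot11Radio0.21
 encapsulation dot1Q 21
 bridge-group 21

interface GigabitEthernet0.21
 encapsulation dot1Q 21
 bridge-group 21

On older APs the wired uplink is FastEthernet0 rather than GigabitEthernet0, and a separate sub-interface pair handles the management VLAN.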

CLI= 2:19
GUI= 4:41

Here the CLI was again twice as fast as the GUI. Even though the VLAN configuration alone amounted to 27 CLI commands, the slow GUI response couldn't keep up.

Radio Configurations

This test involved configuring a static channel, transmit and client power levels, and the removal of some of the lower data rates from each radio.
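Again for flavor, this amounts to just a handful of lines per radio (channel, power, and rate values here are illustrative and depend on the radio and regulatory domain):

interface Dot11Radio0
 channel 2437
 power local 11
 power client 11
 speed basic-12.0 18.0 24.0 36.0 48.0 54.0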

CLI= 0:29
GUI= 2:19

Thanks to the small number of CLI lines of config, this ended up being over 4 times faster than the GUI.

Aggregate Results

Let’s add up these times and see what the speed difference is assuming this was approximately an entire lab’s worth of configurations.

CLI=  5:33
GUI= 13:06

CLI beat out the GUI by 7:33, making it about 2.36 times faster. That's a pretty nice time savings. If you look back to our WLC configuration aggregate time comparison in the last article, the difference between the best and worst methods is significantly larger here. And that's for a section that is less than half as long as the unified section!


Thoughts on the autonomous results

There really is no doubt how much faster the CLI is on the autonomous APs, and it’s probably not a surprise to anyone.  Now we often have less choice of our configuration method on autonomous APs, since it may take work before the GUI is even accessible.  So that forces us to learn a good chunk of the CLI configurations.  But you can see that there is a worthwhile return on fleshing out your CLI knowledge.

At the end of the lab, 7 minutes probably won’t be the difference between passing and failing.  So while the speed gains are nice, it probably won’t be the end of you if you can’t use the CLI for every last thing on autonomous APs.


Closing thoughts on this blog series

My goal for this series was not necessarily to show you the fastest method to configure everything.  Rather, I wanted to shine a light on how much (or how little) difference there was between the available methods.  I was hoping to validate something that I have been teaching mainly based on unverified personal observations in the past, which was the idea that choosing the absolute best configuration method is not the most important thing when it comes to having enough time to finish the lab in 8 hours.  I think I validated that claim pretty successfully.

Many students that I talk to who pass the lab often have a common experience. They complete all of the configurations with at least 1-2 hours left (or sometimes more). They are then able to verify everything again to check for mistakes and to make sure nothing broke. Looking at the time differences in my testing, there is maybe a 10-15 minute differential between the best and worst methods. Even if these people had been using the slowest methods of configuration (and most of them weren't using the most efficient ones), they would still have had plenty of time at the end of the lab for their verifications. This is assuming that you are good at your chosen method and know how to configure things well in general.

So as you attempt to choose your own configuration method, it’s worth paying attention to these results.  Try to use the more efficient methods where it makes sense and avoid the slow methods if possible.  But know that at the end of the lab, your chosen method of configurations will ultimately be one of the smaller variables in your overall speed of completion.


New Product Release :: CCIE Data Center – Written Exam Video on Demand


We are happy to announce that we've recently completed a brand new CCIE Data Center Written Exam Video on Demand. In this coursework, you'll immerse yourself in each technology presented by your instructor, Jason Lunde, CCIE #29431 x2 (R&S and Data Center). Jason dissects each technology in a manner that will leave you with a complete understanding. Included in the coursework is close to 18 hours of lectures, whiteboards, and configuration topics!

Check out a sample of our course here.


Check out this Video on Demand course here.

Below, you will find the complete outline of our latest Video on Demand course! We’re quite confident that you won’t find a more thorough, up-to-date product on the market!

Outline

  • Course Introduction
  • CCIE DC Equipment Overview
  • NX-OS Architecture
  • NX-OS Redundancy and File MGMT
  • VDCs
  • Fabric Extension
  • NX-OS Layer 2
    • VLANs/PVLANs
    • Spanning-tree
    • Port-channels
  • Virtual Port-Channels (vPC)
  • NX-OS Basic Layer 3
    • EIGRP
    • OSPF
    • BGP
  • CCIE DC Jumbo Frames
  • FabricPath
  • VRF (virtual routing and forwarding instances)
  • NX-OS Multicast
  • NX-OS Security
    • Local Accounts
    • RBAC
    • AAA
    • SSH
    • CoPP
    • Rate-limiting
    • ACLs
    • Port-security
    • DHCP Snooping
    • DAI
    • IP Source Guard
  • First Hop Redundancy
    • HSRP
    • VRRP
    • GLBP
  • OTV
  • NX-OS Services
    • ISSU
    • Smart Call Home
    • SNMP
    • SPAN
    • EEM
    • Netflow
  • Unified Ports
  • Fibre Channel
    • Theory
    • Addressing
    • Principal Switches
    • Flow Control and B2B Credits
    • VSANs and Trunking
    • Port-Channels
    • Zoning
    • NPV/NPIV
    • CFS
  • iSCSI
  • FCIP
  • FCoE
    • DCB and DCBX
  • UCS – The equipment
  • UCS MGMT
  • UCS Local Area Networking
  • UCS Storage Area Networking
  • UCS Pools
  • UCS Policies
  • UCS Service Profiles
  • C-Series Integrations w/UCS
  • Nexus 1000v
  • ACE

FCIP – The Beginning


FCIP is notably a part of the CCIE Data Center lab exam blueprint. It is also a sticking point for a lot of candidates who have not done a whole lot on the storage networking side. Luckily FCIP has many correlations to the modern-day Ethernet networking that we all know and love, as it’s really just another tunneling technology! After some thought, I have decided to break this down into 2 blog posts. This one will cover FCIP basics, and another that will cover some more advanced FCIP options that you might have to use during the CCIE lab examination.

FCIP is used for extending a Fibre Channel (FC) network over an IP backbone. It encapsulates FC in IP so that SCSI and non-SCSI FC frames can be sent over an IP network. Normally most organizations are not going to do this simply for the sake of extending their FC network (why extend a lossless network over a lossy medium?), but rather for backup or replication jobs that need to occur between storage systems that are across some geographical distance. A typical deployment scenario is shown below:

[Image: 20141229_01]

Here we have two SANs separated by an IP network. Now, the MDSs currently in scope for the CCIE DC lab exam are MDS 9222is, which have 4 Gigabit interfaces native to the system. They also have the needed SAN_EXTN_OVER_IP license shipped natively with the system. The MDS 9222i can support up to 3 FCIP tunnels per gigabit interface, giving us a maximum of 12 FCIP tunnels available by default. So we can easily take one of these Gigabit interfaces, and create an FCIP tunnel across the IP network using its addressing as our tunnel source.

The configuration of an FCIP tunnel is actually really simple. There are a couple of 'housekeeping' items, however, that we must take care of prior to beginning the configuration: 1 – we must put an IP address on our Gig interfaces, and 2 – we must make sure we have IP reachability to the other side (where we wish to terminate the tunnel). This may seem like a no-brainer, but it's often the simple things that trip us up in our exams! It must be noted that the MDSs do not support dynamic routing either, so if we have to extend past an L3 boundary, we must put in static routes. Let's assign some IP addressing to our sample topology:

[Image: 20141229_02]

MDS1
MDS1(config)# int gig 1/1
MDS1(config-if)# ip address 192.168.10.1 255.255.255.0
MDS1(config-if)# no shut
MDS1(config-if)# show int gig 1/1
GigabitEthernet1/1 is up
Hardware is GigabitEthernet, address is 000d.bd85.4a88
Internet address is 192.168.10.1/24
MTU 1500 bytes
Port mode is IPS
Speed is 1 Gbps
Beacon is turned off
Auto-Negotiation is turned on
5 minutes input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
5 minutes output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
0 packets input, 0 bytes
0 multicast frames, 0 compressed
0 input errors, 0 frame, 0 overrun 0 fifo
1 packets output, 42 bytes, 0 underruns
0 output errors, 0 collisions, 0 fifo
0 carrier errors

MDS2
MDS2(config-if)# int gig 1/1
MDS2(config-if)# ip address 192.168.10.2 255.255.255.0
MDS2(config-if)# no shut
MDS2(config-if)# show int gig 1/1
GigabitEthernet1/1 is up
Hardware is GigabitEthernet, address is 0017.5ab5.2f58
Internet address is 192.168.10.2/24
MTU 1500 bytes
Port mode is IPS
Speed is 1 Gbps
Beacon is turned off
Auto-Negotiation is turned on
5 minutes input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
5 minutes output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
0 packets input, 0 bytes
0 multicast frames, 0 compressed
0 input errors, 0 frame, 0 overrun 0 fifo
1 packets output, 42 bytes, 0 underruns
0 output errors, 0 collisions, 0 fifo
0 carrier errors

Now that we have placed our addressing, and verified that the interfaces are up, we want to establish that we have basic IP reachability. The best method to verify this is via a PING!

MDS1(config-if)# ping 192.168.10.2
PING 192.168.10.2 (192.168.10.2) 56(84) bytes of data.
64 bytes from 192.168.10.2: icmp_seq=2 ttl=255 time=0.397 ms
64 bytes from 192.168.10.2: icmp_seq=3 ttl=255 time=0.390 ms
64 bytes from 192.168.10.2: icmp_seq=4 ttl=255 time=0.380 ms
64 bytes from 192.168.10.2: icmp_seq=5 ttl=255 time=0.385 ms

--- 192.168.10.2 ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.380/0.388/0.397/0.006 ms

We are good to go! Again, had we needed to pass across an L3 boundary, we would have needed a static route (a sketch of one follows the list below). The 'basic' FCIP configuration can be broken down into 2 very easy components:

1 – The FCIP Profile configuration
This consists of declaring our tunnel source.
2 – The FCIP interface configuration
We will reference our FCIP profile (for our tunnel source), and we will declare a tunnel destination.
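On that static-route caveat: since the MDS cannot run an IGP, crossing an L3 boundary would need something along these lines (addresses are hypothetical; our flat demo topology doesn't require one):

MDS1(config)# ip route 192.168.20.0 255.255.255.0 192.168.10.254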

The two-step configuration really is that simple! An FCIP tunnel is ALWAYS an E port, or rather a virtual E (vE) port. So it will extend the fabric in exactly the same manner as a traditional E port, even trunking VSANs if you wish it to do so. Zoning changes, principal switch elections, etc. will all be extended across this IP boundary between the storage area networks.

So, in our current demo topology I have set up the JBODs in VSAN 10. Each MDS has a locally attached JBOD and some local FLOGI entries. There is no other connection between the MDSs, so the only entries in the FCNS database are the local entries also present in the FLOGI DB:

MDS1(config-if)# show flogi database
-----------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
-----------------------------------------------------------------------
fc1/5 10 0x0b00e1 21:00:00:11:c6:a6:3c:72 20:00:00:11:c6:a6:3c:72
fc1/5 10 0x0b00ef 21:00:00:14:c3:a0:68:ed 20:00:00:14:c3:a0:68:ed

Total number of flogi = 2.

MDS1(config-if)# show fcns database

VSAN 10:
--------------------------------------------------------------------
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
--------------------------------------------------------------------
0x0b00e1 NL 21:00:00:11:c6:a6:3c:72 scsi-fcp:target
0x0b00ef NL 21:00:00:14:c3:a0:68:ed scsi-fcp:target

Total number of entries = 2

MDS2(config-if)# show flogi database
-----------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
-----------------------------------------------------------------------
fc1/5 10 0x1600da 22:00:00:11:c6:a6:25:78 20:00:00:11:c6:a6:25:78
fc1/5 10 0x1600e2 22:00:00:14:c3:a0:68:ee 20:00:00:14:c3:a0:68:ee

Total number of flogi = 2.

MDS2(config-if)# show fcns database

VSAN 10:
--------------------------------------------------------------------
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
--------------------------------------------------------------------
0x1600da NL 22:00:00:11:c6:a6:25:78 scsi-fcp:target
0x1600e2 NL 22:00:00:14:c3:a0:68:ee scsi-fcp:target

Total number of entries = 2

So when we get the FCIP tunnel up and running, one of our first verification steps will be to check the FCNS database, ensure that the fabric has merged, and confirm that we see 4 entries (2 per side).

Our first configuration step is to enable the feature and create our FCIP profiles. Again, this is fairly straightforward. We simply want to create the profile, with any number (it's locally significant), and bind it to our LOCAL IP address. Remember, this is our tunnel source.

MDS1
MDS1(config-if)# feature fcip
MDS1(config)# fcip profile 1
MDS1(config-profile)# ip address 192.168.10.1

MDS2
MDS2(config-if)# feature fcip
MDS2(config)# fcip profile 1
MDS2(config-profile)# ip address 192.168.10.2

Now we want to create our FCIP interfaces. These are logical interfaces, and again can be any number as they are only locally significant. We will reference our FCIP profile, which we want to use as our tunnel source, and we will declare a tunnel destination.

MDS1
MDS1(config-profile)# int fcip1
MDS1(config-if)# use-profile 1
MDS1(config-if)# peer-info ipad 192.168.10.2
MDS1(config-if)# no shut

MDS2
MDS2(config-profile)# int fcip1
MDS2(config-if)# use-profile 1
MDS2(config-if)# peer-info ipad 192.168.10.1
MDS2(config-if)# no shut

Believe it or not, that’s all we really need in order to get FCIP up and going! Let’s verify that our FCIP interfaces came up.

MDS2(config-if)# show int fcip1
fcip1 is trunking
Hardware is GigabitEthernet
Port WWN is 20:10:00:0d:ec:34:67:40
Peer port WWN is 20:10:00:0d:ec:10:52:40
Admin port mode is auto, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 1 Gbps
Trunk vsans (admin allowed and active) (1,10,301-303)
Trunk vsans (up) (1,10,301-303)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
Interface last changed at Fri Dec 26 15:53:16 2014

This is awesome! Our interface is up and trunking, and our VSAN is even in the UP state! Let’s see if the fabrics have merged, and if we see the FCNS database populated properly.

MDS1(config-if)# show fcns database

VSAN 10:
--------------------------------------------------------------------
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
--------------------------------------------------------------------
0x0b00e1 NL 21:00:00:11:c6:a6:3c:72 scsi-fcp:target
0x0b00ef NL 21:00:00:14:c3:a0:68:ed scsi-fcp:target
0x1600da NL 22:00:00:11:c6:a6:25:78 scsi-fcp:target
0x1600e2 NL 22:00:00:14:c3:a0:68:ee scsi-fcp:target

Total number of entries = 4

MDS2(config-if)# show fcns database

VSAN 10:
--------------------------------------------------------------------
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
--------------------------------------------------------------------
0x0b00e1 NL 21:00:00:11:c6:a6:3c:72 scsi-fcp:target
0x0b00ef NL 21:00:00:14:c3:a0:68:ed scsi-fcp:target
0x1600da NL 22:00:00:11:c6:a6:25:78 scsi-fcp:target
0x1600e2 NL 22:00:00:14:c3:a0:68:ee scsi-fcp:target

Total number of entries = 4

That, my friends, is winning! We see now that we have all 4 entries in our FCNS database, meaning that our fabrics have, in fact, merged properly. There are a couple more things that we can verify here however. Let’s take a look at the full output of the command ‘show interface fcip1.’

MDS2(config-if)# show int fcip1
fcip1 is trunking
Hardware is GigabitEthernet
Port WWN is 20:10:00:0d:ec:34:67:40
Peer port WWN is 20:10:00:0d:ec:10:52:40
Admin port mode is auto, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 1 Gbps
Trunk vsans (admin allowed and active) (1,10,301-303)
Trunk vsans (up) (1,10,301-303)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
Interface last changed at Fri Dec 26 15:53:16 2014

Using Profile id 1 (interface GigabitEthernet1/1)
Peer Information
Peer Internet address is 192.168.10.1 and port is 3225

Write acceleration mode is configured off
Tape acceleration mode is configured off
Tape Accelerator flow control buffer size is automatic
FICON XRC Accelerator is configured off
Ficon Tape acceleration configured off for all vsans
IP Compression is disabled
Maximum number of TCP connections is 2
QOS control code point is 0
QOS data code point is 0
TCP Connection Information
2 Active TCP connections
Control connection: Local 192.168.10.2:3225, Remote 192.168.10.1:65508
Data connection: Local 192.168.10.2:3225, Remote 192.168.10.1:65510

18 Attempts for active connections, 6 close of connections
TCP Parameters
Path MTU 1500 bytes
Current retransmission timeout is 200 ms
Round trip time: Smoothed 2 ms, Variance: 3 Jitter: 150 us
Advertized window: Current: 33 KB, Maximum: 25 KB, Scale: 5
Peer receive window: Current: 29 KB, Maximum: 29 KB, Scale: 5
Congestion window: Current: 14 KB, Slow start threshold: 112 KB
Current Send Buffer Size: 25 KB, Requested Send Buffer Size: 0 KB
CWM Burst Size: 50 KB
Measured RTT : 0 us Min RTT: 0 us Max RTT: 0 us
5 minutes input rate 2680 bits/sec, 335 bytes/sec, 2 frames/sec
5 minutes output rate 2632 bits/sec, 329 bytes/sec, 2 frames/sec
808 frames input, 100688 bytes
792 Class F frames input, 98872 bytes
16 Class 2/3 frames input, 1816 bytes
0 Reass frames
0 Error frames timestamp error 0
814 frames output, 99900 bytes
798 Class F frames output, 98084 bytes
16 Class 2/3 frames output, 1816 bytes
0 Error frames

This is one of the best troubleshooting and verification commands available for FCIP! We can immediately note a couple of things here. The default TCP port for FCIP is 3225, and we have 2 TCP streams by default. One of these streams is for control traffic, and the other for data traffic. We will see in the next post that we can assign certain DSCP values to these streams, so that QoS policies can give them certain treatment throughout the network. We can also tell here that MDS1 initiated the connection to MDS2, as MDS2's local port is 3225, while MDS1's port is a high-numbered ephemeral port. This is another item that we will learn to control and manipulate in the next blog post!

If you have any questions or comments please feel free to leave them here on the blog, or shoot me an email directly at jlunde@ipexpert.com. I look forward to hearing from you, and please check back soon for the second post in this series where we will tweak some nerd knobs with regards to FCIP!

iPexpert’s Newest “CCIE Wall of Fame” Additions 2/27/2015


Please join us in congratulating the following iPexpert clients who have passed their CCIE lab!

This Week’s CCIE Success Stories

  • Haroon Raees, CCIE #46529 (Collaboration)
  • Evariste Happi, CCIE #46452 (Collaboration)
  • Daniel Flieth, CCIE #46067 (Collaboration)
  • Majid, CCIE #45866 (Collaboration)
  • Rob Lacrosse, CCIE #45283 (Collaboration)
  • Devan Lim, CCIE #45991 (Collaboration)
  • Clay Ostlund, CCIE #45770 (Collaboration)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

Time To Get More Advanced :: FCIP Pt. 2!


Part 1 of this blog series created a topology, much like you see below, where we configured a single vE (virtual expansion) port from MDS1 to MDS2 across an IP network.  We merged VSAN 10 across this FCIP tunnel and verified it by looking into the FCNS database and ensuring that we saw entries from both sides.  Today we are going to build upon this topology, and get into some more advanced features like changing the default TCP port, setting DSCP values for the two TCP streams, and controlling who initiates the tunnel!

[Image: FCIPpt2g1]

So first things first… the default port for FCIP is TCP port 3225. We will terminate both of our TCP streams on this port (we have 1 stream for control and another for data traffic). Essentially, one of the MDSs will initiate the connection to the other, and the destination port will be TCP/3225. The source port will be some high-numbered ephemeral port by default (usually over 65000). We can look at the output of a 'show int fcip #' to find out who initiated, and on which ports!

MDS1-6(config-if)# show int fcip1
fcip1 is trunking
Hardware is GigabitEthernet
Port WWN is 20:10:00:0d:ec:1f:a4:00
Peer port WWN is 20:10:00:0d:ec:4c:d3:40
Admin port mode is auto, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 1 Gbps
Trunk vsans (admin allowed and active) (1,10)
Trunk vsans (up) (1,10)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
Interface last changed at Tue Feb 10 18:48:58 2015

Using Profile id 1 (interface GigabitEthernet1/1)
Peer Information
Peer Internet address is 192.168.10.2 and port is 3225
Write acceleration mode is configured off
Tape acceleration mode is configured off
Tape Accelerator flow control buffer size is automatic
FICON XRC Accelerator is configured off
Ficon Tape acceleration configured off for all vsans
IP Compression is disabled
Maximum number of TCP connections is 2
QOS control code point is 0
QOS data code point is 0
TCP Connection Information
2 Active TCP connections
Control connection: Local 192.168.10.1:3225, Remote 192.168.10.2:65524
Data connection: Local 192.168.10.1:3225, Remote 192.168.10.2:65526

12 Attempts for active connections, 0 close of connections
TCP Parameters
Path MTU 1500 bytes
Current retransmission timeout is 200 ms
Round trip time: Smoothed 2 ms, Variance: 3 Jitter: 150 us
Advertised window: Current: 34 KB, Maximum: 25 KB, Scale: 5
Peer receive window: Current: 29 KB, Maximum: 29 KB, Scale: 5
Congestion window: Current: 14 KB, Slow start threshold: 112 KB
Current Send Buffer Size: 25 KB, Requested Send Buffer Size: 0 KB
CWM Burst Size: 50 KB
Measured RTT : 0 us Min RTT: 0 us Max RTT: 0 us
5 minutes input rate 1488 bits/sec, 186 bytes/sec, 1 frames/sec
5 minutes output rate 1528 bits/sec, 191 bytes/sec, 1 frames/sec

If we look here, we can see that this switch, MDS1, represented by ‘Local’ in the above output, is terminating the stream on port 3225. The other switch, MDS2, represented by the keyword ‘Remote’ in the above output, is using high-numbered ephemeral ports above 65000. This indicates that MDS2 initiated the connection to MDS 1! What if we wanted to change this around? Imagine a scenario where a firewall was placed in-between these two devices, and we now needed to initiate the connection from MDS1 towards MDS2, on TCP port 3229. This scenario would look something like what you see here:
[Image: FCIPpt2g2]

So with that in mind, how could we change this behavior?  Fortunately, most of this work is going to be done on MDS2, where a couple of things have to take place.  First, we must tell MDS2 that FCIP profile 1 is no longer going to terminate, aka 'expect,' connections on TCP port 3225.  Second, we must tell the FCIP interface that he is NOT, under any circumstances, to initiate an FCIP tunnel to MDS1!  The configuration on MDS2 will look something like so:

MDS2(config)# fcip profile 1
MDS2(config-profile)# port 3229
MDS2(config-profile)# interface fcip1
MDS2(config-if)# passive-mode

Now if you watch and wait for your tunnel to come up, you will be waiting a long time! We still have some work to do on MDS1 in order for all of this to function. Up to this point, we have told MDS2 to accept connections on the non-default TCP port 3229. We have also told him NOT to initiate the connection. So currently, MDS1 is attempting to initiate the tunnel, just on the wrong port! We now need to point MDS1 toward MDS2 on the correct port in order to complete the configuration:

MDS1(config)# int fcip1
MDS1(config-if)# peer-info ipaddr 192.168.10.2 port 3229
MDS1(config-if)#

Now we have told MDS1 where to go, and on what port, so let's verify that all is now as we intended!

MDS2(config-if)# show int fcip1
fcip1 is trunking
Hardware is GigabitEthernet
Port WWN is 20:10:00:0d:ec:4c:d3:40
Peer port WWN is 20:10:00:0d:ec:1f:a4:00
Admin port mode is auto, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 1 Gbps
Trunk vsans (admin allowed and active) (1,10)
Trunk vsans (up) (1,10)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
Interface last changed at Tue Feb 10 19:42:02 2015

Using Profile id 1 (interface GigabitEthernet1/1)
Peer Information
Peer Internet address is 192.168.10.1 and port is 3225
Write acceleration mode is configured off
Tape acceleration mode is configured off
Tape Accelerator flow control buffer size is automatic
FICON XRC Accelerator is configured off
Ficon Tape acceleration configured off for all vsans
IP Compression is disabled
Passive mode is enabled
Maximum number of TCP connections is 2
QOS control code point is 0
QOS data code point is 0
TCP Connection Information
2 Active TCP connections
Control connection: Local 192.168.10.2:3229, Remote 192.168.10.1:65500
Data connection: Local 192.168.10.2:3229, Remote 192.168.10.1:65502

8 Attempts for active connections, 5 close of connections

So as we can see, the tunnel is up; the VSANs have already even fully converged! We can see that on MDS2 passive mode is enabled, and he is now terminating the connection on TCP port 3229. So per our instructions, we have easily completed that task!

Next, we might want to give our FCIP TCP streams some preferential treatment within this IP network. By default, they would end up in class-default, as the packets are not marked. Since we do not extend B2B credits over these links, it's impossible to guarantee the truly 'lossless' fabric that FC so deeply enjoys. So we may opt to mark our FCIP packets with a certain DSCP value, and then give them some special treatment throughout the IP network. The first real step in doing this is marking the traffic at the egress points, which, in this case, are the MDS switches' Gigabit interfaces. This is a VERY easy fix, as all we have to do is set the values under the FCIP interface(s):

[Image: FCIPpt2g3]
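The screenshot isn't reproduced here, but the configuration amounts to a single line under each FCIP interface (the 29/30 values match the verification output below):

MDS1(config)# int fcip1
MDS1(config-if)# qos control 29 data 30

MDS2(config)# int fcip1
MDS2(config-if)# qos control 29 data 30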

Again, we can verify that the parameters took by looking into the output of a ‘show interface fcip#’:

MDS2(config-if)# show int fcip1
fcip1 is trunking
Hardware is GigabitEthernet
Port WWN is 20:10:00:0d:ec:4c:d3:40
Peer port WWN is 20:10:00:0d:ec:1f:a4:00
Admin port mode is auto, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 1 Gbps
Trunk vsans (admin allowed and active) (1,10)
Trunk vsans (up) (1,10)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
Interface last changed at Tue Feb 10 19:56:40 2015

Using Profile id 1 (interface GigabitEthernet1/1)
Peer Information
Peer Internet address is 192.168.10.1 and port is 3225
Write acceleration mode is configured off
Tape acceleration mode is configured off
Tape Accelerator flow control buffer size is automatic
FICON XRC Accelerator is configured off
Ficon Tape acceleration configured off for all vsans
IP Compression is disabled
Passive mode is enabled
Maximum number of TCP connections is 2
QOS control code point is 29
QOS data code point is 30

Just to verify that our marking is working, I set up a quick set of class-maps and a policy-map on the 5K that terminates the Gigabit interface of MDS1. Just for reference, the policy looks a little something like so:

//First we set some qos class/policy-maps that will match our DSCP values, and place them into a qos-group of our choosing. I apply this qos policy-map to my physical interface that the MDS is hanging off of.

class-map type qos match-any 29
  match dscp 29
class-map type qos match-any 30
  match dscp 30
policy-map type qos VERIFY
  class 29
    set qos-group 1
  class 30
    set qos-group 2

//Because the 5K has a cosmetic bug where ‘show policy-map interface’ doesn’t work very well, I write a network-qos set of policies that we will be able to verify with the ‘show queueing’ command.

class-map type network-qos qos1
  match qos-group 1
class-map type network-qos qos2
  match qos-group 2

policy-map type network-qos default-nq-VERIFY
  class type network-qos qos1
  class type network-qos qos2

system qos
  service-policy type network-qos default-nq-VERIFY

Now we should be able to verify that packets marked with these DSCP values are actually traversing interface Ethernet 1/11, which the MDS physically hangs off of:

SW3-6(config-if)# show queuing interface ethernet 1/11
Ethernet1/11 queuing information:
TX Queuing
qos-group sched-type oper-bandwidth
0 WRR 100
1 WRR 0
2 WRR 0

RX Queuing
qos-group 0
q-size: 364800, HW MTU: 1500 (1500 configured)
drop-type: drop, xon: 0, xoff: 364800
Statistics:
Pkts received over the port : 19
Ucast pkts sent to the cross-bar : 17
Mcast pkts sent to the cross-bar : 2
Ucast pkts received from the cross-bar : 1407
Pkts sent to the port : 1407
Pkts discarded on ingress : 0
Per-priority-pause status : Rx (Inactive), Tx (Inactive)

qos-group 1
q-size: 22720, HW MTU: 1500 (1500 configured)
drop-type: drop, xon: 0, xoff: 22720
Statistics:
Pkts received over the port : 1349
Ucast pkts sent to the cross-bar : 1349
Mcast pkts sent to the cross-bar : 0
Ucast pkts received from the cross-bar : 0
Pkts sent to the port : 0
Pkts discarded on ingress : 0
Per-priority-pause status : Rx (Inactive), Tx (Inactive)

qos-group 2
q-size: 22720, HW MTU: 1500 (1500 configured)
drop-type: drop, xon: 0, xoff: 22720
Statistics:
Pkts received over the port : 37
Ucast pkts sent to the cross-bar : 37
Mcast pkts sent to the cross-bar : 0
Ucast pkts received from the cross-bar : 0
Pkts sent to the port : 0
Pkts discarded on ingress : 0
Per-priority-pause status : Rx (Inactive), Tx (Inactive)

Total Multicast crossbar statistics:
Mcast pkts received from the cross-bar : 0

So as you can see, we are catching some of the marked traffic coming ingress on Ethernet 1/11! We are seeing much more control traffic at the present time, as we are not sending any data traffic over the FCIP tunnels yet! Of course, marking this traffic alone will not guarantee that it gets special treatment throughout the network. We must ensure that it does, by writing strong QoS policies around it!

In the next, and final FCIP lesson, we will continue on with the advanced features by setting up a redundant FCIP tunnel, and creating some FC port-channels with them! Then we will practice steering traffic across these interfaces, by leveraging FSPF costs!

Interactions between QoS and IPSec on IOS and the ASA

Quality of Service configuration for the traffic entering/leaving a VPN tunnel may require some special considerations. In this article, I am going to focus on interactions between QoS and IPSec on IOS and the ASA.

There are two methods of deploying QoS for VPNs – you can match the original (Clear-text/unencrypted) traffic flows, or the actual VPN (Aggregate) traffic. The second option can be useful when you want to apply a single QoS policy to all packets leaving a tunnel, no matter what the original sources and destinations protected by the VPN are.

We have a VPN tunnel built between R1 and the ASA. R6 and 10.1.1.0/24 are the protected networks.

QosipsecG1

Let’s start on IOS (R1). The VPN tunnel is already up – we will configure a basic QoS Policy to enable LLQ for delay-sensitive traffic such as Voice (I assume these are all packets with a DSCP of EF). Note that this configuration would normally match all EF-colored packets (including non-VPN EF traffic), but since we won’t have any clear-text EF flows in this network, we don’t really care:

class-map match-all VOICE
match dscp ef
policy-map QOS
class VOICE
priority

int f0/0
service-policy output QOS

Voice traffic will now be emulated by our switches CAT2 and CAT3 using Echo packets with a DSCP value of EF (decimal value 46 = ToS decimal value 184 = ToS in hex 0xB8).
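To spell out that conversion: DSCP occupies the six high-order bits of the ToS byte, so the full byte value is the DSCP shifted left by two bits – 46 × 4 = 184 decimal, or 0xB8 in hex – which is exactly the ToS value entered in the extended pings below: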

CAT2#ping
Protocol [ip]:
Target IP address: 192.1.6.6
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]: 0xb8
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.6.6, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/4/9 ms

CAT3#ping
Protocol [ip]:
Target IP address: 192.1.6.6
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]: 0xb8
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.6.6, timeout is 2 seconds:
.!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/7/17 ms

Note that 9 packets went through the tunnel, and they were all matched by our VOICE class and therefore prioritized :

R1#sh cry sess det
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation
X - IKE Extended Authentication, F - IKE Fragmentation

Interface: FastEthernet0/0
Uptime: 00:01:20
Session status: UP-ACTIVE
Peer: 192.1.4.31 port 500 fvrf: (none) ivrf: (none)
Phase1_id: 192.1.4.31
Desc: (none)
IKEv1 SA: local 192.1.14.1/500 remote 192.1.4.31/500 Active
Capabilities:(none) connid:1004 lifetime:23:58:39
IPSEC FLOW: permit ip 10.1.1.0/255.255.255.0 192.1.6.0/255.255.255.0
Active SAs: 2, origin: crypto map
Inbound: #pkts dec'ed 9 drop 0 life (KB/Sec) 4565014/3519
Outbound: #pkts enc'ed 9 drop 1 life (KB/Sec) 4565014/3519

R1#sh policy-map int f0/0
FastEthernet0/0

Service-policy output: QOS

queue stats for all priority classes:

queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 10/1660

Class-map: VOICE (match-all)
9 packets, 1494 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: dscp ef (46)
Priority: Strict, burst bytes 1500, b/w exceed drops: 0

Class-map: class-default (match-any)
101 packets, 9305 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 101/9584

OK, looks like we don’t have any issues here. All EF-marked packets entering the VPN were properly classified and prioritized. What if we change our policy? We will still be matching EF packets, but, let’s say, only those sourced by CAT2 :

access-list 160 permit ip host 10.1.1.120 host 192.1.6.6

class-map match-all VOICE
match access-group 160

R1#sh class-map VOICE
Class Map match-all VOICE (id 1)
Match dscp ef (46)
Match access-group 160

Packets will now be sent first from CAT3 and then from CAT2. Here are the results :

After CAT3 :

R1#sh policy-map int f0/0
FastEthernet0/0

Service-policy output: QOS

queue stats for all priority classes:

queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 9/1494

Class-map: VOICE (match-all)
9 packets, 1494 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: dscp ef (46)
Match: access-group 160
Priority: Strict, burst bytes 1500, b/w exceed drops: 0

Class-map: class-default (match-any)
441 packets, 41417 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

After CAT2 :

R1#sh policy-map int f0/0
FastEthernet0/0

Service-policy output: QOS

queue stats for all priority classes:

queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 9/1494

Class-map: VOICE (match-all)
9 packets, 1494 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: dscp ef (46)
Match: access-group 160
Priority: Strict, burst bytes 1500, b/w exceed drops: 0

Class-map: class-default (match-any)
475 packets, 41417 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

R1#sh cry sess det
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation
X - IKE Extended Authentication, F - IKE Fragmentation

Interface: FastEthernet0/0
Uptime: 00:28:39
Session status: UP-ACTIVE
Peer: 192.1.4.31 port 500 fvrf: (none) ivrf: (none)
Phase1_id: 192.1.4.31
Desc: (none)
IKEv1 SA: local 192.1.14.1/500 remote 192.1.4.31/500 Active
Capabilities:(none) connid:1004 lifetime:23:31:20
IPSEC FLOW: permit ip 10.1.1.0/255.255.255.0 192.1.6.0/255.255.255.0
Active SAs: 2, origin: crypto map
Inbound: #pkts dec'ed 19 drop 0 life (KB/Sec) 4565013/1880
Outbound: #pkts enc'ed 19 drop 1 life (KB/Sec) 4565013/1880

R1#sh access-list 160
Extended IP access list 160
10 permit ip host 10.1.1.120 host 192.1.6.6 (5 matches)

Wow… hold on. We are no longer matching the VOICE class, even for traffic from CAT2 (though we do have a match in the ACL). What’s the problem?

This behavior is perfectly normal on IOS. The problem we run into here is that IOS devices apply IPSec services BEFORE QoS by default. So our QoS engine sees the already-encrypted traffic (from R1 to the ASA), and not the original flow (from 10.1.1.120 to 192.1.6.6). The reason the packets were matched in our first test (without an ACL) is that the ToS field is always copied to the transport header of the IPSec packet (in other words, IPSec always copies the ToS from the original packet). This means that pure ToS-based matching, including DSCP/IP Precedence (which are part of the ToS field), will always work, and this is how you can match the traffic as an Aggregate (another Aggregate option would be to match the VPN tunnel itself – so ESP/AH).
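For completeness, here is a minimal sketch of that second Aggregate option, matching the ESP tunnel itself with an ACL (the ACL number and class name are mine; the peer addresses come from this topology):

access-list 161 permit esp host 192.1.14.1 host 192.1.4.31

class-map match-all VPN_AGGREGATE
 match access-group 161

Such a class could then be referenced in the QOS policy-map just like the VOICE class.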

Is there anything we can do to allow for more granular policies, like what we tried to accomplish in our second test (matching a Clear-text flow)? Yes. This is known as QoS Pre-Classification. Simply enable it in the crypto map to make sure that QoS sees the original traffic flows before encryption/IPSec kicks in :

crypto map MAP10 10 ipsec-isakmp
qos pre-classify

R1#sh crypto map int f0/0
Crypto Map "MAP10" 10 ipsec-isakmp
Peer = 192.1.4.31
Extended IP access list 157
access-list 157 permit ip 10.1.1.0 0.0.0.255 192.1.6.0 0.0.0.255
Current peer: 192.1.4.31
Security association lifetime: 4608000 kilobytes/3600 seconds
Responder-Only (Y/N): N
PFS (Y/N): N
Transform sets={
SET4: { esp-3des esp-md5-hmac } ,

}
QOS pre-classification
Interfaces using crypto map MAP10:
FastEthernet0/0

Another five EF Echos from CAT2 (traffic from CAT3, of course, will not be a match), and things look better :

R1#sh policy-map int f0/0
FastEthernet0/0

Service-policy output: QOS

queue stats for all priority classes:

queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 14/2324

Class-map: VOICE (match-all)
14 packets, 2064 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: dscp ef (46)
Match: access-group 160
Priority: Strict, burst bytes 1500, b/w exceed drops: 0

Class-map: class-default (match-any)
957 packets, 90525 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 957/91962

Now let’s move on to the ASA. Are we going to see any differences from IOS? I can tell you right away – yes. On the ASA there are no special commands needed to match the Original flows, because the ASA can classify traffic both before and after IPSec processing. Here is a quick test to show you how to police packets destined to CAT2 :

access-list 150 permit ip host 192.1.6.6 host 10.1.1.120

class-map R6TOCAT2
match access-list 150

policy-map QOS
class R6TOCAT2
police output 16000

service-policy QOS interface outside

R6#ping 10.1.1.120 size 1000 rep 10

Type escape sequence to abort.
Sending 10, 1000-byte ICMP Echos to 10.1.1.120, timeout is 2 seconds:
.!!!.!!!.!
Success rate is 70 percent (7/10), round-trip min/avg/max = 4/5/8 ms

R6#ping 10.1.1.130 size 1000 rep 10

Type escape sequence to abort.
Sending 10, 1000-byte ICMP Echos to 10.1.1.130, timeout is 2 seconds:
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 4/6/8 ms

A1(config)# show vpn-sessiondb l2l

Session Type: LAN-to-LAN

Connection : 192.1.14.1
Index : 4 IP Addr : 192.1.14.1
Protocol : IKEv1 IPsec
Encryption : 3DES Hashing : MD5
Bytes Tx : 17000 Bytes Rx : 17000
Login Time : 20:07:33 UTC Tue Jan 20 2015
Duration : 0h:03m:14s

A1(config)# show service-policy police

Interface outside:
Service-policy: QOS
Class-map: R6TOCAT2
Output police Interface outside:
cir 16000 bps, bc 1500 bytes
conformed 7 packets, 7448 bytes; actions: transmit
exceeded 2 packets, 2128 bytes; actions: drop
conformed 728 bps, exceed 208 bps

Now let’s change this ACL to match ESP between the ASA and R1 and observe the results :

no access-list 150 ext permit ip host 192.1.6.6 host 10.1.1.120
access-list 150 ext permit esp host 192.1.4.31 host 192.1.14.1

R6#ping 10.1.1.130 size 1000 rep 10

Type escape sequence to abort.
Sending 10, 1000-byte ICMP Echos to 10.1.1.130, timeout is 2 seconds:
!!!.!!!.!!
Success rate is 80 percent (8/10), round-trip min/avg/max = 4/6/12 ms

R6#ping 10.1.1.120 size 1000 rep 10

Type escape sequence to abort.
Sending 10, 1000-byte ICMP Echos to 10.1.1.120, timeout is 2 seconds:
!!!.!!!.!!
Success rate is 80 percent (8/10), round-trip min/avg/max = 4/5/8 ms

So here we are matching the Aggregate for this single VPN – the ASA was also able to catch the traffic after encryption. ToS-based classification would also work – again, IPSec copies the ToS field to the transport header of the protected packet, and the ASA would see it.
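As a quick hedged sketch, such a ToS-based aggregate class on the ASA could look like this (the class name is mine, and the policer rate is simply carried over from the earlier test):

class-map VOICE-AGG
 match dscp ef

policy-map QOS
 class VOICE-AGG
  police output 16000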

That’s not the end. There is another way to match Aggregate traffic on the ASA that is not available on IOS. We can match a Tunnel Group and all traffic leaving it via “match tunnel-group”, or even do this per VPN peer when multiple devices terminate their tunnels on a single ASA Tunnel Group (just add “match flow ip destination-address” to the class-map that matches the Tunnel Group). This second option would be useful for Remote Access VPNs.

Let’s look at that – I am just going to switch from Policing to LLQ, since Policing only works together with “match flow ip destination-address”, and this command is pretty much useless for L2L tunnels because there is always just one peer for a single Tunnel Group. Here are the changes :

priority-queue outside

policy-map QOS
class R6TOCAT2
priority

class-map R6TOCAT2
no match access-list 150
match tunnel-group 192.1.14.1

R6#ping 10.1.1.120 size 1000 rep 10

Type escape sequence to abort.
Sending 10, 1000-byte ICMP Echos to 10.1.1.120, timeout is 2 seconds:
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 4/6/8 ms

R6#ping 10.1.1.130 size 1000 rep 10

Type escape sequence to abort.
Sending 10, 1000-byte ICMP Echos to 10.1.1.130, timeout is 2 seconds:
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 4/6/8 ms

A1# sh service-policy priority

Interface outside:
Service-policy: QOS
Class-map: R6TOCAT2
Priority:
Interface outside: aggregate drop 0, aggregate transmit 20

If you were to configure destination-flow-based matching and a rate limit in the policy for this Tunnel Group, you would see that our 16 kbps limit applies to all clear-text flows together, no matter what the sources and destinations are. Again, the purpose of “match flow ip destination-address” is to apply QoS per VPN peer, not per clear-text destination.
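To sketch what that per-peer combination would look like (the class name is mine; the tunnel group and rate are from our earlier examples):

class-map TG-PER-PEER
 match tunnel-group 192.1.14.1
 match flow ip destination-address

policy-map QOS
 class TG-PER-PEER
  police output 16000

With this in place, the policer is enforced separately for each peer terminating on the Tunnel Group, which is why it really only makes sense for Remote Access scenarios.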

To finish that discussion, let me recap our classification options for IPSec VPNs :
1. IOS

  • Aggregate traffic can be matched based on the ToS field or using an ACL for ESP/AH
  • Clear-text flows can only be matched with QoS Pre-Classification turned on

2. ASA

  • Aggregate traffic can be matched based on the ToS field or using an ACL for ESP/AH
  • Aggregate traffic can be matched for the entire Tunnel Group, or per VPN peer terminating its tunnel on that Tunnel Group (note that some QoS tools are not supported unless you enable this per-peer matching)
  • Clear-text flows can always be classified without any special considerations

Why Joining APs to a Controller Across a NAT Needs Special Configuration

Many wireless engineers know that having a lightweight AP join a controller across a NAT requires some extra configuration. But many don’t understand why that configuration is needed. This article will talk about what NAT is, why it causes a problem for the normal join process, and what the configuration changes do to make things work.

What is NAT and where do we see it in the wireless world?

NAT stands for Network Address Translation, and it does pretty much what the name implies. It translates addresses from their original values to something new. Let’s take a look at a classic wireless example.

Let’s say I have an OfficeExtend AP (OEAP) in my house, and I want it to join the WLC in my company’s DMZ. But I don’t want to actually configure a public IP on my WLC. This is where the NAT comes into play.

Screen Shot 2015-03-13 at 12.12.16 PM

In the image above, the OEAP talks through the firewall in order to talk to the DMZ WLC. In order for the AP to talk to the WLC, it has to target a public IP because it needs to communicate across the Internet. So if the WLC itself doesn’t have a public IP, we configure a public IP on the firewall for the OEAP to target. When the traffic from the OEAP reaches the firewall on the specified IP, the firewall will rewrite the destination IP address of the packet with the private IP address of the WLC and forward the packet towards the WLC. In the reverse direction, as the WLC sends packets to the OEAP, the firewall rewrites the source IP so that it looks like it’s coming from the public IP when it arrives at the OEAP. So without the OEAP being aware of it, it’s actually communicating to a private IP even though it looks like it’s talking to a public IP. This allows the WLC to use a private IP while still talking to APs out on the Internet.

Why does this cause a problem for my OEAP?

We can see that the OEAP and the WLC can talk just fine across this NAT. So what’s the problem? It has to do with how the discovery and join processes work.

The first thing that must happen is that the OEAP must discover the WLC using this public NAT’ed IP. Typically the OEAP would be configured such that the DMZ WLC is its primary WLC, referencing the public IP address. That means that the OEAP sends a discovery request to the public IP. The discovery request crosses the NAT and arrives at the management interface of the WLC. The WLC then sends back a discovery response to the OEAP, which makes it back to the OEAP. So far so good. But now we are going to have a problem.

The next step is for the OEAP to send a join request to the AP Manager interface of the WLC. But what IP address will it send the join request to? Part of the information in the discovery response that the WLC sent the OEAP is the IP address of an AP Manager interface to which the join request should be sent. This is the actual IP configured on the AP Manager interface itself, and it is a private IP address. So when the OEAP sends its join request, it sends it to the private IP address instead of the public NAT’ed IP address. Thus, the request never traverses the Internet and never makes it to the WLC, and the OEAP never joins.

How can we make this work?

Assuming the WLC model and code level support it, every AP Manager interface has a checkbox to enable NAT support. Once you check the box, you can type in the public IP address that corresponds to the private IP address of the AP Manager, as configured in the NAT on the firewall. Once you configure this setting, it alters the behavior of the discovery responses. Now when the WLC sends a discovery response, it uses the public NAT IP address as the address of the AP Manager interface. So the OEAP learns that it should send the join request to the public IP. Thus, the join request gets to the firewall, and the firewall NATs it and forwards it on to the WLC’s AP Manager interface.
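If you prefer the CLI over the checkbox, the equivalent AireOS configuration looks roughly like this. The public address below is just a placeholder, and depending on your code level and interface design the command applies to the management interface or to a dynamic AP-manager interface:

config interface nat-address management enable
config interface nat-address management set 203.0.113.10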

Design considerations when using NAT

Here are some things to keep in mind when you start to support AP joins across a NAT.

  • The NAT should be a 1-to-1 NAT where you have a unique public IP address for each AP Manager interface IP address.
  • If you have multiple AP Managers on your DMZ WLC, you will want to enable NAT on all of them. Otherwise, you will have instances where the OEAPs are learning private IPs in the discovery responses.
  • Once you enable NAT on an AP Manager interface, ALL APs must join across this NAT. So by default, you typically won’t have APs both on the Internet and on the internal corporate network joining the same WLC because the internal APs typically won’t be able to target the public IP address and use the NAT.
    • You can get around this limitation starting in 7.0.220.0 code with the CLI command “config network ap-discovery nat-ip-only disable”.
  • You don’t actually have to use an AP in OEAP mode to join across a NAT. But if you use a different mode, I’d be sure to enable DTLS encryption if there are any centrally switched WLANs to protect your client data traffic from being snooped on.

Cisco Announces Version 3 of the CCIE Wireless Blueprint

The wait is over: version 3 of the wireless CCIE blueprint has finally been announced.  On September 14, the new version of the written and lab exams will go live.  This will bring the very long-in-the-tooth version 2 blueprint to an end after a nearly 4-year run.  While we tearfully say goodbye to WCS, let’s take a look at what version 3 is bringing to the table.

New Lab Exam Format

First off, let’s look at what has changed in the format of the lab itself.  The wireless track is following suit with the R&S and SP tracks and including multiple sections in the lab.  The wireless lab will now begin with a 1-hour Diagnostic section, followed by a 7-hour Configuration section.

The Diagnostic section is similar to what was done in R&S and SP.  This section tests your ability to assess and diagnose issues in a network without any access to the devices themselves.  Basically, you are given access to a number of pieces of information (emails, topology diagrams, logs, etc) that describe an issue and give the needed information to figure out the root cause.  You will then be given items to complete based on the information such as multiple choice, drag-and-drop, or point-and-click items.  So it’s sort of like the written exam, but with much more information to go through and higher levels of analysis.  Throughout this section, there will be multiple “trouble tickets” to complete.  So you will be working through multiple different scenarios and not just one large scenario.

The Configuration section will be the same as what we have historically done in previous versions.  You’ll be given a single topology and you will have a large number of configuration and troubleshooting tasks to complete on the devices and servers provided.

Each section will be graded, and there will be a minimum passing score for each section individually as well as a combined minimum passing score.  Based on this, you will need to score beyond the minimum in at least one of the sections, as it appears possible to pass both sections individually while still failing to reach the combined minimum passing score.  Otherwise, there would be no purpose to the minimum combined score.

New Knowledge Domains

Cisco added some new knowledge domains to version 3.  Most notably, they added Converged Access as well as a dedicated Security and Identity Management section.  No domains were removed, though there was some renaming and shuffling around of previous domains.

The new version contains the following domains for the lab exam.

1- Configure and Troubleshoot the Network Infrastructure
2- Configure and Troubleshoot an Autonomous Deployment Model
3- Configure and Troubleshoot a Unified Deployment Model (Centralized)
4- Configure and Troubleshoot a Unified Deployment Model (Converged)
5- Configure and Troubleshoot Security & Identity Management
6- Configure and Troubleshoot Prime Infrastructure and MSE
7- Configure and Troubleshoot WLAN media and application services

What implications does this new domain list have?  Here are a few things that I’m noticing right off the bat.

  • Since there are separate domains for centralized and converged unified wireless, you will definitely do both over the course of a given lab.  Interaction between them is also probably highly likely.
  • Since there is a separate Security and Identity Management section, working with the RADIUS server (ISE in this version) will probably play a larger role than it did in version 2.
  • Since we have more domains and only 7 hours in the configuration section, we’ll be doing fewer things per section.
  • I see even more emphasis that wireless professionals need to have skills that stretch beyond our wireless specific realm (switching, network services, security- ISE/RADIUS)

New Hardware and Software

While I think that version 2 of the lab is great, one of its glaring deficiencies as of late has been the age of the software.  Thanks to rapid development in the wireless world, our version 2 gear and code are very outdated.  Version 3 brings us just about to the bleeding edge from a Cisco standpoint.  With the addition of converged access, the lab is testing things that only a small percentage of the customer base has moved to.  The good news there is that hopefully v3 will age a little better than v2 did.  V2 was almost immediately out of date when WCS was replaced by Prime.

So what hardware and software will you see in v3?  Here’s the list.

V3 Hardware

  • 5500 series controllers
  • 5760 series controllers
  • 3700 series APs (lightweight and autonomous)
  • 1600 series APs
  • 4500E series switches with Sup 8-E
  • 3650 series switches
  • Prime Infrastructure server
  • ISE server
  • MSE 3300 series server

V3 Software

  • 8.0 code for 5500 WLC
  • 3.6E IOS-XE code for the 5760 WLC
  • 15.3 code for the autonomous APs
  • 3.6E IOS-XE code on both switch models
  • Prime Infrastructure 2.2
  • ISE 1.3
  • 8.0 code for the MSE
  • AnyConnect secure mobility client 4.0 (no config required)
  • Jabber client 10.x (no config required)

As you can see, we are using the current latest code almost across the board.  We’re also using current top enterprise-tier hardware for the controllers and one of the AP models.  All switches are also converged-access capable.

New Technologies and Topics in Version 3

As we look deeper into the domains covered in v3, here is a highlight list of new topics and technologies that v3 brings to the table.  This is not an exhaustive list.

  • Converged access and everything that it entails
  • Greatly expanded focus on IPv6
  • FlexConnect (many more features than what H-REAP had)
  • Enhanced security functionality with ISE
    • Centralized Guest WebAuth and Policies
    • Client profiling and provisioning
    • SXP/SGT support
    • CoA
  • Enhanced functionality on the 5508 WLCs
    • AVC and netflow
    • New mobility
    • New HA options
  • CMX on the MSE server

Removed Technologies and Topics in Version 3

So we have a healthy amount of new stuff to tackle.  Can we at least get rid of a bunch of stuff from v2 to compensate for this?  Unfortunately, there is little of significance going away.  Here is a list of what didn’t make the transition.

  • MFP
  • Peer-to-peer blocking
  • IGMP snooping
  • WDS on autonomous
  • Upgrading autonomous APs to unified APs
  • Implementing local DHCP services on the WLC

So maybe with the exception of WDS, nothing of any real significance was removed.  These are mostly topics of secondary importance.  Most of the big things that went away were simply replaced by newer versions of the same thing.  Such as…

  • H-REAP was replaced by FlexConnect
  • WCS was replaced by Prime Infrastructure
  • ACS was replaced by ISE

Advice For Those Looking To Pass Version 2

I know there are many of you out there who have already started your journey with version 2 of the lab blueprint.  Due to the significant changes in version 3, I’d suggest that those who have already put significant time into v2 try to pass the lab under the current version.  It will probably be the easier road, assuming you can take at least 2 attempts at it.  At this point, if you haven’t made any attempts, that may be all that you’ll be able to do.  Seeing as the average number of attempts for most of my students to pass the current lab is 2-3, you should have a reasonable shot at passing in 2 (assuming you put in the hard hours of preparation).  But it’s no guarantee.

As you plan out your strategy to pass version 2, here are a few things to think about.

  •  While lab dates are currently plentiful everywhere, I would expect the dates leading up to the last day for v2 will begin to fill up as we get 3-4 months out (maybe even sooner).
  • Rules were put in place last year enforcing longer wait periods after failed attempts.
    • After your first failed attempt, it’s a 30 day wait period
    • After your second failed attempt, it’s a 90 day wait period
    • The next 2 failure attempts also carry 90 day wait periods

So let’s say that you have your first attempt scheduled on April 1.  If you fail that, you can schedule attempt #2 at the beginning of May.  So the earliest you could schedule attempt #3 is early August.  Hopefully in May, there would still be August dates available.  But there is no guarantee as you are relying on the 6 weeks leading up to the version change not being filled.  But as long as you can get the first attempt in by the next few months, it should be pretty safe to assume you can get in 2 attempts.  If you want a shot at 3 attempts, you need to be taking your first attempt ASAP!

Also, if you really want to pass in version 2, attending my classes would probably be a big help.  I’m not trying to be a salesman here.  But classes make a big difference for most people.  You get 1-2 weeks of dedicated study time isolated from the rest of the world.  You get great workbooks not available to regular self-studiers.  I cover and help you zero in on what is most important in the actual lab.  And you get the ability to ask all of the questions that you want.   It’s not a magic potion that will allow you to pass with ease by any means.  There is no escaping the requirement of putting in hour after hour of studying and practicing.  But it’s an accelerant.  And when you have to make every lab attempt count, it’ll definitely boost your odds.

If you are still 3+ months away from being lab ready, it might be best to look at going after version 3.  You may only have 1 chance at taking the v2 lab.  Even with taking a class, the odds of passing on your first attempt aren’t in your favor.

As you evaluate your plan, just be sure to keep a realistic mindset.  Try to figure out how much time you would be able to study for v2, how many attempts you will likely be able to make, and factors like being able to attend a class or getting time off of work to study.  Then also take into account that plans rarely go 100% as expected over the course of 4-6 months.  Stuff happens: you get sick, family emergencies pop up, work gets crazy, racks get booked up on the day that you wanted, maybe I get hit by a bus and your class is cancelled, etc.  If your plan relies on everything working out just right without any wiggle room, maybe you need to re-evaluate.

Advice For Those Looking To Pass Version 3

So maybe version 2 isn’t in the cards for one reason or another.  That’s OK.  One nice thing about v3 is that you can actually dig deep into the technologies, hardware, and software that you are probably using most every day.  No more dealing with WCS or deferred controller code.  No more trying to figure out how to obtain a virtual image for an MSE code that has no official virtual image.  No more getting made fun of for calling FlexConnect H-REAP.  While the version change may have pushed out your completion date, I think anyone who can pass it is going to be a very well-rounded engineer when it comes to making the Cisco portfolio work.  And while just having the letters CCIE behind your name is a big deal, the knowledge and skills that you learn while achieving the certification are where the main value is derived from.

So if you want to start preparing for version 3, here is what you can do today to get started.

1)   Go to the Cisco Learning Network and read up on everything they have released about version 3 so far.

https://learningnetwork.cisco.com/community/certifications/ccie_wireless

Particularly, read through the following.

The v2 to v3 info doc
https://learningcontent.cisco.com/cln_storage/text/cln/marketing/ccie_wireless_updates_v2_v3.pdf

The v3 lab blueprint
https://learningnetwork.cisco.com/docs/DOC-26435

The v3 hardware/software list
https://learningnetwork.cisco.com/docs/DOC-26437

The v3 reading list
https://learningnetwork.cisco.com/docs/DOC-26433

2)   Download all of the Cisco.com documents off of the reading list and start reading through them.  Be sure to highlight and take notes.

3)   Look at the books on the reading list and evaluate what might be worth purchasing.

I’ve personally not found the wireless books to be super helpful in my studies.  Most of the important concepts can typically be found in the free cisco.com documentation.  Also, due to the amount of time it takes to write a book, they are typically referencing older code than what you would see in the lab.  Most likely you will be able to get by with the free documentation.  If Cisco Press comes out with a quick reference guide for the written/lab exam, grab that.

4)   Figure out what you might be able to study with equipment already available to you

Thanks to the new hardware, it will probably be tough to build a home lab to practice everything.  Most of us can’t swing $2000 for a 3650 to practice converged access.  Even running demo versions of the servers requires a pretty beefy ESXi box.  But you can probably practice some things, like autonomous mode on cheaper APs that can run 15.3, or maybe snag a 2504 or even run a virtual WLC to get exposed to 8.0 code if you haven’t used it yet.

That should keep you busy for a while.


What is iPexpert Doing To Prepare for Version 3?

I’m getting started on creating all new materials for v3.  We’ll have 2-3 workbooks created with racks for you to practice on, multiple video series to watch, and classes to attend.  I’m looking forward to taking CCIE wireless studies to a whole new level with version 3.  I hope you’ll be joining me.

For those of you who have the iPeverything subscription, you will get access to all of the self-study materials as they are released.  That includes videos, audio, and quizzers.  If you also got the workbooks as a part of your current subscription, then you’ll get the new ones as well.

If you have any questions or concerns over what products you may or may not have access to once the version 3 materials are released, talk to one of our training advisors at sales@ipexpert.com and they will take care of you.

As new information about our products and release dates for version 3 materials becomes available, I’ll be sure to post it here as well as on our Facebook group page at https://www.facebook.com/groups/iPexpertCCIEWireless.StudyGroup/.

Good luck with your studies!

Routing on the CCIE Data Center Lab – How Deep Do We Need to Go?

Every time I teach NX-OS, the same question arises: “How good do we need to be at routing in order to pass the lab exam?” My first inkling is always to say ‘learn it all,’ but we all know that isn’t always possible. There is a ton of information to learn within the scope of this lab exam, so in order to fully answer this question, we need to look to Cisco’s almighty guide, the blueprint!

They have gone pretty easy on us in terms of routing, but in their defense, they do have an entire lab dedicated to routing and switching. If we scan down the blueprint to Section 1.2, we see the category we are looking for:

Screen Shot 2015-03-20 at 2.54.17 PM

While that comprises the entire section, I would also err on the side of caution and include Section 1.4a within the L3 category: the first-hop redundancy protocols, namely HSRP, GLBP, and VRRP.

Look at what they ask us for here, and let’s analyze it. They ask for BASIC EIGRP and OSPF, Bidirectional Forwarding Detection, and equal-cost multipathing. ECMP isn’t really its own ‘protocol’; rather, it is something that most L3 protocols support. We will see that EIGRP and OSPF both support 8 equal-cost paths by default! Bidirectional Forwarding Detection, while a protocol, is not a routing protocol. So that leaves us with EIGRP, OSPF, and FabricPath. FabricPath is kind of its own deal within this blueprint, so I think we can safely exclude it from the remainder of this conversation too. So let’s talk EIGRP and OSPF, the only dynamic routing protocols identified on the blueprint. A lot of people ask, “Is that all NX-OS supports?” The answer is a resounding no! NX-OS supports RIPv2, EIGRP, OSPF, BGP, IS-IS, and LISP – and that’s just for IPv4! We, however, are only going to be tested on those two protocols.

So usually the next question I get concerns the depth at which we must know these two dynamic routing protocols. The blueprint clearly states ‘basic.’ But what defines basic? To a Route/Switch CCIE, basic might mean something more advanced than it does to somebody coming from a strong voice or server background. This question is a little harder to answer, as the blueprint gives no clear indication of what ‘basic’ covers.

So while we could speculate all we want, I have tried to categorize things for my students as you see below. This is by no means a definitive guide, simply a professional opinion on what I would consider basic vs. advanced for these two routing protocols.

Basic


  • Basic Protocol Information – know what IP protocol numbers and multicast addresses EIGRP and OSPF utilize (88 and 224.0.0.10 for EIGRP; 89 with 224.0.0.5/224.0.0.6 for OSPF), and know what the protocols require in order to form adjacencies and exchange routing information
  • Base configuration – get the protocol up, forming adjacencies, setting router IDs, and exchanging/advertising routes
  • Authentication – know the various authentication methods, and how to apply them for both protocols
  • Basic redistribution – know how to redistribute direct routes, static routes, and other routing protocols
  • OSPF Interface Types – know how to differentiate between P2P and broadcast network types, and how to change them
  • Basic summary-addressing for OSPF and EIGRP
  • Passive-interface theory and application
  • BFD application

Advanced


  • EIGRP Stub routing
  • OSPF stub/total stub/NSSA routing
  • Advanced route filtering including leak-maps, filter-lists, etc…
  • OSPF virtual-links
  • SPF Optimization
  • EIGRP Split horizon manipulation
  • Routing protocol timer manipulation
  • Administrative distance manipulation

While there is much more to both of these protocols than just these bullet points, I think the above breakout does a pretty good job at defining some of the implied knowledge that the blueprint steers any CCIE DC candidate towards. When I began my preparations, these were the guidelines that I worked within (even coming from an RS CCIE background!), and now I pass them on to you!

Also, as good candidates, we should know how to reach the configuration guide for these topics! I have learned through my exam experiences that they will always catch you off guard with something completely random, so knowing how to reach the configuration guide for a specific topic is crucial! As always, start with ‘root’:

http://www.cisco.com/cisco/web/psa/default.html?mode=prod

Switches->Data Center Switches -> Nexus 7000 Series Switches -> Nexus 7000 Series Switches -> Configuration Guides -> Cisco Nexus 7000 Series NX-OS Unicast Routing Configuration Guide, Release 6.x   

This should get you to a place where you could find the answer to anything you are looking for in terms of L3 routing protocols within NX-OS! If all else fails, remember to hit ctrl-f and just search the page for specific keywords.

I should also mention that one of the hardest things for a lot of people is bridging the configuration gap from IOS to NX-OS when configuring these L3 protocols. If you want more insight on that approach, please hit up a boot camp with me in the near future or watch some of our VoDs within our CCIE DC library!
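To give you a flavor of that gap, here is a minimal OSPF sketch on NX-OS (the addressing and process tag are made up for illustration). Notice the feature keyword and the interface-level enablement, both departures from the classic IOS network-statement approach:

feature ospf

router ospf 1
 router-id 10.0.0.1

interface Ethernet2/1
 ip address 10.1.1.1/24
 ip router ospf 1 area 0.0.0.0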

Lastly, and I promise I will end with this: these are the only two dynamic routing protocols you will need to know for the CCIE Data Center lab exam. However, once you pass this exam and are labeled a CCIE DC EXPERT, you will be expected to know much more, including those ‘advanced’ topics and even the protocols excluded from this blueprint. So once you achieve those glorious numbers (or the designation if you’re a prior CCIE), please don’t stop learning. Go out there and continue your education in order to fill in those few gaps!

Happy learning everyone!

Build Your CCIE Security Knowledge with Cisco Docs!

A good knowledge of Cisco’s documentation could make the difference between passing and failing the exam. Because of that, I would like to show you how to access the most useful Doc CD resources on a per-blueprint-section basis. In addition, we will also take a look at the location of each document, so you know how to access it without using the Search function – exactly what you will have to do to access those resources in the lab.

Unless otherwise mentioned, all documents discussed in this blog are part of Configuration Guides.

1. System Hardening and Availability

Probably the most useful docs here will be those for the Control Plane features. However, I am going to show you more, so you at least know how to find them.

Our starting point for this section is IOS Configuration Guides :
IOS and NX-OS Software -> IOS -> IOS Software Release 15M&T -> 15.2M&T

Routing Protocol Authentication :

IP Routing : RIP -> Configuring Routing Information Protocol
Read More Here
IP Routing : EIGRP -> IP EIGRP Route Authentication
Read More Here
IP Routing : EIGRP -> IPv6 Routing : EIGRP Support
Read More Here
IP Routing : OSPF -> OSPFv2 Cryptographic Authentication
Read More Here
IP Routing : OSPF -> OSPFv3 Authentication Support with IPSec
Read More Here

Route Filtering, PBR :

IP Routing : Protocol-Independent -> Basic IP Routing
Read More Here
IP Routing : Protocol-Independent -> Policy-Based Routing
Read More Here
+ Respective RIP, EIGRP, OSPF and BGP Configuration Guides

Control Plane Policing, Protection and Logging :

QoS : Quality of Service Solutions -> Policing and Shaping
Read More Here

Device Access Control, Role-Based CLI Access, SSH :

Security, Services, and VPN : Securing User Services -> User Security
Read More Here
Security, Services, and VPN : Securing User Services -> Secure Shell
Read More Here

Disabling Unnecessary Services :

Security, Services, and VPN : Securing User Services -> User Security -> AutoSecure
Read More Here

NetFlow, Flexible NetFlow :

Network Management : NetFlow
Read More Here

Network Management : Flexible NetFlow
Read More Here

CPU/Memory Thresholds, Fault Management :

Network Management : Network Management -> Basic System Management
Read More Here

SNMP :

Network Management : Network Management -> SNMP
Read More Here

2. Threat Identification and Mitigation

A great resource can be found under ASA :

Protocols, Ports, ICMP Types :

Security -> Firewalls -> ASA -> ASA 5500-X Series Next-Generation Firewalls -> Cisco ASA 5500 Series Configuration Guide using the CLI 8.4/8.6 -> Reference -> Addresses, Protocols and Ports
Read More Here

Pretty much all of the L2 Security features (Port Security, DHCP Snooping, DAI, Source Guard, STP Protection, Storm Control, etc.) are documented in the Switch Configuration Guide :

L2 Security :

Switches -> Campus LAN Switches – Access > Catalyst 3750-X Series Switches ->
Cisco IOS Release 15.0(2) SE and Later :
Read More Here

IPv6 First Hop Security :

Switches -> Campus LAN Switches – Access > Catalyst 3750-X Series Switches ->
Cisco IOS Release 15.0(2) SE and Later -> Configuring IPv6 Unicast Routing
Read More Here

For all other features, let’s go back to the main IOS documentation page :
IOS and NX-OS Software -> IOS -> IOS Software Release 15M&T -> 15.2M&T

SEND :

IP : IPv6 Implementation Guide -> Implementing First Hop Security in IPv6
Read More Here

FPM :

Security, Services, and VPN : Securing the Data Plane -> Flexible Packet Matching
Read More Here

TCP Intercept :

Security, Services, and VPN : Securing the Data Plane -> Denial of Service Attack Prevention
Read More Here

3. Intrusion Prevention and Content Security

Not really much to look for in this section :

IPS :

Security -> Next Generation Intrusion Prevention System (NGIPS) -> Intrusion Prevention System -> IPS 4200 Series Sensor -> IPS 7.1 (IDM or CLI)
Link 1
Link 2

IOS IPS :

IOS 15.2M&T -> Security : Securing the Data Plane -> Cisco IOS Intrusion Prevention System -> Cisco IOS IPS 5.x Signature Format Support and Usability Enhancements
Read More Here

ASA IPS Module :

ASA 8.4/8.6 Config Guide -> Configuring Modules -> Configuring the IPS Module
Read More Here

WSA :

Security -> Web Security -> Web Security Appliance -> End User Guides -> IronPort AsyncOS 7.1
Read More Here

4. Identity Management

In my opinion, the ACS & ISE User Guides are pretty much useless on the exam. One exception is the part of the ISE document that shows you how to prepare your switch for 802.1x, including certain Profiling methods. It is a great reference to check that you have not missed any of the 802.1x-related commands :

Wired 802.1x & NAD Profiling configuration :

Security -> Access Control and Policy -> Identity Services Engine -> Configuration Guides -> User Guide Release 1.1.x -> Reference -> Switch and WLC Configuration Required to Support Cisco ISE Functions
Read More Here

Now move to : IOS and NX-OS Software -> IOS -> IOS Software Release 15M&T -> 15.2M&T

AAA :

Security : Securing User Services -> Authentication, Authorization and Accounting
Read More Here

RADIUS & TACACS+ Attributes :

Security : Securing User Services -> RADIUS Attributes
Read More Here
Security : Securing User Services -> TACACS+ Configuration -> TACACS Attribute-Value Pairs
Read More Here

5. Perimeter Security and Services

First – IOS documents :
IOS and NX-OS Software -> IOS -> IOS Software Release 15M&T -> 15.2M&T

NAT :

IP : IP Addressing -> NAT Configuration Guide -> Configuring NAT for IP Address Conservation
Read More Here

IPv4/IPv6 ACLs, Dynamic/Reflexive ACLs, Object Groups, SEND :

Security : Securing the Data Plane -> Access Control Lists
Read More Here

CBAC, Transparent IOS Firewall :

Security : Securing the Data Plane -> Context-Based Access Control Firewall
Read More Here

Application Firewall (http) :

Network Management : Network Management -> http Services -> http Inspection Engine
Read More Here

ZFW, User-Based Firewall :

Security : Securing the Data Plane -> Zone-Based Policy Firewall
Read More Here

URPF :

Security : Securing the Data Plane -> Unicast Reverse Path Forwarding
Read More Here

QoS and NBAR :

QoS : Quality of Service Solutions -> (multiple links)
QoS Solutions Link 1
QoS Solutions Link 2
QoS Solutions Link 3
QoS Solutions Link 4
QoS Solutions Link 5
QoS Solutions Link 6
QoS Solutions Link 7

For the ASA, you should be familiar with the entire Configuration Guide for 8.4/8.6 :

ASA (multiple features) :

Security -> Firewalls -> ASA -> ASA 5500-X Series Next-Generation Firewalls -> Cisco ASA 5500 Series Configuration Guide using the CLI 8.4/8.6
Read More Here

Also know how to deal with old-style NAT & Transparent Firewall :

ASA Old NAT (before 8.3) :

Security -> Firewalls -> ASA -> ASA 5500-X Series Next-Generation Firewalls -> Cisco ASA 5500 Series Configuration Guide using the CLI 8.2 -> Configuring NAT
Read More Here

ASA Old Transparent Firewall (before 8.4) :

Security -> Firewalls -> ASA -> ASA 5500-X Series Next-Generation Firewalls -> Cisco ASA 5500 Series Configuration Guide using the CLI 8.2 -> Getting Started and General Information -> Configuring the Transparent or Routed Firewall
Read More Here

6. Confidentiality and Secure Access

For VPNs, it will be ASA Configuration Guide (-> Configuring VPN), and a bunch of IOS documents :

IOS and NX-OS Software -> IOS -> IOS Software Release 15M&T -> 15.2M&T

VPNs, PKI :

Security : Secure Connectivity -> (multiple links)
Secure Connectivity Link 1
Secure Connectivity Link 2
Secure Connectivity Link 3
Secure Connectivity Link 4
Secure Connectivity Link 5
Secure Connectivity Link 6
Secure Connectivity Link 7
Secure Connectivity Link 8

MACSec can be found under Switch docs :

MACSec :

Switches -> Campus LAN Switches Access > Catalyst 3750-X Series Switches ->
Cisco IOS Release 15.0(2) SE and Later -> Configuring MACsec Encryption
Read More Here

Last but not least, Wireless :

Wireless Security :

Wireless -> Wireless LAN Controller -> Wireless LAN Controller Software -> 7.2 -> Configuring Security Solutions
Read More Here

If you feel there is another resource that should be included in the above list, don’t hesitate to contact us.

How to use AnyConnect to “cheat” in the CCIE wireless lab

How would you like to be able to look up the answers to some of the tasks in the wireless lab, and not get in trouble over it? Well, read on, and I’ll give you a fun tip that you may be able to use in the lab to solve parts of certain lab tasks. It’s not actually cheating, but it almost feels like it.

One of the realities of the lab is that there will be some pre-configurations on many of the devices. You won’t be configuring every last device from scratch. There’s not enough time, and they’d prefer to test you on more complex things than configuring every VLAN, interface, host name, etc. from scratch. Just about anything has the potential to have some level of pre-configuration, and that includes the AnyConnect client. If you find that the AnyConnect client already has some WLAN profiles configured on it, say a silent “thank you” to Cisco, because they just gave you a ton of great information.

Another reality of the lab is that they often don’t ask you to do things in the most straightforward and clear way possible. Often they use code words or phrases that need to be interpreted. For instance, instead of saying that the WLAN should use WPA2/AES with a PSK for the layer 2 security, they may say something like “Use a security method that supports RSN with a non-RC4 cipher and a shared key of ‘wireless’”. If you weren’t sure what RSN was, or that it was directing you to choose WPA2 and that AES is your only WPA2 cipher that doesn’t use RC4, you’d be in trouble. Fortunately, if AnyConnect has the WLAN profile pre-configured, it can tell you what the answer is.
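As a hedged illustration, the decoded requirement from that example would translate to something like this on the WLC CLI (WLAN ID 1 is assumed, and the WLAN would need to be disabled while changing its security settings):

config wlan security wpa enable 1
config wlan security wpa wpa2 enable 1
config wlan security wpa wpa2 ciphers aes enable 1
config wlan security wpa akm psk enable 1
config wlan security wpa akm psk set-key ascii wireless 1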

Go to your Windows client with AnyConnect installed on it and open up the AnyConnect advanced window. Assuming some profiles are pre-configured, you’ll see something similar to the image below.

Screen Shot 2015-04-07 at 4.28.32 PM

Here we see the following information about each WLAN profile.

  • The SSID
  • The layer 2 security method

So if there is ever a question as to what either of these settings should be, AnyConnect just gave you the answer, or at least a big clue towards it. For instance, look at the Video-Pod1 profile towards the bottom of the list. We can see that the client is configured to connect to the WLAN using WPA2/AES with a PSK (Personal = PSK, Enterprise = 802.1x). So at a minimum, the WLAN must be configured to use those security settings, or the client will never be able to connect.

Now this isn’t necessarily the end of the security configuration of the WLAN. There is the potential that they want you to enable multiple security methods (for instance, WPA2/AES and WPA/TKIP), or maybe even MAC filtering. But at a minimum, it’s a good start to the security configuration of your WLANs.

While this seems like such a simple thing, it can be what you need to help get your points for the WLAN configs. You’d be surprised at how many times I see students choosing the wrong security settings and even the wrong SSID names during my graded mock labs. Then they wonder why the client isn’t connecting to their WLANs while the answer was right there the whole time. So feel free to “cheat” using this method if it is available to you. I won’t tell. ;-)

The Benefits of RTMT Features in CUCM and CUC

Finally, it’s the blog you’ve all been waiting for! Yes, that’s right folks; the time has come to discuss the benefits of the Real-Time Monitoring Tool (RTMT) in CUCM and CUC. All right, I know it’s not the most exciting subject of all the topics on the CCIE Collaboration lab blueprint, but it can help you perform troubleshooting tasks in a very efficient manner. The goal of this blog is to point out a couple of useful RTMT features to give you a nice boost when tackling different lab topics.

For those who are not familiar with RTMT, it can be used to pull traces (log files) for troubleshooting on all Cisco UC servers, monitor real-time platform statistics, check syslog messages, and display a host of “Performance” parameters that can assist the engineer in gathering system information. While those are all great features worthy of our attention, I’d like to focus specifically on a new RTMT feature available in CUCM 9.x called “Session Trace Log View.” This feature is an excellent troubleshooting tool, especially when used with SIP. Essentially, what it does for us is organize the traces in such a way as to provide a cohesive view of the messaging associated with a particular call flow.

As an example, let’s have a look at a simple call flow between two CUCM-registered phones. The HQ and SB CUCM clusters have a SIP ICT configured to allow calls between devices registered to each cluster. In this example, HQ Phone 1 will place a call to SB Phone 1 over the SIP ICT.

RTMT-00001

Before placing the call, log into RTMT for the HQ CUCM cluster (Publisher) and navigate to CallManager -> Call Manager Summary.

RTMT-00002

Next, on the left-hand pane, select the “Real Time Data” option under the “Session Trace Log View” heading. This will allow access to debug the call flow between HQ Phone 1 and SB Phone 1.

RTMT-00003

When the tool loads, the first thing that needs to happen is to set the starting date and time that should be used to search for call records. We should also set the duration to the maximum value (60 minutes) in order to provide greater searchability. In other words, we are creating a one-hour date/time range by which to search for calls. Yes, the method is a little bit strange, but we need to roll with it—we have no choice! Don’t forget to set the time according to the server time, not the time of the local PC. Also, make sure to select the proper time zone.

RTMT-00004

Once the date/time parameters are locked in, we are ready to make the test call. Place the call between HQ Phone 1 and SB Phone 1. When it completes successfully, go ahead and hang up. Now click the Run button within RTMT. This will reveal useful statistics about the call, such as “Calling DN”, “Orig Called DN”, “Called Device Name”, and “Termination Cause Code.” In this case, since the call was ended “normally” by hanging up the phone, the cause code is “(16) Normal Call Clearing.”

RTMT-00005

Now for the real magic! Double click the call record in order to launch an RTMT-created diagram of the call flow from the perspective of the HQ CUCM cluster.

RTMT-00006

As you can see in the screenshot, this provides an incredibly useful diagram of the call flow between devices. We have an overview of the SIP messaging that takes place between endpoints with each message being “clickable”.

Let’s click on the original INVITE message that is sent from the phone to the HQ CUCM Subscriber server (10.10.13.12) to reveal the details of the message.

RTMT-00007

As you can see above, the actual message is displayed, which is essential to understanding what is happening within the SIP call. If you click the “Call Flow Diagram” tab, you can return to the diagram and click another message in another part of the call flow if you so desire.

As mentioned above, RTMT is not only useful for CUCM servers, but also for every other UC server on the CCIE Collaboration blueprint. In this next example, let’s focus on the Unity Connection server to explore the “Port Monitor” feature. This feature provides access to the real-time status of the ports, along with statistics about each call that traverses them. Log in to RTMT for CUC and navigate to Unity Connection -> Port Monitor.

RTMT-00008

Next, select the server of interest from the dropdown box.

RTMT-00009

After the node has been selected, set the “Polling Rate” to the lowest value possible (1) and click the Set Polling Rate button. Then click the Start Polling Button to begin gathering data about the available Unity Connection ports.

RTMT-00010

At this point, all ports are now visible from the tool.

RTMT-00011

Now make a test call into Unity Connection by pressing the “Messages” key on SB Phone 1 and observe the behavior.

RTMT-00012

As you can see above, the call is displayed as a “Direct” call and has used the second available voicemail port in the system. A “Forwarded” call will be displayed differently in the tool. We can test this by attempting to leave a message for a user configured on the system.

RTMT-00013

In this case, the third available port was used in order to handle the forwarded call.

Once again, this can be a very useful tool when troubleshooting voicemail issues. Simply understanding if the call has been interpreted correctly on the Unity Connection server (either “Direct” or “Forwarded”) can make a world of difference in your troubleshooting approach.

So there you have it: a little bit more information about RTMT to help you prepare for and pass the lab! I hope you found this post useful and that it provides you with some much-needed ammunition to slay the beast that is the CCIE Collaboration lab! As always, if you need an extra push to get ready for the lab, are hitting roadblocks in your preparation, or just need some direction on how to tackle the CCIE Collaboration lab, give us a call and speak with an iPexpert Training Advisor about attending one of my bootcamps.

Thanks again for reading and good luck in your preparation!

Check out My Video on Demand Courses that feature in-depth information on Cisco Unified Communications Manager!

Learn CUCM Features from My Written Exam Prep VoD

iPexpert’s Cisco CCIE Collaboration Technology Workbook (Vol. 1) – Features 7 CUCM and CUCME Labs

Connectivity Policies in UCSM

The importance of using templates with Cisco UCS cannot be emphasized enough. Creating something that you can reuse over and over again, as well as update and push out to pre-created objects, can save you a ton of time from an administrative perspective. vNIC and vHBA templates are a great example of this within Cisco UCSM. These templates allow you to create reusable vNIC and vHBA objects that you can reference within the creation of a service-profile template or even a LAN/SAN connectivity policy.

20141216_01

20141216_02

You can reach even further with policies and templates on UCS and create what are known as LAN and SAN connectivity policies. These allow us to pre-create the LAN and SAN connections for service profiles and service profile templates. Take this for example: say we knew that we were going to deploy a lot of ESXi servers in our environment, and that they would all have essentially the same LAN requirements in terms of the number of NICs they needed and the VLANs required for the hosts themselves, as well as the guest machines. We could create a connectivity policy for these servers and reuse that policy in all of our service profiles and/or templates! To begin the process, we will want to navigate to our LAN tab and filter to policies. If you right-click on ‘LAN Connectivity Policies,’ UCS will allow you to create a new one. Here I create one called ESXi-LCP:

20141216_03

It now asks us to create our “network.” We will want to click ‘Add’ to create the vNICs that we will be using. Notice that once you get in here, we can even reference our existing vNIC templates to save ourselves even more time! Here is a screenshot of the first vNIC that I created within my policy:

20141216_04

I won’t show you both of them, but we will just create our two vNICs here, referencing our existing vNIC templates.

20141216_05

Once complete, we have a policy that we can use within our SP or SP-templates to save us a ton of time on something that we normally would have to do over and over. Below is an example of me applying the LAN connectivity policy inside of a service profile:

20141216_06

Now, what we have ultimately done here is use vNIC templates inside of our LAN connectivity policy. This means that if we later had to, say, add a VLAN to those vNICs, we could just change it on the vNIC template and the change would be pushed to all of the vNICs derived from that template (as long as we used an updating vNIC template)! We can do the same exact process on the storage side as well: we can create vHBA templates and SAN connectivity policies to pre-provision all of our existing or future connectivity down to our stateless computing environment.

Now, this might be a process that I leverage in the CCIE lab, depending on a few things. Verbiage, of course, is going to lead you down one path or another during your exam. However, if not told to do otherwise, and if I had multiple service profiles or SP-templates to create that all used the same pools and vNIC/vHBA structure, I might be tempted to leverage these in order to save a bit of time. Odds are, though, that these will be much more useful in normal day-to-day administration of UCS systems within your existing environments.


DMVPN for IPv6


As you probably already know, every DMVPN network consists of multiple GRE tunnels that are established dynamically. At the beginning, every Spoke in the Cloud builds a direct tunnel to the Hub. Then, once the Control Plane converges, the Spokes may also build tunnels with other DMVPN devices, assuming of course that our DMVPN deployment (aka “Phase”) allows for that.

In most cases DMVPN tunnels will be deployed over an IPv4 backbone, interconnecting different sites running IPv4. But since GRE is a multi-protocol tunneling mechanism, we can use it to carry different protocol traffic, such as IPv6. In fact, in newer versions of IOS code you can even change the underlying transport from IPv4 to IPv6. This basically means that you can use an IPv4 OR IPv6 network to tunnel IPv4 OR IPv6 traffic.
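
On such newer code, switching the underlay to IPv6 is only a matter of the tunnel mode. Here is a minimal sketch (not used in the lab below) :

! requires 15.2(1)T or later (IPv6 transport for DMVPN)
interface Tunnel256
tunnel mode gre multipoint ipv6
tunnel source Loopback0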

In this particular article I am going to discuss a scenario in which the Transport/NBMA network (“Underlay”) uses IPv4 addresses, but the goal will be to use the DMVPN to interconnect sites enabled only for IPv6.

As you can see from the topology below, our private sites are configured with prefixes starting with 2192:1:X::/64, and the VPN (“Overlay”) subnet used is 2001:256::/64 :

DMVPN for IPv6

Note that we are using IPv6 addresses for the VPN, since we will be using an IPv6 routing protocol to exchange the Control Plane information. That’s the most common way of deploying a DMVPN like that (another option would be to use MBGP).

Let’s start our configuration. We will first configure our Hub (R6), then the Spokes (R2, R5), and finally enable routing on the Overlay network. Since IPSec is optional, we will not be using it in this example.

R6 (Hub) configuration. Almost everything is IPv6 here, except that Loopback0 is the NBMA interface (so it uses IPv4 address 6.6.6.6) and the tunnel mode is set to “gre multipoint,” which means that our transport is IPv4. The link-local address was hard-coded (we will do the same on the Spokes) because it must always be unique on a given Cloud. Those addresses are used to source Control Plane messages (e.g. NHRP), and will also be used by our Routing Protocol (Next-Hop addresses in updates are link-local).

interface Tunnel256
! link-local hard-coded - must be unique within the Cloud
ipv6 address FE80::6 link-local
ipv6 address 2001:256::6/64
ip mtu 1400
ipv6 mtu 1400
! the Hub learns the Spokes' NBMA-to-VPN mappings dynamically
ipv6 nhrp map multicast dynamic
ipv6 nhrp network-id 256
! Loopback0 (6.6.6.6) is the IPv4 NBMA address
tunnel source Loopback0
tunnel mode gre multipoint

Next comes R2. Note the key thing here – the NHRP mapping is between an IPv6 and an IPv4 address. Also, since the Hub’s address is always the logical one, we point at its IPv6 address here; multicast traffic, however, will be sent to the NBMA address :

interface Tunnel256
ipv6 address FE80::2 link-local
ipv6 address 2001:256::2/64
ip mtu 1400
ipv6 mtu 1400
! static mapping: the Hub's VPN (IPv6) address to its NBMA (IPv4) address
ipv6 nhrp map 2001:256::6/128 6.6.6.6
! multicast/broadcast traffic is sent to the Hub's NBMA address
ipv6 nhrp map multicast 6.6.6.6
ipv6 nhrp network-id 256
! the NHS is always the Hub's logical (VPN) address
ipv6 nhrp nhs 2001:256::6
tunnel source Loopback0
tunnel mode gre multipoint

On my particular code (15.1(3)T4) you could use the Context-Sensitive Help to quickly figure out the syntax (just remember that you have to start with ipv6 nhrp) :

code-DMVPN-1

But note that this is not going to work on an IOS that supports IPv6 transport for DMVPN (15.2(1)T and above), which would show you both address types as a potential argument :

code-DMVPN-2

Pretty much the same commands go to R5; just remember to use the correct addresses and to hard-code the link-local address :

code-DMVPN-3
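
In case the screenshot is hard to read, R5’s configuration mirrors R2’s almost line for line; a reconstruction (R5’s addresses are assumed from the topology) :

! reconstructed from the R2 template; FE80::5 / 2001:256::5 assumed
interface Tunnel256
ipv6 address FE80::5 link-local
ipv6 address 2001:256::5/64
ip mtu 1400
ipv6 mtu 1400
ipv6 nhrp map 2001:256::6/128 6.6.6.6
ipv6 nhrp map multicast 6.6.6.6
ipv6 nhrp network-id 256
ipv6 nhrp nhs 2001:256::6
tunnel source Loopback0
tunnel mode gre multipoint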

Let’s now quickly verify our configuration :

code-DMVPN-4

We see the Hub learned the mappings for both IPv6 addresses (global unicast and link-local) via NHRP. DMVPN is up :

code-DMVPN-5

How about one of the Spokes :

code-DMVPN-6

All good. Let’s now look at a GRE debug (debug tunnel) and check connectivity within the Cloud :

code-DMVPN-7

We do have connectivity within the Cloud, at least between the Spokes and the Hub. We will now deploy one of the IPv6 Routing Protocols to extend the original Control Plane with the VPN information. I am going to use EIGRPv6 and we will run DMVPN Phase II :

All devices (R2, R5, R6) :
code-DMVPN-8
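
Since the screenshot may be hard to read, the configuration boils down to something like this (a sketch; the AS number of 256 is my assumption, and disabling split-horizon and next-hop-self applies to the Hub only, which is what makes Phase II Spoke-to-Spoke tunnels possible) :

ipv6 unicast-routing
!
ipv6 router eigrp 256
no shutdown
!
interface Tunnel256
ipv6 eigrp 256
! R6 (Hub) only - allows Phase II Spoke-to-Spoke traffic :
no ipv6 split-horizon eigrp 256
no ipv6 next-hop-self eigrp 256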

EIGRPv6 was also enabled on interfaces emulating our private sites (2192:1:X::/64). Time to verify :

code-DMVPN-9

OK, so the Control Plane is fine. Next is the Data Plane. We see that the first traceroute goes through the Hub, but once R2 learns the NBMA address for FE80::5, a Spoke-to-Spoke tunnel is established.

code-DMVPN-10

code-DMVPN-11

To finish that discussion, let me recap our configuration :

  1. Even that Tunnel’s address is now IPv6, the encapsulation mode is still set to “tunnel mode GRE multipoint.”
  2. Watch out for “ipv6 nhrp” on the Spokes – depending on the argument this command takes IPv6, IPv4 or both types of addresses as an input.
  3. Link-local addresses must be unique within the Cloud. That’s why you should always hard-code them.

Download the Code here.

Learn more about IPv4 & IPv6 with our Video on Demand Course.

Understanding NPV: Part 1 of a 2-part Series


N-Port Virtualization (NPV) and N-Port ID Virtualization (NPIV) have been around for quite some time now. Enhancements have been made to the traditional NPV and NPIV implementations, making them more convenient for unified fabric topologies (which is what we will be discussing today). This blog, part 1 in a 2-part series, will discuss the ‘fcoe-npv’ implementation of NPV/NPIV, while the next blog will focus on the traditional implementation.

NPV and NPIV were created as a method by which we could add additional switches (i.e. port density) to a given fabric without consuming additional domain-IDs or adding to the administrative burden of a growing SAN (managing zoning, domain-IDs, principal switch elections, FSPF routing, etc.). A lot of this concern stemmed from the fact that the Fibre Channel standard limits us to 239 usable domain-IDs. Essentially 8 bits, the most significant byte in the Fibre Channel ID (FCID), are reserved for the domain-ID, and this byte is what the FSPF protocol uses to route traffic throughout a Fibre Channel fabric. While 8 bits gives us 256 addresses, only 239 are usable, as some are reserved. Beyond this, many vendors restrict us to a much smaller number of domain-IDs on the fabric. Brocade, for example, will typically impose a 24-edge-switch limit per fabric, while Cisco recommends staying under 40, though some of the larger MDS switches state 80 as their tested and theoretical maximum.

The feature ‘fcoe-npv’ came to fruition in NX-OS 5.0(3)N2(1). It offers what Cisco describes as an ‘enhanced form of FIP snooping,’ as it implements FIP snooping as well as providing the legacy benefits of NPV with traffic-engineering and VSAN management. The fcoe-npv feature requires the FCOE_NPV_PKG license on the device but does not require a reload like the legacy NPV (and subsequent configuration wipe). Also, it should be noted that when you utilize fcoe-npv, you cannot have the ‘feature fcoe’ separately enabled, nor can your device support native Fibre Channel interfaces.

SW4(config)# feature fcoe
ERROR: Cannot enable feature fcoe because feature fcoe-npv is enabled. Disable feature fcoe-npv, reload the system, and try again.

SW4(config)# slot 1
SW4(config-slot)# port 31-32 type fc
ERROR: Cannot change port types to fc as feature fcoe-npv is enabled

Now enough with the perfunctory stuff… Let’s take a look at the topology that we will be using! Below we see a diagram with two Nexus 5548s (SW3 and SW4), as well as a couple of FEXs and a C200 M2 with a P81 VIC card in it. SW3 in this diagram will represent our NPIV switch, while SW4 will be our fcoe-npv test dummy. We will be simulating FLOGIs from the C-Series server into the network.

understanding-npv-diagram

So the first thing we need to do is prepare the infrastructure and verify, at a basic level, that everything is operating correctly. Let’s go ahead and enable our features.

understading-npv-table
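
In case the table is hard to read, the feature enablement amounts to something like this (a sketch; the fex feature lines are my assumption, based on the FEX-attached topology):

! sketch - "feature fex" assumed from the FEX-attached topology
SW3(config)# feature npiv
SW3(config)# feature fcoe
SW3(config)# feature fex

SW4(config)# feature fcoe-npv
SW4(config)# feature fex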

Step 1 is now complete! I didn’t show it here, but I did have to manually turn on the service policies to support FCoE, as this switch did not apply them automatically. Next we need to prepare our Ethernet infrastructure. As you probably know, FCoE must be carried over 802.1Q trunks. So I am going to go ahead and configure interface e1/5 between SW3 and SW4 as a trunk, as well as port 1 off of both Fabric Extenders.

understanding-npv-table2

Now we can put in our VSANs, as well as the FCoE VLANs that map to them.

understanding-npv-table3
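
On both switches, that configuration is along these lines (the one-to-one VLAN-to-VSAN numbering is my assumption):

! sketch - VLAN numbers matching the VSAN numbers are assumed
vsan database
 vsan 10
 vsan 20

vlan 10
 fcoe vsan 10
vlan 20
 fcoe vsan 20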

Good, now on to the VFC (virtual fibre channel) interface configuration! We will start with the link between SW3 and SW4.

understanding-npv-table4
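
Reconstructed from the description, the inter-switch VFC configuration looks roughly like this (SW3’s VFC number is my assumption; SW4’s uplink VFC runs as a VNP port toward the NPIV core, while SW3’s side stays in its default VF mode):

! sketch - interface numbers assumed; only vfc5 on SW4 is confirmed by the text
SW3(config)# interface vfc5
SW3(config-if)# bind interface ethernet 1/5
SW3(config-if)# switchport trunk allowed vsan 10
SW3(config-if)# switchport trunk allowed vsan add 20
SW3(config-if)# no shutdown

SW4(config)# interface vfc5
SW4(config-if)# bind interface ethernet 1/5
SW4(config-if)# switchport mode NP
SW4(config-if)# no shutdown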

Now for the checks and balances we as CCIE candidates strive for! What we should expect is that SW4’s VFC5 interface will FLOGI on one of the two available VSANs, using its fwwn (fabric world-wide name), up to SW3. So let’s check out SW3 and see if that is so.

understanding-npv-table-5

understanding-npv-table-6

So we do see a FLOGI on VSAN 20! Let’s make sure that it is the vfc5 interface’s fwwn off of SW4.

understanding-npv-table7

We now have verification that SW4’s VFC5 interface sent a FLOGI to log into SW3. This is the nature of NPV: the first FLOGI will always be the NPV switch’s NP port logging into the NPIV switch.

Now we need to configure the host ports. To properly simulate, we are going to place one port in VSAN 10 and the other in VSAN 20. We want to make certain that the FLOGIs are converted to FDISCs (fabric discovery messages) and proxied to the upstream NPIV device. What you will notice here is that we do NOT turn on the ‘fport-channel-trunk’ feature; this feature is intended for native FC only. By default, FCoE interfaces have trunk mode set to on and, in fact, we cannot disable this!

So now we will configure our two host-facing VFC interfaces. We essentially want something like this to happen within our topology:

understanding-npv-diagram-2

understanding-npv-table-8
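
Something like the following would accomplish this; note that it is the port VSAN assignment in the vsan database, not trunking, that determines where each FLOGI lands (the VFC numbers and FEX port bindings are my assumptions):

! sketch - VFC numbers and FEX port bindings assumed
SW4(config)# interface vfc101
SW4(config-if)# bind interface ethernet 101/1/1
SW4(config-if)# no shutdown
SW4(config-if)# exit
SW4(config)# interface vfc102
SW4(config-if)# bind interface ethernet 102/1/1
SW4(config-if)# no shutdown
SW4(config-if)# exit
SW4(config)# vsan database
SW4(config-vsan-db)# vsan 10 interface vfc101
SW4(config-vsan-db)# vsan 20 interface vfc102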

Now we can check our VFC interfaces.

understanding-npv-table9

understanding-npv-tabl10

So as of right now, we have nothing! What we need to do in our lab, as the C-Series is not actively pursuing storage, is reboot it. We have told the FC vHBAs to boot from SAN, and we have put in some fake pWWNs for them to try to boot towards. While this is not perfect, it will, if configured correctly, FLOGI initially while trying to boot up. Let’s go into the CIMC of that server. Here are my two vHBAs and their pWWNs, and the link we will click to reload the server.

screencap-npv-pt1

A couple of notable debugs, if you’re a debugging sort of person:

  • debug flogi init << to see the FLOGIs come in on the NPIV switch
  • debug npv flogi-fsm events << to see the FLOGIs from the NPV switch’s perspective

After waiting a couple of minutes for the boot-up process to commence, we should see some FLOGIs proxied towards our NPIV switch. Normally we will see one side FLOGI, and then 30 seconds to a minute later, the other side should log in as well. Once complete, we should see something resembling this on the NPIV switch:

understanding-npv-code view

Notice we now see our two vHBAs logged into their respective VSANs (they even have the same FCID in their respective VSANs in this example!), as well as the vfc5 interface’s initial login from SW4’s NP port.

If we look at SW3, we can now see that both VSANs are in the ‘UP’ state (remember, only VSAN 20 was up before, as nothing was active on VSAN 10), and we can see the proxied pWWNs.

understanding-npv-codeview2

So everything works as we expected within this topology. A couple of key takeaways here, however: the NPV switch no longer takes up a domain-ID, nor does it participate in zoning or FSPF routing. It’s basically a fabric extender on the storage network, and the nodes logging in through the NPV switch will be logged in on the upstream NPIV switch, using its domain-ID within their FCIDs. Utilizing ‘fcoe-npv,’ we did NOT have to reboot our switch or lose any configuration, nor did we have to enable trunking or the ‘fport-channel-trunk’ feature to allow VSANs to be tagged across our VFC E-port interfaces. Trunking is always on with regard to VFC interfaces.

In the next part, we will explore the native NPV/NPIV functionality, and some of the quirks that go along with it! If you want to read a little bit more about fcoe-npv, I have included some reference links below (as well as links for some information annotated within this post).

iPexpert Study Materials Covering NPV/NPIV

Product Update :: CCIE Collaboration 8-Hour Mock Lab Workbook (Vol. 2)


We’re excited to announce the full-scale launch of our CCIE Collaboration 8-Hour Mock Lab Workbook (Vol. 2)!  Written and tested by the world’s best Collaboration Instructor – Andy Vassar CCIE #22042 (Collaboration, Voice, and R&S), it’s a must have solution for any student that’s preparing for their Cisco Collaboration Certification.

Here’s what you get:

Five Complete 8-hour Mock Lab Scenarios

  • Scenarios you’ll encounter during your actual lab!

Detailed Solution Guide (DSG):

  • What to configure.
  • Why you need to configure it.
  • What to look for when configuring your labs.

Web-based access to our workbooks

  • Study on the go.
  • Download PDF’s.

Pathway to success:

  1. Purchase your Volume 2 Workbook here today
  2. Purchase your rack rental vouchers
  3. Start preparing for your Lab Exam!
  4. Reserve a spot in a Collaboration 5-Day Bootcamp and get direct training from our world-class instructor Andy Vassar!

After you’ve purchased your CCIE Collaboration 8-Hour Mock Lab Workbook (Vol.2), don’t forget to reserve rack time with our CCIE Collaboration Rack Rental Vouchers – time slots book fast.  Purchase your vouchers today!

Dial Peer Redundancy


The implementation of redundancy in any technology is of paramount importance, whether you’re studying to achieve a CCIE certification or designing a network for a client. So it goes without saying that this is a concept with which you should become intimately familiar.

In this blog, we’ll turn our focus specifically to redundancy in IOS dial-peers. Of course, dial-peers come in two different flavors: POTS and VoIP. POTS dial-peers deal exclusively with PSTN connectivity while VoIP dial-peers can be used for several purposes, as long as the communication takes place over IP.

Let’s take the example of a call routed inbound from the PSTN, destined toward the HQ CUCM cluster using the H.323 protocol. The configuration on the gateway appears as shown below.

Dial-Peer-Redundancy01
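
For reference, the configuration in the screenshot follows this general shape (a reconstruction; the PSTN number, translation rule, and PRI port below are placeholders, not the actual lab values):

! placeholders: translation rule, 4-digit DN range 1..., and PRI port
voice translation-rule 1
 rule 1 /^4085551001$/ /1001/
!
voice translation-profile PSTN-IN
 translate called 1
!
dial-peer voice 1 pots
 translation-profile incoming PSTN-IN
 incoming called-number .
 direct-inward-dial
 port 0/0/0:23
!
dial-peer voice 2 voip
 destination-pattern 1...
 session target ipv4:10.10.13.12
 ! preference defaults to 0 - this is the primary
!
dial-peer voice 3 voip
 destination-pattern 1...
 session target ipv4:10.10.13.11
 preference 1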

As you can see, we are accepting calls inbound from the PSTN using dial-peer voice 1 pots and translating the incoming called number to a 4-digit DN. From there, we have two separate dial-peers with the ability to send the call to the HQ CUCM cluster. As you know, the dial-peer with the lowest preference (default 0) is chosen as the first routing option. If, for some reason, that option is unavailable, the next possible dial-peer will be chosen to route the call. In this case, the HQ CUCM Subscriber server (10.10.13.12) is the first option, while the HQ CUCM Publisher server (10.10.13.11) is second.

Now let’s say that the server targeted by the primary dial-peer becomes inaccessible. We can simulate this in our network by creating an IP route to Null0, essentially dropping all traffic destined for this address into the “bit bucket”.

Dial-Peer-Redundancy02
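
The route itself is a single command (a /32 host route matching the Subscriber’s address):

ip route 10.10.13.12 255.255.255.255 Null0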

Since the Subscriber server is now inaccessible, inbound calls should be routed towards the Publisher server, right? Technically this is true, but by default, it will take a total of 15 seconds for that to happen. This is because the timer associated with establishing a TCP connection in the H.323 protocol suite defaults to this value. Furthermore, even if the user were willing to wait that long for the call to be processed, the ISDN PRI will prevent the wait time from exceeding 10 seconds due to the ISDN Call Proceeding (T310) timer. This means that, in this configuration, the call will never be routed to the intended destination when the primary dial-peer fails. See the debug isdn q931 command output (abbreviated) below for an ISDN timeout example.

Dial-Peer-Redundancy03

Based on these facts, the best course of action here is to modify the H.323 timer in order to make the failover happen more quickly. Keep in mind that it must be less than 10 seconds to avoid conflict with the ISDN T310 timer. The modification takes place within a voice class specifically geared toward the H.323 protocol. The h225 timeout tcp establish command will be used to set the timer; in this case, 1 second was chosen. Once the voice class is configured, it should be assigned to both VoIP dial-peers.

Dial-Peer-Redundancy04
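
In CLI terms, the change amounts to the following (the voice class tag of 1 is arbitrary):

! tag 1 is arbitrary; the class must be applied to both dial-peers
voice class h323 1
 h225 timeout tcp establish 1
!
dial-peer voice 2 voip
 voice-class h323 1
!
dial-peer voice 3 voip
 voice-class h323 1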

Once assigned, calls from the PSTN will now failover to the redundant dial-peer after 1 second. For confirmation, see the below debug isdn q931 command output (abbreviated).

Dial-Peer-Redundancy05

Just as with H.323, SIP faces the same challenge with failover time. Let’s now take the same example, but replace the H.323 dial-peers with SIP dial-peers.

Dial-Peer-Redundancy06

The settings associated with failover in this case are the “Retry INVITE” parameter and the “TRYING” timer. The number of times that the INVITE message is re-sent is determined by the first parameter, while the initial delay time is determined by the second. By default, the values are 6 retries and 500 ms, respectively. Furthermore, each time an INVITE message is sent, the delay before the next retry is doubled. This means that between the second and third re-INVITE, the delay will be 1 second rather than 500 ms. In total, if left unchanged, the call will take 63.5 seconds to fail over, which is obviously unacceptable.

Default Timers (TRYING – 500 ms, INVITE – 6 retries)

  • INVITE at time 00.000 Seconds – Total Time 00.000 (Original INVITE)
  • INVITE delay of 00.500 Seconds – Total Time 00.500 (1st Retry)
  • INVITE delay of 01.000 Seconds – Total Time 01.500 (2nd Retry)
  • INVITE delay of 02.000 Seconds – Total Time 03.500 (3rd Retry)
  • INVITE delay of 04.000 Seconds – Total Time 07.500 (4th Retry)
  • INVITE delay of 08.000 Seconds – Total Time 15.500 (5th Retry)
  • INVITE delay of 16.000 Seconds – Total Time 31.500 (6th Retry)
  • INVITE delay of 32.000 Seconds – Total Time 63.500 (Failover Occurs)

Now let’s modify the SIP settings to fail over before the ISDN T310 timer expires. Here the sip-ua on the router is configured with the retry invite parameter set to 2 and the timers trying parameter set to 150 ms.

Dial-Peer-Redundancy07
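
The equivalent CLI is short (configured globally under sip-ua):

! values per the text above: 2 retries, 150 ms initial TRYING timer
sip-ua
 retry invite 2
 timers trying 150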

After the configuration change, the timers should behave as outlined below.

Modified Timers (TRYING – 150 ms, INVITE – 2 retries)

  • INVITE at time 00.000 Seconds – Total Time 00.000 (Original INVITE)
  • INVITE delay of 00.150 Seconds – Total Time 00.150 (1st Retry)
  • INVITE delay of 00.300 Seconds – Total Time 00.450 (2nd Retry)
  • INVITE delay of 00.600 Seconds – Total Time 01.050 (Failover Occurs)

Dial-Peer-Redundancy08

Obviously, since we are referring to VoIP dial-peers, the timer and redundancy configuration for both H.323 and SIP applies to more than just inbound PSTN calls. It can also play a major role on any IP-to-IP gateway, such as CUBE.

I hope this blog was helpful to you in your CCIE Collaboration preparation! As always, give us a call and speak with a training advisor if you’re interested in purchasing our workbooks, attending a class, or have any questions regarding your preparation. Thanks again for reading and good luck in your preparation!

Learn More About Dial Peer Configurations

Building a CCIE Wireless v3 Home Lab on the “Cheap”


With the new version of the CCIE Wireless lab coming in September, many people will be looking to start preparing for their lab attempts. But one look at the hardware list for the exam shows that fully replicating the lab will be out of reach for just about everyone, unless your work already has a spectacular lab. But as with most every track, you can typically practice most things on a home lab without breaking the bank if you look at alternative options.

Before I list out my recommendations for a home lab, know that it will have some significant limitations. It’s not something that you could become fully lab ready on. Also, you will have a hard time following along with the workbooks that I’ll be putting out due to the restricted number of devices and the restricted feature sets available to them. But for self-directed study, this will allow you to practice a large portion of the v3 blueprint.

Recommended Hardware

These recommendations assume that you are starting a lab from scratch and don’t have existing equipment to pull from. If you have better equipment than what I recommend, use it.

WLCs

You will probably want to use a mixture of physical and virtual controllers in your lab. I’d get a single 2504 WLC and then use virtual WLCs to get to your desired number of controllers. Honestly, one vWLC is probably fine, but two would give you a smidge of extra flexibility. You wouldn’t want to go 100% virtual, as the vWLC carries a number of restrictions that would prohibit practicing a decent chunk of the Unified blueprint.

2504 WLCs can be bought in the US off of eBay for about $500-$550 each.

APs

You honestly don’t need 802.11ac APs for practicing. It’s not worth the money to be able to practice a couple of extra commands. I’d recommend the 1142 series APs as they can do most everything (unified or autonomous). You’ll want 2 at a bare minimum. But 4 would be nice if you want to avoid constantly switching modes between lightweight and autonomous (use 2 of each mode).

1142 APs can be bought in the US off of eBay for about $50-$60 each.

Switches

I wouldn’t bother looking to buy a switch that supports converged access. The cheapest option that I have found is roughly $2,400 for a single switch. You could rent an insane amount of rack time for that. So if we can ignore the whole converged access thing, then we only need to worry about the switching feature set. Something that supports basic IPv6 routing will do the trick. I’d go with a 3560 that supports PoE. You’d want a pair of them, which should allow you to practice most everything. You could throw in a 3rd if you want a little more flexibility, but it’s not required.

WS-C3560-24PS switches can be bought in the US off of eBay for $100 or less each.

Servers

The good news here is that you should be able to use demo/evaluation licenses for PI, ISE, and the MSE. So you just need to supply an ESXi server to run them on. If you don’t have one lying around, you can probably find a used one on eBay for around $500 – $700. It may not be the fastest thing. But as long as you can run the VMs, that’s the important part. If the server didn’t have enough CPU/RAM to run all 3 at the same time, you could just run 1-2 at a time and still practice everything.

Terminal Servers

While this isn’t required, it makes life so much easier that it’s well worth the cost. If you don’t know what a term server is, it’s a device that has one connection to your network and then multiple connections that plug into the console ports of your WLCs, APs, and switches. This allows you to access all console ports simultaneously and removes the need for your PC to be physically plugged into the console ports. It not only increases your efficiency by a large amount, it frees you up to access your lab from anywhere that has IP connectivity to the term server.

The Cisco 2511 is the classic term server of choice for CCIE students around the world. I’d recommend the AS2511-RJ to avoid the need to use octal cables for the console connections. Just be sure to either order one that comes with an Ethernet AUI transceiver, or buy one separately.

AS2511-RJ term servers can be bought in the US off of eBay for $150 – $200 each.

Total Cost for a Decent Home Lab

So what does all of this add up to? Here is the breakdown based on US eBay pricing. You may find cheaper prices, but I wanted to use numbers that appear fairly easy to guarantee without much work. Where there is a range, I split the difference.

  • 1 x 2504 WLC = $525
  • 4 x 1142 APs = $55 x 4 = $220
  • 2 x 3560 switches = $100 x 2 = $200
  • 1 x ESXi server = $600
  • 1 x term server = $175

Total cost = $1,720

So it isn’t necessarily cheap. But it is about as reasonable as you can get if you want to tackle the majority of the blueprint.

Is It Worth It?

That is a loaded and debatable question that could easily merit its own separate post, so I’ll leave that up to you to decide. Due to class schedules, we won’t have our v3 rack rentals available until late August/early September. So for now, a home lab is your only option for hands-on practice. If you decide to go that route, hopefully these recommendations will help give you the best bang for your buck.


Check Out Our v2 Wireless Racks
