Channel: CCIE Blog | iPexpert » CCIE Lab

Transparent ASA NAT


Today I would like to discuss transparent ASA NAT. The good news about this technology is that implementing NAT on a transparent firewall is actually the same as in routed mode; however, some additional steps may need to be taken in the Control Plane to make it work.

When you use NAT in transparent mode, the outgoing interface of a packet is determined by performing a MAC address lookup instead of a route lookup. This is because the firewall simply switches the frame based on L2 information instead of trying to route it (it uses the frame’s destination MAC to find the outgoing port in the CAM). There are, however, a few exceptions to this rule, and one of them may be NAT (I said “may” because it would only come into play if the translated traffic is more than one hop away from the ASA – then the firewall needs to perform a route lookup to find the next-hop gateway).

Take this topology for example :

1-1

Now let’s try to configure simple translations (NAT configuration will not be our focus – again, it’s the same as in routed mode) and think about changes needed (if any) in the Control Plane of the devices involved. I am going to break this example into four different scenarios to better illustrate the problem :

  1. Original & Translated addresses belong to the Local network (ASA’s subnet)
  2. Original address is Local, Translated is not
  3. Translated address is Local, Original is not
  4. Both Original and Translated addresses are not directly connected to the ASA

Scenario #1 (Local addresses) :

Here we will translate R2’s IP address (172.3.245.2) to something from that Local range, let’s say 172.3.245.200 :

1-2
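Since the configuration itself lives in the screenshot, here is a minimal sketch of what such a translation could look like on an 8.3+ ASA (the interface and object names are my assumptions):

```
object network R2-REAL
 host 172.3.245.2
 nat (inside,outside) static 172.3.245.200
```

With this in place the ASA proxy-ARPs for 172.3.245.200 on the outside segment.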

And we test :

1-3

Works great. No changes were needed beyond the regular NAT config, but note what happened in the background :

1-4

After the return Telnet packet was received from R5 (and de-translated to the real destination address 172.3.245.2), the ASA internally generated an ARP Request for that address. It was trying to learn which interface connects to 172.3.245.2, so it would know where to send the frame after the de-translation and with what L2 destination MAC. This is because when the original packet was translated (the Telnet SYN segment from R2 to R5), the ASA also changed the L2 source to itself (it Proxy-ARPs for .200) :

1-5

Then the reply (before and after ASA processing) :

1-6

Note that the ASA did not have to learn R5’s interface and MAC, because packets from R2 to R5 are bridged based on the original destination MAC – R5’s address, which is already in the CAM.

Scenario #2 (Local Original & Non-local Translated addresses) :

Here we will translate R4’s IP address (172.3.245.4) to something Remote (Non-local) – like 22.22.22.22 :

1-7

Is this now enough for R4 to communicate with the outside bi-directionally? Not really. The problem is that our upstream device (R5) does not know how to get to 22.22.22.0/24 :

1-8

OK, we will fix it by adding a route for 22.22.22.0/24 on R5 via ASA (you could also point to R2) :

1-9
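For reference, the route added on R5 would look something along these lines (IOS; the next-hop shown is an assumed address for the ASA’s management/BVI IP – as noted above, pointing at R2 would also work):

```
ip route 22.22.22.0 255.255.255.0 172.3.245.10
```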

Now a similar thing occurs as in the previous case. When the ASA gets a reply (destination 22.22.22.22) it performs a de-translation and then, since the address turns out to be “local”, it generates an ARP request to figure out the outgoing interface and destination MAC for the frame (no RIB lookup) :

1-10

Scenario #3 (Non-local Original & Local Translated addresses) :

Here we will translate the Remote address 10.2.2.2 (connected to R2) to something from the Local subnet – e.g. 172.3.245.222 :

1-11

Initially we don’t have connectivity, even though the ASA technically does the translation and de-translation (debug nat) :

1-12

It does not know where (i.e. to which MAC) to send a packet with an L3 destination of 10.2.2.2 :

1-13

What do we do? A static route comes to the rescue :

1-14
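A sketch of this scenario’s configuration, with the interface and object names as assumptions on my part:

```
object network R2-REMOTE
 host 10.2.2.2
 nat (inside,outside) static 172.3.245.222
!
! Tell the ASA how to reach the non-local real address
route inside 10.2.2.0 255.255.255.0 172.3.245.2
```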

So what happened here? After the de-NAT took place, the ASA had to learn the interface and MAC corresponding to 10.2.2.2. This address is Non-local, so the ASA had to use its routing table to learn the Local next-hop. After the next-hop was found (172.3.245.2, i.e. R2), it generated an ARP request to get the corresponding MAC (same as in our previous cases).

Scenario #4 (Non-local addresses) :

The translation configured in this scenario will be for Remote address 10.4.4.4 (connected to R4) and we will use 44.44.44.44 as the translated address :

1-15

All right, so in addition to the NAT config we definitely want to let the ASA know about 10.4.4.0/24, but this time our upstream router (R5) must also learn about the NAT pool :

1-16
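Putting the pieces together for this scenario, something along these lines (interface, object names, and the ASA’s address on the segment are assumptions):

```
object network R4-REMOTE
 host 10.4.4.4
 nat (inside,outside) static 44.44.44.44
!
! On the ASA: reach the non-local real address via R4
route inside 10.4.4.0 255.255.255.0 172.3.245.4
!
! On R5 (IOS): reach the NAT pool via the ASA (assumed address)
ip route 44.44.44.0 255.255.255.0 172.3.245.10
```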

This case was almost the same as #3 – we just had to add one more route, on the upstream R5.

As you can see, even though NAT configuration in transparent mode is the same as in routed mode (one limitation here is that you cannot configure interface PAT, since the interfaces don’t have addresses assigned), some routing changes might be needed on the ASA and/or the upstream device to ensure bi-directional connectivity.

– Piotr Kaluzny CCIE #25665 (Security) / CCIE Security Instructor – iPexpert, Inc.

About Piotr: Piotr, who holds an MSc in Computer Science, has been in the networking industry for over seven years, working in several different capacities within enterprise Cisco environments. His responsibilities included, but were not limited to, implementation, design, and level three technical support. Piotr already has an extensive background as a Technical Instructor – he has been designing and developing Cisco training solutions and teaching CCIE classes for the past four years.


Wireless Configuration Method Speed Test Shootout - Part 1


If you have ever taken a class from me, or watched some of my videos, you may have heard me talk about the different methods available for configuring wireless devices in the CCIE Wireless lab.

 

Specifically, you may have heard that I don’t consider finding the absolute most efficient method of configuring devices to be the holy grail of CCIE studies.  I don’t advocate that people become experts at using WCS templates or learn how to configure everything in the CLI on a WLC.  The reason that I don’t encourage people to focus on developing these (typically) alternate configuration methods is that I contend the time saved by using them is generally small.  So the time spent developing these skills has a much lower ROI than time spent learning other important skills, like troubleshooting and brushing up on technologies where you are weak.

These recommendations have primarily been based on personal educated guesses and not on concrete data.  So I decided to run tests on the speed of different configuration methods across a number of common configuration scenarios that are likely to be seen in the lab.  This should give us some real numbers to base our configuration decisions on.

Now, before we get to the tests, I want to say that configuration speed isn’t the only reason to choose one method over another, but it should be factored into the overall equation.  Here is a short list of other reasons to select one method over another.

  • Learning curve- How much time/effort will it take to develop a config method to a competent level?
  • Accuracy- How prone is the method to introducing mistakes?  The more typing and clicking, the more chances there are to make a mistake.
  • Verification- How easy is it to see that your configurations were applied correctly?
  • Spotting bad pre-configurations- How likely are you to see existing pre-configurations that may introduce issues/troubleshooting?

So while speed of configuration is important, these other reasons need to be taken into consideration as well.  Put them all together and you should be able to make a good decision about which method to use.

Also, don’t think that I am recommending choosing the “best” method for each individual technology.  Most people tend to use one method as their primary method and use the other methods only where required (or where the benefit is significantly better).  That tends to make things simpler.  So you may want to see which method is the best for you on the whole and focus on that, then decide if there are any individual technologies where you will deviate from your standard.

Before we look at the first speed test, let’s look at the testing methodology that I used when running my tests.

Testing Methodology

In the lab, we have 3 different configuration options: CLI, GUI, and WCS.  Not every device has every method available.  I will be focusing my tests on devices that have at least 2 methods available (otherwise there is no potential for comparison), so that means the tests will be run on WLCs and Autonomous APs.  Let’s talk about how I tested each method.

CLI

When testing the CLI, I used a few different methods.  Any time I needed to configure the same thing across more than one device, I used notepad: I would write the lines of config in notepad and then copy/paste them into each device.  This is not only the fastest way to configure multiple devices, it also drastically increases your ability to apply configs consistently across devices.

I also leveraged notepad in a few other instances.  If I was configuring something on a single device where there was a lot of repetition, I tried using notepad to see if copy/paste or find/replace would speed things up vs. not using it.  I also used notepad as a reference.  I am not a CLI wizard at everything, particularly on WLCs.  I rarely use the CLI to configure things on WLCs, so my skills are not as developed there.  I also configure a few select things in the GUI on autonomous APs.  So where I was not able to configure things from memory in the CLI, I used notepad to keep the lines of config in view.  That way, I could simply look at the config that should be applied and then quickly type it out.  That mostly bridged the gap to get me close to “expert” level on the CLI.

GUI

When configuring from the GUI, I always started at the home page to include the process of getting to the appropriate configuration screens.  The reason for this was to make a fair comparison to the CLI method where you are always ready to configure anything.  I used tabbed windows since that is what I am most used to.  But this is basically the same as what I would use in the real lab where I had separate windows overlaid in a cascading fashion.  In both cases, I move to the next device with a single click of a mouse.  I don’t have full view of every window at the same time.  I also configured things as I recommend in my classes to help reduce instances of config errors across multiple devices.  I configure things mostly 1 screen at a time, and only a few settings at a time.  It may not be the fastest method, but it is my recommended method.

WCS

Like the GUI method, I always started at the home page to include the time spent getting to the appropriate configuration screen.  I also added some extra configurations in certain instances when there was a discrepancy between default settings in a WCS template and default settings in the GUI/CLI where I felt that setting was important.  For instance, WLAN templates have WMM disabled by default, and I think it is important to have it enabled unless otherwise asked.  Lastly, I always made sure that the WLCs and WCS were in sync with the current WLC configuration to give a best-scenario example.

If you want to watch the actual configurations, you can check out the companion video to this article on our YouTube channel.  It shows how I arrived at the configuration speeds and the methods that I used.  You may be able to pick up a few tips or tricks for faster configurations by watching how I do things.

Speed Test 1- Configuring WLANs on WLCs

In the lab, you will be configuring numerous WLANs on WLCs.  You will also be configuring many of them across multiple controllers.  So this seems like a great place to start as it has some of the highest potential to show large differences in configuration speeds.  I chose 2 different WLAN configurations to test. The first WLAN was a Guest style WLAN configured across 4 controllers.  The Guest WLAN included the following typical guest settings.

  • WLAN was enabled
  • The anchor WLC used VLAN 11 and the foreign WLCs used the Management interface
  • No layer 2 authentication
  • Layer 3 web-auth authentication
  • Bronze QoS profile
  • Session timeout of 8 hours
  • Peer-to-peer blocking
  • DHCP assignment required

I didn’t get into additional steps like setting up the mobility tunneling; I just wanted to test the creation of the WLAN.  The results are shown below in order of fastest to slowest, in minutes:seconds.

  • WCS = 1:05
  • CLI = 1:09
  • GUI = 1:47

Keep in mind there is probably a margin of error of 10-15 seconds.  Some people may be able to do any given method slightly faster, but the numbers are appropriate for reasonably skilled people.  So what we are mainly looking for are significant differences; if times are within 10-20% of each other, I generally consider them equal.  In this case, the WCS and CLI methods were equal, but the GUI was a good 60% slower.

The second WLAN was a standard data WLAN configured across 2 controllers.  The Data WLAN included the following settings.

  • WLAN was enabled
  • Interface VLAN13
  • WPA2/AES with 802.1x for the layer 2 security
  • Specified a RADIUS server
  • Band select enabled
  • Client load balancing enabled
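For reference, the data WLAN above might be applied in the WLC CLI roughly like this (the WLAN ID, profile/SSID names, and RADIUS server index are assumptions on my part; WPA2/AES with 802.1x is the default security on a newly created WLAN):

```
config wlan create 2 DATA DATA
config wlan interface 2 vlan13
config wlan radius_server auth add 2 1
config wlan band-select allow enable 2
config wlan load-balance allow enable 2
config wlan enable 2
```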

So it was a fairly generic 802.1x WLAN with a couple of features in the Advanced tab thrown in.  Here are the results in order of fastest to slowest.

  • CLI = 0:45
  • GUI = 0:46
  • WCS = 1:00

Here we see that the GUI and the CLI are essentially the same and WCS is about 30% slower.

Now, we will be configuring many different WLANs during the lab.  So let’s use these 2 WLANs as templates and see what things look like after configuring a mixture of them.  Here are 3 scenarios of different combinations of these templates that you could potentially run into in the lab.  The last line shows the average time across these scenarios.

Scenario              GUI        CLI        WCS
1 Guest and 6 Data    0:06:23    0:05:39    0:07:05
2 Guest and 4 Data    0:06:38    0:05:18    0:06:10
5 Data                0:03:50    0:03:45    0:05:00
Average               0:05:37    0:04:54    0:06:05

Now look at the difference.

  • CLI = winner!
  • GUI = 0:43 (15%) slower than the CLI
  • WCS = 1:11 (24%) slower than the CLI

Are you surprised?  I know I was.  Anecdotal wisdom is that WCS is the fastest way to configure multiple things across multiple controllers, but here we see that it is actually the slowest method as far as WLANs are concerned.  Let’s analyze these results a bit and then I’ll give you my conclusion.

Test Analysis

As we look at the different config methods, we can start to draw some basic conclusions about the different configuration methods.

  • In general, it takes less time to configure a line item in the GUI than it takes to configure the same line item in the CLI (clicking a box is faster than typing a command).
  • The CLI suffers no up-front time penalty to get started with a configuration.
  • The GUI suffers a small up-front time penalty to get started with a configuration.
  • WCS suffers a larger up-front time penalty to get started with a configuration.
  • The CLI only takes an extra 1-2 seconds to replicate configs across each additional WLC
  • The GUI configuration has little time saved configuring across multiple WLCs vs. configuring a single WLC (it takes roughly twice as long to configure 2 WLCs vs. 1 WLC)
  • WCS really takes no additional time to replicate configs across each additional WLC

Based on this, we can see that the CLI config times are primarily driven by the number of lines of config on a single WLC since it suffers no initial time penalty and almost no config replication penalty.  Config times on the GUI are primarily driven by the number of options configured multiplied by the number of WLCs configured.  The initial time penalty is negligible.  WCS config times are primarily driven by the number of options configured on a single WLC plus the initial time penalty of opening up the appropriate config template.

Conclusion

As far as configuring WLANs on WLCs goes, the CLI was the clear winner.  It is fast for configuring an individual WLAN and also fast to replicate across controllers, so it’s a good method regardless of the number of configurations per WLAN or the number of WLCs involved.  The GUI is a good method for smaller numbers of WLCs, but loses out across larger numbers of WLCs.  WCS does best (comparatively) across a larger number of WLCs but fades slightly with smaller numbers.

While I was surprised to see WCS come in last when you look at the time that it takes to configure an entire lab’s worth of WLANs, I was not too surprised to see that the time difference wasn’t very large.  There was a 1:11 difference on average and only a 1:26 difference in the worst case across the lab scenarios.  So as far as WLANs are concerned, your choice of configuration method is fairly inconsequential in terms of time; 1 minute in the lab isn’t that big of a deal for a technology of this scope.

In the next round of speed tests, we will look at some more WLC configurations and see if the times continue to be fairly similar across the board.

iPexpert’s Newest “CCIE Wall of Fame” Additions 10/17/2014


Please join us in congratulating the following iPexpert clients who have all passed their CCIE lab!

This Week’s CCIE Successful Stories

  • Uwe-Heinz Janssen, CCIE #45035 (Wireless)
  • Jaromir Likavec, CCIE #45051 (Wireless)

This Week’s CCIE Testimonials

Uwe-Heinz Janssen, CCIE #45035 Wrote:
“Many, many thanks to iPexpert for the great study materials and to Jeff Resink. For preparation I used CCIE Wireless BLS self-study material and attended: CCIE Wireless 5 Day v2.0 Bootcamp – vClass and CCIE Wireless OWLE v2.0 Bootcamp – vClass Class. I strongly recommend iPexpert for CCIE Wireless lab preparation.”

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

CCIE Data Center :: Private VLAN Trunks (Part 1) – The Promiscuous Trunk


I get a lot of inquiries regarding private VLAN trunking in my boot camps. The theory behind them is actually quite simple. So in this first post, we will discuss the first of two trunk types, the promiscuous trunk.

A little background first, however. When we deal with private VLANs, even in NX-OS, we still have primary and secondary VLANs. The primary VLAN is essentially the subnet’s representation to the outside world (outside of its L3 subnet), and the secondary VLANs are really the ones that we are trying to place some restrictions around; they include isolated and community VLANs. I am going to assume some prior knowledge on the subject, but here is a graphical depiction of how the communications can actually take place:

 

DC251

So when we start talking about private VLAN promiscuous trunks, I want you to imagine a scenario where you have a router-on-a-stick that has no idea about private VLANs or the mappings associated with them. Something that looks a little like this:

 

DC252

The issue arises with the router…you see, it’s going to be placed off of an 802.1q trunk link. It has no idea of private VLANs and their associated mappings, so its configuration will look something like this:

DC253

Also, for now, let’s assume a default promiscuous port configuration facing the router:

DC254

Let’s say, hypothetically speaking, that a packet from one of the community servers arrives at the router. It will arrive tagged in VLAN 200, which won’t match any of the sub-interfaces on the router (based on tag), and it will be dropped.

 

DC255

This poses a problem…the first one being that you are actually using a router on a stick…but I digress (j/k!).  There has to be a more graceful way to do this and make it function. We would need to essentially remove that VLAN 200 tag and replace it with the primary VLAN tag, so that the frame can arrive on the proper sub-interface on the router and be routed to its destination.  This is exactly what promiscuous trunks do! We need to convert the promiscuous port configuration on the interface facing the router into a promiscuous trunk configuration:

DC256

What these statements do is tell the switch that if traffic from one of those secondary VLANs (200-201) is destined for that trunk port, then on egress the switch needs to remove the secondary VLAN tag and replace it with the primary VLAN tag (100).
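For reference, a promiscuous trunk along the lines described above might be sketched in NX-OS like this (the interface number is an assumption, and `feature private-vlan` must already be enabled):

```
interface Ethernet1/1
  switchport mode private-vlan trunk promiscuous
  switchport private-vlan trunk allowed vlan 100
  switchport private-vlan mapping trunk 100 200-201
```

On egress toward the router, frames tagged with secondary VLANs 200 or 201 are re-tagged with primary VLAN 100, consuming two of the available primary->secondary pairs on the trunk.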

 

DC257

When the packet comes back from the router it will of course be tagged in VLAN 100, but the switch KNOWS about the private VLANs and all of the mappings and associations therein…and the traffic will pass as normal. The nice thing about private VLAN trunks is that we can still carry normal dot1q traffic across the links, and that traffic will flow normally. We can have up to 16 primary->secondary VLAN pairs across any given trunk, and the statements I provided above would constitute 2 (100-200 and 100-201). So plan carefully.

I don’t anticipate this is something that we will see too often, but it is definitely good to know about! If you have any questions please post them in the comments and I will get back to you! In part two of this series I will cover the other type of PVLAN trunk, the isolated or secondary trunk. Stay tuned!

-Jason

iPexpert’s Newest “CCIE Wall of Fame” Additions 10/24/2014


Please join us in congratulating the following iPexpert client who has passed the CCIE lab!

This Week’s CCIE Success Story

  • Ivan Sanchez, CCIE #41442 (Voice)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

Cisco’s CCIE Service Provider v4 – First Look


For years now I have joked with friends and students that I think Cisco has a special employee whose job is to watch what I’m doing and draft policy or changes to certification tracks accordingly. Just recently this became more than a joke.

I’m actually searching my studio/office for bugs (not the 6 legged kinds). No more than two months ago I announced to my family and my co-workers that I was going to pursue the CCIE Service Provider track. Let’s face it, once you get your first CCIE (in my case Routing and Switching), you find that you have more time on your hands than you can deal with. Additionally, those nagging voices in the back of your head keep you wondering: “Was it a fluke? Did I get an easy test? Did the proctors make a mistake and accidentally pass me?” Seriously, I promise all these thoughts will go through your head (or maybe I’m just that paranoid).

It won’t be long before you start wanting to take on another track. I pursued my second CCIE in Data Center. I mean let’s look at this critically; I already had my RS CCIE, and that was equal to about one third of the topics in Data Center, plus about 3 years field experience with Nexus. So by my reasoning I was already close to halfway done with DC without even cracking a book. Four months later, using iPexpert’s CCIE Data Center Videos and workbooks I had a second plaque hanging on my wall, where once I honestly thought I would never have even one.

My Data Center CCIE was finished seven months ago, and again I found myself asking what to do next. I looked at some other vendor storage and virtualization certifications, but being a hardcore Cisco guy, I found my eye drifting toward another CCIE. As I said earlier, Service Provider looked interesting. From a thirty-thousand foot perspective it seemed like it was taking everything I learned as a CCIE RS and going about twice as deep. But it was technologies I loved, like BGP and MPLS. Add to that topics like MPLS TE, Metro-Ethernet and Integrated ISIS. So I dutifully, even excitedly, dove into the blueprint and started studying.

Here is where my tale comes full circle. Not even two months into my preparation, Cisco announced, with no fanfare or glitz mind you, that the Service Provider track blueprint was changing and the new CCIE SP v4 exam was being ushered in. Well, ‘ushered’ may be too strong a word, but by May 2015 the SPv3 exam will be retired. Groan!!!! This means again I find myself a victim of a blueprint change. I should not have let the Data Center CCIE lull me into such a false sense of security. Rather than lament the lost time spent studying topics like ATM, Frame-Relay, and a dozen others, I instead decided to push aside the paranoia and embrace the change.

So I’ll share my observations here with you guys.

First the good news: we have gone from 7 topic domains to 6. Why is that good news? Well, think about it: that’s one less topic domain we need to concern ourselves with. Add to that the fact that two topics have been completely removed. Yes, it’s true, we no longer need to concern ourselves with Managing Services Traversing the Core and Service Provider Network Implementing Principles. The old individual topics of L3VPN Technologies and L2VPN Technologies have been combined into a single domain called Service Provider-Based Services. For ease of topic management, the huge domain of Core IP Technologies has been broken into two more manageable sections: Core Routing, and High Availability and Fast Convergence. The last change combines two previous sections into one topic domain: Access and Aggregation. These changes result in a topic domain listing like the following:

blog1

So the good news doesn’t end with the reduced number of topic domains. The new domains illustrated on the right side of the table translate to sections that are better divided, smaller in scope, and represent an equal balance between the individual domains. How do I know that there is indeed a better balance? Well, let’s look at the weighting that Cisco has assigned each of these six domains in the blueprint.

  • (10%) Service Provider Architecture and Evolution
  • (23%) Core Routing
  • (23%) Service Provider Based Services
  • (17%) Access and Aggregation
  • (10%) High Availability and Fast Convergence
  • (17%) Service Provider Security, Service Provider Operation and Management

To me this percentage breakout screams that each topic is going to be almost equally weighted with the others, especially the core technical topics. This translates into an even playing field for new candidates coming to the Service Provider track, especially now that many of the legacy technologies are being removed. Technologies are being taken out of the exam, you ask? Did I forget to mention that? I would swear I mentioned topics like Frame Relay and ATM. Well, I guess we need to clarify this. The topics removed from the exam are:


  • Describe, implement, optimize, and troubleshoot packet over SONET
  • Describe, implement, optimize, and troubleshoot IP over DWDM
  • Describe, implement, optimize, and troubleshoot SP high-end products
  • Describe, implement, optimize, and troubleshoot SONET/SDH connections
  • Describe, implement, optimize, and troubleshoot T1/T3 and E1/E3 connections
  • Describe, implement, optimize, and troubleshoot IP over DSL to the customer
  • Describe, implement, optimize, and troubleshoot IP over wire line to the customer
  • Describe, implement, optimize, and troubleshoot IP over cable to the customer

Well, good riddance I say. SONET and E1/T1 are long dead in my professional world. The one topic whose removal surprised me was DSL; especially in my “more rural” consulting world, I still see lots of DSL. But its removal opens the door for more engaging, future-proof technologies. I mean, let’s not lose sight of the fact that Ethernet and Gigabit-Ethernet WAN are here to stay.

It’s not all Blue Skies and Rainbows though, people. Cisco has decided to make this track a jam-packed thrill ride of features, technologies, and capabilities. Just look at the topics that have been added to Service Provider version 4:

  • Describe, implement, and troubleshoot advanced BGP features, for example, add-path and BGP LS
  • Describe, implement, and troubleshoot mLDP (including mLDP profiles from 0 to 9)
  • Describe and optimize multicast scale and performance
  • Describe, implement, and troubleshoot MPLS QoS models (MAM, RDM, pipe, short pipe, and uniform)
  • Describe, implement, and troubleshoot MPLS TE QoS mechanisms (CBTS, PBTS, and DS-TE)
  • Describe, implement, and troubleshoot E-LAN and E-TREE, for example, VPLS and H-VPLS
  • Describe, implement, and troubleshoot Unified MPLS and CSC
  • Describe, implement, and troubleshoot LISP
  • Describe, implement, and troubleshoot GRE- and mGRE-based VPN
  • Describe, implement, and troubleshoot IPv6 transition mechanism, for example, NAT44, NAT64, 6RD, and DS lite
  • Describe, implement, and troubleshoot end-to-end fast convergence
  • Describe, implement, and troubleshoot multi-VRF CE
  • Describe, implement, and troubleshoot Layer 2 failure detection
  • Describe, implement, and troubleshoot Layer 3 failure detection
  • Describe, implement, and troubleshoot control plane protection techniques (LPTS and CoPP)
  • Describe, implement, and troubleshoot logging and SNMP security
  • Describe, implement, and troubleshoot timing, for example, NTP, 1588v2, and SyncE
  • Describe, implement, and troubleshoot SNMP traps, RMON, EEM, and EPC
  • Describe, implement, and troubleshoot port mirroring protocols, for example, SPAN, RSPAN, and ERSPAN
  • Describe, implement, and troubleshoot NetFlow and IPFIX
  • Describe, implement, and troubleshoot IP SLA
  • Describe, implement, and troubleshoot MPLS OAM and Ethernet OAM

A few notes on those additions: add-path is important for providing redundancy in RR deployments, BGP-LS is used for seamless MPLS, and mLDP is used for transporting multicast over MPLS.

Yeah, look at that list. It’s an alphabet soup of acronyms and abbreviations. Some of these I was clueless about; others, like LISP, I remember from my Data Center studies. I also remember saying “thank goodness LISP is not on the CCIE Data Center blueprint,” and voilà, it’s here in SPv4 (testing, testing… can you hear me, Cisco?).

All joking aside, there isn’t a technology on this list that you should not expect to see in a modern or transitional service provider environment. The other element to this is that Cisco is going to demand that we know these topics, protocols, and features for both IPv4 and IPv6. In fact, so much emphasis will be placed on IPv6 that passing the Service Provider version 4 exam will earn you the IPv6 Forum Gold Certification and permission to use that logo. This tells me that this exam will be dual stack, and very future-focused. That means we will need to understand IPv6’s impact on our environments just as well as we know IPv4. We have to face facts: IPv6 is coming to the world at large, but for us Service Provider Candidates it is here like gangbusters.

This next part is what most out there will consider the most troubling piece of news yet. There is no way to be gentle with this tidbit. So using the “pull the bandage off in one swift motion” method seems to be the most appropriate. So with that said, the new CCIE Service Provider Exam will emulate the current CCIEv5 Routing and Switching exam exactly.

SPv4 will now have a Troubleshooting, Diagnostics, and Configuration section. Ouch!!!

I rolled my eyes too.

But here it is black and white from Cisco themselves:

blog2

Notice that in the SPv4 exam the Diagnostic section is 60 minutes long. What that means exactly, I don’t know. The RSv5 exam has 30 minutes for 3 questions; maybe it’s 6 questions in SP now, or maybe it’s 3 very hard questions. I don’t know, but I’m not looking forward to it.

At the end of the day all we really want to know is what it’s going to take to pass the exam. Cisco goes so far as to tell us what it expects from us in each of the individual sections in this graphic:

blog3

Note that each section is cited as having an individual passing score, but to pass the lab you must meet or surpass an overall cut score. This is identical to the CCIE RSv5 exam, where you must obtain a passing score in each of the three sections while still needing a quantitatively higher gross score to pass the lab overall. It will be interesting to see how this plays out.

Right now the information is so new that sharing much more than I have would be little better than spreading rumors and speculation. As I learn more I will keep you guys updated. Information I share will be verified and validated before I post it. My initial thought on this new exam is that it will be far more relevant to modern Service Providers and, as such, will be more meaningful as a professional certification. I also believe this particular track will be much more challenging to pass (on par with, if not harder than, RSv5).

Let me know what you think.

-Terry

iPexpert’s Newest “CCIE Wall of Fame” Additions 10/31/2014

Please join us in congratulating the following iPexpert clients who have passed the CCIE lab!

This Week’s CCIE Success Stories

  • Alex Burger, CCIE #45253 (Wireless)
  • Jeffrey Wang, CCIE #44712 (Wireless)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

Native Call Queuing

Note: Blog best viewed w/ Google Chrome.

Native Call Queuing is new in CUCM 9.x and is a feature that is commonly misunderstood by students.  Just the name itself can cause confusion.  One might look at the name and think logically, “This feature is built inside CUCM and queues calls natively?!  Return our UCCX servers immediately, we won’t be needing them anymore!”  Not so fast; it doesn’t exactly work that way.  It does, however, give you an option when agents are truly busy and assisting other customers.

Let’s run through the configuration and do some testing on it.  We will need to log into the HQ CUCM server and create a line group, hunt list, and hunt pilot.  That’s the first thing to note: Native Call Queuing utilizes the existing CUCM hunting structure that we’ve all been using for years now.  So we can now all collectively breathe a sigh of relief in the name of familiarity.

Navigate to Call Routing –> Route/Hunt –> Line Group and click on the Add New button to create a new Line Group.

NCQ-01

Give the new Line Group a name (e.g. NCQ_LG) and select a distribution algorithm.

NCQ-02

Next, add members to the Line Group by selecting the extensions and clicking the Add to Line Group button.  Click the Save button to complete the configuration.

NCQ-03

Next, we need to create a Hunt List to take advantage of the Line Group that was just created.  To accomplish this, navigate to Call Routing –> Route/Hunt –> Hunt List and click the Add New button to create a new Hunt List.  Of course, we need to give it a name (e.g. NCQ_HL), select a CUCM Group, and enable the Hunt List.

NCQ-04

After this is complete, hit the Save button to be redirected to the Hunt List configuration page.  Here, we can select the previously configured Line Group (NCQ_LG) by clicking the Add Line Group button and associating it with the Hunt List.  Remember to click the Save button when finished.

NCQ-05

Next, navigate to Call Routing –> Route/Hunt –> Hunt Pilot and click the Add New button to add the Hunt Pilot configuration.  Assign a directory number (e.g. 1111) and place it in an appropriate partition.  Next, assign the Hunt List created in the previous step to allow it to use the associated configuration.  Also, make sure to un-check the “Provide Outside Dial Tone” checkbox since this is an internal number.

NCQ-06

Up to this point, it has been “business as usual” when it comes to “normal” hunting configuration.  We’ve configured the combination of Line Group/Hunt List/Hunt Pilot successfully.  Now, let’s examine the Native Call Queuing aspect of this configuration.

Scroll down to the section labeled “Queuing” in the Hunt Pilot configuration page.  The first logical option available is the “Queue Calls” checkbox.  Once this is selected, Native Call Queuing is switched on!  But don’t get too excited just yet, there are still some other configurations that need to be done.

NCQ-07

Let’s run through the settings that are available.

Maximum Number of Callers Allowed in Queue – Pretty self-explanatory; 32 is the default and 100 is the maximum.

When Queue is full – When the maximum number of queued callers is reached, the call can either be disconnected or routed to a specific destination.

Maximum Wait Time in Queue – Once again, self-explanatory; 900 is the default and 3600 is the maximum (in seconds).

When maximum wait time is met – Same actions as above; either disconnect or forward somewhere else.

When no hunt members answer, are logged in, or registered – In these states, we can perform the same actions as above; either disconnect or forward to another destination.

NCQ-08

When configuring, you might see the last bullet point from above and get a little frustrated.  In the state where nobody answers the call, nobody is logged in, or where no participating phone is registered, the only option is to disconnect or forward the call.  Now you’re thinking, “Wait a minute, I thought we were queuing calls?!”  Unfortunately, we aren’t exactly in this case.  In these situations, Native Call Queuing behaves exactly as a “normal” hunt pilot configuration would.  There is no option to queue calls when they are in this state.  So what does Native Call Queuing actually do for us then?  The answer is that it will only provide queuing functions when all members of the line group are busy (on an active call).
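To summarize when queuing actually applies, the behavior above can be sketched as simple decision logic. This is a hypothetical model for illustration only; it is not CUCM code, and the function and field names are made up:

```python
# Hypothetical model of Hunt Pilot call handling with Native Call Queuing
# enabled. Illustration only; not CUCM code.

def handle_call(members, queue, max_queue=32):
    """Return the action taken for a new call to the hunt pilot."""
    registered = [m for m in members if m["registered"]]
    logged_in = [m for m in registered if m["logged_in"]]
    if not logged_in:
        # Nobody answers / logged in / registered: queuing does NOT apply.
        # The only options are disconnect or forward, as with a normal pilot.
        return "disconnect_or_forward"
    idle = [m for m in logged_in if not m["busy"]]
    if idle:
        return f"ring:{idle[0]['dn']}"
    # All line group members are busy on active calls: this is the only
    # state where Native Call Queuing actually kicks in.
    if len(queue) < max_queue:
        queue.append("caller")
        return "queued"
    return "queue_full_disconnect_or_forward"

print(handle_call([{"dn": "1001", "registered": True,
                    "logged_in": True, "busy": True}], []))  # queued
```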

You may have noticed that we skipped a parameter when running through the configuration settings.  And you would be correct!  The parameter we skipped was called “Network Hold MOH Source & Announcements”.  This parameter controls what the user hears when the actual queuing takes effect.  If you pull down the dropdown box, you’ll notice that you can select a music-on-hold audio source.  Being that this is the default system configuration, we can only select SampleAudioSource at this point.

NCQ-09

We can add configuration to the SampleAudioSource file once selected by navigating to Media Resources –> Music On Hold Audio Source.  After clicking on the SampleAudioSource file, a list of options about the file appears.  Scroll down to the bottom of the page to reveal the “Announcement Settings” header.  Within this header, there are settings related to Native Call Queuing that can be applied.

NCQ-10

The first setting, called “Initial Announcement” is a dropdown box allowing the selection of a Welcome greeting as the calling party enters the queue.  A selection that might make sense here would be “Welcome Greeting Sample”.

NCQ-11

The next setting controls when the announcement is actually played.  There are two options: “Always” or “Only for queued calls”.  Note that while this setting has good intentions, it does not tend to work very well.  By that, I mean that the Initial Announcement will play regardless of the setting (in most cases).  Obviously, this was not intended by Cisco, but it happens nonetheless.

NCQ-12

Next, the “Periodic Announcement” option selects a file that will be played to callers waiting in the queue for the time specified in the “Periodic Announcement Interval”.  A good option here would be the “Wait In Queue Sample” along with an interval of 30 seconds (default).

NCQ-13

Once these settings are configured, callers attempting to connect to the Hunt Pilot (DN 1111) will hear the configured queue announcements—provided, of course, that all line group members are busy.  One quick method that I use in testing is to simply have one line group member call the other line group member.  At that point, I can call from the PSTN phone or a phone on another cluster (provided that the proper call routing is configured) to test the configuration.

So there you have it—Native Call Queuing in all its glory!  In all seriousness, I hope that this was a helpful resource for you in your CCIE Collaboration studies.  As always, give us a call and speak with a training advisor if you are interested in taking your preparation to the next level by attending one of my CCIE Collaboration bootcamps.  Good luck and happy labbing!

- Andy


DMVPN QOS (Spoke-by-Spoke Basis)

In one of the most recent vLecture sessions we discussed the importance of being able to apply Quality of Service mechanisms across DMVPN deployments.

Note: Blog is best viewed w/ Google Chrome.

First and foremost, we need to recognize that most organizations using DMVPNs are doing so to reduce the expense associated with leased-line connectivity; as a result, we are seeing significant deployment of DMVPNs across the internet. It immediately becomes important to understand that not all service providers are created equal and that internet bandwidths may vary from site to site.

In the days of frame relay it was possible to apply QoS policies per link, because each independent connection was associated with a virtual circuit identified by a Data Link Connection Identifier (DLCI). However, in the confines of CCIE RSv5, we no longer have frame relay in the mix. Now, when it comes to non-broadcast multi-access environments, we have Dynamic Multipoint Virtual Private Networks. DMVPNs have no functional equivalent to DLCIs, so there is no link-specific overlay or underlay we can use to identify which particular spoke may or may not need special treatment. Let’s look at a network topology that illustrates this situation and examine our options for link-by-link QoS across DMVPNs.

20141031-blog

In this topology we will assume that the DMVPN cloud represents the internet connecting three offices: R1 is the HQ office, and R2/R3 are satellite offices. For the purposes of our discussion, R1 has a 50 Mbps Metro-Ethernet connection, R2 a 10 Mbps connection, and R3 a 5 Mbps circuit to the internet. We are running DMVPN across the internet connecting these sites and have been tasked to ensure that R1 does not oversubscribe the remote sites, while at the same time making the most efficient use of each site’s available bandwidth.

First, we need to look at the QoS mechanism the task is asking us to employ. We are told that R1 is not supposed to saturate the remote offices’ internet links. The most useful tool that comes to mind is shaping: if we regulate the amount of traffic that R1 can send toward R2 and R3 so that it is shaped to each site’s bandwidth, we satisfy all requirements of the task. So we will employ shaping as our QoS mechanism of choice. The first step is to create policy-maps that apply the correct shaping rate on a link-by-link basis. Since we are going to shape all traffic, it is simplest to use class class-default under each of the corresponding policies. This will be accomplished like this:

R1(config)#policy-map LINK_TO_R2
R1(config-pmap)# class class-default
R1(config-pmap-c)#  shape average 10000000
R1(config-pmap-c)#policy-map LINK_TO_R3
R1(config-pmap)# class class-default
R1(config-pmap-c)#  shape average 5000000
R1(config-pmap-c)#end

Now we have the policies that match the bandwidth requirements of each of our two DMVPN spokes, but the problem comes when we consider how to apply them to the DMVPN tunnel on R1 toward the spokes. The immediate issue is how we delineate between the two links without logical constructs like the DLCIs we mentioned previously. DMVPN uses only one overlay to create the virtual private cloud, and as such affords us no equivalent mechanism to represent the individual links. However, IOS comes to our aid with a capability found inside the Next Hop Resolution Protocol (NHRP): QoS group mappings. These allow us to create a series of what I will call “QOS GROUPS”, which let us apply QoS mechanisms at the time our spokes register with the Next Hop Server (NHS). To accomplish this we first need to create the groups that we will use. For clarity’s sake, I will reuse the policy-map names as the group names. This configuration is done under the Tunnel0 interface on R1 via the nhrp map group command:

R1(config)#int tunnel 0
R1(config-if)#nhrp map group LINK_TO_R2 service-policy output LINK_TO_R2
R1(config-if)#nhrp map group LINK_TO_R3 service-policy output LINK_TO_R3
R1(config-if)#end

As you can see, we simply create the “map group” and apply the corresponding policy-map to it in the direction we need, using the service-policy keyword just like we would under an interface. However, we still have our original dilemma: how do we tell the DMVPN hub which policy to apply, and when? This is accomplished under the tunnel interfaces of the corresponding spoke devices. When a spoke registers with the NHS, it can be configured to tell the hub which “QOS GROUP” to use for data sent toward that individual spoke. This is done by specifying the name of the group the spoke wants to “join”; remember that the configuration is case-sensitive. Again we will use an nhrp command:

R2

R2(config)#int tun 0
R2(config-if)#nhrp group LINK_TO_R2
R2(config-if)#end

 R3

R3(config)#int tun 0
R3(config-if)#nhrp group LINK_TO_R3
R3(config-if)#end

It may be necessary to bounce the tunnel interfaces to force the spokes to re-register with the NHS (aka the hub). We can verify that the configuration has worked by using the show dmvpn detail command.

R1#show dmvpn detail
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        # Ent --> Number of NHRP entries with same NBMA peer
        NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
        UpDn Time --> Up or Down Time for a Tunnel
==========================================================================

Interface Tunnel0 is up/up, Addr. is 10.10.10.1, VRF ""
   Tunnel Src./Dest. addr: 10.1.123.1/MGRE, Tunnel VRF ""
   Protocol/Transport: "multi-GRE/IP", Protect ""
   Interface State Control: Disabled
   nhrp event-publisher : Disabled
Type:Hub, Total NBMA Peers (v4/v6): 2

# Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb    Target Network
----- --------------- --------------- ----- -------- ----- -----------------
    1 10.1.123.2      10.10.10.2         UP 00:00:22     D 10.10.10.2/32
          NHRP group: LINK_TO_R2
 Output QoS service-policy applied: LINK_TO_R2
    1 10.1.123.3      10.10.10.3         UP 00:00:19     D 10.10.10.3/32
          NHRP group: LINK_TO_R3
 Output QoS service-policy applied: LINK_TO_R3

Crypto Session Details:
--------------------------------------------------------------------------------

Pending DMVPN Sessions:

R1#

We see the NHRP Groups have been applied based on the names that we employed. But how do we see what policy-maps have been applied? We use the show policy-map multipoint command to see how the policies have been distributed:

R1#show policy-map multipoint

 Interface Tunnel0 <--> 10.1.123.2

  Service-policy output: LINK_TO_R2

    Class-map: class-default (match-any)
      6864 packets, 2935716 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 2500 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 8464/3124612
      shape (average) cir 10000000, bc 40000, be 40000
      target shape rate 10000000

 Interface Tunnel0 <--> 10.1.123.3

  Service-policy output: LINK_TO_R3

    Class-map: class-default (match-any)
      6134 packets, 492862 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 1250 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 6134/578738
      shape (average) cir 5000000, bc 20000, be 20000
      target shape rate 5000000
R1#
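As a side note, the bc values in this output line up with the shaping rates: the shaper releases Bc bits every interval Tc, so Tc = Bc / CIR. A quick sketch to check the arithmetic, assuming Bc is reported in bits as shown above:

```python
# Sanity-check the shaper parameters from "show policy-map multipoint".
# The shaper releases Bc bits every interval Tc, so Tc = Bc / CIR.

def shaper_interval_ms(cir_bps, bc_bits):
    """Shaping interval Tc in milliseconds."""
    return bc_bits / cir_bps * 1000

print(shaper_interval_ms(10_000_000, 40_000))  # LINK_TO_R2 -> 4.0 ms
print(shaper_interval_ms(5_000_000, 20_000))   # LINK_TO_R3 -> 4.0 ms
```

Both links come out to a 4 ms interval, which is consistent with IOS picking the Bc defaults for us from the configured rate.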

The policies have been applied correctly. Now we want to test this a bit more vigorously, while at the same time exploring the hierarchical nature of our QoS tools. We need to apply an additional element to the equation so we can explore how this configuration will work for us. That element will be to restrict ping traffic such that anything over 64 kbps will be dropped on LINK_TO_R2, and anything in excess of 8 kbps will be dropped on LINK_TO_R3. Obviously we are talking about policing this particular class of traffic. To do so we will need one class-map and two policy-maps:

R1(config)#class-map PING
R1(config-cmap)#match protocol ping
R1(config-cmap)#exit
R1(config)#policy-map POLICE_TO_R2
R1(config-pmap)#class PING
R1(config-pmap-c)#police 64000 conform-action transmit exceed-action drop
R1(config-pmap-c-police)#exit
R1(config-pmap-c)#exit
R1(config-pmap)#policy-map POLICE_TO_R3
R1(config-pmap)#class PING
R1(config-pmap-c)#police 8000 conform-action transmit exceed-action drop
R1(config-pmap-c-police)#exit
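A single-rate, two-color policer like the ones above can be modeled as a simple token bucket. The sketch below illustrates the conform/exceed logic only; it is not IOS internals, and the Bc depth and packet sizes are made-up values:

```python
# Illustrative single-rate, two-color policer (token bucket), modeling
# "police <cir> conform-action transmit exceed-action drop".
# Simplified; not IOS internals.

class Policer:
    def __init__(self, cir_bps, bc_bytes):
        self.cir = cir_bps        # committed rate in bits/sec
        self.bc = bc_bytes        # bucket depth (Bc) in bytes
        self.tokens = bc_bytes    # bucket starts full
        self.last = 0.0           # arrival time of previous packet (sec)

    def offer(self, size_bytes, now):
        """Return 'transmit' (conform) or 'drop' (exceed) for a packet."""
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.bc,
                          self.tokens + (now - self.last) * self.cir / 8)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes   # conform: spend tokens, transmit
            return "transmit"
        return "drop"                   # exceed: no tokens, drop

p = Policer(cir_bps=8000, bc_bytes=1500)
print(p.offer(1000, 0.0))  # transmit (bucket starts full)
print(p.offer(1000, 0.0))  # drop (only 500 bytes of tokens remain)
```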

Now we will modify the existing policy maps by nesting these new policies inside of them:

R1(config)#policy-map LINK_TO_R2
R1(config-pmap)# class class-default
R1(config-pmap-c)#service-policy POLICE_TO_R2
R1(config-pmap-c)#policy-map LINK_TO_R3
R1(config-pmap)# class class-default
R1(config-pmap-c)#service-policy POLICE_TO_R3
R1(config-pmap-c)#exit

Now we can look at the applied policy-map again:

R1#show policy-map multipoint

 Interface Tunnel0 <--> 10.1.123.2

  Service-policy output: LINK_TO_R2

    Class-map: class-default (match-any)
      7025 packets, 2949240 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 2500 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 8625/3140390
      shape (average) cir 10000000, bc 40000, be 40000
      target shape rate 10000000

      Service-policy : POLICE_TO_R2

        Class-map: PING (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: protocol ping
          police:
              cir 64000 bps, bc 17916 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps

        Class-map: class-default (match-any)
          13 packets, 1092 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: any

 Interface Tunnel0 <--> 10.1.123.3

  Service-policy output: LINK_TO_R3

    Class-map: class-default (match-any)
      6297 packets, 506554 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 1250 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 6297/594712
      shape (average) cir 5000000, bc 20000, be 20000
      target shape rate 5000000

      Service-policy : POLICE_TO_R3

        Class-map: PING (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: protocol ping
          police:
              cir 8000 bps, bc 17916 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps

        Class-map: class-default (match-any)
          12 packets, 1008 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: any

We can see the policers have been applied per policy per link.

- Terry

iPexpert’s Newest “CCIE Wall of Fame” Additions 11/07/2014

Please join us in congratulating the following iPexpert client who has passed his CCIE lab!

This Week’s CCIE Success Stories

  • Piotr Morawski, CCIE #10626 (Voice)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

Wireless Configuration Method Speed Test Shootout- Part 2

This is the second article in a series focusing on seeing which configuration methods are fastest or slowest in the CCIE wireless lab.  The idea is to test each method under a variety of likely configuration scenarios that you would experience in the real lab and see how things stack up.

Note: Blog is best viewed w/ Google Chrome.

If you didn’t read the first article, take a few minutes to do that.  It talks about why this series is being written, as well as reasons besides speed that should be considered when choosing a config method.  It also showed the results of WLAN configuration tests on WLCs.  In this article, we will continue to focus on WLCs using our 3 config methods of CLI, GUI, and WCS.  This time, we will focus on non-WLAN configurations.  Here are the test configurations that I did for this article.

  • Adding a RADIUS server to 4 WLCs
  • Configuring a 4-line ACL on a single WLC
  • Configuring a 4-line ACL across multiple controllers
  • Configuring an NTP server on 4 WLCs
  • Configuring QoS Profiles across 4 WLCs
  • Configuring mobility group memberships across 4 WLCs
  • Configuring RRM across 2 WLCs

If you want to watch the actual configurations, you can check out the companion video to this article over in our YouTube channel.  It shows how I arrived at the configuration speeds and the methods that I used.  You may be able to pick up a few tips or tricks for faster configurations by watching how I do things.

RADIUS Servers

For this test, I added a RADIUS authentication server to all 4 of our WLCs.  I just left the settings at their defaults and simply made sure I specified the IP, shared secret, enabled for network/management, and administratively enabled the server.  Here are the results from fastest to slowest in minutes:seconds.

CLI= 0:21
GUI= 0:45
WCS= 0:45

The CLI was roughly twice as fast as the other methods thanks to only having to configure a single command on each WLC.  If we had been required to tweak the configs a bit more, then times may have stacked up differently as the CLI configs can get more involved.

ACLs

The first test was configuring a 4-line ACL on a single WLC.  The ACL was written as if it would be used as a CPU ACL controlling telnet traffic. Here are the results from fastest to slowest in minutes:seconds.

GUI= 1:08
WCS= 2:08
CLI= 2:23

Here the GUI was twice as fast as the other methods.  But that is for applying the ACL to a single WLC.  What about applying it to multiple WLCs?  I didn’t re-run the test for the CLI or WCS, since the time will essentially be the same.  So I’m adding 1 second to WCS to check an extra box and I’m adding 5 seconds to the CLI method to cover pasting in the large number of lines of config.

For the GUI, I could do one of two things.  Either multiply the time by the number of WLCs, or use a hybrid method.  One recommendation that I give in my classes is to configure it in the GUI on 1 WLC, then copy paste the CLI config from the running config into the rest of the WLCs.  I did test that method and it took 34 seconds to copy the ACL to the 2nd WLC.  Here are the numbers for applying an ACL to 2 different WLCs.

Hybrid GUI+CLI= 1:42
WCS= 2:09
GUI= 2:16
CLI= 2:28

So when replicating across 2 controllers, each method is about the same, though the GUI/CLI hybrid method is measurably faster.

NTP server

This test involved configuring an NTP server across 4 controllers.  I expected these numbers to be similar to the RADIUS test since they are pretty close in simplicity.  Here are the results.

CLI= 0:14
WCS= 0:31
GUI= 0:34

The results were pretty much in line with the RADIUS test.  CLI was twice as fast.

QoS Profiles

Here I enabled 802.1p markings on all QoS profiles across 4 WLCs.  This requires you to disable the radios before configuring.  So in the CLI and GUI tests, I included disabling the radios at the beginning and enabling them at the end.  I did not include that in the WCS time since it is very painful to do that in WCS.  Also, using a template to do that carries the risk of overwriting other settings that should not be changed.  So most people would probably use either the GUI or the CLI to manipulate the radio states.

CLI= 0:52
WCS= 1:51 (not including shut/no shut on the radios)
GUI= 2:28

The CLI was the superior method here.  WCS is not a legitimate stand-alone option.  And the GUI config was pretty slow since it required a whole lot of clicking.

Mobility Group Memberships

Here I configured mobility group memberships as follows.  I added WLC2-4 to WLC1’s group (and vice versa) pretending that WLC1 was a DMZ anchor controller.  I also added WLC2 and WLC4 to each other’s group list pretending they were in the same building as each other.  I did not manipulate any group names.  Here are the results.

CLI= 1:12
GUI= 1:15
WCS= 1:30

No significant differentiation between methods.  Of course, this is assuming that you are using the most efficient configurations in each method.  Check out the video to see how I accomplished each one.  Choosing alternate methods will significantly alter the time.

RRM

Here I set WLC1 to be the RF leader and WLC2 to be a static member for both bands.  I then set a TPC threshold and 2 settings under DCA.  I did the TPC and DCA settings on both bands on both WLCs.  I could not do this with WCS templates (unless I used a CLI template).  So I just skipped WCS for this test.

CLI= 1:40
GUI= 1:50
WCS= don’t bother

CLI and the GUI were pretty much the same here.

Aggregate Results

For fun, let’s see what the summation of each of the tests would be to see which method is the fastest overall.  For these numbers, I’ll use the tests listed below.

  • Average of the WLAN configurations from the previous article
  • RADIUS server
  • ACL on 1 WLC
  • ACL on 2 WLCs (using the GUI only method)
  • NTP server
  • QoS profiles (adding a 2 minute penalty to WCS for shut/no shutting radios)
  • Mobility groups

I’ll omit the RRM test since there is no good way to include WCS there.  Here are the sum totals of each of the methods across these tests.

CLI= 12:24
GUI= 14:03 (1:39 slower than the CLI)
WCS= 16:59 (4:35 slower than the CLI)

Even without the 2 minute penalty, the WCS had the slowest aggregate time.  The CLI was the fastest, but not by a huge margin.  It was about 13% faster than the GUI.  Let’s say that you had to do 3 times this amount of config in the lab.  The difference then climbs to about 5 minutes between the CLI and the GUI.
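For those who like to double-check the math, the deltas and percentages quoted above work out as follows (times in mm:ss):

```python
# Cross-check the aggregate timing deltas quoted above (times in mm:ss).

def sec(t):
    """Convert 'mm:ss' to seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

def fmt(n):
    """Convert seconds back to 'mm:ss'."""
    return f"{n // 60}:{n % 60:02d}"

cli, gui, wcs = sec("12:24"), sec("14:03"), sec("16:59")
print(fmt(gui - cli))                    # 1:39 slower than the CLI
print(fmt(wcs - cli))                    # 4:35 slower than the CLI
print(round((gui - cli) / cli * 100))    # 13 (% faster CLI vs GUI)
print(fmt(3 * (gui - cli)))              # 4:57, i.e. ~5 min at 3x the config
```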

Thoughts on the results

As far as configuring WLCs goes, I stand by the recommendations that I have made in the past that the GUI is fast enough.  With maybe a 5 minute difference between the CLI and the GUI, it’s not a significant difference maker.  While it may not be the fastest method, it’s close enough that you can feel comfortable making your configuration choices based on some of the other reasons like learning curve and accuracy.

One thing that I do need to change is my thought that WCS can save you time.  That is actually almost never the case.  And on the whole it’s the slowest method.  So if speed is a primary concern, it should be avoided except in very specific instances.

Now one thing to keep in mind is that these results are based on relatively good response rates in the GUI and on WCS.  Sometimes we don’t get that in the lab.  This is primarily the case in Sydney.  So if you plan on taking your lab in Sydney, using the CLI becomes a significant speed advantage.  It can be hard to rely on the GUI/WCS there and hope to finish in good time at that location.  I have also heard random reports of slowness in RTP.  One person said that they were actually on the Sydney rack based on what they saw in their terminal server.  So keep it in mind that you may not be able to avoid high-latency issues 100%.

Look for my next post in this series where I start testing Autonomous AP configurations.

- Jeff

WSA – Guest Access

Web Security Appliance (WSA) offers the ability to grant limited access to users who fail authentication due to invalid credentials.

Note: Blog is best viewed w/ Google Chrome.

By default, when a client fails authentication, the Web Proxy continually requests valid credentials, essentially blocking access to all resources. To change this behavior, the proxy policies must be modified so that when a client presents incorrect credentials for the first time, the user is treated as a guest and the device does not prompt for authentication again. Examples of when this feature can be useful are listed below:

  • Temporary access for visitors & contractors
  • Access for visiting employees, coming from other branch locations, when separate databases are used in the HQ and other offices
  • Temporary access for new hires (until they are finally added to the AD)

The configuration of Guest Access on the WSA can be summarized as follows:

  1. An Identity must be defined that requires authentication but also allows Guest Privileges
  2. A Policy Rule must be created where support for Guest Access is enabled
  3. The above Policy Rule should use the Guest-enabled Identity as a condition – otherwise a Global Rule may be matched for the transaction (depends on the WSA code version)
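Conceptually, the proxy’s decision process with a guest-enabled Identity looks something like this. This is a hypothetical model for illustration only; it is not AsyncOS code, and the names are made up:

```python
# Hypothetical model of how the Web Proxy treats a transaction when an
# Identity supports guest privileges. Illustration only; not AsyncOS code.

def evaluate(credentials_valid, identity):
    """Return which policy treatment a transaction receives."""
    if not identity["requires_auth"]:
        return "unauthenticated_policy"
    if credentials_valid:
        return "authenticated_user_policy"
    if identity["guest_privileges"]:
        # Failed auth + guest-enabled Identity: the user is treated as a
        # guest and matches the rule with "Guests (users failing
        # authentication)" enabled; no further credential prompts.
        return "guest_policy"
    # Default behavior: keep answering 407 and re-prompting for credentials.
    return "deny_407_reprompt"

guest_id = {"requires_auth": True, "guest_privileges": True}
print(evaluate(False, guest_id))  # guest_policy
```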

Before we get to the configuration of Guest Access, I want to show you what happens when someone fails authentication and the feature is not enabled:

WSA-1

Let’s try to get to www.cnn.com and observe the logs:

1414254855.077 2 192.168.20.200 TCP_DENIED/407 1763 GET http://www.cnn.com/ - NONE/- - OTHER-NONE-DefaultGroup-NONE-NONE-NONE-NONE <IW_news,6.9,"1","-",-,-,-,"-","-",-,-,-,"-","-",-,"-","-",-,-,IW_news,-,"-","-","Unknown","Unknown","-","-",7052.00,0,-,"-","-"> -

As you can see from the above, the transaction was denied – actually all traffic coming from the user’s PC going through the proxy is now blocked until the user successfully authenticates.

In our case I am going to assume that VLAN 20 is a network segment dedicated to guest access. To start our configuration I will create an Identity that will catch all transactions coming from this network (192.168.20.0/24). What’s important to remember here is that the “Support Guest privileges” option should be checked to tell the proxy that all connections caught by this Identity are candidates for Guest Access:

WSA-2

So this is what we have under Identities:

WSA-3

The next thing we want to do is create an Access Policy rule. There are two key things to remember: first, select our Guest Identity as one of the policy conditions (in our case it will be the only condition); second, enable Guest Access by selecting the “Guests (users failing authentication)” checkbox. Note that this option only becomes available when there is at least one Identity defined with “Support Guest privileges” enabled:

WSA-4

Now a simple change under URL Filtering within this rule to unblock traffic going to all webpages categorized as “News”:

WSA-5

And there we go:

WSA-6

Looks good – time to test our configuration:

1414255770.906 128 192.168.20.200 TCP_REFRESH_HIT/200 417 GET http://www.cnn.com/ "(Unauthenticated)192.168.20.200" DIRECT/www.cnn.com text/html ALLOW_WBRS_11-GUESTSPOL-GUESTID-NONE-NONE-NONE-DefaultGroup <IW_news,6.9,"1","-",-,-,-,"1","-",-,-,-,"-","1",-,"-","-",-,-,IW_news,-,"-","-","Unknown","Unknown","-","-",26.06,0,-,"-","-"> -

All good!

Also note that when the Web Proxy grants a user guest access, it identifies and logs the user as a guest in the access logs. You can specify whether the Web Proxy identifies the user by IP address (default) or user name:

WSA-7

Here is a sample log message generated by the proxy with user-based logging enabled:

1414257443.357 123 192.168.20.200 TCP_REFRESH_HIT/200 417 GET http://www.cnn.com/ "(Unauthenticated)Student" DIRECT/www.cnn.com text/html ALLOW_WBRS_11-GUESTSPOL-GUESTID-NONE-NONE-NONE-DefaultGroup <IW_news,6.9,"1","-",-,-,-,"1","-",-,-,-,"-","1",-,"-","-",-,-,IW_news,-,"-","-","Unknown","Unknown","-","-",27.12,0,-,"-","-"> -
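If you find yourself digging through many of these accesslog lines, the interesting fields are easy to pull out programmatically. Here is a minimal sketch, assuming the default squid-style log format shown above; field positions will differ if you have customized the log format:

```python
# Minimal parser for the default (squid-style) WSA access log lines shown
# above. Field positions assume the default format; custom formats differ.

def parse_accesslog(line):
    fields = line.split()
    return {
        "timestamp": float(fields[0]),
        "elapsed_ms": int(fields[1]),
        "client_ip": fields[2],
        "result_code": fields[3],   # e.g. TCP_DENIED/407
        "bytes": int(fields[4]),
        "method": fields[5],
        "url": fields[6],
        "user": fields[7],          # '"(Unauthenticated)Student"' for guests
    }

line = ('1414257443.357 123 192.168.20.200 TCP_REFRESH_HIT/200 417 GET '
        'http://www.cnn.com/ "(Unauthenticated)Student" DIRECT/www.cnn.com '
        'text/html ALLOW_WBRS_11-GUESTSPOL-GUESTID-NONE-NONE-NONE-DefaultGroup')
entry = parse_accesslog(line)
print(entry["result_code"])  # TCP_REFRESH_HIT/200
print(entry["user"])         # "(Unauthenticated)Student"
```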

As you can see from this article, Guest Access can be easily enabled on the WSA whenever you want to provide some limited access to certain users. And what is nice about that feature is that once enabled, there is no need for the administrator to communicate the guest credentials to any visitor.

- Piotr

iPexpert’s Newest “CCIE Wall of Fame” Additions 11/14/2014


This was a great week, in our opinion! As Andy puts the final touches on his CCIE Collaboration product portfolio – and with a few classes already under his belt – we’re beginning to see students pass their Collaboration lab. And of course, Jeff is still cranking out Wireless success stories!

Please join us in congratulating the following iPexpert clients who have passed their CCIE lab!

  • Rashmi Patel, CCIE #44921 (Collaboration)
  • Jonathan Woloshyn, CCIE #45422 (Collaboration)
  • Istvan Czobor, CCIE #45345 (Wireless)
  • Nitin Chopra, CCIE #45371 (Wireless)

This Week’s CCIE Testimonials

Rashmi Patel, CCIE #44921 wrote:
“I’d like to thank Andy and iPexpert for their CCIE Collaboration study materials and bootcamp! I used iPexpert’s Ultimate Self-Study Bundle (for the CCIE Collaboration lab). I also attended iPexpert’s 5-Day Bootcamp in Chicago. Andy is a great instructor, he helped me understand the key technical areas within the lab blueprint.”

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

iPexpert’s Newest “CCIE Wall of Fame” Additions 11/21/2014


Please join us in congratulating the following iPexpert clients who have passed their CCIE lab!

  • Andre Mitchell, CCIE #44619 (Collaboration)
  • Gaurav Vasudeva, CCIE #42760 (Routing and Switching)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

CCIE Collaboration Success :: Student Spotlight


We’d like to thank Jon Woloshyn for his testimonial! Jon recently passed the CCIE Collaboration lab! Here’s what Jon had to say:

“I attended iPexpert’s CCIE Collaboration 10-Day Bootcamp in August 2014 and I’m happy to say that on November 11th I passed the CCIE Collaboration exam on my first attempt.

I owe a lot of my success to Andy Vassar and iPexpert. The Volume 1 workbook, coupled with week 1 of the 10-Day CCIE Collaboration Lab Bootcamp, helped solidify my understanding of and comfort level with all of the technologies on the blueprint. Having my own un-shared, dedicated pod with the exact hardware that’s used in the lab to practice on day and night during that week was huge. Being able to ask Andy every question that came to mind and get a detailed response was awesome. The fact that he would break from the lesson and lab up the questions being asked to prove the technology made the class very flexible and almost tailored to each student who required additional knowledge.

Week 2 of the 10-day course was the One-Week Lab Experience (OWLE). I would not have passed without this week. Andy shared his lab strategy, and although I was skeptical at first, I decided to try it out during my studies. I practiced the OWLE labs and Volume 2 mock labs over and over using that strategy until it became second nature. I can’t imagine passing without using that approach to the lab. The knowledge I gained from the OWLE was the difference for me.

I highly recommend iPexpert’s Collaboration products and in particular – Andy Vassar!”

-Jon Woloshyn, CCIE #45422 (Collaboration)


iPexpert’s Newest “CCIE Wall of Fame” Additions 12/05/2014


Please join us in congratulating the following iPexpert clients who have passed their CCIE lab!

  • Mathew Varghese, CCIE #45557 (Collaboration)
  • Nick Thompson, CCIE #45731 (Collaboration)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s or Proctor Labs self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!

Dial-Peer Digit Manipulation


In the CCIE Collaboration lab, understanding dial-peers is extremely important. A lack of knowledge in this area can yield devastating results on your lab score report, since dial-peers can be found in so many different sections of the exam. We must be thoroughly prepared to tackle every aspect of this technology should we be presented with it at some point.

I recently got a great question in our forums about digit manipulation within POTS dial-peers and how it interacts with translation rules and profiles. Since this is such an important topic, I figured my answer bears repeating so it can reach a wider audience.

Let’s begin with the simple example of dialing the number “123” from a CUCME phone. Of course, the POTS dial-peer must be created to support the desired behavior.

dmdm-001

When this pattern is selected, all digits will be stripped automatically since they are explicitly defined. This is due to the “automatic POTS dial-peer digit strip” feature in IOS. See below for the ISDN Q.931 debug output (no Called Party Number).

dmdm-002
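The actual dial-peer appears only in the screenshot above; as a rough sketch of what it would look like (the voice port is an assumption, since it is not visible in the text), the configuration might be:

```
! Hypothetical sketch - dial-peer tag and voice port are assumed
dial-peer voice 123 pots
 destination-pattern 123    ! all explicitly defined digits are stripped automatically
 port 0/0/0:23              ! assumed ISDN PRI voice port
```

Because every digit in the pattern is explicitly defined, the automatic POTS digit strip removes all of them, which is why the debug shows no Called Party Number.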

Since we are not currently sending a Called Party Number, we’ll need some way to add the digits back to the string to route the call. Let’s add a translation rule and profile to accomplish this.

dmdm-003
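Based on the description that follows, the translation rule and profile in the screenshot would look something like this sketch (rule numbers and the profile name are assumed for illustration):

```
! Sketch - rule number and profile name CHANGE-CALLED are assumptions
voice translation-rule 1
 rule 1 /^123$/ /345/       ! replace the dialed string 123 with 345
!
voice translation-profile CHANGE-CALLED
 translate called 1
```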

This configuration will match the original dialed string of "123" and replace it with "345". It should be applied to the dial-peer as shown below.

dmdm-004

After applying the translation profile, as expected, the Q.931 debug shows that the Called Party Number is now "345".

dmdm-005

Next, to test the available dial-peer digit manipulation techniques, add the forward-digits command to the dial-peer.

dmdm-006
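As a sketch, the command would be applied under the same POTS dial-peer (tag assumed, as above):

```
! Sketch - forward-digits operates on the number AFTER the translation profile
dial-peer voice 123 pots
 forward-digits 2           ! keep only the rightmost 2 digits of the translated number
```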

Since this manipulation occurs after the translation profile, it will keep the “rightmost” 2 digits for the “345” called number, as shown in the Q.931 debug. Take note that it is using the number that was applied in the translation profile as opposed to the original called number that matched the dial-peer.

dmdm-007

Now, add the prefix command to the dial-peer to understand how it affects the number.

dmdm-008

With both the forward-digits and prefix commands applied, this results in the number "745", per the Q.931 debug output.

dmdm-009

Based on the above, what we can conclude is that translation profiles are applied as the first step in IOS digit manipulation. All other dial-peer digit manipulation techniques are applied after this point. In other words, digit manipulation in IOS is considered to be “additive” since it can modify the number after the translation profile is applied. This is in contrast to what is normally seen on CUCM, where digit manipulations performed at different configuration levels still reference the original called number instead of adhering to manipulations that were previously performed. For example, Called Party Transformations in CUCM take precedence over called party digit manipulations in both Route Lists and Route Patterns and use the original called number instead of any manipulated called number from the Route Lists/Patterns.
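To recap the whole sequence in one place, here is a sketch of the combined configuration (the dial-peer tag, profile name, and port are assumptions, since the original shows them only in screenshots), annotated with the order of operations described above:

```
! Order of operations: translation profile first, then dial-peer digit commands
voice translation-rule 1
 rule 1 /^123$/ /345/
!
voice translation-profile CHANGE-CALLED
 translate called 1
!
dial-peer voice 123 pots
 destination-pattern 123                        ! explicit match -> digits stripped
 translation-profile outgoing CHANGE-CALLED    ! step 1: 123 -> 345
 forward-digits 2                              ! step 2: 345 -> 45
 prefix 7                                      ! step 3: 45 -> 745
 port 0/0/0:23                                 ! assumed ISDN port
```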

I hope this has been helpful to all who are studying for the CCIE Collaboration lab exam. Please keep your eye out for the many updates to come to both our workbooks and videos. Also, if you’re ever feeling like you need an extra push to get ready for the lab, are hitting roadblocks in your preparation, or just need some direction on how to tackle the CCIE Collaboration Lab, give us a call and speak with an iPexpert Training Adviser about attending one of my bootcamps.

Thanks again for reading and good luck in your preparation!

- Andy

IKEv2 VPN – ASA/IOS


In our next blog post, we will focus on configuring an IKEv2 VPN between the ASA and IOS.

Is there anything special about that configuration? Yes and no. It is still “just” IKEv2 that will take care of negotiating our tunnels, but there will definitely be a difference in how we configure one platform versus the other. Remember – tunnel interfaces are not supported on the ASA, at least as of 8.6, and this generally means that we will not be able to use tunnels (FlexVPN) on IOS either (there is actually one small exception to this rule, but it will not be discussed in this article).

Let’s take a look at our simple network:
20141216_01

We’ll try to build a VPN tunnel between R10 and ASA3 that we will then use to protect traffic flowing between VLANs 10 and 8. I am going to start with the ASA configuration.

First and foremost – the Policy. Note that PRF must generally be the same as what you have selected for Integrity/Hashing:

crypto ikev2 policy 10
encryption aes-256
integrity sha384
prf sha384
group 14

We will authenticate the tunnel using pre-shared keys, and since the authentication method is no longer negotiated in IKEv2, we will have to configure it under the Tunnel Group:

tunnel-group 8.9.2.10 type ipsec-l2l
tunnel-group 8.9.2.10 ipsec-attributes
ikev2 remote-authentication pre-shared-key ipexpert
ikev2 local-authentication pre-shared-key ipexpert

This covers IKE SA INIT exchange and part of IKE AUTH. What else do we need for IKE AUTH? Encryption domain is defined by an ACL :

access-list PROXY6 extended permit ip 192.168.8.0 255.255.255.0 10.100.100.0 255.255.255.0

And security algorithms are now configured in an IPSec Proposal:

crypto ipsec ikev2 ipsec-proposal SET2
protocol esp encryption aes
protocol esp integrity sha-1

Finally, we need to bind our ACL, Proposal and the peer’s IP in a crypto map, which will be then applied to the interface. I also enabled PFS, but this is optional:

crypto map MAP2 10 match address PROXY6
crypto map MAP2 10 set peer 8.9.2.10
crypto map MAP2 10 set ikev2 ipsec-proposal SET2
crypto map MAP2 10 set pfs group14
crypto map MAP2 interface outside

Don’t forget to enable IKEv2 on the interface where the map was applied:

crypto ikev2 enable outside

All right, we are done with the firewall. The remaining configuration is for the IOS device. There is actually a nice document that shows you a few examples of the “Legacy” (crypto map-based) implementations. It can be found under: IOS Configuration Guides -> Secure Connectivity -> Appendix: IKEv2 and Legacy VPNs

OK, so what do we have to do on R10? Remember that our Policy is defined in a Proposal:

crypto ikev2 proposal IKE_PROP
encryption aes-cbc-256
integrity sha384
group 14

crypto ikev2 policy IKE_POL
proposal IKE_PROP

We definitely want to build a Keyring, which will then be referenced in our Profile. This is needed for pre-shared key authentication:

crypto ikev2 keyring IKE_KRING
peer ASA3
address 8.9.2.30
pre-shared-key local ipexpert
pre-shared-key remote ipexpert

IKEv2 Profile is used to distinguish between multiple VPN peers – this is also a place where we configure our authentication and potentially change our own IKE ID:

crypto ikev2 profile IKE_PROF
match identity remote address 8.9.2.30 255.255.255.255
identity local address 8.9.2.10
authentication remote pre-share
authentication local pre-share
keyring local IKE_KRING

The access list will be a mirror image of the ASA’s:

access-list 120 permit ip 10.100.100.0 0.0.0.255 192.168.8.0 0.0.0.255

And a crypto map serves a similar purpose as on the ASA – just note that the IKEv2 Profile must be attached; this is needed to allow R10 to act as the session Initiator (and not only as a Responder):

crypto map MAP1 10 ipsec-isakmp
set peer 8.9.2.30
set transform-set SET1
set pfs group14
set ikev2-profile IKE_PROF
match address 120

int g0/0
crypto map MAP1
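One note: the crypto map above references transform-set SET1, which was not defined in the walkthrough. An assumed definition that mirrors the ASA’s IPSec Proposal (AES encryption, SHA-1 integrity) would be:

```
! Assumed - SET1 is referenced but not shown in the original;
! this mirrors the ASA's ipsec-proposal SET2 (AES / SHA-1)
crypto ipsec transform-set SET1 esp-aes esp-sha-hmac
```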

Assuming that you have all the needed routes in place, you should now be able to test it and obtain results similar to mine:

R8#ping 10.100.100.10

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.100.100.10, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/1/1 ms

R10#sh crypto ikev2 session
IPv4 Crypto IKEv2 Session

Session-id:42, Status:UP-ACTIVE, IKE count:1, CHILD count:1

Tunnel-id Local Remote fvrf/ivrf Status
1 8.9.2.10/500 8.9.2.30/500 none/none READY
Encr: AES-CBC, keysize: 256, Hash: SHA384, DH Grp:14, Auth sign: PSK, Auth verify: PSK
Life/Active Time: 86400/57 sec
Child sa: local selector 10.100.100.0/0 - 10.100.100.255/65535
remote selector 192.168.8.0/0 - 192.168.8.255/65535
ESP spi in/out: 0x6D1B5D64/0xACA66EA5

IPv6 Crypto IKEv2 Session

R10#sh crypto session detail
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation
X - IKE Extended Authentication, F - IKE Fragmentation

Interface: GigabitEthernet0/0
Uptime: 00:01:02
Session status: UP-ACTIVE
Peer: 8.9.2.30 port 500 fvrf: (none) ivrf: (none)
Phase1_id: 8.9.2.30
Desc: (none)
IKEv2 SA: local 8.9.2.10/500 remote 8.9.2.30/500 Active
Capabilities:(none) connid:1 lifetime:23:58:58
IPSEC FLOW: permit ip 10.100.100.0/255.255.255.0 192.168.8.0/255.255.255.0
Active SAs: 2, origin: crypto map
Inbound: #pkts dec'ed 5 drop 0 life (KB/Sec) 4185109/3537
Outbound: #pkts enc'ed 5 drop 0 life (KB/Sec) 4185109/3537

ASA3(config)# sh vpn-sessiondb l2l

Session Type: LAN-to-LAN

Connection : 8.9.2.10
Index : 48 IP Addr : 8.9.2.10
Protocol : IKEv2 IPsec
Encryption : AES256 AES128 Hashing : SHA384 SHA1
Bytes Tx : 400 Bytes Rx : 400
Login Time : 17:53:24 UTC Tue Dec 9 2014
Duration : 0h:02m:28s

Last but not least – know your weapons (in case something goes wrong). The debug command I recommend for IOS is “debug crypto ikev2”. On the ASA, you may want to try these two: “debug crypto platform 10” or “debug crypto protocol 255”.

iPexpert Introduces Jarrod Mills, as CTO and Sr. Routing and Switching Product Portfolio Director / Instructor


As a former attorney, I often found myself drawn to the comfort and familiarity of my office computer. While the thought of spending countless hours toiling over legal briefs caused me much discomfort, spending that same amount of time on a computer was therapeutic. Now, many years later, I can see how my transition into IT was a natural progression, but at the time it seemed crazy to those close to me.

From my formative years on the competitive math team in middle school and high school, to attending college, graduate school and law school on full academic scholarships, I have always striven to excel. What I lacked in career path clarity, I made up for in sheer determination.

Over the past 20 years, I have been fortunate enough to pursue my passion in networking, designing and building world-class networks for Fortune 50 companies throughout the world. Through hard work and perseverance, I have been able to attain four CCIEs (Routing and Switching, Security, Service Provider, Data Center – AND – Wayne has already given me a deadline for #5! ;-). I’ve also been able to amass countless other IT certifications, while simultaneously mentoring and teaching numerous friends and colleagues in their pursuit of Cisco certifications.

Throughout this last year, Wayne Lawson (iPexpert’s CEO) and I spent countless hours discussing and brainstorming how to expand and improve on the CCIE training currently available in the marketplace. Ultimately, the chance to work alongside the pioneer of online CCIE training proved impossible to resist. I am thrilled to be able to work with the all-star cast of CCIE instructors that iPexpert has assembled.

In 2015, I will devote my passion and expertise to ensure that our Routing and Switching portfolio is hands-down the industry leader. I will also be heavily involved in the development of CCIE Service Provider and CCDE portfolio management.

Words can’t express how anxious and excited I am to begin assisting all of the aspiring CCIEs in the community.

Sincerely – Jarrod

CTO – iPexpert / Routing and Switching Product Portfolio Director and Sr. CCIE R&S Instructor
CCIE #6679 x 4 (R&S, Data Center, Service Provider, and Security)

iPexpert’s Newest “CCIE Wall of Fame” Additions 1/09/2015


Please join us in congratulating the following iPexpert clients who have passed their CCIE lab!

This Week’s CCIE Success Stories

  • Srikanth Navuluri, CCIE #45896 (Routing & Switching)
  • Rodrick Burke, CCIE #46154 (Wireless)
  • Bradley Lierman, CCIE #46093 (Collaboration)
  • Lee Ramirez, CCIE #46113 (Wireless)

We Want to Hear From You!

Have you passed your CCIE lab exam and used any of iPexpert’s self-study products, or attended a CCIE Bootcamp? If so, we’d like to add you to our CCIE Wall of Fame!
