The background is that I look after a network which uses layer 3 switches for the core/distribution layer, mostly 3560s. When I first started implementing the 3560 switches, I read that they supported IPv6, and being an early adopter when it came to IPv6, I sought to enable it.
It turns out that the way you enable IPv6 on the 3560 family is by repartitioning the CAM/TCAM tables using the sdm prefer command. This command dictates how much space is used for the various kinds of resources, such as layer 2 entries, layer 3 routes and multicast routes, and the mix between them.
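If you want to see how your switch is currently carved up, and what a given template would give you, the show sdm prefer command covers both. As a rough sketch (the exact template options vary by platform and IOS release):

# show sdm prefer
! shows the template currently in use and its resource allocation
# show sdm prefer dual-ipv4-and-ipv6 default
! previews what a particular template would allocate, without applying it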
Now when I first enabled IPv6 on my 3560s I didn't really understand what a TCAM was, or why it was critical to layer 3 operations, so I ended up making a choice that impacted the performance of the network for years.
The command I used at the time was:
# sdm prefer dual-ipv4-and-ipv6 vlan
I figured at the time that I had a few VLANs, so that would be the way to go. Here is the table showing the mix of resources you get when you choose this option:
The selected template optimizes the resources in
the switch to support this level of features for
8 routed interfaces and 1024 VLANs.
number of unicast mac addresses: 8K
number of IPv4 IGMP groups + multicast routes: 1K
number of IPv4 unicast routes: 0
number of IPv6 multicast groups: 1K
number of directly-connected IPv6 addresses: 0
number of indirect IPv6 unicast routes: 0
number of IPv4 policy based routing aces: 0
number of IPv4/MAC qos aces: 0.75K
number of IPv4/MAC security aces: 1K
number of IPv6 policy based routing aces: 0
number of IPv6 qos aces: 0.5K
number of IPv6 security aces: 0.5K
Can you see something a bit strange here? This line is the issue:
number of IPv4 unicast routes: 0
Most of my network was still IPv4, yet this line shows there is no space in the TCAM at all for IPv4 unicast routes. That was most of my traffic! The net result was periodic spikes in CPU usage on the switches when significant traffic went through them. It wasn't until recently, when studying for my CCNP SWITCH exam, that I realized that these switches actually do routing in hardware for most traffic, as long as there is room in the TCAM.
So I had a configuration with specifically no room in the TCAM for IPv4 routes, meaning all IPv4 unicast routing on these switches was being done in software. Now the CPU in a 3560 isn't great, but it's probably sufficient for low levels of traffic, and having a dedicated backup LAN meant that a lot of heavy traffic wasn't routed; yet periodically there was enough traffic to spike the CPU. The CPU would max out at over 80%, which is enough for other services to suffer.
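In hindsight, the signs were there to be checked. Something along these lines (output format varies by IOS release) would have shown it:

# show platform tcam utilization
! on the 3560/3750 this shows how much of each TCAM resource is in use
! against its maximum; with the vlan template there are simply no IPv4
! unicast route entries available to use
# show processes cpu sorted
! a consistently busy IP Input process is the classic sign that traffic
! is being forwarded in software instead of in hardware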
Before you start thinking that I was a bit negligent letting this issue carry on for 'years', let me state that I had tried to debug this according to the methods suggested by Cisco.
I started out by running:
# show proc cpu | ex 0.00
This shows the CPU process table, excluding anything that isn't using any CPU. The output showed that IP Input was the process taking up all the CPU, which is exactly what you would expect if lots of traffic is getting punted to the CPU. The next step is to find out why, using the command:
# show ip cef switching statistics
       Reason                     Drop       Punt  Punt2Host
RP LES TTL expired                   0          0          1
RP LES Features                      0       4881          0
RP LES Total                         0       4881          1
All    Total                         0       4881          1
This command shows what is causing the CPU punts to occur. TTL expiry is obvious, but the Features counter requires more detail:
# show ip cef switching statistics feature
IPv4 CEF input features:
   Feature              Drop    Consume       Punt  Punt2Host  Gave route
   NAT Outside             0          0       4881          0           0
   Total                   0          0       4881          0           0

IPv4 CEF output features:
   Feature              Drop    Consume       Punt  Punt2Host     New i/f
   Total                   0          0          0          0           0

IPv4 CEF post-encap features:
   Feature              Drop    Consume       Punt  Punt2Host     New i/f
   Total                   0          0          0          0           0

IPv4 CEF for us features:
   Feature              Drop    Consume       Punt  Punt2Host     New i/f
   Total                   0          0          0          0           0
This command, in my case, showed huge numbers of NAT Outside punts. At this point I was stumped. I searched repeatedly for anything that could trigger NAT and explain what was going on.
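For what it's worth, a quick sanity check that NAT really isn't configured anywhere is just to search the config (a generic IOS check, nothing 3560 specific):

# show running-config | include ip nat
! no output means there are no NAT statements in the configuration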
As you may have guessed by now, that output was a furphy: the problem had nothing to do with NAT.
From the sdm preferences output above, it is now obvious that my naive choice of template resulted in no space in the TCAM for IPv4 routes, and thus all IPv4 routing was being done by the CPU via the IP Input process.
The solution? Simply change the sdm preferences to dual-ipv4-and-ipv6 default!
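For anyone following along, sdm prefer is a global configuration command and the new template only takes effect after a reload, so the change looks roughly like this (schedule the reload for a quiet window):

# configure terminal
(config)# sdm prefer dual-ipv4-and-ipv6 default
! the switch warns that the change cannot take effect until the next reload
(config)# end
# copy running-config startup-config
# reload

And here is the mix of resources the default template gives you: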
"desktop IPv4 and IPv6 default" template:
The selected template optimizes the resources in
the switch to support this level of features for
8 routed interfaces and 1024 VLANs.
number of unicast mac addresses: 2K
number of IPv4 IGMP groups + multicast routes: 1K
number of IPv4 unicast routes: 3K
number of directly-connected IPv4 hosts: 2K
number of indirect IPv4 routes: 1K
number of IPv6 multicast groups: 1K
number of directly-connected IPv6 addresses: 2K
number of indirect IPv6 unicast routes: 1K
number of IPv4 policy based routing aces: 0
number of IPv4/MAC qos aces: 0.75K
number of IPv4/MAC security aces: 1K
number of IPv6 policy based routing aces: 0
number of IPv6 qos aces: 0.5K
number of IPv6 security aces: 0.5K
Now I have plenty of space for both IPv4 and IPv6 routes. What I lose is policy based routing, but hey, that's something I can live with. Since this change I haven't had a single CPU spike (> 2 days now). I have also since learnt that you don't get taught about SDM preferences until you study routing and switching at the expert level.
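If you would rather verify the change than just wait for the CPU graphs to flatten out, something like the following does the job (again, exact output varies by release):

# show sdm prefer
! confirms which template is now active
# show processes cpu sorted | exclude 0.00
! IP Input should no longer dominate the CPU
# show ip cef switching statistics
! the punt counters should stop climbing once routing is back in hardware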
That's what you get for being an early adopter!