Wednesday, 23 November 2011

The IPv6 Conundrum

I have been making use of IPv6 for over 10 years now, and only recently has support for it become good enough that I can think about treating IPv6 like IPv4. For a long time IPv6 was the poorer cousin of IPv4: fewer features, less support, more bugs.

Nowadays, I think it's reasonable that most people would expect IPv6 to be production ready. I certainly do. I would expect IPv6 to be able to support all my network needs and, in theory, replace IPv4.

Yet even today I've managed to find an issue that breaks this assumption. I'll walk you through it; it's reasonably obscure, but annoying nonetheless.

First a little background. Here at work we recently upgraded one of our switch blocks to support gigabit speeds to the desktop PCs. We used Cisco 2960S series switches for the access layer and 3560X series for the distribution layer.

Something interesting about these models of switch is that they all have a dedicated management port. This is a Good Thing™ as it allows you to completely separate your management traffic from your production traffic. For those vague on security, this means it's that much harder to take control of the network. Done properly, your management IP addresses don't need to be routable on your production network at all.

Of course, this sounded great to me. Given that this post is about IPv6, you might be able to see where this is going, but stick with me.

The configuration for the 2960S is easy enough. The management port is the only port configured for routed mode and, given that these aren't normally L3 switches, this makes it the only physical port with an IP address. Simply string all the management ports together onto a management switch and you're away.

So far so good: I have a management port on each access switch, assigned both an IPv4 and an IPv6 address. Each port hooks back to a management switch, which is connected to the rest of the network via a router (OK, it should be a firewall, I'll admit).
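
To make that concrete, the relevant part of each 2960S config looks something like this (a rough sketch only; the addresses are placeholders and the interface name can vary by model and IOS version):

interface FastEthernet0
 description Out-of-band management
 ip address 192.0.2.11 255.255.255.0
 ipv6 address 2001:db8:0:ff::11/64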

Then we come to the 3560X switches. Since these switches are operating as distribution devices, they do routing. Now here's the complicated bit: in order to provide separation for the management port, it needs to be placed into a VRF (Virtual Routing and Forwarding) instance, else the management traffic will be routed along with the production traffic as per normal.

Not so hard, you say? It's true that VRF-lite has been supported on these devices for a while now, and VRF-lite works a treat, for IPv4. When you start talking about IPv6 inside a VRF, you need to move up a whole notch: currently that capability only exists in complicated solutions like 6VPE over an MPLS network.
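
To illustrate the gap, IPv4-only VRF-lite on the 3560X looks roughly like this (a sketch only; the VRF name, interface, and addresses are placeholders of mine):

ip vrf MGMT
!
interface FastEthernet0
 description Dedicated management port
 ip vrf forwarding MGMT
 ip address 192.0.2.21 255.255.255.0
!
! There is no working IPv6 equivalent of 'ip vrf forwarding' here; putting
! IPv6 inside a VRF needs the multiprotocol VRF machinery (6VPE and friends)
! that these switches don't provide.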

Thus, whilst I can separate my management traffic for IPv4, IPv6 still isn't supported to the same level, so I cannot have separated management traffic on these devices if I wish to use IPv6 for management.

Yes, I can hear you ask: why use IPv6 for management? You know, the simple answer is because I should be able to. After all this time, there shouldn't be any operational differences between the two protocols. Yet there still are, and they create issues for anyone trying to head down the IPv6-only path.

As a result, I'm still forced to treat IPv6 (as much as I love it) as a lesser protocol compared to IPv4. That's just the way it is.

Monday, 26 September 2011

CCDP ARCH Attempt 2

Yep, it's treadmill time again. Sometimes I wonder, when it feels like this, why I bother putting myself through it. Then I remember I really enjoy the feeling after getting that pass mark. But I don't want to get ahead of myself, so I'm just putting it out there that hopefully by lunchtime tomorrow (15 hours' time) I'll know if I've passed my second attempt at the ARCH exam. I missed by 4% last time (that's not giving anything away, I hope), so I'm hoping the three months since, and the boatload of study, have been enough to get me over the line.

Wednesday, 3 August 2011

ASA and OSPF

I have recently updated my ASA cluster to 8.4.X in the hope that this might introduce some stability in my IPv6 support. Little did I know it would introduce other issues for IPv4 that I have only now resolved.

Firstly, a little bit of background. I run a pair of Cisco ASA 5510s in active/standby mode. They run OSPF to learn about internal routes, whilst a default route points out to my policy routers.
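
In config terms that's simple enough; something along these lines, with invented addresses and process ID:

route outside 0.0.0.0 0.0.0.0 198.51.100.1
!
router ospf 1
 network 10.0.0.0 255.255.255.0 area 0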

The problem I had encountered since the upgrade (or perhaps shortly after it) was that the ASA cluster was no longer sending messages to syslog. Upon investigating, I could see an obvious route to the syslog server in the routing table, learned via OSPF, yet the firewall would state that there was no route.

This was unusual behaviour and, whilst I wanted to resolve it, I also needed syslog working. So I created a static host route for the syslog server to get it going again. To complete the workaround I had to disable redistribution of static routes within OSPF, else the rest of the network would learn the host route and send its syslog traffic to the firewalls.
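
In ASA terms, the workaround was along these lines (addresses are again placeholders):

! host route so the ASA itself can reach the syslog server
route inside 10.1.1.50 255.255.255.255 10.1.1.1
!
router ospf 1
 no redistribute static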

This of course then broke remote-access VPN, as an IPsec VPN generates a static route on the ASA which then needs to be redistributed. So I lived with that for a while whilst I continued to investigate.

Today I needed to get the VPN working again, so I re-enabled the static route redistribution, which then caused the syslog routes to disappear. I finally solved it when I looked at the configuration in CLI mode and discovered that OSPF had three network statements! Now normally this wouldn't be an issue, but when I looked closely at the routing table I discovered something unusual: for each internal destination there were multiple routes in the routing table, only one of which had an interface associated with it.

I theorized that a bug in ASDM prevented any of the network statements from showing up, and thus caused me to add superfluous network statements. In turn, these superfluous statements caused invalid routes to be added to the routing table and prevented normal traffic from working.

I believe this is correct, as upon removing the extra network statements and bouncing the OSPF process, the interface-less routes disappeared and normal traffic resumed. I was then able to remove the static routes, and everything functioned as expected.
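
For the record, the cleanup amounted to something like this (the networks are invented, and the exact command to bounce OSPF may differ by version):

router ospf 1
 no network 10.2.0.0 255.255.0.0 area 0
 no network 10.3.0.0 255.255.0.0 area 0
!
clear ospf process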

For reference, I'm running 8.4.2. I can only assume this is a bug that will be resolved in a future update. I'm just glad I've resolved it.


Thursday, 2 June 2011

Whole Lotta IPv6

Wow, I've just had the pleasure of receiving my allocation from APNIC of a /32 prefix of IPv6 address space. That's a lot of address space.

It's hard to get your head around how big the IPv6 address space is, so to put it into perspective I'll equate it to the size of a single LAN. In the IPv4 world, a single LAN is usually a /24, which gives 254 hosts on a network. That's a good number for most applications. When we move to IPv6, to support auto-configuration, the recommended prefix length for a LAN was set at 64 bits. That's already mind-bogglingly huge, but ignoring that for now, we can use the /64 as the basis for measuring the size of various IPv6 prefixes.

So a single IPv6 LAN is one /64 prefix. The recommended allocation for enterprise networks is a /48, which adds 16 more bits. This means we have potentially 65,536 IPv6 LANs in a /48 prefix.

Can you see where I'm going with this?

My allocation is a /32 because we are a data center and we allocate address space to clients. So this is 16 more bits again beyond a /48. One way of looking at it is that there are 65,536 /48s in a /32, or 4,294,967,296 IPv6 LANs in a /32.
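
For anyone who wants to check the arithmetic:

/48 enterprise:  64 - 48 = 16 bits  ->  2^16 = 65,536 /64 LANs per /48
/32 allocation:  48 - 32 = 16 bits  ->  2^16 = 65,536 /48s per /32
                 64 - 32 = 32 bits  ->  2^32 = 4,294,967,296 /64 LANs per /32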

If you recognize that big number, you're doing well: it's the total number of possible IPv4 addresses. So my /32 IPv6 allocation has as many LANs as there are possible IPv4 addresses.

Wow. Each one of these IPv6 LANs is also stupendously big, but let's not think about that!

To top it all off, this is just a very small piece of the IPv6 global pool. Is it sinking in how big it all is now? There's just so much more room in IPv6 land to spread your stuff out without worrying about conserving every last bit.

Now I just need to renumber everything!

Monday, 23 May 2011

ASA, Failover and IPv6 , Part 2

After posting the previous post about my ASA cluster and IPv6 I began to have problems.

Initially the solution described (setting the next IPv6 hop to an anycast address) worked as expected and I could get IPv6 traffic through the firewall without trouble. But after a while, for some reason, the traffic would stop working.

My standard investigation process eventually led me to log onto the ASA and try to ping the next-hop address, which in this situation is an anycast address shared by both next-hop routers.

Usually the ping to the anycast address would fail whilst pings to the routers' individual IPv6 addresses would succeed, after which point the anycast address would start working again.
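
From memory, the test from the ASA looked something like this (the individual router addresses are placeholders):

ping xxxx:xxxx:xxxx:ffcb:ffff::1    (shared anycast next hop: times out)
ping xxxx:xxxx:xxxx:ffcb::2         (router A's own address: replies)
ping xxxx:xxxx:xxxx:ffcb::3         (router B's own address: replies)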

This was annoying to say the least and I started to have doubts about the design using an anycast address.

I did know that I wasn't running the latest firmware on my ASAs (only 8.2.X), but upgrading required a memory increase for the new firmware. I simply had to live with it in the meantime until I could get the memory upgrade through.

Fast forward to the (almost) present. The memory upgrade has been completed and I've now got the latest (8.4.X) firmware on the ASA cluster. Before you ask: yes, my problem has now been resolved, with IPv6 routing through the firewall working consistently for several weeks now. I haven't changed the design; the next hop out of the ASAs is still an anycast IPv6 address. Similarly, the internal next-hop address is also an anycast address. Both directions work a treat and have done since the upgrade.

What I take away from this is that using an anycast address as a next hop is a valid design. Sure, it's not quite the same as a redundancy protocol, but it works and that's all I care about. I presume there was a bug or issue in the older ASA firmware that prevented this from working properly.

Now I can move forward to World IPv6 Day testing.

Tuesday, 19 April 2011

ASA, Failover and IPv6

Today I finally solved a long standing issue with IPv6 that has been bugging me.

Firstly, a bit of background. I have long been an IPv6 advocate, and where possible I have been enabling IPv6 in my infrastructure in the hope of one day having a production-ready IPv6 network. The last piece of the puzzle was my firewall infrastructure. Last year I upgraded from a single PIX firewall to a pair of ASA firewalls in an active/standby failover cluster.

So far so good. I had already had IPv6 working under the PIX firewall and soon had it working under the ASAs. Until, that is, I enabled failover.

The zeroth problem, which has yet to be resolved, is the lack of dynamic routing support for IPv6 on the ASAs. There are ways around this, but it means the solution used for IPv4 cannot be used for IPv6 (e.g. advertising VPN /32 routes via OSPF), and I believe any operational differences like this will make the transition all the more difficult.

The first actual problem was that the code at the time (8.0) did not support failover of IPv6 addresses. What this meant was that when the firewalls failed over (often for apparently no reason), the IPv6 address configured on an interface would become unavailable to the rest of the network. Since there was no dynamic routing, the static routes pointing at the ASA for IPv6 needed to be updated, both inside and outside, every time a failover occurred. This wasn't ideal, but I lived with it whilst the failovers were few and far between.

A subtle issue here relates to IPv6 auto-configuration. Initially I used EUI-64 on both the inside and outside interfaces, which meant that a failover would actually change the address on the interface, due to the different MAC addresses of each unit's physical NICs. Sure, the obvious solution would be to configure a manual address on each interface. The problem was that, since the code didn't do correct failover of IPv6 addresses, each firewall would hear the other's address, complain about duplicate IPv6 addresses, and shut down IPv6 on each interface. Totally useful!
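
On the ASA, the difference between the two approaches is just this (a sketch with a placeholder prefix):

interface Ethernet0/1
! EUI-64 derives the host bits from the MAC, so the address changes on failover:
 ipv6 address 2001:db8:0:1::/64 eui-64
! a manual address stays put, but triggered the duplicate-address problem above:
 ipv6 address 2001:db8:0:1::2/64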

Fast forward a bit, and a new release of code (8.2.4) now supports IPv6 failover. This means the internal and external IPv6 addresses get moved across when there's a failover event. This is great, but for reasons unknown to me it didn't actually solve my problem. My ASA cluster has two routers inside and two routers outside. After each failover event, the active ASA could only contact one of the external routers, which meant that the static IPv6 routing on the ASA (remember, no dynamic routing) needed to be updated each and every time there was a failover event. So despite the code supporting IPv6 failover properly (i.e. the firewalls no longer complained about seeing each other), my IPv6 solution was no better off.

Now I no longer needed to change the static routes on the internal and external routers, but I did have to change the routes on the ASA itself.

I was getting frustrated with this state of IPv6 support given the imperative to embrace IPv6 in the Asia Pacific region. I imagined that it shouldn't really be this hard.

Fast forward to today.

From some dusty corner of my mind, whilst trying to debug this issue today, I recalled the anycast type of IPv6 address. I had it in my head that my networking kit didn't support anycast, but it was worth a try.

Checking on the external router, I was able to enter:

ipv6 address xxxx:xxxx:xxxx:ffcb:ffff::1/64 anycast

on one of the external routers. Excellent. I repeated the same on the second router, then checked for connectivity from the ASA. It worked! So I updated the routing on the ASA to point its static IPv6 default route at the anycast address instead of the router-specific one.
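
On the ASA that's a one-liner (using the same masked-out prefix as above):

ipv6 route outside ::/0 xxxx:xxxx:xxxx:ffcb:ffff::1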

The next step was to repeat this for the internal routers. My internal 'routers' are L3 switches, and I was expecting less complete IPv6 support from them, but to my surprise I was able to repeat the above command on the internal L3 switches.

Following this, I repeated the ping test on the ASA and was thankful to get a response. So again I updated a static route, this time pointing the internal IPv6 /48 network via the internal anycast address.
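
The internal side thus ends up as the same pattern; a sketch with placeholder prefixes standing in for my real /48:

! on each internal L3 switch:
ipv6 address yyyy:yyyy:yyyy:1::1/64 anycast
! and on the ASA:
ipv6 route inside yyyy:yyyy:yyyy::/48 yyyy:yyyy:yyyy:1::1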

Now, after all this wrangling, I have a stable IPv6 infrastructure. I can ssh to an IPv6 address on my external routers and, presumably, it will all keep working after a failover event. I can't see why it wouldn't. I'm now starting to see the usefulness of the anycast address type. For a long time (i.e. up until today) I had the impression that anycast was only good for application-level stuff, e.g. DNS queries.

I'm now keen to make more use of anycast within my network, for, say, DNS or NTP servers, instead of using mechanisms such as multicast routing (which isn't yet supported for IPv6 on my L3 switches!).

Happy routing!

Thursday, 27 January 2011

Passed CCNP

Well, finally, after redoing my SWITCH exam and then sitting TSHOOT on the 14th of December last year, I have completed all the requirements for my CCNP and am now certified.

I even have a nice little logo that I can include in signatures which is a little bit of icing.

Of course there is no rest, and I've already started in on my next certification, CCDA, though I'm going to be studying (I think) for the new 2.1 curriculum that has recently been released. This of course makes things harder, as the reference material for it is still being written, but hey, nothing wrong with a challenge!

After CCDA, on to CCDP which is also being updated.

After I get my CCDP, it's on to the CCIE.

Wish me luck!

The end of the global IPv4 address pool

Yes, the end is nigh: rumor has it that the 2nd of Feb 2011 will be the official announcement day for the end of the global IPv4 pool. Technical types will no doubt understand the implications of this.

What irritates me, though, is the media, who choose to display their complete ignorance in order to drum up business (I can only assume).

Here are a few pointers for any media types out there:

"IT'S the end of the web as we know it.
Since its inception, the internet..."


The 'web' is not the same thing as 'the Internet'; you cannot use these terms interchangeably.
 
"Web developers have compensated for it by creating IPv6"

Web developers did not invent IPv6; the IETF did, many years ago.


"At best, their user experience will be clunky and slow."

IPv6 will not break things or make your Internet experience worse. Buggy software might, but that's harder to blame in a news story.

"The current generation of iPhones, for example, won't display anything with an IPv6 address correctly."


Apple iPhones can do IPv6 just fine (those running iOS 4.X, at least) and will have no issues accessing the IPv6 Internet when it becomes more available. The real issue is carrier support for IPv6, which is still absent in most markets.


As a matter of fact, most recent devices support IPv6; the issue has been the infrastructure. Sure, there's little content available over IPv6, and what there is comes with restrictive terms (e.g. Google over IPv6) that see only a few make use of it. But the biggest issue remains the infrastructure upon which the Internet runs: there's no great incentive to move to a new protocol considering the costs that could be incurred.

That being said, as part of a company's normal upgrade cycle, IPv6 will be included on newer kit as it is purchased. Some companies will choose to take the initiative and enable IPv6, perhaps gradually; other companies will choose to pretend it is a security risk and disable it everywhere, lest someone hacks them via IPv6.

My only hope is that the huge media beat-up about the exhaustion of IPv4 will make more average businesses aware of IPv6, so that they will start asking their ISPs about it and thus create some level of demand!

Will you?