Today I finally solved a long-standing issue with IPv6 that has been bugging me.
Firstly, a bit of background. I have long been an IPv6 advocate, and where possible I have been enabling IPv6 in my infrastructure in the hope of one day having a production-ready IPv6 network. The last piece of the puzzle was my firewall infrastructure. Last year I upgraded from a single PIX firewall to a pair of ASA firewalls in an active/standby failover cluster.
So far so good. I had already had IPv6 working under the PIX firewalls and soon had it working under the ASA. Until, that is, I enabled failover.
The zeroth problem, which has yet to be resolved, is the lack of dynamic routing support for IPv6 on the ASAs. There are ways around this, but it means the solution used for IPv4 (e.g. advertising VPN /32 routes via OSPF) cannot be used for IPv6, and I believe any differences in operation like this will make the transition all the more difficult.
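For reference, the IPv4 approach looks roughly like this (a sketch only; the dynamic-map name, OSPF process and addresses are placeholders rather than my actual config): reverse-route injection creates a static /32 per connected VPN peer, and OSPF redistributes those statics to the rest of the network.

! reverse-route injection: a static /32 is created for each connected VPN peer
crypto dynamic-map REMOTE-VPN 10 set reverse-route
! ...and those statics are pushed into OSPF for everything behind the firewall
router ospf 1
 network 192.0.2.0 255.255.255.0 area 0
 redistribute static subnets

With no IPv6 dynamic routing on this ASA code, there is no equivalent for the v6 side, hence the static routes that feature heavily below.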
The first actual problem was that the code at the time (8.0) did not support failing over of the IPv6 addresses. What this meant was that when the firewalls failed over (often for apparently no reason) the IPv6 address configured on an interface would become unavailable to the rest of the network. Since there was no dynamic routing, the static routes pointing at the ASA for IPv6 needed to be updated both inside and outside every time a failover occurred. This wasn't ideal, but I lived with it whilst the failovers were few and far between.
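To give a feel for the churn, each failover meant logging in to the neighbouring kit and repointing routes along these lines (all prefixes and next hops here are placeholders, with ::a and ::b standing in for whichever address each ASA unit happened to have):

! on an internal L3 switch: repoint the IPv6 default route at the newly active unit
no ipv6 route ::/0 2001:db8:100:1::a
ipv6 route ::/0 2001:db8:100:1::b
! on an external router: repoint the route for the internal /48 the same way
no ipv6 route 2001:db8:100::/48 2001:db8:1:1::a
ipv6 route 2001:db8:100::/48 2001:db8:1:1::b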
A subtle issue here is related to IPv6 auto-configuration. Initially I used EUI-64 on both the inside and outside interfaces, which meant that a failover would actually change the address on the interface due to the different MAC addresses of each physical NIC. Sure, the obvious solution would be to configure a manual address on each interface. The problem was that since the code didn't fail over the IPv6 addresses correctly, each firewall would hear the other's address, complain about duplicate IPv6 addresses and shut down IPv6 on each interface. Totally useful!
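For the record, the two styles of interface config look roughly like this on the ASA (interface name and prefix are placeholders); the EUI-64 form derives the host part from each unit's own MAC, while the manual form gives both units the same address, which is what set off duplicate address detection on the old code:

! EUI-64 auto-configured address (different on each unit)
interface GigabitEthernet0/1
 ipv6 address 2001:db8:100:1::/64 eui-64
! manual address (identical on both units, hence the duplicate-address complaints)
interface GigabitEthernet0/1
 ipv6 address 2001:db8:100:1::1/64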
Fast forward a bit and a new release of code (8.2.4) now supports IPv6 failover. This means the internal and external IPv6 addresses get moved across when there's a failover event. This is great, but for reasons unknown to me it didn't actually solve my problem. My ASA cluster has two routers inside and two routers outside. After each failover event, the active ASA could only contact one of the external routers, which meant that the static IPv6 routing on the ASA (remember, no dynamic routing) needed to be updated each and every time there was a failover event. So despite supporting IPv6 failover properly (i.e. the firewalls no longer complained about seeing each other's addresses), my IPv6 solution was no better off.
Now, I no longer needed to change the static routes on the internal and external routers, but I did have to change the routes on the ASA itself.
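Concretely, the thing that needed touching after every event was the ASA's own IPv6 default route, flipped to whichever external router it could still reach (placeholder addresses again):

no ipv6 route outside ::/0 2001:db8:1:1::2
ipv6 route outside ::/0 2001:db8:1:1::3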
I was getting frustrated with this state of IPv6 support given the imperative to embrace IPv6 in the Asia Pacific region. I imagined that it shouldn't really be this hard.
Fast forward to today.
Whilst trying to debug this issue today, I recalled, from some dusty corner of my mind, the anycast type of IPv6 address. I had it in my head that my networking kit didn't support anycast, but it was worth a try.
Checking on one of the external routers, I was able to enter:
ipv6 address xxxx:xxxx:xxxx:ffcb:ffff::1/64 anycast
Excellent. I repeated the same on the second router, then checked for connectivity from the ASA. It worked! So I updated the routing on the ASA, pointing its static IPv6 default route at the anycast address instead of the router-specific one.
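The end result on the outside looks something like this, with the interface names and 2001:db8 documentation addresses standing in for my real prefix: the same anycast address on both external routers, and the ASA's default route pointed at it.

! on each of the two external routers
interface GigabitEthernet0/1
 ipv6 address 2001:db8:1:1:ffff::1/64 anycast
! on the ASA
ipv6 route outside ::/0 2001:db8:1:1:ffff::1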
The next step was to repeat this for the internal routers. My internal 'routers' are actually L3 switches, and I was expecting less complete IPv6 support from them, but to my surprise I was able to repeat the above command on them.
Following this, I repeated the ping test on the ASA and was thankful to get a response. So again, I updated the static route pointing to the internal IPv6 /48 network to go via the anycast IPv6 address.
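The inside equivalent ends up along these lines (again, placeholder names and addresses rather than my real config):

! on each internal L3 switch
interface Vlan10
 ipv6 address 2001:db8:100:1:ffff::1/64 anycast
! on the ASA, the internal /48 now routes via the shared anycast address
ipv6 route inside 2001:db8:100::/48 2001:db8:100:1:ffff::1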
Now, after all this wrangling, I have a stable IPv6 infrastructure. I can ssh to an IPv6 address on my external routers and, presumably, it will all keep working after a failover event. I can't see why it wouldn't. I'm now starting to see the usefulness of the anycast address type. For a long time (i.e. up until today) I had the impression that it was only good for application-level stuff, e.g. DNS queries.
I'm now keen to make more use of anycast within my network, say for DNS or NTP servers, instead of using mechanisms such as multicast routing (which isn't yet supported for IPv6 on L3 switches!).
Happy routing!