Learning about IPv6

A few observations from fiddling with IPv6 all day long:

IPv6 peering is going to be crazy important. Only a couple of our transit providers, Tiscali and Verizon (f/k/a UUnet), will give us native/dual-stack transit. To help ensure independent connectivity we’ll have to turn up v6 peering at our various peering points. With IPv4 the driving factor is not so much connectivity as reducing transit costs, since we have several different v4 paths to any destination. To use ‘Google over IPv6’, where they will serve up an AAAA record for www.google.com to your network, you had better have good v6 connectivity to Google to ensure it’s reachable at all times.
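A few lines of Python will test both halves of that: whether an AAAA record exists, and whether the host is actually reachable over v6. This is only a minimal sketch; the hostname and port below are placeholders, not anything in particular:

import socket

def check_v6(host, port=80, timeout=5):
    # Restricting getaddrinfo to AF_INET6 returns only AAAA results
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return 'no AAAA record'
    for family, socktype, proto, _, sockaddr in infos:
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)  # sockaddr = (address, port, flowinfo, scope_id)
            return 'reachable at %s' % sockaddr[0]
        except socket.error:
            continue
        finally:
            s.close()
    return 'AAAA published, but unreachable over v6'

print(check_v6('www.google.com'))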

A /32 is a crazy amount of address space. The popular ISP allocation practice now is to give a /48 or /56 to each customer. A /32 contains 2^32 (4.2 billion) /64s, an entire IPv4 internet’s worth of /64s. And there are 2^32 /32s.
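The subnet math is all powers of two: a /N prefix contains 2^(M-N) /Ms. A quick sanity check in Python:

# Each extra prefix bit doubles the subnet count: a /N holds 2**(M-N) /Ms
print(2 ** (64 - 32))  # /64s in a /32: 4294967296, an IPv4 internet's worth
print(2 ** (48 - 32))  # /48s in a /32: 65536 customers
print(2 ** (56 - 32))  # /56s in a /32: 16777216 customers
print(2 ** 32)         # /32s in all of the 128-bit IPv6 space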

IPv6 PTR records mean a lot of hex digits. Hurricane routed me a /64, for which I delegated reverse DNS to myself. When setting up the zone file, you can’t leave out the zeros or use :: shorthand; every nibble has to be spelled out. This results in:

[bwann@tifa ~]$ host tifa.ipv6.wann.net
tifa.ipv6.wann.net has IPv6 address 2001:470:1f0f:6c4::2

[bwann@tifa ~]$ host 2001:470:1f0f:6c4::2
2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.c.6.0.f.0.f.1.0.7.4.0.1.0.0.2.ip6.arpa 
domain name pointer tifa.ipv6.wann.net.

Fortunately, since I’m delegated only a /64, the network half of the address lives in the zone origin, which saves typing out the other 16 digits in the zone file:

2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN     PTR     tifa.ipv6.wann.net.

Having lots of zeros in the address takes away a lot of grief, since in the forward direction you can drop leading zeros and use the :: shorthand.
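If you’d rather not count nibbles by hand, here’s a sketch using Python’s stdlib ipaddress module, whose reverse_pointer attribute does the same expansion:

import ipaddress

addr = ipaddress.ip_address('2001:470:1f0f:6c4::2')
# reverse_pointer expands every nibble; ip6.arpa allows no :: shorthand
print(addr.reverse_pointer)
# 2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.c.6.0.f.0.f.1.0.7.4.0.1.0.0.2.ip6.arpa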

Since an ASA doesn’t do plain GRE tunnels, I brought home my 2620 and will use it to run a v6 tunnel to HE for my entire home network. I still need to get my hands on an ASA for firewalling purposes, since it handles v6 directly. This should give me a feel for how autoconfiguration works.

Now I’m trying to find the biggest websites that serve up AAAA records. I keep hoping some nerd at Facebook, MySpace or CNN has a small setup somewhere. Haven’t found anything terribly exciting so far. This leads me to believe we have a lot of work in the content provider space to get content accessible via v6. It’s obviously a chicken/egg scenario, but I think solving the server side is a slightly easier problem.
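A quick-and-dirty sweep like this makes the hunting easier; it only checks AAAA resolution, not reachability, and the site list is purely illustrative:

import socket

sites = ['www.google.com', 'www.facebook.com', 'www.myspace.com', 'www.cnn.com']

for host in sites:
    try:
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
        addrs = sorted(set(info[4][0] for info in infos))
        print('%-20s AAAA: %s' % (host, ', '.join(addrs)))
    except socket.gaierror:
        print('%-20s no AAAA record' % host)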

Dave Meyer gave a somewhat frightening talk at NANOG 45 about how dual stack has failed us. The original model, circa 2000, was to start assigning IPv4 + IPv6 addresses to everything. Over time more and more things would run dual stack, eventually becoming the majority just as IPv4 space was exhausted.

The point he was trying to make was that we should’ve made dual stack a high priority back in 2000-2003, but we didn’t. IPv6 uptake hasn’t happened anywhere near the scale it needs to, and now IPv4 is almost exhausted, which prevents future hosts from running dual stack at all!

Instead we’ll start to see “carrier grade NAT” devices come along. A large ISP/cableco/telco could put all of its customers behind one, give them all IPv6 addresses, and let the CGN take care of IPv4 accessibility. Vendors will love it since they’ll get to sell big iron. But it may be nearly impossible to rip out later and restore the natural end-to-end connectivity we’ve come to enjoy and expect.

The vast majority of our customers have load balancers of one shape or another. The problem is that not many load balancers actually support v6: the Cisco CSM and CSS don’t, the ACE maybe. Citrix NetScalers do, but they’re very expensive boxes. A comment I read last night was, “do you seriously expect you’ll do so much v6 traffic you need to load balance?” While that’s likely very true, not many of our customers will go for it. I don’t understand IPv6 anycasting well enough to know whether it’s a possible solution or not.
