The writing was on the wall as far back as the ’80s: IPv4, the fourth version of the Internet Protocol and the standard that routes the vast majority of Internet traffic, was going to run out of addresses. Finally, last year, the American Registry for Internet Numbers (ARIN) ran out of its supply of IPv4 addresses. Although the global pool was officially exhausted in 2011, network design and routing tricks prolonged the regional supplies, as did the trading of IP addresses on the open market.
Read on to learn how the switch to the relatively new IPv6 affects data centers. But first, a quick primer on IP addresses in general.
IPv4 is used on packet-switched networks and uses 32-bit addresses, which puts an upper limit of 2^32, or 4,294,967,296, on the number of addresses. Each address identifies an individual device connected to the internet and is used to direct traffic to and from that device.
IPv4 is most often represented in dot-decimal notation, with four sets of numbers separated by periods. Because the underlying value is a 32-bit integer, the dotted-quad IP address 192.0.2.235 corresponds to the number 3221226219.
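The relationship between the two notations is easy to check; here is a minimal sketch using Python's standard ipaddress module:

```python
import ipaddress

# An IPv4 address is just a 32-bit integer with a friendlier spelling.
addr = ipaddress.IPv4Address("192.0.2.235")
print(int(addr))                           # 3221226219
print(ipaddress.IPv4Address(3221226219))   # 192.0.2.235
```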
Out of the over 4 billion addresses allocated in IPv4, there are subdivisions for private networks, shared addresses, benchmark tests, and other specific uses.
As the number of Internet users increased dramatically and more and more devices connected to the internet, it became obvious that IPv4 would exhaust its total possible number of addresses. In fact, techniques like network address translation already force billions of devices to share public IP addresses.
IPv6 adds 340 trillion trillion trillion additional IP addresses, more than enough for every person on the planet to have dozens of devices connected. IPv6 launched on June 6, 2012, but many organizations have not adopted measures to accommodate the new system.
As opposed to the 32-bit addresses used by IPv4, IPv6 uses 128-bit addresses. The two are not interoperable, so internet providers and network technicians must run parallel equipment wherever a device cannot read both protocols. An IPv6 address is written as eight groups of 16 bits, each group expressed as four hexadecimal digits and separated by colons, like 2001:0db8:0000:0000:0000:ff00:0042:8329. Because these are so much more unwieldy to type and say, there is a standard shorthand: leading zeros may be dropped and one run of all-zero groups collapsed to "::".
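That shorthand, standardized in RFC 5952, drops leading zeros and collapses one run of zero groups. A minimal sketch with Python's standard ipaddress module shows both spellings of the example address:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:ff00:0042:8329")
print(addr.compressed)  # 2001:db8::ff00:42:8329  (shorthand form)
print(addr.exploded)    # 2001:0db8:0000:0000:0000:ff00:0042:8329
```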
Right now, according to Google, the United States is sitting at around 24% IPv6 adoption, while globally adoption rates are closer to 8.5%.
The new protocol also adds some additional security features, simplifies router processing, and makes multicasting, in which a single network packet is sent to multiple destinations in a single operation, a core part of the specification.
Because virtualization creates far more machines than physical hardware alone would allow, many data centers have already been forced to adopt IPv6. Older devices, however, must be migrated to the new protocol, and the entire data center must be able to support both versions for years to come.
In July of 2013, the Internet Engineering Task Force drafted operational guidelines for data centers regarding IPv6, identifying three transition stages:
In the first stage, the data center keeps a native IPv4 infrastructure, with gateway routers and application gateways adapting IPv6 traffic arriving from the outside internet.
While the two protocols are not interoperable, there are methods to transition between them, like IPv4-translated IPv6 addresses, in which an algorithm translates each packet; some functionality is naturally lost in the process. Other mechanisms include tunnel brokers, 6rd, NAT64 servers, 464XLAT, and more. Some of these have specific uses, like allowing IPv6-only networks to communicate with technologies that are still limited to IPv4.
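To make the translation idea concrete: NAT64 typically synthesizes an IPv6 address by embedding the 32-bit IPv4 address in the low bits of the well-known prefix 64:ff9b::/96 (RFC 6052). A minimal sketch; the helper name ipv4_to_nat64 is our own, not a library function:

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052); the low 32 bits carry the IPv4 address.
WKP = ipaddress.ip_network("64:ff9b::/96")

def ipv4_to_nat64(v4: str) -> ipaddress.IPv6Address:
    """Synthesize the IPv6 address a NAT64 gateway would present for an IPv4 host."""
    return ipaddress.IPv6Address(
        int(WKP.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(ipv4_to_nat64("192.0.2.235"))  # 64:ff9b::c000:2eb
```

An IPv6-only client sends to the synthesized address, and the NAT64 gateway rewrites the packet toward the real IPv4 destination.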
This is going to be the stage for most data centers for the foreseeable future, until IPv6 becomes the most common protocol in use. In the dual-stack phase, both native IPv4 and IPv6 are present in the infrastructure, up to whatever tier of the interconnection scheme applies Layer 3 packet forwarding.
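One practical consequence of dual stack: an IPv4 client connecting to a dual-stack IPv6 socket appears as an IPv4-mapped IPv6 address in ::ffff:0:0/96 (RFC 4291). A small sketch with Python's standard ipaddress module:

```python
import ipaddress

# How an IPv4 peer appears to software reading addresses off a dual-stack socket.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.235")
print(mapped.ipv4_mapped)                                # 192.0.2.235
print(ipaddress.IPv6Address("2001:db8::1").ipv4_mapped)  # None (native IPv6)
```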
The final stage is pretty self-explanatory: a pervasive IPv6 infrastructure, including IPv6 hypervisors, which fall back to tunneling or NAT only where applications still require IPv4.
For now, it seems that all network devices along the path must support both protocols: endpoints, routers, and switches. Most backbone and internet service providers likely already support both. In many cases, this will be as simple as turning IPv6 on for a compatible device. In other cases, new hardware might be necessary, and close monitoring of traffic will let network technicians know where they need to deploy transition mechanisms such as 6in4 tunnels from tunnel brokers.
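A first sanity check on any host is whether the local stack speaks IPv6 at all. This minimal sketch uses Python's standard socket module; resolving the numeric loopback address ::1 involves no DNS lookup or network traffic:

```python
import socket

# Was this interpreter built with IPv6 support?
print("IPv6 compiled in:", socket.has_ipv6)

# Parse the IPv6 loopback address; purely local, no DNS involved.
family, _, _, _, sockaddr = socket.getaddrinfo("::1", None, socket.AF_INET6)[0]
print(family == socket.AF_INET6, sockaddr[0])  # True ::1
```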