Two recent reports have been making the rounds, each claiming that the green data center market will grow at a compound annual growth rate of more than 25% over the next four to five years. As the market for data center equipment expands, demand for energy-efficient hardware and support infrastructure is expected to drive an increasing share of investment.
Why and how are “green” data centers shifting the market so dramatically? Here are four factors driving major investment in green data center technology.
As virtualization continues to evolve, containers, virtual networks, and virtual storage are joining virtual machines to create a software-driven data center. To scale dynamically and to greater heights, a data center needs a mix of storage methods, fabric networks, network controllers capable of software-defined networking, and adjustable, efficient cooling that can handle greater rack densities.
More hardware certainly doesn’t sound greener, but software definition means workloads consume only the resources they need, dynamically, rather than the entire data center constantly whirring at 100% power without 100% utilization. As virtualization frees up more resources, power and cooling capacity are recovered.
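The energy argument above can be sketched with a toy model. This is purely illustrative, not a real capacity planner: the utilization profile, the 20% idle floor, and the function names are all hypothetical, but they show why power that tracks the workload beats flat peak-power provisioning.

```python
def fixed_energy_kwh(peak_kw: float, hourly_utilization: list[float]) -> float:
    """Statically provisioned gear: draws near peak power every hour,
    regardless of how busy the workloads actually are."""
    return peak_kw * len(hourly_utilization)

def elastic_energy_kwh(peak_kw: float, hourly_utilization: list[float],
                       idle_floor: float = 0.2) -> float:
    """Software-defined allocation (hypothetical model): consolidate
    workloads so power roughly tracks utilization, with a small idle floor."""
    return sum(peak_kw * max(u, idle_floor) for u in hourly_utilization)

# A bursty 24-hour utilization profile, as fractions of peak (hypothetical).
profile = [0.1] * 8 + [0.7] * 8 + [0.3] * 8

fixed = fixed_energy_kwh(100, profile)      # 100 kW * 24 h = 2400 kWh
elastic = elastic_energy_kwh(100, profile)  # power follows the workload
savings = 1 - elastic / fixed               # fraction of energy recovered
```

With this (made-up) profile, the elastic model uses 960 kWh against 2400 kWh for flat provisioning, recovering 60% of the energy without changing the work done.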
Data centers need massive scale only because demand for compute resources, especially storage, keeps growing. The Internet of Things alone could generate 44 trillion gigabytes of data by 2020. All of that information needs to go somewhere, and it looks increasingly likely that a vast network of small, medium, and enormous data centers will play host.
Most data traversing the internet is impermanent, which is fortunate: as of 2013, all the storage available on earth could hold just 33% of total traffic. By 2020, available storage will hold less than 15% of the data traversing the global internet. Even so, with temporary data making up the bulk, storage must still scale to meet demand.
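Those percentages are just a ratio of total capacity to total traffic. A quick sketch, using the article’s 44 ZB (44 trillion GB) figure for scale; the 6 ZB capacity number here is hypothetical, chosen only to land under the “less than 15%” bound cited above:

```python
def storable_fraction(capacity_zb: float, traffic_zb: float) -> float:
    """Fraction of a period's traffic that could fit in all available storage."""
    return capacity_zb / traffic_zb

# Hypothetical capacity of 6 ZB against 44 ZB of data.
frac = storable_fraction(capacity_zb=6.0, traffic_zb=44.0)  # ~0.136, under 15%
```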
That storage needs to be innovative and highly efficient, tailored to a specific job, whether long-term archiving on tape or high-speed access on solid-state drives. Efficient use of storage media will be even more central to future data center services than it is today.
Meanwhile, governments in the United States and elsewhere have begun passing laws requiring companies—and government subdivisions especially—to meet more stringent energy efficiency standards to combat global warming and dependence on foreign resources.
In the United States, this began with federal consolidation initiatives, which seek efficiency gains by retiring unused or underutilized computing equipment, combining facilities to make better use of existing infrastructure, and eliminating wasteful “server closets”: server rooms built ad hoc rather than explicitly designed for efficient computing.
But since then, states such as California and Washington have begun to mandate economizer cooling in data centers, which can cut cooling energy consumption by up to 50%. In California, data centers may have to operate at a maximum PUE (power usage effectiveness) of 1.5.
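PUE is simply total facility power divided by the power delivered to IT equipment, so a value of 1.0 would mean zero overhead. A minimal sketch, with hypothetical power draws, checking a facility against the 1.5 ceiling discussed above:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    Overhead (cooling, power distribution, lighting) pushes it above 1.0."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,200 kW total draw, 900 kW of it by IT gear.
ratio = pue(1200, 900)    # ~1.33
meets_cap = ratio <= 1.5  # under the 1.5 ceiling
```

Note the same overhead watts hurt twice: they consume energy directly and they raise the ratio regulators measure, which is why economizer cooling moves the needle on both.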
Ultimately, all of these reasons boil down to one main attraction of energy efficiency: cost savings. While we’d like the reason to be simple environmental stewardship, ROI is understandably king in the business world. Cooling, compute, and supporting infrastructure all draw significant power, and energy is the single biggest cost for a data center operator; some estimates peg it at 20-60% of operating costs. The more efficient operations become, the greater the savings.
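The ROI argument is back-of-envelope arithmetic. A sketch using the 20-60% energy share cited above; the opex total, the 40% share, and the 25% efficiency gain are all hypothetical inputs, not data from the reports:

```python
def annual_savings(total_opex: float, energy_share: float,
                   efficiency_gain: float) -> float:
    """Back-of-envelope: cutting energy use by `efficiency_gain` saves
    that fraction of the energy portion of operating expenses."""
    return total_opex * energy_share * efficiency_gain

# Hypothetical operator: $10M annual opex, energy at 40% of costs
# (within the 20-60% range above), and a 25% efficiency improvement.
saved = annual_savings(10_000_000, 0.40, 0.25)  # about $1M per year
```

Even at the low end of the range (energy at 20% of costs), the same 25% gain is worth roughly half a million dollars a year for this hypothetical operator, which is why efficiency projects tend to clear ROI hurdles.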