Another year, another trend in the data center world. Although edge data centers first started making headlines circa 2014 or 2015, they’ve become mainstream as more and more users slurp down increasing amounts of data. That takes serious bandwidth, to the point that many pundits point to the placement of workloads in edge facilities, rather than traditional centralized data centers in major markets, as a sign that cloud computing is starting to wane.
On the contrary, edge data centers serve to supplement and improve the reach of even the major cloud computing providers. No major cloud service provider (CSP) is going to place workloads only in major markets. Just look at our neighbors in Cheyenne: Microsoft has a huge facility that they’re actively expanding. Amazon operates data centers in Ohio, which, while central for the US in general and equidistant from major population centers like Chicago and New York, is hardly a major market in itself.
And beyond large scale platforms like Azure or AWS, you have players like Green House Data, who offer smaller scale virtualization from data centers in a myriad of second and third tier markets.
But it's not just about the cloud spreading itself to the edge. Here's why edge computing will be important, but will also become more of a niche deployment model, with cloud remaining the king of application processing and data storage.
The Green House Data blog has hit a major milestone this month, rocketing from around 8,000 monthly unique visitors to 12,000 unique visitors in March. As we pass the 10k mark, we want to say thanks to everyone who has come to our little corner of the internet and also take a look back at our most enduring and popular posts over the years.
From cloud hosting to data center design to information security, the blog has covered a lot of ground in the past five or six years, with experts from our staff joining our marketing and content teams for weekly updates.
Here are the top 10 all time posts from the Green House Data blog.
The past five to ten years have been jam-packed with cloud computing hype. Indeed, the cloud is here to stay, without a doubt. But recent reports show analysts expect hardware sales for on-premises enterprise IT to tick up significantly.
High profile examples like Dropbox show that moving back to a more traditional data center can create efficiencies and free up cash flow. Is the enterprise data center – and by extension, colocation – about to put up a fight against the cloud?
Data centers are invariably focused on 100% availability, which comes down to reliability of power and various mechanical and electrical components throughout the facility. But energy efficiency is a major priority as well, even for data centers that don’t call themselves “green” or “sustainable”.
With electricity providing a bulk of the operating expense, any gains in efficiency can go a long way towards minimizing OpEx. Many data center efficiency measures focus on containment, cooling, and other measures within the white space, but critical power infrastructure can be a good target for efficiency gains as well.
Major UPS manufacturers often include an “eco mode,” known in the case of our Cheyenne data center as Eaton’s Energy Saver System (ESS). These modes can yield efficiency gains of several percentage points, which sounds modest but in practice can mean thousands of dollars in savings and carbon emission reductions in the hundreds or thousands of pounds.
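To see how a few percentage points of UPS efficiency add up, here is a rough back-of-the-envelope sketch. All of the figures (IT load, efficiency levels, and electricity price) are hypothetical assumptions, not measurements from any specific facility:

```python
# Hypothetical illustration of UPS eco-mode savings.
# Assumed figures: 500 kW IT load, 94% UPS efficiency in normal
# double-conversion mode vs. 98% in eco mode, $0.07 per kWh.
it_load_kw = 500.0
eff_normal = 0.94
eff_eco = 0.98
price_per_kwh = 0.07
hours_per_year = 8760

# Power drawn from the grid at each efficiency level
input_normal_kw = it_load_kw / eff_normal
input_eco_kw = it_load_kw / eff_eco

kwh_saved = (input_normal_kw - input_eco_kw) * hours_per_year
dollars_saved = kwh_saved * price_per_kwh
print(f"Energy saved: {kwh_saved:,.0f} kWh/yr (~${dollars_saved:,.0f})")
```

Under these assumptions, a four-point efficiency gain works out to roughly 190,000 kWh and over $13,000 per year, which is where the "thousands of dollars" figure comes from.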
Unless you’ve been living under a rock or aren’t in the IT field at all, by now you’ve likely heard about the widespread Spectre and Meltdown vulnerabilities, which affect an enormous swath of processors from Intel, AMD, and ARM, exposing systems to side-channel attacks and causing performance problems as patches roll out.
Green House Data staff have been hard at work patching systems as fixes have come available this week. Here’s a quick summary of the vulnerabilities, their effects on cloud and general computing performance, and what we’ve done to fix them so far. We also provide a few links for users who need to patch their own operating systems or investigate further.
We have arrived again at that time when year-end lists proliferate for perusal by a workforce distracted by the holidays. The data center industry continued to chug forward in 2017, with M&A activity heating up in particular. Here are the top stories that broke throughout the data center world, plus a list of the most visited posts from our own humble blog.
Application performance can often hinge on how well your storage can serve data to end clients. For this reason you must correctly design or choose your storage tier in terms of both IOPS and throughput, which measure the operation rate and bandwidth of the storage, respectively.
It is vital to plan according to manufacturer and developer recommendations as well as real-world benchmarks to maximize your storage (and subsequently application) performance. Take a look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency.
Data center containment is the practice of splitting the aisles of a data center into segregated hot and cold sections, depending on how each aisle is set up. For example, some data centers might have the front of their servers on the inside of the aisle, with fans blowing the exhaust outside the aisle. Others might have the front of their servers on the outside of the aisle, and vent heat inside the aisle.
Containment keeps the hot air exiting servers from mixing with the cold air coming in from the Computer Room Air Conditioning (CRAC), dramatically improving energy efficiency and also maintaining a more consistent temperature, which reduces the overall load on both air conditioning units and the servers themselves.
Green House Data uses full containment in our Cheyenne and East Coast data centers, but only recently implemented it in our Seattle, WA facility. This case study demonstrates how even a simple containment system can lead to significant energy efficiency improvements. We expect the system to pay for itself within the year, in part thanks to generous rebates from Seattle Public Utilities.
Green House Data released our first Sustainability Report last year, covering the calendar year of 2015. Our goal for this initial report was largely to set a baseline by which we can measure our environmental impact from year to year, as well as to maintain our goal of transparency as a company.
As we’ve written about many times before, the data center industry is not particularly environmentally friendly. As an industry we consume billions of kilowatt-hours of electricity annually, which is our biggest contributor to emissions. But computing equipment also takes a significant toll on the environment, and we consume quite a bit of water as well.
By focusing on energy-efficient design and operation methods like free cooling and aisle containment, data centers can reduce consumption. Green House Data goes beyond low PUE ratings and tries to be as green as possible throughout our operations.
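For readers unfamiliar with PUE (Power Usage Effectiveness), the metric is simply total facility energy divided by the energy delivered to IT equipment. A minimal sketch, with hypothetical numbers:

```python
# PUE = total facility energy / IT equipment energy.
# 1.0 is the theoretical ideal (every watt goes to computing);
# cooling, lighting, and power-conversion losses push it higher.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,300 kWh for every 1,000 kWh of IT load:
print(pue(1300, 1000))  # 1.3
```

Measures like free cooling and aisle containment lower the numerator without touching the IT load, which is how they drive PUE down.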
How did we fare in 2016? Let’s take a look at some Sustainability Report highlights to find out.
Automation, and computer automation in particular, has changed everything from the jobs we do to the cars we drive. Naturally, it has reached the data center as well: the nexus that stores and controls many of the systems driving automation in other spheres. But even in a world where hardware and software can function more intelligently than ever, it still takes people to make everything go.
Most of us have been in the spot where we just want something to work. When you fire up your laptop, you want the internet to come on, or when you look for your electronic files, you want them to be accessible in the last place you left them.
In the endless quest for uptime and systems that just work, software and hardware pieces of data centers have both seen increased automation — but the robots aren’t replacing human service technicians just yet.