Unless you’ve been living under a rock or aren’t in the IT field at all, by now you’ve likely heard about the widespread Spectre and Meltdown vulnerabilities affecting an enormous swath of processors from industry leaders Intel and AMD, exposing sensitive data to attack and, once patched, dragging down performance.
Green House Data staff have been hard at work patching systems as fixes have become available this week. Here’s a quick summary of the vulnerabilities, their effects on cloud and general computing performance, and what we’ve done to fix them so far. We also provide a few links for users who need to patch their own operating systems or investigate further.
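For Linux users patching their own systems, recent kernels that include the fixes expose mitigation status under /sys/devices/system/cpu/vulnerabilities/. Here is a minimal sketch, assuming a kernel new enough to provide that directory, for checking where a host stands:

```python
import pathlib

# Kernels carrying the Meltdown/Spectre patches expose per-vulnerability
# status files here (e.g. meltdown, spectre_v1, spectre_v2).
VULN_DIR = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations():
    if not VULN_DIR.is_dir():
        print("No vulnerability reporting found; the kernel may predate the patches.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        status = entry.read_text().strip()  # e.g. "Mitigation: PTI" or "Vulnerable"
        print(f"{entry.name}: {status}")

if __name__ == "__main__":
    report_mitigations()
```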
We have arrived again at that time when year-end lists proliferate for perusal by a workforce distracted by the holidays. The data center industry continued to chug forward in 2017, with M&A activity heating up in particular. Here are the top stories that broke throughout the data center world, plus a list of the most visited posts from our own humble blog.
Application performance often hinges on how well your storage can serve data to end clients. For this reason you must correctly design or choose your storage tier in terms of both IOPS and throughput, which measure the operation rate and bandwidth of the storage, respectively.
It is vital to plan according to manufacturer and developer recommendations as well as real-world benchmarks to maximize your storage (and consequently application) performance. Take a look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency.
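As a rough illustration of how these numbers interact, here is a minimal sketch using the commonly cited formulas for backend IOPS and throughput. The workload figures, read/write ratio, and block size are hypothetical placeholders, not a benchmark of any particular array:

```python
# Typical RAID write penalties: each frontend write costs extra backend I/Os.
RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid6": 6, "raid10": 2}

def backend_iops(frontend_iops, read_ratio, raid_level):
    """Backend IOPS = reads + (writes x RAID write penalty)."""
    reads = frontend_iops * read_ratio
    writes = frontend_iops * (1 - read_ratio)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

def throughput_mb_s(iops, block_size_kb):
    """Throughput (MB/s) = IOPS x block size."""
    return iops * block_size_kb / 1024

# Hypothetical workload: 5,000 frontend IOPS, 70% reads, RAID 5, 16 KB blocks.
required = backend_iops(5000, 0.70, "raid5")
print(f"Backend IOPS needed: {required:.0f}")          # 9500
print(f"Throughput at that rate: {throughput_mb_s(5000, 16):.1f} MB/s")  # ~78 MB/s
```

Note how the RAID 5 write penalty nearly doubles the backend IOPS the disks must actually deliver, which is exactly why read/write ratio and RAID level belong in the sizing conversation.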
Data center containment is the practice of splitting the aisles of a data center into segregated hot and cold sections, depending on how each aisle is set up. For example, some data centers might have the front of their servers on the inside of the aisle, with fans blowing the exhaust outside the aisle. Others might have the front of their servers on the outside of the aisle, and vent heat inside the aisle.
Containment keeps the hot air exiting servers from mixing with the cold air supplied by the Computer Room Air Conditioning (CRAC) units, dramatically improving energy efficiency while maintaining a more consistent temperature, which reduces the overall load on both the air conditioning units and the servers themselves.
Green House Data uses full containment in our Cheyenne and East Coast data centers, but only recently implemented it in our Seattle, WA facility. This case study demonstrates how even a simple containment system can lead to significant energy efficiency improvements. We expect the system to pay for itself within the year, in part thanks to generous rebates from Seattle Public Utilities.
Green House Data released our first Sustainability Report last year, covering the calendar year of 2015. Our goal for this initial report was largely to set a baseline by which we can measure our environmental impact from year to year, as well as to maintain our goal of transparency as a company.
As we’ve written about many times before, the data center industry is not particularly environmentally friendly. The industry consumes billions of kilowatt-hours of electricity annually, which is our biggest contributor to emissions. But computing equipment itself also takes a significant toll on the environment, and we consume quite a bit of water as well.
By focusing on energy-efficient design and operation methods like free cooling and aisle containment, data centers can reduce consumption. Green House Data goes beyond low PUE ratings and tries to be as green as possible throughout our operations.
How did we fare in 2016? Let’s take a look at some Sustainability Report highlights to find out.
Automation, and in particular computer automation, has changed everything from the jobs we do to the cars we drive. Naturally, it has reached the data center as well, the nexus that stores and controls many of the systems driving automation in other spheres. But even in a world where hardware and software can function more intelligently than ever, it still takes people to make everything go.
Most of us have been in the spot where we just want something to work. When you fire up your laptop, you want the internet to come on, or when you look for your electronic files, you want them to be accessible in the last place you left them.
In the endless quest for uptime and systems that just work, software and hardware pieces of data centers have both seen increased automation — but the robots aren’t replacing human service technicians just yet.
Here you are. You have all documentation ready and in-hand to present to your manager. The year is 2004, and you are pitching a new idea for updating the company’s existing network core infrastructure. The terms “cost-savings,” “scalable,” and “extensible” are prevalent within the proposal documents. You are unsure if the terms “open-source” and “networking” were ever mentioned in the same sentence before. However, the solution is simple: build a network that is non-blocking, resilient, predictable, and manageable.
Additionally, you want the network to gain the operational efficiencies your SysAdmin department already enjoys. Who doesn’t love streamlined provisioning, broad visibility into resource utilization, and scalable deployment of services? What if you could provide the same network technologies across a variety of hardware platforms?
It turns out you can, using open-source networking to decouple the network operating system from the hardware vendor. While this is only one option available in the market and may not work for every enterprise organization, the disaggregation of network hardware and software has some definite advantages.
Private vs. public cloud is a battle many thought was over years ago, and some recent think pieces seem to confirm that notion, claiming no one can match the economies of scale delivered by hyperscale cloud providers.
But private cloud, or on-premises virtualization, can still be a less expensive option — if you have the staff and capabilities to support it. A recent study from 451 Research pinpoints when the tipping point falls in favor of private cloud and when public cloud has a lower total cost of ownership (TCO), based on hardware utilization and staff efficiency.
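To make the idea of a tipping point concrete, here is a minimal sketch comparing per-VM cost as utilization changes. The dollar figures and capacity are entirely hypothetical illustrations, not numbers from the 451 Research study:

```python
def private_cloud_cost_per_vm(monthly_hw_cost, monthly_staff_cost,
                              vm_capacity, utilization):
    """Fixed hardware and staff costs divided across the VMs actually in use."""
    vms_in_use = vm_capacity * utilization
    return (monthly_hw_cost + monthly_staff_cost) / vms_in_use

# Hypothetical numbers for illustration only.
PUBLIC_CLOUD_PER_VM = 75.0            # $/VM/month for a comparable public cloud VM
HW, STAFF, CAPACITY = 20_000.0, 15_000.0, 600

for utilization in (0.3, 0.5, 0.7, 0.9):
    private = private_cloud_cost_per_vm(HW, STAFF, CAPACITY, utilization)
    winner = "private" if private < PUBLIC_CLOUD_PER_VM else "public"
    print(f"{utilization:.0%} utilized: private ${private:.2f}/VM vs "
          f"public ${PUBLIC_CLOUD_PER_VM:.2f}/VM -> {winner} wins")
```

With these made-up inputs, public cloud wins at low utilization and private cloud pulls ahead only once the hardware is heavily used, which is the basic dynamic the study quantifies.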
How secure is your data center? In order to guarantee security, maintain uptime, and pass HIPAA and SSAE 16 Type II audits, Green House Data maintains over sixty auditable security, environmental, and compliance control measures. Each compliant data center is audited once per year.
Some of these control points are standard practice, while others had to be added to daily routines in some facilities in order to gain compliance and bring them up to our strict standards. This list can help you get your data center up to speed – or see just how much effort goes into keeping server rooms monitored, secured, and fully auditable.
See all 61 points we check for security and auditability after the jump.
For the past decade, Power Usage Effectiveness (PUE) has been the most common standard for measuring data center energy efficiency. While PUE remains in the news with recent controversy over its inclusion in the latest ASHRAE standards, other energy efficiency metrics are starting to catch on – specifically server utilization.
We’ve covered PUE before on the blog, but basically it’s the ratio of overall power used to power used for strictly computing equipment. The closer to a 1.0 ratio, the more efficient the facility.
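As a quick worked example, with made-up power figures purely for illustration, the calculation is just total facility power divided by IT equipment power:

```python
def pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,400 kW total draw, 1,000 kW of it reaching IT gear.
print(f"PUE: {pue(1400, 1000):.2f}")  # -> 1.40, i.e. 0.4 W of overhead per IT watt
```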
As the industry has matured, PUE has come under fire for being too simplistic, too easy to manipulate, and blind to other environmental concerns. This led to the development of other data center energy efficiency and environmental impact measurements and benchmarks covering renewable energy use, energy reuse, and even water consumption.