Automation, and in particular computer automation, has changed everything from the jobs we do to the cars we drive. Naturally, it has reached the data center as well, the nexus that stores and controls many of the systems driving automation in other spheres. But even in a world where hardware and software can function more intelligently than ever, it still takes people to make everything go.
Most of us have been in the spot where we just want something to work. When you fire up your laptop, you want the internet connection to just be there, and when you look for your electronic files, you want them right where you left them.
In the endless quest for uptime and systems that just work, software and hardware pieces of data centers have both seen increased automation — but the robots aren’t replacing human service technicians just yet.
Here you are. You have all documentation ready and in hand to present to your manager. The year is 2004, and you are pitching a new idea for updating the company's existing network core infrastructure. The terms "cost savings," "scalable," and "extensible" appear throughout the proposal documents. You are unsure if the terms "open source" and "networking" have ever been mentioned in the same sentence before. The solution, however, is simple: build a network that is non-blocking, resilient, predictable, and manageable.
You also fold in the operational efficiencies your SysAdmin department already enjoys. Who doesn't love streamlined provisioning, broad visibility into resource utilization, and scalable deployment of services? What if we could provide those same capabilities across a variety of network hardware platforms?
It turns out we can, using open-source networking to decouple network hardware from the software that runs on it. While this is only one option available in the market and may not work for every enterprise organization, the disaggregation of network hardware and software has some definite advantages.
Private vs. public cloud is a battle many thought was over years ago, and some recent think pieces seem to confirm that notion, claiming no one can match the economies of scale delivered by hyperscale cloud providers.
But private cloud, or on-premises virtualization, can still be a less expensive option — if you have the staff and capabilities to support it. A recent study from 451 Research identifies the tipping point at which private cloud wins and when public cloud has a lower total cost of ownership (TCO), based on how well you utilize your hardware and how efficiently your staff can manage it.
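To make the tipping-point idea concrete, here is a minimal sketch of a per-VM cost comparison. All figures and the cost model itself are hypothetical, not taken from the 451 Research study; the point is only that low hardware utilization or poor staff efficiency can push private cloud cost per VM above a public cloud list price.

```python
def private_cloud_cost_per_vm(hw_cost_monthly: float, vms_provisioned: int,
                              utilization: float, admin_salary_monthly: float,
                              vms_per_admin: int) -> float:
    """Hypothetical monthly cost per VM for a private cloud.

    Hardware cost is spread only over the VMs actually in use, so low
    utilization inflates the per-VM price; labor cost falls as each
    admin manages more VMs (the "staff efficiency" lever).
    """
    effective_vms = vms_provisioned * utilization
    hardware_share = hw_cost_monthly / effective_vms
    labor_share = admin_salary_monthly / vms_per_admin
    return hardware_share + labor_share

PUBLIC_CLOUD_VM_PRICE = 150.0  # made-up public cloud list price, $/month

# Well-run private cloud: 70% utilization, 150 VMs per admin
good = private_cloud_cost_per_vm(20_000, 500, 0.7, 9_000, 150)   # ~117/VM

# Same hardware at 30% utilization tips the balance toward public cloud
poor = private_cloud_cost_per_vm(20_000, 500, 0.3, 9_000, 150)   # ~193/VM

print(good < PUBLIC_CLOUD_VM_PRICE, poor < PUBLIC_CLOUD_VM_PRICE)
```

Under these made-up numbers the efficient shop beats the public cloud price and the inefficient one does not, which is the shape of the tipping point the study describes.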
How secure is your data center? In order to guarantee security, maintain uptime, maintain HIPAA compliance, and pass SSAE 16 Type II audits, Green House Data has over sixty auditable security, environmental, and compliance control measures. Each compliant data center is audited once per year.
Some of these control points are standard practice, while others had to be added to daily routines in some facilities in order to gain compliance and bring them up to our strict standards. This list can help you get your data center up to speed – or see just how much effort goes into keeping server rooms monitored, secured, and fully auditable.
See all 61 points we check for security and auditability after the jump.
For the past decade, Power Usage Effectiveness has been the most common standard to measure data center energy efficiency. While PUE remains in the news with recent controversy over its inclusion in the latest ASHRAE standards, other energy efficiency metrics are starting to catch on – specifically server utilization.
We’ve covered PUE before on the blog, but in short it’s the ratio of total facility power to the power consumed strictly by computing equipment. The closer to a 1.0 ratio, the more efficient the facility.
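The ratio is simple enough to express directly. A minimal sketch, with hypothetical power figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    power drawn by IT (computing) equipment. 1.0 is the ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,500 kW total draw, 1,000 kW to servers,
# storage, and network gear; the rest goes to cooling, lighting, etc.
print(round(pue(1500, 1000), 2))  # 1.5
```

A PUE of 1.5 means the facility burns half a watt of overhead for every watt of compute, which is roughly where many enterprise data centers sit.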
As the industry has matured, PUE has come under fire as being too simple, easy to manipulate, or failing to consider other environmental concerns. This led to the development of other data center energy efficiency and environmental impact measurements and benchmarks, covering renewable energy use, reuse of energy, and even water consumption.
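One example of these newer metrics is Water Usage Effectiveness (WUE), defined by The Green Grid as liters of water consumed per kilowatt-hour of IT equipment energy. A minimal sketch with hypothetical annual figures:

```python
def wue(annual_water_liters: float, annual_it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of site water consumed per
    kWh of IT equipment energy over a year. Lower is better."""
    if annual_it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return annual_water_liters / annual_it_energy_kwh

# Hypothetical site: 60 million liters of water per year against
# 40 million kWh of IT load
print(round(wue(60_000_000, 40_000_000), 2))  # 1.5 L/kWh
```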
With the average cost of data center outages hovering at $740,000 (according to a Ponemon / Emerson study from 2016), operators must take action to avoid the most common causes of downtime. Let’s take a quick dive into the leading origins of unplanned downtime and how you can avoid them in your data center.
Edge data centers have a lot of buzz these days as a way to deliver services outside of core markets. But do actual data center operators have any interest in edge facilities? And what exactly is an edge data center, anyway?
Green House Data surveyed 492 IT professionals, 38% of them at the executive level. The results indicate a mild interest in edge data centers, but mostly for future deployments: 18% currently use an edge data center, 46% plan to add an edge facility within the next 12 months, and 54% do not plan to add one.
Read on to see the full survey results.
Can you believe we’re already over a quarter of the way through 2016? Feels like we were just posting our 2015 blog wrap up yesterday. But here we are—the data center world keeps spinning. In case you missed something in the past three and a half months, we’ve collected our top blog posts and some of the most popular data center news headlines from around the blogosphere in today’s post.
As data center design continues to evolve, one stalwart piece hasn’t changed much: cabinet and rack security and monitoring. After all, how complicated can a door lock get? While almost every data center will have some form of lock on its racks and cabinets — especially colocation facilities, where multiple clients access shared floor space — not all locks are created equal. Newer technologies allow automated access logs, biometric security, wireless unlocking, and more.
With different compliance standards and security requirements for various applications, some colocation providers will install custom locks for your cabinet if necessary. Physical security measures remain vitally important, as social engineering and theft can extend to hardware and not just data. How then do data center providers go about securing cabinets and racks?
Distributed Denial of Service attacks are nothing new, but they’re becoming more and more common, from politically motivated attacks on financial and government institutions to recent attacks on data centers like Digital Ocean. In a DDoS attack, hackers use hijacked computers to flood servers with incoming requests, effectively shutting down services by clogging network traffic or sending mass quantities of junk data. These attacks are increasingly difficult to defend against as they grow in scale, and because the traffic is distributed among many infected machines, blocking it by IP address is rarely straightforward.
Public institutions, financial industries, eCommerce sites, and hosting providers are among the most popular targets, but anyone can be a victim—and if your IT infrastructure is hosted in a data center, you need that facility to provide strong DDoS mitigation to avoid service interruptions of your own.
Read on to learn common DDoS attack methods and mitigation strategies.
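As a small illustration of one widely used mitigation building block (my own sketch, not a technique drawn from the post), a per-source-IP token bucket caps how fast any single address can send requests. Real DDoS scrubbing operates at the network edge and at far greater scale; the rate, burst, and IP values here are hypothetical.

```python
import time
from collections import defaultdict

class TokenBucketLimiter:
    """Per-source-IP token bucket rate limiter (illustrative toy).

    Each source IP gets a bucket that refills at `rate` tokens per
    second up to `burst`. A request spends one token; an empty
    bucket means the request is dropped (or challenged).
    """

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.buckets = defaultdict(
            lambda: {"tokens": burst, "last": time.monotonic()}
        )

    def allow(self, ip: str) -> bool:
        bucket = self.buckets[ip]
        now = time.monotonic()
        # Refill based on elapsed time, capped at the burst size
        bucket["tokens"] = min(
            self.burst, bucket["tokens"] + (now - bucket["last"]) * self.rate
        )
        bucket["last"] = now
        if bucket["tokens"] >= 1:
            bucket["tokens"] -= 1
            return True
        return False

limiter = TokenBucketLimiter(rate=5, burst=10)
# A single source (hypothetical address) bursting 20 rapid requests:
# roughly the first `burst` pass, the rest are dropped.
results = [limiter.allow("203.0.113.7") for _ in range(20)]
print(results.count(True))
```

The same idea, implemented in firewalls or upstream scrubbing services rather than application code, is one layer of the mitigation stack the post goes on to cover.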