How secure is your data center? To guarantee security, maintain uptime, comply with HIPAA, and pass SSAE 16 Type II audits, Green House Data has over sixty auditable security, environmental, and compliance control measures. Each compliant data center is audited once per year.
Some of these control points are standard practice, while others were added to daily routines at certain facilities to achieve compliance and bring them up to our strict standards. This list can help you get your data center up to speed – or show just how much effort goes into keeping server rooms monitored, secured, and fully auditable.
See all 61 points we check for security and auditability after the jump.
Green House Data provides a 100% SLA – which means your cloud infrastructure is guaranteed to be online 24/7. But if you manage your own virtual machines, errors in application deployment, cyber attacks, configuration mishaps, heavy network traffic, and other issues can still cause them to crash. One tool in the arsenal against cloud downtime is VMware Fault Tolerance.
Fault Tolerance (FT) increases the availability of virtual machines by maintaining an identical secondary copy of the production VM that is continuously updated and ready to take over the moment the original VM goes down. VMware FT works alongside vSphere High Availability, which must be enabled on the cluster, keeping the secondary VM in lockstep with the primary.
FT is often used for applications that require constant availability, especially if they have continual or near-constant client connections, or for custom applications that require clustering.
Read on to see host server and VM requirements for FT, plus the difference between FT and VMware High Availability.
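To make those requirements concrete, here is a minimal Python sketch of the kind of pre-flight check an administrator might run before turning FT on for a VM. The specific vCPU limit and device list below are illustrative assumptions only – supported configurations vary by vSphere version and license edition, so check VMware's documentation for your environment.

```python
# Minimal sketch of an FT pre-flight check. The limits below are
# illustrative placeholders -- consult VMware's documentation for the
# actual requirements of your vSphere version and license edition.

MAX_FT_VCPUS = 4  # assumed SMP-FT limit; varies by vSphere version and license
UNSUPPORTED_DEVICES = {"physical_rdm", "usb_passthrough", "vgpu"}  # examples only

def ft_eligibility_issues(vm):
    """Return a list of reasons a VM (described as a plain dict) may not support FT."""
    issues = []
    if vm["vcpus"] > MAX_FT_VCPUS:
        issues.append(f"{vm['vcpus']} vCPUs exceeds the assumed FT limit of {MAX_FT_VCPUS}")
    if not vm.get("ha_cluster", False):
        issues.append("VM is not in a vSphere HA-enabled cluster (FT requires HA)")
    if vm.get("snapshots", 0) > 0:
        issues.append("FT cannot be turned on while the VM has snapshots")
    for device in vm.get("devices", []):
        if device in UNSUPPORTED_DEVICES:
            issues.append(f"device '{device}' is not supported with FT")
    return issues

if __name__ == "__main__":
    candidate = {"name": "app-vm-01", "vcpus": 2, "ha_cluster": True,
                 "snapshots": 0, "devices": ["vmxnet3"]}
    problems = ft_eligibility_issues(candidate)
    print("FT-ready" if not problems else "\n".join(problems))
```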
Let's dive back into our ongoing blog series Get to Know Green House Data, where we shine a light on the employees across our nationwide offices and data centers. When we talk about “Hear from a Human” support, we mean the engineers, help desk staff, and data center technicians who keep the pulse of your IT infrastructure going strong.
This week, we talk to Managed Services Senior Engineer Sergio Gonzalez.
Two of the biggest buzzwords thrown around when talking about cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines nullify the need for capacity planning – after all, if you can just adjust resources on the fly and add or remove processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you still need to maximize your IT budget and spend those dollars efficiently at all times, while avoiding cloud sprawl. Pay-as-you-go only works when you keep a careful eye on your resources; unused capacity left running adds up quickly. Capacity planning still has a role to play in your cloud plans.
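As a simple illustration of why projections still matter, the Python sketch below fits a linear trend to past utilization and estimates when a resource pool will hit a planning threshold. The sample figures and the 80% threshold are invented for the example; a real plan would draw on your own monitoring history and growth assumptions.

```python
# Illustrative capacity projection: fit a straight line to monthly CPU
# utilization and estimate when it crosses a planning threshold.
# The sample numbers and the 80% threshold are invented for this sketch.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

months = [1, 2, 3, 4, 5, 6]                 # past six months
cpu_utilization = [42, 45, 51, 55, 58, 63]  # average % of provisioned vCPU used

slope, intercept = linear_fit(months, cpu_utilization)
threshold = 80.0                            # when to plan for more capacity

if slope > 0:
    months_to_threshold = (threshold - intercept) / slope
    print(f"Utilization grows ~{slope:.1f} points/month; "
          f"expect to hit {threshold:.0f}% around month {months_to_threshold:.1f}")
else:
    print("Utilization is flat or shrinking; consider releasing unused capacity")
```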
We introduced some key concepts of cloud monitoring in our blog post earlier this week, namely the three types of data you need to collect to keep an eye on your cloud infrastructure. Today we’ll dive a little further into another factor mentioned: granularity.
The granularity of your cloud monitoring data is how often you record the state of each metric. It can significantly change your visibility into the environment: a view that isn’t granular enough averages out potential problems to the point where they never show up as red flags during a troubleshooting review.
Granularity becomes a careful balancing act: collecting too much data taxes your systems and negatively affects performance, while not taking the pulse often enough leads to ineffective cloud monitoring.
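To see how a coarse view averages problems away, the short Python sketch below compares per-second CPU readings against one-minute averages of the same data. The data is synthetic and purely illustrative: a ten-second spike to roughly 95% load is obvious in the fine-grained series but nearly vanishes once the samples are averaged.

```python
import random

# Synthetic per-second CPU readings: steady ~40% load with a short spike to ~95%.
random.seed(1)
per_second = [40 + random.uniform(-3, 3) for _ in range(300)]
for i in range(150, 160):          # a 10-second spike buried in minute 3
    per_second[i] = 95 + random.uniform(-2, 2)

def downsample(samples, window):
    """Average consecutive samples into buckets of `window` seconds."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples), window)]

per_minute = downsample(per_second, 60)

print(f"1-second granularity:  peak = {max(per_second):.1f}%")
print(f"60-second granularity: peak = {max(per_minute):.1f}%")
# The one-minute view averages the spike down toward normal load,
# so a threshold alert set at, say, 90% would never fire.
```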