How secure is your data center? To guarantee security, maintain uptime, stay HIPAA compliant, and pass SSAE 16 Type II audits, Green House Data has over sixty auditable security, environmental, and compliance control measures. Each compliant data center is audited once per year.
Some of these control points are standard practice, while others had to be added to daily routines in some facilities in order to gain compliance and bring them up to our strict standards. This list can help you get your data center up to speed – or see just how much effort goes into keeping server rooms monitored, secured, and fully auditable.
See all 61 points we check for security and auditability after the jump.
Green House Data provides a 100% SLA – which means your cloud infrastructure is guaranteed to be online 24/7. But errors in application deployment, cyber attacks, configuration mishaps, heavy network traffic, and other issues can still cause virtual machines you manage yourself to crash. One tool in the arsenal to fight cloud downtime is VMware Fault Tolerance.
Fault Tolerance (FT) increases the availability of virtual machines by creating an identical copy of the production VM that is continuously updated and ready to replace the original VM in the event of downtime. VMware FT is part of vSphere High Availability and works alongside it to keep the secondary VM in lockstep with the primary.
FT is often used for applications that require constant availability, especially if they have continual or near-constant client connections, or for custom applications that require clustering.
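VMware FT itself is configured in vSphere rather than in application code, but the core idea – a secondary copy kept in lockstep that takes over the instant the primary fails – can be sketched in a few lines. Everything below (the class, the state dictionaries, the failover logic) is a simplified illustration, not VMware's actual mechanism:

```python
# Conceptual sketch only: VMware FT is enabled through vSphere, not code.
# This toy model illustrates the pattern of a lockstep secondary copy.

class FaultTolerantVM:
    """Toy primary VM with a continuously synced secondary copy."""

    def __init__(self, name):
        self.name = name
        self.primary_state = {}
        self.secondary_state = {}   # kept in lockstep with the primary
        self.primary_alive = True

    def write(self, key, value):
        """Every change to the primary is mirrored to the secondary."""
        self.primary_state[key] = value
        self.secondary_state[key] = value  # continuous replication

    def fail_primary(self):
        """Simulate the primary VM going down."""
        self.primary_alive = False

    def read(self, key):
        """Reads transparently fail over to the secondary copy."""
        state = self.primary_state if self.primary_alive else self.secondary_state
        return state[key]

vm = FaultTolerantVM("app-server-01")
vm.write("sessions", 42)
vm.fail_primary()
print(vm.read("sessions"))  # prints 42: the secondary answers seamlessly
```

The point the sketch makes is the same one FT makes to clients: because the copy was updated on every write, the failover is invisible to whoever is reading.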
Read on to see host server and VM requirements for FT, plus the difference between FT and VMware High Availability.
Two of the biggest buzzwords thrown around when talking about cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines nullify the need for capacity planning – after all, if you can just adjust resources on the fly and add or remove processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you need to maximize your IT budget and use those dollars efficiently at all times, while avoiding cloud sprawl. Pay-as-you-go only works when you keep a careful eye on usage; unused resources can add up quickly. Capacity planning still has a role to play in your cloud plans.
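The cost of sprawl is easy to put a rough number on. The sketch below uses hypothetical hourly rates and utilization figures (not any provider's actual pricing) to show how idle pay-as-you-go capacity accumulates over a month:

```python
# Rough illustration of why unused pay-as-you-go resources add up.
# Rates and utilization numbers below are hypothetical examples.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_waste(instances):
    """Sum the monthly cost of capacity that sits idle."""
    total = 0.0
    for name, hourly_rate, utilization in instances:
        idle_fraction = 1.0 - utilization
        total += hourly_rate * HOURS_PER_MONTH * idle_fraction
    return total

fleet = [
    ("web-01", 0.10, 0.60),   # reasonably utilized
    ("db-01",  0.25, 0.30),   # mostly idle
    ("batch",  0.05, 0.05),   # forgotten test VM
]

print(f"${monthly_waste(fleet):,.2f}/month spent on idle capacity")
```

Even this tiny three-VM fleet leaks close to two hundred dollars a month – which is exactly the kind of number a capacity plan surfaces before the invoice does.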
We introduced some key concepts of cloud monitoring in our blog post earlier this week, namely the three types of data you need to collect to keep an eye on your cloud infrastructure. Today we’ll dive a little further into another factor mentioned: granularity.
The granularity of your cloud monitoring data is how often you record the state of each metric. It significantly affects visibility into your environment: a view that isn’t granular enough averages out potential problems until they no longer stand out as red flags during troubleshooting.
Granularity thus becomes a careful balancing act: collecting too much data taxes your system and negatively affects performance, while not taking the pulse often enough leads to ineffective cloud monitoring.
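The averaging-out effect is easy to demonstrate. In this sketch (the CPU samples are synthetic, recorded once per minute), a one-minute spike to 98% utilization is obvious at full granularity but shrinks to an unremarkable 36% once the same data is rolled up into five-minute averages:

```python
# Sketch: how coarse granularity averages a CPU spike out of view.
# Sample values are synthetic, one reading per minute.

def downsample(samples, window):
    """Average raw samples into buckets of `window` readings each."""
    return [
        sum(samples[i:i + window]) / window
        for i in range(0, len(samples), window)
    ]

# Ten minutes of CPU utilization (%), with a one-minute spike to 98%.
cpu = [20, 22, 19, 21, 98, 20, 23, 21, 20, 22]

print(max(cpu))                 # 1-minute granularity: spike visible at 98
print(max(downsample(cpu, 5)))  # 5-minute averages: spike flattens to 36.0
```

A troubleshooter reviewing only the coarse series would see nothing worth flagging, which is precisely the visibility gap described above.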
Network and system utilization monitoring are essential pieces of any cloud environment, helping engineers ensure consistent performance and spot threats to availability – whether resource or security related – before they impact users.
There are a variety of platforms for collecting data on your cloud environment. Depending on your cloud provider, some of them might be included in your contract. If you require specific features or integrations, you might add a third-party monitoring platform. Some can even monitor across different public and hybrid clouds on different virtualization platforms.
Once you’ve settled on a monitoring tool, you have to decide what data to collect and how much of it to store and review. If you have a very large scale environment, this may even be a dedicated role for an employee. Some cloud environments will generate constant data that must be reviewed in order to meet internal SLAs or guarantee availability of your platform to the public. Other environments will only generate data rarely. In either case, the more information you can afford to store and review, the better you’ll be able to prevent and troubleshoot any problems with your virtual machines.
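Whatever platform handles collection, the review step usually boils down to comparing each snapshot against the thresholds your SLAs imply. The metric names and threshold values below are illustrative assumptions, not any monitoring product's real API:

```python
# Minimal sketch of reviewing collected metrics against alert thresholds.
# Metric names and threshold values are illustrative assumptions.

THRESHOLDS = {
    "cpu_percent": 85.0,    # sustained CPU pressure
    "disk_percent": 90.0,   # nearly full datastore
    "latency_ms": 250.0,    # response time an SLA might cap
}

def find_breaches(snapshot):
    """Return the metrics in a snapshot that exceed their threshold."""
    return {
        metric: value
        for metric, value in snapshot.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    }

reading = {"cpu_percent": 92.5, "disk_percent": 71.0, "latency_ms": 180.0}
print(find_breaches(reading))  # prints {'cpu_percent': 92.5}
```

In a real environment this check runs continuously against every VM, which is why the volume of data you can afford to store and review matters so much.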