Green House Data provides a 100% SLA, meaning your cloud infrastructure is guaranteed to be online 24/7. But errors in application deployment, cyberattacks, configuration mishaps, heavy network traffic, and other issues can still cause your virtual machines to crash if you are managing them yourself. One tool in the arsenal to fight cloud downtime is VMware Fault Tolerance.
Fault Tolerance (FT) increases the availability of virtual machines by creating an identical copy of the production VM that is continuously updated and ready to replace the original in the event of downtime. VMware FT requires vSphere High Availability and works with it to keep the secondary VM in lockstep with the primary.
FT is often used for applications that require constant availability, especially if they have continual or near-constant client connections, or for custom applications that require clustering.
Read on to see host server and VM requirements for FT, plus the difference between FT and VMware High Availability.
Two of the biggest buzzwords thrown around when talking about cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines nullify the need for capacity planning – after all, if you can just adjust resources on the fly and add or remove processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you still need to maximize your IT budget and use those dollars efficiently, while avoiding cloud sprawl. Pay-as-you-go only works in your favor when you keep a careful eye on your resources; unused capacity adds up quickly. Capacity planning still has a role to play in your cloud plans.
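As a rough illustration of the kind of projection capacity planning still calls for, the sketch below estimates how many months of headroom remain at a steady growth rate. All figures are hypothetical placeholders; you would plug in numbers from your own monitoring data.

```python
# Sketch: rough cloud capacity projection under an assumed linear growth rate.
# All figures below are hypothetical examples, not recommendations.
current_usage_gb = 600    # storage in use today
capacity_gb = 1000        # currently provisioned capacity
monthly_growth_gb = 50    # observed average growth per month

def months_until_full(usage, capacity, growth):
    """Months of headroom left if growth stays constant."""
    if growth <= 0:
        return float("inf")  # flat or shrinking usage: no projected exhaustion
    return (capacity - usage) / growth

print(months_until_full(current_usage_gb, capacity_gb, monthly_growth_gb))  # 8.0
```

Even a simple model like this tells you when to budget for the next scale-up, rather than reacting after resources run short.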
We introduced some key concepts of cloud monitoring in our blog post earlier this week, namely the three types of data you need to collect to keep an eye on your cloud infrastructure. Today we’ll dive a little further into another factor mentioned: granularity.
The granularity of your cloud monitoring data is how often you record the state of each metric. It significantly affects visibility into your environment: a view that isn’t granular enough averages out potential problems to the point where they never raise a red flag during a troubleshooting review.
Granularity becomes a careful balancing act: collecting too much data taxes your system and negatively affects performance, while not taking the pulse often enough leads to ineffective cloud monitoring.
Network and system utilization monitoring are essential pieces of any cloud environment, helping engineers ensure consistent performance and spot threats to availability, whether resource- or security-related, before they impact users.
There are a variety of platforms for collecting data on your cloud environment. Depending on your cloud provider, some of them might be included in your contract. If you require specific features or integrations, you might add a third-party monitoring platform. Some can even monitor across different public and hybrid clouds on different virtualization platforms.
Once you’ve settled on a monitoring tool, you have to decide what data to collect and how much of it to store and review. In a very large-scale environment, this may even be a dedicated role for an employee. Some cloud environments generate constant data that must be reviewed in order to meet internal SLAs or guarantee availability of your platform to the public. Other environments generate data only rarely. In either case, the more information you can afford to store and review, the better you’ll be able to prevent and troubleshoot any problems with your virtual machines.
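"How much you can afford to store" is worth estimating up front. The sketch below multiplies out a hypothetical fleet to get a daily data volume; every figure is an assumption to be replaced with your own counts and your tool's actual per-sample size.

```python
# Sketch: estimating monitoring data volume to decide what you can
# afford to store and review. All figures are hypothetical assumptions.
vms = 200                 # monitored virtual machines
metrics_per_vm = 30       # CPU, memory, disk, network counters, etc.
bytes_per_sample = 16     # rough estimate: timestamp + value
interval_seconds = 10     # collection granularity

samples_per_day = 86_400 // interval_seconds
daily_bytes = vms * metrics_per_vm * bytes_per_sample * samples_per_day
print(f"{daily_bytes / 1e9:.1f} GB/day")
```

Re-running the estimate at different intervals shows directly how granularity drives storage and review costs, which feeds back into the balancing act described above.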
By all major accounts, most organizations are heading toward a future IT environment that mixes and matches from a variety of cloud services and providers. Now is the time to lay the groundwork for your multicloud future by documenting a strategy for multicloud management and adopting new technologies for single-pane visibility.
Fighting shadow cloud and getting all of your organization’s disparate cloud resources under a single roof is an uphill battle. The modern IT reality is that users are far more technologically savvy than in the past, and they aren’t afraid to go around IT to get the tools they want right now. That means multicloud is here today, and it’s going to be a big part of the future, too.
IDC found that 47% of DevOps-focused organizations plan to have five or more clouds by 2020. Even if you aren’t using a DevOps methodology, embracing the cloud often brings an agile mindset in which it’s easy to slip into information silos stranded on one cloud provider or another. As you deploy each application in whichever cloud makes the most sense for it, you can end up with stranded data and interoperability issues.
A carefully designed hybrid cloud environment can accommodate data across workloads, availability zones/locations, and access points. It can also enable repeatable, automated, and granular security and monitoring.