Disaster recovery and DRaaS solutions are intended to keep a constant, or near-constant, copy of your IT infrastructure in the cloud, ready to turn on at a moment’s notice in case of downtime at your primary data center. But DR tools can also be used for your initial cloud migration, providing a cost-effective and relatively fast on-ramp to the cloud. You also get a ready-to-go DR plan as a bonus, if you continue to maintain the DR environment after your production servers come online.
Resource pools in VMware-powered clouds are one way to manage all available server resources and divide them among your virtual machines (VMs). They are essentially folders for your VMs, directing the server to allocate a certain amount of resources to a specified group of VMs within a hierarchy.
Resource pools are generally used to prioritize certain VMs over others, to resell resources outside of your organization, and to isolate groups of VMs to meet performance standards – for example, separating a Test and Development pool from Production. Access control is another reason to use resource pools – administrators can delegate a single pool of resources to a team member based on permissions.
Here are some tips to help you efficiently manage the CPU and memory allocated to your cloud servers.
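The share-based hierarchy described above can be sketched in a few lines of Python. This is an illustrative model only – not the vSphere API – and the pool names, share values, and capacity numbers are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Simplified model of a vSphere-style resource pool (illustrative only)."""
    name: str
    cpu_shares: int          # relative priority versus sibling pools
    mem_limit_gb: float      # hard cap on memory for this pool
    children: list = field(default_factory=list)

def cpu_entitlement(pool: ResourcePool, parent_mhz: float) -> dict:
    """Divide the parent's CPU capacity among child pools by share ratio."""
    total_shares = sum(c.cpu_shares for c in pool.children) or 1
    result = {}
    for child in pool.children:
        child_mhz = parent_mhz * child.cpu_shares / total_shares
        result[child.name] = child_mhz
        result.update(cpu_entitlement(child, child_mhz))
    return result

# Hypothetical hierarchy: Production is prioritized over Test/Dev under contention
root = ResourcePool("Cluster", cpu_shares=0, mem_limit_gb=256, children=[
    ResourcePool("Production", cpu_shares=8000, mem_limit_gb=192),
    ResourcePool("TestDev", cpu_shares=2000, mem_limit_gb=64),
])

print(cpu_entitlement(root, parent_mhz=20000))
# Production is entitled to 4x the CPU of TestDev when both are contending
```

The key idea is that shares are relative, not absolute: a pool’s entitlement depends on what its siblings are configured with, which is why contention between Production and Test/Dev is resolved by ratio rather than by fixed amounts.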
How secure is your data center? In order to guarantee security, maintain uptime, and uphold HIPAA compliance and SSAE 16 Type II attestation, Green House Data has over sixty auditable security, environmental, and compliance control measures. Each compliant data center is audited once per year.
Some of these control points are standard practice, while others had to be added to daily routines at some facilities in order to achieve compliance and bring them up to our strict standards. This list can help you get your data center up to speed – or see just how much effort goes into keeping server rooms monitored, secured, and fully auditable.
See all 61 points we check for security and auditability after the jump.
Green House Data provides a 100% SLA – which means your cloud infrastructure is guaranteed to be online 24/7. But errors in application deployment, cyber attacks, configuration mishaps, heavy network traffic, and other issues can still cause your virtual machines to crash if you are managing them yourself. One tool in the arsenal for fighting cloud downtime is VMware Fault Tolerance.
Fault Tolerance (FT) increases the availability of virtual machines by creating an identical copy of the production VM that is continuously updated and ready to replace the original in the event of downtime. VMware FT is part of vSphere High Availability and works alongside it to keep the secondary VM in lockstep with the primary.
FT is often used for applications that require constant availability, especially if they have continual or near-constant client connections, or for custom applications that require clustering.
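The primary/secondary pattern behind FT can be sketched as a toy model. This is a conceptual illustration, not the vSphere implementation, and the VM name and state fields are made up for the example:

```python
import copy

class FaultTolerantVM:
    """Toy sketch of the FT primary/secondary pattern (not the vSphere API).

    The secondary is kept in lockstep with the primary; on primary
    failure, the secondary is promoted with no loss of state.
    """
    def __init__(self, name: str, state: dict):
        self.primary = {"name": name, "state": state, "alive": True}
        # Secondary starts as an identical copy, conceptually on another host
        self.secondary = copy.deepcopy(self.primary)

    def apply_update(self, key, value):
        # Every change to the primary is replayed on the secondary
        self.primary["state"][key] = value
        self.secondary["state"][key] = value

    def fail_primary(self):
        # Primary goes down; secondary is promoted in its place
        self.primary["alive"] = False
        self.primary, self.secondary = self.secondary, self.primary
        # In real FT, vSphere HA would then spawn a fresh secondary

vm = FaultTolerantVM("app01", {"sessions": 0})
vm.apply_update("sessions", 42)
vm.fail_primary()
print(vm.primary["state"]["sessions"])  # state survives the failover: 42
```

Because the secondary replays every change as it happens, clients never see a gap in state after failover – which is exactly why FT suits applications with continual client connections.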
Read on to see host server and VM requirements for FT, plus the difference between FT and VMware High Availability.
Two of the biggest buzzwords thrown around when talking about the cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines eliminate the need for capacity planning – after all, if you can adjust resources on the fly, adding or removing processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you still need to maximize your IT budget and use those dollars efficiently at all times, while avoiding cloud sprawl. Pay-as-you-go only works when you keep a careful eye on your environment – costs add up quickly when resources sit unused. Capacity planning still has a role to play in your cloud strategy.
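A simple way to spot cloud sprawl is to compare what each VM is allocated against what it actually uses. The sketch below, with hypothetical VM names and utilization numbers, flags right-sizing candidates:

```python
# Hypothetical monthly samples per VM: (allocated vCPUs, average vCPUs used)
fleet = {
    "web01":   (8, 1.2),
    "db01":    (16, 11.5),
    "batch01": (4, 0.1),
}

def sprawl_report(fleet: dict, threshold: float = 0.25) -> list:
    """Flag VMs using less than `threshold` of their allocation --
    candidates for right-sizing in a pay-as-you-go model."""
    flagged = []
    for vm, (allocated, used) in fleet.items():
        utilization = used / allocated
        if utilization < threshold:
            flagged.append((vm, round(utilization, 3)))
    # Worst offenders first
    return sorted(flagged, key=lambda t: t[1])

print(sprawl_report(fleet))
# batch01 and web01 are flagged; db01 is well utilized
```

Running a report like this monthly turns capacity planning from a one-time projection into an ongoing feedback loop on your cloud spend.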