Hybrid IT infrastructure seems to be the deployment mode du jour, but some theorize that hybrid is just a stopover on the way to a 100% public cloud environment. With cloud adoption as a whole moving slower than many anticipated, it may be too early to definitively say whether hybrid is here to stay, but in our opinion, hybrid will remain a valuable model for many years to come.
Surveys from McAfee and RightScale both show hybrid cloud and multicloud adoption increasing, with McAfee finding a jump from 19% of organizations using hybrid cloud in 2015 to 57% using hybrid cloud in 2016, and RightScale showing an increase from 58% to 71% over the same period.
But are these increases just because hybrid cloud is the easiest deployment model? Oftentimes a company will add cloud resources alongside its current infrastructure, which is considered a form of hybrid cloud. Or is it because the definition of hybrid cloud itself is shifting?
It’s easy to provision additional VMs and increase the resource commitment from your overall resource pool using the vSphere web portal. Maybe too easy. If you overstretch your resources, features like High Availability (HA) failover may not function as planned. HA minimizes VM downtime by pooling VMs and hosts into a cluster and relaunching failed VMs on alternate hosts.
Overcommitting resources can also lead to general performance problems, so it is in your best interest to use Admission Control to keep a close watch on overall capacity. Another reason to pay attention? You might try to power on new VMs, only to run into errors because you've exceeded your Admission Control rules. Tweaking them can save you from buying additional host resources.
This post introduces the concept of slot sizes and walks through configuring Admission Control so that more VMs can move between hosts when High Availability is enabled in vSphere/vCenter.
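To make the slot-size math concrete, here is a minimal Python sketch of how slot-based Admission Control capacity can be estimated. The function names and example figures are illustrative assumptions, not vCenter's actual implementation; it follows the commonly documented model where the slot size is derived from the largest VM reservations and a host's slot count is limited by whichever resource runs out first.

```python
def slots_per_host(host_cpu_mhz, host_mem_mb, slot_cpu_mhz, slot_mem_mb):
    """A host's slot count is the minimum of its CPU-slot and memory-slot capacity."""
    return min(host_cpu_mhz // slot_cpu_mhz, host_mem_mb // slot_mem_mb)

def cluster_slot_summary(hosts, vm_reservations, host_failures_tolerated=1):
    """Estimate total, failover-reserved, and usable slots for an HA cluster.

    hosts: list of dicts with "cpu_mhz" and "mem_mb" capacity per host.
    vm_reservations: list of dicts with per-VM "cpu_mhz"/"mem_mb" reservations.
    """
    # Slot size is driven by the largest reservations; a small CPU default is
    # used here as a stand-in for when no VM carries a CPU reservation.
    slot_cpu = max([r["cpu_mhz"] for r in vm_reservations] + [32])
    slot_mem = max([r["mem_mb"] for r in vm_reservations] + [1])

    per_host = [slots_per_host(h["cpu_mhz"], h["mem_mb"], slot_cpu, slot_mem)
                for h in hosts]
    total = sum(per_host)

    # Admission Control sets aside the slots of the N largest hosts for failover,
    # so a worst-case host failure still leaves room to restart every VM.
    reserved = sum(sorted(per_host, reverse=True)[:host_failures_tolerated])

    return {"slot_cpu_mhz": slot_cpu, "slot_mem_mb": slot_mem,
            "total_slots": total, "failover_slots": reserved,
            "usable_slots": total - reserved}
```

The takeaway: one VM with an outsized reservation inflates the slot size for the whole cluster, shrinking the usable slot count and causing those surprise power-on errors; for example, two hosts of 20,000 MHz / 64 GB each with a single 2 GB memory reservation yield only 32 usable slots once one host's worth is held back for failover.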
With two different licensing models and several different versions of SQL Server, managing your licensing in a virtualized environment (like a hosted VMware cloud) is no simple matter.
This quick rundown will guide you towards the best licensing choice for your cloud VMs running Microsoft SQL Server.
Application performance often hinges on how well your storage can serve data to end clients. For this reason, you must correctly design or choose your storage tier in terms of both IOPS (the number of read/write operations the storage can handle per second) and throughput (the volume of data it can transfer per second).
It is vital to plan according to manufacturer and developer recommendations as well as real-world benchmarks to maximize your storage (and, in turn, application) performance. Take a look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency.
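The interplay of read/write ratio and RAID penalty can be sketched with the standard back-of-the-envelope formula: every frontend write costs multiple backend I/Os depending on the RAID level, so the array must deliver more raw IOPS than the application demands. The function names and the per-disk IOPS figure below are illustrative assumptions.

```python
from math import ceil

# Widely cited write penalties: each frontend write becomes this many backend I/Os.
RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(frontend_iops, read_ratio, raid_level):
    """Translate an application's IOPS demand into the raw IOPS the array must deliver."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    reads = frontend_iops * read_ratio          # reads carry no RAID penalty
    writes = frontend_iops * (1 - read_ratio)   # writes are amplified by parity/mirroring
    return reads + writes * penalty

def disks_needed(frontend_iops, read_ratio, raid_level, iops_per_disk):
    """Estimate how many drives are required to meet the backend IOPS demand."""
    return ceil(backend_iops(frontend_iops, read_ratio, raid_level) / iops_per_disk)
```

For example, a workload of 5,000 IOPS at a 70/30 read/write ratio on RAID 5 requires 3,500 + 1,500 × 4 = 9,500 backend IOPS; at an assumed ~180 IOPS per 10K SAS drive, that is 53 drives before any caching or tiering helps out.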
Our cloud engineering team wears many hats, with different pieces of the nationwide gBlock™ Cloud falling to different staff members and everyone pulling their weight on a wide variety of projects simultaneously.
Our new blog series digs into the daily life of our technology staff, focusing on their challenges, routines, and goals, to provide insight for those eyeing the IT field, or for customers and friends curious about what goes on behind the scenes at a cloud data center.
This week, we talk to Senior Cloud Technologist Josh Larsen, who has held a variety of roles at Green House Data over more than six years. As a Cloud Technologist, his job largely entails forecasting and planning for large-scale cloud projects across our entire environment.