Gartner anticipates that 90% of large organizations will have a Chief Data Officer by 2019.
This isn’t too surprising when you consider that the total amount of data is expected to grow exponentially, roughly doubling every two years through 2020. That’s about 50 times more data over the course of a decade.
Big data holds plenty of insights for businesses large and small, and data-driven initiatives are underway across the globe as organizations race to analyze mountains of information and glean a competitive advantage.
A Chief Data Officer makes key decisions around the storage, handling, and use of a business’ information, including the type of platforms used, connections to/from production applications, analytics processes, and efficient flow of data.
Let’s dig into what that means in practice and how a CDO can help reduce the significant costs around data storage, platforms, and access, while also improving business functionality and agility.
Hybrid IT infrastructure seems to be the deployment model du jour, but some theorize that hybrid is just a stopover on the way to a 100% public cloud environment. With cloud adoption as a whole moving slower than many anticipated, it may be too early to say definitively whether hybrid is here to stay, but in our opinion, hybrid will remain a valuable model for many years to come.
Surveys from McAfee and RightScale both show hybrid cloud and multicloud adoption increasing, with McAfee finding a jump from 19% of organizations using hybrid cloud in 2015 to 57% using hybrid cloud in 2016, and RightScale showing an increase from 58% to 71% over the same period.
But are these increases simply because hybrid cloud is the easiest deployment model to adopt? Oftentimes a company adds cloud resources alongside its existing infrastructure, which is considered a form of hybrid cloud. Or is it because the definition of hybrid cloud itself is shifting?
It’s easy to provision additional VMs and commit more of your overall resource pool using the vSphere web portal. Maybe too easy. If you overstretch your resources, features like High Availability (HA) failover may not function as planned. HA minimizes VM downtime by pooling VMs and hosts into a cluster and restarting failed VMs on alternate hosts.
Overcommitting resources can also lead to general performance problems, so it is in your best interest to use Admission Control to keep a close watch on overall capacity. Another reason to pay attention: you might try to power on new VMs, only to hit errors because you've exceeded your Admission Control rules. Tweaking those rules can save you from buying additional host resources.
This post introduces the concept of slot sizes and shows how to configure Admission Control so that more VMs can be powered on and failed over between hosts when High Availability is enabled in vSphere/vCenter.
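To make the slot-size idea concrete, here is a minimal Python sketch of the arithmetic behind the "Host Failures Cluster Tolerates" admission control policy. The cluster sizes, reservations, and host capacities are hypothetical, and the real vSphere calculation also factors in per-VM memory overhead, so treat this as an illustration of the logic rather than VMware's exact algorithm.

```python
# Illustrative slot-size math for HA admission control (hypothetical numbers).

# Slot size is driven by the largest CPU and memory reservations among
# powered-on VMs (vSphere falls back to small defaults when none are set).
vm_cpu_reservations_mhz = [500, 1000, 32, 2000]
vm_mem_reservations_mb  = [1024, 2048, 512, 4096]

slot_cpu_mhz = max(vm_cpu_reservations_mhz)
slot_mem_mb  = max(vm_mem_reservations_mb)

# Hosts in the cluster: (usable CPU in MHz, usable memory in MB)
hosts = [(20000, 65536), (20000, 65536), (16000, 49152)]

def slots_on_host(cpu_mhz, mem_mb):
    # A host's slot count is limited by whichever resource runs out first.
    return min(cpu_mhz // slot_cpu_mhz, mem_mb // slot_mem_mb)

host_slots = [slots_on_host(c, m) for c, m in hosts]
total_slots = sum(host_slots)

# With "Host failures cluster tolerates" = 1, HA sets aside the slots of the
# largest host(s) before deciding how many more VMs you may power on.
failures_to_tolerate = 1
reserved = sum(sorted(host_slots, reverse=True)[:failures_to_tolerate])
usable_slots = total_slots - reserved

powered_on_vms = len(vm_cpu_reservations_mhz)
print(f"Slot size: {slot_cpu_mhz} MHz / {slot_mem_mb} MB")
print(f"Slots per host: {host_slots}, cluster total: {total_slots}")
print(f"Usable after reserving failover capacity: {usable_slots}")
print(f"Additional VMs admission control would allow: {usable_slots - powered_on_vms}")
```

Notice how a single VM with a large reservation inflates the slot size for the whole cluster, which is exactly why tuning reservations (or the Admission Control policy) can free up capacity without buying more hosts.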
With two different licensing models and several different versions of SQL Server, managing your licensing in a virtualized environment (like a hosted VMware cloud) is no simple matter.
This quick rundown will guide you towards the best licensing choice for your cloud VMs running Microsoft SQL Server.
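As a back-of-the-envelope illustration, the sketch below compares the per-core and Server + CAL models for a single VM. The minimum-four-core rule and two-core license packs reflect Microsoft's published per-core guidance, but the dollar figures are placeholders, not quotes, and your agreement or edition may differ.

```python
# Rough comparison of SQL Server licensing models for one VM (placeholder prices).
import math

CORE_PACK_SIZE = 2            # per-core licenses are sold in 2-core packs
MIN_CORES_PER_VM = 4          # each VM must license at least 4 virtual cores

PRICE_PER_2_CORE_PACK = 3_700   # hypothetical Standard edition pack price
PRICE_SERVER_LICENSE  = 900     # hypothetical server license price
PRICE_PER_CAL         = 200     # hypothetical per-user/device CAL price

def per_core_cost(vcpus_per_vm):
    """Cost to license one VM under the per-core model."""
    cores_to_license = max(vcpus_per_vm, MIN_CORES_PER_VM)
    packs = math.ceil(cores_to_license / CORE_PACK_SIZE)
    return packs * PRICE_PER_2_CORE_PACK

def server_cal_cost(users):
    """Cost to license one VM under the Server + CAL model."""
    return PRICE_SERVER_LICENSE + users * PRICE_PER_CAL

# Example: a 4-vCPU VM serving 30 known users
vm_vcpus, users = 4, 30
print(f"Per-core:     ${per_core_cost(vm_vcpus):,}")
print(f"Server + CAL: ${server_cal_cost(users):,}")
```

The general pattern: per-core tends to win when user counts are large or unknown (public-facing web apps, for instance), while Server + CAL can be cheaper for small, well-defined user populations.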
Application performance often hinges on how quickly your storage can serve data to end clients. For this reason, you must correctly design or choose your storage tier in terms of both IOPS and throughput, which measure the storage's operation rate and bandwidth, respectively.
It is vital to plan according to manufacturer and developer recommendations as well as real-world benchmarks to maximize your storage (and, by extension, application) performance. Take a look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency.
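To see why the RAID write penalty matters, here is a small Python sketch using the widely quoted functional-IOPS formula. The disk count, per-disk IOPS, and workload mix are illustrative assumptions rather than vendor specifications.

```python
# Estimating usable (front-end) IOPS once the RAID write penalty is applied.

RAID_WRITE_PENALTY = {"RAID0": 1, "RAID1/10": 2, "RAID5": 4, "RAID6": 6}

def functional_iops(disks, iops_per_disk, read_pct, raid_level):
    """Front-end IOPS an array can deliver for a given read/write mix."""
    raw = disks * iops_per_disk
    write_pct = 1 - read_pct
    penalty = RAID_WRITE_PENALTY[raid_level]
    # Each read costs one back-end IO; each write costs `penalty` back-end IOs.
    return raw / (read_pct + write_pct * penalty)

# Example: 8 x 10k SAS disks (~140 IOPS each), 70/30 read/write workload
for level in ("RAID1/10", "RAID5", "RAID6"):
    print(f"{level:8s}: ~{functional_iops(8, 140, 0.70, level):.0f} IOPS")
```

Running this shows the same eight spindles delivering roughly half the usable IOPS under RAID 6 as under RAID 10 for a write-heavy mix, which is why the read/write ratio belongs in any storage tier sizing exercise.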