Gartner anticipates that 90% of large organizations will have a Chief Data Officer by 2019.
This isn’t too surprising when you consider that the total amount of data is expected to grow exponentially, roughly doubling every two years through 2020. That’s about 50 times more data over the course of a decade.
Big data has plenty of insights for businesses large and small, and data-based initiatives are underway across the globe as organizations seek to quickly understand and analyze mountains of information to glean a competitive advantage.
A Chief Data Officer makes key decisions around the storage, handling, and use of a business’ information, including the type of platforms used, connections to/from production applications, analytics processes, and efficient flow of data.
Let’s dig into what that means in practice and how a CDO can help reduce the significant costs around data storage, platforms, and access, while also improving business functionality and agility.
Application performance often hinges on how well your storage can serve data to end clients. For this reason, you must correctly design or choose your storage tier in terms of both IOPS and throughput, which measure the operation rate and bandwidth of the storage, respectively.
It is vital to plan according to manufacturer and developer recommendations as well as real-world benchmarks to maximize your storage (and, in turn, application) performance. Take a look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency.
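As a rough, back-of-the-envelope illustration (the disk counts, per-disk IOPS, read/write mix, and RAID levels below are purely illustrative), the usable front-end IOPS of an array can be estimated by weighting writes with the RAID penalty:

```python
# Back-of-the-envelope estimate of usable IOPS once the RAID write
# penalty is factored in. All figures below are illustrative.

def effective_iops(raw_iops: float, read_ratio: float, raid_write_penalty: int) -> float:
    """Estimate the front-end IOPS an array can sustain.

    raw_iops           -- aggregate IOPS of the physical disks
    read_ratio         -- fraction of operations that are reads (0.0 to 1.0)
    raid_write_penalty -- back-end I/Os generated per front-end write
                          (commonly 2 for RAID 10, 4 for RAID 5, 6 for RAID 6)
    """
    write_ratio = 1.0 - read_ratio
    return raw_iops / (read_ratio + write_ratio * raid_write_penalty)

if __name__ == "__main__":
    # Example: 8 x 10K SAS disks at ~140 IOPS each, 70/30 read/write mix, RAID 5
    raw = 8 * 140
    print(f"Usable IOPS: {effective_iops(raw, read_ratio=0.7, raid_write_penalty=4):.0f}")
```

With a 70/30 read/write mix on RAID 5, those 1,120 raw IOPS shrink to roughly 590 usable IOPS, which is why ignoring the RAID penalty leads to undersized storage tiers.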
We’ve been cloud-native since the beginning, offering VMware-powered virtual hosting for almost ten years. In fact, our very first EMC backend storage array is now sitting in the lobby of our Cheyenne headquarters.
Of course, we couldn’t stay stagnant with our cloud offerings (you’d notice if that old storage array was still powering your cloud, trust us). The hardware, software, and facilities powering the gBlock cloud have undergone a variety of upgrades over the past decade, and the latest set of improvements is big enough that we’re officially dubbing it gBlock Cloud 2.0.
So what’s new in the Green House Data cloud? Let’s dive into the benefits customers receive when they migrate to this new and improved platform.
We’ve covered software-defined storage (SDS) in the past on the blog, delving into how it can automate many of your storage administration tasks. Today we’ll get a bit deeper into how SDS improves storage capacity management by maximizing the performance of the storage attached to each virtual machine according to pre-set rules.
In vSphere, storage management involves a combination of performance and service levels plus capacity planning. SDS controls in the VMware ecosystem are called Storage Policy Based Management (SPBM), and with them in place you no longer have to provision virtual machines individually according to their storage requirements.
Here’s how SPBM eliminates the need to overprovision and manually manage storage arrays.
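As a loose sketch of that idea (this is not the vSphere API; the class names, capability fields, and values below are hypothetical), policy-based placement amounts to matching a virtual machine’s declared requirements against the capabilities each datastore advertises:

```python
# Simplified illustration of policy-based storage placement: a VM declares a
# storage policy, and a compliant datastore is selected automatically.
# Names and values are hypothetical; real SPBM lives inside vCenter.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Datastore:
    name: str
    min_iops: int        # IOPS the datastore can guarantee
    replicated: bool     # whether it replicates to a second site

@dataclass
class StoragePolicy:
    required_iops: int
    requires_replication: bool

def pick_datastore(policy: StoragePolicy, datastores: List[Datastore]) -> Optional[Datastore]:
    """Return the first datastore whose capabilities satisfy the policy."""
    for ds in datastores:
        meets_iops = ds.min_iops >= policy.required_iops
        meets_replication = ds.replicated or not policy.requires_replication
        if meets_iops and meets_replication:
            return ds
    return None  # no compliant datastore: placement fails instead of silently degrading

if __name__ == "__main__":
    pool = [
        Datastore("bronze-nl-sas", min_iops=500, replicated=False),
        Datastore("gold-ssd", min_iops=5000, replicated=True),
    ]
    tier1_database = StoragePolicy(required_iops=3000, requires_replication=True)
    print(pick_datastore(tier1_database, pool))  # -> the gold-ssd datastore
```

The point is that the policy, not an administrator, decides where each VM lands, and the same rules can be re-evaluated automatically as capacity or performance characteristics change.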
Cloud servers are easy to provision and configure. Maybe too easy. That’s why many organizations are finding their cloud spend spiraling out of control. If you have recently experienced shock and awe at your monthly cloud bill, you may need to examine your environment for optimization opportunities.
Here are four of the top areas to reduce your cloud sprawl, and by extension, your cloud spend.
Placing data in the cloud comes with a set of concerns — accessibility (will my information always be available if the cloud has technical problems?) and security (how safe is my data when I can’t control the security measures?) chief among them. Of these, security has long been the primary concern for technology decision makers considering the cloud.
Recent surveys reveal that while security remains top of mind, the location of data is rising in prominence as a barrier to cloud adoption. These concerns stem in part from limited visibility into where data travels and where it is stored. Customers want to know exactly where their data resides so they can retrieve it quickly, and also because of the legal implications.
Two recent court cases, one involving Google and the other Microsoft, each pitted against the federal government, highlight the legal entanglements that can come with storing information in the cloud. Read on to learn why the location of your cloud data is vital.
Here at Green House Data, our technicians are constantly working hard behind the scenes to improve the customer experience in our cloud products. We’ve recently completed a round of upgrades to bring you the latest features and bug fixes to our gBlock Cloud platform.
Here are some of the newest features that are available to you today, including improved web portal access, new disaster recovery features and interoperability with AWS and Azure, and more.
“Can my application run in the cloud?”
It’s a question we get more frequently than you might think, and the answer is almost always yes. Just yesterday, we got a web chat from an individual who wanted to know whether a cloud server could run his SMTP-based e-mail server with PowerMTA, or whether he would need a dedicated option. Mail servers are frequently run on virtual machines, so this configuration should pose no problem on a cloud server.
There are thousands of applications, running on a wide variety of operating systems, that play nice with VMware virtualization platforms (the basis of the gBlock cloud). Here are four hybrid cloud use cases to get you started.
Two of the biggest buzzwords thrown around when talking about cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines nullify the need for capacity planning – after all, if you can just adjust resources on the fly and add or remove processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you still need to maximize your IT budget and use those dollars efficiently at all times, while avoiding cloud sprawl. Pay-as-you-go only works when you keep a careful eye on your resources; otherwise costs add up quickly as unused resources accumulate. Capacity planning still has a role to play in your cloud plans.
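As a rough sketch of why that oversight matters (the VM names, hourly rates, and utilization figures below are hypothetical), even a handful of forgotten or oversized machines turns into real monthly spend:

```python
# Rough illustration of cloud sprawl: a few idle VMs left running all month.
# Rates and utilization figures are hypothetical placeholders.

HOURS_PER_MONTH = 730
IDLE_THRESHOLD = 0.10  # treat anything under 10% average CPU as idle

vms = [
    # (name, hourly_rate_usd, avg_cpu_utilization)
    ("dev-sandbox-01", 0.19, 0.02),  # spun up for a test, never decommissioned
    ("staging-db",     0.38, 0.05),  # idle outside of release weeks
    ("report-runner",  0.19, 0.60),  # actually doing useful work
]

idle_spend = sum(
    rate * HOURS_PER_MONTH
    for _, rate, utilization in vms
    if utilization < IDLE_THRESHOLD
)
print(f"Estimated monthly spend on idle VMs: ${idle_spend:,.2f}")
```

In this toy example, two idle machines quietly consume over $400 a month, which is exactly the kind of waste that ongoing capacity planning catches.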
Backing up your enterprise data and applications is a no-brainer. Most everyone has experienced that moment of panic when a hardware failure hits and you realize the project you’ve been working on is never coming back. When we’re talking about an entire company’s IT infrastructure, an outage means dozens or hundreds of projects with hefty downtime costs.
You might have a backup plan in place, but backups are not disaster recovery (and disaster recovery is not ideal for backup, either). Backup is intended as a long-term, low-cost solution for storing data, applications, configurations, etc. Disaster recovery is designed to get only the most critical portions of your IT infrastructure back online as fast as possible.
That means storage and bandwidth costs tend to be higher with DR, but recovery times are measured in minutes rather than hours or days. Let’s take a look at the other ways the two methods differ.