We’ve covered software-defined storage (SDS) in the past on the blog, delving into how it can automate many of your storage administration tasks. Today we’ll dig a bit deeper into how SDS improves storage capacity management by maximizing the performance of the storage attached to each virtual machine according to pre-set rules.
In vSphere, storage management combines performance and service levels with capacity planning. SDS controls in the VMware ecosystem are called Storage Policy Based Management (SPBM), and with them in place, you no longer have to provision each virtual machine individually according to its storage requirements.
Here’s how SPBM eliminates the need to overprovision and manually manage storage arrays.
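To make the idea concrete, here is a minimal sketch of policy-based placement in Python. It is an illustrative model, not the actual vSphere SPBM API: the class names, capability fields, and policy rules are all hypothetical stand-ins for how datastores advertise capabilities and policies match against them.

```python
# Illustrative model of policy-based storage placement: datastores
# advertise capabilities, a policy states requirements, and placement
# is the automatic match between the two (no per-VM manual mapping).

from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    iops: int          # advertised performance capability
    replicated: bool   # advertised protection capability
    free_gb: int

@dataclass
class StoragePolicy:
    min_iops: int
    needs_replication: bool

def compliant_datastores(policy, datastores, required_gb):
    """Return only the datastores that satisfy the policy's rules."""
    return [d for d in datastores
            if d.iops >= policy.min_iops
            and (d.replicated or not policy.needs_replication)
            and d.free_gb >= required_gb]

stores = [
    Datastore("bronze-nfs", iops=2_000, replicated=False, free_gb=800),
    Datastore("gold-ssd",   iops=50_000, replicated=True,  free_gb=300),
]
gold = StoragePolicy(min_iops=10_000, needs_replication=True)
print([d.name for d in compliant_datastores(gold, stores, 100)])
```

The point of the pattern is that the VM only declares a policy ("gold"); which array or datastore actually backs it is resolved by the rules, which is what removes the per-VM provisioning work described above.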
Cloud servers are easy to provision and configure. Maybe too easy. That’s why many organizations are finding their cloud spend spiraling out of control. If you have recently experienced shock and awe at your monthly cloud bill, you may need to examine your environment for optimization opportunities.
Here are four of the top areas to reduce your cloud sprawl, and by extension, your cloud spend.
Placing data in the cloud comes with a set of concerns — accessibility (will my information always be available if the cloud has technical problems?) and security (how safe is my data when I can’t control the security measures?) chief among them. Of these, security has long been the primary concern for technology decision makers considering the cloud.
Recent surveys reveal that while security remains top of mind, the location of data is rising in prominence as a barrier or concern for cloud adoption. These concerns stem in part from the difficulty of visibility into data transit and storage. Customers might want to know where exactly their data is residing so they can retrieve it quickly — and also for legal implications.
Two recent court cases between Google, Microsoft, and the Federal Government highlight the legal entanglements that could come with storing information in the cloud. Read on to learn why the location of your cloud data is vital.
Here at Green House Data, our technicians are constantly working hard behind the scenes to improve the customer experience in our cloud products. We’ve recently completed a round of upgrades to bring you the latest features and bug fixes to our gBlock Cloud platform.
Here are some of the newest features that are available to you today, including improved web portal access, new disaster recovery features and interoperability with AWS and Azure, and more.
“Can my application run in the cloud?”
It’s a question we get more frequently than you might think — and the answer is almost always yes. Just yesterday, we got a web chat from an individual who wanted to know if a cloud server could run his SMTP-based e-mail server with PowerMTA, or if he would need a dedicated option. Mail servers are frequently run on virtual machines, so this configuration should pose no problem as a cloud server.
There are thousands of applications, running on a wide variety of operating systems, that play nice with VMware virtualization platforms (the basis of the gBlock cloud). Here are four hybrid cloud use cases to get you started.
Two of the biggest buzzwords thrown around when talking about cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines nullify the need for capacity planning – after all, if you can just adjust resources on the fly and add or remove processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you still need to maximize your IT budget and use those dollars efficiently at all times, while avoiding cloud sprawl. Pay-as-you-go only works when you keep a careful eye on your resources; idle, unused capacity adds up quickly. Capacity planning still has a role to play in your cloud plans.
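A simple projection makes the case. The sketch below (illustrative numbers and a made-up helper, not a real planning tool) estimates how many months of headroom you have left given current usage and a steady monthly growth rate, so that scaling is a planned decision rather than a reactive one.

```python
# Rough capacity projection: given current usage and compound monthly
# growth, estimate when provisioned resources run out.

def months_until_exhausted(current_gb, provisioned_gb, monthly_growth):
    """Months before usage exceeds what's provisioned, assuming
    compound growth; None if it doesn't happen within 10 years."""
    months = 0
    usage = current_gb
    while usage <= provisioned_gb:
        usage *= 1 + monthly_growth
        months += 1
        if months > 120:
            return None
    return months

# 600 GB used of 1 TB provisioned, growing 5% per month:
print(months_until_exhausted(600, 1000, 0.05))  # roughly 11 months
```

Even with on-demand scaling, knowing that number ahead of time lets you budget the increase instead of discovering it on the invoice.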
Backing up your enterprise data and applications is a no-brainer. Almost everyone has experienced that moment of panic when a hardware failure sinks in and you realize the project you’ve been working on is never coming back. When we’re talking about an entire company’s IT infrastructure, an outage means dozens or hundreds of projects with hefty downtime costs.
You might have a backup plan in place, but backups are not disaster recovery (and disaster recovery is not ideal for backup, either). Backup is intended as a long-term, low cost solution for storing data, applications, configurations, etc. Disaster recovery is designed to get only the most critical portions of your IT infrastructure back online as fast as possible.
That means storage and bandwidth costs tend to be higher with DR, but recovery times are measured in minutes rather than hours or days. Let’s take a look at the other ways the two methods differ.
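The tradeoff can be put in rough numbers. The figures below are entirely illustrative (not quotes or typical pricing): a cheaper backup-only service with a day-long restore versus a pricier DR service that recovers in minutes, compared against the cost of one outage.

```python
# Back-of-the-envelope comparison: total exposure for one outage is
# the annual service cost plus downtime cost during recovery.

def total_cost(monthly_service_cost, recovery_hours, downtime_cost_per_hour):
    """Annual service cost plus the downtime cost of a single outage."""
    return monthly_service_cost * 12 + recovery_hours * downtime_cost_per_hour

# Hypothetical figures: $500/mo backup with a 24-hour restore, versus
# $3,000/mo DR with a 15-minute recovery, at $5,000/hour of downtime.
backup_only = total_cost(500, recovery_hours=24, downtime_cost_per_hour=5_000)
with_dr = total_cost(3_000, recovery_hours=0.25, downtime_cost_per_hour=5_000)
print(backup_only, with_dr)
```

Under these assumed numbers the higher running cost of DR is easily repaid by the shorter recovery window, which is why DR is typically reserved for the most critical systems while backup covers everything else.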
There are plenty of ways to use cloud computing for your enterprise applications, but if you’re going beyond Software as a Service options, chances are high that you’ll want to test your cloud application before deploying it to a live user environment. Because cloud is such a malleable term, “cloud testing” can be confusing too. Let’s clear up what exactly needs to be considered when you launch a cloud testing initiative.
Can you believe we’re already over a quarter of the way through 2016? Feels like we were just posting our 2015 blog wrap up yesterday. But here we are—the data center world keeps spinning. In case you missed something in the past three and a half months, we’ve collected our top blog posts and some of the most popular data center news headlines from around the blogosphere in today’s post.
Cloud storage, especially object storage, is often marketed by touting its “durability,” with many providers boasting eleven or even thirteen “nines” — in other words, 99.999999999% reliability or better. It sounds great—as close to 100% reliable as you can get. But what is durability in relation to storage, and do you really need those eleven nines?
Not every service provider even offers a durability rating, as it can be difficult to measure and guarantee. A more important question to ask your cloud hosting provider is how they protect against data loss in general. What technologies are in play? What are your odds of recovering data? How can you tie in backup?
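To see what those nines actually mean on paper, here is a quick sketch. It assumes the marketing convention that "eleven nines" is an annual durability of 99.999999999%, i.e. a one-in-10^11 chance of losing any given object in a year, and simply scales that by object count.

```python
# What a durability rating implies: expected annual object losses
# scale linearly with how many objects you store.

def expected_annual_losses(num_objects, nines):
    """Expected objects lost per year at the stated number of 'nines'
    of annual durability (e.g. nines=11 -> 99.999999999%)."""
    loss_probability = 10 ** -nines
    return num_objects * loss_probability

# A billion objects at eleven nines: about 0.01 losses per year,
# i.e. roughly one lost object per century.
print(expected_annual_losses(1_000_000_000, 11))
```

The arithmetic shows why the rating alone is a weak signal: at realistic object counts the theoretical loss rate is vanishingly small for either eleven or thirteen nines, so the provider's concrete protections (replication, erasure coding, backup integration) matter far more than the extra nines.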