Placing data in the cloud comes with a set of concerns — accessibility (will my information always be available if the cloud has technical problems?) and security (how safe is my data when I can’t control the security measures?) chief among them. Of these, security has long been the primary concern for technology decision makers considering the cloud.
Recent surveys reveal that while security remains top of mind, the location of data is rising in prominence as a barrier to cloud adoption. These concerns stem in part from how difficult it is to gain visibility into where data travels and where it is stored. Customers may want to know exactly where their data resides so they can retrieve it quickly, and because that location carries legal implications.
Two recent court cases involving Google, Microsoft, and the federal government highlight the legal entanglements that can come with storing information in the cloud. Read on to learn why the location of your cloud data is so important.
Here at Green House Data, our technicians are constantly working hard behind the scenes to improve the customer experience in our cloud products. We’ve recently completed a round of upgrades to bring you the latest features and bug fixes to our gBlock Cloud platform.
Here are some of the newest features that are available to you today, including improved web portal access, new disaster recovery features and interoperability with AWS and Azure, and more.
“Can my application run in the cloud?”
It’s a question we get more often than you might think, and the answer is almost always yes. Just yesterday, we got a web chat from an individual who wanted to know whether a cloud server could run his SMTP-based e-mail server with PowerMTA, or whether he would need a dedicated option. Mail servers are frequently run on virtual machines, so this configuration should pose no problem on a cloud server.
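If you are moving a mail server like that onto a cloud VM, a quick sanity check is confirming that the SMTP service answers once the machine is up. Below is a minimal Python sketch using the standard smtplib module; the hostname, port, and timeout are hypothetical placeholders rather than details from that customer's setup.

```python
# Minimal sketch: verify that an SMTP service on a (hypothetical) cloud VM answers.
# The hostname and port below are placeholders, not values from any real deployment.
import smtplib

SMTP_HOST = "mail.example.com"  # hypothetical address of the cloud VM
SMTP_PORT = 25                  # standard SMTP; adjust if the MTA listens elsewhere

def check_smtp(host: str, port: int) -> bool:
    """Connect, send EHLO, and report whether the server responds with success."""
    try:
        with smtplib.SMTP(host, port, timeout=10) as server:
            code, banner = server.ehlo()
            print(f"EHLO response {code}: {banner.decode(errors='replace')}")
            return 200 <= code < 300
    except (smtplib.SMTPException, OSError) as exc:
        print(f"SMTP check failed: {exc}")
        return False

if __name__ == "__main__":
    status = "reachable" if check_smtp(SMTP_HOST, SMTP_PORT) else "not reachable"
    print(f"{SMTP_HOST}:{SMTP_PORT} is {status}")
```

A check like this only proves the listener is up; the rest of the migration questions (DNS, reverse lookups, outbound reputation) are the same whether the server is physical, dedicated, or a cloud VM.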
There are thousands of applications, running on a wide variety of operating systems, that play nice with VMware virtualization platforms (the basis of the gBlock cloud). Here are four hybrid cloud use cases to get you started.
Two of the biggest buzzwords thrown around when talking about cloud are “scalability” and “on-demand.” Those concepts also have implications for your capacity planning as an IT department. You may think that cloud machines nullify the need for capacity planning – after all, if you can just adjust resources on the fly and add or remove processing power and storage as needed, why bother projecting demand?
While it’s true that you can scale as needed, you still need to maximize your IT budget and spend those dollars efficiently at all times while avoiding cloud sprawl. Pay-as-you-go only works in your favor when you keep a careful eye on your resources; unused capacity left running adds up quickly. Capacity planning still has a role to play in your cloud plans.
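As a rough illustration of why projections still matter, the sketch below compares a right-sized deployment against an over-provisioned one. All of the hourly rates, VM sizes, and the wasted-spend figure it prints are hypothetical numbers for demonstration, not gBlock pricing.

```python
# Rough illustration: pay-as-you-go spend for a right-sized VM vs. an over-provisioned one.
# All rates, sizes, and utilization assumptions are hypothetical, for demonstration only.

HOURS_PER_MONTH = 730

def monthly_cost(vcpus: int, ram_gb: int, vcpu_rate: float, ram_rate: float) -> float:
    """Monthly cost of one always-on VM at hourly per-vCPU and per-GB-RAM rates."""
    return HOURS_PER_MONTH * (vcpus * vcpu_rate + ram_gb * ram_rate)

VCPU_RATE = 0.02   # hypothetical $ per vCPU-hour
RAM_RATE = 0.005   # hypothetical $ per GB-hour

# Right-sized plan based on a capacity projection: 4 vCPU / 16 GB RAM
planned = monthly_cost(4, 16, VCPU_RATE, RAM_RATE)

# "Just scale it later" sprawl: a 16 vCPU / 64 GB VM left running at low utilization
sprawl = monthly_cost(16, 64, VCPU_RATE, RAM_RATE)

print(f"Planned capacity:    ${planned:,.2f}/month")
print(f"Over-provisioned VM: ${sprawl:,.2f}/month")
print(f"Wasted spend:        ${sprawl - planned:,.2f}/month")
```

Even with made-up rates, the point holds: resources you provision "just in case" and never reclaim are billed every hour, which is exactly the spend a capacity plan is meant to catch.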
Backing up your enterprise data and applications is a no-brainer. Most everyone has experienced that moment of panic when a hardware failure sinks in and you realize the project you’ve been working on is never coming back. When we’re talking about an entire company’s IT infrastructure, an outage means dozens or hundreds of projects with hefty downtime costs.
You might have a backup plan in place, but backups are not disaster recovery (and disaster recovery is not ideal for backup, either). Backup is intended as a long-term, low-cost solution for storing data, applications, configurations, etc. Disaster recovery is designed to get only the most critical portions of your IT infrastructure back online as fast as possible.
That means storage and bandwidth costs tend to be higher with DR, but recovery times are measured in minutes rather than hours or days. Let’s take a look at the other ways the two methods differ.
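To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch comparing a backup-only restore with a DR failover during a single outage. The downtime cost, recovery times, and monthly protection costs are all made-up figures chosen for illustration, not benchmarks for any particular service.

```python
# Back-of-the-envelope comparison: one outage handled by backup restore vs. DR failover.
# Every figure is hypothetical, chosen only to illustrate the cost/recovery-time tradeoff.

DOWNTIME_COST_PER_HOUR = 10_000.0  # assumed hourly cost of critical systems being offline

def outage_cost(recovery_hours: float, monthly_protection_cost: float) -> float:
    """Cost of one outage: downtime during recovery plus one month of protection spend."""
    return recovery_hours * DOWNTIME_COST_PER_HOUR + monthly_protection_cost

# Backup: inexpensive to keep, but restoring full systems can take many hours
backup_total = outage_cost(recovery_hours=12.0, monthly_protection_cost=500.0)

# DR: replication and standby capacity cost more each month, but failover takes minutes
dr_total = outage_cost(recovery_hours=0.25, monthly_protection_cost=3_000.0)

print(f"Backup-only recovery: ${backup_total:,.2f}")
print(f"DR failover:          ${dr_total:,.2f}")
```

With these assumed numbers, the extra monthly spend on DR pays for itself in a single serious outage, which is why the usual approach is to protect only the most critical systems with DR and everything else with backup.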
There are plenty of ways to use cloud computing for your enterprise applications, but if you’re going beyond Software as a Service options, chances are high that you’ll want to test your cloud application before deploying it to a live user environment. Because cloud is such a malleable term, “cloud testing” can be confusing too. Let’s clear up what exactly needs to be considered when you launch a cloud testing initiative.
Can you believe we’re already over a quarter of the way through 2016? Feels like we were just posting our 2015 blog wrap up yesterday. But here we are—the data center world keeps spinning. In case you missed something in the past three and a half months, we’ve collected our top blog posts and some of the most popular data center news headlines from around the blogosphere in today’s post.
Cloud storage, especially object storage, is often marketed by touting its “durability,” with many providers boasting eleven or even thirteen “nines” (eleven nines being 99.999999999% durability). It sounds great: as close to 100% reliable as you can get. But what does durability mean in relation to storage, and do you really need those eleven nines?
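As a quick way to reason about those nines, the sketch below estimates how many objects you would expect to lose per year at a few durability levels, treating the durability figure as an annual per-object survival probability. The object count is an arbitrary example, not a claim about any provider.

```python
# What a durability figure implies: expected objects lost per year from a sample fleet.
# The object count is arbitrary; durability is treated as an annual per-object
# survival probability (a common, but not universal, way providers define it).

OBJECTS_STORED = 100_000_000  # hypothetical: 100 million stored objects

for nines, label in [(9, "nine nines"), (11, "eleven nines"), (13, "thirteen nines")]:
    annual_loss_probability = 10.0 ** -nines       # e.g. eleven nines -> 1e-11
    expected_losses = OBJECTS_STORED * annual_loss_probability
    print(f"{label:>15}: ~{expected_losses:.6f} objects expected lost per year")
```

Even at nine nines, the expected annual loss in this example is a fraction of a single object, which is why the more pressing questions are usually about the protection mechanisms behind the number.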
Not every service provider even offers a durability rating, as it can be difficult to measure and guarantee. A more important question to ask your cloud hosting provider is how they protect against data loss in general. What technologies are in play? What are your odds of recovering data? How can you tie in backup?
Even enterprise and midmarket companies, which have traditionally been able to afford to purchase and run their own IT infrastructure, have seen the writing on the wall: buying and administering their own on-premises systems is quickly becoming too costly and time consuming. While not everyone is cloud-first, hybrid is starting to gain significant ground.
At the same time, storage requirements are ballooning rapidly. As more devices are connected, more data is collected, and more business processes go digital, storage needs continue to pile up (plus there’s all that pesky backup data you’ve been holding onto for decades already).
What does the future of enterprise IT storage look like, then? Increasingly, it will be software-defined. Gartner reports that by 2019, 70% of existing storage array products will be available as software-only versions. Software-defined storage (SDS) technology lets both object and block/file-level storage move across virtualized environments, enabling portability, scalability, vendor agnosticism, and the reuse of old or commodity hardware as additional storage.
What grabbed your attention the most in 2015? Our most popular posts from the year are below, along with a wrap up of the industry's biggest headlines.
This year didn't bring massive upheaval in the data center realm, but there was a fair share of news that caused ripples or at least garnered a lot of clicks and retweets. In the industry at large, big news included the Dell-EMC merger, telcos selling off data centers, and the Uptime Institute killing off tiers.
On our humble blog, our most popular posts covered Ubuntu VM optimization, CloudStack vs. vCloud, disaster recovery, and more. Read on for a full list of 2015's biggest data center stories.