We’ve been cloud-native since the beginning, offering VMware-powered virtual hosting for almost ten years. In fact, our very first EMC backend storage array is now sitting in the lobby of our Cheyenne headquarters.
Of course, we couldn’t stay stagnant with our cloud offerings (you’d notice if that old storage array was still powering your cloud, trust us). The hardware, software, and facilities powering the gBlock cloud have undergone a variety of upgrades over the past decade, and the latest set is big enough for us to officially dub it the gBlock Cloud 2.0.
So what’s new in the Green House Data cloud? Let’s dive into the benefits customers can receive when they migrate to this new and improved platform.
Here at Green House Data, our technicians are constantly working hard behind the scenes to improve the customer experience in our cloud products. We’ve recently completed a round of upgrades to bring you the latest features and bug fixes to our gBlock Cloud platform.
Here are some of the newest features that are available to you today, including improved web portal access, new disaster recovery features and interoperability with AWS and Azure, and more.
Cloud storage, especially object storage, is often marketed by touting its “durability,” with many providers boasting eleven or even thirteen “nines” (eleven nines works out to 99.999999999% reliability). It sounds great: as close to 100% reliable as you can get. But what does durability actually mean for storage, and do you really need those eleven nines?
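To put those numbers in perspective, here is a rough back-of-envelope sketch in Python. It assumes durability is quoted as the annual probability that any single object survives, which is a common but not universal convention:

```python
# Back-of-envelope durability math.
# Assumption: "N nines" of durability means the annual probability
# of losing any given object is 10^-N.

def expected_annual_losses(num_objects: int, nines: int) -> float:
    """Expected number of objects lost per year at N nines."""
    annual_loss_probability = 10 ** -nines
    return num_objects * annual_loss_probability

# Storing one billion objects:
for nines in (3, 9, 11):
    losses = expected_annual_losses(1_000_000_000, nines)
    print(f"{nines} nines -> ~{losses:g} objects lost per year")

# 3 nines  -> ~1e+06 objects lost per year
# 9 nines  -> ~1 object lost per year
# 11 nines -> ~0.01 objects lost per year
```

In other words, at eleven nines you would statistically expect to lose one object out of a billion roughly once a century, which is why the questions below tend to matter more in practice.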
Not every service provider even offers a durability rating, as it can be difficult to measure and guarantee. A more important question to ask your cloud hosting provider is how they protect against data loss in general. What technologies are in play? What are your odds of recovering data? How can you tie in backup?
Even enterprise and midmarket companies, which have traditionally been able to afford to purchase and run their own IT infrastructure, have seen the writing on the wall: buying and administering their own on-premises systems is soon going to be too expensive and time-consuming. While not everyone is cloud-first, hybrid is starting to gain significant ground.
At the same time, storage requirements are ballooning. As more devices are connected, more data is collected, and more business processes go digital, storage needs continue to pile up (plus there’s all that pesky backup data you’ve been holding onto for decades already).
What does the future of enterprise IT storage look like, then? Increasingly, it will be software-defined. Gartner reports that by 2019, 70% of existing storage array products will be available as software-only versions. Software-defined storage (SDS) technology decouples the storage software from the hardware beneath it, letting both object and block/file-level storage move across virtualized environments. The result is portability, scalability, vendor agnosticism, and the ability to reuse old or commodity hardware as additional storage.
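To illustrate the concept (this is a hypothetical sketch, not any particular SDS product’s API), the core idea is a thin software layer that presents one storage interface while the actual bytes land on whatever backends you register, old arrays and commodity disks alike:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Anything that can hold bytes: a legacy array, commodity
    disks in a server, or a cloud bucket."""
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class CommodityDiskBackend(StorageBackend):
    """Stand-in for cheap local disks (in-memory for the demo)."""
    def __init__(self) -> None:
        self._blocks: dict[str, bytes] = {}
    def write(self, key: str, data: bytes) -> None:
        self._blocks[key] = data
    def read(self, key: str) -> bytes:
        return self._blocks[key]

class SoftwareDefinedStorage:
    """The SDS layer: callers see one namespace, while data can be
    placed on (or migrated between) any registered backend."""
    def __init__(self, backends: list[StorageBackend]) -> None:
        self._backends = backends
    def _pick(self, key: str) -> StorageBackend:
        # Trivial placement policy for illustration: spread by hash.
        return self._backends[hash(key) % len(self._backends)]
    def put(self, key: str, data: bytes) -> None:
        self._pick(key).write(key, data)
    def get(self, key: str) -> bytes:
        return self._pick(key).read(key)
```

Because placement is a software decision, swapping vendors or adding cheap capacity means registering a new backend rather than rearchitecting the storage layer.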
What grabbed your attention the most in 2015? Our most popular posts from the year are below, along with a wrap-up of the industry's biggest headlines.
This year didn't bring massive upheaval in the data center realm, but there was a fair share of news that caused ripples or at least garnered a lot of clicks and retweets. In the industry at large, big news included the Dell-EMC merger, telcos selling off data centers, and the Uptime Institute killing off tiers.
On our humble blog, our most popular posts covered Ubuntu VM optimization, CloudStack vs. vCloud, disaster recovery, and more. Read on for a full list of 2015's biggest data center stories.
Much has been written about how to plan for disaster recovery, but why do you need to consider disaster recovery at all? Why is it so important to a business? Why can’t you just copy everything to a secondary device and stop worrying? IT departments often get caught in the trap of relying on physical backups, thinking, “I back up everything on external storage and our systems are in a safe area. What more do I need?”
When it comes to disaster recovery, you can never be too prepared. I worked for one company (we’ll call them Company A) that thought they were ready for the worst. But even the best-laid backup plans can go wrong when you rely only on physical media.
By now, even your non-techy mom has probably heard of Big Data, with IBM and others advertising it on TV and every other IT vendor pushing their platform. If you don’t know about big data, here it is in a nutshell: as more and more devices are connected to the internet and storage capabilities continue to advance, we’re able to collect, store, and run analytics on massive sets of information in order to discover insights and make more informed decisions.
Some industries, like research, oil and gas, manufacturing, and logistics, have been doing this for years, often on dedicated hardware. The advantages of virtualization can be leveraged for big data, too, even though the two are conceptually opposites: big data distributes jobs over a wide array of resources, while virtualization consolidates many workloads onto shared hardware.
If you’re gearing up for a big data deployment, you can use VMware tools to stack it on top of virtual machines, allowing you to add resources easily when you need to run large analytics jobs and scale back when you don’t need as much processing power or want to delete old unused datasets from storage. This elasticity helps maximize your available compute resources and can be used in a mixed-workload environment. Plus, you can manage and automate your big data VMs from the same tools as your other infrastructure.
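As one example of that elasticity, here is a minimal sketch using the open-source pyVmomi SDK to hot-add resources to a worker VM before a large analytics job. The vCenter address, credentials, and VM name are placeholders, and error handling, SSL configuration, and task waiting are omitted:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password")

# Find the big data worker VM by name (hypothetical name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "hadoop-worker-01")

# Hot-add vCPUs and memory ahead of a large analytics job
# (the VM must have CPU/memory hot-add enabled).
spec = vim.vm.ConfigSpec(numCPUs=8, memoryMB=64 * 1024)
task = vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```

The same reconfiguration can be reversed after the job completes, returning the capacity to the shared pool for other workloads.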
Here is a quick primer on what to keep in mind with VMware big data platforms.