Disaster recovery is a vital part of any backup strategy, but it isn't always clear how it differs from your everyday backups. A Microsoft survey found that most organizations experience four or more disruptions each year, at an average cost of $1.5 million per hour. To fight the high cost of downtime, 43% of IT professionals plan to invest in or improve their business continuity with cloud-based disaster recovery, citing reduced costs and expanded coverage as their primary reasons, according to IDG.
With disaster recovery (DR) taking such a high priority in the IT world right now, we asked our resident expert Josh Larsen, Sales Engineer, to answer some of the most common DR questions.
Two of the most buzzworthy IT strategies right now are hybrid infrastructure, especially hybrid cloud, and software-defined data centers (SDDC). With VMware recently throwing its weight behind SDDC technologies and surveys from last year demonstrating that 75% of C-Level executives are focusing on hybrid cloud, these technologies are here to stay.
Gartner reports that only 10-15% of enterprise and mid-market organizations are currently using hybrid computing, however. Their report states that, “More advanced approaches…suffer from significant setup and operational complexity.” New software-defined data center management tools could help bridge the gap between interest and implementation.
Together, software-defined technology and hybrid IT help deliver a mobile, highly resilient, and easy-to-manage infrastructure for your business applications and data. Here’s how.
Last year's VMworld showed the company was serious about making containers work alongside and inside of virtual machines. With Docker and other container technologies continuing to make strides in the enterprise, VMworld 2015 delivered serious development efforts from VMware. The result? Photon Platform, a forked version of Linux specifically designed to integrate containers into vSphere, as well as vSphere Integrated Containers.
While containers have been viewed with great interest by the enterprise, they can lack security and integration with backup and other software. VMware needs a way to solve these problems while also providing a platform to manage containers alongside virtual machines in vSphere.
Here's what you need to know about how these new tools can help you efficiently manage containers in and alongside your vSphere environment.
When vSphere 6.0 came out earlier this year, there was a lot of hubbub about one feature in particular, and rightfully so. VVols, or virtual volumes, are a way to virtualize storage arrays and have them dynamically move and configure alongside your virtual machines.
VVols don’t replace traditional virtual storage methods, so you can keep using your existing storage strategies and hardware along with VVols. Basically, no matter what kind of storage you’re using in the data center, vSphere treats it as a logical datastore object. Previously, each time you needed to configure a VM for performance or availability, you’d have to move it to a different datastore.
Read on to learn why virtualized storage is way cool, and for some reasons you might not want to dive in just yet.
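The core idea behind VVols is policy-driven placement: a VM declares its storage requirements, and those requirements are matched against the capabilities each array advertises, rather than an admin manually migrating the VM between datastores. Here's a toy sketch of that matching logic; the array names, capability keys, and policy fields are invented for illustration and don't reflect actual vSphere APIs.

```python
# Toy illustration of policy-based placement. Arrays advertise
# capabilities; a VM's storage policy is matched against them instead
# of an admin manually moving the VM to a different datastore.
# Arrays are listed cheapest-first so the least capable match wins.
ARRAYS = {
    "bronze-array": {"max_latency_ms": 20, "replication": False},
    "gold-array": {"max_latency_ms": 5, "replication": True},
}

def place_vm(policy: dict) -> str:
    """Return the first array whose capabilities satisfy the policy."""
    for name, caps in ARRAYS.items():
        if (caps["max_latency_ms"] <= policy["max_latency_ms"]
                and caps["replication"] >= policy["replication"]):
            return name
    raise ValueError("no array satisfies this storage policy")

# A latency-sensitive, replicated VM lands on the gold tier; a
# tolerant, unreplicated one stays on the cheaper bronze tier.
print(place_vm({"max_latency_ms": 10, "replication": True}))   # gold-array
print(place_vm({"max_latency_ms": 30, "replication": False}))  # bronze-array
```

The point of the sketch is the inversion of control: storage follows the VM's declared needs, instead of the VM following the storage.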
As Director of Engineering and Operations at Green House Data, Mike Mazarakis has helped his share of companies migrate to the cloud. With 20 years of data center and networking experience, he's a self-described “pragmatist in IT” who has watched virtualization evolve into the concept of cloud we all know today.
Mike answered questions submitted by the public in a webcast last month. We interviewed him to get the answers to the most pressing cloud migration questions and help you plan your move to hosted IT. Look for more features in our cloud migration series in the coming weeks.
After the jump, learn how small businesses and enterprises differ in their approach to the cloud, read a walkthrough of one company's quest to move to the cloud while continuing to use existing IT assets, and see the three primary types of new cloud users—plus more!
There are a few leading choices on the market when it comes to deploying and managing cloud servers. Two popular options are CloudStack, an open source project from Apache, and vCloud from VMware. Both platforms offer web portals for cloud management, API compatibility, snapshots, monitoring, security options, and virtual network tools.
Depending on your IT department’s current infrastructure, staff levels, and knowledge, one or the other might be a better choice for private or hybrid cloud deployments.
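To give a feel for the API side of the comparison, here's a minimal sketch of CloudStack's documented request-signing scheme: parameters are sorted, the query string is lowercased, and an HMAC-SHA1 digest keyed with the account's secret key is appended as the signature. The API and secret keys below are hypothetical placeholders.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string.

    CloudStack signs requests by sorting the parameters, lowercasing
    the resulting query string, and computing an HMAC-SHA1 digest
    with the account's secret key.
    """
    params = dict(params, apiKey=api_key)
    # Sort parameters by key and URL-encode each value.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")

# Hypothetical keys for illustration only.
url = sign_request(
    {"command": "listVirtualMachines", "response": "json"},
    api_key="demo-api-key",
    secret_key="demo-secret-key",
)
```

The resulting query string would be appended to your CloudStack endpoint URL; vCloud, by contrast, uses session-based authentication against its REST API.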
Much has been written about how to plan for disaster recovery, but why do you need to consider disaster recovery at all? What is so important about it to a business? Why can’t you just copy everything to a secondary device and stop worrying? IT departments often get caught in the trap of relying on physical backups, thinking, “I back up everything on external storage and our systems are in a safe area. What more do I need?”
When it comes to disaster recovery, you can never be too prepared. I worked for one company—we’ll call them Company A—that thought it was ready for the worst. But even the best-laid backup plans can go wrong when you rely only on physical media.
Last month, a study released by IT analyst firm IDC made headlines describing “Worldwide Cloud Adoption in the Manufacturing Industry”. While based on research from 2014, the study showed cloud computing growth among manufacturers will continue well into 2015 and beyond. Gartner agrees, stating that adoption of cloud-based manufacturing software will increase from 22% to 45% over the next decade.
How many manufacturers are using cloud, and what benefits does it bring?
As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved through Multiprotocol Label Switching (MPLS) network technology, which increases data center Quality of Service by giving administrators better control over traffic shaping and faster delivery of data packets at endpoints.
This blog looks at how MPLS works and how it helps data centers provide better network services.
As faster network speeds, MPLS networks between data centers, and software-defined technologies proliferate, it becomes easier than ever to host some applications across the country—or even across the world—without any negative impact.
However, for other cloud computing uses, data center location can have major implications when it comes to performance, compliance, and disaster recovery. There are two camps on the issue of data center locations for cloud infrastructure: yes, it matters, and no, it doesn’t make much of a difference.
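A rough back-of-the-envelope calculation shows why distance alone can matter for performance: light in optical fiber propagates at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), so physical distance sets a hard floor on round-trip time before routing detours or queuing add anything. The distances below are approximate examples, not measurements.

```python
# Lower bound on round-trip latency from fiber distance alone.
# Real-world paths add routing detours and queuing on top of this.
FIBER_KM_PER_S = 200_000  # approximate propagation speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds for a given distance."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# Approximate great-circle distances, for illustration.
print(min_rtt_ms(60))     # same-region data center: 0.6 ms
print(min_rtt_ms(4000))   # cross-country: 40 ms
print(min_rtt_ms(10000))  # intercontinental: 100 ms
```

For chatty applications that make many sequential round trips, tens of milliseconds per request compound quickly, which is one of the strongest arguments in the "yes, location matters" camp.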