As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved through Multiprotocol Label Switching (MPLS), which raises data center Quality of Service (QoS) by giving administrators finer control over traffic shaping and speeding the delivery of data packets to endpoints.
This post looks at how MPLS works and how it helps data centers provide better network services.
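As a rough intuition for the label swapping at the heart of MPLS, here's a minimal, purely conceptual Python sketch of a label-switched path. The router names and label values are invented for illustration; real MPLS forwarding happens in router hardware, with labels distributed by protocols such as LDP.

```python
# Toy illustration of MPLS label switching (conceptual only; real MPLS
# runs in router silicon, not application code). Names/labels are made up.

# Each label switch router keeps a label forwarding table:
# incoming label -> (outgoing label, next hop)
LFIB = {
    "lsr_seattle":  {100: (200, "lsr_spokane")},
    "lsr_spokane":  {200: (300, "lsr_cheyenne")},
    "lsr_cheyenne": {300: (None, "endpoint")},  # None = pop label, deliver
}

def forward(router: str, label: int, payload: str) -> None:
    """Swap the label and pass the packet along the label-switched path.
    Each hop is a single exact-match table lookup, which is why labeled
    traffic crosses the core faster than per-hop IP route lookups."""
    out_label, next_hop = LFIB[router][label]
    if out_label is None:
        print(f"{router}: popped label {label}, delivered {payload!r} to {next_hop}")
    else:
        print(f"{router}: swapped {label} -> {out_label}, sending to {next_hop}")
        forward(next_hop, out_label, payload)

forward("lsr_seattle", 100, "replication traffic")
```

Because the path is chosen once, when the labels are assigned, administrators can pin traffic classes to specific routes, which is what makes the traffic shaping described above possible.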
As faster network speeds, MPLS networks between data centers, and software-defined technologies proliferate, it becomes easier than ever to host some applications across the country, or even across the world, without a noticeable performance penalty.
However, for other cloud computing uses, data center location can have major implications when it comes to performance, compliance, and disaster recovery. There are two camps on the issue of data center locations for cloud infrastructure: yes, it matters, and no, it doesn’t make much of a difference.
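A quick back-of-the-envelope calculation shows why the "yes, it matters" camp has a point on performance. Assuming signals in fiber travel at roughly two-thirds the speed of light, distance alone sets a hard floor on round-trip latency; the distances below are illustrative:

```python
# Back-of-the-envelope minimum round-trip times by distance. Assumes light
# in fiber travels at ~200,000 km/s (about two-thirds of c); real routes
# are longer than straight lines and add switching delay, so actual RTTs
# will be higher than these floors.

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical floor on round-trip time over fiber, in milliseconds."""
    fiber_km_per_ms = 200.0  # ~200,000 km/s expressed per millisecond
    return 2 * distance_km / fiber_km_per_ms

for label, km in [("same metro", 50), ("cross-country", 4_000), ("transoceanic", 9_000)]:
    print(f"{label:>13}: at least {min_rtt_ms(km):.1f} ms round trip")
```

No protocol can beat that physics, which is why latency-sensitive workloads still favor nearby data centers even as everything else improves.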
DevOps was created in response to the disconnect between the development and operations teams within IT departments. That disconnect stems from a lack of communication and collaboration, creating a "wall of confusion" that separates IT into two very distinct silos and results in low productivity.
We like to tout our Cheyenne facilities as some of the safest data centers in the nation. Southeastern Wyoming experiences very little flooding, few tornadoes or earthquakes, and zero hurricanes.
But there is that pesky supervolcano underneath Yellowstone. You know, the one that could obliterate 2/3 of the United States.
That’s why this April 1st, Green House Data is announcing our facility is Supervolcano Resistant®.
Virtualization is a standard practice for IT shops around the world. However, as more data center operators look to consolidate and migrate to new virtualized environments, some legacy applications remain stumbling blocks on the way to a 100% virtualized infrastructure.
Legacy apps are tough nuts to crack: your users are accustomed to them, so they remain efficient in day-to-day business use, but they may clash with your more modern IT tools, they may no longer be supported by the vendor, or the hardware underneath might be ready to kick the bucket.
“No worries,” I hear you say. “I can just virtualize the platform.”
That might work in most cases, but some legacy apps either just won't make the leap to virtualization or are more trouble to virtualize than they're worth. Here are the most common examples our techs run into: