What grabbed your attention the most in 2015? Our most popular posts from the year are below, along with a wrap up of the industry's biggest headlines.
This year didn't bring massive upheaval in the data center realm, but there was a fair share of news that caused ripples or at least garnered a lot of clicks and retweets. In the industry at large, big news included the Dell-EMC merger, telcos selling off data centers, and the Uptime Institute killing off tiers.
On our humble blog, our most popular posts covered Ubuntu VM optimization, CloudStack vs. vCloud, disaster recovery, and more. Read on for a full list of 2015's biggest data center stories.
Carrier hotels are generally large buildings, often in population centers, that are built to serve as a secure site for data communications interconnections. That means they also often function as large-scale colocation sites. By combining infrastructure resources, many providers can converge in a single facility, lowering overhead and allowing tenants access to many services and connections.
Green House Data’s Seattle data center in the Westin Building Exchange features 7,000 square feet of white space across three floors of the massive building. But what's so great about being in a carrier hotel, anyway?
As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved with Multiprotocol Label Switching (MPLS), a network technology that improves data center quality of service by giving administrators finer control over traffic shaping and speeding the delivery of data packets to endpoints.
This blog looks at how MPLS works and how it helps data centers provide better network services.
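The core idea behind MPLS is that core routers forward on a short, pre-assigned label (a single exact-match lookup) rather than performing a full IP route lookup at every hop. A toy sketch of that label-swapping behavior (the router names and label numbers are made-up illustrative values, not from the original post):

```python
# Toy illustration of MPLS label switching. Each "router" holds a
# label forwarding table mapping an incoming label to an outgoing
# label and a next hop, so core forwarding is one exact-match lookup
# instead of a longest-prefix IP route lookup.

ROUTERS = {
    # router: {in_label: (out_label, next_hop)}
    "P1": {100: (200, "P2")},      # core router: swaps label 100 -> 200
    "P2": {200: (None, "PE2")},    # penultimate hop: pops the label
}

def forward(ingress_label=100):
    """Follow the label-switched path from ingress PE1 and return the hops."""
    hops = ["PE1"]                       # ingress edge router pushes the label
    label, router = ingress_label, "P1"
    while label is not None:
        hops.append(router)
        label, router = ROUTERS[router][label]
    hops.append(router)                  # egress PE delivers via normal IP
    return hops

print(forward())  # ['PE1', 'P1', 'P2', 'PE2']
```

Because the path is fixed by the labels rather than recomputed hop by hop, administrators can pin traffic classes to specific engineered paths, which is the traffic-shaping benefit the post describes.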
With demand for pre-built data center space continuing to grow, you’d expect to find facilities being built all across the country, with a concentration in major markets, some outliers, and a general distribution around other areas. To some extent, that’s true. But the distribution is hardly uniform, with competing providers and in-house facilities alike suddenly cropping up next to each other.
So what makes these data center clusters happen? Wouldn’t builders like to place facilities in more diverse areas in order to avoid cascading or single-point failures from the same power outage or natural disaster? The decision to build in a cluster goes beyond offering competition in a popular area.
Green House Data’s own data center in Orangeburg, NY is part of one of these development clusters, and there are a number of reasons we built alongside Bloomberg’s giant facility just down Ramland Rd.
There are plenty of factors when sizing up colocation providers: available space, power configurations, efficiency, support services, networking, etc. But one aspect makes all the difference, with ripple effects on many of these other factors: location. Depending on your infrastructure demands, you might need a data center nearby for low latencies or far away for disaster recovery; in either case, the location can also impact power pricing, energy efficiency, and connectivity.
You’re probably familiar with “swamp cooling” at home, especially if you live in the dry West like we do. Swamp cooling is evaporative cooling, a more efficient method of air conditioning than vapor compression or absorption refrigeration—methods that rely on refrigerants, some of which contribute to ozone depletion, and that consume more energy. Evaporative cooling has made inroads in the data center and is common in many new builds and retrofits as a way to save energy and water alike, leading to its nickname of “free cooling”.
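A rough sense of why evaporative cooling is attractive comes from the simple effectiveness model often used for direct evaporative coolers: the supply-air temperature approaches the wet-bulb temperature as effectiveness approaches 1.0. A hedged sketch (the temperatures and 85% effectiveness below are made-up example values):

```python
def evap_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
    """Approximate supply-air temperature from a direct evaporative
    cooler: T_supply = T_db - e * (T_db - T_wb). The closer the
    effectiveness e is to 1.0, the nearer the output air gets to the
    wet-bulb temperature, which is the physical lower bound."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Dry-climate example (illustrative values): 35 C dry bulb, 18 C wet bulb
t = evap_supply_temp(35.0, 18.0)
print(t)  # well below the 35 C intake, approaching the 18 C wet-bulb floor
```

The dry air of the West is what makes this work so well: a large spread between dry-bulb and wet-bulb temperatures means a large cooling effect from evaporation alone, with no compressor running.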
Less than a year remains until Microsoft halts support for Windows Server 2003. Just check the ominous countdown clock on their official migration website. With many systems still running Server 2003, including a plethora of 32-bit applications, now is the time to start a migration plan, if you haven't already.
Patching is necessary to keep servers secure from attackers and viruses as well as free from bugs, which can sap productivity. Designing your server and virtual machine infrastructure to suit service levels and future change management will save you time and potential outages when the time comes to patch—and when it does, these simple best practices will help smooth the process.
Whether you’ve been handed a mandate to consolidate your data centers, like many federal government data center managers, or you’re evaluating consolidation as an option for aging or expensive in-house data centers, the process can deliver cost savings and higher efficiency without losing the uptime and power provided by your existing infrastructure.
What most data center managers worry about—and rightly so—as they face a consolidation mandate is uptime and cost for cloud or colocation infrastructure. The employee reaction to consolidation news is also worrisome, as inevitably some jobs will be cut.
What they may not realize is that if the data center shutdown isn’t smooth and the replacement services aren’t carefully evaluated and set up, the transfer process might eliminate any consolidation ROI. Here are some best practices that will maximize the benefit of data center consolidation. But first…
Although they started to gain real momentum around 2011, modular and containerized data centers are still spreading across the industry. The two models share many similarities: ease of deployment, the ability to add computing power more or less on demand, highly energy-efficient operation, and some degree of prefabrication. Depending on enterprise and IT needs, each has distinct advantages and disadvantages for data center design and infrastructure procurement.
Why go modular or containerized? Both models provide a standardized kit to scale out a data center piece by piece. A facility can be designed with an initial baseload for power and then built out with racks, cooling, and support equipment as needed. As more customers come on board or the company grows, new servers and networking equipment are added to meet demand.
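The scale-out arithmetic above can be sketched with hypothetical numbers (the baseload, per-module capacity, and PUE below are made-up example values, not figures from the post):

```python
# Hypothetical modular build-out: a facility provisioned with a fixed
# power baseload, then grown one prefabricated module at a time.
BASELOAD_KW = 1000          # initial utility/generator capacity (example value)
MODULE_CAPACITY_KW = 150    # IT load each prefab module supports (example value)

def modules_needed(it_load_kw):
    """How many modules must be deployed for a given IT load (rounded up)."""
    return -(-it_load_kw // MODULE_CAPACITY_KW)  # ceiling division

def within_baseload(it_load_kw, pue=1.4):
    """Check that total facility draw (IT load x PUE) fits the baseload."""
    return it_load_kw * pue <= BASELOAD_KW

load = 600  # current IT demand in kW
print(modules_needed(load))    # 4 modules
print(within_baseload(load))   # True: 600 * 1.4 = 840 kW <= 1000 kW
```

The appeal is that capital spending tracks demand: rather than building (and powering) the full facility on day one, each module is purchased and energized only when the load justifies it.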