A lot of the focus in the data center energy efficiency world is placed on cooling, and rightly so: cooling can consume as much as 37% of a well-designed data center's energy use, according to Emerson. But there are opportunities to improve efficiency on the other side of the rack, too. The power delivery systems, including uninterruptible power supplies (UPS) and transformers, can deliver significant energy (and cost) savings.
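As a back-of-the-envelope illustration of where power-side savings come from, the sketch below estimates the annual energy dissipated by a UPS at two different efficiencies. Every figure here (IT load, efficiency percentages, electricity rate) is a hypothetical placeholder, not vendor data:

```python
# Illustrative estimate: annual savings from a more efficient UPS.
# UPS losses scale with IT load and (1/efficiency - 1).

def annual_ups_loss_kwh(it_load_kw: float, efficiency: float) -> float:
    """Energy dissipated by the UPS over a year, in kWh."""
    hours_per_year = 8760
    return it_load_kw * (1 / efficiency - 1) * hours_per_year

it_load_kw = 500                                    # hypothetical IT load
old_loss = annual_ups_loss_kwh(it_load_kw, 0.92)    # legacy UPS, 92% efficient
new_loss = annual_ups_loss_kwh(it_load_kw, 0.97)    # modern UPS, 97% efficient

saved_kwh = old_loss - new_loss
saved_dollars = saved_kwh * 0.10                    # assumed $0.10/kWh

print(f"Energy saved: {saved_kwh:,.0f} kWh/year (${saved_dollars:,.0f})")
```

At these assumed numbers, a five-point efficiency gain on a 500 kW load saves on the order of a quarter-million kWh per year, before even counting the reduced load on the cooling plant.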
Whether you’ve been handed a mandate to consolidate your data centers, like many federal government data center managers, or you’re evaluating consolidation as an option for aging or expensive in-house data centers, the process can deliver cost savings and higher efficiency without losing the uptime and power provided by your existing infrastructure.
What most data center managers worry about as they face a consolidation mandate, and rightly so, is uptime and the cost of cloud or colocation infrastructure. The employee reaction to consolidation news is also worrisome, as some jobs will inevitably be cut.
What they may not realize is that if the data center shutdown isn’t smooth and the replacement services aren’t carefully evaluated and set up, the transfer process might eliminate any consolidation ROI. Here are some best practices that will maximize the benefit of data center consolidation. But first…
There are myriad ways to bill colocation customers, making a comparison between multiple bids an occasionally daunting process. The industry does seem to be shifting towards an accepted standard billing model based on metered electricity use, but older billing methods based on footprint and telecom connections are still in play.
This can make comparing bids built on different models difficult. What are the distinctions between colocation pricing models, and which is the fairest for customers and providers alike?
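To make the comparison concrete, here is a rough sketch of normalizing two bids quoted under different models to a common monthly cost. Every rate and quantity below is invented for illustration; real quotes bundle many more line items:

```python
# A rough way to compare colocation bids quoted under different billing
# models. All rates and figures are hypothetical, for illustration only.

def metered_power_bid(it_load_kw, rate_per_kwh, space_fee):
    """Metered model: pay for electricity actually drawn, plus a space fee."""
    return it_load_kw * 730 * rate_per_kwh + space_fee  # ~730 hours/month

def footprint_bid(racks, rate_per_rack, cross_connects, rate_per_xc):
    """Legacy model: flat fee per rack footprint plus per-connection charges."""
    return racks * rate_per_rack + cross_connects * rate_per_xc

# The same workload, quoted two ways:
metered = metered_power_bid(it_load_kw=40, rate_per_kwh=0.12, space_fee=1500)
legacy = footprint_bid(racks=10, rate_per_rack=900,
                       cross_connects=4, rate_per_xc=250)

print(f"Metered bid:   ${metered:,.0f}/month")
print(f"Footprint bid: ${legacy:,.0f}/month")
```

One reason the industry is drifting toward metered billing is visible in this shape of model: under a flat footprint rate, a lightly loaded rack pays the same as a dense one, so efficient tenants end up subsidizing power-hungry ones.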
The software-defined data center (SDDC) is a natural evolution of virtualization, extending it beyond virtual machines on a server to virtual networks, virtual storage, and new automated management tools, with benefits similar to those of traditional virtualization. The term was coined by then-VMware CTO Steve Herrod in 2012.
In an SDDC, all physical infrastructure is treated as one pooled resource that can be divided as needed, rather than split up by individual servers, switches, routers, hard drives, storage bays, and so on. Software and services are installed on an abstraction layer on top of the data center hardware to manage virtual networks, virtualized servers, and virtual storage.
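The pooling idea can be sketched in a few lines of code. This is a toy model, not any real SDDC product's API; the class and method names are invented for illustration:

```python
# Toy sketch of the SDDC idea: physical capacity is pooled and carved into
# virtual allocations on demand, rather than tied to individual boxes.

class ResourcePool:
    def __init__(self, cpu_cores: int, ram_gb: int, storage_tb: int):
        # Aggregate capacity of all underlying hardware, viewed as one pool.
        self.free = {"cpu": cpu_cores, "ram": ram_gb, "storage": storage_tb}

    def allocate(self, name: str, cpu: int, ram: int, storage: int) -> dict:
        """Carve a virtual slice out of the shared pool, if capacity allows."""
        request = {"cpu": cpu, "ram": ram, "storage": storage}
        if any(request[k] > self.free[k] for k in request):
            raise RuntimeError(f"pool exhausted for {name}")
        for k in request:
            self.free[k] -= request[k]
        return {"name": name, **request}

pool = ResourcePool(cpu_cores=128, ram_gb=512, storage_tb=100)
web = pool.allocate("web-tier", cpu=16, ram=64, storage=2)
db = pool.allocate("db-tier", cpu=32, ram=128, storage=20)
print(pool.free)  # remaining shared capacity
```

The point of the abstraction is that neither "tier" knows or cares which physical server, switch, or disk shelf it landed on; the management layer tracks only the pool.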
Security. When it comes down to it, security is the main reason many executives are wary of cloud hosting. It’s a good reason, too. It takes a bit of faith to put critical business data into external infrastructure. Managed cloud security services offer peace of mind as dedicated NOC staff keeps watch 24 hours a day for incoming threats, both taking precautions and responding to attacks as soon as they are detected. The three stages of managed security services are:
The Network Operations Center, or NOC, is the beating heart of the data center. The room is hung with TV screens and stuffed with computers, staffed 24 hours a day, 7 days a week. So what are the NOC technicians up to during all that time? Quite a bit, actually. Let's take a look at a night in the life of a NOC tech.
In today’s difficult business climate, designing, financing, and constructing a multi-tenant data center project is a challenge, even for established data center owner/operators.
Having a design and engineering team that understands the challenges associated with this process can help. Highly qualified engineering firms understand not only the options and efficiencies of design, but also the cost to build and operate the data center. Increasingly, design and engineering teams need to play a larger role in the investment and financial elements of the project. Although first cost is always of great importance, the cost to maintain and operate the data center over a 20-year period is even more substantial. The best design decisions today tend to be based on a total cost of ownership (TCO) model and carefully measure the costs of energy and water consumption as well as repairs and maintenance.
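A simple TCO comparison might look like the sketch below. All costs are hypothetical placeholders, and a real model would also discount future cash flows and account for staffing and equipment refresh cycles:

```python
# Minimal total-cost-of-ownership (TCO) comparison over a 20-year horizon.
# All figures are hypothetical placeholders for illustration.

def tco(first_cost, annual_energy, annual_water, annual_maintenance, years=20):
    """First cost plus recurring operating costs over the facility's life."""
    return first_cost + years * (annual_energy + annual_water + annual_maintenance)

# Design A: cheaper to build, more expensive to run.
design_a = tco(first_cost=10_000_000, annual_energy=1_200_000,
               annual_water=150_000, annual_maintenance=400_000)
# Design B: higher first cost, but a more efficient plant.
design_b = tco(first_cost=12_000_000, annual_energy=800_000,
               annual_water=90_000, annual_maintenance=350_000)

print(f"Design A 20-year TCO: ${design_a:,.0f}")
print(f"Design B 20-year TCO: ${design_b:,.0f}")
```

With these assumed inputs, the design that costs $2 million more to build comes out roughly $8 million cheaper over 20 years, which is exactly why first cost alone is a poor basis for design decisions.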
Although digital security is paramount both to keeping your business data safe within our data center and to meeting compliance standards, physical security measures are just as important. For example, our HIPAA infographic shows how many data breaches result from stolen equipment. These threats are largely internal in nature, which is why four layers of physical facility security help ensure the safety of equipment and information stored in our facility.
Cables are tangly, dangly little devils. Everyone’s dealt with that jumbled mess behind a desk or entertainment center at some point. They collect dust, clutter up space and probably even evolve life if you leave them alone long enough. In a data center, that just doesn’t cut it. Cable clutter can raise operational costs, drag down energy efficiency, and even put infrastructure at risk of interference, crosstalk, and cable damage.
The word “tier” is used frequently in the context of data centers. As we explained in a previous blog post, data center facilities are ranked by tier depending on their infrastructure and redundancy. But tiers are also used to describe the resources and infrastructure of a virtual machine or server, and often come up in billing or application deployment. This post aims to clear up any remaining confusion between the two types of tiers.