With more and more companies taking advantage of cloud computing for on-demand infrastructure and additional resources, penetration testers are being called upon to perform more security testing on virtualized environments. Clients may require testing for compliance standards like PCI DSS, or they may be evaluating multiple cloud providers for the most secure option. The cloud brings with it a new set of considerations for testers, as a virtual environment could house multiple tenants on the same architecture.
The first thing to decide is whether you will outsource pen testing to a third party or keep it in-house with your security team. With a third party, your main concerns will be contract and SLA terms. Be sure to vet the provider thoroughly: ask exactly what they will test, which tools they will use, what their scan policies are, and whether they use white-box or black-box testing (in black-box testing, the tester infiltrates without any prior knowledge of the environment, while white-box testing is the opposite).
Either way, you’ll need to know exactly what will be tested, including which applications, database servers, network devices, and storage systems are in scope.
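As a loose illustration, the kind of scope agreement described above can be captured in a simple inventory before testing begins. This is only a sketch; the field names, hostnames, and helper function below are hypothetical, not part of any standard tool or compliance framework:

```python
# Hypothetical pen-test scoping document (all names and hosts are illustrative).
scope = {
    "engagement": "Q3 cloud environment assessment",
    "approach": "black-box",  # tester receives no prior knowledge of the environment
    "in_scope": [
        {"asset": "web application", "host": "app.example.com"},
        {"asset": "database server", "host": "db01.example.com"},
        {"asset": "storage appliance", "host": "nas01.example.com"},
    ],
    "out_of_scope": [
        # In a multi-tenant cloud, other tenants on the same architecture
        # must be explicitly excluded from testing.
        {"asset": "neighboring tenant VMs", "reason": "no authorization"},
    ],
}

def in_scope_hosts(scope):
    """Return only the hosts the tester is authorized to probe."""
    return [item["host"] for item in scope["in_scope"]]

print(in_scope_hosts(scope))
```

Keeping the out-of-scope list explicit matters in shared cloud environments, where probing a neighboring tenant's systems could violate the provider's terms or the law.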
Whether you’ve been handed a mandate to consolidate your data centers, like many federal government data center managers, or you’re evaluating consolidation as an option for aging or expensive in-house data centers, the process can deliver cost savings and higher efficiency without losing the uptime and power provided by your existing infrastructure.
What most data center managers worry about—and rightly so—as they face a consolidation mandate is uptime and cost for cloud or colocation infrastructure. The employee reaction to consolidation news is also worrisome, as inevitably some jobs will be cut.
What they may not realize is that if the data center shutdown isn’t smooth and the replacement services aren’t carefully evaluated and set up, the transfer process might eliminate any consolidation ROI. Here are some best practices that will maximize the benefit of data center consolidation. But first…
A recent post on Government Technology recounts the stressful tale of a data center fire in an Iowa government facility and the subsequent scramble to restore operations. Despite having a second data center available, the team chose to get the original facility back up and running. Thanks to the fire department and their own fire suppression systems, the equipment was salvaged and back online in just twelve hours.
With the amount of electrical equipment and heat generated in a data center, fires are a real and constant threat. Whether your company has in-house infrastructure or hosts with a data center service provider, knowing which suppression systems to use and how to respond if they fail is essential to avoid downtime should disaster strike.
Although they started to gain real momentum circa 2011, modular and containerized data centers are still spreading across the industry. The two models share many similarities: ease of deployment, the ability to add more computing power more or less on demand, highly energy-efficient operation, and some degree of prefabrication. Depending on the enterprise and its IT needs, each has distinct advantages and disadvantages for data center design and infrastructure procurement.
Why go modular or containerized? Both models provide a standardized kit to scale out a data center piece by piece. A facility can be designed with an initial baseload for power and then built out with racks, cooling, and support equipment as needed. As more customers come on board or the company grows, new servers and networking equipment are added to meet demand.
Today’s shortage of qualified IT workers is no secret. It is part of the larger skills gap and the even broader STEM (Science, Technology, Engineering, and Math) crisis. One way in which Green House Data has started to combat this shortage is through a partnership with the University of Wyoming.