With more and more companies taking advantage of cloud computing for on-demand infrastructure and additional resources, penetration testers are being called upon to perform more security testing on virtualized environments. Clients may require testing for compliance standards like PCI DSS, or they may be evaluating multiple cloud providers for the most secure option. The cloud brings with it a new set of considerations for testers, as a virtual environment could house multiple tenants on the same architecture.
The first decision is whether to outsource penetration testing to a third party or keep it in-house with your security team. With a third party, your main concerns are the contract and SLA terms. Be sure to vet any third party thoroughly: ask exactly what they will test, which tools they will use, what their scan policies are, and whether they use white-box or black-box testing (in black-box testing, the tester infiltrates without any prior knowledge of the environment, while white-box is the opposite).
Either way, you’ll need to know exactly what will be tested, including which applications, database servers, storage, and other devices are in scope.
A recent post on Government Technology recounts the stressful tale of a data center fire in an Iowa government facility and the subsequent scramble to restore operations. Despite having a second data center available, the team chose to get the original facility up and running. Thanks to the fire department and their own fire suppression systems, the equipment was salvaged and back online in just twelve hours.
With the amount of electrical equipment and heat generated in a data center, fires are a real and constant threat. Whether your company has in-house infrastructure or hosts with a data center service provider, knowing which suppression systems to use and how to respond if they fail is essential to avoid downtime should disaster strike.
Although they started to gain real momentum circa 2011, modular and containerized data centers are still spreading across the industry. The two models share many similarities: ease of deployment, the ability to add more computing power more or less on demand, highly energy efficient operation, and some degree of prefabrication. Depending on the enterprise and its IT needs, each has distinct advantages and disadvantages for data center design and infrastructure procurement.
Why go modular or containerized? Both models provide a standardized kit to scale out a data center piece by piece. A facility can be designed with an initial baseload for power and then built out with racks, cooling, and support equipment as needed. As more customers come on board or the company grows, new servers and networking equipment are added to meet demand.
Today’s shortage of qualified IT workers is no secret. This is part of the larger problem of the skills gap and the even larger STEM (Science, Technology, Engineering, Math) crisis. One way in which Green House Data has started to combat this shortage is through a partnership with the University of Wyoming.
By now you’re likely familiar with PUE, or Power Usage Effectiveness, an industry standard measurement for the energy efficiency of a data center. Despite some claims that PUE is easily manipulated or not enough to judge full environmental impact, many data centers (including Green House Data) are using PUE to measure efficiency.
The Green Grid, a consortium of technology companies that aims to improve data center efficiency, has collaborated with industry groups around the world to develop several new metrics for measuring carbon emissions and energy use in the data center, including GEC, ERF, CUE, and DCeP. What are these new measurements, and how does Green House Data stack up?
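All of these metrics are simple ratios over measured energy figures, so a short sketch makes them concrete. The numbers below are purely illustrative placeholders, not Green House Data's actual measurements, and the formulas follow The Green Grid's published definitions (PUE and CUE divide by IT equipment energy; GEC, ERF, and DCeP divide by total facility energy):

```python
# Illustrative (hypothetical) annual measurements for one facility.
total_facility_energy_kwh = 1_200_000   # all energy entering the data center
it_equipment_energy_kwh = 800_000       # energy consumed by IT gear alone
green_energy_kwh = 600_000              # renewable portion of total energy
reused_energy_kwh = 50_000              # e.g. waste heat reused elsewhere
co2_emissions_kg = 540_000              # CO2 attributed to facility energy
useful_work_units = 2_500_000           # e.g. transactions or jobs completed

# PUE: total facility energy over IT energy (lower is better; 1.0 is ideal).
pue = total_facility_energy_kwh / it_equipment_energy_kwh

# GEC: fraction of total energy that comes from green sources.
gec = green_energy_kwh / total_facility_energy_kwh

# ERF: fraction of total energy reused outside the data center.
erf = reused_energy_kwh / total_facility_energy_kwh

# CUE: kilograms of CO2 emitted per kWh of IT equipment energy.
cue = co2_emissions_kg / it_equipment_energy_kwh

# DCeP: useful work produced per unit of total energy consumed.
dcep = useful_work_units / total_facility_energy_kwh

print(f"PUE={pue:.2f} GEC={gec:.2f} ERF={erf:.3f} CUE={cue:.3f} DCeP={dcep:.2f}")
```

With these sample inputs, the facility would report a PUE of 1.50 and a GEC of 0.50, meaning half its energy comes from green sources while it spends half a watt on overhead for every watt of IT load.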