Cloud computing has largely hit the mainstream. Your mom knows about it (at least vaguely — she’s probably asking you to help her put her pictures in the cloud). But IT progress continues to march on, and a new model of information processing is beginning to take shape: fog computing.
So where does fog take over from cloud? When an army of connected devices requires constant processing power and connectivity. The Internet of Things is coming fast: according to IDC, by 2020 the IoT will expand to include 4 billion people online, 25 million or more apps, 25 billion embedded and intelligent systems, and 50 trillion gigabytes of data.
Fog computing is a way to manage some of those bandwidth and processing demands by splitting duties between local devices and remote data centers. It should sound familiar if you know the hybrid cloud model, which balances onsite virtualization with hosted resources from cloud providers.
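The duty-splitting idea can be sketched in a few lines of Python. This is purely illustrative, not any real fog framework: the `route_task` function and its thresholds are hypothetical, standing in for whatever policy a fog node would apply when deciding what to process locally and what to send upstream.

```python
# Illustrative sketch only: route each unit of work to the local (edge)
# device or the remote data center, based on latency sensitivity and size.

def route_task(latency_sensitive: bool, payload_mb: float,
               edge_capacity_mb: float = 10.0) -> str:
    """Return 'edge' for time-critical work small enough to handle locally,
    'cloud' for heavy processing better suited to a remote data center."""
    if latency_sensitive and payload_mb <= edge_capacity_mb:
        return "edge"   # respond immediately, no round trip to the cloud
    return "cloud"      # offload bulk work to the data center

# A sensor alert needs an instant response; archiving a day of video does not.
print(route_task(latency_sensitive=True, payload_mb=0.5))   # edge
print(route_task(latency_sensitive=False, payload_mb=500))  # cloud
```

Even this toy policy captures the core trade: the fog layer absorbs small, latency-sensitive work so the cloud only sees the traffic that genuinely needs its scale.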
Data centers never shut down, and the doors never really close. With 24/7 access for those with security clearance, plus round-the-clock monitoring by NOC staff and engineers, a data center never needs the closing walkthrough that many other businesses perform at the end of the day.
That doesn’t mean that a similar process isn’t followed at the end of every shift or periodically throughout the day, however.
At Green House Data, the Global Support Center staff members are charged with walkthroughs to ensure proper operation of the data center from entrance to loading dock. Use this as a template for your own facility — or read it as assurance that we’re doing all we can to guarantee 100% uptime and a great customer experience.
Disaster recovery and DRaaS solutions are intended to keep a constant, or near-constant, copy of your IT infrastructure in the cloud, ready to switch on at a moment's notice in case of downtime at your primary data center. But DR tools can also be used for your initial cloud migration, providing a cost-effective and relatively fast on-ramp to the cloud. As a bonus, you get a ready-to-go DR plan if you continue to maintain the DR environment after your production servers come online.
Resource pools in VMware-powered clouds are one way to manage all available server resources and divide them among your virtual machines (VMs). They are essentially folders for your VMs that direct the host to allocate a certain amount of resources to a specified group of VMs in a hierarchy.
Resource pools are generally used to prioritize certain VMs over others, to resell resources outside your organization, and to isolate groups of VMs to maintain performance standards, such as separating a Testing and Development pool from Production. Access controls are another reason to use resource pools: administrators can delegate a single pool of resources to a team member based on permissions.
Here are some tips to help you efficiently manage the CPU and memory allocated to your cloud servers.
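To make the hierarchy concrete, here is a toy model in Python. It is a simplified sketch, not the actual vSphere API: the `ResourcePool` class and its overcommit check are invented for illustration, and real vSphere pools add shares, limits, and expandable reservations on top of the fixed reservations modeled here.

```python
# Toy model of a resource-pool hierarchy (illustrative only; real vSphere
# pools also have shares, limits, and expandable reservations).

class ResourcePool:
    def __init__(self, name, cpu_mhz, mem_mb):
        self.name, self.cpu_mhz, self.mem_mb = name, cpu_mhz, mem_mb
        self.children = []  # child pools carved out of this pool

    def add_child(self, child):
        """Carve a child pool out of this one, refusing to hand out
        more CPU or memory than the parent has reserved."""
        used_cpu = sum(c.cpu_mhz for c in self.children)
        used_mem = sum(c.mem_mb for c in self.children)
        if used_cpu + child.cpu_mhz > self.cpu_mhz or \
           used_mem + child.mem_mb > self.mem_mb:
            raise ValueError(f"{self.name}: not enough reserved resources")
        self.children.append(child)
        return child

# Prioritize Production over Test/Dev by reserving more for its pool.
cluster = ResourcePool("Cluster", cpu_mhz=20000, mem_mb=65536)
cluster.add_child(ResourcePool("Production", cpu_mhz=14000, mem_mb=49152))
cluster.add_child(ResourcePool("Test/Dev", cpu_mhz=6000, mem_mb=16384))
```

The key design point the sketch illustrates: because every child draws from its parent's reservation, a runaway Test/Dev workload can never starve Production of the resources guaranteed to it.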
How secure is your data center? To guarantee security, maintain uptime, and satisfy HIPAA requirements and SSAE 16 Type II audits, Green House Data has over sixty auditable security, environmental, and compliance control measures. Each compliant data center is audited once per year.
Some of these control points are standard practice, while others had to be added to daily routines in some facilities in order to gain compliance and bring them up to our strict standards. This list can help you get your data center up to speed – or see just how much effort goes into keeping server rooms monitored, secured, and fully auditable.
See all 61 points we check for security and auditability after the jump.