Cloud is scalable, it’s flexible, it’s a whole host of cool-sounding adjectives. But what does that all mean in practice? While it’s nice for the IT budget to be able to adjust infrastructure resources on the fly, cloud servers are also facilitating a subtle shift in the way your department should be managing infrastructure.
It’s time to quit relying on monitoring solutions and simply reacting to problems as they arise. Proactive cloud management involves digging in across every department in your organization to more closely align IT resources with business objectives.
Here’s how the new paradigm of proactive cloud management differs from simple monitoring, patching, updates, and other firefighting.
You know what they say: a clean Active Directory keeps the attackers at bay. Or they should say it, anyway. Active Directory is the component of Windows Server in charge of authentication and authorization for any “object” connected to the network: users, systems, resources, and services.
As you might imagine, enterprises often manage sprawling Active Directories with thousands or even hundreds of thousands of objects, from laptops to printers. When a user leaves the company, their login may still reside in Active Directory. Groups used to organize different pieces of the directory may now lie empty.
Cleaning up your Active Directory not only improves database and server performance, it also plugs security holes left behind by old accounts. A regularly scheduled Active Directory cleanup should be part of routine maintenance and performed at least annually.
Assuming your Active Directory server is hosted in the cloud, decluttering can also cut your storage costs, and the performance gains can lower your monthly bill as both bandwidth charges and compute usage drop.
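A common first step in that cleanup is flagging stale accounts. Here’s a minimal sketch of the idea in Python: the account list, names, and the 180-day threshold are all hypothetical stand-ins — in practice this data would come from an LDAP query against Active Directory (e.g., the lastLogonTimestamp attribute).

```python
from datetime import datetime, timedelta

# Hypothetical snapshot of directory accounts; real data would come
# from querying Active Directory over LDAP.
accounts = [
    {"name": "jdoe",      "last_logon": datetime(2017, 1, 15), "enabled": True},
    {"name": "printer01", "last_logon": datetime(2018, 6, 2),  "enabled": True},
    {"name": "old-svc",   "last_logon": datetime(2015, 3, 9),  "enabled": True},
]

def find_stale(accounts, now, max_age_days=180):
    """Return enabled accounts with no logon within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return [a for a in accounts if a["enabled"] and a["last_logon"] < cutoff]

stale = find_stale(accounts, now=datetime(2018, 7, 1))
for account in stale:
    print(account["name"])  # candidates for disabling, then deletion
```

A sensible workflow is to disable flagged accounts first and delete them only after a grace period, in case an “inactive” account turns out to belong to someone on leave.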
If you’re pricing out a cloud server, you’re probably comparing pricing on a certain number of virtual CPUs (Central Processing Units), along with RAM, storage, and perhaps network fees. If you were building a gaming PC, you’d be pricing out all of those items, but you’d also be setting aside a major chunk of money for a graphics card, or GPU. GPUs were originally designed to render digital graphics in visually intensive tasks like gaming and animation.
With the rise of big data analytics and machine learning, however, GPUs are playing an increasingly important part in high performance computing. Cloud providers have started getting in on the game, enabling GPU-accelerated cloud servers with an eye on big data processing and other intensive applications.
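Whether a GPU instance pays off usually comes down to cost per unit of work, not cost per hour. The sketch below makes that comparison with entirely hypothetical hourly rates and job throughputs — substitute your provider’s actual pricing and your own benchmarks.

```python
# Hypothetical hourly rates and measured throughputs -- placeholders,
# not any provider's real pricing.
CPU_INSTANCE = {"hourly_rate": 0.40, "jobs_per_hour": 10}
GPU_INSTANCE = {"hourly_rate": 2.50, "jobs_per_hour": 120}

def cost_per_job(instance):
    """Effective cost of one job: hourly price divided by throughput."""
    return instance["hourly_rate"] / instance["jobs_per_hour"]

cpu_cost = cost_per_job(CPU_INSTANCE)  # $0.04 per job
gpu_cost = cost_per_job(GPU_INSTANCE)  # roughly $0.021 per job
```

With these made-up numbers, the GPU instance costs more than six times as much per hour yet still comes out cheaper per job, because its throughput on a parallel workload is so much higher — the kind of math worth running before dismissing GPU instances on sticker price alone.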
Private vs. public cloud is a battle many thought was over years ago, and some recent think pieces seem to confirm that notion, claiming no one can match the economies of scale delivered by hyperscale cloud providers.
But private cloud, or on-premises virtualization, can still be the less expensive option, provided you have the staff and capabilities to support it. A recent study from 451 Research describes when the tipping point favors private cloud and when public cloud has a lower total cost of ownership (TCO), based on hardware utilization and staff efficiency.
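The utilization tipping point can be sketched with a simple model: private cloud carries a largely fixed monthly cost (hardware amortization plus staff) regardless of how many VMs you run, while public cloud charges per VM. All figures below are hypothetical placeholders, not results from the 451 Research study — plug in your own numbers.

```python
# Hypothetical cost assumptions for a 100-VM-capacity private cloud.
PRIVATE_FIXED_MONTHLY = 20000.0    # hardware amortization + staff, per month
PUBLIC_RATE_PER_VM_MONTH = 350.0   # assumed public-cloud price per VM-month

def cheaper_option(vms_in_use):
    """Compare fixed private-cloud cost with pay-per-VM public cost."""
    public_cost = vms_in_use * PUBLIC_RATE_PER_VM_MONTH
    return "private" if PRIVATE_FIXED_MONTHLY < public_cost else "public"

# Low utilization favors public cloud; high utilization favors private.
print(cheaper_option(30))  # public
print(cheaper_option(80))  # private
```

The model is deliberately crude — real TCO comparisons also weigh data egress, licensing, and refresh cycles — but it captures why utilization is the study’s pivotal variable: fixed private-cloud costs only amortize well when the hardware stays busy.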
Cloud infrastructure is all about providing the right amount of resources for your applications at any given moment. Overprovisioning might be wise for performance-oriented apps, but generally “right-sizing” is the best way to maximize your budget, especially as most IT departments face efficiency and cost struggles.
By proactively managing your virtual machine resources and halting underutilized or “zombie” VMs, you can free up those resources to be decommissioned or reassigned to other workloads.
You’ll want to downsize overprovisioned VMs to reclaim resources, clean up idle or powered-off VMs, and upsize VMs that are straining their current allocations past acceptable performance. Here’s how to practice active capacity management.
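Those buckets can be expressed as a simple classification pass over utilization data. The thresholds and VM names below are illustrative assumptions; tune the cutoffs to your environment, and look at memory, disk, and network alongside CPU before resizing anything.

```python
def classify_vm(avg_cpu_pct, powered_on):
    """Bucket a VM for capacity actions using illustrative CPU thresholds."""
    if not powered_on:
        return "idle: clean up or archive"
    if avg_cpu_pct < 5:
        return "zombie: reclaim or decommission"
    if avg_cpu_pct < 30:
        return "overprovisioned: downsize"
    if avg_cpu_pct > 85:
        return "constrained: resize up"
    return "right-sized"

# Hypothetical fleet: name -> (average CPU %, powered on?)
fleet = {
    "web01":   (45, True),
    "batch07": (2,  True),
    "dev-old": (0,  False),
    "db02":    (92, True),
}
for name, (cpu, on) in fleet.items():
    print(name, "->", classify_vm(cpu, on))
```

Running a pass like this on a schedule, rather than waiting for a performance complaint, is the difference between active capacity management and firefighting.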