Placing data in the cloud comes with a set of concerns — accessibility (will my information always be available if the cloud has technical problems?) and security (how safe is my data when I can’t control the security measures?) chief among them. Of these, security has long been the primary concern for technology decision makers considering the cloud.
Recent surveys reveal that while security remains top of mind, data location is rising in prominence as a barrier to cloud adoption. These concerns stem in part from limited visibility into how data moves and where it is stored. Customers want to know exactly where their data resides, both so they can retrieve it quickly and because of the legal implications.
Two recent court cases involving Google, Microsoft, and the federal government highlight the legal entanglements that can come with storing information in the cloud. Read on to learn why the location of your cloud data is vital.
Here at Green House Data, our technicians are constantly working hard behind the scenes to improve the customer experience in our cloud products. We’ve recently completed a round of upgrades to bring you the latest features and bug fixes to our gBlock Cloud platform.
Here are some of the newest features that are available to you today, including improved web portal access, new disaster recovery features and interoperability with AWS and Azure, and more.
You may already know that at its most basic level, cloud computing is essentially storing, accessing, and interacting with data and applications over the internet instead of locally, like on a hard drive. And, if you are in a technical profession, you’re likely to know a whole lot more about the cloud, what it’s good for, how it’s built and deployed, and what it’s made up of.
But, what about the rest of us? Why should non-technical folks care about the cloud?
When designing the architecture for your SQL Server deployment virtualized on VMware vSphere, your requirements will determine which SQL Server or vSphere availability features you should use. Several availability features are packaged with SQL Server before you even get to vSphere features like Distributed Resource Scheduler, High Availability, Fault Tolerance, or vMotion, each of which has its own considerations when interacting with SQL Server.
To get started, you’ll want to ask yourself a few questions about your SQL deployment.
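Those questions can be framed as a rough decision aid. The sketch below maps two common requirements (tolerable downtime and data-loss tolerance) to features worth evaluating; the thresholds and the specific feature pairings are illustrative assumptions, not an official sizing guide.

```python
# Illustrative decision sketch, not an official sizing tool: map a
# SQL Server deployment's availability requirements to candidate
# features to evaluate. Thresholds are assumptions for illustration.

def candidate_features(max_downtime_seconds: float,
                       needs_zero_data_loss: bool) -> list[str]:
    """Suggest availability options to investigate for a virtualized SQL Server."""
    features = []
    if max_downtime_seconds == 0:
        # Only continuous-availability options keep the VM running through a host failure.
        features.append("vSphere Fault Tolerance")
    if needs_zero_data_loss:
        # Synchronous replication protects committed transactions.
        features.append("SQL Server Always On Availability Groups (synchronous commit)")
    if max_downtime_seconds > 0:
        # Restart-based recovery is fine when minutes of downtime are tolerable.
        features.append("vSphere High Availability (VM restart on host failure)")
        features.append("SQL Server Failover Cluster Instance")
    return features

print(candidate_features(0, True))
print(candidate_features(300, False))
```

A real design would weigh licensing, shared-storage requirements, and how each feature interacts with DRS and vMotion, but the shape of the decision is the same: downtime tolerance and data-loss tolerance come first.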
“Can my application run in the cloud?”
It’s a question we get more often than you might think, and the answer is almost always yes. Just yesterday, we got a web chat from someone who wanted to know whether a cloud server could run his SMTP-based email server with PowerMTA, or whether he would need a dedicated option. Mail servers frequently run on virtual machines, so this configuration should pose no problem on a cloud server.
There are thousands of applications, running on a wide variety of operating systems, that play nice with VMware virtualization platforms (the basis of the gBlock cloud). Here are four hybrid cloud use cases to get you started.
You’re probably familiar with the kind of performance issues inherent in antivirus/antimalware tools. Anyone who has used a PC while a scheduled antivirus scan kicks off can attest to sluggish performance. The same issues rear their heads when running antivirus in a virtual environment – but virtual machines come with their own set of wrinkles.
Antivirus software can be installed either on the VM itself or on the host. Depending on your approach, you’ll want to consider these key factors to maximize performance.
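One of those factors is scheduling: when many co-located VMs launch their scans at the same moment, the host's disk and CPU are hammered all at once (an "AV storm"). A common mitigation is to stagger scan start times across the maintenance window. The sketch below shows the idea; the VM names, window, and even spacing are hypothetical.

```python
# Sketch: spread scheduled antivirus scans across a maintenance window
# so co-located VMs don't all scan at once. VM names and the window
# used here are hypothetical examples.

from datetime import datetime, timedelta

def stagger_scans(vm_names, window_start: datetime, window_minutes: int):
    """Assign each VM an evenly spaced scan start time inside the window."""
    step = window_minutes / max(len(vm_names), 1)
    return {
        name: window_start + timedelta(minutes=i * step)
        for i, name in enumerate(vm_names)
    }

schedule = stagger_scans(
    ["web-01", "web-02", "db-01", "app-01"],
    datetime(2017, 1, 1, 2, 0),  # 2:00 AM maintenance window
    120,                         # spread scans over two hours
)
for vm, start in schedule.items():
    print(vm, start.strftime("%H:%M"))
```

In practice you would also randomize the offsets slightly and cap how many scans run concurrently per host, but even simple even spacing avoids the worst of the contention.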
You know what they say: a clean Active Directory keeps the attackers at bay. Or they should say it, anyway. Active Directory is the component of Windows Server in charge of authentication and authorization for any “object” connected to the network. That includes users, systems, resources, and services.
As you might imagine, enterprises often manage sprawling Active Directories with thousands or even hundreds of thousands of objects, from laptops to printers. When a user leaves the company, their login may still reside in Active Directory. Groups used to organize different pieces of the directory may now lie empty.
Cleaning up your Active Directory not only improves database and server performance but also plugs security holes left by old accounts. A regularly scheduled Active Directory cleanup should be part of your maintenance activities and performed at least annually.
If your Active Directory server is hosted in the cloud, decluttering can also save you storage costs, and the performance gains can lower your monthly bill, since both bandwidth charges and compute usage may drop.
If you’re pricing out a cloud server, you’re probably comparing pricing on a certain number of virtual CPUs (Central Processing Units), as well as RAM, storage, and perhaps network fees. If you were building a gaming PC, you’d be pricing out all of those items, but you’d also be setting aside a major chunk of money for a graphics card, or GPU. GPUs were originally designed to handle the processing of digital graphics in visually intensive tasks like gaming or animation.
With the rise of big data analytics and machine learning, however, GPUs are playing an increasingly important part in high performance computing. Cloud providers have started getting in on the game, enabling GPU-accelerated cloud servers with an eye on big data processing and other intensive applications.
Private vs. public cloud is a battle many thought was over years ago, and some recent think pieces seem to confirm that notion, claiming no one can match the economies of scale delivered by hyperscale cloud providers.
But private cloud, or on-premises virtualization, can still be a less expensive option, provided you have the staff and capabilities to support it. A recent study from 451 Research describes when the tipping point favors private cloud and when public cloud has a lower total cost of ownership (TCO), based on hardware utilization and staff efficiency.
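The intuition behind that tipping point can be sketched with a toy model: private cloud cost per VM-hour is dominated by hardware amortization (diluted by utilization) plus admin labor (diluted by how many VMs each admin manages), while public cloud charges a flat per-VM-hour rate. All the numbers below are placeholder assumptions, not figures from the 451 Research study.

```python
# Toy TCO comparison. Every number here is a placeholder assumption,
# not data from 451 Research: the point is the shape of the model, where
# higher utilization and more VMs per admin favor private cloud.

def private_cost_per_vm_hour(hw_cost_per_host_hour, vms_per_host, utilization,
                             admin_hourly_rate, vms_per_admin):
    """Hardware amortization plus labor, spread across effective VM-hours."""
    hardware = hw_cost_per_host_hour / (vms_per_host * utilization)
    labor = admin_hourly_rate / vms_per_admin
    return hardware + labor

public_rate = 0.10  # assumed public cloud price per VM-hour

# A well-utilized, efficiently staffed private cloud...
efficient = private_cost_per_vm_hour(1.00, 30, 0.80, 40.0, 800)
# ...versus a poorly utilized one where each admin manages far fewer VMs.
inefficient = private_cost_per_vm_hour(1.00, 30, 0.30, 40.0, 100)

print(f"efficient private:   ${efficient:.3f}/VM-hour")
print(f"inefficient private: ${inefficient:.3f}/VM-hour")
print(f"public cloud:        ${public_rate:.3f}/VM-hour")
```

With these placeholder inputs the efficient private cloud undercuts the public rate while the inefficient one costs several times more, which is the same qualitative conclusion the study draws: the answer depends on your utilization and your staff.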
Cloud infrastructure is all about providing the right amount of resources for your applications at any given moment. Overprovisioning might be wise for performance-oriented apps, but generally “right-sizing” is the best way to maximize your budget, especially as most IT departments face efficiency and cost struggles.
By proactively managing your virtual machine resources and halting underutilized or “zombie” VMs, you can decommission the freed resources or reassign them to other workloads.
You’ll want to adjust VM size to reclaim overprovisioned VMs, clean up idle or turned-off VMs, and resize VMs that are stretching their current resources beyond acceptable performance. Here’s how to practice active capacity management.
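The three actions above amount to classifying each VM by its utilization and power state. A minimal sketch of that classification pass follows; the thresholds, VM names, and utilization figures are illustrative assumptions, and real right-sizing would look at memory, storage, and peak (not just average) demand.

```python
# Sketch of a right-sizing pass: classify VMs by power state and average
# CPU utilization. Thresholds and the sample fleet data are illustrative
# assumptions, not recommended values.

def classify(avg_cpu_pct: float, powered_on: bool) -> str:
    """Bucket a VM into a capacity-management action."""
    if not powered_on:
        return "idle: reclaim or archive"
    if avg_cpu_pct < 5:
        return "zombie: candidate for decommissioning"
    if avg_cpu_pct < 30:
        return "overprovisioned: shrink vCPU/RAM"
    if avg_cpu_pct > 85:
        return "constrained: add resources"
    return "right-sized"

fleet = {
    "build-agent":  (2.1, True),    # barely used but still running
    "legacy-app":   (0.0, False),   # powered off for months
    "web-frontend": (91.0, True),   # stretching its current resources
    "reporting":    (44.0, True),
}

for name, (cpu, on) in fleet.items():
    print(f"{name}: {classify(cpu, on)}")
```

Running a pass like this on monitoring data at a regular cadence is what turns right-sizing from a one-off cleanup into active capacity management.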