Cloud-native automation and orchestration tools make IT administration easier — at least once you know what you’re doing. While some cloud technicians worry that automation could lead to job losses, mastering the available tools makes you more valuable while also letting you find and deliver real efficiencies. Cloud automation is a win-win.
But where should you begin when it comes to automating your cloud environment? There are many moving parts in an enterprise cloud deployment, even within specific application clusters.
These are the three easiest targets for automation and orchestration.
You need IT infrastructure that you can count on even when you run into the rare network outage, equipment failure, or power issue. When your systems run into trouble, that’s where one or more of the three primary availability strategies will come into play: high availability, fault tolerance, and/or disaster recovery.
While each of these infrastructure design strategies has a role in keeping your critical applications and data up and running, they do not serve the same purpose. Simply because you operate a high availability infrastructure does not mean you shouldn’t implement a disaster recovery site — and assuming otherwise risks disaster indeed.
What’s the difference between HA, FT, and DR anyway? Do you really need DR if you have HA set up?
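A quick back-of-the-envelope sketch helps show why HA and DR solve different problems: redundancy multiplies availability only when failures are independent, while a site-wide disaster takes out every redundant copy at once. The MTBF and MTTR figures below are hypothetical, chosen just to illustrate the math.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant_availability(single: float, copies: int) -> float:
    """Availability of N redundant copies, assuming independent failures:
    the service is down only if every copy is down simultaneously."""
    return 1 - (1 - single) ** copies

single = availability(1000, 4)            # one server: ~99.6%
ha_pair = redundant_availability(single, 2)  # HA pair: ~99.998%
```

The catch is the independence assumption: a flood or regional outage fails both copies together, which is exactly the gap a geographically separate DR site covers.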
If you have a pool of users that need access to Windows desktops, you can deliver those desktops and associated applications remotely, saving money on administration and end-user hardware alike, while gaining control over security and access control.
Two methods to achieve this are Virtual Desktop Infrastructure (VDI) and Remote Desktop Services (RDS). In either case, the user connects to a server or virtual machine hosted within a data center or with a cloud provider. That remote server or VM hosts the desktop environment, and all data and applications are stored and processed remotely.
But is VDI or RDS the right choice for your situation? Let’s take a look at the differences between the two and some use cases for each.
As cloud adoption rates have increased and cloud models for enterprise IT have matured, multicloud deployments have become more and more popular. They happen for a variety of reasons: some cloud platforms are better suited to specific applications, while others offer security or compliance capabilities a workload requires. Clouds in different physical sites enable failover and disaster recovery, or serve satellite markets. And for many users, avoiding lock-in with a single vendor is a major advantage for negotiating leverage and data sovereignty.
Going multicloud isn’t a simple task, however, especially if you want to manage everything with a single workflow. Here are the biggest stumbling blocks companies face when implementing multicloud.
When managing a virtualized environment, you’ll naturally want to monitor compute resources such as memory, CPU, storage, and bandwidth in order to keep an eye on potential performance issues.
We’ve covered monitoring before – like how much information to collect, how granular you need to get, how to check load averages, and configuring vSphere Alarms for resource consumption. Today we’re taking a closer look at CPU performance monitoring in particular.
Oftentimes the CPU is the first potential culprit to check when you encounter a struggling virtual machine. Learn the differences between CPU metrics, some common problems, and best practices for provisioning CPU cores in this post.
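One metric worth knowing up front is CPU ready time, which vSphere reports as a summation counter in milliseconds accumulated over each sampling interval, so it is easier to judge once converted to a percentage. A minimal sketch of that conversion (the 20-second interval matches vSphere’s real-time charts; the 5% figure is a commonly cited rule of thumb, not an official limit):

```python
def cpu_ready_percent(ready_ms: float, interval_seconds: float) -> float:
    """Convert a CPU ready summation (milliseconds a vCPU spent waiting
    for physical CPU time during the sampling interval) to a percentage."""
    return (ready_ms / (interval_seconds * 1000)) * 100

# Real-time performance charts in vSphere sample every 20 seconds.
print(cpu_ready_percent(1000, 20))  # 5.0 -- a level often treated as a warning sign
```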
NUMA architectures allow for greater scalability, which is of course great for building cloud data centers. But if your virtual machines aren’t configured correctly, NUMA can cause performance degradation in VMware virtualized servers.
Here’s an overview of what NUMA is, why it’s useful for cloud computing, and how to address it when configuring your VMware cloud server.
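As a rough sizing check, a VM avoids remote-memory penalties when its vCPU count and memory footprint both fit inside a single NUMA node. The sketch below assumes a hypothetical host with 10 cores and 128 GB per node; ESXi can expose virtual NUMA to wider VMs, but keeping a VM within one node is the simplest rule of thumb.

```python
def fits_in_numa_node(vcpus: int, vm_memory_gb: float,
                      cores_per_node: int, memory_per_node_gb: float) -> bool:
    """True if the VM can be scheduled entirely within one NUMA node,
    keeping all memory access local to that node's socket."""
    return vcpus <= cores_per_node and vm_memory_gb <= memory_per_node_gb

# Hypothetical host: 2 sockets, 10 cores and 128 GB per NUMA node.
print(fits_in_numa_node(8, 96, 10, 128))   # True  -- stays local
print(fits_in_numa_node(12, 96, 10, 128))  # False -- a "wide" VM spans nodes
```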
The Green House Data blog has hit a major milestone this month, rocketing from around 8,000 monthly unique visitors to 12,000 in March. As we pass the 10k mark, we want to say thanks to everyone who has come to our little corner of the internet, and also take a look back at our most enduring and popular posts over the years.
From cloud hosting to data center design to information security, the blog has covered a lot of ground in the past five or six years, with experts from our staff joining our marketing and content teams for weekly updates.
Here are the top 10 all-time posts from the Green House Data blog.
Green House Data announced the addition of Azure cloud to our stable of managed cloud services this week. For some, this may come as a bit of a shock. We’ve been a VMware shop since the company was formed, with the gBlock Cloud hosted within our data centers on the vSphere platform.
We’ll continue to offer our own hosted VMware cloud as well as VMware cloud management on behalf of our clients, but we’ve expanded our scope to include Azure managed services. There are a number of reasons for this shift in strategy, which ultimately gives clients a broader range of service options to best suit their IT infrastructure goals.
VMware vSphere 6.5 introduced policy-based encryption, which simplifies the security management of VMs across large-scale infrastructure, as each object no longer requires individual key management.
vSphere VM encryption offers quite a few advantages compared to other encryption methods, but it might not be a great fit for every workload. When weighing whether to encrypt or not, you’ll want to consider a few limitations, caveats, and performance issues first.
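The core idea behind policy-based management can be sketched as a toy model (this is an illustration of the concept, not the vSphere API): when VMs reference a shared storage policy, a key change is one policy update instead of one operation per VM.

```python
from dataclasses import dataclass

@dataclass
class EncryptionPolicy:
    """One policy holds the key reference; every VM that uses the
    policy picks up a key change automatically."""
    name: str
    key_id: str

@dataclass
class VM:
    name: str
    policy: EncryptionPolicy

policy = EncryptionPolicy("encrypted-tier", key_id="kms-key-001")
vms = [VM(f"vm-{i}", policy) for i in range(3)]

# Rotating the key is a single policy update, not one change per VM.
policy.key_id = "kms-key-002"
assert all(vm.policy.key_id == "kms-key-002" for vm in vms)
```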
We thought everyone finally had cloud terminology all cleared up. You’ve certainly seen the countless blogs about IaaS, PaaS, and SaaS, not to mention the ever-proliferating surveys and reports on hybrid cloud being the deployment flavor du jour.
But things aren’t as clear as we might want them to be. For example, tell me what you think of when you hear the term “public cloud.”
Is it a hyperscale provider like AWS, Azure, or Google? It is, isn’t it? If not, you probably work with or for an organization similar to Green House Data, which has a public cloud offering with some major differences from the hyperscale players.
So how can we clear up the cloud? Has public become synonymous with hyperscale and self-provisioning? Has private cloud fallen by the wayside? And what should your business focus on, anyway?