Cloud IT infrastructure has plenty of overlap with traditional on-premises servers, but it also brings additional layers of complexity and new tools to learn. That’s why building a successful cloud team is so important to an effective cloud deployment.
A managed service provider can help you fill your cloud skills gaps and architect a versatile and resilient cloud platform for your applications and data. But if you want continued success in the cloud, having a cloud architect on your IT team goes a long way.
How has the role of a cloud architect evolved and what are they responsible for? Let’s take a look.
As cloud adoption rates have increased and enterprise cloud models have matured, multicloud deployments have become more and more popular. Organizations go multicloud for a variety of reasons: some cloud platforms are better suited to specific applications, while others offer security or compliance measures that are necessary. Different clouds might be located in different physical sites, enabling failover and disaster recovery or serving satellite markets. And for many users, avoiding lock-in with a single vendor is a big win for negotiating power and data sovereignty.
Going multicloud isn’t a simple task, however, especially if you want to manage everything with a simple workflow. Here are the biggest stumbling blocks companies are facing when implementing multicloud.
When managing a virtualized environment, you’ll naturally want to monitor compute resources such as memory, CPU, storage, and bandwidth to keep an eye on potential performance issues.
We’ve covered monitoring before – like how much information to collect, how granular you need to get, how to check load averages, and configuring vSphere Alarms for resource consumption. Today we’re taking a closer look at CPU performance monitoring in particular.
The CPU is often the first potential culprit to check when you encounter a struggling virtual machine. In this blog, learn the differences between CPU metrics, some common problems, and best practices for provisioning CPU cores.
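One metric worth calling out here is CPU ready time, which vSphere reports as `cpu.ready.summation`: the milliseconds a vCPU spent waiting for a physical core during a sampling interval (20 seconds for real-time charts). As a rough illustration of how that raw value converts to a percentage, here is a minimal sketch; the helper name and the sample figures are our own, not part of any VMware tooling:

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0, vcpus: int = 1) -> float:
    """Convert a cpu.ready.summation sample (milliseconds of ready time
    accumulated over the sampling interval) into a per-vCPU percentage."""
    return (ready_ms / (interval_s * 1000.0)) / vcpus * 100.0

# Example: 2,000 ms of ready time over a 20 s real-time interval on a 4-vCPU VM
pct = cpu_ready_percent(2000, 20, 4)
print(f"{pct:.1f}% CPU ready per vCPU")  # prints "2.5% CPU ready per vCPU"
```

A common rule of thumb is that sustained ready time above roughly 5% per vCPU merits investigation, often pointing to CPU overcommitment on the host.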
NUMA architectures allow for greater scalability, which is of course great for building cloud data centers. But if your virtual machines aren’t configured correctly, NUMA can cause performance degradation in VMware virtualized servers.
Here’s an overview of what NUMA is, why it’s useful for cloud computing, and how to address it when configuring your VMware cloud server.
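The core sizing concern is easy to sketch: a VM whose vCPU count or memory footprint exceeds a single NUMA node must span nodes, incurring remote-memory access penalties unless vNUMA is exposed to the guest. The helper below is a hypothetical illustration of that check, with made-up host figures rather than output from any VMware tool:

```python
def fits_single_numa_node(vcpus: int, vm_mem_gb: float,
                          cores_per_node: int, mem_per_node_gb: float) -> bool:
    """Return True if the VM's vCPUs and memory can both be scheduled
    entirely within one NUMA node, keeping memory access node-local."""
    return vcpus <= cores_per_node and vm_mem_gb <= mem_per_node_gb

# A dual-socket host with 12 cores and 128 GB per socket (one NUMA node each):
print(fits_single_numa_node(8, 64, 12, 128))   # prints "True"  (stays node-local)
print(fits_single_numa_node(16, 64, 12, 128))  # prints "False" (a "wide" VM spanning nodes)
```

When a VM must be wide, sizing vCPUs as a multiple of the node's core count helps the scheduler split it evenly across nodes.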
At Green House Data we like to say there’s no “one size fits all” cloud deployment. That’s why we don’t have base package pricing on the website: every VM is right-sized and designed around our clients’ applications and business goals. That philosophy applies to every cloud deployment, and network considerations are no exception.
Depending on your objectives, the intended use of the application in question, and the location of your users and service providers, your network will have different performance and cost implications.
Let’s take a look at how to prepare your network for varying application deployments in the cloud.
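One quick way to see why user and provider locations matter is a back-of-the-envelope transfer estimate. The sketch below is purely illustrative (the function and the 500 MB / 100 Mbps / 40 ms figures are assumptions, not measurements), and it deliberately ignores TCP slow start, protocol overhead, and congestion:

```python
def transfer_time_s(payload_mb: float, bandwidth_mbps: float, rtt_ms: float = 0.0) -> float:
    """Rough transfer-time estimate: serialization delay at the given link
    bandwidth plus one round trip of latency."""
    serialization = (payload_mb * 8) / bandwidth_mbps  # seconds on the wire
    return serialization + rtt_ms / 1000.0

# Moving a 500 MB database snapshot over a 100 Mbps link with 40 ms RTT:
print(f"{transfer_time_s(500, 100, 40):.2f} s")  # prints "40.04 s"
```

For bulk transfers like this, bandwidth dominates; for chatty applications making many small requests, the round-trip latency term quickly becomes the bigger cost, which is why proximity to users can matter more than raw link speed.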
Another year, another trend in the data center world. Although edge data centers first started making headlines circa 2014 or 2015, they’ve become mainstream as more and more users slurp down increasing amounts of data. That takes serious bandwidth, so much so that many pundits point to the placement of workloads in edge facilities, rather than in traditional centralized data centers in major markets, as a sign that cloud computing is starting to wane.
On the contrary, edge data centers serve to supplement and improve the reach of even the major cloud computing providers. No major cloud service provider (CSP) is going to only place workloads in major markets. Just look at our neighbors in Cheyenne: Microsoft has a huge facility that they’re actively expanding. Amazon operates data centers in Ohio, which, while central for the US in general and equidistant from major population centers like Chicago and New York, is hardly a major market in itself.
And beyond large-scale platforms like Azure or AWS, you have players like Green House Data, who offer smaller-scale virtualization from data centers in a myriad of second- and third-tier markets.
But it's not just about the cloud spreading itself to the edge. Here's why edge computing will be important, but will also become more of a niche deployment model, with cloud remaining the king of application processing and data storage.
The Green House Data blog has hit a major milestone this month, rocketing from around 8,000 monthly unique visitors to 12,000 unique visitors in March. As we pass the 10k mark, we want to say thanks to everyone who has come to our little corner of the internet and also take a look back at our most enduring and popular posts over the years.
From cloud hosting to data center design to information security, the blog has covered a lot of ground in the past five or six years, with experts from our staff joining our marketing and content teams for weekly updates.
Here are the top 10 all time posts from the Green House Data blog.
Green House Data announced the addition of Azure cloud to our stable of managed cloud services this week. For some, this may come as a bit of a shock. We’ve been a VMware shop since the company was formed, with the gBlock Cloud hosted within our data centers on the vSphere platform.
We’ll continue to offer our own hosted VMware cloud as well as VMware cloud management on behalf of our clients, but we’ve expanded our scope to include Azure managed services. There are a number of reasons for this shift in strategy, which ultimately allows clients a wider breadth of service options to best suit their IT infrastructure goals.
GDPR (General Data Protection Regulation) takes effect on May 25th for companies that operate in the European Union or serve customers there. Fines for noncompliance can run into the tens of millions. Are you prepared? And do you even have to worry about it if you’re a US-based operation?
Learn what security requirements fall under GDPR, as well as what situations would require compliance, and how you need to change your operations to avoid sanctions.
VMware vSphere 6.5 introduced policy-based encryption, which simplifies the security management of VMs across large-scale infrastructure, as each object no longer requires individual key management.
vSphere VM encryption offers quite a few advantages compared to other encryption methods, but it might not be a great fit for every workload. When weighing whether to encrypt or not, you’ll want to consider a few limitations, caveats, and performance issues first.