You’re ready to start deploying and migrating applications into Microsoft’s Azure cloud platform — but there are four deployment models to contend with. Which should you choose? Each has strengths and weaknesses depending on the service you are setting up. Some might require more attention than others, but offer additional control. Others integrate services like load balancing or even the operating system itself, functioning more as a Platform as a Service.
Learn the differences between Azure Service Fabric, Azure Virtual Machines, Azure Containers, and Azure App Services, and when you might want to choose one over another. Green House Data is also ready to help you decide which of your business applications belong in which bucket — and we can help you administrate them, too.
We’ve gone back and forth on this for many years now. Are enterprise data centers dying? Gartner seems to think so, recently predicting that by 2025, 80% of enterprises will have shut down their traditional data centers, compared to 10% today.
That’s less than ten years out. Do you foresee your data center being put out to pasture within a decade? Or largely decommissioned and consolidated? It doesn’t seem too far-fetched considering an average hardware lifespan of three years. You could cycle through your servers three times over before then — and most of those compute workloads will likely end up in the cloud or hosted elsewhere.
Here’s how that shift will affect the way you procure and manage IT services.
Migrating to the cloud? Now is the perfect time to start or continue your digital transformation. There are several methods of cloud migration; at some point in your cloud journey you’re bound to encounter more than one of them, and each certainly has its purpose.
But if you aren’t designing in the cloud, for the cloud (which could involve rearchitecting or procuring replacement application components), you’re missing out on many of the biggest advantages of cloud computing.
Here’s why “lift and shift” ends up stifling what could be a transformative cloud migration that sets the stage for your enterprise IT for years to come.
While microservice application architecture dates back to 2011, enterprise IT tends to move relatively slowly when it comes to the adoption of new technologies. The concept and methodology have been refined in concert with the rise of cloud computing, and now microservices are a popular way to build, deploy, and most importantly scale applications.
Microservices can improve your agility, security, and resiliency, but they require a major adjustment to your development team’s workflow and the architecture of your application itself. Read on to learn the advantages of microservices and potential caveats for their use.
Cloud IT infrastructure has plenty of overlap with traditional on-premises servers, but there are additional layers of complexity and new tools to learn as well. That’s why building a successful cloud team is so important to an effective cloud deployment.
A managed service provider can help you fill your cloud skills gaps and architect a versatile and resilient cloud platform for your applications and data. But if you want continued success in the cloud, having a cloud architect on your IT team goes a long way.
How has the role of a cloud architect evolved and what are they responsible for? Let’s take a look.
As cloud adoption rates have increased and cloud models for enterprise IT mature, multicloud deployments have become more and more popular. They happen for a variety of reasons: some cloud platforms are better suited for specific applications, while others offer security or compliance capabilities that certain workloads require. They might be located in different physical sites, fostering failover and disaster recovery or serving satellite markets. For many users, avoiding lock-in to a single vendor is a huge win for negotiating leverage and data sovereignty.
Going multicloud isn’t a simple task, however, especially if you want to manage everything with a simple workflow. Here are the biggest stumbling blocks companies are facing when implementing multicloud.
When managing a virtualized environment you’ll naturally want to monitor your compute resources such as memory, CPU, storage, and bandwidth in order to keep an eye on any possible performance issues.
We’ve covered monitoring before: how much information to collect, how granular you need to get, how to check load averages, and how to configure vSphere Alarms for resource consumption. Today we’re taking a closer look at CPU performance monitoring in particular.
Often the CPU is the first potential culprit to check when you encounter a struggling virtual machine. Learn the differences between CPU metrics, some common problems, and best practices for provisioning CPU cores in this blog.
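A quick first-pass check along these lines is comparing the OS load average to the number of available vCPUs. The sketch below uses only the Python standard library; the `cpu_pressure` helper and the ~1.0 threshold are illustrative rules of thumb, not a vSphere metric.

```python
import os

def cpu_pressure(period: int = 0) -> float:
    """Return the load average divided by the number of logical CPUs.

    Values consistently above ~1.0 suggest the CPU may be oversubscribed;
    values well below 1.0 suggest headroom (or overprovisioned cores).

    period: 0, 1, or 2 for the 1-, 5-, or 15-minute load average.
    """
    load = os.getloadavg()[period]  # Unix-only; raises OSError elsewhere
    cores = os.cpu_count() or 1     # fall back to 1 if undetectable
    return load / cores

if __name__ == "__main__":
    ratio = cpu_pressure(period=1)  # 5-minute average
    print(f"Normalized 5-minute load: {ratio:.2f}")
    if ratio > 1.0:
        print("Possible CPU contention; check ready time in vSphere.")
```

A normalized load hovering near or above 1.0 inside a guest is a cue to look at hypervisor-side counters like CPU ready time before adding vCPUs.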
NUMA architectures allow for greater scalability, which is of course great for building cloud data centers. But if your virtual machines aren’t configured correctly, NUMA can cause performance degradation in VMware virtualized servers.
Here’s an overview of what NUMA is, why it’s useful for cloud computing, and how to address it when configuring your VMware cloud server.
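One concrete configuration question NUMA raises is whether a VM even fits inside a single physical NUMA node. The helper below is a hypothetical illustration (not a VMware API): its name and parameters are assumptions used to show the sizing check.

```python
def spans_numa_nodes(vcpus: int, vm_mem_gb: int,
                     cores_per_node: int, mem_per_node_gb: int) -> bool:
    """Return True if a VM's vCPU count or memory exceeds a single
    physical NUMA node, meaning it will be split across nodes and
    exposed to slower remote-memory access unless vNUMA is presented
    to the guest OS.
    """
    return vcpus > cores_per_node or vm_mem_gb > mem_per_node_gb

# Example host: two NUMA nodes, 12 cores and 128 GB of RAM each
print(spans_numa_nodes(16, 64, cores_per_node=12, mem_per_node_gb=128))
print(spans_numa_nodes(8, 96, cores_per_node=12, mem_per_node_gb=128))
```

The first VM spans nodes (16 vCPUs > 12 cores per node); the second fits entirely within one node, which is generally the safer sizing target.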
At Green House Data we like to say there’s no “one size fits all” cloud deployment. That’s why we don’t have base package pricing on the website — every VM is right-sized and designed around our client’s applications and business goals. That philosophy applies to every cloud deployment, and network considerations are no exception.
Depending on your objectives, the intended use of the application in question, and the location of your users and service providers, your network will have different performance and cost implications.
Let’s take a look at how to prepare your network for varying application deployments in the cloud.
Another year, another trend in the data center world. Although edge data centers first started making headlines circa 2014 or 2015, they’ve become mainstream as more and more users slurp down increasing amounts of data. That takes serious bandwidth, to the point that many pundits point to the placement of workloads in edge facilities, rather than the traditional centralized data centers in major markets, as a sign that cloud computing is starting to wane.
On the contrary, edge data centers serve to supplement and improve the reach of even the major cloud computing providers. No major cloud service provider (CSP) is going to place workloads only in major markets. Just look at our neighbors in Cheyenne: Microsoft has a huge facility that it’s actively expanding. Amazon operates data centers in Ohio, which, while central for the US in general and equidistant from major population centers like Chicago and New York, is hardly a major market in itself.
And beyond large-scale platforms like Azure or AWS, you have players like Green House Data, who offer smaller-scale virtualization from data centers in a myriad of second- and third-tier markets.
But it’s not just about the cloud spreading itself to the edge. Here’s why edge computing will be important, but will also become more of a niche deployment model, with cloud remaining the king of application processing and data storage.