The holidays are looming, meaning many DevOps teams are about to see their apps take a beating as holiday orders and new device users slam them all at once. Whether or not your systems are consumer-focused, there will eventually come a time when the overall load on your servers is pushed to the limit.
Load testing applications in the cloud allows development and testing staff to perform scale testing to see at what point virtual machines need to scale, when to add additional resources like storage or bandwidth, and when a failover solution might be necessary.
By performing thorough load tests throughout the DevOps process, your organization can lower costs over time and your team won’t have to scramble during a major event. Here are some best practices for cloud-based load testing, starting with a simple sketch of the core technique below.
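To make the idea concrete, here is a minimal load-test sketch using only the Python standard library. The endpoint URL and concurrency levels are hypothetical placeholders, and for real testing you would likely reach for a dedicated tool such as JMeter or Locust; this just illustrates the stepped-concurrency approach of ramping load until response times degrade.

```python
# A minimal load-test sketch: ramp up concurrent requests against a
# hypothetical endpoint to find the point where latency degrades.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "https://staging.example.com/health"  # placeholder endpoint

def timed_request(url):
    """Issue one GET and return its latency in seconds (None on failure)."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start
    except Exception:
        return None

def run_stage(concurrency, requests_per_stage=100):
    """Fire a fixed batch of requests at a given concurrency level."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_request, [TARGET_URL] * requests_per_stage))
    ok = [r for r in results if r is not None]
    errors = len(results) - len(ok)
    avg = sum(ok) / len(ok) if ok else float("inf")
    print(f"concurrency={concurrency:4d}  avg_latency={avg:.3f}s  errors={errors}")

# Step up the load level by level; placeholder stages shown here.
for level in (10, 50, 100, 250, 500):
    run_stage(level)
```

The pattern is simple: step concurrency upward and watch for the stage where average latency spikes or errors start appearing. That inflection point is your signal for when virtual machines need to scale, when to add storage or bandwidth, or when a failover plan needs exercising.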
As you may have heard, Green House Data has completed our second acquisition of 2017 with the purchase of Ajubeo, a Denver-based cloud hosting provider. While more cloud resources are always beneficial to a nationwide cloud platform, some people might be left scratching their heads — after all, we have our headquarters just 100 miles north of Denver in Cheyenne, Wyoming. So what advantages are to be had from adding a Denver cloud node?
We thought everyone finally had cloud terminology all cleared up. You’ve certainly seen the countless blogs about IaaS, PaaS, and SaaS; not to mention the ever-proliferating surveys and reports on hybrid cloud being the deployment flavor du jour.
But things aren’t as clear as we might want them to be. For example, tell me what you think of when you hear the term “public cloud.”
Is it a hyperscale provider like AWS, Azure, or Google? It is, isn’t it? If not, you probably work with or for an organization similar to Green House Data, which has a public cloud offering with some major differences from the hyperscale players.
So how can we clear up the cloud? Has public become synonymous with hyperscale and self-provisioning? Has private cloud fallen by the wayside? And what should your business focus on, anyway?
Gartner anticipates that 90% of large organizations will have a Chief Data Officer by 2019.
This isn’t too surprising when you consider that the total amount of data in the world is expected to grow exponentially, roughly doubling in size every two years through 2020. That adds up to nearly 50 times more data over the course of a decade.
Big data has plenty of insights for businesses large and small, and data-based initiatives are underway across the globe as organizations seek to quickly understand and analyze mountains of information to glean a competitive advantage.
A Chief Data Officer makes key decisions around the storage, handling, and use of a business’s information, including the types of platforms used, connections to and from production applications, analytics processes, and the efficient flow of data.
Let’s dig into what that means in practice and how a CDO can help reduce the significant costs around data storage, platforms, and access, while also improving business functionality and agility.
Hybrid IT infrastructure seems to be the deployment mode du jour, but some theorize that hybrid is just a stopover on the way to a 100% public cloud environment. With cloud adoption as a whole moving slower than many anticipated, it may be too early to definitively say whether hybrid is here to stay, but in our opinion, hybrid will remain a valuable model for many years to come.
Surveys from McAfee and RightScale both show hybrid cloud and multicloud adoption increasing: McAfee found a jump from 19% of organizations using hybrid cloud in 2015 to 57% in 2016, while RightScale showed an increase from 58% to 71% over the same period.
But are these increases just because hybrid cloud is the easiest deployment model to reach? Oftentimes a company simply adds cloud resources alongside its existing infrastructure, which is considered a form of hybrid cloud. Or is it because the definition of hybrid cloud itself is shifting?