As we’ve mentioned before on the blog, the location of your cloud data matters. Latency, accessibility, and security are all top of mind, but legal concerns should also be considered. Case in point: a new law working its way through the Senate could have major implications for your data storage.
The CLOUD Act (Clarifying Lawful Overseas Use of Data) has recently garnered the support of major tech companies like Apple, Microsoft, and Google, among others. Its stated goal is to untangle the web of laws governing data disclosure and privacy, so that law enforcement and government officials have well-defined guidelines for accessing remotely stored data, including information that resides overseas and is otherwise governed by the host country's own laws.
So how might the CLOUD Act affect cloud storage and data sovereignty?
Accountants and CFOs have had their work cut out for them when trying to balance the checkbooks on cloud computing services. Because software and hardware are often categorized as depreciating capital assets, the proliferation of cloud, a subscription-based computing service rather than a capital expense, threw a wrench into traditional accounting for the IT department.
The Financial Accounting Standards Board (FASB), which sets accounting standards for public companies under the oversight of the SEC, issued guidance on handling cloud computing costs in 2015. It recently convened to fix some of the problems created by that earlier guidance.
Here’s why cloud computing can cause headaches for your CFO and why the new FASB rules could help clear things up.
Unless you’ve been living under a rock or aren’t in the IT field at all, by now you’ve likely heard about the widespread Spectre and Meltdown vulnerabilities. Spectre affects nearly all modern processors, including chips from Intel, AMD, and ARM, while Meltdown primarily affects Intel CPUs; together they raise both security and performance concerns.
Green House Data staff have been hard at work patching systems as fixes have become available this week. Here’s a quick summary of the vulnerabilities, their effects on cloud and general computing performance, and what we’ve done to fix them so far. We also provide a few links for users who need to patch their own operating systems or investigate further.
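If you administer your own Linux servers, recent kernels with the Meltdown/Spectre fixes expose per-vulnerability mitigation status as small text files under /sys/devices/system/cpu/vulnerabilities. The sketch below is one illustrative way to read and summarize that interface; the helper names and the coarse three-way classification are our own, and the exact status strings vary by kernel version.

```python
from pathlib import Path

# Directory where recent Linux kernels expose CPU vulnerability status.
# Older, unpatched kernels do not have it at all.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def parse_status(text: str) -> str:
    """Classify a sysfs vulnerability string as 'not affected',
    'mitigated', or 'vulnerable'."""
    text = text.strip().lower()
    if text.startswith("not affected"):
        return "not affected"
    if text.startswith("mitigation"):
        return "mitigated"
    return "vulnerable"

def check_cpu_vulnerabilities(vuln_dir: Path = VULN_DIR) -> dict:
    """Read each vulnerability file (e.g. 'meltdown', 'spectre_v2')
    and return a name -> status mapping. Returns an empty dict if the
    kernel does not expose the interface."""
    if not vuln_dir.is_dir():
        return {}
    return {f.name: parse_status(f.read_text()) for f in vuln_dir.iterdir()}

if __name__ == "__main__":
    for name, status in sorted(check_cpu_vulnerabilities().items()):
        print(f"{name}: {status}")
```

An empty result simply means the kernel predates the interface, which is itself a hint that patching is overdue.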
We have arrived again at that time when year-end lists proliferate for perusal by a workforce distracted by the holidays. The data center industry continued to chug forward in 2017, with M&A activity heating up in particular. Here are the top stories that broke throughout the data center world, plus a list of the most visited posts from our own humble blog.
Cloud computing adoption has been linked to “digital transformation,” a term encompassing the shift from traditional modes of consuming and administering IT services to the new on-demand model. That shift is punctuated by hiring and reshaping IT staff around working cloud services, moving to DevOps methods, or otherwise changing the business operations model in order to maintain or improve a competitive position in the market.
One major piece of digital transformation and cloud adoption is the use of multiple cloud service providers depending on the workload at hand. This mode of cloud computing is now one of the leading deployment types, and could be considered a sibling of, or even the same thing as, hybrid cloud.
A recent survey from VMware and the MIT Technology Review classifies three stages on the way to a successful multicloud deployment. Where is your organization on this path towards hybrid cloud enlightenment?
Moving to the cloud, changing service providers, upgrading your host hardware, consolidating data centers, or switching to new software: any of these might necessitate a database migration.
Moving a database is not a task to be taken lightly, but it can lead to more centralized and efficient management, lower storage costs, and/or reduced license requirements. To minimize your risk and downtime, follow these database migration tips.
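One of the core mechanics behind any database migration, regardless of platform, is moving data in bounded batches and then validating the result before cutting over. As a small hedged sketch of that idea, the example below copies a table between two SQLite databases and verifies row counts afterward; the function and its validation step are illustrative, not a production migration tool.

```python
import sqlite3

def migrate_table(src_path, dst_path, table, batch_size=500):
    """Copy `table` from the source database to the destination in
    batches, then verify row counts match before reporting success."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # Recreate the table schema on the destination.
        schema = src.execute(
            "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
            (table,),
        ).fetchone()[0]
        dst.execute(schema)

        # Stream rows across in batches to bound memory use.
        cursor = src.execute(f"SELECT * FROM {table}")
        placeholders = ",".join("?" * len(cursor.description))
        while True:
            rows = cursor.fetchmany(batch_size)
            if not rows:
                break
            dst.executemany(
                f"INSERT INTO {table} VALUES ({placeholders})", rows
            )
        dst.commit()

        # Post-migration validation: row counts must agree.
        src_count = src.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        dst_count = dst.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        if src_count != dst_count:
            raise RuntimeError(f"Row count mismatch: {src_count} vs {dst_count}")
        return dst_count
    finally:
        src.close()
        dst.close()
```

Real migrations layer more validation on top (checksums, spot queries, application smoke tests), but the batch-copy-then-verify shape stays the same.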
The holidays are looming, meaning many DevOps teams are about to have their apps take a beating as hundreds of holiday orders and new device users slam them all at the same time. Whether or not your systems are consumer-focused, there will eventually come a time when the overall load on your servers is pushed to the limit.
Load testing applications in the cloud allows development and testing staff to perform scale testing to see at what point virtual machines need to scale, when to add additional resources like storage or bandwidth, and when a failover solution might be necessary.
Thorough load testing throughout the DevOps process ultimately lowers costs and keeps your team from scrambling during a major event. Here are some best practices when performing cloud-based load testing.
As you may have heard, Green House Data has completed our second acquisition of 2017 with the purchase of Ajubeo, a Denver-based cloud hosting provider. While more cloud resources are always beneficial to a nationwide cloud platform, some people might be left scratching their heads — after all, we have our headquarters just 100 miles north of Denver in Cheyenne, Wyoming. So what advantages are to be had from adding a Denver cloud node?
We thought everyone finally had cloud terminology all cleared up. You’ve certainly seen the countless blogs about IaaS, PaaS, and SaaS; not to mention the ever-proliferating surveys and reports on hybrid cloud being the deployment flavor du jour.
But things aren’t as clear as we might want them to be. For example, tell me what you think of when you hear the term “public cloud.”
Is it a hyperscale provider like AWS, Azure, or Google? It is, isn’t it? If not, you probably work with or for an organization similar to Green House Data, which has a public cloud offering with some major differences from the hyperscale players.
So how can we clear up the cloud? Has public become synonymous with hyperscale and self-provisioning? Has private cloud fallen by the wayside? And what should your business focus on, anyway?
Hybrid IT infrastructure seems to be the deployment mode du jour, but some theorize that hybrid is just a stopover on the way to a 100% public cloud environment. With cloud adoption as a whole moving slower than many anticipated, it may be too early to definitively say whether hybrid is here to stay, but in our opinion, hybrid will remain a valuable model for many years to come.
Surveys from McAfee and RightScale both show hybrid cloud and multicloud adoption increasing, with McAfee finding a jump from 19% of organizations using hybrid cloud in 2015 to 57% using hybrid cloud in 2016, and RightScale showing an increase from 58% to 71% over the same period.
But are these increases just because hybrid cloud is the easiest deployment model? Often times a company will add cloud resources alongside their current infrastructure, which is considered a form of hybrid cloud. Or is it because the definition of hybrid cloud itself is shifting?