Your data is your business. Your databases, and the data-driven applications that leverage them, should be regularly audited for vulnerabilities. One of the top risks facing your data today is SQL injection (SQLi). According to the 2018 Verizon Data Breach Investigations Report (DBIR), SQLi was the second most common hacking technique seen in data breaches, exceeded only by the use of stolen credentials.
This attack vector exploits programmatic weaknesses in applications to run unintended commands against your backend SQL databases, allowing attackers to access information or even gain administrative access and credentials.
Any application that uses SQL could be subject to this type of attack, from simple websites to SaaS apps like your CRM and ERP — even VoIP systems. This attack is also not limited to applications exposed to the internet. Internal applications are prime targets for attackers who have breached your external boundary (e.g., through phishing).
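To make the mechanics concrete, here's a minimal Python sketch using sqlite3 and a made-up users table (both are stand-ins for illustration, not any particular application). The first query builds SQL by string concatenation and can be hijacked by crafted input; the parameterized version treats the same input as plain data.

```python
import sqlite3

# Hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret', 1)")

# Attacker-supplied "username" from a login form.
user_input = "' OR '1'='1"

# VULNERABLE: user input is concatenated directly into the SQL string,
# so the WHERE clause always evaluates true and returns every row.
query = "SELECT username, is_admin FROM users WHERE username = '%s'" % user_input
print(conn.execute(query).fetchall())   # returns every row

# SAFER: a parameterized query treats the input as data, not SQL.
safe = "SELECT username, is_admin FROM users WHERE username = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```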
As more and more businesses move their applications and associated data to the cloud, managing all that information becomes more complicated.
IT no longer has complete control over, or insight into, every aspect of the datastore. As multiple cloud providers are brought on and endpoint data is served to and collected from far-flung users and workstations, you're likely to run into compatibility and versioning issues between various databases and storage platforms. The data management problem grows even larger as multicloud, the Internet of Things, and Big Data initiatives rise in popularity and real-world applicability.
Three ways to get all your ever-growing databases and datastores on the same page are data federation, data hubs, and data lakes. What are the differences between each, and what are some pros and cons of their use?
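As a tiny, hedged illustration of the federation idea, the Python sketch below joins records that live in two separate SQLite files without copying either into a central store. The file names and tables are invented for the example; real federation layers (query engines or foreign data wrappers) do this at scale across heterogeneous platforms.

```python
import os
import sqlite3

# Start fresh so the sketch can be re-run.
for path in ("crm.db", "erp.db"):
    if os.path.exists(path):
        os.remove(path)

# Two stand-in datastores: a "CRM" database and an "ERP" database.
crm = sqlite3.connect("crm.db")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
crm.commit()
crm.close()

erp = sqlite3.connect("erp.db")
erp.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
erp.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 250.0), (1, 99.0), (2, 40.0)])
erp.commit()
erp.close()

# "Federated" query: attach both stores and join across them in place,
# rather than consolidating everything into one warehouse first.
fed = sqlite3.connect("crm.db")
fed.execute("ATTACH DATABASE 'erp.db' AS erp")
rows = fed.execute(
    "SELECT c.name, SUM(o.total) FROM customers c "
    "JOIN erp.orders o ON o.customer_id = c.id GROUP BY c.name"
)
for name, total in rows:
    print(name, total)
```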
Another year, another trend in the data center world. Although edge data centers first started making headlines circa 2014 or 2015, they've become mainstream as more and more users slurp down increasing amounts of data. That takes serious bandwidth, to the point that many pundits are pointing to the placement of workloads in edge facilities, rather than the traditional centralized data centers in major markets, as a sign that cloud computing is starting to wane.
On the contrary, edge data centers serve to supplement and improve the reach of even the major cloud computing providers. No major cloud service provider (CSP) is going to only place workloads in major markets. Just look at our neighbors in Cheyenne: Microsoft has a huge facility that they’re actively expanding. Amazon operates data centers in Ohio, which, while central for the US in general and equidistant from major population centers like Chicago and New York, is hardly a major market in itself.
And beyond large scale platforms like Azure or AWS, you have players like Green House Data, who offer smaller scale virtualization from data centers in a myriad of second and third tier markets.
But it's not just about the cloud spreading itself to the edge. Here's why edge computing will be important, but will also become more of a niche deployment model, with cloud remaining the king of application processing and data storage.
Green House Data announced the addition of Azure cloud to our stable of managed cloud services this week. For some, this may come as a bit of a shock. We’ve been a VMware shop since the company was formed, with the gBlock Cloud hosted within our data centers on the vSphere platform.
We’ll continue to offer our own hosted VMware cloud as well as VMware cloud management on behalf of our clients, but we’ve expanded our scope to include Azure managed services. There are a number of reasons for this shift in strategy, which ultimately allows clients a wider breadth of service options to best suit their IT infrastructure goals.
VMware vSphere 6.5 introduced policy-based encryption, which simplifies the security management of VMs across large scale infrastructure, as each object no longer requires individual key management.
vSphere VM encryption offers quite a few advantages compared to other encryption methods, but it might not be a great fit for every workload. When weighing whether to encrypt or not, you’ll want to consider a few limitations, caveats, and performance issues first.
As we’ve mentioned before on the blog, the location of your cloud data matters. Latency, accessibility, and security are all top of mind, but legal concerns should also be considered. Case in point: a new law working its way through the Senate could have major implications for your data storage.
The CLOUD Act (Clarifying Lawful Overseas Use of Data) has recently garnered the support of major tech companies like Apple, Microsoft, and Google, among others. Its stated goal is to clarify a web of different laws relating to data disclosure and privacy so enforcement officers and government officials have well-defined guidelines when it comes to accessing remotely stored data, including information that resides overseas, which is otherwise governed by the host country’s own laws.
So how might the CLOUD Act affect cloud storage and data sovereignty?
Gartner anticipates that 90% of large organizations will have a Chief Data Officer by 2019.
This isn’t too surprising when you consider that the total amount of data is expected to grow exponentially, doubling in size every two years through 2020. That’s roughly 50 times more data in a decade.
Big data holds plenty of insights for businesses large and small, and data-driven initiatives are underway across the globe as organizations seek to quickly understand and analyze mountains of information to glean a competitive advantage.
A Chief Data Officer makes key decisions around the storage, handling, and use of a business’ information, including the type of platforms used, connections to/from production applications, analytics processes, and efficient flow of data.
Let’s dig into what that means in practice and how a CDO can help reduce the significant costs around data storage, platforms, and access, while also improving business functionality and agility.
Application performance can often hinge on how well your storage can serve data to end clients. For this reason you must correctly design or choose your storage tier in terms of both IOPS and throughput, which measure the number of input/output operations per second and the bandwidth of the storage, respectively.
It is vital to plan according to manufacturer and developer recommendations as well as real-world benchmarks to maximize your storage (and subsequently application) performance. Take a look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency.
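As a back-of-the-envelope illustration of how a RAID write penalty inflates the raw IOPS an array must deliver, here's a short Python sketch. The workload figures and the penalty table reflect commonly cited rules of thumb, not a sizing recommendation for any specific platform.

```python
# Rough sizing sketch: how many raw (backend) IOPS an array must deliver
# to satisfy a workload once the RAID write penalty is factored in.
# The numbers below are illustrative only.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}  # commonly cited values

def required_raw_iops(workload_iops: float, read_pct: float, raid_level: int) -> float:
    """Raw IOPS = reads + (writes * RAID write penalty)."""
    reads = workload_iops * read_pct
    writes = workload_iops * (1 - read_pct)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Example: a 5,000 IOPS workload at a 70/30 read/write ratio.
for level in (5, 10):
    print(f"RAID {level}: {required_raw_iops(5000, 0.70, level):,.0f} raw IOPS needed")
# RAID 5: 3,500 + 1,500 * 4 = 9,500; RAID 10: 3,500 + 1,500 * 2 = 6,500
```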
We’ve been cloud-native since the beginning, offering VMware-powered virtual hosting for almost ten years. In fact, our very first EMC backend storage array is now sitting in the lobby of our Cheyenne headquarters.
Of course, we couldn’t stay stagnant with our cloud offerings (you’d notice if that old storage array was still powering your cloud, trust us). The hardware, software, and facilities powering the gBlock cloud have undergone a variety of upgrades over the past decade, and the latest set is big enough for us to dub it officially the gBlock Cloud 2.0.
So what’s new in the Green House Data cloud? Let’s dive into the benefits customers can receive when they migrate to this new and improved platform.
We’ve covered software-defined storage (SDS) in the past on the blog, delving into how it can automate many of your storage administration tasks. Today we’ll get a bit deeper into how SDS improves storage capacity management by maximizing the performance of the storage attached to each virtual machine according to pre-set rules.
In vSphere, storage management involves a combination of performance and service level management and capacity planning. SDS controls in the VMware ecosystem are called Storage Policy Based Management (SPBM), and with them in place, you no longer have to provision virtual machines individually according to their storage requirements.
Here’s how SPBM eliminates the need to overprovision and manually manage storage arrays.
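The real workflow runs through the vSphere Web Client or the storage policy (PBM) APIs, but conceptually SPBM boils down to declaring the capabilities a VM requires and letting the platform find compliant datastores. The Python sketch below illustrates that matching logic only; it is not the VMware API, and the capability names are invented for the example.

```python
from dataclasses import dataclass

# Conceptual sketch of policy-based placement, not the vSphere/PBM API.
# A storage policy declares the capabilities a VM needs; compliant
# datastores are whichever ones advertise at least those capabilities.

@dataclass
class Datastore:
    name: str
    capabilities: dict  # e.g. {"tier": "ssd", "encryption": True}

@dataclass
class StoragePolicy:
    name: str
    rules: dict  # required capability -> required value

    def compliant(self, ds: Datastore) -> bool:
        return all(ds.capabilities.get(k) == v for k, v in self.rules.items())

datastores = [
    Datastore("bronze-nl-sas", {"tier": "nl-sas", "encryption": False}),
    Datastore("gold-ssd",      {"tier": "ssd",    "encryption": True}),
]

gold_policy = StoragePolicy("Gold - Encrypted SSD", {"tier": "ssd", "encryption": True})

# Instead of hand-picking a datastore per VM, placement (and ongoing
# compliance checks) are driven by the policy attached to the VM.
eligible = [ds.name for ds in datastores if gold_policy.compliant(ds)]
print(f"Datastores compliant with '{gold_policy.name}': {eligible}")
```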