Microsoft’s products “SCCM” and “SCOM” sound like confusingly-named twins, but try to get past your first impression of them as a matched set posing for a portrait in identical dresses. It is true that they belong to the same Microsoft System Center family, but each has its own distinctive traits and role.
As more and more businesses move their applications and associated data to the cloud, managing all that information becomes more complicated.
IT no longer has complete control of and insight into every aspect of the data store; instead, as multiple cloud providers are adopted and endpoint data is served to and collected from far-flung users and workstations, you’re likely to run into compatibility and versioning issues among your various databases and storage platforms. The data management problem grows even larger as multicloud, the Internet of Things, and Big Data initiatives rise in popularity and real-world applicability.
Three ways to get all your ever-growing databases and datastores on the same page are data federation, data hubs, and data lakes. What are the differences between each, and what are some pros and cons of their use?
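To make the federation idea concrete, here is a toy sketch (not from the article, with hypothetical store names) in which a single lookup fans out to two independent data stores and merges the results, so the caller never needs to know where each field lives:

```python
# Toy data-federation sketch: two independent "stores" (here, plain
# dicts standing in for a CRM database and a billing database) are
# queried through one virtual interface.
crm = {"alice": {"email": "alice@example.com"}}
billing = {"alice": {"balance": 42}}

def federated_lookup(user):
    """Merge a user's record from every participating store."""
    record = {}
    for store in (crm, billing):
        record.update(store.get(user, {}))
    return record

print(federated_lookup("alice"))
# → {'email': 'alice@example.com', 'balance': 42}
```

A real federation layer would translate one query language across heterogeneous back ends, but the shape of the idea is the same: the data stays where it is, and the federation layer assembles the answer.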
GDPR? Old news. (We’ll just pass over the fact that many organizations have yet to reach compliance…that’s another story.) While hosting providers that market to European companies and individuals must comply with the EU law, US-focused organizations have other legal requirements to consider, namely Privacy Shield and an upcoming compliance mandate in the state of California that resembles GDPR itself.
Privacy Shield is an international data-transfer framework in flux, with EU lawmakers threatening to withdraw entirely if the US does not enforce compliance. The California Consumer Privacy Act (CCPA) goes into effect in 2020.
What do these laws entail? And should your organization be concerned with these data privacy measures?
You need IT infrastructure that you can count on even when you run into the rare network outage, equipment failure, or power issue. When your systems run into trouble, that’s where one or more of the three primary availability strategies will come into play: high availability, fault tolerance, and/or disaster recovery.
While each of these infrastructure design strategies has a role in keeping your critical applications and data up and running, they do not serve the same purpose. Operating a high-availability infrastructure does not mean you can skip a disaster recovery site; assuming otherwise risks disaster indeed.
What’s the difference between HA, FT, and DR anyway? Do you really need DR if you have HA set up?
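As a toy illustration of the high-availability idea, a client can fail over from a primary endpoint to a standby when the primary stops responding. This is a minimal sketch, not from the article, and the endpoint names are hypothetical:

```python
def fetch_with_failover(fetch, endpoints):
    """Try each endpoint in priority order; HA means a standby
    can take over when the primary fails."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except ConnectionError as err:
            last_error = err  # endpoint down: fail over to the next one
    raise last_error  # every endpoint failed

# Hypothetical usage: the standby answers when the primary is down.
def fetch(endpoint):
    if endpoint == "primary.example.com":
        raise ConnectionError("primary is down")
    return "ok"

print(fetch_with_failover(fetch, ["primary.example.com", "standby.example.com"]))
# → ok
```

Note what this does *not* cover: fault tolerance would mask the failure with no interruption at all, and disaster recovery would rebuild service at a separate site after both endpoints were lost.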
Let’s get this out of the way first: two-factor authentication (2FA) is an effective mode of account verification and far, far better than a simple username-and-password (single-factor) authentication method. But it isn’t a magic bullet and can be overcome, especially with clever social engineering (unsurprisingly, the weakest link in security remains people rather than technology). Ultimately, 2FA is only as secure as the method and the technology or product used to implement it.
Here’s how 2FA can be overcome by determined hackers and how you can best maintain account integrity across your organization or personal accounts.
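For context on what a common second factor actually is, the six-digit codes from authenticator apps are time-based one-time passwords (TOTP, RFC 6238): an HMAC over the current 30-second time step, keyed by a shared secret. This sketch, using only the Python standard library, is illustrative rather than taken from the article; note that the secret is exactly what a phished or malware-infected device can leak:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Count the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset given
    # by the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
# → 94287082
```

Because the code is derived entirely from the shared secret and the clock, anyone who captures the secret (or tricks a user into relaying a fresh code within its 30-second window) can satisfy the second factor, which is precisely the attack surface discussed below.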