You need IT infrastructure that you can count on even when you run into the rare network outage, equipment failure, or power issue. When your systems run into trouble, one or more of the three primary availability strategies comes into play: high availability (HA), fault tolerance (FT), and disaster recovery (DR).
While each of these infrastructure design strategies has a role to play in keeping your critical applications and data up and running, they do not serve the same purpose. Running a high availability infrastructure does not mean you can skip a disaster recovery site — and assuming otherwise risks disaster indeed.
What’s the difference between HA, FT, and DR anyway? Do you really need DR if you have HA set up?
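One way to see why HA matters is the arithmetic of uptime. Below is a back-of-the-envelope sketch, assuming component failures are independent (the function name and sample figures are illustrative, not from any specific vendor spec):

```python
def redundant_availability(single: float, copies: int) -> float:
    """Availability of N independent redundant copies: the system
    is down only when every copy is down at the same time."""
    return 1 - (1 - single) ** copies

# A single server at 99% availability is down roughly 3.65 days a year;
# two independent 99% servers together reach about 99.99% ("four nines").
two_nines_pair = redundant_availability(0.99, 2)
```

Note that this math only covers component failure, which is why HA alone is not enough: a site-wide event (fire, flood, regional outage) takes out every redundant copy at once, and that is the scenario DR exists for.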
Let’s get this out of the way first: two-factor authentication (2FA) is an effective mode of account verification and far, far better than simple username-and-password (single-factor) authentication. But it isn’t a magic bullet and can be overcome, especially with clever social engineering (unsurprisingly, the weakest link in security remains people rather than technology). Ultimately, 2FA is only as secure as the method, technology, or product used to implement it.
Here’s how 2FA can be overcome by determined hackers and how you can best maintain account integrity across your organization or personal accounts.
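For context on what that "method and technology" usually is: most authenticator apps generate codes with the TOTP algorithm (RFC 6238). The sketch below, using only the Python standard library, shows the idea; the helper name is ours, and production systems should rely on a vetted library rather than a hand-rolled version:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 TOTP code from a shared secret."""
    counter = int((now if now is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The code changes every 30 seconds, so a stolen code goes stale quickly — but note that nothing here stops a phishing page from relaying a fresh code in real time, which is exactly the kind of attack covered below.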
Here we are again, talking about digital transformation. While the pile of buzzwords threatens to overwhelm at times, this particular movement has real benefits for organizations that are still running IT in the old style, with break-fix scrambling, disjointed service delivery, and a take-it-or-leave-it approach to technology procurement.
Rather than focusing simply on the end goal from an IT perspective, your IT department should focus on the bigger picture. Your users are in effect your customers — and your company’s customers are supported by those users. By bringing business goals and processes under the IT umbrella, you foster communication, improve efficiency and IT services, and, most importantly, drive revenue growth across the organization.
Here are three areas to focus on when transforming your IT department into a service center.
If you have a pool of users who need access to Windows desktops, you can deliver those desktops and associated applications remotely, saving money on administration and end-user hardware alike, while gaining control over security and access.
Two methods to achieve this are Virtual Desktop Infrastructure (VDI) and Remote Desktop Services (RDS). In either case, the user connects to a server or virtual machine hosted within a data center or with a cloud provider. That remote server or VM contains the desktop environment, and all data and applications are stored and processed remotely.
But is VDI or RDS the right choice for your situation? Let’s take a look at the differences between the two and some use cases for each.
You’re ready to start deploying and migrating applications onto Microsoft’s Azure cloud platform — but there are four deployment models to contend with. Which should you choose? Each has strengths and weaknesses depending on the service you are setting up. Some require more attention than others but offer additional control. Others bundle services like load balancing or the operating system itself, functioning more as a Platform as a Service.
Learn the differences between Azure Service Fabric, Azure Virtual Machines, Azure Containers, and Azure App Services, and when you might want to choose one over another. Green House Data is also ready to help you decide which of your business applications belong in which bucket — and we can help you administer them, too.