Encryption over HTTP, better known as HTTPS (HTTP carried over TLS), is the reason you see a little lock icon next to your web URL. As you likely know, a website using HTTPS encrypts its network traffic. In other words, outside parties or malicious software should not be able to intercept your communications to and from that website, because they are encrypted. Any time you perform a transaction over the internet that involves financial or personal information, you should be certain the web server is using HTTPS.
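That lock icon implies two guarantees: the server presented a certificate signed by a trusted authority, and the certificate matches the hostname you typed. As a rough sketch (not part of the original post), Python's standard ssl module shows the same validation settings a browser relies on:

```python
import ssl

# create_default_context() turns on the checks behind the lock icon:
# the peer must present a certificate chaining to a trusted CA, and
# that certificate must match the hostname being connected to.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname matching is on
```

A client built with this context (for example via `ctx.wrap_socket(...)`) will refuse to complete the handshake if either check fails, which is exactly when a browser would show a warning instead of the lock.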
However, even as TLS (Transport Layer Security, which, despite the name, actually runs on top of transport protocols like TCP rather than at the transport layer of the seven-layer OSI model) has spread to over half of the internet, clever cybercriminals have begun wrapping their malware's network traffic in TLS to disguise it.
HTTPS is increasingly being used as a vehicle for malware to spread across the ‘net. While your information may be secure while it is transmitted, the website you’re visiting could still unwittingly slip malware onto your computer, or host it on its own servers, harvesting your information or installing a virus.
Here’s how TLS / SSL is being used by malicious actors across the net.
At Green House Data we like to say there’s no “one size fits all” cloud deployment. That’s why we don’t have base package pricing on the website — every VM is right-sized and designed around our clients’ applications and business goals. That philosophy applies to every cloud deployment, and the network considerations are no exception.
Depending on your objectives, the intended use of the application in question, and the location of your users and service providers, your network will have different performance and cost implications.
Let’s take a look at how to prepare your network for varying application deployments in the cloud.
The Green House Data blog has hit a major milestone this month, rocketing from around 8,000 monthly unique visitors to 12,000 unique visitors in March. As we pass the 10k mark, we want to say thanks to everyone who has come to our little corner of the internet and also take a look back at our most enduring and popular posts over the years.
From cloud hosting to data center design to information security, the blog has covered a lot of ground in the past five or six years, with experts from our staff joining our marketing and content teams for weekly updates.
Here are the top 10 all time posts from the Green House Data blog.
With proliferating security tools, in addition to more systems and users taking advantage of cloud resources, IT perimeter security feels more difficult to enforce with each passing day.
Use this checklist to quickly cover your IT perimeter and network security protocols and make sure nothing is slipping through the cracks.
When planning a cloud migration, don’t forget to plan ahead for IP address changes that could affect your workloads and the way they interact with internal and external network traffic.
Cloud providers and data centers own a limited pool of IP addresses, and they often re-use previously assigned IPs to maximize their use. You can’t simply move your existing IP addresses along with your services. Rather, you’ll receive dynamically assigned internal and external IP addresses.
To complicate matters, you could lose those dynamically assigned IPs if you stop your cloud instance (but usually only if you stop and deallocate the VM resources — most providers will keep your IP assigned to you if your machine is paused/stopped but still reserved within the overall resource pool). Luckily, there are a few ways to keep IPs relatively static in the cloud.
Here you are. You have all documentation ready and in-hand to present to your manager. The year is 2004, and you are pitching a new idea for updating the company’s existing network core infrastructure. The terms “cost-savings,” “scalable,” and “extensible” are prevalent within the proposal documents. You are unsure if the terms “open-source” and “networking” were ever mentioned in the same sentence before. However, the solution is simple: build a network that is non-blocking, resilient, predictable, and manageable.
Additionally, you want to bring over the operational efficiencies already seen within your SysAdmin department. Who doesn’t love streamlined provisioning, broad visibility into resource utilization, and scalable deployment of services? What if we could provide the same network technologies across a variety of hardware platforms?
It turns out we can, using open-source networking to decouple hardware vendors from software. While this is only one option available in the market and may not work for every enterprise organization, the disaggregation of network hardware and software has some definite advantages.
Your business probably has faster internet than your home. If you’re with an enterprise, you almost certainly have some quality broadband. Plugging into the cloud can be a relatively painless process, albeit one that requires careful planning, but without considering your network design and connection speeds, even a simple cloud migration can become time-consuming, expensive, and difficult to manage.
According to a recent study by Emerson, cybercrime is the fastest growing cause of data center outages. To stay ahead of increasingly sophisticated attacks, infrastructure managers must combine software and hardware tools to constantly monitor, recognize, block, and remediate. Keeping an eye on network traffic is essential to accomplish this, and one developing method of network security control uses microsegmentation to do so.
Network microsegmentation is enabled by software-defined data center technology like VMware NSX. It gives network administrators new abilities to shape network traffic based on global policy, increasing security by crafting security policies around specific network segments or virtual machines.
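The core idea behind those policies is default-deny between segments: traffic only flows where a rule explicitly permits it. The toy Python sketch below illustrates that model with made-up segment names and rules; it is purely illustrative and is not the NSX API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_segment: str   # hypothetical segment names, e.g. "web-tier"
    dst_segment: str
    port: int

# Hypothetical policy: web tier may reach the app tier, and only the
# app tier may reach the database. Nothing else is listed.
POLICY = [
    Rule("web-tier", "app-tier", 8080),
    Rule("app-tier", "db-tier", 5432),
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule matches."""
    return any((r.src_segment, r.dst_segment, r.port) == (src, dst, port)
               for r in POLICY)

print(is_allowed("app-tier", "db-tier", 5432))  # True: explicitly allowed
print(is_allowed("web-tier", "db-tier", 5432))  # False: no matching rule
```

A compromised web server in this model cannot talk to the database directly, which is the lateral-movement containment that makes microsegmentation attractive.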
Senior Systems Engineer Jim Taylor frequently shares “IT Tidbits” with the Green House Data technical staff, both in person and via e-mail dist-lists. This new blog series brings you a closer look at his latest tips.
From time to time, our Global Service Center staff and customers alike must troubleshoot Domain Name System (DNS) errors on their servers. DNS is the system that translates a human-readable domain name into the IP address of a server on the public internet. Your ISP operates a recursive DNS server that looks up DNS records against authoritative name servers, a hierarchy ultimately anchored by the 13 root server identities maintained by independent organizations around the globe.
DNS errors can stem from many sources, including misconfigured DNS settings. The first step for many network issues is a DNS lookup, both to gather more information and to determine whether DNS is actually at fault. Two command-line tools for this groundwork are nslookup and whois.
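As a minimal sketch of the first step nslookup performs, the Python snippet below asks the system resolver for a hostname's IPv4 addresses, assuming a normally configured resolver (the function name and error handling are our own, not part of any tool):

```python
import socket

def resolve(hostname: str):
    """Return the IPv4 addresses the system resolver reports for
    hostname, or an error message if the lookup fails. This mirrors
    the basic query nslookup issues from the command line."""
    try:
        # getaddrinfo consults the system resolver, which in turn
        # queries the configured DNS servers (or the hosts file).
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror as exc:
        return f"DNS lookup failed: {exc}"

# "localhost" typically resolves via the local hosts file,
# so this works even without network access.
print(resolve("localhost"))
```

If a name resolves here but the application still fails, the problem likely lies beyond DNS; if it fails here, nslookup against a specific DNS server (e.g. `nslookup example.com 8.8.8.8`) helps narrow down whether the default resolver or the record itself is the issue.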
Can you believe we’re already over a quarter of the way through 2016? Feels like we were just posting our 2015 blog wrap up yesterday. But here we are—the data center world keeps spinning. In case you missed something in the past three and a half months, we’ve collected our top blog posts and some of the most popular data center news headlines from around the blogosphere in today’s post.