With security tools proliferating and more systems and users taking advantage of cloud resources, IT perimeter security feels more difficult to enforce with each passing day.
Use this checklist to quickly review your IT perimeter and network security protocols and make sure nothing is slipping through the cracks.
When planning a cloud migration, don’t forget to account for IP address changes that could affect your workloads and the way they interact with internal and external network traffic.
Cloud providers and data centers own a limited pool of IP addresses, and they often reuse previously assigned IPs to get the most out of that pool. You can’t simply move your existing IP addresses along with your services; instead, you’ll receive dynamically assigned internal and external IP addresses.
To complicate matters, you could lose those dynamically assigned IPs if you stop your cloud instance. In practice this usually happens only when you stop and deallocate the VM resources; most providers will keep your IP assigned to you if your machine is paused or stopped but still reserved within the overall resource pool. Luckily, there are a few ways to keep IPs relatively static in the cloud.
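Before the move, it also helps to inventory every place an address is hard-coded, since those references will break when new IPs are assigned. Here is a minimal Python sketch of that audit; the /etc/myapp path and the *.conf extension are placeholders for the example, not part of any provider’s tooling.

```python
# Illustrative sketch: scan config files for hard-coded IPv4 addresses
# before a migration. Paths and file extensions are assumptions.
import ipaddress
import re
from pathlib import Path

IPV4_CANDIDATE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_hardcoded_ips(root):
    """Yield (file, ip) pairs for anything that parses as an IPv4 address."""
    for path in Path(root).rglob("*.conf"):  # hypothetical config extension
        for token in IPV4_CANDIDATE.findall(path.read_text(errors="ignore")):
            try:
                ip = ipaddress.ip_address(token)
            except ValueError:
                continue  # "999.1.1.1" matches the regex but isn't a valid IP
            yield path, ip

for path, ip in find_hardcoded_ips("/etc/myapp"):  # hypothetical directory
    scope = "internal" if ip.is_private else "external"
    print(f"{path}: {ip} ({scope}) will likely change after migration")
```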
Here you are. You have all documentation ready and in-hand to present to your manager. The year is 2004, and you are pitching a new idea for updating the company’s existing network core infrastructure. The terms “cost-savings,” “scalable,” and “extensible” are prevalent within the proposal documents. You are unsure if the terms “open-source” and “networking” were ever mentioned in the same sentence before. However, the solution is simple: build a network that is non-blocking, resilient, predictable, and manageable.
You also want to carry over the operational efficiencies your SysAdmin department already enjoys. Who doesn’t love streamlined provisioning, broad visibility into resource utilization, and scalable deployment of services? What if you could provide the same network technologies across a variety of hardware platforms?
It turns out you can, using open-source networking to decouple the software from the hardware vendor. While this is only one option available in the market and may not work for every enterprise organization, the disaggregation of network hardware and software has some definite advantages.
Your business probably has faster internet than your home. If you’re with an enterprise, you almost certainly have some quality broadband. Plugging into the cloud can be relatively painless with careful planning, but if you don’t consider your network design and connection speeds, even a simple cloud migration can become time-consuming, expensive, and difficult to manage.
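To get a feel for the stakes, here is a back-of-the-envelope transfer-time calculation in Python. The 20 TB dataset, the link speeds, and the 70% efficiency factor are illustrative assumptions, not benchmarks.

```python
# Rough transfer time for a migration dataset over various link speeds.
def transfer_hours(size_tb, link_mbps, efficiency=0.7):
    """Hours to move size_tb terabytes over a link_mbps connection.

    efficiency discounts protocol overhead and contention (assumed 70%).
    """
    bits = size_tb * 8 * 10**12                       # TB -> bits
    seconds = bits / (link_mbps * 10**6 * efficiency)  # bits / effective bps
    return seconds / 3600

for mbps in (100, 1000, 10000):
    print(f"{mbps:>6} Mbps: {transfer_hours(20, mbps):7.1f} hours for 20 TB")
```

At 100 Mbps the hypothetical 20 TB takes weeks; at 10 Gbps it takes an afternoon, which is exactly why connection speeds belong in the migration plan.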
According to a recent study by Emerson, cybercrime is the fastest growing cause of data center outages. To stay ahead of increasingly sophisticated attacks, infrastructure managers must combine software and hardware tools to constantly monitor, recognize, block, and remediate. Keeping an eye on network traffic is essential to accomplish this, and one emerging method of network security control is microsegmentation.
Network microsegmentation is enabled by software-defined data center technology like VMware NSX. It lets network administrators shape traffic according to global policy, tightening security by crafting rules around specific network segments or even individual virtual machines.
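The core idea is default-deny enforcement keyed to workloads rather than network location. The Python sketch below is a conceptual toy and does not reflect the NSX API; the tags, ports, and rules are invented for illustration.

```python
# Toy model of microsegmentation: a default-deny policy evaluated per
# flow, keyed by workload tags rather than IP subnets.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_tag: str  # tag on the sending workload, e.g. "web"
    dst_tag: str  # tag on the receiving workload, e.g. "db"
    port: int     # destination port the rule permits

POLICY = [
    Rule("web", "app", 8443),  # web tier may reach the app tier over TLS
    Rule("app", "db", 5432),   # app tier may reach the database
]

def allowed(src_tag, dst_tag, port):
    """Default deny: traffic passes only if an explicit rule matches."""
    return any(r.src_tag == src_tag and r.dst_tag == dst_tag and r.port == port
               for r in POLICY)

print(allowed("web", "app", 8443))  # True: explicitly permitted
print(allowed("web", "db", 5432))   # False: no direct web -> db rule exists
```

The point of the model: two VMs on the same network segment still cannot talk unless a policy says so, which is what distinguishes microsegmentation from traditional perimeter firewalls.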
Senior Systems Engineer Jim Taylor frequently shares “IT Tidbits” with the Green House Data technical staff, both in person and via email distribution lists. This new blog series brings you a closer look at his latest tips.
From time to time, our Global Service Center staff and customers alike must troubleshoot Domain Name System (DNS) errors on their servers. Every server on the public internet has an IP address, and DNS maps human-readable domain names to those addresses. Your ISP runs DNS servers that resolve those lookups, ultimately deferring to the root zone, which is served by 13 root server authorities operated by independent organizations around the globe.
DNS errors can stem from many sources, including misconfigured DNS settings. A DNS lookup is often the first troubleshooting step for a network issue: it gathers more information and shows whether DNS itself is the culprit. Two tools for this groundwork are nslookup and whois.
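If you would rather script the same groundwork, Python’s standard library can perform nslookup-style forward and reverse lookups. A minimal sketch, with example.com standing in for the server you are troubleshooting:

```python
# nslookup-style groundwork using only the standard library.
import socket

host = "example.com"  # placeholder: swap in the server under investigation

# Forward lookup: what addresses does this name resolve to right now?
for family, _, _, _, sockaddr in socket.getaddrinfo(host, None):
    print(family.name, sockaddr[0])

# Reverse lookup: does the IP map back to a sensible name (PTR record)?
addr = socket.gethostbyname(host)
try:
    print(socket.gethostbyaddr(addr)[0])
except socket.herror:
    print(f"no PTR record for {addr}")
```

If the forward lookup fails here but nslookup against a different DNS server succeeds, the problem is likely the resolver configuration rather than the remote host.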
Can you believe we’re already over a quarter of the way through 2016? It feels like we were just posting our 2015 blog wrap-up yesterday. But here we are; the data center world keeps spinning. In case you missed something in the past three and a half months, we’ve collected our top blog posts and some of the most popular data center news headlines from around the blogosphere in today’s post.
The writing was on the wall as far back as the ’80s: IPv4, the fourth version of the Internet Protocol and the standards-based routing method for the vast majority of internet traffic, was going to run out of addresses. Finally, last year, the American Registry for Internet Numbers (ARIN) exhausted its supply of IPv4 addresses. Although the global IANA pool officially ran dry back in 2011, network design and routing tricks prolonged the regional supply, as did the trading of IP addresses on the open market.
Read on to learn how the switch to the relatively new IPv6 affects data centers. But first, a quick primer on IP addresses in general.
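To see the scale of the fix, compare the two address spaces. A quick sketch using Python’s ipaddress module (2001:db8::/64 is the standard documentation prefix, used here as a placeholder):

```python
# The scale difference in one calculation: IPv4 is a 32-bit address
# space, IPv6 a 128-bit one.
import ipaddress

ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
ipv6_space = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(f"IPv4: {ipv4_space:,} addresses")    # ~4.3 billion
print(f"IPv6: {ipv6_space:.3e} addresses")  # ~3.4 x 10**38

# A single /64, the smallest subnet typically handed to one LAN,
# already dwarfs the entire IPv4 internet:
print(ipaddress.ip_network("2001:db8::/64").num_addresses > ipv4_space)  # True
```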
Demand remains strong in top data center markets across the world. You know the usual suspects: New York City, London, Chicago, Silicon Valley, Dallas. But unexpected locations are becoming more and more desirable for data center facilities, with demand growing not only in tier-2 markets like the Pacific Northwest but also in edge locations closer to the end user.
An “edge” location used to be limited to so-called tier-1 cities: those mentioned above, plus Los Angeles and other major metropolitan centers. Now it has expanded to tier-2 cities like Denver, Minneapolis, and yes, even Cheyenne, WY.
Why are data centers growing outside of major markets?
Carrier hotels are generally large buildings, often in population centers, built to serve as secure sites for data communications interconnection. Because they concentrate so much connectivity, they often function as large-scale colocation sites as well. With many providers converging in a single facility, infrastructure resources are shared, lowering overhead and giving tenants access to a wide range of services and connections.
Green House Data’s Seattle data center in the Westin Building Exchange features 7,000 square feet of white space across three floors of the massive building. But what's so great about being in a carrier hotel, anyway?