Kubernetes has been kicking around since Google made it open source in 2014. Like many technologies it took some time to go mainstream, but with the rapid adoption of containers by many enterprise organizations, Kubernetes (or k8s) is now extremely popular as a method to manage, scale, and deploy containers across host platforms.
If you aren’t very familiar with Kubernetes, here’s why you might be interested in the platform and why it has proven essential to large scale containerized IT applications.
In a nutshell, k8s is used to manage containerized applications — you can read more about containers and how they are different from virtual machines here — by grouping them into pods and clusters. It takes the general idea of containers (abstracting the VM-level components like operating system, storage, and networking) further, allowing management of containers wherever they are hosted from a single pane of glass.
Kubernetes can manage container clusters both on-premises and in the cloud, whether the containers run inside virtual machines or on physical hardware. It works with Docker and other mainstream container platforms and tools.
Originally developed at Google and open sourced in 2014, k8s is now maintained by the Cloud Native Computing Foundation, a group founded by Google, CoreOS, Mesosphere, Red Hat, Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware under the non-profit Linux Foundation. This keeps k8s open and free for all, and ensures that many other popular (and often for-profit) technologies are considered in its design and interoperability features.
Basic terms to know when using Kubernetes are nodes, pods, and deployments.
A node is the server on which you run your containers. It can be a cloud VM or a physical box. You'll still need a physical or virtual data center in which to run k8s, though k8s can handle the definition of virtual networks and virtual storage.
A pod is a group of containers that are scheduled and managed together; the containers in a pod share a single IP address and may (or may not) share storage. Pods run on nodes. A single node can host many pods, but each pod always runs on a single node.
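To make that concrete, here is a minimal pod manifest (the names `web-pod` and `shared-data` are hypothetical, chosen for illustration): two containers scheduled together onto one node, sharing the pod's IP and an ephemeral volume.

```yaml
# A minimal sketch of a Pod: two containers that share one network
# namespace (a single IP) and an emptyDir scratch volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}         # ephemeral storage shared by both containers
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers live in the same pod, the sidecar writes a file that nginx immediately serves, with no networking between them beyond localhost.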
A deployment describes a set of identical pods that together make up (all or part of) your application. K8s maintains the correct number of pod replicas to keep the app available, replacing failed pods and powering down those that aren't needed.
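A deployment is also just a manifest. The sketch below (with the hypothetical name `web-deployment`) asks k8s to keep three identical pod replicas running; if a pod or its node fails, the scheduler replaces it on a healthy node.

```yaml
# A minimal sketch of a Deployment: k8s continuously reconciles the
# running pod count against the desired replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment     # hypothetical name
spec:
  replicas: 3              # desired number of pods; raise or lower to scale
  selector:
    matchLabels:
      app: web
  template:                # pod template the deployment stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml`, then editing `replicas` (or running `kubectl scale deployment web-deployment --replicas=5`), demonstrates the availability management described above.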
Kubernetes can run on practically any platform, from AWS to Azure to VMware private clouds, even on bare metal servers. You can run it on Linux hypervisors like KVM. Really, whatever flavor of enterprise computing you have running, k8s can probably be installed on it.
The main advantage of using a platform like k8s for container management vs. native tools is application portability. You only need a single platform to scale and manage your containers across a multi-cloud and hybrid environment, accessing them on-premises or in a variety of cloud hosts. In fact, some providers have even deprecated their older container services in favor of Kubernetes: Microsoft, for example, replaced Azure Container Service with Azure Kubernetes Service.
While k8s is powerful, it comes with its own complexities and costs; be especially careful when configuring VM memory and network settings. Managed k8s services, like Azure Kubernetes Service, make things somewhat easier, though they may require you to adapt your application. In exchange, they remove the burden of configuring master nodes, identity management, storage, and optimized operating systems. It's up to you and your department to weigh these options against running k8s in-house.