Containers are on the rise, with VMware integrating them into the vSphere platform. What started seemingly as a competitor to virtual machines has proved to be just another tool in the administrator's virtualization toolbox, with uses well beyond software testing and development, as enterprises and mid-market companies alike begin to implement containers alongside (and inside) their VMs.
Once you read a bit about the benefits of containerization, you may be curious about trying some out in your environment. But before you start spinning up containers left and right, make sure you’re using the right tool for the job. Containers certainly have their advantages, but there are many applications where a virtual machine will be more effective. Here’s how to decide.
Containers are virtualized servers that share the host's kernel while bundling a complete file system and application, making them very portable and fast to start and stop. Their primary advantages are that they can be moved without accounting for hardware or configuration changes between systems, as well as their "layering," in which the top, writable layer of the container holds its current state while the layers beneath are reusable, rebuildable image layers from previous versions.
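The layering model is easiest to see in a Dockerfile, where each instruction produces a read-only image layer that can be cached and reused; the running container adds one thin writable layer on top. A minimal sketch (the base image, package, and file names here are illustrative):

```dockerfile
# Each instruction below produces one cached, read-only image layer.
FROM ubuntu:22.04                                 # base image layers
RUN apt-get update && apt-get install -y nginx    # new layer: nginx installed
COPY index.html /var/www/html/                    # new layer: application content
CMD ["nginx", "-g", "daemon off;"]                # metadata only; no filesystem layer
```

Because the lower layers are immutable, rebuilding after a change to `index.html` only regenerates the layers from the `COPY` step down, which is what makes container builds and rollbacks fast.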
Containers use fewer resources than VMs, so you can fit more of them onto a single server.
You can actually run containers within your VMs – and you can read more about how to manage your containerized VMs using vSphere if you want to do so. This lets you use your existing vSphere management tools with your containerized applications, including vital security features. Keep in mind this will have a slight performance impact.
Don’t give in to the impulse to compare containers and VMs directly. They each thrive in different use cases.
Containers boot and shut down much faster than virtual machines – the difference can be in the tens of seconds. That speed is especially well-suited to development and testing environments: if you're going to be spinning up and powering down machines and clones of machines regularly, a Docker container is a good way to go.
Containers are geared towards Linux. If you want to virtualize another operating system, virtual machines are a better choice.
Docker on its own lacks many automation and security features. A fully fledged VM management platform like vSphere includes a variety of automation features and built-in security from the kernel level to the network switches.
By default, a container also exposes more attack vectors than a virtual machine. You’ll need to take immediate action to secure a newly deployed container, including dropping privileges and running apps and services as a non-root user.
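As a sketch of that hardening, the non-root piece can be baked into the Dockerfile itself (the base image, account name, and binary path here are illustrative):

```dockerfile
FROM alpine:3.19
# Create an unprivileged account instead of running as root.
RUN addgroup -S app && adduser -S -G app app
# Give the unprivileged account ownership of only its own files.
COPY --chown=app:app ./server /usr/local/bin/server
# Every subsequent instruction, and the container itself, runs as this user.
USER app
ENTRYPOINT ["/usr/local/bin/server"]
```

At run time you can drop privileges further with flags such as `docker run --cap-drop=ALL --read-only`, then grant back only the specific capabilities the application actually needs.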
Containers are more containerized. Okay, that sounds kind of stupid. And the entire point of containers is to run a single application, so the more granular they are, the better, right?
It’s true that many container advantages come from having discrete pieces of your IT infrastructure that can be moved around. But without tying those pieces to virtual hardware, you also have more things to manage overall: your containers (which can quickly begin to sprawl), the physical and virtual hardware underneath, the operating systems, and the applications themselves. With a VM you can push batch updates to all linked versions of a VM; or you can update the OS underlying several applications running on a single VM; or you can manage the physical hardware running dozens of VMs.
For the time being, most organizations embracing the container movement will likely run a combined environment of both VMs and containers. What that combination looks like depends on each individual deployment. Weigh your applications, your future plans, and your cloud providers and platforms to figure out whether you want to run containers inside VMs, which apps are best suited to a container vs. a VM, and how you can maximize your compute resources while maintaining security and avoiding sprawl.