Containers are here to stay, but instead of being the virtual machine killer some touted them to be, they’re turning out to work in concert with legacy virtualization technologies. Seeing the writing on the wall, last fall VMware introduced Photon OS, a lightweight Linux distribution designed to run container runtimes like Docker on VMware platforms.
Now vSphere Integrated Containers (VIC) can be used in your existing vSphere environment, combining the development advantages of containerization with the rapid provisioning, automation features, and management tools your administrators are already accustomed to.
Here are some key features for managing and securing your vSphere Integrated Containers.
Containers, most popularly provided by Docker, are a different take on virtualization in which every container shares the host kernel. Each container bundles its own filesystem and application. With traditional virtualization, each VM runs its own kernel on top of a shared hypervisor, which sits directly on the host hardware.
Containers can be easily moved around and are great for software testing and development, since discrepancies in hardware or configuration don’t have to be taken into account. Another nice feature for development is that containers have layers: the top layer holds the current writable state, while the layers beneath are read-only images of previous builds that can be reused or rebuilt.
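The layering model is easy to see with an ordinary Docker workflow. A minimal sketch, assuming a local Docker daemon and a Dockerfile in the current directory (the image and container names are illustrative):

```shell
# Build an image: each Dockerfile instruction produces a read-only layer.
docker build -t myapp:dev .

# List the image's layers, newest first.
docker history myapp:dev

# Starting a container adds a thin writable layer on top of those images.
docker run -d --name dev1 myapp:dev
```

Because only the top layer is writable, throwing a container away and starting fresh from the same image is cheap, which is much of what makes containers attractive for iterative development.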
The Virtual Container Host (VCH) is how administrators manage their containers within vSphere. Essentially, a container runs within a virtual machine, treating that VM as the host hardware for its kernel and containerized application/filesystem. The VCH includes port mapping for client connections as well as an exposed Docker API endpoint for integration.
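Because the VCH exposes a standard Docker API endpoint, a stock Docker client can target it directly. A sketch, assuming a VCH reachable at vch01.example.com (the hostname is a placeholder, and the port and TLS options depend on how the VCH was deployed):

```shell
# Point the standard Docker client at the VCH's exposed API endpoint.
docker -H vch01.example.com:2376 --tls info

# Containers started this way are provisioned as VMs by the VCH.
docker -H vch01.example.com:2376 --tls run -d nginx
```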
vSphere resources are managed in much the same manner as you normally would, with the resource pool divided up into VMs as needed for each container. Multiple VCHs can be set up within a single vSphere environment.
The vSphere Web Client can be set up with a new wizard that installs the VCH plugin. Alternatively, the command line can be used, leveraging the create command of the vic-machine utility. This utility can create a VCH within a vCenter Server cluster, a vCenter Server with standalone ESXi hosts, or on standalone ESXi hosts.
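The command-line route might look something like the following sketch. The target address, user, and resource names are placeholders, and the exact flag set varies by vic-machine version, so check the help output for your release:

```shell
# Create a VCH against a vCenter Server cluster (values are illustrative).
vic-machine-linux create \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --name vch01 \
  --compute-resource cluster1 \
  --image-store datastore1 \
  --no-tlsverify
```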
Make sure your environment meets the prerequisites first, though.
The advantage of deploying containers on virtualized hardware and managing them via vSphere is primarily a simplified multi-host deployment process, with compute, storage, and network resources all managed in a single portal. The placement of each container and allocation of resources is handled by vSphere. VIC is integrated into vSphere to the point where stopping or deleting a container also powers down or deletes its host VM.
Additional software-defined tools like NSX allow automated configuration of networking and storage tiers according to set policies, so when you spin up a new container VM, it can be ready to go in an instant. NSX also enables security features by automating security policy enforcement.
VMware has introduced the concept of “just enough VM” to explain how containers are deployed in a VCH. Generally, each container executes in its own virtual machine, but with vSphere 6, Instant Clone can be used to create forked VMs with thin copies, each holding a lightweight Linux kernel – or “just enough VM” for a container to run.
Hosting each container within its own VM does, however, keep the environment more secure, by isolating containers from one another and leveraging the built-in security features of vSphere. Without that isolation, a compromised container (which often has many exposed ports and attack vectors, as it is being used for active development) could lead to a cascade attack across the other containers sharing the same host kernel.
VIC makes a lot of sense for organizations that are already embedded in a VMware ecosystem, as it allows the use of familiar vSphere management features while still enjoying the agile capabilities of containers.
They aren’t necessarily perfect, however. While VMware downplays the performance impact, running within a hypervisor—even one that is “just enough VM”—still incurs more resource overhead than deploying on raw hardware with a completely shared kernel and Docker library.