Like VMs, containers are loosely coupled, but they carry no performance overhead or unpredictability and are extremely portable. I have discussed earlier why today's data centers need an effective consolidation strategy. In the past, IT teams have tried the too-simplistic approach of consolidating applications on bare-metal servers and, later, on virtual machines, but neither approach has worked perfectly.
Consolidation on Bare Metal
When IT operations teams run multiple applications on the same server without proper resource caging at the CPU, memory, or network layer, the applications suffer from unpredictable performance. From an operational point of view, managing multiple applications on the same host operating system can also be very difficult:
- When do I apply a patch or take a planned outage?
- How do I cater to changing workload or response-time requirements?
Consolidation on Virtual Machines
To resolve some of these bare-metal issues, VMs were pitched as the go-to solution for consolidation. But running virtual machines on a host carries a high performance overhead, and creating a VM for every application leads to both software and OS sprawl. In today's environments, provisioning distributed applications on VMs has cluttered the IT landscape with VM sprawl.
So, on one hand, we have bare-metal infrastructure: a tightly coupled, non-portable consolidation platform with zero performance overhead. On the other hand, we have VMs: a loosely coupled, partially portable solution that carries performance overhead and unpredictable performance characteristics.
Containers provide the best of both worlds. Like VMs, containers are loosely coupled, but they introduce no performance overhead or unpredictability and are extremely portable.
What are Containers?
Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single host (the LXC host). Technically, a container is a set of processes isolated from the rest of the machine. Containers use namespaces to get a private view of the system and cgroups to get reserved resources.
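Both mechanisms are visible directly on a stock Linux host, with no container runtime required. A quick sketch using the standard `/proc` interfaces (the exact inode numbers and cgroup paths will differ from system to system):

```shell
# Every process belongs to a set of namespaces, each exposed as a
# symlink under /proc/<pid>/ns. Processes in the same namespace share
# the same inode number; a containerized process gets its own.
ls /proc/self/ns

# readlink shows the namespace type and its inode, e.g. pid:[4026531836]
readlink /proc/self/ns/pid
readlink /proc/self/ns/net

# cgroup membership determines which resource limits (CPU shares,
# memory caps, etc.) apply to this process.
cat /proc/self/cgroup
```

Comparing `readlink /proc/1/ns/pid` (usually requires root) with a container process's entry is a simple way to confirm that the two really live in different namespaces.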
Containers look like VMs:
- You can ssh into a container
- You can have your own root access
- You can install packages in it
- You can have your own network interfaces, etc.
But there is another container technology, Docker, that has created a lot of buzz in the industry lately.
Quoting a Red Hat and Cisco collaborative white paper on Linux containers, written for IT leaders and industry analysts:
“…Docker is poised to radically change the way applications are built, shipped, deployed, and instantiated. They accelerate application delivery by making it easy to package applications along with their dependencies. As a result, the same containerized application can operate in different development, test, and production environments.”
Docker allows you to package an application with all of its dependencies into a standardized unit that contains everything it needs to run: code, runtime, system tools, and system libraries. This guarantees that it will always run the same, regardless of the environment it is running in.
Docker is designed to run a single application per container and is ephemeral in nature, with persistent data stored outside the container. Docker is great for stateless apps.
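As a concrete sketch of that "standardized unit," here is a minimal, hypothetical Dockerfile; the application name, file names, and base image are illustrative assumptions, not from the original post:

```dockerfile
# Hypothetical example: bake a small Python app and its
# dependencies into a single image.
FROM python:3.12-slim

WORKDIR /app

# Dependencies ship inside the image, not on the host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

# One container, one application process.
CMD ["python", "app.py"]
```

An image built once with `docker build -t myapp .` runs identically in development, test, and production via `docker run myapp`. And because the container itself is disposable, any persistent data would be mounted in from outside, for example `docker run -v /data/myapp:/var/lib/myapp myapp`.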
But what about data applications, the stateful ones? Can we use containers to run them?
In the next blog post, we will examine the need for a container-aware storage layer and how we can run stateful applications in containers.