“You've probably heard that containers are a new technology set to disrupt the hypervisor market and eliminate the need for hypervisors, or even that containers have been around forever and nothing has really changed. Or perhaps you've wondered about the future of VMs – are they the new legacy technology? And if so, what are containers and what are they good for?” stated Kelly.
“The reality is that both of the above statements are incorrect. Containers and virtual machines are really complementary technologies that depend on each other to drive the next wave of data centre innovation. Containers take the efficiency and agility of virtual machines to a whole new level.”
Virtualisation made IT operations more efficient, reliable and agile. Prior to virtual machines (VMs), a single application typically ran on a single physical server sized for peak workloads, which occur only 10-20% of the time. For the remaining 80-90% of the time, those resources sat idle.
“Instead of wasting 80% of IT resources like we used to, and taking months to provision a physical server, virtualisation has driven utilisation rates up to 75-80%, and we can now provision virtual machines in minutes,” continued Kelly.
“VMs also made IT Operations much more agile – they could create and move VMs from one server to another in a minute. This agility also increased reliability – when servers failed, the VMs simply moved to another server – without disruption to service. IT Operations became faster, more reliable and more cost effective,” he continued.
Containers make developers more efficient and agile, and this is really the next wave of virtualisation. Containers are by design very granular and lightweight. They allow developers to break big monolithic applications into small chunks of code that are loosely coupled with other chunks. This allows them to rapidly develop, test, deploy, and repeat this cycle over and over with specific application components until they get the application just right for the end user.
Kelly added: “They are not making small changes in code that then require the entire application to be re-tested and re-deployed; they are changing, testing and re-deploying only the code (which runs in a container) that changed. This drastically accelerates the rate of development and innovation for developers. This application architecture is commonly referred to as a microservices-based architecture, and containers are the critical enabler of it.”
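As a hypothetical illustration of this granularity (the service name and base image below are invented, not taken from the interview), each microservice can be packaged as its own container image, so changing one service means rebuilding and redeploying only that one image:

```dockerfile
# Hypothetical image for one small, loosely coupled microservice.
# When this service's code changes, only this image is rebuilt and
# redeployed; the application's other containers are untouched.
FROM python:3.11-slim
WORKDIR /app

# Install only this service's dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Each container runs a single process for a single service.
COPY checkout_service.py .
CMD ["python", "checkout_service.py"]
```

Building the image (`docker build -t checkout-service .`) and redeploying it touches only this one component, which is the rapid change-test-deploy loop Kelly describes.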
Containers will really drive this next wave of innovation with cloud-based applications. While containers fit naturally into the microservices-based application architecture that underpins some of the largest web-scale applications, there's a problem: unless you're running everything on premises, container environments do not provide a secure multi-tenant environment for these applications, and managing thousands of containers can be a real challenge for IT Ops.
Kelly continued: “VMs provide a perfect deployment model for containers. For public cloud infrastructure as a service, deploying containers inside a virtual machine is the only option. For on-premises containers, VMs provide an efficient, managed, multi-tenant (still important in large enterprises or complex IT environments) and secure sandbox of resources for a set of containers. You can dynamically resize these virtual machines to meet different demands, and you can quickly move them around to greatly simplify the management of potentially hundreds of containers in a virtual machine.”
“If the application needs to scale resources beyond a single VM, new VMs can be spun up in minutes and interconnected over private virtual networks. VMs provide secure, agile and managed resource pools that let a developer rapidly scale hundreds, or even thousands, of containers as they need them. Containers running in a VM will really drive this next wave of data centre innovation,” concluded Kelly.
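The scale-out pattern Kelly describes can be sketched with Docker's swarm mode and overlay networking (the addresses, token placeholder and service below are hypothetical, and other orchestrators such as Kubernetes follow the same idea): each VM joins a cluster, and containers spread across the VMs communicate over a private virtual network.

```shell
# On the first VM (hypothetical address 10.0.0.1): initialise a swarm
# so containers on different VMs can share a private virtual network.
docker swarm init --advertise-addr 10.0.0.1

# On each newly provisioned VM: join the cluster using the worker
# token printed by the init command (left as a placeholder here).
docker swarm join --token <worker-token> 10.0.0.1:2377

# Back on the first VM: create a private overlay network spanning the
# VMs, then scale a service across them; the scheduler spreads the
# replica containers over all joined VMs.
docker network create --driver overlay app-net
docker service create --name web --network app-net --replicas 100 nginx
```

Adding capacity is then a matter of spinning up another VM and running the join command on it, after which the service's containers can be rebalanced onto the new machine.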