An Updated Performance Comparison of Virtual Machines and Linux Containers

Wes Felter, Alexandre Ferreira, Ram Rajamony, Juan Rubio
IBM Research Division
Austin Research Laboratory

Cloud computing makes extensive use of virtual machines (VMs) because they permit workloads to be isolated from one another and their resource usage to be controlled. However, the extra levels of abstraction involved in virtualization reduce workload performance, which is passed on to customers as worse price/performance. Newer advances in container-based virtualization simplify the deployment of applications while continuing to permit control of the resources allocated to different applications.

In this paper, we explore the performance of traditional virtual machine deployments, and contrast them with the use of Linux containers. We use a suite of workloads that stress CPU, memory, storage, and networking resources. We use KVM as a representative hypervisor and Docker as a container manager. Our results show that containers result in equal or better performance than VMs in almost all cases. Both VMs and containers require tuning to support I/O-intensive applications. We also discuss the implications of our performance results for future cloud architectures.

All of our tests were performed on an IBM System x3650
M4 server with two 2.4-3.0 GHz Intel Sandy Bridge-EP Xeon
E5-2665 processors for a total of 16 cores (plus HyperThreading)
and 256 GB of RAM. The two processors/sockets are
connected by QPI links making this a non-uniform memory
access (NUMA) system. This is a mainstream server configuration
that is very similar to those used by popular cloud
providers. We used Ubuntu 13.10 (Saucy) 64-bit with Linux
kernel 3.11.0, Docker 1.0, QEMU 1.5.0, and libvirt 1.1.1. For
consistency, all Docker containers used an Ubuntu 13.10 base
image and all VMs used the Ubuntu 13.10 cloud image.
Power management was disabled for the tests by using
the performance cpufreq governor. Docker containers were not
restricted by cgroups so they could consume the full resources
of the system under test. Likewise, VMs were configured
with 32 vCPUs and adequate RAM to hold the benchmark’s
working set. In some tests we explore the difference between
stock KVM (similar to a default OpenStack configuration)
and a highly-tuned KVM configuration (similar to public
clouds like EC2). We use microbenchmarks to individually
measure CPU, memory, network, and storage overhead. We
also measure two real server applications: Redis and MySQL.
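The paper does not include setup scripts, but the frequency-pinning step described above can be sketched as follows. This is a hypothetical helper, not the authors' tooling: it writes the standard cpufreq sysfs files to select the `performance` governor on every core, which is how such a configuration is typically applied on Linux. It assumes a cpufreq-capable kernel and requires root.

```python
from pathlib import Path

def set_performance_governor():
    """Pin every CPU to the 'performance' cpufreq governor, disabling
    frequency scaling so benchmark runs are not skewed by power
    management. Standard sysfs paths; requires root."""
    for gov in Path("/sys/devices/system/cpu").glob(
            "cpu[0-9]*/cpufreq/scaling_governor"):
        gov.write_text("performance\n")
```

On a host configured this way, `scaling_cur_freq` should stay at or near the maximum for the duration of a run, matching the paper's goal of removing power management as a variable.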

We see several general trends in these results. As we expect
given their implementations, containers and VMs impose almost
no overhead on CPU and memory usage; they only impact
I/O and OS interaction. This overhead comes in the form of
extra cycles for each I/O operation, so small I/Os suffer much
more than large ones. This overhead increases I/O latency and
reduces the CPU cycles available for useful work, limiting
throughput. Unfortunately, real applications often cannot batch
work into large I/Os.
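The per-operation nature of this overhead can be illustrated with a small sketch (not from the paper): writing the same 1 MiB payload as 2048 unbuffered 512-byte writes versus a single large write. Each `os.write` is one kernel crossing, and under virtualization each crossing pays the fixed per-I/O cost the text describes, so the small-I/O path pays it 2048 times for identical output.

```python
import os
import tempfile

data = os.urandom(1 << 20)  # 1 MiB payload

# Many small I/Os: one write() syscall per 512-byte chunk (2048 total).
small = tempfile.NamedTemporaryFile(delete=False)
for off in range(0, len(data), 512):
    os.write(small.fileno(), data[off:off + 512])
small.close()

# One large I/O: a single write() syscall for the whole buffer.
large = tempfile.NamedTemporaryFile(delete=False)
os.write(large.fileno(), data)
large.close()

# Both files hold identical bytes; only the number of kernel
# crossings (and thus per-I/O virtualization overhead) differs.
```

Applications that can buffer work into large I/Os, as the second path does, amortize the fixed cost; the paper's point is that many real applications cannot.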
