Containers Explained

November 20, 2014

So in this week's In Tech We Trust podcast (listen via the controls below) we talked a bunch about Docker. Part way through the conversation, Marc Farley suggested we back up a little and give a quick primer on Docker and containers. Cracking idea. So we did that on the podcast, but I think it deserves a blog post too. So here's a quick primer on containers…..

UPDATE: Since writing this post, I’ve produced a video training course dedicated to learning core Docker technologies called Docker Deep Dive over at Pluralsight. If video training’s your thing, go check out the course!  I’ve also got a sample module from the course available on YouTube, so you can try before you buy.

The Physical World of the Past

In the beginning we had physical machines – CPU, RAM, disk…….


Every time we wanted to deploy a new application we had to buy a new physical machine for it. In our high-level view, 10 applications would require 10 physical machines. Each physical machine would have its own Operating System installed, and the application would get installed on top of that Operating System. Like so…..


Problem was, resource utilisation on those physical machines was nearly zero – a shocking waste of power, cooling, raw materials, data centre floor space…..


Why Didn’t We Just Install Loads of Apps per Physical Machine?

What we really want is to be able to install multiple apps per physical machine! So why didn't we?

Well….. the problem back then was that open systems Operating Systems like Windows and Linux were crap at it. We couldn't isolate applications and stop them interfering and trampling all over each other.

That’s a shame!

VMware to the Rescue!

So…… along came VMware… and we witnessed the rise of the Virtual Machine (VM).

This VM model installed something called a Hypervisor onto the physical machine. The hypervisor would own the physical resources (CPU, RAM, disk…) and it would create multiple Virtual Machines on each physical machine. These Virtual Machines looked, felt and worked just like physical machines.  So we installed an Operating System onto each VM, and then installed an app on top of the OS. Net result, multiple applications per physical machine, and higher resource utilisation.


Magic! And we basked in its beauty for years. Until…… we got over the euphoria and realised that, in the cold light of day, it's actually kinda ugly.

Wait…. Virtual Machines Are Ugly?

How come Virtual Machines are ugly?

Well, each VM needs its own Operating System. And Operating Systems consume resources! Remember, our ultimate goal is to run multiple applications on a physical machine, not to install multiple Operating Systems on a physical machine. Put another way, the Operating System only exists to facilitate the application – if we didn’t *need* the Operating System, we would get rid of it. Let’s face it, nobody is bragging about the number of Operating Systems their company has, or how more Operating Systems = more value for the business!

Every Operating System consumes system resources (overhead) – disk space, disk IOPS, RAM, CPU cycles….. They all need patching, and some of them need anti-virus. I love Operating Systems, but in these cases more is not better.

So while the Virtual Machine model was an improvement over the one-physical-machine-per-application model, it's far from a thing of beauty. Far too much waste.

Containers Are Better

How come containers are better?

Well, they’re more efficient – they waste less!

Let's look at the basic architecture of containers. In the image below we've got a physical machine with an Operating System running on the bare metal (the Operating System owns and manages the physical machine's hardware – CPU, RAM, disk…..). Then, on top of the Operating System, we run *containers*. And within each container we run an application. That's the high level.


Kinda looks similar(ish) to the VM model….. But the crucial difference is that inside each container is pretty much just the application. Whereas in the VM model, inside each VM we need a full-blown Operating System image and then the application.
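To make that concrete, here's a minimal Dockerfile sketch. The binary name is made up for illustration, but it shows the point: a container image can be little more than the application itself, with no Operating System install inside it.

```dockerfile
# Start from the empty "scratch" base - no Operating System userland at all
FROM scratch

# Copy in a statically linked application binary (hypothetical name)
COPY ./myapp /myapp

# Run the app when the container starts
CMD ["/myapp"]
```

Something like `docker build -t myapp .` followed by `docker run myapp` would run it – the container shares the host's kernel, so there's no second Operating System to boot or patch.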

So containers are a lot more lightweight. Meaning……. they waste fewer resources – fewer disk IOPS, less RAM, fewer CPU cycles. They start faster, and are arguably just as secure as VMs.


As a result, I think we’re on the verge of a seismic shift in IT, and that containers will be the future (see my previous post)!

That’s the high level for now. I’ll post more in the near future, on things like – how are containers secure, what apps can run in containers, why haven’t we been using containers in the past…..

Listen to the Podcast here –


9 thoughts on “Containers Explained”

  1. Reid Earls

    Excellent write-up on an exciting tech. The ramification on O/S providers and things like licensing and support should scare the likes of MS. However, the savings in time and money for large IT orgs trying to push apps from dev to test to prod could make this a serious player.

  2. Emac

    Two words. Project Atomic. Now several more…

    ” Deploy and Manage Your Docker Containers.

    Project Atomic integrates the tools and patterns of container-based application and service deployment with trusted operating system platforms to deliver an end-to-end hosting architecture that’s modern, reliable, and secure.

    Cloud images are available for download, supporting VirtualBox, QEMU/KVM, and OpenStack. Support for bare metal installation is coming.”

    “An Atomic Host is a lean operating system designed to run Docker containers, built from upstream CentOS, Fedora, or Red Hat Enterprise Linux RPMs. It provides all the benefits of the upstream distribution, plus the ability to perform atomic upgrades and rollbacks — giving the best of both worlds: A modern update model from a Linux distribution you know and trust.”


  7. Trevor

    So Sun Microsystems Solaris OS was ahead of the times regarding containers…..

  8. Nigel Poulton Post author

    Yes, no doubt. But maybe the market wasn't ready. The guys at Docker are often at pains to point out that Docker is just part of something much bigger (cloud, microservice apps etc….)

    It’s all about timing.
