Author Archives: Nigel Poulton

Docker 1.12 The Game Changer

It’s no secret that Docker and containers have been threatening to change the world for a while now. And they already have for some of us. But you know what… there’s always been something missing, something preventing it from taking off big style. Like a missing ingredient or something. Well it’s missing no more!

WARNING: I’m gonna say this right up front…. this is probably gonna end up being a bit of a Docker love fest. If you’re not game for that, feel free to head off somewhere else. I won’t cry. The problem is… this stuff is so game changing on the technical front that it’s actually pretty hard not to get carried away.

What was missing?

In a word…. orchestration.

Docker’s pretty much always been cool. And I mean that in a game-changing-technology way. A way that can bring immense value to a business in things like time to market, stability, agility, and resource utilization. Just to name a few.

But at the same time… containers have always been hard to manage at scale.

So in the past we’ve had to grab external tools like Docker Swarm, Kubernetes, or maybe Mesosphere DC/OS and layer them on top of containers in order to manage this stuff at scale. And while that was fine, it really wasn’t.

And the reason it wasn’t fine was complexity. Every one of these tools brought additional complexity. Especially if you wanted to do things securely. As if security was an “option” anymore!

Anyway…… the way I see it there were three main problems:

1. Deploying and managing the infrastructure stuff (Docker engines) at scale
2. Deploying and managing the application stuff (containers) at scale
3. Doing it all securely

Enter Swarm Mode


Tackling point number one….. Docker 1.12 introduces an entirely new mode of operation called Swarm Mode. This is where Docker engines automagically join together and work as a team at scale. This team is called a swarm.

But check this out… every swarm is secure by default! Seriously.

Oh and it’s embarrassingly simple to build and manage! It takes just two commands to configure a brand new swarm that’s fully secured with TLS and key rotation! Something that in the past was painfully hard and took a stupid amount of time, money, and effort. Believe me, I’ve done it and it was never pretty!

Well Swarm Mode does away with all of that with two blindingly simple commands!
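For the curious, here’s roughly what those two commands look like (the IP address and join token below are placeholders, not real values):

```shell
# On the first node: initialise a brand new swarm.
# This node becomes a manager, and mutual TLS plus
# certificate rotation are configured automatically.
docker swarm init --advertise-addr 10.0.0.1

# On each additional node: join the swarm using the token
# that `swarm init` printed out (placeholder values shown).
docker swarm join --token SWMTKN-1-<token> 10.0.0.1:2377
```

That really is it — every node that joins gets its own certificate, and all node-to-node traffic is authenticated and encrypted.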


Another massive thing about swarm mode is the introduction of services.

So before services came along we deployed individual containers. If we had an app comprising 5 components we’d have a whole boat load of work to deploy them and manage them. Scaling was challenging and performing updates was more fun than it should’ve been.

Well all of that goes away with services.

With services we take that same app with 5 components, and define each component as a Docker Service. And it’s all declarative. This means we can tell Docker things like “hey make sure we’ve always got 5 containers backing my web front-end service” and Docker will go all out to make sure there’s always 5 containers backing your web front-end. Even when things fail. I’ll have some of that for a dollar!
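As a quick sketch, declaring that web front-end as a service looks something like this (the service and image names here are made up for illustration):

```shell
# Declare the desired state: always 5 containers (replicas)
# backing the web front-end, published on port 80.
docker service create --name web-fe --replicas 5 -p 80:80 mycompany/web-fe:1.0
```

If a container dies — or the node it’s running on dies — Docker spots that the actual state has drifted from the declared state and spins up a replacement.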

But that’s not the end of it with services. Get a spike in demand, or predict a spike in demand? Well it’s a single simple command to instantly scale your service! 

Updates are a doddle too. Wanna change the version of an image your service is using…? Walk – in – the – park! Another simple command and your service is updated to the new version. 

But the same single command can make the update a rolling update. For example, take a service with 200 containers and update 20 containers at a time, and wait 15 minutes in between each batch of 20. 
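Something like the following (again, the service and image names are placeholders):

```shell
# Instantly scale the service to 200 replicas
docker service scale web-fe=200

# Roll out a new image version: update 20 containers at a time,
# waiting 15 minutes between each batch
docker service update \
  --image mycompany/web-fe:2.0 \
  --update-parallelism 20 \
  --update-delay 15m \
  web-fe
```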
Seriously…. this stuff is so easy even I can do it!

Final thoughts

I honestly fail to see how this is gonna be anything less than game changing. It’s just kicked aside the biggest technical hurdle to mainstream adoption of Docker at scale! It’s secure, it’s scalable, and it’s simple. Who doesn’t want that?!?!?!

Is it a kick where it hurts for the ecosystem? Hmmmm… kinda sorta. But let me explain….

I see this as a kick up the backside for the ecosystem, not a kick to the front! 

What’s the difference? Well a kick up the backside is more of a nudge for the ecosystem to up its game. There’s shed loads of talent and great products in the ecosystem. They just need to continue to get better.

And it’s not like the ecosystem wouldn’t have been expecting something like this from Docker, Inc. After all, as much as Docker want to develop an ecosystem of partners and the likes… they also want to continue to change the world themselves, not to mention turn a profit.

So in my opinion this is a kick to the backside for the ecosystem – albeit a firm one. But certainly not an all out sickening kick to the front :-S


Wanna actually see what this all looks like and see how to do it yourself?

My latest Pluralsight course – Getting Started with Docker – covers it all. Plus more!

Sure… Pluralsight’s a subscription-based service. But you know what… they’ve ALWAYS got free trial periods if you’re not sure you wanna part with your hard earned cash. Go check it out, it’s my best course yet, on a technology that promises to be huge for your future career!

Thoughts and comments welcome.

Docker Cloud: Why all the complaining?

So there’s a few unhappy campers in the Docker space at the moment. And it seems to boil down to money.

Here’s my summary of the situation – The Docker Cloud service used to be called Tutum and it used to be free. Since then, Docker, Inc. acquired Tutum, re-branded the product as “Docker Cloud”, announced it as 1.0, and slapped a price tag on it. Some people are upset.

That’s the skinny.

There are definitely some valid reasons for folks getting upset. But there are also some… let’s just say “less valid reasons”.

I see the following two issues at the heart of it:

  1. Docker, Inc. just isn’t great at talking about commercial stuff
  2. People expect Docker to be free

The Docker, Inc side

Most people I talk to agree that Docker, Inc. has some issues with talking openly about its commercial offerings. For the longest time their commercial offerings have been hidden away in a grubby corner of the website, and you could attend events such as Dockercon and not even know that commercial offerings existed. It’s almost as if they’ve been afraid to mention the word “commercial”. Weird, for a company that’s looking to turn a profit at some point.

So why is this the case? Simple (IMO)…. for a company so passionate about open source, talking about commercial offerings is more uncomfortable than dropping a loud and pungent fart the first time you meet your other half’s parents. So it seems Docker, Inc. has struggled to balance its love and passion for open source with the need to make money so that they can survive and continue to change the world.

And to be fair, it’s an awkward dilemma. But I don’t think anyone who understands the DNA at Docker, Inc. can argue with their support of open source. Yet I doubt anyone can argue against the need to make money if we want them to still be around in five years.

Now this is just a personal opinion here (this is ALL personal opinion)…. but I think the guys at Docker need to bite the bullet and start being more open and aggressive about commercial offerings. Coz I’m hearing of customers and potential customers that can’t even find the commercial stuff on the website. Some say they even struggle to find out how to open a ticket!

Long story short…. Docker, Inc needs to stop being ashamed of its commercial products and customers.

The customer side

We’re all tight-fisted. We don’t like parting with cash. And that’s especially the case if we’ve been using something for free and then are asked to pay for it.

But seriously…. when that *free* product or service is a beta product, we should absolutely expect the day to come when we’re asked to pay for it. To assume something will remain free – especially something like Tutum/Docker Cloud – is grossly naive in my opinion. We all know that Docker, Inc. needs to turn a profit (they’ve taken $180M in VC funding so far). We all know that the money isn’t going to come from licensing the core Engine product. We all see that the money looks like it’s going to come from the higher level orchestration-type services. It’s simple 2+2 math! These higher level orchestration-type services are going to cost money!

The solution

The way I see it, two things need to happen….

  1. Docker, Inc. could probably do a better job with their overall messaging around commercial products.
  2. Customers and end-users need to get to grips with the fact that high quality software costs money.

Neither of these points is rocket science, though I do think the onus is more on customers and end-users not to expect stuff to be free. I mean seriously… do any of us genuinely complain about AWS, MS Azure, Digital Ocean, VMware vCenter etc costing money? Of course not, we absolutely expect to pay for the stuff that keeps our businesses running. So why would we balk at the idea of paying for wares from Docker, Inc.? We need to get over this.

Now then… have Docker, Inc. got the pricing model wrong? Possibly. And it’s absolutely the right and prerogative of customers to complain and give feedback on this – very little will ever change if we don’t give feedback. The folks at Docker, Inc are no doubt still learning on this.


So the way I see it, there’s no big deal here.

Docker, Inc. probably need to tighten things up on the commercial side of the business (that’s to be expected). And customers need to get their heads around the fact that this stuff can’t be free forever.

The only alternative I see is Docker, Inc. making everything free, burning through their VC funding, going out of business, and customers ending up paying somebody else for similar services.

NOTE: While I’m heavily involved in the container ecosystem, everything said here is my own personal opinion, and most of it is probably pure fiction. Either way, I don’t speak for anybody other than myself.

Docker on Windows: State of Play

Just a few *very quick* and typo-riddled thoughts on what I understand to be the state of play with containers on Windows. I’m basing this on a session I attended at Dockercon where John Starks from Microsoft talked about Windows and container internals and also performed some demos.

First up….. it’s quite clear that the guys at Microsoft have done a shed-load of work to get things to where they are now. But the question is….. where are they now?

Hiding the mess

My personal opinion… all of which could be totally wrong… is that there’s a lot of fudging going on to make containers work on Windows.

For example, there’s a shim layer in between Docker and the API interfaces into the Windows kernel. My understanding is that this is because a ton of the internal Windows stuff is gonna change over time – so why bother exposing all of the internals if you know they’re gonna be majorly rewritten soon. And that says to me that they’re doing a bunch of tactical stuff today. Probably in order to ship soon (soon in MS timescales).

Heavy containers

Kind of related… a shiny new Windows container will have a bunch of processes running inside.

The demo at Dockercon showed a couple of Windows containers – each running 19 processes – and both were plain base Windows containers without an app running. This is glaringly different from a Linux container that usually has a single process. And I believe I’m right in saying that some of those 19 processes are *spare processes* for the container to use when it needs to create new ones…. Coz apparently it’s not possible to create a new process from within a container running on Windows – something to do with having to make RPC calls back to the host in order to perform certain basic system services (kinda like syscalls).

How does Windows build a “Container”?

Internally Windows has the notion of a “job”…. a collection of processes and the resources those processes are using. To create a container, Windows takes this “job”, injects it with steroids, and calls it a “silo”. Think of a “silo” as a container. These “silos” get a bunch of namespaces and restrictions etc that aren’t too dissimilar to Linux namespaces and cgroups. E.g. object namespaces are used to give each container (silo) its own C: drive.

Union filesystems

The way the native Windows filesystem (NTFS) works doesn’t make it easy to implement a union filesystem. That’s not surprising considering the age of NTFS. Though let’s not forget that AUFS never made it into the mainline Linux kernel – too many patches and too much of a mess. So no disrespect to MS there.

What I did find surprising though was that there’s apparently nothing in the Windows storage stack that resembles device-mapper snapshots. Or I’m assuming there isn’t, as this would surely have been an option as a graphdriver – let’s not forget that back in the day Red Hat wrote the devicemapper storage driver (graph driver) for Docker on Linux coz they couldn’t go with AUFS.

Anyway… MS has a workaround. I understood this to be a virtual block device on top of NTFS with symlinks. My *guess* is that this is not too dissimilar to Docker Linux using OverlayFS – a single writeable container layer on top of an image layer that is a bunch of directories with hard links. Don’t get me wrong here…. even if I’m right with my assumption you shouldn’t take the comparison too far (I do know that OverlayFS doesn’t deal with blocks etc).

Windows base images

Also…. every Windows Docker image will have to be built off one of two official Windows base images – Windows Server Core, or Windows Nanoserver. And check this out…. for legal reasons neither can be hosted on Docker Hub!

Oh dear…. talk about politics getting in the way of engineering and innovation. C’mon Microsoft you’re doing so well!

On a side note, it does seem that Nanoserver is seen by MS as the default server platform to build against in the future. Makes sense to me as it seems like it’s gonna be a real server OS and not a desktop OS dressed up to look like a server OS.

Hyper-V containers

OK so Hyper-V containers work like this – On a Windows server you spin up a Hyper-V container. Windows then spins up a lightweight Hyper-V VM (sorta like boot2docker) and runs the container inside the Hyper-V VM. It seems to be a 1 Hyper-V VM <> 1 container model.

Hyper-V containers also seem to be the direction that MS are going for folks who wanna run Windows containers on their laptops – basically saying Windows 10 won’t be offering native container support. Suppose this isn’t too different to boot2docker.

Final thoughts

All in all…. I was kinda disappointed. I’d genuinely hoped the implementation would’ve been cleaner. Though I don’t want to take away from the great work being done at MS. And let’s not forget that namespaces and the likes didn’t land in the Linux kernel overnight (not by a long-shot).

DISCLAIMER: Like I said at the beginning….. I reserve the right to be wrong about every single thing I’ve said above. It was the last session of DockerCon and my brain was dumping unwanted memories as fast as it could in order to take in more of what was being said… sadly my grey matter didn’t keep up as well as I’d have liked 😀

NOTE: I’ll add some pics later.

Enterprise Docker Discussion Group

I just created a new LinkedIn Group called “Enterprise Docker”.

Raison d’être: To discuss all things relating to Docker and container use in the enterprise!

So stuff like:

  • Who pays for Docker and container infrastructure in enterprises
  • What commercial support is available and what it’s like
  • What does the Docker ecosystem need to do better in enterprises
  • Basically anything else related to running Docker and containers in traditional enterprises

I’m no expert on the topic and think we’re at the very early days of Docker and containers in traditional enterprises…. so I’m hoping we can have a bunch of discussions and learn from each other.

Jump on over and get involved!


Hopefully the folks over at Docker, Inc. don’t shoot me for modifying their logo…. it’s my cheap attempt at showing enterprises running on Docker 😉

Integrating Docker with Devops Automated Workflows

A few weeks ago I released a lightning fast (~1 hour) video course over at Pluralsight showing how to integrate Dockerized apps into a CI/CD automated workflow using some cool new features on Docker Hub.

Docker’s hot right now. And we’ll be royally shafted if we don’t bring it under the control of our existing operational processes and the likes! Think about it….. who wants tens/hundreds/thousands of Dockerized apps floating around in their estate unchecked? Not me!

Yes Docker’s the bizzo, yes Docker’s important, yes Docker’s blah blah blah. But unless we want a repeat of VM sprawl and the headaches that brought, we need to get our act together over the reality of Docker and Dockerized apps.

Anyway…. the course takes a small web app + simple test, and shows how to push it through CI tests, integration with Docker Hub (build triggers, webhooks etc…..), and pushing to the world on Amazon Web Services! All automated… and you’ll learn it all in a single hour! Sounds good to me!

Seriously….. if you don’t already have Dockerized apps in your estate:

  1. Go check again…. you actually might! Remember when we thought nobody was using AWS and then we learned about the whole shadow-IT thing…. Let’s not be caught sleeping at the wheel again.
  2. You will soon! Seriously…. ignoring Docker ain’t gonna make it go away. So take a leaf out of the Scouting for Dummies book…. be prepared!

Anyway… that’s about it really… it’s a short fun course. And if you’re too tight-fisted to have a Pluralsight subscription, get one! They’ve always got a free 10-day trial going.

Production in the Cloud

This is my “Production in the Cloud” presentation from the recent TECHUNPLUGGED event in Amsterdam.

The skinny: It’s my opinion that public cloud services such as Amazon Web Services (AWS) and Microsoft Azure are ready for many enterprise production workloads. More than most of us think.

Sure…… there’ll be cases where they’re not a fit. But for those cases that are a fit….. you’d be hard pressed to find a more highly available and more world-class infrastructure to deploy on.

A few quick questions:

  • Do any of us really believe we can build better data centres than Amazon and Microsoft?
  • Do any of us really think we’re better at security than Amazon and Microsoft?
  • Do any of us really think we can build more highly available infrastructure than Amazon and Microsoft?

If you answered yes to any of the above…. then ask this final question…. can you do it at anywhere near the short term cost?

SaveTheTree – Docker in enterprises

First up…. this has nothing to do with saving trees – at least of the botanical variety. This has everything to do with Docker in the enterprise!


So… I spun up some new Docker hosts the other day… and it wasn’t long before I needed my trusty old friend `docker images --tree`. Well what was my horror when I got bitch-slapped with this:

npoulton@ip-10-0-0-90:/home/ubuntu$ sudo docker images --tree
 flag provided but not defined: --tree
 See 'docker images --help'.

Basically the `--tree` flag’s been pulled from the code! And yes, I know it’s been throwing “Warning: ‘--tree’ is deprecated” warnings at me since forever. I just never thought they’d actually go through with it.

And you know what… I know it’s just a piece of software we’re talking about here… but I’m seriously mortified by this. I don’t think I’ve ever had a more poignant lesson that it’s the little things that make a big difference. Such a tiny command, that was so insanely powerful for Docker image management.

Enterprise Impact

Anyway…. what’s this all got to do with Docker in the enterprise?

Well…. I’ve spent enough time working in big enterprises and financial services orgs that I know the odd thing about what gets signed off into production in these organizations and what doesn’t. So stick with me for a sec here…

Traditional enterprises – especially government, financial services etc – are as anal as the best of them when it comes to signing off code and services into production. Hell some of them still roll their own Linux kernels, not to mention still run stuff on AIX and pay through the nose for EMC storage coz it makes them feel warm and fuzzy. Bottom line…. they soil their pants over every new thing they allow into production.

So what I’m saying is….. if I was still at one of these types of orgs lobbying to get Docker signed off into production… I’d have taken the removal of `docker images --tree` as a steel-toe-capped kick in the old meat and two veg!

Why? Because now my ability to perform basic and vital image management tasks has become a whole lot harder. And the idea of running the following instead is just insane!

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nate/dockviz images -t

Now I’ve no personal issue with Nate Jones and his actually quite cool little image. But the thought that I might have to run code like that – spin up a random container from some guy called Nate who I’ve never met – on production systems is just mind blowing!

I’m sure doing this kind of stuff is done all the time in cool hipster companies and the likes – and I’m totally cool with that. But it’s absolutely not done in rusty old enterprises with the kind of big fat wallets that I’m sure Docker would love to help thin out.

So what I’m saying to Docker is….. and I say this with the deepest respect to those involved with the Docker project (props to you all for the genuinely awesome work you do)…. please add the `--tree` option back. And keep good image and container management capabilities within the trusted core Docker codebase. The code that folks like me are no doubt trying to champion into production environments all over the world!

#SaveTheTree 😀

Learn Docker this Summer – for FREE!

Seriously… I can’t think of a better thing to do for your career this summer, than learning Docker!

Docker Learner

Well….. my Docker Deep Dive course (don’t be put off by the scary title) is part of a select group of online video courses that make up Camp Pluralsight – a bunch of video training courses that are free during summer 2015. A cracking way of enhancing your career prospects and learning cool new tech at the same time.

Seriously…. if you’re not up to speed with Docker, this course is for you! I genuinely think it’s the best way to get up to speed – I mean I honestly did everything I could to make this the ultimate Docker learning experience.

Anyway… pop on over to Pluralsight where the course is available to watch now… The containers are coming… don’t be caught sleeping!

PS. Below is some feedback I’ve had via Twitter – notice the comments that say how good it is for newbies!

Docker course praise

Why you should learn to program in Go…

Huh…. how come an infrastructure guy like me just produced a Go programming course for Pluralsight?

I’ll keep this brief… but I think Go is gonna be mahoosively important… I expect Go skills to be right up there as one of the most important skills for IT folks to have in the near future.

Anyway… three quick things to say.

First up… I started my IT life as a FoxPro programmer back in ~1998. In fact I still hold a grudge against Microsoft for what they did to it. Anyway, my point is, I’m partial to a small amount of coding and thought it would be fun to create a course on the fundamentals of Go 😉

Second up… Oh – my – goodness! Go is an amazing language. And you know what… I reckon it’s poised to take over the world. Now I’m not about to say that the Linux kernel is gonna get re-written in Go. But seriously… go check out the major infrastructure projects that are being written in Go these days! Just to name a few – Docker, most of the CoreOS stuff like etcd and fleet, Google’s open-sourced Kubernetes!

Whoooaaa! These are like the who’s who of game-changing new infrastructure technologies! Well guess what… they’re written mostly in Go. So I figured, if anyone wants to be really good at any of these techs, knowing Go would be massively helpful.

Third up… a ton of infra people are looking to acquire good plots of land in the new DevOps world. Well I reckon Go will be a cracking language to have in your suitcase if you’re planning the move to the brave new DevOps world.

OK… so what to expect from the course?

The key is in the word “fundamentals”. My aim was to give a good solid intro to most of the features of Go. Check out the course outline… but it’s basically – variables, functions, conditionals, loops, concurrency…. The idea being…. you’ll get enough of a theory + hands-on intro to take your interest further.

Go Fundamentals course outline
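To give a tiny flavour of the language, here’s a self-contained sketch touching a few of the fundamentals the course covers – variables, functions, loops, and a dash of concurrency. The names here are mine, not from the course:

```go
package main

import (
	"fmt"
	"sync"
)

// greet builds a greeting string for the given name.
func greet(name string) string {
	return "Hello, " + name
}

func main() {
	// variables: a slice of project names
	projects := []string{"Docker", "etcd", "Kubernetes"}

	// concurrency: fan out one goroutine per project,
	// using a WaitGroup to wait for them all to finish
	var wg sync.WaitGroup
	for _, p := range projects {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			fmt.Println(greet(name))
		}(p)
	}
	wg.Wait()
}
```

Even in a toy example like this you can see why infrastructure folks like Go – goroutines and WaitGroups make concurrency feel almost boring.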

That’s it really. Go check it out! As always, there’s a free trial that you can sign-up for in case you don’t wanna part with your hard-earned cash on a promise I made here – sign-up for the trial and then cancel if you don’t like what you see!

Linux + Containers + Go = 42