Docker Launches Kubernetes Killer

By Nigel Poulton | June 24, 2016

DISCLAIMER: This is all my personal opinion. As always, I reserve the right to be totally and utterly wrong about everything 😀

So Docker, Inc. just announced native clustering and orchestration. TL;DR: I reckon it’s the biggest thing to happen to Docker since Docker, and I expect it to be a game-changer.

In true Docker form, this is both ridiculously simple, and reassuringly secure.

Think I sound like a shill or a Docker fanboy when I say that? I challenge anybody to show that Docker is not working hard to make things simple and secure…

On the clustering side of things they’re actually using the term swarm instead of cluster. But the short and skinny is that you can pool lots of Docker hosts together and have them behave like a single large Docker host. And it’s so simple it’s just ridiculous!

As of Docker 1.12 you’ll be able to spin up a swarm (cluster) with just two commands:

$ docker swarm init   # creates the swarm
$ docker swarm join   # grows the swarm by adding hosts to it

And the beautiful thing…. it’s secure by default! And just in case it’s not obvious why that’s beautiful, check out just some of my old notes on how to configure a semi-secure swarm before this announcement.

[Image: “swarm mode relief” – my old notes on manually securing a swarm]

Well instead of all of that ugliness above, we can do more with just two simple commands, and everything will be secured with TLS, including automatic key rotation on a schedule that you can configure. Seriously, that’s a godsend.
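
For example – and I’m going from my own notes against the 1.12 release candidates here, so verify the exact flag on your build – the certificate rotation period can be tuned with a single command:

$ docker swarm update --cert-expiry 48h   # rotate node certificates every 48 hours

No external CA wrangling, no hand-rolled TLS configs.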

A bit on native orchestration

So before this announcement, creating clusters of Docker hosts and orchestrating apps on top of them was hard. Not rocket science, but still hard. You’d have to take an additional technology, something like Kubernetes, and layer it on top and do a shed-load of configuration.

With this announcement, orchestration just got waaaaaay simpler.

But as well as native clustering, Docker 1.12 also introduces the notion of services – a declarative way to define resilient, scalable containers. So instead of the old way (docker run), where you’d say something like “run this container on that node”, you can now say (docker service create) “run this container, make sure there are always x of them, and spread them nicely across all nodes in the swarm”. Docker then takes care of the heavy lifting behind the scenes.
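
To make that concrete, here’s a rough sketch – the image, service name and flag spellings are from my own tinkering with the 1.12 release candidates, so double-check them against your build:

$ docker service create --name web --replicas 5 -p 80:80 nginx

That asks the swarm to keep 5 replicas of the nginx container running, spread across the nodes. If a node dies, Docker reschedules its replicas elsewhere so the count stays at 5.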

But not just that….. It does native rolling updates. So you can say things like update the version of the containers in this service to a new version, do the update two containers at a time and wait 2 minutes in between each pair of containers you update. Pretty amazing stuff. But there’s still more…… You can also scale services by saying things like increase the number of containers in this service to 110.
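
Again, hedging slightly on exact flag names (this is from the release candidates and things may shift before GA), that looks something like:

$ docker service update --image nginx:1.11 \
    --update-parallelism 2 --update-delay 2m web   # roll to a new image, 2 at a time, 2 mins apart

$ docker service scale web=110   # take the service to 110 replicas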

All good stuff.

Kubernetes impact

Clearly this is going toe-to-toe with ecosystem technologies like Kubernetes and Mesosphere DC/OS*. And I know some people have questions over the ethics of this and the potential impact on the budding 3rd party ecosystem. So here are my thoughts….

Docker, Inc. has a philosophy of batteries included but removable. This sticks to that philosophy. The only change is the improved quality of the batteries that are included. That’s fine, they’re still removable. And as always, it’s still up to 3rd parties to produce better alternatives. I’m a massive free-market advocate and see competition as the major driving force behind innovation. I say bring it on!

QUESTION: Will this break Kubernetes?

In one word, “no”.

So here’s the thing….. Docker 1.12 is fully backwards compatible. That means whatever worked before 1.12 will still work with 1.12. The thing to note is that you don’t have to run Docker 1.12+ in swarm mode – it’s entirely optional. If you don’t enable swarm mode, then everything works exactly the same as it always has!
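
A quick way to check which mode an engine is in: docker info on a 1.12 engine reports the swarm status (output trimmed, and the exact wording might differ between release candidates):

$ docker info | grep -i swarm
Swarm: inactive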

QUESTION: Will this kill Kubernetes and other ecosystem products?

No chance!

Coming back to the competition aspect. This is a great thing for the ecosystem and I look forward to better versions of all competing products. Think about Internet Explorer – the hideous browser from Microsoft. That was bundled with Windows for years! But did it stop Mozilla, Google and others from making something far superior? Ha!

* not all three products are exactly the same, and not all three have feature parity

How good is this gonna be?

Dunno. But early signs are good.

I’ve played with it for a few weeks and can attest that simplicity and security are as advertised. Scalability is something I’ve not had a chance to put to the test, and reliability is as you would expect with any new product or major feature – you’ll need to thoroughly test it before you trust it.

On the point of reliability though….. all of the new swarm features are implemented as separate goroutines. So enabling swarm mode is what brings the new goroutines into the fray, and disabling swarm mode takes them out of the mix. Real-world implication… if you’re not using the funky new stuff, the core Docker engine in Docker 1.12 is gonna be as reliable as any other point release of the product.

Final word… I like what I see.

Wanna learn everything you need to know to get up and running with Docker Swarm Mode, Services, Stacks, Bundles and all the other great stuff released with Docker 1.12….. go check out my new Getting Started with Docker video course. Trailer below.

16 thoughts on “Docker Launches Kubernetes Killer”

  1. Yasmany Cubela Medina

    One question I’ve been asking myself, since I haven’t had time to play with 1.12 yet: what about docker-compose? Before this it created containers, not services – will there be any way to create services from docker-compose instead of plain containers? I know a service is just a declarative way to define a set of containers, but it does bring the goodness of rolling updates and scaling. So what do you know about it?

  2. Rory McCune

    Hi, cool post. One thing though: I’d probably hesitate to call swarm mode “secure by default”. It does include encryption and key management, which is great, but in the default configuration it doesn’t include any authentication for nodes joining the swarm, which means that any host that can see the swarm ports at a network level can join the swarm and then potentially be assigned tasks.

    This could lead to sensitive information being sent to an untrusted host. There are options to change that behaviour, but they’re not the default. https://raesene.github.io/blog/2016/06/19/Docker-Swarm/ has a couple of details

  3. Nigel Poulton Post author

    Agreed Rory. I wouldn’t be surprised to see node joins require approval (the way that manager joins currently do) by the time it ships as GA. Good point though!

  4. Nigel Poulton Post author

    If I understand your question, the experimental channel of 1.12 already includes deployment of stacks via DAB files (distributed app bundles). I *think* you can use the latest version of compose to create DAB files from existing compose files. Is that what you were asking?

  5. florian

    Tried to play with this simplified ‘docker swarm init & docker swarm join’ config by adding one more manager into the picture (docker node promote), hoping that would end up in a sort of no-single-point-of-failure config, with manager fail-over working the same way node fail-over seems to work. Unfortunately, from my testing, once we have two managers (‘node ls’ shows the two manager statuses as ‘Leader’ and ‘Reachable’) there’s still no manager failover. I’d expect that if the ‘Leader’ goes down the other manager takes over, but that’s not the case at all… once a manager goes down the whole swarm is no longer reachable? (Found the same problem reported here: https://forums.docker.com/t/will-docker-swarm-1-12-support-multiple-managers/17020.) Are we missing anything here? So far I couldn’t find any posts on how manager failover works with Docker engine 1.12 swarm mode – any thoughts would be really appreciated.

  6. Nigel Poulton Post author

    I’ve not tested that 2-manager scenario, but issues like this wouldn’t surprise me as, so far as I’m aware, 1.12 is still in RC?!?!

    Also, the recommendation would definitely be to have an odd number of managers – 3 or 5 – so that Raft can keep a quorum (a strict majority of managers) when one fails. With only 2 managers, losing either one loses the majority, so no new leader can be elected. An even number buys you no extra fault tolerance and can leave the cluster deadlocked without a quorum. Go beyond 5 managers and you might see Raft operations start to slow down.

    Aside from that, so far as I’m aware, elections should successfully take place in the event that the leader is lost – provided a majority of managers remain – and a follower should become the new leader.

  7. Ajeet Raina

    @Rory, did you look at the docker swarm update --auto-accept --secret feature? If you run that command, a node has to get approval from a manager to join the cluster, and it needs the secret too.

  8. Rory McCune

    @ajeet Yeah I did see those and indeed they can be changed, so it’s not that swarm lacks the capabilities but that, at the moment, the default settings are to have auto-accept for workers, which I think is a dangerous default, especially where the feature is being billed as “secure by default”.

    There’s a discussion happening about whether this will stay the default here https://github.com/docker/docker/issues/23785 and I’m hoping it’ll be switched before release.

  9. florian

    @Nigel – I finally managed to get a working config just by adding one more manager into the picture. As you suggested, it looks like the key is to allow Raft consensus to happen (https://raft.github.io/: “Typical consensus algorithms make progress when any majority of their servers are available”), so your recommendation of having an odd number of managers is definitely the way to go. Thanks again for your advice.

  10. djgurvi

    Hello Florian,

    Can you share what config, scenario, and number of nodes you used to get the manager-failure case working?

  11. florian

    @djgurvi, a minimal config requires at least 3 nodes:

    connect to "node_1" then run:
    docker swarm init --listen-addr "node_1_IP":2377

    connect to "node_2" then run:
    docker swarm join --listen-addr "node_2_IP":2377 "node_1_IP":2377

    connect to "node_3" then run:
    docker swarm join --listen-addr "node_3_IP":2377 "node_1_IP":2377

    back on "node_1" then run:
    docker node promote "node_2_ID"
    docker node promote "node_3_ID"

    at this point you should have a fully redundant swarm cluster – you could shut down any of the nodes and the containers running on that node should be automatically restarted on the remaining nodes.

  12. Manuel Patrone

    Hey Nigel,
    What happened to your Kubernetes Pluralsight course? 😉
    Cheers

  13. Nigel Poulton Post author

    Good shout! I’ve put it on hold while I do a Docker orchestration one… sorry 😉

    It’s part done, just been pushed slightly down the agenda.

    Also thinking about waiting for v2 of the API.

  14. Philippe Soares

    @Rory Why would it be insecure to let a worker join the swarm? As far as I know, only swarm managers can schedule tasks on the swarm, and joining as a swarm manager requires approval from an existing manager by default.

  15. Ajeet Raina

    @Rory, did you check out the recent docker 1.12.0-rc4?

    A Glimpse:

    $ sudo docker swarm init --listen-addr 10.128.0.3:2377
    No --secret provided. Generated random secret:
    4qro1l56aep78a6nbj8bxfxl8
    Swarm initialized: current node (9j4k3fqv6u6dtow2alkpvdqew) is now a manager.
    To add a worker to this swarm, run the following command:
    docker swarm join --secret 4qro1l56aep78a6nbj8bxfxl8 \
    --ca-hash sha256:49a72e0436b03c7984265604f2afb9eaaf10154d05a7770e7432396cc8676963 \
    10.128.0.3:2377

    Hope you would love this.

  16. Sean

    > Think about Internet Explorer – the hideous browser from Microsoft. That was bundled with Windows for years!
    > But did it stop Mozilla, Google and others from making something far superior?

    Yeah but that’s only because of the government lawsuits that brought about yuuuge benefits for taxpayers and browser users alike!

    A lot of Docker users (tyre-kickers) have been waiting for Swarm before actually putting this stuff into production. I just googled a bit and found some “proof” in the Docker Survey 2016 – most Docker users are looking forward to Swarm.
