Author Archives: Nigel Poulton

Here come the *real* server Operating Systems…

The days of server and desktop Operating Systems looking and feeling the same are gone – those are the olden days!

We’re being invaded by a new breed of *proper* server Operating Systems. These guys are stripped down, hardened, tuned and optimised for server workloads. They’ve got none of the extra crap needed for things like a slick desktop experience. No! These guys are designed and built to live in the data centre and the cloud, and they look a lot different to their frilly desktop siblings.

Following in the footsteps of Android

Think about it like this…. Android is Linux. It runs a Linux kernel. But we accept that it’s waaaaay different to any Linux we run on a desktop or server. We accept that it’s highly tuned to run well on mobile devices, because…… mobile devices and the things we do with them are totally different to desktops!

Well guess what…… the same holds true for servers….. the things we do with servers are totally different to the things we do with desktops!

So it should only be natural that our server Operating Systems hugely differ from our desktop Operating Systems.

Let’s look at an example of one…..

RancherOS

Take a look at RancherOS, announced a couple of days ago (24th Feb 2015)…. this is a server OS in every respect.

NOTE: RancherOS is only one example. I’m also a fan of CoreOS. But things like Project Atomic from Red Hat and Snappy Ubuntu Core from Canonical are heading in this direction too.


For starters…. the RancherOS ISO is an impressive 20MB – yes, that’s MB and not GB! Talk about a small attack surface, and a small code-base that requires patching… I’m guessing there’s no unnecessary code in there – definitely no pretty GUI! All good stuff for a server OS.

Also…. pretty much everything on RancherOS runs inside a Docker container. And we’re not just talking about applications. We’re talking about system services too! Stuff like syslog and ntpd each run inside their own Docker containers.

If you want to learn Docker… check out my Docker Deep Dive course available on Pluralsight. You can even sign up for a free 10-day trial that will allow you to watch the entire course!

That means that if you log on to a RancherOS box and list the running processes, you’ll see a bunch of kernel threads dressed up as processes (that’s normal) but you won’t see much more than that. There’s no init or systemd…. PID 1 is a special Docker instance that runs system services as containers. So as well as no systemd, there’s also no package manager, no GUI, no unnecessary fluff. Basically, it looks and feels nothing like desktop Linux!
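
If you fancy poking at this yourself, here’s a minimal sketch – in Go, seeing as that’s what Docker itself is written in – that lists running containers by calling the Docker daemon’s remote API over its unix socket. I’m assuming the standard /var/run/docker.sock location here; on RancherOS the system-level Docker instance may well expose its socket somewhere else, so treat this as illustrative rather than gospel.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Talk HTTP to the Docker daemon over its unix socket. The socket
	// path is an assumption -- adjust it if your system puts it elsewhere.
	client := &http.Client{
		Transport: &http.Transport{
			Dial: func(network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// GET /containers/json returns the list of running containers.
	resp, err := client.Get("http://docker/containers/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode just the fields we care about.
	var containers []struct {
		ID      string `json:"Id"`
		Image   string `json:"Image"`
		Command string `json:"Command"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&containers); err != nil {
		panic(err)
	}

	for _, c := range containers {
		fmt.Printf("%.12s  %-25s  %s\n", c.ID, c.Image, c.Command)
	}
}
```

On a box like RancherOS, system services such as syslog and ntpd should show up in that listing right alongside ordinary application containers.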

Opinion

In my opinion….. this is all good stuff, and I love innovative stuff like this!

And it makes total sense…. servers are totally different beasts to desktop machines. I mean, even Microsoft *started* to grok this when it shipped Server Core. And well done to MS for finally shipping a server OS without a GUI. But the stuff going on in the Linux world is so far beyond that. We’re seeing massive changes for the cloud, automation, and container-based workloads.

The future’s bright… the future’s a new breed of minimalist, purpose-built, server Operating Systems!

NOTE: I’m not saying using Docker as PID 1 or any of this stuff is production-ready yet….. what I am saying though… is that I love this kind of stuff and can see it being the future!

Quick Question on VSAN vs XtremIO Architectures

This is just a really quick one……

I noticed that upgrading an existing VSAN to the newly announced VSAN 6.0 is a non-disruptive rolling upgrade.

Fair enough, so it should be, right?

Well…… part of the upgrade requires what sounds like a fairly significant underlying on-disk layout change.  Basically, the upgrade to VSAN 6.0 updates the underlying layout to a new format based on VirstoFS technology.  Sounds major to me!

So with that in mind…. I think an online rolling upgrade is fairly impressive.  Well done to the VSAN team.

So If VSAN Can Do It Why Can’t Others?

The question then begs….. if VSAN can upgrade the underlying disk layout on the fly as part of an online upgrade, then why can’t XtremIO (or any other modern storage technology for that matter)?!?!?!

Is the VSAN architecture superior?

Now I get that there are different use-cases for storage technologies, and that there’s not a one-size-fits-all architecture. But on the topic of non-disruptive upgrades… is VSAN a superior architecture?

Am I Missing Something?

Maybe I’m missing something. Maybe the on-disk changes that VSAN is making are different and less fundamental than those recently made by XtremIO?

Just a curious guy asking a genuine question.

Comments welcome (please state your vendor affiliation).

Docker on Windows – Some Insight

So last week I had a cracking and informative conversation on Twitter about native Docker support on the next version of Windows Server. So I thought it only right to scribble down the good stuff here for anyone who didn’t get a chance to listen in and get involved.

It all started when Stu Miniman from Wikibon engaged with me over my previous post ESXi vs Hyper-V – Could Docker Support Be Significant. And to cut a long story short, Stu looped in Nick Weaver (formerly EMC, now at Intel), and Nick looped in Solomon Hykes (founder and CTO of Docker Inc). Anyway…. a bunch of stuff was discussed and the following is what interested me the most.

ALERT: Looks like the guys at MS won’t be shipping Windows Server vNEXT for a vLONG time – thanks to Ewan Leith for pointing that out to me.

On the Topic of Native Docker Support on Windows

First up….. when I’m talking about native Docker support on Windows Server, I’m talking about Windows running the Docker daemon – the core Docker engine. Meaning we’ll be able to take Windows Server and launch Docker containers on it, all natively. That’s what I’m fully expecting Microsoft to ship at some point in the next version of Windows Server – whenever that happens to be!!!!

NOTE: Now of course…. these containers will be leveraging the Windows kernel, so Linux containers won’t run on a Windows Server running Docker. Let’s be clear about this, Docker containers share access to the kernel of the host machine they’re running on, meaning Windows apps/containers will only run on Windows Docker hosts, and Linux apps/containers will only run on Linux Docker hosts.

Anyway….. on to the interesting stuff….

Porting Docker to Windows is Non-trivial

First up, Solomon states that porting Docker to Windows is non-trivial – obviously the Windows and Linux kernels are very different. But he does point out that the core abstractions of processes, files, networking etc are all the same.

[Screenshot: tweet from Solomon Hykes]

He also points out that the Windows guys have the luxury of starting from scratch, but also the extra requirements of the Windows Registry.  No wonder Windows Server vNEXT has been pushed back to 2016….

[Screenshot: tweet from Solomon Hykes]

Copy on Write and Union Mount Filesystems

Solomon then says that the copy-on-write/union mount filesystem approach that Docker images and containers currently rely on will be very different on Windows.

[Screenshot: tweet from Solomon Hykes]

Obviously Windows isn’t anywhere near as rich as Linux when it comes to filesystems and union filesystems – my words, not his. But he does point out that the Docker storage backend is already pluggable. For example, a bunch of different back-end filesystems can be used by Docker, including AUFS, devicemapper, BTRFS, and more recently OverlayFS.

So at the core of Docker 1.x is the need for a CoW filesystem/union mount backend – the way images are built, and the way containers are launched is all built on top of CoW and union mounts.
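
To make the CoW/union mount idea a bit more concrete, here’s a toy model in Go. To be clear, this is nothing like how the real backends (AUFS, devicemapper, BTRFS, OverlayFS) are implemented – they do their magic at the filesystem or block level – but it shows the behaviour Docker builds on: stacked read-only image layers, reads falling through the stack top-to-bottom, and writes getting copied up into a thin writable layer.

```go
package main

import "fmt"

type layer map[string]string // path -> file contents

type unionFS struct {
	writable layer   // the container's thin writable layer
	image    []layer // read-only image layers, topmost first
}

// read returns the first copy of a file found, starting at the top.
func (u *unionFS) read(path string) (string, bool) {
	if v, ok := u.writable[path]; ok {
		return v, true
	}
	for _, l := range u.image {
		if v, ok := l[path]; ok {
			return v, true
		}
	}
	return "", false
}

// write never touches the image layers -- the new version of the file
// is "copied up" into the writable layer (copy-on-write).
func (u *unionFS) write(path, contents string) {
	u.writable[path] = contents
}

func main() {
	c := &unionFS{
		writable: layer{},
		image: []layer{
			{"/etc/motd": "layer 2: app layer"},
			{"/etc/motd": "layer 1: base OS", "/bin/sh": "busybox"},
		},
	}
	v, _ := c.read("/etc/motd")
	fmt.Println(v) // topmost layer wins: "layer 2: app layer"

	c.write("/etc/motd", "hello from the container")
	v, _ = c.read("/etc/motd")
	fmt.Println(v) // the copied-up version now shadows the image layers
}
```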

NOTE: I cover this in my Docker Deep Dive course. In particular in Module 6 – A Closer Look at Images and Containers.

However, Solomon stated that *changing this requirement* will help. And when I asked, he said that the v2 Image format will be taking a step in that direction.

[Screenshot: tweet from Solomon Hykes]

The Docker Execution Driver

Now, some other Linux goodness that’s core stuff for Docker containers is kernel namespaces, cgroups, and capabilities. The short and skinny of these is as follows (with a quick namespaces sketch after the list) –

  • Namespaces let us carve up things like the process tree, the filesystem, networking etc so that each container gets its own unique and isolated view of each – basically making a container think it’s got PID 1 and the root filesystem (/), without knowing about other containers on the system….
  • cgroups let us control resource utilisation for containers
  • Capabilities let us get very granular with container privileges
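
And here’s that namespaces sketch – a minimal Go program (Linux only, needs root) that launches a shell in its own UTS, PID and mount namespaces. Inside it, ps shows a near-empty process tree with the shell as PID 1, and hostname changes don’t leak out to the host. This is the raw kernel plumbing – Docker’s libcontainer drives the same facilities, just with a whole lot more sophistication.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch /bin/sh in fresh UTS, PID and mount namespaces.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```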

All of these are vital to a solid container system like Docker, and all are native in the Linux kernel. So the question begs…. does the Windows kernel have anything similar up its sleeve?

Well your guess is as good as mine…. and apparently as good as Solomon’s.

[Screenshot: tweet from Solomon Hykes]

There’s a bunch of talk in social media about whether things like App-V, ThinApp, Drawbridge etc might be able to provide some of this functionality.  But it’s all speculation at the moment.

Microsoft and Open-source

Now then, on the topic of stuff like namespaces, cgroups and capabilities…… The way that Docker leverages these in a Linux environment is through a pluggable component called the execution driver.

And when Docker started out in life, it used LXC as its execution driver. However, this wasn’t ideal, as it was central to providing Docker with access to vital kernel features, and it was essentially borrowed technology. So in order to give themselves more control over core Docker components, Docker Inc chose to implement their own execution driver called libcontainer.

NOTE: Again, this is all covered in my Docker Deep Dive course

Anyway, the fact that Solomon says most of the work in this area is on the shoulders of the Windows guys suggests – at least to me – more of an LXC-based approach to the execution driver on the Windows platform (where Docker Inc are at least in part reliant on others – Microsoft in this case).

Also….. and this is nothing short of rash unsupported speculation from me here….. but if it were the case that MS owned the native execution driver on the Windows platform and kept it closed-source…. would it give Microsoft the opportunity to “take the ball home” once they’ve sucked what they can from the Docker ecosystem???? But that’s just scandalous speculation from me – don’t read anything into it!

Docker and Open-source

Anyway…. the conversation sparked me to ask Solomon whether or not the Docker execution driver and storage driver for Windows would be open source – it’s well known that the guys at Docker are passionate about doing everything in the open. Solomon’s reply was that they won’t merge anything into Docker that isn’t open source. Cool!

[Screenshot: tweet from Solomon Hykes]

But since the conversation, I’ve wondered…… does that mean the code will be open sourced…. or does it mean that it won’t be, and those particular pluggable components simply won’t be “merged”? Hmmmmm????

All in all a great informative conversation! I hope my thoughts and opinions haven’t munged the facts with too much speculation and outright fiction :-D

A massive thanks to Stu, Nick and Solomon for being part of the conversation. These are the types of guys I could spend all day talking to and learning from!

Comments welcome.

 

ESXi vs Hyper-V – Could Docker Support Be Significant

Thinking out loud here….

Did VMware shoot themselves in the foot ditching Linux?

I remember wondering a while back whether VMware had shot themselves in the foot by ditching Linux and open source in favour of a proprietary kernel which allowed them to keep everything to themselves. Let’s not forget, when they did this, there was no Linux container madness, no Docker, and open source wasn’t changing the world in quite the same way as it is today.

Open source is where it’s at today

But now the world is crazy with container fever, Docker is leading the charge, and open source is partying like it’s 1999. And suddenly…. the world is a whole different place. So…… if VMware had stuck with Linux and truly embraced open source, would they be in a better position to leverage containers? Would vSphere be able to run VMs and Docker containers side-by-side?

Is Microsoft coming in from the left field?

But how’s about this for a bizarre possibility…. Microsoft has pre-announced native support for Docker containers in the next version of Windows Server – and yes, I know the detail is suspiciously lacking…. So, could Microsoft be about to ship a hypervisor platform (Windows Server/Hyper-V) that does VMs and Docker containers?!?!?!

UPDATE: I need to point out that native Docker on Windows will only support and run Windows-based containers (unless of course Microsoft pulls something truly incredible out of its hat… which, with all due respect, I’m not expecting).

And if so…….. will that be a significant factor in the Hypervisor war and give Hyper-V an edge as we transition to a container centric world?

Isn’t Azure more significant?

Sure, I know that Azure and the race for hybrid cloud is gonna have its say in ESX vs Hyper-V….. But could native support for Docker containers be a significant factor too?  Native type-1 hypervisor, native Docker containers, backed by the Azure cloud…. sounds pretty decent to me.

Hang on…. this is all assumption!

Now this is all assuming Microsoft actually comes good on its hint of native Docker containers. Plus… I’m assuming that the Hyper-V role will allow containers and VMs at the same time (big assumption).

To me, it looks like it might be an interesting factor in the hypervisor war.

Like I said at the top, just thinking out loud!

One last question…

One last question: If open source was rockin it when VMware was born, like it’s rockin it now….. would they have open sourced ESX?

Comments welcome.

Need to get up to speed on Docker?  Check out my new Docker Deep Dive Course.  Or check out the free sample module on YouTube.

Want to Learn Docker – LOOK NO FURTHER!

There’s no arguing about it…. Docker is a hot trending technology and looks set to have a dazzling future. And it’s my personal opinion that it could be the next VMware.

With the above in mind, there’s no time like now to skill yourself up so that you can take part in the container revolution. So on that topic…… I honestly believe that my recently released Docker Deep Dive course – available on Pluralsight – is the best way to get yourself up to speed and in-the-know!

Hang on… Don’t I Have to Pay for Pluralsight?

ALERT: At the time of writing, there’s a 10-day free trial. So go check that out!

Now I know that Pluralsight is a subscription-based service (either $29 per month or $49 per month at the time of writing). But for your monthly subscription, you get access to the entire Pluralsight library (thousands of courses). So not just my Docker Deep Dive course, but also all of my other courses and everything else in the library.

Free Module on YouTube

Even so…. it’s still hard-earned cash that you’ll have to part with. So what we’ve done is make one of the course modules available for free on YouTube – so that you get an idea of what you’ll be getting for your money. Go try it out, I think you’ll like it!

What’s in This Free Module?

The free module is the second from last module in the course, and is a lightning-fast recap of some of the cool stuff we’ve learned earlier in the course.  So be warned, we don’t go into detail explaining stuff in the free module – we’ve already done all of the explaining earlier in the course.  This free module just rips through a bunch of cool stuff already learned.

I picked this particular module to be the free one coz I think it gives an idea of what’s covered in the course and what the course looks and feels like.


Anyway…… Check out the free module. If you like what you see, go sign up at Pluralsight and learn some serious Docker goodness.

If you don’t like what you see, fair enough, thanks for watching anyway.

Thanks for reading, and good luck in your career!

Stop Hiding the Negatives!

I was asked to comment on Chad Sakac’s post about not trusting people who *go negative*.  Basically Chad is telling us not to trust anyone that says negative stuff about a technology.  Frankly I think that notion is dangerous and absurd!

NOTE:  I’ve no beef with EMC, XtremIO here. I just think what Chad is suggesting is dangerous!

Sure….. none of us like a vendor, or a friend for that matter, who is constantly negative. But since when were we not to believe anyone who has anything negative to say about something? As if saying something negative means you shouldn’t be believed. Dare I say such a tactic is practiced by many religions of the world – and we have a name for that: “brainwashing”. “Don’t believe anything bad anyone says…. they’re evil and have an evil agenda…”. Okaaaaaaay, walk away quickly!

Sounds scary to me!

It’s vital to know the negatives

Question: Since when did any of us ever make a major decision without an honest evaluation – of the good and bad?

Answer: Hopefully never!

I’m about to move house.  Do you think I’ll be asking the structural surveyor to only report on the positives?  Do you think I’ll ask the drain surveyor to only tell me about the drains that work?  Do you think I’ll base my entire purchasing decision on the positive marketing of the agent selling the house (whose job it is to sell the house for the maximum price)?  What a laugh!

I’d be nuts to do that! I’ll have an honest assessment of the property thanks! Warts and all! To suggest I’d do otherwise is nothing short of reckless.

So why would it be any different buying a technology solution?  It wouldn’t!

Are You Trying to Hide Something?

I always tend to think that those who want to cover up or discredit *the negatives* are the ones that have something to hide.  It’s just an opinion, but I think it’s one that’s served me well so far in life.

I’m not having a dig at EMC here, I’m talking generally – all vendors, all technologies, probably all aspects of life.  In fact……. go back to my house purchase analogy….  The sellers who don’t want you to look at the negative reports are probably the ones that have something to hide/fear – blocked drains, subsidence, lack of insulation etc…

A Healthy Balanced Opinion

Let’s face it, assessing the positives and the negatives are vital in forming a healthy and balanced opinion.  I challenge anyone to disagree with that.

What?  Are we supposed to just read the positive reports and ignore anything else?  As if!

As a purchasing customer, I would always analyse everything – positive and negative.  And what I’d come out with was a healthy, balanced, well-informed opinion.

What kind of a fool would I look like if I put a recommendation together and my CIO came and asked why a certain negative aspect of a product wasn’t considered or factored into the decision?  Would I be expected to tell him/her that I was only considering the positives?  What a laugh!  Clearly not.

The Moral of the Story…

Let’s face it…… not always….. but too often…… there are lies, damned lies, and anything a vendor says about itself. Sad, but known to be true!

So the moral of the story…….? Leave no stone unturned!

If you do leave stones unturned…. I guarantee there’ll be a venomous snake under there that’ll bite you and you’ll only have yourself to blame.

Don’t say you weren’t warned! Suggesting you shouldn’t read alternative opinions is dangerous and reckless. And I honestly don’t think vendors should be advocating such a practice!

Thoughts and comments welcome – please declare any vendor affiliations.


A Container First Approach

It seems like everyone I speak to about Docker containers thinks they can’t replace virtual machines.

Well…….. I beg to differ!

I thought containers could only run a single process?

First up, why do people think containers can’t replace VMs?

Well…. from my extensive scientific research (a.k.a. my gut-feel after conversations with various people) it seems that because containers are soooo great for single-process, microservice-type architectures, people just assume that’s all they’re good for. Add to that, a bunch of the messaging coming out of Docker Inc has subtle undertones that if you’re doing anything other than single-process containers, then you’re doing it wrong.

Well I disagree with the above.

Don’t get me wrong, I totally *get* the one-process per container thinking –

  • One concern per container
  • Totally magic for anything remotely associated with terms like web-scale and microservices
  • A process crashes.. just restart the container
  • Treat them like cattle ‘n all….

The list could go on…. and I’m totally cool with it all. But what I’m not cool with is the idea that this is all containers are good for. Just because containers are awesome at one thing does not automatically mean they can’t be the bizzo for other use-cases as well.

NOTE: Need to learn Docker? Check out my Docker Deep Dive course over at Pluralsight to get yourself up to speed!

Wait… containers can run more than one process!

The fact is, containers absolutely can run multiple processes. And lots do. In fact, projects like phusion/baseimage are all about containers running multiple processes.

The way I see it, stuff like phusion/baseimage is a cracking example of innovation on a new and revolutionary platform. And I’m expecting the future to be rich with init processes and forks of systemd hand-crafted for running inside of containers! In fact, the way I see it, container-focussed versions of apps and init systems might even dwarf their full-blown-OS cousins at some point in the future. And without getting too carried away…. gazing further into my crystal ball, I can see a potential future where everything’s container-optimized, with just the occasional VM in the corner.
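
To show what I mean by an init process hand-crafted for containers, here’s a deliberately tiny sketch in Go. The service binaries and flags are placeholders – swap in whatever your container actually runs – and a real container init (phusion/baseimage’s, for example) does far more: restarts, logging, ordered shutdown. But it shows the one job a multi-process container’s PID 1 absolutely must do – reaping orphaned children so zombies don’t pile up.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Start a couple of long-running services. The binaries and the
	// foreground flags here are placeholders, not a recommendation.
	for _, svc := range [][]string{
		{"/usr/sbin/syslogd", "-n"},
		{"/usr/sbin/ntpd", "-n"},
	} {
		cmd := exec.Command(svc[0], svc[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Start(); err != nil {
			log.Printf("failed to start %s: %v", svc[0], err)
		}
	}

	// As PID 1 we inherit every orphaned process in the container,
	// so reap children whenever SIGCHLD arrives -- otherwise zombies
	// accumulate for the life of the container.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGCHLD)
	for range sigs {
		for {
			pid, err := syscall.Wait4(-1, nil, syscall.WNOHANG, nil)
			if pid <= 0 || err != nil {
				break
			}
		}
	}
}
```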

Disagree? Think I’m off my Rocker?

I probably am.

Magic, you’re entitled to your opinion. But consider this…… a lot of the conversations I’m having about containers and Docker are almost exactly the same as the initial conversations I was involved in about virtualization and VMware.

It’s just like the early days with VMware

So wind the clock back 10-12 years, to the time I was first introduced to VMware….

Sure, my initial reaction was “wow this is cool”. But that reaction was immediately followed by “but it’s not for production!”. I specifically remember the discussions and thought patterns being pretty much – this is great for labs and testing, and maybe some lightweight infrastructure services like DNS and the likes. But definitely not for mission-critical stuff, definitely not for email, and definitely not for databases etc.

Well…. wind the clock forward 10-12 years to the present, and what have we got?

Most people now have a VM first approach to application deployment, and deploying to a physical server is frowned upon.

So what happened?

Well…. the VMware technology matured and so did the ecosystem. Not to mention users got more comfortable with it, and the vital fact that vendors started officially supporting their apps on VMs.  Suddenly the camel had its foot inside the tent!

Well guess what….. I bet the same will happen for containers. Today people’s initial reaction is that containers are only for certain types of apps and use-cases, but not for others.  Sound familiar?

Well I’m willing to bet that things will grow and mature to the point where we have a container first approach to app deployment, and deploying to a VM will be frowned upon!

Anyway, even if you disagree with me on the container first approach, the fact remains that containers absolutely can run multiple processes. And therefore are a threat to VMs.

Will multi-process containers be an interim solution?

Yes, almost definitely.

But that interim could end up being a loooong time. The world doesn’t change overnight. I’m sure we’d all love to be developing and running native scale-out microservice apps. But that’s a long way off for a lot of companies. Multi-process containers could well be a good interim solution for those customers.

Are VMs gonna become extinct?

No, just relegated to the same corner of the data centre as the Mainframe.

Am I a VM hater?

No. I’m actually a fan. But that doesn’t change the fact that they’ve had their day. And let’s be honest, it was a long and glorious day! But trying to stop the sun setting on it is a waste of time, talent and energy.

Am I a container fanboy?

Yes, maybe.

Thanks for reading.

CoreOS Fires a Rocket Up Docker’s ****

Quickly setting the scene……. Containers are rocking the IT world. I think they’re the next generation of virtualisation technology. And I think in time they’ll supersede the virtual machine as the dominant runtime environment for applications. And Docker is emerging as the de facto standard in this new world of containers. And that’s all well and good. But amid all the hype surrounding containers and Docker, CoreOS decided it didn’t like what it was seeing, and launched a container standard of its own called Rocket!

So…… let’s take a closer look……

Is Rocket a Threat to Docker?


Q. Do I think Rocket is a threat to Docker?

A.  Hmmmmm….. Yes and No.

Q.  Do I think Rocket can be successful?

A.  Absolutely yes!

A bit more detail….

Competition is king! It drives innovation like nothing else I know. So the way I see it…… Rocket and Docker competing will only serve to drive innovation from all parties. So yes, Rocket is a threat to Docker. But that threat will only serve to improve Docker. And if that’s all Rocket accomplishes, it’ll be worth it. But I hope Rocket will do more than that, and go on to be a thriving container technology itself!

NOTE: Need to learn Docker? Check out my Docker Deep Dive course over at Pluralsight to get yourself up to speed!

Is There Room for Rocket and Docker?

Hell yes! The container ecosystem is young and has plenty of oxygen for everyone! In fact, Rocket validates the container market and Docker, and can potentially contribute significantly to the ecosystem.

And what the heck….. life in the container ecosystem would get pretty drab if Docker remained the only game in town. How boring would the hypervisor market be if all we had was KVM, or vSphere! Same would be true of Operating Systems — as good as Linux is, we need Windows, BSD, and all the rest, to keep things fresh and moving.

So Why Did CoreOS Do It?

Before I answer that…… once any company or technology gets to a certain size, there’s always gonna be people who think they should be doing things differently. That’s just life!

Sure….. for the guys at Docker it might sting a bit — especially when we consider the fact that Docker and CoreOS were such good playmates. But every company that matters, and every technology that matters, stirs up opinions and emotions. And this just proves that containers matter, and Docker matters!

Anyway, back to why CoreOS have done this….. Docker is doing amazing things. They’re breaking new ground and innovating like there’s no tomorrow. As are CoreOS! But the two companies have different views of the future, and slightly different goals. CoreOS wants/needs a lean and mean container runtime, whereas Docker appears to be evolving into something a whole lot more than just a container runtime. In essence, CoreOS is looking at Docker and seeing something a whole lot more than what it originally jumped into bed with.


And while CoreOS have a point, I certainly wouldn’t class the Docker daemon as bloated. That said, it’ll be interesting to watch and see how things like clustering, networking, orchestration, potentially storage and more get implemented into Docker. Who knows, rival container technologies might serve to influence how this kinda stuff shakes out with Docker.

Conclusion

So much to say about this situation, and all of it good!

Containers are shaping the future of enterprise tech, and there was never gonna be just one container standard. In fact I expect we’ll see even more rival container technologies spring up in the coming months. And that’s great.

The way I see it, Rocket is a massive compliment to Docker and a massive validation of the container market.

And I’m pretty flipping excited to see what Rocket brings to the table — not to mention any other potential implementations of the App Container specification.

So kudos to the guys at CoreOS for having the cahoonas to stand up against a raging juggernaut! In fact….. half of me kinda hopes these two guys will battle it out, toe-to-toe, over the next few years — that would make for one heck of a cool future!

Anyway…. taking a step back and analysing things, I’m struggling to see a downside! Exciting times!

…… I just hope Docker doesn’t turn into a *big company* too soon!

Nimble Storage – Heir to the Throne of the SME?

I’m seriously thinking the recent announcement that Nimble Storage will support Fibre Channel might herald the birth of a new major primary storage player. And I seriously mean a major player… right up there with EMC, NetApp, HDS, HP…

How come? Isn’t the FC market flat?

Well, maybe. But it’s still a huge old market. And up until now it’s been one that Nimble simply doesn’t compete in. And so far, that’s been a good thing…. well… at least for the likes of EMC, HP and HDS.

Nimble Storage – A Proper Storage Company

In my opinion, Nimble Storage are a proper storage company. But what do I mean by proper storage company?

Well…… it’s my opinion that they’re past being a risky start-up. They really look like they’ve got serious intentions of sticking around and making a go of it. They’ve got good tech and good people. And they’ve got a healthy growing install base.

Oh, and any time I write or tweet anything remotely critical of Nimble, I get fanboys leaping to their defence. And while I’m not a big *fan* of the fanboy thing, it shows that they have a pretty satisfied and loyal customer base. Not to mention the fact that a ton of their customers seem to trust the tech enough to upgrade it during prime-time business hours….

What Will This Mean?

The two things that immediately spring to mind are –

First up, this’ll open up a ton of doors for Nimble. I’ve worked at plenty of shops in the past that wouldn’t entertain the likes of Nimble purely because they didn’t have a Fibre Channel play. Well, they’ve got that now. The only major thing I’m not quite sold on is scalability.

Second up, this isn’t good news for the likes of EMC, HP, HDS, Dell etc. Especially in the SME space.  I think the really big enterprise shops still have a cultural problem where they feel safe (wrongly IMO) buying from the traditional safe houses of EMC, IBM, HDS etc. But outside of the really big enterprises, and definitely the SMB market, folks aren’t so scared of building on top of newer and potentially more innovative tech. So while I think Nimble have already made inroads into NetApp accounts, they’ve now got the missing piece that will allow them to start eating into EMC, HP, HDS accounts…

Conclusion

I am seriously wondering whether this might be the start of something huge for Nimble – kinda like the butterfly coming out of the cocoon. This could be of 3PAR proportions.

Speaking of 3PAR…… while Nimble have been adding FC, last I looked 3PAR still doesn’t do proper multi-protocol…..

Sorry Nimble Storage – I Don’t Believe You!

So a few weeks ago I was at Nimble Storage HQ for a briefing with the Storage Field Day crew. Nice site, nice breakfast, and thanks for contributing towards my travel and hotel costs (that’s the disclaimer out of the way).


The Claim!

Anyway, as part of the presentation, Rod Bagg (VP of Customer Support) made the claim that ~60% of Nimble storage arrays get upgraded during business prime time hours. Seriously! During the middle of the business day! And Rod clarified that these are production systems, not just test systems in a lab.

Now….. call me old fashioned, but I couldn’t swallow that!

The evidence is in the video, at the 12:00 – 15:00 mark….

A Great Product or Bad Admin Practices?

The Nimble angle is that this is a huge testament to the quality of their product. To me….. it’s either incorrect data, or testament to dangerous administration practices.

All I can say is…. with what I know about IT infrastructure, that would not happen on my shift!

Don’t get me wrong, I’m not saying that the Nimble product isn’t good – to be honest I actually like it. But to me, no matter how good a critical shared infrastructure product is, it’s still too risky to upgrade during business hours.

I know that the world is changing, but has it changed that much?!?!?!?

You see…. no matter how reliable a product is, it only takes one thing to go wrong to bring it to its knees.

Put another way… if I signed off on an upgrade to a shared Nimble storage array in the middle of the business day, and that upgrade *went south*, I’d expect to be marched off site and asked never to set foot in the building again.

My Experience With Upgrades

Now I know things can go wrong in the middle of the business day even when a system isn’t being upgraded. But in my experience, the risk is a lot higher when performing an upgrade. I’m used to doing things like –

  • making sure we have spare drives on site during upgrades
  • making sure we have the vendor support duty manager’s mobile phone number
  • making sure my technical staff don’t have any other plans for the day….

…all as preparation for the worst.

And I’ve had bad things happen during upgrades. And every time, I was damn grateful that we’d started our upgrades early Saturday morning and had all day Saturday and Sunday to mop up if things went wrong.

I had one time where I was at the cinema watching the new Star Trek film when one of my team called me to tell me about a certain storage array that had gone down. That was at about 8pm on a Saturday evening and it only got fully back up and running at about 09:30am Monday morning – after core business hours had started! It wasn’t great.

Am I Wrong… Do I Need To Get My Butt Out Of the 1990’s?

I’m not doubting the Nimble product here. But I am doubting the practice of upgrading core infrastructure in the middle of the business day.

Now….. am I living in the past? Nimble aside…. is it safe to be upgrading core infrastructure components in the middle of the business day?