VSAN Is No Better Than a HW Array

By Nigel Poulton | October 20, 2014

One of the major topics we discussed in last week’s episode of The In Tech We Trust Podcast was VSAs vs VSAN. And I think it’s a pretty interesting topic.


VSAs Commoditize the Hypervisor

The way I see it, the VSA approach offers more choice, more competition, and is ideologically superior.

I’m a huge believer that competition drives innovation, as well as value for customers. Not only are there a ton of VSA products to choose from, but VSAs commoditize the hypervisor. I mean… as long as a VSA vendor supports their VSA on multiple hypervisors, there’s absolutely no reason why a VSA running on top of vSphere can’t replicate with a VSA running on top of Hyper-V. And if you’re on vSphere today, you can absolutely switch over to Hyper-V tomorrow (or vice-versa). And the notion that the hypervisor no longer matters (i.e. it’s commoditized) is a great idea in my book – it’s the next natural step after the commoditization of hardware.
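To make that concrete: VSA-to-VSA replication is just traffic between two VMs over IP. Here’s a minimal sketch (illustrative only – the wire format, port and hostname are all invented) of why nothing in that path knows or cares which kernel sits underneath either end:

    import socket

    # Illustrative sketch: VSA replication is plain TCP between two guests.
    # The framing (offset + length + payload) and the names here are invented.
    def send_dirty_block(peer: str, port: int, offset: int, data: bytes) -> None:
        """Ship one changed block to the replica VSA, wherever it runs."""
        header = offset.to_bytes(8, "big") + len(data).to_bytes(4, "big")
        with socket.create_connection((peer, port)) as conn:
            conn.sendall(header + data)

    # A VSA on vSphere could call this against a VSA on Hyper-V:
    # send_dirty_block("replica-vsa.example.com", 9000, 0, b"\x00" * 4096)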

But VSAN has Advantages

That all said, I recognise that compared to a kernel-based storage controller architecture like VMware VSAN, the VSA approach has some challenges. After all, VMware VSAN can benefit from being written by guys who know vSphere and its roadmap inside out. And… in general, tightly integrating something at the kernel level often brings stability and performance benefits. So I see some obvious short-term benefits to the VSAN approach.

But VSAN is Hypervisor Lock-in

But… VSAN locks customers into vSphere – and no matter what anybody says, lock-in is never a good thing. Sure, there can be short-term benefits. But unless you’re a fanboy who’s intravenously taking vendor kool-aid… in the long run, lock-in should be avoided at all costs!

VSAN is No Different To Buying a Traditional Array

For me, the VSA approach is the right approach. Buying into VSAN is only one small step away from buying a traditional storage array where the software and hardware are locked to each other. With VSAN the storage intelligence is locked to the hypervisor – which is pretty much the same as with traditional arrays. Seriously, what’s the difference between welding VSAN to the vSphere kernel and welding storage controller intelligence to hardware – a la VMAX, VSP, 3PAR…?!

As good as VSAN might be, it’s the wrong approach.

For me, when I last looked at EMC ScaleIO, I thought I was looking at a technology that could take the market by storm. Fingers crossed EMC throws its brightest minds at it!

39 thoughts on “VSAN Is No Better Than a HW Array”

  1. Chuck Hollis

    Hi Nigel

    Disclosure: I work for VMware, very involved with VSAN, etc. so you’ve been warned!

    I’m thinking you might have missed a few relevant aspects here.

    While I’m sure we can debate the pros and cons of different forms of lock-in, most IT practitioners realize that — at some point — they have to go with something specific, and it’s always good to have a plan B if your first choice doesn’t work out.

    VSAN (and VSAs) run on pretty much all qualified server hardware, so at least there’s that. I think what you’re trying to get at is that VSAN is tied to vSphere (very true), but for a user with an all-vSphere cluster (and there are many of those) that’s turning out not to be a huge concern.

    I was hoping you’d explore the big difference between the two approaches: the management model. VSAs tend to manage storage separately, so the workflows tend to go back and forth between what the VM administrator sees, and what the storage administrator sees.

    By comparison, the VSAN management model is an extension of what the VMware administrator does already. Policies are defined in vCenter, and then automatically applied when VMs are provisioned. There’s no notion of a “storage administrator”, or a separate storage administration interface.

    So far, that’s one of the big feedback points from VSAN users to date — the management model is about as simple as it can get.
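    To make the model concrete, here’s a minimal sketch (illustrative only – not the actual vCenter/SPBM API): the policy is declared once, and every VM provisioned against it inherits its storage behaviour automatically.

        from dataclasses import dataclass

        # Illustrative sketch only -- not the real vCenter/SPBM API.
        @dataclass
        class StoragePolicy:
            name: str
            failures_to_tolerate: int  # host failures the data must survive
            stripe_width: int          # disks each object is striped across

        GOLD = StoragePolicy("gold", failures_to_tolerate=1, stripe_width=2)

        def provision_vm(vm_name: str, policy: StoragePolicy) -> None:
            # With VSAN, the layout below is computed in the kernel, driven
            # purely by the policy -- there is no separate storage-admin step.
            replicas = policy.failures_to_tolerate + 1
            print(f"{vm_name}: {replicas} replicas, "
                  f"{policy.stripe_width}-way stripe per object")

        provision_vm("web01", GOLD)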

    We haven’t yet published performance comparisons of VSAN and VSAs on identical hardware, but — as you might surmise — anything tightly integrated into the ESXi kernel has a substantial advantage. We’re hearing that consistently as well.

    And I guess the final observation we’re hearing is that it “just works” with the rest of the vSphere environment — great, if you’re a VMware customer.

    Thanks for listening!

    — Chuck

  2. Nigel Poulton Post author

    @chuck Thanks for chiming in…. Some thoughts in response….

    I see the short-term advantages of VSAN for customers running a 100% vSphere shop. But VSAN locks them in even further. Open systems and heterogeneity are to be prized. I’ve no issues with a 100% VMware stack – just make it more modular (i.e. VSA). And I’m sure it’s not *currently* a concern for customers with a 100% VMware stack. But who knows what tomorrow brings…

    As for the management model – I agree with you here, this is a good thing about VSAN. That said, I see no reason why VSAs can’t have the same model. I don’t think the current VSAN management model advantages should be a differentiator for long. Open APIs see to that.

    Re the performance… I understand that it could potentially be better – in fact I think I called that out in the article. But performance in this space isn’t the game-changer it used to be. Sure, tightly integrating software and hardware *the old way* yields even better performance, but none of us are screaming for the old days of Symmetrix and USP V to come back.

    Saying *it just works* insinuates that the VSA approach doesn’t, and I honestly think that could be misleading.

  3. Chuck Hollis

    Interesting comments … but I’m a bit confused as to what you’re arguing for (or against) …

    Perhaps a good starting point would be to re-emphasize that the vSphere storage management model (SPBM) is the same whether the customer is using VSAN, a VVOL-compatible storage array, or perhaps a VVOL-compatible VSA. Theoretically, the customer chooses any combination of storage layers. So reasonably open from that perspective.

    And, of course, the APIs are available if you’d prefer to manage using something external.

    I’m guessing that you are arguing that VSAN should work with other hypervisors? There are more than a few least-common-denominator VSAs that do that; the market isn’t exactly rushing to embrace them. And it wouldn’t be VSAN then: part of the product’s appeal is its deep integration with kernel resources. Have you ever taken a close look at how much overprovisioning of server resources is required with many of these VSAs? Clearly, there’s no free lunch here.

    “Open systems and heterogeneity are to be prized”. Maybe if you’re a systems integrator that helps customers deal with complexity. Instead, I’d argue that most folks are desperate for simpler, easier-to-manage environments — witness the interest in converged systems, for example.

    I guess we’ll have to agree to disagree on this one.

    — Chuck

  4. Nigel Poulton Post author

    @Chuck…. More than happy to disagree on this one.

    I’m arguing against VSAN as an architecture – tightly integrating with the kernel smacks of tightly integrating storage SW with storage HW, *old-skool* style. And I’m definitely not arguing that VSAN should work with other hypervisors – I know enough about kernel integration to know that ESXi, KVM (Linux) and Hyper-V (Windows) are all different kernels 😉

    Sure customers all want simplicity, and I see the interest in converged systems. But converged systems can be done with a VSA too – see HP’s recent announcement of hyperconverged systems working with both EVO:RAIL and StoreVirtual. So VSAs can also bring simplicity in the right packaging etc. I see VSAN as a potential way to lock folks in tighter, and while I’m not old enough to remember IBM and the Mainframe, I’ve heard enough stories.

    And to your point about the vSphere storage management model being available to VVOL-compatible VSAs – surely that means touting the management model as a competitive advantage for VSAN is (or at least will be) moot?

    Happy to disagree, I always enjoy the discussion.

  5. Pingback: VSAN, VSA or Dedicated Array? | Architecting IT Blog

  6. Mark Burgess

    Hi Nigel,

    Surely Microsoft has built their entire business on lock-in rather than innovation, but you are suggesting that customers may want the choice to move to Hyper-V.

    If it were not for VMware much of the innovation in the industry over the last 10 years would not have happened – Microsoft’s objective is to maintain the status quo (not a good thing).

    I think we all agree that a kernel-based solution is vastly superior to a VSA and will have the best chance of competing against the higher-end storage arrays (got to be a good thing).

    It just does not make sense to me to build a storage/hyper-converged stack based on a series of VMs that in turn are built from millions of lines of Linux code and a bunch of open source projects.

    Because of the competition created by Hyper-V (clearly a good thing), VMware are under pressure to innovate and distance their technology from the competition. They have been focused on this over the last 3-4 years, and the net result is VSAN, VVOLs and NSX (and other stuff I am sure) – I cannot see how this can be a bad thing, and surely we want them to do more of this.

    There always has to be some degree of lock-in and pros and cons for every solution, so the customer needs to decide which is the best option for them.

    The other point that has not been discussed is the true benefits of a software-defined solution.

    If you take something like a hyper-converged appliance (i.e. EVO:RAIL or Nutanix) then this is the ultimate lock-in – the software is tied to the hardware and therefore you are going to be paying over the top for it – THEY ARE AS FAR AWAY FROM BEING SOFTWARE-DEFINED AS YOU CAN GET.

    The advantage of VSAN is that it is independent of the hardware – the customer can buy whatever commodity hardware makes sense, and they can upgrade to improve performance, reliability and capacity whenever they need to.

    Also the software lives on beyond the hardware – with a storage array or hyper-converged appliance, when you retire the hardware you also retire the software, and you have to buy both again when you buy the replacement (how we have fallen for this I do not know!!!).

    I agree EMC ScaleIO looks interesting, and for customers who must have an OS/hypervisor-agnostic solution it makes a lot of sense, but there is a trade-off.

    Currently it is not a kernel module on vSphere (I believe that is coming), it is not developed by the OS/hypervisor vendor, and most importantly it uses a capacity-based license model.

    The advantage of VSAN is that over time, as your performance and capacity requirements go up, those per-socket licenses will provide increased value, much as they do today with vSphere – therefore the customer will not have to keep going back to VMware for more licenses.

    On the other hand with ScaleIO you will need to keep purchasing further licenses as your capacity increases.

    I know Chuck came from EMC so it would be good to get his thoughts on this.

    I 100% agree with you that lock-in is bad – Microsoft, Cisco and Apple have done a great job of doing this, but to be fair they have many happy customers.

    The bottom line is there will always be a degree of lock-in, so it is down to the customer to weigh up the pros and cons of each solution and decide which is best for them.

    Should Microsoft detach Office, Exchange, SQL and SharePoint from Windows and make it run on every major OS and should VMware detach VSAN, VVOLs, NSX, DRS, FT, SRM and Horizon View from vSphere?

    From the customer’s point of view, yes – but it would be economic suicide for the vendor, so they are never going to do it.

    Comments would be appreciated – it’s an interesting subject as you say.

    I have posted further thoughts on our blog below:

    http://blog.snsltd.co.uk/an-introduction-to-vmware-virtual-san-software-defined-storage-technology/

    http://blog.snsltd.co.uk/what-are-the-pros-and-cons-of-software-defined-storage/

    Best regards
    Mark

  7. Nigel Poulton Post author

    Hi Mark.

    Thanks for getting involved. Some interesting points. Here are some of my thoughts…..

    The point was never about Hyper-V and thinking customers might want to switch to Hyper-V 😀 It was always about hypervisor choice. Replace it in any of my comments with *any other hypervisor* (KVM, Xen…). And I’ve absolutely nothing against VMware and agree they’ve innovated. In fact I like a lot of what they do. My argument is 100% that “where VSAN lets you choose any hardware… VSA lets you choose any hardware AND any hypervisor”.

    To your point about kernel-based solutions being vastly superior – I don’t agree they’re vastly superior at all. I think they may have some small advantages over something like a VSA. But look at hyper-converged stacks like Nutanix and SimpliVity. VSAN (even EVO:RAIL) is certainly not vastly superior to either of those. Though I take your point about EVO:RAIL and Nutanix-style hyper-converged solutions being uber lock-in – that’s a good point I hadn’t really thought about. That all said, although hyper-converged is lock-in, it’s a lot different to the old hardware storage solutions, in that non-disruptive rolling upgrades (HW and SW) should be a whole world better than back in the days of DMX et al. But yes, they are lock-in.

    Re ScaleIO – things like licensing models can easily change, so I’m concerned about that.

    It’s all down to virtualization and abstraction of underlying layers. And the VSA model takes what the hypervisor did with HW, and virtualizes the hypervisor. And obviously I don’t expect the hypervisor vendors to like that. VSAs are basically telling the hypervisor vendors that their crown jewels aren’t that important.

    PS. I’ll ignore your dig at open source. Many of the largest businesses and technology services in the world are built on top of open source. We’d probs be in a 100% MS world if it weren’t for open source :-S

  8. Mark Burgess

    Hi Nigel,

    Firstly I am not having a dig at open source, I totally agree with you that without it the world would be left to Microsoft and tier 1 storage and networking vendors (not good at all).

    Instead, open source has allowed all sorts of start-ups/non-start-ups to compete that without it would have had no chance – it is a great leveller and gives the little guy a chance of competing with the big guys (got to be a good thing).

    What I do think is that this is not optimal from a performance and reliability point of view for a VSA, because it requires far more code to manage, most of it outside your control. Instead, the following are preferable for a server-side storage technology:

    A. Kernel modules – so it is as lean as possible
    B. Built by the OS/hypervisor vendor

    How much better this is than a VSA built by a 3rd party is open for debate, but we all agree it is better.

    The advantage, if it is built by a 3rd party, is that they can make it cross-OS/hypervisor – this is exactly what EMC ScaleIO is all about (I believe it is kernel-based on Linux/Windows, with vSphere to follow shortly).

    The customer then has to make a decision – is it more important that the solution is baked into the OS/hypervisor from a single vendor, or that it is cross-platform from a 3rd party?

    There is no right or wrong answer to this and it is not something you or I can advise on – there are pros and cons no matter which option the customer chooses.

    Again I agree with you that Nutanix is far more feature-rich than VSAN is today, but that is not because it is a VSA (though I would assume development will be faster, as they can take advantage of a whole bunch of existing open source code and projects, and VMware will need to throw more developers at it to keep up).

    I saw a good post today on the future of VSAN (http://www.yellow-bricks.com/2014/10/21/coming-vsphere-vsan/).

    I think we are both more or less in complete agreement on this, but I do not think it is fair to say that VSAN has as much lock-in as a hardware array because of all the software-defined benefits outlined in our blog (http://blog.snsltd.co.uk/what-are-the-pros-and-cons-of-software-defined-storage/).

    The fact that a 3rd party VSA is cross-platform clearly has advantages, but there will be trade-offs with this.

    At the end of the day it is up to the customer to decide which is best for them as each has pros and cons:

    A. Hardware array (i.e. EMC VMAX/VNX or HDS VSP/HUS)
    B. Hyper-converged stack (i.e. EVO:RAIL or Nutanix)
    C. Software-defined hypervisor based virtual SAN (i.e. VMware VSAN)
    D. Software-defined OS agnostic virtual SAN (i.e. EMC ScaleIO)

    There is a place for all of these and it comes down to cost and a number of other factors.

    What if VMware were to include VSAN in most editions of vSphere – would that change your view?

    Comments would be much appreciated.

    Best regards
    Mark

  9. Chuck Hollis

    A few final thoughts on this gentlemanly discussion?

    Nigel brings up the observation that, if you go with a hypervisor-integrated software storage stack like VSAN, you will find it more difficult to switch to a different hypervisor.

    I will grant that point with two caveats. Rehosting an existing production environment on a new hypervisor is not a task that is frequently undertaken. It’s a heckuva lot of work. So it doesn’t happen that often. New builds are different.

    And, as Mark pointed out, you can take your hardware with you if you do find yourself in that exceedingly rare situation.

    I understand your skepticism about the architectural advantage that a hypervisor-based storage stack brings, but the performance and resource-efficiency differences between the two approaches are eye-opening to anyone who’s done A/B testing on identical hardware and workloads.

    No, there isn’t anything published on this (sorry!) but more than a few customers have done this testing, and, if you are so motivated, I would encourage you to do the same.

    This should not be a surprise: hypervisor-integrated storage software has an optimized IO/network path, can use the kernel to dynamically manage required resources, has direct access to the underlying hardware, etc. Architecture does matter.

    I like to argue that the new definition of “open” in our emerging software-defined world should center more on the ability to dynamically compose storage services via APIs, as that will ultimately generate more value for more customer environments.

    And that is largely independent of whether we’re talking about VSAN, VSA, converged bricks, familiar arrays, etc.

    Thanks for the chat!

    — Chuck

  10. Dheeraj Pandey

    The whole management argument for integration is being broken apart. Had that been true, Oracle apps would have continued to rule, and people would never have given Salesforce, Workday, ServiceNow, and others a chance. And this has been true for decades. Oracle won the DB war against IBM, even though IBM was a tightly integrated stack, top to bottom. After a certain point, even consumers started telling Facebook that their kitchen-sink app was not working, which is why FB started breaking that experience apart into something cleaner, usable, and user-experience-driven.

    These are the biggest advantages of running above the kernel:
    o Fault isolation: if storage has a bug, it won’t take compute down with it. If you want to quickly upgrade storage, you don’t have to move VMs around. Converging compute and storage should not create a toxic blob of infrastructure; isolation is critical, even when sharing hardware. That is what made virtualization and ESX such a beautiful paradigm.

    o Pace of innovation: User-level code for storage has ruled for the last 2 decades for exactly this reason. It’s more maintainable, it’s more debuggable, and it’s faster-paced. Bugs don’t bring entire machines down. That’s exactly why GFS, HDFS, OneFS, Oracle RDBMS, MySQL, and so on are built in user space. Moore’s Law has made user-kernel transitions cheap. Zero-copy buffers, epoll, O_DIRECT I/O, etc. make user-kernel transitions seamless (a minimal sketch follows this list). Similarly, virtual switching and VT-x technologies in hypervisors make hypervisor-VM transitions seamless.

    o Extensibility and ecosystem integration: User-space code is more extensible and lends itself to a pluggable architecture. Imagine connecting to AWS S3, Azure, a compression library, security key-management code, etc. from the kernel. The ecosystem in user space thrives, and storage should not lag behind.

    o Rolling upgrades: Compute doesn’t blink when storage is undergoing a planned downtime.

    o Migration complexity (backward compatibility): It is extremely difficult to build next-generation distributed systems without using protobufs and HTTP for self-describing data formats and RPC services. Imagine migrating 1PB of data if your extents are not self-describing. Imagine upgrading a 64-node cluster if your RPC services are not self-describing. Porting protobufs and HTTP into the kernel is a nightmare, given the glibc and other user-library dependencies.

    o Performance isolation: Converging compute and storage doesn’t mean storage should run amok with resources. Administrators must be able to bound the CPU, memory, and network resources given to storage. Without a sandbox abstraction, in-kernel code is a toxic blob. Users should be able to grow and shrink storage resources, keeping the rest of application and datacenter needs in mind. Performance profiles of storage could be very different even in a hyperconverged architecture because of application nuances, flash-heavy nodes, storage-heavy nodes, GPU-heavy nodes, and so on.

    o Security isolation: The trusted computing base of the hypervisor must be kept lean and mean. Heartbleed and ShellShock are the veritable tips of the iceberg. Kernels have to be trusted, not bloated. See T. Garfinkel, B. Pfaff, J. Chow, M. Rosenblum, and D. Boneh, “Terra: A virtual machine-based platform for trusted computing,” in Proceedings of the 19th ACM Symposium on Operating Systems Principles, pp. 193–206, 2003. Also see P. England, B. Lampson, J. Manferdelli, M. Peinado, B. Willman, “A Trusted Open Platform,” IEEE Computer, pp. 55–62, July 2003.

    o Storage is just a freakin’ app on the server. If we can run databases and ERP systems in a VM, there’s no reason why storage shouldn’t. And if we’re arguing for running storage inside the kernel, let’s port Oracle and SAP to run inside the hypervisor!
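    As one concrete illustration of the cheap user-kernel transitions mentioned above, here is a minimal, Linux-only sketch of direct I/O from user space (the file path is an example, and the file must already exist):

        import mmap
        import os

        # Linux-only sketch: O_DIRECT bypasses the page cache, one of the
        # mechanisms that lets a user-space storage stack approach in-kernel
        # performance. The file path below is just an example.
        BLOCK = 4096  # O_DIRECT wants block-aligned buffers; mmap is page-aligned

        fd = os.open("/var/tmp/vsa-test.dat", os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, BLOCK)     # anonymous, page-aligned buffer
        nread = os.readv(fd, [buf])    # data lands straight in our buffer
        os.close(fd)
        print(f"read {nread} bytes without a page-cache copy")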

    In the end, we have to make storage an intelligent service in the datacenter. For too long, it has been a byte-shuttler between the network and the disk. If it is to be an active system, it needs {fault|performance|security} isolation, speed of innovation, and ecosystem integration.

  11. Hans De Leenheer

    1) VSA vs Kernel (VSAN): I frankly care less about the location of the IO controller/management than about the maturity of the solution at hand. As mentioned in the podcast: I’d rather buy a Nutanix solution over a VSAN solution, but I probably would prefer a VSAN solution over a LeftHand solution. If I had to choose between products of similar quality/price/maturity, I’d go for the VSA model just because of the future-proof portability.

    2) Vendor lock-in: we covered that point on this week’s podcast: the difference between historical vendor lock-in and today’s is that we are buying node-based architectures in ‘buy-what-you-need-today’ mode instead of 5/7-year acquisition cycles. That, combined with the fact that I can move VMs to other storage non-disruptively, and with very little disruption to other platforms (Hyper-V/AWS/Azure/KVM), minimizes the tradeoffs of a lock-in.

  12. Pingback: Let’s port Oracle and SAP to run inside the hypervisor |

  13. Chuck Hollis

    Hi Dheeraj!

    Before we get started, it’s considered good form to share with readers your affiliation. Your twitter profile says that you are founder and CEO of Nutanix, and I think that’s an important piece of information to help people understand where you’re coming from.

    I had to catch myself, I almost (!) fell for the bait. As I went through your laundry list, I could easily come up with counter-examples and externalities that would result from your particular world view.

    Nahh, I’m not going to go there 🙂

    For everyone else: Nutanix’s storage approach runs in user space. It can’t run as an integrated kernel service, because they don’t have a kernel, and previous attempts to integrate third-party, device-specific storage software have resulted in some of the problems Dheeraj identifies.

    Not that I’m thinking Windows or anything 🙂

    Perhaps the better way to frame the debate (perhaps better than the pedantic assault here) would be to ask “does it make sense to offer storage services as an integral part of the hypervisor?”.

    As with any technical proposition, there are pros and cons. I don’t agree with Dheeraj’s list of negatives, at least based on our experience with VSAN – there’s absolutely zero evidence to back up his assertions. Maybe with other technologies, though.

    In an effort to be helpful, here’s where I would point out the potential negatives from an architectural perspective.

    – The storage persistence layer is tied to the hypervisor. For some people (like Nigel here), that could be a big deal. Granted.

    – Storage is no longer an external array; compute/network/storage services are all done on the server. Some see that as a positive (integration, efficiency), others see it as a negative (specialization of duties, isolation). Granted as well.

    – If you want a shared storage persistence pool across a heterogeneous environment (physical, multiple virtuals, etc.), a hypervisor-based approach can’t be considered.

    – For traditional storage administrators and architects, it’s an unfamiliar model.

    Uhhh, that’s about it. Everything else Dheeraj mentions has more to do with specific implementations vs. a broader architectural discussion. And almost none of it applies to VSAN specifically.

    I did not hear any response to my core premise regarding hypervisor-integrated storage software: the storage persistence layer has visibility into, and direct access to, all resources, physical and virtual – compute, memory, network, disks, flash, etc. This creates the potential for greater performance, efficiency, and handling of complex scenarios.

    Any storage stack in user space is simply talking to a convenient abstraction vs. the real thing. That comes at a price.

    Thanks again for the chat!

    — Chuck

  14. Mark Burgess

    Hi Guys,

    For me this is not about the pros and cons of which is better – a VSA or a tightly integrated hypervisor solution (and I am sure we all agree there are pros and cons for both) – but about lock-in; it is more about economics than technology.

    Are you going to get better value because there is less lock-in with:

    A. A hardware array (i.e. EMC VNX)
    B. A cross-platform hyper-converged hardware appliance running a VSA (i.e. Nutanix)
    C. A tightly integrated hypervisor based solution (i.e. VSAN)

    A. Locks you to the vendors storage hardware and software, but allows you to use any servers and any hypervisor/OS
    B. Locks you to the vendors storage hardware, software and servers, but allows you to use any hypervisor
    C. Locks you to a vendors hypervisor and storage software, but allows you to use any server and storage hardware

    For me B. has the most lock-in, but of course it will depend on many other factors both technical and commercial for each individual use case – so ultimately there is a case for all 3 and the market will dictate which one is the most successful (clearly A. today).

    Now if B. were available as a purely software-defined solution like VSAN, it would easily win the “least lock-in” contest.

    Best regards
    Mark

  15. Gabriel Chapman

    Good discussion, really bummed I was not able to make the podcast this week to go over this live.

    I’m not sure that “vendor lock-in” is the boogeyman that some of us may make it out to be. Yes, I’ve seen very large organizations make strategic decisions to diversify in order to remove dependence on a single vendor and to create leverage in negotiations etc., but ultimately I think customers are looking for risk-averse solutions that perform at the level required, meet SLAs, and have maturity. If a single-vendor stack provides that to them, then they will buy it and the peace of mind that comes with it.

    In reality the VSA vs vSAN discussion today is kind of a moot point, because VMware does not offer kernel-level integration for all storage vendors, and I can’t see them allowing 3rd-party vendors the same level of access or kernel-level integration that they will provide to vSAN or even other EMC-based solutions in the future.

    I think the one challenge that vSAN faces is the proprietary nature of the storage protocol and obtaining certification for specific workloads. I harken back to the discussion last year regarding Microsoft not offering support for Exchange running on NFS-backed VMDK file systems. VSAs that leverage iSCSI, NFS or SMB will have a leg up in that regard, and while it may not always be a sticking point for all customers, there will be some for which it is, and therefore it remains relevant.

    For me, I’m more interested in what the future portends for EMC as VMware captures the storage stack and eats into the physical array business. Granted, they are all part of the same company, but I can’t see EMC sales teams pushing vSAN ahead – the same with vSAN Ready Nodes or EVO:RAIL-based solutions. Though with VCE coming under direct control and a push being made to capture more of the converged infrastructure stack, that may indeed change.

  16. Simple Simon

    Each person on this thread is standing totally behind where his money comes from. Including Nigel Poulton 🙂

  17. Nigel Poulton Post author

    @Simple Simon – who interestingly doesn’t share his/her real name… I’ve not had a chance to read/reply to any of the lengthy comments today, but I’ll reply to this. While Simple Simon Pieman might be right about some commenters here, I can assure everybody here that I’m taking no money from ANY storage vendor or hypervisor vendor whatsoever – none! None of my money comes from anything to do with VSAs, VSAN, traditional storage etc… zero. I don’t advertise on this site or the podcast I run. Goodnight 😀 BTW looks like some great discussion going on, I’ll catch up on comments tomorrow!

  18. Andrew Miller

    Ah… the traditional “lock-in is bad” argument. 🙂

    I do feel like this comment from Chuck is spot-on….

    “Any storage stack in user space is simply talking to a convenient abstraction vs. the real thing. That comes at a price.”

    To do a tongue-in-cheek TL;DR version… you’re always getting locked in – always. It’s just a question of being aware of where it’s happening and consciously choosing what benefits are worth the exit cost. This presumes that you 1) know what benefits you’re getting and their relative priority and 2) know the exit cost/effort.

    From a VSAN vs. VSA perspective…

    VSAN – locked into ESXi – sure. To get out, what do you do? Migrate your data at a higher level than the lock-in – likely vmdk or application (ironically, most likely via Storage vMotion… aka a feature you get from the hypervisor lock-in 😉).

    VSA – comes from some array manufacturer (NetApp, HP, whoever) – all your data is in whatever format that VSA uses (aka lock-in). To get out, what do you do? Migrate your data at a higher level than the lock-in… wait a minute… isn’t that the same answer? 🙂

    Full disclosure – I work for Varrow (EMC/Cisco/VMware/Citrix/many-more value-added reseller), talk about this stuff for a living and try to be as upfront as I can… “wear my biases on my sleeve” if you will, so people can judge my viewpoints knowing my background (said background ironically has a lot of NetApp (VSA even), EMC, and VMware… go figure).

  19. Hans De Leenheer

    @Anonymous

    Your comment is unworthy of this thread. Every person in IT has BIAS towards what they know, whether they are a customer, selling it, or making it.

    Towards the people in this thread I will give you one example: do you know why Nutanix does not have a CTO? Because Dheeraj IS the CTO, as well as being founder and CEO. If there is one person on this planet who can tell you the merits/tradeoffs of a user-domain-based storage layer, it’s him. If he would not give you those here in the way he did, I would definitely challenge his reasons for building it.

    As for myself: I have (vendor) customers on all sides of this spectrum. Shall I just shut up now?

  20. Mark

    Hi Guys,

    I think we are getting away from the point of this discussion – this is about lock-in and what is bad about it.

    For me, lock-in is when someone buys something and then cannot change it even if they really want to, and therefore the quality of the product and the value proposition are generally not good (i.e. it is monopolistic).

    The best example of this in general IT has to be Microsoft – I think most organisations would love to move to something that was more innovative and cost effective, but due to the need to support legacy applications it is virtually impossible no matter how much money they throw at the problem.

    If VMware went years between bringing out versions of vSphere that people wanted to deploy, and were not adding additional value through technologies like VSAN, VVOLs and NSX, they would go out of business.

    Only organisations like Microsoft can release major products that nobody wants (i.e. Windows Vista and 8) and still make huge profits.

    This is why it is really important that Microsoft Hyper-V does not become the dominant hypervisor – it would then be the ultimate lock-in, and all the innovation we have seen from VMware over the last 10 years would cease. That is not to say we want to be locked into VMware either.

    For a hardware array like a VNX, hyper-converged infrastructure like Nutanix or software-defined solution like VSAN there is no lock-in, just some choice limitations.

    At any point with any of these if a CTO is not happy he can throw them out and move to a new storage/hypervisor stack – it will be a pain and maybe expensive, but there is nothing to stop him.

    Competition is therefore working and EMC, Nutanix and VMware need to be continually innovating and increasing the value proposition of their products to prevent customers moving away.

    On the subject of bias, this is the biggest problem in IT, and therefore it is never truly possible to get honest independent advice. The best you can do is get advice from someone who can give you options; the problem is that a vendor is nearly always going to be the most biased of all, because they can only sell you what they have.

    For some reason customers seem to believe they will get the best advice from a vendor, because the vendor surely must know the most about their product – which is clearly true. What they are not taking into account is the significant bias that must be influencing this advice.

    As pointed out previously, a customer or reseller (I work for a reseller) can typically be far more unbiased, but equally this comes down to what they know. If you only have experience of working with one or two vendors, you will still be heavily biased towards those.

    I therefore hope, like Nigel, my contributions to this discussion are “relatively” unbiased.

    Best regards
    Mark

  21. Nigel Poulton Post author

    @Everyone – cracking discussion, thanks so much for your input!

    Been busy, so going through comments in order and adding my thoughts where I feel necessary –

    So my point about lock-in was specifically *hypervisor* lock-in. I know I made a comparison to HW storage array lock-in from the past – maybe that wasn’t a great comparison. To be clear, in the blog post I refer specifically to hypervisor lock-in. I absolutely think VSAN leads to *hypervisor* lock-in by adding value to the hypervisor, which IMHO is *similar* to adding value to HW. I think it’s about time the hypervisor was commoditized more than it currently is. Maybe hypervisors should have hooks like Intel VT – where we could have a few storage/networking/etc. specific hooks and offloads into the hypervisor. And how cool would it be if these hooks/APIs were open and common across hypervisor platforms (technical and political challenges aside).
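    Something like this purely hypothetical sketch – no such common API exists today, and every name in it is invented for illustration:

        from abc import ABC, abstractmethod

        # Purely hypothetical: the kind of open, cross-hypervisor storage
        # hook described above. Every name here is invented.
        class StorageHook(ABC):
            @abstractmethod
            def map_guest_block(self, vm_id: str, lba: int) -> int:
                """Translate a guest block address to a host device offset."""

            @abstractmethod
            def register_datapath(self, on_io) -> None:
                """Let a storage layer intercept guest I/O without a full
                round-trip through a VSA's virtual NIC and vSwitch."""

        # A VSA written against StorageHook would run unmodified on any
        # hypervisor that implemented it -- the way VT-x commoditized CPU
        # virtualization across vendors.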

    Now I totally *get it* that there’s a very fine line between “lock-in” and “value-add”, and I don’t doubt for one second that the developers and techies at VMware see VSAN as purely value-add – the thought of lock-in never occurs to them. But to an outsider it can easily be seen as creating hypervisor lock-in. Yes, I know we can migrate VMs between hypervisors, but that still has a ways to go before it’s great, IMO. But IMHO adding components of the stack into the hypervisor/kernel doesn’t seem the right thing to do. Each level of the stack should be as modular and replaceable as possible. Sure… I know customers don’t have to enable VSAN, but having created VSAN, I’d be surprised if VMware developed storage hooks/APIs etc. designed for VSA architectures. And of course, even if you don’t enable VSAN, you can’t rip it out of the kernel either – the code is in there.

    @chuck You mention that customers don’t switch hypervisors very often. Agreed, and VSAN won’t make that any easier. We need a world in which the hypervisor is as commoditized as the HW and therefore as easy to switch out. Though of course I don’t expect the hypervisor vendors to want that.

    @chuck again….. Can the advantages of hypervisor/kernel-integrated storage not be exposed via APIs so that a strong ecosystem of VSAs can benefit? I think that comes back to my earlier point re Intel VT etc.

    @Dheeraj Thanks for linking your name to nutanix.com, but Chuck is right – it’s also good form to declare your vendor affiliation. No worries though… we’re all learning.

    @Dheeraj @Chuck… I don’t agree that Dheeraj’s points don’t apply to VSAN. Some of those challenges may not have manifested yet, or may be manageable right now, but I think his views are technically sound. That said… I’m not saying the VSA approach is wart-free. Though I’m still favouring the VSA approach 😀

    @Chuck you finished one of your comments with “Any storage stack in user space is simply talking to a convenient abstraction vs. the real thing. That comes at a price.” << Surely the whole premise of a hypervisor is providing convenient abstractions to guest operating systems?? Am I missing something here? What we need, like I mentioned earlier, is hooks into the hypervisor for stack elements like storage.

    Anyway, if nothing else, like Mark Burgess mentioned above… we’ve got a hell of a lot of choice these days (HW arrays, hyper-converged stacks, kernel-based storage, VSAs…) and choice is always good! I’m open to being wrong, but that’s the way I see it at the moment.

    BTW, including this comment, I count 5 comments longer than the actual blog post :-S

  22. andre_leibovici@yahoo.com

    What a great discussion! Well-done everybody.

    In regards to hypervisor lock-in: I’m willing to bet that within a year or less, VSA vendors will be providing disk-format abstraction, effectively enabling any disk type (vmdk, vhd, raw) to run on any hypervisor. Just like NetApp Shift does today.
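    Offline, the mechanics of that abstraction already exist. A sketch (assuming qemu-img is installed; it calls the VHD format “vpc”, and the filenames are examples):

        import subprocess

        # Illustrative only: convert a vSphere disk image to Hyper-V's format.
        subprocess.run(
            ["qemu-img", "convert", "-f", "vmdk", "-O", "vpc",
             "guest-disk.vmdk", "guest-disk.vhd"],
            check=True,
        )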

    The most advanced hyper-converged solutions may even enable live migration between heterogeneous clusters. It will be as simple to move workloads between hypervisors as it is today to move them between hosts using live migration.

    The applicability of such technology is vast, including test/dev/prod, hybrid cloud etc.

    This is real and will only be possible with the VSA route.

    – Andre Leibovici (I work @ Nutanix)

  23. Pingback: Lock-in: Real or Imaginary | VMTyler.com

  24. Jason

    “The cloud” is a big challenge to many assertions in these comments. Smart companies are moving away from treating servers as ‘pets’ and towards treating them as ‘cattle’. If you leverage technologies like Chef, Puppet or Vagrant to drive your server provisioning, it matters not which hypervisor they live on… move an instance recipe from VirtualBox on your laptop, to AWS for broader testing, and then into VMware for production use.
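    A toy sketch of that idea (the names are invented, not a real Chef/Puppet API): the server is described once as data, and the same description is replayed against whatever target is convenient.

        # Toy sketch, not a real Chef/Puppet API: the server definition is
        # data, so the target platform becomes interchangeable.
        web_server = {
            "packages": ["nginx"],
            "services": {"nginx": "enabled"},
        }

        def converge(definition: dict, target: str) -> None:
            # The provisioning tool owns the definition, not the hypervisor.
            print(f"applying {definition['packages']} to {target}")

        for target in ("virtualbox-laptop", "aws-staging", "vsphere-prod"):
            converge(web_server, target)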

    The fly in the ointment is the data services tier (your Oracle, Exchange, etc). Migrating that is the only real challenge in moving a service from infrastructure A to infrastructure B. (And most of those I’ve been able to handle as if it were just a scheduled reboot that took an extra 5 minutes.)

    -Jason (No affiliation with a storage vendor)

  25. Kevin Stay

    Wow. Great discussion. I have really enjoyed reading through most of the comments. (The clown who suggested all the thoughts being expressed were coming straight out of folks’ wallets needs a boot to the head…)

    Disclaimer first: I am an enterprise engineer for a global medical devices company. We have nothing to do with any part of the IT industry and I have no affiliation with any vendors other than as a customer.

    While I agree in principle with the statement “vendor lock-in of any type is bad” – and thus by extension hypervisor lock-in must also be bad – I also feel there is a limit to how much that matters. Every 2 weeks a paycheck appears in my mailbox in exchange for assuring reliable delivery of required IT services to my employer.

    My job is to understand business requirements, distill them into functional requirements that can be turned into an RFP to potential vendors, and turn that into a detailed design specification once a specific solution is chosen. Then I have to match that DDS to installation qualification which bubbles “up” into the performance qualification and SOPs. Finally, and most importantly, I then deliver an initial performance qualification matched directly over to the original business requirements along with the means to assure/measure/monitor an ongoing meeting of those requirements for the life of the solution.

    Yes, vendor lock-in is a consideration in all that. However, it carries orders of magnitude less weight than the need to meet the business needs. (Apologies for all the boring drivel about specifications and qualifications; I presented at spring DatacenterWorld on “From Risk Assessment to Successful Audit”, so I tend to see things through that lens.)

    As regards storage, once past the “commandments” – thou shalt not lose data, thou shalt not interrupt access to data – our needs are pretty much the same as most. We must have a certain capacity and the ability to deliver the required latencies and throughputs as dynamically as possible, where and when needed.

    Easy to write, must be easy to do, right? Solutions that give me that latency and throughput flexibility get a LOT of allowance over any vendor lock-in concerns. At the moment PernixData occupies much of my thinking on how I do that. If I can get what I need using their solution, I am more than happy to accept that, at least for now, that means only vSphere as a hypervisor.

  26. John (@Lost_Signal)

    Couple quick thoughts.

    VSAN lets me protect my hardware investment. If I decide to change platforms tomorrow, I can go install ScaleIO, or StarWind, or something else (I am using SuperMicro FatTwins). Since a good deal of the stuff I have is under VSPP, it’s not like I’m out anything.

    EVO:RAIL is a product I view as having two really obvious markets that people underestimate in size.

    1. People who don’t care about architecture, have bad architects on staff, or for whom the cost of the solution is a joke compared to the cost of time to market etc. If I’m an oil company and want a standard platform to drop in for an HA cluster in 500 rigs, honestly the cost of the solution is kind of a joke compared to the rest of the platform (and the cost of getting labor on site to poke at it). These are people who want to say “I need 4 VMware’s TOMORROW” and not really worry about it.

    2. Sales/VARs. The reps at [insert mega VAR] would rather sell a customer 4 VMware’s than have to deal with the barking carnival that is selling traditional storage, which drags the sales cycle out. They don’t have to worry about a low-bid subcontractor doing a bad job at implementation, as a kindergartener could do it.

    For those of us who have more complicated needs, have existing licensing to re-use, look at price more closely, and have skilled VMware admins lying around, we’ll still use VSAN (just go with Ready Nodes, or the HCL). As time moves on though, I do expect to see faster time to deliver (EVO or not) as building your own CI becomes a simpler affair for everyone. (Right now it trades at a bit of a premium.)

  27. Chris M Evans

    I’m surprised that so many people on this thread are trying to claim lock-in doesn’t exist. Of course it does. Just try moving the music collection you purchased on iTunes to another provider. Try moving your WordPress blog, images and comments to another platform…

    Lock-in absolutely exists where a particular feature isn’t available from another vendor or there is no way to move the data or application – think of all your data in salesforce.com. Think of trying to move data locked on WORM drives or tapes.

    What’s been discussed so far is simply technical lock-in, being tied to a certain hardware or software platform. In reality, most companies find themselves in financial lock-in, having committed to a product (be it software or hardware) and amortised it over 3/4 years. Software licences and maintenance are rarely transferable.

    We should expect vendors to try and build in lock-in. Vendors like NetApp have been doing it for years with features like SnapVault. EMC are the masters at it with ViPR, SRM (previously ECC etc) and ecosystems built on top of their hardware platforms.

    Customers should balance this by having multi-vendor strategies, constructing their services around features not hardware products and understanding the risks.

    Don’t be seduced by the sales pitch; dig deeper; ask awkward questions.

    Whenever I look to deploy new technology (let’s pick storage as an example), I always work on the premise of how I will get OFF the platform in 3/5 years time, not how I will move my data onto a device. It’s the same principle as spending money on a large-value item like a house or a car; with a house you have to consider how easy it will be to sell in the future and not be seduced by the superfluous interior decor. With a car, every brand has a certain future value, good or bad.

  28. Paul Hutchings

    Interesting article and some interesting opinions and comments.

    I’m just a normal end user, no commercial interests in anything; I’m just trying to ensure that we use the right product for the job.

    My thoughts are that yes, of course VSAN to some degree “locks” you to the hypervisor, but IMO the moment you choose a hypervisor you accept a degree of lock-in.

    We’ve used vSphere for 8 years or so, and we’re a very small shop, but even at our size it wouldn’t be a trivial job to migrate to another hypervisor – there’s a cost involved that would make it a very considered decision rather than something we’d do out of religion or anything other than very good reasons.

    Then you have your backup solution and all the management and monitoring infrastructure – I’d love to change backup vendor but again it’s a massive task and you’re always going to be left having to support the incumbent for a period of time.

    I know this is a storage blog so of course the focus is on storage, but to me the storage seems like it’s just one component when typically there’s a lot more to the decision.

  29. Pingback: Newsletter: October 26, 2014 | Notes from MWhite

  30. Pingback: Lock-in, choice, competition, innovation, commoditisation and the Software-Defined Data Centre | The SNS Tech Team Blog

  31. Terafirma

    Hi,

    I would have to somewhat disagree. Yes, VSAN locks you to ESXi, but how does a VSA not lock you in?
    If the VM is in VMware format then a conversion has to be run to get it to Hyper-V, thus you are tied to the hypervisor – unless we move to mounting volumes from the storage to the VM, like OpenStack.

    Secondly, if you are comparing VSAN to lock-in on a legacy SAN: if one buys a VSA from HP, are they now not locked to the same SAN software provider? They are still at the mercy of HP and can still only do what HP says, and if they want to move their data it is now a process the same as changing a hardware SAN.

    This seems to be the big factor people keep skipping: whoever you decide to work with for your storage is a lock-in, as their system now owns your data, hardware or software. The only way around this would be a standard for SDS, akin to EMC giving ScaleIO to every company in the world for free and all hypervisors dropping support for anything but it.

  32. Pingback: NetApp Integrated EVO: RAIL – A Triumph of Marketing Over Common Sense | Architecting IT Blog

  33. Pingback: In Kernel or Not In Kernel – This Is The Hyperconverged Question | Long White Virtual Clouds

  34. Ross Richards

    I think you are all missing the point. VSAN is there to compete with Azure/Amazon. With both of those solutions you are completely locked in. There are third-party solutions to help you migrate, but you are still locked into their architecture.

    VMware is just playing catch-up to where they should have been headed. Once VSAN catches up, why does anybody need public cloud? If they drop the VSAN pricing by 50+%, I have no need to use a public cloud ever again.

    Typical SAN purchases are very time-consuming, costly and aggravating. Moving to a continuous model of purchases makes every enterprise very capable.

    Administration-wise, keeping it in-kernel makes sense, as storage is just part of the cluster.

    Does anybody know if Amazon and Azure separate their storage and CPU? I bet they don’t.
