HDP – Response to Marc Farley

By | June 26, 2009

Below are some notes on the above videos (especially for those in quiet offices who can’t watch them yet).

 

I posted the video in response to Marc Farley’s recent video and comments on Hitachi Dynamic Provisioning, as well as other comments that have been made in blogs, on Twitter, etc. All of these comments have been slating HDP for being chubby and the like. Hopefully this post and the video will help set the record straight.

 
The 42MB page
 

On the topic of HDP being chubby. While I accept that the HDP extent (page) size of 42MB is by far the largest of all the major vendors, I don’t think this is necessarily all bad. For a start, it maps perfectly to the internal workings of the USP V and VM. Secondly, the smaller your extent size, the more metadata overhead you incur – internal tables need to be larger, and then there’s metadata for replication, snapshots and the like. Maybe that’s OK on a midrange box that won’t be expected to address large amounts of storage while running lots of copy services in the background. I just don’t think it’s as simple as saying “ho ho ho, look at Hitachi’s huge allocation unit, that must be lazy coding from the engineers”.
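As a rough back-of-envelope illustration of that metadata trade-off (the numbers below are purely hypothetical – Hitachi’s actual entry sizes and table formats are not public), the number of page-table entries scales inversely with the extent size:

```python
# Hypothetical illustration of extent size vs metadata overhead. The
# 16-byte entry size and table layout are assumptions, not Hitachi's
# real metadata format.

MB = 1024 * 1024
TB = 1024 * 1024 * MB

def page_count(pool_bytes, extent_bytes):
    """Number of extents (pages) needed to map a fully allocated pool."""
    return pool_bytes // extent_bytes

pool = 100 * TB
for extent_mb in (42, 16, 4):
    pages = page_count(pool, extent_mb * MB)
    meta_mb = pages * 16 / MB  # assume 16 bytes of metadata per page
    print(f"{extent_mb:>2}MB extents -> {pages:>10,} pages, ~{meta_mb:,.0f}MB metadata")
```

At 4MB extents you need roughly ten times the table entries of 42MB extents – that is the overhead trade-off being described, whatever the real per-entry cost turns out to be.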

Yes, I know that a smaller extent might be more thin-friendly, but how many enterprise customers are deploying this for the “thin” oversubscription benefits?

Oh, and yes, there are situations where two small writes to the same HDP volume will require two separate pages. However, this is not always the case, and in practice often isn’t.
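To illustrate the point (a simplification of real HDP allocation logic): pages are allocated per 42MB-aligned region of the virtual volume, so whether two small writes consume one page or two depends purely on where they land:

```python
PAGE = 42 * 1024 * 1024  # HDP page size in bytes

def pages_touched(write_offsets):
    """Distinct page indices a set of byte offsets would cause to be allocated."""
    return sorted({offset // PAGE for offset in write_offsets})

# Two small writes roughly 100MB apart land in different pages:
print(pages_touched([0, 100 * 1024 * 1024]))  # -> [0, 2]
# Two small writes within the same 42MB region share a page:
print(pages_touched([0, 10 * 1024 * 1024]))   # -> [0]
```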

See this link for what I’ve written in the past regarding how perfectly aligned 42MB is with the internal structures of the USP V.

 
Zero Page Reclaim
 

Then there’s Zero Page Reclaim. Suggesting that it’s an apology rather than a feature made me smile. I doubt anybody would take such a comment seriously, but I thought it created an opportunity to talk about it again. 

I’m pretty sure that Hitachi is the only vendor to offer it as GA. I’ve seen it and know that it works. Others talk about it, but that’s about it (I understand they may be waiting for standards like TRIM). ZPR is a really great feature that lets you reclaim unused capacity and offset future capacity purchases. Pretty good in times like these, when purse strings are tightly controlled.
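Conceptually, a ZPR scan does something like the following – a toy sketch only, since the real feature runs inside the array microcode against 42MB pages, not in host code:

```python
def reclaimable_pages(pages):
    """Yield indices of allocated pages whose contents are entirely zero."""
    for index, page in enumerate(pages):
        if not any(page):  # True only when every byte is 0x00
            yield index

# Toy 8-byte "pages": the all-zero ones can go back to the free pool.
pages = [bytes(8), b"\x00data\x00\x00", bytes(8)]
print(list(reclaimable_pages(pages)))  # -> [0, 2]
```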

In the video I don’t even mention things like dynamic volume expansion or automatic dynamic relevelling, etc. But put all of these together and Hitachi has a well-stocked Dynamic Provisioning portfolio that stands up against any other vendor’s.

A few points about the video –

  • Yes that’s me – sorry I didn’t have a shave first.
  • Why a field of cows? Marc did his video in front of a field of cows so I thought it appropriate I respond like for like 😉
  • Why two videos? It was shot as a single video, but I’ve never done this before, and at 14 minutes it was too long for YouTube as a single video.
  • Why don’t I work for HDS or HP or someone like that? Your guess is as good as mine. I do have to remind myself occasionally that I don’t actually work for HDS 😉
  • Yes, I know it’s a bit pants compared to Marc’s many videos.

Questions and comments always welcome.

Nigel
 
I only ever talk about storage

21 thoughts on “HDP – Response to Marc Farley”

  1. marc farley

    Not only do you have cows – but also traffic. Excellent ambience!  I think this was very well done.  

  2. Dale Clutterbuck

    Great video/s! I have found thin provisioning to be a bit of a funny beast, and I’m not sure if I like it. I have no doubt I will when file systems become "thin-aware". From what I understand, Veritas VM has thin-aware features these days.
     
    I’ve said it before on other blogs but where I have found DP useful is its other features: load smoothing, simpler provisioning, performance, IO pooling.
     
    The whole idea of storage and IO pooling sits really well with me (I am a big fan of ZFS). HDP is great; the fact it allows for thin provisioning (and ZPR) is just a bonus for me.

  3. Tony Asaro

    Great stuff Nigel. Very educational and informative. I am posting a link on my blog.

    Tony

  4. Pingback: Tony Asaro’s Blog Bytes » Blog Archive » Independent Analysis of Hitachi Dynamic Provisioning (plus cows and cars)

  5. Nigel Poulton

    Hi Marc,

    I’ve been bitten by the bug now, keep thinking of things I could shoot on video.  I’m coming after you 😉

    Tony,

    thanks for the pingback, glad you liked it.

    Nigel

  6. Calvin Zito

    Hey Nigel,
    I didn’t totally get what was going on after seeing Marc’s video and had asked our StorageWorks XP team to help me understand it.  You did a great job explaining it and I’ve canceled my request with the XP team!
    Well done!
    Calvin

  7. Pingback: Claus Mikkelsen’s Blog » Blog Archive » Anyone Interested in a 105,000 RPM Drive?

  8. Chris M Evans

    Nigel, interesting video!

    So, to re-hash the 42MB page thing, I think technologies like 3PAR’s have the edge because they were designed from the ground up to offer thin provisioning. Hitachi, EMC et al. have had to shoehorn TP into their products, with obvious side effects. I don’t think the size of the TP page is an issue for 3PAR, as it’s part of the architecture in the first place.

    All that said, ZPR is my favourite feature. Have EMC been silent on this? Just wait until an equivalent feature surfaces in V-Max…

  9. Nigel Poulton

    Hi Chris,

    That’s a really interesting point you bring up re the 3PAR architecture being designed with TP and wide striping in mind.  I have some thoughts around this and will probably dedicate another post to it in the next couple of days or so.

    I expect that when EMC announces and ships ZPR for V-Max and its extended family, the discussion and debate will really open up.  While 3PAR have some great technology and an industry legend in Marc Farley, they are still pretty small game at the moment (no disrespect intended).

  10. Pingback: Cinetica Blog » HDS e i Modulari

  11. Pingback: Cinetica Blog » L’acqua calda, il wide striping e il thin provisioning!

  12. Stephen2615

    It’s good to see a response from Cinetica.
    Is this six degrees of separation?
    I wonder if I can get Tony Asaro to class me as an independent consultant and storage expert. 
     I must admit that the past year has been tough for me when it comes to keeping up with events, but I found a link from Tony’s blog (as discussed by Cinetica) to something that Chris Evans said, and that points to an HDS press release that states:
    Hitachi Data Systems Offers Free Storage Virtualization through “Switch it On” Program
    Boy, do I feel ripped off with the recent purchase of new USP Vs, where I was "given" a relatively small UVM licence at a special price so I could virtualise a couple of Clariions with large SATA disks sitting around doing nothing… yeah, nothing.  Don’t you just love company mergers.
    This leads to Hu’s Blog that says:
    Currently there is a “Switch It On” promotion for virtualization of 3rd party storage systems behind the USP V which runs till the end of the year…..
    It also goes on to talk about free DP licences, and I look forward to seeing our sales rep in the near future… 
    Another of Hu’s blogs makes for exceptional reading, if slightly confusing for people with concentration problems related to cancer medication…
    http://blogs.hds.com/hu/2009/07/overheads-for-thin-provisioning.html
    "Unlike other thin provisioning systems that have no externalized storage virtualization, the USP V can leverage either internal or external RAID director hardware."
    Now… where was I?  Oh, that’s right.  I have a paddock near me full of kangaroos that just won’t stay behind me when cars get close for my version of the HDP videos…  Good job, Nige…
     

  13. Nigel Poulton

    Hi Stephen,

    Firstly I wish you all the best with your cancer medication and hope you are fit and well soon.

    Secondly, I’m not sure how to read the rest of your comment.  Maybe I’m missing something.  Are you having a pop at me and being critical/sarcastic, or are you being positive?  Like I say, maybe I’m missing something :-S

    Nigel

  14. Stephen2615

    Nigel,
     I was just waffling on about finding out about the free UVM licences in a very roundabout way.  I sometimes marvel at the "deals" I get from our sales rep.  E.g., I now have 5 Clariions as part of a merger, and I wanted to virtualise storage from them to give me cheaper storage behind the new USP Vs.  I wanted some licences and was offered 8 TB for a price.  That was not long ago.
    That led me to Hu’s blog about HDP and external "directors".  It just took me a roundabout way from here to HDS via links on blog pages.
    I just loved your response to Marc, and I was going to do some sort of video with kangaroos, but alas there are not many cars around with live kangaroos.  Our local roads are littered with kangaroo road kill. 
    Keep up the good work, as I am not doing much these last 9 months…  One day a week at work is enough for me, but I still rule the storage roost… 🙂
     

  15. lots-o-data

    Anyone know of a way to calculate the maximum amount of storage that can be managed by HDP? 
    A capacity limit in mapping the pages/chunks would be one reason for 42MB (or larger) chunks/pages.

  16. Nigel Poulton

    I don’t personally know, although it’s an interesting question.  I suspect a formula to work something like this out might reveal too much of the inner workings of HDP, so it’s not likely to be made public by Hitachi.

    Working out such a limit would require knowing how much shared memory each row in the free page list consumes, as well as the other metadata constructs.  This may also change any time Hitachi supports larger Shared Memory DIMMs, etc.

    As you will know, the metadata constructs are stored in a particular, dedicated area of Shared Memory, so it depends how large that area is – 4GB from memory, although that might be old info now…

    Either way, I expect that the theoretical limit might be higher than any realistic deployment – much like the theoretical limit of max external storage on a USP or USP V.  I know of a couple of quite large HDP deployments, and none of them have come across issues that I am aware of.

    Of course you are right – the smaller the allocation unit, the more metadata required.
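    The back-of-envelope logic would be something like this (the 4GB metadata area is the from-memory figure above, and the 16-byte entry size is a pure assumption):

```python
MB = 1024 * 1024
GB = 1024 * MB

def max_pool_bytes(metadata_area, bytes_per_entry, page_bytes):
    """Largest pool a fixed metadata area could map, one entry per page."""
    return (metadata_area // bytes_per_entry) * page_bytes

# A 4GB metadata area with a hypothetical 16-byte entry per 42MB page:
capacity = max_pool_bytes(4 * GB, 16, 42 * MB)
print(f"~{capacity / 1024**5:.1f} PiB")  # -> ~10.5 PiB
```

    Whatever the real entry size is, the point stands: the ceiling comes out well above any deployment I have seen.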

  17. Nigel Poulton

    lots-o-data,
    I’ve just been informed that the current documented limit is 1.1PB.
    So that’s quite a lot, but worth keeping in mind if you plan a very large implementation.  Obviously these things are subject to change.
    Hope this is helpful.

  18. Biju Krishnan

    The demo I saw for ZPR didn’t impress me at all. We will have to wait for ZPR to develop and mature before we hand over the task of housekeeping to it.

    Some of the posts here are impressed by ZPR, which makes me doubt whether I have understood it well. Aren’t you folks referring to the little program that needs to be run on every server to free zero pages? Could be nice for an enterprise with probably a hundred servers or so. I wouldn’t call this revolutionary yet.

  19. Nigel Poulton

    Hi Biju,

    I am referring to a feature built into recent versions of ucode on the USP V and XP24000 arrays.  Yes, from within the Storage Navigator GUI you have to run ZPR jobs per HDP LUN, etc.

    Its value at the moment is post-migration, when migrating from thick to thin.  For example, I am involved in a migration project at the moment.  We have several weekends scheduled for host migrations, with 20-30 servers per weekend.  As part of the post-migration tasks for each server, we can run ZPR jobs for their new HDP LUNs.

    Yes, it requires effort on behalf of the storage admin.  But it’s not a job that you will regularly run against a LUN – really just after migrations.  And if you can reclaim 30% or more of your capacity, then I suggest it’s worth it in this economy.

    Nigel
