Dynamic Provisioning: No mouldy beans for me thanks!

April 2, 2009

As a techie I occasionally come across a technology that I really like, or like the sound of.  One such technology is Enterprise Flash Drives/Solid State Drives.  I really want to get my hands on some and deploy them in anger.  Almost to the point where I would shoehorn them into a solution just so that I could play with them….. but obviously I'm far too professional to do something like that.

Then there are times when a technology sneaks up on me and I wake up one morning thinking how hideous the world would be without it.

Dynamic Provisioning is in the latter category.

I've been working with Dynamic Provisioning (HDP) on Hitachi USP V arrays for a year or so now, and while I like it, I have gone on record in the past as saying that I can take it or leave it.  Well, to my surprise, apparently I can't…..

I came home today to an email from a management bod (one who has impressed me so far) on an outsourced account that I am currently providing storage design and architecture expertise for.  The email set out to explain why Hitachi Dynamic Provisioning would not be included in the bundle that I was to create the solution from.  It contained a couple of articles, cut and pasted into the email, denouncing the evils of thin provisioning (TP), and used them as the basis for why HDP would not be purchased for this solution.

So I took a deep breath and stepped back to think about what this would mean for the design.  We would still be putting in the latest and greatest hardware from Hitachi and Brocade, which would last the company for the next 3-5 years.  But if we left out Dynamic Provisioning, we would be putting in place something that was already on its way out.  In my reply, I likened it to nice new tin with mouldy beans inside.

So I listed some of the advantages of HDP as follows:

  • Greatly simplified LUN management (faster to provision LUNs, less thought required when provisioning, fewer spreadsheets to manage……)
  • Ability to dynamically grow LUNs (fudging LUNs together with LUSE will become a thing of the past)
  • More efficient Copy and Replication services (only copy real data and not zero data)
  • Zero Page Reclaim after migrations (we will be migrating a lot of hosts and storage to the new kit and could claw back a ton of capacity)
  • Wider back end striping

On top of the above, and I'm sure it's the same with the other major players such as EMC and 3PAR, it looks to me like Hitachi are ploughing a whole load of R&D into Dynamic Provisioning.  Let's face it, most of the interesting new features being released and talked about relate to DP.  This technology is not just a standard speed or capacity hike, it's a game changer, and it's getting interesting.

Obviously we expect the vendors to tell us we "need" a certain feature, especially the ones we have to cough up cash for the privilege of using.  But this one is becoming more and more compelling, and almost overnight, without me realising, it's become a "must have" for me.

Interestingly, none of the places I have deployed HDP at have wanted to use the oversubscription feature (TP).  In fact they have almost been put off the technology because of the catastrophic possibilities this could bring if left unchecked. 

So might I suggest to the DP vendors out there: if you don't already have the ability to pre-allocate LUNs, or even better, to turn off the ability to overprovision from a pool….. get it and get it quick.  People don't really want "thin".
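To make the distinction concrete, here's a toy Python sketch of what I mean. All the names here (`DPPool`, `create_lun`, the switch) are mine for illustration, nothing like any vendor's actual interface: with oversubscription switched off, the pool keeps all the DP goodness but simply refuses to promise more virtual capacity than physically exists.

```python
class DPPool:
    """Toy model of a DP pool with oversubscription as an opt-in."""

    def __init__(self, capacity_gb, allow_oversubscription=False):
        self.capacity_gb = capacity_gb
        self.allow_oversubscription = allow_oversubscription
        self.subscribed_gb = 0  # total size of all LUNs carved from this pool

    def create_lun(self, size_gb):
        # With oversubscription off, the pool behaves like "thick on thin":
        # wide striping, easy growth, but no capacity promises it can't keep.
        if (not self.allow_oversubscription
                and self.subscribed_gb + size_gb > self.capacity_gb):
            raise ValueError("pool would be oversubscribed")
        self.subscribed_gb += size_gb
        return size_gb


pool = DPPool(capacity_gb=1000)
pool.create_lun(600)        # fine: 600 of 1000 GB subscribed
try:
    pool.create_lun(500)    # would take subscription to 1100 GB
except ValueError as err:
    print(err)              # pool would be oversubscribed
```

The point being: the catastrophic "pool ran dry" scenario only exists because that one flag defaults to on.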

Fortunately the manager involved with the account I mentioned is good enough to understand the business and operational benefits of DP.  But had I not been around, or had he been a cowboy or a "jobsworth", this one might have got away.


5 thoughts on “Dynamic Provisioning: No mouldy beans for me thanks!”

  1. Bert Ho

    Hi Nigel,

    Just out of curiosity, and if I'm not guessing wrong, there must be tables containing the relationships/indexes of the Pages, and those indexes are updated or referred to very frequently.  What if those indexes got corrupted?

    Well, I am sure they are well-protected but it seems that a new point of failure is introduced.

  2. Tom

    Hi Nigel,

    Excellent work you are doing to get us to understand HDP and its benefits. I wonder what the experiences with the Zero Page Reclaim feature are. I imagine that not all OSes can cope with that, or would not write zeros on unclaimed space.  Especially Windows, where I would imagine it requires some additional steps, or even reboots?  And how would any UNIX flavour, like AIX or Solaris, deal with that?

    Very curious.

  3. snig

    Bert, the tables containing the relationships/indexes of the Pages are kept in what's called Shared Memory.  This memory is dedicated to all aspects of control of things that happen in the array, and it should really be called Control Memory instead of Shared Memory.

    If those indexes got corrupted, then yes, you would have a disaster on your hands.  Not only would you have issues with DP, but with everything else going on in the box as well.  So, since HDS will give you a 100% data availability guarantee, just imagine the protection they put behind that never happening.

  4. Nigel Poulton

    Hi Bert,

    Good point.

    What Snig says is obviously correct.  On Hitachi storage this mapping table is generally referred to as the Dynamic Mapping Table, or DMT for short.  It is mirror protected in Shared Memory, as well as in on-disk copies in the first Pool-VOL of the first pool created (it may be all pools, I'm not 100% sure).  Oh, and if you have mode 460 set on the SVP, then during a power down of the array an additional copy is also saved to the HDD of the SVP.

    Obviously "corruption" doesn't care how many copies you have, it will affect all copies if they are kept in sync.  As far as other protection mechanisms go, Hitachi have not published anything on this to my knowledge.  But I think Snigs comments sum it up well.

    I suppose another way of looking at it might be as follows.  A typical non-DP LUN presentation may look like this:

    • The host OS sees a volume, which it writes to as if it were locally attached over a SCSI bus (the specifics are abstracted by the following components)
    • The volume manager has pieced together this volume from multiple LUNs or extents.  It has no doubt performed striping, mirroring and maybe some other magic, all requiring its own mappings/metadata.
    • The LUNs are mapped to front end ports on an external storage array, over switches and HBAs, each of which performs its own mappings of sorts and requires its own metadata.
    • On the storage array, parts of the LUN will actually be in dirty queues in cache, with other parts resident and up to date on disk.  The storage array takes care of these mappings with more metadata.
    • Behind the cache, the LUN is created from several slices spread over multiple disks.  Again requiring mapping tables and metadata.

    As you can tell, this is quite a simplified view and does not take into account RAID, DP or virtualising external storage…..

    Yes, you are absolutely correct, DP does require another layer of mapping.  However, it's not something I feel we should worry about.  We just need to understand it, and the fact that we are already surrounded by such mappings everywhere we look in IT.
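    If it helps, that extra DP layer can be pictured with a toy sketch like the following.  This is pure illustration in Python; the class and field names are mine, not Hitachi's, and a real DMT is obviously nothing like a Python dict.  The idea is just that a virtual page only gets a physical pool page on first write, and reads of unmapped pages never touch disk at all:

```python
class ThinLUN:
    """Toy model of a DP LUN's page mapping (a stand-in for the DMT)."""

    PAGE = 4  # bytes per page in this toy; real HDP pages are far larger

    def __init__(self):
        self.pool = {}        # physical page id -> page contents
        self.dmt = {}         # virtual page number -> physical page id
        self.next_ppage = 0   # next free physical page to hand out

    def write(self, vpage, data):
        # First write to a virtual page allocates a physical page from the pool
        if vpage not in self.dmt:
            self.dmt[vpage] = self.next_ppage
            self.next_ppage += 1
        self.pool[self.dmt[vpage]] = data

    def read(self, vpage):
        # Unmapped pages never touch disk; the array just synthesises zeros
        if vpage not in self.dmt:
            return b"\x00" * self.PAGE
        return self.pool[self.dmt[vpage]]


lun = ThinLUN()
lun.write(7, b"data")
print(lun.read(7))   # b'data'
print(lun.read(0))   # all zeros: no physical page was ever allocated
```

    One more lookup table, in other words, sitting alongside all the others listed above.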

    Back to pen and paper?  Sometimes I do wonder 😉

  5. Nigel Poulton

    Hi Tom,

    Thanks for the compliment.

    Re your question around Zero Page Reclaim, let me see if I can answer…..

    Where the best results are expected is after migrating a non-DP LUN to a DP LUN.  Let's assume the original non-DP LUN was created as 100GB, but the host only ever wrote to 50GB of it.  There will then exist 50GB of free, unused space on that LUN.

    Prior to a LUN being presented to a host it is RAID formatted, which writes zeros to the entire address space of the LUN.  Therefore, because the host has not written to that last 50GB of space, it will still be full of zeros.  Zero Page Reclaim operations can reclaim that space back to the free pool.

    This is of course assuming that the filesystem doesn't go and touch all blocks or even a part of them, or that a database hasn't pre-allocated all blocks ahead of actual data writes.

    Now, assuming there was space that could be reclaimed by Zero Page Reclaim operations….. because it will only claim back pages that are entirely zeros, the host will be unaware that this operation has happened.  Even if, for whatever reason, the host had purposefully written zeros to large parts of the LUN, and those zeros represented real data, it should not be a problem.  Any time the host tries to access those zeros (which have been taken away from the host and assigned back to the free pool), the array performs an operation that returns a zero to the host, so the host still gets its zero and is blissfully unaware.

    In fact, the host may even get a quicker response, as there is no requirement for a back end read to disk, with its wait for disk head positioning etc, just to determine that the data is a zero.  But that is neither here nor there really.
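    The reclaim pass itself can be sketched in a few lines of Python.  Again, this is a toy illustration with hypothetical names, assuming only a mapping table from virtual pages to physical pool pages: any mapped page whose contents are entirely zeros is unmapped and handed back to the free pool, and later host reads of that page get synthesised zeros instead of a disk read.

```python
def zero_page_reclaim(dmt, pool, free):
    """dmt: virtual page -> physical page, pool: physical page -> bytes,
    free: list collecting reclaimed physical pages."""
    for vpage, ppage in list(dmt.items()):
        if all(b == 0 for b in pool[ppage]):   # page is entirely zeros
            del dmt[vpage]                     # host reads now synthesise zeros
            del pool[ppage]
            free.append(ppage)                 # capacity clawed back to the pool


dmt = {0: 10, 1: 11}                           # two mapped virtual pages
pool = {10: b"\x00\x00\x00\x00", 11: b"data"}
free = []
zero_page_reclaim(dmt, pool, free)
print(dmt, free)   # only page 11 stays mapped; physical page 10 is reclaimed
```

    That is the whole trick: because only all-zero pages are eligible, nothing the host can read back ever changes.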

    Let me know if that answers your question or not.

