Pure unadulterated FUD!

By Nigel Poulton | October 1, 2007

NOTE:  This post has been renamed from "Energy matters, apparently!" 

In response to my last post, Barry, of the Burke variety (seeing as we are now “blessed” with two Barrys), pointed to an energy-related post that talked about the recent Hitachi power savings announcement.  The post was written by Dick Sullivan, a fellow drone of Barry’s at the Energy Matters Corporation.

Thanks Barry, but it took me 3 or 4 minutes to read the post, and those are 3 or 4 minutes of my life that I would like back.  But since I’ve already lived more minutes than can be addressed by a DMX-4, and hope to live more than can be addressed by a USP-V, I won’t quibble too much 😉

Here are a couple of comments on Dick’s post –

Blah blah blah…. 

Dick starts out by pooh-poohing Hitachi’s recent AMS/WMS power-saving RAID Group announcement, basically saying it’s nice of Hitachi to do their part, but of course EMC has been doing its part for much longer…

He then seems to suggest that it’s not really worth all the effort on midrange storage.  After all, midrange storage doesn’t really suck that much energy in the first place, so why bother!?!?  Oh, and of course Hitachi are not keeping the disks turned off all of the time!?!?

He then moves on to moan that as we start to crank up the utilisation of these power-saving disk groups we will gradually save less and less energy.  Well, thanks for that little gem.  The same tends to go for performance – the more space on a disk you use, the more performance tends to tail off.  That doesn’t mean we throw in the performance towel from the start with the attitude that it will eventually degrade as utilisation increases.  Dick seems to take exactly that attitude to power-saving disk groups though.

He then has a moan about this being offered initially only as a service, and hence billable – I quote: “If customers need training to use a feature, it has to make you wonder about some of the gotchas that tend not to be covered in a press release”.  So if customers need training then it must be riddled with gotchas!?!?  Does EMC never train any of its customers or offer any features that can only be implemented as a service????   Hmmmmmm…… kettle, pot……

He then moves on to the reliability of drives that are spun up and down frequently – yes, that old chestnut.  OK, so I'll side with him to a point on this one; I still have to be won over.  Although since reading the HDS press release I have done a little (soon to be more than a little) research on modern disk drives and how they cope with this kind of treatment, and I’m starting to wonder if this will turn out to be such an issue after all.

While I personally do have niggling concerns over this, I think Dick is laying the FUD on a little thickly with comments such as – when I spin the drives back up, “Will the data still be there?  Will the application recognise it?”.  As if he has never turned his laptop on and off, or powered down a CLARiiON in his life (maybe he hasn’t).

It just seems too easy to bring this up in response to the Hitachi announcement, and a lot of people have, including me.  But I have to wonder how much these folks (myself and Dick included) actually know about the workings and tolerances of disk drives, especially those installed in storage arrays.  I’m willing to bet not as much as the people that matter at Hitachi.  Anybody out there think that Hitachi are about to stake their reputation on something they are not sure about?  Don't be soft!

From a high-level view, I’m a big believer that my laptop is to a disk drive what the summit of Mount Everest is to a sick person needing oxygen, while a storage array is to a disk drive what an Intensive Care Unit is to a sick person needing oxygen.  And I’ve never lost data from my laptop, never mind a storage array, so I’m probably not going to lose too much sleep over this one.

I think a good place to start with this technology will be disk groups dedicated to staging backups before they are spooled off to tape.  If it proves itself there, let’s move it further into the Data Centre.

Killer quote…. 

Dick then drops in a fantastic quote – apparently one (only one??) unnamed VP of an unnamed “major financial company” is quoted as saying the USP is “ridiculous”.  Wow, quite some quote.  Oh, and this unnamed VP from the unnamed company apparently has Hitachi gear.  Although it’s not clear whether that is a USP or a flat screen TV.  Zzzzzzzzz

Performance miracle…. 

Then ……. yes, there’s more….. he has a pop at the fact Hitachi didn’t mention performance in this announcement.  Fair enough, but he then goes on to state that EMC does all of its power-saving magic without sacrificing performance – again I quote – “One of the key design points for EMC is to do all of this power savings without sacrificing performance”.  Wow, so apparently you won’t see a performance impact if you move some of your data off FC and onto the newly supported SATA II drives that play such a huge role in the energy efficiency of the DMX-4 – now that’s what I call magic!

The differences that don’t exist….

From that point on, right up to the very end of the post, Dick uses the EMC CLARiiON to contrast the EMC and Hitachi approaches to energy efficiency.  For the record, that is lines 76 to 117 of his post.  That is 42 lines, which according to my math is just over a third of the entire post.  Why am I mentioning this?  Because in my opinion he may as well have left this out, because there is little if ANY contrasting done!!!

Don’t believe me, read on…….

The technologies Dick calls on to highlight the contrasts include the following –

  • Mix and match drive types within the same system (size, RPM, interface type..)
  • Data movement/migration between drive types
  • Waffle about SATA being more energy efficient than FC
  • Online LU/RAID Group expansion

Not exactly the longest list in the world, and not much, if anything, to differentiate EMC from the rest of the crowd.  Seriously, doesn’t everybody already do this?

I won’t pretend to know as much about midrange stuff as I do about enterprise (and I don’t even know that much about enterprise 😉) but I’ve done all of those things with AMS storage.  So where is the contrast?

While I’m here I think I’ll also pick up on a couple of other points Dick makes while drawing these contrasts that don’t appear to exist.

This one is a cracker – While talking about Meta-LUN expansion Dick asserts that you “.. don’t need to have any unused capacity spinning when you place the system on the floor” because “.. when you need it you can add it or expand it without disruption”.  Ooooops!!   That blatantly ranks as one of those ‘bears absolutely no resemblance to reality’ statements that some folk would have us believe come exclusively from the mouths of Hitachi executives.

So…. shipped many CLARiiONs or Symms with ZERO free capacity recently??  Oh, and with the AMS you can now ship that spare capacity, that your customers will demand, but leave it spun down until it’s needed 😀

Dick also refers to EMC’s Virtual LUN technology as “unique”.  I’m interested in what makes it unique.  Of course I’m having a pop here, but in all seriousness, if it is actually different enough from the rest of the crowd to be referred to as “unique” I’m really interested to find out exactly how.  Surely we don’t have an EMCer using words like “unique” when they actually mean “ever so slightly different when you examine the minutiae”.

And I promise this is the last bone I will pick…… but he also states that this Virtual LUN technology enables the “….migration of data between LUNs in the array with no application impact….”.  I’m interested in how the “no application impact” bit is handled.

Is this handled the same way as others handle it, by only allowing the migration to take place under certain load conditions etc.?  Otherwise, if the source disks are busy handling real I/O and you suddenly start reading every block of a LUN on those same disks so you can copy them to another tier, you are likely to see some performance impact.  No?
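
For the sake of argument, here is roughly what I mean by “certain load conditions” – a back-of-a-napkin sketch of mine, not anybody’s actual implementation, with the threshold and the helper functions entirely made up:

import time

# Hypothetical load-aware background copy loop.  BUSY_THRESHOLD,
# get_disk_busy_pct() and copy_block() are invented for illustration -
# they stand in for whatever the array firmware actually measures and does.

BUSY_THRESHOLD = 70  # % utilisation above which the copy backs off

def migrate_lun(src, dst, total_blocks, get_disk_busy_pct, copy_block):
    for block in range(total_blocks):
        # Back off while host I/O is hammering the source spindles, so the
        # background copy doesn't steal IOPS from real work.
        while get_disk_busy_pct(src) > BUSY_THRESHOLD:
            time.sleep(0.5)
        copy_block(src, dst, block)

Throttle it like that and the host barely notices, but the migration takes longer – “no application impact” and “no trade-off at all” are not the same claim.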

Yes Dick, energy matters.  But so does being fair and honest in your comments.

Nigel 

 

19 thoughts on “Pure unadulterated FUD!”

  1. snig

    ROFLMAO!!!!

    I guess that's what you get when a marketing guy attempts to sound like he knows what he's talking about.  He should bone up on the competition before trying to compare and contrast specific points about them.
    Oh, and here is a quote from Toigo's blog last week, from a CIO who hates his DMX gear…

    One guy that stands out is a CIO who was attending the show for the first time.  He works for a name brand candy company and wanted urgently to share his war stories.  Seems he has spent over a mil on a couple of EMC DMX-3s — or his predecessor did.  Someone drank the Kool-Aid in any case.  EMC reassured his predecessor that the storage was scalable, the last he would need to buy for a few years.

    “Sure,” the CIO said, “it will scale to 1.1 PB with big lumbering 500 GB drives.  But the performance takes a nose dive.  In fact, we can’t build the capacity of one of these arrays to more than 250 TBs without bringing it to its knees.”

    So much for tiering storage in a single box.  Where ya at Barry W.?  You might be able to sell an SVC here… 

  2. Storagezilla

    Ah, Toigo. The man who launches a million websites but doesn’t solve a single problem or ship a single product as a result of any of them.
    I’d seriously question the 250TB performance number, Snig, since I’ve spoken to people with much higher configs than that, but the fact that you need big fat drives to get big fat capacity numbers shouldn’t be a shocker to anyone. 
    Now Nigel, I’ve been drinking tonight (I always make that disclaimer when I’ve been at the sauce) so I’m not going to get too deep into the muck, but laptop drives *are not* the same as the ones you’re sticking in an array. Not only are they lower performing, but they’re more resilient to vibration (as well as to being dropped) and are designed to dock the head at every opportunity in order to save battery power.
    The laptop analogy is bull spouted by people who think the light in the fridge stays on when they close the door.
    As for never losing data, that’s cow dung. You, like all the rest of us, have suffered from silent data corruption unless you’ve been performing end-to-end checksum operations. Unless you’re using an array (all EMC arrays have various front-to-back checksum schemes; at this time of night I couldn’t care less to check how anyone else does it) or ZFS, you’ve lost data for a whole variety of reasons.
    While it’s a certainty that you have mangled files on your laptop and on your home PC, it’s probably a single-digit number of them, and you won’t notice unless your OS barfs or something won’t open or an app crashes when you try and launch it. Scale the number to millions and billions and then you’d start tripping over corrupted files.
    One of the advanced technologies guys showed me a formula for it but I’ll be honest and say that I don’t pay attention when people start throwing maths at me.
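    If you’re wondering what end-to-end checksumming actually buys you, the gist is roughly this – a toy sketch of mine, nothing like what any array actually does in firmware:

    import zlib

    # Toy sketch of the end-to-end checksum idea: stamp a checksum on each
    # block at write time and verify it at read time, so silent corruption
    # stops being silent.  Real arrays use far stronger codes in firmware.

    def write_block(store, lba, data):
        store[lba] = (data, zlib.crc32(data))

    def read_block(store, lba):
        data, stored_crc = store[lba]
        if zlib.crc32(data) != stored_crc:
            # Without the check, this bad data goes straight to the host.
            raise IOError("checksum mismatch on LBA %d" % lba)
        return data

    store = {}
    write_block(store, 0, b"where information lives")
    assert read_block(store, 0) == b"where information lives"

    Without that read-time check, corrupted data is served up as if it were good – which is why “I’ve never lost data” really means “I’ve never noticed losing data”.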

  3. Nigel Poulton

    What, you trying to tell me the light in my fridge doesn’t stay on when I close the door?  Well it might not in yours but in mine it does.  Otherwise how would I be able to scan the contents of the fridge and see them on the built in LCD monitor while the door is shut?

    I hear what you say re laptop drives and tolerance to vibration etc.  So….. is it too much to imagine a disk drive designed for a storage array with higher tolerances to spin-up/spin-down?  Come on, you folks at EMC are experts on spin 😉

    And re the end-to-end checksums, I don’t really see how this is related.  Once the data is committed and confirmed to disk, where it lives, that’s that (actually, do EMC make disks, seeing as how EMC is supposedly where information lives?).  The debate about spin-up and spin-down is separate from end-to-end checksums.  At least in my opinion it appears to be.

  4. the storage anarchist

    Whether you believe it or not, the fact is that enterprise class drives are NOT designed to be spun up and down repeatedly, as laptop drives are. Just as one example, enterprise spindle motors use a different lubricant than stop-start drives – one intended for CONTINUOUS use instead of intermittent…and there have been drive failures related to the fact that the lubricant "froze up" when drives were left powered off.
     
    And there are numerous other differences inside the drives.
     
    The vibration protection in a laptop drive adds little benefit for drives installed side-by-side in a storage cabinet. Totally different dimensions and cycles of vibration in the two environments. And parking the heads usually isn’t an option while data is flowing, anyway.
     
    Drive qualification at EMC takes a while because they run the drives through literally weeks of heat / cool / load / vibration testing of varying durations and cycles. The guys who do this testing can tell you that the drives you and I buy down at Best Buy wouldn’t last a day…and today even the SATA drives used in Symm, CLARiiON, Celerra and Centera are "upscale" versions of what we see in the stores. And I doubt that Hitachi is using the same SATA drives in the AMS that they stick that funky red label on for the consumer market (see today’s StorageMojo for that reference)
     
    And sorry – just because the data has been committed to disk doesn’t mean that you’re going to get the same data back when you read it – at least, not unless the array does something to detect and correct corruptions. Just ask Robin Harris…I may not like his style, but he gets it (in this case, at least).

  5. Nigel Poulton

    That’s exactly what I’m saying, Barry.  There are differences between drives designed for different purposes.  So is it so far out there to imagine a drive designed to sit in a storage array that can also be spun up and down reliably?  If it’s going to help the environment then surely it’s worth a little R&D?!  It’s not that long ago that stiction was a problem, until somebody came up with the idea of parking the heads on a ramp.

    There is a lot of talk that FC drives are not designed for this, but SATA disks have arrived in the storage array with a close family history in the desktop world, making them an ideal place to start.

    Don’t get me wrong, I have concerns and still need to be won over on this point myself.  But at the end of the day, the likes of you and me can sit here all day arguing over why it will never work, and somebody else out there will just go out and make it happen.  I’m confident of that.

    Oh, and I never said that just because data was committed to disk we would get it back.  I just didn’t see the connection between that and end-to-end checksums??

    I’ve said it before and will say it again: an ideal place to start will be disks used to stage backup data.  I see quite a lot of storage arrays where a large chunk of capacity is used to stage backups.  Data is staged to disk, spooled off to tape, and purged within the backup software.  Then the disks spin for the rest of the day until the next wave of backups comes in in the evening.  This is an almost no-risk opportunity for these power-saving RAID groups, as the data on disk is no longer needed once it’s on tape.  And using disk arrays for backup is becoming more and more popular.

  6. the storage anarchist

    Since we’re clearing the air, here:
     
    I am not arguing that there aren’t drives that can be (reasonably) reliably powered off and on again. All I said was that this wasn’t the design point of enterprise FC drives.
     
    And yes, you’re right, SATA drives are probably closer to that use case than enterprise FC drives. But you still need to take precautionary measures to ensure that the drives will work when you need them – you know, start them up every so often and verify that some/most/all of the data can still be read (and still matches the checksums, etc.). And of course, you have to keep track of the number of these cycles you perform, to ensure you don’t "over-exercise" the stop/start mechanics and inadvertently degrade the MTBF/MTTR of the drive(s).
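     
    In pseudo-code, the kind of precautionary bookkeeping I mean looks roughly like this – every name and number here is invented for illustration, not anyone’s actual firmware:

    import time

    # Rough sketch: wake a spun-down RAID group periodically, verify a
    # sample of data against its checksums, and budget the start/stop
    # cycles so the mechanics aren't over-exercised.  Hypothetical only.

    MAX_START_STOP_CYCLES = 50000         # hypothetical drive rating
    VERIFY_INTERVAL_SECS = 7 * 24 * 3600  # wake the group weekly

    class SpunDownGroup:
        def __init__(self, drives):
            self.drives = drives
            self.cycles = 0
            self.last_verify = 0.0

        def periodic_check(self, spin_up, verify_sample, spin_down):
            now = time.time()
            if now - self.last_verify < VERIFY_INTERVAL_SECS:
                return  # not due yet
            if self.cycles >= MAX_START_STOP_CYCLES:
                raise RuntimeError("start/stop budget exhausted")
            spin_up(self.drives)
            self.cycles += 1
            verify_sample(self.drives)  # read data back, check checksums
            spin_down(self.drives)
            self.last_verify = now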
     
    And for the record – I also agree that power-saving technologies such as SATA and MAID (and thin provisioning and de-dupe, for that matter) are an inevitable reality for *ALL* storage platforms, from enterprise to midrange to SOHO…Green IT concerns aren’t limited to any one class or tier…

  7. Nigel Poulton

    Thanks for your input Barry. 

    I think what you say re power cycling not being a design point of FC drives is very true. 

    I would also extend that thought to the vast majority of the current crop of enterprise and midrange storage arrays – green was not a design point when they were drawn up on the old drawing boards.  And most of what we are seeing from vendors are afterthoughts – definitely the so-called green technologies Dick mentions in relation to the CLARiiON.  Now I’m not slamming EMC here; I’m including all the vendors that I know.

    Maybe this will change with future products, I’m not sure.  For sure I don’t expect anyone to be favouring green over performance in the near future, but hopefully we will start to see more than the odd bit of more efficient uCode/simplified algorithms making the green headlines from the top vendors.

    I’m not sure if the following is interesting or not, but I was looking through my bumph for the upcoming Storage Expo in the UK and noticed the usual crowd listed under the Platinum Sponsors banner.  But I also noticed that Pillar are sponsoring the "Green IT Zone" and none of the top vendors, with the possible exception of IBM, were listed as partners.  Of course, paying for your name to be on the Green list doesn’t mean anything, but I admit to raising an eyebrow.

  8. Storagezilla

    Thinking green has been a big thing around our place for a while but it has to be a global effort to really make a difference. 
    If you think about it, by working with suppliers early in the design stage you can look to make savings on every component in the system (LP Centera nodes being an example of that thinking); you can also do things in software; and then there are implementation and information-management strategies where you can find power savings. None of those things on their own will get you out of the hole – a combination of some of them, to suit your needs, will. 
    Personally, my thinking about joining green initiatives is that there has to be more than a sticker and a website. What’s different about EMC is that someone in SPO writes a memo and hundreds/thousands of engineers are now tasked with looking at how many watts their product dissipates and how to get that number down.
    Not many other companies in this market can say the same thing.
    Like the big security push EMC undertook, a lot of this effort is going to be invisible to customers. The same way people expect your products to be secure, they’re going to expect them to be power efficient.

  9. stephen2615

    Just in case I really care enough to worry about green disks (yeah, we did buy 2 Copan systems), can anyone tell me how much power one disk drive actually uses?  How much power would I save if I worked out how to turn off 50 disks?  Would it be anywhere near the same scale as the 1000s of PCs that stay turned on 7 x 24 in our offices and are never powered down by the OS?  I would like some sense made out of this whole green disk thing.  Marketing rubbish if you ask me.
     
    I suppose every little bit helps???

  10. the storage anarchist

    OK – rough numbers only – I’m not the expert on this (but I do play one at work).
     
    A typical drive draws "around" 20 watts – slightly more under load, slightly less when idle. Pretty much the same regardless of capacity or interface, although spindle speed does play a factor. Slowing a 10K rpm drive down to 5200 rpm drops the power down to maybe 15 watts. Stopping it from spinning altogether drops the power to maybe 3-5 watts ("sleep" mode). Powering it off drops the power to "almost" zero, but you then have to consider the surge required to spin it back up.
     
    The typical desktop PC, on the other hand, draws about 180 watts (more for the Alienware gaming machines, to be sure) – plus the power for the monitor. Laptops are usually 60/90/120 watts when charging – less when the battery is full.
     
    Storage arrays and blade server racks are more typically measured in kilowatts.
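     
    To put stephen’s 50-disk question in perspective, the back-of-the-envelope math works out roughly like this (the electricity rate is an assumption, obviously):

    # Back-of-the-envelope savings for 50 spun-down drives, using the
    # rough per-drive figures above.  The $/kWh rate is an assumption.

    DRIVE_WATTS = 20     # spinning, roughly
    SLEEP_WATTS = 4      # spun down ("sleep" mode), roughly 3-5 W
    DRIVES = 50
    RATE_PER_KWH = 0.10  # assumed $/kWh

    saved_watts = DRIVES * (DRIVE_WATTS - SLEEP_WATTS)  # 800 W
    kwh_per_year = saved_watts * 24 * 365 / 1000.0      # ~7,000 kWh
    print("%d W saved, ~%.0f kWh/yr, ~$%.0f/yr" %
          (saved_watts, kwh_per_year, kwh_per_year * RATE_PER_KWH))

    So 50 spun-down drives save about as much as powering off four or five 180-watt desktop PCs – every little helps, but those 1000s of PCs really are the bigger fish.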
     
    Maybe if Nigel apologizes to Energy Matters for the nice Welcome To Blogland, he’ll document some of the more specific numbers that he shares with customers and prospects. Mr. Sullivan really should be a welcome addition to our blogging community (although he’ll probably tread a little softer after today’s lessons :*).

  11. Nigel Poulton

    Zilla-man, if what you say about all the effort EMC puts into getting greener is true, then I genuinely mean it when I say “hats off to you guys”.  However, since you’ve worked for EMC since leaving college, I’m not so sure you can be the one to say “What’s different about EMC is that …..”

     

    Dick/BarryB, it would be interesting to see some of the numbers that Dick currently shares with customers and prospects.  But I wouldn’t hold your breath waiting for an apology from me – you might find that you die waiting.  I think you know I’m not anti-EMC or anti-anybody; I just love storage and, like you, don’t like to see misleading statements in blogs….

     

    I also assume from the lack of defence that I was right about comments such as “EMC’s unique Virtual LUN technology” and “So you don’t need to have any unused capacity spinning when you place the system on the floor…….”, as well as about the fact that the technologies Dick refers to as being differentiators between EMC and the rest are actually not differentiators.  Oh, and being green was almost certainly not a “design point” of any of them.

  12. iheartstorage

    Regarding EMC’s ‘green’ efforts, EMC moved to address this issue relatively early on, and via sensible avenues. Namely services (EMC Energy Efficiency Services launched in late 2006, before vendor ‘greenwashing’ hit fever pitch) and industry collaboration (Green Grid consortium). Yes, EMC Global Services is a for-profit enterprise, but I also believe that service engagements are the ideal platform for helping customers maximize the value of the investments that they’ve *already* made in storage infrastructure, *before* they buy their next truckload of new equipment. Re: The Green Grid, the cynics among us might look upon such consortiums with suspicion, but big picture, the green issue demands collaboration/cooperation. For all the talking we’ve been doing at boards like this about power consumption, wouldn’t it be nice if we had industry-agreed-upon metrics and methodologies to ground our discussion? The Green Grid is getting it done, and EMC is helping. All of the major vendors are represented. Except Hitachi.
     
    And yes, many of the green benefits delivered via EMC storage products (and other vendor products) are merely side effects of long established development roadmaps. But as ‘Zilla noted, the senior execs at EMC have lit the fire under the engineering teams to address this issue as a core area of focus (Centera LP node is early fruit of this labor). EMC will make big strides in the months/years ahead in this regard, as will other vendors, I’m sure.
     
    In the interim, try not to get sprayed with green paint.

  13. stephen2615

    Speaking of going green, what happened to the green image at the top of the blog page that said RM was going green?  Perhaps RM is not carbon neutral but working towards it?

  14. snig

    I decided it was time for a change.  We did it initially to be sarcastic, as all the companies around us were "going green".  My belief is that if you manage your data correctly you can be much "greener" than by just throwing a bunch of storage at the problem.

    We need a Halloween theme.

  15. stephen2615

    How about a jack-o'-lantern with the EMC logo instead of the face?  That’s enough to frighten me, and we don’t take any note of Halloween here in Aus…

  16. Nigel Poulton

    You see, that’s my main gripe with Dick’s post over at Energy Matters —>  iheartstorage makes a good point when he says “….I also believe that service engagements are the ideal platform for helping customers maximize the value of the investments that they’ve *already* made in storage infrastructure….” (emphasis added). 

    Dick, on the other hand, felt the need to spread pure unadulterated FUD by spouting off his opinion that because this is being offered as a service, “….it has to make you wonder about some of the gotchas that tend not to be covered in a press release”.  Does the same apply to all service offerings, including those offered by EMC?????  One has to wonder!

    I think Dick’s blog could be of value in keeping up with the green push.  But let’s not be Mr E.M.C Predictable and use it to try and shout down the competition and make dubious statements about your own employer’s wares.

    I see there is still a deathly silence over some of the technologies Dick claimed as being unique and differentiating.  May I suggest that Dick follow what BarryB and MarkT have been doing and strike through the inaccurate statements in his post.

  17. the storage anarchist

    A gentle nudge back towards a factual representation of Dick’s position – his point was to ask why a service engagement was required to utilize a simple, fundamental feature such as MAID. You guys are undoubtedly smart enough to figure out how you’re going to have to lay your volumes out across the RAID groups in order to have any chance of ever powering off a group, so bundling it with a services offering seems overkill.
     
    Curiously, this service offering apparently wasn’t announced as available for the big-boy toys from Hitachi. As you note, good storage management is the pre-requisite to optimizing energy efficiency. A service offering would seem more important on a platform that supports so much more storage (internal and external) and that has few (if any) features designed to save/reduce power utilization.

  18. Nigel

    Barry,

    Thanks for clarifying Dick’s position re the service engagement offering…. there was me thinking he was spreading a bit of fear, uncertainty and doubt.  I shall remember to read between the lines next time.

    On this point – I’ve seen so many places where even basic LUN allocation is done without thinking things through properly, causing all kinds of problems, including with performance and capacity management.  Sure, there are plenty of us who, if left to ourselves, could do a good job of it, but sadly the same cannot be said of everyone.  There are quite a few places out there where the Unix or the Windows guys run the storage, and they generally need all the guidance they can get.  With that in mind, and the fact that most people usually want guidance with new features, I personally don’t think the service offering is overkill.  I would want at least some form of workshop from HDS before implementing it.

    Finally, I see your point re not offering this on the USP yet, where more benefits could possibly be reaped.  However, I’m personally a fan of testing new technologies and gadgets out in your lower-end kit before throwing them into the real stuff (I was a little surprised to see thin provisioning on the USP-V and not the AMS).
