Storage Archive

Pure Storage: Show Me the Money!!

Posted April 24, 2014 By Nigel Poulton

Holy shmokes! Pure Storage just netted an insane $225M in Series F funding! Taking their total VC cash haul – after 6 rounds – to a not too shabby $470M!  That's right, 470 meeeeellion dollars!  US dollars. Not Jamaican dollars!  

So what does that mean?


Software Defined Madness

Posted June 19, 2013 By Nigel Poulton

Quick shout out to Rick Vanover at Veeam and his short and snappy 30-minute Veeam Community Podcasts.

Yesterday I recorded a quick podcast with Rick on Software Defined Stuff, including Software Defined Storage, Software Defined Networking, Software Defined Data Centers….

I really enjoyed talking about this stuff, and if you like your tech and are interested in some opinions and thoughts on the whole software defined madness that currently exists, then I recommend you head over and give the show a quick listen. Definitely worth thirty minutes of your time!


Anyone for Target Driven Zoning

Posted May 9, 2013 By Nigel Poulton

The Fibre Channel world is dying, with nothing new or interesting in the pipe... right? Well, not quite. While it's definitely not the melting pot that is cloud, and it doesn't have software defined in its name, cool stuff is still happening – at least cool in my opinion – and Target Driven Zoning is one of those cool things currently being worked on…


3PAR ASIC–Two-edged Sword?

Posted April 23, 2013 By Nigel Poulton

I caught up with Howard Marks last week when he was in town in London. We went out for a quick bite to eat one night and talked shop for two or three hours. I know my place when in the company of a grey beard like Howard, so for the most part I kept quiet and listened. However, at one point I vented my frustration that 3PAR arrays still don't support flash as a cache – despite being based on a modern, innovative architecture supposedly better suited to today's demands and requirements than, let's say, an apparently donkey architecture like EMC VNX.

But hang on a minute. EMC VNX has supported flash as cache for ages now. I can't be bothered to look it up, but I reckon VNX has supported flash as a cache (in the form of FAST Cache) for at least 2 years, probably more.

Seriously, I thought the uber-modern architecture of 3PAR was supposed to make adding innovative technologies easier. Could it actually be that the architecture of the box is hindering the adoption of important technologies like flash as a cache?! I mean seriously, how long does it take to catch up with a 20-year-old legacy technology like VNX?

Howard suggested that the problem might lie with the 3PAR ASIC.

We know that ASIC design can elongate the innovation cycle when compared to implementing on commodity Intel type architectures, but I honestly never thought that the ASIC might be behind the sloooooow uptake on flash as a cache.

I know that 3PAR supports flash as a tier, but I also know that that isn't always enough. VNX supports flash as a tier and/or as a cache. How can 3PAR be behind?
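For anyone wondering why tier vs. cache even matters, here's a minimal sketch of the distinction (hypothetical illustration, not vendor code): a tier *moves* a block so it lives in exactly one place, while a cache keeps a *copy* in flash and leaves the authoritative copy on disk.

```python
# Hypothetical sketch of flash-as-tier vs flash-as-cache, not vendor code.
class FlashTier:
    """Tiering *moves* hot blocks: a block lives in flash OR on disk."""
    def __init__(self):
        self.flash, self.disk = {}, {}

    def promote(self, blk):
        # Data is relocated; the disk copy no longer exists afterwards.
        self.flash[blk] = self.disk.pop(blk)


class FlashCache:
    """Caching keeps a *copy* in flash; disk stays the authoritative copy."""
    def __init__(self):
        self.flash, self.disk = {}, {}

    def read(self, blk):
        if blk in self.flash:        # cache hit: served straight from flash
            return self.flash[blk]
        data = self.disk[blk]        # miss: read from disk...
        self.flash[blk] = data       # ...and populate the flash copy
        return data
```

The practical upshot: a cache can absorb shifting hot spots block by block on every read, whereas a tier only helps once the tiering engine has decided to move a block – which is exactly why "we support flash as a tier" isn't always enough.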

I have to admit that I'm a fan of the way 3PAR implements thin technologies, and I do believe that implementing thin through the ASIC gives them an edge. But in this case, assuming Howard is correct that the ASIC is the stumbling block to implementing flash as a cache, it seems the ASIC really is a two-edged sword – the ASIC giveth and the ASIC taketh!

Oh, and since I said I'm a fan of how 3PAR implements thin technologies via the ASIC, it is only fair to say that when it comes to flash-as-a-cache implementations, VNX clearly wipes the floor with 3PAR.

QUICK UPDATE: I'm certain that there will be tons of existing, and potential, 3PAR customers with legitimate use cases and requirements for flash as a cache. So surely this is an engineering priority within HP, and as such it would have been implemented by now if it could be?!


HUS VM: Tier 1.5 from HDS

Posted September 25, 2012 By Nigel Poulton

HUS VM: What is in the name?

Ordinarily I wouldn't spend much time on the name of a product, especially an HDS product, where the incumbent Master Namer seems adamant on starting every product name with the word Hitachi. But a name can often tell you a lot about something…

So, while the HDS sales crew might be spouting this as entry level enterprise, the detail hidden within the name suggests something more midrange –

1. HUS VM = Hitachi Unified Storage Virtual Midrange. 

2. The internal factory name at Hitachi Japan is HM700, with the HM standing for High-end Midrange, or High Midrange.

So while the marketing and sales guys might want to use the term “enterprise” or “entry level enterprise” to describe this kit, the engineers seem to want to refer to it as midrange.

My personal opinion is that it's about as tier 1.5 as you can get, taking up residence right in the neighbourhood of the likes of 3PAR, XIV and high-end VNX. And for HDS that's a good place to be, as this is a juicy market segment that HDS don't have a decent play in.

Internally at HDS the product was developed under code name “Wister Lake”.

But HDS Suck at Tier 1.5

HDS's previous forays into this space (the gap between their modular AMS/HUS and the enterprise USP/VSP) have been dire. Technologies like NSC 55 and USP VM have been real textbook examples of square pegs and round holes, and only the most gullible or die-hard HDS shops would have had the misfortune of purchasing them. But this time it could well be different, as this is most definitely not an artificially cropped VSP with arbitrary limits slapped on to it. This genuinely is something specifically designed for that tier 1.5 market. Almost like a 3PAR.

HUS VM: Hardware

From a hardware perspective, HUS VM is a brand spanking new hardware platform designed specifically for the tier 1.5 market.

It should be no surprise that there is a custom ASIC at the heart of the system.  After all, Hitachi is an engineering led organisation with a deep history in ASIC design. 

[Diagram: HUS VM ASIC]

Should customers care that there is an ASIC inside though?

In answer to the question above, the technologist inside of me wants to answer "Yes". I like well-designed technology, and I think that 3PAR's implementation of thin technologies inside their ASIC has worked wonders for them. All marketing and corporate BS aside, the 3PAR implementation of thin provisioning and space-efficiency technologies is second to none in the market. This, I am certain, is due in no small part to its implementation of thin technologies within its ASIC.

I also personally believe, and feel it is borne out in experience, that at the very high end, ASICs and custom silicon still have a place. Even the most die-hard advocates of "commodity everything" still slap custom silicon all over their products (whether it be for inline encryption or whatever else), albeit they might not always design the custom silicon in-house.

However, even the technologist within me wonders whether this requirement for custom silicon holds water in the more midrange segment, where high and predictable performance are not mandatory. 

Now, while HUS VM has an ASIC at its heart, Hitachi have been stripping functionality out of the ASIC where possible.

The diagram above also points out that this could be viewed as a dual controller architecture, one that is very different from that of its tier 1.5 neighbours 3PAR and XIV, which sport a more grid-like approach. However, it should be pointed out that this architecture (HDS call the controllers "clusters") is the same as in VSP, where the controllers are tightly coupled via PCIe. And as with VSP, all paths to a LUN are active – none of this ALUA smoke and mirrors.

Scalability doesn't look overly great to me though. With just two controllers, it looks like it would be complicated to enable a scale-out approach of adding more controllers.

As the pictures below show, it bears the ugly trademark HDS bezels that make it look like it's from the 1980s, but the important thing is that it is radically different in its design to VSP – much better suited for tier 1.5 –

[Images: HUS VM logic boxes; cache and main boards; front and rear photos]

From a hardware perspective, below are some details that may be of interest –

Cache: 256GB
Max number of drives: 1,152
Max front-end ports: 32 (48 if you have no internal drives)
Front-end ports: 8Gbps FC
Backend: 32 x native 6Gbps SAS
Drive form factor: 2.5-inch SFF or 3.5-inch LFF
CPUs: 2 x 8-core Intel Xeon (partitioned as 4 x 4-core for firmware compatibility with VSP F/W)

I've purposely not included any IOPS or MB/sec figures as I tend not to give much weight to them. Suffice to say it sits snugly between the low and high end offerings already available from HDS.

[Image: HUS VM frames]

HDS are marketing this with a capacity sweet spot of 20-180TB, although it can scale larger. This is a good fit too, as a VSP tends not to be very compelling from a price perspective until you start getting comfortably north of 100TB, probably north of 150TB.

While on the topic of hardware, I should point out that there is a pair of clustered BlueArc/HNAS file servers inside (nestled in between the block storage controllers below and the disk drives above). The file servers have 32GB cache, 6 x 1GigE and 4 x 10GigE, with 4 x 4Gbps FC for connecting to the block controllers. Maybe a bit more on the file side of things later…

HUS VM: Software

From a software and firmware perspective this baby runs the high end code found in the USP and VSP platforms, DKCMain.

There's no fancy name for the firmware at HDS – nothing like "Enginuity", "ONTAP", or even "SiliconFS".

Now this is a two-edged sword. With one edge of the sword you get all of the features (Dynamic Provisioning/Thin Provisioning, Dynamic Tiering, heterogeneous virtualisation, replication…), reliability and street cred of the USP/VSP code, but you also get all of the legacy – things like old-fashioned parity groups etc. While I'm not a fan of the legacy in the likes of DKCMain and Enginuity, I have to think that this is a good move from Hitachi. It gives instant credibility when entering the tier 1.5 market, and HDS are backing HUS VM with the same 100% data availability guarantee that has come with the USP V and VSP (if it doesn't meet this you get credits to use with HDS).

With it running the same DKCMain code as the VSP, implementation details such as the infamous 42MB page size for thin provisioning and space reclamation are exactly the same between VSP and HUS VM. The same broad range of 3rd party arrays can also be virtualised behind it…
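As a back-of-envelope illustration of what a fixed 42MB page means in practice (a hypothetical sketch, not HDS code): allocation happens in whole pages, so even a tiny write consumes a full 42MB of pool capacity, and space reclamation operates at the same coarse granularity.

```python
# Hypothetical sketch of fixed-page thin provisioning, not HDS code.
# The 42MB page is treated here as 42 x 1024 x 1024 bytes for illustration.
PAGE_SIZE = 42 * 1024 * 1024

def pages_touched(writes):
    """Return the set of page indices allocated by (offset, length) writes."""
    pages = set()
    for offset, length in writes:
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        pages.update(range(first, last + 1))
    return pages

# Two 4KiB writes far apart each allocate a whole page:
writes = [(0, 4096), (100 * 1024**3, 4096)]  # one at offset 0, one at 100GiB
used = pages_touched(writes)
print(len(used), "pages,", len(used) * PAGE_SIZE // 1024**2, "MB consumed")
```

In this toy example, 8KiB of host writes pins 84MB of pool capacity – which is why the page size, and how well zero pages get reclaimed, actually matter to capacity planning.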

Going with the VSP firmware also creates a nice option for existing HDS customers who might have USP/VSP technology in their main data centres but would like to put something a little smaller and cheaper in their smaller/remote data centres, but something with the ability to replicate back to the main data centres.  TrueCopy replication technologies including HUR work between USP/VSP <—> HUS VM…..

I should point out that at the time of release, some of the more high end configurations, such as 3DC configurations, are not available for HUS VM.  And that the cut of DKCMain firmware available for HUS VM is based on a code freeze from earlier in the year so may lack some of the most recent tweaks available for the VSP.  But I expect this will catch up over time.

HUS VM may also be a good fit if you already have scripted DR for USP/VSP as this will work seamlessly with HUS VM.

While on the topic of software, the HUS VM will be managed by the pretty, but ultimately frustrating, Hitachi Command Suite software. So, a single management interface to manage all Hitachi storage products (well, almost, as there is still the occasional requirement to break out into some of the old interfaces that look and feel like 1980s websites). Moving in the right direction though…

Unified? Hmmm Kind of.

As previously mentioned there is a pair of clustered BlueArc file servers included in HUS VM.

Under the hood the block and file components are bolted together somewhat like CLARiiON and Celerra in VNX – block controllers owning all of the disks, with file heads that talk to the block controllers over internal FC connections. The way that the block and file code components are implemented is also very similar to VNX, with the block (DKCMain) and file (BlueArc SiliconFS) code packaged and running entirely separately, à la FLARE and DART on VNX.

I suppose this works for a first cut, but it's my opinion that it is somewhat frankenstorage (the same goes for VNX). However, this is only my opinion, and I am open to the possibility that for high end configurations this may be the best way to implement unified storage. VNXe does things slightly differently in the low end of the EMC portfolio, and Data ONTAP certainly does it in a more unified way in the low end and midrange (and NetApp would argue also in the high end). To me it's ugly, and ugly might be functional, but that doesn't mean I have to like it.

I'll say this – it's a sight more unified than anything HP have!

But in HP’s defence, is unified something that fits well in the tier 1.5 space?  I do wonder how the price point will work out for file services in an HUS VM!

One final point on file: there is no scale-out like Isilon/OneFS. But then again there isn't from EMC in a unified package either. The strongest play in the unified-with-scale-out-file space is NetApp.

Conclusion

On the topic of the ASIC: I know most people don't think customers should care about such detail. Well, maybe that is the case at the low end, but I personally think it matters in tier 1.5 and above. The discerning storage customer should care, because at the end of the day it does make a difference, and more fool you if you believe it doesn't.

Still on hardware, Hitachi now has 3 platforms to develop and maintain.  How much of a strain will that be?

One thing that is certainly missing in my opinion is some form of flash cache.  We already see this in VNX and in the guise of PAM in NetApp systems.  Using flash as a cache extension like this, while it can have its drawbacks and subtleties, is often a very useful technology. 

While it's not earth-shattering, and is merely plugging a gap that HDS have had in their portfolio for too long, it is about time that HDS had a decent play in tier 1.5. This genuinely looks like it might be about as close as you can get to tier 1 without paying the tier 1 price premium (assuming they don't charge too close to VSP-type prices).


Quick Thoughts on ONTAP 8.1.1

Posted July 17, 2012 By Nigel Poulton

Due to the interest generated by the podcast I posted on Monday night about Data ONTAP 8.1.1, I thought it might be a good idea for me to follow it up with a short blog post outlining what I like and don’t like about ONTAP 8.1.1 and what I think about the current state of play with NetApp ONTAP.  I’ll keep it to ONTAP as I am a technologist at heart….


Benchmarks Shmenchmarks

Posted June 12, 2012 By Nigel Poulton

I recently recorded a podcast with Ray Lucchesi, a guy with a shed load of storage experience that I respect a lot.

Ray puts out a pretty decent monthly newsletter covering interesting things that have happened in the world of storage, as well as a roundup of some of the latest benchmark and performance news. In this podcast we discuss items from Ray's newsletter, including –

  • Is there any point to benchmarks?
  • Which benchmarks are good (if any) and which are a waste of time?
  • What has Ray found interesting from the world of performance benchmarks in the last few weeks?
  • IBM XIV – is it midrange or enterprise, and how does it fare in SPC-2 benchmarks?
  • Why don't we see flash array vendors submitting SPC benchmarks?
  • Why doesn't EMC submit SPC benchmarks?

Once a month the Deep Dive podcast will review Ray's newsletter, covering the most interesting technical news from the storage world. DON'T MISS AN EPISODE – SUBSCRIBE FOR FREE!

…or click to download this podcast


Pure Storage FlashArray: Ooooooh yeh!

Posted May 16, 2012 By Nigel Poulton


Today Pure Storage announced a pretty cool new all-flash storage array, the Pure Storage FlashArray. An interesting startup playing in an interesting and massively disruptive area of technology. This post gets under the hood of the Pure Storage FlashArray and explains exactly why solid state storage technologies are changing the rules when it comes to storage array design.


How to Build a Solid State Storage Array

Posted May 10, 2012 By Nigel Poulton

While attending the April 2012 Solid State Storage Symposium in San Jose, I had the opportunity to host a panel of experts and technologists from four of the leading solid state storage array vendors. The panel discussed how to design and architect storage arrays built around solid state storage technologies. Some of the questions included…


VSP and VMAX Tier 1 Shenanigans

Posted April 2, 2012 By Nigel Poulton

Hu Yoshida, CTO of Hitachi Data Systems and long-time legend of the storage industry, is a person I respect a lot. Hu and I recently engaged in a discussion around the validity of architectures like VSP and their designation as Tier 1, which Hu summarised in a recent blog post he wrote. Hu has asked that I summarise my thoughts on the topic in a blog post so that he can fully digest and potentially answer. Here goes.
