VMAX vs VSP

Considering the number of searches for “VSP vs VMAX” that land people at my site, I figured it was about time I wrote something on the topic so that all those people who end up here from that search have something worth reading.  So here we go…..

EMC Symmetrix VMAX vs Hitachi Virtual Storage Platform (VSP)

VMAX vs VSP Tale of the Tape

I do not work for EMC or Hitachi, or even HP…..  I am not an official authority on either VMAX or VSP, and any comments and opinions are my own personal opinions and not the opinions of any of my employers, past, present or future.  This is a point-in-time article written in January 2011, so when you read this some of the information might be out of date. Do not take the information in this article at face value; speak with your vendor for clarification and the latest information.

So…. at first glance it may appear that VMAX and VSP are very similar, and in some respects they are.  However, when you get under the hood there are some fundamental and potentially important architectural differences.  The goal of this post is to discuss some of these differences and hopefully generate some discussion and opinion in the comments section at the end.

So, let's start…….

Custom vs Commodity

I have to mention this first as it provides the foundation for much of what we will discuss further on.

When it comes to the use of commodity off-the-shelf hardware versus design and development of custom silicon, such as ASICs, VMAX and VSP (actually EMC and Hitachi) find themselves staring at each other from opposite sides of the fence.  

EMC  – EMC have pitched their flag firmly in the commodity camp.  It’s fair to say that EMC have recently positioned themselves up close and personal with Intel.  In line with this, EMC have former Intel legend Pat Gelsinger as a direct report to Tucci.  I’m led to believe that Pat maintains a very close relationship with his former colleagues at Intel and is key in facilitating a very tight relationship between EMC and Intel – a relationship that they hope will allow VMAX and its siblings to be ahead of the curve when it comes to exploiting the latest and greatest from Intel.  In fact I believe I’m right in saying that EMC committed to utilising and exploiting the latest and greatest Intel chips as soon as is reasonably possible.

On Symmetrix VMAX, FA ports (front end ports) and DA ports (back end ports) are all tied to and controlled by commodity off-the-shelf Intel processors.  The commodity Intel chips process and handle all in and outbound I/O, all RAID functions, all fetch instructions, as well as all higher level software functions including Virtual Provisioning, Thin Provisioning, TimeFinder, RDF…  For example, RDF executes on cores that are tied to front end ports and TimeFinder executes on cores that are tied to back end ports.  In the current version of VMAX these higher level software tasks are rigidly locked to cores.

The generally cited benefits of the commodity approach, and of aligning with Intel, include leveraging Intel's massive R&D budget and riding the price/performance curve.

Hitachi – Hitachi, on the other hand, have a different approach; I suppose you could call it more of a hybrid approach.  In a Hitachi VSP you'll find a far more balanced mix of commodity Intel processors and Hitachi-designed proprietary ASICs.

In a VSP, front and back end ports are controlled by Hitachi ASICs that are designed specifically for I/O processing, whereas higher level software functions such as Dynamic Provisioning, Thin Provisioning, ShadowImage, TrueCopy… are all offloaded to a flexible pool of commodity Intel processors.

Hitachi calls this ASIC the "Data Accelerator"; it is a dual-core part which Hitachi says is designed specifically for I/O processing.
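
To make that contrast a bit more concrete, here is a minimal Python sketch of the two execution models as I understand them from the descriptions above. Everything in it (core names, task names, the scheduling logic) is invented for illustration and does not correspond to real EMC or Hitachi internals:

```python
# Illustrative sketch only - none of these names map to real EMC or Hitachi internals.
from collections import defaultdict

# VMAX-style: higher-level software functions are rigidly locked to specific cores.
PINNED_TASKS = {
    "remote_replication": "front_end_core_0",  # e.g. an RDF-like function tied to a front-end core
    "local_copy":         "back_end_core_0",   # e.g. a TimeFinder-like function tied to a back-end core
}

def run_pinned(task):
    """A pinned task can only ever run on the core it is locked to, busy or not."""
    return PINNED_TASKS[task]

# VSP-style: dedicated I/O silicon owns the ports, while higher-level functions
# are dispatched to whichever core in a shared Intel pool is least busy.
POOL = ["intel_core_0", "intel_core_1", "intel_core_2", "intel_core_3"]
pool_load = defaultdict(int)

def run_pooled(task):
    """Pick the least-loaded core from the shared pool for this task."""
    core = min(POOL, key=lambda c: pool_load[c])
    pool_load[core] += 1
    return core

if __name__ == "__main__":
    print("pinned:", run_pinned("remote_replication"))        # always front_end_core_0
    for task in ["tiering", "thin_provisioning", "local_copy", "remote_replication"]:
        print("pooled:", task, "->", run_pooled(task))         # spreads across the pool
```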

It is the Hitachi belief that this approach yields superior performance while also being more efficient when it comes to power and cooling.  While architecturally this sounds great and may well be true, is it sustainable from an R&D perspective?  And how much of a difference does it really make?

One of the challenges of designing and making your own custom silicon (ASICs) is that it costs a lot, not only in monetary terms but in terms of time.  Is this part of the reason that VSP felt like it was late to market?  After all, it didn't appear until ~18 months after VMAX made its debut, yet it is still based on the same processor and PCIe gen1 architecture as VMAX…..:-S ??

Putting a positive spin on things, at least you are not wholly reliant on somebody else's roadmap – somebody who is admittedly driven by the gaming industry rather than the storage industry.  But then again, as Hitachi does leverage Intel inside (pun intended), they are somewhat reliant on Intel's roadmap.  And as they are not as tightly aligned to Intel, will they always be behind EMC in exploiting Intel's advances?  How much coding on VSP will have to change when the likes of Sandy Bridge and Ivy Bridge processors arrive with their new memory controller architecture?

Point being, there are clear and fundamental differences that make for great technical debate.

Sub-LUN auto-tiering

Both arrays now do Sub-LUN auto-tiering, but interestingly take slightly different approaches.

EMC.  In VMAX world, sub-LUN auto-tiering is marketed as FAST VP (Fully Automated Storage Tiering for Virtual Pools).

If you want multiple tiers of storage in VMAX you have to manage multiple Pools (VP Pools) of storage.  Your config might look like this -

  1. Tier 0 = VP Pool comprising 400GB SSD, RAID 5 (3+1). You might call this pool PoolA.
  2. Tier 1 = VP Pool comprising 600GB 10K FC, RAID 0. You might call this pool PoolB.
  3. Tier 2 = VP Pool comprising 2TB SATAII 7.2K, RAID 6 (14+2). You might call this pool PoolC.

For the record, if you did name your pools like this you should be shot!

There are constraints around mixing drive geometries and RAID configs within a single VP Pool etc.  If you want 3 tiers, you have a minimum of 3 VP Pools to manage.

Hitachi.  In VSP land sub-LUN auto-tiering is marketed as HDT (Hitachi Dynamic Tiering).

Implementation wise, HDT approaches things slightly differently.  In VSP you can have multiple tiers within a single HDP Pool, referred to as an HDT Pool.  So a single pool can have up to three tiers (drive geometries and RAID levels).

Strangely the VSP approach looks very similar to how EMC do things with CLARiiON.  So VSP is pretty similar to CLARiiON, but both are different to VMAX :-S
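
To make the difference in pool layout a little more tangible, here is a hypothetical sketch (the dictionaries and names are mine, not either vendor's object model or CLI output) of what the two layouts boil down to for a three-tier design:

```python
# Hypothetical config sketch - not either vendor's actual object model or CLI output.

# VMAX-style FAST VP: one VP Pool per tier; the tiering policy spans the pools.
vmax_layout = {
    "Tier0_SSD":  "PoolA",
    "Tier1_FC":   "PoolB",
    "Tier2_SATA": "PoolC",
}

# VSP-style HDT: a single pool that internally contains up to three tiers.
vsp_layout = {
    "HDT_Pool_1": ["SSD", "10K SAS", "7.2K SATA"],
}

print(f"VMAX-style: {len(vmax_layout)} pools to manage for 3 tiers")
print(f"VSP-style:  {len(vsp_layout)} pool to manage, "
      f"{len(vsp_layout['HDT_Pool_1'])} tiers inside it")
```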

While on this topic it’s worth mentioning Solid State Drives (EMC calls them Enterprise Flash Drives).

Both companies source ZeusIOPS drives from STEC Inc, and both arrays put the drives on their standard backend loops, where they cannot achieve their full potential in IOPS and throughput.  Both arrays also utilise SSD as a discrete tier of storage and not as a form of level 2 cache.  The drives in the VMAX have an FC interface whereas the drives in VSP have a SAS interface, and VSP supports a 2.5" form factor SSD.

When it comes to the granularity of sub-LUN tiering, VMAX moves in units of either 7.6MB or 360MB whereas VSP moves in units of 42MB.  See here for more detail.
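
To get a feel for what those extent sizes mean in practice, here is a rough back-of-the-envelope sketch. The extent sizes come from the paragraph above; the LUN size, the number of hot areas and the assumption that each hot area lands in its own extent are all made up for illustration:

```python
# Back-of-the-envelope arithmetic on sub-LUN tiering granularity.
# Extent sizes are from the article; everything else is an illustrative assumption.

MB = 1
GB = 1024 * MB
TB = 1024 * GB

lun_size      = 2 * TB    # example LUN
hot_areas     = 2000      # pretend 2000 scattered hot regions...
hot_area_size = 4 * MB    # ...of ~4MB each, so roughly 8GB of genuinely hot data

extent_sizes = {
    "VMAX fine (7.6MB)":   7.6 * MB,
    "VMAX coarse (360MB)": 360 * MB,
    "VSP (42MB)":          42 * MB,
}

for name, extent in extent_sizes.items():
    extents_per_lun = lun_size / extent
    # If each hot area sits in a different extent, the array promotes a whole
    # extent per hot area - the bigger the extent, the more premium SSD space
    # gets consumed (and the more data gets shuffled) for the same working set.
    promoted = hot_areas * extent
    print(f"{name}: ~{extents_per_lun:,.0f} extents in the LUN, "
          f"~{promoted / GB:.0f}GB promoted for "
          f"~{hot_areas * hot_area_size / GB:.0f}GB of hot data")
```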

Racks

While strides have been made by both companies with advances such as hot and cold aisle designs, power feeds from below and above, and even industry standard 19” racks for VSP, both VMAX and VSP require bespoke custom cabinets and would be an eyesore in well-designed Data Centres such as those pictured below -

DC pic1

DC pic2

I suppose at least VMAX will look good with its blue neon light on the front.

While I understand why storage vendors only certify their kit in their own racks, it is not beyond the realms of possibility for vendors to make it happen when it comes to having their arrays installable in customer cabinets.

While on the topic of Racks, lets mention 2 things -

  1. VMAX is sexy whereas VSP is ugly
  2. Off the shelf configs of VMAX allow for up to 10 storage bays, whereas VSP allows for up to 4 storage bays. 

VMAX 11bays

The above means that off the shelf (not requiring any form of RPQ) VMAX configs can have more disks than VSP, although VSP does support more disks in a smaller footprint by supporting 2.5” drives whereas VMAX only supports 3.5” drives.  However……. can you imagine trying to get approval for a microcode upgrade on a box that size?  It is so big it will no doubt have 10,000 applications running on it!  Still, if you want to, you can.

Front End

Both arrays are predominantly FC arrays, with both VMAX and VSP also supporting FICON for Mainframe attach.  VMAX also supports iSCSI and IP ports for replication; VSP does not.  Neither currently supports FCoE, but both will add this support in the very near future.

From an FC perspective, both arrays can have a maximum of 128 FC ports, although you can configure a VSP with no backend connectivity, or half backend connectivity, which allows for more front end ports (up to 192 FC ports).  Not sure how many people make use of this?

As mentioned when we talked about commodity vs custom, Hitachi have developed a custom ASIC to handle I/O on the front end and back end ports, whereas EMC have commodity Intel processors handling front and back end I/O. 

Cache

VMAX and VSP both offer extremely large caches.  A significant architectural difference is that VSP separates control data and real data.  Control data has dedicated paths and dedicated DIMMs; a fully configured VSP has 32GB of dedicated Control Data.  Another example of the Hitachi philosophy of specialisation (specialised ASICs, specialised memory areas and memory interconnects….).

VMAX implements somewhat of a distributed cache.  Each Engine has a large local cache.  However, thanks to the RapidIO fabric, all cache (even cache not sitting on the motherboard of the Engine – Director actually) is still directly addressable by every CPU/core in the system – think of it as globally addressable local memory.  This approach is a step away from the way things were with Symmetrix DMX, and the overhead when a core from Engine A needs to access memory in Engine B is very low thanks to the RapidIO fabric, the custom EMC ASIC and the RapidIO driver.

The VSP cache is more of a monolithic cache – you get a very large single image cache with a single cache directory and no requirement for a Store and Forward function.  Accessing cache slot 0 should be exactly the same as accessing slot 65536 (for example).

This separation of control data from real data lines up nicely with Hitachi's wider ethos of separation and specialisation.  EMC, by contrast, shares user data and metadata on the same cache boards over the same interconnects, and VMAX cache (and disk) has an affinity to a particular Engine, even though it remains globally addressable as described above.
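
As a toy model of what "globally addressable local memory" means, consider the sketch below. It is my own gross simplification (slot counts, the directory scheme and the relative costs are invented), but it captures the idea that a global cache address resolves to an owning Engine plus a local slot, with remote accesses paying a small fabric hop, whereas a monolithic single-image cache has no notion of ownership at all:

```python
# Toy model of "globally addressable local memory" - a gross simplification,
# not EMC's real cache directory or slot layout.

NUM_ENGINES = 4
SLOTS_PER_ENGINE = 1024          # pretend each Engine holds 1024 cache slots
TOTAL_SLOTS = NUM_ENGINES * SLOTS_PER_ENGINE   # one global address space

def locate(global_slot):
    """Resolve a global cache slot number to (owning engine, local slot)."""
    return divmod(global_slot, SLOTS_PER_ENGINE)

def access_cost(requesting_engine, global_slot):
    """Relative cost: local access is cheapest, remote access adds a fabric hop."""
    owning_engine, _local_slot = locate(global_slot)
    return 1 if owning_engine == requesting_engine else 2   # +1 for the fabric hop

if __name__ == "__main__":
    # Engine 0 reading a slot it owns, then a slot owned by Engine 3.
    print(locate(10),           access_cost(0, 10))             # (0, 10) -> cost 1
    print(locate(3 * 1024 + 5), access_cost(0, 3 * 1024 + 5))   # (3, 5)  -> cost 2

# In the monolithic, single-image cache described above for VSP there is no
# ownership concept: every slot lookup costs the same wherever it lives.
```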

Engines and Interconnects

VMAX is radically different to its predecessor DMX when it comes to its caching, interconnect and controller design.  VMAX is now based on Engines as processing building blocks.  An Engine has its own FA ports, DA ports, CPU cores, cache, backend disk loops (switched actually) and attached disk drives etc.  A VMAX can have between 2 and 8 Engines.

Within an Engine there are two directors and these directors communicate over PCIe.  But inter-Engine communication is over RadIO.  You could say that the Engines are loosely coupled and give VMAX more of a modular look and feel.  Although each Engine has its own directly addressable cache (128GB at time of writing) it is able to transparently address and access cache in every other Engine over the RapidIO fabric.

Hang on…it’s got an ASIC!!!  While EMC are firmly sitting in the commodity Intel camp, it is worth pointing out that the chip that handles the RapidIO protocol and interface is an EMC designed ASIC.

Oh and while we're being picky, current Data At Rest Encryption is implemented by an ASIC that is built into the Tachyon processor on the new backend SLICs.  However, this is not an EMC design and therefore you could argue it is quasi-commodity :-S

Personally I don't think either of the above takes away from EMC's drive to exploit commodity Intel.  Make no mistake, EMC re-engineered Symmetrix, almost from the ground up, in order to exploit the Intel architecture.  Oh and two ASICs is still fewer than half the ASICs in VSP.

VSP on the other hand looks and feels less modular in its design.  In fact the VSP design looks more like a traditional monolithic array with a single large shared global cache.  VSP design is a tightly coupled two controller fully active-active design with communication between the two controllers being over PCIe.  So I suppose you could say that it looks a little like a very large VMAX Engine with two very large Directors…..No?

Back End

The VMAX backend is still Fibre Channel whereas VSP has made the jump to SAS.  While talking about the backend, VMAX supports only 3.5” drives while VSP can have either 3.5” or 2.5”.  Both support pretty much the full spectrum of drives, ranging from high capacity 7.2K RPM SATAII drives through the usual 10K and 15K all the way up to high performance Flash drives.

Also, each VSP Back End Director (BED for short) has two of the Data Accelerator ASICs, which perform parity calcs, drive rebuilds and encryption.  VMAX only uses an ASIC on the backend for encryption, and even then it's not an EMC-designed ASIC, so you could say it's off-the-shelf (commodity), built into the Tachyon chip.

However, the major difference on the backend is the capability to virtualise 3rd party arrays, which VSP does and VMAX does not.  This is an area in which I believe EMC and Hitachi differ philosophically.  For example, I’m sure EMC could have implemented VSP style virtualisation within VMAX but have chosen not to.  Either way, if you want to do it today, you need a VSP, as VMAX cannot do it.

Of course technically speaking it is the front end ports and not the back end ports on VSP that do the 3rd party virtualisation.  They do this by flipping the mode of the front end FC port from target to initiator and having it pretend to be a Windows host – scary thought, although it does work :-D
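
As a rough mental model of that mode flip (class and method names are entirely hypothetical; this is not Hitachi firmware or any real API), it looks something like this:

```python
# Hypothetical sketch of the target -> initiator mode flip described above.
# Class and method names are invented for illustration only.

class FrontEndPort:
    def __init__(self, wwn):
        self.wwn = wwn
        self.mode = "target"            # normal duty: accept I/O from hosts

    def virtualise_external_array(self, external_array):
        # Flip the port so it behaves like a host (initiator) towards the
        # external array, then surface that array's LUNs as internal capacity.
        self.mode = "initiator"
        discovered = external_array.report_luns(initiator_wwn=self.wwn)
        return [f"virtualised:{lun}" for lun in discovered]


class ThirdPartyArray:
    def __init__(self, luns):
        self._luns = luns

    def report_luns(self, initiator_wwn):
        # The external array just sees another "host" logging in and asking
        # for its LUN inventory.
        return list(self._luns)


if __name__ == "__main__":
    port = FrontEndPort(wwn="50:06:0e:80:00:00:00:01")
    legacy_array = ThirdPartyArray(luns=["lun0", "lun1", "lun2"])
    print(port.virtualise_external_array(legacy_array))
    print(port.mode)                    # now "initiator"
```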

Oh and let’s not forget about VPLEX from EMC…. although for now that’s another conversation.

RAID

Both VMAX and VSP have very traditional RAID architectures that have barely changed over the last 10+ years.  Both support RAID 1, RAID 5 and RAID 6 in typical formations, although VSP implements RAID 10 rather than RAID 1.  Importantly, neither has a parallel, declustered RAID architecture or anything that is specifically designed for today's demands – in fact both have RAID implementations that were designed with yesterday's demands in mind.  Having said that, both generally still cope well but are in need of modernisation.  Fingers crossed that both are working on such modernisations.

Oh yes…. VSP performs RAID XOR calcs etc in a custom ASIC whereas VMAX performs such calcs in commodity Intel chips.
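
For anyone who hasn't stared at parity maths for a while, the XOR work both boxes are doing (in a custom ASIC on VSP, on Intel cores on VMAX) boils down to something like this toy example:

```python
# Toy illustration of RAID-5 style XOR parity - the operation is the same
# whether it runs in a custom ASIC or on a general-purpose Intel core.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data strips plus one parity strip.
d0 = b"\x11\x22\x33\x44"
d1 = b"\xaa\xbb\xcc\xdd"
d2 = b"\x01\x02\x03\x04"
parity = xor_blocks([d0, d1, d2])

# Lose a strip (say d1) and rebuild it from the survivors plus parity.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print("rebuilt strip matches original:", rebuilt_d1 == d1)
```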

Conclusion

If I were choosing one myself, which would I choose…… clearly a difficult choice, but if I were pushed it could potentially be III16118

:-D

Feel free to cast your vote on the enterprise storage array poll I've just posted over on the right hand side of the site!

If there is enough interest I will consider doing more of a software comparison with things such as LUNs, management software, replication etc……

Constructive comments welcome (please disclose if you work for a vendor).  Oh and let me know if there are any areas you think I’ve missed.

Disclaimer: And please remember…. I do not work for EMC or Hitachi, or even HP…..  I am not an official authority on either VMAX or VSP, and any comments and opinions are my own personal opinions and not the opinions of any of my employers, past, present or future.  This is a point-in-time article written in January 2011, so when you read this some of the information might be out of date. Do not take the information in this article at face value; speak with your vendor for clarification and the latest information.  Sigh….

You can follow me on Twitter and join me in putting the technology world right ;-)  My Twitter handle is @nigelpoulton

38 comments for “VMAX vs VSP”

  1. January 13, 2011 at 10:59 pm

    Hi Nigel
    Interesting survey on some of the key hardware differences — but isn't the software ultimately more important? 
    Not to throw more work in front of you, but I bet there would be a strong demand for a software comparison on top of this.
    – Chuck

  2. January 13, 2011 at 11:04 pm

    Hi Chuck,

    Totally agree and I’m planning on doing that in the not too distant future.

    If it wasn’t for the fact that this post is over 2,000 words I would’ve compared software in this post.

    Nigel

  3. January 13, 2011 at 11:18 pm

    Hey Nigel – for the benefit of your readers that don't know, the HP StorageWorks P9500 Disk Array is based on the same hardware as HDS' VSP.  While the hardware is the same, there is one big difference that I've emphasized on my blog several times and that's Application Performance EXtender or APEX software.  This software leverages HP technology that we worked with Hitachi Japan, our OEM partner, to include in the product. 
    APEX is software that can help customers prioritize their applications and give the most important apps higher priority to improve the service level.  HDS said they had this but they either don't understand what APEX is or something else (I'll just say truth-twisting).  There is port control s/w that both HP and HDS have – that isn't APEX.  There's also software that can move data to a new LUN – that isn't what APEX does either.  Here's a white paper that describes it in more detail: HP StorageWorks P9000 APEX Software white paper.
    I've got several posts on my blog talking about the P9500 (formerly XP Disk Array). You can find a listing of them by clicking here.  I'd be happy to set up a virtual briefing for you to better understand APEX. 
    Thanks – Calvin Zito, aka @HPStorageGuy

  4. January 14, 2011 at 7:44 am

    Hi Nigel
    As usual a very good post. Sometimes I really wonder how long this Enterprise System Level feature fighting will go on until the vendors accept that the real battlefield is around the Midrange Systems.
    Roger

  5. January 14, 2011 at 12:04 pm

    Hey, @HPStorageGuy – HDS' Claus Mikkelsen seems to disagree that APEX is all that special:
    http://blogs.hds.com/claus/2010/10/hiking-down-from-the-apex.html
    And FWIW, the chips that control front-end FC/FICON and back-end FC/SAS are *ALL* really ASICs – so it's unfair to count the encrypting Tachyon as a VMAX ASIC. There is only the 1 custom ASIC in a VMAX.

  6. January 14, 2011 at 4:12 pm

    Companies (like mine) buy big iron storage because it performs and doesn't go down. Unlike other array companies who claim to be "enterprise" but where even a little disk issue on a loop will cause an array to reboot – I'm sorry, clustered or not, that's not enterprise!
    That is why the big iron vendors will continue the battles on the high-end systems.
     
    (for the record i have high-end and midrange systems from multiple vendors and they all serve their purposes just fine)

  7. January 14, 2011 at 7:25 pm

    Hello Nigel, this is another interesting post which will generate a lot of discussion and will help to highlight the differences between the two products.
     
    Custom or Commodity  
    With all due respect, this is irrelevant since the proof is in the end product’s feature, function, and competitive price.  
     
    Sub LUN level Tiering
    I do not have enough information about EMC’s FAST VP to do a comparison. I cannot find information about this on the EMC website. The implementation of HDT is as you describe it, providing three tiers in one pool, so that a volume is allocated to a pool and HDT moves the pages within the volume based on the access to the pages during selected time periods. The page size is the same page size that we use for thin provisioning and zero page reclaim. What should be noted is that this takes a lot of processing cycles and meta data to keep track of this paging activity. We do this processing in a pool of global, quad core, Intel processors, which is separate from our (custom) I/O processors and we store the meta data in a memory space that is separate from our data cache. In this way the VSP reduces the impact of sub LUN level tiering on I/O processing. While I do not know how VMAX does this, I assume that the same processors that do the I/O processing must also do the sub LUN level tiering and the meta data is stored in the data cache, which will impact overall performance. The granularity of the sub LUN size is not as important as the performance impact of the tiering process.
     
    Racks
    On the topic of Racks, beauty is in the eye of the beholder. It should be pointed out that the VSP rack has a width of 61 cm while the VMAX rack has a width of 76.7 cm and when you count the number of racks you need to include the controller racks as well. So the VMAX extends to 11 racks or 11×76.7 cm =  8.44 meters while the VSP extends to 6 racks or 6×61 cm = 3.67 meters, less than half the rack foot print of VMAX for about the same number of drives. The Small Form Factor 2.5 inch drives draw about half the power of the 3.5 inch drives, so the VSP draws about 40% less power than a VMAX with equivalent number of drives.
     
    Front End
    It should be noted that each of the VSP storage ports can be virtualized into 1024 virtual ports and each virtual port has its own address space and can be mode set to different host attachments, so the front end connectivity can be in the 10s of thousands. Also we can add FED blades in pairs, incrementally and non-disruptively. Unlike VMAX, we do not have to add an entire VMAX engine just to get additional front end ports.
     
    Cache
    Our cache is as you describe it. It is a global cache. However, we can increase the cache incrementally and non-disruptively by adding pairs of 64 GB cache modules. VMAX would have to add an entire VMAX engine just to get the additional cache. We also mirror the writes within the cache for write protection. The cache can also be partitioned to provide QoS for applications.
     
    Engines and Interconnects
    I would disagree with your view that the VSP looks and feels less modular than the VMAX. As you point out, VMAX granularity is at the engine level and they scale by loosely coupling whole engines together through an external RapidIO switch. The granularity of the VSP is at the component blade level and is tightly coupled through an internal switch. The advantage is that the VSP can scale front end, back end, internal processors, and cache modules at a more granular level than VMAX and can integrate them so that all these components can scale up to meet increasing host server demands and can also dynamically scale out to distribute resources to multiple processors.
     
    Do not be misled by the external packaging. The VSP is not a two controller active/active design. While half of the switch matrix can be housed in another rack, the entire configuration is still one switch matrix of tightly coupled components.
     
    Back End
    The back end is pretty much as you describe it. However, I doubt that EMC could do external storage virtualization in their VMAX engine otherwise they would have already done it and not have spent the effort on InVista. If they could do external storage virtualization in the VMAX engine they could leverage all their software like FAST, SRDF and TimeFinder across external storage. Why would they pass up this opportunity?
     
    RAID
    No argument here except to point out that the VSP RAID XOR calculations are done in a separate ASIC, which does not impact the processors that do the front end I/O processing or the general processing for functions like dynamic tiering and replications etc. The VMAX RAID calculations must contend with and impact everything else that is done on that general processor.
     
    Conclusion
    It really doesn’t matter whether the insides are custom or commodity. The market place will decide, so let’s continue the conversation and ask customers what they think.  But I do suggest staying tuned for our SPC-1 benchmark where you’ll get more info on the overall performance.

  8. January 14, 2011 at 7:46 pm

    Hey Barry, I didn't know you and Claus over at HDS were such good friends.  You must be for you to point to something he wrote about HP.  Then again, maybe not. I'm wondering what you think of his opinions on EMC gear.  FWIW, I'm sure the reverse would probably never happen.  http://blogs.hds.com/claus/2009/04/smackdown-on-barry-burke-emc.html

  9. January 14, 2011 at 7:50 pm

    Hu, just give us pretty flashing lights on the VSP cabinets, that can be customized like the christmas light shows.

  10. George Lester
    January 14, 2011 at 9:36 pm

    Hi Nigel, I am a HP storage marketing guy and here as are a couple of additional thoughts.
     
    The significance of using the 2.5” SAS disk drives is that it allows the HP StorageWorks P9500 to deliver a solution that is very space efficient and can fit into existing data center space and help improve the capacity/square meter.
     
    In addition, there’s another significant software difference.  HP StorageWorks P9000 Performance Advisor Software is unique to the P9500 and not the same as the HDS VSP performance management software. It customers identify performance bottlenecks, collect and display performance data, mitigate risk by providing performance alarm notification, avoid impending outages due to unavailable or misallocated storage resources. You can find out more about HP StorageWorks P9000 Performance Advisor at http://h18006.www1.hp.com/storage/software/p9000/pas/index.html
     
     
    Regards,
    George Lester

  11. Mary
    January 15, 2011 at 3:17 pm

    Wow, I didn't know EMC had introduced wireless technology to connect their VMAX engines……."But inter-Engine communication is over RadIO"  I know the Engines are within the same frame but that is pretty impressive with all the potential interference in a Data Centre Hall environment. Hee Hee. Seriously though, a very interesting and educational post.

  12. January 17, 2011 at 1:47 pm

    Calvin/George,

    Thanks for pitching in. As I mentioned to Calvin on Twitter, I will include APEX when I write the VMAX vs VSP software comparison… Oh and I will mention the P9500.

    Watch this space…..

  13. January 17, 2011 at 1:47 pm

    Roger,

    I think the vendors do "get" that midrange is also a huge battlefield.  EMC's big announcement (18th Jan 2011) this week will no doubt include major changes to their midrange offering, with NetApp firmly in the cross-hairs!  HP are also seeing this with their recent acquisition of 3PAR (3PAR sits nicely across T1 and T2).

    Do you see a day when a midrange product becomes the flagship product at EMC?

  14. January 17, 2011 at 1:50 pm

    Hu,

    Thanks for sharing your opinions, some valuable insights, as well as some points I disagree with…

    I agree with you re commodity vs custom.  One of the reasons I included it though was because I intend this post to highlight the hardware differences, and then let people make their own decisions.

    On the topic of sub-LUN auto-tiering.  I would argue that the extent size is "as important" and may in fact have an impact on the performance of the re-tiering process.  An extent size in the range of gigabytes will be very wasteful of premium SSD space and will also impact backend performance during re-tiering operations – the more you move the longer it takes… 

    I think in the future extent sizes will become smaller than they currently are, as well as being dynamic.  Just my humble opinion of course, but I think they have to.

    Unfortunately I have to take issue with the statement that front end ports on VSP can be virtualised into 1024 virtual ports.  While I'm sure this is theoretically possible (and even physically possible), this would create an almost unworkable configuration.  If 1024 hosts are connected to a single physical front end port and each issued a single I/O at the same time then the port would be overrun!?!?

    I'm assuming that you are saying each front end port can be virtualised and each front end port has 1024 buffers – theoretically this could equate to 1024 virtual ports, but I would bet good money there are no real-world configs out there even approaching this…..?

    Other than that, I really appreciate your insightful comments.

  15. Pingback: :: blazilla.de ::
  16. Chris Fricke
    January 19, 2011 at 7:47 pm

    Very informative. I have no plans to buy either platform but it was an interesting enough read that I made it all the way to the bottom. Well done, sir. I look forward to seeing the software side of things.

  17. IvanE
    February 3, 2011 at 6:30 am

    LOL, 20 votes in one day for VMax?

    Did someone send a mail to peers asking to vote? ;)

  18. German
    February 25, 2011 at 7:01 am

    The real difference between a Vmax and a VSP you can see, if you try to implement new additional cache. On the Vmax you will loose half of the frontend ports. Funny, isn't it?

  19. February 28, 2011 at 10:16 pm

    German, not sure what you mean by "will loose half of the frontend ports" when upgrading cache in a VMAX?  Can you explain in more detail?

  20. German
    March 1, 2011 at 7:21 pm

    Nigel, in our case we had an upgrade from 192 GB to 256GB. Every Vmax does have 4 Engines / 8 directors. Because it's a "clustered" solution, you lose half of the engines during the installation of the cache. The other side is providing the storage service via one fabric to the server.

  21. March 1, 2011 at 9:51 pm

    Hi German,

    Thanks for the clarification, I hadn't realised you had meant you lose half of the front end "during" the installation of the additional cache.

  22. Mox3311
    March 3, 2011 at 7:26 pm

    I like two special features.
    1. Take a controller offline to replace a DIMM – as stated, you lose FEDs etc.
    2. Tiering tracking and management done on some level by the SVP – really?

  23. prasanna
    April 26, 2011 at 8:39 am

    Dear Nigel,
    We are at present using DS 8K series storage and would like to replace it in the coming quarter. I am in the process of comparing the enterprise class storage of EMC, HDS and IBM. Do you have any ready document or reference site where I can find a direct comparison? As we would be having resident engineers from the vendor, our main focus area would be disaster recovery solutions (2 site/3 site) from each of them

  24. Chris Huys
    May 3, 2011 at 2:17 am

    Hi Nigel,
    You should have put HDS VSP and HP P9500 in the poll, instead of Hitachi VSP. Now voting for Hitachi VSP feels, as I'm an HP engineer, like voting for a competitor. ;)
    Greetz,
    Chris

  25. Winnie_M
    May 25, 2011 at 10:47 pm

    Hi – maybe something else to mention is the "green" side of things – the VMAX weighs 6 times more than the VSP of the same configuration – battery weight on the VMAX is extensive – anybody else with the same experience?

  26. May 31, 2011 at 10:38 pm

    Winnie,

    That's an interesting point you bring up. I'd be interested in hearing more about the specifics of the kind of configs you are talking about here – you say same configs, can you give more detail?

    Nigel

  27. Storage noob
    August 1, 2011 at 10:07 pm

    III16119
    Just curious if your secret code meant to say 3Par? It actually says 3Pas.

  28. August 3, 2011 at 9:53 pm

    Hi Storage Noob (from HDS)

    Nice one you cracked the code, despite my error.  The answer is 3PAR.

    III16118

    III = 3

    16 = P is the 16th letter of the alphabet

    1 = A is the 1st letter…

    18 = R is the 18th letter

    BTW The error (3Pas) is my mistake and comes from the fact that originally I shifted every letter by 1 to make it harder and then shifted them back but obviously not the last letter ;-)

  29. it exec
    September 9, 2011 at 2:29 pm

    Hi Nigel,
    Thanks for the technical insights. It is interesting to me that in a comparison between EMC and HDS tier 1 options you would choose 3Par. I am curious of your reasons. Is cost a factor? Are there any situations you would recommend tier 1 monolithic architectures over 3Par?
    Having seen VSP in comparison to 3Par from a software management perspective, I failed to see much that 3Par offers that VSP does not. On the surface I would say the reporting and auto-tiering interfaces for 3Par seemed less evolved even. In terms of history and credibility alone, aren't EMC and HDS in a class above?
    Cheers,
    RG

  30. tuangprat
    October 11, 2011 at 5:26 pm

    Today (Sep 2011), we support thin provisioning for m/f. How about the VMAX??
    I can't see any update or support from EMC or IBM like HDS storage.
    Maybe they need more testing time to GA…

  31. Amit Pahwa
    September 15, 2012 at 12:01 am

    Hi Nigel!
    As always….very informative and in-depth comparison indeed!
    Having done large scale implementations of both arrays, my vote goes to VSP for sheer quality & reliability.
     
    One observation – unlike you mentioned, VMAX can have a single engine (it starts with engine 4 – FED 7/8). You won't get engine-based front end redundancy in such a scenario.
    VSP wins hands down here as the number of FED ports required does not depend upon the capacity or size.

  32. Rahul
    December 5, 2012 at 3:29 pm

    Hi Nigel
    Thanks for the post even though I am a little late. I have a question (not a technical one though) on how you report an element like HDT/FAST VP to a customer. If I have a customer and I provide a service based on tiering and HDT/FAST, I have found it very difficult to set some sort of pricing for that service. Eventually the customer wants to know what data is being used and where. Did anyone else encounter this issue, and if so, I would really appreciate hearing how it was achieved.
    Regards
    RB

  33. December 6, 2013 at 9:13 am

    I'm experienced with both HDS and EMC from midrange to enterprise, including AMS, CX4, DMX, VMAX and VSP.  Also experienced with NetApp storage like the FAS 3000 and 6000 series.

    VMAX has never let me down and I'm happy with it.

    I have no idea whether the HDS software is at fault or the local HDS guy is. Most of the time, when I do a refresh of the storage system from Device Manager, the result is always FAIL. And this thing was configured by the local HDS guy.

    Support wise, EMC gives me a better impression. I had a problem with HPUX and my EMC array, and the EMC guy contacted their host team instead of asking me to contact HP.

     
