Hitachi Virtual Storage Platform – VSP

If you haven’t heard yet, HDS have just announced their next generation enterprise storage array.

WARNING! This article is a technical deep dive.  It’s about 3.5K words and is not recommended for lightweights.  If you want to know what it’s all about from a technical perspective, I suggest you grab a hot brew and read on.  If a technical deep dive is not what you want, then scurry on somewhere else.

Still here?  Nice one!  I think you’ll find this the best source of technical info on the VSP available anywhere.

So let’s start with a picture.  This is what a VSP looks like -

VSP-Front

Pretty ugly, right?  However, my mother always told me it’s more important for something to be beautiful on the inside.  So don’t let the ugly front doors put you off your brew ;-)

 

The Name

For those who may not know, HP also sell the VSP via an OEM agreement with Hitachi Ltd of Japan.  So the product has two names, an HDS name and an HP name – but it’s the same product under the hood.

The HDS name – I suppose with the current trend of everything needing a V at the beginning of its name, or the need to include the words virtual or cloud, it’s no surprise that HDS have called it the Virtual Storage Platform, or VSP for short.

 

A couple of points on the HDS name - 

  1. It’s a whole lot simpler than USP V, or is that USPV or USP-V or USPv, or may be even USP-v…..
  2. I suppose it’s better than CSP, where C would be for Cloud ;-)

Internal Code Names:  Internally at HDS the project was known as Victoria, and prior to that as project Mother Goose (guess it’s been that long in coming that it needed two internal project names).  The Mother Goose name I’m not sure about, but the Victoria I have a fair idea about…..  The guys at HDS tell me the obvious: they chose the name Victoria because it starts with a V and has its beginnings in the Latin for victory.  However, I’m fairly certain the real reason is that somebody in the team probably had a childhood crush on a girl named Victoria and they never quite got over her ;-)

The HP name – HP have previously named the product XP, with previous versions including the XP1024, XP12000 etc.  So one may have guessed that HP would name it something like XPV, vXP or XP2048…  But no.  HP are marketing the product as the P9500, bringing it in line with current HP naming standards – MSA is now P2000, LeftHand is P4000, and XP is now the P9000 series.

Hitachi (日立) Factory Name:  The internal Hitachi factory names for previous generations of this family of product were –

  • Lightning 9900 = RAID400
  • Lightning 9980V = RAID450
  • USP = RAID500
  • USP V = RAID600

The VSP follows suit and is internally referred to as the 日立 RAID700.  If you view the results of a SCSI Inquiry, or SCSI codepage 0x83, you will see this as the product ID.

High Level Overview

The VSP is a Fibre Channel only (oh, and FICON) enterprise storage array that scales from a single bay up to a maximum of 6 bays.  The two central bays house what Hitachi is calling Control Chassis (containing processors, memory, front and backend connectivity…) as well as drive enclosures.  The remaining 4 bays can contain only drive enclosures.  It can scale from zero (0) drives to a maximum of 2,048 drives, making it the biggest (Update: 29/09/2010 – biggest from a drive slot perspective) storage array from Hitachi, but still not as big as some of the competition – but size is not everything to all people ;-)

VSP 6 frame diagram

NOTE:  There is no minimum entry model like there was with previous generations.  Also, the smallest config can scale to become the largest with no forklift upgrade required.  This is in stark contrast to the previous generation, where a USP VM could not be upgraded to become a USP V.

Now to the REAL technical stuff…….

Sub-LUN Tiering

Today’s Holy Grail for all decent storage arrays is the ability to tier at the sub-LUN level.  That is, the ability to have the most active parts of a LUN reside on fast (most expensive)  media, and the less frequently accessed parts reside on slow (cheaper) media.  The VSP has this and it calls it Hitachi Dynamic Tiering (HDT).

Sparing you a lecture on sub-LUN tiering, I’ll just pick out the interesting technical points of the VSP implementation -

  1. All drives can be placed into a single tiering pool.  You just throw your SSD, SAS and SATA into an SLMP (Single Large Melting Pot – I made that up).  Not what I was expecting given existing HDP Pool best practices, but I suppose it works from a simplicity perspective.
  2. New data is staged to the highest tier available, and then, as it becomes less active, it is migrated down the tiers.  Most folks I speak to seem to initially think it would be best to stage data at the lowest tier and then promote it up through the tiers as it is used.  I guess staging to the highest tier means that you will always be fully utilising your expensive Tier 0.  Time will tell if this approach works.
  3. The sub-LUN extent size (which Hitachi calls a Page) is the infamous chubby chunk… the 42MB HDP Page.  Basically the VSP will move data up and down the tiers in units of 42MB (contiguous space).  If you have 1MB of a file that is hot, the VSP will migrate that 1MB, plus the remaining 41MB of that Page, up a tier.  The same applies for demotions.  There is a rough sketch of this page-movement idea just after this list.
  4. On the policy side of things, you can set gathering windows and exclusion windows, and the movement cycle is between 1 and 24 hours.
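
To make the page-movement behaviour a bit more concrete, here is a minimal Python sketch of the idea.  It is a toy model only – the thresholds, tier names and per-page counters are invented for illustration and bear no relation to Hitachi’s actual HDT algorithm.

```python
# A toy model of page-based (sub-LUN) tiering -- NOT Hitachi's algorithm,
# just an illustration of the points above: 42MB pages, stage-high, demote-cold.

PAGE_SIZE_MB = 42                       # the "chubby chunk" HDP page
TIERS = ["SSD", "SAS", "SATA"]          # tier 0 is the fastest / most expensive

class Page:
    def __init__(self, page_id):
        self.page_id = page_id
        self.tier = 0                   # new data is staged to the highest tier
        self.io_count = 0               # I/O gathered during the monitoring window

def rebalance(pages, hot=100, cold=10):
    """Run once per movement cycle (1 to 24 hours in the VSP's policy)."""
    for p in pages:
        if p.io_count >= hot and p.tier > 0:
            p.tier -= 1                 # promote the whole 42MB page, hot MB and all
        elif p.io_count <= cold and p.tier < len(TIERS) - 1:
            p.tier += 1                 # demote the whole 42MB page
        p.io_count = 0                  # reset counters for the next gathering window

pages = [Page(i) for i in range(4)]
rebalance(pages)                        # an idle cycle: everything drifts down a tier

pages[0].io_count = 150                 # even if only 1MB of page 0 is hot...
rebalance(pages)                        # ...the whole 42MB page is promoted
print([(p.page_id, TIERS[p.tier]) for p in pages])   # page 0 back on SSD, idle pages keep sinking
```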

So a tick in the Sub-LUN tiering box for the VSP.  All that remains to be seen is whether it does what it says on the tin.

SAS backend and 2.5 inch drives

The writing has been on the wall for a while now: Fibre Channel as a drive interface is on its way out.  Don’t panic, it won’t happen overnight, but by the same token, don’t be blind to the facts – it is happening.  The new protocol, interface and backend architecture is SAS (Serial Attached SCSI).

NOTE:  Enterprise class SSD drives come with SAS interfaces, and you can plug Enterprise class SATA II drives on to SAS backends.

As far as I’m aware, the VSP is the first of the enterprise arrays to adopt a SAS backend, but it’s unlikely to be long before the rest follow suit.  So is there any value in the SAS backend, or is it just a tick in the box to be future proof?

There is an argument that the 6Gbps full duplex SAS backend will be able to sustain more IOPS and throughput to SSD drives, certainly more than a 4Gbps FC-AL (or switched FC-AL) backend.  However, don’t expect to be able to drive the ~45,000 IOPS that the spec sheet of an STEC Zeus IOPS drive claims.  Also note that the spec sheet I’m reading on the STEC website only shows a 3Gbps SAS interface, making it slower than 4Gbps FC at the time of writing :-S  However, when a 6Gbps interface comes along, the plumbing is already there in the VSP.
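
For a rough sense of the raw numbers being compared here – theoretical line rates only, ignoring protocol overhead, duplex behaviour and real-world queuing – the sums look something like this:

```python
# Back-of-envelope line-rate comparison (theoretical, per direction).
# FC at 4Gbps and SAS at 3/6Gbps all use 8b/10b encoding,
# so usable bytes/sec is roughly line rate divided by 10.

def mb_per_sec(gbps):
    """Approximate usable MB/s per direction for an 8b/10b encoded link."""
    return gbps * 1000 / 10

links = {
    "4Gbps FC":  4,
    "3Gbps SAS": 3,   # the interface on the STEC spec sheet at the time of writing
    "6Gbps SAS": 6,   # what the VSP backend is plumbed for
}

for name, gbps in links.items():
    print(f"{name}: ~{mb_per_sec(gbps):.0f} MB/s per direction")
```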

As far as I’m aware, the only manufacturer supplying SSD drives for the VSP is STEC Inc and I believe the drives to be the same ZEUS IOPS drives (200GB and 400GB varieties) that the competition use.

The pictures below are close-ups of a drive enclosure.  The picture on the left shows the enclosure with the fans (on hinges) opened, and the picture on the right shows it with the fans closed into position -

VSP-Drive-Enc-Open VSP-Drive-Enc-Closed

2.5 inch drives.  For me, 2.5 inch drives are the future of disk drives, and allow for greater capacity in the same footprint.  For example, a single frame VSP with a Control Chassis 0 can have 384 drives, each of which can be 2TB.  That’s a lot of capacity in a relatively small footprint.

It’s also strange that once you’ve seen one system with 2.5 inch drives, all systems that still deploy 3.5 inch drives look clunky and dated.  Bring on the 1cm drives!  J/K

UPDATE 29/09/2010 : Readers should note that capacities of 2.5 inch drives currently lag behind those of 3.5 inch drives. For latest drive info see relevant drive manufacturer websites.

Front End Director (FED) Design

So the front end design of the VSP is significantly different from previous generations, and it is one of the more technically interesting changes.

At a high level, Hitachi have taken the wide-striping concept that is now commonplace in backend architectures, and are bringing it to the front end.

Basically, the FEDs and BEDs (Front End Directors and Back End Directors) have custom I/O routing ASICs that are specialised for I/O traffic management – Hitachi are calling these data accelerator ASICs.  These ASICs have an affinity with ports.  However, the more general purpose CPUs are no longer locked to particular ports; they have been moved to a processor complex called the Virtual Storage Director (VSD), where they are pooled and can have their resources dynamically assigned and un-assigned from any front or back end port.

Example:  In legacy or more traditional architectures (I’m going to keep this high level), Processor 1 would be tied to Port 1 and Processor 2 would be tied to Port 2.  If Port 1 was running like the clappers and maxing out its Processor, while Port 2 and Processor 2 were sitting idle, there would be no way to assign the resources of Processor 2 to Port 1.  Now, with the VSP, the CPUs are pooled and can be assigned or unassigned from any Port, so a port that is running flat out can have the resources of multiple CPUs assigned to it.
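
Here’s a minimal sketch of that pooled-processor idea, with invented names and structure – this is not the VSP’s internal scheduler, just the concept of CPUs being lent to whichever port needs them:

```python
# Illustrative sketch of processor pooling -- invented classes and names.
# Legacy design: one CPU hard-wired per port. VSP-style design: a shared pool
# of VSD CPUs that can be (re)assigned to whichever front end port is busy.

class ProcessorPool:
    def __init__(self, num_cpus):
        self.free_cpus = set(range(num_cpus))
        self.assignments = {}            # port -> set of CPUs currently serving it

    def assign(self, port, n=1):
        """Give a busy port up to n additional CPUs from the shared pool."""
        grant = set(list(self.free_cpus)[:n])
        self.free_cpus -= grant
        self.assignments.setdefault(port, set()).update(grant)
        return grant

    def release(self, port):
        """Return an idle port's CPUs to the pool for other ports to use."""
        self.free_cpus |= self.assignments.pop(port, set())

pool = ProcessorPool(num_cpus=8)
pool.assign("port-1", n=1)               # normal load on both ports
pool.assign("port-2", n=1)
pool.release("port-2")                   # port-2 goes idle...
pool.assign("port-1", n=3)               # ...so port-1 soaks up extra CPUs
print(pool.assignments)                  # port-1 now has 4 CPUs behind it (ids will vary)
```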

Of course the proof is in the pudding.  BUT, this could potentially do to the front end, what drive pooling and wide-striping have done to the backend.  Look at a heat map of a traditional backend where specific spindles are assigned to specific LUNs and applications, and compare it with a heat map of a system that employs drive pooling and wide-striping.  The difference is amazing.

This could also have potential heat and power benefits, turning off processors when they are not required – although I’m speculating now.

ASIC vs Commodity: The general purpose CPUs, where the microcode (including copy services, replication services, HDP etc.) executes, are Intel quad-core Xeon processors.  The ASICs on the FEDs and BEDs are Hitachi designed.  Hitachi, like 3PAR, obviously feel there is still value in custom silicon.  This is in stark contrast to the likes of EMC VMAX and IBM XIV, which have taken the commodity route, driving value out of software.

The custom ASICs take care of latency sensitive data such as user data, whereas less latency sensitive data, such as asynchronous replication traffic, is processed by the general purpose CPUs on the VSDs.

While on the topic of the front end, I should point out that this is an FC and FICON only array.  No iSCSI (no plans for it) and no FCoE.  However, FCoE is apparently not far off, maybe the end of Q4 or early Q1 2011.

Control Chassis (Logic Boxes)

As mentioned earlier, and seen in the images below, the centre two cabinets in a six frame VSP house the Control Chassis – Control Chassis 0 and Control Chassis 1.  The picture below shows the front and rear of a frame containing a Control Chassis.  The Control Chassis is in the bottom half of both pictures; the top half contains the drive enclosures, hidden by fans.

VSP front with doors open VSP rear with doors open

And a close-up shot of each – note the slapdash fibre cabling – tut tut ;-)

Control Chassis Front Control Chassis Rear

As will be shown in the diagrams below, the Control Chassis contain the following -

At the front there are 4 x Virtual Storage Directors (VSD), and 8 x Data Cache Directors.

VSP Visio of front

At the rear there are 8 x FEDs, 4 x BEDs, and 8 x what Hitachi are calling Grid Switches, or GSW if you like.

VSP Visio of rear

Data cache is backed up to onboard flash drives, which means there is no requirement for large batteries to hold cache up in a power loss scenario – pretty cool.  (I originally hedged on this, but it has since been confirmed.)

Each BED has 8 x 6Gbps SAS paths.  That adds up to 32 backend SAS links per Control Chassis, and 64 x 6Gbps links in a fully configured unit.  SAS runs at full duplex. 
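
A quick back-of-envelope check on those backend numbers – theoretical figures only, assuming 8b/10b encoding on the 6Gbps links and ignoring protocol overhead:

```python
# Sanity check on the backend link counts and theoretical bandwidth.
SAS_GBPS_PER_LINK = 6
LINKS_PER_BED = 8
BEDS_PER_CHASSIS = 4
CHASSIS_MAX = 2

links_per_chassis = LINKS_PER_BED * BEDS_PER_CHASSIS          # 32
links_total = links_per_chassis * CHASSIS_MAX                 # 64

# 6Gbps SAS with 8b/10b encoding -> ~600 MB/s usable per direction per link;
# full duplex means both directions can be in flight at once.
mb_per_link_per_dir = SAS_GBPS_PER_LINK * 1000 / 10
print(f"{links_total} links, ~{links_total * mb_per_link_per_dir / 1000:.1f} GB/s per direction aggregate")
```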

SAS backend

Control Memory (CM) is located on the Virtual Storage Directors alongside the general purpose CPUs making it very quickly accessible to the CPUs, like an L2 cache.  As with previous architectures, Control Memory stores the usual metadata and system state such as LDEV mappings, DIF tables, run tables etc.

Grid Switches – The Grid Switch boards provide the 4-lane PCIe gen 1 paths that cross-connect all of the other boards.  Each Control Chassis can have either two or four GSW boards.  Each GSW board has 24 ports, each port comprising a unidirectional send path and a unidirectional receive path, with each path operating at 1024MB/s.
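
The 1024MB/s figure lines up nicely with 4 lanes of PCIe gen 1, which gives roughly 250MB/s of usable bandwidth per lane per direction after 8b/10b encoding.  A purely theoretical sanity check:

```python
# PCIe gen 1: 2.5 GT/s per lane, 8b/10b encoded -> ~250 MB/s usable per lane per direction.
PCIE1_MBPS_PER_LANE = 2.5e9 * 8 / 10 / 8 / 1e6   # ~250 MB/s
LANES_PER_PORT = 4
PORTS_PER_GSW = 24

per_port = PCIE1_MBPS_PER_LANE * LANES_PER_PORT          # ~1000 MB/s, i.e. the quoted 1024MB/s
per_gsw_per_dir = per_port * PORTS_PER_GSW / 1000        # ~24 GB/s per direction per GSW board
print(f"~{per_port:.0f} MB/s per port, ~{per_gsw_per_dir:.0f} GB/s per direction per GSW")
```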

Hitachi refers to the Control Chassis as being tightly coupled, and the interconnect between the two is PCIe gen 1 over copper.  The Control Chassis are able to communicate by mapping Hi-Star over PCIe.

 

Microcode and Microcode Updates (interesting)

As you would hope and expect, a large part of the VSP microcode is inherited from the USP V, which in turn inherited it from the USP, which inherited it from the Lightning 9980V, which inherited it from ……  The code from the USP V was recompiled to run natively on, and exploit the advantages of, the new Intel processors.  All new features of the VSP, such as Dynamic Tiering, were developed to run natively on the VSP hardware.

Interestingly, the microcode now runs in fewer places – basically the Virtual Storage Directors.  This has the effect of hugely simplifying the microcode update process.  On that topic: it’s not very often that I get excited about the microcode update process (in fact I’ve never gotten excited about it before), but this time I’m excited!  Stay with me on this…..

If we remember back to when we talked about the general purpose CPUs (where the ucode runs), we mentioned that the processors are not tied to front end ports.  Well, as it turns out, this has a huge impact on microcode updates.  Basically, because the processors that run the microcode are not tied to ports, when you update the microcode no ports ever have to go offline!  Take a second to let that settle…. once again, front end ports do not go offline during microcode updates!  There are other architectures out there that require paths to go offline during code upgrades, and those are an outage waiting to happen.  I’m not saying there won’t be a reduction in performance, but this is still HUGE!
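
Conceptually, the update process can be thought of like the sketch below – invented structure, port names and version strings, not the actual VSP procedure – where one VSD at a time is drained and re-flashed while the remaining VSDs keep every port serviced:

```python
# Conceptual sketch of a rolling microcode update. Because front end ports are
# served by a *pool* of VSDs rather than a fixed processor, one VSD at a time
# can be drained and updated while every port keeps an active processor behind it.

class VSD:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.version = "70-01"                  # illustrative version string

PORTS = ["CL1-A", "CL2-A", "CL3-A", "CL4-A"]    # example port names
vsds = [VSD(f"VSD-{i}") for i in range(4)]

def ports_served(vsds):
    # In the pooled design, any online VSD can service any port.
    return all(any(v.online for v in vsds) for _ in PORTS)

def rolling_update(vsds, new_version):
    for v in vsds:
        v.online = False                        # drain one VSD; its work shifts to the others
        assert ports_served(vsds), "a port would have gone offline!"
        v.version = new_version                 # 'flash' the new code on the drained VSD
        v.online = True                         # rejoin the pool before touching the next one

rolling_update(vsds, "70-02")
print([(v.name, v.version) for v in vsds])      # all updated, no port ever lost service
```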

Real world experience:  Anybody who works in large environments knows that microcode updates can cause issues.  Path failover and multi-pathing in general are often the cause.  Common examples include servers that incorrectly have both paths cabled to the same SAN fabric, not noticing that a path is already in a failed state on a server, and applications that don’t work well with path failover…  So a solution where paths are not taken offline during a microcode update is an absolute godsend for large organisations – IMO.

Data Centre Friendly

This might seem strange, but for products that are designed to spend their entire lives in data centres, enterprise storage arrays are often like a wart on the nose of a well designed Data Centre.

In the past, it has almost been a pre-req for enterprise storage arrays to have bespoke, custom sized cabinets and to blow hot air in whatever direction they felt like – sometimes towards the ceiling ;-)

Well, finally, the VSP has made some steps in the right direction -

  1. It comes in a standard sized 42U 19 inch rack
  2. It has a hot and cold aisle airflow design
  3. It can take power feeds from above or below – you can now install one in your garage without installing a raised floor ;-)

VSP-rack-sizes

Not quite at the point where customers can install one in their own pre-provisioned racks, but it’s moving in the right direction.

Hitachi are also claiming ~40% power reduction, which, if true, will go down well with every Data Centre I know.

But it Looks So Ugly

Let’s face it, most of us have a shallow side to us and like stuff that looks good. 

But before I get lambasted over this, let me state that I know appearances are not that important when it comes to kit that spends its life in a data centre and never sees the light of day.  However, I think there are a couple of areas where it does matter (but way, way less than how well it works etc…).  Those couple of areas are -

  1. First impressions count.
  2. It’s about brand.  Hitachi are no doubt billing this as an exciting product and trying to generate interest – at the end of the day they need to ship it in large quantities.  In my opinion, the outward design does not give a good first impression and certainly doesn’t capture the imagination.

VSP vs VMAX

Although HP didn’t actually put a blue neon light on the front door, their marketing guys seem to be getting the message with the Photoshopped image of the HP StorageWorks P9500 below -

HP-P9500

In a day when the competition are sporting kit with dual-power-fed blue neon lights, and others have bezels designed by Armani, I was hoping for more from Hitachi.  But it’s not the end of the world and I think I’ll live, just about.

However, on the inside it certainly looks neater and tidier than previous generations.  This, again, is important as it speaks of good engineering.

Management Software

Moving on from something that’s not all that important, to something that absolutely is…management software.

Apologies for mentioning this so far down the post, but I haven’t actually tested the software, and I have never been a fan of Hitachi storage software, so I find it hard to get excited.  Having said that, it looks for once like they might have come good on management software …

A previous poll on this website about how good or bad Hitachi Storage Management software is showed the following results -

  • 33% said it was “Poor”
  • 25% said it was “Average”
  • 14% said it was “Good”
  • 15% said they would “rather pull their teeth out than use it”
  • 8% said it was “excellent”.  These voters no doubt worked for Hitachi ;-)

The Victoria launch is a joint Hardware and Software launch.  Device Manager is now at version 7, and it appears that Hitachi might have been listening to feedback. 

From what I've seen, aesthetically speaking, it's a huge improvement on previous versions.  Like other decent management GUIs it’s based on Adobe Flex technology and it actually works how you would expect a decent management app to work (I’ve seen a demo).  You know, like being able to resize the screen, move columns, re-order columns… 

Let's hope that it also scales and doesn't need "occasional" reboots.

Hitachi are positioning it as a unified (there’s a word that is finding its place alongside virtualisation and cloud) management platform across their storage products – VSP, HCP, HNAS, AMS…  Hitachi have also spent a lot of effort on building in SMI-S.

My personal opinion is that the management interface looks a lot better, in the same ballpark as the likes of SMC, but not yet in the same league as XIV, 3PAR or Unisphere.

Still Legacy RAID Architecture

As with most things in life, you don’t get everything you hoped for.  And for me, the big thing that is missing is a revamp of the underlying RAID architecture.  With the long wait for the product I was hoping they were working on this.

In today’s world of 2TB and larger drives, with typical bit error rates of around 1 in 10^14 and therefore a risk of an Unrecoverable Read Error something like once in every 12TB read, elongated rebuild times, larger capacities to try and background scrub…… I’ll kill the list there so as not to keep you awake at night worrying about your RAID.  Point being, RAID architectures that were written 10, 15, 20+ years ago are showing clear signs of creaking at the seams.
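
For anyone wondering where the “one URE in every 12TB” figure comes from, here is the back-of-envelope maths.  It assumes independent errors and takes the 10^14 figure at face value; the 7-drive RAID-5 group is just an invented example, not a VSP configuration.

```python
# Where the 'one URE per ~12TB' figure comes from, and what it implies for a rebuild.
BER = 1e14                                    # expected bits read per unrecoverable read error
bytes_per_ure = BER / 8                       # 1.25e13 bytes
tb_per_ure = bytes_per_ure / 1e12             # ~12.5 TB read per URE on average

drive_tb = 2
drives_to_read = 6                            # e.g. surviving members of a 7-drive RAID-5 group
rebuild_read_tb = drive_tb * drives_to_read   # 12 TB read to rebuild one failed 2TB drive

# Probability of reading the whole 12TB without hitting a single URE.
p_clean_rebuild = (1 - 1 / BER) ** (rebuild_read_tb * 1e12 * 8)
print(f"~{tb_per_ure:.1f} TB per URE on average")
print(f"Chance of a rebuild with no URE: {p_clean_rebuild:.0%}")   # roughly 38%
```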

In my personal opinion, distributed parallel RAID architectures that offer ultra fast but ultra low impact rebuilds (or re-protection) are needed.  I personally don’t want to see triple parity RAID or other band-aid solutions.  A redesign is needed.  Technologies like some of those sported by 3PAR, XIV, Xiotech etc need to make their way into enterprise storage arrays.  Rant over.

Misc – VAAI and Encryption

VAAI.  Unfortunately there is no VMware VAAI goodness on day one.  However, this will apparently come in the first code rev planned for either the end of this year or very early next year.

Encryption.  The Back End Directors are capable of encrypting data at rest, internal to the VSP, with XTS-AES 256 bit encryption.  Encryption is done in hardware with apparently no overhead, so it won’t impact your performance.
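
To illustrate what XTS-AES 256 actually means, here is a small software sketch using the Python cryptography package.  This obviously has nothing to do with the BED hardware implementation, and the key and tweak handling is deliberately simplified for the example.

```python
# Illustration of XTS-AES-256 encrypting one 512-byte 'sector' -- done in software
# with the 'cryptography' package purely to show the construction; on the VSP this
# happens in hardware on the Back End Directors.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                           # XTS uses a double-length key: 2 x 256 bits for AES-256
sector_number = 42
tweak = sector_number.to_bytes(16, "little")   # per-sector tweak, so identical plaintext
                                               # encrypts differently in different sectors
plaintext = b"\x00" * 512

encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```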

I’m also interested in the overlap with the AMS line.  The VSP will scale as small as, or smaller than, an AMS but with much richer (pun intended) features.  It seems an industry trend for midrange architectures to reach up into the enterprise space, whereas this seems to be the opposite way around.  Hmmmm…

Well, I’ve about worn my keyboard out so will bring this to a close.  Hope it has been useful.

Feel free to leave comments, thoughts, questions… but please, please, please, if you work for a vendor, please disclose this. Thanks!

You can talk to me online about technology on Twitter, I’m @nigelpoulton, or http://twitter.com/nigelpoulton

Disclaimer. I do not work for Hitachi, HDS, HP or anyone involved in the product. I have, however, been contracted in professional services and implementation architect type roles by both HDS and HP in the past. Opinions expressed in this article are my own and do not represent those of any of my employers, past, present, or future.  I have obviously had exposure to the product and people prior to product release, see previous posts for info. I have not received a penny in relation to this product.


68 thoughts on “Hitachi Virtual Storage Platform – VSP”

  1. Hi Nigel,

    This new FED architecture with a processor pool sounds interesting. I was wondering what happens if you want to set a port to external mode or initiator/target for HUR/TC. Do you need a separate pool for each type of port?

  2. JRW – Agree with all points on the trade-offs between large vs small extent size, as well as your thoughts that 42MB fits well with RAID levels – I wrote an opinion on a potential reason why it is 42MB here –

    http://blog.nigelpoulton.com/thin-provisioning-the-mystical-42mb-allocation-unit/

    As for starting a LUN out at the top tier, I know one concern a lot of folks have is that often in large shops a LUN is provisioned but the server build etc doesn’t then happen for another couple of weeks, by which time it will probably have been migrated from tier 0 all the way through to tier trash. Current thinking around mitigating this all seems cumbersome for large shops.

    Enjoyed your comment and insight – Nigel

  3. @Craig (HDS)

    It is great to hear that HDS has finally been listening to feedback from customers especially regarding the poor state of their storage software. My concern is that HDS is now hyping the new all-singing all-dancing HCS 7 so much, that it had better be really, really good as the customer expectations that have been set are very high. Some more humility in recognising that the previous software hasn't been of sufficient quality would be a good start.
    This new task management system of backgrounding long-running tasks sounds great, but what if the next task you want to do requires the previous tasks to complete first, *cough* sub-system refresh *cough*? Has any effort gone into actually improving the speeds of the tasks as well as being able to background them?
    You appear surprised that not more has been made of the new features of HCS 7 but how many customers have actually seen it let alone used it? The HDS website seems sparse on actual detail on the new features of HCS 7 especially any comparisons to previous versions. For example the Device Manager demo is still of what appears to be version 6 and not the new shiny version 7.

  4. We had the song and dance at our local HDS office last week.

    We left with mixed emotions. The NDA hype we got last year about external storage and HDT was now basically “I’m sure it will be there in two or three microcode updates”. HDT in itself needs a few more tuneable knobs before we would even think about shelling out cash for a volume based license. We have gotten bitter from our XP24k experience, I guess.

    The mid tier is looking better and better.

    They knew enough to finish the talk with updates to Device Manager so we could get at least a few calls of “praise the lord for having mercy on us”. Let’s just hope the updates are worth more than the new coloring theme.

  5. > in large shops a LUN is provisioned but the server build etc doesnt then happen for another couple of weeks, by which time it will probably have been migrated from tier 0 all the way through to tier trash

    Nigel,

    In the given scenario nothing gets migrated anywhere. HDT works over HDP. If server build for the LUN never happened, there are no pages allocated at all and that LUN actually doesn’t reside on any tier whatsoever.

  6. Hi Ivan,

    You make a good point.

    However, many companies I know are not comfortable with the “thin” aspect of HDP type technologies and choose to pre-provision production volumes (I’m not sure if HDP allows pre-provisioning yet). I admit I was thinking a little of a similar technology from another vendor who offers pre-provisioning of thin volumes. In the scenario where you pre-provision, the extents can potentially migrate down the tiers if you provision but don’t commission the server for x weeks…

    Do you have many customers (esp Fortune 500 types) who are comfortable thin provisioning production volumes?

    Nigel

  7. >Do you have many customers (esp Fortune 500 types) who are comfortable thin provisioning production volumes?

    HDP is really widely used and with it being a part of BOS in VSP, will now be used even more.

    Most of the customers who don’t “trust” the thin aspect just never overprovision.

    There’s a number of not thin-friendly applications we see sitting on HDP just for the sake of simplified provisioning and administration.

    As for the scenario with preprovisioned volumes: I can’t really see the benefit of making sure there’s physical storage allocated to them in advance. And doing that is a hassle on its own, especially if these volumes will have a thin-friendly file system on them.

  8. @ Sim Alam

    > Some more humility in recognising that the previous software hasn't been of sufficient quality would be a good start.

    I guess I haven't had you in a presentation I give to customers about the new HCS 7. The first thing I talk about is the difficulties customers have had with our software and then how we have improved the experience. We are not perfect by any means, but we take all your feedback to heart and work on improving the experience for our customers.

    > what if the next task you want to do requires the previous tasks to complete first

    Good catch. That is something we are investigating to make sure we implement it the right way, for all our HCS features. We do recognize there are times when a process depends not only on a previous task ending first, but also on using information like the identifier or state of the new object for the next task.

    > *cough* sub-system refresh *cough*?

    I'd like to see as few refreshes as possible and, when they are necessary, make them as short as possible. The team has worked diligently to find new efficiencies in data collection and has improved the response time considerably. Some customers have very large environments and we have worked with those customers to make refreshes much less painful.

    > Has any effort gone into actually improving the speeds of the tasks as well as being able to background them?

    Yes. We have put a lot of focus on the tasks that are often used, and made improvements in the code to show noticeable improvement in the speed of tasks. You should see up to 30% improvement in most day to day tasks. Customers I have demoed HCS to say it is "snappy", and once you have a chance to play with the interface, I think you'll like the responsiveness of it. The VSP also has faster processors and better memory to improve the tasks that operate within the array. We are currently researching other improvements in speed and will be releasing enhancements to HCS to make operations even faster.

    > You appear surprised that not more has been made of the new features of HCS 7 but how many customers have actually seen it let alone used it?

    I know many customers have not yet had a chance to touch and use HCS 7, but we did show many customers demos and screenshots of the suite in action. I have found that publicly available material is sparse, but I do see we have an overview and a video online now. We will also have demos available soon to let you get a feel for HCS 7. There's still more for us to catch up on, and thanks for pointing out the Device Manager flash demo is still at 6. I will track that down and see if we can get it updated.

  9. Hi Nigel, truly appreciate your efforts. You have highlighted the VSP features in a nice, easy to understand way.

  10. Hi, Storagebod, thanks for shining the light on wide striping again. Every vendor (including 3PAR) has to figure out their business model and decide what functional elements will be licensed separately. To some degree, the licensing costs reflect the development costs to add functionality to a product’s core architecture. By contrast, all 3PAR arrays ever made wide stripe data across all drives by default as part of the core architecture. Implementation differences can matter a great deal for performance, scalability, manageability and cost. 3PAR is happy to have competitors like HDS and EMC copy our features in their products and we are happy when customers make comparisons. Of course, 3PAR will follow them sometimes too, as we will with flash SSDs. In the long run, it doesn’t matter who is first with a feature, but who implements it best with the best economic advantage for customers.

  11. …the HDS VSP did have SFF drives and as far as I can tell the T800 did not.

    2) Dynamic provisioning, thin provisioning etc. should have some benefits and some performance costs. The benefits are probably a subject for a different post, but the performance costs should be evident from a sufficient set of benchmark results. There are plenty of storage systems that offer thin provisioning but this is the first time I have seen one supply a benchmark result with it active. Kudos to HDS for being the first one to do so. All that being said, theoretically, thin provisioning should provide more data storage over fewer disks. Given that, I believe that on a performance per disk spindle basis it should perform better than a non-thinly provisioned system. To test this we would need equivalent systems, one with thin provisioning active and one without. Alas, we don’t have such a comparison available just yet.

    3) Thanks for providing the direct links to the two SPC-1 reports.

    4) As for unused capacity, it’s a pretty complex issue and plays out in the number of extra disks being used to support the workload, subsystem cost and $/IOPS. The purpose of the chart is to try to level the playing field, at least with respect to the number of disks; whether the capacity is used or unused plays no part in this chart.

    5) I guess I don’t see the host based RAID-0 striping configuration unless you’re talking about the TSC configuration section (~p.68). At best I see this as mapping the VSP’s RAID-10 to the Windows host LUNs. While it does appear to be striping the host data across the VSP RAID-10 LUNs, it’s unclear whether this helps or hurts the performance. Although to be honest I am no Windows configuration expert.

    Once again, thanks for the thoughtful comments.

    Ray

  12. If you understand SVC then saying “it’s unlikely that the backend storage (DS4700s) at the time were thin provisioned. So it’s sort of a mixture between thinly provisioned at the SVC level and not at the storage subsystem level.” doesn’t really make sense. That’s a bit like saying the hard drives on the VSP weren’t thin provisioned, only the volumes. It’s hard to compare the details of the two thin provisioned results as there isn’t a lot of info on the VSP’s actual use of thin provisioning in the benchmark disclosure that I can see, and the benchmark seems to make heavy use of striping at the Windows host OS layer with diskpart and dynamic disks, which I suspect most admins would be nervous about using in real life. I guess that’s the nature of benchmarks. 10 years ago controllers tended to be the bottleneck, but I think the industry has long since fixed that. SPC-1 seems designed to show up controller bottlenecks, otherwise the drive count is the choke point. With SSDs maybe the controller choke points will return to relevance soon.

  13. Hi Nigel. Thanks for sharing this. I am completely new to Hitachi and found your article most beneficial. I am, and have been, very "IBM" to date, so it’s good to get a feel for non-Blue products.

    I especially like the bit about the concurrent microcode updates (Very impressive)

    Mark

  14. Just saying……but the HOR(ror)CM files are still there….despite some predictions from 2010 that they would go away.
