The World's Fastest Storage

By Nigel (mackem) | November 18, 2006

A couple of weeks ago, in a post titled “When bigger isn’t better”, we ended up having an interesting conversation in the comments section about how slow the disk drive is, and the potential for SSD to start edging its way in and push the disk drive back into the lower tiers of enterprise storage – where these days it surely belongs!

Now fast forward a couple of weeks to yesterday, when I was having a browse through the Storage Feeds section of RM and noticed a link titled “Here comes the fastest storage in the world”. Clicking on the link, I was interested to see it was an article on the latest and greatest SSD based storage subsystem from Texas Memory Systems, the Tera-RamSan.

It’s a 1TB SSD subsystem that can be accessed over 10Gbps Infiniband or Fibre Channel. It’s being touted as 250 times faster than disk and holds the record for SPC-1 (random OLTP workload) tests. It also has the best price/performance ratio of any SPC tested device. Quite a lot to brag about!

Obviously that kind of performance is going to be a bit overkill for a lot of applications, but those I/O hungry OLTP apps and the like are going to love this.
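To put that “250 times faster” claim in perspective, here’s a back-of-envelope sketch. The latency figures are my own illustrative assumptions (typical numbers for a 15K RPM drive and a RAM-based SSD of the era), not published Tera-RamSan specifications:

```python
# Back-of-envelope comparison of random-access times: spinning disk vs.
# RAM-based SSD. All figures are illustrative assumptions, not TMS specs.

DISK_LATENCY_MS = 5.0   # assumed seek + rotational latency, 15K RPM drive
SSD_LATENCY_MS = 0.02   # assumed RAM-based SSD access time (~20 microseconds)

speedup = DISK_LATENCY_MS / SSD_LATENCY_MS
print(f"Random-access speedup: ~{speedup:.0f}x")

# What that means for an OLTP app issuing 10,000 random reads, one at a time:
reads = 10_000
disk_secs = reads * DISK_LATENCY_MS / 1000
ssd_secs = reads * SSD_LATENCY_MS / 1000
print(f"Disk: {disk_secs:.1f} s, SSD: {ssd_secs:.2f} s")
```

With those assumed numbers the ratio comes out right around the 250x being claimed – which is why random I/O hungry apps are the obvious first customers.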

When talking to people about SSD, they invariably worry about losing their data while the power is out. After all, it’s Computing 101 that the contents of RAM disappear when the power goes. Texas appear to address this with heavy battery backup, as well as sets of traditional hard disks configured as RAID3 that have data backed off to them periodically, as well as during a shutdown. Add to this the use of ChipKill technology to protect data from being lost if a single memory chip fails, and the Tera-RamSan looks no more likely to lose your data than a traditional storage subsystem running traditional RAID5 or RAID1. Having said that, it does look like the standard warranty is only 1 year – not exactly confidence inspiring.

Not wanting to turn this into an advert for the system I’ll stop there and leave the rest to you.

For me, I’m hoping that other three letter vendors will start to offer SSD. I’m not placing a bounty on the head of the disk drive – it’s served us well for over 50 years and still has a lot to offer – but it’s about time for it to start moving over. For example, I’d maybe like to see HDS offer support for pluggable SSD in the NSC55 (USP DKC) and then let me hang traditional disk based subsystems off the back for my lower performance requirements. I’m thinking of something *cheaper* and slower than normal cache but faster than disk. I’ve seen, and heard from peers of, situations where more than just database indexes and other very small LUNs could benefit greatly from living on SSD rather than disk.

Finally, in the comments section of the post When bigger isn’t better we talked about the need for a major player to take up SSD and champion it. While Texas may not be one of the biggest players, they certainly fit one of the criteria required to be an official big iron vendor alongside EMC, IBM and HDS – their name can be reduced to a 3 letter acronym… TMS 😉 No offense NetApp, maybe you could start calling yourselves NAP 😕

Nigel (mackem)

2 thoughts on “The World’s Fastest Storage”

  1. Nigel (mackem)

    Hi Liho, appreciate your feedback! I won’t be holding my breath for something like this from HDS then 😉

    While I appreciate Hu’s comments that a drawer full of memory can be expensive – so is cache memory and Flash Access licenses. Just to clarify what I would like – I’m not after SSD that operates at the same speed as the expensive cache that goes in these top end subsystems. Vendor documentation typically lists cache at around a thousand times faster than disk. I’d be happy with SSD that is just a couple of hundred times faster than disk – sometimes I’m just too easily pleased 😉

    While I know that you can install large amounts of cache into HDS subsystems, we also tend to attach either lots of hosts or random I/O hungry apps to systems like these. These can quickly push your cache write pending rate above the 40% mark, causing the destaging process to speed up. If the data keeps coming in at the same rate, it no longer matters how many paths to cache you have or how fast your cache is – your disks become a bottleneck, potentially hundreds of times slower than when cache was coping.
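    The dynamic above can be sketched as a toy model. All the rates and the cache size below are made-up illustrative numbers, not HDS internals – the point is only that when the host write rate exceeds what the disks can destage, the write-pending percentage climbs regardless of the faster destaging that kicks in past the threshold, and eventually hosts are throttled to disk speed:

```python
# Toy model of cache write-pending behaviour. Illustrative numbers only,
# not HDS internals: host writes arrive faster than disks can destage,
# so write-pending climbs; past the 40% mark destaging speeds up, but
# inflow still exceeds disk throughput, so the cache eventually fills
# and host writes drop to disk speed.

CACHE_GB = 100.0
INFLOW_GB_S = 2.0        # assumed host write rate
DESTAGE_GB_S = 0.5       # assumed destage rate below the 40% mark
DESTAGE_FAST_GB_S = 1.0  # assumed accelerated destage rate above it

pending = 0.0
for second in range(120):
    rate = DESTAGE_FAST_GB_S if pending / CACHE_GB > 0.40 else DESTAGE_GB_S
    pending = min(CACHE_GB, pending + INFLOW_GB_S - rate)
    if pending >= CACHE_GB:
        print(f"Cache full at t={second}s: hosts now run at disk speed")
        break
```

    More cache just pushes the moment of truth further out; it doesn’t change the outcome while inflow exceeds the destage rate.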

    And once you start to pin LUNs into cache you obviously have less cache for your other workloads and that 40% mark becomes easier to hit.

    I can also see how installing drawers full of memory (to paraphrase Hu’s comment that you cited above) breaks the Hitachi physical architecture model in their enterprise subsystems: if you install a drawer full of SSD, how then do you install a traditional 4 disk Array Group, which is always spread across 4 HDU boxes? Suddenly one of those HDU boxes is for SSD and won’t take your disks. I know that Hu’s comment was in relation to lower end storage, which doesn’t have the same rules when installing Array Groups.

    In the same article Hu mentions that 4 years is a lifetime in storage and lists a few examples to back up this claim – all of which relate to increasing capacity. In 4 years’ time, will the storage industry have gone through a lifetime’s worth of change? New bolt on products for ILM, security etc. will no doubt come along, but the basics of storage will not change – we will still hide slow disks behind limited cache. When do we stop trying to hide slow disks behind larger and larger amounts of cache – will the next version of the USP stick to form and just double the addressable cache again?

    Without wanting to sound too much like a record player stuck on loop, I can’t help but wonder if we are all a little conditioned by the industry to think that it’s OK that the disk drive is hundreds and hundreds of times slower than every other component in a computer. And that SSD in the enterprise is too hard, or still a long way off in the future – let’s be honest, people are already doing it, and pretty successfully by the looks of it.

    While I do think that HDS can be innovative and up there with the market leaders in some areas, I think they operate like big iron on some issues. I’ve previously read an article quoting Hu on EMC and their “…20-plus year old Symmetrix cache architecture”. I wouldn’t be surprised to see similar comments coming out of places like Texas referring to HDS, EMC and IBM with their XX year old disk based architecture… still stuck in the past, comparatively speaking.

    Thanks again Liho – this is just my penny’s worth.
