I had a conversation recently where one of the guys involved questioned whether or not application-specific storage whitepapers are relevant any more. You know the type of paper I mean, “Best Practices for Deploying ABC Application on XYZ Storage Arrays”.
I remember the time when reading such whitepapers was a fairly regular occurrence for me.
However, I dare say that I can’t remember the last time I read one – the only recent exception being a Microsoft Exchange paper extolling the virtues of DAS while slamming all things SAN. Even then, I was specifically asked to read it and provide my opinion and comments.
From what I remember, most of these whitepapers recommend things like the following -
- Databases should be placed on LUNs formatted as RAID X, with a minimum of Y spindles with an RPM of Z
- Log files should be on LUNs formatted as RAID A on B number of dedicated spindles providing C millisecond response time
- Databases should be presented on separate front end ports from log files
- Blah blah blah
But is anybody seriously dedicating spindles, ports, cache slots… to applications any more?
The world has changed. All good storage arrays sport wide-striping on the backend, making the days of dedicating spindles to applications long gone. Similar techniques and best practices are emerging for front end port configurations (apply the principle of wide-striping to the front end too).
In many respects, today’s storage arrays are far simpler to manage and administer than those of 2 or 3 years ago – no more choosing which spindles to put a LUN on and which front end ports to present it on. Spread it and forget seems to work for ~80% of requirements (maybe higher for some people). This spread-it-and-forget approach also helps create and maintain well balanced, highly utilised arrays, where time to market for provisioning tasks and the like is greatly improved.
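To make the idea concrete, here is a minimal sketch of what wide-striping looks like in principle: a LUN’s extents are dealt round-robin across every spindle in the pool rather than pinned to a dedicated disk set. The function name, extent counts, and layout are purely illustrative, not any vendor’s actual algorithm.

```python
# Toy illustration of wide-striping on the backend.
# Each extent of a LUN is mapped round-robin across the whole
# spindle pool, so no application monopolises a dedicated disk set.

def wide_stripe(num_extents, num_spindles):
    """Map each extent to a spindle, round-robin across the pool."""
    return {extent: extent % num_spindles for extent in range(num_extents)}

layout = wide_stripe(num_extents=12, num_spindles=4)

# Group extents by spindle to show the even spread.
per_spindle = {}
for extent, spindle in layout.items():
    per_spindle.setdefault(spindle, []).append(extent)

for spindle, extents in sorted(per_spindle.items()):
    print(f"spindle {spindle}: extents {extents}")
```

Because every spindle carries an equal share of every LUN, the array stays balanced without anyone deciding where each LUN should live – which is exactly why “spread it and forget” works for the bulk of requirements.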
Sub-LUN tiering may also prove to simplify things. Instead of the application owner and the storage administrator having to decide which LUNs, or portions of a LUN, require low latency, let the system decide – let it place the extents/pages/chunks that have the highest rate of cache miss on SSD… More often than not, the array will know better than the application owner and storage admin (although a healthy amount of policy will have to exist to define limits etc).
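The decision the array is making can be sketched in a few lines: rank pages by cache-miss rate and promote the hottest ones to SSD, capped by a policy limit on SSD capacity. This is an assumed, simplified model of a tiering policy, not any particular array’s implementation; the page names and counters are made up.

```python
# Toy sketch of a sub-LUN tiering decision: promote the pages with
# the highest cache-miss counts to SSD, up to a policy-defined cap.

def choose_ssd_pages(miss_counts, ssd_capacity_pages):
    """Return the hottest pages (by cache-miss count), capped by policy."""
    ranked = sorted(miss_counts, key=miss_counts.get, reverse=True)
    return set(ranked[:ssd_capacity_pages])

# Hypothetical per-page cache-miss counters gathered by the array.
miss_counts = {"page0": 5, "page1": 120, "page2": 7, "page3": 300, "page4": 42}

ssd_pages = choose_ssd_pages(miss_counts, ssd_capacity_pages=2)
print(sorted(ssd_pages))  # → ['page1', 'page3']
```

The policy knob here is `ssd_capacity_pages` – the “healthy amount of policy” mentioned above – which stops a single hot application from swallowing the whole SSD tier.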
So with these in mind, is there any value in the application and storage vendor sponsored whitepaper?
The major questions that I can think of in relation to application availability and performance from a storage perspective are around whether to de-dupe or not, whether to compress or not, whether to over-provision or not… But these hardly seem to warrant a vendor sponsored whitepaper.
Or am I missing something?
You can talk to me, as well as a bunch of folks much smarter than me, on Twitter. I can be reached by sending tweets to @nigelpoulton