As someone who has worked in the storage industry for nearly 30 years, I'm delighted to see the market going through another of its cyclical changes, with new vendors emerging as challengers to the big boys.
With the top three legacy storage vendors losing market share to new entrants and smaller, longer-established vendors, customers and resellers now have far more choice when picking the best tool for the job.
An annoying problem arises when new entrants with loud voices make bold, sometimes ludicrous, claims. Take the line being touted by one of these new(ish) vendors: “Hard disks are dead. The future is all-flash.” At the risk of sounding like a broken record, this couldn’t be further from the truth.
Flash is a fantastic tool: it delivers lower latency and higher I/O rates, and it has lowered the cost barrier for many high-performance applications. Will it replace hard disk drives in the short to medium term? Not at all. It’s the same argument that was being made 10 years ago about tape being dead. Its role might change slightly, but the traditional hard disk drive is here to stay for a long time yet.
What is interesting is how many arrays are designed either for hard disk drives or for flash. There are some in-between approaches that use tricks such as a flash cache to compensate for slow SATA drives, but the vast majority of vendors make customers choose between a slow and a fast platform – a choice that has to be made up front.
I suppose a good analogy is the choice you face when buying a car: do you go for the high-performance two-seater, or the station wagon with four-door functionality and flexibility? It all depends on your priorities – are you after speed or practicality? Yes, there’s the Porsche Cayenne, but then you’re paying a premium for the blend of performance and convenience.
What would be ideal is something more adaptable – a vehicle that can shift from large, practical workhorse to high-performance speedster, and at a sensible price.
Being able to fill a round hole with a round peg and a square hole with a square peg is important. Filling a round hole with a square peg by using a hammer isn’t the most efficient approach.
The second point about adaptive storage is investment protection. Today, many applications are best served by storage oriented towards features rather than performance and simplicity. However, as Software Defined Storage slowly matures, that balance is very likely to flip. So should you deploy a feature-rich array today and then buy again in, say, 12–24 months’ time? Not many CFOs would support that approach.
Many customers and resellers tell us that this choice is a concern; they feel they’re at a road junction with no way to cross over to the other road later.
This is the other part of “adaptive” that matters – the ability to move between both worlds when you’re ready. If you do decide that the feature-rich SAN is the way to go today, it’s wise to look at platforms that can adapt and transition to SDS when the time comes.
Far too many vendors are using off-the-shelf commodity hardware these days and putting all of their investment into the software layer alone. This means the underlying hardware is just dumb disk that won’t give users the reliability and performance needed for their software-defined world. Why not invest in a platform that is designed to bridge the two approaches? Then you’ll be able to have a much softer conversation with the aforementioned CFO – something that makes life easier for everyone (oh, and that pay rise conversation gets a little easier too!).