The case for scale-out storage

Scale-out is a new category of storage architecture that challenges the limitations associated with traditional SAN and NAS systems. The aim of scale-out storage is to provide service predictability, allowing the user to grow storage resources in line with datacenter demands as business needs change over time. Scale-out storage systems must therefore be able to expand while maintaining functionality and performance as they grow. By Gurdip Kalley, Head of Business Development at Solid State Solutions (S3).


Scale-out technologies create a modern, dynamic datacenter in which IT managers can address data growth whilst containing overheads and improving SLAs to application owners and the business.

In the datacenter of old, IT departments have had to struggle with legacy, scale-up storage systems. This traditional architectural approach to building storage is limited in its ability to accommodate growth in capacity, I/O and compute power.

Capacity
As IT departments deal with more users, files, applications and servers, the need arises to upgrade the capacity of legacy or scale-up storage systems. With only a finite capability for capacity expansion, IT departments commonly find themselves facing two choices:
• Purchase additional storage systems, which increases complexity and management overhead
• Face an expensive “fork-lift” upgrade of their existing storage system’s controllers to meet the demand for capacity, which brings with it downtime and risk.

I/O
As the number of users, servers and resource-hungry applications grows, so does the contention for bandwidth across servers, the network and the storage system. Without enough I/O bandwidth, connected servers and users can become bottlenecked, requiring sophisticated storage tuning to maintain reasonable performance.

Compute
Legacy or scale-up storage systems have finite compute resources with which to provide data services such as snapshots, replication and volume management. Without the ability to scale out, legacy systems may have to limit the additional services they can provide. For example, some systems place a hard limit on the number of snapshots that can be taken or the capacity to which a volume can grow.
The problem with legacy or scale-up storage is the inevitable point at which performance begins to decline. As additional capacity is added to the storage system, bandwidth and compute power don’t change. The result is a sliding effect whereby the best performance is always achieved on day one and, as subsequent capacity is added, the storage system’s performance becomes “diluted”.
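To put rough numbers on that dilution, the sketch below (Python, with purely hypothetical figures) models a scale-up array whose controllers deliver a fixed 2 GB/s however much capacity sits behind them; every terabyte added shrinks the bandwidth available per terabyte stored.

```python
# Illustrative figures only: a scale-up array whose controllers deliver a
# fixed 2 GB/s of bandwidth no matter how much capacity is attached.
controller_bandwidth_mbps = 2048.0  # fixed controller throughput, MB/s

for capacity_tb in (100, 200, 400, 800):
    per_tb = controller_bandwidth_mbps / capacity_tb  # MB/s available per TB stored
    print(f"{capacity_tb:>4} TB -> {per_tb:5.1f} MB/s per TB")

# As capacity grows eightfold, bandwidth per TB falls from ~20.5 to ~2.6 MB/s:
# the "dilution" effect described above.
```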

Of course, there are ways to overcome this by purchasing ahead of one’s current requirement, but this is wasteful and expensive. Other technologies have emerged, such as thin provisioning and management software to tune legacy systems and make them more efficient, but these are software-based fixes for a hardware problem that only continues to grow.

How Does “Scale-Out Storage” Work?
Scale-out storage systems consist of individual components called “nodes”. Each node comprises capacity, processing power and I/O bandwidth. As a node is added to the storage system, the aggregate of each of these three resources is upgraded simultaneously: as capacity is added, both compute power and I/O bandwidth increase as well. These nodes are typically interconnected via a high-speed backplane such as InfiniBand that enables them to communicate with each other. Scale-out storage systems therefore become faster as capacity is added to the infrastructure.
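A minimal sketch of that node model follows, in Python. The node sizes and the Cluster class are illustrative assumptions rather than any vendor’s implementation, but they show how adding a node grows capacity, bandwidth and compute in a single step.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    capacity_tb: float     # capacity the node contributes
    bandwidth_gbps: float  # I/O bandwidth the node contributes
    cpu_cores: int         # compute the node contributes for data services

@dataclass
class Cluster:
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Adding a node grows capacity, bandwidth and compute together.
        self.nodes.append(node)

    @property
    def capacity_tb(self) -> float:
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def bandwidth_gbps(self) -> float:
        return sum(n.bandwidth_gbps for n in self.nodes)

    @property
    def cpu_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

cluster = Cluster()
for _ in range(4):
    cluster.add_node(Node(capacity_tb=100, bandwidth_gbps=2.0, cpu_cores=16))

print(cluster.capacity_tb, cluster.bandwidth_gbps, cluster.cpu_cores)
# 400 8.0 64 -- capacity, bandwidth and compute all grow in step.
```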
The underlying magic of any scale-out storage architecture is the file system software that enables these nodes to be interconnected and referenced as a single object, or cluster, by storage administrators.
To achieve this, the file system has to be able to write and read across all nodes within the scale-out cluster, utilising the aggregate performance of all the available capacity, bandwidth and compute resources.
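As a rough illustration of that idea, the sketch below spreads one file’s blocks across every node with a simple round-robin placement. The chunk size and placement policy are assumptions made for illustration; real scale-out file systems use far more sophisticated layouts.

```python
CHUNK_MB = 4  # illustrative chunk size; real systems vary

def place_chunks(file_size_mb: int, node_count: int) -> dict:
    """Map node index -> chunk numbers stored there (simple round-robin)."""
    placement = {n: [] for n in range(node_count)}
    total_chunks = -(-file_size_mb // CHUNK_MB)  # ceiling division
    for chunk in range(total_chunks):
        placement[chunk % node_count].append(chunk)
    return placement

# A 64 MB file across 4 nodes: every node holds 4 of the 16 chunks, so a
# full read or write can draw on all four nodes' bandwidth in parallel.
print(place_chunks(64, 4))
```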

Unlike traditional file systems, scale-out software needs to be able to support petabyte-scale volumes. The goal is one volume that can grow to practically any size and support a variety of application types, ranging from user home directories to sequential processing tasks and virtual machine images.

Finally, the scale-out software has to provide all of the data services that we have come to expect from traditional enterprise scale-up systems, such as snapshots, thin provisioning, cloning, replication and automated tiering. With scale-out storage, the storage manager’s responsibility is to manage their data, not the storage hardware.
In summary, scale-out storage systems help IT address the challenges associated with managing evolving business requirements. Drivers such as growing file sizes, mixed workloads, virtualisation, big data projects and compliance are increasing the volume of data being stored. In addition, IT is continually being asked to reduce budget and headcount.

Scale-out technologies help IT to cope and grow with business demands whilst providing predictability of cost, management and performance, allowing the business to focus on what it does best rather than on storage management issues.
