Cache is king

The solution for virtualized enterprise application acceleration – server-based Caching 2.0. By Henrik Hansen, EMEA Marketing Director, QLogic.


Enterprise IT is all about doing more with what you have, with minimal disruption. An ever-increasing requirement for IT is getting stored information to business-critical applications faster. Whether it’s a multi-server clustered database running online transactions, a mission-critical business intelligence application doing analytics processing, or an enterprise resource planning, supply chain management or business collaboration application such as enterprise email, enterprise IT can’t afford to have business-critical applications waiting for data.
Within data centres of all sizes, server-based flash solutions are being deployed to improve application performance. Server-based flash caching, which moves active, hot data out of the storage array and onto fast flash memory devices inside the server, close to the applications that use it, has proven to be a viable way to accelerate applications. However, currently available offerings come with unforeseen cost and added complexity, including purchasing, installing, managing and maintaining additional drivers and software layers in the server. Most server-based caching solutions are also ‘captive’ to a single server, which prevents acceleration of many virtualized and clustered applications and in turn threatens the practicality and broad adoption of server-based caching in the enterprise. IT professionals are left to rethink their strategy for server-based caching and look for solutions with greater long-term benefits and broader application support.
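To make the idea concrete, the sketch below shows the basic read-through pattern that server-side flash caching follows: hot blocks are served from local flash, misses fall through to the slower shared array, and the coldest blocks are evicted as the cache fills. It is a minimal, hypothetical Python illustration, not any vendor’s actual software; the class, method and parameter names are invented for this example.

# Minimal sketch of the "Caching 1.0" idea: a read-through cache that keeps
# hot blocks on a fast local flash device in front of slower shared storage.
# Names are illustrative, not any vendor's actual API.
from collections import OrderedDict


class ServerFlashCache:
    """Read-through LRU cache, captive to the server it runs on."""

    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read      # e.g. a call out to the SAN/array
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # block_id -> data (stands in for flash)

    def read(self, block_id):
        if block_id in self.cache:            # cache hit: served from local flash
            self.cache.move_to_end(block_id)  # mark block as recently used
            return self.cache[block_id]
        data = self.backend_read(block_id)    # cache miss: fetch from slow storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:   # evict the coldest block
            self.cache.popitem(last=False)
        return data

The limitation described above is visible in the sketch: the cache lives entirely inside one server, so a clustered application or a VM that moves to another host leaves all of its cached data behind.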
Server-based Caching 1.0 limitations: Virtualization raises the stakes
Enterprise IT administrators face continued IT budget constraints that put pressure on getting more out of the data centre with less. Virtualization is one of the best ways to get the most out of a physical server, while server clusters also continue to expand their footprint. Highly virtualized implementations on server clusters have emerged as the top choice for optimizing existing infrastructure and for extending the life of data centre resources.
Virtualized workloads now surpass individual server workloads. Global market research firm IDC predicts nearly 91 million VMs will be deployed between now and 2016. Greater VM density means more predictable hardware utilization levels and more efficient use of hardware. Highly virtualized data centres, however, generate large volumes of highly randomized IOPS and produce some of the most performance-challenged environments in the enterprise. I/O performance hasn’t grown at the same rate as CPU performance, making it the scarcest resource, and the biggest bottleneck, in servers today.
Where does all this leave your application performance acceleration strategy? Since server-based caching solutions are captive to a single server, the benefits of the cached data support only the server in which the cache is installed. Virtualized and clustered environments are still poorly served by today’s 1.0 solutions. Shared storage and clustered servers help optimize both virtual and physical resources, but for all practical purposes they negate the benefits of single-server, captive flash solutions. On top of this, IT faces incremental software support and management issues as the growing number of virtualized servers and desktops multiplies the complexity of each and every connection, both physical and virtual, with more drivers and caching software. The shared-resource model with virtualization at its core may well be in direct conflict with your flash investment and server-based deployment model. Enter server-based caching version 2.0.
Server-based Caching 2.0 – Shared performance acceleration and much more
Within the typical enterprise data centre, clustering and virtualization have neutralized the benefits of server-based caching 1.0. Current caching products address only pieces of the application acceleration problem, so a more capable solution is needed with extended functionality that is non-disruptive and simpler to deploy.
The new challenge is how to address the additional workloads that arrive as data centres become more virtualized. New workloads, such as VDI and high-performance mission-critical applications, behave very differently and are typically virtualized. If the highly randomized performance requirements of these tough workloads can be addressed, virtualization can deliver on its promises of productivity and asset utilization. Additionally, these workloads need to be managed under a common management umbrella and enabling infrastructure, with support for services such as disaster recovery, high availability and service-level agreements.
In addition, the impact of virtualization on networked storage systems can make or break a successful deployment. I/O performance, from the disk drive up to the hypervisor’s emulation layer, is critical, especially for latency-sensitive applications that require high overall bandwidth. And the more mission-critical applications are added to the equation, the more cache coherence, data protection and compliance policies come into play. At a base level, server-based caching 2.0 solutions need to boost performance for the widest range of applications. This means providing support for virtualized and clustered applications as a shared resource. Ideally, server-based caching 2.0 solutions should also maintain existing SAN architectural rules, utilize existing infrastructure, including drivers and management framework, and be easy and non-disruptive to deploy.
One emerging solution leverages the existing SAN by transparently implementing a new form of caching SAN adapter to accelerate performance without any change to the rest of the storage infrastructure. This solution merges caching intelligence with SAN adapter technology, moving the caching intelligence into the adapter to boost the performance of clustered applications. Implementing a shared caching architecture across multiple SAN adapters breaks the server-captive cache model and delivers the benefits of server-based flash/SSD performance acceleration to clustered application configurations – those most widely used for business-critical applications today.
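The sketch below illustrates that shared-cache lookup path in simplified form: a read is served from the local cache if possible, then from a peer adapter’s cache elsewhere in the cluster, and only then from the SAN. It is a conceptual Python illustration under assumed names, not a description of QLogic’s actual protocol or any shipping API.

# Conceptual sketch of the shared "Caching 2.0" lookup path: a miss on the
# local adapter can be satisfied by a peer adapter's cache before falling
# back to the SAN. All names are hypothetical.
class SharedCacheNode:
    def __init__(self, name, san_read):
        self.name = name
        self.san_read = san_read      # fallback read from the shared storage array
        self.local_cache = {}         # block_id -> data (stands in for adapter flash)
        self.peers = []               # other caching adapters in the cluster

    def read(self, block_id):
        # 1. Local hit: fastest path, data already on this server's flash.
        if block_id in self.local_cache:
            return self.local_cache[block_id]
        # 2. Peer hit: another node in the cluster already cached the block.
        for peer in self.peers:
            if block_id in peer.local_cache:
                data = peer.local_cache[block_id]
                self.local_cache[block_id] = data
                return data
        # 3. Miss everywhere: read from the SAN and populate the local cache.
        data = self.san_read(block_id)
        self.local_cache[block_id] = data
        return data

The point of this lookup order is that a block cached anywhere in the cluster remains usable after a VM migrates or a clustered application fails over to another node, which is precisely what the server-captive 1.0 model cannot offer.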
The caching SAN adapter – Delivering server-based Caching 2.0
Very few companies have the ability to deliver a caching 2.0 solution. It takes proven technology in the data path combined with the ability to intelligently migrate active data to a cache, all while standard SAN traffic runs unimpeded. The logical I/O technology to start with is Fibre Channel, for its large installed base and proven reliability, and a caching SAN adapter becomes the ideal mechanism for delivering the benefits of server-based caching 2.0 to the enterprise data centre.
“Today’s server-based caching solutions are beginning to break down the I/O performance gap between high performance servers and slower, mechanical, disk-based arrays. QLogic’s FabricCache QLE10000 adapter redefines server-based caching with the industry’s first caching SAN adapter that makes SSDs a shared SAN resource,” said Arun Taneja, founder and consulting analyst of Taneja Group. “Solutions like the QLE10000 adapter will finally enable clustered enterprise applications to take advantage of SSD performance acceleration typically found only in individual servers…”
Regardless of the manufacturer, the caching SAN adapter warrants investigation. Transparency is the first key, as tying server-based cache to a SAN adapter unlocks numerous user benefits, not the least of which is simplicity. Server-based caching needs to be invisible to the OS and any hypervisors, removing the need for additional software drivers.
“By placing the cache in the fabric, and supporting it with a standard Fibre Channel HBA driver, the caching becomes transparent to the application host system and storage system,” said David Floyer, CTO and co-founder of Wikibon. “For Fibre Channel SAN environments this is a simpler and lower cost implementation to install, manage and grow.”
Server-based caching is becoming the preferred method of accelerating enterprise data in a SAN. Today’s 1.0 solutions are a good start and provide benefits to single-server and simple, unshared configurations. Server-based caching 2.0 solutions will be able to deliver caching benefits across the widest range of enterprise applications in the data centre, including clustered and virtualized applications. The caching SAN adapter is the first to deliver 2.0 to the enterprise, and it is here today.