Server-based caching for data acceleration

Advances in server and storage technology have opened an I/O performance gap in enterprise storage networks, a gap now being addressed by Flash-based cache solutions, including PCIe-based Flash cards. Flash-based caching narrows this gap by reducing I/O latency. With many new Flash cache solutions hitting the market, how can you choose the right one for your environment? By Tim Lustig, Director of Corporate Marketing at QLogic Corporation.


Any enterprise caching solution under consideration should have a few basic features. First, it should be easy to deploy. Second, it should be transparent to the OS and application. And finally, it should support caching on individual servers as well as multi-server clusters, including highly virtualised environments and clustered applications. (This final consideration is a firm requirement for many enterprise IT environments, where large portions of the data centre are virtualised.) When these three features are combined, a Flash cache solution can preserve existing SAN data protection and compliance policies while delivering the greatest benefits across the widest range of applications in your enterprise.


Today, server-based caching is gaining in popularity because it places the Flash cache closest to the application, “short-stopping” a large percentage of the I/O demand of critical applications, lowering latency and improving overall storage performance. Caching at the server also positions the cache where it is insensitive to congestion on the network or storage infrastructure. By reducing demand on both the storage network and the arrays, server-based caching improves overall storage performance for all applications, even those without caching enabled, and extends the useful life of existing storage infrastructure.
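The “short-stopping” effect can be illustrated with a minimal sketch of a server-side read cache: hot blocks are served locally, and only misses generate I/O to the SAN. All names here (`ServerReadCache`, `san_read`) are hypothetical and purely illustrative; real adapter firmware is far more involved.

```python
from collections import OrderedDict

class ServerReadCache:
    """Illustrative LRU read cache: serves hot blocks from local Flash,
    reaching out to the SAN array only on a miss."""

    def __init__(self, capacity, san_read):
        self.capacity = capacity
        self.san_read = san_read      # callable: block address -> data
        self.blocks = OrderedDict()   # block address -> cached data
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.blocks:
            self.blocks.move_to_end(lba)     # refresh LRU position
            self.hits += 1
            return self.blocks[lba]          # "short-stopped" locally
        self.misses += 1
        data = self.san_read(lba)            # I/O goes out to the array
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return data
```

Every hit is a request the storage network and array never see, which is why cached servers also relieve pressure on applications that are not caching at all.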


Server-based caching requires no upgrades to storage arrays and no additional appliances in the data path of critical networks, and it allows storage I/O performance to scale smoothly with increasing application demands. More importantly, server-based caching enables pooled cache to be shared across virtualised and clustered applications: a capability that array-based and appliance-based caching simply cannot provide.


Caching SAN Adapter

Caching technology typically requires coherence between caches when a solution spans multiple physical servers. Traditional captive implementations of server-based Flash caching do not support this capability. While they are very effective at improving the performance of individual servers, storage acceleration across clustered server environments or virtualised infrastructures that span multiple physical servers is beyond their reach.


This limits the performance benefits of Flash-based caching to a relatively small set of single-server applications.


The caching SAN adapter is a new approach to server-based caching that addresses these drawbacks. Rather than creating a discrete captive cache for each server, the Flash-based cache is integrated with a SAN adapter whose cache-coherent implementation uses the existing SAN infrastructure to create a shared cache resource distributed over multiple servers. This eliminates the single-server limitation on caching and opens its performance benefits to the high I/O demand of clustered applications and highly virtualised environments.


The caching SAN adapter is a ground-breaking, enterprise-ready application acceleration solution that combines a Fibre Channel host bus adapter (HBA), intelligent caching, and I/O management with connectivity to a server-based PCIe® Flash card. This approach requires no changes to existing server software or infrastructure; it is completely application- and hypervisor-transparent, as well as infrastructure- and storage-subsystem-agnostic.


The caching SAN adapter is exceptionally simple to deploy and manage, transforming single-server captive cache into a consolidated, shared, performance-enhancing resource across servers. The result is transparent, adapter-based caching: a dramatically simpler solution that lowers Total Cost of Ownership (TCO) and unleashes shared cache performance for performance-challenged clustered and virtualised applications in the enterprise.


This approach guarantees cache coherence, and precludes cache corruption, by establishing a single cache owner for each configured LUN. Only one caching adapter in the accelerator cluster ever actively caches a given LUN’s traffic. All other members of the accelerator cluster route their I/O requests for that LUN through its cache owner, so every member of the storage accelerator cluster works on the same copy of the data. Cache coherence is guaranteed without the complexity and overhead of coordinating multiple copies of the same data. By clustering caches and enforcing coherence through a single cache owner per LUN, this approach overcomes the shortcomings of traditional server-based caching.
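The single-owner scheme described above can be sketched in a few lines: each LUN is assigned exactly one owning adapter, and every cluster member routes that LUN’s reads through the owner, so only one cache copy ever exists. This is a simplified model under assumed names (`AcceleratorCluster`, `assign_owner`), not QLogic’s actual implementation.

```python
class AcceleratorCluster:
    """Sketch of single-owner cache coherence: one owning adapter
    per LUN, and all members route that LUN's I/O through it."""

    def __init__(self, members):
        self.members = members     # adapter id -> per-LUN cache dict
        self.owner_of = {}         # LUN -> owning adapter id

    def assign_owner(self, lun, adapter_id):
        self.owner_of[lun] = adapter_id

    def read(self, requester, lun, lba, san_read):
        owner = self.owner_of[lun]   # requester need not be the owner
        cache = self.members[owner].setdefault(lun, {})
        if lba not in cache:         # single cache copy per LUN
            cache[lba] = san_read(lun, lba)
        return cache[lba]
```

Because non-owners never hold a copy of a LUN’s blocks, there is nothing to invalidate or synchronise: coherence falls out of the routing rule itself.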