The future of memory is already here

For many infrastructure organizations and large enterprises, memory continues to be a limiting factor as system architects scale out in an attempt to access more memory.

By Diablo Technologies.

New applications, such as in-memory databases and real-time analytics, are significantly increasing the demand for memory. Additionally, datasets are growing at unprecedented rates while organizations attempt to mine relevant insights from this information. As the demand for memory continues to outstrip supply, the cost of meeting it with DRAM alone is prohibitive for most businesses. While several technologies on the horizon promise to alleviate these obstacles, IT organizations need a way to address memory requirements in a more cost-effective and flexible manner, today and tomorrow.

In recent years, several companies have promised new memory types to optimize performance, expand capacity, and lower costs. These technologies, however, have encountered considerable delays, raising questions about their viability. Real challenges exist in manufacturing new technologies, implementing dynamic memory allocation, and enabling efficient data management capabilities. The reality is that business needs and IT requirements are not going to wait because vendors don't have all the answers yet. Enterprise IT and infrastructure organizations must find a way to keep up with the increasing need for bigger memory footprints while containing costs as they scale.

Databases, analytics, data processing, and virtual machine hosts are all examples of applications that need more memory than DRAM alone can provide, from both a capacity and a cost standpoint.

For example, graph analysis generates large amounts of interim or temporary data in order to process a dataset. It is common for a 200GB dataset to require 2 or 3TB of memory. Accessing all of this data from disk is too cumbersome, introducing latency and bandwidth limitations as well as multiple bottlenecks. When an application can keep all of its data in memory, however, processing times improve significantly, and the number of machines required to process the data can be reduced severalfold. This delivers substantial savings on licensing, compute, networking, power, cooling, and rack space.
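As a rough illustration of that consolidation math, the sketch below assumes per-node capacities (512GB for a DRAM-only server versus 2TB for a hybrid-memory server) that are not figures from this article; the working-set size follows the 200GB-to-roughly-3TB example above.

```python
# Back-of-the-envelope consolidation math. The per-node capacities and the
# 15x working-set expansion are assumptions made for this sketch.

dataset_gb = 200                  # raw graph dataset
working_set_gb = dataset_gb * 15  # interim data can inflate this to ~3TB

dram_per_node_gb = 512            # assumed DRAM-only server
hybrid_per_node_gb = 2048         # assumed DRAM + flash "big memory" server

def nodes_needed(capacity_gb: int) -> int:
    """Smallest node count whose combined memory holds the working set."""
    return -(-working_set_gb // capacity_gb)  # ceiling division

dram_nodes = nodes_needed(dram_per_node_gb)
hybrid_nodes = nodes_needed(hybrid_per_node_gb)

print(f"Working set: {working_set_gb} GB")
print(f"DRAM-only nodes needed:     {dram_nodes}")
print(f"Hybrid-memory nodes needed: {hybrid_nodes}")
print(f"Consolidation factor:       {dram_nodes / hybrid_nodes:.1f}x")
```

Fewer nodes is where the licensing, networking, power, cooling, and rack-space savings come from.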

While there are certainly technologies out there that can help, additional features and software are required to fully address all the factors impeding performance. For example, NAND flash used as byte-addressable memory is not enterprise-grade off the shelf: as primary application memory, it requires additional software and media management services to achieve enterprise reliability and durability when being accessed thousands to millions of times. NVMe is fast and reliable, but it is still too expensive to scale for the masses and introduces latency because it sits on the interrupt-driven PCIe bus.

What is needed is a solution that can utilize DRAM for low-latency and high-speed transactions, coupled with lower cost, scale-out memory such as NAND flash for capacity and economics. This hybrid memory approach must be implemented in such a way that the OS and applications access one large pool of byte-addressable system memory. Sophisticated software must understand how memory is used by the applications and manage the placement accordingly. It must also ensure that metadata, instruction data and application data are all positioned in the proper tier of memory to optimize performance, while maximizing endurance and reliability. Algorithms must accurately analyze and predict usage patterns, pre-fetch and place data, manage memory media writes for durability, and optimize performance.
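A minimal sketch of the tiering idea is shown below. It is a conceptual illustration only, not Diablo's DMX implementation: the class, method names, and eviction policy (promote on access, demote the least-recently-used page to flash) are assumptions made for the example.

```python
from collections import OrderedDict

class TieredPageManager:
    """Toy two-tier page placement model: a small, fast DRAM tier backed by a
    large flash tier. Hot pages live in DRAM; cold pages are demoted to flash.
    Purely illustrative -- real hybrid-memory software also handles prefetch,
    write amortization, and media wear, which are omitted here."""

    def __init__(self, dram_pages: int):
        self.dram_pages = dram_pages
        self.dram = OrderedDict()   # page_id -> data, ordered by recency
        self.flash = {}             # page_id -> data

    def access(self, page_id, data=None):
        """Read/write a page, promoting it to DRAM and demoting the
        least-recently-used page to flash if DRAM is over capacity."""
        if page_id in self.dram:
            self.dram.move_to_end(page_id)         # refresh recency
        else:
            value = self.flash.pop(page_id, data)  # promote from flash, or admit new page
            self.dram[page_id] = value
            if len(self.dram) > self.dram_pages:   # DRAM over capacity
                cold_id, cold_val = self.dram.popitem(last=False)
                self.flash[cold_id] = cold_val     # demote coldest page
        if data is not None:
            self.dram[page_id] = data
        return self.dram[page_id]

# Example: 4 DRAM pages fronting a much larger flash tier.
mgr = TieredPageManager(dram_pages=4)
for pid in [1, 2, 3, 4, 5, 1, 6]:
    mgr.access(pid, data=f"page-{pid}")
print("DRAM (hot):  ", list(mgr.dram))
print("Flash (cold):", list(mgr.flash))
```

Production-grade placement software would also have to manage write amortization and flash endurance, which the article calls out as requirements for enterprise reliability.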

There are three requirements for leveraging storage class memory as primary memory:
• Intelligent Scale
  - Physical memory must be abstracted to effectively manage hybrid memory technologies, scaling up memory resources while lowering costs
• Smarter Data Placement
  - Intelligently and predictively place data in the right memory tier for the highest performance
  - Bigger memory footprints enable more work per node for databases, analytics, data processing, and cloud applications
  - More work per node lowers costs through consolidation (compute, network, power, cooling, rack space)
• Unique Flexibility
  - A solution that is agnostic to hardware can leverage lower-cost technologies today while seamlessly adopting new technologies going forward

The solution is clear and here today. Flash is ubiquitous, cost-effective, and, most importantly, proven. Memory1 from Diablo Technologies leverages flash to meet the flexibility and scalability requirements of the modern datacenter. Its accompanying DMX software efficiently manages the placement of data, caching, and tiering between DRAM and flash. In addition, DMX predicts and pre-fetches data to ensure the appropriate placement among tiers, optimizing performance and durability at scale.
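To make the "predict and pre-fetch" idea concrete, here is a toy stride-detection prefetcher. This is a generic textbook technique sketched for illustration and makes no claim about how DMX actually predicts access patterns.

```python
class StridePrefetcher:
    """If recent accesses follow a constant stride, speculatively promote the
    next few pages before they are requested. Conceptual illustration only."""

    def __init__(self, depth: int = 2):
        self.depth = depth      # how many pages ahead to prefetch
        self.last = None        # last page accessed
        self.stride = None      # stride observed between the last two accesses

    def on_access(self, page_id: int) -> list[int]:
        """Return the page ids worth prefetching after this access."""
        predictions = []
        if self.last is not None:
            stride = page_id - self.last
            if stride != 0 and stride == self.stride:
                # Two consecutive accesses with the same stride: predict more.
                predictions = [page_id + stride * i for i in range(1, self.depth + 1)]
            self.stride = stride
        self.last = page_id
        return predictions

pf = StridePrefetcher(depth=2)
for pid in [10, 12, 14, 16]:
    hints = pf.on_access(pid)
    if hints:
        print(f"access {pid}: prefetch {hints}")
```

In a real system, prediction like this would feed the tiering and endurance management described above, so that data already sits in the fast tier by the time the application touches it.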
