Storage outlook: 2014 and beyond

The world of storage is changing rapidly; data is being created and shared at a phenomenal rate. From global institutions to small businesses and individual consumers, data storage needs are growing, as is the desire to readily access data regardless of device or location. By Paul Rowan, General Manager, Storage Products SSD, and Nick Spittle, General Manager, Product Management, Toshiba Electronics Europe.


With increasing internet speeds, workplace reliance on IT systems, access to HD and 3D video content and the rise of social media, digital data production and storage is at an all-time high – and businesses are at risk of being overloaded by data. To put this tidal wave of data in perspective, market research firm IDC predicts worldwide installed raw storage capacity will climb from 2,596 exabytes (EB) in 2012 to 7,235 EB (7.235 zettabytes) in 2017.

To put this number in perspective, 1EB equals 1,000,000,000,000,000,000 bytes, or 10^18 bytes. Perhaps more helpfully, 1EB is equivalent to the storage available on around 31 million 32GB iPads! The majority of digital data is still stored on HDDs, which continue to offer price and capacity characteristics that often make them the favoured choice, although SSDs offer compelling advantages such as lower energy consumption and faster data access times. Bridging the gap between the two, solid state hybrid drives (SSHDs) combine flash and magnetic storage in a single drive and aim to hit the price / performance sweet spot. However, the type of media on which the data is stored is only half the story – where that data is geographically located is the other. Consumers and enterprises are becoming more and more comfortable using cloud storage services, and accessing data via an internet connection rather than from a local storage device.
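
As a quick sanity check, the arithmetic behind these figures can be verified in a few lines of Python. The 32GB iPad figure and the IDC forecast numbers are the article's own; everything else is simple arithmetic:

```python
# Back-of-the-envelope checks on the capacity figures quoted above.

EB = 10**18          # 1 exabyte in bytes (decimal units, as used here)
GB = 10**9           # 1 gigabyte in bytes

ipad_capacity = 32 * GB
ipads_per_eb = EB / ipad_capacity
print(f"32GB iPads per exabyte: {ipads_per_eb:,.0f}")
# 31,250,000, i.e. roughly the 31 million quoted

# Implied compound annual growth rate of the IDC forecast (2012 -> 2017)
capacity_2012 = 2596   # EB
capacity_2017 = 7235   # EB
years = 5
cagr = (capacity_2017 / capacity_2012) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 23% per year
```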


Increased adoption of NAND
Despite improvements in HDD technology, mobile and computing devices are increasingly adopting SSDs due to their higher speeds and smaller form factors. SSDs have no moving parts at all and store all their data on NAND flash memory. Because NAND flash memory can be accessed far more quickly than spinning media, it delivers higher data transfer rates and more input/output operations per second (IOPS).
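
To illustrate why access time matters so much, here is a minimal sketch. The latency figures are rough, assumed ballpark values chosen for illustration, not measurements of any particular drive:

```python
# At queue depth 1, IOPS is bounded by 1 / average access time,
# so a shorter access path translates directly into more IOPS.

def iops_from_latency(avg_access_time_s: float) -> float:
    """Upper bound on IOPS for a single outstanding request."""
    return 1.0 / avg_access_time_s

hdd_latency = 10e-3    # ~10 ms seek + rotational delay (assumed)
ssd_latency = 0.1e-3   # ~0.1 ms NAND read path (assumed)

print(f"HDD: ~{iops_from_latency(hdd_latency):,.0f} IOPS")   # ~100
print(f"SSD: ~{iops_from_latency(ssd_latency):,.0f} IOPS")   # ~10,000
```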


NAND chips make up the bulk of the cost of an SSD, and efforts to bring down the cost of NAND initially focused on increasing the amount of data that can be stored on a chip of a given size. Over the coming years, SSD prices are predicted to keep falling, and drives are likely to be squeezed into smaller and smaller packages. This increasing affordability will also see SSDs make the jump into automotive infotainment systems in the near future.


Due to the increased demand for higher capacity, NAND flash memory has become the most aggressively scaled technology among electronic devices. Die shrinks (reductions in the size of the NAND cells) have come increasingly quickly over the past few years, and it is now commonplace to find SSDs based on 19nm process technology.


Flash density increases can also be achieved by increasing the number of bits stored in each cell. Single level cell (SLC) NAND stores a single ‘bit’ within each cell, whereas multi-level cell (MLC) NAND can store two bits per cell, and triple level cell (TLC) NAND can store three. Because of the reduced cost per GB, MLC and TLC are becoming increasingly prevalent in the marketplace, particularly in consumer electronics, and 3D NAND is on the horizon.
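
The density gain comes from the number of distinguishable charge states each cell must hold, which grows as a power of two with the bits stored; a minimal illustration:

```python
# A cell storing n bits must distinguish 2**n charge levels.
# More levels mean tighter voltage margins, which is why endurance
# drops as bits per cell rise (see the cycle counts quoted below).

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s)/cell -> {2**bits} voltage states")
# SLC: 1 bit(s)/cell -> 2 voltage states
# MLC: 2 bit(s)/cell -> 4 voltage states
# TLC: 3 bit(s)/cell -> 8 voltage states
```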


However, SSDs do face limitations in terms of life expectancy: for every extra bit stored per cell, the number of program/erase cycles the cells can endure decreases. SLC can endure around 100,000 cycles, MLC 5,000 to 10,000, and TLC around 1,000.
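
A simplified sketch of how these cycle counts translate into drive lifetime follows; the capacity, daily write volume and write-amplification factor are illustrative assumptions, not vendor specifications:

```python
# Rough lifetime estimate from rated program/erase cycles, assuming
# perfect wear levelling spreads writes evenly across all cells.

def lifetime_years(capacity_gb: float,
                   pe_cycles: int,
                   daily_writes_gb: float,
                   write_amplification: float = 2.0) -> float:
    """Years until the rated P/E cycles are exhausted."""
    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_writes_gb / daily_writes_gb / 365

# Cycle counts from the article; 256GB drive and 50 GB/day are assumed.
for name, cycles in [("SLC", 100_000), ("MLC", 10_000), ("TLC", 1_000)]:
    years = lifetime_years(capacity_gb=256, pe_cycles=cycles,
                           daily_writes_gb=50)
    print(f"{name}: ~{years:,.0f} years at 50 GB written/day")
# SLC: ~701 years, MLC: ~70 years, TLC: ~7 years
```

Write-heavy enterprise workloads can be orders of magnitude more demanding than the modest figure assumed here, which is why the distinction matters in practice.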


Workloads, especially in the enterprise sector, can be extreme, involving high volumes of transactional data storage that may be weighted towards write operations over reads. For SSDs, the frequency of this data change determines the expected life of the device in the field.


Enterprise storage trends
Tiered storage will become increasingly important for datacentres and cloud providers. It combines a range of HDD and SSD storage technologies so that data is always held on the most effective storage medium. With the increasing demands for data storage, and for fast data access, more and more data centres and cloud servers will be moving to tiered architectures.


Automated algorithms select the most effective form of storage depending on cost, performance, availability, protection and recovery-speed requirements. A tiered storage architecture utilises the key benefits of enterprise Solid State Drives (eSSDs) and HDDs to provide the appropriate storage solution according to how frequently, and how quickly, the data needs to be accessed. Access speeds are graduated, starting with the highest at the top of the pyramid and decreasing towards the bottom. The lowest tier houses offline and near-line data that is retained for backup or compliance; this data is typically stored on 7,200 RPM hard disk drives. The ascending tiers store business-critical and online data, with faster 10,000 RPM HDDs towards the top of the pyramid. At the very top sit the eSSDs with their super-fast access speeds, used to store mission-critical data that needs to be accessed frequently and quickly.
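
A hypothetical sketch of the kind of rule such algorithms apply is shown below; the access-frequency thresholds are invented for illustration, and real systems also weigh cost, availability, protection and recovery requirements:

```python
# Toy tier-selection rule mirroring the pyramid described above.

def select_tier(accesses_per_day: float, mission_critical: bool) -> str:
    """Map a data set to a storage tier based on how hot it is."""
    if mission_critical or accesses_per_day > 1000:
        return "eSSD (top tier)"
    if accesses_per_day > 10:
        return "10,000 RPM HDD (middle tier)"
    return "7,200 RPM HDD (near-line / backup tier)"

print(select_tier(5000, mission_critical=True))    # eSSD (top tier)
print(select_tier(50, mission_critical=False))     # 10,000 RPM HDD (middle tier)
print(select_tier(0.1, mission_critical=False))    # 7,200 RPM HDD (near-line / backup tier)
```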


Tiered storage systems are engineered to minimise power consumption by distributing data to the most appropriate storage ‘section’ or ‘layer’. By ensuring the most suitable media are used to store and retrieve data, power consumption and heat dissipation are minimised – both of which are critical issues for enterprise storage and data centres.


Big data trends
An increasing number of governments, companies and scientific researchers are looking to understand data sets that are too large for commonly available software tools to capture, curate, manage and process. For instance, experiments conducted at the Large Hadron Collider generate more than 500 exabytes of data per day, although ‘only’ 25 petabytes (25,000,000,000,000,000 bytes) are stored on an annual basis. Collecting and storing data for ‘big data’ projects is just the start of the process – and many big data deployments are still in the early stages, focusing on processing traditional data sources rather than social, email, video and sensor data. As these projects mature, they will need increased analytics capabilities and faster data access.
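
Taking the article’s own figures at face value, a quick calculation shows just how aggressively that data must be filtered before it ever reaches storage:

```python
# Fraction of generated LHC data that is actually stored, using the
# 500 EB/day and 25 PB/year figures quoted above.

EB = 10**18
PB = 10**15

generated_per_year = 500 * EB * 365   # raw data generated
stored_per_year = 25 * PB             # data actually retained

fraction = stored_per_year / generated_per_year
print(f"Fraction stored: {fraction:.2e}")
# ~1.37e-07, i.e. roughly one byte kept for every seven million generated
```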


In the commercial world, eBay has a data warehouse that stores up to 90 petabytes and is used not only to facilitate transactions, but also to analyse customer data and trends.


In-Memory Computing (IMC) services, often offered by cloud providers, can squeeze batch analysis processes that normally last hours into minutes or seconds. The increasing affordability of fast SSD technologies is expanding providers’ ability to deliver IMC services, further increasing the rate of adoption.


2014 is likely to see continued growth in IMC adoption, as well as an increased focus on the development of standards that will help reduce architectural complexity.


Conclusion
Data storage capacity needs will continue to grow unabated, and both enterprises and consumers will continue to demand faster and more convenient access. Frequently accessed data will increasingly move to NAND-based products, while tiered architectures will accommodate backup and long-term storage of less frequently accessed data. With capabilities spanning the entire storage media landscape, Toshiba is well placed to support the world’s growing storage needs.