Data – the currency of the new economy

By Nigel Edwards, Vice President, EMEA Sales and Channel Marketing at HGST, a Western Digital Company.


Data: It’s the currency of the new economy. It has never been more important, and there has never been so much of it.

Fuelled by the growing number of applications, devices and data types – not to mention the Internet of Things (IoT) – the amount of data being created and replicated is doubling every two years. In a survey commissioned by HGST in late 2014, 86% of the CIOs and IT decision makers surveyed said they believe that all data generated has value if the organisation can store, access and analyse it optimally.

The value of data is shifting the competitive landscape, forcing businesses to re-architect and reimagine their data centers to keep pace with new market dynamics. In 2014, a survey conducted by PwC found that 29% of critical decisions were made based on internal and external data and analysis. Players that understand the power of data, see it as an opportunity and act on it are the ones positioned to win in the future.

The volume, velocity, longevity and value of data are putting storage at the heart of the data center. Having the right storage strategy is key to optimising infrastructure and realising the full power of data. Here are the top trends that I see ahead for the storage solutions industry:

Air is Dead. Helium HDDs Will Rule the Data Center

Helium-filled hard drives provide the highest capacity per drive, with more stable and reliable recording technology than is possible with air-filled drives. The lower power consumption of helium-filled drives also results in the highest enclosure and rack densities in the industry. Combined, these benefits deliver the lowest total cost of ownership (TCO) per terabyte available. Recognising these clear advantages, I believe helium-filled drives will become the leading technology for scale-out applications such as active archive and cloud storage.
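To make the TCO-per-terabyte argument concrete, here is a minimal sketch of the arithmetic. All of the figures below (drive cost, capacity, wattage, energy price, service life) are hypothetical placeholders rather than HGST data; the point is simply that higher capacity per drive and lower power per drive both divide through into a lower cost per terabyte.

```python
# Hypothetical TCO-per-terabyte comparison (illustrative numbers only).

def tco_per_tb(drive_cost, capacity_tb, watts, years=5, price_per_kwh=0.12):
    """Acquisition cost plus lifetime energy cost, divided by drive capacity."""
    energy_cost = watts / 1000 * 24 * 365 * years * price_per_kwh
    return (drive_cost + energy_cost) / capacity_tb

# Placeholder figures: an air-filled drive vs a higher-capacity, lower-power helium-filled drive.
air = tco_per_tb(drive_cost=250, capacity_tb=6, watts=9.0)
helium = tco_per_tb(drive_cost=400, capacity_tb=10, watts=7.0)

print(f"Air-filled:    ${air:.2f}/TB over 5 years")
print(f"Helium-filled: ${helium:.2f}/TB over 5 years")
```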

Data Access Infrastructure Will be Measured by the “Six-Second Rule”

Six seconds will become the new standard for data access amongst infrastructure architects. As articulated in Brian Shackel’s acceptability paradigm, six seconds will be the upper limit of what is acceptable for data access. Because of this Six-Second Rule, data center architects will no longer be able to classify ageing “cold” data as “store once, hopefully read never” and park it on tape. Instead, businesses need to recognise that they can only harness the power of data if they have near-instant access to it to extract its value. This means data center architects will need to understand an emerging category of disk-based active archive systems, which can return data in under six seconds, before users lose interest in it.
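As an illustration only, the sketch below shows how an architect might track whether an archive tier is meeting a six-second access budget. The `fetch_from_archive` function is a hypothetical stand-in for whatever retrieval call a given active archive system actually exposes.

```python
import time

SIX_SECOND_BUDGET = 6.0  # upper limit of acceptable data access, per the Six-Second Rule

def fetch_from_archive(object_id):
    """Hypothetical placeholder for a retrieval call against an active archive tier."""
    time.sleep(0.5)          # simulate a disk-based archive read
    return b"archived bytes"

def timed_fetch(object_id):
    """Fetch an object and report whether the read stayed within the access budget."""
    start = time.monotonic()
    data = fetch_from_archive(object_id)
    elapsed = time.monotonic() - start
    status = "OK" if elapsed <= SIX_SECOND_BUDGET else "MISSED budget"
    print(f"{object_id}: {elapsed:.2f}s ({status})")
    return data

timed_fetch("2014-q4-sensor-logs")
```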

All-Flash is Not a Universal Fix. Architects Will Build for Speed AND Capacity

Driven by accelerating demand and backed by customer endorsements, CIOs are quickly realising that Flash is well suited to performance-intensive applications including databases, data warehousing and big data analytics. As PCIe prices move toward $1/GB, Gartner expects nearly 50% of all SSD unit shipments to data centers to be PCIe by 2018. The most popular use of PCIe Flash is in caching configurations in front of an existing SAN. This approach is completely transparent to the existing SAN and drives latencies down from milliseconds to tens of microseconds. Where time is money, Flash is an increasingly attractive proposition.
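The caching approach described above follows a simple read-through pattern: reads are served from the Flash tier when possible and fall back to the SAN otherwise, with the Flash copy refreshed on a miss. The sketch below is illustrative only; the two dictionaries are stand-ins for the real Flash and SAN volumes, which the actual caching software manages transparently.

```python
# Minimal read-through cache sketch: a Flash tier in front of a SAN (illustrative only).

flash_cache = {}            # stand-in for the PCIe Flash tier (microsecond-class reads)
san_backend = {             # stand-in for the existing SAN (millisecond-class reads)
    "block-001": b"cold data on the SAN",
}

def read_block(block_id):
    """Serve from Flash if cached; otherwise read the SAN and populate the cache."""
    if block_id in flash_cache:
        return flash_cache[block_id]      # cache hit: tens of microseconds
    data = san_backend[block_id]          # cache miss: milliseconds from the SAN
    flash_cache[block_id] = data          # warm the cache for subsequent reads
    return data

read_block("block-001")    # first read misses and warms the cache
read_block("block-001")    # second read is served from the Flash tier
```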

Yet going forward, architects will need to understand two distinct paths for designing their data center infrastructure. Performance-centric applications will employ “high performance” architectures, leveraging PCIe or the emerging NVMe over Fabrics standards while retaining the efficiencies of a shared SAN. Capacity-centric applications, on the other hand, will take a completely different approach focused on “high capacity,” leveraging active archive and object storage solutions to achieve new levels of scalability, efficiency and cost effectiveness. This is particularly exciting, as the shift in architecture design will launch a virtuous cycle in which companies that innovate vertically across the stack drive success for customers across the board.

The Cloud Will Become a Mandatory “Third Leg” for Data Centers

Cloud strategies are creating a third platform that cooperates with and complements the high-performance and high-capacity architectures in the data center. It is well established that cloud architecture is here to stay, and many organisations are already using public, private or hybrid architectures. The important distinction moving forward will be understanding how the cloud adds to the value of data, data longevity, data activity and overall architectural optimisation. Data centers that understand the economy of data within their business will be the ones that best utilise scalable cloud architectures and fully extract the value of their operational data.

While Infrastructure Budgets Remain Flat, Expectations for Data Are Set To Double

IDC forecasts that the Big Data technology and services market will grow at a 27% compound annual growth rate, reaching $32.4 billion by 2017.

However, the number of IT professionals is not projected to double in the foreseeable future. In fact, a recent US labour report shows that IT hiring has been stagnant over the last five years. With data doubling every two years and headcount staying flat, the amount of data each IT professional working today has to manage will increase roughly eightfold by 2020. This means data center infrastructure not only needs to be scalable; new levels of simplicity and ease of management will also be essential for success.
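The eightfold figure follows directly from the growth rate cited earlier: data doubling every two years over the six years from 2014 to 2020 is three doublings, while headcount stays flat. A quick back-of-the-envelope check, using a purely illustrative starting point of 1 PB per administrator:

```python
# Back-of-the-envelope: data doubling every two years, flat IT headcount (illustrative).

data_per_admin_tb = 1000     # hypothetical starting point: 1 PB per administrator in 2014
for year in range(2014, 2021, 2):
    print(f"{year}: ~{data_per_admin_tb:,} TB per administrator")
    data_per_admin_tb *= 2   # one doubling every two years

# 2014 -> 2020 is three doublings: 2**3 = 8x as much data per administrator.
```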

Optimised Software Will Become A Focus For The Data Center

Data center storage solutions will begin to be optimised beyond device characteristics alone: storage software offers just as much opportunity for optimisation as hardware does. Although the biggest cloud services firms have begun designing custom, highly optimised hardware, few data center operators have the engineering resources to do the same. New software advances in caching, replication, management and shared storage for scale-out database clusters will enable enterprise data centers to gain the same CapEx and OpEx benefits enjoyed by the large cloud services firms, without the need for equivalent resources.

Object Storage Systems To Go Mainstream

While traditional file and block storage remains a viable solution whenever fast access to data is needed, it is not optimised for storing multiple petabytes. By using a flat namespace for storing data and its associated metadata, object storage avoids the overheads of organising data in directories and managing separate metadata structures. In addition, the use of erasure coding as an alternative to RAID makes object storage the most cost-optimised and reliable way to store massive amounts of data.

We can therefore expect object storage systems to cross the chasm into more mainstream enterprise data centers through multiple deployment options. With either a plug-and-play, S3-compliant scale-out object storage system or a NAS-to-object-storage gateway, companies can preserve their existing application investments while gaining the benefits of a highly scalable, easy-to-manage back-end active archive repository. The simplicity and high scalability of the object storage interface give data center architects the option to choose private, public or hybrid object storage to suit their application needs, or to handle surge requirements when necessary.
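Because the access path is simply the S3 API, an application written against a public cloud bucket can point at a private or hybrid object store just by changing the endpoint. Below is a minimal sketch using the boto3 library; the endpoint URL, credentials and bucket name are hypothetical placeholders for whatever S3-compliant system sits behind the active archive.

```python
import boto3

# Hypothetical endpoint and credentials for an on-premises, S3-compliant object store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Objects live in a flat namespace: a bucket plus a key, with metadata attached to the object.
s3.put_object(
    Bucket="active-archive",
    Key="2014/q4/sensor-logs.csv.gz",
    Body=b"compressed sensor readings ...",
    Metadata={"retention": "7y", "source": "iot-gateway-12"},
)

# Retrieval uses the same interface whether the store is private, public or hybrid.
obj = s3.get_object(Bucket="active-archive", Key="2014/q4/sensor-logs.csv.gz")
data = obj["Body"].read()
```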


For businesses to remain competitive in today’s landscape, data matters. Companies will compete on the insights they pull from their data, which must be stored efficiently and be instantly accessible for real-time processing and analytics. It is therefore important that companies are equipped with the latest storage, data center technologies and architectures. The standards for efficiency, performance and scalability will remain under close scrutiny, and businesses that want to succeed will need to extract greater value from the data they own.



 
