Powering the Cloud: it’s all about the enterprise

Much of the Cloud's success to date has been based around the big names we all know offering consumers a range of free and low-cost ways to consume IT services. If the Cloud is to make a similar impact on the enterprise world, it's becoming increasingly clear that it needs to be made 'enterprise-fit'. Which companies are best placed to help with this process? That's where the debate starts!


AS EVER, Powering the Cloud provided a great opportunity for visitors to talk to the vendor community about the latest trends and technologies, and to try to understand just what is going on in the Cloud-dominated world of storage networking and, more widely, data centres. Some of the key announcements are featured in the article below, but perhaps most interesting were the trends and opinions not featured in the 'official' versions of the news. Courtesy of conversations with, amongst others, Virtual Instruments, Fujitsu, Quantum, Seagate, Emulex, SNIA Europe, Dot Hill, NetApp, Samsung, Aptare, Data Direct and Zadara, plenty of valuable insights were gained concerning a range of topics, with soundbites below:

 In a real-world example given by Virtual Instruments, a bank trying to resolve a performance issue without VI's Infrastructure Performance Management tool took some 84 hours; one customer using VI's technology has a mean time to resolution for performance issues of just 10 minutes
 Convergence will only ever go so far – for example, no one will ever converge such disparate disciplines as servers and cabling
 Open Source (OpenStack) has massive market momentum and is predicted to be the third major choice available to end users, alongside VMware and Microsoft.
 Although the Software Defined era allows the decoupling of software and hardware, many customers still want to be able to buy a solution/box that provides everything they need to run their app(s)/business.
 So, in Emulex’s case, the decision is to provide customers not with a ‘big box of lego’, but specific ‘lego kits’, courtesy of some custom software releases
 IT is very much a generational thing, with the younger folks happy to go with the Open world, less so the ‘more mature’ IT professionals
 Object Storage is not new, but there’s plenty of momentum around this technology right now in the long term data retention/archival space
 SNIA has an Object Storage initiative alongside recent work it has done providing storage security input on the ISO 27040 standard. The association is also trying to provide objectivity and clarity to the SSD/Flash market as well as the Software Defined definition.
 Emulex's Shaun Walsh sees four waves of hyperscale: right now we're entering Wave 3, enterprise adoption, with the final wave being the Internet of Things (the human being as a platform)
 Although the known IT universe is already in a state of considerable flux, some of the industry’s largest players will be falling by the wayside – almost certainly replaced by some of the tech giants already established in the Far East
 Huawei and Lenovo we know about, but there are plenty more technology companies to come from Asia, with China predicted to have the same kind of impact as the Japanese tech companies did in the 70s and 80s
 Security is a very real issue when it comes to the Cloud, and end users need to be very aware of the risk/reward trade-offs
 Not all real time, automated tiering solutions are created equal
 The Hybrid Cloud is, perhaps, nothing more than a compromise. After all, if the promise of agility, reduced costs and increased performance holds good for some apps, surely it holds good for all?
 So, we’re back to the mindset issue. If you started a company today (as plenty of folks are) why would you want to own any IT estate?
 Seagate’s Kinetic Open Storage Platform promises to reduce TCO by ‘redefining and simplifying storage architectures’
 Look out for Remote Direct Memory Access (RDMA) – already making an impact in the storage networking space
 V-NAND (3D Flash technology) promises to change the SSD/Flash market, as does the prospect of a new PCIe bus standard
 Enterprise storage-as-a-service could well be a rather different entity from ‘storage-as-a-service’
 So, much of the Cloud to date is based around consumer ideas and technologies, and we are now seeing the Cloud being made ‘Enterprise-Ready’
 The task ahead for vendors and end users alike? Eliminating multiple layers of legacy hardware and software infrastructure to take advantage of the new IT world.

Emulex was discussing the results of a study of 1,623 European and US IT professionals, in which respondents provided insight into their enterprise data centre networking environments. The study, conducted in October 2014, found that 57% of respondents have adopted hyperscale networking environments that require massive scalability in network resources. Of those respondents, more than half (51%) named increasing bandwidth as a major challenge in moving to hyperscale environments.
As applications become more network-centric and the volume of data grows in the cloud, organisations are seeing increased bandwidth demand from front-end applications driven by mobility and Bring Your Own Device (BYOD), from mid-tier big data analytics and content distribution, and from back-end transaction processing and storage management. The survey results below indicate that hyperscale and multi-tenant requirements are driving demand for higher network bandwidth to manage vast volumes of data, lower latency to accelerate application delivery and performance, and increased security to meet service level agreements (SLAs) and regulatory and compliance requirements. Organisations have responded by increasing their network bandwidth, and more than three-quarters (77%) of respondents running hyperscale environments say the move to the cloud has already necessitated the upgrade of their networks to at least 40Gb Ethernet (40GbE).
Other key findings from the survey include:
Hyperscale and the cloud
 37% of respondents are taking a hybrid approach, using both private and public cloud.
 31% are investing in private cloud, but are cautious about moving data to the public cloud.
 30% are taking a “wait and see” approach to storing data in the cloud.
The top workloads most frequently migrated to the cloud
 The top workloads respondents reported they have moved, or plan to move to the cloud, include customer business applications (29%), application testing/development (23%), big data/analytics (19%), Office 365 (19%), email collaboration (19%), customer relationship management (CRM) (18%), disaster recovery (13%), ecommerce (9%), ERP systems (5%) and SAP Hana (2%).
Hyperscale organisations are investing in open networking approaches
 55% of respondents from hyperscale organisations have already deployed an Open Compute-based infrastructure, and 43% plan to in the next 24 months. In contrast, only 17% of respondents from non-hyperscale organisations have deployed an Open Compute-based infrastructure, and only 17% plan to in the next 24 months.
 31% of respondents from hyperscale organisations have already deployed a network functions virtualisation (NFV)-based infrastructure, and 68% plan to in the next 24 months. In contrast, only 16% of respondents from non-hyperscale organisations have deployed an NFV-based infrastructure, and only 16% plan to in the next 24 months.
 15% of respondents from hyperscale organisations have already deployed OpenStack, and 82% plan to deploy OpenStack in the next 24 months. In comparison, only 11% of respondents from non-hyperscale organisations have deployed OpenStack, and only 20% plan to in the next 24 months.
 11% of respondents from hyperscale organisations have already deployed software-defined networking (SDN), and 86% plan to implement SDN in the next 24 months. Comparatively, only 17% of respondents from non-hyperscale organisations have deployed SDN, and only 29% plan to in the next 24 months.
State of data center networking technologies and protocols
 All survey respondents reported they are using a mix of network technologies in their private data centers. 23% have deployed 10GbE, 7% have deployed 40GbE, and 3.5% have deployed 100GbE.
 In the next 12-24 months, 68% of all survey respondents plan to deploy 10GbE, 63% plan to deploy 25GbE, 69% plan to deploy 40GbE and 70% plan to deploy 100GbE.
 All survey respondents reported they are using a mix of networking protocols in their private data centers. 23% have deployed Fibre Channel over Ethernet (FCoE), 18% have deployed iSCSI and 2% have deployed RDMA over converged Ethernet (RoCE).
 In the next 12-24 months, 54% of respondents plan to deploy FCoE, 65% expect to be using iSCSI and 68% expect to be using RoCE.
Bandwidth remains a major challenge
 Bandwidth remains a major challenge for hyperscale companies, whereas non-hyperscale companies are focused on security. For hyperscale companies, respondents reported increased bandwidth requirements (51%), latency (36%), and security concerns (20%) as the top three challenges in migrating these workloads to the cloud.
 For non-hyperscale companies, respondents reported security (70%), latency (44%), and increased bandwidth requirements (39%) as the top three challenges in migrating workloads to the cloud.
Hyperscale organisations are upgrading networking technologies at a much higher rate
 97% of respondents reported that adoption of hyperscale has necessitated a move to 10GbE, 40GbE or higher speeds to meet demands of high-performance applications such as big data, analytics and content distribution, compared to only 48% of respondents from non-hyperscale organisations.
Hyperscale companies need network speed
 38% of respondents from hyperscale companies said they have 40Gb per second (Gbps) or faster network connections to their primary data center, compared to only 22% of respondents from non-hyperscale companies.
 93% of survey respondents at hyperscale companies expect to be at 40Gbps or faster in three years, but only 44% of non-hyperscale organisations expect to be at 40Gbps or faster in the same timeframe.
“The move to hyperscale is entering a second phase from the cloud providers to managed service providers (MSPs) and enterprises. This survey highlights that like every major transition in IT infrastructure, changes are required and IT professionals have to re-think almost everything they have done in the past,” said Shaun Walsh, senior vice president of marketing, Emulex. “Beyond the concept of hyperscale models, we see very specific technologies being implemented such as OpenStack, SDN and NFV. Each of these changes has performance, operational expenditure (OPEX) and team skill implications for application, networking and storage infrastructure. We are working with leading end users, ecosystem partners and OEMs to bring the right connectivity, monitoring and management tools to make these solutions viable and operational.”

Application workload-aware intelligence for Hybrid flash arrays
Dot Hill Systems Corp. unveiled the latest version of its 'seriously smart' storage management software, Dot Hill RealStor™ version 2.0, which delivers application workload-aware intelligence for today's next-generation hybrid flash storage arrays. Easy-to-use RealStor 2.0 accelerates storage management operations to deliver data where customers need it, when they need it, in real time.
This innovative Dot Hill technology delivers optimal data placement using autonomic real-time tiering, providing the highest performance for the most critical application data, and supports the growth of virtual environments with a simplified user interface while improving data protection and system uptime. With patent-pending innovations that deliver better efficiency, higher reliability, enhanced cost-effectiveness, and fast access to "hot" data, RealStor 2.0 is now available across Dot Hill's entire line of storage systems equipped with the latest generation AssuredSAN® 4004 storage controllers.
For users ranging from IT professionals to Big Data analytics providers who require efficient, virtualised, real-time storage management for today's dynamically changing environments, RealStor 2.0 delivers unprecedented access to data with greater predictability, reliability and affordability. Unlike other storage management solutions, RealStor 2.0 offers three options to maximise flash utilisation: autonomic real-time tiering, LUN-pinning and read-caching. RealStor also improves data reliability with advanced snapshots, thin provisioning and faster rebuilds, resulting in new and more efficient storage management capabilities for hybrid arrays.
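
To make the real-time tiering idea concrete, here is a minimal conceptual sketch; it is not Dot Hill's RealStor implementation, and the per-block counters, flash tier capacity and simulated workload are invented purely for illustration. The principle is simply to track which blocks are accessed most often and keep those on the flash tier, demoting colder blocks back to disk.

# Conceptual sketch of autonomic real-time tiering (not RealStor itself):
# track per-block access counts and keep the hottest blocks on a small flash tier.
from collections import Counter

FLASH_CAPACITY_BLOCKS = 4          # invented: how many blocks fit on the flash tier
access_counts = Counter()          # per-block I/O counters
flash_tier = set()                 # blocks currently held on flash

def record_io(block_id: int) -> None:
    """Called on every read/write; rebalances the tiers as counts change."""
    access_counts[block_id] += 1
    rebalance()

def rebalance() -> None:
    """Promote the hottest blocks to flash and demote everything else to disk."""
    hottest = {blk for blk, _ in access_counts.most_common(FLASH_CAPACITY_BLOCKS)}
    for blk in hottest - flash_tier:
        flash_tier.add(blk)        # promote: copy block from disk to SSD
    for blk in flash_tier - hottest:
        flash_tier.discard(blk)    # demote: block falls back to disk

# Simulated workload: block 7 becomes "hot" and ends up on the flash tier.
for _ in range(10):
    record_io(7)
record_io(3)
print(sorted(flash_tier))          # -> [3, 7]
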
“For years, the world’s biggest names in the storage industry have leveraged Dot Hill’s proven RAID hardware in combination with our innovative software for advanced data management,” said Dana Kammersgard, president and CEO of Dot Hill Systems. “Our next-generation software storage management solution, RealStor 2.0, marks a major milestone in innovation for Dot Hill and provides new affordability and enterprise management capabilities for hybrid flash arrays.”
Providing a Hybrid Cloud foundation
NetApp, Inc. announced new software, services, and partnerships designed to simplify data management across clouds. The announcement included a powerful new version of the NetApp clustered Data ONTAP® operating system, Cloud ONTAP™, OnCommand® Cloud Manager and NetApp® Private Storage for Cloud. Combined, these patented and patent-pending offerings enable customers to embrace the hybrid cloud while maintaining control of their data and ensuring choice across a blend of private and public cloud resources.

“Hybrid clouds will be the backbone of IT today and tomorrow,” said George Kurian, executive vice president of Product Operations at NetApp. “We help enterprises maintain control of their data as they bridge their on-premises architecture with the cloud of their choice. The NetApp Data Fabric for the Hybrid Cloud is the right architecture for building enterprise-class hybrid clouds. Our software improves the economics, flexibility, and business impact of a customer’s existing infrastructure, while our cloud technology solutions give customers the needed confidence in our ability to help them navigate the future.”

As the hybrid cloud continues to gain traction within the enterprise, NetApp believes that the data management elements in different parts of a hybrid cloud must be connected and work together to form a coherent, integrated, and interoperable system. This "data fabric" enables data to be consistently managed and transported seamlessly from one part of the cloud to another. It also enables enterprises to apply consistent policies and services to the data in the hybrid cloud regardless of the application, technology or cloud infrastructure provider.

Cloud three
EMC Corporation announced the acquisition of three cloud technology companies: The Cloudscaling Group, Inc., Maginatics, Inc. and Spanning Cloud Apps, Inc. Each company brings to EMC deep expertise and powerful capabilities that enable EMC to extend the reach of its hybrid cloud vision across cloud infrastructure, storage and data protection. The acquisitions, together with today’s announcement of the EMC Enterprise Hybrid Cloud Solution, underscore EMC’s commitment to customers to deliver choice and agility in hybrid cloud deployments.

With the acquisitions, EMC broadens its cloud capabilities on three key dimensions: the ability to offer customers hybrid cloud solutions based on OpenStack technology, cloud choice with data mobility across multiple clouds, and new protection capabilities for “born in the cloud” applications and data.

 Cloudscaling is a leading provider of OpenStack® Powered Infrastructure-as-a-Service (IaaS) for private and hybrid cloud solutions, and is a founding member of the OpenStack Foundation. Its Open Cloud System (OCS) provides an operating system to manage compute, storage and networking in the cloud. The Cloudscaling OCS supports a new generation of cloud-based applications and provides the agility, performance and economic benefits of leading public cloud services, but it is deployed in customer data centers and, therefore, remains under the control of IT. The addition of Cloudscaling will provide EMC customers with multiple options for running private and hybrid clouds and will help EMC accelerate its infrastructure offerings based on OpenStack technology.

 Maginatics is a cloud technology provider offering a highly consistent global namespace accessible from any device or location, unlocking enterprise hybrid cloud choice and flexibility for EMC customers and partners through interfaces into a variety of private and public clouds. The addition of Maginatics extends EMC’s cloud data protection strategy by enabling unified data protection and management across disparate private, public and hybrid clouds. Maginatics technology also facilitates efficient data mobility across multiple clouds with data deduplication, WAN optimization, handling of large objects and multi-threading. EMC expects to integrate Maginatics technology with existing EMC data protection software, storage and services.

 Spanning is a leading provider of subscription-based backup and recovery for “born in the cloud” applications and data. Spanning solutions prevent business interruption due to data loss in Google Apps and Salesforce.com (a solution for Microsoft Office 365 will be available in the first half of 2015). The combination of EMC’s data protection portfolio and Spanning’s services uniquely positions EMC to help users confidently deploy data protection solutions across all applications and workloads, regardless of where the data is created or where the applications reside.


EMC Corporation also announced immediate availability of its EMC® Enterprise Hybrid Cloud Solution that integrates hardware, software and services from EMC and VMware to unite the strengths of private and public cloud. The EMC Enterprise Hybrid Cloud Solution enables IT-as-a-service (ITaaS) in as few as 28 days. Organizations no longer will have to make tradeoffs between the speed and agility of public cloud services and the control and security of private cloud infrastructure.

Keeping the public safe
Zadara™ Storage, the provider of enterprise-class storage-as-a-service (STaaS), and CloudSigma, a public cloud IaaS provider with advanced hybrid hosting solutions, revealed that the European Space Agency (ESA) is leveraging the Zadara Virtual Private Storage Array™ (VPSA™) service in the CloudSigma Zurich cloud, in the service of public safety. The agency is using satellite data stored on the award-winning Zadara VPSA service to gain a better understanding of earthquakes and volcanic eruptions and, by so doing, help reduce the loss of life and property as well as the economic impact caused by these natural disasters.
The Geohazard Supersites project's mission is to advance scientific understanding of the physical processes that control earthquakes and volcanic eruptions, as well as those driving tectonics and Earth surface dynamics. A major challenge for the project is the Super Site Exploitation Platform (SSEP) satellite data archive, which is large and growing as the site adds new activities and new data sources (i.e., satellites). ESA recognises cloud computing's potential to reach new audiences for its datasets and to spark new innovation. The Zadara VPSA service, offered by CloudSigma as Scale-Out Magnetic Storage, provides ESA's SSEP team with flexible and cost-effective storage, with the necessary performance and scalability to run its analyses and store the satellite data and computational results. Thanks to Zadara VPSA, in conjunction with compute resources from CloudSigma, these large data sets can remain online at all times and be processed in place.
“We need large amounts of high performance storage, because our data archive is very large and growing,” said Julio Carreira, capacity manager at ESA. “By having our choice of good performance, cost effective storage as well as high-performance SSD storage, all of which is elastic and on-demand, we’re able to quickly and cost effectively deploy our platform, whose goal is to use Earth Observation data to protect both life and property from earthquake and volcanic hazards.”
“We see the cloud as having great potential for assisting the scientific community and in expanding the realm of possibilities unleashed by deploying such a flexible platform. We are very excited to see the new products and services that will develop from these cloud data repositories powered by CloudSigma and Zadara Storage,” said Robert Jenkins, CEO of CloudSigma.
“This is a great example of the power of the cloud when it comes to providing a public service. The combination of Zadara Storage and CloudSigma has provided the European Space Agency a powerful, elastic, rapidly deployable platform on which to run its applications and store its data,” said Nelson Nahum, CEO of Zadara Storage. “Additionally, the platform is much more cost effective than purchasing on-site storage – ensuring public funds are used as effectively as possible.”
VirtualWisdom4.1
Virtual Instruments launched the first update to its fourth-generation VirtualWisdom solution. The new VirtualWisdom4.1 features an advanced applied analytics module and provides users with an enhanced dynamic user-interface, reporting and analytics tools. Its entity-centric user interface, which logically groups system-wide resources, physical devices or applications, has also been further developed.

With the goal of turning data into useful, usable answers, Virtual Instruments has further enhanced the platform with analytics capabilities based on insights gained from working directly with its portfolio of global enterprise clients when optimising their infrastructures. As a result, VirtualWisdom4.1 enables infrastructure managers to quickly address widespread performance management requirements, such as workload balancing, event investigation and trend correlation, across the entire IT environment.

“The advanced applied analytics tools are able to execute tasks in just a few seconds, tasks which prior to VirtualWisdom4, would have taken hours or even days. The 4.1 module was developed to gain an explicit understanding of the relationships between entities within IT, from the hypervisor all the way to the storage tier, the associated metrics, and the context in which the data was measured. This means that VirtualWisdom4.1 is ideally placed to deliver meaningful recommendations in the unique context of a specific customer’s environment,” said Skip Bacon, Chief Technology Officer at Virtual Instruments.

For today's virtualised and private cloud-based IT infrastructure, outdated tools, which only serve to provide a limited and imprecise understanding of the infrastructure, are not enough to guarantee availability, much less actually manage performance. Organisations require the ability to get answers quickly. VirtualWisdom provides IT decision-makers with a definitive understanding of how applications and infrastructure are performing together. Version 4.1 further enables IT managers to proactively balance the provisioning of applications on virtual machines for maximum application performance, with the confidence that systems will not slow down or fail.

Curing petabyte-scale data capacity management headaches
As organisations continue to grapple with the problem of how to leverage Big Data as they cross the Petabyte divide, Fujitsu has introduced what it says is the world’s first storage system designed to grow as big and last as long as the online data it hosts. By creating a storage eco-system with unlimited capacity that is capable of living forever, the FUJITSU Storage ETERNUS CD10000 helps organisations eliminate the major headaches associated with the exponential growth of data.
With the global amount of data generated and kept online continuing to multiply, organisations face three key problems: increased demands on scalability, greater complexity and cost, and physical limitations on the future ability to actually migrate data between storage systems without major disruption. Collectively, these factors dictate that businesses need a new approach to traditional storage as they move into the era of keeping tens of petabytes (PB) of data online, all the time. To put the sheer data volume into context, one PB of data is equivalent to approximately 100,000 hours of full HD 1080p video.
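As a quick sanity check on that comparison (our own back-of-the-envelope arithmetic, assuming roughly 10 GB per hour for full HD video, i.e. an average bitrate of a little over 20 Mbit/s):

# Back-of-the-envelope check of the "1 PB ~ 100,000 hours of 1080p video" comparison.
PB_IN_GB = 1_000_000        # 1 petabyte expressed in gigabytes (decimal units)
GB_PER_HOUR_1080P = 10      # assumption: ~10 GB per hour of full HD video (~22 Mbit/s)
hours = PB_IN_GB / GB_PER_HOUR_1080P
print(f"{hours:,.0f} hours of video per petabyte")   # -> 100,000 hours
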
The architecture of this new hyper-scale, distributed scale-out eco-system allows individual storage nodes to be added, exchanged and upgraded in an organic way without downtime, helping the entire system, and its data, to live forever. Backwards compatibility means newer nodes can work alongside older ones, guaranteeing investment protection in the new ETERNUS system.
The ETERNUS CD10000 system heralds a new era of extremely high capacity solutions for everyday data retention and management problems. At launch, the system supports capacities of up to 56 PB (56,000 TB) of online data through the aggregation of up to 224 storage nodes. Next year, Fujitsu will introduce updates allowing for far higher scalability.
Fujitsu has based the new enterprise-ready system on the open source-based storage software Inktank Ceph Enterprise from Red Hat, and has added functional enhancements to deliver comprehensive management, with the system operated through a single pane of glass. On a global level, the complete and comprehensive Fujitsu maintenance and support services enable customers, for the first time, to rely on the delivery of true enterprise-class service levels for a storage system based on open source software. ETERNUS CD10000 offers the unique ability to present a truly unified view of block, object and file storage in a single distributed storage cluster, reducing complexity, lowering storage management costs and optimising existing physical disk space for data storage.
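
Since the CD10000 is built on Ceph, the following minimal sketch (ours, not part of Fujitsu's management layer) shows how an application can store and retrieve an object in a Ceph cluster using the standard librados Python bindings; the pool name, object name and configuration file path are assumptions for illustration.

# Minimal, illustrative use of Ceph's librados Python bindings: write an object
# to a distributed Ceph cluster and read it back. Pool/object names and the
# ceph.conf path are assumptions for this sketch.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # cluster configuration (assumed path)
cluster.connect()
try:
    ioctx = cluster.open_ioctx('archive-pool')           # hypothetical storage pool
    try:
        # Ceph distributes and replicates the object across the cluster's nodes.
        ioctx.write_full('dataset-0001', b'...payload bytes...')
        data = ioctx.read('dataset-0001')                 # read it back from wherever it landed
        print(f"read {len(data)} bytes")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
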

Ethernet connected drive
Seagate Technology unveiled what is thought to be the world's first Ethernet-connected drive, the Seagate Kinetic HDD. Based on the Seagate Kinetic Open Storage platform, Kinetic HDD reduces the total cost of ownership by combining an open source object storage protocol with Ethernet connectivity, eliminating multiple layers of legacy hardware and software infrastructure and simplifying cloud storage architectures.

Equipped with Ethernet connectivity, Kinetic HDD lowers capital expenditure associated with hardware by eliminating an entire storage server tier, reducing capital equipment costs by up to 15 per cent. Eliminating a storage server tier also reduces rack-level power consumption.
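
To illustrate the model, the conceptual sketch below shows an application addressing a drive directly over IP and speaking a simple key/value protocol to it, with no storage server in between. This is not Seagate's actual Kinetic client API: the KineticStyleDrive class, its methods and the port number are hypothetical stand-ins for how such a conversation might look.

# Conceptual sketch of the Kinetic idea: an application talks key/value directly
# to a drive over Ethernet, bypassing an intermediate storage server tier. The
# class, its put/get methods and the port number are hypothetical, for illustration.
import socket

class KineticStyleDrive:
    """Hypothetical client for a drive that speaks a key/value protocol over TCP."""

    def __init__(self, host: str, port: int = 8123):   # port number is an assumption
        self.addr = (host, port)

    def put(self, key: bytes, value: bytes) -> None:
        # The real protocol encodes structured PUT messages; this sketch only
        # shows the shape of the direct application-to-drive conversation.
        with socket.create_connection(self.addr) as conn:
            conn.sendall(b'PUT ' + key + b' ' + value)

    def get(self, key: bytes) -> bytes:
        with socket.create_connection(self.addr) as conn:
            conn.sendall(b'GET ' + key)
            return conn.recv(65536)

# The application addresses the drive by IP address, with no storage server tier:
drive = KineticStyleDrive('192.0.2.10')
drive.put(b'object-0001', b'...object bytes...')

In a full deployment, the intention is that object storage software, rather than a dedicated storage server tier, plays this client role on behalf of applications.
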