VMWARE RIGHTLY HOLDS THE VIRTUALISATION CROWN in the modern era: the firm started developing the basis of the modern hypervisor in 1998. Within 3 years, however, open source versions of the technology were starting to enter the mainstream. In 2003, the University of Cambridge Computer Laboratory created the first version of Xen, probably still the most widely deployed open source virtualisation platform. Microsoft joined the party 5 years later, still a full decade behind VMware, which in many ways explains the market dominance enjoyed by the early entrant.
The initial thrust of virtualisation was to increase the efficiency of computing by allowing resources to be used more effectively. Instead of a server using only 20% of its processing power to run a single application, in a virtualised environment the same single server can effectively run 4 separate virtual servers and application sets to achieve 80%+ utilisation of resources. In many ways, the technology has moved in lockstep with processor designs, which now pack multiple computing cores into the same physical CPU.
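As a rough sketch of that consolidation arithmetic, the figures below are illustrative assumptions rather than measurements:

```python
# Illustrative consolidation arithmetic: the figures are assumptions, not benchmarks.
hosts_before = 4            # physical servers, one application each
utilisation_before = 0.20   # each host roughly 20% busy

consolidation_ratio = 4     # VMs packed onto a single virtualised host
utilisation_after = utilisation_before * consolidation_ratio

print(f"Before: {utilisation_before:.0%} utilisation across {hosts_before} hosts")
print(f"After:  {utilisation_after:.0%} utilisation on 1 host ({consolidation_ratio}:1 consolidation)")
```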
The examples of how virtualisation has improved effectiveness and reduced costs are numerous, from simple things like reducing the space needed in data centre racks to effectively enabling modern cloud computing by allowing IT resources to be scaled and shared in an “on-demand” fashion across multiple tenants.
VMware has conducted many independently validated studies that show TCO reductions averaging around 67%, with payback times of less than 6 months for organisations switching to virtualisation.
VM challenges
However, virtualisation does bring some challenges along with the benefits. One of the most common is “sprawl”. In the “bad old days” of computing, setting up and deploying a server in an enterprise IT environment was a complex, process-driven task. With server virtualisation, admins can spin up a new server with just a few clicks. For some organisations, this has led to an erosion of the discipline around server provisioning. In some cases, IT departments simply forget about a VM, while quick and dirty provisioning leads to carelessness around security and patching. There are no definitive studies into the prevalence of VM sprawl, but over the last few years a number of vendors have launched tools to help better manage VM provisioning and life cycle.
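A minimal sketch of the kind of inventory check such tools automate, written against the libvirt Python bindings for a KVM/QEMU host; the connection URI and the idea of flagging powered-off guests as sprawl candidates are assumptions for illustration:

```python
# Hedged sketch: list all guests on a local KVM/QEMU host and flag powered-off ones.
# Requires the libvirt Python bindings (pip install libvirt-python).
import libvirt

conn = libvirt.open("qemu:///system")  # assumed local hypervisor URI
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off (review: possible sprawl)"
        print(f"{dom.name():30s} {state}")
finally:
    conn.close()
```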
Another issue is backup and recovery. Virtualisation changes many of the common requirements of a legacy data backup and recovery process. Some of the newer backup products from vendors such as Veeam and Commvault are more “VM-centric”, but many organisations still fail to back up virtual servers to the same standard as their physical equivalents.
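To give a flavour of what “VM-centric” means in practice, the sketch below takes a disk-only external snapshot of a guest through libvirt so a backup job can copy the base disk image while the VM keeps writing to an overlay. The guest name and snapshot XML are hypothetical, and commercial products add far more on top (changed block tracking, application quiescing and so on):

```python
# Hedged sketch: disk-only external snapshot as a quiesce point for an image-level backup.
# Guest name "web01" and the snapshot XML are illustrative assumptions.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-backup</name>
  <description>Quiesce point for image-level backup</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")  # hypothetical guest

flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
         | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC)
snap = dom.snapshotCreateXML(SNAPSHOT_XML, flags)
print(f"Snapshot {snap.getName()} created; the base disk image can now be backed up.")
conn.close()
```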
The inherent success of compute virtualisation has prompted the ICT industry to ask: why can’t we virtualise other resources such as networking and storage? This is the next stage of the concept, and it potentially brings another set of challenges.
Virtualise everything?
The dominance of VMware, especially given the late arrival of a credible rival in the shape of Microsoft, has effectively made vSphere the hypervisor of choice for larger enterprises. In many ways this has been beneficial to the wider industry, in a similar fashion to the way Cisco’s dominance in networking, HP’s leadership in laser printers and Intel’s in CPUs made compatibility and standardisation much easier. However, when it comes to software defined networking and storage, there is no market leader or defined standard. This highly fluid environment makes it harder for adopters to be sure that the technology stack they are deploying against, or developing for, is going to have longevity.
All of the large ICT vendors are talking about software defined strategies and technologies, and the last year has seen a series of high profile acquisitions including Nicira, Xsigo and ScaleIO. Many vendors are striving to get a foothold in what may be a game changer for the storage status quo. One of the few “standards” that aims to at least create a benchmark of interoperability between competing technologies is OpenStack.
OpenStack evolved from a joint project between Rackspace, a large hosting provider, and NASA to give the space agency a way to run cloud computing services on standard hardware. As both an early adopter of virtualisation and a consumer of huge volumes of compute, NASA offered a great environment in which to test many of the concepts, and with the help of an eager open source community, OpenStack grew and split into projects covering areas such as Compute (Nova), Object Storage (Swift) and Networking (Neutron).
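As an illustration of how those projects surface to a consumer, the sketch below uses the openstacksdk Python library to list servers (Nova), object containers (Swift) and networks (Neutron); the cloud name “mycloud” in clouds.yaml is an assumption:

```python
# Hedged sketch using openstacksdk (pip install openstacksdk).
# Assumes credentials for a cloud named "mycloud" are defined in clouds.yaml.
import openstack

conn = openstack.connect(cloud="mycloud")

print("Nova (Compute) servers:")
for server in conn.compute.servers():
    print(f"  {server.name}: {server.status}")

print("Swift (Object Storage) containers:")
for container in conn.object_store.containers():
    print(f"  {container.name}")

print("Neutron (Networking) networks:")
for network in conn.network.networks():
    print(f"  {network.name}")
```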
Software defined storage
Data storage is particularly well suited to virtualisation, especially as the volume of data that organisations routinely store is growing at the fastest rate in history. Other external drivers, such as big data analytics and the growth of multimedia content, are fuelling a desire to find new ways to reduce both the cost and complexity of data storage.
The move to software defined storage and wider virtualisation technologies has the potential to have as much impact as compute virtualisation. For many organisations, data is still confined to numerous information silos, which are difficult to manage and secure.
In addition, older legacy technologies such as fibre channel and inflexible direct attached storage are wasteful in terms of utilisation and energy consumption. The final issue is performance. For certain processes, as data volumes grow and spread across multiple locations, the physical limitations of spinning disk storage cause a performance bottleneck. Solid state disks (SSDs) and flash can be deployed as a cache to help eliminate these performance issues while scaling in line with growing volumes.
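The caching idea itself is simple enough to sketch: keep the hottest blocks on the fast tier and fall back to disk on a miss. The toy example below uses a least-recently-used policy and is purely illustrative, not any vendor’s implementation:

```python
# Toy illustration of a flash/SSD read cache in front of slower spinning disk.
# An LRU policy keeps the most recently read blocks on the fast tier.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity          # number of blocks the "flash" tier holds
        self.backing = backing_store      # dict standing in for the spinning-disk tier
        self.cache = OrderedDict()        # block id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # cache hit: served from the fast tier
            return self.cache[block_id]
        data = self.backing[block_id]          # cache miss: slow path to disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block
        return data

disk = {i: f"block-{i}" for i in range(1000)}
cache = ReadCache(capacity=64, backing_store=disk)
print(cache.read(42))   # miss: fetched from disk, then cached
print(cache.read(42))   # hit: served from the fast tier
```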
Widespread adoption of storage virtualisation is still a way off, but many vendors, especially those that arrived in the era of virtualisation, are building storage technologies that adhere to standards such as Ethernet, IP and OpenStack. In addition, as networking technologies become more virtualised, the possibility emerges that end-to-end ICT delivery will become more flexible and elastic.
With compute, networking and storage all virtualised and software controlled, organisations will potentially be able to significantly reduce TCO across the entire IT stack. In addition, future cloud services that turn ICT into a pure OPEX cost with per-minute billing models may well become possible, which really would be the start of a new industrial age.