THIS LAST YEAR brought both technology and market changes driven by many forces. On the physical side, the year started with the bang created by the Chelyabinsk meteor. While it shattered windows and set off car alarms, greater reverberations in information technology were caused by Edward Snowden’s revelations in the spring. They upended established views of what was secret and who was safe. Technical innovations shifted priorities, and cash reserves from an uncertain market fuelled private exits for a few companies.
The year ahead will be shaped by a set of forces that are harder to define, but might be closer to the revelations occurring in quantum physics in 2013. The confirmation of the Higgs boson and the subsequent Nobel Prize for its theoretical identification are milestones in the study of the fundamental forces of nature. The forces driving markets and technologies are just as mysterious, and, like quarks, can be strange and charming.
While we don’t have a Large Hadron Collider for IT forces, for this year’s preview, we will attempt to align them with the modern physical forces and show that, as they govern the physical world, shifts in IT are pushed by a common set of factors. We have yet to land on a grand, unifying IT theory, but we think that we’ve got enough to guide us in the year to come. These four forces are:
• Mobility – The driving need to work beyond the confines of fixed locations has created both an explosion of new device types and a need for wireless connectivity.
• Utility – An overarching force that attracts activities like application development and cloud adoption.
• Efficiency – The core force that has converged technologies, improved data centre operation and driven cost controls.
• Data gravity – The tremendous pull created by the value and volume of data is one of the more far-reaching forces. It can drive investments in security to protect, and capacity to contain, that precious commodity.
The combination and interaction of these forces will shape IT decision-making, as well as vendor product and service plans. With this in mind, we present a few of the ideas that we see emerging from the dark matter of the universe in 2014. They are many and varied, and we’ve had to break them into three parts to cover them all. Here’s our first set, covering mobility, storage, security and networking.
The death of ‘bring your own device’
As Pluto has fallen from the ranks of planets, so will BYOD drop as a primary focus for IT. It won’t go away, and few will mourn its passing, but it turns out that there are many projects of equal magnitude, and the concern about OPMD (other people’s mobile devices) is fading. Mobility’s force has been straining IT project plans and budgets for some time, and there has been active debate about the role of BYOD activities in the enterprise. In 2014 the BYOD debate will become a moot point, as organisations that have struggled with device management move on to thinking more clearly about managing the data that’s destined for them.
We’ve already seen a shift in emphasis from mobile device management (MDM) products to mobile application management (MAM) tools. Mobility has provided the energy to start this, with utility forces in the form of user demand causing it to gain speed. Users have been reluctant to give over complete device control to their employers, and have become ever more skilled at gaining access to the data needed to get their jobs done, wherever they may be.
As the shift to mobile-based access is happening with users, enterprises will be placing a greater emphasis on applications that protect data while getting it to users. Data gravity has kept application development close to the sources of key data, often frustrating mobile users with poor performance or access. While few enterprises will be able to apply a mobile-first application development approach, the availability of better tools and platforms will allow data to be brought closer to users through replication, advances in CDNs and more stateless application design. It’s a matter of working around the access issues, rather than fully defying data gravity, but platform vendors will take up this complex task to allow users to get the performance they need.
An emphasis on mobile applications from within the enterprise will be matched by the rising wave of applications available directly to end users. Utility pressures will push users to try out all of the possible options. It is the users who know best what they need to get work done, and patience is not a virtue here. After all, in the age of ‘on demand,’ most users can’t wait for traditional enterprise development cycles. Corporate app store use will expand in an attempt to corral the growth of the number of applications in use, but the greater threat is the load that these many and varied mobile apps place on already strained enterprise application infrastructures. Enterprises will have to address this growth to avoid being overrun with servers pinged to death from chatty mobile apps.
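One common defence against chatty mobile apps pinging servers to death is throttling requests at the gateway. As an illustration only (the class, rates and behaviour here are our sketch, not any vendor’s API), a token-bucket limiter can be written in a few lines of Python:

```python
import time

class TokenBucket:
    """Sketch of a token-bucket rate limiter: an API gateway can use one
    per client to absorb bursts from chatty mobile apps while capping the
    sustained load reaching back-end servers. Rates are hypothetical."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # sustained requests allowed per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on time elapsed, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # rejected; a well-behaved client backs off

# A burst of 100 near-instantaneous requests from one app:
bucket = TokenBucket(rate_per_sec=5, capacity=10)
allowed = sum(bucket.allow() for _ in range(100))
```

With a burst capacity of 10, only about 10 of the 100 burst requests get through; the rest are shed before they reach the application infrastructure.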
And there is more mobile-driven disruption with which organisations will have to deal. Wearable computing will take its first tentative steps in 2014. Enterprises will have to address all of the issues of privacy, courtesy and data access that they should have addressed when mobile devices first walked through the corporate lobby. ‘Wearables’ will make this seem more surreptitious, but it’s not a big step from the existing recording and storage capabilities. Utility forces will again be at play, with users clamouring for simpler technology use, and governance of technology’s impacts will bring legal and human resources into deep interaction with IT. Technology teams will have to think ahead to manage this new concern, since knee-jerk bans of whole device classes have never worked in the past.
Storage remains ‘boring’
While there are many transformations being worked in storage, the pull of data gravity is keeping the rotation of the various players in a steady orbit, preventing any revolutionary transformations. There is simply too much valuable data in existing storage environments to allow a wholesale migration. There have been bright comets of new technology, pulled in by data’s great mass, but few are willing to step into riskier technologies in what remains a risk-averse sector, and the probability of a ‘Deep Impact’-style technology asteroid upsetting this balance is low.

The coming year will see ever deeper penetration of solid-state storage, and although we believe 2014 will be a ‘make or break’ year for many of the all-flash array (AFA) startups and specialists, a wholesale transition to all-flash systems won’t happen outside of a few niche applications. The cost advantage of rotating media will keep it an efficient path to the capacity that data growth demands. Efficiency will continue to press the adoption of more intelligence within storage, especially as we expect another year of minimal growth in storage budgets. Use of automated tiering and intelligent archiving will continue to grow, as will interest in object-based storage, although overall adoption will remain confined to niche applications and use cases.
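At bottom, the automated tiering mentioned above is a simple economic trade: spend scarce flash on the hottest data and let rotating media absorb the rest. A minimal sketch of such a placement policy, with invented object names, sizes and access rates:

```python
def tier_placement(objects, flash_capacity_gb):
    """Toy tiering policy: rank objects by access frequency and fill the
    flash tier with the hottest data until capacity runs out; everything
    else lands on rotating disk. Field names are illustrative only."""
    ranked = sorted(objects, key=lambda o: o["accesses_per_day"], reverse=True)
    placement, used = {}, 0.0
    for obj in ranked:
        if used + obj["size_gb"] <= flash_capacity_gb:
            placement[obj["name"]] = "flash"
            used += obj["size_gb"]
        else:
            placement[obj["name"]] = "disk"
    return placement

# Hypothetical workload mix:
objs = [
    {"name": "oltp_db",      "size_gb": 200, "accesses_per_day": 9000},
    {"name": "mail_archive", "size_gb": 800, "accesses_per_day": 40},
    {"name": "vm_images",    "size_gb": 300, "accesses_per_day": 1200},
]
placement = tier_placement(objs, flash_capacity_gb=500)
```

Here the busy database and VM images fill the 500GB flash tier, while the rarely touched mail archive stays on cheaper rotating capacity; real arrays apply the same logic continuously and at block or sub-LUN granularity.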
What has been a nascent convergence movement will gain strength. Converged platforms will continue to gain adherents on a quest for greater operational simplicity. Backup will continue a slow but steady transformation as organisations aim to modernise their data protection infrastructure, and seek more integrated and service-oriented approaches. In backup and storage, more broadly, the centre of control will continue to move to the VM admin. Even though there’s no consensus on a definition, expect ‘software-defined storage’ to dominate the marketing narrative. We expect interest to coalesce around open-source-based approaches to storage, both cloudy and traditional.
Molecular clouds can spawn stars, but cloud storage activities are starting to provide abstractions that break dependence on traditional data linkages. They are still in their early stages, but in 2014, the efforts will bloom into better options through activities such as the OpenStack Swift project. It won’t escape the pull of the vast volumes of enterprise data, but it will upset the orbits of some vendors.
Storage networking will see the effects of convergence, as well. Cloud systems that require the flexibility of the converged interconnect will take a larger piece of this segment as deployments grow in size. But this isn’t to say that traditional interconnects are disappearing. The force of efficiency isn’t great enough to overcome data gravity’s pull generated by enterprise SANs. Because of this, Fibre Channel (FC) technologies will see another year in which they won’t die. While growth will be modest, at best, and the ‘Gen 6’ 32G interfaces will be a pipe dream for most, FC will remain the primary vehicle for storage data transport for the enterprise.
Sometimes stupid is smarter
The security revelations of the past year may have users clamouring to return to the days of a simpler and less-smart smartphone. What’s the use of being able to do everything on one’s mobile device if all of the wrong people wind up with your data? Data gravity’s effects will have users retreating to habits that protect this valuable commodity. A lack of dumber devices on the market will have users grabbing apps that will lobotomise existing ones, while manufacturers start to rethink product capabilities and protections.
But that’s not the only security shift that the New Year will present. The continuing slide of computing work from traditional desktops to mobile devices will mean that the large investment in endpoint protection that’s made today will have to be extended to protections built in to applications and data sources. Anti-malware and configuration management tools will be challenged by the increasing futility of trying to deliver platform security.
This comes at the same time that security management processes will make a shift from providing answers to asking questions. ‘Does this level of activity make me look insecure?’ The growth of analytics is changing the relationship with security management. The push of efficiency is creating products that digest much more information, but don’t provide hard answers. Knowing that 87% of your Windows desktops are patched is helpful from a process perspective, but doesn’t provide a broader picture of security posture. The year ahead will see administrators presented with the ‘Um, this doesn’t look right. What do you think?’ report. A more consultative interaction can raise the level of sophistication of the information provided, but it also requires that administrators be more actively engaged in security outcomes.
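The shift from hard answers to questions can be illustrated with a toy baseline check: rather than reporting a fixed percentage, the tool compares today’s figure against its own history and surfaces outliers for the administrator to judge. A hedged sketch (the metric, numbers and threshold are ours, not any product’s):

```python
from statistics import mean, stdev

def flag_unusual(history, today, threshold=3.0):
    """Sketch of the 'this doesn't look right' report: flag a metric
    whose value today sits far outside its historical baseline, measured
    in standard deviations. The administrator decides what it means."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Hypothetical daily failed-login counts for the past week:
logins = [210, 195, 220, 205, 215, 198, 207]
normal = flag_unusual(logins, today=209)  # within the usual range
weird = flag_unusual(logins, today=950)   # worth a question to the admin
```

The point is the interaction model: the tool doesn’t declare a breach, it asks ‘does this level of activity make me look insecure?’ and hands the judgement to an engaged administrator.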
SDN is revealed as an elaborate hoax
It will be revealed in the coming year that, contrary to one of SDN’s fundamental assumptions, we don’t have to rebuild networks from the bare metal up. We’ve always had the power to make the needed changes much more incrementally – it’s just that no one believed it could be done. Clicking our ruby console cables together three times would have given us the APIs of which we’d always dreamed, but now that various networking vendors have started to deliver SDN functionality, there’s no need. After waking from 2013’s dreaming, SDN will truly arrive in 2014. Major networking players will demonstrate a heart and a brain, as well as practical implementations for physical network control. That won’t obviate the need for overlay SDN approaches, but it will tackle some of the issues of scale and scope that have hindered past deployments. OpenFlow options will proliferate to more gear and be suitable to a wider audience.
Virtual networking will be propelled past the limited use it’s received, driven by utility to simplify the deployment of computing infrastructure. We will pass the tipping point where initial connectivity needs will be met virtually as part of application orchestration. By year’s end, those without the ability to automate networks will be seen as laggards, rather than the mainstream they are today. That will accompany a need for greater systems management integration, but that level of sophistication will have to wait for another year before widespread adoption. The networking capabilities within cloud services will also advance, but this will be an area where network operator services will still struggle to keep up with this newfound flexibility and will limit inter-data centre and hybrid cloud creativity.
Out of the mists
We’ve covered the devices in our hands and the elements of the infrastructure that feeds them. Time to move into the clouds, across interconnects and off to the underpinnings of data centres in this next instalment. Here are our thoughts on cloudy futures, as well as facilities that are heating up and others that are cooling down.
Enterprise users are increasing consumption of cloud-based resources, but many are leaping without much of a look or even a net. Those who are trudging through a cold winter may look longingly at what some are doing and what others are planning with all of that data centre waste heat.
I forgot to turn off the cloud...
IT budgets will be blown when someone forgets to shut off the cloud when everyone goes home for the evening. While many organisations are starting to ramp up cloud use, poor resource management controls will land more than a couple of them in financial hot water in 2014 as their consumption accelerates by accident or design. Remember the telecom bill shocks of the previous century? Those who don’t are doomed to repeat them with cloud deployments that are driven by utility forces to make consumption easy. One mangled API call could spin up as many instances as shares were traded by errant programmed trading systems in 2013, without the friendly folks at the SEC to pick up the pieces.
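A first-order guardrail against that kind of bill shock is simply projecting the current burn rate forward against the remaining budget. A minimal sketch, with hypothetical instance types, rates and counts (not any provider’s pricing or API):

```python
def forecast_overrun(instances, hourly_rates, budget, hours_left):
    """Sketch of a cloud-spend guardrail: given currently running
    instance counts and per-hour rates (both invented here), project
    the bill to the end of the period and raise an alarm before the
    budget is blown rather than after the invoice arrives."""
    burn = sum(hourly_rates[itype] * count
               for itype, count in instances.items())  # $/hour right now
    projected = burn * hours_left
    return projected, projected > budget

rates = {"m1.small": 0.06, "m1.large": 0.24}   # $/hour, illustrative
running = {"m1.small": 40, "m1.large": 25}     # left running overnight...
spend, alarm = forecast_overrun(running, rates,
                                budget=1500.0, hours_left=720)
```

At an $8.40/hour burn rate, a month of forgotten instances projects to roughly $6,000 against a $1,500 budget, which is exactly the dashboard alert the service offerings described below will sell.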
Concern about managing costs will lead to service offerings that will give enterprises the dashboard functionality necessary to keep costs and performance visible. Those capabilities will rise out of CDN providers as they look to become the system of record for application and provider performance. Their ability to provide traffic direction will give them the ability to become de facto cloud brokers, but they’ll face stiff competition from platform vendors looking to take on that same role. More advanced traffic direction and distributed applications will become more necessary as the balkanisation of the Internet picks up. Another effect of the Snowden revelations will be that more nations will look to gain control over Internet connectivity in their regions. Whether it’s in-region root servers or deeper traffic control, these attitudes will move out of the fringes and could start to affect cloud operations.
Which is not to say that cloud use is without its benefits. We’ve been pondering the amount of revenue that could be generated by renting out freed-up data centre floor space for functions and events. As more capacity moves to hosted and cloud venues, there will be ever more square footage to be repurposed. If there’s to be a shift to CMOs driving IT budgets, as some have posited, can this sort of thing be far behind? Those forces of efficiency will be pushing for effective utilisation of all assets.
For those CIOs that hold the financial reins, the coming months will be spent in much closer collaboration with the CFO. An emphasis on better metrics will be driven by a need for agility in selecting best execution venues for the various applications the organisation fields. This will lead to the adoption of internal and external cloud brokers and a push for greater analytics.
Next year is still the year of PaaS
After all of the predictions of next year being the year of platform as a service, we’re in enthusiastic agreement once again. This will not be the year that PaaS soars to wild success. While its attractions are large and developers see the value in its charms, the market is still faced with an ‘I’ll adopt it right after I finish this next app’ attitude. As with many aspects of application development, the conflict between utility and efficiency is holding off a move to new technology adoption. PaaS could offer more efficient application delivery, but developers find it expedient to use the tools and techniques with which they’re familiar.
Two legs good, four legs bad
What this year will see is the death of multi-tenant-only cloud application delivery. Many providers have been holding to the idea of a single way of delivering their services. While we’ve been harangued about the dangers of ‘false clouds,’ concerns about performance and security will push more providers to offer single-tenant versions of their cloud-based products. That old demon data gravity simply creates too many concerns about noisy and nosy neighbours for some users. Their worries about some clients being more equal than others will encourage them to own their own means of production. This won’t deter those comfortable with managing multi-tenancy, but it will allow the more cautious to take a first step on their cloud journeys.
The moon and the sun
The moon is much smaller than the sun, but its proximity to the earth means that it’s powerful enough to drive the tides. In 2014, we’ll see the small but powerful ARM processors establish a greater presence, trading on their system size and power efficiency. The power dissipation of these much denser configurations will make them resemble the sun, rather than the cooler moon, however. In an interesting paradox, that will limit the deployment densities possible for low-power servers, except in those facilities that have the power and cooling densities to handle them. Some will experiment with using these dense configurations as an alternative to virtualisation, an interesting option where there are concerns about multi-tenancy. For highly parallel workloads, the power efficiency will be attractive, but will remain a distant challenger to conventional processors.
One of the effects of the forces of efficiency will be to take the greater densities that are becoming possible in server configuration and break apart traditional architectures to unlock higher performance. Server disaggregation allows higher levels of utilisation and greater systems flexibility as distributed storage mechanisms put more capacity closer to compute engines. Storage proximity becomes a greater reality with cost declines in SSDs and also conveys the additional benefit of cutting down on storage-generated east-west traffic flows within the data centre. Added to this is the improving position of silicon photonics and its ability to reduce the impact of distance on server construction. It’s a combination that will draw users beyond the hyperscale types.
Hot tubs in every data centre
All the power density created by denser architectures will need better cooling, and 2014 will have both data centre operators and systems builders up to their necks in hot water, in some cases literally. In a move to reduce the energy requirements for data centre cooling, operators will be relying less on chillers in favour of raising water temperatures and using more energy-efficient cooling mechanisms, such as evaporative and direct/indirect economisation. It requires re-engineering heat transfer expectations, but the energy savings are substantial. With exit water temperatures at better than 95° F/35° C, we can expect some very toasty systems administrators. IBM has been operating an air-cooled data centre in Zurich that heats a nearby pool with waste heat. What’s to keep other data centres from adding a great new perk for the operations team?
Liquid cooling will make its first extension from the aisles and into the chassis in 2014. Where it was once the realm of exotic components and extreme performance, direct liquid cooling will make it into more mainstream applications. Bringing cooling water to the components generating the heat will require additional plumbing, but the savings in energy costs from both more efficient heat transfer and reduced air handling will make it worthwhile for those applications that have headed to higher energy dissipation. The change in high-performance computing deployments will lead the wave, with broader application following in later years. There will be more warm water for the hot tub!
Data centres through the looking glass
Environmental management won’t be the only thing on the minds of data centre operators in the year ahead. Multi-tenant facilities will be looking to bulk up on a steady diet of fibre, both lit and dark, to become more attractive. That glass will link facilities to clients as interest grows in creating execution venues that feel like they’re close to home, but are managed by others. Metro fibre assets are the first of these, with longer distance connections and increasing carrier densities following.
At the same time as ties to clients increase, broader connectivity options will also begin to play a larger role in data centre competitiveness. That competition is creating a collision in approaches as the Internet exchange model that’s been popular in Europe makes landfall in the US and carrier densities increase in facilities that haven’t traditionally offered those options. The Open IX Association is signing up members, while larger operators are creating inter-facility links and adding capacity and carriers. Customers will benefit the most because they’ll be happy to see more light at the end of the fibre tunnel.
The data revolution will not be televised
In our final installment, we’ll head into information management, analytics and social business. While the soporific effects of dealing with digital governance might be debatable, the revolution in analytics and data handling is much less so. There are changes afoot that will expand the availability of analytics, increasing its impact and reach.