IT directors must understand the economics of big software

By Mark Baker, OpenStack Product Manager at Canonical

Business technology is under tremendous pressure, and most organizations are ill-equipped to deal with the challenges and opportunities that are arising. Software as a Service, big data, cloud, scale-out, artificial intelligence, containers, OpenStack and microservices are not just buzzwords; they are disrupting traditional business models. While these terms and technologies represent a new world of opportunity, they also bring complexity that most IT departments are not prepared to manage. This has become known as the era of Big Software.
 
To address the realities of Big Software, companies need to think differently. Traditional enterprise applications were monolithic in nature, procured from best-of-breed providers and installed on a relatively small number of large servers. Modern application architectures and capacity requirements now force companies to roll out many applications, components and integration points spread across potentially thousands of physical and virtual machines, hosted on premises or in a public cloud. Organizations must have the right mix of products, services, and tools to match the requirements of the business, yet many IT departments are tackling these challenges with approaches and tools developed over a decade ago.
 
Some IT directors have turned to public cloud providers like AWS (Amazon Web Services), Microsoft Azure, and GCP (Google Cloud Platform) as a way to offset much of the CAPEX (capital expense) of deploying the hardware and software needed to bring new services online. They wanted to consume applications as services and shift most of the cost to OPEX (operating expense). Initially, public cloud delivered on the CAPEX-to-OPEX promise, Moor Insights & Strategy analysts state, with cloud providers touting capital reductions upwards of 45% in some cases. However, organizations needing to deploy solutions at scale found themselves locked into a single cloud provider, exposed to fluctuating pricing models, and unable to take advantage of the economies of scale that come from committing to a platform. Forward-thinking IT directors realized they must disaggregate their current data center environments to support scale-out private or hybrid cloud environments.
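To make the CAPEX-to-OPEX trade-off concrete, the minimal sketch below shows how a one-off hardware purchase compares, once amortised, with consuming the same capacity as a recurring cloud bill. All figures and the helper functions are illustrative assumptions, not published pricing or benchmarks.

```python
# Illustrative CAPEX-to-OPEX sketch. All numbers are assumptions for
# demonstration only; they are not benchmarks or published pricing.

def amortised_monthly_capex(purchase_price, useful_life_years):
    """Straight-line amortisation of an upfront hardware purchase."""
    return purchase_price / (useful_life_years * 12)

def capex_reduction(on_prem_capex, remaining_capex):
    """Fraction of capital expense eliminated after moving workloads to cloud."""
    return 1 - remaining_capex / on_prem_capex

if __name__ == "__main__":
    monthly = amortised_monthly_capex(purchase_price=480_000, useful_life_years=4)
    print(f"Amortised on-prem hardware: ${monthly:,.0f}/month")
    # If moving part of the estate to public cloud leaves 55% of the original
    # capital outlay on premises, the capital reduction is 45%, in line with
    # the figure quoted above (again, an assumption for illustration).
    print(f"Capital reduction: {capex_reduction(480_000, 480_000 * 0.55):.0%}")
```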
 
The Economics: Challenges & Opportunities with OpenStack
 
OpenStack is a way for organizations to deploy open source cloud infrastructure on commodity hardware. Customers look at OpenStack as an opportunity to reduce the cost of application deployment whilst increasing the speed with which they can bring new application services online. The cost to deploy OpenStack is relatively low, but the ongoing investment in maintenance, labor, and operations can be high, as some OpenStack solutions are unable to automate basic tasks such as updating and upgrading the environment. The cost of staff experienced enough to operate OpenStack at scale is also high.
 
One of the main challenges with OpenStack is determining where the year-over-year operating costs and benefits of managing the solution reach parity, not just with public cloud, but with software licensing and other critical infrastructure investments. Our experience working with many of the largest OpenStack deployments is that in a typical multi-year deployment, labor can make up more than 40% of the overall costs, hardware maintenance and software licence fees combined account for around 20%, while hardware depreciation, networking, storage, and engineering make up the remainder, according to HDS. Whilst the main advantage of moving to the public cloud is still the short-term reduction in cost per headcount and the speed of application deployment, unhindered by organisational inflexibility, year-over-year public cloud expenses can be greater than those of an automated on-premises OpenStack implementation.
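To make the parity question concrete, here is a minimal break-even sketch. The cloud spend, its growth rate, and the OpenStack CAPEX and OPEX figures are illustrative assumptions; only the rough cost shares quoted above (labor above 40%, hardware maintenance plus licences around 20%) inform how the OpenStack OPEX is described.

```python
# Illustrative break-even sketch: cumulative public cloud spend versus an
# on-premises OpenStack deployment. All numbers are assumptions for
# demonstration only, not benchmarks or published pricing.

def cumulative_cloud_cost(years, annual_spend=1_200_000, growth=0.15):
    """Public cloud: pure OPEX that tends to grow with usage."""
    total, spend = 0.0, annual_spend
    for _ in range(years):
        total += spend
        spend *= 1 + growth
    return total

def cumulative_openstack_cost(years, upfront_capex=900_000, annual_opex=800_000):
    """On-premises OpenStack: upfront CAPEX plus a flatter annual OPEX.
    The OPEX split loosely follows the shares quoted above: more than 40%
    labor, roughly 20% hardware maintenance and licences, and the rest
    depreciation, networking, storage and engineering."""
    return upfront_capex + annual_opex * years

if __name__ == "__main__":
    for year in range(1, 6):
        cloud = cumulative_cloud_cost(year)
        openstack = cumulative_openstack_cost(year)
        cheaper = "OpenStack" if openstack < cloud else "public cloud"
        print(f"Year {year}: cloud ${cloud:,.0f} vs OpenStack ${openstack:,.0f}"
              f" -> {cheaper} cheaper")
```

Under these assumptions the cumulative curves cross in year two, which is the kind of parity point the paragraph above describes; with different staffing or automation levels the crossover moves earlier or later.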
 
OpenStack is Big Software: A New Deployment Model is Needed
 
Building a private cloud infrastructure with OpenStack is an example of the big software challenge. Significant complexity exists in the design, configuration, and deployment of any production-ready OpenStack private cloud project. While the upfront costs are negligible, the true costs lie in ongoing operations; upgrading and patching the deployment can be expensive. Canonical addresses these challenges with a new breed of tools designed to model, deploy and operate big software. Canonical OpenStack Autopilot enables the deployment of revenue-generating cloud services by implementing a reference cloud that is flexible whilst minimising operational overhead. Application service components, and the operations required to run them, are encapsulated in code that enables organizations to connect, integrate, deploy and operate new services automatically, without the need for consultants, integrators, or additional costs and resources, as sketched below. Companies can choose from hundreds of microservices covering everything from cloud communications and IoT enablement to big data, security and data management.
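The core idea, encapsulating a service and the operations needed to run it in reusable code, can be illustrated with a small hypothetical model. The `Service` class, its `deploy` and `relate` methods, and the example services below are not Canonical's Autopilot tooling or any real API; they are a sketch of what modelling services and their relations in code looks like in principle.

```python
# Hypothetical sketch of modelling services and their relations in code.
# This is NOT Canonical's Autopilot or any real modelling API; it only
# illustrates encapsulating deployment and integration steps per service.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    units: int = 1
    config: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def deploy(self):
        # A real modelling tool would provision machines or containers and
        # install the component; here we only log the intent.
        print(f"deploying {self.units} unit(s) of {self.name} with {self.config}")

    def relate(self, other: "Service"):
        # Encapsulates the integration step (exchanging endpoints, credentials).
        self.relations.append(other.name)
        print(f"relating {self.name} <-> {other.name}")

if __name__ == "__main__":
    database = Service("mysql", units=3, config={"tuning": "safest"})
    app = Service("wordpress", units=2)
    for service in (database, app):
        service.deploy()
    app.relate(database)
```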
 
What the Future Holds for OpenStack
 
It is important to keep in mind that OpenStack is not a destination but part of the scale-out journey towards delivering scalable services faster than ever before. CIOs know they must make cloud part of their overall strategy, and that OpenStack is a key driver and enabler of hybrid cloud adoption. IT organizations that take a traditional approach will continue to struggle with service and application integration while working to keep their operational costs from rising. The good news is that companies like Canonical are developing software that gives organizations the insight, solutions, and leadership needed to engage in the Big Software era.