From security to quantum, and AI and edge to cloud, our digital world is evolving and expanding faster than ever before. In fact, IDC predicts that the global datasphere – the digital data we create, capture, and consume – will grow from approximately 40 zettabytes in 2019 to 175 zettabytes by 2025. With ever-increasing digital ‘noise’, it can be difficult for CIOs and CTOs to know where to start, and it’s clear that leaders are feeling the pressure when it comes to significant investment decisions. But while this influx of data brings challenges, it also presents opportunities for those working in IT and technology.
Relieving pressure is often about understanding where to prioritise. The daunting task of deciding the best way forward feels more manageable when priorities are clear, previous experience can be relied upon, and there is data to inform critical decision-making. We have entered this new year during a difficult time for the global economy, and the advice most widely given now is to streamline investment and increase efficiencies – which is easier to achieve when we leverage the data available to us in the right way. For example, the consensus we’re hearing is that those who eagerly invested in cloud computing over the last two years are now finding themselves overspending, because in many cases they focused less on the long term and more on short-term benefits. This is useful information to take forward in an uncertain economic environment.
So, what are some of the options available and how can decision makers make best use of their budgets in the next year?
Revolutionising through edge computing
One of the best ways to get the most out of IT investment in the next year is going to be through edge computing. This is a growing solution, with IDC predicting that worldwide spending on edge computing will reach $274bn by 2025. Put simply, those who don’t have edge computing will need it, or risk falling behind.
Edge computing is about collecting real-time data in locations outside of a data centre. This can bring multiple efficiency benefits, but before embarking on an edge journey, companies need to consider why they need an edge solution and what it will look like in the longer term. For example, organisations can take advantage of edge computing’s ability to act as an extension of the cloud, bringing faster, more reliable data to operations. This makes more sense for some industries than others. Retail and manufacturing businesses, for instance, will see more benefit from being able to gather and analyse data locally to make quicker decisions on inefficient processes.
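To make the "gather and analyse locally" pattern concrete, here is a minimal, hypothetical sketch (the sensor readings, threshold, and summary fields are invented for illustration, not taken from the article): an edge node acts on raw readings in real time and forwards only a compact summary to the central cloud, rather than streaming every data point.

```python
from statistics import mean

# Hypothetical raw sensor readings collected at the edge (e.g. line temperature)
readings = [71.2, 70.8, 75.6, 71.0, 89.9, 70.7]

ALERT_THRESHOLD = 85.0  # invented threshold for illustration

# Decide locally, in real time, whether any reading needs immediate action
alerts = [r for r in readings if r > ALERT_THRESHOLD]

# Forward only a compact summary to the central cloud/data centre
summary = {
    "count": len(readings),
    "mean": round(mean(readings), 1),
    "max": max(readings),
    "alerts": len(alerts),
}
print(summary)  # {'count': 6, 'mean': 74.9, 'max': 89.9, 'alerts': 1}
```

The design point is the bandwidth and latency trade-off: the time-critical decision (the alert) happens at the edge, while the cloud receives only the aggregate it needs for longer-term analysis.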
Careful consideration of the return on investment is needed before taking the plunge, as is an understanding of legacy processes and the feasibility of innovating.
Getting prepared for quantum computing
Quantum computing will make substantial progress in 2023. Even though most companies don’t have access to a quantum computer, they will need to think about its impact on the business, security and more. Quantum computing is getting real, and those who don’t have someone in their business who understands it will likely miss out. Investing in quantum simulation and enabling data science and AI teams to learn the new languages and capabilities of quantum will be essential for 2023.
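The quantum simulation the author recommends can start very small. As a hedged sketch (the example is mine, not from the article), a two-qubit Bell state – the "hello world" of quantum computing – can be simulated with nothing more than linear algebra, which is exactly the kind of exercise a data science team can use to get familiar with quantum concepts:

```python
import numpy as np

# Single-qubit gates as 2x2 unitary matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)                                 # identity (leave qubit alone)

# CNOT gate on two qubits (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to qubit 0, then CNOT -> entangled Bell state
state = np.array([1, 0, 0, 0], dtype=complex)
state = np.kron(H, I) @ state   # Hadamard on qubit 0 only
state = CNOT @ state

# Measurement probabilities: |00> and |11> each with probability 0.5,
# the signature of entanglement
probs = np.abs(state) ** 2
print(probs.round(3))  # [0.5 0.  0.  0.5]
```

Classical simulation like this only scales to a few dozen qubits, which is precisely why it is a learning tool today and a reason to track real quantum hardware for tomorrow.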
AI is writing code
It’s no secret that the impact of AI has already skyrocketed in the first month of the year, not least with Microsoft’s investment in ChatGPT. An AI that can write code completely shifts the AI landscape in which we’ve been operating. The output is in many cases nearly, if not just, as good as code written by a person, and it is in fact being used by developers to learn new techniques and approaches. This can increase quality, efficiency, and security, reduce costs, and enable organisations to automatically take advantage of best practice. Now is the time to make the most of these systems.

Meanwhile, the scale and size of the models behind these systems are shrinking, making them viable for organisations to train and then operate at the edge. Even optimising one small use case can allow a company to rapidly monetise a new capability and then scale it up. It’s particularly interesting to imagine the output from one AI becoming the input for another: for example, the output of ChatGPT being used as the input to Midjourney and OpenSCAD, then transmitted to a digital edge location that prints a brand-new, AI-designed 3D object in metal – one that has never existed before. In future, transformer models could be adapted to represent music in picture form and then generate music by generating a picture, or to generate X-ray images with certain variations of features that could be used to train other AIs, for example for medical diagnosis.
Some key steps in business, design, and manufacturing processes will be eliminated by these new approaches, massively reducing cost. That cost reduction will, in turn, open up capacity for new roles and applications.
This year is all about strategy, and better preparing ourselves for the technology of the future. Innovation has never been so pervasive, and forward-looking decisions will be critical. It’s imperative that companies think hard about the longer-term strategies of their business and prioritise investments in new technologies such as edge and AI that will result in more efficient and intelligent decision-making.
Postscript: New challenges await regulators. Should it be a legal requirement to tag every content item to say whether it was created by a human or a non-human? What if content created by ChatGPT-3 is used to train ChatGPT-5? Should some content types be illegal to use for training AIs? As the population of AIs begins to exceed the population of humans, should there be a globally unique way of categorically identifying every human, such that content creators and modifiers can never be spoofed by an AI and humans cannot take credit for content created by an AI? Should there be global conventions or laws against using AI to generate polymorphic computer viruses – what really should be off-limits? Responsible use of IT has to be a key consideration – after all, the mission of Dell Technologies is to create technologies that drive human progress.