THE WAY WE MANAGE our datacentres is changing. What used to be a case of maintaining an accurate asset database and a configuration database (a CMDB, for the ITIL-minded), and making sure there was enough power, cooling and IP addresses, has evolved into something far more complicated. This has been a slow evolution with a sudden acceleration into complexity in the last couple of years. What’s causing this problem? Well, there are a number of contributors.
The initial complication was the virtualisation of hosts. How do we catalogue and keep track of something that can move between physical locations, often with dynamic IP addressing, and whose movement changes the power and cooling requirements of the hardware it lands on? This makes the standard reporting models quite challenging, as very often one object can appear more than once. Reports become a snapshot of a particular time rather than a reflection of a period’s trend, which makes capacity management a little more challenging too.
Next we started virtualising storage and networks. This was a little easier than the server model, as the logical objects were far more tightly bound to physical instances. Even so, the reporting challenges remained.
More recently, the advent of “Big Data” architectures has muddied the waters still further. The architecture for this style of processing tends towards many smaller, commodity nodes rather than one very large node: it is scale-out rather than scale-up. This means we return to the early days of many hardware devices that virtualisation was dragging us away from, but instead of each being an individual entity in its own right, they are now all loosely coupled into a single system. Add to this the desire of many to flex those pools of commodity nodes as demand shifts and morphs over time, and the complexity increases still further.
Finally, with all this virtualisation and these many-node deployments, we add the orchestration layer that makes flexing and dynamic provisioning a reality. The hope is that the orchestration layer either manages the datacentre itself or at least references a system that is keeping track of all of this.
The problem is that, in most cases, the orchestration exists in a layered manner, where each infrastructure layer is managed discretely with an overarching orchestrator simply calling into those layers.
At first glance this is a great approach, as it means that the best-of-breed tools for each technology manage the appropriate layer. The problem arises when the orchestration layer has to understand all of the complexity of each layer below it and manage the datacentre based on this knowledge.
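To make the shape of this concrete, here is a minimal, hypothetical sketch in Python; every class and method name is invented for illustration rather than taken from any real product. The overarching orchestrator simply calls into discrete per-layer managers:

class ComputeLayer:
    # Discrete manager for the compute layer (illustrative only).
    def provision(self, spec):
        print(f"compute: provisioning {spec}")

class StorageLayer:
    # Discrete manager for the storage layer (illustrative only).
    def provision(self, spec):
        print(f"storage: provisioning {spec}")

class NetworkLayer:
    # Discrete manager for the network layer (illustrative only).
    def provision(self, spec):
        print(f"network: provisioning {spec}")

class Orchestrator:
    # The overarching layer: it calls into each discrete manager and,
    # without a common standard, must understand each one's quirks.
    def __init__(self):
        self.layers = [ComputeLayer(), StorageLayer(), NetworkLayer()]

    def deploy(self, spec):
        for layer in self.layers:
            layer.provision(spec)

Orchestrator().deploy({"nodes": 4, "tier": "commodity"})

Even at this toy scale the weakness shows: unless each layer exposes a common interface, the orchestrator must accumulate special-case knowledge of every layer it drives.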
What is required is a standard: a fixed way of communicating between the layers that is extensible, shrink-to-fit and commonplace. SNIA has worked on a number of standards in the storage arena that meet these requirements, including the Storage Management Initiative Specification (SMI-S) and the Cloud Data Management Interface (CDMI).
Leveraging these APIs means that the consuming technology, in this case the orchestration layer, not only has a standardised and well-known method to repeatedly query and report on elements of the storage infrastructure; by utilising CDMI it is also able to understand the capabilities of infrastructure elements and actually interact with those elements.
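Because CDMI is a RESTful, HTTP-based interface, such a query needs nothing more exotic than an ordinary HTTP client. A minimal sketch in Python follows, assuming a hypothetical endpoint at cdmi.example.com; the URI path, headers and media type are those described in the published CDMI specification:

import requests

CDMI_HOST = "https://cdmi.example.com"  # hypothetical endpoint

response = requests.get(
    f"{CDMI_HOST}/cdmi_capabilities/",
    headers={
        "Accept": "application/cdmi-capability",
        "X-CDMI-Specification-Version": "1.0.2",
    },
)
response.raise_for_status()

# The response is a JSON document whose "capabilities" object
# enumerates what this endpoint can do.
for name, value in response.json().get("capabilities", {}).items():
    print(f"{name} = {value}")

That enumeration is precisely the knowledge an orchestrator needs before it acts.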
There are two very important criteria for the standards mentioned above. The first, extensibility, means that the infrastructure and the orchestration layer are not limited to the functionality described in the existing standard. Provided the interface remains the same, the architects or developers of the environment are able to add features and functionality and have them reflected in the standard way.
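As a hedged illustration (the vendor-prefixed keys below are invented for this example), vendor extensions can ride alongside standard entries in the same capabilities document, so a client written against the standard interface discovers them with no new query mechanism:

# Illustrative only: a capabilities document mixing standard-style
# entries with invented vendor extensions. The interface, a flat set
# of named capabilities, is unchanged, so a standard client still works.
capabilities = {
    "cdmi_read_metadata": "true",       # standard-style entry
    "cdmi_snapshots": "true",           # standard-style entry
    "com.example.fast_clone": "true",   # hypothetical vendor extension
    "com.example.dedupe_ratio": "4.2",  # hypothetical vendor extension
}

vendor_extras = {k: v for k, v in capabilities.items()
                 if not k.startswith("cdmi_")}
print("Vendor extensions discovered:", vendor_extras)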
The second criterion, shrink-to-fit, is equally important. It allows an environment to implement only the parts of the standard that it cares about: if the infrastructure does not support certain features, those parts of the standard need not be implemented. It also allows orchestration products to adopt only the elements they need to function in the environments for which they are intended. Speeding up the development process for the vendor means that the consumer gets a supported, standards-based solution.
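In practice, shrink-to-fit turns into simple feature detection. A brief sketch, reusing the hypothetical endpoint from earlier and treating the capability names as illustrative examples:

import requests

CDMI_HOST = "https://cdmi.example.com"  # hypothetical endpoint
HEADERS = {
    "Accept": "application/cdmi-capability",
    "X-CDMI-Specification-Version": "1.0.2",
}

caps = requests.get(f"{CDMI_HOST}/cdmi_capabilities/",
                    headers=HEADERS).json().get("capabilities", {})

# Only drive features the endpoint actually advertises; anything the
# implementation left out of the standard is simply skipped.
if caps.get("cdmi_snapshots") == "true":
    print("Snapshots advertised: enable snapshot workflows.")
else:
    print("Snapshots not advertised: orchestrator skips that feature.")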
Both of the standards mentioned, but in particular CDMI, are based around metadata: the data that describes other data. It could be metadata about a service, a piece of equipment or a whole architecture. As we move towards software-defined datacentres, we need to leverage more software-defined solutions to interpret and manage those datacentres. Standards are the only way this will work at scale!
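To ground that, one final sketch: attaching descriptive metadata to a CDMI data object and reading it back. The host, container and metadata keys here are hypothetical, while the media types and JSON layout follow the CDMI data object model:

import requests

CDMI_HOST = "https://cdmi.example.com"  # hypothetical endpoint
VERSION = {"X-CDMI-Specification-Version": "1.0.2"}

# Create (or update) a data object carrying descriptive metadata.
requests.put(
    f"{CDMI_HOST}/reports/capacity.txt",  # hypothetical container/object
    headers={**VERSION,
             "Accept": "application/cdmi-object",
             "Content-Type": "application/cdmi-object"},
    json={
        "value": "node-count: 64",
        "metadata": {"tier": "commodity", "owner": "capacity-team"},
    },
)

# Read just the metadata field back.
obj = requests.get(
    f"{CDMI_HOST}/reports/capacity.txt?metadata",
    headers={**VERSION, "Accept": "application/cdmi-object"},
).json()
print(obj.get("metadata", {}))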
To find out more about the SNIA standards mentioned here, please visit www.snia-europe.org