Modern data centre management: pain points and challenges

As data centres grow in complexity and scale, IT administrators and business managers face a host of new challenges in keeping operations efficient, secure and reliable at a reasonable cost. Managing high-speed networks with multiple access points, and extracting information from them as it is needed, is becoming more difficult. More importantly, as virtualisation takes hold and traffic becomes increasingly ‘invisible’ to the physical network, getting to grips with network monitoring has become critical. This article explores these challenges and the pain points facing modern data centre management teams. By Trevor Dearing, EMEA Marketing Director, Gigamon.


IN SHORT, today’s data centre environments cannot be effectively managed without complete visibility. While administrators need to see patterns in packet data, business managers need to analyse application traffic to understand why certain undesirable events may be occurring. These events could include a slowdown in internet banking performance at a financial institution, a disruption to just-in-time deliveries in manufacturing, or broken access to patient records for a healthcare provider.

That said, the design of data centres has evolved in such a way that visibility has unfortunately taken a back seat to innovation. The applications and operating systems that create and receive packets of data – the very information needed by admins and managers – no longer sit inside static servers, but have become dynamic and elusive due to increasing virtualisation.

It’s true that there are numerous benefits to virtualisation, and these are further driving the trend. Among them are more dynamic and flexible infrastructures, achieved by maximising resource utilisation while accelerating IT service delivery. Then there is of course the financial justification, as virtualisation can deliver substantial cost savings through streamlined server management and more efficient use of space and power. In short, virtualisation can make enterprises more agile than they have ever been.

However, despite its benefits, the elastic and dynamic nature of virtualisation can – if not managed correctly – quickly become a nightmare for those in charge of monitoring networks. While responsiveness to immediate needs improves, troubleshooting and locating problems across the network becomes more difficult.

This is largely because virtualisation hides much of the network traffic from the physical infrastructure, creating blind spots that render traditional approaches to network monitoring ineffective.

These ‘invisible’ networks make it difficult to secure network traffic and analyse performance. Large portions of traffic flow through software-defined cloud infrastructures, encapsulated between virtual tunnel endpoints, and often never touch the physical network at all – causing virtual machine (VM) and network administrators to lose visibility and control. As a result, the most common network monitoring approaches cannot manage the growing volume of hidden traffic, complicating troubleshooting efforts and eroding the cost savings associated with virtualisation.

Traditional network monitoring and application performance management tools gather information by using SPAN ports and taps to connect to as many ports as possible, and it is common to resolve issues temporarily by simply deploying more tools. However, as IT staff do this, they find that an increasing number of points in the network need to feed these additional tools – further increasing cost.
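By way of illustration, a tool attached to a SPAN port or tap essentially performs a per-interface packet capture. The minimal Python sketch below uses the scapy library to show the idea; the interface name "eth1" is an assumption standing in for whichever port receives the mirrored traffic.

```python
# Minimal sketch of the per-port capture a SPAN- or tap-fed
# monitoring tool performs. Requires scapy and capture privileges;
# "eth1" is an assumed name for the port receiving mirrored traffic.
from scapy.all import sniff

def summarise(pkt):
    # A real tool would index, store and analyse each packet;
    # here we simply print a one-line summary.
    print(pkt.summary())

sniff(iface="eth1", prn=summarise, store=False)
```

Each additional capture point in the network means another such feed – and another tool licence – which is where the cost escalation comes from.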
To keep traffic in the same VLAN or port group, traffic between hypervisors is being encapsulated by VXLAN, FabricPath or other overlay and tunnelling technologies. While on the surface this may be a good solution, visibility problems arise as overlay networks encapsulate Layer 2 packets in Layer 3 packets.

These Layer 3 packets must be decapsulated before network administrators can gain visibility into the traffic they carry. The challenge is compounded by the fact that network and business managers must understand data flows fully – not just those that move within their own data centre, but also those that move to other facilities or cloud environments.
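To make the decapsulation step concrete, the sketch below – a minimal illustration in Python, not any particular vendor's implementation – strips the eight-byte VXLAN header from a UDP payload to recover the inner Layer 2 frame and its 24-bit VXLAN network identifier (VNI). It assumes the outer Ethernet, IP and UDP headers have already been parsed and the UDP destination port has been matched against 4789, the IANA-assigned VXLAN port.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def decapsulate_vxlan(udp_payload: bytes):
    """Strip the 8-byte VXLAN header from a UDP payload and return
    (vni, inner_ethernet_frame). Assumes the outer Ethernet/IP/UDP
    headers have already been parsed and the port checked."""
    if len(udp_payload) < 8:
        raise ValueError("truncated VXLAN header")
    word1, word2 = struct.unpack("!II", udp_payload[:8])
    if not word1 & 0x08000000:   # I-flag: a valid VNI is present
        raise ValueError("VXLAN I-flag not set")
    vni = word2 >> 8             # VNI occupies the top 24 bits
    inner_frame = udp_payload[8:]  # the original Layer 2 frame
    return vni, inner_frame
```

A monitoring tool that cannot perform this step sees only the outer tunnel endpoints, not the virtual machines actually exchanging the traffic.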

Another pain point comes from speed. As speeds increase to 10GbE or more, existing systems will find it hard to keep up with the traffic volume, and packet losses will result. In such a scenario, it is virtually impossible to discover the root cause of network problems, as critical information could be held within the dropped packets. Once again, adding monitoring tools to keep up with increasing traffic speeds will erode the cost-effectiveness of networks and upgrades.
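The arithmetic behind this pain point is straightforward. At 10GbE line rate with minimum-size 64-byte Ethernet frames, a monitoring tool must handle nearly 15 million packets per second, as the back-of-envelope calculation below shows:

```python
# Back-of-envelope packet rate at 10GbE line rate with
# minimum-size Ethernet frames.
LINK_BPS = 10e9        # 10 Gbit/s
FRAME_BYTES = 64       # minimum Ethernet frame
PREAMBLE_BYTES = 8     # preamble + start-of-frame delimiter
IFG_BYTES = 12         # inter-frame gap
bits_on_wire = (FRAME_BYTES + PREAMBLE_BYTES + IFG_BYTES) * 8
pps = LINK_BPS / bits_on_wire
print(f"{pps / 1e6:.2f} Mpps")   # ~14.88 million packets per second
```

Any tool that cannot sustain that rate will silently drop packets – and with them, potentially, the very evidence needed to diagnose a fault.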
Finally, compliance remains an ongoing issue, as customers demand complete visibility into their private or public clouds when undertaking physical-to-virtual migrations.

What’s more, many industry regulations such as PCI DSS require some level of insight into, and control over, network activity, particularly where payment card data is involved. Many of today’s tools are unable to deliver complete visibility under such circumstances, even though they may claim to do so. This can lull an enterprise into a false sense of security, believing it is compliant when in fact it is not.

As illustrated, there are a number of issues currently – or imminently – facing data centre teams as technology continues to advance. Naturally, these will vary between businesses, infrastructure sets and locations, but the fact remains that increased visibility is becoming one of the most critical requirements for new data centre fabrics.