Why monitoring converged networks needn’t be a headache

Next-generation networks demand a new approach to network monitoring. By Trevor Dearing, EMEA marketing director for Gigamon.


TODAY’S CONSUMERS can be highly demanding and persistent when it comes to accessing data. The explosion in the number of smartphones and mobile devices has led to an ‘always on’ culture, where it is considered normal to stream high-bandwidth video, participate in seamless videoconferencing, play HD games online, surf the internet and, of course, make voice calls on the go.

What’s more, organisations are becoming increasingly mobile, with employees expecting seamless access to the entire spectrum of enterprise applications, irrespective of location and with little concern for the vast IT infrastructure required to make such flexibility possible. Advances in technology mean that more can be done on mobile devices, and consumers now expect, as standard, to be able to connect to next-generation services how and when they want. All of these trends add complexity for data centre operators, as they call for additional infrastructure to maintain and enhance today’s communications. This hyper-connected world is doing more than just changing the way we work, socialise and communicate – it is also changing the way data centre infrastructure is designed and managed.

The development of the converged network
When Ethernet and traditional IP models were invented some 40 years ago, the teams behind these technologies could never have imagined how they would be used today. As mentioned, it is no longer a surprise that people expect portable devices to receive live video with no thought for how it happens; what is interesting is that if we were starting over and designing protocols for this today, we would probably not choose IP. Essentially, we have taken a system that was never designed for voice, video or storage, applied some modifications, and made it work very well. The desire to carry everything over IP started in the 1990s and has gained momentum ever since. The techniques used, while broadly similar, have developed over the years until we now have Converged Enhanced Ethernet, or Data Centre Bridging, which provides a Fibre Channel-like environment.

In short, traditional IP models were never designed to carry diverse traffic loads, yet somehow carriers and operators have made it work – pulling voice, video, and data together onto a single network to benefit from the cost advantage of using one pipe to deliver these numerous communications services. However, consumers are becoming less tolerant of service interruption or price hikes, and providers are struggling to cope with the performance, latency, and quality-of-service challenges that arise when these converged services are delivered over the same underlying IP network.

The most pressing challenge that data centre operators are likely to experience is managing the very different requirements that each service must satisfy. Voice traffic, for example, demands minimal delay, jitter and loss, while the main demand of streaming applications is bandwidth availability. These distinct requirements call for a robust, scalable network management platform, which enables providers to monitor network activity without passing unreasonable management costs on to their customers. Ensuring a consistent service across the board therefore means rethinking the traditional approach to network monitoring, so that service providers can see what they are missing and instantly identify any areas of concern before they become bigger problems in terms of availability, performance, or security.
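
As a rough illustration of how these differing requirements might be captured in a monitoring configuration, the short Python sketch below defines per-service thresholds and flags breaches; the services, metrics and limit values are assumptions chosen for the example, not prescribed figures.

```python
# Illustrative per-service monitoring thresholds (the values are assumptions,
# not recommendations): voice is judged on delay, jitter and loss, while
# streaming video is judged mainly on available bandwidth.
SERVICE_THRESHOLDS = {
    "voice": {"max_delay_ms": 150, "max_jitter_ms": 30, "max_loss_pct": 1.0},
    "video": {"min_bandwidth_mbps": 5.0},
    "bulk_data": {"min_bandwidth_mbps": 1.0},
}

def violations(service: str, measured: dict) -> list:
    """Return the names of any thresholds that the measured metrics breach."""
    breached = []
    for name, limit in SERVICE_THRESHOLDS.get(service, {}).items():
        value = measured.get(name.split("_", 1)[1])  # e.g. "max_delay_ms" -> "delay_ms"
        if value is None:
            continue
        if name.startswith("max_") and value > limit:
            breached.append(name)
        elif name.startswith("min_") and value < limit:
            breached.append(name)
    return breached

# A voice call measured at 180 ms delay and 2% loss breaches two of its limits.
print(violations("voice", {"delay_ms": 180, "jitter_ms": 10, "loss_pct": 2.0}))
```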

Monitoring without impacting performance
With such varied traffic traversing a single converged network, data centre operators must first have visibility of all activity before attempting to deploy appropriate monitoring and management tools. To ensure full control, IT teams must understand each traffic type – voice, data, video and so on – and then monitor that traffic using a range of different tools. This delivers a deep, accurate view of all network activity in real time and offers insight into the performance of each service running over the network.

However, when so many different types of tools are introduced to the network – from application performance monitoring to voice recording and security – tool degradation and packet loss can result. When so much traffic is directed at each tool, much of it irrelevant to that tool’s purpose, the tool can become overloaded as it sorts through all of the “noise” to find the information it needs to process. A VoIP analyser, for instance, will come under increasing pressure if it constantly receives all of the network traffic rather than just the VoIP traffic it needs to see, and this problem snowballs as network speeds increase. To maintain pervasive visibility and increase the reliability of network monitoring, analysis, and security tools, without oversubscribing them, solutions must be introduced that effectively separate different traffic types and deliver the appropriate packets to the relevant tools.
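
To make this concrete, here is a minimal, hypothetical Python sketch of that kind of traffic separation: packets are classified by simple header fields and handed only to the tool that needs them. The port numbers and tool names are assumptions chosen for illustration, not a description of any particular product.

```python
from collections import defaultdict

# Hypothetical classification: decide which tool should receive a packet based
# on simple header fields. The port choices are illustrative assumptions
# (5060 for SIP signalling, a typical UDP range for RTP media, 80/443 for web).
def classify(packet: dict) -> str:
    if packet["protocol"] == "UDP" and (
        packet["dst_port"] == 5060 or 16384 <= packet["dst_port"] <= 32767
    ):
        return "voip_analyser"
    if packet["protocol"] == "TCP" and packet["dst_port"] in (80, 443):
        return "app_performance_monitor"
    return "security_tool"  # everything else goes to security analysis

def distribute(packets: list) -> dict:
    """Hand each packet only to the tool that needs it, instead of flooding
    every tool with the full converged traffic stream."""
    queues = defaultdict(list)
    for pkt in packets:
        queues[classify(pkt)].append(pkt)
    return queues

traffic = [
    {"protocol": "UDP", "dst_port": 16500},  # RTP voice media
    {"protocol": "TCP", "dst_port": 443},    # web or video application traffic
    {"protocol": "TCP", "dst_port": 22},     # anything unmatched
]
for tool, pkts in distribute(traffic).items():
    print(tool, len(pkts))
```

In this toy version each tool receives exactly one of the three packets, which is the point: the VoIP analyser never has to wade through web or management traffic to find the streams it cares about.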

The problem is that today’s visibility solutions differ greatly, employing a variety of filtering mechanisms with varying degrees of efficiency and performance to deliver the desired set of packets to one or more monitoring tools. Given the magnitude and complexity of current converged network infrastructures, coupled with the rate of network development, the challenge is to build visibility solutions that can scale to handle thousands of diverse traffic streams originating from dozens or even hundreds of network traffic sources, granularly filtering them and forwarding them to a variety of monitoring tools and analysers with zero packet loss. For many data centre operators, Gigamon’s proprietary Flow Mapping® technology is emerging as the only viable solution to this ever-increasing challenge.

A rule for each network service
This advanced filtering technology extends from network ports to tool ports and solves many of the issues presented by network convergence described above. It enables network managers to define and control how traffic should be directed and what type of information goes to which monitoring tools. The approach takes line-rate traffic at 1Gb, 10Gb, 40Gb or 100Gb from a network TAP or a SPAN mirror port and sends it through a set of user-defined rules to the tools and applications that secure, monitor and analyse the IT infrastructure. In doing so, Flow Mapping technology provides granularity and scalability beyond the capabilities of connection-based and filter-based solutions, addressing the problems that arise when more than a small number of connections or more than one traffic distribution rule is required – as is the case with converged network services.

When deployed as part of a data centre transformation exercise, Flow Mapping technology combines an ingress port traffic filter, an egress port traffic filter and up to 13 unique user-selected criteria, and ties them to one or more output ports, allowing discrete traffic to be delivered to the desired location. By applying a combination of different “map rules” to network traffic, the desired packet distribution can be achieved and each tool is guaranteed to see only the traffic that it needs – and nothing else. This relief also boosts the efficiency and lifespan of the tools and makes management easier, delivering significant CAPEX and OPEX savings for the data centre as a result.
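
The sketch below is not Gigamon’s implementation; it is a simplified, hypothetical Python illustration of the general idea described above, in which user-defined map rules combine match criteria and tie matching traffic to one or more tool ports so that each tool sees only its own traffic. The rule fields, match criteria, DSCP value and port names are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class MapRule:
    """A simplified stand-in for a user-defined map rule: match criteria on
    packet metadata, tied to one or more tool (output) ports."""
    name: str
    criteria: dict        # field name -> required value; all must match
    tool_ports: list
    priority: int = 100   # lower number = evaluated first

def apply_map_rules(packet: dict, rules: list) -> list:
    """Return the tool ports that should receive this packet. The first matching
    rule wins in this sketch; real products may offer richer semantics."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if all(packet.get(k) == v for k, v in rule.criteria.items()):
            return rule.tool_ports
    return []  # unmatched traffic is not forwarded to any tool

rules = [
    MapRule("voip-to-voice-tool", {"protocol": "UDP", "dscp": 46}, ["tool-1"], 10),
    MapRule("web-to-apm", {"protocol": "TCP", "dst_port": 443}, ["tool-2"], 20),
    MapRule("catch-all-security", {}, ["tool-3"], 1000),
]

print(apply_map_rules({"protocol": "UDP", "dscp": 46, "dst_port": 16500}, rules))  # ['tool-1']
print(apply_map_rules({"protocol": "TCP", "dst_port": 443}, rules))                # ['tool-2']
print(apply_map_rules({"protocol": "TCP", "dst_port": 22}, rules))                 # ['tool-3']
```

Even in this toy form, the combination of specific rules plus a low-priority catch-all mirrors the behaviour described above: voice, web and residual traffic each reach a different tool, and no tool receives packets it does not need.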

To conclude, the development of next-generation networks shows no sign of slowing, and is largely dictated by the evolving demands of the everyday consumer. As this happens, approaches to network monitoring, and indeed overall data centre design, must be urgently rethought if operators are to achieve the multi-faceted objective of satisfying demand, managing diverse traffic loads across converged networks and keeping costs down.