Optimising network management in high-speed environments

The volume of traffic within networked environments is steadily increasing. To address this growth, network operators are moving to the next step: 10 Gigabit Ethernet (10GbE). While 10GbE speeds are sufficient for the moment, it will not be long before the 100GbE threshold is breached. As our appetite for network bandwidth grows at breakneck speed, data centre operations are struggling to keep pace and infrastructure management has become increasingly challenging. The network is the lifeblood of any organisation, and ensuring its health is essential to enabling revenue generation, sustaining and enhancing the customer experience, protecting data and ensuring compliance. By Trevor Dearing, EMEA marketing director, Gigamon.


AS 10GBE PROVIDES the bandwidth required to support a mix of network services and today's increasing network load, it is fast becoming the predominant choice for core and distribution networks within modern data centres. Indeed, Dell'Oro Group has forecast that sales of 10 Gigabits per second (Gbps) Ethernet switches will reach $13 billion by 2016, constituting nearly half of the total Ethernet switch market [1]. However, despite all of its benefits, the move to 10GbE presents a host of new challenges that traditional network management and monitoring solutions simply are not equipped to deal with.

The challenge of traditional network monitoring
Traditionally, network monitoring has been something of an afterthought when designing network environments. However, accurate monitoring of newer, higher-speed networks is critical, and failure to monitor them effectively is likely to result in costly downtime that most organisations can ill afford. As such, effective monitoring strategies and real-time troubleshooting are quickly becoming growing concerns for businesses. Yet the race to higher speeds, the emergence of increasingly complex security threats, stringent compliance requirements and the adoption of virtualisation into more network architectures all combine to make network monitoring a real challenge.
While 10GbE monitoring tools are available, they are hugely expensive, and to avoid that financial burden many organisations continue to rely on a more traditional approach: directly attaching monitoring tools to SPAN/mirror ports on every switch. This is not only costly, but can also produce a heavily distorted view of the network, since each tool sees only a limited segment of the traffic. Moreover, when aggregated and filtered SPAN outputs deliver more traffic to a tool than it can handle, the tool becomes oversubscribed and its analysis suffers, because it receives unwanted packets alongside the ones it actually needs. Using this method, for instance, a VoIP analyser is sent all of the network traffic rather than just the VoIP traffic it needs to see. As networks are upgraded to 10Gbps and beyond, the problem worsens: network links sit underutilised because operators lack the visibility that would allow them to run the links at full capacity.
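To make that last point concrete, here is a minimal sketch in plain Python of the kind of tool-side filtering involved. It is illustrative only, not any vendor's implementation; the Packet fields and port numbers (standard SIP signalling on UDP 5060, a common RTP media range) are assumptions for the example:

```python
# Minimal sketch: classify packets so a VoIP analyser receives only
# SIP signalling and RTP media instead of the full SPAN feed.
# Packet fields and port numbers are illustrative assumptions.
from dataclasses import dataclass

SIP_PORT = 5060                  # standard SIP signalling port
RTP_RANGE = range(16384, 32768)  # commonly used RTP media port range

@dataclass
class Packet:
    proto: str        # "tcp" or "udp"
    dst_port: int
    payload_bytes: int

def is_voip(pkt: Packet) -> bool:
    """Keep only the traffic a VoIP analyser actually needs."""
    return pkt.proto == "udp" and (
        pkt.dst_port == SIP_PORT or pkt.dst_port in RTP_RANGE
    )

span_feed = [
    Packet("tcp", 443, 1400),    # HTTPS: irrelevant to VoIP analysis
    Packet("udp", 5060, 600),    # SIP signalling: keep
    Packet("udp", 20000, 200),   # RTP media: keep
]

voip_only = [p for p in span_feed if is_voip(p)]
print(f"{len(voip_only)} of {len(span_feed)} packets reach the analyser")
```

Without a filtering stage like this in front of it, the analyser must absorb and discard the HTTPS traffic itself, burning capacity on packets it will never analyse.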
Deep Packet Inspection (DPI) is another option used by many, but it is inherently resource intensive. The method can see everything that happens across the network, at every network layer, but the faster the line rate, the more processing a tool needs in order to filter out the traffic it must see. At a 1Gbps line rate, 1Gbps tools may just manage without becoming oversubscribed; at 10Gbps, even 10Gbps tools are highly likely to struggle. It is therefore clear that straight aggregation and filtering alone cannot be relied on to direct the appropriate network traffic to the monitoring tools. The tools quickly become oversubscribed, resulting in serious packet loss and blind spots, and rendering them incapable of meaningful analysis.
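A back-of-the-envelope calculation shows why the scaling is so unforgiving. Assuming the worst case of minimum-size 64-byte Ethernet frames (plus the 20 bytes of preamble and inter-frame gap that occupy the wire), the time available to inspect each packet shrinks by an order of magnitude with each speed step:

```python
# Per-packet inspection budget at increasing line rates, assuming
# worst-case minimum-size 64-byte frames plus 20 bytes of wire
# overhead (preamble and inter-frame gap).
FRAME_BITS = (64 + 20) * 8  # bits occupied on the wire per frame

for gbps in (1, 10, 100):
    pps = gbps * 1e9 / FRAME_BITS    # packets per second
    budget_ns = 1e9 / pps            # nanoseconds available per packet
    print(f"{gbps:>3} Gbps -> {pps / 1e6:6.2f} Mpps, "
          f"{budget_ns:6.1f} ns per packet")
```

At 1Gbps a tool has roughly 672 nanoseconds per worst-case packet; at 10Gbps it has about 67, which leaves very little headroom for deep inspection of every frame.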
Meeting modern demands
To address and circumvent the challenges presented by these legacy approaches, network operators need a monitoring solution that can scale from just a few connections up to thousands, allowing them to filter, aggregate, consolidate and replicate data to their existing monitoring tools. While this alone solves part of the problem and reduces CAPEX and OPEX, without a level of intelligent filtering some of the traditional challenges remain: the monitoring tools are still likely to receive unwanted traffic and to suffer a degree of oversubscription.
Flow mapping technology solves this problem. This advanced filtering approach makes it possible to combine thousands of different rules in a logical order to achieve the desired packet distribution, ensuring each tool sees only the traffic it needs and nothing else. Such granular customisation overcomes tool-port oversubscription when aggregating traffic from multiple network ports. For example, if two 10Gbps connections send traffic to a single 10Gbps tool port, the tool port is likely, at some point, to become oversubscribed, leading to dropped packets. Flow mapping allows network operators to remove the parts of the overall traffic stream that are irrelevant to the particular function of a specialised tool. Not only does this free up bandwidth and increase visibility, it can also extend the life of monitoring tools, since they no longer have to process vast amounts of irrelevant data, leading to further financial savings.
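Conceptually, a flow map can be thought of as an ordered rule table that decides which tool port(s) each packet is replicated to. The sketch below illustrates the idea in simplified Python; it is not a vendor API, and the rules, tool names and match fields are assumptions chosen for the example:

```python
# Illustrative flow-mapping sketch: an ordered rule table decides
# which tool(s) each packet is replicated to, so every tool sees
# only the traffic relevant to its function.
import ipaddress

RULES = [  # evaluated in order; a packet may match several rules
    {"proto": "udp", "dst_port": 5060, "tool": "voip-analyser"},
    {"subnet": ipaddress.ip_network("10.1.0.0/16"), "tool": "ids"},
    {"proto": "tcp", "dst_port": 80, "tool": "web-performance"},
]

def map_packet(proto: str, src_ip: str, dst_port: int) -> list[str]:
    """Return every tool this packet should be replicated to."""
    tools = []
    for rule in RULES:
        if "proto" in rule and rule["proto"] != proto:
            continue
        if "dst_port" in rule and rule["dst_port"] != dst_port:
            continue
        if "subnet" in rule and ipaddress.ip_address(src_ip) not in rule["subnet"]:
            continue
        tools.append(rule["tool"])
    return tools

print(map_packet("udp", "10.1.2.3", 5060))  # ['voip-analyser', 'ids']
print(map_packet("tcp", "192.0.2.9", 80))   # ['web-performance']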
In addition, these multi-rule pre-filters allow 10Gb traffic to be directed and mapped through 1Gb analysers. With each tool analysing a specific set of packets according to a specific filter rule, based on VLAN range, port number or IP subnet for instance, comprehensive monitoring at 10Gbps can be achieved without the risk of oversubscription or packet loss. What's more, since the mapping filters are hardware based, latency is negligible and full line-rate performance is guaranteed.
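A minimal sketch of that fan-out, again with illustrative assumptions (the VLAN ranges and analyser names are invented for the example), steers each slice of a 10Gbps feed to a different 1Gbps analyser:

```python
# Fan-out sketch: each pre-filter rule (here, a VLAN range) steers
# its slice of a 10Gbps feed to one 1Gbps analyser, so no single
# tool is oversubscribed. Ranges and tool names are assumptions.
VLAN_MAP = [
    (range(1, 100),   "analyser-1"),
    (range(100, 200), "analyser-2"),
    (range(200, 300), "analyser-3"),
]

def tool_for_vlan(vlan_id: int) -> str | None:
    for vlan_range, tool in VLAN_MAP:
        if vlan_id in vlan_range:
            return tool
    return None  # unmatched traffic is dropped before any tool sees it

for vid in (42, 150, 250, 4000):
    print(f"VLAN {vid:>4} -> {tool_for_vlan(vid)}")
```

Provided each VLAN slice stays within a single analyser's 1Gbps capacity, the aggregate 10Gbps feed is monitored in full, which is the point the paragraph above makes.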
Managing data centre infrastructure can be a difficult process at the best of times. When network speeds are upgraded it becomes even more complex, and without the required visibility downtime is almost inevitable. Combining a network traffic visibility solution with flow mapping technology allows operators to see exactly what is happening on the network at all times, from threats to performance issues, and to maximise data centre performance while lowering the total cost of management. With increased visibility, operators can see what they would otherwise miss, keep downtime to a minimum and ensure optimum performance.


[1] Dell'Oro Group, "Data Center Forecast to Drive Ethernet Switch Revenue Growth Through 2016", http://www.delloro.com/news/data-center-forecast-to-drive-ethernet-switch-revenue-growth-through-2016