Big Data: ensuring network tools don’t crack under the pressure

Much has been made of the value of analysing the data held within an organisation. This Big Data has been instrumental in allowing organisations to change the way they do business and to understand what their customers are doing in order to build a better proposition. The key to doing this is the insight provided by analysis of the data. By Trevor Dearing, EMEA marketing director, Gigamon.


WHILE ANALYSING DATA at rest can provide a view of what people have done in the past, organisations may also need to understand what people are doing now. This not only provides insight into the experiences that users or customers are enjoying – or not enjoying – but also a clearer understanding of threats that could potentially affect the business. The way to achieve this insight is to gain an in-depth view of the traffic flowing around the network. A huge number of tools are available to look at everything from application performance to network forensics to behavioural anomalies, and together they can provide an all-important view of usage, capacity, security posture and potential threats.

When you look closely at the traffic flowing around the network, around 85 percent of it is of little relevance to the tools – a given tool, for example, may have no interest in video- or music-based traffic. This presents a challenge, because if you deliver a huge amount of traffic to a monitoring or security device it may not be able to process all of it or make proper sense of it. From a monitoring perspective, you are not getting an accurate view – and in the security world, you are not seeing all of the possible threats. This is especially true as speeds increase to 10, 40 or even 100 Gigabits per second, as many devices cannot operate at these rates.


To solve issues around the rise in traffic volumes, as well as the constant need to increase levels of performance, an intermediary step is required: traffic filtering and optimisation. Doing this requires what effectively amounts to a separate network, one intelligent enough to identify, classify and potentially treat the traffic so that the monitoring or security tools can work at maximum efficiency.
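
To make that "identify, classify and treat" step concrete, the sketch below shows a toy software version in Python. A real visibility fabric classifies traffic in hardware at line rate; the packet structure, port lists and traffic classes here are assumptions chosen purely for illustration, not a definition of how any particular product or the 85 percent figure works.

```python
# Toy illustration of classifying traffic and forwarding only what a tool needs.
# Port lists and class names are assumptions for this example.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str   # "tcp" or "udp"
    dst_port: int
    length: int     # bytes on the wire

# Ports treated as streaming/media for this illustration only.
STREAMING_PORTS = {554, 1935, 8554}                 # e.g. RTSP, RTMP (assumed)
SECURITY_RELEVANT_PORTS = {22, 53, 80, 443, 445, 3389}

def classify(pkt: Packet) -> str:
    """Assign each packet a coarse class the fabric can act on."""
    if pkt.dst_port in STREAMING_PORTS:
        return "streaming"          # of little interest to most tools
    if pkt.dst_port in SECURITY_RELEVANT_PORTS:
        return "security"           # forward to security tools
    return "other"

def filter_for_tool(packets, wanted_classes):
    """Forward only the classes a given tool cares about."""
    return [p for p in packets if classify(p) in wanted_classes]

if __name__ == "__main__":
    traffic = [
        Packet("10.0.0.5", "10.0.0.9", "tcp", 443, 1500),
        Packet("10.0.0.5", "10.0.0.9", "tcp", 554, 1400),   # streaming
        Packet("10.0.0.7", "10.0.0.9", "udp", 53, 120),
    ]
    # A security tool never sees the streaming packet.
    for p in filter_for_tool(traffic, {"security"}):
        print(p.dst_port, classify(p))
```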

To add to this complexity, security and monitoring devices need to see traffic in different formats and styles. This can range widely: some tools look at the SNMP traps and syslog messages sent from a network device directly to a targeted monitor, dealing with relatively low traffic volumes, while other devices gather as much packet-based traffic as necessary to build a picture (some only need the headers, others the detail in the payload). Then there is a third group that looks for flow records in order to understand each conversation.
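
For that third group, a flow record summarises each conversation rather than carrying the packets themselves. The sketch below aggregates per-packet samples into NetFlow-style records; the field names and the input format are simplified assumptions for illustration, not an actual export format.

```python
# Simplified flow-record aggregation: one record per 5-tuple conversation.
# Field names loosely follow NetFlow-style records; input format is assumed.

from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    packets: int = 0
    bytes: int = 0
    first_seen: float = 0.0
    last_seen: float = 0.0

def build_flows(samples):
    """Aggregate (timestamp, 5-tuple, length) samples into per-conversation records."""
    flows = {}
    for ts, src_ip, dst_ip, src_port, dst_port, proto, length in samples:
        key = (src_ip, dst_ip, src_port, dst_port, proto)
        rec = flows.get(key)
        if rec is None:
            rec = FlowRecord(src_ip, dst_ip, src_port, dst_port, proto,
                             first_seen=ts, last_seen=ts)
            flows[key] = rec
        rec.packets += 1
        rec.bytes += length
        rec.last_seen = ts
    return list(flows.values())

if __name__ == "__main__":
    samples = [
        (0.00, "10.0.0.5", "10.0.0.9", 51000, 443, "tcp", 1500),
        (0.01, "10.0.0.5", "10.0.0.9", 51000, 443, "tcp", 900),
        (0.05, "10.0.0.7", "10.0.0.9", 53211, 53,  "udp", 120),
    ]
    for rec in build_flows(samples):
        print(rec)
```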


With more and more devices all chasing the same data, it is becoming increasingly important to manage the traffic that is captured from the network and delivered to the security and monitoring devices. A separate monitoring network therefore needs the intelligence to identify the style and content of traffic that each device is looking for and deliver it in the correct shape and size for that tool.
While aggregating traffic from network taps is a fairly simple process, capturing and optimising the data on the network into the correct format requires a much more scalable and controlled approach. It needs more than an adapted Ethernet switch or a packet broker; it needs an intelligent fabric that can adapt and react to the traffic types and work across both the physical and virtual environments. The truth is that the edge of the network is now the vSwitch, or even the vNIC, and that is where traffic monitoring needs to start.
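
As a rough illustration of delivering traffic "in the correct shape and size", the sketch below models a per-tool policy: one tool receives headers only, another receives full packets but sampled. The tool names, slice lengths and sampling rates are hypothetical, chosen only to show the idea of per-tool treatment.

```python
# Toy per-tool delivery policy: filter, slice (truncate) and sample traffic
# differently for each consuming tool. All names and numbers are illustrative.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolPolicy:
    name: str
    wants: Callable[[bytes], bool]   # coarse filter on the raw packet
    slice_to: Optional[int]          # truncate to N bytes (headers only), or None for full packet
    sample_every: int = 1            # 1 = every packet, 2 = every second matching packet

POLICIES = [
    # An APM tool that only needs headers, so slice to the first 128 bytes.
    ToolPolicy("apm",       wants=lambda pkt: True,          slice_to=128),
    # A forensics tool that wants full payloads but tolerates 1-in-2 sampling.
    ToolPolicy("forensics", wants=lambda pkt: len(pkt) > 64, slice_to=None, sample_every=2),
]

def deliver(packets):
    """Yield (tool, shaped_packet) pairs according to each tool's policy."""
    for policy in POLICIES:
        count = 0
        for pkt in packets:
            if not policy.wants(pkt):
                continue
            count += 1
            if count % policy.sample_every:
                continue                      # deliver only every Nth matching packet
            shaped = pkt if policy.slice_to is None else pkt[:policy.slice_to]
            yield policy.name, shaped

if __name__ == "__main__":
    traffic = [bytes(200), bytes(60), bytes(1500)]   # fake packets of various sizes
    for tool, pkt in deliver(traffic):
        print(tool, len(pkt))
```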


So if you are migrating your data centre to a virtualised, solid-state, software-defined environment, or are concerned about the effectiveness of your security environment and trying to deliver the best experience for your users, then it is definitely worth looking at how this level of traffic visibility can be achieved in the most efficient way.

Maintaining the ability to monitor the volume of traffic and to understand the security threats is more important than ever, so the same level of planning and rigour needs to be applied. Monitoring and security should not be an afterthought; the appropriate technology should be designed into the infrastructure from the very beginning.