Are traffic visibility limitations leaving your network vulnerable?

Today’s enterprises and service providers vary widely in the maturity of their approach to securing their networks. At one extreme is confidence (misplaced or otherwise) that a serious security breach is unlikely to affect them; at the other is a highly focused, advanced posture in which not a single packet of network traffic goes unauthorised or unaccounted for. Most organisations sit somewhere between these two extremes, and will likely recognise that, even with a blended approach to securing the network, including hardware devices at the network edge and a degree of intelligence in monitoring the traffic that traverses it, they are probably still not completely secure. By Trevor Dearing, EMEA Marketing Director, Gigamon.


We need look no further than the massive data breach at Target, which made headlines for the US retailer for all the wrong reasons in December 2013, as a typical example of how a billion-dollar company that has doubtless implemented any number of security precautions over time can still fall victim to a cyber attack. In this instance, the breach is thought to have involved a combination of malware sneaked onto point-of-sale terminals and network access compromised via a third-party supplier to the retailer.


Thankfully, because a growing proportion of modern cyber attacks like this are blended and complex in nature, it is (hopefully) fair to say that almost every organisation now recognises the limitations of the ‘old-fashioned’ approach to security: simply constructing as hard an outer shell as possible. Even very basic modern threats can bypass this approach, as they often focus on exploiting a legitimate user who is already on the inside. This is not to say that network perimeter security is no longer needed; a large amount of attack traffic is automated and will successfully find its intended target if these defences are not in place. Rather, a hardened outer shell should form only part of a framework of network solutions that, crucially, provide the ability to perform deeper statistical analysis on network traffic, in order to identify and block not just traditional threats but any kind of suspicious network activity.


The typical approach to network security tools is to place them on the production network, in the data path. These in-band network appliances can provide a high level of security and insight into network traffic, but they have significant drawbacks that cannot be overlooked. As organisations move from 1GbE to 10GbE networking and beyond, both the network performance and the security performance of these tools begin to be compromised. Security also requires more than one type of protection (hardware firewalls, anti-malware appliances, spam filters and so on), which can result in “daisy-chained” tools: a series of security appliances that process traffic in sequence and through which every packet must pass. Each tool in the chain therefore adds reliability, performance and scalability risk for the enterprise, because the failure of any one of them affects the whole path. With this design there is also often a requirement for traffic management that can bypass tools under certain conditions in order to avoid the loss of critical services, which is unfortunate but at times unavoidable.
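
To illustrate why chaining tools in series compounds that reliability risk, consider a minimal back-of-the-envelope sketch in Python. The availability figures are purely illustrative assumptions, not measurements from any real deployment: the availability of the whole in-band path is the product of the availability of every tool in it, so each tool added makes the path less reliable than any single tool on its own.

    # Illustrative only: availability figures below are assumptions,
    # not measurements from any real deployment.
    def chain_availability(tool_availabilities):
        """Availability of a serial (daisy-chained) path: every tool must be up."""
        result = 1.0
        for availability in tool_availabilities:
            result *= availability
        return result

    # Three in-band tools, each 99.9% available on its own
    tools = [0.999, 0.999, 0.999]
    print(f"End-to-end availability: {chain_availability(tools):.4%}")
    # -> roughly 99.70%: the chained path is less reliable than any one tool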

These drawbacks have helped to drive adoption of out-of-band network designs, in order to improve tool performance and make traffic monitoring more efficient and effective. There are some very significant benefits to performing certain types of traffic analysis out of band, such as analysis based on Cisco’s NetFlow protocol. This type of network monitoring has broad performance-analysis applications, but crucially it can also identify changes in network behaviour and support forensic investigations that reconstruct and replay the history of security incidents.
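
As a minimal sketch of what such out-of-band flow analysis can look like in practice, the Python below aggregates flow records and flags hosts whose outbound traffic deviates sharply from a baseline. It assumes the flow records have already been exported and parsed into simple dictionaries; the field names ("src", "dst", "bytes") and the threshold are illustrative assumptions, not any particular collector’s API.

    from collections import defaultdict

    def bytes_per_source(flows):
        """Total bytes sent by each source address across a set of flow records."""
        totals = defaultdict(int)
        for flow in flows:
            totals[flow["src"]] += flow["bytes"]
        return totals

    def flag_anomalies(current, baseline, factor=10):
        """Report sources sending far more traffic than their historical baseline."""
        return [src for src, sent in current.items()
                if sent > factor * baseline.get(src, 1)]

    # Example usage with made-up records
    baseline = bytes_per_source([{"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 1200}])
    current = bytes_per_source([{"src": "10.0.0.5", "dst": "203.0.113.7", "bytes": 50000}])
    print(flag_anomalies(current, baseline))   # ['10.0.0.5']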


If this kind of monitoring is performed in band on the production network, the additional processing and memory load can cause severe service degradation on the networking devices themselves, normally visible as a rise in CPU and memory utilisation. To avoid the risk of dropping production traffic, networking devices therefore often resort to sampling packets when generating statistics. A low sampling rate (sometimes as low as 1 in 1,000 packets) can mean that important events happening in the network are missed altogether.
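
To put a number on that, here is a short worked example with illustrative figures: under 1-in-N packet sampling, the probability that a flow of P packets is observed at all is 1 - (1 - 1/N)^P.

    # Illustrative calculation: chance that 1-in-N packet sampling observes
    # at least one packet of a flow containing "packets" packets.
    def probability_seen(packets, sample_rate=1000):
        return 1 - (1 - 1 / sample_rate) ** packets

    for packets in (10, 100, 1000):
        print(f"{packets:>5}-packet flow: "
              f"{probability_seen(packets):.1%} chance of appearing in sampled data")
    # A short, 10-packet exchange (typical of a probe or beacon) shows up in
    # the sampled data only about 1% of the time at a 1-in-1,000 rate.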


Although sampling traffic in this way may have limited impact on monitoring general network performance, it has far more serious implications for security monitoring, because the whole picture is, by definition, not being made available. It is this lack of traffic visibility, brought about by limitations in network design and specification, that is leaving networks vulnerable to cyber threats.


The need to monitor effectively what is happening inside the bounds of the network is growing rapidly more urgent. An ongoing and effective security strategy requires extensive, reliable and scalable visibility of network traffic, not only to prevent security breaches but also to react to potential threats before they can do any lasting damage.


While an ongoing and effective security architecture requires pervasive and efficient visibility of network traffic and communications, the architecture and approach adopted by many enterprises is still based on legacy technologies and thinking. An out-of-band security strategy, centred on capturing full visibility into traffic flows and giving protocols like NetFlow the complete picture, enables a far more comprehensive approach to securing the network.