Traffic visibility in the new DC world

Everything starts with an idea: every new business process, service or product begins with someone saying "I have an idea". Ideas are great, but if they cannot be implemented practically, quickly and economically, they remain just ideas. Let's look at one idea that few people disagree with as a concept: that moving to a more virtualized environment in the data centre can speed up the roll-out of new applications. By Trevor Dearing, EMEA Marketing Director, Gigamon.


The only reason this idea works is that new developments in server, network and storage technology have made the concept a reality. Every year Moore's law drives up the capacity of the latest servers: today's 8- and 16-core processors, and soon 24- and 32-core parts, can support many more virtual machines than before. Solid-state disk drives give applications much faster access to dynamically tiered storage over Ethernet, and tying the whole thing together are new 10Gb networks, with 40Gb and 100Gb on the way.
In the past, servers and networks were less efficient and ran more slowly. One application ran on each server, so while the application was waiting nothing happened; servers were very inefficient, typically running at about 5% load. This generated a relatively small amount of traffic on the network, and given that each server needed to support a number of networks and interface cards (data x2, storage x2, virtualization, and so on), the network was also used inefficiently. Virtualization allows many applications to run at once, delivering much higher utilization of the server, while multiplexing technologies in 10Gb Ethernet deliver better usage of the network: instead of seven 1Gb interface cards, each server may need only two 10Gb cards. Thanks to the multichannel technology in the latest interfaces, each of these cards can carry multiple data channels as well as storage and control traffic, so utilization is much higher.
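As a rough illustration of why this consolidation multiplies the traffic a monitoring system must handle, consider the back-of-the-envelope arithmetic below. The utilization figures are assumptions chosen for the example, not measurements.

```python
# Illustrative numbers only: consolidating seven 1Gb NICs into two 10Gb NICs.
legacy_ports, legacy_speed_gb = 7, 1
new_ports, new_speed_gb = 2, 10

legacy_capacity = legacy_ports * legacy_speed_gb   # 7 Gb/s across 7 ports
new_capacity = new_ports * new_speed_gb            # 20 Gb/s across 2 ports

# A dedicated server at ~5% load vs a virtualized host at an assumed ~60% load
# shows how sharply the traffic available for monitoring can grow.
dedicated_traffic = legacy_capacity * 0.05         # ~0.35 Gb/s
virtualized_traffic = new_capacity * 0.60          # ~12 Gb/s

print(f"Traffic per server: {dedicated_traffic:.2f} Gb/s "
      f"-> {virtualized_traffic:.2f} Gb/s")
```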
However, one cannot just open a catalogue and buy a fully functioning data centre. There will always be a substantial amount of planning and evaluation, testing and retesting: which hypervisor to choose, what platform to run it on, how to build the tiered storage, what the network should look like and how to manage the whole process. Management always proves a challenge because there are so many things that need watching. Are the servers performing as they should? Is the network causing any problems? What do we need to report on for compliance reasons? All of these questions need answering.

Given all of this information, how do we visualize the network and infrastructure? In the past, the only thing you would plug into a mirror port was a protocol analyser, but now many devices are looking for this traffic. The traffic needs to be delivered to the monitoring tools in its most efficient form; we cannot afford to drop any of the key information. That means we cannot pass it through the existing data network, as that may be the very thing causing the problem. On many occasions it is better to take the traffic from network taps, which are purely passive devices and have no impact on the performance of the switch.
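As a minimal sketch of what "plugging into a mirror port" amounts to in software, here is a raw capture loop on a Linux host whose interface is cabled to the switch's mirror port. The interface name `eth1` is an assumption for illustration, and root privileges are required.

```python
import socket

ETH_P_ALL = 0x0003  # capture every Ethernet protocol, not just IP

# Bind a raw socket to the interface attached to the mirror (SPAN) port.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                     socket.htons(ETH_P_ALL))
sock.bind(("eth1", 0))

while True:
    frame, _ = sock.recvfrom(65535)
    # A real analyser would decode the frame; here we just count bytes.
    print(f"captured {len(frame)} bytes")
```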
Given technologies like virtualization and multichannel 10Gb cards, a modern data centre can generate a huge amount of monitoring traffic from many sources, including mirror ports and taps. The danger is that so much could be generated that it easily overwhelms existing monitoring tools, a problem that is becoming all too common in expanding data centres. Organisations try to solve it in various ways: you could buy faster interfaces for the tools, or more tools, all of which is very expensive and inefficient. As an alternative, some use Ethernet switches with complex filters to try to isolate the traffic. Unfortunately this approach is difficult to implement, does not scale and ultimately does not provide a long-term solution.
If we want accurate visualization of the network, the challenge is to deliver the traffic to the tools in a simple-to-digest fashion that does not overwhelm them, while making sure they see only the traffic they need. If the tools cannot cope with all of the information they receive, there is no way you can visualize what is happening within the system.
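A hedged sketch of this kind of traffic steering is shown below. The tool names and match criteria are hypothetical, chosen purely for illustration: each rule sends one slice of the traffic to one tool, so no tool receives more than it needs.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    vlan: int
    proto: str      # e.g. "TCP" or "UDP"
    dst_port: int

# Hypothetical mapping rules: which traffic each tool should receive.
RULES = [
    ("voip-analyser",   lambda p: p.proto == "UDP" and 16384 <= p.dst_port <= 32767),
    ("web-monitor",     lambda p: p.proto == "TCP" and p.dst_port in (80, 443)),
    ("compliance-tool", lambda p: p.vlan == 100),
]

def route(packet: Packet) -> list[str]:
    """Return the tools that should see this packet; unmatched traffic is dropped."""
    return [tool for tool, match in RULES if match(packet)]

print(route(Packet(vlan=100, proto="TCP", dst_port=443)))
# -> ['web-monitor', 'compliance-tool']
```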
This challenge was recognized some years ago, and a new technology was created to solve it. The idea was pretty simple: build a network specifically to transport all of the monitoring traffic and deliver it to the tools. The complexity comes when you realize that traffic taken from multiple points in the network can be duplicated, and that it is a mixed collection of different types of traffic, not all of which is of interest to every tool. This means the monitoring network needs to de-duplicate the traffic and sort it so that the right traffic goes to the right place. While we are doing this, why not time-stamp it, identify the port it came from and tune it in a variety of other ways? Thus the visibility fabric was born: a distributed network that can deliver the right traffic to the right tool at the right time in the right format. Visualizing the traffic on the network and within the system therefore becomes much cheaper, can be centralized and becomes much more efficient.
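To make the de-duplication idea concrete, here is a minimal sketch under stated assumptions (the hashing scheme and the 50 ms window are illustrative choices; a real fabric does this in hardware at line rate). Identical frames seen from two tap points within the window are forwarded only once, tagged with a timestamp and the ingress port.

```python
import hashlib
import time

SEEN: dict[str, float] = {}    # frame digest -> time first seen
WINDOW_S = 0.05                # assumed de-duplication window of 50 ms

def ingest(frame: bytes, ingress_port: str):
    """Forward a frame unless an identical one was seen recently."""
    digest = hashlib.sha256(frame).hexdigest()
    now = time.monotonic()
    first_seen = SEEN.get(digest)
    if first_seen is not None and now - first_seen < WINDOW_S:
        return None                      # duplicate from another tap; drop it
    SEEN[digest] = now
    # Tag the surviving copy before sending it on to the tools.
    return {"ts": now, "port": ingress_port, "frame": frame}

pkt = b"\x00\x01\x02\x03"            # placeholder frame bytes
print(ingest(pkt, "tap-1"))          # forwarded, with timestamp and port tag
print(ingest(pkt, "tap-2"))          # None: recognized as a duplicate
```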
Building a visibility fabric opens up opportunities for the future. It makes the roll-out of new services much quicker, as greater visibility means that evaluating new equipment becomes easy.

As more traffic is generated and higher speeds are built into the infrastructure, the visibility fabric will be able to move with that growth, scaling to match the largest of data centres across multiple locations.
Traffic visualization is becoming ever more important, but without accurate visibility of the network it cannot deliver its full potential. The visibility fabric is a key partner in this environment and will become a key component of the modern data centre.