PEOPLE SPEAK OF “disruptive technologies” as if they were an explosive force, but the disruption can also happen slowly over time. The enterprise network might seem more cohesive than disruptive – but look at what has happened over the years.
It began as a network of cables linking departments and workers to increase business efficiency. Instead of circulating hard copy, necessary data came straight to your desk, saving time and increasing productivity. But that ready availability meant that there was less need for the worker to be tied to one desk or office, leading to a more fluid “mobile worker” environment that was accelerated by the addition of wireless.
Meanwhile people’s expectations were being raised, and consumer products were developed to meet the demand for “information anywhere”. The traditional fixed structure of the enterprise is now not only becoming fluid with mobile workers but also permeated with a fast-evolving ecosystem of BYO devices.
From another angle, the ubiquity and efficiency of the network makes it the obvious choice for converging a diversity of other site functions – such as security, access control, fire alarms, environmental and process control systems.
The result today is that the enterprise network, originally conceived as a fixed infrastructure to serve a relatively static business environment, is increasingly forced to contend with the conflicting demands of a very diverse set of services that fluctuate in real time. This puts enormous strain on those who manage the system. Worst of all, the virtualization trend further accelerates this dynamism by imposing the mobility of virtual machines across the network, introducing changes at machine speeds.
The way to solve this incongruous situation, and relieve the strain on the IT department, is surely to virtualise the network itself – allowing it to grow less rigid, more responsive and flexible to meet evolving business demands.
Software Defined Networking provides a means to do this, because it provides a central network controller operating via a separate control plane. This separation allows data to be gathered and major changes to be rolled out across the network in real time, instead of via manual configuration of individual switches or via the limited capabilities of SNMP.
Public/private data centres and their needs
We’ve focused so far on the challenges in the enterprise, and the most discussed problems today tend to revolve around mobility, BYOD and the resulting security issues.
Suppose important data is requested by an application on behalf of someone presenting the name and password of a member of staff – but they are not inside the building and physically connected, and they are using an unfamiliar new smartphone. Can this be allowed?
Today’s security applications are limited by the data available to them – MAC or IP addresses, names and passwords. But SDN’s separate control plane opens up the field for innovation with a new breed of “content rich” APIs. Tomorrow’s access policies will be more specific: asking not only who is using the network, but also what that person is allowed, where they can access it and at what time. A short-term contractor, for example, might only be granted access when in a certain office and for the duration of their contract.
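A “content rich” policy of the kind described above – who, what, where and when – can be sketched in a few lines. The policy fields, the contractor example and the function name below are all illustrative, not any real controller API:

```python
from datetime import datetime, time

# Hypothetical access policy record for a short-term contractor: what they
# may access, from where, during which hours, and until what date.
POLICY = {
    "contractor42": {
        "resources": {"project-share"},       # what may be accessed
        "locations": {"office-london"},       # where access is allowed
        "hours": (time(9, 0), time(17, 30)),  # when access is allowed
        "expires": datetime(2014, 6, 30),     # contract end date
    }
}

def access_allowed(user, resource, location, when):
    """Check a request against all four dimensions of the policy."""
    rule = POLICY.get(user)
    if rule is None:
        return False
    start, end = rule["hours"]
    return (resource in rule["resources"]
            and location in rule["locations"]
            and start <= when.time() <= end
            and when <= rule["expires"])
```

The point is not the lookup itself but where it runs: hosted on an SDN controller, a decision like this can be enforced network-wide rather than configured switch by switch.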
Virtualization opens up a further can of worms. There may be good reason for a VM to be moved to a faster server in another data centre, but what if the network there is more congested? Or, if the gain in processing speed is offset by greater distance from the data storage? These are issues that could be anticipated by an SDN controller and traffic management applications created to optimize the movement of VMs and to reconfigure the network automatically.
Similar challenges face the service provider but, for the public data centre, they are overshadowed by questions of scale – how to scale massively to add virtual networks. Current approaches cannot cope: the VLAN ID is a 12-bit field, capping the number of distinct VLANs at 4,096, far short of the 10,000, 20,000 or more tenants a public data centre must keep apart – so the industry is looking to SDN to provide segmentation and the security to keep user networks distinct.
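The arithmetic behind that ceiling is simple. VXLAN, one overlay approach the industry has paired with SDN controllers, widens the segment identifier from 12 to 24 bits:

```python
# A VLAN tag (IEEE 802.1Q) carries a 12-bit ID; a VXLAN header carries a
# 24-bit VNI. The difference is four thousand segments versus sixteen million.
vlan_ids = 2 ** 12    # 4,096 possible VLAN IDs (a few are reserved)
vxlan_vnis = 2 ** 24  # 16,777,216 possible virtual network segments

print(vlan_ids, vxlan_vnis)
```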
The move to providing value added services – SaaS, PaaS and IaaS – also requires massive scaling to accommodate sufficient storage and VMs. The attraction to customers of these services lies in their ability to sign up and change services on demand to meet business need, and that translates into a high level of dynamism imposed on the physical network – more than can ever be achieved manually. SDN offers the potential to manage these changes and policy updates automatically from a central controller.
The same applies to a lesser extent in the enterprise – a department requires a new service and it must be accommodated on the network quickly and without excessive teething problems. So the differences between the public and private data centre needs are more differences of emphasis.
The SDN model
In place of distributed control across each network device, current SDN practice simplifies the switch to function as a data plane monitored and managed by a separate control layer directed from a central controller. This central controller has a helicopter view of the entire network – both what is required of it and also how it is performing.
So, for example, an application might require a specific data flow, but the current configuration cannot support that level of traffic. Then we need a central controller with sufficient intelligence to recognise this discrepancy and automatically re-configure the network to support that data flow – rather than just sound an alert and require operator intervention.
SDN provides the architecture to enable this to happen, and a standard API such as OpenFlow is designed to communicate the necessary data and control signals, but the actual intelligence must be provided by an appropriate application. This is the opportunity SDN offers to a new breed of network application vendors: to build bespoke applications to meet specific network operator’s needs and, what’s more, to recognize common needs and provide “off-the-shelf” network apps supported by a recognized standard such as OpenFlow.
Application examples
Virtual machines can be provisioned in minutes, while non-SDN networks require manual configuration. When users want to reshape the cloud service it is not the VM but the network that is the bottleneck, delaying delivery for days or weeks while the manual work is completed. If the aim is to spread workloads across multiple data centres for flexibility and high availability, this becomes a monumental task. The traditional network architecture is not scalable or agile enough to meet user expectations for cloud computing.
Big Virtual Switch from Big Switch Networks is a successful example of an application that runs on OpenFlow-enabled networks to resolve this problem. It creates a unified network topology and automatically distributes a forwarding table for each OpenFlow-enabled physical and virtual switch. By isolating the traffic of data centre workloads into Virtual Network Segments according to programmed policies, it supports dynamic workload provisioning and multi-tenant networks on a massive scale – accommodating more than 32,000 Virtual Network Segments and more than 1,000 switches.
This is not just a one-off action, for it dynamically updates the Virtual Network Segments to reflect real-time changes in workload definitions learned from the network and through integration with third-party applications, including cloud management platforms. Each network segment can support rich network security settings, quality of service policy, and other policies.
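In the same spirit (though not Big Switch’s actual implementation), the core idea of policy-driven segmentation can be reduced to a toy sketch: workloads are grouped into segments by tenant, and the grouping is simply recomputed whenever workload definitions change:

```python
# Toy illustration of multi-tenant segmentation: map each workload to a
# segment keyed by tenant, so traffic isolation follows policy, not wiring.
def build_segments(workloads):
    """workloads: mapping vm_name -> tenant. Returns tenant -> set of VMs."""
    segments = {}
    for vm, tenant in workloads.items():
        segments.setdefault(tenant, set()).add(vm)
    return segments

segments = build_segments({"vm1": "acme", "vm2": "acme", "vm3": "globex"})
```

When a VM moves or a new workload appears, re-running the mapping and redistributing forwarding tables is a controller operation, not a switch-by-switch reconfiguration.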
There is no better example of SDN’s traffic engineering potential than that provided by Google in their WAN backbone spanning data centres in Europe, North America and the APAC region. The project began in 2011, before most people had even heard of OpenFlow, so Google built its own network switch from merchant silicon and open source routing stacks with OpenFlow support in order to address five main challenges:
• Big networks don’t behave predictably enough
• Failure response and performance are suboptimal
• Difficulties in configuring and operating large networks
• Dependency on manual, error-prone operations
• In addition, Google did not have the advantage of starting from scratch as in an academic pilot project – they needed to connect to existing networks
The aim was to optimize the WAN routing for high performance and network utilization, while being able to monitor and control network behaviour from a central point. A vendor-agnostic solution was essential, and at each site Google had multiple switch chassis allowing scalability to multi-terabit bandwidth as well as providing fault tolerance.
Although centralized control is a feature of the SDN concept, when spread over an intercontinental WAN it made more sense to have central controllers at each data centre linked into an overall traffic engineering controller. Google’s centralized traffic engineering (TE) service collects real-time use and topology data from the network and calculates bandwidth demand from the applications and services. It then computes the best traffic flow path assignments and uses the OpenFlow protocol to program those into the switches. As demand fluctuates, or unanticipated events happen in the network, the TE service re-computes and reprograms the system.
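A much-simplified sketch of that TE loop follows: gather link utilisation, then assign each demand the known path whose busiest link is least loaded. The topology, the candidate path sets and the loading figures are all invented; the real service computes paths over live data and programs them via OpenFlow:

```python
# Toy traffic-engineering step: choose, for each demand, the candidate
# path whose most-utilised link has the most spare capacity.
utilisation = {("eu", "us"): 0.8, ("eu", "apac"): 0.3, ("apac", "us"): 0.2}

PATHS = {
    ("eu", "us"): [
        [("eu", "us")],                    # direct path
        [("eu", "apac"), ("apac", "us")],  # detour via APAC
    ],
}

def path_load(path):
    # A path is only as good as its busiest link.
    return max(utilisation[link] for link in path)

def assign(src, dst):
    return min(PATHS[(src, dst)], key=path_load)

print(assign("eu", "us"))  # the detour wins: its busiest link is at 0.3, not 0.8
```

Re-running this computation as `utilisation` changes is exactly the re-compute-and-reprogram behaviour described above, just with the hard parts (real topology, demand estimation, flow programming) left out.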
Two years on, the system has proved itself to the extent that Jim Wanderer, Google’s Director of Engineering, Platforms Networking can claim: “OpenFlow in particular, and SDN in general, are still in the early stages of development, but I can say with confidence that, even at this stage, OpenFlow has proved its value to us in optimizing traffic efficiency on an extremely large real-world WAN.”
My third example is one that too often gets overlooked, and it is the question of policy and identity management in a situation such as VDI (virtual desktop infrastructure). In place of the usual asymmetric traffic model we now have two-way traffic running between the access device and the server, and we need the same identity management policies to apply to both traffic flows – even if the VDI server and the desktop are located a long distance apart. Maintaining consistent policies between, say, an office in London and a data centre in Berlin is a significant challenge, especially when the desktop is being accessed over a range of devices from smartphones to desktop computers. Here again, a central SDN controller can keep track of users and devices at both ends and ensure that consistent authentication, admission control, authorisation and other policies are maintained and applied across the service without provisioning on a device-by-device basis.
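The symmetry requirement can be made concrete with a small sketch: one policy record per user, rendered once for each direction of the flow, so both sites enforce identical settings. All names and fields are illustrative:

```python
# One policy record per user; the controller renders it for both directions
# of the VDI flow, so London and Berlin enforce the same settings without
# per-device provisioning.
POLICIES = {"alice": {"authenticated": True, "qos": "interactive"}}

def rules_for_flow(user, client_site, server_site):
    policy = POLICIES[user]
    return [
        {"from": client_site, "to": server_site, **policy},
        {"from": server_site, "to": client_site, **policy},
    ]

rules = rules_for_flow("alice", "london-office", "berlin-dc")
```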
Conclusion
My three examples might seem modest in terms of a technology predicted to change the entire networking game, but I present them simply as examples of what is already being done and is proving of value in easing today’s key pain points. This is just a beginning, and other proprietary approaches – such as shortest path bridging traffic management – do exist but offer nothing to match the flexibility, openness and scaling potential of the SDN concept.
In just two years a lot has already been achieved, but it is nothing compared to what will happen once an open standard such as OpenFlow becomes the norm and we have the market base to support that bright young breed of network application designers.