Derek Watkins, Vice President of Sales EMEA & India at Opengear, examines the key technologies and adoption trends of Software Defined Networking (SDN) and suggests that data centres getting ready for the shift should also consider how smart out-of-band remote access can help underpin the present and protect future investment.
As technology shifts go, the move to a software defined networking world is a major one that proponents argue is vital for society to cope with the 50 billion connected devices expected to be in operation by 2020, according to research from Ericsson. One popular SDN philosophy suggests decoupling the control, application and data/forwarding planes into aspects managed by software while utilising more commodity hardware. For customers, the SDN paradigm could allow true interoperability and dynamic re-configurability without the current complexity of mismatched features and highly proprietary technology stacks.
Although it has only recently moved from theory to products, SDN has been on the drawing board for a while now. Growing from the roots of active networking in the late 1990s, SDN really rose to prominence following a 2005 Stanford University thesis on the topic written by Martin Casado, who went on to found early SDN pioneer Nicira Networks, later acquired by VMware. From there, the OpenFlow protocol for directing network traffic using centrally controlled flows took shape.
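The central idea behind OpenFlow, a controller installing prioritised match/action rules into switch flow tables, can be sketched in a few lines of Python. This is an illustrative model only, not a real OpenFlow implementation; the class, field names and actions below are simplified stand-ins, and a real switch matches on many more header fields.

```python
# Illustrative model of OpenFlow-style flow tables: a central controller
# installs match/action rules, and the switch forwards packets by
# looking them up in priority order. All names here are simplified.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match_dict, action)

    def install(self, priority, match, action):
        """Controller pushes a rule into the switch's flow table."""
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority first

    def lookup(self, packet):
        """Switch applies the action of the first matching rule."""
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss: punt to the controller

table = FlowTable()
table.install(10, {"dst_ip": "10.0.0.2"}, "output:port2")
table.install(5, {"vlan": 100}, "drop")

print(table.lookup({"dst_ip": "10.0.0.2", "vlan": 100}))  # output:port2
print(table.lookup({"dst_ip": "10.0.0.9", "vlan": 100}))  # drop
print(table.lookup({"dst_ip": "10.0.0.9", "vlan": 200}))  # send_to_controller
```

The "table miss" default is the key architectural point: packets the switch does not know how to handle are referred back to the central controller, which is what makes the control plane genuinely separate from forwarding.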
The Cisco path
Even with the talk of SDN changing the networking landscape, the biggest player is still Cisco, and it is promoting a slightly different take on SDN that focuses on how the technology can improve the performance and behaviour of key applications that use the network. The networking giant's Application-Centric Infrastructure (ACI) initiative promises to help tackle the problem of application performance by using SDN-compatible hardware with various APIs such as OpenFlow, combined with software policy controllers that define service levels and access privileges on the network hardware. This level of network reconfiguration and flexibility extends across both the physical and virtual network, but for now ACI is only a Cisco-driven vision and real-world customers are thin on the ground.
ACI is promoted as having a number of benefits for enterprises, data centres and service providers. At an application level, Cisco claims it dramatically reduces the time needed to deliver services and allows on-demand scale-out and tear-down, leading to more predictable application and IT spending.
Other SDN players are pursuing the same goal by different paths. Pure software vendors such as Big Switch promote their smart software running on bare-metal switch hardware: high-end data centre Ethernet switches sold without an operating system, which they claim cost a small fraction of the price of a tier-1 branded switch sold with a switch OS.
Others, such as Arista, are building their own branded hardware with powerful operating systems that support initiatives like OpenFlow and VMware NSX but are also extensible through scripts to accommodate new SDN offshoots like Cisco's, or esoteric use cases such as the unusual failover requirements of supercomputer clusters.
Still, SDN is in its early days. Companies are figuring out what it is, how it can be woven into their current infrastructure and what it will cost them. A Juniper Networks survey of 400 IT decision-makers in the U.S. found that slightly more than half were planning to adopt SDN, with network performance and efficiency cited as the top gains. Reduced operating costs were also important, although finding the most economical way to implement SDN remained the number one challenge.
For all the surveyed intentions, there is still a major hurdle around interoperability. Even though the technology has gained industry-wide acceptance from companies including Deutsche Telekom, Facebook, Google and Verizon, OpenFlow is not the only game in town, and getting it to the point where it forms the basis of a reliable, multi-vendor standard is still a challenge.
Opengear, along with other vendors including Cisco, Brocade and HP, has donated a number of networking and out-of-band access products to Indiana University's SDN Interoperability Lab, a neutral, third-party facility that tests OpenFlow products in a heterogeneous, multi-vendor environment. The Lab is operated by InCNTRE (Indiana Center for Network Translation Research and Education), in collaboration with IU's Global Research Network Operations Center (GlobalNOC), and is located on the Indiana University-Purdue University Indianapolis campus.
The lab, like several others around the world, is developing testing tools, methodologies and procedures around SDN and contributing to Open Networking Foundation (ONF) working groups to help create a more reliable, standards-based framework for SDN adoption.
Even with the control and data planes potentially coming under the control of software, all core networking components remain reachable through a common denominator: the physical connectivity of a serial or USB port to the internal management console. As organisations start to deploy SDN, many also recognise that they will need to maintain alternative methods of physically reaching distant switches alongside the still non-standardised SDN control plane.
Although SDN should reduce complexity and improve the flexibility of network configuration, the reality is that network elements still fail. Whether through simple misconfiguration or component failure, even with SDN there is a real-world need to access devices via an alternative method. This is evident in InteropNET SDN, which is using out-of-band (OOB) management devices from Opengear to give central technicians the low-level connectivity needed to identify and deal with issues remotely. The most basic failure is a "lock-up", at which point the device becomes unresponsive. Even with clever SDN technology, the only realistic option is to cycle the power and perhaps roll back the configuration to a "last known good" state. These device-specific actions are still only possible with direct access to power and, in most instances, direct access to the console, either through the serial port or Ethernet-based console access. Although unglamorous, there is no provision within SDN to remove these core access points, and the current generation of SDN-badged network devices are all equipped with console access for good reason.
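The recovery workflow described above can be sketched as a short script. This is a minimal sketch only: the device dictionary and the `recover_device` function are hypothetical stand-ins, and a real deployment would drive a console server's CLI or API to switch PDU outlets and push configuration over the serial console.

```python
# Sketch of an out-of-band recovery workflow for a locked-up switch.
# The data structures and function here are hypothetical illustrations;
# in practice these steps go through a console server's OOB path.

def recover_device(device, last_known_good_config):
    """Attempt recovery over the OOB path; return the actions taken."""
    actions = []
    if device["responsive"]:
        return actions  # reachable in-band, nothing to do

    # Step 1: cycle power via the OOB-managed PDU outlet.
    actions.append("power_cycle")
    device["responsive"] = True  # assume the reboot cleared the lock-up

    # Step 2: if the running config is suspect, roll it back over the
    # serial console to the last known good configuration.
    if device["config"] != last_known_good_config:
        device["config"] = last_known_good_config
        actions.append("rollback_config")
    return actions

switch = {"responsive": False, "config": "bad-change"}
print(recover_device(switch, "golden-config"))
# ['power_cycle', 'rollback_config']
```

The point of the sketch is that neither step depends on the production network being up: both power and console access travel over the independent out-of-band path.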
These OOB tools increasingly use 3G and 4G networks as the alternate path, which extends their geographic reach to devices outside the data centre, especially in an era expecting a proliferation of Internet-of-Things use cases.
Irrespective of which vendor or technology becomes the dominant SDN framework, the market is clearly growing. A recent report from MicroMarket Monitor estimated that SDN revenue would be about $160 million in 2014 but expand at a 55 percent compound annual growth rate, reaching $1.4 billion by 2019. For data centre owners heading down the SDN route, there is almost certainly going to be a period of costly forklift upgrades. Smart OOB appliances installed today in data centres, racks and remote sites to gain remote console access over 3G/4G and LTE networks will help facilitate this migration and provide ongoing management services. Although considered a traditional technology, remote console access and physical remote connectivity will remain just as necessary in an SDN future.
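For readers checking the forecast figures, the quoted numbers are internally consistent: $160 million compounding at 55 percent a year over the five years from 2014 to 2019 does come out at roughly $1.4 billion.

```python
# Sanity check of the quoted market forecast: $160M in 2014 growing at
# a 55% compound annual growth rate for five years (2014 -> 2019).
revenue_2014 = 160e6
cagr = 0.55
years = 5
revenue_2019 = revenue_2014 * (1 + cagr) ** years
print(f"${revenue_2019 / 1e9:.2f} billion")  # $1.43 billion
```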