Architecting the data centre of the future

Today’s businesses require data centres that can operate at the same rapid pace as the business itself: deploying new applications quickly and efficiently, providing fast and reliable data access 24/7, and meeting or exceeding stringent service levels with zero downtime. By Nick Williams, Senior Product Manager, EMEA Data Centre IP, Brocade.


CLOUD COMPUTING and virtualisation are critical when it comes to IT services successfully matching the ever-increasing pace of business. By removing the need to provision on-premise physical infrastructure for every application, these technologies have the potential to make businesses far more agile and responsive, giving them the ability to react to changing market forces and deploy applications and services faster and more cost-effectively than ever before. However, for these benefits to be realised, the underlying data centre architecture will need to evolve significantly.

High-level architecture model
IT needs to do more with less — that is one thing that everyone agrees on. Therefore, any high-level data centre architecture should also help to reduce operating costs. While it may be tempting to trade performance and reliability for lower cost, with ever more applications becoming mission critical, a sound architecture that delivers greater reliability over time will nearly always work out cheaper in the long term. It also improves performance so that critical data can be accessed faster and more reliably, even as traffic volumes continue to rise exponentially.


So, how can IT departments make sure that their network architecture has what it takes to support the business, not just now, but over the long-term?
Start with the target design


The first step is to decide how you want your data centre to look in the future. You need to select the extent and scope of LAN/SAN convergence, the number of layers within each network, and the number of switching tiers in each layer. To do this, there are four questions about your ideal model that need to be answered to make sure that all of your future investments bring you closer to that goal:


1. How do you connect physical servers together and to the rest of the network?

2. Will there be an aggregation layer on the LAN or will large virtual servers connect directly into a high-port-count collapsed access/aggregation layer?


3. Do you want to be locked into one orchestration tool and hypervisor vendor or will you select different solutions to meet differing needs for separate applications and departments?


4. Will you be able to continue using any of your existing equipment in that design? And if not, why not?


In addition to these questions, it is vital that you have a clear picture of your current equipment and how it is being used today. For too many organisations, this information is missing or incomplete. If that is the case, you need to perform a thorough audit. Without comprehensive and up-to-date knowledge of the present situation it’s hard, if not impossible, to build a vision for the future and to fully embrace virtualisation and cloud computing.
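To illustrate what such an audit might produce, the sketch below assumes a deliberately simple inventory format (the device names, roles, and firmware fields are invented for the example). It flags any device whose role or firmware version was never recorded, which is exactly the kind of gap that blocks a credible target design:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    role: str      # e.g. "access", "aggregation", "core"; empty if unknown
    firmware: str  # recorded firmware version; empty if unknown

def audit(devices):
    """Return the names of devices whose role or firmware is undocumented.

    A non-empty result means the inventory is incomplete and a physical
    audit is needed before planning the target design.
    """
    return [d.name for d in devices if not d.role or not d.firmware]

# Hypothetical inventory for illustration only
inventory = [
    Device("tor-01", "access", "7.4.1"),
    Device("agg-01", "", "7.2.0"),   # role never recorded
    Device("core-01", "core", ""),   # firmware unknown
]

print(audit(inventory))  # → ['agg-01', 'core-01']
```

In practice the inventory would come from discovery tools or a CMDB rather than a hard-coded list, but the principle is the same: make the gaps visible before committing to a design.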


Building the infrastructure
Once the overall design direction has been established, the next step towards a flexible cloud model is virtualisation. While virtualisation technology is not new, the scale at which it is being deployed is unprecedented and this is adding significant complexity to the network. In a virtualised data centre, where every aspect of every piece of hardware is abstracted from every other piece of hardware, it is far easier to deploy, re-configure or manage applications and services. In practice, this means that moving data from one array to another, moving applications from one server to another, moving network services from one switch to another—essentially any add, move, or change operation—could be conducted without applications or users knowing anything about it.


A sound infrastructure for the next-generation data centre should be robust enough to support virtualisation of every component and the data centre itself. Not all applications require or even benefit from every type of virtualisation. Therefore, the infrastructure must be flexible enough to support both applications on dedicated hardware and applications on virtualised hardware.


Network convergence in the data centre
There are many different ways to design a data centre to support virtualisation and cloud computing services, and there are a number of key decisions to make. An important decision that impacts the top-level network architecture is network convergence. Whether or not you deploy network convergence, you can still achieve the cost reductions that are driving virtualisation and private cloud computing.


Network convergence of LAN and SAN traffic has broad implications beyond the simple notion of merging traffic on the same wire to reduce capital costs. Fortunately, convergence is not an all-or-nothing proposition. Essentially, there are four distinct options to choose from:


1. No convergence
Retaining the classic architecture is a valid choice. However, if you integrate LAN and SAN management functions with virtual server orchestration software, then you can automate changes when VMs are moved across physical servers, resulting in considerable time and resource savings.


2. Management convergence
The much-sought-after single-pane-of-glass management, whereby all metrics, applications and hardware can be monitored and maintained from a single point, requires increased convergence so that management tools can talk to LAN and SAN switches at the same time.


3. Layer 2 technology convergence
Under this option, you can retain physically separate networks, but use the same type of Layer 2 (data link) infrastructure for IP and Fibre Channel traffic.

4. Access layer convergence
Physically converging IP and Fibre Channel traffic inside the server, the external network adapter and the top-of-rack switch can minimise costs and streamline maintenance by reducing the number of cables and switches required.
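The orchestration integration described under option 1 can be sketched in outline. Everything below is a hypothetical stand-in, not any vendor's real API: the event dictionary, the `add_vlan`/`remove_vlan` calls, and the switch and port names are all invented to show the shape of the automation, namely that when a VM migrates, its network profile follows it to the new access port:

```python
def on_vm_migrated(event, switch_api):
    """Apply a migrated VM's network profile to its new physical location.

    `event` and `switch_api` are illustrative abstractions; a real
    deployment would receive events from the hypervisor's orchestration
    layer and drive the switches through their management interface.
    """
    vlan = event["vlan"]
    # Remove the VLAN from the access port the VM has just left...
    switch_api.remove_vlan(event["old_switch"], event["old_port"], vlan)
    # ...and provision it on the port now serving the VM.
    switch_api.add_vlan(event["new_switch"], event["new_port"], vlan)

class RecordingSwitchAPI:
    """Stand-in for a real switch management interface; records calls."""
    def __init__(self):
        self.calls = []
    def remove_vlan(self, switch, port, vlan):
        self.calls.append(("remove", switch, port, vlan))
    def add_vlan(self, switch, port, vlan):
        self.calls.append(("add", switch, port, vlan))

api = RecordingSwitchAPI()
on_vm_migrated(
    {"vlan": 120, "old_switch": "tor-01", "old_port": "eth1/4",
     "new_switch": "tor-07", "new_port": "eth1/2"},
    api,
)
print(api.calls)
```

The design point is that the network change is driven by the same event that triggers the VM move, so no manual reconfiguration is needed and no window exists in which the VM is reachable on the wrong port.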


As with any infrastructure project, the total cost of each of these options should be considered beyond the initial deployment, as each will have a different impact on performance, availability, security and operations. Physical network convergence is one option that will grow in popularity as technology develops, but it is not inevitable for every customer or application. When and how far to converge IP and Fibre Channel traffic is a decision that should be made in the context of all of your organisational requirements.


Revolution through evolution
Much of the discussion around server virtualisation and cloud computing highlights the value of immediate access to unlimited amounts of computing, storage, and network bandwidth. However, the effectiveness of virtualisation and cloud computing is heavily dependent upon the data centres and the physical hardware on which they are based. To support virtualisation, the data centre architect has to harden the network against failures while keeping it adaptable and flexible, all without disrupting traffic and while continuing to support existing data centre assets.
Crucially, achieving this does not require a full upgrade to the entire data centre network from the access layer to the core. In fact, the best approach is always to implement a virtualisation and private cloud computing architecture at a pace that makes sense for the individual business. IT teams need to make sure they evaluate their own organisational requirements carefully and consider each of the options outlined above in order to select the long-term strategy that will work best for the business.