Questions like these were asked at a recent DG Connect meeting in Brussels in April, where the industry met with EU Commission officials to discuss what actions the Commission should take to help the EU and member states meet their energy and GHG targets. The crux of the matter is deciding what you measure and what behaviours you legislate against – clearly, the biggest energy consumer in data centres is the “compute” element, and therein lies the age-old problem: buying an energy-efficient fridge for someone who stores a single can of beer in it isn’t very environmentally friendly. So the solution, as we all know, lies in a joined-up approach. Our challenge is to work on bringing forward credible solutions and the call to action is now – DCA Certifications is one solution, and we are preparing an awareness seminar in June that is not to be missed.
On other government matters, I’m very pleased to report a useful workshop with the UK Government’s Department for Business, Innovation & Skills (BIS) and UK Trade & Investment, where we discussed actions to raise the profile of the data centre sector and encourage both inward investment and export from the UK’s “world class” data centre sector. I hope to report on progress and the ongoing DCA relationship with BIS in the near future; thanks to all DCA members who responded to the call to participate. On the skills front, I’m pleased to report a successful second “DCA Bootcamp” training pilot, this time a one-day intensive session designed to cover the basics. Over 50 students attended the day at TU Delft in the Netherlands; a full report will follow.
Feature Focus
By Louise Fairley, Marketing Manager, Data Centre Alliance.
AS PART OF OUR FEATURE FOCUS throughout the year, May is dedicated to Industry Trends & Standards, and we have expert contributions from DCA members UPS Ltd, Fluke Networks and Sentrum Colo.
‘UPSs that effectively protect their critical load from mains power failures and disturbances are essential to modern data centres. To successfully fulfil this role in today’s business conditions, they must do so while offering near-perfect availability, high efficiency and easy scalability’. Kenny Green, Technical Support Manager at Uninterruptible Power Supplies Ltd (UPS Ltd), looks at how available UPS topology allows data centre operators to meet these exacting requirements.
Roger Holder, EMEA Field Marketing Manager at Fluke Networks, discusses AANPM: Application Aware Network Performance Management, also referred to as Network Performance Monitoring and Diagnostics (NPMD), and the growth in this emerging technology market, emphasising the need for a single integrated solution to monitor, troubleshoot and analyse networks and—even more importantly—the applications and services they carry.
Cloud computing has taken off in recent years, with Gartner senior analyst Ben Pring calling it the “phrase du jour”, a sentiment echoed across the industry. According to a recent report by Technology Business Research Inc., private cloud adoption will generate $69 billion in revenue by 2018.
However, banks are seemingly hesitant about moving to the cloud. Stephen Scott, Managing Director of Sentrum Colo, outlines the potential reasons behind this and how it may play out in the future.
The editorial contributions from our members are key to making our feature focus such an informative and successful part of the DCA’s information library, so thank you to all our contributing members this month.
For the June edition we focus on ‘Local/regional government legislation & stakeholders’, and we are currently taking bookings.
For more information on our forward features and how to book your slot, please visit the Media Centre on Data Central and select ‘Submit an article’.
Events
We have already signed up to partner many events this year, including:
Data Centres Europe
27-29 May, Monaco
Gartner IT Infrastructure & Operations Management Summit
2-3 June, Berlin, Germany
Cloud World Forum
17-18 June, London
Data Centre Transformation Conference
8 July, Manchester
Data Centre Expo
8-9 October, London
Powering the Cloud
28 October, Frankfurt
Emex
19-20 November, London
Details of all events are available via www.data-central.org, with many carrying special discount offers for DCA members.
We hope to see you in Monaco for Data Centres Europe, where the DCA team will be on stand B18. If you are a DCA member and plan to visit, please contact a member of the team, as there is a significant discount on offer for registration.
Introducing application aware network performance management:
The new equation for faster problem-solving
By Roger Holder, EMEA Field Marketing Manager, Fluke Networks.
ORGANISATIONS are increasingly dependent on the performance of their business applications – which, in turn, depend on the performance of their network infrastructure. To keep the business running smoothly, the performance of both applications and the network must be maintained at the highest levels.
The traditional approach has been to monitor network and application performance separately, using different systems run by different teams. However, this is becoming more difficult as virtualisation extends from the data centre to the desktop and the use of cloud services continues to grow. When application problems occur in a hybrid cloud environment, how does the organisation determine whether the problem lies in its own infrastructure or that of its cloud provider? Trying to work out who owns a problem when all groups are reporting green KPIs is increasingly difficult and time-consuming.
A recent survey* of network professionals conducted by Fluke Networks indicated that more than half do not have the tools they need to quickly and accurately identify VoIP, application and other network performance issues. Organisations need end-to-end visibility across layers 1-7, from the data centre to the branch office, if they are to identify the source of any performance problems quickly and solve them before incurring costly downtime.
The solution that has emerged is termed AANPM: Application Aware Network Performance Management, which is also referred to as Network Performance Monitoring and Diagnostics (NPMD). In March Gartner published the first Magic Quadrant on the Network Performance Monitoring and Diagnostics market**, which foreshadows growth in this emerging technology market. It emphasises the need for a single integrated solution to monitor, troubleshoot and analyse networks and—even more important—the applications and services they carry. Fluke Networks has been named as a Leader in the NPMD Magic Quadrant.
AANPM is a method of monitoring, analysing and troubleshooting both networks and applications. It takes an application-centric view of everything happening across the network, providing end-to-end visibility of the network and applications and their interdependencies, and enabling engineers to monitor and optimise the end user experience. It does not look at applications from a coding perspective, but in terms of how they are deployed and how they are performing.
By leveraging data points from both application and network performance methodologies, AANPM helps all branches of IT work together to ensure optimal performance of applications and network. It helps engineers overcome the visibility challenges presented by virtualisation, BYOD and cloud based services and identify problems anywhere along the network path. It also provides application performance data to identify when a user is experiencing poor response times and which application component is contributing to the delay. This actionable performance data can be shared with the applications team to identify what led to the problem and which component needs attention.
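To make that component-level diagnosis concrete, here is a minimal Python sketch – with purely hypothetical tier names, baselines and timings, not Fluke Networks’ own method – of how per-tier response times might be compared against baselines to flag the tier adding the delay.

```python
# Minimal sketch: flag which application tier is contributing to a slow response.
# The tier names, baselines and measurements are hypothetical illustrations,
# not data from any AANPM product.

BASELINE_MS = {"network": 20, "web": 50, "app": 120, "database": 80}

def slowest_offenders(measured_ms, tolerance=1.5):
    """Return tiers whose measured time exceeds baseline by the given factor."""
    offenders = []
    for tier, baseline in BASELINE_MS.items():
        measured = measured_ms.get(tier, 0)
        if measured > baseline * tolerance:
            offenders.append((tier, measured - baseline))
    # Sort so the tier adding the most delay is reported first.
    return sorted(offenders, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    sample = {"network": 22, "web": 55, "app": 410, "database": 85}
    for tier, excess in slowest_offenders(sample):
        print(f"{tier}: ~{excess} ms over baseline")   # -> app: ~290 ms over baseline
```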
In the data centre, organisations can realise tangible benefits from gaining visibility into where and when the network is busy. The ability to distinguish between critical business use and non-critical or recreational use, presenting the data in a form engineers understand, helps to identify quickly which links need additional bandwidth and which can be reduced. This enables better control of IT budgets, allowing for future expansion through the analysis of growth patterns in application use.
Strictly speaking, AANPM is defined as giving visibility across LAN, WAN and data centre environments – including all tiers of the server and application environment, whether virtual or physical – as well as layers 1-7, while supporting rates from 1 to 10 Gbps. The AANPM solution developed by Fluke Networks extends this further by adding the ability to monitor and troubleshoot the wireless infrastructure and to support remote locations through a portable form factor if more in-depth visibility is needed – providing comprehensive visibility from the data centre to the user device. AANPM provides seven key benefits:
End-to-end infrastructure visibility
It brings together key data points from network management systems (NMS) and application performance management systems, providing a single dashboard view and helping engineers monitor KPIs and track device performance and usage.
Faster problem-solving
Different IT teams can work together using common tools to resolve issues.
Improved user experience
Applications can exist in many different places and across different infrastructure tiers, making it difficult to discover the root cause of problems, but AANPM enables teams to monitor all levels of the user experience and address issues before they become serious.
Enhanced productivity
By speeding up MTTR (mean time to resolution), AANPM reduces expensive downtime and improves quality of service.
Cost savings
An AANPM solution eliminates the need to use multiple tools to monitor the network and application infrastructure. Additionally, Gartner advises that, because poor network and application performance impacts infrastructure costs as well as productivity, organisations need to focus on the user experience and capture data that enables them to fix the “right” problem first – which AANPM enables them to do.
Improved infrastructure optimisation
AANPM enables engineers to identify poor performance and prioritise projects such as server upgrades, make the business case for approval and verify the results. It also provides data to support capacity planning.
Better business understanding of IT
AANPM helps executives understand the cost of running critical applications and the impact if they go offline, as well as the dependencies between critical applications and the supporting infrastructure.
The key features of an AANPM system
AANPM provides performance data from both network and applications, including stream-to-disk packet storage, application response time analytics, IPFIX (NetFlow) and SNMP. A performance map enables users to watch over the entire enterprise network and isolate individual elements, transactions or even packets, either in real time or back in time.
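As an illustration of how such data sources might be brought together, the following minimal Python sketch merges assumed SNMP utilisation figures, flow records and application response times into a single per-link view; the field names and figures are hypothetical, not the format used by any particular AANPM product.

```python
# Minimal sketch of merging three hypothetical data feeds (SNMP utilisation,
# IPFIX/NetFlow byte counts and application response times) into one view per link.
# Field names and values are illustrative only.

from collections import defaultdict

snmp_utilisation = {"core-sw1:eth1": 0.82, "edge-rtr2:ge0": 0.35}
flow_bytes = [("core-sw1:eth1", "CRM", 9_200_000),
              ("core-sw1:eth1", "video", 45_000_000),
              ("edge-rtr2:ge0", "email", 1_100_000)]
response_ms = {"CRM": 410, "video": 95, "email": 120}

def build_link_view():
    """Combine utilisation, per-application bytes and response times per link."""
    view = defaultdict(lambda: {"utilisation": None, "apps": {}})
    for link, util in snmp_utilisation.items():
        view[link]["utilisation"] = util
    for link, app, nbytes in flow_bytes:
        view[link]["apps"][app] = {"bytes": nbytes, "response_ms": response_ms.get(app)}
    return dict(view)

if __name__ == "__main__":
    for link, data in build_link_view().items():
        print(link, data)
```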
Data is shown on a single dashboard so everyone can see network and application performance metrics. Users can customise the dashboard to suit their individual responsibilities, but they can also see adjacent areas, so cross-functional teams can work together to solve problems and know that they are all seeing the same information.
If a link has errors or high utilisation, users can drill down into the transaction to find out why it might be introducing latency to all the upstream transactions. Every device in the path can be analysed (Figure 1). Simple logical workflows enable the user to isolate a problem down to the individual network element, transaction or even packet behind any performance event – in real time or historically. An AANPM system stores all data flows, transactions and packets, so engineers can reconstruct events, use flow forensics for back-in-time identification of traffic on key links, and even play back VoIP calls and video streams. This is particularly useful for solving historic problems. It also helps monitor links to the cloud to ensure providers are meeting their SLAs, and to assess where extra bandwidth might be needed by showing instant, real-time bandwidth usage.
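The back-in-time idea can be illustrated with a short Python sketch that filters stored flow records for one link and a historic time window, then totals bytes per application; the record format and figures are illustrative assumptions only.

```python
# Minimal sketch of "back-in-time" flow forensics: filter stored flow records
# for a given link and time window, then total bytes per application.
# The record format and values are hypothetical.

from datetime import datetime

flows = [
    {"ts": datetime(2014, 5, 12, 9, 15), "link": "wan-1", "app": "backup", "bytes": 2_000_000_000},
    {"ts": datetime(2014, 5, 12, 9, 20), "link": "wan-1", "app": "CRM",    "bytes": 150_000_000},
    {"ts": datetime(2014, 5, 12, 14, 5), "link": "wan-1", "app": "video",  "bytes": 900_000_000},
]

def traffic_on_link(link, start, end):
    """Total bytes per application on one link inside a historic window."""
    totals = {}
    for flow in flows:
        if flow["link"] == link and start <= flow["ts"] <= end:
            totals[flow["app"]] = totals.get(flow["app"], 0) + flow["bytes"]
    return totals

if __name__ == "__main__":
    window = (datetime(2014, 5, 12, 9, 0), datetime(2014, 5, 12, 10, 0))
    # Shows that backup traffic dominated the morning window on wan-1.
    print(traffic_on_link("wan-1", *window))
```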
However you implement it, AANPM is something we can expect to hear a lot more about in data centres in 2014. Fluke Networks is a member of the DCA. DCA members are invited to visit www.flukenetworks.com/gartner to read a complimentary version of the Gartner, Inc. “2014 Magic Quadrant for Network Performance Monitoring and Diagnostics”. For more information visit http://www.flukenetworks.com/instantvisibility
About the Magic Quadrant Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
* Fluke Networks Market Research, 2013
** Gartner, Inc., “Magic Quadrant for Network Performance Monitoring and Diagnostics” by Jonah Kowall, Vivek Bhalla and Colin Fletcher, March 6, 2014
A cloudy future for the banks
By Stephen Scott, Managing Director, Sentrum Colo.
CLOUD COMPUTING has taken off in recent years, with Gartner senior analyst Ben Pring calling it the “phrase du jour”, a sentiment echoed across the industry. According to a recent report by Technology Business Research Inc., private cloud adoption will generate $69 billion in revenue by 2018.
However, banks are seemingly hesitant about moving to the cloud. Stephen Scott, Managing Director of Sentrum Colo, outlines below the potential reasons behind this and how it may play out in the future.
Why banks are not taking the plunge
Banks have very sophisticated IT and communications systems, and in some cases, these systems are quite complex due to past mergers. As banking sits in a heavily regulated sector, industry leaders may have concerns around the potential loss of control, availability and access to data. This can make it very difficult for banks to justify the investment required for cloud migration.
In order for banks to meet customer expectations for water-tight data security measures, and to benefit from the flexibility offered by cloud computing, more banks are exploring opportunities around the private cloud. Banks have the opportunity to boost the security of their data by storing it in two or three locations within a private cloud, instead of a single, owned legacy data centre. Although widespread public cloud adoption is unlikely to take off in the banking sector in the near future, the industry will see more and more banks move non-business-critical applications such as document management and email to private cloud.
In terms of new entrants to the market, banking newcomers typically focus on lending and are not held back by legacy systems, which should make migration to the cloud a simpler decision for them. Despite examples from Australia, where cloud computing is very much mainstream in the banking industry, it is unlikely that a major UK bank with a more complex business model will move its whole infrastructure to the cloud.
Australia is currently leading the pack when it comes to cloud use in banking, with Commonwealth Bank and National Australia Bank taking the leap. However, just as an aircraft manufacturer might be tempted to switch from Rolls-Royce engines to a new, cheaper alternative yet hold back over reliability concerns, a bank is likely to think twice before taking the plunge with cloud computing. Industry leaders are aware that if anything goes wrong, like an aircraft using a new-to-market engine, the consequences could be disastrous and a PR nightmare.
UK banks generally have a very complex IT and communications infrastructure, which would make cloud migration costly and time-consuming – this remains a major barrier. All banks – both industry heavyweights and newcomers – should also bear in mind that if they choose to adopt cloud computing, they may face a backlash from customers looking for a guarantee that large deposits and transactions aren’t compromised because of a new infrastructure.
The significance of a highly-regulated environment
Currently, from a legal standpoint, no cloud-specific laws exist and, in the European Union, the use of cloud computing by banks is generally governed by the same rules as outsourcing, namely the Markets in Financial Instruments Directive (MiFID) and the Capital Requirements Directive (CRD).
MiFID states that banks using cloud computing must ensure regulators have “effective access” to “data” and “premises”, which can pose a barrier to the use of public cloud services. Putting the regulations and concerns over data security and integrity to one side, there is also a cultural barrier to overcome. Traditionally, banks have built their own highly sophisticated infrastructures, so technology decision-makers may feel reluctant to let go of their systems through the use of cloud computing.
Security concerns specific to the banking industry
Banks have a huge responsibility to protect their customer data and any transactional data within their IT and communications infrastructures. Due to this, banks need to decide whether a move to the cloud will have an impact on the security level the company guarantees to its customers. This will also produce a knock-on effect for any of their insurance obligations.
Cloud computing and the future
The continued digitisation of services and the flexibility and cost benefits that cloud computing delivers are going to make it difficult for banks to keep away from the cloud. Cloud computing enables banks to keep up with technology changes while reducing costs, and to reinvent their business and operating models. When it comes to expanding a bank’s service offering through a new product or a branch, the cloud delivers a low-risk environment for testing and scaling.
Notwithstanding a gradual move to cloud services for latency-insensitive and, to some extent, security-insensitive applications, widespread cloud adoption by UK banks is still probably 10 years away. The industry is seeing many banks start to move to third-party data centres, where they might share floor space with a cloud service provider, giving them an opportunity to try out the private cloud with applications such as document management or email.
It is very likely that when a bank needs to conduct a major upgrade to its infrastructure or network, it will start looking at moving some core applications to the cloud. As things currently stand, however, adopting cloud computing would be a large, time-consuming undertaking requiring a great deal of justification to stakeholders and customers alike.
What do today’s data centres expect of their UPS?
UPSs that effectively protect their critical load from mains power failures and disturbances are essential to modern data centres. To successfully fulfil this role in today’s business conditions, they must do so while offering near-perfect availability, high efficiency and easy scalability. In this article, Kenny Green, Technical Support Manager at Uninterruptible Power Supplies Ltd (UPS Ltd), a Kohler company, looks at how available UPS topology allows data centre operators to meet these exacting requirements.
TODAY, no data centre operates without an uninterruptible power supply (UPS) in place to protect the load from mains-related disturbances and power failures. If the load is unprotected, such events have the potential to cause irreparable damage to IT hardware. Significant as this damage could be, it is unlikely to be as serious as the impact on business and reputation resulting from loss of data or of IT availability in a 24/7 online service environment.
With this in mind, data centre operators will judge a UPS by its level of availability, combined with the quality of protection it provides from mains failures and transients while on-line. However, current business, economic and even political conditions impose other pressures. The greatest of these is the need to improve energy efficiency. Although this is largely to minimise the steadily increasing cost of energy, cutting carbon emissions and ‘Going Green’ is also increasingly important. Another factor imposed by modern conditions is that data centres’ processing loads can change rapidly as demand for IT resource grows. To remain effective, UPSs must be readily scalable to keep pace with these rapid changes.
Meeting the requirements of today’s data centre users
Today’s UPSs allow data centre operators to overcome these issues. To see how they do so, we can look more closely at their technology, and at how they utilise this technology to fulfil their role.
Fig. 1 shows the major UPS components. Incoming raw mains is fed to a rectifier/charger for conversion to a DC output. This output supplies the inverter input and charges the UPS battery. When the incoming mains supply is available, the rectifier/charger keeps the battery fully charged, while the inverter also uses its DC level to develop an AC output for the critical load. If the AC mains supply fails, the inverter draws DC from the battery.
Because the battery is part of the DC bus, switchover between battery and rectifier, and back again, is seamless. The mains failure is entirely invisible to the critical load, provided it lasts less than the battery’s autonomy. The critical load is protected from incoming power aberrations as well as failures. The UPS rectifier and inverter provide a barrier to mains-borne noise and transient voltage excursions in addition to providing a well-regulated AC output.
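The behaviour described above can be summarised in a small Python sketch – a simplified model of the double-conversion path in Fig. 1, not the control logic of any real UPS – showing which source feeds the inverter under each condition.

```python
# Minimal sketch of the double-conversion path described above: when mains is
# present the rectifier feeds the DC bus and keeps the battery charged; when
# mains fails the inverter draws from the battery until its autonomy is used up.
# Values and states are illustrative, not from any particular UPS.

def ups_source(mains_ok, battery_minutes_remaining):
    """Return which source feeds the inverter, mirroring the block diagram."""
    if mains_ok:
        return "rectifier (battery kept charged)"
    if battery_minutes_remaining > 0:
        return "battery (discharging)"
    return "none - load drops unless a generator has started"

if __name__ == "__main__":
    for mains, autonomy in [(True, 10), (False, 10), (False, 0)]:
        print(f"mains_ok={mains}, autonomy={autonomy} min -> {ups_source(mains, autonomy)}")
```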
Given that the UPS’s mains failure protection capability is limited by its battery autonomy, operators must have a strategy for handling power outages that exceed it. This strategy depends on whether or not the load must continue running throughout the mains failure. If this is not essential, a battery runtime of about 10 minutes will be sufficient to ensure that the ICT equipment has a safe, well-ordered shutdown. If the application must continue running throughout a power outage, the UPS must be provided with extra batteries or, preferably, deployed with a back-up generator.
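As a rough guide to the autonomy question, the following sketch estimates runtime from battery energy and load. The figures and the flat-discharge assumption are illustrative only; real battery runtime derates with discharge rate, age and temperature.

```python
# Rough autonomy estimate: runtime (minutes) ~ usable battery energy / load power.
# The usable fraction, efficiency and example figures are assumptions for
# illustration, not manufacturer data.

def autonomy_minutes(battery_kwh, load_kw, usable_fraction=0.8, efficiency=0.95):
    """Approximate runtime in minutes for a given battery capacity and load."""
    usable_kwh = battery_kwh * usable_fraction * efficiency
    return usable_kwh / load_kw * 60

if __name__ == "__main__":
    # e.g. a 20 kWh battery string supporting an 80 kW load
    print(f"{autonomy_minutes(20, 80):.1f} minutes")   # roughly 11 minutes
```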
To summarise, we have seen that the UPS’s job is to be always available, providing a level of protection from mains supply failures and events compatible with the nature of the critical load. However, we have also mentioned that in today’s conditions it must achieve this with the best possible energy efficiency and easily-implemented scalability.
Modular UPS topology
The solution lies in both the technology and the topology available in the latest UPS developments. Whereas earlier-design UPSs used a transformer to step up their inverter’s output to the required AC voltage level, advances in power semiconductor technology, particularly the Insulated Gate Bipolar Transistor (IGBT), have allowed the transformer to be eliminated. This has had a number of profound effects on modern UPS design.
Firstly, transformerless UPSs are about 5% more efficient than transformer-based products. Fig.2 shows this, while revealing that efficiency is improved over the entire load spectrum from 100% down to 25%. As a result, substantial reductions in electricity running costs and heating losses are achieved. Additionally, the power factor is improved, while total input current harmonic distortion (THDi) is reduced, bringing further cost savings and improved reliability.
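To give a feel for what a five-point efficiency gain can mean, here is an illustrative calculation. The load, tariff and the 91% figure for a transformer-based unit are assumptions for the example; the 96% figure reflects the true online efficiency quoted later for modular systems.

```python
# Illustrative calculation of what a ~5-point efficiency gain can mean in energy cost.
# Load, tariff and the 91% efficiency figure are assumptions for the example only.

def annual_ups_losses_kwh(load_kw, efficiency):
    """Energy lost in the UPS per year for a constant load at a given efficiency."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * 24 * 365

if __name__ == "__main__":
    load_kw, tariff = 300.0, 0.10            # 300 kW IT load, 0.10 per kWh (assumed)
    for eff in (0.91, 0.96):                 # transformer-based vs transformerless (indicative)
        losses = annual_ups_losses_kwh(load_kw, eff)
        print(f"eff {eff:.0%}: {losses:,.0f} kWh lost per year, costing about {losses * tariff:,.0f}")
```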
While transformerless technology is extremely important for its energy savings, its reductions in size and weight also have far-reaching effects. These reductions result from eliminating both the transformer and the phase-controlled rectifier. A transformer-based 120 kVA UPS, for example, weighs 1200 kg and has a footprint of 1.32 m². By contrast, a transformerless 120 kVA UPS weighs just 310 kg, with a footprint of 0.64 m².
The significance of this is that it allows UPSs, even in high-power installations, to be configured as sets of independent rack-mounted modules. For example, with the PowerWAVE 9500DPA, up to five modules, each of 100 kW, can be accommodated within a single UPS frame. A UPS can be scaled to a 100 kW load with a single module, then incremented in 100 kW steps to 500 kW, matching the load as it grows. This flexibility in populating the frame is known as vertical scalability. For loads beyond 500 kW, up to five additional frames can be added, providing horizontal scalability up to 3 MW.
Alternatively, a <400 kW load can be supported by five 100 kW modules. This means that if one module fails, the other four can continue to fully support the load, as they still have 400 kW capacity between them. As one module is redundant, this is known as N+1 redundancy. Modules can be ‘hot-swapped’; a process where a faulty module can be removed, simply by sliding it out of the UPS frame, and replaced with another without interrupting power to the critical load. This also has a positive effect on Mean Time to Repair (MTTR).
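Using the 100 kW module and five-module (500 kW) frame figures quoted above, the arithmetic of sizing a system with N+1 redundancy can be sketched as follows; this is an illustration only, not a vendor sizing tool.

```python
# Sketch of sizing a modular UPS with N+1 redundancy, using the 100 kW module /
# 500 kW (five-module) frame figures quoted in the article. Purely arithmetic.

import math

MODULE_KW, MODULES_PER_FRAME = 100, 5

def size_modular_ups(load_kw, redundant_modules=1):
    """Return (modules, frames) needed to carry the load plus redundancy."""
    modules_for_load = math.ceil(load_kw / MODULE_KW)
    total_modules = modules_for_load + redundant_modules
    frames = math.ceil(total_modules / MODULES_PER_FRAME)
    return total_modules, frames

if __name__ == "__main__":
    for load in (180, 400, 900):
        mods, frames = size_modular_ups(load)
        print(f"{load} kW load -> {mods} modules (N+1) in {frames} frame(s)")
```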
Minimised MTTR contributes to increased availability, with modular UPS systems offering availabilities of up to 99.9999%. These UPSs are well equipped to fulfil the critical power protection role expected of them by today’s data centres – and they can do so with a true online efficiency exceeding 96%.
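The link between MTTR and availability can be made explicit with the standard relationship availability = MTBF / (MTBF + MTTR). The MTBF and repair-time figures in this short sketch are assumptions chosen only to show how hot-swappable modules push availability towards six nines.

```python
# Availability = MTBF / (MTBF + MTTR). The MTBF and MTTR figures below are
# illustrative assumptions, not vendor data.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    mtbf = 500_000                        # assumed mean time between failures (hours)
    for mttr in (6.0, 0.5):               # on-site repair vs hot-swapping a module (assumed)
        print(f"MTTR {mttr} h -> availability {availability(mtbf, mttr):.7f}")
```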
The last 30 years of UPS development have without question had a significant effect on IT power security, and R&D continues to drive the market today – meaning you can count on further step changes in efficiency in the coming years.
So, as the way data centres are used and managed develops over the next decade, UPS manufacturers will undoubtedly continue to invest and innovate, using the latest technological advances to ensure your load is as protected as it ever can be.
For more information about Uninterruptible Power Supplies Ltd and the UPS Systems they offer please visit www.upspower.co.uk
Captions
Fig.1: Typical UPS block diagram
Fig.2: Transformer-based and transformerless UPS efficiency curves