Q Please can you provide some general background on your
role as Data Centres Manager at the University of Hertfordshire,
in terms of the basic numbers (students, IT estate, your day to
day responsibilities and the like)?
A The University is the UK’s leading business-facing university and an exemplar in the sector. It is innovative and enterprising and challenges individuals and organisations to excel. The University of Hertfordshire is one of the region’s largest employers with over 2,300 staff and a turnover of almost £231 million.
With a student community of over 27,700 including more than 2,900 international students from over eighty-five different countries, the University has a global network of over 170,000 alumni.
The University of Hertfordshire was awarded ‘Entrepreneurial University of the Year 2010’ by Times Higher Education (THE) and ranks in the top 4% of all universities in the world according to the recent THE World University Rankings.
The University is a large and complex organisation comprising some 10 Academic Schools, 10 Professional Services strategic business units and a number of subsidiary companies, each with defined targets and activities that contribute to the achievement of the University’s overall strategic goals. IT services are in use 24 hours a day, 365 days a year, supporting staff and students on campus, at home and in international partner organisations.
The University is primarily based in Hatfield with two main multi-building campuses (the College Lane Campus and the de Havilland Campus) and a number of other buildings beyond the campus boundaries.
There are two main Data Centres on separate campuses and a Main Communications Room. In addition there are a number of smaller satellite server rooms where equipment is now being transferred into the central facilities. I am responsible for the IT facilities security and operations, and also some systems management. I am also responsible for the University’s Green ICT Strategy and the delivery of the Action Plan. Every year, we manage to identify improvements to the data centres that contribute directly to lowering total cost of ownership (TCO) and also the carbon footprint. This translates into large savings in power and cost.
Q Over that time, what are the major project successes that
you have achieved?
A The University of Hertfordshire was the first educational institution to comply with the EU Code of Conduct for Data Centres, and was recognised in the Datacenter Leaders Awards 2010, the EAUC Green Gown Awards 2011 and the Uptime Institute GEIT Awards™ 2011. All in all, our ‘Green’ projects have saved the University over £213,000/year.
Q In more detail, can you tell us about the Reduction And Re-use of
Energy in Institutional Data Centres (RARE-IDC) project?
A The University of Hertfordshire sought to re-develop one of its Data Centres into a Tier-2 environmentally efficient facility.
We are recognised as a sector leader in environmental management and in 2008 were awarded funding from the Joint Information Systems Committee (JISC) for a project to incorporate green technologies into the refurbishment of the de Havilland Campus Data Centre.
This pioneering Reduction And Re-use of Energy in Institutional Data Centres (RARE-IDC) project aimed to provide world-class innovation in the use of Information and Communications Technology (ICT) in Higher Education.
The multi-award winning RARE-IDC project built on work already underway throughout the University’s estates to explore the issues of reducing energy usage for cooling data centres and the reuse of heat generated from ICT equipment. As well as contributing to reducing the cost of ownership of University estates, the RARE-IDC project delivered environmental sustainability through innovative green practices for minimising the carbon footprint, practices that transfer across to all new ICT space. The project worked within the JISC Institutional Innovation Programme and helped to take forward the national development of the Green Agenda, demonstrating innovation and good practice.
Q You are currently working on the Carbon Accounting and
Reporting of Baseline for Services (CARBS) project. What is this
and what are the objectives?
A The JISC CARBS project employs inexpensive hardware that makes use of Power Distribution Unit (PDU) and internal server system metrics to enable more accurate measurement of power usage within systems and across hardware domains.
We are working with state-of-the-art components from Concurrent Thinking who are willing to provide technical resources and input into this project in order to prove the benefits of such a model.
Using this model a dashboard has been created to provide real-time measurement of the environmental (carbon) cost of delivering individual services that have been identified in this way.
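To make that concrete, here is a minimal sketch of how a per-service carbon figure of this kind can be derived from PDU power readings. The sampling interval, the readings and the grid carbon-intensity factor below are illustrative assumptions, not values from the CARBS project itself.

```python
# Hypothetical sketch of per-service carbon costing for a CARBS-style
# dashboard. All figures below are illustrative assumptions.

GRID_CARBON_INTENSITY = 0.23314  # kg CO2e per kWh (assumed grid factor)

def service_carbon_cost(power_samples_w, interval_s=60):
    """Convert a series of PDU power readings (watts) for one service
    into energy (kWh) and carbon cost (kg CO2e)."""
    # Energy = sum of (power * sample interval), converted from Ws to kWh
    energy_kwh = sum(p * interval_s for p in power_samples_w) / 3_600_000
    return energy_kwh, energy_kwh * GRID_CARBON_INTENSITY

# Example: a server drawing a steady ~150 W, sampled each minute for a day
samples = [150.0] * (24 * 60)
energy, carbon = service_carbon_cost(samples)
print(f"{energy:.2f} kWh, {carbon:.2f} kg CO2e")  # 3.60 kWh, 0.84 kg CO2e
```

The same conversion scales from a single server up to a whole hardware domain once the PDU and internal server metrics are aggregated per service.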
Q Do you have any other data centre projects on the go, or
planned for the near future?
A We are currently defining the requirement for a replacement data centre by the end of this decade.
Q In simple terms, what are the main technologies and issues that
need to be considered when end users are looking at greening
their data centre?
A To address your issues, you first need to know what they are. Simple metering of facility power and IT power will help to focus your efforts on the largest users of energy. Typically, one finds that cooling is the biggest facility cost, with electrical losses and IT efficiency not far behind.
Q In more detail, how can the relatively new DCIM discipline help
in this process (you’re using it in the CARBS project?)?
A DCIM has been variously described as “a solution waiting for a problem”. Having implemented a DCIM myself, I can agree with this – define your question and you can usually find a DCIM to answer it. Concurrent Thinking’s DCIM was particularly useful at looking at IT Server loading and comparing it with the environmental factors in the data centre. The first service we costed out proved that the hardware we purchased to run it on was highly over-spec’ed. This knowledge has given us the confidence to push back on what application vendors insist we buy, to ensure we right-size our servers for our applications.
Q What are the factors to be considered when deciding between
data centre refurbishment or a new build?
A Simply put: Do you have sufficient floor space already (for IT, plant and equipment)? Do you have sufficient power available within the building for today’s requirements and tomorrow’s? What is the cost and availability of land for a new build? If you can answer positively to the first two questions, then you should always err on the side of refurbishment rather than new build, as you’ve saved a significant amount of carbon before you even start! One downside of refurbishment is that it introduces a lot more risk during the build phase.
Q Much is written about the importance of power and cooling.
What are the key points to consider, and what are the
technologies that you believe offer the most potential in the
‘greening’ process?
A Reducing your power draw is key to running an efficient data centre and in many facilities, cooling is the biggest user of energy. Following the Best Practices in the EU Code of Conduct for Data Centres is a good way of reducing cost with very little outlay. More efficient cooling technologies range from the simple replacement of fans in existing CRAC/CRAHs to complete system replacement with “fresh air” or liquid cooling. The latter systems have excellent ROI over standard DX systems but require larger up-front investment.
Q Similarly, what do end users need to look at when specifying/
upgrading their IT hardware – storage, servers and the like?
A We have a standard approach now that states that we will install in a virtualised environment unless the T&Cs with the s/w vendor prohibit it - fortunately, that is becoming rarer these days. No server we purchase comes with on-board storage – everything boots from the SAN. In this way, we can allocate just the resources the application needs.
Q How can virtualisation contribute to the green data centre?
A See the answer above. But in our case it made a huge difference. By virtualising and consolidating, we shrank our IT estate in one data centre from 23 cabinets down to 15, and then further down to 13. It has crept up a little since, but not by much, and I don’t foresee a big rise any time soon despite the increased demand that we need to service. All that saved space means less physical hardware, consuming less power and putting out less heat that has to be cooled: a double saving!
Q Everyone seems to be rushing to the Cloud. Is this a positive or
negative when it comes to achieving sustainable, green IT?
A It’s not a straightforward answer. By definition, the most efficient data centre is that which is fully loaded. If a cloud data centre advertises itself as “most efficient” then it doesn’t have space for you or it’s lying. The truth of the matter is: there are very efficient cloud data centres and there are very inefficient cloud data centres. Everyone will need to do their own investigations when considering the alternatives. In almost all cases, you can save more money by making your own data centre more efficient first, unless you intend to scrap it. Cloud is great if you have temporary high demand for compute and/or storage and require great flexibility that cannot be accommodated in your existing data centre(s).
Q How important is it to establish accurate monitoring and
reporting of data centre conditions as a tool to achieve
a green environment?
A Building (or refurbishing) a green data centre is only the first stage of the journey towards a sustainable ICT estate. The operational environment is arguably far more important in the long term. Accurate monitoring can provide you with a profile of your data centre that can be used to uncover behaviours, both desirable and undesirable, that you may wish to investigate further. As an example, the behaviour of the cooling systems in response to seasonal ambient temperatures can tell you a lot about how your data centre performs and provide clues as to how further efficiencies may be gained. I have also found that problems with hardware can cause fluctuations in the profile, which can then be immediately investigated and corrected.
Q Presumably, this is a subset of the need for an overall, effective
data management strategy – maybe harking back to the days of
‘good old’ Information Lifecycle Management (ILM)?
A Once you go down the path of adding in all this extra metering and monitoring, quite quickly you realise that you have an awful lot of data being collected and you need some way of storing them in order to analyse and report on them. Concurrent Thinking’s DCIM helps me here as it can process massive amounts of simultaneous data points while other DCIMs may struggle to keep up. This stems from its origin as a High Performance Computing (HPC) tool.
Q How effectively do the IT and facilities (and other influential)
people work together at the University of Hertfordshire to help
achieve a green data centre environment?
A As mentioned earlier, the University has a Carbon Management Plan (CMP). As the owner of the Green ICT Strategy and Action Plan, I regularly meet with the Estates owners of the CMP. We discuss what projects we are working on and share ideas on how we can work together to achieve greater benefits. Most of my sustainability work impacts on the Estates environment and it is therefore imperative that they both understand and help me in what I am trying to achieve.
Q Do you believe that this convergence of the IT and facilities is an
essential prerequisite of a successful green data centre project?
A Absolutely! I would recommend, as we always do, putting an Estates person into every green data centre project, as a board member for oversight purposes and/or as a project team member.
Q There are plenty of data centre/more general initiatives designed
to help end users achieve sustainable, green IT environments,
do you think that, in general, these are helpful?
A I believe that the more publicity given to best practices, the more likely data centre owners and managers will notice them and act upon them. Most of the advice out there is simply a carbon copy of everything else. For instance, the EU Code of Conduct has adherents far outside of the EU, in the US, the Far East and Australia, but not everyone realises they are following the EU Code because they are referencing a document that is based almost wholly upon it, but doesn’t mention it.
Q Specifically, what is your view of the EU Code of Conduct for
Data Centres?
A I am actively involved in furthering the best practices of the code, so one might expect me to speak positively about it. However, before I became involved, I based my RARE-IDC project around it and reaped significant savings (around £186,000/year) through following its advice. The European Commission have not put enough resources behind it yet, but we are lobbying them to do more. Specifically, it takes too much time to become qualified as a ‘Participant’ as the person you email is a one-man band in Brussels. We hope this will change soon.
Q What about the Green Grid and its industry initiatives?
A The Green Grid has been an excellent force for good in the industry. They have helped to provide innovative research and white papers on data centre efficiencies, particularly around their metrics and the Data Centre Maturity Model. Much of their work is now being developed into ISO standards.
Q Do you think that the PUE metric has an important role to play in
helping end users to achieve a green data centre?
A PUE has already played a fantastic part in widening the understanding among the general populace concerning the problems of energy wastage in data centres worldwide. It is a simple formula that can be easily measured with, in many cases, just 2-3 metering points. It can be used by end users to measure their own performance as they undertake green projects.
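The formula itself really is that simple: total facility energy divided by the energy delivered to the IT equipment. A minimal sketch (the readings below are illustrative assumptions, not University figures):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by
    the energy delivered to the IT equipment. 1.0 is the ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative readings from two metering points: the facility drew
# 1,200 kWh over a period, of which 800 kWh reached the IT equipment.
print(pue(1200, 800))  # 1.5
```

A PUE of 1.5 means that every kWh of useful IT work costs a further 0.5 kWh in cooling, power distribution losses and other facility overhead, which is why tracking it over time shows whether green projects are paying off.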
Q What about the ASHRAE regulations?
A ASHRAE TC9.9 is an industry group made up of the major IT equipment manufacturers, who have a vested interest in ensuring they devise standards that they can manufacture to and, more importantly, that they can warrant their products against. Too many data centre managers keep to the “Recommended” environmental envelopes, when actually they should be pushing towards the “Allowable”.
Q Are there any other data centre/IT industry standards or
initiatives that you find useful?
A TIA 942 is useful for defining data centre build standards.
EN 50600-2-2 and EN 50600-2-3 specify requirements for the measurement of parameters relevant to the power distribution and environmental control within the spaces of the data centre. EN 50600-2-6 contains the requirements for the recording of this data as part of the management and operational information as required by Key Performance Indicators that are chosen to be applied.
Q Are there any gaps in this standards/initiatives framework?
A The International Organization for Standardization (ISO) has a number of working groups set up under JTC1/SC39 working on data centre standards for objective KPIs. The first, due by next year, will be PUE (ISO/IEC 30134-2). Others are following behind it, still being discussed and agreed.
Q Do you think that the hardware (and even software) vendors pay
enough attention to the importance of energy efficiency when
designing their products (and applications)?
A It’s a great question and one which has only increased in relevance as the data centre industry has achieved such great efficiencies in the technology that supports the IT equipment, forcing us to look elsewhere for energy savings. Unfortunately, coders have become lazy. The availability of (in historical terms) vast quantities of cheap CPU, memory and storage has allowed application designers to pay little or no attention to efficiency in coding. Have you ever looked at how much energy
Facebook is draining from your iPhone battery? If we are ever going to make a significant difference to the energy that data centres consume, much of it caused by the boom in social media apps, then we need energy efficiency to be designed in at the conceptual stage and regularly benchmarked. We need more schools and universities to teach energy-efficient programming in their classes. Major server manufacturers already provide high-efficiency models, but these come at a price that many organisations don’t want to pay and, when they do, their systems administrators turn off all the eco-features because they limit performance.
Q If not, what more would you like to see done?
A See answer above – education, education, education!
Q We’ve covered a fair amount of ground (!), are there any other
technologies and/or issues that need to be considered when an
end user looks to embark on a green data centre project?
A I would certainly recommend investing in good power meters at the appropriate points in the data centre incoming supply. These should be IP and SNMP enabled and provide enough information to at least calculate PUE. The second key technology is the humble power distribution unit (PDU) that sits in the rack. There are so many caveats when choosing one that I highly recommend taking expert advice. You are going to buy a lot of them (at least 2 per cabinet), so you want to make sure they give you everything you need.
Q What one or two pieces of key advice would you give to an end
user at the start of the data centre greening process?
A It sounds complex, but it doesn’t need to be. There are a lot of simple things any data centre manager can do that are neither particularly risky nor costly, and they can make a significant difference to the bottom line of running their data centre. Make those quick wins your first goal, measure the improvements and cost savings, and use these data to justify larger green projects.
Q Any other comments?
A If energy efficiency in data centres is of interest to you, please get involved in the industry groups working to further the spread of best practice and harmonisation of standards. The BCS DCSG is a good place to start.
The next issue of DCSUK features an article on the Carbon Accounting and Reporting of Baseline for Services (CARBS) project that Steve is overseeing at the University of Hertfordshire.