Data centre cooling conundrums

By Alan Beresford, managing and technical director of EcoCooling.


SAVING ENERGY and reducing carbon are major issues in the data centre sector, with increased focus on lowering PUE. We asked Alan Beresford, managing and technical director of EcoCooling, to look at the techniques available to achieve these goals and at the latest developments.

Conventionally, data centres have favoured DX (direct expansion) refrigeration-based cooling systems using CRAC (computer room air conditioning) units. These have a relatively low capital cost but can be inefficient.

More advanced systems incorporate a principle called free cooling with more efficient fans and controls to reduce operating costs. Modular systems allow data centres to match their cooling equipment to their loads.

Free cooling is commonplace in the design of new data centres in the UK. Key savings can occur on an average of 255 days out of 365 - when ambient air entering the outside condenser is under 14°C.

But there is a conundrum: How much can we rely on fresh air for the cooling of our data centres - and how does its use affect their ASHRAE compliance?

A number of issues prevent almost all conventional systems from realising their maximum efficiency.

The DX Conundrum
DX (direct expansion) cooling is a refrigeration technique, used in many data centres. The most efficient application of DX cooling would incorporate ‘free cooling’ which allows the system to switch off the power-hungry compressor and use a simple, efficient, air-cooled circuit.
In theory this should provide a reasonably good coefficient of performance (CoP) of 3, so the numbers should stack up very reasonably:
IT Load 1.00
UPS Losses (5%) 0.05
Cooling (@ 3:1) 0.33
Other (lights etc) 0.12
Design PUE 1.5
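
That design figure is simply the IT load plus each overhead expressed as a fraction of it. A minimal sketch of the arithmetic in Python, using the component values from the table above:

```python
# Design PUE = total facility power / IT power, with each overhead
# expressed as a fraction of the IT load (values from the table above).
it_load = 1.00
ups_losses = 0.05          # 5% UPS losses
cooling = it_load / 3.0    # DX cooling at a CoP of 3:1
other = 0.12               # lights, small power, etc.

design_pue = (it_load + ups_losses + cooling + other) / it_load
print(f"Design PUE: {design_pue:.2f}")   # -> 1.50
```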

However, most of these data centres are operating with measured PUEs of 1.7 to 3.3. Let’s investigate why. Last year, in London, the ambient air was under 14°C for approximately 70 per cent of the time. This is when free cooling should be used.

However, because condenser units are usually placed in groups on rooftops or in outside plant areas, the only source of air into the inlet is a mixture of ambient air with hot air from adjacent condensers.
A relatively small amount of warm air recirculation can make a big difference: a 5°C rise would halve the time when free cooling could be used. This is the biggest reason why specified performances are not achieved, but there are more, as the following sections show.
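
To get a feel for that sensitivity, the sketch below counts the hours in a year for which the air actually reaching the condenser inlet stays under the 14°C changeover point. The hourly temperature series is synthetic and purely illustrative – it is not London weather data:

```python
import math
import random

# Illustrative hourly ambient temperatures for one year (synthetic, not real
# weather data): a crude seasonal curve plus scatter, roughly UK-like.
random.seed(1)
ambient = [10.5 + 8.0 * math.sin(2 * math.pi * (h / 8760 - 0.25)) + random.gauss(0, 3)
           for h in range(8760)]

def free_cooling_hours(temps, changeover=14.0, recirc_offset=0.0):
    """Hours for which the condenser inlet air (ambient plus recirculated
    warm air) stays below the free-cooling changeover temperature."""
    return sum(1 for t in temps if t + recirc_offset < changeover)

for offset in (0.0, 2.5, 5.0):
    hours = free_cooling_hours(ambient, recirc_offset=offset)
    share = 100 * hours / len(ambient)
    print(f"{offset:3.1f} C recirculation: {hours} h of free cooling ({share:.0f}% of the year)")
```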

Humidity Control
Many data centres use humidity control based on out-of-date standards. But humidification is very energy intensive: it takes 3kW just to produce 1kg/hr of water vapour.

Changes in ASHRAE guidelines (20-80% relative humidity) have allowed many operators to discount humidification as the costs outweigh the benefits.
See if you can now turn your humidity controllers off. Permanently.
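
To put that 3kW figure into money, here is a rough annual estimate for a steam humidifier. The duty, run hours and tariff are illustrative assumptions rather than figures from any particular site:

```python
# Rough annual cost of steam humidification, using the article's figure of
# roughly 3 kW of electrical power per 1 kg/hr of water vapour produced.
KW_PER_KG_PER_HR = 3.0

duty_kg_per_hr = 5.0       # assumed average humidifier output
run_hours = 4000           # assumed hours per year the humidifier actually runs
tariff_gbp_per_kwh = 0.10  # assumed electricity price

energy_kwh = KW_PER_KG_PER_HR * duty_kg_per_hr * run_hours
print(f"Energy used:  {energy_kwh:,.0f} kWh/year")
print(f"Annual cost: £{energy_kwh * tariff_gbp_per_kwh:,.0f}")
```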

Too much equipment
Data centres generally have far too much cooling equipment operating.
Cooling is usually installed based on the maximum heat output of the IT equipment, with N+1 or 2N cooling units for redundancy.
Add this over-provision to the fact that the data centre is rarely full and that most IT equipment operates at less than 50-60 per cent of its maximum heat output. Then consider that refrigeration equipment is at its most efficient at full utilisation: anything less greatly reduces the efficiency of the whole system.
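
The sketch below shows how those factors compound. The redundancy level, fill ratio and part-load figure are simply the ones mentioned above, used for illustration:

```python
# How over-provisioning stacks up: installed cooling vs the heat actually produced.
design_it_load_kw = 1000   # cooling sized for the nameplate IT load
redundancy_factor = 2.0    # 2N cooling (a typical N+1 scheme would be lower)
room_fill = 0.6            # the data centre is rarely full
it_utilisation = 0.5       # IT kit running at ~50% of its maximum heat output

installed_cooling_kw = design_it_load_kw * redundancy_factor
actual_heat_kw = design_it_load_kw * room_fill * it_utilisation

utilisation = actual_heat_kw / installed_cooling_kw
print(f"Installed cooling: {installed_cooling_kw:.0f} kW")
print(f"Actual heat load:  {actual_heat_kw:.0f} kW")
print(f"Average cooling plant utilisation: {utilisation:.0%}")  # ~15%
```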

Measured PUE
So, here’s how the PUE actually stacks up in nearly all small to medium data centres:
IT Load 1.0
UPS Losses (10-20%) 0.1 – 0.2
Cooling & Humidity 0.5 - 2.0
Odds & ends 0.1
Facility PUE 1.7 - 3.3

The 1MW conundrum
Even though chilled water is more efficient, if a medium-sized data centre starts out with, say, 300kW of DX units, by the time it has grown to 1MW it is so heavily tied in to DX technology that it is impractical to change, and the data centre has to continue to accept the poorer efficiencies and poor PUE.

On the other side of the conundrum, if a new-build data centre opts for chilled water cooling from the start, it is going to spend most of its early years of operation with those giant units running at a fraction of their maximum load, once again operating well below their maximum design efficiency.

Latest Developments
For data centres large, medium and small, there have been some significant advances:
In old installations the air handling fans only had one setting – full speed! Consequently these used masses of power. Early variable speed drives weren’t much better.
Today’s EC (electronically commutated) fans use one quarter of the energy at half speed. So as an EC fan runs slower, it uses much less energy, while also reducing noise and increasing operating life. Insist on EC variable-speed fans in any new installation and retrofit them to legacy cooling units as fast as you can.
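
The saving comes from running fans slower for longer. The fan affinity laws suggest shaft power falls with roughly the cube of speed (real-world savings are somewhat smaller once motor and drive losses are included). A hedged sketch of the effect, with an assumed full-speed fan power:

```python
# Fan affinity laws: airflow scales with speed, shaft power roughly with speed cubed.
# Real EC fan savings are somewhat smaller once motor/drive losses are counted.
def fan_power_kw(full_speed_power_kw, speed_fraction, exponent=3.0):
    """Estimated fan power at a fraction of full speed (ideal affinity law)."""
    return full_speed_power_kw * speed_fraction ** exponent

full_power_kw = 2.0    # assumed power of one air handling fan at full speed
hours_per_year = 8760  # continuous operation

for speed in (1.0, 0.75, 0.5):
    p = fan_power_kw(full_power_kw, speed)
    print(f"{speed:.0%} speed: {p:.2f} kW, {p * hours_per_year:,.0f} kWh/year")
```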

Free cooling
Just to confuse us, the term ‘free cooling’ is also applied to ventilation (non-refrigeration) systems.

For most of the time, the UK ambient air temperature is colder than that required in the data centre – so a simple ventilation system can maintain compliant conditions. This can remove the need for any additional cooling for up to 95 per cent of the time in the UK and similar climates.

But a cooling system is still needed on hot days, and a very low-energy solution has been introduced: evaporative (or adiabatic) cooling. The combination of a ventilation system using EC fans and evaporative cooling provides a solution with a CoP of over 20. Together, ‘free cooling’ and adiabatic cooling can keep a data centre in the UK within acceptable ASHRAE conditions for 98% of the time at a fraction of the energy cost.
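
In control terms the approach is straightforward: ventilate with filtered fresh air whenever the ambient is cool enough, switch to evaporative cooling on hot days, and only fall back on any mechanical top-up in extreme conditions. The sketch below is a simplified illustration of that logic; the changeover temperatures are assumptions, not design recommendations:

```python
def select_cooling_mode(ambient_c, supply_setpoint_c=22.0, evap_limit_c=32.0):
    """Very simplified mode selection for a fresh-air / evaporative system.
    Thresholds are illustrative only."""
    if ambient_c < supply_setpoint_c:
        # Ambient is already below the required supply temperature:
        # ventilate only, modulating fan speed to hold the setpoint.
        return "free cooling (ventilation only)"
    elif ambient_c < evap_limit_c:
        # Hot day: wet the evaporative pads to drop the supply temperature.
        return "evaporative (adiabatic) cooling"
    else:
        # Rare extreme: trim with mechanical cooling if any is installed.
        return "mechanical top-up cooling"

for t in (5, 18, 26, 35):
    print(f"{t:>2} C ambient -> {select_cooling_mode(t)}")
```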

Direct evaporative cooling
In most data centre locations, the airborne contaminant levels are sufficiently low that direct fresh air can be used. Even after passing through the evaporation process, the air is still well within the ASHRAE humidity guidelines and filtration can now be achieved to EU4 levels.

Fresh-air evaporative coolers like our own CREC (computer room evaporative cooler) units are highly modular, from 35kW up to many MW, and there’s no conundrum at the 1MW or any other level. CRECs can simply grow with the IT load - which is how brand-new data centres with only partial utilisation can open with an operating PUE of 1.2 and be confident it will remain in a 1.2 to 1.4 envelope as the IT load grows. CRECs are also far more efficient at partial utilisation, with a 35kW module being appropriate for applications as low as 15kW.
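
Sizing then becomes a matter of adding modules as the IT load grows. A hypothetical sizing sketch, assuming 35kW modules and N+1 redundancy (the module size comes from the text above; the load profile is invented):

```python
import math

MODULE_KW = 35  # nominal capacity of one evaporative cooler module

def modules_required(it_load_kw, redundancy=1):
    """Number of modules needed to carry the load, plus N+1 style spares."""
    if it_load_kw <= 0:
        return redundancy
    return math.ceil(it_load_kw / MODULE_KW) + redundancy

# Illustrative growth of a data hall from day one to full fit-out.
for load in (15, 70, 300, 1000):
    print(f"{load:>5} kW IT load -> {modules_required(load)} modules (N+1)")
```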

Conclusion
The cooling of data centres need not be complicated or expensive. A range of options is available to meet almost all operating, redundancy and efficiency requirements. Bear in mind at all times that the operating costs of fresh-air systems can be 95% lower than those of a close-control, refrigeration-based system. The ASHRAE guide clearly states that there is a balance between energy use and equipment reliability.

End users can evaluate the different systems and, taking into account all of the stakeholder interests, make an informed decision on the appropriate cooling solution for their operation. I’d strongly recommend talking to manufacturers or consultants who, like ourselves at EcoCooling, are members of the Data Centre Alliance, to explore these different solutions.

 

12 steps to improving PUE

by Ian Dixon, VP of Data Centre Operations at Colt Technology Services

TRADITIONAL DATA CENTRES were planned around a 10 to 15 year life cycle; however, it’s almost impossible to balance long-term planning, ever-changing business requirements and technology innovation.

With business and technology constantly changing, even the most technology-aware business case can’t predict the long term. Trying to plan anything beyond two years is a challenge, so ‘guesstimating’ compute requirements fifteen years in advance is nigh on impossible. The data centre in particular has moved forward dramatically in the past ten years in terms of what is considered best practice. Previously, each data centre was built differently and operators had their own tools and processes to run them.

However, today there is a whole host of information available about how data centres should be run, how to improve efficiency and the best tools to use to improve and automate processes. Nevertheless, there still remains no single defined set of best-practice standards. Typical of a developing industry, the data centre sector is notorious for deploying measurements which seemed good at the time but moved out of fashion as the real issues surfaced.

When the cost of energy rose, the power drawn by the data centre became a key factor for measurement, along with an increased focus on energy efficiency and the social responsibilities associated with running large data centre estates. This prompted everyone from the operational management team right up to board level to prioritise improving efficiency, both to make a measurable difference and to benefit the company’s bottom line. Power usage effectiveness (PUE) is one such measurement, and it has become the de facto standard for measuring the energy efficiency, environmental impact and cost of running a modern data centre.

As efficiency became synonymous with every data centre conversation, Power Usage Effectiveness (PUE) became the metric of choice – the smaller the better. At Colt, we were well aware of the benefits of reducing power consumption, and a couple of years ago we decided to formalise our approach to improving efficiency. We worked out a standard set of guidelines for our operations teams to ensure, systematically and from simple to more complex measures, that we were maximising efficiency in every data centre we have. As a result we reduced our annual power bill by more than €4 million.

With that in mind, we strive to help others benefit from improving their PUE, which is why we have compiled the following 12 steps.

12 steps to Power Usage Effectiveness
Measure
In order to assess any efficiency improvement, you need to be in a position to benchmark your existing energy usage against a comparable timeframe. Starting to measure, record and track power use on a regular basis is the first step to a more efficient data centre.
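
PUE itself is simply total facility energy divided by IT equipment energy over the same period. A minimal sketch of the benchmark calculation, using invented meter readings:

```python
# PUE = total facility energy / IT equipment energy, over the same period.
# The monthly kWh readings below are invented, purely to show the calculation.
facility_kwh = [410_000, 395_000, 388_000, 360_000]
it_kwh       = [205_000, 204_000, 206_000, 203_000]

monthly_pue = [f / i for f, i in zip(facility_kwh, it_kwh)]
period_pue = sum(facility_kwh) / sum(it_kwh)

for month, pue in enumerate(monthly_pue, start=1):
    print(f"Month {month}: PUE {pue:.2f}")
print(f"Period PUE: {period_pue:.2f}")
```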

Regulate air flow – prevent hot and cold mixing
The need to regulate temperature can be one of the biggest drains on energy supply, and can have a substantial effect on PUE. A data centre is essentially a structure that manages the flow of cold air in and extracts hot air. With servers drawing cold air in and pumping hot air out, mixing will occur if airflow is not managed correctly, so controlling airflow and limiting mixing – and the various steps to do so – should be a priority.

Align hot and cold aisles
Ideally, all servers within the racks and rows should face the same direction. The rows can then be arranged into hot and cold aisles, with the fronts of the servers facing one way and the backs the other, so that cold aisles can be created to supply cool air to the front of the servers and hot aisles to carry away the hot air flowing from the back.

Reduce airflow leaks
Install blanking plates to fill the gaps where no equipment is present. These can stop air escaping by preventing it from falling between or back behind the racks.

Check flooring
Flooring is as important to consider for potential air leakage as your walls. Check for gaps allowing air to escape; behind and beneath air cooling units or through air vents in floor tiles. The goal is to direct the cold air in one direction, through the racks and out the other side. Perforated tiles should be placed in the planned cold aisle in front of racks in order to do so.

Introduce aisle containment
Containment can be introduced once airflow leaks and gaps have been addressed. Introducing roofs and doors at the end of aisles can deliver major improvements. Solid doors or Perspex dividers are effective; even butchers’ curtains can make a big difference, if not a very attractive one.

Control air temperature
The average data centre runs at 21°C, but with the extended ASHRAE standards, server manufacturers are happy with supply air temperatures of 18-27°C. Steps such as turning off cooling units or increasing the temperature of the water supplied to those units can lead to further savings.

Regulate humidity
By operating to a slightly wider band of 20-80% relative humidity, the air is allowed to drift a little more and vapour production can be decreased, significantly reducing energy consumption.
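
One simple way to apply the wider temperature and humidity bands is to log supply-air readings and flag anything that falls outside the envelope before adjusting setpoints. A small sketch; the sample readings are invented:

```python
# Flag supply-air readings outside a wider operating envelope:
# 18-27 C supply temperature and 20-80% relative humidity.
TEMP_BAND = (18.0, 27.0)
RH_BAND = (20.0, 80.0)

readings = [  # (supply temperature C, relative humidity %) - invented samples
    (21.5, 45), (24.0, 62), (26.8, 78), (28.2, 55), (22.0, 17),
]

for temp, rh in readings:
    temp_ok = TEMP_BAND[0] <= temp <= TEMP_BAND[1]
    rh_ok = RH_BAND[0] <= rh <= RH_BAND[1]
    status = "OK" if temp_ok and rh_ok else "OUT OF BAND"
    print(f"{temp:4.1f} C, {rh:2.0f}% RH -> {status}")
```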

Check transformer voltage
An unnecessarily high voltage means an unnecessarily high use of power, resulting in additional cost. Reducing the voltage can also reduce losses in transformers.

Remove isolation transformers
Isolation transformers were once a legal requirement; however, they are often no longer required and are an unnecessary drain on your power.

Turn off ‘zombie’ equipment
In an enterprise environment there is a good chance you’re running redundant equipment. Turning it off could be a simple step to significant energy savings.

Measure again
At the end of a 12-month period there should be enough data to benchmark against. This will allow you to assess whether your measurements are going in the right direction. Be cautious, however, of the significant impact that seasonality and changes in outside temperature can have on power usage and anticipated energy savings.

The costs and implications of not prioritising increased efficiency, especially in data centres more than seven years old, are not worth the risks. There are vast financial implications for the business, communications, client management and accounts if a data centre does not reduce its energy use, which is why it’s essential to ensure the correct processes are followed, allowing the life of a data centre to increase while the cost of running it substantially decreases. Our own efficiency improvement programme has now been running for three years and in that time has achieved an 18% reduction in energy use, with 10% in year one alone. Implementing some of these quick-fix measures has saved us €4 million annually to date. Taking advantage of some of the measures above can bring both short- and long-term improvements to the efficiency of your data centre estates.

 

Upgrading to electronically commutated fans in air conditioning and cooling systems can reduce energy consumption by up to 70%

The business case
According to the Carbon Trust, large UK businesses are paying out more than £1.6 billion too much on their energy bills every year because many are yet to seize the full opportunity to cut bills by around 15% through energy efficiency measures. These savings, the Carbon Trust says, are available through changes in behaviour and equipment, and represent highly attractive returns on investment.

The Carbon Trust also notes that for many businesses, a 20% cut in energy costs represents the same bottom-line benefit as a 5% increase in sales.
The case for being more energy efficient is therefore a simple one, especially in an era of spiralling energy prices.

The solution
A relatively straightforward solution that can contribute to overall energy savings is to replace traditional AC fans with EC (electronically commutated) fans in precision air conditioning and cooling applications, offering the potential to reduce power consumption by up to 70%. Whilst end users and contractors now typically specify more energy-efficient components like EC fans on new equipment, there are still thousands of systems installed both before and after the millennium that will benefit enormously from being upgraded.

EC fans are typically 50% more efficient than previous generation fans, and use variable speed control matched to load to eliminate unnecessary energy usage. Integrating intelligent controls will improve energy efficiency still further by precisely matching speed to demand. The quiet, direct drive motors bring further business benefits by removing the need for belt replacement, maximising system uptime.

They are fully compatible with most systems, and units can be installed with minimal operational disruption. The typical payback period is less than two years.

The technology
EC centrifugal fans incorporate electronically commutated DC motor controls using semiconductor modules which respond to signals from the indoor unit. The integrated AC-to-DC conversion combines the flexibility of connecting to AC mains with the efficiency and simple speed control of a DC motor.

With its highly efficient backward curved impeller, the EC fan significantly reduces power in comparison with equivalent AC fans at both full and modulated fan speeds.

The in-built EC fan control module allows for fan speed modulation from 0-100%; in comparison, the modulating range of a standard AC fan is typically 40-100% of full fan speed.

Example calculations
Based on the estimate that every 1kW of power saved for each hour of continuous operation could bring savings of £876 per annum (8,760 hours at roughly 10p per kWh), the savings soon stack up:

Example site 1
Fans: 9
Power Saved: 58,342 kWh
Energy Saved: Up to 64%
Cost Saving: £5,834 pa

Example site 2
Fans: 36
Power Saved: 409,968 kWh
Energy Saved: 33%
Cost Saving: £36,897 pa
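
The £876 figure is simply one kilowatt running for all 8,760 hours in a year at roughly 10p per kWh. The sketch below shows the sums, with a hypothetical upgrade cost added purely to illustrate payback:

```python
HOURS_PER_YEAR = 8760
TARIFF_GBP_PER_KWH = 0.10   # ~10p/kWh, which gives the £876 per kW-year figure

def annual_saving(kwh_saved_per_year):
    """Annual saving in pounds for a given energy saving in kWh."""
    return kwh_saved_per_year * TARIFF_GBP_PER_KWH

# One continuous kilowatt saved:
print(f"1 kW continuous: £{annual_saving(1 * HOURS_PER_YEAR):,.0f} per year")

# Payback on a hypothetical EC fan upgrade (cost figure invented for illustration):
upgrade_cost_gbp = 9_000
kwh_saved = 58_342          # energy saving from example site 1 above
saving = annual_saving(kwh_saved)
print(f"Annual saving: £{saving:,.0f}")
print(f"Payback: {upgrade_cost_gbp / saving:.1f} years")
```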

Maximising energy savings
The energy savings can be even greater when the upgrade is carried out as part of a wider programme of energy efficiency measures. To help them identify areas where all-important savings can be made, many organisations are now investing in energy surveys.

A typical survey will look at the performance of the complete cooling system, including refrigerant subcooling and superheats, as well as the chilled water system, and will also assess the fan airflow path to make sure it is as efficient as possible. In addition to fan upgrades, the survey might identify additional efficiency-saving opportunities through, for example, incorporating electronic expansion valves (EEVs), upgrading inverters, compressors, coils and pumps, or installing software controls, energy meters and sequencers. Upgrades such as these can quickly pay back the cost of investment. Installing EEVs in cooling units and condensers, for example, reduces the need for high head pressure, which can result in an energy efficiency ratio (EER) increase of 30%.

Good practice is for contractor and client to work closely together in order to build a business case for budget approval. This may involve installing the unit on a trial basis and taking ‘before and after’ readings of power and airflow to demonstrate the savings. A reputable service provider will also re-commission the cooling system for optimum performance after completing any work.