DATA CENTRES are the engines that power today’s information society. Concentrating racks of disk storage and processing capacity in carefully managed facilities makes possible the phenomenon of “cloud computing”, which allows effectively limitless IT resources to be accessed from afar by ever smaller devices such as notebooks, tablets and smartphones.
As data centres become crammed with ever greater densities of IT equipment - so that we don’t have to carry it all around in our pockets - an inevitable consequence is an accelerating demand for electrical power to drive it all. Much of that power is expended on cooling the densely packed IT equipment.
With processing and storage demands only likely to increase, the onus falls on data centre operators to maintain acceptable operating temperatures efficiently as well as effectively, despite ever increasing loads. It is not sustainable for every increment in wattage needed to power the IT equipment to be matched by a similar power increment just to cool it down.
Today’s data centres need to take an all-encompassing approach to equipment cooling, from the overall design of the building, to the racks and containment systems that house the equipment, to the in-row cooling units placed within the racks themselves to absorb heat from individual servers and storage arrays.
In-row cooling products are particularly important to a data centre’s overall performance because they can, if managed effectively, fine-tune the heat-dissipation effort by matching the amount of cooling delivered to the actual load on an individual rack basis.
A vitally important feature is the management software that allows all of the various cooling systems to work together for optimised operation. Such software monitors and analyses the cooling requirements of the data centre as a whole, so that cooling resources can be allocated in real time to where they are most needed and, conversely, scaled back for maximum efficiency when they are not.
Overall architecture
The top level of data centre cooling concerns the heat management of the room itself. Typically this involves the overall air-flow management within the data centre, including the use of a raised floor to facilitate air circulation and the deployment of chillers and heat-rejection systems outside the building to extract warm air from inside. These external systems also make use of cooler ambient air from outside the building to chill the warm extracted air before it is recycled back inside, thereby increasing efficiency.
Inside the centre itself, and particularly inside each individual rack, closely coupled cooling extracts heat from individual hot spots using only as much cooling power as is necessary.
In-row cooling
Close-coupled cooling units have evolved from traditional perimeter cooling units into smaller formats designed to sit inside racks themselves. The first generation of such units was produced about 10 years ago in 300mm-wide configurations, and was succeeded by larger 600mm-wide units which provided greater cooling capacity but occupied more space.
Schneider Electric’s latest in-row coolers provide similar cooling performance to the larger units but in a 300mm size. They come in two variants: the InRow CW-ACR301H and ACR301S, with the H option designed for high-temperature operation and the S variant for standard-temperature operation.
The high-temperature (ACR301H) version operates at up to 60kW under certain conditions and has an improved maximum airflow of 4,200 cubic feet per minute (CFM). These units operate best at high water-inlet temperatures, a feature which, when they are used in conjunction with outside chillers, allows the operator to exploit free cooling from ambient air for more hours of the year than is possible with standard coolers.
In both the high-temperature (H) and standard-temperature (S) variants, hot air from the IT equipment is drawn by an array of eight fans over a chilled-water coil. The H-variant units have a larger and more efficient cooling coil, with fans arranged horizontally rather than in the more traditional vertical alignment.
Although this combination allows the unit to operate at high temperatures, care must be taken to prevent condensation forming on the coil, because the position of the fins makes condensate impossible to collect. It is therefore recommended that H variants include a dew-point control pump, or some other form of control such as a bypass, which monitors the temperature of the water as well as the temperature and humidity of the air entering the unit, and automatically prevents any condensation occurring on the coil.
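The condensation safeguard just described amounts to keeping the coil-water temperature above the dew point of the incoming air. A hedged sketch of that logic, using the standard Magnus approximation for dew point, is shown below; the function names and safety margin are illustrative, not Schneider Electric’s actual control interface.

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) of air via the Magnus formula."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def safe_water_inlet_c(air_temp_c, rel_humidity_pct, requested_c, margin_c=1.0):
    """Raise the requested water temperature if it would cross the dew point.

    Keeps the coil surface (approximated by the water temperature) at least
    margin_c above the dew point of the air entering the unit, so no
    condensation can form on the coil.
    """
    dp = dew_point_c(air_temp_c, rel_humidity_pct)
    return max(requested_c, dp + margin_c)
```

In practice the same comparison could instead trigger a bypass rather than a set-point change, as the article notes; the decision input (air temperature, humidity, water temperature) is the same either way.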
The H variants operate particularly efficiently at water-inlet temperatures up to 20°C, at which point they can deliver a cooling capacity of 20kW for a fixed return-air temperature of 35°C. However, 20°C is not a hard limit: even higher water-inlet temperatures can be used, as long as the user is happy with the resulting supply-air and return temperatures. Schneider Electric declares values for up to 22°C, above which it becomes challenging to assure supply air at or below 27°C.
For higher return-air temperature settings the H variants can deliver a cooling capacity of 30kW for a water-inlet temperature of 20°C: the higher the return temperature, the more cooling capacity can be achieved. This helps to reduce cost, because the higher per-unit capacity means that fewer units need to be deployed to achieve the same cooling effort.
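A first-order way to see why a higher return temperature buys capacity: the heat a coil can move scales roughly with the difference between return-air and water-inlet temperatures. The 20kW reference point at 35°C return / 20°C water is taken from the text; the linear scaling and the derived unit counts are illustrative assumptions, not vendor performance data.

```python
import math

REF_CAPACITY_KW = 20.0     # stated capacity at the reference condition
REF_DELTA_T = 35.0 - 20.0  # reference return-air minus water-inlet (deg C)

def approx_capacity_kw(return_air_c, water_inlet_c):
    """First-order capacity estimate, proportional to the air/water delta-T."""
    return REF_CAPACITY_KW * (return_air_c - water_inlet_c) / REF_DELTA_T

def units_needed(total_load_kw, return_air_c, water_inlet_c):
    """Fewer units are needed as per-unit capacity rises with return temp."""
    per_unit = approx_capacity_kw(return_air_c, water_inlet_c)
    return math.ceil(total_load_kw / per_unit)
```

Under this simplification, a 300kW room at 35°C return air needs 15 units, while raising the return temperature enough to reach 30kW per unit cuts that to 10, which is the cost effect the article describes.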
The standard-temperature S variants are similar in appearance to the H variants, at least on the outside, but inside there are differences reflecting the fact that they operate most effectively at water-inlet temperatures up to 15°C. Their cooling capacity is lower, as is the air-flow rate through them, but so too is the power consumed.
As with the H variants, air is passed over a coil which is cooled by eight fans, but this variant utilises a coil with the fins oriented horizontally. Unlike in the high-temperature units, some condensation may occur on the coil, which may then be extracted using a condensate pump.
Compared with the high-temperature units, the S variants have lower cooling capacities but also lower power consumption, at 1.0kW under maximum operating conditions.
For ease of movement around a data centre, both types of in-row unit are equipped with castors. Both variants also have a new built-in touch-screen display for easier control of the units’ settings and clearer communication of changes in performance and alarms.
Both units achieve key energy savings thanks to tight air-flow management using an active flow controller. Traditionally, in-row cooling units have had temperature sensors located at the front of the racks, whose output is used by a control mechanism to adjust the fan speed.
The active-flow controllers provide more accurate adjustments, matching the speed of the fans in the new units even more closely to the load generated by the IT equipment. This matters because the relationship between fan speed and power consumption is not linear: a reduction in fan speed of 20%, easily achievable during off-peak hours, can cut electrical power consumption by around 45%, so even slight reductions in fan speed yield disproportionate energy savings.
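The disproportionate saving follows from the fan affinity laws: for a given fan, power drawn scales roughly with the cube of its speed. The tiny sketch below shows the ideal arithmetic, under which an 80% speed gives about 51% power, a saving of roughly 49% (consistent with the ~45% figure quoted above once real-world losses are allowed for). Purely illustrative, not a vendor model.

```python
def fan_power_fraction(speed_fraction):
    """Ideal fan power, as a fraction of full power, at a given speed fraction.

    Fan affinity law: power scales with the cube of rotational speed.
    """
    return speed_fraction ** 3

# 20% speed reduction -> 0.8**3 = 0.512 of full power, ~49% ideal saving
saving = 1.0 - fan_power_fraction(0.8)
```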
Overall solution design
Schneider Electric’s ISX Designer tool allows an overall cooling solution for a data centre to be developed comprising many different types of equipment. The tool allows operators to calculate the capacity and performance they need under a variety of conditions and to simulate the operation, including contingency plans in the case of a failure of a single piece of equipment.
Optimised management, made possible by the sharing of data between all units in the cooling solution, from in-row units to external chillers and heat exchangers, helps to reduce overall power consumption by dynamically changing the chilled-water inlet set point according to the overall load of the centre at any point in time. This makes maximum use of the “free cooling” potential of ambient air to pre-cool warm air before it is circulated back into the data centre.
For example, when the facility is working at maximum load, requiring maximum power consumption by the cooling equipment, a certain chilled-water set point will be used. As the load drops from its maximum throughout the day, the indoor units sense this and reduce the cooling effort to the level that is necessary.
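The load-following behaviour just described can be sketched as a simple mapping from measured IT load to chilled-water set point: full load holds the water at its coldest, while lighter loads let it drift warmer, extending the free-cooling window. The interpolation range and function name here are assumptions for illustration, not the actual control algorithm.

```python
def chilled_water_setpoint_c(load_fraction, min_c=15.0, max_c=20.0):
    """Map facility load (0.0-1.0) to a chilled-water set point (deg C).

    Full load -> coldest water (min_c); no load -> warmest water (max_c).
    Assumed linear interpolation between the two for illustration.
    """
    load_fraction = min(max(load_fraction, 0.0), 1.0)  # clamp to [0, 1]
    return max_c - (max_c - min_c) * load_fraction
```

With these assumed bounds, full load gives 15°C water, half load 17.5°C and an idle room 20°C; a real controller would also respect the dew-point and supply-air constraints discussed earlier.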
Conclusion
By matching cooling effort to the immediate requirements of the IT equipment in the data centre in an accurate and timely fashion, the entire cooling operation can be managed in a cost-effective and efficient manner, ensuring safe operation of the vital IT equipment at a predictable and manageable cost.