DEEP (Data Center Efficiency Evolution Program) offers evaluation criteria for optimizing a data center's mechanical systems to promote sustainability, improve efficiency, and lower operating costs. While retrofitting older systems with more sustainable technology can be costly, updating them now is far cheaper than absorbing the downtime caused by equipment degradation or failure. Beyond necessary upgrades, simple steps can also ensure optimal efficiency in a data center's mechanical systems, such as making proper use of nearby water sources for free cooling and temperature control. The best practices below can help improve your data center's mechanical systems.
Free air cooling
As energy costs and IT power consumption continue to rise, cooling can account for more than 50% of a data center's annualized operating costs and approximately 40% of its power usage. To reduce these costs, free air cooling is gaining popularity as an alternative to more traditional mechanisms. Free air cooling typically uses ambient outdoor air, filtered and humidified as needed, to reduce or eliminate the cost and energy draw of mechanical cooling systems. Cooling systems that take this approach are usually called fresh air or air-side economizers.
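The enable/disable decision for an air-side economizer can be sketched as a simple control check. A minimal sketch follows; the temperature and humidity thresholds are illustrative assumptions, not values from the text or from any standard:

```python
def economizer_mode(outdoor_temp_f, return_air_temp_f, outdoor_rh,
                    max_outdoor_temp_f=65.0, max_rh=80.0):
    """Decide how much free air cooling to use (illustrative thresholds).

    Returns 'full' when outdoor air alone can cool the room,
    'partial' when it can pre-cool but mechanical cooling must assist,
    and 'off' when outdoor conditions are unsuitable.
    """
    if outdoor_rh > max_rh:
        return "off"      # too humid to filter and condition economically
    if outdoor_temp_f <= max_outdoor_temp_f:
        return "full"     # outdoor air is cold enough on its own
    if outdoor_temp_f < return_air_temp_f:
        return "partial"  # outdoor air is still cooler than return air
    return "off"

print(economizer_mode(50, 75, 40))   # full
print(economizer_mode(70, 75, 40))   # partial
print(economizer_mode(80, 75, 40))   # off
```

Real economizer controls compare enthalpy rather than dry-bulb temperature alone, but the structure of the decision is the same.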
There are two main free cooling options for a data center: 1) an integral free cooling or economizer coil on computer room air handlers or DX air conditioners, which is ideal for sites with limited space; or 2) a centralized cooling tower and heat exchanger that works alongside the chiller. Data centers can also use independent free coolers with direct and/or indirect evaporative cooling, which offer higher heat-exchange efficiency. Independent free coolers have shown energy savings of up to 70% versus mechanical refrigeration.
Other free cooling methods include a strainer cycle, which removes debris from cooling tower water; a plate-and-frame heat exchanger, which transfers heat directly from the chilled water loop to the cooling tower loop; and refrigerant migration, which uses a valve arrangement to open a direct path between the condenser and evaporator.
Reducing cooling system usage can also drastically reduce data center power consumption and repairs, lowering energy and maintenance costs for facility owners. Some vendors claim that free air cooling systems can cut OpEx by up to 30% while also allowing substantial savings on CapEx.
Variable-speed fans
CRAC (computer room air conditioning) unit fans consume a lot of power, typically accounting for 5% to 10% of a data center's total energy use, according to Energy Star. Most CRAC units are unable to vary their fan speeds with the data center's server load, which fluctuates continuously. To accommodate these fluctuations, variable-speed drives (VSDs) can be used.
VSDs control the speed of electric motors in cooling applications, providing only the energy a given operation requires at any given time. As such, Mission Control states that VSDs offer not only increased energy efficiency but also improved reliability, availability, operability, and scalability, among other benefits. For example, reducing a fan's speed by 10% (after VSD installation) can cut that fan's electrical usage by approximately 25%, and a 20% speed reduction can yield electrical savings of roughly 45% annually.
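The savings figures above roughly follow the fan affinity laws, under which fan power scales with the cube of fan speed. A quick sketch of that relationship (an idealized model that ignores motor and drive losses):

```python
def fan_power_savings(speed_reduction_pct):
    """Approximate power savings from the fan affinity (cube) law:
    fan power scales with the cube of fan speed."""
    remaining_speed = 1.0 - speed_reduction_pct / 100.0
    return (1.0 - remaining_speed ** 3) * 100.0

print(round(fan_power_savings(10), 1))  # ~27% vs. the article's ~25%
print(round(fan_power_savings(20), 1))  # ~49% vs. the article's ~45%
```

The cube law is an upper bound; real-world savings come in somewhat lower, which is consistent with the quoted 25% and 45% figures.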
Many other measures to increase energy efficiency in mechanical systems, including free air cooling, rely on VSDs installed in CRAH and CRAC units to reach their full potential. However, keep in mind that not all CRAH and CRAC units can be retrofitted with VSDs, so it's best to ensure that your next CRAC/CRAH unit comes with pre-installed variable-speed fans.
AI/ML Software
Fortunately for data center operators, AI's use cases are growing in number and sophistication while its overall costs are decreasing. Operators should therefore make AI/ML a key part of their planning and construction processes, including retrofit and data strategies. AI/ML can provide improved visibility into legacy data systems, for example, and can increase efficiency and lower the risk of error when retrofitting modern technology (such as VSDs) onto older systems. Similarly, AI-embedded sensors can immediately warn data center teams about possible defects, reducing the possibility of costly downtime. Overall, installing AI/ML systems can significantly reduce the risk of equipment failures or power outages.
Waterside Economizers
For data centers with chilled water plants that use local air- or water-based cooling systems, a waterside economizer is a viable way to increase energy efficiency. Waterside economizers use the evaporative cooling capacity of a cooling tower to produce chilled water and can stand in for a chiller during the winter months. They also provide cooling redundancy if a chiller goes offline, lowering the risk of operational downtime and the financial setbacks that follow. Consistent use of a waterside economizer has been shown to reduce the energy costs of a chilled water plant by up to 70%.
However, waterside economizers tend to work best in climates where the wet bulb temperature stays below 55 degrees Fahrenheit for 3,000 hours or more per year. Fortunately, an Energy Star report finds that most of the U.S. falls within these parameters, excluding only the extreme Southwest and part of the Southeast.
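The 3,000-hour viability check above is easy to run against a site's hourly weather data. A minimal sketch, assuming you already have a year of hourly wet-bulb readings in degrees Fahrenheit (the synthetic readings below are illustrative, not real climate data):

```python
def economizer_hours(wet_bulb_temps_f, threshold_f=55.0):
    """Count hourly wet-bulb readings at or below the threshold."""
    return sum(1 for t in wet_bulb_temps_f if t <= threshold_f)

def waterside_economizer_viable(wet_bulb_temps_f,
                                threshold_f=55.0, min_hours=3000):
    """Apply the article's rule of thumb: viable if the wet bulb stays
    at or below 55°F for 3,000+ hours per year."""
    return economizer_hours(wet_bulb_temps_f, threshold_f) >= min_hours

# Synthetic year (8,760 hours): 4,000 cool hours, 4,760 warm hours.
readings = [50.0] * 4000 + [70.0] * 4760
print(waterside_economizer_viable(readings))  # True
```

Hourly wet-bulb series for U.S. sites are available in TMY (typical meteorological year) datasets, which slot directly into a check like this.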
Teamed CRAHs
A typical rack of servers can produce exhaust air as hot as 100°F, yet data centers and other computer facilities must maintain an ambient temperature of about 68°F. CRAC, HVAC, and CRAH systems must hold this temperature throughout the data center; otherwise, they put its systems at risk of failing or shutting down.
While CRAC units, which use refrigerants and a compressor, are a common option for data centers, they have drawbacks and limitations to consider. Although they're quite capable of maintaining a data center's air distribution, humidity, and temperature, as well as absorbing heat from devices and blowing cool air to servers, they tend to use a considerable amount of power (up to 15% of a data center's total usage). CRACs also have difficulty adjusting to temperature fluctuations and require a thorough inspection of the data center before installation to optimize airflow. Due to these complications, CRACs are recommended only for small to midsize data centers with no immediate need to scale.
A computer room air handler (CRAH), by contrast, uses fans, cooling coils, and a water chiller system to remove heat. CRAHs blow air over cooling coils much like CRAC units do, but rely on chilled water rather than refrigerants to maintain proper temperatures. Teamed CRAH units can also dynamically adjust to temperature fluctuations with minimal human intervention, and can be outfitted with technologies such as VSDs to further decrease energy usage and increase efficiency. Overall, while CRAH units tend to cost more up front, they're ultimately much more cost-effective and energy efficient for larger or hyperscale data centers with access to local water sources.
Chiller Optimization/Set Points
While ASHRAE recommends 60°F as the maximum chilled water supply temperature set point (CHWSTmax), it also notes that this value can vary with a given data center's installed chillers and coils. Despite ASHRAE's recommendation, however, chilled water set points tend to be set by IT management, often dictated by precedent or "inherent IT conservatism," according to Upsite. Without carefully analyzing the optimal temperature for each point in a data center, complications such as equipment outages and dew point issues (in which cooling coils condense moisture out of the air) can ensue.
More importantly, Upsite claims, arbitrarily established low set points can drive up the energy costs of a data center cooling plant. Beyond using AI/ML software to determine the optimal cooling unit set point for each location in a data center, energy costs can also be lowered by grouping equipment with similar heat load densities and temperature requirements. Isolating equipment into these categories allows cooling systems to be controlled to the least energy-intensive set point for each location.
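The grouping logic described above can be sketched briefly: within each zone, the set point is limited by the most temperature-sensitive unit, so segregating sensitive and tolerant equipment lets the tolerant zones run warmer. The unit names, zones, and temperature limits below are illustrative assumptions, not data from the text:

```python
from collections import defaultdict

def zone_set_points(equipment):
    """Given unit -> (zone, max_allowable_supply_temp_f), return the
    warmest (least energy-intensive) set point each zone can run at,
    which is capped by its most sensitive unit."""
    zones = defaultdict(list)
    for name, (zone, max_temp) in equipment.items():
        zones[zone].append(max_temp)
    return {zone: min(temps) for zone, temps in zones.items()}

units = {
    "rack-a1": ("high-density", 68.0),
    "rack-a2": ("high-density", 70.0),
    "rack-b1": ("low-density", 78.0),
    "rack-b2": ("low-density", 75.0),
}
print(zone_set_points(units))
# {'high-density': 68.0, 'low-density': 75.0}
```

Without the grouping, a single shared plant would have to run at 68°F for everything; with it, the low-density zone can run 7°F warmer.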
Conclusion
Clearly, there are multiple factors to consider when optimizing your data center's mechanical systems. Fortunately, that also means there are multiple options for reducing costs and the risk of operational downtime. DEEP's data center evaluation criteria highlight these options as the best means of optimizing both efficiency and sustainability for continued long-term success and quality of service.