NSIDC Green Data Center

Awards

Governor's Award for High Impact Research, 2011

University of Colorado Boulder Campus Sustainability Award, 2012

Uptime Institute Green Enterprise IT Award, Facility Retrofit, 2012

The Green Data Center Project

Overview

Ongoing developments in digital technology and communication are driving rapid growth in the Information Technology (IT) industry. Worldwide capacity for data storage and processing is expanding quickly as more and more industries and companies need to provide or store data for clients as well as for internal purposes.

Data centers provide a clean, safe, and stable environment in which servers can be maintained, physically and virtually, at all hours of the day. Data centers across the U.S. house anywhere from a few dozen to a few thousand servers. Many of these computers are meticulously maintained for peak performance, and the rooms where they are housed must be kept very clean, free of static, and within specific air quality standards. So while improvements in energy efficiency are important, air quality and reliability are also critical to data centers.

The Green Data Center project, funded by the National Science Foundation (NSF) with additional support from NASA, demonstrates NSIDC's commitment to reducing its impact on the environment and shows that there is significant work to be done in improving the efficiency of data centers in the U.S.

Significance of the Project

The project developed an optimized, low-energy solution for a small data center in a climate with opportunities for evaporative cooling. While there has been considerable recent research on cooling technologies for data center applications, most work has focused on larger facilities with very high density heat sources and has often included direct cooling of heat sources with integrated cooling technologies. The application addressed in this project is representative of a smaller data center that is integrated into a larger building with other heating and cooling loads.

The project focuses on compressorless cooling strategies, specifically the use of outdoor air for cooling in conjunction with indirect-direct evaporative cooling processes. While these strategies can offer major energy savings, they are limited to climates with favorable outdoor conditions and can be affected by extreme conditions.

While the research is valuable in exploring novel approaches to data center energy management, the project was also designed to engage undergraduate and graduate students in an interesting and relevant research experience. For example, the Green Data Center Monitoring System was created by team member Benjamin Weerts, a CU graduate student in Architectural Engineering and a Graduate Research Assistant at NSIDC. He set up a Campbell Scientific data logger to collect temperature, humidity, airflow, and electrical power measurements. These measurements allow characterization of the heat gain from the data center equipment, the heat removed by the cooling system, the energy consumption of the cooling equipment, and the uniformity of conditions within the data center. The data are analyzed and aggregated to describe instantaneous power and thermodynamic conditions, as well as long-term system efficiency. The monitoring data are augmented by one-time measurements of air velocities and pressures. These data will also be used for Benjamin's master's thesis on the characterization of data center efficiency and performance.
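As an illustration of how such logged measurements might be reduced, the sketch below computes the heat removed by the cooling air and a simple efficiency metric. It is a minimal Python sketch under stated assumptions: the variable names, constant values, and the use of a sensible-heat balance are illustrative and are not taken from the project's actual monitoring code.

    # Minimal sketch: reduce logged readings to heat removed and a cooling-
    # efficiency metric. Constants and channel names are assumptions.
    RHO_AIR = 1.0    # kg/m^3, approximate air density at Boulder's altitude
    CP_AIR = 1006.0  # J/(kg K), specific heat of air at constant pressure

    def heat_removed_kw(airflow_m3_s: float, t_return_c: float, t_supply_c: float) -> float:
        """Sensible heat removed by the cooling air, in kW (Q = m_dot * cp * dT)."""
        mass_flow = RHO_AIR * airflow_m3_s  # kg/s
        return mass_flow * CP_AIR * (t_return_c - t_supply_c) / 1000.0

    def cooling_efficiency(heat_removed: float, cooling_power_kw: float) -> float:
        """Ratio of heat removed to cooling-equipment power (a COP-like metric)."""
        return heat_removed / cooling_power_kw if cooling_power_kw > 0 else float("inf")

    # Example with made-up readings from one logging interval
    q = heat_removed_kw(airflow_m3_s=4.0, t_return_c=29.0, t_supply_c=18.0)
    print(f"Heat removed: {q:.1f} kW, efficiency: {cooling_efficiency(q, 11.0):.1f}")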

In addition to the research and educational components, the project fosters multi-unit collaborative activities among a national laboratory, a major computer company, a CU research institute, and a CU academic department.

Components of the Project

The project included:

  1. installation of a replacement cooling system featuring external air and evaporative cooling systems
  2. consolidation of IT facilities into a single server room
  3. virtualization of servers and arrangement of systems in a hot/cold aisle rack configuration
  4. solar panels to generate electrical power
  5. a battery array, connected to the solar system, to substitute for the lack of a backup generator
  6. upgraded internet connectivity to 1 Gbps.

The cooling system, rooftop solar power array, and improved connectivity will reduce the computer center’s carbon footprint and reduce the power needed for active cooling by 90 to 100 percent. Refer to Figure 1.

Figure 1. Cooling System Diagram Showing the Old Configuration and the New Configuration for the Green Data Center

Cooling System

The new cooling system replaced the older of two existing Direct Expansion (DX) Computer Room Air Conditioning (CRAC) units. The new system will serve as the primary cooling and humidification source, and the remaining DX CRAC unit will serve as backup and operate during conditions with high outdoor wet bulb temperatures.

Air Quality Standards

Humidity levels in a data center are very important. Too much moisture in the air allows condensation and short circuits that could damage the delicate integrated circuits, power supplies, and other hardware. Too little humidity allows static charge to accumulate, and the damage from a single large discharge could also be significant. Excessive ambient air temperature can also degrade circuit performance and reliability, impeding the productivity of the computer.

Computers use a significant amount of electrical energy. Most of this energy is converted into heat within the computer, which must then be exhausted; otherwise, damage to the electronics could occur. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has established a range of acceptable air states, called the psychrometric envelope, for data centers and computer rooms. The limits of this psychrometric envelope are shown in Table 1. Note: The same values presented in Table 1 can be seen on a psychrometric chart in Figure 2.

In order to maintain an air state within the limits described in Table 1 and Figure 2, the type and control of the Heating, Ventilation, and Air Conditioning (HVAC) system for a data center is a vital consideration. Thus, NSIDC aimed to meet the ASHRAE Allowable Class 1 Computing Environment, as shown in Figure 2. A simple check of a measured air state against these limits is sketched after Figure 2.


Table 1. 2008 ASHRAE Recommended and Allowable Psychrometric Envelopes for Data Centers
Property            Recommended                     Allowable (Class 1)
Temperature (DB)    64°F - 81°F                     59°F - 90°F
Moisture            42°F DP - 60% RH & 59°F DP      20% - 80% RH & 63°F DP

Figure 2. ASHRAE Recommended Operating Conditions for Data Centers
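To make the envelope concrete, the following minimal sketch checks a measured air state against the recommended limits in Table 1. It is illustrative only: it treats the dew-point and relative-humidity bounds as independent thresholds rather than tracing the exact boundary on the psychrometric chart.

    # Minimal sketch: test an air state against the 2008 ASHRAE recommended
    # envelope from Table 1 (temperatures in deg F, relative humidity in %).
    def within_recommended(dry_bulb_f: float, dew_point_f: float, rel_humidity: float) -> bool:
        """True if the air state falls inside the recommended envelope.

        Simplification: the moisture bound is treated as a dew-point range
        plus an RH ceiling, not the exact chart boundary.
        """
        temp_ok = 64.0 <= dry_bulb_f <= 81.0
        moisture_ok = 42.0 <= dew_point_f <= 59.0 and rel_humidity <= 60.0
        return temp_ok and moisture_ok

    # Example: a typical cold-aisle reading
    print(within_recommended(dry_bulb_f=72.0, dew_point_f=50.0, rel_humidity=45.0))  # True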

The Cooling System Equipment

Old Equipment

The data center was cooled by a packaged, standard air conditioning system that utilized a DX heat removal process with a liquid refrigerant. This is the same type of process that runs the air conditioning in a car or refrigerator. Although a robust and mature technology, it has come under scrutiny over the past few decades as other technologies capable of producing the same amount of cooling have entered the market and become economically viable.

The DX process requires a synthetic, ozone-depleting refrigerant (R-22) and substantial energy to run a compressor. Of all the parts in a DX air conditioner, the compressor requires the most energy and is inherently inefficient. Until the past few years, typical data centers have been cooled by these packaged DX systems, commonly referred to as CRAC units.

These systems were economical in the past due to their off-the-shelf packaging, compact size, and simple controls. However, as the need for reduced energy use and environmentally friendly technology becomes more and more prevalent, the DX process is quickly becoming antiquated. NSIDC operated two CRAC units full-time until June 2011.

New Equipment

Figure 3. Air Handling Unit — Credit: Ben Weerts

The Green Data Center project installed a unique cooling system that combines airside economization with new air conditioners based on the efficient Maisotsenko Cycle. The new cooling system consists of a rooftop Air Handling Unit (AHU) powered by a 10 horsepower variable speed fan motor, eight Coolerado® air conditioners, and hot/cold aisle containment; the containment arrangement is visible in the new configuration portion of Figure 1.

Airside economization is not a new technology, but it does add some complexity to the control system. An airside economizer is a control mode that allows the AHU to cool the space solely with outdoor air when the outdoor air is cooler than the indoor air. This is commonly referred to as free cooling. In this mode, no active air conditioning is required. As stated previously, humidity is an issue for computers and electronic systems. Thus, in many locations, particularly the Midwest and East Coast, airside economization may not be practical because of hot, humid climates. However, Colorado's climate is much drier year round and cool enough for about six months of the year, so an airside economizer is a viable option for maintaining an air state within the ASHRAE limits.
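The mode selection can be summarized with the simplified sketch below. The set points and fallback order are illustrative assumptions; the actual AHU sequence of operations and lockout logic are more involved and are not taken from the project documentation.

    # Minimal sketch of airside-economizer mode selection. Set points are
    # illustrative assumptions, not NSIDC's actual control sequence.
    RETURN_AIR_F = 80.0      # assumed hot-aisle / return-air temperature
    MAX_OUTDOOR_DP_F = 59.0  # keep outdoor air within the moisture envelope

    def select_mode(outdoor_db_f: float, outdoor_dp_f: float) -> str:
        """Pick a cooling mode from outdoor dry-bulb and dew-point temperatures."""
        if outdoor_db_f < RETURN_AIR_F and outdoor_dp_f <= MAX_OUTDOOR_DP_F:
            return "economizer"   # free cooling with outdoor air only
        if outdoor_dp_f <= MAX_OUTDOOR_DP_F:
            return "evaporative"  # indirect-direct evaporative cooling
        return "dx_backup"        # fall back to the remaining CRAC unit

    print(select_mode(outdoor_db_f=55.0, outdoor_dp_f=35.0))  # economizer
    print(select_mode(outdoor_db_f=95.0, outdoor_dp_f=45.0))  # evaporative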

Figure 4. Image showing the air supply duct from the Air Handler (top of image) and eight cooling units (sides of image) — Credit: Ben Weerts

The centerpiece of the new system is a series of Coolerado® air conditioners that utilize the Maisotsenko Cycle. This cycle uses both direct and indirect evaporative cooling to produce a supply air state that is 30 to 40 degrees Fahrenheit below the incoming air temperature. A patented Heat and Mass Exchanger (HMX) divides the incoming air stream into two streams: Working Air (WA) and Product Air (PA). The WA is directed into channels with wetted surfaces and cooled by direct evaporation. At the same time, adjacent channels carry PA without any added water or wetted surface. The adjacency of these airstreams allows for indirect evaporative cooling of the PA stream: heat is transferred from the PA stream through an impermeable surface into the wetted WA channels, where it drives evaporation. Cool PA is delivered directly to the computer room. The WA, warm and saturated, is ultimately exhausted to the outside or directed into the space when room humidity is below the humidity set point. Room humidity can drop below 25 percent relative humidity when the outdoor air is very dry, which happens often in Colorado, and the AHU is in economizer mode, supplying a significant amount of outdoor air to the room. In the winter months, this effect is even more pronounced, and the WA is directed into the space most of the time.
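One common way to express the performance of an indirect-direct evaporative process of this kind (offered here as an illustration, not a figure reported by the project) is the dew-point effectiveness, which compares the achieved temperature drop with the largest drop the cycle could theoretically deliver:

    \varepsilon_{dp} = \frac{T_{\mathrm{db,in}} - T_{\mathrm{supply}}}{T_{\mathrm{db,in}} - T_{\mathrm{dp,in}}}

Because Colorado's air is dry, the denominator is large, which is why a 30 to 40 degree Fahrenheit drop in supply temperature is achievable without a compressor.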

Note that this cooling process has no compressor or condenser in its cycle. Air from the AHU can now be cooled to a comparably low temperature using, on average, one tenth of the energy that the CRAC system would require. Water is used in this cycle, but only once (single pass). Based on measurements, all eight of the Coolerado units consume an average of one gallon of water per minute in moderate cooling mode.

Power

The old CRAC cooling system consumed an average of almost 29 kW of power during the months of May through October of 2010. This year (2011), during the same months, the new system only consumed an average of just over 11 kW of power. For these months, there was a reduction of over 60 percent in cooling power. This reduction will continue to increase through the winter as the system is in economizer mode almost exclusively, which uses only about six percent of the typical CRAC power. The Green Data Center Project was completed at a relatively modest expense and will translate directly into significantly reduced energy costs for NSIDC in the coming years. Refer to Figure 5.
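For reference, the quoted reduction follows directly from the rounded averages above:

    \frac{29\ \mathrm{kW} - 11\ \mathrm{kW}}{29\ \mathrm{kW}} \approx 0.62,

or a reduction of just over 60 percent in cooling power.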

Figure 5. Graph Comparing NSIDC Energy Use with the Old CRAC System

Solar Power

The system consists of rooftop panels designed to minimize the carbon footprint by using solar energy to support backup systems. Power provided by the solar panels will charge a battery system, with excess power returned to the grid.

IT Facility Upgrade

The IT facility upgrade consisted of virtualizing servers in order to consolidate the IT facility into a single server room. The upgraded facility will act as a node on the University of Colorado Boulder's (UCB's) Center for Campus Cyberinfrastructure, providing campus-wide interconnectivity and support.

The upgraded facility will:

  1. Improve NSIDC’s infrastructure while broadening participation through increased computational power and data storage for supporting modern science by:
    • providing a basis for collaborative efforts – computational, sharing data/products, communication
    • providing increased reliability of virtual servers, cooling, and power.
  2. Function as a prototype facility as:
    • a demonstration center for the University, the State of Colorado, and Xcel Energy
    • an illustration of how to reduce the carbon footprint of an IT facility
    • the initial facility for the proposed NSF DataNet effort.
  3. Provide energy savings:
    • in Boulder, each kW of installed solar capacity generates 1,500-1,800 kWh annually. As the installation will be optimized for angle and azimuth, it should produce close to 36,000 kWh annually (the implied array size is worked out after this list)
    • the combination of the solar array and new cooling system will significantly reduce the facility’s peak power demand during the summer air conditioning season
      1. The solar array will reduce the computer related power demand on the regional system from 30 kW to 10 kW during the day time
      2. The new cooling system will reduce the peak load on the regional system from 70 kW to below 25 kW.
  4. Comply with national goals, such as:
    • carbon footprint reduction
    • supporting climate research
    • training scientists and engineers for the future
    • providing cyberinfrastructure in support of research.
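Taking the yield figures in item 3 at face value, the implied size of the rooftop array is roughly

    \frac{36{,}000\ \mathrm{kWh/yr}}{1500\text{--}1800\ \mathrm{kWh/(kW \cdot yr)}} \approx 20\text{--}24\ \mathrm{kW}\ \text{installed},

a back-of-the-envelope estimate that is consistent with the roughly 20 kW daytime reduction in computer-related power demand noted in item 3.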

Summary

The innovative data center redesign slashed energy consumption for data center cooling by more than 90 percent, demonstrating how other data centers and the technology industry can save energy and reduce carbon emissions. The heart of the design is new cooling technology that uses 90 percent less energy than traditional air conditioning, combined with an extensive rooftop solar array, for a total energy savings of 70 percent. The new evaporative cooling units not only save energy but also cost less to maintain.

The Green Data Center project will be completed by the end of 2011. The goals of NSF, NASA, and NSIDC have been met in terms of a more efficient data processing infrastructure and significantly reduced energy consumption. As a leading institution for climate research, NSIDC was committed to reducing its own energy use, and it hopes this project will serve as a showcase for low-energy retrofits of other small data centers for years to come.