Environmental issues exposed by the media and driven by consumers have placed IT departments under pressure to develop "green" data centers. Factors including the reduction of energy consumption and carbon dioxide emissions in large data centers often provide the impetus for becoming green. A recent report issued by the United States Environmental Protection Agency (EPA) highlights the need for data center efficiency and demonstrates the increased pressure that IT departments are under.
A green data center is defined as one in which the mechanical, lighting, electrical and computer systems are designed for maximum energy efficiency and minimum environmental impact. The construction and operation of a green data center involve advanced technologies and strategies, for example: reducing the energy consumption of the data center; minimizing building footprints; maximizing cooling efficiency; using low-emission building materials, carpets and paints; installing catalytic converters on backup generators; and using alternative energy technologies such as photovoltaics, heat pumps and evaporative cooling.
Energy consumption is considered the dominant factor in determining whether a facility is green. According to a 2007 Gartner report entitled 2006 Data Center Polling Results: Power and Cooling, the power demands of equipment have grown fivefold or more over the last five years. In fact, companies now spend more on power to run a server over its lifetime than they spend in capital expense to purchase it.
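To see how running costs can overtake the purchase price, a back-of-the-envelope comparison helps. The figures below (server wattage, purchase price, electricity rate, PUE overhead) are illustrative assumptions, not data from the article:

```python
# Illustrative comparison of lifetime power cost versus purchase price
# for a single server. All numeric inputs are assumptions for the sake
# of the example.

def lifetime_power_cost(watts, years, rate_per_kwh, pue=2.0):
    """Energy cost of running a server continuously, with a PUE
    multiplier covering cooling and power-distribution overhead."""
    hours = years * 365 * 24
    kwh = watts / 1000 * hours * pue
    return kwh * rate_per_kwh

capex = 3000  # assumed purchase price (USD)
cost = lifetime_power_cost(watts=400, years=4, rate_per_kwh=0.12)
print(round(cost))   # → 3364 (USD of electricity over 4 years)
print(cost > capex)  # → True: power exceeds the capital expense
```

Under these assumptions, four years of electricity already outweighs the hardware cost; a higher PUE or a longer service life widens the gap further.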
IT executives therefore need to start investigating alternative ways to build an energy-efficient data center. By following these six simple steps, IT executives can come closer to achieving their vision of a green data center:
STEP 1: Virtualize and consolidate: The basic concept of virtualization is simple: encapsulate computing resources and run them on shared physical infrastructure in such a way that each appears to exist in its own separate physical environment. This is accomplished by treating storage and computing resources as an aggregate pool from which capacity for networks, systems and applications can be drawn on an as-needed basis.
In addition, measurements indicate that often a single server utilizes only 5 to 15 percent of its capacity to service applications. With virtualization, the consolidation of under-utilized servers is seamless to the end user and significantly reduces power consumption.
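The consolidation arithmetic behind this step can be sketched as follows. The 5 to 15 percent utilization figure comes from the article; the server counts, the 60 percent target utilization, and the per-server power draw are illustrative assumptions:

```python
# Sketch of the server-consolidation savings described in Step 1.
import math

def consolidated_count(servers, avg_util, target_util=0.60):
    """Hosts needed if the same workload is packed onto virtualized
    servers run at a higher target utilization."""
    total_work = servers * avg_util
    return math.ceil(total_work / target_util)

before = 100                    # physical servers, ~10% utilized each
after = consolidated_count(before, avg_util=0.10)
watts_per_server = 400          # assumed average draw per server
savings = (before - after) * watts_per_server

print(after)    # → 17 hosts carry the same workload
print(savings)  # → 33200 watts no longer being drawn
```

The simple ratio ignores memory, I/O and redundancy constraints that real capacity planning must respect, but it shows why consolidating lightly loaded servers cuts power consumption so sharply.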
STEP 2: Determine your cooling requirements: Most data center cooling systems in service today were deployed on the assumption that the load would be spread uniformly; that is, the load in any given area would never be far greater than its relative share of the total data center space.
However, according to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the typical data center that employs dense form-factor servers and storage averages 5,000 watts per square foot of equipment space. Compare that figure to the 2,000 watts used in the same amount of space in 2002: energy use and heat density have more than doubled in five years. The increases show no sign of slowing, and more heat requires more cooling, which in turn consumes more energy.
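The ASHRAE figures above imply a steep compound growth rate in heat density. The 2,000 and 5,000 watts-per-square-foot values come from the article; the five-year projection is purely illustrative:

```python
# Deriving the compound annual growth rate implied by the ASHRAE
# heat-density figures (2,000 W/sq ft in 2002 to 5,000 W/sq ft five
# years later), then extrapolating it as an illustration.

def annual_growth(start, end, years):
    """Compound annual growth rate between two measurements."""
    return (end / start) ** (1 / years) - 1

rate = annual_growth(2000, 5000, years=5)
print(f"{rate:.1%}")        # → 20.1% growth per year

# If the trend held, density a further five years out:
projected = 5000 * (1 + rate) ** 5
print(round(projected))     # → 12500 W/sq ft
```

A sustained rate of roughly 20 percent per year illustrates why cooling capacity planned around the old uniform-load assumption is quickly overwhelmed.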
STEP 3: Determine optimal placement of your equipment: Even assuming you have enough cooling capacity for the equipment installed today, you must still consider where that equipment is placed. The Uptime Institute surveyed 19 data centers and reported that, on average, only 40 percent of the cold air went directly to cooling the servers in the room. Adopting an alternating hot-aisle/cold-aisle layout is optimal and can correct many cooling problems in a typical data center. Correct placement of vented tiles and closely coupled cooling systems are other techniques that can improve cooling efficiency.