Let’s talk about a hot topic from this year’s US presidential election: energy. Both major-party candidates agreed that US energy policy needs some serious improvement, although they differed wildly on the best means of achieving those improvements. The IT industry worldwide has seized on the idea of energy conservation as a selling point, and companies all over are starting to recognize that they can save significant amounts of money by improving the efficiency of their IT operations.

How can you tap into these savings in your Microsoft Exchange Server environment? Simply put, the answer is to turn off as many devices as you possibly can. That’s what virtualization software vendors have been saying for years, but there are certainly ways to save energy other than by turning off the servers themselves. In the United States, the average cost of electricity for commercial use in July 2008 was 11.08 cents per kilowatt-hour (see “Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State”). Considering that cost estimate, what does it cost you to run your servers? The answer is a resounding, “It depends.”

The US Department of Energy’s Industrial Technologies Program devised a model, described in “Five Ways to Reduce Data Center Server Power Consumption,” that uses the CPU utilization of the servers as the primary factor for calculating power consumption. This model makes sense because other parts of the system have fairly constant power use. For example, hard disk drives spin all the time, RAM has to be refreshed, and monitors’ power draw is constant as long as they’re on. The formula for calculating power usage is simple:

Pn = (Pmax - Pidle) × n/100 + Pidle

To use this formula, you need to know the maximum amount of power a server can draw (Pmax), which is easy to derive from the rating of its power supply. You also need to know the server’s power draw at idle (Pidle), which you can figure out with an electricity usage meter such as a Kill A Watt. When you have those two factors, you can calculate the power draw for a given level of CPU utilization (n)—and the utilization data is easily available with Windows’ built-in performance monitoring utilities. That makes it easy for you to calculate how much power an Exchange server is using, and thus how much you might save by virtualizing it. This formula doesn’t take into account the use of multiple redundant power supplies, but you can factor those in as needed.
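To see the formula in action, here’s a minimal sketch in Python. The wattage figures are assumptions for illustration only (a 500W power supply and 200W idle draw); substitute your own power supply rating and metered idle figure. The electricity rate is the July 2008 US commercial average cited earlier.

```python
# Sketch of the DOE power model: Pn = (Pmax - Pidle) * n/100 + Pidle

def power_draw_watts(p_max, p_idle, cpu_util_pct):
    """Power draw at cpu_util_pct percent CPU utilization."""
    return (p_max - p_idle) * cpu_util_pct / 100 + p_idle

P_MAX = 500.0   # watts, from the power supply rating (assumed value)
P_IDLE = 200.0  # watts, measured at idle with a usage meter (assumed value)
RATE = 0.1108   # dollars per kWh (July 2008 US commercial average)

watts = power_draw_watts(P_MAX, P_IDLE, 40)  # server averaging 40% CPU
annual_kwh = watts * 8760 / 1000             # 8,760 hours in a year
print(f"{watts:.0f} W, about ${annual_kwh * RATE:.2f} per year")
# -> 320 W, about $310.59 per year
```

Run the same calculation at the lower average utilization you’d expect after consolidating workloads onto fewer servers, and the difference between the two annual figures is your estimated savings.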

What about disks? Their power usage is fairly constant and relatively small (say, 10 watts in normal operation). This amount might not seem like a big deal, but if you have enough disks—as you will in most Exchange environments—those power costs add up. Microsoft includes disk power consumption as a factor in its latest Exchange Server 2007 Storage Cost Calculator. To perform this calculation, you’ll need to know the power consumption for the disks you’re using and the number of disks you have. Multiply a disk’s wattage by your electricity rate to get the cost per disk per hour; multiply that by the number of hours the disk will run (there are 8,760 hours in a year) and by the number of disks, and you have the total cost.
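The disk math is the same multiplication chain, sketched below with assumed values: 10 watts per disk (the ballpark figure above) and the July 2008 commercial electricity rate.

```python
# Annual power cost of an always-spinning disk array.
# Wattage and rate are illustrative assumptions, not measured values.

WATTS_PER_DISK = 10.0    # assumed typical draw in normal operation
RATE = 0.1108            # dollars per kWh
HOURS_PER_YEAR = 8760

def annual_disk_cost(num_disks):
    """Total yearly electricity cost for num_disks always-on disks."""
    kwh = num_disks * WATTS_PER_DISK * HOURS_PER_YEAR / 1000
    return kwh * RATE

print(f"${annual_disk_cost(100):.2f} per year")  # e.g., a 100-disk array
# -> $970.61 per year
```

Even at a modest 10 watts apiece, a 100-disk Exchange storage group costs nearly a thousand dollars a year just to keep spinning, before you account for cooling.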

You’ll have other associated costs, too, such as the cost of cooling your computer systems. Cooling costs have turned out to be significant in many environments, especially with the wide use of multiprocessor and multicore systems. Bear in mind that a report commissioned by AMD, “Estimating Total Power Consumption by Servers in the U.S. and the World,” says that the growth in server power use worldwide comes mostly from having more servers, not from having higher power use per server. I expect to see lots of effort focused on reducing cooling costs; Intel has already demonstrated some promising results using fresh outside air as a cooling medium, as reported in “Intel's secret weapon: Fresh air,” and other major IT vendors such as HP and IBM are working on related products and technologies.