March 1, 2023
Even though in-house enterprise data centers can be shockingly inefficient (as IDC recently found), most data center designers and operators are looking to reduce their energy consumption, since it’s one of the biggest IT expenses. Budgets are tight, though, so large retrofits or new builds are often out of the question.
To increase energy efficiency and add the bonus of a lower carbon footprint, IT executives should perform a complete power consumption evaluation and then check out each of the following five areas of the data center.
Measure which parts of your data center are consuming the most power, and rank the subsystems from lowest to highest consumption.
From this measurement, you can identify the areas that need the most improvement, and then do some research to learn what new technologies can improve your biggest offenders. For example, you may be able to take advantage of free cooling instead of using CRACs throughout the year.
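If your PDUs or building management system can export per-circuit readings, even a short script can rank the subsystems and give you a PUE figure to track over time. Here’s a minimal sketch; the category names and kW readings are placeholders, not measurements from any real facility.

```python
# Sketch: rank data center subsystems by measured power draw and compute PUE.
# The readings below are placeholders -- substitute values exported from your
# PDUs, branch-circuit meters, or building management system.

readings_kw = {
    "servers": 310.0,      # IT load (kW)
    "storage": 85.0,
    "network": 40.0,
    "cooling": 260.0,      # CRACs, chillers, pumps
    "ups_losses": 35.0,    # conversion losses in the power train
    "lighting": 12.0,
}

it_load = readings_kw["servers"] + readings_kw["storage"] + readings_kw["network"]
total_facility = sum(readings_kw.values())
pue = total_facility / it_load

print("Subsystems, highest draw first:")
for name, kw in sorted(readings_kw.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {name:<12} {kw:7.1f} kW  ({kw / total_facility:5.1%} of total)")

print(f"\nPUE = {total_facility:.0f} kW / {it_load:.0f} kW = {pue:.2f}")
```

The ranking tells you where to focus first, and the PUE trend tells you whether later changes are actually working.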
Locate servers that can still be virtualized, if you have any, and migrate them to a cloud platform (in-house virtualization is totally cool). For unused or underused servers, migrate any active workloads onto another virtualized host; you can push utilization to 80 or 90% without any adverse effects.
There are several ways to discover unused servers. You can simply power them down and wait to see if anyone calls in with complaints. Similarly, you can assume the server is unused if nobody calls after an unplanned outage, so keep an eye on your logs. You can use monitoring and reporting tools to see if anyone is logging into the server and using applications. Or you can simply ask users.
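As a rough illustration of the monitoring-tool route, suppose you can export per-server averages (CPU utilization and days since the last interactive login) to a CSV. The filename, column names, and thresholds below are all assumptions to adapt to whatever your tooling actually reports.

```python
# Sketch: flag likely-idle servers from a monitoring export.
# "idle_candidates.csv" and its columns (hostname, avg_cpu_pct, last_login_days)
# are hypothetical -- adjust them to match your monitoring or reporting tool.

import csv

CPU_THRESHOLD_PCT = 5        # sustained average CPU below this is suspicious
LOGIN_THRESHOLD_DAYS = 90    # no interactive login within this window

with open("idle_candidates.csv", newline="") as f:
    for row in csv.DictReader(f):
        cpu = float(row["avg_cpu_pct"])
        idle_days = int(row["last_login_days"])
        if cpu < CPU_THRESHOLD_PCT and idle_days > LOGIN_THRESHOLD_DAYS:
            print(f"{row['hostname']}: avg CPU {cpu}%, no login for "
                  f"{idle_days} days -- candidate for consolidation or shutdown")
```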
Okay, so we already told you to virtualize. But take a look at your storage and networking hardware and their power consumption as well. New software-defined data center technology allows you to virtualize beyond the server, with utilization improvements of 30-50%.
If you’re already running VMware virtualization platforms, this should be relatively simple to enable, as SDS (software-defined storage) adds an abstraction layer that automates storage according to your policies. The “virtual data plane” stores data and applies services like snapshots, replication, caching, and more, while the physical hardware is abstracted and aggregated into virtual datastores. Similarly, in network virtualization, the hardware is virtualized, then provisioned and managed flexibly and shared between VMs as necessary.
By virtualizing these additional data center components, your virtualization platform can intelligently allocate resources where they’re needed, shutting down unneeded hardware and squeezing the maximum performance out of each piece. Traffic and data can be migrated onto specific physical hardware according to energy-aware policies that still conform to your SLAs and redundancy requirements.
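Your platform’s own scheduler (VMware DRS and DPM, for example) handles this placement automatically, but a back-of-the-envelope sketch shows the idea: pack workloads onto as few hosts as possible under a target utilization ceiling, and whatever is left over becomes a candidate to power down. The host capacity and per-VM loads below are made-up numbers.

```python
# Sketch: first-fit-decreasing packing to estimate how many hosts a set of
# workloads needs at a target utilization ceiling. All figures are placeholders.

HOST_CAPACITY = 100.0       # normalized capacity units per host
TARGET_UTILIZATION = 0.85   # aim for ~85%, per the 80-90% guideline above

vm_loads = [42, 35, 30, 28, 22, 18, 15, 12, 9, 7, 5, 4]  # per-VM demand
ceiling = HOST_CAPACITY * TARGET_UTILIZATION

hosts = []  # remaining headroom on each powered-on host
for load in sorted(vm_loads, reverse=True):
    for i, free in enumerate(hosts):
        if load <= free:
            hosts[i] -= load   # place the VM on an existing host
            break
    else:
        hosts.append(ceiling - load)  # no room anywhere: power on another host

print(f"{len(vm_loads)} VMs fit on {len(hosts)} hosts at a "
      f"{TARGET_UTILIZATION:.0%} utilization ceiling")
```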
Aisle containment is generally embraced across most data centers, but if you have colocation tenants, make sure they are placing their servers correctly so exhaust fans don't blow into the cold aisle. Check cabling to make sure it is neat and not impeding airflow (and once again, encourage tenants to do the same). Place blanking plates in cabinets where there are no servers in order to eliminate hot spots.
You can also segment out your floor so servers are in a different area from storage and networking equipment. These systems often have different power and cooling requirements, so you can tune each floor to be as efficient as possible.
Increasing rack density may be out of reach for your budget, but modern equipment can easily hit 10 kW and above per cabinet, letting you do the same work with fewer cabinets to power and cool.
If you can afford it and your location allows it, look into economization and/or free cooling with evaporative chillers rather than traditional mechanical air conditioning. You can cut coolant use and increase cooling efficiency by up to 70%.
Modern guidelines for IT equipment also allow much higher operating temperatures than in the past. For each degree you raise the floor temperature, you could gain roughly 3% in monthly cooling savings. If your equipment and cooling are relatively modern, you can comfortably raise temperatures to 75°F.
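To put that rule of thumb in concrete terms, here is a quick estimate of what a setpoint raise might save, applying the roughly-3%-per-degree figure compounding. The setpoints and baseline cooling spend are placeholder assumptions, not measured data.

```python
# Sketch: rough cooling-savings estimate from raising the supply-air setpoint,
# using the article's ~3%-per-degree rule of thumb (compounded per degree).
# Setpoints, baseline cost, and the rule itself are assumptions to sanity-check
# against your own facility.

RULE_OF_THUMB = 0.03           # ~3% of cooling energy per degree F raised
current_setpoint_f = 68
target_setpoint_f = 75
monthly_cooling_cost = 20_000  # USD, placeholder baseline

degrees_raised = target_setpoint_f - current_setpoint_f
remaining_fraction = (1 - RULE_OF_THUMB) ** degrees_raised
estimated_savings = monthly_cooling_cost * (1 - remaining_fraction)

print(f"Raising the setpoint {degrees_raised}°F cuts cooling energy by roughly "
      f"{1 - remaining_fraction:.0%}, about ${estimated_savings:,.0f} per month "
      f"at this baseline")
```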
We’ve covered UPS efficiency before on the blog, but your power systems are an often overlooked piece of the energy efficiency puzzle. You can upgrade to UPSs with more efficient inverters, or examine the possibility of replacing batteries with flywheels.
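A quick way to size that opportunity is to compare the input power an older and a newer UPS would draw to carry the same IT load. The efficiencies, load, and electricity rate below are illustrative assumptions, not vendor figures.

```python
# Sketch: annual savings from a more efficient UPS carrying the same IT load.
# Efficiencies, load, and the electricity rate are placeholder assumptions.

it_load_kw = 400
old_efficiency = 0.92      # assumed legacy double-conversion UPS
new_efficiency = 0.97      # assumed modern high-efficiency unit
rate_per_kwh = 0.10        # USD per kWh, placeholder
hours_per_year = 24 * 365

def input_kw(load_kw: float, efficiency: float) -> float:
    """Utility power drawn to deliver load_kw through the UPS."""
    return load_kw / efficiency

saved_kw = input_kw(it_load_kw, old_efficiency) - input_kw(it_load_kw, new_efficiency)
annual_savings = saved_kw * hours_per_year * rate_per_kwh

print(f"~{saved_kw:.1f} kW less input power, roughly ${annual_savings:,.0f} "
      f"per year before any knock-on cooling savings")
```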
Not all of these suggestions will be viable for your unique data center, so after your initial evaluation, put together a game plan targeting the systems that will deliver the biggest efficiency gains. Have you already started making efficiency improvements? What has been most effective for your data center? Share your best tips on Twitter @greenhousedata with #GreenMyDataCenter!