Six Ways to Strike Back Against Data Center Power Inefficiency
June 01, 2020
Google's AI journey began at a point on the efficiency journey that most of us have to work hard to attain.
Google and other companies have applied AI to the performance data from their data center environmental monitoring systems, correlating it with power and cooling cycles within their facilities to build a profile of their energy usage. Google then turned those energy profiles into algorithms that let facility managers send well-timed instructions to the building's mechanical and electrical plants. While this is all pretty easy to understand, it glosses over the fact that this hyperscale provider was already hyper-efficient.
For those of us whose focus is on getting to a sub-1.5 PUE, here are six thoughts on current practices for designing efficiency into new data center facilities.
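As a refresher, PUE (power usage effectiveness) is simply the ratio of total facility power to the power delivered to IT equipment, so a lower number means less overhead going to cooling, lighting, and power conversion. A minimal sketch, with illustrative numbers only:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative: a facility drawing 1,200 kW to support an 800 kW IT load
print(round(pue(1200.0, 800.0), 2))  # 1.5
```

A hyperscale operator may start well below 1.2; the measures below are about closing the gap for everyone else.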
- New LED lighting: Continued advances in lighting technology have not only driven better visibility in the rack row but also allowed operators to eke out more energy savings.
- Higher operating temperatures: Once controversial, the idea of your cold aisles running warmer has become possible through broader operating ranges of IT equipment, and advances in remote monitoring technologies.
- Free air cooling versus CRAC or CRAH: Rethinking cooling has shifted the geographic position of data centers around the globe to more northern latitudes, optimizing the number of free cooling days available to the facility. When evaluating sites, keep in mind a change of venue can have a tremendous impact on the bottom line.
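A rough way to compare candidate sites is to count how many hours per year the outdoor air is cool enough to use directly. The sketch below is a simplification (it considers dry-bulb temperature only and ignores humidity limits); the setpoint value is an assumption for illustration:

```python
def free_cooling_hours(hourly_temps_c, economizer_limit_c=18.0):
    """Count hours where outdoor air is cool enough for direct use
    (dry-bulb at or below the economizer setpoint; humidity ignored)."""
    return sum(1 for t in hourly_temps_c if t <= economizer_limit_c)

# Illustrative five-hour sample of outdoor temperatures (deg C)
sample = [10.0, 20.0, 15.0, 25.0, 18.0]
print(free_cooling_hours(sample))  # 3
```

Run against a full year of hourly weather data for each candidate site, the same comparison makes the case for northern latitudes concrete.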
- Distributing at higher voltages: Three-phase distribution is more efficient, and implementing it at higher voltages makes it even more so. Manufacturers of IT gear and electrical equipment have spent the past decade making more products available that support higher voltages within the data center space, including 415V PDU solutions.
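To see why the voltage matters, recall that power on a balanced three-phase feed scales linearly with line voltage (P = √3 × V × I × PF), and that 415V line-to-line yields roughly 240V line-to-neutral, which modern IT power supplies can accept directly. A quick sketch with illustrative numbers:

```python
import math

def three_phase_kw(line_voltage: float, current_a: float, pf: float = 1.0) -> float:
    """Power (kW) on a balanced three-phase circuit: sqrt(3) * V * I * PF."""
    return math.sqrt(3) * line_voltage * current_a * pf / 1000.0

# The same 30 A branch circuit carries roughly twice the power at 415 V:
print(round(three_phase_kw(208, 30), 1))  # 10.8 kW
print(round(three_phase_kw(415, 30), 1))  # 21.6 kW
```

More power per circuit means fewer circuits, smaller conductors per kilowatt delivered, and lower distribution losses.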
- Workload consolidation through containerization: Containers have allowed for computing in an even smaller footprint, reducing the need to provide cooling for large volumes of space, while virtualization has reduced the number of computing devices needed in the first place.
- Power monitoring: Switched intelligent PDUs with remote management capabilities can measure the power usage of IT infrastructure by providing:
  - Data collection of power consumption at the outlet, device, and cabinet level
  - Support for reporting, alarming, and smart load-shedding capabilities
  - Environmental monitoring via temperature and humidity sensors
  - The ability to switch off unused or underutilized assets (zombie servers, storage, load balancers, etc.) or reboot them remotely
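Once per-outlet data is flowing, finding candidates for shutdown is straightforward. A minimal sketch, assuming hypothetical outlet names and an assumed idle-draw threshold (not a vendor API):

```python
# Hypothetical per-outlet readings (watts) collected from an intelligent PDU
readings = {
    "web-01": 310.0,
    "db-02": 275.0,
    "legacy-app-07": 12.0,   # near-idle draw: candidate zombie
    "storage-03": 480.0,
}

ZOMBIE_WATTS = 25.0  # assumed idle threshold for this example

def find_zombies(outlet_watts, threshold=ZOMBIE_WATTS):
    """Flag outlets whose sustained draw suggests an unused asset."""
    return sorted(name for name, w in outlet_watts.items() if w < threshold)

print(find_zombies(readings))  # ['legacy-app-07']
```

In practice you would trend draw over days or weeks before powering anything off, but even this simple cut surfaces the low-hanging fruit.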
While better data center efficiency through artificial intelligence is already a reality, there are still plenty of measures that sub-hyperscale facilities can take to reduce their PUE.
About the Author
Marc Cram is director of new market development for Server Technology, a brand of Legrand (@Legrand). A technology evangelist, he is driven by a passion to deliver a positive power experience for the data center owner/operator. He earned a bachelor’s degree in electrical engineering from Rice University and has more than 30 years of experience in the field of electronics. Follow him on LinkedIn or @ServerTechInc on Twitter.