The letters AI, which stand for “artificial intelligence,” are displayed at the Amazon Web Services booth at the Hannover Messe industrial trade fair in Hannover, Germany, on March 31, 2025.
Julian Stratenschulte | Picture Alliance | Getty Images
Amazon said Wednesday that its cloud division has developed hardware to cool down next-generation Nvidia graphics processing units that are used for artificial intelligence workloads.
Nvidia’s GPUs, which have powered the generative AI boom, require massive amounts of energy. That means companies using the processors need additional equipment to cool them down.
Amazon considered building data centers that could accommodate widespread liquid cooling to make the most of these power-hungry Nvidia GPUs. But that process would have taken too long, and commercially available equipment would not have worked, Dave Brown, vice president of compute and machine learning services at Amazon Web Services, said in a video posted to YouTube.
“They would take up too much data center floor space or increase water usage substantially,” Brown said. “And while some of these solutions could work for lower volumes at other providers, they simply wouldn’t provide enough liquid-cooling capacity to support our scale.”
Instead, Amazon engineers designed the In-Row Heat Exchanger, or IRHX, which can be plugged into existing and new data centers. More traditional air cooling was sufficient for previous generations of Nvidia chips.
Customers can now access the AWS service as computing instances that go by the name P6e, Brown wrote in a blog post. The new systems accompany Nvidia’s design for dense computing power. Nvidia’s GB200 NVL72 packs a single rack with 72 Nvidia Blackwell GPUs that are wired together to train and run large AI models.
Computing clusters based on Nvidia’s GB200 NVL72 have previously been available through Microsoft and CoreWeave. AWS is the world’s largest supplier of cloud infrastructure.
Amazon has rolled out its own infrastructure hardware in the past. The company has custom chips for general-purpose computing and for AI, and has designed its own storage servers and networking routers. By running homegrown hardware, Amazon depends less on third-party suppliers, which can benefit the company’s bottom line. In the first quarter, AWS delivered the widest operating margin since at least 2014, and the unit is responsible for most of Amazon’s operating income.
Microsoft, the second-largest cloud provider, has followed Amazon’s lead and made strides in chip development. In 2023, the company designed its own systems, called Sidekicks, to cool the Maia AI chips it developed.
WATCH: AWS announces new CPU chip, will deliver record networking speed