Bottom line: Nvidia took the wraps off its Hopper architecture at GTC 2022, announcing the H100 server accelerator but only showing off renders of it. Now we finally have some in-hand photos of the SXM variant of the card, which features a mind-boggling 700W TDP.
It's been a bit over a month since Nvidia unveiled its H100 server accelerator based on the Hopper architecture, and so far, we've only seen renders of it. That changes today, as ServeTheHome has just shared pictures of the card in its SXM5 form factor.
The GH100 compute GPU is fabricated on TSMC's N4 process node and has an 814 mm2 die size. The SXM variant features 16,896 FP32 CUDA cores, 528 Tensor cores, and 80GB of HBM3 memory connected over a 5120-bit bus. As can be seen in the photos, there are six 16GB stacks of memory around the GPU, but one of these is disabled.
Nvidia also quoted a staggering 700W TDP, 75% higher than its predecessor, so it's no surprise that the card comes with an extremely impressive VRM solution. It features 29 inductors, each equipped with two power stages, and an additional three inductors with one power stage each. Cooling all of these tightly packed components will likely be a challenge.
Another noticeable change is the connector layout for SXM5. There's now one short and one long mezzanine connector, whereas previous generations featured two identically sized longer ones.
Nvidia will start shipping H100-equipped systems in Q3 of this year. It's worth mentioning that the PCIe version of the H100 is currently listed in Japan for 4,745,950 yen ($36,300) after taxes and shipping, although it has fewer CUDA cores, downgraded HBM2e memory, and half the TDP of the SXM variant.