Nvidia has announced its new H100 Tensor Core GPUs as well as multiple partnerships with server manufacturers for inclusion in their next-gen products.
The company made the announcement during a special address at Supercomputing 2022, and a subsequent press release claimed that “dozens” of new servers would benefit from the H100, including machines from Asus, Dell, Hewlett Packard Enterprise, and Lenovo.
The new GPU, which will come with a five-year license for Nvidia's AI Enterprise software, is part of a strategy to put artificial intelligence (AI) frameworks and tools in the hands of as many companies as possible.
The company noted that AI already has some notable use cases, “from medical imaging to weather models to safety alert systems.”
By providing systems that streamline the creation of new AI workflows, the company is betting that even more AI deployments will come to light in the near future.
“By providing a universal scientific computing platform that accelerates both principled numerical methods and artificial intelligence methods, [Nvidia is] giving scientists an instrument to make discoveries that will benefit humanity,” explained Jensen Huang, founder and CEO of Nvidia.
Nvidia also says it wants to “enhance” scientific discovery and has updated its CUDA, cuQuantum, and DOCA acceleration libraries accordingly to provide better performance for quantum computing and simulation workflows.
Tom’s Hardware reported a significant performance improvement, citing the Flatiron Institute’s Henri supercomputer, which debuted on the Top500 and Green500 lists this week.
It draws just 31 kW of power, which translates into an energy efficiency of 65.091 GFLOPS/W, a world record.
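To see how those two figures relate: Green500 efficiency is simply Linpack performance divided by power draw, so the article's numbers let us back out Henri's implied Linpack (Rmax) performance. This is a minimal illustrative sketch using only the 31 kW and 65.091 GFLOPS/W figures above; the function name is our own, not from any Nvidia or Top500 tooling.

```python
# Green500 efficiency (GFLOPS/W) = Linpack performance (GFLOPS) / power (W),
# so multiplying efficiency by power recovers the implied Rmax.

def implied_rmax_gflops(efficiency_gflops_per_watt: float, power_watts: float) -> float:
    """Implied Linpack performance in GFLOPS from efficiency and power draw."""
    return efficiency_gflops_per_watt * power_watts

# Figures from the article: 65.091 GFLOPS/W at 31 kW.
rmax = implied_rmax_gflops(65.091, 31_000)
print(f"{rmax:,.0f} GFLOPS ≈ {rmax / 1e6:.2f} PFLOPS")  # roughly 2 PFLOPS
```

In other words, a machine delivering around 2 PFLOPS on Linpack from only 31 kW is what earns the efficiency record.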
Of course, countless other factors contribute to its overall performance, including air cooling; still, the improvements delivered by the H100 GPU played a real part in the supercomputer's success.