The latest MLPerf benchmark results show that data-center-class computers can run large language models such as Llama 2 and ChatGPT efficiently. The most recent round of results added a test based on the GPT-J language model, a category in which Nvidia dominated the performance results. The MLPerf inference benchmarks measure how well already-trained neural networks execute on different computer systems.
Nvidia’s new Grace Hopper superchip, which pairs an H100 GPU with a Grace CPU, performed exceptionally well across several categories, thanks largely to its superior memory access. Intel’s Habana Gaudi2 accelerator also had a strong showing. Detailed benchmark results, charts, and data-center efficiency figures are available in the original article.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…