New Benchmark Measures the Performance Speed of AI Models

  • MLCommons, an artificial intelligence benchmark group, has unveiled the results of new tests designed to evaluate the speed at which top-of-the-line hardware can run AI models.
  • In tests run on a large language model, an Nvidia Corp (NVDA.O) chip emerged as the top performer, with an Intel Corp (INTC.O) semiconductor a close second.
  • The newly introduced MLPerf benchmark centers on a large language model with 6 billion parameters whose task is to summarize CNN news articles (a rough sketch of this kind of summarization workload follows the list below). The benchmark specifically simulates the “inference” phase of AI data processing, which powers the core functionality of generative AI tools. Nvidia’s leading submission for the inference benchmark was based on eight of its flagship H100 chips. While Nvidia has been a dominant force in training AI models, it has yet to establish a strong presence in the inference market.
  • Dave Salvator, Nvidia’s accelerated computing marketing director, commented, “What you see is that we’re delivering leadership performance across the board, and again, delivering that leadership performance on all workloads.”
  • Intel’s success in the benchmark is attributed to its Gaudi2 chips, produced by the Habana unit acquired by the company in 2019. The Gaudi2 system demonstrated performance roughly 10% slower than Nvidia’s system.
  • Eitan Medina, Chief Operating Officer at Habana, remarked, “We’re very proud of the results in inferencing, (as) we demonstrate the price performance advantage of Gaudi2.”
  • Intel claims that its system is more cost-effective than Nvidia’s, priced roughly at the level of Nvidia’s last-generation A100 systems. However, Intel has not disclosed the exact cost of the chip, and Nvidia has likewise declined to discuss its chip’s pricing. Nvidia said it plans to release a software upgrade that would double the performance it posted in the MLPerf benchmark.
  • Alphabet’s (GOOGL.O) Google unit provided a preview of the performance of the latest iteration of a custom-built chip, which was initially announced at its cloud computing conference in August.
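
For readers curious what the benchmark’s inference workload looks like in practice, here is a minimal sketch, not MLCommons’ official test harness, of prompting a roughly 6-billion-parameter language model to summarize a news article and timing the inference step. The model name (EleutherAI/gpt-j-6B), the prompt format, and the timing approach are illustrative assumptions, not details taken from the MLPerf submission rules.

```python
# Hypothetical sketch of an LLM-inference summarization workload.
# Assumes the Hugging Face transformers library and a GPU with enough memory
# to hold a ~6B-parameter model; none of this reflects the official harness.
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6B"  # assumed stand-in for the 6B benchmark model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

article = "..."  # a CNN-style news article would go here
prompt = f"Summarize the following article:\n\n{article}\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Time a single inference (generation) pass, the phase the benchmark measures.
start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=128)
elapsed = time.perf_counter() - start

# Decode only the newly generated tokens, i.e. the summary.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
summary = tokenizer.decode(new_tokens, skip_special_tokens=True)

print(f"Generated {new_tokens.shape[0]} tokens in {elapsed:.2f}s")
print(summary)
```

The real benchmark runs many such queries under fixed latency and accuracy constraints and reports aggregate throughput across the whole system, rather than timing a single prompt as this sketch does.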
