MLCommons wants to create AI benchmarks for laptops, desktops and workstations


MLCommons charts a new course in AI development, aiming to establish comprehensive benchmarks for laptops, desktops, and workstations.

Wednesday, 24 January 2024, Bengaluru, India

With artificial intelligence (AI) shifting from cloud computing to on-device computing, it can be hard to tell whether a newly released laptop will run GenAI-powered software faster than competing off-the-shelf laptops, desktops and workstations. Knowing the difference could mean waiting seconds rather than minutes for an image to generate; time is money.

MLCommons, Machine Learning Innovation. (Image Source: mlcommons.org)

MLCommons, the industry consortium behind several AI-related hardware benchmarking standards, hopes to make that kind of comparison shopping easier with the release of performance benchmarks aimed at “client systems,” i.e., consumer PCs.

MLCommons today formed a new working group, MLPerf Client, to create AI benchmarks for desktops, laptops and workstations running Windows, Linux and other operating systems. According to MLCommons, the benchmarks will be “scenario-driven,” emphasizing real end-user use cases, and “grounded in feedback from the community.”

MLCommons executive director David Kanter says MLPerf Client’s first benchmark will focus on text-generating models, specifically Meta’s Llama 2, which is already included in MLCommons’ other benchmarking suites for data center hardware. Meta has also worked closely with Qualcomm and Microsoft to optimize Llama 2 for Windows devices.

“With AI becoming a standard computing component everywhere, the time is right to integrate MLPerf into client systems,” Kanter stated in a press release. “We are excited to collaborate with our members to advance new capabilities for the community at large and bring the excellence of MLPerf into client systems.”
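MLCommons has not yet detailed exactly what the Llama 2 test will measure, but client-side text-generation benchmarks typically report figures such as time to first token and tokens per second. Below is a minimal, hypothetical Python sketch of that kind of timing harness; generate_tokens is a placeholder standing in for a real on-device model call and is not part of any MLPerf tooling.

```python
import time

def generate_tokens(prompt, max_tokens):
    # Hypothetical stand-in for a real on-device text-generation
    # runtime (e.g. a local Llama 2 build); it just yields placeholder
    # tokens at a fixed simulated latency so the harness runs on its own.
    for i in range(max_tokens):
        time.sleep(0.01)  # simulate per-token inference latency
        yield f"token_{i}"

def benchmark(prompt, max_tokens=128):
    start = time.perf_counter()
    first_token_at = None
    count = 0
    for _ in generate_tokens(prompt, max_tokens):
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time to first token
        count += 1
    total = time.perf_counter() - start
    print(f"time to first token: {first_token_at - start:.3f} s")
    print(f"throughput: {count / total:.1f} tokens/s")

if __name__ == "__main__":
    benchmark("Summarize the MLPerf Client announcement.")
```

Time to first token captures perceived responsiveness, while tokens per second captures sustained throughput; together they quantify the “minutes or seconds” difference described above.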

Apple is not among the members of the MLPerf Client working group, which include AMD, Arm, Asus, Dell, Intel, Lenovo, Microsoft, Nvidia and Qualcomm.

Apple’s absence is only partially unexpected: the company is not an MLCommons member, and an engineering director at Microsoft, Yannis Minadakis, co-chairs the MLPerf Client group. The unfortunate result is that whatever AI benchmarks MLPerf Client produces will not be tested on Apple devices, at least not anytime soon.

Nevertheless, whether or not MLPerf Client ever supports macOS, this writer is interested in seeing what kinds of benchmarks and tooling come out of it. Assuming GenAI is here to stay, and there are no signs that the bubble will pop anytime soon, I wouldn’t be shocked to see these kinds of measurements become increasingly important when choosing which devices to buy.

In my ideal world, the MLPerf Client benchmarks become comparable to the many online tools for comparing PC builds, giving buyers an idea of the AI performance a particular system can deliver. With the involvement of Qualcomm and Arm, two companies with significant stakes in the mobile device ecosystem, they may eventually extend to phones and tablets. It is early in the process, but let’s hope.

(Information Source: Techcrunch.com)



Disclaimer: We have collected this information from our direct sources and various trustworthy sources on the internet, and the facts have been checked manually and verified by our in-house team.