New Geekbench AI benchmark can test the performance of CPUs, GPUs, and NPUs

(credit: Primate Labs)

Neural processing units (NPUs) are becoming commonplace in chips from Intel and AMD after several years of being something you’d find mostly in smartphones and tablets (and Macs). But as more companies push to do more generative AI processing, image editing, and chatbot-ing on-device instead of in the cloud, being able to measure NPU performance will become more important to people making purchasing decisions.

Enter Primate Labs, developers of Geekbench. The main Geekbench app is designed to test CPU performance as well as GPU compute performance, but for the last few years, the company has been experimenting with a side project called Geekbench ML (for “Machine Learning”) to test the inference performance of NPUs. Now, as Microsoft’s Copilot+ initiative gets off the ground and Intel, AMD, Qualcomm, and Apple all push to boost NPU performance, Primate Labs is bumping Geekbench ML to version 1.0 and renaming it “Geekbench AI,” a change that will presumably help it ride the wave of AI-related buzz.

“Just as CPU-bound workloads vary in how they can take advantage of multiple cores or threads for performance scaling (necessitating both single-core and multi-core metrics in most related benchmarks), AI workloads cover a range of precision levels, depending on the task needed and the hardware available,” wrote Primate Labs’ John Poole in a blog post about the update. “Geekbench AI presents its summary for a range of workload tests accomplished with single-precision data, half-precision data, and quantized data, covering a variety used by developers in terms of both precision and purpose in AI systems.”
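To make the precision tiers concrete: Geekbench AI's own workloads aren't shown here, but the following toy NumPy sketch (all names and values are illustrative, not from Primate Labs) shows what "single-precision," "half-precision," and "quantized" mean for the same set of model weights, and how much fidelity each representation gives up.

```python
import numpy as np

# Illustrative only: the same weights expressed at the three precision
# levels Geekbench AI reports scores for. Not part of any Geekbench code.
weights_fp32 = np.random.default_rng(0).normal(size=1000).astype(np.float32)

# Half precision: identical values stored in 16 bits instead of 32.
weights_fp16 = weights_fp32.astype(np.float16)

# Quantized: map the float range onto signed 8-bit integers via a scale factor.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize and measure the error each lower-precision format introduces.
err_fp16 = np.abs(weights_fp32 - weights_fp16.astype(np.float32)).mean()
err_int8 = np.abs(weights_fp32 - weights_int8.astype(np.float32) * scale).mean()
print(f"mean abs error -- fp16: {err_fp16:.6f}, int8: {err_int8:.6f}")
```

Lower-precision formats trade accuracy for speed and memory, which is why a single score would hide how well a given NPU handles each tier.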
