Intel refutes AMD’s claims, says its 5th-gen Xeon is faster than Epyc Turin in AI workloads

In a nutshell: Intel has refuted AMD's claims that its 5th-gen Epyc "Turin" CPUs provide faster AI processing than 5th-gen Xeon chips. According to Intel, its 64-core Xeon Platinum 8592+ processors can outperform AMD's newest 128-core data center CPUs in the same workloads with the right optimizations.

Intel's statement comes just days after AMD unveiled its 5th-gen Epyc CPUs with up to 192 Zen 5 cores at Computex 2024. Expected to be marketed as part of the Epyc 9005 family, the new lineup will offer a diverse range of processors designed for compute, cloud, telco, and edge workloads.

While the chips aren't expected to hit the market until later this year, AMD showcased benchmarks suggesting they will be faster than Intel's Emerald Rapids family in AI throughput workloads.


While AMD claimed that a pair of its Epyc Turin processors could be up to 5.4 times faster than a pair of Intel's Xeon Platinum 8592+ CPUs when running a Llama 2-based chatbot, Intel says the benchmarks showcased by Team Red offer an unfair comparison. According to Intel, AMD didn't disclose the software configuration used for these benchmarks, and its own testing shows the Xeon chips to be faster than the Epyc processors on the same task.

According to AMD's benchmarks, two of its 5th-gen Epyc CPUs in a dual-socket configuration with 128 cores each offer up to 671 tokens per second of performance in Llama 2-7B, while Intel's 5th-gen Xeon Platinum 8592+ chips with 64 cores running in a similar dual-socket setup delivered just 125 tokens per second.


However, Intel repeated the same tests using its Intel Extension for PyTorch (P99 Latency), and the results were drastically different. In these tests, the Xeons' output of 686 tokens per second was 5.4 times faster than what AMD had showcased.


Intel also claimed that in translation and summarization workloads, its Xeon chips delivered 1.2x and 2.3x faster performance compared to the benchmarks AMD showcased at Computex. AMD had claimed that Turin was 2.5x and 3.9x faster than the 5th-gen Xeon in these workloads.

It can be a little confusing to sift through the claims and counterclaims from the two chip majors, but the benchmarks presented by both are likely accurate, depending on how you look at them. While AMD's tests are probably valid for a naive configuration of Llama-7B, Intel is well within its rights to point out that nobody is likely to use its hardware without the Intel Extension for PyTorch, which offers a significant uplift in real-world performance for AI workloads.
