Intel’s new Gaudi 3 accelerators massively undercut Nvidia GPUs as AI race heats up

What just happened? Intel threw down the gauntlet against Nvidia in the heated battle for AI hardware supremacy. At Computex this week, CEO Pat Gelsinger unveiled pricing for Intel's Gaudi 2 and next-gen Gaudi 3 AI accelerator chips, and the numbers look disruptive.

Pricing for products like these is usually kept hidden from the public, but Intel has bucked the trend and provided some official figures. The flagship Gaudi 3 accelerator will cost around $15,000 per unit when purchased individually, which is 50 percent cheaper than Nvidia's competing H100 data center GPU.

The Gaudi 2, while less powerful, also undercuts Nvidia's pricing dramatically. A complete eight-chip Gaudi 2 accelerator kit will sell for $65,000 to system vendors. Intel claims that is just one-third the price of comparable setups from Nvidia and other rivals.


For the Gaudi 3, that same eight-accelerator kit configuration costs $125,000. Intel says it is two-thirds cheaper than competing solutions at that high-end performance tier.

To give some context to the Gaudi 3 pricing, Nvidia's newly launched Blackwell B100 GPU costs around $30,000 per unit. Meanwhile, the high-performance Grace-Blackwell CPU+GPU combo, the GB200, sells for roughly $70,000.
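Taken together, the kit prices line up with the per-unit figure Intel quoted. The quick math below is a rough sketch using only the numbers reported in this article; the $30,000 H100 figure is implied by Intel's "50 percent cheaper" claim rather than an official Nvidia list price.

```python
# Rough per-unit math from the prices quoted in this article.
gaudi3_kit = 125_000   # eight-accelerator Gaudi 3 kit
gaudi2_kit = 65_000    # eight-accelerator Gaudi 2 kit
h100_est   = 30_000    # implied H100 per-unit price (from Intel's "50 percent cheaper" claim)
b100       = 30_000    # Nvidia Blackwell B100, per unit

gaudi3_unit = gaudi3_kit / 8   # about $15,625, consistent with the ~$15,000 per-unit figure
gaudi2_unit = gaudi2_kit / 8   # about $8,125

print(f"Gaudi 3 per unit: ${gaudi3_unit:,.0f} ({gaudi3_unit / h100_est:.0%} of the implied H100 price)")
print(f"Gaudi 3 per unit vs B100: {gaudi3_unit / b100:.0%}")
print(f"Gaudi 2 per unit: ${gaudi2_unit:,.0f}")
```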

Of course, pricing is only one part of the equation. Performance and the software ecosystem are equally important considerations. On that front, Intel insists the Gaudi 3 keeps pace with or outperforms Nvidia's H100 across a variety of critical AI training and inference workloads.


Benchmarks cited by Intel show the Gaudi 3 delivering up to 40 percent faster training times than the H100 in large 8,192-chip clusters. Even a smaller 64-chip Gaudi 3 setup offers 15 percent higher throughput than the H100 on the popular LLaMA 2 language model, according to the company. For AI inference, Intel claims a 2x speed advantage over the H100 on models like LLaMA and Mistral.


Still, while the Gaudi chips leverage open standards like Ethernet for easier deployment, they lack optimizations for Nvidia's ubiquitous CUDA platform, which most AI software relies on today. Convincing enterprises to refactor their code for Gaudi could be tough.

To drive adoption, Intel says it has lined up at least 10 major server vendors – including new Gaudi 3 partners like Asus, Foxconn, Gigabyte, Inventec, Quanta, and Wistron. Familiar names like Dell, HPE, Lenovo, and Supermicro are also on board.

However, Nvidia is a force to be reckoned with in the data center world. In the final quarter of 2023, the company claimed a 73 percent share of the data center processor market, and that number has continued to rise, chipping away at the shares of both Intel and AMD. The consumer GPU market isn't all that different, with Nvidia commanding an 88 percent share.

It's an uphill battle for Intel, but these big price differences may help close the gap.

