OpenAI, Intel, and Qualcomm talk AI compute at legendary Hot Chips conference

The science and engineering of creating chips devoted to processing artificial intelligence is as vibrant as ever, judging from a well-attended chip conference taking place this week at Stanford University known as Hot Chips.

The Hot Chips show, currently in its 36th year, draws 1,500 attendees, just over half of whom participate via the online live feed and the rest at Stanford's Memorial Auditorium. For decades, the show has been a hotbed for discussion of the most cutting-edge chips from Intel, AMD, IBM, and many other vendors, with companies often using the show to unveil new products.

This year's conference received over 100 submissions for presentation from all over the world. In the end, 24 talks were accepted, about as many as would fit in a two-day conference format. Two tutorial sessions took place on Sunday, with a keynote on Monday and Tuesday. There are also 13 poster sessions.


The tech talks onstage and the poster presentations are highly technical and oriented toward engineers. Audience members tend to spread out laptops and multiple screens as if spending the sessions in their personal offices.

Monday morning's session, featuring presentations from Qualcomm about its Oryon processor for the data center and Intel's Lunar Lake processor, drew a packed crowd and elicited plenty of audience questions.

In recent years, a big focus has been on chips designed to better run neural network forms of AI. This year's conference included a keynote by OpenAI's Trevor Cai, the company's head of hardware, about "Predictable scaling and infrastructure."


Cai, who has spent his time putting together OpenAI's compute infrastructure, said ChatGPT is the result of the company "spending years and billions of dollars predicting the next word better." That led to successive abilities such as "zero-shot learning."


"How did we know it would work?" Cai asked rhetorically. Because there are "scaling laws" showing that capability can predictably improve as a "power law" of the compute used. Each time computing is doubled, the accuracy gets closer to an "irreducible" entropy, he explained.
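The power-law relationship Cai alludes to is commonly written in a form like the following; this is an illustrative sketch based on the publicly known neural scaling-law literature, not a formula from the talk itself, and the symbols (loss L, compute C, irreducible floor E, constants a and b) are assumptions:

```latex
% Illustrative compute scaling law (assumed form, not from the keynote):
% model loss L falls as a power law in compute C toward an irreducible floor E.
L(C) = E + \frac{a}{C^{\,b}}, \qquad a, b > 0
% Doubling compute shrinks the reducible part of the loss by a fixed factor,
% which is why each doubling moves accuracy predictably closer to the floor:
L(2C) - E = 2^{-b}\,\bigl(L(C) - E\bigr)
```

Under this form, the predictability Cai describes follows directly: plotting the reducible loss against compute on log-log axes gives a straight line of slope −b, which can be extrapolated before a large cluster is built.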

"That is what allows us to make investments, to build big clusters" of computers, said Cai. There are "immense headwinds" to continuing along the scaling curve, said Cai. OpenAI must grapple with very challenging algorithmic innovations, he said.

For hardware, "Dollar and energy costs of these big clusters become significant even for the highest free-cash-flow-generating companies," said Cai.

The conference continues Tuesday with presentations by Advanced Micro Devices and startup Cerebras Systems, among others.
