OpenAI board forms Safety and Security Committee

OpenAI’s board has announced the formation of a Safety and Security Committee tasked with making recommendations on critical safety and security decisions for all OpenAI projects.

The committee is led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and OpenAI’s CEO Sam Altman.

Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.


OpenAI’s approach to AI safety has faced both external and internal criticism. Last year’s firing of Altman was supported by then-board member Ilya Sutskever and others, ostensibly over safety concerns.

Last week, Sutskever and Jan Leike from OpenAI‘s “superalignment” team left the company. Leike specifically cited safety concerns as his reason for leaving, saying the company was letting safety “take a backseat to shiny products”.

Yesterday, Leike announced that he is joining Anthropic to work on oversight and alignment research.

Now Altman is not only back as CEO but also sits on the committee responsible for highlighting safety concerns.


The Safety and Security Committee will use the next 90 days to evaluate and further develop OpenAI’s processes and safeguards.

Its recommendations will then be put to OpenAI’s board for approval, and the company has committed to publishing the safety recommendations it adopts.


This push for additional guardrails comes as OpenAI says it has begun training its next frontier model, which it says will “bring us to the next level of capabilities on our path to AGI.”

No expected release date was given for the new model, but training alone will likely take weeks, if not months.

In an update on its approach to safety published after the AI Seoul Summit, OpenAI said: “We won’t release a new model if it crosses a ‘Medium’ risk threshold from our Preparedness Framework, until we implement sufficient safety interventions to bring the post-mitigation score back to ‘Medium’.”

It said that more than 70 external experts were involved in red teaming GPT-4o before its release.

With 90 days to go before the committee presents its findings to the board, training of the next model only recently underway, and a commitment to extensive red teaming, it looks like we have a long wait before we finally get GPT-5.
