Former OpenAI employees publish ‘Right to Warn’ open letter

A group of former and current OpenAI and Google employees is calling out AI companies over what they describe as a dangerous culture of secrecy surrounding AI risks.

The letter, titled “A right to warn about advanced artificial intelligence,” states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks.

Beyond recklessly prioritizing financial objectives over safety, the letter says, companies use punitive confidentiality agreements to actively discourage employees from raising concerns.

The named signatories are former OpenAI and Google employees, with Neel Nanda the only one still working at Google. The letter was also endorsed by leading AI minds Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

Underscoring the perceived risk of calling out their employers, six of the signatories were unwilling to disclose their names in the letter.

Former OpenAI researchers Daniel Kokotajlo and William Saunders, who also signed the letter, left the company earlier this year.

Kokotajlo was on the governance team, and Saunders worked on OpenAI’s Superalignment team, which was disbanded last month when Ilya Sutskever and Jan Leike also left over safety concerns.

Kokotajlo explained his reason for leaving in a forum post, saying he doesn’t think OpenAI will “behave responsibly around the time of AGI.”

A call to action

With no regulation yet governing AI risks the public doesn’t know about, the letter calls for a greater voluntary commitment from AI companies.

The letter says, “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The letter calls for AI companies to commit to four principles. In short, they want companies to:

  • Not enter into or enforce agreements that prohibit criticism of the company over safety concerns, or withhold financial benefits owed to the employee for raising them. (ahem, OpenAI)
  • Facilitate an anonymous process for employees to raise risk-related concerns to the company’s board, regulators, or an appropriate independent organization.
  • Support a culture of open criticism, allowing employees to raise risk-related concerns publicly as long as intellectual property is protected.
  • Not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

Several of the signatories consider themselves effective altruists. From their posts and comments, it’s clear that people like Daniel Kokotajlo (LessWrong) and William Saunders (AI Alignment Forum) believe things could end very badly if AI risks aren’t managed.

But these aren’t doomsayer trolls calling out from the sidelines of a forum. These are leading intellects that companies like OpenAI and Google saw fit to employ to build the very tech they now fear.

And now they’re saying, ‘We’ve seen stuff that scares us. We want to warn people, but we’re not allowed to.’

You can read the letter here.
