
A group that includes Elon Musk calls for a pause in artificial-intelligence development, citing "risks to society."

"Powerful artificial intelligence systems should only be developed when we are confident that the effects will be positive and the risks will be manageable," the group states.

Elon Musk (Photo: Carina Johansen / NTB via REUTERS)

(Reuters) - Billionaire Elon Musk and a group of artificial-intelligence experts and industry executives are calling for a six-month pause in the development of systems more powerful than OpenAI's recently launched GPT-4. The request comes in an open letter citing potential risks to society and humanity.

The letter, issued by the nonprofit organization Future of Life Institute and signed by over a thousand people, including Musk, calls for a pause in the advanced development of artificial intelligence until shared safety protocols for such projects are developed, implemented, and audited by independent experts.

"Powerful artificial intelligence systems should only be developed when we are confident that their effects will be positive and their risks will be manageable," the group states in the letter.


OpenAI, which counts Microsoft among its main supporters, did not immediately respond to a request for comment.

The letter details the potential risks to society and civilization in the form of economic and political problems and calls on developers to work with policymakers and regulatory authorities on governance.

Co-signatories include Emad Mostaque, CEO of Stability AI; researchers at Alphabet-owned DeepMind; and industry heavyweights Yoshua Bengio, often referred to as one of the "godfathers of artificial intelligence," and Stuart Russell, a pioneering researcher in the field.

According to the European Union's transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, along with the London-based group Founders Pledge and the Silicon Valley Community Foundation.

TRANSPARENCY

Sam Altman, CEO of OpenAI, did not sign the letter, a Future of Life spokesperson told Reuters.

"The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a professor at New York University and one of the letter's signatories. "Large corporations are becoming increasingly secretive about what they're doing, which makes it difficult for society to defend itself against any harm that might materialize."

Critics accused the letter's signatories of promoting "AI hype," arguing that claims about the technology's current potential were greatly exaggerated.

"This type of statement aims to increase the hype. It is meant to make people worried," said Johanna Björklund, a researcher and associate professor at Umeå University in Sweden. "I don't think there's any need to pull the handbrake."

Instead of halting the research, she said, researchers should be subject to greater transparency requirements. "If you do artificial intelligence research, you should be very transparent about how you do it," Björklund said.