Elon Musk and several other technologists call for a pause on training of AI systems


MARCH 29, 2023

More than 1,000 critics of artificial intelligence, ranging from industry executives and academics to tech specialists, have signed an open letter calling for at least a six-month pause on large-scale experiments with the technology.

Companies researching AI are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the letter reads. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The letter warns of potentially apocalyptic scenarios.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asks. “Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

SpaceX and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, IBM chief scientist Grady Booch, Stability AI CEO Emad Mostaque and tech ethicist Tristan Harris all signed the letter, which was released Wednesday morning.

Academics who signed it include Stuart Russell, who heads the University of California at Berkeley’s Center for Human-Compatible Artificial Intelligence, Hebrew University of Jerusalem historian Yuval Noah Harari and Sean O’Heigeartaigh, the executive director of Cambridge University’s Centre for the Study of Existential Risk.

Artificial intelligence tools that are available to the public are skyrocketing in popularity and capability. ChatGPT, a stunningly adept chatbot that uses language fluently but struggles with accuracy, became by some metrics the fastest-growing consumer application in history in January. Its creator, OpenAI, released a new version of its AI software two months later. And tech companies like Google, Microsoft and Snapchat have rushed to incorporate such technology into their platforms.

Industry watchdogs have warned that those companies are effectively testing new technology on the general public, deploying it without considering broader consequences, such as how it could disrupt labor markets.

Even OpenAI CEO Sam Altman has repeatedly called for regulation of the industry, though he was not among the initial round of signatories on the letter.

While many agree that the AI industry is moving ahead dangerously quickly, some ethicists have criticized the letter for focusing on theoretical, eventual harms from AI.

Sarah Myers West, the managing director of the AI Now Institute, a nonprofit that studies how AI technology affects society, said the letter misses some major concerns with the AI industry.

She said companies like Google and Microsoft are poised to dominate the U.S. AI market, that the technology might put large numbers of creative workers out of work, and that the companies are overhyping what their products can do. Last month, the Federal Trade Commission warned AI companies against making potentially false claims about their products.

“By focusing on hypothetical and long-term risks, it distracts from the regulation and enforcement we need in the here and now,” Myers West said.

Courtesy/Source: This article was originally published on NBCNews.com