
AI poses ‘risk of extinction’, tech leaders warn

Hundreds of artificial intelligence scientists and tech executives signed a one-sentence letter warning succinctly that AI poses an existential threat to humanity, the latest in a growing chorus of alarms raised by the very people creating the technology.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement released yesterday by the nonprofit Centre for AI Safety. The open letter was signed by more than 350 researchers and executives, including Sam Altman, chief executive of ChatGPT creator OpenAI, as well as 38 members of Google’s DeepMind artificial intelligence unit.

Altman and others have been at the forefront of the field, pushing new “generative” AI to the masses, such as image generators and chatbots that can hold humanlike conversations, summarise text and write computer code. OpenAI’s ChatGPT was the first such bot to launch to the public, in November, prompting Microsoft and Google to release their own versions earlier this year.

Since then, a growing faction within the AI community has been warning about the potential risks of a doomsday-type scenario in which the technology grows sentient and attempts to destroy humans in some way. They are pitted against a second group of researchers who say such warnings are a distraction from problems such as inherent bias in current AI, the potential for it to take jobs and its ability to lie.

Sceptics also point out that companies selling AI tools can benefit from the widespread idea that the technology is more powerful than it actually is, and that by hyping up longer-term risks they can front-run potential regulation of shorter-term ones.

Dan Hendrycks, a computer scientist who leads the Centre for AI Safety, said the single-sentence letter was designed to ensure the core message isn’t lost.

“We need widespread acknowledgment of the stakes before we can have useful policy discussions,” Hendrycks said. “For risks of this magnitude, the takeaway isn’t that this technology is overhyped, but that this issue is currently underemphasised relative to the actual level of threat.”

In late March, a different public letter gathered more than 1000 signatures from members of the academic, business and technology worlds who called for an outright pause on the development of new high-powered AI models until regulation could be put in place. Most of the field’s most influential figures didn’t sign that one, but many have signed the new statement, including Altman and two of Google’s most senior AI executives: Demis Hassabis and James Manyika. Microsoft Chief Technology Officer Kevin Scott and Microsoft Chief Scientific Officer Eric Horvitz both signed it as well. Notably absent from the letter are Google CEO Sundar Pichai and Microsoft CEO Satya Nadella, the field’s two most powerful corporate leaders.

Pichai said in April that the pace of technological change may be too fast for society to adapt to, but he was optimistic because the conversation around AI risks was already happening. Nadella has said that AI will be hugely beneficial by helping humans work more efficiently.

Earlier this month, Altman met with President Joe Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that AI could cause significant harm to the world. Altman drew attention to specific “risky” applications, including using AI to spread disinformation and potentially to aid in more targeted drone strikes.
