(Yicai) July 28 -- Developers of artificial intelligence models should prioritize safety over competing to build increasingly advanced black-box technologies, according to Stuart Jonathan Russell, a leading British computer scientist.
"I do think that the attention to the safety of AI systems is extremely important," said the University of California, Berkeley professor in an interview with Yicai during the three-day World Artificial Intelligence Conference in Shanghai, which concludes today.
"I hope that if we can reduce the arms race mentality where everyone is trying to be the first to create super intelligent machines, then we can focus more on safety and make sure that we remain in control of our AI systems, that we can trust them to act safely to the benefit of human beings in the long run. That’s the only technology that has value," he added.
Russell, co-author of the textbook Artificial Intelligence: A Modern Approach, has long warned about the risks posed by AI. In 2023, he joined thousands of technologists, including Tesla Chief Executive Officer Elon Musk and Turing Award recipients Geoffrey Hinton and Yoshua Bengio, in signing an open letter urging all AI laboratories to pause the training of systems more powerful than GPT-4 for at least six months to establish safety protocols that could mitigate the risk of AI-driven extinction.
During the conference, Russell also warned that the current scale of investment in AI is enormous, and that if the technology fails to deliver returns quickly, an industry bubble could burst. He noted that if artificial general intelligence advances to the point where it can replace most intellectual labor, large numbers of educated workers could become unemployed.
In the short term, he said, society would struggle to adapt if AI undermines the incentive mechanisms that have underpinned social stability for centuries. Over the long term, however, humanity might adapt by developing new societal structures and education systems.
Editor: Emmi Laine