Tesla and SpaceX chief executive Elon Musk has joined a group of tech experts, including Apple co-founder Steve Wozniak, in urging a temporary pause on giant artificial intelligence (AI) experiments, citing risks to society and civilization.
The open letter, issued by the Future of Life Institute and signed by more than 1,300 people, including Musk, calls on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium, it said.
The technologists said they are not seeking a pause on AI development in general, but merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
The letter warned that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."
Recently emerged generative AI technologies, unlike predecessor machine-learning approaches such as analytical AI, serve generalized rather than specialized use cases. They can generate novel, human-like output rather than merely describe or interpret existing information, and they offer approachable interfaces that understand and respond with natural language, images, audio, and video.
Since contemporary AI systems are becoming human-competitive at general tasks, the tech experts raised questions about whether machines should be allowed to flood information channels with propaganda and lies, and whether all jobs, including the fulfilling ones, should be automated away. They also raised concerns about developing nonhuman minds that might eventually outnumber, outsmart, obsolete and replace humans, and about risking loss of control of civilization.
The letter warned that powerful AI systems should be developed only once there is confidence that their effects will be positive and their risks will be manageable.
During the proposed pause, AI labs and independent experts are urged to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.
"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems," it said.
The signatories reportedly include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and prominent AI researchers Yoshua Bengio and Stuart Russell.
Meanwhile, Sam Altman, CEO of OpenAI, which is at the forefront of generative AI development, reportedly has not signed the letter.
GPT-4 is a multimodal large language model created by OpenAI, the developer of the generative AI chatbot ChatGPT, which is backed by tech major Microsoft.
Musk, a co-founder of OpenAI, has previously raised concerns about the rapid development of AI and recently warned that AI is one of the biggest risks to the future of civilization. He told attendees at the World Government Summit in Dubai, United Arab Emirates, in February that AI is both positive and negative: it has great promise and great capability, but with that comes great danger.
Musk's electric car company Tesla uses AI for its Autopilot system.
The warning from tech experts comes as a Goldman Sachs report predicted that generative AI technologies, including the highly popular ChatGPT, could expose around 300 million full-time jobs worldwide to automation, deepening fears about job security.