A bill aimed at regulating powerful artificial intelligence models is under consideration in California’s legislature, despite outcry that it could kill the technology it seeks to control.
“With Congress gridlocked over AI regulation… California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation,” said Democratic state senator Scott Wiener of San Francisco, the bill’s sponsor.
But critics, including Democratic members of US Congress, argue that threats of punitive measures against developers in a nascent field can throttle innovation.
“The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed,” influential Democratic congresswoman Nancy Pelosi of California said in a release, noting that top party members have shared their concerns with Wiener.
“While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit,” Pelosi said.
Pelosi pointed out that Stanford University computer science professor Fei-Fei Li, whom she referred to as the “Godmother of AI” for her status in the field, is among those opposing the bill.
– Harm or help? –
The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, will not solve what it is meant to fix and will “deeply harm AI academia, little tech and the open-source community,” Li wrote earlier this month on X. Little tech refers to startups and small companies, as well as researchers and entrepreneurs.
Wiener said the legislation is intended to ensure safe development of large-scale AI models by establishing safety standards for developers of systems costing more than $100 million to train.
The bill requires developers of large "frontier" AI models to take precautions such as pre-deployment testing, simulating hacker attacks and installing cybersecurity safeguards, as well as to provide protection for whistleblowers.
Recent changes to the bill include replacing criminal penalties for violations with civil penalties such as fines.
Wiener argues that AI safety and innovation are not mutually exclusive, and that tweaks to the bill have addressed some concerns of critics.
OpenAI, the creator of ChatGPT, has also come out against the bill, saying it would prefer national rules, fearing a chaotic patchwork of AI regulations across the US states.
At least 40 states have introduced bills this year to regulate AI, and a half dozen have adopted resolutions or enacted legislation aimed at the technology, according to the National Conference of State Legislatures.
OpenAI said the California bill could also chase innovators out of the state, home to Silicon Valley.
But Anthropic, another generative AI player that could be affected by the measure, has said that after some welcome modifications, the bill has more benefits than flaws.
The bill also has high-profile backers from the AI community.
"Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously," computer scientist Geoffrey Hinton, the "Godfather of AI," said in a Fortune op-ed piece cited by Wiener.
“SB 1047 takes a very sensible approach to balance those concerns.”
AI regulation with “real teeth” is critical, and California is a natural place to start since it has been a launch pad for the technology, according to Hinton.
Meanwhile, professors and students at the California Institute of Technology are urging people to sign a letter against the bill.
"We believe that this proposed legislation poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on AI development," Caltech professor Anima Anandkumar said on X.