Artificial intelligence companies are pushing back against California state lawmakers’ demand that they install a “kill switch” designed to mitigate potential dangers posed by the new technology — with some threatening to leave Silicon Valley altogether.
Scott Wiener, a Democratic state senator, introduced legislation that would force tech companies to comply with regulations fleshed out by a new government-run agency designed to prevent AI companies from allowing their products to achieve “a hazardous capability,” such as starting a nuclear war.
Wiener and other lawmakers want to install guardrails around “extremely large” AI systems that have the potential to spit out instructions for creating disasters — such as building chemical weapons or assisting in cyberattacks — that could cause at least $500 million in damages.
The measure, supported by some of the most famous AI researchers, would also create a new state agency to oversee developers and provide best practices, including for still-more powerful models that don’t yet exist.
The state attorney general also would be able to pursue legal actions in case of violations.
But tech firms are threatening to leave California if the new legislation is enshrined into law.
The bill was passed last month by the state Senate.
An Assembly vote is scheduled for August. If it passes, the bill goes to the desk of Gov. Gavin Newsom.
A spokesperson for the governor told The Post: “We typically don’t comment on pending legislation.”
A senior Silicon Valley venture capitalist told the Financial Times on Friday that he has fielded complaints from tech startup founders who have mused about leaving California altogether in response to the proposed legislation.
“My advice to everyone who asks is we stay and fight,” the venture capitalist told the FT. “But this will put a chill on open source and the start-up ecosystem. I do think some founders will choose to leave.”
The main objection from tech firms to the proposal is that it will stifle innovation by deterring software engineers from taking bold risks with their products due to fears of a hypothetical scenario that may never come to pass.
“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” Andrew Ng, an AI expert who has led projects at Google and Chinese firm Baidu, told the FT.
“It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”
Arun Rao, lead product manager for generative AI at Meta, wrote on X last week that the bill was “unworkable” and would “end open source in [California].”
“The net tax impact by destroying the AI industry and driving companies out could be in the billions, as both companies and highly paid workers leave,” he wrote.
Prominent Silicon Valley tech researchers have expressed alarm in recent years over the rapid advancement of artificial intelligence, saying that the consequences for humans could be dire.
“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” AI theorist Eliezer Yudkowsky, who is viewed as particularly extreme by his tech peers, said in an interview last summer.
Yudkowsky echoed concerns voiced by the likes of Elon Musk and other tech figures who advocated a six-month pause on AI research.
Musk said last year that there’s a “non-zero probability” that AI could “go Terminator” on humanity.
Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.
Earlier this year, European Union lawmakers gave final approval to a law that seeks to regulate AI.
The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning resumes and job applications.
The astonishing rise of general-purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.
They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.
Developers of general-purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems, as well as follow EU copyright law.
Some AI uses are banned because they’re deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces.
With Post Wires