September 20, 2024

California lawmakers vote on AI safety bill amid opposition from tech companies

California lawmakers are set to vote this week on a bill aimed at reducing the risk of artificial intelligence being used for nefarious purposes, such as cyberattacks or the development of biological weapons.

California Senate Bill 1047, authored by Senator Scott Wiener, would be the first law in the United States to require AI companies that build large-scale models to test them for safety.

California lawmakers are considering dozens of AI-related bills this session. But Wiener’s proposal, called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has captured national attention because of fierce opposition from Silicon Valley, the nation’s AI powerhouse. Opponents say its onerous technical requirements and potential fines would stifle innovation in California and undermine the country’s global competitiveness.

OpenAI is the latest AI developer to declare its opposition, arguing in a message on Wednesday that regulation of AI should be left to the federal government, and claiming that companies will leave California if the proposed legislation passes.

The state legislature is now set to vote on the bill, which Wiener recently amended in response to criticism from the tech industry. He said the new language does not address all of the issues raised by the industry.

“This is a reasonable, light-touch bill that will in no way hinder innovation, but will help us get ahead of the risks that come with any powerful technology,” Wiener told reporters during a news conference on Monday.

What will the bill do?

The bill would require companies that build large AI models — those that cost more than $100 million to train — to mitigate any significant risks in the system that they find through safety tests. That includes creating a “shutdown” capability — or a way to pull the plug on a potentially unsafe model in catastrophic circumstances.


Developers would also be required to create a technical plan to address safety risks and to keep a copy of that plan for as long as the model is available, plus five years. Companies with large AI operations, such as Google, Meta, and OpenAI, have already made voluntary commitments to the Biden administration to manage AI risks, but the California legislation would impose legal obligations and enforcement.

Each year, a third-party auditor would assess a company’s compliance with the law. Companies would also have to document their compliance and report any safety incidents to the California Attorney General. The Attorney General’s office could impose civil penalties of up to $50,000 for a first violation and an additional $100,000 for subsequent violations.

What are the criticisms?

Much of the tech industry has criticized the proposed bill as too burdensome. Anthropic, a prominent AI company that markets itself as safety-focused, argued that an earlier version of the legislation would have created complex legal obligations that would stifle AI innovation, such as allowing California’s attorney general to sue developers for negligence even if no safety disaster had occurred.

OpenAI has suggested that companies will leave California if the bill passes to avoid its requirements. It has also insisted that AI regulation should be left to Congress to prevent a confusing patchwork of laws from emerging across the states.

Wiener responded by calling the idea of companies fleeing California a “tired argument,” noting that the bill’s provisions would still apply to companies that provide services to California residents, even if they are not headquartered in the state.


Last week, eight members of the U.S. Congress urged Gov. Gavin Newsom to veto Senate Bill 1047 over the obligations it would impose on companies that make and use artificial intelligence. Rep. Nancy Pelosi joined her colleagues in opposition, calling the bill “well-intentioned but ill-considered.” (Wiener has been eyeing a run for Pelosi’s House seat, which could set up a future contest with her daughter, Christine Pelosi, according to Politico.)

Pelosi and the members of Congress are siding with the “godmother of AI,” Dr. Fei-Fei Li, a Stanford University computer scientist and former Google researcher. In a recent op-ed, Li said the legislation would “harm our emerging AI ecosystem,” especially smaller developers who “are already at a disadvantage compared to today’s tech giants.”

What do supporters say?

The bill has received support from several AI startups, Notion co-founder Simon Last, and the AI “godfathers” Yoshua Bengio and Geoffrey Hinton. Bengio said the legislation would be a “positive and sensible step” toward making AI safer while encouraging innovation.

The bill’s supporters fear that, without adequate safety measures, unchecked AI could have dire consequences and pose existential threats, such as heightened risks to critical infrastructure and the creation of nuclear weapons.

Wiener defended his legislation as “simple and common sense,” noting that it would require only the largest AI companies to adopt safety measures. He also touted California’s record of leadership on U.S. technology policy, and expressed doubts about whether Congress will pass any substantive AI legislation in the near future.

“California has repeatedly stepped in to protect our residents and fill the void left by Congressional inaction,” Wiener said, noting the lack of federal action on data privacy and social media regulation.


What’s next?

In his latest statement on the bill, Wiener said the latest amendments take into account many of the concerns raised by the AI industry. The current version replaces the criminal penalty originally proposed for lying to the government with a civil penalty, and it drops the proposal to create a new state regulatory body to oversee AI models.

In a letter to Newsom, Anthropic said the benefits of the revised legislation likely outweigh its potential harms to the AI industry, with the main benefits being public transparency about AI safety and companies investing in risk mitigation. But Anthropic remains wary of the potential for overly broad restrictions and expanded reporting requirements.

“We think it’s critical that we have some framework for managing frontier AI systems that roughly meets these three requirements,” Anthropic CEO Dario Amodei told the governor, whether that framework is SB 1047 or not.

California lawmakers have until Aug. 31, the end of the session, to pass the bill. If approved, it would go to Gov. Gavin Newsom for final approval by the end of September. The governor has not indicated whether he plans to sign the legislation.