
California Governor Gavin Newsom Vetoes AI Safety Bill SB 1047


Its defenders said it would be a modest law establishing “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents argued it was a dangerous and arrogant measure that would “stifle innovation.”

In any case, SB 1047, California State Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state, is now kaput, vetoed by Governor Gavin Newsom. The bill had gained broad support in the legislature: the California State Assembly approved it by a margin of 48 to 16 in August, and in May it passed the Senate by a vote of 32 to 1.

The bill, which would have held AI companies liable for catastrophic harms their “frontier” models might cause, was backed by a wide range of AI safety groups, as well as luminaries in the field such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have warned that the technology could pose massive, even existential, dangers to humanity. It received a surprise last-minute endorsement from Elon Musk, who among his other companies runs the AI firm xAI.

Lined up against SB 1047 was nearly the entire tech industry, including OpenAI, Facebook, the powerful investors Y Combinator and Andreessen Horowitz, and some academic researchers who feared it threatened open source AI models. Anthropic, another AI heavyweight, pushed to water down the bill; after many of its proposed amendments were adopted in August, the company said the bill’s “benefits likely outweigh its costs.”

Despite the industry backlash, the bill appeared to be popular with Californians. In a poll designed by supporters and a prominent opponent of the bill (an arrangement intended to ensure the questions were worded fairly), Californians backed the legislation 54 percent to 28 percent after hearing the arguments of both sides.

The wide bipartisan margins by which the bill passed the Assembly and Senate, and the public’s overall support (when the polling isn’t slanted), could make Newsom’s veto seem surprising. But it’s not that simple. Andreessen Horowitz, the $43 billion venture capital giant, hired Newsom’s close friend and Democratic operative Jason Kinney to lobby against the bill, and several powerful Democrats, including eight members of California’s US House delegation and former Speaker Nancy Pelosi, urged a veto, echoing tech industry talking points.

That was the faction that ultimately won, preventing California, the center of the AI industry, from becoming the first state to establish strong AI liability rules. Interestingly, Newsom justified his veto by arguing that SB 1047 did not go far enough. Because it focuses “only on the most expensive and largest-scale models,” he worried that the bill “may give the public a false sense of security about controlling this fast-moving technology,” while “smaller, more specialized models may be equally or even more dangerous than the models targeted by SB 1047.”

Newsom’s decision has broad implications, not only for AI safety in California but for the United States and potentially the world.

Given all the intense lobbying it attracted, you might think SB 1047 was an aggressive, heavy-handed bill, but, especially after several rounds of revisions in the State Assembly, the bill itself proposed to do quite little.

It would have offered whistleblower protections for tech workers, along with a process by which people with confidential information about risky behavior at an AI lab could take their complaint to the state attorney general without fear of prosecution. And it would have required AI companies that spend more than $100 million training an AI model to develop safety plans. (The extraordinarily high threshold for this requirement to kick in was intended to protect California’s startup industry, which had objected that the compliance burden would be too heavy for small companies.)

So what was it about this bill that could spark months of hysteria, intense lobbying by California’s business community, and unprecedented intervention by California’s federal representatives? Part of the answer is that the bill used to be stronger. The initial version set the compliance threshold by computing power, meaning that over time more companies would have become subject to the law as computers kept getting cheaper (and more powerful). It would also have established a state agency, the “Frontier Model Division,” to review safety plans; the industry objected to what it saw as a power grab.

Another part of the answer is that many people were told, falsely, that the bill did more than it actually did. One prominent critic incorrectly claimed that AI developers could be guilty of a felony regardless of whether they were involved in a harmful incident, when the bill’s only criminal liability provisions applied to developers who knowingly lied under oath. (Those provisions were later removed anyway.) Congressional Rep. Zoe Lofgren of the House Science, Space, and Technology Committee wrote a letter in opposition falsely claiming that the bill would require compliance with guidance that does not yet exist.

But such guidance does exist (you can read it in full here), and the bill did not require companies to comply with it. It said only that “a developer should consider industry best practices and applicable guidance” from the US Artificial Intelligence Safety Institute, the National Institute of Standards and Technology, the Government Operations Agency, and other accredited organizations.

Unfortunately, much of the discussion around SB 1047 centered on flatly incorrect claims like these, in many cases advanced by people who should have known better.

SB 1047 was premised on the idea that near-future AI systems could be extraordinarily powerful, and therefore dangerous, and that some oversight is required. That central premise is extraordinarily controversial among AI researchers. Nothing exemplifies the divide better than the three men often called the “godfathers of machine learning”: Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. Bengio (a Future Perfect 2023 honoree) and Hinton have become convinced in recent years that the technology they created could kill us all, and both have advocated for regulation and oversight. Hinton stepped down from Google in 2023 to speak openly about his fears.

LeCun, chief AI scientist at Meta, takes the opposite tack, declaring that such concerns are nonsensical science fiction and that any regulation would strangle innovation. While Bengio and Hinton supported the bill, LeCun opposed it, especially the idea that AI companies should be held liable if AI is used in a mass-casualty event.

In this sense, SB 1047 became the center of a symbolic tug-of-war: Does the government take concerns about AI safety seriously, or not? The actual text of the bill may have been limited, but to the extent that it suggested the government was listening to the half of experts who think AI could be extraordinarily dangerous, its implications were large.

It’s that sentiment that likely fueled some of the fiercest lobbying against the bill from venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z worked tirelessly to kill it, and some of the highly unusual appeals to federal legislators demanding that they oppose a state bill. More mundane politics probably played a role as well: Politico reported that Pelosi opposed the bill because she is trying to court tech venture capitalists for her daughter, who will likely run against Scott Wiener for a seat in the House of Representatives.

Why SB 1047 was so important

It might seem strange that legislation in a single US state had so many people wringing their hands. But remember: California is not just any state. It is where several of the world’s leading artificial intelligence companies are based.

And what happens there is especially important because, at the federal level, lawmakers have been dragging their feet on AI regulation. Between Washington’s vacillation and the looming elections, it has fallen to the states to pass new laws. The California bill, had Newsom signed it, would have been a big piece of that puzzle, setting the direction for the United States more broadly.

The rest of the world is watching too. “Countries around the world are analyzing these drafts for ideas that could influence their decisions on AI laws,” Victoria Espinel, CEO of the Business Software Alliance, a lobbying group representing major software companies, told the New York Times in June.

Even China, often invoked as the boogeyman in American conversations about AI development (because “we don’t want to lose an arms race with China”), is showing signs of caring about safety, not just wanting to race ahead. Bills like SB 1047 could signal to others that Americans care about safety, too.

Frankly, it’s refreshing to see lawmakers catch on to the tech world’s favorite tactic: claiming it can regulate itself. That claim may have prevailed in the age of social media, but it has become increasingly untenable. We need to regulate big tech, and that means not only carrots but also sticks.

Now that Newsom has killed the bill, he may face problems of his own. A pro-SB 1047 poll from the AI Policy Institute found that 60 percent of voters would be willing to blame him for future AI-related incidents if he vetoed SB 1047, and that they might punish him at the polls if he runs for higher office: 40 percent of California voters said they would be less likely to vote for Newsom in a future presidential primary if he vetoed the bill.

Editor’s note, September 29 at 5 pm ET: This story, originally published August 31, has been updated to reflect California Governor Gavin Newsom’s decision to veto SB 1047.
