California has 30 new proposals to rein in A.I. but Trump could complicate them

California lawmakers are proposing regulations to protect people and society from the unintended consequences of artificial intelligence. By contrast, the Trump administration wants to avoid excessive regulation. Illustration by Gabriel Hongsdusit, CalMatters

By Khari Johnson for CalMatters

AI can get rid of racist restrictions in housing covenants and help people access government benefits, or it can deny people health care or a mortgage because of their race. That’s why, last month, for the third year in a row, Democratic Assemblymember Rebecca Bauer-Kahan of San Ramon proposed a bill to protect people from automated discrimination and AI that makes consequential decisions with the power to change a person’s life.

If passed, Assembly Bill 1018 would require the makers of AI to evaluate how the tech performs before it's used and to notify people before AI makes decisions about employment, education, housing, health care, finance, criminal sentencing, and access to government services. It would also give people the right to opt out of AI use and to appeal a decision made by an AI model.

This year, California lawmakers like Bauer-Kahan are surging forward with 30 bills to regulate how AI impacts individuals and society, and some of the most high-profile efforts are ones that lawmakers attempted last year, only to see them vetoed by Gov. Gavin Newsom or fail to pass.

In addition to the bill that guards against automated discrimination, lawmakers will again consider other legislation to protect society from AI, including a bill that requires a human driver in commercial vehicles and a new version of a measure that previously sought to compel companies to better examine whether AI can cause harm.

The new wave of proposals follows a batch of more than 20 AI laws Newsom signed last year, but they are moving forward in a very different political environment.

Last year, the Biden administration supported measures to protect people from bias and discrimination and major companies signed pledges to responsibly develop AI, but today the White House under President Donald Trump opposes regulation and companies including Google are rolling back their own responsible AI rules. On his first day in office, Trump rescinded a Biden executive order intended to protect people and society from AI.

That dissonance could ultimately help the California lawmakers who want more AI protections. In a world of rapid-fire White House executive orders and chaotic, AI-driven decision-making by DOGE, there's going to be more appetite for state lawmakers to regulate AI, said Stephen Aguilar, associate director of the Center for Generative AI and Society at the University of Southern California.

“I think California in particular is in position to say, ‘Okay we need mitigants in place now that folks are coming in with a wrecking ball,’” he said.

Bills will need to get through Newsom, who last year vetoed bills intended to protect people from self-driving trucks and weaponized robots and to set standards for AI contracts signed by state agencies. Most notably, Newsom vetoed what was billed as the single most comprehensive effort to regulate AI by compelling testing of AI models to determine whether they would likely lead to mass death, endanger public infrastructure, or enable severe cyberattacks.

Newsom vetoed the self-driving trucks and AI testing bills in part on the grounds that the bills could hinder innovation. He then created an AI working group to balance innovation with guardrails. That group should release recommendations about how to strike that balance in the coming weeks.

Democratic Sen. Scott Wiener of San Francisco, who carried the prominent AI bill, reintroduced a version of that proposal last month. Compared with last year's measure, the bill is scaled back to protections for AI whistleblowers and the establishment of a state cloud to enable research in the public interest. A former OpenAI employee who witnessed violations of internal safety policy told CalMatters that whistleblower protections are needed to keep society safe.

Assemblymember Rebecca Bauer-Kahan speaks in support of SCR 135, which would designate May 6, 2024 as California Holocaust Memorial Day on the Assembly floor at the state Capitol in Sacramento on April 29, 2024. Photo by Miguel Gutierrez Jr., CalMatters

Bauer-Kahan was the first state lawmaker to propose legislation based on the AI Bill of Rights, a set of principles that the Biden administration and tech justice researchers called foundational to protecting people's rights in the age of AI, including the right to live free from discrimination, the right to know when AI makes important decisions about your life, and the right to know when an automated system is being used. It didn't become law, but roughly a dozen states have passed or are considering similar bills, according to Consumer Reports.

In a press conference to reintroduce her bill, Bauer-Kahan said the Trump administration’s stance on AI regulation changes “the dynamic for the states.”

“It is on us more,” she said, pointing to his repeal of an executive order influenced by the AI Bill of Rights and the stall of the AI Civil Rights Act in Congress.

The tale of two administrations in Paris

Dueling perspectives on how the U.S. and the rest of the world should regulate AI were on display earlier this month in Paris at a summit attended by CEOs and heads of state.

In comments at a private "working dinner" hosted by President Emmanuel Macron at the Élysée Palace, alongside people like OpenAI CEO Sam Altman and German Chancellor Olaf Scholz, AI Bill of Rights author and former director of the Office of Science and Technology Policy Alondra Nelson urged business and government leaders to discard misconceptions about AI, such as the notion that its purpose is scale and efficiency. AI can accelerate growth, she argued, but its purpose is to serve humanity.

“It is not inevitable that AI will lead to great public benefits,” she said in remarks at the event. “We can create systems that expand opportunity rather than concentrate power. We can build technology that strengthens democracy rather than undermines it.”

Technology leaders attend a generative artificial intelligence meeting in San Francisco on June 29, 2023. Photo by Carlos Barria, Reuters

By contrast, Vice President J.D. Vance at the same event said the United States will fight what he called excessive AI regulation. The U.S. refused to sign an international declaration to “ensure AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy.”

The Trump administration’s position that regulation is a threat to AI innovation mirrors the talking points of major companies such as Google, Meta, and OpenAI that lobbied against regulation last year.

Debate about whether to regulate AI comes at a time when Elon Musk, President Trump, and a small group of technologists seek to build and use AI within numerous federal agencies to improve efficiency and save money.

Those efforts risk cutting benefits to people who depend on them. A report released in late 2024 by California-based nonprofit TechTonic Justice found that AI influences government services for tens of millions of low-income Americans, often cutting benefits they’re entitled to and making opportunities harder to access.
