California Charts New Path for AI Oversight

  • California experts urge balanced AI rules, warning of fast progress and serious risks without safeguards.
  • The new AI oversight plan shifts focus to real-world impact and improved industry transparency practices.
  • Report calls for independent risk checks, whistleblower protections, and better public access to AI info.

The debate over how to regulate powerful AI models in California just took a sharp new turn.

After last year’s controversial veto of Senate Bill 1047 by Governor Gavin Newsom, a handpicked team of experts has released its blueprint for state-level oversight of generative AI. Newsom had objected to the earlier, stricter proposal as too cookie-cutter, but he left the door open to guidelines that would help rather than hinder innovation.

California’s new expert-led report finds that large AI models are advancing more quickly than anticipated, especially in their ability to perform complex reasoning. The authors say these breakthroughs carry enormous promise across sectors including medicine, agriculture, transportation, and finance.

They also stress that unchecked progress brings very real dangers. The report bluntly warns that powerful AI, without guardrails, “could induce severe or potentially irreversible harms.” The group, which includes Stanford’s Fei-Fei Li and UC Berkeley’s Jennifer Tour Chayes, urges policymakers not to smother progress but to ensure rules are realistic.

Rapid Growth and Rising Risks

Since the draft report appeared in March, the risk associated with AI’s ability to assist in chemical and nuclear threats has grown more pressing. Some companies have already started flagging unexpected jumps in their models’ capabilities.

Instead of only focusing on how much computational power a model uses—which was one of the main criteria in last year’s failed bill—the new plan calls for a broader approach. The authors argue that what really matters is how these systems get used and the downstream impacts they create.

Transparency and openness remain elusive goals in the industry. The report highlights that there is still little consensus among developers on even basic best practices: opaque methods for gathering training data, vague safety procedures, and incomplete testing information are all common.

Whistleblower protections, safety guarantees for outside evaluators, and direct communication with the public are all identified as must-haves. The writers stress that “developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms.”

Scott Singer, a lead author of the report, told The Verge that California could unite other states around “commonsense policies that many people across the country support.” He believes this effort can head off the confusion that a patchwork of conflicting laws would create.

Calls for a national transparency standard have also grown. Anthropic CEO Dario Amodei has written that AI companies should be required to explain, directly on their websites, how they test their models for major threats and national security risks.

The California report makes clear that real, independent risk checks are still the missing piece. The authors say that third-party evaluators, who represent a much wider range of perspectives and expertise than the in-house teams at tech companies, are needed if hidden hazards are to be discovered in time.

But access for those outside testers is far from guaranteed. Even well-known firms that evaluate for safety, like Metr, say they have struggled to get the information and time they need from companies like OpenAI. According to Metr, those restrictions “prevent us from making robust capability assessments.”

Last March, hundreds of researchers signed an open letter demanding safe harbor for independent risk assessments. The new report backs that call and presses for channels that let people harmed by AI systems report problems as they arise.

“Even perfectly designed safety policies cannot prevent 100 percent of substantial, adverse outcomes,” the experts caution. The more widely foundation models are deployed, the more vital it becomes to track what can go wrong, a need underscored by the rising legal exposure businesses face.
