Can California show the way forward on AI safety?

Last week, California state Senator Scott Wiener (D-San Francisco) introduced a landmark new piece of AI legislation aimed at “establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.”

It’s a well-written, politically astute approach to regulating AI, narrowly focused on the companies building the biggest-scale models and the possibility that those massive efforts could cause mass harm.

As it has in fields from car emissions to climate change, California’s legislation could provide a model for national regulation, which looks likely to take much longer. But whether or not Wiener’s bill makes it through the statehouse in its current form, its existence reflects the fact that politicians are starting to take tech leaders seriously when they claim they intend to build radical, world-transforming technologies that pose significant safety risks, and ceasing to take them seriously when they claim, as some do, that they should be able to do so with absolutely no oversight.

What the California AI bill gets right

One challenge of regulating powerful AI systems is defining just what you mean by “powerful AI systems.” We’re smack in the middle of an AI hype cycle, and seemingly every company in Silicon Valley claims to be using AI, whether that means building customer service chatbots, day trading algorithms, general intelligences capable of convincingly mimicking humans, or even literal killer robots.

Getting that definition right is vital, because AI has enormous economic potential, and clumsy, excessively stringent regulations that crack down on beneficial systems could do enormous economic damage while doing surprisingly little about the very real safety concerns.

The California bill attempts to avoid this problem in a straightforward way: it concerns itself only with so-called “frontier” models, those “substantially more powerful than any system that exists today.” Wiener’s team argues that a model which meets the threshold the bill sets would cost at least $100 million to build, which means that any company that can afford to build one can definitely afford to comply with some safety regulations.

Even for such powerful models, the requirements aren’t overly onerous: The bill requires that companies developing such models prevent unauthorized access, be capable of shutting down copies of their AI under their control in the case of a safety incident (though not copies outside their control; more on that later), and notify the state of California of how they plan to do all this. Companies must demonstrate that their model complies with applicable regulations, for example from the federal government (though such regulations don’t exist yet, they may at some point). And they have to describe the safeguards they’re employing for their AI and why those safeguards are sufficient to prevent “critical harms,” defined as mass casualties and/or more than $500 million in damages.

The California bill was developed in significant consultation with highly respected AI scientists, and was released with endorsements from prominent AI researchers, tech industry leaders, and advocates for responsible AI alike. It’s a reminder that despite heated online disagreement, there’s actually a great deal these various groups agree on.

“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” Yoshua Bengio, considered one of the godfathers of modern AI and a leading AI researcher, said of the proposed law. “Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I’ve recommended to legislators.”

Of course, that’s not to say that everyone loves the bill.

What the California AI bill doesn’t do

Some critics have worried that the bill, while a step forward, would be toothless in the case of a truly dangerous AI system. For one thing, if there’s a safety incident requiring a “full shutdown” of an AI system, the law doesn’t require companies to retain the capability to shut down copies of their AI that have been released publicly, or that are owned by other companies or other actors. That makes the proposed regulations easier to comply with, but because AI, like any computer program, is easy to copy, in the event of a serious safety incident it wouldn’t actually be possible to just pull the plug.

“When we really need a full shutdown, this definition won’t work,” analyst Zvi Mowshowitz writes. “The whole point of a shutdown is that it happens everywhere whether you control it or not.”

There are also many concerns about AI that this particular bill can’t address. Researchers working on AI anticipate that it will change our society in many ways, for better and for worse, and cause a wide range of harms: mass unemployment, cyberwarfare, AI-enabled fraud and scams, algorithmic codification of biased and unfair procedures, and many more.

To date, most public policy on AI has tried to target all of those concerns at once: Biden’s executive order on AI last fall mentions every one of them. These problems, though, will require very different solutions, including some we have yet to imagine.

But existential risks, by definition, have to be solved to preserve a world in which we can make progress on all the others — and AI researchers take seriously the possibility that the most powerful AI systems will eventually pose a catastrophic risk to humanity. Regulation addressing that possibility should therefore be focused on the most powerful models, and on our ability to prevent mass casualty events they could precipitate.

At the same time, a model does not have to be extremely powerful to pose serious questions of algorithmic bias or discrimination: even an extremely simple model that predicts recidivism or eligibility for a mortgage on the basis of data reflecting decades of past discriminatory practices can do that kind of harm. Tackling those issues will require a different approach, one less focused on powerful frontier models and mass casualty incidents and more on our ability to understand and predict even simple AI systems.

No one law could possibly solve every challenge that we’ll face as AI becomes a bigger and bigger part of modern life. But it’s worth keeping in mind that “don’t release an AI that will predictably cause a mass casualty event,” while it’s a crucial element of ensuring that powerful AI development proceeds safely, is also a ridiculously low bar. Helping this technology reach its full potential for humanity — and ensuring that its development goes well — will require a lot of smart and informed policymaking. What California is attempting is just the beginning.

A version of this story originally appeared in the Future Perfect newsletter.
