As corporations increasingly incorporate AI technologies into Americans’ daily lives, California lawmakers want to build public trust, fight algorithmic discrimination, and ban deepfakes involving elections or pornography.
Efforts in California, home to many of the world's largest AI companies, could pave the way for AI regulations across the country. The United States is already behind Europe in regulating AI to limit risks, lawmakers and experts say, and the rapidly growing technology is raising concerns about job losses, misinformation, invasions of privacy and automation bias.
A series of proposals aimed at addressing those concerns was introduced last week, but each must win approval from the other chamber before reaching Gov. Gavin Newsom's desk. The Democratic governor has promoted California as an early adopter as well as a regulator, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide fiscal guidance, even as his administration considers new rules against AI discrimination in hiring practices. On Wednesday, he announced at an AI summit in San Francisco that the state is considering at least three more tools, including one to address homelessness.
With strong privacy laws already in place, California is in a better position to enact impactful regulations than other states with big interests in AI, such as New York, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with legislators on technology and privacy proposals.
“You need a data privacy law to be able to pass an AI law,” Rice said. “We’re still paying attention to what New York is doing, but I would bet more on California.”
California leaders said they can't wait to act, citing the hard lessons they learned from failing to rein in social media companies when they had the chance. But they also want to continue attracting AI companies to the state.
“We want to dominate this space and I’m too competitive to suggest otherwise,” Newsom said at Wednesday’s event. “I think the world expects us in many ways to lead in this space, so we feel a deep sense of responsibility to get it right.”
Here’s a closer look at California’s proposals:
Some companies, including hospitals, already use artificial intelligence models to shape decisions about hiring, housing and medical options for millions of Americans without much oversight. According to the U.S. Equal Employment Opportunity Commission, up to 83% of employers use AI to assist in hiring. How those algorithms work remains largely a mystery.
One of the most ambitious AI measures proposed in California this year would pull back the curtain on these models by establishing an oversight framework to prevent bias and discrimination. It would require companies to disclose when they use AI tools to make consequential decisions and to inform the people affected. AI developers would need to routinely conduct internal evaluations of their models to detect bias. And the state attorney general would have the authority to investigate reports of discriminatory patterns and impose fines of $10,000 per violation.
AI companies may also soon be required to start disclosing what data they are using to train their models.
Inspired by the months-long Hollywood actors’ strike last year, a California lawmaker wants to protect workers from being replaced by their AI-generated clones, a major point of contention in contract negotiations.
The proposal, backed by the California Federation of Labor, would allow performers to cancel existing contracts if vague language would let studios freely use AI to digitally clone their voices and likenesses. It would also require performers to be represented by an attorney or union representative when signing new "voice and likeness" contracts.
California could also impose penalties for digitally cloning dead people without the consent of their heirs. Lawmakers cited the case of a media company that used artificial intelligence to produce a fake hour-long comedy special recreating the style and material of the late comedian George Carlin without his estate's permission.
Real-world risks abound as generative AI creates new content, such as text, audio and photos, in response to prompts. That's why lawmakers are considering requiring guardrails around "extremely large" AI systems that could spit out instructions for creating disasters, such as building chemical weapons or assisting in cyberattacks, that cause at least $500 million in damage. Such models would be required to have a built-in "kill switch," among other safeguards.
The measure, backed by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices, including for even more powerful models that don’t exist yet. The state attorney general could also take legal action in case of violations.
A bipartisan coalition seeks to make it easier to prosecute people who use artificial intelligence tools to create images of child sexual abuse. Current law does not allow district attorneys to pursue people who possess or distribute AI-generated child sexual abuse images if the materials do not depict a real person, officials said.
A number of Democratic lawmakers are also backing a bill targeting election deepfakes, citing concerns after AI-generated robocalls imitated President Joe Biden's voice ahead of the recent New Hampshire presidential primary. The proposal would ban "materially misleading" election-related deepfakes in political mailers, robocalls and television ads from 120 days before Election Day until 60 days after. Another proposal would require social media platforms to label any election-related posts created by AI.
Keynote USA