California Governor Newsom Signs Several AI Bills but Vetoes Three
Key Takeaways
- The California legislature sent a range of AI bills to Governor Newsom’s desk in the legislative session that just concluded. The governor signed several into law and vetoed three.
- The signed bills include SB 53 (frontier AI transparency and safety), SB 243 (companion chatbots), AB 316 (AI defenses in litigation), AB 325 (antitrust limits on common pricing algorithms), AB 489 (limits on AI healthcare advice), AB 621 (deepfake pornography), and AB 853 (disclosures for generative AI).
- The vetoed bills are AB 1064 (chatbot safety for minors), SB 7 (employer use of AI), and SB 11 (digital replica disclosures).
At the close of the 2025 legislative session, the California legislature sent a flurry of bills regulating AI to Governor Gavin Newsom’s desk for signature or veto. In this Update, we highlight seven of these laws signed by the governor that specifically target AI development, deployment, or use by companies, and we also describe three AI bills that Governor Newsom vetoed. The new AI laws cover a wide array of topics, including transparency requirements, consumer protection concerns, antitrust, deepfake pornography, and AI-related defenses to liability. They impose new obligations on AI developers and deployers, as well as employers and entities that use AI. Below, we describe each law in more detail.
Adjacent to AI, the California legislature also sent a lengthy list of privacy bills to Governor Newsom, several of which were signed. Learn more about the privacy bills here.
New California AI Laws: Bills Signed by the Governor
SB 53: Frontier Model Transparency and Governance
SB 53 imposes new transparency obligations on “frontier developers,” defined to mean a person who has trained, or initiated the training of, a frontier model using computing power greater than 10^26 integer or floating-point operations (a rough illustration of this compute threshold follows the list below). Specifically, SB 53 requires all frontier developers to:
- Publish transparency reports when deploying a new or substantially modified frontier model. The report must include basic information (website, mechanism for communicating with the developer, model release date, model languages, and model output modalities), as well as the model’s intended uses and any generally applicable restrictions or conditions on the model’s uses. Developers can comply by including this information in a system or model card.
- Report “critical safety incidents” pertaining to frontier models to the California Office of Emergency Services (OES) within 15 days of discovery (or within 24 hours if the incident poses an imminent risk of death or serious physical injury). Critical safety incidents are defined to include loss of control or unauthorized access that causes death or injury, harm from the materialization of a catastrophic risk (i.e., one that may materially contribute to the death or serious injury of more than 50 people or more than $1 billion in damage), or materially increased catastrophic risk from the model using deceptive techniques to subvert the developer’s controls or monitoring. OES is empowered to adopt regulations that would enable frontier developers to comply with this requirement by meeting the standards of federal laws, regulations, or guidance documents designated by OES.
- Protect whistleblowers who disclose to authorities or other employees information that they believe, in good faith, indicates the frontier developer’s activities present a specific and substantial danger to the public resulting from a catastrophic risk, and refrain from retaliating against employees who make such disclosures.
- Notify employees of their whistleblower rights.
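To put the 10^26 figure in perspective, training compute for large transformer models is often approximated as roughly six floating-point operations per model parameter per training token. The statute does not prescribe any counting method, so the heuristic below and the example model sizes are illustrative assumptions only:

```python
# Rough estimate of whether a training run crosses SB 53's 10^26-operation
# threshold, using the common ~6 * parameters * tokens approximation for
# transformer training FLOPs. The statute does not prescribe an estimation
# method; this heuristic and the example figures are assumptions.

FRONTIER_THRESHOLD_OPS = 1e26  # SB 53: > 10^26 integer or floating-point operations

def estimated_training_ops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_training_tokens

def exceeds_frontier_threshold(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimate exceeds the statutory threshold."""
    return estimated_training_ops(num_parameters, num_training_tokens) > FRONTIER_THRESHOLD_OPS

if __name__ == "__main__":
    # Hypothetical run: a 1-trillion-parameter model trained on 20 trillion tokens.
    params, tokens = 1e12, 20e12
    print(f"Estimated ops: {estimated_training_ops(params, tokens):.2e}")  # ~1.2e26
    print("Above 10^26 threshold:", exceeds_frontier_threshold(params, tokens))
```

Under this heuristic, a hypothetical one-trillion-parameter model trained on 20 trillion tokens lands just above the statutory line, which suggests the threshold is aimed at only the largest contemporary training runs.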
Additional transparency and governance obligations apply to large frontier developers, meaning those whose annual gross revenues, together with those of their affiliates, exceed $500 million. Those developers must:
- Publish on their websites annual “frontier AI frameworks” that include their assessments and risk mitigation protocols for catastrophic risks, cybersecurity practices and critical safety incident response processes, and internal governance practices to ensure implementation.
- Provide additional details in their transparency reports when deploying a new or substantially modified frontier model, including the assessments of catastrophic risk conducted under their frontier AI framework and the results of those assessments.
- Submit periodic (every three months by default) summaries to OES of any assessment of catastrophic risk resulting from internal use of its frontier models.
- Affirmatively offer reasonable internal processes for employees to anonymously disclose information they believe in good faith shows a specific and substantial danger to the public resulting from a catastrophic risk. Employees who make those disclosures must be provided monthly status updates about the investigation and remedial steps.
The law also establishes a consortium to design a state-backed public cloud computing cluster (called “CalCompute”) and directs California’s Department of Technology to conduct an annual review of key definitions and submit recommendations on any modifications to the Legislature.
Notably, SB 53 is a scaled-down version of last year’s SB 1047, which similarly sought to regulate developers of frontier AI models but which Governor Newsom vetoed (see our discussion), explaining that the bill did “not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” SB 53 omits some of the more controversial elements of SB 1047, such as its requirement that developers build full shutdown capabilities into covered models, and instead focuses on transparency and governance processes.
Governor Newsom issued a signing statement emphasizing that SB 53 was shaped by last spring’s report from the Joint California Policy Working Group on AI Frontier Models. The governor praised the law for helping to ensure California remains a stronghold of technological innovation while ensuring meaningful oversight of AI safety, particularly as it relates to matters of national security, cybersecurity, and public safety, and for providing a pathway for aligning critical incident reporting obligations with national standards that may be adopted by Congress or the federal government.
Violations are enforceable by the California attorney general and are subject to civil penalties of up to $1 million per violation, depending on the severity. Employees may enforce SB 53’s whistleblower protections through a private right of action, which also authorizes attorneys’ fees for a successful plaintiff.
SB 53 takes effect on January 1, 2026.
SB 243: Companion Chatbots
SB 243 regulates “companion chatbots,” defined as AI systems with natural language interfaces that provide adaptive, human-like responses to user inputs and are capable of meeting a user’s social needs, “including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.” The definition excludes customer service chatbots and similar business tools, chatbots in video game platforms where replies are limited to topics related to the video game, and standalone consumer devices that include voice-activated virtual assistant functionalities.
An operator who makes a companion chatbot available to a user in California is required to:
- Provide a “clear and conspicuous” notice that companion chatbots are artificially generated if a reasonable person interacting with the chatbot would be misled to believe that the person is interacting with a human.
- Maintain a protocol to prevent its chatbot from producing content related to suicidal ideation, suicide, or self-harm (for example, by referring users who express suicidal ideation or thoughts of self-harm to crisis services such as a suicide or crisis hotline), and publish details about that protocol on its website.
- Submit annual reports to the Office of Suicide Prevention describing the operator’s protocols regarding users’ suicidal ideations and the number of crisis provider referral notifications.
- Implement additional protections for users that the operator knows to be minors, including providing a disclosure that the user is interacting with AI, reminding the user at least every three hours that the chatbot is AI-generated and to take a break (a minimal sketch of this reminder cadence follows this list), and instituting reasonable measures to avoid sexually explicit content. In addition, operators must disclose to all users that companion chatbots may not be suitable for some minors.
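The three-hour reminder for known minors is, at bottom, a session-timing requirement. The sketch below shows one minimal way the cadence check might be wired into a chat loop; the class, message wording, and surrounding loop are our own illustrative assumptions, not statutory language or any vendor’s actual implementation:

```python
import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # SB 243: at least every three hours

class MinorSessionReminder:
    """Tracks elapsed session time and flags when a break/AI-disclosure
    reminder is due for a user the operator knows to be a minor."""

    def __init__(self) -> None:
        self._last_reminder = time.monotonic()

    def reminder_due(self) -> bool:
        return time.monotonic() - self._last_reminder >= REMINDER_INTERVAL_SECONDS

    def build_reminder(self) -> str:
        # Illustrative wording only; the statute requires a reminder that the
        # chatbot is AI-generated and that the user should take a break.
        self._last_reminder = time.monotonic()
        return ("Reminder: you are chatting with an AI, not a person. "
                "Consider taking a break.")

# Hypothetical use inside a chat loop:
# reminder = MinorSessionReminder()
# for user_message in session:
#     if user_is_known_minor and reminder.reminder_due():
#         send(reminder.build_reminder())
#     send(generate_reply(user_message))
```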
SB 243 creates a private right of action for any person who suffers injury in fact as a result of a violation. The law authorizes injunctive relief, damages in an amount equal to the greater of actual damages or $1,000 per violation, and reasonable attorneys’ fees and costs.
SB 243 takes effect on January 1, 2026, but the first required annual report to the Office of Suicide Prevention is due on July 1, 2027.
AB 316: AI Defenses in Litigation
AB 316 precludes companies or individuals who developed, modified, or used AI from asserting that the AI acted autonomously as a defense in civil actions seeking to hold them liable for harm allegedly caused by the AI. The law preserves other legal defenses, such as causation, foreseeability, comparative fault, and (presumably) that the defendant exercised ordinary care or skill in the development, modification, or use of AI.
AB 316 takes effect on January 1, 2026.
AB 325: Cartwright Act and Pricing Algorithms
AB 325 amends the Cartwright Act, California’s primary antitrust law, to prohibit the use or distribution of “common pricing algorithms” and to forbid one party from coercing another to adopt pricing or terms recommended by the algorithm. “Common pricing algorithm” is defined as any methodology, including computer software or other technology, used by multiple parties that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term. AB 325 also lowers the pleading standard for any alleged Cartwright Act violations (whether or not they involve alleged common pricing algorithms).
AB 325 takes effect on January 1, 2026.
AB 489: Deceptive Statements About Use of AI in Healthcare
AB 489 extends to the developers and deployers of AI technologies certain requirements designed to help patients know when healthcare communications are generated by AI. First, AB 489 makes certain existing healthcare profession restrictions, which require disclaimers when patient communications about clinical information are generated by AI, along with information on how to contact a human, enforceable against the developer or deployer of the system or device that produces those communications (i.e., a technology vendor). Second, AB 489 prohibits implying, in advertising or in the functionality of a health tech system, that AI-generated information is being provided by a licensed healthcare professional when it is not.
Violations are enforceable by healthcare licensing boards and healthcare licensing enforcement agencies.
AB 489 takes effect on January 1, 2026.
AB 621: Deepfake Pornography
AB 621 expands California’s civil remedies for the creation, intentional disclosure, and knowing facilitation or reckless aiding and abetting of the distribution of “digitized sexually explicit material,” including deepfake pornography, if the depicted individual either did not consent or was a minor when the material was created. “Digitized sexually explicit material” includes any image, visual, or audiovisual work created or substantially altered through digitization to show the individual nude or appearing to be engaged in sexual conduct.
The owner or operator of a “deepfake pornography service” (i.e., a website, application, or service with the primary purpose of creating digitized sexually explicit material) is deemed to be engaged in the creation of the illegal content and presumed to have knowledge that the depicted individual did not consent unless they have explicit written consent.
Providers of services that enable the ongoing operation of deepfake pornography services are presumed to be violating the law if, after receiving sufficient evidence from a victim or public prosecutor, they do not stop providing those services within 30 days (or within a longer period granted by a court to allow for law enforcement investigation).
Disclaimers stating that the depicted individual did not participate in or authorize the creation of the material, or that users are prohibited from generating such content without the depicted individual’s consent, are not a defense. But AB 621 does exclude from liability persons who disclose covered materials in the course of reporting unlawful activity, law enforcement, legal proceedings, or matters of legitimate public concern, newsworthiness, or other commentary protected by the U.S. or California Constitutions. AB 621 also includes an express exemption for conduct protected by Section 230 of the Communications Decency Act and expressly confirms that an internet service provider is not liable for mere “transmission, routing, or provisions of access to third-party content over its network.”
AB 621 provides for both public and private civil enforcement. A public prosecutor, who need not prove that any depicted individual suffered actual harm, may bring a civil action for injunctive relief, fines of $25,000 per violation (or $50,000 if the violation was committed with malice), and attorneys’ fees. A “depicted individual” who “suffers harm as a result” of a violation has a private right of action and may obtain injunctive relief and recover the defendant’s monetary gain; economic and noneconomic damages, including for emotional distress, or statutory damages of $1,500 to $50,000 per work (increased to up to $250,000 if the act was committed with malice); punitive damages; and attorneys’ fees.
AB 621 takes effect on January 1, 2026.
AB 853: California AI Transparency Act Amendments
AB 853 amends the California AI Transparency Act (SB 942), passed last year, by (1) delaying the effective date of the existing requirements under the California AI Transparency Act until August 2, 2026 (previously January 1, 2026) and (2) imposing new transparency requirements on additional types of companies that go into effect on January 1, 2027, and January 1, 2028.
AB 853 also adds obligations for “large online platforms” (e.g., websites or applications that distribute content to more than two million monthly users), “GenAI system hosting platforms” (e.g., websites that make generative AI code or models available for download), and manufacturers of “capture devices” (i.e., devices that can record photographs, audio, or video content, such as a camera, voice recorder, or phone with a built-in microphone and/or camera):
- Large online platforms. As of January 1, 2027, large online platforms are required to develop a free method for users to access and easily inspect system provenance data of content distributed on their platform that reliably indicates whether the content was generated or substantially altered by a GenAI system or captured by a capture device. This means large online platforms must be able to detect whether any provenance data is embedded in the content distributed on their platform, assuming the content complies with specifications widely adopted by established standard-setting bodies (a rough sketch of such detection follows this list).
- GenAI system hosting platforms. GenAI system hosting platforms are prohibited from knowingly making available a GenAI system that does not place latent disclosures in AI-generated content (e.g., image, video, or audio content), as required under the California AI Transparency Act. This obligation goes into effect January 1, 2027.
- Capture device manufacturers. For any capture devices sold within California on or after January 1, 2028, manufacturers are required to (1) provide users the option to include a latent disclosure in captured content (e.g., photos, videos, or audio content) that conveys certain data, like the name of the manufacturer and time and date of creation or alteration; and (2) embed latent disclosures in the content captured by the device by default, to the extent both are technically feasible.
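The most widely adopted provenance specification today is C2PA (the standard behind “Content Credentials”), which for JPEG images carries its manifest in APP11 (JUMBF) marker segments. As a purely illustrative sketch of the kind of detection the statute contemplates, the standard-library-only check below scans a JPEG’s marker segments for an APP11 payload referencing the C2PA manifest label. A production system would instead parse and cryptographically validate the full manifest with an official C2PA SDK; the function name and simplifications here are our own assumptions:

```python
import struct

APP11 = 0xEB  # JPEG APP11 marker; C2PA embeds its JUMBF manifest boxes here
SOS = 0xDA    # start-of-scan marker; entropy-coded image data follows

def has_embedded_provenance(jpeg_path: str) -> bool:
    """Very rough check: does this JPEG carry an APP11 (JUMBF) segment
    mentioning the C2PA manifest label? A real platform would parse and
    cryptographically validate the manifest rather than pattern-match.
    Simplified: does not handle 0xFF fill bytes or multi-segment manifests."""
    with open(jpeg_path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # missing SOI marker: not a JPEG
        return False
    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            return False  # malformed stream; stop scanning
        marker = data[pos + 1]
        if marker == SOS:
            break  # marker segments end here
        (length,) = struct.unpack(">H", data[pos + 2 : pos + 4])  # includes its own 2 bytes
        if marker == APP11 and b"c2pa" in data[pos + 4 : pos + 2 + length]:
            return True
        pos += 2 + length
    return False

# Hypothetical usage:
# print(has_embedded_provenance("uploaded_image.jpg"))
```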
Governor Newsom expressed misgivings about the law in his signing statement, noting that while AB 853 will advance trust in the use of AI, it has potential implementation challenges, particularly pertaining to user privacy. Governor Newsom suggested legislators enact follow-up legislation in 2026 before the law takes effect to address such issues.
Violations of the provisions added in this amendment, like other violations of the California AI Transparency Act, are enforceable by the California attorney general and local prosecutors and are subject to civil penalties of $5,000 per violation. Each day that a provider is in violation is deemed a discrete violation.
California AI Bills Vetoed by the Governor
AB 1064: Leading Ethical AI Development (LEAD) for Kids Act
AB 1064 was vetoed on October 13, 2025. If it had been signed, it would have prohibited “companion chatbots” from engaging in certain harmful interactions with minors (children under 18). It differed from SB 243 in several ways: although it defined companion chatbots more narrowly than SB 243, it would have required operators to prohibit minors from accessing companion chatbots unless the chatbot is “not foreseeably capable” of:
- Encouraging self-harm or harm to others, suicidal ideation, violence, drug or alcohol consumption, disordered eating, or illegal activity
- Providing unsupervised mental health advice or discouraging the minor from seeking help from a qualified professional or appropriate adult
- Engaging in erotic or sexually explicit interactions with the minor
- “Prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or the child’s safety”
- “Optimizing engagement in a manner that supersedes the companion chatbot’s [aforementioned] required safety guardrails”
Governor Newsom explained he vetoed the LEAD Act but signed SB 243 because the LEAD Act’s restrictions risked a total ban on chatbot use by minors, instead of allowing adolescents to “learn how to safely interact with AI systems,” and several of the LEAD Act’s key provisions (protecting minors from harm and requiring disclosures) would be implemented through SB 243.
SB 7: Automated Decision Systems in Employment
SB 7 was vetoed on October 13, 2025. If it had been signed, it would have imposed notice, transparency, and fairness requirements on employers using AI-driven automated decision systems (ADS). For example, SB 7 would have (1) required employers to give 30 days’ written notice before deploying an ADS affecting employment decisions; (2) required employers to give post-use written notice when relying “primarily” on an ADS in discipline, termination, or deactivation decisions; (3) granted workers the right to access data collected and used in ADS decisions; (4) prohibited sole reliance on an ADS for negative employment actions (human review would have been required for discipline, termination, or deactivation decisions); and (5) banned retaliation against workers for exercising these rights. SB 7 would have applied to both employees and independent contractors.
Governor Newsom’s veto message emphasized that while he shares concerns regarding the use of ADS by employers, the bill imposed “unfocused notification requirements” and “overly broad restrictions.” He suggested that legislators assess the efficacy of the forthcoming California Privacy Protection Agency regulations on the use of ADS in employment decisions before enacting new legislation in this space.
SB 11: Protections Against Misuse of Digital Replicas
SB 11 was vetoed on October 13, 2025. If it had been signed, it would have required AI providers that enable users to create digital replicas to display consumer warnings about the permissible use of digital replica technology (“Unlawful use of this technology to depict another person without prior consent may result in civil or criminal liability for the user.”). SB 11 also would have clarified that existing California false impersonation law applies with equal force to digital replicas. In particular, the bill would have amended California Civil Code § 3344 to clarify that the use of a digital replica of a person’s voice or likeness without the person’s prior consent is likewise prohibited. The bill also would have removed the rebuttable presumption that an employee’s likeness appearing incidentally in an employer’s advertisements is not a prohibited “knowing” unauthorized use of the employee’s likeness.
Governor Newsom’s veto message states that he vetoed the bill because while at times public disclosures can provide transparency and mitigate harm, “it is unclear whether a warning would be sufficient to dissuade wrongdoers from using AI to impersonate others without their consent.”
Implications
The AI-related laws recently enacted in California continue to reflect a focus on AI transparency, child safety, nonconsensual intimate imagery, the potential for consumer deception, and other potential harms. But these laws generally take a targeted approach toward AI regulation, rather than attempting to enact a comprehensive framework (such as one modeled on the EU AI Act). Moreover, these laws seek to mitigate the risk of potential harm primarily through requiring disclosures to users and mandating internal governance frameworks, rather than outright prohibitions.
Governor Newsom’s vetoes also indicate a willingness to send the legislature back to the drawing board when he perceives a law to be vague and overly broad, unlikely to achieve its intended purpose, or duplicative of existing laws—even if the governor supports the law’s goals.