California Enacts SB 53 Requiring Safety Plans and Incident Reporting for Frontier AI Developers, September 2025
- Crypto Fairy

- Mar 12
California Governor Gavin Newsom signed Senate Bill 53 (SB 53) into law on September 29, 2025, and the bill was chaptered as Chapter 138, Statutes of 2025. SB 53 took effect on January 1, 2026 and imposes obligations on developers of large-scale AI models that meet defined computational thresholds. The law added Chapter 22.7 (commencing with Section 22756) to Division 8 of the California Business and Professions Code.
Section 22756.1 of the Business and Professions Code defines a covered AI system as a model trained using more than 10^26 floating point operations. Section 22756.2 requires developers of covered AI systems to: (a) implement and maintain a written safety and security plan describing the developer's risk assessment process, safety objectives, and technical safeguards; (b) establish and publish a policy allowing employees and contractors to report safety concerns without retaliation; and (c) retain records of all training runs used to produce a covered AI system for a minimum of one year following completion of training. Section 22756.3 requires developers to report to the California Attorney General any safety incident in which a covered AI system causes, or materially contributes to, critical harm, defined to include attacks on critical infrastructure, creation of weapons of mass destruction, or cyberattacks causing damages exceeding $500 million.
The law applies to developers — companies or individuals who train covered AI systems — regardless of where they are incorporated, if they offer their systems in California. Cloud compute providers who make computational resources available to train covered AI systems must require their customers, by contract, to comply with SB 53 when the customer's intended use would produce a covered AI system. Deployers who fine-tune a covered AI system using more than 10^23 additional floating point operations face the same obligations as original developers for the fine-tuned model. The California Attorney General may bring civil enforcement actions; violations are subject to civil penalties.
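As a purely illustrative sketch (not legal advice, and not part of the statute or any official tooling), the two computational thresholds described above can be expressed as a simple check. The constant and function names here are the author's own illustration and do not appear in SB 53:

```python
# Illustrative sketch of SB 53's computational coverage thresholds as
# described in this article. Names are hypothetical, not statutory.

COVERED_TRAINING_FLOPS = 1e26   # coverage threshold for a trained model (Sec. 22756.1)
FINETUNE_FLOPS_TRIGGER = 1e23   # additional compute that gives a fine-tuner developer obligations


def is_covered_system(training_flops: float) -> bool:
    """A model trained using more than 10^26 FLOPs is a covered AI system."""
    return training_flops > COVERED_TRAINING_FLOPS


def finetune_assumes_developer_obligations(base_training_flops: float,
                                           additional_flops: float) -> bool:
    """A deployer fine-tuning a covered system with more than 10^23
    additional FLOPs faces the same obligations as the original developer.
    The fine-tune trigger only matters if the base model is itself covered."""
    return is_covered_system(base_training_flops) and additional_flops > FINETUNE_FLOPS_TRIGGER
```

Note that under this reading, fine-tuning a sub-threshold base model never triggers coverage by itself, regardless of the additional compute spent.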
Section 22756.5 exempts from the safety plan and incident reporting obligations any developer who: (a) trains a covered AI system exclusively for academic research or other non-commercial purposes and does not make the system publicly available; or (b) trains a covered AI system solely for use by a single entity within a closed internal environment not accessible to the public. The statute does not set penalty amounts per violation; the California Attorney General retains discretion over penalty amounts subject to applicable civil penalty provisions. The one-year training-run preservation obligation applies prospectively: it covers training runs completed on or after the law's effective date of January 1, 2026.
Source: California SB 53, Chapter 138, Statutes of 2025, codified at California Business and Professions Code sections 22756–22756.5 (eff. January 1, 2026). Full text available at leginfo.legislature.ca.gov (bill ID 202520260SB53). Confirmed March 12, 2026.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.


