What Is California SB 53?
California Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, was signed into law on September 29, 2025. It establishes transparency and safety requirements for developers of the largest and most capable AI models, often called "frontier" models.
SB 53 is notable for what it is and what it replaced. Its predecessor, SB 1047, would have imposed strict pre-deployment safety testing requirements and given the California Attorney General authority to seek injunctions against AI companies; after significant opposition from the tech industry, Governor Newsom vetoed it in 2024. SB 53, the successor bill, is substantially narrower, focusing on transparency, incident reporting, and whistleblower protections rather than pre-deployment restrictions.
Who Must Comply
SB 53 applies to developers of covered AI models. A developer is any person or entity that creates, trains, or substantially modifies an AI model meeting the coverage thresholds.
Coverage Thresholds
The law applies to AI models that meet either of the following tests (a minimal coverage check is sketched below):
- Total training compute exceeding 10^26 integer or floating-point operations (a threshold subject to annual review), OR
- Derivation from a covered model through fine-tuning or other substantial modifications
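Read as a rule, coverage is a simple disjunction over those two tests. Below is a minimal sketch in Python; the class, field, and threshold names are illustrative rather than taken from the statute, and the cumulative-compute accounting is an assumption about how a developer might track training runs.

```python
from dataclasses import dataclass

# Statutory trigger: more than 10^26 integer or floating-point operations
# of training compute (subject to annual review).
FRONTIER_COMPUTE_THRESHOLD_OPS = 1e26

@dataclass
class ModelProfile:
    name: str
    total_training_ops: float   # cumulative ops across pretraining and fine-tuning
    derived_from_covered: bool  # built on a covered model via fine-tuning, etc.

def is_covered(model: ModelProfile) -> bool:
    """Apply the two coverage tests described above."""
    return (
        model.total_training_ops > FRONTIER_COMPUTE_THRESHOLD_OPS
        or model.derived_from_covered
    )

print(is_covered(ModelProfile("example-70b", 8.4e24, False)))  # False
```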
Enhanced Obligations
Developers that, together with their affiliates, have annual gross revenues above $500 million face additional obligations beyond the base requirements.
Who Is Likely Covered
Based on current market conditions, the companies most likely covered include:
- OpenAI, Google/DeepMind, Anthropic, Meta AI, xAI, Microsoft, Amazon
- Any startup or company training models above the compute threshold
- Companies that substantially modify covered models (fine-tuning at scale)
Who Is Likely NOT Covered
- Companies that use commercial AI APIs (GPT-4, Claude, Gemini) without modification
- Companies that fine-tune models below the coverage threshold
- Open-source developers working with smaller models
- Companies deploying pre-built AI products
Core Requirements
1. Safety and Security Protocol (SSP)
Every developer of a covered model must publish a Safety and Security Protocol before the model is made available. The SSP must include:
- Risk identification: Known and reasonably foreseeable risks associated with the model
- Risk assessment: Analysis of the severity and likelihood of identified risks
- Mitigation measures: Specific steps taken to reduce risks
- Ongoing monitoring: How the developer will monitor for emerging risks after deployment
- Incident response: Procedures for responding to safety incidents
The SSP must be publicly available, not just an internal document, and written in language clear enough for deployers and the public to understand the risks. One illustrative way to structure it is sketched below.
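The statute requires these five elements but does not prescribe a document format, so the following is only a sketch of one way a developer might model an SSP internally and block publication until every section has content; all names here are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class SafetyAndSecurityProtocol:
    """One possible internal representation of the five required elements."""
    risk_identification: str  # known and reasonably foreseeable risks
    risk_assessment: str      # severity and likelihood analysis
    mitigation_measures: str  # concrete steps taken to reduce risks
    ongoing_monitoring: str   # post-deployment monitoring plan
    incident_response: str    # procedures for responding to incidents

def missing_sections(ssp: SafetyAndSecurityProtocol) -> list[str]:
    """Return the names of required sections that are still empty."""
    return [f.name for f in fields(ssp) if not getattr(ssp, f.name).strip()]
```

A publication pipeline could refuse to post the SSP while `missing_sections` returns anything, which turns the statutory checklist into an enforceable gate.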
2. Safety Incident Reporting
Developers must report critical safety incidents to the California Office of Emergency Services (Cal OES). A critical safety incident is an event that:
- causes or is reasonably likely to cause significant harm to public health, safety, or security, AND
- is related to the capabilities or deployment of the covered AI model
The statute sets a 15-day window for reporting once the developer learns of an incident; detailed procedures are subject to guidance from Cal OES.
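The two-part definition is a plain conjunction, which makes the triage decision easy to encode. A sketch, assuming a monitoring pipeline that produces event records with these hypothetical fields:

```python
from dataclasses import dataclass

@dataclass
class SafetyEvent:
    description: str
    significant_harm_likely: bool  # prong 1: harm to public health, safety, or security
    model_related: bool            # prong 2: tied to the model's capabilities or deployment

def is_critical_safety_incident(event: SafetyEvent) -> bool:
    """Both statutory prongs must hold for an event to be reportable."""
    return event.significant_harm_likely and event.model_related
```

In practice the two booleans would come from human review rather than automation; the value of the function is forcing both prongs to be answered explicitly for every event.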
3. Whistleblower Protections
SB 53 includes robust whistleblower protections for employees and contractors:
- Developers may not retaliate against employees who report safety concerns
- Employees can report concerns internally or to government authorities
- Protections extend to contractors, temporary workers, and other individuals working on behalf of the developer
- Retaliation includes termination, demotion, suspension, threats, or any adverse employment action
This is one of the most significant provisions of the law. It creates a legal framework for AI safety researchers and employees to raise concerns without fear of retribution.
4. Enhanced Obligations for Large Developers ($500M+ Revenue)
Developers with annual gross revenues exceeding $500 million must also:
- Conduct pre-deployment safety evaluations including red-teaming and adversarial testing
- Engage independent third-party auditors to review their SSP and safety practices
- Publish annual transparency reports detailing model capabilities, known limitations, and safety measures
- Maintain detailed records of safety evaluations, incidents, and remediation efforts (one record-keeping pattern is sketched below)
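For the record-keeping duty, an append-only log is a common pattern: every evaluation, incident, or remediation event becomes one timestamped JSON line, which is simple to write and easy to hand to an auditor. A minimal sketch; the schema and file location are assumptions, not statutory requirements.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("safety_records.jsonl")  # illustrative location

def record_safety_event(kind: str, details: dict) -> None:
    """Append one timestamped safety record to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "kind": kind,  # e.g. "evaluation", "incident", "remediation"
        "details": details,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_safety_event("evaluation", {"suite": "red-team-v1", "findings": 3})
```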
Penalties
| Violation Type | Consequence |
|---|---|
| General violations | Up to $1 million per violation |
| Whistleblower retaliation | Additional penalties under California labor law |
| Pattern of violations | AG can seek injunctive relief |
The California Attorney General has exclusive enforcement authority. There is no private right of action.
How SB 53 Fits into California's AI Law Landscape
SB 53 is part of a broader package of AI laws California enacted in 2024-2025:
| Law | Focus |
|---|---|
| SB 53 | Frontier AI model transparency and safety |
| AB 2355 | AI-generated political advertisement disclosure |
| SB 926 | Criminalizes sexually explicit AI deepfakes |
| SB 981 | Social media reporting of AI identity theft |
| AB 2602 | Protects performers from unauthorized AI digital replicas |
| AB 2839 | Election deepfakes (blocked by federal court on First Amendment grounds) |
Together, these laws create a multi-layered regulatory framework for AI in California, though SB 53 is the only one focused specifically on frontier model developers.
How to Comply
For Covered Developers
Step 1: Determine If You're Covered
Estimate your model's total training compute, including any fine-tuning runs. If it exceeds 10^26 operations (or if your model is derived from a covered model), you must comply. A common back-of-envelope estimate is shown below.
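A widely used rule of thumb for dense transformer training puts compute at roughly 6 x parameters x training tokens. The sketch applies that approximation to the coverage test; treat it as a screening estimate only, since actual accounting should follow measured compute across all runs.

```python
FRONTIER_COMPUTE_THRESHOLD_OPS = 1e26

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Rough estimate for dense transformers: ~6 * N * D operations."""
    return 6.0 * n_params * n_tokens

# Example: a 700B-parameter model trained on 15T tokens.
ops = estimated_training_ops(7e11, 1.5e13)
print(f"{ops:.2e} ops; covered: {ops > FRONTIER_COMPUTE_THRESHOLD_OPS}")
# 6.30e+25 ops; covered: False
```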
Step 2: Develop Your Safety and Security Protocol
Create a comprehensive SSP that covers:
- Known risks of your model's capabilities
- Risks of misuse or unintended applications
- Mitigation measures you've implemented
- How you monitor for emerging risks
- Your incident response procedures
Step 3: Implement Whistleblower Protections
- Establish internal channels for employees to report safety concerns
- Train managers on anti-retaliation requirements
- Create clear procedures for investigating and addressing reported concerns
- Document your whistleblower protection policies
Step 4: Set Up Incident Reporting
- Define what constitutes a critical safety incident for your model
- Establish monitoring systems to detect incidents
- Create reporting procedures aligned with the Cal OES reporting mechanism
- Assign responsibility for incident reporting (a deadline-tracking sketch follows this list)
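Of these, the reporting deadline is the piece most worth automating. A minimal sketch that computes the filing deadline from the discovery date; the 15-day window is the statutory default and should be confirmed against current Cal OES guidance.

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # confirm against current Cal OES guidance

def reporting_deadline(discovered: date,
                       window_days: int = REPORTING_WINDOW_DAYS) -> date:
    """Date by which a critical safety incident report must be filed."""
    return discovered + timedelta(days=window_days)

print(reporting_deadline(date(2026, 3, 2)))  # 2026-03-17
```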
Step 5 (If $500M+ Revenue): Enhanced Obligations
- Engage independent auditors
- Conduct pre-deployment safety evaluations including red-teaming
- Prepare annual transparency reports
- Maintain comprehensive safety records
For Companies Using Covered Models
If you use commercial AI models from covered developers (through APIs or licensed products), SB 53's obligations fall primarily on the developer, not on you. However, you should:
- Read the developer's SSP to understand the risks of the model you're using
- Follow the developer's usage guidelines to avoid misuse scenarios
- Monitor for incidents related to your use of the model (one lightweight logging approach is sketched after this list)
- Understand liability boundaries between your organization and the model developer
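Even as a pure API consumer, lightweight usage logging makes it possible to notice, reconstruct, and escalate incidents tied to your own deployment. A provider-agnostic sketch; `call_model` is a stand-in for whatever client your vendor supplies.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_usage")

def logged_completion(call_model, prompt: str, **kwargs) -> str:
    """Wrap any provider's completion call with a minimal audit trail."""
    started = datetime.now(timezone.utc).isoformat()
    response = call_model(prompt, **kwargs)
    log.info(json.dumps({
        "started": started,
        "prompt_chars": len(prompt),      # log sizes, not content, for privacy
        "response_chars": len(response),
    }))
    return response

# Example with a stub in place of a real provider client:
print(logged_completion(lambda p, **kw: p.upper(), "hello"))
```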
The Broader Context
SB 53 represents a measured approach to frontier AI regulation. It was controversial during its passage: some argued it didn't go far enough, while others felt it would stifle innovation.
Key factors to watch:
- Federal preemption: The December 2025 Trump Executive Order on AI may lead to federal legislation that preempts parts of SB 53
- Compute threshold: as frontier training runs continue to scale, more models will cross the 10^26-operation line
- Implementation guidance: Cal OES and the California Attorney General are expected to clarify reporting requirements and other details
- Industry compliance: How major AI labs implement their SSPs will set de facto standards
Related Regulations
- EU AI Act (GPAI provisions): Similar obligations for general-purpose AI models, especially those with systemic risk
- Colorado AI Act: Broader scope covering deployment of high-risk AI, not just development
- NIST AI RMF: The federal framework that can serve as a foundation for SSP development
- Executive Order on AI (December 2025): Federal policy direction that may affect state-level enforcement
Frequently Asked Questions
Does SB 53 apply to companies that use AI products or APIs?
No. SB 53 applies to developers of frontier AI models, the companies that train and create them. If your company uses commercial AI APIs or products without substantially modifying the underlying model, the obligations fall on the model developer (OpenAI, Anthropic, Google, etc.), not on your company.
Which AI models does SB 53 cover?
SB 53 covers AI models trained with more than 10^26 integer or floating-point operations of compute, a threshold subject to annual review. As of 2026, this captures only the largest frontier models from major AI labs, though more models may qualify over time as training scales increase.
What happens if a developer fails to publish an SSP?
Failure to publish a Safety and Security Protocol is a violation of SB 53. The California Attorney General can impose penalties of up to $1 million per violation and seek injunctive relief to prevent the model from being deployed until the SSP is published.
Does SB 53 protect whistleblowers?
Yes. SB 53 includes robust whistleblower protections. Developers may not retaliate against employees, contractors, or other workers who report AI safety concerns internally or to government authorities. Retaliation includes termination, demotion, threats, or any adverse employment action. Violations of whistleblower protections carry additional penalties under California labor law.
How does SB 53 differ from SB 1047?
SB 53 is substantially narrower than its vetoed predecessor, SB 1047, which would have required pre-deployment safety testing for all covered models and given the AG authority to seek injunctions before deployment. SB 53 focuses on transparency (publish your safety framework), incident reporting (notify the state of safety incidents), and whistleblower protections. It does not impose pre-deployment restrictions except for very large developers ($500M+ revenue).
What counts as a critical safety incident?
A critical safety incident is an event related to a covered AI model's capabilities or deployment that causes or is reasonably likely to cause significant harm to public health, safety, or security. The exact scope will be further defined through guidance from the California Office of Emergency Services.
References
- Senate Bill 53, Transparency in Frontier Artificial Intelligence Act. California State Legislature (2025).
- California Enacts Frontier AI Transparency Requirements. Skadden, Arps, Slate, Meagher & Flom LLP (2025).
- NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (2023).
