Executive Summary: On October 30, 2023, President Biden signed Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence – the most comprehensive federal AI policy action to date. The order establishes new safety and security requirements for AI developers, mandatory reporting for foundation models, sector-specific guidance across eight federal agencies, and protections against AI-enabled discrimination. While not creating new laws, it directs federal agencies to use existing authorities to regulate AI, affecting organizations across healthcare, finance, employment, housing, and critical infrastructure. Key deadlines began January 2024 with ongoing requirements through 2025.
::callout{type="info" title="Key Policy Framework"} Executive Order 14110 takes a risk-based approach focusing on:
- Foundation model safety reporting (models trained on >10^26 FLOPs)
- Sector-specific guidance from federal agencies
- Civil rights protections against algorithmic discrimination
- Critical infrastructure security standards
- Federal government AI procurement and deployment rules ::
Understanding Executive Order 14110
What It Is (and Isn't)
The Executive Order is not legislation – it doesn't create new laws or criminal penalties. Instead, it:
- Directs federal agencies to use existing regulatory authority to oversee AI
- Establishes reporting requirements for AI developers under Defense Production Act authority
- Creates standards and guidelines for federal AI use
- Coordinates policy across 50+ federal agencies
- Sets expectations for voluntary industry compliance
Who It Affects
Directly Subject to Requirements:
- Foundation model developers (models trained using >10^26 FLOPs, or >10^23 FLOPs for models trained primarily on biological sequence data)
- Federal contractors and grant recipients
- Entities in regulated sectors (healthcare, finance, housing, employment)
- Critical infrastructure operators
Indirectly Affected:
- Any organization deploying AI systems
- Companies in federal supply chains
- Businesses subject to federal agency oversight (FTC, EEOC, HHS, CFPB, HUD, DOT, DOE, CISA, etc.)
Key Requirements and Timelines
Foundation Model Reporting (Defense Production Act)
Threshold: Models trained using >10^26 floating-point operations (FLOPs) or >10^23 FLOPs primarily using biological sequence data.
Required Reports to Department of Commerce:
- Training run notifications and red-team safety test results
- Cybersecurity measures and physical security of model weights
- Ownership and possession details
- Measures to prevent misuse
Timeline: Reporting requirements took effect in late January 2024 (90 days after the EO).
Affected Companies: Primarily large AI labs (e.g., OpenAI, Anthropic, Google DeepMind, Meta) and any developer crossing the compute thresholds.
::statistic{value="10^26" label="FLOPs Threshold" description="Computing power threshold triggering mandatory federal reporting for AI foundation models"} ::
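For developers unsure whether a training run approaches the reporting threshold, the widely used approximation FLOPs ≈ 6 × parameters × training tokens gives a first-order estimate for dense transformer training. The sketch below applies that heuristic; the 6ND approximation and the variable names are illustrative assumptions for rough sizing, not the accounting methodology the EO or the Department of Commerce prescribes.

```python
# Back-of-the-envelope check against the EO 14110 reporting thresholds.
# Uses the common 6 * N * D approximation for dense transformer training
# compute; actual regulatory accounting may differ.

GENERAL_THRESHOLD_FLOPS = 1e26   # general-purpose model threshold
BIO_THRESHOLD_FLOPS = 1e23       # threshold for models trained primarily
                                 # on biological sequence data

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute as 6 * parameters * training tokens."""
    return 6.0 * params * tokens

def may_trigger_reporting(params: float, tokens: float, bio: bool = False) -> bool:
    threshold = BIO_THRESHOLD_FLOPS if bio else GENERAL_THRESHOLD_FLOPS
    return estimated_training_flops(params, tokens) > threshold

# Example: a hypothetical 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)   # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs -> reportable: {may_trigger_reporting(70e9, 15e12)}")
```

On this estimate, a 70B-parameter model trained on 15T tokens lands around 6.3×10^24 FLOPs, roughly an order of magnitude below the general threshold, which is why only the largest frontier training runs are expected to trigger reporting today.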
Sector-Specific Agency Actions
Department of Health and Human Services (HHS)
- AI safety program for healthcare
- Guidance on predictive algorithms in healthcare delivery
- Timeline: Initial guidance by April 2024
Department of Housing and Urban Development (HUD)
- Guidance on algorithmic discrimination in housing
- Fair Housing Act compliance for AI tools
- Timeline: Initial guidance by April 2024
Department of Labor (DOL) / EEOC
- Best practices for AI in employment decisions
- Guidance on Title VII compliance for hiring algorithms
- Timeline: Initial guidance by April 2024
Consumer Financial Protection Bureau (CFPB)
- Guidance on AI in lending and credit decisions
- ECOA and Fair Credit Reporting Act compliance
- Timeline: Ongoing enforcement actions
Federal Trade Commission (FTC)
- Enforcement against deceptive AI claims
- Guidance on algorithmic discrimination
- Consumer protection enforcement
- Timeline: Ongoing (FTC already active in AI enforcement)
Department of Transportation (DOT)
- AI safety framework for transportation systems
- Autonomous vehicle guidance
- Timeline: Framework by April 2024
Department of Energy (DOE)
- AI safety at critical energy infrastructure
- Nuclear facility AI security
- Timeline: Guidelines by July 2024
Cybersecurity and Infrastructure Security Agency (CISA)
- AI security guidelines for critical infrastructure
- Vulnerability disclosure for AI systems
- Timeline: Initial framework by January 2024
Civil Rights and Algorithmic Discrimination
The EO explicitly addresses AI-enabled discrimination.
Department of Justice (DOJ) Requirements:
- Best practices for investigating algorithmic discrimination
- Coordination with civil rights offices across agencies
- Timeline: Framework by February 2024
Protected Characteristics: Race, color, ethnicity, sex, religion, age, disability, veteran status, genetic information, national origin.
Covered Decisions: Employment, housing, credit, healthcare, education, criminal justice.
Federal Government AI Use
Office of Management and Budget (OMB) – Memorandum M-24-10 (March 2024):
Requirements for Federal Agencies:
- AI governance structures and Chief AI Officers
- Impact assessments for rights-impacting and safety-impacting AI
- Minimum practices: continuous monitoring, human review, opt-out mechanisms
- Annual AI inventory reporting
- Compliance deadline: December 2024 for rights-impacting AI
What This Means for Contractors:
- Federal contractors must meet agency AI requirements
- Documentation and transparency obligations
- Bias testing and monitoring requirements
- Human oversight and contestability mechanisms
::keyInsight{title="Federal Procurement Impact"} If you sell to federal agencies, you'll need to demonstrate compliance with agency-specific AI requirements. This includes impact assessments, continuous monitoring, human review processes, and bias testing. The December 2024 deadline for rights-impacting AI affects contractors immediately. ::
Practical Implications by Industry
Healthcare
What's Changing:
- HHS establishing an AI safety program
- Guidance on clinical decision support algorithms
- Requirements for transparency in diagnostic AI
Action Items:
- Review clinical algorithms for bias and accuracy
- Implement monitoring for AI diagnostic tools
- Prepare for an FDA-style oversight framework
- Document clinical validation processes
Financial Services
What's Changing:
- CFPB guidance on AI in lending
- FTC enforcement on algorithmic discrimination
- Enhanced transparency requirements
Action Items:
- Conduct adverse impact analysis for credit algorithms
- Implement adverse action notice procedures for AI decisions
- Document ECOA and FCRA compliance
- Prepare for CFPB examinations focused on AI
Employment
What's Changing:
- DOL/EEOC best practices for hiring algorithms
- Enhanced scrutiny of automated employment decisions
- Title VII compliance requirements
Action Items:
- Conduct bias audits on hiring and promotion algorithms (similar to NYC Local Law 144; a screening sketch follows this list)
- Implement human review for employment decisions
- Document job-relatedness of selection criteria
- Prepare for EEOC investigations
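A common starting point for such audits is the EEOC's four-fifths rule: compare each group's selection rate to the highest group's rate and flag impact ratios below 0.8. The sketch below illustrates only that arithmetic, on hypothetical counts; a defensible audit also requires statistical significance testing and job-relatedness analysis.

```python
# Minimal adverse-impact screen using the four-fifths (80%) rule.
# Hypothetical counts; not a complete bias-audit methodology.

selections = {                   # group -> (selected, applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
benchmark = max(rates.values())  # highest selection rate across groups

for group, rate in rates.items():
    ratio = rate / benchmark     # impact ratio vs. best-performing group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact_ratio={ratio:.2f} [{flag}]")
```

Here group_b's impact ratio is 0.30 / 0.48 ≈ 0.63, below the 0.8 line, which would prompt further investigation rather than an automatic finding of discrimination.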
Housing
What's Changing:
- HUD guidance on algorithmic discrimination
- Fair Housing Act enforcement
- Tenant screening algorithm scrutiny
Action Items:
- Review tenant screening algorithms for disparate impact
- Document Fair Housing Act compliance
- Implement appeal mechanisms for automated denials
- Monitor algorithms for discriminatory patterns
Critical Infrastructure
What's Changing:
- CISA security guidelines
- Sector-specific AI security requirements
- Vulnerability disclosure frameworks
Action Items:
- Assess AI systems in critical operations
- Implement security controls for AI models and data
- Establish a vulnerability disclosure process
- Coordinate with sector-specific agencies (DOE, DOT, DHS and others)
Compliance Strategy
Phase 1: Assessment (Immediate)
Inventory AI Systems (a record sketch follows this list):
- Identify all AI/ML systems in use
- Classify by risk level and use case
- Determine regulatory exposure by sector
- Identify foundation model dependencies
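One lightweight way to make the inventory actionable is a structured record per system that captures the fields later phases depend on: risk tier, regulators, and rights impact. The schema below is an illustrative assumption, not an official OMB or agency format.

```python
# Illustrative AI-system inventory record; field names are assumptions,
# not a prescribed federal schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    use_case: str                        # e.g., "resume screening"
    risk_tier: str                       # e.g., "rights-impacting", "low"
    regulators: list[str] = field(default_factory=list)   # e.g., ["EEOC"]
    foundation_model_dependency: str | None = None        # vendor/model
    rights_impacting: bool = False       # triggers M-24-10 minimum practices

inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        use_case="candidate shortlisting",
        risk_tier="rights-impacting",
        regulators=["EEOC", "DOL"],
        foundation_model_dependency="third-party LLM API",
        rights_impacting=True,
    ),
]

# Surface the systems that need impact assessments first.
for rec in inventory:
    if rec.rights_impacting:
        print(f"Impact assessment required: {rec.name} ({rec.use_case})")
```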
Regulatory Mapping:
- Which federal agencies regulate your industry?
- Which existing laws apply (Title VII, FHA, ECOA, FCRA, HIPAA, etc.)?
- Are you a federal contractor or grant recipient?
- Do you operate critical infrastructure?
Gap Analysis:
- Compare current practices to OMB M-24-10 minimum practices
- Identify missing documentation
- Assess bias testing capabilities
- Review governance structures
Phase 2: Documentation (Q1–Q2 2024)
Create Documentation:
- AI system inventory with use cases and risk levels
- Impact assessments for rights-impacting AI
- Bias testing methodologies and results
- Human oversight procedures
- Monitoring and performance metrics
- Incident response procedures
Governance:
- Designate AI governance roles (e.g., Chief AI Officer–like function)
- Establish a cross-functional AI review committee
- Create approval workflows for high-risk AI
- Implement change management for AI systems
Phase 3: Monitoring (Ongoing)
Continuous Monitoring:
- Track AI system performance metrics
- Monitor for bias and discriminatory patterns
- Log human review and override decisions (see the logging sketch after this list)
- Track complaints and appeals
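For the logging items above, an append-only record of each automated decision and any human override yields both the monitoring metrics and the evidence trail examiners typically ask for. The structure below is a minimal sketch; the field names are assumptions, not a regulatory schema.

```python
# Minimal append-only audit log for AI decisions and human overrides.
# Field names are illustrative, not a regulatory requirement.
import json
import time
import uuid

def log_decision(path: str, system: str, decision: str,
                 human_reviewed: bool, overridden: bool,
                 reason: str = "") -> None:
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "system": system,             # which AI system produced the decision
        "decision": decision,         # e.g., "deny", "approve"
        "human_reviewed": human_reviewed,
        "overridden": overridden,     # did a reviewer change the outcome?
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("audit.jsonl", "credit-model-v3", "deny",
             human_reviewed=True, overridden=True,
             reason="income verification corrected")
```

The JSONL format keeps each event immutable and easy to aggregate later into override rates, complaint volumes, and per-system bias metrics.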
Agency Engagement:
- Monitor agency guidance as it's released
- Participate in public comment periods
- Engage with industry associations
- Establish relationships with relevant agency staff
Phase 4: Adaptation (2025+)
Stay Current:
- Track new agency rules and guidance
- Update practices based on enforcement actions
- Monitor litigation and settlements
- Adapt to evolving best practices and standards (e.g., NIST AI RMF)
Enforcement and Penalties
How Enforcement Works
The Executive Order itself doesn't create penalties, but federal agencies can enforce compliance under their existing authorities.
Civil Rights Violations:
- Title VII: EEOC can sue for employment discrimination (with significant compensatory and punitive damages exposure)
- Fair Housing Act: HUD/DOJ can seek penalties, including substantial civil penalties per violation
- ECOA: CFPB can seek civil penalties (including large per-day penalties) for credit discrimination
Consumer Protection:
- FTC Act Section 5: FTC can bring enforcement actions against deceptive or unfair practices (civil penalties generally attach to rule or order violations)
- State consumer protection laws: Additional liability depending on jurisdiction
Healthcare:
- HIPAA: HHS can impose penalties up to tens of thousands of dollars per violation (with annual caps)
- FDA: Enforcement for AI-enabled medical devices and software
Federal Contracts:
- Contract termination
- Suspension and debarment
- False Claims Act liability (including treble damages)
Enforcement Trends
Active Enforcement Areas:
- FTC actions on algorithmic discrimination and deceptive AI claims
- EEOC investigations of hiring algorithms
- CFPB examinations of credit decisioning AI
- HUD investigations of tenant screening algorithms
What Triggers Enforcement:
- Consumer or employee complaints about AI decisions
- Disparate impact identified in agency audits
- Data breaches exposing AI vulnerabilities
- Media coverage or whistleblower reports
- Routine examinations finding AI issues
::callout{type="warning" title="Enforcement Priority Areas"} Federal agencies are prioritizing enforcement in:
- Employment decisions (EEOC focus)
- Credit and lending (CFPB focus)
- Housing (HUD focus)
- Deceptive AI marketing claims (FTC focus)
- Healthcare algorithms (HHS/FDA focus) ::
Relationship to State and International AI Laws
Federal vs. State Authority
The Executive Order does not preempt state laws:
- States can pass (and are passing) their own AI regulations
- Organizations must comply with both federal and state requirements
- Where they conflict, the more protective law typically applies in practice
Key State Laws:
- California CPRA (automated decision-making opt-outs)
- Colorado CPA (algorithm impact assessments)
- NYC Local Law 144 (employment bias audits)
- Illinois BIPA and AI Video Interview Act
- Virginia VCDPA (automated decision profiling)
International Comparison
EU AI Act:
- Comprehensive horizontal regulation with specific prohibited uses
- Risk-based classification with detailed obligations
- Significant penalties based on global revenue
- More prescriptive than the US Executive Order
US Executive Order:
- Uses existing agency authority rather than new legislation
- Sector-specific approach through multiple agencies
- Voluntary guidelines with enforcement through existing laws
- More flexible but potentially less predictable
For Multinational Companies:
- EU AI Act likely sets a global standard ("Brussels Effect")
- US requirements may be less stringent but enforced through multiple agencies
- Harmonizing to the highest common denominator is often the most efficient strategy
Preparing for Future Federal AI Legislation
The Executive Order is likely a precursor to comprehensive AI legislation.
Pending Congressional Activity:
- Multiple AI bills introduced in the 118th Congress
- Bipartisan AI working groups in both chambers
- Likely focus areas: foundation model safety, algorithmic discrimination, transparency, and accountability
What Companies Should Expect:
- Codification of EO requirements into law
- Mandatory rather than voluntary compliance
- Specific penalties for non-compliance
- Possible federal AI regulator or expanded agency authority
- Potential preemption of some state laws (still uncertain)
Strategic Positioning:
- Treat EO requirements as a floor, not a ceiling
- Build compliance infrastructure that can scale with new rules
- Engage in policy discussions and comment periods
- Monitor legislative developments closely
- Use voluntary frameworks (e.g., NIST AI RMF) as practical safe harbors
Key Takeaways
- Executive Order 14110 is not legislation but directs federal agencies to regulate AI using existing authority across civil rights, consumer protection, healthcare, financial services, employment, and housing laws.
- Foundation model developers face mandatory reporting under the Defense Production Act for models trained on >10^26 FLOPs (or >10^23 FLOPs for certain biological models), including safety testing, cybersecurity measures, and misuse prevention.
- Sector-specific guidance from multiple federal agencies creates tailored requirements for healthcare, finance, employment, housing, transportation, energy, and critical infrastructure.
- Federal contractors must comply with OMB M-24-10 requiring AI governance, impact assessments, continuous monitoring, human review, and opt-out mechanisms by December 2024 for rights-impacting AI.
- Enforcement happens through existing laws – agencies like FTC, EEOC, CFPB, HUD, and HHS use Title VII, the Fair Housing Act, ECOA, FCRA, the FTC Act, and HIPAA with penalties ranging from contract termination to substantial civil penalties.
- The EO does not preempt state laws – organizations must comply with both federal requirements and state AI regulations, creating a complex multi-jurisdictional compliance landscape.
- This is the beginning, not the end – expect comprehensive federal AI legislation to codify and expand these requirements, making early compliance a strategic advantage.
Frequently Asked Questions
Does the Executive Order create criminal penalties for AI misuse?
No. The Executive Order directs agencies to use existing regulatory authority but does not create new criminal offenses. However, existing federal criminal laws (such as computer fraud, wire fraud, and certain civil rights violations) can apply to AI misuse.
Do I need to report my AI systems to the federal government?
Only foundation model developers meeting the computational thresholds (>10^26 FLOPs, or >10^23 FLOPs for certain biological models) must report to the Department of Commerce under Defense Production Act authority. However, federal contractors and regulated entities must maintain AI inventories for their oversight agencies, and some states (like Colorado) require impact assessments.
How does this affect AI I purchase from vendors?
You remain responsible for compliance even when using third-party AI. You must conduct due diligence on vendor AI systems, ensure they meet regulatory requirements for your use case, and implement monitoring and human oversight. Vendor contracts should include compliance warranties, documentation obligations, and audit rights.
What if my AI system was deployed before the Executive Order?
The EO applies to existing AI systems, not just new deployments. You should assess legacy systems for compliance with agency guidance and implement required controls (monitoring, human review, bias testing) regardless of deployment date. Some agency deadlines (like OMB M-24-10's December 2024 deadline) apply to all systems in scope.
Can I rely on voluntary frameworks like NIST AI RMF for compliance?
Voluntary frameworks like the NIST AI Risk Management Framework provide a strong foundation for compliance and may serve as practical safe harbors in enforcement actions, but they don't guarantee compliance with specific legal requirements. Use NIST AI RMF as a starting point, then ensure you meet sector-specific agency guidance and applicable laws (Title VII, Fair Housing Act, ECOA, etc.).
How does the Executive Order interact with the EU AI Act?
They are separate regulatory regimes. US organizations operating in the EU must comply with both. The EU AI Act is more comprehensive and prescriptive, while the US EO works through existing sector-specific laws. For multinational companies, complying with the stricter EU requirements often satisfies many US expectations as well, though sector-specific US agency guidance may impose additional obligations.
What should I do first to prepare for compliance?
Start with an AI inventory identifying all systems and their use cases. Classify systems by risk level (particularly rights-impacting AI affecting employment, housing, credit, healthcare, or education). Map which federal agencies regulate your industry and review their AI guidance. Implement the OMB M-24-10 minimum practices (governance, impact assessments, monitoring, human review) as these represent federal government expectations across sectors.
Citations
- Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023) – https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
- OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024) – https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Memorandum-on-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
- NIST AI Risk Management Framework (January 2023) – https://www.nist.gov/itl/ai-risk-management-framework
- Federal Trade Commission – AI and Algorithmic Tools (Enforcement and Guidance) – https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check
- Department of Commerce Bureau of Industry and Security – AI Diffusion Reporting Rule (April 2024) – https://www.bis.doc.gov/index.php/documents/about-bis/3435-ai-diffusion-reporting-requirements/file
