What is US AI Executive Order 14110?
October 2023 White House executive order establishing comprehensive federal AI strategy including safety standards for dual-use foundation models, NIST AI Risk Management Framework adoption, federal AI procurement guidelines, civil rights protections against algorithmic discrimination, and international AI governance coordination. Most significant US federal AI policy action to date.
Executive Order 14110 establishes the federal AI governance framework that shapes the procurement requirements, safety standards, and reporting obligations affecting any company selling AI services to government agencies. Mid-market companies targeting federal contracts must demonstrate NIST AI RMF alignment, with non-compliant vendors excluded from an addressable market exceeding $15 billion in government AI spending. The influence cascades beyond government contracting: enterprise customers increasingly adopt federal standards as their internal AI procurement baseline, making compliance a competitive prerequisite.
Key Provisions
- Mandatory safety testing and reporting for large AI models (>10^26 FLOPs; a rough compute estimate is sketched after this list)
- Federal agency AI use transparency and impact assessment requirements
- NIST standards development for AI safety, security, and trustworthiness
- Anti-discrimination safeguards in housing, employment, and criminal justice AI
- International cooperation on AI governance and export controls
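To gauge whether a model approaches the reporting threshold, a common back-of-the-envelope estimate is training compute ≈ 6 × parameters × training tokens. This heuristic is an industry rule of thumb, not language from the order itself (which counts total integer or floating-point operations used in training), and the model sizes below are illustrative:

```python
# Rough screen against the EO 14110 dual-use foundation model reporting
# threshold (more than 1e26 training operations). The 6 * N * D estimate
# is a common industry heuristic, not part of the order itself.

EO_REPORTING_THRESHOLD_OPS = 1e26  # threshold named in the executive order

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Approximate total training operations as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def likely_requires_reporting(n_params: float, n_tokens: float) -> bool:
    return estimated_training_ops(n_params, n_tokens) > EO_REPORTING_THRESHOLD_OPS

# Illustrative example: a 70B-parameter model trained on 15T tokens
ops = estimated_training_ops(70e9, 15e12)  # ~6.3e24, well below the threshold
print(f"{ops:.1e} ops -> reporting likely required: "
      f"{likely_requires_reporting(70e9, 15e12)}")
```

Models anywhere near the boundary warrant a careful count of actual training operations rather than this heuristic.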
Practical Steps
- Track implementation timelines for each federal agency's specific AI requirements, since deadlines cascade over 12-24 months with different compliance milestones affecting different industry sectors.
- Assess whether your AI models meet the dual-use foundation model reporting thresholds based on compute used during training, which triggers NIST safety standard compliance requirements.
- Monitor the executive order's evolving enforcement posture as administration priorities shift, since implementation guidance and funding allocations continue developing beyond initial signing.
- Document your AI safety testing procedures proactively, since federal procurement increasingly requires vendors to demonstrate alignment with NIST AI Risk Management Framework principles.
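As a starting point for the documentation habit described above, here is a minimal sketch of an evidence record organized around the four NIST AI RMF 1.0 core functions (Govern, Map, Measure, Manage). The field names, system name, and example entries are illustrative assumptions, not a prescribed federal schema:

```python
# Minimal evidence record keyed to the four NIST AI RMF 1.0 core functions.
# Structure and example values are illustrative, not a mandated format.
from dataclasses import dataclass, field

@dataclass
class RMFEvidenceRecord:
    model_name: str
    govern: list[str] = field(default_factory=list)   # policies, accountability
    map: list[str] = field(default_factory=list)      # context, intended use, risk mapping
    measure: list[str] = field(default_factory=list)  # test results, metrics, audits
    manage: list[str] = field(default_factory=list)   # mitigations, monitoring, response

record = RMFEvidenceRecord(
    model_name="resume-screening-v2",  # hypothetical system
    govern=["AI use policy v1.3", "named model risk owner"],
    map=["intended-use statement", "pre-deployment impact assessment"],
    measure=["bias audit report", "red-team findings 2024-Q4"],
    manage=["rollback procedure", "quarterly monitoring review"],
)
print(record.model_name, "- measure artifacts:", record.measure)
```

Keeping evidence sorted by RMF function makes it straightforward to assemble when a federal buyer asks for proof of alignment.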
Common Questions
How does this regulation apply to our AI deployment?
Application depends on whether you sell AI services to federal agencies, whether your models exceed the dual-use foundation model compute threshold, and whether your systems operate in domains the order singles out, such as housing, employment, or criminal justice. Consult legal counsel for guidance specific to your deployment; a simple screening sketch follows.
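One rough way to operationalize that screening is shown below. The factors mirror this article (federal sales, the compute threshold, and the regulated domains named in the order); the function name, cutoffs, and domain list are assumptions for the sketch, not legal criteria:

```python
# Illustrative applicability screen; factors mirror the provisions above.
# Not legal advice: cutoffs and domain list are assumptions for the sketch.

REGULATED_DOMAINS = {"housing", "employment", "criminal_justice"}
COMPUTE_THRESHOLD_OPS = 1e26

def eo_14110_touchpoints(sells_to_federal: bool,
                         training_ops: float,
                         domains: set[str]) -> list[str]:
    """Return the EO 14110 areas that likely apply to a deployment."""
    hits = []
    if sells_to_federal:
        hits.append("federal procurement: demonstrate NIST AI RMF alignment")
    if training_ops > COMPUTE_THRESHOLD_OPS:
        hits.append("dual-use foundation model: safety testing and reporting")
    if domains & REGULATED_DOMAINS:
        hits.append("anti-discrimination safeguards in regulated domains")
    return hits

# Example: a vendor with a small model selling HR screening to an agency
print(eo_14110_touchpoints(True, 2e24, {"employment"}))
```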
What are the compliance deadlines and penalties?
Deadlines cascade over 12-24 months and vary by agency and AI system type. For vendors, non-compliance primarily means exclusion from federal procurement; for covered foundation models, failure to meet reporting obligations carries its own enforcement risk.
More Questions
How can we stay compliant as requirements evolve?
Implement robust governance frameworks, regular audits, and thorough documentation practices, and stay current on regulatory changes through expert advisory.
References
- NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology, 2023.
- Stanford Institute for Human-Centered AI, AI Index Report 2025, 2025.
Related Terms
- AI Regulation: The laws, rules, standards, and government policies that govern the development, deployment, and use of artificial intelligence systems. It encompasses mandatory legal requirements, voluntary guidelines, industry standards, and regulatory frameworks designed to manage AI risks while enabling innovation and economic benefit.
- High-Risk AI Systems (EU AI Act): AI systems listed in Annex III of the EU AI Act requiring strict compliance, including biometric identification, critical infrastructure, education/employment systems, law enforcement, migration/border control, and justice administration. These must meet requirements for data governance, documentation, transparency, human oversight, and accuracy before market placement.
- Prohibited AI Practices (EU AI Act): AI applications banned under EU AI Act Article 5, including subliminal manipulation, exploitation of vulnerabilities, social scoring by authorities, real-time remote biometric identification in public spaces (with narrow exceptions), and emotion recognition in workplace/education settings. Violations are subject to the Act's maximum penalties.
- European AI Office: Dedicated enforcement body within the European Commission responsible for supervising general-purpose AI models, coordinating national AI authorities, maintaining the AI Pact, and ensuring consistent AI Act implementation across member states. Established in 2024 with powers to conduct investigations and impose penalties.
- General-Purpose AI (GPAI) Obligations: Specific EU AI Act requirements for foundation models and general-purpose AI systems, including technical documentation, copyright compliance, detailed training content summaries, and additional obligations for systemic-risk models (>10^25 FLOPs). Providers must publish model cards and cooperate with evaluations.
Need help implementing US AI Executive Order 14110?
Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how US AI Executive Order 14110 fits into your AI roadmap.