AI Governance & Risk Management · Guide

US Executive Order on AI: What It Means for Business

January 28, 2026 · 14 min read · Michael Lansdowne Hauge
For: Legal/Compliance, CISO, CTO/CIO, CHRO, CFO, Head of Operations, IT Manager, Consultant, Board Member, CMO

Comprehensive analysis of Executive Order 14110 on Safe, Secure, and Trustworthy AI – requirements, timelines, and practical implications for organizations deploying AI systems.


Key Takeaways

  1. Executive Order 14110 directs existing federal agencies to regulate AI but does not itself create new laws or criminal penalties.
  2. Foundation model developers crossing the 10^26 FLOPs threshold face mandatory reporting to the Department of Commerce under the Defense Production Act.
  3. Sector-specific guidance from agencies like HHS, HUD, DOL/EEOC, CFPB, FTC, DOT, DOE, and CISA will shape AI requirements across healthcare, finance, employment, housing, transportation, energy, and critical infrastructure.
  4. Federal contractors must align with OMB M-24-10 by December 2024 for rights-impacting AI, including governance, impact assessments, monitoring, and human review.
  5. Enforcement will occur through existing laws such as Title VII, the Fair Housing Act, ECOA, FCRA, HIPAA, and the FTC Act, with potentially significant civil penalties and contract consequences.
  6. The EO does not preempt state AI and privacy laws, so organizations must manage overlapping federal, state, and international obligations.
  7. Building robust AI governance and risk management now positions organizations to adapt quickly to future federal AI legislation.

On October 30, 2023, President Biden signed Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, marking the most comprehensive federal AI policy action in US history. The order establishes new safety and security requirements for AI developers, mandates reporting for foundation models, directs sector-specific guidance from eight federal agencies, and introduces protections against AI-enabled discrimination. While the order does not create new laws, it directs federal agencies to leverage existing authorities to regulate AI, with implications that ripple across healthcare, finance, employment, housing, and critical infrastructure. Key compliance deadlines began in January 2024, with ongoing requirements extending through 2025 and beyond.

Understanding Executive Order 14110

What It Is (and Isn't)

A common misunderstanding is that the Executive Order functions like legislation. It does not. EO 14110 creates no new laws and no criminal penalties. Instead, it operates as a directive to the federal regulatory apparatus. It instructs federal agencies to use their existing regulatory authority to oversee AI development and deployment. It establishes reporting requirements for AI developers under the Defense Production Act. It creates standards and guidelines for federal AI use. It coordinates policy across more than 50 federal agencies. And it sets expectations for voluntary industry compliance. The practical effect is that agencies like the FTC, EEOC, and CFPB now have explicit presidential backing to scrutinize AI systems within their existing enforcement mandates.

Who It Affects

The order draws a clear line between organizations directly subject to its requirements and those indirectly affected. Foundation model developers building models trained on more than 10^26 floating-point operations (FLOPs), or more than 10^23 FLOPs for biological sequence data, face mandatory reporting obligations. Federal contractors and grant recipients must meet new AI governance standards. Entities operating in regulated sectors (healthcare, finance, housing, and employment) face enhanced scrutiny. Critical infrastructure operators are subject to new security guidelines.

The indirect effects are arguably broader. Any organization deploying AI systems, any company in a federal supply chain, and any business subject to oversight from agencies such as the FTC, EEOC, HHS, CFPB, HUD, DOT, DOE, or CISA should treat this order as a signal that the compliance landscape has fundamentally shifted.

Key Requirements and Timelines

Foundation Model Reporting (Defense Production Act)

The order's most technically specific provision targets foundation models. Any model trained using more than 10^26 FLOPs, or more than 10^23 FLOPs when primarily using biological sequence data, triggers mandatory reporting to the Department of Commerce. Developers must submit training run notifications and red-team safety test results, document cybersecurity measures and the physical security of model weights, disclose ownership and possession details, and describe measures taken to prevent misuse.

These reporting requirements took effect in January 2024, 90 days after the order's signing. The threshold primarily affects large AI laboratories such as OpenAI, Anthropic, Google DeepMind, and Meta, though any developer crossing the compute thresholds falls under the mandate. The Department of Commerce Bureau of Industry and Security issued formal reporting rules to operationalize these requirements.
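Whether a planned training run approaches these thresholds can be estimated with the widely used "6 × parameters × tokens" approximation for training compute. The sketch below uses that heuristic and illustrative model sizes; the order itself is written in terms of actual operations performed, not this estimate.

```python
# Rough check against the EO 14110 reporting thresholds using the common
# 6 * N * D FLOPs approximation (an estimation heuristic, not the order's text).

GENERAL_THRESHOLD_FLOPS = 1e26  # general-purpose models
BIO_THRESHOLD_FLOPS = 1e23      # models trained primarily on biological sequence data

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def requires_reporting(n_params: float, n_tokens: float, biological: bool = False) -> bool:
    """True if the estimated compute meets or exceeds the applicable threshold."""
    threshold = BIO_THRESHOLD_FLOPS if biological else GENERAL_THRESHOLD_FLOPS
    return estimated_training_flops(n_params, n_tokens) >= threshold

# Example: a 70B-parameter model trained on 15T tokens is ~6.3e24 FLOPs.
print(requires_reporting(70e9, 15e12))                   # False (below 1e26)
print(requires_reporting(70e9, 15e12, biological=True))  # True (above 1e23)
```

The gap between the two thresholds is the point: a model far below the general threshold can still trigger reporting if it is trained primarily on biological sequence data.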

Sector-Specific Agency Actions

The order delegates AI oversight to eight federal agencies, each responsible for developing guidance tailored to its domain. The Department of Health and Human Services is establishing an AI safety program for healthcare, including guidance on predictive algorithms in clinical delivery, with initial guidance issued by April 2024. The Department of Housing and Urban Development has produced guidance on algorithmic discrimination in housing, ensuring Fair Housing Act compliance for AI tools, with initial guidance released as early as January 2024.

The Department of Labor and EEOC have developed best practices for AI in employment decisions, particularly guidance on Title VII compliance for hiring algorithms. The Consumer Financial Protection Bureau has issued guidance on AI in lending and credit decisions, ensuring compliance with the Equal Credit Opportunity Act and Fair Credit Reporting Act, with ongoing enforcement actions already underway.

The Federal Trade Commission has been the most visibly active, pursuing enforcement against deceptive AI claims and algorithmic discrimination under its existing consumer protection authority. The Department of Transportation has developed an AI safety framework for transportation systems and autonomous vehicles. The Department of Energy has issued guidelines for AI safety at critical energy infrastructure, including nuclear facilities. And the Cybersecurity and Infrastructure Security Agency has released AI security guidelines for critical infrastructure, including vulnerability disclosure frameworks.

Civil Rights and Algorithmic Discrimination

The order addresses AI-enabled discrimination with particular urgency. The Department of Justice has developed best practices for investigating algorithmic discrimination, coordinating with civil rights offices across the federal government. Protected characteristics under the framework include race, color, ethnicity, sex, religion, age, disability, veteran status, genetic information, and national origin. Covered decisions span employment, housing, credit, healthcare, education, and criminal justice. The breadth of this scope means that virtually any consumer-facing or employee-facing AI system falls within the order's anti-discrimination framework.

Federal Government AI Use

The Office of Management and Budget's Memorandum M-24-10, issued in March 2024, translates the order's principles into concrete requirements for federal agencies. Agencies must establish AI governance structures and designate Chief AI Officers. They must conduct impact assessments for AI systems that affect rights or safety. Minimum practices include continuous monitoring, human review, and opt-out mechanisms, with annual AI inventory reporting required across the government. The compliance deadline for rights-impacting AI was December 2024.

For federal contractors, the implications are immediate and substantial. Organizations selling to federal agencies must demonstrate compliance with agency-specific AI requirements, including impact assessments, continuous monitoring, human review processes, and bias testing. Documentation and transparency obligations now flow through the procurement process, making AI governance a condition of doing business with the government.

Practical Implications by Industry

Healthcare

The healthcare sector faces a distinct set of pressures. HHS is building an AI safety program that will govern clinical decision support algorithms and require transparency in diagnostic AI systems. Healthcare organizations should review clinical algorithms for bias and accuracy, implement monitoring for AI diagnostic tools, prepare for an FDA-style oversight framework, and rigorously document clinical validation processes. The stakes are high: AI systems that influence treatment decisions carry both regulatory and patient safety risks that make proactive compliance essential.

Financial Services

Financial institutions find themselves at the intersection of multiple enforcement mandates. The CFPB's guidance on AI in lending, combined with the FTC's enforcement posture on algorithmic discrimination, creates a dual compliance burden. Organizations in this sector should conduct adverse impact analyses for credit algorithms, implement adverse action notice procedures for AI-driven decisions, document ECOA and FCRA compliance in detail, and prepare for CFPB examinations specifically focused on AI use in underwriting and credit decisioning.

Employment

Employment decisions represent one of the order's highest-priority enforcement areas. The DOL and EEOC have developed best practices for hiring algorithms, and enhanced scrutiny of automated employment decisions is already underway. Companies should conduct bias audits on hiring and promotion algorithms (following the model of NYC's Local Law 144), implement human review for consequential employment decisions, document the job-relatedness of selection criteria used by AI systems, and prepare for EEOC investigations. The pattern is clear: automated hiring tools that produce disparate outcomes will attract enforcement attention.
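A bias audit in the style of NYC Local Law 144 centers on impact ratios: each group's selection rate divided by the rate of the most-selected group. A minimal sketch, with illustrative group names and counts:

```python
# Disparate-impact check in the style of an NYC Local Law 144 bias audit.
# Group names and counts are illustrative, not from any real audit.

def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group's selection rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 48, "group_b": 24},
    applicants={"group_a": 120, "group_b": 100},
)
print(ratios)

# Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 flags
# potential adverse impact warranting further review.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # ['group_b']
```

Here group_a's selection rate is 40% and group_b's is 24%, giving group_b an impact ratio of 0.6 and putting it below the four-fifths line. The same calculation applies to the adverse impact analyses described for credit algorithms above.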

Housing

HUD's guidance on algorithmic discrimination in housing brings tenant screening algorithms under the Fair Housing Act's enforcement umbrella. Property management companies and housing providers should review tenant screening algorithms for disparate impact, document Fair Housing Act compliance with specificity, implement appeal mechanisms for automated denials, and establish ongoing monitoring for discriminatory patterns. Tenant screening is a particularly high-risk application because of the direct impact on individuals' access to housing.

Critical Infrastructure

Organizations operating critical infrastructure face security-focused requirements from CISA, supplemented by sector-specific mandates from agencies like the DOE and DOT. The practical steps include assessing AI systems embedded in critical operations, implementing security controls for AI models and training data, establishing vulnerability disclosure processes, and coordinating with the relevant sector-specific regulatory agencies.

Compliance Strategy

Phase 1: Assessment (Immediate)

The first compliance priority is visibility. Organizations should inventory all AI and machine learning systems in use, classify each by risk level and use case, determine regulatory exposure by sector, and identify dependencies on third-party foundation models.
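The inventory step above can be as simple as a structured record per system, so that risk-tier and regulator queries become trivial. A minimal sketch; the field names and the risk-tier labels are illustrative choices, not terms prescribed by the order:

```python
# Minimal Phase 1 inventory sketch: one record per AI system, queryable by
# risk tier and regulatory exposure. Field names and tiers are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystem:
    name: str
    use_case: str
    risk_tier: str  # e.g. "rights-impacting", "safety-impacting", "low"
    regulators: list = field(default_factory=list)
    third_party_model: Optional[str] = None  # dependency on an external foundation model

inventory = [
    AISystem("resume-screener", "hiring triage", "rights-impacting",
             regulators=["EEOC", "DOL"], third_party_model="vendor-llm"),
    AISystem("doc-summarizer", "internal productivity", "low"),
]

# Rights-impacting systems are the ones that drive OMB M-24-10-style
# minimum practices (impact assessments, monitoring, human review).
rights_impacting = [s.name for s in inventory if s.risk_tier == "rights-impacting"]
print(rights_impacting)  # ['resume-screener']
```

Even a spreadsheet with these columns satisfies the visibility goal; the point is that every downstream phase (gap analysis, documentation, monitoring) queries this inventory.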

Regulatory mapping follows naturally from the inventory. The key questions are straightforward: Which federal agencies regulate your industry? Which existing laws apply (Title VII, the Fair Housing Act, ECOA, FCRA, HIPAA, and others)? Are you a federal contractor or grant recipient? Do you operate critical infrastructure?

A gap analysis comparing current practices against OMB M-24-10 minimum practices will reveal where documentation is missing, where bias testing capabilities fall short, and where governance structures need strengthening.

Phase 2: Documentation (Q1-Q2 2024)

Documentation is the backbone of demonstrable compliance. Organizations should build an AI system inventory with use cases and risk levels, create impact assessments for rights-impacting AI, document bias testing methodologies and results, formalize human oversight procedures, establish monitoring and performance metrics, and develop incident response procedures.

On the governance side, organizations should designate AI governance roles (analogous to a Chief AI Officer function), establish a cross-functional AI review committee, create approval workflows for high-risk AI deployments, and implement change management processes for AI systems. These structures should be designed to scale, because the regulatory requirements will only grow more detailed over time.

Phase 3: Monitoring (Ongoing)

Compliance is not a one-time exercise. Continuous monitoring requires tracking AI system performance metrics, watching for bias and discriminatory patterns, logging human review and override decisions, and tracking complaints and appeals. Equally important is active engagement with the regulatory environment: monitoring agency guidance as it is released, participating in public comment periods, engaging with industry associations, and building relationships with relevant agency personnel.
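Watching for discriminatory patterns in production reduces to comparing a live metric against an alert threshold on a recurring cadence. A minimal sketch, assuming weekly approval-rate snapshots; the threshold, field names, and figures are illustrative:

```python
# Ongoing-monitoring sketch: alert when a protected group's weekly approval
# rate falls below 80% of a reference group's rate. All values illustrative.

def disparity_alert(rate_group: float, rate_reference: float, floor: float = 0.8) -> bool:
    """True when the group-to-reference rate ratio drops below the floor."""
    return (rate_group / rate_reference) < floor

weekly = [
    {"week": "2024-W10", "group": 0.41, "reference": 0.44},
    {"week": "2024-W11", "group": 0.30, "reference": 0.45},
]

alerts = [w["week"] for w in weekly if disparity_alert(w["group"], w["reference"])]
print(alerts)  # ['2024-W11']
```

In practice the alert would feed the human-review and incident-response procedures documented in Phase 2, and the logged weeks become evidence of active monitoring in an examination.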

Phase 4: Adaptation (2025 and Beyond)

The regulatory landscape will continue to evolve. Organizations should track new agency rules and guidance, update practices based on enforcement actions and settlements, monitor litigation trends, and adapt to evolving frameworks such as the NIST AI Risk Management Framework. The organizations that treat compliance as an ongoing capability rather than a project will be best positioned as requirements mature.

Enforcement and Penalties

How Enforcement Works

The Executive Order itself creates no penalties, but it activates enforcement through the existing statutory authority of federal agencies. The penalty exposure is significant. Under Title VII, the EEOC can pursue employment discrimination claims carrying substantial compensatory and punitive damages. Under the Fair Housing Act, HUD and the DOJ can seek civil penalties per violation. Under the Equal Credit Opportunity Act, the CFPB can impose civil penalties including per-day fines for credit discrimination.

On the consumer protection front, Section 5 of the FTC Act empowers the FTC to seek civil penalties for deceptive or unfair AI practices, with state consumer protection laws creating additional liability depending on jurisdiction. In healthcare, HIPAA violations can result in penalties reaching tens of thousands of dollars per violation with annual caps, and the FDA retains enforcement authority over AI-enabled medical devices. Federal contractors face contract termination, suspension and debarment, and False Claims Act liability with treble damages.

Several enforcement areas are already active. The FTC has pursued actions on algorithmic discrimination and deceptive AI marketing claims. The EEOC is investigating hiring algorithms. The CFPB is examining AI-driven credit decisioning. HUD is scrutinizing tenant screening algorithms. The triggers for enforcement action include consumer or employee complaints, disparate impact identified in agency audits, data breaches exposing AI vulnerabilities, media coverage or whistleblower reports, and findings from routine examinations. The five highest-priority enforcement areas are employment decisions (EEOC), credit and lending (CFPB), housing (HUD), deceptive AI marketing claims (FTC), and healthcare algorithms (HHS and FDA).

Relationship to State and International AI Laws

Federal vs. State Authority

A critical point for compliance planning: the Executive Order does not preempt state laws. States can and are passing their own AI regulations, and organizations must comply with both federal and state requirements simultaneously. Where federal and state requirements conflict, the more protective standard typically applies in practice. The result is a layered compliance landscape.

Key state laws already in effect include California's CPRA (which provides automated decision-making opt-outs), Colorado's CPA (which requires algorithm impact assessments), New York City's Local Law 144 (which mandates employment bias audits), the Illinois BIPA and AI Video Interview Act, and Virginia's VCDPA (which addresses automated decision profiling). Each adds obligations beyond what the federal order requires.

International Comparison

The contrast with the EU AI Act is instructive. The EU has adopted a comprehensive horizontal regulation with specific prohibited uses, a detailed risk-based classification system, and significant penalties calculated as a percentage of global revenue. It is far more prescriptive than the US approach. The US Executive Order, by contrast, relies on existing agency authority rather than new legislation, takes a sector-specific approach through multiple agencies, and blends voluntary guidelines with enforcement through existing laws. The US approach offers more flexibility but less predictability.

For multinational companies, the strategic calculus is clear. The EU AI Act will likely function as a de facto global standard through the "Brussels Effect." US requirements, while potentially less stringent on paper, are enforced through a fragmented multi-agency system that creates its own complexity. Harmonizing compliance to the highest common denominator across jurisdictions is typically the most efficient long-term strategy.

Preparing for Future Federal AI Legislation

The Executive Order should be understood as a precursor to comprehensive federal AI legislation, not a destination. Multiple AI bills have been introduced in the 118th Congress, and bipartisan AI working groups are active in both chambers. The likely focus areas for legislation include foundation model safety, algorithmic discrimination, transparency, and accountability.

Companies should expect the codification of EO requirements into binding law, a shift from voluntary to mandatory compliance, specific statutory penalties for non-compliance, the possible creation of a federal AI regulator or expanded agency authority, and potential (though still uncertain) preemption of some state laws.

The strategic implications are clear. Organizations should treat the Executive Order's requirements as a floor rather than a ceiling. Building compliance infrastructure that can scale with new rules is more cost-effective than retrofitting systems after legislation passes. Engaging in policy discussions and public comment periods provides visibility into regulatory direction. And adopting voluntary frameworks like the NIST AI Risk Management Framework creates practical safe harbors while formal requirements continue to evolve. The organizations that move early will find themselves with a structural advantage as the regulatory environment matures.

Key Takeaways

Executive Order 14110 is not legislation, but it directs federal agencies to regulate AI using existing authority across civil rights, consumer protection, healthcare, financial services, employment, and housing. Foundation model developers face mandatory reporting under the Defense Production Act for models exceeding the 10^26 FLOPs threshold (or 10^23 FLOPs for biological models), covering safety testing, cybersecurity, and misuse prevention.

The sector-specific guidance flowing from eight federal agencies creates tailored compliance requirements for healthcare, finance, employment, housing, transportation, energy, and critical infrastructure. Federal contractors face a particularly concrete set of obligations under OMB M-24-10, including AI governance structures, impact assessments, continuous monitoring, human review, and opt-out mechanisms, with the December 2024 deadline for rights-impacting AI already passed.

Enforcement operates through existing statutes. Agencies like the FTC, EEOC, CFPB, HUD, and HHS wield Title VII, the Fair Housing Act, ECOA, FCRA, the FTC Act, and HIPAA, with penalties ranging from contract termination to substantial civil fines. The order does not preempt state laws, meaning organizations must navigate both federal requirements and a growing body of state AI regulations simultaneously.

This is the beginning of federal AI regulation, not the end. Comprehensive federal legislation will likely codify and expand these requirements. Organizations that invest in compliance infrastructure now will hold a meaningful advantage over those that wait.

Citations

  1. Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023) – https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  2. OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024) – https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Memorandum-on-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
  3. NIST AI Risk Management Framework (January 2023) – https://www.nist.gov/itl/ai-risk-management-framework
  4. Federal Trade Commission – AI and Algorithmic Tools (Enforcement and Guidance) – https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check
  5. Department of Commerce Bureau of Industry and Security – AI Diffusion Reporting Rule (April 2024) – https://www.bis.doc.gov/index.php/documents/about-bis/3435-ai-diffusion-reporting-requirements/file

Common Questions

Does the Executive Order create new criminal penalties?

No. The Executive Order does not create new criminal offenses. It directs agencies to use existing regulatory authority, while existing federal criminal laws (such as computer fraud, wire fraud, and certain civil rights statutes) can still apply to AI misuse.

Do all organizations have to report their AI models to the government?

Only foundation model developers that exceed the compute thresholds (>10^26 FLOPs, or >10^23 FLOPs for certain biological models) must report to the Department of Commerce. Other organizations generally do not have to report models centrally but may need AI inventories and documentation for sector regulators or under state laws.

What if we use third-party AI vendors rather than building our own models?

You remain responsible for compliance even when using third-party AI. You should perform due diligence on vendors, require documentation and testing, include compliance warranties and audit rights in contracts, and implement your own monitoring and human oversight controls.

Does the order apply to AI systems deployed before it was signed?

Yes. The EO and related guidance apply to existing systems. You should retrospectively assess legacy AI for risk, bias, and security, and bring them into alignment with agency guidance and OMB M-24-10 requirements, including monitoring, human review, and impact assessments where applicable.

Is following a framework like the NIST AI RMF enough for compliance?

Frameworks like the NIST AI RMF are strong foundations and may function as practical safe harbors, but they do not automatically satisfy specific statutory or regulatory obligations. You should use them as a baseline and then map controls to sector-specific rules and enforcement expectations.

How does the Executive Order relate to the EU AI Act?

They are separate regimes. The EU AI Act is more prescriptive and risk-tiered, while the US EO relies on existing sector laws and agency guidance. Multinationals typically design controls to meet the stricter EU requirements and then adjust for US sector-specific expectations.

Where should an organization start?

Begin with an AI inventory and risk classification, map applicable regulators and laws, and implement OMB M-24-10 minimum practices: governance roles, impact assessments for rights-impacting AI, continuous monitoring, and human review and opt-out mechanisms.

Key Policy Framework

Executive Order 14110 takes a risk-based approach focusing on:

  • Foundation model safety reporting (models trained on >10^26 FLOPs)
  • Sector-specific guidance from federal agencies
  • Civil rights protections against algorithmic discrimination
  • Critical infrastructure security standards
  • Federal government AI procurement and deployment rules

Enforcement Priority Areas

Federal agencies are prioritizing enforcement in:

  1. Employment decisions (EEOC focus)
  2. Credit and lending (CFPB focus)
  3. Housing (HUD focus)
  4. Deceptive AI marketing claims (FTC focus)
  5. Healthcare algorithms (HHS/FDA focus)

"If you sell to federal agencies, you'll need to demonstrate compliance with agency-specific AI requirements, including impact assessments, continuous monitoring, human review, and bias testing, by the December 2024 deadline for rights-impacting AI."

Analysis of OMB Memorandum M-24-10

References

  1. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023).
  2. Cybersecurity Framework (CSF) 2.0. National Institute of Standards and Technology (NIST) (2024).
  3. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).
  4. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
  5. OECD Principles on Artificial Intelligence. OECD (2019).
  6. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
  7. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Delivered Training for Big Four, MBB, and Fortune 500 Clients · 100+ Angel Investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs


Talk to Us About AI Governance & Risk Management

We work with organizations across Southeast Asia on AI governance & risk management programs. Let us know what you are working on.