AI Governance & Risk Management Guide

AI Liability: Who's Responsible When AI Fails?

October 11, 2025 · 14 min read · Pertama Partners
For: Legal/Compliance, Consultants, CTO/CIO, IT Managers, CISO, Board Members

Navigate the complex legal landscape of AI liability. Understand product liability, professional negligence, algorithmic accountability, and emerging AI-specific liability frameworks across jurisdictions.


Key Takeaways

  1. AI liability is distributed across developers, deployers, data providers, and human decision-makers; responsibility turns on role, contracts, and applicable law.
  2. Product liability regimes are most likely to apply when AI is embedded in physical products, while AI-as-a-service is primarily governed by contract and negligence law.
  3. Professionals and fiduciaries remain fully responsible for AI-assisted decisions and must understand, supervise, and, where appropriate, override AI outputs.
  4. Algorithmic discrimination can trigger civil rights liability under Title VII, the Fair Housing Act, ECOA, and analogous laws, even when biased outcomes stem from third-party tools.
  5. The proposed EU AI Liability Directive and accompanying strict liability regime would materially increase exposure for high-risk AI deployers compared with the more fragmented, fault-based U.S. approach.
  6. Liability caps and disclaimers have hard limits, particularly for personal injury and consumer harms, making robust testing, documentation, and oversight essential.
  7. Dedicated insurance—spanning product liability, E&O, cyber, EPLI, and D&O—is becoming a critical component of AI risk transfer strategies.

Executive Summary: AI failures—from autonomous vehicle crashes to medical misdiagnoses to discriminatory hiring decisions—raise urgent questions about legal accountability. Traditional liability frameworks struggle with AI's complexity, opacity, and autonomy. Product liability law holds manufacturers liable for defective products but may not cover AI services or open-source models. Professional negligence standards apply to human experts but aren't designed for algorithmic decision-makers. The proposed EU AI Liability Directive, together with a proposed strict liability regime for high-risk systems, would ease plaintiffs' burden of proof for AI harms. The U.S. lacks comprehensive federal AI liability law, leaving gaps filled by state tort law, sector-specific regulations, and contract terms. This guide maps the liability landscape for AI developers, deployers, and users, providing practical risk management strategies and insurance considerations for the age of algorithmic accountability.

The AI Liability Gap

Why Traditional Liability Frameworks Fall Short

Causation Challenges:

  • Complexity: AI systems with billions of parameters make it difficult to trace outcomes to specific causes
  • Opacity: Black-box models obscure decision-making logic, preventing plaintiffs from proving defect or negligence
  • Distributed responsibility: AI involves multiple actors (data providers, model developers, deployers, users)—who bears liability?
  • Autonomy: Self-learning AI may behave in ways not anticipated by developers, breaking traditional causation chains

Existing Framework Limitations:

  • Product liability: Designed for physical products, may not cover software-as-a-service or algorithmic outputs
  • Professional negligence: Requires human expert standard of care, unclear how to apply to AI "professionals"
  • Contract law: Terms of service may disclaim liability, but enforceability varies by jurisdiction and harm type
  • Criminal law: Requires mens rea (criminal intent), difficult to attribute to AI or establish for corporate actors

High-Profile AI Liability Cases

Uber Autonomous Vehicle Fatality (2018, Arizona):

  • Pedestrian killed by self-driving Uber vehicle
  • Backup safety driver criminally charged with negligent homicide; pleaded guilty to an endangerment charge in 2023
  • Uber settled civil claims; no liability determination for AI system itself
  • Question: Should Uber (deployer), Volvo (vehicle manufacturer), or third-party sensors/software bear liability?

Tesla Autopilot Crashes (Multiple incidents, 2016-2024):

  • NHTSA investigating 35+ crashes involving Tesla Autopilot/Full Self-Driving
  • Civil suits allege deceptive marketing ("Autopilot" implies full automation when it's Level 2 driver assistance)
  • Product liability claims: Defective design (over-reliance on cameras, inadequate driver monitoring)
  • No definitive liability rulings yet; most cases settled

Amazon's Discriminatory Hiring Algorithm (2018 disclosure):

  • AI recruiting tool penalized résumés mentioning "women" (women's chess club, women's college)
  • No lawsuit filed (Amazon discontinued tool before deployment)
  • Hypothetical liability: Title VII disparate impact, state employment discrimination laws

Apple Card Gender Discrimination (2019-2020):

  • Goldman Sachs credit algorithm allegedly offered women lower limits than men with identical finances
  • New York DFS investigation (2021 report) found no unlawful discrimination under fair lending law
  • DFS nonetheless recommended stronger fairness testing and transparency for credit algorithms
  • Hypothetical liability: Equal Credit Opportunity Act (ECOA), state consumer protection laws

Liability vs. Accountability

Liability is legal responsibility (enforceable through lawsuits, fines). Accountability is broader—ethical, social, and organizational responsibility that may not rise to legal liability but still matters for reputation and trust.

Product Liability Framework

When AI Qualifies as a "Product"

Strict Liability (Restatement Third, Torts: Products Liability): Manufacturer liable for defective product that causes harm, regardless of fault.

Three Types of Defects:

  1. Manufacturing Defect: Product deviates from intended design (rare for software—more common: data corruption, incorrect model weights)
  2. Design Defect: Inherent design makes product unreasonably dangerous (e.g., AI trained on biased data, lacks safety constraints)
  3. Warning Defect: Inadequate instructions or warnings about risks (failure to disclose AI limitations, failure modes)

AI as Product:

  • Clearly products: AI embedded in physical goods (autonomous vehicles, medical devices, industrial robots)
  • Arguably products: AI software sold as packaged product (boxed software, one-time license)
  • Not products: AI delivered as service (cloud APIs, SaaS)—governed by contract law, not product liability

Design Defect Claims Against AI

Risk-Utility Test: Product is defectively designed if risks outweigh utility, considering:

  • Magnitude of foreseeable harms
  • Likelihood of harm occurring
  • Availability of safer alternative designs
  • Cost of safer alternative
  • User's ability to avoid harm

AI-Specific Defects:

  • Inadequate training data: Model trained on unrepresentative or biased dataset
  • Insufficient testing: Deployed without adequate validation on edge cases
  • Missing safety constraints: No guardrails preventing dangerous outputs
  • Lack of human oversight: Fully autonomous decision-making in high-stakes contexts
  • Poor failure handling: System doesn't fail safely (e.g., autonomous vehicle in unrecognized scenario)

Reasonable Alternative Design (RAD):

  • Plaintiff must show safer design was feasible and would have prevented harm
  • Example: AI hiring tool with disparate impact—RAD might be fairness-constrained model with equal selection rates
  • Challenge: Technical complexity makes it hard for plaintiffs/juries to evaluate alternative designs

Warning Defect Claims

Duty to Warn:

  • Manufacturers must warn about risks that aren't obvious to ordinary users
  • Warnings must be clear, conspicuous, and comprehensible

AI Warning Obligations:

  • Limitations: Disclose what AI can and cannot do (e.g., "This AI is not a substitute for professional medical advice")
  • Failure modes: Describe known scenarios where AI performs poorly (low lighting, unusual accents, rare diseases)
  • Bias and fairness: Warn if AI shows disparate performance across demographic groups
  • Uncertainty: Communicate confidence intervals, not just point predictions
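
The last point—surfacing uncertainty rather than bare point predictions—can be as simple as attaching an interval to every output and refusing to auto-decide when the interval straddles the decision threshold. A minimal sketch, not any vendor's actual API; the 90% interval values and threshold below are hypothetical:

```python
# Sketch: report a prediction with an uncertainty interval and an
# explicit abstention path, instead of a bare point estimate.
# All numbers below are illustrative, not from a real model.

from dataclasses import dataclass

@dataclass
class Prediction:
    point: float  # model's point estimate (e.g., a risk score)
    low: float    # lower bound of the 90% interval
    high: float   # upper bound of the 90% interval

def present(pred: Prediction, decision_threshold: float) -> str:
    """Refuse to auto-decide when the interval straddles the threshold."""
    if pred.low <= decision_threshold <= pred.high:
        return (f"Estimate {pred.point:.2f} (90% interval "
                f"{pred.low:.2f} to {pred.high:.2f}): too uncertain, "
                "route to human review")
    side = "above" if pred.low > decision_threshold else "below"
    return (f"Estimate {pred.point:.2f} (90% interval "
            f"{pred.low:.2f} to {pred.high:.2f}): confidently {side} threshold")

print(present(Prediction(0.62, 0.45, 0.78), decision_threshold=0.5))
print(present(Prediction(0.62, 0.55, 0.70), decision_threshold=0.5))
```

Communicating the interval, and escalating when it crosses the threshold, gives the warning-defect analysis concrete evidence that uncertainty was disclosed rather than hidden.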

Learned Intermediary Doctrine (Medical context):

  • For prescription medical devices, duty to warn physician (learned intermediary), not patient directly
  • Physician decides whether to use AI and informs patient of risks
  • May shield AI manufacturer from direct patient liability if warnings to physicians adequate

AI Product Liability Cases

As of 2024, fewer than 50 product liability cases involving AI have reached trial in the U.S. Most settle. Juries have not yet established clear precedents on design defect standards for AI systems.

Professional Negligence and Standard of Care

When AI Acts as "Professional"

Professional Negligence Elements:

  1. Duty of care (professional-client relationship)
  2. Breach of duty (failed to meet standard of care)
  3. Causation (breach caused harm)
  4. Damages (actual injury or loss)

Standard of Care:

  • Professionals must exercise skill and knowledge consistent with reasonably competent practitioners in the field
  • Measured against human expert standard, not perfection

AI as Professional (Emerging concept):

  • Medical diagnostic AI as "physician"
  • Legal research AI as "attorney"
  • Financial advisory AI as "investment adviser"
  • Accounting AI as "CPA"

Challenge: How to define "standard of care" for AI?

  • Human-equivalent standard: AI must perform as well as competent human professional
  • AI-specific standard: AI judged against other AI systems (but what if all AI has same flaws?)
  • Hybrid standard: AI + human team judged together

Malpractice Liability for AI-Assisted Professionals

Physician Uses AI Diagnostic Tool:

Scenario 1: Physician Follows AI, AI Wrong:

  • Physician argues reliance on FDA-cleared AI was reasonable
  • Plaintiff argues physician should have overridden obviously wrong AI recommendation
  • Likely outcome: Depends on how obvious the error was and whether physician exercised independent judgment

Scenario 2: Physician Overrides AI, AI Was Right:

  • Plaintiff argues physician negligent for ignoring AI's correct diagnosis
  • Physician argues AI is advisory tool, not binding
  • Likely outcome: Physician not negligent for exercising independent judgment, unless override was clearly unreasonable

Automation Bias:

  • Over-reliance on AI recommendations without critical thinking
  • Courts may find professionals negligent for blindly trusting AI
  • Standard of care: Professionals must understand AI limitations and verify recommendations

Fiduciary Duty and AI

Fiduciary Relationships:

  • Attorneys, investment advisers, trustees, corporate directors owe fiduciary duty to clients/beneficiaries
  • Highest standard: Duty of loyalty (act in client's best interest) and duty of care (reasonable diligence)

AI in Fiduciary Context:

  • Robo-advisors: Investment advisers using AI remain fully liable for advice; fiduciary duty doesn't transfer to algorithm
  • Legal AI: Attorneys using AI research tools responsible for errors (duty of competence)
  • Corporate boards: Directors using AI for strategic decisions must exercise reasonable oversight

Key Principle: Fiduciary duty is non-delegable. Using AI doesn't absolve fiduciary of responsibility.

The "Reasonable Professional" Standard

Courts are likely to hold that reasonable professionals must (1) understand AI capabilities and limitations, (2) independently verify AI outputs in high-stakes decisions, and (3) maintain competence to detect AI errors.

Algorithmic Discrimination and Civil Rights Liability

Disparate Impact Framework

Civil Rights Statutes:

  • Title VII (employment): Race, color, religion, sex, national origin
  • Fair Housing Act (housing, credit): Race, color, religion, sex, familial status, national origin, disability
  • ECOA (credit): Race, color, religion, sex, marital status, age, national origin, public assistance receipt
  • ADA (disability): Employment, public accommodations, services

Disparate Impact Theory:

  • Facially neutral practice that disproportionately harms protected group
  • Plaintiff shows statistical disparity
  • Defendant must prove practice serves legitimate business interest and is necessary
  • Plaintiff can show less discriminatory alternative exists

AI Algorithmic Discrimination:

  • Training on historical data perpetuates past discrimination
  • Proxy variables (zip code, education, names) correlate with protected characteristics
  • Optimization for accuracy alone ignores fairness

Employer Liability (for AI hiring tools):

  • Employers liable even if purchased from third-party vendor
  • "The algorithm did it" not a defense
  • Must conduct adverse impact analysis, validate job-relatedness
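
The adverse impact analysis referenced above is commonly operationalized with the EEOC's four-fifths (80%) rule: a protected group's selection rate should be at least 80% of the highest group's rate. A minimal sketch of that check; the group labels and applicant counts are hypothetical:

```python
# Minimal four-fifths (80%) rule check for adverse impact, following
# the EEOC Uniform Guidelines. Group labels and counts are hypothetical.

def selection_rates(applicants, selected):
    """Selection rate per group: selected / applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def adverse_impact_ratios(applicants, selected):
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(applicants, selected)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected   = {"group_a": 60,  "group_b": 27}   # rates: 0.30 vs 0.18

ratios = adverse_impact_ratios(applicants, selected)
for group, ratio in ratios.items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 is not itself proof of discrimination, but it is the kind of statistical disparity that shifts the burden to the employer to show job-relatedness, so running and documenting this check regularly is a core defensive step.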

Recent Enforcement Actions

HUD v. Facebook (2019, settled 2022):

  • Facebook ad platform allowed advertisers to exclude users by race, religion, national origin in housing ads
  • Violated Fair Housing Act
  • Settlement with DOJ (2022): $115,054 civil penalty (the statutory maximum), algorithm changes, independent third-party review

EEOC AI Guidance (2023):

  • Clarified employers liable for discriminatory AI hiring tools
  • Must validate tools predict job performance, not just correlate with race/sex
  • Regular adverse impact testing required

New York DFS Guidance (2024):

  • Insurance Circular Letter No. 7 (2024) addresses insurers' use of AI systems and external consumer data in underwriting and pricing
  • Insurers must demonstrate that AI-driven underwriting does not result in unfair or unlawful discrimination, with governance, testing, and vendor oversight expectations

EU AI Liability Directive (Proposed 2022)

Fault-Based Liability Regime (Chapter 2)

Presumption of Causality: If the plaintiff shows:

  1. The defendant was at fault (e.g., failed to comply with AI Act obligations)
  2. It is reasonably likely that this fault influenced the AI system's output (or its failure to produce one)
  3. That output (or failure) caused the harm

Then: A causal link between the defendant's fault and the AI output is presumed; the defendant may rebut the presumption

Disclosure Obligations:

  • Defendant must disclose relevant evidence about AI system
  • Court can order disclosure if plaintiff shows plausible claim
  • Failure to disclose can result in presumption of non-compliance

Reduced Burden of Proof:

  • Plaintiff need not prove exact technical defect in AI
  • Showing AI-caused harm + regulatory non-compliance shifts burden to defendant

Strict Liability for High-Risk AI (Proposed Directive)

Scope: AI systems classified as high-risk under AI Act (employment, credit, law enforcement, critical infrastructure)

Strict Liability Elements:

  1. High-risk AI system
  2. System caused harm (property damage, personal injury, economic loss)
  3. Causal link between AI and harm

No Fault Required: Victim need not prove defect or negligence; AI deployer liable regardless

Cap on Damages:

  • Personal injury/death: Unlimited
  • Property damage: €2 million cap
  • Economic loss: €1 million cap (if not consequential to physical harm)

Defenses:

  • Force majeure (unforeseeable, unavoidable external event)
  • Victim's intentional misconduct or gross negligence

Allocation of Liability

Primary Liability: Deployer (entity using AI in commercial activity)

Supplier Liability: AI developer/manufacturer liable if:

  • Deployer insolvent or cannot be identified
  • Failure due to supplier's lack of compliance with AI Act obligations

Joint and Several Liability: Multiple parties can be held jointly liable; plaintiff can recover full amount from any liable party

EU Strict Liability Paradigm Shift

The proposed EU strict liability regime represents a major departure from U.S. law. Deployers of high-risk AI face liability without need to prove defect or negligence—simply causing harm is sufficient.

U.S. Liability Landscape

Federal Approach (Sector-Specific)

No Comprehensive Federal AI Liability Law:

  • Unlike EU, U.S. has not enacted general AI liability statute
  • Relies on existing laws applied to AI contexts

Sector-Specific Regulation:

  • Autonomous vehicles: NHTSA regulations, state motor vehicle codes
  • Medical devices: FDA regulations, medical malpractice law
  • Financial services: SEC, FINRA, OCC regulations, securities fraud, breach of fiduciary duty
  • Employment: EEOC guidance, Title VII, ADA, state employment discrimination laws
  • Consumer protection: FTC Act Section 5 (unfair/deceptive practices)

State Tort Law

Negligence:

  • General duty to exercise reasonable care
  • AI developers/deployers owe duty to foreseeable plaintiffs
  • Breach: Failing to test adequately, deploying known-defective AI, inadequate warnings

Product Liability (Restatement Third):

  • Strict liability for defective products
  • States vary on whether software/AI qualifies as "product"
  • Majority trend: Software can be product if sold as packaged good; services exempted

Misrepresentation:

  • Fraudulent misrepresentation: Intentional false statement about AI capabilities
  • Negligent misrepresentation: Careless false statement to person relying on expertise
  • Example: AI vendor claiming "99% accuracy" without adequate testing

State AI-Specific Laws (Emerging):

  • California: Proposed AI liability law (not enacted) would create strict liability for high-risk AI
  • Texas: Considering AI liability framework for autonomous vehicles
  • New York: NYC Local Law 144 requires annual bias audits of automated employment decision tools, enforced through civil penalties rather than a private right of action

Contract Law and Liability Disclaimers

Limitation of Liability Clauses:

  • AI vendor contracts often limit liability to fees paid or disclaim consequential damages
  • Enforceability varies:
    • Consumer contracts: Often unenforceable under unconscionability doctrine or consumer protection laws
    • B2B contracts: Generally enforceable if negotiated, not against public policy
    • Personal injury: Cannot disclaim liability for physical harm in most jurisdictions

Indemnification Clauses:

  • AI vendor may agree to indemnify customer for third-party claims arising from AI defects
  • Customer may be required to indemnify vendor for customer's misuse of AI
  • Key question: Who bears risk of AI-caused harms to third parties?

Terms of Service:

  • Click-wrap and browse-wrap agreements common for consumer AI
  • Often disclaim all warranties, limit liability
  • Courts scrutinize: Was agreement conspicuous? Did user have meaningful choice?

Practical Risk Management Strategies

For AI Developers/Vendors

1. Rigorous Testing and Validation

  • Validate on diverse, representative datasets (demographic subgroups, edge cases)
  • Adversarial testing (red teaming, penetration testing)
  • Safety testing (failure mode analysis, worst-case scenarios)
  • Document all testing: Creates evidence of reasonable care (negligence defense)

2. Transparency and Warnings

  • Clear documentation of AI capabilities and limitations
  • Warnings about known failure modes, demographic performance disparities
  • Model cards (standardized documentation of training data, performance metrics)
  • User training (how to use AI safely, when to override recommendations)
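
A model card need not be elaborate; it can start as a structured record shipped alongside the model and rendered into deployment documentation. A lightweight sketch in the spirit of the model-card idea; every field value below is hypothetical:

```python
# Minimal model card as a structured record. All values are
# hypothetical and for illustration only.

model_card = {
    "model": "resume-screener-v2",
    "intended_use": "Rank candidates for recruiter review; "
                    "not for automated rejection",
    "training_data": "2019-2023 applications, US only",
    "metrics": {"auc": 0.81, "accuracy": 0.74},
    "subgroup_performance": {          # disclose known disparities
        "female": {"true_positive_rate": 0.70},
        "male": {"true_positive_rate": 0.76},
    },
    "known_limitations": [
        "Untested on non-US resume formats",
        "Lower recall for resumes with career gaps",
    ],
}

def limitations_summary(card: dict) -> str:
    """Human-readable warning block for deployment documentation."""
    lines = [f"Model: {card['model']}", "Known limitations:"]
    lines += [f"  - {item}" for item in card["known_limitations"]]
    return "\n".join(lines)

print(limitations_summary(model_card))
```

Because warning-defect claims turn on whether limitations were disclosed, generating user-facing warnings directly from the model card keeps the documentation and the disclosure in sync.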

3. Post-Market Monitoring

  • Real-world performance tracking
  • Incident reporting system (collect and analyze failure cases)
  • Rapid response: Patch critical vulnerabilities, issue safety notices
  • Demonstrates ongoing diligence (negligence defense)

4. Contract Risk Allocation

  • Limitation of liability clauses (enforceable in B2B context)
  • Indemnification from customers (customer liable for misuse)
  • Insurance requirements (require customers to maintain liability insurance)
  • Dispute resolution (arbitration clauses can limit class action exposure)

5. Insurance

  • General liability: Covers bodily injury, property damage (may exclude software)
  • Professional liability (E&O): Covers negligent advice, services
  • Cyber liability: Covers data breaches, network security failures
  • Product liability: Specific AI product liability policies emerging
  • D&O insurance: Covers directors/officers for fiduciary duty breaches

For AI Deployers/Users

1. Vendor Due Diligence

  • Request validation studies, fairness audits, security assessments
  • Review vendor's testing methodology, known limitations
  • Check for regulatory compliance (FDA clearance, CE marking, bias audits)
  • Negotiate favorable contract terms (indemnification, liability caps)

2. Human-in-the-Loop

  • Use AI to assist, not replace, human decision-makers
  • Train personnel on AI limitations, when to override
  • Establish clear escalation procedures (when to seek human review)
  • Document human review: Shows reasonable care (negligence defense)

3. Ongoing Oversight

  • Monitor AI performance in deployment (accuracy, bias, failures)
  • Periodic revalidation (ensure performance hasn't degraded)
  • Incident investigation (root cause analysis when AI fails)
  • Corrective action (update models, retrain personnel, improve processes)
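
The monitoring and revalidation steps above can be sketched as a rolling comparison of live accuracy against the validation baseline, alerting when performance degrades beyond a tolerance. The baseline, tolerance, and window values here are hypothetical:

```python
# Sketch: flag model performance degradation against a validation
# baseline. Baseline accuracy, tolerance, and window are hypothetical.

from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float,
                 window: int = 100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling correctness record

    def record(self, prediction, actual) -> None:
        self.outcomes.append(prediction == actual)

    def degraded(self) -> bool:
        """True when rolling accuracy falls below baseline - tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough live data yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.92, tolerance=0.05,
                             window=10)
# Simulated live outcomes: 6 correct, 4 wrong -> 60% rolling accuracy
for correct in [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]:
    monitor.record(prediction=correct, actual=1)
print("degraded:", monitor.degraded())
```

Logging each alert and the corrective action taken builds exactly the diligence record that a negligence defense relies on.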

4. Contractual Protections

  • Indemnification from vendors (vendor liable for AI defects)
  • Insurance requirements (vendor must maintain product liability insurance)
  • Right to audit (verify vendor's testing, compliance claims)
  • Termination rights (discontinue use if AI fails to perform)

5. Insurance

  • General liability and professional liability (as above)
  • Employment practices liability: Covers discrimination claims (relevant for AI hiring)
  • Directors and officers: Covers board decisions to deploy AI
  • Cyber: Covers AI-related data breaches

For Individuals Harmed by AI

1. Document the Harm

  • Medical records (for injury), employment records (for discrimination), financial records (for economic loss)
  • AI system outputs (predictions, recommendations, decisions)
  • Communications with AI deployer (complaints, responses)

2. Identify Responsible Parties

  • AI developer/manufacturer
  • AI deployer (company using AI)
  • Data providers (if training data defective)
  • Human decision-makers (if negligent oversight)

3. Establish Causation

  • Show AI system caused harm (not other factors)
  • Technical expert testimony often required
  • Discovery: Request AI system documentation, training data, validation studies

4. Legal Theories

  • Product liability: If AI sold as product (strict liability for defects)
  • Negligence: If AI developer/deployer failed to exercise reasonable care
  • Discrimination: If AI produced disparate impact (Title VII, Fair Housing, ECOA)
  • Breach of contract: If AI failed to perform as warranted
  • Misrepresentation: If AI vendor made false claims about capabilities

5. Remedies

  • Compensatory damages: Medical expenses, lost wages, pain and suffering
  • Punitive damages: If defendant's conduct reckless or malicious (rare)
  • Injunctive relief: Court order requiring AI changes or discontinuation
  • Attorney's fees: Available in discrimination cases (Title VII, Fair Housing, ECOA)

Emerging Issues

Generative AI and Copyright

Issue: AI trained on copyrighted works without permission; generated outputs may infringe

Lawsuits Filed (2023-2024):

  • Authors Guild v. OpenAI (copyright infringement for training GPT on books without license)
  • Getty Images v. Stability AI (training Stable Diffusion on copyrighted images)
  • NYT v. OpenAI & Microsoft (training on news articles)

Legal Theories:

  • Direct infringement: AI reproduced copyrighted works without authorization
  • Contributory infringement: AI enables users to generate infringing content
  • Vicarious infringement: AI provider profits from users' infringement

Defenses:

  • Fair use: Training on copyrighted works is transformative use (case law developing)
  • No substantial similarity: AI outputs don't copy protectable expression from training data
  • DMCA safe harbor: AI provider merely hosts user-generated content (weak argument)

Open-Source AI Liability

Issue: Who's liable when open-source AI causes harm?

Challenges:

  • No centralized vendor to sue
  • Developers often anonymous or judgment-proof
  • Permissive licenses (MIT, Apache) disclaim all warranties and liability

Potential Liability Theories:

  • Negligence: Developer breached duty of care (high bar—courts hesitant to impose duty on open-source contributors)
  • Product liability: Unlikely—open-source typically free and not "sold"
  • Deployer liability: Entity using open-source AI in commercial context more likely liable than original developers

Autonomous Weapons and Military AI

Issue: Who's liable when military AI causes unlawful harm?

International Humanitarian Law (IHL):

  • Distinction: Must distinguish combatants from civilians
  • Proportionality: Attack must not cause excessive civilian harm relative to military advantage
  • Precautions: Must take feasible precautions to minimize civilian harm

Autonomous Weapons Debate:

  • Proponents: AI can be more precise than humans, reducing civilian casualties
  • Critics: AI lacks human judgment for complex ethical decisions; accountability gap

Liability:

  • Command responsibility: Military commanders liable for war crimes committed by subordinates (including AI?)
  • State responsibility: States liable for IHL violations under international law
  • Individual criminal liability: Designers, operators may face ICC prosecution for AI war crimes

Key Takeaways

  1. No One Entity Bears All Liability: AI liability is distributed among developers, deployers, data providers, and human decision-makers. Determining who bears primary liability depends on the harm, contractual allocation, and applicable law.
  2. Product Liability Applies to Physical AI Products: Autonomous vehicles, medical devices, and robots are subject to strict liability for design defects, manufacturing defects, and warning defects. AI-as-a-service generally escapes product liability.
  3. Professionals Can't Delegate Liability to AI: Physicians, attorneys, investment advisers, and other fiduciaries remain fully liable for AI-assisted decisions. Fiduciary duty is non-delegable.
  4. Algorithmic Discrimination Creates Civil Rights Liability: AI producing disparate impact violates Title VII, Fair Housing Act, ECOA. Employers/deployers liable even if AI purchased from third-party vendor.
  5. Proposed EU Regime Would Establish Strict Liability: Under the proposed EU AI Liability Directive and accompanying strict liability proposal, high-risk AI deployers would face liability without any showing of fault. This represents a paradigm shift from the U.S. fault-based approach.
  6. Contract Disclaimers Have Limits: Cannot disclaim liability for personal injury. Consumer disclaimers often unenforceable. B2B limitation clauses generally valid but scrutinized.
  7. Insurance Is Critical: Traditional general liability and E&O policies may not cover AI-specific risks. Consider AI-specific product liability, professional liability, and cyber insurance with adequate limits.

Facing AI liability concerns? Our legal team provides risk assessments, contract review, insurance analysis, and litigation support for AI developers, deployers, and organizations navigating the complex liability landscape.

Common Questions

Who is liable when an AI system causes harm?

Liability depends on your role. Developers and vendors may face product liability, negligence, or misrepresentation claims. Deployers can be liable under vicarious liability, negligence, or discrimination laws if they fail to oversee AI or ignore risks. Individual end-users are usually not liable unless they misuse AI or act negligently in a professional capacity. Courts will ask whether you exercised reasonable care given your role and expertise.

Can an AI system itself be held legally liable?

No. Current law only recognizes humans and legal entities like corporations as liable persons. AI systems lack legal personhood and cannot be sued or prosecuted. All civil and criminal liability attaches to the humans and organizations that design, deploy, or control the AI.

Does using open-source AI eliminate my liability?

No. Open-source licenses typically disclaim warranties and liability for the contributors, not for you. If you deploy open-source AI in a product or service, you remain responsible for defects, negligence, and regulatory violations. You must test, validate, and monitor open-source AI just as you would a commercial system.

Is AI treated as a product or a service for liability purposes?

AI embedded in physical products is usually treated as a product and subject to strict product liability for design, manufacturing, and warning defects. AI delivered as a cloud service is generally governed by contract and negligence law, with vendors relying on terms of service and liability caps. As a result, plaintiffs typically have an easier path to recovery against physical AI products than against AI-as-a-service.

Can contracts fully disclaim AI liability?

You cannot disclaim liability for personal injury or for gross negligence and intentional misconduct in most jurisdictions. In B2B settings, you can often limit liability for economic loss if terms are negotiated and not against public policy. In consumer contexts, aggressive disclaimers are frequently struck down as unconscionable or inconsistent with consumer protection and warranty laws.

What insurance coverage should organizations consider for AI risks?

AI developers and vendors should consider product liability, professional liability (E&O), cyber liability, general liability, and D&O coverage. Deployers should add employment practices liability for algorithmic discrimination, professional liability for AI-assisted services, cyber coverage, and D&O. Review policies for AI-specific exclusions and ensure limits are adequate for potential AI-related claims.

How does the EU approach to AI liability differ from the U.S.?

The proposed EU AI Liability Directive would introduce a harmonized, AI-specific framework with fault presumptions, evidence disclosure duties, and strict liability for high-risk AI with capped damages for property and pure economic loss. The U.S. relies on existing tort, contract, and sectoral regulations, with significant state-by-state variation and a stronger emphasis on fault-based liability and contractual risk allocation.


<50

Estimated number of U.S. product liability cases involving AI that had reached trial as of 2024, with most settling before verdict

Source: Synthesized from U.S. case tracking through 2024

"Fiduciary duties are non-delegable: boards, advisers, and professionals cannot shift legal responsibility to algorithms, no matter how advanced."

AI governance and fiduciary duty commentary, 2024

